
How to forward real IP from Caddy server?
Hello,
I have hosted AzuraCast on my mini PC and I want to forward the IP of the song requester. Right now it is only taking one IP, the podman container IP, so basically AzuraCast thinks that every request is coming from the IP address 10.89.1.1, which is the IP of the interface created by podman.
```
57: podman3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 0e:fa:6d:33:b9:39 brd ff:ff:ff:ff:ff:ff
    inet 10.89.1.1/24 brd 10.89.1.255 scope global podman3
       valid_lft forever preferred_lft forever
    inet6 fe80::b876:abff:fede:c3ef/64 scope link
       valid_lft forever preferred_lft forever
```
I am also explicitly forwarding the IP using X-Forwarded-Host.
```
reverse_proxy http://localhost:4000/ {
	header_up X-Forwarded-Host {host}
}
```
I don't know how to resolve it, any help would be appreciated :)
Edit: I didn't have to do any of this stuff; what I should have done is just enable the "reverse proxy" option in AzuraCast.
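For anyone landing here later: Caddy's reverse_proxy already adds X-Forwarded-For on its own, so once AzuraCast's "reverse proxy" option is enabled, a minimal Caddyfile along these lines should be enough (the site address is a placeholder; the port is taken from the post):

```
radio.example.com {
	reverse_proxy localhost:4000
}
```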

Nextcloud AIO inside container - domain verification fails
I am setting up nextcloud AIO in a podman container on my VPS. After some struggle, I got to the installation page, but domain checking is simply not working out.
After looking it up, I decided to check the port from the host machine. Strangely, curl localhost:11000 hangs indefinitely.
The nextcloud-aio-domaincheck container is running, and it has mapped the port as 0.0.0.0:11000->11000/tcp. The domaincheck server should be reachable, and I don't think the firewall would be preventing localhost access.
The single line log from domaincheck container is:
```
2025-03-20 13:47:43: (../src/server.c.1939) server started (lighttpd/1.4.76)
```
I am utterly lost here. Does anyone know what would be possible reasons, and how to troubleshoot the issue? Any pointers would be greatly appreciated. Thank you in advance!
EDIT: Just ran sudo podman exec nextcloud-aio-mastercontainer curl nextcloud-aio-domaincheck:11000, and it seems to work on the internal network. At a loss how this does not get exposed to the host.
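In case it helps someone else hitting the same wall, a generic way to narrow down where the mapping breaks (standard podman/host tools, not from the post; <host-ip> is a placeholder):

```bash
# Is the port actually published on the host side?
sudo podman port nextcloud-aio-domaincheck

# Is anything on the host listening on 11000?
sudo ss -tlnp | grep 11000

# Try the host's LAN IP instead of localhost; rootless port forwarding
# can behave differently for 127.0.0.1 vs the host address
curl -v http://<host-ip>:11000
```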

How do I debug network issues, regarding caddy in podman?
Disclaimer: I am running my personal website in the cloud, since it feels iffy to expose my local IP to the internet. Sorry for posting this in selfhosting; I don't know anywhere else to ask.
I am planning to multiplex forgejo, nextcloud and other services on port 80 using caddy.
This is not working, and I am having issues diagnosing which side is preventing access.
One thing I know: it's not DNS, since dig <my domain> works well.
I would like some pointers on what to do in these circumstances. Thanks in advance!
What I have looked into:
- curling localhost from the server works well; caddy returns a simple result.
- curl <my domain> times out; currently trying to inspect packets - it seems like the server receives TCP without HTTP.
- curl <my domain>:3000 displays the forgejo page, as forgejo exposes 3000 in its container, which podman routes to host port 3000.
EDIT: my Caddyfile is as follows.
```
:80 {
	respond "Hello World!"
}

http://<my domain> {
	respond "This should respond"
}

http:/
```
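A generic checklist for a setup like this (a sketch, not a diagnosis): confirm port 80 is actually published from the Caddy container and that the vhost answers when DNS is taken out of the picture. <my domain> and <server-ip> are placeholders.

```bash
# Is the caddy container publishing port 80 to the host?
sudo podman ps --format '{{.Names}}\t{{.Ports}}'

# Is the host listening on 80?
sudo ss -tlnp | grep ':80 '

# Hit the vhost directly on the server, bypassing any DNS/proxy in between
curl -v --resolve '<my domain>:80:<server-ip>' 'http://<my domain>/'
```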

How do I give Jellyfin permanent access to an external drive?
I didn't like Kodi due to the unpleasant controls, especially on Android, so I decided to try out Jellyfin. It was really easy to get working, and I like it a lot more than Kodi, but I started to have problems after the first time restarting my computer.
I store my media on an external LUKS-encrypted hard drive. Because of that, for some reason, Jellyfin's permissions to access the drive go away after a reboot. That means something like chgrp -R jellyfin /media/username does work, but it stops working after I restart my computer and unlock the disk.
I tried modifying the /etc/fstab file without really knowing what I was doing, and almost bricked the system. Thank goodness I'm running an atomic distro (Fedora Silverblue), I was able to recover pretty quickly.
How do I give Jellyfin permanent access to my hard drive?
Solution:
- Install GNOME Disks
- Open GNOME Disks
- On the left, click on the drive storing your media
- Click "Unloc

I've set up docker services behind nginx proxy manager so they're accessible with https, but the http services are still open. How do I close them?
I'm using a docker compose file, and I have everything running just fine, containers talking to each other as needed, NPM reverse proxying everything via a duckdns subdomain... everything's cool.
Problem is, I can still go to, for example, http://192.168.1.30:8080/ and get the services without https.
I've tried commenting out the ports in the compose file, which should make them only available on the internal network, I thought. But when I do that, the containers can no longer connect to each other.
Any advice for me?
Edit:
Thanks for the quick & helpful suggestions!
While investigating bridge networks, I noticed a mention that containers could only find each other on the default container bridge by container name, which I did not know. I had tried 127.0.0.1, localhost, the external IP, hostnames, etc but not container names.
In the end, the solution was just to use container names when telling each container how to find the others. No need for creating bridge ne
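For anyone picturing this, a minimal compose sketch of the idea (image names and the app's internal port are placeholders, not from the post): only the proxy publishes ports on the host; the app keeps no ports: mapping and is reached by its service name on the shared compose network.

```yaml
services:
  npm:
    image: jc21/nginx-proxy-manager:latest
    ports:
      - "80:80"     # http
      - "443:443"   # https
      - "81:81"     # NPM admin UI
  myapp:
    image: nginx:alpine   # stand-in for any internal service
    # no "ports:" section -- not reachable from the LAN directly;
    # NPM forwards to it as http://myapp:80 on the default compose network
```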

Nextcloud can't see config.php in new install directory
Update: Turned out I had like 3 versions of php and 2 versions of postgres all installed in different places and fighting like animals. Cleaned up the mess, did a fresh install of php and postgres, restored the postgres data to the database, and Bob's your uncle. What a mess.
Thanks to everyone who commented. Your input is always so helpful.
Original Post
Hey everyone, it's me again. I'm now on NGINX (surprisingly simple), but I'm not here with a webserver issue today; rather, it's a Nextcloud-specific issue. I removed my last post about migrating from Apache to Caddy after multiple users pointed out security issues with what I was sharing, as well as suggesting Caddy would be unable to meet my complex hosting needs. Thank you, if that was you.
During the NGINX setup, which has gone shockingly smoothly, I moved all of my site root directories from /usr/local/apache2/secure to /var/www/.
Everything so far has moved over nicely... that is until nextcloud.
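For anyone chasing a similar "two of everything" situation, a few generic commands (not from the post) that help spot duplicate PHP/Postgres installs before cleaning up:

```bash
# Which php/psql binaries are on the PATH, and in what order?
which -a php php-fpm psql

# What versions do they report?
php -v
psql --version

# Which php-fpm / postgres services are actually running?
systemctl list-units --type=service | grep -Ei 'php|postgres'
```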

Can't renew cert on a self-hosted lemmy instance D:
EDIT: Thanks everyone for your time and responses. To break as little as possible while attempting to fix this, I've opted to go with ZeroSSL's DNS process to acquire a new cert. I wish I could use this process for all of my certs, as it was very quick and easy. Now I just have to figure out the error message lemmy is throwing about not being able to run scripts.
Thank you all for your time sincerely. I understand a lot more than I did last night.
Original Post
As the title says I'm unable to renew a cert on a self-hosted lemmy instance. A friend of mine just passed away and he had his hands all up in this and had it working like magic. I'm not an idiot and have done a ton of the legwork to get our server running and working - but lemmy specifically required a bit of fadanglin' to get working correctly. Unfortunately he's not here to ask for help, so I'm turning to you guys. I haven't had a problem with any of my other software such as nextcloud or pixelfed but for some re
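For reference, the DNS-based validation mentioned in the edit is tool-agnostic; with certbot (not what the poster used, just a common sketch, domain is a placeholder) it looks roughly like this:

```bash
# Prove ownership via a DNS TXT record instead of an HTTP challenge;
# certbot prints the _acme-challenge value to add at your DNS provider.
sudo certbot certonly --manual --preferred-challenges dns -d lemmy.example.com
```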

Is it possible to mount a ZFS drive in OpenMediaVault?
Original Post:
I recently had a Proxmox node I was using as a NAS fail catastrophically. Not surprising, as it was a repurposed 12-year-old desktop. I was able to salvage my data drive, but the boot drive was toast. Looks like the SATA controller went out and fried the SSD I was using as the boot drive. This system was running TurnKey FileServer as an LXC with the media storage on a subvol on a ZFS storage pool.
My new system is based on OpenMediaVault and I am happy with it, but I'm hitting my head against a brick wall trying to get it to mount the ZFS drive from the old system. I tried installing ZFS using the instructions here, as OMV is based on Debian, but haven't had any luck so far.
Solved:
- Download and install OMV Extras
- In OMV's web admin panel, go to System -> Plugins and install the Kernel Plugin
- Go to System -> Kernel and click the blue icon that says Proxmox (looks like a box with a down arrow as of Jan
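Once ZFS support is in place via the steps above, importing the old pool is typically along these lines (the pool name "tank" is a placeholder):

```bash
# List pools that can be imported from the attached disks
sudo zpool import

# Import the pool; -f may be needed since it was last used on the dead system
sudo zpool import -f tank

# Confirm the datasets are back
sudo zfs list
```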

OPNSense accessible on WAN by default?
Solved: I was still on my local network instead of my LTE network, so I was accessing the global IP through the local network, and thus getting the login page.
Hello,
I am running OPNSense as my router for my ISP and my local network.
When I access my global ip, it lands me on the login page of my OPNSense router. Is that normal?
The only Firewall WAN Rule I added is the rule to enable my Wireguard instance (and I disabled it to test if that was the issue)
I was messing with the NAT Outbound for the Road Warrior setup as explained in the OPNSense Road Warrior tutorial, but that rule is also disabled.
I enabled OutboundDNS to override a local domain.
And I have a dynamic DNS to access my VPN with a FQDN instead of the ip directly.
But otherwise, I have the vanilla configuration. I disabled all of these rules I've created to make sure that they weren't the issue, and I can still access my OPNSense from the WAN interface.
So is that a normal default behaviour? If so, how can I

Thanks guys! I was finally able to self host my own raw-html "blog"
So, I've been trying to accomplish this for a while. First I posted asking for help getting started, then I posted about trying to open ports on my router. Now, I proudly post about being able to show the world (for the first time ever) my abysmal lack of css and html skills.
I would like to thank everyone in this community, especially those who took the time to answer my n00b questions. If you'd like to see it, it will be available at: https://kazuchijou.com/
(Beware however, for you might cringe into oblivion and back.)
Since this website is hosted on my desktop computer, there will be some down-time here and there, however I'll leave it on for the next 48 hours (rip electricity bill) only for you guys to see. <3
Now, there are a couple of things that need addressing:
I set it up as a cloudflare tunnel and linked it to my domain. However, I still don't know any docker at all (despite using it for th

Forward authentication with Authentik for Firefly3
*** For anyone stumbling on this post who is as much of a newbie as I am right now: forward auth doesn't work with FireflyIII.
I thought that forward auth was the same as a proxy, but in this case it is the proxy that provides the x-authentik headers.
So for Firefly, set up Authentik as a proxy provider and not as forward auth.
I haven't figured out the rest yet, but at least, x-authentik-email is in my header now.
Good luck ***
Hello,
I am trying to set up Authentik to do forward auth for Firefly3, using caddy. I am trying to learn external authentication, so my knowledge is limited.
My setup is as follows.
Looking at the Firefly doc, I need to set
```
AUTHENTICATION_GUARD=remote_user_guard
AUTHENTICATION_GUARD_HEADER=HTTP_X_AUTHENTIK_EMAIL
```
in my .env file. I used the base .env file provided by Firefly and modified only these two lines
Then, in my Authentik, I made a forward auth for a single app
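For what it's worth, with a proxy provider the reverse proxy needs no forward_auth plumbing at all; Caddy just hands the vhost to the Authentik outpost, which injects the X-Authentik-* headers and proxies on to Firefly. A rough sketch (hostname and the outpost port 9000 are assumptions, not from the post):

```
firefly.example.com {
	# The Authentik outpost terminates authentication and forwards to Firefly,
	# adding headers such as X-Authentik-Email along the way.
	reverse_proxy authentik-server:9000
}
```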

Noob stuck on port-forwarding while trying to host own raw-html website. Pls help
Edit: Solution
Yeah, thanks to u/[email protected] I contacted my ISP and found out that in fact they were blocking my port forwarding capabilities. I gave them a call and I had to pay for a public IP address plan and now it's just a matter of testing again. Thank you very much to everyone involved. I love you. It was Megacable by the way. If anyone from my country ever encounters the same problem I hope this post is useful to you.
Here's the original post:
Hey!
Ok, so I'm trying to figure this internet thing out. I may be stupid, but I want to learn.
So, what I'm essentially doing is trying to host my own raw html website on my own hardware and get it out to the internet for everyone to see (temporarily of course, I don't want to get in trouble with hackers and bots) I just want to cross that out of my bucket list.
What I've done so far:
- I set up a qemu/kvm virtual machine with debian as my server
- I configured a bridge so that it's available to my local network
- I g

Help Running Scrutiny
Hello All,
I am trying to run Scrutiny via docker compose and I am running into an issue where nothing shows up on the web UI. If anyone here has this working, I would love some ideas on what the issue could be.
As per their troubleshooting for this, I followed those steps and here is the output:
```
$ smartctl --scan
/dev/sda -d scsi # /dev/sda, SCSI device
/dev/sdb -d sat # /dev/sdb [SAT], ATA device
/dev/nvme0 -d nvme # /dev/nvme0, NVMe device
```

```
docker run -it --rm \
  -v /run/udev:/run/udev:ro \
  --cap-add SYS_RAWIO \
  --device=/dev/sda \
  --device=/dev/sdb \
  ghcr.io/analogj/scrutiny:master-collector smartctl --scan
/dev/sda -d scsi # /dev/sda, SCSI device
/dev/sdb -d sat # /dev/sdb [SAT], ATA device
```
So I think I am inputting the devices correctly.
I only really changed the port number for the web UI to 8090 from 8080 in their example, as 8080 is
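For comparison, a compose sketch along the lines of the project's omnibus example, with the web UI moved to 8090 as in the post (the drive list and image tag are assumptions; per the Scrutiny docs NVMe devices may additionally need SYS_ADMIN):

```yaml
services:
  scrutiny:
    image: ghcr.io/analogj/scrutiny:master-omnibus
    ports:
      - "8090:8080"        # web UI: host 8090 -> container 8080
    cap_add:
      - SYS_RAWIO
      - SYS_ADMIN          # needed for NVMe smartctl access
    volumes:
      - /run/udev:/run/udev:ro
    devices:
      - /dev/sda
      - /dev/sdb
      - /dev/nvme0
```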

How to change qBittorrent admin password in docker-container?
I'm currently trying to spin up a new server stack including qBittorrent. When I launch the web UI, it asks for a login on first launch. According to the documentation, the default user is admin and the default password is adminadmin.
Solved:
For qBittorrent ≥ v4.1, a randomly generated password is created at startup on the initial run of the program. After starting the container, enter the following into a terminal:
docker logs qbittorrent
or sudo docker logs qbittorrent
(if you do not have access to the container)
The command should return:
```
******** Information ********
To control qBittorrent, access the WebUI at: http://localhost:5080
The WebUI administrator username is: admin
The WebUI administrator password was not set. A temporary password is provided for this session: G9yw3qSby
You should set your own password in program preferences.
```
Use this password to login for this session. Then create a new password by opening http://{localhost}:5080 and navigate the menus
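If you'd rather script it than click through the menus, the WebUI API can also set a permanent password; a sketch assuming the UI is on port 5080 as in the log above (the new password is a placeholder):

```bash
# Log in with the temporary password to get a session cookie
curl -c /tmp/qb.cookie \
  --data 'username=admin&password=G9yw3qSby' \
  http://localhost:5080/api/v2/auth/login

# Set a new WebUI password through the preferences endpoint
curl -b /tmp/qb.cookie \
  --data-urlencode 'json={"web_ui_password":"MyNewStrongPassword"}' \
  http://localhost:5080/api/v2/app/setPreferences
```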

Chaining routers and GUA IPv6 addresses
Hey fellow self-hosting lemmoids
Disclaimer: not at all a network specialist
I'm currently setting up a new home server in a network where I'm given GUA IPv6 addresses in a 64-bit subnet (which means, if I understand correctly, that I can set up many devices in my network that are accessible via a fixed IP to the outside world). Everything works so far, my services are reachable.
Now my problem is that I need to use the router provided by my ISP, and it's - big surprise here - crap. The biggest concern for me is that I don't have fine-grained control over firewall rules. I can only open ports in groups (e.g. "Web", "All other ports") and I can only do this network-wide and not for specific IPs.
I'm thinking about getting a second router with a better IPv6 firewall and only use the ISP router as a "modem". Now I'm not sure how things would play out regarding my GUA addresses. Could a potential second router also assign addresses to devices in that globally routable space directl

How do I redirect to a /path with Nginx Proxy Manager?
Hi folks,
Just set up Nginx Proxy Manager + Pi-hole and a new domain with Porkbun. All is working and I have all my services at service.mydomain.com; however, some services such as Pi-hole seem to be strictly reachable with /admin at the end. This means that with my current setup it only directs me to pihole.mydomain.com, which leads to a 403 Forbidden.
This is what I have tried, but to no avail. Not really getting the hang of this, so I would really appreciate a pointer on this :)
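One common way to handle this in NPM (a sketch, not necessarily the only way): keep the proxy host pointing at Pi-hole and add a small redirect in the proxy host's Advanced tab so the bare hostname lands on /admin:

```nginx
# Custom config pasted into the NPM proxy host's "Advanced" tab
location = / {
    return 301 /admin/;
}
```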


Randomly getting ECH errors on self-hosted services.
In the last couple of weeks, I've started getting this error ~1/5 times when I try to open one of my own locally hosted services.

I've never used ECH, and have always explicitly restricted nginx to TLS 1.2, which doesn't support it. Why am I suddenly getting this, why is it randomly erroring and then working just fine again 2 min later, and how can I prevent it altogether? Is anyone else experiencing this?
I'm primarily noticing it with Ombi. I'm also mainly using Chrome on Android for this. But, checking just now: DuckDuckGo loads the page just fine every time, and Firefox is flat-out refusing to load it at all.

There's 20+ services going through the same nginx

Missing /etc/systemd/resolved.conf file
Solution: I just had to create the file
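A minimal sketch of what the created file needs to contain for this fix (the [Resolve] section header is required), followed by a restart so port 53 is freed up:

```
# /etc/systemd/resolved.conf
[Resolve]
DNSStubListener=no
```

Then run sudo systemctl restart systemd-resolved before installing Pi-hole.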
I wanted to install Pi-Hole on my server and noticed that port 53 is already in use by something.
Apparently it is in use by systemd-resolved:
```
~$ sudo lsof -i -P -n | grep LISTEN
[...]
systemd-r   799 systemd-resolve   18u  IPv4  7018  0t0  TCP 127.0.0.53:53 (LISTEN)
systemd-r   799 systemd-resolve   20u  IPv4  7020  0t0  TCP 127.0.0.54:53 (LISTEN)
[...]
```
And the solution should be to edit /etc/systemd/resolved.conf by changing #DNSStubListener=yes to DNSStubListener=no, according to this post I found. But /etc/systemd/resolved.conf doesn't exist on my server.
I've tried sudo dnf install /etc/systemd/resolved.conf, which did nothing other than tell me that systemd-resolved is already installed, of course. Rebooting also didn't work. I don't know what else I could try.
I'm running Fedora Server.
Is there another wa

Having difficulty visiting an mTLS-authenticated website from GrapheneOS
I host a website that uses mTLS for authentication. I created a client cert and installed it in Firefox on Linux, and when I visit the site for the first time, Firefox asks me to choose my cert and then I'm able to visit the site (and every subsequent visit to the site is successful without having to select the cert each time). This is all good.
But when I install that client cert into GrapheneOS (settings -> encryption & credentials -> install a certificate -> vpn & app user certificate), no browser app seems to recognize that it exists at all. Visiting the website from Vanadium, Fennec, or Mull browsers all return "ERR_BAD_SSL_CLIENT_AUTH_CERT" errors.
Does anyone have experience successfully using an mTLS cert in GrapheneOS?

Weird (to me) networking issue - can you help?
I have two subnets and am experiencing some pretty weird (to me) behaviour - could you help me understand what's going on?
Scenario 1
```
PC:     192.168.11.101/24
Server: 192.168.10.102/24, 192.168.11.102/24
```
From my PC I can connect to .11.102, but not to .10.102:
```bash
ping -c 10 192.168.11.102   # works fine
ping -c 10 192.168.10.102   # 100% packet loss
```
Scenario 2
Now, if I disable .11.102 on the server (ip link set <dev> down) so that it only has an IP on the .10 subnet, the previously failing ping works fine.
```
PC:     192.168.11.101/24
Server: 192.168.10.102/24
```
From my PC:
```bash
ping -c 10 192.168.10.102   # now works fine
```
This is baffling to me... any idea why it might be?
Here's some additional information:
- The two subnets are on different vlans (.10/24 is untagged and .11/24 is tagged 11).
- The PC and Server are connected to the same managed switch, which however does nothing "strange" (i