
I want to move away from Cloudflare tunnels, so I rented a cheap VPS from Hetzner and tried to follow this guide. Unfortunately, the WireGuard setup didn't work. I'm trying to forward all traffic from the VPS to my homeserver and vice versa. Are there any other ways to solve this issue?

VPS Info:

OS: Debian 12

Architecture: ARM64 / aarch64

RAM: 4 GB

Traffic: 20 TB

[–] ptz@dubvee.org 21 points 7 months ago* (last edited 7 months ago) (2 children)

You don't want to forward all traffic. You can do SNAT port forwards across the VPN, but that requires the clients in your LAN to use the VPS as their gateway (I do this for a few services that I can't run through a proxy; it's clunky but works well).

Typically, you'll want to proxy requests to your services rather than forwarding traffic.

  1. Set up WireGuard or OpenVPN on the VPS as a VPN server. Allow the listener port through the firewall (I use ufw on Debian, but you can use iptables if you want).
  2. Install HAProxy or Nginx (or Nginx Proxy Manager) on the VPS to act as your frontend (a minimal sketch follows this list). It will listen on ports 80/443 and proxy requests to your backend servers. It will also be responsible for SSL termination, so your public-facing certs live there.
  3. Point your DNS records for your services to the VPS's public IPv4
  4. On your LAN, configure your router to connect to the VPS as a VPN client and route into your LAN from the VPN subnet -or- install the VPN client (WG/OVPN) on each host
  5. In your VPS's reverse proxy (HAProxy, etc), set the backend server address and port to the VPN address of your host
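A minimal sketch of what step 2 could look like with HAProxy on the VPS (everything here is a placeholder/assumption: the domain app.example.com, a web app on the homeserver's port 8080, the homeserver's VPN address 10.0.0.2 per step 5, and a combined cert+key PEM on the VPS):

# /etc/haproxy/haproxy.cfg (sketch, not a complete config)
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend https-in
    bind *:80
    bind *:443 ssl crt /etc/haproxy/certs/app.example.com.pem
    http-request redirect scheme https unless { ssl_fc }
    use_backend app if { hdr(host) -i app.example.com }

backend app
    option forwardfor                       # pass the client IP in X-Forwarded-For
    server homeserver 10.0.0.2:8080 check   # backend = the peer's VPN address, not its public IP

The same idea works with Nginx or Nginx Proxy Manager; the key point is that the backend address is the VPN IP of the homeserver.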

I've done this since ~2013 (before CF tunnels were even a product) and it has worked great.

My original use case was to set up direct connectivity between a Raspberry Pi with a 3G dongle and a server at home on satellite internet. Both ends were behind CG-NAT, so this was the solution I came up with.

[–] lemmyvore@feddit.nl 3 points 7 months ago (1 children)

Out of curiosity, why not a simple reverse proxy on the VPS (that only adds the client's real IP to the headers), tunneled to a full reverse proxy on the home server (that does host routing and everything else) through an SSH tunnel?

[–] AlexPewMaster@lemmy.zip 1 points 7 months ago (1 children)

What would that kind of setup look like?

[–] lemmyvore@feddit.nl 4 points 7 months ago* (last edited 7 months ago) (1 children)

Variant 1:

  • SSH tunnel established outgoing from home server to VPS_PUBLIC_IP:22, which makes an encrypted tunnel that "forwards" traffic from VPS_PUBLIC_IP:443 to HOME_LOCALHOST:443.
  • Full reverse proxy listening on HOME_LOCALHOST:443 that does everything (TLS termination, host routing, third-party auth, etc.).
  • Instead of running the home proxy on the host you can of course run it inside a container; you just need to also run the SSH tunnel from inside that container.

Pro: very secure, the VPS doesn't store any sensitive data (no TLS certificates, only an SSH public key) and the client connections pass through the VPS double-encrypted (TLS between the client browser and the home proxy, wrapped inside SSH).

Con: you don't get the client's IP. When the home apps receive the connections they appear to originate at the home end of the SSH tunnel, which is a private interface on the home server.

Variant 2 (in case you need client IPs):

  • SSH tunnel established the same way as in variant 1, but it listens on VPS_LOCALHOST:PORT.
  • Simple reverse proxy on VPS_PUBLIC_IP:443. It terminates the TLS connections (decrypts them) using each domain's certificate, adds the client IP to the HTTP headers, and forwards the connection into VPS_LOCALHOST:PORT, which sends it to the home proxy (a config sketch follows this variant).
  • Full reverse proxy at home set up the same way as in variant 1, except you can listen on 80 and skip TLS termination, because it's redundant at this point – the connection has already been decrypted and will arrive wrapped inside SSH.

Pro: by decrypting the TLS connection the simple proxy can add the client's IP to the HTTP headers, making it available to logs and apps at home.

Con: the VPS needs to store the TLS certificates for all the domains you're serving, you need to copy fresh certificates to the VPS whenever they expire, and the unencrypted connections are available on the VPS between the exit from TLS and the entry into the SSH tunnel.
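A hedged sketch of what the simple VPS-side proxy in variant 2 could look like with nginx (assumptions: the SSH tunnel listens on 127.0.0.1:8080 on the VPS and forwards to the home proxy's port 80; app.example.com and the cert paths are placeholders):

server {
    listen 443 ssl;
    server_name app.example.com;
    ssl_certificate     /etc/nginx/certs/app.example.com/fullchain.pem;   # copied from home
    ssl_certificate_key /etc/nginx/certs/app.example.com/privkey.pem;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;                   # client IP for home logs/apps
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_pass http://127.0.0.1:8080;                          # into the SSH tunnel
    }
}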

Edit: Variant 3? proxy protocol

I've never tried this, but apparently there's a so-called PROXY protocol that can be used to attach information such as the client IP to TLS connections without terminating them.

You would still need a VPS proxy and a home proxy like in variant 2, and they both need to support proxy protocol.

The frontend (VPS) proxy would forward connections in stream mode and use proxy protocol to add client info on the outside.

The backend (home) proxy would terminate TLS and do host routing etc., but it can also unpack the client IP from the PROXY protocol header and place it in HTTP headers for apps and logs.

Pro: it's basically the best of both variants 1 and 2. TLS connections don't need to be terminated halfway, but you still get client IPs.
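A hedged sketch of variant 3 with nginx on both ends (assumptions: the tunnel delivers the raw TCP stream to the home proxy on port 8443, and the stream block goes at the top level of nginx.conf on the VPS, not inside http):

# VPS side: pass TLS through unterminated, prepend a PROXY protocol header
stream {
    server {
        listen 443;
        proxy_protocol on;             # attaches the real client IP/port
        proxy_pass 127.0.0.1:8443;     # into the tunnel toward the home proxy
    }
}

# Home side: terminate TLS and read the client IP from the PROXY header
server {
    listen 443 ssl proxy_protocol;
    set_real_ip_from 127.0.0.1;        # trust the local end of the tunnel (adjust for WG)
    real_ip_header proxy_protocol;
    # ...certs, host routing, proxy_pass to the app, etc.
}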

Please note that it's up to you to weigh the pros and cons of having the client IPs or not. In some circumstances it may actually be a feature to not log client IPs, for example if you expect you might be compelled to provide logs to someone.

[–] AlexPewMaster@lemmy.zip 1 points 7 months ago (1 children)

Very interesting... How do I get started?

[–] lemmyvore@feddit.nl 1 points 7 months ago (1 children)

The SSH tunnel is just one command, but you may want to use autossh to restart it if it fails.
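For example, a hedged autossh invocation (it assumes key-based auth and an unprivileged tunnel user on the VPS; the port numbers are placeholders to adjust per variant):

autossh -M 0 -N \
    -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
    -o "ExitOnForwardFailure yes" \
    -R 8443:localhost:443 \
    tunnel@VPS_PUBLIC_IP

-M 0 turns off autossh's own monitor port and lets the ServerAlive options detect a dead tunnel instead.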

If you choose variant 2 you will need to configure a reverse proxy on the VPS that does TLS termination (using the correct certificates for each domain on 443) and passes the requests through to the tunnel. Look into nginx, caddy, traefik or haproxy.

For the full home proxy you will once again need a proxy but you'll additionally need to do host routing to direct each (sub)domain to the correct app. You'll probably want to use the same proxy as above to avoid learning two different proxies.

I would recommend either Caddy (on both) or nginx (VPS) + Nginx Proxy Manager (home) if you're a beginner.
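If you go with Caddy at home, a minimal Caddyfile sketch (hypothetical domains and app ports; Caddy does the TLS and host routing here):

app.example.com {
    reverse_proxy 127.0.0.1:8080
}

other.example.com {
    reverse_proxy 127.0.0.1:3000
}

One caveat: Caddy's automatic HTTPS still needs an ACME challenge to reach it, so through a 443-only tunnel you'd be relying on the TLS-ALPN challenge (or a DNS challenge).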

[–] AlexPewMaster@lemmy.zip 1 points 7 months ago (1 children)

How do I make the SSH tunnel forward traffic? It can't be as easy as just running ssh user@SERVER_IP in the terminal.

(I only need variant 1 btw)

[–] lemmyvore@feddit.nl 1 points 7 months ago* (last edited 7 months ago) (1 children)

You also add the -R parameter:

ssh -R SERVER_IP:443:HOME_PROXY_IP:HOME_PROXY_PORT user@SERVER_IP

https://linuxize.com/post/how-to-setup-ssh-tunneling/ (you want the "remote port forwarding" section). The ssh -R, -L and -D options are magical; more people should learn about them.

You may also need to open access to port 443 on the VPS. How you do that depends on the VPS service; check their documentation.
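One thing that can also trip this up: by default sshd binds remote forwards to loopback only, so for the forward to actually listen on the VPS's public address you generally need GatewayPorts enabled on the VPS:

# /etc/ssh/sshd_config on the VPS, then restart sshd
GatewayPorts clientspecified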

[–] AlexPewMaster@lemmy.zip 1 points 7 months ago (1 children)

Hi, whenever I try to use port 80 or 443 at the beginning of the -R parameter, I get this error: Warning: remote port forwarding failed for listen port 80. How do I fix this?

[–] lemmyvore@feddit.nl 1 points 7 months ago

Ah yes. Ports below 1024 are normally privileged and only the superuser can bind to them (and the account you're using to SSH in is not, and should not be, root).

This link has several possible solutions: https://unix.stackexchange.com/questions/10735/allowing-a-regular-user-to-listen-to-a-port-below-1024
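Two common workarounds, sketched here under assumptions (a Linux VPS; port 8443 is an arbitrary placeholder — weigh whether either fits your setup):

# Option A: lower the unprivileged port threshold so a normal user can bind 80/443
sudo sysctl -w net.ipv4.ip_unprivileged_port_start=80
echo "net.ipv4.ip_unprivileged_port_start=80" | sudo tee /etc/sysctl.d/99-unpriv-ports.conf

# Option B: keep the tunnel on a high port and redirect 443 to it with iptables
ssh -N -R 0.0.0.0:8443:localhost:443 user@SERVER_IP
sudo iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-ports 8443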

[–] AlexPewMaster@lemmy.zip 1 points 7 months ago (1 children)

The biggest obstacle for me is the connection between the VPS and my homeserver. I tried this today: pinging 10.0.0.2 (the homeserver's IP via WireGuard) from the VPS gives this result:

PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
From 10.0.0.1 icmp_seq=1 Destination Host Unreachable
ping: sendmsg: Destination address required
From 10.0.0.1 icmp_seq=2 Destination Host Unreachable
ping: sendmsg: Destination address required
^C
--- 10.0.0.2 ping statistics ---
2 packets transmitted, 0 received, +2 errors, 100% packet loss, time 1019ms

Not sure why though.

[–] ptz@dubvee.org 2 points 7 months ago* (last edited 7 months ago) (1 children)

Can you post your WG config (masking the public IPs and private key if necessary)?

With WireGuard, the AllowedIPs setting is basically its routing table.

Also, you don't want to set the endpoint address (on the VPS) for your homeserver peer since it's behind NAT; you only want to set that on the 'client' side. Since you're behind NAT, you'll also want to set PersistentKeepalive on the client peer so the tunnel remains open.

[–] AlexPewMaster@lemmy.zip 2 points 7 months ago (1 children)

Hi, thank you so much for trying to help me, I really appreciate it!

VPS wg0.conf:

[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = REDACTED

PostUp = iptables -t nat -A PREROUTING -p tcp -i eth0 '!' --dport 22 -j DNAT --to-destination 10.0.0.2; iptables -t nat -A POSTROUTING -o eth0 -j SNAT --to-source SERVER_IP
PostUp = iptables -t nat -A PREROUTING -p udp -i eth0 '!' --dport 55107 -j DNAT --to-destination 10.0.0.2;

PostDown = iptables -t nat -D PREROUTING -p tcp -i eth0 '!' --dport 22 -j DNAT --to-destination 10.0.0.2; iptables -t nat -D POSTROUTING -o eth0 -j SNAT --to-source SERVER_IP
PostDown = iptables -t nat -D PREROUTING -p udp -i eth0 '!' --dport 55107 -j DNAT --to-destination 10.0.0.2;

[Peer]
PublicKey = REDACTED
AllowedIPs = 10.0.0.2/32

Homeserver wg0.conf:

[Interface]
Address = 10.0.0.2/24
PrivateKey = REDACTED
 
[Peer]
PublicKey = REDACTED
AllowedIPs = 0.0.0.0/0
PersistentKeepalive = 25
Endpoint = SERVER_IP:51820

(REDACTED stands for the public/private keys; SERVER_IP is the VPS's public IP.)

[–] ptz@dubvee.org 3 points 7 months ago (1 children)

On the surface, that looks like it should work (assuming all the keys are correct and 51820/udp is open to the world on your VPS).

Can you ping the VPS's WG IP from your homeserver and get a response? If so, try pinging back from the VPS after that.

Until you get bidirectional traffic going, you might try pulling the iptables rules out of your WireGuard config and bringing everything back up clean.
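For example, a stripped-down VPS wg0.conf for testing could just be the config above minus the PostUp/PostDown NAT rules (same keys and addresses):

[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = REDACTED

[Peer]
PublicKey = REDACTED
AllowedIPs = 10.0.0.2/32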

[–] AlexPewMaster@lemmy.zip 1 points 7 months ago (1 children)

I do not get a response when pinging the VPS's WG IP from my homeserver. It might have something to do with the firewall that my VPS provider (Hetzner) uses. I've now allowed port 51820 on both UDP and TCP and it's still the same as before... This is weird.

[–] ptz@dubvee.org 2 points 7 months ago (1 children)

I'm not familiar with Hetzner, but I know people use them; I haven't heard of any blocks on WG traffic (though I've read they do block outbound SMTP).

Maybe double-check your public and private WG keys on both ends. If the keys aren't right, it doesn't give you any kind of error; the traffic is just silently dropped if it doesn't decrypt.
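A quick, hedged way to sanity-check both things: derive the public key from each private key and compare it to what the other side has configured, and check whether any handshake completes at all.

# On each machine (paste the PrivateKey value from its wg0.conf):
echo "PRIVATE_KEY_FROM_wg0.conf" | wg pubkey

# Handshake/transfer counters; no latest-handshake means no traffic is getting through
sudo wg show wg0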

[–] AlexPewMaster@lemmy.zip 2 points 7 months ago (1 children)

Hmm, the keys do match on the two different machines. I have no idea why this doesn't work...

[–] ptz@dubvee.org 2 points 7 months ago* (last edited 7 months ago) (1 children)

Dumb question: you're starting wireguard right? lol

In most distros, it's systemctl start wg-quick@wg0 where wg0 is the name of the config file in /etc/wireguard

If so, then maybe double/triple check any firewalls / iptables rules. My VPS providers don't have any kind of firewall in front of the VM, but I'm not sure about Hetzner.

Maybe try stopping wireguard, starting a netcat listener on 51820 UDP and seeing if you can send to it from your homelab. This will validate that the UDP port is open and your lab can make the connection.

### VPS
user@vps:  nc -l -u VPS_PUBLIC_IP 51820

### Homelab
user@home:  echo "Testing" | nc -u VPS_PUBLIC_IP 51820

### If successful, VPS should show:
user@vps:  nc -l -u VPS_PUBLIC_IP 51820
Testing

I do know this is possible as I've made it work with CG-NAT on both ends (each end was a client and routed through the VPS).

[–] AlexPewMaster@lemmy.zip 2 points 7 months ago (1 children)

The command you provided for the VPS returns UDP listen needs -p arg, so I just added -p right before the port number and then it worked. Running the homelab command returns no port[s] to connect to... Not good.

[–] ptz@dubvee.org 3 points 7 months ago (1 children)

At least that points you to the problem: firewall somewhere.

Try a different port with your netcat test, perhaps? 51820 is the well-known WG port. Can't imagine they'd intentionally block it, but you never know.

Maybe Hetzner support can offer more guidance? Again, I'm not sure how they handle network traffic before it gets to the VM. On all of mine, it's just a raw gateway and it's up to me to handle all port blocking.

If you figure that part out and are still stuck on the WG part, just shoot me a reply.

[–] AlexPewMaster@lemmy.zip 2 points 7 months ago (1 children)

I tried opening port 22 on UDP (yeah, I am getting pretty desperate over here...) and still get the message no port[s] to connect to... Someone else on this post commented that I should stop using iptables for opening ports and start using something else as a firewall. Should I try this approach?

[–] ptz@dubvee.org 3 points 7 months ago (1 children)

Yeah, might be worth a shot. iptables is nice, but very verbose and somewhat obtuse.

I'd just clear out iptables completely and use ufw. Should be in Debian's package manager.

Here's a cheat sheet: https://www.digitalocean.com/community/tutorials/ufw-essentials-common-firewall-rules-and-commands
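Roughly, that could look like the following sketch (not a prescription — be careful flushing rules over SSH, and note the DNAT/SNAT rules from the wg0.conf live in the nat table):

# Flush the custom rules
sudo iptables -F
sudo iptables -t nat -F

# Re-create the basics with ufw
sudo ufw allow OpenSSH
sudo ufw allow 51820/udp      # WireGuard listener
sudo ufw allow 80,443/tcp     # reverse proxy
sudo ufw enable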

[–] AlexPewMaster@lemmy.zip 2 points 7 months ago (1 children)

What do you mean with "clear out iptables completely"? Should I remove the iptables package with sudo apt remove iptables?

[–] ptz@dubvee.org 3 points 7 months ago (1 children)

I believe iptables --flush should clear out any entries you've made. You can also reboot to clear them (unless you've got scripts bound to your interface up/down config that add rules).

Basically, you just need to get any custom iptables rules you made out of there and then re-implement any firewall rules with ufw.

You can still use iptables alongside UFW, but I only use those for more complex things like port forwarding, masquerading, etc.

[–] AlexPewMaster@lemmy.zip 2 points 7 months ago (1 children)

Alright, I switched to ufw and... it's still not working. sigh

Should we just try something completely different? WireGuard doesn't seem to be working on my VPS. Someone in the comments mentioned tunneling via SSH, which sounds interesting.

[–] ptz@dubvee.org 3 points 7 months ago* (last edited 7 months ago) (1 children)

That would work, but I've noticed performance isn't as good as a UDP VPN that uses the kernel's tun module. OpenVPN is also an option, but it's a LOT more involved to configure (I used to run it before Wireguard existed).

The oddest part is you can't get a netcat message through. That implies firewall somewhere.

What is the output of your ufw status?

[–] AlexPewMaster@lemmy.zip 2 points 7 months ago (1 children)

I've added some different ports for the future, but this is my ufw status:

Status: active

To                         Action      From
--                         ------      ----
OpenSSH                    ALLOW       Anywhere                  
51820                      ALLOW       Anywhere                  
2333                       ALLOW       Anywhere                  
80                         ALLOW       Anywhere                  
81                         ALLOW       Anywhere                  
443                        ALLOW       Anywhere                  
80/tcp                     ALLOW       Anywhere                  
OpenSSH (v6)               ALLOW       Anywhere (v6)             
51820 (v6)                 ALLOW       Anywhere (v6)             
2333 (v6)                  ALLOW       Anywhere (v6)             
80 (v6)                    ALLOW       Anywhere (v6)             
81 (v6)                    ALLOW       Anywhere (v6)             
443 (v6)                   ALLOW       Anywhere (v6)             
80/tcp (v6)                ALLOW       Anywhere (v6)

[–] ptz@dubvee.org 1 points 7 months ago (1 children)

I can't recall if ufw opens both TCP and UDP or just TCP by default.

Try explicitly allowing 51820/udp with ufw allow 51820/udp

[–] AlexPewMaster@lemmy.zip 2 points 7 months ago* (last edited 7 months ago)

I've added the firewall rule and it still says no port[s] to connect to whenever I run echo "Testing" | nc -u SERVER_IP -p 51820. I feel like you're trying to stay on a sinking ship, so I would suggest trying another method to see if we can even get the whole "bypass CGNAT with a VPS" thing to work at all.

Update: I've tried setting up SSH tunneling instead and it STILL doesn't work. I contacted Hetzner support about this issue and I'm hoping that they can resolve the firewall issues that I'm having.