
Any guides on how to host at home? I'm always afraid that opening ports on my home router means taking a serious risk of being hacked. Does using something like Cloudflare help? I am a complete beginner.

[–] terribleplan@lemmy.nrd.li 2 points 1 year ago

Yeah, you're basically on the right track. I do a couple of things in possibly interesting ways that you may find useful:

  1. I run multiple frps instances on different servers. I haven't gotten around to setting up a load balancer in front of them or automatically removing unhealthy ones from DNS, but that sort of thing is the eventual plan. This means running as many frpcs as I have frps instances. I also haven't figured out what to do if, e.g., one service exposed via an frps is healthy but another is not. It may make sense to run HAProxy in front of it or something... sounds terrible...
  2. I have multiple frpc.inis; each defines all of the connection details for a particular frps and then uses includes = /conf.d/*.ini to load up whatever services that frpc exposes (there's a sketch of this after the list).
  3. I run frpc in Docker and use volumes to mount the right frpc.ini and /conf.d/<service>.ini files into each container.
  4. I use QUIC for the communication layer between frpc and frps, with certificates for client authentication (both ends of that are sketched after the list).
  5. I run my frpcs (one container per frps; I'm considering ways to combine them to make deployment less annoying) right alongside the service I am exposing remotely, so I run e.g. one for Traefik, one for ~~gogs~~ ~~gitea~~ forgejo ssh, etc. If you are using docker-compose I would put one (set of) frpc in that compose file to expose whatever services it has. Similar thought for k8s: I would run sidecar containers as part of your pod spec.
  6. If I have more than one instance of a service, such as multiple Traefik "ingress" stacks, I run a set of frpcs per deployment of that service.
  7. Where possible I use PROXY protocol via proxy_protocol_version = v2 to preserve the incoming client IP address (see the service sketch after the list). Traefik supports this natively, and it is the most important service to me since most of what I run talks over HTTP(S).
  8. I choose to terminate end-user TLS with Traefik inside my homelab, so the full TLS session gets forwarded to it as a plain TCP stream. frp does have HTTPS support via plugin = https2http, but I like my setup better.
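
To make points 2 and 4 a bit more concrete, here is a minimal sketch of what one of those per-frps frpc.inis can look like (legacy INI format; the hostname, port, and file paths are placeholders, and exact key availability can vary by frp version):

```ini
# frpc.ini -- connection details for one particular frps
# (hostname, port, and paths below are placeholders)
[common]
server_addr = frps1.example.com
server_port = 7000

# QUIC as the transport between frpc and frps
protocol = quic

# TLS with a client certificate so the frps can authenticate this frpc
tls_enable = true
tls_cert_file = /etc/frp/client.crt
tls_key_file = /etc/frp/client.key
tls_trusted_ca_file = /etc/frp/ca.crt

# load whatever services this frpc exposes
includes = /conf.d/*.ini
```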
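
And a sketch of one of the /conf.d/<service>.ini files, in this case exposing a Traefik HTTPS entrypoint as a plain TCP stream with PROXY protocol enabled, roughly matching points 7 and 8 (the proxy name, IP, and ports are made up):

```ini
# /conf.d/traefik.ini -- expose Traefik's HTTPS entrypoint as raw TCP
# (proxy name, IP, and ports are placeholders)
[traefik-https]
type = tcp
local_ip = 127.0.0.1
local_port = 443
remote_port = 443

# pass the original client IP through to Traefik via PROXY protocol v2
proxy_protocol_version = v2
```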
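
For completeness, the matching frps side looks roughly like this on each public server (again just a sketch with placeholder ports and paths; tls_only makes the frps require a client certificate signed by that CA):

```ini
# frps.ini -- one of the frps instances on a public server
# (ports and paths are placeholders)
[common]
bind_port = 7000

# accept frpc connections over QUIC (UDP) on the same port number
quic_bind_port = 7000

# require TLS and verify frpc client certificates against this CA
tls_only = true
tls_cert_file = /etc/frp/server.crt
tls_key_file = /etc/frp/server.key
tls_trusted_ca_file = /etc/frp/ca.crt
```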

As to your question of "what happens when the frpcs go offline?": it depends on the service type. I only use services of type = tcp and type = udp, so I can't speak to anything beyond those from experience.

In the case of type = tcp, you can run multiple frpcs against one frps and the frps will load-balance across them. That means running more than one gives you some level of HA: if one connection breaks, traffic should just flow to the other, and any still-open connections to the failed frpc get killed. It's the same idea as how cloudflared's Tunnels feature makes two connections to two of their datacenters. If there is nothing left to handle a particular TCP service on an frps, I think the connection gets refused; it may even stop listening on the port, but I'm not sure of that.
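
As far as I understand it, the mechanism behind this for type = tcp is frp's proxy group feature: if every frpc registers the proxy with the same group and group_key, the frps spreads incoming connections across whichever frpcs are currently connected. A hedged sketch (the proxy name, ports, and shared key are placeholders):

```ini
# /conf.d/traefik.ini, identical on every frpc: joining the same group
# lets the frps load-balance across whichever frpcs are connected
[traefik-https]
type = tcp
local_ip = 127.0.0.1
local_port = 443
remote_port = 443
group = traefik-https
# placeholder secret; must be the same on every frpc in the group
group_key = some-shared-secret
```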

Sadly, in the case of type = udp the frps will only accept one frpc connection. I still run multiple frpcs, but those extra connections just fail and keep retrying until the "active" frpc for that UDP service dies. I believe this means that if there is nothing to handle a particular UDP service on an frps, it just drops the packets, since there isn't really a "connection" to kill/refuse/reset. The same caveat about it possibly stopping listening may apply here as well, but I'm unsure in this case too.

My wishlist for frp is, in no particular order:

  • frpc making multiple connections to a server
  • frpc being able to connect to multiple servers
  • Some sort of native ALPN handling in frps, and ability to use a custom ALPN protocol for frp traffic (so I can run client traffic and frp traffic on the same port)
  • frps support for load-balancing UDP across multiple frpcs via some sort of session tracking, 3-tuple-based routing, or something else
  • frps support for clustering or something, so even if one frps didn't have a way to route a service it could talk to another frps nearby that did
    • or even support for tiers of "locality": first try the local machine, then the same zone/region/etc.
  • It'd be super neat if there were a way to do something like Cloudflare's "Keyless SSL"

Overall I am pretty happy with frp, but it seems like it is trying to solve too much (e.g. "secret" services, p2p, HTTP, TCP multiplexing). I would love to see something focused purely on TCP/UDP (and maybe TLS/QUIC) edge-ingress emerge and solve a narrower problem space even better. Maybe there is some sort of complex network-level solution with a VPN and routing daemons (BGP?) and firewall/NAT stuff that could do this better, but I really just want a tiny executable and/or container I can run on both ends and have things "just work".