this post was submitted on 09 Jan 2024
66 points (98.5% liked)

Selfhosted

39980 readers
781 users here now

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or github here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.

Resources:

Any issues on the community? Report it using the report flag.

Questions? DM the mods!

founded 1 year ago
MODERATORS
 

Currently, I run Unraid and have all of my services set up there as docker containers. While this is nice and easy to set up initially, it has some major downsides:

  • It's fragile. Unraid is prone to bugs/crashes with docker that take down my containers. It's also not resilient, so when things break I have to log in and fiddle.
  • It's mutable. I can't use infrastructure-as-code tools like Terraform, and configuration sort of just exists in the UI. I can't easily roll back or recover.
  • It's single-node. Everything is tied to my one big server that runs the NAS, but I'd rather have the NAS as a separate, fairly low-power appliance and a separate machine to handle things like VMs and containers.

So I'm looking ahead and thinking about what the next iteration of my homelab will look like. While I like Unraid for the storage side, I'm a little tired of wrangling it into a container orchestrator and hypervisor, and I think this year I'll split that job out to a dedicated machine. I'm comfortable with, and in fact prefer, IaC over fancy UIs, so I'd love to be able to use Terraform or Pulumi or something like that. I'd also prefer something multi-node, since I want to be able to tie multiple machines together. And I want something fault-tolerant, as I host services for friends and family that currently require a lot of manual intervention when they go down.

So the question is: how do you all do this? Kubernetes, docker-compose, HashiCorp Nomad? Do you run k3s, Harvester, or what? I'd love to get an idea of what people are doing and why, so I can get some ideas as to what I might do.

[–] Toribor@corndog.social 16 points 10 months ago (2 children)

In my opinion, trying to set up a highly available, fault-tolerant homelab adds a large amount of unnecessary complexity without an equivalent benefit. It's good to have redundancy for essential services like DNS, but otherwise I think it's better to focus on a robust backup-and-restore process, so that if anything goes wrong you can just restore from a backup or start containers on another node.

I configure and deploy all my applications with Ansible. You can programmatically create config files, pass secrets, build or start containers, and cycle containers automatically after config changes: basically everything you could need.
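As a rough sketch of that pattern (the service name, image, and paths here are hypothetical placeholders, not from the comment above), an Ansible play can template a config file and restart the container whenever it changes:

```yaml
# Hypothetical sketch: render a templated config, run a container,
# and cycle the container when the config changes.
- name: Deploy example service
  hosts: docker_hosts
  become: true
  tasks:
    - name: Render application config from a template
      ansible.builtin.template:
        src: app.conf.j2
        dest: /opt/example/app.conf
        mode: "0640"
      notify: Restart example container

    - name: Ensure the container is running
      community.docker.docker_container:
        name: example-app
        image: nginx:stable          # placeholder image
        restart_policy: unless-stopped
        volumes:
          - /opt/example/app.conf:/etc/app/app.conf:ro

  handlers:
    - name: Restart example container
      community.docker.docker_container:
        name: example-app
        image: nginx:stable
        restart_policy: unless-stopped
        volumes:
          - /opt/example/app.conf:/etc/app/app.conf:ro
        restart: true
```

The handler only fires when the template task reports a change, which is what gives you the "cycle containers automatically after config changes" behavior.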

Sure, it would be neat if services could fail over automatically, but things only ever tend to break when I'm making changes anyway.

[–] CubitOom@infosec.pub 4 points 10 months ago* (last edited 10 months ago)

I would say that if you are going to host it at home, then Kubernetes is more complex. Bare-metal Kubernetes control-plane management has some pitfalls. But if you were to use a cloud provider like Linode or DigitalOcean and use their managed Kubernetes service, then the only real extra complexity is learning how to manage Kubernetes, which is minimal.

There is a decent hardware investment needed to run Kubernetes if you want it to be fully HA (which I would argue means a minimum of 2 clusters of 3 nodes each, on different continents), but you could run a single-node cluster with autoscaling at a cloud provider if you don't need HA. I will say it's nice not to have to worry about a service failing periodically, as it will just transfer to another node in a few seconds automatically.
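For illustration (a minimal sketch with placeholder names, not from the comment above), the automatic fail-over described here comes from running a workload as a Deployment with multiple replicas: if a node dies, the control plane reschedules the lost pod onto a healthy node.

```yaml
# Minimal Deployment sketch: two replicas of a placeholder image.
# Kubernetes recreates any pod lost to a node failure elsewhere.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: example-app
          image: nginx:stable   # placeholder image
          ports:
            - containerPort: 80
```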

[–] nopersonalspace@lemmy.world 1 points 10 months ago (1 children)

Yeah, I guess that's true. I do think the other part, having configs done programmatically, is more important anyway. If things go down but all it takes to get them back is to re-run the configs from files, then it's not so bad.

[–] taladar@sh.itjust.works 4 points 10 months ago

More importantly, if you do things programmatically, you will still have a record of how you did it last time when you next need to move to a new major version of something. That's particularly important in a home setting, where you don't do tasks like that often.