Hey guys. I’ve been considering moving to another OS for my home lab. Do you have any suggestions, especially from former Unraid users? Mostly just for the arrs, though I’d like to run a reverse proxy/file hosting as well. Proxmox seems pretty trendy; can I use it for arrs as well as backups?
Rant/extra info:
I’ve been using Unraid for a couple of years now, and even paid for a Basic registration. I’ve largely used it to run all my arrs in Docker, plus Pi-hole, and had a HASSIO VM running.
I recently tried setting up Nextcloud. During the setup (which, like nearly everything, I followed a video guide for) I ran into a novel error, so I deleted the Nextcloud docker and got it from the official repo instead. Now my Nextcloud share is gone and I can’t create new shares??
Stuff like this happened when I set up Guacamole too: weird errors, plenty of which have little documentation or explanation, and plenty of which I need to SSH in or use Linux commands to fix. Which led me to, “I’m having to learn this stuff anyway, why not spin up a Linux server and learn properly?”
Should I just rebuild/give Unraid a bit more time? It’s still a young OS, right?
Proxmox is good as a host OS, you’ll set up a VM for docker and run your stuff in that.
It has a built-in backup system to image your VMs and containers, and you can combine it with Proxmox Backup Server (either in a VM or on another system) for incremental backups and deduplication.
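For a one-off, the built-in backup boils down to something like this; the storage name "pbs" is a placeholder for whatever you add under Datacenter -> Storage, not something from the post:

```shell
# Sketch only: build the vzdump invocation and print it rather than
# running it blind. Run it by hand on the Proxmox host once the
# "pbs" storage actually exists, or schedule it as a backup job.
cmd="vzdump --all --mode snapshot --storage pbs"
echo "would run: $cmd"
```

The same options are what the web UI's scheduled backup jobs generate under the hood.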
As far as Nextcloud goes I’m not surprised you had issues, their setup is weird, non-standard and very unstable in my experience. I switched to Syncthing long ago and it’s so much better.
Proxmox also can run on ZFS, has support to run containers, and can also manage backups.
Have you solved the issues Syncthing has on Android? It seems the networking in Android 9 and later blocks the LAN access used for finding local relays. Even manually configuring relay IPs in Syncthing doesn’t resolve the issue.
I don’t use it on Android because I don’t need sync, I need backups.
Photo backups are handled by Immich, and a general backup is done by the FolderSync app on a daily schedule over WebDAV to my server.
It runs just fine for me on Android 14. I don’t remember if it found the other devices automatically, but setting them up manually is trivial too, and devices can introduce each other if you enable it.
OS: NixOS (high learning curve, but it’s been worth it). Nix (the config language) is a functional programming language, so it can be difficult to grok. Documentation is shit because the ecosystem has evolved while maintaining backwards compatibility: if you use the new stuff (Nix flakes) you have to figure out what’s old and likely not applicable (channels or whatever).
BYOD: Just using LVM. All volumes are mirrored across several drives of different sizes. Some HDD volumes have an SSD cache layer on top (e.g., the Monero node); some are just on an SSD (e.g., the main system). No drive failures yet, so I can’t speak to how complex restoring is. All managed through NixOS with https://github.com/nix-community/disko.
I run stuff on a mix of OCI containers (podman or docker, default is podman which is what I use) and native NixOS containers which use systemd-nspawn.
The OS itself I don’t back up outside of mirroring. I run an immutable OS (every reboot is like a fresh install) and can redeploy from git, so there’s no need to back it up. I have some persistent BTRFS volumes mounted where logs, caches, and state go; I don’t back those up either, but I swap the volume every boot and keep the last 30 days of volumes (or a minimum of 10) for debugging.
I just use rclone for backups with some bash scripts. Devices back up to the home lab, which backs up to the cloud (encrypted with my keys), all using rclone (Round Sync on the phone).
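The homelab-to-cloud leg can be sketched in a couple of lines of bash; the remote name "cloud-crypt" and the source path are my own placeholders, not details from the post:

```shell
# Sync local data to a dated folder on an rclone "crypt" remote, so
# files are encrypted with your keys before they leave the machine.
# Printed rather than executed here; drop the echo to run for real.
src="/srv/data"
dest="cloud-crypt:backup/$(date +%F)"
echo "would run: rclone sync $src $dest --transfers 4"
```

Pointing a cron job at a script like this is all the "some bash scripts" part usually amounts to.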
Runs the arrs, Jellyfin, a Monero node, a Tor entry node, WireGuard VPN (to get into the network remotely), I2P, Mullvad VPN (the default), Proton VPN (torrents with port forwarding use this), DNS (forced over VPN using DoT) with Pi-hole in front of that, and three of my WiFi VLANs route through either Mullvad, I2P, or Tor. I’ll use Tails for anything sensitive; the WiFi routes are just to get to I2P or onion sites where I’m not worried about my device possibly leaking identity.
It’s pretty low-level. Everything is configured in NixOS. No GUIs. If it’s not configured in Nix, it’s wiped next reboot since the OS is immutable. All tracked in git, including secrets, using SOPS. Every device gets its own master key on first install, and I have a personal master key for reinstalls which is tracked outside of git in a password manager.
Took a solid month to get the initial setup done while learning NixOS. I had a very specific setup of LVM > LUKS encryption w/ Secure Boot and a hardware key > BTRFS. Overkill on security, but I geek out on that stuff. It’s been stable, but I’m still tinkering with it a year later.
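The "redeploy from git" loop amounts to a couple of commands, assuming a flake repo with a host named "homelab" (both names are mine, not the poster's):

```shell
# Pull the config repo and rebuild the running system from its flake.
# Printed rather than executed, since it needs a real NixOS host.
repo="$HOME/nix-config"
echo "would run: git -C $repo pull && sudo nixos-rebuild switch --flake $repo#homelab"
```

Because the whole machine is described in that repo, a reinstall is the same two commands on fresh hardware.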
I have seen Nix come up quite a bit and have been tempted to try it. I’ve rolled with Arch before, so I was considering going back to it, but maybe something new is the go.
> The OS itself I don’t back up outside of mirroring. I run an immutable OS (every reboot is like a fresh install) and can redeploy from git, so there’s no need to back it up. I have some persistent BTRFS volumes mounted where logs, caches, and state go; I don’t back those up either, but I swap the volume every boot and keep the last 30 days of volumes (or a minimum of 10) for debugging.
Something like this has always interested me. I remember reading about doing something similar with Windows: not so much the immutability as having a decent starting image that you load on any device you want, with all your programs ready to go.
> Runs the arrs, Jellyfin, a Monero node, a Tor entry node, WireGuard VPN (to get into the network remotely), I2P, Mullvad VPN (the default), Proton VPN (torrents with port forwarding use this), DNS (forced over VPN using DoT) with Pi-hole in front of that, and three of my WiFi VLANs route through either Mullvad, I2P, or Tor. I’ll use Tails for anything sensitive; the WiFi routes are just to get to I2P or onion sites where I’m not worried about my device possibly leaking identity.
Do you have a guide or ten you used for all this, perchance? Unraid has stuff like TRaSH Guides and Spaceinvader One. Especially the DNS part onwards? If not, it’s cool; I have Mullvad set up and Pi-hole with my current setup, so I’ll be able to work it out. This is all very compelling for me to try (I should really have learned WireGuard by now). Thanks a lot for such an interesting and informative write-up!
NixOS’s weakness is definitely its documentation. There are often configuration snippets you can copy and paste, though. If you go with NixOS, make sure to come back with questions; the community is very helpful.
Best resource I’ve found is searching GitHub.
My setup closely follows https://github.com/Misterio77/nix-config.
For servarr I just translated someone else’s docker compose setup to Nix. There are some ready-made Nix ones you can look at, like https://github.com/rasmus-kirk/nixarr/tree/main/nixarr.
The complex networking I just picked up over time once I knew my way around a little bit.
GitHub is your best resource.
Search with lang:nix plus your search terms.
For the networking, I found some repos using Nix and Gluetun (OCI containers). I don’t see them in my bookmarks, so it was probably a day project when I set it up and I didn’t keep the references.
That part is still in docker / podman. So any docker network guide just needs to be translated to nix.
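In plain docker terms, the Gluetun pattern is: run a gluetun container holding the VPN tunnel, then attach other containers to its network namespace so their traffic can only leave through the tunnel. A sketch (the env values are placeholders, and qbittorrent is just an example app):

```shell
# gluetun holds the WireGuard tunnel; qbittorrent shares its network
# namespace, so it has no network path except through the VPN.
# Printed rather than executed; fill in the key before running.
vpn="docker run -d --name gluetun --cap-add NET_ADMIN \
  -e VPN_SERVICE_PROVIDER=mullvad -e WIREGUARD_PRIVATE_KEY=changeme \
  qmcgaw/gluetun"
app="docker run -d --name qbittorrent --network container:gluetun \
  lscr.io/linuxserver/qbittorrent"
echo "would run: $vpn"
echo "then: $app"
```

Translating this to Nix is mostly a matter of expressing the same image names, caps, and env vars in the OCI container options.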
You might want to read the recent blog post (linked at top) and discussion on Hacker News first.
I wouldn’t run NixOS in a container. With native Nix containers I’m pretty sure they share the store. For Docker, I’d use images built with Nix (the image doesn’t run Nix itself) or pull from Docker Hub.
I’m running a normal linux distro, with everything running in containers using docker compose files. No VMs, since they are overkill for my needs. I’m running stuff like the *arr stack, home assistant, smokeping, unifi controller, pihole etc. Setting it up is quite simple, and the distro can be whatever you prefer (I use Arch btw).
Arch btw
Like I needed even more temptation. Cheers for confirming my suspicion that rolling a Linux distro is likely the go. I don’t really need a VM; it’s just that HASSIO doesn’t (didn’t?) have passthrough without one.
What do you need to pass through? I’ve done USB passthrough in the past with Docker as well.
Ideally a Bluetooth dongle and maybe a speaker. Looks like an old problem from a quick Google. So there we go, don’t need VMs!
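For reference, docker-level passthrough of a dongle is one flag. The device path here is a placeholder (check dmesg for yours), and Bluetooth specifically may also want host D-Bus access depending on the integration:

```shell
# Hand a dongle that enumerates as /dev/ttyUSB0 to a Home Assistant
# container. Printed rather than executed; needs the real device.
cmd="docker run -d --name homeassistant --device /dev/ttyUSB0 \
  ghcr.io/home-assistant/home-assistant:stable"
echo "would run: $cmd"
```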
I never used Unraid, but I was thinking about it.
I went with TrueNAS for my NAS and Ubuntu Server for my application server instead. I use Dockge for my Docker web UI and I’m happy with that setup.
For entry-level homelab stuff I still think it’s great. I literally just smacked it onto an old HP server (now my cannibalised gaming builds) and it was good to go. However, I was pretty inexperienced then (hence why I think I may have borked something fundamental). Nowadays I’m more comfortable with getting under the hood, hence looking for alternatives. I’d definitely still suggest Unraid to some people, though.
I was tempted to do something like Ubuntu Server; I figured all my NAS stuff runs through Docker anyway. Cheers, I’ll check out Dockge.
Yeah, but as far as I know, Unraid doesn’t really do anything that, for example, TrueNAS Scale can’t do? And TrueNAS is free and really rock-solid.
So if someone doesn’t want to host an Ubuntu Server, I’d recommend checking out TrueNAS Scale and simply throwing some Docker containers at it.
> unraid doesn’t really do anything that for example TrueNAS Scale can’t do?
unRAID’s parity is completely different from ZFS (TrueNAS), and I’d argue unRAID is a better option for hosting a home media server. TrueNAS uses ZFS, so data on the drives is striped and they all need to spin up together. unRAID doesn’t stripe data, so only the relevant drive needs to spin up (plus parity if you’re writing). This also means that if you lose parity plus one drive, you only lose the data on that drive, whereas with ZFS if you lose parity plus one drive you lose EVERYTHING in that array. It’s also way easier to expand your array in unRAID: simply plug in any drive (as long as it’s smaller than parity) and it’ll just work. Expanding or adding vdevs in ZFS is not so simple and requires planning.
On top of all that, unRAID can do ZFS too now (although I wouldn’t recommend it for the main array, for the reasons stated above). So if anything the question should be “what can TrueNAS Scale do that unRAID can’t?”
I’d argue TrueNAS is better if you need top speeds and advanced features like bit-rot protection. But for a simple home media server, where things like idle power use and ease of use matter more, I think unRAID wins hands down.
Nextcloud borked my Unraid server. Took me forever to find the source of constant lockups. Apart from that, the Nextcloud container took up more of my time than any other part of my server, including the OS.
This was a couple years ago. Maybe things have changed.
My Unraid server is a dream otherwise. Rock solid and 30 containers running smoothly for years and years.
Just another data point.
I used Proxmox for a long time before Unraid, but that’s when getting deep into it was a hobby. Now I just want it to work.
I’ve had TrueNAS, and moved to Unraid in the past few months. The one constant has been that Nextcloud is a PITA. Even the legacy manual install blows. I dropped it and have been much happier ever since.
I have been using OpenMediaVault for years and years. It’s basically Debian with some configuration already done for you: a web GUI, quick access to shares and user controls, and a simple but ready Docker setup for your containers. Extremely lightweight.
I have Unraid on a test server, but I just can’t see the point of using it over OMV. RAID is not important to me; you have to make backups either way. Containers are containers, and a VM is not something I need.
For what it’s worth, I recently went down this rabbit hole and decided to stick with unRAID for the reasons laid out above: the non-striped parity model, easy array expansion, and lower idle power use.
I use MicroOS as a base, and my services are Docker stacks handled with Dockge. No problems so far.
TrueNAS Scale if you want something simple that just works, and Proxmox if you want to configure/customize stuff with a lot more power under the hood…
Imo, either choice is better than unraid.
Not sure what went wrong with Nextcloud, but it might be worth figuring that out first. I do remember having to look through a few guides. I set up MariaDB, Redis, and Collabora containers along with it, for database performance and to be able to edit docs in the browser.
I legitimately don’t understand the trendiness of Proxmox, given that VMs are overkill compared to containers. If you’re migrating from Unraid, you’re likely already using the Docker versions of all your arr services, so going and spinning up VMs feels like a step backwards.
You can either use the exact same containers and have systemd run them as raw services, or use something like Docker Compose or dozens of other tools to orchestrate them. I use k8s, but I can’t recommend it with a straight face after knocking VMs for being overkill (a very different kind of overkill, but still).
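The "systemd as raw services" option is a small unit file per container; the names and paths here are illustrative. This sketch writes the unit to the current directory; for real use it goes in /etc/systemd/system followed by systemctl enable --now sonarr:

```shell
# Wrap an already-created docker container in a systemd unit so it
# starts at boot and restarts on failure like any other service.
cat > sonarr.service <<'EOF'
[Unit]
Description=Sonarr container
Requires=docker.service
After=docker.service

[Service]
ExecStart=/usr/bin/docker start -a sonarr
ExecStop=/usr/bin/docker stop sonarr
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
```

With podman, an equivalent unit can be generated for you: podman generate systemd --name sonarr.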
You talk like there is no in-between for containers and VMs. You can use both.
I built my recommendation around the likelihood that this person is already using Docker, and therefore already has containers that would be extremely easy to run without Unraid. There would be less lift to reuse the same config files and volume mounts they already have.
Operationally, though, I would never run VMs and containers in the same orchestrated system. Look at what they are asking to do: why would you run Sonarr as a container and Radarr as a VM? Obviously they are going to end up doing just one or the other.
No, that would make no sense and is obviously not what I meant.
But you could separate the arr stack from things like Pi-hole with a VM. For example, you could pin one thread to that VM so you don’t bottleneck your DNS when running heavy loads on the rest of the system. That’s just one example of what can be done.
Just because you do not see a benefit, does not mean there is none.
Also, VMs are not “heavy”: thanks to the virtualization technology built into modern hardware, VMs are quite light on the system. Yes, they still have overhead, but it’s not like you’re giving up big percentages of your potential performance, depending on the setup.
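On Proxmox, the pinning itself is one command; VMID 101 is hypothetical, and --affinity needs a reasonably recent Proxmox (7.3+, if I remember right):

```shell
# Give the DNS VM a single core and pin it to physical core 0, so
# load elsewhere can't starve it. Printed rather than executed,
# since it needs a real Proxmox host and VMID.
cmd="qm set 101 --cores 1 --affinity 0"
echo "would run: $cmd"
```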
What he said. 👏
I agree with this, though I think a lot of people don’t differentiate between operating system containers like LXC provides and application containers like docker provides.
Sometimes you need a VM. They’re not overkill, just useful for different things.
Examples: running Windows, running macOS, passing hardware through for isolated use from the host (PCIe devices, USB, etc.), and Linux guests where you need a full kernel and full permissions (for example, to run Docker without the issues caused by nesting it inside a container).
VMs don’t really have much more overhead than a container in most use cases, either. For example, a VM with Debian installed uses about 30MB of RAM.
I was replying specifically in the context of the original question. Unraid’s tooling is already built around containers, so this person is probably using containerized versions of the arr services. It would be overkill to build VMs for those services, for exactly the reasons you listed: they don’t need Windows or macOS, they don’t need hardware passthrough, and they don’t need a full kernel.
That aside, you absolutely can run containers with a fully isolated view of the system and directly map hardware into them; cgroups absolutely allow for those use cases. You may not be using Docker anymore at that point, but Docker is more of a crutch for beginners who probably don’t need those things.
Real-world examples of this are COS and Bottlerocket, which are literally distributions of Linux where even core OS components run in separate containers via cgroups. COS runs on every GKE cluster in the world, and Bottlerocket on most EKS clusters.
The benefit of splitting services between VMs is the same as it always has been: I can break one service without breaking ALL of them. Containers are an improvement over native installs, but they do not solve this problem completely.
I can break one container without breaking all of them. I can run them in isolated container networks and even isolated cgroups if I want to. Docker hides a lot of the core reasons tools like jails, chroot, and eventually LXC were created, but containers absolutely can do the things you’re using VMs for, if you’re willing to learn how they work.
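The network isolation being described is just user-defined bridges; containers on separate bridges can't reach each other at all. A sketch with hypothetical names:

```shell
# Two bridges: the arr stack and DNS can't see each other's traffic.
# Printed rather than executed; needs a docker daemon for real use.
arr_net="arrnet"
dns_net="dnsnet"
echo "would run: docker network create $arr_net"
echo "would run: docker network create $dns_net"
echo "would run: docker run -d --name sonarr --network $arr_net lscr.io/linuxserver/sonarr"
echo "would run: docker run -d --name pihole --network $dns_net pihole/pihole"
```

Failure in one stack stays inside its own network and cgroup, which is the "break one without breaking all" property being argued for.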