I mean, basically the title. Currently all my services run directly on my Arch server; it has been working well enough for me and I'm super comfortable working with it. A few months back the server had a minor crash that left the system non-functional. I was able to recover it to the point that my services could run, but I never got the graphical part of the server going again, or Nextcloud running.

At this point I'm considering wiping the OS and starting from a fresh install to get everything working correctly again. What I'm wondering is: is it worth learning Docker and deploying all my services that way, or should I just continue the way I've been doing it for years?

I will be running the various Arr apps, Emby, Nextcloud, qBittorrent, maybe Homepage, and probably a few others I can't recall off the top of my head. Some of the services are accessed off-site, if that matters at all. I briefly explored Docker in the past but got stuck, and a friend pushed me toward straight Arch. Now I'm considering giving it another shot, but I wanted to hear folks' input on the pros and cons of either way.

  • StewedAngelSkins@alien.topB
    2 points · 8 months ago

    Yeah, Docker is a pretty good option, worth trying out. Just don't randomly deploy community images from Docker Hub like a dumbass; tons of them were created by people who demonstrably don't know what the fuck they are doing. But if you stick to official images, and make your own when there aren't any official ones, you'll be fine.
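
    For example, a minimal compose file that sticks to official images. This is just a sketch: the volume paths, port, and passwords are placeholders, and in practice you'd pin version tags rather than track latest.

```yaml
# docker-compose.yml sketch using only official Docker Hub images
services:
  db:
    image: mariadb            # official MariaDB image
    restart: unless-stopped
    environment:
      MYSQL_ROOT_PASSWORD: changeme
      MYSQL_DATABASE: nextcloud
      MYSQL_USER: nextcloud
      MYSQL_PASSWORD: changeme
    volumes:
      - ./db:/var/lib/mysql
  app:
    image: nextcloud          # official Nextcloud image
    restart: unless-stopped
    depends_on:
      - db
    ports:
      - "8080:80"
    environment:
      MYSQL_HOST: db
      MYSQL_DATABASE: nextcloud
      MYSQL_USER: nextcloud
      MYSQL_PASSWORD: changeme
    volumes:
      - ./nextcloud:/var/www/html
```

    `docker compose up -d` brings both up; anything without an official image you can build yourself from a short Dockerfile.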

  • mlfh@lemmy.ml
    1 point · 8 months ago

    I'd personally recommend putting your provisioning steps for each service into Ansible playbooks. That way you can spin them all up from zero any time, and distribute them across different hosts, in VMs or LXC containers, any way you like.
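
    A sketch of what one of those playbooks can look like (the hostname, user, and the `qbittorrent-nox` package/unit names are assumptions about the target distro):

```yaml
# playbook.yml: provision one service, rerunnable from scratch
- hosts: mediaserver
  become: true
  tasks:
    - name: Install qBittorrent (headless)
      ansible.builtin.package:
        name: qbittorrent-nox
        state: present

    - name: Enable and start it for the media user
      ansible.builtin.systemd:
        name: qbittorrent-nox@media
        state: started
        enabled: true
```

    Because each task is idempotent, `ansible-playbook playbook.yml` can be rerun after a wipe, or pointed at a fresh VM or container, and converge to the same state.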

  • Fermiverse@feddit.de
    1 point · 8 months ago

    I never used containers, K8s, etc., and built my server entirely from scratch on Debian.

    This year I switched to a hypervisor and use the Proxmox-supplied LXC containers.

    Never again without them. The convenience of spinning a new one up, fiddling around without messing up the main system, and taking snapshots to clear up any mess makes self-hosting so much easier.

    No matter what software you use, I would say: containers.
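
    The snapshot workflow on Proxmox is just a couple of `pct` commands; here's a sketch jotted into a small cheat-sheet script for review (101 is a hypothetical container ID):

```shell
# Write the snapshot/rollback loop to a cheat-sheet script
printf '%s\n' \
  '#!/bin/sh' \
  'pct snapshot 101 pre-tinker     # snapshot before fiddling' \
  '# ...experiment inside the container...' \
  'pct rollback 101 pre-tinker    # undo the mess if needed' \
  'pct delsnapshot 101 pre-tinker # or drop it once happy' \
  > lxc-tinker.sh
cat lxc-tinker.sh
```
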

  • cr0wstuf@alien.topB
    1 point · 8 months ago

    While I really like Arch, I don't think it's a good distro to run apps on directly, due to its rolling-release nature. It's better to run apps directly on a distro like Debian or Alpine. Yes, running the apps in containers will bring more stability and reliability, but you're still risking botched updates to the host that could cause instability.

    That being said, your best immediate move would be to use Docker. Your best long-term move is to switch to a distribution that's not rolling release.

  • gioco_chess_al_cess@alien.topB
    1 point · 8 months ago

    Use Docker on Arch. It's perfectly fine for one server. The need for release-based distributions really only arises when you're managing many servers whose updates need to be unattended.

  • kevdogger@alien.topB
    1 point · 8 months ago

    I've run Arch for many years in many VMs acting as servers, and I've never had any more issues with Arch than with Ubuntu. With any system you choose, you need to keep backups or snapshots.

    • Do_TheEvolution@alien.topB
      1 point · 8 months ago

      Same here. It's been my go-to for years.

      Except I did hit an issue relatively recently, where the newest kernel had a regression with the virtual DVD device under the ESXi hypervisor, causing higher CPU load than usual.

      So I took the time and switched all my shit to the LTS kernel, which I should have used from the get-go.

      But other than that (which was easily solved by removing the DVD device or switching kernels), I've had zero issues, and I even had some deployments where I was updating an Arch install that was ~2 years old and it went smoothly…

      • kevdogger@alien.topB
        1 point · 8 months ago

        That's unfortunate. Most of the time I just use the LTS kernel. I too am just running servers accessed via SSH and a terminal.

  • JL_678@alien.topB
    1 point · 8 months ago

    I agree with the others but have an alternative take: how about installing a hypervisor like Proxmox? Then you get the flexibility of running Docker, LXC containers, or even VMs.

    Personally, I run a mix of LXC containers and Docker. Why? I really like Docker, but the all-inclusive nature of its containers can make customizing settings difficult.

    In contrast, LXCs are heavier than Docker containers, but they act like a full Linux machine, so you can use all of your past sysadmin knowledge and customize away. They're much lighter than a VM, so they're a nice middle ground.

    In summary: for simple, self-contained apps I use Docker, and for more complex apps I rely on LXC containers. With Proxmox you can easily use both, so it's the best of both worlds!

  • ddifdevsda@alien.topB
    1 point · 8 months ago

    Definitely worth the effort. If you want the services you run to be stable, that is :)

    BTW, using Arch on your server is probably not the best idea! The reason many people prefer release-based distros on their servers is that they're much less likely to hit a dependency conflict. Also, while I love Arch, it's just not for servers that need to stay stable and reliable.

    So, back to the question at hand: why Docker? It'll be WAY easier for you to control everything. Every image gets its own environment, with dependencies that don't interfere with other services' requirements. Updating your services also becomes much easier and needs much less attention; you won't risk breaking stuff that's already running on your server.
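
    The update routine ends up being short enough to keep in a script (a sketch; the compose project path is an assumption):

```shell
# Write the routine update cycle for a compose-managed stack to a script
printf '%s\n' \
  '#!/bin/sh' \
  'cd /srv/docker' \
  'docker compose pull    # fetch newer images' \
  'docker compose up -d   # recreate only containers whose image changed' \
  'docker image prune -f  # clean up superseded layers' \
  > update-stack.sh
cat update-stack.sh
```
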

    The downside is that you won't build as much understanding of your system and everything running on it. But that can be solved with a separate PC for tinkering, or a VM :)

    Good luck!

  • Alfagun74@alien.topB
    1 point · 8 months ago

    Definitely. I can't imagine the pain of non-isolated services fucking each other up on my root OS. Damn. You should check out CapRover.

  • ithilelda@alien.topB
    1 point · 8 months ago

    Short answer: yes. I think you've already come to the same conclusion and are just intimidated by a new technology stack you'd have to learn from scratch. Well, don't be! It isn't hard, and it's definitely worth the effort!

  • CameronDev@programming.dev
    1 point · 8 months ago

    Document. It doesn't matter what exactly you use, but document it. It will make recovery easier regardless of the underlying server/software.

  • Do_TheEvolution@alien.topB
    1 point · 8 months ago

    is it worth learning docker and deploying all my services that way or should I just continue with the way i have been doing it for years now?

    100% worth!

    It's a really amazing approach that eases so many things and makes you feel more in control and more willing to try stuff.

    This repository should be helpful.

    It has examples of a bunch of popular services running in Docker, and some other stuff like backups with Borg or Kopia.
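
    For the Borg side, a nightly job can be as small as this sketch (the repo path and source directory are placeholders, and the repo must already be initialized with `borg init`):

```shell
# Write a minimal nightly borg job to a script for review
printf '%s\n' \
  '#!/bin/sh' \
  'export BORG_REPO=/mnt/backup/borg' \
  'borg create --stats ::docker-{now:%Y-%m-%d} /srv/docker' \
  'borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6' \
  > nightly-borg.sh
cat nightly-borg.sh
```
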

    using Arch for my home server

    I too run Arch as my go-to Linux server; being a Docker host is usually its main job, sometimes also a WireGuard node or NUT server for a UPS.

    The reason is that it's a damn good OS and I'm most comfortable with it, since I run it on my main desktop. But another factor is that I usually run it as a virtual machine under some hypervisor (Hyper-V or ESXi), not straight on metal. So taking a snapshot of it is a matter of seconds, and reverting to that snapshot a minute… that's one aspect that lets me go with whatever Linux I damn well like, without much consideration for its reputation for stability.

    The repo I linked even has notes on a fresh Arch install, but since Arch started including the archinstall script on the ISO, I decided to just use that.

    I deploy Arch so often that I even have a few Ansible playbooks to set it up the way I like, which mostly means some basic services and packages, plus a workflow built around the nnn file manager and the micro text editor.

    I also recommend using the LTS kernel when installing Arch, just for that extra stability.
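
    Switching an existing install over is quick; a sketch assuming GRUB (systemd-boot users would add a boot entry instead), written to a script for review before running it as root:

```shell
# Write the LTS-kernel switch steps to a script (GRUB assumed)
printf '%s\n' \
  '#!/bin/sh' \
  'pacman -S --needed linux-lts linux-lts-headers  # LTS kernel + headers' \
  'grub-mkconfig -o /boot/grub/grub.cfg            # regenerate boot menu' \
  > switch-to-lts.sh
cat switch-to-lts.sh
```
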

    It also seems you were running Xorg, which I recommend abandoning. So many extra packages, so much more that can go wrong on an update compared to bare Arch with a terminal and SSH… but if it really eases your workflow, then fine.