Hi y’all. I’ve got an Intel NUC 10 here. I want to run a few apps on it, like Bitwarden, Pi-hole, Nextcloud, WireGuard, and maybe more, just for my own use, inside my home.

Is there a way to gauge whether the hardware is up to the task in advance? Like, I’d love to be able to plan this by saying, “this container will use x MB of RAM and 5% of the CPU,” and so on.

I want to run everything on this one PC since that’s all I have right now.

EDITED TO ADD: Thank you all! Great info. 👍

  • subtext · 11 months ago

    I don’t have an answer for you, but I will tell you from my experience, you can probably run a lot more on that thing than you might think.

    I run all of my services in Docker, and I think I have 30+ services up at all times. What you should remember is that even under your most demanding workload, you’re probably only hitting like 5 services at a time while the rest sit idle. And if you’re picking good, efficient apps (I really like the linuxserver.io ones), they’re not pulling much under load and certainly not while idling.

    Your NUC sounds much more capable than my Beelink, and mine doesn’t break a sweat. The other commenter had it right: just keep adding stuff until you see performance degrade. I’ve yet to hit that point.

    • Hizeh@hizeh.com · 11 months ago

      I agree. Run everything you want, and when you see performance degradation, you’ll know the limits of your hardware for your workloads.

      You already have the NUC, so why not push its limits? The alternative is to try to guesstimate your workload needs and buy matching hardware… which is very difficult.

    • Maxy@lemmy.blahaj.zone · 11 months ago

      To add to this with another example: my server runs

      • jellyfin
      • Nextcloud
      • gitea
      • Monica (a CRM, look it up on awesome-selfhosted)
      • Vaultwarden (Rust implementation of Bitwarden)
      • code-server
      • qBitTorrent-nox
      • authelia (2FA)
      • pihole
      • smbd
      • sshd
      • Caddy

      In total, I’m using about 1.5GB out of 6GB of RAM (with another 1GB out of 16GB of swap in use), and idle CPU usage is only around 1% (i5-3470 with the BIOS set to power saving).

      Even on very old and low-powered hardware, you can still run a lot of services without any problems.
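
      If you want to check the same numbers on your own box, the standard tools are enough; a quick sketch (nothing setup-specific assumed):

      ```bash
      free -h          # RAM and swap usage, human-readable
      uptime           # 1/5/15-minute load averages
      top -bn1 | head  # one-shot snapshot of overall CPU usage and top processes
      ```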

  • ᓰᕵᕵᓍ@lemmy.world · 11 months ago

    CPU-wise: monitor the load average as you add services. If it stays below the number of cores, you’re fine (a quick way to check this is sketched at the end of this comment). That said, the NUC 10 has a 6-core CPU, so it’s more than OK for a bare-bones setup. For reference, I’m running smoothly on a Raspberry Pi 400 with 4GB of RAM:

    • Vaultwarden
    • nginx WebDAV
    • PhotoPrism
    • LibrePhotos
    • OwnTracks
    • Traccar
    • Monocker
    • Brave go-sync
    • Mozilla sync
    • Wallabag
    • Radicale
    • Baikal
    • Ncfpm
    • WireGuard
    • Jellyfin
    • RSStT
    • Joplin webview

    Just fine

    So you’ll be fine
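
    As promised above, a minimal sketch of the load-average check on any Linux host (plain shell, nothing extra assumed):

    ```bash
    # Compare the 1/5/15-minute load averages against the core count.
    cores=$(nproc)
    read one five fifteen _ < /proc/loadavg
    echo "cores: $cores | load: $one $five $fifteen"
    # Rule of thumb: as long as the load numbers stay below the core
    # count, the CPU is keeping up.
    ```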

  • veloxy@lemmy.world · 11 months ago

    For comparison, I’m easily running about a hundred containers on a 9-year-old laptop (i7-4700HQ with 16GB of RAM), and I’m sure it could handle many more.

    • TheDarkBanana87@lemmy.world · 11 months ago

      Is it running 24/7?

      I’m currently thinking of using my old laptop for this, but I’m scared it may get hot.

      Although I’ve only got a Core i5 something and 4GB of RAM (ASUS K46CB).

      • veloxy@lemmy.world · 11 months ago

        It is running 24/7, yes. Temps are stable and only really go up when I’m home and actively doing things that would push them up (like watching Jellyfin). It runs with the lid closed and the screen off.

        You can always use one of those laptop stands with coolers underneath, or even without coolers; just having it lifted may improve airflow. I did monitor the temps the first few days, but it really doesn’t seem to be an issue: the CPU temperature at the moment is around 50°C, and the GPU is disabled since it’s old and can’t even be used to transcode anything.

        You can always just use your laptop to try it out, see where it goes and then decide to spend money on something better and more suited to your needs.

  • Mythnubb · 11 months ago

    I just slowly add more services and watch my RAM and CPU.

    For example, my setup is an older laptop for processing plus a NAS for storage. The laptop has a 5th-gen i5 with 8GB of RAM, running Linux. It’s currently running 19 containers.

    Just monitor it and play around. You’ll get a feel of what your equipment can handle.

    • perishthethoughtOP · 11 months ago

      Ok thanks. I’ve seen other posts here concerning how to monitor services so I’ll check those out next.

  • Still@programming.dev · 11 months ago

    RAM is really the limiting factor for most servers.

    If you’re going to have fewer than 5 users on the services, they probably won’t all be used at the same time, so CPU usage will depend on which ones are being hit at the moment.

    None of the services you’ve listed is particularly heavy, so you’ll be good for those and a bunch more, no problem.

  • rambos@lemmy.world · 11 months ago

    I should add more RAM soon, because I’m running 30 services on 8GB at the moment and it looks like I’m about to hit the wall. Services I currently run include Pi-hole, Nextcloud, a WireGuard server, the arr stack, Jellyfin, Home Assistant, and more.

  • drdaeman@lemmy.zhukov.al · 11 months ago

    It’s very hard to say anything definitive, because many of those can generate very different load depending on how much traffic/activity they get (and how that correlates with other service usage at the same time). It could be anything from a minimal load (all services for personal use, so a single user and low traffic) to a very busy system (a family-and-friends instance with high traffic), and hardware requirement estimates would change accordingly.

    As you already have a machine, just put them all on it and monitor resource utilization. If it fits, it fits; if it doesn’t, you’ll need to replace your NUC (if you’re CPU-bound; I believe CPUs aren’t upgradeable on those?) or upgrade it (if you’re RAM-bound). Either way, you won’t have to set everything up twice.

    • vegetaaaaaaa@lemmy.world · 11 months ago

      This is the only real answer: it is not possible to do proper capacity planning without trying the same workload on similar hardware [1].

      Some projects give an estimate of resource usage depending on a number of factors (simultaneous/total users…), but most don’t, and even the estimates may be far from actual usage during peak load, with many concurrent services, and so on.

      The only real answer is close monitoring of resource usage and response times (possibly with alerting), then adding resources or cutting down on resource-hungry features/programs when resource usage goes over a certain threshold (~80% is when you should start paying attention) and/or performance starts to degrade.

      My general advice is to max out installed RAM from the start, virtualize your hosts (which makes it easier to add/remove resources or migrate a hungry VM to more powerful hardware later), and watch out for disk I/O on certain workloads (databases… running DB engines off SSDs helps greatly).
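
      As a rough illustration of that ~80% rule, a cron-able alert script could look like the sketch below (the threshold, the mail command, and the address are just example choices; swap in whatever notification channel you use):

      ```bash
      #!/bin/sh
      # Alert when memory usage crosses a threshold (example value: 80%).
      THRESHOLD=80
      used=$(free | awk '/^Mem:/ {printf "%d", $3 / $2 * 100}')
      if [ "$used" -ge "$THRESHOLD" ]; then
          echo "Memory usage at ${used}%" | mail -s "RAM alert on $(hostname)" admin@example.com
      fi
      ```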

  • duncesplayed@lemmy.one · 11 months ago

    Bitwarden + Pi-hole + Nextcloud + WireGuard combined will add up to maybe 100MB of RAM or so.

    Where it gets tricky, especially with something like Nextcloud, is that the performance you see will depend tremendously on what kind of hard drives you have and how much of their contents can be cached by the OS. If you have 4GB of RAM, then roughly 3.5GB of that can be used as cache for Nextcloud (and whatever else you have that uses considerable storage). If your Nextcloud storage is tiny (3.5GB or less), the OS can keep all of it in cache and you’ll see lightning-fast performance. If your storage is larger (and you’re actually accessing a lot of different files), Nextcloud will have to touch disk, and if that disk is a mechanical (spinning rust) hard drive, you’ll definitely see the occasional 1-second lag when that happens.

    And then if you have something like Immich on top of that…

    And then if you have transmission on top of that…

    Anything that uses considerable filesystem space will be fighting over your OS’s filesystem cache, so it’s impossible to say how much RAM is enough: 512MB could be more than enough; 1TB could be not enough. It depends on how you use it and how tolerant you are of cache misses.
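
    If you’re curious how much of your RAM is currently working as filesystem cache, `free` shows it directly (the numbers below are made up for illustration):

    ```bash
    free -h
    #                total   used   free   shared  buff/cache   available
    # Mem:            16Gi  2.1Gi  1.3Gi    0.3Gi        12Gi        13Gi
    # "buff/cache" is the memory the kernel is using to cache file data;
    # it shrinks automatically whenever applications need the RAM.
    ```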

    Mostly you won’t have to think about CPU. Most things (like Nextcloud) will sit at <0.1% CPU. But there are some exceptions.

    Notably, WireGuard (or anything that requires encryption, like an HTTPS server) will have CPU usage that scales with throughput. WireGuard in particular has historically been a heavy CPU user once you get up to around 1Gbit/s. I don’t have any recent benchmarks, but if you expect to push WireGuard beyond 1Gbit/s, you may need to look at your CPU.
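
    If you want to measure that yourself, a common approach is to push iperf3 through the tunnel (this assumes a working WireGuard peer with iperf3 installed on both ends; the address is a placeholder):

    ```bash
    # On the far end of the tunnel:
    iperf3 -s

    # On this end, targeting the peer's WireGuard address:
    iperf3 -c 10.0.0.1 -t 30
    # Watch CPU usage (e.g. in top) on both ends while it runs; if a core
    # pegs at 100% before you reach line rate, the CPU is the bottleneck.
    ```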

  • Anonymouse@lemmy.world · 11 months ago

    This is tangential to your question, but I’ve been playing with Kubernetes and its ability to ration resources like CPU and RAM. I’m guessing that Docker has a similar facility. Doing this, I hope, will let Plex transcode videos in the background without affecting the responsiveness of a web app I’m using, and will kill and restart that one app I wrote that has a memory leak I can’t find.
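
    Docker does have an equivalent: per-container CPU and memory limits, plus restart policies. A sketch (the container name, image, and limit values are made-up examples):

    ```bash
    # Cap a container at 2 CPUs and 1 GiB of RAM. If it exceeds the memory
    # limit it gets OOM-killed, and the restart policy brings it back up.
    docker run -d \
      --name leaky-app \
      --cpus=2 \
      --memory=1g \
      --restart=unless-stopped \
      leaky-app-image
    ```

    Compose files expose the same knobs (e.g. `cpus` and `mem_limit` on a service definition).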

  • HeavyRaptor@lemmy.zip · 11 months ago

    It sounds like it could easily run these. You could probably get away with a newer Raspberry Pi for them, so the NUC should have no issues.

    For reference, the heaviest thing for me has been Home Assistant OS, which needs dedicated RAM and cores for its VM. I’ve had no issues running almost a dozen services alongside HA on a 4790K-based system, including Immich, Plex, Radarr/Sonarr/Prowlarr/etc., usually a dedicated game server for Valheim or Minecraft or something, and some other lighter services.

    I think RAM (16GB) is going to be the limiting factor in my case, but I haven’t hit that limit yet.

  • vividspecter · 11 months ago

    If you’ve already bought the device, there’s not much you can do other than try the apps out, monitor them with docker stats, and see if anything gets out of hand. It would be nice if resource usage were documented, but that’s hard because it’s a moving target: apps have bugs from time to time (including memory leaks) and can also behave differently on different devices (particularly containers that depend on the GPU).

    Anyway, containers are good for this since you can easily create them, stop them, remove them, etc. without leaving a mess, so if something ends up being too heavy, it’s no big deal.
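
    For reference, `docker stats` gives a live per-container view, and `--no-stream` turns it into a one-shot snapshot you can script against:

    ```bash
    docker stats --no-stream
    # CONTAINER ID   NAME   CPU %   MEM USAGE / LIMIT   MEM %   NET I/O   BLOCK I/O   PIDS
    # (one row per running container; omit --no-stream for a live view)
    ```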

  • Giddy@aussie.zone · 11 months ago

    What are the processor and memory specs?

    I have a 5-year-old NUC with a Celeron processor and 8GB of memory, and it doesn’t blink at whatever I throw at it. It’s currently running npm, Nextcloud (including the embedded PhoneTrack, Notes, Contacts and Bookmarks apps), Bitwarden, Calibre-Web, Kavita, Audiobookshelf, Airsonic, DokuWiki, FreshRSS, Transmission, Paperless, dash and n8n, as well as serving as the file server for my home network.

    EDIT - I also have a 4GB RPi 4 running WireGuard, Pi-hole, Borg and Time Machine.

  • Monkey With A Shell@lemmy.socdojo.com · 11 months ago

    Docker is very efficient in resource usage, since it doesn’t have to run the backing OS processes multiple times. The main thing to keep an eye on is the running average versus any spikes you see. If the spikes seem to be getting more frequent, it’s possible you’re running into resource starvation somewhere and things are fighting for their piece of the pie; that can turn into a cascading failure pretty suddenly if things go wrong. As a simple measure, on any kind of Linux system you should be able to see a ‘load’ metric as a nice overview; if the numbers there get higher than the number of CPU cores, things are probably about to go bad.

  • SleepyBear@lemmy.myspamtrap.com · 11 months ago

    To echo the other comment, you’ll probably run far more than expected.

    But you also need to think through the expected usage for each service. Is this Bitwarden for you, or for you and a thousand friends?

    Most services you run will scale up their hardware usage depending on how much load they’re being subjected to.

    E.g., I run Crafty Controller for my kids to manage their Minecraft servers. There’s a huge load difference for each additional server.

    WireGuard with no traffic uses barely any resources. Pump high amounts of traffic through it over a lot of simultaneous connections and that’ll change.