Is it bad to keep my host machines on for, like, 3 months with no downtime?

What is the recommended practice? What do you do?

  • R_X_R@alien.topB · 8 months ago

    Prod environments typically don’t have downtime, save for quarterly patching that requires a host reboot.
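On Debian-family hosts, the package system leaves a flag file when an installed update (usually a kernel or core library) needs a reboot to take effect. A minimal sketch for checking it before a quarterly maintenance window (assumes Debian/Ubuntu; the flag file path is distro-specific):

```shell
#!/bin/sh
# Debian/Ubuntu drop this flag file when a newly installed kernel or
# core library requires a reboot to take effect
if [ -f /var/run/reboot-required ]; then
    echo "reboot required"
else
    echo "no reboot needed"
fi
```

Red Hat-family systems have no such flag file; there you would compare `uname -r` against the newest installed kernel instead.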

  • horse-boy1@alien.topB · 8 months ago

    I had one Linux server that was up for over 500 days. It would have been up longer but I was organizing some cables and accidentally unplugged it.

    Where I worked as a developer, we had Sun Solaris workstations on our desks for dev work. I would just leave mine on, even during weekends and vacations; it also hosted our backup webserver, so we let it run 24/7. One day the sysadmin said, “You may want to reboot your machine, it’s been up over 600 days.” 😆 I guess he hadn’t had to reboot after patching all that time, and I never had any issues with it.

  • CameronDev@programming.dev · 8 months ago

    I boot my big server whenever I need it; everything else runs 24/7. I’ve had no catastrophic failures either way in the last 2-3 years, so it seems to be fine?

  • hauntedyew@alien.topB · 8 months ago

    Even though live kernel patching is a thing, I generally do a full reboot every month or two to apply the next big patch.

    Full shutdowns? Only if we’re upgrading them, dusting them, or doing other maintenance on them. That would be the only case besides a UPS failure or a power outage.
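For anyone curious how long a box has actually been up between those reboots, the Linux kernel exposes it directly. A minimal sketch (Linux-only; reads `/proc/uptime`, whose first field is seconds since boot):

```shell
#!/bin/sh
# /proc/uptime's first field is seconds since boot; convert to whole days
awk '{ printf "up %d days\n", $1 / 86400 }' /proc/uptime
```

The plain `uptime` command reports the same figure, but reading `/proc/uptime` gives you a number you can script against without parsing human-readable output.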

  • Busy_Tonight7591@alien.topB · 8 months ago

    Never! I have 2 mini PCs in separate locations running 24/7: one for downloading content and running a DNS server/dynamic DNS, the other for a point-to-point VPN to access multiple NVRs that are blocked from the WAN itself. Luckily they both sip power!

  • persiusone@alien.topB · 8 months ago

    Mine are running all of the time, including during power outages, and are only shut down for physical maintenance and reboot for software maintenance.

    This is a little variable, though. Windows hosts tend to require more frequent software reboots in my experience. About once a year, I physically open each device, inspect it, clean out dust (though it’s fairly rare to find any in my setup), perform upgrades, replace old storage devices, and such. Otherwise I leave them alone.

    I usually get about 5-7 years out of servers and 10 out of networking hardware, but sometimes a total failure still occurs unexpectedly and I just deal with it as needed.

  • CasualEveryday@alien.topB · 8 months ago

    I suppose it depends on what kind of hardware you’re using. I have enterprise-class servers that are meant to run 24/7, and they do. They’ll be technologically obsolete before they wear out.

  • Cynyr36@alien.topB · 8 months ago

    Whenever there’s a Proxmox kernel update. Every few years to dust them, or if I get new hardware.

  • thank_burdell@alien.topB · 8 months ago

    I had an old 486 Slackware 4.0 server on a big UPS that made it through several dorm/apartment moves without a shutdown. Something like 7 years of uptime when I finally retired it.