I’m pretty new to self-hosting, but one thing I know to take seriously is log collection. Since there are many different types of logs (kernel logs, application logs, etc.) and logs come in many different formats (binary, JSON, plain strings), it’s no easy task to collect them centrally and look through them whenever necessary.

I’ve looked at Grafana and tried the agent briefly, but it wasn’t as easy as I expected (and it might be too big a tool for my needs). So I thought I’d ask the linuxlemmy community for some inspiration.

  • markstos@lemmy.world
    1 year ago

    Ah yes, I’ve answered a number of support questions from people who have used this method and don’t understand why their app quit working.

    • Azzu
      1 year ago

I haven’t done the math, but I can count on one hand the times a service stopped working and I had to delve into the log files. The most difficult part is usually setting services up, and once that’s done, they tend to keep running indefinitely. I’m fairly sure that researching ways to “properly” handle log files, visualizing them, actually setting all of that up, etc., is a much bigger time investment than the few times something actually breaks and you’d need any of it.

      At least for personal stuff. For more critical services, of course it makes sense.

      • markstos@lemmy.world
        1 year ago

        On modern Linux servers, logs often go to the systemd journal by default, where they can be queried by service name. There are no extra steps, except to review the logs when something breaks.
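        As a sketch of what that querying looks like (the unit name `nginx.service` is just a placeholder; substitute whatever service you run):

        ```shell
        #!/bin/sh
        # Skip gracefully on systems without systemd.
        command -v journalctl >/dev/null 2>&1 || { echo "journalctl not available"; exit 0; }

        # Logs for one unit over the last hour
        journalctl -u nginx.service --since "-1 hour" --no-pager

        # Only error-priority messages (and worse) from the current boot
        journalctl -p err -b --no-pager
        ```

        `journalctl -u <unit> -f` also follows a service’s log live, like `tail -f`, which is handy while reproducing a failure.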

        When I’m helping someone troubleshoot, that’s the first question I ask: what do the logs say?