  • Traditionally, all incoming lines into a server room or wiring closet get punched down to a patch panel in the rack, and jumpers are then run from the panel to everything in the rack. You never put an end on a cable that comes out of a wall. The idea is to leave maybe 5 feet of extra cable in a loop behind the rack in case you need to reorganize the room in the future. It sucks pulling cable, so leave some extra.

    If there were multiple racks, then usually one of them was just for wiring and switches and the others were for servers. I also used different color cables for different things (like orange for links between switches and blue for servers), but for a home rack I wouldn’t bother with that. Different color zip ties on cables can be handy too.




  • If it were me and I intended to automate this, I would probably do the following: set up each test distro as a VirtualBox image and take a snapshot so I could easily roll back. Then I would write a script for each distro that downloads the package, installs it, and launches the app. I would then query the window system to make sure the GUI showed up, wait a period of time if needed, and take a screenshot.

    This can probably all be done as a set of bash scripts; a rough sketch of the host-side loop is below.
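
    A minimal sketch of that loop, assuming VirtualBox’s VBoxManage CLI with Guest Additions installed in each VM; the VM names, snapshot name, guest credentials, and script path are all placeholders:

    ```bash
    #!/usr/bin/env bash
    set -euo pipefail

    # Placeholder VM names: one VirtualBox VM per test distro,
    # each with a clean snapshot named "base".
    DISTROS=("debian12" "fedora40" "ubuntu2404")

    for vm in "${DISTROS[@]}"; do
        # Roll back to the clean snapshot and boot headless.
        VBoxManage snapshot "$vm" restore base
        VBoxManage startvm "$vm" --type headless
        sleep 60    # crude wait for the guest to finish booting

        # Run the per-distro install/launch script inside the guest
        # (requires Guest Additions; user/password are placeholders).
        VBoxManage guestcontrol "$vm" run \
            --username tester --password secret \
            --exe /bin/bash -- bash /home/tester/install-and-launch.sh

        # Give the GUI time to come up, then screenshot the VM display.
        sleep 30
        VBoxManage controlvm "$vm" screenshotpng "/tmp/${vm}.png"

        VBoxManage controlvm "$vm" poweroff
    done
    ```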


  • This happens literally all the time for me, both personally and professionally. I mostly see low-effort attempts across various ports, or things like sweeps of common username/password combinations against SSH or common management endpoints over HTTP.

    This is why it’s important to keep all publicly accessible servers and services updated and to follow standard security guidelines, like only using public-key auth for SSH (sketched below).

    At work we occasionally get hit in large bursts and have to ban IPs for a while to get them to go away.
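
    Both of those are quick to set up. A minimal sketch, assuming a Debian-style layout, iptables, and the at daemon; the IP address and timing are placeholders:

    ```bash
    # Key-only SSH: disable password logins, then reload the daemon.
    sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
    sudo systemctl reload sshd

    # Temporary ban: drop traffic from an abusive IP, lift it an hour later.
    BAD_IP="203.0.113.42"   # placeholder address (TEST-NET-3 range)
    sudo iptables -I INPUT -s "$BAD_IP" -j DROP
    echo "/usr/sbin/iptables -D INPUT -s $BAD_IP -j DROP" | sudo at now + 1 hour
    ```

    Something like fail2ban automates the ban/unban cycle and is less fragile than doing it by hand.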



  • If your plan is to switch to self-hosting, then I assume you have the ability to design and build the site. If that’s the case, then something like a Linode or DigitalOcean server would be perfect, and it would make the eventual transition easier.

    As for hardware, that depends on the expected amount of traffic and what your site serves. If you’re only getting something like 5 requests per second and your site is mostly text with some images, then a Raspberry Pi running Linux is more than enough. If you expect more traffic, or you’re also serving video alongside a lot of images, you will need more (one quick way to measure this is sketched after this comment).

    Honestly, I would personally never self-host a public website. Between things like DoS, random hacking attempts, and natural traffic spikes, there is a lot to consider and build out in terms of security and hardware. I would stick to the cloud, or at the very least use the cloud for now to judge your hardware needs for the future.
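
    One cheap way to judge those hardware needs is to replay your expected load against a test deployment and see where it falls over. A rough sketch using ApacheBench (`ab`, from the apache2-utils package); the URL and request counts are placeholders:

    ```bash
    # 1,000 requests, 10 concurrent, against a staging copy of the site.
    ab -n 1000 -c 10 https://staging.example.com/
    # Check "Requests per second" and the latency percentiles in the output.
    ```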