If you want 10G performance, you need to get a 10G NIC. They are only $30-40 on eBay.
While you CAN bond a pair of 2.5GbE ports, and POTENTIALLY get 5Gbps of aggregate throughput, it will not be on a single session. i.e., you can’t download a single file at 5Gbps.
10G hardware is cheap.
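If you do want to play with bonding anyway, a rough netplan sketch looks like this (the interface names are placeholders, and your switch has to support LACP / 802.3ad):

```yaml
# /etc/netplan/01-bond.yaml - example only; interface names will differ
network:
  version: 2
  ethernets:
    enp2s0: {}
    enp3s0: {}
  bonds:
    bond0:
      interfaces: [enp2s0, enp3s0]
      parameters:
        mode: 802.3ad                  # LACP; the switch side must be configured to match
        transmit-hash-policy: layer3+4
        mii-monitor-interval: 100
      dhcp4: true
```

Even then, any single TCP stream still lands on one 2.5G link, which is why I’d just buy the 10G NIC.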
I use Technitium as the primary server, with a pair of backup servers running BIND9.
The backup servers do zone-transfers from the primary.
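On the BIND9 side, that is just a secondary zone pointed at the primary- roughly like this (the zone name and IP are placeholders, and you would normally lock transfers down with a TSIG key on the Technitium end):

```
// named.conf on a backup server - sketch only
zone "lab.example.com" {
    type secondary;                 // "type slave" on older BIND releases
    primaries { 192.168.1.10; };    // the Technitium primary ("masters" on older releases)
    file "/var/cache/bind/db.lab.example.com";
};
```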
I don’t think homelabs were ever the intended audience. There are MUCH more cost-effective, reliable, and performant options than their cases + expanders.
eBay.
Also, an i3 doesn’t really use less power than an i5. The -T models will use a lot less power, but you aren’t really going to notice a difference just by picking an i3.
TDP is the same between them too.
For reference, I have 3 micros: an i5-9500T, an i7-6700, and an i5-8500T. They all use pretty much the same 8-12 watts at idle.
Also, I generally avoid pre-6th-gen computers: DDR3, slower, less efficient. An i5-6500 is the oldest processor in my lab.
And- right now, $50 is the going price for M900s / OptiPlexes / etc. with an i5-6500T.
Although, you can get the i3 models for $30 or so.
Go pick up an OptiPlex micro on eBay. 6th-gen Intel, or newer.
This will cost you around $50-150 depending on which one you get.
Slap a couple of NVMes into it, and a 2.5" SSD.
Run your Docker containers here, including paperless-ngx.
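A stripped-down compose file for paperless-ngx looks roughly like this (paths, ports, and the timezone are placeholders- the paperless-ngx docs have the full example with a proper database):

```yaml
# docker-compose.yml - minimal sketch, not the full official example
services:
  broker:
    image: redis:7
    restart: unless-stopped

  paperless:
    image: ghcr.io/paperless-ngx/paperless-ngx:latest
    restart: unless-stopped
    depends_on:
      - broker
    ports:
      - "8000:8000"
    environment:
      PAPERLESS_REDIS: redis://broker:6379
      PAPERLESS_TIME_ZONE: America/Chicago   # placeholder
    volumes:
      - ./data:/usr/src/paperless/data
      - ./media:/usr/src/paperless/media
      - ./consume:/usr/src/paperless/consume
```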
3kW inverter/charger: $1k
Can pick up a 6kW inverter/charger for around $800. (Prob cheaper if you went with 48V too…)
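(The 48V savings is mostly about current: a 6kW inverter pulls roughly 250A from a 24V bank, but only around 125A at 48V, so the cabling, fuses, and bus bars can all be smaller and cheaper.)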
Been there, and done this project.
https://xtremeownage.com/2021/06/12/portable-2-4kwh-power-supply-ups/
Honestly, they are all extremely overpriced, IMO.
You might check out Unraid too…
I went from TrueNAS Core -> Unraid -> TrueNAS Scale -> and landed back on Unraid.
My reasons were documented here: https://xtremeownage.com/2021/11/10/unraid-vs-truenas-scale-2021/
Even if you do want to run CasaOS, or plain Linux- I’d still recommend putting Proxmox down as the base OS.
No… I have proper, tested backups.
I did put the disclaimer front and center! Ceph really needs a ton of hardware before it even starts to compare to normal storage solutions.
But, the damn reliability is outstanding.
https://static.xtremeownage.com/blog/2023/proxmox-building-a-ceph-cluster/
With around 10 enterprise NVMes in total, and 10G networking, I am pretty happy with the results.
It runs all of my VMs, Kubernetes, etc., and doesn’t bottleneck.
I use ESPHome for this.
Example- the exhaust fans in my bathroom are on an automatic timer that kicks off as soon as you leave the bathroom.
The timer itself runs on the Shelly which toggles the light/exhaust fan. So, even if Home Assistant has to reboot, or goes unavailable, the fan still gets turned off automatically.
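The ESPHome side is just an on-device automation- something like this minimal sketch, assuming a Shelly 1 flashed with ESPHome (the GPIO pin is the Shelly 1 relay, and the 15-minute delay is just an example):

```yaml
# Minimal ESPHome sketch for a Shelly 1 - pins/timing are examples
esphome:
  name: bathroom-exhaust-fan

esp8266:
  board: esp01_1m

wifi:
  ssid: !secret wifi_ssid
  password: !secret wifi_password

api:

switch:
  - platform: gpio
    pin: GPIO4              # relay output on a Shelly 1
    id: fan_relay
    name: "Bathroom Exhaust Fan"
    on_turn_on:
      # Timer runs on the device itself, so it fires even if Home Assistant is down
      - delay: 15min
      - switch.turn_off: fan_relay
```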
Depending on the use-case, absolutely.
For a small site, absolutely.
I have a few dozen externally exposed projects that I self-host, though, and a few of them are rather resource-intensive, which would add up pretty quickly in AWS.
In my case, keeping everything in an isolated DMZ vastly reduces the risk, and completely isolates internet-exposed applications from everything else.
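The core of the DMZ rules is simple enough to sketch with nftables (interface names are placeholders- the real setup is just a separate VLAN behind the firewall):

```
# nftables sketch - dmz0/lan0/wan0 are placeholder interface names
table inet filter {
  chain forward {
    type filter hook forward priority 0; policy drop;

    ct state established,related accept

    # LAN can reach the DMZ and the internet
    iifname "lan0" oifname { "dmz0", "wan0" } accept

    # DMZ hosts can reach the internet, but can never initiate into the LAN
    iifname "dmz0" oifname "wan0" accept
  }
}
```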
It’s all about having proper redundancy, and risk aversion.
And, of course, working backups, and a contingency plan when something bad happens.
But- then, how are my side-businesses supposed to make money?
Storefronts, and other externally-exposed services generally don’t work too well… when they aren’t exposed.
What are the chances of me successfully setting up a doorbell cam that is fully offline (no access to the Internet at all)?
100%.
What are the chances of me successfully setting up a doorbell cam that is fully offline (no access to the Internet at all)?
Depends on you.
No- CleanTalk.
Do have auto-updates. Don’t have Wordfence- just heard about it yesterday.
But, I do have another service that provides vulnerability scanning… and after checking my emails, it has been trying to notify me of the issue for about a month now… Suppose I need to actually check those.
I was eyeballing an MD3600 yesterday, for only $150.
Went back and forth on the idea of running it as an iSCSI SAN… but, remembered why I prefer ZFS and Ceph over HW RAID.