• 0 Posts
  • 12 Comments
Joined 11 months ago
Cake day: October 27th, 2023


  • Ok so, Storage Spaces isn’t the same as RAID. A mirrored Storage Spaces pool is not RAID 1. It’s very similar in that it’s a mirrored set, but it’s not the same thing. In Storage Spaces you can have a mirrored set with 3 drives, and you’ll actually be able to store about one and a half times one drive’s worth of data in that pool. That’s because in Storage Spaces it’s the DATA that is being duplicated, not the drives. So don’t confuse the concepts.

    Now, as for why it shows that size: because you configured it to. Storage Spaces completely decouples the pool size from the actually usable space. You can create a pool with a single 8 GB drive and still tell it it’s a 1 PB pool, and it’ll happily do that for you. You’ll still only be able to actually store 8 GB ofc, but the pool will report the 1 PB of maximum space (there’s a rough sketch of both points below).
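
    Not from the original comment, just a small Python model of the two points above: in a mirrored pool it’s the data that gets two copies, so three equal drives give roughly 1.5 drives’ worth of usable space, and with thin provisioning the reported pool size is whatever you configured, independent of physical capacity. All numbers here are illustrative.

```python
# Toy model of a mirrored, thin-provisioned Storage Spaces pool.
# Illustrative only; sizes are made up.

TB = 1_000_000_000_000

def mirror_usable(drive_sizes, copies=2):
    """Two-way mirror duplicates data, so usable space is raw space / copies."""
    return sum(drive_sizes) // copies

# (1) three 4 TB drives in a two-way mirror -> ~6 TB of data fits (1.5 drives' worth)
drives = [4 * TB, 4 * TB, 4 * TB]
print(mirror_usable(drives) / TB, "TB usable")            # 6.0 TB

# (2) thin provisioning: the size the pool *reports* is whatever you configured
configured_size = 1000 * TB                               # you told it 1 PB
print("pool reports:", configured_size / TB, "TB")        # 1000.0 TB
print("actually storable right now:", mirror_usable(drives) / TB, "TB")
```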


  • Generally the enclosures are just that: enclosures that offer the connection. There are exceptions though where the enclosure does something more. Some enclosures do encryption, and some just use the same controller for their single-drive and multi-drive models, so your one drive is actually set up as a 1-drive RAID array, in which case your data may be slightly shifted to accommodate the headers for that. You can then still recreate everything, but it’s a pain.

    But as I said, generally they’re just providing the drive as-is, in which case there won’t be any issue (the sketch below shows one rough way to check).
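
    Not from the original comment: one rough, read-only way to check whether a drive pulled from an enclosure was passed through as-is. On a GPT disk the header signature b"EFI PART" sits at LBA 1 (byte 512 on 512-byte-sector drives, byte 4096 on 4Kn drives); if it only turns up further into the disk, the enclosure likely prepended its own RAID/metadata header. The device path is hypothetical and this needs root.

```python
# Scan the start of a raw disk for the GPT header signature to see whether
# the data starts where a bare drive normally would. Read-only sketch.

SIGNATURE = b"EFI PART"
EXPECTED_OFFSETS = (512, 4096)          # LBA 1 for 512-byte and 4Kn sector sizes

def find_gpt_offset(device, scan_bytes=8 * 1024 * 1024):
    with open(device, "rb") as disk:
        data = disk.read(scan_bytes)
    for offset in EXPECTED_OFFSETS:
        if data[offset:offset + len(SIGNATURE)] == SIGNATURE:
            return offset               # GPT right where it should be
    pos = data.find(SIGNATURE)
    return pos if pos != -1 else None   # shifted, or no GPT in the scanned range

if __name__ == "__main__":
    offset = find_gpt_offset("/dev/sdX")    # hypothetical device path
    if offset in EXPECTED_OFFSETS:
        print("GPT at the expected offset; drive looks passed through as-is")
    elif offset is None:
        print("no GPT signature in the first 8 MiB (MBR-only disk, or heavily shifted)")
    else:
        print(f"GPT signature at byte {offset}; data is likely shifted by enclosure metadata")
```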


  • It really depends. For like a desktop, I’d avoid it unless it was really cheap, as it basically nullifies the value of all the non-standard parts, and I’d include things like the CPU in that if the motherboard is non-standard. So the value basically comes down to only the drives and such.

    For a server though, non-standard is the norm, and here vendors even do stuff like vendor locking instead, which IMO is a way bigger issue, especially since nobody actually tells you beforehand whether a given machine does it; you only find out by testing.


  • You’re using experimental drivers and force unmounting… And you actually have the gall to then try to pin the blame for the resulting errors on NTFS? Just no.

    NTFS does have many issues, which is why MS is developing ReFS to replace it. But stability or corruption isn’t one of them; NTFS is extremely solid in that regard thanks to its journaling.

    NTFS drivers in Linux are however very buggy and generally considered experimental; the usual advice is to not write to NTFS drives from Linux if there’s any data on them you care about, as doing so could easily destroy all the data there.

    If you need a common writable data area, then use exFAT, not NTFS (a quick mount-check sketch is below).
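
    Not from the original comment: a small Linux-only sketch of the “check before you write” idea. It just looks up what filesystem a path is mounted as (via /proc/mounts) and warns if it’s NTFS. The target path and the list of “risky” types are assumptions for illustration.

```python
# Warn before writing to an NTFS mount from Linux. Sketch only.
import os

RISKY_FS = {"ntfs", "ntfs3", "fuseblk"}   # fuseblk is how ntfs-3g mounts usually appear

def filesystem_of(path):
    """Return the fs type of the longest mount point containing `path`."""
    path = os.path.realpath(path)
    best_mount, best_fs = "", "unknown"
    with open("/proc/mounts") as mounts:
        for line in mounts:
            _dev, mount_point, fs_type, *_ = line.split()
            prefix = mount_point.rstrip("/") + "/"
            if (path == mount_point or path.startswith(prefix)) and len(mount_point) > len(best_mount):
                best_mount, best_fs = mount_point, fs_type
    return best_fs

target = "/mnt/shared/file.bin"            # hypothetical path
fs = filesystem_of(target)
if fs in RISKY_FS:
    print(f"{target} is on an {fs} mount; writing from Linux is risky")
else:
    print(f"{target} is on {fs}; fine")
```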


  • I have 7 dual-CPU servers, so I might be a bit biased in this regard. But “worthwhile” is entirely subjective, and “robust” is also a weird word choice since there are multiple conflicting interpretations of it.

    For worthwhile… Well, as I said, it’s subjective, but cost efficiency is very rarely the driving factor for homelabs.

    For robust, do you mean robust in the sense of more powerful? Then ofc a dual-socket server will be more robust, but then you’re back to the worthwhile question. If you mean robust in terms of stability, then absolutely not. Multi-socket servers are less stable than single-socket ones. Not unstable by any stretch, but not AS stable. Every additional component you add will always add complexity and, most importantly, additional points of possible failure, while at the same time the system can’t survive if one CPU dies. Hence the stability of the system is lower the more CPU sockets you have.

    That’s why dual and quad socket are so popular even though 8-socket and larger systems actually exist and are denser, which matters in datacenters. Past quad socket you start getting enough system stability issues that it’s usually better to sacrifice some density and go for more servers instead, and blade centers are usually not THAT much lower density anyway.
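
    Not from the original comment: back-of-the-envelope numbers for the “more sockets, more points of failure” argument. Assume each CPU (with its socket and VRMs) independently has some annual failure probability and that the box is down if any one of them dies; the 1% figure is made up purely for illustration.

```python
# Chance of at least one CPU failure per year as socket count grows.
def p_system_failure(p_per_cpu, sockets):
    return 1 - (1 - p_per_cpu) ** sockets

p = 0.01   # assumed 1% annual failure rate per CPU, illustrative only
for sockets in (1, 2, 4, 8):
    print(f"{sockets} socket(s): {p_system_failure(p, sockets):.2%} chance of an outage per year")
# 1: 1.00%   2: 1.99%   4: 3.94%   8: 7.73%
```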




  • If stability is what you’re after (both in terms of versioning and in the sense of as few unscheduled reboots as possible), then neither is a good option. Both update quite often, go with an “introduce features now, worry about stability later” approach, and end up having to constantly patch a bunch of stuff.

    If you’re comfortable with a CLI, then I’d recommend VyOS, and going with the stable branch. It’s had 3 service patches since 1.3.0 was released in 2021, the last being in June, and before that you have to go back to September last year. Ofc, the downside is that you’ll miss out on a lot of features. Like, I don’t think stable has WireGuard support yet, and I’m not certain it will be ready by the time 1.4 goes stable either (it’s currently in 1.4 rolling). You could implement some of it yourself because it’s built on Debian, but anything you add like that is tied to your current image, so if you upgrade you have to do it all again, which is why I don’t recommend it.

    Point is, if you need features, don’t; but if it’s maximum stability you’re after, I can highly recommend at least having a look. Though I always recommend getting a proper router over any router OS on amd64: you’ll get more out of it, cheaper, with less power consumption and lower latency.


  • As in average? 1491W as a 30-day average according to the power meter. Fully loading everything is around 5kW iirc, though that doesn’t really happen. The highest in the last 30 days is a 3774W peak, and I think that’s from when I accidentally shut down the UPS, so everything was booting at the same time afterwards. I don’t think I ever go over 3kW in normal circumstances.

    I’m using 5 storage servers, 2 of which are Storinators and 3 are Supermicros, and then two compute nodes, which are ProLiant DL380s: a Gen10 and a Gen11 that I just bought last week. Plus ofc some network gear, which isn’t really anything too fancy; it’s just two routers, and while they do do PoE, I don’t use it, so they’re not really high power or anything.
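
    Not from the original comment: just turning that 1491W 30-day average into energy and a rough monthly cost. The price per kWh is an assumption; plug in your own.

```python
# 30-day average power draw -> energy and cost.
average_watts = 1491
hours = 30 * 24
price_per_kwh = 0.30        # assumed price per kWh, purely illustrative

energy_kwh = average_watts * hours / 1000          # ~1074 kWh over 30 days
print(f"{energy_kwh:.0f} kWh per 30 days, roughly {energy_kwh * price_per_kwh:.0f} at {price_per_kwh}/kWh")
```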


  • I don’t back up a lot. The important stuff that has an actual backup has 2 copies off site: one at my work, one at my wife’s work, just on regular LTO tape. The vast majority of what I store is not something I consider important enough to back up, as it’s stuff like DVDs and Blu-rays I’ve ripped and still have, and thus could get out of storage and rip again should the need arise.

    I do however do off-site replication: a 3-replica Ceph pool where one copy has to be off site. There’s again one node at my work and one at my wife’s work, plus ofc the local cluster, which, due to drive constraints in the work nodes, always holds two of the replicas. I might add a second node at my work, but while we both get to place the servers there, I still pay for the power, and my wife pays for power and the connection. With power prices these days, it gets kind of expensive.


  • Don’t need to. I use Ceph with replication, so the drive can die after 5 minutes and it wouldn’t matter at all to the array. I just put a sticker on it that tells me when it was bought so I can RMA it if it dies quickly. Beyond that, I can basically lose two thirds of my drives and still be perfectly fine. To actually lose data I’d have to be unlucky and have 3 drives fail in three different servers, one of which is off site, and those failures would have to happen within a couple of hours of each other, or the cluster will have time to create new replicas. Even then, once the second drive fails only a small subset of the data gets priority, since it only needs to re-replicate the data that exists solely on those two drives, which isn’t going to be more than a couple of hundred megs really (there’s a toy sketch of this below).

    Basically, I love Ceph if that doesn’t show ;P
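
    Not from the original comment: a toy model of the failure-tolerance claims above. Three replicas per placement group, two on different local servers and one on an off-site node, roughly matching the setup described; host names, drive counts and PG count are made up.

```python
# Toy 3-replica placement: lose lots of drives in some failure domains and
# survive, or lose exactly the wrong 3 drives across three servers and don't.
import random

random.seed(1)

LOCAL_HOSTS = {"local-a": 8, "local-b": 8}       # host -> number of drives
OFFSITE_HOSTS = {"work": 4, "wifes-work": 4}

def drives(hosts):
    return [(host, n) for host, count in hosts.items() for n in range(count)]

def place_pgs(n_pgs=512):
    """Each PG gets one drive on each local host plus one off-site drive."""
    local_a = [d for d in drives(LOCAL_HOSTS) if d[0] == "local-a"]
    local_b = [d for d in drives(LOCAL_HOSTS) if d[0] == "local-b"]
    offsite = drives(OFFSITE_HOSTS)
    return [(random.choice(local_a), random.choice(local_b), random.choice(offsite))
            for _ in range(n_pgs)]

def data_lost(pgs, failed_drives):
    """A PG is lost only if ALL THREE of its replicas sit on failed drives."""
    failed = set(failed_drives)
    return any(set(replicas) <= failed for replicas in pgs)

pgs = place_pgs()

# Losing two thirds of all drives (everything except one local server) is fine:
# every PG still has its replica on the surviving server.
failed = [d for d in drives(LOCAL_HOSTS) + drives(OFFSITE_HOSTS) if d[0] != "local-b"]
print("16 of 24 drives dead, data lost:", data_lost(pgs, failed))          # False

# But three unlucky drives, one per server holding the same PG, do lose data
# (unless recovery has time to re-replicate in between the failures).
print("3 specific drives dead, data lost:", data_lost(pgs, list(pgs[0])))  # True
```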