He/Him, They/Them

Working in IT for about 15 years. Been online in one way or another since the late ’90s.

I like games and anime, but I’m very picky about them.

Cats are the best people.

  • 0 Posts
  • 7 Comments
Joined 1 year ago
Cake day: June 14th, 2023



  • Possible, yes. Cost-effective with a valid business case? Probably not. Every extra 9 is diminishing returns: it’ll cost you exponentially more than the previous 9, while the money saved from avoided downtime keeps shrinking. Like you said, 32 seconds of downtime; how much money is that actually worth to the business?
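
    To put numbers on the diminishing returns, here’s a quick back-of-the-envelope sketch in Python (nothing assumed beyond the availability math itself):

    ```python
    # Allowed downtime per year for each extra "nine" of availability.
    SECONDS_PER_YEAR = 365.25 * 24 * 3600

    for nines in range(2, 7):
        downtime = SECONDS_PER_YEAR * 10 ** -nines
        print(f"{1 - 10 ** -nines:.6%} -> {downtime / 60:8.2f} min/year "
              f"({downtime:,.1f} s)")

    # Six nines (99.9999%) comes out to ~31.6 seconds a year --
    # roughly the 32 seconds mentioned above.
    ```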

    You’re pretty much looking at multiple geographically diverse Tier IV datacenters with N+2 or even N+3 redundancy all the way up and down the stack, while also building in diversity wherever possible, so that no single vendor of anything can take you offline.

    Even with all that, though, you’ll eventually get wrecked by DNS somewhere, somehow, because it’s always DNS.



  • I run Linux for everything. The nice thing is that everything is a file, so I use rsync to back up all my configs on the physical servers. I can do a clean install, run my setup script, rsync the config files back over, reboot, and everyone’s happy.
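
    Something like this, roughly (hostnames and paths here are placeholders, not my real layout):

    ```python
    #!/usr/bin/env python3
    """Pull configs from the physical servers with rsync (sketch)."""
    import subprocess

    SERVERS = ["web1", "nas1", "router1"]        # placeholder hostnames
    CONFIG_PATHS = ["/etc/", "/usr/local/etc/"]  # placeholder paths
    DEST = "/backup/configs"

    for host in SERVERS:
        for path in CONFIG_PATHS:
            # -a: archive mode (perms, times, symlinks), -z: compress,
            # -R (--relative): keep the full path under the destination.
            subprocess.run(
                ["rsync", "-azR", f"{host}:{path}", f"{DEST}/{host}/"],
                check=True,
            )
    ```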

    For the actual data I also rsync from my main server to the others. Each server has a schedule for when it gets rsynced to, so I end up with a history of about three weeks.
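
    The history is just dated snapshot directories hard-linked against the previous one, so three weeks of dailies only cost the changed data. Again a sketch, with placeholder paths:

    ```python
    #!/usr/bin/env python3
    """Daily rsync snapshot with ~3 weeks of history (sketch)."""
    import datetime
    import subprocess
    from pathlib import Path

    SRC = "mainserver:/data/"        # placeholder source
    DEST = Path("/backup/data")      # placeholder destination
    KEEP = 21                        # ~3 weeks of daily snapshots

    DEST.mkdir(parents=True, exist_ok=True)
    snapshots = sorted(p for p in DEST.iterdir() if p.is_dir())
    today = DEST / datetime.date.today().isoformat()

    cmd = ["rsync", "-a", "--delete"]
    if snapshots:
        # Hard-link unchanged files to the newest existing snapshot.
        cmd += ["--link-dest", str(snapshots[-1])]
    subprocess.run(cmd + [SRC, str(today)], check=True)

    # Prune snapshots beyond the retention window.
    for old in sorted(DEST.iterdir())[:-KEEP]:
        subprocess.run(["rm", "-rf", str(old)], check=True)
    ```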

    For virtual servers I just use the Proxmox built-in backup system, which works great.
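
    If you’d rather script it than use the scheduler in the web UI, the same built-in system is exposed as the vzdump CLI. A sketch (the storage name is a placeholder):

    ```python
    import subprocess

    # Snapshot-mode backup of all guests to a storage named "backups".
    subprocess.run(
        ["vzdump", "--all", "--mode", "snapshot",
         "--compress", "zstd", "--storage", "backups"],
        check=True,
    )
    ```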

    Very important files get encrypted and sent to the cloud as well, but out of dozens of TB that only accounts for a few gigs.
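
    That step is nothing fancy, roughly gpg symmetric encryption plus an rclone push (the remote and paths are placeholders):

    ```python
    import subprocess
    from pathlib import Path

    SRC = Path("/backup/important")   # placeholder local dir
    REMOTE = "cloud:backups"          # placeholder rclone remote

    for f in SRC.glob("*.tar"):
        enc = f.parent / (f.name + ".gpg")
        # Symmetric AES-256; gpg prompts for a passphrase unless you
        # feed it one via its batch options.
        subprocess.run(
            ["gpg", "--symmetric", "--cipher-algo", "AES256",
             "--output", str(enc), str(f)],
            check=True,
        )
        subprocess.run(["rclone", "copy", str(enc), REMOTE], check=True)
    ```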

    I’ve also never thrown out a disk or USB stick in my life; I use them for archiving. Even if a drive is half dead, as long as it’ll accept data I shove a copy of something on it, then label and document it. There are so many copies of everything that it can all be rebuilt if needed, even if half these drives end up not working. I keep most of them off-site. At some point I’ll have to physically destroy the oldest ones, like the few 13 GB IDE disks that just make no sense to bother with anymore.
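
    For the documenting part, a checksum manifest per drive makes it easy to tell later which copies on a half-dead disk are still good (sketch, with a placeholder mount point):

    ```python
    import hashlib
    from pathlib import Path

    MOUNT = Path("/mnt/archive-disk-042")   # placeholder mount point

    def sha256sum(path: Path) -> str:
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    # Manifest lives next to the data; keep a second copy off the disk.
    with (MOUNT / "MANIFEST.sha256").open("w") as out:
        for p in sorted(MOUNT.rglob("*")):
            if p.is_file() and p.name != "MANIFEST.sha256":
                out.write(f"{sha256sum(p)}  {p.relative_to(MOUNT)}\n")
    ```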


  • If you’re using memory for storage operations, especially something like the ZFS cache, then you ideally want ECC so memory errors are caught and corrected before they can corrupt your data; that’s the best practice.

    In the real world, unless you’re buying old servers off eBay that already have it installed, the economics don’t make sense for self-hosting. The issues are so rare, and you should have good backups anyway. I’ve never run into a problem from not using ECC: I’ve been self-hosting since 2010 and have some ZFS pools nearly that old, all running exclusively on consumer hardware (with the exception of HBAs and networking), and never with ECC.
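
    Without ECC, what actually matters is catching on-disk corruption early, and ZFS already handles that if you scrub regularly. A sketch of the check I’d automate (the pool name is a placeholder):

    ```python
    import subprocess

    POOLS = ["tank"]    # placeholder pool name

    for pool in POOLS:
        # A scrub makes ZFS re-verify every block against its checksum.
        subprocess.run(["zpool", "scrub", pool], check=True)

    # After the scrub finishes: -x only reports pools with problems.
    status = subprocess.run(
        ["zpool", "status", "-x"], capture_output=True, text=True, check=True
    ).stdout
    print(status.strip())   # "all pools are healthy" when nothing is wrong
    ```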