Quote:
Originally Posted by baddog
Number 1 . . . or are you talking about a different name?
shows 3rd for me, after some place that makes weed-growing cabinets... then I clicked the link... HELLO 1999!

Seriously though, even if their backup server were a 96 TB RAID NAS with twin gigabit NICs pushing a combined 2,000 Mbit/s, it would still take 100+ hours just to transfer the data for 1000 servers: 96 TB at 2 Gbit/s is roughly 107 hours, and that assumes 100% throughput, which is hardly the case with encrypted NAS data...
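Quick sanity check on that figure (just a back-of-the-envelope sketch using the numbers above; the 96 TB and 2 Gbit/s are assumptions from this post, not anything confirmed by the host):

    # rough transfer-time estimate at full line rate
    total_data_tb = 96        # assumed size of the whole backup set, in terabytes
    link_gbps = 2.0           # twin gigabit NICs, ~2 Gbit/s aggregate at best

    total_bits = total_data_tb * 1e12 * 8        # terabytes -> bits
    seconds = total_bits / (link_gbps * 1e9)     # best case, 100% throughput
    print(f"{seconds / 3600:.0f} hours at line rate")   # ~107 hours

And that's before encryption overhead, restore verification, or the link ever dipping below line rate.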
Not to mention that even after the data is restored, there are configuration problems and setup issues to attend to, since the servers are not backed up bit for bit... It doesn't take knowledge of a tape drive to figure out why this is going to take a while. 4,000 man-hours is a very conservative estimate, and that's not counting ongoing support to fix small configuration issues.
Do some math: 4,000 man-hours at tech rates plus overtime and contracts, say an average of $25/hr (and that is conservative), comes to $100k. Add a month's worth of refunds (1000 servers at ~$450 apiece, $450,000) and lost business (say a very conservative 20%), and this is a pretty impossible situation financially, but despite that, it is being done. From what I understand, backups are actually being restored and support is being offered, so maybe just lighten up and let the bits transfer at whatever speed they may.
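Tallying it up (same caveat: the rate, fee, and churn are just the assumptions in this post, not the host's actual books):

    # rough cost tally from the figures quoted above
    labor_hours = 4000
    labor_rate = 25.0            # $/hr, conservative blended tech rate
    servers = 1000
    monthly_fee = 450.0          # ~$450 per server per month
    churn = 0.20                 # assumed 20% of customers walk

    labor = labor_hours * labor_rate              # $100,000
    refunds = servers * monthly_fee               # $450,000 for one month refunded
    lost_revenue = servers * monthly_fee * churn  # $90,000/month ongoing
    print(f"labor ${labor:,.0f}, refunds ${refunds:,.0f}, "
          f"lost revenue ${lost_revenue:,.0f}/month")

Call it well over half a million dollars in the first month alone, before the ongoing revenue hit.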
I agree that with a proper disaster plan this could have been much smoother, but either way you look at it... I can't think of another host that would come back from having every single one of their servers corrupted at once.