On July 24th, 2010, the area was affected by the breach of the Hartwick Lake dam at Delhi, Iowa. This 83-year-old hydroelectric dam, located 25 miles upriver from the USSHC campus, suffered a massive failure when the berms around it gave way after continuous, excessive rainfall.
The water rushed toward Hopkinton and Monticello, Iowa. USSHC’s campus is located near Monticello.
The USSHC campus was completely unaffected, even though the surrounding communities suffered flood damage and were declared disaster areas. A “dam break at Delhi” was one of the “out of this world disaster(s)” that USSHC had a plan for. Knowing that the base elevation of the USSHC campus is significantly higher (by over 100 feet) than the Delhi, Iowa dam and the flooded lake, the USSHC staff knew immediately that we were not in danger. We were able to focus on helping friends and family affected by the Lake Delhi dam breach and flooding while simply verifying that all generation and UPS functionality was working properly. The flooding never affected USSHC directly, other than posing a potential threat to grid power, which we are prepared for (even over extended periods of time). Our unique design means that even if we are surrounded by catastrophic natural disasters, we can keep running as usual for as long as we need to.
In 2008, we witnessed the flooding of Cedar Rapids, Iowa (a metro corridor with an approximate population of 450,000). Several data rooms in the downtown area went offline due to a combination of factors, all rooted in poor planning.
The flood was forecast to rise to 28 feet above flood stage. Since the Cedar River runs directly through downtown, this was going to put a large part of the downtown area under water.
Alliant Energy took this threat seriously and began shutting down electrical service to the downtown area ahead of the rising flood water to protect its distribution system. Yet even with a 28-foot crest forecast, not everyone was moving their servers.
Once the water flooded the basements and started to submerge telco rooms, fiber rings began to fail. As enough nodes went offline, connectivity started to be lost. Basement-located electrical switchgear started to fail, rendering standby generation useless. Then the first floors started to go under water. At that point, some people were still hanging on, trying to ride it out, with the only access in many cases via the 2nd- and 3rd-story downtown “skywalk” system. Stories were published of datacenter staff operating portable gasoline-powered generators in the aisles of data rooms and hauling 5-gallon cans of gasoline past the servers. Eventually people had to flee and leave the servers behind, unable to access them for days while they waited for the flood waters to recede.
We started getting phone calls from people whose servers were offline, sitting in flooded buildings they couldn’t reach. They wanted to know if they could bring their servers to us, but didn’t know when they would even regain access to them. Every one of those calls came after their servers had gone offline, and every caller expressed disbelief that a flood could have impacted them, because they were on upper floors of downtown buildings.
If you don’t plan for contingencies, you’re planning for disaster.