The following is a guest blog by Alex Carroll, co-owner and managing member of Lifeline Data Centers in Indianapolis.
Anyone in the data center industry—or in business, for that matter—understands the importance of uptime. Recent statistics put the average cost of a data center outage at $8,851 per minute of downtime, a compelling reason to minimize the incidents that cause it.
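Because the cost scales linearly with the length of the outage, it is easy to see how quickly losses accumulate. The sketch below is simple back-of-the-envelope arithmetic using the survey average quoted above (the per-minute rate is the only input; everything else is illustrative):

```python
COST_PER_MINUTE = 8_851  # average cost of a data center outage, USD per minute (survey figure)

def outage_cost(minutes):
    """Estimated cost of an outage of the given length, in USD."""
    return COST_PER_MINUTE * minutes

# A one-hour outage at the average rate comes to over half a million dollars:
print(f"${outage_cost(60):,}")  # $531,060
```

Actual losses vary widely by industry and workload, but even short interruptions add up fast at this rate.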
While there’s already pressure on IT professionals and data center managers to maintain a high rate of uptime, the demand will be even more intense in the 2020s. Experts project that the expectation will be 100% uptime, as internet connectivity, especially with the emergence of the Internet of Things (IoT), becomes essential to everyday living.
“For data centers, the idea that you need to be perfect will not be far from the truth,” futurist Michael Rogers said during a Dell World presentation. “Every decision you make needs to head to that point on the horizon.”
In the future, losing an internet connection will be as disruptive as losing electrical power, he added. “We will be asking data centers to provide the type of reliability power plants provide, only more so,” he said.
Unfortunately, data center operations of all sizes are not there yet. According to an AFCOM survey, 81% of respondents reported a data center failure in the previous five years, and about 20% had experienced five or more failures.
Did your data center report a failure in the last five years?
Assessing data center uptime
Initiatives data centers are exploring to increase uptime include building infrastructures that earn higher reliability ratings from the Uptime Institute; predictive support, which anticipates failures before they occur; and minimizing human error, which has been blamed for as much as 75% of data center outages.
The Uptime Institute, for example, certifies data centers across four tiers, Tier I through Tier IV. Under the classification system, the uptime rating is determined by infrastructure, uninterruptible power supply (UPS), power and cooling equipment, engine generators, and other components that affect uptime. Even a rating as high as 99.9% still permits nearly nine hours of downtime a year, which can translate into significant losses.
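The downtime each uptime percentage permits over a year is straightforward arithmetic (the figures below are illustrative math, not official tier definitions):

```python
# Annual downtime implied by a given uptime percentage.
# Simple arithmetic illustration; not an Uptime Institute tier definition.

HOURS_PER_YEAR = 365 * 24  # 8,760 hours in a non-leap year

def allowed_downtime_hours(uptime_percent):
    """Hours of downtime per year implied by an uptime percentage."""
    return HOURS_PER_YEAR * (1 - uptime_percent / 100)

for uptime in (99.0, 99.9, 99.99, 99.999):
    print(f"{uptime}% uptime -> {allowed_downtime_hours(uptime):.2f} hours of downtime/year")
```

At 99% the allowance is roughly 87.6 hours a year; at 99.9% it drops to about 8.76 hours; and at "five nines" (99.999%) it is just over five minutes, which is why that figure is the common shorthand for near-perfect availability.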
Also, training employees to avoid the kinds of errors that contribute to downtime should be a top priority for your data center. Understanding why and how downtime happens is critical to combating it.
What you should know
Downtime in any business is no joke and can create serious problems, from lost productivity to lost revenue. If you’re experiencing downtime on even a semi-regular basis, it’s time to outsource your data center needs or find a new data center.
At Lifeline Data Centers, we developed custom processes (and trademarked them) because they worked so well:
- Redundant Array of Generators™
- Redundant Array of UPS’s™
- Redundant Array of Chiller Plants™
- Most Direct Power Path™
These custom processes have contributed to our 99.999% uptime. Our largest data center, where we have been able to deploy our full set of technologies, has not experienced an outage since it opened, going on eight years.
Alex Carroll, Managing Member at Lifeline Data Centers
Alex, co-owner, is responsible for all real estate, construction and mission critical facilities: hardened buildings, power systems, cooling systems, fire suppression and environmentals. Alex also manages relationships with the telecommunications providers and has an extensive background in IT infrastructure support, database administration and software design and development. Alex headed the team that developed Lifeline’s proprietary, award-winning equipment maintenance methodology. He is also hands-on every day in the data center.