Data Centers With Patients

The medical community faces many fundamental challenges, not the least of which is an explosion of data.

By Kenneth W. Betz, Senior Editor

Data-center customers each have a private cage, custom built to meet their uptime and compliance needs, with deference given to the customer’s end-equipment vendor preferences. Photo: Lifeline Data Centers

A technician tracks operation of a facility’s power system with the ASCO PowerQuest power and monitoring control system. Photo: ASCO

Healthcare today is a data-driven business, comprising a complex system of providers, payers, suppliers, research organizations, and patients, all of whom must be digitally linked. Whether those data are housed on-site, co-located, or in the cloud is an important decision healthcare administrators must face moving forward.

It wasn’t so long ago that major healthcare organizations almost universally believed that information systems were best handled in house. Today, some hospital data centers are being co-located, and cloud-based platforms are being considered. So pronounced has been the change in philosophy that Forrester Research, Cambridge, MA, states, “by 2017 it will be unthinkable for a healthcare company to plan for a new data center.”

Certainly, existing healthcare data centers, representing millions of dollars in investment, will not disappear tomorrow, but ever-increasing data demands will make expanding them a challenge and will likely cause hospitals to have second thoughts about the investment in, and limits of, on-site expansion. Costs to build a data center run into the thousands of dollars per square foot.

On the other hand, cloud-based solutions make many administrators nervous. Some data-security experts dismiss the cloud as a “data exfiltration tool.” Regulations concerning the security and confidentiality of electronic health records so far have slowed any large-scale exodus to the cloud.

What is not in question is the centrality of data to all phases of healthcare. “Healthcare facilities have become data centers with patients,” said Bhavesh Patel, vice president, Global Marketing, ASCO Power Technologies, Florham Park, NJ. “Data digitization rules. Hospitals need to capture, manage, store, and protect more and more data. Patient histories, diagnoses, and prescriptions are examples. At the Seattle Cancer Care Alliance in Washington State, patients may see an oncologist, get medications from the pharmacy, and perhaps undergo an MRI or CAT scan during a single visit. All the data must be captured,” he said.

“Also, the business end of healthcare facilities relies on data centers for capturing costs and revenue, maintaining employee records and payroll, and storing backup data from personal computers, e-mails, and other electronic communication. Hospitals are at different stages of achieving all of that, of course, but all of them need to get there because it’s mandated by law,” Patel added.

A critical power management system (CPMS) helps management comply with healthcare data-center requirements by capturing data and producing automated reports for inspection. Photo: ASCO

“As the healthcare industry evolves and strengthens its focus on preventive care, and as health organizations become more distributed in nature—with large campuses and regional networks—IT infrastructure is critical for the centralized management and storage of patient data. What used to be a paper-only industry is now completely paperless,” said Justin Carron, global healthcare segment manager at Eaton, Cleveland, OH.

“The data center is critical to the success of modern hospitals and to what Gartner Research, Stamford, CT, analysts call real-time healthcare systems (RTHS),” agreed James Cerwinski, director at Raritan Inc., Somerset, NJ.

“Hospitals use hundreds of applications, ranging from the typical ones found in most businesses—e-mail, online portals, back-office applications, HR, and financials—to applications supporting the work done in labs, ERs, hospital floors, clinics, radiology, and in just about every corner of a hospital,” Cerwinski explained. “According to one of Raritan’s customers, Florida-based UF Health Shands, its most important application, out of more than 200, is the hospital-information system that supports all patient caregivers. Each day the system gathers information on caregiver-patient encounters, medical records, and every work order, such as lab work, in all of Shands’ locations.”

Major concerns

The computer room air handlers at Lifeline’s data center deliver cold air to the front of the computers and networking gear. The units were custom built when Lifeline found units available on the open market were unsuitable. Photo: Lifeline Data Centers

Security, reliability, and uptime are the major concerns facing hospital data centers, regardless of whether they are traditional onsite installations, co-located, or cloud based.

“Energy efficiency is important, of course, and an inefficient data center will get someone’s wrist slapped. Poor reliability and power availability, however, will get someone fired. It’s that black and white,” said Patel.

“Mark Hungerford, the operating engineer at Fred Hutchinson Cancer Research Center in Seattle, WA, told us, ‘It’s all about reliability. Everything, especially our data centers, is built to never go down,’” Patel related.

“That’s because the center’s campus operations scream ‘mission critical!’ Data centers totaling 18,000 sq. ft. store colossal volumes of information produced by more than 200 research labs and advanced imaging facilities, cell monitoring and manufacturing operations, and specialized tools that analyze and sequence DNA and RNA,” he explained.

According to ASCO’s Patel, “Data centers for healthcare networks are particularly sensitive to interruptions. Data-center downtime costs more than $5,000 per minute and on average $500,000 per incident, according to a 2011 Ponemon Institute, Traverse City, MI, study of U.S.-based data centers. For healthcare facilities that rely on IT systems to support critical applications, such as electronic patient data, the highest cost of a single event in the study topped $1 million, or more than $11,000 per minute. In healthcare facilities, power problems can result in losses more significant than financial costs—loss of human life.”
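Those per-minute and per-incident figures lend themselves to a quick back-of-the-envelope exposure estimate. The short sketch below is purely illustrative: the per-minute rates echo the figures above, but the 45-minute outage duration is an assumed example, not a value from the study.

```python
# Rough downtime-cost estimate based on the per-minute figures cited above.
# The outage duration used here is an illustrative assumption, not a value
# from the Ponemon study.

def downtime_cost(cost_per_minute: float, outage_minutes: float) -> float:
    """Estimated financial exposure for a single outage."""
    return cost_per_minute * outage_minutes

# Example: a typical data center at ~$5,000/min vs. a healthcare IT
# environment at ~$11,000/min, each for a hypothetical 45-minute outage.
for label, rate in [("typical data center", 5_000), ("healthcare IT", 11_000)]:
    print(f"{label}: ${downtime_cost(rate, 45):,.0f} for a 45-minute outage")
```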

Security competes with reliability as a matter of crucial importance because of the Health Insurance Portability and Accountability Act (HIPAA), the Patient Safety and Quality Improvement Act (PSQIA), and other requirements, Patel commented.

“It’s one thing if someone’s credit card information is hacked from a national department store chain’s database,” he said. “It’s another thing entirely if someone’s medical history, prescriptions, and even genetic predisposition to certain diseases are hacked. That’s a nightmare that might be causing sleepless nights for healthcare facility executives. It’s a real problem because the frequency of serious security breaches during the past year seems to be increasing and, unfortunately, people appear to be becoming numb to them.”

“Healthcare organizations face very severe penalties for data breaches and must choose very wisely when deciding to outsource services or implement internal IT programs,” Carron added.

“Storage is also a key factor,” he continued. “Some U.S. health organizations have data records far more extensive than the Library of Congress. The sheer amount of data is immense, and healthcare organizations need to be able to plan and predict not only for capacity requirements, but also for electrical infrastructure upgrades to support and protect expansive storage hardware.”

The physical resources to support all those data are of equal concern. “Uptime can be impacted if you run out of resources supporting your data-center operations,” Raritan’s Cerwinski warned. “Some of the top resource concerns and major pain points are: Does your data center have enough power to support all operations? Do you have enough cooling? Do you have enough space to place servers? Do you have enough network and power connections?”

“DCIM (data center infrastructure management) has made server moves, adds, and changes more efficient because it tells us exactly where a server and its supporting infrastructure are located. If someone moves a server, an alert is sent,” he said.

Cerwinski continued, “By using DCIM tools, managers know where each piece of equipment is located and its relationships with other systems; how much power capacity is available; and whether there are any harmful hotspots or wasteful over-cooled areas—and DCIM tools give suggestions on the optimal place to install a new server. These tools are essential to delivering high availability and keeping costs down by eliminating over-provisioning and wasted resources. They also support security by tracking equipment moves and personnel entering the data center.”
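The resource questions Cerwinski lists (power, cooling, space, connections) boil down to tracking utilization against capacity and flagging anything approaching its limit. The sketch below illustrates that idea in minimal form; the rack capacities, usage figures, and 80% alert threshold are hypothetical, and a real DCIM platform would pull these values from instrumented PDUs, sensors, and asset records rather than hard-coded data.

```python
# Minimal sketch of the capacity questions above: power, cooling, space, and
# connections. All capacities, usage values, and the 80% alert threshold are
# hypothetical, for illustration only.

RACK_CAPACITY = {"power_kw": 20.0, "cooling_kw": 22.0, "rack_units": 42, "network_ports": 48}

def headroom(used: dict, capacity: dict = RACK_CAPACITY, alert_at: float = 0.80) -> dict:
    """Return utilization per resource and flag anything above the alert threshold."""
    report = {}
    for resource, cap in capacity.items():
        utilization = used.get(resource, 0) / cap
        report[resource] = (round(utilization, 2), utilization > alert_at)
    return report

# Example: one rack's current draw and occupancy.
print(headroom({"power_kw": 17.5, "cooling_kw": 15.0, "rack_units": 30, "network_ports": 40}))
```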

Standards and requirements

Coolant pumps serve the many redundant coolant loops servicing customer equipment areas at the Lifeline data center. The cooling systems can tolerate multiple component and path failures and still maintain temperature and humidity in conformity with the ASHRAE TC 9.9 Standard. Photo: Lifeline Data Centers

On one level, the requirements for hospital data centers are similar to any other data center. The major difference comes in physically securing them. “A data center in the Midwest that’s dedicated to maintaining healthcare records for hospitals across the country has layers of security, including a barrier that can stop a truck. Inside, an elaborate, multi-layer system helps ensure only authorized access to servers,” Patel said.

He added, “Healthcare facilities need to comply with National Fire Protection Association (NFPA) 70, 99, and 110 requirements, as well as Joint Commission, Medicaid, and National Electrical Code (NEC) 220.87 reporting mandates to maintain proper accreditation. A critical power management system (CPMS) helps management comply with the requirements by capturing data and producing automated reports for inspection.”

“A critical power management system at Bryan Medical Center in Lincoln, NE, for example, produces automated reports that Joint Commission inspectors and the local fire marshal prefer to review,” Patel noted.

“For co-location data centers responsible for healthcare information, a CPMS enables the operator to report on any downtime as part of its service level agreement (SLA). For instance, the report may show that even when downtime occurred, healthcare data management was back up and running instantly, or, say, in exactly three seconds,” he said.
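An SLA availability number of that kind is simple to derive once downtime events are logged. The following sketch assumes a plain list of outage durations and a monthly reporting period; it is not ASCO’s report format, just an illustration of the underlying arithmetic.

```python
# Sketch of turning logged downtime events into the availability percentage a
# co-location provider might report against its SLA. The event log and the
# 30-day reporting period are assumptions; a CPMS captures these automatically.

from datetime import timedelta

def availability(outages: list[timedelta], period: timedelta) -> float:
    """Percentage of the reporting period the service was up."""
    downtime = sum(outages, timedelta())
    return 100.0 * (1 - downtime / period)

# Example: two brief interruptions (3 s and 12 s) over a 30-day month.
month = timedelta(days=30)
events = [timedelta(seconds=3), timedelta(seconds=12)]
print(f"Availability: {availability(events, month):.5f}%")  # ~99.99942%
```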

On site or off?

The explosion of healthcare data poses a significant question for hospitals: on or off site? “Many hospitals are no longer one-location entities, but are multi-site campuses, so data centers tend to be multi-sited as well, even though they may not be at every campus. A Midwest data center, at which we have power switching and controls systems, has two sites, but their service is cloud-based for their healthcare clients. Large hospital chains with numerous facilities probably have their own cloud,” Patel said.

Obviously, a solution for one organization may not be a good fit for another. “Some organizations have large main campuses with available property to construct a data center as well as satellite branches where disaster-recovery facilities can be constructed in order to avoid having all assets in one place. In these instances, it can be more cost effective to construct your own data center network,” Eaton’s Carron said.

“For smaller organizations without spare real estate and regional dispersion, it may make more sense to seek out an accredited co-location provider with robust disaster recovery infrastructure. Or, we’ve seen many smaller or independent healthcare systems, such as TriRivers Health Partners, collaborate for the construction of data centers, which allows them to share expenses,” he added.

Lifeline’s data center has many 500-kVA uninterruptible power supply systems servicing customer equipment. The center’s power delivery systems are TIA-942 Rated 4 compliant, meaning they provide isolated parallel redundant or fully compartmentalized diverse-path power delivery. A catastrophic failure in one delivery path, e.g., a generator failing to start during a utility power outage, will not affect customer equipment operation. Photo: Lifeline Data Centers

Shown is one of many stepdown/isolation transformers that comprise Lifeline’s 2N isolated parallel redundant-power delivery systems. Photo: Lifeline Data Centers

TriRivers has a 33,000-sq.-ft. hosting data center in Rockford, IL, and a backup facility at FHN Memorial Hospital in Freeport, IL.

“As far as the cloud goes, a hybrid strategy allows healthcare organizations to get the best of both worlds—with the flexibility of an outsourced cloud infrastructure and the security benefits of a brick-and-mortar data center that allows them to actually ‘own’ the patient records internally,” Carron said.

Raritan’s Cerwinski concurs that there is no one-size-fits-all solution. “Our customers have owner-operated data centers and co-located data centers. Some customers use a hybrid approach, using the co-located center to augment capacity needs of their data center,” he said.

“One of Raritan’s customers is a healthcare co-location provider. They use our intelligent energy-management solutions to monitor the energy usage of the customers residing in its data center to improve operational efficiency and reduce costs. They also use the Raritan DCIM monitoring software to readily share energy reports and SLA updates with its clientele, so that customers can remotely monitor the energy they are using and purchasing. The co-location data center also meets a number of standards, including HIPAA, and is audited annually against SSAE-16 SOC 2 standards,” Cerwinski said.

Growing pains

Expanding existing hospital data centers is no easy task because hospital administrators may not have anticipated the scope of data they would need to store.

“Twenty or more years ago, healthcare facilities were not constructed under the assumption that they would need the massive IT infrastructures that they rely on today. This can make expanding aging facilities difficult; however, there are many electrical modernization strategies that can help older facilities meet today’s electrical requirements. Facilities should always consult with a power-management expert to fully understand the impact that increased IT infrastructure will have on power systems and make the proper modifications to ensure critical reliability, availability, and safety,” Eaton’s Carron observed.

“Luckily, if modernization cannot meet the immediate needs of healthcare facilities, the industry can also rely on a number of accredited co-location providers and cloud services that allow temporary data-center services while internal resources are constructed, ensuring that capacity needs are always met in a reliable and compliant manner,” said Carron.

“Planning for infrastructure growth, such as for a data center, is a conundrum faced by facilities managers and engineers all too often,” ASCO’s Patel observed. “Should an infrastructure larger than needed in the near term be built for anticipated long-term growth? Or, should it be built to satisfy only near-term demand, with more infrastructure added later as needed?”

“Mark Hungerford has wrestled with the conundrum for 15 years as the operating engineer for the Fred Hutchinson Cancer Research Center in Seattle. The center has grown six-fold since 1991 and now covers 15 acres and employs 4,000 people,” he reported.

“They generally followed the model they used for the first new building—they started with a basic, robust infrastructure of high-quality equipment that would accommodate expansion and grew out from there,” Patel said. “This approach has served the campus well. Today, it’s recognized for its reliable and right-sized infrastructure.”

On the positive side, Patel noted, “even though the square footage [a hospital] has allotted for additional data-center capability may be small, it still may be sufficient because power density, or the kW delivered to a given server rack, has increased. While older data centers may have a power density of 5 kW per rack, new-build facilities can achieve 20 kW. Higher densities allow a smaller building, fewer racks, and fewer rack power distribution units (PDUs).”
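The arithmetic behind that point is straightforward: for a fixed IT load, quadrupling the per-rack density cuts the rack count (and the supporting PDUs and floor space) to a quarter. The example below assumes a hypothetical 400-kW IT load; only the 5-kW and 20-kW densities come from Patel’s comment.

```python
# Illustration of the density arithmetic described above: the same IT load
# needs far fewer racks (and rack PDUs) at 20 kW per rack than at 5 kW.
# The 400 kW total load is an assumed example figure.

import math

def racks_needed(total_it_load_kw: float, kw_per_rack: float) -> int:
    return math.ceil(total_it_load_kw / kw_per_rack)

load = 400  # kW of IT load, hypothetical
for density in (5, 20):
    print(f"{density} kW/rack: {racks_needed(load, density)} racks")
# 5 kW/rack -> 80 racks; 20 kW/rack -> 20 racks
```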

Hospital data centers are no longer confined to a broom closet converted to a computer room. They’re an integral, if largely unseen, part of the healthcare infrastructure, one that will continue to evolve as healthcare changes.

An Alternative To On-Site Data Centers

Fortunately, there are alternatives to on-site hospital data centers. One of them is co-location, a choice in which Rich Banta and Alex Carroll, co-owners of Lifeline Data Centers in Indianapolis, firmly believe.

The cost for a hospital to build a data center from the ground up is $1,200 to $1,500/sq. ft., not including maintenance and overall cost of ownership, according to Carroll. By contrast, Lifeline offers co-location services for about $150/sq. ft., including all the certifications, upkeep, maintenance, and critical functions such as power uptime, he said.

The cost/sq. ft. to build from the ground up includes the cost of the physical building: chillers, generators, UPSs, power infrastructure, cabling infrastructure, and physical security considerations. It does not include ongoing equipment maintenance, physical building and exterior maintenance, security retrofit, uptime retrofit, and facility operations, Carroll said.
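Put together, Carroll’s figures make the comparison easy to rough out. The sketch below assumes a hypothetical 5,000-sq.-ft. requirement; the per-square-foot rates are the ones he cites, and, as he notes, neither side of the comparison includes ongoing maintenance or total cost of ownership.

```python
# Back-of-the-envelope comparison of the build vs. co-locate figures cited
# above ($1,200 to $1,500/sq. ft. to build, ~$150/sq. ft. co-located). The
# 5,000 sq. ft. footprint is an assumed example.

def facility_cost(square_feet: float, cost_per_sq_ft: float) -> float:
    return square_feet * cost_per_sq_ft

footprint = 5_000  # sq. ft., hypothetical
build_low, build_high = facility_cost(footprint, 1_200), facility_cost(footprint, 1_500)
colo = facility_cost(footprint, 150)
print(f"Build: ${build_low:,.0f} to ${build_high:,.0f}  Co-locate: ${colo:,.0f}")
```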

Comparing the numbers, Banta agreed that the prediction that hospitals will not be planning new data centers in the near future is on target.

The cost of building a data center is not unlike that of building out a radiology wing, and hospitals have all kinds of standards and experts to do that, Banta said. A data center, however, involves an entirely different set of disciplines; what hospitals learned building out a radiological area doesn’t apply, he added.

“Hospitals are used to buying MRI machines and other radiological modalities, so high prices aren’t new. Nevertheless, the cost of a data center is eye-popping, even to them,” Banta observed. “It’s on their balance sheet, and they don’t want it there.”

“With hospitals going filmless and paperless, if IT systems are down, patient care is affected, and lives could hang in the balance,” he said. “Hospitals with savvy risk managers are looking to transfer that risk and the associated capital expenditures as quickly as they can.”

The amount of medical data generated, and that needs to be readily available, is increasing at a rapid pace. “It is orders of magnitude,” Banta said. “In the area of digitized radiology, the old standard for CT scanners was 64 slices. Now they have gone to 128, and that chews through a lot of storage space. A radiologist pulling up your data in an emergency doesn’t need a subset of your information, he needs every bit of it.”

“That’s where the existing data-center infrastructure of hospitals, some of them in basements of 60-year-old buildings, just isn’t going to get it done,” he said. “They don’t want to make the capital investment in a second generator, a second UPS system, a completely redundant HVAC system, or any of the other availability requirements that the industry is pushing towards.”

“Once you get into these levels of availability, a retrofit is not the answer. It’s a total redo, and that is really capital intensive—we can attest to that—and that’s not capital they want to spend,” Banta said.

Banta emphasized that co-location is not the same as the cloud, and he personally doesn’t see hospitals going the cloud route. “A co-located data center is shared infrastructure in terms of power and cooling; the cloud is shared infrastructure in terms of computing power and in terms of security. Everything [in the cloud] shares a security model and shares vulnerability, so if another resident of the cloud is severely compromised, you’re at risk yourself. That is currently an unacceptable level of risk for anybody who is liable for protecting patient information,” he said.

In a co-location model, hospitals or other clients own the hardware and equipment. “We are simply a high-tech landlord with compliance built in. It’s their servers, their storage equipment, and they own the software licenses. We provide the space, power, cooling, physical security, and access to telecommunication carriers,” Banta said. “We are a carrier hotel with 22 telecommunication carriers resident in our facility at the disposal of clients to use to connect to their data. We are also responsible for evidence, artifacts, and preparation for audits of power and cooling availability and physical security.”

Lifeline provides another interesting take on data centers. The company’s data center is located in a former shopping center in Indianapolis and another is planned in a former big-box retail store in Fort Wayne, IN, a trend that is being seen in other parts of the country as well. The shopping center occupied by Lifeline was built during the Cold War era and was designated as a fallout shelter. “The shell of the building can take an EF5 tornado straight on,” Banta said. “That’s all part of the hardening and security features that make data centers so expensive. Other limitations in siting a data center include its proximity to major railroads, highways, bodies of water, flood plains, and other potential hazards,” he explained.

Ken’s View

Where Lost Socks Go

Enough already. Big data, the Internet of Things (IoT), the cloud. Does anyone really know what those terms mean? Or how to capitalize them?

What is big data? Is there someplace I can get some small data? Aren’t all those 1s and 0s, the binary bits that compose data, the same size, or do they come, for example, in tall, grande, venti, or trenta?

Sorry if I’m confused, but naming simple, everyday things shouldn’t be this hard. Venti is Italian for 20, and a venti at a certain ubiquitous coffee chain is said to be 20 oz. So far so good, but why then is a grande, which is claimed to be 16 oz., not a sedici (Italian for 16)? There’s also a trenta, said by some self-proclaimed experts on the Internet to be 31 oz. So why isn’t it called a trentuno? And tall makes no sense at all now that short is no longer officially on the menu. By the way, these measures may vary, depending on whether they are applied to hot or cold drinks—or what Internet sources one consults.

When it comes to more complicated concepts, naming doesn’t get any better. I’m pretty sure there is no data stored in the clouds outside my window, but I’ve read about new sealed hard drives filled with helium. At one-seventh the density of air, helium produces less drag on the moving parts, using less power and running cooler. If you put enough of them together would they float skyward? Would that be the cloud? On the other hand, I’ve heard the cloud likened to the place where lost socks go, so maybe that’s the real cloud.

But not to quibble. I’ll accept there’s a lot of data out there—somewhere. What to do with it? Why, optimize it, stupid. Unfortunately, optimize is yet another buzzword—the next big thing which, in the end, isn’t so new after all.

“Comrades, let’s optimize,” exhorts author Francis Spufford on the website promoting his novel/history/fantasy Red Plenty, which tells, among other things, of the Soviet potato optimization program back in the shoe-thumping, we-will-bury-you 1960s. Lurking behind this potato-optimization, market-control scheme was a Large Electronically Computing Machine (the literal translation of Bolshaya Elektronno-Schetnaya Mashina, or BESM, which is not a fictional invention, by the way). The BESM, which the Soviets cobbled together because IBM wouldn’t sell them computers in those Cold-War days, number-crunched thousands of variables in a bid to make potatoes abundantly available to the masses. Big data, you see, is nothing new.

Neither is the potato shortage. The former Soviet Union still doesn’t have a handle on its potato supply. Even as I write this, Potato News Today (I’m not making this up), an online source for all the dirt on tubers, reports a Russian potato shortage is looming. Not 50 years ago. Today. So much for big data.

The proliferation of data and what to do with it is not new either. In 1941 Jorge Luis Borges wrote a short story, “The Library of Babel,” in which the library of the title contained every bit of knowledge about everything, far beyond the capacity of anyone to process. The frustrated souls searching for meaning in this overkill of data, “…disputed in the narrow corridors, proffered dark curses, strangled each other on the divine stairways, flung the deceptive books into the air shafts, met their death cast down in a similar fashion by the inhabitants of remote regions. Others went mad…” Sounds like another day at the office. Or the Internet.

Let’s hope a more amicable approach can be found. I attended a couple of educational sessions that discussed the Internet of Things at AHR in Chicago recently and, while I’m still convinced IoT is a silly name and an incorrectly capitalized acronym, I began to see the utility of it, more so on a commercial and industrial level (the IIoT, to confuse matters further) than on a consumer one. I don’t care if my smart toaster tells my smart watch the toast is done so it can text my smart phone to alert me to that fact. Given a choice, I’d rather toast my bread on a stick over an open flame.

Obviously, much work is yet to be done before the IoT becomes truly useful. A recent e-newsletter from Harbor Research, a Boulder, CO-based research and consulting firm, sums it up: “Some things that look easy turn out to be hard. That’s part of the strange saga of the Internet of Things and its perpetual attempts to get itself off the ground. But some things that should be kept simple are allowed to get unnecessarily complex, and that’s the other part of the story. The drive to develop technology can inspire grandiose visions that make simple thinking seem somehow embarrassing or not worthwhile. That’s understandable in science fiction. But it’s not a good thing when defining and deploying real-world technology.”

Now, if you’ll excuse me, I must fling some data into the air shaft—and search for those lost socks.

Kenneth W. Betz, Senior Editor, CBP

 
