As CTO of Schneider Electric’s IT Division, Kevin Brown is charged with infusing innovation into all the products and services the company delivers to its customers. So, it’s always a treat to hear him talk about the issues customers are facing and how the company plans to address them. If you don’t believe that, just ask him!

To the surprise of no one who has heard Kevin before, that’s exactly what he did during his presentation at the recent International Colocation Club event hosted by Schneider Electric in London. In less than 60 minutes, Kevin made clear not only how much has changed in the data center world in the last few years, but more importantly what kind of change lies ahead and how colocation providers can best prepare for it.

In this post, I’ll present what I took away as the highlights from his presentation; you can also check out the full version for yourself via the link at the end of this post.

How customer needs influenced data center innovation

The bulk of Kevin’s talk focused on the needs of larger, centralized data centers. He addressed the question of why air containment systems weren’t more widely used in data centers, given the proven efficiency they provide. For colocation companies, the answers included things like:

  • A need to minimize construction once tenants are in the data center
  • Difficulty in moves, adds, and changes
  • A desire to roll in fully configured racks
  • Cost and deployment issues

This research led to the development of Schneider Electric HyperPod, a self-contained system designed to deploy fully configured IT racks in increments of 8 to 12 racks. Without the need for on-site construction, HyperPod enables companies to gain all the efficiency of air containment systems while retaining the flexibility to quickly roll racks in and out.

HyperPod is particularly effective for colocation companies for a simple reason. “A colocation provider told us, ‘If I can get all the infrastructure ready for [a customer], there’s a benefit to me because I can start billing them immediately.’” He talked of a provider that had to wait 3 or 4 months from the time a customer signed a contract until it started billing because that’s how long it took to get all the infrastructure in place. That 3 to 4 months of revenue was enough to cost-justify the HyperPod solution, Kevin said.

He also touched on cooling strategies, noting that the key driver for more modern, efficient cooling systems is a simple one: they make 11% more power available for IT equipment in the same space. “Just by implementing air economization, you free up power,” he said.

Perhaps some of the most exciting work going on has to do with power, including ways to deal with data centers that are becoming more “peaky” in terms of power use, meaning use can vary dramatically from day to day or even hour to hour. Emerging energy storage system (ESS) applications show promise in addressing the issue, Kevin said.

With the increased adoption of Lithium-ion battery technology inside UPS systems, for example, data centers can ride through peak power demand periods without relying on additional power from the grid, a concept known as peak shaving. For starters, that can help colocation companies save on demand charges and buy lower-cost power during periods of less demand.  
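To make the idea concrete, here is a minimal peak-shaving sketch in Python. It assumes a simplified, hypothetical setup (a fixed demand threshold in kW and a battery with a fixed usable capacity); real UPS and ESS controllers are considerably more sophisticated.

```python
# Minimal peak-shaving sketch (illustrative only, not a real UPS/ESS controller).
# Assumption: when site demand exceeds a contracted threshold, the battery
# discharges to cover the excess so grid draw stays at or below the threshold;
# when demand is below the threshold, spare headroom recharges the battery.

def peak_shave(demand_kw, threshold_kw, battery_kwh, capacity_kwh, interval_h=0.25):
    """Return (grid_kw, battery_kwh) after one metering interval."""
    if demand_kw > threshold_kw:
        # Discharge the battery to cover demand above the threshold.
        needed_kwh = (demand_kw - threshold_kw) * interval_h
        discharged = min(needed_kwh, battery_kwh)
        grid_kw = demand_kw - discharged / interval_h
        battery_kwh -= discharged
    else:
        # Use headroom below the threshold to recharge with lower-cost power.
        headroom_kwh = (threshold_kw - demand_kw) * interval_h
        charged = min(headroom_kwh, capacity_kwh - battery_kwh)
        grid_kw = demand_kw + charged / interval_h
        battery_kwh += charged
    return grid_kw, battery_kwh

# Example: a "peaky" afternoon sampled every 15 minutes, 1,000 kW threshold.
battery = 400.0  # usable kWh in the Li-ion UPS (hypothetical)
for demand in [800, 950, 1200, 1400, 1100, 900]:
    grid, battery = peak_shave(demand, threshold_kw=1000,
                               battery_kwh=battery, capacity_kwh=400.0)
    print(f"demand={demand} kW -> grid draw={grid:.0f} kW, battery={battery:.0f} kWh")
```

Because grid draw never exceeds the threshold in this toy run, the billed peak demand stays flat even though actual IT demand spikes well above it.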

A Li-ion battery solution that provides 2 to 3 hours of run time may even be able to replace generators in some applications, Kevin said. At the very least, it’s a discussion folks are starting to have. “The market is adopting Lithium-ion much faster than I would’ve guessed,” he noted.  

Interconnected data centers win out

Finally, he talked about software as crucial to dealing with data centers that have more interaction with the utility grid and renewables, greater power variability with respect to IT, a mix of complex cooling architectures and more sites to manage in hybrid environments – all with the same or fewer staff.

“The only way I can address this level of complexity and this limitation of people and staff is by going to a cloud-based management system,” he said. The need to collect and analyze huge amounts of data is a problem tailor-made for the cloud.

That’s where the Schneider Electric EcoStruxure™ solution comes in. With modules that address buildings, IT and power considerations, EcoStruxure provides an architecture for collecting data from connected devices, providing edge control, and performing analytics that turn the data into actionable insights, a topic we covered in this blog on the increasing demand for data and data analytics.

After hearing Kevin and others speak at the Colo Club event, I’m convinced this is a time of unprecedented growth and opportunity for colocation providers, and we’re excited as a company to help customers take advantage of it. To delve deeper into the technologies Kevin talked about, be sure to check out his full presentation on how colocation providers can leverage IoT solutions within their data center architecture.

A healthcare setting is a great example of an environment that can’t afford to suffer disruptions. Consider a magnetic resonance imaging (MRI) machine, which is important not only to the treatment of patients but also to the institution’s revenue stream. Should it go down, it would jeopardize patient satisfaction as well as income.

Even a small healthcare facility with a single MRI machine may perform some 6,000 scans per year, while a major medical center with several machines may perform 18,000 scans annually. At a cost of around $2,600 per scan, that represents a major source of revenue in both settings, roughly $15.6 million a year for the small facility and $46.8 million for the major medical center. Beyond the revenue stream, each scan is an opportunity to provide a valuable service for the patient. But if the machine fails for any reason and scans must be rescheduled, it can also be a major hit to patient satisfaction.

MRI machine cooling puts premium on redundancy

Avoiding that fate takes specialized expertise to keep an MRI machine running 24×7. Cooling requirements, for example, are challenging to say the least.

To ensure proper operation and accurate imaging, the magnet at the heart of an MRI machine must be maintained at an ultra-low temperature of about 4 Kelvin, roughly -269°C. That’s accomplished using liquid helium, which is kept cold by a specialized compressor unit called a cryocooler. Another series of coils and heat exchangers cools the cryocooler itself, which is crucial because if the temperature of the helium rises, the machine could overheat. Without cooling, all the helium will vaporize within a few hours and escape through the safety valves, after which the magnet will suffer irreparable damage. A helium recharge costs tens of thousands of dollars, while the loss of the magnet can drive the capital loss into the millions.

A typical MRI cooling system relies on a dedicated chilled water system to supply the cold water that cools the various components of the MRI machine, including the cryocooler. Should the chiller be taken down for maintenance or in the unlikely event it suffers a fault, the cooling system must be able to use an alternate water supply, namely city water.

A system like the Schneider Electric MRI “all in one” cooling solution couples a highly reliable chiller with a packaged hydronic kit. Imagine the hydronic kit as a black box connected to the chiller and the city water supply on one end, and to the cryocooler on the other. If the chiller fails, the box automatically switches to city water, which cools the cryocooler indirectly through a heat exchanger. If the water pump fails, city water can flow directly to the cryocooler, with the city water pressure doing the job of circulating the water. That level of redundancy brings the risk of an MRI cooling failure close to zero.
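As a rough illustration of that priority chain (a sketch only, not the logic of the actual product), the switchover decision can be pictured as a simple function of equipment health, in Python:

```python
# Illustrative priority chain for MRI cooling redundancy (not the real controller).
# Assumed inputs: health flags for the dedicated chiller and the circulation pump.

def select_cooling_mode(chiller_ok: bool, pump_ok: bool) -> str:
    """Pick a cooling path for the cryocooler based on equipment health."""
    if chiller_ok and pump_ok:
        # Normal operation: the dedicated chilled-water loop cools the cryocooler.
        return "chilled water loop"
    if pump_ok:
        # Chiller fault: switch to city water, cooling the loop indirectly
        # through the heat exchanger in the hydronic kit.
        return "city water via heat exchanger"
    # Pump failure: city water pressure pushes water directly to the cryocooler.
    return "city water direct to cryocooler"

for chiller_ok, pump_ok in [(True, True), (False, True), (True, False)]:
    print(f"chiller_ok={chiller_ok}, pump_ok={pump_ok} -> "
          f"{select_cooling_mode(chiller_ok, pump_ok)}")
```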

How IoT and big data improve MRI cooling reliability

Having two backup options is the kind of cushion you need in a critical facility such as a hospital. Another safeguard is the ability to continually gather data on the status of all the equipment involved in the MRI cooling process.

This involves instrumenting various components of the MRI infrastructure such that they can report status and health data to a centralized management platform. There, diagnostic algorithms examine the data to identify anything that may be out of the norm, indicating a component that is in need of attention.

That’s essentially what solutions like Schneider Electric’s EcoStruxure do, in the process enabling predictive maintenance, so healthcare providers (or, more likely, a solution provider) can address any issues before an unexpected failure occurs. In short, that means greater MRI machine reliability and less downtime.
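As a rough illustration of that diagnostic step (a minimal sketch, not EcoStruxure’s actual analytics), flagging can be as simple as comparing each new reading against a rolling baseline and alerting when it drifts too far:

```python
# Minimal anomaly-flagging sketch for equipment telemetry (illustrative only).
# Assumption: each instrumented component periodically reports a numeric reading
# (for example, a cooling-loop temperature) to a central platform, which compares
# it to a rolling baseline and flags large deviations for maintenance follow-up.
from collections import deque
from statistics import mean, stdev

class ComponentMonitor:
    def __init__(self, name, window=48, threshold=3.0):
        self.name = name
        self.history = deque(maxlen=window)  # recent "normal" readings
        self.threshold = threshold           # allowed deviation, in std devs

    def ingest(self, reading):
        """Record a reading; return True if it looks out of the norm."""
        if len(self.history) >= 10:
            baseline, spread = mean(self.history), stdev(self.history)
            if spread > 0 and abs(reading - baseline) > self.threshold * spread:
                print(f"[alert] {self.name}: reading {reading} deviates from "
                      f"baseline {baseline:.1f} - schedule an inspection")
                return True
        self.history.append(reading)
        return False

# Example: a cooling-loop temperature that suddenly creeps upward.
monitor = ComponentMonitor("cryocooler loop temperature (C)")
for value in [15.0, 15.2, 14.9, 15.1, 15.0, 15.2, 14.8, 15.1, 15.0, 15.1, 18.5]:
    monitor.ingest(value)
```

In practice the platform would track many signals per component and use far richer models, but the principle is the same: establish what normal looks like, then surface deviations early enough to schedule maintenance before a failure.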

For service providers, such a maintenance program can also be a source of recurring revenue, probably one of far greater long-term value than the initial MRI cooling system sale.

Cooling an MRI machine, with a magnet that must be held at nearly unfathomably low temperatures, is an extreme example of cooling technology. It requires careful monitoring to keep a constant, intelligent eye on it all and to deliver the level of reliability that keeps customer satisfaction high and protects hospital revenue and capital investments.

To learn more about how to address requirements ranging from operational efficiency and security to patient safety and satisfaction, visit the EcoStruxure for Healthcare page. You’ll find resources including white papers, videos, blog posts and case studies to help ensure your hospital thrives at every level, from the emergency room to the executive suite.

Every process requires detail, precision and collaboration. If all of the components of the process aren’t working together, it fails. And that can result in catastrophe. As an example, let’s consider a relatively simple machine and one of its primary components: the bicycle and, more specifically, its wheels.

As long as its rider expends the energy, a bicycle’s wheels will keep spinning, thus carrying the rider from point A to point B without incident.  But every sub-component of that primary part must remain in sync. With a finely tuned racing bike, if even one spoke bends or breaks, the wheel first begins to wobble, and if the wobble isn’t addressed, it will fail. And then…catastrophe.

Therefore, the bicycle’s rider needs to be in harmony with the entirety of the machine and the bike-riding process. By fully understanding every dynamic, he or she can ensure a safe, smooth journey. But understanding those dynamics, and keeping the bike moving gracefully forward, requires a certain level of education, good practice and a commitment to improving performance.

The analogy isn’t perfect, but it is what the process automation and industrial manufacturing industry must keep in mind today as it confronts new, advanced and aggressive cyber assaults on our industrial control and safety systems.

To ensure we reach our destination safely and without incident, no organization can stand on its own. Instead, like the components of a finely tuned racer, we all need to work together, applying our knowledge, technical know-how and experience, so we are better able first to contemplate and understand new threat vectors and, second, to anticipate and combat dangerous new cyber incursions.

Most of us have read reports about the malware that various industry cybersecurity vendors and the U.S. Department of Homeland Security have dubbed Triton, Trisis and HatMan. But if not, here are the broad strokes: an unnamed end user suffered a highly sophisticated and prolonged cyber-attack, which resulted in a safe plant shutdown in August 2017. Using attack methodologies we had never seen before, the attackers breached a 10-year-old Tricon controller. However, the safety system detected an anomaly and behaved as it was supposed to: it took the plant to a safe state, protecting the end user from any harm.

Security professionals quickly began to investigate the incident and discovered the malware, which had been deployed on the safety instrumented system engineering workstation. Even more disturbing, they determined that the distributed control system had also been compromised.

Since then, all evidence indicates that multiple site and process security lapses were exploited, which ultimately enabled the attacker(s) to gain remote connectivity to the safety controller, from which the attack was initiated. No single vulnerability caused or enabled the attack. The attack’s sophistication, as well as the attack vector, demonstrate that the incident is not unique to any specific controller; it could have been carried out on any industrial system.

Given that the Triton incident, for the first time, allows us to truly envision attackers manipulating the DCS while reprogramming the SIS controllers, the global industrial process and manufacturing industry must heed it as a warning. Concerns about the possibility of attacks on industrial systems in the era of the IIoT are escalating, and they extend across industries and broader society. The message has never been clearer: when it comes to cybersecurity, the industry needs to come together. There is simply too much at stake.

Join us for an upcoming webcast, “Cybersecurity – the time is NOW,” on April 5. Register at our Cybersecurity Academy to learn about this webcast and more!

Our industry is conservative and continues to take the “if it ain’t broke, don’t fix it” approach, and that has to change. We all need to take ownership to develop a stronger cybersecurity culture, which, in my view, could be accomplished in three measures.

First, vendors have to reinforce their commitments to making their products stronger and to educating end users on what they need to do to adhere to security best practices at their sites. Part of that means educating ourselves on the landscape and on how today’s threat vectors are already impacting critical infrastructure. Take the Ukrainian power grid attacks as examples. There, in consecutive years (2015 and 2016), two different utilities were attacked via similar vectors, crippling power grids and leaving large swaths of the populace without power during the most brutal parts of winter. Those attacks were strong lessons, but we in the industry still aren’t learning.

Second, we have to come together to put in place stronger, unifying standards and practices. While much needs to be discussed when it comes to standards, the simplest first step is to ensure our systems are consistently up to date. The WannaCry attack is a perfect example: the EternalBlue exploit exposed in the NSA leak was easily avoidable if companies had simply patched their systems. Microsoft had identified the vulnerability and released a patch roughly two months before the attack hit, so it was not even a zero-day by then. Anyone who waited too long to update their systems was obviously and unnecessarily in peril.
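To make that point concrete, here is a minimal sketch of a patch-currency check. The patch identifiers, hosts and grace period are entirely hypothetical, and the sketch is not tied to any real patch-management tool; it simply flags hosts still missing a published patch after a grace period.

```python
# Illustrative patch-currency check (hypothetical inventory data only).
from datetime import date, timedelta

published_patches = {
    "KB-EXAMPLE-0010": date(2017, 3, 14),   # hypothetical identifiers and dates
    "KB-EXAMPLE-0011": date(2017, 4, 11),
}

inventory = {
    "hmi-workstation-01": {"KB-EXAMPLE-0010", "KB-EXAMPLE-0011"},
    "eng-workstation-02": {"KB-EXAMPLE-0011"},   # still missing the older patch
}

def overdue_hosts(today, grace_days=30):
    """Return hosts missing any patch older than the grace period."""
    flagged = {}
    for host, applied in inventory.items():
        missing = [kb for kb, released in published_patches.items()
                   if kb not in applied
                   and today - released > timedelta(days=grace_days)]
        if missing:
            flagged[host] = missing
    return flagged

print(overdue_hosts(date(2017, 5, 12)))
# -> {'eng-workstation-02': ['KB-EXAMPLE-0010']}
```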

We also need to focus on education, especially awareness. Everyone in the industry needs constant and recurring education on what to do and what not to do when it comes to cybersecurity.

We should all have in place ongoing reminders to drive general awareness. Everyone in the industrial workforce needs to understand password policies, BYOD policies, and how acceptable-use policies work and why they exist. They need to know the difference between a targeted phishing attack and ordinary malicious e-mail, and between a virus and other forms of malware. Organizations should even run simulated phishing campaigns, sending test e-mails to groups of associates to gauge their security acumen.

In this era of increased connectivity, everyone, not just security professionals but workers across the entire manufacturing enterprise, needs to understand potential attacks and have the wherewithal to protect themselves and their company.

Third, we have to drive new levels of cross-industry collaboration and openness. That is why we need to call for an impartial industry group or consortium to build a better understanding of the severity of the threat and then help create a culture where everyone knows security is part of his or her everyday job.

In a detailed analysis of the Triton incident, ARC Advisory Group’s Larry O’Brien wrote: “In the face of increasingly bold, innovative attacks, perpetrated by malicious actors who have unlimited time, resources and funding, every vendor, end user, third-party provider and systems integrator needs to take part in open conversations and drive new approaches that allow installed and new technology to combat the highest level cyber-attacks.”

He couldn’t be more correct. Driving true change to improve the industry’s cybersecurity culture requires a commitment to transparency that promotes openness across competitive lines. This problem isn’t limited to a single company, industry or region. It’s an international threat to public safety that can only be addressed and resolved through collaboration, collaboration that goes beyond borders and competitive interests.

As with the finely tuned racing bike I mentioned at the top, by working together to understand and improve every dynamic, the industry can enjoy a safe, smooth, incident-free journey. This is the only way we can ensure the safety and security of our global infrastructure and the long-term protection of the people, communities and environment we serve.

Join us for our upcoming webcast, “Cybersecurity – the time is NOW,” on April 5. To learn more, register at our Cybersecurity Academy for this webcast and more!

Edge Computing is forecast to grow at a very rapid rate over a very short timescale, and it was therefore a major talking point at DCD Zettastructure in London. One major challenge is that, so far, no consensus has been reached on what the Edge actually means. When I spoke with Dave Johnson, Executive Vice President for the IT Division at Schneider Electric, he said: “In my opinion, the Infrastructure Masons came up with the best definition of Edge in Smarak Bhuyan’s blog, ‘Edgy about the Edge.’”

Edge Computing Defined by the Infrastructure Masons

The Infrastructure Masons is a global group of 1,500 data center professionals representing more than $100 billion in infrastructure projects across over 130 countries. According to the group, an Edge location is a computing enclosure, space or data center that is geographically dispersed so it sits physically closer to the point where data originates or to a user base. For an Edge to exist, there must be a hub or core; it is the dispersion of computing to the periphery that qualifies as “Edge Computing.” Consequently, the physical enclosure, space or facility that accommodates the distributed IT resources can be defined as the “Edge Data Center.”

We are moving to a point where, before long, more than twice as much compute will be done outside the traditional notion of a data center, at the Edge, on distributed IT equipment and smart devices, for example. “Edge conjures up an image of the facility being a single rack enclosure, a micro data center, or a prefab or regional facility,” continued Dave Johnson. “Edge could be anything from a small facility in a town center run by a colocation service provider, to a micro data center in a retail store.”

A common thread is keeping both the data and the data processing capacity in close physical proximity to the point of use. This may be required for a number of reasons; for example, many of these applications need low latency or high bandwidth to be successful. But in some cases, location could be driven by regulatory considerations, such as confining data within a defined geography or restricting it from being communicated outside a given jurisdiction.
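To illustrate the trade-off (a conceptual sketch with made-up site names and figures, not a real orchestration tool), a placement decision ends up weighing the latency budget against any data-residency constraint roughly like this:

```python
# Conceptual workload-placement sketch (made-up sites and numbers, illustrative only).
# A site qualifies if it meets the workload's latency budget and, where data
# residency applies, sits inside the required jurisdiction.

sites = [
    {"name": "core cloud region", "latency_ms": 60, "jurisdiction": "US"},
    {"name": "regional edge facility", "latency_ms": 15, "jurisdiction": "UK"},
    {"name": "on-premise micro data center", "latency_ms": 2, "jurisdiction": "UK"},
]

def place_workload(max_latency_ms, required_jurisdiction=None):
    """Pick the least-constrained site that meets latency and residency needs."""
    candidates = [s for s in sites
                  if s["latency_ms"] <= max_latency_ms
                  and (required_jurisdiction is None
                       or s["jurisdiction"] == required_jurisdiction)]
    # Keep workloads central unless the requirements force them toward the edge.
    return max(candidates, key=lambda s: s["latency_ms"])["name"] if candidates else None

print(place_workload(max_latency_ms=100))                             # core cloud region
print(place_workload(max_latency_ms=20))                              # regional edge facility
print(place_workload(max_latency_ms=5, required_jurisdiction="UK"))   # on-premise micro data center
```

The stricter the latency or residency requirement, the further the workload is pushed out toward an edge data center.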

The Types of Applications Driving the Move to Edge Computing

The applications best suited to these types of Edge facilities are, in part, still emerging. However, says Johnson, the Infrastructure Masons see existing use cases including content distribution networks, local processing requirements, IoT devices, and next-generation workloads such as augmented reality, virtual reality, drone footage and autonomous vehicles.

Driverless vehicles are frequently cited: such life-critical applications simply cannot wait for data to be processed, and instructions returned, by some far-off hyperscale data center. But there are also medical and AR/VR applications in the pipeline that will depend on the Edge. “We’ll start to see applications that involve hologram experiences for business or personal use. They need a lot of bandwidth and they need near zero latency – especially if you’re talking medical procedures.”

There are other, more creative examples, said Dave Johnson. “If you look at the opportunity presented by something like Amazon’s acquisition of Whole Foods Market, we’ve already seen at least one industry pundit suggesting that the company will use its brick-and-mortar stores to house Edge or micro data centers. With 460 shops, it could mean that customers might end up streaming Amazon Prime movies from the local grocery store!”

In terms of infrastructure for the Edge, Dave Johnson said that existing sites like mobile or cell-phone towers lend themselves very well in terms of location and ubiquity. The introduction of 5G could also propel Edge facility growth, with existing locations likely needing to be retrofitted using, for example, prefabricated or micro data centers. Much edge infrastructure already exists in places like stairwells, under desks and in network closets, however; the challenge is that very little of it was designed to support these sorts of requirements.

Cloud Computing falls short, enter Edge Data Centers

While organizations have moved many standard and non-critical applications into the cloud, the workloads kept on premises tend to be of higher priority and more critical in nature. They are often latency- and bandwidth-sensitive. The same constraints make many of the emerging applications mentioned here unsuitable for cloud delivery. The result is that organizations are left managing hybrid infrastructure comprising on-premise and outsourced facilities.

There’s already a pressing, market-driven need to raise the standard of many on-premise data centers, including the way they’re managed. Business Insider estimates that 5.6 billion IoT devices owned by enterprises and governments will utilize edge computing for data collection and processing by 2020, while MarketsandMarkets forecasts the edge computing market to grow from $1.7B in 2016 to $6.72B by 2022.

Dave Johnson said that as a leader in both data center physical infrastructure and data center management software, “We’re (Schneider Electric) already very good at distributed IT environments, hyperscale or centralized facilities and regional data centers. We’re very excited about the idea of being able to provide an end-to-end solution for Edge, from the enclosures, power and cooling, to the management software, monitoring and services. Edge represents a very, very exciting opportunity for us to support our customers across their range of environments.”

Find out more about how Schneider Electric is already helping deliver Edge Computing.