What is one thing cloud computing and the Internet of Things have in common? They both challenge your company’s ability to maintain the level of control IT has worked so hard to master. Using cloud computing to support IoT initiatives makes it that much harder to ensure things are working the way they should, adding further risk to these critical new efforts.
The emerging cloud-centric, IoT-driven world is characterized by swarms of networked, smart “things” collecting and forwarding high volumes of data to a cloud, where it is correlated, analyzed, and used to support new business opportunities.
Given the potential importance of these efforts—Harvard Business Review goes as far as to say IoT is “transforming companies”—you need to understand how services work end-to-end across these multi-domain, hyper-connected environments in order for IT to get in front of problems involving reliability, availability, and performance before they become business problems.
That’s easier said than done given this simple fact: IoT sensors aren’t going to complain to tech support about performance degradations. Things can go soft before they go south, and if you’ve built an important new business initiative on IoT, you might not see the problem before it impacts customer experience or revenue. What’s more, if you use an IoT cloud platform from one of the big hosting companies—say, Amazon, Microsoft, or Google—you’re already an arm’s length away from direct control.
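Because devices won’t raise their own alarms, spotting a sensor that is “going soft” means watching for changes in its reporting behavior rather than waiting for silence. The sketch below is one minimal way to do that: it classifies a device by how long it has been quiet relative to its expected reporting interval. The function name, field layout, and thresholds are all illustrative assumptions, not part of any particular IoT platform’s API.

```python
from datetime import datetime, timedelta

def classify_device(report_times, now, expected_interval,
                    soft_factor=3, down_factor=10):
    """Classify a sensor that will never complain on its own.

    'ok'   : reporting roughly on schedule
    'soft' : still alive, but quiet for several expected intervals (degrading)
    'down' : silent far beyond its expected interval
    """
    if not report_times:
        return "down"
    # Seconds since the most recent report
    silence = (now - report_times[-1]).total_seconds()
    if silence > down_factor * expected_interval:
        return "down"
    if silence > soft_factor * expected_interval:
        return "soft"
    return "ok"

# A sensor expected every 60 seconds that last reported 4 minutes ago
# is flagged as degrading before it goes fully dark.
now = datetime(2024, 1, 1, 12, 0, 0)
status = classify_device([now - timedelta(seconds=240)], now, 60)  # "soft"
```

The point of the two-tier threshold is exactly the “soft before south” pattern: a slowdown in reporting is surfaced as its own state, distinct from an outright outage.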
The Business Impact of IoT Availability and Reliability
As mightily as we guard against it, downtime happens. Witness the outage this past summer that knocked British Airways’ systems offline, grounding all flights out of the airline’s two major hubs in the United Kingdom.
The outage rippled around the world, eventually stranding some 75,000 passengers. It took four days to normalize operations and get people to where they needed to go. That outage stemmed from human error in a data center after power failed and a contractor restored it too quickly, leading to a surge that damaged servers and other equipment. According to some sources, the snafu ended up costing British Airways close to $200 million.
And that failure happened in a tightly managed data center. Compare that to an IoT deployment with resources reporting into a third-party cloud platform. The chance of interrelated problems cropping up is greatly enhanced, as is the chance that IT won’t see service and application performance problems developing and be able to grasp the big-picture implications.
Consider the potential downside for a company that sells production tooling to automakers and tracks those CNC machines as an IoT value-add. By tracking machine uptime information, data about parts created, and production timing, the supplier can give customers real-time information about the factory floor environment, which they can then use to optimize operations. Collecting that information paves the way for even more data-intensive service opportunities, such as integration with customer ERP systems to help schedule parts and otherwise streamline production.
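To make the data flow concrete, here is a minimal sketch of the kind of telemetry record such a machine might forward to a cloud platform. Every field name and the `build_telemetry` helper are hypothetical, chosen only to illustrate the uptime, parts-created, and production-timing data the article describes; no real platform schema is implied.

```python
import json
from datetime import datetime, timezone

def build_telemetry(machine_id, uptime_s, parts_completed, avg_cycle_s):
    """Assemble one illustrative telemetry record for a CNC machine."""
    return {
        "machine_id": machine_id,
        "reported_at": datetime.now(timezone.utc).isoformat(),
        "uptime_seconds": uptime_s,          # uptime since last restart
        "parts_completed": parts_completed,  # parts produced this shift
        "avg_cycle_seconds": avg_cycle_s,    # average time per part
    }

# Serialized for transport to the cloud platform
payload = json.dumps(build_telemetry("cnc-042", 86350, 412, 37.8))
```

Records like this, arriving continuously from every machine on a customer’s floor, are what downstream analytics (and any ERP integration) would consume, which is why losing track of even a few of them undermines the whole value-add.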
In this kind of highly leveraged IoT deployment extending into multiple systems and key operations, the health of IoT resources and strong service performance is business critical. The supplier in question can’t risk losing track of sensors or data and has to ensure services in the cloud are performing as expected, both for the survival of the business and, in the case of heavily regulated industries, compliance reasons.
But how do you manage these complex IoT environments?
Many companies use multiple tools to address this IoT challenge, but it takes time to piece together the different components necessary to get a big-picture view—and time is one resource that is always in short supply.
What you need is a top-down approach: a way to ensure the business is operating as it should, using a solution that can assure you that services supporting that business are running correctly. Given the multidimensional aspect of the challenge—remote sensors, local and wide area networks, virtualization, the cloud providers, the applications, the analytics, the cybersecurity posture, etc.—the only real source of truth in this Tower of Babel is traffic on the network.
Traffic-based (or wire data) intelligence can deliver big-picture awareness across complex, converged IT environments, providing insight into apps, service enablers, and the network without the lengthy correlation process you face when piecing together the truth from narrowly focused tools, each of which offers only a glimpse into its own platform or domain and constrains visibility to its own data.
What’s more, focusing on service and security assurance derived from wire data makes it possible to easily and economically scale visibility and intelligent insights along with the infrastructure. After all, IoT initiatives typically start small and, if successful, scale exponentially. That is virtually impossible to do in a cost-effective manner using multiple point tools, especially when you consider that pieces of the service delivery path are within a cloud provider’s domain.
To pinpoint problems before they impact the business, you need continuous monitoring and real-time analytics that place IoT devices in the context of end-to-end service delivery.
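As a rough illustration of that idea, the sketch below applies a rolling baseline to a stream of service latency measurements and flags samples that deviate sharply from recent behavior. The class name, window size, and threshold are assumptions for demonstration; a production system would operate on wire data at far greater scale.

```python
from collections import deque

class LatencyWatch:
    """Rolling-baseline check on a stream of latency samples.

    Flags readings that spike well above the recent mean, surfacing
    degradation before it becomes an outright outage.
    """
    def __init__(self, window=50, threshold=2.0):
        self.samples = deque(maxlen=window)  # recent latency readings
        self.threshold = threshold           # alert multiplier vs. baseline

    def observe(self, latency_ms):
        """Record one sample; return True if it breaches the baseline."""
        alert = False
        if len(self.samples) >= 10:  # wait for a minimal baseline
            baseline = sum(self.samples) / len(self.samples)
            alert = latency_ms > self.threshold * baseline
        self.samples.append(latency_ms)
        return alert

# After a stretch of ~40 ms responses, a 130 ms reading trips the alert.
watch = LatencyWatch()
for _ in range(20):
    watch.observe(40.0)
spiked = watch.observe(130.0)  # True
```

The continuous, streaming nature of the check is the point: each new observation is judged against live context rather than a static threshold set months earlier.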
Any other approach is going to be too little, too late, and will ultimately jeopardize critical IoT initiatives.
~Written by John Dix. John is an IT industry veteran who has been chronicling the major shifts in IT since the emergence of distributed processing in the early ‘80s. An award-winning writer and editor, he was the editor-in-chief for NetworkWorld for many years and an analyst for research firm IDC.