Looking along the horizon for the “smart” sea change in IoT
We’ve all been inundated with the hype surrounding Internet of Things “smart stuff” and the impending arrival of our robot overlords, so we tend to minimize the mind-blowing wonder of the responsive and intelligent computing metamorphosis that is upon us.
For years, the IoT community has been saying that if we really want “things” to be of value, they cannot be dumb. The first wave was getting everything connected, and we have made headway there. The next step is to actually make “things” smarter.
There are a variety of commercial solutions that do not really deliver on the promise of automating our way to more productive lives. And the concerns over properly securing our connected things still weigh heavily. But there really have been transformative leaps in computing capability and achievable functionality. The killer use case for IoT is on the horizon, but before defining what that is and describing how it is going to manifest, I think it’s important to broadly identify how we got here.
The “Trinity”
The impact of the open source movement in driving exponential leaps in technological advancement cannot be overstated. The algorithms and computing infrastructure that drive “smart” things (IoT, artificial intelligence, and machine learning) have been around for decades. Anyone at the NSA can tell you as much.
The difference now is accessibility to the masses. These technologies were once jealously guarded, closed off from the wider world, and available only within formidable institutions possessing vast resources in both personnel and compute power. Open source changed all that. New things no longer have to be constructed from the ground up, which supercharges the innovation cycle. Widespread access to knowledge bases and software allows anyone so inclined to build on the shoulders of giants and leverage the wisdom of crowds.
The creative explosion fueled by open source helped give rise to the cloud, which is the second movement responsible for ushering in our new era of computing. Freed from the physical limitations and expense of individual server stacks and on-premise storage, the “app for everything” age dawned and the capacity for on-demand collection and consumption of big data was unleashed. Once we could scale compute power unconstrained by geography, our technology became mobile and the dream of smaller and increasingly powerful devices trafficking in colossal quantities of information became a reality.
Big data gives lifeblood to modern computing. But data does not do anything and, in itself, has no value. This brings us to the third movement in the “smart” revolution: analytics. The types of augmented computing that people encounter in everyday life now — voice recognition, image recognition, self-driving and driver-assisting cars — are founded in concepts that rose out of analytics and the pursuit of predictive analytics models, which was all the rage just a few short years ago.
The disheartening realization with predictive analytics was that training effective models requires both massive amounts of data and scores of data scientists to continually build, maintain, and improve those models. We were once again running up against the roadblocks of access and resource constraints.
And so we arrive at the present, where things are shifting in a new direction. The difference now is that we do not need to recruit an army of data scientists to build models; we have taught our programs to remove some of those roadblocks for themselves.
Inherent intelligence
Our AI-driven systems, especially deep learning systems, can now be fed millions upon millions of training examples, train in hours or days, and continuously re-train as more data becomes available. Open source tools and cloud computing are still important and evolving, and we still traffic in loads of data to perform lightning-fast analysis, but our programs now incorporate AI as the engine that makes them “smarter.”
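To make that re-training loop concrete, here is a minimal sketch using scikit-learn’s incremental SGDClassifier as a lightweight stand-in for a deep learning system; the data source, batch size, and feature count are hypothetical and exist only to show the pattern of folding new data into an already-trained model.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier  # incremental learner standing in for a deep net

def fetch_new_batch(rng, n=512, n_features=16):
    """Hypothetical stream of freshly labeled examples arriving over time."""
    X = rng.normal(size=(n, n_features))
    y = (X[:, 0] + 0.1 * rng.normal(size=n) > 0).astype(int)
    return X, y

rng = np.random.default_rng(0)
model = SGDClassifier()
classes = np.array([0, 1])  # all labels must be declared on the first partial_fit call

# Continuous re-training: fold each new batch into the existing model
# instead of re-fitting from scratch every time more data shows up.
for step in range(10):
    X, y = fetch_new_batch(rng)
    model.partial_fit(X, y, classes=classes)
    print(f"step {step}: accuracy on latest batch {model.score(X, y):.2f}")
```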
Expertise from vastly different computing realms has coalesced to imbue programs with previously unimagined capabilities. The paradox is that as the cloud becomes ever more powerful and less expensive, the smart IoT strategy is to move much of the first line of processing away from the cloud and to the edge. This serves two purposes: it enables on-device decisions without cloud intervention, and it delivers edge-derived patterns and analytics to the cloud for fast second-stage analysis. Tiny AI engines can now perform near-real-time analysis on edge devices and “things” no larger than a matchbook. And as these points of computational power grow increasingly commonplace in ordinary objects (intelligent routers and gateways, autonomous vehicles, real-time medical monitoring devices), their potential functionality expands exponentially.
Artificial intelligence at the edge
In the early days of IoT (a.k.a. M2M), the focus was on getting data up to the cloud whenever possible. FTPing log files every night was all the rage. When General Electric came on the scene with the “industrial internet,” everyone began talking about real-time, live data connectivity. That was a big jump from FTP, but people still treated edge devices as simply “things” that transferred data to the cloud for analytics. We are now in the midst of a wholesale reversal of that thinking. Real-time requirements are redefining the paradigm: the cloud is shifting into the role of IoT support and second-tier analytics, and the processing is being pushed out to the edge.
For example, we have been working with a company developing a next-generation medical monitoring device. Initially, we assumed that, with such a small device, we would send raw data to the cloud for analysis. But that is not what was desired, nor is it what transpired. The company wanted the analytics on the monitor itself: analytics and pattern detection occurring directly on the device, actions taken on the device, and only “intelligent” (as opposed to raw) data sent up to the cloud. The model differed dramatically from standard industrial M2M operations, where everything would be connected and batches of data from all sources would be collected and processed on a set timeline at a central repository.
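A rough sketch of that device-first split, assuming a generic sensor stream, is below: a trivial window summary stands in for the on-device model, a local action is taken immediately, and only the compact “intelligent” result is shipped upstream. The endpoint URL, thresholds, and function names are invented for illustration and are not tied to any particular product.

```python
import json
import time
import urllib.request

CLOUD_ENDPOINT = "https://cloud.example.com/ingest"  # hypothetical second-stage analytics service

def classify_window(samples):
    """Stand-in for a tiny on-device model: summarize a window and flag anomalies."""
    mean = sum(samples) / len(samples)
    peak = max(samples)
    return {"mean": mean, "peak": peak, "anomaly": peak > 0.9}

def act_locally(summary):
    """On-device decision taken without a round trip to the cloud."""
    print("local action triggered:", summary)

def send_to_cloud(summary):
    """Ship the compact summary (not the raw samples) for second-stage analytics."""
    req = urllib.request.Request(
        CLOUD_ENDPOINT,
        data=json.dumps(summary).encode(),
        headers={"Content-Type": "application/json"},
    )
    try:
        urllib.request.urlopen(req, timeout=2)
    except OSError:
        pass  # tolerate intermittent connectivity; a real device would buffer and retry

def process_at_edge(sensor_stream, window_size=256):
    """First-line analytics on the device itself; raw samples never leave it."""
    window = []
    for sample in sensor_stream:
        window.append(sample)
        if len(window) == window_size:
            summary = {"ts": time.time(), **classify_window(window)}
            if summary["anomaly"]:
                act_locally(summary)
            send_to_cloud(summary)
            window.clear()
```

A caller would hand `process_at_edge` an iterator of sensor readings; only the per-window summaries ever cross the network.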
The whole purpose of connecting now is to obtain instantaneous, precise results at the point of entry for immediate answers. With hundreds of thousands, if not millions or billions, of devices, even the low latency of “traditional” cloud processing cannot match this new architecture for real-time edge analytics. In some cases, you can achieve a data reduction of 1,000x simply by sending the analytics and patterns to the cloud instead of the raw data.
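The arithmetic behind that kind of reduction is straightforward. The sample rate, sample size, and summary size below are hypothetical numbers chosen only to show the order of magnitude; more sensors per device or longer windows push the factor higher still.

```python
# Hypothetical monitoring device: illustrative numbers only.
sample_rate_hz = 250      # raw readings per second from one sensor
bytes_per_sample = 4      # size of each raw reading
window_seconds = 60       # analytics window computed on the device
summary_bytes = 60        # one compact pattern/event summary per window

raw_bytes = sample_rate_hz * window_seconds * bytes_per_sample  # 60,000 bytes of raw data
reduction = raw_bytes / summary_bytes                           # 1,000x fewer bytes sent upstream

print(f"raw: {raw_bytes} B per window, summary: {summary_bytes} B -> {reduction:.0f}x reduction")
```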
We no longer deal in dumb collection devices; we need them to do more than simply gather data. They must be artificially (and naturally) intelligent, capable of performing pattern recognition and analytics in their tiny engines and pushing those results up to the cloud for other uses. As this ideal proliferates, so, too, do the possible applications.
As is perfectly embodied in the example of an autonomous car, this dual edge/cloud analytics model produces precise, real-time results that can be continually and automatically refined against ever-growing troves of data, producing valuable, usable information and powering productive action. Even a year ago, I would have called B.S. on the notion of widespread IoT and AI integration, but edge computing and AI really have broken out of the lab and into our world. They will yield outcomes we have never seen before.
The killer use cases for IoT are manifesting through truly intelligent edge devices: solutions that are purpose-built for specific problems or tasks, then interconnected and exposed to patterns that move beyond their initial application. As more and smarter AI-enabled “things” are incorporated into our everyday lives and operate at the edges of our inter-communicating networks, we will see them move beyond merely being connected to actively embodying intelligence. Smart stuff indeed.
This article is produced in partnership with Greenwave Systems.
The author is Vice President and Engineering System Architect at Greenwave Systems, where he guides development on the edge-based visual analytics and real-time pattern discovery environment AXON Predict. He has over 25 years of experience executing enterprise systems and advanced visual analytics solutions.