Real-time applications for IoT with edge computing

Posted in AI, IoT, Machine Learning

IoT applications

Sensors generate data; applications consume this data and build value. The data consumers are implemented either as rules using logical statements or as algorithms using a data-driven approach. The rule-based approach is giving way to the data-driven one. In the data-driven approach, mathematical models are built and trained on collected historical data. Algorithms using these models are embedded into applications, generating inferences and answering queries for the applications.
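The contrast between the two styles of data consumer can be sketched in a few lines. This is a minimal illustration, assuming a hypothetical temperature sensor; the hand-coded threshold and the toy statistical "model" stand in for real rules and real trained models.

```python
def rule_based_alert(temp_c: float) -> bool:
    """Rule-based consumer: a hand-written logical statement."""
    return temp_c > 75.0


class DataDrivenAlert:
    """Data-driven consumer: its parameters are derived from
    collected historical data rather than hard-coded.

    A trivial mean/standard-deviation model stands in here for a
    trained mathematical model.
    """

    def __init__(self, history):
        mean = sum(history) / len(history)
        var = sum((x - mean) ** 2 for x in history) / len(history)
        self.mean = mean
        self.std = var ** 0.5

    def __call__(self, temp_c: float) -> bool:
        # Flag readings more than 3 standard deviations above the mean.
        return temp_c > self.mean + 3 * self.std
```

Retraining the data-driven consumer only means recomputing its parameters from new history, whereas changing the rule-based one means editing code.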

Building a mathematical model is hard and requires domain-specific knowledge about the data. Training can take a long time when the models are multilayered, as in deep neural networks, or when the data is large. The complexities of too much or too little data, or of choosing the correct model, are not discussed in this article. Shortening training times requires intense computing resources, which modern GPUs can provide [1]. Cloud providers are equipped with such infrastructure and also provide massive data stores.

If the applications are hosted in the cloud, the algorithms are updated whenever models are modified or retrained. Software solutions exist that simplify deploying these models and applications. These tasks are performed by a team of machine learning scientists and software engineers in cooperation with domain experts who have a deep understanding of the data.

In this process the data was uploaded to the cloud, and the result of the query, the answer, was returned to the resource for which value was created. The type of value to be created is the dominating factor in whether it is useful. In most automation and real-world applications, this value is time-specific.

Real-time applications

In real-time applications the query must be processed and the answer delivered within an upper time limit. Anything later and the value is null.

Cloud providers will argue that networks are getting faster and that this will eliminate the latencies seen today [2]. A car driving on the streets improves its value by adding a self-driving kit, and the driving application needs to receive decisive stimulus information in real time. Security systems and industrial controls have similar constraints. Translating sign language into text or voice requires processing at the edge, and accessibility needs in organizations will demand such edge processing [3]. Today's networks are far from adequate in such situations.

Edge Computing

Edge computing comes to the rescue by hosting the applications close to the point where the stimulus is required. The value created is thus preserved by moving the application down the stack. The requirement on the edge device is that it be capable of executing the application process and the algorithm efficiently.
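Moving the application down the stack can be sketched as follows: the model is trained in the cloud, but its parameters are shipped to the edge device, so each sensor reading becomes a decision locally, with no network round trip. The tiny linear model and its weights here are assumptions for illustration only.

```python
def make_edge_consumer(weights, bias):
    """Build an on-device inference function from model parameters
    that were trained in the cloud and deployed to the edge device.

    A tiny linear classifier stands in for a real trained model.
    """
    def infer(features):
        # Local evaluation only: no data leaves the device.
        score = sum(w * x for w, x in zip(weights, features)) + bias
        return score > 0.0  # e.g. "trigger the actuator"
    return infer
```

Retraining still happens in the cloud; only the updated parameters travel to the gateway, which keeps the real-time loop entirely local.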

Gateway devices that live at the edge may not be capable of this. The existing cloud infrastructure needs to be augmented to accommodate the changes in this layer. Creating additional services on the gateway is not sufficient; adding computational devices that speed up the evaluation of the algorithms is a necessity [1]. The gateways will also communicate with other gateways, creating a peer-to-peer distributed environment [2].

[1] AI at the Edge Nvidia

[2] The End of Cloud Computing

[3] Translating Sign Language