With the advent of smart cities, and society’s obsession with ‘being connected’, data networks have been overloaded with thousands of IoT sensors sending their data to the cloud, requiring massive and very expensive computing resources to crunch it.

Is it really a problem?

The collection of all these smaller IoT data streams (from smart sensors) has, ironically, resulted in a big data challenge for IT infrastructures in the cloud, which need to process massive datasets, leaving little room for scalability. The situation is further complicated by the fact that the majority of sensor data comes from remote locations, which also presents a massive security risk.

It’s estimated that the global smart sensor market will comprise over 50 billion smart devices by 2020. At least 80% of these IoT/IIoT smart sensors (temperature, pressure, gas, image, motion, load cells) will use Arm’s Cortex-M technology, but will have little or no smart data reduction or security implemented.

The current state of play

The modern IoT ecosystem problem is three-fold:

  • Endpoint security
  • Data reduction
  • Data quality

Namely: how do we reduce the data that we send to the cloud, ensure that the data is genuine, and ensure that our endpoint (i.e. the IoT sensor) hasn’t been hacked?

The cloud is not infallible!

Traditionally, many system designers have thrown the problem over to the cloud. Data is sent from IoT sensors via a data network (Wi-Fi, Bluetooth, LoRa etc.) and is then encrypted in the cloud. Extra cloud services then perform data analysis in order to extract useful information.

So, what’s the problem then?

This model doesn’t take into account invalid sensor data. A simple example: glue failing on a temperature sensor, so that it is no longer bonded to the motor or casing it is monitoring. The sensor will still output temperature data, but the readings are not valid for the application.
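One way to catch this kind of failure at the endpoint is a simple plausibility check before the data ever leaves the device. The sketch below is purely illustrative (the window of readings, the `min_std` threshold and the sample values are assumptions, not values from a real system): a sensor that has debonded from a motor tends to drift toward ambient temperature and flatline, so near-zero variance over a window of samples is a cheap red flag.

```python
import statistics

def is_sensor_stuck(samples, min_std=0.05):
    """Flag a sensor whose readings barely vary over a window.

    A debonded temperature sensor typically settles at ambient and
    flatlines, so near-zero standard deviation is a crude but cheap
    warning sign. Window size and `min_std` are application-specific.
    """
    if len(samples) < 2:
        return False
    return statistics.stdev(samples) < min_std

# A healthy motor-mounted sensor shows load-driven variation:
healthy = [71.2, 71.8, 73.1, 72.4, 74.0, 73.3]
# A debonded sensor sits near ambient and barely moves:
debonded = [21.01, 21.02, 21.01, 21.00, 21.02, 21.01]

print(is_sensor_stuck(healthy))   # False
print(is_sensor_stuck(debonded))  # True
```

A production check would combine several heuristics (range limits, rate-of-change limits, cross-checks against neighbouring sensors), but even a variance test like this stops obviously dead data from polluting the cloud-side analytics.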

As for data reduction – the current model is fine for a few sensors, but when the network grows (as is the case with smart cities) the solution becomes untenable, as the cloud is overloaded with data to process.

Finally, there is no endpoint security: the sensor could be hacked, and the hacker could send fake data to the cloud, which would then be encrypted and passed on to the ML (machine learning) algorithm as genuine data.
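The standard countermeasure is to authenticate data at the source: the device signs each reading with a key held in its secure storage, and the cloud rejects anything whose tag doesn’t verify. The sketch below shows the idea with HMAC-SHA256 from Python’s standard library; the key, field names and message layout are illustrative assumptions (on a real Cortex-M device this would use the vendor’s crypto/key-storage blocks, not Python).

```python
import hashlib
import hmac
import json

# Per-device secret, provisioned into secure storage at manufacture.
# This value is purely illustrative.
DEVICE_KEY = b"example-per-device-secret"

def sign_reading(reading: dict) -> dict:
    """Attach an HMAC-SHA256 tag so the receiver can check the message
    came from a key-holding device and was not altered in transit."""
    payload = json.dumps(reading, sort_keys=True).encode()
    tag = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return {"reading": reading, "tag": tag}

def verify_reading(message: dict) -> bool:
    """Cloud-side check: recompute the tag and compare in constant time."""
    payload = json.dumps(message["reading"], sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["tag"])

msg = sign_reading({"sensor": "temp-01", "celsius": 72.4})
print(verify_reading(msg))  # True

# A tampered reading fails verification:
msg["reading"]["celsius"] = 120.0
print(verify_reading(msg))  # False
```

Note that a tag alone does not stop an attacker replaying old genuine messages; real deployments add a counter or timestamp inside the signed payload.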

What’s the solution?

Algorithms, algorithms… and built-in security blocks.

Over the last few years, many silicon vendors have been placing security IP blocks into their silicon alongside a high-performance Arm Cortex-M4 core. These so-called enhanced microcontrollers offer designers a low-cost and efficient solution for IoT systems for the foreseeable future.

A lot can be achieved by pre-filtering sensor data at the endpoint, checking it and only sending what is necessary to the cloud. However, as with so many things, knowledge of security and algorithms is paramount for success.
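As a flavour of how much pre-filtering can save, the sketch below uses a send-on-change (deadband) scheme: the endpoint reports a sample only when it has moved more than a threshold from the last value it reported. The threshold and sample values are illustrative assumptions, and real designs often combine this with smoothing and periodic heartbeat messages.

```python
def deadband_filter(samples, threshold=0.5):
    """Send-on-change data reduction: report a sample only when it
    differs from the last reported value by more than `threshold`.
    The threshold is an application-specific assumption."""
    reported = []
    last = None
    for s in samples:
        if last is None or abs(s - last) > threshold:
            reported.append(s)
            last = s
    return reported

raw = [20.0, 20.1, 20.2, 20.1, 23.5, 23.6, 23.4, 29.0]
sent = deadband_filter(raw)
print(sent)                        # [20.0, 23.5, 29.0]
print(len(sent), "of", len(raw), "samples sent")
```

For a slowly varying quantity such as temperature, a scheme like this routinely cuts the transmitted volume by an order of magnitude while preserving every significant change.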

When looking at undertaking an NPD (new product development) it’s always tempting to take shortcuts in order to make the development more attractive to management or a client. I’ve lost count of the number of times that I’ve heard,

 “If we had the budget and time, of course we’d do it properly.”

The problem is that after doing this several times it becomes the norm, and taking shortcuts in order to potentially save money usually leads to more problems in the long run.

Follow the yellow brick road

For those of you who remember The Wizard of Oz, following the yellow brick road led Dorothy to the Emerald City. The NPD process is the same, and is nicely described on Wikipedia and many other great resources, so there’s no reason to ignore it. If you can’t get the information out of clients, propose a workshop or private session where you can discuss all requirements and get a clear picture of the client’s vision and expectations. Remember that before taking on any new development, you should undertake the task of defining specifications:

  • User specifications: a document that specifies what the user(s) expect the product (e.g. software) to be able to do.
  • Functional specifications: requirements that define what a system is supposed to do. A functional specification does not define the inner workings of the proposed system, and does not include the technical details of how the system’s functions will be implemented.
  • Technical specifications: based on the workshops and the user and functional requirements, you can finally construct the technical specification documentation. This document will contain all the technical details of how the system will be implemented, and usually includes tables, equations and sketches of GUI layouts and hardware block diagrams.
  • Review: go over the specifications and findings with the client, and make sure that they understand what they will be getting (i.e. the deliverables).

Although it’s tempting to ignore these steps and start playing with software, following the aforementioned steps keeps you focused and almost always leads to shorter development times.

About the author: Sanjeev Sarpal is director of algorithms and analytics at Advanced Solutions Nederland BV. He holds a PhD in signal processing and has over 20 years’ commercial experience in the design and deployment of algorithms for smart sensor applications.