
Modern embedded processors, software frameworks and design tooling now allow engineers to apply advanced measurement concepts to smart factories as part of the I4.0 revolution.

In recent years, PM (predictive maintenance) of machines has received great attention, as factories look to maximise their production efficiency while at the same time retaining the invaluable skills of experienced foremen and production workers.

Traditionally, a foreman would walk around the shop floor and listen to the sounds a machine would make to get an idea of impending failure. With the advent of I4.0 AIoT technology, microphones, edge DSP algorithms and ML may now be employed in order to ‘listen’ to the sounds a machine makes and then make a classification and prediction.

One of the major challenges is how to make a computer hear like a human. In this article we discuss how sound weighting curves address this challenge, and how they can be deployed to an Arm Cortex-M microcontroller for use in an AIoT application.

Physics of the human ear

An illustration of the human ear is shown below. As seen, the basic task of the ear is to translate sound (air vibration) into electrical nerve impulses for the brain to interpret.

The ear achieves this via three small bones (the malleus, incus and stapes) that act as a mechanical amplifier for vibrations received at the eardrum. These amplified sounds are then passed on to the cochlea via the oval window (not shown).

The cochlea (shown in purple) is filled with a fluid that moves in response to the vibrations from the oval window. As the fluid moves, thousands of nerve endings are set into motion. These nerve endings transform sound vibrations into electrical impulses that travel along the auditory nerve fibres to the brain for analysis.

Modelling perceived sound

Due to the complexity of the fluidic mechanical construction of the human auditory system, very low and very high frequencies are much harder to discern. Researchers have found over the years that humans are most sensitive to sounds in the 1-6kHz range, although this range varies according to the subject’s physical health.

This research led to the definition of a set of weighting curves: the so-called A, B, C and D weighting curves, which equalise a microphone’s frequency response. These weighting curves aim to bring the digital and physical worlds closer together by allowing a computerised microphone-based system to hear like a human.

The A-weighting curve is the most widely used, as it is mandated by IEC 61672 to be fitted to all sound level meters. The B and D curves are hardly ever used, but C-weighting may be used for testing the impact of noise in telecoms systems.

A-weighting curve

The frequency response of the A-weighting curve is shown above, where it can be seen that sounds entering our ears are de-emphasised below 500Hz and are most perceptible between 0.5-6kHz. Notice that the curve is unspecified above 20kHz, as this exceeds the human hearing range.

ASN FilterScript

ASN’s FilterScript symbolic math scripting language offers designers the ability to take an analog filter transfer function and transform it to its digital equivalent with just a few lines of code.

The analog transfer functions of the A and C-weighting curves are given below:

\(H_A(s) \approx \displaystyle{7.39705\times 10^9 \cdot s^4 \over (s + 129.4)^2\,(s + 676.7)\,(s + 4636)\,(s + 76655)^2}\)

\(H_C(s) \approx \displaystyle{5.91797\times 10^9 \cdot s^2 \over (s + 129.4)^2\,(s + 76655)^2}\)

These analog transfer functions may be transformed into their digital equivalents via the bilinear() function. However, notice that \(H_A(s) \) requires a significant amount of algebraic manipulation in order to extract the denominator coefficients in powers of \(s\).

Convolution

A simple trick to perform polynomial multiplication is to use linear convolution, which is the same algebraic operation as multiplying two polynomials together. This may be easily performed via FilterScript’s conv() function, as follows:

y=conv(a,b);

As a simple example, the multiplication of \((s^2+2s+10)\) with \((s+5)\), would be defined as the following three lines of FilterScript code:

a={1,2,10};
b={1,5};
y=conv(a,b);

which yields 1, 7, 20, 50, or \((s^3+7s^2+20s+50)\).
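The same polynomial multiplication can be cross-checked outside FilterScript. As a quick illustration (not part of the original tool flow), numpy’s convolve() gives an identical result:

import numpy as np

a = [1, 2, 10]          # s^2 + 2s + 10
b = [1, 5]              # s + 5
y = np.convolve(a, b)   # polynomial multiplication via linear convolution
print(y)                # [ 1  7 20 50], i.e. s^3 + 7s^2 + 20s + 50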

For the A-weighting curve Laplace transfer function, the complete FilterScript code is given below:

ClearH1;  // clear primary filter from cascade

Main() // main loop

a={1, 129.4};
b={1, 676.7};
c={1, 4636};
d={1, 76655};

aa=conv(a,a); // polynomial multiplication
dd=conv(d,d);

aab=conv(aa,b);
aabc=conv(aab,c);

Na=conv(aabc,dd);
Nb = {0, 0, 1, 0, 0, 0, 0}; // define numerator coefficients
G = 7.397e+09; // define gain

Ha = analogtf(Nb, Na, G, "symbolic");
Hd = bilinear(Ha,0, "symbolic");

Num = getnum(Hd);
Den = getden(Hd);
Gain = getgain(Hd)/computegain(Hd,1e3); // set gain to 0dB@1kHz

Frequency response of the analog vs digital A-weighting filter for \(f_s=48kHz\). As seen, the digital magnitude response matches the ideal analog magnitude response very closely up to \(6kHz\).
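As an independent cross-check outside the tool (a sketch only, not the FilterScript implementation), the same analog-to-digital conversion can be reproduced in Python with scipy; the 0 dB @ 1 kHz normalisation mirrors the gain line in the script above:

import numpy as np
from scipy import signal

fs = 48000.0

# Analog A-weighting transfer function H_A(s): numerator and denominator in powers of s
num = [7.39705e9, 0, 0, 0, 0]                        # 7.39705e9 * s^4
den = np.polymul([1, 129.4], [1, 129.4])
den = np.polymul(den, [1, 676.7])
den = np.polymul(den, [1, 4636])
den = np.polymul(den, np.polymul([1, 76655], [1, 76655]))

# Bilinear z-transform to obtain the digital equivalent
bz, az = signal.bilinear(num, den, fs)

# Normalise the gain to 0 dB at 1 kHz
w, h = signal.freqz(bz, az, worN=[1000.0], fs=fs)
bz /= np.abs(h[0])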

The ITU-R 468-4 weighting curve

Another weighting curve of interest is the ITU-R 468-4 weighting curve, developed by the BBC. Unlike the A-weighting filter, the ITU-R 468-4 curve describes subjective loudness for broadband stimuli. The main disadvantage of the A-weighting curve is that it underestimates the loudness judgement of real-world stimuli, particularly in the frequency band from about 1–9 kHz.

Due to the precise definition of the 468-4 weighting curve, there is no analog transfer function available. Instead, the standard provides a table of amplitudes and frequencies – see here. This specification may be directly entered into FilterScript’s firarb() function for designing a suitable FIR filter, as shown below:

ClearH1;  // clear primary filter from cascade
ShowH2DM;

interface L = {10,400,10,250}; // filter order

Main()

// ITU-R 468 Weighting
A={-29.9,-23.9,-19.8,-13.8,-7.8,-1.9,0,5.6,9,10.5,11.7,12.2,12,11.4,10.1,8.1,0,-5.3,-11.7,-22.2};
F={63,100,200,400,800,1e3,2e3,3.15e3,4e3,5e3,6.3e3,7.1e3,8e3,9e3,1e4,1.25e4,1.4e4,1.6e4,2e4};

A={-30,A};  //  specify arb response
F={0,F,fs/2};   

Hd=firarb(L,A,F,"blackman","numeric");

Num=getnum(Hd);
Den={1};
Gain=getgain(Hd);

Frequency response of an ITU-R 468-4 FIR filter designed with FilterScript’s firarb() function for \(f_s=48kHz\)
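A comparable arbitrary-magnitude FIR design can also be sketched in Python with scipy’s firwin2() (note that this is not the tool’s firarb() algorithm: firwin2 expects linear rather than dB amplitudes, and the 0 Hz and fs/2 end-point levels below are assumed values added for illustration):

import numpy as np
from scipy import signal

fs = 48000.0
numtaps = 251        # odd length, so the response need not be zero at fs/2

# ITU-R BS.468-4 table: frequency (Hz) vs. response (dB)
f = [31.5, 63, 100, 200, 400, 800, 1e3, 2e3, 3.15e3, 4e3, 5e3, 6.3e3,
     7.1e3, 8e3, 9e3, 1e4, 1.25e4, 1.4e4, 1.6e4, 2e4]
a_db = [-29.9, -23.9, -19.8, -13.8, -7.8, -1.9, 0, 5.6, 9, 10.5, 11.7, 12.2,
        12, 11.4, 10.1, 8.1, 0, -5.3, -11.7, -22.2]

# firwin2 needs a grid spanning 0 .. fs/2 and linear (not dB) amplitudes
freq = [0.0] + f + [fs/2]
gain_db = [-30.0] + a_db + [-43.0]     # end-point levels are assumptions
gain = 10.0 ** (np.asarray(gain_db) / 20.0)

h = signal.firwin2(numtaps, freq, gain, window='blackman', fs=fs)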

As seen, FilterScript provides the designer with a very powerful symbolic scripting language for designing weighting curve filters. The following discussion now focuses on deployment of the A-weighting filter to an Arm based processor via the tool’s automatic code generator. The concepts and steps demonstrated below are equally valid for FIR filters.

Automatic code generation to Arm processor cores via CMSIS-DSP

The ASN Filter Designer’s automatic code generation engine facilitates the export of a designed filter to Arm Cortex-M based processors via the CMSIS-DSP software framework.

The tool’s built-in analytics and help functions assist the designer in successfully configuring the design for deployment. Professional licence users may expedite the deployment by using the Arm deployment wizard that automates the steps described below.

Before generating the code, the H2 filter (i.e. the filter designed in FilterScript) first needs to be re-optimised (transformed) to an H1 (main) filter structure for deployment. The options menu can be found under the P-Z tab in the main UI.

P-Z editor

All floating point IIR filter designs must be based on Single Precision arithmetic and either a Direct Form I or Direct Form II Transposed filter structure. The Direct Form II Transposed structure is advocated for floating point implementation by virtue of its higher numerical accuracy.

Quantisation and filter structure settings can be found under the Q tab (as shown on the left). Setting Arithmetic to Single Precision and Structure to Direct Form II Transposed and clicking on the Apply button configures the IIR filter considered herein for the CMSIS-DSP software framework.

Select the Arm CMSIS-DSP framework from the selection box in the filter summary window:

The automatically generated C code based on the CMSIS-DSP framework for direct implementation on an Arm based Cortex-M processor is shown below:

As seen, the ASN Filter Designer’s automatic code generator generates all initialisation code, scaling and data structures needed to implement the A-weighting IIR filter via Arm’s CMSIS-DSP library. A detailed help tutorial is available by clicking on the Show me button.

Author

  • Dr. Sanjeev Sarpal

    Sanjeev is an AIoT visionary and expert in signals and systems with a track record of successfully developing over 25 commercial products. He is a Distinguished Arm Ambassador and advises top international blue chip companies on their AIoT solutions and strategies for I4.0, telemedicine, smart healthcare, smart grids and smart buildings.


IIR (infinite impulse response) filters are generally chosen for applications where linear phase is not too important and memory is limited. They have been widely deployed in audio equalisation, biomedical sensor signal processing, IoT/IIoT smart sensors and high-speed telecommunication/RF applications and form a critical building block in algorithmic design.

Advantages

  • Low implementation footprint: requires fewer coefficients and less memory than FIR filters in order to satisfy a similar set of specifications, i.e., cut-off frequency and stopband attenuation.
  • Low latency: suitable for real-time control and very high-speed RF applications by virtue of the low coefficient footprint.
  • May be used for mimicking the characteristics of analog filters using s-z plane mapping transforms.

Disadvantages

  • Non-linear phase characteristics.
  • Requires more scaling and numeric overflow analysis when implemented in fixed point.
  • Less numerically stable than their FIR (finite impulse response) counterparts, due to the feedback paths.

Definition

An IIR filter is characterised by its theoretically infinite impulse response,

\(\displaystyle
y(n)=\sum_{k=0}^{\infty}h(k)x(n-k)
\)

Practically speaking, it is not possible to compute the output of an IIR filter using this equation. Therefore, the equation may be re-written in terms of a finite number of poles \(p\) and zeros \(q\), as defined by the linear constant coefficient difference equation given by:

\(\displaystyle
y(n)=\sum_{k=0}^{q}b(k)x(n-k)-\sum_{k=1}^{p}a(k)y(n-k)
\)

where \(a(k)\) and \(b(k)\) are the filter’s denominator and numerator polynomial coefficients, whose roots are equal to the filter’s poles and zeros respectively. A relationship between the difference equation and the z-transform (transfer function) may therefore be defined by using the z-transform delay property, such that,

\(\displaystyle
\sum_{k=0}^{q}b(k)x(n-k)-\sum_{k=1}^{p}a(k)y(n-k)\quad\stackrel{\displaystyle\mathcal{Z}}{\longleftrightarrow}\quad\frac{\sum\limits_{k=0}^q b(k)z^{-k}}{1+\sum\limits_{k=1}^p a(k)z^{-k}}
\)

As seen, the transfer function is a frequency domain representation of the filter. Notice also that the poles act on the output data and the zeros on the input data. Since the poles act on the output data and affect stability, it is essential that their radii remain inside the unit circle (i.e. < 1) for BIBO (bounded input, bounded output) stability. The radii of the zeros are less critical, as they do not affect filter stability. This is the primary reason why all-zero FIR (finite impulse response) filters are always stable.
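As a minimal Python illustration of the difference equation and the stability condition above (the coefficients are arbitrary example values, not from a specific design), the output can be computed sample-by-sample and checked against scipy’s lfilter(), and the pole radii inspected directly:

import numpy as np
from scipy import signal

# Example 2nd order IIR: y(n) = b0*x(n) + b1*x(n-1) + b2*x(n-2) - a1*y(n-1) - a2*y(n-2)
b = np.array([0.2, 0.4, 0.2])      # numerator coefficients (zeros)
a = np.array([1.0, -0.6, 0.25])    # denominator coefficients (poles), a(0) = 1

x = np.random.randn(64)            # test input
y = np.zeros_like(x)
for n in range(len(x)):
    for k in range(len(b)):
        if n - k >= 0:
            y[n] += b[k] * x[n - k]
    for k in range(1, len(a)):
        if n - k >= 0:
            y[n] -= a[k] * y[n - k]

assert np.allclose(y, signal.lfilter(b, a, x))

# BIBO stability: all pole radii must lie inside the unit circle
print(np.abs(np.roots(a)))         # [0.5, 0.5] here, so the filter is stable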

A discussion of IIR filter structures for both fixed point and floating point can be found here.

Classical IIR design methods

A discussion of the most commonly used or classical IIR design methods (Butterworth, Chebyshev and Elliptic) will now follow. For anybody looking for more general examples, please visit the ASN blog for the many articles on the subject.

Passband ripple, Transition band and Stopband attenuation, IIR filter

ASN Filter Designer’s graphical designer supports the design of the following four IIR classical design methods:

  • Butterworth
  • Chebyshev Type I
  • Chebyshev Type II
  • Elliptic

The algorithm used for the computation first designs an analog filter (via an analog design prototype) with the desired filter specifications set by the graphical design markers – i.e. pass/stopband ripple and cut-off frequencies. The resulting analog filter is then transformed via the Bilinear z-transform into its discrete equivalent for realisation.

Biquad implementations are advocated for numerical stability.
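As an illustration of the biquad (second-order section) idea outside the tool, a higher order IIR design can be requested directly as a cascade of biquads in Python (the order and specifications below are assumed example values):

from scipy import signal

# 5th order Elliptic lowpass: 1 dB passband ripple, 60 dB stopband attenuation,
# 100 Hz cut-off at fs = 48 kHz (example values only)
sos = signal.ellip(5, 1, 60, 100, btype='low', fs=48000, output='sos')
print(sos)   # one row of [b0, b1, b2, a0, a1, a2] per biquad section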

The Bessel prototype is not supported, as the Bilinear transform warps the linear phase characteristics. However, a Bessel filter design method is available in ASN FilterScript.

As discussed below, each method has its pros and cons, but in general the Elliptic method should be considered as the first choice as it meets the design specifications with the lowest order of any of the methods. However, this desirable property comes at the expense of ripple in both the passband and stopband, and very non-linear passband phase characteristics. Therefore, the Elliptic filter should only be used in applications where memory is limited and passband phase linearity is less important.

The Butterworth and Chebyshev Type II methods have flat passbands (no ripple), making them a good choice for DC and low frequency measurement applications, such as bridge sensors (e.g. loadcells). However, this desirable property comes at the expense of wider transition bands, resulting in a slow passband-to-stopband transition (slow roll-off). The Chebyshev Type I and Elliptic methods roll off faster but have passband ripple and very non-linear passband phase characteristics.

Comparison of classical design methods

The frequency response charts below show the differences between the various design prototype methods for a 5th order lowpass filter with the same specifications. As seen, the Butterworth response is the slowest to roll off and the Elliptic the fastest.
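The same comparison can be reproduced as a quick Python sketch with scipy (the 5th order, 100 Hz cut-off, 1 dB passband ripple and 60 dB stopband attenuation are assumed example specifications):

import numpy as np
import matplotlib.pyplot as plt
from scipy import signal

fs, N, fc = 48000, 5, 100     # assumed sample rate, order and cut-off
rp, rs = 1, 60                # assumed passband ripple (dB) and stopband attenuation (dB)

designs = {
    "Butterworth":       signal.butter(N, fc, fs=fs),
    "Chebyshev Type I":  signal.cheby1(N, rp, fc, fs=fs),
    "Chebyshev Type II": signal.cheby2(N, rs, fc, fs=fs),
    "Elliptic":          signal.ellip(N, rp, rs, fc, fs=fs),
}

for name, (b, a) in designs.items():
    w, h = signal.freqz(b, a, worN=2048, fs=fs)
    plt.semilogx(w, 20*np.log10(np.abs(h) + 1e-12), label=name)

plt.xlabel("Frequency (Hz)"); plt.ylabel("Magnitude (dB)"); plt.legend(); plt.show()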

Elliptic

Elliptic filters offer steeper roll-off characteristics than Butterworth or Chebyshev filters, but are equiripple in both the passband and the stopband. In general, Elliptic filters meet the design specifications with the lowest order of any of the methods discussed herein.

Elliptic filter, 5th order

Filter characteristics

  • Fastest roll-off of all supported prototypes
  • Equiripple in both the passband and stopband
  • Lowest order filter of all supported prototypes
  • Non-linear passband phase characteristics
  • Good choice for real-time control and high-throughput applications (e.g. RF)

Butterworth

Butterworth filters have a magnitude response that is maximally flat in the passband and monotonic overall, making them a good choice for DC and low frequency measurement applications, such as loadcells. However, this highly desirable ‘smoothness’ comes at the price of decreased roll-off steepness. As a consequence, the Butterworth method has the slowest roll-off characteristics of all the methods discussed herein.

Butterworth filter 5th order

Filter characteristics

  • Smooth monotonic response (no ripple)
  • Slowest roll-off for equivalent order
  • Highest order of all supported prototypes
  • More linear passband phase response than all other methods
  • Good choice for DC measurement and audio applications

Chebyshev Type I

Chebyshev Type I filters are equiripple in the passband and monotonic in the stopband. As such, Type I filters roll off faster than Chebyshev Type II and Butterworth filters, but at the expense of greater passband ripple.

Chebyshev Type I filter

Filter characteristics

  • Passband ripple
  • Maximally flat stopband
  • Faster roll-off than Butterworth and Chebyshev Type II
  • Good compromise between Elliptic and Butterworth

Chebyshev Type II

Chebyshev Type II filters are monotonic in the passband and equiripple in the stopband making them a good choice for bridge sensor applications. Although filters designed using the Type II method are slower to roll-off than those designed with the Chebyshev Type I method, the roll-off is faster than those designed with the Butterworth method.

Chebyshev type II 5th order

Filter characteristics

  • Maximally flat passband
  • Faster roll-off than Butterworth
  • Slower roll-off than Chebyshev Type I
  • Good choice for DC measurement applications

 

 


Did you know that there are 23 billion IoT embedded devices currently deployed around the world? This figure is expected to grow to a whopping 1 trillion devices by 2050!

Less known is that 80% of IoT devices are based around Arm’s Cortex-M microcontroller technology. Sometimes clients ask us if we support their Arm Cortex-M based demo-board of choice. The answer is simply: yes!

200+ IC vendors supported

The ASN Filter Designer has an automatic code generator for Arm Cortex-M cores, which means that we support virtually every Arm based demo-board: ST, Cypress, NXP, Analog Devices, TI, Microchip/Atmel and 200+ other manufacturers. Our compatibility with Arm’s free CMSIS-DSP software framework removes the frustration of implementing complicated digital filters in your IoT application – leaving you with code that is optimal for Cortex-M devices and that works 100% of the time.

The Arm Cortex-M family of microcontrollers are an excellent match for IoT applications. Some of the advantages include:

  • Low power and cost – essential for IoT devices
  • Microcontroller with DSP functionality all-in-one
  • Embedded hardware security functionality
  • Cortex-M4 and M7 cores with hardware floating point support (enhanced microcontrollers)
  • Freely available CMSIS-DSP C library: supporting over 60 signal processing functions

Automatic code generation for Arm’s CMSIS-DSP software framework

Simply load your sensor data into the ASN Filter Designer signal analyser and perform a detailed analysis. After identifying the wanted and unwanted components of your signal, design a filter and test the performance in real-time on your test data. Export the designed filter to Arm MDK, C/C++ or integrate it into your algorithm in another domain, such as Matlab, Python, Scilab or Labview.

Use the tool in your RAD (rapid application development) process, by taking advantage of the automatic code generation to Arm’s CMSIS-DSP software framework, and quickly integrate the DSP filter code into your main application code.

Let the tool analyse your design, and automatically generate fully compliant code for the M0, M0+, M3 and M4, as well as the newer M23 and M33 Cortex cores. Deploy your design within minutes rather than hours.

Proud Arm knowledge partner

We are proud to be an Arm knowledge partner! As an Arm DSP knowledge partner, we are kept informed of their product roadmap and progress for the coming years.

Try it for yourself and see the benefits that the ASN Filter Designer can offer your organisation by cutting your development costs by up to 75%!

How to choose between analog signal processing (ASP) and digital signal processing (DSP): analog filters or digital filters?

The internet of things (IoT) has gained tremendous popularity over the last few years, as many organisations strive to add IoT smart sensor technologies to their product portfolios. The basic paradigm centres around connecting everything to everything, and exchanging all data. This could range from household appliances to more blue-sky applications, such as smart cities. But what does this mean for you in particular?

Almost all IoT applications involve the use of sensors. But how do SMEs and even multi-national organisations transform their legacy product offering into a 21st century IoT application? One of the first challenges that many organisations face is how to migrate to an IoT application while balancing design time, time to market, budget and risk.

Sounds interesting? Then read further….

We recently completed a project for a client who manufactured their own sensors, but wanted to improve their sensor measurement accuracy from ±10% to better than ±0.5% without going down the road of a massive re-design project.

 

The question that they asked us was simply: “Is it possible to get high measurement accuracy performance from a signal that is corrupted with all kinds of interference components without a hardware re-design?”

Our answer: “Yes, but the winning recipe centres around knowing what architectural building blocks to use”.

Traditionally, many design bureaus will evaluate the sensor performance and try to improve the measurement accuracy by designing new hardware and adding a few standard filtering algorithms to the software. This sort of intuitive approach can lead to very high development costs for only a modest increase in sensor performance. For many SMEs these costs can’t be justified, but perhaps there’s a better way?

Algorithms: the winning recipe

Algorithms and mathematics are usually regarded by many organisations as ‘academic black magic’ and are generally overlooked as a solution for a robust commercial IoT application. As a consequence, very few organisations actually take the time to analyse a sensor measurement problem analytically, and those who do tend to come up with something that’s only usable in the lab. There has been a trend over the years to turn to universities or research institutes, but once again the results are generally too academic and are geared more towards journal publications than a robust solution suitable for the market.

Our experience has been that the winning recipe centres around the balance of knowing what architectural blocks to use, and having the experience to assess what components to filter out and what components to enhance.  In some cases, this may even involve some minor modifications to the hardware in order to simplify the algorithmic solution. Unfortunately, due to the lack of investment in commercially experienced, academically strong (Masters, PhD) algorithm developers and the pressure of getting a project to the finish line, many solutions (even from reputable multi-national organisations) that we’ve seen over the years only result in a moderate increase in performance.

Despite the plethora of commercially available data analysis software, many organisations opt to do basic data analysis in Microsoft Excel, and tend to stay away from any detailed data analysis as it’s considered an unnecessary academic step that doesn’t really add any value. This missed opportunity generally leads to problems in the future, where products need to be recalled for a ‘round of patchwork’ in order to solve the so-called ‘unforeseen problems’. A second disadvantage is that sensor performance may only be satisfactory, whereas a more detailed look may have yielded clues on how to make the sensor performance good or, in some cases, even excellent.

 Algorithms can save the day!

 “Although many organisations regard data analysis as a waste of money, our experience and customers prove otherwise.”

Investing in detailed data analysis at the beginning of a project usually results in some good clues as to what needs to be filtered out and what needs to be enhanced in order to achieve the desired performance. In many cases, these valuable clues allow experienced algorithm developers to concoct a combination of signal processing building blocks without re-designing any hardware – which is very desirable for many organisations! Our experience has shown that this fundamental first step can cut project development costs by as much as 75%, while at the same time achieving the desired smart sensor measurement performance demanded by the market.

So what does this all mean in the real world?

Returning to the story of our customer: after undertaking a detailed data analysis of their sensor data, our developers were able to design a suitable algorithm achieving ±0.1% measurement accuracy from the original ±10%, with only minor modifications to the hardware. This enabled the customer to present their IoT application at a trade show and go into production on time, and yes, we stayed within budget!


It’s estimated that the global smart sensor market will have over 50 billion smart devices in 2020. At least 80% of these IoT/IIoT smart sensors (temperature, pressure, gas, image, motion, loadcells) will use Arm’s Cortex-M technology – where the largest growth is in smart image sensors (ADAS) and smart temperature sensors (HVAC).

IoT sensor measurement challenge

The challenge for most is that many sensors used in these applications require a little filtering in order to clean the measurement data and make it useful for analysis.

Let’s have a look at what sensor data really is…. All sensors produce measurement data. These measurement data contain two types of components:

  • Wanted components, i.e. the information that we want to know
  • Unwanted components: measurement noise, 50/60Hz powerline interference, glitches etc. – what we don’t want to know

Unwanted components degrade system performance and need to be removed.

So, how do we do it?

DSP stands for digital signal processing, and refers to a mathematical recipe (algorithm) that can be applied to IoT sensor measurement data in order to clean it and make it useful for analysis.

But that’s not all! DSP algorithms can also help in analysing data, producing more accurate results for decision making with ML (machine learning). They can also improve overall system performance with existing hardware (no need to redesign your hardware – a massive cost saving!), and can reduce the data sent off to the cloud by pre-analysing data and only sending what is necessary.

Nevertheless, DSP has been considered by most to be a black art, limited only to those with a strong academic mathematical background. However, for many IoT/IIoT applications, DSP has become a must in order to remain competitive and obtain high performance with relatively low cost hardware.

Do you have an example?

Consider the following application for gas sensor measurement (see the figure below). The requirement is to determine the amplitude of the sinusoid in order to get an estimate of gas concentration (the bigger the amplitude, the higher the gas concentration). Analysing the figure, it can be seen that the sinusoid is corrupted with measurement noise (shown in blue), and any estimate based on the blue signal will have a high degree of uncertainty – which is not very useful for getting an accurate reading of gas concentration!

Algorithms clean the sensor data

After ‘cleaning’ the sinusoid (red line) with a DSP filtering algorithm, we obtain a much more accurate and usable signal which helps us in estimating the amplitude/gas concentration. Notice how easy it is to determine the amplitude of the red line.
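As a rough Python sketch of this idea (the sample rate, signal frequency, noise level and filter cut-off below are assumed values for illustration, not data from the figure), a simple low-pass IIR filter recovers the sinusoid well enough to estimate its amplitude:

import numpy as np
from scipy import signal

fs = 500.0                                    # assumed sample rate (Hz)
t = np.arange(0, 2, 1/fs)

# Synthetic 'gas sensor' signal: 5 Hz sinusoid buried in measurement noise
amplitude = 2.0
raw = amplitude*np.sin(2*np.pi*5*t) + 0.8*np.random.randn(t.size)

# Clean the measurement with a low-pass Butterworth IIR filter (assumed 10 Hz cut-off)
sos = signal.butter(4, 10, btype='low', fs=fs, output='sos')
clean = signal.sosfiltfilt(sos, raw)

# Estimate the amplitude from the cleaned signal (RMS of a sinusoid = amplitude/sqrt(2))
est_amplitude = np.sqrt(2) * np.sqrt(np.mean(clean**2))
print(est_amplitude)                          # close to 2.0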

This is only a snippet of what is possible with DSP algorithms for IoT/IIoT applications, but it should give you a good idea as to the possibilities of DSP.

How do I use this in my IoT application?

As mentioned at the beginning of this article, 80% of IoT smart sensor devices are deployed on Arm’s Cortex-M technology. The Arm Cortex-M4 is a very popular choice with hundreds of silicon vendors, as it offers DSP functionality traditionally found in more expensive DSPs. Arm and its partners provide developers with easy to use tooling and a free software framework (CMSIS-DSP) in order to get you up and running within minutes.


With the advent of smart cities, and society’s obsession with ‘being connected’, data networks have been overloaded with thousands of IoT sensors sending their data to the cloud, needing massive and very expensive computing resources to crunch the data.

Is it really a problem?

The collection of all these smaller IoT data streams (from smart sensors) has ironically resulted in a big data challenge for IT infrastructures in the cloud, which need to process massive datasets – as such, there is no more room for scalability. The situation is further complicated by the fact that a majority of sensor data comes from remote locations, which also presents a massive security risk.

It’s estimated that the global smart sensor market will have over 50 billion smart devices in 2020. At least 80% of these IoT/IIoT smart sensors (temperature, pressure, gas, image, motion, loadcells) will use Arm’s Cortex-M technology, but have little or no smart data reduction or security implemented.

The current state of play

The modern IoT eco system problem is three-fold:

  • Endpoint security
  • Data reduction
  • Data quality

Namely, how do we reduce the data that we send to the cloud, how do we ensure that the data is genuine, and how do we ensure that our endpoint (i.e. the IoT sensor) hasn’t been hacked?

The cloud is not infallible!

Traditionally, many system designers have thrown the problem over to the cloud. Data is sent from IoT sensors via a data network (Wifi, Bluetooth, LoRa etc) and is then encrypted in the cloud. Extra services in the cloud then perform data analysis in order to extract useful data.

So, what’s the problem then?

This model doesn’t take into account invalid sensor data. A simple example of this could be glue failing on a temperature sensor, such that it’s not bonded to the motor or casing that it’s monitoring. The sensor will still give out temperature data, but it’s not valid for the application.

As for data reduction – the current model is ok for a few sensors, but when the network grows (as is the case with smart cities), the solution becomes untenable, as the cloud is overloaded with data that it needs to process.

There is also no endpoint security, i.e. the sensor could be hacked, and the hacker could send fake data to the cloud, which would then be encrypted and passed on to the ML (machine learning) algorithm as genuine data.

What’s the solution?

Algorithms, algorithms… and built-in security blocks.

Over the last few years, hundreds of silicon vendors have been placing security IP blocks into their silicon together with a high-performance Arm Cortex-M4 core. These so-called enhanced microcontrollers offer designers a low-cost and efficient solution for IoT systems for the foreseeable future.

A lot can be achieved by pre-filtering sensor data, checking it and only sending what is necessary to the cloud, as sketched below. However, as with so many things, knowledge of security and algorithms is paramount for success.
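As a minimal sketch of this edge-side data reduction idea (the sample rate, cut-off, validity check and feature set below are all assumptions for illustration), the node filters a block of raw samples, sanity-checks it and transmits only a handful of summary features rather than the raw stream:

import numpy as np
from scipy import signal

fs = 1000.0                       # assumed sample rate (Hz)
block = np.random.randn(1000)     # stand-in for one second of raw sensor samples

# Pre-filter: 2nd order low-pass at an assumed 50 Hz cut-off
sos = signal.butter(2, 50, btype='low', fs=fs, output='sos')
clean = signal.sosfilt(sos, block)

# Simple validity check before trusting the data (e.g. a stuck or unbonded sensor)
valid = np.ptp(clean) > 1e-3      # assumed minimum peak-to-peak activity

# Data reduction: send a few summary features instead of 1000 raw samples
payload = {
    "rms":   float(np.sqrt(np.mean(clean**2))),
    "mean":  float(np.mean(clean)),
    "peak":  float(np.max(np.abs(clean))),
    "valid": bool(valid),
}
# payload would now be signed/encrypted and sent to the cloud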

I recently attended a seminar on advanced instrumentation, where algorithms were heavily featured. The project pitches emphasised implementation rather than analysis and design, which started an interesting discussion and led me to think about sharing some hints that we’ve successfully used over the years:

1. What do we want to achieve? This is perhaps obvious, but I’ve seen that many people do overlook this step and jump into Matlab or C in order to try something out. I would urge some caution here, and suggest that you think very carefully about what you’re about to undertake before writing a single line of code. Don’t be afraid to ask your colleagues/network for advice, as their suggestions may save you months of development time. Also consider using established techniques, such as MoSCoW.

2. The specifications: After establishing the ‘big picture’, split up the specifications into ‘must haves’ and ‘nice to haves’. This may take some time to work out, but undertaking this step saves a considerable amount of time in the development process, and keeps the client in the loop. The specifications don’t need to be 100% complete at this stage (there are always minor details to be worked out), but make sure that you’re clear about what you’re about to undertake, and don’t be afraid to do some analysis or short experiments if required.

3. Algorithm design: Sketch out the algorithm’s building blocks (Visio is a good tool), and for each idea produce a short list of bullets (pros and cons) and the computational complexity. This will allow you to easily review each concept with your peers.

4. Test data: arrange for some test vector data (from clients, or design some of your own synthetic signals), and sketch out a simple test plan describing the test vectors that you aim to use to validate your concept.

5. Development: Depending on your programming ability, you may decide to implement in C/C++, but Matlab/Octave are very good starting points, as the dynamic data types, vector math and toolboxes give you maximum flexibility. Use the test plan and vectors that you’ve designed in step 4. As for how best to design your algorithm for streaming applications, I would say that many aspects of the algorithm can be tested with an offline (data file) approach. For a majority of our radar and audio work, we always begin with a data file comprising 10-30 seconds’ worth of data in order to prove that the algorithm functions as expected. Subsequent implementation steps can be used to make the algorithm streaming, but bear in mind that this may take a considerable amount of time!

6. Avoid a quick fix! Depending on the complexity of your algorithm, there will be certain test vectors that degrade the performance of your algorithm or even cause it to fail completely. Allocate some time to investigate this behaviour, but remember to prioritise its importance, and don’t spend months looking for a minor bug. Try to avoid looking for a quick fix or a patch, as they generally re-appear in the future and kick you up the backside.

7. Implementation: after verifying that your concept is correct, you can finally consider target implementation. This step couples back to the previous steps, as the algorithm complexity will have direct influence on the implementation platform and development time. Some good questions to ask yourself: Is the target platform embedded? In which case, do I need an FPGA, DSP or microcontroller? Will it be fixed point or floating point? Perhaps it will be PC based, in which case is it for Windows, Linux or Mac or for a tablet? What tools do you need in order to develop and test the algorithm?

8. Validation: Verify that your implemented algorithm works with your test vectors, and look for any difficult cases that you can find – remembering point 6.

9. Documentation: In all of the aforementioned steps, documentation is essential. Make sure that you document your results, and provide a paper trail such that a colleague can continue with your work if you get hit by a bus.
