
Over the last few years, there has been tremendous interest in the possibility of replacing humans in the workplace with AI. One obvious advantage is AI's ability to process massive amounts of data and perform repetitive tasks (such as data entry) much more efficiently than humans, leading to higher productivity and reduced operational costs. AI can also work continuously without fatigue, enabling 24/7 customer service. However, can AI truly think and reason like a human?

Challenges in understanding the real world

Our experience with AI systems suggests that the key limitation of current AI is that it has no common sense, as there is no reasoning component in current-generation, model-based inference. As such, current AI does not truly understand the world the way humans do. AI models are trained on large datasets, which are essentially collections of text, images, sensor data and so on, and they generate responses based on statistical correlations in the data. While they can learn patterns and extract important features from this data, they don't inherently understand its meaning. Understanding is a fundamental component of human intelligence and common sense, allowing us to take actions and draw conclusions that may seem logical to some, but irrational to people with other experiences in life. In short, we can conclude that common sense is built up from real-world experiences, social interactions, emotions and context, which is something that AI currently lacks.

So far we have argued that current AI models lack common sense due to the absence of a reasoning component. There is also the potential for AI models to converge on solutions that do not adhere to Bayesian learning principles, which is an important consideration. We'll now look at this aspect in depth with a few examples, but we'll start with examples of where having no common sense can be turned into an advantage.

Transparency and fake news

Having no common sense and no real understanding of the data can also be turned into an advantage. Consider the example of an AI model sorting through CVs (resumes) to find suitable candidates for a job. The model can be limited to focus only on work experience and education, ignoring name, gender and nationality, making the process much more transparent. Humans will generally form a picture in their heads of the candidate and may then appraise the CV with prejudice rather than on merit.

Perhaps one of the most striking examples has been in the media, where a model can be fed a certain narrative (e.g. anti-abortion or pro-war) and then instructed to produce a news article, filling in the details with arbitrary photos and facts taken from other media sources and older publications. Many of these articles are not verified by an editorial team before publication, resulting in unconfirmed stories making it onto news websites. This is not limited to news: reviewers of several scientific publications have also reported fake articles sent for review, some of which have been published, which is an area of concern for the scientific community.

Is it a Dog or a Cat?

Consider an AI model trained to classify images of animals. The model can accurately classify images of cats and dogs when presented with typical images of these animals. However, when presented with an unusual image, the model might fail to classify it correctly due to the lack of reasoning and common sense.

For instance, if the AI model is given an image of a cat wearing a dog costume, it might classify the image as a dog because it lacks the reasoning component to understand that the core features of a cat are still present despite the costume. A human, using common sense, would easily identify the animal as a cat in a dog’s costume.

In this example, the AI model converges on a solution that classifies the image as a dog, which may violate Bayesian learning principles that would consider the prior probability of encountering a cat versus a dog in such a context.
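To make the Bayesian point concrete, here is a minimal sketch in Python with entirely made-up priors and likelihoods; it simply shows how Bayes' rule would weigh dog-like costume features against cat-like facial features:

```python
# Minimal sketch: Bayes' rule applied to the cat-in-a-dog-costume example.
# All priors and likelihoods are made-up numbers for illustration only.
p_cat, p_dog = 0.5, 0.5            # prior: cats and dogs equally likely
p_obs_given_cat = 0.10             # a cat rarely looks this dog-like...
p_obs_given_dog = 0.02             # ...but a dog rarely shows cat facial features

evidence = p_obs_given_cat * p_cat + p_obs_given_dog * p_dog
posterior_cat = p_obs_given_cat * p_cat / evidence
print(f"P(cat | image) = {posterior_cat:.2f}")   # ~0.83 -> 'cat' should win
```

Even with these crude numbers, a model that accounted for the prior and the full evidence would favour 'cat', whereas a purely pattern-matching classifier latches onto the costume.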

This limitation highlights the importance of integrating reasoning components into AI models to enhance their common sense and improve their ability to handle unusual or unexpected situations effectively.

Bayesian learning enhances deep learning networks by providing uncertainty quantification, preventing overfitting, facilitating model comparison, enabling data-efficient learning, and improving interpretability. This makes Bayesian approaches highly valuable in critical applications where reliability, robustness, and transparency are paramount. More information can be found in the following video.

Data vs Science for IoT T&M applications

Many IoT test and measurement (T&M) and calibration methods use sinewaves to check compliance of the DUT (device under test) by measuring the sinewave's amplitude. Some examples include:

  • Measuring material fatigue/strain with a load cell – in vehicle and bridge/building applications, measuring material fatigue and strain is essential for safety. An AC sinusoidal excitation overcomes the difficulty of dealing with the DC offsets of the instrumentation electronics.
  • Calibrating CT (current transformer) sensor channels – a sinusoid of known amplitude is applied to the channel input and the output amplitude is measured.
  • Measuring gas concentration in infra-red gas sensors – the resulting sinusoid's amplitude is used to provide an estimate of gas concentration.
  • Measuring harmonic amplitudes in power quality smart grid applications – in 50/60Hz power systems, certain harmonic amplitudes are of interest.
  • ECG biomedical compliance testing (IEC60601-2-47) – channel compliance with IEC regulations, needed for FDA testing, typically uses a set of sinewaves at known amplitudes to ensure that the channel's signal chain amplitude error is within specification.

The last example is particularly interesting, as the basic idea is to measure the amplitude differences in the DUT's signal chain for a set of sinewaves at 0.67, 1, 2, 10, 20 and 40Hz with respect to a 5Hz reference sinewave. It is assumed that the amplitude of all input sinewaves remains constant, and that the relative amplitude error must be within ±3dB for the signal chain to be classed as IEC compliant.
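As a simple illustration of this compliance check, the following Python sketch computes the relative amplitude error in dB of each test frequency against the 5Hz reference; the measured amplitudes here are made-up values for illustration only:

```python
import numpy as np

# Hypothetical measured sinewave amplitudes (volts) per test frequency,
# including the 5 Hz reference -- made-up numbers for illustration.
measured = {0.67: 0.96, 1.0: 0.98, 2.0: 1.00, 5.0: 1.00,
            10.0: 1.01, 20.0: 0.99, 40.0: 0.92}

a_ref = measured[5.0]                        # 5 Hz reference amplitude
for f, a in measured.items():
    err_db = 20.0 * np.log10(a / a_ref)      # relative amplitude error in dB
    verdict = "PASS" if abs(err_db) <= 3.0 else "FAIL"
    print(f"{f:>6.2f} Hz: {err_db:+6.2f} dB  {verdict}")
```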

There are a number of signal processing methods that can be employed to perform the estimation, such as the FFT, AM demodulation, the Hilbert transform and full-wave rectification. All of these methods require extra filtering operations; the FFT, for example, requires low-frequency trend removal (usually a DC offset) and windowing, so there are a number of factors to take into consideration, which complicates the challenge. The FFT is perhaps the most widely used method, but it is limited by its frequency resolution, which leads to a bias in the amplitude estimate if the sinewave is not centred at a multiple of the frequency bin resolution (\(F_s/N\)).

As most IoT devices use a low-cost oscillator, the sampling rate error can be as high as ±3%, leading to a significant bias in the amplitude estimation using the FFT method. Therefore, an important first step for establishing an estimate of the sinewave’s amplitude is to estimate the exact sampling frequency of the DUT.
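The following minimal Python sketch illustrates this bias under assumed parameters: a tone that falls exactly on an FFT bin is estimated correctly, while a half-bin offset, of the kind a sampling-rate error produces, is under-estimated by almost 4dB:

```python
import numpy as np

Fs, N, A = 200.0, 1024, 1.0                  # assumed nominal setup
n = np.arange(N)
for f in (50 * Fs / N, 50.5 * Fs / N):       # on-bin vs half-bin offset tone
    x = A * np.sin(2 * np.pi * f * n / Fs)
    amp = 2.0 / N * np.abs(np.fft.rfft(x)).max()   # single-sided peak estimate
    print(f"f = {f:8.4f} Hz -> FFT amplitude estimate {amp:.3f}")

# On-bin: ~1.000. Half-bin offset: ~0.637 (about -3.9 dB of scalloping
# loss with a rectangular window), already outside the ±3 dB IEC
# tolerance, so an unnoticed sampling-rate error can corrupt the test.
```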

Another, simpler method that we've seen on some IoT devices is to use high time-resolution timestamps from a higher-accuracy crystal oscillator, but for lower-cost IoT devices this may not be available, so it's better to have a strategy that extracts the exact sampling rate from the dataset.

Sampling rate estimation using AI

Sampling rate estimation can be achieved using AI, whereby datasets of the known input test sinewaves are collected for a set of candidate rates around the ideal sampling rate. Many IoT ECG devices typically sample at 200Hz. Therefore, assuming an ideal sampling rate of 200Hz, we can generate test sinewave data sampled at 199.5Hz, 199.6Hz, …, 200.4Hz, 200.5Hz. This collection of sinewave data can then be fed into an ML classifier to estimate the true sampling rate. Assuming that the training dataset is large enough to cover all required scenarios, this method will work.

However, it should be noted that this approach doesn't have any common sense, since it's purely data driven and has no understanding of the physical process that it's modelling. This becomes apparent if the true sampling rate is, say, 199.54Hz. The model has no data for this scenario, and because it has no common sense it can't improvise: it must choose between 199.5Hz and 199.6Hz, which leads to a bias in the sampling rate estimate. Another problem appears if a different sampling rate or other test frequencies are used, as these were not taken into account during training.
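A minimal sketch of this data-driven approach is shown below, using a nearest-neighbour classifier from scikit-learn; the feature extraction, record length and noise level are illustrative assumptions rather than a recommended design:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

f_tone, N = 5.0, 2000                                 # known test tone, samples
rates = np.round(np.arange(199.5, 200.51, 0.1), 1)    # candidate rates (Hz)

def features(x):
    # Crude feature vector: low-frequency magnitude spectrum of the record
    return np.abs(np.fft.rfft(x))[:100]

rng = np.random.default_rng(1)
X, y = [], []
for fs in rates:
    x = (np.sin(2 * np.pi * f_tone * np.arange(N) / fs)
         + 0.01 * rng.standard_normal(N))
    X.append(features(x))
    y.append(str(fs))                    # one class label per candidate rate

clf = KNeighborsClassifier(n_neighbors=1).fit(X, y)

# A device whose true rate is 199.54 Hz lies between two classes: the
# classifier must snap to 199.5 or 199.6, biasing the estimate.
x_dev = np.sin(2 * np.pi * f_tone * np.arange(N) / 199.54)
print(clf.predict([features(x_dev)]))    # -> '199.5' (or '199.6')
```

Running the sketch shows the classifier snapping to one of its trained labels, which is exactly the quantisation bias discussed above.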

Sampling rate estimation using a UKF

An alternative approach is to model the physical process using an Unscented Kalman Filter (UKF). The UKF’s flexibility allows for a more detailed mathematical model of the process to be implemented, leading to the possibility of estimating the sinewave’s amplitude, phase, DC offset as well as the true sampling rate.

Assuming stationarity, a mathematical model of the process can be described as,

\(\displaystyle y(n) = B+ A \sin(2\pi f\frac{n}{F_s} + \theta) + v(n)\)

where \(\theta\) is the initial phase offset, \(v(n)\) is the measurement noise, and \(A\) (the sinewave's amplitude), \(B\) (the signal's DC offset) and \(F_s\) (the sampling rate) are the parameters that we want to estimate.

This model can be broken down and the entities of interest (\(A, B\) and \(F_s\)) implemented as state variables in the Kalman update equations. Notice that although the phase of the sinewave is linear, the output of the \(\sin()\) function is non-linear, which means that the relationship between the observed signal and the entity of interest (\(F_s\) in our case) is non-linear. This is the main reason for choosing the UKF, as it is well suited to handling non-linear relationships. A description of the UKF equations is beyond the scope of this article; the reader is referred to the excellent textbooks on the UKF for a complete description of the algorithm.
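As an illustration of how such a model might be implemented, here is a minimal sketch using the open-source filterpy library. The state vector, noise covariances and initial guesses are all illustrative assumptions, not tuned values; the phase is carried as an extra state so that \(F_s\) enters the dynamics:

```python
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

f_tone = 32.3          # known test-sinewave frequency (Hz), assumed exact

def fx(x, dt):
    # State vector: [A, B, Fs, phi]. Amplitude, DC offset and sampling rate
    # are modelled as constants; the phase advances by 2*pi*f/Fs per sample,
    # which is where Fs enters the dynamics.
    A, B, Fs, phi = x
    return np.array([A, B, Fs, phi + 2.0 * np.pi * f_tone / Fs])

def hx(x):
    # Measurement model: y(n) = B + A*sin(phi), matching the equation above
    A, B, _, phi = x
    return np.array([B + A * np.sin(phi)])

# Synthetic DUT data: nominal 100 Hz clock with a 0.1% error (100.1 Hz)
rng = np.random.default_rng(0)
Fs_true, N = 100.1, 300
zs = (np.sin(2 * np.pi * f_tone * np.arange(N) / Fs_true)
      + 0.01 * rng.standard_normal(N))

points = MerweScaledSigmaPoints(n=4, alpha=1e-3, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=4, dim_z=1, dt=1.0,
                            hx=hx, fx=fx, points=points)
ukf.x = np.array([0.8, 0.0, 100.0, 0.0])     # initial guesses: A, B, Fs, phi
ukf.P = np.diag([0.5, 0.5, 1.0, 0.5])        # initial uncertainty
ukf.R *= 1e-3                                # measurement noise (assumed)
ukf.Q = np.diag([1e-9, 1e-9, 1e-7, 1e-9])    # small process noise (assumed)

for z in zs:
    ukf.predict()
    ukf.update(np.array([z]))

print(f"Estimated A = {ukf.x[0]:.3f}, B = {ukf.x[1]:.3f}, "
      f"Fs = {ukf.x[2]:.3f} Hz")             # Fs should approach 100.1
```

Note that the known test-tone frequency (f_tone) is entered directly into the state equations; this is the 'understanding of the process' that distinguishes the UKF approach from the purely data-driven classifier above.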

Assuming that the test sinewave is of high accuracy (a realistic assumption, since a modern calibrated signal generator has a frequency error in the \(\mu\)Hz region), we can use the Kalman filtering equations to estimate the true sampling rate over time. An important point to realise is that, like the AI method described above, the UKF doesn't have any common sense either, but it has the virtue of 'understanding' the process that it's modelling by virtue of the mathematical model implemented in its update equations. This means that a sinewave of any frequency and sampling rate can be applied to the UKF, and assuming that the exact sinewave frequency is also entered into the state equations, the UKF method will always work.

However, one potential weakness of the method for this application is that the Kalman filter is a statistical state estimation method: its state estimates will be optimal in a statistical sense, but there is no guarantee that they will be correct in a deterministic (absolute) sense.

An animation of the UKF estimation for a 32.3Hz sinewave, sampled at 100Hz with a 0.1% sampling rate error (100.1Hz), is shown below. As seen, the UKF correctly converges on the state estimates of the test sinewave within 1 second.

AI in weapons technology

Recently, much emphasis has been placed on developing smart weapons using AI technology. Major weapons manufacturers from all over the world are currently experimenting with AI-based drone technology that can attack enemy combatants in swarms, as well as deploying GPS-guided smart munitions and developing new EW (electronic warfare) jamming technology.

Many Western nations allocate substantial resources to defence spending, with significant portions of their budgets dedicated to military operations and technological advancements. However, modern conflict zones highlight the evolving challenges that defence systems face, particularly in terms of operational complexity and sustainability in international theatres of operation.

Looking further to the East, nations such as Russia and China allocate a much smaller budget to their military-industrial complex. Their approach focuses instead on utilizing AI in a more targeted manner, emphasizing established scientific and mathematical principles, such as control theory, with AI applied for classification purposes. Over recent years, as repeatedly demonstrated in various international conflict zones, this strategy has proven to be very effective. Technologies like hypersonic missiles and electronic warfare systems developed by Russia have managed to evade many Western air defence systems, altering the balance of power in several theatres of operation. This performance challenges the notion that simply investing large sums of money into AI weapons technology guarantees superior results.

Returning to the subject at hand, all of these AI smart weapons still lack any common sense, as the AI cannot reason like a human. As such, it is dangerous to allow these systems to operate autonomously and to have high expectations of their performance in a combat situation. That being said, researchers working at Russia's AIRI research institute claim to have taken significant steps towards developing the world's first self-learning AI system (Headless-AD), which can adapt to new situations/tasks without any human intervention by autoregressively predicting actions using the AI's existing learning history as context. If successful, Headless-AD would be a great leap forward in developing sentient AI technology for all walks of life.

Human Intelligence and Digital Intelligence

In the field of computation, it is essential to differentiate between traditional algorithms and machine learning models. An algorithm is a direct output of human intelligence, crafted through logical reasoning and problem-solving techniques. It represents a set of predefined instructions designed to solve specific problems. The human mind formulates these steps to ensure a consistent and accurate outcome.

In contrast, a trained machine learning (ML) model is the product of digital intelligence. While algorithms underpin the model's structure, the true power of an ML model arises from its capacity to learn from large-scale data. This process involves adjusting parameters during training (not at inference/runtime) to optimize performance in tasks like prediction, classification, or decision-making. In this sense, the model evolves beyond its initial algorithmic foundation, generating insights and results that may not be directly encoded by human logic.

An algorithm is a direct manifestation of human intelligence, designed through logic, reasoning, and problem-solving techniques. On the other hand, a trained machine learning model represents the outcome of digital intelligence, which evolves through the iterative processing of data.

The convergence of these two forms of intelligence—human and digital—marks a significant shift in computational systems. Algorithms, though foundational, are static and require manual updates. Machine learning models, by contrast, can be improved by providing them with more training data when available. This shift positions ML models as more flexible and adaptive tools for solving complex problems where human-defined rules may fall short.

The distinction between human-driven algorithms and data-driven machine learning models emphasizes the growing role of adaptive systems in areas such as autonomous driving, personalized medicine, and financial forecasting. As machine learning continues to evolve, the boundaries between explicit programming and emergent behavior will continue to blur, paving the way for systems capable of independent learning and decision-making.

Key takeaways

There has been considerable interest in the potential of replacing human roles in the workplace with AI. However, as discussed herein, AI fundamentally lacks an understanding of the meaning behind the data it processes for classification tasks. This understanding is a core component of human intelligence and common sense, which enables individuals to make decisions and draw conclusions that may appear logical to some but irrational to others based on varying life experiences. In essence, common sense is derived from real-world experiences, social interactions, emotions, and context, attributes that AI currently lacks and is unlikely to acquire in the foreseeable future.

Nevertheless, the absence of common sense and a deep understanding of data can also be leveraged to create a more transparent process for job applicants and application reviews. Conversely, AI can be utilized to generate misleading information shaped by influential entities to support specific narratives and sway public opinion.

Authors

  • Dr. Sanjeev Sarpal

    Sanjeev is an AIoT visionary and expert in signals and systems with a track record of successfully developing over 25 commercial products. He is a Distinguished Arm Ambassador and advises top international blue chip companies on their AIoT solutions and strategies for I4.0, telemedicine, smart healthcare, smart grids and smart buildings.

  • Dr. Jayakumar Singaram

Jayakumar is a seasoned expert in semiconductor technology and AIoT. He advises companies such as Mistral Solutions, SunPlus Software, and Apollo Tyres at the strategic level on their AIoT solutions. He successfully founded Epigon Media Technologies, which focuses on research and development for the global market, and is also the co-author of the book "Deep Learning Networks: Design, Development, and Deployment."


“Improve your existing resources”

In a previous blog, we talked about how IoT can help you take control. There is a further step: optimizing your processes with AIoT.

Benefits

  • Better use of existing resources
  • Take the right decisions at the right time
  • Optimal circumstances

Better use of existing resources

Control means you have a clear overview of how assets are being used, such as:

  • How long does each step in a process take?
  • What are the whereabouts of my assets (trucks, cranes, forklifts, containers…)?
  • What is the state of maintenance?

The next step is of course, to optimize your business.

First of all, many blogs write about totally new situations. In fact, most AIoT is needed in companies that are already established, with their inventory, processes, customers and all the responsibilities that come with them. Large investments have been made to reach the business of today, and although their processes may not be optimal, at least they work. So how can you benefit from AIoT without throwing away all these investments? And how can you be sure processes keep working at least as well as they do now? Smart sensors help to bring the whole process up to today's level, without throwing away resources that are working fine. Besides, companies can choose to implement AIoT piecemeal.

This is especially true for highly essential functions such as infrastructure, sluices and installations. Here, the asset is not just an asset, but part of a total infrastructure. Downtime of such an asset has large implications for society as a whole.

Many processes are still monitored piecemeal. A further optimization is to connect systems with each other: get one overview in one dashboard. Learn how your processes are doing, and where optimizations are required.

Take the right decisions at the right time

To measure is to know, to know is to be able to improve.

One of the most mentioned benefits of AIoT is preventive maintenance. Preventive maintenance means that something is repaired or replaced before it breaks, or at least maintained while the damage is still small. Where there would normally be downtime, repairs can now be scheduled, and if downtime is needed for repairs, it can be scheduled at the least inconvenient times.

As already mentioned, repairs can be scheduled: take the right decisions at the right time. Besides, in the old situation a foreman had to do his rounds, giving each machine the same attention. With AIoT, the quality of the assets can be guarded with sensors, so on his rounds a foreman can give the most attention to the machines that need it most.

The same applies to a sector such as biomedical: 'to prevent is better than to cure'. So, help your clients and/or yourself to stay healthy. One example is fall detection. Another: is the elderly patient taking their medicine?

Help your patients with therapy by making use of knowledge from all previous patients: is the therapy on track? Also, give the patients who need it the right amount of attention, instead of seeing all your patients in a standard scheduled time slot and, as a consequence, giving none of them enough time. If therapy is lagging, you probably want to give those patients more attention. Is therapy going faster than expected: what are the reasons? How can this knowledge be used to improve therapy in the future? Besides, if people can do therapy and appointments at home, they don't have to spend their precious time travelling; often the actual time needed for treatment is shorter than the time spent on travelling and waiting.

Optimal circumstances

Sensors can ensure that products are made and kept under optimal conditions. For example, they can check whether the cutting parts of a machine are still sharp enough and within the right precision, guard the temperature of cooling, or keep an eye on indoor air quality. This may also make guarantees possible, thus adding value to your products or services.

Since the end of July 2020, KPN has renewed its mobile network to enable 5G and is rapidly expanding coverage throughout the Netherlands. Business customers and entrepreneurs can already make use of special 5G services. Since 5G is one of the enablers for smart industry, this is a chance to see tomorrow's digital highway in action.

3.5 GHz frequency auction

Starting in 2022, 5G frequencies will be auctioned. Meanwhile, together with customers and technology partners, telecom and ICT service provider KPN has launched 5G field labs to discover the value of 5G applications. Thanks in part to 5G technology, these types of applications will become a reality.

During the new 5G auction, frequencies in the 3.5 GHz band will be distributed. These will enable connections at much higher speeds. At the auction in the first quarter of 2022, at least three parties must obtain licenses for the frequencies. No single party may acquire more than 40 percent of the available frequencies, says the proposal for the course of the auction.

A total of 300 megahertz of bandwidth is to be distributed. This consists of three blocks of 60 megahertz and twelve blocks of 10 megahertz. The auction will be held in three phases. Prior to and during the auction the Ministry of Economic Affairs will not provide information about the total number of participants. At the end, the winning parties will be announced and the State Secretary will make the entire bidding process public.

Tomorrow’s digital highway

Thanks to the capacity and reliability of our network, new applications such as innovations in security, healthcare, mobility, logistics and the manufacturing industry become possible.  Unlike 4G, 5G is expected to become an ecosystem from which many business sectors, industries and areas can benefit. Innovations from which the whole of society benefits. In addition to higher speed, 5G focuses explicitly on flexibility in the network to support very short response times and higher reliability. This will enable a wide range of new applications for customers and industries.

5G is expected to provide a huge boost in business for augmented and virtual reality, robotics, drones, intelligent assets, wearables, AI-based video analytics and Internet of Things (IoT), among others.

In addition to 5G, Internet of Things also requires edge computing. This involves placing a small cloud with computing power, storage and network capacity at the edge of the network, as it were, close to applications, devices and users. Because data no longer has to travel all the way up and down to the cloud or a data center, time-critical applications such as self-driving cars and augmented reality become possible.

5G: enabler for Industry 4.0

5G is also a key enabler for Industry 4.0. This involves using Internet of Things, cloud computing and data integration, among other things, to make the production process fully computer-controlled and remote. The human thought process is thereby partially or completely taken over. Due to its high speed and reliability and short latency, 5G is essential within Industry 4.0 for, for example, controlling production lines, facilitating self-driving vehicles and connecting large numbers of IoT devices.

Field Labs

KPN Eindhoven 5G Fieldlab

The national rollout of 5G has only just started, but KPN has been testing 5G for useful applications in its Field Labs for some time. 

The 5G field lab for the manufacturing industry shows which 5G indoor use cases are possible in a factory environment. Besides speed, 5G also enables greater reliability and very low network latency. A large number of wireless sensors also play an important role in the further rollout of the IoT. Thus, the 5G Field Labs show that 5G can be used for very different applications simultaneously.

Biomedical devices are at the forefront of AI and IoT (together often called AIoT). What is your most important reason to use sensors for biomedical devices?


To control

  • Does the patient follow the medical instructions? For example: is he doing his therapy on time and in the right way? Does he take his medication? At-risk groups in particular can be monitored so that timely action can be taken if necessary
  • Is treatment going well? This benefits doctor and client alike. Even better: you can optimize the healing process
  • Do medical devices still give the right measurements?

To optimize

  • Optimize your treatment: compare your client's treatment results with those of your other clients, and thus find points for improvement
  • Give attention to those who need it. Nobody wants to spend unnecessary time in a waiting room
  • Better use of existing resources
  • Connect systems with each other
  • Take the right decisions at the right time
  • Preventive maintenance
  • Security

To innovate

  • Better serve your clients
  • Be at the forefront of medical developments
  • Track & trace
  • Create optimal circumstances with modern technology

To save

  • Give the client the best care
  • Spend your budget where most needed
  • To prevent is better than to cure
  • Prevent greater suffering, avoid extra high costs
  • Nobody wants unnecessary treatment
  • Preventive maintenance on medical devices prevents higher repair costs and downtime

ASN Filter Designer's new ANSI C SDK framework provides developers with a comprehensive automatic C code generator for microcontrollers and embedded platforms. This allows developers to directly deploy their AIoT filtering application from within the tool to any STM32, Arduino, ESP32, PIC32, BeagleBone or other Arm, RISC-V or MIPS microcontroller.

Arm’s CMSIS-DSP library vs. ASN’s C SDK Framework

Thanks to our close collaboration with Arm’s architecture team, our new ultra-compact, highly optimised ANSI C based framework provides outstanding performance compared to other commercial DSP libraries, including Arm’s optimised CMSIS-DSP library.

Benchmarks for STM32: M3, M4F and M7F microcontrollers running an 8th order IIR biquad lowpass filter for 1024 samples

As seen, using -O1 compiler optimisation, our framework surpasses the performance of Arm's CMSIS-DSP library on the M4F and M7F. Performance of both libraries is lower on the Cortex-M3, as it doesn't have an FPU, and on that core the two libraries perform comparably. The ASN DSP library has the added advantage of extra functionality and of being platform agnostic, making it ideal for a variety of biomedical (ECG, EMG, PPG), audio (sound effects, equalisers), IoT (temperature, gas, pressure) and I4.0 (flow measurement, vibration analysis, CbM) applications.

AIoT applications designed on the newer Cortex-M33F and Cortex-M55F cores can also take advantage of extra filtering blocks and double-precision arithmetic support, providing a simple way of implementing high-performance AI-on-the-Edge applications within hours.

Advantages for developers

  • A developer can now develop, test and deploy a complete DSP filtering application within the ASN Filter Designer within a few hours. This is very different from a traditional R&D approach that assigns a team of developers for several days in order to achieve the same level of accuracy required for the application.
  • Open source and agnostic code base: In order to allow developers to get the maximum performance for their applications, the ASN-DSP SDK is provided as open source and is written in ANSI C. This means that any embedded processor and any level of compiler optimisation can be used.
  • The memory footprint of the ASN-DSP SDK is relatively low compared with other standard DSP libraries, which makes it extremely suitable for microcontrollers with memory constraints.
  • Using the ASN Filter Designer's signal analyser tool, developers can now test the performance and accuracy of their designed filter, assess its frequency response, and obtain optimised C code that they can use directly in their application.
  • The SDK also supports some extra filtering functions, such as: a median filter, a moving average filter, all-pass, single section IIR filters, a TKEO biomedical filter, and various non-linear functions, including RMS, Abs, Log and Sqrt.  These functions form the filter cascade within the tool, and can be used to build signal processing applications, such as EMG and ECG biomedical applications.
  • The ASN-DSP SDK supports both single and double precision floating-point arithmetic, providing excellent numerical accuracy and wide dynamic range. The library is unique in that it supports double precision arithmetic, which, although not optimal for microcontrollers, allows for the implementation of high-fidelity filtering applications.

The ANSI C SDK framework is further extended by our new C# .NET framework, allowing .NET developers to build high performance desktop applications with signal processing capabilities.

Find out more and try it yourself

Benchmarks on a variety of 32-bit embedded platforms, including a biomedical EMG filtering example, are covered in the following application note.

Both framework SDKs are available in ASNFD v5.0, which may be downloaded here.