
From lithography machines to medical instruments, real-world edge systems demand deterministic behaviour, reliable sensor interpretation, and carefully engineered algorithms that go far beyond simply training an AI model.

Artificial Intelligence (AI) has been glorified as the future of automation, often portrayed as the ultimate solution for efficiency, decision-making, and innovation across industries. It is marketed as a transformative technology for everything from healthcare and finance to autonomous systems and industrial processes.

In practice, this narrative does not reflect present reality, as AI in its current form remains too limited to be relied upon for mission-critical applications that require deterministic behaviour, such as those stipulated by ISO/IEC standards. Although AI performs impressively in controlled settings, it often struggles when exposed to the complexity, variability and unpredictability of real-world environments — particularly when those conditions fall outside the assumptions used to train the model.

This degradation in performance occurs because AI lacks common sense reasoning and struggles with real-world subtlety, i.e. it does not understand the real world in the same way that humans do. Trained largely on synthetic or otherwise limited datasets, it has no intrinsic grasp of the physical or situational subtleties present in operational environments. When real-world conditions differ from those seen during training, AI systems often misinterpret context, leading to unreliable or misleading outcomes — an unacceptable limitation for any mission-critical application.

What is a mission-critical application?

An application is classed as mission-critical when results must be delivered predictably, repeatably and within guaranteed timing constraints defined by system and stakeholder requirements. In many operational environments, unreliable or delayed behaviour can lead directly to financial loss, disrupted processes or regulatory non-compliance. In embedded edge systems (e.g. semiconductor lithography machines, factory automation, medical instruments and logistics automation), algorithms interact directly with physical processes under strict constraints such as bounded latency, limited computational resources, low power consumption and long-term reliability, while often needing to comply with IEC/ISO standards.

At the heart of these systems are physical measurements: sensors produce signals that reflect real-world processes but are often corrupted by noise, interference and environmental disturbances. This broader challenge is increasingly discussed under the banner of Physical AI: intelligent systems that perceive, reason and act within the physical world through sensors, signals and real-time control. Achieving this in practice requires the integration of deterministic digital signal processing (DSP) algorithms, machine learning (ML) and embedded systems design, forming the basis of Real-Time Edge Intelligence (RTEI).

Intelligent machines existed long before AI

The idea of embedding intelligence into machines predates modern AI by decades. Early automation systems implemented rules‑based behaviour using mechanical logic and later electromechanical control systems. However, with the development of electronics, these concepts evolved into embedded systems built on PLCs, microcontrollers (MCUs) and later digital signal processors (DSPs), allowing complex signal analysis and control algorithms to be implemented in software while maintaining robust deterministic behaviour.

Figure 1: Evolution of intelligent systems. From embedded systems to connected IoT platforms and modern edge devices combining DSP and ML techniques.

The rise of IoT

As sensing technologies became cheaper and connectivity expanded, embedded devices increasingly became networked, leading to the Internet of Things (IoT). In many architectures, devices primarily act as data forwarders, streaming sensor measurements to cloud infrastructure where ML models perform analysis.

While this approach enables powerful functionality, it also introduces limitations such as network latency, bandwidth consumption and dependence on continuous connectivity. For many real‑world systems, intelligence must therefore move closer to the data source.

The challenge of real‑world sensor data

Real-world sensor data are rarely clean, as the signals are affected by measurement noise, powerline interference and environmental disturbances. As such, extracting meaningful information from these measurements has traditionally relied on signal processing techniques, implemented either using analog electronics or digitally using a DSP or microcontroller. These techniques perform tasks such as filtering, spectral analysis and feature extraction. Filtering removes noise, spectral analysis reveals hidden patterns and feature extraction transforms raw measurements into structured information describing the underlying system behaviour. These deterministic methods make signal interpretation predictable and verifiable.
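To make these three stages concrete, here is a minimal, self-contained C sketch (illustrative only: the 1 kHz sampling rate, 8-tap filter and 50 Hz powerline frequency are assumptions, not values from the text). It low-pass filters a noisy block with a moving-average FIR, uses a Goertzel single-bin estimate to gauge residual powerline interference, and extracts an RMS feature, each step fully deterministic.

```c
#include <math.h>
#include <stddef.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define FS       1000.0  /* assumed sampling rate (Hz)       */
#define BLOCK    256     /* samples per processing block     */
#define TAPS     8       /* moving-average filter length     */
#define MAINS_HZ 50.0    /* assumed powerline frequency (Hz) */

/* Filtering: moving-average FIR, a simple deterministic low-pass. */
static void moving_average(const double *x, double *y, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        double acc = 0.0;
        for (size_t k = 0; k < TAPS; k++)
            acc += (i >= k) ? x[i - k] : x[0]; /* hold first sample at start-up */
        y[i] = acc / TAPS;
    }
}

/* Spectral analysis: Goertzel estimate of signal power at one frequency. */
static double goertzel_power(const double *x, size_t n, double f_hz)
{
    double coeff = 2.0 * cos(2.0 * M_PI * f_hz / FS);
    double s1 = 0.0, s2 = 0.0;
    for (size_t i = 0; i < n; i++) {
        double s = x[i] + coeff * s1 - s2;
        s2 = s1;
        s1 = s;
    }
    return s1 * s1 + s2 * s2 - coeff * s1 * s2;
}

/* Feature extraction: RMS summarises the energy of the block. */
static double rms(const double *x, size_t n)
{
    double acc = 0.0;
    for (size_t i = 0; i < n; i++)
        acc += x[i] * x[i];
    return sqrt(acc / (double)n);
}

int main(void)
{
    double raw[BLOCK], clean[BLOCK];

    /* Synthetic input: a 5 Hz process signal plus 50 Hz interference. */
    for (size_t i = 0; i < BLOCK; i++)
        raw[i] = sin(2.0 * M_PI * 5.0 * i / FS)
               + 0.3 * sin(2.0 * M_PI * MAINS_HZ * i / FS);

    moving_average(raw, clean, BLOCK);
    printf("residual mains power: %.4f\n", goertzel_power(clean, BLOCK, MAINS_HZ));
    printf("rms feature         : %.4f\n", rms(clean, BLOCK));
    return 0;
}
```

Because every step is a fixed sequence of arithmetic operations, the worst-case execution time can be bounded exactly, which is what makes such a front end verifiable.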

The Evolution from Edge AI to RTEI

ML models operate most effectively on a few well-designed features based on the physics of the sensor data. To facilitate this, DSP algorithms (designed using human intelligence) are a fundamental preprocessing step for signal enhancement and feature extraction, so that the ML model can perform its classification task on high-quality feature data. It is this combination that delivers high classification accuracy, even when conditions deviate slightly from the original training datasets.
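As a sketch of how the pieces combine, the fragment below feeds two hypothetical DSP features into a tiny logistic-regression classifier, the kind of lightweight ML stage that runs comfortably on an edge device. The feature names, weights and threshold are invented for illustration; real values would come from training and from the system requirements.

```c
#include <math.h>
#include <stdio.h>

/* Feature vector produced by the deterministic DSP front end.
 * Both features are hypothetical, for illustration only. */
typedef struct {
    double rms;        /* signal energy feature        */
    double mains_pow;  /* powerline interference level */
} features_t;

/* Tiny logistic-regression classifier: p = sigmoid(w . x + b). */
static double classify(const features_t *f)
{
    const double w_rms = 4.2, w_mains = -1.7, bias = -2.0; /* hypothetical weights */
    double z = w_rms * f->rms + w_mains * f->mains_pow + bias;
    return 1.0 / (1.0 + exp(-z));   /* probability of the "fault" class */
}

int main(void)
{
    features_t f = { .rms = 0.71, .mains_pow = 0.05 }; /* from the DSP stage */
    double p = classify(&f);
    printf("fault probability: %.2f -> %s\n", p, p > 0.5 ? "FAULT" : "OK");
    return 0;
}
```

A nearest-centroid model or a small decision tree would serve equally well at this stage; the point is that the classifier sees a handful of physically meaningful features rather than raw samples.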

Figure 2: RTEI architecture. Sensor measurements are processed through deterministic DSP algorithms to extract meaningful features for ML inference on the edge device. An optional cloud connection provides secondary services such as access to a data lake and device management.

As such, engineers are increasingly adopting architectures in which DSP, ML and the embedded computing platform (e.g., Arm Cortex processors with hardware support for DSP and ML workloads) are developed together, while remaining aligned with stakeholder requirements and relevant IEC/ISO compliance standards. This augmented approach improves reliability while ensuring long-term robustness across a wide range of edge applications.
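As an example of such co-design, the fragment below maps the filter-then-feature stage onto Arm's CMSIS-DSP kernels, which are optimised for Cortex processors. This is a minimal sketch assuming the standard CMSIS-DSP API (arm_fir_init_f32, arm_fir_f32, arm_rms_f32) is available on the target; the block size, tap count and coefficients are illustrative, not a production filter design.

```c
/* Sketch of the DSP front end mapped onto Arm's CMSIS-DSP library
 * (assumed available on the target Cortex part). */
#include "arm_math.h"

#define BLOCK 256
#define TAPS  8

static const float32_t fir_coeffs[TAPS] = {
    /* 8-tap moving average; a real design would come from a filter tool */
    0.125f, 0.125f, 0.125f, 0.125f, 0.125f, 0.125f, 0.125f, 0.125f
};

static float32_t fir_state[TAPS + BLOCK - 1]; /* required FIR state size */
static arm_fir_instance_f32 fir;

void dsp_front_end_init(void)
{
    arm_fir_init_f32(&fir, TAPS, fir_coeffs, fir_state, BLOCK);
}

/* One deterministic processing step per acquired block. */
float32_t dsp_front_end_step(const float32_t *raw, float32_t *clean)
{
    float32_t rms_feature;
    arm_fir_f32(&fir, raw, clean, BLOCK);    /* filtering          */
    arm_rms_f32(clean, BLOCK, &rms_feature); /* feature extraction */
    return rms_feature;                      /* feed ML classifier */
}
```

Because the kernels are deterministic and the buffers statically allocated, worst-case timing and memory use can still be bounded, which is what the compliance standards mentioned above require.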

RTEI Solutions Handbook

The architectural principles behind Real-Time Edge Intelligence (RTEI), including practical DSP techniques, embedded implementation strategies and real-world case studies, are explored in detail in the RTEI Solutions Handbook, which may be purchased directly from ASN.

Author

  • Sanjeev is an RTEI (Real-Time Edge Intelligence) visionary and expert in signals and systems with a track record of successfully developing over 26 commercial products. He is a Distinguished Arm Ambassador and advises top international blue-chip companies on their AIoT/RTEI solutions and strategies for I5.0, telemedicine, smart healthcare, smart grids and smart buildings.


In our age of AI, IoT, and real-time signal processing, it’s tempting to believe that algorithmic thinking is a modern invention. But over 1,100 years ago, in the heart of the Islamic Golden Age, two extraordinary thinkers were already shaping what we now call systems engineering: Muḥammad ibn Mūsā al-Khwārizmī and Abū Yūsuf Yaʻqūb ibn Isḥāq al-Kindī.

Their brilliance wasn’t just in what they knew—it was in how they thought.

The Power of Critical Thinking

Al-Khwārizmī and Al-Kindī lived in a time when knowledge was unified. Mathematics, philosophy, astronomy, and medicine were seen as part of a single, coherent worldview. This intellectual breadth allowed them to see connections others missed.

In some ways, generative AI attempts to mimic this cross-disciplinary reach, drawing associations across vast domains with billions of model parameters. But unlike Al-Kindī and Al-Khwārizmī, it operates without common sense, without understanding, and without true context. This alone makes AI fundamentally inferior to the critical thinking of these scholars. Their minds weren’t just expansive; they were disciplined, purposeful, and deeply structured.

They also had something rare today: time. Free from the pressure to publish and the noise of constant alerts, they dedicated themselves to real understanding. Without emails, 24-hour news feeds, or KPIs pulling at their attention, they achieved the kind of clarity that produced lasting contributions to mathematics, science, and cryptography, many of which still shape our world today.

Just as the Latinized name of Al-Khwārizmī gave rise to the word algorithm, the idea behind algorithms—structured procedures for solving problems—is far older and more global. Across different civilizations, these ideas took root in unique yet strikingly similar ways.

In classical Tamil, the word சூத்திரம் (sūthiram) has long referred to concise formulas or rules. Found in ancient grammar, mathematics, and philosophy, a sūthiram was a compact, logical step: an algorithm in spirit. Tamil arithmetic manuscripts used them to express procedures for multiplication and permutations, echoing the same structured problem-solving seen in other cultures.

Even today, Tamil retains this legacy. In schools and textbooks, சூத்திரம் appears alongside the transliterated அல்காரிதம்—a reminder that algorithmic thinking has deep, indigenous roots. The algorithm is not a modern invention. It’s a universal pattern—and every culture has its own word for the thread.

This global legacy reminds us that algorithms were once tools of understanding, not just instruments of automation. If we are to use algorithms wisely today, we must restore that spirit: clear purpose, cultural context, and conscious design.

And Yet, Here We Are

Today, we automate decisions and design systems that affect real lives. The legacy of Al-Khwārizmī and Al-Kindī challenges us to ask:

Are our algorithms truly serving people—or just chasing efficiency?

Nowhere is this more urgent than in AI. As we build increasingly powerful models, the temptation is to optimize for performance at all costs. But Al-Khwārizmī reminds us that an algorithm must be understandable. And Al-Kindī reminds us that it must serve human reason and values.

This is precisely why Real-Time Edge Intelligence (RTEI) is so important. RTEI combines human reasoning, grounded in deterministic DSP algorithms, with digital reasoning, driven by machine learning.

But there is a crucial difference: digital reasoning can hallucinate. It can invent patterns and correlations—what we sometimes call “creativity”—but it has no ethics, no intent, and no understanding. Human reasoning, on the other hand, brings structure, purpose, and responsibility.

RTEI is not just a technical fusion. It is a moral architecture: one that tempers the unpredictability of AI with the reliability and interpretability of classical DSP. In a way, it echoes the very legacy of these scholars: logic and structure, guided by ethics and accountability.

A Principle That Still Applies in 2025

Al-Khwārizmī and Al-Kindī remind us that clear, ethical, and deeply structured thinking transcends time.

This is where the classical principle of رفع الحرج (removal of hardship) becomes strikingly relevant. It wasn’t just about compassion—it was about reducing confusion and unnecessary complexity in systems.

RTEI carries this legacy forward: combining the reliability of DSP with the adaptability of ML to build systems that are powerful, yet clear and responsible. In a world where generative AI can hallucinate, we need that balance more than ever.

Sometimes, the most radical idea is to slow down and think like it’s the 9th century!

Author

  • Sanjeev is an RTEI (Real-Time Edge Intelligence) visionary and expert in signals and systems with a track record of successfully developing over 26 commercial products. He is a Distinguished Arm Ambassador and advises top international blue-chip companies on their AIoT/RTEI solutions and strategies for I5.0, telemedicine, smart healthcare, smart grids and smart buildings.
