Integrating AI into Bionic Hands

04 Jan 2024 Dmitriy Liukevych, Oleksii Svitlychnyi

In previous blog posts, we provided a functional overview and delved into the technical details of Alt-Bionics’ affordable prosthetic hand, outlining the journey from concept to a product ready for commercial release.
What does it take to train an AI to interpret the unique language of muscle signals? This chapter peels back the curtain on the AI training process, revealing how each prosthetic hand is fine-tuned to its wearer’s unique muscular patterns. Unlike the traditional approach, our AI doesn’t generalize; it personalizes. Stay with us for an insightful journey into the core of AI-powered prosthetics.

Challenges

In this project, we confront a pivotal challenge: how can we develop a system that not only responds to generic commands but also adapts to the unique muscle signals of the user? The answer begins with the EMG sensors, a crucial component we discussed in our previous posts. These sensors are the touchpoints where the bionic hand registers muscle contractions in the residual forearm, the portion that remains below the elbow after amputation. Capturing the electrical activity that muscles generate, EMG sensors are akin to biological microphones that pick up the whispers of nerve impulses.

The prosthetic hand we are crafting aims to adeptly interpret signals from muscle activity, facilitating an interaction between user intent and prosthetic response. But unlike common AI tasks such as speech recognition, where a model can be trained on a diverse dataset to understand many different voices, our AI model must specialize, because no two amputees are alike. The patterns of EMG signals, which are essentially the voice of the muscles, vary not only from person to person but also according to the specifics of an individual’s amputation. Factors such as the level of the forearm amputation and the positioning of the sensors add layers of complexity to the AI model’s training. It must become an expert in the unique language of the wearer’s EMG signals, a language shaped by their anatomy and the nuances of their residual limb.

Exploring the depths of AI training, we recognize that the microcontroller within the prosthetic limb, while sophisticated, is not designed for the intensive demands of AI model training. The complexity and computational heft needed are far beyond its scope. Initially, we considered leveraging the power of smartphones for local training, seeking a more direct and self-sufficient approach.

Yet, as we navigated through the practicalities, it became apparent that, despite their advanced capabilities, even smartphones fall short for the heavy lifting of AI training. The allure of an entirely mobile-based solution was strong, offering independence from external servers and an all-in-one user experience. However, the intensive processing requirements and significant energy consumption presented clear challenges. While TensorFlow can operate within mobile platforms to some extent, the efficiency and effectiveness of on-device training could not meet our standards for speed and user convenience.

Confronted with these insights, we turned our focus to a backend processing strategy. By harnessing the power of dedicated servers, we ensure that the computational muscle and memory needed for AI training are fully available. This shift guarantees that we can process complex data streams effectively, constructing an AI model that is not only powerful but also finely tuned to the individual patterns of each user.

Training the AI Model

Our AI training process is aimed at understanding and translating the language of an individual’s muscle movements into the precise selection of a grip, each grip having been associated by the user with a specific movement. Let’s walk through this approach step by step.
Engaging with the AI Training Module: When the user accesses the AI training module in the app, they encounter a system designed to map specific muscle movements to the prosthetic’s memory slots. There are six such slots available, each capable of holding a distinct grip configuration that the user can set and customize through the app.
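
To make the slot concept concrete, here is a minimal Python sketch of how such a mapping might be represented. The GripSlot class, its field names, and the default grip labels are illustrative assumptions on our part, not Alt-Bionics’ actual data model.

```python
from dataclasses import dataclass

@dataclass
class GripSlot:
    """One of the six memory slots the user can customize in the app."""
    index: int             # 0..5, position in the prosthetic's memory
    grip_name: str         # human-readable label shown in the app
    trained: bool = False  # True once a movement has been mapped to it

# Hypothetical defaults; in practice the user picks and customizes the grips.
DEFAULT_SLOTS = [
    GripSlot(0, "power grip"),
    GripSlot(1, "pinch"),
    GripSlot(2, "tripod"),
    GripSlot(3, "point"),
    GripSlot(4, "open hand"),
    GripSlot(5, "key grip"),
]
```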

Memory Slots and Muscle Movements: The customization process involves assigning unique forearm movements to each memory slot. The prosthetic’s microcontroller, which houses these slots, awaits input from the user. As a user performs a movement, the corresponding muscle activity pattern is detected by the EMG sensors, and this pattern is what the AI will learn to recognize.
Data Transmission and Collection: Through a Bluetooth connection, the prosthetic hand begins to stream EMG sensor data to the mobile app. Bluetooth bandwidth can comfortably handle this stream in real time, ensuring no detail is lost. Once received, the app labels and buffers this data, marking it for the specific movement it corresponds to, like tagging a file for easy retrieval.
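
As an illustration of that labeling step, here is a minimal Python sketch of an app-side buffer that tags incoming readings with the movement being trained. The class and method names are our own assumptions; a production app would implement this in its native mobile language.

```python
import time
from collections import defaultdict

class EmgRecorder:
    """Buffers streamed EMG samples, tagged with the movement
    (memory-slot index) currently being trained."""

    def __init__(self):
        # slot index -> list of (timestamp, ch0, ch1) readings
        self.buffers = defaultdict(list)
        self.current_label = None

    def start_movement(self, slot_index: int):
        """Called when the user begins repetitions for one slot."""
        self.current_label = slot_index

    def on_samples(self, ch0: float, ch1: float):
        """Called for each pair of sensor readings arriving over Bluetooth."""
        if self.current_label is not None:
            self.buffers[self.current_label].append((time.time(), ch0, ch1))
```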

Incremental Learning and Flexibility: The AI training does not demand continuous internet access, allowing the user the flexibility to train the system at their convenience. The data collected during these sessions is labeled accordingly and can be sent to the server for processing later on. This incremental learning approach means that training can be paused and resumed without losing progress, accommodating the user’s schedule and pace.
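
A minimal sketch of what “train later” could look like in Python: each labeled session is written to local storage as it is recorded and uploaded whenever connectivity allows. The file layout and the endpoint URL are placeholders, not the real API.

```python
import json
from pathlib import Path

import requests  # any HTTP client would do

SESSION_DIR = Path("emg_sessions")        # local store for labeled recordings
UPLOAD_URL = "https://example.com/train"  # placeholder backend endpoint

def save_session(label: int, samples: list) -> Path:
    """Persist one labeled recording session to disk."""
    SESSION_DIR.mkdir(exist_ok=True)
    count = len(list(SESSION_DIR.glob(f"slot{label}_*.json")))
    path = SESSION_DIR / f"slot{label}_{count}.json"
    path.write_text(json.dumps({"label": label, "samples": samples}))
    return path

def upload_pending():
    """Send every stored session to the backend once online."""
    for path in sorted(SESSION_DIR.glob("*.json")):
        resp = requests.post(UPLOAD_URL, data=path.read_text(),
                             headers={"Content-Type": "application/json"})
        if resp.ok:
            path.unlink()  # uploaded; remove the local copy
```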
Training on the Backend: After the user has completed enough repetitions of a movement, the collected data for each movement is ready to be processed. This data is sent to a server where our infrastructure, powered by Python and TensorFlow, runs the model training. The model is trained specifically to recognize the patterns in the EMG sensor signals that correspond to the user’s unique muscle movements.
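
We don’t spell out the network itself here, so treat the following as a plausible shape rather than the production architecture: a small 1-D convolutional classifier over fixed-length windows of the two EMG channels, with one output class per memory slot. The window length and layer sizes below are made-up values.

```python
import numpy as np
import tensorflow as tf

WINDOW = 200   # assumed samples per movement window
CHANNELS = 2   # two EMG sensors
NUM_SLOTS = 6  # one class per memory slot

def build_model() -> tf.keras.Model:
    """Small 1-D CNN mapping an EMG window to one of six grips."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(WINDOW, CHANNELS)),
        tf.keras.layers.Conv1D(16, 5, activation="relu"),
        tf.keras.layers.MaxPooling1D(4),
        tf.keras.layers.Conv1D(32, 5, activation="relu"),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(NUM_SLOTS, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def train(x: np.ndarray, y: np.ndarray) -> tf.keras.Model:
    """x: (examples, WINDOW, CHANNELS) EMG windows; y: slot indices 0..5."""
    model = build_model()
    model.fit(x, y, epochs=30, batch_size=16, validation_split=0.2)
    return model
```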
Model Optimization and Deployment: Once the model has been trained, it is converted from TensorFlow format to TensorFlow Lite (TFLite) format, which is optimized for mobile and embedded platforms like our prosthetic hand’s microcontroller. The optimized model is then transferred back to the mobile application and, subsequently, to the prosthetic hand itself.
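
The conversion step itself is standard TensorFlow. A minimal version looks like this; quantization choices depend on the target microcontroller, so the single optimization flag below is just one reasonable default.

```python
import tensorflow as tf

def to_tflite(model: tf.keras.Model, out_path: str = "grips.tflite") -> bytes:
    """Convert the trained Keras model into a compact TFLite flatbuffer."""
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]  # shrink for embedded use
    tflite_model = converter.convert()
    with open(out_path, "wb") as f:
        f.write(tflite_model)
    return tflite_model
```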
Seamless Integration and Operation: Upon receiving the model, the prosthetic hand’s microcontroller, equipped with the TensorFlow Lite runtime, integrates and stores the model. It is now ready to recognize and respond to the user’s specific muscle movements, activating the corresponding grip as programmed into one of the six memory slots.
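
On the hand itself inference runs through TensorFlow Lite’s embedded runtime in C/C++, but the same flatbuffer can be exercised from Python with tf.lite.Interpreter, which is a convenient way to sanity-check a model before it is pushed to the device. A sketch, under the same assumed window shape as above:

```python
import numpy as np
import tensorflow as tf

def predict_grip(tflite_model: bytes, window: np.ndarray) -> int:
    """Run one EMG window (shape (200, 2), float32) through the
    converted model and return the winning memory-slot index."""
    interpreter = tf.lite.Interpreter(model_content=tflite_model)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    interpreter.set_tensor(inp["index"], window[np.newaxis].astype(np.float32))
    interpreter.invoke()
    scores = interpreter.get_tensor(out["index"])[0]
    return int(np.argmax(scores))  # slot 0..5 to activate
```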

Deciphering Muscle Signals

The journey of training our AI begins with the continuous stream of data from two electromyography (EMG) sensors. These sensors act as the gateway to understanding the subtle electrical activity generated by muscle movements. As one might observe on a graph, the signals we analyze are essentially voltage readings that reflect the user’s muscular activity.

In the realm of advanced bionic technology, distinguishing meaningful muscle signals from mere background noise is a fascinating yet complex challenge. It’s akin to trying to hear a whisper in the midst of a bustling crowd. This section delves into the ingenious methods used to ensure precision in signal detection, a critical component in the development of responsive and intuitive bionic limbs.

  • Establishing a Baseline: The journey begins with setting a Calibrated Average Noise Level. This baseline helps identify what constitutes background noise. Anything falling below this level is not considered part of the muscle movement signal.
  • The Activation Threshold: This threshold is the pivotal point where our system distinguishes real muscle activity from mere noise. It’s only when a signal’s strength surpasses this threshold that the system recognizes a deliberate muscle movement, suitable for capturing and using in AI training.
  • Role of the Microcontroller: In this intricate dance of signals, the microcontroller plays a vigilant role. It continuously monitors incoming signals and springs into action once the signal exceeds the Activation Threshold. This prompt response ensures that every significant nuance of the muscle signal is captured without loss.
  • Logging and Buffering Process: Crossing the Activation Threshold triggers an efficient and precise data logging process. The system begins to buffer data from the EMG sensors, capturing a complete and accurate picture of the signal’s characteristics.
  • Understanding Signal Endpoints: Identifying the end of a signal is just as crucial as recognizing its start. The conclusion of a muscle signal is marked when readings drop below the Activation Threshold. This marks the completion of one full cycle of signal activity, which is then graphically represented. These graphs are more than just voltage timelines; they are unique blueprints of individual muscle movement patterns. A sketch of this segmentation logic follows the list.
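
To tie the steps above together, here is a minimal Python sketch of the thresholding logic. On the real device this runs on the microcontroller; the baseline value and the margin multiplier below are placeholders, not calibrated figures.

```python
def segment_movements(samples, noise_level=0.05, margin=3.0):
    """Split a stream of EMG voltage readings into movement segments.

    noise_level is the calibrated average background level; a reading
    counts as deliberate activity only once it exceeds the activation
    threshold, set here (as an assumption) to a multiple of that baseline.
    """
    activation_threshold = noise_level * margin
    segments, current = [], None
    for value in samples:
        if value >= activation_threshold:
            if current is None:      # threshold crossed: start buffering
                current = []
            current.append(value)
        elif current is not None:    # dropped back below: one full cycle done
            segments.append(current)
            current = None
    if current is not None:          # stream ended mid-movement
        segments.append(current)
    return segments

# Example: one movement embedded in background noise yields one segment.
print(segment_movements([0.01, 0.02, 0.30, 0.42, 0.25, 0.02]))
# -> [[0.3, 0.42, 0.25]]
```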

Through these advanced techniques, bionic technology is not just mimicking human movements; it’s learning and adapting to each user’s unique physical language.
