The company’s event-based digital neuromorphic IP can add effective AI processing to SoCs.
Edge AI is becoming a thing. Instead of relying solely on an embedded microprocessor in edge applications and sending the data to the cloud for AI processing, many edge companies are considering adding AI to the edge itself, then communicating conclusions about what the edge processor “sees” instead of sending raw sensory data such as an image. To date, this momentum has been held back by the costs and power requirements of early implementations. What customers are looking for is proven AI technology that can run under one watt and that they can add to a microcontroller for on-board processing.
Many startups have entered the field, looking to compete in an area of AI that doesn’t have an 800-pound gorilla to dislodge (aka NVIDIA). Most of these startups use some sort of in-memory or near-memory architecture to reduce data movement, coupled with digital multiply-accumulate (MAC) logic. BrainChip takes a different approach, applying event-based digital neuromorphic logic with SRAM that the company says is power-efficient, flexible, and scalable, and enables on-chip learning. Let’s take a closer look.
The Akida platform
BrainChip has a long list of improvements it has made in the second-generation Akida platform. The motivation for these additions was to enable processing of the modalities customers are increasingly demanding: real-time video, audio, and time-series data for applications such as voice recognition, human action recognition, text translation, and video object detection. For some of these applications, the addition of an optional Vision Transformer (ViT) engine, along with an enhanced neural mesh, can deliver up to 50 TOPS (trillion operations per second), according to the company.
The company sells its intellectual property for sensor and SoC designs that seek to add AI at the edge. While adoption of BrainChip’s first product, the AKD1000, has been slow, there have been some high-profile demonstrations of its use: by Mercedes in the EQXX concept vehicle, by NASA on a project for autonomy and cognition in small satellites, and by a number of companies that have adopted its development kits and boards for prototyping purposes.
Now, with the second generation, BrainChip has added support for 8-bit weights and activations, the aforementioned ViT, and hardware support for its Temporal Event-Based Neural Networks (TENNs). Akida retains its ability to process multiple layers at once, managed by an intelligent DMA that handles the loading and storing of models and data autonomously. This can activate low-power sensors attached to an Akida node without the need for a CPU. The diagram below shows how Akida sensors can be paired with an SoC for multi-sensor inference processing.
Akida IP can be used to create sensors and be deployed in more advanced SoCs.
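To make the 8-bit support concrete, here is a minimal sketch of post-training 8-bit quantization of weights and activations using standard TensorFlow Lite tooling. This is generic TensorFlow, not BrainChip’s MetaTF flow, and the model, shapes, and calibration data are illustrative placeholders.

```python
import tensorflow as tf

# A small Keras model as an illustrative stand-in for an edge vision network.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(96, 96, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Representative data lets the converter calibrate 8-bit activation ranges.
def representative_data():
    for _ in range(100):
        yield [tf.random.uniform((1, 96, 96, 3))]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
# Force full-integer quantization: 8-bit weights and 8-bit activations.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
tflite_model = converter.convert()
open("model_int8.tflite", "wb").write(tflite_model)
```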
The new Akida platform, which should be available later this year, is designed to handle a variety of popular networks, including CNNs, DNNs, Vision Transformers, and SNNs. The event-based design is particularly well suited to time-series data for tasks such as audio processing, video object detection, and vital-signs monitoring and prediction.
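BrainChip has not published the internals of its TENN models, but for intuition about temporal modeling of streaming data, here is a minimal causal, dilated temporal-convolution block in Keras. This is a standard TCN-style construction, not BrainChip’s TENN; all shapes and names are illustrative.

```python
import tensorflow as tf

def tcn_block(x, filters, kernel_size=3, dilation=1):
    """A causal, dilated 1D convolution: each output step sees only past samples."""
    return tf.keras.layers.Conv1D(
        filters, kernel_size,
        padding="causal",
        dilation_rate=dilation,
        activation="relu",
    )(x)

# Toy keyword-spotting-style model over one second of 16 kHz mono audio.
inputs = tf.keras.layers.Input(shape=(16000, 1))
x = tcn_block(inputs, 16, dilation=1)
x = tcn_block(x, 32, dilation=2)   # dilation widens the temporal receptive field
x = tf.keras.layers.GlobalAveragePooling1D()(x)
outputs = tf.keras.layers.Dense(12, activation="softmax")(x)  # 12 keywords, illustrative
model = tf.keras.Model(inputs, outputs)
```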
BrainChip showed some initial benchmarks demonstrating orders of magnitude fewer operations and smaller model sizes than state-of-the-art AI implementations. In video object detection, a 16 nm implementation can handle 30 frames per second at 1382×512 resolution in less than 75 mW. In keyword detection, a 28 nm implementation can support over 125 inferences per second at less than 2 microjoules per inference. BrainChip has filed patent applications covering the acceleration of TENN models.
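As a quick sanity check, energy per inference multiplied by inference rate gives average compute power; the keyword-detection figures above work out to roughly 250 µW:

```python
# Back-of-the-envelope check on the keyword-detection benchmark above.
energy_per_inference_j = 2e-6   # claimed: < 2 microjoules per inference
inferences_per_second = 125     # claimed: > 125 inferences per second

avg_power_w = energy_per_inference_j * inferences_per_second
print(f"Average compute power: {avg_power_w * 1e6:.0f} µW")  # prints 250 µW
```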
The Akida processor and software can implement multi-pass processing as well as on-chip learning.
Akida’s runtime software manages operation efficiently, with key features such as multi-pass processing handled transparently to the user. Model development and tuning are supported on the TensorFlow framework with MetaTF.
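For a sense of that development flow, below is a minimal sketch of converting a Keras model to an Akida model with MetaTF’s cnn2snn converter. This is a sketch based on MetaTF’s documented Keras-to-Akida flow; exact function signatures and the required quantization steps vary by MetaTF version, so treat the details as assumptions to verify against BrainChip’s documentation.

```python
import tensorflow as tf
from cnn2snn import convert  # MetaTF's Keras-to-Akida model converter

# Placeholder Keras model. In practice, MetaTF requires the model to be
# quantized (e.g., to 8-bit weights/activations) with MetaTF's own
# quantization tooling before conversion.
keras_model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(4, activation="softmax"),
])

# Convert the (quantized) Keras model into an Akida-executable model.
akida_model = convert(keras_model)
akida_model.summary()  # inspect the mapped Akida layers
```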
The Akida architecture and the associated advantages.
BrainChip envisions three tiers of adoption: a basic MCU with one to four nodes for always-on, CPU-free operation; a mid-range class pairing an MCU with deep learning accelerators; and a high-end MCU with up to 64 nodes and an optional ViT processor. In every case, on-chip learning is possible by leveraging the trained model as a feature extractor and adding new classes in the final layer, without being tied to cloud training resources.
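The "feature extractor plus new final layer" pattern is essentially transfer learning. Below is a minimal Keras sketch of that idea running off-device; Akida performs this kind of class addition with its own on-chip learning mechanism, so this is an analogy for the workflow, not the device’s implementation, and the backbone and class count are illustrative.

```python
import tensorflow as tf

# A pretrained backbone acts as a frozen feature extractor.
base = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, pooling="avg", weights="imagenet")
base.trainable = False

# Only the final classification layer learns the new classes.
num_new_classes = 3  # illustrative
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(num_new_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(new_class_images, new_class_labels, epochs=5)  # few-shot-style update
```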
Akida can provide energy-efficient AI alternatives across a wide range of solutions currently …
Conclusion
While BrainChip has spoken in the past about the partnerships it has forged, such as with MegaChips and Renesas, business growth has been slower, possibly because the IP licensing model takes longer to develop. With the inclusion of 8-bit operations, the Vision Transformer, temporal convolutions (TENN models), and the ability to run models like ResNet-50 entirely on the Akida neural processor with minimal CPU dependencies, we believe the company is laying the groundwork to turn the corner and land bigger design wins. A key factor may be the software, which is currently based on TensorFlow but will soon support PyTorch as well, an essential addition given the current development landscape.
Disclosures: This article expresses the opinions of the authors and should not be taken as advice on buying or investing in the companies mentioned. Cambrian AI Research is fortunate to have many, if not most, semiconductor companies as customers, including Blaize, Cerebras, D-Matrix, Esperanto, FuriosaAI, Graphcore, GML, IBM, Intel, Mythic, NVIDIA, Qualcomm Technologies, SiFive, SiMa.ai, Synopsys and Tenstorrent. We have no investment position in any of the companies mentioned in this article and do not plan to initiate any in the near future. For more information, please visit our website at https://cambrian-AI.com.