Dallas-based Texas Instruments (Nasdaq: TXN) has announced the launch of two new microcontroller (MCU) product lines with edge AI capabilities. Both feature TI’s TinyEngine neural processing unit—a dedicated hardware accelerator that optimizes “deep learning inference operations to reduce latency and improve energy efficiency when processing at the edge.”
The new chip families—marketed with TI’s classic engineering-speak as MSPM0G5187 and AM13Ex—support TI’s “commitment to enabling edge AI” across its entire embedded processing portfolio, the company said.
“TI invented the digital signal processor almost 50 years ago, laying the groundwork for today’s edge AI processing,” said Amichai Ron, SVP of embedded processing and DLP products at TI.
“Now TI is leading the next phase of innovation by integrating the TinyEngine NPU across our entire microcontroller portfolio,” he added in a statement, “including general-purpose and high-performance, real-time MCUs. By enabling AI across our software, tools, devices and ecosystem, we’re making edge AI accessible and easy to use for every customer and every application.”
‘Far-reaching applications’ enabled by smaller chips
Bob O’Donnell, president and chief analyst at TECHnalysis Research, noted that while much of the world has been focused on AI acceleration and NPUs in bigger SoCs (systems-on-chips), “it turns out some of the more interesting and far-reaching applications of AI can be enabled inside smaller chips like microcontrollers.”
“Edge-based applications of AI acceleration can make consumer devices more intelligent and industrial devices more efficient,” O’Donnell added in a statement. “Plus, if you can combine these chips with software development tools that themselves leverage AI to help build AI features, you bring the power of AI acceleration to a significantly wider audience of engineers and device designers.”
What the TinyEngine NPU can do
Able to compute locally, TI’s TinyEngine NPU executes the computations required by neural networks in parallel with the primary CPU running application code. Compared to similar MCUs without an accelerator, this hardware acceleration delivers several key benefits.
First, it minimizes the flash memory footprint. It also reduces latency by up to 90 times per AI inference. And in a world where AI’s energy-intensive demands have alarmed governments and consumers alike, the TinyEngine NPU cuts energy consumption “by more than 120 times per AI inference,” TI said.
Efficiency like that allows resource-constrained devices—including portable, battery-powered products—to process AI workloads, TI said.
At under $1 in 1,000-unit quantities, the MSPM0G5187 MCU reduces system and operating costs by offering an affordable alternative to other MCU or processor architectures, TI added.
Helping developers deploy edge AI ‘in any device’
Both of these new MCU families are supported by TI’s Edge AI Studio, a free development environment that simplifies model selection, training, and deployment across TI’s embedded processing portfolio.
This “edge AI toolchain” gives engineers full flexibility to run AI models on TI MCUs via either hardware or software, TI said.
The company said it offers more than 60 models and application examples available in the tool to help developers start deploying edge AI “in any device,” with additional tasks and models planned in the future.