How AI Compilers Are Revolutionizing Autonomous Driving and Smart Cities

Vishakha Agrawal, an expert in AI compiler optimization, explores the vital role of AI compilers in enhancing performance, power efficiency, and scalability across CPUs, GPUs, FPGAs, and AI accelerators. She highlights their impact on real-time sensor data processing for smart cities and autonomous vehicles, emphasizing edge computing, energy-efficient AI models, and workload distribution in heterogeneous systems.
In the age of digital transformation, autonomous vehicles and smart cities are at the forefront of technological innovation, relying on real-time data processing for efficiency, scalability, and performance. While much attention is given to AI models and hardware, Vishakha Agrawal, a distinguished expert in AI compiler optimization, highlights the crucial role of AI compilers in enabling these advancements.
"AI compilers bridge the gap between complex algorithms and diverse hardware platforms, ensuring seamless execution across CPUs, GPUs, FPGAs, and AI accelerators," Vishakha explains. "They are vital for optimizing performance, power efficiency, and scalability, particularly for resource-constrained environments like edge computing."
Currently at AMD, Vishakha is pioneering AI engine compilers designed for heterogeneous systems, integral to processing real-time sensor data for smart city infrastructures. "In smart cities, distributed systems manage everything from traffic control to energy efficiency, requiring robust AI models that can handle vast data streams with minimal latency," she says.
Her journey in AI compiler optimization has spanned industry giants, including Intel and SiFive. At Intel, she played a key role in deep learning optimization through the nGraph-bridge library, streamlining AI inference on edge devices. "Autonomous vehicles depend on real-time sensor fusion, and optimizing AI inference at the edge is crucial for their safety and navigation," Vishakha notes.
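The nGraph bridge followed a deliberately low-friction design: per the project's documentation, importing the Python package registered nGraph as a TensorFlow graph optimizer, so existing models picked up compiled execution without code changes. The sketch below reflects that documented TF 1.x-era usage; the project has since been archived, so exact behavior varies by version:

```python
# Sketch of nGraph-accelerated TensorFlow inference (TF 1.x era).
import tensorflow as tf
import ngraph_bridge  # noqa: F401 -- the import side effect enables the bridge

# An ordinary TF graph; no nGraph-specific code is needed.
x = tf.placeholder(tf.float32, shape=(1, 4), name="sensor_input")
w = tf.constant([[0.1], [0.2], [0.3], [0.4]], tf.float32)
y = tf.nn.relu(tf.matmul(x, w))

with tf.Session() as sess:
    # The registered optimizer rewrites supported subgraphs to run
    # through the nGraph compiler before execution.
    print(sess.run(y, feed_dict={x: [[0.5, 0.2, 0.9, 0.1]]}))
```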
During her tenure at SiFive, she advanced vector algorithms and RISC-V optimization for edge computing devices, which are essential for smart city applications. "Power efficiency and performance are critical in these environments," she emphasizes. "By refining vector algorithm implementations, we enhance AI processing capabilities while maintaining energy efficiency."
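The payoff from vectorization is easiest to see outside the compiler. The following NumPy comparison is only a loose analogy for what RISC-V vector (RVV) code generation achieves in hardware: replacing a scalar per-element loop with wide operations that retire many lanes per instruction, cutting both runtime and energy per element:

```python
import time
import numpy as np

a = np.random.rand(1_000_000).astype(np.float32)
b = np.random.rand(1_000_000).astype(np.float32)

# Scalar-style loop: one multiply-add per iteration, as unvectorized code runs.
t0 = time.perf_counter()
out_scalar = np.empty_like(a)
for i in range(a.size):
    out_scalar[i] = a[i] * b[i] + 1.0
t_scalar = time.perf_counter() - t0

# Vectorized form: whole-array operations, analogous to RVV instructions
# processing many lanes at once, so fewer instructions per element.
t0 = time.perf_counter()
out_vec = a * b + 1.0
t_vec = time.perf_counter() - t0

assert np.allclose(out_scalar, out_vec)
print(f"scalar: {t_scalar:.3f}s, vectorized: {t_vec:.4f}s")
```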
One of her most significant projects involved parallel computing optimization at Intel using OpenMP runtime libraries. "Processing massive sensor data generated by autonomous systems is a challenge," she says. "Optimizing compiler technologies for heterogeneous systems ensures smooth workload distribution, enabling real-time AI processing."
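OpenMP itself lives in C, C++, and Fortran runtimes, but the scheduling idea translates directly. Below is a hedged Python sketch of an OpenMP-style static parallel-for: a batch of sensor readings is split into equal chunks and distributed across worker processes. The process_chunk function is purely illustrative, not part of any real pipeline:

```python
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def process_chunk(chunk: np.ndarray) -> float:
    # Stand-in for per-chunk sensor processing (e.g., filtering plus reduction).
    return float(np.sqrt(chunk ** 2 + 1.0).sum())

def parallel_reduce(data: np.ndarray, workers: int = 4) -> float:
    # Static scheduling, like OpenMP's "#pragma omp parallel for schedule(static)":
    # the iteration space is split into equal chunks, one per worker.
    chunks = np.array_split(data, workers)
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(process_chunk, chunks))

if __name__ == "__main__":
    readings = np.random.rand(1_000_000).astype(np.float32)
    print(parallel_reduce(readings))
```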
Vishakha's contributions to the MLIR framework further underscore her commitment to enhancing AI efficiency. "Custom operators and kernels for transformer models like BERT, integrated into TensorFlow, are helping drive energy-efficient AI processing," she shares. "These breakthroughs are crucial not just for autonomous driving but for broader applications in sensor fusion."
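The flavor of this kind of compiler-driven kernel work is visible through public APIs, even though the sketch below is a loose illustration rather than her specific MLIR contributions. TensorFlow's XLA JIT, which draws on MLIR infrastructure, can fuse the elementwise chain of a BERT-style GELU activation into a single optimized kernel:

```python
import tensorflow as tf

@tf.function(jit_compile=True)  # compile through XLA; elementwise ops get fused
def gelu(x):
    # tanh approximation of GELU, common in BERT-style transformers
    return 0.5 * x * (1.0 + tf.tanh(0.7978845608 * (x + 0.044715 * x ** 3)))

x = tf.random.normal([8, 768])   # 768 = BERT-base hidden size
y = gelu(x)                      # first call compiles; later calls reuse the kernel
```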
Her expertise extends beyond industry applications, as reflected in her published research paper, "Energy-Efficient Large Language Models: Advancements and Challenges." "The future of AI lies in edge computing," she predicts. "Localized data processing is becoming essential, and advanced compilation techniques will be key to optimizing AI models for real-world applications."
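One compilation-adjacent technique commonly used for energy-efficient edge inference is post-training quantization. The sketch below applies TensorFlow Lite's converter to a hypothetical toy model, not anything from the paper itself, lowering float weights to int8 to cut memory traffic and energy per inference:

```python
import tensorflow as tf

# A tiny stand-in model; in practice this would be a trained network.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(4),
])

# Post-training quantization: the converter compiles the model to a compact
# flatbuffer and lowers float weights to int8 where possible, shrinking the
# binary and reducing energy per inference on edge devices.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)
```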
For Vishakha, AI compiler technology is more than just a technical domain—it is a transformative force shaping the future of smart, sustainable urban ecosystems. "We are not just improving AI performance; we are driving innovation that makes cities safer, transportation smarter, and energy consumption more efficient. AI compilers are at the heart of this revolution, and I am excited to be part of it."