Meta Unveils Next-Gen AI Chips for Faster Performance

Update: 2024-04-10 22:59 IST

Meta has revealed advancements in its custom AI chips, promising improved performance and accelerated training for its ranking models. The Meta Training and Inference Accelerator (MTIA) is optimized for Meta's ranking and recommendation algorithms, enhancing both training efficiency and inference tasks.

According to Meta's recent blog post, MTIA represents a significant step in its long-term strategy to develop AI infrastructure tailored to its services. The company aims to align its chip designs with current technology infrastructure while remaining adaptable to future GPU advancements.

Initially announced in May 2023, MTIA v1 was designed for data centre deployment, and subsequent generations are likely to target the same environment. Although initial projections pointed to a 2025 release, Meta has confirmed that both MTIA versions are already in production.

While MTIA presently focuses on training ranking and recommendation algorithms, Meta envisions expanding its capabilities to encompass training generative AI, such as the Llama language models. The new MTIA chip brings notable upgrades, including 256MB of on-chip memory and a clock speed of 1.3GHz, compared to the v1's 128MB and 800MHz, respectively.

Early tests conducted by Meta show the new chip delivering a threefold performance improvement over its predecessor across the four models evaluated.

Looking ahead, Meta is reportedly exploring additional AI chip projects, including Artemis, which has been designed explicitly for inference tasks. The development of custom AI chips reflects a broader trend within the industry, with companies like Google, Microsoft, and Amazon investing in proprietary chip designs to meet the growing demand for computing power driven by AI applications.

Google introduced its TPU chips in 2017, while Microsoft unveiled its Maia 100 chips in 2023. Amazon's Trainium 2 chip, capable of training foundation models at four times the speed of its predecessor, further underscores the importance of custom chip solutions in meeting AI computational demands.

The race for powerful AI chips highlights the need for custom-designed hardware to support AI workloads effectively. As demand for chips continues to surge, industry leader Nvidia's valuation has reached an impressive $2 trillion, reflecting the immense value the market places on AI chip technology today.
