Meta Unveils Next-Gen AI Chips for Faster Performance
Meta's latest MTIA chips promise faster training speeds and potential expansion to train generative AI models, enhancing its infrastructure.
Meta has revealed advancements in its custom AI chips, promising improved performance and accelerated training for its ranking models. The Meta Training and Inference Accelerator (MTIA) is optimized for Meta's ranking and recommendation algorithms, enhancing both training efficiency and inference tasks.
According to Meta's recent blog post, MTIA represents a significant step in its long-term strategy to develop AI infrastructure tailored to its services. The company aims to align its chip designs with current technology infrastructure while remaining adaptable to future GPU advancements.
Initially announced in May 2023, MTIA v1 was slated for data centre deployment, with subsequent generations likely targeting the same. Despite initial projections indicating a 2025 release, Meta has confirmed that both MTIA versions are already in production.
While MTIA presently focuses on training ranking and recommendation algorithms, Meta envisions expanding its capabilities to encompass training generative AI, such as the Llama language models. The new MTIA chip boasts enhancements including 256MB of on-chip memory and a clock speed of 1.3GHz, compared to the v1's 128MB and 800MHz, respectively.
Early tests conducted by Meta demonstrate a threefold performance improvement of the new chip across four evaluated models compared to its predecessor.
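The figures above can be put side by side with simple arithmetic: the reported threefold gain exceeds what the raw clock-speed increase alone would suggest. A minimal illustrative sketch, using only the numbers quoted in this article:

```python
# Spec figures quoted in the article (MTIA v1 vs. the new chip).
v1_clock_ghz, v2_clock_ghz = 0.8, 1.3
v1_sram_mb, v2_sram_mb = 128, 256
observed_speedup = 3.0  # Meta's reported result across four evaluated models

clock_ratio = v2_clock_ghz / v1_clock_ghz    # 1.625x from frequency alone
memory_ratio = v2_sram_mb / v1_sram_mb       # 2.0x on-chip memory

# The measured 3x improvement outpaces the 1.625x clock-speed ratio,
# implying gains from architectural changes beyond frequency scaling.
print(f"clock ratio:   {clock_ratio:.3f}x")
print(f"memory ratio:  {memory_ratio:.1f}x")
print(f"observed gain: {observed_speedup:.1f}x")
```

This back-of-the-envelope comparison is not from Meta's blog post; it simply restates the article's numbers to show why the speedup cannot be attributed to clock speed alone.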
Looking ahead, Meta is reportedly exploring additional AI chip projects, including Artemis, which is explicitly designed for inference tasks. The development of custom AI chips reflects a broader trend within the industry, with companies like Google, Microsoft, and Amazon investing in proprietary chip designs to meet the growing demand for computing power driven by AI applications.
Google introduced its TPU chips in 2017, while Microsoft unveiled its Maia 100 chips. Amazon's Trainium 2 chip, capable of training foundation models at quadruple the speed of its predecessor, underscores the importance of custom chip solutions in meeting AI computational demands.
The competition for powerful AI chips underscores the necessity of custom-designed hardware to support AI workloads effectively. As demand for chips continues to surge, industry leader Nvidia's valuation has reached $2 trillion, highlighting the immense value placed on AI chip technology in today's market.
© 2024 Hyderabad Media House Limited/The Hans India. All rights reserved. Powered by hocalwire.com