Meta boosts AI chip power for enhanced performance

The company plans to expand the chip’s capabilities to train generative AI models.


Meta is gearing up for the next leap in AI chip technology, promising enhanced power and faster training for its ranking models. The Meta Training and Inference Accelerator (MTIA) aims to optimise training efficiency and streamline reasoning tasks, particularly for ranking and recommendation algorithms. In a recent announcement, Meta emphasised MTIA’s pivotal role in its long-term strategy to fortify AI infrastructure for current and future technological advancements, aligning with existing technology setups and forthcoming GPU developments.

The company’s commitment to custom silicon extends beyond computational power, encompassing memory bandwidth, networking, and capacity enhancements. MTIA v1 was initially unveiled in May 2023 with a focus on data centres, and Meta had indicated the chip would not be deployed until 2025. The company therefore surprised observers by revealing that both MTIA iterations are already in production, indicating accelerated progress in its chip development roadmap.

While MTIA currently specialises in training ranking and recommendation algorithms, Meta envisions expanding its capabilities to include generative AI training, such as for its Llama language models. The forthcoming MTIA chip boasts significant upgrades, featuring 256MB of on-chip memory and operating at 1.3GHz, compared to its predecessor’s 128MB and 800MHz configuration. Early performance tests indicate a threefold improvement across evaluated models, reflecting Meta’s strides in chip optimisation.

Why does it matter?

Meta’s pursuit mirrors a broader trend among AI companies, with players like Google, Microsoft, and Amazon venturing into custom chip development to meet escalating computing demands. The competitive landscape underscores the need for tailored solutions to power AI models efficiently. As the industry witnesses unprecedented growth in chip demand, market leaders like Nvidia continue to command soaring valuations, highlighting the critical role of custom chips in driving AI innovation.