Meta Platforms on Wednesday unveiled details about the next generation of the company’s in-house artificial intelligence accelerator chip.

“This chip’s architecture is fundamentally focused on providing the right balance of compute, memory bandwidth and memory capacity for serving ranking and recommendation models,” the Mark Zuckerberg-led company says in a blog post.
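For context (this sketch is not from Meta's post): whether a chip like this is limited by raw compute or by memory bandwidth on a given model can be estimated with a simple roofline calculation. The figures below are hypothetical, chosen only to illustrate why recommendation workloads, which spend much of their time on sparse embedding lookups, push designers toward more memory bandwidth and capacity rather than compute alone.

```python
# Hypothetical roofline sketch; the peak figures are illustrative
# assumptions, not MTIA specifications.

PEAK_TFLOPS = 100.0   # assumed peak compute, in TFLOP/s
MEM_BW_GBS = 1_000.0  # assumed memory bandwidth, in GB/s

def attainable_tflops(intensity_flops_per_byte: float) -> float:
    """Roofline model: throughput is capped by whichever is scarcer,
    compute or the data the memory system can deliver."""
    bandwidth_cap = MEM_BW_GBS * intensity_flops_per_byte / 1_000.0  # TFLOP/s
    return min(PEAK_TFLOPS, bandwidth_cap)

# Embedding lookups do little math per byte moved (low arithmetic
# intensity), so they are bandwidth-bound; dense layers are compute-bound.
for intensity in (1, 10, 100, 1_000):
    print(f"{intensity:5d} FLOP/byte -> {attainable_tflops(intensity):6.1f} TFLOP/s")
```

In this toy model, a workload at low arithmetic intensity gains nothing from extra compute: doubling memory bandwidth is what doubles its throughput, which is why the balance Meta describes matters for ranking and recommendation models.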

The next generation of Meta’s large-scale infrastructure is being built with AI in mind, including supporting new generative AI products, recommendation systems and advanced AI research, the post says. “It’s an investment we expect will grow in the years ahead, as the compute requirements to support AI models increase alongside the models’ sophistication,” the company says.

Last year, Meta unveiled the Meta Training and Inference Accelerator (MTIA), its first-generation AI inference accelerator, designed in-house with the company’s AI workloads in mind. “It was designed specifically for our deep learning recommendation models that are improving a variety of experiences across our apps and technologies,” says Meta.

MTIA is a long-term bet to provide the most efficient architecture for Meta’s unique workloads. “As AI workloads become increasingly important to our products and services, this efficiency will be central to our ability to provide the best experiences for our users around the world. MTIA v1 was an important step in improving the compute efficiency of our infrastructure and better supporting our software developers as they build AI models that will facilitate new and better user experiences,” the tech giant says.

“This new version of MTIA more than doubles the compute and memory bandwidth of our previous solution while maintaining our close tie-in to our workloads. It is designed to efficiently serve the ranking and recommendation models that provide high-quality recommendations to users,” it adds.

MTIA has been deployed in Meta’s data centres and is now serving models in production. “We are already seeing the positive results of this program as it’s allowing us to dedicate and invest in more compute power for our more intensive AI workloads,” the company says.

“MTIA will be an important piece of our long-term roadmap to build and scale the most powerful and efficient infrastructure possible for Meta’s unique AI workloads. We’re designing our custom silicon to work in cooperation with our existing infrastructure as well as with new, more advanced hardware (including next-generation GPUs) that we may leverage in the future. Meeting our ambitions for our custom silicon means investing not only in compute silicon but also in memory bandwidth, networking and capacity, as well as other next-generation hardware systems,” it says.

Meanwhile, Intel on Tuesday introduced its latest artificial intelligence chip, Gaudi 3, to meet the growing demand for large AI models. Intel claims the chip is more power-efficient than Nvidia’s H100. Gaudi 3 will be available to OEMs – including Dell Technologies, Hewlett Packard Enterprise, Lenovo and Supermicro – in the second quarter of 2024.
