Meta has been on a poaching spree for talent across major tech and AI firms. In one of his first posts after joining Meta, former Scale AI CEO Alexandr Wang, who now leads the company’s newly restructured Superintelligence team, named several recently recruited team members.
“I’m excited to be the Chief AI Officer of Meta, working alongside natfriedman, and thrilled to be accompanied by an incredible group of people joining on the same day. Towards superintelligence,” Wang said in a post on X on Tuesday.
Since Meta CEO Mark Zuckerberg is reportedly personally involved in closing key hires for the team, the lavish recruitment drive has popularly come to be referred to as 'Zuck Bucks'.
What is the Superintelligence Lab? Who has been hired by Meta?
Aside from Wang, the restructured Superintelligence team includes seven former OpenAI employees, one from Anthropic, and three from Google.
The new division, called Meta Superintelligence Labs (MSL), includes a number of high-profile hires from leading AI labs. The unit will be tasked with building advanced AI systems designed to rival or surpass human intelligence. The move is part of a broader restructuring aimed at consolidating Meta’s foundational AI, product, and FAIR (Fundamental AI Research) teams under a single umbrella, while also establishing a new lab focused on next-generation model development.
Among them is Trapit Bansal, who pioneered reinforcement learning on chain-of-thought prompting and co-created OpenAI’s o-series models. Shuchao Bi, creator of GPT-4o’s voice mode and o4-mini, previously led multimodal post-training at OpenAI. Huiwen Chang, co-creator of GPT-4o’s image generation capability, earlier invented the MaskGIT and Muse text-to-image architectures at Google Research.
Ji Lin contributed to a wide range of OpenAI models, including o3/o4-mini, GPT-4o, GPT-4.1, GPT-4.5, 4o-imagegen, and the Operator reasoning stack. Joel Pobar, who worked on inference at Anthropic, returns to Meta after an earlier 11-year stint at the company, during which he worked on HHVM, Hack, Flow, Redex, performance tooling, and machine learning.
Jack Rae, previously pre-training tech lead for Gemini and reasoning for Gemini 2.5, also led DeepMind’s early LLM efforts, including Gopher and Chinchilla. Hongyu Ren, co-creator of GPT-4o, 4o-mini, o1-mini, o3-mini, o3, and o4-mini, was previously leading a post-training group at OpenAI.
Johan Schalkwyk, a former Google Fellow, was an early contributor to Sesame and the technical lead for Maya. Pei Sun, who worked on post-training, coding, and reasoning for Gemini at DeepMind, previously built the last two generations of Waymo’s perception models. Jiahui Yu, co-creator of o3, o4-mini, GPT-4.1, and GPT-4o, led OpenAI’s perception team and co-led multimodal work at Gemini. And finally, Shengjia Zhao, co-creator of ChatGPT, GPT-4, all mini models, GPT-4.1, and o3, previously led synthetic data efforts at OpenAI.
Alongside Wang, the team will be co-led by Nat Friedman, the former CEO of GitHub, who will help oversee product development and applied AI research.
Meta’s aggressive hiring spree is reshaping the economics of AI talent, setting off a compensation arms race across the industry. Top-tier researchers are now commanding multi-year pay packages valued at over $10 million, with a few deals reportedly approaching the $100 million mark.
This surge in compensation is putting intense pressure on smaller AI labs and startups, many of which are struggling to compete with deep-pocketed tech giants.
The resulting talent consolidation risks reinforcing the dominance of a handful of well-funded firms.
Meta’s approach to AI
In an internal memo now circulating online, Zuckerberg described Meta’s vision of delivering personal superintelligence to the masses, emphasising its advantage in compute resources, its experience in building global-scale products, and its leadership in emerging categories like AI-powered glasses and wearables.
In June, Meta announced a massive $14.3 billion investment in Scale AI, marking its largest bet yet on securing high-quality training data for artificial intelligence. The deal grants Meta a 49% stake in the data-labelling firm and brought Scale AI founder Alexandr Wang into its leadership ranks to spearhead the MSL. The move directly targets one of Meta’s biggest hurdles in the AI race, which is limited access to the specialised datasets needed to train competitive large language models.
While OpenAI continues to dominate the global AI market share with ChatGPT, Meta has struggled to keep pace. Its recently released Llama 4 models drew tepid feedback, with users citing weak performance in coding and generic outputs when compared to some smaller competitors.
Meta’s investment in Scale AI surpasses even Microsoft’s headline-making $13 billion stake in OpenAI, signalling the company’s intent to aggressively close the gap in foundational AI capabilities.
This investment and the hiring decisions are all part of Meta’s overall budget for developing AI infrastructure. Zuckerberg in January had said that Meta is planning to spend up to $65 billion in 2025 to scale its artificial intelligence infrastructure. Aside from hiring AI talent, a significant portion of the investment will go toward constructing a data centre exceeding 2 gigawatts in capacity.
“We're planning to invest $60-65B in capex this year while also growing our AI teams significantly, and we have the capital to continue investing in the years ahead,” Zuckerberg had said in a post on Facebook in January.