Foxconn unveils ‘FoxBrain’ AI model, steps into AI race
Developed using 120 Nvidia H100 GPUs in four weeks, the model is based on Meta’s Llama 3.1 architecture and is optimised for traditional Chinese and Taiwanese language styles.

Foxconn, the Taiwanese electronics giant best known for assembling Apple’s iPhones, has taken a major step into the AI arena by launching its own large language model, ‘FoxBrain.’ Built on Meta’s Llama 3.1 architecture, FoxBrain is explicitly optimised for traditional Chinese and Taiwanese language use, a milestone that marks it as Taiwan’s first AI language model.
Developed in just four weeks using 120 of Nvidia’s H100 GPUs, FoxBrain is positioned as a high-performing AI model, particularly strong in data analysis, problem-solving, code generation, and decision-making. Foxconn acknowledged that FoxBrain slightly trails China’s DeepSeek model, but said its performance nonetheless approaches world-class standards and is well suited to complex reasoning tasks.
The company initially intends FoxBrain for internal use, streamlining manufacturing processes, improving supply chain management, and supporting smarter decision-making. Foxconn also plans to collaborate with technology partners and to make its AI findings publicly available through open-source channels, encouraging broader adoption and innovation across industries.
Nvidia played a pivotal role in FoxBrain’s development, providing infrastructure and expertise through ‘Taipei-1,’ Taiwan’s most powerful supercomputer, located in Kaohsiung. Foxconn is set to share more details about FoxBrain at Nvidia’s GTC developer conference later this month, underscoring its ambition to extend its leadership in hardware manufacturing into AI-driven technologies.