Google expands open source AI offerings

At its Cloud Next conference, Google has introduced a range of open-source tools to support generative AI projects and infrastructure.


In a departure from its usual modus operandi, Google used its Cloud Next conference, traditionally a venue for closed-source offerings, to unveil a set of open-source tools. The shift aims to cultivate developer goodwill and further Google’s ecosystem ambitions. Among the notable releases is MaxDiffusion, a collection of reference implementations of diffusion models tailored for XLA devices, such as Google’s tensor processing units (TPUs) and recent Nvidia GPUs.
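To make the idea of a “reference implementation for XLA devices” concrete, here is a minimal, purely illustrative sketch of a JIT-compiled denoising step in JAX, the kind of XLA-friendly loop such implementations are built around. The function and parameter names (`toy_unet`, `denoise_step`) are placeholders and are not MaxDiffusion’s actual API.

```python
# Illustrative only: a minimal DDPM-style denoising step compiled through XLA
# via jax.jit. `toy_unet` is a stand-in for a real noise-prediction network
# and is NOT part of MaxDiffusion's API.
import jax
import jax.numpy as jnp

def toy_unet(params, x, t):
    # Placeholder "model": a single linear layer nudged by the timestep.
    return x @ params["w"] + params["b"] * t

@jax.jit
def denoise_step(params, x, t, alpha, alpha_bar, key):
    """One reverse-diffusion step: predict the noise, then sample x_{t-1}."""
    eps = toy_unet(params, x, t)
    mean = (x - (1 - alpha) / jnp.sqrt(1 - alpha_bar) * eps) / jnp.sqrt(alpha)
    noise = jax.random.normal(key, x.shape)
    return mean + jnp.sqrt(1 - alpha) * noise

key = jax.random.PRNGKey(0)
params = {"w": jnp.eye(8), "b": jnp.zeros((8,))}
x = jax.random.normal(key, (4, 8))            # batch of toy latents
x = denoise_step(params, x, jnp.float32(10.0), 0.98, 0.5, key)
print(x.shape)  # (4, 8)
```

Because JAX lowers such functions to XLA, the same code can be compiled for TPUs or recent Nvidia GPUs, which is the portability the MaxDiffusion framing emphasises.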

Another significant launch is JetStream, an engine designed to boost the inference performance of generative AI models, particularly text-generating ones. Currently limited to TPUs, with GPU support promised later, JetStream claims up to a threefold improvement in ‘performance per dollar’ for models such as Google’s Gemma 7B and Meta’s Llama 2.
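For readers unfamiliar with the metric, the back-of-the-envelope sketch below shows how ‘performance per dollar’ is usually framed for serving workloads: tokens generated per dollar of accelerator time. The throughput and price figures are placeholders, not published JetStream or Cloud TPU benchmark numbers.

```python
# Hypothetical figures only, to illustrate the 'performance per dollar' metric.
def perf_per_dollar(tokens_per_second: float, hourly_cost_usd: float) -> float:
    """Tokens generated per dollar of accelerator time."""
    tokens_per_hour = tokens_per_second * 3600
    return tokens_per_hour / hourly_cost_usd

baseline = perf_per_dollar(tokens_per_second=1000, hourly_cost_usd=4.0)
optimised = perf_per_dollar(tokens_per_second=3000, hourly_cost_usd=4.0)

# A 3x throughput gain at the same hourly price is a 3x gain in
# performance per dollar, which is how such claims are typically framed.
print(f"baseline:    {baseline:,.0f} tokens per dollar")
print(f"optimised:   {optimised:,.0f} tokens per dollar")
print(f"improvement: {optimised / baseline:.1f}x")
```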

Google has also expanded MaxText, its collection of text-generating models, adding Gemma 7B, OpenAI’s GPT-3, Llama 2, and models from the AI startup Mistral. These implementations, optimised for TPUs and Nvidia GPUs, can be customised and fine-tuned to developers’ requirements, with the aim of maximising hardware utilisation for better energy efficiency and lower cost.
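As a rough sketch of what ‘fine-tunable’ means in practice, the snippet below shows a generic JAX fine-tuning step: compute a next-token loss, backpropagate, and update the weights. The tiny model and parameter names are placeholders for illustration; this is not MaxText’s actual configuration or training API.

```python
# Illustrative only: a generic JAX fine-tuning step, not MaxText's API.
import jax
import jax.numpy as jnp

def tiny_lm(params, tokens):
    # Placeholder next-token "model": embedding lookup plus a linear head.
    h = params["embed"][tokens]            # (batch, seq, dim)
    return h @ params["head"]              # (batch, seq, vocab) logits

def loss_fn(params, tokens, targets):
    logits = tiny_lm(params, tokens)
    logp = jax.nn.log_softmax(logits, axis=-1)
    nll = -jnp.take_along_axis(logp, targets[..., None], axis=-1)
    return nll.mean()

@jax.jit
def finetune_step(params, tokens, targets, lr=1e-3):
    """One SGD step: compute the loss, backpropagate, update the weights."""
    loss, grads = jax.value_and_grad(loss_fn)(params, tokens, targets)
    params = jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)
    return params, loss

vocab, dim = 128, 16
key = jax.random.PRNGKey(0)
params = {
    "embed": jax.random.normal(key, (vocab, dim)) * 0.02,
    "head": jax.random.normal(key, (dim, vocab)) * 0.02,
}
tokens = jnp.zeros((2, 8), dtype=jnp.int32)    # dummy input batch
targets = jnp.ones((2, 8), dtype=jnp.int32)    # dummy target tokens
params, loss = finetune_step(params, tokens, targets)
print(float(loss))
```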

In collaboration with Hugging Face, Google has introduced Optimum TPU, which aims to ease the process of bringing certain AI workloads, particularly text-generating models, onto TPUs. For now its scope is narrow: it supports only Gemma 7B, and only running models rather than training them, though Google promises future enhancements. The collaboration underscores Google’s commitment to democratising access to advanced AI hardware, hinting at further developments in the pipeline.
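For context, the familiar Hugging Face Transformers generation flow for Gemma 7B looks like the sketch below. Whether Optimum TPU mirrors these exact classes on TPU hardware is an assumption on our part; the snippet only illustrates the inference-only workflow described above (running a model, not training it), and the checkpoint id is the assumed Hugging Face Hub name for Gemma 7B.

```python
# Standard Transformers inference flow; Optimum TPU's exact TPU-side API may
# differ, and "google/gemma-7b" is the assumed (gated) Hub checkpoint id.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-7b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Explain what a TPU is in one sentence.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```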