Google, a subsidiary of Alphabet Inc., released fresh details about its supercomputers on Tuesday, claiming they are faster and more energy-efficient than comparable systems from Nvidia Corp.
The Tensor Processing Unit, or TPU, is a custom chip that Google designed in-house. The company uses those chips for more than 90% of its work on artificial intelligence training, the process of feeding data through models so they become useful at tasks like producing human-like text or generating images.
In a scientific paper published on Tuesday, Google described how it strung together more than 4,000 of the chips into a supercomputer using optical switches of its own design.
Because the so-called large language models that power products like Google’s Bard or OpenAI’s ChatGPT have grown dramatically in size and are now far too large to fit on a single chip, improving these connections has become a key point of competition among companies that build “AI supercomputers.”
Instead, the models must be split across thousands of chips, which then have to work together for weeks or longer to train the model. PaLM, the largest publicly disclosed language model Google has trained to date, was trained over 50 days by splitting it across two of the 4,000-chip supercomputers.
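To make the idea of splitting a model across chips concrete, here is a minimal toy sketch (not Google's code; the device count, block names, and sizes are illustrative assumptions). Each "device" ends up holding only a shard of the full set of parameter blocks:

```python
def shard_parameters(params, num_devices):
    """Round-robin assign each parameter block to a device.

    `params` is a list of (name, size) tuples; real systems use far more
    sophisticated placement, but the principle is the same: no single
    device holds the whole model.
    """
    shards = {d: [] for d in range(num_devices)}
    for i, block in enumerate(params):
        shards[i % num_devices].append(block)
    return shards


# A hypothetical "model" as named parameter blocks with sizes in millions.
model = [("embed", 310), ("layer0", 120), ("layer1", 120), ("head", 85)]
shards = shard_parameters(model, num_devices=2)

for device, blocks in shards.items():
    total = sum(size for _, size in blocks)
    print(f"device {device}: {[name for name, _ in blocks]} ({total}M params)")
```

During training, the devices then exchange activations and gradients over the interconnect, which is why the quality of the chip-to-chip links matters so much.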
According to Google, its supercomputers make it easy to reconfigure the connections between chips on the fly, helping the company avoid problems and tune the system for performance gains.
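A rough intuition for why reconfigurable links help can be given with a toy model (an illustrative assumption, not Google's actual switch design): treat the optical switch as a table mapping ports to ports, so that when a chip fails, its links can be rewired to a spare without rebuilding the machine:

```python
class OpticalSwitch:
    """Toy circuit switch: each port is optically patched to one peer."""

    def __init__(self):
        self.routes = {}  # port -> peer port

    def connect(self, a, b):
        # Establish a bidirectional link between two ports.
        self.routes[a] = b
        self.routes[b] = a

    def reroute(self, failed, spare):
        # Remove the failed port and patch its peer to the spare instead.
        peer = self.routes.pop(failed, None)
        if peer is not None:
            self.connect(peer, spare)


switch = OpticalSwitch()
switch.connect("chip0", "chip1")
switch.reroute("chip1", "spare0")  # chip1 fails; spare0 takes its place
print(switch.routes["chip0"])      # chip0 now talks to spare0
```

The same mechanism lets operators change the interconnect topology to suit a particular workload, which is the "adjusting for performance" benefit Google describes.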
Google’s supercomputer has been operational since 2020 in a data centre in Mayes County, Oklahoma, though the company is only now making details about it public. The startup Midjourney reportedly used the system to train its model, which generates images after being fed a short passage of text.
According to the paper, for comparably sized systems, Google’s fourth-generation TPU is up to “1.7 times faster and 1.9 times more energy-efficient” than a system built around Nvidia’s rival A100 chip.
Google said it did not compare its fourth-generation chip to Nvidia’s current top-of-the-line H100 because the H100 came to market after Google’s chip and is built with more modern manufacturing technology.
The search giant hinted that it might be developing a new TPU to compete with the H100, with Google Fellow Norm Jouppi telling Reuters that Google had “a solid collection of potential chips,” but it gave no further specifics.