Google Unveils AI-Powered Video Creation Tool, Google Vids, at Cloud Next Conference
Google has introduced Google Vids, an AI-powered video creation tool. Set to join the Google Workspace productivity suite upon release, Google Vids aims to change how users create and collaborate on videos. Aparna Pappu, VP & GM of Google Workspace, unveiled the tool, describing it as “your video editing, writing and production assistant, all in one.” Google Vids can turn existing assets, such as marketing copy or images stored in Google Drive, into finished videos. A key feature is its collaborative nature: team members can work on a video story together in real time, directly in the browser, which eliminates the need to email files back and forth while preserving the access controls and security of Google Workspace. Google Vids is currently in limited testing, with a wider rollout to testers in Google Labs planned for June. Eventually, the tool will be available to customers with Gemini for Workspace subscriptions.
Google Releases Open Source Tools to Support AI Model Development at Cloud Next Conference
Alongside its product announcements, Google has introduced a range of open source tools designed to support generative AI projects and infrastructure. One key release is MaxDiffusion, a collection of reference implementations of various diffusion models that run on XLA devices, meaning hardware such as TPUs that the XLA compiler targets to optimize and accelerate AI workloads. Google has also launched JetStream, a new engine for serving text-generating AI models; it currently supports TPUs and promises up to three times higher “performance per dollar” for models like Google’s Gemma 7B and Meta’s Llama 2. Additionally, Google has expanded MaxText, its collection of text-generating AI models targeting TPUs and Nvidia GPUs in the cloud. The collection now includes Gemma 7B, OpenAI’s GPT-3, Llama 2, and models from AI startup Mistral, all of which developers can customize and fine-tune.
Meta Confirms Imminent Release of Llama 3 Open Source Large Language Model
Meta has confirmed that it will release Llama 3, the next generation of its open source large language model, within the next month. The announcement was made at an event in London on Tuesday by Nick Clegg, Meta’s president of global affairs, and Chris Cox, the company’s chief product officer. The company plans to roll out a suite of next-generation foundation models throughout the year, with the aim of powering multiple products across Meta with Llama 3. The move comes as Meta strives to catch up with competitors such as OpenAI, which took the industry by surprise with the launch of ChatGPT over a year ago. Llama 3 is expected to address the limitations of its predecessors by giving more accurate answers and fielding a wider range of questions, including on controversial topics. Joelle Pineau, vice president of AI Research at Meta, said the goal is to make a Llama-powered Meta AI the most useful assistant in the world.
Intel Launches Gaudi 3 AI Accelerator Chip to Compete with Nvidia and AMD
Intel has unveiled its next-generation AI processing chip, the Gaudi 3 AI accelerator, designed to make AI development faster, more straightforward, and more scalable. Compared with its predecessor, the chip promises four times the computing power, double the network bandwidth, and 1.5 times the HBM bandwidth. Jeni Barovian, Intel’s vice president and general manager for data center AI solutions strategy, emphasized the significance of the Gaudi 3 launch, stating that the chip will deliver the performance, scalability, and efficiency required to build future AI systems. Intel CEO Patrick Gelsinger previewed the Gaudi 3 five months ago; it will be generally available in the third quarter of 2024. Eitan Medina, COO of Intel’s Habana Labs, described the Gaudi 3 AI accelerator as having a “heterogeneous computer architecture” with advanced specifications. Building solutions on Gaudi 3 will be similar to building on Gaudi 2, with Intel doubling the network bandwidth from each accelerator so that customers can build clusters of any size to match their workload needs.