Major News from Google, Apple, Meta’s AI Studio, Google Gemini, Runway Gen-3 Alpha, Nvidia, Shutterstock and Getty Images

Apple Leverages Google’s TPUs for AI Model Training, Bypassing Nvidia

Apple’s recent research paper reveals the company’s use of Google’s tensor processing units (TPUs) to train two key AI models for its upcoming AI features. This decision marks a departure from the industry norm of using Nvidia’s dominant GPUs. Apple utilized 2,048 TPUv5p chips for its device-based AI model and 8,192 TPUv4 processors for its server AI model. The choice to use Google’s cloud infrastructure highlights the competitive landscape in AI hardware. As Apple rolls out its Apple Intelligence features, this revelation underscores the company’s strategic approach to AI development and its potential for creating even more sophisticated models using Google’s technology.

Meta Introduces AI Studio: Personalized Chatbots for Social Media Platforms

Meta Platforms has unveiled AI Studio, a new tool enabling users to create and share custom AI chatbots across its social media platforms. Built on Meta’s advanced Llama 3.1 model, AI Studio allows users to design personalized AI characters that can handle common interactions on platforms like Instagram. This feature aims to enhance user engagement and provide creators with an automated way to manage communications. Meta’s move comes as competition in the AI space intensifies, with rivals like OpenAI reportedly developing advanced AI capabilities. The introduction of AI Studio reflects Meta’s strategy to integrate AI more deeply into its social media ecosystem.

Google Gemini to Introduce Advanced AI Image Editing Features

Google Gemini is developing new capabilities for its AI image generation tool, allowing users to fine-tune and edit AI-created images. The upcoming features, discovered in unfinished code, will enable users to make detailed adjustments to address common AI image flaws like anatomical errors or impossible structures. Two editing methods are planned: submitting prompts for specific changes and interactively selecting areas for modification. These tools aim to enhance efficiency for professionals in visual fields and casual users alike. While similar features exist in other AI platforms, this development signifies Google’s commitment to advancing its generative AI capabilities and competing with industry rivals.
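Interactive area selection of this kind is typically implemented as mask-based inpainting: the user’s selection becomes a binary mask telling the model which pixels to regenerate and which to keep. A minimal, library-free sketch of building such a mask (the function name and rectangle-selection model are illustrative, not Gemini’s actual API):

```python
def make_selection_mask(width, height, box):
    """Build a binary inpainting mask: 1 = regenerate, 0 = keep.

    `box` is (left, top, right, bottom) in pixel coordinates,
    the area the user selected for modification.
    """
    left, top, right, bottom = box
    return [
        [1 if (left <= x < right and top <= y < bottom) else 0
         for x in range(width)]
        for y in range(height)
    ]

# A 4x4 image where the user selected the top-left 2x2 area.
mask = make_selection_mask(4, 4, (0, 0, 2, 2))
```

The mask and an edit prompt would then be passed together to the image model, which regenerates only the selected region — the same pattern used by existing inpainting tools.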

Runway Gen-3 Alpha Introduces Image-to-Video Conversion, Advancing AI Video Generation

Runway has unveiled a new feature for its Gen-3 Alpha AI video model, allowing users to generate realistic videos from still images. This update enhances artistic control and consistency in video creation, with impressive speed and quality. The platform offers 5- or 10-second video options, with built-in safety measures to prevent misuse. This development positions Runway as a strong competitor in the rapidly evolving AI video generation market, alongside companies like OpenAI and Pika. However, the company faces legal challenges over data scraping practices, highlighting ongoing debates about AI’s impact on copyright and creative industries.
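Video generation services like this are usually asynchronous: the client submits the still image, then polls until the clip is ready. A hedged sketch of that workflow — the client class here is a stand-in, and Runway’s real SDK names and endpoints may differ:

```python
import time

class FakeClient:
    """In-memory stand-in for a video-generation API client
    (illustrative only; not Runway's actual SDK)."""
    def __init__(self):
        self._polls = 0

    def submit(self, image, seconds):
        return {"image": image, "seconds": seconds}

    def status(self, job):
        self._polls += 1
        return "succeeded" if self._polls >= 3 else "running"

def generate_video(client, image_path, seconds=5, poll_interval=0.01):
    """Submit an image-to-video job and poll until it finishes."""
    if seconds not in (5, 10):  # Gen-3 Alpha offers 5- or 10-second clips
        raise ValueError("duration must be 5 or 10 seconds")
    job = client.submit(image=image_path, seconds=seconds)
    while (state := client.status(job)) not in ("succeeded", "failed"):
        time.sleep(poll_interval)
    return state

result = generate_video(FakeClient(), "still.png")
```

The fixed 5- or 10-second durations from the announcement are enforced up front, before any job is submitted.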

Nvidia Unveils Comprehensive Suite of Tools to Accelerate Humanoid Robot Development

Nvidia has introduced a range of services and tools aimed at advancing humanoid robotics development globally. The company’s offerings include new NIM microservices for robot simulation and learning, the Osmo orchestration service for managing complex workflows, and an AI-enabled teleoperation system for efficient robot training. These technologies are designed to streamline development processes, reduce costs, and enhance data generation for AI models. Nvidia is also expanding access to its computing platforms and launching a Humanoid Robot Developer Program, partnering with leading robotics companies. This initiative marks a significant step in Nvidia’s efforts to drive innovation in the rapidly evolving field of humanoid robotics.

Shutterstock and Getty Images Enhance Creative Services with Nvidia’s Edify AI Technology

Shutterstock and Getty Images have unveiled significant upgrades to their creative services, powered by Nvidia’s Edify AI technology. Shutterstock has launched a generative 3D service in commercial beta, allowing quick prototyping of 3D assets and 360 HDRi backgrounds. Getty Images has improved its generative AI service, doubling image creation speed and enhancing output quality. These advancements enable designers and artists to boost productivity in 3D modeling, virtual lighting, and image generation. Both services utilize Nvidia’s visual AI foundry and NIM microservices, offering users powerful tools for creating and customizing visual content with unprecedented speed and accuracy.

Frequently asked questions

What is Meta’s AI Studio and how does it work?

Meta’s AI Studio is a new tool that allows users to create custom AI chatbots for social media platforms like Instagram. Built on Meta’s Llama 3.1 model, it enables users to design personalized AI characters that can handle routine interactions and communications. The tool aims to help creators manage their social media presence more efficiently by automating common interactions while maintaining a personalized touch.

Why did Apple use Google’s TPUs instead of Nvidia GPUs to train its AI models?

Apple chose Google’s tensor processing units (TPUs) to train its AI models, using 2,048 TPUv5p chips for its device-based model and 8,192 TPUv4 processors for its server model. This decision marks a departure from the industry standard of Nvidia GPUs and suggests Apple is prioritizing the performance characteristics or cost efficiencies offered by Google’s cloud infrastructure.

What image editing features is Google Gemini developing?

Google Gemini is developing advanced AI image editing capabilities that will allow users to fine-tune and modify AI-generated images. The upcoming features include two main editing methods: submitting specific prompts for changes and interactive area selection for modifications. These tools will help users address common AI image flaws like anatomical errors and impossible structures, making the platform more useful for both professional and casual users.

How does Runway Gen-3 Alpha’s image-to-video conversion work?

Runway Gen-3 Alpha’s image-to-video conversion feature transforms still images into realistic videos lasting 5 or 10 seconds. The system maintains artistic consistency while generating fluid motion from static images. It includes built-in safety measures to prevent misuse and offers impressive processing speed and output quality, making it a valuable tool for content creators and artists.

What has Nvidia released for humanoid robot development?

Nvidia has launched a comprehensive suite of tools including NIM microservices for robot simulation and learning, the Osmo orchestration service for workflow management, and an AI-enabled teleoperation system. These tools are part of its Humanoid Robot Developer Program and are designed to streamline development processes, reduce costs, and improve data generation for AI models in robotics.

How are Shutterstock and Getty Images using Nvidia’s Edify AI technology?

Shutterstock and Getty Images are using Nvidia’s Edify AI technology to enhance their creative services. Shutterstock has launched a generative 3D service for creating 3D assets and 360 HDRi backgrounds, while Getty Images has improved its generative AI service with doubled image creation speed and better output quality. Both services utilize Nvidia’s visual AI foundry and NIM microservices for faster and more accurate content creation.

How does Meta’s AI Studio differ from other chatbot tools?

Meta’s AI Studio distinguishes itself by being specifically designed for social media integration, particularly across Meta’s platforms like Instagram. Built on the advanced Llama 3.1 model, it offers creators the unique ability to design personalized AI characters that understand platform-specific contexts and can maintain consistent engagement with followers. This social media focus and deep platform integration set it apart from general-purpose chatbot solutions.
Gor Gasparyan

Optimizing digital experiences for growth-stage & enterprise brands through research-driven design, automation, and AI