Meta Introduces AI Chatbots for Business Messaging on WhatsApp and Messenger
At Meta Connect 2024, Meta unveiled AI-powered business chatbots for WhatsApp and Messenger, accessible through click-to-message ads. These chatbots can handle customer inquiries, showcase products, and complete purchases. The company is increasingly integrating AI into its advertising tools, including features for generating ad images and headlines. Meta reports that over a million advertisers are using its AI ad tools, with 15 million ads created in the past month. While Meta claims AI ads improve click-through rates, some research suggests customers may prefer human interaction over instant AI responses in customer service scenarios.
Meta Unveils Multimodal Llama 3.2 AI Models with Image Capabilities
Meta has introduced Llama 3.2, an update to its AI model family that adds multimodal capabilities. The new 11B and 90B models can interpret charts and graphs, caption images, and identify objects in photos. These models are available globally except in Europe, due to regulatory concerns. Meta also launched smaller text-only models, 1B and 3B, optimized for edge devices. The company aims to expand its AI influence through open-source offerings, though with some usage restrictions. The move reflects Meta's strategy to compete in the AI space while navigating complex regulatory landscapes.
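Since the weights are openly distributed, the multimodal models can be run through common inference libraries. Below is a minimal sketch of asking the 11B vision model about a chart, assuming the Hugging Face transformers integration (MllamaForConditionalGeneration) and access to the gated meta-llama/Llama-3.2-11B-Vision-Instruct checkpoint; the image URL and prompt are placeholders, not part of Meta's announcement.

```python
# Minimal sketch: querying a Llama 3.2 vision model about an image.
# Assumes transformers >= 4.45 and an approved license for the gated checkpoint.
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, MllamaForConditionalGeneration

model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"  # gated; requires accepting Meta's license

model = MllamaForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
processor = AutoProcessor.from_pretrained(model_id)

# Any chart or photo works here; this URL is a placeholder.
image = Image.open(
    requests.get("https://example.com/sales_chart.png", stream=True).raw
)

# Interleave the image with a text question using the chat template.
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Summarize the trend shown in this chart."},
    ]}
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(image, prompt, add_special_tokens=False, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(output[0], skip_special_tokens=True))
```

The 1B and 3B text-only variants follow the same pattern with a standard text pipeline, which is what makes them practical for on-device use.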
Meta AI Introduces Image Understanding and Editing Capabilities
Meta has unveiled new features for its AI assistant, enabling it to understand and edit photos. Users can now share images with Meta AI, ask questions about them, and request edits such as adding or removing objects, changing outfits, or modifying backgrounds. The technology extends to Instagram, where it can generate backgrounds for Stories based on shared feed photos. Meta is also testing automatic dubbing and lip-syncing for Reels translations. These advancements, powered by Llama 3.2 models, aim to make Meta AI a versatile and widely used assistant across the company's platforms, with CEO Mark Zuckerberg highlighting its free, unlimited access as a key differentiator.
Meta Expands AI-Powered Image Generation Across Social Platforms
Meta is rolling out its AI-powered 'Imagine' feature across Facebook, Instagram, and Messenger, allowing users to generate images from text prompts directly in their feeds and Stories, and to create AI-generated profile pictures. This expansion enables users to create eye-catching, customized visuals to accompany their posts and enhance engagement. The update also includes AI-suggested captions for Stories and personalized chat themes in Messenger. Meta is testing AI-generated content in feeds based on user interests and trends, aiming to encourage more interaction with its AI tools. CEO Mark Zuckerberg emphasizes Meta AI's free, unlimited access as a key differentiator, positioning it to become a widely used AI assistant globally.
Meta AI Introduces Voice Responses and Celebrity Voices
Meta has unveiled new voice capabilities for its AI assistant across Instagram, Messenger, WhatsApp, and Facebook. Users can now receive spoken responses from Meta AI, with the option to choose from various voices, including AI versions of the voices of celebrities such as Judi Dench and John Cena. The assistant has also gained image analysis abilities, allowing users to ask questions about shared photos. Additionally, Meta is testing an AI translation tool for Instagram Reels that automatically dubs and lip-syncs creators' speech in different languages. These enhancements aim to make Meta AI more versatile and engaging, though the effectiveness of celebrity voices remains to be seen.
Meta Showcases AI-Generated Creator Avatars in Surreal Demo
At Meta Connect, Mark Zuckerberg demonstrated AI Studio, a platform for creating custom chatbots, by conversing with an AI-generated version of creator Don Allen Stevenson III while the real Stevenson watched. This feature allows creators to design virtual versions of themselves to handle frequent inquiries, potentially saving time. The presentation also introduced celebrity-voiced chatbots, highlighting Meta’s push into AI-driven interactions. This demonstration underscores the growing trend of AI-generated content and its potential to blur the lines between real and virtual personas in digital communication.
Meta Unveils Neural Interface for Orion AR Glasses Control
Meta is developing a wrist-worn neural interface to control its prototype Orion AR glasses. The device, based on technology from Meta's CTRL-labs acquisition, reads neuromuscular signals at the wrist so wearers can navigate apps on the glasses with subtle hand gestures. CEO Mark Zuckerberg highlighted the interface as the first to enable direct brain-to-device signaling for Orion. The wristband is expected to be compatible with other Meta AR hardware and will soon be available for purchase. Orion glasses, still at the concept stage, feature tiny projectors in the temples that create a heads-up display, aiming to deliver true augmented reality experiences.
Ray-Ban Meta Smart Glasses Get AI-Powered Upgrades
Meta has announced significant updates to its Ray-Ban Meta smart glasses, introducing AI-driven features and smartphone-like functionalities. The glasses will soon offer real-time AI video processing, allowing users to ask questions about their surroundings and receive verbal responses. Live language translation capabilities for English, French, Italian, and Spanish are also in the works. Additional enhancements include reminders, QR code scanning, and integrations with popular music and audiobook services. These upgrades aim to make the smart glasses more versatile and user-friendly, potentially positioning them as a viable alternative to smartphones for certain tasks. The glasses will also be available with Transitions lenses for improved adaptability to various lighting conditions.