Major News from Remarkable Alexa, ElevenLabs, Meta and OpenAI

Amazon Unveils Paid Tier for Revamped Alexa AI Assistant

Amazon is reportedly planning a major overhaul of its Alexa voice assistant, including the introduction of a paid premium tier. According to Reuters, the retail giant is working on a new “Remarkable Alexa” powered by more advanced AI that could handle more complex tasks and queries. Amazon is considering charging $5 to $10 per month for the premium tier while keeping the current “Classic Alexa” as a free, more limited service. The effort, known internally as “Project Banyan,” aims to have the new Alexa ready by August, though the launch timeline and pricing details could still change. The move comes as Amazon struggles to make Alexa profitable while facing competition from more advanced AI assistants like ChatGPT, and the company is reportedly viewing 2024 as a “must win” year for its AI efforts against rivals deploying increasingly sophisticated tools.

ElevenLabs Introduces Free AI Voice Isolator to Compete with Adobe

ElevenLabs has launched a new tool, the AI Voice Isolator, which lets creators remove unwanted ambient noise from content such as films, podcasts, and YouTube videos. Available today, it is free to use with usage limits on the ElevenLabs platform, following the company’s recent Reader app launch. The Voice Isolator processes uploaded content to strip out noise and enhance speech clarity, with ElevenLabs claiming studio-quality output. Head of design Ammaar Reshi demonstrated its effectiveness by removing leaf blower noise, and tests showed it successfully removed a range of noises but struggled with sounds like wall banging and finger snapping. ElevenLabs’ Sam Sklar noted that it does not yet work on vocals in music, though that may improve. The company plans to enhance the tool further and provide API access soon. Free use is limited to 10 minutes of audio per month, with paid plans starting at $5 per month for more extensive needs.
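
Since ElevenLabs says API access is still on the way, any integration code is necessarily speculative. The sketch below shows roughly how an HTTP call to a future voice-isolation endpoint might look in Python; the endpoint path, field names, and response handling are assumptions for illustration, not documented API.

```python
# Hypothetical sketch of calling a future ElevenLabs voice-isolation endpoint.
# The URL and form field below are assumptions; consult the official docs
# once API access actually ships.
import requests

API_KEY = "your-elevenlabs-api-key"                          # ElevenLabs' standard auth header value
ENDPOINT = "https://api.elevenlabs.io/v1/audio-isolation"    # hypothetical endpoint path

with open("noisy_podcast.mp3", "rb") as f:
    response = requests.post(
        ENDPOINT,
        headers={"xi-api-key": API_KEY},
        files={"audio": f},          # upload the noisy recording (assumed field name)
        timeout=120,
    )

response.raise_for_status()
# Assumes the cleaned, speech-only audio is returned as binary in the response body.
with open("clean_podcast.mp3", "wb") as out:
    out.write(response.content)
```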

Meta Unveils Groundbreaking AI Models with Multi-Token Prediction

Meta released pre-trained AI models on Wednesday that use a novel multi-token prediction approach. This technique, introduced in a paper in April, enables models to predict multiple future words simultaneously, promising better performance and reduced training times. Released under a non-commercial research license on Hugging Face, the models focus on code completion tasks. The release aligns with Meta’s commitment to open science and positions the company as an AI innovation leader. The method could address concerns about the computational demands and environmental impact of large AI models, and could potentially improve a range of language-related tasks. However, democratizing powerful AI tools also raises ethical and security challenges, including potential misuse and the spread of misinformation. Meta’s release is part of a larger suite of AI research artifacts, demonstrating the company’s ambition to lead across multiple AI domains.
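
The core idea is easy to sketch: a shared trunk produces hidden states, and several independent output heads each predict a token at a different future offset, rather than a single head predicting only the next token. Below is a minimal, illustrative PyTorch sketch of that structure; the trunk, sizes, and head design are placeholders, not Meta’s released architecture.

```python
# Minimal sketch of multi-token prediction: one shared trunk, n independent
# heads, each predicting a token n steps ahead. Illustrative only.
import torch
import torch.nn as nn

class MultiTokenPredictor(nn.Module):
    def __init__(self, vocab_size=32000, d_model=512, n_future=4):
        super().__init__()
        # Stand-in trunk; the actual models use a full causal transformer decoder.
        self.trunk = nn.Sequential(
            nn.Embedding(vocab_size, d_model),
            nn.TransformerEncoder(
                nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True),
                num_layers=2,
            ),
        )
        # One output head per future position (t+1 ... t+n_future).
        self.heads = nn.ModuleList(
            [nn.Linear(d_model, vocab_size) for _ in range(n_future)]
        )

    def forward(self, tokens):
        hidden = self.trunk(tokens)                    # (batch, seq, d_model)
        # Each head emits logits for a different future offset.
        return [head(hidden) for head in self.heads]   # n_future tensors of (batch, seq, vocab)

# Training would sum a cross-entropy loss per head, with targets shifted by each head's offset.
model = MultiTokenPredictor()
logits_per_offset = model(torch.randint(0, 32000, (2, 16)))
print([l.shape for l in logits_per_offset])
```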

OpenAI Suffers Internal Breach, Hacker Steals AI Technology Details

OpenAI experienced a security breach in 2023 in which a hacker gained access to the company’s internal messaging systems. The hacker stole details about the design of OpenAI’s artificial intelligence technologies from discussions in an online employee forum, but did not gain access to the systems where OpenAI houses and builds its AI models. OpenAI executives informed employees and the company’s board about the breach, but decided not to share the news publicly because no customer or partner information was stolen. They did not consider the incident a national security threat, believing the hacker was a private individual with no known ties to a foreign government, and as a result OpenAI did not inform federal law enforcement agencies. The news comes as the Biden administration plans to impose new guardrails around advanced AI models like ChatGPT to safeguard U.S. technology from potential misuse by countries like China and Russia.

The incident highlights growing concerns about the security and potential misuse of powerful AI technologies. In May, 16 companies developing AI, including OpenAI, pledged to develop the technology safely as regulators struggle to keep pace with rapid innovation and emerging risks.