Major News from Google, ElevenLabs, Microsoft, and Slack

Google Unveils “Model Explorer,” an Open-Source Tool for AI Model Visualization and Debugging

Google has introduced a new open-source tool, “Model Explorer,” designed to make artificial intelligence models more transparent and understandable. Announced on the Google AI research blog, Model Explorer employs a hierarchical approach to visualize even the most complex neural networks, including language and diffusion models, addressing the challenge of understanding the inner workings of increasingly complex AI systems. The tool borrows advanced graphics-rendering techniques from the gaming industry, enabling it to smoothly visualize large models with millions of nodes and edges, a scale at which existing visualization tools struggle. Google has already used Model Explorer internally to streamline the deployment of large models on resource-constrained platforms such as mobile devices. Model Explorer is part of Google’s “AI on the Edge” initiative, which aims to push more AI computation onto devices.
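For developers who want to try it, Google’s announcement describes a pip-installable package that launches the visualization UI locally. A minimal sketch, assuming the “ai-edge-model-explorer” package name and the visualize() entry point from that announcement; the model path is a hypothetical placeholder:

```python
# Install first (package name per Google's announcement):
#   pip install ai-edge-model-explorer
import model_explorer

# Starts a local server and opens the hierarchical graph UI in the browser.
# "path/to/model.tflite" is a placeholder; the tool also accepts other
# supported formats per Google's documentation.
model_explorer.visualize("path/to/model.tflite")
```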

ElevenLabs Launches Audio Native, Bringing AI Narration to Websites

AI voice startup ElevenLabs has unveiled Audio Native, a new tool that lets users add AI-narrated audio versions of their website content. Launched this week, Audio Native is an embeddable audio player that uses ElevenLabs’ text-to-speech technology to automatically narrate web pages. The tool is available for $11 per month as part of the “Creator” tier and includes a listener dashboard for tracking audience engagement. On its X page, ElevenLabs highlighted examples of websites already using Audio Native, including its own blog, bensbites.com, and a November 2023 New Yorker article. The company has also worked with publications such as The Atlantic and The New York Times. This launch follows ElevenLabs’ recent release of ElevenLabs Reader, which can voice text from web pages and documents in 11 different voices.

EU Threatens Microsoft with Billions in Fines Over Missing Generative AI Risk Information

The European Union is threatening Microsoft with a hefty fine, potentially reaching billions of dollars, for failing to provide information about the risks associated with its generative AI tools. The EU’s Digital Services Act (DSA) requires large online platforms, including Microsoft’s Bing search engine, to assess and mitigate systemic risks posed by their services. In March, the EU requested information from Microsoft and other tech giants about the potential risks of generative AI, particularly concerning civic discourse and electoral processes. Microsoft failed to provide some of the requested documents, prompting the EU to issue a warning. The company has until May 27th to comply or face a fine of up to 1% of its global annual revenue, which could amount to over $2 billion. The EU is particularly concerned that Bing’s AI features, such as the AI assistant “Copilot in Bing” and the image-generation tool “Image Creator by Designer,” could contribute to the spread of misinformation, including deepfakes. With European Parliament elections approaching in June, the EU is focused on mitigating AI-fueled disinformation. Microsoft says it is cooperating with the European Commission and is committed to addressing its concerns, and that it has already taken steps to mitigate risks across its online services, including measures to safeguard the 2024 elections.

Slack Faces Backlash Over AI Data Collection Policy

Slack is under fire for its policy of using customer data to train its AI models. The controversy erupted after a post on Hacker News highlighted Slack’s privacy policy, which states that users are automatically opted in to having their data used for AI training; to opt out, users must email the company directly. The revelation sparked outrage among users who felt the policy was unclear and deceptive. Critics pointed out that the privacy policy does not explicitly mention “Slack AI,” the company’s new suite of AI-powered features, leaving users unsure about the scope of data collection. Adding to the confusion, Slack’s policy distinguishes between “global models” and “AI models.” The company says that “global models,” which power features like channel recommendations and search, are trained on customer data but do not memorize or reproduce it; the policy is less clear about how customer data is used for “AI models.” Slack maintains that “Slack AI” itself does not use customer data for training, relying instead on large language models (LLMs) hosted within Slack’s own infrastructure, and the company has acknowledged the need to update its privacy policy to address the confusion. The incident underscores the importance of transparency and user control in the age of AI, as companies increasingly rely on user data to develop their products.