What is Google’s Model Explorer and how does it help with AI development?
Major News from Google, ElevenLabs, Microsoft, and Slack

Google Unveils “Model Explorer,” an Open-Source Tool for AI Model Visualization and Debugging

Google has introduced a new open-source tool, “Model Explorer,” designed to make artificial intelligence models more transparent and understandable. Announced on the Google AI research blog, Model Explorer employs a hierarchical approach to visualize even the most complex neural networks, including language and diffusion models. This tool addresses the challenge of understanding the inner workings of increasingly complex AI systems. Model Explorer utilizes advanced graphics rendering techniques from the gaming industry, enabling it to smoothly visualize large models with millions of nodes and edges. This overcomes the limitations of existing visualization tools that struggle with such complexity. Google has already used Model Explorer internally to streamline the deployment of large models on resource-constrained platforms like mobile devices. Model Explorer is part of Google’s “AI on the Edge” initiative, which aims to push more artificial intelligence compute to devices.

ElevenLabs Launches Audio Native, Bringing AI Narration to Websites

AI voice startup ElevenLabs has unveiled Audio Native, a new tool that allows users to add AI-narrated audio versions of their website content. Launched this week, Audio Native is an embeddable audio player that utilizes ElevenLabs’ text-to-speech technology to automatically narrate web pages. The tool is available for $11 per month as part of the “creator” tier and includes a listener dashboard for tracking audience engagement. ElevenLabs highlighted examples of websites already using Audio Native on its X page, including its own blog, bensbites.com, and a November 2023 New Yorker article. The company has also worked with publications like The Atlantic and The New York Times. This launch follows ElevenLabs’ recent release of ElevenLabs Reader, which can voice text from web pages and documents in 11 different voices.

EU Threatens Microsoft with Billions in Fines Over Missing Generative AI Risk Information

The European Union is threatening Microsoft with a hefty fine, potentially reaching billions of dollars, for failing to provide information about the risks associated with its generative AI tools. The EU’s Digital Services Act (DSA) mandates that large online platforms, including Microsoft’s Bing search engine, assess and mitigate systemic risks posed by their services. In March, the EU requested information from Microsoft and other tech giants about the potential risks of generative AI, particularly concerning civic discourse and electoral processes. Microsoft failed to provide some of the requested documents, prompting the EU to issue a warning. The company has until May 27th to comply or face a fine of 1% of its global annual revenue, which could amount to over $2 billion. The EU is particularly concerned about the potential for Bing’s AI features, such as the AI assistant “Copilot in Bing” and the image generation tool “Image Creator by Designer,” to contribute to the spread of misinformation, including deepfakes. With European Parliament elections approaching in June, the EU is focused on mitigating AI-fueled disinformation. Microsoft says it is cooperating with the European Commission and is committed to addressing its concerns. The company adds that it has taken steps to mitigate risks across its online services, including measures to safeguard the 2024 elections.

Slack Faces Backlash Over AI Data Collection Policy

Slack is under fire for its policy of using customer data to train its AI models. The controversy erupted after a post on Hacker News highlighted Slack’s privacy policy, which states that users are automatically opted in to having their data used for AI training. To opt out, users must email the company directly. This revelation sparked outrage among users who felt the policy was unclear and deceptive. Critics pointed out that the privacy policy does not explicitly mention “Slack AI,” the company’s new suite of AI-powered features, leaving users unsure about the scope of data collection. Adding to the confusion, Slack’s policy distinguishes between “global models” and “AI models.” The company claims that “global models,” which power features like channel recommendations and search, are trained on customer data but do not memorize or reproduce it. However, the policy is less clear about the use of customer data for “AI models.” Slack maintains that “Slack AI” itself does not use customer data for training, relying instead on large language models (LLMs) hosted within Slack’s own infrastructure. The company has acknowledged the need to update its privacy policy to address the confusion surrounding “Slack AI.” This incident underscores the importance of transparency and user control in the age of AI, where companies are increasingly reliant on user data to develop their products.

Frequently asked questions

What is Google’s Model Explorer?

Google’s Model Explorer is an open-source tool designed to visualize and debug complex AI models. It uses advanced gaming industry graphics techniques to display neural networks with millions of nodes and edges in an understandable way. The tool helps developers and researchers better understand the inner workings of complex AI systems, particularly useful for deploying large models on mobile devices and other resource-constrained platforms.

What is ElevenLabs’ Audio Native and how much does it cost?

ElevenLabs’ Audio Native is an embeddable audio player that automatically converts website content into AI-narrated audio. Available for $11 per month under the “creator” tier, it includes analytics through a listener dashboard. The tool uses ElevenLabs’ text-to-speech technology to generate natural-sounding narration and has been implemented by major publications like The Atlantic and The New York Times.

Why is the EU threatening Microsoft with fines?

The EU is threatening Microsoft with fines up to 1% of its global annual revenue (potentially over $2 billion) for failing to provide information about risks associated with its generative AI tools. Under the Digital Services Act, Microsoft must assess and report on systemic risks of services like Bing’s AI features, particularly concerning electoral processes and misinformation. The company has until May 27th to comply with the EU’s information request.

Why are users criticizing Slack’s AI data collection policy?

Users have criticized Slack for automatically opting them into AI data collection without clear communication. The main concern is that users must email the company directly to opt out, and the privacy policy doesn’t clearly distinguish between “global models” and “AI models” data usage. While Slack claims “Slack AI” doesn’t use customer data for training, the policy’s lack of clarity has led to widespread user distrust.

How is Google addressing AI transparency?

Google is tackling AI transparency through Model Explorer and its “AI on the Edge” initiative. The tool makes complex AI systems more accessible and understandable by providing hierarchical visualization capabilities. This approach helps developers identify and resolve issues in AI models, particularly when adapting them for mobile and edge devices.

What tools does ElevenLabs offer content creators?

ElevenLabs provides content creators with tools like Audio Native and ElevenLabs Reader. These solutions offer AI-powered text-to-speech capabilities in multiple voices, audience engagement tracking, and seamless website integration. The platform supports various content types and has been adopted by major publications for audio content generation.

How are tech companies responding to increased AI regulation?

Tech companies are facing increased scrutiny over AI regulation compliance, as seen with Microsoft’s EU challenges and Slack’s privacy policy issues. Companies are being required to provide detailed risk assessments, implement clear opt-out mechanisms, and ensure transparency in their AI operations. This includes addressing concerns about data collection, model training, and potential misuse of AI technologies.
Gor Gasparyan

Optimizing digital experiences for growth-stage & enterprise brands through research-driven design, automation, and AI