Major News from YouTube, Perplexity, Groq, and Microsoft's MInference

YouTube Unveils AI Tool to Surgically Remove Copyrighted Music from Videos

YouTube is rolling out a new AI-powered tool that allows creators to selectively remove copyrighted music from their videos without silencing the entire audio track. The “Erase Song” option, part of YouTube’s “Video Copyright” summary page, enables creators to remove protected content while preserving the rest of the video’s audio. This feature is designed to address the ongoing tug-of-war between creators and copyright holders over copyright strikes, known as “copystrikes” on the platform. It is a significant improvement over previous limited options, such as muting the entire video or replacing the song, which often disrupted the viewing experience. The song-erasing feature is particularly important for creators who rely on monetization through the YouTube Partner Program. 

As YouTube continues to grapple with copyright enforcement challenges, exacerbated by AI-generated content, capabilities like this will benefit both copyright holders and creators. The platform has hinted at further AI-powered features to help creators make and share content without the threat of copyright claims.

Perplexity AI Unveils Upgraded ‘Pro Search’ Tool to Revolutionize Research and Beat ChatGPT

Perplexity has announced a significant upgrade to its ‘Pro Search’ tool, aiming to stand out from competitors like ChatGPT. The upgraded Pro Search boasts enhanced math, programming, and multi-step reasoning capabilities, making research faster and more efficient than ever before.

Key features of the upgraded Pro Search include integration of the Wolfram|Alpha engine, which lets the AI solve complex mathematical questions quickly and accurately. The tool's multi-step reasoning has also improved: it can recognize when a question requires a multifaceted approach and work through goals step by step. Search results have been expanded as well, with the AI consulting a broader range of sources and displaying its step-by-step process.

Perplexity positions the enhanced Pro Search as a game-changer for professionals across various fields, from attorneys and marketers to developers and engineers. The tool empowers users to make more informed decisions by providing thoroughly researched answers. The upgraded Pro Search is accessible to all Perplexity users, with a limit of five free uses every four hours, and the company also offers a Pro subscription, granting users more access to the advanced search capabilities.

Groq Unveils Lightning-Fast LLM Engine, Sees Rapid Developer Growth

Groq has introduced a new capability that allows users to make lightning-fast queries and perform other tasks with leading large language models (LLMs) directly on its website. The new feature showcases Groq's impressive processing speed: in tests, the company's engine replied at a reported 1,256.54 tokens per second, a rate that appears almost instantaneous and outperforms even the powerful GPU chips from companies like Nvidia.
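To put that figure in perspective, here is a back-of-the-envelope calculation based on the 1,256.54 tokens-per-second rate cited above; the 500-token reply length is an illustrative assumption, not a number from Groq.

```python
# Back-of-the-envelope math for Groq's reported throughput.
TOKENS_PER_SECOND = 1256.54   # rate cited in the article

ms_per_token = 1000 / TOKENS_PER_SECOND        # latency per token, in milliseconds
reply_tokens = 500                             # a typical chatbot reply (assumption)
seconds_for_reply = reply_tokens / TOKENS_PER_SECOND

print(f"{ms_per_token:.2f} ms per token")                              # -> 0.80 ms per token
print(f"{seconds_for_reply:.2f} s for a {reply_tokens}-token reply")   # -> 0.40 s
```

At under a millisecond per token, a full paragraph-length reply streams out in well under half a second, which is why the demo feels instantaneous.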

The experience is particularly significant as it demonstrates to both developers and non-developers the remarkable speed and flexibility of LLM chatbots. Groq’s CEO, Jonathan Ross, believes that the ease of use and performance of the company’s fast engine will drive even greater adoption of LLMs in the future.

Groq has already amassed a developer base of more than 282,000, a testament to the growing interest in its technology. The company’s focus on enterprise-level AI applications and its promise of more efficient and less power-hungry processing compared to GPUs have made it an attractive option for businesses seeking to deploy AI solutions.

Microsoft Unveils ‘MInference’ Demo, Revolutionizing AI Processing Speed

Microsoft has unveiled an interactive demonstration of its new MInference technology on the AI platform Hugging Face. MInference, which stands for “Million-Tokens Prompt Inference,” aims to dramatically accelerate the processing of large language models (LLMs) by addressing the computational challenges associated with lengthy text inputs.

According to Microsoft’s research, MInference can slash processing time by up to 90% for inputs of one million tokens, equivalent to around 700 pages of text, while maintaining accuracy. This breakthrough addresses a critical challenge in the AI industry, as the demand for processing larger datasets and longer text inputs continues to grow.
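The 90% figure is easiest to appreciate as a speedup: a 90% reduction in processing time means the same workload finishes in one tenth of the baseline time. The 30-minute baseline below is a hypothetical number chosen for illustration, not one from Microsoft's research.

```python
# Illustrating the claimed up-to-90% reduction in processing time.
REDUCTION = 0.90              # figure cited in the article
baseline_minutes = 30.0       # hypothetical time to process a 1M-token prompt (assumption)

accelerated_minutes = baseline_minutes * (1 - REDUCTION)
speedup = baseline_minutes / accelerated_minutes

print(f"{accelerated_minutes:.0f} min instead of {baseline_minutes:.0f} min "
      f"({speedup:.0f}x faster)")   # -> 3 min instead of 30 min (10x faster)
```

In other words, "up to 90% less time" is equivalent to an up-to-10x speedup on million-token inputs.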

The release of MInference also intensifies the competition in AI research among tech giants, as various companies work on efficiency improvements for LLMs. Microsoft’s public demo asserts its position in this crucial area of AI development, potentially prompting other industry leaders to accelerate their own research in similar directions.