Apple Intelligence Features Face Delay, Expected in October Update
Apple’s highly anticipated artificial intelligence features, collectively known as Apple Intelligence, will not be included in the initial release of iOS 18 and iPadOS 18 in September. Instead, these AI capabilities are now slated for rollout in October through subsequent software updates. The delay affects various devices, including the iPhone 15 Pro series, iPads, and Macs with M1 chips or later. Apple plans to make these features available to developers for early testing via beta versions. This setback follows Apple’s recent emphasis on AI advancements and comes amid challenges in aligning with new EU tech regulations.
Google’s AI Models Make Strides in Mathematical Reasoning
Google’s DeepMind has unveiled two new AI systems, AlphaProof and AlphaGeometry 2, which have shown significant progress in solving complex mathematical problems. These models successfully tackled four out of six questions from the 2024 International Math Olympiad, marking the best performance by AI in this competition to date. AlphaProof, which pairs a Gemini-based language model with the AlphaZero reinforcement-learning approach, demonstrated particularly impressive results, solving the competition’s hardest problem, which only a handful of human contestants answered correctly. This advancement represents a crucial step towards developing AI systems with enhanced reasoning capabilities, moving beyond simple word prediction to more abstract problem-solving.
JPMorgan Introduces AI-Powered Research Assistant for Employees
JPMorgan Chase has launched an in-house generative AI tool called LLM Suite, designed to assist employees with tasks typically performed by research analysts. The system, capable of writing, generating ideas, and summarizing documents, is now available to approximately 50,000 staff members, primarily in the asset and wealth management division. This move follows a trend in the financial sector, with rival Morgan Stanley having introduced a similar AI-powered chatbot last year. JPMorgan’s initiative highlights the growing adoption of AI technologies in the banking industry to enhance productivity and streamline operations.
Salesforce’s MINT-1T: A Game-Changer in AI Research and Development
Salesforce AI Research has released MINT-1T, an enormous open-source dataset containing one trillion text tokens and 3.4 billion images. This multimodal dataset, ten times larger than previous public datasets, interleaves text and images to mimic real-world documents. MINT-1T’s size and diversity could accelerate AI development, particularly in multimodal learning. By making this resource publicly available, Salesforce could help level the playing field for AI research, enabling smaller labs and individual researchers to compete with tech giants. However, the dataset’s scale also raises important ethical questions about privacy, bias, and responsible AI development.
Apple Joins White House AI Safety Initiative, Signaling Commitment to Responsible Development
Apple has signed the White House’s voluntary commitment to developing safe, secure, and trustworthy AI, joining 15 other major tech companies. This move comes as Apple prepares to integrate its generative AI offering, Apple Intelligence, into its core products. The commitment involves red-teaming AI models before public release, securing model weights, and developing content labeling systems. While these voluntary commitments lack strong enforcement mechanisms, they represent a first step towards responsible AI development. Apple’s participation signals its willingness to cooperate with potential future AI regulations, as the government continues to expand its efforts in AI oversight and research support.
NIST Unveils Dioptra: A Tool to Assess AI Model Vulnerabilities
The National Institute of Standards and Technology (NIST) has re-released Dioptra, an open-source tool designed to evaluate AI model risks, particularly those arising from data poisoning attacks. This web-based platform allows companies and users to assess, analyze, and track AI risks by benchmarking models and simulating threats. Dioptra’s launch aligns with President Biden’s executive order on AI safety and complements international efforts to develop AI testing standards. While the tool currently has limitations, such as working only with models that can be downloaded and run locally, it represents a significant step towards quantifying AI system vulnerabilities and enhancing overall model safety.