Siri Gets a Major Upgrade: Apple’s AI-Powered Transformation
Apple is set to launch a revamped version of its digital assistant, Siri, this fall. Powered by Apple Intelligence, the company’s generative AI offering, and bolstered by a partnership with OpenAI, the update will bring a slew of new features to Siri. Key highlights include ChatGPT integration, which lets Siri tap OpenAI’s language model to answer a wide range of questions, from recipe ideas to identifying unknown flowers. Siri will also become more context-aware, able to retrieve personal information and documents to assist users seamlessly.
This major overhaul of Siri promises to transform the way iPhone, iPad, and Mac users interact with their digital assistant, making it more intelligent, versatile, and user-friendly than ever before.
Google’s Robots Learn Like Humans, Mastering Tasks Through Video Observation
Google DeepMind’s robotics team has taught their RT-2 robots to learn from watching videos, much like a human intern. The team has published a study showcasing how these robots, equipped with the Gemini 1.5 Pro generative AI model, can absorb information from video tours and use that knowledge to navigate environments and carry out various tasks.
The Gemini 1.5 Pro model’s long context window allows the AI to process extensive information at once. Researchers filmed video tours of an environment, and the robots watched them to learn the space; that acquired knowledge then let them complete tasks, responding with both verbal and image outputs.
The model’s ability to complete multi-step tasks stands out: the robots can navigate a space, visually process what they find, and return to answer questions, demonstrating a level of understanding beyond the current standard of single-step commands.
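DeepMind’s actual pipeline feeds raw tour video into Gemini 1.5 Pro’s long context window. As a rough, hedged illustration of the observe-then-answer pattern only (not the real system), a toy agent can accumulate observations from a narrated “tour” and then answer queries against that memory; all names and structure here are illustrative:

```python
# Toy sketch of the "watch a tour, then answer questions" pattern.
# The real system ingests raw video frames; here the tour is reduced
# to narrated observations, and "answering" is a simple lookup.

class TourAgent:
    def __init__(self):
        self.memory = {}  # object -> location, accumulated during the tour

    def observe(self, location, objects):
        """Ingest one 'frame' of the tour: the objects seen at a location."""
        for obj in objects:
            self.memory[obj] = location

    def answer(self, query_object):
        """Recall where an object was seen, or admit ignorance."""
        location = self.memory.get(query_object)
        if location is None:
            return f"I have not seen a {query_object} on the tour."
        return f"The {query_object} is in the {location}."

agent = TourAgent()
agent.observe("kitchen", ["coffee machine", "whiteboard marker"])
agent.observe("lab", ["robot arm", "oscilloscope"])
print(agent.answer("oscilloscope"))  # -> "The oscilloscope is in the lab."
```

The point of the long context window is that the “memory” never has to be hand-built like this: the whole tour fits in one prompt, and the model does the recall itself.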
While not yet ready for widespread use, this technology represents a significant advancement in robotics. Integrating AI models like Gemini 1.5 Pro could transform industries such as healthcare, shipping, and janitorial services, as robots equipped with these capabilities could navigate complex environments and carry out a wide range of tasks.
Zendesk CTO Envisions AI Transforming the Future of Customer Experience
Zendesk’s Chief Technology Officer, Adrian McDermott, has shared his vision for the future of Customer Experience (CX), where AI will play a transformative role. According to a new report by Zendesk, 81% of CX leaders believe AI will improve CX, and 86% think CX will be radically transformed in the next three years.
McDermott, who participated in a panel discussion at the VB Transform 2024 event, emphasized that Zendesk has been steadily integrating AI into its intelligent CX platform. The report surveyed over 1,300 senior CX leaders and found overwhelming interest in AI, with 77% believing traditional CX will give way to radically different industry dynamics due to AI’s influence.
While not every organization has AI as a top priority today, McDermott believes that within a few years, all customer interactions will involve AI in some form, whether through a co-pilot experience, knowledge discovery, or a full conversational experience.
However, McDermott stressed the importance of maintaining a human touch, even with pervasive AI in CX. He highlighted the need for proper journey design and escalation protocols, as human conversations will remain crucial. To address this, Zendesk acquired Klaus, a quality-assurance company, to monitor both human and bot interactions.
McDermott outlined three stages of AI implementation in customer service: human-in-the-middle, concierge implementation, and expanded automation. He expects to see significant changes in the industry’s comfort level with AI over the next year, as more companies gain practical experience with AI-driven customer service solutions.
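The stages and escalation protocols above can be sketched as a simple routing rule: the bot resolves only what it is confident about at a given stage and hands everything else to a human. This is a hedged toy illustration; the thresholds and stage names are my own, not Zendesk’s implementation:

```python
# Toy escalation router: who answers a ticket at each AI-adoption stage?
# Threshold values are illustrative assumptions, not Zendesk's numbers.
THRESHOLDS = {
    "human-in-the-middle": 1.01,   # AI drafts, but a human always reviews
    "concierge": 0.90,             # AI resolves only high-confidence cases
    "expanded automation": 0.60,   # AI resolves most, escalates the hard ones
}

def route(confidence, stage):
    """Return 'bot' if the model's confidence clears the stage's bar,
    otherwise escalate to a human agent."""
    return "bot" if confidence >= THRESHOLDS[stage] else "human"

print(route(0.95, "human-in-the-middle"))  # -> human (bar is unreachable)
print(route(0.95, "concierge"))            # -> bot
print(route(0.50, "expanded automation"))  # -> human
```

Moving between stages is then just a matter of lowering the bar as trust in the AI grows, which matches the gradual industry shift McDermott anticipates.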
While the transformation will be an evolution that takes time, McDermott believes the future of CX will be profoundly shaped by the integration of AI, ultimately enhancing the customer experience.
Patronus AI Unveils ‘Lynx’: An Open-Source Bullshit Detector Outperforming Industry Giants
Patronus AI, a New York-based startup, has unveiled Lynx, an open-source model designed to detect and mitigate hallucinations in large language models. This breakthrough could reshape enterprise AI adoption as businesses across sectors grapple with the reliability of AI-generated content.
Lynx outperforms industry giants like OpenAI’s GPT-4 and Anthropic’s Claude 3 in hallucination detection tasks, representing a significant leap forward in AI trustworthiness. Patronus AI reports that Lynx achieved 8.3% higher accuracy than GPT-4 in detecting medical inaccuracies and surpassed GPT-3.5 by 29% across all tasks.
Anand Kannappan, CEO of Patronus AI, explained that hallucinations in LLMs can lead to incorrect decision-making, misinformation, and a loss of trust from clients and customers. To address this, Patronus AI also released HaluBench, a new benchmark for evaluating AI model faithfulness in real-world scenarios, including domain-specific tasks in finance and medicine.
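Benchmarks like HaluBench frame faithfulness evaluation as a (question, context, answer) judgment: is the answer supported by the given context? Lynx itself is a fine-tuned LLM judge; the crude lexical heuristic below only illustrates the shape of the task, not Lynx’s method, and every threshold and word list is an assumption:

```python
# Toy faithfulness check over (context, answer) pairs. A real judge
# like Lynx reasons semantically; this heuristic just asks whether the
# answer's content words appear in the supporting context.

STOPWORDS = {"the", "a", "an", "is", "are", "was", "were", "of", "in", "to", "and"}

def content_words(text):
    return {w.strip(".,?!").lower() for w in text.split()} - STOPWORDS

def is_faithful(context, answer, min_support=0.7):
    """Flag an answer as hallucinated if too few of its content
    words are grounded anywhere in the context."""
    answer_words = content_words(answer)
    if not answer_words:
        return True
    supported = answer_words & content_words(context)
    return len(supported) / len(answer_words) >= min_support

ctx = "Aspirin is commonly used to reduce fever and relieve mild pain."
print(is_faithful(ctx, "Aspirin can reduce fever."))            # -> True
print(is_faithful(ctx, "Aspirin cures bacterial infections."))  # -> False
```

Lexical overlap fails on paraphrase and negation, which is exactly why trained judge models like Lynx exist, but the input/output contract is the same.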
The decision to open-source Lynx and HaluBench could accelerate the adoption of more reliable AI systems across industries. Kannappan stated that Patronus AI plans to monetize Lynx through enterprise solutions, including scalable API access, advanced evaluation features, and bespoke integrations.
IBM Revolutionizes Enterprise AI with Open Innovation and Customizable Models
At VB Transform 2024, IBM’s David Cox, VP of AI models and director at the MIT-IBM Watson AI Lab, presented a compelling vision for open innovation in enterprise generative AI. Cox challenged the tech industry to reevaluate its practices around open-source models, calling for more standardized, transparent, and collaborative approaches to AI development.
Cox highlighted the critical nature of the current moment in AI, emphasizing the need to avoid lock-in and make informed decisions about where to invest. He outlined a nuanced view of openness, cautioning that many “open” large language models (LLMs) lack the essential properties of successful open-source software, such as frequent updates, structured release cycles, and active community contributions.
To address this, Cox introduced IBM’s Granite series of open-source AI models, which prioritize transparency and performance. He also proposed a novel perspective on LLMs as data representations, highlighting the significant gap between publicly available information and the proprietary “secret sauce” of enterprises.
Cox outlined a three-step approach for enterprises to integrate their unique data and knowledge into open-source LLMs. This includes finding a trusted base model, creating a new representation of business data, and deploying, scaling, and creating value.
To bring this vision to life, Cox presented InstructLab, a collaborative project between IBM and Red Hat. InstructLab offers a “genuinely open-source contribution model for LLMs,” enabling enterprises to precisely target areas for model enhancement and integrate their proprietary expertise.
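InstructLab accepts contributions as taxonomy files of seed question/answer examples that steer what the model learns. As a hedged sketch of that contribution shape, the snippet below builds and sanity-checks one such entry; the field names follow the spirit of the qna.yaml format but are illustrative, and the company and data are hypothetical:

```python
# Illustrative InstructLab-style taxonomy entry: seed Q/A pairs that
# encode a company's proprietary knowledge. Field names approximate
# the qna.yaml schema; consult the project docs for the exact format.
taxonomy_entry = {
    "task_description": "Answer questions about Acme Corp's return policy",  # hypothetical
    "created_by": "acme-support-team",
    "seed_examples": [
        {"question": "How long do customers have to return an item?",
         "answer": "Items can be returned within 30 days of delivery."},
        {"question": "Do returns require the original packaging?",
         "answer": "Original packaging is preferred but not required."},
    ],
}

def validate(entry):
    """Minimal sanity check before contributing the entry upstream."""
    assert entry["seed_examples"], "at least one seed example is required"
    for ex in entry["seed_examples"]:
        assert ex["question"].strip() and ex["answer"].strip()
    return True

print(validate(taxonomy_entry))  # -> True
```

This is the “new representation of business data” step in concrete form: the enterprise’s secret sauce enters the model as curated examples rather than raw documents.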
Through initiatives like InstructLab, IBM is leading the charge in revolutionizing enterprise AI adoption. The focus is shifting from generic, off-the-shelf models to tailored solutions that reflect each company’s unique knowledge and expertise. As this technology matures, the competitive edge may well belong to those who can most effectively turn their institutional knowledge into AI-powered insights.