What new features are coming to Apple’s Siri through Apple Intelligence?
Cover photo: Major news from Apple Intelligence, Apple’s Siri, Apple Watch, AI2, Hugging Face’s LightEval, Oracle, Google Cloud, and Reflection 70B

Apple’s Siri Upgrade: A Leap Towards Smarter AI Assistance through Apple Intelligence

Apple unveiled plans to significantly enhance Siri’s capabilities through Apple Intelligence and a partnership with OpenAI. The upgrade promises to transform iPhones into more sophisticated personal assistants, offering improved language understanding and context awareness. Users will be able to type questions, engage in natural conversations, and perform complex tasks across various apps. The new Siri will understand personal context, allowing for more intuitive interactions. Apple Intelligence features will roll out gradually, starting with beta versions next month. The partnership with OpenAI will enable Siri to handle complex queries about world knowledge. This upgrade represents Apple’s effort to compete in the AI space and provide users with a more capable virtual assistant.

Apple Watch Embraces AI: Translation and Smart Features in watchOS 11

Apple’s latest watchOS update introduces AI-powered enhancements to the Apple Watch experience. The new Translate app, available on watchOS 11, offers speech recognition and translation capabilities across multiple languages. On newer Apple Watch models, this feature can function independently without a phone connection. Smart Stack, a widget display feature, is being improved to automatically add relevant widgets based on context. Additionally, a new AI-driven photos watch face will curate images from the user’s library. These upgrades demonstrate Apple’s commitment to integrating AI technology into its wearable devices, enhancing functionality and user experience across various aspects of the Apple Watch.

AI2 Unveils OLMoE: A Cost-Effective, Open-Source Language Model

The Allen Institute for AI has introduced OLMoE, a new open-source large language model that aims to balance performance and cost-effectiveness. Using a sparse mixture-of-experts architecture, OLMoE holds 7 billion parameters but activates only about 1 billion per input token. This design allows for efficient processing and reduced inference costs. AI2 emphasizes the model’s fully open-source nature, contrasting it with other mixture-of-experts models that often lack transparency in their training data and code. OLMoE builds upon AI2’s previous OLMo models and incorporates diverse training data. In benchmark tests, OLMoE has shown competitive performance against larger models while maintaining lower resource requirements, potentially making advanced AI more accessible to researchers and academics.
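
To see why a sparse mixture-of-experts design keeps inference cheap even when the total parameter count is large, consider the minimal top-k routing layer sketched below in PyTorch. This is a conceptual illustration only: the hidden sizes, expert count, and top-k value are placeholders rather than OLMoE’s actual configuration, but the core idea is the same — each token is routed to a small subset of experts, so most of the model’s weights sit idle on any given forward pass.

```python
# Minimal sparse mixture-of-experts layer in PyTorch (conceptual sketch).
# Sizes and expert counts are illustrative, not OLMoE's real configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoELayer(nn.Module):
    def __init__(self, d_model=512, d_hidden=2048, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        # Each expert is a small feed-forward network.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        )
        # The router scores every expert for every token.
        self.router = nn.Linear(d_model, num_experts)

    def forward(self, x):  # x: (batch, seq_len, d_model)
        scores = self.router(x)                                  # (B, T, num_experts)
        top_scores, top_idx = scores.topk(self.top_k, dim=-1)    # pick k experts per token
        weights = F.softmax(top_scores, dim=-1)                  # normalize over the chosen k
        out = torch.zeros_like(x)
        # Only the selected experts run for each token, so most parameters stay idle.
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = top_idx[..., slot] == e                   # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[..., slot][mask].unsqueeze(-1) * expert(x[mask])
        return out

# Example: 4 tokens pass through the layer; only 2 of the 8 experts fire per token.
layer = SparseMoELayer()
tokens = torch.randn(1, 4, 512)
print(layer(tokens).shape)  # torch.Size([1, 4, 512])
```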

LightEval: Hugging Face’s Open-Source Tool Revolutionizes AI Evaluation

Hugging Face has unveiled LightEval, an open-source evaluation suite designed to assess large language models. This customizable tool addresses the growing need for precise, context-specific AI evaluation across industries. LightEval integrates with Hugging Face’s existing tools, offering a complete AI development pipeline that supports various hardware configurations. By making the tool open-source, Hugging Face encourages greater transparency and accountability in AI development. LightEval’s flexibility allows organizations to tailor evaluations to their specific needs, from measuring fairness in healthcare applications to optimizing e-commerce recommendation systems. This release marks a significant step towards democratizing AI development and ensuring models are not only accurate but also aligned with ethical and business standards.
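
As a rough, hypothetical illustration of what a custom, context-specific metric means in practice, the plain-Python sketch below scores a model’s answers against gold references with a strict exact-match rule. It deliberately does not reproduce LightEval’s own API, which is best taken from Hugging Face’s documentation; the function names and normalization rule here are invented for the example.

```python
# Generic illustration of a custom, task-specific evaluation metric of the kind
# an evaluation suite such as LightEval lets you plug in. This is NOT LightEval's
# actual API; the names and normalization rule are hypothetical.
from dataclasses import dataclass

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so trivial formatting differences don't count as errors."""
    return " ".join(text.lower().split())

@dataclass
class EvalResult:
    accuracy: float
    num_examples: int

def exact_match_metric(predictions: list[str], references: list[str]) -> EvalResult:
    """Score each model answer against its reference with a strict exact match."""
    hits = sum(normalize(p) == normalize(r) for p, r in zip(predictions, references))
    return EvalResult(accuracy=hits / len(references), num_examples=len(references))

# Usage: compare a model's answers for a small QA task against gold references.
preds = ["Paris", "4", "The Allen Institute for AI"]
golds = ["paris", "4", "Allen Institute for AI"]
print(exact_match_metric(preds, golds))  # EvalResult(accuracy=0.666..., num_examples=3)
```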

Oracle Database Expands to Google Cloud, Enhancing Enterprise Data Solutions

Oracle and Google have announced the general availability of Oracle Database@Google Cloud, a service that allows enterprises to deploy Oracle’s database technologies directly within Google Cloud data centers. This integration combines Oracle’s database expertise with Google Cloud’s infrastructure and advanced AI capabilities, enabling organizations to leverage both platforms’ strengths. The service is initially available in four Google Cloud regions, with plans for global expansion. This partnership aims to simplify cloud migration, accelerate innovation, and provide customers with greater flexibility in managing their data. The integration also facilitates the development of AI applications using Oracle data on Google Cloud and enhances data analytics capabilities through seamless integration with services like BigQuery.
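
For a sense of the kind of cross-platform workflow this integration targets, the sketch below reads rows from an Oracle database with the python-oracledb driver and streams them into BigQuery with the google-cloud-bigquery client. The connection string, schema, and table names are hypothetical, and the managed Oracle Database@Google Cloud service handles deployment and networking inside Google Cloud rather than relying on client-side glue like this; the snippet only illustrates how Oracle data might feed BigQuery analytics.

```python
# Rough sketch: read rows from an Oracle database and load them into BigQuery
# for analytics. Connection details, schema, and table names are placeholders.
import oracledb                      # pip install oracledb
from google.cloud import bigquery    # pip install google-cloud-bigquery

# 1. Query recent orders from Oracle (hypothetical schema).
conn = oracledb.connect(user="app_user", password="***", dsn="dbhost.example.com/orclpdb1")
cursor = conn.cursor()
cursor.execute("SELECT order_id, customer_id, amount FROM orders WHERE order_date > SYSDATE - 1")
rows = [
    {"order_id": order_id, "customer_id": customer_id, "amount": float(amount)}
    for order_id, customer_id, amount in cursor
]
conn.close()

# 2. Stream the rows into a BigQuery table for downstream analytics or AI features.
client = bigquery.Client(project="my-gcp-project")
errors = client.insert_rows_json("my-gcp-project.analytics.orders_daily", rows)
if errors:
    raise RuntimeError(f"BigQuery insert failed: {errors}")
print(f"Loaded {len(rows)} rows into BigQuery")
```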

Controversy Surrounds Reflection 70B: Open-Source AI Model’s Performance Disputed

HyperWrite’s Reflection 70B, an open-source AI model based on Meta’s Llama, faced scrutiny shortly after its release. Initially touted as the top open-source model, its performance claims were quickly challenged by third-party evaluators. Discrepancies emerged between reported and independently tested results, particularly on the MMLU benchmark. HyperWrite attributed the discrepancies to errors in uploading the model’s weights to Hugging Face. Questions arose about the model’s origins, with some suggesting it might be based on an older Llama version or even a wrapper around a proprietary model. The AI community remains divided, with some defending Reflection 70B’s capabilities while others accuse the creators of potential fraud. As the controversy unfolds, researchers await further clarification and the release of updated model weights for independent verification.
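
For readers who want to run this kind of independent check themselves, the sketch below scores a causal language model on a handful of MMLU questions by comparing the log-likelihood of each answer choice. It is a simplified spot-check, not the harness third-party evaluators used: the Hugging Face model ID is an assumption, a 70B model realistically requires multi-GPU or quantized loading, and published benchmarks use more careful prompting and the full test set.

```python
# Simplified independent spot-check on MMLU-style multiple-choice questions:
# score each option by its log-likelihood under the model and pick the highest.
# The model ID is an assumption; a 70B model needs multi-GPU or quantized loading.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "mattshumer/Reflection-Llama-3.1-70B"  # assumed repository name
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto", torch_dtype=torch.bfloat16)
model.eval()

def option_score(question: str, option: str) -> float:
    """Average log-likelihood the model assigns to an answer option given the question."""
    prompt = f"Question: {question}\nAnswer: "
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
    full_ids = tokenizer(prompt + option, return_tensors="pt").input_ids.to(model.device)
    with torch.no_grad():
        logits = model(full_ids).logits
    # Log-probabilities of the option tokens only (shifted by one for next-token prediction).
    option_len = full_ids.shape[1] - prompt_ids.shape[1]
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = full_ids[0, 1:]
    token_lp = log_probs[torch.arange(targets.shape[0], device=targets.device), targets]
    return token_lp[-option_len:].mean().item()

# Spot-check 20 questions from one MMLU subject.
subset = load_dataset("cais/mmlu", "high_school_physics", split="test").select(range(20))
correct = 0
for ex in subset:
    scores = [option_score(ex["question"], choice) for choice in ex["choices"]]
    correct += int(max(range(len(scores)), key=scores.__getitem__) == ex["answer"])
print(f"Accuracy on 20 sampled questions: {correct / len(subset):.2%}")
```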

Frequently asked questions

What new features are coming to Siri through Apple Intelligence?
Through Apple Intelligence, Siri is getting significant upgrades including improved language understanding, context awareness, and the ability to handle complex queries. Users will be able to type questions and engage in more natural conversations across apps. The partnership with OpenAI will enhance Siri’s knowledge base, allowing it to answer more sophisticated questions. These features will begin rolling out in beta versions next month, making iPhones more capable personal assistants.

What AI features does watchOS 11 bring to the Apple Watch?
The Apple Watch is receiving several AI-powered enhancements in watchOS 11, including a new Translate app that works across multiple languages, even without phone connectivity on newer models. The update also improves Smart Stack, the intelligent widget display system, and adds an AI-driven photos watch face that automatically curates images from your library. These features demonstrate Apple’s commitment to making its wearable technology smarter and more useful.

What makes AI2’s OLMoE different from other language models?
OLMoE stands out for its cost-effective design using a sparse mixture-of-experts architecture. While it has 7 billion parameters in total, it activates only about 1 billion per input token, making it more efficient than traditional dense models. It’s fully open-source, including training data and code, and offers competitive performance against larger models while requiring fewer resources. This makes it particularly valuable for researchers and academics working with limited budgets.

What is LightEval and what does it offer?
LightEval is an open-source evaluation suite that allows organizations to assess large language models with customizable metrics. It integrates seamlessly with existing Hugging Face tools and supports various hardware configurations. The tool enables precise, context-specific evaluations across industries, from healthcare to e-commerce, while promoting transparency and accountability in AI development.

What does Oracle Database@Google Cloud mean for businesses?
The Oracle Database@Google Cloud integration allows businesses to deploy Oracle’s database technologies directly within Google Cloud data centers. This combination provides access to Oracle’s database expertise alongside Google Cloud’s infrastructure and AI capabilities. Organizations can simplify cloud migration, accelerate innovation, and better manage their data while leveraging both platforms’ strengths.

How will Apple Intelligence improve the iPhone experience?
Apple Intelligence will enhance iPhone functionality by improving Siri’s natural language processing, enabling more intuitive interactions based on personal context, and allowing for seamless integration across different apps. Users will experience more sophisticated task handling, better conversation capabilities, and improved response accuracy. The integration with OpenAI will also expand the system’s knowledge base for more comprehensive answers.

Why is Reflection 70B controversial?
The controversy around Reflection 70B centers on disputed performance claims and benchmark results, particularly on the MMLU benchmark. Questions have been raised about the model’s true origins, with some suggesting it might be based on an older Llama version or possibly a wrapper around a proprietary model. The AI community remains divided on its legitimacy, and independent verification of the model’s capabilities is still pending.

Gor Gasparyan

Optimizing digital experiences for growth-stage & enterprise brands through research-driven design, automation, and AI