Major News from GPT-4o Mini, ChatGPT Enterprise, Groq, Apple OpenELM and Google

OpenAI Introduces GPT-4o mini: A Cost-Effective, Multimodal AI Model

OpenAI has unveiled GPT-4o mini, a smaller and more affordable version of its powerful GPT-4o model. The new model handles text and image inputs, with plans to expand to audio and video capabilities. Priced at just $0.15 per million input tokens and $0.60 per million output tokens, it is significantly cheaper than both GPT-4o and GPT-3.5 Turbo. OpenAI claims it outperforms comparable models on various benchmarks. The model will replace GPT-3.5 Turbo in ChatGPT for paid subscribers and will be available on Apple devices this fall. While not as powerful as GPT-4o for complex tasks, it offers a cost-effective solution for many AI applications.
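For developers weighing the cost savings, the sketch below shows one way to call the model through the OpenAI Python SDK (v1.x) and estimate a single request's cost from the per-million-token rates quoted above; it assumes an OPENAI_API_KEY environment variable is set, and is an illustration rather than official sample code.

```python
from openai import OpenAI  # pip install openai>=1.0

# Published rates quoted above: $0.15 per 1M input tokens, $0.60 per 1M output tokens.
INPUT_RATE = 0.15 / 1_000_000
OUTPUT_RATE = 0.60 / 1_000_000

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize this week's AI news in two sentences."}],
)

usage = response.usage
cost = usage.prompt_tokens * INPUT_RATE + usage.completion_tokens * OUTPUT_RATE

print(response.choices[0].message.content)
print(f"Estimated cost: ${cost:.6f} "
      f"({usage.prompt_tokens} input + {usage.completion_tokens} output tokens)")
```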

OpenAI Enhances ChatGPT Enterprise with Advanced Compliance and Control Features

OpenAI has introduced new features for ChatGPT Enterprise, focusing on compliance, data security, and user management. The Enterprise Compliance API now provides detailed records of interactions, enabling better auditing and data control. OpenAI has partnered with third-party compliance providers to support various regulatory requirements, including GDPR, HIPAA, and FINRA.

To streamline user management, OpenAI is rolling out an identity management system based on SCIM (System for Cross-domain Identity Management), allowing easier provisioning and removal of user access. Additionally, administrators now have greater control over custom GPTs, including the ability to set approved domains and manage sharing permissions.
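As a rough illustration of what SCIM-based provisioning involves, the sketch below sends a standard SCIM 2.0 user-creation request with the requests library. The base URL and bearer token are placeholders, not documented OpenAI values, and in practice this exchange is usually driven automatically by an identity provider such as Okta or Microsoft Entra ID rather than hand-written calls.

```python
import requests

# Placeholder values: consult the ChatGPT Enterprise admin documentation for the
# actual SCIM base URL and how to issue an API token for your workspace.
SCIM_BASE_URL = "https://example.com/scim/v2"  # hypothetical endpoint
SCIM_TOKEN = "YOUR_SCIM_BEARER_TOKEN"          # hypothetical credential

# Standard SCIM 2.0 user payload (RFC 7643 core User schema).
new_user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "jane.doe@example.com",
    "name": {"givenName": "Jane", "familyName": "Doe"},
    "emails": [{"value": "jane.doe@example.com", "primary": True}],
    "active": True,
}

resp = requests.post(
    f"{SCIM_BASE_URL}/Users",
    json=new_user,
    headers={
        "Authorization": f"Bearer {SCIM_TOKEN}",
        "Content-Type": "application/scim+json",
    },
    timeout=30,
)
resp.raise_for_status()
print("Provisioned user:", resp.json().get("id"))

# Deprovisioning is typically a PATCH that sets "active": false, or a DELETE
# on /Users/{id}, depending on how the identity provider is configured.
```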

These enhancements aim to make ChatGPT Enterprise more secure and adaptable for large organizations, particularly in regulated industries, as OpenAI focuses on expanding its enterprise offerings in 2024.

Groq’s Open-Source AI Model Outperforms Industry Giants in Function Calling

Groq, an AI hardware startup, has released two open-source language models, Llama-3-Groq-70B-Tool-Use and Llama-3-Groq-8B-Tool-Use, that have claimed the top spot on the Berkeley Function Calling Leaderboard (BFCL). The 70B model outperformed proprietary offerings from OpenAI, Google, and Anthropic, achieving 90.76% overall accuracy. Developed in collaboration with Glaive, these models were trained only on ethically generated synthetic data, challenging the assumption that vast real-world datasets are required. The models are now publicly available through the Groq API and Hugging Face, which could accelerate innovation in complex tool use and function calling applications. The result demonstrates that open-source AI can compete with, and even surpass, closed-source alternatives, potentially reshaping the AI landscape.
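To give a sense of what function calling with these models looks like, here is a minimal sketch using Groq's OpenAI-compatible Python SDK. The model identifier shown is an assumption based on Groq's naming at launch (check the Groq console for the current ID), and a GROQ_API_KEY environment variable is assumed.

```python
import json
from groq import Groq  # pip install groq

client = Groq()  # reads GROQ_API_KEY from the environment

# Describe a callable tool using the OpenAI-style function schema.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    # Assumed model ID for Llama-3-Groq-70B-Tool-Use; verify in the Groq console.
    model="llama3-groq-70b-8192-tool-use-preview",
    messages=[{"role": "user", "content": "What's the weather in Yerevan right now?"}],
    tools=tools,
    tool_choice="auto",
)

# When the model decides to call the tool, it returns structured arguments
# instead of free-form text.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, json.loads(call.function.arguments))
```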

Apple Clarifies AI Training Practices, Distances from YouTube Data Controversy

Apple has addressed concerns about its AI training practices, emphasizing that its upcoming Apple Intelligence system does not use YouTube data. The company says it trains on high-quality, licensed content and lets websites opt out of data collection. Apple's research model, OpenELM, was trained on the controversial Pile dataset but is not used in consumer products; the company has no plans for future versions of OpenELM and reaffirms its commitment to respecting creators' rights. The controversy stems from EleutherAI's apparent inclusion of YouTube subtitle data in the Pile without creators' permission, raising questions about data ethics in AI development.

Google Scales Back AI Overviews in Search Results Amid Accuracy Concerns

Google has significantly reduced the presence of AI-generated Overviews in search results, with appearances dropping from 15% to less than 7% since April. This decline is particularly noticeable in education, entertainment, and e-commerce queries. The tech giant is likely responding to user feedback and concerns about AI-generated inaccuracies. Despite this setback, Google remains committed to implementing AI in search, balancing innovation with reliability. The move may temporarily alleviate worries about AI’s impact on organic website traffic, but the long-term implications for the search industry are still unfolding. As AI continues to evolve, marketers are advised to stay vigilant and adaptable in this changing landscape.

Frequently asked questions

What is GPT-4o mini and how does it differ from GPT-4o?

GPT-4o mini is OpenAI's new cost-effective multimodal AI model that can process both text and images. It's a smaller, more affordable version of GPT-4o, costing just $0.15 per million input tokens and $0.60 per million output tokens. While not as powerful as GPT-4o for complex tasks, it offers better performance than comparable models and will replace GPT-3.5 Turbo in ChatGPT for paid subscribers. The model is designed to balance capability and cost-effectiveness.

What new compliance and control features has OpenAI added to ChatGPT Enterprise?

ChatGPT Enterprise now includes enhanced compliance and control features through its Enterprise Compliance API, which provides detailed interaction records for better auditing. It supports various regulatory requirements including GDPR, HIPAA, and FINRA. The platform also introduces a new identity management system using SCIM for easier user access control and improved management of custom GPTs, including domain approval and sharing permissions.

How much does GPT-4o mini cost?

GPT-4o mini is priced at $0.15 per million input tokens and $0.60 per million output tokens, making it significantly more affordable than its predecessor and other competing models. This pricing makes it accessible for businesses and developers who need efficient AI capabilities without the high costs associated with more powerful models like GPT-4o.

How does ChatGPT Enterprise differ from regular ChatGPT?

ChatGPT Enterprise offers advanced features specifically designed for business use, including enhanced compliance tools, detailed audit trails, and robust data security measures. It includes third-party compliance support for various regulations, advanced user management through SCIM, and greater control over custom GPTs. These features make it more suitable for large organizations and regulated industries than regular ChatGPT.

When will GPT-4o mini be available on Apple devices?

According to the announcement, GPT-4o mini is scheduled to become available on Apple devices in fall 2024. This integration will allow Apple users to access the model's capabilities directly through their devices, though specific details about the implementation and features are still forthcoming.

How has Google changed its AI Overviews in search results?

Google has reduced its AI-generated Overviews in search results from 15% to less than 7% since April, particularly in education, entertainment, and e-commerce queries. This reduction comes in response to concerns about AI-generated inaccuracies, though Google maintains its commitment to implementing AI in search while prioritizing reliability.

What have Groq's open-source models achieved?

Groq's open-source language models have achieved top performance on the Berkeley Function Calling Leaderboard with 90.76% accuracy, surpassing proprietary models from OpenAI, Google, and Anthropic. These models are notable for using only ethically generated synthetic data and for being publicly available through the Groq API and Hugging Face, demonstrating that open-source AI can compete with and exceed closed-source alternatives.

Gor Gasparyan

Optimizing digital experiences for growth-stage & enterprise brands through research-driven design, automation, and AI