What is Apple’s new 4M AI model and why is it significant?
Cover Photo Major News from Apple, Deepfake, Meta, Youtube and Anthropic

Apple Unveils 4M AI Model Public Demo: A Game-Changer in AI Technology

Apple, in collaboration with EPFL, released a public demo of the 4M AI model on Hugging Face Spaces, democratizing advanced AI technology. The demo showcases a versatile model enabling image creation, object detection, and 3D scene manipulation through natural language inputs. This shift from Apple’s secretive R&D approach aims to attract developers and build an AI ecosystem. The release aligns with Apple’s AI advancements, market success, and strategic innovation. The 4M model’s unified architecture promises coherent AI applications across Apple’s ecosystem, enhancing user experiences. However, concerns about data ethics and privacy challenge Apple’s user-centric image. The release, alongside Apple’s AI strategy at WWDC, signals the company’s commitment to leading the AI revolution while prioritizing user privacy. As these technologies evolve, users can expect a transformative shift in device interactions, testing Apple’s ability to deliver advanced AI while safeguarding user trust.

Deepfake Threats Surge: $40 Billion Losses Expected by 2027

Deepfake-related losses are set to rise from $12.3 billion in 2023 to $40 billion by 2027, with a 32% annual growth rate. Deloitte anticipates a spike in deepfake incidents in 2024, reaching 140,000-150,000 cases globally. The proliferation of generative AI tools enables the creation of low-cost deepfake videos, voice impersonations, and fraudulent documents, posing a significant risk to banking and financial services.

Enterprises are ill-prepared for adversarial AI attacks like deepfakes, with one in three lacking a defense strategy. Ivanti’s research shows that 30% of enterprises have no plans to counter adversarial AI attacks, despite 74% already facing AI-powered threats. CEOs are prime targets for sophisticated deepfake attacks, leveraging voice and video manipulations to defraud companies.

CrowdStrike CEO George Kurtz underscores the rapid advancement of deepfake technology, emphasizing its potential to manipulate narratives and influence actions. Enterprises must enhance defenses against deepfakes and adversarial AI to combat evolving threats posed by nation-states and cybercriminal organizations.

Meta Shifts from ‘Made with AI’ to ‘AI Info’ Tags for Photos

Meta responds to user feedback by changing the ‘Made with AI’ label to ‘AI info’ on photos across its apps. The move aims to clarify that labeled images may not be entirely AI-generated but could involve AI-powered editing tools. The company acknowledges the need for clearer communication and aligning user expectations with labeling practices. Despite the label change, Meta continues to use technical metadata standards like C2PA and IPTC to detect AI usage in photos. Tools such as Adobe’s Generative AI Fill, if used for editing, may trigger the new ‘AI info’ tag. The updated tag intends to convey that AI was involved in creating or modifying the image, enhancing user understanding. While the new label addresses some concerns, it does not address the detection of fully AI-generated photos or provide details on the extent of AI editing. Meta and other platforms face the challenge of setting fair guidelines for photographers using AI tools, ensuring transparency in labeling practices.

YouTube Enhances Privacy Controls for AI-Generated Content

YouTube introduces a new policy in June allowing users to request the removal of AI-generated content simulating their face or voice. The change, part of YouTube’s responsible AI agenda, enables individuals to report such content as a privacy violation, distinct from misleading deepfakes. Criteria for removal include uniqueness, disclosure of AI use, and public interest considerations. YouTube grants a 48-hour response window to content uploaders following a complaint, emphasizing full removal if deemed necessary. The platform aims to enhance transparency and user control over AI content, aligning with evolving privacy concerns in the digital landscape.

Anthropic Funds New AI Benchmarks Program

Anthropic launches a funding initiative for developing advanced AI benchmarks to assess model performance and societal impact, including generative models like Claude. The program, unveiled on Monday, will provide payments to third-party organizations capable of measuring AI models’ advanced capabilities. The focus is on enhancing AI safety and addressing benchmarking challenges in the industry. Anthropic aims to create challenging benchmarks emphasizing AI security, societal implications, and tasks like cyberattacks and misinformation manipulation. The company plans to support research into AI’s potential in scientific study, language processing, bias mitigation, and toxicity control. Anthropic envisions platforms for large-scale model trials and offers tailored funding options for projects. While laudable, the initiative raises trust concerns due to the company’s commercial interests in AI. Anthropic seeks alignment with its AI safety classifications, potentially influencing applicants’ definitions of safe and risky AI. The community questions references to catastrophic AI risks like nuclear weapons, emphasizing the need for realistic assessments amidst AI regulatory challenges. Anthropic hopes the program will drive progress in comprehensive AI evaluation standards.

Frequently asked questions

Apple’s 4M AI model is a groundbreaking public demo released on Hugging Face Spaces in collaboration with EPFL. This versatile model enables image creation, object detection, and 3D scene manipulation through natural language inputs. It represents a significant shift in Apple’s typically secretive approach to R&D and aims to democratize AI technology while building a developer ecosystem. The model’s unified architecture promises to deliver coherent AI applications across Apple’s product range.
The deepfake threat is escalating rapidly, with projected losses expected to reach $40 billion by 2027, up from $12.3 billion in 2023. Deloitte predicts 140,000-150,000 deepfake incidents globally in 2024. Most concerning is that one-third of enterprises lack defense strategies against these threats, despite 74% already experiencing AI-powered attacks. CEOs are particularly vulnerable to sophisticated deepfake scams using voice and video manipulation.
Meta is replacing its “Made with AI” label with “AI info” across its platforms to better reflect that images may not be entirely AI-generated but could include AI-assisted edits. The company continues using technical metadata standards like C2PA and IPTC for AI detection while striving to improve transparency in how AI-modified content is identified to users.
YouTube’s new policy allows users to request removal of AI-generated content that simulates their face or voice without consent. The platform provides a 48-hour window for content creators to respond to complaints and will remove content if necessary. This policy specifically addresses privacy violations related to AI-generated content, separate from misleading deepfakes.
Deepfake-related financial losses are expected to grow at a 32% annual rate, reaching $40 billion by 2027. This dramatic increase is driven by the accessibility of low-cost generative AI tools that can create convincing fake videos, voice impersonations, and fraudulent documents, particularly targeting the banking and financial services sector.
Anthropic’s new funding initiative aims to develop advanced AI benchmarks for assessing model performance and societal impact. The program will fund third-party organizations to create challenging benchmarks focusing on AI safety, security, and potential risks. It emphasizes measuring AI capabilities in areas like scientific study, language processing, bias mitigation, and toxicity control.
Businesses should develop comprehensive defense strategies against deepfake and adversarial AI attacks. This includes implementing AI detection tools, establishing verification protocols for communications, training employees to recognize potential deepfake attempts, and staying updated on the latest deepfake detection technologies. Regular security assessments and collaboration with cybersecurity experts are also crucial.
Picture of Gor Gasparyan

Gor Gasparyan

Optimizing digital experiences for growth-stage & enterprise brands through research-driven design, automation, and AI