Cover photo: Major news from Midjourney, OpenAI, the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA), and deep fake technology

Midjourney Launches Enhanced AI Image Editor Amid Legal Challenges

Midjourney has unveiled a new unified web-based AI image editor, expanding its platform’s capabilities amid growing competition. The updated interface consolidates features such as inpainting and outpainting into a single, user-friendly view and introduces a virtual brush tool for more precise edits. The new editor is available to users who have created at least ten images on the platform. Midjourney has also improved communication between its web and Discord communities by mirroring messages across both platforms. The release comes as the company faces a class-action lawsuit from artists alleging copyright violations, with legal proceedings advancing toward discovery. Despite these challenges, Midjourney remains focused on innovation and improving the user experience.
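
For readers unfamiliar with the terminology, inpainting regenerates a masked region inside an existing image, while outpainting extends the image beyond its original borders. The sketch below illustrates the general inpainting idea using the open-source diffusers library and a public Stable Diffusion inpainting checkpoint; it is not Midjourney’s editor or model, and the checkpoint name, file paths, and prompt are placeholders for illustration.

```python
# Illustrative only: a generic diffusion-based inpainting call, not Midjourney's editor.
# Assumes the open-source `diffusers` library and a public inpainting checkpoint.
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

# Load a publicly available inpainting model (placeholder checkpoint name).
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

init_image = Image.open("photo.png").convert("RGB")  # original image (placeholder path)
mask_image = Image.open("mask.png").convert("RGB")   # white pixels mark the area to repaint

# The masked region is regenerated to match the prompt; the rest of the image is preserved.
result = pipe(
    prompt="a wooden bench in a sunlit park",
    image=init_image,
    mask_image=mask_image,
).images[0]
result.save("inpainted.png")
```

Outpainting follows the same pattern, except the mask covers newly added canvas around the original image rather than a region inside it.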

OpenAI Disrupts Iranian Influence Operation Targeting U.S. Elections

OpenAI has taken action against a network of ChatGPT accounts linked to an Iranian influence operation aimed at manipulating perceptions around the U.S. presidential election. The operation, identified as Storm-2035, generated AI-crafted articles and social media posts intended to polarize public opinion on a range of political issues. Although the content did not gain significant traction, it echoed tactics previously used by state actors on social media platforms. OpenAI’s intervention was informed by a Microsoft report detailing the group’s activities, which included creating deceptive news sites and engaging with U.S. voter groups across the political spectrum. The company has disrupted similar operations before, reflecting ongoing concerns about the misuse of AI in electoral processes.

Video Game Performers Demand Protections Against AI Threats

Video game performers are on strike, seeking protections against the unregulated use of artificial intelligence in their industry. Performers are concerned that AI could replicate their performances without consent, jeopardizing job opportunities and reducing the need for human actors. The Screen Actors Guild-American Federation of Television and Radio Artists has been negotiating with game studios for over 18 months, seeking transparency and fair compensation around AI usage. While studios say they are offering meaningful protections, performers counter that the definition of “performer” and the ethical implications of AI-generated content remain contentious. The ongoing strike underscores the need for clear guidelines as AI technologies evolve in gaming.

Deep Fake Concerns Emerge in Presidential Race

As the presidential race intensifies, allegations of deep fake usage have surfaced, with former President Trump accusing Kamala Harris of manipulating rally crowd images to interfere with the election. Deep fakes, AI-generated or AI-manipulated video, audio, and images that convincingly misrepresent real people, pose a significant threat to democratic processes by blurring the line between reality and fabrication. Because the underlying tools are widely accessible, even individuals with basic editing skills can create convincing disinformation. Detection is advancing, with tech companies building tools and collaborating on shared detection protocols, but media literacy and robust counter-disinformation strategies remain critical to preserving the integrity of elections and public trust.
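
Most detection tools frame the problem as classification: a model is trained on labeled examples of authentic and manipulated media and then scores new content. The sketch below shows that general approach with PyTorch and a fine-tuned ResNet; it is a toy illustration, not any vendor’s actual detector, and the folder layout, model choice, and training settings are assumptions.

```python
# Minimal sketch of a frame-level deep fake classifier (illustrative, not a production detector).
# Assumes a hypothetical folder of labeled frames: data/real/*.png and data/fake/*.png.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms
from torch.utils.data import DataLoader

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# ImageFolder maps subdirectory names ("real", "fake") to class labels.
dataset = datasets.ImageFolder("data", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# Start from an ImageNet-pretrained backbone and replace the head with a 2-class output.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):  # a few epochs for illustration; real detectors train far longer on far more data
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

In practice, production detectors combine many such signals, such as per-frame artifacts, temporal inconsistencies, audio analysis, and provenance metadata, rather than relying on a single image classifier.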