Major News from Midjourney, OpenAI, the Screen Actors Guild-American Federation of Television and Radio Artists, and Deep Fake Technology

Midjourney Launches Enhanced AI Image Editor Amid Legal Challenges

Midjourney has unveiled a new unified web-based AI image editor, enhancing its platform’s capabilities amid growing competition. This updated interface consolidates features like inpainting and outpainting into a single, user-friendly view, introducing a virtual brush tool for more precise editing. The new editor is accessible to users who have created at least ten images on the platform. Additionally, Midjourney has improved communication between its web and Discord communities by mirroring messages across both platforms. This release comes as the company faces a class-action lawsuit from artists alleging copyright violations, with legal proceedings advancing towards discovery. Despite these challenges, Midjourney remains focused on innovation and enhancing user experience.

OpenAI Disrupts Iranian Influence Operation Targeting U.S. Elections

OpenAI has taken action against a network of ChatGPT accounts linked to an Iranian influence operation aimed at manipulating perceptions around the U.S. presidential election. The operation, identified as Storm-2035, generated AI-crafted articles and social media posts that attempted to polarize public opinion on various political issues. Although the content produced did not gain significant traction, it echoes previous tactics used by state actors on social media platforms. OpenAI’s intervention was informed by a Microsoft report detailing the group’s activities, which included creating deceptive news sites and engaging with U.S. voter groups across the political spectrum. The company has previously targeted similar operations, reflecting ongoing concerns about the misuse of AI in electoral processes.

Video Game Performers Demand Protections Against AI Threats

Video game performers are on strike, advocating for protections against the unregulated use of artificial intelligence in their industry. Concerns have arisen that AI could replicate performances without consent, jeopardizing job opportunities and reducing the need for human actors. The Screen Actors Guild-American Federation of Television and Radio Artists has been negotiating with game studios for over 18 months, seeking transparency and fair compensation related to AI usage. While studios claim to offer meaningful protections, performers argue that definitions of “performers” and the ethical implications of AI-generated content remain contentious. The ongoing strike underscores the need for clear guidelines as AI technologies evolve in gaming.

Deep Fake Concerns Emerge in Presidential Race

As the presidential race intensifies, allegations of deep fake technology usage have surfaced, with former President Trump accusing Kamala Harris of manipulating rally crowd images to interfere with the election. Deep fakes, AI-generated or AI-manipulated video, audio, and images that misrepresent individuals, pose a significant threat to democratic processes by blurring the line between reality and fabrication. The accessibility of deep fake technology means that even individuals with basic editing skills can create convincing disinformation. While detection tools are advancing, including initiatives from tech companies and collaborative efforts to establish detection protocols, media literacy and robust countermeasures remain critical to preserving the integrity of elections and public trust.

Frequently Asked Questions

What new features has Midjourney added to its AI image editor?

Midjourney’s new unified web-based editor includes enhanced inpainting and outpainting capabilities, all accessible from a single interface. The platform has introduced a virtual brush tool for more precise editing and improved cross-platform communication between web and Discord communities. These features are available to users who have created at least ten images on the platform, representing a significant upgrade to Midjourney’s user experience.

How is OpenAI responding to AI-driven election interference?

OpenAI has disrupted an Iranian influence operation called Storm-2035 that was using ChatGPT to generate misleading content about the U.S. presidential election. The company identified and shut down multiple accounts creating AI-crafted articles and social media posts designed to polarize public opinion. Informed by Microsoft’s threat intelligence, OpenAI has demonstrated its commitment to preventing the misuse of AI in electoral processes.

What are video game performers’ main concerns about AI?

Video game performers are primarily concerned about AI technology being used to replicate their performances without consent or proper compensation. They worry that AI could reduce job opportunities and replace human actors in games. Through SAG-AFTRA, they’re seeking transparent guidelines on AI usage and fair compensation structures, as well as a clear definition of what constitutes a “performer” in the age of AI-generated content.

What legal challenges is Midjourney facing?

Midjourney is currently facing a class-action lawsuit from artists who allege copyright violations related to the training of its AI image generation system. The case is moving into the discovery phase, where both parties will examine evidence and documentation. This legal challenge represents one of the first major cases addressing AI art generation and copyright infringement.

How are deep fakes affecting the presidential race?

Deep fakes are creating new challenges in the presidential race, with accusations of manipulated crowd images and videos emerging. The technology’s accessibility makes it easier for anyone to create convincing disinformation, potentially influencing voter perception. This has led to increased concerns about election integrity and the need for better detection tools and media literacy education.

What are tech companies doing to combat deep fakes?

Tech companies are developing advanced detection tools and establishing collaborative protocols to identify deep fake content. These efforts include AI-powered verification systems and cross-industry partnerships to create standardized detection methods. Additionally, there’s a growing focus on educating the public about media literacy and the signs of manipulated content.

How does OpenAI detect misuse of its platform?

OpenAI employs a comprehensive monitoring system that tracks unusual patterns of account activity and content generation. It works with intelligence partners like Microsoft to identify coordinated misuse attempts and can quickly shut down accounts involved in influence operations. The system is particularly focused on detecting political manipulation and disinformation campaigns using its AI tools.

Gor Gasparyan

Optimizing digital experiences for growth-stage & enterprise brands through research-driven design, automation, and AI