The recent revelation that Midjourney's founders actively collected stolen art to train the company's art generation tool is just the latest evidence of the unethical nature of GenAI art. Legal and regulatory cases against these AI tools are mounting, but until regulations protect artists' work from being used to train AI models without proper permission, attribution, and compensation, artists will have to fend for themselves. For a while, it seemed like the only method of prevention was to simply not post your art online at all, a poor solution for artists who rely on social media and online portfolios to find clients.
How to Protect Your Art from AI (Glazing)
Researchers from the University of Chicago have now released two free desktop apps for Windows and Mac that not only make an image far harder for an AI model to learn from or reproduce, but can actively degrade a model that trains on it anyway.
The first, Glaze, adds subtle digital "noise" to an image: small pixel-level perturbations that prevent GenAI art tools like Midjourney from learning and mimicking your style. The added noise is minimal. At worst, you might see some faint grain or artifacts, similar to what happens when you upload an image to Instagram or X (formerly Twitter). The tool has already seen major improvements in minimizing visual impact, and image clarity should keep improving as the team continues work on the app.
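If you're curious how small a pixel-level perturbation can be, here's a minimal Python sketch. To be clear, this is not Glaze's actual algorithm: Glaze computes targeted perturbations designed to confuse style mimicry, not random noise, and the file names below are placeholders. The sketch only shows how faint per-pixel changes are at this scale.

```python
# Minimal sketch only: random low-amplitude noise, NOT Glaze's targeted
# perturbations. File names are hypothetical placeholders.
import numpy as np
from PIL import Image

def add_subtle_noise(path_in: str, path_out: str, strength: float = 4.0) -> None:
    """Shift each pixel by at most a few intensity levels out of 255."""
    rng = np.random.default_rng()
    img = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.float32)
    noise = rng.uniform(-strength, strength, size=img.shape)
    Image.fromarray(np.clip(img + noise, 0, 255).astype(np.uint8)).save(path_out)

add_subtle_noise("artwork.png", "artwork_protected.png")
```

A shift of four intensity levels per channel is well below what most viewers notice, which is why perturbations in this range tend to read as faint grain at worst.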
The second tool, Nightshade, seeds the image with "poisoned" data that corrupts what an AI model learns if the image is scraped into its training set. While some GenAI proponents argue this could be "illegal," akin to distributing malware, it's actually closer to the copy protections other media formats use to deter piracy, such as the Digital Rights Management (DRM) that publishers often place on e-books.
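To build intuition for why poisoned training data is so damaging, consider a toy example. This is an invented illustration, not Nightshade's actual technique, and the "cat"/"dog" feature vectors below are made-up stand-ins. The idea is that a poisoned image still looks like one concept to a human but resembles a different concept in the model's feature space, so a model trained on enough such samples has its idea of the first concept dragged toward the second.

```python
# Toy illustration of training-data poisoning; NOT Nightshade's algorithm.
# 2-D vectors stand in for image features; the "learned concept" is a mean.
import numpy as np

rng = np.random.default_rng(0)

# Clean training samples for two concepts, well separated in feature space.
cat_feats = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(100, 2))
dog_feats = rng.normal(loc=[5.0, 5.0], scale=0.5, size=(100, 2))

print("clean 'cat' concept:", cat_feats.mean(axis=0))  # near [0, 0]

# Poisoned samples: labeled "cat" in the training set, but their features
# sit in dog territory, analogous to images that look like cats to humans
# yet read as dogs to the model.
poison = rng.normal(loc=[5.0, 5.0], scale=0.5, size=(60, 2))
poisoned_cats = np.vstack([cat_feats, poison])

print("poisoned 'cat' concept:", poisoned_cats.mean(axis=0))  # drifts toward [5, 5]
```

After poisoning, this toy "model's" notion of a cat has drifted toward dogs, which mirrors the effect the Nightshade researchers describe: with enough poisoned samples, prompts for one concept start producing another.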