ChatGPT got its new image generation capabilities after OpenAI released an update to GPT-4o last week. Since then, users have generated over 700 million images, some playful, such as pictures in the style of various studios, others not so much. As we noted in a recent report, ChatGPT users have even started using the platform to create fake IDs like Aadhaar and PAN cards. And while the chatbot doesn’t refuse to generate those images, there are 7 use cases where it definitely draws the line. We went through the GPT-4o native image generator’s system card to find out the restrictions OpenAI has placed on its new image generator, and here’s what you should know.
7 Images that ChatGPT won’t generate:
1) Erotic images and sexual deepfakes:
ChatGPT blocks any attempt to create sexually exploitative images. The company says it has also put in place “heightened security measures” to prevent the creation of non-consensual intimate images or sexual deepfakes.
2) Photorealistic violent images:
While ChatGPT allows users to depict violence in their images in ‘artistic, creative or fictional contexts’, it prevents users from creating photorealistic, graphically violent images in ‘certain contexts’.
Notably, creating photorealistic images is one of the highly touted features of GPT-4o’s image generator, which many users have used to bring their favourite fictional movie or game character to life, among other use cases.
3) Content promoting self-harm:
The new ChatGPT image generator blocks any attempt to generate images that could facilitate self-harm, such as instructions for causing self-harm. OpenAI says it has also added protections for users it believes are under the age of 18, but it doesn’t go into detail about them.
4) Extremist propaganda:
OpenAI has added measures to ChatGPT that prevent the generation of images related to extremist propaganda and recruitment. The company does, however, allow users to generate hateful symbols in certain contexts.
“We allow users to generate hateful symbols in a critical, educational, or otherwise neutral context, as long as they don’t clearly praise or endorse extremist agendas,” OpenAI said in its GPT-4o native image generation system card.
However, the company does not go into detail about how it defines hateful symbols or extremist propaganda, a question that will only grow more complex as AI-generated images become more popular.
5) Instructions for illicit activities:
ChatGPT also blocks the creation of images containing advice or instructions related to weapons, violent wrongdoing, theft and other illicit activities. Given that ChatGPT’s new image generator is particularly good at producing instructional formats like PPTs or infographics, blocking the creation of manuals for carrying out illegal activities seems like a welcome step.
6) Creating and editing images of minors:
OpenAI has taken a strong stance on child safety risks with GPT-4o’s image generator, blocking any attempt to edit real-life images of children. The same protection applies to minor public figures, whose images cannot be created or edited using the new image tool.
7) Copying styles of living artists:
ChatGPT freely generates images in the style of various film studios such as Ghibli or Pixar, and can even mimic the style of specific films. However, OpenAI does not allow the creation of images in the style of a living artist.
With ChatGPT also refusing to generate Ghibli-style images for some users due to copyright concerns, the question of what the AI should and shouldn’t be allowed to generate is likely to be a hot topic in the coming days, weeks and months.