OpenAI’s latest flagship model, GPT-4o, has impressed the internet with its ability to generate ultra-realistic images, and its Studio Ghibli-style creations became a massive hit. But now, the AI tool is at the centre of a growing controversy.
Several social media users have raised alarms about the tool’s potential to enable insurance fraud, pointing to its ability to add fake scratches, dents, and other damage to car photos with uncanny accuracy.
From creative tool to criminal aid?
In a now-viral post on X, a user demonstrated how ChatGPT’s image generation could convincingly modify a photo of a spotless car to show deep side scratches and a shattered tail light.
“If you are still Ghiblipoasting pivot to light insurance fraud,” wrote an X user by the handle @skyquake_1.
The generated image showed digitally created damage so realistic that many users admitted they would not be able to tell it was altered — unless they were trained professionals.
A similar post on Instagram showed how the tool could be used to pull off this kind of fraud. An account by the name of Chatgptricks wrote, “People are already using ChatGPT’s new image generator to fake receipts and accidents,” and shared a screenshot of users generating a fake restaurant receipt.
The potential for abuse is real, particularly in the context of remote insurance claim settlements. Many insurers now allow customers to submit photographic evidence online for minor or moderate damage, without a physical inspection, in a bid to speed up claims processing.
A fraudster could theoretically:
Take a real photo of their car
Use an AI tool such as GPT-4o to simulate damage
Submit the doctored image for reimbursement or repairs
Walk away with a payout for damage that never happened
This type of scam could be especially effective in claims involving scratches, fender-benders, vandalism, or disaster-related damage.
Where insurers do employ fraud detection units, those teams often focus on large-scale or suspicious claim patterns. Subtle AI-generated modifications, produced by tools trained to reproduce lighting, reflections, and photographic realism, could slip past an untrained human eye.
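Detection is not hopeless, though. Error level analysis (ELA) is one classic forensic heuristic for spotting locally edited regions in JPEG photos. The sketch below assumes the Pillow imaging library; the file name, JPEG quality, and amplification factor are illustrative, and this is a rough demonstration of the technique rather than any insurer’s actual pipeline. ELA can produce false positives and can be defeated by a careful forger.

```python
# A minimal error-level-analysis (ELA) sketch using Pillow (pip install Pillow).
# ELA is a heuristic, not proof of tampering: it only applies to JPEGs, and
# the quality/gain values below are illustrative assumptions.
from PIL import Image, ImageChops

def ela(path: str, quality: int = 90, gain: int = 20) -> Image.Image:
    """Return an amplified difference image; edited regions often stand out."""
    original = Image.open(path).convert("RGB")
    # Re-save the photo at a known JPEG quality, then reload it.
    resaved_path = path + ".ela.jpg"
    original.save(resaved_path, "JPEG", quality=quality)
    resaved = Image.open(resaved_path)
    # Regions pasted or repainted after the last compression pass tend to
    # recompress differently, so they show up brighter in the difference.
    diff = ImageChops.difference(original, resaved)
    # The differences are usually faint; amplify them for visual inspection.
    return diff.point(lambda px: min(255, px * gain))

# Hypothetical usage: ela("claim_photo.jpg").show()
```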
OpenAI’s usage policies prohibit the use of its tools for illegal activities, including fraud. The company has implemented safeguards to prevent malicious use of image generation.
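On the provenance side, OpenAI has said its generated images carry C2PA “Content Credentials” metadata, which a claims handler could check for. A minimal sketch follows, assuming the open-source c2pa-python bindings; the function name and signature shown are an assumption and vary across library versions, and the absence of a manifest proves nothing, since such metadata is stripped by screenshots or re-encoding.

```python
# A rough provenance-check sketch assuming the c2pa-python bindings
# (pip install c2pa-python). The read_file call is an assumed API and may
# differ between library versions. A missing manifest does NOT prove a
# photo is genuine; C2PA metadata is easily stripped.
import c2pa

def has_content_credentials(path: str) -> bool:
    """Return True if the file carries a readable C2PA manifest."""
    try:
        manifest_json = c2pa.read_file(path)  # assumed to return manifest JSON
        return bool(manifest_json)
    except Exception:
        # No manifest present, or the file could not be parsed.
        return False

# Hypothetical usage:
# print(has_content_credentials("claim_photo.jpg"))
```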
As AI becomes more accessible and hyper-realistic, social media users are urging both consumers and institutions to remain vigilant.