When Rachel Brown first saw news images of a Tesla Cybertruck in flames outside the Trump International Hotel in Las Vegas, she thought they must be fake. Just a couple of weeks later, equally shocking images of the Hollywood sign burning as wildfires tore through Los Angeles spread across the internet.
But only one of those images was an AI-generated fraud. And it was really hard to tell which one it was, Brown said.
That’s because AI image generators have become so good that it’s now nearly impossible for people to tell whether an image or video is real or fake. That can be dangerous when public safety is at stake, especially in the aftermath of a potential terrorist attack or during a climate disaster that prompts thousands of evacuations.
![Headshot of Rachel Brown](../../../images/2025/02/2024-09-05-pnca-headshots-rachel-brown-3500-1.jpg)
Brown, an assistant professor of computer science at Willamette University's School of Computing & Information Sciences, researches the intersection of computer graphics, vision science, and human perception. She said regulating these image generators is complicated, but offered some advice for telling what is fake from what is real.
“The thing about human attention is that we don’t look at most parts of an image,” she said. “Most of it is in our peripheral vision, which is much lower fidelity than our central vision. The parts of an image that are most likely to be caught if they're fake are people's faces and the parts that we’re looking at.”
Unfortunately, the computers that generate these images know that, too.
“Study the image instead of just looking at whatever your eye goes to naturally,” Brown said.
Remember that AI image generators are bad at physics, she said. This often leads to inconsistencies in shadows, reflections, and the physical arrangement of objects within a scene. For instance, if an object’s reflection doesn’t match its position, or if shadows fall in unnatural directions, the image may have been generated by AI. Physics is even harder to fake in video, so those inconsistencies are more obvious there than in still images.
Brown also recommended investigating an image’s source when its origin is unclear. While some AI-generated images may not exhibit obvious flaws, verifying where an image came from can provide additional confirmation.
The accessibility of generative models has lowered the barrier for creating fake content, Brown said. This has led to a surge in "bad actors" producing convincing forgeries, like the image of the famous Hollywood sign going up in flames. As a result, even small imperfections in images are worth scrutinizing.
“If there’s nothing obviously wrong with it, that doesn’t mean it’s not fake,” she said. “A lot of people think that this is a technology problem that requires a technological solution. For right now, it’s a sociological problem that needs to be solved by educating people that you need to verify the information that you consume.”