Deepfake maps could really mess with your sense of the world

Misinformation mapped?

Researchers applied AI techniques to make portions of Seattle look more like Beijing

Will Knight, Wired.com

[Image: A macro shot of the city of Seattle, Washington, on a map.]

Satellite images showing the expansion of large detention camps in Xinjiang, China, between 2016 and 2018 provided some of the strongest evidence of a government crackdown on more than a million Muslims, triggering international condemnation and sanctions.

Other aerial images—of nuclear installations in Iran and missile sites in North Korea, for example—have had a similar impact on world events. Now, image-manipulation tools made possible by artificial intelligence may make it harder to accept such images at face value.

In a paper published online last month, University of Washington professor Bo Zhao employed AI techniques similar to those used to create so-called deepfakes to alter satellite images of several cities. Zhao and colleagues swapped features between images of Seattle and Beijing to show buildings where there are none in Seattle and to remove structures and replace them with greenery in Beijing.


Zhao used an algorithm called CycleGAN to manipulate the satellite photos. The algorithm, developed by researchers at UC Berkeley, has been widely used for all sorts of image trickery. It trains an artificial neural network to recognize the key characteristics of certain images, such as a style of painting or the features on a particular type of map. A second network then refines the first by trying to detect whether an image is real or generated, pushing the forgeries to become steadily harder to spot.
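
For a sense of how that adversarial setup works, here is a minimal sketch of a CycleGAN-style training step in PyTorch. The tiny networks, hyperparameters, and random stand-in tiles are illustrative assumptions, not the configuration used in Zhao's paper:

```python
# Minimal CycleGAN-style training step (illustrative only).
import torch
import torch.nn as nn

def make_generator():
    # Toy image-to-image translator; real CycleGAN generators use
    # ResNet blocks or U-Nets, not two conv layers.
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
    )

def make_discriminator():
    # Toy PatchGAN-style critic: outputs a grid of real/fake scores.
    return nn.Sequential(
        nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(16, 1, 4, stride=2, padding=1),
    )

G = make_generator()   # domain X (e.g., Seattle tiles) -> domain Y (Beijing style)
F = make_generator()   # domain Y -> domain X
D_X, D_Y = make_discriminator(), make_discriminator()

adv_loss = nn.MSELoss()   # least-squares GAN loss, as in CycleGAN
cyc_loss = nn.L1Loss()    # cycle-consistency loss
opt_g = torch.optim.Adam(list(G.parameters()) + list(F.parameters()), lr=2e-4)
opt_d = torch.optim.Adam(list(D_X.parameters()) + list(D_Y.parameters()), lr=2e-4)

def train_step(real_x, real_y, lam=10.0):
    # Generators: fool the discriminators while reconstructing the input.
    fake_y, fake_x = G(real_x), F(real_y)
    pred_fy, pred_fx = D_Y(fake_y), D_X(fake_x)
    g_loss = (
        adv_loss(pred_fy, torch.ones_like(pred_fy))
        + adv_loss(pred_fx, torch.ones_like(pred_fx))
        + lam * cyc_loss(F(fake_y), real_x)   # X -> Y -> X should return the input
        + lam * cyc_loss(G(fake_x), real_y)   # Y -> X -> Y likewise
    )
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    # Discriminators: tell real tiles from generated ones.
    pr_y, pf_y = D_Y(real_y), D_Y(G(real_x).detach())
    pr_x, pf_x = D_X(real_x), D_X(F(real_y).detach())
    d_loss = 0.5 * (
        adv_loss(pr_y, torch.ones_like(pr_y)) + adv_loss(pf_y, torch.zeros_like(pf_y))
        + adv_loss(pr_x, torch.ones_like(pr_x)) + adv_loss(pf_x, torch.zeros_like(pf_x))
    )
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    return g_loss.item(), d_loss.item()

# Example: one step on random stand-ins for 64x64 satellite tiles.
x, y = torch.randn(4, 3, 64, 64), torch.randn(4, 3, 64, 64)
print(train_step(x, y))
```

The cycle-consistency term is what lets the method learn from unpaired images: a Seattle tile pushed into the Beijing style and back again must still look like the original Seattle tile.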

As with deepfake video clips that purport to show people in compromising situations, such imagery could mislead governments or spread on social media, sowing misinformation or doubt about real visual information.

“I absolutely think this is a big problem that may not impact the average citizen tomorrow but will play a much larger role behind the scenes in the next decade,” says Grant McKenzie, an assistant professor of spatial data science at McGill University in Canada, who was not involved with the work.

“Imagine a world where a state government, or other actor, can realistically manipulate images to show either nothing there or a different layout,” McKenzie says. “I am not entirely sure what can be done to stop it at this point.”

A few crudely manipulated satellite images have already spread virally on social media, including a photograph purporting to show India lit up during the Hindu festival of Diwali that was apparently touched up by hand. It may be just a matter of time before far more sophisticated “deepfake” satellite images are used to, for instance, hide weapons installations or wrongly justify military action.

Gabrielle Lim, a researcher at Harvard Kennedy School’s Shorenstein Center who focuses on media manipulation, says maps can be used to mislead without AI. She points to images circulated online suggesting that Alexandria Ocasio-Cortez was not where she claimed to be during the Capitol insurrection on January 6, as well as Chinese passports showing a disputed region of the South China Sea as part of China. “No fancy technology, but it can achieve similar objectives,” Lim says.

Manipulated aerial imagery could also have commercial significance, given that such images are hugely valuable for digital mapping, tracking weather systems, and guiding investments.

US intelligence has acknowledged that manipulated satellite imagery is a growing threat. “Adversaries may use fake or manipulated information to impact our understanding of the world,” says a spokesperson for the National Geospatial-Intelligence Agency, the arm of the Pentagon that oversees the collection, analysis, and distribution of geospatial information.

The spokesperson says forensic analysis can help identify forged images but acknowledges that the rise of automated fakes may require new approaches. Software may be able to identify telltale signs of manipulation, such as visual artifacts or changes to the data in a file. But AI can learn to remove such signals, creating a cat-and-mouse game between fakers and fake-spotters.
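
As a concrete illustration of those telltale signs, here is a small sketch of two simple forensic heuristics: checking file metadata for traces of editing software, and looking for unusual frequency-domain peaks of the kind some generative models leave behind. The file name and threshold are hypothetical, Pillow and NumPy are assumed available, and, as the spokesperson notes, a capable faker can learn to suppress exactly these signals:

```python
# Two crude forensic heuristics (illustrative, easily defeated).
import numpy as np
from PIL import Image, ExifTags

def check_metadata(path):
    """Print EXIF fields that commonly record an editing tool."""
    exif = Image.open(path).getexif()
    for tag_id, value in exif.items():
        tag = ExifTags.TAGS.get(tag_id, tag_id)
        if tag in ("Software", "ProcessingSoftware"):
            print(f"metadata: {tag} = {value!r}")

def spectral_peak_score(path):
    """Strong isolated peaks in the high-frequency spectrum can hint at
    resampling or GAN checkerboard artifacts; higher score = more suspect."""
    gray = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray)))
    spectrum /= spectrum.max()
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    # Zero out the low-frequency center, where natural images concentrate energy.
    spectrum[cy - h // 8 : cy + h // 8, cx - w // 8 : cx + w // 8] = 0
    # Ratio of the strongest remaining peak to the mean background.
    return spectrum.max() / (spectrum.mean() + 1e-12)

# Example usage (hypothetical file name):
# check_metadata("tile.png")
# print(spectral_peak_score("tile.png"))
```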

“The importance of knowing, validating, and trusting our sources is only increasing, and technology has a large role in helping to achieve that,” the spokesperson says.

Spotting images manipulated with AI has become a major area of academic, industry, and government research. Big tech companies such as Facebook, which are concerned about spreading misinformation, are backing efforts to automate the identification of deepfake videos.

Zhao at the University of Washington plans to explore ways to automatically identify deepfake satellite images. He says that studying how landscapes change over time could help flag suspect features. “Temporal-spatial patterns will be really important,” he says.
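
Zhao's paper does not publish a detector, but one way such a temporal-spatial check might look is to compare a new tile against a stack of historical tiles of the same location and flag pixels that deviate far more than normal seasonal variation. This is an illustrative heuristic under that assumption, not the authors' method:

```python
# Illustrative temporal anomaly check for a satellite tile.
import numpy as np

def temporal_anomaly_mask(history, candidate, k=4.0):
    """history: (T, H, W) grayscale tiles of one location over time.
    candidate: (H, W) new tile. Returns a boolean mask of suspect pixels."""
    median = np.median(history, axis=0)
    # Robust spread of normal change, via the median absolute deviation.
    mad = np.median(np.abs(history - median), axis=0) + 1e-6
    deviation = np.abs(candidate - median) / mad
    return deviation > k

# Synthetic example: a "building" pasted into an otherwise stable scene.
rng = np.random.default_rng(0)
history = rng.normal(100, 5, size=(12, 64, 64))
candidate = history.mean(axis=0).copy()
candidate[20:30, 20:30] += 80   # inserted structure
mask = temporal_anomaly_mask(history, candidate)
print("suspect pixels:", int(mask.sum()))
```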

However, Zhao notes that even if the government has the technology needed to spot such fakes, the public might be caught unawares. “If there is a satellite image which is widely spread in social media, that could be a problem,” he says.

This story first appeared on wired.com.
