Friday, February 23, 2024

How to tell images of Trump arrested, Pope in a coat were AI-made


This article is a preview of The Tech Friend newsletter. Sign up here to get it in your inbox every Tuesday and Friday.

In the past week or so, people playing around with artificial intelligence software spit out fake images of Donald Trump being arrested and Pope Francis appearing to strut in a blinged-out puffy coat.

And AI-generated images depicted what looked like French President Emmanuel Macron caught up in a riot.

Technology boosted by AI is helping people generate false images, video and audio that are becoming more difficult to distinguish from reality.

I have five clues that you can look for to spot AI-generated images, including by zeroing in on hands, background images and inanimate objects that often don’t look quite right.

Whenever there is an evolution in technology, we need to learn to navigate new challenges. Teaching ourselves the skills of an AI detective gives us power. The skepticism about the Trump fakes showed that we’re not AI suckers. (A Manhattan grand jury voted to indict Trump on Thursday.)

But I’ll tell you the truth: I feel uneasy about writing this newsletter. I don’t want to hype your fears about AI fakes, which is in and of itself risky. And focusing on AI forensics may also distract us from a deeper reason that fakes are alluring.

We have 15 years of social media history — and centuries of conspiracy theories — that show the sophistication of the “evidence” is not what makes false information believable. We fall for fakes when we want to believe the reality they present.

5 tip-offs that an image may be an AI-generated fake

1. Look at the hands. AI software has a history of generating human hands with too many fingers or other oddities. The technology is starting to nail hands now, but there are often still glitches.

In the fake image of Pope Francis, for example, his right hand looks squashed and so does what appears to be a takeout coffee cup he’s clutching. At a cursory glance, you can recognize that these details don’t look quite right.

2. Inanimate objects might be off-base: AI software including that from Midjourney — which was used to create the puffy coat Pope and the fake Trump arrest images — can generate objects that defy reality.

To spot this, focus on items in an image like eyeglasses, fences or bicycles.

Some eagle-eyed people noticed that in the fake image of Pope Francis, the traditional pectoral cross around his neck only had one strap.

Computer-generated people might be missing an earring, or the earpieces of their eyeglasses might not match. These flaws were more noticeable in prior generations of AI image software, but these distortions still pop up.

Machines can also be tricky for AI. The journalist Luke Bailey tweeted images of AI-generated unicycles that were laughably off base.

3. Is there garbled text? If you’re wondering if an image is made by AI, look for writing on objects like street signs or billboards.

Bailey also showed an AI-generated image of Prince Harry clutching a bag of McDonald’s food. The restaurant chain’s logo looked realistic but the text on the bag was gibberish.

4. Scan the background. AI-generated images may have blurry or distorted details, particularly in the background.

In one of the fake images of Trump, the faces of law enforcement officers appeared to be blurry or misshapen. In another, the eyes of the AI-generated fake police officers appeared to be looking in the wrong direction.

5. Are the images overly glossy or artistic-looking? Some AI-generated images of real people appear garishly stylized or depict people with plastic-looking faces.

The face of the AI-generated Pope Francis had an “aesthetic sheen,” said Henry Ajder, a specialist in manipulated or artificially generated media. “AI software smooths them a bit too much and makes them look too shiny.”

It will become harder for you to spot AI-generated people or doctored images as AI technology advances. Ajder cautioned that these clues to spotting AI images might be out of date soon. “In weeks these flaws can be trained out of these models,” he said.

The bigger picture: It’s irresponsible to treat AI fakes as doomsday

Fake images aren’t new. For more than a decade, for example, a fake image of a shark supposedly swimming in flooded city streets has circulated repeatedly during hurricanes or other storms.

But it is scary that AI software gives almost anyone the ability to churn out convincing-looking images in minutes.

Our challenge is to treat the risks of AI fakes with neither too little concern nor so much that it creates a self-fulfilling panic.

Researchers talk about a phenomenon known as the “liar’s dividend”: The more we believe that what we see and hear is fake, the more we risk disbelieving the authenticity of anything. This Orwellian mistrust is what authoritarian governments love. You and I must resist this.

It’s also important to recognize that fakes and hoaxes have been part of our lives forever. They are partly a symptom of our mistrust in one another and our fears.

I’ve fallen for fakes, too. Early in the coronavirus outbreak in 2020, I saw a viral tweet with an image of Tom Hanks apparently holed up in a hospital room with Wilson, the volleyball from his “Cast Away” film.

The image was a photoshopped fake from a satirical Australian news publication, but I retweeted it without thinking. I was scared of the pandemic and this moment of levity felt like a relief. I wanted it to be true.

Claire Wardle, a co-founder of the Information Futures Lab at Brown University, told me that she was heartened that relatively few people seemed to believe the AI-generated Trump images were real.

She said that shows many of us have learned to be discerning about what we see online and look for confirmation. Wardle said she saw comments on Twitter from people saying if the Trump arrest images were genuine, the information would have been published on conventional news websites.

“It’s easy to go the doomsday route but actually I think we’re smarter than we think we are,” Wardle said.

One of the biggest headaches of Help Desk readers is getting a Facebook account taken over by hackers.

And Facebook stinks at making it straightforward to recover an account. My colleague Heather Kelly has suggestions for how you can prevent a Facebook account takeover in the first place.

If you only do one thing, turn on two-factor authentication – an extra login step, such as a one-time code, required to access your Facebook account in addition to your password. To do this:

Tap the three lines in the upper-right corner (Android app) OR the three lines in the lower-right corner (iPhone app) → Scroll down to Settings & privacy → Settings → Meta Accounts Center at the top of the screen → Password and security → Two-factor authentication → tap “Edit” and enter your Facebook password.

You’ll see three options to choose from. Most people should choose one of these two:

  1. Text message (SMS): Facebook will text a number to your phone that you have to enter into the website or Facebook app when you log in, after you enter your password.
  2. Authentication app: This works similarly to the text option, but you open a third-party app to get the numeric code instead of receiving a text message. We recommend Twilio’s Authy or Google Authenticator (iOS, Android).
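If you’re curious how those authenticator apps come up with their six-digit codes, here is a minimal sketch of the time-based one-time password algorithm (TOTP, standardized in RFC 6238) that most of them implement. The function name and the six-digit, 30-second defaults are illustrative, not tied to any particular app:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, when=None, digits=6, step=30):
    """Generate a time-based one-time password (RFC 6238, HMAC-SHA-1)."""
    # The shared secret is the base32 string you get when setting up the app.
    key = base64.b32decode(secret_b32.upper())
    # The "moving factor" is the current 30-second interval number.
    counter = int((time.time() if when is None else when) // step)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, "sha1").digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes based on the last nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because the code depends only on the shared secret and the clock, your phone and Facebook’s servers can agree on it without sending anything over the network – which is why an authenticator app is considered safer than SMS codes.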

Are you bored of me recommending two-factor authentication? Too bad. I’m going to keep doing it until our whole stupid system of passwords is nuked into dust.




