Deepfakes: What are they?

https://www.economist.com/the-economist-explains/2019/08/07/what-is-a-deepfake

SUSAN SONTAG understood that photographs are unreliable narrators. “Despite the presumption of veracity that gives all photographs authority, interest, seductiveness,” she wrote, “the work that photographers do is no generic exception to the usually shady commerce between art and truth.” But what if even that presumption of veracity disappeared? Today, the events captured in realistic-looking or -sounding video and audio recordings need never have happened. They can instead be generated automatically, by powerful computers and machine-learning software. The catch-all term for these computational productions is “deepfakes”.

The term first appeared on Reddit, a message board, as the username for an account that was producing fake videos of female celebrities having sex. An entire community sprang up around the creation of these videos, writing software tools that let anyone automatically paste one person’s face onto the body of another. Reddit shut the community down, but the technology was out there. Soon it was being applied to political figures and actors. In one uncanny clip Jim Carrey’s face is melded with Jack Nicholson’s in a scene from “The Shining”.

Tools for editing media manually have existed for decades—think Photoshop. The power and peril of deepfakes is that they make fakery cheaper than ever before. Before deepfakes, a powerful computer and a good chunk of a university degree were needed to produce a realistic fake video of someone. Now some photos and an internet connection are all that is required.

The production of a deepfake about, say, Barack Obama, starts with lots of pictures of the former president (this, incidentally, means that celebrities are easier to deepfake than normal people, as the internet holds more data that describe them). These photos are fed into a piece of software known as a neural network, which makes statistical connections between the visual appearance of Mr Obama and whatever aspect of him you wish to fake. If you want to go down the ventriloquist route and have Mr Obama say things that the man himself has never said, then you must direct your software to learn the associations between particular words and the shape of Mr Obama’s mouth as he says them. To affix his face onto another person’s moving body, you must direct the software to learn the associations between face and body.
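To make that pipeline concrete, here is a minimal sketch, in PyTorch, of the shared-encoder, twin-decoder autoencoder popularised by the early face-swap tools. Everything below is illustrative rather than any real tool’s code: the layer sizes are toy values, and the random tensors stand in for real aligned face crops of the two people involved.

```python
import torch
import torch.nn as nn

class FaceAutoencoder(nn.Module):
    """One shared encoder, plus one decoder per identity."""
    def __init__(self):
        super().__init__()
        # Encoder: compress a 64x64 RGB face crop into a small latent code.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # 16 -> 8
        )
        # Two decoders, one per person; which one you use is the swap.
        self.decoder_a = self._make_decoder()
        self.decoder_b = self._make_decoder()

    def _make_decoder(self):
        return nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x, identity="a"):
        z = self.encoder(x)
        return self.decoder_a(z) if identity == "a" else self.decoder_b(z)

model = FaceAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=2e-4)
loss_fn = nn.L1Loss()

# Random tensors as stand-ins for aligned face crops of persons A and B.
faces_a = torch.rand(16, 3, 64, 64)
faces_b = torch.rand(16, 3, 64, 64)

# Training: each person's faces are reconstructed through their own decoder.
for step in range(100):
    loss = (loss_fn(model(faces_a, "a"), faces_a)
            + loss_fn(model(faces_b, "b"), faces_b))
    opt.zero_grad()
    loss.backward()
    opt.step()

# The swap: encode a frame of person B, decode with person A's decoder.
fake = model(faces_b[:1], identity="a")
```

The last line is the whole trick: because both identities share one encoder, person B’s pose and expression survive the trip through the latent code, and person A’s face comes out wearing them.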

To make the imagery more realistic, you can have the software compete with a copy of itself, one version generating imagery and the other trying to spot fakes. This technique, known as a generative adversarial network (GAN), is the purest form of deepfake, conjuring up images that are entirely unique, not just using machine learning to mash existing photos together. The image-generating software will keep improving until it finds a way to beat the network that is spotting fakes, producing images that are statistically precise, pure computational hallucinations—even if still dodgy to the human eye. The computer can generate images which are statistically accurate representations of a dog, for instance, while still not quite understanding the visual nuances of fur. Currently this lends GAN images a creepy edge, but that is likely to evaporate in future as the technique improves.
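A bare-bones version of that adversarial game, again a sketch in PyTorch with toy sizes rather than anything resembling a production recipe, looks like this:

```python
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28

# The forger: turns random noise into a flat fake image in [-1, 1].
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),
)
# The detective: emits a single real-vs-fake logit per image.
discriminator = nn.Sequential(
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_images = torch.rand(32, img_dim) * 2 - 1  # stand-in for a real dataset

for step in range(200):
    # 1. Train the discriminator: label real images 1, generated ones 0.
    noise = torch.randn(32, latent_dim)
    fake_images = generator(noise).detach()    # don't update G on this pass
    d_loss = (bce(discriminator(real_images), torch.ones(32, 1))
              + bce(discriminator(fake_images), torch.zeros(32, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2. Train the generator: try to make the discriminator say "real".
    noise = torch.randn(32, latent_dim)
    g_loss = bce(discriminator(generator(noise)), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

In practice both networks are deep convolutional models trained on huge image sets, but the alternation shown here, one step improving the forger and one step improving the detective, is the whole idea.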

The consequences of cheap, widespread fakery are likely to be profound, albeit slow to unfold. Plenty worry about the possible impact that believable fake footage of politicians might have on civil society—from a further loss of trust in media to the potential for electoral distortions. These technologies could also be deployed against softer targets: they might be used, for instance, to bully classmates by creating imagery of them in embarrassing situations. And it is not hard to imagine marketers and advertisers using deepfake tools to automatically tweak the imagery in adverts and promotional materials, optimising them for maximal engagement—the faces of models morphed into ideals of beauty that are customised for each viewer, pushing consumers to make aspirational purchases. In a world already saturated with extreme imagery, deepfakes make it plausible to push that even further, leaving Ms Sontag’s “presumption of veracity” truly dead in the water.


Equifax – Take the money or the credit monitoring?

https://www.clarionledger.com/story/news/2019/08/06/no-125-equifax-settlement-what-you-can-really-expect-bill-moak-consumer-watch/1927187001/


The ink was hardly dry on the press releases telling us we could get a check for $125 from the recent Equifax settlement when another announcement put the brakes on the expectations of the millions of Americans put at risk by the Equifax security breach. It just goes to show, once again, that you shouldn’t count your money until it’s actually in your wallet.

If you haven’t heard by now, don’t expect to get anywhere near that much (if anything) when checks are cut in January from the massive settlement.

Like many Americans, I took to my computer and logged into the settlement website when the cash payments were announced July 22. Sure enough, the website promised me, I would get $125 cash if I picked Door Number 1. Behind Door Number 2 was free credit monitoring. Unsurprisingly, most Americans just said, “Show me the money!”

But it wasn’t to be. Whether planners were optimistic, naïve or just took a shot in the dark, the $31 million set aside for cash payments was far too small to cover them if more than 248,000 people filed claims. (By the way, $31 million is a drop in the proverbial bucket compared with the as much as $700 million going to lawyers, government agencies and the few ordinary folks who can prove real damage.) The Federal Trade Commission hasn’t said how many have actually filed, but many sources indicate it’s already in the millions. And the agency no longer lists the $125 payment at the top of the claim form.

Simple math reveals that, if five million claims are filed, each check would be $6.20. The FTC has admitted that the average consumer’s check will be “nowhere near” the original $125 possibility.
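For the sceptical, here is that arithmetic in a few lines of Python (the five-million figure is the illustrative scenario above, not an official claim count):

```python
cash_pool = 31_000_000              # dollars set aside for the $125 cash option
full_payout = 125                   # the advertised per-person check

# How many claimants the pool can cover at full value.
print(int(cash_pool / full_payout))          # 248000

# What each check shrinks to if five million people file.
claims_filed = 5_000_000
print(round(cash_pool / claims_filed, 2))    # 6.2
```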

“Pick free credit monitoring,” advised the FTC’s Robert Schoshinski in a blog post a couple of days after the initial press release. “The public response to the settlement has been overwhelming, and we’re delighted that millions of people have visited ftc.gov/Equifax and gone on to the settlement website’s claims form,” Schoshinski wrote without a hint of irony.

Since the announcements, a torrent of complaints has erupted. “With just $31 million to be divided up by all the Americans who filed to receive their $125 check, Americans have the choice of receiving pennies for having their credit details spilled out online, or receiving virtually worthless credit monitoring,” said Sen. Ron Wyden, D-Oregon, in a statement. “Another clear failure by the FTC.”

But the FTC said there’s been a misunderstanding. “The option to obtain reimbursement for alternative credit monitoring, as set forth originally in the class action settlement, was never intended to be a cash payout for all affected consumers,” the agency said in a statement, pointing out that the credit monitoring offered in the settlement would cost $1,200 if bought from Equifax directly.

A lot more could happen with this story, depending on how many people file claims. It’s possible the reimbursement amount will eventually be raised and you’ll get your money, but that could take years.

If you can demonstrate you spent your own money and time because of the breach, you can be reimbursed, though anything beyond 10 hours of time must be documented. “You can still ask for reimbursement for any other credit monitoring you purchased after Sept. 7, 2017, or costs associated with credit freezes after that date, any losses due to identity theft, or any notary fees, long-distance phone call bills, postage, copying, or mileage involved in trying to deal with the fallout of the breach,” noted Slate’s Josephine Wolff.

If you’ve already filed a claim, you will likely be contacted soon by the company administering the settlement, offering you the opportunity to change your option and take the free credit monitoring after all. In light of this situation, some pundits suggest it might be the better alternative. And many experts suggest that freezing your credit for the near future is a good idea as well.

Is the Russian FaceApp Stealing Your Facial Data?

Full Article Available Here:

https://www.forbes.com/sites/thomasbrewster/2019/07/17/faceapp-is-the-russian-face-aging-app-a-danger-to-your-privacy/#5dd2e59e2755

No, FaceApp isn’t taking photos of your face and taking them back to Russia for some nefarious project. At least that’s what current evidence suggests.

After going viral in 2017 and amassing more than 80 million active users, FaceApp is blowing up again thanks to the so-called FaceApp Challenge, in which celebs (and everyone else) have been adding years to their visage with the app’s old-age filter. The app uses artificial intelligence to create a rendering of what you might look like in a few decades from a photo on your iPhone or Android device.

But one tweet set off a minor internet panic this week, when a developer warned that the app could be taking all the photos from your phone and uploading them to its servers without any obvious permission from the user.

The tweeter, Joshua Nozzi, said later he was trying to raise a flag about FaceApp having access to all photos, even if it wasn’t uploading them to a server owned by the Russian company.

Storm in an internet teacup?

This all turns out to be another of the Web’s many storm-in-a-teacup moments. A security researcher who goes by the pseudonym Elliot Alderson (real name Baptiste Robert) downloaded the app and checked where it was sending users’ faces. The French cyber expert found FaceApp only took submitted photos—those that you want the software to transform—back up to company servers, which are hosted in Amazon data centers in the U.S.

Of course, given the developer company is based in St. Petersburg, the faces will be viewed and processed in Russia. The data in those Amazon data centers could be mirrored back to computers in Russia. It’s unclear how much access FaceApp employees have to those images, and Forbes hadn’t received comment from the company at the time of publication about just what it does with uploaded faces.

So while Russian intelligence or police agencies could demand FaceApp hand over data if they believed it was lawful, they’d have a considerably harder time getting that information from Amazon in the U.S.

Permission to land on your phone

So is there a privacy concern? FaceApp could operate differently. It could, for instance, process the images on your device, rather than take submitted photos to an outside server. As iOS security researcher Will Strafach said: “I am sure many folks are not cool with that.”

It’s unclear how well FaceApp’s AI would process photos on the device rather than on more powerful servers. FaceApp improves its face-changing algorithms by learning from the photos people submit. This could be done on the device rather than on the server, as machine-learning features are available on Android and iOS, but FaceApp may want to stick to using its own computers to train its AI.

Users who are (understandably) concerned about the app having permission to access any photos at all might want to look at all the tools they have on their smartphone. It’s likely many have access to photos and an awful lot more: your every move via location tracking, for instance. To change permissions, either delete the app, or go to the app settings on your iPhone or Android and change what data each tool is allowed to access.

FaceApp responds

Forbes contacted FaceApp founder Yaroslav Goncharov, who provided a statement Wednesday morning. He said that user data is not transferred to Russia and that “most of the photo processing” happens in the cloud.

“We only upload a photo selected by a user for editing. We never transfer any other images from the phone to the cloud,” Goncharov added.

“We might store an uploaded photo in the cloud. The main reason for that is performance and traffic: we want to make sure that the user doesn’t upload the photo repeatedly for every edit operation. Most images are deleted from our servers within 48 hours from the upload date.”

He said that users can also request that all their data be deleted: go to the app’s settings, then support, and report a bug with the word “privacy” in the subject line. Goncharov said this should help speed up the process.

And he added: “We don’t sell or share any user data with any third parties.”