Is the Russian Face App Stealing Your Facial Data?

No, FaceApp isn’t harvesting photos of your face and sending them back to Russia for some nefarious project. At least, that’s what current evidence suggests.

After going viral in 2017 and amassing more than 80 million active users, the app is blowing up again thanks to the so-called FaceApp Challenge, in which celebs (and everyone else) have been adding years to their visages with the app’s old-age filter. On your iPhone or Android device, the app uses artificial intelligence to render what you might look like in a few decades.

But one tweet set off a minor internet panic this week, when a developer warned that the app could be taking all the photos from your phone and uploading them to its servers without any obvious permission from the user.

The tweeter, Joshua Nozzi, said later he was trying to raise a flag about FaceApp having access to all photos, even if it wasn’t uploading them to a server owned by the Russian company.

Storm in an internet teacup?

This all turns out to be another of the Web’s many storm-in-a-teacup moments. A security researcher who goes by the pseudonym Elliot Alderson (real name Baptiste Robert) downloaded the app and checked where it was sending users’ faces. The French cyber expert found FaceApp uploaded only submitted photos—those you want the software to transform—to company servers.

Of course, given the developer company is based in St. Petersburg, many assumed the faces would be viewed and processed in Russia. In fact, the photos appear to be uploaded to Amazon data centers in the U.S., though the data in those Amazon data centers could, in principle, be mirrored back to computers in Russia. It’s unclear how much access FaceApp employees have to those images, and Forbes hadn’t received comment from the company at the time of publication about just what it does with uploaded faces.

So while Russian intelligence or police agencies could demand FaceApp hand over data if they believed it was lawful, they’d have a considerably harder time getting that information from Amazon in the U.S.

Permission to land on your phone

So is there a privacy concern? FaceApp could operate differently. It could, for instance, process the images on your device rather than send submitted photos to an outside server. As iOS security researcher Will Strafach said of the uploads: “I am sure many folks are not cool with that.”

It’s unclear how well FaceApp’s AI would process photos on the device rather than on more powerful servers. FaceApp improves its face-changing algorithms by learning from the photos people submit. This could be done on the device rather than the server, since machine learning features are available on both Android and iOS, but FaceApp may prefer to keep using its own computers to train its AI.

Users who are (understandably) concerned about the app having permission to access any photos at all might want to audit all the tools on their smartphone. Many likely have access to photos and an awful lot more: your every move via location tracking, for instance. To revoke permissions, either delete the app, or go to the app settings on your iPhone or Android and change what data tools are allowed to access.

FaceApp responds

Forbes contacted FaceApp founder Yaroslav Goncharov, who provided a statement Wednesday morning. He said that user data is not transferred to Russia and that the app does “most of the photo processing in the cloud.”

“We only upload a photo selected by a user for editing. We never transfer any other images from the phone to the cloud,” Goncharov added.

“We might store an uploaded photo in the cloud. The main reason for that is performance and traffic: we want to make sure that the user doesn’t upload the photo repeatedly for every edit operation. Most images are deleted from our servers within 48 hours from the upload date.”

He said that users can also request that all their data be deleted. To do so, go to the app’s settings, then support, and opt to report a bug, using the word “privacy” in the subject line message. Goncharov said this should help speed up the process.

And he added: “We don’t sell or share any user data with any third parties.”

Getting Up to Speed with AI and Cybersecurity – What You Need to Know

In 1971, Bob Thomas, an American computer researcher, wrote Creeper, the first computer program that could migrate across networks. It traveled between terminals on the ARPANET, printing the message “I’m the creeper, catch me if you can.” Creeper was made self-replicating by fellow researcher and email inventor Ray Tomlinson, creating the first documented computer virus.

In order to contain Creeper, Tomlinson wrote Reaper, a program that chased Creeper across the network and erased it – creating the world’s first antivirus cybersecurity solution.

How cybersecurity has developed

Back then it would have been hard to imagine how a virus as simple and harmless as Creeper could be the precursor to the development of destructive malware and ransomware such as ILOVEYOU and WannaCry.

Thankfully, modern cybersecurity has come a long way since Reaper. These days, any mention of cybersecurity will inevitably lead to discussion about artificial intelligence (AI) and machine learning (ML) driven security solutions.

This is because the next generation of cybersecurity threats requires agile and intelligent programs that can rapidly adapt to new and unforeseen attacks. AI and ML’s potential to meet this challenge certainly hasn’t gone unnoticed by cybersecurity decision makers, the vast majority of whom believe that AI is fundamental to the future of cybersecurity.

Yet despite the hype, many decision makers are still unsure about exactly how AI- and ML-powered security products work.

AI and cybersecurity

Recently, “neural network” AI techniques have become extremely popular, fostering the perception that they’re shiny and new. Yet many are often surprised to learn that AI is not a new phenomenon.

AI is by no means the new kid on the block: neural networks have been around for more than half a century, and some of the first commercial neural networks for malware detection and destruction were developed over 20 years ago – protecting against floppy-disk boot-sector viruses in the age of Windows 98.

Machine Learning techniques

Another thing that seems to come as a surprise is just how many different places ML turns up helping to protect systems. This might be due to people reacting to the “machine” part of ML. In reality, ML is just another form of learning from examples—a concept everyone can understand. So whether it’s a human or a machine that’s learning to perform a task, all that matters is the level of sophistication and expertise that results.

A good example is the predictive keyboard on your smartphone. It has a little machine learning engine in it that reads what you type and learns from your typing style to predict what you might say next—or at least what you intend to say next. As you feed it more and more text, it can more confidently and accurately learn what you personally say and how you say it.
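That learn-from-examples loop can be made concrete with a toy model. The sketch below is not any real keyboard’s engine—just a hypothetical `BigramPredictor` that counts which word tends to follow which, then suggests the most frequent followers:

```python
from collections import Counter, defaultdict

class BigramPredictor:
    """Toy next-word predictor: learns word-pair frequencies from example text."""

    def __init__(self):
        # Maps each word to a Counter of the words seen immediately after it.
        self.following = defaultdict(Counter)

    def learn(self, text):
        words = text.lower().split()
        for current, nxt in zip(words, words[1:]):
            self.following[current][nxt] += 1

    def predict(self, word, n=3):
        """Return up to n of the most likely next words seen after `word`."""
        return [w for w, _ in self.following[word.lower()].most_common(n)]

predictor = BigramPredictor()
predictor.learn("see you soon")
predictor.learn("see you at the office")
predictor.learn("see you at lunch")
print(predictor.predict("you"))  # "at" ranks first: it followed "you" twice
```

The more text it sees, the more its frequency counts come to reflect your personal phrasing—the same principle, at miniature scale, as the keyboard’s far more sophisticated model.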

The value is that you have your own non-human helper that can predict your writing. If, instead of keystrokes, we feed the ML your typing, mousing, and other activity, it can learn even more about your unique behavior, becoming an expert at recognizing you and your little idiosyncrasies.

Feed it malware instead of text and you have a malware detector; feed it network attacks and you have an intrusion detection system (IDS). These and many variations are found in network-security and endpoint protection (EPP) products. It’s the first kind of application many people think of for AI in cybersecurity, and it’s probably the most widespread and mature.
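The same idea can be shown in miniature. The sketch below is not any vendor’s detection algorithm: it uses invented two-number feature vectors (a count of suspicious API calls and an entropy score, both made up for illustration) and a simple nearest-centroid rule to classify a new sample as whichever labeled group it most resembles:

```python
import math

def centroid(samples):
    """Average each feature across a list of feature vectors."""
    return [sum(col) / len(samples) for col in zip(*samples)]

# Hypothetical training data: [suspicious API calls, entropy score]
benign  = [[1, 2.0], [0, 1.5], [2, 2.5]]
malware = [[9, 7.5], [8, 6.0], [7, 7.0]]

benign_c = centroid(benign)    # average benign profile
malware_c = centroid(malware)  # average malware profile

def classify(sample):
    """Label a sample by whichever class centroid it sits closer to."""
    if math.dist(sample, malware_c) < math.dist(sample, benign_c):
        return "malware"
    return "benign"

print(classify([8, 6.5]))  # near the malware centroid
print(classify([1, 1.8]))  # near the benign centroid
```

Real products use far richer features and far more capable models, but the loop is the same: learn a statistical picture of each class from labeled examples, then judge new samples against it.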

In practice, machine learning is far more complex than merely tasking a computer to solve a problem. As with Creeper and Reaper, the development of ML- and AI-based threat detection takes a high degree of understanding built upon experience as well as an innovative approach that is always a few steps ahead of the attackers.