Gotta love a selfie, right? Well, this week FaceApp (a sort of crude, AI-driven Photoshop) has blown up across the internet as everyone from Drake to your grandma has posted a picture of what they'll look like when they're old. Or at least an augmented, face-tuned version of what they could look like when they're old.

Suddenly we are all confronted with our own mortality as we gasp over how many wrinkles we might get or how much we look like our parents. A bit of harmless fun, and not really a new idea. Incidentally, old me looked terrible and I now fear aging more than ever. What fun!

Anyway, about a month ago we had the same thing but with gender, where everyone was giggling over what they would look like as the ‘opposite’ gender. A woefully binary party trick. Before that it was looking like babies, and before that it was the 10 year challenge, a viral sensation where everyone posted side-by-side selfies taken 10 years apart. Each time, these challenges appear almost from nowhere, take over our news feeds and then disappear just as quickly. So, what is this all about? Why are we so obsessed with ourselves? And what is being done with all the images and data created?

He’s got a big… ego

It’s not really surprising that the photo challenges that take off are often selfie-based. The rise of selfie culture over the last decade has been huge, and as a result we are more accustomed to our own faces than ever. Data shows that social content featuring human faces gets better engagement, which in our algorithmic world means the more we like, the more we see. Add to the mix that humans are ego-driven beings and it makes sense that selfies are such a common thing. Getting a like on a picture of ourselves is sort of validating. That isn’t a judgement. Hell, I’m far from immune myself. I love a cute selfie and I never delete. It’s just an observation.

But could it be that we have become so bored of our own actual faces that we will jump at any opportunity to post a pic of ourselves in the pursuit of likes? It feels like we’re searching for new and interesting ways to get our faces seen by the internet, even if it means agreeing to some ‘alarming’ data policies.

FaceApp’s terms have had some people up in arms, warning about the use of the data, which in the wake of Cambridge Analytica is hardly surprising. But hold on. Do any of us really know what the likes of Facebook are doing with all the images we post? Conspiracy theorists said that the 10 year challenge was all about feeding Facebook’s facial recognition software with old and current images of millions of people so we could all be monitored, Big Brother style. There is no evidence of this of course, but it’s not as far-fetched as it seems in this day and age. I mean, I dread to think what someone might do with my old photos. I can never be famous, that’s for sure.

From Russia with malicious intent

The other key point that people keep making is that it’s a Russian-made app and that therefore it MUST be harvesting our data for nefarious reasons. That feels like a lazy conclusion: at best ignorant, at worst xenophobic. And besides, let’s not pretend that Google and your phone company don’t know where you are at all times. They’re not Russian. Let’s not pretend that Facebook isn’t doing the same thing as FaceApp, just on an enormous scale and with many more data points being collected. But is the fear around FaceApp valid? I asked Sam Watson, Head of Mobile here at Brass, for his thoughts on the matter.

“I remember when the first iteration of this app came out in 2017 and the office was awash with gender-swapping selfies, with little or no concern about its Russian development origin. There were concerns about the terms, but this was pre-Cambridge Analytica (or 2017 BCa), so although there were blog posts and news articles on the subject, there wasn’t as much hysteria about the app’s data terms. There was a big backlash to the brief race filters they launched and quickly removed, but was that more about bias in the AI training data than cognitive bias from the developers? Did it immediately put the app developers on the back foot in the eyes of the public? I don’t think it helped them. Now we’re in the year 2019 ACa (After Cambridge Analytica), everyone is scrutinising app terms. Looking at them now, what can we take from them:

You grant FaceApp a perpetual, irrevocable, nonexclusive, royalty-free, worldwide, fully-paid, transferable sub-licensable license to use, reproduce, modify, adapt, publish, translate, create derivative works from, distribute, publicly perform and display your User Content and any name, username or likeness provided in connection with your User Content in all media formats and channels now known or later developed, without compensation to you. When you post or otherwise share User Content on or through our Services, you understand that your User Content and any associated information (such as your [username], location or profile photo) will be visible to the public

Does this mean they are going to be using my face on an ad without my permission? Probably not… but it is more likely that it will be used to continually train their neural network to better understand and manipulate faces. There are some questions about the other permissions they request, but are these also there to enable the neural network to create more convincing faces? I.e. if they have access to the camera they can look at you, but if they have access to your photo library they are increasing their training data by orders of magnitude.

Let’s face the facts though: the app gets a lot of press, and it does a very convincing job of seamlessly altering faces without any user intervention. Could the grand plan be for FaceApp to create a social platform that automatically manipulates people’s faces to look their best all the time? That would be perfectly aligned to the whole selfie-dysmorphia culture we live in. Equally, as the world gets more contactless, deviceless and more face-orientated, are FaceApp laying the foundations for automated beauty? Every time a CCTV camera captures me, the FaceApp framework makes me look like the best version of myself! Or could they be training on hundreds of millions of faces to understand the ravages of time and create a real-life anti-aging product? Are they working with other content creators and banking on future virtual worlds/networks that need characters created for them? That would save a ‘helluva’ lot of money on game dev design. Could it be a Russian spy app that is going to create deep-fake crimes using our faces? It’s likely all of these are options for the developers, but right now I’m banking on the fact that they’re just loving the data from a training perspective and they’ll work out who to sell it to later!”

I am not a number

Look, it’s clear we live in a time where data is valuable and misuse of that data could be dangerous, but that doesn’t necessarily mean it will be. However, if you are worried about it and don’t want your data being harvested, then perhaps don’t use the app. Don’t agree to the terms. Don’t post side-by-side pictures of yourself 10 years apart with your address and bank details at the bottom. Just an idea. That being said, there is a saying: ‘If you’re not paying for the service, then you are the product’. It feels like people are now realising that what we’ve all been giving away freely (our image, what we like, what we don’t like, what we buy, where and when) is of huge worth to brands, businesses and governments. Makes you think, doesn’t it? Are the likes really worth it?

This post felt cute, might delete later.