Picture Perfect: Real Tech, Fake People

There are some great philosophical questions that have no answers, that create streams of thoughts with no end, that ask us to look beyond the surface and search for more. 

 

In 2020, those questions have not lost their relevance, but have in fact become even more profound. As we search for meaning and ways to connect to what makes us human, technology that can mimic us poses the ultimate philosophical debate:

 

How do we define what a human being really is?

 

With tech that can understand context, programs that can analyze and draw insights from data and machine-learning models that strive to be the next Michelangelo, is being human less about our brains and more about our composition?

 

Maybe not for long. With StyleGAN technology that can create hyper-realistic images of humans, the gap between humans and computers is narrower than ever. 

 

This technology uses generative adversarial networks (GANs). A GAN is essentially two networks having a contest (hence the “adversarial”): a generative network produces candidates, while a discriminative network evaluates them. Training begins with real data; over the course of the “contest,” the generator learns to produce new samples that the discriminator can no longer reliably tell apart from the real thing. 
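To make the “contest” concrete, here is a deliberately tiny sketch of the adversarial loop: a generator that is just a linear map on noise, and a discriminator that is logistic regression on a single number, trained against samples from a 1-D Gaussian. The names, target distribution and hand-derived gradient updates are illustrative choices for this sketch, not part of StyleGAN or any real library.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def real_batch(n):
    # "Real" data: samples from N(4, 1) — the distribution to imitate
    return rng.normal(4.0, 1.0, n)

a, b = 1.0, 0.0   # generator g(z) = a*z + b, starts at N(0, 1)
w, c = 0.0, 0.0   # discriminator D(x) = sigmoid(w*x + c)
lr, n = 0.01, 64

for step in range(2000):
    z = rng.normal(0.0, 1.0, n)
    fake = a * z + b
    real = real_batch(n)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0
    s_real = sigmoid(w * real + c)
    s_fake = sigmoid(w * fake + c)
    w -= lr * np.mean(-(1 - s_real) * real + s_fake * fake)
    c -= lr * np.mean(-(1 - s_real) + s_fake)

    # Generator step (non-saturating loss): push D(fake) toward 1,
    # i.e. move the fakes toward where the discriminator sees "real"
    z = rng.normal(0.0, 1.0, n)
    fake = a * z + b
    s_fake = sigmoid(w * fake + c)
    a -= lr * np.mean(-(1 - s_fake) * w * z)
    b -= lr * np.mean(-(1 - s_fake) * w)

samples = a * rng.normal(0.0, 1.0, 1000) + b
```

After training, the generator's output distribution has drifted from N(0, 1) toward the real data around 4 — the same dynamic, scaled up to deep convolutional networks and images, is what lets a GAN synthesize faces.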

 

StyleGAN is an approach for training generator models that are capable of synthesizing large, high-quality images. StyleGAN handles the variability of real photos by injecting “styles” — features like hair, face shape and skin tone — at each convolutional layer of the generator. 
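The per-layer style injection works through adaptive instance normalization (AdaIN): each layer's feature maps are normalized, then rescaled and shifted by a per-channel style derived from the latent code. A minimal numpy sketch of that one operation (shapes and names are illustrative, not the official implementation):

```python
import numpy as np

def adain(x, style_scale, style_bias, eps=1e-5):
    """Normalize each channel of x, then apply a per-channel 'style'.

    x: (channels, height, width) activations at one generator layer.
    style_scale, style_bias: (channels, 1, 1) — in the real model these
    come from the latent code via a learned affine map.
    """
    mu = x.mean(axis=(1, 2), keepdims=True)
    sigma = x.std(axis=(1, 2), keepdims=True)
    normalized = (x - mu) / (sigma + eps)
    return style_scale * normalized + style_bias

# Toy activations for an 8-channel, 16x16 layer
x = np.random.default_rng(1).normal(size=(8, 16, 16))
scale = np.full((8, 1, 1), 2.0)
bias = np.full((8, 1, 1), 0.5)
y = adain(x, scale, bias)
```

Because the style is reapplied at every resolution, coarse layers end up controlling features like face shape while fine layers control details like skin texture.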

 

While there may be the occasional telltale sign, the technology is clearly very adept. You can even lay down some bets and see how many times you get it right or wrong here. A fun game, for sure, but the real question still lingers: What is the purpose of this, and who’s using it?

 

The internet is awash with conspiracy theories, bad science, bad research, terrible advice and more. And with more and more accusations of doxxing and surveillance flying around the internet, do fake people help or hinder us?

 

There are positives, like this example of a CGI model in Japan with a huge Instagram following. “She” is proof that this type of technology could be used for modeling, game development, movies and even porn — though it could also put models and actors out of work. 

 

The darker side of creating fake people is that we can create more realistic doctored images that are then perpetuated in the media and convince people to believe things that simply aren’t true. And in the age of social media, things can spread around the world instantly, making it difficult to rein things in and share the truth. Fake images could lend legitimacy to bot and duplicate accounts used to spread misinformation. 

 

Ultimately, it was the idea of “fake porn” that caused California to pass a law in 2019 banning pornography created with human image synthesizing technology without the consent of the people being depicted. Quite a conundrum when you think about it, since with fully synthetic faces, no real people are being depicted at all. 

 

The more we dive into this subject, the more we verge into the territory of facial recognition software. As of April 2020, the best identification algorithm has an error rate of just 0.08%, compared to 4.1% from the leading algorithm in 2014, according to tests by the National Institute of Standards and Technology (NIST). What’s hard to discern is whether or not these tests would be fooled by fake images. Currently, media forensic programs are already studying ways in which to identify and counteract fake media, including images created by GANs. 

 

There are already a lot of challenges when it comes to recognizing faces in facial recognition, and a plethora of fake people could potentially muddy the waters even further. 

 

No wonder then that Jevin West and Carl Bergstrom of the University of Washington, the creators of Calling Bullshit, are trying to show people how to identify these artificial images. Their book aims to go beyond just fake imagery and show how to sift through the pile of junk on the internet to distill what is fact from what is fiction. As machine learning and data science progress even further, these tools can be used to rigorously fact-check themselves and others, hopefully leading to a more reliable internet for us all. 

 

Whatever the potential negatives, however, the fact remains that the creative power of GANs is incredible. 

 

Looking to read more about the intersection of artistry and machine learning? Check out our article, Where Art Meets AI.

Morven Watt