The Unintended Consequences of Products that Work Too Well

RECOMMENDATION ENGINES ARE POWERFUL. At their core, they are driven by artificial intelligence (AI) and designed specifically to provide a more personalized experience for the end-user. The implication is that a more personalized experience will lead to a better experience—one in which users get more value out of the product.

But what happens when a recommendation engine works too well? What if an ultra-personalized experience is actually a bad thing? As product professionals, do we have an ethical responsibility to ensure that we not only design our products well but also protect our end users?

Guillaume Chaslot, a software engineer and researcher at Université Paris-Est, believes we do, and he draws that belief from personal experience.

From 2010 to 2013, Chaslot worked at Google, where he was tasked with improving YouTube’s recommendation engine, called Up Next. He and his peers worked tirelessly on the algorithm, which was designed to maximize a user’s time on YouTube. After all, the more time spent on YouTube, the more advertisements viewers see. And, as with most other ad-supported business models, the more time users spend engaging with the product, the more revenue is generated.
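To make that objective concrete, here is a minimal sketch, in Python, of what an engagement-maximizing ranker reduces to. The names and the watch-time model are invented for illustration; this is not YouTube’s actual code. The point is that nothing in the objective asks whether a video is accurate or good for the viewer, only whether it keeps them watching.

```python
# A minimal, hypothetical sketch of an engagement-maximizing recommender.
# Names and scores are illustrative; this is not YouTube's actual system.

from dataclasses import dataclass

@dataclass
class Video:
    video_id: str
    predicted_watch_seconds: float  # a model's estimate of this user's watch time

def rank_up_next(candidates: list[Video], k: int = 3) -> list[Video]:
    """Return the k candidates expected to keep the user watching longest.

    The objective is pure engagement: accuracy and user well-being
    do not appear anywhere in the scoring function.
    """
    return sorted(candidates, key=lambda v: v.predicted_watch_seconds, reverse=True)[:k]

if __name__ == "__main__":
    pool = [
        Video("calm-explainer", 180.0),
        Video("sensational-conspiracy", 540.0),  # outrage often predicts long sessions
        Video("news-clip", 240.0),
    ]
    for video in rank_up_next(pool, k=2):
        print(video.video_id)  # the sensational video ranks first on this objective
```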

Since his three-year stint at Google, Chaslot has become openly critical of Up Next. “Having worked on YouTube’s recommendation algorithm, I started investigating and came to the conclusion that the powerful algorithm I helped build plays an active role in the propagation of false information,” he wrote in “How YouTube’s AI Boosts Alternative Facts,” published in 2017 on Medium.

In the piece, Chaslot pointed out how the algorithm has helped conspiracy theories, “alternative facts” and fake news circulate across the internet. All it takes is for a user to search for one topic that resembles other video topics, and YouTube begins encouraging, even automatically playing, more videos just like it.
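The dynamic can be illustrated with a toy model. In the sketch below, which is my illustration and not YouTube’s algorithm, videos sit on a made-up one-dimensional topic axis, autoplay always queues the nearest unwatched video, and each play pulls the user’s interest profile toward what was just shown.

```python
# A toy model of the "rabbit hole" dynamic: autoplay the most similar
# unwatched video, then nudge the user's profile toward it. The catalog,
# axis and blending rule are all invented for illustration.

catalog = {  # position on a hypothetical 0-to-1 "edginess" axis
    "news-clip": 0.0,
    "hot-take": 0.3,
    "partisan-rant": 0.6,
    "conspiracy-video": 1.0,
}

profile = 0.05  # the user starts with mainstream interests
unwatched = dict(catalog)

while unwatched:
    # Queue the unwatched video closest to the current profile.
    pick = min(unwatched, key=lambda name: abs(unwatched[name] - profile))
    # Autoplay counts as engagement, so the profile drifts toward the pick.
    profile = (profile + unwatched.pop(pick)) / 2
    print(f"played {pick:<18} profile is now {profile:.2f}")
```

Run it and the session plays news-clip, then hot-take, then partisan-rant, then conspiracy-video: each recommendation is only a small step from the last, but the endpoint is far from the starting interest.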

But if the user is interested in this type of content, is that a bad thing? YouTube’s recommendation engine caught the attention of journalists from The Guardian and Buzzfeed, among others. In a Buzzfeed article, “We Followed YouTube’s Recommendation Algorithm Down the Rabbit Hole,” staffers conducted their own experiments to see whether their queries would lead to recommendations for questionable content.

In one instance, a search for the term “U.S. House of Representatives” led to a PBS NewsHour video clip. But when the staffer clicked the Up Next recommendations, it led to a video featuring an Arizona rancher recalling an incident in which he called the U.S. Border Patrol on an alleged illegal immigrant. That video was originally posted to YouTube by a group that the Southern Poverty Law Center identified as a hate group in 2016.

“Today, recommendation engines are perhaps the biggest threat to societal cohesion on the internet—and, as a result, one of the biggest threats to societal cohesion in the offline world, too,” wrote Renee DiResta for Wired magazine. “The recommendation engines we engage with are broken in ways that have grave consequences: amplified conspiracy theories, gamified news, nonsense infiltrating mainstream discourse, misinformed voters. Recommendation engines have become The Great Polarizer.”

Should a company like Google bear responsibility for ensuring that Up Next protects its users from videos that spread misinformation or are potentially harmful? Apparently, the technology giant believes it should.

In January, just after Buzzfeed released its piece, YouTube announced changes intended to improve its recommendations. “We’ll begin reducing recommendations of borderline content and content that could misinform users in harmful ways—such as videos promoting a phony miracle cure for a serious illness, claiming the Earth is flat, or making blatantly false claims about historic events like 9/11,” YouTube staff wrote.
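YouTube has not published how this works internally, but mechanically a policy of “reducing recommendations” for a class of content could look like the hedged sketch below: a hypothetical classifier scores each candidate, and the ranker demotes, rather than removes, items flagged as likely borderline.

```python
# A hedged sketch of demoting borderline content in ranking. The classifier
# and the penalty are assumptions for illustration, not YouTube's design.

from dataclasses import dataclass

@dataclass
class Candidate:
    video_id: str
    engagement_score: float        # e.g., predicted watch time
    borderline_probability: float  # output of a hypothetical policy classifier

def rank_with_demotion(candidates: list[Candidate], k: int = 3,
                       penalty: float = 0.8) -> list[Candidate]:
    """Rank by engagement, discounted by how likely each item is borderline."""
    def score(c: Candidate) -> float:
        return c.engagement_score * (1.0 - penalty * c.borderline_probability)
    return sorted(candidates, key=score, reverse=True)[:k]
```

The design choice worth noticing is that demotion trades engagement for safety inside the same objective, which is exactly the kind of business-model tension raised in the questions below.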

So, if Google is taking this stand, should this give us even more pause as we consider what’s right for our own products?

This is a decision that product professionals—and their companies—must make on their own. The case of Google and YouTube is unique; after all, such a popular and widespread product draws more intense scrutiny from the media and the public.

We may not face the same kind of scrutiny with our products, and we may not have journalists openly experimenting with and testing our algorithms to explore the potential effect on users. Still, it’s worth considering how our products are designed and asking some important questions:

  • Are all features, as designed today, improving users’ lives?
  • Is it possible for any features to detract from users’ quality of life?
  • Do the metrics being tracked take users’ quality of life into account in any way?
  • Is the company willing to change a product to promote a user’s well-being, even if it may affect the overall business model?

The case of YouTube and Up Next illustrates how software products can work perfectly as designed and still lead to unintended consequences. It’s often said that customer empathy is a necessary trait for a product manager, perhaps even the most important one. If we truly believe that, we should at least consider the role we play in the consequences of our products.
