This article on ethical AI comes from a recent Data Chat featuring Aishwarya Srinivasan, a Data Scientist in the Google Cloud AI Services team.
When we think of who should be on a data team, most people imagine engineers, data scientists and analysts.
But I imagine that in the near future, the table won't be packed only with people who have technical training.
In fact, my recent research has centered on the types of professionals who should be on the data team. In this article, I want to share all the roles needed to ensure long-term success for AI and emerging technologies.
This is a huge challenge because we have seen AI and technology being built without considering diversity and inclusion, and without understanding the product's impact on users (adolescents, for example). That's why I believe there have to be people with different backgrounds working in a tech company and building ethical AI applications.
One part of the solution is model explainability. People need to see that the model is fair. These are more technical aspects that require data scientists.
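To make that concrete, here is a minimal sketch, not Aishwarya's or Google's actual tooling, of two checks a data scientist might run: ranking feature importances with scikit-learn's permutation_importance, and comparing positive-prediction rates across groups as a rough demographic-parity check. The model, test data, and sensitive-attribute series are hypothetical placeholders.

```python
# A minimal sketch, not production code. Assumes `model` is a fitted scikit-learn
# binary classifier with 0/1 labels, `X_test` and `y_test` are a held-out pandas
# DataFrame/Series, and `groups` is a hypothetical pandas Series of a sensitive
# attribute (e.g. self-reported gender) aligned with X_test's index.

import pandas as pd
from sklearn.inspection import permutation_importance

def explain_features(model, X_test, y_test):
    """Rank features by how much randomly shuffling each one degrades model performance."""
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    return pd.Series(result.importances_mean, index=X_test.columns).sort_values(ascending=False)

def positive_rate_by_group(model, X_test, groups):
    """Rough demographic-parity check: share of positive predictions per group."""
    preds = pd.Series(model.predict(X_test), index=X_test.index)
    return preds.groupby(groups).mean()

# Usage, once a trained classifier and test data are in hand:
# print(explain_features(model, X_test, y_test))
# print(positive_rate_by_group(model, X_test, groups))
```

Checks like these don't prove a model is fair, but they make its behavior visible enough for the non-technical roles described below to ask informed questions.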
But there are more roles you could have on a team.
Data Privacy and Data Security Officers
These are people who are specifically focused on understanding how to protect the data from any kind of breaches.
Their job is to make sure that the company is adhering to data governance rules and policies.
Cybersecurity Officers
We have seen machine learning models being attacked or compromised. We have also seen the decisions coming out of those models being used for unethical purposes.
That’s why it’s important that cybersecurity officers are also involved in this pipeline of building AI models.
Social Science Researcher or User Experience Researcher
A social science researcher or a user experience researcher doesn't need to know how the AI models work. Their job is to ask the question, “Why is this particular application being built?”
Let’s say, for example, we are building some kind of facial recognition application. They would be there to make sure that the data used to train it is diverse.
This was one of the major issues that arose when one organization’s facial recognition model was unable to correctly identify women who were not wearing makeup.
Technical experts might not be trained to look for these problems. That’s why we need user experience researchers or social science researchers who understand the impact of these applications and who understand how to incorporate diversity.
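As a rough illustration of the training-data audit described above, here is a minimal sketch assuming the training examples come with demographic metadata; the DataFrame and its "gender" and "skin_tone" columns are hypothetical placeholders, not a standard schema.

```python
# A minimal sketch of auditing a training set's demographic balance before training.
# Assumes `train_df` is a pandas DataFrame of example metadata with hypothetical
# columns such as "gender" and "skin_tone"; the names are illustrative only.

import pandas as pd

def representation_report(train_df, columns=("gender", "skin_tone")):
    """Show the share of training examples in each demographic group, so gaps
    (e.g. very few images of women, or of darker skin tones) are visible up front."""
    return {col: train_df[col].value_counts(normalize=True).round(3) for col in columns}

# Usage:
# for col, shares in representation_report(train_df).items():
#     print(f"\n{col}:\n{shares}")
```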
AI Ethicists
Finally, the AI ethicist is an executive-level role. The person in this role is responsible for overseeing the whole effort and making sure that every piece of the puzzle works together.
The Case of the Self-Driving Car and Ethical AI
There are going to be side effects of these technologies on society.
A simple example is self-driving cars. Companies are building excellent technology and solutions. It becomes a challenge when they try to productionize and integrate these technologies into everyday life.
If a self-driving car gets into an accident, whose insurance is liable? Whom should we blame if there’s no driver? How do you make sure that liability is assigned correctly? Would the car manufacturer get into trouble, or the technology provider?
These are the questions we’re trying to answer as it relates to ethical AI. And that’s where we will face challenges when it comes to integrating technology into our society.
If we are going to be using this technology every single day, what policies need to change? What privacy initiatives should be implemented? What security needs to be designed? What ethics should be considered?
Are you looking to elevate your data team? You can with our new course, Business-Driven Data Analysis.
Improve your data team’s approach to analysis and stakeholder communication, and empower them to drive business outcomes through critical insights.
Author
Aishwarya is working as a Data Scientist on the Google Cloud AI Services team to build machine learning solutions for customer use cases, leveraging core Google products including TensorFlow, DataFlow, and AI Platform. Aishwarya previously worked as an AI & ML Innovation Leader at IBM Data & AI, where she worked cross-functionally with the product, data science, and sales teams.