How can product managers leverage AI for human-centric product design? Expert Cal Al-Dhubaib talked about this important topic and shared some tips for product professionals in a recent podcast.
AI is no longer on the sidelines of product development. It’s embedded in how we build, scale, and deliver value. In 2024, U.S. private AI investment surged to $109.1 billion, and the number of FDA-approved AI-enabled medical devices reached 950, up from just six a decade ago. These shifts aren’t theoretical. They’re reshaping expectations for how modern product professionals operate.
But while AI’s technical capabilities are advancing quickly, product managers face a different challenge: integrating AI into products in a way that feels useful, ethical, and trustworthy to real users.
This is the heart of human-centric product design, not replacing humans with automation, but designing AI features that support and empower them. As Cal Al-Dhubaib, a leading data scientist and entrepreneur who launched Pandata, puts it, “There’s this misperception that it’s AI or human. One of our core beliefs is, it’s a mixture of both.”
Keep reading to learn about this topic as recently discussed on our podcast or listen to the discussion here.
AI Is a Tool. So Design It Like One
For product managers, the most valuable AI isn’t the most advanced. It’s the most usable. That means building tools that fit naturally into workflows, support decision-making, and allow for human oversight when needed. Organizations shouldn’t treat AI as a trend; they should include it where it actually adds value, not just to check a box.
Cal frames this shift through an umbrella term he identifies as “generative experiences.” These are interfaces that allow users to interact with AI intuitively, without forcing them into unnatural or rigid processes. What does this look like in everyday life?
“Imagine your inbox with a summary bar along the side telling you the action items requested of you in each email. You can then ask, ‘Who is that individual?’ or ‘Remind me what their role is,’ or dig for information in a more intuitive way.”
In this context, AI hasn’t been tacked onto a product; the value it could add to the user experience was considered first. It feels natural, requires no context switching, and represents a more human-centered way to bring AI into a product.
In other words, the best AI doesn’t lead. It assists.
Use AI Where Product Management Structure Already Exists
A fundamental principle Cal shares is deceptively simple: Start with what humans are already doing well.
“When humans can’t agree, the model can’t converge,” he explains. That’s because AI relies on consistent patterns and labeled examples. If a task is ambiguous or disputed among human experts, an AI model won’t find stable ground to learn from.
That’s why one of the first questions Cal asks when advising teams is, “What are humans doing today?” And if the response is, “Well, we couldn’t, but AI can,” that’s a major red flag.
“There are very few situations where that actually makes sense,” he explains. Product professionals need to be skeptical of AI solutions designed to fill gaps where no solid human process exists. In those cases, the problem is often too ill-defined for a model to solve effectively.
Strong candidates for AI include:
- Routing customer support tickets by urgency or topic
- Suggesting knowledge base articles during chat interactions
- Tagging, classifying, or summarizing structured documents
Red flags for product managers:
- “We don’t do this yet. AI will handle it.”
- “Our team doesn’t agree, so maybe a model will sort it out.”
- “It’s too complex for humans, so let’s automate it.”
If the product team hasn’t built or trusted a repeatable human process, AI won’t be able to either.
Integrate AI Seamlessly into Product Workflows
AI should be designed to intervene at the right moment, with the right level of confidence, and in a way that’s actionable. One of Cal’s examples from healthcare contrasts two models: one with high accuracy but delayed availability, and another with lower accuracy but available at the point of decision.
The team didn’t dismiss the lower-accuracy model. Instead, they mapped where the model was most likely to be right and used it to trigger selective alerts: where confidence was low, it stayed silent; where confidence was high, it raised an alert. This helped clinicians better focus their efforts and weigh treatment paths more appropriately. It also prevented “alert fatigue.”
“We’re not trying to make AI the decision-maker. We’re flagging points where humans may want to re-look.” This approach treats AI as an assistant to, not a replacement for, human discernment, insight, and decision-making.
Product management takeaways:
- Design for workflow integration, not feature isolation
- Set thresholds for model confidence before triggering actions
- Include override options wherever possible
- Avoid alert fatigue by suppressing low-confidence triggers
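The confidence-threshold pattern described above can be sketched as a simple routing gate. This is a minimal, illustrative example; the threshold values and the `Prediction` type are assumptions for the sketch, not details from the podcast. In practice, thresholds would come from validating the model against historical outcomes.

```python
from dataclasses import dataclass

# Illustrative thresholds -- real values should be set by validating
# the model's confidence scores against real-world outcomes.
ALERT_THRESHOLD = 0.85   # above this, surface an alert to the user
REVIEW_THRESHOLD = 0.60  # between the two, queue silently for optional review


@dataclass
class Prediction:
    label: str
    confidence: float


def route_prediction(pred: Prediction) -> str:
    """Decide how a model output enters the workflow.

    High confidence -> alert; middling -> silent review queue;
    low -> suppressed entirely to avoid alert fatigue.
    """
    if pred.confidence >= ALERT_THRESHOLD:
        return "alert"
    if pred.confidence >= REVIEW_THRESHOLD:
        return "review_queue"
    return "suppress"
```

The key design choice is that low-confidence outputs are suppressed rather than shown with a caveat: every alert a user sees should be worth their attention, which is what keeps trust in the co-pilot intact.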
When AI is framed as a co-pilot, not a controller, users tend to trust it more and use it more.
Build Safeguards and Metrics into AI-Driven Products
AI will get things wrong. Google admits that its AI Overviews can produce “odd and inaccurate” results, and one study found that ChatGPT answered roughly 52% of programming questions incorrectly.
Cal agrees that AI will be wrong some of the time and reframes the conversation: “It isn’t about, can we get AI to work on this problem? It’s can we get it to be useful in the context of this problem?”
And further, the fact that AI gets things wrong is not a problem if you’re prepared for it. It’s only a problem if you aren’t.
“Have a good answer to ‘What happens when it’s wrong?’” Cal advises. Product managers need to plan for AI’s edge cases just like they plan for UX gaps or system outages.
Build these into your product design:
- KPIs for real-world impact, not just technical accuracy
- Logging of AI-driven decisions and overrides
- Manual review options for ambiguous or sensitive outputs
- A mechanism for users to flag problems or feedback
- Scheduled audits of model performance across user segments
However, Cal warns against relying solely on lab-tested metrics. “Accuracy is stats. But impact is what matters.” A model that’s 90 percent accurate on test data but causes alert fatigue or user distrust in practice? That’s not a win.
Train Your Product Team for AI Literacy
Even if your product team doesn’t build models, you still need to understand how they behave and when to challenge them.
“An AI model that hasn’t been audited is not a good AI model. Period.” Cal makes this point with conviction. Product managers don’t need to become data scientists, but they do need to lead responsibly.
Basic AI literacy for product teams should include:
- Knowing how to interpret confidence scores and thresholds
- Understanding where training data bias or drift might emerge
- Asking vendors for model documentation (e.g., model cards)
- Identifying areas where AI decisions require human-in-the-loop controls
- Ensuring executive stakeholders understand AI’s limitations
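One concrete literacy skill from the list above, interpreting confidence scores, comes down to a calibration check: does a stated 90% confidence actually correspond to being right about 90% of the time? The sketch below is one simple way a team might compute this; the bucket scheme and input shape are illustrative assumptions.

```python
from collections import defaultdict


def calibration_report(predictions):
    """Compare stated confidence with observed accuracy per bucket.

    `predictions` is a list of (confidence, was_correct) pairs; the
    0.1-wide bucket edges are an illustrative choice. If the model is
    well calibrated, observed accuracy in each bucket should roughly
    match the bucket's confidence level.
    """
    buckets = defaultdict(list)
    for confidence, was_correct in predictions:
        # Map e.g. 0.87 -> the 0.8 bucket; cap at 0.9 so 1.0 isn't alone.
        bucket = min(int(confidence * 10) / 10, 0.9)
        buckets[bucket].append(was_correct)
    return {
        b: sum(hits) / len(hits)  # observed accuracy in this bucket
        for b, hits in sorted(buckets.items())
    }
```

A product manager doesn’t need to build this chart themselves, but knowing to ask a vendor for it (alongside model cards) is part of the basic literacy Cal describes.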
This isn’t just technical hygiene. It’s a market requirement. Users expect transparency. Regulators increasingly demand it. And your reputation depends on it.
Make AI Boring (and That’s a Good Thing)
The most trustworthy AI doesn’t feel magical. It feels predictable.
“Make it boring. Treat AI like every other function you manage — measure value, manage risk, optimize outcomes.”
When product managers stop overhyping AI and start operationalizing it, real value emerges. That means embedding AI in backend workflows, internal tools, and content systems, where it amplifies human capacity without demanding constant attention.
Your ultimate goal:
- Products that solve user problems more efficiently
- AI seamlessly integrated in ways that enhance and assist
- Teams that trust the AI to help them, not override them
- Workflows that get smarter without getting harder
Final Thought for Product Professionals
Human-centric product design with AI doesn’t mean shying away from automation. It means being intentional, choosing to build systems that support users, clarify decisions, and evolve safely.
For product managers and product professionals, the path forward isn’t about chasing the next model release. It’s about designing the right product experience and making sure AI earns its place inside it.
Author
The Pragmatic Editorial Team comprises a diverse team of writers, researchers, and subject matter experts. We are trained to share Pragmatic Institute’s insights and useful information to guide product, data, and design professionals on their career development journeys. Pragmatic Institute is the global leader in Product, Data, and Design training and certification programs for working professionals. Since 1993, we’ve issued over 250,000 product management and product marketing certifications to professionals at companies around the globe. For questions or inquiries, please contact [email protected].