Skin in The Game with Debbie Go

Human Accountability vs. AI: Amit Shivpuja on the Risks of LLMs | Skin in the Game (Teaser)

Debbie Go Season 2 Episode 3

We’re wired to trust things that sound like us. It’s a psychological shortcut that helps us navigate the world, but in the realm of AI, it’s a major liability.

In this teaser from #SkinInTheGamewithDebbieGo, Walmart Data & AI Leader Amit Shivpuja explains why natural language interfaces are a double-edged sword. While these tools make AI more accessible than ever, they also create a dangerous illusion of certainty.

Amit highlights a fascinating shift in how we process information:
"People are a little skeptical if you give them a table of numbers. But if you write them a sentence saying the answer is X, they tend to trust it. Just because it's an easy-to-use interface doesn't mean you blindly trust it."

🎙️Catch the Full Episode
Don't miss the full conversation with Amit Shivpuja on Monday, March 2 to learn how to balance innovation with critical thinking.

#AI #GenerativeAI #DataScience #LLM #AIGovernance #WalmartData #DigitalTransformation #DataGovernance #HumanInTheLoop #TechLeadership #ResponsibleAI #SkinInTheGamewithDebbieGo

Enjoying the podcast? Leave us a quick review!

Support the show

Debbie:

You build gen AI tools to make data more accessible. Beyond efficiency gains, what has surprised you most about teams interacting with these AI assistants?

Amit:

One, I think, is adoption. One of the challenges data people have had, especially with traditional SQL code or dashboards, is that we are making certain assumptions about the end user's comfort or familiarity with absorbing or using them. There are still stakeholders who don't even like dashboards. So the fact that the natural language interface is there automatically drops a certain set of barriers, because the person is like, hey, I'm asking the question in a form that I am comfortable with. So that's one of them.

But the other one, which is interesting in both a positive and a negative way, is trust. Because of the natural language nature of LLMs and the interaction of agents and all of that, people tend to put a lot more trust in the system than they normally would. They're a little skeptical if you give them a table of numbers, but if you write them a sentence saying the answer is X, they tend to trust it.

Now, that's a good thing and a bad thing, because even if one doesn't know how an LLM works, one should know what the limitations of an LLM are. It's a probabilistic model. So that means there has to be a human in the loop. There has to be rigorous testing and validation. Just because it's an easy-to-use interface, you don't just blindly trust it.

Podcasts we love

Check out these other fine podcasts recommended by us, not an algorithm.

Think Fast Talk Smart: Communication Techniques
Matt Abrahams, Think Fast Talk Smart

Grit & Growth
Stanford Graduate School of Business

View From The Top
StanfordGSB