Skin in The Game with Debbie Go
Skin in The Game invites you into the world of business and personal transformation, where host Debbie Go uncovers how successful leaders navigate their most challenging decisions and put everything on the line. Finally, a business podcast that moves beyond surface-level advice to deliver actionable insights through real stories of risk, resilience, and bold decisions that paid off.
Whether you're scaling a startup, advancing your career, or planning your next venture, these conversations equip you with battle-tested wisdom and practical strategies for success.
Join Debbie Go to learn how today's most successful leaders turn challenges into opportunities – and get ready to put your own skin in the game.
Connect with Debbie:
🌐 Website
▶️ YouTube
LLM Trust Trap: Amit Shivpuja on Why Natural Language Bypasses Critical Thinking | Skin in the Game
There is a paradox emerging in the age of Generative AI.
Business leaders are trained to be skeptical of raw data. They question spreadsheets. They interrogate dashboards. But the moment that same data is wrapped in a conversational sentence by an LLM? The skepticism vanishes.
In the latest episode of Skin in the Game, I sat down with Amit Shivpuja (Director of Data Product & AI Enablement @ Walmart) to discuss why our biggest AI challenge isn't the technology—it's the misplaced trust that comes with a friendly chat interface.
As Amit put it:
"People tend to put a lot more trust in the system than they would normally do. They're a little skeptical if you give them a table of numbers, but if you write them a sentence saying the answer is X, they tend to trust it. It's on us to show them how we came up with the answer, not just give them a number."
We also explored a fascinating case from his VR days where survey data (what people said) completely contradicted usage data (what people did). The lesson? Whether you are dealing with human feedback or AI output, validation is non-negotiable.
If your organization is racing to adopt AI tools, this conversation is a vital reality check. We dig into:
- The "Data Sandwich" framework for building trustworthy systems
- How to build trust through radical transparency
- Why accountability must always remain with the human
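Amit's point about showing your work, not just your answer, can be sketched in a few lines of code. The sketch below is purely illustrative (the `Answer` class, field names, and metric are our own invention, not anything from Walmart's systems): instead of an LLM-style bare sentence ("the answer is X"), the function returns the number together with the rows it came from, a plain-language description of the method, and an explicit validation flag a human in the loop can inspect.

```python
from dataclasses import dataclass

@dataclass
class Answer:
    """A result plus the evidence behind it, so a skeptical reader can verify it."""
    value: float
    source_rows: list   # the underlying records the value was derived from
    method: str         # plain-language description of how it was computed
    validated: bool = False

def answer_with_provenance(rows, metric="weekly_active_users"):
    """Compute a metric and package it with its provenance.

    Rather than replying "the answer is 215", return the number together
    with the rows and method used, plus an explicit sanity check the
    human in the loop can inspect before trusting it.
    """
    values = [r[metric] for r in rows]
    ans = Answer(
        value=sum(values),
        source_rows=rows,
        method=f"sum of {metric!r} over {len(rows)} rows",
    )
    # The validation gate is visible to the consumer, not buried in the system.
    ans.validated = bool(rows) and all(v >= 0 for v in values)
    return ans

rows = [{"weekly_active_users": 120}, {"weekly_active_users": 95}]
ans = answer_with_provenance(rows)
print(ans.value, "|", ans.method, "|", ans.validated)
```

The point is the shape of the return value: the consumer gets the "how" alongside the "what", which is the transparency Amit argues keeps conversational interfaces honest.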
🔗 Links & Resources:
- linkedin.com/in/amitshivpuja
- datacompass.substack.com
📺 Watch on YouTube: youtu.be/o6n5IP_Z2DY
What makes YOU trust an AI output? Is it the authority of the tone, or the transparency of the source? Drop your thoughts below. 👇
#AI #DataGovernance #DigitalTransformation #Leadership #GenerativeAI #Walmart #DataIntegrity #StanfordLEAD #stanfordgsb #SkinInTheGamewithDebbieGo
Enjoying the podcast? Leave us a quick review!
Follow Us:
- 🎧 Podcast website
- ▶️ YouTube
Debbie:Data isn't just information, it is the pulse of the economy. We are talking about billions of dollars in play and decisions that send ripples across the whole industry. My guest today is Amit Shivpuja. He has shaped data strategy across diverse sectors, from fintech and VR to his current role driving transformation at Walmart. But what makes Amit fascinating isn't just the sheer size of his mandate. It's his philosophy on the human element behind the numbers. In this episode, we're moving past the buzzwords. We'll discuss how large organizations innovate without losing discipline, the reality of human-AI symbiosis, and why the biggest hurdle to digital transformation isn't your technology, it's your mindset. Amit, it's a real pleasure to have you. Welcome to Skin in the Game.
Amit:Thank you so much for having me, and I welcome the opportunity.
Debbie:Amit, let's start with your career. You've built data strategy in fintech, VR, e-commerce, and now at the massive scale of Walmart. How do you transfer lessons learned across such diverse sectors? And what core principles stay constant regardless of the industry?
Amit:I think I'll start with the core principles first. Basically, if you think about any data decision, the goal is to get to an insight. When I say insight, it's the ability to use the data for an action or a decision, right? Something changes or something is done with the data or the pattern that's recognized, but in turn it should also generate more data. That closed loop is what an insight is. And in order to get to an insight, independent of industry, independent of the problem that's being solved, there are a couple of core components. This is part of my sandwich methodology, my sandwich framework. You need the data, which is like the meat or the patty of the sandwich. You need technology and algorithms and all that, which is like the vegetables you put on top. You need people, someone who understands the domain, can translate it, and has the skills to leverage all the relevant technology. This is like your dressings, your mayonnaise or ketchup. And finally, you need some structure around it, which is the governance and the processes. Now, all of this is sandwiched between two slices of bread, and this is where it becomes specific to a particular industry. The first slice of bread is your domain. Is it healthcare, is it finance, et cetera? That sets the boundary of the sandwich, right? What can you go outside of, inside of? And the second slice is regulation, security, and compliance. These components become the core of the data sandwich for getting to an insight. Now, this is independent of industry. If you take a different industry, all that changes is the bread; those dimensions become the criteria. You take those constraints, apply them to the domain or the problem you're trying to solve with that same sandwich framework, and you're able to get to an insight.
Debbie:That's a great analogy. In FMCG or even in healthcare, when we talk about insight, we always talk about the consumer tension: what are the pain points you're trying to solve?
Amit:I mean, we have multiple consumers, right? We have internal consumers, external consumers, dormant and environmental consumers. All of them can be put into this ecosystem to get to whatever insight or perspective we want, and then we interpret it accordingly.
Debbie:Makes sense. Now, in large-scale retail merchandising, I know there are billions on the line each quarter. Tell me about a moment when data pointed one way but your instincts pulled another. What were the real stakes, and how did it play out?
Amit:So one of the things that often happens is the question of do you lead with your gut or do you lead with what the data tells you, right? This is what many folks run into, and I don't think there's a single answer. Both have a role to play, because studies have shown the value of instinct, but instinct also involves biases, right? That's where perspectives come from. If I want to think of an example, I'd actually go back to my time in virtual reality. At the time we were building a virtual reality store. Think of it like the Apple App Store or Google Play Store, but for virtual reality content: you have a headset, you go to this place, and you get content. People were telling us it's very popular, you have lots of usage, all of that. There was a general excitement given the newness of the technology. But when we went into the data, we found that there were definitely these spikes of excitement, especially when a new title launched. I get to climb Mount Everest this week, and next week I get to go inside the pyramids, that kind of experience. But then it would fade away. They would try the experience, maybe share it with the people they knew, and then it would fade away. So the nuance was that the excitement was not a sustained thing. We were getting feedback and surveys telling us, oh, this is awesome. But the data actually showed us that this is a very transient thing. You had to keep doing stuff to keep the user engaged and coming back to you. And we are talking in the hundreds of thousands here, not small numbers. So it immediately becomes a very interesting problem, the whole chicken-and-egg problem, right?
Do we get more users and then more content, or do we get more content and then more users? And what we found was the best way was to get more content, because then you had enough of a runway for people to consume, and then you could incentivize or attract more users to come and consume the content.
Debbie:Using data to really mine the insights and validate what you're observing in the market. That's a great example, Amit. Very few organizations operate at a truly global scale. What's your guiding framework for driving innovation without losing that operational discipline?
Amit:We can look at global scale in a few ways. One is the resources you have in order to solve your problem. The second is the audience or the ecosystem you are part of. And the third is the customer, the audience for whom you're doing what you're doing, whether you're offering retail, VR, or fintech services. When we define processes, if we do it in a way that understands the nuances and capabilities of each of these layers, we're able to build those into our processes, our systems, and our approaches. For example, if we have a global team, you're able to build in the culture, the strengths and weaknesses, and the time differences while you're delivering effectively. If you understand that this is a regulated industry and you have to go through certain checks and approvals, you're able to plan that into your process and steps. And simultaneously, if you understand the audience, that they have certain preferences and certain likes, you build that into your plans as well. So that's been the approach: at each stage, being sensitive to what the nuances are, what works, what doesn't work, and what the constraints are at that level. Building that into your process and refining it over time is what will get us to that end state.
Debbie:Speaking of global teams, I know you're scaling teams across the US and India. What's a tough skin-in-the-game call you made that maybe hurt short term but paid off in the long run?
Amit:So we have this in data all the time, right? Because when we think about data strategically, it's not a question of just giving a quick table or a column right now. Because this data might evolve, we need to make sure it's documented, we need to make sure it's clean, and all of that. More often than not, we are having the reverse conversation, which is to say, hey, give us a little more time, and we'll make sure the data set is trusted, reliable, and performant. It's actually an interesting dynamic, because the business wants to move quickly. For example, at the beginning of 2025, when we had tariffs, right? When tariffs come along, we have to figure out, as Walmart, are we offering the best prices? Now, I cannot go to my business and say, I need three months to make sure you have quality data. But I simultaneously can't just give them some data today and then take my hands off it. So it's a balancing act, and the approach we like to use is to figure out the minimum we need right now to enable the business to effectively make decisions, and in parallel work on all the governance and data maturity, so that after the initial prototyping, once everybody's got a sense of it, we're able to back it with robust, trusted data, and the production systems run off that robust, trusted data. It's a balancing act where you're meeting the initial analytical, exploratory need, being transparent about what challenges or limitations you have around the data, and then coming back with production-grade, trusted, quality-defined data to make the deliverable happen.
Debbie:Because a lot of businesses want data analysis now. And sometimes the data needs to be cleaned to be analyzed, and that takes time.
Amit:Just to add to that: if you actually do this process the right way, what will happen over time is that the number of data sources you have to enable should start diminishing. It should plateau. Because the trusted KPIs are already available, already governed, already in the trusted source, right? So you're investing time only on what is new, not on what you already have.
Debbie:That's smart. That's a real-world scenario that I'm sure all our listeners will find very helpful. You've used the kitchen metaphor to explain data strategy. But when you look at a major retail enterprise as the kitchen, what is the one ingredient you think is the most undervalued right now?
Amit:In today's time, there are actually two parts to it. I call it data and AI literacy. And what I mean by that is that because of the pervasive ease of access in companies to both data and AI, thanks to ChatGPT and LLMs and all of that, we as organizations need to invest in educating our colleagues and stakeholders on what the data and the AI can and cannot do. Because even if one doesn't know how an LLM works, one should know what the limitations of an LLM are. It's a probabilistic model. That means there has to be a human in the loop, and there has to be rigorous testing and validation. Just because it's an easy-to-use interface, you don't just blindly trust it. Simultaneously: what are the trusted locations you can get the data from? Where can you get the definitions? How do you know whether a data set is trusted or not? That level of understanding should be there as well, so that when somebody's using an LLM, they're putting the right data in and you don't have the garbage-in, garbage-squared-out situation. So it's that combination of literacy, which we have to enable as data leaders, that I would say organizations should focus on at this time. And it's actually interesting: you can do it as a combination of automated systems, like a self-serve kind of thing if you do your cataloging and documentation the right way, and have people to back them up as well. In fact, you can use LLMs to back that up too. So that's the current thing organizations should focus on.
Debbie:You talked about lifting the data curtain. Many leaders are comfortable delegating data as a technical specialty. How do you move them from a place of delegation to one of genuine ownership and, as you said, strategic literacy?
Amit:My perspective on this is that you get people involved as early as possible. And let me clarify what that means. On the engineering side, you want the people who are building and enabling the data to have as much business and domain knowledge as possible, so that they understand: what is this data going to be used for? What's its purpose? What does it mean for the organization? Simultaneously, you want the business people to understand what steps it takes to get the data in place. You have to define the scripts, you have to store it, you have to run checks on it, it has to be synced, there is orchestration, all of that. Now, they need not know how it is done, but they should at least understand that there are technical processes and monitoring effort that go into this. And when you have this built into your process, where, for example, the minute a product manager comes up with a product spec, there's a data section and an analytics section where you call out, hey, I want this data, I want this KPI, this is my success criteria, then the technical person building that table is like, hey, business asked for that KPI. Where are the columns that will give them that KPI? And this is the context for it. And when the project plan comes together, it includes enough technical explanation that the business knows, okay, engineering or the data modelers or the data architects are going to do this, this, and this, and this is why. That sharing of information is what will actually help peel the curtain away and eventually build that trust. So tomorrow, if there's some unforeseen situation and the tech teams come back to the business and say, hey, you won't get data for a day, and this is why,
there's already that built trust and transparency, and the business is like, okay, I trust you guys, go do what you can. I will handle the customer. That's what it leads to.
Debbie:So it's more about each function sharing information about how the processes work.
Amit:And as early as possible. Agreed, more upfront.
Debbie:You build gen AI tools to make data more accessible. Beyond efficiency gains, what has surprised you most about teams interacting with these AI assistants?
Amit:One, I think, is adoption. One of the challenges data people have had, especially with traditional SQL code or dashboards, is that we're making certain assumptions about the end user's comfort or familiarity with absorbing or using them. There are still stakeholders who don't like even dashboards. So the fact that the natural language interface is there automatically drops a certain set of barriers, because the person is like, hey, I'm asking the question in a form that I am comfortable with. That's one of them. But the other one, which is interesting in both a positive and a negative way, is trust. Because of the natural language nature of LLMs and the interaction with agents and all of that, people tend to put a lot more trust in the system than they normally would. They're a little skeptical if you give them a table of numbers, but if you write them a sentence saying the answer is X, they tend to trust it. Now, that's a good thing and a bad thing. It's a bad thing in the sense that you need to validate whether that makes sense or not. So it's on us to show them how we came up with the answer, not just give them a number. Say, hey, we went through this, and if you want, take a look at it. So that kind of misplaced trust, if I can say that, is another surprise that really stands out today.
Debbie:With AI making suggestions and giving us recommendations that may or may not be true, where does ultimate accountability for a final decision now sit in an organization?
Amit:To me, it's always the human beings, because I don't see a scenario today where a person can say, the algorithm told me. Because the next question is going to be, why did the algorithm say what it said? So I think it's still the human at the end of the day, right? We have to make the call. And think about it this way as well: let's say an algorithm makes a prediction. It says, do this. It's a human being that has to do it. We are not yet at the point where it's AI end to end. We're getting there, but as of today, at least, a human being has to do something with it or make a decision with it. That automatically puts the onus on us.
Debbie:So we don't let AI run our lives and we still make a judgment.
Amit:Once we build a little more trust, I'd like us to be able to test and stress-test AI systems a little more before we let AI influence the bigger decisions.
Debbie:Well, a lot of my friends would trust AI to plan their travel agenda, their accommodation. There are a lot of things we're trying to let AI decide for us. We'll see how far it goes.
Amit:See, the thing with the use case you mentioned is that it has a start and a finish. But let's say you told the AI, be my travel agent even during the trip. I'll give you access to my location, you keep giving me updates and all of that. That's a different use case, and I don't know how many people would trust it. If it has a start and a finish, it has a finality to it: hey, book me all the hotels on this trip. It goes through, it books all the hotels. But if you say, hey, book me the hotels, and then when I land there, book me a tour, and I'm tired today, and all that? I don't think we're there yet.
Debbie:I'm curious, as customer expectations evolve, how do you see the retail experience reshaping over the next five years, especially when it comes to blending digital and physical or meeting the demands for sustainability?
Amit:It's actually interesting, because there's another dimension to this: who the customer is is also changing over the next five to ten years. It'll be less of the millennials and the boomers and more of Gen Alpha, Gen Beta, and the future generations. So I think we're at an interesting transition point where we're moving from physical to online and mobile. I'm really curious to see how many Gen Alphas will walk into stores. They might just be happy looking up a video and saying, this is my size, give me the thing. Or simulating it using AI to see how it fits and looks, and making a decision from that. So that's an interesting transition all of us are curious to watch: at which point will people say, no, I need it physical, and at which point is it purely digital? The other thing is that when I walk into a store and, let's say, I want to buy ketchup, I might see the brand I want, but I might also notice another brand I didn't expect, because it's nearby, or it just caught my eye, or there's a sale sticker there. Now, you can do that online as well, but there you have limited real estate, right? It's a different kind of limitation. So it's a question of each format's strengths and weaknesses. But given the generational change, what I do see happening is that if we really succeed down the route of these agentic AI systems, the way they're being marketed to us, I'd be really curious to see how the recommendations come your way. I imagine it'll be more of, hey, I want to cook pasta today, and that's all you say, and automatically it knows that Amit likes this brand of pasta and this brand of pasta sauce and these vegetables. And I'm curious to see whether it will recommend something, like, hey, this new pasta sauce has come out, try it. Maybe I'd be more willing to accept that in such a context, because of the trust that's built up. So what I'm basically getting at is that the transaction switches from I want to buy pasta, I want to buy pasta sauce, I want to buy vegetables, I want to buy the seasoning, to I want to make pasta. It moves to something more intentional and more high level, and then the stuff below happens. It's a level of abstraction, if I can say that. So how much abstraction we're willing to accept will really become the question.
Debbie:In your view of the human-AI symbiosis, AI brings speed, consistency, even scale, while humans bring context and judgment. In a high-pressure retail setting, how do you get your teams comfortable letting AI do the heavy lifting so that they can focus on asking the right questions and owning the final call?
Amit:It's similar to what we discussed earlier: having all the stakeholders involved very early in the process. Being very clear on what use cases we want AI to solve or contribute to, taking them along the journey on what approach we're going to take to get to those end products, and having them evaluate, test, stress-test, and become comfortable with the output. Ideally, what we want at the end is for our teams to be the ones who market that AI solution more than we do. They should love it so much that they're like, hey, give it to me, I want to use it. The product mindset is really powerful here. A lot of the design thinking work we did as part of our LEAD program becomes extremely critical here as well: if we do it the right way, with the right stakeholders, the right communication, the right transparency, they become the supporters and the champions in that journey. To the extent that, like this podcast, they have skin in the game. They're like, hey, here's the next version we need to work on, and I think we should do this next. That collaborative ecosystem, I think, is the way forward.
Debbie:Every company now claims to be AI-driven. From your vantage point, what are the clear signs that a company is really delivering real outcomes versus just riding the hype?
Amit:So I think we need to define what AI really means, right? Companies have been using AI for forecasting, for time series analysis, for recommendations, for years now. So whether a company is truly an AI company depends on the use case we're talking about. But if we define AI in the scope of agents and generative LLMs and all of that, I have to see it in action to believe it. I see people in many forums advertising, oh, this company was able to replace its customer service completely with an agent. Sure, it might have worked for that company, but I want to see what happens when things change: when the company's products change, when it grows bigger, or when a new regulation comes along. That adaptation is really what I'd like to see to judge how effective it was. Because there have been instances where the initial version works brilliantly, and you change one thing and the second version just collapses. So I think it's very early stages, honestly. Even in large organizations, we do have successful use cases we can talk about, and we are making a difference, but we're also still figuring out the best practices and the best ways of using these things. So when companies say they're using AI effectively, I'm a little skeptical. I'm like, hey, show me the proof; the proof is in the pudding. In my personal opinion, there's a lot of hype.
Debbie:Let's get a bit more personal, Amit. Your career hasn't been that linear. Was there a specific crossroads where you chose the less obvious path because it promised a greater depth of impact for you?
Amit:One of the things a mentor told me very early on was to focus on learning as much as I could. And one of the things I did was join a lot of startups where I had the opportunity to build teams, or to build an initiative and grow it. Even now, I shy away from, hey, we have this team, come and run it for us. That's not my thing. Part of it is the challenge, part of it is the discomfort, part of it is the excitement: oh, cool, I get to solve something new. And what that has done is give me exposure to the constraints and an understanding of the ecosystem. I would say that's what's led to almost all my career decisions. I mean, it looks like I've gone from building to selling to marketing to product management to consulting and all of that, but in each of those I've leveraged what I learned previously and built or learned something new. And that's still the criterion I use today. I chose the Walmart role because it was an opportunity to be part of building this team from scratch; the organization didn't have these data enablement and AI enablement functions before. So I still apply that same methodology: there's a challenge, I get to learn, I'm stretched, I still bring something to the table, and I get to have fun. And I've seen that the resources follow. I think I'm pretty well compensated, and the recognition and all of that happens. So my thing is, and I can't speak for everybody, but if you don't have the constraints, have fun and try something different in the careers you're picking. You'll eventually figure out a pathway that works for you, and it'll be uniquely you.
Debbie:I see that learning mindset in you, Amit, because we're both Stanford LEAD alums. Right now you're sharing your Data and AI Compass frameworks publicly. So, how has the act of writing and mentoring sharpened your own leadership internally?
Amit:It's actually a very good litmus test. I can come up with as many frameworks as I want, but if I can't communicate them, and someone else can't understand, absorb, and use them, then they're not effective. They're just theoretical. So writing the book was an opportunity for me to put my perspective out there and hopefully help people make certain decisions more easily. It's the same with the blog posts I write: I'm always aspiring, independent of technical level, to provide a perspective that helps a leader or a decision maker or any person understand a concept and then do something with it. The reason I call my book the Compass is that I'm not a navigation system, I'm a navigation aid. I'm hoping to give enough guidance that you'll figure out the pathway and won't get lost.
Debbie:Transformation requires smart risk taking. How do you create that safe space for your team, and even for yourself, to experiment and fail without losing that crucial momentum?
Amit:For me, the first thing is getting as much buy-in as you can up front, very transparently: hey, we don't know all the answers, we need to figure this out, we want to experiment. The second is being as clearly defined as you can about what experiments you want to run. Okay, first we're going to compare A and B, then we're going to compare A and C, and you keep stakeholders in that loop as you come across things. The third thing, which I've learned from my mentors and my leaders, is that they own the repercussions as much as the person does. In fact, I tell my teams: if something goes wrong, I'll own the repercussions, you don't worry about it at that time. We'll then do a retrospective and figure out what we could have done differently, et cetera. We actually have to model that behavior so that people have that trust. And sometimes you also show them: hey, I made this mistake, this is the consequence, this is how I fixed it, and this is what I learned from it. When they see us as leaders modeling that behavior, they're also more open to doing it. And when you replicate that same support mechanism, that same transparency and trust in them, they're willing to try, to stretch, and to give it a shot. We do have to, as more experienced people and leaders, put some guardrails in place, because we don't have unlimited budgets or unlimited time, and you don't want people going down rabbit holes. But as long as you're transparent and clear about that, it works out. So it's a combination of all of that that makes for the safe space and the trust.
Debbie: That's a great culture that you're modeling, allowing your team to feel safe in taking risks and experimenting, because that's where you get true innovation. Now, to close, Amit, with a final Skin in the Game question: across everything you're building in your current role, your writing, your advisory work, what do you want most to be judged on 10 years from now? And what is the legacy you're intentionally building?
Amit: So this one I'm gonna tie to, I don't know if you did the Living a Life of Consequence course as part of the LEAD program.
Debbie: I missed that, but I was told it was really good.
Amit: Yeah. In that course, we write our own obituary, and if someone hasn't done that exercise, it's really worthwhile doing. When I did it, I settled on two things. One, I'm on a journey. And two, at the end of the day, I want to be remembered for having touched lives. Whether that means I had a chance to help someone navigate a year-long initiative and succeed at it, or I was able to give some advice at some stage that they got some benefit out of, or I had an opportunity to help them with resources, whatever that is. That ability to touch a life and make a difference is what I've come to settle on as my legacy. So I continue down my journey, and I hope I have the opportunity to touch as many lives as I can. Another thing that came from the LEAD program: one of my mentors there told me very early on, "Amit, if you can't give 10 minutes a week to help someone else, you need to re-evaluate your life." I've taken that to heart, and I try to give as much time as I can back to leaders, or to anyone who reaches out for mentorship or help. It's such a learning opportunity, and such a humbling one, to see the trust that people place in you. So I would say that is what I'd like my legacy to be: that I touched lives in some positive way.
Debbie: You're making a huge difference, Amit.

Amit: Thank you.
Debbie: Well, thank you for joining us today and being so generous with your insights and your journey.
Amit: Thank you again for the opportunity, and I look forward to more of these conversations.
Debbie: In an era where every company claims to be data-driven or AI-first, Amit gives us a vital reality check. AI brings the speed and scale, but it is the human who must bring the context and judgment. The real work, the real skin in the game, is having the courage to lift the data curtain and ensure accountability stays with the leader, not the algorithm. No matter your industry, the lesson holds: you can have the best ingredients in the kitchen, but it takes a skilled chef and the right recipe to create something exceptional. If this conversation changed how you think about data and leadership, please subscribe. And to help me bring you the voices you care about most, who should be our next guest? Let me know in the comments or drop me a message on LinkedIn. I'm Debbie Go. Thanks for listening.
Podcasts we love
Check out these other fine podcasts recommended by us, not an algorithm.
Think Fast Talk Smart: Communication Techniques
Matt Abrahams, Think Fast Talk Smart
Grit & Growth
Stanford Graduate School of Business