
About our guest

Dr. Meeta Yadav Vouk is the Vice President of Product Management, AI and Analytics at Teradata, where she leads the development of trusted AI solutions for mission-critical workloads across regulated industries.

With a distinguished background spanning academia, chip design, and product leadership at IBM, she has pioneered innovative approaches to AI implementation and infrastructure scaling. Her expertise includes building AI systems that help global organizations deploy automation and generative AI with control, speed, and accountability.

In this episode, Meeta speaks about how enterprises are building automation strategies that can scale responsibly across highly regulated environments. Meeta shares how Teradata’s AI Factory and Enterprise Vector Store are helping customers deploy generative AI and automation in ways that support growth, reduce operational risk, and meet compliance requirements across different markets.

Episode transcript

[00:00:00] Dr. Meeta Yadav Vouk: I think you’ve probably seen in the news, you know, an agent went rogue with the agentic AI framework and went and deleted certain things. Right? And AI is hallucinating; it does use hate and profanity. I had a very interesting experience with my 16 year old. His cell phone wasn’t working, so he called the cell phone provider. He’d just got a new SIM with a new provider. He called them up, and, of course, he ran into AI. And AI was just verifying a couple of things with him. And then suddenly, when AI couldn’t really get past what he was asking for, AI said to my son, “there’s no need to call me Mister Fox.” And my son was surprised, and so was I. I was driving, so I was sitting next to him, and it said, “I’m gonna pass you to a human agent.” You know? It was fascinating because my son didn’t realize what had happened, but he got scolded by AI, and that was the first time that I had encountered it, you know, and I did write a piece on Mister Fox versus the human agent. You know? Both of them were absolutely unhelpful.

[00:01:05] Jason Hemingway: Welcome to In other words, the podcast from Phrase, where we speak with business leaders shaping how organizations grow, adapt, and connect with customers around the world. I’m your host, Jason Hemingway, CMO at Phrase. And today, we’re joined by Meeta Vouk, who is the Vice President of AI and Analytics at Teradata. Dr. Vouk is a distinguished leader in enterprise AI, and she’s spent years at the forefront of AI innovation, leading development of trusted solutions for mission critical workloads across some highly regulated industries. She’s also built and scaled AI infrastructure that helps global organizations deploy automation and generative AI with control, speed, and accountability. So welcome, Meeta. Nice to see you.

[00:01:47] Dr. Meeta Yadav Vouk:  Thank you.

[00:01:48] Jason Hemingway: And also with us is Simone Bohnenberger, who’s our Chief Product Officer at Phrase, and she brings valuable insight into how AI and automation are shaping global content operations, customer engagement, and multilingual delivery to the conversation today. So, Simone, hi to you as well.

[00:02:04] Simone Bohnenberger:  Hello, Jason.

[00:02:05] Jason Hemingway: Hi. So, look, let’s start, Meeta. Let’s go to you. You’ve had a fascinating career journey, all the way from academia in your early career to chip design and product leadership at IBM, and now at Teradata. We always like to talk about what brought you to the position you’re in now. So if you can just give us a little history of how you got where you are today. It’s always fascinating to hear.

[00:02:31] Dr. Meeta Yadav Vouk: My mind always gets me into trouble, Jason, when I can’t understand things. So I was super intrigued by how chips work, and I had a wonderful professor who taught a class on ASIC design, and that’s kinda what got me into chip design. And then, you know, when I was doing my doctorate, my PhD was funded by the Department of Defense. And there the problem that they were seeing was they were getting inundated by these cyberattacks, and simple rule based systems couldn’t keep up. So they wanted a behavior modeling system. That was just an early foray for me into AI, behavior based systems, but doing it in hardware. You know, this was, gosh, about seventeen years ago. I date myself, but it’s really just solving that problem of how do you look at massive amounts of internet traffic, and how do you model that behavior? How do you detect outliers? How do you figure that out? And how do you do it at scale? And then how do you create a hardware fabric that’s, you know, flexible enough that it can accommodate all of that? That was my PhD dissertation. So, it kind of got me more into that. So, you know, then I continued with that research, started teaching, and taught for about twelve years. I’m still an adjunct professor. And then, you know, the next big leap was when I was running IBM Research in Singapore. We had a fascinating project we were doing with the government of Singapore. We were working with the Maritime Port Authority. We were talking about how do you detect illegal bunkering activities on the waters of Singapore? Singapore is one of the largest ports in the world, so we actually deployed AI models in production where we were looking at locations of the bunkering vehicles as well as ships, and we were able to predict with great accuracy where illegal activities were happening. And the port inspectors for the Singapore Maritime Authority were using our app to just say, here, go investigate this. But, you know, the early stages of AI were more around human-in-the-loop. Right? And now as we move towards agentic AI and more automation, you know, we are starting to see how do we then really start to solve for fraud where you have tough SLAs and everything has to be automated? There’s just not enough time for a human to intervene. Right? And then, you know, there was the mainframe business as well, which processes 73% of the world’s financial transactions. Right? So when you’re looking at trillions of dollars flowing and you have 2 milliseconds to figure out whether a transaction is fraudulent or not, you know, I’ve really just seen that journey of human-in-the-loop to automating more things and, really, that’s how I got into AI, and that’s kind of what my journey has been so far.

[00:05:17] Jason Hemingway: And then a little bit about what you’re doing today at Teradata. Tell us a little bit about the day-to-day role that you’re in. And I guess that viewpoint of the shift from human-in-the-loop to much more automation is kind of how things have evolved from your perspective as your career has gone on. And now at Teradata, what are you firmly, squarely looking at today?

[00:05:42] Dr. Meeta Yadav Vouk: Yeah. So, you know, Teradata, again, is a 47-year-old company, and that means, you know, every time you swipe your credit card or a debit card, buy a train ticket or a plane ticket, or buy something from an online retailer, you’re touching us. So when we think about AI for systems like that, we think about the workloads that are running on the system. Right? They tend to be mission critical for those businesses. We tend to work in extremely regulated industries. Teradata is a hybrid company, so we’re on-prem as well as on the cloud. And with this idea of sovereign AI emerging so strongly, right, we have to look at really meeting our customers where they are. You know, we don’t believe in AI for AI’s sake. There’s just so much hype in the market. So, we have to be really grounded in truth on what it takes to productize AI for regulated industries and mission critical workloads, right? I’d like to say we are in the big kids’ playground for AI. So, you know, there’s that stack that we need, and that stack tends to be enterprise grade, you know, the security, the resiliency, the auditability, the trust that you need from your AI system, and it has to be reproducible. So we’re looking at, you know, the Enterprise Vector Store. Right? When you’re really bringing unstructured documents and videos and audio together with your structured data, what does that unlock? At Teradata, we’re looking at our integrated ModelOps story. What does it look like to manage your RAG pipelines? You know? When you build your agentic AI platform, what does it look like to have guardrails around your agents? How do you prevent them from going rogue? What does it take to manage all of your agents? Right? And, you know, really, we scale in performance. Right? AI needs to scale. So what does performant AI with trust and security on-prem and in the hybrid cloud look like? That’s the mission we are trying to solve for our customers. So, you know, it keeps us fairly busy.

[00:07:38] Jason Hemingway: Yeah. No. I mean, I can guarantee there’s one other person on this call that will say it keeps them busy. So why don’t I open it out to you, Simone, and just say, I know the answer to this question already, but I’m gonna ask it. You know? Does that kind of resonate with what you’re seeing across global markets and global teams using AI at the moment?

[00:07:58] Simone Bohnenberger: Yeah. It does very much. I like it, Meeta, how you said it keeps you fairly busy. I want to see what it looks like when you’re very busy. So there are a couple of points that really resonated with me. By the way, I really like the story about the Port of Singapore and how you first started with detecting or predicting illegal activity. And I think this predictive capability is where the crux of the problem lies. Right? Global enterprises are increasingly deploying AI on the front line, to generate revenue or in customer interactions, be that in financial services chatbots, insurance underwriting, or multilingual content and branding, and that carries some risks with it. And what enterprises love is predictability, scalability, trust, explainability. In a way, you could say that’s anathema to the actual mathematics that underpin AI. Right? Because it’s probabilistic. And so it’s like trying to get a, what do you say in English, a square peg into a round hole. How do you manage that and make it scalable? I feel that every single day in terms of customer requirements. And an interesting component you’ve mentioned as well is the real time aspect. So, one of the typical guardrails has been human-in-the-loop, and there are different design principles, maybe we can talk about them later, for where you can weave in human oversight. But we are moving into a world that’s increasingly real time, or where the data is so big that you can’t have human review at scale. So how do you build the trust and the evaluation frameworks so that you minimize human review and you minimize your risk? So, I think that really resonated. The other point maybe that I want to make moves it a little bit to the macro level: we’ve seen a lot of investment in AI. I think Goldman is calling it the trillion dollar wave, yet we also need to see the ROI of that. Right? And I think what’s very interesting is that if you just look at Nvidia, for example, or AMD or other companies, their revenue alone last year was 100 billion dollars. That’s just in chips. And that’s not even including IBM and everyone else, Meta. Right? This is just the tip of the iceberg. And we need to see some ROI, yet at the same time, we can see how companies just about manage to get 2 out of 10 AI POCs into production. Right? There are lots of complications around that. So, yeah, I think there are lots of issues, and I’m very busy, not just fairly busy.

[00:10:16] Jason Hemingway: Well, love it. No. I think we’ll try and come back a little bit later to that ROI question. But let’s go back into that kind of scalability side of things. You know, Meeta, you’ve worked lots with global organizations scaling AI across, you know, what effectively is global operations. What does that look like when you’ve got AI and automation together, you know, to maintain trust, consistency, and agility across perhaps different countries? How does that kind of hang together for you?

[00:10:47] Dr. Meeta Yadav Vouk: Yeah. It’s tricky. Right? It’s not easy, Jason, because with AI, we are also seeing emerging regulations. And navigating those regulations from a global perspective as well as a regional perspective, you know, I mean, we thought data was hard. Now add AI regulations on top of it. It makes for a very complex equation. We always tend to design for global needs. Right? Our customers are global. So when you talk about sovereign AI, we also have to look at, oh, who has liquid cooling? You know? What are the power requirements in growth countries, for example? What can they power? If they need something in a remote location, what does that look like? So we look at things from that lens. So we always design for global requirements, but now, especially with sovereign AI, we will need to get deeper into what those regional requirements look like. We’ve always done that, you know, and it’s not just, are your tools available in the languages, and, you know, do you have accessibility and everything else in those countries, but now we’re looking at, am I honoring those requirements? And then not just honoring those requirements: if I work for a trusted and ethical company, am I doing right by it? Like, am I honoring my own guardrails on how we want to do it? So it’s complex. It varies on a case by case basis, especially when we are helping our customers productize something as well. From a product perspective, you know, we tend to look at it from a holistic scale. But things just vary, and I think that’s where automation will help in the next few years. So for example, if you look at fraud, solving for fraud in Europe looks very different from solving for financial fraud in the United States. The fraud patterns look different, you know, the kinds of fraud that they get look different. The language the fraud is happening in looks different. So you can’t, you know, you have to analyze. You have to do analysis on those. You know, what kind of models will you be allowed to use under the EU AI Act in Europe versus in America? You know, so we constantly have our eye on that. There’s just bespoke work that needs to happen on AI implementation and scaling at the moment. So, you know, I always say AI is going to generate a lot of jobs because productizing AI is still people heavy. Right? Because you’ve got to train your system, and then you’ve got to teach your system how to understand the European fraud that’s happening, right? And even in Europe, it varies by country. Fraud in Sweden looks very different from fraud in England, right? So what does that look like? So there are parts I think we can generalize, but there are parts that are going to be bespoke. And that bespokeness is going to really benefit from automation in the future as well because, you know, deploying an AI system is not a one-and-done approach. You’ve got to constantly keep at it.

[00:13:37] Jason Hemingway: Yeah. And I think, Simone, you do concur with that, again, it’s, you know, a good glorious agreement there. And because I know you’ll agree with that statement, perhaps to further that sort of thinking: where have you seen organizations run into trouble in maintaining that kind of focus, maybe whether it’s, you know, countrywide or across different regulatory systems? Where’s your experience of companies having to think about maintaining that balance between being agile, but also making sure that you’ve got all of the different regulatory data, all the different, you know, cultural nuance, all the differences between, you know, locations, essentially?

[00:14:15] Simone Bohnenberger: It’s a big challenge. I like Meeta’s example from financial services. I remember my time as a consultant in financial services, having to work with banks on KYC and enhanced KYC and sanctions lists, and they were different by country. So, I think there is the technical component, and then there’s the cultural component. Right? And I think I can talk a little bit more about the cultural component. From a technical perspective, I think the key is really to build that cultural flexibility into your ML pipeline from beginning to end. That sounds a little bit abstract. But what I mean is, for example, to adhere to privacy requirements like GDPR in Europe, you need to be clear what data you can use to actually train your models. Right? You need to understand what’s in your datasets. If you want to be ethical and if you want to reflect a different tone of voice, you need different types of datasets. So you need to optimize them differently, also with cultural nuance, all the way to how you deliver content to different audiences. Our background is very much in multilingual content generation, and that’s really where nuance matters. So you can make mistakes in terms of biased AI where chatbots become sexist, but you can also completely miss the tone of voice of a different audience. If you address an audience in Japan, for example, you need to talk to them very differently than to an audience in the US. I think where I see organizations get that right is when they take a step back and think about their global markets first and foremost and develop a strategy for each of their core markets. And then they weave the AI into that strategy, meaning, for example, if we want to address our audiences in Japan, we adapt our content to Japanese needs, and to do that, we provide the right data, input the right prompts, and also the right governance frameworks to deliver that. I think that’s really, in our world, a bit more of a country-by-country or local-by-local strategy that you implement. I think that’s the key to it, but that comes with complexity. And as Meeta said, to productize, it requires human expertise, a lot of human domain expertise.

[00:16:13] Jason Hemingway: Yeah. So let’s look into that a little bit more, Meeta. You know, you’ve got huge efficiency gains, huge scalability gains, but the more you automate, the further you take humans away from that decision making process. So, how do you ensure that automation supports all of those human-quality decisions, or whatever they might be, especially in, you know, diverse markets, where the benefit of having humans in a local market is that they know all the local nuances and all of those things that you need to know? So how do you balance that automation versus that human way of thinking, or that human oversight, let’s say? What do you see there?

[00:16:57] Dr. Meeta Yadav Vouk: It’s a multipart strategy to really address that. Right, Jason? When enterprises were productizing AI, I like to describe it as a storm or as a spiral, where we were doing use cases on the fringes. Right? Here’s a picture of a dress. Write a copy for it. And you had a human looking at that copy, right, checking that it wasn’t writing something egregious, and you constantly had a human looking at AI’s output, but it was making the human more efficient. Right? Nothing was getting past the human. Right? And now we are moving closer to the eye of the storm. Right? Where, like, I talked about the fraud use case or the money laundering use case, or, you know, loan processing, for example. Right? We are automating these processes, and we have AI making decisions. What makes the human more comfortable with all this automation making the decisions? Right? With agents not going rogue. Right? Making sure they’re reading the right data. And I think explainable and trusted AI is the answer in my mind. Right? AI tends to be very arrogant and very sure of its answers. Right? The first thing I want AI to say is, hey, “I’m an AI making a decision, and here’s why I made this decision, here are the factors that influenced it, and here’s how much they influenced it. And I’m X percent sure of my answer. I’m 73% sure of this loan that I’ve just denied, and here are the factors I considered. And, yes, I didn’t look at the race or the gender of this person. I really looked at other factors”. And I think as we get into these more complex, deep reasoning models and large language models, you know, explainability is super hard. Right? I want it to explain that chain of thought. You know, this was the data that I used to come up with this inference, and this is how I did it. So I think it’s about making the human comfortable with the decision that AI made. So for example, let’s just say, you know, you detect somebody who’s laundered money. Can I go back and really look at the auditability of why that decision was made? Right? Did AI make a mistake, or did that guy really deserve to be flagged as a launderer? So these analysts are looking at that, doing an analysis of how that decision was made. So even though you’re not looking at every single fraudulent transaction, if something comes up, you’re looking at it, but you have the entire reasoning behind it, you know, all the factors that went into it, and you have that explainability on why it happened. I think that would be reassuring to people. I think we just have to have these guardrails. You know? So, you know, I really appreciate the EU AI Act because it’s really trying to force us to think like that and make sure that the AI we productize honors that system. So I really think the way to do that is just trusted, explainable AI.

[00:19:45] Jason Hemingway: Yeah. It’s interesting, isn’t it? That thing about trust. Oftentimes, trust is a very human expression, isn’t it? Because why do I trust somebody I’ve worked with for years to do the right thing in a certain job? It’s because, you know, you’ve got experience. You can talk to them. They can explain what they’re doing. And usually, there’s a record of, you know, how well they do it at the end. So if you can get your AI to sort of think in those human terms, I guess, or to explain itself in those human terms that we can all relate to, then people have more confidence in it. And to go to you, Simone, a little bit, that kind of idea of transparency, trust, and control is very much, I know, at the forefront of the way we think about it at Phrase. Is there a risk that the human layer is removed too early in the AI workflows that you look at? You know? Or is it, again, maybe not about too early, but about when it’s right to trust what’s going on? What would you think about that?

[00:20:46] Simone Bohnenberger: When it’s right to trust, I like that, and how to determine that. It’s really tricky. As Meeta mentioned, what do we mean by trust? Right? So first and foremost, I think we always use the human comparison, but we humans are awfully fallible, especially when it comes to repetitive tasks. So, when it comes to humans who are repeatedly looking through documents, we know that we’re about 70% right. That means, you know, 30% of the time, we’re not. Yet our expectation, for us to trust the AI, is that it’s 100% right. So, I think we need to work a little bit on our understanding and our expectations of trust when we talk about AI. At the moment, it seems it needs to be better than a human in terms of performance for it to be trustworthy. Then how do we open up this black box of AI, especially in the world of LLMs that are so multilayered? They have billions or trillions of parameters. There’s a chain of thought, but sometimes that happens after the fact, and you retrace it, and it does not always mirror exactly how a model has made decisions. It’s really tricky. So, I think there are different ways to feed trust into, I would say, the end-to-end process of this AI journey that you take. It starts with your in-house dataset. So, one question around trust is often ethical bias. So one opportunity is, if you don’t have access to the underlying model and you or your customers use an LLM and you fine tune that with your own in-house data, at least understand your in-house data very well. Right? Like, analyze it, go through your datasets, analyze them for bias, duplication, all sorts of noise that you don’t want, all the way to giving users the capability to use evaluation frameworks where they are in charge of the thresholds, for example. And they can go back and investigate and see: with this threshold, this is what the machine decided. I’m okay with this, maybe not, maybe I need a higher threshold. So think about it more in terms of risk and reward, and then as much transparency, literally at every step along this chain of AI: you know, ingesting data, making decisions, and producing outputs.

[00:22:47] Jason Hemingway: Yeah. Love it. So let’s take our foot off the gas a moment, and we call this our mid show moment, Meeta. And we’re gonna ask you a question, you know, we call it the inbox confession. And really, it’s just, we talked a lot about automation. We talked a lot about humans. Is there one thing that you wish you could automate in your work day to make your life easier?

[00:23:11] Dr. Meeta Yadav Vouk:  Oh, you made it tricky, Jason, by saying work day. I was prepared for my –

[00:23:16] Jason Hemingway:  In your day, let’s just say, day is fine.

[00:23:17] Dr. Meeta Yadav Vouk:  Yeah. And I read this brilliant piece, quite a few months ago, and I was very jealous of the person who came up with this idea. And she said, “I don’t want AI to take my creative work away. I want AI to do my laundry, and I want AI to do my dishes.”

[00:23:35] Jason Hemingway:  I love that. Yeah. 

[00:23:36] Dr. Meeta Yadav Vouk: So I think I would definitely fall in that camp. But if I look at my work day, or I just look at life, right, I think one of the things that has happened is that fraud has accelerated. The number of spam phone calls or spam text messages that I get on my phone is unreal. So, the one thing that I would love from automation is for my phone to understand my behavior and for my phone to say, “oh, yeah, Meeta, flag these.” It doesn’t do that right now, to say these are tricky. Right? And of course, my device is listening to me all the time, but it’s not bright enough to say, oh, no, she just wanted to do this as a one-off, that’s not her natural trend. Her natural trend is, you know, this is what’s of interest to Meeta. My interest is bucketed into one or two words that I’ve said during the day, and it ignores all of the context that it’s had in the past. So, you know, my life would be better if it could do laundry or dishes, but the next best thing would be, you know, removing the clutter from my life. Don’t add noise to it. And at the moment, I feel AI is adding more noise rather than removing it. I also feel, and I don’t know if you guys have run into this, I want richly thought out content to be written, no matter what the platform is, right? AI generated content, I would like it labeled, to say 20% written by AI. It’s just that content, you know. I mean, I would rather spend my time on that. So I think, you know, just personally, that would be really, really helpful for me to automate.

[00:25:15] Jason Hemingway: I think that, I mean, you gave a great answer to the initial question, but I think you’ve triggered a few other thoughts in my brain as well, particularly on content. I mean, you know, show me a Chief Marketing Officer who doesn’t think about content every day of the week. But I think there is that kind of, and I’m trying to avoid saying the word authentic, but I like the idea of content that’s useful, interesting, or helps me. And sometimes you’re awash, and you particularly see it on channels like LinkedIn, you’re awash with AI generated content now. So you’re really, really searching for the things that are actually not trying to sell me something, but actually making my life a little bit better, sort of what you said in your inbox confession. And I think there is a danger with AI that it just gives people the opportunity to pump out noise, pump out things that just aren’t very well thought through. So I think there is a danger. And, also, I think people are getting much better at spotting it, like you just said. They’re much better at saying, I don’t feel like that’s right. And I don’t just mean things like em dashes, you know, which everybody’s now kind of aware of. I was looking at something today that was basically saying that it’s much easier to spot people who’ve never posted on LinkedIn for years and years suddenly pumping out content on a daily basis. You’re like, well, hang on, they’ve not just changed their whole work day to start writing from scratch. So, I think that’s an interesting point. And I guess that takes us into, you talked about it very briefly, and we’ll talk about the ROI in a minute, but, and you said creativity, so let’s expand on that a little bit more, Meeta and Simone: are there some things that you both think should just stay human, let’s keep that a thing that we hold dear? Or indeed, are the gloves off? Just say, well, you know, maybe, maybe not.

[00:27:00] Dr. Meeta Yadav Vouk: I think in all this conversation about AI and in all of this mad rush, you know, we’ve labeled it in different terms, I think we’ve lost the understanding of what it means to be human in this age. Right? And this could be just very personal to me, Jason. Right? A piece of art generated by a human being who spends time thinking, who has brought their life experiences, their humanity into it, versus a piece of art regurgitated by AI because it’s digested all of those creators’ work, is going to resonate with me differently. When it hangs, you know, in my house on a wall, I’m gonna relate to it differently. You know, that rug that you see is more than 100 years old. It’s eaten by dogs, you know, but it tells a story. It was a song that was woven into a rug. Right? And now the pieces of art that you have, I think that storytelling and that humanity in some of the fields that we work in, I personally would like it to be more human because, you know, it feels different to me, it hits different to me. I also believe that AI is never going to be sentient. Right? AI is never going to have that human experience, and it cannot reflect, you know, what humanity does and what living a life is, and I think I would be very biased towards that. I wouldn’t want to read a novel written by AI; I’d like to read a novel written by a human being.

[00:28:32] Jason Hemingway: I might read it once just to see what it’s like, but yeah, I know what you mean. So, Simone, where do you draw the line? Where’s your personal line in terms of where you think it stops and starts, or maybe where it should or shouldn’t go?

[00:28:43] Simone Bohnenberger: I think a minute ago, I was tempted to go back to my book shelf and only open books that were printed before 2022, but it’s not quite like that. I think it’s a bit more nuanced for me, especially when it comes to art. You know, when Duchamp put this toilet out there, we were all outraged. It’s like, how can this be art? And now it’s art. If you go to the Tate Modern’s 25th anniversary, they have a replica standing there. It really depends on the human who operates the machine. Right? It’s a tool. It’s a means towards an end. That’s AI. And it depends really on the person who operates it and gives it the inputs, because for AI itself, we sometimes misleadingly use the term reasoning. It doesn’t reason from a human point of view, right? It detects patterns. It takes previous patterns and applies them to new things. It will never tell us why the weather forecast is the weather forecast. It will just tell us, based on what it’s learned, what happens next. And as such, it’s a tool, and I think when it comes to being creative, it’s just different: instead of using a pencil, you now use AI to generate really interesting images. Sometimes on LinkedIn, you see what people do with AI and image generation. I think that’s fascinating. And there is a sense of creativity there, to say, look, show me Hong Kong in the 50s, but the background is blurry, there’s some neon lights, and there’s a woman crossing the street in a trench coat. I mean, that is genuine human intellectual input, and that will not go away. In terms of maybe more practical business applications, I think the first step for AI is really to help us automate and not be super creative. Right? Help us automate existing workflows so that they become more scalable and allow us to maybe reach individuals better, but also in a more nuanced way, because one kind of clutter in my inbox I don’t like is these really generic emails about something I would never buy or aimed at someone I’m not. And that’s why I go delete, delete, delete. One trigger for me is, is this tailored to what I need right now? Like, I live in London. It’s summer. There’s some kids’ sandals on sale. I’m exaggerating, but this, personalization, is where AI can come in. But to get there, you need to provide it with a lot of context. Right? So I think the beauty of the LLMs is really to unlock that more personalized approach to humans. And it’s not replacing creativity at all.

[00:30:55] Jason Hemingway: Yeah. No, I mean, that’s true. And it’s personalization. It can even be real time personalization on almost every interaction. But I think, again, you’ve got to bring in those things like trust. You’ve got to bring in those ideas, or is it authentic? And when you think about personalization to a great degree, I’m gonna say it’s personal to the individual, but what I mean is that person is the only person who can really tell you whether it was correct or not, because you’re trying to aggregate, and you’d want to make sure that whatever you’re sending, that communication, is relevant to that individual and doesn’t go into that box of, have you just started spamming me with loads of things that are irrelevant? Or have you listened to 2 things and sent me 28 things that are totally irrelevant to what I’m actually doing? So I think that’s interesting. And, Meeta, if we can get back to you on this, there’s a whole idea of it being probabilistic versus deterministic, which is a complicated way of me saying it hallucinates. You know? For me, as a layman, it’s like you’ve got an element that it might make stuff up. So is that, you know, still a serious concern for enterprise leaders? And not just technically, but for reputation. Where do you sort of see that? And what design principles do you think matter for making it appropriate? That’s a massively long question. Sorry.

[00:32:14] Dr. Meeta Yadav Vouk: I think you’ve probably seen in the news, you know, an agent went rogue with their agentic AI framework and went and deleted certain things. Right? And AI is hallucinating; it does use hate and profanity. I had a very interesting experience with my 16 year old. His cell phone wasn’t working, so he called the cell phone provider. He’d just got a new SIM with a new provider. He called them up, and, of course, he ran into AI, and AI was just verifying a couple of things with him. And then suddenly, when AI couldn’t really get past what he was asking for, AI said to my son, there’s no need to call me Mister Fox. And my son was surprised, and so was I. I was driving, so I was sitting next to him, and it said, “I’m gonna pass you to a human agent”, you know? It was fascinating because my son didn’t realize what had happened, but he got scolded by AI. You know? And that was the first time that I had encountered it. You know? And I did write a piece on Mister Fox versus the human agent. You know? Both of them were absolutely unhelpful. But the point being, I think we’ve got to put guardrails around AI. Right? I mean, I look at it from a slightly different lens because I have young children, but even when we look at enterprises, right, these guardrails, making sure there’s no hate and profanity. Like Simone said, just data, and governance around the data that you’re using. Right? And I think we’ve got to go deep into explainability as well. So, you know, we’ve got to prevent hallucination. We’ve got to have guardrails not just around our LLMs, but now around our agents as well, to prevent these agents from going into the data they don’t need to go into. So really, as agentic AI evolves more and we start to orchestrate it more, and I know guardrails are evolving around that too, it’s just about a deeply integrated system that’s keeping AI in check a little bit. Right? You know? Of course, I could talk to my 16 year old, but I wouldn’t want to be scolded by AI. You know? He was gentle, and it just said, don’t call me Mister Fox, and he hadn’t even said anything. So it was fascinating. But I do think that keeping AI in check, making sure that the things that are happening with character.ai are not happening, I think we just, as a society, need to do that. As enterprises, we need to do that. But I also think, Jason, it’s just where we are in our journey of AI. I’m starting to see more things come around security for AI, trust for AI, explainable AI. We’ll make more strides in that space, I feel, over the next couple of years.

[00:34:55] Jason Hemingway:  Yeah. And I think you’re right. I think it’s, I wouldn’t call it early days, but it’s formative days. Let’s call it that. I think it’s probably a better way of thinking about it. And, Simone, what about your sort of thinking on design principles? I can almost guarantee it’s very aligned, but any extra thoughts?

[00:35:09] Simone Bohnenberger: Yeah. Maybe the key tension that we really need to resolve at the moment is that AI is being used as a tool to automate and scale and do much more with less. And the question then is, how can you ensure that that much more that we produce, be it content or decisions, can still be governed by humans? And traditional governance frameworks don’t work, because the traditional workflow you would see in financial services or in multilingual content would be: AI generates something, and at the very end, a human still reviews every output. That’s not scalable. Right? Because a human still checks the homework every single time. So how can you move into a world where humans only come in at moments of high or low confidence that something is right or wrong? And that means humans shouldn’t be at the very end of the process. It’s also not enough if they’re at the very beginning, if they just prompt something and then walk away, because then there’s model drift and there are other issues. So really thinking through, use case by use case, what kind of human interaction you would need along the actual chain of decision making is really important. So an example: in the UK, we have Tractable. It’s a really cool insurance product where you can submit pictures for simple claims after a car crash, and the pictures get analyzed quite quickly, and you get your claim paid pretty quickly. For low risk and small damages, that happens automatically, but the system, for example, escalates midway if it’s a more complex issue, if it’s a more complex damage, right, and immediately goes to a human. That’s midway in the process. So really designing the process around the use case, and then also redefining what we think about risk and how we manage it. So, if we want that automation, we can’t have it 100%, because the AI may make mistakes. What are the use cases children are exposed to? There we have a very low risk appetite. I don’t want my 9 year old to be exposed to bad language or problems or fraud or any of that. So the threshold there is quite high. What about other use cases where we’re at lower risk? For example, where your brand isn’t at risk as much because you’re rolling out a campaign in a country that’s maybe not as revenue generating as another, right, or where the competitive pressure isn’t so high. So I think redefining risk and thinking through where the human sits along the way of this end-to-end AI process is a good way of thinking about it.

[00:37:26] Jason Hemingway: Yeah. And, Meeta, from your perspective, if you can look at that, you know, Simone was basically talking about a time horizon, thinking about redefining your risk appetite over time. But if you look at the next three to five years, what’s in your mind as the biggest opportunities for AI to support, you know, business growth? Let’s call it that.

[00:37:50] Dr. Meeta Yadav Vouk: Yeah. And I think, you know, we all know that AI in production for mission critical workloads is still low, Jason, for various reasons. Right? But that’s ramping up. So in the next five years, I think we will see a lot of evolution, in my opinion, around deep reasoning models. Right? I think we will see agentic AI become more mature, where it has more human confidence. Right? And I do think this idea of data governance and AI governance is going to be just absolutely, you know, everywhere. And I think we’ll just become more confident in AI being productized and running. We’ll get it to a point where it’s just there: it’s detecting fraud in our banks, it’s detecting money laundering activities, it’s automating all of those tasks, and we have pretty high confidence in it. That’s where it is. AI will be successful when we stop talking about AI.

[00:38:45] Jason Hemingway: What a great line that is to get us to our final part of the podcast. So, yeah, when we stop talking about it. Mind you, I wouldn’t have a job in podcasting if we did stop talking about it. But, anyway, look, we’re gonna just do a couple of quick fire questions. The first one I’ll ask you both, and I guess, Meeta, you go first and then Simone. So if you could describe AI transformation in one word, what would it be?

[00:39:15] Dr. Meeta Yadav Vouk:  Oh, gosh, um, unpredictable change is what I would say at the moment. That’s where we are.

[00:39:19] Jason Hemingway: Okay. Uh, that was two words, but I’ll give you it as a kind of linked word. And Simone, to you?

[00:39:27] Simone Bohnenberger:  I would call it risk management. Also two words, maybe risk.

[00:39:31] Jason Hemingway: Yeah. Okay. Good. Yeah. Risk, I guess the connotation with risk could be negative, but risk management is good. I love it. I love it. And then, Meeta, just a couple of last questions. We ask all our guests these last ones. So if you could describe global growth in one word, what would it be?

[00:39:45] Dr. Meeta Yadav Vouk:  Oh, gosh. Um, and I thought about this. Um, unprecedented.

[00:39:52] Jason Hemingway:  Okay. That’s good. That’s good. Well, you can explain your answer if you wish.

[00:39:58] Dr. Meeta Yadav Vouk: No. I just think, I do think it’s transformative. This technology is more transformative than other things we’ve seen before. So, it’s gonna be very interesting to see how it plays out.

[00:40:10] Jason Hemingway:  Good. And then the last question before we wrap up. Who do you think we should speak to next?

[00:40:16] Dr. Meeta Yadav Vouk: Oh, there are so many amazing people doing this. I think it’d be good to hear from somebody who’s at the intersection of quantum and AI. What does it take to build quantum ready AI systems?

[00:40:38] Jason Hemingway: Wow. Okay. Yeah. I mean, I’m gonna have to brush up on my quantum for that podcast episode, but we will go and find someone. On that rather pointed note that’s made me think, okay. Thank you both for a very interesting discussion on AI, the future of AI, the current status of AI, and the guardrails and the risks and the rewards. I think it was an absolutely brilliant discussion. I just wanna say thank you for coming to another episode of In other words, and that’s it. So thank you, Meeta.

[00:41:11] Dr. Meeta Yadav Vouk:  Thank you for having me, Jason.

[00:41:12] Jason Hemingway:  Yeah. That’s great. Thank you very much, and speak again soon. Thanks both.

[00:41:16] Dr. Meeta Yadav Vouk: Thanks, bye!

[00:41:17] Simone Bohnenberger:  Thank you.

[00:41:18] Jason Hemingway: Well, that’s it for another episode of In other words, the podcast from Phrase. I’ve been your host, Jason Hemingway, joined by Simone Bohnenberger. And a big thank you to Dr. Meeta Vouk for sharing her thoughts on AI and how enterprise leaders can deploy it at scale. If you enjoyed today’s episode, please be sure to subscribe to In other words on Spotify, Apple Podcasts, or your favorite podcast platform. You can also find more conversations on leadership, growth, and what it really takes to scale globally at phrase.com. Thanks again for listening, and see you next time.
