Tech Travels

EP18: AI Implementation and Governance: Insights from Swathi Young, Allwyn Corporation CTO

Steve Woodard Season 1 Episode 18


Discover the fascinating journey of AI technology from initial concept to real-world application with invaluable insights from Swathi Young, the CTO at Allwyn Corporation. Gain a deeper understanding of how AI transitions from individual use to organizational implementation and learn about the crucial importance of identifying clear use cases and business goals. Swathi shares compelling examples of how different sectors have successfully integrated AI, achieving quick wins and substantial returns on investment.

Navigate the complex landscape of AI governance and security, focusing on data stewardship, access control, and the critical issue of privacy. Hear about the challenges of proving machine learning model outcomes in high-stakes fields like fraud detection and criminal justice. We also delve into the potential of generative AI in cybersecurity, emphasizing the need for robust AI governance frameworks to ensure transparency, compliance, and responsible AI use.

Finally, explore the cutting-edge world of AI-driven wearables and video generation tools. From Meta's Ray-Ban AI glasses to health-monitoring rings and bracelets, we discuss their potential and the mixed reviews they've garnered. We address the challenges of mainstream adoption and the importance of user-centric design, as well as the ongoing debates around copyright in AI-generated content. Join us for a comprehensive look at the future of AI technology and its transformative potential.


Swathi Young LinkedIn
https://www.linkedin.com/in/swathiyoung/




Follow TechTravels on X and YouTube

YouTube Channel
https://www.youtube.com/@thetechtravels

Tech Travels Twitter
https://twitter.com/thetechtravel

Tech Travels
https://techtravels.buzzsprout.com/

Speaker 1:

Welcome back, Tech Travelers. Today we're diving into a topic that sits at the intersection of innovation and practical application: AI's path from prototype to production. And to take us on this journey today, I'm thrilled to have back on the show Swathi Young, CTO at Allwyn Corporation, who's also a renowned expert in artificial intelligence technology and leadership, to help shed light on this topic. Swathi, thank you so much for joining us again on the podcast. It's great to have you back on the show.

Speaker 2:

Steve, thank you so much. I appreciate the opportunity, and I always welcome the chance to dig deep into these very hot topics. I've had the good fortune of working on AI for quite some time, so I'll start off by helping our audience understand where we really are starting to see AI.

Speaker 1:

It seems like the rate of innovation and where it's going over the last couple of months has just been so fast, and I think it's important for our listeners to understand, from a technology perspective, what it really takes to go from a proof of concept to a fully operational, fully scalable production system. And I want to throw a number out here: there was an industry study that showed 75% of people are already using artificial intelligence at work, and 46% of them started using it less than six months ago, which is a huge number. So I would love to dive into your thoughts.

Speaker 1:

Okay, so say I'm a company thinking about putting in some sort of AI capability. Walk us through some of the things we need to understand about this type of transformational technology and where we should start.

Speaker 2:

Yeah, I think the first point I want to differentiate is that a lot of people have started using AI at work, but not necessarily at an organizational level, correct? Think of the days before Salesforce.com: every salesperson used some sort of software, or an Excel spreadsheet, to track their leads and prospects and convert them into sales. That's where organizations are right now: employees are using their own tools to maximize their productivity, mostly ChatGPT. My friend just told me that she uses Gemini at her work. So employees are using these tools from a consumer perspective. Now, from an enterprise application perspective, there are some success stories with large organizations who have moved generative AI in particular to production, but fewer success stories, I would say, for smaller companies who have successfully taken generative AI, whether that's an open-source option like Hugging Face or OpenAI's APIs, into their organization. We have not seen adoption at scale, so I just want to throw that caveat out there. But when you talk about AI, it's this ambiguous umbrella term, right? There is machine learning, which started decades back. Netflix built their recommendation engine more than a decade ago: based on your past viewing history of movies, it would recommend a new show for you. That's traditional machine learning, which also falls under the umbrella of AI. When Amazon recommends a product to you, that is also machine learning that falls under AI. But what has changed now is that the rate of change of large language models available to the public is really high. After OpenAI's ChatGPT, we saw Gemini, we saw Mistral, we saw Perplexity, and one of our favorite tools these days is Claude. All of these have taken AI and put it in the hands of consumers, where you can easily ask a question and get responses.
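To make the "traditional machine learning" recommendation engines mentioned here concrete, a toy user-similarity recommender is sketched below. The viewing data, titles, and the nearest-neighbor approach are invented for illustration; real systems like Netflix's are vastly more sophisticated.

```python
import math

# Users' past viewing history as ratings (invented sample data).
ratings = {
    "alice": {"Drama A": 5, "Drama B": 4, "SciFi A": 1},
    "bob":   {"Drama A": 4, "Drama B": 5, "SciFi A": 2, "Drama C": 5},
    "carol": {"SciFi A": 5, "SciFi B": 4, "Drama A": 1},
}

def cosine(u, v):
    """Cosine similarity between two users' rating vectors."""
    shared = set(u) & set(v)
    dot = sum(u[m] * v[m] for m in shared)
    norm = math.sqrt(sum(x * x for x in u.values())) * \
           math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

def recommend(user):
    """Suggest the unseen title best liked by the most similar other user."""
    others = sorted((cosine(ratings[user], ratings[o]), o)
                    for o in ratings if o != user)
    _, nearest = others[-1]  # highest similarity
    unseen = {m: r for m, r in ratings[nearest].items()
              if m not in ratings[user]}
    return max(unseen, key=unseen.get) if unseen else None

print(recommend("alice"))
```

Alice's tastes align with Bob's, so she gets Bob's favorite title she hasn't seen yet.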

Speaker 2:

Now, large organizations implementing at scale are still struggling, because first of all it's not easy to identify a good use case that will give you a quick win or a return-on-investment type of situation. So for large organizations it's still a challenge, and I would always start by thinking: what are the goals you're trying to achieve, and what are some problems you're trying to solve? If you don't know what problems large language models can solve, you can look at industry examples. For instance, say you are a public sector agency, because I was just telling you I came from the AWS public sector conference.

Speaker 2:

So if you're a public sector agency, there are so many use cases, whether you're the government of Australia or the US: easing congestion of traffic, of air traffic control, or of roadways. There are common use cases there. If you look at the IRS, maybe you can think about how to detect fraud in tax filings and things like that. You can do similar fraud detection for HHS, the Department of Health and Human Services, where Medicare and Medicaid claims data can also be used for fraud detection.

Speaker 2:

So there are very high-level use cases for every industry sector. For manufacturing, there is smart manufacturing, and inventory management and forecasting become better. So I would always begin with what problem you're trying to solve and how you can leverage AI. And if you don't have an idea of what problems you can solve with AI, then look at your industry sector; there are a lot of published use cases there. This is where I come in to help. I call myself a technology storyteller, because I can connect the dots, and my strength is understanding businesses and business workflows, and what the best tool in the tool set is, whether that's traditional AI, supervised learning, unsupervised learning, or a large language model. So that would be the first step.

Speaker 1:

It's interesting, you mentioned use cases: fraud detection, intelligent search, document processing. There are tons of data that a lot of entities are sitting on right now. So let's say, for example, I'm an organization that has found its ideal use case. We've identified a specific use case we want to target, and let's just say it's customer engagement, or sales and marketing analytics. From your perspective, how does one get going, eyes wide open, from getting things together to build a prototype, all the way through into production?

Speaker 2:

Yeah, there are some considerations for the prototype and for production which are similar. Number one: who is going to deliver the code and the algorithm? Do you have in-house experts, or do you have to bring in external entities, consultants, or contractors to do that work? You do an assessment of where you are; that's number one. Number two: if you already have a consulting company or a systems integrator working with your organization, maybe you extend their scope. And the third, interesting option that organizations most often might not consider is that you can collaborate with a university.

Speaker 2:

Now, I'm in Washington DC, and we are blessed with a lot of universities. We have American, Georgetown, my alma mater George Washington, George Mason, you name it. So you could collaborate with the universities to do your pilot. And that's an important step, because too often, the minute organizations think of AI, first of all there's a fear around it, and secondly they think, oh, we don't have a million dollars to invest in it. So maybe don't start with a million; start with these collaborations to make sure you understand what your pilot project is. And I would encourage doing a pilot rather than just a prototype or a proof of concept. As I was saying in the other conference session, too often AI projects go to the graveyard without seeing the light at the end of the tunnel. So we would rather start a pilot. The difference between a pilot and a prototype, in my mind, is that a prototype can be quick and dirty: oh, we thought this use case might work, and it worked. A pilot, instead, delivers actual value to your business.

Speaker 2:

So take document processing in sales. Maybe you are selling something in a physical store and you have paper documents, and you have, say, 10 stores. Take one store, convert the physical documents into a digital format using a large language model, and do whatever automated processing you need. Whether you want to see the sales, or see what products are selling the most, whatever your KPI and metrics are, do it successfully for that one location. That's a pilot. It's still valuable; it's not throwaway work. And you can say, okay, it works for this location.

Speaker 2:

But what if we had not 10 but 1,500 locations? How do we scale? Then we think of more infrastructure to deploy into production, which is not just throwing hardware or cloud compute at it; it's also architecting the solution in a way that's scalable. Perhaps you have 10 locations today, but by the end of 2025 you're expanding to 100 locations. How would you have a scalable architecture? That's what you have to keep in mind when you're doing a production-ready deployment. And the other new thing on the horizon is operationalizing: you have MLOps, and now LLMOps. Basically, once you move to production, you have to keep monitoring it, because unlike traditional software, your model, whether it's generative AI or traditional machine learning, depends on data, and as your data changes, your outcomes might change. So you have to monitor it and make sure there's not too much of a drift from what you designed to what's in production.
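Monitoring for that drift is largely automatable. Below is a minimal sketch of one common check, the Population Stability Index computed over the model's output scores. The bin count, threshold, and sample distributions are illustrative assumptions, not anything from the conversation.

```python
import math
from collections import Counter

def psi(expected, actual, bins=10, lo=0.0, hi=1.0):
    """Population Stability Index between two score samples.
    Rule of thumb: < 0.1 is stable, > 0.25 is significant drift."""
    width = (hi - lo) / bins
    def fractions(xs):
        counts = Counter(min(int((x - lo) / width), bins - 1) for x in xs)
        # Tiny floor avoids log(0) for empty buckets.
        return [max(counts.get(i, 0) / len(xs), 1e-6) for i in range(bins)]
    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Scores the model produced at design time vs. what production shows now.
baseline = [i / 100 for i in range(100)]                      # roughly uniform
production = [min(i / 100 * 1.4, 0.99) for i in range(100)]   # skewed upward

drift = psi(baseline, production)
if drift > 0.25:
    print(f"ALERT: score drift PSI={drift:.2f}; investigate or retrain")
```

In practice an MLOps/LLMOps pipeline would run a check like this on a schedule and page someone when the alert fires.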

Speaker 1:

It's funny, because you mentioned the scalability of the AI solution, the pilot you're putting together, and how that scales across an organization. We've always thought in technology that the cloud is a great place to consume resources: a consumption-based model where you can infinitely scale. But I would imagine the problem, and of course I'm leaning on your expertise here, is how you maintain and monitor something that is continually growing and evolving. Is there a talent and skills gap there? You mentioned MLOps and LLMOps. At the rate things are progressing, it's very difficult, I would think, for most people who work in operations day to day. They're now having to manage systems that are completely net new to them, and they're thinking, what now? So where do you point them?

Speaker 2:

And this is where, if you're a large organization, you can lean on cloud providers like AWS. Just coming back from the conference: they have introduced Bedrock and all these capabilities, but truly, it's too early to point to an end-to-end, at-scale enterprise use case in generative AI, to be very frank. I know McKinsey has done an internal implementation of a chatbot called Lilli, across their many thousands of employees, that uses generative AI capabilities. It's been a case study; they've published the learnings and done articles on it. But truly, at scale, we are still not there. A go-to-market strategy would be to augment your technology teams with AWS or Google Cloud, because they have the tools and they are ready for a real use case to make it to production. And you're right, as the LLMs expand.

Speaker 2:

So there are two ways of looking at LLM infrastructure. One: most organizations will use the APIs of a provider like OpenAI, so you don't need to build a large language model from scratch. You're leveraging an existing large language model and, like I said, you can pick something off the shelf, like an API provided by OpenAI. There are APIs provided by Claude (I'm not sure about Perplexity, but Claude definitely), and then you have your open-source options, like Hugging Face, that you can leverage. So essentially, you decide which LLM API is the best fit for your organization. On the other hand, if you are, say, a big pharmaceutical company, you may want to build your own language model, because you are building on top of your proprietary data. Bloomberg has done that; I think Bloomberg built their own large language model using financial data. So essentially, if you're building your own large language model, the infrastructure requirements are totally different.

Speaker 2:

That's where you will have relationships with NVIDIA to get all your GPUs and things like that. But for all the rest of us, we just leverage the large language model APIs, and they also offer scalability, just like cloud providers: they bill on a consumption basis. The way they calculate consumption is very different, but it still scales with your usage.
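As a rough illustration of that consumption-based billing, here is a toy cost estimator. LLM APIs typically bill per token rather than per CPU-hour; the per-token prices and the four-characters-per-token heuristic below are made-up placeholders, not any provider's real rates.

```python
# Hypothetical USD rates per 1,000 tokens; real providers publish their own.
PRICE_PER_1K = {"input": 0.003, "output": 0.015}

def rough_token_count(text: str) -> int:
    """Crude heuristic: roughly 4 characters per token for English text."""
    return max(1, len(text) // 4)

def estimate_cost(prompt: str, completion: str) -> float:
    """Estimate one API call's cost from its input and output token counts."""
    cost = (rough_token_count(prompt) / 1000) * PRICE_PER_1K["input"]
    cost += (rough_token_count(completion) / 1000) * PRICE_PER_1K["output"]
    return round(cost, 6)

per_call = estimate_cost("Summarize this quarterly sales report",
                         "The store sold more units than last quarter")
monthly = 50_000 * per_call  # e.g. 50k calls/month across all locations
print(f"Estimated monthly spend: ${monthly:,.2f}")
```

The point is the shape of the math: spend scales with usage, just like cloud compute, only the unit of consumption is tokens.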

Speaker 1:

I think it's also important to keep in mind that some of the critical elements of building a generative AI capability really come down to the key components of your AI governance framework. I know you and I talked about this on the last podcast: the governance framework basically serves as the guiding principles for how this thing is going to grow and scale. What do you think are the most critical components that everyone needs to consider in their governance framework, regardless of where they're building?

Speaker 2:

Yeah, that's a great question, because governance is one of those things that I observe falls by the wayside; it's an afterthought. But for a better output and outcome, I think governance should be given more priority than it currently gets. There are three key aspects to governance, whether it's a large language model or traditional machine learning. The input to any AI solution is data, so the first thing is governance principles around data. That includes metadata management, solid data stewardship, and data committees. The second important aspect is access control, which is so much more important now. And also data lineage: provenance of data is so important, especially for AI, and especially when it comes to fraud detection.

Speaker 2:

It was interesting: a few years back, I worked on a proof of concept taking CMS (Centers for Medicare and Medicaid Services) data and applying machine learning algorithms for fraud detection. One of the interesting things is that when CMS does fraud analysis right now, they use traditional statistical modeling, and when they tell a provider, which could be a doctor, a doctor's office, or even a large hospital, that there is fraudulent activity in its billing, any or all of them can actually take CMS to court and say no. And in court you actually have to prove your statistical modeling, because if somebody is submitting 5 million bills per year and you found fraudulent patterns, you're not going to manually investigate 5 million bills; you're going to use some sort of statistical modeling and random sampling. Now take machine learning: if subject to a court order, how would you prove that, using machine learning techniques, this provider was rightly deemed fraudulent? So in these very critical situations, especially if you're using it for criminal justice or even to predict crime, like Minority Report, you have to be very, very careful, both as a business user and as a technologist, to understand what the inputs to your model are and what weightage the model gives them in making a recommendation. Why is it saying this provider is fraudulent, and where did the data come from? Did it come directly from the provider? Is it historical data? There are so many parameters you have to be cognizant of, so that you're very careful before you give a prediction and say, okay, this is fraudulent. So, to my point.

Speaker 2:

The second one is data lineage, and how the data is being used by the machine learning model; those governance principles are so important. And the third, I would say, is data security and privacy. Obviously, you anonymize the data. When it comes to things like HR data, recruitment data, and data in healthcare, they have to be anonymized. Healthcare is a very interesting case, because for some use cases you need to keep at least the gender data: there are some diseases that affect certain demographics and genders more than others. So in healthcare instances you anonymize the data only to the extent you can. So I would add privacy and security considerations.
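A minimal sketch of that kind of selective anonymization: drop direct identifiers, pseudonymize the patient ID so records can still be linked, and generalize age while keeping gender, since it is clinically relevant. The field names, salt handling, and sample record are assumptions for illustration; production systems would use proper key management and formal de-identification standards.

```python
import hashlib

SALT = "rotate-me-per-dataset"  # hypothetical secret; store in a vault in practice

def anonymize(record: dict) -> dict:
    """Drop direct identifiers, pseudonymize the ID, and keep the
    clinically relevant demographics (gender, age band)."""
    decade = (record["age"] // 10) * 10
    return {
        # One-way salted hash links a patient's rows without exposing identity.
        "patient_ref": hashlib.sha256(
            (SALT + record["patient_id"]).encode()).hexdigest()[:16],
        "gender": record["gender"],
        # Generalizing exact age into a band reduces re-identification risk.
        "age_band": f"{decade}-{decade + 9}",
        "diagnosis_code": record["diagnosis_code"],
        # Name, address, exact DOB, etc. are simply never copied over.
    }

row = {"patient_id": "P-001", "name": "Jane Doe", "age": 67,
       "gender": "F", "diagnosis_code": "I21.9", "address": "123 Main St"}
print(anonymize(row))
```

The design choice here is pseudonymization rather than deletion of the ID: analysis across a patient's claims still works, but only someone holding the salt could attempt to reverse it.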

Speaker 1:

It's interesting, the security implications you mentioned for organizations bringing this into cybersecurity. Look, I got a letter from my internet provider that said I was subject to a data breach. I got an email from Ticketmaster just last week saying my Ticketmaster account, with all my concert tickets, was subject to a data breach. You would probably think most organizations would jump at the chance to leverage things like generative AI for rapid security or forensic analysis. With so many data hacks and data leaks happening now, do we see people wanting to rapidly use this without giving real thought and consideration to which models may or may not be suited to using gen AI for cyber?

Speaker 2:

I think it's a chicken-and-egg question. A lot of people are still not experimenting. There are a lot of strong use cases for using generative AI to prevent cyber attacks, but we don't see that yet. Where we are seeing it is product companies: a cybersecurity product company incorporating gen AI into its product. What needs to happen is both. Almost all product companies are trying to incorporate gen AI into their products, because they are technology companies, they are future-thinking, and they want to be competitive in the marketplace. But it's not enough if only the product companies do it. Large organizations like a Ticketmaster should invest in and investigate the use case.

Speaker 2:

Again, back to production: maybe they are doing some prototypes and proofs of concept, but how would they take it to production and prevent this? This is a great example of using generative AI to prevent security breaches. Again, I think it's a matter of time. On the consumer side, we are benefiting, getting Claude 3.5 Sonnet and all that. The enterprise side has to play catch-up, which usually takes longer.

Speaker 1:

It's incredible to see. I think there was a Gartner study that said AI investment is expected to reach $97 billion by the end of this year alone, showing the growing importance and commitment of organizations toward AI initiatives. I found another part of the study incredibly funny: it said that 50% of companies have adopted AI in at least one business function, but only 20% of them have been able to successfully scale it. Exactly back to your point: a lot of organizations are trying to figure out what their use case is, what their return on investment is, and what true business outcome they're looking to solve, versus just throwing something out there. But I can't stress enough, as you said earlier, starting from the beginning with building a real, true generative AI governance framework, understanding your data models, and then having transparency along with compliance and auditing capabilities as well. I think those things are key and essential. What else am I leaving out here?

Speaker 2:

I think responsible AI is a big aspect of it, especially when I talk about use cases in criminal justice. Yes, AI has the capacity to predict certain crimes, based on how the algorithms work, but is that the right use case? And then bias is huge. If you think about use cases in recruitment, HR, even in healthcare, bias can inherently seep in. For image processing, there have been enough research papers published showing that models are still not able to deal well with dark skin versus fair skin, and things like that. So responsible AI means best practices for evaluating for bias, and secondly, evaluating for fairness.

Speaker 2:

And then transparency of models: if you're taken to a court of law, can you prove your algorithm is fair or not? Without going into technical details: what are the input parameters to the algorithm, and what weightage was given to those attributes and parameters in the recommendation that was made? So definitely transparency, and there's a lot of technical study in this area, called interpretability of models. That's another important aspect.
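As a toy picture of that weightage-and-parameters idea, here is a deliberately transparent linear "fraud score" whose per-input contributions can be read off directly. The features, weights, and threshold are invented for illustration; real interpretability work on complex models uses dedicated techniques rather than a hand-built linear score.

```python
# Explicit weightage per input, so each recommendation can be
# explained feature by feature. All numbers below are invented.
WEIGHTS = {
    "bills_per_day": 0.8,
    "avg_bill_amount_zscore": 0.5,
    "pct_weekend_claims": 0.3,
}
BIAS = -2.0
THRESHOLD = 0.0

def score_with_explanation(features: dict):
    """Return (flagged, contributions) where contributions lists each
    feature's share of the decision, most influential first."""
    contributions = [(name, WEIGHTS[name] * features[name]) for name in WEIGHTS]
    total = BIAS + sum(c for _, c in contributions)
    contributions.sort(key=lambda nc: abs(nc[1]), reverse=True)
    return total > THRESHOLD, contributions

flagged, why = score_with_explanation(
    {"bills_per_day": 4.0, "avg_bill_amount_zscore": 1.2,
     "pct_weekend_claims": 0.9}
)
print("flagged:", flagged)
for name, contribution in why:
    print(f"  {name}: {contribution:+.2f}")
```

This is exactly the kind of answer a court might demand: which attributes went in, and how much each one pushed the score toward "fraudulent."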

Speaker 1:

Yeah, I completely agree. I think there's still a lot more to be done in this area. You mentioned the implications around CMS and claims denials. I was also reading about another provider facing a class action lawsuit: the claim was that people's claims were being denied in less than 2.3 seconds, or something like that, and a lot of people were saying, well, listen, your AI, your algorithm, has bias built into it, and people are being denied valid claims that should be looked at and examined individually, versus a vast swath of them being automatically denied.

Speaker 1:

I think they were into the millions in terms of the number of cases denied in less than a second, which is incredible. So I do think there needs to be more work and attention paid to the algorithms, the governance, and how bias gets built in. What are some of the cool things you're working on and see coming up in the AI landscape in the next six months? What do you see as the next sizzle?

Speaker 2:

I think we are going to continue seeing all these newer versions. There are the generative AI wars, as I call them, but we are the beneficiaries. I mean, we are benefiting from all the generative AI chatbots available to consumers, and they keep increasing. I would say I'm a power user; there's not a day that goes by where I'm not spending time in several of them.

Speaker 2:

Cool things on the horizon are the image processing and video generation tools that are coming. I played with Luma Labs AI and it was so cool: my son gave me a prompt, and I was able to show him a five-second video it generated, and he really loved it. I know Sora, OpenAI's video generation model, is still not publicly available, whereas Luma Labs is. So there's going to be some very, very cool video and imagery generation. Of course, on the flip side, there is a debate around copyrights of the data being used to generate these images and videos. But ultimately, it's a Pandora's box that's already opened; you can't put it back, and we would benefit by creating with it. I can't even imagine the kinds of jobs my son, who's only eight, will have, because it's going to be a brave new world.

Speaker 1:

It's so funny. My twin four-year-old boys, and my youngest one especially, have basically already overwhelmed Alexa to the point where she's got the red circle on. And my wife and I have this conversation.

Speaker 2:

We say, what age?

Speaker 1:

Should we unleash him on ChatGPT? Right, because he's literally going a thousand miles an hour. But you mentioned adoption, and things like wearables, and I wanted to get your opinion and thoughts as we start to round out the segment. Wearables are a huge market. Recently I was talking about the Meta Ray-Ban sunglasses, where you have the AI camera already built in, and the commercial was like: hey Meta, read this menu that's in a different language; hey Meta, what does this sign say, because I'm a traveler in a foreign country. And to me it looked really, really cool.

Speaker 1:

I thought this was a great combination of AI built into wearables that was affordable for the consumer; I think the sunglasses were around $299, somewhere around there. But then some people started saying, well, listen, I tried these glasses. I went to certain restaurants with menus in different languages, and it did not work for me. I tried to ask it what I was looking at, and it told me a street sign when I was actually looking over a bridge. So it wasn't quite as advertised. And I kind of wonder, since we talk about going from prototype into production: was this really ready for prime time? I don't know. Meta has some pretty cool products, but what are your thoughts?

Speaker 2:

Yeah, I think there are two aspects when it comes to wearables. I still feel there is something about humans and physical interaction with a device; there is still friction, and I think that friction would still be there for adoption of wearables, maybe something like a ring that beeps when your heart rate goes up. Those have been in the market, but again, adoption has not been so great. Maybe the Meta glasses are not ready for prime time, even over a longer period. I think there has to be some bioengineering research done there about increasing adoption of wearables, because the friction is that people experience this feeling of being uncomfortable. Even though it's beautiful, and it's like I'm in the movie and all that, wearing one for two hours, so close to me, was nerve-wracking. I couldn't do it; I couldn't even do it for one hour.

Speaker 2:

So there is this friction part, but I think there is a good use case for wearables such as a ring or a heartbeat monitor, which the Apple Watch already has, and even enhanced versions for older demographics who are susceptible to falling, heart attack, stroke, and so on.

Speaker 2:

So there is a market there, and the technology is there for such a use case. It's about prioritizing those use cases rather than flooding the market with a lot of different wearables. We only have one or two options right now for the glasses and the rest. For glasses, personally, I think there is a friction element, but for a ring or a bracelet, there are a lot of other use cases.

Speaker 1:

So is your outlook on the future of wearables more positive, or still more skeptical, as in you're waiting to see how it looks?

Speaker 2:

I'm still more skeptical. There are a lot of positive use cases that could be possible, and they have been there, but my question is adoption. When it comes to the physicality of people, there is more friction to adoption. Think of Elon Musk's Neuralink, which is embedded in the brain; I don't know how many will sign up for that. I, for one, will simply opt out. I'll wait.

Speaker 1:

I'll wait a little bit longer. Again, prototype to production, right? How much has this been tested? How many case studies have there been? What's the rate of success? Will I die on implant, or something crazy like that? It's interesting, and very exciting to see. Swathi, I want to thank you again for coming on the show and sharing your insights. This whole journey from prototype to production is indeed a very complex topic, but I think your approach to it and understanding of it really help us grasp its transformational aspects. So again, I want to thank you so very much for joining us on the podcast.

Speaker 2:

Thank you, Steve. Always fun to talk to you.

Speaker 1:

Awesome, and thanks everyone for listening. Until then, stay curious, stay informed, and, most of all, happy travels. Thanks so much for listening to the Tech Travels Podcast with Steve Woodard. Please tune in next time, and be sure to follow us and subscribe on the Apple Podcasts and Spotify platforms. We'll see you next time.