Tech Travels

EP10: The AI and Blockchain Fusion with VezTek CEO: Sani Abdul-Jabbar

Steve Woodard


Discover how the tech world is grappling with the latest AI regulations from Europe as Sani Abdul-Jabbar, CEO of VezTek, joins the Tech Travels Podcast. We dive into Europe's latest AI regulations, the synergy of AI and blockchain in energy trading, and the digital era's impact on education and the workforce. Sani brings insights from his book 'Makers: A Slender Knowledge,' offering a nuanced view on AI's global impact, ethical innovation, and the transformative potential of blockchain. Join us as we explore how these technologies are reshaping learning, work, and the tech landscape.

Venture with us into a hybrid realm where AI meets blockchain, and the future of energy trading is rewritten. I explore the hesitation within the business community to combine these powerful technologies, despite the clear strategic advantages. Sani and I scrutinize the dichotomy between recognizing AI's potential and the hesitancy to invest in it, discussing the hurdles that both emerging and established companies must overcome to stay competitive in a tech-driven market.

We round off our conversation with a leap into the digital era's impact on education and the workforce, where Sani shares his vast experiences from his global travels. We ponder the metaverse's growing influence on our daily lives, the emergent tech trends poised to transform industries, and the evolving skills required to thrive alongside AI. Tune in to envisage a future where artificial intelligence is not a specter of job loss but a beacon of entrepreneurial opportunity, reshaping the way we learn, work, and create.


About Sani Abdul-Jabbar
https://www.linkedin.com/in/sani?utm_source=share&utm_campaign=share_via&utm_content=profile&utm_medium=ios_app

“Makers: A Slender Knowledge” Book
https://amzn.to/3xn9ajG

VezTek
https://www.linkedin.com/company/veztekusa/

“Makers is a visionary exploration of AI’s societal impact. It challenges us to consider the ethical and human dimensions of AI development, making this a must-read story.”
Nolan Bushnell, Founder of Atari and Chuck E. Cheese, author, and a founding father of the video game industry




Follow TechTravels on X and YouTube

YouTube Channel
https://www.youtube.com/@thetechtravels

Tech Travels Twitter
https://twitter.com/thetechtravel

Tech Travels
https://techtravels.buzzsprout.com/

Speaker 1:

Welcome to Tech Travels, hosted by seasoned tech enthusiast and industry expert Steve Woodard. With over 25 years of experience and a track record of collaborating with the brightest minds in technology, Steve is your guide through the ever-evolving world of innovation. Join us as we embark on an insightful journey, exploring the past, present and future of tech under Steve's expert guidance.

Speaker 2:

Welcome back, fellow travelers, to another amazing episode of Tech Travels. Today, we have the pleasure of hosting Sani Abdul-Jabbar, a visionary CEO at VezTek, a company that's at the forefront of emerging tech. Sani leads a talented team that's pushing the boundaries of blockchain and artificial intelligence. As an author, a thought leader, and a voice of authority in Web3, Sani is here to share his insights on what's shaping our digital landscape today. So buckle up as we embark on this journey through technology with Sani Abdul-Jabbar. Sani, welcome to Tech Travels. It's an honor to have you on the show today.

Speaker 3:

Thank you, Steve. It's a pleasure to be here.

Speaker 2:

Very glad to have you. I really want to lead with this story here; it just broke yesterday: the world's first major act to regulate AI, passed by European lawmakers. It says the European Union is now a global standard setter in artificial intelligence. So help us walk through a little of this late-breaking news and what it means for us here in the United States.

Speaker 3:

Yeah, so that certainly is the hot topic today. This conversation about regulation in the EU has been going on since December at least. Well, since before that, but it was in early December that they presented the first draft of this bill in their parliament, and it was eventually approved yesterday. The gist of the bill is that they divided, or rather categorized, AI technologies by level of risk. The highest, extreme level would be an unacceptable level of risk, the lowest would be an acceptable level of risk, and then there are three other levels in between, five in total, I think. So that's what they have done.

Speaker 3:

Now, historically, I have always supported the idea of regulation, mindful regulation. That's the point; mindful is the key. In the US, we have this cowboy mindset in the tech world, right? Don't stop me, don't regulate me, because if you regulate me, that hinders the development of the technology I'm trying to build. But the problem is that, yes, I can develop technology, but if there are no rules of engagement and I come to you, Steve, and try to sell the technology, you're going to say, well, what are the rules of engagement? How are we going to play? If there are no rules of the game, we can't play, right? I can't sell that technology to large corporations. I can't sell that technology to government agencies. They're not going to touch it. So therefore we need regulation.

Speaker 3:

The counterargument is that if regulation is put in place just for the sake of regulation, then we run into a situation where it becomes counterproductive.

Speaker 3:

Right, lawmakers aren't necessarily experts in technology; it's the think tanks, the lobbying groups and the industry groups who have a role to play in that. But there's always a prioritization of interests, and competing interests, that we have to deal with. Having said all that, I haven't read the full bill that was passed in the EU parliament yesterday, so I can't comment on it with specifics. But if it isn't put together mindfully, if there are no experts involved, if regulation is put in place out of fear rather than to place guardrails around the technology, then it can be counterproductive. From what I've read about it so far, it doesn't seem so; it looks quite reasonable. But, like I said, I haven't read the whole thing yet.

Speaker 2:

Yeah, absolutely, and I think the idea of regulation just means it provides the certainty that we really need.

Speaker 2:

The tech community really does thrive on innovation, but without a clear framework, as you mentioned, we really can't realize the potential of these technologies. Your point about mindful regulation resonates strongly with me because I think it's really about striking a critical balance. We need to ensure that experts are at the table when these regulations are drafted so they don't reflect fear or misunderstanding around artificial intelligence. And given that the EU has recently taken these steps, I think it sets the stage for the bill to become more and more of a guide, moving us here in the US from that Wild West mentality in tech toward a more structured frontier. Now, with the post-pandemic surge in AI awareness and application, I think it's crucial that we navigate the hype cycle thoughtfully. So could this bill be a blueprint for marrying innovation with responsibility, or is it something that all of us in the industry need to watch closely and learn from as we move forward?

Speaker 3:

Every technology, every innovative technology, goes through a hype cycle. Now, hype, in my opinion, is not necessarily a bad thing, because what does hype do? It attracts resources, attracts people, attracts funding, and we develop things, so that's a good thing. The negative of hype is that everyone and their grandmother starts claiming that they are experts in that field. I attended an event, not last week, this week actually, on Monday, where we reviewed projects, 900 projects in total, along with a group of other experts in the industry, for investment. At least 70% of them were something-AI, and once you start asking questions, you realize that there is no AI. A lot of it was, oh, I'm building a custom GPT, a custom GPT plugin. Well, that's not your intellectual property, so don't call it your project. You're asking for hundreds of millions of dollars for something that you don't own; OpenAI can shut you down tomorrow. Then there are a lot of people who are just doing smart automation of things and calling it AI. Overall, I think fewer than 10% of companies are actually building something interesting that can legitimately be called AI. So that's that.

Speaker 3:

That's the current state of the industry, in my opinion, at least in the spaces where I operate. Are we in the hype cycle? Most certainly, yes. My gut feeling is that we need at least until the end of this year to reasonably come out of this hype cycle, because what's going to happen by the end of this year is that you will know which companies are actually AI companies and which are not.

Speaker 3:

So that's what's going to happen by the end of this year, and my guesstimate is that by the end of next year, companies who are claiming to be AI but are not, and who have raised money, will run out of money. So industry consolidation will happen towards the end of next year, and that's when we'll start seeing the real players in the industry; the winners will become more visible. This is not to say that really interesting technologies aren't evolving and coming to market every day. I attended a meeting a couple of months ago now, at Microsoft, the kind of meeting where you have to sign an NDA before going in. They presented some products for the entertainment industry, the film industry, and the kind of stuff these products can do is mind-boggling. I can't talk too much about it because of the non-disclosure commitments, but I can tell you it is mind-boggling. If anyone thinks that jobs aren't going to get impacted, that industries aren't going to get impacted, they're dreaming, right?

Speaker 2:

Yeah, and I think the distinction you draw between superficial AI applications and genuine innovation is really powerful, a wake-up call, and it's sobering to hear your estimate that under 10% of projects are truly artificial intelligence. I also think your prediction on industry consolidation aligns with the critical need to discern real expertise from the hype. So, with that in mind, when considering expertise, what are your top criteria for identifying true AI specialists in this crowded field? And give me a little idea of what goes into that thought process.

Speaker 3:

So, first of all, this technology is not such a new technology. Yes, ChatGPT made it popular at the end of 2022, but the first large-scale AI-powered project that we did was in 2018 or 2015, somewhere around there. So the technology has been around. There's a technical side of it, and it's easy to judge whether someone has those technical skills or not; you just look at their previous work and all that. GPT, and especially its use in industry, is relatively new at the level and the scale where it's evolving now. So at that scale, and with the rapid evolution of the industry, what's very important for me is this: I don't care whether you know everything today, but I do care whether you dedicate time and resources to learning constantly. Constantly, because it's an evolving industry; every day something new is coming out.

Speaker 3:

So when I interview, or my team interviews, people who are looking to join our teams, beyond their technical skills we also judge what they're reading. We look at their LinkedIn profiles and see what they're posting. We ask them what's the most recent book they read or the most recent articles they read, because that tells me whether this person is staying on top of things. Are there any interesting things you have created using LLMs, GPTs, that kind of stuff, even in your personal time, even just for your personal entertainment? Because that shows your interest in the technology. A few things are really important right now for the business world, for the industry and for leaders to understand: when it comes to project selection and product development, the rules are still the same. The best practices are still the same. You have to think about ROI. You have to think about who it impacts. You have to think about who the decision makers and decision influencers are, who the customers and consumers are. These are the basics of any product development; they always have been.

Speaker 3:

What's slightly new this time around is that, in the past, data used to be the sideshow. Now data is the main show. What that means is this, and it happens all the time, Steve. If someone put a dollar in my savings account every time a business leader asked me for the blue pill that will solve all their problems, I'd be quite rich. It happens every time we have a new emerging tech, and it's happening again now in the wake of AI, and even still with blockchain: a business leader will come to us and say, tell me how I can bring AI into my business today to automate all my processes so that I can achieve that competitive advantage right away. And my response to them is no, that's not how it's going to work. It's the same answer I gave you two years ago when we were talking about blockchain, and two years before that when we were talking about other things.

Speaker 3:

This time around, what's new in the context of product development, or project selection, is that you first have to think: where in your organization's processes do you have the most data? That's the low-hanging fruit; that's where you've got to start. ChatGPT, or any other LLM or AI tool, just the fact that you and I both have access to it doesn't give us a competitive advantage. You and I can both get a ChatGPT subscription for $20, for example, or other tools for different levels of investment, but anyone can do that if they can afford it. So that's not a competitive advantage.

Speaker 3:

Competitive advantage is if Steve picks up a GPT and trains it on his writings, his speaking, his history, his business's data.

Speaker 3:

Now that's a competitive advantage, because Sani does not have access to that data, so I don't have that competitive advantage; you do. That's what I'm trying to advocate to businesses: when you want to bring in AI, to leverage AI and other emerging technologies for competitive advantage, think about where you have data in your organization. Typically finance, accounting, sales; that's the first level, where you have the most data and the cleanest data. When you go below that, the data is not that clean, but there's still marketing, operations, things like that, where you still find some data. That would be the second layer, and so on. So for people looking for opportunities in the AI world, if they can just learn these skills, where they can analyze the processes of a business, see where the automation opportunities are, see where the data is, and then figure out what AI tools can be brought in to automate those processes using that internal proprietary data, that's the low-hanging fruit.
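As an illustration of the kind of customization Sani describes, here is a minimal sketch of grounding a general-purpose model in a company's own documents with a simple retrieval step before the prompt is sent. It is an assumption-laden example, not VezTek's method: the sample documents, the choice of scikit-learn TF-IDF for retrieval, and the placeholder final LLM call are all illustrative.

```python
# Minimal retrieval-augmented sketch: ground a general-purpose LLM in a
# company's own documents so answers draw on proprietary data.
# Illustrative assumptions: documents are short snippets already pulled from
# internal systems; the final LLM call is left as a placeholder so this runs
# without any API key.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

internal_docs = [
    "Q3 jet fuel margins improved 4% after renegotiating storage fees.",
    "Top customers by revenue: Acme Air, Borealis Cargo, Cirrus Charter.",
    "Sales playbook: quote spot price plus hedged premium for 30-day terms.",
]

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the question (TF-IDF cosine)."""
    vectorizer = TfidfVectorizer().fit(docs + [question])
    scores = cosine_similarity(vectorizer.transform([question]),
                               vectorizer.transform(docs))[0]
    ranked = sorted(zip(scores, docs), reverse=True)
    return [doc for _, doc in ranked[:k]]

question = "How should we price a 30-day jet fuel contract?"
context = "\n".join(retrieve(question, internal_docs))

# The prompt now carries data a competitor's off-the-shelf subscription cannot see.
prompt = f"Answer using only this internal context:\n{context}\n\nQuestion: {question}"
print(prompt)
# A hosted or self-hosted LLM would be called here with `prompt`, via whatever
# chat-completion API the business already uses.
```

The point of the sketch mirrors the conversation: the proprietary retrieval corpus, not the model subscription, is where the competitive edge lives.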

Speaker 2:

Yeah, Sani, I completely agree that the true competitive advantage lies in utilizing proprietary data with AI tools and tailoring them to the unique aspects of one's particular business. So, on the topic of leveraging unique assets, this brings to mind the contrast with blockchain's philosophy of decentralization. While AI, especially in the form of services like ChatGPT, tends to centralize knowledge within the major tech companies, it looks to me like blockchain aims to distribute that control. What's interesting to observe is these different approaches, AI centralizing versus blockchain decentralizing. So I'm very curious: how do you see this interplay affecting businesses that are looking to adopt these technologies for a competitive edge? Do you see a synthesis between these models? Could they offer the best of both worlds? I mean, I think they tend to favor more centralization.

Speaker 3:

True, although I think we have an answer to that problem within blockchain technology. You have three different types of blockchains: one is public, one is private, one is hybrid. Public means anyone can join, private means only certain people or certain entities can join, and hybrid is a combination of the two. Think of an industry where a set of companies are allowed to join that chain; that would be hybrid, where it's somewhat public, but not quite, right?

Speaker 3:

AI, on the other hand, especially when you look at machine learning models, uses large amounts of data, right? But you can limit that data. You can say, only look at this one data set, don't look at the global data set. OpenAI actually started offering this type of service for businesses not too long ago, and other companies will do it as well, because obviously we can't open up all our data to everybody; that doesn't work. One of the companies that I head up is in the energy sector: energy trading, fuel commodities, primarily jet fuel, and it's middle market. I realized that in that market there are lots of players, many, many players. Some are small, some are large, and some work on deals for years and never get anywhere. Others have more deals going on, but these are large-scale deals; if you can close one deal, it's hundreds of millions of dollars.

Speaker 3:

However, there is no centralized system in place, because information is the key. If I know where to buy the product, where to sell the product, how to price the product, I have the power, right? That's my intellectual property; that's the power. As soon as this information is available to anybody else, I lose my competitive edge and can't survive. So I proposed the idea of creating a hybrid-blockchain-based, ERP-type system where we can streamline all the very complex processes involved in energy trading and put all of that on the system. It would be a centralized system in which buyers and sellers, not all of them, but whoever wants to play, would be members.

Speaker 3:

And this idea wasn't very popular in that industry because of the information, to your earlier point: on one side we have public information versus private information. The idea didn't fly very well because, like I said, people are very protective of their information, no matter how much you try to assure them that the information stays with them, that they own it, and that when you put it on the chain it's fully secured, no problem. It didn't work. Now, if you bring it down to a company level, technologically you can do it. You can keep all the data within a company, but then the cost of operating it goes through the roof, and, as I mentioned, most of these companies are small companies; they don't have those kinds of resources. So, can AI and blockchain technologically work together? That's actually a match made in heaven, right? If you just look at blockchain, in my opinion it's one piece of the puzzle.

Speaker 3:

You have to look at data collection. Collection comes from sensors, humans, machines, any source, right? Data comes in. Then you secure that data somewhere; that's your blockchain, in any of the three types I mentioned. There are others; think of it as a spectrum of how public a chain is. And then, what do you do with that data? That's the AI part. You do something using AI, pattern detection or whatever, you create some actionable insights out of that data, and then you use them. So both technologies are very well suited to work together. The issue is not that they can't work together. The issue is communicating to the stakeholders the idea that your information is secure when it's on the chain, and that's going to be a problem when you're working across entities. As long as it's within one entity, a government, a very large organization, you're fine, no problem. But when you go outside that and you want to share information across entities, even if it's fully secure, that's a hard sell.
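To make the collect, secure, analyze pipeline above a little more concrete, here is a toy, self-contained sketch under stated assumptions: the "chain" is just a hash-linked list standing in for a real blockchain, and the "AI" stage is a simple statistical outlier check standing in for real pattern detection. None of this reflects how VezTek's energy-trading system is actually built.

```python
# Toy sketch of the collect -> secure-on-chain -> analyze pipeline.
# Assumptions: the "blockchain" is a hash-chained list, not a real network,
# and the "AI" step is a basic anomaly check, not a trained model.
import hashlib, json, statistics

def add_block(chain: list[dict], record: dict) -> None:
    """Append a record, linking it to the previous block by hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    chain.append({"record": record, "prev": prev_hash,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify(chain: list[dict]) -> bool:
    """Recompute every hash; any tampering with a record breaks the links."""
    for i, block in enumerate(chain):
        prev_hash = chain[i - 1]["hash"] if i else "0" * 64
        payload = json.dumps({"record": block["record"], "prev": prev_hash},
                             sort_keys=True)
        if (block["prev"] != prev_hash or
                block["hash"] != hashlib.sha256(payload.encode()).hexdigest()):
            return False
    return True

# 1) Collection: prices arriving from sensors, people, or other systems.
chain: list[dict] = []
for price in [2.41, 2.44, 2.39, 2.42, 3.90, 2.40]:
    add_block(chain, {"commodity": "jet_fuel", "usd_per_gal": price})

# 2) Security: anyone holding a copy of the chain can check it wasn't altered.
assert verify(chain)

# 3) Analysis: flag prices far from the mean as an "actionable insight".
prices = [b["record"]["usd_per_gal"] for b in chain]
mean, spread = statistics.mean(prices), statistics.stdev(prices)
print("Prices worth a second look:",
      [p for p in prices if abs(p - mean) > 1.5 * spread])
```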

Speaker 2:

It's interesting. Your insights into the synergy between AI and blockchain are really enlightening, especially considering the security and data privacy concerns you've outlined. But, given your expertise with these technologies, what would you say the primary hurdles are when you're advising companies? Specifically, how do you address the hesitation of executives who may grasp the potential of AI and blockchain but are reluctant to make that leap due to perceived risks or a lack of a strategic vision for implementing AI products?

Speaker 3:

So we get industry surveys done every year to get a sense of the tech landscape. In the most recent survey, we found that over 78% of business leaders completely understand and believe that AI and emerging technologies are the competitive edge: if they don't adopt these technologies within the next 10 years, they will lose their competitive edge and won't be able to survive within a decade, regardless of how stable they are today. Complete understanding. Yet fewer than 4% of leaders are actually investing a significant amount of resources into leveraging these technologies and bringing them into their business operations. When asked why, the most common responses were: I don't have information, I don't have guidance, I don't know who to trust because everyone is claiming to be the expert, and I'm concerned that my existing processes that are working perfectly will break if I introduce new tech into them. The issue wasn't money; the issue was fear. The solution to that is quite easy. As a matter of fact, the easiest solution is that you've got to find experts. We have people in our company at VezTek for whom this is their bread and butter; this is what they live and breathe. So just talk to people like that and say, look at my processes and tell me where the opportunity is. And we are not always going to tell you that you have the opportunity. If you don't have the data, either your own or access to it somehow, you can't use these tools, right? GPT-type tools or other AI tools are not a competitive advantage on their own. You have to customize them to a level where they become yours, where they become your competitive edge. If anybody can pay the fee and get a subscription, then that's not a competitive edge.

Speaker 3:

Now, at lower levels, like generating some text or marketing copy, or using GPT to write or improve your emails, that can be done, but I don't see it as a competitive advantage. Yes, it increases your efficiency. I made a statement somewhere and got some pushback for it: if today you go into any decent-sized organization, look at their processes and introduce publicly available AI tools, you can increase their efficiency by about 15 to 17 percent. But almost any reasonably sized organization has access to that, and if everybody has access to something, I don't consider it a competitive advantage. Competitive advantage is when you have access but others don't. To get to that level, you have to make these tools yours by customizing them based on your data.

Speaker 2:

Yeah, and customization takes time, effort and cycles within your team to develop. I think you mention in your book, and again, the book is amazing, I started reading Makers: A Slender Knowledge and it's phenomenal so far, that VezTek really shines as an innovation hub for emerging technologies and that it redefines them within 12 to 16 months. I'm really interested in your emergence convergence framework and what led you to it. How do you bridge the gap between skills and infrastructure, and how do you evangelize that when you're talking with executives and people in the tech space?

Speaker 3:

So that emergence convergence framework, which is an emerging-tech forecasting framework, evolved and was developed out of a very painful need that we had at the time. I founded VezTek in 2006, and the idea was that we're always going to be an emerging tech company. That means whatever technology is emerging, we're going to provide consulting and development resources in those technologies. Well, the challenge was that at that point the technology landscape was shifting, like a complete paradigm shift, within about three to five years. Then what happened is that the period of that change became shorter and shorter and shorter; the frequency just kept going up.

Speaker 3:

Currently, in the post-COVID era, we are at 12 to 14 months, 12 to 16 months max, and that's when change happens. If you look back, every couple of years or less we have some new technology that becomes the talk of the town. So this is the landscape we've been dealing with since 2006. What many of our peers in the industry do is wait for the demand to come in, and when the demand comes in, they go out and look for resources and then try to capture the demand.

Speaker 3:

That model wasn't working for us. So we created this framework, which, like I said, is a forecasting framework. It allows us to look down the path and see what technologies are coming down the pike, then invest in those technologies ahead of time, develop resources, build infrastructure, retool and retrain our existing teams, and see if we can be prepared in time to capture that demand when it comes up in the market. We have to be mindful of hype versus actual demand, of whether actual investments are coming into that space, of whether there are enough resources available in the market, things like that. So that's the emergence convergence framework. Currently it gives us about a 12-to-14-month head start, and that's a good thing to have: knowing that we need to be prepared for technology X in 12 months.

Speaker 2:

That's incredible, because most of the time you don't see it that quick. You typically see cycles of anywhere between 24 and 36 months on a turnaround for something very new and innovative, where we don't really know what it is and we're looking for experts in the field to help us figure out a trajectory. So it's incredible that you have that. I want to dive into the book a little bit, Makers: A Slender Knowledge. I started reading it and I've been fascinated by it; I really want to get through it. I want to hear your perspective on what led you to write the book and how you curated knowledge from people from different walks of life to create it.

Speaker 3:

Excellent question. So what happened is that, many years ago, and I mentioned earlier that our first large-scale project that leveraged AI was in 2015, every time I would bring up AI, machine learning, anything like that, the most common response was Terminator: robots are going to take over the world, right, the doom-and-gloom scenario. As an entrepreneur, I'm a hopeful person; that's my default setting. So I started thinking there has to be a way to avoid that scenario, and I started asking questions. At that point, looking at the evolution of technology and all the forecasting models that we had, the estimate was that it would take about 20 years for AI technology and robotics technology to get to the point where it becomes AGI, artificial general intelligence. That's the threat; that's your Terminators, right?

Speaker 3:

So I started asking this question: what would it take to stop AI from doing stupid things 20 years from today? I literally asked this question of countless people, anybody who would listen to me, regardless of their background, because I really believed that magic happens at the convergence of different disciplines; as long as you keep it within one silo, within one area of knowledge, you're kind of limited. So I went across the aisles and spoke with people in different fields, including politics, clergy, sociology, you name it. The response that resonated with me the most came from a person who asked me a question in response to my question: what would you do today to prevent your kids from doing stupid things 20 years from today? And it's like, that's a good question. What do you do? Well, you try to teach good values to your children. You hope that they learn, you hope that they're listening, and you hope that they become better people. And that's exactly what you need to do with AI, because the way you train AI and the way you raise children are similar in certain ways. Similar as in: when we teach our kids values, the best way we've been taught is to be the role model. Kids are looking up to you; do good things in front of them and they'll learn, instead of lecturing them. Sometimes you lecture them, and sometimes you expose them to good environments. The equivalent of lecturing would be direct data entry in the AI world, right? Sometimes you expose them to different environments: take them shopping with you and let them see how shopping is done at a grocery store, and sometimes you give them the credit card and have them pay the cashier. The equivalent of exposing them to different environments in the AI world would be, for example, exposing your AI model to YouTube data. So there are lots of similarities, and that's where the idea came from: if you teach good human values to AI today, if you bake that morality and those values into the way we are developing AI, then there is a chance that AI will develop into the kind of beings that we can coexist with and thrive alongside.

Speaker 3:

Now, when I started writing the book, initially I thought of writing it almost like a manifesto, a how-to, 10-ways-to-do-X kind of model. But I had the good fortune of speaking with Mr. Mark Victor Hansen, the Chicken Soup for the Soul author, a very well-known person who later became my publisher. I had a conversation with him about it, asking him the same question, and he said, you've got to write a book about it. I was like, okay, let's write a book. And when he heard that I wanted to write it as a how-to, he said, no, no, no one wants that. Write it as a story.

Speaker 3:

So the book is a work of fiction, but mindfully divided into chapters. Each chapter is about a value and an experience of AI, how it interacts with humans and how humans respond to it. My hope is that we can raise some questions and trigger people to think about what can be done in our personal capacities today, because our decisions today are the data for AI training tomorrow. My decisions today in my organization don't just impact my people today; they can potentially impact a lot of people in a lot of places down the road if the decisions I make today are used for training AI. So that was my motivation.

Speaker 2:

Yeah, Sani, the way you've woven the interaction of human and artificial intelligence values into a narrative is a very powerful approach; I think it brings awareness and it's genuinely thought-provoking. The storytelling aspect seems to connect deeply with our need to understand complex issues like artificial intelligence through relatable experience. Given this impact, do you envision that the collective stories and experiences shared in society could lead to a more unified ethical framework for AI, and how might we harness storytelling to create AI values that are widely accepted and implemented?

Speaker 3:

In one of my recent interviews, I brought up the idea of AI ethics and values, and the person interviewing me said, well, all AI is built on values; there is no AI without values. Now, whose values? That's another question. From history, we do know that whenever we try to impose any one entity's values on other people, it doesn't work. We have seen that over and over, whether in a religious context, a political context, or any context. So I don't think it's going to be values imposed by any single entity. I think what's going to happen, and these are just speculations, right, we don't know yet, is that over time, hopefully, the human race, as citizens of this world, will come to some understanding, some agreement over some basic rules: these are the five basic values that we can agree on, for example. Perhaps that would be the starting point.

Speaker 3:

I don't think it's going to be a whole set of values coming from any one entity. The other possibility is that you end up with one clear market leader. Google, for example, controls 80% of the search market today; they can set the rules for the search industry because they own 80% of the market share. Something like that could also happen, where one company or one entity has so much power that they can set the rules. I have also speculated that the UN may have to get involved, some UN-level body that can come in and say, okay, we need a central international organization to come up with some principles, some values and goals, some rules of engagement that we can agree on. We don't know yet. These are all speculations; your guess is as good as mine.

Speaker 2:

So, Sani, as our conversation draws to a close, I'm really struck by the poignant dedication in your book, Makers: A Slender Knowledge, where you charge your sons not only to coexist but to thrive alongside AI entities.

Speaker 2:

The message resonates with me profoundly because, as a father of young twin boys who are at the onset of their journey in a world where the lines between humanity and AI are becoming increasingly blurred, I find it fascinating, but somewhat daunting, to consider the landscape and how they will navigate it in the next five years, terrain where AI will become even more present and influential. So, drawing on the hopeful vision you set forth for your children, where do you see us standing with AI in that timeframe, and how can we as a society lay the groundwork for future generations to not just adapt but leverage AI to their advantage, ensuring that they don't just survive but indeed thrive?

Speaker 3:

So I don't call myself a futurist; I call myself a nexus. A futurist would say, oh, in 70 years X is going to happen. I'll be dead in 70 years, so you can't come and sue me. I'm more interested in what's likely in the next few years so we can plan for it. Your kids are young, four years old, so they are about, what, 20 years away from the workforce, roughly, give or take. My kids are a little older, 13 and 10.

Speaker 3:

The way I think is 10 years ahead: what the world looks like 10 years from now, and then work backwards. And this is not just my opinion; many, many experts in the industry agree on it. Life 10 years ahead is likely to be very virtual. We got a taste of that with the whole idea of the metaverse. It was too early, ahead of its time; right now it's just entertainment, and we don't have enough of the right kind of technology to really put the metaverse to work.

Speaker 3:

If you look backwards to the early days of the internet, the very first version, the very first generation, was just posting some articles on a server in Urbana-Champaign, for example, so that researchers in Chicago could read them. That was the internet, the very first version. In the second iteration there was some interaction, and e-commerce got involved. Now, in the third one, it's decentralized data and trust; that's big. So you keep going forward, 10 years forward: virtual lives, a virtual internet, a virtual lifestyle. That's highly likely to happen, where we work, we play, we live in these virtual environments. A very early version of that is what we are doing right now, remotely, but we're talking about fully immersive environments in the future.

Speaker 3:

In order to get there, there are certain technologies that need to happen. The e-commerce system needs to evolve. Internet speeds need to evolve. The energy sector needs to evolve. Currently, 3% of global energy is being consumed by emerging technologies. By the end of this year, we are expecting 8% of global energy to be consumed by emerging technologies. That's not sustainable, so that has to change.

Speaker 3:

As for jobs: until recently, I used to say that if it's a data-driven job, anything where you mainly work with information (think a physician, not a surgeon, but a physician who works mostly from information), those jobs are very much at risk of being significantly impacted. But I think you, or somebody, posted on LinkedIn yesterday about a robot that came out this month that can do dishes and fold laundry, that kind of stuff. It's like, wow, okay, so even physical jobs are now getting impacted. So for the kids especially, knowing that we're moving towards that virtual world, and knowing that in order to get there we need certain other technologies and industries to evolve, I think that's where we need to pay attention, because that's going to determine the education of our children today, their training and their future professional prospects.

Speaker 2:

Fascinating. So, double-clicking on that a little bit, this is me thinking about embracing artificial intelligence and understanding how AI plays a role in learning and development at a much earlier age. Do you think kids who are now in elementary school will be more exposed as we start looking at AI's role in education, AI as an educator, as a way for them to adapt and learn faster than most kids today? Is that the kind of vision you're starting to see with kids in school today?

Speaker 3:

So I have seen a mix of that. I've seen some kids actually pushing back on AI and AI-based tools, the way they see it, and I think that has a lot to do with the grownups around them, teachers and parents, for example. My son is like, I hate ChatGPT. I said, why would you say that? That's a very strong statement. He's like, because before ChatGPT, my teachers would give me homework, I would come home, do my homework, turn it in, and I'm good. Now my teacher thinks I'm using ChatGPT to write my essays, so they want me to do my homework in the classroom, sitting in front of them.

Speaker 3:

Now, when my son writes or is doing his thing, he likes full focus: headphones on, certain music playing, full quiet, no disturbance. When he's sitting in the classroom, he doesn't have that luxury. So he sees it as a threat to him and he pushes back. Even when I tried to explain that it can make his life easier and there are certain things he can do with it, he wouldn't accept it. So there's that.

Speaker 3:

But then there are other kids who are interested in it and want to learn it. They have to get comfortable with AI, because this is their future. Kids who are in middle and high school today, and even college, are going to be kind of like our generation, you and me, with how we adopted the internet. We aren't internet natives; we are internet migrants. We were analog before, living analog lives, and then we adopted the internet lifestyle. The next generation were digital natives, so for them everything digital comes very naturally. Kids today in middle and high school and beyond are not AI natives. Elementary school and younger, they're going to be, to a degree, AI natives, so they will be very comfortable with these technologies.

Speaker 3:

My biggest fear, in the context of this conversation, is about the skills that we are giving or will give to the kids, because even grownups are not 100% certain what we need to learn, how we will use these technologies, how we will get along with these technologies. Many of us see it as a threat, many of us see it as a tool, and others see it as a belief system, believe it or not. I've been told, "I don't believe in AI," and I'm like, it's not a deity that you have to believe in; it's technology. So even we are uncertain, and I don't know how we're going to give clear guidance and direction to the young ones. It's an evolving situation, even in the education field.

Speaker 2:

I definitely agree with you that the loss of that personal connection between a student and an instructor is a concern. I've spoken with a couple of educators in this field who have that same fear: I have a great connection with my students, and there's a lot of concern that a big drift might open up in that student-instructor relationship in the classroom setting, in feeling that personal, human connection to another individual. But I think back to your book again: figuring out how we can integrate and coexist with these entities will probably take some time, and it's going to be interesting to see what happens. Maybe in the next five years, maybe the next 10; I'm not going to be a futurist, but I'll say that in the next 15 to 20 years we might see a completely different landscape when it comes to education in the classroom.

Speaker 2:

Sani, thank you so much for joining us today on the show. Where can we follow you? Where can we get more? Where can we learn more from you and your travels?

Speaker 3:

The best way to get in touch with me and follow me is via LinkedIn. I'm quite active there, and on LinkedIn it's just my name Sani, S-A-N-I.

Speaker 2:

Awesome. Sani, thank you for this wonderful conversation today. Thank you for sharing your knowledge with us and we hope to have you back again on Tech Travels. Thank you very much, it's been a pleasure.