Tech Travels

EP17: High-Performance Computing Revolution: AI, ML, and Quantum Innovations with Brooks Seahorn

Steve Woodard Season 1 Episode 17


Curious about the untapped potential of high-performance computing (HPC) in revolutionizing artificial intelligence and machine learning? Our latest episode promises to demystify these complex topics with the help of our guest, Brooks Seahorn. Imagine having access to a colossal, temporary supercomputer that can transform weeks of work into mere hours—Brooks will break it all down for you. We'll journey from the historical significance of early supercomputers like the IBM Stretch to today's cutting-edge giants like Frontier at Oak Ridge, emphasizing the indispensability of HPC in handling massive computational tasks.

Ever wondered how we've transitioned from the simplicity of tools like the AWS DeepLens to the awe-inspiring infrastructure of modern HPC clusters? This episode sheds light on that journey, emphasizing the importance of mastering HPC tools such as Slurm to bridge the skill gap in today's fast-paced AI landscape. We'll also discuss the staggering advancements in GPU technology and the cost implications that come with it, comparing the early days of cloud computing to today's rapid AI advancements. Whether you're a seasoned tech veteran or a curious newcomer, there's a wealth of knowledge to uncover.

The future of computing is here, and it includes the thrilling integration of quantum computing within HPC environments. Brooks and I explore how quantum machines could soon become a part of our HPC clusters, revolutionizing industries from encryption to edge computing. Hear real-world applications like using HPC on a seafaring vessel to analyze sea temperatures in real time, and the fascinating challenges of data transmission in remote areas. Finally, we'll share our passion for technology, tracing it back to the late '80s, and invite you to join us in inspiring the next generation of tech innovators. Don't miss this episode packed with insights, passion, and a call to action for all tech enthusiasts!

Support the show



Follow TechTravels on X and YouTube

YouTube Channel
https://www.youtube.com/@thetechtravels

Tech Travels Twitter
https://twitter.com/thetechtravel

Tech Travels
https://techtravels.buzzsprout.com/

Speaker 1:

Oh, there's no clicking, it's all keyboard. It's all keyboard and it's really powerful. And I would say for those of us with a little gray right here... Run it with a larger data set up on the cluster. We're going to give you, if you want, about a hundred cores and a couple of terabytes of memory. Keep the time below 60 minutes.

Speaker 2:

Welcome back, fellow travelers, to another exciting episode of Tech Travels. In today's episode, we're going to dive into the topic of high-performance computing, also known as HPC, and understand why it is a critically important element in building artificial intelligence and machine learning models. Now, this is a super complex topic, even for the most experienced technologist. So I thought, who better to help us break this down into more digestible, witty pieces than Brooks Seahorn, a true pioneer in the tech landscape? And, Brooks, it's great to have you back on. Welcome back to the Tech Travels Podcast.

Speaker 1:

Thanks, man, I'm glad to be back on. The first one was a hoot. I got a bunch of pushback on some things I said, so I'm excited to find out what happens with this one. Literally, the collective hold my beer after that episode, so let's see what happens with this one.

Speaker 2:

So I want to dive into this topic around high-performance computing, but I also want to help set the stage for some of our listeners here. You know, if I was a third grader, how would you explain high-performance computing to me in a way that I can easily understand it?

Speaker 1:

It's interesting when you talk about it in those terms because, you know, trying to really break it down like that, I get to this point where I say this: imagine if you could just snap your fingers and get a really big computer with just the right features you need, just the right amount of memory, just the right amount of CPUs and everything. Do a job on it, figure something out, like your math homework, get the answer back, and then you release it. It's what we've talked about so long in the cloud. You know, cloud was always: you only pay for what you consume, so you can build the biggest thing you want and then immediately get rid of it. HPC is that way as well. Those of us wanting to build machine learning models, wanting to build artificial intelligence, things like that: those are huge jobs that require huge computers. So HPC gives us a path to being able to say, give me that huge computer, literally thousands of CPUs, terabytes upon terabytes of memory, run the job, give me back the answer.

Speaker 2:

So this is just simply a way for us to aggregate computing power that delivers higher throughput for an outcome, right?

Speaker 1:

Exactly, exactly. That's exactly the idea. You know, you've got some gigantic ETL, extract, transform, load, that you need to do. That's a great example of where you could do massive ETL jobs relatively quickly, whereas normally, if you're just on your computer, maybe a really good desktop computer, it could take days, maybe weeks, to complete the operation. With HPC, with your work spread out across huge resources, that's where it comes in.

Speaker 1:

And let me put it in context for everybody, what I'm talking about. Think of computers like Frontier at Oak Ridge National Laboratory in Tennessee. This thing has like 9,400 CPUs inside of it. This thing is huge. And so that would give us the opportunity, within the HPC space, if you were doing an extract, transform, load, to say, look, I need a couple of hundred CPUs, because the transform is going to be massive and it's going to be a ton of work. But the thing is, I only need it for this job, and once the job is done, I want to release those resources to other users. That's the basic concept of HPC, and that's why it's important to understand it.
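For listeners who want to see the shape of that grab-and-release request, here's a minimal sketch of a Slurm batch script for a big ETL job. The partition name and resource numbers are made up for illustration; the `#SBATCH` lines are comments to bash but directives to Slurm, so the file also runs standalone.

```shell
#!/bin/bash
# Hypothetical Slurm batch script for a large ETL job. Partition name
# and resource sizes are illustrative only; sites configure their own.
#SBATCH --job-name=big-etl
#SBATCH --ntasks=200            # "a couple of hundred CPUs"
#SBATCH --time=04:00:00         # give the cores back after 4 hours, max
#SBATCH --partition=batch       # made-up partition name

# Placeholder for the transform stage; a real job would launch it
# across the allocation with srun.
nodes="${SLURM_JOB_NUM_NODES:-1}"
msg="ETL transform running on $nodes node(s)"
echo "$msg"
```

When the script exits, Slurm returns the cores and memory to the pool for other users, which is exactly the "release it when the job is done" model described here.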

Speaker 2:

And the interesting thing is, this is not new, right? This has been something that's been out there since the 1950s and 1960s. I remember people talking about the IBM Stretch supercomputer; that was back in the 1960s.

Speaker 1:

So it seems like.

Speaker 2:

I think it used to use transistors. I think I studied in college around vacuum tubes, really thinking about how supercomputers at that time were defined. So with artificial intelligence, with everything that's been happening in the AI space, there seems to be more of a renaissance, a move back to the hardware, right?

Speaker 2:

You remember, a couple of years ago, it was: we moved off hardware and we moved to the cloud, right? Now everything is moving back to the actual hardware, the CPU, the GPU, and programming at that layer. And I want to understand a little bit more about this resurgence. What's caused it? Is it just ChatGPT? It's AI, it's the evolution, it's the revolution.

Speaker 1:

It is. And the thing is, I liked the way you said that, because it is almost like a revolution. It's like we're going back to: okay, I'm not going to use a GUI, I'm at the console. Oh, I need to think about how much memory I'm actually using. Oh, I need to think about releasing resources, consuming resources, things like that. Whereas, you know, so often when we talk about technology, we're on our phone, some gee-whiz interface, being able to talk to ChatGPT, stuff like that.

Speaker 1:

This is very basic stuff, this is command line: what kind of CPU you've got, having the hardware there. That's usually the case, and this is what we're seeing with a lot of really large companies. They've got these giant clusters sitting there that they're taking advantage of, and it's getting back to the core of it. This is why so often people are surprised, moving into AI and ML, that the folks running these things are using these low-level Linux systems and low-level Linux tools. Where do I click? Oh, there's no clicking, it's all keyboard. It's all keyboard and it's really powerful. And I would say for those of us with a little gray right here, it's cool, because it's like, yeah, that's the way we used to do it, and to see the power of it come back, to where we've got the power back, is just so exciting.

Speaker 2:

It is incredible because, even though the technology has been around for some period of time, roughly 50, 60 years, we're seeing this resurgence back into it again. There seems to be a real skills gap, with people who've just never seen hardware like this before coming into this, and I feel like there definitely needs to be a way for us to upskill. Talk a little bit about what you're seeing in terms of the evolution of the learner, the advocacy around AI development and learning, not just the language model but getting down to the actual hardware element itself.

Speaker 2:

Kind of talk me through some of that.

Speaker 1:

You know, when you start talking about AI and ML, a lot of us, and you and I are both the same way, we started learning about ML, we started playing with it, we started doing things with it, and it was great, but it was always within this context of, hey, isn't that fun? And we were doing it right here on our laptop. Maybe we had a Jupyter notebook or something like that, or we ran something in the background, and it wasn't too heavy, it wasn't too intensive, and it allowed us to play with some of the tricks of it. For those of you out there who currently have a tear rolling down your face: the AWS DeepLens has gone away. I still have mine. I wish it worked. Mine is a complete brick. But when did we go to that, Steve? The event that we had, it was in Florida. Do you remember that one?

Speaker 2:

Remember that one? Yeah. Was it Tech Kickoff 2017? 2018, I think it was.

Speaker 1:

I think it was 2018. Yeah, you could go to that one class and walk away with a DeepLens camera, so everybody was trying to get into it. But the thing was, that model was so simple you could crush it right there on your machine. You could push it onto that device, have the model up in the cloud. That's fantastic. Here's the problem.

Speaker 1:

If you've learned that way and, let's say, you get hired by, and I don't want to name any companies, just think big company that may be using AI and ML in some way, when you get there, what you're liable to hear is something along the lines of: hey, that's a great model, here's what we want you to do. Run it with a larger data set up on the cluster. We're going to give you, if you want, about 100 cores and a couple of terabytes of memory. Keep the time below 60 minutes. We do have H100s out there, so be sure to bring those in as resources that you can use. And your head's just about to come off your neck, because you're like, what are they talking about? That's the skill gap: understanding what HPC is and how you can put your job up there. You don't have to necessarily know all the architecture and how things are actually working, but knowing it's there and knowing you can use it, or should be using it, for really big jobs. Steve, there are ML jobs that I know of out there that have runtimes of months, wow, in order to complete. And one of these things is consuming, you know, a thousand CPUs, several hundred terabytes of memory, some of the biggest GPUs that you can think of. To run that over a couple of months on a machine like, you know, a Mac here or a Wintel box? Impossible. Can never be done. And so you've got to make that jump.
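Translated into a Slurm batch script, that exact request might look roughly like this. It's a sketch: the GRES name for the H100s and the job name are assumptions, since every cluster names those locally, so check with your admins.

```shell
#!/bin/bash
# Hypothetical translation of "grab about 100 cores, a couple of
# terabytes of memory, keep it under 60 minutes, bring in the H100s"
# into Slurm directives. GRES and job names are assumed, not standard.
#SBATCH --job-name=train-model
#SBATCH --ntasks=100            # grab about 100 cores
#SBATCH --mem=2000G             # a couple of terabytes of memory
#SBATCH --time=01:00:00         # keep the time below 60 minutes
#SBATCH --gres=gpu:h100:4       # bring in the H100s as resources

req_cores=100
req_mem_gb=2000
echo "asking the scheduler for $req_cores cores and ${req_mem_gb}G"
```

Knowing how to express a request like this is the whole skill gap being described: the model code doesn't change much, but it has to arrive wrapped in a resource request the scheduler understands.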

Speaker 1:

How do I do that? And as far as the advocacy goes, I will say this to just about anyone: start looking into the user tools of HPC environments. I don't want to do any salesmanship here. I do work for a company that supports one of the biggest open-source HPC schedulers out there. It's called Slurm. You can go check it out. It's completely open source, which to me is pretty amazing, and you can just go grab it, run it locally, and do stuff. Well, you'd have to do some work to get it to run, but the idea is that with that, you would start to understand: how do I actually put a job on a cluster? How do I actually do those things? And, to be absolutely honest with you, without even doing that, there's a lot of documentation out there that could get you going in the right direction. So if you're really serious about AI and ML and you don't have the set of skills that can answer the question, how do I put this on a cluster, start looking into it, get yourself educated on it. It's open source, open documentation. You can find these things, and I highly encourage anyone to go out there and check it out, because it's going to be a big stumbling block for you, a big one, when you suddenly get that one day when they show you the giant cluster.

Speaker 1:

Like, I've seen these clusters before, Steve. There was one I was at a couple of weeks ago. They were showing it off to us and I was like, wow, look at all this stuff. Cages like I've never seen before; several of the cage doors have water flowing through the door. Normally the cooling is on the rack; oh no, it's in the door. There was an H100 in there. For anybody out there who doesn't know the H100, how much would that cost us, Steve, to pick up an H100?

Speaker 2:

I think they're right around half a million dollars now, aren't they? A quarter million dollars?

Speaker 1:

The one that I saw was half a million, and I was just like, oh my, when I saw it. Knowing how to put a job onto that thing and use it is critical, and so that's where that stuff comes in, and where that skill gap that can bite you shows up. You know what it is? It's tantamount to this, Steve. Remember, we would talk to people about cloud and we would get to networking and we would say, okay, 192.168.0.0/24, who doesn't know what this means? And the room was just crickets, because nobody knew IP addressing. They knew cloud, but not IP addressing. If you're in AI and ML and you don't know how to use HPC, you need to get educated on it. But that's just the first part. I'll come to the second part later.

Speaker 2:

I remember that we would do that exercise for students and learners, right? And you think about the explosion at which we're hitting artificial intelligence with large language models. Now, with Microsoft's Phi-2 and Phi-3, you're getting into smaller language models. Now you're looking at what Apple is doing with OpenELM, right? There's just a rapid pace of innovation. And I think back to when we would talk a lot about Moore's law, and I love this concept, right, around Moore's law, and it was basically 10x every five years, 100x every 10 years.

Speaker 2:

But it seems like, and I'm just going to throw it out there, NVIDIA has gone a thousand x over the last eight years, and there's two more years to go. But, with that being said, the pace at which innovation is happening is just so fast. I feel like I'm watching Spaceballs again, when they go to ludicrous speed. Ludicrous speed, go!

Speaker 1:

Exactly. You know, the thing about it is, yeah, it is kind of ludicrous speed. What flips me out about it, though, is that there are still so many people out there saying, okay, you don't have a great business case yet. I don't know. I've seen some real good business cases where I've used tools like that to get jobs done really, really fast. No, it wasn't 100%, but I'm telling you, it sure cut down the work time. So, yeah, it's out there, and it starts making you wonder, moving at the speed that NVIDIA is innovating. Keep in mind, they put the Grace Hopper chips out there. Is it Blackwell that's coming out next, in the fall? Oh my goodness, and those things are even faster than that. Once you start getting into those speeds, and everybody, this is the point Steve and I are making here, once you get into that kind of performance, those big LLMs, you can run them a lot faster, you can make them a lot smaller. Suddenly they're sitting on your watch trying to help you out.

Speaker 1:

So my thing has always been kind of like, I don't expect it to be some big business driver. I really don't. I'm almost seeing it as this incredible assistant that's going to live in and around our lives to really make things better. Now, of course, we do have the challenge of, you know, I want AI to wash the dishes and fold the clothes; I don't want it the other way around, where I'm folding the clothes and doing the dishes while AI is over there doing my job for me. But we're kind of seeing a little bit of that. So, at that speed of innovation, and, by the way, Intel's CEO came out this morning on that very point, talking about Moore's law, talking about the speed of innovation, talking about NVIDIA, because what those GPUs are doing has them all spooked over at Intel. Going back to HPC: knowing, for example, what a GPU is, how it works, how to take advantage of it, like the different types of mathematics that run much faster on a GPU, and knowing how to call those out in a job and take advantage of them.

Speaker 1:

You've got to know how to do it, everyone. You've got to know how to do it, because once you get to that space, yeah, you can figure it out, yeah, they may show you. But show up knowing what you can do and you'll be a lot better off.

Speaker 2:

Good gravy. So, fun anecdote, right, and this is from a great source, ChatGPT. I ran this experiment through it. Remember, going back to our first podcast, we were talking about the lack of humor in artificial intelligence. I hope it was listening to our podcast, because here we go. All right, so the fun anecdote is this: the comparison humor about where NVIDIA is heading with the Blackwell architecture is that using traditional computing for AI tasks is like trying to tow a jumbo jet with a bicycle. With high-performance computing and Blackwell, it feels like you've swapped the bike for a rocket engine.

Speaker 1:

Mm-hmm, mm-hmm, exactly, exactly.

Speaker 2:

Not far from the truth, really.

Speaker 1:

Yeah, and the thing is, talking about our last podcast, this is what came back to bite me, that whole hold-my-beer moment. Do you remember I popped off about how AI couldn't get the muddy sound in music that I can get when I pull down one of these guys back here? I had somebody within a week, I didn't tell you about this, Steve, within a week, send me a muddy-sounding bass line. And to make it muddy, he trained the model on some of the strings, and what happens is, when the string vibrates too big, everyone, it'll hit the fret, the metal fret, and you'll get that ringing sound. He injected that into it, and then, on top of that, it was almost like he figured out how to get a grainy AM-radio sound and put that into the model too, to the point that now you just send the notes in and you get this muddy-sounding bass guitar.

Speaker 1:

And again, it's that speed of innovation. When you have chips like what we're talking about, they can crunch down that data in a big hurry and get it out there. Now you're in a situation, I think, Steve, where not only have you come up with a great idea, but how fast can you get that model created and pushed to the market? Remember that whole thing about how fast you can get your feature into production? How fast can you get that LLM squashed down and ready to roll on that device?

Speaker 2:

Yeah, that's a really good question. I think that's the biggest thing. I think IDC did a study where they said 55% of organizations are still trying to figure out what their position on AI is, right, what their foundation is, and trying to understand what it is they're trying to solve for. Almost every single day, I hear a lot of requests for, you know, hey, talk about AI, talk about some prototyping, some sort of MVP, and the question always goes back to: what exactly is the problem you're trying to solve for? What is the desired business outcome? And a lot of it is vague, it's very ambiguous, and I don't think they are truly able to define exactly what it is they're trying to do with it.

Speaker 2:

I feel like we're still at very early stages, the early adopters. And you're right: those who figure out how to rapidly prototype something and get it either customer-facing or as far into their environment as possible, that's going to have the best total cost of ownership, the most return on investment. I think that's going to be the winner, and I think most are probably still scratching their heads on it, right? What is this widget?

Speaker 1:

Yeah, what is this thing, what is this tool? I mean, it literally is like walking into a mechanic's shop, seeing a tool laying there, and you go, wow, that's a powerful tool, I wonder what I could use it for. There are a couple of places where it's already being used, but you get this idea that it could be used in an even bigger space. And I think that's one of the things that gets me when I think about people getting into AI and ML without knowing HPC, because of that thing right there. When you finally think of it: oh, wouldn't it be great to? Yeah, that would be great.

Speaker 1:

Now, how are you going to get that model done as quickly as possible to get ahead of your competitor? Because I think, Steve, maybe you've seen this in your life: have you ever thought of this great idea, and then, two months later, somebody was doing it, and you're like, hey, I thought of that first, that was my idea, and you didn't tell anybody? It's like there's this collective idea that descends on the planet and somebody does something with it. Understanding HPC, having access to HPC, that's going to be, to me, a little bit of a...

Speaker 1:

Am I going to say the right word? Democratization? Yes, democratization. I don't know if that's the right word, but you remember we used to preach about that. Like in the cloud, you can use anything if you can pay for it, and it's a lot cheaper. If you have access, if you're at an organization that has a giant HPC system, you're going to be able to feed it all that data to get that really good model. This can become that incredible tool, so that when you finally do think of, I know what I can do with this, you're actually able to spit out that model and make it work.

Speaker 2:

You mentioned an interesting point: working for an organization that has all the equipment, that has the capital to be able to invest in the equipment. Do I need to be in an organization that has HPC running in order for me to learn? Can I learn on my own? Is there a way for me to simulate running on a high-performance computing cluster without actually having to pay for it? It seems like there might be a little bit of a hook there.

Speaker 1:

Yeah, we've gone all the way back around to here again. We're all the way back to trying to get our CCNAs in the early 2000s, and nobody owns a router. So it's like, how are we going to learn how to do this when I can't afford one of those things? There are actually some simulators out there. There's one we make called DSO, Docker Scale Out. It'll allow you to basically simulate a 10-node environment with GPUs. It's very small, it runs inside a virtual machine. We have one; there are a few others out there that you can find that will allow you to do some simulation.

Speaker 1:

If you're crafty, and I dare not show exactly how, I'm looking at it right there, you can actually install Slurm, the product that we kind of watch over, open source, remember, on Raspberry Pi 5s. You can do it. You can build a small cluster at home for a few hundred bucks and actually do those sorts of things. So is it absolutely free? No, it's not free. Our DSO is free, running it inside a virtual machine; you have to do a little bit of work. You could do it on a Raspberry Pi 5 as well, and actually the amazing thing about that, Steve, is that if you do it that way, you have to do it exactly the way it would be done with a giant cluster, because you're going to treat each one of those things like a node and put in all the other special machines that you want. So you can do it. It takes a little more work. Nobody has, like, a nice, you know...
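To give a feel for "treating each Pi like a node," a minimal slurm.conf for a home cluster like that might look roughly like this. Every name and number here is made up for illustration (a real config needs more settings, and memory values depend on your boards):

```
# Hypothetical minimal slurm.conf for a home Raspberry Pi 5 cluster.
ClusterName=picluster
SlurmctldHost=pi-head                  # controller node (made-up hostname)
# Each Pi is registered as a compute node, just like on a big cluster.
NodeName=pi[1-4] CPUs=4 RealMemory=7800 State=UNKNOWN
PartitionName=home Nodes=pi[1-4] Default=YES MaxTime=INFINITE State=UP
```

The point is that the workflow (define nodes, define a partition, submit jobs against it) is the same shape you'd meet on a production cluster, just smaller.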

Speaker 2:

Go to this website and begin clicking. And if they do, I wouldn't trust it.

Speaker 1:

Just log in with your credit card, that's all we need. Just log in, yeah, and it's going to work just fine.

Speaker 2:

Okay, so there are simulators out there that can give me the capability. I can build my own kind of quasi-quantum computer with a couple of...

Speaker 1:

Well, not quantum. Quantum and HPC... well, HPC.

Speaker 2:

I was going to... sorry, I was kind of moving in that direction. Quantum seems to be the potential future trend of HPC, right? Yep. From your perspective, help me understand a little bit of the context, separating the difference between high-performance computing and quantum. You know, broad spectrum.

Speaker 1:

Yeah, here's the thing about it, and this is what I've heard a lot from engineers in the space: quantum, to a lot of systems, will simply become another resource in the HPC environment. It'll just become what we call a generic resource that you can reach out and use. And so at that point, and again, this is understanding how to use HPC, when you submit your job, you can make a request on the command line, yeah, you command-line this thing, and you make a request for when it becomes available, like time on a quantum box. Now, there are a couple more things we have to talk about. For example, understanding application layout, understanding you're not going to do this in a language that's not supported in that environment. So we're back to things like C and C++, those types of languages. Python does work well in HPC. I'm a big Rust guy. What do they call us, Rustaceans? Rust does have some support in HPC. So knowing how to write code in those particular languages is obviously going to be a big part of making sure it works. But that's really the end point of it.

Speaker 1:

Steve, when a lot of these quantum machines become available, if you're in an organization that can afford one or can access one, it could become part of a cluster, and then it would become a resource that you could take advantage of when you actually push jobs out there. So that's another big power of HPC. I would expect that once they become available, our fun friends at name-your-favorite-cloud-provider will have them out there and you could spin one up, and they have different HPC solutions as well. They're not going to be free. For example, with ParallelCluster in AWS, you could spin up a cluster right there and experiment with it. Make sure you set your billing alarm, kids. And then, once they have quantum available, you could theoretically spin that up, add it to your cluster, and make it a consumable resource. So once quantum becomes available, I really see it becoming a quick access for anybody who needs to use it in an HPC environment.

Speaker 2:

Quantum's going to break AES-256: true or not? Debatable.

Speaker 1:

You hear the hesitation? Because I spoke to somebody not too long ago, who I cannot name, who I absolutely trust, and he was like, yeah. And I was like, really, dude? You think so? His point was well taken: any key whatsoever that we use for cryptography is not about encryption, it's about time of encryption. How long do you need to protect that data set from being decrypted?

Speaker 1:

Great story about this guy. He was going to a football game, won't name where in the world it was. It was American football, everybody, you know, the game where we only use our hands. Anyway, he was going in, and this was a long time ago; he told the story about 10, 12 years ago.

Speaker 1:

They were using these portable scanners to, like, swipe a credit card, print a ticket, give it to people, so you could buy it right there and go in. Well, he was watching what they were doing, and he told me they were changing the batteries out pretty constantly. So what this guy did was he literally went in the next day, because he used to own a consulting group before he became a C-level officer at a company, and he said: I think you're over-encrypting. I'm pretty sure you're over-encrypting.

Speaker 1:

They were using like a 256-bit encryption on these things, and he said to them: you don't need to do that. You need to protect these tickets for about 10 hours. Once the game is over, let anybody hack it. Great, you printed a ticket for a game that's already happened. You're a knucklehead. So I think, based on what he told me, yeah, he fully expects it. Now I kind of expect it, and it's just a matter of time, because encryption, more than anything, is protecting data for a certain amount of time, and if the math all works out, quantum is going to crack it real fast. That's why you're seeing all these people coming forth and saying, we've got a quantum-proof encryption protocol, and I'm like, really? You don't even know what quantum is yet. But okay, great, go ahead and do that.

Speaker 2:

But at least it's a step in the right direction, potentially, yeah. Let's hope it doesn't happen; a lot of Microsoft cryptographic keys are in AES-256. Swerving back into HPC again, what I find really interesting is when people talk about high-performance computing with edge computing, and then I think about bringing HPC capabilities closer to data sources like IoT devices, reducing latency, increasing processing speed. That, to me, is really, really cool. I can't wait for that to happen.

Speaker 1:

We have a customer who has a cluster on a boat. No, no, really, like a seafaring vessel. And what they're doing is running data crunching on sea temperature. They're dropping probes and stuff like that, doing that sort of work right there on the ship, and then, once it's all crunched down, satelliting it up. So absolutely, you can do that sort of stuff much closer and take advantage of it. The idea is, okay, I've got a machine here with 10 cores, it can go real fast. Wouldn't it be neat if I had 10 of these machines and, via HPC, I could have this virtual 100-core machine that would run the job much faster? You can stick that on a boat and get that answer faster.
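The fan-out idea described here can be sketched on a single machine. This is a toy illustration, not the customer's actual workload: the per-probe `crunch` function is a made-up placeholder, and the parallelism here is across local cores, whereas an HPC scheduler like Slurm does the analogous fan-out across whole nodes.

```python
from multiprocessing import Pool


def crunch(reading: float) -> float:
    """Stand-in for the per-probe number crunching (here: Celsius to Fahrenheit)."""
    return reading * 9 / 5 + 32


def crunch_all(readings: list[float], workers: int = 4) -> list[float]:
    """Fan the probe readings out across local cores and gather the results.

    Same pattern, bigger scale: an HPC cluster splits the array of work
    across many machines instead of many cores.
    """
    with Pool(workers) as pool:
        return pool.map(crunch, readings)
```

For example, `crunch_all([0.0, 100.0])` returns `[32.0, 212.0]`, with each reading processed by whichever worker was free.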

Speaker 1:

Now, the other side of it that I always kind of go back to is, would it have been better just to siphon that data all the way back to a cluster sitting somewhere? You're going to have to decide, you're going to have to figure that one out yourself. But I will say this: you've got the option, and in some cases, let's say you're at the North Pole or South Pole, you may have to go with, we've got to do it on site. And by the way, heat's not a concern there.

Speaker 2:

So run those things like crazy, yeah. Yeah, I mean, you're probably going to have to get into something like low Earth orbit to be able to send the data through some sort of satellite communication. You talk about being in remote, distant places, about having a boat that's actually testing water temperatures. You're probably way out there in deep waters, far away from cell phone communications and towers.

Speaker 2:

So probably doing something with low Earth orbits, or LEOs. Close enough to probably see space aliens. Exactly, just close enough to see them.

Speaker 1:

They're coming down going, you know, you're doing it wrong here, we'll give you something. I wish that were true. But here's the other thing about it that I was really surprised by, talking to a lot of researchers who are creating these types of software applications.

Speaker 1:

In a lot of cases, as the model is building, there's steering that has to happen in order to end up with a really good model, and you can't send the data up and get that answer back in time. It's got to be local, going to your idea: I've got to keep it closer. So as these models are building, and they're pulling those water temperatures in, the data is kind of steering the way the application goes in terms of the logic to actually create that model. So you can't have it sitting back somewhere in Colorado. It's got to be on that boat in the middle of the Pacific. Once it's done, low Earth orbit, get the data over, everybody's happy. But you're going to have to get it closer to really take advantage of what's going on in terms of building that model in a reasonable amount of time.

Speaker 2:

I would think that cruise companies would be all over something like this, or already have the capabilities. They're doing cruise ships, right? They're out there everywhere, all over the international seas, going to different parts of the ocean.

Speaker 1:

If anybody who's watching sees my face kind of warp, it's because Steve knows good and well I know about this, and I know a particular cruise line that is absolutely doing this. What they're doing is fantastic, because the example I got to see from them was the question of, is the passenger intoxicated? That was the question they were trying to answer, and it literally knocked me over, because it was next-level customer service. Their point was not from a negative but from a positive: if we have a guest aboard one of our ships (and I hope I'm saying that right, because I got in a lot of trouble there; I can't remember if I was supposed to say ship or boat. One of them, you're not supposed to say. I think it was ship), they want to be able to say, oh, we have a customer there who's potentially drunk, we need to go help them, we need to send a crew member up there to help them get back. How were they doing it? They were doing it with ML. Where was the ML running? On board the ship. Because if you send that data up and wait to get an answer back, they may have fallen overboard.

Speaker 1:

So, yes, that idea of getting it close like that. And it's not even just the remote stuff we're talking about. There are also cases where there is, see if I can walk up on this one carefully, a great park to take your family to for fun, and that's all I'm going to say. They're using that type of stuff too, from a security standpoint and to enhance the experience of the people who are actually there. For example, let's say there's a character that your kid is just in love with. So what you do is, you give them money, they give you the enhanced experience. They can literally find you in the crowd, verify that's who they're talking to, and so on. They use the modeling to make sure that when the person steps out to say hi to your kid, they're in an area of the park where there are fewer people, fewer chances for other people to come up, to increase the possibility of that one-on-one experience.

Speaker 1:

So, everybody, if you think this stuff hasn't got some amazing capabilities, you're wrong. And this to me is where it really gets me, Steve. In some cases it's almost getting to the point where this stuff is going to make things look magical. Remember that old quote, that any sufficiently advanced technology is indistinguishable from magic? That's where this stuff is taking us with AI, ML, and being able to crush these big jobs using HPC to quickly get it out into the space. I'm so excited.

Speaker 2:

I really am. And you're right, I think a lot of people are excited. Me personally, I've really been excited over the last six months. You see the trajectory of where things have been over the last three to five years, and then all of a sudden, probably in the last year, 18 months or whatever, you've seen this huge explosion, a massive amount of concentrated energy into a particular topic, specifically around AI. There's more now than ever before; you have more access to things such as training, learning, education. Everything's online.

Speaker 2:

It's very low to no cost in terms of being able to learn, adapt and pick up a new skill, which is also kind of why I love it, because there's something new coming out all the time.

Speaker 1:

Exactly. There was something I was looking at the other day, and I ground my teeth a little bit, because it looked like, oh great, somebody has a new acronym for AI. Super duper. It was like, gear-reversed low language modeling with inherent redundancy. Okay, just stop right there. This sounds like jibber-jabber. But the point is that, yeah, there are so many people working in the space coming up with these ideas, and I think some of them want to make it sound like it's something special. But in a lot of cases, when you really look at it, it's those old concepts, those old principles, injected into the space, because some of those principles stay the same. And I think, more than anything, that's why we're seeing the explosion. We've been there, we've done that, we've learned the principles. Now let's put those principles in place with AI and ML; we don't have to relearn them, and we can go even faster.

Speaker 1:

What I'm curious about, though, is what new stuff is going to come out of this at some point. It's not going to be, oh, a new song, or somebody's done something funny, you know, Will Smith eating spaghetti. It's not going to be that. It's going to be something totally weird, something totally different. And I think the challenge for a lot of us is going to be trying to get over that thing of, is this evil, is this bad, and go, no, no, no. It's just technology. The application of it is where we're going to have to ask ourselves where exactly we are with this sort of thing.

Speaker 2:

You know, I listen to a lot of interviews with CEOs, a lot of people who are very concerned about the social impact of artificial intelligence on society, and there are a lot of skeptics saying, listen, it's great for the 10 million people who are already working in it right now, but what are we doing with the other 8 billion people on the planet who don't have access to this advanced technology? It's the question of how we shift the paradigm to get everybody involved. And I think it goes back to Maslow's hierarchy: when people don't have to worry about fighting for food and survival, they can then focus on things such as learning, training, education. Anyway, I digress.

Speaker 1:

No, that's a great point. That is a huge deal.

Speaker 2:

Yeah, but I think it's interesting, because, again, you see AI in almost every aspect of our lives, every aspect of our society. Everyone is going to be interacting with some sort of AI entity in the next 12 to 14 months, if not already right now. I guarantee you, if you're booking a travel ticket anywhere right now, you're probably going to be chatting with some sort of virtual agent.

Speaker 1:

That's probably the chatbot, right?

Speaker 2:

I'm very excited about the landscape, to see what happens. I know I need to continue educating myself and keeping pace. And I think, again, it's that thing where you have to run as fast as possible just to keep up.

Speaker 1:

Yeah, yeah, you do. And this is something I've told a lot of people about technology, and there's no nice way to say it. Oh my gosh, when did it start for me? It was probably about 1988, 1989, working on my degree in organic chemistry, starting to work with computers and stuff like that. The bug bit, and it has not let go of me since. I still absolutely love technology. My goodness, I'm such a nerd.

Speaker 1:

If you don't have that passion about it, this can be kind of a tough business, because, people, you've got to learn your whole life. Yeah, and that's part of the fun for us, learning how to do these things. Not to mention exactly what you said: I'm back to the command line again. Yeah, that's awesome. So where's the mouse? Don't need it. Just put it away; it's going to slow you down. It is that passion about it, and the fun and the excitement.

Speaker 1:

So I would say to a lot of people, and this is what I hope a lot of people get from this: when it comes to technology, let's make sure the door is properly marked. It's real big, and there's a big neon sign over the top that says everybody is welcome. Let's make sure that's there. But at the same time, let's not be ignorant and start walking into the crowd and shoving people towards the door, because they may not want to do this. It may not be something they want to do. But I will guarantee you there's probably somebody in that crowd you never thought about who's got a real heart for this stuff, who's looking at it going, I'll never be a part of that. We need to make sure they know: HPC, AI, ML, the door's open. You are welcome. We need all the help we can get.

Speaker 2:

Yeah, somewhere out there there's another Jeff Bezos or Jensen Huang, or the next Mark Zuckerberg. Right, there's the next innovator out there. You've got these people with a huge capacity to learn, and they just need to get in touch with the technology.

Speaker 1:

That's what I really love about it. Yeah, I mean, it's incredible, and the applications of it, it's just ridiculous. I was thinking the other day, just going through the grocery store, looking around, there are so many applications for this stuff sitting right here. I want an application where I can show the thing the watermelon.

Speaker 1:

Everybody does this. They'll pick up a watermelon and thump it. Yeah, this one sounds good. You don't know what you're doing. Or my wife has this thing: if it's got a nice yellow bottom and the bands are real wide, that's a good one. No, that doesn't mean anything. But if I could train an ML model to pick out a great watermelon, and it could be integrated into my glasses so I could just go, there's a good one? Let me tell you, that 99-cent download will be rich in no time whatsoever. There are so many applications for this stuff, and I think sometimes we get ahead of ourselves thinking it's got to be something huge, like flying the plane. No, back up, don't need that. Watermelon picking, that could be fun and interesting.

Speaker 2:

I like the idea of that. We'll stay away from the pilotless airline flights for now.

Speaker 1:

Well, you know, we say that, we say that. I don't know if you heard the story. One of our country's veterans, I believe what was happening was, he thought he had indigestion. Turns out, when he got to the hospital, he was having a heart attack or something like that, and he realized he couldn't drive himself. The man had a Tesla. He got in it, it autopiloted him to the emergency room, he got out and told it, go park yourself. Basically, that is amazing. That is incredible. When that stuff starts happening, that's what I'm talking about. Yeah, I still want my big truck that burns gas and that I can go mudding in, but knowing that that thing is there and can do that for us, wow. And all we've got to do is start learning about AI, ML, and how to use HPC to run these big jobs, and we could change the world. Exactly. There's some kid out there somewhere who can make this happen, and I cannot wait for them to do it.

Speaker 2:

Yeah, absolutely, Brooks. I could go on about this all day. I can't begin to thank you. So again, thank you so very much for joining us today, and thank you for sharing your insights on this really important topic. High-performance computing, again, it's very complex.

Speaker 1:

I hope we did it justice and were able to break it down for our listeners. We scratched the surface just a bit of this monstrous thing, but I'm hoping that by scratching that surface we have enough people go, oh wait, I need to go look.

Speaker 2:

Yes, you do, yes, you do. Yeah, and Brooks, thanks again for coming on. Yes, thank you for having me.

Speaker 2:

Oh, it's a pleasure. And to all our listeners out there, thank you for tuning into this episode. I really hope that you found it as informative and fascinating as Brooks and I did. Again, we really appreciate your support on these journeys. Don't forget to subscribe to the Tech Travels podcast on your favorite podcast platform. Stay tuned for the next episode. Until next time, stay curious, but, most importantly, stay informed, and happy travels.

Speaker 1:

Absolutely Bye everybody.