Episode 388: Don't Fear Generative AI – Invest In It
On this week's Stansberry Investor Hour, Dan and Corey are joined by Dr. John Sviokla. John is an author, executive fellow at Harvard Business School, and co-founder of GAI Insights – the world's leading generative artificial-intelligence ("AI") analyst firm. He joins the podcast to talk all things AI – its investing potential, limitations, and real-world applications.
John kicks off the show by explaining how GAI Insights is helping organizations and communities understand and use generative AI. Currently, many executives don't know enough about it to even recognize its opportunities in the workplace. John says that workers whose jobs involve words, images, numbers, and sounds ("WINS") will be the most impacted by this technology. He also breaks down the three new forms of capital: network, behavioral, and cognitive. When it comes to the latter, businesses are trying to protect their proprietary data and processes today by keeping their AI behind firewalls...
The sequence of how you analyze a problem is actually really complicated. Which models do you use when to find the signal that will give you the answer? Do I do the principal component analysis first? Do I do the regression analysis first? How do I manipulate this data so I can find the signal? There's a lot of expertise in that. You do not want to teach the large language models how to do that stuff.
Next, John talks about how these AI models are trained, the process of training workers to use AI, and the limitations of AI. One such area AI struggles with is creating new ways to look at a problem. However, it's surprisingly good at empathizing and mimicking human emotions. John then discusses AI's computability, the transformer algorithm, and how AI could impact the broad market. He notes...
We've done an analysis with the assistance of Valens Research, and we've looked at which companies are WINS intensive. And we think about 50% of the market cap and 50% of the profit of the entire publicly traded market is up for grabs with generative AI and AI.
Finally, John describes the four levels of generative-AI adoption. Those in the top level – "intelligence leveragers" – drive value by using AI to build AI. Right now, technology is the only industry with these kinds of companies. But John says that in the next five to seven years, each major industry will have an intelligence leverager. This presents a huge opportunity for investors. John gives several real-world situations across different industries (like pharmaceuticals and financials) where AI implementation will be game-changing. He says, for example...
Pfizer is very adept and advanced in the way that they're applying AI. And if you think about the economics of drug discovery, if [they can improve their odds] even a little bit, that's hugely valuable in their business model.
Click here or on the image below to watch the video interview with John right now. For the full audio episode, click here.
(Additional past episodes are located here.)
Dan Ferris: Hello and welcome to the Stansberry Investor Hour. I'm Dan Ferris. I'm the editor of Extreme Value and The Ferris Report, both published by Stansberry Research.
Corey McLaughlin: And I'm Corey McLaughlin, editor of the Stansberry Daily Digest. Today we talk with John Sviokla, co-founder of GAI Insights, and as you'll hear, an AI expert.
Dan Ferris: John is a very exciting guy. We had a ball talking to him in this interview. He is the first person who has spoken at length about artificial intelligence that I feel really gets it and really has something to teach us. Get out your notepads. You're going to be busy. You're going to love this. Let's do it. Let's talk with John Sviokla. Let's do it right now.
Corey McLaughlin: For the last 25 years, Dan Ferris has predicted nearly every financial and political crisis in America, including the collapse of Lehman Brothers in 2008 and the peak of the Nasdaq in 2021. Now he has a new major announcement about a crisis that could soon threaten the U.S. economy and could soon bankrupt millions of citizens. As he puts it, there is something happening in this country, something much bigger than you may yet realize. And millions are about to be blindsided unless they take the right steps now.
Find out what's coming and how to protect your portfolio by going to www.americandarkday.com and sign up for this free report. The last time the U.S. economy looked like this, stocks didn't move for 16 years, and many investors lost 80% of their wealth. Learn the steps you can take right away to protect and potentially grow your holdings many times over at www.americandarkday.com.
Dan Ferris: John, welcome to the show. Glad you could be here.
John Sviokla: Lovely to be here again. Thanks so much for having me.
Dan Ferris: You bet. So, for the next hour or so, Corey and I will be haranguing you with all kinds of questions. You are a new guest to the show, so I think it's probably appropriate to give our listeners and viewers a little background on yourself – and maybe even just start with what you're doing right now.
John Sviokla: Sure. Yes. Right now, I'm co-leading a research firm that's focused on how generative AI and AI are going to change organizations and communities, and how those organizations and communities can understand what it is, how to use it, and get ethical value from it. So, a really exciting time. It's just so dynamic right now.
Dan Ferris: And for whom do you do this, let's say?
John Sviokla: Sure. Yeah, absolutely. My career really started when I was a professor at Harvard Business School. My doctoral thesis was on the economic impacts of adopting expert systems, and I created some of the first courses there with some colleagues on electronic commerce and AI and so forth. From there, I went into being a consultant in digital consultancies, and then started this firm after a career helping organizations do that. So, the people we do it for are very, very clear: the AI leader inside large organizations, usually half a billion or more. That person is either a technical person who has a business responsibility or a business person who's inherited the technical responsibility, but they have real accountability for delivering value to the bottom line.
And then the second is the vendor and investor community. How should vendors get their message out? Because it's a very crowded market, lots of investment and innovation. So, think of us as a mini research company, like a Gartner or Forrester, where it's a two-sided market. We serve the practitioners, and we serve the vendors. And our core issue here is to enable people with community, with news, with research, with strategy, best practices, and the name of the company is GAI Insights as in generative artificial intelligence insights.
Dan Ferris: Now, this really interests me because, of course, everybody has their opinions on what artificial intelligence means and what the landscape will look like going forward – which industries it will eliminate or reduce, and what will be there in their place. And I think it's interesting that you're a consultant to these people, because when I think about a company like Accenture, the gargantuan publicly traded company, I've thought to myself, well, 80% of those people will be fired, won't they? Because you will replace them.
Instead of talking to them in a meeting and paying them way too much, you will simply type your question into your perhaps tailor-made chatbot and say, "What the hell do we do? How do we address this problem?" But you're smart. You're getting in between that. You're the guy who's telling the company that's going to fire Accenture, to a certain extent.
John Sviokla: Yeah. We published a framework back on September 9, 2023, in the Harvard Business Review that we call the WINS framework. Because one of the biggest questions, Dan, is where is this going to impact? What's baloney, what's real, where's it going to impact, how soon? Those are two massive questions in the market. So, people talk about cognitive work or knowledge work. That's too broad, right? My carpenter is a knowledge worker. So is my attorney. My attorney is going to be much more impacted by generative AI in the near term than my carpenter.
So, we came up with this framework, and it has two dimensions. One is, if you look at the cost base of your organization, how much of it is made up of people creating or improving words, images, numbers, and sounds – WINS. And that includes software. That includes movies, right? So, that's one dimension. And the other dimension is, how digitized is that already? If the answer is very digitized already – like one of my former employers, PricewaterhouseCoopers ("PwC") – the answer is your entire P&L, with the exception of some technology, some risk issues, and some real estate leases (they don't even buy it), is all WINS workers, fully digitized, right?
And this works at the task level. What tasks? Customer service, software programming. Functions: IT, marketing, customer service. Industries: movies – you look at the strike in Hollywood; they should strike, because their industry is going to be completely transformed. Or a function inside biopharma like new drug discovery. That's a lot of WINS work, right? And a lot of it's highly digitized already, with things like AlphaFold from Google. So, those are the things that we believe in the next two to four years are going to be fundamentally transformed. When we published it, it was three to five years.
And if you look at the past year, 18 months, you see the leaders of WINS-intensive organizations commit to this. So, JPMorgan Chase: generative AI throughout the organization. You look at folks like Standard & Poor's: in terms of how they're publishing their data, they have a generative AI agent in front. You look at any marketing department: AT&T is going after this. And there are new startups like GetJerry that have completely transformed the nature of customer service. When you're in dialogue with the machine, it's fundamentally different than when you're just querying it for an answer. And that's what we're talking about here.
So, those are the tasks, the functions, and the organizations. And I'll just say one other thing before your next question. What we're seeing is that the talent that understands how to use this set of tools looks at it as the biggest career accelerant, right? Because I can do better work, faster, with higher quality, and with more knowledge than my competition inside an organization. And we're getting a lot of calls from people who are frustrated inside their organizations that aren't using these new tools.
Corey McLaughlin: Yeah, that actually gets to a question I had in mind to ask you. When you talk about the talent in the organizations, a lot of times, I guess, they're the ones experimenting with these new tools first and seeing how they do it. We're writers here – or I am – and so you look at Perplexity and those sorts of tools that can help in the creation process that way. What are the challenges that you hear from CEOs or the leadership level about working with AI or trying to use the technology? What don't they know?
John Sviokla: Yes. Well, I've spent most of my career helping implement new technologies for business value, and this is the one with the biggest gap between understanding and emotional reaction of any I've seen. And so, the biggest challenge, Corey, is that most executives don't have enough hands-on experience to even recognize what the opportunities might be. They've sat down and put some stupid question into ChatGPT, and it gives them a stupid answer – ask a simple question, get a simple answer.
And people don't understand – hey, look, I can actually use this in dialogue. I can create different roles and capabilities. This is a complex machine. The analogy I use: imagine a church pipe organ. It has about 72 keys, about 40 stops you can pull out, and about 30 pedals, right? So, imagine that's what a large language model is, and people walk up and play "Chopsticks" on the middle keyboard, and they say, not so impressive. It's like, wait a second. Yeah, OK, it plays "Chopsticks" just like your cheapo electronic piano, but this thing can do a lot.
Dan Ferris: So, to carry the analogy forward – I remember a fellow named Thomas Specht, who was a virtuoso organist when I was studying music in college. He's probably not even around anymore. He played these pieces by Johann Sebastian Bach, and his arms were all over the place, and his feet never stopped moving sometimes. Watching him was like watching an octopus play this giant thing, and it sounded gorgeous – and J.S. Bach is complex. So, carrying the analogy forward, you don't just start typing questions into something and say, I'm using AI. It sounds like there is training and lots of learning and a whole bunch of stuff between getting this tool and being competent to use it.
John Sviokla: Yes. And Dan, you put your finger on exactly why I think this is a transformational movement. If you go back in the history of the Industrial Revolution, most people have heard of Henry Ford, at least in this country – or Daimler-Benz, if you're in Europe. Ford created the production line. Other people had it: slaughterhouses had it; he saw continuous manufacturing and milling and things like that, right? But what a lot of people don't understand is he partnered up with a guy by the name of Frederick Taylor, who was the father of scientific –
Dan Ferris: Frederick Winslow Taylor. Yep.
John Sviokla: You betcha. One of my favorite Baptists. Anyway, for those in your audience who don't know, he was the one who said, look, the knowledge of how to manufacture stuff sat in the guild. I was a member of the guild. I was apprenticed to that guild – whether I was a cooper, or a blacksmith, or a painter, whatever. The knowledge sat with the workers, and then I worked my way up as a journeyman until I did my masterpiece, and then I became a master in that guild, right? And they would judge my masterpiece – and, yes, you're in. OK, Taylor said, "Forget that. I'm going to watch the best man" – which is what he said – "and I'm going to learn everything I can. I'm going to take that knowledge." And knowledge now sits in management. It sits in the corporation, which is one of the reasons Karl Marx absolutely hated Frederick Taylor: because he was pulling the knowledge out of work and into capital and management.
We're now doing that – that's what this stuff is about. It's a new kind of learning. And what did that give us, by the way? Look at the quality of this thing, its consistency – and the doors behind me. These are things that used to be crafted by craftspeople. Now they're totally standard, and they're fantastic, and they're cheap in comparison, right?
If you look at the early productivity curves of the Industrial Revolution, that was a knowledge system applied to capital and to labor, right? And the three fundamental forms of capital in traditional capitalism are natural capital (energy, wood, and so forth), human capital, and financial capital. As we digitize stuff, we birth three new forms of capital – and if anybody's interested, I have articles on these things. The first is network capital. Who am I connected to? And by the way, who sells network capital? Folks like LinkedIn.
The second is behavioral capital. What's my behavior? What's the dwell time? What's the search behavior? All that stuff. Google and Facebook sell behavioral capital – my behavioral capital – to companies. In exchange, I get free email or cheap goods and services, right? Same thing with LinkedIn. Now we're moving into cognitive capital, which is the third kind. How do people think? How does the organizational process think? What's our tacit knowledge, our explicit knowledge? How do we make this really work? If you're an underwriter, which analyses do you do, and in what sequence, right?
That's the new kind of cognitive guild, if you will. And what this is doing is taking us into the biggest unautomated and unsupported part of organizations, which is unstructured data and often indirect R&D cost or WINS work – the stuff that has resisted leverage and automation. With these tools, we're starting to get there. And our message to companies is that you need to understand these things for two reasons. One, behavioral capital and network capital, in the main, we've exchanged. We've given those to the mega providers in exchange for cheap goods and services, right?
We're OK with that. I don't think it's a good trade. But cognitive capital, how your organization thinks, you do not want to make that trade. You do not want to teach the hyper-scalers how to do your business. And so, we're seeing a big trend toward own your own intelligence. You can use that stuff for stuff you don't compete on. But if you compete on particular dimensions, I don't know if you've ever done any advanced data analytics, but if you have, the sequence of how you analyze a problem is actually really complicated.
Which models do you use when to find the signal that will give you the answer? Do I do the principal component analysis first? Do I do the regression analysis first? How do I manipulate this data so I can find the signal? There's a lot of expertise in that. You do not want to teach the large language models how to do that stuff. I don't.
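(Editor's note: the kind of sequencing John describes – reduce the data before you regress on it – can be sketched in a few lines. This is a toy illustration on synthetic data, assuming a simple PCA-then-least-squares pipeline; it is not GAI Insights' or any firm's actual workflow.)

```python
import numpy as np

# Toy version of "which analysis do you do, in what sequence":
# Step 1 = principal component analysis (PCA), Step 2 = regression
# on the reduced representation. Data is synthetic.
rng = np.random.default_rng(0)

# 200 samples whose 10 features are driven by 2 hidden factors
factors = rng.normal(size=(200, 2))
loadings = rng.normal(size=(2, 10))
X = factors @ loadings + rng.normal(scale=0.1, size=(200, 10))
y = factors @ np.array([3.0, -2.0]) + rng.normal(scale=0.1, size=200)

# Step 1: PCA via SVD on centered data; keep the top-2 components
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T  # project onto the principal subspace

# Step 2: least-squares regression on the components
coef, *_ = np.linalg.lstsq(Z, y - y.mean(), rcond=None)
pred = Z @ coef + y.mean()
r2 = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"R^2 after PCA-then-regression: {r2:.3f}")
```

Here the order matters because the regression only sees whatever signal the first step preserves – which is exactly the sequencing expertise John says you don't want to hand over.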
Dan Ferris: Can you avoid them learning it?
John Sviokla: Yes, absolutely. Yeah, you can. You have lots of opportunities. There are providers out there like Inflection AI – in full disclosure, we have a business relationship with them – that will put it behind your firewall. They will share the source code and the weights and the data with you. OpenAI will not do that, nor will Meta with Llama. So, it's not really open source. The Llama models and so forth are downloadable, but they're not open, because they'll share the weights, but they won't share the data, and they won't share the training methods. So, there are ways to do it, but you've got to pay attention. But let's get back to it.
Dan Ferris: Sounds interesting. Yeah.
Corey McLaughlin: It is interesting. So, you're talking about companies creating their own proprietary language models, or whatever it may be – if that's what you're referring to.
John Sviokla: I would say training the models. Because remember, you've got the model itself, you've got data you can train it on, and then you've got inference once you have it, right? And so, what I'm saying is you take something like a Llama 70 billion or 1.5 billion model – I'd probably not use the Chinese models, but the Chinese are going crazy on the small models, too. And then you bring it behind the firewall and use your proprietary data, or you hire somebody like Inflection.
You say, look, we want to use your expertise and capability. You can host it in our data center. We'll keep it behind the firewall, and we own all that IP – we, BorgWarner; we, JPMorgan, right? And that's what I mean, Corey – not so much going into the business of building models de novo.
Corey McLaughlin: Gotcha. Yeah. I remember early on in this whole conversation with AI hearing some analysts talk about, "Oh, Apple will create their own model" or that sort of thing. But sounds like that's not what you're suggesting.
John Sviokla: Yes. Apple's current implementation is that they have small models on the phones, and then they backstop when they need a big model. The Apple Intelligence architecture, as I understand it today – and all this stuff is really dynamic – will go to OpenAI for the backstop if they need a large language model. To keep it super simple, we think about 5%, at most 10%, of companies at scale are deploying in production. Well over 70% – probably more like 80% or 90% – are experimenting. So, the difference between implementation and experimentation is very, very large. And we have a very simple model we call EAT AI, which is: you educate, you apply, and then you transform.
So, for educate, you need hands-on experience by the senior executives – at least five to 10 hours each – on something that's important to them. It could be a vacation, could be looking for colleges, could be looking up a malady – someone in their family has colon cancer or something. Do something that matters to you and interact with those models. Or do it for work, and have enough experience there. Then apply it in the WINS areas – customer service, onboarding, marketing, software development – apply it in your organization.
Then you have enough knowledge and skill to actually step back and say, OK, we kind of have an idea about what this stuff is about. How can we now think bigger? That's the transform part. So, it's super simple. I've seen a lot of consultancies, including Accenture, try to jump you straight to transform, and I haven't seen any of those succeed yet.
Dan Ferris: Right. So, then the question – at least one question – becomes, how long does it take? I'm just thinking about the organist analogy. Let's say I'm the technical guy you mentioned and it's a business task, or vice versa – I'm the business guy, and it's a technical problem. If I'm new to that and I've inherited this, how long does it take me to become a decent organist, in our analogy? And how long does it take me – or maybe you – to help teach three, five, 10, 50, 500, I don't know how many people, to basically become virtuoso chatbot operators or something? That's my sort of image of this.
John Sviokla: Yeah.
Dan Ferris: It can't be instant. It's not instant on the organ if our analogy holds.
John Sviokla: No, absolutely. And so, I think you've asked exactly the right question. And to me, it's about organizational capability, which is a combination of knowledge and skill applied. And the analogy that we're using is inspired by the quality movement, which is that you need a pyramid of expertise that is defined by how much WINS work you have. What I mean by that is that you need white belts, yellow belts, green belts, black belts in this capability. So, your friend in front of the organ was the black belt.
And so now, if you look in the quality literature – and I have some experience with quality programs – the defining characteristic of moving from white belt to black belt is four things: increasing knowledge; increasing skill, the ability to apply that knowledge in real time; increasing ability to deliver a project or a program; and the ability to teach other people. The difference between the belts: the white belt has been taught some stuff; the yellow belt has delivered some stuff and been taught some stuff; the green belt has taught other people how to do it, has delivered a few things, and has higher knowledge and skill.
The black belt is deep in all those things. So, let's say you're JPMorgan Chase. They've got 50,000 bankers, roughly. I would say they probably need 5,000 white belts, right? They probably need 3,000 yellow belts. They probably need 1,000 green belts. And they probably need 200 black belts for 50,000 people. Now, say you're Alliant Food Services – well, not Alliant; they're part of Sysco now.
So, you're Sysco Food Services. You're not very WINS intensive. I think they've got about 100,000 employees – don't hold me to that – so twice as many. I would say they need far fewer. They would only need 1,000 white belts and 5,000 yellow belts. You know what I mean? Because they're not as WINS intensive.
Dan Ferris: I'm going to just plop a question down from left field on you. What can't be done by AI? What do humans really do? You must have a view on this. You must have seen this with your own two eyes every single day.
John Sviokla: Yes. There's a lot of confusion. First of all, AI is very well defined as a set of techniques in computer science. It is very badly defined as a general concept because it's very hard to define what artificial is, and it's very hard to define what intelligence is. Other than that, it's really clear. And so, let me just lay a couple things out. First of all, thinking. Do machines think? I think machines think already. Now, planes fly and birds fly, but planes don't fly like birds. They don't flap their wings and stuff, and they fly very differently.
And there's lots of things about birds, we don't understand how they fly. We don't know how they navigate for homing and things like that. Some people think there's quantum effects involved and all kinds of crazy stuff, right? I mean, heck, lobsters navigate by feeling the electromagnetic field of the earth. I mean, whatever. Makes me feel more guilty when I throw those babies in the pot. Sorry. But if you want to mess with the lobster just get some magnets and hack around. Anyway.
Corey McLaughlin: Sorry, geniuses.
John Sviokla: So, look, planes don't fly like birds, but they fly, and machines don't think like people, but they think. They already think, and they think faster and better in certain areas. So, this whole thing about do machines think? Game over, machines think. They don't think like we do. That's OK. Now, is it conscious? Absolutely, positively not. Totally red herring, right? We can't define consciousness, and there are all kinds of major things that consciousness does that we don't even have a model for, never mind an explanation. You take the placebo effect in drugs. We can measure it.
We have no idea what it is – we just know it's real, right? You cure yourself by believing you're taking something. We have no idea how that works. We don't know where it sits in the brain. Does it sit in the brain? How does it work? We have no idea. So, for these crazy people like Sam – not Sam Altman, Sam what's his name, the really popular podcaster? Anyway – these extreme materialists who say we're nothing but a computer, it's total baloney. They're unscientific. There's tons of scientific evidence that says that's a stupid idea. So, no, they're not conscious. That's one thing.
The other thing you asked: what can't they do? There's tons of stuff they can't do. Generally speaking, the kinds of things they are weaker at are creative recombinations of ideas from different models, OK? For example, there's a progression in the diagnosis of cancer: going from reading radiographs and other soft tissue images to, now, OK, maybe we can measure molecules coming out of somebody's mouth that might have cancer molecules in them. That's a switch of representation – from analyzing the living daylights out of existing data to a whole new way to think about the problem.
OK. Forget all that stuff – maybe, actually, people are exhaling cancer molecules, and I can detect that. AI has a really hard time creating a whole new way to look at the problem. It does a really good job at recombining things in ways you haven't thought of before, and that's incredibly powerful. Look at drug discovery, for example: we've looked at less than 1% of the known compounds for new drugs. The ability to search that massive search space efficiently and recombine – and say, oh, that drug you forgot about has those attributes...
Those attributes might go with these and have this kind of mechanism – but that's already an articulation of the drug in a way that relates its characteristics to a mechanism to an effect. Know what I mean? It's not like saying, "Hey, forget about the X-rays and the MRIs. Let's think about molecules coming out of your mouth." To me, that's a different leap. So, it's really bad at that. It's very good at empathizing, which a lot of people don't expect. This whole thing about emotion or not – computers are really good at emotion and mimicking human emotion. And this has been known for a long time.
In the 1950s, people were putting more personal information into computers about their psychological state than they gave to humans, because there's a particular stress or tension when talking to a person. There have been studies showing my galvanic skin response – which is how they do lie detectors – actually goes up when I'm talking to a human that is verbal. If I talk to a preverbal human, like a baby, or to an animal, my galvanic skin response does not go up, so I don't have the same stress. So, there are many people who would actually disclose more emotional information to a computer than to a human, and more easily. So, this emotion thing is tricky. It's actually more intimate in some ways.
Dan Ferris: Right.
Corey McLaughlin: Yeah, it's all fascinating.
Dan Ferris: It is fascinating, and that's exactly the question that I had in mind. What do people do? What do computers do? What's the difference? And you answered it very well – and in a shocking kind of way, actually. I would never have expected the words "computers do emotions really well" to come out of your mouth. Or understand them, or mimic them, actually, as you said.
We know the Turing test, right? AI can do that, right? AI can fool people into believing that it's not a machine, that it's a real human. So, I should have anticipated that. Yeah. I'm sorry, Corey.
John Sviokla: Turing, look at Turing. What a guy. So, the guy saves the British Empire, then they chemically castrate him.
Dan Ferris: Oh yeah, horrible story.
John Sviokla: Because he's gay. But he had the genius, in the Turing test, to sidestep the definition of intelligence. He was so elegant. What a great idea. And this goes back to what I said – he didn't spend time on "artificial is hard, intelligence is hard." He just came up with a very clever way to think about it. So anyway, what a guy. I don't know if you have your list of people in history you'd love to meet. Alan Turing's right on my list.
Dan Ferris: Oh, absolutely. Yeah, genius. And then they create this thing that can figure out what the Germans are doing, and then they said, "Well, we can't really use it so much," because obviously the Germans would know that we know. It's a very strange story – it got complicated quickly. And then what they did to him was horrible. But I agree – let's all go back in time and talk with Alan Turing. And so here we are. We know the difference between humans and computers, OK, and it's not what we expected, and we have this technology that is very different.
What you're describing is very different, and you did mention ChatGPT. When all of us regular people think of AI right away, the only thing that I know from personal experience is using ChatGPT. And I stopped using it because I thought, this is silly. And indeed, a colleague forwarded an academic paper to me titled "ChatGPT is bullsh*t" – meaning that it's a bullsh*tter. It does the same thing. It just collects a bunch of stuff together and tries to sound smart, the way a good bullsh*tter does. But behind the firewall that you mentioned before, that's a whole different world. A completely different world. That's not the world of ChatGPT.
John Sviokla: Yeah. Well, yes and no. I use a concept that, I think, sits more fundamentally than artificial intelligence, which is: is the reality computable? And there are two dimensions that matter in computability. One is level of knowledge of the task or phenomenon of interest, right? The other is digitization: is something highly digitized? And knowledge goes from ignorance, to categorization, which is description; to correlation, this goes with this. Most of underwriting: hey, you're 21 and male, this is your likelihood of crashing. To causation, a causal model: the flight simulator.
When you fly, the co-pilot of that 737 may never have been in that airplane with passengers until they are co-piloting with the passengers in back. They've never flown the physical aircraft, because they've got a causal model of how that 737 works so deep that flying the simulator is equivalent to getting hours in the airplane. Categorization, correlation, causation, times level of digitization, equals computability.
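John's formula lends itself to a toy calculation. A minimal sketch in Python of computability as knowledge level times digitization; the numeric scales and scenario scores below are illustrative assumptions, not figures from GAI Insights:

```python
# Toy sketch of the computability idea: knowledge level times digitization.
# The numeric scales here are illustrative assumptions, not from GAI Insights.

KNOWLEDGE = {"ignorance": 0, "categorization": 1, "correlation": 2, "causation": 3}

def computability(knowledge_level: str, digitization: float) -> float:
    """Score how computable a task is: causal knowledge times digitization (0-1)."""
    return KNOWLEDGE[knowledge_level] * digitization

# A flight simulator: deep causal model, highly digitized environment.
simulator = computability("causation", 0.95)
# Early self-driving: causal model of the car, but a poorly digitized world.
early_self_driving = computability("causation", 0.3)
# Adding LIDAR digitizes more of the environment, raising computability.
with_lidar = computability("causation", 0.9)

print(simulator, early_self_driving, with_lidar)
```

The point of the toy model is the interaction: deep causal knowledge buys you little until the environment is digitized enough, which is the Google-car story John tells next.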
So, if you look at the Google car, for example, and I know Chris Hermsen, the guy that started that project for Brent told me a story, he said, look, what happened at the beginning of that is that they had, and you look at my model, they had really good causal knowledge of how the fluid dynamics, the electronics, the aerodynamics of those cars. They have good simulations of cars, right? But in the drive, when it was the self-driving car, the problem was the external world was not digitized enough. So, they had maps, GPS and lots of onboard computers and sensors, and they had a three-foot error.
And I grew up in Brockton, Massachusetts. I'm a born and bred Masshole, but even in Massachusetts, three feet is too much for driving, right? Maybe not in Naples, but in Massachusetts. OK. So, the way they solved this problem is they were the first ones to put the LIDAR, the laser range finder, on the top. And the old LIDARs, they're much better now, painted and collected 1.5 million pieces of data per second. Things spinning around, right? Painting and collecting. OK. That allowed them to digitize enough of the environment that they could use their causal knowledge of the car in its environment to have a self-driving car. So, that's an example of increasing computability.
Now, what does artificial intelligence do? It increases both sides of that. If you look at the new robot stuff, I can now scan environments and categorize the objects in those environments using a large language model, so that I can guess at their causal capability. So, I can digitize more of the environment at a deeper level. And on the other side, I can take stuff from simple categorization, like we had with language before, to at least sophisticated correlation in large language models.
Now, those probabilistic models don't have a causal model in them, so it's not going to go up to cause, but it's a really good correlation. And then in artificial intelligence, I can take unstructured data, semi-structured data, different data types, and I can munch all that stuff down to increase its computability. That's the important thing.
Corey McLaughlin: Is this the computability, the computer, sorry, go ahead.
John Sviokla: That's like Fred Taylor. What did Fred Taylor do? He took semi-structured and unstructured stuff. He structured it. He put it into the corporation. This is a massive enhancement of using capital to compute the world.
Corey McLaughlin: So, this description that you're using, computability. Right before we started here, we briefly mentioned that you've been working on AI, more or less, I don't know if it was called that back then, since 1983. So, that's 40 years ago, and now here we are today, where it's really just now coming to society.
John Sviokla: Yes.
Corey McLaughlin: Is this the thinking that you had back then and you're just kind of applying it to what has developed since then?
John Sviokla: Somewhat. Look, I had no idea. There are two things that I had no idea would deliver what they delivered. One is the increase in just raw compute power going into neural networks, and the kind of capabilities. Those advanced way more than I would have expected, so that was a surprise to me. Kind of like before I saw MTV: I thought, MTV, who would want to watch music? Then I saw my first MTV thing, and, oh yeah, that's why. And the second thing was the genius of the transformer algorithm. The way I think about it, I don't know if you've ever read Frank Herbert's Dune books, where the sandworms come.
I think of that thing as like this giant sandworm going through a multidimensional, effectively infinite search space and finding the spice. You know what I mean? That transformer has effectively found the microstructure of language at a level, as Stephen Wolfram would say, we haven't looked at since Aristotle, in terms of what the real microstructure of language is. You think about logic: it's about the structure of language, right? And so, now we've got this big step up in understanding the microstructure of language. That transformer was a total breakthrough. The combination of large neural networks with massive amounts of compute, plus the transformer algorithm, that to me was a complete breakthrough. That was just like finding DNA. This is profound in terms of its impact.
Corey McLaughlin: And do you have a framework now for what areas you think, or what kind of major buckets of products or services will emerge from this? At our Stansberry Research conference a couple of weeks ago in Las Vegas, we had Zack Kass, who used to work at OpenAI, and he laid out this idea that right now you have enhanced apps, basically better search engines, like Perplexity or ChatGPT, and then autonomous agents would be coming next, things that you can assign tasks to. Beyond that, what do you think on that front, just moving ahead?
John Sviokla: Yes. I think a lot of those insights are only helpful to people trying to invest in the AI companies themselves. I don't think it helps senior executives, or investors looking at companies and asking whether they'll be affected by AI. That's why I go back to our WINS framework. We've done an analysis with the assistance of Valens Research, and we've looked at which companies are WINS intensive, and we think about 50% of the market cap and 50% of the profit of the entire publicly traded market is up for grabs with generative AI now.
Dan Ferris: 50% of the market cap.
John Sviokla: And 50% of profitability.
Dan Ferris: Wow.
John Sviokla: And it'll actually be higher now because of the skewing of market cap toward the AI companies. And so, back to your question, Corey: how do we make this real? First, I would plot somebody on the WINS framework and say, "OK, how susceptible are they to a new model?" And then I start to look for evidence. In 2024, we look at a lot of companies. We do news every day, five days a week. People are welcome to just go to GAIinsights.com and get our news briefing every day. You can either watch the show or you can get the email. And we're building up a case study database, right? And what we see is four levels of adoption.
One is what we call toe dippers, so those are people who tried it a little bit. Example of that would be McDonald's. McDonald's tried to use some generative AI. They did it with Google. They screwed it up. I don't know if you've seen their hilarious things where somebody is in the drive thru, and it's like, "Yeah, I want four shakes." They’re like, "Oh, you want 32 chicken McNuggets?" It’s like, "No." "Oh, 100 chicken McNuggets." It’s ridiculous, right? And so, of course they pull back and they clamp the whole thing down. We're only going to use Microsoft inside. Toe dipper, right? That kind of thing.
Then you have what we call islands of automation. There's a friend of mine over at American Securities, multi-billion dollar, about $30 billion-plus under management. Senior exec there, he helps with their AI efforts and they've automated certain things, revenue cycle management, radiologic image recognition, and had good ROI on a project-by-project basis. So, islands of automation and those folks are in the mix. Then we've got people who are what we call orchestrators, people who are taking this capability.
So, back to my framework, educate, apply, transform: they're at the transformation level, and they're coordinating generative AI, traditional AI, traditional machine learning, and they're figuring out how to transform the economics of the business. Something like Blue Cross Blue Shield of Michigan, who we wrote up for the Harvard Business Review in August of this year, is an example. They're an orchestrator. They're doing it transformationally. And then, and this is super important for your investors, we have what we call intelligence leveragers, and these are people who use AI to build AI to drive value.
So, they're not dealing with the algebra, they're doing the calculus. And back to Fred Taylor: those organizations that got on the scientific management improvement curve crushed everyone else, crushed them. Swift in meatpacking, Ford in cars, the whole routine, right? And who are these intelligence leveragers? Right now, the only industry where we have seen intelligence leveragers is the technology industry. If you listen to Jensen Huang, if you listen to Zuckerberg, they will tell you: we couldn't do the AI we have now without the AI helping build the AI, right? OK. I believe that every major industry in the next five to seven years is going to have one or more intelligence leveragers.
Now, what are the implications for investors? The implications for investors are you'll see two phenomena. One is the value of that company will go fantastically high. And two, you'll see an oligopoly, a concentration of profits, just as you see in the technology companies. So, I believe the same thing is going to happen in pharma, same thing's going to happen in cars, right? The people who understand how to unlock this. Now, where would those people come from? Where would those companies come from?
They will either come from new innovations, new companies that have five to seven years to grow into it, or some organizations will transform themselves. Who'd be a candidate? Well, JPMorgan Chase is an example. Pfizer is an example. Pfizer is very adept and advanced in the way that they're applying AI. And if you think about the economics of drug discovery, if I can improve my odds even a little bit that's hugely valuable in their business model. So, I believe that what investors should look for today is do they see folks who are WINS intensive, who are at least doing islands of automation? What's a company like that? Honeywell.
Honeywell is using AI on all kinds of stuff: onboarding people, helping with customer service, making intelligent products, doing better software. And again, this is from the outside, though we had the AI leader from Honeywell come and present at our conference. As with any investment, vet it yourself, but they would be islands of automation going toward transformation. So, when looking at their stock, there is AI upside, potentially. Now watch for when they'll actually have the AI help build the AI. Can they become an intelligence leverager? This is so dynamic, Corey.
You look at this, OK, this thing... OK, that's my granddaughter. Anyway. This bad boy, right now, we've got roughly 6.5 billion smartphones around the world, out of, I think, 8 billion phones total. The refresh cycle, I haven't been able to get a good number, but I think it's about 10% to 20% a year on these, right? In 2025, every major manufacturer is going to be launching these things with neural chips on them and models on them. Apple already has a bunch of neural chips out there. Samsung, Qualcomm, everybody's building. OK.
So, that means we are going to have intelligence in your hand that will reach, let's say, a billion people in about 12 to 18 months. What does that mean? That means that every product or process will have a tutor attached to it. How does this process work? Let me tell you. How do I repair this car? Let me tell you. No more idiot lights on the dashboard; you'll be having a conversation in the car. What's really going on here? I really don't know. Now, will that conversation come from GM, Ford, BMW? Or will it come from a third party through your phone? I don't know, but it will be there.
Corey McLaughlin: Right. You won’t need to go to YouTube and say, "How do I fix this stove?" You can have the app on your phone from the manufacturer if they're leveraging intelligence, right?
John Sviokla: Exactly. And so, then you say, "OK, how does this play out?" I'm just trying to make it really concrete, because these people talk about [inaudible]. And the agent thing, we could have another whole conversation about; I think it's RPA on steroids, but whatever. Let's say you're an industrial manufacturer, and there's one in particular that we've done some work with, and you've got a global presence and you're about, whatever, $5 billion in sales, OK? And 20% of your revenue is parts.
And you know, as I know, they don't disclose it, but unless they're totally screwing it up, that 20% in parts is probably responsible for 40% or 50% of the profit or more. OK, great. Now, they happen to be in the United States, they're in Canada, they're in France, they're in Germany, they're in Spain. They buy little manufacturers, and they sell a similar industrial product to municipalities all around the world. OK, great. All right. They have 27 brands. They keep the brands, you know the story. You can imagine the roll-up, right?
Dan's got a manufacturing company in Galicia, his kid's a drug addict, he wants to get liquid. He sells it to these guys, and they roll it into their platform. How about if you loaded every single product description, image, FAQ, and repair manual into one of these bad boys, and it actually sits on your phone? You could probably do that with a 70-billion-parameter model, or a 7-billion, which easily goes on these phones, right? And with the multimodal, multimedia capability, you can take a picture of anything, and it can interpret it, right?
How about if in any language, I need a product or a part or a thing, and I just point it at the freaking thing, and I say, I need one of those. Or I say, explain to me how this works or what's wrong with this. And then going back to the P&L, I take that 20% of revenue, and I make it 30% or 40%. And I'm globally dominant in this category because Google's never going to come after me with that. The big model guys aren't going to come after me. So, now I've taken my most profitable line item and I've pumped it up, pick a number, 10%, 50%, 100%, right?
And then that flows down through my P&L. I might double the profitability of the company, and nobody's going to come after me, right? And by the way, the kind of people who operate this machinery, they're often immigrants. So, I need it in Portuguese, I need it in Vietnamese, I need it in French. Right here. No problem. So, that's what I mean about looking at organizations like that. Wow.
So, then that industrial manufacturer is $5 billion trading at, I forget what they're trading at. Let's say they're trading at one times revenue or maybe now they're worth two times revenue, right? Because they have the global service and parts platform for this category. Again, I'm just trying to make it real.
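The parts economics John sketches can be checked with back-of-the-envelope arithmetic. A toy P&L in Python, where every figure is an illustrative assumption built from his round numbers (20% of revenue in parts, 40% to 50% of profit, and an assumed 10% overall margin):

```python
# Toy P&L for the hypothetical $5B industrial manufacturer John describes.
# All figures are illustrative assumptions drawn from his round numbers.

revenue = 5_000_000_000
parts_share = 0.20                       # parts are 20% of revenue
parts_revenue = revenue * parts_share
parts_profit_share = 0.45                # parts drive roughly 40-50% of profit
total_margin = 0.10                      # assumed overall company margin
total_profit = revenue * total_margin
parts_profit = total_profit * parts_profit_share
parts_margin = parts_profit / parts_revenue  # implied margin on the parts line

# If an AI parts assistant lifts parts from 20% to 30% of the old revenue
# base, at the same parts margin, the extra profit is:
uplift = (revenue * 0.30 - parts_revenue) * parts_margin
print(f"parts margin: {parts_margin:.1%}, profit uplift: ${uplift:,.0f}")
```

Under these assumptions the implied parts margin is about 22.5%, so lifting parts from 20% to 30% of the old revenue base adds roughly $112 million of profit, a better than 20% jump, before assuming anything as aggressive as the doubling John mentions.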
Dan Ferris: Yeah. You're doing a good job.
Corey McLaughlin: No, yeah, it's great. This is something that I think a lot of people want to hear, because you hear AI, artificial intelligence, and how do you define that, first of all? And then how do companies practically use it, what's practically going to show up in your house or company from it? So, it's great. Yeah.
John Sviokla: Yeah. Three simple concepts to keep in mind to separate the [inaudible] and baloney from the real stuff. First of all, these are power tools for knowledge work, for symbol work, for WINS work. What do I mean by that? Say you'd hired somebody to put a new outlet in your house: I want an outlet over here, right? And you came in and saw the carpenter or the electrician sitting there with a hand drill, going like this. You'd feel like, what the heck, man? I'm paying you $100 an hour, are you nuts? I'll buy you a drill.
Well, if you walk in and your customer service people aren't using these tools, or your lawyer or your consultant, they're sitting there with the hand drills and you're paying them to sit there with the hand drills. It's like, what the heck? Go get a power tool. So, there's that. The second thing to remember, and Harari has talked about this, brilliant guy, the guy who did Sapiens, he's a little pessimistic for my taste, but I think he's brilliant: language and story is the operating system of society. People do great things because of a story. They kill other people because of a story. Dialogue is the mode, and voice is taking off.
Voice is the new UI, AI is the new UI, and voice is going to be huge, right? So, every single important interaction is going to be mediated by AI, by people who know what they're doing. And I don't mean totally substituted. One company, getjerry.com, has five AI robots that do the chatbot and texting, and one of them is an escalation robot that watches the emotional tone of the text and pops it to a human being faster if it's getting emotional. OK, so it's not like the voice trees we get caught in, the IVRs and that craziness.
This is actually about a context that is empathetic. So, when you start thinking about it that way: we've been automating since, pick your number, 1820, 1840, something like that. Automating in earnest since about 1870, 1890, right in there. In that long history of automation, we've been taking stuff out of humans and out of animals and sticking it into the machine. This is the first time the machine talks back, in English or in Hindi or in whatever. The first time. That is a major difference.
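The getjerry.com escalation pattern John describes reduces to a watcher over the conversation. A minimal sketch in Python; the keyword list, scoring rule, and threshold are my own stand-ins for whatever real sentiment model a production system would use:

```python
# Sketch of an escalation watcher: route a chat to a human when the
# emotional tone crosses a threshold. The keyword scorer below is a
# stand-in for a real sentiment model.

ANGRY_WORDS = {"ridiculous", "unacceptable", "angry", "refund", "lawyer"}

def emotion_score(message: str) -> float:
    """Crude proxy: fraction of words that signal frustration."""
    words = message.lower().split()
    if not words:
        return 0.0
    return sum(w.strip(".,!?") in ANGRY_WORDS for w in words) / len(words)

def should_escalate(transcript: list[str], threshold: float = 0.15) -> bool:
    """Escalate to a human if any of the last few messages runs hot."""
    return any(emotion_score(m) >= threshold for m in transcript[-3:])

print(should_escalate(["hi, quick question about my policy"]))        # calm
print(should_escalate(["this is ridiculous, I want a refund now!"]))  # hot
```

The design point is the one John makes: the robot does not replace the human, it decides when the human should take over.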
Dan Ferris: That's brilliant. Yeah. Wow. That’s cool.
John Sviokla: Yeah. And the third thing is our intuition. We talk about artificial general intelligence. No, it's the wrong analogy. You are talking to a hive mind. It's a beehive. It's not human, but each of those bees could be a bird or an elephant or a cockroach. You know what I mean? And so, the analogy, I don't know if you guys ever watch Rick and Morty. It's hilarious. It's a cartoon on Adult Swim, I think. And the premise comes from Doc and Marty in Back to the Future; the idea is, why were Marty's parents letting him hang out with Doc?
Anyway, they got sued over Doc and Marty, so now it's Rick and Morty, and Rick is the crazy scientist. In one episode, Rick falls in love with a hive mind, and that hive mind can take over anybody's consciousness. So, he's trying to break up with the hive mind, and as he's walking down the street, everybody's saying, "Why are you leaving me?" I love that analogy. Imagine trying to break up with the hive mind. But the idea is that you can call out any kind of tutor. I was going to get groceries the other day.
And I like Nietzsche, I like Socrates, so I said to one of the robots, I use four or five robots a day, I said, "Could you please have Socrates and Nietzsche argue about the meaning of life?" So, Pi, the one from Inflection that I was using in the car, is like, "Well, Socrates would say this, and Nietzsche would say that." And so, let's say you're onboarding into an organization. Say you're a young tax manager coming into PwC or someplace like that. You can say, OK, tell me, what should I do in the first 90 days, right?
What are the goals and objectives? How should I manage my career? What should I tell my executive to manage me by? How is PWC differentiated from other folks? What's the incentive structure for the senior partner they'll be reporting to? What professional organization? And it can give you a 90-day plan. And then you can say, OK, let's simulate that. You be my boss, you have this kind of personality.
And this other part of you, I want to be my coach. You have this kind of personality. And you can go into that trilogue, right? That's what I mean about the hive mind. You can call out any tutor from any time in history at any level of expertise. I'm a fifth grader. I'm a PhD chemist. It'll put it in that language.
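Mechanically, the boss-and-coach "trilogue" John describes is just prompt construction. A minimal sketch in Python that builds the persona messages; the persona descriptions are hypothetical, and the stub at the end stands in for a chat-completion call to whichever model you use, since no particular API comes up in the conversation:

```python
# Sketch of building a persona "trilogue" prompt like John's boss/coach
# simulation. The send() stub stands in for a real chat API; no specific
# provider is assumed.

def persona_prompt(personas: dict[str, str], task: str) -> list[dict]:
    """Build a system + user message list giving the model multiple roles."""
    roles = "\n".join(f"- {name}: {desc}" for name, desc in personas.items())
    system = (
        "You will play several personas in one conversation. "
        f"Stay in character for each:\n{roles}"
    )
    return [{"role": "system", "content": system},
            {"role": "user", "content": task}]

messages = persona_prompt(
    {"Boss": "a demanding senior tax partner at a Big Four firm",
     "Coach": "a supportive career mentor with 20 years of experience"},
    "Simulate my first 90-day check-in as a new tax manager.",
)

def send(messages):  # stub: replace with a real chat-completion call
    raise NotImplementedError

print(messages[0]["content"][:60])
```

Swapping the persona descriptions is all it takes to "call out any tutor from any time in history at any level of expertise."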
Dan Ferris: Nice. John, this has been amazing, but I'm on overload and I've run out of note-taking space. I've never taken this many notes in an interview before, which tells me something.
Corey McLaughlin: I see you, Dan, scribbling.
Dan Ferris: Yeah. There's a lot here. So, I'm going to go ahead and I'm going to ask you my final question, and I cannot wait to hear this. I actually do have a little bit more space here. My final question is the same for every guest and it's, no matter what the topic, the topic is usually finance, but now it's AI. And if you've already said the answer by all means, feel free to repeat it. But the question is very simple. If you could leave our listener with a single thought today, what would you like that to be?
John Sviokla: The one thing I would like your listeners to do is to invite the robot to everything.
Dan Ferris: Invite the robot to everything. I love that. That's brilliant. Invite the robot to everything.
John Sviokla: You're planning a meal, you're planning a vacation, you're about to sign a legal document buying something, you want to help your kid understand mathematics: ask the robot for help.
Dan Ferris: All right. Embrace it. Don't be afraid of it in other words, don't be afraid of it at all.
John Sviokla: It's there. There's great data from a guy named Bloom at the University of Chicago in the '80s. He showed in his experiment, I don't know if it's been repeated, that students in a 1-to-30 classroom, our traditional model that we inherited from the Prussians after they got their butt kicked by Napoleon, right? And then Horace Mann went over and said, "We want some of that" and brought it back to help us with industrialization. OK, he compared that model to the tutoring model, because remember, before the Prussians, everyone in education got tutored.
If you were rich, whatever, every single one of the founding fathers had a tutor, right? Alexander had Aristotle, right? Alexander the Great. OK. Tutoring delivers two standard deviations better improvement than 1-to-30. Two standard deviations better. That means you can take the worst student in a 1-to-30 environment and make them as good as the best student by tutoring. Yeah. So, if I'm running an organization or a family or teaching a child or whatever, and I learn two standard deviations better than you: game over.
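Bloom's two-sigma figure has a concrete reading under a normal distribution: lifting the average student by two standard deviations puts them near the 98th percentile of the original classroom, which is what the worst-student-becomes-best framing is gesturing at. A quick check in Python:

```python
import math

def normal_cdf(z: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# A student lifted 2 standard deviations lands at roughly the 98th
# percentile of the original 1-to-30 classroom distribution.
print(f"{normal_cdf(2.0):.1%}")  # ~97.7%
```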
Dan Ferris: John, I think you just changed my life, and I want to thank you for it ahead of time. I think it's a good change. If it's not, I'll come back and hunt you down. But this has been absolutely fascinating to me. I've taken copious notes and we're definitely going to invite you back at some point in the fairly near future. But listen, thank you so much for being here. This has been, I've really enjoyed this.
John Sviokla: My pleasure.
Corey McLaughlin: Yeah, thanks so much.
John Sviokla: Excellent. Thanks. Bye-bye now.
Dan Ferris: Throughout history, gold has been the most secure, least volatile, most international, and least political form of money. And when market uncertainty is high, many investors look for safe-haven outlets, including billionaire investors like hedge-fund founders Ray Dalio and John Paulson, who have recently loaded up on gold. After climbing past $2,700 an ounce, we could see gold reach as high as $3,000 an ounce by the end of 2025, or higher. Find out the best strategies for investing in gold when you go to www.2025goldsurge.com and sign up for our free report.
You'll discover the easiest ways to buy gold before it rallies, and the number one pick for this current gold bull run from our analyst team. Learn more and get your free report at www.2025goldsurge.com. Well, that was quite something. I suddenly have a desire to take AI a lot more seriously and to know a lot more about it. I could have just listened to him for a while longer. If I had more paper to take notes, I would have said this podcast is going to be 90 minutes instead of 60, because I was really learning a lot from him.
Corey McLaughlin: Yeah. I was learning a ton as well about how to think about AI, just starting with the definition of it itself. And I think a lot of people are searching for the kind of answers that he gave about how this technology really affects companies and people, and which ones will have the advantage. He laid out great ways to think about that, like the intelligence leveragers, which could come in different industries, not just the tech companies.
Obviously, we've seen it with the tech companies, Google, Amazon, Microsoft, the ones that have been linking up with nuclear energy already, those are the ones that are clearly taking it seriously now to power all of this stuff too, which is a whole other conversation. But if you're a person looking to use AI or a company trying to figure it out, I can see why he's a highly regarded consultant, for sure, for what he's doing and why he's doing it. It's just, yeah, he's really, that's the best explanation I've heard about a lot of what AI actually is and what it could be. Yeah.
Dan Ferris: I know, and when I hear the word "consultant," I usually go, oh, no thanks, because I was sort of one. I was a publishing consultant for a little while many, many years ago in the D.C. area. And when I hear the word "consultant," I think, eh. But obviously there are consultants out there who know a hell of a lot, and we just talked to one of them. Like I said, I'm overwhelmed. John has all these great little frameworks: three types of capital, four types of AI users.
I didn't know he had worked with Valens Research and our colleague Joel Litman, but 50% of market cap and 50% of profitability in the stock market, I think he said, is actually up for grabs. And WINS workers: words, images, numbers, sounds. Gee, Corey, do you know anybody like that? I think I might know somebody like that.
Corey McLaughlin: I know a couple of people. I know at least two. Yeah, I know at least two people. And that's the thing. That's why, personally, I'm thinking about how you stay ahead of all of this, or keep up with it, for my current and future career, but then also about the companies that would be good as investments. Yeah, super useful. My mind, I don't know if this is a pun, given our topic, is kind of blown right now from everything that we just heard. So, I'll probably be listening to it again and checking out his newsletter myself when this comes out.
Dan Ferris: I feel like it's the first time I've heard someone talk at any length about artificial intelligence where I didn't feel like I was getting a lot of BS, high-conceptual stuff that didn't connect to anything. This is the first time I've heard someone talk at length in a way that gave me a bunch of ideas for things to read about, learn about, and ways to try to use it myself. When he made the power tool analogy, I thought to myself, I need more power tools right away.
So, very useful stuff in this interview today. I really enjoyed it. And John's a great talker. He has a really good way of explaining things. He was completely down with trying to make it as concrete as possible, which consultants don't usually want to do, because making it concrete tells you what they're doing for their clients behind the scenes. But he helped us out a lot there. Wow, I really enjoyed this, and I'm sure everyone listening who has any interest in AI did as well.
So, that is another interview, and that's another episode of the Stansberry Investor Hour. I hope you enjoyed it as much as we really, truly did today. We do provide a transcript for every episode. Just go to www.investorhour.com. Click on the episode you want, scroll all the way down, click on the word "transcript" and enjoy. If you liked this episode and know anybody else who might like it, tell them to check it out on their podcast app or at investorhour.com, please.
And also, do me a favor: subscribe to the show on iTunes, Google Play, or wherever you listen to podcasts. And while you're there, help us grow with a rate and a review. Follow us on Facebook and Instagram. Our handle is @investorhour. On Twitter, our handle is @investor_hour. Have a guest you want us to interview? Drop us a note at feedback@investorhour.com or call our listener feedback line, 800-381-2357. Tell us what's on your mind and hear your voice on the show. For my cohost, Corey McLaughlin, until next week, I'm Dan Ferris. Thanks for listening.
Announcer: Thank you for listening to this episode of the Stansberry Investor Hour. To access today's notes and receive notice of upcoming episodes, go to InvestorHour.com and enter your email. Have a question for Dan? Send him an email. Feedback@InvestorHour.com. This broadcast is for entertainment purposes only and should not be considered personalized investment advice. Trading stocks and all other financial instruments involves risk. You should not make any investment decision based solely on what you hear. Stansberry Investor Hour is produced by Stansberry Research and is copyrighted by the Stansberry Radio Network.
Opinions expressed on this program are solely those of the contributor and do not necessarily reflect the opinions of Stansberry Research, its parent company, or affiliates. You should not treat any opinion expressed on this program as a specific inducement to make a particular investment or follow a particular strategy, but only as an expression of opinion. Neither Stansberry Research nor its parent company or affiliates warrant the completeness or accuracy of the information expressed in this program, and it should not be relied upon as such. Stansberry Research, its affiliates, and subsidiaries are not under any obligation to update or correct any information provided on the program.
The statements and opinions expressed on this program are subject to change without notice. No part of the contributor's compensation from Stansberry Research is related to the specific opinions they express. Past performance is not indicative of future results. Stansberry Research does not guarantee any specific outcome or profit. You should be aware of the real risk of loss in following any strategy or investment discussed on this program. Strategies or investments discussed may fluctuate in price or value. Investors may get back less than invested. Investments or strategies mentioned on this program may not be suitable for you.
This material does not take into account your particular investment objectives, financial situation, or needs and is not intended as a recommendation that is appropriate for you. You must make an independent decision regarding investments or strategies mentioned on this program. Before acting on information on the program, you should consider whether it is suitable for your particular circumstances and strongly consider seeking advice from your own financial or investment advisor.
[End of Audio]