July 24, 2024

5YF Episode #20: Poolside CEO Jason Warner

AI eats software, specialized models win, billions of coders, synthetic data, NVIDIA and the Hyperscalers, and the future of code w/ Poolside CEO Jason Warner

Five Year Frontier

Transcript

Jason Warner (00:00)

Sam has said, if you're relying on the model not becoming 100x better, because that's effectively what your application company is doing, you're about to get steamrolled. Don't bet on that. Well, in software it's actually going to be worse. Because as the models get better at software, most of the things that happen at the application layer are going to be in the model. And it's going to be night and day different.

Daniel Darling (00:31)

Welcome to the Five Year Frontier podcast, a preview of the future through the eyes of the innovators shaping our world. Through short, insight-packed discussions, I seek to bring you a glimpse of what a key industry could look like five years out. I'm your host, Daniel Darling, a venture capitalist at Focal, where I spend my days with founders at the very start of their journey to transform an industry. The best have a distinct vision of what's to come, a guiding north star they're building towards, and that's what I'm here to share with you. Today's episode explores the future of coding. We will discuss the competitive AI landscape,

Nvidia and the hyperscalers, specialized versus generalized models, synthetic data, software that writes itself and the future of software development. Guiding us is one of the most renowned leaders in software development, Jason Warner, co-founder and CEO of Poolside AI. Poolside is an AI startup challenging OpenAI and Anthropic by creating its own models to build the most capable AI for software development.

Although less than two years old and launching this summer, Poolside has already raised over $500 million from leading investors such as Redpoint Ventures, Bain Capital Ventures, and DST Global. Jason was previously the CTO at GitHub, the world's largest developer platform, both before and after the Microsoft $7.5 billion acquisition, where he also helped develop Copilot. Prior to that, he was head of engineering at Heroku and a senior technical leader at numerous other companies. In addition to leading top software development teams,

Jason served as a general partner at Redpoint Ventures, which has invested in companies like Snowflake, Stripe, HashiCorp, and Netflix. He also sits on the operating board at Bridgewater Associates, helping them innovate at the technology frontier. Jason, so nice to see you. Thanks for coming on for the chat.

Jason Warner (02:51)

Thanks for having me.

Daniel Darling (02:53)

You were CTO of GitHub both before and after their $7.5 billion acquisition by Microsoft. And while you were there, you helped incubate a real groundbreaking product called Copilot, which was really the world's first look at an AI that could work alongside software developers and help them write code. Essentially a kind of autocomplete for coding. Now, clearly you believe they haven't captured the market, given you've since gone all in on founding Poolside, which is also building AI for software development. So what made you embark on a standalone company, and what did you see needing to be done differently in the industry?

Jason Warner (03:43)

So one, I don't think people fully realize what's going to happen to the world with AI. Now, specifically, I don't think that anybody understands this at a broad macro level, but I definitely don't think that people understand what's going to happen to software. But if you do, and this is where I spend my time, what you realize is that software eating the world has barely started. So you have to ask what matters and what doesn't. And what matters, when you're thinking about the future of software, is understanding how developers will interact with these systems, and then how people who aren't developers in the moment will be able to create in the future.

And if you understand what that is, it's about building an AI specific to software, one that understands more about the nature of software. GitHub and GitHub Copilot are not that. GitHub Copilot is an application that calls GPT-4 and GPT-3.5 Turbo. Those are general purpose models, and they're not focused on this idea. And if you want to project out to what I think we'll get to, full program synthesis, it's not going to come from a general purpose model.

Daniel Darling (03:43)

Got it. So, you know, really underlying it is what model it's calling and how people are building on that. And one of the underlying advances that makes these large language models so effective today is reinforcement learning. And each major player in the space kind of has a different approach. You've got OpenAI's reinforcement learning with humans in the loop who provide the feedback. You've got Anthropic with constitutional AI, where algorithms instead of humans provide the feedback.

And then you've got Poolside who's introducing one specific to coding called reinforcement learning via code execution feedback. So can you explain how that works and what advantage it gives you?

Jason Warner (04:20)

Think of it as a massive simulator, a code execution environment. So imagine that you pull down some Git repository from Bitbucket, GitLab, or GitHub, and you identify tens of thousands of candidate tasks inside that code base that are good to put inside the simulator environment. Then you say to our base model that we're training, hey, how good are you? Can you complete these tasks? Give us 20 possible solutions to this task that we've just identified for you. And from that, we explore the solution space, whether a solution is positive or negative, and grade them along the way: this is the correct solution, and it's excellent, across categories of things that we've defined as a grading scale. And so what we're doing is identifying thousands and hundreds of thousands and eventually millions of those candidate tasks inside these various code bases, and having our model execute against them. Some of those solutions are going to be bad. Some are going to be excellent. Some are going to be atrocious. But each one of those is valuable data to us. So from that, our model is getting better. It is learning both ways not to do something and ways to do something, and it's generating a ton of synthetic data for us. And that synthetic data is going to pay dividends in the long run as we train our next generation model.
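
To make the loop Jason describes concrete, here is a minimal Python sketch. Everything in it is hypothetical and invented for illustration: the `Task` format, the three-point reward scale, and the `toy_model` stub that stands in for a real code model. Poolside's actual system runs over real repositories at enormous scale and is not public; this only shows the shape of the idea, that executing sampled solutions yields graded synthetic training data.

```python
from dataclasses import dataclass


@dataclass
class Task:
    """A candidate task mined from a code base: a prompt plus a checkable spec."""
    prompt: str
    test_input: int
    expected: int


def toy_model(task, n_samples):
    """Stand-in for the base model: returns n candidate solutions for a task."""
    # A real model would sample diverse code; these hardcoded variants,
    # some right and some wrong, just exercise the grading loop.
    candidates = [
        "def solve(x):\n    return x * 2",   # correct for a doubling task
        "def solve(x):\n    return x + 2",   # runs, but wrong answer
        "def solve(x):\n    return x *",     # doesn't even parse
    ]
    return candidates[:n_samples]


def grade(task, source):
    """Execute a candidate in a throwaway namespace and score it:
    1.0 = runs and is correct, 0.5 = runs but wrong, 0.0 = crashes."""
    namespace = {}
    try:
        exec(source, namespace)              # the "simulator": actually run the code
        result = namespace["solve"](task.test_input)
    except Exception:
        return 0.0
    return 1.0 if result == task.expected else 0.5


def collect_synthetic_data(tasks, n_samples=3):
    """Every graded attempt, good or bad, becomes a synthetic training example."""
    data = []
    for task in tasks:
        for source in toy_model(task, n_samples):
            data.append({"prompt": task.prompt,
                         "solution": source,
                         "reward": grade(task, source)})
    return data
```

The key point the sketch captures is that failures are kept, not discarded: the wrong and broken candidates come out with low rewards, and those (prompt, solution, reward) triples are exactly the synthetic data the next training run learns from.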

Daniel Darling (05:35)

Interesting. And those are really two areas I'd love to unpack now. One is the synthetic data piece. But before that, you're really touching on one of the big questions in the AI industry, which is whether generalized or specialized models will win, and how they'll compete with each other. You've got OpenAI saying that generalized models are the way of the future, able to do all different types of tasks. And then you've got the specialized models like Poolside's.

Why are you confident that the specialized model will succeed in this area? And is this the case also outside of the field of software?

Jason Warner (06:12)

It's a matter of physics at the end of the day. In an unconstrained world where there's infinite compute, infinite data, infinite time, we all have the exact same access to the same resources, we would all be doing the exact same thing, which is training the largest possible general purpose model we can possibly do. But we don't live inside that world. We have a whole bunch of constraints. We have inference budgets.

We have limited access to data. We have compute constraints. How many GPUs can somebody network at the moment in a single, interconnected cluster? These are real physical constraints at the moment. And so OpenAI is effectively the one who proved out a whole bunch of things. We owe them a debt of gratitude. Their bet, as the only one in the space, to go after general purpose made sense. Anthropic being number two, when nothing else that mattered existed at the time, also made sense.

Do you want to be the third, let alone the eighth, best general purpose model in the world? We don't have this conversation for full self-driving. We don't have this conversation for robotics. We don't have this conversation for drug discovery. We understand that specialty models in those domains are going to be what wins, ultimately, because of the constraints and the parameters and the budgets and all the various things that go on in those areas. But we allowed ourselves as an industry to get kind of confused on this for software.

I'm not confused. We will be able to go further faster in the domain than somebody who's not focused on it. That's just the natural course of what has always happened in history. You focus on something, you're going to get better at it, and you're going to get better faster than somebody who views it as a side quest.

Daniel Darling (07:48)

And that makes intuitive sense, and focus is a real superpower in that way. And as you can probably tell, we want to take you through all the big AI topics, because you've had an incredible career spanning all different areas, from investment to the corporate side to now the startup side. And you mentioned synthetic data and the data unlock there previously. Is there a bottleneck around data in the industry, and what do you see as the value of synthetic data?

Jason Warner (08:17)

The dirty little secret of the entire industry is that we all have access to the exact same data. We all have access to the same publicly available data. This is why OpenAI has been striking data partnerships. You can look at it through the lens of there being only two things that all of us are trying to do: one, become more compute efficient, or two, become more data efficient. When OpenAI or Anthropic or someone says we're going to get a 22,000 GPU cluster or a 30,000 GPU cluster, or Elon was over there saying, I'm going to try to get to a 100,000 GPU cluster by the end of the year, they're effectively saying, I don't have access to more data, so my way of making my models better is going to be scaling up my cluster. Now, there are lots of other things inside there, in terms of being able to train faster and run more experiments, but bear with me as I say that.

And then there are people who are going to be on the hunt for data. Those are largely going to be the general purpose models and their data partnerships, because they're going to scour for access to all the Salesforce data, or partner with some publishing house to try to get access to theirs. But the holy grail of all this is being able to generate your way out of the data problem. And again, that's what software allows us to do. Our model, which we call Poolside Malibu, is our first release.

Poolside Malibu sits inside a massive simulator environment. And inside that simulator environment, there are hundreds of millions, if not billions, of tasks that need to be completed by the model. It's effectively an eval platform, right? So we say, for each one of those, give us 10 to 20 possible solutions. And then we go through a whole bunch of other things, which we grade and rank. But the possible solutions are synthetic data. And that synthetic data, positive, negative, of different quality or whatnot, is actually real, live code synthetic data for us to train on in the future.

Daniel Darling (10:01)

Okay, that's really well outlined. And on the compute camp that you talked about, one of the big questions is how you get access to enough compute, and how you solve the energy problem powering that compute. So what does that future look like to you? Are we just in a mad scramble for more of it?

Jason Warner (10:18)

The things that will matter in the next 10 years, as we build out these various intelligence layers, are energy, the chips that make and run these AIs, and the intelligence layers themselves. And if you understand that stack, that stack matters, and a whole bunch of other things, no matter what we wish, are effectively rounding errors on it. So if you believe what I just said, then you'll do anything in your power to become one of those intelligence layers, because we're talking about GDP capture as the economic upside, not market share. And the math on that is different. We've not seen math like that before.

And so when you think about the compute needs, the energy needs, and all that sort of stuff, there's a reason why all the hyperscalers are looking at nuclear power plants, and there's a reason why they're working with the Middle East. They understand what I just said; they believe it as well. The hyperscalers are going to do anything they can to get their hands on as much power as possible and as many chips as possible. And so all the hyperscalers are making their own chips, and NVIDIA is making sure they have the next two generations of the chips that are going to both train and run these AIs laid out.

And then obviously the OpenAIs and Anthropics of the US and the world are going to race to become the preeminent intelligence layer in their domain. So if I play that back to you, really, people are seeing the huge amount of GDP that can be automated with AI, or sort of captured by AI.

And so the question around securing compute and the infrastructure behind it, whether it's energy or chips, is really just a necessity to get the job done, and the ROI of all of it is so massive that people will do anything they can to get that job done. Foundation AI companies, that is, not application AI companies. And by the way, there are massive, distinct differences between the two. There are AI producers, people who are building the actual foundation models themselves, and AI consumers, people who call out to the AI companies.

AI in general has broken people's brains, and very specifically has broken VCs' brains, because it's very difficult to understand why a company like mine or Anthropic or OpenAI needs to raise so much money to go build these things. But what has happened is they've confused what value looks like here. AI application companies have fallen into the AI bucket the same as AI foundation model companies have. But if you understand the distinction between the two, and where value capture is going to happen from a VC or investment perspective, you understand what's at play.

And if you understand what I said in a previous answer about what's going to matter in the next 10 years, yeah, there are companies who are not investing in foundation model companies, and they're dead and they just don't know it. There are some mid-scalers that are trying to think of themselves as arms dealers, and they just don't know they're on the long tail of obsolescence. We see this every platform shift, we see this all the time, but there are names that we all know and love, or maybe not love, but there are stocks out there, and they're just not going to be relevant in 15 years.

They're going to be relevant for the next five or six, seven possibly, but there's going to be some sort of long tail for them where they just lose that relevance. And it's largely because the intelligence layer is going to pass them by.

Daniel Darling (13:31)

Anyone that's top of mind for you?

Jason Warner (13:33)

I don't want to get in trouble. There's a whole bunch of people who you would consider to be mid-scaler type sizes. We're talking hundreds of billions of dollars in valuation, and they're playing around in different types of markets. They're playing around still kind of in a SaaS orientation, and they're just going to get rolled. They are 100% going to get rolled here in some short order. But it's the same thing that happened with a lot of infrastructure companies when all the clouds came out. How many networking infrastructure companies, those types of companies, still exist when AWS is the 800-pound gorilla in the space and could do all of that stuff? Most of those companies don't matter anymore. Even if they're still around, they just don't matter.

Daniel Darling (14:15)

Look, I think you've given a really good macro picture. Now I'm excited to zoom down into your world specifically, which is software development, where you're a world-leading expert. So 40% of code is already generated by AI today, last time I checked. So where are we headed, if this is where we are in the first innings?

Jason Warner (14:34)

The way I always thought about these AIs helping out developers is the following, and give me a second to lay this out. The moment we released GitHub Copilot in June 2021 in its beta form, we effectively were saving developers seconds. It was fancy code completion backed by a derivative of OpenAI's GPT-3 called Codex. Then side-by-side chat entered the equation, and that kind of saves developers minutes. We're starting to enter a phase where we'll possibly save developers hours, and then days, of time. So just think of it as an increasing amount of time saved. That's one lens on this.

Well, another lens to think about this: the moment before we released GitHub Copilot, every single line of code in the history of humanity was effectively a human endeavor. The compute upstairs is what built it, your fingers hitting the keyboard and your brain figuring out what to go do. The second we released GitHub Copilot, we entered a developer-led, AI-assisted world. And we all know that's where we sit. That's the zeitgeist of what we talk about. Well, that's going to quickly give way to an AI-led, developer-assisted world, and it'll happen right around the time we start saving developers a lot more time. So multiple hours going into days is when that will happen. And you've heard the words agents and agentic behavior pointed around a lot, and yes, it's good, but it's also bad, because it actually belies what's going to happen. It'll happen when the models are more capable of doing these things.

And once we hit that inflection point, it's only a little while before we enter into AI-led, anyone-assisted creation. Right now there are roughly 100 million developers on the planet, people who are technically capable of creating. Now with AIs you can do more, but you still technically need to be a developer, or at least have some sophistication. But at some point we're going to enter into AI-led, anyone-assisted. Think about what it would look like when someone could create a simple website Midjourney-style, where you say, hey, I want to go do X, Y, Z, and there's a whole bunch of follow-up questions that happen.

Next thing you know, out pop two or three examples for you to iterate on, going back and forth with the AI. Now, I say a little bit of time; that could be an hour, it could be two, it could be a day, but the point being, it happens. That's a very different world for people to enter. As the models get more capable, what we will see is that major elements of the entire software development lifecycle will start to collapse into the models.

And we already know this to be true, because we've seen it happen in the general purpose landscape. With GPT-2, GPT-3, and GPT-4, we had those hot agentic summers where everybody built all these frameworks around GPT-2 or GPT-3. But when GPT-3 came out, all the ones that were built around GPT-2 went away. Same thing happened when GPT-4 came out: all the agentic behaviors that were built around GPT-3 went away. And the reason why is that the models got so much more capable.

And Sam has said, if you're relying on the model not becoming 100x better, because that's effectively what your application company is doing, you're about to get steamrolled. Don't bet on that. Well, in software it's actually going to be worse. Because as the models get better at software, most of the things that happen at the application layer are going to be in the model. And it's going to be night and day different. That's fascinating. And you can really start to project that out.

So in one case, you've got the developers in these Iron Man suits. And then the other one, which is really one of my favorite topics, is how you can enable a non-developer to write software using just natural language prompts.

Daniel Darling (18:08)

So if you pull on that thread further, and the capabilities advance to where the models are able to enable a person who is not a developer, like me, to produce software: just like someone enters a request into Google and gets back a link, are we going to get to a point where I can enter something into a system and software is produced in real time to meet my demands?

Jason Warner (18:37)

Real time is going to be a hard thing, because that goes back to it being a matter of physics: how fast can we run these things? Inference is a real issue. How fast, how big, how complex, all that sort of stuff. But what I will talk about here is what you're asking about, which is what I call full program synthesis.

So end to end: you make a request, and out the other side pops software. Yes, 100%, that will happen. It'll happen to a point where you will be capable of doing a large set of things that you previously wouldn't have been able to do. Some of that might be too complex to be done in a single shot, but you might be able to do it over the course of a couple of different back and forths. But yes, you'll be able to do this. And I think that, even more than this, it also means that you'll effectively feel like you've just been given superpowers.

We see these in other areas already. We've seen it in images, we've seen it in songs, we've seen it in poems or, yeah, full text blurbs. But you're going to see it in software. Now, it's much more complex in software to do this, and I mean much more complex because it's not just a text output or an image output. To get to running software, it's a systems problem, which is why, you know, we talked about different phases for Poolside. Phase two is for us to be a cloud company; we're going to run that software for you. Because as the SDLC kind of collapses into the models and they become more capable, we'll be the world's first software intelligence layer, full stop, the most capable AI for software on the planet. Well, what happens when you're the most capable AI for software on the planet? At some point you're making a request in, and I'm going to show you a couple of running examples. I'm going to run your code for you. I'm going to do that for you. It's part of my value proposition, but also part of the ultimate idea of these things working together: we're going to run that for you.

Daniel Darling (20:17)

Who are the big winners and losers in that kind of world? It's such a big reshuffling and collapsing of the software industry. What does that world look like to you in five years' time, and who are the winners and losers?

Jason Warner (20:29)

Well, I think obviously people are winners. Humanity is a major winner in this. And I'm not an AI doomer, as you can probably tell, given what I'm making my life's work. I also think that's such a misleading concept; people who just don't really understand the technology fall into that one. But I think humanity is going to win.

When it's possible to go do stuff you previously couldn't do, humanity is a big winner. But also imagine a scenario where all of a sudden all of the medical or scientific advancements, where you have a couple of researchers but need teams of developers to put the ideas into practice, can get a long way, if not all of the way, via just a system. Well, that is a compression of learning cycles that's going to happen for us. It's not technically a software function, but it requires software to reach the end goal. Imagine what that looks like for humanity. So that's one of those. Now, there are a lot of other winners: if you're a hyperscaler right now, and you understand what I said before about energy, chips that make and run AI, and intelligence layers, and you're all in on this. Though only one of them is truly acting like they understand that. But then there's going to be a whole host of losers.

And again, it comes down to people who just haven't fully grasped this yet and can't make the adaptation really quickly. So you can think of people who have legacy businesses and really don't want to kill the legacy business. Can you imagine when somebody is able to basically pull up some sort of an X clone, whether it be a CRM clone of some sort, in a matter of hours, one that matches so much of the functionality? Or it can crawl and build you an X or Y plugin so it can pull data from or push data to something else.

You're not going to put up with all this BS that we put up with in this industry from some of these effectively dinosaur companies who are all about lock-in, as an example. That won't be a thing.

Daniel Darling (22:28)

And so you've sat on both sides of these hyperscalers, right? You know, being within Microsoft and now outside in your own startup. Are they just going to get bigger and more dominant? It seems like they have understood and woken up to the potential, and, in fact, they're driving a lot of it.

Microsoft, Google, Meta, Apple, all of the hyperscalers. Are we going to see far more dominance from that area, or is this something that's potentially disruptive to them as well?

Jason Warner (22:57)

So those are particularly key and interesting ones, mostly because of their war chests. How much cash they keep on the balance sheet is interesting. Microsoft is obviously the current leader here.

What's going on, and effectively the way to think about Microsoft, this is the easiest way for me to explain how I view them: Microsoft does not innovate, and they haven't innovated for decades. Microsoft is a PE company, a PE holding company for various assets across a number of different sectors. The relationship with OpenAI, the acquisition of GitHub, LinkedIn, Minecraft, all of the various things. They are not a software development company anymore. They are a PE company.

And they're amalgamating things across various industries that hold together and give them this market position. And they do that better than anybody. And they do recognize the trend. They were well ahead of everybody else on the AI side of the fence. So as long as they keep that up, and I see no indication that anything's going to change over there, they're in a very interesting market position. The other folks have effectively been late to this trend.

And some of them interestingly so, given their market position. I'm particularly thinking about Google on this one. This should have been Google's domain. This is what they should have owned from the jump. And for whatever reason, they can't. You know, there's a lot of talk about why they can't and haven't, and will they? You never bet against Google.

But I always make a bet on people. And while there are many tens of thousands of incredibly smart people at Google, there are effectively only one, two, three people making very bold decisions inside the company at the moment, it would appear. And that's what it's going to take. You cannot do the next 10 years by diffusing your leadership and saying, I kind of hope that all 15 people in this room can agree and get to a strategy. No, it takes one person saying, F everything else that's been said in this room right now, we're doing this. That's it. And if you can't run your company at mega scale that way right now, you're in trouble.

Which leads me to Apple. Apple making that partnership with OpenAI is interesting, because if you think about what Siri should have been, this is effectively ceding market position to OpenAI. But we don't know the full gravity and details of what the relationship is going to be. I'm really, truly interested in that one.

Daniel Darling (25:19)

And that also brings up the question of small models at the edge, on device, versus the large language models at the center.

Do you think that there's a role for these kind of smaller, more dynamic, lightweight models?

Jason Warner (25:31)

Small models are going to play a role, but they're not going to be a market mover. It's always the large models that are going to matter. Large models are where the intelligence happens. The small models are going to be distillations; they're going to be sharp tools. They're not the overwhelming future of AI. They're going to be adjuncts to the large neural networks.

Daniel Darling (25:51)

And what about Meta? They've obviously made a massive push with the open source community and really done great work there with Llama. What's your view?

Jason Warner (25:59)

It would appear that their strategy is to turf everything else by staying open and unthrottled. They could possibly do that and then close-source it at some point; it's very clear what they could do. They want to keep everything open source here. And it's also not open source, it's open weight. Until they open source their datasets, we should not be calling these models open source. So it's a little bit of a co-opt on their part, but there's no world in which they continue to give all that away for free.

If these models get sufficiently large, sufficiently intelligent, they'll pull that back. But I think it's an aggressive, really important catch-up strategy that only a couple of players in the industry could play.

Daniel Darling (26:34)

We're talking about so much disruption and so much change that I want to skip ahead to the future, when this all starts to play out and maybe the dust is starting to settle a bit. On the Poolside website, you state, we believe in a technological future, one of abundance for humanity. Fast forwarding to that kind of future, what does the world look like to you?

Jason Warner (26:57)

I go back to what I said before about these intelligence layers, these neural networks that are going to exist in the world, that people are going to be able to ask questions of and get answers from. We've not seen something like this since we could first walk onto the internet: before Yahoo, when you had to navigate the pages yourself, to Google, where you could just go and say, hey, I'm looking for a thing, can you help me figure out a little bit more information about this topic?

The first time you started to see search engines appear, you were like, WTF, this is kind of amazing. And we would literally spend all night, if you remember way back when, looking up topics, because, holy crap, you could actually go deep on a topic for the first time without having to go to the library or having all the encyclopedias there. And there were rabbit holes you could go down, too. So just think about the fact that these things are going to exist, but it's going to be slightly different than search engines. It's going to be: I want to go do a thing. Can you help me achieve this goal in the shortest amount of time, with the most amount of intelligence possible? And all of a sudden you're capable of doing that.

Daniel Darling (27:55)

Switching to another part of your career, you were an investor at Redpoint, the venture capital firm that backed Snowflake, Twilio, and Stripe, amongst others, and of course, Poolside. And you were there from 2021 to 2023 as we entered into this new large language model AI wave. So putting your VC hat back on, what qualities would an AI startup today need to have to really seize your attention amid all the noise out there?

Jason Warner (28:24)

So obviously it's going to depend upon stage and a couple of other things, but let's just net it all out. Killer team, because at the end of the day, that's what really matters. You have to have a killer team. The founding team in particular has got to be killer, but who they can attract, recruit, and retain has got to be absolutely amazing too. And it has to be very different from what has actually happened so far in a lot of these companies. You have to have applied researchers, researchers, and distributed systems engineers. These are massive systems, and you have to be able to attract and recruit those people. So diversity in the founding team across skill sets and areas is incredibly important. I also think that we have left the phase of science projects. About three years ago, you could raise a ton of money and have zero notion of how you were going to commercialize your entity.

You've got to have commercial instinct. You really have to know how you're going to get customers and retain them. You know, the dirty little secret of all AI companies, particularly AI application companies, the AI consumers as opposed to the AI producers, is that churn is through the roof. There are so many people playing with applications, possibly even paying for some short period of time, and then churning off. So you have to know how you're going to retain them. So what am I looking for in the team? I'm always looking for the team, because at the end of the day, it's the team that's going to win. If you can find a team that can build out the technology, which should be a base core requirement, and knows how to commercialize, and has both sides, core technology instincts and commercialization instincts, you've got a chance. And if you've got those things, a lot of other things just don't matter.

Daniel Darling (29:56)

And do you think we're now at the stage where new entrepreneurs and startups can build on top of these large language models to take the application layer into the real world economy, start capturing this GDP value that you're talking about, and start to slice up the different verticals and industries?

Jason Warner (30:13)

I think so. We're in the AI light bulb phase right now. There are a lot of second and third order effects still to happen, but you can do this right now. And as the domains get their own AIs as well, you'll be able to build much more specifically in those domains. So think about software. Up to this point, before we release Poolside this summer, everyone's been using general purpose models, and there's only so much you can do with a general purpose model. With us, you'll be able to do a lot more. But yes, you can and should be thinking about building on top of these things. And I think the table is set for the foundation model companies; we effectively know who they all are in the world at this point. But the application layer is still wide open. That's where the thousands of startups are going to come from.

Now, my advice to the VCs listening to the podcast here is, by God, figure out how you value the AI application companies, because so far you've done a really bad job. You've been giving them foundation company multiples, and you're basically giving them a death sentence. They just don't know it yet. There are two different kinds of worlds: there's the world of the foundation models that can really capture a huge amount of the value, and then there's the application layer that acquires the customers, or the attention, and the revenue.

There are three layers right now in AI. You have the AI foundation model players, and we know who they all are, and they're going to attract tons of capital because that's the predominance of where the GDP capture is going to happen. Then you have middleware companies. Those are the tooling companies, helping enterprises get used to small models or pipe things together, all that sort of stuff. Those are classic middleware types of companies, and you should understand we've been through this phase multiple times. And then there are AI application companies. These are no different from any application company in the world, but we've been treating them differently because somebody layered AI onto them. It is no different. Figma is no different than old Adobe or Salesforce or whatever. These are application companies. So it is all about: what are you doing that's unique? How are you getting the customer? How are you retaining the customer? Do you have some sort of discernible moat, whether it be a workflow or some sort of partnership or something else? But don't confuse somebody that's consuming AI with somebody that's producing AI.

Daniel Darling (32:24)

And do you think we've yet seen an AI native company, aside from these model layer players? Have we seen an AI native company at the application layer yet?

Jason Warner (32:34)

I'm certain we have. That would be a developer who has been able to leverage enough automation to build themselves up to a point. I don't think we've seen what Sam has talked about in the past, which is that first solo or five-person unicorn, but I think we're going to see a lot of much smaller companies, which I'm actually a big fan of. I think most companies at this point are bloated and overly staffed. And I think that has to do with the fact that a lot of the work is trivial and kind of annoying and treacherous. As we move some of those things out, we'll be able to lean out quite a bit. And so the AI natives are going to be the ones who never take on that bloat in the first place.

Daniel Darling (33:12)

Absolutely. And that's something that we're seeing on our side as early stage VCs and are very excited about. Well, look, Jason, thanks so much. We've covered such a wide range of topics, because you are such a deep expert in so many of these fields, and I'm really excited about what you're building at Poolside. Good luck this summer with the big release. I'm sure it's the first of many more to come, and thanks for coming on to chat with me today.

It was a real highlight to be able to chat about the future of AI and software development with Jason. Not only is he ushering in disruption with a billion-dollar foundation model company, but he's seen the new wave from the start, having been at GitHub, then Microsoft, and even on the investment side over at Redpoint.

Jason makes a really compelling case for specialized models and clearly believes the greatest value will accrue to the model layer. What stands out about the future he's helping to create is a wave of capability in the creation of new software, not just from developers, but also non-developers, turning ideas into action like never before, which, as he puts it, will feel like we've all been given superpowers.

Jason is a great one to follow, sharing unfiltered insights from deep within the AI industry. You can find him on Twitter @jasoncwarner and see him gear up Poolside for its big release over the summer. I hope you enjoyed today's episode, and please subscribe to the podcast to listen to more coming down the pipe. Until next time, thanks for listening and have a great rest of your day.
