Transcript
Troy Demmer: I think with the burgeoning of AI, it just enables you to even extract more insights out of these models. And so you start to create a situation where the model may know even better than the observations that can be made in person.
Daniel Darling: Welcome to the 5 Year Frontier podcast, a preview of the future through the eyes of the innovators shaping our world. Through short, insight-packed discussions, I seek to bring you a glimpse of what a key industry could look like five years out. I'm your host, Daniel Darling, a venture capitalist at Focal, where I spend my days with founders at the very start of their journey to transform an industry. The best have a distinct vision of what's to come and a guiding North Star they're building towards. And that's what I'm here to share with you.
Daniel Darling: Today's episode is about the future of industrial robotics. In it, we cover turning the physical world into data, robots that climb, fly, and swim, AI's ability to predict failures, and America's $5 trillion aging infrastructure problem. Guiding us will be Troy Demmer, co-founder of Gecko Robotics, which uses a fleet of advanced robots and AI software to help government and heavy industry maintain and manage their critical infrastructure.
From Navy vessels to power plants to dams, Gecko collects data and delivers insights across more than 500,000 of the world's most critical infrastructure assets. It has developed a digital layer of intelligence over the built world to improve performance, prevent breakage and failures, and increasingly predict how an asset will behave in the future. The company has raised over $220 million from top investors including Founders Fund and US Innovative Technology Fund. Gecko's co-founder and chief product officer is Troy Demmer. A graduate of Carnegie Mellon University, Troy was previously in the healthcare industry working at the University of Pittsburgh Medical Center before launching his first startup, 360showings, which 3D rendered homes for the real estate market.
Along with his work at Gecko, Troy also runs his own venture firm, First Order Fund, which invests in early-stage startups building moats using data. Troy, nice to see you. Thanks for coming on to chat with me today.
Troy Demmer: Thanks for having me. Great to see you.
Daniel Darling: I'd like to start with the problem that Gecko is addressing, because it's a really big one: helping to maintain and repair our critical physical infrastructure, from bridges to electrical grids to aircraft carriers. Machinery and equipment unexpectedly breaking and failing costs the Fortune 500 something like a trillion and a half dollars a year, and aging infrastructure in the US is estimated to be a $5 trillion problem. So this is a massive task to address. What you're doing in practice is using a bunch of sensor-packed robots that climb, crawl, swim, and fly to collect data and essentially create digital versions of the physical assets. Can you talk us through your collection of robots that do this?
Troy Demmer: Yeah, absolutely. So as you suggested, it's really about understanding the state of the built environment. And there are a lot of sensing technologies that could be useful for that. There are sensors the medical industry has used for a long time, things like ultrasound and sonograms, and things that have made their appearance more recently, like lidar, photogrammetry, and electromagnetics. These types of technologies can really help to give that picture.
In terms of what's going on within the built world, we think a lot about, okay, what solves the right problem? If it's a surface thing, you don't need to see into the walls, to have that X-ray vision of what's going on. But in many applications you do need that. In some instances we're trying to figure out what's going on with steel bonded to concrete, and concrete and rebar, and what's happening in all those different layers of that asset. That's a very complicated problem, and it requires many different types of sensors all working to piece together what's going on at each of those layers.
Robotics, and other form factors, are basically how we scale up those sensors and deliver them to the point where we need to gather that data. So in some cases we're using robotics, in other cases we're using drones, and in other cases we're installing a permanent sensor to give us ongoing time-series data. There are many different ways to get either the resolution or the scale needed, and that's when we use those different form factors.
I think it's really important because this allows things to stay in operation. These types of technologies can be deployed while the facility is still operating or while cars are still going across bridges. The whole idea is you don't actually have to shut down the equipment to take these observations and measurements.
Daniel Darling: And let's talk about your customers for a moment. What type of assets are you monitoring?
Troy Demmer: Our approach is to ask what's the most critical infrastructure, whether that be to ensure national security or to ensure public safety. For process industries, energy refineries, power plants, metals and mining, and refining, we look at where their value really comes from: how do they take a raw material and turn it into something more valuable? The way to think about it is, where do pressure and heat transform something? And where that happens, it's a volatile environment. It's an environment containing lots of pressure and lots of heat, where something's being transformed from one thing into another.
That's usually a good area for us to look at, in terms of where we can expect some very challenging environments that need this predictability and insight: is the condition of my asset what I think it'll be? Can it do that next production run, or am I going to be surprised by something that fails or something that has a catastrophic explosion? These are industries where, if the wrong thing has a leak, it can cause huge consequences for the environment, for the humans that work there, for the people that live and reside in the area, not to mention the collateral damage to the business. We really started with these high-consequence areas where you just can't afford mistakes like that, despite them happening way too often.
Daniel Darling: Still, it's probably best to have a robot rather than a human operating in those environments. So a lot of this is around how you collect data. From there, how do you enrich these assets with data models, what you call data layers? And how do you then package that up into your software and communicate it back to customers, translating data into something they can digest and find value in?
Troy Demmer: Yeah, to paint the picture here a little bit, we reconstruct that asset, whatever exists there physically. We try to represent that digitally. You might use the word digital twin; it's a bit overutilized, but we reconstruct that asset into what we call a rich asset model. And from that rich asset model, we're able to layer all these different context layers on top of it. It's usually the combination of those, combined with information that is already known about that asset: when was it installed, when was it commissioned, what were the materials, what were the design specs, what operating pressures or temperatures should it typically endure.
And so all of that helps to build this digital version of the asset. And then we start to contextualize the current state: what's the health of this asset today? That enables a range of decision making, from maybe I want to operate this asset differently, to I need to make an intervention, or I need to monitor something, because if I don't, I'm going to have a failure, the kind of failure that leads to the $1.5 trillion of losses you mentioned early on.
Then it goes even beyond that, which starts to stretch out that time horizon: how do I manage this asset over its life cycle? Maybe there's a new operation that I want to do with it. Maybe I want to modernize that asset. What is the actual asset that I have today? Can I fit this new apparatus on it, or could I change the feedstock or the input into it? What kind of safety factor do I still have built into this asset as it has degraded over time? The whole concept is persisting this digital asset so that it enables this range of decision making.
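To make the idea concrete, here is a minimal sketch in Python of the kind of "rich asset model" Troy describes: static design data with condition measurements layered on top, used to estimate remaining safety margin. The class, fields, and numbers are illustrative assumptions, not Gecko's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class AssetModel:
    """Minimal sketch of a 'rich asset model': static design data
    plus time-stamped condition measurements layered on top."""
    name: str
    design_thickness_mm: float   # wall thickness as built
    min_allowable_mm: float      # below this, the asset needs repair
    measurements: dict = field(default_factory=dict)  # year -> measured thickness

    def corrosion_rate(self):
        """Linear thinning rate (mm/year) from first to last measurement."""
        years = sorted(self.measurements)
        loss = self.measurements[years[0]] - self.measurements[years[-1]]
        return loss / (years[-1] - years[0])

    def years_of_margin(self):
        """Years until the wall reaches its minimum allowable thickness."""
        latest = self.measurements[max(self.measurements)]
        return (latest - self.min_allowable_mm) / self.corrosion_rate()

tank = AssetModel("Tank A-7", design_thickness_mm=12.0, min_allowable_mm=6.0,
                  measurements={2018: 11.2, 2021: 10.3, 2024: 9.4})
print(f"{tank.corrosion_rate():.2f} mm/yr, {tank.years_of_margin():.1f} years of margin")
```

A real system would fit more than a straight line and fold in the context layers Troy mentions (operating pressures, temperatures, materials), but the shape of the decision, intervene now versus monitor, follows from the same kind of model.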
I think the really valuable thing is when you start to make those aggregations, those summarizations, across broad swaths of assets, the end-to-end value stream for a specific customer, or you start to look across all the tanks that we've scanned to date. How are different materials responding over time, and how are different process environments, environmental factors, and regions of the world impacting these built-world materials? How do we build more intelligently in the future?
Daniel Darling: Yeah, and that seems like a really powerful thing that you can unlock with scale, right? You monitor an incredible number of assets, I think it's 500,000. What are some of the benefits of that scale, in terms of having access across customers to similar types of assets and being able to leverage that for the benefit of all the other customers?
Troy Demmer: Yeah, this insight was learned pretty early on, when we were working exclusively, really focused, on just one vertical, one asset within that vertical. It was thermal power plants and the boilers that combust anything from biomass to fossil fuels. And what was really interesting is that even before we'd go scan that next asset, that n+1, we would already have insights in terms of where the likely problem areas are, where we should focus our attention, and how consequential one area is over another, based upon experience and things we've been trending over years. So that's some of the power.
And then, really, I think with the burgeoning of AI, it just enables you to extract even more insights out of these models. And so you start to create a situation where the model may know even better than the observations that can be made in person, because now you've got this corpus of data that is so good at predicting what's going to come next. I don't think we're there yet, but this is the stage that's being set.
Daniel Darling: If you had to fast forward a couple of years, which is the underlying premise of this podcast, you think that maybe, with enough robustness and data in the models, their ability to predict the failure modes of an asset starts to really increase. And it's not just reliant on the everyday sensors monitoring it in real time; it shifts more to the model, which has such a density of information.
Troy Demmer: I think that's right, to be refined, but I think what's really valuable about something like that is the access to the technology. Today, Gecko's assets are largely in North America, but there are places that are really difficult to get to, offshore sites or locations where it's hard to even get a robot there, much less a human.
How do you monitor those, or at least have some insight going in, to stack rank and prioritize? Maybe a customer's got a million assets under their purview, and it's like, where do I even start? What's my starting place? Is this going to be a ten-year journey to digitize and understand everything that's going on across 30 facilities? Well, now you've got a prioritization, a stack ranking, on how to go eat the elephant, so to say, and really start to target in, make the biggest impact, de-risk your business, create operational efficiencies, and spend capex in the way that gets you the most bang for your buck. So I think there are a lot of practicalities to it. And I think the new tech unlocks that are happening will probably accelerate even the things I'm thinking about more quickly.
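As a rough illustration of the stack-ranking idea, the sketch below ranks assets by consequence of failure times how much of their allowable wall loss has already been consumed. The names, numbers, and the deliberately simplistic scoring rule are all hypothetical, not Gecko's actual method.

```python
# Hypothetical asset list: (name, consequence score 1-10,
# design thickness mm, minimum allowable mm, latest measured mm).
assets = [
    ("Boiler waterwall",  9,  8.0, 4.0,  5.0),
    ("Storage tank",      5, 12.0, 6.0, 11.0),
    ("Transfer pipeline", 7, 10.0, 5.0,  6.5),
]

def risk_score(asset):
    """Consequence of failure times the fraction of allowable wall loss
    already consumed (0 = as built, 1 = at the repair limit)."""
    _, consequence, design, minimum, measured = asset
    consumed = (design - measured) / (design - minimum)
    return consequence * consumed

# Highest-risk assets first: a starting place for a digitization campaign.
ranked = sorted(assets, key=risk_score, reverse=True)
for name, *_ in ranked:
    print(name)
```

Even a crude ranking like this turns "a million assets, where do I start?" into an ordered work queue, which is the practical unlock Troy is describing.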
Daniel Darling: Are you starting to see this data being used not just to inform how you maintain and operate, but also how you do new product designs or upgrades to these kinds of systems?
Troy Demmer: Yeah, there's a range of possibilities here. There's a lot of really interesting research that goes on from a materials standpoint, but the thing that was always missing was the feedback loop: knowing the performance of that asset without waiting decades for it to play out. Now you can get a feedback loop much more quickly, to say, is this change that I've made to a design or material improving my performance, my outcomes, the longevity of my assets?
Daniel Darling: I've seen that you do a lot of work with the US Navy, helping it stay asset-ready. And you had a remarkable stat that only a third of its ships are available for deployment due to maintenance cycles. Maybe you can run us through a little of the work you're doing with the Navy to make their ships ready.
Troy Demmer: We really focus on the biggest problem, working backwards: what's the biggest cause of dry dock delays, and what's the highest cost associated with things that are discovered too late in the process? On the new-build side, it's really costly to rebuild something or to find a quality assurance issue as it gets closer and closer to being put in the water. The earlier you can catch that, the more cost effective it is to root it out.
And so for us, we're looking at exactly which components, which parts of the asset, are the biggest drivers of that, using the tools we have and building new tools to fill in the gaps, and doing that in a way where we can get to an end-to-end system that gives one view, one pane of glass, to make those decisions and to understand, okay, here's what I would do differently when the next ship comes in. So it becomes a continual learning process.
Today, most things, and I've seen this in both the private sector and the public sector, are very much fiefdoms. Every facility operates with its own special procedures and processes, really built around the people that work there. And so those learnings, that progress, can't be made, because everything is so specific to the group of people they have. So the question becomes orchestration: how do we understand the material state and readiness of all the Navy assets, whether they've yet to be built, are already commissioned, or are at the point of needing some modernization?
Having that repository of data could one day help someone like the Chief of Naval Operations make these decisions: that's the one I'm putting into battle, or this is the one I'm going to fix next, based upon a real prioritization, a real stack ranking. Today, to get that visibility, you've got to go to a lot of different places, and the signal is not always the most accurate. A lot of times, the loudest voice in the room gets the resources.
For us, there are really two ways we can drive a lot of innovation. One is finding ways to get that data more cost effectively and more scalably: as you mentioned, there are a lot of ways to increase the scale and intelligence of the robots, making it so they can be deployed very easily at scale. The other is the types of data layers they're able to generate: creating net new data layers that give more and more insight into what's actually going on, to reinforce that rich asset model and make it even more valuable to the end customer.
Daniel Darling: The data collection piece is something you spend a lot of time on, and I saw that you testified in front of Congress this year, arguing that the United States needs to improve its data collection ability to properly train its AI models. Can you share a little of your beliefs there?
Troy Demmer: This was with the Department of Homeland Security, and it was a legislative hearing for that committee. What they were really focused on, in a lot of senses, was the security of national assets. A lot of times they think about the digital security, the cybersecurity, of those assets. The perspective we brought was: what about the physical security? We have ports, we have roadways, we have critical energy systems. What's the physical security of those assets? If there are vulnerabilities that exist because of compromises, that just makes them even more of a target to a foreign state actor.
But what's really important, I think, is this: AI is increasingly impressive every day in terms of its capabilities and its use. It's a great research tool, and it's really helpful to the creative brainstorming process. But a lot of the data it's been trained on is just an amalgamation, a summation of the Internet. And a lot of the data about these types of things just isn't there. It's held inside of corporations, or maybe it hasn't even been gathered yet. We have to be really conscientious that AI is only as good as the data it's trained on.
So in places where you have low-fidelity or inaccurate data, not only does AI become not useful, it becomes harmful, because it's going to start recommending the wrong actions. That goes way beyond industrials. I think it's the health of the human body, I think it's transportation. When you really start to look at GDP, outside of consumerism, B2C commerce, social, and the things that are available data-wise on the Internet, a lot of these are corpuses of data that have not been put out there. And these are real opportunities to reinforce that data, make sure that data is good, and then find more useful ways to make it work for the humans in those environments.
Daniel Darling: Switching gears a little to the startup ecosystem around Gecko, you've purposely built the platform to be open, with API connectivity for other robotics companies that want to build onto your software. You built Fulcrum, an API layer that allows these robotics companies to plug into the software. What are some examples you're seeing, and what has been the strategy behind it?
Troy Demmer: There are a lot more companies that we want to attract to the ecosystem, to give them a place for all that data to go. The customers are just demanding it. They don't want another 50 proprietary logins to view one narrow slice of their data. So there's real value in having this central clearinghouse, a place where all this data can be contextualized. We've built some really strong partnerships with companies that have novel, unique technology out there, and we've helped them process and contextualize their data and make the value that can be generated from it more useful to the customer.
Daniel Darling: It'd be great to hear an example of a robotics company using the platform.
Troy Demmer: Yeah. So ANYbotics is a really cool company. They're based in Zurich, Switzerland. We got the opportunity to meet them about three years ago, actually on a trip to Davos, and toured their facility, a super cool facility. They were building this walking dog robot that would go into these operating environments. And they're collecting a lot of different sources of information than Gecko is. Gecko is focused on the things that we do, and they're really looking at the operator walk-downs. So they're able to take one of these walking dog robots and program it for that environment, and then it can continue to take that source of information, digitize it, and analyze trends over time. And we bring that data in and marry it up with the structural health data. So now you can get a sense of how the operating world comes back to the maintenance and capex investment world. There's a really strong partnership there.
Daniel Darling: Your platform allows all kinds of different robots to be plugged into it, and then you amalgamate and make sense of that information and unify it with your own data layers for the customers. That's a really neat approach. I can see it's immensely beneficial to incumbent industrial companies.
How are you starting to see, or could you foresee, the emergence of a more AI native industrial company from scratch that can take advantage of your types of technologies and how all of this automation is starting to happen and start to build a challenger to some of these massive industrial giants?
Troy Demmer: Yeah, I mean, everything has to be translated back to the world of bits, and I think you need some advancements in robotics to do that. We're seeing that with Optimus robots, Boston Dynamics robots, things like that. That's a key part of the process, because there are manual components.
I think a lot of organizations have tried to be a central command and have one place to control all their systems, and that's just not exactly how things work. You still need that proximity to the specific plant. But I think there are a lot of sub-function changes along the way, where you take these knowledge management systems and all this tribal knowledge: how do you codify it, and how do you structure that information to generate real, true best practices and pathways to manage things that today tend to be fairly artisanal? So I think there's a lot of low-hanging fruit here for sure, both for the incumbents and for someone to reimagine things from the ground up.
Daniel Darling: Troy, I really enjoyed the discussion today. Thanks so much for coming on, and congrats on all the fantastic work you're doing. It sounds like an immense problem and undertaking to go solve, but you're well on your way there, so I appreciate you coming to chat.
Troy Demmer: It was a ton of fun. Thanks so much. Thanks for having me.
Daniel Darling: I really enjoyed how Troy and the team at Gecko are using cutting-edge robotics and software to address a very physical problem: that of our aging infrastructure and critical assets. Their diverse fleet of robots allows organizations to collect the first-party data they need to create a digital model of their assets, and it equips them, for the first time, with the ability to drive insights and predictions using AI.
What stood out to me is what this starts to look like at scale. With enough data on a given asset type, AI models can predict events and even redesign the asset itself for better performance, without the need for constant physical data collection at all. This form of advanced simulation is certainly where the world is going, and it will have incredible impact on the industrial economy.
I hope you enjoyed today's episode. Please subscribe to the podcast to listen to more coming down the pipe, and to follow the work of Gecko, head over to their account on X @GeckoRobotics. Until next time, thanks for listening and have a great rest of your day.