As AI systems continue to advance at an astonishing pace, the next great challenge lies in embodied intelligence—the ability of AI to perceive, reason, and act in the physical world. In a recent discussion featuring Michael Cho, co-founder of FrodoBots Labs; Juan Benet, founder of Protocol Labs; and Jonathan Victor, co-founder of Ansa Research, the panel explored the promise of their new project, BitRobot, a decentralized AI research network designed to push the boundaries of embodied AI. Hosted by Lacey Wisdom, a partner at Protocol VC, the conversation touched on why robotics represents the "final boss" in the path to artificial general intelligence (AGI), the challenges of training AI in physical environments, and how crypto-incentivized networks can unlock massive-scale data collection and model training. With the rise of decentralized robotics, this discussion sheds light on a new paradigm for AI development—one that could accelerate progress beyond what traditional, centralized institutions can achieve.
Lacey Wisdom: Let's kick it off here. I'd love to introduce our amazing speakers today. We're joined by Michael Cho, who's the founder of FrodoBots, and we're really excited to have led his round at Protocol VC and talk a little bit more about what he's doing with BitRobot today. And then we're also joined by Juan, the founder of Protocol Labs, creator of IPFS, libp2p, and the brain behind so many things that have come out of our innovation network. And Jonathan Victor, who is a former Filecoin ecosystem lead and current co-founder of Ansa Research. So, today's X Space is a discussion to introduce BitRobot, which is a subnet-based AI research network that's designed to advance embodied AI. And you might be wondering, what is embodied AI?
We're going to actually get into that during this discussion and talk about all the wonderful ways that crypto incentives can help scale decentralized robotics. I think a lot of times when we look at these DePIN networks, the real value add is that we can save a lot of money on research costs and increase the diversity of actual data collection, and BitRobot has an amazing approach to really push this vision forward.
Without further ado, I'd love to just start with a kind of easy kickoff topic. I'd love to ask each of you where you think embodied AI fits into the advancement of AI overall. And maybe, Michael, you can start.
Michael Cho: Thanks for having me, Lacey. I'm super excited about this chat. My view on embodied AI, or basically robotics, is that robotics is the “final boss” in the path to true AGI, if you count AGI as a type of AI that can do the majority of what humans can do. Looking at how things have been going over the last year or two, it seems to me that, at least in the cognitive and digital domains, AI will probably surpass human capability in most cognitive abilities.
So, therefore, what remains in a few years' time — the only frontier remaining, possibly — is all the physical robotic stuff. There are different reasons why robotics is going to take way longer. So I feel by then, maybe most of the attention, and also funding, will go toward solving this final piece of the puzzle. ChatGPT can tell us everything about playing basketball or swimming, but does it actually know how to express that in a robotic embodiment and actually swim and play basketball? It's still subpar compared to humans, right? I think robotics is going to be the final piece of this whole AGI thing.
Lacey Wisdom: Juan, I'd love to get your perspective as well on this.
Juan Benet: I’m excited to be here with everybody. I fully agree with Michael. What we've seen over the last few decades is that whenever there's a software-only domain, where we have a very clear and concrete set of tasks, those tasks can turn into a good training set. We've seen the latest models triumph over almost every major problem that we've thrown at them, while embodied AI and robotics have remained extremely challenging for many decades.
The visions of robotics started in the mid-20th century with all kinds of predictions and hopes of what would come true by the ’60s, ’70s, and ’80s. And all of that ended up being way too optimistic, way too hopeful. The problems were dramatically harder than everyone expected. It turns out that when you look at animals, the amount of cognitive space devoted to just the basic tasks of moving, sensing the environment, making sense of the world around them, then plotting trajectories and actuating all the muscles and so on, accounts for a huge fraction of all of the neurons in animals. So any kind of complicated path-planning turned out to be extremely hard relative to what we previously thought were way harder problems, like writing a symphony or creating art.
It turned out that trying to actuate a robot to move elegantly through a complicated environment and apply just the right pressures, just the right movements to maybe, I don't know, make some eggs for breakfast is a dramatically harder task than just creating a symphony or writing a research report or something like that.
Now we're in this amazing confluence at this moment where the models are getting smart enough and large enough to actually take on these large-scale challenges, but the problem is we don't have the right environment to collect data to be able to train these models. So the successes of deep learning over the last decade and a half have shown this environment where if you can collect the right training sets and the right data sets, you can get trained models that can be very effective, but the problem today is that we don't quite have the right training environment to collect those data sets. So that's where I'm super excited about what Michael and others have been working on to try and generate an environment to be able to do this. So it's a pretty exciting time at the moment.
Lacey Wisdom: Totally. I think it's really an interesting point that you put there—that a lot of the things that we thought were going to be most difficult for AI to do, AI has actually been able to do already. Perhaps what really makes us most human is really tasks that we take for granted because our bodies are just set up to do them already, like making eggs or going for a walk. I'd love it if you could just look into your crystal ball and give us a sense of what you think the landscape for the future is going to be with embodied AI and robotics.
Juan Benet: Look, I think looking ahead it's been pretty clear over the last two to three years that a lot of large scale groups that have been working on robotics for a long time have now pieced together the fact that within the next three to five years a lot of these problems are going to start getting solved and that robotics can finally be way more productive. We've seen this in many companies that have been manufacturing robotics in China. We've seen this from Tesla and the Optimus bot. And I expect that probably by 2030, we might have millions of humanoid robots in production.
But in order to get there, we're going to need to solve these extremely hard challenges around mobility, and we're going to need to be able to generate these large-scale training data sets. My sense is that basically by 2030, 2035, a lot of the predictions that people were making in the 20th century about robotics will start coming true, similar to how over the last 10 years, a lot of those predictions started coming true in just the software-only domain in environments where we basically beat chess. We beat go. We started being able to produce music, produce art, and create entire movies. So the rate of progress has been amazing over the last decade and a half. And I think looking ahead to the next five, 10 years we'll start going into much larger scale domains.
Once you have humanoid robots, or not just humanoids, but just larger scale robots that can operate in the real world, you start opening up a large degree of capabilities, and you start being able to do production at scale. And so, a large fraction of the economy and all of the work that goes into the economy could start being done by these kinds of robots. My sense is by 2035, 2040, it would be a totally different landscape, economically. A very different sort of world. We'll see that transition start happening between now and 2030, when these systems and these robots will start coming online. We'll solve a lot of the hard problems in mobility, navigation and communication between these systems. And a lot of that just starts with creating a very robust landscape of challenges and a great environment to do data collection at scale.
Lacey Wisdom: And so, I guess the question is, what's crypto's place in all of this? How does open-source collaboration really factor into being a solution for driving robotics forward? Juan, I'll let you take a gander.
Juan Benet: Crypto has been extremely successful at solving problems where you have a very concrete challenge that you need a large number of people to work on. You can turn that challenge into a game with a reward, where you clarify what participating in the game means, what the reward means, and what proportional activity means. We saw this at the very beginning with bitcoin, which amassed one of the largest-scale computing systems in the world, with massive amounts of energy devoted to mining bitcoin. We used that same structure to build the world's largest decentralized storage network in Filecoin, and we've been tremendously successful at amassing that massive-scale storage network and tons of data. We basically created the category of decentralized physical infrastructure networks. Other groups have solved similar kinds of problems with wireless connectivity in Helium, and video calling and real-time communication networks in Huddle01. And now we're starting to see even energy powerplants, like solar powerplants, being built with Glow.
So, all kinds of problems in decentralized physical infrastructure networks can be solved by creating crypto incentives that enable anybody in the world to devote some resources or do some jobs for the sake of the network. The amazing power here comes from taking what would normally be a very complicated sequence of events in a traditional company setting — where you would maybe have to apply to do this work, then get approval, then do the work, then submit results, then have humans check it, and so on — and turning it entirely into a cryptographically verifiable process that is more asynchronous and permissionless, where anybody can contribute resources and join the network. That creates a super scalable environment where you can amass a massive-scale supply of some activity. And that's where crypto DePIN fits with this set of robotics challenges we're talking about: we can create a massive-scale robotics network to solve the grand challenge of generating the data required to train all of these models for embodied AI, by creating a decentralized physical infrastructure network that rewards participants for adding resources into the network, or for operating robots in the network, doing tasks, and so on. But I'll let Michael go in depth into it.
Lacey Wisdom: That's part of what we first really loved about what Michael was doing when we first met him, and he was working on FrodoBots. I think there were certain aspects of gamification there that we thought were really amazing and really fulfilled all of the vision here around robotic gaming. So, Michael, I'd love it if you could share a little bit more about your experience on FrodoBots, why you chose this gamification method, and how it solved some of the bottlenecks of data collection for AI.
Michael Cho: So I am very bullish on robotic gaming as a thing, although, honestly, I don't think it's fully validated yet. I totally agree with Juan on roughly the timeline: certainly within our lifetime, this world will be full of robots that are able to do real productive stuff that today humans have to do, like wash the dishes, clean the toilet, that sort of stuff. But until we truly get to, let's say, level five autonomy for these tasks, the robots are going to appear very dumb for a long time, until suddenly they're superhuman, right? I think gaming is a very good wedge. If you look across the history of gaming, gaming has always led the technology in a lot of fields, like VR and simulations. In fact, if not for gaming, we wouldn't have NVIDIA and the actual GPUs; gaming demand sustained them for a couple of decades. And now, of course, it turns out you use the GPU to train AI. For the time being, these robots are great as entertainment, and they have a lot of entertainment value. For those who don't know, we run this game that's like Pokemon Go in real life with cyborg robots. And we literally have gamers, and these are not even crypto gamers. I would say 99 percent of them don't even know this is a crypto project; they're web2 gamers. And some of them actually pay us hundreds of dollars just for the privilege of driving a bot somewhere in the world.
Now, I'm not saying that this kind of game is going to be the next League of Legends. If the DAU is in the thousands or even tens of thousands, it's going to be a very small game in the world of gaming, but it's going to be a huge thing in the sphere of robotics, because in the real world it's just so challenging and so operationally costly to do some of this. If you can make a relatively interesting, fun gaming experience, then in effect these human beings are now your users; in fact, they pay you for the privilege of playing with the bots. And I think that really makes the economics a lot better. I don't think every robotic task can be fully gamified, but wherever you can, that's a very good wedge. We open-sourced a data set of about 2,000 hours, roughly two terabytes. It doesn't sound very big, but it turns out to be the largest of its kind, certainly in the public domain. And that's actually how we got to work with a lot of top researchers. Yeah, I think gaming is a very good wedge. It's a great way to get attention. And sometimes, as the capabilities of these robots improve, people might suddenly realize: "Hey, this robot that was built for battle bots and entertainment can also cook me a meal."
Lacey Wisdom: That's really interesting. I'd love to just double-click on something you just said there. Why do you think this method really appealed to so many top researchers? Because I think that might not be clear to the wider audience.
Michael Cho: Yeah, so we’ve been very privileged so far. We’ve worked with some top researchers from DeepMind, Stanford, UC Berkeley, and so on. In academia, you can have very good researchers, but for structural reasons, they are actually very short on resources. Increasingly, if you don't have a minimum threshold of, let's say, compute or data, it's very hard for you as a researcher in academia to really move the needle. Even though we are such a small project, because we were doing these things (we're just crazy enough to put a bunch of robots out there), a lot of them were interested, because they didn't even have this kind of data set to begin with. In a way, this is the interesting thing about robotics: Unlike other DePIN projects like Helium or Hivemapper that are trying to take mindshare and market share from big incumbents, like the telcos, in the field of robotics I would say everyone is starting from zero, except maybe Tesla, which obviously has a huge lead from self-driving on the road. But even Tesla, for Optimus, is trying to get a data set, because this data set just doesn't naturally exist.
And so, if we do this token incentive — it's been proven many times already, right? There are so many different projects that have scaled so well because of crypto incentives. This is where crypto can really move the needle and contribute in a very meaningful way because even our web2 counterparts are also starting from zero. Anyway, I think maybe Jon should also say a couple of things about this.
Jonathan Victor: Sure. Where to dive in? Maybe a couple of things that we've touched on so far. I think one reason why crypto is particularly well-suited, touching on the point, Michael, you were just making, is if you segment the world into who has all of the requisite resources and who has some of the requisite resources, you basically can divide it along megalabs, like Tesla, and almost everyone else.
But if you were to ask who has the research need, who has the research talent, who has the hardware resources, who has the compute, the storage, and so on — if you were to break it into individual silos, Tesla spans all of those things, but you'll also have a number of people who fill individual gaps. And so I think the core insight of token economies is that you can align disparate groups around a single economic resource, and you can balance your economy to have these different groups coordinate with each other. And to Juan’s point, if you can do it asynchronously, you can actually scale way faster.
And I think one of the core insights is where there's a lot of different groups that are rate-limited in different ways. But if we can coordinate them together, we can actually get to a much bigger outcome. And the other piece that is maybe a subtler point is that crypto is really interesting for capital formation, and robotics is incredibly capital intensive. If you were to ask who has a hundred thousand humanoid robots today, it may be a handful of the manufacturers in China. Tesla's ramping to tens of thousands, hopefully by the end of this year. We'll see. But definitely not research universities. They may have one or two that are available to them.
And so it's actually quite interesting if you can actually decentralize your capital base to say, “How can I set up the incentives to allow the massive capital formation that we've seen in other DePIN networks to create the fleet of infrastructure, such that if someone wanted to collect data — so actually have human teleoperators do stuff, like fold clothes, as an example — collect that as a dataset.” That's obviously incredibly valuable, but also more subtly, it gives you a fleet that you can go evaluate your models on.
Right now, suppose you had a model and you wanted to go test it and see how it performs over a million hours' worth of iteration. If you only have one robot, you have to wait for that one robot to do a million hours' worth of evaluation. If you have a million robots that you can each rent for an hour, then obviously you can parallelize.
This is where the fact that crypto can decentralize the capital base, so you can get way more parallelism, is incredibly important. That's maybe a more subtle point, but one that's more interesting to me as well: the actual economics. And this isn't just for researchers; this is for startups too. There's a lot of money going toward these newer large robotics labs, and you see the rounds they're doing, and they're in the nine figures. It really reminds me of the AI wave from, I guess, a year ago, where everyone was clamoring to acquire their own GPUs. And I think one of the big questions is, can you make that way more capital efficient? Does everyone need to have a standing fleet of their own personal humanoids? Or can they tap into a much larger pool that they can rent on demand, almost like an AWS of the hardware itself?
But that's the sort of thing that, if you have the right coordination mechanisms and the right crypto rails, you actually can make possible, and have it be something that a whole community of folks is maintaining. I think we're at a very unique moment in history: the supply chains have matured to the point where the costs of these things are low enough that it's actually possible to do them at scale. Although $50,000 sounds really high for a humanoid, there are smaller robots that are much cheaper. And there are some interesting papers about cross-training, so you can combine datasets that are created across different modalities. We're at a very unique moment where all of these things are converging, which I feel is why DePIN has a really important role in the future of how these things grow.
Lacey Wisdom: That's a great segue into talking a little bit more about BitRobot and the problem that you're solving with BitRobot if you wanted to kick off, Michael.
Michael Cho: Yeah, so BitRobot is basically a network of subnets. The thing I love most about the way we set this up is that it's meant to be very flexible and expansionary in scope. Meaning, you can have one subnet that takes care of one type of robot, let's say sidewalk robots. That subnet might involve human gamers and do the DePIN thing, so that we can crowdsource the deployment of these robots, and its output would be, let's say, real-world data sets. In another subnet, you might have humanoids in some kind of simulation environment, doing purely simulation and training. So the contributors that you want to activate in that particular subnet would be compute providers and researchers.
The way we've set it up, it really is going to be very flexible. But the whole idea is that each subnet will have a very defined output, and each of them is a mini ecosystem created to produce the output that contributes to the overall advancement of embodied AI. And we think there's a lot of synergy, ultimately, among the different subnets: one subnet could take the output, say a data set, from another subnet and use it to train a bigger model, and that, in turn, goes into the simulation environment of yet another subnet. I think that's the setup.
Juan Benet: One of the things I would add here is that the subnet architecture is an extremely good way of organizing a lot of disparate ways of contributing to a network. Over maybe the last five to seven years of crypto, DePIN networks have come up with ways of identifying a way to contribute, creating concrete structures and rules to measure that contribution, and then rewarding it. A lot of this work ends up being baked in and hardcoded into the core of a protocol; it gets defined as a reward function over some work contributed. One of the really innovative things that BitTensor contributed is a much more general framework for assessing valuable contributions to a network through these subnet structures, where you can define a different subnet that rewards a different type of contribution. Then, as a second step, you adjust the rewards that you give each subnet, with parties judging the relative contribution of different subnets and orienting emissions toward them.
This is connected to other work that we've seen in the space, things like retro PGF (public goods funding) or impact evaluators, or some of the deep funding work, where you try to assess valuable contributions from a very large distribution of possible contributions, and try to figure out what is creating the most value for the network and reward that.
And so, one of the things that I think is super clever about the BitRobot network is that it uses a subnet architecture to bring together a lot of disparate resources in robotics. You could be contributing hardware resources. You could be contributing an actual fleet of live robots that are ready to be teleoperated. You could be contributing an application or game that makes use of those resources. You enable this large-scale, open-ended innovation network-wide, and then you assess which of these things ended up contributing most of the value to the network, and reward those things. I think that's an extremely powerful new model for how to route block rewards and incentive structures to different parties across a large-scale network, focusing on the things that are going to move the needle most to create value.
I think it's a great innovation, and I think we're going to see this model start permeating into a lot of other networks across the entire space. And I'm super excited to see this and how fast it can go and scale up to what I think is going to be one of the largest, if not the largest, robotics networks deployed globally — and fairly quickly.
Jonathan Victor: And maybe one other little thing that's quite interesting about organizing things in subnets is that it also natively forces you to set up the answers for “How do I do native payment rails for a lot of tasks?” You can totally use block rewards as the incentive structure for these things, and you've already built the answer for, like, “How do I route stablecoin payments to a bunch of different tasks?” If you're already doing the accounting of who's doing what work and how that work is being valued, it gives you a native way to distribute compensation to a group for producing an output. That can be pretty important if you're trying to reward all the contributors to a data set: say, a data set of useful tasks that you're now licensing out to an AI lab, as an example.
Michael Cho: I have two more things to add. One reason I think this subnet idea is so powerful is that it's so flexible, right? Research is going to shift over time. Maybe this year there's a certain robotic task that is completely unsolved, but in a couple of years, let's say, that gets solved. If you have very fixed tokenomics, that may not reflect the changing requirements of what embodied AI research really needs at that time. But if you set it up as subnets, you can make the whole network adjust very quickly, so that hopefully the right resources go to those subnets that are really contributing to what the research field needs at a particular point in time.
The other thing is what I realized while doing FrodoBots the first couple of years: while we built, or rather gathered, a very significant data set with sidewalk robots, and while the data is very valuable and in fact very proprietary and unique, it actually doesn't solve the final problem, which is still, let's say, autonomous navigation, right? You need to add in the other components. You need really good researchers. You need to give them compute along with the data, and you also need to give them a fleet of physical robots. To really complete the whole thing, you can't just have the data; it's necessary, but it doesn't do the whole thing. And again, this is where I think the subnet architecture really brings in value. One subnet could just focus on gathering the data. The next subnet, now that we have the data, ropes in really good researchers who know how to do this, gets them the compute and storage they need, and gives them a fleet of robots that could come from yet another subnet. So we can eventually get to robotic AI models that reach some kind of level four, level five autonomy. That's ultimately what everyone wants, right? You can't just give a data set to someone. Ultimately, people want intelligence, and to get to intelligence, you need all of these things working together. You can't just have one or the other.
Lacey Wisdom: I think that's a really great point, Michael. And that's something that we really liked at Protocol VC about your vision: by going with the subnet architecture, you're unlocking scale and acceleration in the robotics field in a way that I don't think any web2 robotics competitor can match, because they are going to be very limited and resource-constrained in what they can do internally to build out these different hardware pieces and data components. By doing the subnets, you're creating this Russian doll effect where different subnets can feed into each other and use the data that other subnets are generating, which we think is really exciting. Juan, I'd love to hear from you how BitRobot fits into the wider picture of Protocol Labs and beyond.
Juan Benet: So we're trying to push humanity forward by driving breakthroughs in computing. And from my perspective, AI systems are one of the most important things happening in the world today that is going to totally transform how humanity operates. Everything about how we do work, how we operate, how we produce things, how we engage with each other, and so on. We're in the middle of this massive transformation, and the next frontier in that transformation is to enable systems to be embodied in the real world, to start operating at larger scales. And so this is a major component of what the future is going to be. And we need to be actively driving breakthroughs in that.
Now, BitRobot specifically is like this amazing confluence where there's all of the AI robotics improvements, but it's done in a crypto-decentralized environment where we've built probably one of the most important crypto networks, the largest storage network on the planet, using the same kind of incentive structure. And so we're deeply familiar with how to build and scale these systems. It's the perfect opportunity for us where we see it as a major step forward in broader AI, the broader transformation, and finally creating the landscape on which robotics can finally be figured out. And we do so with crypto networks, decentralized physical infrastructure, and so on, which is very core to a lot of what we built over the last 10 years.
Lacey Wisdom: That's great. Pivoting to you, Michael, I'd love to hear why you decided to build BitRobot on Solana.
Michael Cho: Honestly, before this project, I was a complete crypto skeptic. I thought all crypto was a scam. This was three years ago. So when we first started this project — me and my brother, who is my co-founder as well — we just wanted to build a cheap sidewalk robot. That was our only goal at the time. But then, two months in, we discovered Helium, and I was just mind-blown that you could use crypto in this way. So then I realized, okay, I need to learn from these guys, but I had no idea about crypto. And then, somehow, that very week, there was a Solana hackathon in Singapore. We did a little bit of research, and I realized there's this thing called Ethereum, but it's very costly to use. So I figured, okay, with Solana at least I can test things pretty quickly. Honestly, it just started like that. So we've been building on Solana for nearly three years now. Although, to be honest, because we still don't have the token, over these last three years 99 percent of our focus has been on the hardware and the real-life, real-world operations. But for me, as a builder, I just want a blockchain that I know has performance and can scale really well, so I don't need to think about all these scaling issues. I've been very comfortable just continuing to build on Solana — and, of course, the whole ecosystem survived the really harsh FTX episode and bounced back super strong. In general, all the other builders I've met in Solana are very product-focused, a lot of great makers. The other thing is that in the last year-plus, a lot of big DePIN projects have obviously been built on Solana as well, so you can argue it's very battle-tested these days. I think it's a no-brainer for us to just continue building on Solana.
Lacey Wisdom: Do you have anything else to add to that, Jonathan or Juan?
Jonathan Victor: Maybe my only other note is, I had a tweet yesterday when the BitRobot network did its first announcement. I've been working on Filecoin stuff for the last five-plus years, and I'm continuing to work on it, obviously. I think one of the most important pieces is networks like BitRobot. There's actually a handful of these now inside of the DePIN space. But I think this is rarefied territory, where there are actual networks in web3 that are going to be putting out petabytes to exabytes of data. Michael helpfully corrected me that it will probably get to exabytes for BitRobot. I think we're seeing all of these things that people have been working on in this space for a while converging together. The actual need for decentralized, cheap, resilient infrastructure is a critical part of how we can actually build a solution for embodied AI. So for me at least, this is one of the pieces that's really interesting: once you actually start working with these massive data sets, you just can't ship them over the wire, so you start thinking about moving the compute to the data instead of moving the data around, even for local compute. It's quite fulfilling to see that yes, we are now getting to the part where these are the networks that people have been thinking about for the last five years — or for Juan, probably a decade.
Lacey Wisdom: Juan, maybe you could give some perspective on how Filecoin works with Solana.
Juan Benet: Yeah, exactly. What I was going to add is that I've been saying for a few years now that the future is very multichain- and multinetwork-oriented: lots of different systems and networks, interconnected to solve different kinds of problems. One of the coolest things we've seen in the Solana and Filecoin ecosystems is leveraging Filecoin to store all of the data that the Solana network produces. Solana produces an enormous amount of data over time, and not all of it is stored in the main ledger visible to everybody; there's a larger account space. Being able to store all of that on Filecoin, and create direct onchain primitives to do so, is one of the cool projects we've been working on together. So when you think about BitRobot creating this additional large network — a mix of robotics infrastructure, hardware, and AI systems, plus connections to a lot of the different AI agent infrastructure being built out across the crypto space — most of that activity right now is happening on Solana and on the Ethereum network. Having a network that can connect easily and seamlessly with both of those environments, and then store all of the data produced over time, is a really good infrastructure design, and that's one of the things I'm super excited about.
Lacey Wisdom: Awesome. And I see a question already. We're actually going to get to Q&A in just a few more minutes. If you have questions, feel free to comment them below, or you can ask them live in a few minutes. But first, we want to talk about what's next for the BitRobot community and how people can get involved. And I'll push it over to you, Michael, to answer that.
Michael Cho: First, watch out for the white paper. There's a lot more detail we can't go over here. For example, what is the governance structure like? How do we go about deciding which subnet gets what? What kind of emission schedule would the network see? Those sorts of things. So that white paper, I think we'll publish it in about two weeks' time. And in the meantime, just today, there are already a couple of people who reached out — founders of projects of different sizes — who are interested to see whether they can spin up subnets. So I think we have a bunch of partnerships. I guess they will start as partnerships, but eventually they are all subnet candidates. And again, that's why I'm so excited about the tokenomics and this subnet-based architecture. So hopefully this year, we'll see a lot more subnets once the network goes live. Then next week, we also have a pretty big product thing. I'll keep it a secret for now, but yeah, do watch out for the tweet.
Lacey Wisdom: And for any researchers or developers that want to join the first subnets, are there any channels that they should go through or do you, or should they just be DMing you?
Michael Cho: Yeah, honestly, the research community is actually the community we've worked the hardest to build personal relationships with. I would say most of the researchers doing, for example, urban navigation, I already know in person. And over the last months, a lot of them have actually started reaching out, usually through friends, or they just DM me directly. But I think we'll definitely do it a lot more systematically, especially once things are set up properly. It shouldn't be going to me directly; it should be going to the network, and it shouldn't be bottlenecked by me as the founder. Eventually, hopefully, most of these things should be permissionless — via proposals, that sort of thing. And yeah, of course, we have our Discord, and we're pretty active now on Twitter (X). So the researchers have ways to find us pretty easily, I would say.
Lacey Wisdom: That's great. And so I want to open it up to Q&A now. This question was submitted. Any thoughts on the work being done at VersesAI with their efforts in the IEEE on the Spatial Web standard? I've been trying to peek under their hood to see if they have realistic alternatives to hardware-intensive deep learning for building reliable, understandable models. Michael or Juan or Jonathan, do you want to take a gander at that?
Juan Benet: I think maybe I'll take a stab at that broadly. BitRobot, in general, is orthogonal to the different models that you might use to do this. The structure of the network enables any participant or any researcher to use whatever model they want to train and operate on robotic infrastructure. So if folks find a different, alternative structure to be very good and successful for robotics, they're very welcome to try it out and to use the network. It's meant to be abstracted out so that anybody can use everything. My own guess here would be that we're likely to see most of the successes in the next few years still coming from the same kind of deep learning architectures. That said, there are some good examples where, if you take some of these networks and distill them, you can find equivalent models that arrive at similar types of algorithms. So we might see some successes along other pathways, and it's great to have other people exploring those. But I would probably still bet on continuing to scale deep learning models to yield most of the results.
Lacey Wisdom: Michael, would you like to add on to that?
Michael Cho: Yeah, I'll actually echo what Juan said. In general, the subnet architecture is very flexible. I can totally imagine that if, let's say, this Spatial Web standard really takes off, then naturally there will be subnets that take advantage of network resources to build on this standard, right? And it doesn't just need to be this kind of spatial format. It could also be, let's say, a simulation environment. What I would love to see is just great builders taking advantage of these crypto coordination mechanics to easily spin up the subnets they want, so that they can get the resources and really do the research they need to do. I don't know the specifics of the Spatial Web standard — I'm just Googling it now — but yeah, I can totally imagine something like this becoming a couple of subnets, actually.
Lacey Wisdom: If there are no other questions from the audience, or if you're still stewing on something to ask, I'll pose another one here. We've talked a lot about the benefits of the subnet architecture. I would love to hear people chime in on what they think some of the biggest challenges will be. It could be with the subnet architecture, or it could be with some of the hardware components, but I would love your thoughts, Jonathan, Michael, or Juan, on what some of the challenges for BitRobot will be.
Michael Cho: Yeah, maybe I'll start first. So I think some of these things apply not just to BitRobot, but to robotics in general. For the software element of AI, the pace of development will be very fast. But you're always going to be limited by hardware, especially, for example, for humanoids. I would say most of the humanoids today — I personally don't think that's the final form. Usually, what you find is that they'll have actuators at the joints. But if you look at a human body, the actuators are not at the joints. Your ankles and your knees don't actually house them; the actuators are in the muscles, in your biceps and whatnot. And I suspect we need another one or two generations of a totally different class of hardware iteration to get to that kind of humanoid, which would actually be, physically speaking, a lot safer. The other thing is that I have yet to see, for example, really high-fidelity touch sensors. We take for granted that when you grab an object, you don't need to see it — you know that you're grabbing the object, and you roughly know what you're touching. Touch is a big data modality. Right now, there are just no high-grade touch sensors that can match humans at a very reasonable price. That is completely missing.
So right now, we're just trying to patch things together and brute-force things with vision, because we have very cheap, very capable cameras these days. So I think we still need a couple of generations of hardware, and some of the subnets, in fact, can contribute to this kind of innovation even on the hardware designs. And then one thing that's unique about robots — because these robots are not going to be that great — is that they actually need a lot of help. So I think we shouldn't underestimate the manual labor that's involved. There's actually a lot of operations. In a way, these robots are like dogs: they need keepers to help them. It could be sidewalk robots, right? You need a lot of humans in the loop, and I don't think we should underestimate that. But I also believe the tokens are going to take care of that. There will be some subnets that are able to incentivize correctly, so that we have enough humans in the loop to get the outputs those subnets really want. But I clearly think that's another big challenge, specific to robotics. It's very different, let's say, from GPU or just storage, right? Those are pretty static.
Lacey Wisdom: Great. And we'd love to hear you also weigh in on this too. Jonathan.
Jonathan Victor: Sure. Yeah. I think one of the biggest challenges is finding the right set of tasks that fit a trifecta. One, tasks that are accessible, so you can get the advantage of scale that DePIN can get you, which inherently requires a pre-distributed set of people to be involved. Two, you need them to be useful in advance of having an embodied AI model. Michael was touching on gaming as one strategy for this, and I think we've talked about a couple of others that could fit — things that basically make it "useful" to have the robot. And three, they need to create the right outputs. One challenge DePIN has historically seen is doing a lot of valuable things but then needing a revenue stream that shows up way too far down the line. So figuring out the sequencing — going from base camp one to base camp two so you can snowball your way up — I feel that's where a lot of the strategy is going to be pretty important. This is also where it's really useful that we're in touch with AI labs and researchers, to have a finger on the pulse of what actually matters. I think that's going to be critical for making sure we can navigate and prioritize which tasks to go after first.
Lacey Wisdom: Totally. And I think what we're really excited about at Protocol VC with BitRobot is that it's one of the few projects we've seen that is not cut off from the realities of research outside of crypto. The fact that BitRobot is already working with some of the top labs and R&D teams focused on robotics means that we'll always have a really good idea of what matters most in robotics research today. So, if there are no additional questions, I would love to thank the speakers. Thank you, Juan, for taking time out of your day. JV, Michael, we know that you're very busy, and we really appreciate you taking the time to hop on and talk about BitRobot's amazing vision. We think this is going to be an inflection point for open-source robotics and the AI movement, and that BitRobot is really going to be able to fully accelerate embodied AI research in every way. Thank you also, listeners, for tuning in. If you want more information on BitRobot, we encourage you to go to bitrobot.ai. Thanks.
Michael Cho: Thanks for having us.