Today, I’m talking with Jesse Lyu, the founder and CEO of Rabbit. The startup company makes the adorable r1 AI gadget — a little handheld designed by superstar design firm Teenage Engineering. It’s meant to be how you talk to an AI agent, which then goes off onto the internet and does things for you, from playing music on Spotify and ordering an Uber to even buying things on Amazon.
Rabbit launched with a lot of hype at CES and a big party in New York, but early reviews of the device were universally bad. Our own David Pierce gave it a 3 out of 10 in May, saying that most of the features don’t work or don’t even exist. And the core feature that didn’t seem to exist was the most important of all: Rabbit’s large action model, or LAM, which is meant to allow the system to open a web browser in the cloud and browse for you. The LAM is supposed to intelligently understand what it’s looking at on a website and literally click around to accomplish tasks on your behalf.
There have been a lot of questions about just how real Rabbit’s LAM was, but the company finally launched what it calls the LAM Playground, which lets people use a bare-bones version of the system. It does indeed appear to be clicking around on the web, although it is very slow.
So, I wanted to know how Jesse planned to invest in the LAM and compete with other AI agents that promise to do things for you. For example, Microsoft just announced a new agent-y version of Copilot, and Apple’s vision for the next generation of Siri is an AI agent — and it’ll run on your phone and have direct access to those apps and your data inside them. It’s the same with Google and Gemini and Amazon’s rumored next generation of Alexa. This is major competition for a startup, and Jesse talked about wanting to get out ahead of it.
But really, I wanted to know how Rabbit’s system works and whether it’s durable — not just technically, which is challenging, but also from a business and legal perspective. After all, if Rabbit’s idea works and the LAM really does go and browse websites for you… what’s stopping companies like Spotify and DoorDash from blocking it? You might have a strong point of view here — Jesse certainly does — but at some point, there’s going to be a fight about this, and it’s not clear what’s going to happen.
To put this in historical context, about a decade ago, a handful of startups tried to stream broadcast television without licenses by putting a bunch of antennas in a single location and building apps that let people access them. This felt technically legal — what’s the difference between all those people having their own antennas and putting all those antennas in a single place and those accessing them over the internet? Some of these companies were seriously innovative — the most famous was a company called Aereo, which spent a ton of money designing specialized TV antennas the size of a nickel so it could pack as many of them into a data center as possible. I wrote about Aereo back then — visited the antenna floor, interviewed the CEO, the whole thing. Aereo then got sued by the TV networks, the case went to the Supreme Court in 2014, and you will note that Aereo no longer exists.
I don’t know if Rabbit is another Aereo, and I don’t know how all these companies will react to having robots browse their websites instead of people. And I certainly don’t know how legal systems around the world will handle the inevitable lawsuits to come. I asked Jesse about all of this, and you’ll hear his answer: he thinks Rabbit will be so successful that these companies will want to show up and make deals. I have to say, I don’t know about that, either.
I do know that this is a pretty intense and occasionally contentious interview. Jesse didn’t back down, and that means we got pretty deep into it. Let me know what you think.
Okay, Jesse Lyu, founder and CEO of Rabbit. Here we go.
The following transcript was edited for clarity.
Jesse Lyu, you’re the founder and CEO of Rabbit. Welcome to Decoder.
Thank you, Nilay. Glad to be here.
I’m very excited to talk to you. Rabbit is a fascinating company. The idea for the r1 product is fascinating. I think a lot of people think that something that looks like the r1 is the next evolution of smartphones or products or something. And then there’s the company itself, which is really interesting, and you’ve got a connection to Teenage Engineering, which is one of our favorite companies here at The Verge. So, just a lot to talk about.
And you’ve got some news to share about opening up Rabbit’s Large Action Model so people can play with it, and it’s kind of an early version. I really want to talk about that.
But let’s start with Rabbit itself. The company has not been around that long. The r1 just started shipping six months ago. What is Rabbit? How’d the company start?
Long story short, it’s a very young company. So here’s a little bit of history. I actually started an AI company back in 2013 called Raven Tech, and we were in the YC Winter ’15 batch.
And it’s basically my personal dream to chase this grand vision. My generation grew up watching so many sci-fi movies with AI here and there, and I guess every geek wants to build their own Jarvis at some point.
So I think that’s exactly how I started Raven Tech 11, 12 years ago. And back then, we had this idea, we had this direction, but the technology wasn’t there — obviously, there was no GPU training, there were no transformers and stuff.
So we worked really hard on the early days of voice dictation and NLP and NLU, which is natural language processing and natural language understanding. The technology wasn’t there, but we tried our best. We actually built an entire cloud system and the hardware, which is similar to what we have at Rabbit today. But the form factor was more of a smart speaker — as we all know, 10 years ago, everyone was chasing that form factor.
Ultimately, the company got acquired, so it’s not a new idea for me. But it became a new opportunity when I saw the progress on the research side with the transformer, and when I got a chance to try ChatGPT and the GPT API.
We were really impressed because we felt the timing was right. To be able to do something like the r1, or more sci-fi, Jarvis stuff, you really need to figure out two parts on the back end. One is that when you talk to the device, the computer or device actually understands what you’re talking about, which is the transformer, the large language model part. And by around 2020, 2021, we believed that the transformer was absolutely the right path that OpenAI and other companies were heading down.
We believed that portion had been solved, or would be solved. So our focus immediately shifted to: after this device can understand you, can it actually help you do things?
And at the company I started 10, 11 years ago, Raven Tech, we were actually one of the first companies to design a cloud API structure. After the [voice] recognition, after the understanding, the query got sent to different APIs. The system has a detector to understand, “Oh, maybe you are looking for a restaurant on Yelp. Maybe you want to play a song from this streaming software.” But 10 years ago, there was a big opportunity in APIs. There were a lot of companies working on APIs. And if you remember, 10 years ago in Silicon Valley, everyone was talking about how maybe in the future the entire operating system would be just HTML5. Right? But that didn’t last very long.
I think now, looking after 2020, the API business is not really a major business for most of the popular services. So we also wanted to evaluate whether we could build a generic piece of agent technology, which is really hard. Because I believe the current AI is all generic. Obviously, there are a lot of people doing vertical stuff, right? You can build an agent for Excel. You can build an agent for legal document processing. But I think the biggest dream, what really makes us excited, is the generic part of it. Can we build something that, without pre-training, without knowing what people want to do, lets them just say whatever they want, and it will be smart enough to handle all the tasks? So that’s why we felt the opportunity was right, and we started Rabbit right after COVID.
The idea that agents are going to be a big part of our life, and in particular general purpose agents that go take actions for us on the internet — I’ve heard this idea from all kinds of folks, from startup founders like yourself to the CEOs of the biggest companies in the world. I want to come back to that. That’s a big idea, but I just want to stay focused on Rabbit for a second. How many people work at Rabbit today?
I believe at the current moment, we’re roughly around 50 people, 50 to 60 if you count the interns. But when we started, the company was seven people, and by the time we launched at CES, it was 17. Growing the team that much within four or five months was quite a challenging job for me.
So CES was the big launch. We were there, David Pierce was at the party. The Rabbit was introduced. You gave demos in a hotel room, I think. And then you had the launch party here at the TWA Hotel at JFK, which is very cool. The thing’s been out, but you’ve been growing. You said you started at 17 people in January at CES, and you have 50 now. What are you adding all those people to do?
Most of it’s just engineers. We have a very small group of design, hardware design or ID that we started from day one, and most of the new folks are working on AI and infrastructure perspective, like cloud basically. We not only ship the hardware. We build the entire Rabbit OS for it. So I think the major work is always going to be in the software part.
How is the whole company structured? As you go from seven to 17 to 50, you obviously have to decide how to structure Rabbit. How is that structured now? How has it changed?
We are primarily located in Santa Monica. We have a device team of really great folks in the Bay Area, and we have a couple of research engineers here and there. So it’s mostly in person, but a somewhat hybrid system. And the way we find our people is mostly by internal referral. We’re not spending money on agencies to do the hiring; most of the good folks come through internal recommendations.
But your 50 people that you have now, how is that organized inside the company?
It’s really flat, in a sense. We have different departments, obviously. The hardware ODM/OEM part is in Asia. We have our ID team in collaboration with folks in Stockholm, Teenage Engineering in this case. And we do our own graphics and marketing, all of that in house. Then, for the software part, we have the device team, which works with the ODM/OEM, and we have the cloud team and the AI team. That’s basically how many teams we have. There are obviously crossovers between teams, and we basically work project-based.
So there is no crazy hierarchy going on. I mean, the biggest company I ever led was back in the Raven days. I believe by the time we got acquired, we were 250 people. So this is still within my comfort zone, to manage 50-ish people.
Teenage Engineering is a big part of the Rabbit story. They obviously designed the r1 hardware, and then their founder, Jesper Kouthoofd, is your chief design officer. How much more hardware are you designing right now? Are there iterations to come? Do you have a roadmap of new products?
The way we work together — obviously this is not the first time we’ve collaborated. We did a collaboration back in Raven days.
First of all, Teenage Engineering is my hero company. It’s basically a fanboy dream come true story for me, and I really appreciate their help over the years.
The way that we work together is very intuitive. There are obviously many ways that could be considered the proper way of designing a project like this, but I think we’re out of the ordinary in how we do it. I can give you an example. Back in the Raven days, all we did was have probably two meetings in person and a couple of phone calls: no email, no text messages. We set up a secret Instagram account that we used to share sketches, and we just hit like on each other’s posts, and that’s how we designed the previous Raven project.
This time, it was even quicker. I think I’ve shared this publicly: I think we spent probably 10 minutes deciding what the r1 was going to look like, and we had quick sketches here and there. Ultimately, I pushed Jesper to use the current color, which is the orange.
We do have maybe two or three projects in our minds, but through the end of this year, our current focus is to really push this LAM to the next level. So yeah, stay tuned. I think one thing people will realize is that this team does hardware really quickly. We started sketching the r1 last November, we introduced it in January, and we started shipping in April. So if we want to launch the next project, it’s going to be on roughly a six-to-eight-month timeframe. Certainly not a year or two.
But that being said, I think... I was having my own community voice chat yesterday, talking to people about the current r1, because I really don’t like the current consumer electronics cycle: one year per generation by default, regardless. We’ve seen the smartphone companies doing annual releases with minor changes. When we designed the r1, the entire Rabbit OS ran out of the cloud. That means this piece of hardware, even though it’s $199 and doesn’t have the latest chips, is really capable of taking on future features. So I don’t think the r1 is a one-year-lifespan device, and neither does our community; they think they can tweak so many things about it. So in that sense, we’re not in a rush to drop another version of it, but we do have different form factors in mind at the moment.
And is Jesper actively working on those designs, or as chief design officer, is he working on something else?
He was literally in our office three days ago. Yeah, we are actively working together. Correct.
How much money have you raised so far?
That’s a good question. I want to be accurate, but it’s somewhere around $50 million total over the company’s lifespan. The last round was $35 million, led by Sound Ventures and Khosla Ventures, with the Amazon Alexa Fund and Synergis also participating. So the last round was $35 million, and if you consider all the money together, I think it’s around $50 million.
When I look at the amount of money that other AI companies are going out to raise, right as we are speaking, OpenAI just raised the biggest round ever in history to go build, obviously, a foundation model, digital god, whatever Sam Altman thinks he’s doing. Do you think you can compete at $35 million a round?
No, but I think talking about competition — money is one part of it. I think I’ve considered myself a veteran because I’ve done startups before. I know how it works. Certainly, money is very important, probably most important in the early couple of years.
But I think when we talk about competition, we ultimately want to ship products to consumers. The way I look at it is that people are not buying electricity. Electricity is controlled by — here in California, it’s Southern California Edison, right? You have an address, you have to pay for it regardless of how much electricity you’re using. But people are ultimately buying microwaves, cars, motorcycles, televisions. People are buying products powered by electricity. So research-wise, I can say very clearly: at this moment, there’s no way Rabbit can compete with OpenAI, Anthropic, DeepMind, and Google. So how can we play the game?
We become partners with everyone. Right? So the r1 is hosting every single model, the latest models from these guys. Their capabilities combine with our product innovation on Rabbit OS and all the features offered to our users. So there’s no way we can compete from a research perspective, but we ship product fast.
You saw OpenAI just released the Realtime API, as they call it. I was actually invited to the event, but I was launching the LAM Playground yesterday, so I couldn’t be there in person. They’re offering an API for people to build agents with. But yesterday, we dropped the LAM Playground, which lets you go to any website and just browse it by voice.
So I think competition is a different magnitude. Money is definitely important, and we hope we can raise more money, of course. But right now, if you talk about competition, we have to play smart. They are good at the research. We are good at converting all the latest research into a product that users can use today.
Let’s talk about what that product is today. So right now you have the r1. You can buy it. It’s a beautiful piece of hardware. It is orange. It is very striking. It has a screen, it has a scroll dial, and then it has a connection to your service in the cloud, which goes and does stuff for you.
Yep.
That costs $199. Are you making money on the sale of each individual r1 unit right now?
Correct.
What’s the margin? What’s your profit on r1?
I have my r1 right here. It’s a very good margin. I cannot tell you the details, but it’s over 40 percent.
Do you make over 40 percent on the hardware margin of the r1?
On the hardware margin, yes; we did the math, we ran the calculations. We might have to redo the math, because yesterday, literally after dropping the LAM Playground, the server crashed multiple times. So we might need to redo the calculation.
But again, first of all, in the beginning, we’re making money. Now we have these more powerful features moving forward. I haven’t heard of a company that went bankrupt because it built a service so popular that it couldn’t afford the cloud bills. I think if you build a good product, there will be —
Well, hold on, I can draw that line for you. So it’s $199. You’re making over 40 percent per unit, so that’s between $80 and $90, right? It’s not 50 percent, which would be $100, so it’s a little less. So between $80 and $90 in margin. That margin — you do have to pay your cloud bills, right?
Yeah.
So is that margin all being fed into your cloud bills?
Obviously, we have dedicated instances with all these cloud competitors. Right? I mean, don’t get me wrong: we’re hosting on AWS, and there’s AWS, Google Cloud, Microsoft Azure. On the LLM side, we have partnerships with Anthropic, OpenAI, and Gemini. So don’t get me wrong, it —
That’s a lot of companies that like to make a lot of money. I just want to be... They’re not cheap to partner with, all those companies.
They’re not cheap, but what I’m trying to point out is that they are competing so fiercely that they offer a lot of good benefits for early startups. I have to shout out all these companies. They really want to figure out a way to help you onboard and maybe make money off you in the long run, but at the current scale, we can totally handle it, yes. We get great deals from them.
So if I buy an r1 from you, you take $90 of margin, or $80 of margin. At what point, how much do I have to use my r1 to turn that negative for you? Because everything I do with an AI, that’s a token. That token costs money. The servers cost money. Your bandwidth costs money. It all costs money. How much does a single r1 user have to use their r1 to eat up $80 or $90 of margin from you?
So I think for a moderate user, using it in a non-robotic way or a non-malicious way, it’s going to be really hard to burn through that margin. But —
Is that two years worth of usage? One year? Six months?
I think it’s definitely over a year and a half. I’m not sure about two years because there’s new features we’re going to implement into this, including LAM Playground and teach mode.
But yeah, I want to share my understanding of this: yes, we did the math. We are making money, no problem. We wish we could sell more, and we’re hoping we can; that would definitely help. But the target of this whole launch strategy is not set on making X amount of money in the first six months.
I think there are other companies that are really greedy about how they launch their product. I’m not even going to mention a name. That won’t work. That won’t work. If you look at any new generation of product, if the founder and the company and the board decide on a strategy of, “Let’s squeeze every single penny out of the user,” it’s not going to work.
Because we know AI is very early, and we know that a lot of things are going to go wrong. In fact, I believe that for every company, big or small, if you work on the latest AI stuff, the first two weeks are going to be a disaster, because you’re going to find a lot of misbehavior from the AI. You’re going to find a lot of edge cases in the model.
So the whole thing is too new. There’s no way we want to charge a subscription. That’s even worse; I don’t like that strategy in general. Even though this sounds very concerning, someone might twist my story and say, “Oh, Rabbit is doing everything great, except they’re going bankrupt no matter what,” right? I think that’s a very stupid way to think, because with a great innovation, you have to focus on the innovative part first. Then you figure out the money part. If we started figuring out the money part now, none of this would make sense. Really. None of this would make sense.
I think there are other people in the industry who have a great understanding of everything, and then they decided to release a wallpaper app and charge $4.50 per month. Right? Hopefully that works, I guess. You can go talk to that guy and say, “Hey, there’s no way you’re going to go bankrupt, because your math checks out. If you charge for this, you’re going to make money.” But that’s based on the premise that the whole logic stands up, right?
So I’m not wasting a lot of my time at this point trying to fine-tune the math to make the margin more like 20 percent or 50 percent. Obviously, as a startup, we need to survive, and we’ve had a roller coaster ride since launch. But we’re growing, we’re surviving, and we’re still pushing features that none of the other devices, including the iPhone, can do, which is a very, very good sign.
So one, I don’t think anybody has ever linked criticism of Humane to criticism of Marques’ wallpaper app on our show before. Well done. I think Marques has a very different view of where his expertise is and what went wrong with that app and maybe one day we’ll talk to him about it.
But my question for you, when you talk about growth and the unit economics of the Rabbit, is that on some curve, the hardware becomes unprofitable for you. Just me having a Rabbit for longer than 18 months becomes unprofitable for you. That’s the moment you would charge a subscription. You would say, “To continue using this thing, it can’t be negative for our company.” And that’s the thing that I’m pushing on here.
I think there are multiple solutions to that question.
One is that obviously — let’s say every user uses the r1 for more than 18 months. There are a couple of solutions. One is that we are going to launch the next-generation device, and maybe multiple devices, that are still profitable on the hardware.
Two, I think we have been preparing for this since day one. Last week, we rolled out the alpha of teach mode to a very select group of testers. I would love to give you access, so please reach out to us later on, and we’ll see if we can help you set it up. We rolled it out to a very small group of our testers, roughly around 20 to 25 people, to be honest. And then over the last 72 hours, I saw probably more than 200 lessons, or agents, created through teach mode. And if you look at the current Apple ecosystem or Android ecosystem, I think the hardware is not going to be the number one money contributor.
It’s really hard to make money on top of the hardware margin anyway. So at some point, you want to convert that into services and software. That doesn’t mean you’re going to charge a subscription for the device. What I think is very promising is that we are going to slowly roll out teach mode to beta testers, and hopefully by the end of this year we can fully open up teach mode, as we promised on day one. So all these lessons, or rabbits, or agents created by independent users and developers can be considered a new generation of an app store. On that, we can make big money.
Using the app store economics of taking 30 percent.
I don’t want to invent any — exactly. I’m not trying to invent any new business model. I think as a startup it’s very risky to invent your own business model, but there is a very good business model out there, which is the App Store, and that’s contributing, what, 70 percent of the annual income, right?
I’m just curious, just as I’ve played with R1s and looked at the device, I’ve always wondered how on earth are you making money at $199? So that makes sense to me.
When you think about what the Rabbit is actually doing, I ask it a query, it shows me a beautiful animation on the screen, which is adorable, and it goes off into the web and uses a bunch of APIs. And now the new large action model, which is the news, right? Yesterday you announced the large action model playground. People can watch it work. I’ve seen the LAM click around on The Verge’s website just to read headlines, which is neat. Is that the back end of this, I ask the Rabbit to do something and in the cloud it goes and clicks around on the web for me?
So we have to separate two different systems here, maybe three. Let’s talk about before yesterday, because yesterday was really a great milestone. Before yesterday, what happens is that you talk to the R1. We have an intention triage system: we convert the audio to text, we send that text to our LLM providers, and after the LLM understands the intention, we send the query to different APIs or different features. There are a lot of features that are on device, like setting a smart timer. Or there’s a simple question that we think another service or model probably answers better than the default LLM. So sometimes we send a particular query to Perplexity. Sometimes we send a particular query to Wolfram Alpha.
So you can think of the intention triage system as dispatching queries to different destinations, and then the relevant features trigger.
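The triage flow described here (transcribe the audio, classify the intent, route the query to a destination) can be sketched roughly like this. This is only an illustration: the keyword rules and handler names are invented for the example, not Rabbit’s actual code.

```python
# Rough sketch of an intention triage system: transcribed text is classified,
# then dispatched to a destination. Keyword rules and handler names are
# illustrative assumptions only, not Rabbit's implementation.

def classify_intent(query: str) -> str:
    q = query.lower()
    if "timer" in q or "alarm" in q:
        return "on_device"        # simple features handled locally
    if "play" in q and "song" in q:
        return "music_agent"      # routed to the music service agent
    if any(w in q for w in ("integrate", "derivative", "solve")):
        return "wolfram_alpha"    # math goes to a specialist engine
    return "default_llm"          # everything else: the default LLM

def triage(query: str) -> str:
    handlers = {
        "on_device": lambda q: f"[device] {q}",
        "music_agent": lambda q: f"[music] {q}",
        "wolfram_alpha": lambda q: f"[wolfram] {q}",
        "default_llm": lambda q: f"[llm] {q}",
    }
    return handlers[classify_intent(query)](query)

print(triage("set a timer for 10 minutes"))  # handled on device
print(triage("play a song by Daft Punk"))    # routed to the music agent
```

The point of the sketch is the dispatch structure: one classifier in front, many destinations behind it, and each destination unaware of the others.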
But after yesterday, we have this playground, and that’s a first stepping stone toward what we really want to create, which is a generic cross-platform agent system. It has to be generic, and in this case it is. It’s not cross-platform yet, because it handles only websites, but it will be cross-platform very soon. With this generic website agent system, you can essentially just talk to Rabbit: “Hey, go to ABC website and help me do this.” That’s exactly how we wish to design the product, and I think everyone in the industry is heading in this direction: you say something, we understand you, and we help you do it. And because we put a window on the rabbithole that you can see, the agent will break down the different steps.
“I’m going to Google first. I’m searching for The Verge. I’m clicking through to The Verge’s homepage. I’m trying to find the title you requested. I’m clicking the button to share this.” And in theory you can chain multiple steps, infinite steps, follow-up queries into the system.
So I’ll give you an example; I think I showed this to another reporter. “Hey, go to Reddit first and search for what people are recommending as the best 4K HDR TV of 2024. Get that model, then go to Best Buy and add it to my cart. If Best Buy is out of stock, then search on Amazon. If they’re both out of stock, get me the second recommended model.”
So you can actually chain different queries, and you can pause it, add to it, tweak it, fine-tune it. It’s really just a playground: you can freely explore the system, and the system is fairly good at daily tasks. And people, obviously developers and our hackers — white hat hackers, of course — are giving us impressive showcases. There are people using the LAM Playground to create an app just by talking to the R1, because there are third-party AI destinations where you can use a prompt to create an app and download the code. So it’s really amazing to see all these great showcases within, precisely, 24 hours.
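The TV-shopping example above is, in effect, a small plan with conditional fallbacks. A rough sketch of that control flow, with stock lookups stubbed out where a real agent would browse (the model names and inventory values below are invented for the illustration):

```python
# Sketch of the chained, conditional query: try the recommended model at
# Best Buy, fall back to Amazon, then fall back to the second-choice model.
# Models and stock data are invented; a real agent would browse for them.

RECOMMENDED = ["LG C4 OLED", "Sony Bravia 9"]  # hypothetical Reddit picks
STOCK = {                                       # hypothetical inventories
    ("Best Buy", "LG C4 OLED"): False,
    ("Amazon", "LG C4 OLED"): False,
    ("Best Buy", "Sony Bravia 9"): True,
    ("Amazon", "Sony Bravia 9"): True,
}

def in_stock(store: str, model: str) -> bool:
    return STOCK.get((store, model), False)

def run_plan() -> list[str]:
    steps = []
    for model in RECOMMENDED:                  # first choice, then fallback
        for store in ("Best Buy", "Amazon"):   # preferred store, then fallback
            steps.append(f"check {model} at {store}")
            if in_stock(store, model):
                steps.append(f"add {model} to {store} cart")
                return steps
    steps.append("report: nothing in stock")
    return steps

for step in run_plan():
    print(step)
```

Each branch in the spoken query becomes a fallback in the loop; the agent’s visible “step breakdown” corresponds to the `steps` list accumulating as the plan runs.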
So I want to make the marker between yesterday and the day before it, right? You announced the Rabbit at CES in January with the LAM, but it wasn’t there. Why announce it without its fundamental enabling feature?
That is not accurate, and I want to take this opportunity to address it. If you go to the connections, now we have seven apps. On day one, we had four apps. Those were the first iteration of LAM, which is not a generic technology. We never claimed at CES that you could go to Amazon and order something. We said we were working toward that piece, and there were four apps you could connect. We said we were going to add more services, and over the past couple of months, we did add three more. So as of today, there are seven services in total, and we keep working on the current LAM Playground, and when the time is right, we swap it in.
So there’s a lot of debate saying the LAM wasn’t there. That is not true. I can trace back where this rumor started: people hacked into the R1. They saw the R1 is fundamentally powered by an Android system on the local device, and obviously that should be the case; it would be more sketchy if it weren’t Android. So at the bottom of it is an Android system, and they dumped the code, which you can do. In fact, every good piece of hardware in history has been hacked.
So someone jailbreaks the R1, and I guess every piece of hardware is jailbreakable at some point. Obviously, that’s flattering to us. If you build a form factor and no one even bothers to jailbreak it, it’s probably not a good form factor anyway. So people jailbreak it, find the Android code, dump it to a media outlet, and say, hey, there’s nothing about AI here, there’s nothing about LAM here. Of course, because all that stuff is in AWS.
That’s where the rumor started. And then a lot of media outlets just took that piece and reiterated it.
The apps you started with, Spotify, DoorDash, there are a few others. Those aren’t APIs, right? You weren’t using their APIs. You were actually opening Spotify on the web in Chrome and clicking on it.
Yes. Yes. Because what do you mean —
Why?
There is no API —
That’s the most brittle way to use Spotify I can think of —
There is no API. There is no API.
You made a smart speaker. Spotify can run on smart speakers and other kinds —
That’s a partnership. That’s a partnership. Go to Spotify and read their documentation. There is a specific line that says you cannot use the API to build a voice-activated application. Literally.
So with Spotify right now on the R1, when I ask it to play a song, it goes and opens Spotify on the web somewhere —
Goes to the window. Yes.
And then you’re re-streaming the audio to my device through your service.
Correct. Correct. Yes.
Does Spotify know that you’re doing this?
Yes.
And they’re okay with that?
We had a conversation. They realize this is agent behavior. And we said, look, we ask the user to log in on your website, and they’re a 100 percent legitimate user, and a paid user. And when we do the trick, we help them click the button.
I’ve always been very curious about this. I’ve been dying to ask you these questions. So I ask my R1 to play a song. Somewhere in AWS, a virtual machine fires up, opens a web browser, opens Spotify, logs into my Spotify account using my credentials, clicks around on Spotify, pushes a button to play a song, and then you capture that audio and re-stream it to me on my R1?
Everything is accurate except we don’t help you log in. You have to log in yourself, and we don’t save your credentials.
But the part where you are re-streaming audio that Spotify is playing to your virtual machine to me, you’re doing that?
We are basically giving everyone a virtual machine, a VNC, which is 100 percent within policy, and you have the right to access that VNC. And on that VNC, we work directly on the website, just like today’s LAM Playground. So we’re not getting the audio from Spotify’s servers or anywhere else. We basically go to the Spotify website, do the things for you, and play that song for you.
Okay, but where do the bits go? The bits come to the virtual machine and then they come from the virtual machine to my Rabbit.
Correct.
So you are re-streaming the song to me.
I’m not re-streaming the song to you. I’m basically presenting the VNC directly to your R1.
Wait, explain how that works. Maybe I’m not technical enough to understand how that works. You’re presenting the VNC to my R1.
Correct.
So it is running locally on my computer?
With no UI.
Okay, I see what you mean. So I’m logged into a cloud computer. The R1 is the client to a cloud computer. And Spotify is playing on that cloud computer and the R1 is taking that audio. Okay. That raises a million extra questions, right?
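As a rough mental model of the architecture being described here — the R1 as a thin client to a cloud VM, where an agent observes a web page and clicks on the user’s behalf — the following toy observe-decide-act loop may help. Everything in it is hypothetical: a stub page stands in for the real browser session, and a trivial keyword match stands in for the model.

```python
# Toy sketch of the cloud-agent loop described above: the device is a thin
# client; an agent in the cloud looks at a web page and clicks for you.
# All names here are invented -- a stub "page" stands in for the real
# browser session, and the planner is a trivial rule instead of a model.

from dataclasses import dataclass, field

@dataclass
class FakePage:
    """Stands in for a browser page reached over a cloud VM (e.g. via VNC)."""
    buttons: list
    clicked: list = field(default_factory=list)

    def visible_elements(self):
        # What the agent "sees" on screen.
        return list(self.buttons)

    def click(self, label):
        assert label in self.buttons, f"no such element: {label}"
        self.clicked.append(label)

def plan_next_action(goal, elements):
    """Trivial stand-in for the model: pick the element matching the goal."""
    for label in elements:
        if goal.lower() in label.lower():
            return label
    return None

def run_agent(page, goal, max_steps=5):
    """Observe -> decide -> act until the goal element has been clicked."""
    for _ in range(max_steps):
        action = plan_next_action(goal, page.visible_elements())
        if action is None:
            break
        page.click(action)
        return action  # one click satisfies this toy goal
    return None

page = FakePage(buttons=["Search", "Play", "Add to queue"])
print(run_agent(page, goal="play"))  # -> Play
```

In the real system the "observe" step would be a screenshot or DOM snapshot of the cloud browser and the "act" step a synthesized mouse event, but the loop shape is the same.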
First of all, I see where you’re going. Okay? Before you go deeper, I just want to say, first of all, we’re not using APIs. Second of all, to say LAM is not there, that’s a false claim. Because for all these services, if you really pay attention to their documentation, there is no API. There is no API for DoorDash. There is no API for Uber.
But I just want to be clear, that’s a choice those companies have made to prevent companies like Rabbit from automating their services and disintermediating their services from the user.
So as you think about these agent models going out onto the web, however they’re expressed, whether it’s the LAM, whether it’s whatever you’re doing before the LAM Playground hit, all of those companies are going to have a point of view on whether agents can use their services in this way. That’s pretty unsettled.
Yes.
And I’m curious. You have a few services, and they might’ve just said, okay, let’s see how this goes. But over time you’re going to enter into a much more complicated set of negotiations that will probably be determined by the big companies making deals, right? You can see how OpenAI or Microsoft or Amazon would make a deal to have DoorDash accessible by agents, and DoorDash would say, “We’ve made this deal; you can’t be accessible.” How do you solve that problem?
It’s not a problem for now. We’ll see how this problem evolves, but I remember when Apple was relatively not so big, not as big as today. When I read the Steve Jobs book, there’s one chapter where he says, okay, “Go talk to Sony. From tomorrow, 99 cents per track,” right? Remember that moment?
So at some point this level of negotiation needs to happen. I’m not sure if we’re leading it or someone else is, but this is working proof that we’re not using APIs. And I don’t think these services are withholding APIs just because they’re trying to prevent people from automating the company; it’s just that APIs don’t make them money. They for sure will love to set up a negotiation at some later phase, when we grow bigger. You know, we tried to reach out to Uber before launch. We did. They were like, “Who are you? You’re too small. That’s it. We don’t care.”
And so then you have Uber on the R1 now, that’s opening the Uber desktop app?
No, the Uber website, which is very janky, which is very —
That’s what I’m asking. Sorry. What I meant by desktop app is in the web browser you’re calling an Uber. If you’re running on Android, why not open an Android virtual machine and use the Android app?
It is a little bit more technical to achieve that, and we are working on other platforms. I showed a very select group of people a working prototype of LAM operating on a desktop OS such as Linux, with all the local apps. So we’re definitely heading in that direction.
Is there a possibility they can detect the fact that these are not human users, but in fact agent users?
I guess there’s always a way you can detect it, but I think the question is — this is actually a very good topic that we’re talking about here. Think about CAPTCHAs.
Sure.
The LAM Playground, or any capable AI model now, can go there and solve text-based CAPTCHAs. So the old systems meant to prevent automation like this are currently failing. This is an industry effort, pushing everyone in the industry to rethink: now with this AI, now with all these agents, how is their business going to reform, and how do all these policies need to change?
I do agree, this is a very complicated topic, but what I can see is that this is not Rabbit doing some really fancy magic here. Every company is doing this. Other agent companies, even the GPTs, are doing this. So this is a new wave emerging that all these old services have to think about. But I can tell you my personal experience dealing with scenarios like this. When we first started building one of the first smart speakers back in 2013, all these music labels didn’t care. They didn’t care until everyone was building smart speakers, and then they were like, “Okay, we have to resell all the copyrights for this particular form factor.”
I guess at the end of the day, it’s about money. They want to sell the same copyrights to as many form factors as they can, if there’s a popular one. So we’re okay with having these kinds of negotiations, but certainly, like you said, there are bigger companies doing similar things, or even more advanced things, that need to be addressed.
I’ll give you another example, with Siri and Microsoft. There’s a feature called Microsoft Recall, which they pulled back and, I think, have now relaunched. It is very aggressive; it takes screenshots of your local computer.
So this is what I see happening in AI in these early days. There are going to be a lot of different takes and tries, and eventually people will reconcile and agree on a single set of terms and agreements.
But if you check how we automate the website through its interface, the most important part is that we don’t create fake users. We don’t create spam users. We don’t log in on your behalf; you are you. The way I help you do things is by helping you click the buttons and move the mouse. It’s the equivalent of asking a buddy to help me. I’ll give you an example. If I’m busy, about to head into a meeting, and I want my buddy to help me order a burger from DoorDash, all I need to do is unlock my phone and pass it to my guy, and my guy clicks it for me.
And in this process, I’m not sharing my credentials with my buddy. I’m not telling him my phone password, I’m not telling him my DoorDash password, and I’m not even sharing my credit card info. All he has to do is add to the cart and click confirm. That’s it. So this guy is the equivalent of the first generation of LAM, which, unfortunately, we don’t like. That’s why we worked so hard. Now we have Playground, which is a more generic technology.
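The gap between that per-service, first-generation approach and a more generic agent can be sketched with a toy model. All identifiers here are invented for illustration: a scripted flow replays a recorded element ID and breaks when the UI changes, while an agent-style lookup re-finds the control by what it means.

```python
# Toy contrast between scripted automation and an "understanding" agent.
# A scripted RPA flow replays a fixed path and breaks when the UI moves;
# the agent re-locates the control by its visible meaning. The page model
# and labels are made up for illustration.

OLD_UI = {"btn_play_v1": "Play"}   # element id -> visible label
NEW_UI = {"btn_play_v2": "Play"}   # same label, new id after a redesign

def scripted_click(ui):
    """RPA style: hardcoded element id recorded against the old layout."""
    if "btn_play_v1" not in ui:
        raise LookupError("script broke: recorded element is gone")
    return "btn_play_v1"

def agent_click(ui, intent="Play"):
    """Agent style: find whichever element currently carries the intent."""
    for element_id, label in ui.items():
        if label == intent:
            return element_id
    raise LookupError("no element matches the intent")

print(agent_click(OLD_UI))   # -> btn_play_v1
print(agent_click(NEW_UI))   # -> btn_play_v2 (survives the redesign)
try:
    scripted_click(NEW_UI)
except LookupError as e:
    print(e)                 # the scripted path fails after the UI change
```

This is, of course, only the shape of the distinction; a real system would resolve intent from pixels or a DOM rather than a label table.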
Well, let me ask you about that difference between the first generation of LAM and the playground. The playground sounds like the thing you’ve always wanted to build. You actually have an agent that can look at web pages, understand them, take action on them.
The first one might have been a LAM in the broader definition, but as technology, it was expressed as testing software moving through these interfaces in an automated way. It wasn’t actually understanding the interfaces; it was just able to navigate them. That’s pretty normal robotic process automation stuff. Were you just building on that kind of technology while the LAM came into existence?
No, no.
No? Okay.
We’re working on neuro-symbolic, right? So the idea is that —
But even in the first versions?
Yeah.
The question I’ve always had is, what happened — before this LAM existed, because I understand the claim is that this version can understand every website — if Spotify changed its interface, or DoorDash changed its interface? Rabbit was kind of getting tripped up, right?
I’ll tell you, Spotify changes its interface all the time, and in the five or six months since the first LAM added the Spotify connection at launch, I think we’ve put Spotify under maintenance maybe two times, one hour in total.
That’s a very hard proof.
That’s hard proof, but — just take this for what it’s worth — I think that means it’s not good enough, right? The Spotify app on my phone never goes down for maintenance, and if the claim is that the agent can go take actions for me, I have to be able to rely on it 100 percent.
No —
And so the question I have about this whole thing is the delta between what you want to do, which is to have agents go and crawl the web for me, and the reality of what we can do now. Actually, the middle ground is APIs; the middle ground is not so brittle. You —
Okay —
It makes more sense to me that the agent would, instead of using an interface designed for my eyes, use an interface designed for computers.
I really want to laugh hard.
Okay.
Really. Two things. I disagree that Spotify is not working well. Spotify has been working amazingly.
Sure.
Five months, maybe two times we put it under maintenance, and the total time under maintenance is probably under one hour. You can ask any R1 user. That’s not through an API, which is impressive. That’s through an agent.
I’m —
That’s through agent to handle to —
I get that it’s impressive for an agent. I’m just saying that API —
You said it’s not —
I said it’s not —
You said it’s not good.
Good enough. I said it’s not good enough.
It is not good enough.
Right? Where’s the curve where it’s 100 percent?
Okay, now that’s my —
Because the API is 100 percent.
That’s my second part. Yes, an API is 100 percent, but you’re relying on them giving you an API that’s stable, that works, that never breaks —
I’m the user, I don’t care. That’s what I’m getting at: as the user, why should I care?
The user doesn’t need to care. We need to care.
Okay.
We need to care, and we need to care because we checked which good APIs we can use. Don’t get me wrong, Perplexity’s API has been great.
Sure.
OpenAI’s API breaks every day or two, and they say, “We observed an issue.” You can follow the “Is ChatGPT Down?” tracker; it’s very detailed about how many breaks per day, and I guess it’s more than 10 on average that the ChatGPT API breaks or is unstable, whatever it takes. We have a notifier. So, first of all, APIs are not stable. They are not stable.
Sure.
And you have to chase the services people want. We want to offer this music feature, and we think Spotify has the best experience overall, so we want to chase that partnership, and we’re still chasing it. But to talk from a technical perspective, the reason I said I don’t like APIs is this: think about Alexa. Alexa speakers all use APIs, and you literally have to go there and negotiate. Because, like I said, today not everyone opens up APIs. A lot of traditional services don’t have APIs, and for a startup, it’s impossible. When you go talk to them, they think you’re too small, right?
We did that. We did that with everyone. They think we’re too small, they don’t care, so we can’t get an API. Does that mean we’re not going to figure out an alternative way to make it work? No, hell no! We’re going to make it work, and this is exactly how we make it work. We care about users being able to use this feature. We don’t care about how it gets done. In fact, because we know you don’t care how it’s been done, I don’t want to spend six or eight months suited up, talking to Spotify people and Uber people one by one.
Well, the promise here is that you’re going to eventually have a general-purpose LAM that is just using the web for you, right? You said you’d hand your phone to a buddy, which is why you can make the Rabbit device and just talk to it, and it goes off and does stuff in the general case. The enormous Death Star that everyone sees is that Apple has announced substantially the same feature for Siri on the iPhone.
Yeah.
And Apple can get the deals, and Apple can pull developers into an API relationship locally on the phone with Siri, and Apple honestly can just burn money until it chooses not to build a car or whatever it wants to do. And getting people to buy another device, one that doesn’t just fall back to the Spotify app on iOS when it breaks, seems very challenging. How do you overcome that? Because if the technology isn’t 100 percent better 100 percent of the time, that feels like a challenging sale.
Yeah, this is the fun part of the game, really.
How do you win the game?
I think, first of all, speaking for myself: I sold my company before, when I was 25. I don’t want to build another app. I should chase the same dream, because I really think the grand vision that I have, and that our team has been working on, is actually the direction everyone’s now chasing. And it just feels so bad if you don’t chase the same dream, no matter how hard it is, really. And in reality, we feel blessed and happy to be in this exact situation, because we don’t have any serious competitors among startups, to be honest. When everyone —
Well there’s one, and they seem like a pretty spectacular failure, right?
Yes.
Humane launched with a lot of money and a big T-Mobile partnership and a subscription fee and Time magazine and all that stuff and it doesn’t seem like that has gone very well.
So I said, as of right now, I don’t think we have serious competitors among startups. And when we talk about competitors, obviously there’s Apple, there’s every big company out there, including OpenAI.
So, first of all, I think this is good for us because it validates that our direction is absolutely correct. And I’m also curious about what the definitive routes for generic agent technology are going to be, because different people in the industry might have different ideas. It’s still in a debatable state. There is no eval for agent systems yet, no very good eval, and you can see a lot of different research houses and companies trying different routes.
Obviously there’s the API route, like GPTs, which didn’t really take off; there’s the pure neuro-symbolic route; there are hybrid routes; there’s all this multimodality. So we’re still in the phase where everyone is trying their own recipe, hopefully one that can become the definitive recipe, including Apple.
I think the benefit for Apple in doing this is that, yes, they understand the user better, much, much better than any company out there, and they have theoretically infinite money, and they have a very closed ecosystem. The way they’re rolling this out is through an SDK called App Intents, right? Different companies and app developers need to choose whether or not to enroll in that to let the new Siri control stuff. I guess my relative advantage as a small group, as Rabbit, is that we move fast.
We move fast and we keep growing. If we put all our cards on the table: we had a spectacular launch, we are the most-sold dedicated hardware yet, we have made good profit, we fixed all the day-one problems, and the company actually quadrupled in size. So we’re growing, we’re moving fast, and now we drop this. Like you said, put a marker between today and yesterday. Today I can say there are a lot of things you can do on an R1 that you cannot do on an iPhone. I believe eventually everyone will come to the same solution, where all the devices can do similar stuff, but I firmly believe that at least for the rest of this year, Q4 of 2024, and probably until Q1 2025, it is still a game of “you have something they don’t have,” versus “you all have similar stuff; who does it better?”
So I think we have a good six-to-eight-month head start; we have our little room here. But obviously I also believe that when a big company wants to kill a startup, they have a million ways to kill you. That’s just the reality. People keep talking to me and asking, “What happens if the risk is too high? What happens if the company dies?”
I really don’t think any of these questions matter, because we’re on this course and we’re going to see the end, whether it’s a good end or a bad end, and I don’t think any answer to these questions will change our course, to be honest. I could come here and be a crybaby: “This is super hard. This is impossible. Everyone in the industry can kill us easily, or a YouTube reviewer can kill us by posting a review.”
It doesn’t change the course, because we are doing things. We’re launching, we’re shipping things, we’re moving forward. So it’ll be interesting to see what Apple comes up with.
I was on the Apple iPhone Upgrade Program, so I automatically get a new iPhone every year by paying the same monthly fee, but I really can’t find any reason to upgrade. People talk about Rabbit launching too early, but now you have a company like Apple. If you go to... what is it called? Sunset Boulevard in Los Angeles, close to here, or, I guess, Mission Street in San Francisco. You go to any major city and you see these gigantic billboards that Apple put up, right? iPhone 16, iPhone 16 Pro, and what’s the line underneath? It says Apple Intelligence. Is it ready? Is it out? No.
Let me talk about growth for a second. You mentioned you quadrupled, and I guess you mean in employee count?
Yeah.
You told Fast Company last month the R1 is only being used daily by 5,000 people. Is that higher or lower than you expected?
First of all, you saw that article from, I guess, The Verge? I think —
No, it’s Fast Company, that’s what it says.
Yeah, no, yeah, but there’s —
I’m reading it, I’m looking at it.
No, but there’s a Verge [article] that says the R1 only has 5,000 daily users, which is from —
Well that’s a quote from you.
First of all, I think what I said there can be misinterpreted. What I said is that if you go look at the data dump right now, you will probably find 5,000 people using the R1. At least 5,000 people.
I’m just going to quote you. Fast Company. “Lyu said, ‘Right now around 5,000 people use the R1 daily.’”
I said it can be misinterpreted. Okay?
Okay.
First of all, I think we’ve seen very steady growth in people interacting with the R1, and each time there are new features, more people use it. I’ll give you some numbers I want to throw at you, and maybe I can share very detailed usage sometime in the future. First of all, fewer than 5 percent of the people who have an R1 are unhappy with it and return it. Less than 5 percent.
Sure.
Which is a very good number. And I think the top features people are using are asking questions and vision and all that, and we’re really hoping for people to discover more use cases, but unfortunately we only have the seven apps on connections; that’s one of the bottlenecks. So if you check the total queries, in most cases you ask a question and forget about it. So it’s not about how many times you ask the R1; it’s about what kind of task you ask the R1, and whether the R1 actually helps you. So I guess, yeah, very unfortunately, it seems that was a misinterpretation.
So what’s the number? What’s the daily active number? We’ll issue the correction tomorrow, what is it?
I will go back and get you a very accurate number, but I can tell you yesterday our server actually crashed, so I think —
Is it double? Is it 10,000? Is it 25,000?
Oh, yesterday our cloud cost actually, I think... Actually, let me check right here, because I can check right here.
This is why I love having a founder on the show.
Okay, so the past one day is 33,760.
Okay.
So 33,760, yes. So almost 34K yesterday.
Okay. 34,000 active users yesterday. Okay.
Yeah, and —
What percentage of your sales is that?
Yesterday?
Yeah, 33,760 people. What percentage of your total sales is that?
I think we delivered more than 100,000 units, and that should be around 33 percent, 34 percent.
Sure, that makes sense, and that I’m assuming yesterday, because it was a launch of LAM playground, this is a big spike.
Yes.
What were the days before that?
So the past two days, 53,206, so if you subtract the 33,000, that’s another 20,000.
Wait, I’m sorry, I don’t think I followed. You said numbers, but I don’t think I followed them. Past two days, say it again.
So the past two days, 53,206, so —
That’s the total of two days?
Correct.
Okay, and one day is with the LAM playground on, so okay, I got what you’re saying.
Correct.
So you’re saying it’s 5,000 active users at any time, not daily.
Correct.
Okay. And then you’re getting about 20,000 users daily and then we’ll see if that goes up —
Correct.
... because of the LAM playground.
Correct. Then there’s an article by The Verge that used that title, 5,000, which is wrong. I can tell you, that’s wrong. That’s very wrong. That’s me saying —
Well, you tell Fast Company and then we will update it, but we —
Well, he was a —
... ran your quote in the magazine, so we feel good about that.
He wasn’t there — he or she. That journalist wasn’t there, and that’s not what I said in the quote, okay?
Welcome back. So you heard all that back and forth about Rabbit’s daily active users, and CEO Jesse Lyu saying he’d get us a better number. I asked the company to clear it up, and what Jesse actually said to Fast Company was that at any given time, Rabbit has 5,000 users. The Fast Company article has been corrected, and we’ll use Jesse’s number of between 20,000 and 34,000 daily active users, which is still substantially less than the 100,000 units sold.
Now that we have the number, we’ll run it. But my question to you is this: you’ve got to sell more R1s, you’ve got to get more people who’ve already bought them to keep using them, and, whether or not Apple Intelligence has arrived yet, it will arrive in some fashion in the coming weeks.
There was a report just a week or so ago that Jony Ive is working with Sam Altman and OpenAI on a hardware device. Something will happen with Humane, something will happen with Google, something will happen with Samsung. As that universe of competitors expands, it feels like the core technology you’re betting on is being able to automate a VNC with a large action model, right?
You’re going to open up user sessions for people in the cloud and then your LAM is going to go click around on the web for them and that will get you out of the challenges of needing to strike API deals with various companies, with other kinds of deals, copyright deals with various companies, whatever you might need.
Is that durable? The idea that this will keep Rabbit away from needing all of the deals that the big companies will just go pay and get? Because that’s the thing that I think about the most. I can think of 10 companies that came up with a technical solution to a legal problem, and even if the technical solution was amazing, the legal problem eventually caught up with them.
We’re confident that this technology route, the current one, will work, and I haven’t yet seen another approach that actually makes a generic agent system work in any other manner.
That doesn’t mean we’re locked into one technical path. If you talk to any company, it’s probably not a smart idea to say, “Hey, we just bet on this for the next 10 years.” The technology changes so fast, you have to adapt.
But right now, I think we’re off to a good start. We launched the concept with Playground, free of charge, so you can explore it and we can understand how this system can be improved. In fact, I believe the speed can be improved very fast, but we’re not here to say, “Hey, we’re stuck with this.”
We do have patents on this, but we’re not saying, “Hey, we think this is the one correct path to go.” I don’t think anyone in the AI industry can give you a definitive answer like, “Hey, if you just do this, here’s the structure, and it’s going to guarantee you the best result in the long run.” I think that’s not a good way to think about it. But yeah, I agree, everyone in the industry is experimenting with something new, and a lot of the companies we see are going to, like you said, run into some sort of legal problem. There are music generation platforms, there’s —
I mean, this feels like the story of the AI industry broadly, right?
There’s the question of whether a YouTube training video can be used for this or that. There are all sorts of things like this. But I think it’s not just that the builders are adapting; the industry is going to adapt to the builders, too. At some point, there’s going to be a conclusion: “Okay, this is the new policy; these are the new terms we need to follow.”
Are you building to that goal? I think, again, this is just the big question I’m thinking about all of these things. Basically every AI product is a technical solution that is ahead of wherever the legal system is or wherever the business deals are.
At some point Spotify might show up on your doorstep and say, “You know what? We’re not going to allow agents. It has to be a human user, and we’re going to change our terms of service to say it has to be a human user.” DoorDash might say it, whoever might say it. Are you ready for that outcome? Do you have the budget socked away to go lawyer up and fight that fight?
No. At the moment we don’t have the resources to fight that fight, and at the moment, that’s not a real threat to us because they said we’re too small.
[Laughs] Fair enough. When do you think the turn hits?
I don’t think that it’s a dead end for us, right?
No, I’m saying when do you think it’s a turn? When do you think that becomes a conversation about whether you can have agent users or human users?
Yeah, that’s exactly what I’m talking about. I don’t think they’re unwilling to change their terms.
And I think it’s unlikely they’re going to put in terms like “it has to be a human.” It can’t be that. There are a lot of automation tools out there already. There’s no turning back.
I think what they would like to do, with any company, including us, is this: when they see popular demand for this new kind of agent technology, they’ll want to charge for it, and then ask our users and us to pay them. That’s a business deal; that’s more like money terms. That’s what I can see. But as for now, we’re not breaking any of their terms and agreements. And if they change the terms and agreements tomorrow, we’ll take a look and see how we adapt. But the agents are out there already. There are a lot of agents running already, so I think there’s no turning back, and it’s very unlikely they’ll say, “Hey, we are going to stop agents from using our services.”
That’s not going to happen.
Think on the longest timeline you can, let’s assume everything works out and it’s all solved. How much time and money is it going to take before the general purpose agent you’re trying to build is a hundred percent reliable and can just do all the things that we all imagine them being able to do?
I might have a different opinion here. I think we benefit from what foundation model companies like OpenAI have been working on. Obviously they’re raising a crazy amount of money, and their primary business is selling their models as APIs, which saves us a lot of money. We don’t want to reinvent the wheel by retraining an LLM. So I think it might not be as scary as a lot of people think.
I think there’s a huge gap between converting the latest technology into a piece of product versus pushing for more advanced technology. Obviously I would love to do high-end research. We want to have a research house here set up at the same scale as OpenAI and DeepMind, even though we’re already far, far behind. But what we’re trying to do right now, at this current scale, with the money we have (we don’t have $1 billion, we don’t have $2 billion; we have a very limited budget) is figure out how we can convert the latest technology and research into a product that we can ship early, collect feedback on, and learn from.
A lot of people have different definitions of AGI. I don’t really use the term, because so many people have so many definitions for it. But I do think of AI that understands what you say and can help you do things, and here we’re talking about virtually helping you click buttons and such. There are a lot of companies doing humanoid androids, actually giving the AI hands and legs to do things.
I think it is an effort for all of humanity, and a lot of the resources can be shared, instead of each company having to raise that amount of money and take that amount of time to achieve the same goal. So it’s really hard to say, but we know we need more money and resources, that’s for sure. But I think you’ve seen how efficiently this team has performed, from seven people, to 17 people, to today. We raised obviously much less than Humane or any of the big companies out there. I think it’s actually one of our advantages that we can do things fast and in a relatively cost-efficient way.
Timeline-wise, though, and again assuming everything goes your way: is it a year from now that you can build on all the foundation models and all the other investment, and this thing just does whatever I ask on the web? Is it five years? What do you think?
I think the AI models will get very smart very fast, but we’re talking about a generational shift. Obviously we don’t want a 2024 piece of technology operating on eBay’s website, which was basically designed back in the 1990s, right? So a lot of the infrastructure needs to be refreshed, and the biggest gap, as I can see it, is productionization.
So in our roadmap, we think it’s very likely that at some point, maybe next year, we can get all these separate pieces of technology we have, like LAM Playground, Teach Mode, and rabbitOS, merged into a new rabbitOS 2.0. And that will actually push a huge step forward toward this generic goal.
But my general take is that the AI model is smart enough; the action part needs a lot of infrastructure. There’s a huge gap between research and production, and that’s what we learned. So I will say I’m very optimistic on a three-year term, but like I said, right now and starting next year, everyone is trying different approaches, and we’ll see which one works. But we’re confident in the approach we’re taking right now.
I just want to end by asking about form factors. Obviously the Rabbit is a very distinctive piece of hardware; people really like the design. We’ve seen a lot of interesting glasses lately, and the idea that we’re all going to wear cameras on our faces and someone’s going to build the display. Do you think that’s correct? I was wearing the Meta Ray-Bans yesterday, and I was like, why would I wear these all the time? I’d rather have a thing.
I am not against any form factor. In fact, I really think there will be a lot of form factors. But when we were designing the r1, we knew it wasn’t going to be a smartphone, because we knew people would do a lot of other things on smartphones that current AI can’t do. So we deliberately avoided the smartphone form factor. As for pins with lasers and glasses, I have different comments for each form factor, because there are no universal rules here. Let’s talk about pins. My general pushback on making it a pin with a laser, like Humane did: first of all, I think it’s really cool, but it’s too risky. You are trying to offer a new way of utilizing your technology, a new way to have users use the software. That’s already new to them, and on top of that, you don’t want to introduce a sci-fi type of gear.
So two new things stacked together is too risky. If you look at the r1, it’s a very familiar design. There’s a button you know you’re going to push; you know the wheel can probably scroll; there’s a screen you can look at things on. So the r1 form factor is very conservative, in the sense that it de-risks the software.
It’s like how people hadn’t figured out how to interact in a virtual world, and all of a sudden, back in 2016, there were 200 different companies making goggles, and they all failed. So I’m very, very conservative on the hardware form factor.
Talking about glasses, that’s a different story. I used to wear prescription frames, and I know the pain: your skull actually grows to fit the frame, not the other way around. So I think there is really no generic fit for a glasses frame. I was having fun joking with my design team, like, “Maybe if we do glasses, we’ll probably do the Dragon Ball style,” like the power reader or whatever that is.
Like the old Google Glass form factor?
I can’t wrap my head around “I have to put on a frame that doesn’t fit.” So we’ll see.
I think the current smartphone is perfect. I really like the slab-of-glass, screen form factor. But the real problem here is not the form factor; the problem is the apps, right? Because now we see all this agent technology, all this AI stuff, and it’s doing the things that apps are doing, plus things that apps can’t do. So I think the problem is with apps.
I forgot to ask you the main question. You’ve had a number of startups, you’ve done a number of things, you have a big idea here. How do you make decisions? What’s your framework for making decisions?
I am a very intuitive person. I like to trust my intuition on big directions, like what’s going to happen in the long run. But meanwhile, I’m quite conservative in that I hate to predict things.
So I think when people replay this episode, they’ll probably hear that I got really tripped up by some of your questions. It’s just that my brain doesn’t work for predictions.
It’s that I don’t like to make predictions: what happens if this happens, if that happens, what do you think? When I manage my team, I tell people, “We make decisions based on current facts, and we find the best solutions.” If you spend too much time — at least, if I spend too much time — thinking about what if Apple knocks on your door, what are you going to do? What if A happens, then B happens, then C happens, what are you going to do?
Most likely, you’re going to get a different strategy, right? Because if you think B is the solution to A, then when A happens, you just do B. But there are other types of people who are like, “Hold on, have you ever thought about when A happens, then D happens, then E happens, then F happens, are you still going to do B?” If you think that way, probably not.
So I just choose not to predict a lot of what-ifs, and I make short, clear, concise decisions based on current facts. And in fact, if you recap what we launched back at CES, it was probably the best timing. The price was probably just right, the color was probably just right, and the decision not to spend six months negotiating with T-Mobile was probably just right. I make decisions in the moment, and that’s my style.
And I talk to people; everyone talks to me. I’ve told everyone on my team they can find me anytime, talk to me anytime. I spend a lot of time talking to my people.
We’re, in general, just a very real team, very down to earth. I really don’t like the other type of startup that spends too much time enjoying the feeling, if you understand what I’m indicating. There are a lot of people who say, “Oh, I’m a founder. I’m cool.”
No, I’ve grown enough to get rid of that. I probably would have been the same way at 21 or 22, but now I’m 34. Startups are really tough. It’s a war; it’s about survival. It is really, really tough. And it doesn’t really matter what others want to do. You have to survive, and just surviving on your own is tough in any sense.
So that’s why a lot of people ask me — I get asked a lot — “Okay, what if they do this? What if they do that?” Well, at the end of the day, there’s nothing you can do. You have to do your thing, and they will react to it.
I think it’s fair to say that with Rabbit and other startups like us, the biggest companies, like Apple, react to us. They react to us in a very hostile, very unusual way: they have this new phone, but all those things are still not there.
We’re making a very small dent, but even that doesn’t matter. For us, we care about our customers. One thing I want to say is that, yes, there is a lot of misinformation, there are haters, there’s all that feedback and criticism. But if you talk to r1 users, they’re happy. That’s what I care about. Otherwise, there would be a lot of returns, a lot of refunds. We have less than a 5 percent return rate; put that number against any consumer electronics device, and it’s a good benchmark. And we are going to keep releasing stuff. We pushed 17 OTAs within five months. The other company pushed, like, what? Two, three, four, five OTAs?
So I really hope people can see us for what we are: a bunch of underdogs. Our solution isn’t perfect, but it has been David versus Goliath since day one, because that’s the reality. Don’t expect perfect stuff from us, because we are not perfect. We raised a very small amount of money and we’re a small team, but we move fast. What we can guarantee is that when Rabbit shows you something, you probably couldn’t find it anywhere else: just like the hardware, just like the playground, or even the very janky day-one version of LAM. We are the first company to have Apple Music streaming to a device like ours.
Yeah. Does Apple allow that? Because you’re opening it on the web?
Yeah. I mean, I haven’t gotten legal documents at my door. Maybe I will get one, or maybe they think we’re too small, but we do things our way. I guess that’s what I want to say. We’re a really down-to-earth team. That’s my style.
Yeah. Jesse, thank you so much for coming to Decoder and being so game to answer these questions. I really appreciate it.
Yeah, thank you so much.