UiPath CEO Daniel Dines on AI agents replacing our jobs

Today, I’m talking with Daniel Dines, the cofounder and, once again, the CEO of UiPath, a software company that specializes in something called robotic process automation (RPA). We’ve been featuring a lot of what I like to call full-circle Decoder guests on the show lately, and Daniel is a perfect example.

He was first on Decoder back in 2022, right before he moved to a co-CEO arrangement with Rob Enslin, a Google Cloud executive brought on to help steer UiPath after it went public. In January of last year, Daniel stepped down to become chief innovation officer and Rob stepped up to become sole CEO — and then, less than six months later, Rob resigned, and Daniel took his job as sole CEO back.

Founders stepping aside for outside CEOs and then returning as CEO later on is quite a trope in the tech world, and Daniel and I spent a while pulling his version of that story apart. He made some pretty key decisions along the way to relinquishing control of the company he founded — and then some equally important decisions when coming back. If you’re a Decoder listener, you know I’m fascinated by the middle part of these stories that usually gets glossed over, so we really dug in here.

But there’s a lot more going on with UiPath than C-suite shuffles — the company was founded to sell automation software. That entire market is being upended by AI, particularly agentic AI, which is supposed to click around on the internet and do things for you.

The main technology UiPath has been selling for years now is RPA, which has been around since the early 2000s. It aims to solve a pretty big problem that a lot of organizations have. Let’s say you run a hospital with ancient billing software. You could spend millions upgrading that software and the computers it runs on at great risk, or you could just hire UiPath to build an RPA system for you that automates that software and presents a much nicer interface to users. This reduces the risk compared to upgrading all that software, it makes your users happier because they’re using a much nicer interface, and it might gain you some efficiency through new automated workflows along the way.

UiPath built a fairly successful business doing that basic version of RPA; I encourage you to listen to our episode in 2022 where we unpack it in great detail. But as you might expect, that’s all getting upended by agentic AI systems that promise to automate things in much more powerful ways, with much simpler natural language interfaces. So Daniel has to figure out how UiPath can integrate and deploy AI into its products — or risk being made obsolete.

Daniel and I really got into that, and then I also wanted to push him on the practical economics of the business. The big AI startups like Anthropic and OpenAI don’t have to make any profits right now. They’re just raising mountains of investment and promising massive returns when all of this AI works. 

But UiPath is a public company, and it’s licensing this technology at a cost. So I wanted to know what Daniel thought about the cost of licensing AI tech, selling it to customers, and trying to have all of that make a profit while the underlying economics of the AI industry itself remain pretty unsettled. 

We also talked about what all of this might mean for our experiences at work, and whether a world of robots sending emails to other robots is actually a good goal. This one really goes places — Daniel was game to truly dig in. I think you’ll like it.

Okay, UiPath CEO Daniel Dines. Here we go. 

This interview has been lightly edited for length and clarity.

Daniel Dines, you’re the founder and — once again — the CEO of UiPath. Welcome back to Decoder.

Thank you so much for having me, Nilay.

I’m very excited to talk to you. I love a full circle episode of Decoder. You were last on the show in the spring of 2022. It’s been a little bit of a roller coaster since then. You were just about to have a co-CEO named Rob Enslin. You hired him from Google Cloud. Then, you stepped down a little over a year ago to focus on being the chief innovation officer. Then, Rob was the sole CEO. Then, Rob stepped down, and now you’re CEO again. You’ve made some changes to the company.

Explain what’s going on there, because that’s a lot of decisions. Obviously, we’re a show about decisions, and there’s a lot of AI stuff I want to talk about. But let’s start with that little bit of history. Why step down and why come back?

Well, roller coaster is a good word. Sometimes people exaggerate with it, but in our case, it’s really what happened. Why? Look, I was always trying to do what’s best for this company. This company is, in a way, my baby. I spent almost 20 years [building it]. This year, 2025, is 20 years since I founded UiPath. I thought that if we can get the best talent, and especially with [Enslin’s] background in go-to-market, this is going to help us. And Rob is a nice guy. We got along pretty well. And look, it’s been mostly a good ride. It gave me some time off, so I switched to chief innovation officer. I ran our product and engineering teams.

In 2023, I had my own time for reflection, especially after I moved a lot of my responsibilities to Rob. I spent that summer in reflection mode, honestly, with a bit of soul searching around “what do I want?” I would say that I missed the craziness of my early 20s, with people having a lot of fun and going on spring break. I had to work. In post-communist Romania, there was a lot of turmoil, so life was not that fun for me at that stage. I thought maybe I would get to experience what it means to take it a little bit easier.

It was important for me because I discovered that UiPath is actually kind of an anchor for me. It gives me a framework of mind, a direction. It’s very hard for me to wake up every day and give myself something to do unless I am in this big machine and this machine is on a trajectory. It forced my mind to be there. And I’m surrounded by great people. I talk to smart investors, analysts, customers, and partners. It’s a living organism. So, I discovered that this is a gift that I have, being in the position to run this company.

Then, things in early 2024 didn’t go well for us, from an overall market perspective. I think the macro was pretty bad for some companies. We had some execution issues. Our initial go-to-market was “land and expand,” and we over-rotated the company to go mostly after big deals. So, our flow business suffered, and paired with some of the macro challenges, it created a difficult environment. Rob decided to leave the company in May 2024. In all fairness, at the time, I was ready to take it back. It came faster than I anticipated, but mentally I was prepared after my summer and my time off.

Did you go on a spring break? Did you take a minute? Were you in Palm Beach?

No, no, I didn’t go to Palm Beach, but I spent a few weeks in the Mediterranean on a boat. So maybe close to it.

Spring break is not the same in your 40s as it is in your 20s, is the thing that I’ve discovered.

Yeah, exactly.

I always want to drill into the actual moments of change. I always joke that I watch a lot of music documentaries. There’s act one where everyone’s in the garage, and there’s act three where they’re playing Shea Stadium. And act two, where the actual moments of change happen, is often glossed over. This is one of those moments. You made a decision to come back as CEO, Rob made a decision to leave. What was that conversation like? Did you initiate it? Did he start it? Was he leaving and you already decided that you were coming back? Walk us through it.

It was simple actually. We decided to meet in New York following Q1 2024. He told me that he thought it was better that I take the company back and he resign for personal reasons. Indeed, he needed to take some time off because some members of his family were not well. I told him, “Let’s reflect a bit on this. Let’s think a bit.” But in the end, he was resolute in his decision. 

I also realized after that discussion that there will be many changes in the company. We needed to contract a bit. We oversized the company for this elephant hunting, so there needed to be a few changes. And I realized it’s actually better that I do the changes. It’s going to be a lot of pain, and we’ve already been through some pain. The last three quarters were not easy for us by any metric.

Would you have made the change if he hadn’t volunteered? Was it obvious to you that you were going to come back as CEO?

I realized something. It would be difficult to get an external CEO while I am here. It’s kind of not possible. I would consider growing someone internally rather than bringing in someone externally. It’s really hard to know someone after you talk for a few hours and go for a dinner, and it affects the culture of a company so much. Even if I have the controlling stake of the company, it’s not like you get someone and you can command them every day, “You do this and this and this.” No, it has really huge implications.

I care deeply about the company and the people. Rob had all the best intentions in the world, but seeing things that sometimes made me uncomfortable was not easy, and it’s not easy for anyone. Naturally, two camps formed — a Daniel camp and a Rob camp — and sometimes they didn’t talk. Again, it wasn’t our intention, but it was a dynamic that didn’t work well. So to me, it was clear that I had to either take back the CEO role and drive the company, or next time step down completely.

This is a pretty common problem with founders. Obviously, The Verge is much smaller than UiPath, but I only have a handful of co-founders left. I often tell people that they should be the editor-in-chief, and it’s perceived as a threat. They’re like, “No, we wouldn’t do it if you were here.” Did you have the power as the founder and the controlling stakeholder to say, “I’m just making this decision, I’m coming back?” Was there an approval process? This is one of those moments that seems like it comes up a bunch with founders.

Theoretically, I had the power to do this, but in practical terms, it’s something very difficult to do. Look, we are a public company. It’s board governance. I have a seat on the board. The board should make the decision. So, the board would have to make a collective decision to fire Rob on my pressure. They could have mutinied against me, but it’s not so simple. It’s doable, but–

That’s really the question. We see some of these decisions from the outside. The founder coming back as a CEO seems like a very natural course of events, but then it’s very complicated on the inside, particularly with founders who were the CEO, stepped aside for another CEO, and then came back.

If there is a battle between the founder and the CEO, yes, things could be pretty ugly. In our case, it really was not. Rob really exited under the best conditions. He gave me all the time. He assisted me with the transition. He then took some time off to fix his personal stuff. From this perspective, it was a smooth transition.

You mentioned the company had grown in ways you didn’t want it to. With a new CEO, there are cultural implications with how they would like to run the company. Then as the founder, you come back and want to change it back. You just reported financial results. Things seem to be a little more stable than they were in the past. What changes have you made, either to go in a forward direction or to go back to the way things were when you were CEO?

I wanted to bring back some of our mojo of being customer-centric, working with customers, and doing whatever they required to be successful. We went back largely to “land and expand,” to being customer-centric while still preserving the muscle to do big deals. We need both. Forecasting is kind of difficult in a company that depends only on big deals. The lumpiness in revenue can create issues with forecasting. It’s normal to have both sides of the equation. 

That’s also a thing that I didn’t realize. We’re not a technology where you can go in day one and say, “I will sell you $100 million of automation.” You start with a smaller division and see how it works. Then, you expand into other divisions, and then company-wide.

So, regardless of whether you have a good Rolodex, you won’t go to another CEO and say, “Okay my friend, give me this big deal because I’m here for you, and I promise you we’ll do it best.” You need to prove it, and you need to earn your way into a company. That’s why, in our DNA, the essence is to stay extremely customer-centric, work with them, help them find opportunities, help them deliver the value, prove the value, and have them message internally about the benefits of automation. We kind of lost a bit of this muscle.

And now, we’ve segmented differently. I created an executive account program where we have our top 50 diamond accounts with all our executives attached to those accounts, and we are taking it very seriously. We also have a co-innovation program where we build software together. We decentralized our customer success function, which had been centrally run. It was a bit disjointed from the sales motion, so we decentralized it into the regions, and it’s much more aligned with the customer right now. We even changed the compensation of our sellers and customer success teams to tie it more closely to the adoption of our software. Regional partners were also moved within the sales teams. I simplified and streamlined the international part of our business into one big region. There have really been a lot of changes.

Were all those changes in your head while you were the chief innovation officer? You were watching the company change and the results, and you were thinking, “This is how I would fix it?” Or did you come to this plan after you retook the CEO role?

I think some of the pain that we were experiencing was known at that point. The changes? Not really so much. It took me a month to understand who the people on my team would be and what kind of changes we were going to make. 

I love having people come back on the show because I get to read their old decision-making frameworks back to them. You left, you took a break, you got to think about who you wanted to be and how you wanted to spend your time. 

The last time you were on the show I asked, “How do you make decisions?” You said, “I’m trying to learn more by listening to people. I have no idea how to run a big company at this stage because I have never been in a similar situation before, but I’m trying to build a close-knit executive team that relies on each other.” Then, you said the thing people say, which is to “[make decisions] fast if they can be reversed, and do them slowly if they’re irreversible.” Is that still your framework? Have you come to a different approach? Are those still the basics?

I think largely, yes. I like to give people space, to delegate. My style is to agree on goals, agree on the plans, and then let people run. If I find issues, even small issues, my style is to dig around to see if there are signs of potential cancer or things that are completely not working. You discover interesting things. But yes, I think the fate of the company depends greatly on the cohesion of the leadership team. A big difference in how I make hiring decisions compared to 2022 is that I will never trade chemistry for talent. Bringing in talent that doesn’t fit into an organization never works, and long term, it creates really big issues.

I asked you about the structure of the company last time, and you had a really interesting answer. You didn’t talk about the structure at all. You talked about the culture and said you want the culture of the company to be “one single word.” The word you picked was “humility,” and you talked about that for a minute. It’s been two years since then. I’ve come to believe that the structure question is really a proxy for a culture question. By describing the structure of the company, you’re describing the culture. Would you still pick “humility” if I asked you to describe the culture of the company?

I think at that time, humility was the most-needed aspect because we rode a very successful IPO, and our stock was very high. Many people made a lot of money. We lost a bit of humility at that point. Right now, we are back to our roots. I think the company has been through pain, and we understand better.

Look, I am not smart enough to learn from successes, and UiPath is not smart enough to learn from successes, but I think we are smart enough to learn from pain and suffering. Humility was in the genesis of our company and it’s an integral part. What we need now more is to be bold and fast. We are making a big push into our agentic automation era, and I see great things happening. It’s a new energy.

Also, we ran RPA (robotic process automation) for seven, eight years. There was a bit of fatigue at the end. We were just perfecting the software and getting into some white spaces, but it was not that exciting. Agentic AI brings a lot of excitement to the table. We basically pivoted product and engineering overnight, moving more than half of the organization into building the new agentic products. All the teams are energized because they know we’ve made agentic automation our number one priority as a company.

We literally changed direction. It’s not the Titanic, but it’s a big boat. I think very few companies have a chance for an act two, and we have this chance. AI and automation are so synergistic. I think more and more people came to that conclusion. Agentic, in essence, is AI plus automation. It’s the fusion of AI and automation. We’re so well-positioned to deliver on this promise. So our product and engineering teams are going at a breakneck pace, making really bold decisions. From a technology standpoint, we’ve replatformed our workflow engine onto a more modern technology. They really embodied being bold and fast. I cannot say yet that this is true for other parts of the company, and this is where I work with our leaders to be completely prepared for our act two.

I’m going to ask one more question about structure, and then I have a lot of questions about agentic AI and automation. One of the big decisions you made when you took over the role as sole CEO once again is you cut about 400 people. You laid off 10 percent of the company. Did you end up restructuring around that cut? Why make that decision, and what was the goal?

We looked into our central functions at that point. And in all fairness, we over-hired people in those central functions, and we had to streamline the organization. Decisions to fire people are the hardest from an emotional standpoint, from a cultural standpoint, and from a financial one. It’s very hard to make them. Every time we had to make them, it’s been a thorough process. I was never rushing, and I was always pushing back with “do we really need to?”

And it came at one of the lowest moments for us, along with the CEO changes. I think now, as we put it behind us, we are more prepared. The world is in an interesting, challenging phase right now. Nobody knows where it’s going to go. I think we as a company are a bit more prepared, more streamlined, and agile. We took time to heal the pain, and I think confidence in the company is coming back. Looking back, I think that was the right thing to do for the company.

I wanted to ask that question as the lead-in to AI. You’re describing making these cuts as a low moment, as something that was very difficult to do. The right decision, but very difficult to do. You pull the thread on AI, and what I hear from our audience is, “This automation is going to come for our companies and we will all be out of a job.” White-collar workers will be out of a job. Software engineers might be out of a job. Lawyers are terrified of being out of a job. Do you see that connection, that if your software is successful you will reorient the economy and a lot of people might lose their jobs?

If we are realistic right now, it’s all a matter of the timing of the change, not the change itself. Your job and my job have changed over time. Jobs change. It’s a matter of when it’s going to happen and how compressed the change is. Right now, I’m not so fearful that it’s going to come so suddenly. If you look at AI and the real use cases, we still have to see widespread adoption. It’s a productivity gain right now, more like an assistant type of AI. I ask something, I get a response, I do my job a bit faster and better. It’s not at the point yet where it affects really huge volumes of the population.

I think agentic AI is one of the steps toward deploying AI into more of an enterprise context, and it might accelerate the way jobs are transforming. What do I mean by this? I think a job today is not a simple task. There are very few people whose job you can describe as one single task. So a job is a multitude of operational things, repetitive things, and many ad hoc things. It depends on different environments and businesses.

I think that many repetitive tasks have been solved. We have the technology to basically eliminate many of them from one’s job. Now we also have the technology to help people with more of these ad hoc tasks, like research tasks. I think the jobs will be moved more toward where people make decisions. They’ll analyze what information agents are retrieving and what they’re putting together. Agents plus automation. People will analyze, will make decisions, and then the actions will be carried on by enterprise workflows, robots that we already have. So jobs will transform more into decision-making, inspections, and overseeing from a command plane. 

I think about this all the time. I don’t know that I’m a great editor-in-chief. I feel like you could automate me by just walking into rooms and having a soundboard that says “make it shorter” or “make it longer,” and you just spin the wheel and pick one. But I know when to say those things because I spent years writing blog posts, then stories, and now podcasting. I have all this experience executing the decisions so that I have a high level of confidence in the decisions that I’m making when I make them.

How do you get that if no one is executing the decisions? If that’s all robots? I just want to make a comparison to you. You were the founder, you spent all this time running this company. How would you make good decisions if you didn’t have all of that experience?

The execution experience?

Yeah.

That’s a good question. Eventually, many things will be like a black box. I don’t know why, if I press a key on my keyboard, it displays on the screen, but I can make the decision to press it. In a way, operations will be like a black box for many companies, and decisions will be made at a higher level. I think we can still make decisions even if we don’t know how things are cooked behind the scenes.

I’m curious how that plays out. I am of the school that says the best leaders are the ones who spent time on the ground. That’s not always true. I’ve talked to a lot of leaders on the show, but particularly when I talk to founders, that experience at every stage of the company is what informs the confidence to make changes. If operations are a black box, I wonder where that confidence comes from.

I need to reflect more on that. Probably the best people will understand the operations as well. Even if they are carried out by robots and AI, they will understand in order to make better decisions and change the operations. But this is more of an analytical type of person. The types of jobs where there’s more mechanical typing, copying, and pasting are going to disappear.

So the last time you were on the show, I don’t think there was a lot of hype around RPA. I was into it because I’m fascinated by the idea of computers using computers, and when you were on the show in 2022, that was sort of the height of it. You were riding high. This is why you said you needed humility. The idea was that instead of upgrading a lot of old computer systems, we would abstract them away with UiPath technology, build new interfaces, and that would allow all kinds of flexibility. That was a big idea.

I think that has changed. In the AI age, we see a lot of companies promising agentic capabilities. We see a lot of companies saying that they’ll move even farther up the stack, all the way up to decision-making. But when I look back on that conversation and everything that’s happened since, the thing that gets me is that robotic process automation, the idea that you have some old hospital billing system and UiPath will build a modern way to use it, is deterministic. You knew where all the buttons in that software were, and you could program your way through them. Maybe you needed some machine learning to understand the interfaces better or to make it less brittle, but you knew what the inputs and the outputs were. RPA knows the path between those things.

AI is totally not deterministic. The robot’s going to go do something. Is there a connection between the software you were building, the RPA you are selling, and the agentic capabilities you want to build? Because it seems like there’s a fundamental technology shift that has to happen there.

I think you expressed the essence of what we are building when you say deterministic and non-deterministic. These are exactly the terms I use when I am explaining how robots and AI should interact. Look, LLMs are not meant to do deterministic tasks. If you ask an LLM to multiply two numbers, it cannot reliably figure out the product because it’s doing statistical matching. What it does best is understand, “Ah, I’m required to multiply two numbers. I have a tool that knows how to multiply two numbers, so I will call a tool and I will get the precise answer.” This is how they work. They don’t have that intelligence inside them because an LLM is a non-deterministic tool. It’s not meant to do a series of deterministic steps.
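To make that concrete, here is a minimal, vendor-agnostic sketch of that tool-calling loop in Python. The stub model and the tool registry are invented for illustration, not any particular provider’s API, but real LLM APIs hand back a structured tool call in much the same shape.

```python
# A minimal, vendor-agnostic sketch of the tool-calling loop described above.
# The "model" here is a stub that decides to call a tool rather than compute
# the product itself; real LLM APIs return a structured tool call similarly.
import json

def multiply(a: float, b: float) -> float:
    """Deterministic tool: always returns the exact product."""
    return a * b

TOOLS = {"multiply": multiply}

def fake_llm(prompt: str) -> dict:
    # Stand-in for a model response: instead of guessing the answer,
    # the model asks for the deterministic tool to be run.
    return {"tool": "multiply", "arguments": {"a": 1234.0, "b": 5678.0}}

def run(prompt: str) -> float:
    call = fake_llm(prompt)             # model emits a tool call
    tool = TOOLS[call["tool"]]          # host code resolves it
    result = tool(**call["arguments"])  # deterministic execution
    print(json.dumps({"tool_call": call, "result": result}))
    return result                       # fed back to the model

run("What is 1234 x 5678?")  # -> 7006652.0, exact every time
```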

In the same way, you can think of transactional work that produces side effects in enterprise systems. It should be deterministic. You cannot have a 95 percent chance of a payment transaction succeeding. It has to be 100 percent, and if there is an exception, people should be notified. It cannot be “maybe.”

Our robots offer this fully deterministic way to do transactions across multiple systems, transactions that create effects on these systems. With LLMs and with technology like OpenAI’s Operator or “computer use” from Anthropic — actually we are users, and we work closely with both of these companies to integrate their technology — you can complement what RPA is doing on parts of the process that you couldn’t automate before. If I have a process that relies on doing research… like if I’m traveling, I want to create a travel agent with AI. This travel agent will do research on available flights and across a multitude of airlines. It’s no big harm if I miss one flight option.

So I can have a non-deterministic tool go and extract the information, and then an agent can make some decisions. It can present to the user, “These are the available flights.” But then when I book a flight, I have to use something deterministic, because when the transaction happens, money changes hands. Basically, we can have the best of both worlds. We can extend the reach of the deterministic with the non-deterministic while accepting the risks of the non-deterministic. And there are domains, like research or testing an application, where we can take more risks. It makes sense. It depends on the level of risk you can accept.

It makes sense to me. I see your competitors and your partners, like OpenAI and Anthropic, and they’ve made their entire technology bet on agentic AI. I assume that their plan is for that to get good enough to do everything. Your approach is that there’s some stuff that traditional RPA, the traditional deterministic computer, needs to do, and that can be layered with an LLM or an AI system. I’m just wondering what the intersection point is. Will there ever be an intersection point when OpenAI says, “Operator can do it all,” and that presents some kind of paradigm shift for your business?

I am absolutely sure that the intersection point is when you can define a task in a deterministic way and know the steps. There is really no point in having an LLM that does this task all the time to rediscover how to do it or to think about every step because it’s impossible to get to 100 percent accuracy. We are testing these LLMs for simple form filling. They can work very well, but think about it. You need to run it hundreds, or even thousands of times to get to 100 percent accuracy. This is not what the technology is for.

What I am saying is that LLMs will eventually create routines that can work 100 percent accurately. But the idea that LLMs will discover a process every time like you would when you see an application or a book for the first time in your life… humans don’t work like this. We learn. You learn an application, and then if you watch yourself, most of the things you’ll do will be on autopilot.

We’ve had other companies come on the show and talk about their agentic software approaches. Actually, they were facsimiles of the agentic software they wanted to build. So, Rabbit came on the show, and its first version of the Rabbit R1 was running testing software in the background. You would ask for a song on Spotify, and it would just click around on the Spotify website in the cloud and then stream the song to you. Its claim was that it actually did build the agent, but it needed to build the first version and have proof of concept. 

But the deterministic system, in one very real way, can act like the thing people want from the AI system. It can almost do it and then it’s brittle, but the AI can make it less brittle by reacting to change or an unexpected outcome. How do you merge those things together? How do you decide which system to use? Because that seems like the technology problem of the moment.

The way we are seeing the adoption of combined agentic AI and automation is by putting a workflow technology on top of it. Our agents are more like data-in, action-out agents — not necessarily conversational agents. We focus on delivering enterprise agents that work in the context of an enterprise process. So to us, the critical piece is this orchestration part. 

Let’s say you have a loan agent that has to approve some loans. A workflow is triggered when the loan application is received. So, you have an enterprise workflow. Then, that workflow will first send the application to a reading agent that is specialized in extracting the information from the application. Then, I can send it to a human user to verify something basic if I’m not confident enough in what I extracted. It can be a more junior person that does this verification. 

Then, the workflow will send it to an agent that will make loan recommendations. That agent can start to call tools like, “Get this person’s credit score.” So this tool is definitely something deterministic. It’s either an API to a credit score agency or you can use an RPA bot. That is clearly deterministic. You are not going to use something like OpenAI’s Operator to just figure out a guy’s credit score. There is absolutely no point. It’s taking too much time and it’s not reliable. 

Already you see it’s a combination. The workflow knows how to direct the fixed paths of a process, and then agents are capable of making recommendations and calling tools that will give the context. Then, after the agent makes a recommendation to approve this loan, it will go to a human user. The workflow will create a task, a human user will get it in their inbox asking them to approve or not. They press a button and approve. The workflow will go back maybe to the last agent and say, “Please compose a nice acceptance message particular to this client.”
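Here is a compressed sketch, in Python, of roughly how that loan workflow could be wired up. Every name in it is a hypothetical stand-in rather than a UiPath API: the agents are the non-deterministic steps, while the tools and the workflow branching stay deterministic and traceable.

```python
# A compressed sketch of the loan workflow described above. All names are
# hypothetical stand-ins, not UiPath APIs: agents are non-deterministic
# steps, while tools and the workflow branching stay deterministic.

def extraction_agent(application: str) -> dict:
    # Non-deterministic: an LLM reads the free-form application.
    return {"name": "Ada", "amount": 12000, "confidence": 0.72}

def human_verify(fields: dict) -> dict:
    # Deterministic workflow step: low-confidence extractions go to a person.
    print(f"Task queued for human review: {fields}")
    return {**fields, "confidence": 1.0}

def credit_score_tool(name: str) -> int:
    # Deterministic tool: an API call or an RPA bot, never an LLM guessing.
    return 710

def recommendation_agent(fields: dict, score: int) -> str:
    # Non-deterministic: the agent weighs the context and recommends.
    return "approve" if score >= 680 and fields["amount"] < 50000 else "decline"

def loan_workflow(application: str) -> str:
    fields = extraction_agent(application)
    if fields["confidence"] < 0.9:  # fixed, auditable branch
        fields = human_verify(fields)
    decision = recommendation_agent(fields, credit_score_tool(fields["name"]))
    print(f"Recommendation '{decision}' queued for a human approver")
    return decision

loan_workflow("Hi, I'm Ada and I'd like to borrow $12,000 for a car.")
```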

It’s a simplistic view, but this is how we believe the world and enterprise customers will adopt agents. Also, they need to have some confidence in the system. You were talking about this black box system, a swarm of agents that do their magic and sometimes make mistakes. Before you can accept that, you need to have confidence, and you need to see the work. Everybody is more confident when they see the workflow. They can say, “Look, if that happens, it goes like this. If that happens, it goes like this.” So you can trace it, you can understand it, you can reason with it.

One of my takes on the interaction between humans and AI is that for a long time we have to speak the same language. Even when you create an application or an automation, AI actually creates code. AI could eventually work directly with machine code; it doesn’t have to create Python code. But it’s important that AI creates Python code because humans can reason about it, change it, and accept it. It’s going to be the same in automation applications. AI will use existing platforms, will create artifacts on top of those existing platforms, and people will validate what’s going on there.

On the consumer side, the value of the existing platforms is, I think, under enormous threat. So I call this the “DoorDash problem” on the consumer side. We just had Amazon’s Panos Panay on, and it’s rolling out a new version of Alexa. You’re going to be able to say, “Alexa, buy me a sandwich,” and it will just get DoorDash to send you a sandwich. 

This is a huge problem for DoorDash. Its margins are under significant pressure if its interface gets commoditized in that way. We’re going to have the CEO of DoorDash on the show eventually, and I’ll ask him this question. But I can abstractly see the pressure on some of these systems that are going to get commoditized by new kinds of interfaces.

The classic RPA truly depended on those systems existing. You needed the existing loan system that nobody wanted to upgrade so you could build the RPA interface on top of it. You need the credit score interface that might not have a great API, but you can use RPA to go get it from their website. AI changes that because it’s coming to all of those systems as well. There’s some part of the AI industry that’s chasing all of those things at once, not just building this orchestration layer. 

What do you think about the long-term longevity of those systems? I look on the consumer side and I say, “Oh, this is a big problem for DoorDash. This is a big problem for Uber.” I don’t know exactly how it works on the enterprise side.

We’ll see how it evolves. The fact that we still have a lot of mainframes, and our RPA touches a lot of mainframes, shows that the changing of enterprise systems is much more difficult than in the consumer space. If you look at complex enterprise applications like Workday and SAP, I can see people adding a nice layer of voice on top that’s AI-powered. You know, “Change my vacation responder to this.”

But the tablet and mobile phone didn’t make the keyboard or mouse obsolete. I think they will still have to coexist. Many people can work on user interfaces faster with a keyboard than with voice, but voice is going to become a good way to interact with applications. When you need to absorb a lot of information simultaneously, you need the user interface. In many cases, you’ll still need to interact with it. It’s easier than telling the AI, “Please press the okay button.” I will just go and click the button. It’s easier and it’s faster. They have to coexist. 

I was thinking about the DoorDash problem. You’re basically saying that Amazon can build its own DoorDash. If it can control the interface with the client, it doesn’t matter who delivers in the end because– 

It’s not that they will build their own DoorDash. It’s that DoorDash’s opportunities to make additional revenue will go away. It won’t be able to upsell, won’t be able to do deals, won’t be able to have exclusives. The interface will be commoditized and it will just become a service provider with Amazon or whoever’s AI agent being in control of the business. You see that for a lot of these ideas. You need an ecosystem of service providers for the agent to go and address, and that crushes the margins of the service providers.

It’s possible.

I think I see it in the consumer space. You see the back and forth. There’s some amount of, “We don’t want you here. We’re going to block your agents from using our services.” That is already happening on the consumer side. There’s some amount of dealmaking. Then on the enterprise side, it seems like there’s going to be a lot of dealmaking where maybe instead of API access, we’re allowing agentic access or RPA access because the data is what’s valuable there.

To a certain extent, we had the same problem with RPA. Think about the fact that most enterprise or SaaS software was licensed by user seats. With RPA, you needed far fewer user seats. You can have one seat that does the job of hundreds of seats. They found ways to kind of prevent this and create special service accounts to deal with it. Some vendors do not allow it. I’m sure they will find some ways to deal with it because how can Alexa order if DoorDash doesn’t want to receive the order? There has to be something in it for both of them.

I think that’s an enormous technical challenge, and the business challenge is even harder. You have to get a lot of people to agree to fundamentally restructure their businesses in order for any of this to work. Again, on the enterprise side, there’s more dealmaking. You have some instincts, some history, some moves to say, “Okay, here’s how we’re going to structure access to the data.” I have no idea how it’ll play out on the consumer side.

You mentioned a thing about LLMs not having memory, having to rethink the workflow every single time. That’s true. I think the AI companies are working on that. But they’re also pushing the idea of reasoning, that now we’re going to layer LLM approaches over and over again into a simulacrum of human reasoning. I don’t know if that’s correct. They say they can do it. Is that having an impact on what you’re doing? Can you say, “Here’s the decision, here’s the process by which a decision is made”?

The way we are seeing the reasoning part is that it’s more helpful, in our world, for creating automations. We have this Copilot-type of technology where you describe a process and it can create the artifact to execute the process. The smarter an LLM is, the closer to reality the creation gets, and the less the developer has to change it. So in a way, it’s like creating code, if you want. It’s the same thing. The smarter LLMs will create better code, but that code is still going to be executed by hyperscalers. It’s not the LLMs that do that. Think about it: if LLMs were going to do everything themselves, why would they generate code at all?

You mentioned hyperscalers. One of the things that I’ve been thinking about a lot is the amount of investment the hyperscalers are making just to buy Nvidia chips, build data centers, or invest in nuclear fusion against the promise that there will be this much demand for AI services.

They have to make money doing this somehow. It’s unclear how the bleeding-edge frontier AI companies are going to make money. I don’t know how OpenAI will ever make a dollar. I don’t know how Anthropic will ever make a dollar except by raising more money, which they’re very good at. They’re on a long-term plan. You’re a public company. You have to make the money. You have to buy the tokens, you have to use them, you have to build the products, you have to charge your market price. Are the rates we’re at now sustainable?

I don’t know if it’s sustainable or not for them, but if I were them, I would do the same. What if this is indeed the biggest revolution of our time? What if all of these GPUs and AI agents take over the world and I am not there?

But I’m saying you’ve got to charge your customer some price for the use of an AI tool. You’re not running all of your own models. You’re partnered with some of these companies. You’re buying some of their capacity. They’re, in turn, buying capacity from Azure, AWS, or whatever they’re running on. All of these companies need a margin and some of their margins are negative. OpenAI loses money on inference right now, but it’s selling that capacity to you. 

At some point, they’re going to turn the knob and say, “We’ve got to make money.” They’re going to raise prices on you, and you’ll have to pass that cost to your actual customers who are actual businesses trying to automate their companies and raise their own margins. When will it become too expensive? That seems like the correction that’s coming. You’re going to say, “Okay, OpenAI raised our prices. UiPath has raised its prices,” and some customers are going to say no.

If we look at it through the lens of the processes we automate, what’s the alternative at this point? Using human labor? I think even if OpenAI increases prices, I still don’t think humans can compete with AI plus automation when it is possible. And long term, the pricing will go down, and there’s a lot of competition for the business. I’m not really concerned about this aspect.

Have you structured your technology so that you can swap between AI providers? Are you tied to OpenAI, Anthropic, or is that easily modular?

No, not at all. We actually offer our customers a piece of technology that we call AI Trust Layer, where they can switch between different providers or bring their own on-prem model if they want.
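UiPath doesn’t spell out the AI Trust Layer’s internals here, so treat the following as a generic Python sketch of the provider-swapping pattern rather than its actual API: one thin interface, interchangeable backends, and model choice reduced to configuration. All class and function names are invented for the example.

```python
# A generic sketch of the provider-swapping pattern (not UiPath's actual
# AI Trust Layer API): one interface, interchangeable backends.
from typing import Protocol

class LLMProvider(Protocol):
    def complete(self, prompt: str) -> str: ...

class OpenAIProvider:
    def complete(self, prompt: str) -> str:
        return f"[openai] {prompt[:20]}..."   # real API call would go here

class OnPremProvider:
    def complete(self, prompt: str) -> str:
        return f"[on-prem] {prompt[:20]}..."  # e.g. a locally hosted model

PROVIDERS = {"openai": OpenAIProvider(), "onprem": OnPremProvider()}

def complete(prompt: str, provider: str = "openai") -> str:
    # Swapping models becomes a config change, not a code change.
    return PROVIDERS[provider].complete(prompt)

print(complete("Summarize this invoice", provider="onprem"))
```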

You just bought a company called Peak, which is another AI provider. Why make that bet? Why bring in technology?

We want to get into vertical agents. Peak is a pricing and inventory agent, and it has really solid experience in delivering these dedicated solutions based on agentic AI, and we want to extend that. Of course, we’ll integrate it first into our platform, but we want to come out with more dedicated agents. It makes the entire go-to-market easier. We want it to work a bit like a locomotive for the entire platform because it can create more demand for automation.

How does that technology plug into your existing stack? I understand it has markets you might not have or that you want to get bigger in, but ideally you buy a company and what you’re going to do is sell its existing markets more of your tools.

Definitely. That was on our mind. I think we have really good synergies in our go-to-market, and we can really accelerate its go-to-market, particularly in the manufacturing industries. We have very solid manufacturing practices in the US, Germany, and Japan.

Do you think there’s an opportunity for you to commoditize the AI tools themselves? I just keep thinking about this. You have your AI Trust Layer, you have your own vertical systems that you’re buying that you might deploy. At some point, what matters to companies is the business outcome, not that they have an OpenAI partnership. It feels like the big AI companies are trying to be everything to everyone, and you’re trying to specialize. Do you think at some point you’re going to say, “What we deliver are business outcomes and the technology doesn’t actually matter”?

I think that generative AI is going through this phase. Initially, it was a nice toy. Everybody set aside budgets to experiment with it, and now we are moving toward the phase where people really want outcomes. Initially, they all used OpenAI, and our strategy was to use OpenAI because it’s the best. If you want to make a proof of concept, why would you use something different?

But as you go and specialize it for different types of industries and processes, you can choose whatever is more appropriate. We look at everything from DeepSeek and Llama to Anthropic. We use all of them in different parts of the business. In the end, we are more of an AI engineering company, and our job is to build nice products that deliver value for customers. Behind the scenes, we use whatever LLMs are best for a particular scenario.

I actually want to ask you about DeepSeek. Was that as shocking of a moment for you? The industry reacted very harshly to the idea that you could run the model much more cheaply. Did you see that and say, “This will bring my cost down. This is also a revolution”?

Selfishly, for UiPath, any capable open-source model is a great thing for us and for our customers. My belief is that these dedicated agents will require a combination of fine-tuning and really good prompts. So, if you can have a great model that you can fine-tune and combine with good prompts, that will provide the highest value at the cheapest price. We find you can actually distill it into a smaller model that works very well for a particular domain.

Where do you see the biggest growth for traditional RPA, for AI, and for the hybrid of AI and RPA?

RPA is an established industry right now that grows in the low double digits. The demand that we are seeing right now for our agentic technology is something I have never seen in the RPA world. It really opens all the doors. We get a seat at the table where we’re not used to being, coming from the automation side. People are really excited about this idea of agentic automation. They get it. The value proposition is kind of simple for us. I can go to my clients and tell them, “Guys, where did you deploy robots? How are people interacting with the robots today? Why are we not reducing the work of people, deploying agents, and creating an enterprise workflow that will connect agents with people and robots?” It’s a no-brainer proposition. It resonates, it’s simple, it creates a lot of excitement.

I want to tell you about my favorite Slack room at Vox Media and get your reaction to it.

We have a room called Finance Support, and in this room, people ask a Slack robot to do stuff: file invoices, give receipts, all this stuff. I look at this room once a week, and it cracks me up every time. I literally fall over and giggle every time because the people who are new to this room type full sentences: “Hi, I need help with this receipt. Can you itemize this thing? I’ve got a flight.” The people who are repeat users have discovered that they just need to scream nouns at a robot.

So they just show up and they just say the word “expenses,” and all of this is in one stack. There are people who are very polite and then people who are just yelling nouns at a robot. You can see this secondary language of human-machine interaction developing: “I’m just going to say keywords to the robot because that’s all it needs from me.”

I look at that and I say, “Oh, that’s a revolution.” First of all, it’s very funny. But this is a revolution in business. You’re going to have some people who are just saying keywords in Slack to get things done for their business to an agent that might just go off and do it, and then you have the people who are used to all of the niceties of business fluffing up their communication. At some point, you’re just going to have robots saying nouns to each other instead of using an API. In many ways, that’s what RPA was. You’re just using the human interface instead of an API. Do you see all of business changing around this as clearly as I do when I look at this Slack room?

Yeah, and even for RPA, this is true. Many people are using RPA by creating a Slack channel that connects directly with a robot that does something. AI just extends the same idea. To me, it’s kind of fascinating how we communicate with bots. I’ve noticed myself — well, maybe it’s just an impression — that if I say “please,” I think LLMs come back with better responses. [Laughs]
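As a toy illustration of that keyword-driven, “nouns at a robot” interface, here is a minimal router in Python. The handlers and trigger words are invented for the example; a real version would sit behind a Slack bot and dispatch to actual automations.

```python
# A toy sketch of the "yell nouns at a robot" interaction: route a message
# to an automation by keyword. Handlers and keywords are invented examples,
# not a real Slack or UiPath integration.

def file_expense(msg: str) -> str:
    return "Expense report started. Upload your receipt."

def fetch_invoice(msg: str) -> str:
    return "Which invoice number do you need?"

ROUTES = {"expenses": file_expense, "receipt": file_expense,
          "invoice": fetch_invoice}

def handle_message(msg: str) -> str:
    # Polite full sentences and bare nouns hit the same keyword router.
    for keyword, handler in ROUTES.items():
        if keyword in msg.lower():
            return handler(msg)
    return "Try a keyword like: " + ", ".join(sorted(set(ROUTES)))

print(handle_message("Hi, I need help with this receipt from my flight."))
print(handle_message("expenses"))  # the veteran-user version
```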

Here’s something I also worry about. You’re the CEO. You get a lot of emails, you send a lot of emails. Do you ever worry about the loop where you’re responding to an email that was written by AI with another email that’s written by AI and suddenly everyone’s just pushing the summarize button and no one’s actually talking?

I personally write my emails because everybody in the company and clients knows my own tone and my broken English. So I cannot use LLMs. But yes, I’ve seen many instances where it looks like LLMs are talking to each other.

You’re the automation vendor. LLMs talking to each other — there’s something hollow there, right? Is that something you want to achieve with your products, or is it something you’re trying to avoid? 

I think to a certain extent we want to achieve that with our product. We want to facilitate agents talking to each other, but in a more controlled environment.

Daniel, you’ve given us so much time. You’re going to have to come back. I feel like I could just talk about the philosophical repercussions of all of these systems with you for many more hours, but you’ve given us so much time. Thank you for being on Decoder.

Thank you so much. 

Questions or comments about this episode? Hit us up at decoder@theverge.com. We really do read every email!