Bonus: Designing for Trust in Urban Systems with Ryan Powell of Waymo
July 22, 2025
33:40
S3: Bonus
In this episode from ID’s Shapeshift conference, Ryan Powell (MDes 2001) shares how Waymo uses design to tackle one of the world’s most pressing challenges: the 1.35 million annual traffic fatalities and 50 million injuries from vehicle crashes.
Transcript
Intro
Welcome to ID events. A series on the With Intent Podcast from the Institute of Design at Illinois Tech. This past May, design and tech leaders gathered at ID’s Shapeshift conference to reimagine how we approach AI—shifting the conversation from what technology can do to what we think it should do. In this episode, we share remarks from Ryan Powell, Head of UX at Waymo and ID graduate, whose session explored how design builds trust in self-driving cars at the individual, community, and city-wide level. Here’s Ryan Powell on designing for trust in urban systems.
Ryan 00:40
As Albert said, I graduated from ID almost 25 years ago, and I see a few familiar faces in the audience, so this is a bit of a homecoming for me. Thanks for having me; it means a lot to be here. As Albert mentioned, I’m at Waymo, which started out as Google’s self-driving car project, so I’ll talk a little bit about that. Really, the theme that I want to touch on is how an application of AI, autonomous driving, can be guided by human-centered design and really have an impact. So I’m going to jump right in with a stark global challenge, and that is, around the world, every year about 50 million people are injured in vehicle crashes and about 1.35 million people lose their lives. I remember back in 1999, when I was a student here at ID, I was coming home on a Friday evening. There was going to be a party later that night at a friend’s house, and I was going up Wells Street in the bike lane when a car ran a red light and struck me.
So I flipped up, hit the windshield, it shattered, and I was knocked unconscious. This was around rush hour, which was fortunate because there were a lot of people nearby. I woke up and spent the night in the hospital. This was before we all carried cell phones, so when I got home from the hospital the next morning, I had a bunch of messages from my friends on my answering machine. But I was lucky: just a few bumps and scrapes and some stitches. This really is the core challenge that we’re trying to affect at Waymo, and it’s what motivates a lot of us to be there. So when we think about what we’re doing, on the one hand you can say, yes, a self-driving car is very innovative, but we actually think of our technology more as an imperative: we’re trying to get it out into the market so that we can affect these numbers that you see here.
Ryan 02:48
So Waymo, as I mentioned, started out about a decade ago as Google’s self-driving car venture. I joined the team a little over nine and a half years ago. What we like to say at Waymo is that we’re building a generalizable driver. When you look at the United States alone, I think it’s something like 3 trillion vehicle miles traveled every year just in the US, and globally it’s something like tens of trillions of miles. You can imagine there’s a lot of variety in those miles: somebody driving a personal car, transportation systems within a city, and of course goods being transported from point A to point B. But the idea at Waymo is that we’re building this driver so that we can apply it to a lot of different applications.
So the first application that we’re focused on at the moment is a ride-hailing service that we call Waymo One. Today, just like Uber and Lyft, we have an app on iOS and Android; you can download it, hail a Waymo, and basically tell it, I want to be picked up at point A and taken to point B. Today we are in these four cities. We started out in Phoenix, I believe in 2017, in the suburb of Chandler, then moved on to San Francisco, then Los Angeles, and we recently launched in Austin. What’s a little bit unique about Austin is that it’s a partnership with Uber. We’re at this very early stage of experimenting with different ways to go to market. So in San Francisco, Phoenix, and LA, you can download the Waymo app and anybody can take a ride; it’s a completely open service.
Ryan 04:42
There’s no wait list there; we aren’t gating the service on any parameters right now. But in Austin, if you have the Uber app, you will at times be matched to a Waymo. The way to think about that model is that we’re really augmenting the fleet of human drivers that they have. This slide is probably already a little bit outdated, but we have announced that we’re coming to Atlanta very soon, with a similar Uber partnership to the one in Austin, and then we will be in Miami and DC as well. We’re also doing some driving right now in Tokyo. What’s exciting about that is that it’s our first left-hand-traffic market. And we just announced a partnership with Toyota, where we’re going to work with one of their vehicle platforms to integrate our technology.
And that’s along with Hyundai, who we’re working with right now, and Zeekr. I’ll show a picture later of some of the vehicles that we’re working with. The idea is that we work with different vehicle makers to integrate our technology, so we have a bunch of different form factors for our ride-hailing service. But when I think back to starting nine years ago, I reported to Dmitri, who’s our co-CEO right now, and he had spent his entire career trying to figure out how to get a car to basically drive itself. The other big question that we were starting to kick around back then was, okay, are people actually going to get into a self-driving car? We’re not going to be able to affect those numbers that I shared at the start unless people adopt this technology.
And so as designers, when I joined, it was just me, another designer, and a few researchers, and the question that we really grappled with at the time was, again, are people really going to get into a self-driving car? That question served as the cornerstone for all of the design work that we have done at Waymo, and I’ll talk about it today in three chapters. It’s a little bit hard to see here on the screen, but first I’m going to talk about the work that we’re doing to foster trust between our technology and our riders. The second chapter is about how, when we are part of a city, we make sure that we go with the flow, that we integrate into the fabric that already exists there, and that we don’t stand out and cause disruption. And the third chapter touches on some considerations at the civic level. Cities care very much about new transportation providers coming in, and how do we approach that in a way that ensures a smooth adoption of what we’re doing?
Ryan 07:38
So the first one, which is maybe a nice segue from the last conversation, is this idea of trust. When we thought early on about whether people would get into a self-driving car, it really became a question of how do we design for trust? That has been the core principle we’ve worked with for many years now. The first thing that we did when we started out was spend a lot of time riding in cars with a human driver and passengers, borrowing from some of the methodologies here at ID. The first thing that we picked up on is that there’s a lot of communication that happens between a passenger and a human driver. A lot of times that communication is direct: you might ask the driver, hey, why are you taking this route versus another one? Or, the light’s green, why aren’t you moving? But what we also found very interesting is that there’s a lot of nonverbal communication happening. So in this example here, let’s imagine that you’re a passenger in that backseat. You might be on your phone, kind of distracted or doing something with your head down. The first thing that you’re going to notice in this scenario is that the car is slowing down. You’re going to feel something that might cause you to look up, and you’re probably going to notice that the human driver’s gaze is fixed on the jogger in the photograph here. In that moment, that’s probably enough information for you to understand: okay, we’re slowing down, I felt the vehicle slow down, I glance up and see what’s happening with the driver and the jogger, and it’s enough for me to go back to what I was doing.
But of course, with AVs, in the absence of a human driver, there’s a communication gap. So we started out very early on thinking about a proxy for the type of communication that happens between human drivers and passengers. Some of you may have seen images like this. This is what our internal development team uses: all of the engineers working on our autonomous driving technology have tools whose interface looks like this, and as a design team we work on this internal tooling as well. But they are expert users, and they want to understand everything the car can see and what’s affecting its behavior. So if I’m trying to improve, say, our interaction around a cyclist, and I make a change to some of the code and want to see in simulation how that change is doing, I would look at a view like this. But early on, when we thought about communication in the absence of that human driver, the big question that we got from a lot of riders was: what can Waymo actually see?
Ryan 10:29
And so it’s obvious to a lot of us as designers that showing riders something like this is way too complex for people who are just trying to move from point A to point B. So a lot of the work that we did early on was trying to translate the complexity that you see here into something a little bit more approachable. Today, when you get into a Waymo, there is an interface in the front of the vehicle and one in the back as well, and I’ll talk a little bit about the approach that we’ve taken with it. The idea is that, again, we’re trying to design a proxy for the communication that happens between a human driver and a passenger. If we break down the UI that you see in the car today, it starts with this 3D scene.
And so you see the Waymo near the center there, you see the green trajectory telling you where the car is going to go, and then you see basic elements of the road graph: crosswalks, the edge of the road versus the non-road surface. And what you’ll notice, at least in this still image, is that we use our laser points to also render the humans in the frame. So you have a cyclist going across the intersection and some pedestrians there as well. Early on, we actually used to render the surrounding traffic around the Waymo with these laser points too. But just by the nature of the way lidar works, it bounces off the surfaces that are facing the Waymo.
And so you get these shapes that are really kind of incomplete, and the feedback that we got early on was, hey, it doesn’t really look like Waymo can see that well. Of course, the engineers are like, we can see three football fields away when we’re not occluded. So we did a lot of work to try to extrapolate those shapes and fill them in, but the point is that it still looked buggy. It worked against that principle of trust; it was actually doing the opposite of what we were trying to achieve, which was to increase trust. So this is what that 3D scene looks like today: we use these harder shapes to render the traffic that’s nearby. What we also noticed early on is that people would often wonder, hey, why don’t I see any buildings in the scene?
Ryan 12:43
I look out of the window and I can see a bunch of buildings, but in the interface here, I’m just looking at the Waymo and what’s around it; I don’t see buildings. So we pulled in some of these shapes from Google Maps, but the buildings are there to provide a soft reference for the real world, so we dimmed them back a little bit. The objects that we still focus on here are the self-driving car, the nearby traffic, and the pedestrians; we think of those more as our first-class objects.
Early on, in some of the research we did riding with people in a self-driving car, they were a bit more anxious when it came to construction zones. The anxiety stemmed from the fact that a construction zone might not have been there in the morning: a two- or three-person crew can pop up with some cones and might be doing something to the road. What made people nervous in that situation is that it’s such a changing, dynamic environment. Does Waymo actually understand what’s going on? So this is a small change, but it’s probably the one feature that has paid the highest dividends when we think about return on investment: we started showing these traffic cones in the UI. The way self-driving cars work, they have an enormous perception library; we understand what each object is, and we can choose what we want to show in the UI. Just turning on these traffic cones worked towards that goal of helping people understand what Waymo actually sees, which then led to higher trust. Even to this day, and again, I’ve been there for over nine years, we hear this come up all the time in our user research: just how impressed people are that Waymo understands not only that there are traffic cones, but that they’re arranged in this pattern as well.
Ryan 14:58
The other thing that we do relates to that communication between a human driver and a rider, which is often very subtle, with nonverbal cues. We do the same thing in the UI. So in this case here, if you’re in the Waymo, we’re going to make a right-hand turn and the car is going to yield to this pedestrian in the walkway. What we do in the UI, again in a very subtle way, is highlight the crosswalk when there is a human in it and that is what we are yielding to. So if you’re in the backseat of the Waymo, maybe on your phone, and you notice that we’re not moving, sometimes it can be hard to see from the backseat what exactly is happening around the car. This is an opportunity to glance at the UI and see: okay, there’s a person here, and that’s why the Waymo’s not going, so I can go back to what I’m doing.
Then, on top of the 3D scene that you see in our interface, we have what we call the status layer. The way that I like to think about it is that it’s the area where we have a little bit more of the direct communication that you might have with a human driver. So in this case, let’s say that Waymo is about ready to go through this intersection, there’s a cyclist in the center, and we start to slow down. Again, you’re going to feel the car slowing down first; that’s going to be a cue to you, and you may look out and see the cyclist, or you may not be able to see them. But in the UI, with the status layer here, we are a bit more explicit: we actually tell you that we’re yielding to the cyclist.
Ryan 16:27
We use sound design as well. There are moments where we want to call your attention to the screen, and we know that if we use some UX sounds, you’re more likely to look at it, so we’re very judicious in how we apply our soundscape. My significant other is a sound designer, so we take a lot of pride in the sound design work and how we incorporate it into the Waymo experience. The idea here is that we don’t want it to feel like a video game when you’re driving around. I would say this is more of a lean-back experience: the UI is here to foster that trust and to be there when you want to look up and glance at it, but it’s not something that you’re going to stare at the whole time.
Ryan 17:10
Because one of the big value propositions of a self-driving car is that when you don’t have a human driver up front, you really have that space to do whatever you want. That’s the big aha moment when people take their first ride: yeah, it’s cool that the car is driving itself, and at the same time, I’m in here by myself. I can listen to the music I want to listen to, I can sing along, I can take a phone call, I can take a nap, all those types of things, because I have that space to myself. And there’s that trust there: after a while, you know that if something happens where the Waymo might slow down or do something, there’s a way that we’re communicating what’s happening.
Ryan 17:57
Okay, so thinking then about the second chapter. We started out driving in Phoenix, and I often get asked the question, why Phoenix back then? A number of reasons go into these types of decisions. The regulatory environment in Arizona is very favorable, of course. Phoenix has a lot of new infrastructure, so the streets are very wide; they are good streets to drive on. It can be hot there; it’s getting hot right about now, and you see fewer people out and about when the weather gets really warm. And the density, of course, is not as great as in a city like San Francisco. But when we went from Phoenix to San Francisco, the big shift that we made as designers is that we realized we had been focused up until that point on the rider and their needs, and we really had to start thinking about how to be a good citizen as a road user in a very dense urban environment like San Francisco.
Ryan 19:01
So there’s a number out there: I think we encounter 30 times the number of pedestrians and cyclists on the roads in San Francisco versus what we did in Phoenix. I was talking to Amy from Microsoft last night, and she rode in a Waymo about a year ago. She told me she didn’t have a great experience because there was a double-parked car, and the Waymo was waiting very patiently behind it, and it waited too long. That’s what the experience was like when we first started in San Francisco. We were a very cautious driver, and in a lot of cases that translated into over-yielding. What I mean by that is that we were waiting for a lot of stuff to happen around us before we found our way through whatever was happening.
Ryan 19:54
And what’s tricky about San Francisco is that you don’t want to be an aggressive driver, of course, when you’re making autonomous driving technology, but you need a little bit of assertion in these denser environments, because again, you want to go with the flow. You don’t want to be that outlier where the Waymo is holding up traffic and frustrating people. We’ve all experienced this as pedestrians: you’re out and about, say at an intersection, and you’re going to cross. Oftentimes what we do as pedestrians is make eye contact with the human driver. Sometimes the cue is even more explicit, where the driver, or you as a pedestrian, might wave. But the point is that there is, again, communication happening between a pedestrian and another driver, another human driver, I should say.
Ryan 20:50
So for Waymo, then, the way we navigate this in the absence of a human driver is something called legible motion. This is a concept from robotics and animation, if there are any animation designers in the house: the idea that you can convey intent through motion. One of my colleagues at work told me there’s a good example of this with a robotic arm that’s going to pick up a drink in front of it. There are two drinks, and it’s going to make a decision and pick one up. If you don’t think about legibility, the arm might just reach straight ahead and, at the very last minute, deviate and choose the drink on the left or the right. If you had to guess what it was going to do, it would feel very awkward or ambiguous as an observer.
Ryan 21:45
But when you think about legibility, let’s say that we want to grab the drink on the right: think about that arm coming in on a soft, slightly curved trajectory that gives you the hint that it’s going to take the thing on the right. That is a much more legible action, and it’s through motion that you’re able to understand it. For us, the classic example is at a four-way stop, where we will now nudge forward as a signal to the other pedestrians or drivers that we’re about ready to take our turn. But as we did a bunch of work with pedestrians to really understand what their experience was like with Waymos in San Francisco, we had them rate the experience. We kind of geek out on different measures for how you rate these interactions.
Ryan 22:42
And so I would love to talk about that if anybody’s interested. But we had people tell us: are there moments where you’re unsure what the Waymo’s going to do? You might not feel anxious per se, but it definitely interrupts the flow that you’re in. The scenario that came up was when the Waymo is actually stopped or coming to a stop. In those cases, we can’t rely on motion to convey intent. So again, think about a stop sign or a four-way stop. In the early days, the Waymo would wait for you as a pedestrian to cross, but there’d be that awkward moment where you’d stand there and wonder: okay, does it see me, and should I cross? What should I do? So a little bit of a dance there.
Ryan 23:32
And so we asked ourselves: how can we convey intent when we don’t have motion working to our advantage? Fortunately for us, our lidar dome on top of the vehicle also has a series of LEDs in it. In this example here, you see our logo projected in this sort of marketing shot. This is a great display: it’s 360 degrees, it’s super bright, it works in sunlight, and it works at night. The first application of this was helping you find the correct Waymo to get into. When we were in Phoenix, there’s a big mall there, Chandler Fashion Mall, and there’d be five Waymos outside the Nordstrom. You’d come out, and of course we do things like use Bluetooth and your location to authenticate you and unlock the car as you approach, but when there’s a row of five of them, it’s still awkward knowing which one to get into.
Ryan 24:27
And so the first application was showing your initials, which you can set in the app, on the dome. There’s a funny scene in San Francisco, at least I think it’s funny: Charli XCX had a concert maybe five or six months ago, and a ton of Waymos came afterwards to pick up the concertgoers. So there was a sea of Waymos, and this is how you find your Waymo in a sea of Waymos. But we started thinking about how we could leverage this display to make those interactions with pedestrians a little bit smoother when the car is stopped in the city. So we started to think a lot about these signals, and we did a bunch of iterations on them, and it’s really hard to do this type of work.
Ryan 25:17
There’s an entire body of research out there, and we were by no means experts. You see a lot of Wizard of Oz prototypes by Stanford and other entities, where they’ve tried having somebody dressed in a dark suit so that it looks like there’s no driver, with an external display showing different signals. What’s hard about this is that it can be unclear: is Waymo trying to tell you to do something, or is it telling you what it’s going to do? That type of communication mismatch comes up often when you look at the research that’s out there. The way that I would make sense of it is to boil it down to three points: when you use a signal like this, it has to be legible, of course; it has to be easy to learn; and it has to be glanceable, something that I can look up and see right away.
Ryan 26:14
And so we did a bunch of user research at Waymo with different types of signals. This is on the roof of our office, where we had different people go through the scenarios. What’s also hard about this is deciding what triggers to use to actually display these signals; you can imagine all the complexity out there in a dense environment like San Francisco or Chicago. That takes a lot of fine-tuning, and we also spent a lot of time making sure, from a human perspective, that we were hitting those three principles I mentioned. Where we landed, at least today: in any of the cities that we’re in right now, if you’re at a stop sign, not a stoplight, but a stop sign, and the Waymo’s already there, we show this forward-facing signal, and we’re trying to tell you two things.
Ryan 27:14
A, we see you, and B, it’s okay to cross. What we’re able to do is pull different examples of this from the wild and look at how it affects people’s behavior, to give us a sense of whether the signal is working as intended or needs some fine-tuning. I would by no means say that we have solved this; I think we’ve just started to scratch the surface. But again, this goes towards that goal of making sure that when we’re in a city, we really go with the flow and aren’t a disruptive agent amid all the complexity happening around us.
Ryan 27:54
The third and final chapter, then, is thinking about cities at more of a civic level and what their needs are. Some of you have probably seen headlines where other autonomous driving technology companies have run into hurdles with different cities based on different things that have happened. As we think about bringing this technology to as many people as possible, we really want to be careful about how we enter a city and how we work with cities to bring our technology there. So just like the work that we’ve done with riders and pedestrians to understand their needs, we spent a lot of time with cities as well. This comes down to talking to a lot of stakeholders in cities, government employees and consultants who work with cities, to really understand a few things.
Ryan 28:50
We want to know how cities think about transportation and autonomous vehicles, what they want in a nutshell, and then how we should respond based on that understanding. We did this across the US, we’re now doing it in markets outside the US, and we have assembled a bunch of themes. I pulled out one theme here to share as an example. The insight is that cities really think about transportation as a system. It seems a little bit obvious, but this has really stuck with us. What I mean by that is that they’ve invested, in many cases a lot, in public transportation, and they want new transportation providers coming to a city to help them understand how you fit into that system. A great example is the first-and-last-mile problem that a lot of cities have: how do you get people to and from a transit hub or a transit station when there might be gaps in the existing network?
Ryan 30:01
So this work has inspired a couple of pilots that we’ve done, one in San Francisco and one in LA, where we provide an incentive, a monetary credit on a future ride. When somebody takes a Waymo to or from a transit center, we give them a credit for their next Waymo ride. What we’re trying to understand is how that might affect behavior, not only around Waymo usage, but also around using public transportation in these cities. This has been well received so far in San Francisco and Los Angeles. Okay, so that brings me back to the initial challenge that I mentioned up front: how are we doing against this problem that we’re all motivated to affect? When we think about the question I brought up early on, will people even get into a self-driving car, I think the answer is yes, though we still have work to do.
Ryan 31:10
We know that when we look at populations of people, there are early adopters, and we all know the different types of folks out there. Right now, in the four cities that we’re in, we’re doing 250,000 rides a week, and these are fully driverless rides; there’s nobody else in the car. I think Tekedra, our co-CEO, mentioned last week that we’ve also hit a milestone of 10 million fully driverless, paid rides in these four cities. That’s a big milestone for us. The other thing that’s nice to see is that there’s a body of work out there looking at our safety: how do you compare it to a human driver, and how do we measure up? If you go to waymo.com/safety, there’s a whole area that covers the research that other people are doing, in addition to Waymo, to understand how our technology compares to human driving.
Ryan 32:09
And what we’re finding is growing evidence that we are making roads safer, especially in crashes that involve not only the driver of the vehicle but also other road users, so pedestrians and cyclists. So I would say that we’ve started to have an impact, and when we think about AI and its application to autonomous technology, these are some proof points that we are starting to chip away at the problem. The last thing I’ll end with here is that I’m also an optimist when it comes to technology and the future. It’s a very exciting time when we think about AI right now, and there’s a lot of ambiguity in how things are going to play out. But what I always come back to is this human-centered lens: really trying to maintain a focus on what human needs we are trying to solve, and how we can apply that to AI, and in my case in particular, to autonomous driving technology. So that’s my call to action here: maybe keep that in mind as we explore this topic together. But yeah, thank you.
Outro
This event was recorded live at the Institute of Design at Illinois Tech. If you haven’t already, be sure to subscribe to WIT for more discussions about design and its impact on the world around us. Thank you for listening.
Key Points
- The challenge of building trust between passengers and autonomous vehicles in the absence of human drivers
- The evolution from Phoenix’s suburban environment to San Francisco’s dense urban challenges, requiring “legible motion” and LED signals to communicate with pedestrians and other road users
- Working with cities as stakeholders to integrate autonomous vehicles into existing transportation systems, including first-and-last-mile solutions for public transit
- Growing evidence that autonomous vehicles are making roads safer, particularly for vulnerable road users like pedestrians and cyclists
- The importance of maintaining human-centered design principles when developing AI applications, focusing on human needs rather than technological capabilities