
Bonus: Shaping an Inclusive AI Future with Anamitra Deb of Omidyar Network

July 29, 2025

28:14

S3: Bonus


In this episode from ID’s Shapeshift conference, we hear from Anamitra Deb, Senior Vice President of Programs and Policy at Omidyar Network. His talk explores how we might create a responsible future in which access to AI’s benefits is shared by all humans.

Transcript

Intro

Welcome to ID Events, a series on the With Intent Podcast from the Institute of Design at Illinois Tech.

This past May, design and tech leaders gathered at ID’s Shapeshift conference to reimagine how we approach AI—shifting the conversation from what technology can do to what we think it should do.

In this episode, we’ll hear from Anamitra Deb, Senior Vice President of Programs and Policy at Omidyar Network. His talk explored how we might create a responsible future in which access to AI’s benefits is shared by all humans.

Here’s Anamitra Deb on shaping an inclusive AI future.

Anamitra (00:41)

So first of all, thank you so much for inviting me here to speak today. As Albert and Anijo said, we had a pretty chance meeting a couple of years ago. It came about because we were doing some work on online trust and safety and human-centered design. And design professionals have been so core to the idea that if you change the defaults in the tech ecosystem, you completely change the experiences that people have. In particular, you can change the experiences of the most vulnerable people, the people technology isn’t built to be accessible to. And that’s really why I think we’re all here today: how do we shape an inclusive AI future that involves everybody who uses technology, in a way that’s democratic and in the public interest, and where the defaults in the system work for everybody? But before I go into that, let me start with something personal. I want to talk about my grandmother. She turned a hundred years old in March, which is quite cool. My 10-year-old thinks it’s the pinnacle of human achievement to be a hundred years old, by the way. So I’m way down on his list right now.

Anamitra (01:53)

I’m close to a hundred, but not that close. So she turned a hundred years old. She lives in Mumbai, in India, and it’s amazing how much technology is at her disposal. She’s able to use blood glucose monitors. She’s able to have systems on her that monitor her vitals and send the information to her doctor on a daily basis, so we know if there’s ever a reason to actually come in and check on her. She’s able to stream the movies of her youth on Netflix, and every day she spends three hours basically just falling back in love with whatever film star she was in love with in 1950. She can listen to the old tunes on Spotify. She endangers the lives of poor Indian guys on scooters every day, because she orders food and they have to get there in 10 minutes or they don’t get paid.

Anamitra (02:44)

So she does that every day. And so much of her family, including us and her son who lives in New York, lives across the world, and she’s able to FaceTime with them, she’s able to WhatsApp with them. She’s able to use voice-to-text to send messages to them. So it’s really remarkable to see someone you wouldn’t think the technology ecosystem was built for using technology every day, and it gives me great optimism. But I’m also conscious that there’s no way she would be able to do that if she didn’t live in a very caring human environment. The only reason all of this is possible is because she has caretakers who tend to her 24/7. That’s not possible for everybody. She lives in her daughter’s house, and she lives in a house of love. And that makes a huge difference, because she really is undergoing that second childhood, right?

Anamitra (03:35)

She really is at a point where she can make very few decisions on her own. She can’t communicate very cleanly. She has good days, and she has some really bad days and weeks as well. So what we’ve realized over time is that the only reason all of that technology works is because she’s immersed in a structure of human care and love and compassion, without which her quality of life would be significantly worse. And so when we work on technology, on digital technology, on AI at Omidyar Network, that’s the way we come at it. We understand that digital technology can make wondrous things possible. It can create shared power, possibility, and progress. Those are my brand guidelines, I have to say. At the same time, it only has that effect if we make collective and intentional choices about what technology is, what it does in our world, and what problems it solves.

Anamitra (04:34)

The Amazon speaker yesterday was amazing on this, because he was saying, look, there are specific problems it can solve and other problems it can’t, and we are the arbiters of which is which. And ultimately, digital technology is only going to succeed at ensuring inclusivity, equity, fairness, justice, and all the things we actually care about if humans make collective choices, if we exert our shared agency, and if we think about societal governance as a way of ensuring that technology works for us and not the other way around. So I have a few slides I’ll go through if this works. Hopefully it will. There we go. Let me start a little bit by talking about who we are. And by we, I mean Omidyar Network. Omidyar Network was founded by Pierre and Pam Omidyar.

Anamitra (05:25)

Pierre and Pam Omidyar are the founders who created eBay, and back in the day they bought PayPal and scaled it up. All of those companies have gone their different ways; we are not a part of the company anymore. We are an independent philanthropic organization. As Albert said, our mission is to bend the arc of the digital revolution toward shared power, prosperity, and possibility. We focus on the governance, the business, and the culture of tech, which is pretty much everything to do with tech. We have a dual-checkbook approach. What that means is we have both an LLC that can make for-profit investments in social impact organizations, usually startups, and a foundation that can make 501(c)(3) grants, charitable-purpose grants, to civil society organizations that are building up the ecosystem, doing policy and advocacy, trying out new systems, creating new models of governance, and so on and so forth.

Anamitra (06:21)

Through all of our work over the last 20 years, we’ve committed close to $2 billion in philanthropy. To some people, that’s a lot; to the Gates Foundation, that’s a rounding error. We invest in and collaborate with some of the world’s best visionaries. I mean, look, none of this work is possible without the champions in civil society who actually do this work every day. Lee and I had a great talk about this on the podcast yesterday, and that’s why I want to feature some of our partners and grantees. This is just a small selection of the folks we work with, but they’re all doing incredible work. Some of them I’ll talk about a little more as we get to online trust and safety and design. We have folks like Culture CoLab; it’s actually Pop Culture CoLab, and the “Pop” is missing because of the color scheme.

Anamitra (07:08)

Pop Culture CoLab works on narrative, really on cultural activations, on helping people who don’t see themselves as part of the storytelling in technology become part of that storytelling. Technology is so ubiquitous today: it’s in our homes, it’s in our cars, it’s in our lives, it’s in our relationships. There’s no way to avoid it. And so one of the things I always say to folks is that expertise in technology is a really questionable term today. There are technical experts, for sure, but all of you are experts. All of us are experts, because we use technology in an immersive fashion every single day. There’s never been a time in the history of the world when a single person used more technology, whether it’s you or whether it’s my grandmother. And so one of the things Pop Culture CoLab tries to do is interrogate who is an expert in technology, who can be involved in technology, who can make choices about technology.

Anamitra (07:58)

And the answer is really everybody. That’s really important. The AI Now Institute is a phenomenal organization, run by two women who are some of the best thinkers on the societal governance of AI. If you haven’t checked out their work, I really encourage you to do so. The Copyright Alliance is a really interesting project. It’s a collection of people trying to figure out what will happen to the creator economy because of generative AI. And that’s one of the big questions today. As you saw from the SAG-AFTRA strike a couple of years ago, when people exert their bargaining power to figure out how technology will be used in their workplaces, and how the productivity gains and profits of that technology will actually be distributed to the workers, it sets an amazing template for society.

Anamitra (08:47)

Those are the things we really have to watch out for. No one knows whether this is going to replace and automate all workers or enhance the productivity of all workers; it’s somewhere in between. But the point is that during the five or ten years while we figure that out, if we don’t involve workers in the decision-making about what is going to happen to them, then I think we get adverse outcomes every single time. I won’t go through all of these. Let me just mention a couple of others that are important to this talk. The Black Innovation Alliance thinks about how responsible innovation can be done differently, how you can involve communities from the start. They’re based in Atlanta, but they are a federated network of chapters working on innovation and trying to build businesses and startups that are more representative of the communities they come from.

Anamitra (09:32)

And that’s a really important piece of the puzzle here. Then there’s Human Connections AI, a very interesting organization, because what they’re trying to do is be a hub for everyone who wants to think about what AI companions and chatbots are going to do to our personal relationships. Not what they’re going to do when you complain to Amazon about a customer service problem; that’s a pretty okay thing, and you may or may not get good outcomes, depending on how loyal a customer you are to Amazon. But whether we should allow AI companions to target our children and teens is a really big question. There haven’t been many times over the last hundred years of our civilization that we’ve said no to certain technologies, but we have said no to some, because they’re either extractive, manipulative, or just harmful, without the benefit being a sufficient part of the equation.

Anamitra (10:27)

And AI companions and chatbots might be great for consenting adults who want to have different kinds of relationships. Human-machine interactions and relationships will change over the next 10 or 20 years. But we do have a perspective that anyone 17 and below is not sufficiently capable of understanding the manipulative potential of an AI companion, and the benefits don’t seem to outweigh the risks. So one of the things Human Connections AI is trying to do is create evidence for where human-machine relationships are helpful and where they can be less helpful over time. So when we talk about building an inclusive AI future, what do we talk about? We talk about embedding humanity into our digital future. We actually use the phrase hardwiring humanity into our digital future. And the reason we do that is because what we’ve seen is that every technological revolution that promises so much fails to deliver inclusive, equitable, and fair outcomes without two things happening.

Anamitra (11:30)

And those two things are the countervails. They’re vital, interdependent countervails that make technology better in service to society. One is shared agency. You have to have a sense of shared agency. Humans have to feel like they have some control, some power. And today the ecosystem is quite unbalanced in terms of where that power lies, where the concentration of decision-making lies, and how much we feel we have shared agency over our lives. If you ask my grandmother what’s going to happen with AI, she thinks of the Terminator movies. But we could be thinking of the Wakanda movies, right? Those show a much more inclusive, integrated, community-based way of using technology. So one of the things we always say to people is, even if you just make that one shift in your mind, don’t think Terminator, think Wakanda. It’s a way better, more positive, more human-integrated technology future.

Anamitra (12:21)

But that means we have to expand this power. We have to expand who’s sitting at the table, who’s talking about technology. So a lot of our work is about bringing kids and teens to decision-making forums. And it’s amazing what happens when we do, because lawmakers, who are often 70-year-old white men, are just in tears when they hear the lived online experiences of kids and teens talking about what they like and don’t like about an internet that should work for them. We bring parents to the table, we bring workers to the table, we bring faith leaders to the table. We bring veterans to the table, who are currently very upset about what’s going to happen if there are Medicaid cuts or Social Security cuts, because they depend on those programs. So there’s a broad coalition of people who care about technology choices and technology decisions.

Anamitra (13:01)

And one of the things we try to do is expand that sense of network power. The coalitions only work when they’re broad-based enough to countervail the very powerful lobbying of some of the interests going the other way. The other thing we think about is mainstreaming a culture of people-first innovation. How do we shift the narratives, the beliefs, the behaviors, and how do we embed those into startup choices and into civil society organizations? How do we make sure that people are really making decisions about what innovation looks like? Because innovation, especially in Silicon Valley, which is where I am, has a certain flavor and a certain look. Some of it is extremely successful and extremely valuable, and some of it is, as my CEO says, just doing jobs for young white dudes that their mothers used to do for them: washing their laundry a little faster, cooking their food a little faster.

Anamitra (13:57)

It’s okay, it’s not that great. It’s very convenient, but it’s not really advancing the public interest in any serious way. So we do a lot of work on shared agency. We think that’s really important. We have to feel like we care and we have optimism. It’s a bit like how they talk about consumer sentiment in markets: if you don’t believe the markets are going to work for you, you’re less likely to save, less likely to make good purchasing decisions, less likely to make good financial decisions, less likely to save for your kids and think about college and upward mobility. It’s the same with technology. If we don’t think positively about technology, if we don’t feel we can reclaim some power and agency over the future we want, and if we don’t recognize that moments like this one, when new technologies appear, are actually a really strong window of three to five years to make those choices, then we won’t feel as invested, and the futures we want won’t show up.

Anamitra (14:49)

So then the second thing we really work on is societal governance. We really believe you have to govern technology in some way. Digital technology is the last remaining Wild West in the US. There’s just no way in the world you’d be able to put a car on the road today without seat belts, without rear-view cameras, without speed limits on the highways, or whatever else. Digital technology has none of these. Its beauty, in some ways, is that it has none of these, but over time we’ve been realizing that this comes with a real societal cost: technology that’s tested on all of us but not well tested before it’s released, applications that have no safety regulations. There’s no way we would do that in the food industry. There’s no way we would do that in the automotive industry.

Anamitra (15:31)

There’s no way we would do that in the drugs and pharmaceutical industry. But we’ve chosen, for the last 30 years, to do it in digital technology. And so one of the things we say is that it’s time for some rules of the road. Good constraints actually make innovation better, not worse. One of the most repeated and false statements you hear in DC and Sacramento and other places is that any regulation is bad. But actually, regulation is good for consumer protection, regulation is good for transparency, and regulation is good because it gives the field a bunch of fences. Within that field, once it’s fenced, tremendous innovation is possible. Fuel emission standards are a great example of this. We got better cars, we got more fuel-efficient cars, and ultimately we got electric cars, because fuel emission standards were actually a constraint.

Anamitra (16:18)

They were fences on that field, and seat belts are fences on that field. Think about how safe you feel, those of you with families and kids that you take around in cars, because we have some rules of the road: we have speed limits, and we have seat belts and airbags and rear-view cameras today. That wasn’t always the case, and it takes some time. These things usually take 10, 20, 30 years to get there. But we’ve had 30 years of this technology. We’ve had 30 years since Newt Gingrich decided in 1996 that the internet was going to be a privately governed and monetized sphere. So it’s time to change, I think. We work a lot on changing that. But part of changing that is that you can’t just always be a diagnostician. You can’t just be the doctor who says something’s broken; you have to be a good clinician as well. You have to show a pathway to better health. So one of the things we do is invest in those kinds of alternatives. What are better models? What are better solutions? What are alternative governance structures? What are ways in which you can see a pathway to solutions and a better future? You can’t just tell people something’s a problem; you have to give them an alternative. So we do a lot of work identifying people who come up with great solutions and great alternatives for that technology.

Anamitra (17:28)

These are some examples of places where we’ve had some success, and when we say we’ve had some success, it really means our partners have had success; we are just sort of funding them. The Tech We Want is a really interesting activation we do every year in Austin on the side of South by Southwest. It brings together thousands of people who are interested in creating alternative innovation engines and more responsible technology narratives about reclaiming our power and agency. If any of you are ever in Austin around South by Southwest, I really encourage you to come and see it. We get really amazing people talking about the interests of different kinds of people. This year we focused quite a lot on AI. We had panels on storytellers, on workers, on alternative innovation engines and local ecosystems. It was a really interesting experience.

Anamitra (18:19)

The Responsible Tech Youth Power Fund was the first of its kind. As I said earlier, if we want to change the stories that are told about the internet, we have to talk about kids and teens and the way they’re experiencing it. So the fund invests in youth-led and intergenerationally led organizations, and it’s amazing what youth organizations can do. Think back to when people were in college or grad school and the organizations they founded. Sometimes they don’t know whether those will last more than three or four years, but within those three-or-four-year windows, in North Carolina and Illinois and Maryland and Seattle and Sacramento, they’re creating legislative change through really amazingly run campaigns. They understand new media, certainly better than I do. So they’re able to use communications campaigns and advocacy in a way that’s actually catered to today’s internet and today’s media ecosystem.

Anamitra (19:12)

So that’s a really amazing thing to watch. Then there’s the Model Alliance. You wouldn’t think generative AI has a lot to do with fashion models, but in New York, the Model Alliance, an alliance formed by fashion models, decided that if there were going to be 3D-generated images of their bodies and their likenesses, used to try on clothes, used in ads, used in product placement, and so on, then they should have a say in how those are made. They should have the copyright and the IP rights to license those out, and therefore they should get a share of the spoils. And they won. Kathy Hochul signed that into law last year. This is tremendous, because it shows what’s possible in industries you wouldn’t think are being affected by generative AI. I bet you didn’t wake up this morning thinking fashion models were at the forefront of regulatory change.

Anamitra (20:02)

I certainly didn’t when we first came across them, but it’s amazing. It’s led by this amazing set of women who got this done, and now it’s a lighthouse for other people, a beacon, a bat signal in the air that says change is possible, and that whatever field you work in, it’s possible to create change. And it’s possible to use generative AI without saying no to it, to sustainably embed it by putting workers at the forefront of decision-making, at the table. Then finally, the age-appropriate design codes. We’ve had a bunch of kids’ codes, with the Nebraska governor signing one into law right now. Please don’t look at your phones right now, but when you look later, it’s an amazing thing, because what we’ve been able to say is that there should be privacy rights, data-restriction rights, and data-minimization rights for when children are using the internet.

Anamitra (20:55)

I was saying to Lee yesterday, YouTube’s default setting a few years ago was autoplay, right? If you watched a video, you kept being shown more videos. That’s not the case for kids anymore. Snapchat has some very different rules for kids now. Even Meta has some pretty different rules for kids at this point. And we are really trying to work with them to make the internet safer for children. This includes everything from the worst things on the internet, child sexual abuse materials, of which there’s a lot, and which AI can make even worse by generating synthetic versions. We are obviously trying to create laws that say synthetic, AI-generated child sexual abuse material should be subject to the same laws as the real thing. But even beyond that, there’s grooming behavior, predatory behavior, destructive online engagement, manipulative behavior, cyberbullying, and shaming. You’ve seen the Instagram research on how teens’, and especially girls’, health and body image change over time.

Anamitra (21:53)

These are intentional design choices at the end of the day: what you choose to set as the default in the system, which video you choose to show after which video, what kind of engagement your algorithm optimizes for, how your content engines try to hold your attention online for longer. What we’ve seen over time is that those choices lead to fairly destructive behaviors in society. And we all have a say here, especially design institutes, design graduates, and people like Albert who’ve worked for years to make design one of the nodes of responsible innovation and online safety; they have a really big role to play. So let me just talk about a couple of things we’ve done on the digital safety and trust front over the last few years, and I promise there are just a couple of slides left at this point.
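To make that point concrete: the “optimization” being described here is, in practice, a scoring function someone chose. Below is a minimal sketch, not from the talk and not any real platform’s code, of how a feed-ranking objective encodes a design choice; every name, field, and weight is hypothetical.

```typescript
// Hypothetical feed-ranking objectives. All names, fields, and weights are
// invented for illustration; this is not any real platform's code.
interface VideoCandidate {
  title: string;
  predictedWatchMinutes: number; // model's estimate of time-on-site
  predictedRegret: number;       // 0..1, e.g. survey-based "was this worth it?"
}

// Design choice A: optimize pure engagement (keep attention the longest).
const engagementScore = (v: VideoCandidate): number => v.predictedWatchMinutes;

// Design choice B: same inputs, but regret-penalized. A different default.
const wellbeingScore = (v: VideoCandidate): number =>
  v.predictedWatchMinutes * (1 - v.predictedRegret);

const candidates: VideoCandidate[] = [
  { title: "binge bait", predictedWatchMinutes: 12, predictedRegret: 0.7 },
  { title: "worth it",   predictedWatchMinutes: 8,  predictedRegret: 0.1 },
];

// The same candidates rank differently under each objective: engagement puts
// "binge bait" first (12 > 8); wellbeing puts "worth it" first
// (8 * 0.9 = 7.2 beats 12 * 0.3 = 3.6).
const byEngagement = [...candidates].sort((a, b) => engagementScore(b) - engagementScore(a));
const byWellbeing = [...candidates].sort((a, b) => wellbeingScore(b) - wellbeingScore(a));
console.log(byEngagement[0].title, byWellbeing[0].title); // "binge bait" "worth it"
```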

Anamitra (22:44)

So one is we’ve done a lot of work surveying users in Nigeria, Colombia, and the USA, and we found that the most interesting places are where values compete: where you need user empowerment and transparency for content moderation, but you also want to give users privacy and encryption. What do you do in areas like that? A lot of the feedback from users says, well, put the decision in our hands. The work we did in different countries showed us that what users actually want most is design improvements that give them more empowerment over time. We’ve also done a lot of work on dark patterns. Dark patterns are the patterns you don’t see, the default choices that drive some of the internet. Autoplay was a good example of this. We’ve done a lot of work over the years on how clear, intentional design can actually make dark patterns go away, make them less explicit, make them less harmful over time.
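As a concrete illustration of “changing the defaults,” here is a minimal sketch, assuming a hypothetical video product (none of these names come from the talk or any real platform), of an autoplay setting that is off for minors no matter what and strictly opt-in for adults:

```typescript
// Hypothetical "safer default" for autoplay: off for minors regardless of
// stored preferences, and strictly opt-in for adults. All names invented.
interface UserProfile {
  ageYears: number;
  optedIntoAutoplay: boolean; // must be an explicit, affirmative choice
}

function autoplayEnabled(user: UserProfile): boolean {
  // Minors never get autoplay, even if a stored preference says otherwise.
  if (user.ageYears < 18) return false;
  // Adults get it only if they deliberately turned it on; the default is off.
  return user.optedIntoAutoplay;
}

console.log(autoplayEnabled({ ageYears: 15, optedIntoAutoplay: true }));  // false
console.log(autoplayEnabled({ ageYears: 34, optedIntoAutoplay: false })); // false (default off)
console.log(autoplayEnabled({ ageYears: 34, optedIntoAutoplay: true }));  // true
```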

Anamitra (23:34)

And today we’re trying to do something around what’s called the designing-from-the-margins framework: how do you get the most vulnerable and underrepresented communities to be part of the digital technology ecosystem? I can talk a little more about this in Q&A, but let me keep going, because there’s a lot of opportunity right now. There’s a three-to-five-year window. Generative AI is a new technology, so AI today feels new. It feels, in some ways, ripe both for innovation and for guardrails. So here are some of the issues we’re working on. I’ve talked about some of these before, so I’ll go through them pretty quickly. One is that there’s strong traction on efforts to regulate AI for kids’ and teens’ online safety. That’s really important, and in some ways what we allow for our kids and teens is a Trojan horse for what systemic practices we’ll allow on the broader internet.

Anamitra (24:22)

When companies have to change their default systems, they often change them for everybody. Turning off autoplay then becomes the default that anyone can opt out of. It’s one of those kinds of things, and I’m not picking on YouTube; they’re actually an example of a company that’s taken a lot of steps to make design better over time. We do need broader coalitions working on design, producing responsible innovation, and thinking about balancing online trust and safety with user empowerment and privacy. I think that’s a big part of what the next few years can really be about. And that means we have opportunities to think about new consumer protection guidelines, and opportunities to reduce the fraud, scams, and hoaxes that disproportionately target our senior citizens. And then finally, we’re doing a lot of work on AI and copyright; this is the Copyright Alliance we talked about a little earlier.

Anamitra (25:19)

Some people say that generative AI’s original sin is that it hoovered up the internet. It hoovered up everything, both proprietary and non-proprietary IP, to create the LLMs that actually generate that amazing next-word tokenization, right? So in some ways there’s been a lot of pushback on that. What is fair use of that material? Where does the IP actually sit? Who should decide what is copyrighted and what’s not? How do you protect copyright in the generative AI age? The law, the policymaking, and the guardrails on that are pretty unclear right now. So one of the things the Copyright Alliance, the big-tent coalition of individual creators, artists, and media folks, is trying to do is establish some sort of guardrails in that system. The reason we do all of this, coming back to the start, is that philanthropy can’t do a lot, but it can do some things well.

Anamitra (26:14)

And one of the ways we think about our philanthropy is that it’s risk capital for public-interest outcomes. We’re willing to fail; we’re willing to back people. My CEO says this all the time: the worst thing that can happen is that a really well-intentioned group of people didn’t get done what they wanted. That’s literally the worst that can happen, and that’s not so bad for money we’re trying to give away to people who are trying to create change, and to create it faster. We are uniquely positioned to create that third path. We see our role as taking a bet on a lot of people who are trying to create and accelerate change in the public interest, and in some ways it’s our role to do that, to push for the values we care about. Through that, there are really amazing opportunities these days to address the existing inequities in the status quo and to change the defaults of the system so that it’s a more inclusive, fair, and equitable system over time.

Anamitra (27:06)

So really, that’s part of what we do, and we focus on the digital technology ecosystem. We don’t focus on healthcare, and we don’t focus on education, which are also amazingly important systems. But at least in the digital technology ecosystem, reducing the concentration of power in decision-making and reducing bias and inequity in technological models and applications are really important reasons to work in this arena. And as I said earlier, we can’t do that without funding better alternatives, better opportunities, and better futures, and without making people feel like they have power and agency over what they believe in. So at the end of the day, we know that we only succeed with our partners, with our fellow funders, and with all of you in the communities, if everyone feels invested in a technological future that works for everybody. That’s what I hope we can talk a little more about in Q&A. And thank you so much for listening to me.

Key Points

  • Why shared agency and societal governance are essential countervails to concentrated tech power
  • How expanding who sits at the decision-making table, from kids and teens to workers and faith leaders, creates better technology outcomes
  • The case for “rules of the road” in digital technology, comparing current Wild West conditions to safety standards in automotive and pharmaceutical industries
  • Stories where partners have had success in developing better approaches, from youth-led advocacy to fashion models winning IP rights over AI-generated likenesses
  • The three-to-five-year window of opportunity to shape AI’s development toward inclusive, equitable outcomes
  • How designers can be central to creating technology defaults that work for the most vulnerable users

Additional Resources