Culture Fit, Episode 4

Built-in Bias

Racial bias has not only worked its way into tech workplaces, it’s also in the products we build. How can we mitigate the harm that these biases cause in the products Silicon Valley puts out into the world?

In this episode

Clyde Ford

Author, psychotherapist, software engineer

Kim Crayton

Antiracist economist, founder of the #causeascene movement

Jacqueline Gibson

Software engineer, digital equity advocate

Vincent Southerland

Executive Director of NYU Law’s Center on Race, Inequality, and the Law

Dr. Allissa Richardson

Assistant Professor of Journalism and Communication at USC

Adam Recvlohe

Software engineer, founder of Natives in Tech

David Dylan Thomas

Author, speaker, content strategy advocate at Think Company

Highlights

“We generally tend to look at technology almost as though it's a neutral force and not something that has this incredible historical baggage around human rights and around racism.”

Clyde Ford

“We need to understand that tech is not neutral, because there's far too many people in tech who cause harm because they have a fallacy that tech is not biased.”

Kim Crayton

“Allowing whiteness to be considered as synonymous with humanity is why we continue to see technology built for certain types of people.”

Jacqueline Gibson


Show notes

Dairien: [00:00:00] Recently, Twitter users noticed something odd. When someone posted an image that included both a Black person and a white person, Twitter would crop it so that the preview favored the white person. In other words, if you have dark skin and you post a photo of yourself and a friend, and that friend has lighter skin, Twitter's preview would favor that lighter-skinned friend. It's unclear what exactly is happening with Twitter's algorithm. But this is what a Twitter spokesperson had to say: "Our team did test for [00:00:30] bias before shipping the model and did not find evidence of racial or gender bias in our testing. But it's clear from these examples that we've got more analysis to do." Despite Twitter's best intentions, this is an example of Silicon Valley perpetuating racial bias.

In previous episodes of the series, we talked about the people who are included in the tech industry, and we also identified people who are often excluded. In this episode, we're going to talk about the [00:01:00] products that the tech industry creates. What's happening with the production of these tools? And why do so many of them include features that are harmful to Black, Latinx, and Indigenous groups?

I'm Dairien Boyd. This is Culture Fit: Racial Bias in Tech.

Clyde: [00:01:24] We generally tend to look at technology almost as though it's a neutral force, and you know, [00:01:30] and not something that has this incredible historical baggage around human rights and around racism. 

Dairien: [00:01:36] That's software engineer and author Clyde Ford. As he said, we like to think of technology as neutral. Computers are robots after all. But humans created robots. We crafted every detail to meet our ideals. So then we have to realize that the perspective of technology creators manifests in technology itself. Sure, we may build products with the user heavily in mind, but [00:02:00] those tools don't often serve our user populations equally. As creators of tech products, we get to make the final decision about what's included and what's not.

We also get to make the decision about who has access to the technology and who doesn't. What data does the technology collect and which data does the technology not collect? Who's protected by the technology and who's not protected? Here's Kim Crayton, anti-racist economist. 

Kim: [00:02:25] We need to understand that tech is not neutral because there's far too many people in tech [00:02:30] who cause harm because they have a fallacy that tech is not biased. It gives us an opportunity to distance ourselves from the harm that we cause. Everything that we build has a bias because a human built it and every human has a bias of their lived experience. 

Dairien: [00:02:50] We won't eliminate bias. As we learned in episode one, our brain has to make these shortcuts to get through the day. We rely on biases as part of that. So instead, we need to [00:03:00] better understand these biases and who they may harm. 

Kim: [00:03:03] Everything we do, everything we do in our lives, what shoes we buy, what cars we buy, what food we eat, is based on our lived experience and bias. All of those are preferences. A preference is a bias. And we end up coding our preferences, hence our biases, into the products and services that we create. When we don't check ourselves, check our blind spots, bring in people from other lived [00:03:30] experience, we end up causing harm. 

Dairien: [00:03:32] If I asked you to paint a picture of the world, you could only paint that picture through your own experience. Right? Our planet has 7 billion people. Think of how many experiences we're completely unaware of. Our blind spots are truly infinite. So what Kim said is right: we've got to check ourselves. The more we listen to people with experiences different from our own, the more we can illuminate our own blind spots.

[00:04:00] Let's hear from Jacqueline Gibson. She's a software engineer and digital equity advocate. She warns what might happen if we don't practice learning about other people.

Jacqueline: [00:04:12] Failing to check these practices, that's how you see so many of the biases that permeate society finding themselves integrated into the tech that we rely on every day. Seemingly one-off technical flukes that we notice in the world, they're [00:04:30] actually indicative of deeper systemic failings in the way that we design technology.

Dairien: [00:04:36] Technology is much more than gadgets. It heavily shapes how we live our lives. So the problems are a direct representation of how we're failing each other within society. 

Jacqueline: [00:04:45] People create technologies at the end of the day to serve the needs and the wants of other people. But if we aren't considered to be synonymous with the average idea of human, then we aren't included in the customer persona [00:05:00] the teams are using when they're planning out their products.

Allowing whiteness to be considered as synonymous with humanity is why we continue to see technology built for certain types of people. The dehumanization and thingification of Black individuals together benefit hegemonic forces, and those who benefit from hegemony often opt to reinforce these processes. Because by continuing to separate Black individuals from humanity, that makes it easier for people to not consider Black individuals when [00:05:30] creating technology.

Dairien: [00:05:31] Africans were brought to America as property. They weren't seen as humans. They were seen as beasts, slaves. So when you hear the term slave in the U.S., it carries that weight. Which means it's insensitive to refer to a computer hard drive controller that identifies a master drive and a slave drive. This is a common practice; it's been around for a while.

The Los Angeles County government asked suppliers to stop using master and slave as far back as 2003. [00:06:00] But Microsoft's GitHub only changed its "master" terminology in July 2020. When we replace terms that appear racist, we're not putting an end to racism, but we might end up with language that's a lot more user-friendly, more useful. Instead of referring to master and slave to describe relationships in a hierarchy, we can refer to the master taxonomy as a global taxonomy.
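To make that renaming concrete, here's a small, hypothetical before-and-after sketch in Python. The names and data are invented for illustration and don't come from any codebase mentioned in the episode; the point is simply that the clearer names describe the same hierarchy without the loaded terminology.

# Before: a hierarchy described with "master"/"slave" naming.
master_taxonomy = {"vehicles": ["cars", "bikes"]}
slave_taxonomies = {
    "cars": ["sedans", "trucks"],
    "bikes": ["road", "mountain"],
}

# After: the same hierarchy with clearer, non-racist names.
global_taxonomy = {"vehicles": ["cars", "bikes"]}
local_taxonomies = {
    "cars": ["sedans", "trucks"],
    "bikes": ["road", "mountain"],
}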

Not only is that less racist, it's also more clear. What else can we do to protect people from the harm that technology [00:06:30] produces? 

Jacqueline: [00:06:30] There have been several times, just speaking frankly, in my life where I haven't been able to use motion sensors, and it's because they don't detect my skin, because they're searching for lightness.

Dairien: [00:06:42] Jacqueline ran her own unofficial experiment at her university. It was on a sensor-activated water fountain.

Jacqueline: [00:06:48] My friend is white. So we used her hand and it worked. We tried using white paper, it worked. Tried using, like, a colored water bottle; we had, I had a Tiffany blue water bottle and she had a red one. It worked. But [00:07:00] then when I put my hand in front of it, we got nothing.

We still have sensors that don't work for everyone. Who are they testing this on? Are they not testing for darker skin? Is that why I can't use the water fountain in the building that I go to all the time? And what does that say to me as a Black consumer if I can't consistently use your product?

Dairien: [00:07:23] Detecting different skin tones is also important for facial recognition software. 

Jacqueline: [00:07:27] From facial recognition technology being unable [00:07:30] to determine the difference between two Asian American women, to image classification models telling a Black woman that her mouth is open in a picture because it wasn't designed to account for larger lips, there are plenty of stories to demonstrate the dangers of designing for and with certain groups in mind. And I've had issues before, like when iPhones first rolled out the ability to unlock screens, I couldn't unlock my phone when I was in a dark room, but my whiter-complexioned friends could. And so when you have instances like that, [00:08:00] it's just a constant reminder that these things aren't built for you, they're not built with you in mind. And the problems that you would have aren't the problems that these teams are prioritizing.

Dairien: [00:08:11] The story that Jacqueline tells about being ignored by technology, it's painful to hear. And it's an experience that a lot of dark-skinned people face. It's a constant reminder that their identity doesn't fit with the dominant culture. Here's Clyde Ford's perspective on the situation.

Clyde: [00:08:27] A study just came out at the end of [00:08:30] last year to show that facial recognition technology is, out of the gate, biased against people of color and Native Americans, women, the very young and the very old. And the only group that it really works well with is white men. And white men in this case from 25 to 40. There's a really good reason for that. That's because the technology was probably designed by folks in that same category who didn't understand that the way to develop technology that works for [00:09:00] everybody is to make sure that you build into the way you produce this technology the various tools and checks that can make sure you're not releasing a product that's biased.

Dairien: [00:09:12] In 2019, the National Institute of Standards and Technology ran a study on facial recognition technologies. The result: there were higher rates of false positives for Asian and African American faces compared to white faces. A false positive is a polite way of saying it failed. Facial recognition has crept into [00:09:30] many of our products.

Maybe you use facial recognition to unlock your phone. When it incorrectly identifies a nonwhite face, the result can be embarrassing and even harmful. Back in June, the New York Times ran a story about a case of mistaken identity.

The man involved was Robert Williams. Robert was at work when the police came and told him he had a felony warrant for his arrest. Robert knew he didn't commit a crime, but the cops took him to jail and held him overnight. They forced him to miss work for the first time in four years. [00:10:00] The next day, when detectives interrogated Robert, they showed him a photo of a man who was shoplifting at a store.

That man looked nothing like Robert. He had been detained simply because he was Black, all thanks to facial recognition software that matched a photo of Robert to a photo of a random thief. Robert was eventually released from custody. So how does this happen? Well, Michigan's got a multimillion-dollar contract with DataWorks Plus. [00:10:30] Remember that national study I mentioned earlier? It found that the same algorithms DataWorks Plus uses misidentify Black and Asian faces ten to a hundred times more than white faces. There are companies that are aware of the harm that facial recognition can cause in policing. Here's Clyde Ford again.

Clyde: [00:10:46] IBM, Microsoft, Amazon. Some of the companies that have been really involved in this facial technology and facial recognition technology came out with statements in which they said, we're not going to continue [00:11:00] on with this technology because we realize that it's being used in these various ways to discriminate against people based on skin color.

And it's not even accurate in this area as well too. Now the big boys came out and said they weren't going to get involved or further their involvement with this. But a lot of the small companies, which are still very deeply involved in facial recognition technology, simply stepped into the void left by the Amazons and Microsofts and [00:11:30] IBMs. And so this technology and its discriminatory use and discrimination within how it's designed is still present right now.

Dairien: [00:11:40] The companies that are responsible for Robert's false arrest refused to abandon facial recognition technology. And they're companies you've probably never heard of, like NEC, Rank One, Clearview AI, Vigilant Solutions, and Cognitech. They supply many of the tools implemented by police forces.

[00:12:00] Facial recognition is far from the only way that police tooling can unfairly target Black people. There's also predictive policing. Not far from Silicon Valley, in Fresno, California, software called Beware was deploying predictive policing algorithms to allegedly help officers identify threat ratings while on duty. Here's Jacqueline Gibson.

Jacqueline: [00:12:20] The factors that actually went into determining the rate of risk were never fully explained to city officials or the law enforcement users who were [00:12:30] supposed to be using the software every day. And instead, the vendor withheld that info and said that these were trade secrets and proprietary data.

And so when this was revealed and people came to the realization that they had no comprehensive understanding of how the software determined its ratings for the system, the city council ultimately voted not to approve the continued use of the software. I talk about that case because it demonstrates how easily people are willing to rely [00:13:00] on quote, unquote, objective software without knowing what it can or cannot do.

So people often think, Oh, well, if we just rely on an algorithm to do this, this will be better than letting a human do it, because it's a computer. They're objective. But the problem that I think a lot of people fail to consider is that yes, these algorithms and tech are computers, et cetera. They're not people, they don't have our emotions. But the problem is even if they aren't [00:13:30] people, they're created by people and human beings are flawed. We have biases. It's a scientific fact. 

Dairien: [00:13:36] Human biases get baked into these products because the algorithms are based on historical data. To explain this further, here's Vincent Southerland, executive director of NYU Law's Center on Race, Inequality, and the Law.

Vincent: [00:13:48] All these tools really analyze large data sets to find, um, patterns. And, and from those patterns, the tools help to make forecasts or predictions about what those [00:14:00] patterns reveal.

You know, we can imagine in a world that's steeped in racial bias, it's of course going to produce all sorts of racially biased patterns, and therefore the tools that are used to reflect back on what that world looks like are going to actually perpetuate racial inequality.

Dairien: [00:14:16] Basically racist policing feeds off of itself. There are multiple types of predictive policing tools that departments deploy today.

Vincent: [00:14:23] There's place-based tools, which really try and forecast or predict where crime or criminal [00:14:30] activity might take place. And then there are these kind of person based systems that really try and predict or forecast who may be the victim or perpetrator of a crime. And in both instances, the data that these tools rely on and use are the types of data points that we all know have been tainted by the racial inequality that really is, is kind of part and parcel of our society. So you think about historical crime data because that historical crime data is often based on where police [00:15:00] are deployed, where police are making arrests.

And if we know anything about the history of policing in this country, you know that communities of color, particularly Black and brown communities, are over-policed, sending a signal essentially that you need to send more police back into these communities, which then creates this kind of feedback loop.

And that's what kind of, where we see these practices being the, kind of, the most harmful despite efforts I think by a lot of developers to, to claim race neutrality and claim that they've been able to scrub the data so to speak, of racial inequality. The [00:15:30] reality is, there is no real way to do those things.

In a society where race and inequality have been woven into the fabric of, of everything we do, it's really impossible to say we've accounted for every way in which race matters. So that's kind of how these problems emerge.

Dairien: [00:15:47] Discriminatory practices consistently hit Black communities the hardest. In a recent interview on the New Thinking podcast, you said, "My fear is that the more we turn to technology and assume that it's neutral, we [00:16:00] presume or assume that it doesn't see or care about race. That it's just a computer trying to figure things out." Can these systems truly be neutral? What is the reality here?

Vincent: [00:16:10] I don't believe that any of these systems are neutral or objective. The reality is that every component of the systems, from the data that they're analyzing to the actual tools themselves, are all going to be constructed by humans. And humans inherently have their own biases and bring those biases [00:16:30] and their own perspectives to the table when they're designing and thinking about the use and deployment of these types of algorithmic tools.

So, whether we're talking about the data, which is collected by human beings and often produced by human beings, or always produced by human beings, or we're thinking about the tools themselves, which are also going to be produced and designed and deployed by human beings, that bias is going to taint the ways in which these things operate. 

Dairien: [00:16:57] How would you like technologists and the [00:17:00] developers of these policing algorithms to better evaluate the consequences of what they create?

Vincent: [00:17:05] Part of that evaluation would have to take into consideration the context within which they're operating. Are they actually helping to reduce a criminal legal system that we know is tainted by racial inequality? Are they shifting resources and power to communities who have been for centuries marginalized and divested from? Are they producing better [00:17:30] outcomes in terms of reducing racial disparities in terms of who's being stopped, accused and arrested? Are the community members from the neighborhoods where these tools are going to be deployed in the rooms when the decisions are made about whether or not we even need to create a predictive policing system? Where are those voices?

Dairien: [00:17:52] Vincent is concerned about predictive policing tools, and he's right to be. Historically biased data is reinforcing racist practices. [00:18:00] We hear slogans like "defund the police" and "back the blue." They've only intensified since Minneapolis police murdered George Floyd back in May. It's only right that we scrutinize police practices to ensure that they're equally protecting everyone.

We see activists protesting the historical brutalization of Black people by police, and they've done so by taking advantage of digital tools to document the Black Lives Matter movement. Essential tech like smartphones and social media [00:18:30] helps spread information that empowers anti-racist activism. We can acknowledge the good that these tech products provide, but at what cost? This is something Dr. Allissa Richardson documents in her work. She's an assistant professor of journalism and communication at USC. She's written about how Black Lives Matter activists were able to record video for the entire world to see, but these images can have traumatizing effects. Here's Dr. Richardson.

Dr. Richardson: [00:18:59] When we think [00:19:00] about the usefulness of these videos, because while Black people have been using them to shine a light on what has been happening for a very long time, it's also retraumatizing.

Death is the most sacred journey that someone is going to make in life, and it should be private. It should be reserved for, if we're fortunate, loved ones only to look upon. If we think about the number of white people who've lost their lives in mass shootings, if you think about Las Vegas, for example, that heinous attack, [00:19:30] those images have been scrubbed from the internet.

We think about white death in that way and how it's invisible. You have to think very hard about the last time that you've seen a white person die on televised news. And then we think about Black death and we think about these things being on loop. It's very easy for me to go into one of the more respected stock photo catalogs and look up Trayvon Martin.

And I would be able to find his post-mortem pictures very easily. [00:20:00] We are used to seeing Black bodies being vandalized and violated in this way. So I'm inspired by the power of the smartphone to create this moral suasion that's necessary to get some action, but I'm also discouraged and saddened that Black people need this proof in the first place despite what's gone on for so long.

Dairien: [00:20:22] Our society has an uncomfortable obsession with images of Black bodies. The moment a police shooting occurs, the victim's [00:20:30] lifeless body is on blast for everyone to see. It's broadcast straight to our phones, sandwiched between a friend's anniversary photos and last night's sports scores. It's all over our Twitter feed. It's like entertainment.

Dr. Richardson: [00:20:46] I heard about Mike Brown being killed on Twitter. I didn't hear about it from a major cable network. I heard about it from a man named Emanuel Freeman. For many people that I interviewed in the book, the Black Lives Matter movement did not [00:21:00] begin until people in Ferguson began tweeting about Emanuel Freeman's reports. And so when I think about the power of social media and how Black America began to mobilize around his round-the-clock coverage that day, he did not leave his window. And then he posted, like, his penultimate tweet was something like, you know, I'm done reporting about this, I'm done talking about it. Um, I'm out, basically.

Um, he had done a service that day because we don't have any camera footage to show what exactly [00:21:30] happened. But we do know the indignity he faced in his last moments.

Dairien: [00:21:35] These powerful images that we see shared on Twitter, they build momentum for a movement, like the protests over Mike Brown's tragic murder. Silicon Valley has the power to bring global attention to any topic.

Dr. Richardson: [00:21:46] But then we think about cases like Korryn Gaines, which still makes me very angry to this day. She was faced with police at her door who were attempting to serve her a warrant for a traffic offense. [00:22:00] And she did not open the door, which was her right.

And the officers kept trying to get her to come out. They kicked open the door and she used her cell phone to Facebook livestream her standoff with them. And you have to remember, this is happening a month after she has already seen Diamond Reynolds record Philando Castile's killing via Facebook Live.

So it makes sense to reason that she would say, okay, let me try to use Facebook Live because something may happen [00:22:30] to me. And you can hear in the video, she's telling her son, you know, they already kicked in the door, I might not make it. You might want to go out and join them, go ahead and go, go with the cops, because I don't want anything to happen to you.

And you can hear him saying, no, I'm going to stay with you, mommy. And he's five. I'm going to stay with you. And the cops are in the hallway gesturing. They are fully armed and they're just like, you know, come out, trying to tell him to come out and he won't. And that's the last that we see of Korryn Gaines, because Facebook decided that they would [00:23:00] cut the feed after the Baltimore police department, the Baltimore County police department, asked them to. And they said later on that they were just trying to comply with an ongoing investigation from law enforcement. It's just, it's just an abomination. Because we now know that once that, that feed was cut, we hear that the cops were saying, I'm tired of this expletive, this has lasted long enough. And they go in and they shoot her dead and they shoot him. They [00:23:30] shoot her son. We gave a social media company the power to tell a story, and they showed very quickly how they can take that power away. And so it was a cautionary tale in placing too much trust in the social media complex to keep those lines open.

Dairien: [00:23:50] Social media can also be hijacked by people who want to suppress activist movements and want to change the narrative. 

Dr. Richardson: [00:23:56] Because now there's whole police divisions that are devoted to [00:24:00] prowling social media profiles, gathering those data and curating them in a way that will harass the protesters. And I have accounts in the book of activists who told me that when they went out to protest, police approached them and called them by their Twitter handle and said, Hey, we're not going to have any trouble out of you today, are we? And they were like, Whoa, how do you even know my Twitter account? Social media has been used by law enforcement to surveil peaceful protesters. It becomes [00:24:30] such a fraught issue in terms of protesters not being discouraged from protesting, but being discouraged from even using these outlets that liberated them in the first place.

Dairien: [00:24:45] Look, I don't have to tell you there's all kinds of seedy, repugnant behavior on social media. I think we've seen enough comment sections.

Kim: [00:24:51] The most marginalized have been telling these social media platforms, Black women particularly have been saying, we have been targets.

Dairien: [00:24:58] This is Kim Crayton [00:25:00] again; she's the anti-racist economist that we heard from earlier. She founded the #causeascene movement. #causeascene is a strategic disruption of the status quo that routinely allows the harassment of Black women.

Kim: [00:25:12] We are targets of harm in tech. Black women, in spite of all the barriers that are created by Twitter, and I'm going to give you an example. The fact that they have lists that I do not know I'm being added to, I cannot opt out of, I cannot opt into. I finally went and looked at [00:25:30] lists because someone who has a big platform had a thread about check your lists, because he was talking about bots. And I was on over a thousand lists that I did not opt into. And almost 45% of those lists were designed to follow me, target me, harass me. Just recently I had a death threat. I believe it was a bot because the accounts came up, I happened to be scrolling through one of my tweets, looking at the responses to see what I was going to comment. And this thing [00:26:00] had, it was over four different tweets. And I only found the four different tweets when I went to report this person saying that, if they saw me in person, they would set me on fire. So I get my community, 'cause I have over 13,000 followers now. I block, report, screenshot and send my followers, Hey, please report and block. As I'm looking through their thing, I see, Oh my, they are targeting Black women.

That's all I saw on this account was targeting Black women. So I reached [00:26:30] out to a friend of mine who I saw they had been targeted and I thought it was something new. She's like, no, I reported this account two or three days ago. If an account has been reported as causing harm and threatening people two or three days ago, why did it get to me?

And the only reason it got taken down within two hours, I believe, of my community is because I had hundreds of people reporting this. And so I guess it triggered the algorithm to say, Hey, look at this account. That shouldn't happen. As soon as someone [00:27:00] reports harm, a threat, that should instantly go up and somebody should take a look at that. It should not take three days for a Black woman to be harassed and her life threatened on a platform.

Dairien: [00:27:11] Social media platforms aren't doing a good enough job protecting Black women. Silicon Valley fails to understand the severity of racial harm that many people often face. So we see that social media and smartphones are a double-edged sword.

Communities that experience racial harm can share crucial information for a movement, but at [00:27:30] the same time, their narratives can be changed or they can be individually attacked. It's like activism happens in spite of these products rather than because of them. So how can tech reframe the way products are put into the world?

How do we consciously build products that include, protect, and uplift all communities? According to Kim Crayton, it'll take qualitative research. That means intimately learning about the groups that these products impact. 

Kim: [00:27:57] Quantitative data means absolutely nothing in [00:28:00] the knowledge economy without the lived experiences of people, which is qualitative data. Let's say when the automated hand-washing came out and they didn't read Black skin. You had data, you just were missing data that was more inclusive of the communities in which you serve.

Dairien: [00:28:17] We need holistic data. We have to listen to people who are not represented in the product development and understand how the products might affect them.

Kim: [00:28:25] I believe in move fast, break things. What I don't believe in is move fast, break things, move fast, [00:28:30] break things, move fast, break things, without stopping in between each iteration to find out what we broke, how we broke it, who did we harm, uh, how do we make amends, and how do we move forward?

Dairien: [00:28:39] People with different lived experiences help fill our blind spots. When we're able to learn from people who aren't like us, we can see things that we might've missed.

Dr. Allissa Richardson shares how communities have created technology to prevent further harm.

Dr. Richardson: [00:28:57] For these George Floyd uprisings, for example, [00:29:00] we're seeing new advancements where these apps popped up where you could blur faces and blur shoes out to protect the protesters, because they're like, we're not going to stop telling our stories, but we're definitely going to conceal our identities.

You're seeing laboratories like the one at Stanford University use machine learning to develop these little brown fists. They go over top of people's faces, like the fist emoji now appears on these crowd shots, so you can't use facial recognition to pinpoint every single person [00:29:30] who was out there.

Dairien: [00:29:31] If social platforms don't meet our needs, then we'll have to build our own. That's where Adam Recvlohe comes in. Adam is Muskogee Creek Canadian American, and he's the founder of Natives in Tech.

Adam: [00:29:41] We're trying to uplift and highlight Native people that are working in tech to show off some of the things that they're doing. 

Dairien: [00:29:51] To achieve their mission, Natives in Tech created a forum where Native and Indigenous technologists can foster community together. It's a support network. It creates [00:30:00] solidarity among native tech workers. 

Adam: [00:30:02] Our goal is to build technology that supports Native peoples, and that can be a lot of different kinds of things. Currently, I'm working on a dictionary application for the Muskogee language, just to make it easier, you know, to search. But it could be like a more social platform. And maybe on that social platform, [00:30:30] people can choose in their bio to put like the tribe that they're affiliated with, to put their band or to put their tribal town, you know, in case they're comfortable sharing that information. But more importantly, they at least have the opportunity to, because that's how they identify.

Dairien: [00:30:49] Adam also uses existing social networks to reach people where they're at.

Adam: [00:30:53] Another way to be, you know, a good ally, if you're invested and want to help maybe people in the [00:31:00] industry: follow them on Twitter and support their work, help the algorithms, you know, go higher for Native people. It, it doesn't hurt to, I guess, use those platforms for good in this sense, to help the algorithm, to elevate their status.

And hopefully they can start to impact an even broader group of people than, than who currently follow them now.

Dairien: [00:31:24] Adam is able to combine the tools that are already available with his own ability to build new and more inclusive tools. [00:31:30] His perspective is shaped by people in his community as creators. If we want to build products for Native communities, what should we consider?

What if your audience isn't as clearly defined, but instead it's just a broad population of people? It's kind of hard to imagine the negative impact that our products might have because, well, we have blind spots. And the first step is to be aware of this bias. Here's David Dylan Thomas. He's the author of Design for Cognitive Bias.

David: [00:31:57] There's a bias called déformation professionnelle. And the [00:32:00] basic idea is that you see the whole world through the lens of your job. That might seem like a good thing, but there's a very specific story about this, where police chief Ramsey, a former commissioner of the city of Philadelphia police force, when he first took the job, started asking his police officers, Hey, what do you think your job is? And a lot of them said, to enforce the law. And he would come back with, well, yeah, but what if I told you your job was to protect civil rights? That's a bigger job. That's a harder job. But it gives them a mandate and [00:32:30] frankly, permission to treat people with dignity.

So the way you see your job is critical; it's life or death. And I think for designers, like, if we just look at our job as "design cool stuff," that's going to be very limiting and leave a lot of room for error and a lot of room for people to get hurt.

And I think our challenge now is to come up with a way to define our jobs that lets us be more human to each other. And as designers, I find that the framing effect most often comes up when we're thinking about how we can frame the conversations that we're [00:33:00] having to hopefully lead to a better outcome. The framing effect, in my opinion, is the most dangerous bias in the world. If I say something like, should we go to war in April? Or should we go to war in May? Right. I framed this very dangerous decision in a way that completely leaves out the part where we're supposed to talk about whether we should be going to war in the first place. I think that this is like one of those weird biases where it can actually be used for good. So, for example, if you frame an interaction, even just with a few lines of text, you can utterly [00:33:30] change the interaction the user has. My favorite example is Rethink, where Trisha Prabhu, who was only 14 when she came up with this, created this thing where, as you're about to type something to social media that the software thinks might be harmful, it just pops up a little thing that says, This looks like it might be harmful to someone. Are you sure you want to post it?

And something like 90% of the people who saw that intervention decided not to post. And that little bit of framing gave them the moment to reflect and say, wait a minute, there's actually a human being on the other side of the statement. I don't actually want to send it. So I think [00:34:00] framing is hugely powerful in design.
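As a rough illustration of the kind of intervention David describes, here is a minimal Python sketch of a "pause before you post" prompt. It is not Rethink's actual implementation, and the flagged-phrase check is a hypothetical stand-in for whatever classifier a real product would use; the point is only how a couple of lines of framing can change the interaction.

# A minimal sketch of a Rethink-style reflection prompt (illustration only).
# flag_harmful() is a hypothetical stand-in; a real product would use a
# trained classifier rather than a fixed phrase list.

FLAGGED_PHRASES = {"loser", "shut up", "nobody likes you"}

def flag_harmful(message: str) -> bool:
    """Return True if the draft message looks like it could hurt someone."""
    lowered = message.lower()
    return any(phrase in lowered for phrase in FLAGGED_PHRASES)

def post_with_reflection(message: str) -> bool:
    """Give the author a moment to reconsider before a flagged post goes out."""
    if flag_harmful(message):
        answer = input(
            "This looks like it might be harmful to someone. "
            "Are you sure you want to post it? (y/n) "
        )
        if answer.strip().lower() != "y":
            print("Post discarded.")
            return False
    print("Posted:", message)
    return True

if __name__ == "__main__":
    post_with_reflection("You're such a loser, nobody likes you.")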

Dairien: [00:34:02] Even when we understand these biases and have the best of intentions, they still lead us to make constant errors in judgment. So we've really got to think critically about what we're communicating and the products that we release to the world. Let's check back in with Jacqueline Gibson, software engineer.

Jacqueline: [00:34:17] It really requires you to have a shift in perspective, really, to consider when you're designing these things: how could this affect someone from a different background, a different race, a different gender, [00:34:30] et cetera, than me? For example, I talk about this sometimes: if you're creating a conversational bot, do you think about what type of language it's using? Is it inclusive? Is it not accepting things that are not standard English, because it's like, it's not standard American English, so this is gibberish? So you have to think about that when you're designing some of these different systems.

Dairien: [00:34:50] It's vital to include other people's perspectives if we're going to reach equal representation. This brings me back to my conversation with Kim Crayton. 

And so Kim, you spend, [00:35:00] it seems like you spend a considerable amount of time teaching tech leaders, companies, nonprofits, how to navigate the social climate, but also how to build products that aren't causing harm, um, and to be mindful of the lived experiences of others. Can you talk about how you advise tech leaders? 

Kim: [00:35:19] Well, first of all, there's no way we can eliminate harm. What I focus on is minimizing harm. So what we want to do is create products and services and learn from the data, quantitative and qualitative [00:35:30] data, that we receive, how best to minimize harm. There's always going to be some outlier, something that we don't know about that has a potential for harm. Everything I do, everything I say, I understand in the back of my mind that this is a potential for harm. How am I going to mitigate that? 

Dairien: [00:35:47] Harm mitigation isn't the only thing to think about when you're building an inclusive product.

Kim: [00:35:51] You need to hire people for their lived experiences because their lived experiences will tell you when you have, we are the canaries in the minefield. [00:36:00] And if you don't listen to the canaries, at some point you run out of them.

Dairien: [00:36:04] Much of this ties back to hiring. Imagine if policing products had been decided on by people who actually come from the over-policed communities. Would the products have been deployed at all? Here's Vincent Southerland again, from the Center on Race, Inequality, and the Law at NYU.

Vincent: [00:36:20] Well, thinking about Silicon Valley and thinking about developing technology that is going to address some of the biases that are so deeply seated in our, in our [00:36:30] country, I can't think of a better voice than the people who have been the most harmed by those biases, those inequities, to help flag and help understand the nuance with which someone's life kind of unfolds and how technology might serve to intervene in ways that could be positive and productive for folks.

Dairien: [00:36:49] We have plenty of opportunities to incorporate diverse perspectives into the products that we build. Every time that the tech industry puts out a new product or a digital tool into the world, we have the chance to do [00:37:00] it right, to create something that accounts for the needs and experiences of all people that will be impacted. Let's ensure we make good use of those opportunities and we do better to mitigate the built-in biases of technology.

Thank you so much for listening to this episode, and thank you to all of our incredible guests who took the time to record interviews with us: Clyde Ford, Kim Crayton, Jacqueline Gibson, Vincent Southerland, [00:37:30] Dr. Allissa Richardson, Adam Recvlohe, and David Dylan Thomas. To learn more about the work of our guests, go and check out our podcast website at all-turtles.com/podcast. We really hope that you share Culture Fit with others. Think of a friend or family member who needs to hear this message and share it with them. Also, if you could please leave us a review on your favorite podcast platform, that also helps the message reach other people.

Thank you to the team behind the [00:38:00] episodes, including Marie McCoy-Thompson for producing, editing, and co-writing the show. And thanks to Jim Metzendorf for mixing and Dorian Love for composing all of our great music. I'm Dairien Boyd, and I'll see you in the next episode, episode five, where we're going to examine internally at All Turtles and take a hard look at where we can do better in terms of anti-racism. [00:38:30]