Work Minus Unconscious Bias with Sage Franch

24 Aug 2018   |   Culture Leadership


Welcome back to Work Minus, where we talk about what we need to drop from how we work and quick pivots you can make today to get closer to a better future of work. Today, our guest is Sage Franch. She’s the co-founder at Crescendo and this episode is Work Minus Unconscious Bias. Hi, Sage. How are you today?

Hi! I’m doing well. How are you?

Doing excellent. I’m excited to talk to you and get to know more about you. So, why don’t you start off with a quick background of who you are and what you’re doing right now?

All right. So, I am Sage. I've had a long career in the tech industry. I started out with Windows app development and then moved into teaching developer education courses online. Eventually, that led me into technical evangelism, and I started to focus on emerging technologies and how they interact with us in our workplaces and our lives. I spent a few years at Microsoft working on mixed reality and artificial intelligence, then moved into blockchain and had a brief stint as a blockchain teacher. But recently, I co-founded Crescendo, which is a company that delivers personalized unconscious bias training right inside of Slack. So now, I'm using artificial intelligence to identify unconscious bias in the way that people communicate with each other, and then deliver personalized training directly to them, targeted to their individual needs, to help reduce their own biases.

Wow! And just as a quick note, how has that transition been, from being a full-time Microsoft employee to being out on your own, doing your own thing?

Oh, wow. It has been absolutely incredible. I feel like every single dynamic in my life is now completely the opposite. No more nine-to-fives, no more having the backing of a huge, well-known brand behind you. Going into the startup world, it's just me and my two co-founders; we've been working together for about nine months now, and everything is on your shoulders as a founder. You wear every single hat in the company. I'm a technical co-founder, but I've done everything: a bit of the sales, the marketing, the trade shows, all of it. So, it's been really dynamic, and you know what? I don't think I could go back.

Excellent. Well, good. I want to define a few things you’ve talked about so far, starting with unconscious bias. So, some people may be familiar with that but others may not. Give us a quick definition of that.

Yeah. So, unconscious bias is the bias that we don't know we have. We all see the world in different ways, and our perspectives on the world are really informed by the way we grow up. More often than not, those experiences are completely different from one another's. Even though we may look the same, the way each of us views the world is informed by a different lived experience. So, unconscious bias, in our case at Crescendo, is the things we say or the ways we act toward other people, who are not like ourselves, that we don't know are actually harmful or offensive behaviors. An example that I recently identified in myself was using the words "crazy" or "insane." We say it all the time: "Oh, hey man, that's crazy." But that's actually a trigger word and quite harmful to people who live with mental illness. That was a surprise for me, and after digging into it a bit more, I realized that words like "crazy" and "insane" are just lazy placeholders for what we really mean, like "amazing" or "unbelievable" or "exciting." So since I realized that, my team and I have really been trying to remove words like "crazy" from our vocabulary and replace them with what we actually mean.

That's great. It's a very good example of what we're trying to get into. So, Crescendo is an example of inclusion training, which is about becoming aware of and getting rid of these unconscious biases. Tell us, pre-Crescendo, how did most companies deal with inclusion training?

So, inclusion spans a lot of different media. It's not just the way we talk to one another; the goal is really to foster work environments where everyone can be successful and happy and feel like they belong. Traditional inclusion training includes videos about how to make the workplace accessible, or seminars on why we need gender-neutral washrooms, and things like that. The particular part that Crescendo is replacing, or working alongside, is helping people empathize with one another and learn how and why to really make people feel like they belong: in conversations, at the table, and in the workplace in general.

So in the past, this particular type of training has been done in one of two ways. Either you bring in a consultant for a day or half-day workshop, but they can only reach a maximum of forty people because they only have so much bandwidth, and that's not scalable across bigger companies. Or it's done online, with pre-scripted videos or skits that show a workplace going through scenarios where bias is displayed, and then you follow a series of quizzes and questions where you're tested to see if you understood what was wrong in that situation. But that's one size fits all, and it's really hard for the average employee to take something so generic and apply it to what they've experienced in their everyday life.

All right. So, bump us up to the present. What is Crescendo and how does it kind of go over those old patterns?

Yeah. So, Crescendo is using AI, and it lives within your company's Slack or Microsoft Teams, wherever your real-time workplace messaging happens. It listens for the things people say to one another and screens those for bias. It doesn't just take the message itself into consideration; it also looks at who you are, who you are talking to, and how you've interacted with them in the past, so it can get a real sense of the context of the situation and decide whether it's a biased thought or not. If it is a biased thought, that builds into your learning profile, and we send you content targeted specifically to that bias, to help you empathize with the person who received it and understand why it may not have been an appropriate thing to say. So, it's highly targeted, very personalized, and it's delivered right within the workflow. It's continuous throughout the year, instead of being a once-a-year online seminar or in-person workshop.
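The contextual screening Sage describes (a message plus its sender, recipient, and interaction history feeding a bias check that builds a learning profile) could be sketched roughly like this. Every name here is hypothetical, and the pluggable scoring function stands in for what is really a full AI classifier:

```python
# Hypothetical sketch of the screening flow described above.
# `score_bias` stands in for the real classifier; messages it flags
# accumulate into the sender's simple learning profile.
def screen_message(message, sender, recipient, history, score_bias):
    context = {
        "text": message,
        "sender": sender["name"],
        "recipient": recipient["name"],
        "history": history,  # past interactions between this pair
    }
    if score_bias(context):  # True when the message looks biased
        sender.setdefault("learning_profile", []).append(message)
        return True
    return False
```

In this sketch, the flagged messages in the profile would then drive which training content gets delivered back to that sender.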

Yeah. In a lot of ways, as I was looking at what Crescendo is, obviously it applies to inclusion training, but it would be relevant to all types of training, too. So, it's really a very interesting technology.

Yeah. As we see it, more workplaces are moving to remote work. As people dial in online, our communication with each other is becoming much more asynchronous and much more remote. So, it's interesting to see that in the future we're all going to be communicating digitally, and AI is going to have a lot more potential impact just because of how much of our communication is accessible to the technology.

Right. You are looking at the way people talk, the way they interact, the way they communicate, and trying to assess how inclusive someone is in certain situations. What are some of the limits you face when you're only able to analyze emails and chats?

So, we are only starting with text. That means we're not able to detect bias in other employee interactions, the ones that happen over the phone or face to face. But that's intentional. There are two main reasons why we decided to start with text only: the first is the availability of the technology, and the second is privacy. Text platforms give us a pretty contained environment to work with. Everything said within a messaging platform is understood to be visible to other people, so people are fairly confident about the privacy expectations around what they say there. But also, every employee in the companies we're working with uses text to communicate, whereas not every employee is in an office, face to face with other people, and not everyone uses the phone or Skype to communicate. So, starting with text platforms lets us offer the same quality of experience to everybody in the company, and that, in itself, is an effort to be more inclusive, because it's important to us that we give everybody the same opportunity to have these learning experiences. But more on the privacy side: there are companies out there trying to do similar things in a much more invasive way, and we always want to make sure that our customers' and users' privacy is the number one priority. There's another company that is actually putting GPS and voice-listening devices into employee badges, and then they track your movement around the office, who you talk to, the demographics of the people you talk to, what you say to them, and whether you treat certain people differently from others. Yes, that gives you a lot more data to work with, but it's also horribly invasive. That just sounds like surveillance out of 1984, like crazy to me. Oh, look at me. I said "like crazy."

There you go.

See. It’s an ongoing process.

Yeah. Yeah. Absolutely. No, it's actually encouraging to talk to people who know the limits of where technology is now, but have chosen not to go that far, to pull back a little bit. So, that's very interesting to talk about.

Yeah. And I think it's really important that developers and people who are building technology, especially when it has an AI component, are more conscious of how it impacts the end user. We've seen it with companies like Facebook, where they put out a bunch of features because it fits the roadmap they've planned, and then somebody else comes in and uses them maliciously, or, in other cases, somebody within the company makes a decision that impacts millions of users. We're in an age now where developers really impact people at a global scale, and a couple of lines of code can make all the difference in whether a person's personal information is secure or not. So, especially when we talk about analyzing bias and analyzing the way people communicate with each other, that has the potential to be very invasive, and we want to do everything we can to make sure it's not.

All right. So, I have a very super important question. Have you been able to track emojis, as well, along with text? Because if somebody puts in the wink face or has a certain sticker that they add to a message, it totally changes the context which adds into this nonverbal communication leading into verbal. So, is there any way that you can track those things?

Yeah. Absolutely. We've got emojis in there. What we don't have captured yet is GIFs and images, because that's a bit harder; you'd have to get into facial recognition within the images and GIFs. But with emojis, we can track that, because emojis are codified as just a bunch of characters, and that's represented in the text that we get.
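Because emoji typically arrive either as text shortcodes (like ":wink:") or as plain Unicode characters, they can be pulled out of a chat message without any image processing. A minimal sketch, using a deliberately rough Unicode heuristic (this is an illustration, not Crescendo's actual implementation):

```python
import re

# Match Slack-style shortcodes such as ":wink:" or ":thumbsup:".
SHORTCODE = re.compile(r":([a-z0-9_+-]+):")

def extract_emoji(message: str) -> list[str]:
    """Return emoji signals found in a plain-text chat message."""
    found = SHORTCODE.findall(message)  # ":wink:" -> "wink"
    for ch in message:
        # Rough heuristic: most pictographic emoji live at code
        # points above U+1F000 (e.g. U+1F44D is the thumbs-up).
        if ord(ch) >= 0x1F000:
            found.append(ch)
    return found
```

For example, `extract_emoji("sounds good :wink: 👍")` yields both the "wink" shortcode and the thumbs-up character, either of which can shift the tone of the surrounding text.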

Excellent. Great. Let’s take a step back. We’ve been talking about inclusion training which somebody might link to diversity, as well. But, what are some examples of a company culture that is diverse but is not inclusive?

Yeah. There is a big difference between diversity and inclusion, and not everybody realizes that. My favorite quote to illustrate this is from Verna Myers: "Diversity is being invited to the party; inclusion is being asked to dance." Diversity happens when there are a lot of people from different backgrounds and different experiences sharing those experiences with one another. But until you have inclusion, those people don't necessarily feel like they belong, and they don't feel like part of the community. What's happening in tech is a great example of this. You can hire as many "diverse" people as you want, but that's not fixing the problem, because we still see marginalized people leaving their tech jobs because of unfairness. There was a great study by the Kapor Center showing that 36% of people who leave their tech jobs are leaving because of bias-based unfairness. So, it's a systemic problem. What it shows is that companies are hiring for diversity, so they're fixing the pipeline, but they're not making the changes to their culture that actually allow that diversity to thrive, so people are still leaving. And that's the difference between diversity and inclusion.

Now, when we talk about AI and unconscious bias, the question always comes up about, all right, we have AI here in 2018. We can apply it to these things. Is that going to magnify our current biases or does it, in some way, lock in our current value set? How do you respond to those types of questions?

You have to be really careful with AI. In circles of people who don't know how AI works, it seems like a great solution for anything: just have the machine make the decision instead of us. But these machines are trained on real data, and that data itself can be biased. If we build AI with biased data, then it just makes biased decisions, and that only perpetuates the biases we currently have. So, it's really important that we audit our machines, that we audit our data before we build these models, and that we continue to audit them as cultural norms evolve, as the world evolves, so that we're being really conscious to build unbiased AI.
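One concrete, deliberately tiny form of the data audit Sage mentions is comparing label rates across demographic groups before any model is trained. All field names here are hypothetical:

```python
from collections import defaultdict

def positive_rate_by_group(rows, group_key="group", label_key="label"):
    """Rate of positive labels per group, as a pre-training audit."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for row in rows:
        totals[row[group_key]] += 1
        positives[row[group_key]] += int(row[label_key])
    return {g: positives[g] / totals[g] for g in totals}

# Toy dataset: group A is labeled positive 100% of the time,
# group B only 50%. A gap like this is worth investigating before
# training, because a model will happily learn it as a rule.
sample = [
    {"group": "A", "label": 1}, {"group": "A", "label": 1},
    {"group": "B", "label": 1}, {"group": "B", "label": 0},
]
```

A real audit would go much further (confounders, intersectional groups, label provenance), but even this simple check surfaces the kind of skew that produced the face-API accuracy gaps discussed below.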

Can you give an example of what that might look like in the future? If certain values in certain places change, how might something like your training also change?

Yeah. So, our system uses active learning. Retraining an AI from scratch is super expensive, but with this active learning method, the tool is always sending new messages to people and asking, "Hey, do you consider this to be biased? How do you perceive this?" Their answers then get folded back into our dataset. So, we're adapting to the cultural norms of the people who are using our systems, and we're doing our best to build something that will evolve with the times. An example of biased AI is what's happening with face APIs nowadays. There was a study done on the APIs used to build face detection systems, and it showed that most of the top APIs had a really high accuracy rate when detecting Caucasian faces, but the darker the skin tone of the person in the image, the less accurately they could identify them as people, identify their features or their age, or make the predictions they could for Caucasian faces. That's a great example of a biased dataset skewing an algorithm and skewing a model.
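The active-learning loop Sage describes (route the messages the model is least sure about to humans, then fold their answers back into the dataset) might look something like this in skeleton form. The confidence function is a placeholder for a real model's score, and every name is hypothetical:

```python
import random

class ActiveLearner:
    """Minimal sketch of an active-learning feedback loop."""

    def __init__(self, dataset):
        self.dataset = dataset  # list of (text, is_biased) pairs

    def confidence(self, text):
        # Placeholder: a real model would return how certain it is
        # about its bias prediction for this text.
        return random.random()

    def pick_for_review(self, messages, threshold=0.6):
        """Select messages the model is unsure about, to ask a human."""
        return [m for m in messages if self.confidence(m) < threshold]

    def record_answer(self, text, is_biased):
        """Fold a human judgment back into the training set."""
        self.dataset.append((text, is_biased))
```

The point of the design is that only low-confidence cases cost human attention, and the dataset keeps tracking current norms instead of being frozen at training time.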

Yeah. That's really interesting to hear, how biases can bleed into the AI and affect it as it comes through. When it comes to thinking about unconscious bias and inclusion, is it difficult to have a standard definition of what inclusion means? You're talking about different parts of the world, different parts of the country, and everyone has a different idea about it. When we're talking about inclusion training, we think everyone should do the same thing. But has it been a challenge for you at all that there are varying ideas of inclusion?

Well, the goal should always be that golden future where everybody feels they belong at work, where everyone feels they can bring their whole selves, their own cultures and their own perspectives, and have those be respected and valued. Regardless of where in the world you are, that should be the case. I'm not going to pretend that I'm an expert on all cultures, all perspectives, and all workplace dynamics around the world. That's just not true, and it's actually impossible for any one person to have that much knowledge, which is why we're using AI. But it's not so much about forcing one standard of inclusion on everybody. It's awakening everyone to the fact that our workplaces, our world, our lives are all better when we include one another and when we're all in this together.

Great. Great. When we’re talking about unconscious bias, what do you feel like is the line between an unconscious bias and just a flat-out denial, someone who just doesn’t want to change, doesn’t want to feel the need to change?

Like I said before, everybody has unconscious bias. It doesn't mean we're bad people; it's just a factor of us all having unique lived experiences and different backgrounds. But when denial comes in, it's because it's hard for us to accept that the things we do day to day could be hurting other people. It's a lot easier to say, "No, I'm not biased," than to say, "I am biased. I know I'm hurting people, but I have no idea what to do about it, and I don't even know where to start learning." I think it's a challenging thing, and I've seen it since I started this company, as well. We've seen it on our team. It's difficult. It's challenging to notice when you're being biased. But being open to it is the first step, and that's what we're trying to do with Crescendo for people who are open to recognizing their biases and changing. We help you identify your biases, and we send you things that help you learn how to correct them. Right now, there's no easy way to do that, which makes it a lot easier to default into denial. But if you open up the learning path and make it easier for people to find this content, easier for them to move from "I'm biased and don't know what to do about it" to "this is how I'm going to take actionable steps to remove that," it makes it a lot better for everyone in the circle.

Yeah. Absolutely. I really love the example you started off with where you talked about becoming aware of the words, crazy and insane and things like that, and being aware of those to reduce your usage of them. What are some other examples you’ve seen from other people as they’ve gone through what Crescendo does? What are some testimonies that they give about how they were becoming more aware?

It's very interesting to see people on this journey of discovering their unconscious biases. I think the biggest surprise for me has been how willing people are, even when they initially start out in that denial phase. I think everybody is fundamentally good. Everyone, if given the opportunity to identify where they can improve, will put in the effort and improve. At least, that's what we've seen. People are learning everything from what it's like to be a pregnant woman at work to why people celebrate certain things on certain days, and how we can be part of welcoming other people to bring their cultures into workplaces where they traditionally wouldn't. And one of the journeys that has been really interesting is the journey of the ally: people learning about others and their perspectives, and then taking active steps outside of the text-based communication to change their workplace and to advocate for people who are not like themselves. That's been very interesting.

Yeah. It sounds like it’s just going on a great path and it’s exciting to hear that people are very open to it, very willing to go through it themselves. Like you said, it’s an opt-in program. Someone has to choose to be tracked by it, which is a very important dynamic.

Yeah. Totally. I should have touched on that some more. With the whole text analysis piece, every stage of it is opt-in within Crescendo. By default, there's actually no text analysis involved until you opt into it. The most basic form of our learning profiles is built from your self-identification survey. When you first onboard with the tool, our bot asks a bunch of questions: where did you grow up, what cultures are you familiar with, what languages do you speak, and so on. From that, we can build a profile with a default set of content where you can safely start to learn before we analyze any of your text. Then, once you see value in the platform, or if you want to jump right into the text analysis piece, you opt in to that, and then opt in to every single channel that you want it to listen to. So, you're never unknowingly sharing information with us.

It’s fantastic. Thanks so much for being on the show, for sharing about all these things. It’s very important work you’re doing and I’m glad you’re doing what you are. How can people stay connected with you and follow along?

You can follow Crescendo, @crescendowork, on Twitter or on our website. And you can follow me personally on all social media accounts, @thetrendytechie.

Fantastic. Sage, thank you so much and I hope you have a great day.

Thanks, Neil. Take care.

Subscribe to The Digital Workplace

Join the journey to a better future of work