Episode 9: Reading Your Text Message ‘Body Language’

First Contact with Laurie Segall is a production of Dot Dot Dot Media and iHeart Radio.

Laurie Segall: Ok, so we just got an email from Es, who I’m interviewing tomorrow, and he says, “Hi, here’s some data we got from the conversation.” That’s exciting. “Some are screenshots of the app, which I can give you a demo of tomorrow, and other data from our system we don’t show on the app, to give you a sense of the underlying analytics.” Are you ready for the underlying analytics of our connection?

Derek Dodge: Yeah! Because I honestly have no idea what type of insights they’re able to glean.

Laurie Segall: It’s loading. It’s taking a while. 

Derek Dodge: Built in suspense

Laurie Segall: OK… What?! 

Think about this for a second. And don’t judge too quickly. Imagine an AI assistant that sits on top of your text messages. It reads your conversations. It turns your messages into a novel of data points. Basically a human psychology report. And then it guides you. The assistant can say: hey, the person you’re talking to is a bit more introverted, you might want to be a little more delicate when you message them. The assistant will give you a percentage likelihood that the person you’re texting with likes you. I’m talking romantic feelings. It’ll read hundreds of data points to help you shape a better, more personalized conversation based on what it picks up in the nuances of your language. The pauses between texts. The emojis you use. Are you using words that are positive or negative?

So would you use this type of technology? Well, if so, you’re going to have to give over a lot of your data in exchange. I know what you’re probably thinking: my text messages are pretty personal. What about my privacy? The conversations are anonymized, and you can delete them whenever you want. But it’s 2020; leaks and hacks are the norm. So of course there are ethical boundaries.

The tech I’m talking about was built by an entrepreneur named Es Lee. It’s an app called Mei, an AI assistant that you can download that, when unleashed, can tell a lot about us, including our mental state. So in this episode you’re going to hear me talk a lot about depression, love, all the raw human elements that this tech can discover. So what does an algorithm say about our messy human conversations? About us? Could it help us communicate better with each other? Or are we crossing a line?

The future is here. We might as well take a deep dive.

I’m Laurie Segall, and this is First Contact.

Laurie Segall: Well, welcome to First Contact. This is a show that introduces you to people and tech that change what it means to be human. And I think you’re a perfect example of that. Normally I go into my first contact with the founders I bring on the show, um, and how we first met, but we’ve never met in real life.

Es Lee: No. Thank you for having me. 

Laurie Segall: But that being said, you might not realize this, but our first contact happened in this room. I interviewed a guy named Shane Mac. Do you know Shane?

Es Lee: Yeah. Yeah. I met him once. Great guy.

Laurie Segall: Interesting guy. Um, and he’s building out bot technology to date on the dating apps. And so, in the middle of the interview, he brought up your company. And he said, “They’re building this really cool technology that can analyze conversations, and it can know if you’re interested in the other person by the way you’re speaking.” I was like, “Whoa. That’s really cool.”

Laurie Segall: And so, you know, I was doing research on you, and something stuck out that you said. You talked about how your text messages have body language.

Es Lee: Yeah.

Laurie Segall: I love this idea that we could read our text messages like body language. It says so much about us. And I think that’s, like, a good way to start with what you’re building.

Es Lee: Yeah, yeah. I think it’s body language, in a way, in that, I guess, there’s kind of like a new language evolving from text. So, 30 years ago, nobody would be able to look at a phone and say, “Oh, this person likes you,” based on the metadata in the text. Right?

Laurie Segall: Mm-hmm (affirmative).

Es Lee: So if somebody makes eye contact with you, if they smile, these are all kind of signs that we’ve kind of learned over time that this is an indication this person’s interested in you. And now, in the form of text messages, it might be a double text. It might be, you know, a lot of emojis or exclamation points. It might be immediately replying to all your text messages.

Laurie Segall: And the idea is that, um, with what you guys are building, you could actually, do analytics on it and really help people understand relationships and if someone likes you and, and much more than just that. It started very basic like that, but you can understand all sorts of things about people.

Es Lee: Yeah. Yeah. There’s an immense number of data points in conversation. So you can imagine even the conversation that we’re having now. There’s been research out there that the number of times a person says “I” versus the other person is an indicator of the power dynamic between the two.

Es Lee: So when you add all the metadata, along with the content of, you know, what we’re saying, there’s a lot of things that get revealed, just from looking at the data on text messages. 

Laurie Segall: Hmm.

Laurie Segall: So, before we dig into the tech and what you’re building, could we just take a giant step back? Because normally, when I interview folks, I, I… Generally, I have a history with them or I know them. And this is, as we said, my first contact with you. Um, where are you from?

Es Lee: Uh, so I was born in China.

Laurie Segall: Uh-huh (affirmative).

Es Lee: I grew up in Boston.

Laurie Segall: Okay.

Es Lee: And I moved to New York about 12 years ago. I always thought people were really interesting to analyze. I did computer science in college, and then after school, I went into finance and, um, you know, covered the insurance industry.

Laurie Segall: The insurance industry.

Es Lee: Yeah. That’s right.

Es Lee: Um, I was working on an insurance start-up. And whenever I would tell people what I was working on, it would just be like crickets. And then I’d say, “Well, actually, in my back pocket, I’m working on an algorithm that can figure out how much people like you based on your text messages.” And you kinda see the eyes light up, the jaws drop, and then I’m like, “Okay. That’s enough validation that I should probably be working on this.”

Laurie Segall: Hmm. In my experience, um, founders look to try to solve problems that they have, um, so what problem did you have that you’re trying to solve?

Es Lee: Um, so actually, you kind of saw right through me. I’ve been in New York for 12 years and, you know, had my share of dating experiences. I think I’m okay in real life, uh, but then when it comes to text messages, I’m terrible.

Laurie Segall: Why?

Es Lee: I guess it’s a common affliction that, actually, a lot of guys may have. They just don’t know how to text. Right? So I was always eager just to kind of end the text communication and get to real life-

Laurie Segall: Hmm.

Es Lee: … whereas that was probably interpreted as a, you know, “He doesn’t like me,” or, “He doesn’t care.” So I kind of wondered just how many relationships could have been better or more had I just seen these things.

Laurie Segall: Hmm. So you’re texting with women and not having luck, and you’re frustrated.

Es Lee: Right.

Laurie Segall: And you have this background, uh, in computer science and, there’s an  algorithm that can help you communicate better. Right? 

Es Lee: Right.

Laurie Segall: Okay. So then what?

Es Lee: So actually, what, what, what kind of brought this t- to fruition was a friend that moved into the city. And he said, you know, “Dating here sucks.” And I was like, “Why?” He goes, “So I went out with this girl a couple of days ago. We really hit it off. Everything was great, and now, all of a sudden, she’s not responding to my text messages.” And he was confused. So I take a look at it and I kind of flip through all his text messages. And I’m like, “She likes you.”

Laurie Segall: How did you know?

Es Lee: So I kinda looked at the body language.

Laurie Segall: I mean, it sounds like if you don’t have luck on text messages, how are you able to analyze these text messages and look at words as data points and actually know?

Es Lee: Yeah. Th- the funny thing is, actually, I’ve found over the years that it is actually impossible to be objective about your own relationships.

Laurie Segall: Mm-hmm (affirmative).

Es Lee: Right? Often, it needs a third party to kind of look at it from, you know, a bird’s-eye view and say, “Hey, look. Here were all the warning signs. How did you miss all that?” And I actually think that’s how we deal with a lot of our relationships: we’re not able to see objectively. Right?

Laurie Segall: Hmm.

Es Lee: Despite knowing kind of all the pitfalls of texting, when it comes to my own relationships, it helps me not at all. But I can kinda see it in other people, and, you know, I can be more objective about it myself. And then, with the algorithm, I can actually give them data behind it.

Laurie Segall: And all of this is kind of, um, background for an algorithm. And so this is all before you created Mei. It was Crush, right? The Crush app that you created.

Es Lee: Yeah. Yeah.

Laurie Segall: And so all of these data points went into creating this Crush app. So talk to us about that.

Es Lee: Yeah. Yeah. I would like to go on record to publicly apologize for Crush. Um, it was downloaded by 150,000 people.

Laurie Segall: Uh-huh (affirmative).

Es Lee: And I joked with my team that, like, we should put out a press release to apologize for ruining 150,000 people’s days.

Laurie Segall: (laughs) Why? What, what was so terrible about it?

Es Lee: ‘Cause, you know, you’re probably… If you’re using an algorithm to figure out kind of whether somebody likes you or not, like, chances are you’re not gonna be happy with what you find. 

Laurie Segall: And what did the algorithm take into account? Can you talk to us about the, the technology behind it? Then we’ll get into the technology you’re using now, which is much beyond just Crush and, and does someone like you or not? What did it actually take into account?

Es Lee: Uh, it took into account the average response times, the length of the text messages. It counted emoji usage, exclamation point usage, whether they sent you a link or a picture, uh, and then the number of conversations each side initiated.
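The data points Es lists map naturally onto a simple feature extractor. As a rough sketch (this is illustrative Python, not Crush’s actual code; the message fields and the conversation-gap threshold are assumptions):

```python
from dataclasses import dataclass

@dataclass
class Message:
    sender: str       # "me" or "them"
    text: str
    timestamp: float  # seconds since epoch

def partner_features(messages, new_convo_gap=8 * 3600):
    """Tally the signals described above for the 'them' side: average
    reply time, message length, emoji/exclamation counts, links sent,
    and conversations initiated (a message after a long silence)."""
    reply_times, lengths = [], []
    emojis = exclaims = links = initiated = 0
    prev = None
    for m in messages:
        if m.sender == "them":
            lengths.append(len(m.text))
            exclaims += m.text.count("!")
            # Crude emoji check: code points in/above the emoji blocks.
            emojis += sum(1 for ch in m.text if ord(ch) >= 0x1F300)
            links += m.text.count("http")
            if prev is None or m.timestamp - prev.timestamp > new_convo_gap:
                initiated += 1
            elif prev.sender == "me":
                reply_times.append(m.timestamp - prev.timestamp)
        prev = m
    return {
        "avg_reply_secs": sum(reply_times) / len(reply_times) if reply_times else None,
        "avg_length": sum(lengths) / len(lengths) if lengths else 0,
        "emojis": emojis,
        "exclamations": exclaims,
        "links": links,
        "initiated": initiated,
    }
```

Each of these numbers on its own is weak evidence; Crush’s point was that combined across a whole conversation history, they become a signal.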

Laurie Segall: Hmm.

Laurie Segall: I mean, it is pretty extraordinary if you think about the amount of data we give each other on a daily basis: the time of day we text, the types of words we use, the types of emojis. All of this is an insane data set that creates a personality profile. It’s just an extraordinary amount of information about us that you’re utilizing, and that advertisers are utilizing. And I guess you’re trying to use it to kind of help us better ourselves and have better conversations. But it is an extraordinary amount of data, and I don’t think people quite understand how easily, as human beings, we decode it. Right?

Es Lee: Yeah, yeah. So I try to back up my text messages, right? So when I get a new phone, I get a new app, and it kind of loads back your old text messages. But the program I was using took a really long time and actually showed every text message.

Laurie Segall: Hmm.

Es Lee: Right? And I actually spent three hours looking at all my text messages flash before me at half a second each. It’s an incredible amount of data.

Laurie Segall: Sure.

Es Lee: And the average person actually texts about a novel’s worth of their thoughts a year with their thumbs.

Laurie Segall: Wow. I mean, and we’ll get into this a little bit later, but I gave you text messages with my co-founder and dear friend Derek, and as we were giving you the text messages… By the way, we only gave you like a year or a year and a half worth.

Es Lee: Yeah. You gave, you gave a year.

Laurie Segall: But we have, like, 12 years’ worth or something, and it was maybe one of the most, like, horrifying, interesting experiences, watching as they were downloading. It was just, you know, looking at all those words; it’s such a personal experience to have through technology. So I want to get into what you’re doing now with Mei. So we’re not just doing love anymore? We’ve moved past just an app to help people find out if other people are interested in them?

Es Lee: Yeah, yeah. We have moved on from that.

Laurie Segall: And so when did we make the pivot? Talk us through that.

Es Lee: So my goal had always been to be, like, the default texting app on a phone. The most used application on our phone is for messaging, right?

Laurie Segall: Mm-hmm (affirmative).

Es Lee: Yeah. We had instant messaging 20 years ago, and there’s been almost no innovation whatsoever on it. Right? And so I go, “Hey, what if I’m able to build a messaging app that actually has kind of an AI just sitting right on it, analyzing your conversations? It’s almost like a guardian angel, you know, looking over your shoulder, being like, ‘Oh, hey, you’re coming across as being rude,’ or, you know, ‘You’re missing this about this person.’”

Es Lee: So I always thought there was the capability for that, and Crush was basically the very first step in doing that. You know, I know with a lot of data, a lot of amazing AI is possible. And that wasn’t possible without first having the data that we got from Crush.

Laurie Segall: And, first of all, Mei. Did I read that you named it after your mom?

Es Lee: I did. I did.

Laurie Segall: That, that’s interesting.

Es Lee: It, it, it was also coincidental, in a way, ’cause I was looking for a name for the AI.

Laurie Segall: Mm-hmm (affirmative).

Es Lee: And I thought messaging could be done so much better. So, in a way, Mei also stands for “messaging improved.” And maybe one day it’ll stand for “me improved,” because, you know, we’re gonna take this data and discover what we can, to give you the best information for you to operate as an improved version of yourself.

Laurie Segall: And it means beautiful in Chinese?

Es Lee: Yes. It means beautiful in Chinese.

Laurie Segall: You know, I want to get into the algorithm and how it works ’cause I think we’re talking a lot of, like, in theory about how this thing works and all this. So let’s, like, go hardcore into the, the technology behind it.

Es Lee: Yeah, sure.

Laurie Segall: So what exactly does Mei do? Like, like, break it down for us in, in, you know, the most human way you can.

Es Lee: Okay. We’re basically a replacement for the default texting app on the phone. So if you and I texted after this, with enough messages, the AI will be able to pick up certain things about your personality and start giving me advice, or kind of the little insights that it picks up about you.

Laurie Segall: Like what?

Es Lee: Like it might pick up that, you know, maybe you’re more organized than I am.

Laurie Segall: Well, that’s certainly not true, so…

Es Lee: (laughs)

Laurie Segall: Everyone in this room is shaking their heads. Why don’t we, why don’t we pick another one? (laughs)

Es Lee: (laughs) Or, you know, more altruistic or empathetic than I am.

Laurie Segall: Okay. We could go with that one (laughs).

Es Lee: OK. So the idea is, you know, as we’re having these conversations, with enough data that goes through the system, the AI has actually been able to figure certain things out about yourself and the other person. So, you know, for example, it might try to predict the age or gender of the other person. If there’s a history of conversation and it figures out that the other person is using a lot more negative words than they used to, it might say, “Hey, Laurie seems like she’s a little different. You might want to check in with her.” So we kind of went through as many algorithms as possible to figure out, “Hey, how can we be that little personal relationship assistant as you go about your day and texting?”

Laurie Segall: What do you mean when you say, like, negative words? I don’t know if folks really, truly understand what that actually means.

Es Lee: Yeah. A lot of natural language work has been based around sentiment. Like, hey, if you’re saying “sad” or “angry,” or, you know, “I’m happy. I’m glad to see you,” these are negative and positive words. Right?

Laurie Segall: Mm-hmm (affirmative).

Es Lee: So if you take the dumbest algorithm and just say, hey, let me just look at the number of times you used a negative word today-

Laurie Segall: Mm-hmm (affirmative).

Es Lee: … and say, “Oh, on average, you used 22 out of 5,000 words. But now you’re at 44, and so you’re twice as negative as you tend to be.”

Laurie Segall: Hmm.

Es Lee: And so we have algorithms that will look at these numbers statistically, find when there’s an anomaly, and, you know, try to point that out to the user.
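A minimal version of the “dumbest algorithm” Es describes, counting negative words and flagging a jump against a baseline, might look like this (the tiny lexicon and the 2x threshold are illustrative assumptions, not Mei’s actual values):

```python
# Illustrative lexicon only; real sentiment lexicons run to thousands of words.
NEGATIVE_WORDS = {"no", "hate", "sad", "angry"}

def negative_rate(texts):
    """Fraction of words in a day's messages drawn from the negative lexicon."""
    words = [w.strip(".,!?").lower() for t in texts for w in t.split()]
    if not words:
        return 0.0
    return sum(w in NEGATIVE_WORDS for w in words) / len(words)

def flag_anomaly(history_rates, today_rate, factor=2.0):
    """Flag when today's rate is at least `factor` times the historical
    average, like the '22 out of 5,000 vs. 44' example above."""
    if not history_rates:
        return False
    baseline = sum(history_rates) / len(history_rates)
    return baseline > 0 and today_rate >= factor * baseline
```

A production system would compare against a rolling mean and variance rather than a fixed multiplier, but the shape of the idea is the same: per-day rates, a baseline, and a threshold.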

We’ve got to take a break to hear from our sponsors but when we come back, we’ll talk Mei and mental health. 

What if an algorithm identified that you were falling into depression… and then, using data, it suggested the best person in your contact list to reach out to? And here’s what’s surprising: it’s not the person you normally would think. More after the break.

Laurie Segall: And what other types of things does it analyze? It still analyzes if someone is into you, but the idea is that it can analyze work relationships, all sorts of different types of relationships. Right?

Es Lee: Yeah. Yeah

Laurie Segall: And how?

Es Lee: And how. Yes.

Laurie Segall: Just give… The specifics, I think, are really interesting to us ’cause I, I don’t think people quite understand, the data points that go into a lot of this stuff.

Es Lee: Right. So with Crush, one of the first things that we would ask is, “Hey, what type is… What type of relationship is this?”

Laurie Segall: Mm-hmm (affirmative).

Es Lee: Right? Because, you know, you’re gonna be texting somebody you like a little differently from your family, from your friends. So we wanted to kind of separate out our data set.

Laurie Segall: Mm-hmm (affirmative).

Es Lee: But in doing so, we actually kinda had a lot of labels. To do AI, there are only two things that you really need: a ton of data, which we had, and labels. So basically, let’s just abstract it and say, “Okay. Let, let, let’s talk about pictures of, uh, dogs and cats.”

Laurie Segall: Mm-hmm (affirmative).

Es Lee: Right? If you have hundreds of thousands of pictures of dogs and cats, and you go, “Okay. Well, that’s a dog. That’s a cat. That’s a dog. That’s a cat,” there are ways to turn each picture into, kind of, math, and then have the computer find patterns among those pictures and start predicting whether a new picture it sees is a dog or a cat. So in the same way, if you had enough text message conversations where one party says, “Yeah, I had a crush on that person,” or, “That’s a family member,” or-

Laurie Segall: Mm-hmm (affirmative).

Es Lee: … “That’s my buddy,” you’re able to kinda do the same thing.

Laurie Segall: Hmm.

Es Lee: Right. With enough data, it starts actually doing this better than people can. There’s no way a person could have actually listened to 250,000 conversations, right? Or conversations between, you know, 500,000 people.

Laurie Segall: Mm-hmm (affirmative).

Es Lee: But a machine is able to do that and take every single one of those data points and say, “Hey, this person, within, like, the first 20 words, said something like ‘handsome,’ and in 99 out of 100 of those, that person had marked that they were romantically interested.” A machine is able to do things and recognize patterns in ways that people can’t.
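The dog-and-cat analogy is ordinary supervised learning: labeled examples in, patterns out. A toy sketch of the relationship-labeling version (a hypothetical word-overlap scorer, far simpler than anything Mei would actually use):

```python
from collections import Counter

def train(labeled_convos):
    """labeled_convos: list of (text, label) pairs, e.g. labels like
    "crush", "friend", "family". Returns per-label word counts."""
    model = {}
    for text, label in labeled_convos:
        model.setdefault(label, Counter()).update(text.lower().split())
    return model

def predict(model, text):
    """Pick the label whose training vocabulary overlaps most with this text,
    normalized by how much training text that label had."""
    words = text.lower().split()
    def score(label):
        counts = model[label]
        total = sum(counts.values()) or 1
        return sum(counts[w] for w in words) / total
    return max(model, key=score)
```

Real systems replace the word-overlap score with a statistical classifier trained on the message metadata too, but the workflow is the one Es describes: collect labels, turn text into numbers, let the machine find the pattern.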

Laurie Segall: Hmm.

Es Lee: And if you think about how a person actually works… Like, if you’re young and this is your first romantic relationship, you might not be able to read any of the signs that this person likes you, right? ’Cause you’ve never seen it before. But you take somebody who’s seen enough relationships, and, you know, your grandma might actually be the closest thing to an AI that we know, because she’s just seen enough information that she can go, “Oh, yeah, I heard him say that. In my experience, that has not turned out well.”

Laurie Segall: And your grandma might be as close to an AI… That’s really interesting. And so, I mean, it’s like it’s pattern recognition and also based on, kind of, personality types that you guys define. Is that how y- you guys do it?

Es Lee: It’s the Big Five personality profile.

Laurie Segall: Right. Okay. And so I guess, you know, what’s the point of it? Is the point to help us communicate better? To give us the words to say to each other, because we’re messy and human and sometimes we just don’t know? Is it an iteration of you not being able to talk to women? And I don’t mean that in a bad way. I mean it in the way that, you know, I sometimes have trouble talking to a family member or something. It just gives us a pattern or a blueprint, um, through AI, by which to speak to people. Is that… that’s if I’m reading it right.

Es Lee: Well, well, think about it this way. How often, when you compose a text message, do you think about, “Am I coming acro- across as, like, rude, or…”

Laurie Segall: Well, I think my, my issue is sometimes I don’t think before I text. 

Es Lee: But it’s amazing, I think. For most people, when they text something, they scrutinize it, thinking, “What is the other person gonna think when they get this message?” Right? And I think most of us kind of assume the worst. Right?

Laurie Segall: Mm-hmm (affirmative).

Es Lee: But just think about every time you text somebody, if you didn’t like somebody, you wouldn’t text them.

Laurie Segall: Yeah.

Es Lee: So, for example, if the algorithm starts picking up that the person on the other side is a lot more reserved, it’ll say, “Oh, you know, this person seems like they’re a lot more reserved than you. You might want to try a little harder to relate to them.”

Laurie Segall: What other kind of soft nudges will the AI give us?

Es Lee: Well, on the flip side, if, you know, you’re reserved and the other person is a lot more outgoing, it might say, “Hey, this person’s a lot more outgoing than you. You might want to keep things light with them.”

Laurie Segall: Does the AI… I mean, as always, AI is so flawed, right-

Es Lee: Yeah.

Laurie Segall: … and, and can get it really wrong sometimes. So, like, what if the AI is giving me advice that’s just terrible? Right?

Es Lee: Yeah, that’s fair. Um, it’s not infallible. But I would argue that it is probably better than your friends.

Laurie Segall: I want to talk a little bit about the, the mental health aspect of this because it’s putting a lot of weight on Mei. Like if Mei is looking through a lot of these messages and seeing people at these, like, very vulnerable moments, uh… I’m sure you must see some crazy stuff, like… you must really see some people in pain, saying some, some pretty sad things about depression or suicide. What’s the responsibility of the AI? 

Es Lee: Yeah. I mean, that’s a good question. I don’t think anybody really has a good answer for that. Look. W- w- we actually try to steer away from giving people advice as much as possible. But all the feedback that we got from people was like, “Hey, okay. So you told me this about it, and shoot, what am I supposed to do about it?” Right? People saw that there was application here, potentially, outside dating.

Es Lee: Um, that was one of the first things that we turned to when we go, “Hey, if the AI is able to look at your conversations with everybody, your parents, your girlfriend, your best friend, it’s able to kinda understand things about you from a perspective that really nobody else has.” Right? You only get to see somebody in a certain dimension, whereas we have a 360 view. And we kind of theorized that, hey, maybe we can actually figure out patterns.

Es Lee: And so we started, uh, looking into conversations. And, you know, for the longest time, I was looking for another start-up in mental health, or some data set where, it’s kinda morbid to say, but could I get the text messaging history of somebody who killed themselves? Because we have the analytics tools to comb through the history. And what if we were able to use that to figure out patterns that preceded the actual act?

Es Lee: And so I talked to many, many psychologists and start-up companies, and the answer was always, “We don’t know,” or, “We don’t have the data, and even if we did have the data, privacy regulation is keeping us from doing anything.” And then it dawned on me: well, we have conversations of 150,000 people. What if we just search through these messages and try to look for the phrase “tried to kill myself”? We actually found about 10 occurrences of that where it was from the user and not a contact, so we had a lot more information on them.

Es Lee: And in those 10, they actually included a date. So one of them was, “Hey, sorry I’ve been MIA. Um, but, you know, I’ve been going through a rough patch, and I actually tried to kill myself two weekends ago. But all I managed to do was down a bottle of pills and send myself to the hospital.” So you know that person tried. So with that data, we’re able to now look at the patterns. And some of the things that we found were actually really, really interesting.

Laurie Segall: Like, how do you feel when you’re looking at messages? This is your product, right? Like, how do you feel when you’re looking at messages and someone is saying, “On this date, I tried to kill myself”? I don’t really know what the question is, other than: as a founder, with your responsibility and your technology, how does that make you feel?

Es Lee: Um, I mean, I think, going through some of these messages… And, by the way, we don’t know the identity of the people, ’cause we never ask them for their name. Right? I think it was an exercise in empathy that, if everybody had the opportunity to go through, I would suggest it. Because going through the interactions of some of these people with everybody in their life, it felt like a soap opera. I did it with one, where I looked at the two months of conversation before that, and I tried to jot down every person: “Oh, yeah, number 3704, that must have been a friend,” or, “That must have been a girlfriend.”

Es Lee: It was, it was exhausting, because imagine knowing that this person was gonna go through hardship a month from now while you’re reading the conversations of them crying out for help to people. It was kinda heartbreaking. Um, and there were some conversations where, you know, a person came in and just started listening to them, checking in with them, and saying, “Hey, how are you doing?” I just remember mentally, like, cheering at that point: “Thank God for you.”

Es Lee: And so, you know, we went through this because I knew that an algorithm might be able to find that person. And I think when people go through tough times, there really is no easy solution. Right? People might say, “Hey, if somebody is depressed and you find that out, what you should do is…” You know, there are a lot of bots out there, or a mental health hotline. I think all these people know that there are bots and mental health hotlines, but they don’t reach out. And actually, we think what we worked on is a much more complete solution than anything that’s out there, because the only people that can actually help you are the people that are important to you, that you talk to, that you care about in your life.

Es Lee: If we could use these algorithms to find the person that we could recognize cared about you the most, but also was depressive or could be empathetic to you… And it doesn’t matter if you have Mother Teresa in your phone. If she doesn’t reply to your text messages, she’s not gonna be able to help you. So we needed the algorithms to find, you know, somebody who tries a lot harder to communicate with you than you do with them. And so the algorithm goes through all that, finds the person that might be right for you, and suggests that you reach out to this person.

Laurie Segall: That’s what you guys do if you notice that someone is getting depressive or has threatened their own life. Mei, the AI, will suggest that you reach out to someone. And based on your AI, you identify a person, from these messages, who you think would be a good person to talk to.

Es Lee: Yes. Yes. And the reason we had to go through kind of reading these messages was we had to back-test the algorithm. So we would go through these messages, and there might be five to six people that they’re texting. And we go, “I think this is the best person to reach out to.” So we went through multiple iterations of the algorithm, uh, and then we found the, the one iteration where that algorithm was gonna pick that exact person that we were gonna pick.

Laurie Segall: Hmm. But, like, doesn’t that feel like playing God, rolling the dice?

Es Lee: I think… It’s normal to see a person in need and say, “Hey, I think I could help that person.” I feel like it’s a responsibility, in a way, then. Right? Because I think in society, when you know somebody is in need, there’s a responsibility that grows out of that to help them. You know, so I actually do think that it is the responsibility of the people who have access to something like this.

Laurie Segall: Yeah. I mean, I think it’s a really interesting ethical conversation, what you guys are sitting on, and I want to get into the privacy. But also think about Facebook, right? All these social networks can tell if people are becoming depressive or threatening their own lives. And, you know, do they contact authorities? Do they put up the suicide hotline number?

Laurie Segall: And so it is pretty unique that you guys are identifying and suggesting a person, based on artificial intelligence and the text message data that you have. It’s a choice, right? And no one knows what the right choice is.

Es Lee: So I think we’ve seen examples of the big tech firms that actually went and did something. Right? I don’t think they have a right to do that. Just, logically, I don’t think they have a right to do that, whereas what we do is put the power back into that person. We say, “Hey, you might not have noticed this, but this person that checks in with you, a friend that you don’t talk to that frequently, we figured out that this was a person that might really understand what you’re going through.”

Laurie Segall: Right.

Es Lee: And then it’s your choice to reach out to that person. We don’t do anything on your behalf.

Laurie Segall: I mean, I think one of the most interesting things you said there is that you pick a person for someone who’s threatening their own life to reach out to. One of the tidbits that you kind of slipped in there is that you guys try to pick out someone who’s also depressive, which is certainly very interesting to me.

Es Lee: I’ll preface this. 10 people is a small amount of data. Right?

Laurie Segall: Yeah.

Es Lee: But when I was going through that exercise and I plotted these things on a graph, I think people would say, like, “Hey, I don’t know what the patterns of, suicidal tendencies look like. It must be really hard to figure out.” When you see some of these graphs, you’re like, “Oh my God. This is perfectly obvious.” So let’s just say a-

Laurie Segall: What do you mean?

Es Lee: So, the usage of negative words… there’s research out there that says when people are depressive, they tend to use these words more.

Laurie Segall: Mm-hmm (affirmative).

Es Lee: When we plotted these 10 people out on a graph, we plotted their negative word usage versus everybody that they’re talking to.

Laurie Segall: What words? Just asking for a friend (laughs).

Es Lee: Uh, it’s a lot of words.

Laurie Segall: Uh-huh (affirmative).

Es Lee: Just negatively…

Laurie Segall: Okay.

Es Lee: Negativity. You know, no, hate, 

Laurie Segall: Uh-huh (affirmative).

Es Lee: sad, angry.

Laurie Segall: Yeah.

Es Lee: When we plotted out the word usage, negative word usage, on a graph, between the person and everybody collectively that they talked to, it tends to show a pretty regular wave. And we found that when the user was more negative than they ever were, if the other people that they talked to were just as negative, then everything was fine. It’s when they were the most negative and there was the biggest gap between how negative they were-

Laurie Segall: Hmm.

Es Lee: … and the other people that you started seeing behaviors.

Laurie Segall: Hmm.

Es Lee: And I think when I went through that exercise, by the time I came to, like, the sixth or seventh graph, I didn’t even need to look at the text messages. I just looked at the graph and said, “I’m gonna look at May 26th because I think that day, they might have tried something.”

Laurie Segall: Wow.

Es Lee: And I was pretty close on most of them.
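A rough sketch of the signal Es describes – a user’s negative-word rate spiking well above that of the people they text – might look like this. The word list, daily bucketing, and threshold are illustrative assumptions, not Mei’s actual model:

```python
# Toy sketch of the pattern Es describes: flag days where the user's
# rate of negative words rises well above that of everyone they text.
# The word list and gap threshold are illustrative guesses, not Mei's.
NEGATIVE_WORDS = {"no", "hate", "sad", "angry"}

def negativity(messages):
    """Fraction of all words in `messages` that are negative."""
    words = [w.strip(".,!?").lower() for m in messages for w in m.split()]
    return sum(w in NEGATIVE_WORDS for w in words) / len(words) if words else 0.0

def flag_days(user_by_day, contacts_by_day, gap=0.1):
    """Return days where the user is markedly more negative than
    the people they are talking to."""
    return [day for day, msgs in user_by_day.items()
            if negativity(msgs) - negativity(contacts_by_day.get(day, [])) > gap]
```

The point of the gap, rather than raw negativity, is exactly Es’s observation: a negative user surrounded by equally negative conversations looked fine; it was the divergence that preceded the bad days.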

Laurie Segall: It’s crazy. I mean, even crazy to think about the future, like, if you could predict… I don’t know. And this gets into, like, Minority Report and a lot of stuff-

Es Lee: Right.

Laurie Segall: … like, you know, this is, this is used for good, but this is… could also be used in a pretty scary way. Like, could you start making some of these predictions if someone’s gonna rob a bank, do something bad, harm themselves, hurt someone?

Es Lee: Right. Right.

Laurie Segall: You know, you could start making some of these really interesting predictions based off of some of the data that you guys are getting, although we’ll get into the privacy ’cause it’s-

Es Lee: Right.

Laurie Segall: … a huge part of this. But it’s anonymized data at the moment. But, um, I’m sure you can make some insane predictions, right?

Es Lee: Yeah. Yeah. Right.

Laurie Segall: Like what?

Es Lee: Um, well, we’ve never tried.

Laurie Segall: But you could.

Es Lee: Uh, theoretically, yes. Um, but yes. I think there’s the possibility for that. And we’ve actually been, in the four years I’ve been working on this, really open to anybody coming along and saying, “Hey, like, I’m an expert in this. I would love to look into this.” I, I think this data is very powerful, and so long as it’s being used by the people who kind of intend to use it for public good and for education, I feel, I feel fine with that-

Laurie Segall: Right.

Es Lee: .. because none of us really know how this technology will be used. it’s almost like  a, a hammer could be used to build a house or tear something down. Right? And it’s up to the people, how they, how they choose to use it.

Laurie Segall: Well, you guys are a small start-up, and big companies use big data in all sorts of different types of ways. This is the national conversation around our data and our privacy.

We’ve got to take another break to hear from our sponsors, but when we come back… I know what you’re thinking. Tech that sees all my text messages and figures out these intimate things about my personality and relationships. What about… privacy? We’ll dig in after the break.

Laurie Segall: I want to get into privacy a little bit ’cause we’ve been speaking around it, but it’s probably, like, I would say, the most important part of what you guys are doing. You guys are an AI that sits on top of messages and reads messages. So, like, the elephant in the room is, like, “Whoa. What about our privacy?” So, um, what is your answer to the question of, you know, how do you protect user data?

Es Lee: Yeah. Uh, I think it’s like security: nothing is foolproof, but you just want to try to do your best at every step. So we knew text messaging data is probably some of the most intimate data, right?

Laurie Segall: Mm-hmm (affirmative).

Es Lee: So everything within Mei, everything’s off by default. We don’t collect anything. If you want to turn on the AI, we pop up this pop-up that says, “Hey, we’re collecting these messages. If you’re not comfortable with this, don’t go any further.” We did that with Crush, and we probably turned away half of the people, which is fine.

Es Lee: We want people to know exactly what they’re getting into. But then when we collect the data, we don’t collect anything more than we have to. So, for example, we don’t collect the, the pictures, because we’re not at the stage where we can analyze pictures. Uh, so we only take what we need. We don’t need your name to help you. Right? The only personally identifiable data that we get is the telephone number. And we have to verify that you own the account. There has to be a way to authenticate you.

Es Lee: We take that and actually hash it to an ID that means nothing to anybody. And, actually, that was a way to protect us against ourselves.

Laurie Segall: What do you mean?

Es Lee: Meaning we can’t look in our database, find something, find a, a user, and then take that ID and try to figure out what telephone number it is.
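A minimal sketch of the kind of one-way mapping Es describes might look like this. The scheme shown, an HMAC-SHA256 keyed hash, and the key name are assumptions for illustration; Es doesn’t specify what Mei actually uses:

```python
# Hypothetical sketch of hashing a phone number to an opaque ID,
# as Es describes. HMAC-SHA256 with an app-side secret is an assumed
# scheme; a plain unkeyed hash of phone numbers could be brute-forced,
# since the space of valid numbers is small.
import hashlib
import hmac

SECRET_KEY = b"app-side-secret"  # hypothetical; kept out of the database

def phone_to_id(phone_number: str) -> str:
    """Derive an opaque user ID; the raw number is never stored."""
    return hmac.new(SECRET_KEY, phone_number.encode(), hashlib.sha256).hexdigest()
```

The same number always maps to the same ID, so the account can be re-verified, but nothing in the database can be walked back to a phone number without the key, which is the "protect us against ourselves" property Es is getting at.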

Laurie Segall: Hmm. So none of your employees could actually be like, “Oh, this is a pretty interesting conversation with so-and-so. I’d like to go look it up.” They couldn’t do that? 

Es Lee: Yeah. Right. And then we go, “Okay. If you want to delete your data from our database, we make it really easy to.” I know there are a lot of apps out there that make it impossible to. We just put the button in the app and we say, “Okay. Now you’re gonna be deleted.” So every step along the way, we thought, okay, well, you know, we’re users of apps, too.

Laurie Segall: Mm-hmm (affirmative).

Es Lee: We’re at a crossroads. Which choice do we make? And we always made the one that we thought was for the sake of preserving privacy.

Laurie Segall: What’s interesting, though, is when we were thinking about, okay, well, I, I wanted you to analyze some of my messages, and I don’t have an Android. So right now, it’s available on Android, and for the iOS app, you have to upload your WhatsApp-

Es Lee: Yes.

Laurie Segall: … messages. Why is that?

Es Lee: So on iOS, there are no third-party apps that can access your, your text messages.

Laurie Segall: Got it.

Laurie Segall: Do you think at any point that would change? I’m assuming that’s, that’s a hindrance to the business to a degree, right?

Es Lee: To a degree. Yeah.

Laurie Segall: Yeah.

Es Lee: I think Apple would say this is for the sake of user privacy.

Laurie Segall: Yeah.

Es Lee: But I actually think the better answer is give them a choice. Right?

Laurie Segall: Hmm.

Es Lee: If you as the user… And I’m telling you exactly what I’m using this data for. If you choose to do so, shouldn’t you have the right to? It’s your messages.

Laurie Segall: Well, what’s interesting, though, I mean… And we had this whole debate when we were thinking about what text messages to send over to you. I was like, “Well, I guess I have to ask so-and-so for permission.” They’re like, “No, I don’t.” Like, it goes to this question: do you own your text messages? And there’s legal precedent that says, like, if, if you, Es, send me a message, I… It’s public domain, right? Or it’s…

Es Lee: Yes.

Laurie Segall: I can… You know, it doesn’t matter. Like, you don’t have any right. Right? So, like, it… You can do this analysis of anyone. I… I could send over text messages of me and any friend, and they don’t have to consent.

Es Lee: Right. Right. In the US, that is legal precedent.

Laurie Segall: Right. And then overseas, you guys comply? That’s-

Es Lee: Yeah. Yeah. GDPR is kinda like the big one in Europe.

Laurie Segall: Yeah. Yeah.

Es Lee: And, and we’re compliant with that. In Canada, they have, I guess, a little more stringent property rights as they apply to text messages.

Laurie Segall: And, and I saw you said something about you guys are a small start-up, and you’ve never made a cent from data and all this kind of stuff, and you’ve never sold data. I mean, in the time that I’ve covered tech, I’ve heard that before, and I believe you. Right? I believe it. I think the, the issue is that that could change, right, in a heartbeat. And, and we’ve seen that change. And so I think there’s less trust now. So how do you ensure to your users… And maybe the problem isn’t just small start-ups. It’s really the big companies that we need to be talking about-

Es Lee: Right.

Laurie Segall: …that won’t change, right? That-

Es Lee: Right.

Laurie Segall: … their data will continue to be protected as you guys grow.

Es Lee: Right. Well, you are correct. We have never sold any data. We don’t have any plans to. But I have asked myself, you know, even asked my team the question before: you know, we don’t need to now, but what if we were backed against a wall?

Laurie Segall: Yeah.

Es Lee: And it does bring up some, some very interesting questions. I mean, I’m all in favor of regulation that kind of modernizes how we think about property-

Laurie Segall: Mm-hmm (affirmative).

Es Lee: … and privacy. Whenever I bring up, “This is what we do,” people go, “Wow. There’s application here. There’s application here.”

Laurie Segall: Yeah.

Es Lee: In my mind, I go, there could be so much money made from the things that are necessary. Why would we need to turn to something that is nefarious?

Laurie Segall: What do you mean by that?

Es Lee: So on the iOS side-

Laurie Segall: Mm-hmm (affirmative).

Es Lee: … we’re able to tell you, you know, the probability somebody likes you, based on the algorithm. And users pay for that. So the idea is, with this information, it can be valuable to somebody. I can sell it to them for what they’re willing to pay, as opposed to selling it to somebody else for marketing. I mean, it does, it does bring up a very good point of, how do we monetize this?

Es Lee: I think it’s important to be clear and to tell your users what you’re gonna do. So we actually found a solution about maybe two years ago. We have a credit system in the app, and what it does is, when you give us your text messages and when you label data for us, we give you credits. And that’s kind of like a placeholder for: what is the value of this data? I actually don’t know. Right? So if somebody uploads all their conversations, uh, you know, it contributes to the algorithms that might be worth something someday. It contributes to the overall kind of, you know, AI data ecosystem that might be worth a lot someday.

Es Lee: I don’t know what that’s worth, but I’m gonna give you a little placeholder that says, “Hey, you gave me some information.” We found that to be a way that we can kind of return value to the people that give us their data. So now in the app, any time you, uh… It, it goes, “I think you seem like you’re a really empathetic person. Am I right or wrong?” And you hit, you know, agree or disagree. Uh, you get a small credit. And the idea is, none of us know what this data is worth, and maybe, for advertising reasons, maybe this is a system that allows us to say, “Okay. Well, I choose to see an ad. You, company, are making money from me seeing this ad.”

Laurie Segall: Hmm.

Es Lee: We can give you a small amount of those economics, and everybody is happy.

Laurie Segall: Right. Like, the, the idea that you kind of take issue with the way that, that companies take data. Uh, right now, it’s like they take your data, and we don’t know what’s happening to it, and then we’re advertised to under the guise that it’s free.

Es Lee: Right.

Laurie Segall: What?! Are you serious?

Derek Dodge: Does it say that you’re secretly in love with me?

Laurie Segall: No, it says my predicted age is 44.

Derek Dodge: (laughs)

Laurie Segall: Are you serious?

Derek Dodge: That’s definitely not a maturity level.

Laurie Segall: Ok, so my predicted age is 44?! I guess, for our listeners, I’m 34. And it says your predicted age is 36.

Derek Dodge: Whoa, I am 36! How would it know that?!

Laurie Segall: I want to get into my personal experience with Mei ’cause I think it’s…

Es Lee: Okay.

Laurie Segall: I thought it was super, uh, interesting. And, and it was such a personal experience… ’cause what we did… I have an iPhone, so, like, what we did, just for full disclosure, is we sent you guys the messages to analyze and I sent you a year’s worth of messages with Derek, who’s a good friend but also my co-founder. And we thought it’d be interesting to do it, ’cause we’ve been working together, but we’re also friends, to see what your algorithm picked up about us.

Es Lee: Right.

Laurie Segall: Um, but, like, man, it took an hour and a half for my messages to just download ’cause we downloaded all my messages, which was horrifying, um, simply horrifying. When you talk about, like, novels, like, it was like the great American novel minus, like, anything classy or interesting.

Es Lee: (laughs) Did you, did you read through some of it?

Laurie Segall: We were… (laughs) We read through some of it. Yeah. The- they were really… I mean, it’s, it’s fascinating to see these, like, snippets of conversations, especially of you, like, years ago. I mean-

Es Lee: A- and you were like, “Was that me? Could I have texted something like that?”

Laurie Segall: I mean, like, unfortunately, I’m like, “Yes, that was me.” Um, you know, so I’m, I’m curious to see what your algorithm showed, because I, I think, like, what you’re doing is fascinating because it goes so beyond just that, to someone like me. It’s like, how do you better working relationships, friendships, all this kind of stuff? So should we try it, see?

Es Lee: Yeah. Yeah, we should dive in.

Laurie Segall: I mean, and have you done this interview, like, knowing my personality traits already?

Es Lee: No, no, no. I’m not, I’m not that-

Laurie Segall: Creepy?

Es Lee: … organized or any… Yeah.

Laurie Segall: I didn’t mean to say creepy. Organized. Uh…

Es Lee: (laughs) Uh, no. I hav- I haven’t looked at any of these.

Laurie Segall: Okay.

Es Lee: Um, so I, I, I did the same thing and downloaded your conversations into-

Laurie Segall: Okay.

Es Lee: … a, a new phone of mine.

Laurie Segall: And is this gonna show me or is this also gonna show me and Derek? ‘Cause if it’s gonna show Derek, he’s in the room, so we might as well just bring him over. Right?

Es Lee: Yeah.

Laurie Segall: He’s glaring at me, so that means we should absolutely bring him over. He’s so grumpy (laughs).

Es Lee: (laughs)

Laurie Segall: I wonder if his personality profile says that (laughs).

Es Lee: Oh, I guess we will see.

Derek Dodge: Do my texts indicate that I’m grumpy?

Laurie Segall: Yeah (laughs).

Es Lee: Um, so it is able to kinda give insights about both Derek and you.

Laurie Segall: Mm-hmm (affirmative).

Es Lee: Uh, so I guess now that you’re sitting down, Derek-

Laurie Segall: Okay.

Es Lee: … uh, you can be the first.

Laurie Segall: Right. So, for our listeners, he’s pressing a button.

Es Lee: Yes. Pressing a button, and it says, uh, “You seem passionate about their work.”

Laurie Segall: I do or Derek?

Es Lee: Derek does.

Laurie Segall: Oh, that’s good.

Derek Dodge: That’s good.

Laurie Segall: Yeah. I mean, that’s helpful given that we just launched a, a company. We launched our media company, Dot Dot Dot. So I’m glad you’re passionate about it. Thank God.

Es Lee: You, you see that it says agree and disagree, and this is kind of our way of getting better. Right?

Laurie Segall: Oh. Okay.

Es Lee: So I’m gonna go with agree on that. So it says that you seem like you’re more philosophical than concrete.

Derek Dodge: Hmm.

Laurie Segall: Hmm.

Derek Dodge: Let me ponder that.

Es Lee: (laughs)

Laurie Segall: (laughs)

Es Lee: Uh, agree on that.

Laurie Segall: And so what is it doing now? It’s, it’s taking us through an, an exercise for Derek?

Es Lee: It’s, it’s trying to understand things about Derek, so that if it figures out that there was this big personality difference between the two of you, and that might cause you to kinda misunderstand, uh-

Laurie Segall: Okay.

Es Lee: … it’ll try to give advice on how to bridge that communication gap.

Laurie Segall: Okay.

Derek Dodge: Sounds useful.

Laurie Segall: (laughs)

Es Lee: From, uh, Laurie…

Laurie Segall: Why are you looking awkward?

Es Lee: I don’t know how old you are.

Laurie Segall: It got it wrong.

Es Lee: They got it wrong?

Laurie Segall: By the way, I looked at the data. It got it wrong, and that’s good. I, I’m glad you felt a little awkward looking at me thinking that that’s not how old I was, because that’s, uh, that’s 10 years older than I actually am, Es.

Es Lee: Yeah. Maybe it’s, maybe it’s because it’s a professional relationship.

Laurie Segall: Do you think it’s ’cause I have an old soul… is what we were discussing earlier in the office.

Es Lee: Potentially.

Laurie Segall: So for our listeners, your algorithm thinks that I’m 44, and I’m actually 34. So why do you think it picked that out?

Es Lee: So, usually we’d have a lot more information to go off of.

Laurie Segall: Okay.

Es Lee: This was just one conversation.

Laurie Segall: Okay.

Es Lee: So you can imagine, if it was done 50 times, maybe the, the, the way you text Derek might be different from the way you text other people.

Laurie Segall: Maybe because we’ve been, like, more professional over the last year. I bet if we had given it our 12 years, it certainly would not have thought I was 44.

Derek Dodge: But then… So I also looked at the data, and it predicted, that I am 36. And that is correct.

Es Lee: Oh, wow. All right.

Laurie Segall: It’s like your AI favors Derek, which is no big deal. Let’s keep going.

Es Lee: So, the personality profiles, I actually think I included that when I-

Laurie Segall: Yeah.

Es Lee: … when I sent over the information.

Laurie Segall: Mm-hmm (affirmative).

Es Lee: Um, what did you think about that?

Laurie Segall: It was good. It was pretty spot-on. 

Es Lee: It says your top personality traits are empathetic, emotionally aware, dutiful, altruistic, and philosophical.

Derek Dodge: I think that’s totally spot-on for her. I’ve always said that Laurie is one of the most empathetic people I know. So the fact that empathy was the first characteristic that it listed, I thought, was spot-on.

Es Lee: What did you think about yours? It says that your top five are philosophical, empathetic, energetic, ambitious, and fearless.

Derek Dodge: Yeah. I think so.

Laurie Segall: I like his.

Derek Dodge: I’d like some more of that energy (laughs).

Laurie Segall: I like those for him. Yeah. I like those.

Es Lee: So it’ll also look at your personal-… And, and this is just pure math and it just shows you how similar you guys are.

Laurie Segall: Hmm.

Es Lee: Uh, a 91% similarity.

Laurie Segall: I mean, that’s kinda great, right? I… How does that compare to other people’s similarities?

Es Lee: Uh, that’s pretty good.

Laurie Segall: Pretty good?

Es Lee: Well, I mean, here’s one thing: we tend to try to empathize and match the way the other person speaks.

Laurie Segall: Mm-hmm (affirmative).

Es Lee: So, just looking at this slice, it kind of shows, you know, whether, whether or not you are these things, this is how you want to come across to the other person.

Laurie Segall: Hmm.

Es Lee: And so, yeah. I mean, that’s a, that’s a good score.

Laurie Segall: It’s great. Keep going.

Es Lee: Uh, let’s see. Let’s look at the probability that this is a romantic relationship.

Derek Dodge: (laughs)

Es Lee: So we’re gonna ask, “Does Derek have a crush on me?” So it says there’s a 36% chance. It’s a small chance, so it doesn’t seem likely.

Laurie Segall: I think his husband would agree with that, truthfully.

Derek Dodge: That’s… (laughs) Yes. That’s very true. But I, I mean, I would say I have a crush on you in other ways.

Laurie Segall: Yeah. Thank you (laughs).

Derek Dodge: No, I, I think that’s amazing, though. I mean-

Laurie Segall: Yeah.

Derek Dodge: … we have… after looking at thousands of text messages over a year… It’s able to determine that, basically, we’re friends-

Laurie Segall: Yeah.

Derek Dodge: … and that there isn’t any more there.

Laurie Segall: Yeah.

Es Lee: All right.

Laurie Segall: That’s ’cause it did. It categorized what we are, because, technically, we’re, like, business partners if you were to look at us on paper. But, like, actually, we’re really good friends. And that’s how it categorized us.

Es Lee: Right.

Laurie Segall: Although, I mean, it did say I was 10 years older than I am, so…

Es Lee: All right. We’re, we’re gonna have to go back and tweak the algorithm.

Laurie Segall: Yeah. What about, what about these other… There were some graphs.

Es Lee: So, beyond what’s on the app, there’s kind of a lot more behind the scenes.

Laurie Segall: Mm-hmm (affirmative).

Es Lee: Um, so those graphs were things that, you know, are all things that contribute to the analysis that we give to users. But it might also be information overload for people who aren’t looking for it.

Laurie Segall: But, like, is it… ‘Cause I want to know, like, could it tell us how I could communicate better with him or, like, if there were issues that we had recently? Like, it’s okay.

Es Lee: Right.

Laurie Segall: You know? Like, I’m… It’s okay.

Derek Dodge: Please help. Please help us.

Laurie Segall: Yeah. Like, we’re okay. Like, we, we’ve been through some stuff together.

Derek Dodge: Well, what’s so interesting to me about this technology is that, Laurie is someone who I communicate with probably more than anybody, especially at this current phase of building out Dot Dot Dot. But, even with anybody, you can communicate better.

Es Lee: Right.

Derek Dodge: And so if this technology can help us learn how to communicate better to each other and we’re already good friends and we’re already high-performing business partners-

Laurie Segall: Mm-hmm (affirmative).

Es Lee: Right.

Derek Dodge: … well, then, that just becomes that much more valuable-

Laurie Segall: Right.

Derek Dodge: … that this can give us insights about the way we communicate to each other, which is predominantly over text message.

Laurie Segall: Yeah.

Derek Dodge: Laurie sends probably 10 text messages per my one text message. That’s just her style.

Laurie Segall: Actually, the data shows I send 3.6 text messages. Is that what it showed?

Es Lee: Yeah, per day, whereas he was, you know, half of that.

Derek Dodge: Which, in this case, is not, not an, um, indicator of the fact that I’m not interested in her.

Derek Dodge: It’s just simply my style of communication.

Laurie Segall: Yeah. Right.

Es Lee: Yeah. There’s actually a tool in here that shows the relationship balance.

Laurie Segall: Oh.

Es Lee: And that’s actually something that, like, was, carried over from Crush.

Laurie Segall: Hmm.

Es Lee: So what Crush did was it not only showed you the balance of the relationship, but it actually showed you how that changed over time. And I think it, it’s an astute point you make that, okay, you might send a lot more messages than Laurie does. Right? And so, like-

Laurie Segall: I send more messages than him.

Es Lee: Or… Sorry. Yes. 

Es Lee: And people have said before, like, “Okay. Well, that’s why Crush doesn’t actually really work, because I’m just terrible at this.” But if you actually look at your relationship balance over time… Right? Like, I’ve looked at, at enough of these graphs, and it’s almost like you can kind of see, like, the, the ebb and flow of relationships. It’s almost like a dance, in a way.

Laurie Segall: Hmm.

Es Lee: Right? Like, we’ve seen relationships where, you know… Actually, my very first employee, a week into the job, I was like, “You want to see this in action? I can actually just, you know, look at your relationship with your boyfriend.” And we went through and I was like, “Something happened on that day, on that day, on that day, and on that day.” She was like, “Oh, yeah. Shit. Yeah. We had a fight on that… on those days.”

Es Lee: And then I told her, like, “Yeah, the relationship balance has never been so in your favor. Like, he’s never tried so hard.” And she said, “Yeah. We actually had a fight on Saturday, and he’s taking me out to dinner on Thursday.”
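The balance-over-time series Es describes could be computed from nothing more than message counts. A toy sketch, where the weekly bucketing and the (week, sender) input shape are assumptions for illustration:

```python
# Toy sketch of a "relationship balance" series like the one Es
# describes in Crush: each period's share of messages sent by "me".
# Weekly bucketing and the (week, sender) input shape are assumptions.
from collections import Counter

def weekly_balance(messages):
    """messages: iterable of (week_index, sender) pairs.
    Returns {week: fraction of that week's messages sent by 'me'}."""
    totals, mine = Counter(), Counter()
    for week, sender in messages:
        totals[week] += 1
        if sender == "me":
            mine[week] += 1
    return {w: mine[w] / totals[w] for w in sorted(totals)}
```

A week that swings far from the usual 50-50 mark is the kind of anomaly Es points to when he says “something happened on that day.”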

Laurie Segall: That’s interesting, though, like, that you could actually look at the patterns over a long time and see kind of inherently what… I, I mean, I just wonder, like, is that a lot of power that you want or don’t want? And, like, could it be a self-fulfilling prophecy? I don’t know. It’s interesting.

Derek Dodge: I mean, my feeling is this type of data is already in the hands of people like advertisers.

Es Lee: Right.

Derek Dodge: And so why can’t we as consumers have this data in a way that is actionable and beneficial to ourselves-

Laurie Segall: Right.

Derek Dodge: … and, and actionable? And we all have people in our lives who we communicate with in a way that is either frustrating or causes anxiety. And if we had technology that could help us manage those relationships and that communication better, then that would be extremely powerful.

Es Lee: Right. 

Laurie Segall: But to give you the flip side of it, I read something that a Wired writer wrote, and he was saying it’s almost like tarot cards. Right? Like, it is big data analytics, and I believe in it. And, and I’ve been obsessed with this idea for a while.

Laurie Segall: Like, I was interviewing a guy years ago who does predictive data analytics to determine if something bad happens. And in the middle of the interview, he’s like, “I…” like, he could predict if, like, a suicide bombing or something really terrible was gonna happen in, in the area. And he had, like, a certain level of accuracy. And I think you would probably, you would probably be like, “Oh, that makes a lot of sense,” given what you do.

Laurie Segall: And in the middle of the interview, he’s like, “I analyzed all your data on Facebook, Instagram, Twitter, everything you’ve said over the last seven or eight years.”

Es Lee: Mm-hmm (affirmative).

Laurie Segall: And he’s like, “I predict you’re unhappy in your relationship and you’re growing unhappy at your job.” And I was like, whoa, that’s fascinating. When I left my job and that relationship, I called him up. I was like, “How’d you do it?” And he talked a lot about, um… You know, he talked a lot about a lot of the stuff you’re talking about: negative and positive words, the time you tweet, the time you post, all this information that’s out there. And I’ve talked to my tech contacts about it, and I think there’s a lot there. But then there’s also a lot of skepticism, too, being like, it’s also like a tarot card. Right? Like, where you give us some things and we’re like, “Oh, yeah. That, that totally makes sense.”

Laurie Segall: And a Wired writer wrote, um… And so you have to respond to this. But he said, “Whether a text analyzer reveals anything real or not, using one seems to offer a false sense of predictability and a semblance of control over otherwise messy human relationships. Does the emoji mean it’s true love? Did the double text ruin the mood? Am I doing this right? The answers, displeasingly, never live in the app. The guidance there is about as useful as a deck of tarot cards.” So what do you think?

Es Lee: Well, you know, the tarot card and the horoscope industry is huge.

Laurie Segall: (laughs)

Es Lee: And, you know, whether you believe it or not, right, people are actually just looking for another data point. It’s up to them whether they want to believe it or not.

Laurie Segall: Hmm.

Es Lee: So…

Laurie Segall: And then one other thing I’ll go into is, I think the one thing after Cambridge Analytica and the whole debacle with privacy and Facebook that really, I think, wasn’t covered enough was this fine line between micro-targeting and manipulation, and this idea that we know so much about people-

Es Lee: Right.

Laurie Segall: … and we can tell so much that y-… Just like you’re ta-… We’ve spent this whole time talking about. We can tell so much about people now. There is kind of a fine line with this AI being able to, to micro-target or to manipulate. Right? Like, do you worry about kind of the future and, and some of those ethical lines?

Es Lee: Um, I think every piece of technology is controversial and worrying. 

Es Lee: I mean, I, I think the more powerful the technology is, the more controversial it is. And it always comes down to how people use it. I think it’s an inevitability, and I think we just… we should just embrace the things that are, and try to understand it, talk about it. Yeah. And, and, and see how we could kind of live in the world with this tech, because it’s not going away.

Laurie Segall: How will we use it? I’m sitting here with Derek. How are we using this? So if this is like version… This is like, what, 1.0? Right?

Es Lee: Yeah.

Laurie Segall: Like, 2.0? 1.0?

Es Lee: Yeah. Somewhere. Somewhere.

Laurie Segall: You know, what does this look like in 10, 15 years? How are we using this?

Es Lee: I think this technology has the ability to simplify our lives. I think the choice will always still be ours on whether we take its advice or not. Yeah. I mean, I don’t know where this is gonna go. I just think, if the people who have this information kinda do it, I guess, do it for good reason, that’s all we could really ask for.

Laurie Segall: Hmm.

Es Lee: I can see that didn’t seem like a very, you know, gratifying response.

Laurie Segall: No. It’s not, it’s not that it’s not gratifying. I, I think it’s optimistic.

Es Lee: Right.

Es Lee: I think the- there’s a double-edged sword with every, with every technology.

The microphone was off, and then Es said something that I thought was fascinating, so I asked him for permission to include it in the episode (since technically the interview was over). He said yes. Es says Mei has over 30,000 comments from users, and of those comments, only 1% are about privacy. Many of his users who’ve experienced Mei don’t mind giving over their data; in fact, one of the most common requests, he says, is to give more. Although you have to remember, Mei’s users have shown a willingness to share more than the average person. Es says users ask to just turn on the microphone and record all day, to get a much larger data set to analyze. So could more data points about our conversations give us more feedback that could lead to stronger relationships? And when given the choice, if we saw the power of Mei to help us learn more about each other, to communicate better – is the tradeoff worth it? We often hear these very strong arguments for privacy. But as Es says, the feedback he gets is “I’ll give more if you give me more information that’s valuable to me.”

Although of course there’s a lot of gray area in between.

I’ll leave you with that.

I’m Laurie Segall, and this is First Contact.

For more about the guests you hear on First Contact, sign up for our newsletter. Go to firstcontactpodcast.com to subscribe. Follow me, I’m @lauriesegall on Twitter and Instagram, and the show is @firstcontactpodcast. If you like the show, I want to hear from you. Leave us a review on the Apple podcast app or wherever you listen, and don’t forget to subscribe so you don’t miss an episode.

First Contact is a production of Dot Dot Dot Media, executive produced by Laurie Segall and Derek Dodge. Original theme music by Zander Singh. Visit us at firstcontactpodcast.com.

First Contact with Laurie Segall is a production of Dot Dot Dot Media and iHeart Radio.