First Contact with Laurie Segall is a production of Dot Dot Dot Media and iHeart Radio.
Laurie Segall: In the future, we could have these chips inside of our brains. Do you think they could be hacked?
Moran Cerf: Once the chip is inside your brain, it’s just a button press. So instead of changing the mind of one person at a time, we can change the minds of everyone in Michigan and make them vote one way or the other. Suddenly, you know, creating a misinformation ad on Facebook looks like peanuts.
Laurie Segall: What if we could order our own dreams? And could our thoughts be hacked? Will technology make some of us superhuman? And If so, would we create a whole new species? Is death really the final step? Or could our brains answer vital questions once our bodies are gone?
These are topics from a conversation with Moran Cerf – He’s a professor of neuroscience at the Kellogg School of Management. I’d also describe him as a creative – he’s a brain hacker. At one point he was also a bank robber – we’ll get into that. But really – he’s a student of humanity. He likes to test the boundaries. See how far he can go. And challenge us to anticipate what’s coming next, even the worst case scenario. Just talking to him is like living in an experiment – I can’t help but question if he’s manipulating my memory or changing my mind while we’re doing the interview. But I think that’s the point. We talk about disinformation and manipulation in this era of tech, but things get pretty personal when it comes to the type of research Moran does. He focuses on the brain. He focuses on your mind and your sense of self.
….and how that sense of self is increasingly hackable in the modern era.
I’m Laurie Segall and this is First Contact.
Laurie Segall: This show is called First Contact and I talk about my first contact with the folks that we have on. And I- I was trying to think back to our first contact and I think I wanna say it was Ashley Madison, but, but let me-
Moran Cerf: (laughs).
Laurie Segall: – Caveat that with, it wasn’t like we met on the cheating website, Ashley Madison. Um, we met because of a story I was doing on Ashley Madison for, for a show I did called Mostly Human, and you were explaining to me all this research on how people cheat and like when they’re at their weakest moments.
Laurie Segall: Can you just like explain that really quick?
Moran Cerf: Yeah. So I think that there’s by now a lot of scientists who look into that and they try to understand basically in big words, bad behavior. Why do people do bad stuff? We know that it’s a bad idea and somehow our brains allow us to do that.
Moran Cerf: And it boils down to the fact that we use an equation in our mind to decide whether to do something bad, and the equation involves parameters like: how likely am I to get caught? If I do get caught, how big is the punishment? And mostly, the one that I find interesting is: if I do all of that bad stuff, will it change my perception of myself as a good person?
Moran Cerf: And the reality is that most people find ways to justify to themselves why they’re great people, even if they do bad stuff. So, most prisoners, if you take them and ask them, why did you do it? They say, “Yeah, I did steal the bread, but the reality was that it was stale and my family was hungry.”
Moran Cerf: And you kind of find a way to tell a story, and this is a property of the brain. The brain has to think that you are great when it does stuff, and to frame the entire world around these things. So, when we ask why people cheat, it’s mostly because they can find a way to tell a story that they’re not cheating.
Laurie Segall: That’s so crazy and, and you know all this stuff because you study the brain?
Moran Cerf: Yes.
Laurie Segall: Uh, tell us a little bit about yourself. I, I mean you have such a fascinating background. You’re like a hacker turned neuroscientist.
Moran Cerf: So I have, I guess three different hats that I wear or used to wear at some point. One of them is the one of a hacker, where I spent nearly 15 years breaking into banks and government institutes to try to find flaws in the security, and teach them how to better save themselves from villains. That’s the first career.
Moran Cerf: Then a second career, that spans the last 15 years, as a neuroscientist trying to study the brain and understand how it works in a unique way that kind of borrows from my traditional hacking techniques. And then the last five years, I’ve been working as a business professor as well.
Moran Cerf: Where I teach companies and MBA students how they can use the knowledge about our brain, and about hacking to essentially understand customers better. To create value in a different way. To align desires of people with outcomes and mostly how other people are influencing us and whether we can stop it or not.
Laurie Segall: And I, I mean, you used to hack into banks and what not?
Moran Cerf: Yeah, so back in the early, oh, I guess mid 90s is when I started as a kid up to about 10 years after to 2005, 2006… I was working in a company that did what we call active marketing. Which is, we used to go to banks and first hack into the system and steal money. And then go to the board and say, “Look, your bank is insecure. We were able to steal a million dollars yesterday by just doing this and that.
Moran Cerf: We’re gonna help, you now, secure yourselves better from bad villains who wouldn’t tell you that they did it, but also we’ll take a cut of the money that we’re saving you.” And what we did was essentially just that.
Laurie Segall: How’d you get into that?
Moran Cerf: I started as a kid. So I kind of grew up in the 80s as computers did, and I basically knew how they worked. And I started by hacking-
Laurie Segall: You grew up in, in Israel?
Moran Cerf: Yeah.
Laurie Segall: Okay.
Moran Cerf: I was born in France and I was raised in Israel. And in Israel, in the 80s computers started to become a household thing. And if you wanted to add one life to a Mario game, the only way to do it was to hack. So I did that. And at some point the only way to do it-
Laurie Segall: What do you mean you added a life? Wait, wait wait. Slow down. What are you talking about? What?
Moran Cerf: So you play Mario, this like old game and you’re getting to level three where you can’t pass some monster, you try and you try and you try and doesn’t work. And then someone teaches you that you can actually take a snapshot of the game before you die, and just after you die. And see what’s the difference between the image of the computer before and after.
Moran Cerf: And you see that one number changed from two to one. You say, “I guess this is the lives. So if I just put it back to three, suddenly let the game run and I suddenly have a third life.” That’s as simple as I can depict what we did back then, but it was enough to teach us how hacking works.
Moran Cerf: It has changed a lot in the last 20 years, but the concepts are the same. You take a snapshot of before and after an event happens, you see what changed, and you learn how you can manipulate that. It’s not too far from what we do with the brain: we look at two events and try to understand what happens in the brain.
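The before-and-after snapshot trick Moran describes can be sketched in a few lines of Python. This is a toy illustration on a fake byte array standing in for game memory, not real game-hacking code; every name and value here is made up:

```python
# Toy sketch of the snapshot-diff technique: capture "memory" before and
# after an event (losing a life), then look for the byte that changed.

def diff_snapshots(before, after):
    """Return (address, old_value, new_value) for every byte that differs."""
    return [(addr, b, a)
            for addr, (b, a) in enumerate(zip(before, after))
            if b != a]

# Fake game memory: the lives counter is hidden somewhere in these bytes.
before = bytearray([7, 0, 3, 9, 2])   # before dying: lives = 3 at address 2
after  = bytearray([7, 0, 2, 9, 2])   # after dying:  lives = 2 at address 2

changes = diff_snapshots(before, after)
print(changes)  # [(2, 3, 2)] -> the byte at address 2 dropped from 3 to 2

# The "hack": write the old value back, restoring the lost life.
addr, old_value, _ = changes[0]
after[addr] = old_value
```

The same diff-two-states idea scales up: compare many snapshots across many events and the candidate addresses narrow down quickly, which is the analogy Moran draws to comparing brain states across two events.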
Laurie Segall: And so you went from being a kid who’s hacking Mario games and then all of a sudden you’re hacking, you decided to hack things. Did you ever do any-
Moran Cerf: (laughs).
Laurie Segall: I mean, you’re very, I feel like you’re very proper and you do like TED stuff now, and so like you have a good name about you now. Did you ever do anything like kind of illegal?
Moran Cerf: Oh, tons. So, so first of all, between age, let’s say 16 to age 21, I was a soldier in the Israeli Army doing that as a soldier.
Laurie Segall: What does that mean?
Moran Cerf: So I was recruited to the Israeli army to be in a team of hackers who do the same thing I did as a kid for Mario games, but now to big, you know, governments and nation states, and really apply the same methods to large-scale army stuff.
Laurie Segall: Take us on a mission.
Moran Cerf: (laughs).
Laurie Segall: Where are we going?
Moran Cerf: Um, I think that, uh, back then when I was a soldier, the most kind of controversial things were encrypted files. So, files used to be encrypted with some codes that were not too hard to crack. But there was of course a need for the Israeli intelligence to get access to those files. And they would bring a file to me and say, “This file is encrypted, there’s this password, find out what the password is.”
Moran Cerf: And sometimes even not just the password, but the thinking of the person who created the password. So if I give you five passwords and you crack them, and I see that all five of them are the birthdays of your ex-boyfriends, I can start understanding how you actually think of passwords. So I can crack the system by which you create them. So this was my job as a soldier for many years.
Laurie Segall: I mean, it sounds like you were a hacker, but you were always kind of very good with people too, right? Because it doesn’t sound like you can be a hacker without, the kind of hacker you were without being very astute to, to human beings.
Moran Cerf: I would say that more than 90% of hacking is psychology.
Laurie Segall: Tell me about the first time you broke into a bank. Did it get, did it give you a rush?
Moran Cerf: So the first bank-
Laurie Segall: Were you nervous?
Moran Cerf: Bank robbing was after I finished the army. We started this company and we were mostly hacking into banks virtually, as in, trying from home to get into it. But in our contract was some clause that allowed us to actually physically go and rob the bank.
Moran Cerf: So, not only were we supposed to just hack into the bank, but we were supposed to also, in theory, go to the banks and see if the cameras are pointing in the right direction. Or if someone left a Post-it note with the password next to a computer, which are physical ways of hacking. And at some point we decided that we’re gonna actually exercise the right to rob a bank old-fashioned, you know, ski masks and the-
Laurie Segall: What?
Moran Cerf: – Going to the safe.
Laurie Segall: Wait – but can you like take me to the scene? Because it seems like you’re just like talking casually about putting on ski masks and robbing a bank. Like, were, were you guys like in a cafe somewhere and you’re like, “We’re gonna rob this bank with ski masks?”
Moran Cerf: One of the people in my team said, “You know, we are allowed to do that. We never tried. We should try this as a small bank, a small contract, nothing bad would happen if we tried.” And we did.
Moran Cerf: And we sat in the office on a Friday afternoon and, you know, took drawings of the bank. Like, like every movie that you’ve seen, this was our life for a weekend. We decided whether it’s better to come in from the entrance or from the back, who should be in the getaway car. All of those things.
Moran Cerf: As you imagine a bank robbery, only that we picked a small one with only one teller. We had to do a lot of like preparations with the legal team to make sure that if we get caught, someone can call and say, “Hey, these guys are my people and don’t put them in prison. They actually were allowed to do it.”
Moran Cerf: A lot of the behind the scenes that I think bank robbers don’t spend time doing, but at the end of the day we came to the bank and you asked me about the moment before, kind of, when you, you know, boys turn into men, this was a moment.
Moran Cerf: Like I remember with all the legal precautions and with all the preparations, the moment you enter a bank knowing that in 10 steps, you’re gonna go to a person who doesn’t know anything, and tell her that it’s a bank robbery and ask her to give you the key to the vault and that’s gonna be a stressful moment for everyone, is scary. I don’t know how people do it for a living. Just for a day was really hard.
Laurie Segall: Did you practice before? Like did you say those words out loud? What did you … Tell me what you said.
Moran Cerf: Yeah, I think, I think we had a really, really s- kind of specific wording that we provided, but it was along the lines of, “This is a bank robbery. Please give us the key to the vault.”
Laurie Segall: Sounds really polite.
Moran Cerf: (laughs). Yeah.
Laurie Segall: I don’t know. That sounds very, very polite.
Moran Cerf: It was very polite.
Laurie Segall: And it worked?
Moran Cerf: It worked a few times. It failed as many. We ended up in prison multiple times. The police comes, takes you to prison and then a couple of hours later you get out of prison because a lawyer calls a lawyer and everything gets organized. But in the first few hours, imagine a police officer getting a call showing up and being told by two kids, “No, no, it’s okay. We’re allowed to rob the bank. Not a problem here.” That’s-
Laurie Segall: Sure.
Moran Cerf: Yeah. (laughs).
Laurie Segall: Like sure you’re allowed to rob the bank. But this is all kind of like hack- this idea of hacking for, for good in some capacity?
Moran Cerf: Mm-hmm (affirmative).
Laurie Segall: Um, this is like early phases of white-hat hacking. People don’t understand what white-hat hacking is.
Moran Cerf: Yeah.
Laurie Segall: That’s what it is.
Moran Cerf: It was, there was a lot of, a lot of kind of learning for the system around this. Like, we didn’t have exact names for what we did. We didn’t have, like, a way to hire people because no one knew what it was. Now it’s a lot more organized. Now there’s companies who do that, and all the banks are required by law to have hackers try to get into their systems, I think once every quarter, and report to the government how well it went. It became a lot more regulated.
Laurie Segall: Yeah. Our world is so vulnerable and there are good guys and bad guys. And you actually need good guys like you at the time before you turned into like a neuroscientist. We’re gonna get into all that, but breaking into these places to show how vulnerable they are.
Moran Cerf: I think it becomes even more complicated right now, because now there’s nation states involved. And I was just in DC a few weeks ago in a conference where the big questions that were addressed had to do with when is hacking a war step or not. So if one country hacks into another’s systems and, say, shuts down the power grid, and in doing so maybe closes a hospital for a few days and maybe makes a few patients’ lives miserable, can you respond with a missile?
Moran Cerf: The rules are not clear right now. So most countries decided it’s not, and that hacking is its own thing and you can hack back but you can’t respond with military. And I think that the US at least is trying to think right now, whether it should change this policy. So I was there thinking about how hacking can be large scale military operation.
Laurie Segall: What do you think?
Moran Cerf: Wha- generally I think that whether it is what everyone agrees on or not, I think at the end of the day, it will become the case. Because right now, you can do real damage with hacking.
Moran Cerf: So it’s no longer confined to, “Okay, I stole your data.” You can get into someone’s pacemaker and shut it down and kill them with a hack.
Laurie Segall: Mm-hmm (affirmative).
Moran Cerf: So now I think that when the hacks lead to civilian casualties at that level, countries are gonna start responding with kind of force, not just with hacks.
Laurie Segall: That’s crazy. And I, I wanna get into brain hacking and all this stuff. Because this is the stuff that you’re doing.
Moran Cerf: Yeah.
Laurie Segall: You’re doing now that’s so fascinating to me. So for our listeners it’s like you’ve always been my guy that I called, and I’m like, “Hmm, I’m working on this story and I know it’s kind of weird, but what’s the future of X?”
Moran Cerf: Mm-hmm (affirmative).
Laurie Segall: You’re involved in like the weird, weird stuff. Of like the stuff that people think is not gonna happen, but it actually is. So like what is the craziest stuff that’s coming down the pipeline when it comes to the future of our brain and our thoughts?
Moran Cerf: So, crazy things, with still no evidence that they’re actually gonna be real, have to do with connecting brains directly. So that’s something that people talk about, but no one has proven that it actually would work. But this would mean that I somehow create a wire between my brain and your brain and connect them.
Moran Cerf: And in doing so, I’m not just allowing thoughts to flow between your and my brain, but actually, the theory suggests, creating a new third entity that is the sum of the two of us. So, what will happen if we do it is that, immediately, a third entity emerges. It doesn’t think it’s you or me. It thinks it’s its own entity and it thinks of our brains as part of it.
Moran Cerf: Total science fiction in the sense that we don’t know how to do it, but total science in the sense that it’s coming from real theory that looks at how brains look when they’re connected. That’s the most extreme I’m thinking about right now.
Laurie Segall: Wow.
Moran Cerf: Another as extreme and something that is being explored right now is the question of what we can do with a brain of a person who died. So, we think of death as the final step of a person. They are no longer there. But it turns out that when you’re dead, your brain still has juice in it, and the neurons can still work for a few minutes to a few hours.
Moran Cerf: And the question that scientists are asking is, can you essentially ask a person’s brain questions after the person is not there? You know, like you kill someone because you think they’re the killer. And then in the few minutes after they’re dead, you ask them, “Did you kill this person?” And you get the true answer because there’s no more boundaries. You can access the brain.
Laurie Segall: What? No, that’s crazy.
Moran Cerf: It is.
Laurie Segall: Could that actually happen?
Moran Cerf: So I think that right now we do a very, very limited version of that, in that we take the tissue from the brains of animals who we call it sacrifice, but actually kill, put that in microscope and still inject currents into this tissue and it still does things for us.
Moran Cerf: If you know exactly where to inject the, the current, such that it will activate a process that is exactly like the process that happens in the brain, you can essentially read the output and know what’s going on. So, think of the following simple but useful example: to look at a picture and tell if there’s a cat or not in this picture is something that humans are perfect at.
Moran Cerf: And machines are getting really, really good at, but they still work much harder. If you take a baby and you tell it, “Tell me if there’s a cat or not here.” It would know how to point to the cat or not. So we’re terrific at that and machines are terrible at that.
Moran Cerf: So, at the very least, you can take a person who died at noon, and for the few minutes while their brain is still alive, just show it pictures and ask it to tell you if there’s a cat or not. And you can now classify images using dead people’s brains, and there are people dying in the millions every day.
Moran Cerf: So, you can just use the brains of people who died to do chores for you while they’re not there to say, “Okay, I’m done with that.”
We’ve got to take a quick break to hear from our sponsors but when we come back the unintended consequences of technology meant to enhance our brains. Chips implanted in our brain could make us smarter, but could our thoughts become hackable? We’ll dig in after the break.
Laurie Segall: I’m very interested in Neuralink technology. You know, everyone’s talking about this idea of implanting chips in our brains to help enhance our brains, make us smarter, make us feel kind of limitless. Talk us through this. How close are we and, and what are some of the benefits?
Laurie Segall: And then let’s get into the, the unintended consequences, which is always kind of my-
Moran Cerf: (laughs).
Laurie Segall: The, the part that I love to kind of jam on.
Moran Cerf: So, so when you think about putting a chip in the brain or brain implant, there is kind of three things that you have to deal with, and the neuroscience is the easiest of the three. That’s the one we pretty much solved. We still have to tweak it and it’s still something that we need to make sure it works kind of chronically cause it’s in the brain forever. But that’s actually the easiest part.
Moran Cerf: We know how to stimulate the brain. In fact, right now there’s about 40,000 people in the US that already have brain implants in their heads for clinical purposes. Something that helped them deal with Parkinson’s or clinical depression or some other problems.
Moran Cerf: So we know how to put a chip in your brain that speaks to your brain and controls it and works with it. That is the easy part. Even though people think it’s hard. The two components of this endeavor that are still hard is, one, how do you get a chip inside the brain? Right now, the only way to get inside the brain is to drill a hole in the brain and put a chip directly inside.
Moran Cerf: How to get it inside is a big problem, just swallowing a pill wouldn’t work here. So we have to find a way to get into the brain, passing all the barriers that the brain created for chemicals to get in. And it’s not easy at all. So, that’s where most of the work is spent. And the other one is purely legal and regulatory.
Moran Cerf: So, even if tomorrow we found a person who agrees to have their brain, exposed and says, “Put the chip inside my brain, I’m okay with that.” It’s still not allowed. Doctors wouldn’t be allowed to do that. But also, it’s something that we’re worried would have consequences that we don’t know of right now. That’s the question you asked me.
Moran Cerf: We know in theory how to put a chip inside your brain, and to give it the powers to help you, but it’s unclear if we want that as a society. Do we want people to have more IQ because we gave it to them, and leave us mortals behind? That is the question.
Laurie Segall: Right? I mean, is … Right now, some of this, this technology is being used to help people, right?
Moran Cerf: Mm-hmm (affirmative).
Laurie Segall: People who need to move limbs that aren’t moving. I mean, it’s actually already in use, right?
Moran Cerf: Mm-hmm (affirmative). One person in France, in Grenoble, was given a brain implant that controls an exoskeleton and allows him to move. So he was in a wheelchair all his life, and now he walks. With limbs controlled by his brain, the same way he controls his biological ones, but now he controls robotic ones. But he, for all purposes, walks now.
Moran Cerf: That’s where it goes clinically. We find ways to use it to help people. People who lost limbs, people who lost some functions, who lost sensations, we can give it to them. And those are the people who get the chip right now in the brain. The fear/opportunity is that, suddenly people would want it for pure enhancement.
Moran Cerf: So when Elon Musk and his team talk about that, they’re not talking about necessarily helping people who lost the function. They’re talking about basically giving it to Elon Musk, and making him smarter, better, having the entire world’s information in his brain. And that is when we start to talk about things that are beautiful in what they give us, but also risky.
Moran Cerf: So you can imagine that having all of Wikipedia in your brain, where a simple query can get you all the data you want, would be remarkable. You don’t have to study anything history-wise. You just have to ask the question, when was the French Revolution? And the number 1789 is gonna appear in your mind, the same way when you ask how much is two plus five, the number seven appears in your mind without having to process it.
Moran Cerf: You don’t have to kind of do the same thing you did with your phone, where you type the digits and kind of ask, “How much is 11 times 47?” Get the answer and now read it. It will just emerge. That’s fantastic.
Laurie Segall: What else? Let’s do more. So it’s like, if I hear a song I can think, “What’s that song?” And it just tells me?
Moran Cerf: Yeah. Yes, you can probably immediately compare it to all the songs out there and see what it reminds you of. And, you know, do this thing like, uh, “People like you also liked that song,” so it will immediately kind of have a playlist of things that you prefer. Because it also knows that you enjoyed this song, so it will just choose the next one that you would enjoy as much.
Moran Cerf: It will tell you when you need to go to sleep. Humans are terrible at sleeping. They always delay sleep, and it’s hurting us. It will just shut down.
Laurie Segall: Uh huh, I’m horrific at it.
Moran Cerf: It says like, “It’s time to go to sleep. I’m shutting the brain off.” It will, uh, choose the best food for you out of the menu items, so you will eat the healthiest things for you. It will, uh, tell you how to be more concise in, uh, speaking, so it will pick the words that would help you, like a thesaurus in your brain. It will-
Laurie Segall: It would probably do a better interview than I’m doing right now. Right?
Moran Cerf: It will do a better interview than I’m doing right now. (laughs).
Laurie Segall: Wow.
Moran Cerf: It will be, you know, high-frequency trading in your mind. Instead of having to go online to computers that tell you what stocks to invest in, it will do … So all, all the good stuff is great.
Laurie Segall: I want it. Where do I sign up?
Moran Cerf: So here is where I kind of have to stop you after you say you want it. The question that you should ask is, let’s say I do that, but so do all the other people around me, and suddenly there are like different chips in the market. But you can afford only the one that gives you 10 IQ points, but your friends can afford the one that gives them 100 IQ points. Suddenly all your friends become smarter and you’re just a little bit smarter.
Moran Cerf: … what’s, what’s gonna happen? So right now, we have this kind of understanding that there’s inequality in the world, but it’s only inequality at the level of money, pretty much.
Laurie Segall: Yeah.
Moran Cerf: And resources. So, one person is, uh, rich, affords to buy the best food out there. One person who is poor affords only some foods and maybe gets sicker. And that’s, that’s the inequality. But at the end of the day, they both have to eat. They both speak the same language. They both live in the same day, breathe the same a- air, they, they kind of interact in the same world.
Moran Cerf: Once you think about making super humans basically those limitless people, they might be totally different species than us. So, so the example that I sometimes give is us compared to apes. So the, you know, bonobos that are really, really clever animals, they basically are 98% us in terms of DNA.
Moran Cerf: There’s only 2% difference between us and them and it leads to an enormous difference in practice. Like, we treat them as animals and we are humans. You know, we put them in cages, we give them bananas, but we don’t really talk, try to interact with them. We don’t really say, “Let’s ask the bonobo what she thinks about this particular idea in politics.”
Moran Cerf: We treat them as animals. If, you know, you think of the smartest ape out there that can communicate using symbols, we say, “Look at this one. It, it almost interacts like a two-year-old kid. How amazing is that?” Now skip to a world where you can put neural implants in the brain. Those superhumans with the 150 IQ points above us will probably think of us like we think of the bonobos.
Moran Cerf: They would say, “Look at this one. She is so smart. She can do differential equations in her mind, just like two-year-old Timmy here, uh, from our species. So beautiful, let’s put her back in the cage now.” That’s kind of the world that we can imagine if we start creating this, and this means that there’s going to be inequality at a level that we’ve not experienced.
Moran Cerf: Where the rich and the poor in IQ really are two different species. They’re not just like two people who eat a little better than the other. They’re, you know, communicating differently. They might have technologies we don’t understand. Really a different world.
Laurie Segall: Do you think that this is part of the, because you hear folks like Elon Musk talking about this, which is with such enthusiasm. Is this part of the conversation do you think in Silicon Valley?
Moran Cerf: I think that because it’s not technologically something that we know how to build right now, people push it to the side. And, you know, they kind of deal with that later. It’s the same, the same way we talk about AI. We kind of say, “Yeah, yeah, one day it’s going to be smarter and better than us, but not right now. So let’s push it to the side. And let’s not talk about it.”
Moran Cerf: And I think that, that’s a mistake. I think that in an ideal world, we should think about things before they become reality once we start to explore them, and prepare for them.
Laurie Segall: So, I think it’s actually this larger issue of like, well the tech titans are building out this technology. A few folks like Elon Musk talking about it, politicians don’t quite understand, anything about it.
Laurie Segall: We have people like you who are talking about it, in, in theory as it’s being built out, but there’s not one like larger entity that’s saying, “Okay, wait guys, we’ve got to figure this out.” And then you have like China over here doing all this crazy stuff with different boundaries than, than we have. So it’s, it’s kind of a whole stew of interesting problems that when put together could, could provide for a dystopian future if we’re not careful?
Moran Cerf: Yeah. So I think that you’re absolutely right. There’s this really big gap between what regulators are talking about, what people are talking about in bars in Silicon Valley, and what we talk about as scientists. I think that if we raised the bar by starting to ask politicians those questions, they would be required to learn about that and they would know.
Moran Cerf: So I think that here is the job of society to just ask questions and embarrass them a few times and then they would do a better job. I think that’s what’s gonna happen if we start asking that repeatedly in town halls.
Laurie Segall: Well. So speaking of politics, let’s talk about the future of disinformation. ’Cause what I’ve always been interested in is, while everybody’s kind of yelling about like one issue, and, and this is why I’ve always kind of related to you. You’re kind of like 10, 10 steps ahead being like-
Moran Cerf: I’m blushing.
Laurie Segall: Yeah, well but it’s like you’re 10 steps ahead being like, no, no, no. Like you’ve got to pay attention to this because this is like coming down the pipeline and no one’s even looking. And you said something about
Laurie Segall: …someone being able to write code into your brain, and like change your mind. This is where like, it’s like the emoji with like, your head blowing up kinda thing.
Moran Cerf: (laughs).
Laurie Segall: Should we go there and terrify people?
Moran Cerf: So I think the bad news for people is it’s already happening. This is one of the things that, uh, we’re not talking about like far future. It’s something that we do in the lab every day now. Changing people’s minds, writing small thoughts, and the technology to write big thoughts is already in testing. So we should tell everyone about all of them.
Moran Cerf: So, in a way, first let’s start with the kind of, baseline, which is marketing has been doing that for the last 80 years anyhow. People have been finding ways to get into your mind and change it in small deviations all the time. We’re just not too worried about it because we think that we know how to do that.
Moran Cerf: The reality is that we don’t. Reality is that, if a company decided that they wanna target you and change your mind with all the might of their, uh, you know, marketing team, they could do that. This is why all of the companies in Silicon Valley are hiring neuroscientists right now to help them in the marketing world basically to kind of apply addiction methods to get you to spend more time looking at their ads or clicking on their content. That’s-
Laurie Segall: I feel, wa- wasn’t that like very like 2000 and like 14?
Moran Cerf: Yeah.
Laurie Segall: Are, aren’t they on to like at least publicly being like, “We’re not doing that anymore.” No?
Moran Cerf: I think that, I think that they are just more efficient and more clever about it, but I think that it’s the same. My students, when they finish their PhDs, are torn between going to other universities and the temptation to go to Silicon Valley and be hired by the same companies we know from 2014, as employers who want them to do the same things they did as neuroscientists, in the service of those companies.
Moran Cerf: So this is, this is kind of the baseline: that marketing has worked. You don’t know if you like Coca-Cola or not. You have no idea if you really like it, because your brain has been trained for 25 years to think that this is good, and the value of sugar was kind of aligned with how much sugar is in Coca-Cola, such that this is now what you think is good. You really have no idea.
Moran Cerf: It’s really hard to disentangle what you think from what you were trained.
Moran Cerf: Okay, so that’s level one. Level two, we’ve been working in the last 15 years as scientists on changing memories. And what changing memories does is, it creates a new narrative for your life. How do you do that? There are simple ways and complicated ways.
Moran Cerf: The complicated ones involve taking your brain in moments where it’s vulnerable, when the guards are down, and changing things. One of those moments, for instance, is your sleep. So, right now there are studies that show that when you’re sleeping, we can find a specific moment where your brain is listening to the outside world and actually rewrite stuff into your brain, and you will wake up with a different memory, not knowing that someone changed it. That’s the complicated one.
Moran Cerf: But we’re working on it in the lab right now. So it’s something that’s happening.
Laurie Segall: So you’re like sneaking into your students’ rooms while they’re sleeping and messing with their brains?
Moran Cerf: Essentially we bring them to the st- to the study by telling them we’re gonna, uh, have a study on a sleep pattern, go to sleep for a couple of hours in the lab, but then we’re not telling them what the study is really about. And we listen to their brain and wait for it to get to the right moment where it’s kind of listening to the outside world.
Moran Cerf: And then we use olfactory cues, smells essentially, to inject ideas in their mind and change those ideas using other smells. And then when they wake up, they have different thoughts. The, the example that we do right now is, uh, smoking. That’s not a bad thing, because we are still a science lab.
Moran Cerf: So we bring a smoker to the lab. We have him go to sleep. We wait for him to get to the right moment where his brain thinks about smoking or thinks about the past, and we spray the scent of nicotine into his nose to make him think about smoking.
Moran Cerf: Then we spray the smell of rotten eggs, which is a smell that’s known to penetrate the brain, make you have bad thoughts, but not wake you up. The pairing of those two creates a connection in your brain between smoking and something bad such that when you wake up, you don’t wanna smoke anymore for a few days and you have no idea why.
Moran Cerf: Like you just, “I don’t feel like smoking.” So this was me changing your brain, behind your back, in the course of a few hours, to something that we think is positive. But as evidence, it suggests that we can do anything. We can change your brain to anything.
Laurie Segall: Like what else? What, what else could you do?
Moran Cerf: So, there’s been studies that looked at trying to make you eat healthy. So you go to sleep. They kind of inject an idea into your brain about what you should eat when you wake up. When you wake up, they say, “Here is a buffet, choose what you want.” And people choose healthier options because you’re injecting the ideas.
Laurie Segall: Mm-hmm (affirmative).
Moran Cerf: There’s one group that looks at racism. They take people, they test how racist they are before they go to sleep. Then they do something to their brain and when they-
Laurie Segall: How do they test how racist they … How do you get like a casual test of your, how racist you are before you go to sleep?
Moran Cerf: (laughs).
Laurie Segall: By the way, I, the, the thing that’s so funny about you is you’re just like, “Oh, and we just test how racist you are before you go to sleep.” It’s like what?! What do you mean?” (laughs).
Moran Cerf: So this is a, uh, a test that was invented, I think at Harvard a few years ago, called the IAT, the Implicit Association Test, where you basically show people things like pictures of African-Americans and Caucasians. And you ask them to put the African-American in the bucket on the left by pressing the left arrow and put the Caucasian in the bucket on the right by pressing the right arrow, and do it as fast as you can without making mistakes.
Moran Cerf: So a picture appears and you have to quickly put it left or right. And then they show you words. And some of the words are positive or negative. So love, beauty, perfection, versus bad words like anger, disgust and so on. And again, you have to put them in the buckets on the left and right.
Moran Cerf: And then they say, “Now, we’re gonna show you either a face of, say, a Caucasian person or a nice word, put them in the same basket, or we’re gonna show you a bl- black, African-American face or a nasty word, and put it in the right box. And again, do it as fast as you can without making mistakes.”
Moran Cerf: People do it still very, very fast, and then you say, “Now we’re gonna reverse it. We’re gonna show you either an African-American face or a good word, and put those in the right basket, or a Caucasian and a bad word, and put those in the other basket.” And suddenly people start making mistakes.
Moran Cerf: Because they have an implicit association that a black person is bad, a white person is good. So it’s harder for them to do the reverse association. They do it either slower or make mistakes, and that’s showing that you have some kind of implicit bias against African-Americans, even if you don’t exercise it. You don’t really act on it.
Laurie Segall: Oh God.
Moran Cerf: And it’s true even for African-Americans.
Laurie Segall: Yeah.
Moran Cerf: Even they, they fail the same test. And we do it with women and wages. It shows that people have a bias towards paying women less than men.
Laurie Segall: They should do that at like tech companies too. Like, you know, when they need to hire more women or have more diversity, the tech companies actually-
Moran Cerf: So, all of those studies show that most of us have these biases.
Laurie Segall: Yeah.
Moran Cerf: You are kind of trained by society to have those. And whether you’re a woman or a man, young, old, black, white, all of those you still fail the same tests. But up to now, this was the case and there was no way to change that. What they do right now is they take the test first. They show that you have the bias, then you go to sleep.
Moran Cerf: They basically try to kind of use olfaction or auditory cues, to essentially remind you that you should break the bias in the right moment. And when you wake up you take the same test, and what we show is that people actually change their biases. They become a little bit less likely to make a mistake when they see an African-American face and a good word and vice versa.
Laurie Segall: Wow.
Moran Cerf: So, all of those are in the realm of the hard things, because they involve getting a person to the lab, putting stuff on their head, having them go to sleep, getting the right moment. Someone has to look at your brain while you’re sleeping to know that you’re in the right moment, and do stuff. It’s complicated.
Moran Cerf: It’s still very clear what you need to do, but it’s a cumbersome process. There is a much easier one that is done in a lot of labs right now. We essentially convince a person that a memory is not what it was just in words. We have a study that we’re doing right now, where we bring you to the lab, and we ask you to make a simple choice.
Moran Cerf: We show you two pictures of two guys that you don’t know. And we say, tell me, who do you find more attractive? The guy on the left or the guy on the right? And we have cards in our hands with those pictures.
Laurie Segall: (laughs).
Moran Cerf: So you might point to the guy in the left-
Laurie Segall: Is this Tinder you’re doing? It’s just like a-
Moran Cerf: It’s … Basically we, it’s a version of Tinder or Hinge.
Laurie Segall: Okay.
Moran Cerf: You see a picture and you have to swipe left or right. Only that you see two guys and you have to choose which one is better.
Laurie Segall: Okay.
Moran Cerf: So you have to choose one. So we show you two guys and we ask you, “Who do you find more attractive?” And you say, “The guy on the right.” And then we give you the card that you selected, and we ask you to hold it in your hand and explain to us why you picked this person. So you might hold it in your hand and say, “I really like his smile.” And we say, “Fantastic. Let’s pick another pair.” So we pick two new cards, two different guys.
Moran Cerf: You do that 100 times. So every couple of seconds you get a new pair and you make a choice and you explain it.
Moran Cerf: And here is the trick. Every now and then, let’s say every 20 trials, we give you the card you didn’t choose. So the person that gives you the cards is a magician and uses a sleight of hand to give you the other option. So if you chose A, he would give you B.
Moran Cerf: And what we see is that, first, people rarely notice that they didn’t receive what they chose. That’s kind of step one in generating the memory. In step two, when we ask them to explain, they come up with an answer. So they chose A, I give them B, they take B and they say, “I chose B because I really like his, uh …”
Laurie Segall: They just come up with, the, even though they didn’t choose that person?
Moran Cerf: Yeah. And in that moment, what happens in their brain is that, their brain now creates association. A memory for something they didn’t choose themselves. So if you ask them more and more about why they chose option B, their brain is gonna commit more and more to this choice. So much so that if they come back tomorrow, and you give them the same options again, and you say, “Choose again.” They’re gonna choose B now.
Moran Cerf: So think about it in the case of business. You go to a supermarket to choose a toothpaste. You debate between the Colgate on the left and the Crest on the right, and you run a complex analysis of the two, and in the end you choose Colgate. You put it in the basket and you go shop for other things. Somewhere between the moment you chose the Colgate and the checkout, I reach into your basket and I replace the Colgate with Crest.
Moran Cerf: Most likely you’re gonna buy the Crest. You won’t know it. And if I stopped you on the way outside and said, “I’m from Procter & Gamble and I’m running a marketing research campaign. I wanna know why you chose Crest.” You wouldn’t say, “I have no idea.”
Moran Cerf: You wouldn’t say, “I chose Colgate and someone flipped them.” You will defend the choice you didn’t make. And in doing so, you convince yourself that you want that so much so that tomorrow you actually buy the Crest. So I can now make you change your memories in a very, very simple way. And by asking you why, make your brain come up with a story that you will now believe and go forward with it.
We’ve got to take a quick break to hear from our sponsors, but when we come back, try this one out. Imagine a future where companies could control the content of your dreams. More on that after the break.
Laurie Segall: So how could that be weaponized in the future for political uses? Because we’re all talking about disinformation online and the fact that people don’t really trust what they see anymore. And there’s, you know, nation states who are manipulating social networks, but you’re talking about something a little bit different.
Moran Cerf: So the tagline for this research in our lab is: don’t believe everything you think. And the point behind that is that, evolutionarily, we were raised to have a brain that manufactures reality for us. So everything that our brain comes up with is reality. We never doubt our own mind.
Moran Cerf: So, if you have a thought in your brain, it’s real. You might spend a lot of time vetting thoughts that come into your brain by being skeptical, by asking questions, by really exerting your might to make sure that nothing comes in that’s not true, but once it’s in you trust it.
Moran Cerf: If, if tomorrow someone asks you, “What did you do yesterday?” And your memory tells you that you were here with me in the studio recording this podcast. And this person says, “No, no, yesterday you were playing golf with me.” You say, “No, I have a memory that says I was with Moran in this recording studio.” And they say, “No, you were on a golf course playing golf with me.” And they start showing you pictures of the two of you playing golf.
Moran Cerf: Or they bring 10 friends who would vouch for the fact that you were with them; nothing will change your mind. It doesn’t matter how many people claim that you were not where you were. You have a memory of reality, and you think this is reality, and you would not change it.
Laurie Segall: Right.
Moran Cerf: And this is how we evolved, because we had to trust our own brain. Now that someone can actually hack into our brain when we’re sleeping, using experiments like the ones I told you, where we flip options and make people come up with answers for the choices and, in doing so, change their minds, you wouldn’t be able to trust your own mind anymore. So suddenly you really would have to doubt your own thinking and not know how you behave.
Moran Cerf: And the world we live in right now relies on you having full understanding of your own brain for everything. But now this is no longer a fair game.
Moran Cerf: Your brain is vulnerable. I think that the best answer I could give people when they ask me what they should do about it is to think of themselves on April Fools’. April Fools’ is the one day of the year people are actually skeptical of everything. So if I tell you on April Fools’, “You know, your mom called me.”
Moran Cerf: You would say, “Wait. Normally I would trust him, but today’s April Fools’, maybe he’s lying. I would want to vet even this piece of information.” And that’s the only time we actually are skeptical of everything. You would have to play April Fools’ every day with your own thoughts.
Laurie Segall: So why is every day now like April Fools’?
Moran Cerf: So, I think that the experiment that we did with politics, for instance, is just an example of how easy it is to flip a mindset. A person on the street is told, “We’re gonna do a political poll. We’re going to ask you a few questions and we just want to kind of figure out who you are.”
Moran Cerf: So we ask them 10 questions, and we sit with the paper and kind of mark the answers. And question one says something like, uh, “On a scale of one to 10, how for or against, uh, abortion are you?” And the person says, “I’m a nine, pro-choice.” So that’s their answer. Then we ask them about climate change.
Moran Cerf: “How much do you believe climate change is real and manmade?” They say, “Seven,” and we mark what they said. And we ask them many, many questions, and then we mark the answers. But the reality is that we don’t mark what they said. We mark different things. So if they said nine, uh, pro-choice, we put a six.
Moran Cerf: If they said, uh, seven to climate change is real and manmade, we put five. So we kind of change their answers a little bit, not totally to the other extreme, but just a little bit. And then we say, “Okay, let’s tally all of your votes and see where you come out, kind of, politically.”
Moran Cerf: And we tally all the votes next to them, with the scores they kind of gave. And what comes out is that they’re actually right-leaning. The person up until this moment was like a very kind of liberal Democrat, but somehow the answers come out as if they’re a right-leaning kind of Republican.
Moran Cerf: And then we ask, “Can you explain to me what this is?” And sometimes, not always, it depends on how many questions we ask and how kind of good we are at doing this thing, they’re gonna start coming up with an answer that says, “Actually, now that you say it, yeah, I was raised a liberal, but when I see the world right now, uh, the world becomes more polarized and I think that we should be a little bit more proactive.”
Moran Cerf: And they start coming up with an answer, and the more you ask them questions, the more they create associations in their mind that align with a story that’s not true. And now they go into the world, into the wild, we call it, with a different thought of themselves. A different story.
Moran Cerf: And they will totally go with the story, and they talk to their friends and convince them of that, and they go to an echo chamber. And now there is a virus in the echo chamber, which is a person that comes into the echo chamber, that safe space, with a new thought, and will start infecting others.
Laurie Segall: So why are you doing that?
Moran Cerf: We’re doing it because we need to prove that it works, because then I can tell your audience about that. Then they become a lot more aware. So once you know it, it already doesn’t work as much.
Laurie Segall: So this is all on a very human level. And then now, when we look at the technical level of one day, when there is actual, technology inside of us, it could be hackable, right?
Moran Cerf: Yeah.
Laurie Segall: Um, you’re someone who has a hacker background and now you’re a neuroscientist. So, in the future, we could have these chips inside of our brains. Do you think they could be hacked in any capacity?
Moran Cerf: This just makes it much more efficient and at large scale. So up to now, we had to take a person, stop him or her on the street, ask questions, do a 10-minute process with them. Once the chip’s inside their brain, it’s just a button press. It becomes a lot easier and at scale. So instead of changing the minds of one at a time, we can change the minds of everyone in Michigan and make them vote one way or the other.
Moran Cerf: That’s, that’s the risk. And I think the reason we should talk about it right now is that the proof of concept that comes from labs just shows how it works, but that’s where we stop. We write a paper that says: this is possible. You can hack into a brain and change a person’s mind. That’s it.
Laurie Segall: When we talk about disinformation and whatnot, do you think that’s, that’s what’s coming down that we’re not talking about yet?
Moran Cerf: Yeah. So, I think the biggest fear is that. So I think that suddenly, you know, creating a misinformation ad on Facebook looks like peanuts when you talk about, I can just change your mind as you go to the voting ballot and just make you vote what I want. You become a puppet and I the puppeteer. That’s the biggest fear, at scale.
Moran Cerf: And I think that people speak right now mostly about the marketing version, the thing that is still, I convince you by showing you the wrong ad and so on. That’s the old-fashioned thing. This is what we’ve done since, you know, the 1930s. The fear is that it’s gonna be a lot bigger, and it’s going to be directed at your brain, and you’re gonna welcome it.
Moran Cerf: So I think the way I see it is that the Silicon Valley guys, they’re gonna see all the positives of a chip inside our brain. The high-frequency trading and the Wikipedia and the cure for cancer. Everything that’s great. And that’s why they’re gonna put it in their brain, and they’re gonna create the entrance point for the villain who can now hack into the brain.
Moran Cerf: And when they ask Wikipedia a question, “When was the French Revolution?”, what they’re going to get is, “Also, vote this way.” And that’s, I think, kind of where it’s gonna go. And the thing is, it’s a race to the bottom. So if someone in Silicon Valley does that, and is able to think better and invest better than all his friends, then the friends would have to do it as well.
Moran Cerf: And suddenly there’s inequalities. So all of, all of the people in, not just Silicon Valley but in Texas would want the same chip in their brain. And suddenly Alabama wants the same thing. And before long China wants the same thing. And we’re gonna bring it ourselves because we can’t afford to not have that. And this makes us all vulnerable.
Laurie Segall: Hmm. That’s interesting.
Moran Cerf: I sounded really dystopian in, in this-
Laurie Segall: Yeah, that got really dystopian pretty quickly. I feel pretty depressed. Is there like any, uh, silver lining there?
Moran Cerf: So, I think that the, yes, I think the silver lining is, is your audience. The good news is that we decide. So it doesn’t have to happen. So, historically humans have not been great in taking technologies and not using them for bad things. Every time we had technology, it could be used in a negative way.
Moran Cerf: It was used in a negative way. But we have also shown, increasingly in the last 50 years, that we can harness technologies. We have weapons that could kill a lot of people, and we decided together that we’re not gonna use them. And this happened multiple times, and it happened across the world, by many countries. So, in the same way, we can decide how we regulate those implants in the brain such that the good things happen without the bad sides.
Laurie Segall: What about, so Facebook just bought CTRL-labs, which is like, you know, this mind technology.
Moran Cerf: Mm-hmm (affirmative).
Laurie Segall: I’m, I’m kind of obsessed with these ideas like, is Facebook down the line or these companies, they get to buy a- ad space in, in our brains in the future? Is that, do you think that’s sci-fi?
Moran Cerf: I think that the-
Laurie Segall: We should play this game like sci-fi or the future. (laughs).
Moran Cerf: The CEO of Netflix, a few years ago, I, I was at a conference and he said that on stage. He said that the biggest competitor for Netflix isn’t Hulu or YouTube or any of those. It’s sleep. So people sleep for hours and they don’t watch movies. Their brain is active and no one uses it.
Moran Cerf: Right now we have studies in our lab where we try to manipulate dreams. We try to basically create movies in your mind that will come when you sleep and will be content for you. This means that suddenly Netflix and Hulu and YouTube and all of those are gonna have one more canvas for content that they’re gonna start using.
Moran Cerf: They’re gonna have dreams by Spielberg or dreams by, you know …
Laurie Segall: What?!
Moran Cerf: So that’s, that’s the kind of domain we’re playing in right now. And, I think that to your question, this really means that the competition is gonna be welcomed. People are gonna want that, and they’re gonna bring in the bad sides with it. You know there’s an episode of the Simpsons that I watched long ago.
Moran Cerf: That is my kind of go to, when I talk about greed. Homer Simpson sits with Mr. Burns and they suddenly become friends for a minute. Mr. Burns, if the audience doesn’t know, is the rich owner of the nuclear plant. He’s kind of the depiction of the rich guy. And Homer Simpson tells him, “Mr. Burns, you’re the richest guy I know.”
Moran Cerf: And Mr. Burns says, “But I would give it all up for a little more.” That’s kind of how humans are. Every time there’s a chance for a little more, we want that. And, we have now a chance for a little more IQ. A little more control of our own mind, a little more thinking, a little more access to our own brains. No one’s gonna want to say no to that.
Moran Cerf: We have access to more sleep, we have access to control of our behavior. Companies can control our dreams and give us the best content. We would welcome that. But in welcoming that, we’re opening the floodgates for also bad things that we don’t think about right now.
Laurie Segall: I mean that’s crazy. The idea that you could order dreams, you really think that’s happening? That could happen? Like, like we could have dreams sponsored by Spielberg?
Moran Cerf: (laughs).
Laurie Segall: I mean that’s not the worst. I got, I loved like, you know, E.T.’s my favorite movie.
Moran Cerf: I mean, so I think that it, it became … it was science fiction a few years ago.
Laurie Segall: I want a dream by E.T. That would be such a great movie. If I could watch E.T. in my sleep, because I just don’t have time in the day.
Moran Cerf: The big kind of question that you asked me is like, “Is it possible or not?” Right now the science is at the level of, uh, we can induce you having a good dream, not knowing what it’s about, but just spray the right smell and your brain goes to a positive dream. That’s it.
Moran Cerf: We can make you have a bad dream. We inject the wrong smell, you have bad dreams. We can sometimes control a little bit of the content. So, if I do anything with water, if I spray water on you or if I dip your hands in the water while you’re dreaming, you will incorporate water in the dream.
Moran Cerf: You will see waterfalls or the ocean or a boat. So we can kind of induce very basic ideas, which means that we can control your dream at a very, very crude level. But this, historically, is the gap between a proof of concept and engineering.
Moran Cerf: So now dreams are no longer something that is totally, uh, kind of a black box for us. We know how to change them, and now it becomes an engineering problem. Like finding the right smell for every particular concept. Realizing what makes you dream of your mom versus your dad. Stuff like that.
Moran Cerf: It becomes a race by engineers rather than by neuroscientists. Neuroscientists have proven that it works, and now we leave it to others to kind of perfect it, which means that you can soon get to a point where you, at the very least, can choose what memory you want to activate in your dream.
Moran Cerf: So you go on a date, you come back home, you go to sleep, and the date is over. Even if you’re next to the person you were with, the sleep kind of separates you. So we can now, at least as a very minimal thing, make the dream go longer by finding cues from the awake self that will let you go into the sunset together in your sleep.
Laurie Segall: But in the future we’re, we’re ordering up dreams?
Moran Cerf: So, so I would, I am gambling on that because companies come to me every now and then and say, “We wanna think about that.” And I help all of the companies do that in a sense.
Laurie Segall: What’s the craziest thing like a company … You, you I mean you hear it, so like what’s the craziest thing a company has asked you to do?
Moran Cerf: So I think that dream manipulation is, sitting somewhere on those, uh, things.
Laurie Segall: Companies are, are asking you to do that?
Moran Cerf: Companies, famous companies.
Laurie Segall: Like what?
Moran Cerf: Like, like the Silicon Valley companies that you know. One of them came to me a number of years ago when I gave a TED talk about dream manipulation, and one of them was sitting in the audience and said like, “We wanna incorporate it in the next version of our big product.”
Moran Cerf: And I said, “It’s very, very unreal right now. It’s just like a proof of concept. And our lab really is on the mission of just showing that something is possible, not in making a product.” And at the time they were ready to do anything.
Moran Cerf: Like they were ready to basically buy our lab and move us to California so we could develop that. And at the time, I mostly didn’t believe it was possible to do it as fast as they thought. So I said, “Not in my lifetime.” But since then, a lot of companies are after that. So we’re no longer talking about science fiction. There are big companies who play with doing things to you when you sleep.
Laurie Segall: I mean, don’t you feel like now, like Silicon Valley’s gotta be a little careful about that? Because like now we’re, we fi- we figured out they did a lot of things to us that we didn’t even realize, with like addiction and mental health, and like I don’t, I don’t know if I would trust like Facebook or Twitter to do things to me in my sleep anymore, you know.
Moran Cerf: So, that’s interesting kind of psychology. On the one hand, I think that things have not changed dramatically. Like, they, they, you know, we talked at the beginning about, uh, cheating. One of the things we learn about cheaters is that, when they get caught, at the moment they get caught, they immediately promise to themselves and everyone else they’re not gonna do it again.
Moran Cerf: But if they get forgiven, they actually go back to their bad behavior, because now they have evidence that it works. Like, they got caught and nothing happened. So the second time, they actually know in what ways they failed the first time, so they do a better job of hiding it.
Moran Cerf: So in that sense, I think that, I don’t, think that things have changed dramatically in Silicon Valley. I also, to defend our friends there because we know many of them and we know that they’re-
Laurie Segall: Yeah, yeah.
Moran Cerf: No, no, no, no. I think no one is bad there. It’s not a m- malicious act of trying to do wrong. It’s somehow the system and the structure of the world that suggested good things are ahead, and there are engineers who wanna perfect them. So in that sense, I think that to just blame it on them is a little bit unfair.
Laurie Segall: I think that’s true. I think it’s unfair to just blame them; it’s not black and white.
Laurie Segall: What do you think is the most important ethical question we should be asking right now?
Moran Cerf: So, down-to-earth questions. I think the, so technology and, uh, you know, brain implants and dream control and changing memories, they’re in our life. But I think that they’re far enough from the next electoral cycles that we don’t have to worry about that.
Moran Cerf: I think the biggest effect of technology, totally outside of what we spoke about right now, is how it affects relationships. I think that there are countries already where, you know, people have no sex, they have fewer relationships with humans. They spend a lot more time on their devices instead of relationships.
Moran Cerf: So, right now everyone speaks about screen time as just, instead of social time. But I think specifically I would focus on relationships. There are just fewer and fewer people that find meaningful relationships with others. And I think this is the biggest kind of risk to our world. As a neuroscientist, I say our brain loves interaction, it loves communication.
Moran Cerf: And if people are not doing that enough, they’re actually hurting their brain. There really is a kind of negative consequence brain-wise to you not having a person to talk to, to interact with, to rely on, to have comfort with. And that’s, I think the biggest thing that if I were to choose, one solution to advocate for, I would say find a way to have partners in your life that are meaningful.
Laurie Segall: I mean, it certainly seems like a lot of people are utilizing their phone as, as kind of like a partner, right?
Moran Cerf: Yeah.
Laurie Segall: You just did a study with Hinge. We were talking about it before we started, tell me about the, the study. Like, what is the most interesting thing that you, you got from this study?
Moran Cerf: The most practical thing, that’s never gonna work, is that it turns out that you would probably do better in finding a partner if you outsource the search for a partner to someone else. Someone else could be a friend or an AI. So, we are our own worst enemies when it comes to making choices. We rely on the wrong cues. We’re too fast in judging stuff negatively.
Moran Cerf: Uh, when we charge, judged positively, we immediately copied answers. Why? And we’re very critical of ourselves. If you want it to kind of get advice for dating on online dating, I would say give your phone to your best friend, and ask her or him to swipe for you.
Moran Cerf: What we say in the paper is that, in a way it could also be an AI. Someone who learns a lot about your preferences, and actually starts swiping for you and just says, “Laurie, I found this person that’s great for you.” Don’t look at him before, don’t read anything about him. Go to the date with him.
Moran Cerf: but the big one that I want to leave your audience with is like, start trusting others even in choices that you think are very, very personal.
Laurie Segall: I’m kind of obsessed with this idea that in the future we could have AI bots date for us. Is that like totally sci-fi or, or could that come down the pipeline?
Moran Cerf: So no one does it right now, and I think they should. And I think that the interesting thing that drives that is that people don’t really know themselves as well as they think. They have answers. If you ask them, like we said before, if you ask people, “Why did you choose this or that?”
Moran Cerf: They are gonna come up with an answer. It’s just that it’s not true, for example, we know that a big driver of your preferences is smell. There’s a study that shows that people come to the lab and, someone shakes their hands and have them sit down before the study begins. But actually the study already began, because the handshake is what they were looking at.
Moran Cerf: And what they show is that, within a few seconds, from the moment someone shook your hands, you’re gonna bring your hand close to your nose and smell it. No one notices, it’s unconscious but people do it and they do it in a very controlled way. They had a heterosexual male shake the hand of a heterosexual women and then they shake their hands.
Moran Cerf: But if it’s a man versus man, and they are straight, they’re not doing it when the person who shakes their hand wears a glove, no one smells it. But if they don’t, they do it like they really did in a control way. And the bottom line is that, smells are really critical to how we evaluate other people.
Moran Cerf: We actually, you know, assess the mix of our chemistry together in a very practical way. No one’s gonna ever tell you that. No one’s gonna say, “You know, I really liked her because our mixture of smells, after she didn’t shower for two days and I didn’t shower for three days, is a perfect alignment.”
Laurie Segall: (laughs).
Moran Cerf: “The bacteria in my body love her bacteria, and we’re gonna just have great babies together.” People don’t say that. They’ll say, “She was really funny and interesting, and we share the same love for sports.” And this is something that is driving our behavior that we don’t know anything about. Machines could do it for us.
Moran Cerf: And they’re gonna actually know her biome, your biome, her little quirks that she doesn’t tell anyone, and yours, and could match you in a way that really aligns with your interests. And if we go back to the beginning and say that relationships are one of the most important things we’re starting to lose with technology, this will change a lot of how we see the world.
Moran Cerf: It will actually make people remove barriers, it’ll make people get new ideas in their mind that would protect our stories better. Because as you share stories with other brains, you actually have better chance of having accurate memories rather than ones that you lose.
Laurie Segall: Wow. But like, you’re sitting there inside people’s brains, essentially. I mean, you’re sitting here and talking about dreams and whatnot. You could already change people’s minds in ways that are, I mean, let’s just be honest, you could make someone think something. You could make someone more racist in their sleep, right?
Moran Cerf: Mm-hmm (affirmative). Yep. Yep. What our lab does is proof of concept, and we always do it positively. But when I give talks, I always tell people you can easily see how the same thing could be used to make a person wake up and eat more unhealthy food rather than healthy food, or become immoral.
Moran Cerf: The science doesn’t tell you if it’s good or bad. The science doesn’t know. The science is just kind of an objective tool. And my students, the MBA students I teach at the business school often because they’re young and they’re kind of millennials, they ask question about ethics and they say, “Wait, but this could be used for obviously terrible things.”
Moran Cerf: And I say, “I’m glad you asked that, because you’re right, it can. And since you asked that, you have the moral obligation to remember your question 20 years from now when you’re going to be the CEO of one of those companies. And you’re going to have this quarterly decision, whether you’re gonna use those techniques to make someone buy more of your captain crunch. Or stop it because you know that it’s not what they wanted.”
Moran Cerf: Unfortunately, so far the world doesn’t look like it’s embracing those ethical ideas as much as we want, but I think that the new generation is already doing a good job and putting it on the table again and again. Like we’re doing in this podcast is making people think and maybe change their behavior.
Laurie Segall: Spend enough time with Moran and you begin to question everything, but maybe that’s the point of this, maybe the next iteration of this messy phase of technology, where truth has taken on its own meaning. The point is not just to question the tech companies and the lines of code we see in front of us; the next phase is questioning ourselves and our own thoughts. Understanding how malleable our own minds are is the first step in changing behavior, which is important because it’s the first line of defense for what’s coming down the pipeline in an era where the lines between true and false, and real and fake, have blurred.
I’m Laurie Segall and this is First Contact.
For more about the guests you hear on First Contact, sign up for our newsletter. Go to firstcontactpodcast.com to subscribe. Follow me, I’m @lauriesegall on Twitter and Instagram, and the show is @firstcontactpodcast. If you like the show, I want to hear from you: leave us a review on the Apple Podcasts app or wherever you listen, and don’t forget to subscribe so you don’t miss an episode.
First Contact is a production of Dot Dot Dot Media, executive produced by Laurie Segall and Derek Dodge. Original theme music by Zander Singh. Visit us at firstcontactpodcast.com.
First Contact with Laurie Segall is a production of Dot Dot Dot Media and iHeart Radio.