Transcript: God & Tech: ‘Hey Alexa, Does God Exist?’

This is a raw, unedited transcript of the Dot Dot Dot Conversation, “God & Tech: ‘Hey Alexa, Does God Exist?’” You can listen to the full recording here.

Laurie Segall  00:00

This is what’s exciting to me: these are folks who are directly engaged with technology and how we interact with it. So I’m just going to start quickly by having everyone introduce themselves, and Yancey, we’ll start with you. I can give a quick intro, but I’d love for you to tell us a little bit more. You’re the co-founder of Kickstarter, and Yancey, we’ve known each other for quite a long time. In the last year, you wrote a book called This Could Be Our Future, which talks about a really cool philosophy, something called Bentoism. So yes, if you want to just say a little bit about yourself.

Yancey Strickler 00:34

Yeah, what’s up. Thanks for having me, Laurie, and thanks for putting together this great panel. Very excited to learn and dig in. My name is Yancey Strickler. I’m the co-founder and former CEO of Kickstarter. Most relevant to this conversation, I grew up as an evangelical Christian in the South, and in the past three years I have started my own practice, which we refer to as Bentoism. So I’m thinking about this question of technology and gatherings and life systems, and how they come together in a lot of different ways.

Laurie Segall  01:12

Right. And then Michael, let’s go to you. You’re a partner at an early-stage VC firm that focuses on Israeli entrepreneurs. I’ve known you for over a decade, and you’ve had a front-row seat to history and to some of the most successful entrepreneurs; you were one of the first investors in companies like Lemonade and Wix. I also just read your book, which is called The Tree of Life and Prosperity and which really made me think of you for this conversation today. So tell us a little bit more about the book, and tell us about you.

Michael Eisenberg  01:50

So, thanks for having me, Laurie. Great to meet everybody. Perhaps first and foremost, I’m a happy husband, proud father of eight, and grandfather. Family is really at the core of both my belief system and what I think is important in life, so we start with that. And my book is coming out at the end of August, August 24. The title of the book, as you said, is The Tree of Life and Prosperity, and it looks at the book of Genesis, and the Bible in general, for timeless values that are applicable to the world of investing, innovation, and technology, kind of a modernization of the biblical theme. I’ve been a venture capitalist for 25 years, and I really believe that in the 21st century, timeless values will be at the center of successful businesses and of helping humanity get through a lot of the challenges that you cover so well in your stories.

Laurie Segall 02:44

And we have the founding members of AI and Faith, which is an interesting group because it’s tech executives, ethicists, clergy members, engineers, and researchers who look at artificial intelligence. So, all sorts of folks. And Rajiv, maybe let’s start with you. You’re the founder and director of Infinity Foundation and also a founding member of AI and Faith, so if you could give us a little bit more on yourself.

Rajiv Malhotra  03:11

Thank you, Laurie. Glad to be here. So my background is originally as a physicist and computer scientist, way back 50 years ago when AI was barely in existence, and that was my field. Then, after a few decades of being an entrepreneur, I quit everything in the mid-90s to explore spirituality on a full-time basis, and created a foundation for that purpose. So my take on this topic is that there is a clash between two big worldviews coming together. One is a spiritual worldview, which says that our essence is being, and not matter, not algorithms, not biology, not body parts. And so there’s the whole meditation movement, whether it’s Oprah or Deepak Chopra or all this mindfulness, that’s going in that direction, and their trajectory is to evolve the human being into a super-conscious human being through these practices. Against this, diametrically opposite to it, is the new AI and machine learning view: let’s put in electrodes, let’s put in different implants, and give you that higher-consciousness experience without you having to spend decades meditating. So if the latter wins, then the former, which is thousands of years old, really is obsolete. I’m giving you an extreme case, but it is worth discussing all this. If the material view wins, if we are biological computers and basically organic algorithms, then of course we can reverse-engineer these algorithms, we can improve them, we can intervene with them with all sorts of augmentation, and the future is to learn how to do enhancements of the human being through these kinds of technologies. But if this wins, then what happens to the spiritual movement, which is taking an entirely different, non-material approach? Or is there a way both can win, other combinations, other hybrids? That is my area of investigation.

Laurie Segall  05:25

That’s fascinating. Shanen, I would love to get to you. You’re a former GM at Microsoft who’s now a PhD candidate researching the ethics of AI. And what I love, and we’ll get into this a little bit later, is that you spend a lot of time asking Alexa existential questions. But tell us a little bit about yourself.

Shanen Boettcher 05:46

Sure, thanks for having me, Laurie. As you said, I’m a PhD candidate at the University of St Andrews, where I study AI and spirituality. Before that I was in the technology industry for over 25 years, at companies like Microsoft and Netscape and Accenture. I’ll build on what Rajiv said a little bit. My take on this is that there’s potential for AI to help people understand different worldviews, understand those teachings that have been vetted through spiritual practices, to build empathy between people, and to make them less afraid to ask questions, taboo questions, personal questions. At least, that’s what my research is showing: as people interact with these devices, they’re more likely to ask tough questions, ask things that might embarrass them in other situations, and learn from them. And so, call me a technology romantic, but I think there’s potential here for AI systems to work in tandem with spiritual traditions and to help people come together rather than polarize them. We just need different business models and different motivations to achieve this, and I think now’s our chance to get this right.

Laurie Segall  06:58

Well, we could certainly dig into the business model of this type of stuff, because I think that’s an ethical question a lot of folks are asking right now in Silicon Valley. Before we get into everything: Yasmin, last but certainly not least, tell us a little bit about yourself. I know you’re founder and CEO of Skillspire, which is an organization that works toward making tech spaces more diverse and inclusive. Tell us a little bit about your background.

Yasmin Ali 07:24

Thank you, thank you Laurie for having me. As you mentioned, we never talk about religion and tech, or God and tech, in the same breath in our workspaces, so I’m really happy to be part of this conversation. I am the CEO and founder of Skillspire, a coding bootcamp reaching underrepresented communities: women, minorities, immigrants, refugees, people from very diverse backgrounds. We help train them and help place them in the tech sector. So, very happy to be part of this conversation.

Laurie Segall 08:05

Why don’t we start here, and this is for all of you, whoever wants to weigh in. As long as I’ve covered tech, the topic of religion has felt almost taboo, like we’re not really allowed to talk about it too much in Silicon Valley; people almost look down on it. And yet in Silicon Valley, technology is religion. So why aren’t we connecting the two more often? Shanen, I’m gonna start with you.

Shanen Boettcher 08:33

Sure, I mean, I think there’s been a lot written about this, by people like Jim Wellman here at the University of Washington, or Michael Dilash. There’s this phenomenon that happens as people come west. Think about it as being like an explorer or a frontier person coming west, and the joke is that you drop your spirituality when you cross the mountains, when you cross the Sierra Nevada or the Cascades; you leave that part of you behind to come and join a group of people who are in fact focused mostly on ones and zeros. And as Rajiv was saying, I think there’s always been a tension between religion and science, and I think it plays out in the culture of technology companies wanting things to be deterministic, wanting things to be very predictable and formulaic. Bringing in something that would feel like mysticism might destabilize that culture and destabilize what the goal is, which is obviously profit-driven.

Laurie Segall  09:43

And I mean, is there a connection between technology and religion? Michael, I think you talk about this quite a bit in your book.

Michael Eisenberg  09:52

Yeah, so I think there are a couple of parts to this. One, maybe what Shanen was alluding to: you didn’t see many people being outwardly religious. Although, to be perfectly candid, I walked around with my yarmulke everywhere and I’ve never felt it, you know, on some religious or spiritual level, although I don’t think that’s the case for everyone. For what it’s worth, I think that part of being a person of faith is believing that the world is going to get better, and the world is going to get better through technology and through innovation. Like Shanen, I romanticize about this, but I think it’s true: society and humanity have gotten better over the years. And I think, specifically, the joining of timeless faith principles with technology is actually a match made in heaven, no pun intended, for an optimistic worldview that we can bring people together and we can drive Earth and humanity to a better spot.

Laurie Segall 11:13

And something that’s interesting, Michael, about your book is that you connect the story of Noah and the Ark with modern-day business practices; you connect these very old biblical stories. I think you connected the story of Adam, sorry, yes, Adam and Eve, with universal basic income. And for a lot of people, I think you would say, okay, this sounds crazy, but you have a very successful track record of being among the first to invest in multi-billion-dollar companies, so you’re doing something right. What business practices can we learn?

Michael Eisenberg  11:54

I think, you know, we’ll take Lemonade as an example, where Lemonade looked at insurance and said: this is a very conflicted business, where, when you are in your biggest time of need and you file an insurance claim, the insurance company is incentivized to reject you, because they make more money by rejecting your claim. And they said, this is not aligned. So they went ahead and said, okay, what we can do is manage the pool for a fixed fee; that’s more aligned with our business model, and we won’t make any money off of your distress. Moreover, because there’s so much fraud in insurance: we believe that human beings are intrinsically good, but insurance fraud is a faceless crime. And so all leftover premiums will be donated to charities, and they’ve given over $4 million of premium from consumers to charities of the consumers’ choice, and people don’t want to defraud their charities, because people are good. I think what you find is that by aligning values with business, you can build the fastest-growing insurance company in the world. And in the story of Noah that you alluded to, I pointed out that Noah and Alfred Nobel are very, very similar. They both invented something: Noah the plow, and Alfred Nobel dynamite. But because they didn’t build a timeless ethical framework around it, humanity kind of destroyed itself with those innovations. So it’s critical, as we’re doing things like AI and the work that Shanen is doing, to really put foundational and timeless ethical principles behind this, so that these innovations, dynamite, the plow, and AI, don’t destroy humanity and instead move us forward.

Laurie Segall 13:30

You know, Yancey, when I read your book. God, I think the last time I saw you was right before the pandemic, which feels like a decade ago. But I remember reading your book, and one of the most striking parts was you talking about your upbringing, and I remember thinking to myself, wow, this is so different from a Silicon Valley founder’s upbringing. You mentioned this in your intro: you grew up on a farm, evangelical Christian. You write in your book about going to church where people are speaking in tongues; you talk about your father being a mattress salesman. I don’t know what that says about Casper now. So, how did this impact you when you went to go build Kickstarter? Because we can all look at Kickstarter now as the thing that’s kind of a no-brainer, but at the time it was a pretty radical idea.

Yancey Strickler  14:29

Well, the way religion and Christianity really defined the first, say, 15 or 16 years of my life, and the way the relationship has been different since, I think is instructive for religion’s place in the world right now. The operating strategy for the major religions, for monotheism, has been to say: subscribe to our belief system and not anyone else’s. It asks for this total commitment. In social media terms, it would be like saying, if you use Twitter, then you cannot use any other service; you must only use Twitter. And that was incredibly powerful for the last thousand-plus years; it was an anchoring concept for yourself and your entire way of being. But now, identity is something that is multi-layered, and we all carry many identities. For me, I left the church when I was really into music and reading and culture in the world, and I wanted to talk to the elders of my church about the things that I found interesting, and what I heard over and over from them is: if something is not of God, it is of Satan; you shouldn’t be spending your time with these things; you must be committing yourself solely to God. So this notion that my identity could include being a Christian who’s also a culture vulture who reads everything was incompatible. And in the world of the Internet, where we have so many identities, you’re a gamer, you’re gay or straight, there are so many identities we carry, and I think in some ways that is where religion has sort of been stuck: it seems like a limitation of self. In some ways it is; discipline, self-discipline, is core to it. But it’s a limitation of self, and here I agree with the earlier comments about Silicon Valley being about pioneering, exploring, experimenting.
And if you’re doing that within some limited parameter, or an identity set that doesn’t allow your identity to evolve beyond certain ways, that becomes something that is seen as more limiting. So I think what’s happening now, what the internet is making happen, is that religion has to be okay with: I am Muslim and a gamer; I am this and that, and that still makes you a meaningful member of your congregation, that larger family. And that allows, maybe, your faith to grow, and maybe to use the power of the web, rather than kind of fighting against a lot of the natural tendencies that are pulling us towards multifaceted identity.

Laurie Segall 17:18

Rajiv, I’d love to get to you. How has your faith shaped your experience in the tech industry and in your work? You’re on mute. All right, well, I’m going to move to Yasmin, and Rajiv, unmute when you can. Yasmin, you helped establish something called map, so if you could explain that to us. I’m curious what exactly that is, and then I’m also curious about the same question I just asked Rajiv: how has your faith shaped your experience in the tech industry, in what you’re doing, and in what you’re doing with faith in AI?

Yasmin Ali 18:02

Yeah, so, you know, you asked that earlier question about why it is that we’ve kind of made that demarcation. I came to this country over 30 years ago, and when you’re dealing with the secular world, you don’t bring up the topic of religion as much. And if you do bring it up, being a Muslim, for me there were a lot of Islamophobic comments, or unconscious bias, involved. So you tend to kind of sweep it under the rug and just go about doing your stuff. To answer your question about my faith: Islam is very supportive of scientific research. If we look back, there’s been lots of research done in the scientific area, and as long as it brings benefit to humankind, knowledge of science and technology is not discouraged at all. In fact, technology and science geared toward the advancement and protection of life, of faith, of property and lineage are, I would say, some of the goals of a Muslim’s life. As long as it doesn’t violate Islamic principles, I think it is accepted. To tell you how my spirituality has shaped things: one of the biggest things we believe in, which the Quran actually states, is that we were made into peoples and tribes that we may know one another, and that the most noble of you in the sight of God is the one who is righteous. It also goes on to say that God created the heavens and the earth and the diversity of tongues and colors, and these are all signs for people who are wise. So those are the basic foundations of what we believe in.
And these things are not just in a book that’s given to you by God; we want to live this, right? Specifically, I came from a computer science background, having worked in the technology sector, then taking a break and coming back. I wanted to focus a little bit more of my efforts on both community and tech, and more so on people who are coming from very diverse backgrounds: how can I help them, maybe help establish or give stability to their careers? Every single day I talk to men and women who say, hey, we’ve come to this country from so many different backgrounds, but we’re kind of all over the place; we’re doing caregiving, we’re working as Uber drivers, we’re working in the Amazon warehouse. But the aspirations are so much more. So taking these diverse thoughts that come from diverse backgrounds, and training these people as engineers who can go work in the tech industry and contribute their experiences and their thought leadership, would go a long way in how we will see the next Facebook or the next AI and how it grows.

Laurie Segall  21:59

I think that’s a great point, and I also wonder if the pandemic, and this idea that people don’t necessarily have to be in the office and maybe workers will be in more diverse places, will also hopefully create a more diverse working environment. Something we spoke about in the first session we did, about a month ago, was this idea that the metaverse, this next iteration of the web, is being built out, and if it’s not built out by a diverse group of people, we’re going to run into some of the same problems we ran into with the last iteration of the web. Really quick reset: if you’re just joining the room, I’m Laurie, founder of Dot Dot Dot Media, and you can sign up for our mailing list. I invited Linda up because so much of this panel was inspired by an article she wrote for The New York Times called “Can Silicon Valley Find God?” Linda, can you explain to us a little bit about this article, where the idea and your interest for it came from? Because you’ve interviewed so many of the folks up here. And please feel free to weigh in and jump in with questions too, because you know this topic as well as anyone.

Linda Kinstler 23:20

Yeah, thank you so much for convening this conversation and for having me. This piece was a long time in the making, and it basically came out of observations that many people on the panel have also described. I was really interested in the different kinds of language that were used to talk about ethics, not just with regard to AI but with regard to technology in general. And it began at this really specific moment, right before the pandemic set in, when there was a really concerted effort among the major tech firms in Silicon Valley to come up with a kind of collective language for these things, except I quickly found that they were pursuing it on their own, in their own silos. So I started looking around for communities that were approaching it in counterintuitive, or rather hyper-intuitive, ways, and that’s what led me to discover the work of AI and Faith.

Laurie Segall  24:30

And Rajiv, we have you back. I want to hear from you: tell us a little bit about AI and Faith.

Rajiv Malhotra  24:36

So I want to make a quick point: I’m probably the only person here who is not from the Abrahamic religions. The other half of humanity, who are not from the three Abrahamic religions, belong to many other faiths that have completely different worldviews and starting positions. I come from the Dharma traditions, which are Hinduism, Buddhism, Jainism, and Sikhism, and there is pluralism built into them. There is no one linear history, no one linear set of events that are exclusive and absolute, through which God spoke and against which others have to be evaluated. You can have pretty much any number of spiritual paths and discover new ones; our exemplars discovered several of them, and new people are discovering new ones. So there is no shortage of paths, no exclusivity claim, and no requirement to go and evangelize or convert people, because whatever they are doing is their own discovery. Having said this, an issue that comes up is whether AI will become like other technologies that, when they came about, unfortunately became used as weapons for one worldview to fight and expand against others. Cannons were brought into India by people who brought in other faiths as part of their military; both the land cannon and the sea cannon came that way. So the majority of the history of organized religions has been one of conflict and violence.
And I think this is one of the things people don’t talk about. The reason they don’t talk about religion in Silicon Valley is the fear that when you talk about religion, you’ll be talking about conflict, and it’ll go to some uncomfortable place; somebody will say, my religion has been abused, and the other person will respond, and so you have this fighting match. And since there is no scientific or empirical basis to ascertain the truth of what happened thousands of years ago, these are fights that don’t go anywhere. So part of the challenge AI has to face is that we do not want it to end up as another technology that became a weapon for conflict. While there are people talking about bias in AI for race, and bias in AI for gender and orientation and so on, we need to have a conversation about bias based on faith, which there is. That’s a huge topic of mine. When they are training the machines, training the algorithms, they’re using popular writings, and some of these popular writings are from a colonial era, and this colonial era consisted of narratives and discourse produced against the colonized. The colonizers, coming from dominant and expansionist religions, often wrote the narratives and the histories and interpretations about the people they colonized, and the Dharma traditions have been colonized for a very long time; that’s well-established history. Therefore, the worldviews that are commonly used to train algorithms tend to have a bias. So I would say, as a person representing the Dharma traditions, it is very important to talk about pluralism, and it’s very important to talk about religious bias in algorithms.
So not only should we talk about religion in Silicon Valley, but we should also talk about AI-based biases for certain religions and against certain religions. It’s an uncomfortable topic, but I think we have to do it.

Laurie Segall 28:38

That’s really interesting. I’ve heard quite a bit about bias in AI, and I want to get to that, but faith bias is not something we actually hear about, and I think that’s because a lot of folks in Silicon Valley don’t really talk about religion. We just had a bunch of folks join, so for those who just joined: I’m Laurie from Dot Dot Dot Media, and we’re talking tech and religion and God, and I’m about to go into the portion of this conversation that hopefully will get a little uncomfortable, because I think that’s where we do our finest work. So, we didn’t really get into what AI and Faith is, and I don’t want to get into it too much because I want to talk about the real ethical framework for artificial intelligence and what you guys are digging into. And Linda, please feel free to jump in with questions too, because I know you know this stuff. But could someone really quickly give an overview of what AI and Faith is? I know you were founded in 2017, and you have all sorts of folks from varying faiths, but if you want to give a quick overview before we get into it.

Rajiv Malhotra 29:45

So Shanen should proceed; I can join in from the Dharma side.

Shanen Boettcher  29:50

Sure, it’s a consortium of people who share an interest in the intersection of AI and faith; that’s the name. And the focus is to really engage technology companies in a dialogue about how they should think about the voices of faith, how they should integrate with those, and how they should project those, or be thoughtful about the way they’re projecting those, into the technologies that they’re building, particularly their AI-based technologies. So it really is a think tank, a consortium, a group of people who are willing to speak and consult with technology companies, and to educate and raise awareness about the issue in general.

Laurie Segall 30:38

Are tech companies open to it, out of curiosity? Yes or no?

Shanen Boettcher 30:45

I think the first conversations are quite easy to have, and then when you get down to how we are actually going to implement this, how we are going to make this happen, how we are going to effect change in what’s going on, that’s where it gets a bit tougher.

Laurie Segall 30:58

Now, Shanen, Linda started her article with this great anecdote about how you were speaking to Alexa for research you were doing, and you were asking existential questions about life. So asking Alexa, you know, does God exist, or, what should Siri say if I were to ask why there is evil and suffering in the world. I feel like everybody has kind of played around with Alexa, or maybe not; I have, just to try to ask some of these questions. You’ve been doing research on how these devices, which are increasingly intimate because they’re in our homes and we spend so much time with them, should answer. So, how should they answer?

Shanen Boettcher  31:47

Well, yeah, my research was really focused on how people would react to these conversations when an answer of substance was produced. So what I focused on was: was this creating a sort of influence, was it having an effect on the way that people view their spirituality? And the short answer is yes. Our attention is tuned to these devices; a big part of any kind of information is being able to break through our very short and packed attention spans, and these devices do. What I also found was that we tend to want to make meaning immediately from what they’ve said, to anthropomorphize the device and try to make sense of what it’s telling us, even if we know it’s an experiment. So, long story short, it was alarming to find that these devices have great influence. People do defer to them in many ways; we defer lots of things to them, in terms of the news that we consume and the driving directions that we use, and my research was really focused on how far that will go: when we get down to really personal, existential, meaningful questions about the way that we live our lives, how will we deal with these? And you can take lots of different perspectives. You could say, hey, the device should use all of the information it’s collected about me to provide answers that would be consistent with what it thinks my worldview is. From my shopping history alone, what holidays I celebrate, what things I buy at certain times of the year, what events I go to and attend, it can probably do a pretty good job of predicting what my spiritual outlook looks like and where my religious affiliation is. And so a big question is: should that data, which is used for so many things, to recommend things that we buy and ads that we see, be used in this realm as well?

Laurie Segall  33:53

And everybody can weigh in here for a second. That’s kind of crazy if you take a step back and look at it, right? I’m Jewish, by the way. But if Amazon saw that I bought, say, a Seder book or something, and then I asked Alexa something about God, and it was able to look at my history and personalize those conversations or personalize that response. Should religion be personalized? I don’t know. Doesn’t that get into some murky territory?

David Brenner  34:35

It’s David Brenner. I’m one of the founders of AI and Faith. I’d be happy to speak to that: I think it most definitely does. But there’s a question of authority, as one of the speakers mentioned earlier. Are we going to make up religion, religious beliefs and practices, as we go along, through a chatbot that has drawn on whatever sources it may have, interpreted by whatever background committee runs that chatbot and answers joke questions and other things? Is that going to be the source of a person’s religious understanding? Or will it be longer-term, more thought-through, in some cases 4,000-year-old understandings that have been worked on over the course of time and experience, and have had the benefit of thousands of years of human experience? How can those things be brought together, that technology and that knowledge and wisdom and understanding? That is really what I think Linda’s article highlighted. It’s also not the case that there isn’t a philosophy in Silicon Valley; there are a number of philosophies, and there are some dominant ones, materialism, libertarianism. Those are pretty dominant strains of thinking, and they creep into the databases, and in a wide variety of ways into the algorithms.

Michael Eisenberg  36:14

Laurie, yeah, it's Michael. I want to unpack this a little bit and continue with what David just said. I think there's an important element here of what somebody said about something I'd written: religion is, sorry, wisdom of the ancients for moderns, and to David's point, I think the modern view of technology is less relevant today on some level. I think that actually we'll find, whether it's the religions of the East or the West's monotheistic faiths, there's a lot to be said for the wisdom of the ancients and how it's applied to modern problems, and I think that's kind of one part of this conversation: newer, relativist versus more ancient, time-honored. One of the things I wrote in my book is that Facebook thinks it has the most users on the planet, but it's actually the Bible that's probably had more users than anyone, or other ancient texts, since the beginning of humanity. I think the other thing, to the point you were making about this personalized religion, is that so much of what's called social media and its responses are like dopamine hits; it tells you what you want to hear. And I think, at least in our faith practice, religion challenges us, pushes us to uncomfortable places and not to the kind of natural state. And that's important. I don't sense that social media does a whole lot of that right now, or the way AI follows us. And kind of moving over to AI, which is maybe the third topic, I've always found the term artificial intelligence to be a little funny, because it's really the amalgamation of human intelligence of the past that machines read today. And so, to Rajiv's point earlier, which I think is spot on: in the same way that humans are fallible, the AI that is fed by that wisdom of humans over time, or the narratives of humans over time, will continue to be fallible, in an important way.
And I think it's important to kind of unpack these topics; they're not all the same. The last one I'd add to the discussion is what I would call the deification of some of the big tech companies. There's a great t-shirt that you see sometimes around, you know, Google knows everything, Google's like God or my wife, it knows everything. And so I think there's some level of deification that goes on there, which is dangerous, because it's an enforcing, or an ability of a centralized system to enforce, a set of language, a set of intelligence, a set of responses that is essentially controlled, maybe by servers, maybe by the narratives of humanity. It's something I referred to in my book as like the Tower of Babel: we kind of enforce a language and a set of responses on people. And I think it's important to call that out. I think that's a part that's lost in the conversation today about big tech in Washington, this kind of uniformity, the ability to enforce uniformity, that Rajiv was so concerned about before, and I agree with him.

Laurie Segall  39:35

Yeah, in your book you said that big tech has become the modern Tower of Babel, which is kind of a fascinating way of putting it. I think you

Shanen Boettcher  39:42

I would agree with that. What, you know, my research is showing is that we do defer to these devices, we do give them our attention, and, you know, therefore they have influence. And so I'm not sure how we break out of that spell that we're under here. Because to the extent that people feel like there's a difference, a deficit, between the amount of information that they have and the information that the AI platforms have or the social media networks have, there is a tendency, and Andrea Guzman has studied this, and it's proven through my research as well, that we have a tendency to defer in those situations to the information that the systems are bringing to us. If we feel like, hey, I don't know as much about Judaism as Laurie, or I don't know as much about Judaism as Alexa, my tendency is to say, okay, I'm going to take whatever is delivered to me as being more authoritative and more informative than what I have. And so this is part of the conundrum, I think. You know, if you look at anything the Pew Research Center has looked at on how much Americans know about religion and religions in general, probably the only thing we're worse at than geography is understanding other religions. I think something like 62% of Americans fail a basic test on things like when the Jewish Sabbath starts and whether yoga is based in Hindu tradition. We just are so ignorant, as a country, about this that we tend to defer to places that we find authoritative, like all the media platforms,

Rajiv Malhotra 41:29

You know, I would say, to add to this, that the asymmetry of power is enhanced by AI, and AI has its own asymmetry of power. So you can conceive that those who have the biggest algorithms in terms of market share, the best and most powerful hardware, the most efficiency, and who have gained a huge footprint, a lot of sponsors and advertisers propelling this, those people will have a huge amount of ideological power, comparable to what any organized church or religion would have. They can influence elections, they can influence bias against certain things and for certain things, what you should buy, what movie you should watch. They can also do the same for and against faiths. And this, I think, is a huge issue. You know, we have turned this over; we're assuming the neutrality of these algorithms, which is simply not the case. We are assuming that in the garb of secularism everything is sort of equal, but actually there is power, there's definitely power involved. I mean, you can boycott a person, you can ban a person, you can bring in a person, you can change your mind and say, okay, we will now allow discussion on COVID which we previously didn't allow, and that's what Zuckerberg did. I mean, that's huge power over the ideological spin that 8 billion human beings on the planet are being fed. And this is now going into classrooms, this is going into policymaking; in a lot of developing countries they are relying on AI, machine learning, and all the social networks to make policies, to decide what is right. So I think the amplification of bias, the amplification of human ego, the amplification of power is unprecedented today, and therefore, given that religion is such a huge source of human dignity and positive things.
On the one hand, but it's also a powder keg that has a history of being blown up and being misused. Given that tendency, that risk that religion has, combined with the fact that now AI is going to be manipulating and managing, and AI may be running, you know, maybe doing training courses for people of faith in certain religions. So this, I think, is a topic, an uncomfortable one, but it has to be put on the table in Silicon Valley, and people of faith have to do this. I'm very glad I joined with David in this AI and Faith, because I think we are going to make this a big issue. You know,

Laurie Segall  44:13

With AI, to some degree, we talk about it kind of playing God, and maybe we give it too much credit, maybe we don't. But, you know, you spoke about this in your article: it can rule, you know, who's surveilled using facial recognition, who goes to prison, who gets a loan. You know, not to go off the religious thing, but there's a bit of a judgment day theme going on. You guys are all talking about this in a certain way, so when we talk about an ethical framework for algorithms, like, I've been covering Silicon Valley for a long time, and you hear a lot about the ethical framework without as many tangibles, right? So are there tangibles when you're in these meetings with these folks and these tech executives? Can you give us, maybe Shanen or Rajiv or Yasmin or David, a look behind the scenes of what exactly you're suggesting, and maybe where some of the pushback is coming from, what the pushback is?

Rajiv Malhotra 45:12

So my take is that people of faith should be able to represent themselves, rather than other people representing them. And this, I say, is particularly a problem among the Dharma traditions, because the vast majority of discourse on these traditions over the last 200 to 300 years has been written by people who are outside of these traditions; they were the colonizers. They controlled the English language, and so all of this stuff got written into what is actually a foreign language for these faiths. And so to decolonize is a huge project, but it's not going to be easy. And what is happening now is that AI is re-colonizing. The premises built into the algorithms are sort of re-colonizing not only the views people have in the United States and Western countries about, you know, non-Western faiths, but are actually being exported to those countries. So you'll find that, in a place like India, for example, and other Southeast Asian and South Asian countries, a large number of the people who are educators are relying on what Alexa says and what this one says and what is popular on Facebook and so on, even about their own culture, even incorporating biases and stigmas and stereotypes about themselves. And that's what colonization means: when you accept the views of others about yourself. And like the East India Company, in my book I talk about Google as the next East India Company, because the East India Company controlled so much power, more than the British government. It was a private company with more resources than the British government, and it became the most powerful entity in the world for a certain period of time, and look at the effect it had, look at the destructive effect it had on the lives of people.
My concern is too much power in the hands of a few private companies that can then turn it into kind of their own ideology, whatever it takes for them to spread their market share. So this is a huge red flag. And I wish that there were a dialogue with the Silicon Valley top brass, not just goody-goody stuff, but a dialogue where people of faith would be able to raise these kinds of issues. I don't think that the adjudication on what is biased or not biased against a certain faith, in the machine learning training going on, is apt, is fair and neutral. I don't think that all the right people have a seat at the table.

Linda Kinstler 47:42

If I can just kind of address your question a little bit more, I think it is one that has come up quite frequently, not just for this piece but also for previous pieces. And I think some of the difficulty of asking, okay, how is this implemented in practice, is that there is no agreement, or kind of common language, or even protocol for how one would do such a thing. It's quite clear that there is a lot of consensus that it should happen, that it should exist, that there should be a way, but, like one person told me, you know, no one will touch running code, and obviously there are practical reasons why you wouldn't want to do that, especially if you have shareholders involved. But there is this kind of desire to be able to point to code and say, this is where my moral values are.

Shanen Boettcher 48:39

The challenge with the code is that the code has been developed over decades now, by hundreds of different people, and so when you see these tech leaders on the stand in front of Congress trying to articulate what's going on with these algorithms, they truly are unable to do it, because no one human can say, here are the inputs, here are the outputs. So that's part of the challenge, I think: no one is in a position to describe exactly what's happening, or can just change a few knobs and make things better. You know, to your point, Laurie, about what's happening, I think at a top level companies are putting out sort of sweeping platitudes and principles, like a deontological structure of saying, here's what we stand for and here's what we think should happen, and, you know, this should be fair and it should be transparent and it should be explainable, things like that. They also look at sort of the teleological, or what are the consequences of what we do: you know, if we have facial recognition but it's only being used for good, is that okay? Or if we can prove that it's only being used for good, would that be okay? So there's that approach, and then there's also a virtue approach, which is really just trying to say, this is what the leaders of the company think is the best thing for us to do. All three of those approaches are very much top down.
And I think what's needed is actually to look at it bottom up, and to look at the behavior, the effects that these systems are having on the people using them. And so, you know, maybe like a clinical trial with a drug, you would actually have people participate in using a system and then interview them afterwards and say: are you more anxious than you were when you started? Are you more depressed than you were when you started? Are you happier? You know, these would be ways that we would create this feedback loop. And so, you know, the touchstone here is really Cathy O'Neil and her book Weapons of Math Destruction. She talks about having a feedback loop. And that's the thing for me that's really missing from a lot of these companies: great, you have these high-level platitudes that, you know, we could sort of float up to a company mission of not being evil, but what are you doing to make sure that the systems aren't affecting people in negative ways?

Laurie Segall  50:59

Well, something I think about, Shanen: you know, I've covered Facebook quite a bit throughout my whole career, interviewed Zuckerberg many times, and one of the things you notice that's the same at these companies is it really is top down, and there's a bubble, right, a very big bubble. And the thing that seems to get lost over and over again, no matter how many times I've sat in front of Mark Zuckerberg and asked some of those hard ethical questions, is kind of that human impact. When you go behind the walls of Facebook, you see how it begins to happen over and over and over again. And so I sometimes wonder, you know, some of these harder decisions, can they be made when we look at what the real human impact will be? It is harder for those decisions to be made when it's top down. And so I just say that, and, Yancey, I know we haven't heard from you in a while, and I think it's important to get your perspective, because you've built a tech company, right? Like, you've rolled up your sleeves and you have built a company where, interestingly enough, the model of Kickstarter was built on faith, to some degree: give people money, this isn't an ad-based model. Can you take us into any, maybe this is a strange question to kind of, you know, just put you in the hot seat on Clubhouse, but were there any ethical or moral decisions you thought of at the beginning of the creation of Kickstarter that really kind of impacted you, where you could have gone one way but you went another way, maybe any regrets? Just, you know, lie down on the virtual therapy couch, because it's not a Laurie interview if you're not. Well, you know,

Yancey Strickler 52:45

I mean, the one thing I'll say, or a couple things: when we reflect on, say, Facebook making these top-down decisions, you know, certainly there is a culture that is defined within a company early on, and what ends up happening over time is that the only people who seemingly have the right to change that culture, or to make a decision against it, are the founders or the people who created it. Otherwise, you know, it's a weird dynamic of potential conflict with the core vision. And that becomes a real limitation of these sorts of organizations, and it's where I think you're seeing what's called Web3 create more fluid organizational types, where sort of everyone's perspective is reflected, it's more decentralized, those kinds of ideas. But yeah, you know, take Kickstarter: what were the moral questions we would have? It would be, you know, what happens if people raise money for something and they're unable to complete it? There are criminal versions of that, which are obvious, but then there are ones where someone got in over their head, and then in what way should we view this as being problematic, and is this something that we should try to prevent from happening before it starts? And what we found, and maybe this is convenient thinking, but, you know, for us we thought the more we in effect played God and tried to affect the outcomes of things, the worse it would be for the system overall. You know, we wanted to create a system that could be, to use a Nassim Taleb term, antifragile, that would experience, you know, ups and downs and become stronger as a result of going through them.
And so this question of how heavy-handed we should be, to what degree we should impose our values or beliefs upon, you know, a customer or user community: ultimately you end up being put in a position, you know, where I found myself, of often feeling like, do less rather than more, because there is a concern, and it's maybe a Western liberal concern, of not wanting to impose a value system. And so that's where you get into these places where you end up kind of nowhere. It can be kind of challenging, but I think you do come to these questions. You know, sitting around a room of other executives, or even trying to make a decision based on your own belief system, there comes a limitation where hopefully you're at least asking yourself, you know, does my decision really reflect just my beliefs, or is it something that represents this community's? What is it that we are supposed to express? And often these are not things that you've ever really talked about deeply enough to where you truly know. So these are, you know, the toughest situations. Every company has some version of this, and it's where you spend an inordinate amount of your time, on these kinds of questions.

David Brenner  55:41

Laurie, I wonder if I might add to that. I love that perspective that Yancey has from the founder and executive office role, and you see that too in Brad Smith's book Tools and Weapons, where he talks about what Microsoft is seeking to do with ethical review boards, kind of like IRBs in the bioethics world, design boards that are supposed to pass on applications and their impact on people. But the other big development, and I wonder if this has happened at Kickstarter as well, is the rise of diversity and inclusion groups based on faith in many American corporations; faith has become one of the centers for those kinds of employee resource groups. And then you have these ethics offices that corporations have developed, like Linda's article last year in Protocol about the Salesforce ethics office. But there's not much of a connection between those two rising corporate organizations, and I think that might be a grassroots opportunity for people of faith who are tech workers and deeply knowledgeable about these technologies. If they can come to speak the same language that corporate ethics speaks, by translating their faith into values that can then be aligned with their work, you know, there may be a new opportunity forming up, you know,

Gil  57:10

If I could add a quick thought: my name is Gilad, I'm an AI entrepreneur based in Seattle. And, you know, one of the things that happens, at least in my experience as a founder in early-stage startups, is that the experience is very intimate, right? It's the founders with a small number of people who are making all the decisions, partially because there's no one else around the table at the beginning, before the startup has reached any sort of success. And you begin to have this mentality of moving fast and deciding for yourself and going back to fix things later. And one of the things I think organizations like AI and Faith, and of course members of the team, can do as well is help remind founders that they're not on their own, that there is wisdom they can draw on from the past, from people who've come before, that they can use as a lens to make some of the important decisions that you make at any startup, but especially if you're using AI or technology like that.

Laurie Segall  58:09

So, Michael, what biblical lesson is going to help us understand the ethics of AI?

Michael Eisenberg  58:19

To respond to Gilad's point, by the way: one of the roles, I think, of the investor around the table, to the point that both Yancey and Gilad make, is that when you start a startup, it's hard enough, it's really, really, really hard to break through, and you almost don't have time for these issues. I think that's kind of the point of the investor, who's seen a few things and is maybe not as much in the trenches. Then, to your question more specifically: I find AI, again, to be a reflection of humans, and I go back again to the Noah story, which is that we have innovations that are disruptive, whether it's dynamite, or whether it was the plow before we had industrial or mechanical farming of any kind, and these things tend to produce abundance. Abundance of profits, abundance of power. And it's hard to handle abundance. The biblical narrative is filled with warnings about the impact of abundance on ego, the impact of abundance on thinking that humans are supreme beings. And I think these new technologies create, you know, a feeling of abundance, and a reality of abundance, that must be undergirded and supported or scaffolded by what I keep calling timeless values. And ironically, by the way, to the point Jean-Michel made so eloquently earlier, if you build things on values, they actually create long-term economic value that I think is more sustainable. Whereas when you don't consider these questions from a timeless perspective, and, I would say, from a religious perspective, you can have both technologies run amok and power run amok. And that's something we need to be careful of. So yeah, I really view AI, despite its fancy language, and Shanen's correct that nobody knows what's actually under the hood at this point, as ultimately a human creation, human innovation that hadn't been thought through at the earliest phases for the impact it will have if it's actually successful.

Gil  1:00:37

If I may, I'll give two quick biblical concepts that I think apply to AI, very quickly. First is the Jewish concept of pikuach nefesh, which means preservation of the soul. And I think it's really important that you design systems to remember and reflect on the fact that those systems will impact people's souls and identities and lives, and pikuach nefesh is something that entrepreneurs could remember. The second quick principle is the principle of Shabbat, or the Sabbath, and remembering that all of us need time to rest. So as we design our applications, and we think about things that could be addictive, or engaging, to use another term, remembering that everyone deserves a break, and we should design apps and applications that allow people to take one.

Laurie Segall  1:01:17

Or maybe even a business model that will. Yeah, I agree with that. Good luck with that. You know, I've heard a lot of optimism in my career, and a lot of people talk about that type of thing, but when you really think about it, I don't mean to push back on it, it's like, you really have to put your money where your mouth is, right? And we see certain situations because of a business model, too, in Silicon Valley. And I think it's Michael's interesting point, how you can have values and make profit, so it's certainly tonight's brainstorming exercise. Yeah, boy,

Michael Eisenberg  1:01:50

I think the human agency here is important. You know, we can decide to shut things down, and, to the last point, we can decide to insist on these values in some way as well.

Laurie Segall  1:02:02

Yeah. And I think for founders, and this is me speaking from a journalistic standpoint, having talked to founders, you know, a lot of times in the past it's taken media pressure, or great folks like AI and Faith, or writers, or articles like Linda's, to really kind of, you know, begin to talk about these things. I think when you talk about founders really beginning to make those decisions too, I think it gets really interesting and powerful. We have one question from an audience member who does not want to come up but has messaged me. I know we have to end soon, but I do want to ask this question from the audience member, for representation of those who are shy. They say: my question is, there are a lot of people who feel like religion has been used to persecute others. If we're going to use learnings of different faiths to help inform how we create AI, how can we make sure AI doesn't also weaponize religion against others? So this one's open for anyone.

Rajiv Malhotra 1:03:01

Well, you know, this is the issue that concerns me the most. And I think that people of faith have to get involved. You know, coming from India, I will tell you that a fairly significant percentage of AI engineers are from India, whether they're working in India or in Silicon Valley; there's a very large representation. And most of them, when I speak to them privately, are people of faith, they belong to the Dharma traditions, but, you know, they have not spoken up. They are either scared, or kind of embarrassed, or they just don't know what to say. So I consider that to be a kind of high potential, in line with what David wants to do and what AI and Faith wants to do, which is to mobilize the faith people who are already in these tech companies and to make sure that they have a voice, so that against this misuse of technology that the question very appropriately raised, we at least have a counter-voice at the table, not necessarily that we'll be able to avoid it. You know, when the British East India Company was doing all its stuff, there were people within the company protesting; there were employees, there were senior people listening, but nobody ever heard them. So it's not like all the British people were bad; a huge percentage of them were voices of conscience, but they were not in the driver's seat. And that's the lesson I think we have to draw: we need to put more and more faith-oriented people in the decision-making process in some of these companies, and we need to, from the outside, with David's initiative, nurture them and support them.

Laurie Segall 1:04:39

Right. And, you know, we've got to end, probably after this, but I'd love to go around and ask everyone: as we look to the future, whether it's AI or the future of technology, and I've always asked this question of big tech founders because I always think the answers are fascinating, when you kind of look at the future, what do you guys think, first of all, you know, what's keeping you up at night? What is the most important ethical issue, you think, when it comes to the future of technology and us complicated human beings?

Shanen Boettcher 1:05:15

For me it is the root of the business model being in our attention, and then coupling that with advertising as the fuel. And so what's happening is, you know, we are being presented with information and we don't know why we're seeing it, we're not sure why we're getting it. There's no explainability, there's no transparency, and there's no ability to change it, other than completely opting out of a service that we might otherwise find valuable. And so, you know, as Rajiv highlighted earlier, these are things that we really have to be conscious of: there's great potential for these systems to use the differences between worldviews to polarize people, to create conflict, to create fear, all of which we know feed into us spending more time and more attention with information. We've seen this play out in politics and other places in our lives, and so for me that's the most concerning thing, this lack of transparency and explainability in terms of where information is coming from and our ability to give feedback on it.

Laurie Segall  1:06:25

The next question I'm gonna ask Alexa is: Alexa, will the business model of Silicon Valley change?

David Brenner  1:06:33

I think the great imperative is just to pause and perform a risk assessment on every product that's powered by AI and ask: is this going to contribute to human flourishing or human destruction? It's probably going to be some of each, depending on how it's used, but just pausing and asking the question would be a really valuable exercise.

Laurie Segall  1:06:58

And Yasmin, what about you? We haven't heard from you enough, so what keeps you up at night? What do you think is the most important ethical issue?

Yasmin Ali 1:07:06

Yeah, I just wanted to add that, you know, I think that's one of the reasons that I joined AI and Faith: it's at the intersection of two things that are very, very important to me. I mean, the diversity that we bring to the table, so that in the long run we have all voices heard, from the people who are designing this technology and also, you know, the religious, faith people who need to be involved, who come together. If you see the founding members of AI and Faith, we have representation for pretty much every faith, and that's what makes it really, really unique. And I think we all need to come together to, well, I wouldn't say stand up against, but, you know, help inform Silicon Valley that this is an important factor to be considered. One of the things that really keeps me awake, I mean, from my business point of view, is working with several diverse people, people who come from all different backgrounds. I wish I were able to wave a magic wand and, you know, do something better for their lives and help make their lives a little bit more secure, because COVID has actually brought out a lot of inequity, and, you know, those are some of the things that I constantly think about.

Gil  1:08:46

I agree with everything that was said. The one thing I would add is AI transparency. I recently wrote about synthetic media for AI and Faith, and one of my concerns is that synthetic media is getting so good, whether it's written media or voice or, soon, video, that our already-existing detachment from a collective truth is only going to become much worse. And one of the things we could try to do about that is have some sort of standard of AI transparency, where any sort of content that was manipulated with AI is somehow communicated to the viewer, but of course that's a lot easier said than done.

Michael Eisenberg  1:09:27

Laurie, can I give a very non-technological answer? Sure. I think spending time, making the choice to spend time with family and with children in particular, raises both the sensitivity to the issue and the awareness of it. Our kids, if we let them, or if they do it out of our sight, spend a lot of time on these platforms, and I think sitting and talking to children helps sensitize adults to some of the parallels here, and makes the need to engage on this topic ever more urgent. And I've often wondered, when I look at founders, those that have had children when they start a company and those that haven't, whether there's an extra level of sensitivity there, and my anecdotal conclusion is yes. As we build these very powerful technologies, the next generation and the generations after are something we need to keep front and center.

Yancey Strickler 1:10:48

I would say I'm just fascinated by, and think nonstop about, how we learn to collaborate and cooperate in a post-internet world, a world of a kind of Cambrian explosion of identity, and how we find those common bonds to fix problems. Because there is a looming cloud of climate change that is going to require a degree of coordination we've never seen before, at exactly the moment we become seemingly more individualized. So how do these tensions come together, how do we survive, and how do we continue to be independent individuals while still being united and connected on the bigger and bigger things that affect the future we'll all share?

Laurie Segall  1:11:39

And Yancey, one of my regrets from this panel is that we didn't get too much into Bento-ism, so if you could just explain really quickly what it is and where people can find it, because I do think it's part of why you're here today, and I don't want to not even mention it.

Yancey Strickler 1:11:55

Yeah, sure. Bento-ism is a philosophy, maybe a lifestyle, maybe a religion; its members have different terms for it. But it's about making decisions that see beyond a near-term orientation. It's an acronym: BENTO, beyond near-term orientation. And so it's a form of structure, and a community of people around the world. We try to make decisions considering the perspectives of now me, future me, now us, and future us, and there's a whole world of thinking within that.

Laurie Segall 1:12:24

And Rajiv, let's begin with you.

Rajiv Malhotra 1:12:28

So what keeps me up at night is also positive inspirations, positive reasons, hopes, because I feel that the technology cannot be put back in the bottle; once the genie's out, it's out. So this technology of AI and all these related technologies are definitely going to continue galloping ahead. And so rather than fighting the technology, I am looking for hybrids and synthesis between AI and the spiritual pursuit to evolve human beings above the biological layer, which is what the whole consciousness movement and the whole meditation movement have been trying to do for a very long time. I hope for a synthesis where that movement can actually benefit from certain kinds of AI. And once we can channel AI towards the spiritual upliftment of humanity, across all faiths and so on, it'll give AI a new place. So I'm actually quite hopeful, and that's the topic of my next book.

Laurie Segall  1:13:28

And Linda, I realized we didn't get to you. Do you have any ideas, as someone who's covered this and is in the trenches? Is there anything you're thinking about, what's going to be the next article, can you give us a preview?

Linda Kintsler 1:13:38

I wish I could tell you, but no, I don't know. I think I would just really like to see people start speaking more clearly and identifying the things that they're talking about, you know, saying the silent part out loud, which is what I found most compelling about what AI and Faith and others are doing. And I actually think that would bring a great deal of clarity to the industry at large.

Laurie Segall 1:14:08

Thank you so much. This has been such a fascinating conversation. We are going to be back next week; our episode next week focuses on true crime. As someone who watches a lot of Law and Order SVU, I'm very excited about this one. We're going to look specifically at a platform that's crowdsourcing justice online, and we're bringing in one of my favorite well-known hackers in the industry to talk to us a little bit more about the dark web and cyber justice. He's seen a lot, so I'm super excited about that. That's next Tuesday at 6pm, and you can sign up for our mailing list, and also check out the link on my Instagram and Twitter. This is a shameless plug and I'm sorry, I'm terrible about this, but you can also preorder my book on Amazon. It's coming out in 2022 and it's called Special Characters; some of the inside-baseball stories of the tech founders I mentioned are in there, along with a lot of these ethical questions. And lastly, thank you all so much; this really was a fascinating conversation and hopefully just the beginning of larger conversations. I hope we do this again. Have a great evening.