
Creating digital twins of societies

My conversation with Justin E. Lane and F. LeRon Shults. We covered MAAI (multi-agent artificial intelligence), decentralized economies, navigating conflicts, Jen's trip to the theatre, and monkeys flinging poo.

Hello and welcome to this installment of the WTP interview series. Today I'm speaking with Justin E. Lane, Co-Founder and Chief Executive Officer at CulturePulse as well as F. LeRon Shults, who, among other things, is Chief Research Officer at CulturePulse. If you want to learn more about them and their work, you can find links to their LinkedIn profiles and a link to their company's website in this paragraph.

The transcript below is intended to reflect the actual conversation, so it has all the marks of spoken word. Here we go…

.

Katie Burkhart: Perfect. Okay, so I'm just going to dive right in and just say I've had an absolutely wonderful time getting to know you both and your backgrounds a bit better. You have done a tremendous amount of very cool stuff in spaces that I do not play. But one of the things that I noted was a theme that seemed to come up in your work as well as your credentials. Particularly Justin, in some of the publications you have listed on LinkedIn. Which is religion. What got you interested in the subject, and why do you go back to it? Why do you continue to work on it? LeRon, I know this is in your credentials as well, so feel free to chime in.

Justin E. Lane: Yeah, I mean, half of the story is kind of also how I met LeRon, which was through the study of religion. I was always interested in the study of religion, even from a really young age. My family went to a conservative southern Methodist church when I was a kid, and so I was always very interested in what was going on. I started reading a lot about other religions as well when I was a teenager. And then, when September 11th happened, I became really engrossed in this question of “why is it that someone would believe something so much about their religion that they would be willing to not just kill for it, but also take their own lives for it?” And I started to become very interested in the ideas of religious extremism and new religious movements as well, which have — Oftentimes, at least the new religious movements we hear about typically have a violent end.

So, the question really became very interesting for me and I thought, “well, if I could figure out the psychology of what motivates these sorts of extreme behaviors, that would be something that might be able to help the world in a different way.” And so, I set off to study what makes people tick and found in the early 2000s that computer modeling is a really interesting way of doing that. And so I set off to make that. My primary focus was, “can I figure out the psychology of what makes extremists tick in a way that's so specific and replicable that I could create computer programs that can do the same so that we can study them.” And then around 2013, I published a paper that was outlining really, how I propose we can do this. And LeRon picked it up and asked if I would, when I finished my doctoral research at Oxford, if I'd move to Boston and join a project that he was heading up to do exactly that.

Katie Burkhart: Very cool. Very cool. And how does this inform what you're doing at CulturePulse? Like, this particular interest and everything that you've learned?

Justin E. Lane: Yeah. Well, the narrative version of this is that when I finished my undergraduate degree at University of Vermont, I decided I really wanted to actually professionally study this and I wanted to go into academia. And after talking with my mentors, they said, “If you really want to do this, you should go and study in Belfast, in Northern Ireland. They have a center there where there's one particular professor who's doing a lot of work like this, and you should talk to him. He'd be interested in your computer modeling approach even though he's not really a computer scientist, but he'd help mentor you to do this.” And so, when I moved to Belfast, it was really eye-opening because I had never lived — I'd never even really been overseas before I got off the plane and just decided to move there. And when I got there, I very quickly learned that the issue of the conflict in Northern Ireland was not really religious.

It was much more political and economic, and religion was just the banner that people flew. It was kind of the flag that they flew in order to energize the community and to define the communities of who was their ingroup and who was their outgroup. And that really fascinated me in a way because it was like, “wait, how could not only I have gotten this wrong, but how could so many people who consider this conflict a religious conflict — how could they have gotten this wrong as well?” And it kind of dawned on me that it was because people weren't talking to the people who were doing the conflict.

And so, as I spoke with more people who were ex-prisoners and paramilitaries, I really realized that listening is a very important aspect of understanding these kinds of conflicts. And to this day we've worked in a lot of different conflict zones that people consider religious. So, for example, in the Balkans, the religious and ethnic divisions there between Muslims, Orthodox, and Catholics are still palpable to this day. And of course, our work in Israel and Palestine. That's obviously a religiously flavored kind of conflict as well. So, those intricacies still very much play a role to this day in all of the work that we do. But it's not always about religion, is what we found.

Katie Burkhart: In the work that you do, as I understand it — and this is really to both of you — you create digital twins of real-world societies, and you're attempting to deliver actionable insights to both enterprises and governments. I'm familiar with the concept of a digital twin, but I'm used to it being applied either to technology or some other type of machine. But you are actually trying to make it of people, and more importantly of cultures and subcultures, and then the individual people presumably that make those up. Can you just take a moment to probe the similarities and the differences there? Are people machines?

F. LeRon Shults: That's great, I can tackle that one first. So, yes, they're digital twins. Sometimes we refer to them as “artificial societies.” So, imagine not just one artificial intelligent agent. Usually when people hear of AI, they think of an agent that's really good at playing chess or can read really fast, for example, machine learning. But what our models do is they have hundreds or thousands or even millions of artificial intelligent agents, each one of whom is different.

So, each agent is programmed to have different variables such as gender, age, employment, a religious identity, possibly, when that's relevant, or… Sometimes we have as many as 80 or even more variables, depending on the complexity of the model. And then what makes the approach we do unique, which is why we call it multi-agent artificial intelligence, is these agents are not simply algorithms that compute what is most useful rationally for the agent to do — which is how many approaches in what's called game theory go about it — but rather our agents have variables such as anxiety, such as tolerance for being around outgroup members, for passion, for their belief, if it's in a model that involves religion, their passion for their belief or their participation in the rituals of their particular religious culture.

And so then each of those agents has different levels of those variables. They interact in networks that are weighted. So, in other words, if I encounter someone from an outgroup that's really highly intense, who's fused to their group and a little violent, then I'm going to react differently than if I meet someone in my ingroup. So, it's not just a bunch of agents randomly interacting. They're networked and connected and then they're spatially geographically separated, so they're more or less likely to interact with each other.

So then, imagine you have a validated artificial society of this type. So, let's just take Northern Ireland for example. Around 5 million agents which represent broadly the population of Northern Ireland. And then you can run simulation experiments where you change what are called parameters at the environmental level.

So, for example, you can make it more economically stressful, or you can have more immigrants come in from one direction or from a certain culture, or you can ratchet up policies that would force or encourage people to interact or that would keep them from interacting. So there are all kinds of parameters you can adjust. And then you run experiments in the artificial society. Again, thousands or even millions of simulations. So, then you get, probabilistically, a prediction that says, “if you want your society to look like this, here's the most likely pathway toward that outcome.” Which, in the case of all of the people we've worked with, that outcome is peace, or lack of conflict, or social cohesion.
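To make the mechanics a bit more concrete, here is a minimal sketch (purely illustrative, not CulturePulse's actual model or code) of the structure LeRon describes: agents carrying psychological variables, random encounters standing in for a weighted network, one environmental parameter for economic stress swept across many runs, and conflict reported as a frequency over those runs. Every name and number below is an assumption made for the example.

```python
import random
from dataclasses import dataclass

@dataclass
class Agent:
    group: str        # "A" or "B": a stand-in for ingroup/outgroup identity
    anxiety: float    # 0..1, rises with stressful outgroup contact
    tolerance: float  # 0..1, tolerance for being around outgroup members

def build_population(n, seed=0):
    rng = random.Random(seed)
    return [Agent(group=rng.choice(["A", "B"]),
                  anxiety=rng.random(),
                  tolerance=rng.random())
            for _ in range(n)]

def run_once(agents, economic_stress, steps=2000, rng=None):
    """One simulation run; returns True if any encounter escalates to conflict."""
    rng = rng or random.Random()
    for _ in range(steps):
        # a random encounter; a real model would use a weighted, geographic network
        a, b = rng.sample(agents, 2)
        if a.group != b.group:
            # stress makes outgroup contact more anxiety-inducing
            a.anxiety = min(1.0, a.anxiety + 0.01 * economic_stress)
            # high anxiety plus low tolerance makes escalation more likely
            if rng.random() < 0.001 * economic_stress * a.anxiety * (1.0 - a.tolerance):
                return True
    return False

def conflict_probability(economic_stress, n_agents=300, n_runs=200):
    """Sweep one environmental parameter and count how often conflict emerges."""
    hits = sum(run_once(build_population(n_agents, seed), economic_stress,
                        rng=random.Random(seed))
               for seed in range(n_runs))
    return hits / n_runs

if __name__ == "__main__":
    for stress in (0.1, 0.5, 0.9):
        print(f"economic_stress={stress:.1f} -> P(conflict) ~ {conflict_probability(stress):.2f}")
```

As LeRon describes, the real models use far more variables per agent, empirically weighted networks, and geography; the shape, though, is the same: many heterogeneous agents, many runs, and an outcome expressed as a probability rather than a single forecast.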

Katie Burkhart: Fascinating. That's very helpful and gives me a little bit more background, but you specifically were talking about Northern Ireland. And this gets to — Justin, your comment about “nobody was actually talking to the people,” which is a thing that I see a lot in my work. And it was this idea, again as I understand it, that one of the sources of data that you use to train these multi-agent models is news.

And you had brought in some 50 million articles but determined that wasn't sufficient. You actually needed to do your own research to understand the people involved in the conflict and their psychology and that the key to success was the collection of firsthand information. So again, actually talking to people. Can you tell me a little bit more about why this is so fundamentally important and how do you go about doing it? What types of questions do you ask the people that you talk to? Who do you select?

Justin E. Lane: Yeah, so it depends on —What we're really studying depends on who we talk to. The issue that media has — and in an interesting way, it's almost the opposite issue of social media. Social media is far better for studying extremism in a way.

Typically, the people who are the most violent in a conflict are a minority of the overall population. So, if you're looking at the largest net that you could cast in a population, the likelihood that you're going to just find someone who's normal and not really likely to commit some kind of atrocity, that's your average. The average person is not going to pick up a gun or start creating bombs or really get into any hard conflict. But there is a subset of society that will do that, and a lot of times if they're doing it, they're not posting about it on social media, they're not really talking about it in any way, so we have to go and talk to them about it.

And that requires face-to-face interactions. Not least of which because you need to have a certain amount of rapport or trust with a community to be able to get anything out of them. And you might not always get everything out of them, but showing up is kind of the first step. Being able to say, “I will come to you on your territory and have this discussion with you,” and you can see ideally that I am an honest actor here. I'm just here to try and get information.

And they get a sense for you in a face-to-face interaction that would not be achievable, really, through either typical data analytics or through an online mediated conversation with them. So, if we're having a conversation with them, a lot of times we start very casually, actually, just to kind of build that rapport. And then as the conversations go on, we can try and dig further and further and better understand.

For us, the key is, what were their motivations for joining a group? And then, once having joined the group, what would their motivations have been for carrying out some sort of act of violence? What were the real key triggers? How do they view things like blame and responsibility and justice, and how does that work into what they're doing? What are the key things that they see as crucial to their identity, not just in the past but today? Because we find that identity is one of the biggest drivers, really, in whether or not someone's willing to fight and die for their group is — That's an identity function.

Katie Burkhart: Wow, okay. So, what you were saying about identity and how this comes together. I'd love to hear a little bit more about the identity function in general. And then, if it is related, cool. If not, you can certainly answer separately.  This concept of groups versus individuals. And what you're getting into is this idea of, the extremists are the minority, then there's all these other people. And the reason I'm asking about that is, individuals are notoriously hard to predict.

For example, it's very difficult to know — I was reading an article recently — it's very difficult to know when Susie is going to show up at the theater, but we know that generally speaking, Wednesdays are going to be more populated than Fridays, right? There's this interesting thing about humans and behavior and I'm sort of curious, both in the sense of identity building and how that comes in at an individual versus collective level, but then also just more generally, how does what you're doing lean in or contradict the concept that individuals are tough, but groups are actually possible? For example, one of the questions that comes to mind is, how many people do you need to talk to in order to feel like you have a good enough understanding?

Justin E. Lane: So, those two questions, I would say, are fairly related. I mean, groups are nothing if not collections of individuals. But all individuals feel and align themselves with at least one group. So, all individuals are going to have their personal identity, which are those things and experiences and attributes that they consider to be unique and self-defining.

If I were to ask you, who are you? What do you stand for? What are the things that have defined your life? Most people would be able to tell you. They might not be comfortable telling you right away, because some of them might be rather intimate details about their life, but everyone has these things. Everyone also has a group that they feel that they would align themselves to. Typically nationalities are one of them. Religions are one of them as well, of course. Ethnicities are a very palpable one. But even as mundane as, “oh, I'm an alumni of this university.” That is a social group that we can align ourselves to.

Now in some cases you can always recite, “oh, this is what that group stands for, and therefore this is why I align myself with the group.” And people do that in order to feel safe and secure in their life and in their environments. There are some circumstances where that gets more complicated, and that line between their personal self and what motivates them personally and their social selves and how they align with the group, they combine and they blur. And that typically happens in response to having some sort of very intense emotional experience that they then reflect on. As they reflect on this experience, they internalize what happened to them with the beliefs of their social group. So, if it's religious, they use that theology to interpret what happened to them. If it's nationalistic, they can use national narratives to interpret what happened to them. And in acts of terrorism, for example, you see this a lot.

So, we did a study some years ago on the Boston bombing and found that reflection was key to how people understood what was going on there. And I did research in a number of other areas also looking at this even further. And even going as far as looking into Pentecostal Christianity and finding that religious conversions and born-again experiences and all of these really intense emotional experiences, what they were doing was serving to basically fuse their social self and their personal self in a way that they can no longer separate that. And what ends up happening is that if someone's going to attack you personally, you're going to fight back. But if you can't dissociate your personal self from your social self, if someone attacks your social group, you're also going to fight back. And that ends up becoming the key to unlocking why it is that people seem to be motivated to be willing to fight and die for their social group.

When you take a step back and you look at things like the basic training in the US military, basic training is a very intense experience that you go through with your brothers in arms. And as you come out of that experience, you are so bonded to both those people and the concept of the nation that you are willing to fight and die for the nation and willing to fight and die for the guys that stand on the line next to you. So, it's something that we're just now starting to understand scientifically over the last 10 years. But it's something that humans have done naturally for centuries and centuries, as we've needed to create larger and larger groups where you're not just willing to fight and die for your family like we needed to 10,000 years ago. Now you need to fight and die for ideas that define your group, and that's something that's a bit more unique to the modern age. And you do. You see this in places like the Middle East, you see this in Northern Ireland, you see it in the Balkans, you even see it in the United States increasingly.

Katie Burkhart: Given your current emphasis or focus on conflicts, there seems to be some sort of sense to looking at this social group angle and looking at what's going to happen between the groups. But just — it's sort of curiosity, and maybe there's a point at which we reach a limitation, even with this technology. Where this is useful is that you may be able to see a macro trend of how a group may respond, or whether it may grow or get smaller, who might be attracted. Is there any way to understand if Jen, who is a 30-year-old female with these 80 behavioral characteristics, is going to be more likely to join? And why?

I realize that unless you really put in all the data for Jen — to literally interview all 5 million people in the country — Jen does not actually exist. But I’m sort of extrapolating that idea of, how far down into individual behavior can you go before that starts to become difficult?

Justin E. Lane: In the context of conflict, it's interesting, because once we understand that people are motivated to fight and die for a group because of this relationship between their personal beliefs and the beliefs of their social group, you can start to look for signatures in their psychological profile. And also, how different narratives, cultural narratives, are interacting with their personal beliefs to see whether or not there is a higher likelihood that this person might engage in a violent action. And that's one of the things that our AI systems are specifically geared to try and pick out.

It’s not just to understand that Jen has a certain age or psychological profile, but also that Jen's core defining belief has to do with, let's say, animal rights. And that, as things are heating up nationally regarding things like environmentalism and climate change, and she's getting more and more angry about these issues, that does show the psychological signature of motivation to potentially take a more drastic action. And so when people spell it out in narrative form, you're like, “yes, of course intuitively this also makes sense,” but we now have the technology to also do that. So, looking at someone's — Looking at, for example, social media channels, and looking at how those narratives are interacting with people's personal beliefs, does become a very powerful way of looking at it.

Katie Burkhart: Very cool.

F. LeRon Shults: If I could tack onto that briefly? To go back to your Jen example and going to the theaters on Wednesdays. So, the phrase that's sometimes used in this field is Agent Zero. So, you may have, say, a million people in a population. And you don't know which agent is going to be the first agent to tip over — to hit a tipping point that suddenly leads to conflict between the groups.

So, in other words, you might not know that Jen is going to be the one who's going to suddenly freak out and throw a rock or shoot somebody, at which point both one or both groups kind of tip over and everyone joins Jen. But the point is that, you don't need to know whether it's Jen or not, because if you have people who represent the basic tendencies, identity fusion Justin was describing, and the other variables in the population, then you can play with the ecological environmental shifts and all kinds of possible interactions between the agents to see that one of them, at least, or some set of them will — Whether it's Jen or not, we don't know, but some will pop up, and then that will lead to a kind of cascading effect, which could lead to conflict, or social cohesion, or any other number of actions that you're trying to study in the social simulation.
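A toy version of the "Agent Zero" point (again illustrative only, not their system) is a threshold cascade: each simulated person joins once enough of their neighbors have, the identity of the first mover is arbitrary and changes with the random seed, and yet the population-level share that ends up mobilized is still measurable across runs. The thresholds and the network below are invented for the sketch.

```python
import random

def cascade_share(n=500, k=8, seed=0):
    """Threshold cascade on a random network. Which agent tips first ("Agent Zero")
    is arbitrary and varies with the seed; what we measure is the share of the
    population that ends up mobilized once the cascade settles."""
    rng = random.Random(seed)
    # share of neighbors that must already be active before an agent joins
    thresholds = [rng.uniform(0.0, 0.5) for _ in range(n)]
    neighbors = [rng.sample([j for j in range(n) if j != i], k) for i in range(n)]
    active = {rng.randrange(n)}  # an arbitrary first mover
    changed = True
    while changed:
        changed = False
        for i in range(n):
            if i not in active and sum(j in active for j in neighbors[i]) / k >= thresholds[i]:
                active.add(i)
                changed = True
    return len(active) / n

shares = [cascade_share(seed=s) for s in range(20)]
print(f"mean mobilized share over 20 runs: {sum(shares) / len(shares):.2f}")
```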



Katie Burkhart: I appreciate that expansion because it sort of drives to where my brain is going next, which is a phrase I picked out of — whether it's one of your backgrounds, or the company website, or the articles I've read —"simulation focused on the prediction of social networks, markets, and social stability.” Which was a phrase that sort of caught my eye because it fed into a gut feel of my understanding of what you're doing, or how it could potentially be understood, which is the idea of central planning.

Does what you are doing run into some of the same issues that have historically bogged down a guru coming up with one grand vision for the world and being able to kind of make that all uniformly work? Do you even see what you're doing as potentially a form of central planning or informing central planning? I'm just sort of curious how you think about it.

Justin E. Lane: The way I think about it is that we're actually informing decentralized planning. The key to the idea of these agents operating in environments that are realistic and on social networks is that it's just embracing the fact of human nature. And this is where all of central planning really has gone wrong. Particularly over the last 150 years, but really throughout all of human history. Which is that humans are heterogeneous, and the idea of a one-size-fits-all social planning approach for anything political or governmental is about as effective as a one-size-fits-all glove that you can get at a grocery store. It's not going to work. We are too varied and we vary within our subgroups and our subcultures, and then the subcultures themselves vary within a nation state. And so, trying to come up with a one-size-fits-all approach can be very dangerous; it can be very damaging.

Someone often gets left out. So, in a lot of ways, one of the best uses of this technology is really as a warning against overcentralization in planning, and trying to say, “look, what can you do to engage local stakeholders, local NGOs, for example, local universities, in order to try and better address the complexities of the issues at hand?” Because centrally planned anything, really, is not complex. It's a single point saying, “we have decreed from on high that we will now do this, and this is the way we move forward.” And that's all good and fine. But that one decree has to apply equally to everyone in the society, regardless of all of the differences and heterogeneity that make us so powerful as societies. Evolution works on variation, and so progress is going to need variation as well. Otherwise we'd never step out of line enough to create something interesting, create something magnificent.

So, if anything, a lot of times this is kind of like — It's better for decentralized planning. And saying, “look, a little over here and a little over there,” is going to work a lot better than just trying to say, “look, from a centralized planning perspective, this is what we need.”

I mean, if that actually worked, then you could just centrally decree, “look, we're going to stop fighting,” and then people would do it. Obviously that's never worked, right? So, you need to have something that's taking into account the complexities of human behavior to try and figure out how to tip the scales more in terms of peace, because just decreeing peace has never once worked in the history of humanity.

Katie Burkhart: No. No matter how boldly we do it and with maximum fanfare, it still hasn't happened.

Justin E. Lane: Yeah, that's not how we work. We're not that kind of monkey.

Katie Burkhart: No, no, sadly not. We like to fling poo. The track that this got me to is — That's a fascinating answer, and it makes me wonder, looking at your system and what you're building — and system is too simple a word, but — I wonder. One of the documentaries I watched, because I found it vastly entertaining because history is such a wonderful teacher, is “How to Become a Tyrant,” which Peter Dinklage wonderfully narrates. And it made me think about that image you gave of, “here's your simulated society. I can now kind of push on different levers and see what happens.”

Can that accidentally end up in the hands of an authoritarian government — someone who's trying to become a tyrant — and make him a better one? Can it end up in the hands of Buy n’ Large, which is the mega corporation in WALL-E, so that they very successfully get everybody in their floating chair watching their screen, sucking on their Buy n’ Large cup because they figured out how to make that happen? It seems, based on what we just talked about, that they would have a hard time because we are not perfectly homogeneous, but I'm sort of curious if you've thought about that and how you're thinking about it.

F. LeRon Shults: Yeah, I could tackle that one first. As I think I mentioned to you earlier, Katie, some version of that question is always the first or the second asked after I give a lecture on these topics. So, there are several ways of responding to it. The first, kind of broad one is that this kind of technology can be used for good or ill, like every kind of technology. Genetic engineering, nuclear power, and so forth, they can be used for good or ill. People are working on it, no doubt. People are working on it in countries and in areas that disagree with our vision of the future, from democratic countries and contexts. So, what Justin and I believe is that it's better to be upfront and out front and say how we do our models. We are super clear, first of all, on the assumptions that are built into the models. The formalization of the theories and the architectures forces you to do that.

Second thing we do is we work really hard to have a diversity of stakeholders, as diverse as we can pull off, so that enough people are at the table to shape both the model development itself, but also the simulation experiments. And then we've just decided — We have a special advisory board in our company that is responsible for looking over the ethical considerations and concerns of any potential project that we do and approving it. And so, we feel like it's better to get out front and upfront and say, “This is ethically challenging, so let's have an open conversation, and let's get as many people who are concerned to support peace and social cohesion at the steering wheel as early as possible in this process.”

Katie Burkhart: Cool. That's fascinating. There was something I read… So, thinking ethically. Number one — I'm going to hop a question. I love the idea that you state your assumptions. You are my people. I encourage people to do that a lot in my work with businesses and teams and even individuals. We all make them, whether we realize we're making them or not. They're kind of essential, or the logic doesn't hold.

And also, needing to have diverse viewpoints at the table because we are not homogeneous. And this makes me think about the business implications for a moment. And you mentioned enterprise and governments. Clearly if you're dealing with conflicts and peace, you are probably dealing with governments more often than businesses, but can you talk to me a little bit about potential business implications and uses that you see? What do you envision an enterprise doing with it? And does that align at all with what any businesses have approached you and asked you to do? Or are they asking for something potentially different than what you'd like to see them do?

Justin E. Lane: Yeah. So, our bread and butter is actually made in the NGO space, when it comes to where our revenue actually comes from. But from a business perspective, I actually really like the idea of engaging with businesses. Because businesses have a sort of check on them that is kind of natural in a way. It's very organic that, if they step out of line too much, they fail. And there's no one — In an ideal world, there's no one there to catch them. Governments can fail consistently for decades and they always still are around and we still have to deal with them. Nobody's dealing with Enron anymore, because Enron messed up, and they failed. And no one's really dealing with Bernie Madoff anymore, because he messed up and he failed. The same ultimately will happen with organizations that are involved in overextraction of resources.

Eventually they're going to run out of those resources and they will fail. Now, we need to be wary of at what cost will they fail. That's where I think the real interesting question is. But for us, the world is a very complex place, and there are a lot of things that we take for granted.

So, for example, the computers that we're using to discuss right now. For us to learn how to build that computer and figure out how to make the silicon wafer chips and how to create the buttons and where to put the lights and how a screen even works and how Wi-Fi adapters and all of these things work, that would probably take a lifetime. And then, the ability for us to go and mine all of the minerals and get the gold and the silicon and the copper and the cobalt and all of the things that we need to make the batteries and the transistors, it would literally take a lifetime. And it would be a multimillion-dollar endeavor.

But yet, we can buy a laptop for a few hundred dollars. And the reason is because we've specialized this. And we'll get a chip from Taiwan, and we'll get a screen from China, and we will get buttons from Germany, and then we will get something about the way that the hardware is programmed. We'll get the operating system from the United States. And these things are traveling all around the world in order to finally make their way to our local electronics store so that we can buy it and then use that laptop, and we pay a couple hundred dollars for it, and we don't think anything of it. But in order to keep this ticking and to keep the wheels on this thing, there needs to be a lot of things that fall into place perfectly. Because these computers are made on these large assembly lines in these factories where the screens are coming just in time to be put onto the computer.

The battery's arriving just in time to be charged and slid into the back before it goes to the next phase. So, if something happens to a ship off the coast of Yemen, for example, and Houthi rebels bomb a ship or fire rockets at a ship and that ship goes down, someone's not getting their battery. And if someone doesn't get their battery, then that factory in Europe has to slow down.

It has massive repercussions for all sorts of different things, just because one ship was potentially taken out. And this is true and then multiplied exponentially all the way around the world, from the Suez Canal to the Middle East, to Europe, to Asia, to Africa. All of these different places keep our complex economy running. And so, having a better understanding of what the risk of conflict is is of critical importance to a lot of large organizations and a lot of large companies.

But what I think is often overlooked is that they have, because of this, a profit incentive for peace that no one is really appreciating, I think, in the global conversation about what role corporations play in our daily life and what role they could play that's more positive.

So, having an understanding that conflict is going to hurt their bottom line 99.9% of the time. Unless you're like a small arms contractor operating in sketchy corners of the world, conflict is not your friend. Conflict is going to destroy your bottom line, cut into your profit margins, and maybe cause you to even have to take out loans and debt. So, they want to be very knowledgeable about what could happen and how it could impact them, and if — Insofar as they can try and take positive steps to mitigate it and actually increase cooperation in these regions — particularly those that are the most vulnerable in the third world, for example — and in developing nations so that everything falls into place the way that it should.

Katie Burkhart: That's fascinating that you're really looking at it in a more macro lens as far as what's going on, because you recognize that we often think of division of labor as, like, “you put the pin head on, and I put the pin in the box.” But in reality, division of labor at this point has extended to what I think, in some ways, the vision was. That if Taiwan is the best chip maker, we don't need to make chips. We can let them make chips, and we can do these other parts. But it does add potential complexity — and by potential, I mean lots of complexity and moving parts — into that system that can make it so that you have massive supply shortages. And depending on what you're doing as a business, that can be deeply damaging. If you do things like provide food and the person waiting for the food isn't getting it…

I could take different steps that would promote a more peaceful world that is ultimately going to be better for me, my business and my customers.

One of the things that I thought about — which is smaller, a little more micro — and again, it made me think a lot about how you're building these models of conflict and that emphasis on actually going out and talking to people. One of the things that's always difficult for businesses is change. Both internally, like “what happens if we make this change on our team? How stressful is that going to be? How much anxiety is it going to produce? How many people are going to quit because we're making some significant change?” But also, for whoever they serve, whether they're a nonprofit or a for-profit. We've realized that XYZ change in the world necessitates a fundamental change in what we're doing or how we're doing it.

What's that going to mean for all the people that I'm trying to help? How can I mitigate that pain? How can I make it a better experience? Et cetera. Those all sound like, again, more micro, but potentially valuable simulations that a company could run to understand what's going on. Because while we can control our own actions, to your point, there are a lot of other actions at play that affect what it is that we're doing. I'm just curious.

Justin E. Lane: Yeah, very much so. I mean, one of the things that we realized working in some of these conflict zones is conflict changes quickly. Crises change quickly on a day-to-day basis. So, one of the things that we've integrated into our platform, for example — So, we do have an online platform where people can go and see what's happening in the world, and whether or not there's increased risk of conflict in any particular country, and it pretty much covers the entire globe at this point. We also have the ability for us to utilize these digital twins.

And as LeRon said earlier, when we're running hundreds of thousands or millions of simulations, which one of those is most relevant to what's happening today? And so, we've utilized technologies that people are very familiar with, these large language models that everyone's talking about. They can produce English language with human-level fluency.

So, what we realized is that we could create our own AI that can do that, that's based in the research that LeRon and I have done as academics and researchers. We mix this with the knowledge of hundreds of thousands of potential simulations, as well as bringing in the current live news streams that are relevant to that particular location of the world. So that people can understand, in the changing context of the world, whether or not this news is likely to trigger conflict, increase or decrease some aspect that could affect their bottom line, be it peace or making sure that semiconductor comes in on the Tuesday that it needs to come in on. So, integrating those sorts of technologies as well to try and deal with the dynamicity of the modern world and the complexity of the modern world has been something we've focused on over the last few months in particular.
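As a purely hypothetical sketch of that kind of pipeline (matching incoming news against a library of already-run simulation scenarios), the matching could be as simple as keyword overlap. The scenario names, keywords, and risk numbers below are invented for illustration and are not CulturePulse data; a production system would presumably use language models and far richer scenario descriptions.

```python
# Hypothetical sketch: match an incoming news item to the closest precomputed
# simulation scenario and report that scenario's simulated shift in conflict risk.
SCENARIOS = {
    "shipping_disruption": {"keywords": {"ship", "strait", "rocket", "cargo", "port"},
                            "risk_delta": +0.18},
    "economic_stress":     {"keywords": {"inflation", "unemployment", "strike", "prices"},
                            "risk_delta": +0.09},
    "peace_talks":         {"keywords": {"ceasefire", "talks", "agreement", "mediation"},
                            "risk_delta": -0.12},
}

def match_scenario(headline: str):
    """Return the best-matching scenario and its simulated risk shift (keyword overlap only)."""
    words = set(headline.lower().split())
    best, best_overlap = None, 0
    for name, scenario in SCENARIOS.items():
        overlap = len(words & scenario["keywords"])
        if overlap > best_overlap:
            best, best_overlap = name, overlap
    return best, (SCENARIOS[best]["risk_delta"] if best else 0.0)

print(match_scenario("rocket fired at cargo ship near strait, port closed"))
```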

Katie Burkhart: That's huge, because it is something that so often, we're — Again, history teaches a lot. We learn a lot from those patterns, but it doesn't necessarily help us to predict. That being said, future predicting is a very hazardous game for all of us humans and monkeys. Are you really trying to end up with picture-perfect future prediction, or are you aiming for something more like how we statistically understand the weather? Like, “given this combination of factors, we think it's statistically this likely that we'll get rain today.”

Justin E. Lane: LeRon, I'll let you take that one.

F. LeRon Shults: I wondered if you would. So, it's not prediction in the sense of a crystal ball or knowing exactly precise — It's not knowing whether Jen is going to go to the theater on Wednesday, right? But it is prediction in the scientific sense, or in the kind of classical scientific sense, and in that sense it's actually pretty powerful prediction, because it can tell you that under the following conditions, through these mechanisms, this is the likelihood that X will be the outcome. So, it is prediction in that sense. And many of the models that we've developed have been shown to be better at predicting than linear regression models, for example, up to three times more accurate.

So, it is prediction, but it's not going to tell you what Jen is going to do on Wednesday. But it will tell you with — and even the models can say how high a confidence level we have. We only publish or we only present to clients if we have over 95% confidence because that's what's required in an academic peer-reviewed psychological journal. So, we can tell you with over 95% confidence that, under these conditions and by these mechanisms, this is what's likely to happen at this level of confidence. Even if we can't tell you what Jen's going to do, we can tell you how many people are going to the theater, or will engage in conflict, or alter their attitude toward an outgroup member, or respond in a certain way to a company's decision.
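For readers who want the statistical flavor of that kind of claim, here is a small illustration (the setup and numbers are assumptions of mine, not theirs): treat each simulation run as one observation of a yes/no outcome, then report the estimated probability together with a confidence interval.

```python
import math
import random

def estimate_outcome(run_once, n_runs=10_000, z=1.96, seed=0):
    """Estimate P(outcome) from repeated simulation runs with a 95% interval (z=1.96),
    using the normal approximation to the binomial."""
    rng = random.Random(seed)
    hits = sum(run_once(rng) for _ in range(n_runs))
    p = hits / n_runs
    half_width = z * math.sqrt(p * (1 - p) / n_runs)
    return p, (max(0.0, p - half_width), min(1.0, p + half_width))

def toy_run(rng):
    """Stand-in for a full artificial-society run: returns 1 if the run ends in conflict."""
    return 1 if rng.random() < 0.12 else 0

p, (low, high) = estimate_outcome(toy_run)
print(f"P(conflict) ~ {p:.3f}, 95% CI [{low:.3f}, {high:.3f}]")
```

That is the sense of prediction LeRon is pointing at: not "Jen will go to the theater on Wednesday," but "under these conditions and by these mechanisms, the outcome occurs with roughly this probability, known to within this margin."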


Katie Burkhart: Cool. So, one of the things that leads me to think about — and I picked this up somewhere in researching your work — is false information. And that's a reality in our world. Inaccurate information exists everywhere. We learn new stuff as a species every day. There are also people out there intentionally putting false information out into the world, which is a whole other ball of wax.

But in either case, inaccurate information affects people's behavior. It affects what they value, it affects what they believe, it affects how they understand the world. So, I have two questions. How does your model and your work deal with information that's false? And then, most importantly, who decides what's true and what's false in building the model? Or do you not pass that judgment in what it is that you're building?

Justin E. Lane: We don't pass that judgment in what we build, and there's two reasons for that. One is — I mean, really it just comes down to the practicalities of it. As you've mentioned, misinformation can be a powerful motivator in human behavior. There's been a lot of things in the last, let's say four or five years that were rooted in misinformation, and yet still affected people's behaviors.

So why would we possibly want to take that out of the data stream? That creates two problems. One, we're neglecting information that's affecting people's behaviors and we're trying to predict behavior, so we need that information. Two, we end up just spending our time worrying about trying to get to some sort of ground social truth that may not be, and then later evidence might come out that shows that what we thought was misinformation was true information all along, but we took it out of the model so we didn't capture that and have that good understanding.

So for every reason that I can think of, it actually doesn't make really good sense to take the false information out of these models. You have to leave them in, both because they're such important motivators of human behavior and because the way that we create knowledge as a society is a process. And this is something that I think goes beyond just our own work. I think this is something that we need to be wary about as a society. We are far too quick to look at something and go, “this is true henceforth and forevermore done,” and we move on. And that's not the way that — And at the same time we're going to say, “trust the science.” It's like, you don't get both of those. If we did that, then science would not go anywhere, right? Science does not trust itself inherently, and that's the beauty of it.

But science asks intelligent empirical questions and tries very hard, within certain bounds of empirical rationality, to make a statement that is, if not true in the harder sense of the word, at least the best knowledge we have at the moment. And so, trying to just say, “oh, I like this, or I don't like this.” Or even worse, getting back to that concept of tyranny that you mentioned, “this is politically expedient or politically inconvenient for me in this election cycle. Therefore, it is misinformation depending on whether or not it's going to help me.”

Those are problems that are — It's best that we as founders and technologists, as well as members of society at large, focus more on educating our peers on how to think, not what to think, in order to try and deal with issues of misinformation. Until then, it's always going to plague us. And if it's plaguing us, it's affecting our behaviors, and if it's affecting our behaviors, then I definitely want them in our data streams.

F. LeRon Shults: I totally agree. If I could just add one other thing quickly, then. One of the values of computational models is that you have to formalize your assumptions, going back to what we were saying earlier, and so we could put in different facts, if I can say it that way. In other words, you could put in different assumptions about different causal levers and see how it affects the simulation runs itself. So often, at the end of our articles when we publish them, we always give the code, and then we say, “if anybody disagrees with our assumptions or thinks they disagree that this is misinformation, then here's the code, alter it and run your own simulations.” So it doesn't shut down the conversation. It opens up the conversation.

Katie Burkhart: Yeah, you're really giving lots of space for retesting, relooking, asking different questions, because that's the statement. That's not what science is, yet it is sort of a myth. Somewhere along the way, somebody decided that we've made science into facts, which is not actually what science is about. Science is a process and is therefore always alive and always changing, and I love that you guys are baking that — I'm a question person — that you're baking that right into what you're doing, what you're producing, and being so willing to put it out there.

I have two final questions. I'm going to do one, and then the other. One is, you talk a little bit about, or have talked about the idea that you have this simulation, you have an artificial laboratory where you can basically do whatever you want to your simulated society to see what happens, even if maybe what you're testing would not be ethical to test in the real world. One of the things that struck me was when does interacting with a simulation make it easier or more permissible to practice that behavior in the real world? And this is an offensive association, but —

F. LeRon Shults: Interesting. Very interesting question.

Katie Burkhart: The quest — The thing that I think about is pornography. The idea that, well, if we had these really lifelike digital simulations, people with inclinations to do things they shouldn't do, like pedophiles, would be satisfied and therefore wouldn't touch children. But at what point does it leave you wanting? Does it make you too comfortable?

We're starting to now see this in digital dating, that, because the robot does whatever you want, whenever you want, however you want, slowly but surely, you start to think that that's how real people behave and think that that behavior is permissible. So I'm just sort of curious, you guys are working at such a grand scale. Is that thread still there? And when does running some of those tests make us feel like this would be cool to do for real, and that gets us somewhere not so cool?

Justin E. Lane: That's actually a really interesting question. I don't think we've ever gotten this one before, put this way. Yeah. We've gotten a lot of the ethics questions. Is this ethical to do and all that? But putting it this way, does it make you more prone?

Well, one, I do think that, much like — not just pornography, but also violent music and violent video games — this is a bit of an empirical question, because the assumption often is that the more you can practice something not in the real world, but like in a video game, the more violent it makes you. And a lot of the data suggests that that's not the case, but that's not always true. There are instances where it is the case. So to an extent, there's an empirical question there of, “if you are able to run conflict simulations, does it make you more prone to conflict?”

That's an interesting one. I don't think that it's the case, in a way. And my general thought — My general thought is, from an ethical standpoint, when you're dealing with conflict in particular, it's much better to get it wrong a hundred thousand times and then get it right once than it is to try and experiment with the real-world population on the fly. People as individuals and as decision makers, we're not very smart, particularly at dealing with complex systems. Our working memory only holds seven — plus or minus three — things at a time, and the number of moving parts in any given conflict is usually far more than 10. So using a computational tool definitely can help, but I don't know that it necessarily would make somebody more prone to enact something, at least in this instance. I don't know. That's an interesting question.

F. LeRon Shults: It's a really great question, and I think we need to pay attention to it in the future, but it also can have the opposite, a positive, effect. In other words, you start running simulations and you're like, “oh my goodness, for sure we're not going to do that.” Because, you see, if I could just give one example quickly.

A few years ago, we did a paper because we had developed an artificial society in which there were different ingroups and outgroups, a majority and a minority. And we were talking about how they interacted with each other, and then we created what we called digital or virtual ethnography. So you could go in and talk to a particular agent. In other words, you could pick an agent, just pick one and then say, “How old are you? Which group are you in? Did you go to school today?” “How are you feeling about your name?” That sort of thing. And so one of my colleagues went in and he picked an agent at random, and it turned out it was an 8-year-old girl, and he immediately just froze. He was like, “Whoa. Okay, we're done here.” You know what I mean? So, what it triggered in his mind was, the simulations can't just have homogenous people. They have to have at least the appropriate level of heterogeneity. So if a person from an outgroup meets an 8-year-old girl, for example, from the other group, they're not just going to have a conversation with them about something. They're, in most cases, going to avoid them. That's not going to be a normal interaction. So, precisely, playing around in the artificial society can lead to ethical reflections of that sort, which reflect back on how our society is and is not interacting.

Katie Burkhart: Yeah. Alright. Well, my last question — Because I'm not responding, my gears are turning. So, I will certainly have a few follow-up questions for both of you. But my final question for today is, is there anything else that you'd like to add that we didn't talk about that's important for you to share, or maybe that sparked in some of the conversation that we have had today?

F. LeRon Shults: We've never gotten that one either.

Justin E. Lane: Yeah. At least. I mean, you've already gotten one question in on us that we weren't expecting, so we're already one up. One of the things that I would add is, there's a lot of hype around AI, and a lot of it is nonsense. There's a lot of worry around AI being perpetuated by people who've never actually built AI. And I would encourage people that when they're hearing something that either sounds great or sounds like the end of the world, to definitely look into it and keep a skeptical eye on these issues.

I mean, we've been working in this space for 10 years to create the kinds of technology that underlie these systems. It's not something that just happened overnight. And a lot of what I see in the media on, “Oh, the new GPT is doing this, the new GPT is doing that! We're going to have human level general intelligence anytime now! We're going to have disinformation that's going to destroy democracy around the world!” And all of these things. A lot of those conversations, I've noticed, are being led by people who weren't really active in the AI space until very recently, and now they're more AI influencers than they are AI researchers.

And once you get into the mud of building AI, you realize how the sausage is made, and that a lot of AI is really dumb. And just because you put two dumb systems together doesn't mean you get a great system. Sometimes you just have a bigger dumb system. And so, a lot of times I encourage people to be very skeptical of why it is that people are decrying that the sky is falling on us right now and that we're on the precipice of some major thing. I think we should be very skeptical on that. We're running out of data for machine learning. And that would be kind of — I guess my last point is that we should not think that we are going to machine learn our way to human intelligence, because humans did not machine learn their way to their current level of intelligence.

There are certain things that we have as humans that we're pre-programmed with. So for example, facial recognition. You can see a person's face. Or, humans, rather, are attuned to look at people's faces the moment they come out of the womb. There's even some studies suggesting we do it before we even leave the womb. So this is pre-programmed. This is not something that we need to learn.

And I think we should start to differentiate between the things that we learn socially and the things that we're taught and the things that evolution taught us as a species for generations and millennia and try to better understand that some of the key things that make us human, we never learned. Things like fear and anxiety and love and sociality and attachment. We come pre-programmed with those things. We're just filling in the blanks based on our own experience. So, the idea that a new version of OpenAI, or whoever comes after OpenAI, is going to get at and unlock those aspects of humanity through machine learning, I'm very skeptical that that actually is possible because we didn't have to learn it. So why should the machine?

F. LeRon Shults: I would just tag onto that. I agree that it won't be through machine learning. And one thing we didn't really highlight earlier in our conversations was that the cultural ontology that detects 93 different dimensions in the social learning algorithms that CulturePulse uses, those are based on cultural psychology or moral psychology. So in other words, our system learns the way humans learn. It already has categories that are representative of actual culturally evolved tendencies of the sort that Justin just described. Anxiety and groupishness, personality factors, that sort of thing.

Katie Burkhart: Yeah, that's fascinating. I love the statement that there are things we never learned. We just have. And I think there is sometimes a forgetfulness that there are things that you just have. For example, I breathe, and I don't actually have to tell myself to breathe. I just do. So thank you so much, both of you for your time today. I have thoroughly enjoyed this conversation and hope at some point we'll get to do it again. But thank you so much.

Justin E. Lane: Yeah, thank you. I had a good time.

F. LeRon Shults: It was fun. Really fun.

Katie Burkhart: That's what I like to hear.


