On this episode of Too Embarrassed to Ask, Renée DiResta, the head of policy at Data for Democracy, talks with Recode's Kara Swisher about how disinformation is spread on social media platforms and what can be done about it.
You can read a write-up of the interview here or listen to the whole thing in the audio player above. Below, we've posted a lightly edited complete transcript of their conversation.
If you like this, be sure to subscribe to Too Embarrassed to Ask on Apple Podcasts, Spotify, Pocket Casts, Overcast or wherever you listen to podcasts.
Kara Swisher: Hi, I'm Kara Swisher, editor at large at Recode, and you're listening to Too Embarrassed to Ask, coming to you from the Vox Media podcast network. This is the show where we answer all of your embarrassing questions about consumer tech and the week's news. You can send us your questions on Twitter with the #TooEmbarrassed. We also have an email address, tooembarrassed@recode.net. Reminder, there are two Rs and two Ss in embarrassed, in case you cannot spell.
Today on Too Embarrassed to Ask I'm here in the studio with Renée DiResta, who does a lot of things. So she has a lot of titles. We're going to go through them now. She's the head of policy at the nonprofit Data for Democracy, the director of research at the startup New Knowledge, and also was a founding adviser at the Center for Humane Technology, which is the group behind the Time Well Spent movement, and also one of the most ironic names for something, Humane Technology. Okay, we're going to talk about that and more. Renée is an expert on a lot of important issues on the internet today, including disinformation and social media manipulation, which is an area that I'm hugely interested in, as are many people. We're going to talk about all that. Renée, welcome to the show.
Renée DiResta: Thanks for having me.
So, I want to get a little bit of your background first, and then we'll get to some questions I have. This is a lot of stuff, so break it all down for us.
Yeah, so Data for Democracy is a data-science collective. There are about 3,000 members. It is much bigger than just disinformation. There are channels in there where people are looking at vehicular traffic fatality data, where people are looking at gerrymandering, voter registration. It's just a collective of data scientists who are interested in using their skills to make a difference in the world, mostly social good projects.
One of the channels in there is related to disinformation and misinformation. When we started realizing the extent to which this was a problem, I began doing some advising in Congress. At the time, I was actually working at a supply chain logistics company that I helped found. It got to be a little bit difficult explaining why I worked in supply chain logistics, but also this was like my passion project, so we decided that we would spin up a policy team at Data for Democracy whereby we could do a little bit of lobbying and advocacy work as independent techies, basically.
New Knowledge is a company that builds detection and mitigation technologies specifically for manipulated narratives. So, there is social listening where brands will get alerted to ... They have 500 mentions of Coca-Cola, for example. What New Knowledge does is we ascertain whether or not those mentions are organic or if they're a kind of coordinated campaign to impact the reputation of the brand.
We're shocked. There's a lot of companies popping up, Zignal, there's a whole bunch of people now doing ...
Most recently, I think ... Yeah, there's a lot ...
We'll talk about that because we're going to be talking later today about something else. Then the Center for Humane Technology, which is my favorite name of a group.
I'm just an adviser there. My area of interest is mostly kind of societal implications for a lot of the ...
Rather than individual ones.
Exactly, rather than individuals.
Right.
But of course, since societies are comprised of individuals, these two things are related. So, I spend a lot of time talking with Tristan about ways that we see specific design features consistently popping up time and time again across platforms being misused or co-opted in an abusive way, and thinking about what are the better ways that the technologies could perhaps have that ethical design more clearly built in on an individual level.
Right, and what's your background? You're a data scientist?
I have a computer science degree. I worked on Wall Street. I've had a bunch of different ...
Right, but how did you get into this area?
How did I get into that?
Yeah.
Yeah, so in 2013, I had my first kid. I started looking at ... You know, you have to do that preschool thing here, you've got to get them on a list a year early. I didn't want to be in a preschool with a bunch of anti-vaxxers, candidly, and I ... California Department of Public Health ...
There's not that many.
Well, you'd be surprised.
Yeah, I have kids.
Yeah, California Department of Public Health had the preschool data sets and I just downloaded them and started looking at clustering and ...
Oh, wow.
Interesting areas in which ... Yeah, I mean, some of the schools actually had vaccination rates of like 35 percent.
Whoa, really?
Yeah. You think that it's normal, and at an overall population level it is, but cluster-wise it's not.
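(For readers curious what this kind of clustering analysis looks like in practice, here is a minimal sketch in Python. The file name, column names, and thresholds are hypothetical stand-ins, not the actual CDPH schema DiResta worked with; the point is the contrast between a reassuring statewide rate and much lower rates in specific schools.)

```python
import pandas as pd

# Hypothetical export of school-level kindergarten immunization data.
# Columns assumed: school, county, enrollment, pct_fully_vaccinated
df = pd.read_csv("ca_kindergarten_immunization.csv")

# The population-level rate (enrollment-weighted) can look reassuringly high ...
overall = (df["pct_fully_vaccinated"] * df["enrollment"]).sum() / df["enrollment"].sum()
print(f"Statewide rate: {overall:.1f}%")

# ... while individual schools cluster far below the ~95% needed for measles herd immunity.
low = df[df["pct_fully_vaccinated"] < 50].sort_values("pct_fully_vaccinated")
print(low[["school", "county", "pct_fully_vaccinated"]].to_string(index=False))

# Share of kindergartners in under-vaccinated schools, by county, to spot geographic clusters.
under = df[df["pct_fully_vaccinated"] < 80]
share = (under.groupby("county")["enrollment"].sum()
         / df.groupby("county")["enrollment"].sum()).fillna(0)
print(share.sort_values(ascending=False).head(10))
```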
Good for you.
This is actually a great analogy to disinformation and how it's targeted, right? It's really the vulnerable populations that we look at. I got involved ...
So, you downloaded the data?
Yeah, I downloaded the data, made a couple drafts.
So, there were vaccinated children near your children. Okay.
Yeah, it was an important thing for me.
Right. Okay.
I just felt like being in communities of people who had completely different values ...
Who believe in science? Yes.
Right. Exactly, I wanted to be around people who believe in science. But what came out of that was actually the measles outbreak. The Disneyland measles outbreak happened in January. I had published my analysis of California's immunization problems in November, and I called my congressman. Literally, did that thing. Called David Chiu, called Mark Leno at the time, and said, "Why aren't we introducing legislation to do something about this?" They said, "Senator Pan up in Sacramento is."
And so I started doing analysis of the social media conversation around the bill, because it was polling at 85 percent positive, but when the legislators were taking polls in their districts, they were saying, "Why are we seeing 99 percent, why is it overwhelmingly negative? Why am I being harassed? Why am I getting death threats? Why am I having memes made of myself when I express support for this bill?" So, the way I got into all of this was really starting to dig into what was happening with that conversation.
So, the abuse of social media for ...
The abuse of social media. Now, at the time it wasn't really bots, it was more of the manufactured consensus concealing identity. At the time it was more like weaponizing the kind of tools that people use for marketing, or turning everything into a marketing campaign. Ironically, at the time, Devin Nunes, who is now one of the top conspiracy pushers in Congress, one might argue.
One doesn't have to argue. Go ahead and say it, right?
Being diplomatic.
Please don't. He's a crackpot.
He is. He's a crackpot, but ironically he is one of the first people who had come out and said, "I'm really seeing this marked shift where my constituency used to be 10 percent telling me that the government was listening to their radio and communicating ... You know the aliens are communicating with them, and now it's like 90 percent. Like, what has happened to my district?" So, this is something that politicians ...
Devin Nunes has happened to his district, but that's another issue. Thank you, Devin, for ... It takes a crackpot to know one. So, you got interested in this issue, and it was largely manufactured outrage or manufactured ...
Yep, manufactured outrage, manufactured consent.
Which affects people.
It does. Then right around the same time I met Jonathon Morgan, who's the founder of New Knowledge, and we met because we were asked to do some analysis of extremist content on social media, specifically ISIS. Jonathon was one of the authors of the ISIS Twitter census, where they really went in there ... It was the same kind of work that Gilad Lotan and I had done mapping the anti-vax conversation, and the way that they were using affinity marketing and co-opting hashtags and trying to grow their numbers, trying to look a lot bigger than they were. Jonathon was doing very similar types of analysis on ISIS, and on violent extremism.
There were a lot of parallels in how the technology was being used. The conspiracy theorists were relying on these new algorithmic amplifications, megaphones, the ease of connecting with each other to spread their message, and ISIS was building a virtual caliphate. Both things, at the time, were largely being run completely undisturbed because nobody could convince the social platforms that this was worth their time.
That's because they're using them exactly the way they were built.
Exactly.
Do you know what I mean? So let's start talking about disinformation first. Talk about what that means, because a lot of this is disinformation, the idea of disinformation.
Yeah, it maybe helps to give the taxonomy. So misinformation is something that's just accidentally wrong, it's the kind of stuff that your grandma will send you in an email.
I get those, yeah.
It's what Snopes used to do, before it became a political tool. Disinformation is misinformation with an agenda, it's quite deliberately done, it has a really clear agenda that it's looking to push. It's looking to either spread a message to increase societal divisions ... It's used as a tool. It's a tactic of information warfare. It's not accidental at all, it's quite deliberate.
And so, how does it exist on the internet today? You were talking initially about that they were using existing tools. They started the original hashtags and things like that, and now in a more automated sense.
So, disinformation, it's probably helpful to talk about it in the context of Russia, because that's probably the example that most people have readily available. The Russian trolls, the Internet Research Agency's activities, did exactly what we saw with ISIS and with conspiracy theorists: The goal is to blanket the internet so they are everywhere, the message is everywhere. And that's because repetition really matters. Because you're trying to manufacture consensus, you want people to believe that there's huge percentages of the population who have a particular opinion, because you're trying to sway hearts and minds.
Sure, like, "If I see all this ..."
Right, exactly, "I see it everywhere, it must be real, people must be thinking this, it's not just me." And so it normalizes it. So, what we saw, there were fake accounts on Twitter. And fake means two things, sometimes fake can mean automated, meaning it's a bot, and sometimes fake means an account that's just not what it's represented as, and oftentimes, the best ones are run by real people, because they develop a persona. So it's a persona ...
And then it's just bots reinforcing it.
And the bots reinforce it with the "Likes." And that's because we're accustomed to looking at signals, like number of stars, number of retweets, so the algorithm uses those as a signal, and people use those heuristics. So that's how it operates on Twitter.
Similarly on Facebook. Groups, the creation of Groups, the creation of Pages. So you run an ad, you find people who are sympathetic to your point of view, oftentimes that ad is tied to either a Group that they want you to join, a Page that they want you to like, or it's actually gonna push you out to a third-party website. At which point the Facebook tracking pixel will recognize that you've engaged and spent time with the content on that site, so that they can then re-target you in the future. If they push you into a Group then you've effectively got an entire community around you of people who you think are just like you, who you think have the same interests as you, but some percentage of them are not what they seem to be. Some percentage of them are in there to convey a particular point of view.
On Google, it's really YouTube that's probably the biggest culprit.
Right, 100 percent.
Yeah, on YouTube it's, again, the pushing of content, but content in a different medium. They had Vine accounts, they had Tumblr accounts, they had a promotion for Pokémon Go.
It's hard on Snapchat.
Yeah.
It's almost impossible. You can't do ...
I think it's less because it's much more fleeting.
It's curated. It's also curated.
Yeah, exactly.
The stuff they allow, the discovery is all their choices, versus ...
Yeah.
So nothing gets on there.
That's true.
And then the fleeting stuff just doesn't, it doesn't stick.
It's impossible.
It doesn't stick, which is interesting. It's designed like that, which is what I want to talk about.
So when you talk about this, you talk about the social networks, that they aren't paying attention to it, can you talk about that? Because I feel, and I think you're 100 percent right, but their stance, you know, Mark [Zuckerberg] being in Congress was like, "Oh, we take a broader responsibility now." Can you talk about that responsibility, and the lack of responsibility, really?
Yeah, I really think there's been an evolution since November 2016. I will say that had Trump not won the election it's not totally clear to me that we'd be having the conversation in quite the same way.
No, yeah, 100 percent.
Which is sad, because it's actually really not a partisan issue. The Russia activity, very much was, they quite clearly had a preferred candidate, and that is absolutely undeniable, but the Russians were one actor operating in this space. Like I've said, there's a number of others, both on the right and on the left, that are just domestic ideologues using social platforms to push an agenda.
So I think the responsibility piece, it's tied up in a lot of internet culture. Back in the '90s there was the ads model, the idea that information should be free, and so supporting it with ads is a great way to ensure that everyone has access. There were a lot of really noble principles underlying the structural flaws that exist today.
When we were dealing with the ISIS stuff, there was the EFF arguing that if you take down ISIS content, it's a slippery slope. You might accidentally capture the content of someone who is mocking ISIS, or debunking ISIS.
You saw that the other day. I think it was Sarah Frier at Bloomberg who had a great piece about that, "shrubbery," because of Bush.
Right, because of Bush, yes. The beans, I think, was another one. So there are these interesting ... And for a long time we've had the sense that, because of the American commitment to free speech, a false positive is a terrible thing, an actually terrible thing, as opposed to something that can be remedied, where we can put frameworks in place to deal with those, to have transparency, to have things like ...
So it begins with a laudable thing.
It does. I think a lot of it begins with a laudable thing, with a real commitment to free speech. What that became, though, was that plus the combination of some legislative things, like the Communications Decency Act, Section 230. That act was created so that they were indemnified from the content on their platform. We didn't want them sued out of existence because some people posted some terrible things. But it gave them the right to moderate as they saw fit. What they chose to use it for, the way the norm was set ... You had the regulation, but then you have the norms. The way the norms were set, it was really much more around, "Well, we're just not gonna moderate anything, because we're indemnified." So you had this sort of free-for-all.
Twitter was perhaps the best possible ... You really saw this, it was very very much in your face, in their complete unwillingness, their absolute ineptitude when it came to policing harassment. Harassment is one of the things that we do see people use to either amplify a point of view or suppress a point of view, right? Because you can effectively use your speech, particularly automated speech, to harass other people off the platform, thereby silencing their speech. But that was not an argument that was very well received.
No.
For a very long time. I think we're finally starting to come around.
Why is that? Why do you think that is? Besides they were, "First Amendment!" I think it's because they're all men, and they don't get harassed that much. Honestly, I think it's the lack of diversity. I had several people at Twitter say, "Oh, I never got bothered." I'm like, "And?"
Yeah, it is remarkable. I personally had pictures of my baby used to harass me, to try to intimidate me into being quiet, pushing them into harassment hashtags, and the response I got from the company was, "Well that looks like a conversation." I was like, "No, it doesn't."
No, it doesn't.
"I don't know what universe that looks like a conversation in."
Jackass.
"I don't know how you talk to people, but that's not how I talk to them."
Right.
It was definitely a series of interesting interactions over there in particular. Now, I think that they have seen that this is actually a geopolitical issue, that the stakes are extremely high here. There is no way to avoid dealing with it, because there are regulators on both sides of the pond who are saying, "We're gonna do something about this if you don't."
So I think the credible threat of regulation plus the terrible press cycles, plus the internal employee revolts, perhaps not from management, but a lot of the internal employees saying, "What did we do here?" I think you saw that even on Twitter on election night, a lot of employees quite publicly wondering, "What did we do here? What happened?"
Right. But they continue to do it. So who's most at risk when these things run rampant, from your perspective?
Sometimes ... the demographics are really interesting. The platforms know that better than anybody else. Actually, it's interesting because we don't see it quite as much on the outside. It's very hard. We can gauge production of content, we can see prevalence of narrative, we can see consumption of content through things like CrowdTangle and other analytics platforms that increase transparency about who's reading, about what is being read. We don't have as much visibility into who's reading it.
So, we have to use these sort of little signals, like if we're looking at the sizes of particular groups on Facebook growing over time, that's something that we can see. There have been a number of articles done by investigative journalists that really dig into the boomer phenomenon, actually, the idea that boomers are running a lot of the Twitter groups, groups that are led by kind of like one or two charismatic people who then say, "Everybody amplify this message," and then they all go and tweet from their accounts.
I think that the question of demographically who's most impacted, I can say with regard to the Russia scenario, African-Americans were really targeted. There was a ton of just proliferation of content, Black Lives Matter-related content, but just a little bit off, just a little bit more extreme than you would normally see, just pushing the Overton window ever so slightly, trying to normalize it for that audience.
Explain the Overton window to people.
The Overton window is the collection of societally acceptable political opinions. I hope that that's an accurate definition. Thinking back on my Poli Sci 101. So, shifting the Overton window or expanding the Overton window means increasing or changing the types of positions, political positions, that are considered mainstream or that are considered respectable, some things that we're willing to discuss.
What we saw was most of the Russian content was related to societally divisive issues. There were some LGBT pages. They were pro LGBT, the content was pro LGBT, but they were being targeted at anti LGBT people to kind of gin up outrage. There were the Black Lives Matter pages that used a lot of very extreme rhetoric and talking about police.
To scare people.
Yeah. So, there is of course that tension and that problem that is very real and deserves attention, but what they were doing was just blanketing the channels.
Making the group seem unreasonable.
Exactly. Making the group seem unreasonable, and also taking things that were sort of sensationalized in ways that were ...
And make them worse.
Right. Because you don't want to have to work hard for every impression. This is, again, marketing 101. You want to create content that people are gonna organically share, the kind of thing where you create something, give it a sensationalist, clickbait-y type headline, and push it into a group that has ... Their groups had about 100,000 to 200,000 people in them. These were not small Facebook groups. But then they would be picked up by groups that were even larger, like The Other 99%, which I think has a couple million followers, was sharing content from this fake page. Blacktivist had a couple hundred thousand followers.
Right.
So, this information, hundreds of millions of people saw it.
Just keeps going. So, the 2018 midterms are coming up. Just how screwed are we?
I think we really need better information sharing. I've been kind of beating that drum for a while. I think that what that means ... Somebody said, "Does that mean that our privacy's gonna go out the window?" No, that's not what it means at all. It's information sharing in the sense of like tactical information sharing, threat detection. We're seeing this anomalous behavior. We're seeing this bizarre content. We're seeing this blog that magically appeared yesterday.
Right. Which they can do with child pornography. They do it with a lot of things. They share a lot of online resources.
So, this is where we're advocating for ... Each of the platforms has great visibility into their own platform, and then third-party researchers have information and signal from across the platforms. So we're looking at dissemination patterns and trends, and we're saying, "Hey, we think this isn't authentic. You have a second-order thing here. You've got device IDs, you've got IP addresses. You have a number of other signals that we don't have access to."
So, this combination of researchers, platforms and then government, I would say, is kind of the third piece of this where they actually do have information. People knew that Russia was using social networks long before the 2016 election. That information wasn't necessarily communicated in a very effective way.
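(A rough sketch of the kind of cross-platform signal an outside researcher can produce, using entirely made-up data and thresholds: flag a URL that many distinct accounts push within a short window, then hand that dissemination pattern to a platform, which can check device IDs, IP addresses and other signals only it can see.)

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical records assembled from public posts: (timestamp, account, url).
posts = [
    (datetime(2018, 7, 10, 14, 0), "acct_a", "http://suspicious-blog.example/story"),
    (datetime(2018, 7, 10, 14, 3), "acct_b", "http://suspicious-blog.example/story"),
    # ... thousands more, gathered across platforms
]

WINDOW = timedelta(minutes=30)   # illustrative threshold
MIN_ACCOUNTS = 25                # illustrative threshold

def coordinated_bursts(posts):
    """Flag URLs pushed by many distinct accounts inside a narrow time window."""
    by_url = defaultdict(list)
    for ts, account, url in posts:
        by_url[url].append((ts, account))

    reports = []
    for url, events in by_url.items():
        events.sort()
        for i, (start, _) in enumerate(events):
            accounts = {a for t, a in events[i:] if t - start <= WINDOW}
            if len(accounts) >= MIN_ACCOUNTS:
                reports.append({"url": url, "window_start": start, "accounts": sorted(accounts)})
                break
    return reports

# The resulting report is what gets shared with a platform for deeper review.
for r in coordinated_bursts(posts):
    print(r["url"], r["window_start"], len(r["accounts"]), "accounts")
```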
I agree. Today they were just having meetings where they feel the government's not even slightly ...
Right. I think the kind of blame there goes both ways, in that when the government did make the outreach under the Obama administration to deal with the ISIS stuff, there was a lot of stonewalling. So, I think it does have to be ... There's the fallout from the Snowden revelations. I think that we really need to see the restoration of the sense that we're on the same side here.
For 2018, this is absolutely critical. There is so much distrust in the country on everything as it is. Half the country feels that the presidential election was illegitimate. I don't think that we're in a place societally where feeling that our midterms are also illegitimate is ... We don't want to be there. And there's already a lot of campaigns under way to just erode trust in voting, in candidates, in platforms, and people, and your neighbors. It's gonna be a challenge.
You're a professor? What is your ...
No, I have a bachelor's degree.
Bachelor's degree. All right.
In computer science and political science.
So, one of the things that's happening is a lot of academics have been studying all kinds of things. I just was reading one ...
Yes, the real academics.
The real academics. But I was reading one about how the "phone listening in on you" thing is not true, but it's gone all over the internet. There's a piece of disinformation.
Right. I think I saw that, I saw that come out today. I haven't had a chance to read it.
Yeah, but it's not true. Of course it's not true. It's craziness.
Right.
But talk about this idea of societal ... because people are so addicted to technology individually and everyone recognizes that and realizes it creates bad feelings, it creates unhappiness, it creates all kinds of things. You were talking about a bigger societal issue. Can you talk about that concept?
I think there's a few things at work here. There's erosion of trust. The mere fact that we know that these campaigns are under way, if you go on Twitter now, you'll see people accusing other people of being bots. It doesn't matter if they're actually bots or not. "Bot" is just a great way to dismiss someone who doesn't agree with you. "Fake news," same thing. The president himself kind of took that term, co-opted it and made it meaningless. It means "things I don't like on the internet" now.
The way that we relate to each other, the way that we relate to truth ... You need to have a shared basis of fact in order to create good policy. You can disagree about what to do with those facts, you can disagree about how to weigh those facts, how to weight them when you think about cost/benefit analysis as you're designing a policy, but we used to all at least agree that people were acting in good faith and that researchers were legitimate and that there was such a thing as expertise. I think that that's kind of gone out the window now.
We have people like Tom Nichols writing books called "The Death of Expertise." I think that there's a profound division there. I think that one of the things that's challenging for people, and I wrote about this today, is you search for something. We've acclimated to the idea that the internet is a great place to find information. I don't think that's true today the way that it perhaps once was, because of the proliferation and the ease with which you can spread manipulated narratives.
Now we have situations where last week there was a very sad story that happened on Facebook where a baby died. The parents chose not to get the Vitamin K shot, which is something that facilitates clotting, and the baby died. And the parents, as the story came to light, were members of many Facebook anti-vaccine groups. When you search for Vitamin K, if you're a new parent, you're a pregnant person, you search for Vitamin K, what you find is this void where scientists and doctors aren't out there writing posts about how critical Vitamin K is. So, what comes to the top is the information put out by ideologues and extremists.
Right.
So, we had this situation. We see it with cancer, we see all sorts of cancer quackery popping up. Really basic things that deeply impact people on a personal level.
Where they're looking for real information.
Yes. They're looking for real information. Entirely outside of politics, this is having an impact on us societally in terms of things like health.
I was arguing with people at YouTube because I was looking for Anti-Defamation League, I've told this story a number of times, and the top 20 videos were anti-Semitic. And when you look on Google, when you look up Anti-Defamation League, you find Anti-Defamation League. You find good material on anti-Semitism. When you go on YouTube, you get the opposite. And I was like, "You have this company that owns you named Google, why can't your search yield the videos I'm looking for, not the videos I'm not looking for?" Which proved the point.
Right. And that's because their algorithm is designed not to optimize for facts or for ranking the world's information, which is, or used to be, Google's mission statement. It's more that it's an entertainment platform. It's just that people choose to consume information in video form now, and the platforms, by the way, are absolutely involved in pushing that. Facebook, Twitter, all of them created or acquired their own video platforms. Instagram just announced that it's competing with YouTube now, right?
Yes, it is.
So, this push towards consuming information in the form of video means that even though YouTube didn't necessarily want to be an information platform, it is there.
We were talking about responsibility a little bit earlier. I believe that when you build the algorithms that recommend content to people, you have a responsibility to surface good information. I'm surprised that this is a controversial position.
I agree. You don't feed bad meat to people in the store; if you do, you get in trouble. You get in big trouble.
Let me ask, this is a question from one of our readers. A. Panzera wanted to ask about two things, and I'm gonna summarize them. One was whether using people to sway people's opinions, even in a seemingly harmless way, is a form of warfare, which I think is what you've called it. Another question is how easily someone who's in a position of power can manipulate data on the web. A. Panzera asks, "How do you bring democracy and justice to the Wild West of social media?" Would you call it the Wild West, or how do you ...
Maybe in the last year it's gotten a little better, I think, as they've made some steps to kind of rein in the mass harassment on Twitter or the clickbait headlines, gaming the algorithm to achieve top billing on Facebook. I think that part of it is also, for a while, one of the social norms on the internet was that you didn't pick fights. Not the "Don't feed the trolls," but even like when your batty aunt sends you the hoax, you just kind of ignore it. I do think there's something to be said at this point for people pushing back within their own communities, because there's a lot of evidence that shows that trust in community, trust in the people that you actually know, can have an impact.
So, saying, "Hey, maybe you want to fact-check that," or, "Hey, I found this article that seems to be false information," just kind of presenting it more compassionately than like, "You're a fucking idiot," just doing it a little more graciously within the communities is an option.
Right. So, being nicer. That's really essentially what you're saying.
Civility, yeah. That's sort of broken down. Again, I feel like an old person saying that.
They don't know if it's coming back, I have to tell you. When does it come back? Why? In what format?
I don't have an answer to that question. We really don't know. I really feel that the toxic, the amplification of the most toxic content, the most toxic impulses is, to some extent it is a problem of algorithms, what is surfaced, what is volunteered to us, but then on the other it is, yeah, it's just ...
People are people. I was just looking at the Alan Dershowitz thing yesterday. Did you see that? He said people were mean to him on Martha's Vineyard.
Oh, I remember seeing that. People were mean to him on Martha's Vineyard, his community was shunning him. Well, I mean, that's sort of, that's always been done.
Yeah, exactly. It was interesting how quickly people came with responses and it was immediately all about the immigrant kids. It's like, "Oh, you're getting shunned. What about the babies?" It was really like, "Whoa," just making fun of this idiot for saying something so stupid has turned ugly real fast and it was being used, it was fascinating to watch. And it was all real, it was all true. He did say it and he looked like an idiot. But then it was used in ways I agree with, but it was a really interesting issue.
So, when you think about the societal sickness, then (I hate to say that), what happens in a society that's addicted like this, besides becoming an episode of "Black Mirror"?
I think ... that is such a hard question. One of the things that the platforms are looking at now is this notion of healthy discourse. What are the metrics for healthy discourse? You could argue kind of better late than never. I think Twitter was the one who started this. They've got five teams, all academics. I know some of the foundations are also working on thinking about how do we quantify this, what are the types of interactions that we see and can we predict things like dog piles that are gonna be used to silence someone, can we predict things like who to surface.
Twitter is particularly a strong example because at least on Facebook you're kind of opting in. You're opting in to the friends that you have. You're opting in to the Groups that you join. Most of the Groups have moderators, whereas Twitter is very much more this kind of roiling crowd that's always angry about something.
So, the thoughts around how do we surface constructive conversation, I think right now they've gone very much to kind of like almost a keyword moderation. I'll click into the little gray box (you know, they're calling it grayboxing now, grayboxing, shadowbanning) and sometimes there's just somebody who used profanity in their reply to me, which is not directed at me, but it had the profanity. So there's, I think, a challenge: How do you not overly sanitize conversation? How do you not inadvertently digitally tone-police people, so to speak, while at the same time recognizing that the gray box is actually a very valuable tool for, perhaps, the restoration of civility in conversation?
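(To make the "can we predict dog piles" idea mentioned above concrete, here is one possible heuristic, sketched with invented thresholds and data structures; nothing here reflects Twitter's actual health-metrics work. The signal is a sudden burst of replies at one account, mostly from people who have never interacted with that account before.)

```python
from datetime import datetime, timedelta

WINDOW = timedelta(hours=6)   # invented threshold
MIN_REPLIES = 50              # invented threshold
STRANGER_RATIO = 0.8          # invented threshold

def looks_like_dogpile(replies, prior_contacts, target, now):
    """Heuristic: many replies at `target` in a short window, mostly from strangers.

    replies: list of (timestamp, replier, target) tuples from the reply stream.
    prior_contacts: dict mapping a user to the set of accounts they've interacted with before.
    """
    recent = [r for t, r, tgt in replies if tgt == target and now - t <= WINDOW]
    if len(recent) < MIN_REPLIES:
        return False
    known = prior_contacts.get(target, set())
    strangers = [r for r in recent if r not in known]
    return len(strangers) / len(recent) >= STRANGER_RATIO

# Example: 60 replies in the window, 55 of them from accounts the target has never talked to.
now = datetime(2018, 7, 18, 12, 0)
replies = [(now - timedelta(minutes=i), f"acct_{i}", "@target") for i in range(60)]
prior = {"@target": {f"acct_{i}" for i in range(5)}}
print(looks_like_dogpile(replies, prior, "@target", now))  # True
```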
Sure, yeah. It'll be difficult. Is AI helpful in solving these issues? Because they always keep shooting that out. Talk about that issue. How do you get that?
I would say yes. I would say it does have value. I think that I might kind of divert from some other folks at CHT on this particular issue, and the reason that I do think it's valuable is because, at scale, I don't think you can do this with human moderation.
Then you'd need 10 million.
Right.
They were talking about hiring 10,000, I think. Susan [Wojcicki] would just get one of them and I said, "You need 10 million."
I don't see any way that you do this with individual people. Also, I feel like we've seen evidence now that people who do that job suffer serious psychological damage. They're looking at all sorts of terrible things on the internet, not just mean words. I mean, there's some really nasty shit that shows up on these platforms.
Using AI as a way to flag ... I think that there's an interesting opportunity here where we come up with a framework where first pass is done by the AI. I mean, it already is right now. Let's be honest, right?
Right. If there's a pass at all.
If there's a pass at all, but first pass is done by the AI and then you kind of flag things for further review. That's where you have your people who are presumably trained. One of the things that we have seen (we've talked about this, and me in particular, because I am in the U.S., from a very U.S.-centric point of view), but then you see these horror stories about literal lynch mobs killing people in India because of hoaxes that appear on WhatsApp.
Which is encrypted, so it's hard for them to control.
Yes. And so there is this ...
What do they do? What does Facebook do?
Nothing. Well, they tried sending people out to talk about how they were seeing hoaxes and, very sadly, one of the people whose job it was to debunk the hoaxes was one of the people who got killed because a hoax started about him. So this is a terrible situation. I do think that community involvement, people who speak the language, people who understand the nuance of the slang, otherwise you do wind up with these situations where campaign ads are pulled down because they say "bush," even though they're talking about Bush's Beans, right? This is a huge problem.
I think that we've delayed way too long on beginning to solve it, and right now, again, it's going to be a combination of people who have really been studying this and talking about this for a very, very long period of time, working with the platforms to try to come up with frameworks that work. AI? Not the answer.
Just the beginning of the answer.
AI is the beginning of the answer. AI is the first flag and then we have to wait and see. I don't think AI has the capability right now to do what we need to do in a short term.
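(A minimal sketch of the "AI as first flag, humans on review" routing DiResta describes. The scoring function and thresholds here are placeholders, not any platform's real system; in production the scorer would be a trained model, and the hard work happens in the human review queue, not in this routing logic.)

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    text: str

def score_risk(post: Post) -> float:
    """Placeholder for a learned classifier scoring content from 0.0 (benign) to 1.0 (violating)."""
    stand_in_terms = {"hoax cure", "kill them"}  # stand-in only, not a real policy
    return 0.9 if any(term in post.text.lower() for term in stand_in_terms) else 0.1

AUTO_ACTION = 0.97   # only the most clear-cut matches are actioned automatically
HUMAN_REVIEW = 0.50  # everything uncertain goes to trained human reviewers

def route(post: Post) -> str:
    score = score_risk(post)
    if score >= AUTO_ACTION:
        return "auto_action"
    if score >= HUMAN_REVIEW:
        return "human_review"   # the AI is a first pass, not the final decision
    return "publish"

print(route(Post("1", "An hour of garbage trucks, set to metal")))   # publish
print(route(Post("2", "This hoax cure will fix everything")))        # human_review
```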
So, talking about the Center, you mentioned the Center, you were a founding adviser. Again, I want to get back to this idea of what you do, what are the things you do. So can you give some insight? Like, do you think you're spending your time with technology well? Talk about yourself, what addictions could you kick? You have a kid. I do that all the time.
I do. I've got two kids. I've got a 4-and-a-half-year-old.
I talk about Fortnite almost continually.
Yeah, my 4-and-a-half-year-old, he loves YouTube. And the reason he loves YouTube is because when he was very small, he really had a thing for garbage trucks. Loved garbage trucks.
Yeah, they all do. Boys.
I don't know why. I don't understand what it is about garbage trucks, but there was this video that I found on YouTube, literally an hour long. Somebody filmed garbage trucks across the country.
I know that video.
You've seen this video, right? They've set it to metal and he loved that video and that took us into Blippi and then, with the Blippi channel ... And this is great content. This is really useful content, both from the standpoint of, like, as a parent, I want to take a shower, you know?
Garbage trucks.
Unfortunately (right, leaving the garbage trucks), the problem is now, at 4 and a half, he understands that he can click this button and get away from the garbage trucks and get away from Blippi, and then this is how I find myself dealing with, like, unboxing videos which target children, which is just ... This is not the biggest problem the world faces but, in my house, there's a problem, because then it turns into, "I want that toy, I want to watch someone play with Play-Doh." I'm like, "You could play with your own Play-Doh." "No, I don't want to. I want to watch somebody else play with Play-Doh," and I'm like, this is the craziest thing. This is like Twitch for 4-year-olds with Play-Doh. Just play with your own Play-Doh.
Right.
So there is that element where now we're like, "Okay, you only get two 22-minute episodes. I prefer you to be on Netflix, which is sanitized, so I don't find you looking at God knows what on YouTube." I believe that there's a balance, there's a responsible way to use it, but the problem is the Autoplay on YouTube in particular is really destructive in our house, because it means that unless I was in the room at the minute that the video was rolling, I would have to have a fight about how he just wanted to watch to the end.
I myself, outside of my little boy, I do that thing where you pick up your phone and you pick it up for a purpose and you're like, "I'm going to send an email to this person," and you see the red button, the red dot on some app and you're like, "Oh, there's that red dot." And then you're four apps in before you realize that you just never did the thing that you actually picked up the phone to do. So I do notice the intentionality being a little bit amiss. I wouldn't say I'm addicted. I do believe I'm very much more easily distracted and that makes me uncomfortable.
Right, right. Well, it's there to distract you, like there's just one more thing that's ...
Yeah, well, everything's competing for attention. This is the challenge, right?
This one really does get ... I can turn off the television. It's hard to turn off the phone. You know what I mean? I can actively turn off the television.
Well, it's the pushes also. I get the push notifications even though I try to turn most of them off. You know, you glance down ...
You know what I got that I actually thought was very helpful from a technological standpoint? I got a Ringly bracelet. My husband got it for me when I had our second baby and it has his number and the daycare number and the preschool number, and so if one of those three people is calling me or texting me, it will buzz. So it just gives me a little, tiny nudge on the wrist. There's no screen, there's nothing for me to push or turn off. It just lets me know, "Somebody important just called. You should probably pay attention to that."
Pay attention. Or texted you.
Yeah, because that's the one area where I'm like ... When you have two little kids, you need to be reachable.
Right, right. Yeah, you need to be reachable for everything, just so you know. It goes on and on and on and on.
So let's finish up talking about things people can do in all these areas, on disinformation, on the use of the phones, on civility. Give me one tip for each of those, because you're in all these areas.
Yeah, disinformation is, really, they're preying on your confirmation bias, right? When the content is being pushed to you, it's something that you want to see, so take the extra second to do the fact-check, even if it confirms your worst impulses about someone you absolutely hate before you hit the retweet button, before you hit the reshare button. Just take the extra second until we get to a point where we don't all have to do that all the time, every minute of every day.
So somebody comes in (Hillary Clinton, the emails, if you're that side), it's not true that he said this ...
But, yeah, I think this is the benefit of the doubt factor. I think we've kind of lost that entirely at this point. Everything is the worst thing that anyone possibly could've done until five minutes later when the next worst thing comes up again. It leaves people in a constant state of stress and emotional upheaval. It's not healthy.
On the addiction front ... Gosh, I would say turn off notifications. Get yourself something that doesn't have a screen so that you get the alerts that you need. That's really been instrumental for us. And there was a third thing that you asked? There's disinformation ...
Civility.
Civility. I think, again, it goes back to ... What I try to do, I follow a lot of Republicans across the spectrum, including some Trump Republicans, and I try to do the same thing on the left. I feel like it just gives me some perspective.
Meaning?
I try to read content to not agree with their point of view, but understand their point of view. I do think that there's something to be said for that. I was part of a fellowship program that was sponsored by both the Bush and Clinton foundations and I had the opportunity to spend a lot of time with what I would now call moderate Republicans, reasonable Republicans, and I felt like there were a lot of things that we disagreed on but, as people, there was a fundamental underlying respect for kind of shared humanity in the conversation.
Sure, sure. I think the problem is the nutjobs on both sides.
Yeah.
I get just as much from the left as the right.
Exactly.
The other day, I was literally like, "You've got to be kidding me," and I expect better from the left, obviously. But then I'm like, "No, you're using the same tactics," which is interesting.
What do you think the impact of Trump is on this, because he's such an active social media ... Wait, if he got kicked off of Twitter, he'd have nowhere to go, wouldn't he? He's never going to get kicked off Twitter, but that's the difference ...
Well, Fox will continue to push the debates.
Yeah, I know, but it's not the same thing.
But it's not the same thing. Well, one of the things that's an interesting question ...
Doesn't work on Facebook. I guess YouTube.
The way that the intelligence communities think about leaving hostile content up online, letting the ISIS accounts stay, for example, was are you getting more information than you otherwise would? What's the cost-benefit of having that information? So if the people on the left are not going to watch Fox News or watch the areas where the President is speaking, is it perhaps better, almost, that they have this accessibility where they can just go click into his account, read it and be done?
You know, more and more people get their information, get their news from social media rather than actually going out and picking up a newspaper, so this is ...
So is he the symptom or the cause? Because he's the most perfect example ...
I think he's definitely exacerbating the symptoms, unfortunately. I think that the particularly belligerent, constantly hostile, constantly outrageous tone that he prefers is deeply harmful.
Does it wear off? Get tiresome after a while, it's like a TV show?
I think that the fatigue is actually a problem, right? You don't want to get so fatigued that you check out, because that's how disinformation works. That's how the "I can't tell what's true anymore, I don't trust the people around me anymore, the effort to find the truth is so arduous that I'm not going to bother," and then in more authoritarian countries, "The government is what it is and ..."
And people zone out.
And people zone out.
From their outrage, anyway. Well, there's lots of outrage and we'll see where it goes. Let's hope we get better news next time we talk about this. We'll see what happens. Maybe everyone throws their phone in the river or something.
Do you think?
No. It'll be attached to your head and then you'll be literally, it's like an episode of "Black Mirror," but a real bad one.
This has been another great episode of Too Embarrassed to Ask. Thanks again to Renée DiResta for joining me on the show and I hope your kids are going to school with vaccinations.
They are.
Okay, good.