Are Twitter’s challenges in regaining consumers’ trust so ingrained in its structure that any real remedy would require an overhaul rather than targeted tweaks?
Twitter CEO Jack Dorsey sat down Tuesday for a casual interview at TED 2019, the popular conference series that focuses on technology, entertainment and design.
Dorsey’s responses suggest that quick fixes would do little to quell the rise of abusive language and misinformation on the platform.
He’s the latest social media honcho to venture into a high-profile setting to convince the public that the industry overall, and his company in particular, are striving to improve and deserve consumers’ patience. Twitter has faced scrutiny in recent years over how it handles abuse, hate speech and aggressive behavior on its platform.
Moderator Chris Anderson questioned whether the CEO grasped the fear and urgency that so many feel about social media platforms’ outsize role in shaping public opinion.
For most of the interview, Dorsey outlined steps that Twitter has taken to combat abuse and misinformation, but Anderson pressed him on why the company’s critics find those steps so insufficient and unsatisfying. He compared Twitter to the Titanic, and Dorsey to the captain, listening to passengers’ concerns about the iceberg up ahead, then going back to the bridge and showing “this extraordinary calm.”
“It’s democracy at stake, it’s our culture at stake,” Anderson said, echoing points made yesterday in a talk by journalist Carole Cadwalladr. So why isn’t Twitter addressing these issues with more urgency?
“We are working as quickly as we can, but quickness will not get the job done,” Dorsey replied. “It’s focus, it’s prioritization, it’s understanding the fundamentals of the network.”
Dorsey claims that the reason Twitter appears so ineffective in responding to abusive speech on the platform is that any potential fixes would strike at the fundamentals of how the platform operates. At times, his solution appeared to involve remaking the platform entirely.
Dorsey recalled that when the team was first building the service, it decided to make follower count “big and bold,” which naturally made people focus on it.
“Was that the right decision at the time? Probably not,” he said. “If I had to start the service again, I would not emphasize the follower count as much … I don’t think I would create ‘likes’ in the first place.”
Since he isn’t starting from scratch, Dorsey suggested that he’s trying to find ways to redesign Twitter to shift the “bias” away from accounts and toward interests.
More specifically, TED’s Whitney Pennington Rodgers, who co-hosted the interview, asked about the frequent criticism that Twitter hasn’t found a way to consistently ban Nazis from the service.
“We have a situation right now where that term is used fairly loosely,” Dorsey said. “We just cannot take any one mention of that word accusing someone else as a factual indication of whether someone can be removed from the platform.”
The critique comes at a time of increased public divisiveness and rancor. Just this week, false claims about the fire that engulfed Notre Dame Cathedral spread rapidly on the platform.
Dorsey didn’t address any of these incidents specifically at TED. In fact, his answers lacked specificity overall. When he was asked pointed questions, he evaded them, as he often does. Rodgers asked him how many people work on content moderation at Twitter, a number the company has never published, and on Tuesday Dorsey continued the streak of vagueness.
“It varies,” Dorsey said. “We want to be flexible on this. There are no amount of people that can actually scale this, which is why we have done so much work on proactively taking down abuse.”
That proactive work was the big news Dorsey announced from the stage: A year ago, Twitter wasn’t using machine learning to proactively monitor abuse at all. Instead, it relied entirely on human reporting, a burden Dorsey was quick to acknowledge was unfairly placed on the victims of the abuse. “We’ve made progress,” he said. “Thirty-eight percent of abusive tweets are now proactively recognized by machine-learning algorithms, but those that are recognized are still reviewed by humans. But that was from zero percent just a year ago.” As he uttered those words, Twitter sent out a press release with more information on the effort, highlighting that three times more abusive accounts are being suspended within 24 hours of being reported compared with this time last year.
At times, Dorsey seemed focused on optics rather than the deeper problems that plague Twitter.
Dorsey did bring up one specific fix. “The first thing you see when you go to [the page to report abuse] is about intellectual property protection. You scroll down and you get to abuse and harassment,” he noted. “I don’t know how that happened in the company’s history, but we put that above the thing that people actually want the most information on. Just our ordering shows the world what we believed was important. We are changing all that, we are ordering it the right way.”
For all his insistence on the bigger picture, this was a very small problem for Dorsey to point out, and one with a very obvious solution. Nevertheless, Twitter is not fixed. Why? The reasoning here is agonizingly circular: Dorsey says he doesn’t want to make a bunch of small, iterative quick fixes; he wants to fundamentally rebuild the site to encourage better conversations, and that will take time, time it’s unclear the world can afford.
In a twist, users took to Twitter itself to press the CEO.
Hi @jack, long time fan. If you can block neo-Nazis in Germany in accordance with German law, then do you think it might be a good idea to just ban neo-Nazis in general? Or do think that literal neo-Nazis have ideas worth being heard? @TEDTalks #AskJackAtTED
— NEED FOR CHEE: HOT FURSUIT (@Kamunt) April 16, 2019
#AskJackAtTED how are you going to “reduce outrage” and “prioritize healthy conversations” without silencing protest and without silencing voices dissenting against power? because that is already beginning to look like an (intentional?) outcome
— jonny sun (@jonnysun) April 16, 2019
Ooh exciting. @jack is taking questions from Twitter users live at TED today. Anyone?? I’d like to know why a video that showed me being beaten up & threatened with a gun to soundtrack of Russian anthem stayed up for 72 hours despite 1000s of complaints, @jack?#AskJackAtTED pic.twitter.com/KuRdNoDyAY
— Carole Cadwalladr (@carolecadwalla) April 16, 2019
I don't think jack realizes that "ask jack" is a BAD idea right now lmao. which is funny bc it's very much on brand and *exactly* the problem #AskJackAtTED
— Tonight, dinner is you 🤺 (@Dinnerisyou) April 16, 2019
Users, including journalist Carole Cadwalladr (@carolecadwalla), also fact-checked his claims in real time.
Other users took to the platform to assert that Dorsey doesn’t understand what users want:
@jack we don’t want to interact with “topics of interest” we just want to interact with individuals who are not abusive cretins and/or Nazis
— Katie Mack (@AstroKatie) April 16, 2019
What’s beyond scary here is that Jack Dorsey has no idea what his users want. We don’t want to shift to following topics, we’ve just spent YEARS curating our feed of individuals. I spend an extraordinary amount of time on here. I’ve never been more concerned with its leadership. https://t.co/tQt9q820OR
— Darren Rovell (@darrenrovell) April 16, 2019
these wackdoodle ideas come and go but I am hard pressed to think of the head of another successful company who seems more at sea about why people use their product https://t.co/uE01UDMuP5
— Sam Adams (@SamuelAAdams) April 17, 2019
Twitter says it has made progress on removing abusive content and accounts from the platform and on proactively protecting users.
People who don’t feel safe on Twitter shouldn’t be burdened to report abuse to us. Previously, we only reviewed potentially abusive Tweets if they were reported to us. We know that’s not acceptable, so earlier this year we made it a priority to take a proactive approach to abuse in addition to relying on people’s reports.
This time last year, 0% of potentially abusive content was flagged to our teams for review proactively. Today, by using technology, 38% of abusive content that’s enforced is surfaced proactively for human review instead of relying on reports from people using Twitter. This encompasses a number of policies, such as abusive behavior, hateful conduct, encouraging self-harm, and threats, including those that may be violent.
What do you think of Dorsey’s TED interview, PR Daily readers?