OAKLAND, Calif. – For months, Twitter, Facebook and YouTube braced themselves to crack down on disinformation on Election Day.
On Tuesday, most of their plans went off without a hitch. The social platforms added labels to President Trump’s misleading messages and told their users that there was no immediate outcome to the presidential race. On television, news anchors even cited fact-checks similar to those performed by Twitter and Facebook.
Then came Wednesday. With ballots still being counted and no clear result in sight, the tide of disinformation shifted from sowing doubts about the vote to false declarations of victory. Throughout the day, Twitter moved quickly to label several of Mr. Trump’s tweets as misleading about the outcome of the race, and did the same with tweets from members of his circle, including his son Eric Trump and the White House press secretary, Kayleigh McEnany. Facebook and YouTube used their home pages to point people to accurate information about the election.
The actions underscored that even a smooth performance on Election Day did not mean the social media companies could relax their efforts to fight off an endless stream of toxic content. In fact, the biggest tests for Facebook, Twitter and YouTube are still to come, disinformation researchers said, as false narratives may keep cropping up until a final result in the presidential race is certified.
“What we found on Election Day from the companies was that they were extremely responsive and faster than we’ve ever seen,” said Graham Brookie, director of the Atlantic Council’s Digital Forensic Research Lab. But now, he said, misinformation was focused squarely on the results and on undermining them.
“You have a hyper-focused audience and a point in time where there’s a huge amount of uncertainty, and bad actors can use that opportunistically,” he said.
Twitter said it was continuing to monitor for disinformation. Facebook said, “Our work is not done – we will stay vigilant and promote reliable information on Facebook as votes continue to be counted.” YouTube said it, too, was on alert for “election-related content” in the coming days.
The companies had all braced for a chaotic Election Day, hoping to avoid a repeat of 2016, when their platforms were misused by Russians to spread divisive disinformation. In recent months, the companies had rolled out numerous anti-disinformation measures, including suspending or banning political advertising, slowing the flow of information, and highlighting accurate information and context.
As Americans voted across the country on Tuesday, lies about broken voting machines and biased election officials repeatedly surfaced. But the companies weren’t tested until Mr. Trump – with early results showing how tight the race was – posted to Twitter and Facebook just before 1 a.m. EST to baselessly attack the electoral process.
“They are trying to STEAL the election,” Mr. Trump posted on the sites, without specifying who he was talking about.
Twitter acted swiftly, hiding Mr. Trump’s inaccurate tweet behind a label that warned people the claim was “disputed” and “might be misleading about an election or other civic process.” Twitter, which first began labeling Mr. Trump’s tweets in May, also limited users’ ability to like and share the post.
On Wednesday morning, Twitter added more labels to Mr. Trump’s posts. In one, he tweeted that his early leads in Democratic states had “started to magically disappear.” In another post, Mr. Trump said unnamed people were working to erase his lead in the battleground state of Pennsylvania.
Twitter also applied labels to posts that falsely claimed victory. One was added to a post from Ben Wikler, chairman of the Wisconsin Democratic Party, in which he prematurely claimed that Joseph R. Biden Jr. had won the state. The Associated Press and other media outlets later called Wisconsin for Mr. Biden, though Mr. Trump called for a recount.
On Wednesday afternoon, Twitter also added context to tweets from Eric Trump, one of Mr. Trump’s sons, and Ms. McEnany after they preemptively claimed that Mr. Trump had won Pennsylvania, even though the race there had not been called. The company also labeled other posts in which Mr. Trump claimed victory in battleground states such as North Carolina and Georgia, where the races had not been called, and restricted the sharing of his misleading claims about electoral fraud.
“While votes are still being counted across the country, our teams continue to take enforcement action on tweets that prematurely declare victory or contain misleading information about the election in general,” Twitter said.
Facebook took a more cautious approach. Mark Zuckerberg, its chief executive, has said he is unwilling to fact-check the president or other political figures because he believes in free speech. Still, to avoid being misused in the election, Facebook said it would respond to premature claims of victory by noting, where necessary, that the election had not yet been called for a candidate.
On Tuesday night, Facebook had to do just that. Shortly after Mr. Trump claimed that the election was being stolen from him, Facebook added labels to his posts. The labels noted that “no winner of the presidential election had been projected.”
After the polls closed, Facebook also sent users a notification that anyone still waiting in line at a polling station could still cast a vote.
Facebook added more labels to Mr. Trump’s new posts on Wednesday, countering his claims by noting that “as expected, election results will take longer this year.”
Unlike Twitter, Facebook did not block users from sharing or commenting on Mr. Trump’s posts. But this was the first time Facebook had used such labels, part of the company’s plan to add context to election posts. A spokesperson said the company “had planned and prepared for these scenarios and built the essential systems and tools.”
YouTube, which Mr. Trump does not use regularly, faced fewer high-profile issues than Twitter and Facebook. All YouTube videos about the election results carried a label noting that the election might not be over, with a link to a Google page showing results from The Associated Press.
But the site ran into a problem early Tuesday night when several YouTube channels, one with more than a million subscribers, claimed to be streaming live election results. What the live streams actually showed was a graphic of projected election results with Mr. Biden in the lead. They were also among the first results that appeared when users searched for election results.
After media reported the issue, YouTube removed the video feeds, citing its policy of banning spam, deceptive practices and scams.
One America News Network, a conservative cable news network with nearly one million YouTube subscribers, also posted a video commentary to the site on Wednesday claiming that Mr. Trump had already won the election and that Democrats were “tossing Republican ballots, harvesting fake ballots and delaying the results” to cause confusion. The video has been viewed more than 280,000 times.
Farshad Shadloo, a spokesperson for YouTube, said the video did not violate the company’s policy on misleading claims about voting. He said the video carried a label noting that election results were not final. YouTube added that it had removed ads from the video because it does not allow creators to make money from content that undermines “election confidence with demonstrably false information.”
Alex Stamos, director of the Stanford Internet Observatory, said the tech companies still had a fight against election misinformation ahead of them, but were prepared for it.
“There will always be a long trail of disinformation, but it will have less impact,” he said. “They’re still working, that’s for sure, and will try to maintain that level of staffing and focus until the outcome is generally accepted.”
But Fadi Quran, a campaign director at Avaaz, a progressive nonprofit that tracks disinformation, said Facebook, Twitter and YouTube needed to do more.
“The platforms must quickly expand their efforts before the country plunges into further chaos and confusion,” he said. “It’s a democratic emergency.”