Nearly half of Twitter accounts discussing the coronavirus are likely bots, researchers say


Nearly half of the Twitter accounts sharing information about the novel coronavirus are likely bots, according to researchers at Carnegie Mellon University.

The researchers analyzed more than 200 million tweets discussing the coronavirus or COVID-19 since January. They found that almost half were sent by accounts that behaved more like automated bots than real humans.


Of the top 50 most influential retweeters, 82% are likely bots, the research shows. Of the top 1,000 retweeters, 62% are likely bots.

The researchers have identified more than 100 types of inaccurate COVID-19 stories, including misinformation about potential cures and conspiracy theories, such as claims that hospitals are filled with mannequins or that the coronavirus is linked to 5G towers. The researchers say bots also dominate conversations about ending stay-at-home orders and "reopening America."

The team said it is too early to identify which entities might be behind the bots "trying to influence online conversations."

"We know it looks like a propaganda machine, and it certainly fits the Russian and Chinese playbooks, but it would take a tremendous amount of resources to prove it," Kathleen Carley, a professor in Carnegie Mellon's School of Computer Science, said in a statement.

Carley said she and her colleagues are seeing roughly twice as much bot activity as the team had predicted based on previous natural disasters, crises and elections.

In this photo illustration, the Twitter logo is displayed on a smartphone with a COVID-19 image in the background. (Photo illustration by Omar Marques/SOPA Images/LightRocket via Getty Images)

The team used several methods to identify which accounts were real and which were likely bots. Artificial intelligence tools analyzed account information, looking at factors such as the number of followers, the frequency of tweeting and an account's mentions.

"Tweeting more frequently than is humanly possible, or appearing to be in one country and then another a few hours later, is indicative of a bot," Carley said.

"When we see a number of tweets at the same time or back to back, it's as if they're timed. We also look for the use of the same exact hashtags, or messaging that appears to be copied and pasted from one bot to the next," Carley added.
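As an illustration of the kind of heuristics Carley describes, the sketch below checks two simple signals: an implausibly high tweet rate and large amounts of copy-pasted text. It is a minimal, hypothetical example, not the Carnegie Mellon team's actual tooling; the thresholds and function names are assumptions.

    # Minimal sketch of the timing and duplication heuristics described above.
    # Illustrative only; thresholds and field names are assumptions, not the
    # researchers' actual model.
    from collections import Counter
    from datetime import datetime

    MAX_HUMAN_TWEETS_PER_HOUR = 60  # assumed cutoff: faster looks automated

    def tweets_per_hour(timestamps):
        """Average tweet rate over the observed window (ISO-format timestamps)."""
        times = sorted(datetime.fromisoformat(t) for t in timestamps)
        hours = max((times[-1] - times[0]).total_seconds() / 3600, 1e-9)
        return len(times) / hours

    def duplicate_text_ratio(texts):
        """Share of tweets whose exact text appears more than once (copy-paste signal)."""
        counts = Counter(t.strip().lower() for t in texts)
        dupes = sum(c for c in counts.values() if c > 1)
        return dupes / max(len(texts), 1)

    def looks_like_bot(timestamps, texts):
        """Flag an account that tweets implausibly fast or reposts identical text."""
        return (tweets_per_hour(timestamps) > MAX_HUMAN_TWEETS_PER_HOUR
                or duplicate_text_ratio(texts) > 0.5)

    if __name__ == "__main__":
        ts = [f"2020-05-20T12:00:{s:02d}" for s in range(50)]  # 50 tweets in under a minute
        msgs = ["Reopen now! #reopen"] * 50
        print(looks_like_bot(ts, msgs))  # True: both the rate and duplication checks trip

Real detection systems weigh many more signals (follower counts, network structure, account age), but the basic idea of flagging superhuman posting rates and duplicated messages is the same.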

The team at Carnegie Mellon continues to monitor tweets and says posts from Facebook, Reddit and YouTube have been added to its research.

A Twitter blog post this week from Yoel Roth, head of site integrity, and Nick Pickles, director of global public policy and strategy, described "bot" as a loaded and often misunderstood term.

"People often refer to bots when describing everything from automated account activity to individuals who would prefer to be anonymous for personal or safety reasons, or avoid a photo because they have strong privacy concerns," the post read. "The term is used to mischaracterize accounts with numerical usernames that are auto-generated when your preferred name is taken, and more worryingly, as a tool by those in positions of political power to tarnish the views of people who may disagree with them or online public opinion that is not favorable."

Twitter told NPR it has removed thousands of tweets containing misleading and potentially harmful information about the coronavirus.

In the blog post, Twitter added that not all bots violate its rules, citing, for example, customer service chatbots that automatically surface information about orders or travel bookings.

The company said it proactively focuses on "platform manipulation," which includes the malicious use of automation to undermine and disrupt the public conversation, such as trying to get something to trend.

For anyone unsure about an account's authenticity, the Carnegie Mellon researchers say to examine it closely for red flags. If an account shares links with subtle typos, posts many tweets in quick succession, or has a username and profile picture that don't seem to match, it might be a bot.

"Even if someone appears to be from your community, if you don't know them personally, take a closer look, and always go to trusted or authoritative sources for information," Carley said. "Be very vigilant."

This story was reported from Cincinnati.




