Researchers from the University of Southern California made the discovery after analyzing activity patterns in two Twitter datasets. The first was a set of posts about the 2017 French presidential election. The second was a collection of tweets by bot and human accounts that had already been hand-labeled. They chose these two datasets because they complement each other: the hand-labeled accounts reliably differentiated between humans and bots, while the French election tweets offered more contextual insights. The researchers then analyzed each account's behavior by studying the lengths of its tweets and the number of retweets, replies, and mentions they attracted.
Twitter bots don’t get bored
The results revealed some key differences between human and bot activity. While humans' interactions increased over time, they simultaneously tweeted less. The bots, meanwhile, maintained the same levels of activity and tended to produce content at regular intervals, such as every 30 minutes. The researchers suspect this is because humans get too tired to compose original content and become distracted by other content. They used these findings to train a bot-detection algorithm called Botometer to distinguish between human and fake accounts. When the AI ignored the timing of the posts, it did a better job of detecting the bots. These insights could help AI to detect fake accounts seeking to influence elections — at least until the bots can mimic our tendency to tire. And as the data in the study is already three years old, they may already have learnt to do that.
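To make the idea concrete, here is a hypothetical sketch (not the researchers' actual code) of how two of the behavioral signals described above could be computed for a single posting session: whether tweet length trends downward as a human tires, and how clock-regular the gaps between posts are. All function and variable names here are illustrative assumptions.

```python
from statistics import mean, stdev

def session_features(timestamps, lengths):
    """Toy behavioral features for one account's posting session.

    timestamps: posting times in seconds, sorted ascending
    lengths: tweet lengths in characters, same order
    Returns (length_trend, interval_regularity):
      - length_trend < 0 suggests a human tiring over the session
      - interval_regularity near 0 suggests bot-like, clock-driven posting
    """
    n = len(timestamps)
    # Slope of tweet length vs. position in session (simple least squares).
    xs = list(range(n))
    x_bar, y_bar = mean(xs), mean(lengths)
    slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, lengths)) / \
            sum((x - x_bar) ** 2 for x in xs)
    # Coefficient of variation of inter-tweet gaps: 0 means perfectly regular.
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    regularity = stdev(gaps) / mean(gaps)
    return slope, regularity

# A human-like session: tweets shrink over time, gaps drift apart.
human = session_features([0, 70, 190, 360, 580], [240, 200, 150, 90, 40])
# A bot-like session: constant length, one post exactly every 30 minutes.
bot = session_features([0, 1800, 3600, 5400, 7200], [140, 140, 140, 140, 140])
print(human, bot)
```

In this toy example the human session yields a negative length trend and noisy intervals, while the bot session yields a flat trend and perfectly regular intervals; a real detector would feed features like these, among many others, into a trained classifier.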