Q&A: Everything You Need to Know About Social Bots

Social bots are programmed to appear as real people on social networks—tweeting and retweeting, amassing followers, even having matching Facebook accounts. Filippo Menczer, an Indiana University professor and a principal investigator for Truthy, a research program that tracks bots and Twitter trends, explains how social bots can increasingly manipulate consumers.

What is a social bot and how does it work?

A social bot is a piece of software that is designed to have a presence on the Internet—especially on social media—and is engineered to achieve some purpose. Typically they’re designed to make something appear to be happening that isn’t. They can impersonate someone and post as that person, thereby promoting a message. Spammers can do this to promote content or an idea, to convince someone to do something. Social bots can be used to create a smear campaign, to promote legislation, or to make it look like many people are responding to something so that a topic trends. Social bots can make a website, a hashtag, a person or user become popular.
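
To make that concrete, here is a minimal sketch of what such a piece of software can look like. It assumes the third-party Tweepy library for the Twitter API; the credentials, hashtag, and canned messages are invented placeholders, not anything Menczer describes.

    import random
    import time

    import tweepy  # third-party client for the Twitter API

    # Invented placeholder credentials; a bot operator would register an
    # app with the platform and supply real keys here.
    client = tweepy.Client(
        consumer_key="API_KEY",
        consumer_secret="API_SECRET",
        access_token="ACCESS_TOKEN",
        access_token_secret="ACCESS_SECRET",
    )

    # Canned talking points pushing a hypothetical hashtag.
    TALKING_POINTS = [
        "Everyone I know is switching to #ShinyNewPhone.",
        "Just ordered my #ShinyNewPhone. Could not wait any longer.",
        "Is anyone else obsessed with #ShinyNewPhone right now?",
    ]

    while True:
        # Post one of the canned messages so the hashtag keeps looking active.
        client.create_tweet(text=random.choice(TALKING_POINTS))
        # Wait an irregular interval so the account looks less mechanical.
        time.sleep(random.randint(1800, 7200))

Run from a handful of accounts, even something this crude can make a topic look busier than it really is.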

For example, we know from social theory that people are more likely to change behavior or adopt a new behavior—say voting or buying something—when they’re exposed to multiple people they know exhibiting or promoting that behavior. Well, creating the impression that multiple people are buying something or voting in a particular way could have very real implications.
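
A toy illustration of that reinforcement effect, with an adoption threshold of three friends chosen purely for the example:

    def adopts(apparent_adopting_friends, threshold=3):
        """Toy 'complex contagion' rule: a person adopts a behavior only
        after seeing enough friends apparently adopt it."""
        return apparent_adopting_friends >= threshold

    # One real friend endorsing a product is not enough social proof...
    print(adopts(1))      # False

    # ...but three bot accounts posing as friends tip the balance.
    print(adopts(1 + 3))  # True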

Why should people care about the rise of these bots?

People should care because no one wants to be manipulated. Social media is becoming very popular and prevalent. A large number of people are interacting on Facebook and Twitter—here and around the world—and on clones such as Renren in China. Statements made on social media, the patterns on these networks, affect our personal choices and behaviors. What we see online informs our views: our opinion when we buy something, how we vote, whether we decide to sign a petition.

As with all new technologies there’s an initial period when we’re naive to possible negative repercussions. But it’s our job to become aware and protect ourselves against them. We’ve learned to develop spam filters for email and malware removal software for our computers. Now we need to realize that comments on social media may not be coming from a real person. A statement backed by 1,000 people may be coming from a single programmer.

A recent New York Times news story said that social bots could be used “to sway elections, to influence the stock market, to attack governments.” Are these likely scenarios we can expect in the near future? 

Nobody knows for sure if they were used to sway elections, but we know people have tried. It’s not my work, but some of my colleagues in Boston presented a paper at the Web Science conference in 2010 on an attack by social bots that happened the night before the special election to replace Sen. Kennedy in Massachusetts. The attack was against the Democratic candidate and was either false or misleading, depending on your political beliefs. There were 10 of these social bot accounts, and they posted on the same topic with a link to a site with a fake or alleged statement by the candidate, targeting thousands of users. This is a technique used by spammers, so Twitter identified it. The accounts were taken down in two hours, but in that time Republican voters retweeted it and kept it alive. So, the next day, when one typed the Democratic candidate’s name into Google, the first thing that came up was this fake news, because Google pulled live election news from topics trending on Twitter. Whether it swayed the election, we don’t know, but we do know it generated 60,000 tweets. My colleagues did find out that a well-known PAC was behind it, the same people behind the Swift Boat Veterans for Truth campaign.

In other instances people have shown that Twitter can be used to predict movements in the stock market, and even election outcomes, so if some of this chatter is generated by bots, well, you can put two and two together. Just recently Apple stock rose sharply because a famous investor said on Twitter that he thought it was undervalued. You can imagine a fake investor, or group of fake investors, generating false information to capitalize on it. Someone could get very rich off those movements.

As for attacking governments or political groups, we’ve already seen this. Social media has done a lot in government resistance circles, from the Occupy movement to more extreme cases in Russia, Georgia—the country, not the state—Iran and, more recently, in Turkey, Syria, and Egypt. Groups have mobilized on Twitter and Facebook, and other parties have flooded those networks with fake or misleading messages to dilute that information. Someone posts that the protest is in this square, but then fake information is inserted with the same hashtag (Twitter topic marker) that says, “No, it’s in this square.” This is a very unsophisticated but effective way social bots can disrupt communication.

You commented in the Times on how oversharing and information saturation are contributing to the rise of the social bot. Can you elaborate?

Indeed, this is the very point I was trying to make. There are a number of things that contribute to social bots. First, we are in an attention economy. Information is abundant, but attention is scarce. So what we decide to pay attention to gets popular. But just because information becomes popular doesn’t mean it’s more valid or truer or better than other information. It just got our attention. Observation No. 2: Social media is how we’re consuming so much of this information and deciding on its popularity, and these decisions affect other choices we make—our shopping or voting preferences, say. No. 3: The more friends we have, the more invested we are in social media, the less time we have to verify and vet our network, so we’re losing the personal connection these networks were originally intended to provide. We become more susceptible to the manipulation of that social reinforcement effect I mentioned earlier. If five of my friends have the newest phone and endorse it, it’s likely I will want the newest phone, for example. That opinion on the newest phone may come from a person we trust, or it could have come from another person who retweeted someone they trust, but all it takes is one person in that chain to be duped. So if information comes from our friend, we tend to believe it came from that friend, but the more people we engage with on our social networks, the easier it is to make a mistake about the source—and the less reliable that piece of information becomes.
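
That last point can be put in rough numbers. Assuming, purely for illustration, that each person in a retweet chain has an independent 5 percent chance of having been duped by a fake source, the chance that nobody in the chain was duped shrinks quickly as the chain grows:

    # Illustrative arithmetic only; the 5 percent figure is made up.
    p_duped = 0.05
    for chain_length in (1, 5, 10, 20):
        p_clean = (1 - p_duped) ** chain_length
        print(f"{chain_length:2d} hops: {p_clean:.0%} chance no one was duped")

With those made-up numbers, a 20-hop chain has only about a one-in-three chance of being untouched.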

We hear on the news this video has been watched one million times. We’re compelled to watch it. What if I told you it was watched by 100,000 people and 900,000 bots? We’d feel differently about it, but we don’t focus on that, we focus on that one million number. And the point of online marketing right now is helping develop viral messages, and there are many unscrupulous people who aren’t afraid to cut corners. Followers on Twitter are an example of this. It was shown that some politicians had huge numbers of followers who, when examined, were not real people, but fake accounts. Well, who has the time to analyze every follower of every politician? So yes, I suspect our limited attention due to information saturation makes us more vulnerable to manipulation.

Kashmir Hill in Forbes recently quoted an expert who said, “Twitter bots loose on the Internet could be a way for attackers to figure out who at an organization is gullible; if you fall for a Twitter bot, you’ll probably fall for a phishing attack.” Here at IDT911, when we first heard of social bots, this was our first thought, too. Have bad guys made this connection yet between bots and phishing attacks or data breaches?

I don’t have an authoritative answer for you, but I’m ready to believe it. Here’s why: We had a paper titled “Social Phishing” in 2007 on phishing attacks using a social bot. We ran an experiment where we took open data from online social networks to view an individual’s friend network. This information is easily available; for example if you use Facebook to log into a site, that site now has access to your friend list. Well, all we needed to know was that Alice is friends with Bob, and then we sent Alice a fake message from Bob that simply said “Check this out” with a link to a phishing site. We had two groups receive the same message and link. In one group the message came from a known friend, in the other from a stranger, and 72 percent—72 percent!—of the people who thought the message was from a friend logged into the phishing site with their actual Indiana University user name and password, whereas in the control group 16 percent—which is still huge—logged in with IU credentials. Sixteen percent is much more than we expected, but it’s not 72 percent. We couldn’t believe it.

So you can see how easily, just by creating a profile and getting people to friend you, you can get sensitive information about usernames and passwords in a phishing campaign. Well, a social bot can very easily friend people and get information on their friends. This could be the stupidest social bot in the world, but it can get enough information to be used in a phishing attack. You could make the bot more sophisticated. The bot could make a message that appears to come from two of your friends, or three of your friends. That could be very powerful, and we see it as very scary.

Bots sound like a whole new avenue of potential fraud. They could “court” potential victims by the thousands, no?

Yes, and some people have looked into that. It’s not my work, but the Times cited dating sites as an example. Someone creates a bunch of accounts with very attractive photos, so people on the site are likely to contact them. In an attempt to meet this fake person, people disclose information about where they live and what they do, and all that information can be used in a phishing attack or fraud. This is definitely happening. The Times reported that when the bots were cleaned from a site, human participation dropped because people didn’t find the site a warm environment—they weren’t bombarded with fake, flattering messages. We all want to be loved. This has been the idea behind brick-and-mortar fraud for centuries, so there’s no reason to expect that it wouldn’t happen online.

That brings up a good question: What can users do to protect themselves against social bots?

Today, at this time, the only thing a person can do is to be vigilant. Take your mom’s advice: Don’t talk to strangers. Just because they’re online doesn’t mean they’re innocuous. By adding unknown people to your friend networks you can make yourself easy to victimize. You have to be careful whom you talk to online. Don’t talk to strangers. It’s really that simple.

Eventually, we hope to develop automatic countermeasures in the network, and then we can be a little less careful, because the system will police itself. And of course social media companies are working very hard to delete fake accounts and identify criminals. For example, now on Facebook you can flag someone who tries to friend you if you don’t know them. This presumably puts them on a watch list that can be vetted for trouble.
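
To give a feel for what automated vetting might look like, here is a toy, feature-based heuristic. It is not the method used by Truthy or by the social media companies, and every feature and threshold in it is invented for illustration.

    def suspicion_score(account):
        """Count rough red flags on an account. Real detectors use far
        richer features and machine learning, not hand-picked rules."""
        score = 0
        if account["followers"] > 0 and account["following"] / account["followers"] > 20:
            score += 1  # follows far more accounts than follow it back
        if account["posts_per_day"] > 100:
            score += 1  # posts at an inhuman rate
        if account["account_age_days"] < 7:
            score += 1  # brand-new account
        if not account["has_profile_photo"]:
            score += 1  # no photo, a common sign of a throwaway profile
        return score

    # A hypothetical friend request worth treating with care.
    stranger = {
        "followers": 3,
        "following": 900,
        "posts_per_day": 400,
        "account_age_days": 2,
        "has_profile_photo": False,
    }
    print(suspicion_score(stranger))  # 4 red flags out of 4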

What about positives? Can you imagine a way bots could be used to protect children against cyberbullying, say, or to combat phishing bots?

Absolutely. You have to think of social bots as any other kind of mass communication device. If you have a good message—defined by who created it and who’s reading it—then bots could be used to promote that message. It’s just another form of advertising. Brands are using social media en masse to do this. They’re trying to create profiles or pages that are attractive. They’re not stating that they’re people, but they’re trying to persuade you. This could be used to raise awareness against racism or bias or, as one of my IU colleagues is doing, to combat the stigma of mental disorders. So yes, definitely, you can have a campaign, positive or negative, that uses software to generate messages, to promote messages, to attract followers who will listen to your message. This can all be automated. You can make a social bot that alerts followers to an event in their area, or asks them to sign up for a newsletter or sign a petition. We even use bots in our research. For example, we would like to promote an app developed in partnership with The Kinsey Institute for Research in Sex, Gender and Reproduction. Spreading the word about the app will foster data collection—for us, it’s basically advertising. Now, would we create a fake account? No. But we can use social bots in a way that’s not frowned upon. We might automate certain posts triggered by events of interest to users, directing users to links with further relevant information.

Some people say technology is neutral; it can be used for good or bad. We shouldn’t brand bots one way or the other; we have to accept that the technology can go either way. This happened with the telephone—people thought it would end face-to-face communication—and similarly with radio and television. To some degree, with any new technology, bad things will happen. Social bots can definitely change our behaviors, so we can’t be naive about them.
