FAKE FACEBOOK ACCOUNTS ARE GETTING HARDER TO TRACE

Facebook has taken down 32 fake pages and accounts that it says were involved in coordinated campaigns on both Facebook and Instagram. Though the company has not yet attributed the accounts to any group, it says the campaign does bear some resemblance to the propaganda campaign run by Russia’s Internet Research Agency (IRA) in the run-up to the 2016 presidential election. Facebook is now working with law enforcement to determine where the campaign originated.


“We’re still in the very early stages of the investigation, and we don’t know all the facts, including who might be behind it,” Facebook chief operating officer Sheryl Sandberg said on a call with reporters Tuesday.

According to Facebook, some 290,000 Facebook users followed at least one of these pages. The most popular ones were called Aztlan Warriors, Black Elevation, Mindful Being, and Resisters. Across the phony accounts and pages, Facebook found politically divisive content about, among other things, Immigration and Customs Enforcement, President Trump, and the Unite the Right rally scheduled for Washington, DC, in August. These pages also organized about 30 events over the past year.

One event, a protest against Unite the Right in Washington called “No Unite the Right 2 – DC,” was scheduled for August 10th. It was cohosted by other, legitimate pages, and more than 3,000 people indicated they were interested in or planned on attending. The desire to get out ahead of this event, Facebook says, hastened its announcement. The company says it disabled the event on Tuesday and alerted the administrators of those pages. It will also notify those users who were interested in attending the event, but a spokesperson told WIRED it’s “premature” to alert all 290,000 people impacted by the campaign.

Shortly after the announcement, other organizers of the protest took to Twitter to object to Facebook’s suspension of the event. “I cannot believe I have to say this: The Unite the Right counter protest is not being organized by Russians,” wrote one user, Dylan Petrohilos. “We have permits in DC, we have numerous local orgs like BLM, Resist This, and Antifascist groups working on this protest. FB deleted the event because 1 page was sketch.” Petrohilos also tweeted that the event was founded by another group, not the Resisters page.

The news, which was first reported by The New York Times, marks the first time Facebook has said it detected this kind of activity ahead of the midterm elections in the United States. Just one week ago, in a call with members of the press, Facebook executives evaded several questions on the matter, saying only that the company would report any such activity to law enforcement. The company began looking into this particular network two weeks ago.

But Facebook has previously acknowledged that fake news sites originating in Macedonia popped up during the Alabama special elections last year. And in April, Facebook shut down hundreds more phony Russian pages linked to the Internet Research Agency, which were targeting people in Russia, Azerbaijan, Uzbekistan and Ukraine.

Since late last year, Facebook has been using artificial intelligence tools to root out what it calls “coordinated inauthentic behavior,” which is to say, phony accounts working together to spread a message. It’s also expanded its security team to 15,000 people, up from 10,000 last year, when Facebook first identified the Russian trolls’ work.


But Facebook says it’s harder to assign blame now than it was last year, when the company discovered the IRA-linked accounts. In that case, the group got sloppy with its work, paying for ads in rubles and revealing its Russian IP addresses. The actors behind this campaign have used VPNs to hide their locations and paid third parties to purchase ads on their behalf. Altogether, the newly discovered pages and accounts bought 150 ads for approximately $11,000, the most recent of which was purchased in June 2018.

“We’re still investigating what happened, but whoever created this network of accounts took a lot of effort to hide their real identity, so we don’t yet know for certain who is responsible,” Facebook CEO Mark Zuckerberg wrote in a separate post. “That said, some of this activity is similar to what the Internet Research Agency in Russia did before and after the 2016 US presidential elections.”

There are also some loose ties to the IRA. One known IRA account, for example, was briefly a co-administrator on one of the suspended pages. “We think that is interesting, but not determinative, which is why we wanted to publish our findings while not relying upon that to do attribution,” said Facebook’s chief security officer, Alex Stamos.

The newly discovered network seems to have picked up right about the time the known IRA network fell silent. According to the ads published by the House Intelligence Committee earlier this year, the IRA pages that targeted the 2016 election began to taper off their advertising efforts in the spring of 2017. In this new network, the oldest page Facebook identified dates back to March 2017.

Facebook detected this activity relatively early compared to the IRA’s interference, the evidence of which wasn’t publicly announced until almost a year after the 2016 election. Now, the company says it is following thousands of leads, some of which stem from that investigation and some of which were supplied by law enforcement. Facebook also says it has shared some of its information with other tech companies.

In a statement, Senator Mark Warner, the Democratic vice chair of the Senate Intelligence Committee, commended Facebook for proactively seeking out these accounts. “Today’s disclosure is further evidence that the Kremlin continues to exploit platforms like Facebook to sow division and spread disinformation, and I am glad that Facebook is taking some steps to pinpoint and address this activity,” Warner said. “I also expect Facebook, along with other platform companies, will continue to identify Russian troll activity and to work with Congress on updating our laws to better protect our democracy in the future.”

Facebook is not the first tech company to come forward with evidence of midterm meddling. Earlier this month, Microsoft disclosed that it had already spotted phishing attempts on three US political campaigns. The Daily Beast reported that hackers sent emails to Senator Claire McCaskill’s staff urging them to change their passwords, thereby leading staffers to a fake but convincing domain. Microsoft thwarted that particular attack, but the fight is hardly over.

“Russia continues to engage in cyberwarfare against our democracy,” McCaskill said in a statement last week. “While this attack was not successful, it is outrageous that they think they can get away with this.”

The Department of Justice recently indicted 12 Russian intelligence officers for hacking into the Democratic National Committee’s servers and the email account of Hillary Clinton’s campaign chairman, John Podesta. Earlier this year, special counsel Robert Mueller indicted individuals associated with the Internet Research Agency. In both cases, investigators laid out in detail the extent of the Russian operations, down to the date and time of certain activities.

Now, Facebook says it is up to law enforcement to do the forensic analysis. “We believe law enforcement and the intelligence community will have a lot more information upon which they can draw,” Stamos said. He acknowledged, however, that there’s no telling if or when the company will be able to determine who exactly was behind this information blitz.


Here is the report from Facebook:

Today we removed 32 Pages and accounts from Facebook and Instagram because they were involved in coordinated inauthentic behavior. This kind of behavior is not allowed on Facebook because we don’t want people or organizations creating networks of accounts to mislead others about who they are, or what they’re doing.

We’re still in the very early stages of our investigation and don’t have all the facts — including who may be behind this. But we are sharing what we know today given the connection between these bad actors and protests that are planned in Washington next week. We will update this post with more details when we have them, or if the facts we have change.

It’s clear that whoever set up these accounts went to much greater lengths to obscure their true identities than the Russian-based Internet Research Agency (IRA) has in the past. We believe this could be partly due to changes we’ve made over the last year to make this kind of abuse much harder. But security is not something that’s ever done. We face determined, well-funded adversaries who will never give up and are constantly changing tactics. It’s an arms race and we need to constantly improve too. It’s why we’re investing heavily in more people and better technology to prevent bad actors misusing Facebook — as well as working much more closely with law enforcement and other tech companies to better understand the threats we face.

What We’ve Found So Far
How Much Can Companies Know About Who’s Behind Cyber Threats?
Sample Content
Press Call Transcript

July 31, 2018

What We’ve Found So Far

By Nathaniel Gleicher, Head of Cybersecurity Policy

About two weeks ago we identified the first of eight Pages and 17 profiles on Facebook, as well as seven Instagram accounts, that violate our ban on coordinated inauthentic behavior. We removed all of them this morning once we’d completed our initial investigation and shared the information with US law enforcement agencies, Congress, other technology companies, and the Atlantic Council’s Digital Forensic Research Lab, a research organization that helps us identify and analyze abuse on Facebook.

  • In total, more than 290,000 accounts followed at least one of these Pages, the earliest of which was created in March 2017. The latest was created in May 2018.
  • The most followed Facebook Pages were “Aztlan Warriors,” “Black Elevation,” “Mindful Being,” and “Resisters.” The remaining Pages had between zero and 10 followers, and the Instagram accounts had zero followers.
  • There were more than 9,500 organic posts created by these accounts on Facebook, and one piece of content on Instagram.
  • They ran about 150 ads for approximately $11,000 on Facebook and Instagram, paid for in US and Canadian dollars. The first ad was created in April 2017, and the last was created in June 2018.
  • The Pages created about 30 events since May 2017. About half had fewer than 100 accounts interested in attending. The largest had approximately 4,700 accounts interested in attending, and 1,400 users said that they would attend.

We are still reviewing all of the content and ads from these Pages. In the meantime, here are some examples of the content and ads posted by these Pages.

These bad actors have been more careful to cover their tracks, in part due to the actions we’ve taken to prevent abuse over the past year. For example, they used VPNs and internet phone services, and paid third parties to run ads on their behalf. As we’ve told law enforcement and Congress, we still don’t have firm evidence to say with certainty who’s behind this effort. Some of the activity is consistent with what we saw from the IRA before and after the 2016 elections. And we’ve found evidence of some connections between these accounts and IRA accounts we disabled last year, which is covered below. But there are differences, too. For example, while IP addresses are easy to spoof, the IRA accounts we disabled last year sometimes used Russian IP addresses. We haven’t seen those here.

We found this activity as part of our ongoing efforts to identify coordinated inauthentic behavior. Given these bad actors are now working harder to obscure their identities, we need to find every small mistake they make. It’s why we’re following up on thousands of leads, including information from law enforcement and lessons we learned from last year’s IRA investigation. The IRA engaged with many legitimate Pages, so these leads sometimes turn up nothing. However, one of these leads did turn up something. One of the IRA accounts we disabled in 2017 shared a Facebook Event hosted by the “Resisters” Page. This Page also previously had an IRA account as one of its admins for only seven minutes. These discoveries helped us uncover the other inauthentic accounts we disabled today.

The “Resisters” Page also created a Facebook Event for a protest on August 10 to 12 and enlisted support from real people. The Event – “No Unite the Right 2 – DC” – was scheduled to protest an August “Unite the Right” event in Washington. Inauthentic admins of the “Resisters” Page connected with admins from five legitimate Pages to co-host the event. These legitimate Pages unwittingly helped build interest in “No Unite the Right 2 – DC” and posted information about transportation, materials, and locations so people could get to the protests.

We disabled the event earlier today and have reached out to the admins of the five other Pages to update them on what happened. This afternoon, we’ll begin informing the approximately 2,600 users interested in the event, and the more than 600 users who said they’d attend, about what happened.

We don’t have all the facts, but we’ll work closely with others as we continue our investigation. We hope to get new information from law enforcement and other companies so we can better understand what happened — and we’ll share any additional findings with law enforcement and Congress. However, we may never be able to identify the source with the same level of confidence we had in naming the IRA last year. See Alex Stamos’ post below on why attribution can be really hard.

We’re seeing real benefits from working with outside experts. Partners like the Atlantic Council have provided invaluable help in identifying bad actors and analyzing their behavior across the internet. Based on leads from the recent US Department of Justice indictment, the Atlantic Council identified a Facebook group with roughly 4,000 members. It was created by Russian government actors but had been dormant since we disabled the group’s admins last year. Groups typically persist on Facebook even when their admins are disabled, but we chose to remove this group to protect the privacy of its members in advance of a report that the Atlantic Council plans to publish as soon as it concludes its analysis. It will follow this report in the coming weeks with an analysis of the Pages, accounts and profiles we disabled today.


July 31, 2018

How Much Can Companies Know About Who’s Behind Cyber Threats?

By Alex Stamos, Chief Security Officer

Deciding when and how to publicly link suspicious activity to a specific organization, government, or individual is a challenge that governments and many companies face. Last year, we said the Russia-based Internet Research Agency (IRA) was behind much of the abuse we found around the 2016 election. But today we’re shutting down 32 Pages and accounts engaged in coordinated inauthentic behavior without saying that a specific group or country is responsible.

The process of attributing observed activity to particular threat actors has been much debated by academics and within the intelligence community. All modern intelligence agencies use their own internal guidelines to help them consistently communicate their findings to policymakers and the public. Companies, by comparison, operate with relatively limited information from outside sources — though as we get more involved in detecting and investigating this kind of misuse, we also need clear and consistent ways to confront and communicate these issues head on.

Determining Who Is Behind an Action

The first challenge is figuring out the type of entity to which we are attributing responsibility. This is harder than it might sound. It is standard for both traditional security attacks and information operations to be conducted using commercial infrastructure or computers belonging to innocent people that have been compromised. As a result, simple techniques like blaming the owner of an IP address that was used to register a malicious account usually aren’t sufficient to accurately determine who’s responsible.
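To see why, here is a minimal sketch of that naive technique using only Python standard-library calls (the function name is ours, and 203.0.113.7 is a reserved documentation address used as a placeholder). The lookups identify whoever operates the infrastructure, typically an ISP, VPN exit, or hosting provider, not the person who used it:

```python
import socket

def naive_ip_attribution(ip: str) -> dict:
    """Illustrative only: map an IP seen at account registration to a
    network owner via reverse DNS and WHOIS (RFC 3912). The answer names
    the infrastructure operator, often a VPN exit, cloud host, or a
    compromised machine, rather than the actor behind the account."""
    result = {"ip": ip, "rdns": None, "whois": None}

    # Reverse DNS usually yields a generic hostname for the provider.
    try:
        result["rdns"] = socket.gethostbyaddr(ip)[0]
    except OSError:
        pass

    # Plain-text WHOIS query over TCP port 43 to a regional registry;
    # the record describes the registrant of the address block (an ISP
    # or datacenter), never the end user behind it.
    with socket.create_connection(("whois.arin.net", 43), timeout=10) as conn:
        conn.sendall(f"{ip}\r\n".encode())
        chunks = []
        while data := conn.recv(4096):
            chunks.append(data)
    result["whois"] = b"".join(chunks).decode(errors="replace")
    return result

# A VPN exit or a hacked home router will "attribute" to its operator,
# not to whoever routed traffic through it.
print(naive_ip_attribution("203.0.113.7"))
```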

Instead, we try to:

  • Link suspicious activity to the individual or group with primary operational responsibility for the malicious action. We can then potentially associate multiple campaigns to one set of actors, study how they abuse our systems, and take appropriate countermeasures.
  • Tie a specific actor to a real-world sponsor. This could include a political organization, a nation-state, or a non-political entity.

The relationship between malicious actors and real-world sponsors can be difficult to determine in practice, especially for activity sponsored by nation-states. In his seminal paper on the topic, Jason Healey described a spectrum to measure the degree of state responsibility for cyber attacks. This included 10 discrete steps ranging from “state-prohibited,” where a state actively stops attacks originating from their territory, to “state-integrated,” where the attackers serve as fully integrated resources of the national government.
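For reference, that spectrum can be encoded as an ordered enum. Note the hedge: the post names only four of these categories (“state-prohibited,” “state-encouraged,” “state-ordered,” and “state-integrated”); the remaining labels are paraphrased from Healey’s paper.

```python
from enum import IntEnum

class StateResponsibility(IntEnum):
    """Healey's ten-step spectrum of state responsibility for cyber
    attacks, ordered from least to most state involvement. Only four of
    these labels appear in the post above; the rest are paraphrased
    from Healey's 2012 paper 'Beyond Attribution'."""
    STATE_PROHIBITED = 1                 # state actively stops attacks from its territory
    STATE_PROHIBITED_BUT_INADEQUATE = 2  # state is willing to stop attacks but unable to
    STATE_IGNORED = 3                    # state knows about the attacks but looks away
    STATE_ENCOURAGED = 4                 # third parties attack; the state encourages them
    STATE_SHAPED = 5                     # state provides some support, e.g. targeting
    STATE_COORDINATED = 6                # state coordinates third-party attackers
    STATE_ORDERED = 7                    # state directs third parties to attack
    STATE_ROGUE_CONDUCTED = 8            # out-of-control elements of the state attack
    STATE_EXECUTED = 9                   # the state conducts the attack itself
    STATE_INTEGRATED = 10                # attackers are fully integrated state resources

# The post places the GRU hacking operation at STATE_INTEGRATED and the
# IRA's influence operation between STATE_ENCOURAGED and STATE_ORDERED.
assert StateResponsibility.STATE_ENCOURAGED < StateResponsibility.STATE_ORDERED
```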

This framework is helpful when looking at the two major organized attempts to interfere in the 2016 US election on Facebook that we have found to date. One set of actors used hacking techniques to steal information from email accounts — and then contacted journalists using social media to encourage them to publish stories about the stolen data. Based on our investigation and information provided by the US government, we concluded that this work was the responsibility of groups tied to the GRU, or Russian military intelligence. The recent Special Counsel indictment of GRU officers supports our assessment in this case, and we would consider these actions to be “state-integrated” on Healey’s spectrum.

The other major organized effort did not include traditional cyber attacks but was instead designed to sow division using social media. Based on our own investigations, we assessed with high confidence that this group was part of the IRA. There has been a public debate about the relationship between the IRA and the Russian government — though most seem to conclude this activity is between “state-encouraged” and “state-ordered” using Healey’s definitions.

Four Methods of Attribution

Academics have written about a variety of methods for attributing activity to cyber actors, but for our purposes we simplify these methods into an attribution model with four general categories (a sketch of how such evidence might be weighed follows the list). And while all of these are appropriate for government organizations, we do not believe some of them should be used by companies:

  • Political Motivations: In this model, inferred political motivations are measured against the known political goals of a nation-state. Providing public attribution based on political evidence is especially challenging for companies because we don’t have the information needed to make this kind of evaluation. For example, we lack the analytical capabilities, signals intelligence, and human sources available to the intelligence community. As a result, we don’t believe it is appropriate for Facebook to give public comment on the political motivations of nation-states.
  • Coordination: Sometimes we will observe signs of coordination between threat actors even when the evidence indicates that they are operating separate technical infrastructure. We have to be careful, though, because coincidences can happen. Collaboration that requires sharing of secrets, such as the possession of stolen data before it has been publicly disclosed, should be treated as much stronger evidence than open interactions in public forums.
  • Tools, Techniques and Procedures (TTPs): By looking at how a threat group performs their actions to achieve a goal — including reconnaissance, planning, exploitation, command and control, and exfiltration or distribution of information — it is often possible to infer a linkage between a specific incident and a known threat actor. We believe there is value in providing our assessment of how TTPs compare with previous events, but we don’t plan to rely solely upon TTPs to provide any direct attribution.
  • Technical Forensics: By studying the specific indicators of compromise (IOCs) left behind in an incident, it’s sometimes possible to trace activity back to a known or new organized actor. Sometimes these IOCs point to a specific group using shared software or infrastructure, or to a specific geographic location. In situations where we have high confidence in our technical forensics, we provide our best attribution publicly and report the specific information to the appropriate government authorities. This is especially true when these forensics are compatible with independently gathered information from one of our private or public partners.
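To make the model concrete, here is an illustrative sketch of how the three evidence categories a company can use might be weighed. The weights, thresholds, and names are invented for this sketch, not Facebook’s actual process, but the rules encode two points from the list above: shared-secret coordination outweighs public interaction, and TTP similarity alone never yields public attribution.

```python
from dataclasses import dataclass

# Invented weights: shared secrets (e.g. possession of stolen data
# before it was public) count far more than open interaction in public
# forums, per the Coordination bullet above. Political motivation is
# deliberately absent, since the post rules it out for companies.
WEIGHTS = {
    "coordination_secret": 3.0,
    "coordination_public": 0.5,
    "ttp": 1.0,
    "forensic": 2.5,
}

@dataclass
class Evidence:
    kind: str    # one of the WEIGHTS keys
    detail: str  # human-readable description of the finding

def assess(findings: list[Evidence]) -> str:
    """Toy decision rule, not Facebook's actual process."""
    score = sum(WEIGHTS[f.kind] for f in findings)
    kinds = {f.kind for f in findings}
    # TTP matches and public interactions alone are never sufficient:
    # known techniques can be copied, and coincidences happen.
    if kinds <= {"ttp", "coordination_public"}:
        return "no public attribution"
    if "forensic" in kinds and score >= 5.0:
        return "publish attribution and report to authorities"
    return "report findings privately; keep investigating"

# The present case, as described in this post: a known IRA account was
# briefly a Page admin (public interaction) and some TTPs match the IRA.
case = [
    Evidence("coordination_public", "known IRA account briefly co-administered a Page"),
    Evidence("ttp", "divisive Pages, VPN use, third-party ad buys"),
]
print(assess(case))  # -> no public attribution
```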

Applying the Framework to Our New Discovery

Here is how we use this framework to discuss attribution of the accounts and Pages we removed today:

  • As mentioned, we will not provide an assessment of the political motivations of the group behind this activity.
  • We have found evidence of connections between these accounts and previously identified IRA accounts. For example, in one instance a known IRA account was an administrator on a Facebook Page controlled by this group. These are important details, but on their own insufficient to support a firm determination, as we have also seen examples of authentic political groups interacting with IRA content in the past.
  • Some of the tools, techniques and procedures of this actor are consistent with those we saw from the IRA in 2016 and 2017. But we don’t believe this evidence is strong enough to provide public attribution to the IRA. The TTPs of the IRA have been widely discussed and disseminated, including by Facebook, and it’s possible that a separate actor could be copying their techniques.
  • Our technical forensics are insufficient to provide high confidence attribution at this time. We have proactively reported our technical findings to US law enforcement because they have much more information than we do, and may in time be in a position to provide public attribution.

Given all this, we are not going to attribute this activity to any one group right now. This set of actors has better operational security and does more to conceal their identities than the IRA did around the 2016 election, which is to be expected. We were able to tie previous abuse to the IRA partly because of several unique aspects of their behavior that allowed us to connect a large number of seemingly unrelated accounts. After we named the IRA, we expected the organization to evolve. The set of actors we see now might be the IRA with improved capabilities, or it could be a separate group. This is one of the fundamental limitations of attribution: offensive organizations improve their techniques once they have been uncovered, and it is wishful thinking to believe that we will always be able to identify persistent actors with high confidence.

The lack of firm attribution in this case or others does not suggest a lack of action. We have invested heavily in people and technology to detect inauthentic attempts to influence political discourse, and enforcing our policies doesn’t require us to confidently attribute the identity of those who violate them or their potential links to foreign actors. We recognize the importance of sharing our best assessment of attribution with the public, and despite the challenges we intend to continue our work to find and stop this behavior, and to publish our results responsibly.


