August 2022
By Nicole Müller and Elsine Van Os, Contributing Writers
Companies often focus on perimeter security controls to keep external attackers out. Meanwhile, malicious actors have adapted their tactics and are now targeting the weakest link in an organization: its employees.
According to the Verizon 2021 Data Breach Investigations Report, 85% of data breaches involve a human element, with social engineering among the most prevalent attack techniques. Social engineering refers to attackers exploiting a victim’s trust to manipulate them into disclosing confidential information. And there is no better way to connect with a person than over social media, right? But considering the consequences an employee might face for disclosing confidential data, the question arises: why would an employee fall victim to a social engineering attack, and to what extent does exposure on social media increase the risk of victimization?
Let’s start at the beginning. First, it helps to look at how intelligence officers have been recruiting spies for decades. The so-called agent recruitment cycle describes how an intelligence officer recruits a person for spying purposes in four steps: spotting a target, assessing the target, developing a relationship and, finally, recruiting the person.
Understanding these steps is important, as social engineering follows the exact same process. Insight into how people within an organization can be recruited and transformed into threats will help security leaders create insider threat programs to combat the practice.
In order to define and identify a target, social engineers first conduct a need and strategy analysis. The goal is to clarify what information is needed and how that information can be obtained. This is where social engineers look at online user profiles and network connections. Does this person work in a company that has the information that I need? What job positions within the company have access to the desired information? Who has any (online) connections with employees that might have access to this information? Finding the right person is crucial for the next phase: assessing the target.
In the assessing phase, bad actors collect in-depth information on the target. This is where LinkedIn, Facebook, Twitter, etc. become a goldmine for social engineers. Why? Because many people love to share details about their lives and work — sometimes too much.
Perpetrators gather information on their target by browsing through various social media accounts, specifically looking for signs that could make a person vulnerable to social engineering victimization. Attackers can easily exploit this information and reach out to them to develop a relationship.
But what makes employees vulnerable to such attacks? According to research done for the Central Intelligence Agency by Eric D. Shaw and Laura Sellers, the interaction of personal predispositions and stressors can lead a person to commit a damaging act. Personal predispositions can include social and personal frustrations, a sense of entitlement or ethical flexibility. These predispositions alone do not lead to insider incidents; they have to be activated by stressors, such as financial hardship, the death of a loved one or a toxic work culture, that make the person “go down the critical pathway.” The bottom line: there is no such thing as an inherently bad apple, but a good apple can go bad when placed in a moldy basket.
Finding these little signs on social media is what makes social engineering attacks successful. Users not only share their current and previous job positions, but also beliefs and interests, current projects and sometimes even their attitudes toward their employer.
For example, following animal protection activism pages while working at a pharmaceutical company might indicate that you have, or could develop, ethical concerns about how some drugs are tested. Openly signaling that you are “open to work” while still employed might indicate that you are unhappy with your current employer. But this does not mean that everyone who shows concerning behavior is automatically willing to harm their employer.
The key to saying yes to a crime, and consequently the factor that social engineers can manipulate the most, is how the insider act is rationalized: in other words, how social engineers convince their target that they are doing nothing wrong when performing a harmful act.
Disclosing sensitive information to a social engineer is risky: the employee can get caught, lose their job and even go to jail, or they may stay undetected. Social engineering can distort a victim’s perception of that risk and their awareness of the actual consequences by manipulating how the malicious act is framed.
Every decision carries benefits and costs (or risks). Bad actors have learned that positively framed situations work better than negatively framed ones at getting victims to cooperate.
Because committing a crime causes discomfort, the individual has to rationalize their actions to quiet their inner voice of morality. Take Kevin Mallory, a former American intelligence officer recruited by Chinese intelligence officers to spy for them. Mallory was struggling financially after the 2008 real estate crash when a Chinese agent offered him money in return for intelligence. Initially, his handlers claimed to want only information on the Trump administration’s policies, but Mallory eventually disclosed national defense information.
Even after American intelligence agencies became suspicious of Mallory, he kept communicating with Chinese agents. Kevin McLane, an old friend of Mallory’s, even said that “he thought he was doing something good for the United States.” Mallory changed the whole narrative of his actions. Instead of leaking national secrets and damaging his own country, he truly believed that his acts were rooted in goodwill and would even benefit the U.S.
The ultimate goal of social engineering, just as it is for intelligence officers, is to establish a relationship of trust and reframe malicious behavior.
In July 2020, a Tesla employee was approached by an old acquaintance named Egor Kriuchkov. Kriuchkov tried to persuade the employee to install malware on the company’s network in exchange for $1 million. When the employee voiced concerns about getting caught, Kriuchkov assured him that the risk would be minimal and offered to frame someone else within the company; the employee could even choose whom to frame. Unfortunately for Kriuchkov, the employee reported the incident to Tesla’s insider threat team. Eventually, the FBI got involved and arrested Kriuchkov.
Even though the employee knew the perpetrator and was assured that the malware attack could not be traced back to him, the attack failed. The reason for that can be summarized in two words: awareness training.
How can employees be protected from social engineering? Organizations should not monitor what their employees do on social media in their free time. In fact, laws such as the General Data Protection Regulation (GDPR) place strict limits on that kind of monitoring.
However, organizations can emphasize the risks of social engineering by creating a strong security and integrity culture. The recent social engineering attempt on Tesla shows why. As former Tesla insider risk manager Charles Finfrock explained in an interview with Signpost Six, training and awareness were essential in this case. “Insider threat is a human problem to be solved by humans and supported by technology,” he said. Organizations need to use cases like these in their training so employees can detect social engineering attempts and have a clear picture of reporting processes. The Tesla employee showed a high level of awareness and knew exactly who to approach to report the incident.
Organizations and the security leaders tasked with managing insider threat programs must carefully consider and understand how insider threats arise through social engineering, and they should use security awareness culture and training as an opportunity to bring employees into this conversation.