Virtually anyone who works in industry or government can tell you what the reportable warning signs of insider threat are – sudden behavioral changes, unexplained affluence, odd working hours, etc. Yet every time an espionage incident, intellectual property theft, or mass shooting takes place, it seems as though indicators are either not reported, or somehow fail to reach those who need to know. So what exactly is going on here?
A variety of mechanisms are responsible for failures of insider threat detection: reporting mechanisms, inter-organizational communications, and the existence and enforcement of policy are just a few laid out in CERT’s Common Sense Guide to Mitigating Insider Threats (2012). While any valid insider threat program should certainly address the nineteen components presented in the guide, it must also examine how detection is communicated to employees.
In a discussion pertaining to evolutionary psychology and business ethics, Cosmides and Tooby (2004) delve into a crucial element of the human mind that gets overlooked when discussing threat detection and reporting: humans are remarkably poor at detecting violations of procedural rules that are not precautionary or social in nature. The hunter-gatherer mind that humans have developed is equipped with specific machinery to detect social contract violations – instances wherein one receives the benefit (Q) without paying the price (not P), or vice versa – but the majority of humans fail at detecting violations of non-social “if-then” rules.
The reason for this selective reasoning specialization is simple: our minds are the product of millions of years of natural selection. On an evolutionary timescale, we have only recently emerged from hunter-gatherer societies, and our minds largely remain within that realm. Our mental machinery has been tailored for a world starkly different from the one we live in today. In the past, societies were smaller, and people often lived with extended family and spent most of their time outdoors. The number of people an individual might have encountered throughout his or her lifetime was far smaller than it is for an individual in 2014. In a world where people spent most of their days simply trying to stay alive, being able to detect social contract cheaters, or free-riders, was an essential skill, because every individual had an incentive to reap benefits without expending personal resources.
Within the context of natural selection, the fact that humans are adept at detecting violations of precautionary rules (e.g. if you’re going to take risk A, then you must take precaution B) makes perfect sense. Possessing this skill provides palpable utility to an individual: survival. However, the procedural rules of the workplace are another matter. They are neither social nor precautionary, and they generally do not identify a benefit or risk to the individual. For example, most insider threat programs can be boiled down to “if you see something, say something.” While straightforward, the rule simply does not trip the same mental circuits that, say, walking through a pit of snakes might. If there is no obvious risk to the individual, and no potential personal benefit, humans are less engaged.
What threats and benefits to an organization mean to an individual remains largely ambiguous. The human mind developed in an environment in which social exchanges were face to face, in real time, and the results were often observable. The indirect relationship between benefit to the individual and benefit to the group was readily apparent (e.g. if I spend time crafting tools so the hunters have more time to hunt, I will eat better). Reporting a coworker who fails to lock his or her computer does not activate the same mechanisms: the value to the individual through the group is not as apparent, and both the threat and the benefit are obscured. Even within organizations that are serious about enforcing security measures through punitive means (counseling, performance reviews), individuals generally do not lose their jobs. That said, a culture of enforcement and repercussions can be advantageous.
To put it in more everyday terms, this is one of the reasons why it is so difficult to move the public away from traditional ways of doing things. For example, it is common knowledge that studies show a direct correlation between smoking tobacco and cancer; it is usually just a matter of time. Yet in daily life, the effects of smoking are rarely observed firsthand. Even though everyone knew smoking led to cancer, it took serious public campaigns and incentives over several decades to curb smoking – people could rationally understand that smoking might kill them, but the lengthy disease process simply was not observable enough to command the public’s attention.
If there is no negative repercussion directly associated with an action, our minds fail to register the association. This is a cornerstone of modern parenting: to curb dangerous behaviors, punishment must be swift, consistent, and enforceable; otherwise the lesson is lost. The concept is analogous to ocean thermal lag – when actions and reactions are separated by timeframes that exceed the normal human attention span, we are less apt to acknowledge (and accept) the connection.
So how can an organization take steps to effectively address insider threat? Anchor the threat to observable impact on the employee. Simply providing training on the mechanics of “if you see something, say something” does not go far enough; insider threat detection needs to be tied to livelihood. Consider the impact of the following two statements:
- All personnel must badge into facility X; never allow a person to “tailgate” into the building.
- Reviews of security incidents over the past two years have found tailgating to be the most common method for unauthorized personnel to gain access to intellectual property at facility X. As a result, several companies are now selling our product at a lower price. We will likely have to find ways to streamline budgets, including the elimination of bonuses and pay increases, and possibly layoffs.
The first statement is valid, but it fails to emphasize the bottom-line impact. Even the second statement is insufficient, because the damage has already occurred; employees may therefore regard the threat as past rather than present.
Another aspect to contemplate is the likelihood of a perceptual gap in security stance between management and the average employee. There are very good reasons for employees to nod along when management discusses security edicts, but the underlying truth can be acutely different. Management may be oblivious simply because no one wants to tell the emperor he has no clothes.
To address this issue, organizations might consider a neutral third-party assessment that compares attitudes and perceptions of security from the viewpoints of both employees and management on a scheduled basis. Industrial psychologists could also assist organizations by framing security training in a manner that elicits not only compliance but also active participation from employees.
The combination of impartial active listening, conveyance of threats to the individual employee, and the implementation of swift, observable repercussions can create a proactive culture of security awareness, but the organization must be willing to invest. Please contact us below if you would like to know more about this or our ESA methodology to help secure your enterprise.
About the Author
Gabriel Whalen has a Master’s in Forensic Psychology, a decade of experience in the U.S. National Security community, and a background in acting, biology, and ethical hacking. Gabriel represents TSC Advantage’s diversified talent portfolio as a social engineer, behavioral analyst, and insider threat expert.
Cosmides, L. & Tooby, J. (2004). Knowing thyself: The evolutionary psychology of moral reasoning and moral sentiments. In R. E. Freeman and P. Werhane (Eds.), Business, Science, and Ethics. The Ruffin Series No. 4. (pp. 91-127). Charlottesville, VA: Society for Business Ethics.
Silowash, G., Cappelli, D., Moore, A., Trzeciak, R., Shimeall, T., & Flynn, L. (2012). Common Sense Guide to Mitigating Insider Threats, 4th Edition (CMU/SEI-2012-TR-012). Retrieved April 02, 2014, from the Software Engineering Institute, Carnegie Mellon University website: http://resources.sei.cmu.edu/library/asset-view.cfm?AssetID=34017