Cybervictims, Defenders and Cybercriminals: How to Tell Them Apart

By C. Warren Axelrod, Ph.D., CISM, CISSP

The Nexus  |  Monday, 13 January 2020

In this Orwellian era, when opponents are enemies and enemies are co-conspirators, where news is falsified and trust is endangered, it has become well-nigh impossible to apply clear definitions to attackers, victims and defenders in the context of cyberspace. The rapid expansion of social networks, without the forethought necessary to protect individuals’ privacy and to distinguish truth from falsehood, has resulted in victims believing attackers’ false onslaughts to be real and cooperating with them, either intentionally or unwittingly, thereby blurring the traditional distinction between the good guys/white hats (victims, defenders) and the bad guys/black hats (cybercriminals, hackers).

This situation calls for a new and better understanding of the motives of perpetrators, the consequences of their actions and the means of counteracting their influence. It also requires knowing about the motivations of victims and defenders and how such impulses impact the protective measures that they take.

Cybersecurity, audit and risk professionals must be more aware of the factors that induce individuals, groups, organizations, government agencies and nations to perpetrate cybercrimes or defend against them. When those responsible for cybersecurity understand why attackers act as they do, they are better able to construct and deploy effective measures that will deter attackers and protect assets from nefarious activities.

Understanding Victims and Defenders

Trusted insiders are regularly given access to highly sensitive information without requisite oversight. And herein lies the problem that was so well illustrated by the Edward Snowden case. Snowden, a contractor with the US National Security Agency (NSA), was able to persuade a colleague to lend him his credentials for accessing highly sensitive classified information, which Snowden then leaked to the world.

The question that must be faced is how to grant individuals, groups and organizations the privileged access to sensitive data that they require while addressing the reality that some of them will abuse the trust associated with such access, deliberately or accidentally.

The notion that victims and defenders could be in league with attackers is occasionally mentioned, but seldom enters into consideration of how to protect systems and data from nefarious attacks.

One approach is to understand how motives, motivation, intent, risk and consequences affect the decision-making of individuals and groups who balance these factors against the value and benefits of committing crimes. Interactions among attackers, victims, defenders and other players (i.e., observers and influencers) have evolved from relatively straightforward conflict models (red team vs. blue team) to much more complex hybrid cooperation/conflict models. As a result, the methods of the past no longer fully address the issues.

Attackers, Defenders and Victims

The determination of who is for an organization and who is against it is highly subjective. Victims who benefit from attacks have different perspectives on the consequences of attacks in the short and long terms, as compared to those who see themselves as genuine victims.

Victims and defenders need not be, and frequently are not, the same persons or groups, although they might overlap somewhat in organizations and nations. Basically, victims are those who are the target of successful attacks. Individuals may have some responsibility for defending themselves against cyberattacks, but if those individuals are employed by an organization, they will be defended by dedicated internal groups or by outside third parties. Stricter criteria are usually applied to midsize to large organizations, which are expected to understand cybersecurity risk and have appropriate staff dedicated to mitigating it.



However, when cooperation or collusion between attackers and victims is in play, the relationships among the players must first be understood; then law enforcement and the legal system must distinguish between those who were victimized and those who participated in criminal activities, and prosecute perpetrators as appropriate. It is less likely that attackers and defenders will work toward a common goal, although it can happen when certain objectives align. For example, an enterprise might engage hackers or former black hats to fight attacks, since the latter likely better understand the thinking of other attackers and how to protect against them. However, such collaboration with former black hats is not generally advisable, since one can never be quite sure whether such persons might slide back into their former habits.

Trust and Verification

How can trust be dealt with in cyberspace? The common mantra of information security professionals is “trust, but verify.” However, a more appropriate approach might be “do not trust, and verify.” That is to say, the default position should be to not trust a source until it can be demonstrated that trust is warranted.
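The “do not trust, and verify” posture can be expressed as a default-deny rule: nothing is accepted unless it has been explicitly verified. As a minimal, purely illustrative sketch (the allowlist entries are hypothetical):

```python
# Default-deny sketch of "do not trust, and verify":
# a source is rejected unless it appears on an explicitly verified allowlist.
# The domain names below are hypothetical placeholders.
TRUSTED_SOURCES = {"updates.example.com", "partner.example.org"}

def accept(source: str) -> bool:
    """Deny by default; allow only sources whose trust has been demonstrated."""
    return source in TRUSTED_SOURCES

assert accept("updates.example.com")      # verified source is accepted
assert not accept("unknown.example.net")  # everything else is rejected
```

The design point is that the burden of proof sits with the source: absence from the verified set means rejection, rather than the “trust, but verify” default of acceptance.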



Verification of authoritative sources is no easy matter. Attackers can be extremely creative in their efforts to disguise sources and anonymize information. It has become common practice to spoof, hide or fake identities on the Internet, with someone masquerading as a real or imaginary person. The recipient of the information is often unable or unwilling to scrutinize incoming information to ensure that the professed source is genuine. Even when the source appears to be genuine, attackers can engage in a man-in-the-middle (MitM) exploit wherein they intercept transmissions and insert false information while preserving the apparent original source. While this form of attack is used mainly for identity theft and fraud, there is no reason why a similar approach could not be used for untrue news items.
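One established countermeasure to such tampering is message authentication. The sketch below, a simplified stand-in for a full public-key infrastructure, uses a shared-secret HMAC: a man-in-the-middle who alters the message cannot forge a matching tag without the secret, so the recipient detects the tampering. The secret and messages are hypothetical.

```python
import hashlib
import hmac

# Hypothetical shared secret known only to the genuine sender and recipient.
SECRET = b"shared-secret-known-only-to-both-parties"

def sign(message: bytes) -> bytes:
    """Sender attaches an authentication tag to the message."""
    return hmac.new(SECRET, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    """Recipient recomputes the tag; a mismatch reveals tampering."""
    return hmac.compare_digest(sign(message), tag)

original = b"Quarterly results exceed forecast"
tag = sign(original)

# A man-in-the-middle intercepts the transmission and substitutes
# false information, but cannot produce a valid tag for it.
tampered = b"Quarterly results miss forecast"

assert verify(original, tag)        # genuine message passes verification
assert not verify(tampered, tag)    # altered message is detected
```

Note that this only protects message integrity and origin; it does not by itself establish that the professed source deserved trust in the first place, which is the harder problem the article describes.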

Erosion of Trustworthiness

There has recently been an uptick in focus on how to improve the trustworthiness of systems, networks and data, as the incidence of falsification has mushroomed.1 As usual, technologists are seeking technology solutions to the problem. But it is important to include human factors in the mix,2 since technology can only achieve so much. Indeed, there are those who question whether technologists’ biases are at all acceptable.

Perhaps the approach that would be most effective, yet most difficult to achieve, is to change the behavior of players—attackers, victims and defenders. The answer may be in the use of a combination of deterrence against attackers, avoidance by victims and preventive (rather than protective) methods by defenders.

Motives and Motivations

The underlying assumption is that perpetrators are evil and victims and defenders are good, but that is not always the case, especially when victims collaborate with attackers for personal gain rather than being forced to cooperate through physical threats and the like. For example, there are few, if any, who would risk the well-being of family members to prevent attackers from stealing funds from their employer.

Victims may be motivated to protect themselves to avoid the negative consequences of successful attacks. But in situations where potential victims think that their chances of being attacked are low or that losses from an attack will be minimal, they may choose not to bother with protecting themselves.

Defenders, namely those in the business of protecting victims, also have complex motivations. Not all their motivations are necessarily altruistic since, after all, they are in business to make money, although some offer open-source solutions at no charge. This raises the question of whether the ranks of defenders contain some who actually create threats that would not otherwise have existed, or who draw attention to threats that have not yet been discovered by the general population, so that they might enhance their reputations and increase their profits.

There is also a question of whether Big Tech can be trusted. After all, as stated by photo-forensics expert Hany Farid, “The entire business model of these trillion-dollar companies is attention engineering. It’s poison.”3 This raises the question of whether entities such as Facebook and Google are friends or foes, since their use of personal information for revenue generation might be construed as attacking their users’ privacy. While their public relations arms present images of these organizations as intent on preserving and protecting users’ privacy, their actions speak louder than words, and those actions often suggest the exploitation of users’ personal information to enhance their bottom lines. Perhaps such organizations are not enemies in the traditional sense, but they are clearly not defenders of users’ personal data or protectors of users from addiction to their services.

It is no longer obvious (if it ever was) who attackers, victims and defenders actually are and what makes them act in the ways that they do. It is safe to say that one cannot generalize about motives and motivations since the roles of the various players are not always apparent at any point in time. Furthermore, reasons for behavior might change drastically over time, as do technologies and methods of attack.

Editor’s Note

This article is excerpted from an article that appeared in the ISACA® Journal. Read C. Warren Axelrod’s full article, “When Victims and Defenders Behave Like Cybercriminals,” in volume 1, 2020, of the ISACA Journal.

C. Warren Axelrod, Ph.D., CISM, CISSP

Is the research director for financial services with the US Cyber Consequences Unit. Previously, he was the business information security officer and chief privacy officer for U.S. Trust. He was a cofounder and board member of the Financial Services Information Sharing and Analysis Center (FS-ISAC) and represented the banking sector’s cybersecurity interests in Washington, DC, during the Y2K date rollover. He testified before the US Congress on cybersecurity in 2001. Axelrod received ISACA’s Michael P. Cangemi Best Book/Article Award in 2009 for his ISACA Journal article “Accounting for Value and Uncertainty in Security Metrics.” He was honored in 2007 with the Information Security Executive Luminary Leadership Award and received a Computerworld Premier 100 award in 2003. His books include Engineering Safe and Secure Software Systems and Outsourcing Information Security, and he was the coordinating editor of Enterprise Information Security and Privacy. He has published more than 140 professional articles and chapters in books and has delivered more than 150 professional presentations. His current research activities include the behavioral aspects of cybersecurity risk management and the security and safety of cyberphysical systems, particularly as they relate to autonomous road vehicles.

Endnotes

1 Saydjari, O. S.; Engineering Trustworthy Systems: Get Cybersecurity Design Right the First Time, McGraw-Hill Education, USA, 2018
2 Bellovin, S. M.; P. G. Neumann; “The Big Picture: A Systems-Oriented View of Trustworthiness,” Communications of the ACM, vol. 61, no. 11, November 2018, p. 24–26
3 Rothman, J.; “Afterimage: Now That Everything Can Be Faked, How Will We Know What’s Real?” The New Yorker, 12 November 2018, p. 34–44