Originally published in the Annual Review of Cybertherapy and Telemedicine
Authors: Kamilya Salibayeva, Alexander Thorpe, Luke French, Zachary P. Fry, Hajime Inoue, Robert Forties, Emma Hewlett, Scott Brown and Ami Eidels
Abstract: High-profile cyber-attacks continue to highlight the growing sophistication of malicious hacking. While much attention has been paid to human error on the defensive side, attackers are also subject to cognitive limitations. Recognising this opens a strategic opportunity: rather than viewing attackers solely as technical threats, we can exploit predictable human biases to reduce their effectiveness. One such bias is the representativeness heuristic, whereby individuals judge the likelihood of events by how well they match a mental prototype, often neglecting relevant base rates. This can lead attackers to make poor judgments under uncertainty. As part of a project funded by the Intelligence Advanced Research Projects Activity (IARPA), our multinational research team has developed and tested a range of interventions designed to elicit cognitive biases in skilled participants during complex cyber tasks. Early findings suggest that certain patterns of behaviour, particularly under conditions of uncertainty, may predict susceptibility to bias and influence performance in realistic attack scenarios. These results indicate that even highly capable adversaries are vulnerable to subtle psychological manipulations. By incorporating these insights into cybersecurity systems, for instance through ambiguous patterns, misleading cues, or strategically framed information, we can reduce attacker efficiency and disrupt decision-making. This psychological approach, when integrated with existing technical safeguards, offers a novel and scalable method for strengthening cyber-defence and protecting systems from emerging threats.
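To make the representativeness heuristic concrete, the minimal sketch below contrasts a Bayesian judgment with a base-rate-neglecting one in a defensive-deception setting: an attacker must decide whether a host that "looks like" a production server is a real target or a decoy. The scenario and all numbers (prevalence of real targets, reliability of the "looks real" cue) are illustrative assumptions for this sketch, not values from the study.

```python
# Minimal sketch of base-rate neglect, the error behind the
# representativeness heuristic (illustrative numbers only).

def bayes_posterior(prior, hit_rate, false_alarm_rate):
    """P(real target | host looks real), via Bayes' rule."""
    evidence = hit_rate * prior + false_alarm_rate * (1.0 - prior)
    return (hit_rate * prior) / evidence

# Assumed parameters (hypothetical):
prior = 0.10             # 10% of reachable hosts are real targets
hit_rate = 0.90          # P(looks real | real target)
false_alarm_rate = 0.50  # P(looks real | decoy) -- decoys mimic real hosts

normative = bayes_posterior(prior, hit_rate, false_alarm_rate)
heuristic = hit_rate  # representativeness: judge by resemblance alone,
                      # ignoring the 10% base rate of real targets

print(f"Bayesian estimate P(real | looks real): {normative:.2f}")  # ~0.17
print(f"Representativeness-based estimate:      {heuristic:.2f}")  # 0.90
```

The gap between the two estimates, roughly 0.17 versus 0.90, is the kind of predictable misjudgment that decoy-rich defences of the sort described above aim to exploit.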