U.S. Naval Postgraduate School
Deception is common in many attacks on computer systems. It can also be used to defend computer systems. Since attackers place so much trust in computer systems to tell them the truth, it may be effective for those systems to lie or mislead when under attack. This can waste the attacker's resources while permitting time to organize a better defense, and it provides a second line of defense when access controls have been breached, a part of "defense in depth". But effective deception requires a good model of what the attacker is thinking. We provide such a probabilistic model here, subcategorized by the attacker's belief in each of a set of "generic excuses" for their inability to accomplish their goals. An important one of those excuses is deception, an excuse we want to prevent the attacker from believing. We show how the model can be updated by evidence presented to the attacker and by feedback from the attacker's own behavior. We present results with human subjects supporting our theory. We also show how probabilistic analysis can provide a way to choose the most effective time and manner to deceive the attacker, by solving an optimization problem and applying some pruning criteria. We use examples from an attack plan for installing a rootkit on a computer system.
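The abstract describes updating the attacker's beliefs over a set of "generic excuses" as evidence is presented. A minimal sketch of that idea using Bayes' rule is shown below; the excuse names, prior probabilities, and likelihood values are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch: the attacker holds beliefs over "generic excuses"
# for a failed action, and each piece of presented evidence updates those
# beliefs by Bayes' rule. All numbers here are invented for illustration.

def bayes_update(prior, likelihood):
    """Return posterior P(excuse | evidence) given a prior over excuses
    and the likelihood P(evidence | excuse) for each excuse."""
    unnorm = {e: prior[e] * likelihood[e] for e in prior}
    total = sum(unnorm.values())
    return {e: p / total for e, p in unnorm.items()}

# Illustrative prior over excuses for a failed rootkit installation step.
prior = {"deception": 0.1, "misconfiguration": 0.4,
         "access_controls": 0.3, "bad_luck": 0.2}

# Likelihood of the observed evidence (e.g. a plausible-looking error
# message) under each excuse, chosen so the evidence points away from
# deception -- the excuse the defender wants the attacker not to believe.
likelihood = {"deception": 0.1, "misconfiguration": 0.6,
              "access_controls": 0.5, "bad_luck": 0.3}

posterior = bayes_update(prior, likelihood)
```

Repeating such updates over a sequence of observations gives the defender a running estimate of how suspicious the attacker has become, which is the quantity the paper's optimization over deception timing would operate on.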
Keywords: information security, deception, psychology, modeling and simulation, Bayesian reasoning, counterplanning