As intelligent synthetic forces (ISFs) become more complex, developing and employing them becomes more costly, in part because their behavior is inscrutable to non-developer users. This paper summarizes research aimed at reducing the cost of developing ISFs by addressing this lack of transparency. Our goal is to produce systems that exhibit transparency of behavior, allowing users to interrogate an ISF about what it is doing, and why it is performing that behavior rather than another. Our approach is to develop a generic framework for automatically generating multimodal explanations of ISF behavior. This framework (a) makes few assumptions about the underlying behavior architecture; (b) takes the form of an external observer of the behavior; (c) generates explanations based on a reconstruction of that behavior; and (d) incorporates a number of knowledge sources to elaborate those explanations.
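Points (a) through (d) can be illustrated with a minimal sketch of an observer-based explainer. This is not the paper's implementation; every class, method, and string below is an illustrative assumption. The key idea it demonstrates is that the explainer sits outside the agent, records only observable actions and context, and answers "what" and "why" queries by reconstructing behavior from that trace plus separate domain knowledge.

```python
# Hypothetical sketch of an external-observer explanation framework,
# in the spirit of points (a)-(d): the observer makes no assumptions
# about the agent's internal architecture; it records observable
# actions, then reconstructs and explains behavior on demand using
# domain knowledge. All names here are illustrative, not from the paper.
from dataclasses import dataclass, field

@dataclass
class Observation:
    tick: int        # simulation time step at which the action was seen
    action: str      # the agent's observable action
    context: dict    # world state visible to the observer at this tick

@dataclass
class ExplanationObserver:
    """External observer: sees only the agent's observable behavior."""
    trace: list = field(default_factory=list)
    # Domain knowledge used to elaborate explanations (action -> rationale).
    knowledge: dict = field(default_factory=dict)

    def record(self, obs: Observation) -> None:
        self.trace.append(obs)

    def what(self) -> str:
        """Answer: what is the agent doing right now?"""
        if not self.trace:
            return "No behavior observed yet."
        return f"The agent is performing '{self.trace[-1].action}'."

    def why(self) -> str:
        """Answer: why this behavior? Reconstructed from the trace
        plus domain knowledge, since the observer cannot inspect the
        agent's internals."""
        if not self.trace:
            return "No behavior observed yet."
        last = self.trace[-1]
        reason = self.knowledge.get(last.action, "no domain knowledge available")
        return f"'{last.action}' was likely chosen because {reason}."

observer = ExplanationObserver(
    knowledge={"take_cover": "the agent detected incoming fire"})
observer.record(Observation(tick=1, action="advance",
                            context={"threat": "none"}))
observer.record(Observation(tick=2, action="take_cover",
                            context={"threat": "incoming fire"}))
print(observer.what())
print(observer.why())
```

A real system would replace the lookup table with a genuine reconstruction of the agent's decision process and would render explanations multimodally, but the separation shown here (observed trace vs. explanatory knowledge) is the structural point.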


Taylor, G., Knudsen, K., and Holt, L. (2006). "Explaining Agent Behavior." In Proceedings of the 14th Conference on Behavior Representation in Modeling and Simulation (BRIMS). SISO, May 15-18, 2006.
