In safety-critical domains, effective human-AI teaming is essential for reliable operations. Aviation is a prime example of such a domain: mature human-machine interaction principles and extensive operational data enable systematic study of collaboration between humans and AI. Yet as AI systems increasingly resemble human teammates in behavior, the long-term effects of such collaboration remain insufficiently understood. The HARMONY project addresses this gap by developing a system dynamics simulation model that captures key aspects of human-AI interaction (e.g., trust, workload, explainability) and investigates how these evolve over time. The project combines causal loop diagrams, simulation modeling, and empirical validation, with aviation serving as both application domain and validation environment through flight-simulator studies with trained pilots. This integrated strategy yields predictive insights into the dynamics of human-AI collaboration under real-world constraints.
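To illustrate the kind of modeling involved, the sketch below shows a minimal system dynamics simulation in which trust and workload evolve over time under assumed causal links. All state variables, coupling coefficients, and equations here are hypothetical placeholders chosen for illustration; they are not the HARMONY model.

```python
def simulate(steps=200, dt=0.1):
    """Toy system dynamics model of human-AI teaming.

    Illustrative only: the causal links and coefficients below are
    assumptions, not the HARMONY project's actual model.
    """
    trust, workload = 0.5, 0.5   # normalized states in [0, 1]
    explainability = 0.7         # assumed fixed system property

    history = []
    for _ in range(steps):
        # Assumed causal loop: explainability builds trust, high workload
        # erodes it; higher trust lowers workload (more delegation to the AI).
        d_trust = 0.3 * explainability * (1 - trust) - 0.2 * workload * trust
        d_workload = 0.25 * (1 - trust) - 0.3 * workload

        # Euler integration step, clipped to the normalized range.
        trust = min(1.0, max(0.0, trust + dt * d_trust))
        workload = min(1.0, max(0.0, workload + dt * d_workload))
        history.append((trust, workload))
    return history

final_trust, final_workload = simulate()[-1]
print(f"trust ~ {final_trust:.2f}, workload ~ {final_workload:.2f}")
```

In a full system dynamics study, such coupled difference equations would be derived from the causal loop diagrams and then calibrated against empirical data from flight-simulation experiments.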
As part of the IRIS-HISIT (Human-Intelligent Systems Interaction and Teaming) program, the HARMONY project addresses a key challenge in AI integration: how to design human-AI teams that are effective, trustworthy, and sustainable in safety-critical domains such as aviation. While AI systems are becoming increasingly capable, existing predictive models fail to capture how system behavior dynamically shapes human cognitive and emotional states. In particular, current approaches rarely model how core human-AI teaming factors in aviation, such as explainability, trust, workload, and situational awareness, evolve over time, or how these dynamics affect overall system performance and safety.
Taken together, the evidence from this project will inform role allocation, explainability design, and training needs for human-AI teaming in safety-critical domains, and support a broader understanding of AI integration in future systems.