Quantifying human decision-making responsibility when collaborating with intelligent automation and AI

I have not posted in the last few years, as I invested all my free time in my studies, in addition to working full-time at Dell EMC and later at Google. I am happy to share that I have now graduated from Tel Aviv University’s School of Industrial & Intelligent Systems Engineering with a PhD. I had the pleasure of working with Prof. Joachim Meyer, whose guidance was invaluable (thank you!). My research focused on the exciting intersection of humans and automation: I modeled dynamic decision-making processes that are assisted by intelligent automation and artificial intelligence (AI) and uncovered some of the challenges related to responsibility attribution. Want to learn more? Continue reading.

Humans use automation in all areas of life to improve task outcomes and to perform more efficiently, with less effort and lower risk. Decision-making is a critical task that automation can significantly improve, since in many situations the automation is better equipped to detect and analyze the situation, weigh the alternatives, and select the preferred course of action. However, when automation is used, either to assist human decisions or to make them autonomously, a major responsibility gap can emerge: the human is distanced from the outcome, and it becomes less clear who should be held responsible for it. The lack of a quantitative method for determining the human's involvement in, and contribution to, the outcome in such situations is one of the obstacles on the path to broader adoption of automation in critical decision-making systems.

Assisted decision-making spans decision support systems (DSSs), where the automation merely advises the human, and automated decision-making systems (ADMs), in which the automation makes the decision and the human is there only to approve or override it. Either can be applied to a static, single decision or to a series of decisions in a dynamic decision-making event. Previous work quantified the human contribution and responsibility for the outcome when an ADM is used for single decisions. We extended this work to also cover DSSs and ran an empirical study of how people behave differently when assisted by a DSS vs. an ADM. We found that the way the automation's role is presented to the human operator significantly influences how they react to its recommendations and whether they follow its advice. This matters for system designers and policy-makers who require a minimal level of human control over the decision-making: the roles of the automation and the human should be clearly defined and presented to the operator.
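To make the responsibility measure more concrete, here is a minimal Python sketch in the spirit of the information-theoretic approach used in the earlier work on single decisions: human responsibility is taken as the share of uncertainty in the operator's actions that the automation's advice does not already explain. The joint distribution, the variable names, and the exact ratio are illustrative assumptions of mine, not the published model.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (in bits) of a probability vector."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def human_responsibility(joint):
    """Illustrative measure: H(action | advice) / H(action).

    joint[i, j] = P(advice = i, action = j). Returns 1.0 when the
    operator's actions are independent of the advice (full human
    responsibility) and 0.0 when the advice fully determines them.
    """
    p_action = joint.sum(axis=0)  # marginal distribution of the human's actions
    p_advice = joint.sum(axis=1)  # marginal distribution of the automation's advice
    h_action = entropy(p_action)
    # H(action | advice) = sum_i P(advice = i) * H(action | advice = i)
    h_cond = sum(
        p_advice[i] * entropy(joint[i] / p_advice[i])
        for i in range(len(p_advice))
        if p_advice[i] > 0
    )
    return h_cond / h_action if h_action > 0 else 0.0

# An operator who follows the advice 90% of the time retains only part
# of the decision uncertainty as their own unique contribution.
joint = np.array([[0.45, 0.05],
                  [0.05, 0.45]])
print(human_responsibility(joint))  # ~0.47
```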

We then extended the analysis to dynamic decision-making events, which required developing a new mathematical model: such events are non-stationary, so some of the information-theoretic concepts used previously could not serve as the basis for the model. Using probabilistic causation, we identified a metric that measures ‘how far’ the automation moves the human’s decision probability space from its original state, and thus the automation’s level of influence. An empirical test investigated human behavior in such dynamic events and compared it to the theoretical predictions.
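As a rough illustration of the ‘how far’ idea (a sketch under my own simplifying assumptions; the metric in the papers is derived from probabilistic causation and may take a different form), one can compare the operator's decision distribution before and after seeing the automation's output, for example with the total variation distance, and track it across the steps of a dynamic event:

```python
import numpy as np

def influence(p_baseline, p_advised):
    """Total variation distance between the operator's decision
    distribution without advice (p_baseline) and with it (p_advised).

    0.0 means the advice did not move the decision probabilities;
    1.0 means it shifted all probability mass to other decisions.
    """
    p_baseline = np.asarray(p_baseline, dtype=float)
    p_advised = np.asarray(p_advised, dtype=float)
    return 0.5 * np.abs(p_baseline - p_advised).sum()

# In a dynamic event the distributions change from step to step, so the
# influence is tracked over time rather than computed once.
steps = [
    ([0.6, 0.3, 0.1], [0.2, 0.7, 0.1]),  # advice moves the operator strongly
    ([0.5, 0.4, 0.1], [0.5, 0.4, 0.1]),  # advice ignored at this step
]
for t, (before, after) in enumerate(steps):
    print(f"step {t}: influence = {influence(before, after):.2f}")
```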
Our experiments demonstrated that the models can predict, a priori, the human's influence on the outcome, which can serve as a measure of their level of causal responsibility. Interestingly, less accurate (or less experienced) operators tended to follow the automation's advice less, even though they stood to benefit from it the most. Simply providing an automated system and assuming it will improve task performance is therefore not enough: mechanisms should be in place to build the human operator's trust in the automation if we expect them to follow its advice.

These insights should be considered when designing assisted decision-making systems, to ensure that the human's level of influence is as intended, especially in regulated systems where meaningful human control is required. Our models make it possible to predict this level of influence in advance, which is essential for the design of such systems and related processes, and for the development of relevant regulations.

Further reading:

Human Responsibility in Dynamic Automation-Assisted Decision Making. PhD dissertation, Tel Aviv University.

Context-based human influence and causal responsibility for assisted decision-making. Human Factors, 2025.

Quantifying Automation Influence and Human Responsibility in Dynamic Decision Making Events. ACM Transactions on Intelligent Systems and Technology, 2023.

Aided decision-processes in dynamic events: measuring decision support systems influence and human responsibility. IEEE Transactions on Human-Machine Systems, 2025.
