TY - GEN
T1 - User trust and understanding of explainable AI
T2 - Thematic Area on Human Computer Interaction, held as part of the 22nd International Conference on Human-Computer Interaction, HCII 2020
AU - Branley-Bell, Dawn
AU - Whitworth, Rebecca
AU - Coventry, Lynne
N1 - Publisher Copyright:
© Springer Nature Switzerland AG 2020.
PY - 2020/7/10
Y1 - 2020/7/10
N2 - Artificial intelligence (AI) is increasingly being integrated into different areas of our lives. AI has the potential to increase productivity and relieve the workload on staff in high-pressure jobs such as healthcare. However, most AI healthcare tools have failed. For AI to be effective, it is vital that users can understand how the system is processing data. Explainable AI (XAI) moves away from the traditional ‘black box’ approach, aiming to make the processes behind the system more transparent. This experimental study uses real healthcare data – and combines a computer science and psychological approach – to investigate user trust and understanding of three popular XAI algorithms (Decision Trees, Logistic Regression and Neural Networks). The results question the contribution of understanding towards user trust, suggesting that understanding and explainability are not the only factors contributing to trust in AI. Users also show biases in trust and understanding, with a particular bias towards malignant results. This raises important issues around how humans can be encouraged to make more accurate judgements when using XAI systems. These findings have implications for ethics, future XAI design, healthcare and further research.
AB - Artificial intelligence (AI) is increasingly being integrated into different areas of our lives. AI has the potential to increase productivity and relieve the workload on staff in high-pressure jobs such as healthcare. However, most AI healthcare tools have failed. For AI to be effective, it is vital that users can understand how the system is processing data. Explainable AI (XAI) moves away from the traditional ‘black box’ approach, aiming to make the processes behind the system more transparent. This experimental study uses real healthcare data – and combines a computer science and psychological approach – to investigate user trust and understanding of three popular XAI algorithms (Decision Trees, Logistic Regression and Neural Networks). The results question the contribution of understanding towards user trust, suggesting that understanding and explainability are not the only factors contributing to trust in AI. Users also show biases in trust and understanding, with a particular bias towards malignant results. This raises important issues around how humans can be encouraged to make more accurate judgements when using XAI systems. These findings have implications for ethics, future XAI design, healthcare and further research.
U2 - 10.1007/978-3-030-49065-2_27
DO - 10.1007/978-3-030-49065-2_27
M3 - Conference contribution
AN - SCOPUS:85088747762
SN - 9783030490645
T3 - Lecture Notes in Computer Science (LNCS)
SP - 382
EP - 399
BT - Human-Computer Interaction. Human Values and Quality of Life
A2 - Kurosu, Masaaki
PB - Springer
CY - Cham
Y2 - 19 July 2020 through 24 July 2020
ER -