User trust and understanding of explainable AI: exploring algorithm visualisations and user biases

Dawn Branley-Bell*, Rebecca Whitworth, Lynne Coventry

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

3 Citations (Scopus)

Abstract

Artificial intelligence (AI) is increasingly being integrated into different areas of our lives. AI has the potential to increase productivity and relieve the workload on staff in high-pressure jobs such as healthcare. However, most AI healthcare tools have failed. For AI to be effective, it is vital that users can understand how the system is processing data. Explainable AI (XAI) moves away from the traditional 'black box' approach, aiming to make the processes behind the system more transparent. This experimental study uses real healthcare data, and combines a computer science and psychological approach, to investigate user trust and understanding of three popular XAI algorithms (Decision Trees, Logistic Regression and Neural Networks). The results question the contribution of understanding towards user trust, suggesting that understanding and explainability are not the only factors contributing to trust in AI. Users also show biases in trust and understanding, with a particular bias towards malignant results. This raises important issues around how humans can be encouraged to make more accurate judgements when using XAI systems. These findings have implications for ethics, future XAI design, healthcare and further research.

Original language: English
Title of host publication: Human-Computer Interaction. Human Values and Quality of Life
Subtitle of host publication: Thematic Area, HCI 2020, Held as Part of the 22nd International Conference, HCII 2020, Proceedings, Part III
Editors: Masaaki Kurosu
Place of publication: Cham
Publisher: Springer
Pages: 382-399
Number of pages: 18
ISBN (Electronic): 9783030490652
ISBN (Print): 9783030490645
DOIs
Publication status: Published - 10 Jul 2020
Externally published: Yes
Event: Thematic Area on Human Computer Interaction, held as part of the 22nd International Conference on Human-Computer Interaction, HCII 2020 - Copenhagen, Denmark
Duration: 19 Jul 2020 - 24 Jul 2020

Publication series

Name: Lecture Notes in Computer Science (LNCS)
Publisher: Springer
Volume: 12183
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: Thematic Area on Human Computer Interaction, held as part of the 22nd International Conference on Human-Computer Interaction, HCII 2020
Abbreviated title: HCI 2020
Country/Territory: Denmark
City: Copenhagen
Period: 19/07/20 - 24/07/20

Keywords

  • Explainable AI
  • Artificial intelligence
  • Machine learning
  • Health
  • Trust
  • Understanding
  • Healthcare
  • Medical diagnoses
  • Cognitive biases
