Does opportunistic testing bias cognitive performance in primates? Learning from drop-outs

Michèle N. Schubiger*, Alexandra Kissling, Judith M. Burkart

*Corresponding author for this work

Research output: Contribution to journal › Article

Abstract

Dropouts are a common issue in cognitive tests with non-human primates. One main reason for dropouts is that researchers often face a trade-off between obtaining a sufficiently large sample size and logistic restrictions, such as limited access to testing facilities. The commonly-used opportunistic testing approach deals with this trade-off by only testing those individuals who readily participate and complete the cognitive tasks within a given time frame. All other individuals are excluded from further testing and data analysis. However, it is unknown if this approach merely excludes subjects who are not consistently motivated to participate, or if these dropouts systematically differ in cognitive ability. If the latter holds, the selection bias resulting from opportunistic testing would systematically affect performance scores and thus comparisons between individuals and species. We assessed the potential effects of opportunistic testing on cognitive performance in common marmosets (Callithrix jacchus) and squirrel monkeys (Saimiri sciureus) with a test battery consisting of six cognitive tests: two inhibition tasks (Detour Reaching and A-not-B), one cognitive flexibility task (Reversal Learning), one quantity discrimination task, and two memory tasks. Importantly, we used a full testing approach in which subjects were given as much time as they required to complete each task. For each task, we then compared the performance of subjects who completed the task within the expected number of testing days with those subjects who needed more testing time. We found that the two groups did not differ in task performance, and therefore opportunistic testing would have been justified without risking biased results. If our findings generalise to other species, maximising sample sizes by only testing consistently motivated subjects will be a valid alternative whenever full testing is not feasible.

Original language: English
Article number: e0213727
Number of pages: 22
Journal: PLoS One
Volume: 14
Issue number: 3
DOI: 10.1371/journal.pone.0213727
Publication status: Published - 20 Mar 2019

Cite this

Schubiger, Michèle N. ; Kissling, Alexandra ; Burkart, Judith M. / Does opportunistic testing bias cognitive performance in primates? Learning from drop-outs. In: PLoS One. 2019 ; Vol. 14, No. 3.
@article{10751a7408da437cabe7fc7f689ad65c,
  title     = "Does opportunistic testing bias cognitive performance in primates? Learning from drop-outs",
  abstract  = "Dropouts are a common issue in cognitive tests with non-human primates. One main reason for dropouts is that researchers often face a trade-off between obtaining a sufficiently large sample size and logistic restrictions, such as limited access to testing facilities. The commonly-used opportunistic testing approach deals with this trade-off by only testing those individuals who readily participate and complete the cognitive tasks within a given time frame. All other individuals are excluded from further testing and data analysis. However, it is unknown if this approach merely excludes subjects who are not consistently motivated to participate, or if these dropouts systematically differ in cognitive ability. If the latter holds, the selection bias resulting from opportunistic testing would systematically affect performance scores and thus comparisons between individuals and species. We assessed the potential effects of opportunistic testing on cognitive performance in common marmosets (Callithrix jacchus) and squirrel monkeys (Saimiri sciureus) with a test battery consisting of six cognitive tests: two inhibition tasks (Detour Reaching and A-not-B), one cognitive flexibility task (Reversal Learning), one quantity discrimination task, and two memory tasks. Importantly, we used a full testing approach in which subjects were given as much time as they required to complete each task. For each task, we then compared the performance of subjects who completed the task within the expected number of testing days with those subjects who needed more testing time. We found that the two groups did not differ in task performance, and therefore opportunistic testing would have been justified without risking biased results. If our findings generalise to other species, maximising sample sizes by only testing consistently motivated subjects will be a valid alternative whenever full testing is not feasible.",
  author    = "Schubiger, {Mich{\`e}le N.} and Alexandra Kissling and Burkart, {Judith M.}",
  year      = "2019",
  month     = mar,
  day       = "20",
  doi       = "10.1371/journal.pone.0213727",
  language  = "English",
  volume    = "14",
  number    = "3",
  pages     = "e0213727",
  journal   = "PLoS One",
  issn      = "1932-6203",
  publisher = "Public Library of Science",
}

TY - JOUR
T1 - Does opportunistic testing bias cognitive performance in primates? Learning from drop-outs
AU - Schubiger, Michèle N.
AU - Kissling, Alexandra
AU - Burkart, Judith M.
PY - 2019/3/20
Y1 - 2019/3/20
AB - Dropouts are a common issue in cognitive tests with non-human primates. One main reason for dropouts is that researchers often face a trade-off between obtaining a sufficiently large sample size and logistic restrictions, such as limited access to testing facilities. The commonly-used opportunistic testing approach deals with this trade-off by only testing those individuals who readily participate and complete the cognitive tasks within a given time frame. All other individuals are excluded from further testing and data analysis. However, it is unknown if this approach merely excludes subjects who are not consistently motivated to participate, or if these dropouts systematically differ in cognitive ability. If the latter holds, the selection bias resulting from opportunistic testing would systematically affect performance scores and thus comparisons between individuals and species. We assessed the potential effects of opportunistic testing on cognitive performance in common marmosets (Callithrix jacchus) and squirrel monkeys (Saimiri sciureus) with a test battery consisting of six cognitive tests: two inhibition tasks (Detour Reaching and A-not-B), one cognitive flexibility task (Reversal Learning), one quantity discrimination task, and two memory tasks. Importantly, we used a full testing approach in which subjects were given as much time as they required to complete each task. For each task, we then compared the performance of subjects who completed the task within the expected number of testing days with those subjects who needed more testing time. We found that the two groups did not differ in task performance, and therefore opportunistic testing would have been justified without risking biased results. If our findings generalise to other species, maximising sample sizes by only testing consistently motivated subjects will be a valid alternative whenever full testing is not feasible.
DO - 10.1371/journal.pone.0213727
M3 - Article
VL - 14
JO - PLoS One
JF - PLoS One
SN - 1932-6203
IS - 3
M1 - e0213727
ER -