A computational neuroscience framework for quantifying warning signals

O. Penacchio*, C. G. Halpin, I. C. Cuthill, P. G. Lovell, M. Wheelwright, J. Skelhorn, C. Rowe, J. M. Harris

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

1. Animal warning signals show remarkable diversity, yet subjectively appear to share certain visual features that make defended prey stand out and look different from more cryptic palatable species. For example, many (but far from all) warning signals involve high contrast elements, such as stripes and spots, and often involve the colours yellow and red. How exactly do aposematic species differ from non‐aposematic ones in the eyes (and brains) of their predators?
2. Here, we develop a novel computational modelling approach to quantify prey warning signals and establish what visual features they share. First, we develop a model visual system, made of artificial neurons with realistic receptive fields, to provide a quantitative estimate of the neural activity in the first stages of the visual system of a predator in response to a pattern. The system can be tailored to specific species. Second, we build a model that defines a ‘neural signature’, comprising quantitative metrics that measure the strength of stimulation of the population of neurons in response to patterns. This framework allows us to test how individual patterns stimulate the model predator visual system.
3. For the predator–prey system of birds foraging on lepidopteran prey, we compared the strength of stimulation of a modelled avian visual system in response to a novel database of hyperspectral images of aposematic and undefended butterflies and moths. Warning signals generate significantly stronger activity in the model visual system, setting them apart from the patterns of undefended species. The activity was also very different from that seen in response to natural scenes. Therefore, to their predators, lepidopteran warning patterns are distinct from their non‐defended counterparts and stand out against a range of natural backgrounds.
4. For the first time, we present an objective and quantitative definition of warning signals based on how the pattern generates population activity in a neural model of the brain of the receiver. This opens new perspectives for understanding and testing how warning signals have evolved, and, more generally, how sensory systems constrain signal design.
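To illustrate the kind of pipeline point 2 describes, the sketch below builds a toy bank of oriented, zero-mean receptive fields (Gabor-like units), rectifies their convolution responses to an image, and summarises the population activity with a few scalar metrics standing in for a ‘neural signature’. This is a minimal illustration of the general idea, not the authors' published model: the filter parameters, the choice of Gabor filters, and the metric names (`mean_activity`, `max_activity`, `sparseness`) are all assumptions for demonstration.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def gabor_kernel(size, wavelength, theta, sigma):
    """Oriented Gabor filter as a toy 'receptive field'."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    k = envelope * carrier
    return k - k.mean()  # zero-mean: uniform regions evoke no response

def population_response(image, n_orientations=4, size=15,
                        wavelength=6.0, sigma=3.0):
    """Rectified responses of a bank of oriented units (valid convolution)."""
    patches = sliding_window_view(image, (size, size))
    responses = []
    for i in range(n_orientations):
        theta = i * np.pi / n_orientations
        k = gabor_kernel(size, wavelength, theta, sigma)
        r = np.tensordot(patches, k, axes=([2, 3], [0, 1]))
        responses.append(np.abs(r))  # half-wave-style rectification
    return np.stack(responses)  # shape: (orientation, y, x)

def neural_signature(image):
    """Toy summary metrics of population activity strength."""
    r = population_response(image)
    ms = (r**2).mean()
    return {"mean_activity": float(r.mean()),
            "max_activity": float(r.max()),
            "sparseness": float(r.mean()**2 / ms) if ms > 0 else 0.0}

# High-contrast stripes (warning-like) vs. low-contrast noise (cryptic-like)
x = np.arange(64)
stripes = np.tile((np.sin(2 * np.pi * x / 8) > 0).astype(float), (64, 1))
rng = np.random.default_rng(0)
noise = 0.5 + 0.05 * rng.standard_normal((64, 64))
sig_stripes = neural_signature(stripes)
sig_noise = neural_signature(noise)
print(sig_stripes["mean_activity"] > sig_noise["mean_activity"])  # → True
```

With these (assumed) parameters, the striped pattern drives far stronger mean population activity than the low-contrast texture, mirroring the paper's finding that warning patterns stimulate the model visual system more strongly than cryptic ones.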
Original language: English
Pages (from-to): 103-116
Number of pages: 14
Journal: Methods in Ecology and Evolution
Volume: 15
Issue number: 1
Early online date: 11 Dec 2023
Publication status: Published - 1 Jan 2024

Keywords

  • Animal pattern
  • Aposematism
  • Avian vision
  • Camouflage
  • Computational neuroscience
  • Defensive coloration
  • Image statistics
  • Lepidoptera
