Evaluation of a multiscale color model for visual difference prediction

P. G. Lovell, C. A. Parraga, T. Troscianko, C. Ripamonti, D. J. Tolhurst

Research output: Contribution to journal › Article

18 Citations (Scopus)

Abstract

How different are two images when viewed by a human observer? There is a class of computational models that attempt to predict perceived differences between subtly different images. These are derived from theoretical considerations of human vision and are mostly validated by psychophysical experiments on stimuli such as sinusoidal gratings. We are developing a model of visual difference prediction, based on multiscale analysis of local contrast, to be tested with psychophysical discrimination experiments on natural-scene stimuli. Here, we extend our model to account for differences in the chromatic domain by modeling differences in the luminance domain and in two opponent chromatic domains. We describe psychophysical measurements of objective (discrimination thresholds) and subjective (magnitude estimations) perceptual differences between visual stimuli derived from colored photographs of natural scenes. We use one set of psychophysical data to determine the best parameters for the model and then determine the extent to which the model generalizes to other experimental data. In particular, we show that the cues from different spatial scales and from the separate luminance and chromatic channels contribute roughly equally to discrimination, and that these several cues are combined in a relatively straightforward manner. In general, the model provides good predictions of both threshold and suprathreshold image differences arising from a wide variety of geometrical and optical manipulations. This implies that models of this class can be generally useful in specifying how different two similar images will look to human observers.
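
The abstract names the model's three ingredients: an opponent color decomposition, multiscale analysis of local contrast, and a relatively straightforward combination of cues across scales and channels. As a rough sketch of how those pieces fit together, the Python fragment below assumes a difference-of-Gaussians pyramid for the multiscale contrast stage and a single Minkowski sum for cue combination; the channel weights, number of scales, and pooling exponent are placeholders, not the parameters fitted in the paper.

    # Minimal, illustrative sketch of a multiscale color visual-difference
    # predictor in the spirit of the model described above. The opponent-channel
    # definitions, the number of scales, and the Minkowski exponent are
    # placeholder assumptions, NOT the fitted parameters reported in the paper.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def opponent_channels(rgb):
        """Split an RGB image (H x W x 3, floats in [0, 1]) into a luminance
        plane and two crude color-opponent planes."""
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        lum = 0.2989 * r + 0.5870 * g + 0.1140 * b  # luminance
        rg = r - g                                   # red-green opponent
        by = b - 0.5 * (r + g)                       # blue-yellow opponent
        return [lum, rg, by]

    def local_contrast_bands(plane, n_scales=6):
        """Difference-of-Gaussians pyramid: band-pass 'local contrast' at a
        series of octave-spaced spatial scales."""
        return [gaussian_filter(plane, 2.0 ** k) - gaussian_filter(plane, 2.0 ** (k + 1))
                for k in range(n_scales)]

    def visual_difference(img_a, img_b, n_scales=6, m=3.0):
        """Pool band-wise contrast differences over space, scale, and channel
        with a single Minkowski sum (exponent m is a guess, not a fit)."""
        total = 0.0
        for pa, pb in zip(opponent_channels(img_a), opponent_channels(img_b)):
            for ba, bb in zip(local_contrast_bands(pa, n_scales),
                              local_contrast_bands(pb, n_scales)):
                total += np.sum(np.abs(ba - bb) ** m)
        return total ** (1.0 / m)

Under these assumptions, calling visual_difference(a, b) on two aligned float images returns a scalar that grows with predicted perceptual difference. A Minkowski exponent around 3 is a common choice in contrast-summation models, but the published model derives its filters and pooling parameters from the psychophysical data described in the abstract.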

Original language: English
Pages (from-to): 155-178
Number of pages: 24
Journal: ACM Transactions on Applied Perception
Volume: 3
Issue number: 3
DOI: 10.1145/1166087.1166089
Publication status: Published - 1 Jan 2006
Externally published: Yes

Fingerprint

Color, Cues, Prediction, Evaluation, Discrimination, Luminance, Observer, Model, Human Vision, Multiscale Analysis, Computational Model, Gratings, Experiment, Vision, Discrimination (Psychology), Manipulation, Experimental Data, Imply, Experiments, Predict

Cite this

Lovell, P. G.; Parraga, C. A.; Troscianko, T.; Ripamonti, C.; Tolhurst, D. J. Evaluation of a multiscale color model for visual difference prediction. In: ACM Transactions on Applied Perception. 2006; Vol. 3, No. 3. pp. 155-178.
@article{a125d93e793246efae9f59713cde9d3e,
title = "Evaluation of a multiscale color model for visual difference prediction",
abstract = "How different are two images when viewed by a human observer? There is a class of computational models which attempt to predict perceived differences between subtly different images. These are derived from theoretical considerations of human vision and are mostly validated from psychophysical experiments on stimuli, such as sinusoidal gratings. We are developing a model of visual difference prediction, based on multiscale analysis of local contrast, to be tested with psychophysical discrimination experiments on natural-scene stimuli. Here, we extend our model to account for differences in the chromatic domain by modeling differences in the luminance domain and in two opponent chromatic domains. We describe psychophysical measurements of objective (discrimination thresholds) and subjective (magnitude estimations) perceptual differences between visual stimuli derived from colored photographs of natural scenes. We use one set of psychophysical data to determine the best parameters for the model and then determine the extent to which the model generalizes to other experimental data. In particular, we show that the cues from different spatial scales and from the separate luminance and chromatic channels contribute roughly equally to discrimination and that these several cues are combined in a relatively straightforward manner. In general, the model provides good predictions of both threshold and suprathreshold image differences arising from a wide variety of geometrical and optical manipulations. This implies that models of this class can be generally useful in specifying how different two similar images will look to human observers.",
author = "Lovell, {P. G.} and Parraga, {C. A.} and T. Troscianko and C. Ripamonti and Tolhurst, {D. J.}",
year = "2006",
month = "1",
day = "1",
doi = "10.1145/1166087.1166089",
language = "English",
volume = "3",
pages = "155--178",
journal = "ACM Transactions on Applied Perception",
issn = "1544-3558",
publisher = "Association for Computing Machinery (ACM)",
number = "3",

}

TY - JOUR

T1 - Evaluation of a multiscale color model for visual difference prediction

AU - Lovell, P. G.

AU - Parraga, C. A.

AU - Troscianko, T.

AU - Ripamonti, C.

AU - Tolhurst, D. J.

PY - 2006/1/1

Y1 - 2006/1/1

DO - 10.1145/1166087.1166089

M3 - Article

AN - SCOPUS:38549120661

VL - 3

SP - 155

EP - 178

JO - ACM Transactions on Applied Perception

JF - ACM Transactions on Applied Perception

SN - 1544-3558

IS - 3

ER -