Probabilistic value estimations (by humans) are subject to framing issues: how valuable a choice is depends on how the circumstances are presented (frames).

Probabilistic value estimations are not linear in expected value.

The value function for uncertain gains seems to be generally concave; that for uncertain losses seems to be convex.

Low probabilities are often mis-estimated by humans; depending on the setting, they can be over- or underestimated.
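One common formalization of these two observations is the value and probability-weighting functions of cumulative prospect theory (Tversky & Kahneman, 1992). The sketch below uses their published parameter estimates, but only the qualitative shape matters here: concave over gains, convex and steeper over losses, and distorted weighting of extreme probabilities.

```python
# Illustrative prospect-theory value and probability-weighting functions.
# Parameter values are the Tversky & Kahneman (1992) estimates; the exact
# numbers are incidental -- only the qualitative shape matters.

def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Concave for gains, convex (and steeper, by loss aversion) for losses."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** beta)

def weight(p, gamma=0.61):
    """Probability weighting: overweights small p, underweights large p."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

# Concavity over gains: the second 50 units add less value than the first.
assert value(100) - value(50) < value(50) - value(0)
# Loss aversion: a loss looms larger than an equal gain.
assert abs(value(-50)) > value(50)
# A 1% chance is overweighted, a 99% chance underweighted.
assert weight(0.01) > 0.01 and weight(0.99) < 0.99
```

The assertions make the notes above concrete: value is not linear in expected value, and low probabilities are not taken at face value.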

Schools in psychology have often been thought of as replacing each other. Simon argues instead that they build on top of each other: psychology is incremental rather than revolutionary.

The way to describe human cognition is by describing how its state changes from one moment to another, given input.

Neuropsychology must both describe information processing in the brain and do its part in building the abstracted interface to theories of cognition at higher levels of resolution.

Simon implies that human cognition is serial or parallel depending on the level of resolution at which one looks at it.

Static, purely psychophysical theories of cognition (computational theories of the mind, in Marr's sense) are weak and descriptive only, as opposed to explanatory.

Simon uses the term `models' for those theories in psychology which make predictions by quantitatively describing structural characteristics of the brain.

If natural learning (and information processing) were perfect, psychology would not need to study learning (and information processing), but the environment which would determine what we learn and how we process information.

Natural learning (and information processing) is not optimal and therefore psychology needs to study it and especially its imperfections.

The Simon task and the Stroop task are similar. The main difference is that in the Simon task the conflict is between a dimension of the response and a task-irrelevant stimulus dimension, while in the Stroop task it is between a task-irrelevant stimulus dimension, the task-relevant stimulus dimension, and a dimension of the response.

Attention is necessary to perform the Stroop and Simon tasks.

The dimensional overlap framework can be used to classify overlap and interference between relevant (features of) stimuli and (features of) responses in psychological stimulus-response paradigms. In particular, it can be used to classify types of conflict between relevant and irrelevant dimensions of stimuli and responses.

In Stroop-type experiments, there is usually conflict between an irrelevant stimulus dimension, the relevant stimulus dimension, and a dimension of the response, for example the color of ink $C_I$ in which a word is written, the meaning of the word (a different color) $C_R$, and the response (saying the name of that color $C_R$).

In Simon-type experiments, there is usually conflict only between an irrelevant stimulus dimension and a dimension of the response, for example the task-irrelevant location of a stimulus and the hand with which to respond.

Liu et al. hypothesize that conflicts between stimulus dimensions and between stimulus and response dimensions are detected by different mechanisms, but resolved by the same executive control mechanism.

Liu et al. found support for their model of two conflict-detection mechanisms and one conflict-resolution mechanism: in their experiments, the compatibility effects between stimulus dimensions and between stimulus and response dimensions were additive when both types of conflict occurred (or both were congruent), and they canceled out when one type of conflict occurred together with one type of congruency.
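The additivity pattern can be made concrete with a toy reaction-time model. The millisecond values below are invented for illustration and are not Liu et al.'s data; the point is only that two independent effects add when both push in the same direction and (partially) cancel when they push in opposite directions.

```python
# Toy additive model of compatibility effects. A congruent dimension
# speeds responses, an incongruent one slows them, and the two conflict
# types contribute independently. All millisecond values are invented.

BASE_RT = 500   # hypothetical baseline reaction time (ms)
SS_EFFECT = 30  # cost/benefit of stimulus-stimulus (in)congruence (ms)
SR_EFFECT = 40  # cost/benefit of stimulus-response (in)congruence (ms)

def predicted_rt(ss_congruent, sr_congruent):
    rt = BASE_RT
    rt += -SS_EFFECT if ss_congruent else SS_EFFECT
    rt += -SR_EFFECT if sr_congruent else SR_EFFECT
    return rt

# Both incongruent: the two costs add up.
assert predicted_rt(False, False) == 500 + 30 + 40
# One conflict plus one congruency: the effects largely cancel.
assert predicted_rt(True, False) == 500 - 30 + 40
```

Under this additive scheme, the fully incongruent and fully congruent conditions differ by the sum of both effects, which is the signature Liu et al. report.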

The account of abstraction due to Hoare is that we first cluster objects according to arbitrary similarities. We then find clusters which are predictive of the future and name them. Subsequently, the similarities within such a named cluster are thought of as essential whereas the differences are perceived as unimportant.

The account of the process of abstraction due to Hoare is:

  • Abstraction (selecting those properties of the real things we want to represent in our abstraction)
  • Representation (choosing symbols like words, pictograms... for the abstraction)
  • Manipulation (declaring rules for how to use the symbolic representation to predict what will happen to the real things under certain circumstances)
  • Axiomatisation (declaring rigorously the relationship between the symbols used in the representation and the properties of the real things being abstracted from)

Jones and Love propose three ways of achieving `Bayesian Enlightenment'.

Bayesian theory can be used to describe hypotheses and prior beliefs. These two can then be tested against actual behavior.

In contrast with `Bayesian Fundamentalism', this approach treats the prior and the hypotheses as the scientific theory to be tested, rather than as the only (if handcrafted) way to describe the situation, used to check whether optimality can once again be demonstrated.

Feldman relates his "Subjective Unity of Perception" to the stable world illusion.

Feldman gives a functional explanation of the stable world illusion, but he does not seem to explain "Subjective Unity of Perception".

According to Friedman, Hummel divides binding architectures into multiplicative and additive ones.

Already von Helmholtz formulated the idea that prior knowledge---or expectations---is fused with sensory information into perception.

This idea is at the core of Bayesian theory.
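A standard textbook illustration of this fusion is Gaussian cue combination: with a Gaussian prior (the expectation) and a Gaussian likelihood (the sensory measurement), the posterior mean is a precision-weighted average of the two. The numbers below are arbitrary illustration values, not data from any study.

```python
# Fusing a prior expectation with a noisy sensory measurement under
# Gaussian assumptions. The posterior mean is a precision-weighted
# average of prior mean and observation; all numbers are arbitrary.

def fuse(prior_mean, prior_var, obs, obs_var):
    w_prior = 1.0 / prior_var  # precision (reliability) of the prior
    w_obs = 1.0 / obs_var      # precision (reliability) of the measurement
    post_mean = (w_prior * prior_mean + w_obs * obs) / (w_prior + w_obs)
    post_var = 1.0 / (w_prior + w_obs)
    return post_mean, post_var

# A reliable measurement (low variance) dominates the percept...
mean, var = fuse(prior_mean=0.0, prior_var=4.0, obs=10.0, obs_var=1.0)
assert mean > 5.0
# ...and the fused estimate is more certain than either source alone.
assert var < 1.0 and var < 4.0
```

The same formula explains why strong expectations can pull perception away from the raw sensory signal: as `prior_var` shrinks, the posterior mean moves toward `prior_mean`.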

Although predecessors existed, Bayesian theory became popular in perceptual science in the 1980s and 1990s.

Psychophysical results can inform the study of algorithms and representations.

Jones and Love talk about Bayesian theory as psychological theories---not so much as neuroscientific theories... I guess?