Show Tag: representations

Maps of sensory space in different sensory modalities can, if brought into register, give rise to an amodal representation of space.
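
A minimal numpy sketch of the idea: two 1D activity maps over azimuth, one eye-centered (visual) and one head-centered (auditory), are brought into register by a shift and then combined. The maps, reference frames, and shift-based registration here are illustrative assumptions, not a model from any of the cited works.

```python
import numpy as np

azimuth = np.arange(-40, 41)            # common spatial axis, in degrees

def gaussian_bump(center, width=5.0):
    """Population activity peaked at a stimulus location."""
    return np.exp(-0.5 * ((azimuth - center) / width) ** 2)

eye_position = 10                        # eyes rotated 10 deg to the right
stimulus = -5                            # true location, head-centered

visual_map = gaussian_bump(stimulus - eye_position)   # eye-centered frame
auditory_map = gaussian_bump(stimulus)                # head-centered frame

# Bring the visual map into register by shifting it by the eye position
# (np.roll as a stand-in for a coordinate transform between frames).
registered_visual = np.roll(visual_map, eye_position)

# Once in register, the maps can be combined into a single,
# modality-independent (amodal) map of space.
amodal_map = registered_visual + auditory_map
print(azimuth[np.argmax(amodal_map)])    # -5, the true location
```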

A traditional model of visual processing for perception and action proposes that the two tasks rely on different visual representations. This model explains the weak effect of visual illusions like the Müller-Lyer illusion on performance in grasping tasks.

Foster et al. challenge the methodology of a previous study by Dewar and Carey which supports Goodale and Milner's perception-and-action model of visual processing.

They do that by changing the closed visual-action loop in Dewar and Carey's study into an open one, removing visual feedback at motion onset. The result is that the illusion's effect appears in grasping (where it was absent in the closed-loop condition) but is weaker in manual object size estimation.

Foster et al. argue that this suggests that the effect found in Dewar and Carey's study is due to continuous visual feedback.

Sun argues that mechanisms and representations (and thus computational models) are an important and necessary part of scientific theories, and that this is especially true in cognitive science.

Sun argues that computational cognitive models describe mechanisms and representations in cognitive science well.

According to Markman and Dietrich, conventional views of neural representations agree on the following five principles:

  1. Representations are states of the system which carry information about the world,
  2. some information about the world must be stored in a cognitive system and be accessible without the percepts from which it was originally derived,
  3. representations use symbols,
  4. some information is represented amodally, i.e., independently of perception or action, and
  5. some representations are not related to the cognitive agent's embodiment (do not require embodiment).

According to Markman and Dietrich, symbolic systems are too rigid to cope well with the variability and ambiguity of situations.

According to Markman and Dietrich, some of the problems of symbolic systems are handled by models which use modal instead of amodal representations.

The theory of situated cognition holds that cognitive processes cannot be separated from their context:

  • What needs to be represented internally depends on what is readily available in the environment and
  • some things are easier to check in the environment (by gathering information, trying things out) than to infer or simulate within the cognitive agent itself.

The theory of embodied cognition states that, to model natural cognition, it is necessary to build embodied cognitive agents because cognition cannot be understood out of context.

According to Markman and Dietrich, the traditional view on cognitive representations suffers from being too static in a dynamic world: there are no discrete state transitions in the world of biological cognitive agents, so any model that operates on representations requiring discrete state transitions is inaccurate.

Dynamical systems have been used to model actual dynamics in cognitive systems.
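
A toy illustration of that style of modeling, assuming a one-dimensional bistable system as a stand-in for a perceptual decision; the equation and its reading are illustrative, not taken from a cited model.

```python
# Toy dynamical-systems model: a one-dimensional bistable system,
# dx/dt = x - x**3 + drive, read as a perceptual decision settling
# into one of two attractors (x near +1 or x near -1).
def simulate(drive, x0=0.0, dt=0.01, steps=2000):
    x = x0
    for _ in range(steps):
        x += dt * (x - x**3 + drive)   # forward Euler integration
    return x

print(simulate(drive=0.1))    # weak evidence for "+": settles near +1
print(simulate(drive=-0.1))   # weak evidence for "-": settles near -1
```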

There are alternative views to the traditional view of cognitive representations.

The traditional view of cognitive representation needs to be extended by aspects and mechanisms of correspondence to perception and action, rather than replaced.

Neurorobotics is the activity of building embodied cognitive agents.

Wilson and Golonka argue that early research in cognitive science operated under the assumption that our sensory organs aren't very good and the brain needs to make up for that.

Newer results show, according to the authors, that our perceptual world gives us many resources (in brain, body, and environment) to respond to our environment without needing to create detailed mental representations and operate on them.

Markman and Dietrich argue against the replacement hypothesis, saying that all of the alternative approaches still assume that brain states do reflect world properties.

Wilson and Golonka argue that representations and computational processes can be replaced by world-body-brain dynamics even in neurolinguistics.

Since speech happens in a brain which is part of a body in a physical world, it is without doubt possible to describe it in terms of world-body-brain dynamics.

The question is whether that is a good way of describing it. Such a description may be very complex and difficult to handle—it might run against what explanation in science is supposed to do.

Neural responses to words from different categories activate different networks of brain regions.

The fact that the brain regions activated by (hearing, reading...) certain words correspond to the categories the words belong to (action words for motor areas etc.) suggests semantic grounding in perception and action.

Words from some categories do not activate brain regions which are related to their meaning. The semantics of those words do not seem to be grounded in perception or action. Pulvermüller calls such categories and their neural representations disembodied.

Some abstract, disembodied words seem to activate areas in the brain related to emotional processing. These words may be grounded in emotion.

`Disembodied' theories account for intentionality relatively well: they posit cognitive representations which stand for real-world entities even in their absence (or inexistence).

Embodied cognition has a harder time explaining offline cognition, i.e., cognition about things that don't stimulate the senses.

Mental simulation, i.e., simulation of sensorimotor interaction, is one way in which embodied cognitive theories can account for offline cognition.

Some things we think about (like moral judgements or economic dynamics) are very abstract, and it is hard to connect them to sensorimotor interactions.

Clark calls theories `radical embodiment' if they make one or more of the following claims:

  1. Classical tools of cognitive science are insufficient to understand cognition (and other tools, like dynamical systems, are needed),
  2. representations and computation on them are inadequate to describe cognition, and
  3. modularizing the brain is misleading.

In 1999, Clark judged the evidence weak for the ideas that non-classical tools are necessary to understand cognition and that representations and computation are bad categories for thinking about cognition.

He conjectured that there is a middle ground between embodied and disembodied cognition.

Barsalou writes that most researchers in cognitive psychology and cognitive science who accept that many neural representations are modal nonetheless hold that there are also amodal representations.

Theories of amodal representations of concepts hold that sensorimotor representations are transduced into representations which are divorced from sensory content (like feature lists, semantic networks, or frames).

According to modal theories of concepts, representations of concepts consist of sensorimotor representations.

Activating a concept activates (some of) the modal neurons which represent it. Barsalou et al. call this re-enactment.

Much of neural processing can be understood as compression and de-compression.
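
A loose analogy in code, assuming a linear autoencoder (computed here via SVD/PCA) as the compressor and de-compressor; this illustrates the compression idea, not actual neural circuitry.

```python
import numpy as np

# A linear autoencoder compresses high-dimensional inputs into a
# low-dimensional code and de-compresses the code back into an
# approximate reconstruction.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2)) @ rng.normal(size=(2, 20))  # data on a 2-D subspace of R^20

X_centered = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(X_centered, full_matrices=False)
W = Vt[:2]                       # encoder: project onto top 2 components

code = X_centered @ W.T          # compression: 20 dims -> 2 dims
reconstruction = code @ W        # de-compression: 2 dims -> 20 dims

print(np.allclose(reconstruction, X_centered))  # True: (nearly) lossless here
```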

Representations in the cortex (e.g., V1) develop differently depending on the task. This suggests that some sort of feedback signal might be involved and that learning in the cortex is not purely unsupervised.
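
A small sketch of how a task (feedback) signal can change which features are learned from the same input; the data and the two learners (PCA vs. least squares) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Same sensory input for both learners: 2-D inputs where the first
# dimension varies a lot and the second varies little.
X = rng.normal(size=(1000, 2)) * np.array([3.0, 0.5])

# Task feedback depends only on the *low-variance* dimension.
y = X[:, 1] + 0.1 * rng.normal(size=1000)

# Unsupervised learner: picks the direction of maximal variance (PCA).
_, _, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
print("unsupervised feature:", Vt[0])                 # ~ ±[1, 0]

# Learner with task feedback: least-squares regression weights.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print("task-driven feature:", w / np.linalg.norm(w))  # ~ [0, 1]
```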

Some task-dependency in representations may arise from embodied learning, where actions bias the experiences from which the agent learns.

Conversely, the narrow range of disparities reflected in disparity-selective neurons in visual cortex might be due to goal-directed feature learning.

Explicit or implicit representations of world dynamics are necessary for optimal controllers since they have to anticipate state changes before the arrival of the necessary sensor data.
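
A minimal sketch of such a representation of world dynamics: the prediction step of a Kalman filter, which anticipates the next state before the corresponding sensor data arrive. The dynamics and noise values are illustrative assumptions.

```python
import numpy as np

# State: position and velocity; the controller's internal forward model
# x' = A x predicts the next state before sensor data for it exists.
dt = 0.1
A = np.array([[1.0, dt],
              [0.0, 1.0]])       # constant-velocity world dynamics
Q = 0.01 * np.eye(2)             # process noise covariance

x = np.array([0.0, 1.0])         # current state estimate
P = np.eye(2)                    # current estimate covariance

# Prediction (the forward model): available immediately, no sensing needed.
x_pred = A @ x
P_pred = A @ P @ A.T + Q
print(x_pred)                    # anticipated state: position ~0.1, velocity ~1.0

# Only later, when a (delayed) position measurement z arrives, is the
# estimate corrected.
H = np.array([[1.0, 0.0]])       # we measure position only
R = np.array([[0.05]])           # measurement noise covariance
z = np.array([0.12])
S = H @ P_pred @ H.T + R
K = P_pred @ H.T @ np.linalg.inv(S)
x = x_pred + K @ (z - H @ x_pred)
P = (np.eye(2) - K @ H) @ P_pred
print(x)
```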

Are there representations, i.e., forward models of saccade control, in the SC?

The idea that neural activity does not primarily represent the world but 'action pointers', as put by Engel et al., speaks to the deep SC which is both 'multi-modal' and 'motor'.

A representation of probabilities is not necessary for optimal estimation.
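
One way to see this, under Gaussian assumptions: the optimal (minimum-variance) fusion of two noisy cues is just a precision-weighted average of point estimates, computable without representing any probability distribution. A standard cue-combination sketch with made-up numbers:

```python
# Optimal (minimum-variance) fusion of two noisy cues under Gaussian
# assumptions: a precision-weighted average of point estimates.  The
# computation uses only two numbers per cue (an estimate and a fixed
# weight); no probability distribution is represented anywhere.
visual_estimate, visual_var = 10.0, 1.0
haptic_estimate, haptic_var = 12.0, 4.0

w_visual = (1 / visual_var) / (1 / visual_var + 1 / haptic_var)
fused = w_visual * visual_estimate + (1 - w_visual) * haptic_estimate
print(fused)   # 10.4, closer to the more reliable (visual) cue
```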

A representation is a formal system for making explicit certain entities or types of information, together with a specification of how the system does this.

Marr calls the result of using a representation to describe a given entity a description of the entity in that representation.

Any type of representation makes certain information explicit at the expense of information that is pushed into the background and may be quite hard to recover.
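
A toy illustration in the spirit of Marr's own numeral-system example: the same number in different representations makes different properties explicit.

```python
# The same number in different representations makes different
# properties explicit.
n = 14
decimal = str(n)          # '14'   -> powers of ten explicit
binary = format(n, 'b')   # '1110' -> powers of two explicit
roman = 'XIV'             # additive/subtractive structure explicit

# Parity is trivially explicit in binary (look at the last digit)...
print(binary[-1] == '0')  # True: 14 is even
# ...but pushed into the background in Roman numerals, where recovering
# it requires decoding the whole numeral first.
```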

Representation, algorithm, and hardware depend on each other and, critically, on the demands of the task.

The three levels at which any information-processing system needs to be understood are

  • computational theory
  • representation and algorithm
  • hardware implementation

Neurophysiology can help us understand the representations. Otherwise it is mainly concerned with the implementational side of the study of the brain as an information-processing system. Neurophysiological knowledge is hard to interpret in terms of algorithms and representations, especially without a clear understanding of the task (i.e., the computational theory).

Psychophysical results can inform the study of algorithms and representations.

There are three kinds of things we can learn from computational neuroscience:

  • Computational Theory of Perception: If we want computers to do what we do, we need to understand what we do, why we do it, and how it can be done in general.
  • Implementation: We can learn from nature how what we do can be implemented.
  • Algorithms and Representations: We can learn from computational neuroscience good ways to represent information and process it.

Of course, if we don't use neural computers, we will have to adapt the algorithms and representations, since what is optimal on neural hardware may not be optimal on different hardware.

Marr speaks of vision as one process whose task is to generate `a useful description of the world'. However, vision has more than one actual goal (though these goals share similar properties), and thus the different parts of the brain concerned with these goals use different representations and algorithms.

Marr writes: "The usefulness of a representation depends upon how well suited it is to the purpose for which it is used". That's pure embodiment.

When studying an information-processing system, and given a computational theory of it, algorithms and representations for implementing it can be designed, and their performance can be compared to that of natural processing.

If the performance is similar, that supports our computational theory.