Tag: embodiment

Verschure argues that models of what he calls the mind-brain-body nexus should particularly account for data about behavior at the system level, i.e. overt behavior. He calls this convergent validation.

In Verschure's concept of convergent validation, the researcher does not seek inspiration in nature but constraints for the falsification or validation of models.

According to Markman and Dietrich, conventional views of neural representations agree on the following five principles:

  1. Representations are states of the system which carry information about the world,
  2. some information about the world must be stored in a cognitive system and accessible without the percepts from which it was originally derived,
  3. representations use symbols,
  4. some information is represented amodally, i.e. independent of perception or action, and
  5. some representations are not related to the cognitive agent's embodiment (do not require embodiment).

According to Markman and Dietrich, symbolic systems are too rigid to cope well with the variability and ambiguity of situations.

The theory of embodied cognition states that, to model natural cognition, it is necessary to build embodied cognitive agents because cognition cannot be understood out of context.

Neurorobotics is an activity which creates embodied cognitive agents.

The idea of embodied cognition is not just that cognitive processes are influenced by bodily states, but that body and environment play a constitutive role in cognition.

Wilson and Golonka argue that early research in cognitive science operated under the assumption that our sensory organs deliver impoverished input and that the brain needs to compensate for that.

Newer results show, according to the authors, that our perceptual world does give us many resources (in brain, body, and environment) to respond to our environment without creating detailed mental representations and operating on them.

The replacement hypothesis of embodiment (not the one in anthropology) states that a description of the dynamics of body, brain, and environment can replace a description of human cognition in terms of representations and computational processes.
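
As a concrete illustration of what such a dynamical description can look like (my example, not one drawn from the source), the Haken-Kelso-Bunz model describes bimanual coordination purely as dynamics of the relative phase between two limbs, without positing representations or computations over them:

```python
import numpy as np

# Haken-Kelso-Bunz model: the relative phase phi between two rhythmically
# moving fingers evolves as dphi/dt = -a*sin(phi) - 2*b*sin(2*phi).
# In-phase and anti-phase coordination, and the abrupt switch between them,
# fall out of these dynamics without any representational story.
def hkb_step(phi, a=1.0, b=0.2, dt=0.01):
    return phi + dt * (-a * np.sin(phi) - 2.0 * b * np.sin(2.0 * phi))

phi = np.pi - 0.1            # start near anti-phase coordination
for _ in range(5000):
    phi = hkb_step(phi)      # b/a < 0.25: anti-phase is unstable
print(round(phi, 3))         # relaxes toward 0: in-phase coordination wins
```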

Markman and Dietrich argue against the replacement hypothesis, saying that all of the alternative approaches still assume that brain states do reflect world properties.

According to Wilson and Golonka, there are four questions a truly embodied research programme (theory?) needs to ask:

  1. What is the task to be solved?
  2. Which (cognitive, bodily, environmental) resources does the organism have to solve the task?
  3. How can the available resources be used to solve the task?
  4. Does the organism indeed use the hypothesized resources in the hypothesized way?

Female crickets have a system for orienting towards sounds (esp. mating calls) which is arguably based more on mechanics and acoustics than on neural computation.

Sound-source localization requires much more neural computation in vertebrates than in crickets.
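
As a sketch of the kind of neural computation this refers to (an illustration of mine, not from the source), interaural time differences can be estimated by cross-correlating the two ear signals, roughly in the spirit of the Jeffress model; all numbers here are made up for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 44_100                               # sample rate in Hz
n = int(0.05 * fs)                        # 50 ms of signal
delay = 13                                # true ITD in samples (~0.29 ms)

signal = rng.standard_normal(n)           # broadband stand-in for a call
left = signal
right = np.roll(signal, delay)            # the right ear hears it later

# Cross-correlate the two ear signals and read the ITD off the peak.
corr = np.correlate(left, right, mode="full")
lag = np.argmax(corr) - (len(right) - 1)
print(f"estimated ITD: {abs(lag) / fs * 1000:.2f} ms "
      f"(true: {delay / fs * 1000:.2f} ms)")
```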

Wilson and Golonka argue that representations and computational processes can be replaced by world-body-brain dynamics even in neurolinguistics.

Since speech happens in a brain which is part of a body in a physical world, it is without doubt possible to describe it in terms of world-body-brain dynamics.

The question is whether that is a good way of describing it. Such a description may be very complex and difficult to handle; it might run counter to what explanation in science is supposed to do.

The fact that the brain regions activated by (hearing, reading...) certain words correspond to the categories the words belong to (motor areas for action words, etc.) suggests semantic grounding in perception and action.

Words from some categories do not activate brain regions which are related to their meaning. The semantics of those words do not seem to be grounded in perception or action. Pulvermüller calls such categories and their neural representations disembodied.

Some abstract, disembodied words seem to activate areas in the brain related to emotional processing. These words may be grounded in emotion.

`Disembodied' theories account for intentionality relatively well: they posit cognitive representations which stand for real-world entities even in their absence (or nonexistence).

Embodied cognition has a harder time explaining offline cognition, i.e. cognition about things that don't stimulate the senses.

Mental simulation, i.e. simulation of sensorimotor interaction, is one way in which embodied cognitive theories can account for offline cognition.
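
One computational gloss on mental simulation (my reading, not the source's) is the offline rollout of a learned forward model: candidate actions are evaluated by predicting their sensory consequences rather than executing them. The function names below are illustrative:

```python
# Mental simulation as offline rollout of a forward model (toy sketch).
def forward_model(state, action):
    """Stands in for a learned prediction of the next sensory state."""
    return state + action                 # trivial 1-D dynamics

def simulate(state, plan):
    """Run a plan 'in the head': no motor output, only predicted states."""
    states = [state]
    for action in plan:
        state = forward_model(state, action)
        states.append(state)
    return states

# Evaluate two candidate plans offline and pick the better one to execute.
goal = 5.0
plans = [[1, 1, 1], [2, 2, 2]]
best = min(plans, key=lambda p: abs(simulate(0.0, p)[-1] - goal))
print(best)   # [2, 2, 2]: its simulated endpoint (6.0) is closest to 5.0
```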

Some of the proponents of embodied cognition argue against the replacement hypothesis.

Some things we think about (like moral judgements or economic dynamics) are very abstract, and it is hard to connect them to sensorimotor interactions.

Clark calls theories `radical embodiment' if they make one or more of the following claims:

  1. Classical tools of cognitive science are insufficient to understand cognition (and other tools, like dynamical systems theory, are needed),
  2. representations and computation on them are inadequate to describe cognition, and
  3. modularizing the brain is misleading.

In 1999, Clark called the evidence weak for the ideas that non-classical tools are necessary to understand cognition and that representations and computation are bad categories for thinking about cognition.

He conjectured that there is a middle ground between embodied and disembodied cognition.

Flies' flying and walking behavior is relatively directly influenced by visual stimulation: basic stimuli that suggest a body rotation of the fly lead to compensatory changes in flying and walking direction.

Flies use translational optic flow to detect impending collisions.

Direct connections from the vision to the motor system lead to highly stereotyped visuomotor behavior in the fly.

The stereotyped visuomotor flying behavior in the fly is modulated by internal states and input from other sensory modalities.

Sometimes, the best (fastest, least-suboptimal, most effortless etc.) response to a stimulus can be generated relatively directly from the way the world interacts with the body with little or no neural processing in between.
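
A classic toy example of such direct coupling (my illustration, not from the source) is a Braitenberg-style vehicle: light sensors wired almost straight to the steering produce light-seeking behavior with essentially no processing between sensing and acting:

```python
import math

LIGHT = (0.0, 0.0)                        # light source at the origin

def sensor_readings(x, y, heading):
    """Light intensity at two sensors mounted left and right of the body."""
    def intensity(sx, sy):
        return 1.0 / (1.0 + (sx - LIGHT[0]) ** 2 + (sy - LIGHT[1]) ** 2)
    dx, dy = math.cos(heading), math.sin(heading)
    left = intensity(x - 0.1 * dy, y + 0.1 * dx)    # offset to the left
    right = intensity(x + 0.1 * dy, y - 0.1 * dx)   # offset to the right
    return left, right

x, y, heading = 2.0, 1.0, 0.0
for _ in range(3000):
    left, right = sensor_readings(x, y, heading)
    heading += 4.0 * (left - right)       # steer toward the brighter side
    x += 0.01 * math.cos(heading)
    y += 0.01 * math.sin(heading)
print(f"final distance to the light: {math.hypot(x, y):.2f}")
```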

Barsalou writes that most researchers in cognitive psychology and cognitive science who accept that many neural representations are modal nevertheless hold that there are also amodal representations.

Embodied grounding can come not only from sensory perception but also from the perception of internal states.

Biorobotics has been a driving force in embodiment theory.

Theories of amodal representations of concepts hold that sensorimotor representations are transduced into representations which are divorced from sensory content (like feature lists, semantic networks, or frames).

According to modal theories of concepts, representations of concepts consist of sensorimotor representations.

Activating a concept activates (some of) the modal neurons which represent the concept. Barsalou et al. call this re-enactment.
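
To make the amodal/modal contrast concrete (my illustration, not Barsalou's formalism): an amodal account stores a transduced symbol structure divorced from experience, while a modal account keeps pointers into sensorimotor traces and re-enacts some of them when the concept is activated:

```python
# Amodal: a transduced feature list, divorced from sensory content.
amodal_cup = {"category": "container", "graspable": True, "has_handle": True}

# Modal: stored sensorimotor episodes; activating the concept re-activates
# (a subset of) the original modal traces -- "re-enactment".
cup_traces = {
    "visual": ["white cylinder", "curved handle seen from the right"],
    "haptic": ["smooth ceramic", "finger hooked through the handle"],
    "motor":  ["reach, pinch grip, lift"],
}

def reenact(traces, modalities):
    """Simulate activating a concept by replaying one trace per modality."""
    return {m: traces[m][0] for m in modalities}

print(reenact(cup_traces, ["visual", "motor"]))
```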

It is interesting that multisensory integration arises in cats in experiments in which no goal-directed behavior is connected with the stimuli, as that somewhat contradicts the paradigm of embodied cognition.

Zhao et al. propose a model which develops perception and behavior in parallel.

Their motivation is the embodiment idea stating that perception and behavior develop in behaving animals.

Disparity-selective neurons in visual cortex have preferred disparities of only a few degrees, whereas disparity in natural environments ranges over tens of degrees.

The possible explanation offered by Zhao et al. assumes that animals actively keep disparity within a small range during development, and that therefore only selectivity for small disparities develops.

Zhao et al. present a model of joint development of disparity selectivity and vergence control.

Zhao et al.'s model develops both disparity selectivity and vergence control in an effort to minimize reconstruction error.

It uses a form of sparse coding to learn to approximate its input and a variant of the actor-critic learning algorithm called the natural actor-critic reinforcement learning algorithm (NACREL).

The teaching signal for the NACREL algorithm is the model's reconstruction error after the action it produced.
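
The following sketch is mine and heavily simplified, not Zhao et al.'s implementation: a fixed linear encoder whose binocular basis functions assume zero disparity stands in for the learned sparse code, and NACREL is reduced to a finite-difference update of a single vergence parameter. It only illustrates how reconstruction error can act as the teaching signal for vergence control:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 16                                   # patch length per eye
B = rng.standard_normal((N, 8))          # monocular basis, 8 components
D = np.vstack([B, B])                    # binocular dictionary: identical in
                                         # both eyes, i.e. tuned to disparity 0

def avg_error(residual_disparity, samples=50):
    """Mean reconstruction error of binocular patches at a given disparity."""
    total = 0.0
    for _ in range(samples):
        base = rng.standard_normal(N)
        left = np.convolve(base, np.ones(5) / 5, mode="same")  # smooth patch
        right = np.roll(left, residual_disparity)
        x = np.concatenate([left, right])
        coeffs, *_ = np.linalg.lstsq(D, x, rcond=None)
        total += np.sum((x - D @ coeffs) ** 2)
    return total / samples

true_disparity = 5                       # set by the object's fixed depth
vergence = 0                             # the policy parameter being learned
for _ in range(15):
    # The reconstruction error after each candidate action is the teaching
    # signal: step toward the vergence whose residual reconstructs better.
    if (avg_error(true_disparity - (vergence + 1))
            < avg_error(true_disparity - (vergence - 1))):
        vergence += 1
    else:
        vergence -= 1
print(f"learned vergence: {vergence} (true disparity: {true_disparity})")
```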

Some task-dependency in representations may arise from embodied learning, where actions bias the experiences that are being learned from.

Conversely, the narrow range of disparities reflected in disparity-selective neurons in visual cortex might be due to goal-directed feature learning.

There is the view that perception is an active process and cannot be understood without an active component.

The terms `active vision', `active perception', `smart sensing', and `animate vision' are sometimes used synonymously.

Active perception and its synonyms usually refer to a sensor which can be moved to change the way it perceives the world.

The way in which the perception of the world changes when the sensor is moved physically is a source of information in addition to static perception of the world.
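
A worked example of such information (mine, not from the source) is depth from motion parallax: a sideways-translating sensor sees a static point sweep across its field of view, and the angular rate of that sweep, together with the known self-motion speed, gives the point's depth as roughly Z = v / omega:

```python
import math

v = 0.30           # sensor translation speed in m/s (a proprioceptive cue)
true_depth = 2.0   # meters; unknown to the observer
dt = 0.01          # time between two "frames" in seconds

# Bearing of a point straight ahead, before and after the sensor moves.
bearing_t0 = math.atan2(0.0, true_depth)
bearing_t1 = math.atan2(-v * dt, true_depth)

omega = (bearing_t1 - bearing_t0) / dt    # optic-flow rate in rad/s
estimated_depth = v / abs(omega)          # Z = v / omega near the axis
print(f"estimated depth: {estimated_depth:.3f} m (true: {true_depth} m)")
```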

Kleesiek et al. use a recurrent neural network with parametric bias (RNNPB) to classify objects from the multisensory percepts induced by interacting with them.

Antonelli et al. use Bayesian and Monte Carlo methods to integrate optic flow and proprioceptive cues to estimate distances between a robot and objects in its visual field.

Embodied robots confront the full complexity of sensing and action that the real world poses, a complexity not present in simple models and simulations.

Some argue that the main task of cognition is generating the correct actions.

If the main task of cognition is generating the correct actions, then it is not important in itself to recover a perfect representation of the world from perception.

The efficient coding hypothesis does not take into account any task that neural processing is supposed to accomplish. Some redundancy may make a code more suitable for a particular task. This is especially true when the values being represented are not equally distributed, when there is noise, and when responding correctly to some values yields higher utility than responding correctly to others.
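
A small illustration of the noise point (mine, not from the source): a maximally efficient, non-redundant code transmits each bit once and inherits the channel's full error rate, while a redundant threefold repetition code decoded by majority vote does much better:

```python
import numpy as np

rng = np.random.default_rng(0)

flip_p = 0.1                     # probability that the channel flips a unit
n_trials = 100_000
bits = rng.integers(0, 2, n_trials)

# Efficient (non-redundant) code: one noisy unit per bit.
noisy = bits ^ (rng.random(n_trials) < flip_p)
eff_error = np.mean(noisy != bits)

# Redundant code: three noisy units per bit, decoded by majority vote.
reps = np.repeat(bits[:, None], 3, axis=1)
noisy_reps = reps ^ (rng.random(reps.shape) < flip_p)
decoded = (noisy_reps.sum(axis=1) >= 2).astype(int)
red_error = np.mean(decoded != bits)

print(f"non-redundant error rate: {eff_error:.3f}")   # ~0.100
print(f"redundant (3x) error rate: {red_error:.3f}")  # ~0.028 = 3p^2 - 2p^3
```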

A complete theory of early visual processing would need to address more aspects than coding efficiency, optimal representation and cleanup. Tasks and implementation would have to be taken into account.

Non-primates often see only one or two, or more than three, primary colors. This is probably because their visual systems are used for different tasks (like hunting in the dark). Since efficient coding does not take into account the task, implementation, and base rate, it cannot explain this variability.

Marr writes: "The usefulness of a representation depends upon how well suited it is to the purpose for which it is used". That's pure embodiment.