Show Tag: philosophy


The way we extend ourselves using technology matters for society as well: societal norms are based on what is possible in practice. Since technological progress extends what is possible in practice, those norms have to change.

We routinely use our countries' constitutions as the gold standard for what is and is not legitimate according to our values. However, constitutions restrict actions, and the implications of those actions change with what is technologically possible. Actions that were innocuous when our constitutions were conceived have since changed drastically in their ethical implications.

Percepts can be processed (in certain settings) and acted upon without our being conscious of them. This raises the question of what consciousness is for.

Some people hold that consciousness is not needed for anything but is merely a side effect of perceptual processing.

One theory of the function of consciousness is that it is needed to integrate information from different modalities and processing centers in the brain and coordinate their activity.

It makes sense that consciousness could be important for multi-sensory integration.

There are quite a number of different definitions of multi-sensory integration.

According to Palmer and Ramsey,

"Multisensory integration refers to the process by which information from different sensory modalities (e.g., vision, audition, touch) is combined to yield a rich, coherent representation of an object or event in the environment."
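The notes do not give a formal model of this combining, but a standard formalization in the literature is maximum-likelihood cue integration: estimates of the same property from two modalities are weighted by their inverse variances. A minimal sketch (the function name and the variance values are illustrative, not from the source):

```python
def integrate_cues(mu_v, var_v, mu_h, var_h):
    """Combine a visual and a haptic estimate of the same property
    (e.g., object size) by inverse-variance weighting (maximum likelihood)."""
    w_v = (1 / var_v) / (1 / var_v + 1 / var_h)
    w_h = 1 - w_v
    mu = w_v * mu_v + w_h * mu_h
    var = 1 / (1 / var_v + 1 / var_h)  # fused estimate is more reliable
    return mu, var

# vision is more reliable here, so the fused estimate lies closer to it,
# and the fused variance is smaller than either single-cue variance
mu, var = integrate_cues(mu_v=10.0, var_v=1.0, mu_h=12.0, var_h=4.0)
```

The fused representation is "rich and coherent" in the quoted sense: it is a single estimate that no individual modality could provide as reliably.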

Adams et al. argue that, since the brain is fast and requires little energy, researching biomimetic solutions can help solve the problem that robots have limited energy resources and computing power.

O'Regan and Noë acknowledge that cortical maps containing information about the world exist.

O'Regan and Noë deny, however, that the existence of cortical maps explains the metric quality of visual phenomenology.

That some phenomenon in brain activity correlates with consciousness (or aspects of consciousness) does not explain how consciousness arises from it.

In particular, coherent oscillations may correlate with consciousness, but they don't explain it according to O'Regan and Noë.

O'Regan and Noë highlight the importance of understanding seeing as an active process, as an exploratory activity.

All sensory input signals are equal, a priori, as are all motor outputs.

The difference between inputs from different modalities and between different motor outputs is their sensory-motor contingencies.

Actually, motor outputs are not different from secondary sensory signals—efferent copies are exactly that: motor output used for sensory processing. (And raw sensory input can be used as motor signals, as Braitenberg has shown.)
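Braitenberg's point can be made concrete with one of his vehicles: light-sensor readings are wired directly to motor speeds, with no intermediate representation. A minimal sketch of vehicle 2b (the sensor values and gain are illustrative):

```python
def vehicle_2b_step(left_light, right_light, gain=1.0):
    """Braitenberg vehicle 2b ('aggression'): each light sensor drives
    the motor on the opposite side, so the vehicle turns toward the
    light. The raw sensory input is used directly as a motor signal."""
    left_motor = gain * right_light
    right_motor = gain * left_light
    return left_motor, right_motor

# light is stronger on the right, so the left motor spins faster
# and the vehicle turns toward the light source
left, right = vehicle_2b_step(left_light=0.2, right_light=0.9)
```

The behavior (approach, or avoidance with uncrossed wiring) emerges entirely from the wiring; nothing in the vehicle distinguishes "sensory" from "motor" signals.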

O'Regan and Noë speak of the geometric laws that govern the relationship between moving the eyes and body and the change of an image in the retina.

The geometry of the changes (straight lines becoming curves on the retina when an object moves in front of the eyes) is not accessible to the visual system initially, because nothing tells the brain about the spatial relations between the photoreceptors in the retina.

O'Regan and Noë claim that the structure of the laws governing visual sensory-motor contingencies is different from the structure of other sensory-motor contingencies and that this difference gives rise to different phenomenology.

Braitenberg postulates the "law of uphill analysis and downhill invention", which states that it is easier to build something and see what it does (what it can do) than to analyse something from its observable output alone.

As Dennett points out in his review of Braitenberg's book, just assuming that the mind and the brain are the same thing (loosely speaking) is all well and good, but it does not help, because the brain is so complex that knowing its entire structure would not teach us much about the mind.

Using Braitenberg's "law of uphill analysis and downhill invention" can help, because it starts by designing simple things and seeing what behavior they exhibit.

If I understand O'Regan and Noë correctly, what it is like to drive a Porsche is the activation of knowledge of its sensory-motor contingencies by those sensory-motor contingencies.

Crucially, it is nothing derived of that knowledge.

O'Regan and Noë argue that there is no illusion of a "stable, high-resolution, full field representation of a visual scene" in the brain, but rather that people have the impression of being aware of everything in the scene.

The difference is that, even if we were aware of all the details, we would not need a photograph-like representation in the brain to be aware of them.

Ron Sun integrates philosophical notions into his scientific writing against the (supposed) opinion of many scientists that philosophy has no place in science.

Ron Sun's mixing of philosophical and cognitive-scientific theories is supported by empiricist philosophers like Quine, who hold that there can be no meaningful philosophy that is not based on empirical evidence, and that philosophy is therefore not distinct from science.

This holds as long as those theories are empirical theories.

According to Sun (and Wikipedia), Realists believe that unobservable entities in scientific theories really do exist—they are just unobservable.

To Constructive Empiricists, however, accepting a theory only means believing in the existence of the observable parts.

Thus, in Quine's terminology, Realists' ontological commitments include the unobservables, whereas Constructive Empiricists' commitments don't.

Pure neural modeling does not explain complex behavior.

Eye movements are important for visual consciousness.

Patrick Winston calls perception "guided hallucination".

"The intention and the result of a scientific inquiry is to obtain an understanding and control of some part of the universe."

"No substantial part of the universe is so simple that it can be grasped and controlled without abstraction."

"the best material model for a cat is another, or preferably the same cat."

A theoretical model of (a significant part of) the world would have complexity comparable to that of (that part of) the world, and we would be unable to understand and use it.

Sensation refers to the change of state of the nervous system induced purely by a stimulus. Perception integrates sensation with experience and training.

According to Friston, percepts are "the products of recognizing the causes of sensory input and sensation".

Functional segregation and integration are complementary principles of organization of the brain.

Grossberg states that ART predicts a functional link between consciousness, learning, expectation, attention, resonance, and synchrony and calls this principle the CLEARS principle.

Some argue that the main task of cognition is generating the correct actions.

If the main task of cognition is generating the correct actions, then it is not important in itself to recover a perfect representation of the world from perception.

The idea that neural activity does not primarily represent the world but 'action pointers', as put by Engel et al., speaks to the deep SC (superior colliculus), which is both 'multi-modal' and 'motor'.

If there is a close connection between the state of the world and the required actions, then it is easy to confuse internal representations of the world with 'action pointers'.

There is an illusion that there is a "stable, high-resolution, full field representation of a visual scene" in the brain.

Could the illusion that there is a "stable, high-resolution, full field representation of a visual scene" in the brain be the result of the availability heuristic? Whenever we are interested in some point in a visual scene, it is either at the center of our vision anyway, or we saccade to it. In both cases, detailed information of that scene is available almost instantly.

This seems to be what O'Regan and Noë imply (although they do not talk about the availability heuristic).

Jerome Feldman argues that the Neural Binding Problem is really four related problems and not distinguishing between them contributes to the difficulty of understanding them.

Jerome Feldman distinguishes between the following four "technical issues" that together form the binding problem: "General Considerations of Coordination", "The Subjective Unity of Perception", "Visual Feature-Binding", and "Variable Binding".

The general Binding Problem according to Jerome Feldman is really a problem of any distributed information processing system: it is difficult and sometimes impossible or intractable for a system that keeps and processes information in a distributed fashion to combine all the information available and act on it.

Jerome Feldman treats the sub-problem of "General Considerations of Coordination" within the general Binding Problem as more or less a problem of synchronization and states that modeling efforts are well underway, taking into account physiological details such as spiking behavior and neuronal oscillations.

The sub-problem of "Subjective Unity of Perception" according to Feldman is the problem of explaining why we experience perception as an "integrated whole" while it is processed by "largely distinct neural circuits".

Feldman relates his "Subjective Unity of Perception" to the stable world illusion.

Feldman gives a functional explanation of the stable world illusion, but he does not seem to explain "Subjective Unity of Perception".

Feldman states that enough is known about what he calls "Visual Feature Binding" that it need not be called a problem anymore.

Feldman explains Visual Feature Binding by the fact that all the features detected in the fovea usually belong together (because it is so small), and through attention. He cites Chikkerur et al.'s Bayesian model of the role of spatial and object attention in visual feature binding.

Feldman states that "Neural realization of variable binding is completely unsolved".

Feldman dismisses de Kamps and van der Velde's approaches to neural variable binding, stating that they do not work for the general case "where new entities and relations can be dynamically added".

According to Feldman, Hummel divides binding architectures into multiplicative and additive ones.
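A hedged illustration of that distinction (the vectors and operations are illustrative, not Hummel's actual models): multiplicative schemes bind a role and a filler with an outer (tensor) product, while additive schemes superimpose the two vectors.

```python
import numpy as np

role = np.array([1.0, 0.0, 1.0])    # e.g., a 'agent' role vector
filler = np.array([0.0, 1.0, 1.0])  # e.g., a 'John' filler vector

# multiplicative binding: the outer product keeps the role-filler
# pairing explicit, at the cost of a larger representation
multiplicative = np.outer(role, filler)

# additive binding: superposition stays small, but pairings can become
# ambiguous once several role-filler pairs are summed together
additive = role + filler
```

The size/ambiguity trade-off shown here is the usual motivation for distinguishing the two families.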

Although predecessors existed, Bayesian theory became popular in perceptual science in the 1980s and 1990s.

Stone speaks of the 'conservative nature of evolution' which recycles solutions and applies them wherever they fit. According to this, it is likely that any mechanisms found in visual processing operate in many if not all places of the brain dealing with different but structurally similar functions.

Redundancy reduction, predictive coding, efficient coding, sparse coding, and energy minimization are related hypotheses with similar predictions. All these theories are reasonably successful in explaining biological phenomena.
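As one concrete instance of this family of hypotheses, predictive coding can be sketched as a unit that transmits only the error between its prediction and the input, updating the prediction to reduce that error. A minimal sketch (the learning rate and input sequence are illustrative):

```python
def predictive_coding(inputs, lr=0.2):
    """Transmit only prediction errors; the running prediction tracks
    the input, so redundant (predictable) signal is removed from the
    channel -- linking predictive coding to redundancy reduction."""
    prediction = 0.0
    errors = []
    for x in inputs:
        error = x - prediction    # only this residual is transmitted
        errors.append(error)
        prediction += lr * error  # update prediction to reduce error
    return errors

# for a constant input, errors shrink toward zero as it becomes predictable
errors = predictive_coding([1.0] * 20)
```

Sparse coding and energy minimization make closely related predictions because, in each case, the system spends resources only on what it could not anticipate.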

"Constructing a mathematically precise account of the brain has the potential to change our view of how it works."

Computational theories of the brain account not only for how it works but also for why it should work that way.

"In order to understand a device one needs many different kinds of explanations." To understand vision, one needs theories that comply with the knowledge of the common man, the brain scientist, and the experimental psychologist, and that can be put to practical use.

Marr effectively argues normativity:

"... gone is any explanation in terms of neurons—except as a way of implementing a method. And present is a clear understanding of what is to be computed, how it is to be done, the physical assumptions on which the method is to be based, and some kind of analysis of algorithms that are capable of carrying it out."

It is important to make the distinction between different levels of understanding something (an information processing system) explicit.

Understanding that an abstract, mathematical description of the brain as an information-processing system is part of understanding the brain as a whole, one can rationally study

  • what is being processed,
  • why it is being processed,
  • how it is processed,
  • and whether or not processing it that way is optimal.

A representation is a formal system for making explicit certain entities or types of information, together with a specification of how the system does this.

And I shall call the result of using a representation to describe a given entity a description of the entity in that representation.

Any type of representation makes certain information explicit at the expense of information that is pushed into the background and may be quite hard to recover.
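This trade-off can be shown with a small example in the spirit of Marr's own Arabic-versus-Roman numerals case (the example itself is mine, not from the source): the same number in decimal and binary makes different properties explicit.

```python
n = 37

decimal = str(n)     # makes powers of ten explicit: '37'
binary = bin(n)[2:]  # makes powers of two explicit: '100101'

# parity is explicit (the last digit) in binary but requires computation
# in decimal; divisibility by ten is the reverse
is_even_binary = binary[-1] == '0'
is_mult_of_ten_decimal = decimal[-1] == '0'
```

Each representation pushes the other property "into the background", exactly as the note describes.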

Representation, algorithm, and hardware depend on each other and, critically, on the demands of the task.

The three levels at which any information-processing system needs to be understood are

  • computational theory
  • representation and algorithm
  • hardware implementation

According to Marr, the computational theory of an information-processing system is the theory of what it does, why it does what it does and "what is the logic of the strategy by which" what it does can be done.

Computational neuroscience is not the study of computational theories of the brain alone: it also deals with the other two levels of understanding the brain as an information-processing system.

Neurophysiology can help us understand the representations. Otherwise it is mainly concerned with the implementational side of the study of the brain as an information-processing system. Neurophysiological knowledge is hard to interpret in terms of algorithms and representations, especially without a clear understanding of the task (i.e., the computational theory).

Psychophysical results can inform the study of algorithms and representations.

There are three kinds of things we can learn from computational neuroscience:

  • Computational Theory of Perception If we want computers to do what we do, we need to understand what we do, why we do it, and how it can be done in general.
  • Implementation We can learn from nature how what we do can be implemented.
  • Algorithms and Representations We can learn from computational neuroscience good ways to represent information and process it.

Of course, if we do not use neural computers, we will have to adapt the algorithms and representations, since they may not be optimal on different hardware.

A heuristic program that solves some task is not a theory of that task! Theoretical analysis of the task and its domain is necessary!

Marr writes: "The usefulness of a representation depends upon how well suited it is to the purpose for which it is used". That's pure embodiment.

Schroeder names two general definitions of multisensory integration: One includes any kind of interaction between stimuli from different senses, the other only integration of information about the same object of the real world from different sensory modalities.

These definitions both are definitions on the functional level as opposed to the biological level with which Stein's definition is concerned.

Through simulations of neurons (and neuron ensembles), numbers of neurons and time scales can be monitored that are both impossible in vivo.

This is mainly an argument in favor of computational neuroscience. It is less valid for ANNs in classical AI, whose neuron models are quite detached from biological neurons.

According to Rucci et al., neuroscientists can use robots to quantitatively test and analyze their theories.

The degree to which neuroscientists can draw conclusions from computational models depends on the models' biological accuracy.

If input to biologically plausible models is too dissimilar to natural input, then that can lead to non-natural behavior of the model.

Sensory noise in robotic experiments tests a model's robustness; such noise is always realistic (though not necessarily natural).