Show Tag: philosophical


One of the goals of science is human understanding.

Without human understanding, at least one goal of science is not fulfilled.

We extend ourselves using technology in the sense that we build things that give us epistemological access to parts of reality which would otherwise be beyond our reach.

This includes instruments which help us perceive the world in ways not given to us naturally, like microscopes or compasses, and machines which help us think about our theories more deeply than our cognitive limitations permit.

Verschure argues that models of what he calls the mind-brain-body nexus should particularly account for data about behavior at the system level, i.e., overt behavior. He calls this convergent validation.

In Verschure's concept of convergent validation, the researcher looks to nature not for inspiration but for constraints that allow the falsification or validation of models.

Mommy, where do models come from?

Should models be informed by normative theories like Bayesian or decision theory?

Verschure champions his model of Distributed Adaptive Control as a model comprising all aspects of the mind-brain-body nexus.

Verschure states his Distributed Adaptive Control (DAC) provides a solution to the symbol grounding problem.

The state spaces in the formal definition of Verschure's DAC already seem to comprise symbols.

Verschure states his is an early model in the tradition of what he calls the "predictive brain" hypothesis and relates it to Friston's free energy principle and Kalman filtering.
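The Kalman filter mentioned above is the textbook instance of such a "predictive brain" loop: predict the next state, then correct the prediction with a noisy sensory measurement, weighting each by its reliability. A minimal 1-D sketch, with purely illustrative noise values (not parameters from DAC or from Friston's framework):

```python
# Minimal 1-D Kalman filter: predict, then correct with a noisy measurement.
# q and r are illustrative process/measurement noise values, chosen here
# only to show the predict-correct cycle, not taken from any cited model.

def kalman_step(x_est, p_est, z, q=0.01, r=0.5):
    """One predict-correct cycle.
    x_est, p_est: current state estimate and its variance
    z: noisy measurement; q: process noise; r: measurement noise."""
    # Predict: state assumed constant, uncertainty grows by process noise
    x_pred = x_est
    p_pred = p_est + q
    # Correct: blend prediction and measurement via the Kalman gain
    k = p_pred / (p_pred + r)          # gain -> 1 when the sensor is trusted
    x_new = x_pred + k * (z - x_pred)  # prediction error drives the update
    p_new = (1 - k) * p_pred
    return x_new, p_new

x, p = 0.0, 1.0                        # vague initial belief
for z in [1.2, 0.9, 1.1, 1.0, 0.95]:  # noisy readings of a true value near 1.0
    x, p = kalman_step(x, p, z)
```

After a few measurements the estimate converges toward the true value while its variance shrinks, which is the sense in which perception here is iterative prediction-error correction.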

Biomimetic approaches have been ascribed various benefits.

Ron Sun integrates philosophical notions into his scientific writing against the (supposed) opinion of many scientists that philosophy has no place in science.

Ron Sun's mixing of philosophical with cognitive-scientific theories is supported by empiricist philosophers like Quine, who holds that there can be no meaningful philosophy that is not based on empirical evidence, and that philosophy is therefore not distinct from science.

This holds only as long as those theories are empirical theories.

Sun subscribes to Jackendoff's "Hypothesis of computational sufficiency". According to Sun, this hypothesis states that a mechanistic explanation is necessary and sufficient to explain consciousness. A mechanistic explanation, here, is an explanation in terms of physical processes.

Anyone who is not a dualist must assume that phenomenology can be traced to physical processes. Therefore, consciousness, being phenomenological, must be due to physical processes. To explain consciousness, explaining the physical processes is thus necessary.

According to Sun, one, if not the, aim of science is to come up with descriptions of phenomenology with lower and lower Kolmogorov complexity.

Framing the aim of science as coming up with descriptions of phenomenology with lower and lower Kolmogorov complexity is a very instrumentalist view.

Sun invokes Occam's razor to argue that shorter theories are better theories.
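Kolmogorov complexity itself is uncomputable, but compressed length is a standard computable stand-in for description length, and it makes the point concrete: phenomenology with regularities admits a much shorter description than patternless data. A minimal sketch, with arbitrary example data:

```python
# Compressed length as a computable proxy for description length.
# Structured data ("phenomenology" with regularities) compresses far
# better than patternless data. The example data are arbitrary.
import random
import zlib

regular = ("0123456789" * 100).encode()  # 1000 bytes of repeating structure
random.seed(0)
irregular = bytes(random.randrange(256) for _ in range(1000))  # 1000 patternless bytes

len_regular = len(zlib.compress(regular))      # tiny: the pattern is the description
len_irregular = len(zlib.compress(irregular))  # barely shrinks at all
```

A theory that captures the regularity plays the role of the short description; random data admits no theory shorter than itself.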

According to Musgrave, the "Ultimate Argument" for Scientific Realism is that realism explains why science is good not only at explaining known phenomenology but also at predicting new phenomenology. The argument holds that non-realism can regard a theory's successful prediction of new phenomenology only as a coincidence, and that scientific theories have been too successful for pure coincidence.

Musgrave shows that the "Ultimate Argument" of Scientific Realism is an inference to the best explanation of facts about science.

Thus, the argument advocates to (tentatively) accept Scientific Realism, of which science's success is a consequence, rather than Anti-Realism, which depicts science's success as pure coincidence, because it is the better explanation, not because the latter is the only consistent one.

Realism makes greater ontological commitments than non-realism, by asserting that unobservables exist. Usually, making greater commitments is a good thing for a theory because it entails being more testable, empirically. Unobservables are not observable, though, and therefore do not make the theory more testable.

According to Musgrave, the "Ultimate Argument" of Scientific Realism cannot convince the Positivist Anti-Realist, because the Positivist Anti-Realist does not value explanation via unobservables and thus not the explanation of scientific success through unobservables.

A nice argument for Turing's imitation game (a.k.a. the Turing test) is that it is no worse than how we conclude that other people are intelligent. We believe that we are not the only intelligent entities in this world only because other people respond to interaction in a way that is consistent with them being intelligent.

According to Patrick Winston, our mental development suddenly diverged from that of the Neanderthals, and that raises two central questions: What makes us different from other primates, and what is similar?

Patrick Winston differentiates three different kinds of models:

  • those that mimic behaviour
  • those that make predictions
  • those that increase understanding

Patrick Winston differentiates two kinds of cognitive performance:

  • reactive, "thermometer"-like behavior,
  • predictive, "model making" behavior

Patrick Winston says that Rodney Brooks was wrong in neglecting "model making", representational processes in human cognition.

Patrick Winston says that "asking better, biologically inspired questions" will make our AI dreams come true, because the space of possible solutions to the AI problem is large, and searching close to a known solution (natural intelligence) makes success more likely.

Patrick Winston says that neural nets are a mechanism rather than a method.

Taking inspiration for technical solutions from nature promises greater robustness.

Biomimetic (neural) robotics can provide feedback to neuroscience.

There is the view that perception is an active process and cannot be understood without an active component.

The terms 'active vision', 'active perception', 'smart sensing', and 'animate vision' are sometimes used synonymously.

The account of abstraction due to Hoare is that we first cluster objects according to arbitrary similarities. We then find clusters which are predictive of the future and name them. Subsequently, the similarities within such a named cluster are thought of as essential whereas the differences are perceived as unimportant.

The account of the process of abstraction due to Hoare is

  • Abstraction (selecting those properties of the real things we want to represent in our abstraction)
  • Representation (choosing symbols like words, pictograms... for the abstraction)
  • Manipulation (declaring rules for how to use the symbolic representation to predict what will happen to the real things under certain circumstances)
  • Axiomatisation (declaring rigorously the relationship between the symbols used in the representation and the properties of the real things being abstracted from)

"The intention and the result of a scientific inquiry is to obtain an understanding and control of some part of the universe."

"No substantial part of the universe is so simple that it can be grasped and controlled without abstraction."

"the best material model for a cat is another, or preferably the same cat."

A theoretical model of (a significant part of) the world would have complexity comparable to that of (that part of) the world, and we would be unable to understand and use it.

Critically, the components of the 'Bayesian Fundamentalist's' psychological models are not assumed to correspond to anything in the subject's mind.

Von Helmholtz already formulated the idea that prior knowledge, or expectation, is fused with sensory information into perception.

This idea is at the core of Bayesian theory.
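For Gaussian beliefs this fusion has a closed form: the percept (posterior mean) is a precision-weighted average of the prior expectation and the sensory reading. A minimal sketch with illustrative numbers:

```python
# Helmholtz's fusion as Bayes with Gaussians: the posterior mean is a
# precision-weighted average of prior mean and sensory measurement.
# All numeric values below are illustrative assumptions.

def fuse(mu_prior, var_prior, mu_sense, var_sense):
    """Posterior of a Gaussian prior combined with a Gaussian likelihood."""
    w_prior = 1.0 / var_prior    # precision = inverse variance
    w_sense = 1.0 / var_sense
    mu_post = (w_prior * mu_prior + w_sense * mu_sense) / (w_prior + w_sense)
    var_post = 1.0 / (w_prior + w_sense)
    return mu_post, var_post

# A confident expectation of position 0 meets a noisy sensor reporting 2:
mu, var = fuse(mu_prior=0.0, var_prior=0.5, mu_sense=2.0, var_sense=2.0)
# The percept lands much closer to the prior than to the sensor reading.
```

The more reliable source dominates: shrinking `var_sense` pulls the percept toward the measurement, shrinking `var_prior` pulls it toward the expectation.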

Love and Jones accuse 'Bayesian Fundamentalism' of focussing too much on the computational theory and neglecting more biologically constrained levels of understanding cognition.

There often are multiple computational theories of a given problem, differing in their assumptions about the hardware and the problem.

Computational theories of cognition alone are underconstrained.

'Bayesian Fundamentalism', like Behaviorism and evolutionary psychology, explains behavior purely from the point of view of the environment; it completely ignores the inner workings of the organism.

Connectionism used to use telegraph networks as its founding metaphor. Information processing units and physical neurons came later.

Connectionism has been criticised for

  • being too opaque,
  • lacking compositionality,
  • lacking productivity,
  • using biologically implausible learning rules,
  • being mostly generalized regression.
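The last criticism can be made concrete: a single connectionist unit with a sigmoid activation computes exactly the logistic-regression model. A minimal sketch with arbitrary illustrative weights:

```python
# The "generalized regression" criticism in a nutshell: one sigmoid unit
# is logistic regression. The weights and inputs are arbitrary examples.
import math

def unit(x, w, b):
    """One connectionist unit: weighted sum passed through a sigmoid."""
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-s))   # identical to the logistic model

p = unit([1.0, 2.0], w=[0.5, -0.25], b=0.1)  # interpretable as P(class=1 | x)
```

Multi-layer networks compose many such units, which is why critics describe them as (highly flexible) regression rather than a distinct kind of explanation.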