Weber and Triesch's model learns task-relevant features.

However, a brain region like the SC, which serves very general functions, cannot specialize in any one task: it has to serve all of the goals the system has.

It therefore should change its behavior depending on the task. Attention is one mechanism which might determine how to change behavior in a given situation.

If the goal is predictive of the input, then a purely unsupervised algorithm could take a representation of the goal as just another input.
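As an illustration (not from the original note): a purely unsupervised, Oja-style Hebbian learner can treat a goal representation as extra input dimensions simply by concatenation. The layer sizes, goal-specific stimulus prototypes, and the choice of Oja's rule are all assumptions made for this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def oja_step(w, x, lr=0.01):
    """Oja's rule: purely unsupervised Hebbian learning with decay."""
    y = w @ x
    return w + lr * y * (x - y * w)

n_sensory, n_goals = 8, 3
# hypothetical goal-specific stimulus prototypes (goal predicts input)
prototypes = rng.normal(size=(n_goals, n_sensory))

w = rng.normal(size=n_sensory + n_goals)
w /= np.linalg.norm(w)

for _ in range(2000):
    g = rng.integers(n_goals)
    goal = np.zeros(n_goals)
    goal[g] = 1.0
    sensory = prototypes[g] + 0.1 * rng.normal(size=n_sensory)
    x = np.concatenate([sensory, goal])  # goal is "just another input"
    w = oja_step(w, x)
```

Because the goal units co-vary with the sensory prototypes, the learned feature acquires weights from the goal inputs without any supervision signal.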

While the goal may often be predictive of the input, some error feedback is probably necessary to tune the degree to which the algorithm can be "distracted" by task-irrelevant but interesting stimuli.

Using multiple layers, each of which learns with a trace rule on successively larger time scales, is similar to the CTRNNs Stefan Heinrich uses to learn the structure of language. Could there be a combined model of sentence-structure learning and language processing on the one hand, and object-based visual or multi-modal attention on the other?
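A minimal sketch of what such a stack might look like, assuming a Földiák-style trace rule; the layer sizes, time constants `eta`, learning rate, and toy input are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def trace_step(w, trace, x, eta, lr=0.01):
    """Földiák-style trace rule: Hebbian learning driven by a
    low-pass-filtered ("trace") version of the layer's output.
    Larger eta -> slower trace -> longer effective time scale."""
    y = w @ x
    trace = eta * trace + (1.0 - eta) * y
    w = w + lr * np.outer(trace, x)
    w /= np.linalg.norm(w, axis=1, keepdims=True)  # keep weights bounded
    return w, trace, y

sizes = [16, 8, 4, 2]              # input dimension plus three layers
etas = [0.2, 0.6, 0.9]             # successively larger time scales
ws = [rng.normal(size=(sizes[i + 1], sizes[i])) for i in range(3)]
traces = [np.zeros(sizes[i + 1]) for i in range(3)]

for t in range(500):
    x = np.sin(0.1 * t + np.arange(sizes[0]))  # toy temporally correlated input
    for i in range(3):
        ws[i], traces[i], x = trace_step(ws[i], traces[i], x, etas[i])
```

Higher layers, with their slower traces, end up learning features that are invariant over longer stretches of the input sequence, which is the rough analogue of the slower CTRNN layers.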

There are modulatory projections from AES to SC. This looks like a parallel to connections in visual cortex, because

  • the SC is "low" in the information-processing hierarchy and the AES is high,
  • the projections from the SC are topographically organized,
  • the AES-SC projections are modulatory.


  • I don't know about the topographic organization and bifurcation properties of the descending projection.
  • I don't know whether there are (indirect?) connections from SC to AES and, if so, whether they are topographically organized.

Possibly this is a point for future work: model cortico-collicular connections as predictions. But in Friston's framework, there would have to be ascending connections, too.
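For concreteness, a generic sketch of that arrangement in Friston's scheme (this is textbook two-level predictive coding, not a model of the actual AES-SC circuit; the mapping `W` and all sizes are assumed): the descending connection carries a prediction of the lower level's activity, and the ascending connection carries the prediction error.

```python
import numpy as np

def settle(x, W, steps=50, lr=0.1):
    """Two-level predictive-coding relaxation.
    W maps the higher-level state to a prediction of the lower level;
    the ascending signal is the residual (prediction error)."""
    mu = np.zeros(W.shape[1])          # higher-level ("cortical") estimate
    for _ in range(steps):
        pred = W @ mu                  # descending prediction
        err = x - pred                 # ascending prediction error
        mu = mu + lr * (W.T @ err)     # error drives the higher level
    return mu, err

W = np.eye(4)[:, :2]                   # toy generative mapping (assumed)
x = W @ np.array([1.0, -2.0])          # observation explainable by the model
mu, err = settle(x, W)
```

When the observation is explainable by the higher level, the ascending error signal decays toward zero as the estimate settles.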

Judging by the abstract, von der Malsburg's Democratic Integration does what I believe is impossible: it lets a system learn the reliability of its sensory modalities from how well they agree with the consensus among the modalities.
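Roughly the kind of update I take the scheme to use (a sketch reconstructed from the abstract only; the paper's exact quality measure may differ): each modality's reliability is relaxed toward how strongly that modality supports the jointly selected target. The sketch also makes the circularity visible: reliability is judged against a consensus that the reliabilities themselves produced.

```python
import numpy as np

def di_step(cues, r, tau=0.9):
    """One Democratic-Integration-style update (sketch).
    cues: (n_modalities, n_positions) saliency maps.
    r:    reliability weights, summing to 1."""
    consensus = r @ cues                 # reliability-weighted combination
    winner = np.argmax(consensus)        # jointly selected target
    quality = cues[:, winner]            # each modality's support for it
    quality = quality / quality.sum()
    r = tau * r + (1.0 - tau) * quality  # relax reliabilities toward quality
    return r / r.sum(), consensus

cues = np.array([
    [0.1, 0.9],   # modality 0: prefers position 1
    [0.8, 0.2],   # modality 1: the odd one out
    [0.2, 0.8],   # modality 2: prefers position 1
])
r = np.full(3, 1.0 / 3.0)
for _ in range(20):
    r, consensus = di_step(cues, r)
```

Under this update the dissenting modality's reliability shrinks simply because it disagrees with the majority, regardless of which modality is actually correct.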