# Show Reference: "Goal-Directed Feature Learning"

Cornelius Weber and Jochen Triesch. "Goal-Directed Feature Learning". In *2009 International Joint Conference on Neural Networks* (June 2009), pp. 3319–3326. doi:10.1109/ijcnn.2009.5179064
@inproceedings{weber-and-triesch-2009b,
address = {Pasadena, CA},
author = {Weber, Cornelius and Triesch, Jochen},
booktitle = {2009 International Joint Conference on Neural Networks},
doi = {10.1109/ijcnn.2009.5179064},
isbn = {978-1-4244-3548-7},
keywords = {ann, learning, model, reinforcement-learning, unsupervised},
location = {Atlanta, GA, USA},
month = jun,
pages = {3319--3326},
posted-at = {2013-06-24 09:36:21},
priority = {2},
publisher = {IEEE},
title = {{Goal-Directed} Feature Learning},
url = {http://dx.doi.org/10.1109/ijcnn.2009.5179064},
year = {2009}
}



Classical models assume that learning in cortical regions is well described within an unsupervised learning framework, while learning in the basal ganglia is modeled by reinforcement learning.

Representations in the cortex (e.g., in V1) develop differently depending on the task. This suggests that some feedback signal is involved and that learning in the cortex is not purely unsupervised.

Unsupervised learning models have been extended with aspects of reinforcement learning.

The algorithm presented by Weber and Triesch borrows from SARSA.
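For reference, the tabular SARSA update the algorithm borrows from can be sketched as follows (the function name and the environment are hypothetical; α and γ are the usual learning rate and discount factor):

```python
import numpy as np

def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.9):
    """One tabular SARSA step: move Q(s, a) toward r + gamma * Q(s', a').

    SARSA is on-policy: the bootstrap target uses the action a' that the
    agent actually takes in s', not the greedy action.
    """
    td_error = r + gamma * Q[s_next, a_next] - Q[s, a]
    Q[s, a] += alpha * td_error
    return td_error
```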

SOMs can be used as a preprocessing stage for reinforcement learning: their winner-take-all characteristic reduces high-dimensional input to a discrete state.
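A minimal sketch of this winner-take-all preprocessing (function names are hypothetical; the neighborhood function of a full SOM is omitted, making this closer to online K-means):

```python
import numpy as np

def som_winner(weights, x):
    """Winner-take-all: index of the unit whose weight vector is closest to x.

    This index serves as the discrete state handed to the RL stage.
    """
    distances = np.linalg.norm(weights - x, axis=1)
    return int(np.argmin(distances))

def som_update(weights, x, eta=0.1):
    """Goal-free feature learning: move only the winning unit toward x."""
    k = som_winner(weights, x)
    weights[k] += eta * (x - weights[k])
    return k
```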

However, since standard SOMs receive no goal-dependent input, they focus on the globally strongest features (the statistically most predictive latent variables) and under-emphasize features that would be relevant to the task.

Weber and Triesch's model combines SOM- or K-means-like feature learning with prediction-error feedback as in reinforcement learning. The model is thus able to learn relevant features and disregard irrelevant ones.
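The general idea can be sketched by gating the winner-take-all feature update with the SARSA TD error. This is an illustrative assumption, not the paper's exact learning rule: here the winner's weight update is simply scaled by |δ|, so features that matter for value prediction are adapted more strongly.

```python
import numpy as np

def goal_modulated_update(W_feat, Q, x, a, r, x_next, a_next,
                          eta=0.05, alpha=0.1, gamma=0.9):
    """Hypothetical sketch: a winner-take-all feature layer whose learning
    is gated by the SARSA TD error, favoring task-relevant features."""
    # Winner-take-all discretization of current and next input
    s = int(np.argmin(np.linalg.norm(W_feat - x, axis=1)))
    s_next = int(np.argmin(np.linalg.norm(W_feat - x_next, axis=1)))
    # SARSA TD error on the discretized states
    delta = r + gamma * Q[s_next, a_next] - Q[s, a]
    Q[s, a] += alpha * delta
    # Feature update scaled by |delta|: states with large prediction
    # error get their features refined (assumed gating, for illustration)
    W_feat[s] += eta * abs(delta) * (x - W_feat[s])
    return s, delta
```

With this gating, inputs that never influence reward produce δ ≈ 0 once values converge, so their features stop adapting, while reward-relevant inputs keep shaping the feature layer.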