Show Reference: "Computational Audiovisual Scene Analysis for Dialog Scenarios"

Computational Audiovisual Scene Analysis for Dialog Scenarios. In IROS 2011 Workshop on Cognitive Neuroscience Robotics (September 2011), by Rujiao Yan, Tobias Rodemann, and Britta Wrede; edited by Yukie Nagai.
@inproceedings{yan-et-al-2011,
    abstract = {We introduce a system for Computational {Audio-Visual} Scene Analysis ({CAVSA}) with a focus on human-robot dialogs in multi-person environments. The general target of {CAVSA} is to learn who is speaking now, where the speaker is, and whether the speaker is talking to the robot or to other persons. In the application specified in this paper, we aim to estimate the number and position of speakers using several auditory and visual cues. Our test application for {CAVSA} is the online adaptation of audio-motor maps, where vision is used to provide position information about the speaker. The system can perform this adaptation during the normal operation of the robot, e.g., while the robot is engaged in conversation with a group of humans. Compared with prior online adaptation methods, our {CAVSA}-based online adaptation of audio-motor maps is more robust in situations with more than one speaker and when speakers dynamically enter and leave the scene.},
    author = {Yan, Rujiao and Rodemann, Tobias and Wrede, Britta},
    booktitle = {IROS 2011 Workshop on Cognitive Neuroscience Robotics},
    editor = {Nagai, Yukie},
    keywords = {alignment, learning, multi-modality, visual, visual-processing},
    month = sep,
    posted-at = {2011-10-12 14:29:02},
    priority = {2},
    title = {Computational Audiovisual Scene Analysis for Dialog Scenarios},
    url = {http://www.honda-ri.de/intern/Publications/PUBA-22},
    year = {2011}
}