"Jolg Lucke" 2008/10/29 Open PhD and Post-doc Positions at the NewComputational Vision Center in Frankfurt -------------------------------------------------------------------------------------------------- We offer a range of PhD and post-doc positions for theoretical and experimental work in the new 'Bernstein Focus Neurotechnology' in Frankfurt. ...... DESCRIPTION OF PROJECTS ----------------------------------------------------------------------- Autonomous Learning in an Infant-Like Active Vision System We will develop an active vision system that autonomously learns to perceive the world around it. An existing anthropomorphic robot head capable of fast saccade-like eye movements will be used. In close collaboration with some of the other projects, the robot will be given learning capabilities including attentional mechanisms and curiosity drives. The robot will learn to control its gaze and learn both low-level (stereo, motion) and high-level (shapes, objects, people) representations and predictive models in an autonomous fashion. PIs: Jochen Triesch, Cornelius Weber, Christoph von der Malsburg Contact: triesch@fias.uni-frankfurt.de ----------------------------------------------------------------------- A Learning Visual Sensor System for a Mobile Platform This project addresses the demonstration of learnt and continuously improving visual perception used on a mobile robot which shall safely navigate in an unknown indoor environment, detect and identify obstacles and moving objects in the scene. The emphasis is not on the navigation capabilities but on perception: the project emphasizes the autonomous learning of motion and near-field environment perception capabilities under egomotion, considering the conjunction of perception (vision) and action (motor signals). 
PIs: Rudolf Mester, Hanno Scharr
Contact: mester@vsi.cs.uni-frankfurt.de

-----------------------------------------------------------------------
Cooperative Neural Learning Approaches in a Multi-Camera Visual Surveillance Scenario

We plan to develop a system which exhibits autonomous learning of convergent cooperative processing of visual information in a large multi-camera setup arranged over an extended area. The demonstrator will show prototypically that a complex network of visual sensors can learn about the geometric and photometric interrelation between shared cameras, learn about the appearance and behavior of people, and ultimately learn about usual and unusual events in an autonomous fashion. This will be achieved by combining statistical methods with neural control and communication strategies.

PIs: Rudolf Mester, Jochen Triesch
Contact: mester@vsi.cs.uni-frankfurt.de