[Neuroinfo] [JOBS] 1 Fully funded PhD scholarship in Robotics and AI: Social perception in unstructured environments

Dimitri Ognibene dimitri.ognibene at gmail.com
Thu Jun 9 17:55:02 CEST 2022



######### Apologies for cross-posting #########

Dear colleagues,

The University of Milano-Bicocca is offering one doctoral scholarship as part
of the “PhD Program of National Interest in Robotics and Intelligent Machines
(DRIM)”.

Main Theme: Social perception in unstructured environments



Funder: National Robotics Doctoral Consortium



Working place: University of Milano-Bicocca, Milan



Deadline: 29th of June 2022



More info: https://drim.i-rim.it/en/ https://drim.i-rim.it/en/admission/
https://sites.google.com/site/dimitriognibenehomepage/jobs



Template for the motivation letter:
https://drim.i-rim.it/wp-content/uploads/2022/05/Template-Motivation-Project-Letter.rtf



Contact: dimitri.ognibene at unimib.it



Description of the candidate: We are looking for the ideal PhD candidate to
discover how to enable robots to interact with humans in the wild, given the
perceptual and computational limits such robots face. The candidate will have
the chance to explore both practical machine learning and more formal methods
to develop the AI controllers of social robots. There will also be
opportunities for interdisciplinary collaboration, looking at how humans and
other organisms solve similar problems.

A passion for ideas and challenges (and for maths and programming) will be
crucial.



Requirements:

Applicants are expected to have good programming skills and an interest in
improving them further.

Knowledge of statistics, control systems theory, artificial intelligence,
computer vision, and machine learning methodologies and libraries would be an
important plus. Similarly, the ability to understand and design psychological
tasks, as well as to use statistical methods to evaluate experimental results
and the effectiveness of human-robot interaction, would be valuable.
Experience with real-time 3D engines and/or VR platforms, such as Unity3D,
Unreal Engine, or similar, or with robotic platforms, will also be considered
positively.



Description of the field:



In the last ten years, with the advent of modern deep learning methodologies,
substantial performance improvements have been observed in perception for
robots and other artificial systems. However, interaction with unstructured
environments remains highly challenging due to the variety of conditions and
to crucial sensory limits, such as occlusions and a limited field of view
(FOV). This position will focus on the study and development of systems that
can perceive others’ states in unstructured environments and predict their
actions, intentions, and beliefs.



One possible line of research would focus on adaptive and social active
perception mechanisms, which make it possible to deal dynamically with
sensory limits. Such mechanisms have received limited attention, yet they
play a crucial role in human perception (Ognibene & Demiris, 2013; Lee,
Ognibene, et al., 2015). It has recently been shown that they may
substantially improve not only execution efficiency but also learning
performance, and may even enable online adaptation to new environments
(Ognibene & Baldassarre, 2015); however, these properties have not yet been
fully scaled to social settings.

Moreover, active perception plays a crucial role when interacting with other
agents, who add relevant scene dynamics and may occlude important
information. At the same time, those agents may have their own sensory limits
and active perception strategies, which must be carefully parsed to support
effective social interaction (Ognibene, Mirante, et al., 2019), e.g. to
handle false beliefs and theory of mind (Bianco & Ognibene, 2020). Most
importantly, social interaction increases the demand for integrating
information about task and context, i.e. simultaneously perceiving the states
of other agents, their effectors, and other scene elements. This can be
strongly affected by the limited field of view and is challenging for active
perception, which must focus on the right element at the right time
(Ognibene, Chinellato, et al., 2013) and adapt to different types of
interaction. The work may focus not only on advancing technical performance
but also on understanding and modelling how humans perform and adapt social
perception, or on designing active social perception to improve the perceived
quality of human-robot interactions.
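
To make the notion of active perception above concrete, here is a minimal,
self-contained Python sketch, written for this announcement rather than taken
from any of the cited works. It assumes a toy discrete setting: a belief over
which of a few targets another agent intends, and one observation model per
gaze direction; the robot greedily fixates wherever the expected information
gain is highest. All names and numbers are illustrative assumptions.

import numpy as np

def entropy(p):
    """Shannon entropy (nats) of a discrete distribution."""
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def expected_info_gain(belief, like):
    """Expected reduction in uncertainty about the hidden state
    (e.g. the other agent's target) for one gaze direction.

    belief: (S,) prior over hidden states
    like:   (O, S) observation likelihood P(o | s) under this gaze
    """
    p_obs = like @ belief  # predictive distribution P(o)
    gain = entropy(belief)
    for o, po in enumerate(p_obs):
        if po > 0:
            posterior = like[o] * belief / po  # Bayes update for outcome o
            gain -= po * entropy(posterior)    # expected posterior entropy
    return gain

def choose_gaze(belief, likelihoods):
    """Greedily pick the gaze with the highest expected information gain."""
    return int(np.argmax([expected_info_gain(belief, L) for L in likelihoods]))

# Toy setting: 3 candidate targets; fixating target i is informative
# about state i only (a crude stand-in for a limited field of view).
S = 3
likelihoods = []
for i in range(S):
    L = np.full((2, S), 0.5)  # uninformative outside the fixated region
    L[:, i] = [0.9, 0.1]      # informative for the fixated target
    likelihoods.append(L)

belief = np.array([0.6, 0.3, 0.1])
print("look towards target", choose_gaze(belief, likelihoods))

A full treatment would plan such fixations over time under a POMDP
formulation rather than greedily, but the one-step information-gain rule
above captures the core trade-off the project addresses.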



References:

- Bianco, F., & Ognibene, D. (2020). From psychological intention recognition
  theories to adaptive theory of mind for robots: Computational models. In
  Companion of the 2020 ACM/IEEE International Conference on Human-Robot
  Interaction (pp. 136-138).
- Ognibene, D., Mirante, L., & Marchegiani, L. (2019). Proactive intention
  recognition for joint human-robot search and rescue missions through
  Monte-Carlo planning in POMDP environments. In International Conference on
  Social Robotics (pp. 332-343). Springer, Cham.
- Lee, K., Ognibene, D., Chang, H. J., Kim, T. K., & Demiris, Y. (2015).
  STARE: Spatio-temporal attention relocation for multiple structured
  activities detection. IEEE Transactions on Image Processing, 24(12),
  5916-5927.
- Ognibene, D., Chinellato, E., Sarabia, M., & Demiris, Y. (2013). Contextual
  action recognition and target localization with an active allocation of
  attention on a humanoid robot. Bioinspiration & Biomimetics, 8(3), 035002.
- Ognibene, D., & Demiris, Y. (2013). Towards active event perception. In
  Proceedings of the 23rd International Joint Conference on Artificial
  Intelligence (IJCAI 2013).



-- 
Dimitri Ognibene, PhD
Associate Professor at Università Milano-Bicocca
Honorary Lecturer in Computer Science and Artificial Intelligence at the
University of Essex

http://sites.google.com/site/dimitriognibenehomepage/
Skype: dimitri.ognibene