<div dir="ltr"><div class="gmail_default" style="font-family:monospace,monospace;font-size:small;color:#3d85c6"><p dir="ltr" style="color:rgb(80,0,80);font-family:Arial,Helvetica,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">[JOBS] 1 Fully funded PhD scholarship  in Robotics and AI: Social perception in unstructured environments</span></p><br style="color:rgb(80,0,80);font-family:Arial,Helvetica,sans-serif"><p dir="ltr" style="color:rgb(80,0,80);font-family:Arial,Helvetica,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">######### Apologies for cross posting  #########</span></p><br style="color:rgb(80,0,80);font-family:Arial,Helvetica,sans-serif"><p dir="ltr" style="color:rgb(80,0,80);font-family:Arial,Helvetica,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Dear colleagues,</span></p><br style="color:rgb(80,0,80);font-family:Arial,Helvetica,sans-serif"><p dir="ltr" style="color:rgb(80,0,80);font-family:Arial,Helvetica,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">University Milano-Bicocca is offering 1 doctoral scholarship as part of the “ph.D. 
program of national interest in Robotics and Intelligent Machines (DRIM)”.</span></p><br style="color:rgb(80,0,80);font-family:Arial,Helvetica,sans-serif"><p dir="ltr" style="color:rgb(80,0,80);font-family:Arial,Helvetica,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:10.5pt;font-family:Roboto,sans-serif;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Main Theme: Social perception in unstructured environments</span></p><p dir="ltr" style="color:rgb(80,0,80);font-family:Arial,Helvetica,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"> </p><p dir="ltr" style="color:rgb(80,0,80);font-family:Arial,Helvetica,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:10.5pt;font-family:Roboto,sans-serif;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Funder: National Robotics Doctoral Consortium</span></p><p dir="ltr" style="color:rgb(80,0,80);font-family:Arial,Helvetica,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"> </p><p dir="ltr" style="color:rgb(80,0,80);font-family:Arial,Helvetica,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:10.5pt;font-family:Roboto,sans-serif;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Working place: Milan-Bicocca, University</span></p><p dir="ltr" style="color:rgb(80,0,80);font-family:Arial,Helvetica,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"> </p><p dir="ltr" style="color:rgb(80,0,80);font-family:Arial,Helvetica,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:10.5pt;font-family:Roboto,sans-serif;color:rgb(0,0,0);background-color:transparent;font-weight:700;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Deadline:</span><span style="font-size:10.5pt;font-family:Roboto,sans-serif;color:rgb(255,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"> 29th of June 2022</span></p><p dir="ltr" style="color:rgb(80,0,80);font-family:Arial,Helvetica,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"> </p><p dir="ltr" style="color:rgb(80,0,80);font-family:Arial,Helvetica,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:10.5pt;font-family:Roboto,sans-serif;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">More info: </span><a href="https://drim.i-rim.it/en/" rel="noreferrer" target="_blank" style="text-decoration-line:none"><span style="font-size:10.5pt;font-family:Roboto,sans-serif;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;text-decoration-line:underline;vertical-align:baseline;white-space:pre-wrap">https://drim.i-rim.it/en/</span></a><span style="font-size:10.5pt;font-family:Roboto,sans-serif;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"> </span><a href="https://drim.i-rim.it/en/admission/" rel="noreferrer" target="_blank" 
style="text-decoration-line:none"><span style="font-size:10.5pt;font-family:Roboto,sans-serif;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;text-decoration-line:underline;vertical-align:baseline;white-space:pre-wrap">https://drim.i-rim.it/en/admission/</span></a><span style="font-size:10.5pt;font-family:Roboto,sans-serif;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"> </span><a href="https://sites.google.com/site/dimitriognibenehomepage/jobs" rel="noreferrer" target="_blank" style="text-decoration-line:none"><span style="font-size:10.5pt;font-family:Roboto,sans-serif;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;text-decoration-line:underline;vertical-align:baseline;white-space:pre-wrap">https://sites.google.com/site/dimitriognibenehomepage/jobs</span></a></p><p dir="ltr" style="color:rgb(80,0,80);font-family:Arial,Helvetica,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"> </p><p dir="ltr" style="color:rgb(80,0,80);font-family:Arial,Helvetica,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:10.5pt;font-family:Roboto,sans-serif;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Template for the motivation letter: <a href="https://drim.i-rim.it/wp-content/uploads/2022/05/Template-Motivation-Project-Letter.rtf" rel="noreferrer" target="_blank">https://drim.i-rim.it/wp-content/uploads/2022/05/Template-Motivation-Project-Letter.rtf</a></span></p><p dir="ltr" style="color:rgb(80,0,80);font-family:Arial,Helvetica,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"> </p><p dir="ltr" style="color:rgb(80,0,80);font-family:Arial,Helvetica,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:10.5pt;font-family:Roboto,sans-serif;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Contact: <a href="mailto:dimitri.ognibene@unimib.it" rel="noreferrer" target="_blank">dimitri.ognibene@unimib.it</a></span></p><p dir="ltr" style="color:rgb(80,0,80);font-family:Arial,Helvetica,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"> </p><p dir="ltr" style="color:rgb(80,0,80);font-family:Arial,Helvetica,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:10.5pt;font-family:Roboto,sans-serif;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Description of the candidate: We are looking for the perfect PhD candidate to find out how to enable robots to interact with humans in the wild considering the perceptual and computational limits they have. The candidate will have the chance to explore practical machine learning and more formal methods to develop the AI controller of social robots. 
There will also be the opportunity for interdisciplinary collaboration to look at how humans and other organisms solve similar problems.</span></p><p dir="ltr" style="color:rgb(80,0,80);font-family:Arial,Helvetica,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:10.5pt;font-family:Roboto,sans-serif;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">It will be crucial to be passionate about ideas and challenges (and maths and programming).</span></p><p dir="ltr" style="color:rgb(80,0,80);font-family:Arial,Helvetica,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"> </p><p dir="ltr" style="color:rgb(80,0,80);font-family:Arial,Helvetica,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:10.5pt;font-family:Roboto,sans-serif;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Requirements:</span></p><p dir="ltr" style="color:rgb(80,0,80);font-family:Arial,Helvetica,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:10.5pt;font-family:Roboto,sans-serif;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Applicants are expected to have good programming skills and be interested in further improving them.</span></p><p dir="ltr" style="color:rgb(80,0,80);font-family:Arial,Helvetica,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:10.5pt;font-family:Roboto,sans-serif;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Knowledge of statistics, control systems theory, artificial intelligence, computer vision, as well as machine learning methodologies, and libraries would be an important plus. Similarly, the ability to understand and design psychological tasks as well as use statistical methods to evaluate experimental results and human-robot interaction effectiveness would be valuable. 
Experience with real-time 3D engines and/or VR platforms (such as Unity3D, Unreal, and similar), or with robotic platforms, will also be considered positively.

Description of the field:

In the last 10 years, with the advent of modern deep learning methodologies, substantial performance improvements have been observed in perception for robots and other artificial systems. However, interaction with unstructured environments remains highly challenging due to the variety of conditions and crucial sensory limits, such as occlusions and a limited field of view (FOV). This position will focus on the study and development of systems that can perceive others' states in unstructured environments and predict their actions, intentions, and beliefs.

A possible line of research would focus on adaptive and social active perception mechanisms, which allow an agent to deal dynamically with its sensory limits; these mechanisms have received limited attention but play a crucial role in human perception (Ognibene & Demiris, 2013; Lee, Ognibene et al., 2015). It has recently been shown that such mechanisms may substantially improve learning performance as well as execution efficiency, and may even enable online adaptation to new environments [Ognibene & Baldassarre, 2015]; however, these properties have not yet been fully scaled to social settings. Moreover, active perception also plays a crucial role when interacting with other agents, who add relevant scene dynamics and may occlude important information. At the same time, those agents have their own sensory limits and active perception strategies, which must be carefully parsed to support effective social interaction [Ognibene, Mirante et al., 2019], e.g. to handle false beliefs and theory of mind [Bianco & Ognibene, 2020]. Most importantly, social interaction increases the demand for integrating information about task and context, i.e. simultaneously perceiving the states of other agents, their effectors, and other scene elements. This is strongly affected by the limited field of view and is challenging for active perception, which must focus on the right element at the right time [Ognibene, Chinellato et al., 2013] and adapt to different types of interaction. The work may focus not only on advancing technical performance, but also on understanding and modelling how humans perform and adapt social perception, or on designing active social perception to improve the perceived quality of human-robot interactions.
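To make "focusing on the right element at the right time" a bit more concrete, here is a minimal, purely illustrative Python sketch (not part of the official call; the scene elements, numbers, and observation model are all hypothetical). A robot with a limited field of view keeps a discrete Bayesian belief over which object a human intends to grasp and, at each step, fixates the scene element (face, hand, or one of the candidate objects) with the highest expected one-step information gain, then updates its belief from the simulated observation:

# Illustrative sketch only (not part of the call): greedy active perception
# for intention recognition under a limited field of view. All names here are
# hypothetical. The robot keeps a discrete Bayesian belief over which object
# the human intends to grasp and, at each step, fixates the scene element whose
# observation is expected to reduce that uncertainty the most.

import numpy as np

rng = np.random.default_rng(0)

GOALS = ["cup", "book", "phone"]                      # candidate intentions
FIXATIONS = ["face", "hand", "cup", "book", "phone"]  # where the robot can look

def entropy(p):
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def obs_model(goal_idx, fixation):
    """Toy p(observation | goal, fixation): the observation is the apparent
    target; looking at the hand is most informative, the face a bit less,
    a single object least."""
    reliability = {"hand": 0.8, "face": 0.6}.get(fixation, 0.4)
    probs = np.full(len(GOALS), (1.0 - reliability) / (len(GOALS) - 1))
    probs[goal_idx] = reliability
    return probs

def expected_info_gain(belief, fixation):
    """One-step expected reduction in entropy of the goal belief."""
    gain = entropy(belief)
    for o in range(len(GOALS)):
        joint = np.array([belief[g] * obs_model(g, fixation)[o] for g in range(len(GOALS))])
        p_o = joint.sum()
        if p_o > 0:
            gain -= p_o * entropy(joint / p_o)
    return gain

belief = np.ones(len(GOALS)) / len(GOALS)  # uniform prior over intentions
true_goal = 0                              # the human actually reaches for the cup

for step in range(5):
    # Where to look: greedy maximisation of expected information gain.
    fixation = max(FIXATIONS, key=lambda f: expected_info_gain(belief, f))
    # Simulate an observation from that fixation and do a Bayesian update.
    obs = rng.choice(len(GOALS), p=obs_model(true_goal, fixation))
    belief = belief * np.array([obs_model(g, fixation)[obs] for g in range(len(GOALS))])
    belief = belief / belief.sum()
    print(f"step {step}: looked at {fixation:5s} -> belief over {GOALS} = {np.round(belief, 2)}")

A realistic system would of course replace the toy observation model with learned perception modules and the greedy one-step rule with look-ahead planning, e.g. Monte-Carlo planning in POMDPs as in Ognibene, Mirante et al. (2019) below.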
References:
- Bianco, F., & Ognibene, D. (2020). From psychological intention recognition theories to adaptive theory of mind for robots: Computational models. In Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (pp. 136-138).
- Ognibene, D., Mirante, L., & Marchegiani, L. (2019). Proactive intention recognition for joint human-robot search and rescue missions through Monte-Carlo planning in POMDP environments. In International Conference on Social Robotics (pp. 332-343). Springer, Cham.
- Lee, K., Ognibene, D., Chang, H. J., Kim, T. K., & Demiris, Y. (2015). STARE: Spatio-temporal attention relocation for multiple structured activities detection. IEEE Transactions on Image Processing, 24(12), 5916-5927.
- Ognibene, D., Chinellato, E., Sarabia, M., & Demiris, Y. (2013). Contextual action recognition and target localization with an active allocation of attention on a humanoid robot. Bioinspiration & Biomimetics, 8(3), 035002.
- Ognibene, D., & Demiris, Y. (2013). Towards active event perception. In Proceedings of the 23rd International Joint Conference on Artificial Intelligence (IJCAI 2013).

--
Dimitri Ognibene, PhD
Associate Professor at Università Milano-Bicocca
Honorary Lecturer in Computer Science and Artificial Intelligence at the University of Essex
http://sites.google.com/site/dimitriognibenehomepage/
Skype: dimitri.ognibene