
Browsing by Author "Nehaniv, Chrystopher L."

Item
    Exploring Human Teachers' Interpretations of Trainee Robots' Nonverbal Behaviour and Errors
    (University of Waterloo, 2021-04-22) Aliasghari, Pourya; Dautenhahn, Kerstin; Nehaniv, Chrystopher L.; Ghafurian, Moojan
In the near future, socially intelligent robots that can learn new tasks from humans may become widely available and increasingly able to help people. To play this role successfully, intelligent robots must not only interact effectively with humans while being taught, but humans must also be able to trust these robots after teaching them how to perform tasks. When human students learn, they usually provide nonverbal cues that display their understanding of and interest in the material; for example, they sometimes nod, make eye contact, or show meaningful facial expressions. Likewise, a humanoid robot's nonverbal social cues may enhance the learning process, provided the cues are legible to human teachers. To inform the design of such nonverbal interaction techniques for intelligent robots, our first study investigates humans' interpretations of nonverbal cues provided by a trainee robot. Through an online experiment (with 167 participants), we examine how different gaze patterns and arm movements with various speeds and different kinds of pauses, displayed by a student robot practising a physical task, affect teachers' understanding of the robot's attributes. We show that a robot can appear different in terms of its confidence, proficiency, eagerness to learn, etc., when those nonverbal factors are systematically adjusted.
Human students sometimes make mistakes while practising a task, but teachers may forgive them. Intelligent robots are machines and may therefore behave erroneously in certain situations. Our second study examines whether human teachers overlook a robot's small mistakes made while practising a recently taught task, provided the robot has already shown significant improvement. By means of an online rating experiment (with 173 participants), we first determine how severe a robot's errors in a household task (i.e., preparing food) are perceived to be.
We then use that information to design and conduct another experiment (with 139 participants) in which participants are given the experience of teaching trainee robots. According to our results, teachers' perceptions improve as the robots get better at performing the task. We also show that while bigger errors have a greater negative impact on human teachers' trust than smaller ones, even a small error can significantly damage trust in a trainee robot. This effect also correlates with participants' personality traits. The present work contributes by extending HRI knowledge concerning human teachers' understanding of robots, in a specific teaching scenario in which teachers observe behaviours whose primary goal is accomplishing a physical task.
Item
    How Do Different Modes of Verbal Expressiveness of a Student Robot Making Errors Impact Human Teachers' Intention to Use the Robot?
    (Association for Computing Machinery, 2021-11-09) Aliasghari, Pourya; Ghafurian, Moojan; Nehaniv, Chrystopher L.; Dautenhahn, Kerstin
When humans make a mistake, they often employ strategies to manage the situation and possibly mitigate the mistake's negative effects. Robots that operate in the real world will also make errors and might therefore benefit from such recovery strategies. In this work, we studied how different verbal expression strategies, used by a trainee humanoid robot when committing an error after learning a task, influence participants' intention to use it. We performed a virtual experiment in which the robot's expression modes were: (1) remaining silent; (2) verbal expression that ignored any errors; or (3) verbal expression that addressed any error by apologizing, as well as acknowledging and justifying it. To simulate teaching, participants remotely demonstrated their preferences to the robot in a series of food preparation tasks; at the very end of the teaching session, however, the robot made an error (in two of the three experimental conditions). Based on data collected from 176 participants, we observed that, compared to the mode in which the robot remained silent, both modes in which the robot used verbal expression significantly enhanced participants' intention to use the robot in the future if it made an error in the last practice round. When no error occurred at the end of the practice rounds, a silent robot was preferred and led to higher intention to use it.
