Yay or Neigh: Can Our Interactions With Horses Help Us Design Better Robots?
Think about how often you use your phone, computer, or tablet each day. As technology progresses, it becomes more and more ingrained in our daily lives. This trend has given rise to a research field known as human-robot interaction (HRI), which specializes in understanding how people interact with technological systems and the relationships that form between them. Here, robot is a broad term for technology that includes computers and AI systems, such as autonomous cars, delivery drones, and factory machinery [1]. In this field, it is common to study the way humans and animals interact, known as human-animal interaction (HAI), as a model for how humans and robots might relate, and dogs are typically the animal of choice. However, researchers Eakta Jain and Christina Gardner-McCune recently studied interactions between people and horses in search of new insights.
Horses are ideal models for how robots might respond to people because they learn and can be trained by sensing and responding to their rider, qualities that would be valuable to build into robots to improve how they interact with people. Horses have also historically played important roles in food production, transportation, war, and other services, roles that robots are increasingly occupying [1]. On top of that, horses offer insights that dogs cannot: they rely heavily on nonverbal communication, and training them centers on applying and releasing pressure on their body rather than giving treats, so people inherently interact with them differently.
Human-horse interaction research typically focuses on well-being, performance, and accident prevention, whereas HRI research focuses on interaction, relationships, attachment, and trust in teamwork settings, which require communication and respect [1]. The common thread in both fields is the assumption that a working relationship already exists, but how that relationship forms can be important, too. According to the researchers, “findings with human-horse interaction can inform how to move from the first introduction to the next more complex interaction and then the next: at each stage, horse trainers test a horse for whether it respects the human” [1].
The study was entirely qualitative, meaning the researchers used different methods to collect data on human-horse interaction in real-world contexts rather than taking numerical measurements. First, they collected observational data by watching students in the Horse Teaching Unit at the University of Florida train horses. One of the researchers sat in on two consecutive semester-long courses and took notes on the students’ activities as they interacted with the horses, a very hands-on process that included grooming, brushing, and washing the animals and sending them out to the fields.
Then, the researcher conducted monthly interviews with the instructor of the course, as well as with students and teaching assistants when the opportunity arose. They also spoke with equestrians unaffiliated with the class: an expert horse trainer with more than 30 years of experience, an amateur trainer for competitive riding, an amateur mustang adopter, and two equestrians who kept horses for light ranch work and recreational riding. Finally, the researchers wrote journal entries documenting their own experience taking horse riding lessons over six months. To analyze the handwritten notes, the researchers typed up all of their findings and grouped them into categories arranged under three broader themes: how the horse communicates with the human, how the human communicates with the horse, and how the human reacts to undesired or unexpected behaviors.
They found that horses used nonverbal cues to interact with humans, like moving their ears to indicate they were paying attention. Over time, trainers learned what these cues meant in context. For example, when a horse moved away to give a human space, trainers came to understand this as a sign of respect and cooperation: the horse had learned the boundaries of their personal space.
When people communicated with their horses, on the other hand, they used both physical and verbal cues. A rider might squeeze their legs around the horse, give a gentle kick to its side, or tighten and loosen the reins. Riders also use clucking or kissing sounds and verbalizations like “yah!” and “whoa!” to indicate what they want the horse to do. Different horses need different handling, and each rider has their own preferences, but any horse can be confused by mixed signals from its trainer. The rider therefore learns adaptive strategies for combining physical and verbal cues to direct the horse clearly.
Finally, the researchers observed that when faced with an undesired or unexpected behavior from the horse, riders interacted most effectively when they tried to find the source of that behavior. Horses can sense their rider’s nervousness, tiredness, or distraction, so the rider’s own state is one possible source. Riders also learned that it is essential to remain calm.
Overall, the findings suggest that horses act as both teammates and companions to people, making them well suited to HRI research. The study had some limitations, however. Because horse models for HRI are so new, the sample of people and horses was quite small. The findings may also be biased, since the concepts used were grounded solely in Euro-American traditions of horse interaction, which are not universal. Finally, observing later stages of interaction, beyond first impressions and the formation of a relationship, could provide even more information.
Nonetheless, the authors were able to distill guidelines for the early stages of human-robot interaction from their research. First, robots should be able to express nonverbally that they are being attentive and what they are attending to. Second, robots should express respect for the user. Third, they should be able to learn rules about appropriate proximity from the user’s verbal and nonverbal cues. Fourth, humans and robots should train together in a way that emphasizes early-stage relationship building. Lastly, this training should encourage a “debugging mindset,” in which the user assesses the possible sources of an issue.
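For readers who build software, the first three guidelines could be sketched as a toy robot class. This is purely illustrative; the paper proposes design guidelines, not code, and every name below (CompanionRobot, should_yield_space, and so on) is invented for this example.

```python
# A hypothetical sketch mapping the authors' guidelines onto robot behaviors.
# Nothing here comes from the paper itself; the names and numbers are made up.

class CompanionRobot:
    def __init__(self, comfort_distance=1.5):
        # Guideline 3: a proximity rule, in meters, adjustable from user cues.
        self.comfort_distance = comfort_distance

    def signal_attention(self, target):
        # Guideline 1: nonverbally show what the robot is attending to,
        # much as a horse swivels its ears toward a sound.
        return f"orienting sensors toward {target}"

    def should_yield_space(self, person_distance):
        # Guideline 2: express respect by moving aside when a person
        # comes closer than the learned comfort distance.
        return person_distance < self.comfort_distance

    def update_from_cue(self, user_backed_away):
        # Guideline 3 (continued): widen the comfort distance when the
        # user's nonverbal cue suggests the robot came too close.
        if user_backed_away:
            self.comfort_distance += 0.25


robot = CompanionRobot()
print(robot.signal_attention("approaching person"))
print(robot.should_yield_space(1.0))  # closer than 1.5 m, so move aside
robot.update_from_cue(user_backed_away=True)
print(robot.comfort_distance)         # rule widened to 1.75 m
```

The point of the sketch is that each guideline becomes a small, observable behavior the user can read, just as a trainer reads a horse’s ears and posture.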
Jain and Gardner-McCune conclude that their work addresses two previously unexplored areas in HRI. First, the study expanded the lessons we can learn from HAI by observing an animal other than dogs. Second, it examined interaction at the earliest stages of building a working relationship. Their work opens the field of HRI to the possibility of learning from our interactions with animals beyond dogs, and it suggests how relationships between people and robots might function, modeled on the relationships we have with horses. It is worth considering how we interact with animals, which once occupied roles similar to the ones robots do today; with animals, we observe ethical considerations that we do not yet afford robots.
On the design side, this research suggests that robots should be built with the authors’ guidelines in mind and trained alongside humans to foster the sense of a working relationship. For example, delivery robots might learn to follow the same personal space rules as horses and move aside for people, a behavior the study found to indicate respect. Similarly, when technology goes awry, users can learn to find the source of the issue and address it, rather than forcing a fix that only lasts in the short term (think turning something off and on again), just as the most effective horse riders showed a debugging mindset.
For the average technology user, which nowadays is all of us, this research invites us to shift our thinking so that working with robots can be more pleasant and productive. Robots have not quite reached the level seen in futuristic movies like I, Robot or A.I. Artificial Intelligence, where they act as companions, but perhaps we can start thinking of technology as a helper or assistant we work with instead of a tool we use. Regardless, as technology takes on more roles in human society, we can still learn something from the animals it is replacing.
References:
- Jain E, Gardner-McCune C. Horse as teacher: How human-horse interaction informs human-robot interaction. In: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI ’23); 2023 Apr 19; pp. 1–13.