This research project explores agents and robots that move their social presence from one body to another (re-embody) and that share a single body with another social presence (co-embody). Through the method of speed dating with user enactments, we look at how participants experience and perceive such interactions in various contexts.

Part I:

Re-embodiment & Personal Interactions

We used speed dating with user enactments as our primary research method. As this is an area in which there are no design patterns or guidelines to draw from, an open-ended method is needed to begin mapping out this complex and unknown design space.

Speed dating allows participants to quickly encounter possible future scenarios by taking part in multiple interactive scenarios. Like romantic speed dating, people might not know much about any one scenario, but from “meeting” all the scenarios they might get a better sense of their own values, needs and desires from future technology.

We immersed participants in four settings, accompanied by interactive robots and agents, to elicit reactions about what people might and might not want: a home setting, a DMV office, a hospital, and an autonomous car.

Design Process

In order to determine the enactments that we would want to test, we went through an extended ideation and iteration process. We generated as many ideas as possible through rapid ideation methods, such as cards, bodystorming, brainstorming and metaphors. Then, we used affinity diagramming to draw out and organize the emerging themes. Finally, we iterated and made design judgements on what topics to include in the enactments. We did not attempt to systematically test the entire design space, but to probe a variety of situations that would generate initial insights and questions about it.

In these scenarios, agents behaved in one of four ways: (1) each robot had its own social presence (presented itself as an entity of its own), similar to the human model of a single brain in a single body; (2) one social presence moved from one body to another, following the user in their task (re-embodiment); (3) a social presence controlled multiple bodies simultaneously; and (4) a social presence entered a body in which there was already another social presence (co-embodiment).

An example of a re-embodiment enactment, in which an agent’s social presence moves from one body to another.


The study required three researchers working as a team to create the full experience we intended for participants. One researcher was the experimenter, who guided the participant through the experiences. The second was the ‘wizard’, who controlled the voices of all agents using a Bluetooth speaker, a set of pre-recorded voice clips, and a microphone for ‘live’ interactions. They were connected to the experimenter through a voice call and had an overview of the study through a live GoPro stream. The third researcher served as a ‘stage hand’, and was in charge of moving props, cameras, and agents throughout the study.
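The wizard’s soundboard of pre-recorded clips can be thought of as a simple cue-to-clip lookup. The sketch below is a minimal illustration of that idea; the cue names and file paths are hypothetical, not the study’s actual assets, and actual audio playback to the Bluetooth speaker is left out.

```python
# Minimal sketch of a wizard "soundboard" for a Wizard-of-Oz study:
# maps named cues to pre-recorded voice clips so the wizard can
# trigger agent speech on demand. All cue names and paths below
# are illustrative placeholders.

CLIPS = {
    "greet_home": "clips/home_greeting.wav",
    "greet_dmv": "clips/dmv_greeting.wav",
    "reembody_handoff": "clips/reembody_handoff.wav",
}

def resolve_cue(cue: str) -> str:
    """Return the clip path for a cue, or raise if no clip exists."""
    try:
        return CLIPS[cue]
    except KeyError:
        raise ValueError(f"No pre-recorded clip for cue: {cue!r}")

# In a real setup, the resolved file would be streamed to the speaker;
# 'live' lines bypass the soundboard entirely via the microphone.
```

A fallback to the microphone for unscripted moments is what makes the ValueError path useful: an unknown cue signals the wizard to go ‘live’ instead.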


The findings of the study include insights about people’s perception of re-embodiment and co-embodiment, agent expertise, agent ‘mental’ capacity, and user privacy.

We find that:

  • People feel comfortable with a re-embodying entity that moves from one body to another to support a single service.
  • In situations of high expertise or high risk, such as a medical context, participants preferred an agent that does not re-embody, but rather focuses on a single task.
  • Agents should be transparent about their cognitive abilities, especially when re-embodiment and multi-tasking are involved.
  • Participants felt uncomfortable with agents talking to each other “out loud” in front of them.

Part II:

Interpersonal Interactions

This study asks how agents should address interpersonal interactions in public contexts. Should there be a single agent that addresses several users, should services provide personal agents that are affiliated with the service, or should each user have a personal agent that “re-embodies” and follows them from one location to another (as suggested in Study 1)?

This study made use of a more structured format of Speed Dating with User Enactments, as the research questions were more specific and could be more rigorously tested. Participants interacted in situations where the agent either belonged to the service (like current standard service agents), belonged to the service but presented a unique agent for the specific user, or belonged to the user, serving them both in public and in their personal space while leveraging personal information.


We created three service environments in our lab for participants to interact with: a department store, a hotel and a medical center. Participants interacted in pairs, and were asked to perform a range of everyday tasks with the help of a robot. The robot was controlled using a Wizard of Oz technique: an experimenter operated the robot according to a predefined script. We tested the three robot embodiment types in all three environments.


Our findings showed that:

  • Participants saw no value in having a personalized agent provided by the service. Rather, they preferred to have a single service agent, or an agent that is “theirs” and moves with them from location to location.
  • Participants thought that a “Life Agent” is more personalized, seamless, and emotionally supportive. However, they also expressed privacy-related concerns.
  • Participants agreed that a “Singular Agent” is better in situations that require expertise, echoing the findings from our previous study. They also saw these agents as better promoters of interpersonal social interaction.

Part III:

Interpersonal Interactions with a Shared Agent

Study 1 and 2 began to outline how agents should behave in public and service scenarios. But when moving agents into personal spaces, like the home, new challenges surface. Current agents, such as Google Home and Amazon Echo, assume their devices are shared between household members. Yet their design does not address the interpersonal challenges that might arise from this sharing behavior.

For example, what should an agent do when a visiting mother-in-law asks for her daughter-in-law’s location? Should the agent share this personal information? Should it prevaricate, stall, or redirect the subject? What should an agent do if a teen asks it to lie and tell parents the teen has been studying? Should it keep secrets?


This study used Speed Dating with Storyboards, and group semi-structured interviews with families. Participants were presented with 21 storyboards that probed a range of situations in which it was not clear how a home agent or robot should behave. The goal was to uncover perceptions, tensions and the range of aspects that agents should better understand to respond correctly to socially complex situations. For example, we tested topics of hierarchy, conflict, proactivity and judgement.

Our findings highlight three themes:

  • Ownership: Participants explicitly stated that they would like to know who the agent belongs to. They argued that knowing this would significantly assist in setting their expectations about how an agent should appropriately behave in interpersonal situations.
  • Social Roles: Understanding people’s social roles is critical for the agent to interact and respond in a socially appropriate way. The most important distinctions were between parents and children, and between people who reside within the home and those who do not.
  • Proactivity: Findings revealed three proactivity thresholds: reactive, proactive and proactive recommender. Reactive: participants only wanted agents to respond in social situations when asked. Proactive: the agent can be proactive and intervene, but only to provide information. Proactive Recommender: the agent should not only provide information, but also a recommendation for action. While participants varied in their preference, the bottom and top boundaries were clear: participants were open to the idea of an agent that can understand and respond to social situations, but never wanted agents to enforce a decision.