
Does watching Han Solo or C-3PO similarly influence our language processing?

Sophie‑Anne Beauprez1 · Christel Bidet‑Ildei1 · Kazuo Hiraki2

* [email protected]

1 Université de Poitiers, Université de Tours, Centre National de la Recherche Scientifique, Centre de Recherches sur la Cognition et l’Apprentissage (UMR 7295), Poitiers, France
2 Department of Systems Science, Graduate School of Arts and Sciences, The University of Tokyo, Tokyo, Japan

Received: 2 August 2018 / Accepted: 19 March 2019
© Springer-Verlag GmbH Germany, part of Springer Nature 2019

Abstract
Several studies have demonstrated that perceiving an action influences the subsequent processing of action verbs. However, it is still unknown which characteristics of the perceived action are decisive in enabling this influence. The current study investigated the role of the agent executing an action in this action–language relationship. Participants performed a semantic decision task after seeing a video of a human or a robot performing an action. The results of the first study showed that perceiving a human being executing an action, as well as perceiving a robot, facilitates subsequent language processing, suggesting that the humanness of the agent (the term "humanness" is used here to mean "belonging to the human race", not to refer to a personal quality) is not crucial in the link between action and language. However, this experiment was conducted with Japanese people, who are very familiar with robots; thus, an alternative explanation could be that it is unfamiliarity with the agent that perturbs the action–language relationship. To assess this hypothesis, we carried out two additional experiments with French participants. The results of the second study showed that, unlike the observation of a human agent, the observation of a robot did not influence language processing. Finally, the results of the third study showed that, after a familiarization phase, French participants too were influenced by the observation of a robot. Overall, the outcomes of these studies indicate that, more than the humanness of the agent, it is the familiarity which we have with this agent that is crucial in the action–language relationship.

Introduction

The embodiment theory postulates that all cognitive functions are related to sensorimotor experiences (Barsalou, 1999). In the present study, we propose to focus on the link between action and language.

1. A relationship between action and language processing

A growing body of literature exists on the topic of the relationship between action and language. Numerous studies have demonstrated that action execution can be influenced by language (see, for example, Aravena et al., 2012; Boulenger et al., 2006; Glenberg & Kaschak, 2002; Lindemann, Stenneken, van Schie, & Bekkering, 2006). This link has been demonstrated at a behavioral level. For example, Zwaan and Taylor (2006) asked their participants to answer a question by turning a knob in a specific direction. Participants were quicker to judge a sentence when the manual response to this sentence was in the same rotational direction as the manual action described by the sentence (for example, turning the knob to the right for a sentence implying a clockwise rotation like "Jane started the car" or turning the knob to the left for a sentence implying a counterclockwise rotation like "Liza opened the pickle jar"). These results suggest that the production of an action and language processing could rely on common processes and use similar brain correlates. To test this assumption, numerous brain studies were carried out. These studies demonstrated the involvement of brain motor areas using functional magnetic resonance imaging (Aziz-Zadeh & Damasio, 2008; Hauk, Johnsrude, & Pulvermüller, 2004), magnetoencephalography (Klepp et al., 2014), electroencephalography (Mollo, Pulvermüller, & Hauk, 2016), or transcranial magnetic stimulation (Kuipers, van Koningsbruggen, & Thierry, 2013). Altogether, these studies showed that the part of the body involved when someone performs actions is also activated when this person processes language describing these actions.
Interestingly, researchers also demonstrated that this action–language relationship is not restricted to action execution but also occurs when an action is only observed. Thus, some studies demonstrated that perceiving an action can influence language processing (Beauprez & Bidet-Ildei, 2017; Liepelt, Dolk, & Prinz, 2012). Studies by Beauprez and Bidet-Ildei (2017) and Liepelt et al. (2012) showed that seeing an action enables participants to answer faster when a verb corresponds to this action.

2. A role of the characteristics of the action?

Studying the action–language relationship through action perception offers researchers the opportunity to modulate several aspects of the action, which is not possible with an action execution paradigm. Using action observation thus allows researchers to address questions about the action–language relationship that remain unresolved. In particular, it makes it possible to understand how and which action properties can influence semantic activation during word processing. It can be assumed that the influence of action observation on language is automatic and that, as soon as an action is perceived, the associated semantic representation is activated (Pulvermüller, 2005). However, recent studies considering these action properties indicate that this is not always the case (Beauprez & Bidet-Ildei, 2018; Beauprez, Toussaint, & Bidet-Ildei, 2018). For example, when the context of an action was modified, the influence of action observation on action verb processing disappeared (Beauprez et al., 2018).

Indeed, the context in which an action is produced is critical, since it provides much information for the understanding of this action [such as the intention of the actor; see Iacoboni et al. (2005)]. Actions are not perceived in isolation but are rather embedded with objects, actors, and the relationships among them. Context thus provides information concerning both the environment in which the action is performed and the agent performing it. In a previous study (Beauprez et al., 2018), we examined the role of context in the action–language relationship by focusing on the environment. Participants observed a picture depicting an action performed in a usual ("to water a plant") or unusual context ("to water a computer") before performing a language decision task. After seeing a usual picture, participants were quicker to judge a congruent action verb ("to water") than an incongruent action verb ("to eat"). However, when the context was unusual, no difference was observed between congruent and incongruent verbs. These results indicate that the influence of action observation on language processing depends on the context in which an action is produced. The question now is to explore the role of the agent performing the action. In particular, we propose to focus on the humanness of the agent. Is language influenced when observing a non-human agent instead of a human agent?

This characteristic is particularly interesting because the crosstalk between action and language may be supported by a mechanism of motor resonance (Zwaan & Taylor, 2006), involving the activation of sensorimotor representations common to action perception/execution and language processing. The idea is that understanding an action involves an internal simulation of the perceived action (Rizzolatti, Fogassi, & Gallese, 2001) and that the closer the observed action is to the motor repertoire of the observer, the stronger the resonance should be (Calvo-Merino et al., 2005). Following this logic, we can assume that perceiving another human would lead to more resonance than perceiving a non-human agent.

3. Motor resonance and perceiving robots

In our study, to compare a human and a non-human agent, we used a humanoid robot, because a robot's characteristics (appearance, size, kinematics, etc.) can be controlled more strictly and more easily than those of real humans. Robots therefore represent a relevant tool for improving our understanding of cognition and of how we interact with other human beings. Many studies have investigated the mechanisms sustaining the perception of robot actions. However, the literature contains contradictory results.
Some authors have reported that motor resonance appears when perceiving robots (e.g., Gazzola, Rizzolatti, Wicker, & Keysers, 2007a; Press, Bird, Flach, & Heyes, 2005). Press, Gillmeister, and Heyes (2006) demonstrated, for example, a similar priming effect for robotic and human hands. Moreover, using fMRI, Gazzola et al. (2007a) demonstrated that the mirror neuron system was strongly activated by the sight of both human and robotic action. In the same vein, it has been demonstrated that perceiving robotic and human actions produced equivalent mu suppression1; in other words, human and robotic agents produced similar activation of the mirror neuron system (Oberman, McCleery, Ramachandran, & Pineda, 2007). This mirror neuron system is assumed to play a key role in the relationship between the sensorimotor system and language processing by mediating the mapping of observed actions onto one's own motor repertoire (Aziz-Zadeh, Wilson, Rizzolatti, & Iacoboni, 2006; Rizzolatti & Craighero, 2004). Thus, if the observation of a robot leads to the activation of the mirror neuron system, these results could indicate that the action–language relationship should be found not only when observing a human being but also when observing robotic agents.

1 Mu is a range of electroencephalography oscillations (8–13 Hz). Its suppression is considered to reflect mirror neuron system activity.

Fig. 1 Schematic representation of the hypotheses. Solid lines represent an activation and dashed lines represent less or no activation. Perceiving a robot and a human being should influence language processing differently. The perception of the human action would activate the representation of this action, which in turn facilitates the processing of the verb describing this action, whereas the perception of the robotic action would activate this representation less or not at all
In contrast, other studies have obtained opposite results (e.g., Matsuda, Hiraki, & Ishiguro, 2015; Tai et al., 2004; Kilner, Paulignan, & Blakemore, 2003). In their EEG study, Matsuda et al. (2015) found that human actions evoked significant mu suppression, whereas robotic actions did not. Another example is a positron emission tomography study (Tai et al., 2004) reporting that the mirror neuron system was activated when participants observed a grasping action performed by a human but not when the same action was performed by a robot. Thus, when perceiving actions performed by a robot, it would be more difficult to activate a motor simulation. Ranzini, Borghi, and Nicoletti (2011) provided evidence reinforcing this idea. A compatibility effect was obtained between hand posture (precision or power) and line width (thin or thick), reflecting that motor simulation occurred (the attention of the participant is directed to where the hand posture is congruent with the line width). Interestingly, this effect was larger for a biological hand than for a non-biological hand. Altogether, these studies suggest a higher motor resonance for humans than for robots (see also Anelli et al., 2014).
It has been proposed that the differences between these two kinds of studies (motor resonance with robots vs. no motor resonance with robots) could be explained by differences in experimental design. Indeed, these studies used different kinds of robots (different levels of anthropomorphism, different kinematic similarity to humans, etc.), presented either the entire body of the robots or only parts of them (for example, only the arm), and/or used different experimental instructions. All these parameters could have significant effects on the brain structures involved in the cognitive tasks of these studies (for more information on this subject, see Chaminade and Cheng, 2009).

More precisely, to explain these discrepancies, it has been suggested that, if the task does not impose focusing attention on the goal of an action, motor resonance could be automatic for human actions, whereas robotic stimuli would not be processed automatically, because the participants have no existing sensorimotor representation of the robot's action. In light of these results, we could assume that the action–language relationship should only be found when observing a human being and that observing a robot performing an action would not influence subsequent processing of language.

4. The present study

The aim of the present study was to assess how the action–language relationship is influenced by perceiving a robot rather than a human being performing an action. To do so, we compared the priming effect induced by action perception on a semantic decision task. In accordance with previous studies, we hypothesized that the humanness of the agent is an important characteristic of an action. On the one hand, when we perceive a human being performing an action, the mirror neuron system and the sensorimotor representations linked to the perceived action would be activated. Since these representations are shared with language, its processing should be facilitated. On the other hand, when we perceive a robot performing an action, the mirror neuron system and sensorimotor representations would be activated less or not at all. In this situation, language would not be influenced. In summary, we hypothesized that perceiving the action of a human agent would facilitate action-verb processing, whereas perceiving the action of a non-human (robotic) agent would not (see Fig. 1 for a schematic representation of our hypotheses).

Fig. 2 Procedure of the experimental task. The fixation cross, the prime video, and the verb stimulus were centered on a uniform gray background. The arrow represents the sequence of one trial

Experiment 1

Method

Participants
Eighteen Japanese university students (M = 19 years old, SD = 2.01; 11 male, 18 right-handed) participated in this experiment. The sample size was calculated using G*Power 3.0.10 (Faul, Erdfelder, Lang, & Buchner, 2007). The calculation assumed a repeated-measures ANOVA design and was based on the results of a pilot study (Cohen's d = 0.84, correlation between repeated measures = 0.5). Statistical significance was set at p < 0.05 and power at 0.90. All participants had normal or corrected-to-normal vision, no history of motor, perceptual, or neurological disorders, and Japanese as their mother tongue. Moreover, all participants provided their written informed consent prior to their inclusion in the experiment. They were also unaware of the purpose of the study.

Prime and stimuli

The prime was a video of a human being or a robot performing an action. The videos were in color, muted, and lasted 3000 ms on average. Sixteen different actions were used (see "Appendix A" for the list of actions). Each action was performed both by a human actor and by a humanoid robot (Nao, the robot from SoftBank Robotics, https://www.ald.softbankrobotics.com; see "Appendix B" for example frames). The stimuli were 32 verbs. Half of them were "action verbs" corresponding to the priming actions, and the other half were "non-action verbs" (e.g., "think" or "dream"), namely, verbs that do not imply a movement of the body. The verbs were presented in the neutral form and written in hiragana (see "Appendix A" for the list of verbs).

Procedure

For each participant, the experimental session included 192 trials (2 × 16 × 2 × 3): 2 presentations of the 16 actions performed by 2 types of agent (human and robot), each followed by a verb (congruent action verb, incongruent action verb, or non-action verb). The presentation order of the trials was randomized across participants. Each trial involved the following procedure (see Fig. 2): a fixation cross appeared for 500 ms; then, the prime video was displayed (3000 ms). Finally, following another fixation cross (500 ms), the stimulus (a verb) appeared and remained on the screen until the participant entered a response. This verb could be an action verb congruent with the prime (for example, seeing the video depicting the action of cleaning before reading the word "clean"), an incongruent action verb (for example, seeing the video depicting the action of cleaning before reading the word "take"), or a non-action verb (for example, seeing the video depicting the action of cleaning before reading the word "wish"). The participant's task was to judge, as quickly and as accurately as possible, whether the verb was an action verb (namely, a verb involving a movement of the body). Participants consistently entered a "yes" response with the right click of the mouse and a "no" response with the left click of the mouse. The non-action verb trials were not analyzed; they were included only to make the semantic decision meaningful (without them, the correct response would always have been "yes").

Data analysis

Participants' response times and accuracy for trials with action verbs were recorded. Only the response times of the correct trials were analyzed (97% of the data); trials with errors were excluded from the analyses. We used the lmer function of the lme4 package (Bates, Mächler, Bolker, & Walker, 2014) in the R environment (R version 3.3.0, R Core Team © 2016) to build linear mixed-effects models. Participants and word items were specified as random-effects factors. Two fixed-effect factors were included: the congruency of the verb (congruent verb × incongruent verb) and the type of agent (human × robot), as well as their interaction. The p values were obtained for F values (Type III ANOVA) with the error degrees of freedom calculated using Satterthwaite's approximation. The significance level was set at p < 0.05.
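As a rough illustration, the model described above could be specified in R along the following lines. This is a minimal sketch, not the authors' actual script: it assumes a data frame d with one row per correct action-verb trial, the column names (rt, congruency, agent, participant, word) are hypothetical, and the use of the lmerTest package for Satterthwaite-based Type III tests is our assumption (the paper names only lme4).

# Minimal sketch of the analysis described above (assumed data frame `d`,
# one row per correct action-verb trial; column names are hypothetical)
library(lmerTest)  # loads lme4 and provides F tests with Satterthwaite df

d$congruency <- factor(d$congruency)  # congruent vs. incongruent
d$agent      <- factor(d$agent)       # human vs. robot
# Sum-to-zero contrasts so that Type III main effects are interpretable
contrasts(d$congruency) <- contr.sum(2)
contrasts(d$agent)      <- contr.sum(2)

# Fixed effects: congruency, agent, and their interaction;
# random intercepts for participants and word items
m <- lmer(rt ~ congruency * agent + (1 | participant) + (1 | word), data = d)

# Type III ANOVA with Satterthwaite's approximation (lmerTest default)
anova(m, type = "III")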
Results

Fig. 3 Mean response time of the Japanese participants according to the congruency (congruent and incongruent) and the type of agent (human and robot). The error bars indicate the 95% confidence interval. ***Significant difference with p < 0.001

Response times (see Fig. 3) varied according to the congruency [F(1,2110) = 89.29; p < 0.001] but not according to the type of agent [F(1,2110) = 0.04; p = 0.84]. There was no significant interaction between the type of agent and the congruency [F(1,2110) = 1.47; p = 0.22]. With the human agent as the prime, the response time for congruent action verbs (M = 738.55, SD = 199.35) was significantly shorter than that for incongruent action verbs (M = 852.06, SD = 249.51; p < 0.001). Similarly, with the robotic agent as the prime, the response times for congruent action verbs (M = 739.32, SD = 168.25) and incongruent action verbs (M = 837.11, SD = 241.49) were significantly different (p < 0.001).

Discussion

The aim of this study was to assess whether the action–language relationship can be modulated according to the agent performing an action. To do so, we compared the priming effects obtained in action verb processing when the presented action was performed by a human agent or a non-human agent (a robot). Our results confirmed that perceiving a biological action facilitates the subsequent processing of a congruent action verb (Beauprez & Bidet-Ildei, 2017), since our participants were faster to answer when the action of the prime and the action of the verb were congruent. However, contrary to our hypothesis, this facilitation effect was also found when perceiving a robot performing an action, which could indicate that the humanness of the agent is not a determinant characteristic in the action–language relationship.

Another explanation could be related to the cultural specificities of the Japanese people with regard to their familiarity with and beliefs about robots. Indeed, Japan has more robots than any other country; therefore, Japanese people have more exposure to robots in real life. For example, since 2014, SoftBank (a Japanese telecommunications company) has used the robot Pepper in its stores to welcome, support, and guide customers in their shopping or to entertain them. Japan promotes the use of robots to support human interaction, and robots frequently appear at public events or on television, such as the robot dog Aibo from Sony or the humanoid robot Asimo from Honda. In a study by MacDorman, Vasudevan, and Ho (2009), 731 participants from Japan and the United States completed a questionnaire including a question on their level of familiarity with robots. On average, Japanese female participants had 110% more robot-related experiences than US female participants, and Japanese male participants had 69% more robot-related experiences than US male participants. Moreover, Japanese people are known to be more accepting of robots.
Indeed, the original religion of Japan, Shinto, involves the belief that spirits can inhabit objects (animism), which could lead to a different sort of relationship with robots (MacDorman, Vasudevan, & Ho, 2009) than that experienced in other cultures. Some authors suggest that the beliefs we hold about the minds of others modify how we process sensory information. For example, Wykowska et al. (2014) obtained different results depending on whether the participants thought that a robot was controlled by a human mind or by a machine. In their first experiment, attentional control over sensory processing was enhanced when participants observed a human compared to a robot. However, in a second experiment, they demonstrated that this sensory gain control was enhanced when participants observed a robot that they thought was controlled by a human mind compared to when they thought that it was controlled by a machine. Thus, the mental states we attribute to robots modify the way we behave with them (see also Hofer, Hauf, & Aschersleben, 2005, for evidence with children). Therefore, we speculate that these cultural specificities of Japanese participants concerning robots may change their capacity to activate their own motor repertoire when perceiving robots acting. We carried out a second experiment to assess this hypothesis.

Experiment 2

The aim of this experiment was to determine whether the effects obtained in Experiment 1 could be related to the cultural specificities of Japanese participants. For this, we decided to reproduce the experiment with French people, who are less familiar with robots in their daily life and are less likely to attribute mental states to them. Indeed, according to the European Commission2, few European citizens have experience using robots (less than 15% have used a robot at home, at work, or somewhere else). Moreover, for French people, and European people in general, the image of a robot is more related to an instrument-like machine than to a human-like machine, so they interact with robots not as communicative agents but as tools.

2 2012 report on "Public attitudes towards robots" http://ec.europa.eu/commfrontoffice/publicopinion/archives/ebs/ebs_382_en.pdf.

The hypothesis was that if the humanness of the agent is not important for the action–language relationship, then we should replicate the results found in Japan. Namely, a facilitation effect on action verb processing should be obtained after observing either a human or a robot performing a congruent action. In contrast, if it is the familiarity with and/or the beliefs towards robots that explain the results of Experiment 1, the results of the French and Japanese participants should differ. In this case, the facilitation effect on action verb processing should be obtained only after observing a congruent action produced by a human agent.

Method

Participants

Eighteen French university students participated in this experiment (M = 19 years old, SD = 2.57; 7 male, 16 right-handed). All participants had normal or corrected-to-normal vision, no history of motor, perceptual, or neurological disorders, and French as their mother tongue. Moreover, all participants provided their written informed consent prior to their inclusion in the experiment. They were also unaware of the purpose of the study.

Stimuli and procedure

The procedure of this second experiment was the same as for Experiment 1, except that it was conducted with French participants instead of Japanese participants. To adapt the material to French people, the verbs were translated into French (see "Appendix A") and presented in the infinitive form.
Data analysis

As in Experiment 1, participants' response times and accuracy for trials with action verbs were recorded. Only the response times of the correct trials were analyzed (91% of the data); trials with errors were excluded from the analyses. Linear mixed-effects models were used with participants and word items specified as random-effects factors. Two fixed-effect factors were included: the congruency of the verb (congruent verb × incongruent verb) and the type of agent (human × robot), as well as their interaction. The p values were obtained for F values (Type III ANOVA) with the error degrees of freedom calculated using Satterthwaite's approximation. The significance level was set at p < 0.05.

Results

Fig. 4 Mean response time of the French participants according to the congruency (congruent and incongruent) and the type of agent (human and robot). The error bars indicate the 95% confidence interval. ***Significant difference with p < 0.001

Response times (see Fig. 4) varied according to the congruency [F(1,1960) = 46.24; p < 0.001] but not according to the type of agent [F(1,1960) = 2.74; p = 0.09]. A significant interaction between the type of agent and the congruency was found [F(1,1960) = 11.91; p < 0.001]. With the human agent, the response time for congruent action verbs (M = 663.83, SD = 147.99) was significantly shorter than that for incongruent action verbs (M = 768.36, SD = 163.47; p < 0.001). However, with the robotic agent, the response times for congruent action verbs (M = 685.07, SD = 174.78) and incongruent action verbs (M = 715.35, SD = 137.89) were not significantly different (p = 0.29).

Discussion

The aim of this second study was to assess whether the absence of a difference between the effects obtained with a human and a robotic agent in Experiment 1 could be related to the cultural specificities of Japanese people concerning robots. The results obtained in this second experiment with French participants confirmed again that perceiving a biological action primes the processing of action verbs (Beauprez & Bidet-Ildei, 2017). Interestingly, we observed here that the priming effect could be due to interference more than to facilitation: participants were perturbed in the processing of incongruent action verbs in comparison with the other conditions. This is surprising because, in the previous literature, when actions and action verbs were processed, a facilitation effect was classically observed (Beauprez & Bidet-Ildei, 2017; Bidet-Ildei et al., 2011). However, given the speed of the response times observed in this experiment (approximately 100 ms shorter than in Experiment 1), it is possible that our participants could not be accelerated further in the congruent condition, which can account for the absence of a facilitation effect. Importantly, whatever the origin of the priming effect observed when a human agent produces an action, the crucial finding is that it disappears when a robotic agent produces the action, suggesting that the relationship between action and language depends on the agent who performs the action.

One possible explanation for this result could be that the French participants, unlike the Japanese participants, might have been unable to recognize the actions performed by the robot.
If robot actions are not recognized (for example, seeing the robot scratching could be understood as dancing), then all of the verbs would be incongruent with the prime, explaining the absence of a facilitation effect. However, we administered a short questionnaire concerning the recognition of the actions to rule out this possibility. After the experimental task, each video of the task was presented to the participants, who were asked to say what action was depicted according to them. A video obtained a score of 1 when the answer provided by the participant corresponded to the action (the participant gave the exact verb or a semantically close verb) and a score of 0 when the answer differed semantically from the one expected. The percentages obtained allowed us to confirm that the actions of the robot were recognized as well as those of the human (95% recognition). It therefore seems more likely that the absence of the facilitation effect with robots is related to the fact that, for French participants, seeing a robot may not enable the activation of motor representations, which are at the origin of the action–language relationship (e.g., Bidet-Ildei et al., 2011). This would be in accordance with the idea that sensorimotor representations are involved only when the observed action is close to the perceiver's motor repertoire (Calvo-Merino et al., 2005; Martel, Bidet-Ildei, & Coello, 2011). These results also support the idea that the strength of a person's sensorimotor experiences and motor repertoire plays a role in the processing of action words (Lyons et al., 2010).

As suggested in the discussion of the first experiment, the absence of motor resonance in French participants could be explained by Japanese participants' familiarity with and perception of robots, two major areas of difference between the two groups. Japanese participants are both more exposed to robots in their daily life and more likely to attribute mental states to them. Interestingly, some authors have demonstrated an influence of visual familiarity on the activation of action representations. Amoruso and Urgesi (2016) showed videos of actions performed by humans or dogs to participants who were familiar with dogs (for example, because they owned a dog) or not. The participants familiar with dogs showed a similar level of motor activation when seeing videos displaying actions performed by a human being or by a dog, whereas participants with no familiarity with dogs showed higher motor activation when observing human actions. Following this idea, we can hypothesize that, in our experiment, the relationship between action and language disappeared when the agent was a robot because French participants are not familiar with robotic agents (contrary to Japanese participants), and consequently, they did not activate their motor representations when they observed a robot producing an action. We decided to test this assumption in Experiment 3 by assessing the link between action and language when French participants are familiarized with robots. If the difference in familiarity with robots is what explains the difference between our results with French and Japanese participants, then we should be able to reproduce the Japanese results in French participants who have been familiarized with robots.

Experiment 3

The aim of this experiment was to assess the role of visual familiarity in the link between action and language.
For this, we decided to reproduce the previous experiments with two groups of French participants: a control group and a group that was familiarized with the Nao robot before completing the experimental task.

Method

Participants

Forty-four French university students participated in this experiment. Twenty-two were in the control condition (M = 19 years old, SD = 1.04; 13 male, 21 right-handed). The other 22 were in the familiarization condition (M = 19 years old, SD = 0.75; 15 male, 19 right-handed). All participants had normal or corrected-to-normal vision, no history of motor, perceptual, or neurological disorders, and French as their mother tongue. Moreover, all participants provided their written informed consent prior to their inclusion in the experiment. They were also unaware of the purpose of the study.

Stimuli and procedure

The procedure of this experiment was exactly the same as in Experiments 1 and 2. However, the participants in the familiarization condition went through a familiarization phase. This phase lasted approximately 10 min and consisted of a text and two short videos about Nao, the robot used during the experiment. The aim of the text and the videos was to introduce Nao, to accustom our participants to it, and to make it seem more human to them. The text was read by the experimenter, who explained in which situations Nao is used (education, patient reeducation, etc.) and how it interacts with humans in these situations. One of the two videos was an example of one of these situations (Nao interacting with children with autism spectrum disorder), and the other video was a short presentation of Nao by itself. The aim of this familiarization was to emphasize the interactive side of Nao and to get the participants used to seeing it.

After the familiarization phase, the participants received a questionnaire about robots (the "Negative Attitude toward Robots Scale"; Nomura, Kanda, & Suzuki, 2006). Participants in the control condition also received this questionnaire before the experimental task. The aim of this questionnaire was to assess the effectiveness of the familiarization phase. The questionnaire consisted of items concerning attitudes towards interaction with robots, attitudes towards the social influence of robots, and attitudes towards emotions in interactions with robots. Participants answered on a five-point scale (from "I strongly disagree" to "I strongly agree"). A mean score based on their responses was calculated, so that a high score (close to 5) indicates a negative attitude towards robots, while a low score (close to 1) indicates a positive attitude towards robots.

Fig. 5 Mean response time of the French participants according to the congruency (congruent, incongruent) and the type of agent (human, robot). The error bars indicate the 95% confidence interval. ***Significant difference with p < 0.001

Data analysis

Participants' response times and accuracy for trials with action verbs were recorded. Only the response times of the correct trials were analyzed (90% of the data); trials with errors were excluded from the analyses. Linear mixed-effects models were used with participants and word items specified as random-effects factors. Three fixed-effect factors were included: the congruency of the verb (congruent verb × incongruent verb), the type of agent (human × robot), and the group (control × familiarized), as well as their interactions. The p values were obtained for F values (Type III ANOVA) with the error degrees of freedom calculated using Satterthwaite's approximation. The significance level was set at p < 0.05.
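Under the same hypothetical data layout as in the sketch given for Experiment 1, the only change to that model is the additional between-participants factor; a minimal sketch:

# Extension of the Experiment 1 sketch with the between-participants factor
# `group` (control vs. familiarized); the column name is again hypothetical
d$group <- factor(d$group)
contrasts(d$group) <- contr.sum(2)

# Three fixed-effect factors and all their interactions, same random effects
m3 <- lmer(rt ~ congruency * agent * group + (1 | participant) + (1 | word),
           data = d)
anova(m3, type = "III")  # includes the three-way interaction test reported below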
Results

The results showed a significant interaction between the type of agent, the congruency of the verbs, and the group [F(1,4722) = 4.714; p = 0.03]. More precisely, for the control condition (see Fig. 5), with the human agent, the response time for congruent action verbs (M = 673.19, SD = 128.49) was significantly shorter than that for incongruent action verbs (M = 759.90, SD = 133.79; p < 0.001). However, with the robotic agent, the response times for congruent action verbs (M = 726.63, SD = 128.50) and incongruent action verbs (M = 735.04, SD = 125.71) were not significantly different (p = 0.27). For the familiarization condition, with the human agent, the response time for congruent action verbs (M = 658.59, SD = 122.99) was significantly shorter than that for incongruent action verbs (M = 733.12, SD = 127.33; p < 0.001). Similarly, with the robotic agent, the response time for congruent action verbs (M = 676.21, SD = 137.73) was significantly shorter than that for incongruent action verbs (M = 733.39, SD = 135.91; p < 0.001).

Concerning the attitude questionnaire, the score of the familiarized group (2.81) was slightly lower than the score of the control group (3.08), which indicates a trend towards a more positive attitude toward robots in this group. However, a Student's t test revealed that this difference was not significant (p = 0.09).

Discussion

The aim of this third study was to assess whether the difference between our Japanese and French participants could be related to their familiarity with robots. If our hypothesis were true, familiarizing French participants should have enabled us to obtain the same results as those of the Japanese participants. The results obtained in this third study were in agreement with this hypothesis. Indeed, while French participants who were not familiarized with robots produced different results when observing a robot and a human being (as in Experiment 2), French participants who were familiarized with robots produced results similar to those of the Japanese participants. Thus, when familiarized participants perceived an action performed by either a human agent or a robotic agent, it led to facilitation (as in Experiment 1).

Fig. 6 Schematic representation of the influence of familiarity on the action–language relationship. Solid lines represent an activation, and dashed lines represent less or no activation. Perceiving a robotic action would activate the representation of this action when the participant is familiar with robots

It is worth noting that this effect occurred even though the familiarization was not enough to significantly modify the attitudes of our participants towards robots. This could indicate that the influence of observing a robot relies more on visual experience and that a participant's attitude does not interfere with this action–language relationship. However, it is important to remember that the attitude of our participants toward robots was neither high nor low but rather indifferent. Therefore, we cannot exclude that, in another situation, attitude might play a role in the action–language relationship. In fact, we could expect it to interfere when people have a truly negative attitude toward the agent performing an action.
For example, Gutsell and Inzlicht (2010) demonstrated that a person is less likely to resonate with another person when that person belongs to a disliked group: the suppression of the mu rhythm was, indeed, linked to the amount of prejudice toward this group. The results of the present study cannot be explained in terms of attitude towards the agent. Thus, rather than the explicit attitude, it is the visual experience that seems to have driven the influence of robot observation on language processes (see Fig. 6 for a schematic representation of this interpretation). We suggest that this visual experience may have a more implicit influence by modulating our sensorimotor representations.

General discussion

Previous studies have demonstrated that the perception of an action influences the subsequent processing of language (Beauprez & Bidet-Ildei, 2017; Liepelt et al., 2012) and that this influence is not mandatory but rather depends on some properties of the observed action (Beauprez et al., 2018; Beauprez & Bidet-Ildei, 2018). Here, we studied the role of the humanness of the agent performing an action. Different results were obtained according to the nationality of our participants. In France (Experiment 2 and the control group of Experiment 3), the results seemed to indicate that the humanness of the agent is a crucial property of the action, since, when it is modified, the action–language relationship is not found. This would be in agreement with previous studies suggesting that perceiving a robot does not produce the mirror neuron activation (Matsuda et al., 2015; Tai et al., 2004) that is required to support motor resonance. In contrast, in Japan (Experiment 1), the results seemed to indicate that the humanness of the agent is not so crucial, because, when it is modified, the action–language relationship is still present. This aligns with other studies suggesting that we are able to activate the mirror neuron system when observing a robot (Gazzola et al., 2007a; Oberman et al., 2007; Press et al., 2005).

We may wonder what in particular disturbed our French participants when they observed a robot. It is worth noting that robots differ from human beings in two important respects: their appearance and their kinematics. It has been demonstrated that both the kinematics (Bidet-Ildei, Méary, & Orliaguet, 2006; Pavlova, Krägeloh-Mann, Birbaumer, & Sokolov, 2002) and the appearance (Chaminade, Hodgins, & Kawato, 2007) are important for the perception of biological motion and for motor resonance. For example, it has been demonstrated that the observation of similar faces (same race) leads to stronger motor resonance than that of dissimilar faces (Liew, Han, & Aziz-Zadeh, 2011). Similarly, it has been shown that modifying the kinematics of an action perturbs the capacity to anticipate the following component of a motor sequence (Kandel, Orliaguet, & Viviani, 2000). For example, Bisio et al. (2014) showed motor contagion (the observer's motor performance automatically replicating some features of the observed agent's movement) when their participants observed robots whose kinematics respected the biological laws of motion, whereas no motor contagion was obtained when participants observed robots performing movements with non-biological kinematics.
Our study does not allow us to differentiate the influence of the kinematics from the influence of the appearance, since Nao has both an appearance dissimilar to human beings and modified kinematics. However, even if Nao's kinematics differ from those of humans, it is worth noting that Nao's design is highly motivated by the way humans move. In their study, Kupferberg et al. (2012) demonstrated that morphological similarities (i.e., those concerning the structure of an organism) between agent and observer are important. More precisely, the joint configuration of an individual influences the way it moves (i.e., its motility). The same industrial robot arm performing the same movements induced motor interference when it had human-like motility (quasi-biological movement) but not when it was shown in a standard industrial configuration (non-biological movement). Moreover, it has been demonstrated that the kinematics of robotic actions have no influence on motor resonance when observers are highly familiar with the goal of an action (Gazzola et al., 2007a). Because Nao has some morphological similarity with humans (quasi-biological movement) and only performed usual actions in our experiments, we think that the kinematic explanation can be ruled out in interpreting our results.

As touched upon earlier, a more probable interpretation of the difference between the French and Japanese participants may be the difference in the familiarity they have with robots. Perhaps the motor system is flexible and not strictly limited to our sensorimotor experiences. More precisely, familiarity would enable resonance even when observing non-human actions (see, for example, Amoruso and Urgesi 2016). Similarly, in an fMRI study, participants familiarized with certain dance sequences (observational learning) showed cerebral activity in premotor and parietal regions similar to that of trained participants (physical learning) when watching these sequences (Cross et al., 2009). Following the same logic, our results suggest that the influence of action observation on language processing is related to the activation of sensorimotor representations, which depends not only on our motor experiences but also on our visual experiences. The results of Experiment 3 (familiarized group) are in agreement with this view: after visual familiarization, the same influence of action observation on language processing was obtained for the robotic agent as for the human agent.

As explained before, in addition to familiarity with robots, there is another difference between French and Japanese participants related to the way they may conceive of robots. Japanese people are not only more used to seeing robots in their daily lives, but they are also more used to interacting with them; thus, they may more easily consider them as potential partners for interaction than the French would. Indeed, studies have suggested that, to consider robots as communicative agents, infants need to see them interacting with humans (Arita, Hiraki, Kanda, & Ishiguro, 2005), and that the believed humanness of a robot is important for humans to corepresent actions (Stenzel et al., 2012), and so it is important in human–robot interactions. Moreover, robots are more socially accepted by Japanese citizens than by European citizens. For example, Nomura, Syrdal, and Dautenhahn (2015) showed that UK people felt more negatively towards humanoid robots than did Japanese people.
It is worth noting that, despite this particularity of the Japanese people, some studies did not report mirror neuron system activity in Japanese participants watching robots perform actions (Matsuda et al., 2015). A difference between our study and the one by Matsuda, Hiraki, and Ishiguro is the robot used. In their robot condition, Repliee Q2 without its silicone skin was used. The appearances of Repliee Q2 and the robot we used in our study, Nao, are very different. In contrast to Repliee Q2, which seems less attractive, Nao has a cute appearance. Indeed, Nao was designed to make people want to interact with it; it is small and colorful and possesses pleasantly rounded features. Perhaps it is easier to attribute humanness to Nao than to other robots. Thus, it could be interesting in the future to replicate our results with different types of robots.

Nevertheless, in Experiment 3, the results of the attitude questionnaire revealed that the familiarization was not enough to change the attitudes of our participants. This result is not surprising given that the familiarization phase lasted only 10 min, which is certainly not enough time to modify individuals' beliefs. Thus, this result seems to indicate that, in our experiment, only visual familiarity with the robot was modified during the familiarization. This familiarity was enough to enable our participants to activate their sensorimotor representations, even when an observed action differed from their motor experience. In other words, if observers are familiar with an agent, differences from their own embodiment can be ignored. It thus seems that increasing interaction between people and robots would increase familiarity with robots, which would be key to being able to resonate with robots and to recreate an action–language relationship.

In any case, our results indicate that the nature of the agent is an important property of an action for producing semantic activation during word processing. However, we also demonstrated that this property is not essential: if observers are familiar with robots, the modification of this property can be ignored. In this case, action verb processing can be facilitated when observing actions performed by human as well as robotic agents. This supports the idea that the motor repertoire is flexible and can bridge differences in embodiment. In agreement, brain imaging data have shown that the mirror neuron system can be activated even when watching familiar actions that are not part of our motor repertoire. For example, Gazzola et al. (2007b) showed that aplasic subjects (born without hands) activate their mirror neuron system as strongly as typically developed adults when viewing hand actions.

Overall, the results of the present studies confirm the flexibility of the activation of sensorimotor representations and extend previous findings by demonstrating that observing an action facilitates language processing not only when perceiving a human agent but also when perceiving a non-human agent, such as a robot with which we are familiar. Moreover, the main finding of this study is to demonstrate the plasticity of the action–language relationship. Our results demonstrate that the activation of sensorimotor representations is sensitive to prior experience. In addition, our results indicate that the updating of these sensorimotor representations is rapid. In a recent experiment, Bidet-Ildei et al.
(2017) demonstrated the quick updating of sensorimotor representations, showing that 24 h of sensorimotor deprivation is enough to affect action verb processing. We now demonstrate that a short period of familiarization (10 min) is enough to modify these representations, making their activation more flexible. Thus, to answer the question in our title: one's language could certainly be influenced when watching C-3PO moving, but only if he or she has seen at least 10 min of one of the Star Wars movies!

Conclusion

In conclusion, as robots are becoming more integrated into everyday life, it is becoming increasingly necessary to understand how the perception of robots influences our cognitive functions. Thus, the data presented here must be taken into consideration to improve human–robot interaction. In particular, the use of robots is now being considered in education and in therapy. Indeed, as robots have predictable behavior and simple conversational functions, they might be adapted to speech-language therapy for people with language disorders or for more specific populations. For example, the French association "Autistes sans frontières" has tested Nao as a remediation tool for children with autism spectrum disorder to enhance their communication as well as their speaking and listening skills. The results of our studies are encouraging, since they indicate that the observation of robots can influence language processing and that familiarizing people could be the key to optimizing this kind of therapy. However, before reaching that point, numerous questions still need to be addressed. As a first step, the objective of future research could be to assess the action–language relationship in populations with autism spectrum disorder to determine whether this relationship is (1) expressed the same way and (2) influenced the same way as in a typical population.

Acknowledgements This work was supported by the Japan Society for the Promotion of Science. Experiment 1 was conducted during an in-doc by Sophie-Anne Beauprez at the University of Tokyo. We would like to thank Yoshida Fumiaki and Masaoka Shiori for their help in the experiment development, the recruitment of the participants, and data collection.

Compliance with ethical standards

Ethical approval The experiments were conducted in accordance with the ethical standards of the institution and with the 1964 Helsinki declaration.

Appendix A: Prime video and list of verbs

Actions of the videos (English translation/French/Japanese):

Acquiesce / Acquiescer / うなずく
Applaud / Applaudir / たたく
Clean / Nettoyer / ふく
Deny / Nier / くびをふる
Kneel / S’agenouiller / しゃがむ
Move back / Reculer / さがる
Read / Lire / みる
Reverence / S’incliner / おじぎする
Salute / Saluer / てをふる
Scratch / Gratter / かく
Show / Montrer / ゆびさす
Stand up / Se lever / たつ
Take / Prendre / とる
Throw (in the bin) / Jeter / すてる
Throw (a ball) / Lancer / なげる
Turn / Tourner / まわる

Verbs (congruent action verb / incongruent action verb / non-action verb):

Acquiesce / Read / Want
Applaud / Move back / Dream
Clean / Take / Wish
Deny / Throw (a ball) / Recognize
Kneel / Scratch / Hope
Move back / Turn / Envy
Read / Deny / Progress
Reverence / Throw (in a bin) / Believe
Salute / Stand up / Cost
Scratch / Salute / Understand
Show / Acquiesce / Guess
Stand up / Show / Choose
Take / Kneel / Doubt
Throw / Clean / Consider
Throw / Reverence / Think
Turn / Applaud / Have

Appendix B: Examples of frames, "cleaning" action performed by the robotic and the human agent

References

Amoruso, L., & Urgesi, C. (2016). Familiarity modulates motor acti- vation while other species’ actions are observed: A magnetic stimulation study. European Journal of Neuroscience, 43(6), 765–772. https://doi.org/10.1111/ejn.13154.
Anelli, F., Borghi, A. M., & Nicoletti, R. (2014). Grasping the pain: Motor resonance with dangerous affordances. Consciousness and Cognition, 21(4), 1627–1639. https://doi.org/10.1016/j. concog.2012.09.001.
Aravena, P., et al. (2012). Grip force reveals the context sensitivity of language-induced motor activity during “action words” pro- cessing: Evidence from sentential negation. PLoS One, 7(12), e50287. https://doi.org/10.1371/journal.pone.0050287.
Arita, A., Hiraki, K., Kanda, T., & Ishiguro, H. (2005). Can we talk to robots? Ten-month-old infants expected interactive humanoid robots to be talked to by persons. Cognition, 95(3), B49B57. https://doi.org/10.1016/j.cognition.2004.08.001.
Aziz-Zadeh, L., & Damasio, A. (2008). Embodied semantics for actions: Findings from functional brain imaging. Journal of Physiology-Paris, 102(1), 3539. https://doi.org/10.1016/j.jphys paris.2008.03.012.
Aziz-Zadeh, L., Wilson, S. M., Rizzolatti, G., & Iacoboni, M. (2006). Congruent embodied representations for visually pre- sented actions and linguistic phrases describing actions. Cur- rent Biology, 16(18), 18181823. https://doi.org/10.1016/j. cub.2006.07.060.
Barsalou, L. W. (1999). Perceptual symbol systems. The Behavioral and Brain Sciences, 22(04), 577–609. https://doi.org/10.1017/
S0140525X99002149.
Bates, D., Mächler, M., Bolker, B., & Walker, S. (2014). Fitting linear mixed-effects models using lme4. Journal of Statistical Software, 67(1), 1–48. https://doi.org/10.18637/jss.v067.i01.
Beauprez, S.-A., & Bidet-Ildei, C. (2017). Perceiving a biological human movement facilitates action verb processing. Current Psy- chology. https://doi.org/10.1007/s12144-017-9694-5.
Beauprez, S.-A., & Bidet-Ildei, C. (2018). The kinematics but not the orientation of an action influences language processing. Journal of Experimental Psychology: Human Perception and Performance, 44(11), 1712–1726. https://doi.org/10.1037/xhp0000568.
Beauprez, S.-A., Toussaint, L., & Bidet-Ildei, C. (2018). When con- text modulates the influence of action observation on language
processing. PLoS One, 13(8), 1–12. https://doi.org/10.1371/journ al.pone.0201966.
Bidet-Ildei, C., Méary, D., & Orliaguet, J.-P. (2006). Visual perception of elliptic movements in 7- to-11-year-old children: Influence of motor rules. Current Psychology Letters. Behaviour, Brain & Cognition, 19(2), 1–10.
Bidet-Ildei, C., Meugnot, A., Beauprez, S.-A., Gimenes, M., & Tous- saint, L. (2017). Short-term upper limb immobilization affects action-word understanding. Journal of Experimental Psychology. Learning, Memory, and Cognition, 43(7), 1129–1139. https://doi. org/10.1037/xlm0000373.
Bidet-Ildei, C., Sparrow, L., & Coello, Y. (2011). Reading action word affects the visual perception of biological motion. Acta Psycholog- ica, 137(3), 330334. https://doi.org/10.1016/j.actpsy.2011.04.001.
Bisio, A., Sciutti, A., Nori, F., Metta, G., Fadiga, L., et al. (2014). Motor contagion during human–human and human–robot inter- action. PLoS One, 9(8), e106172. https://doi.org/10.1371/journ al.pone.0106172.
Boulenger, V., et al.(2006). Cross-talk between language processes and overt motor behavior in the first 200 msec of processing. Jour- nal of Cognitive Neuroscience, 18(10), 16071615. https://doi. org/10.1162/jocn.2006.18.10.1607.
Calvo-Merino, B., Glaser, D. E., Grèzes, J., Passingham, R. E., & Hag- gard, P. (2005). Action observation and acquired motor skills: An FMRI study with expert dancers. Cerebral Cortex, 15(8), 12431249. https://doi.org/10.1093/cercor/bhi007.
Chaminade, T., & Cheng, G. (2009). Social cognitive neuroscience and humanoid robotics. Journal of Physiology-Paris, 103(3–5), 286–295. https://doi.org/10.1016/j.jphysparis.2009.08.011.
Chaminade, T., Hodgins, J., & Kawato, M. (2007). Anthropomorphism influences perception of computer-animated characters’ actions. Social Cognitive and Affective Neuroscience, 2(3), 206. https://
doi.org/10.1093/scan/nsm017.
Cross, E. S., Kraemer, D. J. M., de Hamilton, A. F. C., Kelley, W. M., & Grafton, S. T. (2009). Sensitivity of the action observation network to physical and observational learning. Cerebral Cortex, 19(2), 315326. https://doi.org/10.1093/cercor/bhn083.
Faul, F., Erdfelder, E., Lang, A.-G., & Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39(2), 175191.
Gazzola, V., Rizzolatti, G., Wicker, B., & Keysers, C. (2007a). The anthropomorphic brain: The mirror neuron system responds to human and robotic actions. NeuroImage, 35(4), 1674–1684.
Gazzola, V., et al. (2007b). Aplasics born without hands mirror the goal of hand actions with their feet. Current Biology, 17(14), 1235–1240. https://doi.org/10.1016/j.cub.2007.06.045.
Glenberg, A. M., & Kaschak, M. P. (2002). Grounding language in action. Psychonomic Bulletin & Review, 9(3), 558–565. https://doi.org/10.3758/BF03196313.
Gutsell, J., & Inzlicht, M. (2010). Empathy constrained: Prejudice predicts reduced mental simulation of actions during observation of outgroups. Journal of Experimental Social Psychology, 46(5), 841–845. https://doi.org/10.1016/j.jesp.2010.03.011.
Hauk, O., Johnsrude, I., & Pulvermüller, F. (2004). Somatotopic representation of action words in human motor and premotor cortex. Neuron, 41(2), 301–307. https://doi.org/10.1016/S0896-6273(03)00838-9.
Hofer, T., Hauf, P., & Aschersleben, G. (2005). Infant’s perception of goal-directed actions performed by a mechanical device. Infant Behavior and Development, 28(4), 1–11.
Iacoboni, M., et al. (2005). Grasping the intentions of others with one’s own mirror neuron system. PLoS Biology, 3(3), e79. https://doi.org/10.1371/journal.pbio.0030079.
Kandel, S., Orliaguet, J. P., & Viviani, P. (2000). Perceptual anticipation in handwriting: The role of implicit motor competence. Perception & Psychophysics, 62(4), 706–716. https://doi.org/10.3758/BF03206917.
Kilner, J. M., Paulignan, Y., & Blakemore, S. J. (2003). An interference effect of observed biological movement on action. Current Biology, 13(6), 522–525.
Klepp, A., et al. (2014). Neuromagnetic hand and foot motor sources recruited during action verb processing. Brain and Language, 128(1), 41–52. https://doi.org/10.1016/j.bandl.2013.12.001.
Kuipers, J.-R., van Koningsbruggen, M., & Thierry, G. (2013). Semantic priming in the motor cortex: Evidence from combined repetitive transcranial magnetic stimulation and event-related potential. Neuroreport, 24(12), 646–651. https://doi.org/10.1097/WNR.0b013e3283631467.
Kupferberg, A., Huber, M., Helfer, B., Lenz, C., Knoll, A., Glasauer, S., et al. (2012). Moving just like you: Motor interference depends on similar motility of agent and observer. PLoS One, 7(6), e39637. https://doi.org/10.1371/journal.pone.0039637.
Liepelt, R., Dolk, T., & Prinz, W. (2012). Bidirectional semantic interference between action and speech. Psychological Research Psychologische Forschung, 76(4), 446–455. https://doi.org/10.1007/s00426-011-0390-z.
Liew, S.-L., Han, S., & Aziz-Zadeh, L. (2011). Familiarity modulates mirror neuron and mentalizing regions during intention understanding. Human Brain Mapping, 32(11), 1986–1997. https://doi.org/10.1002/hbm.21164.
Lindemann, O., Stenneken, P., van Schie, H. T., & Bekkering, H. (2006). Semantic activation in action planning. Journal of Experimental Psychology: Human Perception and Performance, 32(3), 633–643. https://doi.org/10.1037/0096-1523.32.3.633.
Lyons, I. M., et al. (2010). The role of personal experience in the neural processing of action-related language. Brain and Language, 112(3), 214–222. https://doi.org/10.1016/j.bandl.2009.05.006.
MacDorman, K. F., Vasudevan, S. K., & Ho, C.-C. (2009). Does Japan really have robot mania? Comparing attitudes by implicit and explicit measures. AI & Society, 23(4), 485–510. https://doi.org/10.1007/s00146-008-0181-2.
Martel, L., Bidet-Ildei, C., & Coello, Y. (2011). Anticipating the terminal position of an observed action: Effect of kinematic, structural, and identity information. Visual Cognition, 19(6), 785–798. https://doi.org/10.1080/13506285.2011.587847.
Matsuda, G., Hiraki, K., & Ishiguro, H. (2015). EEG-based Mu rhythm suppression to measure the effects of appearance and motion on perceived human likeness of a robot. Journal of Human–Robot Interaction, 5(1), 68–81. https://doi.org/10.5898/JHRI.5.1.Matsuda.
Mollo, G., Pulvermüller, F., & Hauk, O. (2016). Movement priming of EEG/MEG brain responses for action-words characterizes the link between language and action. Cortex, 74, 262–276. https://doi.org/10.1016/j.cortex.2015.10.021.
Nomura, T., Kanda, T., & Suzuki, T. (2006). Experimental investigation into influence of negative attitudes toward robots on human–robot interaction. AI & Society, 20(2), 138–150. https://doi.org/10.1007/s00146-005-0012-7.
Nomura, T. T., Syrdal, D. S., & Dautenhahn, K. (2015). Differences on social acceptance of humanoid robots between Japan and the UK. In Proceedings of 4th international symposium on new frontiers in human–robot interaction. The Society for the Study of Artificial Intelligence and the Simulation of Behaviour (AISB), Canterbury, United Kingdom (pp. 115–120).
Oberman, L., McCleery, J., Ramachandran, V., & Pineda, J. (2007). EEG evidence for mirror neuron activity during the observation of human and robot actions: Toward an analysis of the human qualities of interactive robots. Neurocomputing, 70(13–15), 2194–2203. https://doi.org/10.1016/j.neucom.2006.02.024.
Pavlova, M. A., Krägeloh-Mann, I., Birbaumer, N., & Sokolov, A. (2002). Biological motion shown backwards: The apparent-facing effect. Perception, 31(4), 435–443. https://doi.org/10.1068/p3262.
Press, C., Bird, G., Flach, R., & Heyes, C. (2005). Robotic movement elicits automatic imitation. Cognitive Brain Research, 25(3), 632–640. https://doi.org/10.1016/j.cogbrainres.2005.08.020.
Press, C., Gillmeister, H., & Heyes, C. (2006). Bottom-up, not top-down, modulation of imitation by human and robotic models. European Journal of Neuroscience, 24(8), 2415–2419. https://doi.org/10.1111/j.1460-9568.2006.05115.x.
Pulvermüller, F. (2005). Brain mechanisms linking language and action. Nature Reviews Neuroscience, 6(7), 576–582. https://doi.org/10.1038/nrn1706.
Ranzini, M., Borghi, A. M., & Nicoletti, R. (2011). With hands I do not centre! Action- and object-related effects of hand-cueing in the line bisection. Neuropsychologia. https://doi.org/10.1016/j.neuropsychologia.2011.06.019.
Rizzolatti, G., & Craighero, L. (2004). The mirror-neuron system. Annual Review of Neuroscience, 27, 169–192. https://doi.org/10.1146/annurev.neuro.27.070203.144230.
Rizzolatti, G., Fogassi, L., & Gallese, V. (2001). Neurophysiological mechanisms underlying the understanding and imitation of action. Nature Reviews Neuroscience, 2(9), 661–670. https://doi.org/10.1038/35090060.
Stenzel, A., Chinellato, E., Tirado Bou, M. A., del Pobil, A. P., Lappe, M., & Liepelt, R. (2012). When humanoid robots become human-like interaction partners: Corepresentation of robotic actions. Journal of Experimental Psychology: Human Perception and Performance, 38(5), 1073–1077. https://doi.org/10.1037/a0029493.
Tai, Y., Scherfler, C., Brooks, D., Sawamoto, N., & Castiello, U. (2004). The human premotor cortex is “mirror” only for biological actions. Current Biology, 14(2), 117–120. https://doi.org/10.1016/j.cub.2004.01.005.
Wykowska, A., Wiese, E., Prosser, A., & Müller, H. J. (2014). Beliefs about the minds of others influence how we process sensory information. PLoS One, 9(4), e94339. https://doi.org/10.1371/journal.pone.0094339.
Zwaan, R. A., & Taylor, L. J. (2006). Seeing, acting, understanding: Motor resonance in language comprehension. Journal of Experimental Psychology: General, 135(1), 1–11. https://doi.org/10.1037/0096-3445.135.1.1.
Publisher’s Note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.