
A modular cognitive model of socially embedded robot partners for information support

Abstract

The purpose of this research is the development of a robot partner system based on a modular cognitive model for human support. Smooth human–robot interaction is needed in order to provide information support for daily life. Interaction with a robot partner involves many elements, including verbal communication, non-verbal communication, and embodiment. These modular structures are used in our robot partner system, named “iPhonoid-C”. In this paper, we propose a Cognitive–Emotional–Behavioral (C–E–B) model derived from current research in cognitive science to realize the robot’s personal features, leading to a socially embedded robot partner. The C–E–B model is integrated with the modular cognitive system of the robot partner. Given this integration, the robot partner is able to exhibit a wide variety of interactions with the user, depending on environmental factors as well as on the relationships between the cognitive modules and the C–E–B model.

Background

Since the industrial revolution, machines have been developed to improve work efficiency [1]. Given this technological progress, the quality of human life continues to improve, as machines can perform mundane tasks without human supervision. Robots are therefore being developed to provide a variety of services to humans [2, 3], because the introduction of human-friendly robot partners is one possible way to support people who need help [4]. In the development of a socially embedded robot partner, human–robot interaction plays an important role. If the robot can socially interact with the human, it can provide many services such as information support, health promotion, and rehabilitation. In order to develop a socially embedded robot partner, we have to consider the human communication system.

In this paper, we propose a system architecture for the robot based on the Cognitive–Emotional–Behavioral (C–E–B) model (Fig. 1). The C–E–B model gives individuality to the robot, which is controlled in order to adapt to the user’s interaction style. The system structure of the C–E–B model takes a hint from social systems theory, in which each structure is incorporated with other structures. According to the social systems theory of Niklas Luhmann, a system has relationships with other systems based on communication [5–7]. The three systems that are crucial for human communication are the cognitive system, the emotion system, and the behavior system. These three systems form the modular structure of the socially embedded robot partner. For this modular structure, social systems theory provides the means to exchange information between the decentralized systems and so provides the necessary binding.

Fig. 1 C–E–B model on ternary diagram. This figure shows the structure of the C–E–B model of the robot partner system. The ternary diagram shows the parameter position of \(\alpha\), \(\beta\), and \(\gamma\) on the triangle structure

The robot’s interaction style is changed by the combination of parameters in the C–E–B model. This model can be used to change the robot’s role based on role theory [8], which explains patterns of human conduct in terms of roles, expectations, identities, and social positions. Following this idea, the robot partner can also consider its role depending on whom it interacts with and where. That is, personal and environmental information is used to produce different interaction styles; for example, the user’s gender is considered in order to give a different style of interaction. We therefore adopt this concept for the personality of the robot, based on the parameters in Table 4. The structure of this system is explained in “Implementation of C–E–B model for the robot partner”.

In this research, the robot parameters that give the robot a personality suited to each user’s needs were set by trial and error. To develop a robot partner that meets user needs, the parameters must be modified according to the purpose of each user. Thus, the experimental results show one-to-one human–robot interaction; for other situations, the parameters need to be designed accordingly. Our robot partner system has been developed so that its design can be changed according to various needs based on the modular structure.

Therefore, the goal of this paper is to realize a modular robot partner system by proposing and integrating a modular cognitive model based on the C–E–B relation. This paper is organized as follows. “Related work” presents social robot partners intended to improve quality of life. “A robot partner system” presents our robot partner “iPhonoid-C”, including its hardware and software. In “The system concept: Cognitive–Emotional–Behavioral model”, we discuss the modular cognitive model used in this research for the robot partner. We explain the communication system of the robot partner, including the types of communication implemented in the robot and the conversation system algorithm based on the C–E–B model. “Experimental results” presents results of human–robot interaction. “Conclusions and future work” concludes the paper and discusses future work.

Related work

Many researchers have tried to clarify human cognition, emotion, and behavior. The realization of these human factors in robot development is the research topic of developmental robotics (epigenetic robotics) [9–11]. Accordingly, it is possible to refer to human properties in order to realize a robot partner system. When a human communicates with the external environment, the human uses cognitive abilities, emotions, and behavior, e.g., “What have we recognized?”, “What is the current emotional state?”, “What is the behavioral state?”. Thus, human communication is the result of a complex process. Consequently, human cognition, emotion, and behavior should be considered in the development of a robot partner. If these elements are fully reflected in the robot partner, the robot can be used as a socially friendly partner.

Various theories have discussed human cognition, emotions, and behavior. In the James–Lange theory of emotion, stimuli activate neurons, and this causes emotional changes [12, 13]. For example, when we face a dangerous situation, e.g., we meet a dangerous animal, we feel fear because we see the danger, which can be considered a cognitive stimulus. The Cannon–Bard theory argues that cognition and emotion occur at the same time [14]; however, cognition and emotion can be separated, and emotion can exist alone without cognitive information. The Schachter–Singer theory argues that an emotional stimulus causes general physiological arousal, the brain interprets this arousal cognitively, and this cognitive interpretation leads to the emotional experience [15]. This means that the physical state of the human is an important parameter when interpreting emotion. The reaction of the human body can differ based on emotional changes [16]; thus, the human body and emotion are closely related [17, 18]. The cognitive appraisal theory of emotion was proposed by Richard Lazarus [19, 20]. His theory deals with stress as a stimulus factor of emotions: emotion occurs based on the cognitive appraisal of the stimulus. We have focused on these theories of emotion for the robot system. Human cognition, emotion, and behavior have a close relationship and react flexibly to each other. The outcome of the system will differ based on the environmental structure. This is interpreted as the individuality of the robot, meaning that the robot can have different individualities. Our goal is to realize a robot that is able to perform different interactions depending on the C–E–B model.

We intend to draw on cognitive architectures and modular systems to define a modular cognitive model for our robot partner system. The cognitive model of the robot partner is based on various viewpoints of system architecture (Fig. 2) [21]. These cognitive models differ from traditional expert systems in the field of artificial intelligence; they concern the understanding and application of human models in artificial intelligence and machine learning (e.g., how humans have cognition, and how emotional and behavioral reactions are shown). Because of the complexity of the environment, the human’s final reaction may vary based on the cognitive situation. These human factors are a good foundation for studying robot partner systems. For example, in terms of cognitive architecture, EPAM (Elementary Perceiver and Memorizer) provided a psychological theory of human learning and memory as a computer architecture in the 1960s [22]. EPAM enabled an architecture for cognition, including fundamental aspects of the human mind. Research on implementing cognitive models in computational architectures took off in the 1980s, with ACT-R (Adaptive Control of Thought-Rational) and Soar being the most popular cognitive architectures [23]. Many researchers have used these architectures to build cognitive models covering a wide variety of human viewpoints. Soar provides a mechanism to achieve a goal based on a production system; in other words, the architecture controls behavior by explicit production rules. Cognitive architectures share the common point that they can be implemented at the computational level through an understanding of the human cognitive process.

Fig. 2 History of cognitive architectures. This figure shows the order of development of cognitive architectures. After the development of EPAM, many cognitive architectures have been developed

Recently, human emotional models have also been considered in the development of emotional robots. Emotional expression of social robots is considered an important factor [24]. This is one of the reasons why an emotional model should be considered within a cognitive model, because “Emotion involves cognitive appraisals” [25]. A prominent example is the Pepper robot developed by SoftBank. The robot has two emotional modules: emotion recognition based on human voice intonation, and a robot emotional model using an emotion map [26]. These emotional structures are used to make a social robot partner.

The effect of robot behavior is also important for interacting with humans, because behavior has a close relation with cognition and emotion [27]. Laban Movement Analysis (LMA) is a well-known theory of human behavior that can be considered part of somatics. LMA investigates the processes underlying human movements [28]. This theory can be used to define robot gestures, and research on robot gestures has been conducted based on it [29, 30]. By applying such human references to the robot, gesture analysis of the human system can give meaning to the robot’s actions during human–robot communication.

A modular structure is very helpful for robot development as a technology development guideline, because many devices are now multipurpose, high-specification, and low cost thanks to the development of industry and technology. In particular, there is a worldwide movement toward modules that can be easily integrated with each other, such as ROS (Robot Operating System) and RTM (Robotics Technology Middleware), which increases the compatibility of individual modules. For example, the robot NAO has a ROS driver [31]. By applying various modules, a robot can have a structure capable of almost unlimited expansion. In Japan, RTM modules are being developed to support compatibility across various devices [32]. The robot PALRO is available for a variety of services as a stand-alone or networked application [33], and it is compatible with RTM for its robot control system. Therefore, we consider a modular structure for developing our robot system.

A robot partner system

In this section, we describe the hardware structure of the robot, and the elements of the software such as verbal, nonverbal, and emotional models to be considered for designing a social robot.

Problems

Socially interactive robots must address robot design problems. Fong et al. have shown that common design problems relate to cognition, perception, action, human–robot interaction, and architecture [34]. Socially interactive robot partners need to be proficient in recognizing and interpreting human activities and behaviors. Furthermore, robot partners should interact with humans based on an understanding of the human. Since communication with humans is governed by a number of implicit rules, it is necessary to imitate human abilities. Therefore, we propose a system that can perform different patterns of interaction according to cognitive, emotional, and behavioral attributes modeled on human cognitive ability.

The robot partner: iPhonoid-C

The main points of this robot partner system are as follows: the applicability of the system, based on a policy for the overall design of the robot partner (hardware and software), and robot communication achieved through system integration with a modular structure. We also considered how the robot architecture should be developed to maintain good relationships with the user.

In order to design a robot, we need to consider what functions are needed. System design guidelines are important for achieving the specification of the robot application. The robot modules proposed here can serve as such a reference, and the robot system can be built appropriately by adjusting the modules according to the service. On the hardware side, each component has a modular configuration, so different designs can be realized according to the configuration of components, and users can develop the design they want.

As a design guideline of our robot system, the hardware can be freely customized using a 3D printer, and the software is designed to realize system integration based on a modular structure. In order to develop a social robot, we have to consider how to design the robot partner so that it can become widely used. A robot with minimum functionality is required in order to bring the robot to the business market at a low price. Originally, our robot system was controlled by ZigBee communication and a wireless camera. However, we cut down the cost by removing the sensors from the robot body, since the smart device is already equipped with many sensors. In order to develop a robot partner, we have to consider the human communication system, and many researchers have tried to clarify human cognition, emotion, and behavior. In our previous papers, we discussed the application of human factors in the robot system through an emotional model [35], Laban movement theory [36], and the cognitive model of iPhonoid [37].

The “iPhonoid” is a series of robot partners based on iOS devices [36, 38, 39], composed of a smart device, a robot body, a microcontroller, and servomotors. The robot body has a supporting structure for fixing the device and charging its battery. Figure 3 shows the 8 degrees of freedom of the robot body: 3 for the left arm’s joint angles, 3 for the right arm’s joint angles, and 1 at the waist for body rotation. The neck also has 1 degree of freedom for tilt movement. With this structure, gestural expression can be realized. A 3D printer is used to build new designs for the robot partner (Fig. 3); it is possible to replace the lower part of the body with another structure such as wheels, legs, or a fixed base. Many people can share the robot design through 3D printing based on the modular design. We apply Bluetooth 4.0 LE to be compatible with iOS. To support the same system across iOS devices, we use the u-blox (connectBlue) OLS426 as the Bluetooth connection module to control the robot’s body [40].

Fig. 3 Robot partner: iPhonoid-C. This figure shows the robot partner named “iPhonoid-C”. “iPhonoid” is a series of robot partners based on iOS devices. This robot is composed of a smart device, a micro-computer, motors, and a body structure

Nonverbal communication part

Face detection and classification follow the methods of [41, 42] in order to perform human detection, smile detection, and gender and race classification. To consider the emotional state of the robot, face recognition of a person, smile information, and gender or race classification are used to update the parameters of the emotional state [35]. We apply simple fuzzy inference for facial expression generation, which is crucial for nonverbal communication, and we use Laban movement theory to generate robot gestures. The expression format may be determined randomly when each module is present; when the emotion of the robot is considered important, the facial expression and the gesture differ according to the change of the emotion [37]. The robot’s gestural expression is generated based on Laban Movement Analysis (LMA). The robot’s gestures and body movements consist of four gesture segments based on the emotional model and LMA theory [43]. Emotion perception is the ability to recognize and identify other people’s emotions [44] and is also an important factor in social interaction. Inspired by emotion perception, we use perceptual sensory information to determine the reaction of the robot. We also consider that human nonverbal behavior differs depending on the situation. Thus, we use sensory information for nonverbal communication in order to consider externally and internally generated emotions [45], e.g., display touch status, camera, microphone, compass, battery status, accelerometer, proximity status, and device shake motion [46–48].

Representative examples of human nonverbal behaviors include changes in facial expression and the use of gestures. In order to obtain such information from human beings, in this paper we use the sensor information of the robot to grasp the human’s condition and use it as input data for nonverbal interaction. The information from each sensor is classified and used as shown in Table 1. The sensors are divided into three parts as follows: instinctive values such as battery information, similar to human hunger, belong to the behavioral mode; information involving emotions, such as the human’s smile, belongs to the emotional mode; and simple recognition of the external environment belongs to the cognitive mode.

Table 1 Sensory information for estimation

The detailed information is shown in Table 2. To exploit the merits of the smart device, we also use the touch interface in nonverbal communication (Fig. 4). First, when the human touches the robot’s forehead or jaw, a gray zone appears on the display; touch input on the gray zone is then used as a communication input parameter. Using a touch gesture, it is also possible to switch the display from the robot face to the camera view used for image processing. For the experiment, all sensory signals are scaled to output values from 0 to 1 (Table 2).

Table 2 Sensory information from smart device
Fig. 4 The input information of the touch interface. We used the touch interface as part of the robot’s sensory information. The touch information is recognized through the robot face area. The special commands correspond to the forehead and chin areas

The values that the robot handles are normalized so that they are convenient to use in the model calculations; each sensor value is scaled to the range 0–1. For each sensor, parameters were set to express the robot’s personality. For example, the gesture “hand left right” is assigned a negative emotion parameter. The robot is configured to like being with people, so its emotional state changes to a lonely state when the human moves away and to a happy state when the human comes closer.

For example, for the human distance parameter, the robot scales the distance between human and robot to this range (maximum distance 1.5 m, minimum distance 0.1 m).

After scaling, these parameters are used to define the C–E–B model of the robot architecture. All sensory information is obtained in real time, and it is updated every 0.5 s in order to detect changes in the environment. Since all calculations are performed within the smartphone, the interaction runs at 0.5 s intervals, but the update rate of the sensor information is adjustable according to the module configuration. For example, given limited system resources, an update interval of at least 1 s may be used when there is no human in the environment, and a shorter interval may be used to allow smoother interaction when a person is detected. The sensor information has an attenuator: when a sensor is not activated for a certain period of time, its value falls to zero.
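
The following is a minimal sketch of this preprocessing step, assuming a linear scaling to [0, 1] and a simple multiplicative attenuator; the constants other than the 0.1–1.5 m distance range and the 0.5 s update interval are assumptions, not values taken from the paper.

```python
# Minimal sketch of sensor preprocessing: scaling and attenuation.
# Only the 0.1-1.5 m range and the 0.5 s update interval come from the text;
# the scaling direction and decay factor are illustrative assumptions.

UPDATE_INTERVAL_S = 0.5   # sensory information is refreshed every 0.5 s
DIST_MIN_M, DIST_MAX_M = 0.1, 1.5
DECAY_PER_STEP = 0.8      # assumed attenuation factor per update when inactive

def scale_distance(distance_m: float) -> float:
    """Map the human-robot distance onto [0, 1] (assumed: 1.0 = closest)."""
    d = min(max(distance_m, DIST_MIN_M), DIST_MAX_M)
    return (DIST_MAX_M - d) / (DIST_MAX_M - DIST_MIN_M)

def attenuate(value: float, active: bool) -> float:
    """Keep the fresh value if the sensor fired; otherwise let it decay toward zero."""
    return value if active else max(0.0, value * DECAY_PER_STEP)
```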

Verbal communication part

Given this modular structure, we proposed a verbal conversation system [37, 49, 50] that closely follows the aforementioned properties, and discussed its structure based on the three utterance systems of iPhonoid, which are related to different situations as illustrated in Fig. 5. The conversation system consists of three parts. The Conversation Flow Utterance System (CFUS) extracts sentences from previous patterns of conversation [49]. The Sentence Building Utterance System (SBUS) is a grammar-rule-based sentence building system developed to improve the conversation system, since the Conversation Flow Utterance System cannot reply to questions [50]. The Time Dependent Utterance System (TDUS) uses a time parameter to select utterance sentences based on the user’s schedule [37]. The robot system can control the amount of utterance based on content rules [51].
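
As a hedged sketch only, the routing between the three utterance systems could look like the following; the actual selection logic of iPhonoid is more involved, so the conditions and field names below are illustrative assumptions.

```python
# Hedged sketch of routing an input to one of the three utterance systems
# (CFUS, SBUS, TDUS); the conditions are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class UtteranceRequest:
    text: str
    is_question: bool         # e.g., detected from interrogative words
    schedule_triggered: bool  # e.g., the user's schedule matches the current time

def select_utterance_system(req: UtteranceRequest) -> str:
    if req.schedule_triggered:
        return "TDUS"   # time-dependent utterance based on the user schedule
    if req.is_question:
        return "SBUS"   # grammar-rule-based sentence building for questions
    return "CFUS"       # default: follow previous conversation-flow patterns
```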

Fig. 5 The structure of the conversation system. Our proposed conversation system has three components: the Conversation Flow Utterance System (CFUS), the Sentence Building Utterance System (SBUS), and the Time Dependent Utterance System (TDUS). These components are related to different conversation situations according to the human state

In [52], we used the concept of Informationally Structured Space (ISS) in order to store and provide environmental information. The Informationally Structured Space can also be used to share information between robot partners. In this experiment, the usage status of the users of the two robots is stored and shared in a database, and the interaction information between robot and user is used as a criterion for judging whether another user is present. In this paper, we use part of this system. An “information pool” is defined as a subset of the Informationally Structured Space; it is used as a database to share information, e.g., utterance sentences, utterance time schedules, and utterance properties. The robot partly uses an ontology structure to share interaction information as the robot’s memory (Fig. 6). Specifically, we use a domain ontology, a database concept for sharing information between multiple robots within a specific knowledge area. Information is stored and shared using the information pool of the robot’s database. In order to interact with humans, information about the gestures required for interaction, together with their detailed parameters, must also be available. Therefore, we define the database concept used to write or read this information as the “information pool”.

Fig. 6 The concept of the information pool and robot structure. This figure shows the structure of the robot partner system and its relationship with other robots. The robot can share information with other robots by using the information pool

This concept can be scaled up to support a community for information sharing among humans by using robot partners. Figure 6 describes the relationship between robot partners, as well as the integrated modules within one robot partner. Based on this interconnected structure, robot partners are able to obtain knowledge of humans without having to be with them physically.
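
As an illustration, a shared information-pool record could be organized as below; the field names are assumptions based on the properties mentioned in the text (utterance sentence, utterance time schedule, utterance properties, and interaction state), not the actual database schema.

```python
# Hedged sketch of an information-pool record shared between robot partners.
# All keys and values are illustrative assumptions.

information_pool_entry = {
    "robot_id": "iPhonoid-C",
    "user_id": "user_A",
    "interaction_active": True,          # whether this user is currently interacting
    "utterances": [
        {
            "sentence_id": 12,
            "sentence": "Good morning! Did you sleep well?",
            "time_schedule": "07:30",    # when this utterance becomes relevant
            "selection_count": 3,        # how often it has been chosen so far
        }
    ],
}

# Another robot (e.g., iPhonoid-D) could query this shared record to answer
# whether the other user is interacting with its robot at the moment.
```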

Emotional model for communication system

We have discussed the robot’s emotional model through various experiments [33, 35, 37], through which the structure of the emotion model has been established and the algorithm improved. The structure of the emotional model is depicted in Fig. 7. The normal state of the robot is defined as “Neutral”. The robot has eight feelings: Happy, Surprise, Angry, Disgust, Sad, Frightened, Fearful, and Thrilling. This full emotional model is used for a large fuzzy value. A medium fuzzy value uses four feelings: “Happy”, “Angry”, “Sad”, and “Fearful”. A small fuzzy value uses only two feelings: “Happy” and “Angry” (Fig. 7).

Fig. 7 The structure of the emotional model. This figure shows the emotional model structure. We use four pieces of predefined information in the emotional parameters
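
A minimal sketch of how the set of available feelings could shrink with the fuzzy value level is given below, following the description above (eight feelings for a large value, four for medium, two for small); the numeric thresholds are assumptions.

```python
# Sketch of selecting the feeling set by fuzzy value level; thresholds assumed.

FEELINGS_LARGE = ["Happy", "Surprise", "Angry", "Disgust",
                  "Sad", "Frightened", "Fearful", "Thrilling"]
FEELINGS_MEDIUM = ["Happy", "Angry", "Sad", "Fearful"]
FEELINGS_SMALL = ["Happy", "Angry"]

def available_feelings(fuzzy_value: float) -> list[str]:
    """Return the feeling set for the given fuzzy value ('Neutral' is always the default state)."""
    if fuzzy_value >= 0.66:      # assumed threshold for "large"
        return FEELINGS_LARGE
    if fuzzy_value >= 0.33:      # assumed threshold for "medium"
        return FEELINGS_MEDIUM
    return FEELINGS_SMALL
```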

The system concept: Cognitive–Emotional–Behavioral model

In order to develop a social robot, it is necessary to devise the socially necessary components. In general, a person needs several components for human interaction, such as interacting, thinking, feeling, and taking action. Human attitude has been modeled in terms of affect, behavior, and cognition [53], and emotion is also an important factor in human cognition [54]. In this paper, we focus on human cognition, emotion, and behavior in order to design a robot interaction system using verbal, nonverbal, and emotion models.

Cognitive–Emotional–Behavioral model

Human communication does not only transfer information; it differs based on various factors and situations. The shape of the communication changes based on the recognized information, the emotional state, and physical conditions.

There are implicit rules of communication that are essential for communication to feel natural; for example, Grice’s cooperative principle spells out such implicit rules [55]. From the point of view of cognitive linguistics [56], language ability is considered to be included in the general cognitive abilities of humans. Therefore, cognition occupies an important position in human conversation. Such human conversation is changed by emotions and is sometimes expressed through behaviors. Thus, in this paper, we propose a robot system based on three parts existing in human beings: cognition, emotion, and behavior.

The purpose of this paper is to enable interaction appropriate to the environmental situation. By having a C–E–B structure that imitates the human structure, the aim is to build a robot partner whose interaction pattern differs according to the user or the environment.

Human cognitive processes include emotions and behaviors, and cognitive activities are shaped by emotion and behavior; these C–E–B components are therefore closely related to each other. Accordingly, we propose the C–E–B model to change the interaction pattern of the robot. Research in various fields studies human cognition, emotion, and behavior. The work in [53] discusses how behavior and cognition affect human attitudes, and therapists categorize the details of cognition, emotion, and behavior for therapy [62]. We therefore consider the three components of the cognitive model to be important in understanding human interactions. Based on these studies, we have developed the basic structure of a cognitive model for robot partners that interact with humans, consisting of three components: cognition, emotion, and behavior.

These components can be linked to changes in emotion and behavior according to the recognition of the external environment. Occasionally, emotional states change instinctively for survival, and behavior changes to respond quickly to changes in circumstances. The robot partner system calculates the importance of each module with respect to the external input and selects which module is to be considered central. Depending on the result, the centrally considered module will differ.

In this research, we use the C–E–B model to give mode differences to human–robot interaction. When defining each component, a wide range of aspects must be considered in order to apply the cognition part to the robot. Therefore, in this research, we approach the cognition of the robot from the definition of cognitive linguistics: human language abilities are considered to be included in general cognitive abilities. Hence, we define the conversation module as the cognitive component, to realize the correlation among language, body, and mind. The emotion module determines the influence on utterance and behavior according to the calculation of the emotion model, and the behavior module generates gestures according to Laban’s theory. When cognition is dominant, the robot focuses on the utterance system; when emotion is dominant, the robot mainly produces emotional expressions; and when behavior is dominant, the robot mainly focuses on gestures as nonverbal communication.
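
A hedged sketch of how the dominant mode could emphasize one output channel while the others are attenuated is shown below; the weighting scheme is an illustrative assumption, not the exact control law of the robot.

```python
# Sketch of mode-to-output emphasis; the 1.0/0.2 weights are assumed.

def emphasize_outputs(dominant: str) -> dict[str, float]:
    """Return relative emphasis for utterance, facial expression, and gesture."""
    base = {"utterance": 0.2, "expression": 0.2, "gesture": 0.2}
    channel = {"C": "utterance", "E": "expression", "B": "gesture"}[dominant]
    base[channel] = 1.0   # the winning mode drives its own output channel
    return base

# Example: if the emotional mode wins, facial expression is emphasized.
print(emphasize_outputs("E"))  # {'utterance': 0.2, 'expression': 1.0, 'gesture': 0.2}
```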

Even if the same robot, the same interaction, and the same system are used, the robot will have a personality. In this paper, we propose a robot partner system using the C–E–B structure because this personality is thought to be important for robots to adapt to society. Furthermore, we describe how the robot partner uses this model based on the relationship of the three components in the C–E–B structure. Thus, we propose a three-dimensional C–E–B structure for the interaction system (Fig. 8).

Fig. 8 The C–E–B model mechanism. The figure shows a conceptual diagram of the relationship between input activity and robot output. The communication model’s output is derived according to each input based on the attitude model

Cognitive, emotional, and behavioral information is an important factor in establishing a robot system similar to the human structure [57, 58]. The robot uses the C–E–B levels to evaluate its current state with the attitude model for each set of stimuli (Table 3). The levels have a competitive relationship: the winning stimulus information is used to select the mode (cognitive, emotional, or behavioral). The equations for selecting the mode are explained in detail in “Implementation of C–E–B model for the robot partner”.

Table 3 The approach of Cognitive, Emotional, Behavioral structure

This model structure has three subsystems in order to change the robot’s communication mode based on the meaning of the sensory information parameters. We categorized the sensor values into cognitive, emotional, and behavioral categories, depending on whether each sensor is closely related to cognition, emotion, or instinctive behavior. These categorized sensor values are shown in Table 1, and Table 3 is used as the selection criterion. These pieces of information compete, and the winning subsystem is used for robot interaction.

As mentioned above, certain properties need to hold for human communication. Table 3 explains our C–E–B approach; it can be referred to for the construction of our robot architecture for the C–E–B model based on human properties.

Implementation of C–E–B model for the robot partner

We explain the C–E–B model based on its three modules, namely the cognitive, emotional, and behavioral modules. In this model, we use several sensory signals from the smart device: human gesture, touch information, touch finger radius, human detection, human distance, color information, smile state, gender information, racial information, proximity sensor status, input sound magnitude, shake motion, compass direction, and battery state (Table 2).

In order to give personality to the robot partner, we also set a preference according to compass direction. In this experiment, the directions were set to south and north: the closer the device faces to south, the closer the parameter is to its maximum, and the closer to north, the lower the parameter.

Each piece of sensory information is divided into three categories (\(C_{in,i}\), \(E_{in,i}\), and \(B_{in,i}\)), where in denotes an input value from sensory information and i is the index of the sensory information within each category. These pieces of sensory information are separated by the classification criteria in Table 1: rational factors are related to the cognitive side and instinctive factors to the behavioral side. Internal information such as the internal state of the robot (excluding emotionally induced states) is dealt with by the behavioral mode (instinctive category); the behavioral mode is mainly expressed physically through the robot body. The cognitive mode is related to human cognition of external information: environmental knowledge that does not concern emotions is processed by the cognitive module (rational category). In contrast, certain external information induces emotions, which we term instinctive information. The emotional mode is related to the internal state derived from the robot’s personality, e.g., which color it is interested in, or differences in gender information (emotional category). Each mode has its field of control: the cognitive module deals with utterances, the emotional module handles the robot’s emotional state, and the behavioral module handles how the robot reacts.
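
The split into the three input categories could be expressed as below. This is only an illustrative subset based on the examples given in the text (battery as instinctive, smile/gender/color as emotional, external-environment recognition as cognitive); the authoritative assignment is the one in Table 1.

```python
# Hedged sketch of grouping normalized sensor values into C_in, E_in, B_in.
# The category mapping is a partial, assumed reconstruction of Table 1.

SENSOR_CATEGORY = {
    "human_detection": "C",      # recognition of the external environment
    "smile_state": "E",          # emotion-related information
    "gender_information": "E",
    "color_information": "E",
    "battery_state": "B",        # instinctive/internal state, like hunger
}

def split_by_category(values: dict[str, float]) -> dict[str, list[float]]:
    """Group normalized sensor values into the C, E, and B input lists."""
    groups: dict[str, list[float]] = {"C": [], "E": [], "B": []}
    for name, value in values.items():
        category = SENSOR_CATEGORY.get(name)
        if category is not None:
            groups[category].append(value)
    return groups
```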

The sensory information uses a decay coefficient. We define a weight matrix to realize the robot’s personality: in Eq. (3) the matrix composed of \(a_{i,j}\) is the robot personality matrix. Equation (1) handles the sensory input information and is used to define the mode parameters of the C–E–B model; each parameter is then scaled to produce the C–E–B weights. The personality matrix corresponds to the weight parameters that can change the behavior of the robot. The system calculates the average of the sensory signal values for each parameter as shown in Eq. (1):

$$\begin{aligned} P_{C} &= \frac{1}{5}\sum _{i=1}^{5}C_{in,i}\\ P_{E} &= \frac{1}{5}\sum _{i=1}^{5}E_{in,i}\\ P_{B} &= \frac{1}{5}\sum _{i=1}^{5}B_{in,i} \end{aligned}$$
(1)

where \(P_{C}\), \(P_{E}\), and \(P_{B}\) are the parameter of cognitive, emotional, and behavioral module, respectively. These parameters are used to calculate parameters \(\alpha\), \(\beta\), and \(\gamma\) as shown in Eq. (2).

$$\begin{aligned} \alpha&= \frac{P_{C}}{P_{C} + P_{E} + P_{B}}\nonumber \\ \beta&= \frac{P_{E}}{P_{C} + P_{E} + P_{B}}\nonumber \\ \gamma&= \frac{P_{B}}{P_{C} + P_{E} + P_{B}} \end{aligned}$$
(2)

where \(\alpha\), \(\beta\), and \(\gamma\) are cognitive, emotional, and behavioral weight, respectively, to be used in the ternary diagram illustrated in Fig. 1. The coordinates of (\(\alpha , \beta , \gamma\)) are shown as weight points in the triangle. These weight points show the robot’s characteristic applied to calculate the following equation:
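
A minimal sketch of Eqs. (1) and (2) is given below: the five categorized sensory inputs of each category are averaged and the averages are normalized into the ternary-diagram weights; the small guard against division by zero is an assumption and is not part of the paper.

```python
# Sketch of Eq. (1) and Eq. (2): category averages and normalized C-E-B weights.

def ceb_weights(C_in: list[float], E_in: list[float], B_in: list[float]):
    # Eq. (1): category parameters as means of the (five) categorized inputs
    P_C = sum(C_in) / len(C_in)
    P_E = sum(E_in) / len(E_in)
    P_B = sum(B_in) / len(B_in)

    # Eq. (2): normalize so that alpha + beta + gamma = 1
    total = (P_C + P_E + P_B) or 1e-9   # assumed guard when all inputs are zero
    alpha = P_C / total
    beta = P_E / total
    gamma = P_B / total
    return alpha, beta, gamma
```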

$$\begin{aligned} \left[ \begin{array}{c} C_{out} \\ E_{out} \\ B_{out} \\ \end{array} \right] &= \left[ \begin{array}{ccc} a_{1,1} & a_{1,2} & a_{1,3} \\ a_{2,1} & a_{2,2} & a_{2,3} \\ a_{3,1} & a_{3,2} & a_{3,3} \\ \end{array} \right] \\ & \quad \times \left[ \begin{array}{ccc} \alpha & \alpha \cdot (1-\beta ) & -\alpha \cdot (1-\gamma ) \\ -\beta \cdot (1-\alpha ) & \beta & \beta \cdot (1-\gamma ) \\ \gamma \cdot (1-\alpha ) & -\gamma \cdot (1-\beta ) & \gamma \\ \end{array} \right] \end{aligned}$$
(3)

where \(a_{i,j}\) is the matrix parameter for the robot personality. In order to present the \(P_C\), \(P_E\), and \(P_B\) values uniformly with respect to the respective input information, no weight is applied in Eq. (1). These input values are converted to alpha (\(\alpha\)), beta (\(\beta\)), and gamma (\(\gamma\)), and these three values are turned into output values by applying the weight matrix (\(a_{i,j}\)) in Eq. (3). The parameters are shown in Table 4.

Table 4 The C–E–B parameters according to the input information

The parameters in Table 4 define the robot’s personality. The robot’s output changes when different parameters are used, for example in a different environment. The structure of the system can be adapted to a variety of people by changing these parameters, which were generated through trial and error to define the robot’s personality. The results of Eq. (3) are used in Eq. (4) to select the main mode from C, E, and B.

$$\begin{aligned} O = \max (C_{out}, E_{out}, B_{out}) \end{aligned}$$
(4)

Communication and interaction have various effects on the C–E–B model, where the most dominant module is selected by Eq. (4). However, in the interaction with the human, all modules are used, not only the dominant one, through a simple fuzzy rule based approach. We distinguish three fuzzy membership functions to describe the degrees of the C, E, and B modules, as presented in Fig. 9. The membership functions form a Ruspini partition with parameters u, v, and w, as can be seen in Fig. 9. In this paper, the parameters are set as follows: \(u=0.2\), \(v=0.3\), \(w=0.5\). The fuzzy rules for each module (C, E, and B) are presented in Table 5. After the calculation of the C, E, and B values by Eq. (3), these values are used in the antecedent part of the fuzzy rules shown in Table 5, and based on these rules the robot’s output is generated. The output generation rule has a criterion for each of C, E, and B: C has three magnitudes controlling the utterance quantity, E has three magnitudes controlling the range of emotional states, and B has three magnitudes controlling the LMA gesture level.
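
The following sketch puts Eq. (3), Eq. (4), and the fuzzy grading step together. The personality matrix A (the \(a_{i,j}\) values from Table 4) is a placeholder, the reduction of the matrix product to one scalar per module is an assumption, and the piecewise-linear shape of the Ruspini partition is assumed from the breakpoints u = 0.2, v = 0.3, w = 0.5 described for Fig. 9.

```python
# Hedged sketch of Eq. (3), Eq. (4), and the Small/Medium/Large fuzzy grading.

U, V, W = 0.2, 0.3, 0.5

def ceb_outputs(alpha, beta, gamma, A):
    """Eq. (3): combine the ternary weights with the 3x3 personality matrix A."""
    M = [
        [alpha,               alpha * (1 - beta),  -alpha * (1 - gamma)],
        [-beta * (1 - alpha), beta,                 beta * (1 - gamma)],
        [gamma * (1 - alpha), -gamma * (1 - beta),  gamma],
    ]
    out = []
    for i in range(3):
        # One scalar per module is assumed here, obtained by summing row i of A*M.
        row = [sum(A[i][k] * M[k][j] for k in range(3)) for j in range(3)]
        out.append(sum(row))
    return out  # [C_out, E_out, B_out]

def winner(C_out, E_out, B_out):
    """Eq. (4): winner-take-all selection of the dominant mode."""
    return max([("C", C_out), ("E", E_out), ("B", B_out)], key=lambda m: m[1])[0]

def fuzzy_grade(x):
    """Assumed Ruspini partition: Small/Medium/Large memberships that sum to 1."""
    if x <= U:
        return {"Small": 1.0, "Medium": 0.0, "Large": 0.0}
    if x <= V:
        m = (x - U) / (V - U)
        return {"Small": 1.0 - m, "Medium": m, "Large": 0.0}
    if x <= W:
        l = (x - V) / (W - V)
        return {"Small": 0.0, "Medium": 1.0 - l, "Large": l}
    return {"Small": 0.0, "Medium": 0.0, "Large": 1.0}
```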

Table 5 C–E–B output based on the fuzzy values
Fig. 9 A simple fuzzy model for C–E–B model expression. The figure distinguishes three fuzzy membership functions to describe the degrees of the C, E, and B modules

Apart from the current sensory information, the human’s personal information such as age and birthplace (Table 7) is also taken into account during interaction. Based on the result inferred from this information, a winner-take-all rule is used to select the mode. Regardless of the winning mode, a loose coupling with the two losing modes also occurs to assist during interaction: the winning mode takes a large value, and the other two modes take smaller values than the winner, based on Table 5.

For the robot’s emotional state, the normal state is defined as “Neutral”, and the robot has eight feeling expressions: Happy, Surprise, Angry, Disgust, Sad, Frightened, Fearful, and Thrilling (Fig. 10). This complete emotional model is used when the emotional mode wins (if \(O=E\)). The cognitive and behavioral modes use different configurations based on the level: Small, Medium, or Large (Table 5). Thus, the robot can have different interaction patterns based on external and internal information. We use the C–E–B parameters to construct a ternary diagram that imposes social constraints on the robot.

Fig. 10 Emotional expression based on the fuzzy value. This figure shows the robot’s emotional expression through its face. The robot has three different facial expressions for each feeling, based on the expression strength given by the fuzzy value

Experimental results

In this section we present experiments using the proposed robot partner system, which integrates a modular cognitive model through the C–E–B relation. “The difference of interaction style by C–E–B model” explains how human–robot interaction styles change when using the C–E–B model consisting of three modules, and “Interaction results” shows an example of human–robot interaction with different user attributes: age, gender, and hometown. The interaction style is one-on-one human–robot interaction. In this experiment, age and hometown information is retrieved from the smart device, where it was input beforehand for the calculation of the C–E–B model, and gender information is obtained from sensory information as shown in Table 2. The detailed parameters used in this experiment are as follows: two gender parameters, four age-level parameters, parameters for 14 cities in Korea, and parameters for 19 cities in Japan. These parameters are used to vary the interaction according to age, gender, and hometown; all of them were obtained and tested by trial and error. Other information about utterances and utterance properties is provided by the information pool from the conversation database, as shown in Fig. 6. The utterance properties consist of the utterance sentence ID, the number of selections, time, relationship parameters, and scenario ID [50]. “The difference of interaction style by C–E–B model” shows the interaction situation in terms of sensory information flow and its modulation of conversation.
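
As a hedged illustration, personal attributes could be mapped to C–E–B stimulus values as in the sketch below; the attribute names, city names, and numeric values are illustrative assumptions only (the paper states only that there are two gender parameters, four age levels, and per-city parameters obtained by trial and error).

```python
# Hedged sketch of a profile-to-parameter lookup; all values are assumed.

PROFILE_PARAMETERS = {
    "gender": {"female": 0.8, "male": 0.4},
    "age_level": {1: 0.2, 2: 0.4, 3: 0.6, 4: 0.8},
    "hometown": {"Seoul": 0.7, "Tokyo": 0.5},   # illustrative subset of the 14 + 19 cities
}

def personal_stimulus(gender: str, age_level: int, hometown: str) -> float:
    """Combine profile attributes into a single stimulus value (assumed: mean)."""
    values = [
        PROFILE_PARAMETERS["gender"].get(gender, 0.5),
        PROFILE_PARAMETERS["age_level"].get(age_level, 0.5),
        PROFILE_PARAMETERS["hometown"].get(hometown, 0.5),
    ]
    return sum(values) / len(values)
```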

The difference of interaction style by C–E–B model

This section shows experimental results of cognitive, emotional, and behavioral state changes based on sensory information. We explain the change of robot interaction according to the C–E–B model driven by sensory information from the smart device. The sensory information shown in Fig. 11 represents the change of sensory information during the entire interaction with the robot shown in Fig. 12. Using Eqs. (1) and (2), the values of \(\alpha\), \(\beta\), and \(\gamma\) are calculated; the values obtained from this sensory information using Eq. (2) are shown in Fig. 12a. The robot’s interaction style is decided based on the parameters \(\alpha\), \(\beta\), and \(\gamma\). It can be seen in Fig. 12a that the order of the winning parameters is \(\gamma\), then \(\alpha\), then \(\beta\).

Fig. 11 Sensory information from the robot partner. This figure shows the change of sensory information from the smart device. This graph displays 14 pieces of sensory information

In this experiment, the process is divided into three parts. Figure 12(i) shows the experimental environment, where the robot stands alone. From the start, the robot uses the personal information of Table 7 for the initial interaction. The human personal information consists of age and hometown information; gender information is obtained by gender classification using the robot’s camera image. These parameters are used as the C–E–B stimuli for the robot.

Fig. 12 Experimental results: (\(\alpha , \beta , \gamma\)) and (C, E, B). This figure shows the experimental results, divided into three parts as shown in the graphs. a Variation of the C–E–B value according to the change of the robot sensors. b Variation of the C–E–B mode. c Variation in robot interaction

In Fig. 12(ii), the human approaches the robot for interaction. The mode changes from behavioral mode to cognitive mode as the sensory information is updated. The cognitive mode focuses on robot utterance control rather than gesture, as shown in Table 1. Here, a low LMA gesture level is selected (Table 5) to produce a small behavior output.

In Fig. 12(iii), the robot’s C–E–B mode changes from cognitive to emotional mode due to human gesture recognition. As a result, the robot’s emotion is emphasized, and its emotional expression is larger than in cognitive mode. Thus, given the sensory information, the appropriate robot mode is selected. The results obtained using Eq. (3) are illustrated in Fig. 12b; the winner of these results is used to select the mode. The percentages of the mode changes are shown in Table 6.

Table 6 Experimental results: the percentage of C–E–B mode change

In this case, the least frequent mode change is from the emotional state to the cognitive state, at only \(9\%\). In other words, when the robot is in an emotional state, it mainly moves toward the behavioral mode rather than changing to the cognitive mode (it appears that emotion is closely related to human behavior [59]). The robot partner’s C–E–B mode selection based on the stimulus information by Eq. (4) is illustrated in Fig. 13.

Fig. 13 Experimental results: the mode change. The graph shows the mode values obtained by winner-take-all selection in accordance with the stimulus values

In this way, the robot partner can change its own mode based on the human’s personal information and the human–robot interaction style. Therefore, when the robot interacts with a human, the value of the C–E–B mode changes according to the internal change of the robot. More detailed results based on this sensory information are shown in “Interaction results”.

Interaction results

In this section, we describe examples of human–robot interaction using the C–E–B model. We prepared the robot partners “iPhonoid-C” and “iPhonoid-D” to interact with humans. The “iPhonoid-D” is the follow-up model to “iPhonoid-C” and is an attempt to consider cultural background in the design [60]. The robots have the same degrees of freedom and basic structure for showing gestures, and the appearance of each robot was designed differently by its developer. The user information differs between the two setups. The experimental process is shown in Fig. 14: when a human is detected using the camera information of the robot, the human–robot interaction starts, and it continues until the human leaves the robot.

Fig. 14 Interaction process with the robot. This figure shows the human communication process with the robot partner

We interacted with these robot partners in the experimental environment shown in Fig. 15. Figure 15a, b show the environments of interaction with iPhonoid-C and iPhonoid-D, respectively. Each robot interacts with a human in a different room, and the two robots share information through the database server. In other words, they can share information such as the interaction state of the other robot by using the concept of the “information pool”. In this experiment, HS10 of Table 8 provides information on whether the male user is interacting with his robot, in response to the female user’s question. To examine gender preferences and how they influence communication, the experiment was performed with users of both genders using the iPhonoids. The personal input information is presented in Table 7: age and hometown information was previously saved in the smart device, and gender information is determined by the robot through real-time classification.

Fig. 15 Experimental environments. This figure shows the experimental environment for each robot. Here, conversation is carried out with the robot in each room. a Interaction with iPhonoid-C. b Interaction with iPhonoid-D

The conversation results are shown in Tables 8 and 9. In each table, the contents of the human and robot utterances, the utterance system mode, the emotional state of the robot, and the state of the C–E–B mode are displayed for each interaction step, identified by HS number. HS denotes Human Sentence, and each HS has its corresponding reply in the human–robot interaction. The change of the emotional state of the robot is shown separately in Fig. 16, where the numbers on the X-axis are the HS steps corresponding to the experimental results in Tables 8 and 9.

Fig. 16 The result of the emotional state. This graph shows the change of the robot’s feeling during each conversation. a iPhonoid-C. b iPhonoid-D

In the iPhonoid-C experiment shown in Table 8, the emotion of the robot changes to “Happy” at “HS8”, because the robot’s emotional state is influenced by the word “happy” in the human sentence. At HS10, the robot confirms whether the user’s acquaintance is with the other robot, using the information obtained through the database to know whether the other user is logged in. The C–E–B mode changes differ based on age, gender, hometown, and sensory information (Table 7).

Table 7 The experiment results by gender difference: the percentage of mode during interaction

In the iPhonoid-D experiment, the robot’s emotional state changed more actively than in the female-user situation, because the emotional tendency of the robot was designed to prefer women; as a result, the personality of the robot became male-like. In particular, when the robot is in an angry emotional state at HS24 of Table 9, the C–E–B mode changes to cognitive mode, and the expression pattern changes.

Table 8 Experimental result of conversation: iPhonoid-C

The percentage of C–E–B modes differs based on the human’s personal information, as shown in Table 7. The robot system shows results that vary with gender and interaction style. In the case of iPhonoid-C there was no cognitive mode, whereas in iPhonoid-D the emotional module was used heavily but the cognitive module also appeared. These decisions were made by the C–E–B model.

Table 9 Experimental result of conversation: iPhonoid-D

The emotional results of the female user (iPhonoid-C) mainly showed “Neutral” and “Happy” (a more positive emotional change), while the emotional results of the male user showed two additional emotional states, “Surprise” and “Angry” (Fig. 16).

Discussion

Methodology of system design by smart device

In this paper, we proposed a modular cognitive model for the “iPhonoid”. This robot system design has the following benefits.

  • Hardware structure The robot partner is divided into two hardware parts: the smart device and the body structure. The smart device is responsible for the robot’s soft system, such as sensory information, conversation, and other functions. The robot also provides facial expressions and touch interaction through the touch screen. The robot system already has access to telecommunication networks through the smart device, so the robot can provide information support anytime and anywhere. The body structure is responsible for body movement. The robot body was made with a 3D printer, so the drawings of the robot parts can be shared and adapted into various robot designs by users; the hardware design can therefore be configured freely. For example, “iPhonoid-D” is a modified design of “iPhonoid-C” that realizes a personalized design.

  • Software system We can easily design a robot system according to its purpose using software such as verbal and nonverbal communication and an emotion model based on the C–E–B model. The robot can use the various embedded sensors of the smart device for perception without any additional cost. In this paper, we defined fourteen pieces of sensory information to control the robot’s C–E–B mode. Thus, the robot’s interactions vary according to user and environmental information.

The design of social robot by C–E–B model

A social robot needs to know the human’s communication system and rules. Therefore, we consider a robot design inspired by the human structure of cognition, emotion, and behavior. In particular, psychologists consider models of the relations among human cognition, emotion, and behavior [61, 62]. Thus, in this paper, we proposed the C–E–B model based on modularization, cognition and emotion theories, and systems theory for robot partner system design. In our previous research, we observed that participants interacted with the robot as if it were a child [38]. Therefore, we consider that the robot design needs to correspond to the design of the human system. Hence, we proposed the usage of the C–E–B model in the robot’s hardware and software. Even with a similar robot system, the robot’s interaction will vary according to differences in the C–E–B model, which provides a variety of interactions. By introducing the C–E–B model, it is possible to realize robots with various characteristics, such as a talkative robot partner with the cognitive module emphasized, an emotional robot with the emotional module emphasized, and an actively moving robot with the behavioral module emphasized.

Conclusions and future work

The robot partner can change its individuality based on the C–E–B model and can thereby adapt as a personal partner. A social robot should understand many situations of human life; then it can coexist closely with humans as a partner. Consequently, we proposed a C–E–B model that mimics human cognition, emotion, and behavior in order to realize a social robot. For future work, we will address the following issues, which have not been solved in this paper. First, we showed the change of communication according to the mood of the robot, but due to the nature of communication there is not a large difference compared to the expression of gesture and emotion. Therefore, we will consider how to improve the communication system with an emotion-based, human-like utterance database for natural communication based on the C–E–B model. Second, in the control of the C–E–B model, only the information of the smart device was used; in order to produce more natural interaction patterns, information from environmental sensors, not only those of the robot, should be considered. Third, during the experiment the robot also spoke about topics unrelated to the content of the human’s speech. Therefore, as future work, we will improve the service by conceptualizing meaning using the ontology concept so that the communication stays related to the human. Fourth, in order to develop and deploy the robot, it is important to set the basic parameters of the robot and content suitable for the target situation; it is therefore necessary to set parameters according to each use case, such as a restaurant or an elderly-care facility. Hence, we need to collect these parameters so that they can be chosen according to the user’s purpose, and we will develop a system that changes the parameters based on human gestures or utterance patterns. We will also consider a system in which not only the robot developer but also end users can design the robot’s content by setting the C–E–B parameters appropriate to the situation. Finally, in order to apply higher-order human cognition with more detailed meaning, we need to consider the improvement and necessity of the robot’s cognitive model.

References

  1. Fernandez GC, Gutierrez SM, Ruiz ES, Perez FM, Gil MC (2012) Robotics, the new industrial revolution. IEEE Technol Soc Mag 31(2):51–58

  2. Breazeal CL (2004) Designing sociable robots. MIT press, Cambridge

  3. Smarr C, Fausset CB, Rogers WA (2011) Understanding the potential for robot assistance for older adults in the home environment. Georgia Inst. of Technology, Atlanta

  4. Scassellati B, Admoni H, Mataric M (2012) Robots for use in autism research. Annu Rev Biomed Eng 14:275–294

  5. Luhmann N (1984) Soziale Systeme. Suhrkamp Frankfurt am Main

  6. Luhmann N (1993) Communication and social order risk: a sociological theory. Transaction Publishers, Piscataway

  7. Seidl D (2004) Luhmann’s theory of autopoietic social systems. Ludwig-Maximilians-Universität München-Munich School of Management

  8. Biddle BJ (2013) Role theory: expectations, identities, and behaviors. Academic Press, Cambridge

  9. Lungarella M, Metta G, Pfeifer R, Sandini G (2003) Developmental robotics: a survey. Connect Sci 15(4):151–190

  10. Stoytchev A (2009) Some basic principles of developmental robotics. IEEE Trans Auton Mental Dev 1(2):122–130

  11. Asada M, Hosoda K, Kuniyoshi Y, Ishiguro H, Inui T, Yoshikawa Y, Ogino M, Yoshida C (2009) Cognitive developmental robotics: a survey. IEEE Trans Auton Mental Dev 1(1):12–34

  12. James W (1884) What is an emotion? Mind 34:188–205

  13. Cannon WB (1927) The James-Lange theory of emotions: a critical examination and an alternative theory. Am J Psychol 39:106–124

  14. Cannon WB (1931) Again the James–Lange and the thalamic theories of emotion. Psychol Rev 38(4):281

  15. Schachter S, Singer J (1962) Cognitive, social, and physiological determinants of emotional state. Psychol Rev 69(5):379

  16. Nummenmaa L, Glerean E, Hari R, Hietanen JK (2014) Bodily maps of emotions. Proc Natl Acad Sci 111(2):646–651

  17. Plutchik R (1958) Section of psychology: outlines of a new theory of emotion. Trans N Y Acad Sci 20(5 Series II):394–403

  18. Friedman BH (2010) Feelings and the body: the Jamesian perspective on autonomic specificity of emotion. Biol Psychol 84(3):383–393

  19. Lazarus RS (1982) Thoughts on the relations between emotion and cognition. Am Psychol 37(9):1019

  20. Lazarus RS, Folkman S (1984) Stress, appraisal, and coping. Springer, Berlin

  21. Langley P, Laird JE, Rogers S (2009) Cognitive architectures: research issues and challenges. Cogn Syst Res 10(2):141–160

  22. Feigenbaum EA, Simon HA (1962) A theory of the serial position effect. Br J Psychol 53(3):307–320

  23. Jones RM, Lebiere C, Crossman JA (2007) Comparing modeling idioms in ACT-R and Soar. In: Proceedings of the 8th international conference on cognitive modeling. pp 49–54

  24. Breazeal C (2003) Emotion and sociable humanoid robots. Int J Hum Comput Stud 59(1):119–155

  25. Lane RD, Nadel L (2002) Cognitive neuroscience of emotion. Oxford University Press, Oxford

  26. SoftBank Corp. SoftBank to Launch Sales of ‘Pepper’—the world’s first personal robot that reads emotions on June 20. http://www.softbank.jp/en/corp/group/sbm/news/press/2015/20150618_01/. Accessed 29 Aug 2016

  27. Bradley MM, Lang PJ (2000) Measuring emotion: behavior, feeling, and physiology. Cogn Neurosci Emot 25:49–59

  28. Laban R, Ullmann L (1971) The mastery of movement

  29. Nakata T, Sato T, Mori T, Mizoguchi H (1998) Expression of emotion and intention by robot body movement. In: Proceedings of the 5th international conference on autonomous systems

  30. Narahara H, Maeno T (2007) Factors of gestures of robots for smooth communication with humans. In: Proceedings of the 1st international conference on robot communication and coordination. IEEE Press, New York, p 44

  31. ROS.org. Robots Using ROS: Aldebaran Nao. http://www.ros.org/news/2010/03/robots-using-ros-aldebaran-nao.html. Accessed 29 Aug 2016

  32. National Institute of Advanced Industrial Science and Technology: robots and devices available with RTC. http://www.openrtm.org/openrtm/en/robots_and_devices_en/all. Accessed 29 Aug 2016

  33. Woo J, Kubota N, Shimazaki J, Masuta H, Matsuo Y, Lim H-O (2013) Communication based on Frankl’s psychology for humanoid robot partners using emotional model. In: 2013 IEEE international conference on fuzzy systems (FUZZ). IEEE, New York, pp 1–8

  34. Fong T, Nourbakhsh I, Dautenhahn K (2003) A survey of socially interactive robots. Robot Auton Syst 42(3):143–166

  35. Botzheim J, Woo J, Tay Nuo Wi N, Kubota N, Yamaguchi T (2014) Gestural and facial communication with smart phone based robot partner using emotional model. In: Proc. of the world automation congress. pp 644–649

  36. Woo J, Botzheim J, Kubota N (2014) Facial and gestural expression generation for robot partners. In: Proc. of the 25th international symposium on micro-nanomechatronics and human science, Nagoya, Japan. pp 218–223

  37. Woo J, Botzheim J, Kubota N (2015) Verbal conversation system for a socially embedded robot partner using emotional model. In: Proceedings of the 24th IEEE international symposium on robot and human interactive communication. pp 37–42

  38. Woo J, Wada K, Kubota N (2012) Robot partner system for elderly people care by using sensor network. In: 2012 4th IEEE RAS & EMBS international conference on biomedical robotics and biomechatronics (BioRob). pp 1329–1334

  39. Sakata Y, Botzheim J, Kubota N (2013) Development platform for robot partners using smart phones. In: Proc. of the 24th international symposium on micro-nanomechatronics and human science, Nagoya, Japan. pp 233–238

  40. u-blox connectBlue, OLS426. http://support.connectblue.com/display/Dashboard/OLS426. Accessed 15 June 2015

  41. Moghaddam B, Yang M-H (2002) Learning gender with support faces. IEEE Trans Pattern Anal Mach Intell 24(5):707–711

  42. Sinha P, Balas B, Ostrovsky Y, Russell R (2006) Face recognition by humans: nineteen results all computer vision researchers should know about. Proc IEEE 94(11):1948–1962

  43. Woo J, Botzheim J, Kubota N (2014) Facial and gestural expression generation for robot partners. In: 2014 international symposium on micro-nanomechatronics and human science (MHS). pp 1–6

  44. Barrett LF, Mesquita B, Gendron M (2011) Context in emotion perception. Curr Dir Psychol Sci 20(5):286–290

  45. Reiman EM, Lane R, Ahern G, Schwartz G, Davidson R (2000) Positron emission tomography in the study of emotion, anxiety, and anxiety disorders. In: Lane RD, Nadel L (eds) Cognitive neuroscience of emotion. Oxford University Press, Oxford, pp 389–406

  46. Apple Inc.: Core Motion Framework Reference. https://developer.apple.com/library/prerelease/watchos/documentation/CoreMotion/Reference/CoreMotion_Reference/index.html#//apple_ref/doc/uid/TP40009686. Accessed 17 Feb 2016

  47. Apple Inc.: Core Location Framework Reference. https://developer.apple.com/library/prerelease/watchos/documentation/CoreLocation/Reference/CoreLocation_Framework/index.html#//apple_ref/doc/uid/TP40007123. Accessed 17 Feb 2016

  48. Apple Inc.: UIKit Framework Reference. https://developer.apple.com/library/prerelease/watchos/documentation/UIKit/Reference/UIKit_Framework/. Accessed 17 Feb 2016

  49. Woo J, Kubota N (2013) Conversation system based on computational intelligence for robot partner using smart phone. In: Proc. of the IEEE international conference on systems, man, and cybernetics, Manchester, United Kingdom. pp 2927–2932

  50. Woo J, Botzheim J, Kubota N (2014) Conversation system for natural communication with robot partner. In: Proc. of the 10th France–Japan and 8th Europe–Asia congress on mechatronics. pp 349–354

  51. Woo J, Kasuya C, Kubota N (2015) Content-based conversation for robot partners based on life hub. In: 2015 international symposium on micro-nanomechatronics and human science (MHS). pp 1–6

  52. Tang D, Yusuf B, Botzheim J, Kubota N, Chan CS (2015) A novel multimodal communication framework using robot partner for aging population. Expert Syst Appl 42(9):4540–4555

  53. Breckler SJ (1984) Empirical validation of affect, behavior, and cognition as distinct components of attitude. J Personal Soc Psychol 47(6):1191

  54. Phelps EA (2006) Emotion and cognition: insights from studies of the human amygdala. Annu Rev Psychol 57:27–53

  55. Grice HP (1970) Logic and conversation. Unpublished manuscript

  56. Barsalou LW (2008) Grounded cognition. Annu Rev Psychol 59:617–645

  57. Dolan RJ (2002) Emotion, cognition, and behavior. Science 298(5596):1191–1194

  58. Smith A (2006) Cognitive empathy and emotional empathy in human behavior and evolution. Psychol Rec 56(1):3

  59. Kipp M, Martin J-C (2009) Gesture and emotion: can basic gestural form features discriminate emotions? In: 2009 3rd international conference on affective computing and intelligent interaction and workshops. pp 1–8

  60. Jacquemont M, Woo J, Botzheim J, Kubota N, Sartori N, Benoit E (2016) Human-centric point of view for a robot partner: a cooperative project between France and Japan. In: Mecatronics-REM 2016, Compiegne, France, pp 164–169. http://hal.univ-smb.fr/hal-01343189. Accessed 9 Sept 2016

  61. Beck JS (2011) Cognitive behavior therapy: basics and beyond. Guilford Press, New York City

  62. Corstorphine E (2006) Cognitive–emotional–behavioural therapy for the eating disorders: working with beliefs about emotions. Eur Eat Disord Rev 14(6):448–461

Authors’ contributions

JW developed the robot partner system and drafted the manuscript. JB was involved in the discussion of ideas and results and in drafting the manuscript. NK supervised the project. All authors were involved in checking and approving the paper. All authors read and approved the final manuscript.

Acknowledgements

The authors would like to thank Chiaki Kasuya for her help in conducting the experiments.

Competing interests

The authors declare that they have no competing interests.

Author information

Corresponding author

Correspondence to Jinseok Woo.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Woo, J., Botzheim, J. & Kubota, N. A modular cognitive model of socially embedded robot partners for information support. Robomech J 4, 10 (2017). https://doi.org/10.1186/s40648-017-0079-1

Keywords