
A SafeML extension for a unified risk assessment to diverse service robots

Abstract

Risk assessment is one of the important processes in the social implementation of robots. In the risk assessment and safety design of systems with various stakeholders, modeling and visualization of the related elements are important not only for designers but also for users. We examined the use of SafeML in this process and propose an extension that supplies the elements it currently lacks.

Introduction

As the social integration of service robots proceeds widely, the risk assessment of robots has become difficult because of their complexity and rapid advancement. The diversity of the robot’s environment is the first reason for this difficulty, and the same growth in diversity occurs not only for service robots but also for other robots, for example, agricultural robots. Although risk assessment should accommodate this growing diversity, a tabular form using written expressions is still the common tool for risk assessment. Because of the diversity of service robots, a risk assessment in tabular form becomes complex and requires additional explanations when we attempt to understand the assessment, which is ineffective and leads to additional costs. Therefore, a risk assessment tool that provides a broad, macroscopic view is required to identify potential problems effectively. SafeML [1] has been proposed as a solution to such problems. SafeML is a modeling language that extends SysML [2] and a tool for modeling risk assessments and safety measures. Because SafeML uses a diagram expression and clear stereotypes, it can express a risk assessment in a meaningful form. This expression provides a more macroscopic view and better control of risk assessment information than the traditional tabular form, which implies a lower cost in service robot applications.

The second reason for the difficulty in risk assessment is the diversity of stakeholders. As described above, the robot environment can change; if the robot itself can detect the change, there is less of a problem. However, we consider the cases in which one of the following countermeasures is taken for the change:

  1. A human operator instructs the robot.

  2. A sensor in the environment informs the robot.

In case 1, the operator intervenes and controls the robot to avoid dangerous situations; for example, when the robot is moving toward a dangerous place, the operator takes control and detours the robot.

In case 2, a company different from the robot manufacturer installs the sensor. In both cases, the number of stakeholders in the risk assessment increases.

Additionally, risk assessment information should be shared repeatedly across multiple phases, for example, design and operation. SafeML can handle such information sharing in the design phases to some extent, but it lacks some stereotypes under certain conditions [3].

Although the two issues above are related to the complexity of a robot system, and such complexity can be considered in the system of systems (SoS) framework [4], discussion of risk assessment from the viewpoint of SoS is limited.

To address the issues above in service robot safety design, a small working group for safety software architecture (SWG-SSA) was established within the software architecture study group of the Robot Revolution & Industrial IoT Initiative (RRI) robot innovation working group in Japan; the authors’ contributions are summarized at the end of the paper.

Safety designers from several robot manufacturers participated in the working group; thus, the risk assessment in the group was realistic and practical. Additionally, an extension of the design information (meta-model) of the SafeML language was carried out under the initiative of the National Institute of Advanced Industrial Science and Technology in Japan. In this paper, we describe the RRI activities for service robot safety design and the related SafeML extensions. The SoS nature of a safety design method comprising risk assessment and safety measures is discussed based on its definition [5] in “SafeML requirement with SoS characteristics”.

Note that we submitted this paper after receiving recommendations on the content presented at the “JSME Conference on Robotics and Mechatronics 2022” and after further consideration.

Existing SafeML application to safety standards

Because SafeML and SysML are language specifications, they have no restriction on their application method and process. Thus, in this report, we follow the risk assessment process with risk reduction in ISO/IEC Guide 51 [6] for SafeML modeling (Fig. 1).

In Fig. 1, after the block “Is residual risk tolerable?”, the answer “yes” (= safe) means the end of the process for the robot designer, and the robot user should take care of the residual risk, which is not depicted in the figure. We add this user process for the residual risk using SafeML modeling. For example, suppose that the designer considers that the impact energy of a robot is sufficiently low (less than 93 J based on JIS B 8446-1 [7]) and tolerable, but the user wants to avoid an impact accident for a specific application. In this scenario, the subsequent process performed by the user could be to reduce the risk of the impact accident, for example, by adding a physical barrier or warning sign. Such a subsequent process is not explicitly considered in Fig. 1 but is worth considering from the viewpoint of SoS.
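To make the loop of Fig. 1 concrete, the following is a minimal Python sketch of the Guide 51 iterative risk reduction, extended with the user-side follow-up discussed above. The class, the risk formula, and all numerical values (including how the measures reduce probability) are illustrative assumptions on our part and are not part of Guide 51 or SafeML.

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    hazard: str
    severity: float        # estimated severity of harm
    probability: float     # estimated probability of occurrence
    measures: list[str] = field(default_factory=list)

def risk_level(risk: Risk) -> float:
    # A simple severity-times-probability estimate; Guide 51 does not
    # prescribe a particular formula, so this is only illustrative.
    return risk.severity * risk.probability

def designer_iteration(risk: Risk, tolerable: float) -> Risk:
    """Iterate risk reduction (three-step method) until the residual
    risk is tolerable, mirroring the loop of Fig. 1."""
    three_step = ["inherently safe design", "guards/protective devices",
                  "information for use"]
    for measure in three_step:
        if risk_level(risk) <= tolerable:
            break
        risk.measures.append(measure)
        risk.probability *= 0.5   # assumed effect of each step (illustrative)
    return risk

def user_follow_up(risk: Risk, user_tolerable: float) -> Risk:
    """The user-side step discussed in the text: even if the designer
    judges the residual risk tolerable, the user may add measures
    (e.g., a physical barrier or warning sign) for a specific application."""
    if risk_level(risk) > user_tolerable:
        risk.measures.append("physical barrier / warning sign (user)")
    return risk

r = Risk(hazard="impact with a person", severity=0.3, probability=0.4)
r = designer_iteration(r, tolerable=0.1)
r = user_follow_up(r, user_tolerable=0.02)
print(r.measures)
```

Running this sketch shows one designer-side measure followed by one user-side measure, which is exactly the sequence (designer loop, then user process for residual risk) that Fig. 1 does not depict explicitly.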

Fig. 1 ISO/IEC Guide 51 iterative process of risk assessment and risk reduction

In the risk assessment of a service robot, peripheral information, such as foreseeable misuse, increases. In a tabular form assessment, this information is often written as a remark on the form, which is difficult to process and degrades readability when the amount of information is large. In a SysML model, peripheral information can be expressed by calling up an element block of the model. Although this expression is a natural feature of SysML as an architecture modeling method, it is advantageous in risk assessment for summarizing several items of peripheral information. We use this “call-up element block” technique of SysML in the case studies in “Case studies of risk assessments: tabular form”.
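As a rough illustration of the “call-up element block” idea, a shared peripheral-information element can be referenced from several assessment entries instead of being repeated as free-text remarks. The classes and names below are our own sketch, not SysML tooling or SafeML syntax.

```python
from dataclasses import dataclass, field

@dataclass
class ElementBlock:
    """A shared piece of peripheral information (e.g., foreseeable misuse)."""
    name: str
    description: str

@dataclass
class AssessmentEntry:
    hazard: str
    # Entries reference shared blocks instead of duplicating remarks.
    peripheral: list[ElementBlock] = field(default_factory=list)

misuse = ElementBlock("misuse: riding during autonomous move",
                      "operator boards while the robot moves autonomously")

entry_a = AssessmentEntry("collision with bystander", peripheral=[misuse])
entry_b = AssessmentEntry("fall into ditch", peripheral=[misuse])

# Updating the shared block propagates to every entry that calls it up,
# unlike a remark copied into each row of a table.
misuse.description += " (also applies at night)"
print(entry_a.peripheral[0].description)
print(entry_b.peripheral[0].description)
```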

Case studies of risk assessments: tabular form

Based on two actual service robot applications, we conducted modeling case studies. In this section, as preliminary input for the SysML modeling, we describe the basic risk assessments in tabular form.

Case 1: agricultural robot

In the first case study, we consider an agricultural robot with autonomous mobility. We assume that the robot is sufficiently large for operator boarding, but we do not assume that the operator is on board all the time (the operator can be on board during autonomous movement). Table 1 shows an example of the risk assessment in tabular form. Figure 2 shows the environment for robot operation.

Table 1 Risk assessment table 1
Fig. 2 Surrounding environment of an agricultural robot

From the viewpoint of the safety of machinery in general, the robot falling down would be treated as a malfunction rather than a harm. However, in a government report on agricultural occupational safety [8], the falling down of agricultural machines is counted as a harm; hence, we follow this convention and consider falling down to be a harm.

Case 2: service robot at a train station

In the second case study, we consider a mobile service robot that carries a lightweight package at a common train station. The robot is of the type shown in the press release [9]; we assume that the robot is lightweight and low speed (i.e., sufficiently low kinetic energy) and that the station is not very crowded.

Table 2 shows an example of the risk assessment in tabular form. The assessment content corresponds to the “yes” path from the “Is risk tolerable?” block in Fig. 1.

Table 2 Risk assessment table 2

SafeML modeling

Based on the tabular form risk assessments of the previous section, we conducted SafeML modeling in the SWG-SSA, and we summarize the discussion in this section. Note that in the figures in this section, a word enclosed in «…» is a stereotype (a classification that defines the meaning of the element) defined in SafeML (some elements are defined in SysML). Each stereotype gives a SafeML model element a specific meaning. Therefore, a risk assessment composed of SafeML stereotypes increases readability and provides a common platform for various stakeholders to understand the assessment results. We explain the figures in this section by specifying the associated stereotypes.

SafeML modeling case 1: agricultural robot

Fig. 3 SafeML model of the agricultural robot

Figure 3 shows a SafeML model that corresponds to the tabular form in Table 1. In the SafeML model, the hazard and harm of the risk assessment are expressed as «Hazard» and «Harm» (grouped by a «Harm Group» element), respectively. The harm context is expressed as «HarmContext». Then, four «…Defense» elements are determined as safety measures. From the determined «ProActive Defense», «using» leads to the «Non-Safety-Related Function». Similarly, from «Active Defense», «triggeredBy» leads to «ContextDetector». These relations refer to a safety measure function or to sensor information used as a trigger. Such a description provides a macroscopic view of the functional elements necessary in safety design.
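To make the structure of Fig. 3 easier to follow, the stereotyped elements and their relations can be read as a small typed graph, sketched below in Python. The helper classes are our own illustration (not SafeML tooling), and the concrete names used for the Active Defense, ContextDetector, and function elements are assumed paraphrases of Fig. 3 rather than quotations from it.

```python
from dataclasses import dataclass, field

@dataclass
class Element:
    stereotype: str            # e.g., "Hazard", "Harm", "ProActive Defense"
    name: str
    relations: list[tuple[str, "Element"]] = field(default_factory=list)

    def relate(self, kind: str, target: "Element") -> None:
        self.relations.append((kind, target))

# Elements paraphrased from Fig. 3 (agricultural robot)
hazard   = Element("Hazard", "invisible ditch on the route")
harm     = Element("Harm", "injury to driver and nearby farm workers")
context  = Element("HarmContext", "wheel falls into the ditch")
pro_def  = Element("ProActive Defense", "move along a path where the robot does not derail")
act_def  = Element("Active Defense", "stop when a ditch is detected")
detector = Element("ContextDetector", "SLAM-based local sensing")
function = Element("Non-Safety-Related Function", "route planning")

hazard.relate("causes", context)
context.relate("leadsTo", harm)
pro_def.relate("using", function)        # ProActive Defense --using--> function
act_def.relate("triggeredBy", detector)  # Active Defense --triggeredBy--> detector

# A macroscopic view: list which functional elements each defense depends on.
for d in (pro_def, act_def):
    deps = ", ".join(f"{kind} <<{t.stereotype}>> '{t.name}'" for kind, t in d.relations)
    print(f"<<{d.stereotype}>> {d.name}: {deps}")
```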

During the modeling process, we examined the difference between the safety measure stereotypes Active and ProActive, and found that the stereotypes are distinguished by who implements the safety measure. We explain the details in “SafeML extensions”.

SafeML modeling case 2: service robot at a train station

Fig. 4 SafeML model of a service robot at a train station

Figure 4 shows a SafeML model that corresponds to the tabular form in Table 2. In the model, we connect «ProActive Defense» through «ResponsibleFor» to “SMART STATION,” which is an imaginary information management system. This connection indicates the relationship of responsibility, and it also implies that the safety measure related to “wait” depends on a system at the train station. Although this robot–station collaborative system does not actually exist, it is an example of a dynamic safety system in which the robot decides and moves dynamically based on crowd-level information provided by the social infrastructure (the station) in real time. This representation forms a collaborative safety system of the robot and the social infrastructure.

SafeML extensions

Based on the SafeML modeling case studies in “SafeML modeling”, we propose extensions of SafeML for the stereotypes of the risk reduction measures in the safety standard [6]. We summarize the extensions in Table 3, where “++” indicates the relation between the stereotypes (Passive, Active, Undefended, and ProActive) and the risk reduction measures (the three-step method, i.e., inherently safe design, guards and protective devices, and information for use, plus safety measures taken by users). Although the stereotype «Undefended» generally corresponds to the case of “no safety measure; accept the risk,” we consider that «Undefended» implies a weak, minor safety measure, for example, a warning sign; hence, “information for use” in the three-step method is associated with this stereotype and “+” is placed.

In the safety standard for designers [6], the risk reduction measures taken by the designer comprise the three-step method, and “safety measures taken by users” are not explicitly considered. However, as we discussed in “SafeML modeling case 2: service robot at a train station”, the collaboration between the robot and the infrastructure designed by a robot user should be considered in the robot design phase. To account for this scenario explicitly, we add the stereotype «ProActive Defense» to SafeML for “safety measures taken by users.”

Table 3 SafeML stereotypes and risk reduction measures
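The correspondence in Table 3 can also be read as a simple lookup from stereotype to associated risk reduction measure, as in the sketch below. The encoding itself and the Passive entry are our assumptions; the Active, Undefended, and ProActive entries restate what the text says about the table.

```python
# A lookup restating Table 3: which risk reduction measures are associated
# with each SafeML defense stereotype. "++" marks a strong association and
# "+" a weak one, following the text; the Passive row is our assumption.
STEREOTYPE_TO_MEASURES = {
    "Passive Defense":   {"inherently safe design": "++"},         # assumed mapping
    "Active Defense":    {"guards and protective devices": "++"},  # functional safety by the designer
    "Undefended":        {"information for use": "+"},             # weak measure, e.g., a warning sign
    "ProActive Defense": {"safety measures taken by users": "++"}, # proposed extension
}

def measures_for(stereotype: str) -> dict:
    """Return the risk reduction measures associated with a stereotype."""
    return STEREOTYPE_TO_MEASURES.get(stereotype, {})

print(measures_for("ProActive Defense"))
print(measures_for("Undefended"))
```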

We determine the scope of application of «ProActive ~» based on the responsibilities of the designer and users. In Table 3, the designer takes full responsibility for the “functional safety” of “guards and protective devices” because the entire scope of application is determined strictly by the designer; hence, «Active ~» is assigned (note that, because of the limited discussion time, we do not determine whether the safety measures in cases 1 and 2 constitute functional safety; this is a further issue).

By contrast, the designer does not take responsibility for the “non-functional safety” of “guards and protective devices” because the residual risk in this case can be assumed to be tolerable through the other measures for which the designer takes responsibility. “Non-functional safety” can be considered a supportive function for further risk reduction at the request of the user. Therefore, not only the designer but also the user takes responsibility for “non-functional safety” (this concept is based on the idea in JIS Y 1001 [10] that the designer has product liability for the “non-functional safety” function as a general software product).

Using these stereotypes, we can develop a SafeML model that clarifies the scope of responsibility and supports communication among various stakeholders.

Discussion

Development procedure

In [11], the authors proposed that there are three systems in the development procedure: the System Under Design, the Designing System, and the Context System. These systems should be harmonized with each other for better development. In this report, we consider that the three systems correspond to the following: a robot system under development (System Under Design), a social system in which a robot is used (Context System), and SafeML as a safe design tool (Designing System). According to the case studies, SafeML (Designing System) is suited to the complexity of a robot (System Under Design) and the social system (Context System). This indicates the validity of SafeML from the viewpoint of the development procedure.

SafeML requirement with SoS characteristics

In this report, we consider SafeML, as the Designing System, to be an SoS, and we investigate the requirements of SafeML by referring to SoS characteristics. We summarize the relationship between the SafeML requirements and the SoS characteristics in Fig. 5 and explain it below.

Fig. 5 SoS characteristics

In the original reference for SoS [4], the independence of operation and the independence of management are identified as two important factors, and based on these, five features of an SoS are derived: independence of operation, independence of management, evolutionary design, emergent behavior, and geographical distribution. These five features are also recognized in [5, 12]; hence, we consider them to be the basic characteristics of an SoS. According to these characteristics, we extract requirement “0: Independence of operation/management” as a fundamental requirement for SafeML. In addition to this requirement, we determine the following three requirements, where the relationship between the requirements and the SoS characteristics is shown in Fig. 5.

First, because the environment of a service robot operation changes in the development processes, risk assessments should be conducted in an iterative manner. Thus, we extract the requirement “1: Supports iterative development.”

Second, we consider the system life cycle in which SafeML is used. The system life cycle is the entire design process of a product, and the concept of operations (ConOps) [12] is its starting point. ConOps is an operational concept at the organization level, and it enables concept sharing in the enterprise layer for service robot development and deployment. The difficulty of information sharing in ConOps is explained in [13], which also states the importance of involving stakeholders with diverse knowledge levels, where the stakeholders should be involved as participants, not as an audience. Therefore, we extract the requirement “2: Stakeholders with diverse knowledge levels can participate.” This requirement implies that if the design information prepared by the designer can be used by a user without instructions from the designer, life cycle tasks will be managed effectively, which is associated with requirement “0: Independence of operation/management.”

Third, ConOps is also used for drone applications [14], where the relationship to a social system with existing airplanes is important. Such a relationship between a new technology and the existing social system can also be seen in service robots; thus, system design that considers various environmental conditions is required. Hence, we extract the requirement “3: Ability to express various environmental conditions as design information.”

For each case study, we assume a story of its design process in the following sections and evaluate the degree of realization of the above four SafeML requirements.

In addition, the position of SafeML in the system lifecycle including ConOps is shown in the Appendix.

Design process 1: agricultural robot

An important issue is that some objects are difficult to detect with current technologies during the autonomous movement of a robot, which leads to unsafe actions. This issue is general in autonomous car driving, where, for example, the operational design domain (ODD) [15] is used to coordinate information.

In this report, we assume that the harm of the risk is “a wheel falls into an invisible ditch, resulting in injury to the driver and nearby farm workers,” and the story of the design process is how the designer and operator of the robot share responsibility for the risk reduction measures. In this story, the designer and operator are required to coordinate and cooperate on the measures, as in Fig. 3, and the main feature is as follows: four measures (two ProActive and two Active Defense) are proposed, where the ProActive measures are implemented by the operator and the Active measures are realized by the designer (manufacturer), for example, as functional safety, and each measure is related to the others. For example, even if the manufacturer uses simultaneous localization and mapping (SLAM) technology as a state-of-the-art safety measure (the three «ContextDetector»s in Fig. 3), the operator has the responsibility to increase safety by selecting a safer route («ProActive Defense» “move along a path where the robot does not derail”); that is, even if the manufacturer provides a functional safety measure for the local sensing system, the operator retains responsibility for the global operation (e.g., avoiding an unsafe environment, such as submerged ground). A sketch of this responsibility split is given below.
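The following minimal sketch summarizes the responsibility split described above. The grouping logic is ours, and the two Active measure names are assumed paraphrases for illustration; the text specifies only that two Active Defenses are realized by the manufacturer and relate to the local sensing system.

```python
# A minimal sketch of the responsibility split for the ditch hazard.
# Owner assignments follow the text (ProActive -> operator, Active ->
# designer/manufacturer); the two Active measure names are assumed
# paraphrases for illustration only.
measures = [
    ("ProActive Defense", "operator",     "move along a path where the robot does not derail"),
    ("ProActive Defense", "operator",     "avoid unsafe environments such as submerged ground"),
    ("Active Defense",    "manufacturer", "stop when the local sensing (SLAM) detects a ditch"),
    ("Active Defense",    "manufacturer", "limit speed while sensing confidence is low"),
]

by_owner: dict = {}
for stereotype, owner, name in measures:
    by_owner.setdefault(owner, []).append(f"<<{stereotype}>> {name}")

# Both parties must carry part of the risk reduction for this hazard.
for owner, items in by_owner.items():
    print(f"{owner}:")
    for item in items:
        print(f"  {item}")
assert {"operator", "manufacturer"}.issubset(by_owner)
```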

Another feature of agricultural machines is the definition of harm. Although harm is defined as human injury in the safety standards for machinery [6], harm for agricultural machines is defined simply as the malfunction of a machine (e.g., falling down). Although this definition of harm for an agricultural machine may seem strange to a general safety designer of machinery, such a designer should understand that a malfunction of an agricultural machine corresponds directly to an injury. This misunderstanding does not arise if all the designers belong to the agricultural community, but in the design of an agricultural service robot, general safety designers also participate and such misunderstandings occur, which degrades the design process. In the SafeML expression in Fig. 3, «HarmContext» is connected to the actors (farm workers and a driver) through «impact», which can bridge the difference between the definitions of harm for agricultural machines and general machinery.

Table 4 summarizes the degree of realization for the requirements.

Table 4 Realization of requirements for agricultural robots

Design process case 2: service robot at a train station

Applications of service robots at train stations are currently under development. A train station is a complex social field in which various people (pedestrians, workers, etc.) are present. The designer and operator of the robot should cooperate for a safe application. At present, no robot system is available that is suitable for general applications at a station, and robots are deployed in a step-by-step manner (this robot application scenario is the same as at airports and other large facilities; a constructive, incremental approach is required to increase the tolerance of the robot in the social system).

The story of the design process in this scenario is the same as for the agricultural robot (the designer and operator collaborate on the risk reduction measures), and a «block» SMART STATION SYSTEM is proposed in Fig. 4 as a system to support effective operation. This smart system is assumed to be a resource management system that manages robots using, for example, AI prediction, and such systems are under development as operation management systems. In the future, the smart system will be developed further and will contribute to the effective operation of various resources, including robots. When the smart system obtains the density of people through a «block» human congestion prediction model, it can control the robot in a global manner. This function of the smart station is considered a risk reduction measure, which is indicated and made understandable through the connection with the «ProActive Defense» “when the number of people ~”.
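As a hedged sketch of the collaborative behavior described here, the robot-side decision could look like the following. The SMART STATION interface, the density value, and the threshold are all hypothetical, since the system itself is imaginary, as noted above.

```python
import random

def predicted_crowd_density(area: str) -> float:
    """Stand-in for the station's human congestion prediction model;
    a real system would query the (hypothetical) SMART STATION service."""
    return random.uniform(0.0, 1.0)   # illustrative crowd-density value

def next_action(area: str, wait_threshold: float = 0.5) -> str:
    """ProActive Defense 'wait when the number of people ...':
    the robot waits when the predicted density exceeds a threshold
    agreed between the robot operator and the station (illustrative value)."""
    density = predicted_crowd_density(area)
    if density > wait_threshold:
        return "wait"        # defer movement until the concourse clears
    return "proceed"

print(next_action("concourse A"))
```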

Table 5 summarizes the degree of realization of the requirements.

Table 5 Realization of the requirements for the service robot at a train station

Conclusions

Owing to the diversity of service robots, risk assessment in the conventional tabular form becomes complex and difficult to carry out at a reasonable cost. To overcome this, in this paper we proposed a SafeML extension that adds «ProActive Defense» as the risk reduction measure taken by a user, coordinated with the other Defense stereotypes. Using this extension, we conducted the risk assessment of a service robot in a comprehensive and unified manner. We examined the effectiveness of the proposed extension through two case studies: an agricultural robot and a service robot at a train station. Although the two robots and their associated services and operation management are different, we confirmed that referring to stereotypes in SafeML can clarify the scope of responsibility through the same procedure, as described in “Discussion”. We also proposed four requirements for SafeML for effective risk assessment and safety design from the viewpoint of SoS. In addition, we discussed the degree of realization of the requirements related to SoS through the case studies, as shown in Tables 4 and 5.

The risk assessments of the two robots in the proposed SafeML form were carried out by RRI SWG-SSA members from various organizations, and we proceeded as a working group activity over four 2-hour sessions, 8 h in total. During this 8-hour risk assessment of the case studies, we also confirmed effective information sharing. Note that the 8 h includes the discussion of the SafeML extension; thus, the working time could be shortened if only the risk assessments were performed, which implies that the proposed risk assessment with extended SafeML is unified and effective for diverse service robots. We reported these activities on the RRI web page [16].

As future work, we will conduct detailed case studies to validate the effectiveness of the proposed extension in comparison with other risk assessment methods, e.g., FTA, FMEA, STAMP/STPA, GSN, and ODD.

Availability of data and materials

Release of the “Robot Innovation WG Investigation Committee (FY2021 Activity Results)” (“Robottoinobēshon WG chōsa kentō iinkai (2021-nendo katsudō seika)” kōkai ni tsuite, in Japanese). https://www.jmfrri.gr.jp/document/library/2870.html (accessed Oct. 28, 2022).

Abbreviations

MBSE: Model-based systems engineering

SoS: System of systems

ODD: Operational design domain

ConOps: Concept of operations

RRI: Robot Revolution & Industrial IoT Initiative

References

  1. Biggs G, Sakamoto T, Kotoku T (2016) A profile and tool for modelling safety information with design information in SysML. Softw Syst Model 15(1):147–178


  2. OMG, OMG Systems Modeling Language Version 1.5

  3. Miyoshi T, Biggs G, Kimura T (2018) SysML + SafeML analysis of collaborative service robot engaged in light work. JSME Conf Robot Mechatron 2018:2A2-B13


  4. Maier MW (1998) Architecting principles for systems-of-systems. Syst Eng Electr 1(4):267–284


  5. DADC, Discussion paper on architectural design of safety and governance in Society 5.0: vision on how governance should ensure safety in Society 5.0 (in Japanese). [Online]. Available: https://www.ipa.go.jp/dadc/architecture/pdf/pj_report_smart safety_doc_20210726.pdf

  6. ISO/IEC Guide 51:2014, ISO, (2019) https://www.iso.org/standard/53940.html (accessed Feb. 21, 2022)

  7. JIS, JIS B 8446-1 Safety requirements for personal care robots—part 1: Static stable mobile servant robot with no manipulator, B 8446-1, (2016)

  8. Fiscal 2020 commissioned research project for realizing new on-site work safety measures, within the projects to promote measures to strengthen occupational safety in the agriculture, forestry, fisheries and food industries (Agriculture) (in Japanese). [Online]. Available: https://www.maff.go.jp/j/kanbo/sagyou_anzen/attach/pdf/itaku-12.pdf

  9. Autonomous transport robot “SEED-Mover with Lifter” debuts [News release] (in Japanese), https://www.thk.com/?q=uk/node/21162 (accessed Feb. 21, 2022)

  10. JIS, Requirements for safety management system of robot service using service robots, Y1001, (2019)

  11. Long D, Scott Z (2011) A primer for model-based systems engineering. Lulu.com

  12. Robertson T (1998) INCOSE systems engineering handbook. Insight 1(2):20


  13. Brown D, Johnson C, Hatch AR, Storytime, Audience to Authors: Enhancing Stakeholder Engagement

  14. UAM Vision Concept of Operations (ConOps) UAM Maturity Level (UML) 4. https://ntrs.nasa.gov/api/citations/20205011091/downloads/UAM

  15. PAS 1883:2020 Operational Design Domain (ODD) taxonomy for an automated driving system (ADS)—Specification. [Online]. Available: https://www.bsigroup.com/globalassets/localfiles/en-gb/cav/pas1883.pdf

  16. Release of the “Robot Innovation WG Investigation Committee (FY2021 Activity Results)” (in Japanese), https://www.jmfrri.gr.jp/document/library/2870.html (accessed Oct. 28, 2022)


Acknowledgements

This work was supported by RRI. We thank Edanz (https://jp.edanz.com/ac) for editing a draft of this manuscript.

Funding

Not applicable.

Author information


Contributions

TM and YN led discussions in a small working group within the committee. KO served as chairman of the RRI internal committee, and NA served as vice-chairman, providing advice to members and formulating meeting themes in order to determine the direction of the discussion. TK (Kuga) provided an example of an agricultural robot and TM provided an example of a train station. TM, YN, HF, MY, IM, TS, NA, TK (Kuga), AK, and KO contributed to the case modeling and SafeML metamodeling. TM and TK (Kimura) created the evaluation axis in the discussion. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Takao Miyoshi.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix: Positioning of SafeML in the system lifecycle including ConOps


FMEA, FTA, STAMP/STPA, GSN, ODD, and the tabular form are considered typical risk assessment methods. Using the enterprise layer/technical process classification in [12], this paper positions each method as shown in Fig. 6. Therefore, only the tabular form, which belongs to the same category as SafeML, is compared in this paper.

Fig. 6 Position of safety design methods in the system lifecycle

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Miyoshi, T., Nakabo, Y., Fukui, H. et al. A SafeML extension for a unified risk assessment to diverse service robots. Robomech J 10, 6 (2023). https://doi.org/10.1186/s40648-023-00245-z
