
Towards equitable shared control

Shifting from static to dynamic policies that provide personalised support in assistive robotics

  • Tom Carlson

Abstract

Shared human-machine control systems are critical to realising assistive technologies that empower users to enhance their independence and quality of life. In this paper, we present an overview of the state of the art in equitable shared control for assistive robotics, drawing upon examples from our own work in both smart wheelchair applications and the brain-machine interface domain. In particular, we focus on three themes: personalised control optimisation; understanding interactions with the environment; and supporting users to maintain control authority. Through these concepts we demonstrate the potential of equitable shared control systems and argue for future research into more dynamic policies that continue to support users as their needs evolve in the long term.

PACS: 45.40.Ln

1 Introduction

Designing human-in-the-loop systems is challenging. Traditionally, the automation community has tended to treat the human as a disturbance in the system, optimising the control parameters to be resilient to the human’s oftentimes imprecise inputs, or to treat the human as a last-resort controller in the rare event that the automation fails (Figure 1). More recently, however, we have seen a shift in thinking towards maximising the complementary benefits of a highly reliable automation and a highly adaptable human being. This has led to the exploration of the murky world between fully manual control and fully autonomous control, which, building upon Sheridan’s seminal works in remote manipulation [1], [2], we term Shared Control [3].

Figure 1: Traditional approaches to automation have often removed the human from the loop, relying on them only as a fail-safe when something occurs that the automation cannot handle. Unfortunately, this approach also tends to leave the human with limited situational awareness, sometimes making it difficult to take over control, especially when operating outside the normal design envelope of the system parameters. (Used with permission: Paul Breedveld (1994), Delft University of Technology, https://www.bitegroup.nl).

There are many other paradigms that also overlap this loosely defined area, including concepts such as semi-autonomous, supervisory, traded, cooperative and collaborative control, with models developed for different types of interactions, but these tend to be situated at different levels of abstraction [4]. Whilst shared control is usually implemented at the operational level, in a well-designed joint human-machine system it is ideally supported by (some of) these other concepts at the higher tactical and strategic levels of abstraction [5]. Therefore, the IEEE Systems, Man and Cybernetics Technical Committee on Shared Control[1] has proposed a definition to support a more common understanding amongst the diverse community of engineers practising similar techniques at the operational level, across multiple application domains:

In shared control, human(s) and robot(s) are interacting congruently in a perception-action cycle to perform a dynamic task that either the human or the robot could execute individually under ideal circumstances. [3]

In this paper we are particularly concerned with shared control for assistive robotics or assistive technologies that are primarily designed to support people with disabilities to improve their level of independence and quality of life. In such applications the system should enable the user to perform one or more tasks that would otherwise be extremely difficult – or impossible for them – due to a physical, cognitive or sensory impairment. For example, we have worked extensively with smart wheelchairs [6], [7], which aim to support severely mobility-impaired users to drive around independently in a safe and efficient manner [8]. In such cases you simply cannot remove the human from the loop, since the human is integral to the overall system.

The notions of shared control and collaborative robotics have also been applied to other domains, including making the workplace more inclusive for people with a wide range of disabilities [9]. Specialist robots and control interfaces have been developed to support job-specific functions, e.g. [10]. However, there are many ways in which robots and people with motor impairments could better collaborate in the workplace, so qualitative studies have also been undertaken to explore them [11]. In turn, this exploration has led to the development of a range of paradigms, such as using behavioral patterns to describe and design robotic collaborative assembly [12]; or employing a capability-based distribution of control between human-robot teams to achieve specific work processes [13]. Researchers have also developed tools to assess an individual’s capabilities and consequently indicate the level of robotic assistance they may require to complete workplace tasks successfully [14].

Regardless of the application domain, it is important not only to consider the assistive technology itself, but also to ensure that the interface is well-matched to the individual [15]. For example, the standard mode of commanding a powered wheelchair is via a joystick, but not everyone has the manual dexterity required to manipulate a standard joystick, so we have explored alternative human-machine interfaces. There are many commercially available alternatives, ranging from simple modified joysticks and buttons to “head arrays” (switches embedded in the headrest that can be activated by head movements) and even sip-and-puff switches (Figure 2), which all depend on reliable voluntary motor control [16]. However, not everyone is able to produce the required motions, for example due to high-level spinal cord injury, motor neuron disease or many other pathologies. Therefore, we have also worked extensively with brain-machine interfaces (BMI), which offer a potentially generic method of interfacing with many assistive technologies and do not require the user to produce any volitional movements. Instead, we can exploit a paradigm known as motor imagery, whereby the user imagines making a movement whilst we use electroencephalography (EEG) to measure their brain activity and decode (to some extent) which limb they are imagining moving. We can then, for example, map the imagination of left-hand movements to a turn-left command, and the imagination of right-hand movements to a turn-right command [17].
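To make this mapping concrete, here is a minimal sketch in Python (hypothetical; the class order, threshold value and function names are illustrative assumptions, not the decoder pipeline of [17]) of how decoded motor-imagery class probabilities might be translated into discrete steering commands, issuing a command only when the decoder is sufficiently confident:

    import numpy as np

    # Hypothetical class order from an upstream EEG decoder:
    # P(left-hand imagery), P(right-hand imagery), P(rest).
    CLASS_COMMANDS = ["TURN_LEFT", "TURN_RIGHT", "NO_COMMAND"]

    def mi_to_command(class_probs, threshold=0.7):
        """Map motor-imagery class probabilities to a steering command,
        suppressing uncertain classifications to avoid acting on noise."""
        probs = np.asarray(class_probs)
        best = int(np.argmax(probs))
        if probs[best] < threshold:
            return "NO_COMMAND"  # too uncertain: keep current behaviour
        return CLASS_COMMANDS[best]

    # Example: a confident right-hand imagery classification.
    print(mi_to_command([0.10, 0.85, 0.05]))  # -> TURN_RIGHT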

Figure 2: An experiment participant familiarises themselves with commercially-available alternative wheelchair interfaces, including a head-array and a sip-and-puff switch. They initially practise controlling a simulated wheelchair in Gazebo (https://gazebosim.org), before proceeding to drive the physical smart wheelchair in a controlled real-world test environment (adapted with permission from [22], IEEE license number 5873120577142, license date Sep 20, 2024).

However, irrespective of the human interface being used, we have found that people are prone to making mistakes, and advanced interfaces (speech, eye-tracking, BMI etc.) are prone to misinterpreting human input. Therefore, assistive robotic applications can benefit from some type of context-aware shared control to compensate for the imperfect human-machine communication channel, as described next.

2 Considerations when designing shared control systems

We have been inspired by the Horse metaphor (or H-mode) as a starting point for designing our shared control frameworks [18]. In this metaphor, we can think of two separate agents working closely together to achieve a final control output, in this case: (1) a human (user) riding (collaborating with) (2) a horse (an autonomous agent). We should then consider the interaction modalities used to convey (bi-directional) information between these two agents, as well as how to manage authority and what happens in case of conflict. In this section we discuss four important concepts to consider during the control policy design process:

  1. Equity in shared control systems

  2. Personalised control optimisation

  3. Understanding interactions with the environment

  4. Maintaining control authority.

2.1 Equity in shared control applications

Whilst the notion of shared control has been around for quite some time, we need to reflect on what we are really trying to achieve. In a world where we hear about inequality, are we striving for equality, where mass production treats every user of a human-machine system the same? We would argue no; instead, as control engineers, we have the power to strive for equity. Equity is about fairness. For example, within the healthcare sector, instead of distributing resources equally between all users or regions, equity means distributing them according to need [19]. Translating this concept into shared control applications, we should not necessarily give every user the same level of automation assistance. Instead, we should give each user the level of assistance they require as an individual, so that all users are afforded the same control opportunities to succeed in undertaking their desired task (Figure 3). We argue that in many cases, even in mass production, this equity could be achieved through software, by adapting system behaviours to meet the needs of the individual.

Figure 3: Equity is about fairness. For shared control applications, this means we should not give every user the same level of automation assistance. Instead, we need to give different users the level of assistance they require as individuals, to be afforded the same control opportunities to succeed in undertaking the desired task (illustration CC BY SA 4.0, Interaction Institute for Social Change | artist: Angus Maguire, https://interactioninstitute.org).

On the surface, some of our early work in shared control seemed to be achieving this, as we applied a linear-blending method that gradually adjusted the user’s input signal towards a safe trajectory generated by a robotic planner [6]. Whilst this did provide different levels of assistance to each user, enabling them to achieve the same task, it neglected to consider each individual user’s own preferences. Consequently, although the approach resulted in successful and safe task completion from the automation perspective, it was not subjectively acceptable to all the participants, nor was it necessarily efficient [20]. This was attributed to the users sometimes “fighting” against the shared control as it deviated from their personal comfort zones (for example, in terms of speed, acceleration or clearance from an obstacle). In this example, despite both the human and the machine having the same overall goal, they may not have had well-matched value functions for the optimisation process.

When moving around, there are of course many possible safe trajectories, so whilst the automation may plan one safe trajectory, the user may be attempting to take an alternative – yet equally safe – route. When we take a linear blending approach to resolving this conflict, as Trautman showed, it can result in a suboptimal, and sometimes unsafe, trajectory [21]. Therefore, we moved away from linear blending and instead developed a probabilistic shared control (PSC) framework, whereby the automation can consider the probability that a user would select each of a multitude of different safe trajectories generated by a robotic planner. We then also fit a likelihood function to the instantaneous user input. For example, if a user is proficient with joystick manipulation you could apply a delta function to the current joystick vector (target translational and rotational velocity vector). However, if the user is less proficient – or, for example, has a tremor – you could instead apply a Gaussian distribution over a range of vectors, centred on the current joystick position, thus accounting for potential noise – or uncertainty – in the user input signal. This capability becomes even more pertinent when you consider discrete inputs such as head arrays, or sip-and-puff switches, which would otherwise result in a more bang-bang style of control (Figure 2). Then, instead of a naïve linear blend of the human and automation control signals, you simply select the trajectory that maximises the joint probability distribution between those proposed by the robotic planner and those indicated by the actual user input [22]. Whilst this approach allowed users to follow their preferred path more closely, and experimentally improved acceptability to some extent over the linear blending method, there was still a mismatch between exactly how the user wanted to reach the goal and the resultant style of driving. Therefore, we should further explore how to estimate the user’s own value function and how their individual preferences could be (automatically) accounted for during the control optimisation process.
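To illustrate the selection step described above, the following minimal sketch (hypothetical Python; the candidate commands, noise model and parameter values are illustrative assumptions, not the implementation of [22]) scores each safe candidate trajectory by the product of the planner’s prior and a Gaussian likelihood of the trajectory’s initial velocity command given the noisy joystick input, and executes the maximiser:

    import numpy as np

    def user_likelihood(traj_cmd, joystick_cmd, sigma=0.2):
        """Gaussian likelihood of a trajectory's initial (v, omega) command,
        centred on the current joystick vector; sigma models input noise
        (e.g. tremor). A delta function is the proficient-user limit."""
        d = np.asarray(traj_cmd) - np.asarray(joystick_cmd)
        return np.exp(-0.5 * np.dot(d, d) / sigma**2)

    def select_trajectory(candidates, priors, joystick_cmd):
        """Pick the safe trajectory maximising the joint probability of the
        planner's prior and the user-input likelihood."""
        scores = [p * user_likelihood(c, joystick_cmd)
                  for c, p in zip(candidates, priors)]
        return int(np.argmax(scores))

    # Three safe options (v in m/s, omega in rad/s); the user nudges left.
    cands = [(0.8, 0.0), (0.6, 0.4), (0.6, -0.4)]
    priors = [0.5, 0.25, 0.25]
    print(select_trajectory(cands, priors, joystick_cmd=(0.6, 0.5)))  # -> 1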

2.2 Personalised control optimisation

We begin by drawing upon lessons from the automotive industry. For years, most car manufacturers have given drivers the option to choose between preset vectors of control parameters that suit different driving and performance styles [23], [24], [25]. This is normally achieved by allowing them to manually press a button or change a default driver profile setting to switch between preset modes such as eco, normal and sport. Whilst this goes some way to personalising the driving experience, in most cases the parameters being optimised are still very much determined by the designer, during the design phase, e.g. fuel burn, acceleration etc. If we switch to the assistive technology domain, we can consider a similar approach for powered wheelchairs, and indeed, many manufacturers do offer different drive profiles that change the mapping between the joystick input and maximum velocity (and acceleration curves), e.g. the Dynamic Controls DX/DX2 Power Wheelchair Control System[2] and the Curtiss-Wright R-net Control System.[3] These can be targeted towards specific scenarios, for example a gentler indoor profile and a higher-speed outdoor profile. Moreover, these parameters can indeed be tailored to the individual wheelchair user by a specially trained wheelchair technician. However, once these parameters have been configured, they tend to remain the same for the lifetime of the wheelchair, unless the user experiences particular difficulties.
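As a minimal sketch of such preset drive profiles (hypothetical Python; the profile names and limit values are invented for illustration and are not taken from the DX2 or R-net documentation), a configured profile might map raw joystick deflection to a velocity command while respecting its acceleration limit:

    # Hypothetical preset drive profiles: maximum speed (m/s) and
    # acceleration limit (m/s^2); values are illustrative only.
    PROFILES = {
        "indoor":  {"v_max": 0.8, "a_max": 0.5},   # gentle, for tight spaces
        "outdoor": {"v_max": 1.8, "a_max": 1.2},   # faster, for open spaces
    }

    def command_velocity(joystick, v_prev, dt, profile="indoor"):
        """Scale joystick deflection in [-1, 1] by the profile's top speed,
        then clamp the change so the acceleration limit is respected."""
        p = PROFILES[profile]
        v_target = max(-1.0, min(1.0, joystick)) * p["v_max"]
        dv = max(-p["a_max"] * dt, min(p["a_max"] * dt, v_target - v_prev))
        return v_prev + dv

    # Full forward deflection from rest, 0.1 s timestep.
    print(command_velocity(1.0, v_prev=0.0, dt=0.1, profile="outdoor"))  # 0.12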

In the automobile industry, more complex Advanced Driver Assistance Systems (ADAS, such as adaptive cruise control and lane-keeping assistance) have also become prevalent over the last couple of decades. In turn, this has led to the need to personalise the assistance, because almost all of these systems can be switched on/off and will therefore only enhance safety if drivers actually choose to use them [26]. More recently, there has been a shift from automated assistance towards a more cooperative approach between the driver and the vehicle, but this raises questions about authority and how to ensure the driver is able to regain or influence control of the vehicle. One approach to dealing with such recurring problems is to take a human-system pattern approach and balance the capabilities of the driver and the vehicle automation through the concept of confidence horizons [27]. An alternative approach uses linear-quadratic differential games to identify the human behaviour and adapt the automation correspondingly, in a shared control manner. Regardless of the approach, the human factors element, with respect to how the user develops and maintains an evolving mental model of the assistive control system, is critical to maintaining safety and mitigating risk [28].

In the assistive technology domain of smart wheelchairs, we can think of some pseudo-equivalent functions to the ADAS and the more recent cooperative approaches used in highly automated vehicles. In the literature, there are some fairly prevalent examples of highly-automated wheelchair assistance functions, including obstacle avoidance, wall following, door passing, kerb following and table-docking [8]. In this case, the parameter space becomes much larger, as personal preferences are not only about the driving style, but can include more nuanced topics such as proxemics: how to approach other people and objects for interaction, considering notions such as personal space and angles of approach [29], [30]. Moreover, these preferences are often context-dependent, so for an individual they are likely to change in different situations, for example, depending on whether or not the people who are interacting are familiar with one another. Since this parameter space is so large, it is difficult for a designer to explicitly consider all the variables that a user may wish to optimise. Furthermore, even if such a complete parameter space could be enumerated, it would be a near-impossible task for a technician to tune such a large array of parameters, and, in all likelihood, most users would not be able to take advantage of the full capabilities of such a system.

Therefore, we have taken a different approach: to learn a range of typical wheelchair driving styles and interaction preferences from a wide range of people. We can then cluster these into distinct profiles (which could be given notional labels, e.g. novice, timid, aggressive, sporty etc.). These user profiles can in turn be used as a starting point for transfer learning, and subsequently the large parameter space can continue to be fine-tuned on the fly. To achieve this, we developed a reinforcement learning-based probabilistic shared control (RL-PSC) framework [31], which built upon a generative adversarial imitation learning (GAIL) approach [32]. To date, we have implemented and tested this approach in a simulated environment.
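A minimal sketch of the profile-clustering idea follows (hypothetical Python using scikit-learn; the style features, cluster count and random data are illustrative assumptions, not the GAIL-based pipeline of [31], [32]): driving logs are summarised as style features, clustered into notional profiles, and a new user is initialised from the nearest cluster centre before being fine-tuned on the fly:

    import numpy as np
    from sklearn.cluster import KMeans

    # Hypothetical per-user style features from driving logs:
    # [mean speed (m/s), mean obstacle clearance (m), mean |yaw rate| (rad/s)].
    rng = np.random.default_rng(0)
    features = rng.uniform([0.3, 0.2, 0.1], [1.5, 1.0, 0.8], size=(40, 3))

    # Cluster into notional profiles (e.g. "timid", "sporty", ...).
    kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)

    def initial_profile(new_user_features):
        """Warm-start a new user's shared control parameters from the
        closest learned profile, to be fine-tuned on-line thereafter."""
        x = np.asarray(new_user_features).reshape(1, -1)
        return kmeans.cluster_centers_[kmeans.predict(x)[0]]

    print(initial_profile([1.2, 0.3, 0.6]))  # nearest profile's style vector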

We found that whilst our earlier incarnation of probabilistic shared control (PSC) and our newly-proposed reinforcement learning-based probabilistic shared control (RL-PSC) were both successful in reducing the number of collisions compared with no shared control (No-SC), RL-PSC was able to achieve this without the usual cost in terms of overall task completion time (cf. Figure 4). We attribute this to the fact that the shared control offers assistance that is more aligned to the user’s natural driving style, which means they no longer feel the need to “fight” against the controller when it subtly deviates from their input signals. This leads to a better understanding of the common workspace [33] and consequently an improved joint task performance. Perhaps more importantly, this in turn leads to better user acceptance of the shared control system, compared with the early linear blending approach that was only accepted in particular scenarios (e.g. by expert users when facing particularly high-workload situations, or by complete novices [20]). Indeed, user acceptance and the previously mentioned “human-out-of-the-loop” problems were the starting points for the development of shared control systems, so this next step of personalisation was the natural evolution of such systems.

Figure 4: Both probabilistic shared control (PSC) and reinforcement learning-based probabilistic shared control (RL-PSC) reduced the number of collisions, compared with no shared control (No SC). However, RL-PSC achieved this without the usual cost in terms of overall task completion time (adapted with permission from [31], IEEE license number 5873121128373, license date Sep 20, 2024).

One challenge that remains, in order to really harness the power of the learning approach, is to collect a sufficiently large and diverse data set in the real world. There are currently few smart wheelchairs in the world, and those that exist have diverse configurations, although there are efforts to improve interoperability, for example, through modular approaches [7] and with standardised interfaces and data formats, such as those provided through ROS (Robot Operating System) [34]. Furthermore, access to incredibly useful location and video data can often be impeded by policies such as the General Data Protection Regulation (GDPR),[4] which, despite being very important and well-meaning when it comes to protecting our rights to privacy, place a high burden on innovators of emerging technologies [35]. Consequently, we are investigating to what extent data collected in simulated environments can be extrapolated for use in real-world applications [36] (the so-called sim-to-real problem [37]).

2.3 Understanding interactions with the environment

The robotics field has advanced rapidly in terms of sensing capabilities to understand the environment in which a robot is operating. Early sensors were fairly rudimentary, restricted to linear distance measurements to objects via ultrasonic (sonar) or infrared (IR) emitter-receiver pairs [38]. Laser scanners and LiDAR were then able to give us a much more precise 3D representation, enabling simultaneous localisation and mapping (SLAM) as well as the incorporation of higher-level reasoning in path planning [39], [40], whilst stereoscopic and RGB-D cameras are able to co-register colour and depth. When fused, this wealth of information has allowed the robotics field to develop satisfactory planning algorithms for autonomous robots operating in real-world environments [41], [42].

The majority of these path planners will generate safe (and often optimal, with respect to pre-defined criteria) trajectories that will lead to a specified goal (location or pose). In doing so, these planners will automatically attempt to avoid collisions with obstacles. For mobile robotics, an obstacle is usually defined as anything that is not navigable space (floor/ground), and in the case of wheeled robots, including wheelchairs, this can also include the notion of negative obstacles [43]. Nevertheless, sometimes a detected object may not actually be an obstacle per se, but rather a target. For example, when a wheelchair user is navigating, they may wish to drive around a table that is in their way, or they may wish to dock to a table to work at it, or eat lunch. Therefore, we have worked on table detection, which can then be integrated into our planning algorithms to automatically line up and dock, or avoid, depending on the user input signals during the approach [44]. Similarly, a person may be treated as an obstacle that the user wishes to avoid, or a target with whom the user wishes to interact. In this case, higher level semantic information can be given to the planner by e.g. people detectors and people trackers [45], [46].
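One simple way such a dual interpretation could be resolved is sketched below (hypothetical Python; the thresholds and geometry are illustrative assumptions, not the detection pipeline of [44]): if the user keeps steering towards a detected table’s docking pose while close to it, the table is treated as a target, otherwise it is passed to the planner as an obstacle:

    import numpy as np

    def classify_table(user_heading, table_bearing, dist,
                       align_thresh=0.35, near=2.0):
        """Decide whether a detected table is a docking target or an obstacle.
        Angles are in radians in the robot frame; the table is a target if
        the user keeps pointing at it while close enough to plausibly be
        initiating a docking manoeuvre."""
        misalign = abs(np.arctan2(np.sin(user_heading - table_bearing),
                                  np.cos(user_heading - table_bearing)))
        if dist < near and misalign < align_thresh:
            return "TARGET_DOCK"    # trigger automatic line-up and docking
        return "OBSTACLE_AVOID"     # let the planner route around it

    print(classify_table(user_heading=0.1, table_bearing=0.0, dist=1.2))
    # -> TARGET_DOCK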

Another challenge is that the majority of robotic path planners neglect to take into account the effect of the robot itself on the environment [47]. Whilst this is fine for relatively static environments, many assistive robotic devices, such as smart wheelchairs, will often need to be used in densely populated areas. There has been extensive work on modelling how crowds of people behave, to predict crowd flows and crowd dynamics [48], [49], which could theoretically be incorporated into a robot planner. However, there has been very little work that seeks to predict how the people in a crowd will react to the presence of a robot, and in turn the effect that this has on the crowd dynamics. This is clearly important for any planner to incorporate if it is going to enable a robot to move effectively in a crowded environment, without having to constantly re-plan in reaction to unexpected (yet potentially predictable) changes in crowd dynamics.
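For reference, here is a minimal sketch of a social-force-style pedestrian update in the spirit of [49], extended with a simple (hypothetical) exponential repulsion term for a nearby robot; all constants are illustrative and stand in for the empirically modified dynamics discussed next:

    import numpy as np

    def social_force_step(pos, vel, goal, robot_pos, dt=0.1, tau=0.5,
                          v0=1.3, A_rob=3.0, B_rob=0.5):
        """One Euler step of a social-force pedestrian model: relaxation
        towards the desired velocity, plus exponential repulsion from a
        robot (a hypothetical stand-in for robot-aware crowd dynamics)."""
        e = (goal - pos) / np.linalg.norm(goal - pos)       # desired direction
        f_drive = (v0 * e - vel) / tau                      # relax towards v0
        d = pos - robot_pos
        dist = np.linalg.norm(d)
        f_robot = A_rob * np.exp(-dist / B_rob) * d / dist  # robot repulsion
        vel = vel + dt * (f_drive + f_robot)
        return pos + dt * vel, vel

    p, v = social_force_step(np.array([0.0, 0.0]), np.array([1.0, 0.0]),
                             goal=np.array([5.0, 0.0]),
                             robot_pos=np.array([1.0, 0.5]))
    print(p, v)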

Therefore, we took a two-pronged approach to the challenge. First, we experimentally discovered how crowd dynamics are affected by different types of robot (including a smart wheelchair) [50]. This enabled us to modify state-of-the-art crowd dynamics models [51], [52] to incorporate these newly observed behaviours and to use them in our robotic path planner [31]. Second, we undertook some qualitative research with wheelchair users to understand how they would like to be supported in navigating dynamic crowded environments [53]. The findings from this study not only reiterated that the driving assistance must be adapted to the individual user’s driving preference, but also highlighted the two interaction loops that need to be supported: (1) between the user and the smart wheelchair; and (2) between the combined user-smart wheelchair entity and the people in the crowd. This led to three design areas that should be considered when developing an equitable shared control system:

  1. Empathy, embodiment and social acceptance

  2. Situational awareness and adaptability

  3. Selective information management.

2.4 Maintaining control authority

Although users can often retain authority in blended shared control schemes, they usually do this by correcting automation mistakes after they have been executed. By contrast, the H-mode [18] naturally supports a method for negotiating (and maintaining) control authority through the use of haptic interaction. In such a paradigm, the automation is blended with the user input at the physical interface, which allows for a smooth transition in control authority: the user can choose to be guided by the automation, or if they don’t like what the automation is proposing, they can simply push harder against it [54].
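A minimal sketch of this authority negotiation (hypothetical Python; the gain and torque values are illustrative, not taken from [54]): the automation applies a guidance torque pulling the interface towards its reference, the human adds their own torque, and the stiffness of the guidance determines how hard the user must push to override it:

    def haptic_blend(theta, theta_ref, tau_human, k_guidance=2.0):
        """Net torque on a haptic control interface (e.g. a steering wheel
        or force-feedback joystick). The guidance term pulls towards the
        automation's reference angle; a disagreeing user overrides it by
        exerting more torque than the guidance stiffness produces."""
        tau_guidance = k_guidance * (theta_ref - theta)
        return tau_human + tau_guidance

    # Compliant user: the guidance dominates and steers the interface.
    print(haptic_blend(theta=0.0, theta_ref=0.3, tau_human=0.0))   # +0.6
    # Disagreeing user pushes harder the other way and regains authority.
    print(haptic_blend(theta=0.0, theta_ref=0.3, tau_human=-1.0))  # -0.4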

Whilst haptic shared control can work well for physical interactions, there are other cases where it might not be so easy to implement, for example, in brain-machine interfaces (BMI). Nevertheless, we were inspired by the concept of haptic shared control and were compelled to see if we could imitate it to some degree in a BMI context. First, we decomposed a joint human-BMI-robot task into the different levels of interaction at which we can cooperate [55]. Then, again considering the operational level, we attempted to emulate haptic support by dynamically modulating the level of mental effort required to deliver a command, as opposed to varying the physical effort. In this way, we demonstrated that BMI users could benefit from automation assistance, but in cases of conflict were now able to proactively override the assistance and assert their control authority, instead of having to correct errors after they occurred [56]. We have since built upon this concept to better support the initial BMI learning process, by dynamically introducing diminishing levels of bias into our classifiers [36].
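The following sketch illustrates the underlying idea (hypothetical Python; the thresholds and margin are placeholders, not the classifier of [56]): the decoder’s decision threshold is raised only for commands that conflict with the automation’s current proposal, so overriding the assistance demands more sustained, confident motor imagery, in analogy to pushing harder against a haptic guidance force:

    def bmi_decision(p_left, assist_cmd, base_thresh=0.6, conflict_margin=0.2):
        """Asymmetric decision rule for a 2-class motor-imagery BMI.
        p_left is the decoder's probability of left-hand imagery
        (p_right = 1 - p_left); assist_cmd is the command currently
        favoured by the automation. Agreeing with the assistance needs
        only the base threshold; overriding it needs extra evidence,
        i.e. more mental 'effort'."""
        thr_left = base_thresh + (conflict_margin if assist_cmd == "RIGHT" else 0.0)
        thr_right = base_thresh + (conflict_margin if assist_cmd == "LEFT" else 0.0)
        if p_left >= thr_left:
            return "LEFT"
        if (1.0 - p_left) >= thr_right:
            return "RIGHT"
        return "NO_COMMAND"  # insufficient evidence either way

    print(bmi_decision(0.65, assist_cmd="LEFT"))   # agrees -> LEFT
    print(bmi_decision(0.65, assist_cmd="RIGHT"))  # conflicts -> NO_COMMAND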

3 Conclusions and future outlook

Whilst joint human-machine actions can be conceptualised at several levels of control abstraction and the notion of equity could be embedded at each of these levels (including the strategic and tactical levels), we have focused on the operational level, where we are primarily concerned with shared control. We have given an overview of our work towards producing equitable shared control systems for assistive robotics technologies, with a focus on smart wheelchairs and brain-machine interfaces. In doing so, we have drawn inspiration and lessons learned from the wider human-machine interface community, including the well-established automotive industry.

We have discussed three key considerations when designing successful equitable shared control schemes: personalised control optimisation; understanding interactions with the environment; and supporting users to maintain control authority in the case of conflict. One key challenge that remains is how to obtain sufficient high-quality data to support our approaches to learning shared control policies for smart wheelchairs. To mitigate this challenge, we have taken a sim-to-real approach, relying on transfer learning methods. However, we also recognise that our users are not time-invariant systems; whilst some will hone their skills over time, others may experience degenerative conditions that degrade their skills. Therefore, in future work we will explore when and how to update equitable shared control policies in a dynamic manner, such that they continue to meet the long-term evolving needs of the user.


Corresponding author: Tom Carlson, Aspire Create, University College London, London, UK

About the author

Tom Carlson

Tom Carlson is Professor of Assistive Robotics at University College London (UCL), UK. He obtained his MEng in Electrical & Electronic Engineering (2006) and PhD in Intelligent Robotics (2010), both from Imperial College London, UK. He then pursued his postdoctoral research in shared control for brain-machine interfaces at EPFL, Switzerland, before joining UCL as a lecturer in 2013. Carlson’s research focus is on the user-centred design of assistive robotic technologies for people with spinal cord injuries and other pathologies. He is particularly interested in human-robot interaction and is developing shared control techniques for brain-machine interfaces and smart wheelchairs.

Acknowledgment

I acknowledge Paul Breedveld for kindly allowing me to use his wonderful illustration (Figure 1). I also acknowledge the conversations, support and joint work of several key colleagues, collaborators and students over the years who have had a significant impact on the shaping of this article, in particular: Yiannis Demiris, José del R. Millán, Rui Loureiro, Robert Leeb, Ricardo Chavarriaga, Cathy Holloway, Chi Ezeh, Andrew Kell, Marie-Pierre Pacaux-Lemoine, Frank Flemisch, Marie Babel, Julie Pettré, Francois Pasteau, Louise Devigne, Stefan Teodorescu, Pat Zhang, George Walker, Sam Ardittie, Felix Habert, Ozge Saracbasi, Youngjun Cho, Hubin Zhao, Alex Thomas, Merlin Kelly, Jianan Chen, and Ziyue Zhu.

  1. Research ethics: Not applicable.

  2. Informed consent: Not applicable.

  3. Author contributions: The author has accepted responsibility for the entire content of this manuscript and approved its submission.

  4. Use of Large Language Models, AI and Machine Learning Tools: None declared.

  5. Conflict of interests: None declared.

  6. Research funding: None declared.

  7. Data availability: Not applicable.

References

[1] W. R. Ferrell and T. B. Sheridan, “Supervisory control of remote manipulation,” IEEE Spectr., vol. 4, no. 10, pp. 81–88, 1967. https://doi.org/10.1109/mspec.1967.5217126.

[2] T. B. Sheridan, “Telerobotics,” Automatica, vol. 25, no. 4, pp. 487–507, 1989. https://doi.org/10.1016/0005-1098(89)90093-9.

[3] D. A. Abbink, et al., “A topology of shared control systems – finding common ground in diversity,” IEEE Trans. Hum.-Mach. Syst., vol. 48, no. 5, pp. 509–525, 2018. https://doi.org/10.1109/thms.2018.2791570.

[4] R. Parasuraman, T. B. Sheridan, and C. D. Wickens, “A model for types and levels of human interaction with automation,” IEEE Trans. Syst. Man Cybern. Syst. Hum., vol. 30, no. 3, pp. 286–297, 2000. https://doi.org/10.1109/3468.844354.

[5] F. Flemisch, D. A. Abbink, M. Itoh, M. P. Pacaux-Lemoine, and G. Weßel, “Joining the blunt and the pointy end of the spear: towards a common framework of joint action, human–machine cooperation, cooperative guidance and control, shared, traded and supervisory control,” Cogn. Technol. Work, vol. 21, no. 4, pp. 555–568, 2019. https://doi.org/10.1007/s10111-019-00576-1.

[6] T. Carlson and Y. Demiris, “Human-wheelchair collaboration through prediction of intention and adaptive assistance,” in Proceedings – IEEE International Conference on Robotics and Automation, 2008. https://doi.org/10.1109/ROBOT.2008.4543814.

[7] F. Morbidi, et al., “Assistive robotic technologies for next-generation smart wheelchairs: codesign and modularity to improve users’ quality of life,” IEEE Robot. Autom. Mag., vol. 30, no. 1, pp. 24–35, 2023. https://doi.org/10.1109/mra.2022.3178965.

[8] R. C. Simpson, E. F. LoPresti, and R. A. Cooper, “How many people would benefit from a smart wheelchair?” J. Rehabil. Res. Dev., vol. 45, no. 1, pp. 53–72, 2008. https://doi.org/10.1682/jrrd.2007.01.0015.

[9] N. Mandischer, et al., “Toward adaptive human–robot collaboration for the inclusion of people with disabilities in manual labor tasks,” Electronics, vol. 12, no. 5, p. 1118, 2023. https://doi.org/10.3390/electronics12051118.

[10] A. Gräser, et al., “A supportive FRIEND at work: robotic workplace assistance for the disabled,” IEEE Robot. Autom. Mag., vol. 20, no. 4, pp. 148–159, 2013. https://doi.org/10.1109/mra.2013.2275695.

[11] S. A. Arboleda, M. Pascher, Y. Lakhnati, and J. Gerken, “Understanding human-robot collaboration for people with mobility impairments at the workplace, a thematic analysis,” in 29th IEEE International Conference on Robot and Human Interactive Communication, RO-MAN 2020, 2020. https://doi.org/10.1109/RO-MAN47096.2020.9223489.

[12] M. Mondellini, et al., “Behavioral patterns in robotic collaborative assembly: comparing neurotypical and Autism Spectrum Disorder participants,” Front. Psychol., vol. 14, 2023. https://doi.org/10.3389/fpsyg.2023.1245857.

[13] N. Mandischer, M. Usai, F. Flemisch, and L. Mikelsons, “Exploring capability-based control distributions of human-robot teams through capability deltas: formalization and implications,” in International Conference on Systems, Man, and Cybernetics, Kuching, Malaysia, 2024.

[14] C. Weidemann, E. Husing, Y. Freischlad, N. Mandischer, B. Corves, and M. Husing, “RAMB: validation of a software tool for determining robotic assistance for people with disabilities in first labor market manufacturing applications,” in Conference Proceedings – IEEE International Conference on Systems, Man and Cybernetics, 2022. https://doi.org/10.1109/SMC53654.2022.9945241.

[15] C. Weidemann, N. Mandischer, and B. Corves, “Matching input and output devices and physical disabilities for human-robot workstations,” in International Conference on Systems, Man, and Cybernetics, Kuching, Malaysia, 2022.

[16] L. Fehr, W. E. Langbein, and S. B. Skaar, “Adequacy of power wheelchair control interfaces for persons with severe disabilities: a clinical survey,” J. Rehabil. Res. Dev., vol. 37, no. 3, 2000.

[17] T. Carlson and J. D. R. Millán, “Brain-controlled wheelchairs: a robotic architecture,” IEEE Robot. Autom. Mag., vol. 20, no. 1, pp. 65–73, 2013. https://doi.org/10.1109/mra.2012.2229936.

[18] F. O. Flemisch, C. A. Adams, S. R. Conway, K. H. Goodrich, M. T. Palmer, and P. C. Schutte, “The H-metaphor as a guideline for vehicle automation and interaction,” NASA Technical Memorandum NASA/TM-2003-212672, 2003. [Online]. Available: https://ntrs.nasa.gov/search?q=2003-212672.

[19] A. J. Culyer and A. Wagstaff, “Equity and equality in health and health care,” J. Health Econ., vol. 12, no. 4, pp. 431–457, 1993. https://doi.org/10.1016/0167-6296(93)90004-X.

[20] T. Carlson and Y. Demiris, “Collaborative control for a robotic wheelchair: evaluation of performance, attention, and workload,” IEEE Trans. Syst. Man Cybern. B Cybern., vol. 42, no. 3, pp. 876–888, 2012. https://doi.org/10.1109/tsmcb.2011.2181833.

[21] P. Trautman, “Assistive planning in complex, dynamic environments: a probabilistic approach,” in Proceedings – 2015 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2015, 2016. https://doi.org/10.1109/SMC.2015.534.

[22] C. Ezeh, P. Trautman, L. Devigne, V. Bureau, M. Babel, and T. Carlson, “Probabilistic vs linear blending approaches to shared control for wheelchair driving,” in IEEE International Conference on Rehabilitation Robotics, 2017. https://doi.org/10.1109/ICORR.2017.8009352.

[23] M. Plöchl and J. Edelmann, “Driver models in automobile dynamics application,” Veh. Syst. Dyn., vol. 45, nos. 7–8, pp. 699–741, 2007. https://doi.org/10.1080/00423110701432482.

[24] Y. Huang, E. C. Ng, J. L. Zhou, N. C. Surawski, E. F. Chan, and G. Hong, “Eco-driving technology for sustainable road transport: a review,” Renew. Sustain. Energy Rev., vol. 93, pp. 596–609, 2018. https://doi.org/10.1016/j.rser.2018.05.030.

[25] F. Sagberg, Selpi, G. F. Bianchi Piccinini, and J. Engström, “A review of research on driving styles and road safety,” Hum. Factors, vol. 57, no. 7, pp. 1248–1275, 2015. https://doi.org/10.1177/0018720815591313.

[26] M. Hasenjäger, M. Heckmann, and H. Wersing, “A survey of personalization for advanced driver assistance systems,” IEEE Trans. Intell. Veh., vol. 5, no. 2, pp. 335–344, 2020. https://doi.org/10.1109/tiv.2019.2955910.

[27] F. Flemisch, M. Usai, G. Weßel, and N. Herzberger, “Human system patterns for interaction and cooperation of automated vehicles and humans: confidence horizons and diagnostic takeover-requests (TOR) in cooperatively automated driving,” Automatisierungstechnik, vol. 71, no. 4, pp. 278–287, 2023. https://doi.org/10.1515/auto-2022-0160.

[28] N. Y. Mbelekani and K. Bengler, “Risk and safety-based behavioural adaptation towards automated vehicles: emerging advances, effects, challenges and techniques,” in Proceedings of Ninth International Congress on Information and Communication Technology, X.-S. Yang, S. Sherratt, N. Dey, and A. Joshi, Eds., Singapore, Springer Nature Singapore, 2024, pp. 459–482. https://doi.org/10.1007/978-981-97-3299-9_38.

[29] J. Mumm and B. Mutlu, “Human-robot proxemics: physical and psychological distancing in human-robot interaction,” in HRI 2011 – Proceedings of the 6th ACM/IEEE International Conference on Human-Robot Interaction, 2011. https://doi.org/10.1145/1957656.1957786.

[30] F. A. Leite, et al., “A robocentric paradigm for enhanced social navigation in autonomous robotics: a use case for an autonomous wheelchair,” in 2024 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), IEEE, 2024, pp. 112–119. https://doi.org/10.1109/ICARSC61747.2024.10535955.

[31] B. Zhang, C. Holloway, and T. Carlson, “Reinforcement learning based user-specific shared control navigation in crowds,” in Conference Proceedings – IEEE International Conference on Systems, Man and Cybernetics, 2023. https://doi.org/10.1109/SMC53992.2023.10394139.

[32] J. Ho and S. Ermon, “Generative adversarial imitation learning,” in Advances in Neural Information Processing Systems, Cambridge, MA, The MIT Press, 2016.

[33] M. P. Pacaux-Lemoine and M. Itoh, “Towards vertical and horizontal extension of shared control concept,” in Proceedings – 2015 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2015, 2016. https://doi.org/10.1109/SMC.2015.536.

[34] M. Quigley, et al., “ROS: an open-source Robot Operating System,” in ICRA Workshop on Open Source Software, vol. 3, 2009.

[35] H. Li, L. Yu, and W. He, “The impact of GDPR on global technology development,” J. Glob. Inf. Technol. Manag., vol. 22, no. 1, pp. 1–6, 2019. https://doi.org/10.1080/1097198x.2019.1569186.

[36] A. Thomas, J. Chen, A. Heller-Szabo, M. Kelly, and T. Carlson, “High stimuli virtual reality training for a brain controlled robotic wheelchair,” in IEEE International Conference on Robotics and Automation (ICRA), Yokohama, Japan, 2024. https://doi.org/10.1109/ICRA57147.2024.10610636.

[37] A. Kadian, et al., “Sim2Real predictivity: does evaluation in simulation predict real-world performance?” IEEE Robot. Autom. Lett., vol. 5, no. 4, pp. 6670–6677, 2020. https://doi.org/10.1109/lra.2020.3013848.

[38] J. Borenstein and Y. Koren, “Obstacle avoidance with ultrasonic sensors,” IEEE J. Robot. Autom., vol. 4, no. 2, 1988. https://doi.org/10.1109/56.2085.

[39] C. S. Andersen, C. B. Madsen, J. J. Sorensen, N. O. Kirkeby, J. P. Jones, and H. I. Christensen, “Navigation using range images on a mobile robot,” Robot. Autonom. Syst., vol. 10, nos. 2–3, pp. 147–160, 1992. https://doi.org/10.1016/0921-8890(92)90023-r.

[40] H. Surmann, A. Nüchter, and J. Hertzberg, “An autonomous mobile robot with a 3D laser range finder for 3D exploration and digitalization of indoor environments,” Robot. Autonom. Syst., vol. 45, nos. 3–4, pp. 181–198, 2003. https://doi.org/10.1016/j.robot.2003.09.004.

[41] S. A. Shafer, A. Stentz, and C. E. Thorpe, “An architecture for sensor fusion in a mobile robot,” in IEEE International Conference on Robotics and Automation, vol. 3, pp. 2002–2011, 1986. https://doi.org/10.1109/ROBOT.1986.1087440.

[42] M. B. Alatise and G. P. Hancke, “A review on challenges of autonomous mobile robot and sensor fusion methods,” IEEE Access, vol. 8, pp. 39830–39846, 2020. https://doi.org/10.1109/access.2020.2975643.

[43] L. Devigne, F. Pasteau, T. Carlson, and M. Babel, “A shared control solution for safe assisted power wheelchair navigation in an environment consisting of negative obstacles: a proof of concept,” in Conference Proceedings – IEEE International Conference on Systems, Man and Cybernetics, 2019. https://doi.org/10.1109/SMC.2019.8914211.

[44] S. Arditti, F. Habert, O. O. Saracbasi, G. Walker, and T. Carlson, “Tackling the duality of obstacles and targets in shared control systems: a smart wheelchair table-docking example,” in Conference Proceedings – IEEE International Conference on Systems, Man and Cybernetics, 2023. https://doi.org/10.1109/SMC53992.2023.10393886.

[45] A. Takmaz, et al., “3D segmentation of humans in point clouds with synthetic data,” in Proceedings of the IEEE International Conference on Computer Vision, 2023. https://doi.org/10.1109/ICCV51070.2023.00125.

[46] D. Jia, A. Hermans, and B. Leibe, “DR-SPAAM: a spatial-attention and auto-regressive model for person detection in 2D range data,” in IEEE International Conference on Intelligent Robots and Systems, 2020. https://doi.org/10.1109/IROS45743.2020.9341689.

[47] S. Campbell, N. O’Mahony, A. Carvalho, L. Krpalkova, D. Riordan, and J. Walsh, “Path planning techniques for mobile robots: a review,” in 2020 6th International Conference on Mechatronics and Robotics Engineering, ICMRE 2020, 2020. https://doi.org/10.1109/ICMRE49073.2020.9065187.

[48] S. P. Hoogendoorn and W. Daamen, “Pedestrian behavior at bottlenecks,” Transport. Sci., vol. 39, no. 2, pp. 147–159, 2005. https://doi.org/10.1287/trsc.1040.0102.

[49] D. Helbing and P. Molnár, “Social force model for pedestrian dynamics,” Phys. Rev. E, vol. 51, no. 5, pp. 4282–4286, 1995. https://doi.org/10.1103/physreve.51.4282.

[50] B. Zhang, J. Amirian, H. Eberle, J. Pettré, C. Holloway, and T. Carlson, “From HRI to CRI: crowd robot interaction – understanding the effect of robots on crowd motion: empirical study of pedestrian dynamics with a wheelchair and a pepper robot,” Int. J. Soc. Robot., vol. 14, no. 3, 2022. https://doi.org/10.1007/s12369-021-00812-7.

[51] J. Amirian, J. B. Hayet, and J. Pettré, “Social ways: learning multi-modal distributions of pedestrian trajectories with GANs,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, 2019. https://doi.org/10.1109/CVPRW.2019.00359.

[52] P. Charalambous, J. Pettré, V. Vassiliades, Y. Chrysanthou, and N. Pelechano, “GREIL-crowds: crowd simulation with deep reinforcement learning and examples,” ACM Trans. Graphics, vol. 42, no. 4, pp. 1–15, 2023. https://doi.org/10.1145/3592459.

[53] B. Zhang, G. Barbareschi, R. Ramirez Herrera, T. Carlson, and C. Holloway, “Understanding interactions for smart wheelchair navigation in crowds,” in Conference on Human Factors in Computing Systems – Proceedings, 2022. https://doi.org/10.1145/3491102.3502085.

[54] D. A. Abbink, M. Mulder, and E. R. Boer, “Haptic shared control: smoothly shifting control authority?” Cogn. Technol. Work, vol. 14, no. 1, pp. 19–28, 2012. https://doi.org/10.1007/s10111-011-0192-5.

[55] M. P. Pacaux-Lemoine, L. Habib, and T. Carlson, “Levels of cooperation in human-machine systems: a human-BCI-robot example,” in Handbook of Human-Machine Systems, Hoboken, NJ, Wiley, 2023. https://doi.org/10.1002/9781119863663.ch6.

[56] M. P. Pacaux-Lemoine, L. Habib, N. Sciacca, and T. Carlson, “Emulated haptic shared control for brain-computer interfaces improves human-robot cooperation,” in Proceedings of the 2020 IEEE International Conference on Human-Machine Systems, ICHMS 2020, 2020. https://doi.org/10.1109/ICHMS49158.2020.9209521.

Received: 2024-07-09
Accepted: 2024-10-14
Published Online: 2024-11-28
Published in Print: 2024-12-17

© 2024 the author(s), published by De Gruyter, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.
