Operators working with robots in safety-critical domains have to make decisions under uncertainty, which remains a challenging problem for a single human operator. An open question is whether two human operators can make better decisions jointly, as compared to a single operator alone. While prior work has shown that two heads are better than one, such studies have been mostly limited to static and passive tasks. We investigate joint decision-making in a dynamic task involving humans teleoperating robots. We conduct a human-subject experiment with N=100 participants in which each participant performed a navigation task with two mobile robots in simulation. We find that joint decision-making through confidence sharing improves dyad performance beyond that of the better-performing individual (p < 0.0001). Further, we find that the extent of this benefit is regulated both by the skill level of each individual and by how well-calibrated their confidence estimates are. Finally, we present findings on characterising the human-human dyad’s confidence calibration based on the individuals constituting the dyad. Our findings demonstrate for the first time that two heads are better than one, even on a spatiotemporal task that includes active operator control of robots.
@inproceedings{10.5555/3721488.3721550,author={Nguyen, Duc-An and Bhattacharyya, Raunak and Colombatto, Clara and Fleming, Steve and Posner, Ingmar and Hawes, Nick},title={Group Decision-Making in Robot Teleoperation: Two Heads are Better Than One},year={2025},publisher={IEEE Press},booktitle={ACM/IEEE International Conference on Human-Robot Interaction (HRI)},pages={489–500},numpages={12},keywords={human-robot interaction, joint decision-making, teleoperation},location={Melbourne, Australia},series={HRI '25},}
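As a concrete illustration of one way confidence sharing can drive a joint decision, the toy sketch below implements a maximum-confidence arbitration rule: when the two operators disagree, the dyad adopts the choice of the more confident one. The class and function names are illustrative only, and this is a generic rule from the group decision-making literature rather than necessarily the exact protocol used in the study.

```python
# Illustrative sketch (not the paper's implementation): a simple
# maximum-confidence arbitration rule for combining two operators'
# decisions, as commonly assumed in confidence-sharing studies.
from dataclasses import dataclass

@dataclass
class OperatorReport:
    decision: str      # e.g. which corridor to send the robot down
    confidence: float  # self-reported confidence in [0, 1]

def dyad_decision(a: OperatorReport, b: OperatorReport) -> str:
    """Return the joint decision: if the operators agree, take it;
    otherwise defer to the more confident operator."""
    if a.decision == b.decision:
        return a.decision
    return a.decision if a.confidence >= b.confidence else b.decision

if __name__ == "__main__":
    op1 = OperatorReport(decision="left corridor", confidence=0.55)
    op2 = OperatorReport(decision="right corridor", confidence=0.80)
    print(dyad_decision(op1, op2))  # -> "right corridor"
```

Under such a rule, the dyad can only outperform its better member when the operators' confidence reports are reasonably well calibrated, which matches the abstract's finding that calibration regulates the size of the benefit.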
Journal
Time-bounded planning with uncertain task duration distributions
Michal Staniaszek, Lara Brudermüller, Yang You, and 3 more authors
Robotics and Autonomous Systems, 2025
We consider planning problems where a robot must gather reward by completing tasks at each of a large set of locations while constrained by a time bound. Our focus is on problems where the context under which each task will be executed can be predicted, but is not known in advance. Here, the term context refers to the conditions under which the task is executed, and can relate to the robot’s internal state (e.g., how well is it localised?) or to the environment itself (e.g., how dirty is the floor the robot must clean?). This context affects the time required to execute the task, which we model probabilistically. We model the problem of time-bounded planning for tasks executed under uncertain contexts as a Markov decision process with discrete time in the state, and propose variants of this model which allow adaptation to different robotics domains. Due to the intractability of the general model, we propose simplifications that allow planning in large domains. The key idea behind these simplifications is constraining navigation using a solution to the travelling salesperson problem. We evaluate our models on maps generated from real-world environments and consider two domains with different characteristics: UV disinfection and cleaning. We evaluate the effect of model variants and simplifications on performance, and show that policies obtained for our models outperform a rule-based baseline, as well as a model which does not consider context. We also evaluate our models in a real-robot experiment where a quadruped performs simulated inspection tasks in an industrial environment.
@article{STANIASZEK2025104926,title={Time-bounded planning with uncertain task duration distributions},journal={Robotics and Autonomous Systems},volume={186},pages={104926},year={2025},issn={0921-8890},doi={https://doi.org/10.1016/j.robot.2025.104926},author={Staniaszek, Michal and Brudermüller, Lara and You, Yang and Bhattacharyya, Raunak and Lacerda, Bruno and Hawes, Nick},keywords={Markov decision process, Mission planning, Temporal methods, Travelling salesperson problem}}
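The abstract's key simplification, fixing the visit order with a travelling-salesperson tour while keeping discretised remaining time in the state, can be illustrated with a small dynamic-programming sketch. The function below is an assumption-laden toy, not the paper's model or code: locations are visited in a fixed tour order, task durations are given as discrete integer-step distributions, and the only decision at each location is whether to attempt its task or skip it given the remaining time budget.

```python
# Illustrative sketch (assumed structure, not the paper's code): value
# iteration over a fixed TSP visit order with discretised remaining time.
# At each location the policy either attempts the task (stochastic integer
# duration, reward only if it finishes within the budget) or skips it.
import numpy as np

def plan_over_tour(travel_times, duration_dists, rewards, time_budget):
    """travel_times[i]: integer travel time from tour stop i to stop i+1.
    duration_dists[i]: dict {duration: probability} for task i.
    rewards[i]: reward for completing task i within the time bound.
    Returns V[i][t], the expected remaining reward at stop i with t steps
    left, and a boolean policy table (True = attempt the task)."""
    n = len(rewards)
    V = np.zeros((n + 1, time_budget + 1))           # V[n] = 0: end of tour
    policy = np.zeros((n, time_budget + 1), dtype=bool)
    for i in range(n - 1, -1, -1):
        travel = travel_times[i]
        for t in range(time_budget + 1):
            # Option 1: skip the task and travel on to the next stop.
            t_next = t - travel
            skip = V[i + 1][t_next] if t_next >= 0 else 0.0
            # Option 2: attempt the task; its duration is uncertain.
            attempt = 0.0
            for d, p in duration_dists[i].items():
                if t - d >= 0:                        # task finishes in time
                    t_after = t - d - travel
                    cont = V[i + 1][t_after] if t_after >= 0 else 0.0
                    attempt += p * (rewards[i] + cont)
            policy[i][t] = attempt > skip
            V[i][t] = max(skip, attempt)
    return V, policy

if __name__ == "__main__":
    V, policy = plan_over_tour(
        travel_times=[2, 3, 2],
        duration_dists=[{3: 0.7, 6: 0.3}, {2: 1.0}, {4: 0.5, 8: 0.5}],
        rewards=[5.0, 2.0, 4.0],
        time_budget=15,
    )
    print(V[0][15])       # expected reward of the best attempt/skip policy
    print(policy[0][15])  # whether to attempt the first task with 15 steps left
```

Because the tour fixes the navigation order, the value function only ranges over (tour position, remaining time), which is what keeps the table tractable even for large sets of locations.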
Journal
A transparency paradox? Investigating the impact of explanation specificity and autonomous vehicle imperfect detection capabilities on passengers
Daniel Omeiza, Raunak Bhattacharyya, Marina Jirotka, and 2 more authors
Transportation Research Part F: Traffic Psychology and Behaviour, 2025
Transparency in automated systems could be afforded through the provision of intelligible explanations. While transparency is desirable, might it lead to adverse outcomes (such as anxiety) that could outweigh its benefits? It is unclear how the specificity of explanations (level of transparency) influences recipients, especially in autonomous driving (AD). In this work, we examined the effects of transparency mediated through varying levels of explanation specificity in AD. We first extended a data-driven explainer model by adding a rule-based option for explanation generation in AD, and then conducted a within-subject lab study with 39 participants in an immersive driving simulator to study the effect of the resulting explanations. Specifically, our investigation focused on: (1) how different types of explanations (specific vs. abstract) affect passengers’ perceived safety, anxiety, and willingness to take control of the vehicle when the vehicle perception system makes erroneous predictions; and (2) the relationship between passengers’ behavioural cues and their feelings during the autonomous drives. Our findings showed that abstract explanations, which were vague enough to conceal all perception-system detection errors, did not make passengers feel safer than specific explanations, which exposed a small number of detection errors. Anxiety levels increased when specific explanations revealed perception-system detection errors (high transparency). We found no significant link between passengers’ visual patterns and their anxiety levels. We advocate for explanation systems in autonomous vehicles (AVs) that can adapt to different stakeholders’ transparency needs.
@article{OMEIZA20251275,title={A transparency paradox? Investigating the impact of explanation specificity and autonomous vehicle imperfect detection capabilities on passengers},journal={Transportation Research Part F: Traffic Psychology and Behaviour},volume={109},pages={1275-1292},year={2025},issn={1369-8478},doi={https://doi.org/10.1016/j.trf.2025.01.015},author={Omeiza, Daniel and Bhattacharyya, Raunak and Jirotka, Marina and Hawes, Nick and Kunze, Lars},keywords={Explanations, Transparency, Autonomous driving, Perceived safety, Visual attention}}
2024
Preprint
CC-VPSTO: Chance-Constrained Via-Point-based Stochastic Trajectory Optimisation for Safe and Efficient Online Robot Motion Planning
Lara Brudermüller, Guillaume Berger, Julius Jankowski, and 2 more authors
arXiv preprint arXiv:2402.01370, 2024
@article{brudermuller2024cc,title={CC-VPSTO: Chance-Constrained Via-Point-based Stochastic Trajectory Optimisation for Safe and Efficient Online Robot Motion Planning},author={Bruderm{\"u}ller, Lara and Berger, Guillaume and Jankowski, Julius and Bhattacharyya, Raunak and Hawes, Nick},journal={arXiv preprint arXiv:2402.01370},year={2024},}
2023
ECMR
Difficulty-Aware Time-Bounded Planning Under Uncertainty for Large-Scale Robot Missions
Michal Staniaszek, Lara Brudermüller, Raunak Bhattacharyya, and 2 more authors
In European Conference on Mobile Robots (ECMR), 2023
@inproceedings{staniaszek2023difficulty,title={Difficulty-Aware Time-Bounded Planning Under Uncertainty for Large-Scale Robot Missions},author={Staniaszek, Michal and Bruderm{\"u}ller, Lara and Bhattacharyya, Raunak and Lacerda, Bruno and Hawes, Nick},booktitle={European Conference on Mobile Robots (ECMR)},year={2023},organization={IEEE},}
2022
IEEE Transactions
Modeling human driving behavior through generative adversarial imitation learning
Raunak Bhattacharyya, Blake Wulfe, Derek J Phillips, and 4 more authors
IEEE Transactions on Intelligent Transportation Systems, 2022
@article{bhattacharyya2022modeling,title={Modeling human driving behavior through generative adversarial imitation learning},author={Bhattacharyya, Raunak and Wulfe, Blake and Phillips, Derek J and Kuefler, Alex and Morton, Jeremy and Senanayake, Ransalu and Kochenderfer, Mykel J},journal={IEEE Transactions on Intelligent Transportation Systems},volume={24},number={3},pages={2874--2887},year={2022},publisher={IEEE},}
2021
IEEE Transactions
A hybrid rule-based and data-driven approach to driver modeling through particle filtering
Raunak Bhattacharyya, Soyeon Jung, Liam A Kruse, and 2 more authors
IEEE Transactions on Intelligent Transportation Systems, 2021
@article{bhattacharyya2021hybrid,title={A hybrid rule-based and data-driven approach to driver modeling through particle filtering},author={Bhattacharyya, Raunak and Jung, Soyeon and Kruse, Liam A and Senanayake, Ransalu and Kochenderfer, Mykel J},journal={IEEE Transactions on Intelligent Transportation Systems},volume={23},number={8},pages={13055--13068},year={2021},publisher={IEEE},}
2020
ACC
Online parameter estimation for human driver behavior prediction
Raunak P Bhattacharyya, Ransalu Senanayake, Kyle Brown, and 1 more author
In American Control Conference (ACC), 2020
@inproceedings{bhattacharyya2020online,title={Online parameter estimation for human driver behavior prediction},author={Bhattacharyya, Raunak P and Senanayake, Ransalu and Brown, Kyle and Kochenderfer, Mykel J},booktitle={American Control Conference (ACC)},year={2020},organization={IEEE},}
2019
ICRA
Simulating emergent properties of human driving behavior using multi-agent reward augmented imitation learning
Raunak P Bhattacharyya, Derek J Phillips, Changliu Liu, and 3 more authors
In International Conference on Robotics and Automation (ICRA), 2019
@inproceedings{bhattacharyya2019simulating,title={Simulating emergent properties of human driving behavior using multi-agent reward augmented imitation learning},author={Bhattacharyya, Raunak P and Phillips, Derek J and Liu, Changliu and Gupta, Jayesh K and Driggs-Campbell, Katherine and Kochenderfer, Mykel J},booktitle={International Conference on Robotics and Automation (ICRA)},year={2019},organization={IEEE},}
2018
IROS
Multi-agent imitation learning for driving simulation
Raunak P Bhattacharyya, Derek J Phillips, Blake Wulfe, and 3 more authors
In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018
@inproceedings{bhattacharyya2018multi,title={Multi-agent imitation learning for driving simulation},author={Bhattacharyya, Raunak P and Phillips, Derek J and Wulfe, Blake and Morton, Jeremy and Kuefler, Alex and Kochenderfer, Mykel J},booktitle={IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},year={2018},organization={IEEE}}