Target Detection in Video Feeds with Selected Dyads and Groups Assisted by Collaborative Brain-Computer Interfaces

Saugat Bhattacharyya (1), Davide Valeriani (2), Caterina Cinel (1), Luca Citi (1), Riccardo Poli (1)

  • 1: University of Essex
  • 2: Massachusetts Eye and Ear, Harvard Medical School

Details

16:30 - 18:30 | Thu 21 Mar | Grand Ballroom B | ThPO.40

Session: Poster Session I

Abstract

We present a collaborative Brain-Computer Interface (cBCI) to aid group decision-making based on realistic video feeds. The cBCI combines neural features extracted from the EEG with response times to estimate the decision confidence of each user. These confidence estimates are then used to weight individual responses and obtain group decisions. Results obtained with 10 participants indicate that cBCI-assisted groups are significantly more accurate than equally-sized groups using standard majority voting. Moreover, selecting dyads on the basis of the average performance of their members and then assisting them with our cBCI halves the error rate relative to majority-based performance, while still allowing most participants to be included in at least one of the selected dyads, making the approach particularly inclusive. These results indicate that this selection strategy makes cBCIs even more effective as a method for human augmentation in realistic scenarios.
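For illustration only, the sketch below shows one way confidence estimates could be used to weight individual responses into a group decision, as described in the abstract. The ±1 coding of responses, the [0, 1] confidence range, and the function name are assumptions for this sketch, not the authors' implementation.

```python
import numpy as np

def group_decision(decisions, confidences):
    """Confidence-weighted group decision (illustrative sketch).

    decisions   : individual responses coded as +1 (target) / -1 (non-target)
    confidences : per-member confidence estimates in [0, 1], e.g. produced by
                  a model over EEG features and response times
    """
    decisions = np.asarray(decisions, dtype=float)
    confidences = np.asarray(confidences, dtype=float)
    # Each response counts in proportion to its estimated confidence
    score = np.sum(confidences * decisions)
    return 1 if score >= 0 else -1

# Example: in a dyad, the more confident member's response prevails,
# unlike in a plain (unweighted) majority vote, which would tie.
print(group_decision([+1, -1], [0.9, 0.3]))  # -> 1
```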