Exploring Nonverbal Cues in VR Experiences

I often think about which VR experiences have had the most profound effect on me and my perception of the possibilities of this technology. I can’t recall the exact time or date, but one scenario always sticks in my mind. It was a meeting held in a lecture theatre in VR, with participants from around the world invited to ask questions of the presenter.
So what was the profound effect?
Unlike previous VR meetings I’d attended, this was the first in which I witnessed another participant expressing their emotion, their passion, in the question they posed. It wasn’t their words, their tone or even the question itself: it was seeing their hand gestures in real time as they expressed their thoughts. Whilst words can convey so much, nonverbal cues like hand gestures can provide an even broader understanding.
That is why I was particularly interested in a paper presented at ASSETS ’24: The 26th International ACM SIGACCESS Conference on Computers and Accessibility, by Crescentia Jung, Jazmin Collins, Ricardo E Gonzalez Penuela, Jonathon Isaac Segal, Andrea Stevenson Won and Shiri Azenkot. Their paper, “Accessible Nonverbal Cues to Support Conversations in VR for Blind and Low Vision People”, explores possibilities for conveying nonverbal communication in a highly visual medium. It makes for compelling reading, and it also raises thought-provoking points about how we use nonverbal communication in the physical world, not just the virtual one.
Over my years working with people with vision impairment, exploring how assistive technology can help them achieve independence, the basic problem of detecting nonverbal cues has never been solved. The paper demonstrates how we might achieve this in the virtual world, but how can we translate that to the physical environment?
Will the rapid development of AI help us reach this milestone? We already have (relatively) inexpensive AI-capable wearables on the market, such as the Meta Ray-Ban smart glasses, that could theoretically address this. However, have AI models developed sufficiently to make it a reality? What’s more, given that AI is constantly evolving and training on the data made available to it, are we ready to provide consent to help it do so?
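To make the physical-world question a little more concrete, here is a minimal sketch, not from the paper and purely illustrative, of how off-the-shelf computer vision might flag a simple hand gesture from a camera feed. It assumes Python with the MediaPipe hand-landmark model and OpenCV, and an ordinary webcam rather than a wearable; the "raised hand" check is a deliberately crude stand-in for real gesture recognition.

```python
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

# Capture from the default webcam (a wearable would supply its own feed)
cap = cv2.VideoCapture(0)
with mp_hands.Hands(max_num_hands=2, min_detection_confidence=0.5) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV captures BGR
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            for hand in results.multi_hand_landmarks:
                # Each detected hand has 21 landmarks in normalised image coordinates
                wrist = hand.landmark[mp_hands.HandLandmark.WRIST]
                tip = hand.landmark[mp_hands.HandLandmark.INDEX_FINGER_TIP]
                # Fingertip above the wrist (smaller y value) is a crude proxy
                # for a raised or pointing hand
                if tip.y < wrist.y:
                    print("Possible raised-hand gesture detected")
cap.release()
```

Even a toy check like this hints at what is technically within reach; the far harder problems are interpreting gestures in context and relaying them to a blind or low vision user in a timely, non-intrusive way.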
For now, I look forward to the next VR meeting I attend, to see which of those present are conscious of their use of nonverbal gestures.