
Chicago, IL
United States
CON/CNS Membership Talk
Tuesday, June 3rd, 11:00 am
Pedro Lopes, PhD
University of Chicago
SBRI J461
“Integrating Interactive Devices with the User’s Body”
Abstract: When we look back to the early days of computing, user and device were distant. In the ’70s, personal computers “moved in” with users. In the ’90s, mobile devices moved computing into users’ pockets. Then, wearables brought computing into physical contact with our skin. These transitions proved useful: moving closer to users allowed devices to sense more of their users and act more personal. The question that drives my research is: what is the next interface paradigm that supersedes wearables?
I propose that the next generation of interfaces will be defined by how devices integrate with the user’s biological senses and actuators. For the past several years, my lab has been exploring how this body-device integration allows us to engineer interactive devices that intentionally borrow the user’s body (i.e., their nervous system) for input and output, rather than adding more technology to the body.
The first key advantage of body-device integration is that it enables a new generation of miniaturized devices, allowing us to circumvent traditional physical constraints. For instance, our devices based on electrical muscle stimulation illustrate how to create realistic haptic feedback (e.g., forces in immersive environments, such as virtual or augmented reality) while circumventing the constraints imposed by robotic exoskeletons, which must balance their output power against the size of their motors and batteries. Taking this further, we successfully applied this body-device integration approach to other sensory modalities. For instance, we engineered a device that leverages chemical stimulation to render temperature sensations without relying on cumbersome thermal actuators. Our approach to miniaturizing devices is especially useful for advancing mobile interactions, such as in virtual or augmented reality, where users desire to remain untethered.
A second key advantage is that integrating devices with the user’s body allows new interactions to emerge without encumbering the user’s hands. Using our approach, we demonstrated how to create tactile sensations in the user’s fingerpads without placing any hardware on the fingerpads—instead, we intercept the fingerpad nerves from the back of the user’s hand. This allows users to benefit from haptic feedback (e.g., tactile cues they can use as guidance in VR/AR) without detriment to their dexterity. Taking this further, we also demonstrated that by using brain stimulation we can achieve haptic feedback on all four limbs without wearing any hardware (e.g., feeling forces and tactile sensations on both hands and feet)—opening up a new way to achieve haptics by directly stimulating the source (the brain) rather than the endpoints (the limbs).
A third key advantage is that our integrated devices enable new physical modes of reasoning with computers, going beyond purely symbolic thinking. For example, we have engineered a set of devices that control the user’s muscles to convey tacit information, such as a muscle-stimulation device that teaches new motor skills (e.g., piano playing or sign language), and wearable devices that let users control information with their bodies and without the need for screens (e.g., using their lips, hands, or feet for both input and output).
A fourth key advantage we found while integrating devices with the user’s body is that we can endow users with new physical abilities. Examples include a device that allows users to locate odor sources by “smelling in stereo,” as well as a device that physically accelerates one’s reaction time using muscle stimulation, which can steer users to safety or even help them catch a falling object that they would normally miss.
While this integration between humans and computers is beneficial (e.g., faster reaction time, realistic simulations in VR/AR, or improved skill acquisition), it also requires tackling new challenges, such as improving the precision with which we safely stimulate the body, and addressing the question of agency: do we feel in control when our body is integrated with an interface? Together with our colleagues in neuroscience, we have been improving the design of this new type of computer interface. We found that, even in the extreme case of our interfaces that electrically control the user’s muscles, it is possible to improve the user’s sense of agency. More importantly, we found that it is only by preserving the user’s sense of agency that these integrated devices continue to provide benefits even after the user removes them.
It is undeniable that this new interface generation will be so deeply connected to the user’s nervous system that designing it will require more than just computing knowledge—it requires neuroscience.