A few weeks ago, Bloomberg reported that Amazon is developing a wearable device that uses vocal patterns to determine human emotions. Although Amazon hasn't confirmed this report, it's certainly a compelling idea that opens up a lot of discussion about its implications. Some see it as science fiction becoming reality. Others worry about violations of privacy, while still others see an opportunity to gain insights for health and personalization.
The idea of using information from the body as an indicator of mental states has been around for a long time. For example, the infamous lie detector test, i.e., the polygraph, is predicated on the notion that physiological markers such as sweat or a fast heartbeat can indicate that a person is lying. But since it's understandably intrusive and expensive to have people wear full-body sensors, these physiological measures have seen limited use. However, we are now in an era of significant technological advancement. We've all heard the tidbit that our current smartphones give us more computing power than supercomputers from the '70s and '80s. As technology gets more sophisticated, powerful, and compact, physiological measures become cheaper, more accessible, and usable in a wide variety of settings.
Although technological improvements make capturing physiological metrics easier, there are still significant challenges for detecting emotion in particular. First, though we all know what the term "emotion" means in the lay sense of the word, there is no scientific consensus about what "emotion" is, or how it differs from related constructs like "mood." Second, and probably the bigger challenge, physiological measures do not clearly and consistently differentiate emotions. An increase in heart rate could indicate that someone is excited or that they are fearful. For some people, a quiver in the voice may signal nerves, whereas for others nervousness manifests as a higher pitch.
Which brings us back to Amazon’s project (code-named “Dylan”). While using vocal analysis to determine emotion is not yet reliable, Amazon is well-positioned to make this a reality because of its nearly unparalleled access to data. While the relationship between voice and emotion is weak, it can be useful when combined with other data. Amazon can use all of its data about individuals to construct powerful models that shed light on the concomitants of particular emotions. When it’s 2 a.m. and a woman’s voice is low energy, and she orders ice-cream to be delivered (she lives in NYC) while listening to break-up songs and performing a Google search for “how to mend a broken heart,” we can be more certain that she is sad than if we just had that low voice alone.
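The intuition above, that a weak vocal signal becomes informative when combined with other evidence, can be sketched as a simple log-odds fusion model. Everything here is hypothetical: the feature names, weights, and prior are illustrative stand-ins, not any real model Amazon (or anyone else) uses.

```python
import math

def sigmoid(x):
    """Map a log-odds score to a probability between 0 and 1."""
    return 1 / (1 + math.exp(-x))

# Hypothetical log-odds contributions for binary evidence signals.
# Each signal alone is a weak indicator of "sad"; the weights are
# made up for illustration.
WEIGHTS = {
    "low_vocal_energy": 0.4,   # the voice signal by itself is weak
    "late_night_order": 0.6,
    "breakup_playlist": 0.9,
    "sad_search_query": 1.1,
}
BIAS = -1.5  # prior log-odds: most of the time, a person is not sad

def p_sad(signals):
    """Estimate P(sad) from the set of observed signals."""
    score = BIAS + sum(WEIGHTS[s] for s in signals)
    return sigmoid(score)

# The voice signal alone leaves us well below 50% confidence...
voice_only = p_sad({"low_vocal_energy"})
# ...but all four signals together push the estimate much higher.
all_signals = p_sad(set(WEIGHTS))
```

The point of the sketch is not the particular numbers but the shape of the argument: no single weak signal moves the estimate much, while several weak signals that all point the same way do.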
This is great news for Amazon. What does it mean for the rest of us? Physiological data is more accessible than it has ever been. And while this type of data is interesting and fun (who doesn't like to see a video of someone's face watching a funny ad?), it is not a direct pipeline to emotion. We're unlikely to capture a precise emotion by relying on any one physiological indicator alone.
Thus, if you're interested in using physiology to understand emotion, here are a few recommendations:
- Don't rely on a single physiological measure. The same signal (say, an elevated heart rate) can reflect very different emotions.
- Combine physiological data with behavioral and contextual data to triangulate what a person is actually feeling.
- Account for individual differences: nervousness may show up as a vocal quiver in one person and a higher pitch in another.
Whether or not Project Dylan comes to fruition, one thing is clear: as companies collect more consumer data, they have an unprecedented opportunity to understand their consumers’ emotions. But if companies want to do this successfully, they can’t rely on physiological data alone. They must use multiple sources of data to get the full picture.