Tell Zoom to cancel its creepy AI mood tracker
To: Zoom leadership
Dear Zoom,
Hi there. It’s us—the millions of people who learned your name during the pandemic and have stuck with you through thick and thin to make you the most successful video platform on the web. We rely on you for work calls, town halls, and word games with our families.
We like you, we really do. But we heard something troubling, and we’re getting worried about you.
Protocol reported that you’re planning a feature that claims to track and analyze our emotions. We get that you’re trying to improve your platform, but surveilling users and mining us for emotional data points doesn’t make the world a better place. And selling this tech to employers or businesses so that they can monitor and manipulate us for profit is really not cool.
It’s manipulative.
You describe this emotional surveillance tool as a way for businesses to hone their sales pitch by tracking the headspace of the person on the other side of the screen. Even that is a major breach of user trust. But we see the writing on the wall. Ultimately, this software will be sold to schools and employers who will use it to track and discipline us. You say you care about our happiness—so where does this dystopian vision fit in?
It’s discriminatory.
Emotion AI, like facial recognition in general, is inherently biased. It has roots in historic practices like physiognomy, which have been thoroughly debunked (not to mention, are totally racist). These tools assume that all people use the same facial expressions, voice patterns, and body language—but that's not true. Adding this feature will discriminate against certain ethnicities and people with disabilities, hardcoding stereotypes into millions of devices.
It’s pseudoscience.
Let’s be honest—this emotional measuring stuff is a marketing gimmick, and experts admit that it doesn’t even work. The way we move our faces is often disconnected from the emotions underneath, and research has found that even humans can't reliably read emotion from faces. Why lend credence to pseudoscience and stake your reputation on a fundamentally broken feature?
Zoom, you can do better.
We’ve already lost trust in a bunch of other companies because of shady tracking systems, data breaches, and other extractivist practices. Zoom, this is a chance to be one of the good ones. You’ve made the right call before, like in 2020, when you changed your mind about blocking free users from your encrypted service. You’ve even canceled face-tracking features before because they didn’t meet your privacy standards. This can be just like those times—we’re just asking you to put the privacy and happiness of your users first.
You’re the industry leader, and millions of people are counting on you to steward our virtual future. Make the right call and cancel this crummy surveillance feature—and publicly commit to not implementing emotion AI in the future.
With emotion ❤️,
Your Users