What Happens When an AI Acknowledges Your Emotions?

Technology once served only to deliver our messages. Now it wants to write them for us by understanding our emotions.


IN MAY 2021, Twitter, a platform infamous for abuse and immaturity, launched a “prompts” feature that encourages users to pause before sending a tweet. The following month, Facebook announced AI-powered “conflict alerts” for groups, allowing administrators to intervene where “contentious or unhealthy conversations are occurring.” Every day, email and messaging smart-replies complete billions of sentences for us.


Amazon’s Halo, released in 2020, is a fitness band that monitors your tone of voice. Wellness is no longer defined by heart rate or step count, but by how we present ourselves to those around us. Algorithmic therapeutic tools are being developed to predict and prevent negative behaviour.


According to Jeff Hancock, a Stanford University professor of communication, AI-mediated communication occurs when “an intelligent agent acts on behalf of a communicator by modifying, augmenting, or generating messages to accomplish communication goals.” He asserts that this technology is already in widespread use.


Underneath it all is a growing conviction that our relationships are just a little short of perfection. Since the outbreak of the pandemic, more and more of our relationships have been mediated by computers. Amid a churning ocean of online spats, toxic Slack messages, and endless Zoom calls, could algorithms help us be nicer to one another? Can an app read our emotions more accurately than we can? Or does outsourcing our communications to AI erode the essence of human relationships?

Co-Parenting by Code

ONE COULD SAY THAT Jai Kissoon was raised in a family court system. Or, at the very least, in its vicinity. His mother, Kathleen Kissoon, was a family law attorney, and as a teenager, he would hang out at her Minneapolis, Minnesota, office and assist with document collation.


This was pre-“fancy copy machines,” and as Kissoon shuffled through the endless stacks of paper that flutter through a law firm’s corridors, he overheard stories about the numerous ways families can fall apart.


In that sense, Kissoon has never strayed far. In 2001 he cofounded OurFamilyWizard, a scheduling and communication tool for divorced and co-parenting couples. Kathleen conceived the idea; Jai wrote the business plan and launched OurFamilyWizard as a website.


It quickly gained the attention of those working in the legal system, including Judge James Swenson, who piloted the platform in 2003 at Hennepin County Family Court in Minneapolis. The project took 40 of the “most hardcore families,” as Kissoon describes them, and placed them on the platform—and “they vanished from the court system.” When someone did end up in court, two years later, it was because a parent had stopped using the platform.


After two decades, OurFamilyWizard has been used by approximately a million people and has received court approval throughout the United States. It launched in the United Kingdom in 2015 and in Australia a year later. It is now available in 75 countries and competes with coParenter, Cozi, Amicable, and TalkingParents. According to Brian Karpf, secretary of the American Bar Association’s Family Law Section, many lawyers now recommend co-parenting apps as standard practice, particularly when they want to have a “chilling effect” on a couple’s communication.


These applications can act as a deterrent to harassment, and their use in communications can be ordered by a court.


To promote civility, artificial intelligence has become an increasingly prominent feature. OurFamilyWizard includes a “ToneMeter” feature that employs sentiment analysis to monitor messages sent through the app—“something to serve as a yield sign,” according to Kissoon. Sentiment analysis is a subset of natural language processing, the analysis of human language. Trained on vast bodies of text, these algorithms parse a message and assign it sentiment and emotion scores based on the words and phrases it contains.


When the ToneMeter detects an emotionally charged phrase in a message, a set of signal-strength bars turns red and the offending words are highlighted. “It’s your fault we were late,” for instance, could be interpreted as “aggressive.” Other expressions may be labelled as “humiliating” or “upsetting.” It is up to the user to decide whether or not to proceed with the send.
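At its simplest, this kind of tone check can be sketched as a lexicon lookup: scan the outgoing message for charged words and return a label plus the words to highlight. The word list and labels below are invented for illustration and bear no relation to OurFamilyWizard's actual model, which is far more sophisticated.

```python
# A minimal lexicon-based sketch of a tone checker.
# The CHARGED vocabulary and labels are hypothetical, not ToneMeter's.
CHARGED = {
    "fault": "aggressive",
    "blame": "aggressive",
    "lazy": "humiliating",
    "useless": "humiliating",
}

def flag_message(text):
    """Return (label, highlighted_words) for an outgoing message."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    hits = [(w, CHARGED[w]) for w in words if w in CHARGED]
    if not hits:
        return ("neutral", [])
    # Report the first matching label and every word that triggered a match
    label = hits[0][1]
    return (label, [w for w, _ in hits])

print(flag_message("It's your fault we were late"))  # ('aggressive', ['fault'])
```

A real system would score phrases in context rather than single words, but the shape of the interaction is the same: the app flags, and the user decides whether to send.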


ToneMeter was initially used for messaging, but is now being coded for all points of communication between parents within the app. According to Shane Helget, chief product officer, it will soon not only discourage negative communication but also encourage positive language. He is analysing a wide variety of interactions with the hope of using the app to proactively encourage parents to behave positively toward one another beyond routine conversations.


There could be reminders to communicate schedules in advance, or an offer to swap dates for birthdays or holidays—non-mandatory but likely well-received gestures.


The coParenter app, which launched in 2019, also employs sentiment analysis. Parents communicate via text, and a warning appears if a message is deemed too hostile—much as a human mediator would shush a client. If the system does not produce an agreement, a human can be added to the chat.


Delegating such emotionally charged negotiations to an app has a number of drawbacks. Kissoon made a conscious decision not to allow the ToneMeter to assign a score to parents based on how positive or negative they appear, and Karpf reports a noticeable effect on user behaviour. “Communications become increasingly robotic,” he explains. “Are you now writing for an audience?”


Co-parenting apps may be able to assist in navigating a difficult relationship, but they cannot resolve it. Occasionally, they can exacerbate the situation. According to Karpf, some parents use the app as a weapon, sending “bait” messages to rile up their spouse and coerce them into sending a problem message: “A jerk parent will always be a jerk parent.”


Kissoon recalls a conversation he had with a judge at the pilot program’s inception. “The important thing to remember about tools is that I can hand you a screwdriver and you can use it to fix a lot of things,” the judge explained. “Alternatively, you can poke yourself in the eye.”

Hugs from the Computer

ADELA TIMMONS was a doctoral student in psychology in 2017 when she completed a clinical internship at UC San Francisco and San Francisco General Hospital, where she worked with families with young children from low-income backgrounds who had been exposed to trauma.


While there, she noticed a pattern developing: Patients would make progress in therapy only to have it eroded by the chaos of daily life in between sessions. She believed that technology could “bridge the divide between the therapist’s office and the real world” and saw the potential for wearable technology that could intervene precisely as a problem unfolded.


This is referred to as a “Just in Time Adaptive Intervention” in the field. In theory, it’s similar to having a therapist available to whisper in your ear whenever an emotional alarm bell goes off. “However, to do this effectively,” says Timmons, who is now the director of Florida International University’s Technological Interventions for Ecological Systems (TIES) Lab, “you must sense or detect interesting behaviours remotely.”


Timmons’ research, which involves building computational models of human behaviour, aims to develop algorithms that can accurately predict how couples and families will behave. She concentrated on couples first. In one study, researchers wired 34 young couples with wrist and chest monitors that tracked body temperature, heart rate, and perspiration.


Additionally, they provided them with smartphones that eavesdropped on their conversations. Timmons and her colleagues developed models to predict when a couple was likely to fight by cross-referencing this data with hourly surveys in which couples described their emotional state and any disagreements. A fast heart rate, frequent use of the word “you,” and contextual factors such as the time of day or the amount of light in a room would all be triggers. “There is no single variable that is a strong predictor of an inevitable row,” Timmons explains (though driving in LA traffic was a significant factor), “but by combining a variety of different pieces of information in a model, you can get closer to having an algorithm that works in the real world.”
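The point that no single variable predicts a row, but a combination gets close, can be sketched as a weighted sum pushed through a logistic function. The feature names and weights below are invented stand-ins; the lab's actual models are trained on the sensor and survey data described above.

```python
import math

# Hypothetical weights standing in for a trained model's coefficients.
WEIGHTS = {
    "heart_rate_elevated": 1.2,  # physiological arousal
    "you_word_rate": 0.9,        # frequent use of the word "you"
    "in_traffic": 1.5,           # contextual factor (e.g. driving in LA)
    "low_light": 0.3,            # ambient context
}
BIAS = -2.5  # baseline: conflict is unlikely absent any signal

def conflict_probability(features):
    """Combine several weak signals into one probability estimate,
    since no single variable strongly predicts a fight on its own."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1 / (1 + math.exp(-z))  # logistic squash to [0, 1]
```

With only an elevated heart rate, the estimate stays below 50 percent; add the "you" words and the traffic, and it climbs to roughly 75 percent, which is the sense in which stacking weak predictors yields a workable real-world model.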

Timmons is expanding on these models to examine family dynamics, with an emphasis on strengthening parent-child bonds. TIES is developing mobile applications that utilise smartphones, Fitbits, and Apple Watches to passively detect positive interactions (the idea is that it should be workable with existing consumer technology). To begin, data is gathered—most notably heart rate, tone of voice, and language. Additionally, the hardware detects physical activity and the presence or absence of the parent and child.


The algorithm was 86 percent accurate at detecting conflict in the couples study and was capable of generating a correlation with self-reported emotional states. The hope is that by detecting these states in a family context, the app will be able to intervene actively. “It could be a prompt, such as ‘go hug your child’ or ‘tell your child something they did well today,'” Timmons explains. “We’re also developing algorithms that can detect negative states in parents and then send interventions to assist them in regulating their emotions. We know that when a parent’s emotion is in check, things generally go more smoothly.”


Contextual information contributes to the improvement of prediction rates: Has the individual slept well the previous night? Have they exercised on that particular day? Prompts may include suggestions to meditate, try a breathing exercise, or engage in cognitive behavioural therapy techniques. There are currently available mindfulness apps, but they rely on the user remembering to use them during times when they are likely to be angry, upset, or emotionally overwhelmed. “It’s precisely at those times when you’re least able to mobilise your cognitive resources,” Timmons explains. “Our hope is to meet the individual halfway by alerting them to the point at which they will need to use those skills.” From her work with families, she has discovered that the traditional structure of therapy—50-minute sessions once a week—is not always the most effective way to make an impact. “I believe the field is becoming more explicit in its interest in expanding the science of psychological intervention.”


The work is funded by the National Institutes of Health and the National Science Foundation as part of a fund to develop commercially viable technology systems, and Timmons hopes the research will result in more accessible, scalable, and sustainable psychological health care. Once her lab has data demonstrating that the technology is effective and safe for families—and does not cause unexpected harm—decisions about how such technology could be deployed will need to be made.


Privacy is a concern as data-driven health care becomes more prevalent. Apple is the latest major technology company to enter this space; the company is halfway through a three-year study with UCLA researchers that began in 2020 to determine whether iPhones and Apple Watches can detect—and eventually predict and intervene in—cases of depression and mood disorders. The iPhone’s camera and audio sensors will be used to collect data, as will the user’s movements and even the way they type on the device. Apple intends to safeguard user data by running the algorithm entirely on the phone, with no data sent to Apple’s servers.


Timmons states that no data is sold or shared at the TIES lab, except in cases of harm or abuse. She believes it is critical for scientists developing these technologies to consider potential misuses: “It is the scientific community’s joint responsibility, along with legislators and the public, to establish acceptable limits and bounds within this space.”


The next step is to evaluate the models in real time to determine their efficacy and whether mobile phone prompts actually result in meaningful behavioural change. “We have a number of compelling reasons and theories to believe that would be an extremely effective mode of intervention,” Timmons says. “We simply do not know how well they work in practice.”

A Relationship X-Ray

THE IDEA THAT SENSORS AND ALGORITHMS CAN UNDERSTAND THE COMPLEXITIES OF HUMAN INTERACTION IS NOT NEW. For relationship psychologist John Gottman, love has always been a numbers game. He has been attempting to quantify and analyse the alchemy of relationships since the 1970s.


Gottman conducted research on couples, most notably at the “Love Lab,” a 1980s-era research centre at the University of Washington. A modified version of the Love Lab continues to operate at the Gottman Institute in Seattle, which he founded in 1996 with his wife, Julie Gottman, a fellow psychologist. In rom-com terms, the Love Lab is a mashup of the opening sequence of When Harry Met Sally and the scene in Meet the Parents in which Robert De Niro subjects his future son-in-law to a lie detector test.


Individuals were wired in pairs and asked to converse with one another—first about their relationship history, then about a conflict—while various pieces of machinery monitored their pulse, perspiration, tone of voice, and amount of fidgeting in their chair. Each facial expression was coded by trained operators in a back room filled with monitors. The Love Lab’s objective was to gather data on how couples interact and express their emotions.


This research fed into the development of the “Gottman method,” a relationship counselling technique. It holds that couples should maintain a 5:1 ratio of positive to negative interactions; that couples who respond to a partner’s bids for attention only 33 percent of the time are headed for “disaster”; and that eye-rolling is strongly associated with marital doom. “Relationships are not complicated,” John Gottman says from his Orcas Island, Washington, home.
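The 5:1 rule is arithmetic you can run on a coded conversation. As a toy illustration, assuming a transcript has already been coded into positive and negative moments:

```python
# Toy check of the Gottman 5:1 "magic ratio" on a coded conversation.
def positivity_ratio(coded):
    """coded is a list of '+' / '-' labels from an observed interaction."""
    pos = coded.count("+")
    neg = coded.count("-")
    return pos / neg if neg else float("inf")

# A hypothetical session: 10 positive moments against 2 negative ones.
session = ["+", "+", "-", "+", "+", "+", "+", "-", "+", "+", "+", "+"]
print(positivity_ratio(session) >= 5)  # True: this couple clears the bar
```

The hard part, of course, is everything the Love Lab was built for: producing those labels reliably in the first place.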


The Gottmans, too, are venturing into the world of artificial intelligence. In 2018 they founded Affective Software to build an online platform for relationship assessment and guidance. It began with an in-person encounter: a friendship struck up years ago when Julie Gottman met Rafael Lisitsa, a Microsoft veteran, as they collected their daughters from school.


Lisitsa, cofounder and CEO of Affective Software, is working on a virtual Love Lab in which couples can receive the same “x-ray” diagnosis of their relationship via the camera on their computer, iPhone, or tablet. Again, facial expressions and tone of voice, as well as heart rate, are monitored. It demonstrates how far emotion detection, or “affective computing,” has progressed; while the original Love Lab was supported by screens and devices, it ultimately required specially trained individuals to watch the footage and accurately code each cue. Gottman was never convinced that the human element could be eliminated. “There are very few people who can code emotion extremely sensitively,” he says. “They were required to be musical. They needed to have some theatre experience… I never imagined a machine could do that.”


Not everyone believes that machines are capable of this. Emotion-detecting artificial intelligence is uncharted territory. It is largely predicated on the idea that humans have universal emotional expressions—a theory that grew out of Paul Ekman’s observations in the 1960s and 1970s and his facial expression coding system, which informs the Gottmans’ work and serves as the foundation for much affective computing software. Several researchers, including Lisa Feldman Barrett of Northeastern University, have questioned whether emotion can reliably be detected from a facial expression at all.


And, despite its widespread use, some facial recognition software has been shown to exhibit racial bias; one study comparing two widely used programmes found that they assigned more negative emotions to Black faces than to white ones. Gottman says the virtual Love Lab is trained on facial datasets spanning all skin types and that his system for coding interactions has been tested across multiple groups in the United States, including African American and Asian American communities. “We know that culture does indeed influence how people express or conceal their emotions,” he says. “We studied Australia, the United Kingdom, South Korea, and Turkey. And it appears as though the affect system I’ve developed actually works. Will it work across all cultures? We truly have no idea.”


Gottman adds that the Love Lab is truly a social coding system; by analysing the conversation’s subject matter, tone of voice, body language, and expressions, it is less concerned with detecting a single emotion in the moment and more concerned with analysing the overall qualities of an interaction. Combine these, according to Gottman, and you can more reliably generate a category such as anger, sadness, disgust, or contempt.


Couples who participate complete a detailed questionnaire and then record two 10-minute conversations. The first is a discussion of the previous week; the second is about a conflict. After uploading the videos, the couple rates their emotional state at various points in the conversation on a scale of 1 (very negative) to 10 (very positive).


The app then analyses this data in conjunction with the detected cues and returns results such as a positive-to-negative ratio, a trust metric, and the prevalence of the dreaded “Four Horsemen of the Apocalypse”: criticism, defensiveness, contempt, and stonewalling. It is intended to be used alongside a licensed therapist.
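Assembling such a report from coded cues is, at heart, a tallying exercise. A rough sketch, assuming hypothetical cue labels (the app's internal codes are not public) and the simplification that every non-positive moment counts as negative:

```python
from collections import Counter

# The four interaction patterns Gottman links to relationship breakdown.
HORSEMEN = {"criticism", "defensiveness", "contempt", "stonewalling"}

def summarise(cues):
    """Aggregate per-moment cue labels into headline report numbers.
    The labels are hypothetical stand-ins for the app's own codes."""
    counts = Counter(cues)
    pos = counts["positive"]
    neg = sum(counts.values()) - pos  # everything non-positive
    return {
        "pos_neg_ratio": pos / neg if neg else float("inf"),
        "horsemen": {h: counts[h] for h in HORSEMEN if counts[h]},
    }
```

For a coded stretch such as `["positive", "criticism", "positive", "contempt", "positive"]`, this yields a 1.5 positive-to-negative ratio and one instance each of criticism and contempt, the kind of summary a therapist would then interpret.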


Therapy and mental health services are increasingly being delivered via video calls—a trend that has been accelerated in the aftermath of the pandemic. According to McKinsey analysts, venture capital investment in virtual care and digital health has tripled since Covid-19, and AI therapy chatbots such as Woebot are gaining traction.


Relationship counselling apps such as Lasting already follow the Gottman method and send notifications reminding users to, for example, express their love to their partner. One might imagine this making us lazy, but the Gottmans view it as an educational process, arming us with tools that will eventually become second nature.


The team is already considering a simplified version that can be used without the assistance of a therapist.


According to the Gottmans, who were inspired by the fact that so many couples are already glued to their smartphones, technology enables the democratisation of counselling. “People are becoming much more accustomed to using technology as a language,” Gottman observes. “And as a means of enhancing their lives in a variety of ways.”


Email for You, but Not by You

THIS TECHNOLOGY IS NOW COMMON. It may be having an effect on your relationships without your knowledge. Consider Gmail’s Smart Reply feature—which suggests possible responses to emails—and Smart Compose, which offers to complete your sentences. Smart Reply was added as a mobile feature in 2015, and Smart Compose was introduced in 2018; both are powered by neural networks.


Jess Hohenstein, a doctoral student at Cornell University, first encountered Smart Reply in 2016, when Google Allo, the now-defunct messaging app, launched. It included a virtual assistant that suggested responses. She found it unsettling: “I didn’t want an algorithm influencing my speech patterns, but I figured this had to be working.”


In 2019, she conducted studies that concluded that artificial intelligence is indeed altering the way we interact and relate to one another. In one study, 113 college students were asked to collaborate on a task with a partner in which one, both, or neither of them could use Smart Reply.


Following that, participants were asked how much they blamed the other person (or AI) in the conversation for the task’s success or failure. A second study examined the linguistic effects of positive or negative “smart” responses.


Hohenstein discovered that people’s language with Smart Reply was skewed toward the positive. Individuals were more likely to roll with a positive suggestion than a negative one—participants were also frequently placed in situations where they desired to disagree but were only offered expressions of agreement. The effect is that the conversation moves more quickly and smoothly—and Hohenstein noticed that it also made the participants feel better about one another.


Hohenstein believes this can backfire in professional relationships: this technology, combined with our inherent suggestibility, has the potential to dissuade us from confronting someone, or from disagreeing at all. By making our communication more efficient, AI may also flatten our true feelings, reducing exchanges to bouncing “love it!” and “sounds good!” back and forth. For those in the workplace who have historically struggled to make their voices heard, that could make speaking up even harder.
