People often reveal their private emotions in tiny, fleeting facial
expressions, visible only to a best friend — or to a skilled poker
player. Now, computer software is using frame-by-frame video analysis to
read subtle muscular changes that flash across our faces in
milliseconds, signaling emotions like happiness, sadness and disgust.
With face-reading software, a computer’s webcam might spot the confused
expression of an online student and provide extra tutoring. Or
computer-based games with built-in cameras could register how people are
reacting to each move in the game and ramp up the pace if they seem
bored.
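
To make that feedback loop concrete, here is a minimal sketch in Python of how a game might adjust its pace from a boredom score. Everything in it is illustrative: the score would come from face-reading software of the kind described in this article, and the function name, threshold and step size are invented for the example.

    def adjust_game_pace(current_pace, boredom, threshold=0.7, step=0.1):
        """Speed the game up when the player looks bored; otherwise drift back."""
        if boredom > threshold:
            return min(current_pace + step, 2.0)   # ramp up, capped at 2x speed
        return max(current_pace - step / 2, 1.0)   # ease back toward normal

    # Example: a run of mostly bored readings pushes the pace upward.
    pace = 1.0
    for boredom in [0.2, 0.8, 0.9, 0.75, 0.4]:
        pace = adjust_game_pace(pace, boredom)
        print(f"boredom={boredom:.2f} -> pace={pace:.2f}x")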
But the rapidly developing technology is far from infallible, and it
raises many questions about privacy and surveillance.
Ever since Darwin, scientists have systematically analyzed facial
expressions, finding that many of them are universal. Humans are
remarkably consistent in the way their noses wrinkle, say, or their
eyebrows move as they experience certain emotions. People can be trained
to note tiny changes in facial muscles, learning to distinguish common
expressions by studying photographs and video. Now computers can be
programmed to make those distinctions, too.
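
As a rough sketch of what that programming involves, the snippet below uses OpenCV's stock face detector to find a face in a single webcam frame and hand the cropped pixels to a classifier. The classify_expression function is a placeholder, not a real model; commercial systems train theirs on large collections of labeled recordings.

    import cv2

    # OpenCV ships a pretrained frontal-face detector at this path.
    face_detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def classify_expression(face_pixels):
        # Placeholder: a real system would map facial-muscle movements
        # ("action units") in this crop to an emotion label.
        return "neutral"

    capture = cv2.VideoCapture(0)    # default webcam
    ok, frame = capture.read()       # one frame; a real system loops over many
    if ok:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.3, 5):
            print(classify_expression(gray[y:y + h, x:x + w]))
    capture.release()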
Companies in this field include Affectiva, based in Waltham, Mass., and Emotient,
based in San Diego. Affectiva used webcams over two and a half years to
accumulate and classify about 1.5 billion emotional reactions from
people who gave permission to be recorded as they watched streaming
video, said Rana el-Kaliouby, the company’s co-founder and chief science
officer. These recordings served as a database to create the company’s
face-reading software, which it will offer to mobile software developers
starting in mid-January.
The company strongly believes that people should give their consent to
be filmed, and it will approve and control all of the apps that emerge
from its algorithms, Dr. el-Kaliouby said.
Face-reading technology may one day be paired with programs that have complementary ways of recognizing emotion, such as software that analyzes people’s voices, said Paul Saffo,
a technology forecaster. If computers reach the point where they can
combine facial coding, voice sensing, gesture tracking and gaze
tracking, he said, a less stilted way of interacting with machines will
ensue.
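
A toy example of that kind of combination, often called late fusion: each modality reports its own probability distribution over emotions, and the system blends them with weights. The scores and weights below are made up purely for illustration.

    import numpy as np

    EMOTIONS = ["happy", "sad", "bored", "confused"]

    # Per-modality probability distributions over EMOTIONS (invented numbers).
    modality_scores = {
        "face":  np.array([0.6, 0.1, 0.2, 0.1]),
        "voice": np.array([0.4, 0.3, 0.2, 0.1]),
        "gaze":  np.array([0.3, 0.1, 0.5, 0.1]),
    }
    weights = {"face": 0.5, "voice": 0.3, "gaze": 0.2}   # must sum to 1

    # Weighted average of the distributions, then pick the most likely emotion.
    fused = sum(w * modality_scores[m] for m, w in weights.items())
    print(EMOTIONS[int(np.argmax(fused))])   # -> happy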
For some, this type of technology raises an Orwellian specter. And
Affectiva is aware that its face-reading software could stir privacy
concerns. But Dr. el-Kaliouby said that none of the coming apps using its
software could record video of people’s faces.
“The software uses its algorithms to read your expressions,” she said, “but it doesn’t store the frames.”
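
In code, the pattern Dr. el-Kaliouby describes might look like the sketch below: each frame is scored in memory and then discarded, so only numbers, never pixels, are retained. The score_frame function is a hypothetical stand-in for the actual analysis.

    def score_frame(frame_pixels):
        # Hypothetical stand-in for the analysis step: the output is a
        # set of emotion scores, with no imagery in it.
        return {"happy": 0.7, "sad": 0.1}

    def analyze_stream(frames):
        # Each frame is scored and then dropped; only the numeric results
        # accumulate, and nothing is ever written to disk.
        return [score_frame(f) for f in frames]

    print(analyze_stream([b"frame-1", b"frame-2"]))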
So far, the company’s algorithms have been used mainly to monitor
people’s expressions as a way to test ads, movie trailers and television
shows in advance. (It is much cheaper to use a program to analyze faces
than to hire people who have been trained in face-reading.)
Affectiva’s clients include Unilever, Mars and Coca-Cola. The advertising research agency Millward Brown says it has used Affectiva’s technology to test about 3,000 ads for clients.
Face-reading software is unlikely to infer precise emotions 100 percent
of the time, said Tadas Baltrusaitis, a Ph.D. candidate at the
University of Cambridge who has written papers
on the automatic analysis of facial expressions. The algorithms have
improved, but “they are not perfect, and probably never will be,” he
said.
Apps that can respond to facial cues may find wide use in education, gaming, medicine and advertising, said Winslow Burleson,
an assistant professor of human-computer interaction at Arizona State
University. “Once we can package this facial analysis in small devices
and connect to the cloud,” he said, “we can provide just-in-time
information that will help individuals, moment to moment throughout
their lives.”
People with autism, who can have a hard time reading facial expressions,
may be among the beneficiaries, Dr. Burleson said. By wearing Google
Glass or other Internet-connected goggles with cameras, they could get
clues to the reactions of the people with whom they were talking — clues
that could come via an earpiece as the program translates facial
expressions.
But facial-coding technology raises privacy concerns as more and more of
society’s interactions are videotaped, said Ginger McCall, a lawyer and
privacy advocate in Washington.
“The unguarded expressions that flit across our faces aren’t always the
ones we want other people to readily identify,” Ms. McCall said — for
example, during a job interview. “We rely to some extent on the
transience of those facial expressions.”
She added: “Private companies are developing this technology now. But
you can be sure government agencies, especially in security, are taking
an interest, too.”
Ms. McCall cited several government reports, including a National Defense Research Institute report this year that discusses the technology and its possible applications in airport security screening.
She said the programs could be acceptable for some uses, such as dating
services, as long as people agreed in advance to have webcams watch and
analyze the emotions reflected in their faces. “But without consent,”
Ms. McCall said, “they are problematic — and this is a technology that
could easily be implemented without a person’s knowledge.”