Face Recognition: The Business of Your Face

Facial recognition software is a powerful technology that poses serious threats to civil liberties. It’s also a booming business. Today, dozens of startups and tech giants are selling face recognition services to hotels, retail stores—even schools and summer camps. The business is flourishing thanks to new algorithms that can identify people with far more precision than even five years ago. In order to improve these algorithms, companies trained them on billions of faces—often without asking anyone’s permission. Indeed, chances are good that your own face is part of a “training set” used by a facial recognition firm or part of a company’s customer database.

Consumers may be surprised at some of the tactics companies have used to harvest their faces. In at least three cases, for instance, firms have obtained millions of images by collecting them through photo apps on people’s phones. For now, there are few legal restrictions on facial recognition software, meaning there is little people can do to stop companies from using their faces in this way.

In 2018, a camera collected the faces of passengers as they hurried down an airport jetway near Washington, D.C. In reality, the jetway wasn’t attached to a working airport and the passengers weren’t real travelers; the entire structure was merely a set for the National Institute of Standards and Technology (NIST) to demonstrate how it could collect faces “in the wild.” The faces would become part of a recurring NIST competition that invites companies across the globe to test their facial recognition software.

In the jetway exercise, volunteers gave the agency consent to use their faces. This is how it worked in the early days of facial recognition; academic researchers took pains to get permission to include faces in their data sets. Today, companies are at the forefront of facial recognition, and they’re unlikely to ask for explicit consent to use someone’s face—if they bother with permission at all.

The companies, including industry leaders like Face++ and Kairos, are competing in a market for facial recognition software that is growing by 20% each year and is expected to be worth $9 billion a year by 2022, according to Market Research Future. Their business model involves licensing software to a growing body of customers—from law enforcement to retailers to high schools—which use it to run facial recognition programs of their own.

In the race to produce the best software, the winners will be companies whose algorithms can identify faces with a high degree of accuracy without producing so-called false positives. As in other areas of artificial intelligence, creating the best facial recognition algorithm means amassing a big collection of data—faces, in this case—as a training tool. While companies are able to use the sanctioned collections compiled by governments and universities, such as the Yale Face Database, these training sets are relatively small, containing no more than a few thousand faces.

These official data sets have other limitations. Many lack racial diversity or fail to depict conditions—such as shadows or hats or make-up—that can change how faces appear in the real world. In order to build facial recognition technology capable of spotting individuals “in the wild,” companies needed more images. Lots more.

“Hundreds are not enough, thousands are not enough. You need millions of images. If you don’t train the database with people with glasses or people of color, you won’t get accurate results,” says Peter Trepp, the CEO of FaceFirst, a California-based facial recognition company that helps retailers screen for criminals entering their stores.


An App for That

Where might a company obtain millions of images to train its software? One source has been databases of police mug shots, which are publicly available from state agencies and are also for sale by private companies. California-based Vigilant Solutions, for instance, offers a collection of 15 million faces as part of its facial recognition “solution.”

Some startups, however, have found an even better source of faces: personal photo album apps. These apps, which compile photos stored on a person’s phone, typically contain multiple images of the same person in a wide variety of poses and situations—a rich source of training data.

“We have consumers who tag the same person in thousands of different scenarios. Standing in the shadows, with hats on, you name it,” says Doug Aley, the CEO of Ever AI, a San Francisco facial recognition startup that launched in 2012 as EverRoll, an app to help consumers manage their bulging photo collections.

Ever AI, which has raised $29 million from Khosla Ventures and other Silicon Valley venture capital firms, entered NIST’s most recent facial recognition competition, placing second in the contest’s “Mugshots” category and third in “Faces in the Wild.” Aley credits the success to the company’s immense photo database, which Ever AI estimates at 13 billion images.

In its earlier days, when Ever AI was a mere photo app, its aggressive marketing practices created controversy and led Apple to temporarily ban EverRoll from the App Store in 2016. Notably, the app induced users to send promotional links to all of their phone contacts, a tactic known as “growth hacking” in Silicon Valley parlance. Users also accused it of gobbling their data.

“The first thing it does even as it is installing is to harvest all your phone numbers and immediately message everybody… This thing then starts to pull all your photos and put them into the cloud,” wrote Greg Miller, a Texas-based portrait studio owner, in a 2015 Facebook review.

Four years later, Miller was dismayed to discover that the app once known as EverRoll still had his photos, and that it was now a facial recognition company.

“No, I was not aware of that, and I don’t agree with it one bit,” Miller tells Fortune. “All of this being tracked is a real problem. Nothing is private anymore and that just scares the hell out of me.”

Aley, the Ever AI CEO, says the company doesn’t share identifying information about individuals in its database, and only uses the photos to train its software. He added the company is akin to a social media network from which people can opt out. Aley also denied that Ever AI had intended to become a facial recognition company from the get-go, saying the move away from the now-shuttered photo app was a business decision. Currently, Ever AI’s customers are using it for a range of activities, including corporate ID management, retail, telecommunications, and law enforcement.

Ever AI is not the only facial recognition company that once offered a consumer photo app. Another example is Orbeus, a San Francisco-based startup that was quietly acquired by Amazon in 2016 and once offered a popular picture organizer called PhotoTime.

“Nothing is private anymore and that just scares the hell out of me.” —Greg Miller, a portrait studio owner based in Texas

According to a longtime Orbeus employee, the startup’s A.I. technology and its large collection of photos with people in public settings made it an appealing acquisition target.

“Amazon was looking for that capability. They acquired everything, then shut down the app,” says the employee, who declined to be identified, citing non-disclosure agreements.

Today the PhotoTime app no longer exists, though Amazon continues to sell another Orbeus product known as Rekognition. The product is a type of facial recognition software used by law enforcement and other organizations.

Amazon declined to provide details about the extent to which Orbeus’s photo app was used to train the Rekognition software, saying only that it obtains data for its A.I. projects, including facial recognition, from a variety of sources. The company added that it does not use images from its customers’ Prime Photos service to train its algorithms.

Another company that uses a consumer photo app to train its facial recognition algorithm is Real Networks. The Seattle-based company, once known for its 1990s-era online video player, today specializes in software that can recognize children’s faces in schools. At the same time, it offers a smartphone app aimed at families called RealTimes, which one critic says has served as a pretext to obtain facial data.

“The app allows users to make video slideshows of their own photos. Imagine mom putting together a video slide show to send to grandma, and those images being used to train a dataset to use on young faces. It’s pretty horrible,” says Clare Garvie, a researcher at Georgetown University’s law school who published an influential report on facial recognition technology.

Real Networks confirmed the photo app helps improve its facial recognition tool, but added that it uses additional data sources for the purpose.

In all of these cases where companies used a photo app to harvest faces for training data, they didn’t ask for consumers’ explicit permission. Instead, the firms appear to have obtained legal consent through their terms of service agreements.

This is, however, more than what some other facial recognition companies have done. According to Patrick Grother, who runs the face competitions at NIST, it’s common for facial recognition companies to write programs that “scrape” pictures from websites like SmugMug or Tumblr. In these cases, there is not even a pretext of consent from those whose faces end up in training sets.

This “help yourself” approach was underscored by a recent NBC News report detailing how IBM siphoned more than one million faces from the photo sharing site Flickr as part of the company’s artificial intelligence research. (John Smith, who oversees AI technology for IBM’s research division, told NBC News that the company was committed to “protecting the privacy of individuals” and would work with those who sought removal from the dataset.)

All of this raises questions about what companies are doing to safeguard the facial data they collect, and whether governments should provide more oversight. The issues will only become more pressing as facial recognition spreads to more areas of society and powers the business of companies large and small.


From Shops to Schools

Facial recognition software is not new. Primitive versions of the technology have existed since the 1980s, when American mathematicians began defining faces as a series of numerical values and using probability models to find a match. Security personnel in Tampa, Fla., deployed it at the 2001 Super Bowl, and casinos have used it for years. But in the last few years, something changed.

“Facial recognition is undergoing something of a revolution,” says Grother of NIST, adding the change is most pronounced with fleeting or poor quality images. “The underlying technology has changed. The old tech has been replaced by a new generation of algorithms, and they’re remarkably effective.”

This revolution in facial recognition comes thanks to two factors that are transforming the field of artificial intelligence more broadly. The first is the emerging science of deep learning, a pattern recognition system that resembles the human brain. The second is an unprecedented glut of data that can be stored and parsed at low cost with the aid of cloud computing.
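Whether built on the old statistical models or on deep learning, these systems share a basic mechanic: a face is boiled down to a list of numbers, and two faces are declared a match when their numbers are close enough. The snippet below is a minimal sketch of that comparison step in Python; the 128-number templates, the function name, and the 0.6 threshold are illustrative assumptions, not any particular vendor’s implementation.

```python
# Minimal sketch of the matching step: a face becomes a vector of numbers
# (a "template" or "embedding"), and two faces match when the vectors are
# close enough. All names and values here are illustrative only.
import numpy as np

def is_same_person(template_a, template_b, threshold=0.6):
    """Compare two face templates by Euclidean distance."""
    distance = np.linalg.norm(template_a - template_b)
    return bool(distance < threshold)

# Hypothetical 128-number templates produced by some already-trained model.
alice_photo_1 = np.random.rand(128)
alice_photo_2 = alice_photo_1 + np.random.normal(0, 0.01, 128)  # same face, slight variation
stranger = np.random.rand(128)

print(is_same_person(alice_photo_1, alice_photo_2))  # True: the templates are nearly identical
print(is_same_person(alice_photo_1, stranger))       # False: the templates are far apart
```

Training on millions of diverse photos is what makes those numeric templates line up reliably for the same person across hats, shadows, and poses, which is why the training sets described above matter so much.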

The first companies to take full advantage of these new developments, unsurprisingly, were Google and Facebook. In 2014, the social network launched a program called DeepFace that could discern if two faces belonged to the same person with an accuracy rate of 97.25%—a rate equivalent to what humans scored on the same test. A year later, Google topped this with its FaceNet program, which obtained a 100% accuracy score, according to security firm Gemalto.

Today, those companies and other tech giants like Microsoft are leaders in facial recognition—in no small part because of their access to large databases of faces. A growing number of startups, though, are also posting high accuracy scores as they seek a niche in a growing market for face software.

In the U.S. alone, there are more than a dozen such startups, including Kairos and FaceFirst. Silicon Valley has been flocking to the sector, according to market researcher PitchBook, whose data shows dozens of investment deals in the last few years. The average total investment in the last three years is $78.7 million, according to PitchBook. This is not an eye-popping number by Silicon Valley standards, but it reflects a significant bet by venture capitalists that at least a few facial recognition startups will mushroom into major companies.

Business models for facial recognition companies are still emerging. Today, most revolve around licensing software to organizations. According to data from Crunchbase, annual revenue for startups like Ever AI and FaceFirst is relatively modest, ranging from $2 million to $8 million. Amazon and the other tech giants, meanwhile, have not disclosed how much of their revenue comes from licensing facial recognition.

For years, the most avid paying customers for facial recognition have been law enforcement agencies. More recently, though, a growing number of organizations, including Wal-Mart, are using the software to identify and learn more about the people who enter their physical premises.

This is certainly the case for customers of California-based FaceFirst, which sells facial recognition software to hundreds of retailers, including dollar stores and pharmacies. Its CEO, Trepp, says the bulk of his clients use the technology to screen for criminals coming into their stores but, increasingly, retailers are testing it for other purposes such as recognizing VIP customers or identifying employees.

The most avid paying customers for facial recognition software have been law enforcement agencies.

Amazon, meanwhile, appears to be casting a wide net in its efforts to find a business model for face recognition. In addition to selling to police departments, the retail giant is reportedly working with hotels to help them expedite check-in procedures.

“Companies from all over are coming to Amazon and saying, ‘This is what we’d like you to do.’ Then you figure out that’s your sweet spot. The interest is all over the place,” says the unnamed person who joined Amazon when the company acquired Orbeus, the facial recognition firm.

These efforts, in the case of Amazon, have not been without controversy. Last July, the ACLU tested the company’s software by running the faces of every member of Congress against a database of convicted felons. The test resulted in 28 false positives, a disproportionate number of which involved members of Congress who are people of color. In response, the ACLU called for a ban on the use of facial recognition technology by law enforcement. Meanwhile, Amazon’s own employees have pressed the company to justify the sale of the software to police departments and to U.S. Immigration and Customs Enforcement.

Some members of Congress, including Rep. Jerrold Nadler (D-N.Y.) and Sen. Ron Wyden (D-Ore.), have since asked the Government Accountability Office to investigate the use of facial recognition software. Corporate leaders are also uneasy about the technology’s applications. Among them: Microsoft president Brad Smith, who in December called for government regulation.

But even as concern mounts, use of facial recognition technology is expanding as companies find new and novel applications for which to sell it. These include Real Networks, the maker of the family photo app, which is offering its software for free to K-12 schools across the country. The company says hundreds of schools are now using it. In an interview with Wired magazine, CEO Rob Glaser says he began the initiative as a non-partisan solution to the debate over school safety and gun control. Currently, Real Networks’ website is touting its technology as a way for event hosts to “recognize every fan, customer, employee, or guest”—even if their face is covered.

Real Networks isn’t the only facial recognition company with products that focus on children. A Texas-based startup called Waldo is supplying the technology to hundreds of schools, as well as kids’ sports leagues and summer camps. In practice, this involves using Waldo’s software to scan images taken by video cameras or official photographers, then match children’s faces to a database of images provided by parents. Those parents who don’t wish to participate can opt out.

According to CEO Rodney Rice, schools take tens of thousands of photos every year and only a handful of them end up being seen in a yearbook. Facial recognition, he says, is an efficient way to distribute the remaining ones to those who would like to have them.

“Instead of buying popcorn or wrapping paper, you can get a photo stream to your kids’ grandparents,” says Rice, explaining that Waldo has a 50-50 revenue sharing arrangement with public schools. The service is now doing business in more than 30 U.S. states.
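As a rough illustration of how a service along these lines could work, the sketch below matches face templates extracted from event photos against a gallery of templates enrolled by parents, skipping any child whose parents opted out. The data structures, names, and threshold are assumptions for the sake of the example, not Waldo’s actual code.

```python
# Illustrative sketch (not Waldo's actual system): group event photos by child
# by matching each detected face against templates enrolled by parents.
import numpy as np
from collections import defaultdict

MATCH_THRESHOLD = 0.6  # illustrative distance cutoff

def closest_child(face_template, gallery, opted_out):
    """Return the best-matching child's name, or None if no close match."""
    best_name, best_distance = None, float("inf")
    for name, enrolled_template in gallery.items():
        if name in opted_out:
            continue  # respect opt-outs: never match these children
        distance = np.linalg.norm(face_template - enrolled_template)
        if distance < best_distance:
            best_name, best_distance = name, distance
    return best_name if best_distance < MATCH_THRESHOLD else None

def group_photos(photos, gallery, opted_out):
    """Map each child to the IDs of the photos their face appears in."""
    albums = defaultdict(list)
    for photo_id, face_templates in photos.items():
        for template in face_templates:
            name = closest_child(template, gallery, opted_out)
            if name is not None:
                albums[name].append(photo_id)
    return albums
```

The opt-out check here is the simplest possible version; in a real deployment it would also have to govern enrollment, retention, and deletion of the templates themselves, not just the matching step.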

The growth of Waldo and FaceFirst shows how businesses are helping to normalize facial recognition, which not long ago was the stuff of science fiction. And as the technology spreads to more sectors of the American economy, more companies will collect copies of our faces—either to train their algorithms or to recognize customers and criminals—even as the potential for mistakes or misuse grows.


The Future of Your Face

In a 2017 episode of the techno-dystopian TV series Black Mirror, an anxious mother frets over images of a ne’er-do-well carrying on with her daughter. To identify him, she uploads an image of his face to a consumer facial identification service. The software promptly displays his name and place of work, and she goes to confront him.

Such a scenario, once far-fetched, feels close at hand today. While fears over facial recognition have focused on its use by governments, its deployment by private companies or even individuals—Black Mirror-style—poses obvious privacy risks.

As more companies start to sell facial recognition, and as our faces end up in more databases, the software could catch on with voyeurs and stalkers. Merchants and landlords could also use it to identify those they deem to be undesirable, and quietly withhold housing or services.

“Anybody with a video camera and a place with a lot of foot traffic can start to compile a database of images, and then use this analytic software to see if there’s a match with what you’ve compiled,” says Jay Stanley, a policy analyst at the ACLU.

There’s also the risk of hacking. Andrei Barysevich of Gemini Advisors, a cybersecurity firm, says he has seen profiles stolen from India’s national biometrics database for sale on “dark web” Internet sites. He has yet to see databases of American faces for sale, but added, “It’s just a matter of time.” If such a thing were to occur, a stolen collection of customer faces from a hotel or retailer could help criminals carry out fraud or identity theft.

As the technology spreads with little government oversight, the best hope to limit its misuse may lie with the software makers themselves. In interviews with Fortune, the CEOs of facial recognition startups all stated they were deeply attuned to privacy perils. A number, including the CEO of FaceFirst, cited the spread of face surveillance systems in China as a cautionary tale.

The CEOs also offered two ways the industry can limit misuse of their technology. The first is by working closely with the purchasers of their software to ensure clients don’t deploy it willy-nilly. Aley of Ever AI, for instance, says his company follows a higher standard than Amazon, which he claims furnishes its Rekognition tool to nearly all comers.

In response to a question about how it polices misuse, Amazon provided a previously published statement by Matt Wood, who oversees artificial intelligence services at Amazon Web Services, pointing to a company policy prohibiting activity that is illegal or harmful to others.

The other potential privacy safeguard cited by facial recognition executives is the use of technical measures to ensure the faces identified in their databases can’t be hacked.

Rice, the CEO of Waldo, says faces are stored in the form of alphanumeric hashes. This means, he says, that even in the event of a data breach, privacy would not be compromised, because a hacker would not be able to use the hashes to reconstruct the faces and their identities. The point was echoed by others.
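The property Rice is invoking is that of a one-way function: the stored string reveals nothing about the data it was derived from. The snippet below illustrates that general idea with Python’s standard hashlib; it is only a sketch of the concept, since face-matching systems typically store numeric templates rather than cryptographic digests, and Waldo has not published the details of its scheme.

```python
# Illustration of the one-way property Rice describes: store only an
# alphanumeric digest of a face record, never the photo or raw template.
# This is a conceptual sketch, not Waldo's actual storage scheme.
import hashlib
import os

def store_face_record(template_bytes):
    """Return a salted SHA-256 digest of a face template, plus the salt used."""
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + template_bytes).hexdigest()
    return digest, salt  # only these would be kept; the template is discarded

digest, salt = store_face_record(b"example-face-template-bytes")
print(digest)  # a 64-character hex string that cannot be run backwards into a face
```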

Rice is also wary that lawmakers could do more harm than good by making rules for the use of facial technology. “Throwing the baby out with the bathwater, and creating a bunch of crazy regulations, that would be a travesty,” he says.

Meanwhile, some companies that make facial recognition software are using new techniques that may reduce the need for large collections of faces to train their algorithms. This is the case with Kairos, a Miami-based facial recognition startup that names a major hotel chain among its clients. According to chief security officer Stephen Moore, Kairos is creating “synthetic” facial data to replicate a wide variety of facial expressions and lighting conditions. He says these “artificial faces” mean the company can rely on smaller sets of real-world faces to build its products.
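Kairos hasn’t detailed how it generates those artificial faces, and modern approaches often involve generative models; the simplest form of the idea, though, is programmatically varying existing images. The sketch below shows that bare-bones version in Python with numpy, producing brightness-shifted and mirrored variants of a single face crop. It is an illustration of the general technique, not Kairos’s pipeline.

```python
# Bare-bones synthetic variation (not Kairos's method): derive several
# training variants from one face image by changing lighting and mirroring it.
import numpy as np

def augment_face(image):
    """Return simple synthetic variants of an H x W x 3 face image (uint8)."""
    variants = []
    for brightness in (0.6, 1.0, 1.4):       # simulate darker and brighter lighting
        for flipped in (False, True):         # simulate a mirrored pose
            variant = np.clip(image * brightness, 0, 255).astype(np.uint8)
            if flipped:
                variant = variant[:, ::-1, :]  # horizontal flip
            variants.append(variant)
    return variants

# A random stand-in for a real 64x64 face crop.
fake_face = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
print(len(augment_face(fake_face)))  # 6 synthetic variants from one image
```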

All of these measures—oversight of facial recognition customers, sound data security, and synthetic training tools—could allay some of the privacy concerns related to companies’ use of our faces. At the same time, Trepp of FaceFirst believes anxiety over the technology will diminish as we become more familiar with it. He even argues that facial recognition scenes in the 2002 sci-fi movie Minority Report will start to feel normal.

“Millennials are much more willing to hand over their face. That [Minority Report] world is coming,” he says. “Done properly, I think people are going to enjoy it and it’s going to be a positive experience. It won’t feel creepy.”

“Millennials are much more willing to hand over their face.” —Peter Trepp, CEO, FaceFirst

Others, including the ACLU, are less sanguine. Still, despite the growing controversy around the technology, there is, for now, almost nothing in the way of laws to limit the use of your face. The only exception comes from a trio of states—Illinois, Texas, and Washington—that require a degree of consent before the use of someone’s face. These laws have not really been tested, with one exception: Illinois, where consumers can bring lawsuits to enforce the right.

Currently, the Illinois law is the subject of a high-profile appeals court case involving Facebook, which claims that the law’s restrictions on collecting faces do not extend to digital scans. In 2017, Facebook and Google ran an unsuccessful lobbying campaign to persuade Illinois lawmakers to dilute the law. In late January, the law’s supporters got a boost when the Illinois Supreme Court ruled that consumers do not have to show real-world harm in order to sue over the unauthorized use of their biometrics.

Other states are considering biometrics laws of their own. At the federal level, lawmakers have so far devoted little attention to the matter. This may be changing, however: this month, Sens. Brian Schatz (D-Hawaii) and Roy Blunt (R-Mo.) introduced a bill that would require companies to get permission before using facial recognition in public places or sharing face data with third parties.

Garvie, the Georgetown researcher, is in favor of laws to oversee the technology. But she says it has been difficult for lawmakers to keep up.

“One challenge of facial recognition is it’s been incredibly quick on the uptake because of legacy databases. There are so many instances where our faces were captured,” she says. “Unlike fingerprints, where there have long been rules on how and when they’re collected, there are no rules for face technology.”

For more on artificial intelligence, subscribe to Fortune’s Eye on A.I. newsletter.
