WHO DOES MY FACE BELONG TO?
In November 2019, Kashmir Hill, a new reporter at The New York Times, received a tip about a company called Clearview AI, which was working on facial recognition technology. The tip revealed that Clearview AI had developed a powerful app by gathering billions of photos from social media and other websites. This app could recognize a person’s face and find all their photos online. The tip also mentioned that Clearview AI was selling this technology to law enforcement while trying to keep it secret.
Since the 1960s, there have been efforts to make facial recognition technology work, but results were often disappointing. However, Clearview claimed to be different with a “98.6% accuracy rate” and a huge photo database that law enforcement had never had access to before.
Hill saw the potential risks of facial recognition technology and began looking into its implications for privacy. Clearview AI's claims pointed to real gains in accuracy and effectiveness, but they also raised serious concerns about privacy violations and ethics. Her research became a crucial starting point for understanding the impact of facial recognition technology on society and its future use.
During her investigation, Hill discovered just how secretive Clearview was, and how closely it monitored use of its own tool: the company was tracking, and even blocking, searches for photos of reporters like Hill. Clearview could see whom law enforcement was searching for and control the results they saw, a striking demonstration of the power a secretive company could wield.
The book Your Face Belongs to Us explores this topic and examines the development of facial recognition technology and its societal impacts. Kashmir Hill has investigated companies like Clearview AI and the ethical and privacy issues associated with this technology.
The book provides a broad view, covering the history of facial recognition technology, its current uses, and its potential future effects. Hill discusses the impacts on security, privacy, and human rights, encouraging readers to consider the challenges and opportunities that come with this technology.
Hill, K. (2023). *Your Face Belongs to Us: The Secretive Startup Dismantling Your Privacy*. Simon & Schuster, 347 pp.
Hoan Ton-That, an Australian tech enthusiast of Vietnamese descent, moved to San Francisco at the age of nineteen, drawn by the allure of Silicon Valley. He built a few Facebook test apps that reached millions of users and generated revenue through ads. However, some of his ventures strayed well beyond ethical boundaries. His ViddyHo app, for example, tricked users into sending spam messages to their friends, sparking significant backlash and damaging his online reputation, eventually forcing him into a low-profile role at a startup.
By 2015, Ton-That relocated to New York City, diving into various tech projects, including facial recognition applications. In the summer of 2016, at the Republican National Convention in Cleveland, Ohio, he met Charles Carlisle Johnson, a figure with a rather notorious reputation. The two bonded over their shared disdain for political correctness and discussed the potential of technology to uncover human secrets.
This brainstorming session at the convention marked the beginning of Ton-That and Johnson’s journey into developing powerful facial recognition technology. If you are curious about Hoan Ton-That, he is the Chairman of Clearview AI. Starting as a young, inquisitive programmer, he has evolved into a pivotal figure in developing technology that challenges societal norms about privacy and anonymity. His name and that of his company will come up frequently as we proceed.
The History of Facial Recognition
Over two thousand years ago, Aristotle claimed that humans possess a true face, unlike any other creature on Earth. While animals have basic components (eyes, mouth, nose), he argued that a true face reflects personality and character, expressing the depths and nuances of the human soul. According to Aristotle, men with large foreheads tended to be lazy, those with straight eyebrows were seen as gentle, while those with curved eyebrows were likely to be irritable, humorous, or jealous. But don’t you believe that for a second!
This practice of reading faces evolved into what became known as “physiognomy,” capturing the interest of serious thinkers. During the Victorian era, English polymath Francis Galton expanded on these ideas and asserted the hereditary nature of physical and mental traits. A cousin of Darwin, Galton tried to apply Darwin’s theory of evolution to humans, aiming to identify common criminal traits by overlaying criminals’ photographs. Today, we have the science of epigenetics (*)!
Galton posited that traits such as intelligence and character were inherited through families. To validate his theory, he analyzed the family trees of influential figures. Galton argued for preserving society’s best qualities, coining the term “eugenics.” According to this theory, the reproduction of criminals and individuals with undesirable traits should be restricted. We all know which dark regimes later adopted Galton’s ideas.
These methods were eventually displaced by fingerprinting, which marked a significant turning point in human tracking and criminal identification worldwide. If you recall, fingerprints were once taken only from those suspected of a crime, but now…
Efforts to develop facial recognition technology began in earnest in the 1960s, led by scientists such as Manuel Blum and Woody Bledsoe, who worked on algorithms to identify faces. But the computers of the time lacked the capabilities such complex tasks required, and these early attempts often ended in failure. The algorithms struggled with low-quality images and with faces captured from varying angles, and their effectiveness depended on factors such as gender, age, and race.
By the 1980s, as computers grew more powerful, facial recognition technology began to improve. Throughout this time, more advanced algorithms were created and tested on larger datasets. Despite these advancements, errors remained frequent, and the technology was not yet ready for practical use.
The 1990s saw breakthroughs in facial recognition technology. Lawrence Sirovich and Michael Kirby showed that faces could be represented compactly as weighted combinations of a small set of component images, an approach later popularized as "eigenfaces" that markedly improved the accuracy of recognition algorithms. The technology became more reliable during this period, though it was still far from perfect; these efforts laid the groundwork for what followed, even if broad adoption remained out of reach.
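The Sirovich–Kirby idea can be sketched in a few lines: treat each face image as a long vector of pixel values, extract the principal components of a gallery of faces (the "eigenfaces"), and compare faces by their coordinates in that low-dimensional space. The minimal sketch below uses random vectors as stand-ins for real face images, so the dataset, dimensions, and component count are illustrative assumptions, not any real system's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a face gallery: 20 "identities", each a 64-dim vector.
# (A real system would use flattened grayscale face images instead.)
gallery = rng.normal(size=(20, 64))

# Eigenface approach: principal component analysis on the gallery.
mean_face = gallery.mean(axis=0)
centered = gallery - mean_face
# The right singular vectors of the centered data are the principal
# components -- the "eigenfaces".
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt[:8]  # keep only the top 8 components

def project(face):
    """Represent a face by its coordinates in eigenface space."""
    return eigenfaces @ (face - mean_face)

gallery_codes = np.array([project(f) for f in gallery])

def identify(probe):
    """Return the index of the gallery identity closest to the probe."""
    dists = np.linalg.norm(gallery_codes - project(probe), axis=1)
    return int(np.argmin(dists))

# A slightly noisy version of face 7 should still match identity 7.
probe = gallery[7] + 0.1 * rng.normal(size=64)
print(identify(probe))
```

The compression is the point: each 64-number face is reduced to 8 coordinates, yet nearest-neighbor matching in that reduced space still recovers the right identity, which is why the approach made recognition tractable on 1990s hardware.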
As we entered the 21st century, facial recognition was still applied only in limited settings. Companies and governments were exploring its potential, but performance continued to fall short of expectations, so adoption stayed narrow. In the 2000s, however, high-performance computers made it possible to train facial recognition algorithms on large datasets, a real turning point for artificial intelligence. The accuracy and reliability of these algorithms improved, making broader and more effective use of the technology feasible.
As facial recognition technology advanced, concerning suggestions emerged, highlighting its potential for misuse. For instance, certain companies planned to use facial recognition in advertising, personalizing ads based on people’s facial expressions and emotions. Such applications posed significant privacy risks and raised serious ethical concerns.
Around the same time, some governments also considered using facial recognition technology for security purposes. Interest in using the technology to fight terrorism and capture criminals was strong, but the potential for misidentifications and discrimination raised serious concerns.
Some of the most troubling proposals involved secretly scanning people’s faces to identify them without their permission. Such practices could lead to unauthorized exposure of individuals’ identities, representing a significant breach of privacy. It became evident that, with the advancement and increasing prevalence of the technology, privacy breaches would inevitably occur.
It was not long after these privacy debates that the anticipated scenario unfolded. In January 2001, at the Super Bowl in Tampa, Florida, facial recognition was applied on a massive scale in a public setting for the first time, an event that became infamously known as the "Snooper Bowl." The faces of all attendees were secretly scanned, and the data was transmitted to law enforcement in the hope of identifying criminals and wanted individuals. Many people were unsettled by the unauthorized scanning of their faces, and concerns about the potential misuse of the technology escalated. The experiment demonstrated both the promise and the limitations of facial recognition: it managed to identify some criminals, but it also mistakenly labeled many innocent people as suspects. It likewise highlighted the ethical and legal challenges that widespread adoption could bring.
The “Snooper Bowl” incident marked a pivotal moment in the widespread use of facial recognition technology, offering significant lessons for future applications.
In 2006, James Ferg-Cadima, a legislative consultant for the Chicago branch of the ACLU (American Civil Liberties Union), noticed the “Pay By Touch” service while shopping at a Jewel-Osco market in Chicago. Pay By Touch was a company founded in 2002 by John Rogers, attracting hundreds of millions of dollars in investment. The cashiers were promoting the convenience of paying by fingerprint.
Concerned about the legal implications of collecting and using biometric data, James began to investigate. He was particularly worried about the security of fingerprints and other biometric information because once these data were compromised, they could not be retrieved or changed. Rogers’ past legal troubles and the company’s financial difficulties alarmed James further, and when the company went bankrupt, the fingerprints of Illinois residents were classified as assets in a bankruptcy proceeding in a different state. This situation raised serious concerns about the potential exploitation of biometric data.
Collaborating with the ACLU’s technology team, James drafted a bill including a definition of biometric data. The proposed law required individuals’ consent for the collection, use, or sale of biometric data and mandated that they be informed about how their data would be stored and eventually destroyed. The bill quickly gained support in the Illinois state legislature and was passed in 2008. This legislation became a model for similar laws, and the collapse of Pay By Touch served as a crucial lesson in the commercial use of biometric data, leading to stricter regulations to protect such information.
The years 2009–2011 were a turning point for facial recognition technology. In 2009, Google launched a groundbreaking search feature called "Goggles," which let users take photos and perform visual searches. The engineers behind Goggles made a "cute" YouTube video explaining the technology, and Hartmut Neven, one of its key figures, demonstrated what it could do.
According to Neven, Google was developing a product that could fundamentally alter the idea of privacy in public spaces. The concept aimed to enable users to identify strangers merely by photographing their faces. In March 2011, CNN reported on this with a chilling headline, but a Google spokesperson quickly downplayed the concerns, stating that this was not something the company was actively pursuing. It seemed Neven had made statements not entirely in line with Google’s corporate communications strategy.
Years ago, when Google began capturing images for Street View with camera-equipped cars, it did not expect some people to be horrified by the idea. A couple from Pennsylvania, the Borings, sued the company for invasion of privacy and trespassing. In response to backlash from European privacy regulators, Google introduced a blurring option for those who wanted to keep their homes off the virtual map, an option particularly popular in Germany. However, pro-technology activists took advantage of it: they identified the blurred houses on Google Maps, found them in real life, threw eggs at them, and left notes in their mailboxes. It seemed that remaining hidden in this new landscape would no longer be possible.
The Allure of Controversial Technology
In 2018, Clearview AI sought further investment to fully realize the potential of its facial recognition technology. The company planned to market its technology to police departments and security forces. For private companies, it offered three products: background checks, surveillance systems for identifying criminals and enhancing security, and entry authentication systems. While Clearview AI had clients in banking and hospitality, it had not yet secured a large customer base. Its investors were confident in the potential for significant commercial success and supported the company’s growth. You may recall my mention of facial recognition-powered elevators in Singapore in a previous article. Consider the convenience and affordability this technology has achieved—it is already part of our everyday lives, from smartphones to various apps. In essence, our biometrics are already in their hands.
By December 2011, the Federal Trade Commission (FTC) had taken notice. Established in 1914 to protect American consumers from deceptive, unfair, and anticompetitive business practices, the FTC assumed a new mission in the Internet age: safeguarding personal data. Protecting consumers in the Information Age meant not only guarding their finances but also their increasingly valuable personal data. The FTC warned that facial recognition technology could end anonymity, making individuals identifiable in public spaces. In a report, the agency urged companies to be transparent, offer consumers choices, and incorporate privacy safeguards into product development. They also called on Congress to pass legislation addressing these concerns.
Clearview AI quickly gained popularity among police departments like the NYPD. However, the application could be used for purposes beyond catching criminals, so the NYPD asked Clearview AI to modify some of its features. Despite the controversy, Clearview AI proved to be a crucial tool in solving serious crimes such as child exploitation and human trafficking. The company gained traction by offering free trial accounts to police departments, leading to widespread adoption and numerous successes. Nonetheless, its far-right affiliations and the secretive nature of its use stirred anxiety among various groups, and the emerging public debate over the legality and ethics of Clearview AI became a key factor in shaping the company's future.
Meanwhile, comedian-turned-politician Senator Al Franken, along with Alvaro Bedoya, began investigating how tech companies collected and sold user data. In 2011, Franken summoned executives from Apple and Google to a Senate hearing, questioning how smartphones gathered and shared users’ location data. The hearing attracted significant media attention. Franken, keen to delve deeper into privacy issues, sought Bedoya’s input for a new session. Franken found facial recognition technology particularly unsettling, prompting further investigation into the matter.
Their attention was drawn to Facebook’s 2010 introduction of automated “tag suggestions,” which utilized facial recognition technology to help users tag photos more easily. This feature sparked privacy concerns. In 2012, Franken organized a congressional hearing on facial recognition technology. Initially reluctant, Facebook eventually agreed to participate. During the hearing, Facebook’s representative, Rob Sherman, assured that the technology was used solely for identifying friends and wouldn’t be shared with third parties. However, Bedoya voiced concerns that Facebook’s practice of making profile photos public could enable the creation of a massive facial recognition database. Facebook dismissed these worries, asserting that such data collection was not feasible. Franken and Bedoya pushed for federal legislation on facial recognition technology and location data. However, the 2013 revelations by Edward Snowden about the NSA’s extensive data collection program shifted public attention to government privacy violations. It was a classic case of “the thief being caught at home while being sought in the market,” but the government claimed national security as a valid justification. As a result of these events, Facebook now exercises greater caution in using facial recognition technology. Additionally, the 2008 Illinois Biometric Information Privacy Act (BIPA) has made tech companies more wary of deploying such technologies.
In 2019, Clearview AI sold access to its vast facial recognition database to law enforcement agencies. Researcher Martinez emphasized the dangers this technology posed to privacy and the risks of its widespread adoption. In an interview with The New York Times, Clearview AI defended its actions, claiming that its technology served security purposes, such as catching child predators. However, Martinez noted that Clearview's algorithm had not been audited by any third party; the company's internal tests had allegedly yielded 100% accuracy, a claim impossible to verify independently. Clearview AI had ventured into territory that other tech companies steered clear of for ethical reasons.
Later on, the New York Times published a revealing article titled “The Secretive Company That Might End Privacy as We Know It,” exposing Clearview’s massive database containing billions of facial images and detailing the hundreds of law enforcement agencies and private companies using this technology. This exposé brought Clearview AI’s operations into the public eye.
Faced with legal scrutiny from around the world, Clearview AI had to assemble a large team of lawyers to deal with the issues. The company’s practices were challenged in numerous countries, leading to legal reviews worldwide. As a result, Clearview was declared illegal in many countries and faced fines totaling approximately $70 million. Clearview planned to appeal these rulings. The company’s founder, Hoan Ton-That, defended Clearview by comparing it to Google, asserting that both were merely search engines designed to make publicly available information more accessible.
These developments highlighted the potential dangers of technology misuse. The unauthorized use of individuals’ biometric data sparked concerns about the potential for being tracked in various ways in the future. Although Ton-That claimed Clearview intended to work exclusively with law enforcement, the company’s future actions remained uncertain. Moreover, with the technology now accessible, even amateurs could create similar databases, leading to equally concerning uses.
Ultimately, the significant concerns about the threats and privacy violations posed by Clearview AI’s facial recognition technology led to efforts to halt the company’s operations. However, a larger threat soon emerged, diverting public attention— the pandemic!
Future Shock
By the end of January 2020, nearly eight thousand COVID-19 cases had been recorded globally, prompting the WHO to declare a global health emergency; by March, it had declared a pandemic.
Clearview AI saw an opportunity in this crisis. In March 2020, the Wall Street Journal reported that the company was in “discussions” with several state agencies to use its technology to track COVID-19 patients. It was also reported that South Korea, China, and Russia were using facial recognition technology to control the spread of the virus. This raised the question of whether U.S. policymakers were considering similar measures, and companies with facial recognition technology quickly adapted their software to identify masked individuals.
Critics, including Yuval Noah Harari, voiced concerns that such technologies, even if used for health purposes, could ultimately infringe on individual freedoms. Harari warned of the dangers of biometric data collection, which could be used to manipulate people’s behavior and emotions. He argued that a choice must be made between totalitarian surveillance and empowering citizens; the pandemic, he suggested, should be an opportunity to foster trust in scientific data and health experts, adopting a more democratic and ethical approach.
As technology rapidly advanced during this period, many individuals and organizations developed new strategies to keep pace with these changes. Facial recognition technology began to be applied across various sectors—from workplaces and schools to airports and shopping centers. The security and commerce industries, in particular, capitalized on the opportunities presented by this technology. In security, facial recognition systems were employed to identify criminals and prevent security breaches. In commerce, companies invested in this technology to analyze customer shopping habits and offer personalized services.
During this time, many organizations and individuals assessed the potential benefits and risks of facial recognition technology and formulated their strategies accordingly. Privacy advocates warned of the privacy violations that this technology could entail, while some businesses and governments sought ways to use the technology more securely and ethically.
Yet, cameras around the world were now constantly collecting data, with that data being analyzed by artificial intelligence. Was Clearview merely a scapegoat?
Governments could track protesters using facial recognition cameras; and since governments are themselves made up of individual people, such systems are also vulnerable to abuse. How would our constitutional rights be protected?
Clearview faced a series of lawsuits, and being sued by the ACLU placed significant pressure on the company. Ultimately, the court suggested that the ACLU might prevail in the case, prompting Clearview to agree to a settlement. The company agreed not to sell its product to private individuals or companies. Interesting, isn’t it, how the U.S. legal system operates?
In the UK, following the lifting of COVID-19 restrictions, the London Metropolitan Police conducted an operation at Oxford Circus using live facial recognition, working from a watchlist of over 9,000 individuals. Meanwhile, in Russia, the facial recognition app FindFace could identify people through the social network VKontakte (VK). Its public availability led to numerous cases of abuse, such as users exposing sex workers, and in 2018 NtechLab shut down the consumer app to focus solely on selling its algorithm to governments and corporations.
China, prioritizing security, implemented widespread surveillance technology. Facial recognition cameras were used not only to identify criminals but also to detect so-called “bad citizens” believed to be disrupting public order. It was also suggested that China’s surveillance network was used to instill Chinese consciousness among minorities.
In my view, examining the effects and reactions to the use of facial recognition technology across different regions globally—and discussing the balance between individual privacy rights and security priorities—is crucial. Each society needs to determine its priorities. But as soon as I consider this, I am reminded that the global mobile workforce and tourism economy impose significant constraints.
Meanwhile, some tech enthusiasts have developed various tools to block facial recognition technology. In Michigan, an eyewear maker named Scott Urban has produced reflective frames that blind facial recognition cameras. At NYU, a student named Adam Harvey invented a facial recognition camouflage called CV Dazzle. However, such protections are temporary and can be overcome, as facial recognition still functioned even when masks were worn during the pandemic.
Facebook has announced its decision to shut down its facial recognition system, but the company has not eliminated the underlying algorithm. It has left open the possibility of using this technology in the future.
Clearview’s database continues to grow, adding 75 million new photos daily. The company is working on AI-powered features such as clarifying blurred faces and recognizing masked individuals. According to the author, Ton-That will continue to fight critics, deal with lawsuits, and challenge governments as he strives to sell and normalize Clearview’s facial recognition technology. Ton-That describes the current resistance as “future shock,” believing the world will eventually adapt to this technology.
Here is a link to a previous post I wrote about audio espionage (https://muratulker.com/y/ses-espiyonaji-istihbarat-endustrisi). Take a look and guess what we might encounter in the next step of technological advancement.
—
(*) Epigenetics is the field of biology that studies heritable changes in gene expression that occur without alterations to the underlying DNA sequence. In other words, it examines hereditary but non-genetic phenotypic variation.
Note: This article is released without copyright restrictions and may be quoted freely, provided the author is cited.