
Woody Bledsoe was sitting in a wheelchair in his open garage, waiting. To anyone who had seen him even a few months earlier—anyone accustomed to greeting him on Sundays at the local Mormon church, or to spotting him around town on his jogs—the 74-year-old would have been all but unrecognizable. The healthy round cheeks he had maintained for much of his life were sunken. The degenerative disease ALS had taken away his ability to speak and walk, leaving him barely able to scratch out short messages on a portable whiteboard. But Woody’s mind was still sharp. When his son Lance arrived at the house in Austin, Texas, that morning in early 1995, Woody immediately began to issue instructions in dry-erase ink.

He told Lance to fetch a trash can from the backyard—one of the old metal kinds that Oscar the Grouch lives in. Lance grabbed one and set it down near his father. Then Woody sent him into the house for matches and lighter fluid. When Lance got back, Woody motioned to two large file cabinets inside the garage.

They’d been around ever since Lance could remember. Now in his late thirties, Lance was pretty sure they hadn’t been opened since he was a kid. And he knew they weren’t regular file cabinets. They were the same kind he’d seen when he worked on sonar equipment for US nuclear submarines—fireproof and very heavy, with a strong combination lock on each drawer. His father slowly began writing numbers on the whiteboard, and to Lance’s astonishment, the combination worked. “As I opened the first drawer,” he tells me almost 25 years later, “I felt like Indiana Jones.”

A thick stack of old, rotting documents lay inside. Lance began removing them and placing them in his father’s hands. Woody looked over the piles of paper two inches at a time, then had his son toss them into the fire he’d started in the burn barrel. Some, Lance noticed, were marked “Classified” or “Eyes only.” The flames kept building until both cabinets were empty. Woody insisted on sitting in the garage until all that remained was ash.

Lance could only guess at what he’d helped to destroy. For nearly three decades, his father had been a professor at the University of Texas at Austin, working to advance the fields of automated reasoning and artificial intelligence. Lance had always known him to be a wide-eyed scientific optimist, the sort of man who, as far back as the late 1950s, dreamed of building a computer endowed with all the capabilities of a human—a machine that could prove complex mathematical theorems, engage in conversation, and play a decent game of Ping-Pong.

But early in his career, Woody had been consumed with an attempt to give machines one particular, relatively unsung, but dangerously powerful human capacity: the ability to recognize faces. Lance knew that his father’s work in this area—the earliest research on facial-recognition technology—had attracted the interest of the US government’s most secretive agencies. Woody’s chief funders, in fact, seem to have been front companies for the CIA. Had Lance just incinerated the evidence of Washington’s first efforts to identify individual people on a mass, automated scale?


Today, facial recognition has become a security feature of choice for phones, laptops, passports, and payment apps. It promises to revolutionize the business of targeted advertising and speed the diagnosis of certain illnesses. It makes tagging friends on Instagram a breeze. Yet it is also, increasingly, a tool of state oppression and corporate surveillance. In China, the government uses facial recognition to identify and track members of the Uighur ethnic minority, hundreds of thousands of whom have been interned in “reeducation camps.” In the US, according to The Washington Post, Immigration and Customs Enforcement and the FBI have deployed the technology as a digital dragnet, searching for suspects among millions of faces in state driver’s license databases, sometimes without first seeking a court order. Last year, an investigation by the Financial Times revealed that researchers at Microsoft and Stanford University had amassed, and then publicly shared, huge data sets of facial imagery without subjects’ knowledge or consent. (Stanford’s was called Brainwash, after the defunct café in which the footage was captured.) Both data sets were taken down, but not before researchers at tech startups and one of China’s military academies had a chance to mine them.

Woody’s facial-recognition research in the 1960s prefigured all these technological breakthroughs and their queasy ethical implications. And yet his early, foundational work on the subject is almost entirely unknown. Much of it was never made public.

Fortunately, whatever Woody’s intentions may have been that day in 1995, the bulk of his research and correspondence appears to have survived the blaze in his garage. Thousands of pages of his papers—39 boxes’ worth—reside at the Briscoe Center for American History at the University of Texas. Those boxes contain, among other things, dozens of photographs of people’s faces, some of them marked up with strange mathematical notations—as if their human subjects were afflicted with some kind of geometrical skin disease. In those portraits, you can discern the origin story of a technology that would only grow more fraught, more powerful, and more ubiquitous in the decades to come.

An image of Woody Bledsoe from a 1965 study. The computer failed to recognize that two photos of him, from 1945 and 1965, showed the same person.

Photograph: Dan Winters


Woodrow Wilson Bledsoe—always Woody to everyone he knew—could not remember a time when he did not have to work. He was born in 1921 in the town of Maysville, Oklahoma, and spent much of his childhood helping his father, a sharecropper, keep the family afloat. There were 12 Bledsoe kids in all. Woody, the 10th, spent long days weeding corn, gathering wood, picking cotton, and feeding chickens. His mother, a former schoolteacher, recognized his intelligence early on. In an unpublished essay from 1976, Woody described her as an encouraging presence—even if her encouragement sometimes came from the business end of a peach-tree switch.


When Woody was 12 his father died, plunging the family even deeper into poverty in the middle of the Great Depression. Woody took on work at a chicken ranch while he finished high school. Then he moved to the city of Norman and began attending classes at the University of Oklahoma, only to quit after three months to join the Army on the eve of World War II.

Showing an aptitude for math, Woody was put in charge of a payroll office at Fort Leonard Wood in Missouri, where wave after wave of US soldiers were being trained for combat. (“Our group handled all black troops,” wrote the Oklahoman, “which was a new experience for me.”) Then on June 7, 1944, the day after D-Day, Woody was finally deployed to Europe, where he earned a Bronze Star for devising a way to launch large naval vessels—built for beach landings—into the Rhine.

Having landed in the European theater just as Allied troops were accelerating to victory, Woody seemed to have an unusually positive experience of war. “These were exciting times,” he wrote. “Each day is equivalent to a month of ordinary living. I can see why men get enamored with war. As long as you are winning and don’t sustain many casualties, everything is fine.” He spent the following summer in liberated Paris, his mind and his experience of the world expanding wildly in an atmosphere of sometimes euphoric patriotism. “The most sensational news I ever heard was that we had exploded an atomic bomb,” Woody wrote. “We were glad that such a weapon was in the hands of Americans and not our enemies.”

Woody couldn’t wait to get back to school once the war ended. He majored in mathematics at the University of Utah and finished in two and a half years, then went off to Berkeley for his PhD. After grad school, he got a job at the Sandia Corporation in New Mexico, working on government-funded nuclear weapons research alongside such luminaries as Stanislaw Ulam, one of the inventors of the hydrogen bomb. In 1956 Woody flew to the Marshall Islands to observe weapons tests over Enewetak Atoll, parts of which to this day suffer worse radioactive contamination than Chernobyl or Fukushima. “It was satisfying to me to be helping my own dear country remain the strongest in the world,” he wrote.

Sandia also offered Woody his first steps into the world of computing, which would consume him for the rest of his career. At first, his efforts at writing code were tied directly to the grim calculations of nuclear weapons research. One early effort—“Program for Computing Probabilities of Fallout From a Large-Scale Thermonuclear Attack”—took into account explosive yield, burst points, time of detonation, mean wind velocity, and the like to predict where the fallout would land in the case of an attack.

But as his romance with computing grew, Woody took an interest in automated pattern recognition, especially machine reading—the process of teaching a computer to recognize unlabeled images of written characters. He teamed up with his friend and colleague Iben Browning, a polymath inventor, aeronautical engineer, and biophysicist, and together they created what would become known as the n-tuple method. They started by projecting a printed character—the letter Q, say—onto a rectangular grid of cells, resembling a sheet of graph paper. Then each cell was assigned a binary number according to whether it contained part of the character: Empty got a 0, populated got a 1. Then the cells were randomly grouped into ordered pairs, like sets of coordinates. (The groupings could, in theory, include any number of cells, hence the name n-tuple.) With a few further mathematical manipulations, the computer was able to assign the character’s grid a unique score. When the computer encountered a new character, it simply compared that character’s grid with others in its database until it found the closest match.
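In modern terms, the n-tuple method amounts to reading random groups of grid cells as small numbers and comparing the resulting signatures. The Python sketch below is a loose reconstruction under that reading; the grid size, tuple count, and similarity measure are illustrative assumptions, not the parameters Bledsoe and Browning actually used.

```python
import random

GRID_W, GRID_H = 10, 10          # resolution of the character grid
N_TUPLES, TUPLE_SIZE = 50, 2     # 50 random pairs of cells (n = 2)

# Fix the random cell groupings once; every character is scored
# against the same groupings.
random.seed(0)
CELLS = GRID_W * GRID_H
TUPLES = [tuple(random.sample(range(CELLS), TUPLE_SIZE)) for _ in range(N_TUPLES)]

def signature(grid):
    """Turn a flattened 0/1 grid into one small number per tuple:
    each tuple's cells are read as bits, so a pair yields 0..3."""
    sig = []
    for cells in TUPLES:
        value = 0
        for c in cells:
            value = (value << 1) | grid[c]
        sig.append(value)
    return sig

def similarity(sig_a, sig_b):
    """Count how many tuple values agree between two signatures."""
    return sum(a == b for a, b in zip(sig_a, sig_b))

def classify(grid, templates):
    """Label of the stored template whose signature best matches `grid`.
    `templates` maps a label (e.g. 'Q') to a stored signature."""
    sig = signature(grid)
    return max(templates, key=lambda label: similarity(sig, templates[label]))
```

Two noisy renderings of the same character tend to agree on most tuple values, which is why the nearest match is usually the right one.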


The beauty of the n-tuple method was that it could recognize many variants of the same character: Most Qs tended to score pretty close to other Qs. Better yet, the process worked with any pattern, not just text. According to an essay coauthored by Robert S. Boyer, a mathematician and longtime friend of Woody’s, the n-tuple method helped define the field of pattern recognition; it was among the earliest efforts to ask, “How can we make a machine do something like what people do?”

Around the time when he was devising the n-tuple method, Woody had his first daydream about building the machine that he called a “computer person.” Years later, he would recall the “wild excitement” he felt as he conjured up a list of skills for the artificial consciousness:

"I wanted it to read printed characters on a page and handwritten script as well. I could see it, or a part of it, in a small camera that would fit on my glasses, with an attached earplug that would whisper into my ear the names of my friends and acquaintances as I met them on the street … For you see, my computer friend had the ability to recognize faces."


In 1960, Woody struck out with Browning and a third Sandia colleague to found a company of their own. Panoramic Research Incorporated was based, at first, in a small office in Palo Alto, California, in what was not yet known as Silicon Valley. At the time, most of the world’s computers—massive machines that stored data on punch cards or magnetic tape—resided in large corporate offices and government labs. Panoramic couldn’t afford one of its own, so it leased computing time from its neighbors, often late in the evenings, when it was cheaper.

Panoramic’s business, as Woody later described it to a colleague, was “trying out ideas which we hoped would ‘move the world.’ ” According to Nels Winkless, a writer and consultant who collaborated on several Panoramic projects and later became a founding editor of Personal Computing magazine, “Their function was literally to do what other people find just too silly.”

The company attracted an odd and eclectic mix of researchers—many of whom, like Woody, had grown up with nothing during the Great Depression and now wanted to explore everything. Their inclinations ranged from brilliant to feral. Browning, who came from a family of poor farmers and had spent two years of his youth eating almost nothing but cabbage, was a perpetual tinkerer. At one point he worked with another Panoramic researcher, Larry Bellinger, to develop the concept for a canine-powered truck called the Dog-Mobile. They also built something called the Hear-a-Lite, a pen-shaped device for blind people that translated light levels into sound.

Bellinger, who had worked as a wing-walker as a teenager (he kept the pastime secret from his mother by playing off his bruises from bad parachute landings as bicycle injuries), had also helped design the Bell X-1, the sound-barrier-breaking rocket plane made famous in Tom Wolfe’s The Right Stuff. Later he created the Mowbot, a self-propelled lawnmower “for cutting grass in a completely random and unattended manner.” (Johnny Carson featured the device on The Tonight Show.)

Then there was Helen Chan Wolf, a pioneer in robot programming who started at Panoramic a couple of years out of college. She would go on to help program Shakey the Robot, described by the Institute of Electrical and Electronics Engineers as “the world’s first robot to embody artificial intelligence”; she has been called, by one former colleague, “the Lady Ada Lovelace of robotics.” In the early 1960s, when Wolf’s coding efforts could involve stacks of punch cards a foot and a half high, she was awed by the range of ideas her Panoramic colleagues threw at the wall. At one point, she says, Woody decided that he “wanted to unravel DNA, and he figured out that it would take 30 or 37 years to do it on the computers that we had at the time. I said, ‘Well, I guess we won’t do that.’ ”


Perhaps not surprisingly, Panoramic struggled to find adequate commercial funding. Woody did his best to pitch his character-recognition technology to business clients, including the Equitable Life Assurance Society and McCall’s magazine, but never landed a contract. By 1963, Woody was all but certain the company would fold.

But throughout its existence, Panoramic had at least one seemingly reliable patron that helped keep it afloat: the Central Intelligence Agency. If any direct mentions of the CIA ever existed in Woody’s papers, they likely ended up in ashes in his driveway; but fragments of evidence that survived in Woody’s archives strongly suggest that, for years, Panoramic did business with CIA front companies. Winkless, who was friendly with the entire Panoramic staff—and was a lifelong friend of Browning—says the company was likely formed, at least in part, with agency funding in mind. “Nobody ever told me in so many words,” he recalls, “but that was the case.”


According to records obtained by the Black Vault, a website that specializes in esoteric Freedom of Information Act requests, Panoramic was among 80 organizations that worked on Project MK-Ultra, the CIA’s infamous “mind control” program, best known for the psychological tortures it inflicted on frequently unwilling human subjects. Through a front called the Medical Sciences Research Foundation, Panoramic appears to have been assigned to subprojects 93 and 94, on the study of bacterial and fungal toxins and “the remote directional control of activities of selected species of animals.” Research by David H. Price, an anthropologist at Saint Martin’s University, shows that Woody and his colleagues also received money from the Society for the Investigation of Human Ecology, a CIA front that provided grants to scientists whose work might improve the agency’s interrogation techniques or act as camouflage for that work. (The CIA would neither confirm nor deny any knowledge of, or connection to, Woody or Panoramic.)

But it was another front company, called the King-Hurley Research Group, that bankrolled Woody’s most notable research at Panoramic. According to a series of lawsuits filed in the 1970s, King-Hurley was a shell company that the CIA used to purchase planes and helicopters for the agency’s secret Air Force, known as Air America. For a time King-Hurley also funded psychopharmacological research at Stanford. But in early 1963, it was the recipient of a different sort of pitch from one Woody Bledsoe: He proposed to conduct “a study to determine the feasibility of a simplified facial recognition machine.” Building on his and Browning’s work with the n-tuple method, he intended to teach a computer to recognize 10 faces. That is, he wanted to give the computer a database of 10 photos of different people and see if he could get it to recognize new photos of each of them. “Soon one would hope to extend the number of persons to thousands,” Woody wrote. Within a month, King-Hurley had given him the go-ahead.

In one approach, Woody Bledsoe taught his computer to divide a face into features, then compare distances between them.

Photograph: Dan Winters



Ten faces may now seem like a pretty pipsqueak goal, but in 1963 it was breathtakingly ambitious. The leap from recognizing written characters to recognizing faces was a giant one. To begin with, there was no standard method for digitizing photos and no existing database of digital images to draw from. Today’s researchers can train their algorithms on millions of freely available selfies, but Panoramic would have to build its database from scratch, photo by photo.

And there was a bigger problem: Three-dimensional faces on living human beings, unlike two-dimensional letters on a page, are not static. Images of the same person can vary in head rotation, lighting intensity, and angle; people age and hairstyles change; someone who looks carefree in one photo might appear anxious in the next. Like finding the common denominator in an outrageously complex set of fractions, the team would need to somehow correct for all this variability and normalize the images they were comparing. And it was hardly a sure bet that the computers at their disposal were up to the task. One of their main machines was a CDC 1604 with 192 KB of RAM—about 21,000 times less working memory than a basic modern smartphone.

Fully aware of these challenges from the beginning, Woody adopted a divide-and-conquer approach, breaking the research into pieces and assigning them to different Panoramic researchers. One young researcher got to work on the digitization problem: He snapped black-and-white photos of the project’s human subjects on 16-mm film stock. Then he used a scanning device, developed by Browning, to convert each picture into tens of thousands of data points, each one representing a light intensity value—ranging from 0 (totally dark) to 3 (totally light)—at a specific location in the image. That was far too many data points for the computer to handle all at once, though, so the young researcher wrote a program called NUBLOB, which chopped the image into randomly sized swatches and computed an n-tuple-like score for each one.
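The pipeline described above is simple enough to mimic. The sketch below quantizes grayscale pixels onto the four-level intensity scale and scores randomly sized swatches; the per-swatch score here is a stand-in histogram, since NUBLOB’s actual n-tuple-like scoring was never published.

```python
import random

def digitize(pixels, levels=4):
    """Quantize 0-255 grayscale values onto the four-level scale the
    scanner produced: 0 (totally dark) through 3 (totally light)."""
    return [[min(levels - 1, p * levels // 256) for p in row] for row in pixels]

def random_swatches(width, height, n_swatches, rng=None):
    """Chop the image frame into randomly sized rectangular swatches (x, y, w, h)."""
    rng = rng or random.Random(0)
    swatches = []
    for _ in range(n_swatches):
        w, h = rng.randint(2, width // 2), rng.randint(2, height // 2)
        x, y = rng.randint(0, width - w), rng.randint(0, height - h)
        swatches.append((x, y, w, h))
    return swatches

def swatch_score(image, swatch):
    """Stand-in score for one swatch: a histogram of its intensity levels.
    (NUBLOB's real score was n-tuple-like; its details were never published.)"""
    x, y, w, h = swatch
    hist = [0, 0, 0, 0]
    for row in image[y:y + h]:
        for v in row[x:x + w]:
            hist[v] += 1
    return hist
```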

Meanwhile, Woody, Helen Chan Wolf, and a student began studying how to account for head tilt. First they drew a series of numbered small crosses on the skin of the left side of a subject’s face, from the peak of his forehead down to his chin. Then they snapped two portraits, one in which the subject was facing front and another in which he was turned 45 degrees. By analyzing where all the tiny crosses landed in these two images, they could then extrapolate what the same face would look like when rotated by 15 or 30 degrees. In the end, they could feed a black-and-white image of a marked-up face into the computer, and out would pop an automatically rotated portrait that was creepy, pointillistic, and remarkably accurate.
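The geometry behind that extrapolation can be approximated with a simple model: treat the head as rotating about a vertical axis under orthographic projection, recover each cross’s lateral offset and depth from its positions in the frontal and 45-degree portraits, then project it at any intermediate angle. The Panoramic team’s exact equations are not in the public record, so the Python below is a hedged reconstruction of the idea rather than their method.

```python
import math

def recover_offset_and_depth(u_front, u_45):
    """From a cross's horizontal image coordinate in the frontal (0-degree)
    and 45-degree portraits, recover its lateral offset x and depth z,
    assuming orthographic projection and rotation about a vertical axis."""
    x = u_front                                   # at 0 degrees, u = x
    c, s = math.cos(math.radians(45)), math.sin(math.radians(45))
    z = (u_45 - x * c) / s                        # from u_45 = x*cos45 + z*sin45
    return x, z

def predict_u(x, z, angle_deg):
    """Predict the cross's horizontal coordinate at a new head rotation,
    e.g. 15 or 30 degrees; the vertical coordinate is unchanged in this model."""
    t = math.radians(angle_deg)
    return x * math.cos(t) + z * math.sin(t)

# Example: a cheek landmark seen at u=30 (frontal) and u=52 (45-degree turn).
x, z = recover_offset_and_depth(30.0, 52.0)
print(round(predict_u(x, z, 15), 1), round(predict_u(x, z, 30), 1))
```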


These solutions were ingenious but insufficient. Thirteen months after work began, the Panoramic team had not taught a computer to recognize a single human face, much less 10 of them. The triple threat of hair growth, facial expressions, and aging presented a “tremendous source of variability,” Woody wrote in a March 1964 progress report to King-Hurley. The task, he said, was “beyond the state of the art of the present pattern recognition and computer technology at this time.” But he recommended that more studies be funded to attempt “a completely new approach” toward tackling facial recognition.

Over the following year, Woody came to believe that the most promising path to automated facial recognition was one that reduced a face to a set of relationships between its major landmarks: eyes, ears, nose, eyebrows, lips. The system that he imagined was similar to one that Alphonse Bertillon, the French criminologist who invented the modern mug shot, had pioneered in 1879. Bertillon described people on the basis of 11 physical measurements, including the length of the left foot and the length from the elbow to the end of the middle finger. The idea was that, if you took enough measurements, every person was unique. Although the system was labor-intensive, it worked: In 1897, years before fingerprinting became widespread, French gendarmes used it to identify the serial killer Joseph Vacher.


Throughout 1965, Panoramic attempted to create a fully automated Bertillon system for the face. The team tried to devise a program that could locate noses, lips, and the like by parsing patterns of lightness and darkness in a photograph, but the effort was mostly a flop.

So Woody and Wolf began exploring what they called a “man-machine” approach to facial recognition—a method that would incorporate a bit of human assistance into the equation. (A recently declassified history of the CIA’s Office of Research and Development mentions just such a project in 1965; that same year, Woody sent a letter on facial recognition to John W. Kuipers, the division’s chief of analysis.) Panoramic conscripted Woody’s teenage son Gregory and one of his friends to go through a pile of photographs—122 in all, representing about 50 people—and take 22 measurements of each face, including the length of the ear from top to bottom and the width of the mouth from corner to corner. Then Wolf wrote a program to process the numbers.
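Wolf’s program is not described in detail in the surviving papers, but the task it solved, matching a vector of 22 hand-taken measurements to the right photograph, can be sketched as a nearest-neighbor search. The normalization step below is an added assumption so that overall face size drops out.

```python
import math

def normalize(measurements):
    """Scale a 22-measurement vector to unit length so overall face size
    drops out (an added assumption, not documented in Wolf's program)."""
    norm = math.sqrt(sum(v * v for v in measurements))
    return [v / norm for v in measurements]

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match(query, gallery):
    """Return the photo ID whose stored measurements sit closest to the
    query's measurements. `gallery` maps photo IDs to measurement lists."""
    q = normalize(query)
    return min(gallery, key=lambda photo_id: distance(q, normalize(gallery[photo_id])))
```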

At the end of the experiment, the computer was able to match every set of measurements with the correct photograph. The results were modest but undeniable: Wolf and Woody had proved that the Bertillon system was theoretically workable.

Their next move, near the end of 1965, was to stage a larger-scale version of much the same experiment—this time using a recently invented piece of technology to make the “man” in their man-machine system far more efficient. With King-Hurley’s money, they acquired a RAND tablet, an $18,000 device that looked something like a flatbed image scanner but worked something like an iPad. Using a stylus, a researcher could draw on the tablet and produce a relatively high-resolution computer-readable image.

Woody and his colleagues asked some undergraduates to cycle through a new batch of photographs, laying each one on the RAND tablet and pinpointing key features with the stylus. The process, though still arduous, was much faster than before: All told, the students managed to input data for some 2,000 images, including at least two of each face, at a rate of about 40 an hour.

Even with this larger sample size, though, Woody’s team struggled to overcome all the usual obstacles. The computer still had trouble with smiles, for instance, which “distort the face and drastically change inter-facial measurements.” Aging remained a problem too, as Woody’s own face proved. When asked to cross-match a photo of Woody from 1945 with one from 1965, the computer was flummoxed. It saw little resemblance between the younger man, with his toothy smile and dark widow’s peak, and the older one, with his grim expression and thinning hair. It was as if the decades had created a different person.

And in a sense, they had. By this point, Woody had grown tired of hustling for new contracts for Panoramic and finding himself “in the ridiculous position of either having too many jobs or not enough.” He was constantly pitching new ideas to his funders, some treading into territory that would now be considered ethically dubious. In March 1965—some 50 years before China would begin using facial pattern-matching to identify ethnic Uighurs in Xinjiang Province—Woody had proposed to the Defense Department’s Advanced Research Projects Agency, then known as Arpa, that it should support Panoramic to study the feasibility of using facial characteristics to determine a person’s racial background. “There exists a very large number of anthropological measurements which have been made on people throughout the world from a variety of racial and environmental backgrounds,” he wrote. “This extensive and valuable store of data, collected over the years at considerable expense and effort, has not been properly exploited.” It is unclear whether Arpa agreed to fund the project.


What’s clear is that Woody was investing thousands of dollars of his own money in Panoramic with no guarantee of getting it back. Meanwhile, friends of his at the University of Texas at Austin had been urging him to come work there, dangling the promise of a steady salary. Woody left Panoramic in January 1966. The firm appears to have folded soon after.

With daydreams of building his computer person still playing in his head, Woody moved his family to Austin to dedicate himself to the study and teaching of automated reasoning. But his work on facial recognition wasn’t over; its culmination was just around the corner.


In 1967, more than a year after his move to Austin, Woody took on one last assignment that involved recognizing patterns in the human face. The purpose of the experiment was to help law enforcement agencies quickly sift through databases of mug shots and portraits, looking for matches.

As before, funding for the project appears to have come from the US government. A 1967 document declassified by the CIA in 2005 mentions an “external contract” for a facial-recognition system that would reduce search time by a hundredfold. This time, records suggest, the money came through an individual acting as an intermediary; in an email, the apparent intermediary declined to comment.

Woody’s main collaborator on the project was Peter Hart, a research engineer in the Applied Physics Laboratory at the Stanford Research Institute. (Now known as SRI International, the institute split from Stanford University in 1970 because its heavy reliance on military funding had become so controversial on campus.) Woody and Hart began with a database of around 800 images—two newsprint-quality photos each of about “400 adult male caucasians,” varying in age and head rotation. (I did not see images of women or people of color, or references to them, in any of Woody’s facial-recognition studies.) Using the RAND tablet, they recorded 46 coordinates per photo, including five on each ear, seven on the nose, and four on each eyebrow. Building on Woody’s earlier experience at normalizing variations in images, they used a mathematical equation to rotate each head into a forward-looking position. Then, to account for differences in scale, they enlarged or reduced each image to a standard size, with the distance between the pupils as their anchor metric.
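The scale-normalization step can be made concrete: with the pupils as the anchor metric, every recorded coordinate is rescaled so that the interpupillary distance hits a standard value. The sketch below assumes landmarks stored as named 2-D points and a standard distance of 100 units, both illustrative choices; the rotation correction would reuse the projection idea sketched earlier.

```python
def scale_to_standard(landmarks, left_pupil, right_pupil, standard_ipd=100.0):
    """Rescale named 2-D landmarks so the distance between the pupils equals
    a standard value, making faces shot at different distances comparable.
    The standard value of 100.0 is an arbitrary illustrative choice."""
    lx, ly = landmarks[left_pupil]
    rx, ry = landmarks[right_pupil]
    ipd = ((rx - lx) ** 2 + (ry - ly) ** 2) ** 0.5
    s = standard_ipd / ipd
    cx, cy = (lx + rx) / 2, (ly + ry) / 2      # scale about the midpoint of the pupils
    return {name: ((x - cx) * s + cx, (y - cy) * s + cy)
            for name, (x, y) in landmarks.items()}
```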

The computer’s task was to memorize one version of each face and use it to identify the other. Woody and Hart offered the machine one of two shortcuts. With the first, known as group matching, the computer would divide the face into features—left eyebrow, right ear, and so on—and compare the relative distances between them. The second approach relied on Bayesian decision theory; it used 22 measurements to make an educated guess about the whole.
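Of the two shortcuts, group matching is the easier to sketch: reduce each face to its normalized inter-feature distances and pick the stored face whose distances line up best. The Python below is an illustrative reading of that idea; the actual weighting, and the Bayesian variant, are not spelled out in the public record.

```python
import math
from itertools import combinations

def feature_distances(centers):
    """Pairwise distances between feature centers (left eyebrow, right ear,
    and so on), divided by their mean so overall scale drops out."""
    pairs = combinations(sorted(centers), 2)
    ds = [math.dist(centers[a], centers[b]) for a, b in pairs]
    mean = sum(ds) / len(ds)
    return [d / mean for d in ds]

def group_match(query_centers, gallery):
    """Pick the stored face whose normalized inter-feature distances are
    closest to the query's. Both must use the same set of feature names."""
    q = feature_distances(query_centers)
    def cost(face_id):
        g = feature_distances(gallery[face_id])
        return sum((a - b) ** 2 for a, b in zip(q, g))
    return min(gallery, key=cost)
```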

In the end, the two programs handled the task about equally well. More important, they blew their human competitors out of the water. When Woody and Hart asked three people to cross-match subsets of 100 faces, even the fastest one took six hours to finish. The CDC 3800 computer completed a similar task in about three minutes, reaching a hundredfold reduction in time. The humans were better at coping with head rotation and poor photographic quality, Woody and Hart acknowledged, but the computer was “vastly superior” at tolerating the differences caused by aging. Overall, they concluded, the machine “dominates” or “very nearly dominates” the humans.

This was the greatest success Woody ever had with his facial-recognition research. It was also the last paper he would write on the subject. The paper was never made public—for “government reasons,” Hart says—which both men lamented. In 1970, two years after the collaboration with Hart ended, a roboticist named Michael Kassler alerted Woody to a facial-recognition study that Leon Harmon at Bell Labs was planning. “I’m irked that this second rate study will now be published and appear to be the best man-machine system available,” Woody replied. “It sounds to me like Leon, if he works hard, will be almost 10 years behind us by 1975.” He must have been frustrated when Harmon’s research made the cover of Scientific American a few years later, while his own, more advanced work was essentially kept in a vault.



In the ensuing decades, Woody won awards for his contributions to automated reasoning and served for a year as president of the Association for the Advancement of Artificial Intelligence. But his work in facial recognition would go largely unrecognized and be all but forgotten, while others picked up the mantle.

In 1973 a Japanese computer scientist named Takeo Kanade made a major leap in facial-recognition technology. Using what was then a very rare commodity—a database of 850 digitized photographs, taken mostly during the 1970 World’s Fair in Suita, Japan—Kanade developed a program that could extract facial features such as the nose, mouth, and eyes without human input. Kanade had finally managed Woody’s dream of eliminating the man from the man-machine system.

Woody did dredge up his expertise in facial recognition on one or two occasions over the years. In 1982 he was hired as an expert witness in a criminal case in California. An alleged member of the Mexican mafia was accused of committing a series of robberies in Contra Costa County. The prosecutor had several pieces of evidence, including surveillance footage of a man with a beard, sunglasses, a winter hat, and long curly hair. But mug shots of the accused showed a clean-shaven man with short hair. Woody went back to his Panoramic research to measure the bank robber’s face and compare it to the pictures of the accused. Much to the defense attorney’s pleasure, Woody found that the faces were likely of two different people because the noses differed in width. “It just didn’t fit,” he said. Though the man still went to prison, he was acquitted on the four counts that were related to Woody’s testimony.

Only in the past 10 years or so has facial recognition started to become capable of dealing with real-world imperfection, says Anil K. Jain, a computer scientist at Michigan State University and coeditor of Handbook of Face Recognition. Nearly all of the obstacles that Woody encountered, in fact, have fallen away. For one thing, there’s now an inexhaustible supply of digitized imagery. “You can crawl social media and get as many faces as you want,” Jain says. And thanks to advances in machine learning, storage capacity, and processing power, computers are effectively self-teaching. Given a few rudimentary rules, they can parse reams and reams of data, figuring out how to pattern-match virtually anything, from a human face to a bag of chips—no RAND tablet or Bertillon measurements necessary.

Even given how far facial recognition has come since the mid-1960s, Woody defined many of the problems that the field still sets out to solve. His process of normalizing the variability of facial position, for instance, remains part of the picture. To make facial recognition more accurate, says Jain, deep networks today often realign a face to a forward posture, using landmarks on the face to extrapolate a new position. And though today’s deep-learning-based systems aren’t told by a human programmer to identify noses and eyebrows explicitly, Woody’s turn in that direction in 1965 set the course of the field for decades. “The first 40 years were dominated by this feature-based method,” says Kanade, now a professor at Carnegie Mellon’s Robotics Institute. Now, in a way, the field has returned to something like Woody’s earliest attempts at unriddling the human face, when he used a variation on the n-tuple method to find patterns of similarity in a giant field of data points. As complex as facial-recognition systems have become, says Jain, they are really just creating similarity scores for a pair of images and seeing how they compare.
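Jain’s point, that modern systems boil a pair of faces down to a similarity score, looks roughly like this in practice: a trained network maps each image to an embedding vector, and a cosine similarity over the two vectors decides the match. The network itself is omitted here, and the threshold is an illustrative placeholder, not a value from any particular system.

```python
import math

def cosine_similarity(a, b):
    """Similarity score for a pair of face embeddings: near 1.0 means the
    vectors point the same way, near 0 means they are unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def same_person(embedding_a, embedding_b, threshold=0.6):
    """Call it a match if the score clears a tuned threshold. The embeddings
    would come from a trained deep network (not shown)."""
    return cosine_similarity(embedding_a, embedding_b) >= threshold
```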


But perhaps most importantly, Woody’s work set an ethical tone for research on facial recognition that has been enduring and problematic. Unlike other world-changing technologies whose apocalyptic capabilities became apparent only after years in the wild—see: social media, YouTube, quadcopter drones—the potential abuses of facial-recognition technology were apparent almost from its birth at Panoramic. Many of the biases that we may write off as being relics of Woody’s time—the sample sets skewed almost entirely toward white men; the seemingly blithe trust in government authority; the temptation to use facial recognition to discriminate between races—continue to dog the technology today.

Last year, a test of Amazon’s Rekognition software misidentified 28 NFL players as criminals. Days later, the ACLU sued the US Justice Department, the FBI, and the DEA to get information on their use of facial-recognition technology produced by Amazon, Microsoft, and other companies. A 2019 report from the National Institute of Standards and Technology, which tested code from more than 50 developers of facial-recognition software, found that white males are falsely matched with mug shots less frequently than other groups. In 2018, a pair of academics wrote a broadside against the field: “We believe facial recognition technology is the most uniquely dangerous surveillance mechanism ever invented.”

In the spring of 1993, nerve degeneration from ALS began causing Woody’s speech to slur. According to a long tribute written after his death, he continued to teach at UT until his speech became unintelligible, and he kept up his research on automated reasoning until he could no longer hold a pen. “Always the scientist,” wrote the authors, “Woody made tapes of his speech so that he could chronicle the progress of the disease.” He died on October 4, 1995. His obituary in the Austin American-Statesman made no mention of his work on facial recognition. In the picture that ran alongside it, a white-haired Woody stares directly at the camera, a big smile spread across his face.


Shaun Raviv (@ShaunRaviv) is a writer living in Atlanta. He wrote about the neuroscientist Karl Friston in issue 26.12.

This article appears in the February issue.



