Fei-Fei Li started a revolution in artificial intelligence by seeing like an algorithm

Early in the pandemic, an agent (literary, not software) suggested that Fei-Fei Li write a book. The approach made sense. She has left an indelible mark on the field of artificial intelligence by heading up a project, started in 2006, called ImageNet. It sorted millions of digital images to form what became an essential training ground for the AI systems shaking our world today. Li is currently the founding co-director of Stanford University’s Institute for Human-Centered Artificial Intelligence (HAI), whose very name is a call for cooperation, if not co-evolution, between humans and intelligent machines. After accepting the agent’s challenge, Li spent the lockdown year writing a draft. But when one of her cofounders at HAI, the philosopher John Etchemendy, read it, he told her to start over, this time including her own journey in the field. “He said there are a lot of technical people who can write AI books,” Li says. “But I would be missing an opportunity to tell all the young immigrants, women, and people from diverse backgrounds that they, too, can actually do artificial intelligence.”

Li is a private person and doesn’t feel comfortable talking about herself. But she gamely figured out how to weave in her experience as an immigrant who came to the United States at 16 without knowing the language, and who overcame obstacles to become a key figure in this pivotal technology. On the way to her current position, she has also been director of the Stanford Artificial Intelligence Lab and chief scientist for artificial intelligence and machine learning at Google Cloud. Her book, The Worlds I See, is structured, she tells me, like a double helix, with her personal quest and the trajectory of AI intertwined into a spiraling whole. “We still see ourselves through the reflection of our identity,” Li says. “Part of that thinking is the technology itself. The hardest world to see is ourselves.”

The threads come together most dramatically in her account of ImageNet’s creation and implementation. She tells me of her determination to defy those, including her colleagues, who doubted it was possible to collect and classify millions of images, with at least 1,000 examples for each entry on a sprawling list of categories, from pillows to violins. The effort required not only technical fortitude but the sweat of thousands of people (spoiler: Amazon Mechanical Turk helped turn the trick). The project makes sense only in light of her personal journey. The audacity to attempt such a risky venture came from the support of her parents, who, despite financial struggles, insisted she turn down a lucrative job in the business world to pursue her dream of becoming a scientist. Pulling off this moonshot would be the ultimate validation of their sacrifices.

The reward was profound. Li describes how building ImageNet required her to look at the world the way an artificial neural network algorithm might. When she encountered dogs, trees, furniture, and other objects in the real world, her mind now saw past its instinctive categorization of what she perceived, and came to sense the aspects of an object that might reveal its essence to software. What visual clues might lead a digital intelligence to recognize those objects, and to identify their subcategories as well: beagles versus greyhounds, acorns versus bamboo, an Eames chair versus a Mission rocker? There’s a great passage on how her team tried to gather images of every possible car model. When ImageNet was completed in 2009, Li launched a competition in which researchers used the data set to train their own machine-learning algorithms, to see whether computers could reach new heights in identifying objects. In 2012, the winner, AlexNet, came out of Geoffrey Hinton’s lab at the University of Toronto and beat previous winners by a huge margin. One might argue that the combination of ImageNet and AlexNet launched the deep-learning boom that still captivates us today, the one that brought us ChatGPT.

What Li and her team didn’t grasp was that this new way of seeing could also become linked to humanity’s tragic tendency to let bias distort what we see. In her book she writes of “a pang of guilt” when news broke that Google had misclassified photos of Black people as gorillas. Other horrific examples followed. “When the internet presents a predominantly white, Western, and often male picture of everyday life, we are left with technology that struggles to make sense of everyone,” Li writes, belatedly acknowledging the flaw. She went on to help launch a program called AI4ALL to bring women and people of color into the field. “When we were pioneering ImageNet, we didn’t know nearly as much as we do today,” she tells me, making clear that she uses “we” in a collective sense, not just to refer to her small team. “The technology has evolved tremendously since then. But if there are things we didn’t do well, we have to fix them.”

The day I spoke with Li, the Washington Post published a long feature about how bias in machine learning remains a serious problem. Today’s AI image generators, such as DALL-E and Stable Diffusion, still deliver stereotypes when interpreting neutral prompts. Asked to depict “a productive person,” the systems generally show white men, while a request for “a person at social services” often shows people of color. Is the lead inventor of ImageNet, ground zero for infusing human bias into AI, confident that the problem can be solved? “Confident would be too simple a word,” she says. “I am cautiously optimistic that there are technological solutions and governance solutions, as well as market demand, to be better and better.” That cautious optimism also extends to the way she talks about dire predictions that artificial intelligence might lead to human extinction. “I don’t want to give a false sense that everything is going to be okay,” she says. “But I also don’t want to convey a sense of doom and gloom, because human beings need hope.”
