Is AI Too Smart For Our Own Good?
Can scientific achievements become so advanced that we lose sight of our own humanity? This thought was weighing on the mind of science and technology journalist Jacob Ward during a Synapse-sponsored lunchtime talk with students on Tuesday, April 16.
At UCSF we think of science as a subject we study, but science can also be used as a lens through which we view and deepen our understanding of other subjects.
Ward, a Berggruen Fellow at Stanford’s Center for Advanced Study in the Behavioral Sciences, uses this lens in his work as a journalist and technology correspondent for NBC News.
He recently pondered the ethical considerations of scientific advancements after profiling writer Anna Todd, whose work was discovered with the help of an algorithm created by the company Wattpad.
The algorithm compares the language of uploaded stories to the language of successful novels and identifies stories that may be the next book or movie sensation.
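Wattpad has not published how its algorithm works, but the core idea described here — scoring a new manuscript's language against a corpus of proven hits — can be sketched with a simple bag-of-words cosine similarity. Everything below (function names, the toy texts) is illustrative, not Wattpad's actual system:

```python
from collections import Counter
import math

def vectorize(text):
    # Lowercase bag-of-words counts; a production system would use
    # far richer language features than raw word frequencies.
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    # Cosine of the angle between two sparse word-count vectors:
    # 1.0 for identical word distributions, 0.0 for no shared words.
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def hit_score(manuscript, hit_corpus):
    # Score a new story by its average similarity to known bestsellers.
    vec = vectorize(manuscript)
    return sum(cosine_similarity(vec, vectorize(h)) for h in hit_corpus) / len(hit_corpus)
```

A story whose vocabulary closely tracks the corpus of successful novels scores near 1; one with little overlap scores near 0. The ranking, not any understanding of the story, is what surfaces a candidate.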
Thanks in part to Wattpad’s algorithm, Todd became a New York Times bestselling author with five published novels and a film adaptation now in theaters.
This ability of Artificial Intelligence (AI) to perfectly carry out an activity that seems inherently human led Ward to reflect on the ramifications of technological advancements mimicking human creativity.
“[AI] will give you the most likely answer just out of [the input you give it], and we in our brains tend to confuse that with some sort of intelligent machine that is looking at the world and is making a better decision than human beings could,” said Ward.
“[The problem with] pattern recognition machine learning is that you create these feedback loops that nobody ever ends up seeing. And it’s especially hard to look at because the people who build the pattern recognition systems don’t even know why the AI came to the conclusion it did.”
For example, said Ward, consider the impact of AI on art. Ward covered a story about Mario Klingemann, who taught an algorithm his taste in art; the algorithm then generated new works of art from the paintings Klingemann fed it.
One of these pieces was considered so significant that it sold at Sotheby’s auction house in February for over $50,000, generating no small amount of controversy.
“There was an MIT philosopher who wrote this whole [piece] about how you should not confuse this kind of work with real creativity,” said Ward. “Real creativity is held within the human soul, and this is just a simulation.
“My question is if it does such a good simulation and you can’t tell the difference, who cares? [People] don’t care, they’re going to sell you this stuff.”
In addition to the invisible attitudes being shaped by human interactions with AI, Ward is also interested in how technological and scientific advances are pushing humans into other new ethical dilemmas — for example, the new field of genoeconomics, which Ward wrote about in a recent New York Times article.
Genoeconomics works on a principle familiar to biologists and medical doctors: that variations in the human genome can be associated with different outcomes.
Instead of applying these genetic variations to medical outcomes, however, sociologists are now applying them to social outcomes.
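The basic machinery behind this kind of work is a polygenic score: a large study estimates a small effect weight for each genetic variant, and an individual's score is the weighted sum of how many copies of each variant they carry. The variants and weights below are made up for illustration; real studies such as Benjamin's draw on roughly a million such variants:

```python
# Hypothetical effect weights per variant (SNP); a genome-wide study
# would estimate weights like these from its study population.
weights = {"rs0001": 0.02, "rs0002": -0.01, "rs0003": 0.015}

def polygenic_score(genotype):
    # genotype maps each variant to an allele count (0, 1, or 2).
    # The score is the weighted sum across all measured variants.
    # Note: weights estimated in one population (here, the largely
    # white cohorts the article describes) may not transfer to others.
    return sum(weights.get(snp, 0.0) * count for snp, count in genotype.items())

person = {"rs0001": 2, "rs0002": 1, "rs0003": 0}
score = polygenic_score(person)  # 0.02*2 - 0.01*1 + 0.015*0 ≈ 0.03
```

The score is a statistical prediction over a population, not a diagnosis for any individual — a distinction central to the criticisms Ward raises below.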
Ward talked to University of Southern California economist Daniel Benjamin who co-led a study comparing the DNA of over one million people to identify genetic variations that could predict the likelihood that an individual would graduate from a four-year college.
“I and a bunch of critics that I interviewed asked ‘What’s the point of this? Isn’t this going to do more harm than good?’” said Ward.
Researchers told Ward that discovering genetic disadvantages related to learning could help the education system identify and correct for them.
They also suggested that their work could strengthen the findings of the Perry Preschool Project, an ongoing research study tracking preschool students and the impact early education has on their life trajectories.
Despite these good intentions, Ward saw a hole in these explanations.
The majority of participants in the Perry Preschool Project have been African American, whereas Benjamin’s study only captures genetic variations in white people.
That’s because the study’s genomes came from cohorts assembled by the Estonian Genome Center, Iceland’s deCODE Genetics, the UK Biobank, America’s 23andMe, and other sources drawn from predominantly white populations.
“There’s a long history of porting learnings from upper-class white people over to other communities that never really works out,” said Ward. “They have no way of [applying] this research to African Americans, and yet they are talking about using it to help people in school. Nobody needs more help in the United States right now than black people in terms of economic disparities and educational disparities.”
Ward shared his reservations about genoeconomics at a conference held by the Social Science Genetics Consortium. He compared the genetic predictors in Benjamin’s study to another example of a well-intentioned invention being misused, Alfred Binet’s IQ test.
“It’s the same idea, and people wound up using the IQ test to exclude whole swathes of the world from immigrating to the United States,” Ward told them. “Eugenics was born out of that. Why would you mess around with this?”
The response at the conference was not positive.
“I got shouted at by a whole bunch of people,” Ward said.
According to Ward, the scientists said “science is about the free, unrestrained discovery of as much new information as possible. It’s not our job to say ahead of time this is not something we should pursue because it’s too dangerous.”
Ward was still dubious.
“I feel like we’re at a place technologically and scientifically [where] we’re so out ahead of our ability to see that we are being played, much less our ability to create an ethical framework for how we’re going to use these [scientific and technological advances],” Ward said.
“Is there a mechanism for evaluating the work before it’s done? [Does anyone ask] should this be done? How do we get this conversation going?”
Ward still hasn’t settled on an answer.