‘You guys could literally sit down at your board meeting, I believe, and determine who’s gonna be the next president of the United States…’
(Ben Sellers, Liberty Headlines) The day after Project Veritas released an exposé that included both internal documents and a hidden-camera admission from a Google executive that the company engaged in anti-conservative bias, senators had the chance to grill a company representative about it during a committee hearing.
But even when faced with evidence of bias, a panel of so-called expert researchers in artificial intelligence continued to downplay tech companies’ efforts to manipulate their algorithms in service of a partisan agenda.
The panel appeared before the Senate Commerce Subcommittee on Communications, Technology, Innovation, and the Internet, chaired by Sen. John Thune, R-SD, on the topic “Optimizing for Engagement: Understanding the Use of Persuasive Technology on Internet Platforms.”
Among the witnesses was Maggie Stanphill, director of Google user experience, who repeatedly stonewalled questions about the company’s use of “persuasive technology.”
Several senators, after criticizing Stanphill for refusing to give direct answers, turned to the other three panelists, who offered a sometimes damning account of what companies like Google did to maximize their profit margins.
Tristan Harris, of the Center for Humane Technology, likened the user-optimization techniques that kept people coming back to tech companies "crawling down the brain stem" of their platforms' users.
Panelists testified to the dangerous and subversive effects of algorithms that deployed data-mining to influence a user’s perspectives and choices in a variety of ways.
Citing Instagram and its parent company, Facebook, as examples, Harris said, “What it actually is doing is an attempt to cause you to come back every day because now you wanna see, ‘Do I have more followers now than I did yesterday?'”
If someone attempts to delete a site like Facebook, a screen will pop up asking users if they are sure they want to leave, along with the faces of five close connections.
“They’re calculating which of the five faces would be most likely to get you to hit cancel and not delete your Facebook account,” Harris said.
The panel also addressed the dangerous social impact the platforms could have on non-users, such as the traffic app Waze redirecting vehicles through a residential neighborhood, resulting in increased accidents.
But when it came time to discuss allegations of anti-conservative bias, despite recent evidence, all four witnesses seemed to dismiss the notion that the overtly left-leaning Silicon Valley monopolies would be engaged in such a thing.
Sen. Ron Johnson, R-Wis., told the panel that five conservative staff members of his, as an experiment, had sought suggestions from the photo-sharing Instagram app on whom to follow, only to be bombarded with a litany of radical, far-left organizations and political candidates.
Only a single conservative-leaning institution, the Wall Street Journal, was included in the list, Johnson said. (The Journal's opinion page leans conservative, while the newspaper's reporting mostly leans left.)
Johnson said the experiment made clear it was not user habits dictating the results but some outside influence steering them in the opposite direction.
"If there are really algorithms shuffling the content that they might want to—that they would agree with—you would expect they would see maybe Fox News, Breitbart, Newsmax," he said. "You might even see like a really big name, like Donald Trump."
Even so, Harris rejected the possibility of bias and suggested that the prior “click pattern” of the staffers may have driven the recommendations.
Two other panelists—Rashida Richardson of the AI Now Institute and Stephen Wolfram of Wolfram Research—said that the overwhelmingly left-wing results from the five conservative staff members likely had to do with what was popular and trending at the time on Instagram.
"I can speak for Google's stance, just generally, with respect to AI, which is we build products for everyone, so we've got systems in place to ensure no bias is introduced," Stanphill said.
Johnson remained skeptical and called for further investigation into the matter.
“Conservatives have legitimate concern that content is being pushed from a liberal/progressive standpoint to the vast majority of users of these social sites,” he said.
Thune agreed with him.
“If you Google yourself, you’ll find most of the things that pop up right away are gonna be from news organizations that tend to be to the Left,” he said.
“I have had that experience as well,” he continued, “and it seems like if that actually was based upon a neutral algorithm or some other form of artificial intelligence, that since you’re the user and since they know your habits and patterns, you might see something, instead of from the New York Times, pop up from Fox News or from the Wall Street Journal.”
Sen. Jon Tester, D-Mont., expressed a similar line of concern over the alarming addictiveness he had observed in his own grandchildren as they engaged with online content, and the potential impact it could have in swaying public sentiment.
“I will tell you that I’m probably gonna be dead and gone—and I’ll probably be thankful for it—when all this shit comes to fruition,” Tester told the panel.
Addressing Google, he said, “… You guys could literally sit down at your board meeting, I believe, and determine who’s gonna be the next president of the United States.”
Sen. Ted Cruz, R-Texas, also confronted Stanphill directly on the report from Project Veritas, including the hidden-camera admissions from Jen Gennai, Google's head of "responsible innovation," that top brass there saw a duty to influence election outcomes and "prevent" another 2016.
Cruz grilled Stanphill over a PowerPoint presentation leaked to Project Veritas in which Google’s internal memos proposed actively intervening in the machine learning process to account for what they considered “fairness” that might not otherwise be reflected in user-driven algorithms.
"Google, according to this whistleblower, deliberately makes recommendations—if someone is searching for conservative commentators—deliberately shifts the recommendations … [to] organizations like CNN or MSNBC or left-leaning political outlets. Is that occurring?" Cruz asked.
Again, Stanphill pleaded ignorance, claiming that it was outside her realm of expertise.
“I can’t comment on search algorithms or recommendations given my purview as digital well-being lead,” she said. “I can take that back to my team, though.”