There are numerous organizations trying to learn as much about us as possible. Everywhere we go, every website we visit, every book we read, every comment we make on social media, every email we write, and everyone we communicate with by phone: all of it is examined by people who wish to profit from information about us.
Everything known about you is fed into analysis routines that try to characterize you and predict how you are likely to behave as a consumer, as a voter, as an employee, as a member of an insurance plan, even as a lover. Much of this activity is relatively harmless, such as helping vendors market their products to individuals. In other cases the stakes are much higher: the data is used in ways we have no control over, to draw conclusions about us that may be quite wrong yet still determine whether we get a job, a loan, or even a prison sentence.
Cathy O’Neil addresses this “big data” environment and
its uses in her book Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. We have discussed previously how big data is used to increase inequality. Here the focus will be on how it is used to
threaten democracy.
Big data has often been used to target people who have
financial problems. If a person is
desperate for a solution to a problem, that is the best time to sell them some
shady financial scheme that is likely to do them more harm than good.
“We are ranked, categorized, and
scored in hundreds of models, on the basis of our revealed preferences and
patterns. This establishes a powerful
basis for legitimate ad campaigns, but it also fuels their predatory cousins:
ads that pinpoint people in great need and sell them false or overpriced
promises. They find inequality and feast
on it. The result is that they
perpetuate our existing social stratification, with all of its injustices.”
The same techniques that identify people who might be
receptive to the offer of an ultrahigh interest rate loan or an inducement to
sign up at a for-profit college can be used to identify people who might be
receptive to false or exaggerated political claims. O’Neil provides this example.
“In 2015, the Center for Medical Progress, an antiabortion group, posted videos featuring what they claimed was
an aborted fetus at a Planned Parenthood clinic. The videos asserted that Planned Parenthood
doctors were selling baby parts for research, and they spurred a wave of
protest, and a Republican push to eliminate the organization’s funding.”
“Research later showed that the
video had been doctored: the so-called fetus was actually a photo of a
stillborn baby born to a woman in rural Pennsylvania. And Planned Parenthood does not sell fetal
tissue. The Center for Medical Progress
admitted that the video contained misinformation. That weakened its appeal for a mass
market. But with microtargeting,
antiabortion activists could continue to build an audience for the video,
despite the flawed premise, and use it to raise funds to fight Planned
Parenthood.”
A large number of campaigns are never widely seen because the general public would find them ridiculous or repulsive. Microtargeting, however, can feed false or misleading information to the vulnerable people most likely to be receptive, and thus propagate that information.
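To make the mechanism concrete, here is a minimal sketch in Python of how such audience selection might work. Everything in it is hypothetical: the receptivity_score model, its feature weights, and the two thresholds are invented for illustration, and real ad platforms use far more elaborate machine-learned models. The underlying logic is the same, though: serve the message only to those predicted to respond, and hide it from everyone likely to object.

```python
# Minimal sketch of microtargeted ad delivery. The model, weights, and
# thresholds are all hypothetical; only the selection logic matters here.

def receptivity_score(user: dict) -> float:
    """Toy linear model over invented profile features."""
    return (
        0.4 * user["financial_stress"]    # e.g., searches for debt relief
        + 0.3 * user["topic_engagement"]  # clicks on related stories
        + 0.3 * user["fear_index"]        # responses to earlier fear-based ads
    )

def select_audience(users, show_above=0.7, hide_below=0.3):
    """Serve the ad to likely responders; suppress it for likely objectors."""
    shown, hidden = [], []
    for user in users:
        score = receptivity_score(user)
        if score >= show_above:
            shown.append(user["id"])   # sees the campaign
        elif score <= hide_below:
            hidden.append(user["id"])  # never sees it, so never objects
    return shown, hidden

users = [
    {"id": "a", "financial_stress": 0.9, "topic_engagement": 0.8, "fear_index": 0.9},
    {"id": "b", "financial_stress": 0.1, "topic_engagement": 0.2, "fear_index": 0.1},
]
print(select_audience(users))  # (['a'], ['b']): 'b' never knows the ad exists
```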
“According to Zeynep Tufekci, a
techno-sociologist and professor at the University of North Carolina, these
groups pinpoint vulnerable voters and then target them with fear-mongering
campaigns, scaring them about their children’s safety or the rise of illegal
immigration. At the same time, they can
keep these ads from the eyes of voters likely to be turned off (or even
disgusted) by such messaging.”
“As this happens, it will become
harder to access the political messages our neighbors are seeing—and as a
result to understand why they believe what they do, often passionately. Even a nosey journalist will struggle to
track down the messaging. It is not
enough simply to visit the candidate’s web page, because they, too,
automatically profile and target each visitor, weighing everything from their
zip codes to the links they click on the page, even the photos they appear to
look at.”
“Successful microtargeting, in
part, explains why in 2015 more than 43 percent of Republicans, according to a
survey, still believed the lie that President Obama is a Muslim. And 20 percent of Americans believed he was
born outside the United States and, consequently, an illegitimate president.”
As a result, different people approach a political issue armed with different sets of supposed facts. For a democracy to function properly, its citizens must deal with the same set of facts so that they can come to a compromise over how to address the consequences of those facts. Having contending factions that hold different views of reality is a recipe for disaster.
“The result of these
subterranean campaigns is a dangerous imbalance. The political marketers maintain deep
dossiers on us, feed us a trickle of information, and measure how we respond to
it. But we’re kept in the dark about
what our neighbors are being fed. This
resembles a common tactic used by business negotiators. They deal with different parties separately
so that none of them knows what the other is hearing. This asymmetry of information prevents the
various parties from joining forces—which is precisely the point of a
democratic government.”
O’Neil is also concerned that other big data companies, intimately acquainted with us and our interests and preferences, could be biasing our views, perhaps without even intending to. Consider the case of search engines such as Google.
“Two researchers, Robert Epstein and Ronald E. Robertson, recently asked undecided voters in both the United States and India to use a search engine to learn about upcoming elections. The engines they used were programmed to skew
the search results, favoring one party over another. These results, they said, shifted voting
preferences by 20 percent.”
“The effect was powerful, in
part, because people widely trust search engines. Some 73 percent of Americans, according to a
Pew Research report, believe that search results are both accurate and
impartial.”
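The mechanics of such a skew are easy to sketch. In the Python fragment below, the URLs, relevance scores, and bias weight are all invented; it simply shows how a small hidden bonus added to a ranking score changes which result an undecided voter sees first, and the top positions are where most clicks go.

```python
# Sketch of a biased search ranking. All scores, URLs, and the bias
# weight are invented; the point is how little skew reorders the top slot.

results = [
    {"url": "news.example/party-a-profile", "relevance": 0.78, "party": "A"},
    {"url": "news.example/party-b-profile", "relevance": 0.80, "party": "B"},
    {"url": "blog.example/party-a-endorsement", "relevance": 0.70, "party": "A"},
    {"url": "paper.example/party-b-analysis", "relevance": 0.72, "party": "B"},
]

def rank(results, favored_party=None, bias=0.05):
    """Sort by relevance plus a hidden bonus for the favored party's pages."""
    def score(r):
        return r["relevance"] + (bias if r["party"] == favored_party else 0.0)
    return sorted(results, key=score, reverse=True)

neutral = [r["url"] for r in rank(results)]
skewed = [r["url"] for r in rank(results, favored_party="A")]
print(neutral[0])  # party B's page leads on pure relevance
print(skewed[0])   # party A's page leads once the bias is added
```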
O’Neil recognizes that it would be dangerous for a company like Google to purposely bias search results in order to promote some agenda, and there is no evidence that it does this.
“Then again, how would anyone
know? What we learn about these Internet
giants comes mostly from the tiny proportion of their research that they
share. Their algorithms represent vital
trade secrets. They carry out their
business in the dark.”
If a person with known liberal tendencies, someone who has demonstrated a preference for reading liberal views, searches on a politically relevant topic, does the Google algorithm provide a balanced mix of liberal and conservative views, or does it preferentially provide the views the searcher is likely to want? If the former is intended, it is difficult to see how it could actually be accomplished. If the latter is intended, then positive feedback reinforcing established views becomes a risk. As O’Neil points out, “How would anyone know?”
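The feedback loop at stake can be shown in a few lines. In this sketch both the one-dimensional “lean” scale and the profile-update rule are invented: the ranker prefers items closest to the user's current estimated lean, the user reads whatever ranks first, and each click nudges the estimate further in the same direction.

```python
# Sketch of a personalization feedback loop. The lean scale and update
# rule are invented: -1.0 = strongly liberal, +1.0 = strongly conservative.

items = [-0.8, -0.4, 0.0, 0.4, 0.8]  # available articles, by political lean

def rank_for(profile_lean, items):
    """Prefer the items closest to the user's current estimated lean."""
    return sorted(items, key=lambda lean: abs(lean - profile_lean))

profile = -0.3  # the user starts out mildly liberal
for step in range(5):
    top = rank_for(profile, items)[0]    # the user reads the top-ranked item
    profile = 0.8 * profile + 0.2 * top  # each click nudges the estimate
    print(f"step {step}: read {top:+.1f}, profile now {profile:+.2f}")

# The profile drifts toward -0.4 and stays there: the user is shown, and
# therefore keeps reading, only views close to the ones they already hold.
```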
O’Neil also fears the potential power of Facebook to bias
opinions. Facebook was accused by some of
being a vehicle for the propagation of “fake news” in the 2016 presidential
election. Facebook, with its news feed,
funnels information to its users in ways that are potentially biased just as
search engines do.
“When we visit the site, we
scroll through updates from our friends.
The machine appears to be only a neutral go-between. Many people still believe it is. In 2013, when a University of Illinois researcher
named Karrie Karahalios carried out a survey on Facebook’s algorithm, she found
that 62 percent of the people were unaware that the company tinkered with the
news feed. They believed that the system
instantly shared everything they posted with all of their friends.”
“While Facebook may feel like a
modern town square, the company determines, according to its own interests,
what we see and learn on its social network.
As I write this, about two-thirds of American adults have a profile on
Facebook. They spend thirty-nine minutes
a day on the site, only four minutes less than they dedicate to face-to-face
socializing. Nearly half of them, according
to a Pew Research Center report, count on Facebook to deliver at least some of
their news, which leads to the question: By tweaking its algorithm and molding
the news we see, can Facebook game the political system?”
Facebook itself seems quite interested in the answer to that question. It has been running experiments on its users to find out how much influence it can wield. With many millions of users available, it can run these experiments quickly and accurately. O’Neil tells us that the company admits to having succeeded in increasing voting rates and in altering emotional states by manipulating the news feed algorithm for subsets of users.
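It is worth seeing why scale makes such experiments cheap. The sketch below is a generic two-arm randomized test, not Facebook's actual methodology, and all the numbers (the base voting rate, a hypothetical 0.3-point lift from a modified feed) are invented; but with a million users per experiment, even an effect that small stands out from the noise.

```python
# Sketch of a platform-scale A/B test on feed variants. The base rate
# and lift are invented; the design is a generic randomized experiment.
import random

random.seed(1)

def run_experiment(n_users=1_000_000, base_rate=0.60, lift=0.003):
    """Randomly assign users to a control or modified feed; count who votes."""
    votes = {"control": 0, "treatment": 0}
    counts = {"control": 0, "treatment": 0}
    for _ in range(n_users):
        arm = random.choice(["control", "treatment"])
        rate = base_rate + (lift if arm == "treatment" else 0.0)
        counts[arm] += 1
        votes[arm] += random.random() < rate
    return {arm: votes[arm] / counts[arm] for arm in counts}

print(run_experiment())  # the treatment rate reliably edges out control
```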
Once again, there is no evidence that Facebook has any intention other than to make a profit, but, once again, how would anyone know? Even with the best of intentions, by operating as a news source the company risks injecting bias into the views of its users. If the news feed is meant to provide the items a particular user is most likely to read, there is a real danger of reinforcing existing political views. If it selects the most popular items to feed to its users, the risk is that the news will be dominated by those who mount the loudest and most outrageous campaigns. And if it tries to provide an unbiased set of news items, how could it actually accomplish that?
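Each of those three options corresponds to a different ranking rule, and writing them down makes the dilemma concrete. In this Python sketch the stories and all of their scores are invented; the point is that each strategy is a one-line choice, and each privileges something: predicted engagement, raw popularity, or someone's definition of balance.

```python
# Three ways to pick a news feed item, each with its own bias (data invented).

stories = [
    {"title": "Calm policy analysis", "popularity": 120, "match": 0.40, "lean": 0.0},
    {"title": "Outrageous viral claim", "popularity": 9800, "match": 0.70, "lean": 0.9},
    {"title": "Story matching your views", "popularity": 450, "match": 0.95, "lean": -0.6},
]

# Strategy 1: maximize the chance this user reads it -> reinforces existing views.
by_match = max(stories, key=lambda s: s["match"])

# Strategy 2: show what is most popular -> rewards the loudest, most outrageous.
by_popularity = max(stories, key=lambda s: s["popularity"])

# Strategy 3: pick the most "balanced" story -> but someone has to assign the
# lean scores, so the bias merely moves into whoever labels the stories.
by_balance = min(stories, key=lambda s: abs(s["lean"]))

for name, pick in [("match", by_match), ("popular", by_popularity), ("balanced", by_balance)]:
    print(f"{name:>8}: {pick['title']}")
```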
It is clear that voters generally seek information that supports their preconceptions. Republican voters prefer to get their news from Fox News, but that is a conscious choice they make. The information they are fed by Google and Facebook is selected by someone else. Could the algorithms these companies use be contributing to the extreme political polarization that has developed? How would anyone know?
The interested reader might find these articles
informative: