Weapons of Math Destruction Summary and Review

by Cathy O’Neil

Has Weapons of Math Destruction by Cathy O’Neil been sitting on your reading list? Pick up the key ideas in the book with this quick summary.

Maybe you’ve heard of big data and how algorithms using that data are providing new insights into consumer patterns, politics and social media platforms. Indeed, algorithms are everywhere. They guide our social media feeds and sift the advertisements we see. And they also influence human life in many other ways; often, they now dictate which jobs and schools we have access to.

You might think that decisions – about hiring, about admissions – would be much fairer if based on objective calculations rather than on someone’s gut feeling. After all, algorithms judge everyone on the same scale, right? Well, as you’ll learn in this book summary, the situation is a bit more complex than that.

In this summary of Weapons of Math Destruction by Cathy O’Neil, you’ll find out

  • why manipulating Facebook feeds can improve voter turnout;
  • how an algorithm used to rank US universities increased tuition by 500 percent; and
  • why Florida drivers with flawless driving records pay higher premiums than drunk drivers.  

Weapons of Math Destruction Key Idea #1: Algorithms have the potential to sway the voting public and disrupt democracy.

In many ways, the internet helps democracy. It’s a public platform that supports independent voices. But that same platform is also open to powerful propaganda machines that can manipulate the conversation.

Research has shown that social media and search engines are especially vulnerable to algorithms that can influence the decisions of unsuspecting users.

Researchers Robert Epstein and Ronald Robertson found proof of this after asking undecided voters in the United States and India to find information about a handful of different political candidates.

The catch was that the voters were told to use a specific search engine, unaware that it had been programmed with an algorithm that favored one candidate over all the others. As a result, the participants showed a 20-percent shift toward voting for the algorithm's preferred choice.

A similar study was conducted on Facebook just prior to the 2012 elections: Solomon Messing, of the Pew Research Center, designed an algorithm that adjusted the news feeds of two million users to favor political news over all other posts.

Facebook surveyed the participants before and after the elections, and the results showed that voter turnout among these users was about 3 percent higher than had been expected before the feeds were adjusted to favor political news.
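The underlying mechanics aren’t exotic. Here is a minimal, hypothetical sketch of a feed ranker that upweights political posts before sorting; the post fields and boost factor are invented, and Facebook’s real system is of course far more elaborate:

```python
# Hypothetical sketch of a feed ranker that boosts political posts.
# Post fields and the boost factor are illustrative, not Facebook's actual system.

def rank_feed(posts, political_boost=2.0):
    """Return posts sorted by engagement score, with political posts upweighted."""
    def score(post):
        base = post["engagement"]      # e.g., likes and comments from friends
        if post["topic"] == "politics":
            base *= political_boost    # the only change the experiment needed
        return base
    return sorted(posts, key=score, reverse=True)

feed = [
    {"id": 1, "topic": "pets",     "engagement": 120},
    {"id": 2, "topic": "politics", "engagement": 80},
    {"id": 3, "topic": "sports",   "engagement": 95},
]

print([p["id"] for p in rank_feed(feed)])  # politics post jumps to the top: [2, 1, 3]
```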

While we can’t know for sure whether certain search-engine or social-media algorithms are designed to influence users, it is clear that there is vast potential for abuse.

It is also clear that political candidates are well aware of the power these algorithms have to garner votes.

Heading into the 2012 elections, Obama had a team of data analysts who interviewed thousands of voters and used their answers, in addition to demographic and consumer data, to create mathematical profiles.

These profiles were then used to find similar people in national databases; the analysts could assume that people with similar interests and backgrounds would also share the same political views. Once people with similar data were grouped together, the analysts created algorithms that made sure each group received specific ads tailored to its tastes.

So those who showed evidence of having environmental concerns, for instance, were targeted for ads that highlighted Obama’s environmental policies.
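In broad strokes, this kind of targeting works by measuring how closely an unknown person’s data resembles the profile of a surveyed voter. Below is a deliberately simplified, hypothetical sketch using cosine similarity on a few made-up numeric features; the campaign’s actual models and data were far richer:

```python
# Hypothetical sketch of profile-based ad targeting.
# Feature names, values, and the similarity threshold are invented for illustration.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Profile built from interviews: [age bracket, income bracket, environmental-interest score]
surveyed_green_voter = [3, 2, 9]

# People found in a consumer/demographic database, never interviewed
database = {
    "voter_a": [3, 2, 8],   # closely resembles the surveyed profile
    "voter_b": [5, 6, 1],   # does not
}

for name, features in database.items():
    if cosine_similarity(surveyed_green_voter, features) > 0.95:
        print(f"{name}: show the environmental-policy ad")
    else:
        print(f"{name}: show a different ad")
```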

Weapons of Math Destruction Key Idea #2: Algorithms designed to predict crime also reinforce prejudices.

Predicting future crimes sounds like something out of a Philip K. Dick science-fiction story, but it is, in fact, part of today’s reality. Police departments are using algorithms to target prospective criminals.

But this software is far from perfect, and the algorithms have led to cities being unevenly policed and certain people being unfairly singled out.

How has this happened?

The algorithms rely on historical data to pinpoint where crimes are most likely to occur, and it’s the police who determine which data is fed into the algorithm.

Part of the problem is that the police tend to focus on specific kinds of crimes, such as “nuisance crimes,” which include vagrancy and certain drug-related offenses.

Given that crimes like these tend to occur in poor neighborhoods, the analysis will end up being completely skewed toward these parts of the city. As a result, the police send the majority of their patrol units to the streets of poor neighborhoods, making the residents feel unfairly targeted. This also leads to neglect of wealthier neighborhoods, which become more vulnerable to criminal activity.
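The feedback loop described here is easy to reproduce in miniature. In the hypothetical sketch below, with invented numbers, patrols go wherever recorded crime is highest, and the extra patrols generate extra recorded nuisance crime, so the same neighborhood keeps being selected regardless of what is actually happening on the ground:

```python
# Hypothetical sketch of a predictive-policing feedback loop.
# Numbers are invented to illustrate the dynamic, not drawn from any real system.

recorded_crime = {"poor_district": 50, "wealthy_district": 45}

for year in range(1, 4):
    # The "prediction": patrol the district with the most recorded crime so far.
    target = max(recorded_crime, key=recorded_crime.get)

    # More patrols mean more low-level offenses get observed and recorded there.
    recorded_crime[target] += 20      # heavily patrolled district
    other = next(d for d in recorded_crime if d != target)
    recorded_crime[other] += 5        # lightly patrolled district

    print(f"Year {year}: patrols sent to {target}, data now {recorded_crime}")

# The poor district pulls further ahead every year, which in turn guarantees
# it is chosen again: the data reflects policing activity, not underlying crime.
```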

Similar built-in biases skew the data that police use to predict potential violent crimes as well, leading to innocent people getting labeled as dangerous.

In 2009, the Chicago Police Department received a grant to develop new crime-prediction software. They used that money to build an algorithm that produced a list of the 400 people most likely to be involved in a homicide.

One of those people was Robert McDaniel, a 22-year-old who became the focus of police attention. One day, in 2013, a police officer even visited McDaniel’s home to let him know that the police had their eyes on him.

But McDaniel had never been charged with any crime. He ended up being red-flagged by the algorithm solely because of the people he followed on social networks and the criminals who happened to live in his neighborhood.

In short, growing up in a poor neighborhood is all it takes to get you labeled as potentially dangerous.
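A person-level risk score of this kind can be as crude as counting connections. The minimal sketch below, with invented weights and thresholds (not the Chicago Police Department’s actual model), shows how someone with no record at all can still end up flagged:

```python
# Hypothetical sketch of a connection-based risk score.
# All names, weights, and the threshold are invented for illustration.

def risk_score(person):
    score = 0
    score += 2 * len(person["follows_with_record"])     # social-media connections
    score += 3 * len(person["neighbors_with_record"])   # people living nearby
    score += 10 * len(person["own_charges"])            # the person's own record
    return score

robert = {
    "own_charges": [],                                   # never charged with anything
    "follows_with_record": ["acct_1", "acct_2"],
    "neighbors_with_record": ["resident_1", "resident_2", "resident_3"],
}

if risk_score(robert) >= 10:
    print("flagged for police attention")   # triggered with zero charges of his own
```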

To be fair, crime prediction algorithms are designed to protect people, but they can very easily make people’s lives worse than they were before.

As we’ll see in the next key idea, a similar problem plagues the insurance business.

Weapons of Math Destruction Key Idea #3: Insurance companies are exploiting people with bad credit.

If you’re familiar with insurance agencies, you might be aware that they’ll ask different clients to pay different premiums. And, no, they don’t just do this at random. They use the specific data they’ve collected on their customers.

For car insurance, algorithms calculate premiums based on how many prior accidents a customer has been in, as well as on their credit reports.

In fact, in some areas, those credit reports are given more weight than a customer’s driving record.

Such is the case in Florida, where adults with clean driving records and poor credit reports end up paying an average of $1,552 more per year than drivers with excellent credit and a history of drunk driving.

This leads to low-income drivers with impeccable driving records having to pay more for insurance than wealthier drivers.
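A pricing model like this boils down to a weighted sum in which the credit term simply outweighs the driving-record term. The sketch below is a hypothetical illustration with invented weights and figures, not any insurer’s real formula:

```python
# Hypothetical premium formula weighting credit more heavily than driving record.
# Base premium, weights, and driver data are invented for illustration.

def annual_premium(base, credit_score, dui_count):
    credit_penalty = (850 - credit_score) * 4.0   # dominant term: poor credit costs a lot
    driving_penalty = dui_count * 300.0           # smaller term: even a DUI costs less
    return base + credit_penalty + driving_penalty

clean_record_poor_credit = annual_premium(base=800, credit_score=580, dui_count=0)
dui_excellent_credit     = annual_premium(base=800, credit_score=820, dui_count=1)

print(round(clean_record_poor_credit))  # 1880
print(round(dui_excellent_credit))      # 1220 -- the drunk driver pays less
```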

And so begins the vicious cycle: By being forced to pay more for insurance, cash-strapped families will be more likely to miss a payment on another bill and worsen their credit score even further. And then, when their current insurance expires, the rate on their next contract will go up even higher, even if they haven’t broken a single traffic rule.

Some insurance companies are even using algorithms to calculate the likelihood that a customer will shop around for cheaper prices.

The insurance company Allstate does this with a model built on consumer and demographic data. If the algorithm suggests that a customer is likely to search for lower prices, that customer is offered a reduced rate, sometimes as much as 90 percent off the average.

However, if a customer isn’t likely to shop around, their rate can increase by as much as 800 percent.

But what Allstate’s algorithm is really doing is taking advantage of poor people without formal education, since this is the demographic that is less likely to shop around for other options.
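This practice is often called price optimization: the quote depends not just on risk, but on how likely the customer is to comparison-shop. Here is a hypothetical sketch of that logic, with invented probabilities and adjustment factors; Allstate’s actual model is proprietary:

```python
# Hypothetical sketch of price optimization on predicted shopping behavior.
# The probability thresholds and adjustment bounds are invented for illustration.

def optimized_quote(base_rate, prob_shops_around):
    if prob_shops_around > 0.7:
        # Likely to compare prices: offer a steep discount to keep the customer.
        return base_rate * 0.5
    if prob_shops_around < 0.2:
        # Unlikely to look elsewhere: the model can safely charge far more.
        return base_rate * 3.0
    return base_rate

print(optimized_quote(1000, prob_shops_around=0.9))   # 500.0
print(optimized_quote(1000, prob_shops_around=0.1))   # 3000.0 -- same risk, triple the price
```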

Weapons of Math Destruction Key Idea #4: The job market is also being unfairly influenced by algorithms.

It can be difficult to spot the best workers out of a pool of hundreds of applicants. So it makes sense to use a variety of tests, in combination with the help of data companies, to sort through the results.

But these tests have proven to be restrictive for certain kinds of people, especially when it comes to personality tests, which have made it next to impossible for someone like Kyle Behm to get a job.

Behm had to drop out of his classes at Vanderbilt University to get treatment for his bipolar disorder. But in 2012, he was healthy enough to start looking for a part-time job.

So he applied to Kroger, a supermarket chain, after a friend told him about an open position. When he was turned down, he checked with his friend, who told him that he’d been “red-lighted” due to the results of his personality test: the algorithm had tagged Behm as “likely to underperform.”

Unfortunately, the same thing happened to Behm at all the other minimum-wage jobs he applied for. So, with the help of his father, he filed a lawsuit against seven different companies under the Americans with Disabilities Act. As of 2016, the suit was still pending.

Part of the problem is that the companies handling the data can make some troubling mistakes.

When Catherine Taylor applied for a job with the Red Cross in Arkansas, she was rejected and told that it was due to a criminal charge for intent to manufacture and sell methamphetamine. This seemed odd to Catherine, since her record was clean.

When she investigated further, she found that those charges belonged to another Catherine Taylor who happened to have the same birthday.

She also discovered that it was the company providing the data to the Red Cross that had made the mistake, which prompted her to do a little more research. In the end, Catherine discovered that at least ten data brokers had made the same mistake, linking her to a serious crime that she’d never committed.

Weapons of Math Destruction Key Idea #5: University rankings have negative effects on higher education.

It’s no secret that colleges in the United States have gotten quite expensive over the past 30 years, but few people know that one of the main drivers of rising tuition is a single publication.

In the 1980s, US News and World Report began using an algorithm that ranked the quality of US colleges based on data it believed indicated success, such as SAT scores and acceptance rates.
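At its core, a ranking like this is just a weighted composite of a few proxies for quality. The toy sketch below, with invented weights, shows why a school can climb the list simply by pushing its acceptance rate down:

```python
# Toy sketch of a ranking formula built from proxy metrics.
# The weights and school data are invented; this is not US News's actual formula.

def ranking_score(avg_sat, acceptance_rate):
    return 0.7 * (avg_sat / 1600) + 0.3 * (1 - acceptance_rate)

school_before = ranking_score(avg_sat=1200, acceptance_rate=0.60)
school_after  = ranking_score(avg_sat=1200, acceptance_rate=0.30)  # same school, fewer offers

print(round(school_before, 3))  # 0.645
print(round(school_after, 3))   # 0.735 -- a higher rank without teaching anyone better
```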

Suddenly, these rankings became crucially important for all the universities involved, and they all set out to improve their performance in the areas the US News algorithm measured. But to do that, they needed resources.

This scramble for money is largely responsible for tuition going through the roof. Between 1985 and 2013, the cost of higher education increased by 500 percent.

These rankings weren’t the only factor that contributed to this increase, but they certainly encouraged the schools to raise their costs.

One of the most damaging things US News did was to include acceptance rates in their formula, as it completely ruined the concept of a “safety school.”

Traditionally, a safety school was a college that had a high acceptance rate, and would serve as a good backup plan for a student who was also applying to a more prestigious school like Harvard or Yale.

But since US News gave schools with a lower acceptance rate a better position in the rankings, many schools began lowering their rates and sending out fewer acceptance letters.

In order to keep their actual enrollment numbers the same, they had to choose which applicants to reject. Looking at their numbers, the safety schools could see that only a small percentage of top students would choose them over the prestigious schools, so they concluded that rejecting those top applicants wouldn’t do any harm.

But even if only some of those high-performing students had chosen to attend, the school would have benefited. Worse, rejecting high performers out of hand ruined the backup plans of many good students.

Like all the other algorithms we looked at, what started out as a good idea ended up doing far more harm than good.

In Review: Weapons of Math Destruction Book Summary

The key message in this book:

Algorithms were initially created to be neutral and fair by avoiding all-too-human biases and faulty logic. However, many of the algorithms used today, from the insurance market to the justice system, have incorporated the very prejudices and misconceptions of their designers. And since these algorithms operate on a massive scale, these biases lead to millions of unfair decisions.

Actionable advice:

Write machine-friendly resumes.

Most companies today use automatic resume readers. To increase your chances of getting the job, modify your resume with the automatic reader in mind. Here are some simple tips you can always apply (a sketch of the kind of screening these tips guard against follows the list):

  • Use simple fonts like Arial and Courier
  • Stay away from images, which can’t be processed by the reader
  • Don’t use symbols – even simple ones like arrows can confuse the reader
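To see why these tips matter, here is a minimal, hypothetical sketch of how an automated reader might score a resume. The keywords and threshold are invented; the point is that the screener can only match against text it manages to extract, so anything locked inside an image or scrambled by odd symbols never even reaches the keyword check:

```python
# Minimal hypothetical sketch of a keyword-based resume screener.
# The keyword list and pass criterion are invented for illustration.

REQUIRED_KEYWORDS = {"python", "sql", "inventory", "customer service"}

def score_resume(extracted_text):
    """Score only what the parser could actually extract as plain text."""
    text = extracted_text.lower()
    hits = {kw for kw in REQUIRED_KEYWORDS if kw in text}
    return len(hits) / len(REQUIRED_KEYWORDS)

plain_text_resume = "Experienced in Python, SQL, inventory management and customer service."
image_heavy_resume = ""   # skills were embedded in a graphic, so nothing was extracted

print(score_resume(plain_text_resume))   # 1.0 -- passes
print(score_resume(image_heavy_resume))  # 0.0 -- rejected before a human ever sees it
```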