Book Summary: Weapons of Math Destruction


Cathy O’Neil’s Weapons of Math Destruction, or WMD for short, offers a moral argument that with big data comes big responsibility. She uncovers what she refers to as the inherent prejudice, racism, and economic unfairness in predictive algorithms and machine learning models; no analytical model is safe from her fierce criticism. With an opening salvo that our very democracy hangs on the morality of our mathematical models, O’Neil unveils the destructive power of the models that shape our daily lives.

O’Neil defines a WMD by the following characteristics:

  1. It is opaque, as if a hidden veil sits between the subject and the controller
  2. It creates a pernicious feedback loop: the model’s output adversely affects future decisions made about the subject, feeding further damaging data back into the model
  3. It disadvantages a specific group, most notably minorities
  4. It presumes guilt, treating innocence as a mere constraint in the model
  5. It favors efficiency, resulting in further model variance
  6. It cannot account for fairness, since it relies solely on numbers and data
  7. It lacks a corrective feedback structure, generating noise without the signal needed to readjust

O’Neil captures the core dynamic of a WMD as follows:

“Do you see the paradox? An algorithm processes a slew of statistics and comes up with a probability that a certain person might be a bad hire, a risky borrower, a terrorist, or a miserable teacher. That probability is distilled into a score, which can turn someone’s life upside down. And yet when the person fights back, ‘suggestive’ countervailing evidence simply won’t cut it. The case must be ironclad. The human victims of WMDs, we’ll see time and again, are held to a far higher standard of evidence than the algorithms themselves.”

Some intriguing examples O’Neil cites are familiar to the analytics community. One is the star high school student who applies to a state university, only to be turned away because the school’s admissions model predicts that star students have a high probability of turning down so-called “safety schools” for more prestigious institutions. Knowing that a good university ranking relies on a low acceptance rate, the state school rejects the star student, and the cycle perpetuates itself in a pernicious feedback loop.

Another example is the predictive policing models that city police departments rely on. Using inputs such as the number of ATMs, geographic features, and classifications of high-risk zones, these models attempt to optimize police resources by sending patrol cars to the areas with the highest predicted probability of crime. Unfortunately, those areas tend to be lower-income neighborhoods with large minority populations. The sheer act of increasing police presence means that minor crimes, such as drug use, are more likely to be seen, acted upon, and enforced. The additional police reports and arrests feed back into the machine learning model, further biasing it toward placing even more police units in the same area. This is another prime example of a pernicious feedback loop that presumes guilt without any model readjustment.
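To make this dynamic concrete, here is a toy simulation of the patrol-allocation loop described above. It is my illustration rather than anything from the book, and the neighborhoods, offense rates, and patrol counts are all made-up assumptions.

```python
import random

# Toy simulation (not from the book) of the self-reinforcing patrol loop
# described above. Both hypothetical neighborhoods have the SAME underlying
# rate of minor offenses; only the patrol allocation differs.
TRUE_OFFENSE_RATE = 0.05        # identical in both neighborhoods
OBSERVATIONS_PER_PATROL = 20    # chances per patrol to witness an offense
TOTAL_PATROLS = 50

# Hypothetical starting point: slightly more recorded arrests in "B",
# e.g. because it happened to be patrolled more heavily in the past.
recorded_arrests = {"A": 100, "B": 120}

random.seed(0)
for year in range(1, 11):
    total = sum(recorded_arrests.values())
    # Patrols are allocated in proportion to past recorded arrests.
    patrols = {hood: round(TOTAL_PATROLS * n / total)
               for hood, n in recorded_arrests.items()}
    for hood, n_patrols in patrols.items():
        # More patrols -> more offenses observed and recorded, even though
        # the true offense rate is the same everywhere. These new arrests
        # feed straight back into next year's allocation.
        observed = sum(random.random() < TRUE_OFFENSE_RATE
                       for _ in range(n_patrols * OBSERVATIONS_PER_PATROL))
        recorded_arrests[hood] += observed
    print(f"year {year}: {recorded_arrests}")
```

Because patrols follow past recorded arrests and new recorded arrests follow patrols, the neighborhood that starts with more arrests on the books keeps receiving more patrols and, on average, keeps extending its recorded lead, even though the underlying offense rate is identical. The model never observes what it does not patrol, so it gets no signal to correct itself.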

As these examples show, O’Neil argues that more data is not the panacea that will fix our predictive problems. If an algorithm favors one segment of the population over another, or is never readjusted through model tuning (accuracy, precision, recall, or their combination in the F1 and F-beta scores; a short worked sketch of these metrics follows the list below), then no matter the size of the dataset, the model will still fail. However, there is hope! O’Neil offers the following suggestions to improve our models and restore trust in a fairer, more robust set of algorithms.

  1. Ensure there is an opportunity for outside inspection: let the general public understand the models that judge them, be it credit e-scores or teacher evaluation criteria
  2. Require users to sign off before their online actions can be collected as data and resold; Europe follows a similar model that could be copied
  3. Make public the assumptions that the model is built upon
  4. Audit algorithms as if they were financial institutions
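
As a footnote to the tuning metrics mentioned before the list, here is a minimal sketch of how accuracy, precision, recall, and the F-beta scores relate; the labels and predictions are made-up values used purely for illustration.

```python
# Minimal sketch of the tuning metrics referenced earlier (accuracy, precision,
# recall, and the F-beta family), computed by hand on made-up labels.
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]   # hypothetical ground truth
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 1]   # hypothetical model output

tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # true positives
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false positives
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # false negatives
tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))  # true negatives

accuracy = (tp + tn) / len(y_true)
precision = tp / (tp + fp)   # of everyone the model flagged, how many were correct?
recall = tp / (tp + fn)      # of everyone who should be flagged, how many were found?

def f_beta(p, r, beta=1.0):
    """Weighted harmonic mean of precision and recall; beta=1 gives F1."""
    return (1 + beta**2) * p * r / (beta**2 * p + r)

print(accuracy, precision, recall)
print(f_beta(precision, recall))             # F1
print(f_beta(precision, recall, beta=2.0))   # F2 weights recall more heavily
```

Because beta values above 1 weight recall more heavily than precision, the choice of beta is itself a judgment about which kind of error a model is permitted to make.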

While O’Neil’s work falls squarely into the category of personal rights and privacy in a data-filled world, much of her anecdotal evidence is convincing. The data we implicitly hand over through our online actions, voting behavior, consumer choices, and personal health should not be a product for others, but one that belongs to us. This is a question that will plague future technologists as we come to terms with what is and isn’t in the public domain.
