Cathy’s Book is Out!

Cathy O’Neil’s book Weapons of Math Destruction is out, and it’s already been longlisted for a National Book Award! Here is a review of the book that I posted on Amazon.com:

So here you are on Amazon’s web page, reading about Cathy O’Neil’s new book, Weapons of Math Destruction. Amazon hopes you buy the book (and so do I, it’s great!). But Amazon also hopes it can sell you some other books while you’re here. That’s why, in a prominent place on the page, you see a section entitled:

Customers Who Bought This Item Also Bought

This section is Amazon’s way of using what it knows — which book you’re looking at, and sales data collected across all its customers — to recommend other books that you might be interested in. It’s a very simple, and successful, example of a predictive model: data goes in, some computation happens, a prediction comes out. What makes this a good model? Here are a few things:

  1. It uses relevant input data. The goal is to get people to buy books, and the input to the model is what books people buy. You can’t expect to get much more relevant than that.
  2. It’s transparent. You know exactly why the site is showing you these particular books, and if the system recommends a book you didn’t expect, you have a pretty good idea why. That means you can make an informed decision about whether or not to trust the recommendation.
  3. There’s a clear measure of success and an embedded feedback mechanism. Amazon wants to sell books. The model succeeds if people click on the books they’re shown, and, ultimately, if they buy more books, both of which are easy to measure. If clicks on, or sales of, related items go down, Amazon will know, and can investigate and adjust the model accordingly.
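
To make this concrete, here is a toy version of such a recommender in Python: a minimal sketch of the “count co-purchases, suggest the most frequent companions” idea. The data is made up, and Amazon’s real system is proprietary and surely far more sophisticated; this only captures the flavor of the computation.

```python
from collections import Counter
from itertools import combinations

# Hypothetical purchase histories: one set of titles per customer.
purchases = [
    {"Weapons of Math Destruction", "Doing Data Science"},
    {"Weapons of Math Destruction", "Doing Data Science",
     "The Signal and the Noise"},
    {"Weapons of Math Destruction", "The Signal and the Noise"},
    {"Doing Data Science", "Python for Data Analysis"},
]

# Count how often each pair of books appears in the same basket.
co_counts = Counter()
for basket in purchases:
    for pair in combinations(sorted(basket), 2):
        co_counts[pair] += 1

def also_bought(book, k=3):
    """The k titles most often bought together with `book`."""
    scores = Counter()
    for (a, b), n in co_counts.items():
        if a == book:
            scores[b] = n
        elif b == book:
            scores[a] = n
    return scores.most_common(k)

print(also_bought("Weapons of Math Destruction"))
# -> [('Doing Data Science', 2), ('The Signal and the Noise', 2)]
```

Data goes in (the baskets), a computation happens (pair counting), and a prediction comes out (the most frequent companions); every step is simple enough to inspect, which is exactly the transparency point above.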

Weapons of Math Destruction reviews, in an accessible, non-technical way, what makes models effective — or not. The emphasis, as you might guess from the title, is on models with problems. The book highlights many important ideas; here are just a few:

  1. Models are more than just math. Take a look at Amazon’s model above: while there are calculations (simple ones) inside, it’s people who decide what data to use, how to use it, and how to measure success. Math is not a final arbiter, but a tool to express, in a scalable (i.e., computable) way, the values that people explicitly decide to emphasize. Cathy says that “models are opinions embedded in mathematics” (or computer code). She highlights that when we evaluate teachers based on students’ test scores, or assess someone’s insurability as a driver based on their credit record, we are expressing opinions: that a successful teacher should boost test scores, or that responsible bill-payers are more likely to be responsible drivers.
  2. Replacing what you really care about with what you can easily get your hands on can get you in trouble. In Amazon’s recommendation model, we want to predict book sales, and we can use book sales as inputs; that’s a good thing. But what if you can’t directly measure what you’re interested in? In the early 1980s, the magazine U.S. News wanted to report on college quality. Unable to measure quality directly, the magazine built a model based on proxies, primarily outward markers of success, like selectivity and alumni giving. Predictably, college administrators, eager to boost their ratings, focused on these markers rather than on education quality itself. For example, to boost selectivity, they encouraged more students, even unqualified ones, to apply. This is an example of gaming the model (a toy version of such a proxy score appears after this list).
  3. Historical data is stuck in the past. Typically, predictive models use past history to predict future behavior. This can be problematic when part of the intention of the model is to break with the past. To take a very simple example, imagine that Cathy is about to publish a sequel to Weapons of Math Destruction. If Amazon uses only past purchase data, the Customers Who Bought This Item Also Bought list would completely miss the connection between the original and the sequel (a sketch after this list shows this cold-start gap). This means that if we don’t want the future to look just like the past, our models need to use more than just history as inputs. A chapter about predictive models in hiring is largely devoted to this idea. A company may think that its past, subjective hiring system overlooks qualified candidates, but if it replaces the HR department with a model that sifts through resumes based only on the records of past hires, it may just be codifying (pun intended) past practice. A related idea is that, in this case, rather than adding objectivity, the model becomes a shield that hides discrimination. This takes us back to Models are more than just math and also leads to the next point:
  4. Transparency matters! If a book you didn’t expect shows up on the Customers Who Bought This Item Also Bought list, it’s easy for Amazon to check whether it really belongs there. The model is simple enough to understand and audit, which builds confidence and also decreases the likelihood that it gets used to obfuscate. The value-added model for teachers, which evaluates teachers through their students’ standardized test scores, is a very different story. Among its other drawbacks, this model is especially opaque in practice, both because of its complexity and because many implementations are built by outside contractors. Models need to be openly assessed for effectiveness, and when teachers receive bad scores without knowing why, or when a single teacher’s score fluctuates dramatically from year to year without explanation, it’s hard to have any faith in the process.
  5. Models don’t just measure reality; sometimes they amplify it, or create a reality of their own. Put another way, models of human behavior create feedback loops, often becoming self-fulfilling prophecies. There are many examples of this in the book, especially focusing on how models can amplify economic inequality. To take one example, a company in the center of town might notice that workers with longer commutes tend to turn over more frequently, and adjust its hiring model to favor job candidates who can afford to live in town. This makes it easier for wealthier candidates to find jobs than poorer ones, and perpetuates a cycle of inequality (a toy simulation after this list shows the mechanism). There are many other examples: predictive policing, prison sentences based on predicted recidivism, e-scores for credit. Cathy talks about a trade-off between efficiency and fairness, and, as you can again guess from the title, argues for fairness as an explicit value in modeling.
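
For point 2, a toy proxy score shows the gaming problem in miniature. The weights and numbers below are entirely invented; the point is that the proxy can move while the thing it stands for stays put.

```python
# A toy "college quality" score built from proxies (weights invented).
def ranking_score(admit_rate, alumni_giving_rate):
    # A lower admit rate ("selectivity") and higher giving both raise the score.
    return 0.6 * (1 - admit_rate) + 0.4 * alumni_giving_rate

admits, applicants, giving = 2_000, 10_000, 0.15
print(round(ranking_score(admits / applicants, giving), 2))  # 0.54

# Gaming the model: solicit twice as many applications, admit the same class.
applicants = 20_000
print(round(ranking_score(admits / applicants, giving), 2))  # 0.6
# The score improved, but nothing about the education changed.
```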

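Point 3 falls out of the recommender sketch above: a brand-new sequel has no purchase history yet, so a purely historical model draws a blank. Continuing that sketch (the sequel title and the author table are hypothetical):

```python
# A hypothetical, just-published sequel with no sales history.
new_book = "Weapons of Math Destruction II"
print(also_bought(new_book))  # -> [] : history alone says nothing

# One illustrative remedy: fall back on non-historical metadata, e.g. authorship.
authors = {
    "Weapons of Math Destruction": "Cathy O'Neil",
    "Weapons of Math Destruction II": "Cathy O'Neil",
    "Doing Data Science": "Cathy O'Neil",
    "The Signal and the Noise": "Nate Silver",
}

def with_author_fallback(book, k=3):
    recs = also_bought(book, k)
    if not recs:  # cold start: no co-purchase history yet
        recs = [(title, 0) for title, who in authors.items()
                if who == authors.get(book) and title != book]
    return recs[:k]

print(with_author_fallback(new_book))
# -> [('Weapons of Math Destruction', 0), ('Doing Data Science', 0)]
```

And point 5, the feedback loop, can be watched in a toy simulation. Every number below is invented; the only point is the mechanism: the model prefers in-town candidates, employment income is what lets people live in town, and so the model’s premise becomes truer each year.

```python
import random
from statistics import mean

random.seed(0)

# Each candidate has some "wealth"; only the wealthy live in town (made up).
wealth = [random.gauss(50, 10) for _ in range(1000)]
RENT_THRESHOLD = 55  # hypothetical wealth needed to live near the office

for year in range(1, 6):
    for i, w in enumerate(wealth):
        in_town = w > RENT_THRESHOLD
        hire_prob = 0.6 if in_town else 0.1  # the model's (invented) preference
        if random.random() < hire_prob:
            wealth[i] += 5  # a year of salary, in arbitrary units
    print(f"year {year}: mean wealth in town "
          f"{mean(w for w in wealth if w > RENT_THRESHOLD):.1f}, "
          f"out of town {mean(w for w in wealth if w <= RENT_THRESHOLD):.1f}")
```

The simulation is crude, but it shows the shape of the argument: the model doesn’t just predict who will stay in the job, it helps produce the sorted world it was trained on.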
Weapons of Math Destruction is not a math book, and it is not investigative journalism. It is short — you can read it in an afternoon — and it doesn’t have time or space for either detailed data analysis (there are no formulas or graphs) or complete histories of the models she considers. Instead, Cathy sketches out the models quickly, perhaps with an individual anecdote or two thrown in, so she can get to the main point — getting people, especially non-technical people, used to questioning models. As more and more aspects of our lives fall under the purview of automated data analysis, that’s a hugely important undertaking.

 
