Friday, March 9, 2018

How Today’s Pros Solve Math Problems: Part 2

By Keith Devlin

You can follow me on Twitter @profkeithdevlin

CHANGE OF PLAN: When I wrote last month’s post, I said I would conclude the description of my Nueva School course this time. But when I sat down to write up that concluding piece, I realized it would require not one but two further posts. The course itself was the third iteration of an experiment I had tried out on a university class of non-science majors and an Adult Education class. This series of articles is my first attempt to describe it and articulate the thinking behind it. As is often the case, when you try to describe something new (at least it was new to me), you realize how much background experience and unrecognized tacit knowledge you have drawn upon. In this post, I’ll try to capture those contextual issues. Next month I’ll get back to the course itself.

We all know that mathematics is not always easy. It requires practice, discipline and patience,  as do many other things in life.  And if learning math is not easy, it follows that teaching math is not easy either. But it can help both learner and teacher if they know what the end result is supposed to be.

In my experience, many learners and teachers don’t know that. In both cases, the reason they don’t know it is that no one has bothered to tell them. There is a general but unstated assumption that everyone knows why the teaching and learning of mathematics is obligatory in every education system in the world.  But do they really?

There are two (very different) reasons for teaching and learning mathematics.

One reason is that it is a way of thinking that our species has developed over several thousand years, that provides wonderful exercise for the mind, and yields both challenging intellectual pleasure and rewarding aesthetic beauty to many who can find their way sufficiently far into it. In that respect, it is like music, drama, painting,  philosophy, natural sciences, and many other intellectual human activities. This is a perfectly valid reason to provide everyone with an opportunity to sample it, and make it possible for those who like what they see to pursue it as far as they desire. What it is not, is a valid reason for making learning math obligatory throughout elementary, middle, and high school education.

The argument behind math’s obligatory status in education is that it is useful; more precisely, it is useful in the practical, everyday world. This is the view of mathematics I am adopting in the short series of “Devlin’s Angle” essays of which this is the third. (There will be one more next month. See episode 1 here and episode 2 here.)

Indeed, mathematics is useful in the everyday practical world. In fact, we live in an age where mathematics is more relevant to our lives than at any previous time in human history. 

It is, then, perfectly valid to say that we force each generation of school students to learn math because it is a useful skill in today’s world. True, there are plenty of people who do just fine without having that skill, but they can do so only because there are enough other people around who do have it.

But let’s take that argument a step further. How do you teach mathematics so that it prepares young people to use it in the world? Clearly, you start by looking at the way people currently use math in the world, and figure out how best to get the next generation to that point. (Accepting that by the time those students finish school, the world’s demands may have moved on a bit, so those new graduates may have a bit of catch up and adjustment to make.)

If the way the professionals use math in the world changes, then the way we teach it should change as well.  Don’t you think? That’s certainly what has happened in the past.

For instance, in the ninth century, the Arabic- and Persian-speaking traders around Baghdad developed a new, and in many instances more efficient, way to do arithmetic calculations at scale, by using logical reasoning rather than arithmetic. Their new system, which quickly became known as al-jabr after one of the techniques they developed to solve equations, soon found its way into their math teaching.

When Hindu-Arabic arithmetic was introduced into Europe in the thirteenth century, the school systems fairly quickly adopted it into their arithmetic teaching as well. (It took a few decades, but knowledge moved no faster than the pace of a packhorse back then. I tell the story of that particular mathematics-led revolution in my 2011 book The Man of Numbers.)

The development of modern methods of accounting and the introduction of financial systems such as banks and insurance companies, which started in Italy around the same time, also led to new techniques being incorporated into the mathematical education of the next generation.

Later, when the sixteenth century French mathematician François Viète introduced symbolic algebra, it too became part of the educational canon.

In each case, those advances in mathematics were introduced to make mathematics easier to use and to broaden its applications. There was never any question of “What is this good for?” People eagerly grabbed hold of each new development and made everyday use of it as soon as it became available.

The rise of modern science (starting with Galileo in the seventeenth century) and later the Industrial Revolution in the nineteenth century, led to still more impetus to develop new mathematical concepts and techniques, though some of those developments were geared more toward particular groups of professionals. (Calculus, for example.)

To make it possible for an average student or worker to make use of each new mathematical concept or technique, sets of formal calculating rules (algorithmic procedures) were developed and refined. Once mastered, these made it possible to use the new mathematics to handle—in a practical way—the tasks and problems of the everyday world that those concepts and techniques had been developed to deal with in the first place.

As a result of all those advances, by the time the Baby Boomers came onto the educational scene in the 1950s, the curriculum of mathematical algorithms that were genuinely important in everyday life was fairly large. It was no longer possible for a student to understand all the underlying mathematical concepts and techniques behind the algorithms and procedures they had to learn. The best that they could do was master, by repetitive practice, the algorithmic procedures as quickly as possible and move on. [A few of us had difficulty doing that. We wanted to understand what was going on. By and large, we frustrated our teachers, who seemed to think we were simply troublesome slow learners. Some of us eventually learned to “play the mindless algorithm game” in class to pass the test, but kept struggling on our own to understand what was going on, setting us on a path to becoming mathematics professors in the 1970s.]

It was while that Boomer generation was going through the school system that mathematics underwent the first step of a seismic shift that within half a century would completely revolutionize the way mathematics was done. Not the pure mathematics practiced by a few specialists as an art—though that too would be impacted by the revolution to some extent. Rather, it was mathematics-as-used-in-the-world that would be radically transformed.

The first step of that revolution was the introduction of the electronic desktop calculator in 1961. Although mechanical desktop calculators had been available since the turn of the twentieth century, by and large their use was restricted to specialists—often called “computers”—in businesses. [I actually had a summer job with British Petroleum as such a specialist in my last three years at high school, and it was in my final year in that job that the office I worked in acquired its first electronic desktop calculator and the British Petroleum plant bought its first digital computer, both of which I learned to use.] But with the increasing availability of electronic calculators, and in particular the introduction of pocket-sized versions in the early 1970s, their use in the workplace rapidly became ubiquitous. Mathematics underwent a major change. Humans no longer needed to do arithmetic calculations themselves, and professionals using arithmetic in their work no longer did.

It was not too many years later that, one by one, electronic systems were developed that could execute more and more mathematical procedures and techniques, until, by the late 1980s, there were systems that could handle all the mathematical procedures that constituted the bulk of not only the school mathematics curriculum, but the entire undergraduate math curriculum as well. The final nail in the coffin of humans needing to execute mathematical procedures was the release of the mathematics system Mathematica in 1988, with the similarly powerful Maple (first developed at the University of Waterloo in the early 1980s) coming into wide use around the same time.

In the scientific, industrial, engineering, and commercial worlds, each new tool was adopted as soon as it became available, and since the early 1990s, professionals using mathematical techniques to carry out real-world tasks and solve real-world problems have done so using tools like Mathematica, Maple, and a host of others that have been developed.

Simultaneously, colleges and universities quickly incorporated the use of those new tools into their teaching. And while the cost of the more extensive tools put their use beyond most schools, the graphing calculator too was quickly brought into the upper grades of the K-12 system, after its introduction in 1990.

Yet, while the pros in the various workplaces changed over to the new human-machine-symbiotic way of doing math with little hesitation, most educators, exhibiting very wise instincts, proceeded with far more caution. The first wave of humans to adopt the new, machine-aided approach had all learned mathematics in an age when you had to do everything yourself. Back then, “computers” were people. For them, it was easy and safe to switch to executing a few keystrokes to make a computer run a procedure they had carried out by hand many times themselves. But how does a young person growing up in this new, digital-tools-world learn how to use those new tools safely and effectively?

To some extent, the answer is (and was) obvious. You teach not for smooth, proficient, accurate execution of procedures, but for broad, general understanding of the underlying mathematics. The downplay of execution and increased emphasis on understanding are crucial. Computers outperform us to ridiculous degrees (of speed, accuracy, size of dataset,  and information storage and retrieval) when it comes to execution of an algorithm. But they do not understand mathematics. They do not understand the problem you are working on. They do not understand the world. They don't understand anything. 

People, on the other hand, can understand, and have a genetically inherited desire to do so.

But just how do you go about teaching for the kind of understanding and mastery that is required for students to transition into worlds and workplaces dominated by a wide array of new mathematical tools, where they will encounter work practices that involve very little by way of hand execution of algorithms?

We know so little about how people learn (though we do know a whole lot more than we did just a few decades ago), that most of us with a stake in the education business are rightly concerned about making any change that would effectively be a massive experiment on an entire generation. So we can, and should, expect small steps, particularly in systemic education.

In the U.S., the mathematicians who developed the mathematical guidelines for the Common Core State Standards made a good first attempt at such a small step. True, it quickly ran into difficulties when it came to implementing the guidelines in a large and complex public educational system that is answerable to the public. But that is surely a temporary hiccup. Most of the problems at launch came from a lack of effective ways to assess the new kind of learning. Those problems can be and are being fixed. Which is just as well. For, although it’s possible to argue for tinkering with specific details of the Common Core State Standards guidelines, in terms of setting out a broad set of educational goals to aim for, there is no viable alternative first step. The pre-1970s educational approach is no longer an option.

In the meantime, individual teachers at some schools (particularly, but not exclusively, private schools) have been trying different approaches, in some cases sharing their experiences on the MTBOS (Math Twitter Blog-O-Sphere), making use of another technological tool (social media) now widely available. [For a quick overview of one global initiative to support and promote such innovations, the OECD’s Innovative Pedagogies for Powerful Learning project (IPPL), see this recent article from the Brookings Institution.]

The mini-course I gave at Nueva School in the San Francisco Bay Area last January, which I talked about in the first of this short series of essays, is one such experiment in teaching mathematics in a way that best prepares the next generation for the world they will live and work in after graduation. I tested it first with a class of non-science majors in Princeton in the fall of 2015 and then again with an Adult Education class at Stanford in the fall of 2017. The Nueva School class was its third outing.

With the above backstory now established, next month I will describe that course and talk about how today’s pros “do the math”. (Again, let me stress, I am not talking here about “pure math”, the academic discipline carried out by professional mathematicians in universities and a few think tanks. My focus here is on using math in the everyday world.)

In the meantime, I’ll leave you with a simple arithmetic problem that I will discuss in detail next time.

It comes with two instructions:

  1. Solve it as quickly as you can, in your head if possible. Let your mind jump to the answer.
  2. Then, and only then, reflect on your answer, and how you got it.

The goal here is not to get the right answer, though a great many of you will. Rather, the issue is this: how do our minds work, and how can we make our thinking more effective in a world where machines execute all the mathematical procedures for us?

Ready for the problem? Here it is. 

PROBLEM: A bat and a ball cost $1.10. The bat costs $1 more than the ball. How much does the ball cost on its own? (There is no special pricing deal.)
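
If, after following both instructions above, you want to check your snap answer (this gives the answer away, so do the two steps first), here is a minimal sketch of the problem set up symbolically rather than by intuition; the variable names and encoding are mine, not part of the original problem:

```python
# The bat-and-ball problem as two linear conditions.
# Let ball be the ball's price in dollars; the bat then costs ball + 1.00.
#   ball + (ball + 1.00) = 1.10  =>  2*ball = 0.10  =>  ball = 0.05
from fractions import Fraction  # exact arithmetic, no floating-point noise

total = Fraction(110, 100)   # the pair costs $1.10
diff = Fraction(1)           # the bat costs $1.00 more than the ball
ball = (total - diff) / 2
bat = ball + diff

print(f"ball = ${float(ball):.2f}, bat = ${float(bat):.2f}")
# Sanity check on the tempting snap answer of 10 cents:
# 0.10 + 1.10 = 1.20, which contradicts the stated total.
```

The arithmetic is trivial; what matters, and what the next installment takes up, is what your mind did before you wrote anything down.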

Wednesday, February 7, 2018

How today’s pros solve math problems: Part 1

Last month, I wrote about my recent experience teaching a three-day mini-course in the Nueva School January electives “Intersession” program. What I left out was a description of the course itself. I ended with the below diagram as a teaser. I said that, when reading in the usual left-right-down reading order, these were the technology tools that I typically turn to when I start to work on solving a new problem.

A number of mathematicians commented on social media that their list would be almost identical to mine. That did not surprise me. My chart simply captures the way today’s pros approach new problems. A number of math teachers expressed puzzlement. That too did not surprise me. The current mathematics curriculum is still rooted in a conception of “doing math” that developed to meet society’s needs in the 19th Century.

Actually, I should point out that the diagram above is not exactly the one I published last month. I have added an icon for a spreadsheet. A mathematician in Austria emailed me to say I should have included it. The two of us had corresponded in the past about the use of spreadsheets in mathematics, both in problem solving and in teaching, and we were (and are) very much on the same page as to their usefulness in a wide variety of circumstances. My excuse for overlooking it the first time round was that it was only the second technological tool I brought into my mathematics arsenal, so far back in my career that I had long ago stopped thinking of it as something new. (The first piece of “new tech” I adopted was the electronic calculator, and that too did not appear in my chart.) I suspect that almost all math teachers, and indeed, pretty well all of society, make frequent use of calculators and spreadsheets, not only in their professional activities but in their social and personal lives as well. Still, the spreadsheet is such a powerful, ubiquitous mathematics tool, I should have included it, and now I have. (Its use definitely figured in the guidance I gave to the Nueva School class.) I have placed it in my list at the position that, on reflection, matches how often I turn to it.

Some of the responses I received from teachers indicated that I need to clarify that, by “solving a mathematical problem”, I mean using mathematics to solve a real-world problem. The problem we worked on at Nueva School was one UPS worked on not long ago: “What is the most efficient way to route packages from place to place?” More on that later. A simpler example in the same vein is when we ask ourselves “Which kind, model, and hardware configuration of mobile phone best meets my needs within my current budget?”—an example where, for most of us, the item’s cost is high enough for us to weigh the (many) options fairly carefully.
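
At its mathematical core, the UPS question is about finding good paths through a network of hubs. As a toy-scale illustration of the kind of model a problem solver could explore with freely available tools, here is a sketch of Dijkstra’s shortest-path algorithm on a small network; all the hub names and costs below are invented for illustration, since UPS’s actual algorithm and data are proprietary:

```python
import heapq

# A toy routing problem: cheapest route between hubs in a tiny network.
# Edge weights are made-up "costs" (time, money, or any mix of the two).
graph = {
    "Petaluma":   {"Oakland": 1.0, "Sacramento": 2.0},
    "Oakland":    {"Louisville": 5.0},
    "Sacramento": {"Louisville": 4.5},
    "Louisville": {"Newark": 3.0},
    "Newark":     {"Fair Haven": 1.5},
    "Fair Haven": {},
}

def cheapest_route(graph, start, goal):
    """Dijkstra's algorithm: return (total_cost, route) from start to goal."""
    queue = [(0.0, start, [start])]   # priority queue ordered by cost so far
    seen = set()
    while queue:
        cost, node, route = heapq.heappop(queue)
        if node == goal:
            return cost, route
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in graph[node].items():
            if nbr not in seen:
                heapq.heappush(queue, (cost + w, nbr, route + [nbr]))
    return float("inf"), []

cost, route = cheapest_route(graph, "Petaluma", "Fair Haven")
print(cost, route)   # 10.5 via Oakland, Louisville, Newark
```

The real problem, of course, involves millions of packages, time windows, and truck and plane capacities, which is exactly why it stops being a textbook exercise and becomes a genuine problem.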

This is clearly not the same as “solving a math problem” in a typical math textbook. For example, “What are the roots of the equation x² + 3x − 5 = 0?” Those kinds of questions are, of course, designed to provide practice in using various specific, sharply focused mathematics techniques, procedures, formulas, or algorithms.
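
That textbook exercise is exactly the kind of sharply focused procedure a machine now executes for you; a sketch in plain Python using the quadratic formula:

```python
import math

# Roots of x^2 + 3x - 5 = 0 via the quadratic formula:
#   x = (-b ± sqrt(b^2 - 4ac)) / (2a)
a, b, c = 1, 3, -5
disc = b * b - 4 * a * c              # discriminant: 9 + 20 = 29 > 0, two real roots
r1 = (-b + math.sqrt(disc)) / (2 * a)
r2 = (-b - math.sqrt(disc)) / (2 * a)

print(r1, r2)   # roughly 1.1926 and -4.1926
```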

Those techniques, procedures, etc. are the basic building blocks for using mathematics to solve problems in real life, but they don’t really present much of a problem, in the sense the word is used outside the math class. Indeed, the reason it can be valuable to master those basic techniques, etc. is that being able to use them fluidly means they won’t be a problem (in the sense of an obstacle) that gets in the way of solving what really is a mathematical problem (e.g., which phone to buy). That, of course, is why we call them basic skills. But having mastery of a range of basic skills does not make a person a good problem solver any more than being a master bricklayer makes someone an architect or a construction engineer.

My focus then, is on using math to solve real-world problems. That’s where things are very different from the days when I first learned mathematics. Back in the 1950s and 60s, when I went through the school system, we spent a huge amount of time mastering algorithms and techniques for performing a variety of different kinds of numerical and symbolic calculations, geometric reasoning, and equation solving. We had to. In order to solve any real-world problem, we had to be able to crank the algorithmic and procedural handles.

Not any more. These days, that smartphone in your pocket has access to cost-free cloud resources that can execute any mathematical procedure you need. What’s more, it will do it with a dataset of any real-world size, with total accuracy to whatever degree you demand, and in the majority of cases in a fraction of a second.

To put it another way, all those algorithms, techniques, and procedures I spent years mastering, all the way through to earning my bachelor’s degree in mathematics, became obsolete within my lifetime, an observation I wrote about in an article in the Huffington Post in January of last year.

So, does that mean all that effort was wasted? Not at all. Discounting the fact that in my case, I was able to make good use of those skills and knowledge for several decades before the march of technology rendered them obsolete, the one thing that I gained as a result of all that procedural learning that is as valuable today as it was back then, was the ability to think mathematically. I wrote about one aspect of that “mathematical thinking” mental ability, number sense, in a simultaneously published follow-up piece to that Huffington Post article.

In today’s world, all the algorithmic, computational, algebraic, geometric, logical, and procedural skills that used to take ten years of effort to master can now be bought for $699. At least, that amount (the price of an iPhone 8, which I chose for illustration) is all it costs to give you access to all those skills. Making effective use of that vast powerhouse of factual knowledge and procedural capacity requires considerable ability. Anyone who mastered mathematics the way I did acquired that ability as an automatic by-product of mastering the basic skills. But what does it take to acquire it in an age when all those new tools are widely available?

The answer, of course (though not everyone involved in the mathematics education system thinks it is obvious, or even true), is that the educational focus has to shift from procedural mastery to understanding. Which is precisely the observation that guided the Common Core initiative in the United States. Yes, I know that the current leadership of the US Department of Education believes that the Common Core is a bad idea, but that is an administration that also believes the future of energy lies in fossil fuels, not renewables, and the highly qualified, career-professional contacts I have in the Department of Education have a very different view.

How do you acquire that high-level skill set? The answer is, the same way people always did: through lots of practice.

But be careful how you interpret that observation. What needs to be practiced are the kinds of activities you would engage in as a professional—or at least a competent user of mathematics—in the circumstances of the day. In my school days, that meant we had to practice with highly constrained, “toy” problems. But with today’s technologies, we can practice on real-world problems using real-world data.

Almost inevitably, when you do that, you find you frequently need to drop down to suitably chosen “toy problem” variants of your task in order to understand how a particular online tool (say) works and what it can (and cannot) do. But today, the purpose of, say, inverting a few 2×2 or 3×3 matrices is not (as it was in my day) so you can become fluent at doing so, and certainly not because you will actually invert by hand that 100×100 matrix that has just reared its ugly head in your real-world problem. No, you just need to get a good understanding of what it means to invert a matrix, why you might do so, and what problems can arise.
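
A sketch of that kind of toy exercise, in plain Python so nothing beyond the standard library is assumed (in practice you would, of course, hand the job to a tool):

```python
# What inverting a 2x2 matrix means, done once "by hand" in code.
# For A = [[a, b], [c, d]], the inverse exists iff det = a*d - b*c != 0,
# and equals (1/det) * [[d, -b], [-c, a]].

def inv2x2(A):
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        # One of the "what can go wrong" lessons: a singular matrix
        # (rows linearly dependent) has no inverse.
        raise ValueError("matrix is singular: no inverse exists")
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul2x2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[4.0, 7.0], [2.0, 6.0]]
A_inv = inv2x2(A)
# Understanding check: A times its inverse should be the identity matrix
# (up to floating-point rounding).
print(matmul2x2(A, A_inv))
```

Once that understanding is in place, the 100×100 case really is just a job for the tools.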

And you know what? That’s rarely a problem. Once you have identified a mathematical technique you need to understand, the chances are high you will find not one but a dozen or more YouTube videos that explain it to you.

These new tools certainly don’t solve the problem for you. [Well, sometimes they may do, but in that case it wasn’t a problem that required the time of a mathematician. Better to move on and put your efforts into a problem that cannot be solved by an app in the Cloud!] All that these fancy new tools have done is change the level at which we humans operate.

At heart, that shift is no different from the level-shift introduced in the 9th Century when traders in and around Baghdad developed techniques for doing routine arithmetic calculations at scale, by performing operations not on specific numbers but on classes of numbers. One of the techniques they developed was called al-jabr, a term that ended up giving us the name we use today for that new kind of calculation procedure: algebra.

Throughout mathematics’ history, mathematicians have calculated and reasoned logically with the basic building blocks of the time. Today’s procedures (that have to be executed) turn into tomorrow’s basic entities (on which you operate). A classic example is differential calculus, where functions are no longer viewed as rules that you execute to yield new numbers from old numbers, but higher-level objects on which you operate to produce new functions from old functions.
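
That shift from executing rules to operating on them is easy to see in any programming language with first-class functions. Here is a sketch (the function names are mine) of differentiation as an operation that takes a function and returns a new function, approximated numerically for illustration:

```python
def derivative(f, h=1e-6):
    # Operate on f as an object: the input is a function, and the output
    # is another function (a central-difference approximation to f').
    return lambda x: (f(x + h) - f(x - h)) / (2 * h)

square = lambda x: x * x
d_square = derivative(square)    # a brand-new function, approximately 2x

print(round(d_square(3.0), 4))   # 6.0 -- the derivative of x^2 at x = 3
```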

So (finally), what exactly did we do in that Nueva School mini-course to illustrate the way today’s pros use math to solve a problem? The problem, remember, was this: reverse engineer the core algorithm that UPS uses to route packages from origin to destination.

To start them off—they worked in small teams of three or four—I provided a small amount of information:
  1. Tracking information for a fairly large, heavy case, including a partially dismantled bicycle, I had shipped from Petaluma, California to Fair Haven, New Jersey, in 2015. See image below.
  2. I told them I sent the case by “three day select.”
  3. I reported that my package went by plane from Louisville, Kentucky, to the UPS facility in Newark, where it was immediately loaded onto a truck, and was delivered to the intended Fair Haven destination with just a few hours to spare within the three-day period guaranteed.

That information, I told the class, was enough to figure out how the routing algorithm worked. [That it was enough was itself useful information I did not have when I first solved the problem; since they had to reach a solution by the end of the course, I was happy to give them that head start.] In solving this problem, they could enlist my help as their “math consultant,” to call on with specific questions when required. But they had to carry out the key steps.

They could, of course, use the various tools in my “modern math tools” chart, and any others they could find. (Since the UPS routing algorithm is an extremely valuable trade secret, they would not find that online, of course.)

Next month, I’ll tell you how they got on. In the meantime, you might like to see how far you can get with it. Happy problem solving! Happy mathematical thinking!


Tuesday, January 23, 2018

Déjà vu, all over again

I gave a short course at a local high school recently. Three days in a row, two hours a day, to fifteen students. To my mind, it was a huge success. By the end of the course, the students had successfully reverse-engineered UPS’s core routing/scheduling algorithm. In fact, they spent the last half hour brainstorming how UPS might improve their efficiency. (My guess is the company had long ago implemented, or at least considered, the ideas the kids came up with, but that simply serves to illustrate how far they had come in just six hours of class-time.)

To be sure, it was not an average class in an average high school. Nueva School, located in the northern reaches of Silicon Valley, is private and expensive (tuition runs at $36,750 for an 8th grader), and caters to students who have already shown themselves to be high achievers. Many Silicon Valley tech luminaries send their children there, and some serve on the board. They have an excellent faculty. Moreover, the fifteen students in my class had elected to be there, as part of the school’s rich January electives learning experience called “Intersession”.

I was familiar with the school, having been invited to speak at their annual education conference on a couple of occasions, but this was the first time I had taught a class.

Surprisingly, the experience reminded me of my own high school education, back in the UK in the early 1960s. My high school was a state-run, selective school in the working-class city of Hull, a major industrial center and large ocean fishing and shipping port. Socially and financially, it was about as far away as you could get from Nueva School on the San Francisco Peninsula, and my fellow students came from very different backgrounds than the students at Nueva.

What made my education so good was a highly unusual set of historical circumstances. Back then, Hull was a fiercely socialist city that, along with the rest of the UK, was clawing its way out of the ravages of the Second World War. For a few short years, the crippling English class system broke down, and an entire generation of baby boomers entered the school system determined to make better lives for themselves—and everyone else. (“Me first” came a generation later.)

We had teachers who had either just returned from fighting the war (the men on the battlefields, the women in the factories or in military support jobs), or were young men and women just starting out on their teaching careers, having received their own school education while the nation was at war. There was a newly established, free National Health Service, an emerging new broadcasting technology (television) run by a public entity, a rapidly growing communications system (a publicly funded telephone service), and free education, including government-paid-for university education for the 3 percent or so able to pass the challenging entrance exams.

We were the generation that the nation was dependent on to rebuild, making our way through the education system in a social and political environment where the class divisions that had been a part of British life for centuries had been (temporarily, it turned out) cast aside by the need to fight a common enemy across the English Channel. The result was that, starting in the middle of the 1960s, a “British Explosion” of creative scientific, engineering, and artistic talent burst forth onto the world. Within our individual chosen domains, we all felt we could do anything we set our minds to. And a great many of us did just that. About half my high school class became highly successful people. That from a financially impoverished, working class background.

It was short lived, lasting but a single generation. I was simply lucky to be part of it.

What brought it all back to me was finding myself in a very similar educational environment in my three days at Nueva School. The circumstances could hardly be more different, of course. But talking and working with those students, I sensed the same thirst to learn, the same drive to succeed (in terms they set for themselves), and the same readiness to keep trying I had experienced two generations earlier. It felt comfortingly—and encouragingly—familiar.

But I digress. In fact, I’ve done more than digress. I’ve wandered far from my intended path. Or have I? The point I want to get across is that when it comes to learning, success is about 5 percent talent, 35 percent the teachers and students around you, and 60 percent desire and commitment. (I just made up those figures, but they represent more or less how I see the landscape, having been an education professional for half a century.)

It turns out that, in today’s world, given those ingredients, in roughly those proportions, it is possible for a small group of people, in the space of just a few days, to make significant progress in solving a major problem of massive societal importance. (If you can figure out how UPS performs its magic, you can do the same thing with many other large organizations: Walmart, Amazon, United Airlines, and so on.)

How can it be possible to take a small group of students, still in high school, and make solid progress on a major mathematical problem like that? It would not have been possible in my school days. The answer is, in today’s world, everyone has access to the same rich toolset the professionals use. Moreover, most of those tools—or at least, enough of them—are free to anyone with access to a smartphone or a personal computer. You just have to know how to make effective use of them.

Next month, I will describe how my Nueva class went about the UPS project. (I had done it twice before, with a non-science majors undergraduate class at Princeton University and with an Adult Education class at Stanford.) Doing it with high school students confirmed my belief that a group with less academic background could achieve the same result, in the process providing me with some major-league ammunition to back up my oft-repeated—and oft-ignored or disputed—claim that K-12 mathematics education is in need of a major (and I mean MAJOR) makeover. (After the invention of the automobile, it made more sense to teach people how to drive than how to look after a horse. I feel the math ed argument should end with that razor-sharp analogy, but it rarely does.)

As I say, that discussion is for next month. But let me leave you with a teaser. Actually, two teasers. One is my January 1, 2017 opinion piece in the Huffington Post, “All The Mathematical Methods I Learned In My University Math Degree Became Obsolete In My Lifetime.” The other teaser is the diagram I will end with. It summarizes some of the most useful tools that a professional mathematician today uses when starting to work on a new problem. (Note: I’m talking about using math to solve real-world problems here. Pure mathematics is very different, although all the tools I will mention can be of use to a pure mathematician.)

This is my set of “most useful tools,” I should note, and reading the diagram left-to-right, top to bottom, the tools I list are roughly in the order I have used them in working on various projects over the past fifteen years. Other mathematicians might produce different collections and different orders. But they won’t be that much different, and I’ll bet they all begin with the same first tool.

If you find this diagram in any way surprising, you likely have not worked in today’s world of mathematical problem solving. If you find it surprising and are in mathematics education, I respectfully point out that this is the mathematical toolset that your students will need to master in order to make use of math in the world they will inhabit after graduation. You may or may not like that. If you don’t like it, then that is unfortunate. Mathematical problem solving is simply done differently today. It just is.

Thursday, December 14, 2017

Clash of representations

The pie chart in the above tweet jumped off the page when it appeared in my Twitter feed on September 14. My initial shock at seeing the figure 1% attached to a region of the pie chart that was evidently almost 25% of the total area of the disk did not last long, of course, since the accompanying text made it clear what the diagram was intended to convey. The 1% label referred to the section of the population being discussed, whereas the pie-chart indicated the share of taxes paid by that group. Indeed, the image was an animated GIF; when I clicked on it, the region labeled “1%” shrank, culminating with the chart on the right in the image shown below:
But here’s the thing. Even after I had figured out what the chart was intended to convey, I still found it confusing. I wondered if a lay-reader, someone who is not a professional mathematician, would manage to parse out the intended meaning. It was not long before I found out. The image below shows one of the tweets that appeared in response less than an hour later:
As I had suspected, a common reaction was to dismiss the chart as yet another example of a bad data visualization created by an innumerate graphics designer. Indeed, that had been my initial reaction. But this particular example is more interesting. Yes, it is a bad graphic, for the simple reason that it does not convey the intended message. But not because of the illustrator’s innumeracy. In fact, numerically, it appears to be as accurate as you can get with a pie chart. The before and after charts do seem to have regions whose areas correspond to the actual data on the tax-payer population.

This example was too good to pass up as an educational tool: asking a class to discuss what the chart is intended to show could lead to a number of good insights into how mathematics can help us understand the world, while at the same time having the potential to mislead. I was tempted to write about it in my October post, but wondered if I should delay a couple of months to avoid an example that was at the heart of a current, somewhat acrimonious party-political debate. As it turned out, the September 30 death of the game-show host Monty Hall resolved the issue for me—I had to write about that—and then November presented another “must do” story (the use of mathematics in election gerrymandering). So this month, with the tax votes that formed the political background now a matter of historical record, I have my first real opportunity to run this story.

The two-month delay brought home to me just how problematic this particular graphic is. Even knowing in advance what the issue is, I still found I had to concentrate to “see” the chart as conveying the message intended. That “1%” label continued to clash with the relative area of the labeled region.

It’s a bit like those college psychology-class graphics that show two digits in different font sizes, and ask you to point to the digit that represents the bigger integer. If the font sizes clash with the sizes of the integers, you take measurably longer to identify the correct one, as shown below:
For me, the really big take-home lesson from the tax-proposal graphic is the power of two different mathematical representations of proportions: pie charts and numerical percentages. Each, on its own, is instant. In the case of the pie chart, the representation draws on the innate human cognitive ability to judge relative areas in simple, highly symmetrical figures like circular disks or rectangles. With percentages, there is some initial learning required—you have to understand percentages—but once you have done that, you know instantly what is meant by figures such as “5%” or “75%."

But how do you get that understanding of the meaning of numerical percentages? For most of us (I suspect all of us), it comes from being presented (as children) with area examples like pie charts and subdivided rectangles. This sets us up to be confused, bigly, by examples where those two representations are used in the same graphic but with the percentage representing something other than the area of the segment (or what that area is intended to represent).
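The arithmetic behind that clash is easy to make concrete. A faithful pie chart maps a percentage to a wedge angle in direct proportion, so here is a minimal sketch (plain Python, with nothing assumed beyond the chart itself) of how far apart the “1%” label and the roughly quarter-disk wedge really are:

```python
def wedge_angle(percent):
    """Angle, in degrees, of a pie-chart wedge that faithfully represents `percent`."""
    return percent * 360 / 100

# A wedge honestly labeled "1%" should span 3.6 degrees of the disk...
print(wedge_angle(1))   # 3.6
# ...but the labeled region occupied roughly a quarter of the disk,
# i.e. the wedge a faithful chart would draw for about 25%.
print(wedge_angle(25))  # 90.0
```

The mismatch between those two angles is exactly the cognitive clash the tweet's readers experienced.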

The message, then, from this particular example—or at least the message I got from it—is that powerful graphics are like any powerful tool: their power for good depends on using them wisely; used incorrectly, they can confuse and mislead. And make no mistake about it, numbers are incredibly powerful tools. Their invention alone is by far the greatest mathematical invention in human history. That’s why in every nation in the world, math is the only mandated school subject apart from the native language.

Thursday, November 16, 2017

Mathematics and the Supreme Court

American courts have never appeared to be very receptive to mathematical arguments, in large part, some (including me) have assumed, because many judges do not feel confident evaluating mathematical reasoning and, in the case of jury trials, no doubt because they worry that unscrupulous, math-savvy lawyers could use formulas and statistics to fool jury members. There certainly have been some egregious examples of this, particularly when bogus probability arguments have been presented. Indeed, one classic misuse of conditional probability is now known as the “prosecutor’s fallacy."

Another example where the courts have trouble with probability is in cases involving DNA profiling, particularly Cold Hit cases, where a DNA profile match is the only hard evidence against a suspect. I myself have been asked to provide expert testimony in some such cases, and I wrote about the issue in this column in September and October of 2006.

In both kinds of case, the courts have good reason to proceed with caution. The prosecutor’s fallacy is an easy one to fall into, and with Cold Hit DNA identification there is a real conflict between frequentist and Bayesian probability calculations. In neither case, however, should the courts try to avoid the issue. When evidence is presented, the court needs to have as accurate an assessment as possible as to its reliability or veracity. That frequently has to be in the form of a probability estimate.

Now the courts are facing another mathematical conundrum. And this time, the case has landed before the US Supreme Court. It is a case that reaches down to the very foundation of our democratic system: How we conduct our elections. Not how we use vote counts to determine winners, although that is also mathematically contentious, as I wrote about in this column in November of 2000, just before the Bush v Gore Presidential Election outcome ended up before the Supreme Court. Rather, the issue before the Court this time is how states are divided up into electoral districts for state elections.

How a state carves up voters into state electoral districts can have a huge impact on the outcome. In six states, Alaska, Arizona, California, Idaho, Montana, and Washington, the apportioning is done by independent redistricting commissions. This is generally regarded—at least by those who have studied the issue—as the least problematic approach. In seven other states, Arkansas, Colorado, Hawaii, Missouri, New Jersey, Ohio, and Pennsylvania, politician commissions draw state legislative district maps. In the remaining 37 states, the state legislatures themselves are responsible for state legislative redistricting. And that is where the current problem arises.

There is, of course, a powerful temptation for the party in power to redraw the electoral district maps to favor their candidates in the next election. And indeed, in the states where the legislatures draw the maps, both major political parties have engaged in that practice. One of the first times this occurred was in 1812, when Massachusetts governor Elbridge Gerry redrew district boundaries to help his party in an upcoming senate election. A journalist at the Boston Gazette observed that one of the contrived districts in Gerry’s new map looked like a giant salamander, and gave such partisan redistricting a name, combining Gerry and mander to create the new word gerrymander. Though Gerry lost his job over his sleight-of-hand, his redistricting did enable his party to take over the state senate. And the name stuck.

The illegality of partisan gerrymandering is generally taken to stem from the 14th Amendment, since it deprives the smaller party of the equal protection of the laws, but it has also been argued to be, in addition, a 1st Amendment issue—namely, an apportionment that has the purpose and effect of burdening a group of voters’ representational rights.

In 1986, the Supreme Court issued a ruling that partisan gerrymandering, if extreme enough, is unconstitutional, but it has yet to throw out a single redistricting map. In large part, the Supreme Court’s inclination to stay out of the redistricting issue is based on a recognition that both parties do it, and over time, any injustices cancel out, at least numerically. Historically, this was, generally speaking, true. Attempts to gerrymander have tended to favor both parties to roughly the same extent. But in 2012, things took a dramatic turn with a re-districting process carried out in Wisconsin.

That year, the recently elected Republican state legislature released a re-districting map generated using a sophisticated mathematical algorithm running on a powerful computer. And that map was in an altogether new category. It effectively guaranteed Republican majorities for the foreseeable future. The Democrat opposition cried foul, a Federal District Court agreed with them, and a few months ago the case found its way to the Supreme Court.

That the Republicans come across as the bad actors in this particular case is likely just an accident of timing; they happened to come to power at the very time when political parties were becoming aware of what could be done with sophisticated algorithms. If history is any guide, either one of the two main parties would have tried to exploit the latest technology sooner or later. In any event, with mathematics at the heart of the new gerrymandering technique, the only way to counter it may be with the aid of equally sophisticated math.

The most common technique used to gerrymander a district is called “packing and cracking.” In packing, you cram as many of the opposing party’s voters as possible into a small number of “their” districts, where they will win with many more votes than necessary. In cracking, you spread the opposing party’s voters across as many of “your” districts as possible, so there are not enough votes in any one of those districts to ever win there.

A form of packing and cracking arises naturally when better-educated, liberal-leaning voters move into cities and form a majority, leaving those in rural areas outnumbered by less-educated, more conservative-leaning voters. (This is thought to be one of the factors that has led to the increasing polarization in American politics.) That problem is, of course, a political one for society as a whole to solve, though mathematics can be of assistance by helping to provide good statistical data. Not so with partisan gerrymandering, where mathematics has now created a problem that had not arisen before, for which mathematics may of necessity be part of the solution.

When Republicans won control of Wisconsin in 2010, they used a sophisticated computer algorithm to draw a redistricting map that on the surface appeared fair—no salamander-shaped districts—but in fact was guaranteed to yield a Republican majority even if voter preferences shifted significantly. Under the new map, in the 2012 election, Republican candidates won 48 percent of the vote, but 60 of the state’s 99 legislative seats. The Democrats’ 51 percent that year translated into only 39 seats. Two years later, when the Republicans won the same share of the vote, they ended up with 63 seats—a 24-seat differential.

Recognizing what they saw as a misuse of mathematics to undermine the basic principles of American democracy, a number of mathematicians around the country were motivated to look for ways to rectify the situation. There are really two issues to be addressed. One is to draw fair maps—a kind of “positive gerrymandering.” The other is to provide reliable evidence to show that a particular map has been intentionally drawn to favor one party over another, if such occurs, and moreover to do so in a way that the courts can understand and accept. Neither issue is easy to solve, and without mathematics, both are almost certainly impossible.

For the first issue, a 2016 Supreme Court ruling gave a hint about what kind of fairness measure it might look kindly on: one that captures the notion of “partisan symmetry,” where each party has an equal opportunity to convert its votes into seats. The Wisconsin case now presents the Supreme Court with the second issue.

When, last year, a Federal District Court in Wisconsin threw out the new districting map, they cited both the 1st and 14th Amendments. It was beyond doubt, the court held, that the new maps were “designed to make it more difficult for Democrats, compared to Republicans, to translate their votes into seats.” The court rejected the Republican lawmakers’ claim that the discrepancy between vote share and legislative seats was due simply to political geography. The Republicans had argued that Democratic voters are concentrated in urban areas, so their votes have an impact on fewer races, while Republicans are spread out across the state. But, while that is true, geography alone does not explain why the Wisconsin maps are so skewed.

So, how do you tell if a district is gerrymandered? One way, that has been around for some time, is to look at the geographical profile. The gerrymandering score, G, is defined by:
G = gP/A, where
g: the district’s boundary length, minus natural boundaries (like coastlines and rivers)
P: the district’s total perimeter
A: the district’s area
The higher the score, the wilder the district is as a geographic region, and hence the more likely it is to have been gerrymandered.
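In code, the score is a one-liner. Here is a minimal sketch; the district figures below are invented purely for illustration:

```python
def gerrymander_score(g, perimeter, area):
    """Geographic gerrymandering score G = g*P/A, where g is the district's
    boundary length excluding natural boundaries (coastlines, rivers),
    P is its total perimeter, and A is its area."""
    return g * perimeter / area

# A compact, roughly square district (hypothetical figures).
compact = gerrymander_score(g=30, perimeter=40, area=100)      # 12.0
# A contorted district of the same area, with long artificial boundaries.
contorted = gerrymander_score(g=180, perimeter=200, area=100)  # 360.0
# The far higher score flags the contorted district as a likely gerrymander.
```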

That approach is sufficiently simple and sensible to be acceptable to both society and the courts, but unfortunately does not achieve the desired aim of fairness. And, more to the point in the Wisconsin case, use of sophisticated computer algorithms can draw maps that have a low gerrymandering score and yet are wildly partisan.

The Wisconsin Republicans’ algorithm searched through thousands of possible maps looking for one that would look reasonable according to existing criteria, but would favor Republicans no matter what the election day voting profile might look like. As such, it would be a statistical outlier. To find evidence to counter that kind of approach, you have to look at the results the districting produces when different voting profiles are fed into it.

One promising way to identify gerrymandering is with a simple mathematical formula suggested in 2015, called the “efficiency gap." It was the use of this measure that caused, at least in part, the Wisconsin map to be struck down by the court. It is a simple idea—and as I noted, simplicity is an important criterion, if it is to stand a chance of being accepted by society and the courts.

You can think of a single elector’s vote as being “wasted” if it is cast in a district where their candidate loses, or in a district where their candidate would have won anyway. The efficiency gap measures those “wasted” votes. For each district, you total up the number of votes the winning candidate receives in excess of what it would have taken to elect them in that district, and you total up the number of votes the losing candidate receives. Those are the two parties’ “wasted votes” for that district.

You then calculate the difference between those “wasted-vote” totals for the two parties, and divide the answer by the total number of votes in the state. This yields a single percentage figure: the efficiency gap. If that works out to be greater than 7%, the system’s developers suggest, the districting is unfair.

By way of an example, let’s see what the efficiency gap tells us about the last Congressional election. In particular, consider Maryland’s 6th Congressional district, which was won by the Democrats. There were 186K Democrat votes and 133K Republican votes, so it took about 159.5K votes (half of the 319K cast) to win. Thus roughly 26K Democrat votes were “wasted,” as were all 133K Republican votes.

In Maryland as a whole, there were 510K Democrat votes “wasted” and 789K Republican votes “wasted.” So, statewide, there was a net “waste” of 789K – 510K = 279K Republican votes.

There were 2,598K votes cast in total. So the efficiency gap is 279K/2,598K = 10.7% in favor of the Democrats.
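Those two computations can be sketched in a few lines of Python, using the rounded figures quoted above (vote counts in thousands):

```python
def wasted_votes(winner_votes, loser_votes):
    """Wasted votes in one district: all of the loser's votes, plus the
    winner's votes beyond the bare majority needed to win."""
    threshold = (winner_votes + loser_votes) / 2
    return winner_votes - threshold, loser_votes

def efficiency_gap(wasted_a, wasted_b, total_votes):
    """Statewide efficiency gap: net wasted votes as a share of all votes cast."""
    return abs(wasted_a - wasted_b) / total_votes

# Maryland's 6th district: 186K Democrat votes, 133K Republican votes.
dem_wasted, rep_wasted = wasted_votes(186, 133)   # about 26.5K and 133K
# Statewide: 510K Democrat and 789K Republican wasted votes, 2,598K votes cast.
gap = efficiency_gap(510, 789, 2598)              # about 0.107, i.e. 10.7%
```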

I should note, however, that the gerrymandering problem is currently viewed as far more of a concern in state elections than in congressional races. Last year, two social scientists published the results they obtained using computer simulations to measure the extent of intentional gerrymandering in congressional district maps across most of the 50 states. They found that on the national level, it mostly canceled out between the parties. So banning only intentional gerrymandering would likely have little effect on the partisan balance of the U.S. House of Representatives. The efficiency gap did, however, play a significant role in the Wisconsin court’s decision.

Another approach, developed by a team at Duke University, takes aim at the main idea behind the Wisconsin redistricting algorithm—searching through many thousands of possible maps looking for ones that met various goals set by the creators, any one of which would, of necessity, be a statistical outlier. To identify a map that has been obtained in this way, you subject it to many thousands of random tweaks. If the map is indeed an outlier, the vast majority of tweaks will yield a fairly unremarkable map. So, you compare the actual map with all those thousands of seemingly almost identical, and apparently reasonable, variations you have generated from it. If the actual map produces significantly different election results from all the others, when presented with a range of different statewide voting profiles, you can conclude that it is indeed an “outlier” — a map that could only have been chosen to deliberately subvert the democratic process.
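A toy version of that outlier test can be sketched in a few lines. The numbers below are invented, and the “tweak” (swapping two precincts between districts) is a crude stand-in for the Duke team’s actual map perturbations; the point is only to show the shape of the argument:

```python
import random

def seats_won(plan, shares):
    """Number of districts party A wins: a district goes to A if the mean
    A-share of its precincts exceeds 50%."""
    return sum(
        1 for district in plan
        if sum(shares[p] for p in district) / len(district) > 0.5
    )

def tweak(plan):
    """Swap one precinct between two randomly chosen districts -- a crude
    stand-in for a small boundary change."""
    new = [list(d) for d in plan]
    d1, d2 = random.sample(range(len(new)), 2)
    i, j = random.randrange(len(new[d1])), random.randrange(len(new[d2]))
    new[d1][i], new[d2][j] = new[d2][j], new[d1][i]
    return new

# Nine precincts; party A's vote share in each (invented figures).
shares = [0.9, 0.8, 0.9, 0.45, 0.45, 0.45, 0.45, 0.45, 0.45]
# A "packed" plan: all of A's strongholds crammed into district 0.
packed = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]

random.seed(0)
actual = seats_won(packed, shares)   # A wins just 1 of 3 seats
ensemble = [seats_won(tweak(packed), shares) for _ in range(300)]
# Most small tweaks hand A a second seat, exposing the packed plan as an outlier.
```

Here party A holds a clear statewide majority of the vote, yet the packed plan gives it one seat out of three, while nearly any small perturbation of the map gives it two.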

And this is where we—and the Supreme Court—are now. We have a problem for our democracy created using mathematics, mathematicians are looking for mathematical ways to solve it, and there are already two candidate “partisan gerrymandering tests” in the arena. Historically, the Supreme Court has proven resistant to allowing math into the courtroom. But this time, it looks like they may have no choice. At least as long as state legislatures continue to draw the districting maps. Maybe the very threat of having to deal with mathematical formulas and algorithms will persuade the Supreme Court to recommend that Congress legislate to require all states to use independent commissions to draw the districting maps. Legislation under pain of math. We will know soon enough.

Wednesday, October 11, 2017

Monty Hall may now rest in peace, but his problem will continue to frustrate

Monty Hall with a contestant in Let's Make a Deal.
The news that American TV personality Monty Hall died recently (The New York Times, September 30) caused two groups of people to sit up and take note. One group, by far the larger, was American fans of television game shows in the 1960s and 70s, who tuned in each week to his show “Let’s Make a Deal.” The other group includes lovers of mathematics the world over, most of whom, I assume, have never seen the show.

I, and by definition all readers of this column, are in that second category. As it happens, I have seen a key snippet of one episode of the show, which a television documentary film producer procured to use in a mathematics program we were making about probability theory. Our interest, of course, was not the game show itself, but the famous — indeed infamous — “Monty Hall Problem” it let loose on an unsuspecting world.

To recap, at a certain point in the show, Monty would offer one of the audience participants the opportunity to select one of three doors on the stage. Behind one, he told them, was a valuable prize, such as a car; behind each of the other two was a booby prize, say a goat. The contestant chose one door. Sometimes, that was the end of the exchange, and Monty would open the door to reveal what the contestant had won. But on other occasions, after the contestant had chosen a door, Monty would open one of the two unselected doors to reveal a booby prize, and then give them the opportunity to switch their selection. (Monty could always do this since he knew exactly which door the prize was hidden behind.)

So, for example, if the contestant first selects Door 2, Monty might open Door 1 to reveal a goat, and then ask if the contestant wanted to switch their choice from Door 2 to Door 3. The mathematical question here is, does it make any difference if the contestant switches their selection from Door 2 to Door 3? The answer, which on first meeting this puzzler surprises many people, is that the contestant doubles their chance of winning by switching. The probability goes up from an original 1/3 of Door 2 being the right guess, to 2/3 that the prize is behind Door 3.
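If the 1/3-versus-2/3 claim strains your intuition, a quick simulation is persuasive. The sketch below plays the game many times under the stated rules (Monty always opens a goat door the contestant did not pick):

```python
import random

def play(switch, trials=100_000):
    """Fraction of games won when the contestant always (or never) switches."""
    wins = 0
    for _ in range(trials):
        prize = random.randrange(3)
        choice = random.randrange(3)
        # Monty opens a door that is neither the contestant's nor the prize's.
        opened = next(d for d in range(3) if d != choice and d != prize)
        if switch:
            # Switch to the one remaining unopened door.
            choice = next(d for d in range(3) if d != choice and d != opened)
        wins += (choice == prize)
    return wins / trials

random.seed(1)
print(play(switch=False))  # close to 1/3
print(play(switch=True))   # close to 2/3
```

The simulation makes the underlying logic visible: switching wins exactly when the contestant’s first guess was wrong, which happens two times out of three.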

I have discussed this problem in Devlin’s Angle on at least two occasions, the most recent being December 2005, and have presented it in a number of articles elsewhere, including national newspapers. That on each occasion I have been deluged with mail saying my solution is obviously false was never a surprise, since the problem is famous precisely because it presents the unwary with a trap. That, after all, is why I, and other mathematics expositors, use it! What continues to amaze me is how unreasonably resistant many people are to stepping back and trying to figure out where they went wrong in asserting that switching doors cannot possibly make any difference. For such reflection is the very essence of learning.

Wrapping your mind around the initially startling information that switching the doors doubles the probability of winning is akin to our ancestors coming to terms with the facts that the Earth is not flat or that the Sun does not move around the Earth. In all cases, we have to examine how it can be that what our eyes or experience seem to tell us is misleading. Only then can we accept the rock-solid evidence that science or mathematics provides.

Some initial resistance is good, to be sure. We should always be skeptical. But for us and society to continue to advance, we have to be prepared to let go of our original belief when the evidence to the contrary becomes overwhelming.

The Monty Hall problem is unusual (though by no means unique) in being simple to state and initially surprising, yet once you have understood where your initial error lies, the simple correct answer is blindingly obvious, and you will never again fall into the same trap you did on the first encounter. Many issues in life are much less clear-cut.

BTW, if you have never encountered the problem before, I will tell you it is not a trick question. It is entirely a mathematical puzzle, and the correct mathematics is simple and straightforward. You just have to pay careful attention to the information you are actually given, and not remain locked in the mindset of what you initially think it says. Along the way, you may realize you have misunderstood the notion of probability. (Some people maintain that probabilities cannot change, a false understanding that most likely results from first encountering the notion in terms of the empirical study of rolling dice and selecting colored beans from jars.) So reflection on the Monty Hall Problem can provide a valuable lesson in coming to understand the hugely important concept of mathematical probability.

As it happens, Hall’s death comes at a time when, for those of us in the United States, the system of evidence-based, rational inquiry which made the nation a scientific, technological, and financial superpower is coming under dangerous assault, with significant resources being put into a sustained attempt to deny that there are such things as scientific facts. For scientific facts provide a great leveler, favoring no one person or one particular group, and are thus to some, a threat.

The late Carl Sagan warned of this danger back in 1995, in his book The Demon-Haunted World: Science as a Candle in the Dark, writing:
“I have a foreboding of an America in my children’s or my grandchildren’s time — when the United States is a service and information economy; when nearly all the key manufacturing industries have slipped away to other countries; when awesome technological powers are in the hands of a very few, and no one representing the public interest can even grasp the issues; when the people have lost the ability to set their own agendas or knowledgeably question those in authority; when, clutching our crystals and nervously consulting our horoscopes, our critical faculties in decline, unable to distinguish between what feels good and what’s true, we slide, almost without noticing, back into superstition and darkness. ...”
Good scientists, such as Sagan, are not just skilled at understanding what is, they can sometimes extrapolate rationally to make uncannily accurate predictions of what the future might bring. It is chilling, but now a possibility that cannot be ignored, that a decade from now, I could be imprisoned for writing the above words. Today, the probability that will happen is surely extremely low, albeit nonzero. But that probability could change. As mathematicians, we have a clear responsibility to do all we can to ensure that Sagan’s words do not describe the world in which our children and grandchildren live.

Wednesday, September 20, 2017

The Legacy of Jonathan Borwein

Keith Devlin and Jonathan Borwein talk to host Robert Krulwick on stage at the World Science Festival in 2011.

At the end of this week I fly to Australia to speak and participate in the Jonathan Borwein Commemorative Conference in Newcastle, NSW, Borwein’s home from 2009 onwards, when he moved to the Southern hemisphere after spending most of his career at various Canadian universities. Born in Scotland in 1951, Jonathan passed away in August last year, leaving behind an extensive collection of mathematical results and writings, as well as a long list of service activities to the mathematical community. [For a quick overview, read the brief obituary written by his long-time research collaborator David Bailey in their joint blog Math Drudge. For more details, check out his Wikipedia entry.]

Jonathan’s (I cannot call him by anything but the name I always used for him) career path and mine crossed on a number of occasions, with both of us being highly active in mathematical outreach activities and both of us taking an early interest in the use of computers in mathematics. Over the years we became good friends, though we worked together on a project only once, co-authoring an expository book on experimental mathematics, titled The Computer as Crucible, published in 2008.

Most mathematicians, myself included, would credit Jonathan as the father of experimental mathematics as a recognized discipline. In the first chapter of our joint book, we defined experimental mathematics as “the use of a computer to run computations—sometimes no more than trial-and-error tests—to look for patterns, to identify particular numbers and sequences, to gather evidence in support of specific mathematical assertions that may themselves arise by computational means, including search.”

The goal of such work was to gather information and gain insight that would eventually give rise to the formulation and rigorous proof of a theorem. Or rather, I should say, that was Jonathan’s goal. He saw the computer, and computer-based technologies, as providing new tools to formulate and prove mathematical results. And since he gets to define what “experimental mathematics” is, that is definitive. But that is where our two interests diverged significantly.

In my case, the rapidly growing ubiquity of ever more powerful and faster computers led to an interest in what I initially called “soft mathematics” (see my 1998 book Goodbye Descartes) and subsequently referred to as “mathematical thinking,” which I explored in a number of articles and books. The idea of mathematical thinking is to use a mathematical approach, and often mathematical notations, to gather information and gain insight about a task in a domain that enables improved performance. [A seminal, and to my mind validating, example of that way of working was thrust my way shortly after September 11, 2001, when I was asked to join a team tasked with improving defense intelligence analysis.]

Note that the same phrase “gather information and gain insight” occurs in both the definition of experimental mathematics and that of mathematical thinking. In both cases, the process is designed to lead to a specific outcome. What differs is the nature of that outcome. (See my 2001 book InfoSense, to get the general idea of how mathematical thinking works, though I wrote that book before my Department of Defense work, and before I adopted the term “mathematical thinking.”)

It was our two very different perspectives on the deliberative blending of mathematics and computers that made our book The Computer as Crucible such a fascinating project for the two of us.

But that book was not the first time our research interests brought us together. In 1988, the American Mathematical Society introduced a new section of its ten-issues-a-year Notices, sent out to all members, called “Computers and Mathematics,” the purpose of which was both informational and advocacy.

Though computers were originally invented by mathematicians to perform various numerical calculations, professional mathematicians were, by and large, much slower at making use of computers in their work and their teaching than scientists and engineers. The one exception was the development of a number of software systems for the preparation of mathematical manuscripts, which mathematicians took to like ducks to water.

In the case of research, mathematicians’ lack of interest in computers was perfectly understandable—computers offered little, if any, benefit. (Jonathan was one of a very small number of exceptions, and his approach was initially highly controversial, and occasionally derided.) But the writing was on the wall—or rather on the computer screen—when it came to university teaching. Computers were clearly going to have a major impact in mathematics education.

The “Computers and Mathematics” section of the AMS Notices was intended to be a change agent. It was originally edited by the Stanford mathematician Jon Barwise, who took care of it from the first issue in the May/June 1988 Notices, to February 1991, and then by me until we retired the section in December 1994. It is significant that 1988 was the year Stephen Wolfram released his mathematical software package Mathematica. And in 1992, the first issue of the new research journal Experimental Mathematics was published.

Over its six-and-a-half-year run, the column published 59 feature articles, 19 editorial essays, and 115 reviews of mathematical software packages — 31 features, 11 editorials, and 41 reviews under Barwise; 28 features, 8 editorials, and 74 reviews under me. [The Notices website has a complete index.] One of the feature articles published under my watch was “Some Observations of Computer Aided Analysis,” by Jonathan Borwein and his brother Peter, which appeared in October 1992. Editing that article was my first real introduction to something called “experimental mathematics.” For the majority of mathematicians, reading it was their introduction.

From then on, it was clear to both of us that our views of “doing mathematics” had one feature in common: we both believed that for some problems it could be productive to engage in mathematical work that involved significant interaction with a computer. Neither of us was by any means the first to recognize that. We may, however, have been among the first to conceive of such activity as constituting a discipline in its own right, and each of us hung out a shingle to advertise what we were doing. In Jonathan’s case, he was advancing mathematical knowledge; for me it was about utilizing mathematical thinking to improve how we handle messy, real-world problems. In both cases, we were engaging in mental work that could not have been done before powerful, networked computers became available.

It’s hard to adjust to Jonathan no longer being among us. But his legacy will long outlast us all. I am looking forward to re-living much of that legacy in Australia in a few days’ time.