JANUARY 2001
Mathematics, Limited
Well it’s here at last: the year 2001, a date that for most of us who were coming of age in the 1960s can mean only one thing: Stanley Kubrick and Arthur C. Clarke’s celebrated futuristic science fiction movie 2001: A Space Odyssey. Just how correct has Clarke and Kubrick’s vision of the future turned out to be?
In the movie, a team of new millennium space explorers sets off on a long journey of discovery to Jupiter. To conserve energy, the team members spend most of the time in a state of hibernation, their life-support systems being monitored and maintained by the on-board computer HAL. Though HAL controls the entire spaceship, it is supposed to be under the ultimate control of the ship’s commander, Dave, with whom it communicates in a soothingly soft but emotionless male voice (actually that of actor Douglas Rain). But once the vessel is well away from Earth, HAL shows that it has developed what can only be called a “mind of its own.” Having figured out that the best way to achieve the mission for which it has been programmed is to dispose of its human baggage (expensive to maintain and sometimes irrational in their actions), HAL kills off the hibernating crew members, and then sets about trying to eliminate its two conscious passengers. It manages to maneuver one crew member outside the spacecraft and sends him spinning into outer space with no chance of return. Commander Dave is able to save himself only by entering the heart of the computer and manually removing its memory cells. Man triumphs over machine—but only just.
It’s a good story. (There’s a lot more to it than I just described.) But how realistic is the behavior of HAL? We don’t yet have computers capable of genuinely independent thought, nor do we have computers we can converse with using ordinary language (except in some fairly narrowly constrained domains). True, there have been admirable advances in systems that can perform useful control functions requiring decision making, and there are working systems that recognize and produce speech. But they are all highly restricted in their scope. Despite the oft-repeated claims that “the real thing” is just around the corner, the plain fact is that we are not even close to building computers that can reproduce human capabilities in thinking and using language.
But back in the 1960s, when 2001 was being made, there was no shortage of expert opinion claiming that the days of HAL were indeed just a few years off. The first such prediction had been made by the mathematician and computer pioneer Alan Turing. In his celebrated article “Computing Machinery and Intelligence,” written in 1950, Turing claimed, “I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.”
Though the last part of Turing’s claim seems to have come true, that is a popular response to years of hype rather than a reflection of the far less glamorous reality. There is now plenty of evidence from psychology, sociology, and linguistics to indicate that the original ambitious goals of machine intelligence are not achievable, at least when those machines are electronic computers, no matter how big or fast they get. (I present some of that evidence in my 1997 book Goodbye Descartes: The End of Logic and the Search for a New Cosmology of the Mind.) So how did the belief in intelligent machines ever arise?
Ever since the first modern computers were built in the late 1940s, it was obvious that they could do some things that had previously required an “intelligent mind.” For example, by 1956, a group at Los Alamos National Laboratory had programmed a computer to play a poor but legal game of chess. That same year, Allen Newell, Clifford Shaw, and Herbert Simon of the RAND Corporation produced a computer program called The Logic Theorist, which could—and did—prove some simple theorems in mathematics.
The success of The Logic Theorist immediately attracted a number of other mathematicians and computer scientists to the possibility of machine intelligence. Mathematician John McCarthy organized what he called a “two month ten-man study of artificial intelligence” at Dartmouth College in New Hampshire, thereby coining the phrase “artificial intelligence”, or AI for short. Among the participants at the Dartmouth program were Newell and Simon, Marvin Minsky, and McCarthy himself. The following year, Newell and Simon produced the General Problem Solver, a computer program that could solve the kinds of logic puzzles you find in newspaper puzzle columns and in the puzzle magazines sold at airports and railway stations. The AI bandwagon was on the road and gathering speed.
As is often the case, the mathematics on which the new developments were based had been developed many years earlier. Attempts to write down mathematical rules of human thought go back to the ancient Greeks, notably Aristotle and Zeno of Citium. But the really big breakthrough came in 1847, when an English mathematician called George Boole published a book called An Investigation of the Laws of Thought. In this book, Boole showed how to apply ordinary algebra to human thought processes, writing down algebraic equations in which the unknowns denoted not numbers but human thoughts. For Boole, solving an equation was equivalent to deducing a conclusion from a number of given premises. With some minor modifications, Boole’s nineteenth-century algebra of thought lies beneath the electronic computer and is the driving force behind AI.
Another direct descendant of Boole’s work was the dramatic revolution in linguistics set in motion by MIT linguist Noam Chomsky in the early 1950s. Chomsky showed how to use techniques of mathematics to describe and analyze the grammatical structure of ordinary languages such as English, virtually overnight transforming linguistics from a branch of anthropology into a mathematical science. At the same time that researchers were starting to seriously entertain the possibility of machines that think, Chomsky opened up (it seemed) the possibility of machines that could understand and speak our everyday language.
The race was on to turn the theories into practice. Unfortunately (some would say fortunately), after some initial successes, progress slowed to a crawl. The result was hardly a failure in scientific terms. For one thing, we do have some useful systems, and they are getting better all the time. The most significant outcome, however, has been an increased understanding of the human mind: how unlike a machine it is and how unmechanical human language use is.
One reason why computers cannot act intelligently is that logic alone does not produce intelligent behavior. As neuroscientist Antonio Damasio pointed out in his 1994 book Descartes’ Error, you need emotions as well. While Damasio acknowledged that allowing the emotions to interfere with our reasoning can lead to irrational behavior, he presented evidence to show that a complete absence of emotion can likewise lead to irrational behavior. His evidence came from case studies of patients for whom brain damage — whether from physical accident, stroke, or disease — had impaired their emotions but left intact their ability to perform “logical reasoning,” as verified using standard tests of logical reasoning skill. Take away the emotions and the result is a person who, while able to conduct an intelligent conversation and score highly on standard IQ tests, is not at all rational in his or her behavior. Such people often act in ways highly detrimental to their own well-being. So much for Western science’s idea of a “coolly rational person” who reasons in a manner unaffected by emotions. As Damasio’s evidence indicated, truly emotionless thought leads to behavior that by anyone else’s standards is quite clearly irrational.
And as linguist Steven Pinker explained in his 1994 book The Language Instinct, language too is perhaps best explained in biological terms. Our facility for language, said Pinker, should be thought of as an organ, along with the heart, the pancreas, the liver, and so forth. Some organs process blood, others process food. The language organ processes language. According to Pinker, we should think of language use as an instinctive, organic process, not a learned, computational one.
So, while no one would deny that work in AI and computational linguistics has led to some very useful computer systems, the really fundamental lessons that were learned were not about computers but about ourselves. The research was most successful in terms not of engineering but of understanding what it is to be human. Though Kubrick got it dead wrong in terms of what computers would be able to do by 2001, he was right on the mark in terms of what we would ultimately discover as a result of our science. 2001 showed the entire evolution of mankind, starting from our earliest hominid ancestors and taking us through the age of enlightenment into the present era of science, technology, and space exploration, and on into the then-anticipated future of routine interplanetary travel. Looking ahead more than thirty years to the start of the new millennium, Kubrick had no doubt where it was all leading. In the much discussed—and much misunderstood—surrealistic ending to the movie, Kubrick’s sole surviving interplanetary traveler reached the end of mankind’s quest for scientific knowledge, only to be confronted with the greatest mystery of all: Himself. In acquiring knowledge and understanding, in developing our technology, and in setting out on our exploration of our world and the universe, said Kubrick, scientists were simply preparing the way for a far more challenging journey into a second unknown: the exploration of ourselves.
The dawn of the new millennium (which most of humanity celebrated a year ago, twelve months before the calendar event itself) sees mankind about to pursue that new journey of discovery. Far from taking away our humanity, as many feared, attempts to get computers to think and to handle language have instead led to a greater understanding of who and what we are.
If much of this article seems familiar, there’s a good reason. I adapted it from a column I wrote four years ago. I don’t normally do that. But with the arrival of the prophetic year 2001 coinciding with the inauguration of a new US President who seems committed to putting the defense of the nation—and with it all life on earth—in the hands of a computer-controlled missile defense system, it seemed worth saying the same thing again, and reminding ourselves that mathematics and the technologies based on mathematics have their limitations.
Devlin’s Angle is updated at the beginning of each month.
Keith Devlin ( devlin@stmarys-ca.edu) is Dean of Science at Saint Mary’s College of California, in Moraga, California, and a Senior Researcher at Stanford University. His latest book is The Math Gene: How Mathematical Thinking Evolved and Why Numbers Are Like Gossip, published by Basic Books.
FEBRUARY 2001
As others see us
“Mathematicians have no friends, except mathematicians. They are usually fat, unmarried, aren’t seeing anyone, and have wrinkles in their forehead from thinking so hard.”
Sound familiar? How about this?
“[Mathematicians are usually] bald, overweight, unmarried men who wear beards and glasses and lead little or no social life.”
Neither statement would come as a surprise if you were in the habit of asking school pupils to describe their image of what a typical mathematician is like. The two quotations are taken verbatim from the recently released report of a study of 476 12- to 13-year-olds in the US, Britain, Finland, Germany, and Romania, reported in the Christian Science Monitor on 8 January. They are representative of a strongly negative stereotype of my profession widely held by young adolescents.
Now, a straightforward application of classical probability theory tells me that, among the thousands of readers of Devlin’s Angle there probably is a fat, unmarried, forehead-wrinkled, friendless individual who can’t get a date. I dare say my readership also includes the occasional bald, overweight, unmarried man with a beard and glasses who leads little or no social life. To such readers, let me, like many a commentator before me, say please don’t blame the messenger. I’m not making this up, I’m simply reporting it. The question I want to ask is, “How representative is that picture of a mathematician?”
Having moved in mathematical circles for over thirty-five years now, my answer is an unequivocal “Not at all.”
“So why worry?” you might say. “Does it matter that we don’t all look like the tall, blond-haired, blue-eyed Matt Damon in the movie Good Will Hunting?” Of course it doesn’t matter. Being a mathematician has nothing to do with physical appearance or an ability to get a date. (And, for the record, I personally don’t have a problem with fat, unmarried, forehead-wrinkled, friendless individuals who can’t get a date, nor do I mind if such people become mathematicians.)
The reason the report bothers me is that it may indicate a major reason why so few children elect to study mathematics (or allied fields like physics or computer science) at university, and hence go on to pursue a career as a mathematician or a mathematics teacher. As Susan Picker, a member of the research team that produced the report, observes, “For children, images tend to be gatekeepers, and kids who would prefer an active social life don’t want to end up being lonely geeks.”
Unfortunately, according to Professor John Berry of Plymouth University in England, who ran the study, “The image we got … was a very negative one.” “The average picture was of a scruffy person, probably with pens in his shirt pocket, holes in his clothes, and equations written on his arms.” (Professor Berry also points out that the children interviewed almost invariably assumed math teachers were male.)
Picker adds that when she asked children to draw a mathematician, the result was predictable. “The children saw them as unprepossessing nerds.”
With countries such as the United States and Britain currently facing a chronic shortage of students electing to study math and go on to become mathematics teachers, the report raises some significant questions. The first is: “Where are these young children picking up this stereotype?” It’s tempting to blame those familiar whipping boys: television and the movies. Except that mathematicians are almost never portrayed in either medium! As far as Hollywood and the television production companies are concerned, mathematicians simply don’t merit any attention, positive or negative.
So maybe the problem has its origins closer to home. Perhaps we can see at least elements of the stereotype in reality. In a profession where, in the final analysis, truth is everything and appearance nothing, maybe many of us do pay little attention to our dress. But people in many other professions don’t spend a fortune on clothes or hours in front of the mirror either. Likewise, few of us in any profession have the slim, youthful appearance of a movie star. So the children’s comments on clothing and corporal geometry must surely be a manifestation of something else. I wonder if that something else might be attitude. Perhaps some of us—actually, given the report’s findings it would have to be many of us—after years of struggle trying to get students to “see it”, start to show signs of weariness and/or impatience.
For instance, according to the report, in Finland several children drew pictures of math teachers holding machine guns. One pupil wrote a caption: “If these sums are wrong, it’s the end of you.”
Does that tell us anything?
A few years ago, I was speaking at a conference for middle and high school mathematics teachers. In my presentation, I made the point that I thought it was important for those of us in the mathematics education business to convey the notion that mathematics can be fun—that there is enjoyment to be had from working out a problem that at first seems impossible. The mathematician who spoke directly after me began by remarking that he did not think mathematics should be fun at all. Now, I knew—and I confirmed it with him afterwards—that his opening remark was intended to be amusing. After all, like me, this person was someone who loved mathematics and had devoted his life to it. Unfortunately, to both our surprise, a large section of the audience burst into loud and enthusiastic applause at his remark. Neither of us was in any doubt. These teachers believed down to their socks that mathematics should not be fun. Like long cross-country runs and cold showers, you did mathematics because it was good for you, and the pain and the misery were an essential part of it.
Well, we live in a free country, and every one of those individuals in my audience was entitled to his or her view of how mathematics should be taught. Whether or not society should employ them as teachers of our children is another matter. Particularly if, as the Plymouth study might suggest, the attitudes of a minority of teachers result in a negative stereotype that is applied to the majority.
These thoughts were brought to my mind when I attended the World Economic Forum meeting in Davos last month. After I gave my talk—a brief survey of how mathematics can be applied in the modern world—a significant number of members of the audience came up to me and started to recount their own very negative experiences of mathematics at school. Now, by the very nature of the annual Davos meeting, all of the attendees are there because they have been highly successful in their chosen arena and have reached positions of power and influence in society—often considerable power and influence. The ones who attended my talk did not appear to have a genetic predisposition to hate mathematics. Indeed, many of them told me they found what I said fascinating—as any mathematician knows mathematics is. And yet our profession, the mathematics educators, had sent these future leaders of society out into the world with a highly negative experience of the discipline.
As a profession, we debate endlessly what is the best way to teach mathematics. And that is all to the good. But what we teach and the methodology we adopt are surely not all there is to it. The attitude we convey is, at least in my view, also important. In fact, I think it is crucial. If the teacher does not convey enthusiasm and love for mathematics—let alone if he or she does not even have that enthusiasm and love—then no amount of curriculum development or good instructional technique is going to succeed.
My guess is that the negative stereotype of mathematics teachers that came out of the Plymouth study is simply a stereotypical representation in physical form of a psychological impression. Read what the children said not as a description of how their teachers actually looked but rather as a code for the way they came across in the classroom, and a very different picture emerges. One I feel we need to do something about.
I fear it is unlikely we can completely solve the problem of the highly negative image of mathematics and mathematicians until society decides to pay mathematics teachers more, and thereby attract more qualified people into the profession. After all, it’s hard for a teacher who does not himself or herself feel comfortable with and have a love for mathematics to convey the subject in an exciting and engaging way. (I suspect that those “make ’em suffer” teachers at that conference I attended had little real love for the subject, and adopted the attitude that what they felt was good for them was darned well good for their pupils.) Well, even with increased funding and better conditions for math teachers, it would take at least a generation to fill our schools with top-notch math teachers who could really inspire their pupils. But those of us who are mathematicians in colleges, universities, or industry can do something now. We can provide young children with role models.
Another session at Davos was about the importance of good role models in many walks of life. (Incidentally, forget all those newspaper articles about the annual Davos meeting being simply an exclusive club where the world’s elite get together once a year to plot the future of the world. There is much talk about ethics, about science, about education, and in general about trying to improve the world.) World leaders of all kinds spoke of the crucial influence role models had played in their lives. We can learn from them. It takes very little time or effort to visit a local school occasionally to tell the students what we do—what mathematicians do. And in my experience, most teachers are delighted to have a “real, live, practicing mathematician” come into the classroom. The effect can be significant.
As Picker describes in the Plymouth report, after the research had been completed, she arranged for a group of children from the study to meet and talk with eight leading mathematicians. “After that, the drawings they did were quite different,” she said. “They saw mathematicians as real people.” Real people. Now there’s a thought.
I think we need to work on our image. And I don’t mean our wardrobe or our waistline.
Devlin’s Angle is updated at the beginning of each month.
Keith Devlin ( devlin@stmarys-ca.edu) is Dean of Science at Saint Mary’s College of California, in Moraga, California, and a Senior Researcher at Stanford University. His latest book is The Math Gene: How Mathematical Thinking Evolved and Why Numbers Are Like Gossip, published by Basic Books.
MARCH 2001
Claude Shannon
Mathematician Claude Shannon died on Saturday, February 24, aged 84, after a long struggle with Alzheimer’s disease. But his intellectual legacy will live on as long as people communicate using phone and fax, log on to the Internet, or simply talk about “information” as a commodity that can be measured in “bits” and shipped from place to place. The approach to information and communication Shannon laid out in his groundbreaking paper “A Mathematical Theory of Communication,” published in the Bell System Technical Journal in 1948, and republished virtually unchanged in the pamphlet The Mathematical Theory of Communication he wrote with Warren Weaver the following year (published by the University of Illinois Press), remains current to this day. (Note how the “a” of his paper became “the” in the Shannon-Weaver version.)
Shannon was born in Michigan in 1916. After obtaining degrees in both mathematics and engineering at the University of Michigan, he went to MIT to pursue graduate studies in mathematics. There he came into contact with some of the men who were laying much of the groundwork for the information revolution that would take off after the Second World War, notably the mathematician Norbert Wiener (who later coined the term cybernetics for some of the work in information theory that he, Shannon, and others did at MIT and elsewhere) and Vannevar Bush, the dean of engineering at MIT (whose conceptual “Memex” machine foretold the modern World Wide Web and whose subsequent achievements included the establishment of the National Science Foundation).
In the early 1930s, Bush had built a mechanical, analog computer at MIT called the Differential Analyzer, designed to solve equations that were too complex for the (mechanical) calculating machines of the time. This massive assemblage of cog wheels, shafts, gears, and axles took up several hundred square feet of floor space, and was powered by electric motors. Preparing the device to work on a particular problem required physically configuring the machine, and could take two or three days. After the machine had completed the cycle that constituted “solving” the equation, the answer was read off by measuring the changes in position of various components.
Always a “tinkerer,” Shannon took to working with the Analyzer with great enthusiasm. At Bush’s suggestion, for his master’s thesis, he carried out a mathematical analysis of the operation of the machine’s relay circuits. In 1938, he published the results of this study in the Transactions of the American Institute of Electrical Engineers under the title “A Symbolic Analysis of Relay and Switching Circuits.”
Bush’s seemingly mundane motivation for having Shannon do the work was the telephone industry’s need for a mathematical framework in which to describe the behavior of the increasingly complex automatic switching circuits that were starting to replace human telephone operators. What Shannon produced far transcended that aim. The ten page article that he published in the Transactions of the AIEE has been described as one of the most important engineering papers ever written. And with good reason: quite simply, it set the stage for digital electronics.
Shannon began by noting that, although the Analyzer computed in an analog fashion, its behavior at any time was governed by the positions of the relay switches, and they were always in one of just two states: open or closed (or on or off). This led him to recall the work of the nineteenth century logician George Boole, whose mathematical analysis of the “laws of thought” was carried out using an algebra in which the variables have just the two “truth values” T and F (or 1 and 0). From there it was a single—but major—step to thinking of using relay circuits to build a digital “logic machine” that could carry out not just numerical computations but also other kinds of “information processing.”
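To make the link concrete, here is a minimal illustrative sketch, written in Python purely for exposition (Shannon, of course, worked with physical relays and Boolean algebra, not code): model each relay as a 0 or 1, build Boole’s operations out of series and parallel connections, and a small “logic machine” such as a one-bit adder drops out.

    # Each relay is modeled as 0 (open / off) or 1 (closed / on).

    def AND(a, b):
        # two relays in series: current flows only if both are closed
        return a & b

    def OR(a, b):
        # two relays in parallel: current flows if either is closed
        return a | b

    def NOT(a):
        # a normally closed relay: the output is the opposite of the input
        return 1 - a

    def half_adder(a, b):
        # a tiny "logic machine": add two one-bit numbers using only switches
        total = AND(OR(a, b), NOT(AND(a, b)))   # exclusive-or built from AND, OR, NOT
        carry = AND(a, b)
        return total, carry

    for a in (0, 1):
        for b in (0, 1):
            print(a, "+", b, "->", half_adder(a, b))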
In 1940, Shannon obtained his doctorate in mathematics, and went to the Institute for Advanced Study at Princeton as a National Research Fellow, where he worked with Hermann Weyl. The following year, he took a position at the Bell Telephone Laboratories in New Jersey, joining a research group who were trying to develop more efficient ways of transmitting information and improving the reliability of long-distance telephone and telegraph lines.
In the 1950s, Shannon became interested in the idea of machine intelligence, and was one of the conveners—together with his soon-to-be-famous mentees John McCarthy and Marvin Minsky—of the now-legendary 1956 conference at Dartmouth College in New Hampshire that is generally acknowledged as the birth of artificial intelligence (or AI), as it later became known. But while others (McCarthy and Minsky among them) would become identified with AI, Shannon’s name will be forever associated with the theory of information and communication that the world learned of from the Shannon-Weaver pamphlet.
Prior to Shannon’s work, mathematicians and engineers working on communications technology saw their job as finding ways to maintain the integrity of an analog signal traveling along a wire as a fluctuating electric current or through the air as a modulated radio wave. Shannon took a very different approach. He viewed “information” as being completely encoded in digital form, as a sequence of 0s and 1s—which he referred to as “bits” (for “binary digits”), following a suggestion of his Princeton colleague John Tukey. In addition to providing the communications engineers with a very different way of designing transmission circuits, this shift in focus also led to a concept of “information” as an objective commodity, disembodied from a human “sender” or “receiver.” After Shannon, the name of the game became: how can you best send a sequence of discrete electrical or electromagnetic pulses from one point to another?
A particular consequence of this new approach, which Shannon himself was not slow to observe, was that whereas even a small variation in an analog signal distorts—and can conceivably corrupt—the information being carried by that signal, the discrete yes-or-no/on-or-off nature of a digital signal means that information conveyed digitally is far less prone to corruption; indeed, by adding extra bits to the signal, automatic error detection and correction can be built into the system. (A feature of digital coding that, decades later, would enable Napster users to download music files over the phone lines and play the latest pop music on their desktop PC with a fidelity limited only by the quality of the computer’s sound system, and which is further exemplified by the oft-repeated claim of CD manufacturers that you can drill a centimeter hole in your favorite music CD and it will still play perfectly.)
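To see in miniature how extra bits buy reliability, here is a short Python sketch, purely for illustration (these toy schemes are far cruder than the codes real systems use, but the principle is the same): a single parity bit detects one flipped bit, and repeating each bit three times lets a single flip be corrected automatically.

    def add_parity(bits):
        # append one check bit so that the total number of 1s is even
        return bits + [sum(bits) % 2]

    def parity_ok(received):
        # a single flipped bit makes the total odd, so the error is detected
        return sum(received) % 2 == 0

    def encode_repeat(bits):
        # send each bit three times
        return [b for b in bits for _ in range(3)]

    def decode_repeat(received):
        # a majority vote in each group of three corrects any single flip
        return [int(sum(received[i:i + 3]) >= 2) for i in range(0, len(received), 3)]

    message = [1, 0, 1, 1]
    sent = encode_repeat(message)
    sent[4] ^= 1                           # simulate one bit corrupted in transit
    print(decode_repeat(sent) == message)  # True: the flip has been corrected
    print(parity_ok(add_parity(message)))  # True; flip any one bit and it becomes False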
From a mathematical point of view, arguably the most significant aspect of Shannon’s new, digital conception of information is that it provides a way to measure information—to say exactly how much information a particular signal carries. The measure is simple: count the minimum number of bits it takes to encode the information. To do this, you have to show how a given item of information can be arrived at by giving the answers to a sequence of yes/no questions.
For example, suppose that eight work colleagues apply for a promotion: Alberto, Bob, Carlo, David, Enid, Fannie, Georgina, and Hilary. After the boss has chosen which person will get the position, what is the minimum number of yes/no questions you have to ask to discover his or her identity? A few moments’ thought will indicate that the answer is 3. Thus, the information content of the message announcing who got the job is 3 bits. Here is one way to arrive at this figure:
First question: Is the person male?
That cuts down the number of possibilities from 8 to 4.
Second question: Does the person’s name end in a vowel?
That reduces the field to a single pair.
Third question: Is the person the taller of the two?
Now you have your answer. Of course, this particular sequence of questions assumes that no final pair of applicants is the same height. Moreover, I rigged it to have four males and four females, with carefully chosen names. But the principle will work for any example. All you need is a framework within which a series of yes/no questions (or other binary decisions) will repeatedly halve the number of possibilities until just one remains. (If the number of possibilities at the outset is not a power of 2, there will be a little redundancy in the decision sequence, but you’ll still get a measure of the information content. For example, if there were just 7 candidates, the information content of the final decision would still be 3 bits.)
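For readers who like to see the counting spelled out, here is a small Python sketch, purely illustrative, that computes the bit count as the smallest whole number of yes/no questions (the ceiling of the base-2 logarithm of the number of possibilities) and replays the question sequence above.

    import math

    def bits_needed(possibilities):
        # minimum number of yes/no questions for this many equally likely outcomes
        return math.ceil(math.log2(possibilities))

    print(bits_needed(8))   # 3: eight equally likely candidates
    print(bits_needed(7))   # 3: a little redundancy, but still 3 bits

    candidates = ["Alberto", "Bob", "Carlo", "David",
                  "Enid", "Fannie", "Georgina", "Hilary"]

    # Question 1: "Is the person male?"  (cuts 8 possibilities to 4)
    pool = [c for c in candidates if c in {"Alberto", "Bob", "Carlo", "David"}]
    # Question 2: "Does the name end in a vowel?"  (cuts 4 to 2)
    pool = [c for c in pool if c[-1].lower() in "aeiou"]
    print(pool)             # ['Alberto', 'Carlo']; a third question decides between them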
Building on this simple idea, Shannon was able to develop an entire quantitative theory of information content that has proved to be of enormous importance to the engineers who have to decide how much “channel capacity” a particular communications network requires at each point. So complete was his initial analysis that, although you can find the theory described in many contemporary textbooks, you might just as well go back to his original 1949 pamphlet with Weaver. Except for one thing: the name “information theory” is misleading.
As has been pointed out by a number of workers (including myself in my 1991 book Logic and Information), Shannon’s theory does not deal with “information” as that word is generally understood, but rather with data—the raw material out of which information is obtained. (See my book InfoSense for a discussion of the distinction.) In Shannon’s theory, what is measured is the size of the (binary) signal. It does not matter what that signal denotes. According to Shannon’s measure, any two books of 100,000 words have exactly the same information content. That’s a useful (if misleading) thing to say if your goal is simply to transmit both books digitally over the Internet. But if one is an instruction manual for building a nuclear-powered submarine and the other a trashy novel, no one would claim that the two contain the same amount of “information.”
By the same token, anyone who thinks that the information content of Shannon’s 1948 paper can be captured by the statement that it is “10 pages worth” must surely have been in a trance for the past fifty years in which Shannon’s ideas have transformed the world.
Devlin’s Angle is updated at the beginning of each month.
Keith Devlin ( devlin@stmarys-ca.edu) is Dean of Science at Saint Mary’s College of California, in Moraga, California, and a Senior Researcher at Stanford University. His latest book is The Math Gene: How Mathematical Thinking Evolved and Why Numbers Are Like Gossip, published by Basic Books.
APRIL 2001
Knotty Problems
Anyone who has tried to save their Christmas tree lights to use the next year will know that, no matter how carefully you put them away, as soon as you start to uncoil them, you find that at best you are faced with some nasty looking knots and at worst a hopeless tangle.
Neither science nor mathematics can explain why this always happens, and probably never will. But, thanks to some recent work by the mathematicians Jeffrey Lagarias of AT&T Laboratories and Joel Hass of the University of California at Davis, we do now know how much effort might be involved in straightening out the lights.
The new result is in knot theory, a branch of mathematics that goes back to the work of Gauss in the middle of the 19th century. It began with a typical pure mathematician’s question: Can you find a mathematical way to describe different knots?
One way to try to classify knots is by counting the number of times the string (or Christmas tree lights) crosses itself, and much of the early work in knot theory consisted of discovering how many genuinely different kinds of knot there are for each given “crossing number.”
It turns out that there is one knot with crossing number 3, one with crossing number 4, two with 5, three with 6 crossings, seven with 7, twenty-one with 8, forty-nine with 9, and one hundred and sixty-five with 10. In 1998, using computers, two teams of knot theorists managed to tabulate all knots having 16 or fewer crossings. There are exactly 1,701,936 of them altogether.
One reason why knot theory is such a tricky subject is that it’s hard to see if two knots are the same simply by looking at them. For instance, stage magicians often present you with what looks like a knotted rope, but then they pull on the two ends and, lo and behold, the “knot” simply falls away. The rope wasn’t really knotted at all; it was just tangled up.
In fact, this is precisely the issue that Lagarias and Hass’s new result addresses. Given a length of string that is all tangled up, but not technically knotted, what is the maximum number of steps it could take you to straighten it out?
Before you try this, I should tell you that, as with the magician’s “knot”, you are not allowed to let go of the free ends. In fact, mathematicians eliminate the free-ends issue altogether by assuming that the string is a closed loop, something that could be achieved in practice by gluing the two ends together.
With the free ends gone, the only way to untie a knot, or untangle a tangle, is by manipulating the string. But how? Knot theorists have long known that a combination of just three basic moves—called Reidemeister moves after their discoverer—will always untangle a tangled-up loop that is not actually knotted. The problem is, no one knew just how many Reidemeister moves it could take.
Now, thanks to the recent theorem, we do. But don’t hold your breath. According to Lagarias and Hass’s result, if the loop has N crossovers, then you can untangle it in no more than 2 to the power (100 billion times N) basic moves. Carrying out that many moves would take far longer than the entire life of the universe. So the result is not practical. On the other hand, going from having no explicit bound at all to having a finite one is a big breakthrough, and it could set the stage for finding a more realistic number.
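To get a feel for just how impractical the bound is, here is a short, purely illustrative Python calculation; the figures of a trillion moves per second and a universe roughly 4.3 x 10^17 seconds old are my own back-of-envelope assumptions, not part of the theorem.

    import math

    N = 1                                  # crossings in the tangled loop
    exponent = 100_000_000_000 * N         # the exponent in the Hass-Lagarias bound
    digits = exponent * math.log10(2)      # decimal digits in 2**exponent
    print(f"2^{exponent} has about {digits:.2e} decimal digits")

    # Rough assumptions: a trillion moves per second, sustained for the
    # entire age of the universe (about 4.3e17 seconds).
    log10_moves_ever = math.log10(1e12) + math.log10(4.3e17)
    print(f"A universe-lifetime of effort performs only about 10^{log10_moves_ever:.0f} moves")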
So who cares? Does this result have any practical importance? Other than the importance of knots to scouts, campers, and sailors, does knot theory have any uses anyway? As so often happens in mathematics, what started as a question driven by pure curiosity has turned out to be of major importance in at least two sciences.
Physicists now believe that matter is made up of tiny loops of space-time—the strings of Superstring Theory—and the mathematics of knot theory turns out to be exactly what they need to describe those loops.
Secondly, knot theory plays a role in our understanding of DNA. Because a typical DNA molecule is so long, it has to coil itself up to squeeze into the cell. Some viruses work by changing the knot structure of the DNA, making it behave differently (to benefit the virus rather than the original “owner” of the DNA). By using electron microscopes and the mathematics of knots, collaborative teams of biologists and mathematicians have started to make headway in figuring out just how some viruses manage to infect and take over a cell—knowledge that might one day lead to more effective medicines to fight disease.
With applications like those, almost any advance in our understanding of knots could turn out to have enormous significance. Meanwhile, knot theorists are delighted to have found a bound on the number of moves it takes to straighten out a tangled unknot. Even if it won’t help them at Christmas.
NOTE: A slightly different version of this article appeared in The Guardian newspaper in the UK in March.
Devlin’s Angle is updated at the beginning of each month.
Keith Devlin ( devlin@stmarys-ca.edu) is Dean of Science at Saint Mary’s College of California, in Moraga, California, and a Senior Researcher at Stanford University. His latest book is The Math Gene: How Mathematical Thinking Evolved and Why Numbers Are Like Gossip, published by Basic Books.
MAY 2001
Car Talk Woes
A few weeks ago on NPR’s popular afternoon program All Things Considered, Tom Magliozzi, who together with his brother Ray hosts the even more popular NPR program Car Talk, suggested that the teaching of algebra, geometry, and calculus in schools was a waste of time, and that we’d all be better off if pupils were taught more useful things.
Now, along with millions of other listeners who make Car Talk by far the most popular show on NPR, I am an avid fan of the Magliozzi brothers. If tomorrow morning my 1995 Buick Park Avenue with 75,000 miles on the clock developed a strange BRRR–CLICK–BRRR–CLICK–BRRR–CLICK sound, Tom and Ray, who go by the nom-de-radio Click and Clack, would be the first people I would turn to for help. When it comes to cars, they know their stuff.
But when it comes to mathematics, well, Tom, I’ve gotta tell you, you are so off base, it’s scary. So scary, in fact, that when I heard your comments, I said to myself: “This is a smart guy—heavens, he was a professor at MIT for many years. How come he thinks teaching mathematics serves no useful purpose?”
It didn’t take me long to find out. Thanks to the World Wide Web—an entirely mathematical invention, by the way—I was able to replay Tom’s words, and it was clear what was going on. The real culprit isn’t Tom, it’s the way mathematics so often gets taught in schools.
The event that prompted Tom’s remarks was a back-to-school night at his son’s high school. On the board in the math classroom Tom read the following statement:
“Calculus is the set of techniques that allow us to determine the slope at any point on a curve and the area under that curve.”
AGGGHHH. If I didn’t know any better, that would have made me react the same way as Tom, although I’m not sure that Tom’s phrase “Who gives a rat’s patootie?” works as well in an English accent as it does in a Car Talk voice.
Having been a mathematician—not a math teacher I should add—for thirty years, I can think of dozens of things I might have written on the board to describe calculus.
For example: Calculus is a set of techniques that scientists and engineers use to describe accurately the way things move—planets, space shuttles, ballistic missiles, electricity, radio waves, stock market prices, blood, the heart, the muscles of the body, and so on.
Or, and this one is designed specially for Tom: calculus is a set of techniques that enable automobile designers and manufacturers to design and build a modern automobile, with all its moving parts.
Or: calculus was the major intellectual discovery of the seventeenth century that made possible the scientific revolution and all of modern science, technology, and medicine.
Or: calculus is an absolutely indispensable tool for designing computers, radios, televisions, telephones, VCR machines, CD players, airplanes, artificial heart valves, CAT scan machines, and the GPS navigation system. I could go on for ever.
Or: calculus is the language physicists use to understand the universe and the world we live in.
Or even simply, calculus is one of the greatest intellectual achievements of humankind.
But if you take one of our culture’s most impressive and useful inventions, that quite literally changed the world, and reduce it to a trivial statement about finding slopes of curves, as Magliozzi’s son’s teacher did, then there’s no wonder Tom reacted the way he did.
So we shouldn’t blame Tom. He’s simply the product of the education he received—although I wonder who he mixed with during all those years on the faculty at MIT. And we shouldn’t blame the hapless teacher who started this whole thing. It may well be that he or she is doing their best, given their education. Much of the blame lies in the way universities train future mathematics teachers.
Mathematics only exists because it is so useful. No part of the subject should ever be taught at school level without explaining why it was invented and what some of its uses are. Uses that directly affect everyone in the classroom.
According to Magliozzi, the purpose of education is, and I quote, to “help us to understand the world we live in.” That, Tom, is precisely why some of our ancestors developed mathematics. Without mathematics, your weekly show would have to be called Horse Talk and you and your brother would have to be called Whinnie and Neigh—except that without mathematics there wouldn’t be any radio to do it on either, so you’d have to do it by standing in Harvard Square and yelling as loud as you could.
By the way, I still think Car Talk is one of the best programs on radio.
Devlin’s Angle is updated at the beginning of each month.
Keith Devlin ( devlin@stmarys-ca.edu) is Dean of Science at Saint Mary’s College of California, in Moraga, California, a Senior Researcher at Stanford University, and “The Math Guy” on NPR’s Weekend Edition. His latest book is The Math Gene: How Mathematical Thinking Evolved and Why Numbers Are Like Gossip, published by Basic Books.
JUNE 2001
How many real numbers are there?
How many real numbers are there? One answer is, “Infinitely many.” A more sophisticated answer is “Uncountably many,” since Georg Cantor proved that the real line—the continuum—cannot be put into one-one correspondence with the natural numbers. But can we be more precise?
Cantor introduced a system of numbers for measuring the size of infinite sets: the alephs. The name comes from the symbol Cantor used to denote his infinite numbers, the Hebrew letter aleph—a symbol not universally available for web pages. He defined an entire infinite hierarchy of these infinite numbers (or cardinals), aleph-0 (the first infinite cardinal, the size of the set of natural numbers), aleph-1 (the first uncountable cardinal), aleph-2, etc.
The infinite cardinals can be added and multiplied, just as the finite natural numbers can, only it’s much easier to learn the answers. The sum or product of any two infinite cardinals is simply the larger of the two.
You can also raise any finite or infinite cardinal to any finite or infinite cardinal power. And this is where things rapidly become tricky. To pick the simplest tricky case, if K is an infinite cardinal, what is the value of 2^K (2 raised to the power K)? Cantor proved that the answer is strictly bigger than K itself, but that’s as far as he got. In particular, he was not able to figure out whether or not 2^aleph-0 is equal to aleph-1.
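For readers who prefer symbols, the arithmetic just described, together with the question Cantor could not answer, can be summarized as follows (written in LaTeX, with the column’s aleph-0 and aleph-1 rendered as \aleph_0 and \aleph_1):

    \[
      \kappa + \lambda \;=\; \kappa \cdot \lambda \;=\; \max(\kappa, \lambda)
      \quad \text{for infinite cardinals } \kappa, \lambda,
    \]
    \[
      2^{\kappa} > \kappa \quad \text{(Cantor's theorem)},
      \qquad
      2^{\aleph_0} \overset{?}{=} \aleph_1 \quad \text{(the Continuum Hypothesis)}.
    \]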
The significance of this question for the rest of mathematics lay in the fact that 2^aleph-0 is the size of the real continuum, i.e., the number of real numbers. Since Cantor had been able to prove that there are aleph-0 rational numbers, the next obvious question to ask was, how many real numbers are there? Being unable to answer this question was frustrating to say the least, and Hilbert included the problem in his famous 1900 list.
The proposal that 2^aleph-0 = aleph-1 became known as Cantor’s Continuum Hypothesis. It turned out to be intimately connected with the choice of axioms for the construction of infinite sets. The axioms generally accepted by the mathematical community were formulated by Ernst Zermelo and Abraham Fraenkel in the early twentieth century. In 1938, Kurt Goedel stunned the mathematical world with his proof that the Zermelo-Fraenkel axioms were not sufficient to prove that the Continuum Hypothesis is false.
What made this stunning was not the result itself. Apart from logicians and a few real analysts, most mathematicians didn’t care about the Continuum Hypothesis one way or the other. Rather, it was the fact that Goedel had found a way to prove, conclusively, that something could not be proved. (Notice that Goedel’s proof that the Continuum Hypothesis could not be disproved in Zermelo-Fraenkel set theory did not imply that it could be proved in that theory. Absence of disproof—even proven absence of disproof—is not proof.)
With the sure knowledge that the Continuum Hypothesis could not be proved false, the hunt was on to prove it true. That hunt proved unfruitful, and in 1963 Paul Cohen showed why. In a mathematical tour de force that won him a Fields Medal, he proved that the Continuum Hypothesis could not be proved true either! (Within the axiomatic framework of Zermelo and Fraenkel.) The hypothesis was undecidable.
Of course, the natural response was to look for additional axioms to augment the Zermelo-Fraenkel system, axioms that would enable the Continuum Hypothesis to be resolved one way or the other. And many mathematicians did just that. But without success.
The problem was that set theory was a foundational subject, one that had been developed in an attempt to provide a unified framework for all of mathematics (including arithmetic). Its axioms, to be acceptable, had to be “intuitively obvious.” No one could find an intuitively obvious new axiom that settled the Continuum Hypothesis.
One possibility that I personally found appealing (my Ph.D. was in set theory and Cantor’s infinite cardinal arithmetic, and I specialized in that area for the first fifteen years of my career) was the “Axiom of Constructibility”. This principle was formulated by Goedel in the course of his proof that the Continuum Hypothesis could not be disproved using the Zermelo-Fraenkel axioms. Although Goedel did not propose adopting it as an axiom of set theory, I felt it had sufficient “naturalness” in its favor to do so. Not because I believed it was “true.” When it comes to doing mathematics on infinite sets, I don’t think the notion of truth comes into it. Rather, I felt that the meta-message in Cohen’s result (and a lot of similar results that came in its wake) was that the axioms of set theory should be chosen on pragmatic grounds.
On the assumption that the main goal of set theory is to provide a universal foundation for mathematics, I could (and in 1977 did) put forward what I believed was a good argument in favor of adopting the Axiom of Constructibility. (I laid out my argument in my monograph The Axiom of Constructibility: A Guide for the Mathematician, published in the Springer-Verlag Lecture Notes in Mathematics series in 1977.)
If the Axiom of Constructibility was assumed (as an additional axiom, on top of the Zermelo-Fraenkel system), then you could prove that the Continuum Hypothesis is true.
For a variety of reasons, many mathematicians did not buy my arguments, or those of others who also proposed the Axiom of Constructibility. But no one came up with what I thought was a compelling counter-argument. At least, not at the time. That changed in 1986, when Christopher Freiling published an intriguing paper in Volume 51 of the Journal of Symbolic Logic. In his paper, titled “Axioms of Symmetry: throwing darts at the real number line”, Freiling puts forward the following thought experiment.
You and I are throwing darts at a dartboard. We are separated by a screen, so that nothing either of us does can influence the other. At a given signal from a third party, we both throw a dart at the board. We do so entirely randomly. (Formally, since the points on the dartboard can be put into a one-one correspondence with the real numbers, we are simply two independent random number generators.)
How is the winner decided? Well, the organizer has chosen a well-ordering of the real numbers (i.e., the points on the dartboard), say <<. The aim is to land on a point associated with the larger number. If your dart lands on a number (point) Y that is << the number M my dart lands on, I win; otherwise you win. Simple, no?
Well, there’s more. Suppose the Continuum Hypothesis were true. Then the organizer could have chosen the well-ordering so that, for any number X, the set {R|R << X} is countable. Agreed?
Now, since we throw independently, we can assume I threw first. My dart lands at point M. Now you throw. Since the set {R|R << M} is countable, the probability that your dart lands at a point Y for which Y << M is zero. (Any countable set has measure zero.) Thus, with probability 1, Y >> M, and you win.
But the situation is entirely symmetrical, and so by the same argument, with probability 1, I win.
But this is an impossible situation. Conclusion: there can be no such well ordering, and hence the Continuum Hypothesis is false. Right?
Well, not quite. To make the above argument go through formally, we have to assume that the graph of the well-ordering << is measurable. And there is no justification for making such an assumption. Thus, we have not proved that the Continuum Hypothesis is false. But that was not what we (or Freiling) were trying to do. Rather, we were looking for some plausible evidence to support an axiomatization of set theory that would resolve the Continuum Hypothesis.
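For the record, the formal counterpart of the dartboard intuition is what Freiling called an axiom of symmetry. Written in LaTeX, with [\mathbb{R}]^{\le\aleph_0} denoting the collection of countable sets of reals (think of f as assigning to each real the countable set of reals that precede it in the well-ordering), it reads as follows, and Freiling’s 1986 paper shows it is equivalent, over the usual Zermelo-Fraenkel axioms with choice, to the negation of the Continuum Hypothesis:

    \[
      \mathrm{AX}:\quad
      \forall f : \mathbb{R} \to [\mathbb{R}]^{\le \aleph_0}\;\;
      \exists x, y \in \mathbb{R}\;\;
      \bigl( x \notin f(y) \ \wedge\ y \notin f(x) \bigr),
    \]
    \[
      \mathrm{AX} \;\Longleftrightarrow\; \neg\,\mathrm{CH}.
    \]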
If you view set theory as an axiomatic framework for constructing sets, and take a conservative approach of constructing only the sets that the rest of mathematics absolutely must have, you end up with the Axiom of Constructibility, and then the Continuum Hypothesis is true. But if you conceive of mathematics as abstracting from the world of our everyday experience, and if you take the view that Freiling’s dartboard thought experiment has an intuitive naturalness and “ought to be right”, then your set theory, whatever its axioms may be, should imply that the Continuum Hypothesis is false. (Or at the very least, your axioms should not imply that the Continuum Hypothesis is true.)
What’s my own current view? Well, I still think a good argument can be put forward for the Axiom of Constructibility. But I also find the Freiling thought experiment compelling. To my mind, on an intuitive level, it does show that the Continuum Hypothesis must be false. When a mathematician finds himself supporting two contradictory propositions, he’s obviously been a department chair or a dean for too long and it’s time to give up and move on. And do you know, I just did. Please note the change of address below.
Devlin’s Angle is updated at the beginning of each month.
Keith Devlin ( devlin@csli.stanford.edu) is the new Executive Director of the Center for the Study of Language and Information at Stanford University and “The Math Guy” on NPR’s Weekend Edition. His latest book is The Math Gene: How Mathematical Thinking Evolved and Why Numbers Are Like Gossip, published by Basic Books.
JULY-AUGUST 2001
Witten at 50
Edward Witten—the man who has often been described as the Isaac Newton of the modern age—celebrates his fiftieth birthday on Sunday 26 August this year. I am sure that many mathematicians will want to join with me in wishing him the very best as he passes the half-century mark. Which, on the face of it, is a little odd, since Witten is not a mathematician but a physicist, a Professor in the School of Natural Sciences at the Institute for Advanced Study in Princeton, New Jersey. Why would mathematicians want to celebrate the birthday of a physicist—other than the obvious fact that it’s always good to have an excuse for a party? Can Witten really be compared to the great Isaac Newton, the seventeenth century genius who brought us not only powerful scientific theories of light and of gravity but calculus as well?
I believe the comparison is entirely apt, in several ways. First, though, a few words about Witten the man. He was born 26 August 1951 in Baltimore, Maryland. He studied at Brandeis University, where he received his BA in 1971. From Brandeis he went to Princeton, where he received an MA in 1974 and a Ph.D. in 1976. He was a postdoctoral fellow at Harvard for 1976-77, then a Junior Fellow there for 1977-80. In 1980 he was appointed a Professor at Princeton University, from where he moved across town to the Institute for Advanced Study in 1987.
Now to the comparison of Witten with Newton. Although there is no doubt that Witten is a physicist, like Newton he is a powerful mathematician. Very much in the tradition of Newton, Witten’s mathematics arises out of physics—he does his mathematics in order to advance his—and hence our—understanding of the universe. The British mathematician Sir Michael Atiyah has written of Witten that:
“… his ability to interpret physical ideas in mathematical form is quite unique.” [Michael Atiyah: On the work of Edward Witten, Proceedings of the International Congress of Mathematicians, Kyoto, 1990 (Tokyo, 1991), pp.31-35.]
Atiyah’s words were written in 1990, on the occasion when the international mathematical community gave Witten its most prestigious award, a Fields Medal, often described as the mathematicians’ equivalent of a Nobel Prize. In addition to the Fields Medal, physicist Witten has been invited to give two major addresses at national meetings of the American Mathematical Society: he was AMS Colloquium Lecturer in 1987 and three years ago, in 1998, he gave the Gibbs Lecture.
Like Newton’s, the physics Witten does is deep, fundamental, and center stage. Both men set out to answer ultimate questions about the nature of the world we live in. In Witten’s case, he works in the hot research areas of supersymmetry and string theory.
Just as questions in physics led Newton to develop some far reaching new mathematics that found many applications, often well outside of physics, so too Witten’s mathematics has been of a depth and originality (and incidentally of a difficulty equaled by few mathematicians) that will surely find other applications. Witten has used infinite dimensional manifolds to study supersymmetric quantum mechanics. Among the results for which he was awarded a Fields Medal was his proof of the classic Morse inequalities, relating critical points to homology.
Witten’s work in manifold theory brings up yet another comparison with Newton. Neither of them were concerned with finding mathematically correct proofs to support their arguments. Relying on their intuitions and their immense ability to juggle complicated mathematical formulas, they both left mathematicians reeling in their wake. It took over two hundred years for mathematicians to develop a mathematically sound theory to explain and support Newton’s method of the infinitesimal calculus. Similarly, it might take decades—maybe even centuries—before mathematicians can catch up with Witten. Commenting on this state of affairs in a presentation at a Millennium Meeting at the University of California at Los Angeles, in August 2000, Witten said:
“Understanding natural science has been, historically, an important source of mathematical inspiration. So it is frustrating that, at the outset of the new century, the main framework used by physicists for describing the laws of nature is not accessible mathematically.” [Edward Witten: Physical Law and the Quest for Mathematical Understanding.]
Atiyah wrote further of Witten’s work:
“… he has made a profound impact on contemporary mathematics. In his hands physics is once again providing a rich source of inspiration and insight in mathematics. Of course physical insight does not always lead to immediately rigorous mathematical proofs but it frequently leads one in the right direction, and technically correct proofs can then hopefully be found. This is the case with Witten’s work. So far the insight has never let him down and rigorous proofs, of the standard we mathematicians rightly expect, have always been forthcoming.” [Michael Atiyah: On the work of Edward Witten.]
This feature of Witten’s work not only tells us that Witten is a remarkable physicist, it also says something about mathematics. For all that mathematics is a product of the human mind, the very logical rules that have to be satisfied for mathematical creations to count as mathematics mean that there is—potentially—a shortcut path to mathematical truth that avoids the long and painful “official” route of logically correct, step-by-step deductions we call proofs. For most mathematicians, myself included, the only way to convince ourselves that something is true in mathematics is to find a proof. A very small number of individuals, however, seem to be blessed with such deep and powerful insight that, guided by little else besides their intuitions and a sense of “what is right”, they can cut through the logical thickets and discover the truth directly—whatever that means. Newton did it with calculus. The great Swiss mathematician Leonhard Euler did much the same thing with infinite sums in the eighteenth century. Arguably the Indian mathematician Srinivasa Ramanujan did something similar with the arithmetical patterns of numbers he discovered. And now Witten is doing the same with infinite-dimensional manifolds. On several occasions, Witten has made a discovery—a physicist’s discovery since it is technically not a mathematical discovery—that mathematicians subsequently showed to be “correct” by the traditional means of formulating a rigorous proof. Given the complexity of the “insights” that Newton, Euler, Ramanujan, and Witten have made—and the difficulty of the subsequent proofs—this cannot be a case of making lucky guesses. So what is going on?
As a mathematician, when I work on a mathematical problem, my sense is very much one of discovering facts about some pre-existing (abstract) world “out there”. If I try hard enough, and am lucky, I’ll discover the right path that leads me to my goal, and I’ll solve the problem. If I fail, sooner or later someone else will come along and find the path. Very likely I’ll then see that it’s the very path I was trying to find!
Nevertheless, for all that mathematical research feels like discovery, I firmly believe that mathematics does not exist outside of humans. It is something we, as a species, invent. (I don’t see what else it could be.)
But mathematical invention is not like invention in music or literature. If Beethoven had not lived, we would never have heard the piece we call his Ninth Symphony. If Shakespeare had not lived, we’d never have seen Hamlet. But if, say, Newton had not lived, the world would have gotten calculus sooner or later, and it would have been exactly the same! Likewise, if Witten had not lived we’d have obtained his results eventually. (Although the wait would almost certainly have been much longer for Witten’s work than it was for calculus.)
Since mathematical creativity is not tied exclusively to one particular individual, the patterns of mathematics must tell us something very deep and profound about the human brain and the way we interact with our environment. If you want, you can “reify” (objectify) the results of that interaction and think of it as an “outside (Platonic) world”. But to my mind that’s just playing with words. A much more honest way to think of it, I suggest, is that our mathematical creations arise from the world we live in and are constrained by our experience of that world. This means that, very occasionally, a person such as Ed Witten can come along whose understanding of the physical world is so good that he can bypass the normal methods of mathematical discovery and “see”, directly, the results that the rest of us can, at best, stumble upon.
The result is that mathematicians are able to get occasional glimpses of the mathematics of tomorrow—or possibly even the next century. For me, and I’m sure for many fellow mathematicians, that alone is good reason to say:
Happy fiftieth birthday, Ed Witten.
SEPTEMBER 2001
Untying the Gordian Knot
One day, according to ancient Greek legend, a poor peasant called Gordius arrived with his wife in a public square of Phrygia in an ox cart. As chance would have it, so the legend continues, an oracle had previously informed the populace that their future king would come into town riding in a wagon. Seeing Gordius, therefore, the people made him king. In gratitude, Gordius dedicated his ox cart to Zeus, tying it up with a highly intricate knot—the Gordian knot. Another oracle—or maybe the same one, the legend is not specific, but oracles are plentiful in Greek mythology—foretold that the person who untied the knot would rule all of Asia.
The problem of untying the Gordian knot resisted all attempted solutions until the year 333 B.C., when Alexander the Great—not known for his lack of ambition when it came to ruling Asia—cut through it with a sword. “Cheat!” you might cry. And although you might have been unwise to have pointed it out in Alexander’s presence, his method did seem to go against the spirit of the problem. Surely, the challenge was to solve the puzzle solely by manipulating the knot, not by cutting it.
But wait a minute. Alexander was no dummy. As a former student of Aristotle, he would have been no stranger to logical puzzles. After all, the ancient Greek problem of squaring the circle is easy to solve if you do not restrict yourself to the stipulated tools of ruler and compass. Today we know that the circle-squaring problem as posed by the Greeks is indeed unsolvable. Using ruler and compass you cannot construct a square with the same area as a given circle. Perhaps Alexander was able to see that the Gordian knot could not be untied simply by manipulating the rope.
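For the record, here is the modern argument in outline. Starting from a unit length, ruler-and-compass constructions can only ever produce lengths that are algebraic numbers (in fact, numbers reachable from the rationals by repeated square roots). Squaring a circle of radius r means constructing a square of the same area, and hence a segment of length s with

$$ s^2 = \pi r^2, \qquad \text{i.e.} \qquad s = r\sqrt{\pi}. $$

Since Lindemann proved in 1882 that π is transcendental, √π is not algebraic, and so no such construction can exist.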
If Alexander did indeed see that, then the knot surely could not have had any free ends. The two ends of the rope must have been spliced together. This, of course, would have made it a knot in the technical sense of modern mathematicians.
Continuing under the assumption that many fine minds had been stumped by the Gordian knot problem, but no one had claimed the puzzle was unsolvable, we may conclude that in principle the knot could be untied, and everyone who looked closely enough could see this fact. In modern topological parlance, the loop of rope must have been in the form of an unknot. Thus, the Gordian knot was most likely constructed by first splicing the two ends of a length of rope to form a loop, and then “tying” the loop up (i.e. wrapping it around itself in some way) to disguise the fact that it was not really knotted. And everyone was stumped until Alexander came along and figured out that on this occasion the sword was mightier than the pen. (Of course, he did have a penchant for coming to that conclusion.)
Now, when modern topologists study knots, they assume the knots are constructed out of perfectly flexible, perfectly stretchable, infinitely thin string. Under those assumptions, if the Gordian knot were really an unknotted loop, then it would have been possible to untie it, i.e., to manipulate it so it was in the form of a simple loop that does not cross itself.
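To make the terminology precise, a knot in the mathematician’s sense is a closed loop in space, formally an embedding

$$ K : S^1 \longrightarrow \mathbb{R}^3, $$

and two knots count as the same knot if a continuous deformation of the whole of space (an ambient isotopy) carries one onto the other. The unknot is the knot that is equivalent, in this sense, to a flat round circle.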
Thus, the only thing that could make it absolutely necessary to resort to a sword would be that the physical thickness of the actual rope prevented the necessary manipulations from being carried out. And that could certainly have been arranged: the rope could have been thoroughly wetted before tying, then dried rapidly in the sun afterward so that it shrank tight.
This is the explanation proposed recently by physicist Piotr Pieranski of the Poznan University of Technology in Poland and the biologist Andrzej Stasiak of the University of Lausanne in Switzerland. Physicists are interested in knots because the latest theories of matter postulate that everything is made up of tightly coiled (and maybe knotted) loops of space-time, and biologists are interested in knots because the long, string-like molecules of DNA coil themselves up tightly to fit inside the cell.
Pieranski and Stasiak have been studying knots that can be constructed from real, physical material that has, in particular, a fixed diameter. This restriction makes the subject very different from the knot theory traditionally studied by mathematicians. Pieranski has developed a computer program, called SONO (Shrink-On-No-Overlaps), to simulate the manipulation of such knots.
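Pieranski’s actual program is far more sophisticated, but its name does describe the basic loop: repeatedly shorten the rope, and whenever two parts of the tube come closer than the rope’s diameter, push them apart. The toy sketch below (my own illustration in Python, not Pieranski’s code; the bead-chain model, the centroid shrink step, and the step sizes are all assumptions made purely for the example) shows the flavor of such a simulation.

import numpy as np

def shrink_on_no_overlaps(points, rope_diameter, shrink=0.995, iters=500):
    """Toy 'shrink, then resolve overlaps' relaxation of a closed bead chain.

    points: (N, 3) array of bead centres forming a closed polygonal loop.
    rope_diameter: minimum allowed distance between non-neighbouring beads.
    """
    pts = points.astype(float).copy()
    n = len(pts)
    for _ in range(iters):
        # Shrink step: pull every bead slightly towards the centroid,
        # which shortens the total length of the loop.
        centroid = pts.mean(axis=0)
        pts = centroid + shrink * (pts - centroid)

        # No-overlap step: push apart any pair of non-adjacent beads
        # that have come closer than the rope diameter.
        for i in range(n):
            for j in range(i + 2, n):
                if i == 0 and j == n - 1:      # neighbours on the closed loop
                    continue
                d = pts[j] - pts[i]
                dist = np.linalg.norm(d)
                if 0 < dist < rope_diameter:
                    push = 0.5 * (rope_diameter - dist) * d / dist
                    pts[i] -= push
                    pts[j] += push
    return pts

# Example: a round "unknot" of 40 beads, tightened with rope of diameter 1.
theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)
loop = np.stack([10 * np.cos(theta), 10 * np.sin(theta), np.zeros_like(theta)], axis=1)
tight = shrink_on_no_overlaps(loop, rope_diameter=1.0)
length = np.linalg.norm(np.roll(tight, -1, axis=0) - tight, axis=1).sum()
print("rope length after tightening:", round(float(length), 2))

A serious simulation of this kind also has to keep the beads evenly spaced along the rope and to detect when no further shortening is possible; the point of the sketch is only that “shrink” and “no overlaps” are the two competing forces.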
Using SONO itself, Pieranski showed that most ways of trying to construct a Gordian knot fail: the program eventually finds a way to unravel them. But recently he discovered a knot that worked. SONO—which had not been programmed to make use of an algorithmic sword—was unable to unravel it. Maybe, just maybe, he had discovered the actual structure of the Gordian knot! Here it is:
To construct Pieranski’s knot, you fold a circular loop of rope and tie two multiple overhand knots in it. You then pass the end loops over the entangled domains and, finally, shrink the rope until it is tight. With this structure, there is not enough rope to allow the manipulations necessary to unravel it.
NOTE: Further details about Pieranski’s Gordian Knot construction will be given in the paper Gordian Unknots, by Pieranski, Stasiak, and Sylwester Przybyl, currently in preparation.
OCTOBER 2001
The music of the primes
When I was a teenager just starting to learn some real mathematics, one of the results that most amazed and intrigued me was Leonhard Euler’s theorem expressing the zeta function as an infinite product over the primes. Who could not look at this theorem and wonder what deep and profound mathematics lay beneath the equation Euler discovered?
Unfortunately, overshadowed by the complex version of the zeta function subsequently developed and used by Bernhard Riemann, Euler’s original real zeta function seems to have dropped out of sight in popular expositions of mathematics of late. With the hope of similarly inspiring another generation of future mathematicians, this month’s column tries to rekindle interest in Euler’s original and spectacular eighteenth-century theorem.
To set the scene: Euler’s theorem addresses one of the oldest questions of mathematics: What is the pattern of the prime numbers? Euclid devoted many pages of his mammoth work Elements to a treatment of prime numbers, including his famous result that the primes are infinite in number. Besides providing a proof of this fact completely different from Euclid’s, Euler’s zeta function theorem marked the beginning of the enormously important area of modern mathematics called analytic number theory, where methods of analysis are used to obtain results about whole numbers.
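Here, for reference, is the theorem itself. For any real number s > 1,

$$ \zeta(s) \;=\; \sum_{n=1}^{\infty} \frac{1}{n^{s}} \;=\; \prod_{p\ \mathrm{prime}} \frac{1}{1 - p^{-s}}, $$

the product being taken over all primes p. To see how this yields the infinitude of the primes: if there were only finitely many primes, the right-hand side would remain bounded as s decreases to 1, whereas the left-hand side grows without bound (its partial sums dominate those of the harmonic series). The contradiction shows that the primes cannot be finite in number.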
Because of the need to include quite a lot of mathematical formulas, I have prepared my account as a PDF file, which any modern web browser will open automatically using Acrobat Reader. Simply click on the link below to find out what Euler did and how he did it.
How Euler discovered the zeta function (PDF file)
NOVEMBER 2001
The math of stuff
What are we, and everything around us, made of? Since the dawn of modern mathematics, humans have tried to use its methods to help answer this question.
The ancients believed that the world was made up of four basic “elements”: earth, water, air, and fire. Around 350 BC, the ancient Greek philosopher Plato, in his book Timaeus, theorized that these four elements were all aggregates of tiny solids (in modern parlance, atoms). He went on to argue that, as the basic building blocks of all matter, these four elements must have perfect geometric form, namely the shapes of the five “regular solids” that so enamoured the Greek mathematicians — the perfectly symmetrical tetrahedron, cube, octahedron, icosahedron, and dodecahedron.
As the lightest and sharpest of the elements, said Plato, fire must be a tetrahedron. Being the most stable of the elements, earth must consist of cubes. Water, because it is the most mobile and fluid, has to be an icosahedron, the regular solid that rolls most easily. As to air, he observed, somewhat mysteriously, that “… air is to water as water is to earth,” and concluded, even more mysteriously, that air must therefore be an octahedron. Finally, so as not to leave out the one remaining regular solid, he proposed that the dodecahedron represented the shape of the entire universe.
To modern eyes, it is hard to believe that an intellectual giant such as Plato could have proposed such a whimsical theory. What on earth led him to believe that the geometer’s regular solids could possibly underlie the structure of the universe? In fact, although the particulars of his theory can easily be dismissed as whimsy, the philosophical assumptions behind it are exactly the same as those that drive present-day science: namely, that the universe is constructed in an ordered fashion that can be understood using mathematics. To Plato, as to many others, God, as Creator of the universe, must surely have been a geometer. Or, as the great Italian scientist Galileo Galilei wrote in the seventeenth century, “In order to understand the universe, you must know the language in which it is written. And that language is mathematics.”
Believing that the world was constructed according to mathematical principles, Plato simply took the most impressive, most perfect piece of mathematics known at the time. That was the proof (found in Euclid’s classic mathematics text Elements) that there are exactly five regular solids—solid objects for each of which all the faces are identical, equal-angled polygons that meet at equal angles.
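It is worth recalling why the count stops at five. Suppose the faces are regular p-gons and q of them meet at each vertex (so p, q ≥ 3). For the solid to close up, the face angles at each vertex must total less than 360 degrees:

$$ q \cdot \frac{(p-2)\,180^\circ}{p} \;<\; 360^\circ, \qquad \text{which simplifies to} \qquad (p-2)(q-2) \;<\; 4. $$

The only solutions with p, q ≥ 3 are (p, q) = (3,3), (4,3), (3,4), (5,3), and (3,5): the tetrahedron, the cube, the octahedron, the dodecahedron, and the icosahedron. Euclid’s Elements both establishes this bound and actually constructs all five.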
As recently as the seventeenth century, the famous astronomer Johannes Kepler, who discovered the mathematical formula that describes the motion of the planets around the Sun in our Solar System, was likewise seduced by the mathematical elegance of the regular solids. There were six known planets in Kepler’s time (Mercury, Venus, Earth, Mars, Jupiter, and Saturn), and a few years previously Copernicus had proposed that they all moved in circular orbits with the Sun at the center. (Kepler would later suggest, correctly, that the orbits were not circles but ellipses.) Starting from Copernicus’s suggestion, Kepler developed a theory to explain why there were exactly six planets, and why they were at the particular distances from the Sun that he and other astronomers had recently measured. There were precisely six planets, he reasoned, because between each adjacent pair of orbits (think of the orbit as a circle going round a spherical ball in space) it must be possible to fit, snugly, an imaginary regular solid, with each solid used exactly once. After some experimentation, he managed to find an arrangement of nested regular solids and spheres that worked: the outer sphere (on which Saturn moves) contains an inscribed cube, and on that cube is inscribed in turn the sphere for the orbit of Jupiter. In that sphere is inscribed a tetrahedron; and Mars moves on that figure’s inscribed sphere. The dodecahedron inscribed in the Mars-orbit sphere has the Earth-orbit sphere as its inscribed sphere, in which the inscribed icosahedron has the Venus-orbit sphere inscribed. Finally, the octahedron inscribed in the Venus-orbit sphere has itself an inscribed sphere, on which the orbit of Mercury lies.
Of course, Kepler’s theory was completely wrong. For one thing, the nested spheres and the planetary orbits did not fit together particularly accurately. (Having himself been largely responsible for producing accurate data on the planetary orbits, Kepler was certainly aware of the discrepancies, and tried to adjust his model by taking the spheres to be of different thicknesses, though without giving any reason why the thicknesses should differ.) But besides, as we now know, there are not six planets but eight (nine if you want to count Pluto, these days officially classified as not being a planet).
For all that Plato’s theory of matter and Kepler’s theory of the Solar System were incorrect, however, let me repeat my earlier point: the same philosophy underlies all of present-day scientific theorizing about the universe, namely that the universe operates according to mathematical laws. Or, as the contemporary physicist Stephen Hawking has remarked, in developing mathematical theories about the nature and origin of the universe, we are seeking to know “the mind of God.”
Until the early part of the twentieth century, our best modern theory of matter was the atomic theory, which viewed everything as being made up of atoms, miniature “solar systems” in which a number of electrons (the planets) orbited a central nucleus (the sun). The atomic theory was supported by experimental evidence and the mathematical details had been worked out with considerable accuracy (using some pretty sophisticated mathematics). Clearly, to physicists of the time, God must indeed have seemed to be a geometer.
But then, in the early 1900s, scientists observed phenomena that did not fit the neat geometric picture of the “solar system atom,” eventually forcing them to accept that atomic theory had reached its limits and would have to be abandoned (or drastically modified). What they found to replace it was a far more complicated mathematical explanation known as quantum theory. At the heart of quantum theory was the assumption that there is a built-in uncertainty about matter. If you were to focus attention on a single particle, such as an electron, you would find that it was not in any one fixed and definite place at any moment, but was constantly flitting around in an unpredictable fashion that could only be described mathematically using probability theory. Although quantum theory is nowadays widely accepted, in the early days its dependence on probability theory led Albert Einstein to dismiss it with the remark that “God does not play dice with the universe.”
Even if you accept it, however, quantum theory forces you to rely on purely mathematical descriptions of reality. For instance, the human mind simply cannot grasp, on an intuitive level, quantum theoretic entities that behave both like particles and waves. As the late Richard Feynman, one of the leading pioneers in the development of modern quantum theory, once remarked: “There was a time when the newspapers said that only twelve men understood the theory of relativity. I do not believe there ever was such a time. … After people read Einstein’s paper a lot of people understood the theory of relativity in one way or other, certainly more than twelve. On the other hand I think I can safely say that nobody understands quantum mechanics.” [Richard Feynman, The Character of Physical Law, Cambridge, MA: MIT Press, 1965, p.129.]
When it comes to quantum mechanics, physicists have to abandon their intuitions and rely on the mathematics to tell them what’s going on. Mathematics began as a system to help us to understand the world and to add precision to our understanding. With the arrival of quantum theory in the twentieth century, mathematics became our only way to understand. Today, the quantum theoretic lens has been focused on matter at an ever finer scale than electrons, to reveal that everything in the world consists ultimately of tiny folds and ripples in space-time (the study of which requires still more new mathematics). This development has led the writer and broadcaster Margaret Wertheim to quip, “These days, God isn’t a geometer, he does origami.”
What will come next, if anything, is hard to say. The original urge is still there: to understand what the stuff is that we and our world are made of. Moreover, we still have the belief that the answer will be found using mathematics. And that means that we can expect to see the development of new mathematics to support our quest, as we continue to push against the limits of our knowledge.
DECEMBER 2001
A Beautiful Mind
This coming January, the movie A Beautiful Mind hits the nation’s screens. Inspired by the life of Princeton mathematician John Forbes Nash, winner of the 1994 Nobel Prize for Economics, this will be the latest in a small but growing list of films in which the central character is a mathematician, among them Straw Dogs (1971, starring Dustin Hoffman as the theoretical physicist), It’s My Turn (1980, starring Jill Clayburgh as the algebraist), Good Will Hunting (1997, in which Matt Damon was the combinatorist), and Pi (1998, with Sean Gullette as the tortured discrete number theorist).
As always, when someone attempts to portray mathematics and/or mathematicians for a general audience, I expect the reactions from mathematicians will fall into two camps: those who loudly applaud, and those who complain about all the inaccuracies, the “misrepresentations”, the trivializations, and the “sensationalism”. I get a fair amount of such criticism when I try to explain mathematical ideas and results to nonmathematicians, either in print or on radio and television. How much worse, then, is it likely to be for director Ron Howard, writer Akiva Goldsman, and star Russell Crowe (whose portrayal of Nash draws significantly on his own interpretation both of a mathematician and of Nash himself)? For as all involved in the film have made very clear, it is not a film biography of Nash. Rather, inspired by author Sylvia Nasar’s superb book of the same name – which was a biography, and an extremely well researched one at that – the movie folk set out to create a fictional account of an individual whose life had many of the elements of the life of the real John Nash.
Here is how director Howard puts it. (You can find this and subsequent quotes on the movie’s website.)
“[The movie] captures the spirit of the journey, and I think that it is authentic in what it conveys to a large extent. Certain aspects of it are dealt with symbolically. How do you understand what goes on inside a person’s mind when under stress, when mentally ill, when operating at the highest levels of achievement. The script tries to offer some insight, but it’s impossible to be entirely accurate. Most of what is presented in the script is a kind of synthesis of many aspects of Nash’s life. I don’t think it’s outrageous.
We are using [Nash] as a figure, as a kind of symbol. We are using a lot of pivotal moments in his life and his life with Alicia as the sort of bedrock for this movie … even though we are taking licence, we are trying to deal with it in a fairly authentic way so that an audience is transported and can begin to understand. But they can’t begin to understand completely; they never could – no one could.”
According to writer Goldsman:
“[W]hat we’re doing here is not a literal representation of the life of John Nash, it’s a story inspired by the life of John Nash, so what we hope to do is evoke a kind of emotional journey that is reminiscent of the emotional journey that John and Alicia went through. In that sense, it’s true – we hope – but it’s not factual. For me, it was taking the architecture of his life, the high points, the low points, and then using that as a kind of wire frame, draping invented scenes, invented interactions in order to tell a truthful but somewhat more metaphoric story.
I think that to vet this by exposing it to historical accuracy is absurd. This movie is not about the literal moment-to-moment life of John Nash. It’s an invention … What we did is we used from his life what served the story we are trying to tell, which is why we are saying this is not a biopic. It could never bear up to that kind of scrutiny, it never wanted to, it never pretended to be a biopic. It always wanted to be a human journey, based on someone, inspired by someone’s life.”
My response to all those mathematicians who will complain about the inaccuracies and “over-dramatizations” of the movie is the same as it is to those who criticize my own attempts to communicate to a general audience (or at least would be if I bothered to respond): You’re not the intended audience! (One reader writing about me on Amazon.com recently said he was “insulted” by the way I tried to explain everything in simple terms.) In fact, I never cease to be amazed that people who have sufficient knowledge and training in mathematics to see inaccuracies in popular portrayals and expositions approach them in the expectation of learning something about the field. Put bluntly, anyone who wants mathematical accuracy should read mathematical journals, research monographs, or textbooks. Popular expositions are an entirely different genre, written for a very different audience, with totally different goals, and they succeed to the extent that they meet the demands of that genre. Likewise, movies are something else again.
One of the most derided scenes in Good Will Hunting is where the hero starts to write equations on a bathroom mirror. Conveniently forgetting that the great Irish mathematician William Rowan Hamilton scratched the key identities for the quaternions on a stone bridge—the only writing surface available to him at the time inspiration struck him—the critics scoffed that no mathematician would ever do such a thing. Those critics will surely have another opportunity to trash Hollywood’s romanticism with A Beautiful Mind, which has an almost identical scene, in which Crowe writes mathematics on a window pane. (You can see part of this scene on the movie trailer on the web.)
But think for a minute. Whether or not any real mathematician has ever done such a thing, how would you convey to an audience that knows diddly squat about mathematics or mathematicians, that some people live in a world of mathematics, are constantly engaged in mathematical thoughts, even when doing ordinary everyday things, and see the world through mathematical eyes—a mathematical filter on their environment if you will? Depicting a mathematician scribbling formulas on a sheet of paper might be more accurate (and you’ll see Crowe doing that in A Beautiful Mind, just as we saw Damon doing it in Good Will Hunting) but it certainly doesn’t convey the image of a person passionately involved in mathematics, as does seeing someone write those formulas in steam on a mirror or in wax on a window, nor is it as cinematographically dramatic. In fact, that kind of image is so powerful that we made liberal use of a similar idea in the PBS television documentary series Life by the Numbers, which was intended to be entirely factual. We superimposed numbers and symbols over head-and-upper-body shots of mathematicians to simultaneously convey the idea of an individual living in a symbolic world and the idea of symbolic reasoning being carried out inside that individual’s head.
The point surely about a movie is that it is, after all, a movie. It’s not a math lesson on celluloid. Nor, in the case of A Beautiful Mind, is it intended to be a history lesson. It’s a story, a piece of fiction. Like many great stories, it’s inspired by real events. According to the advance publicity (and I am writing this without having seen the film) it even remains true to many specific details of the life of John Nash. Moreover, the filmmakers went to great trouble to ensure that the mathematics in the film—and I’m told there’s a lot—is mathematically realistic, by hiring a professional mathematician (Dave Bayer of Columbia University) as a consultant. But as the quotations from the director and the writer above make clear, their primary aim was to make a darned good human-drama film—a piece of quality entertainment.
The issue for those of us in the mathematics business is whether, on balance, we think it’s a good thing for mathematics for films to be made in which the main character is a mathematician, seen to “do some mathematics,” even if melodramatically, on the screen. Personally, I have no doubt whatsoever that only good can come of it for our profession, not least because one good movie can do more to inspire young children to view mathematics in a positive, indeed exciting, light than any number of federal or state educational initiatives. Or come to that, more than any number of popular expositions of mathematics written by MAA columnists. (Okay, so if I had the talents of Ron Howard or Russell Crowe, maybe I could do a better job.)
If you disagree (and I know some do, although my sense is that the vast majority of mathematicians these days think similarly to me), come along and say so at the special session on How the World Sees Mathematicians that the AMS is organizing at the Joint Mathematics Meetings in San Diego at 4:00PM on Sunday January 6. In addition to viewing some clips from A Beautiful Mind (and some other mathematician movies) you’ll also be able to listen to Dave Bayer talk about his experiences working on the movie as math consultant, and to award-winning science writer K.C. Cole (Los Angeles Times) let you in on some of her secrets on how to make a mathematician seem appealing to a general audience. I’ll be moderating, and I’ll also present some disturbing research findings (not mine) that show just how far we have to go to improve our public image.
Click here to see the trailer to A Beautiful Mind.
Devlin’s Angle is updated at the beginning of each month.
Mathematician Keith Devlin ( devlin@csli.stanford.edu) is the Executive Director of the Center for the Study of Language and Information at Stanford University and “The Math Guy” on NPR’s Weekend Edition. His latest book is The Math Gene: How Mathematical Thinking Evolved and Why Numbers Are Like Gossip, published by Basic Books.
