Archive Page 3

al-Khwārizmī

The fourth in a series. See the November 20 entry, “What is algebra?” for the first, the December 13 entry “When did algebra begin?” for the second, and the December 19 entry “The golden age of Arabic mathematics” for the third.

Abū ʿAbdallāh Muḥammad ibn Mūsā al-Khwārizmī (c.780 – c.850 CE) was one of the most significant figures in the development of modern algebra. Yet we know virtually nothing about his life.

There is even some confusion in the literature as to his full name. Most present-day sources give it as Abū ʿAbdallāh Muḥammad ibn Mūsā al-Khwārizmī, which can be translated as “Father of ʿAbdallāh, Mohammed, son of Moses, native of Khwārizm”. References to Abū Jaʿfar Muḥammad ibn Mūsā al-Khwārizmī are erroneous in this context; that was a different person.

Al-Khwārizmī wrote several books, two of which had a huge impact on the growth of mathematics: one focused on arithmetic, the other on algebra. He aimed both at a much wider audience than his fellow scholars. As with Euclid and his Elements, it is not clear how many of the methods al-Khwārizmī described in his books he developed himself, as opposed to gathering together the work of others, though the later author Abū Kāmil stated that his famous predecessor did develop some of the methods he presented.

The first of al-Khwārizmī’s two most significant books, written around 825, described Hindu-Arabic arithmetic. Its original title is not known, and it may not have had one. No original Arabic manuscripts exist, and the work survives only through a Latin translation, which was most likely made in the 12th century by Adelard of Bath. The original Latin translation did not have a title either, but the Italian bibliophile Baldassare Boncompagni gave it one when he published a printed edition in the 19th century: Algoritmi de numero Indorum (“al-Khwārizmī on the Hindu Art of Reckoning”). The Latinized version of al-Khwārizmī’s name in this title (Algoritmi) gave rise to our modern word “algorithm” for a set of rules specifying a calculation. In English, the work is sometimes referenced as On the Calculation with Hindu Numerals, but it is most commonly referred to simply as “al-Khwārizmī’s Arithmetic.”

Al-Khwārizmī’s second pivotal book, completed around 830, was al-Kitab al-mukhtasar fi hisab al-jabr wa’l-muqābala. The phrase al-jabr wa’l-muqābala translates literally as “restoration and confrontation,” or more loosely as “reducing (or solving) an equation.” The title of the book thus translates literally as “The Abridged Book on Calculation by Restoration and Confrontation”, though a more colloquial rendering would be “The Abridged Book on Algebra”. It is an early treatise on what we now call “algebra,” that name coming from the term al-jabr in the title. Scholars today usually refer to this book simply as “Al-Khwārizmī’s Algebra.” Seven Arabic manuscripts are known, not all of them complete. One complete Arabic copy is kept at Oxford, and a Latin translation is kept in Cambridge. Two copies are in Afghanistan.

In Algebra, al-Khwārizmī described (but did not himself develop) a systematic approach to solving linear and quadratic equations, providing a comprehensive account of solving polynomial equations up to the second degree.
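To see in modern terms what that systematic approach amounts to, here is a minimal Python sketch (my illustration, not al-Khwārizmī’s notation: he worked entirely in words, split the problems into separate cases, and admitted only positive coefficients and positive roots) of his recipe for the case “squares and roots equal numbers”, that is, x² + bx = c:

    import math

    def squares_and_roots_equal_numbers(b, c):
        """Follow the verbal recipe for x^2 + b*x = c: halve the number
        of roots, square the half, add the number, take the square root
        of the sum, then subtract the half."""
        half = b / 2
        return math.sqrt(half * half + c) - half

    # The worked example from the book: "a square and ten roots
    # equal thirty-nine units"
    print(squares_and_roots_equal_numbers(10, 39))  # prints 3.0

The example x² + 10x = 39, with answer 3, is the one al-Khwārizmī himself works through, and the halve-square-add-extract recipe is precisely what we now call completing the square.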

The Algebra was translated into Latin by Robert of Chester in 1145, by Gherardo of Cremona around 1170, and by Guglielmo de Lunis around 1250. In 1831, Frederic Rosen published an English-language translation. In his preface, Rosen wrote:

ABU ABDALLAH MOHAMMED BEN MUSA, of Khowarezm, who it appears, from his preface, wrote this Treatise at the command of the Caliph AL MAMUN, was for a long time considered as the original inventor of Algebra.        …   …   …      From the manner in which our author [al-Khwārizmī], in his preface, speaks of the task he had undertaken, we cannot infer that he claimed to be the inventor. He says that the Caliph AL MAMUN encouraged him to write a popular work on Algebra: an expression which would seem to imply that other treatises were then already extant.

In fact, algebra (as al-Khwārizmī described it in his book) was being transmitted orally and used by people in their jobs before he or anyone else started to write it down. Several authors besides al-Khwārizmī wrote books on algebra during the ninth century, all bearing the virtually identical title Kitāb al-ğabr wa-l-muqābala. Among them were Abū Hanīfa al-Dīnawarī, Abū Kāmil Shujā ibn Aslam, Abū Muḥammad al-ʿAdlī, Abū Yūsuf al-Miṣṣīṣī, ʿAbd al-Hamīd ibn Turk, Sind ibn ʿAlī, Sahl ibn Bišr, and Šarafaddīn al-Tūsī.

In addition to his two books on mathematics, al-Khwārizmī wrote a revised and completed version of Ptolemy’s Geography, consisting of a general introduction followed by a list of 2,402 coordinates of cities and other geographical features. Titled Kitāb Ṣūrat al-Arḍ (“Book on the appearance of the Earth” or “The image of the Earth”), it was finished in 833. There is only one surviving Arabic copy, which is kept at the Strasbourg University Library. A Latin translation is kept at the Biblioteca Nacional de España in Madrid.

* * *

COMING UP NEXT: Al-Khwārizmī’s answer to that perennial student question, “What is algebra good for?” Plus a look at the contents of his seminal book, including an explanation of what exactly was being “restored” in the process for which al-Khwārizmī’s Arabic term was al-jabr.

* * *

Al-Khwārizmī on National Public Radio: I talked about al-Khwārizmī and the birth of algebra with host Scott Simon in my occasional “Math Guy” slot on NPR’s Weekend Edition on December 24.

The golden age of Arabic mathematics

The third in a series. See the November 20 entry, “What is algebra?” for the first and the December 13 entry “When did algebra begin?” for the second.

On 14 September 786, Harun al-Rashid became the fifth Caliph of the Abbasid dynasty. From his court in the capital city of Baghdad, Harun ruled over the vast Islamic empire, stretching from the Mediterranean to India. He brought culture into his court and encouraged the widespread pursuit of learning.

Al-Rashid had two sons: the elder, al-Amin, and the younger, al-Mamun. When Harun died in 809, an armed struggle broke out between the brothers; al-Mamun prevailed, and al-Amin was defeated and killed in 813. Al-Mamun then became Caliph and ruled the empire from Baghdad.

Al-Mamun continued the patronage of learning started by his father. With his encouragement, scholars of the time set about collecting and writing down in books all available practical knowledge, much of which had hitherto been transmitted only orally, including mathematics and folk astronomy. They translated into Arabic works of Greek and Indian science.

Many of the works collected and created may have been housed in a library called the House of Wisdom, though there is no evidence to support the commonly repeated claims that (1) it was massive, (2) it was founded by al-Mamun, or (3) translations were carried out there.

The tradition of learning, writing, and translation begun by al-Rashid and al-Mamun continued for the next quarter century, making the Islamic civilization the center of world knowledge. The aristocracy and other wealthy groups within Muslim society supported the appropriation of all practical and scientific knowledge they could acquire. They employed scholars to translate into Arabic works by Indian, Sasanian, and especially Greek authors, and mathematicians recorded on paper all that was known of arithmetic, algebra, and mensuration, which had hitherto been communicated orally by traders. In addition to the mathematical sciences (arithmetic, geometry, optics, mathematical astronomy, etc.), they also translated texts on geography, astrology, philosophy, medicine, agriculture, alchemy, and even falconry.

Greek works formed the bulk of the material translated. In addition, the more scientifically oriented mathematicians adopted the Greek tradition of definitions, axioms, and propositions with rigorous proof, and astronomers embraced the Greek idea of geometric models of planetary motion. Within this framework, Indian techniques were incorporated into this new Arabic/Islamic mathematics.

In addition to the translations, scholars wrote commentaries and criticisms of the ancient mathematics and made their own original contributions. For example, in the 9th century, Thābit ibn Qurra (d. 901) translated several works of Archimedes, wrote commentaries on Euclid’s Elements and Ptolemy’s Almagest, critiqued Euclid’s definition for the composition of ratios of numbers, and derived and proved new formulas for volumes of solids of revolution.

When the sources of Greek and other foreign texts were finally exhausted, scholars continued to produce new results in all branches of mathematics. For instance, in the 11th century, Ibn al-Haytham made major contributions to optics and geometry, and at the start of the 12th century, al-Khayyāmī (better known in the West as Omar Khayyam) wrote his book on algebra.

Over a thousand mathematical manuscripts from the period have survived, about half of them dating before the 15th century.

Al-Khwārizmī, who may have studied and worked in the House of Wisdom, was one of the earliest contributors to this vast undertaking, and arguably had the most impact of all the mathematicians involved. But his books – he wrote one on Hindu arithmetic in addition to the one on algebra – should be viewed as part of this larger movement.

At the time, algebra was viewed primarily as a practical, numerical problem solving technique, not the autonomous branch of mathematics it became later. Indeed, the greatest contribution of Arabic mathematical work to society was its development as a set of practical tools.

Three systems of practical calculation were taught and practiced in the medieval Islamic world: finger reckoning, Hindu arithmetic, and the base-60 system of the astronomers. Merchants preferred finger reckoning, which worked for numbers up to 10,000, and used it to solve problems by various methods, such as double false position and algebra. In the early 9th century, al-Khwārizmī wrote a work, now lost, called Book of Adding and Subtracting, which was probably devoted to the use of finger reckoning. (If so, it was probably the earliest written text on the subject.)
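Double false position, mentioned above, is a genuine algorithm: make two guesses, record the error of each, and combine them; for a purely linear problem it delivers the exact answer. A minimal Python sketch (mine, of course; the medieval reckoners stated the rule in words and worked it with finger reckoning or on a board):

    def double_false_position(f, guess1, guess2, target):
        """Solve f(x) = target for a linear f from two trial guesses,
        weighting each guess by the error of the other."""
        e1 = f(guess1) - target  # error of the first guess
        e2 = f(guess2) - target  # error of the second guess
        return (guess1 * e2 - guess2 * e1) / (e2 - e1)

    # A classic style of problem: a quantity and its third make 20
    print(double_false_position(lambda x: x + x / 3, 3, 9, 20))  # prints 15.0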

The Arabic mathematicians referred to the numerals 1, 2, 3, etc., as Hindī numerals, because they acquired the system from India. These numerals were already in use in the Middle East by the 7th century CE. The earliest known Arabic text describing the system is al-Khwārizmī’s Book on Hindī Reckoning, written in the early 9th century, which survives only in Latin translation. The original algorithms for calculating in this system were devised for use on a dust board, where erasing is easy. In the middle of the 10th century, al-Uqlīdisī introduced new algorithms for use with pen and paper. The Arabic mathematicians also introduced the concept of decimal fractions, which al-Uqlīdisī described for the first time.

Unlike Diophantus, most of the Arabic authors, including al-Khwārizmī, wrote their algebra almost entirely in words. For example, where we would write down the symbolic equation x + 1 = 2, they might write “The thing plus one equals two” (and very occasionally “The thing plus 1 equals 2”). This is generally known as the rhetorical form, and it remained in common use right up to the 16th century. This is, however, a notational distinction, not one of content. Commentators who refer to “rhetorical algebra” as being a form of algebra distinct from “literal algebra” are in error. For, although the Arabic authors wrote their books rhetorically, with no notation even for numbers, they did not solve problems rhetorically. Throughout most of Arabic algebra, problems were worked out on some ephemeral surface, by writing the coefficients and numbers in Hindu form. For example, they would write

1  2  1

to mean x² + 2x + 1. Later, Arabic scholars in the Maghreb developed a truly algebraic notation, with symbols for the words representing the powers of the unknown, but even they would resort to rhetorical text to communicate the result of a calculation.
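That dust-board convention, recording just the coefficients in positional order, is essentially how computers represent polynomials today. As a small illustration (mine, not a historical algorithm; the names “units”, “things”, and “squares” loosely echo the Arabic terms dirham, shayʾ, and māl), here is the trip from a coefficient list back to rhetorical form in Python:

    def to_rhetorical(coeffs):
        """Turn a coefficient list, highest power first (e.g. [1, 2, 1]),
        into a rhetorical reading of the polynomial."""
        names = ["units", "things", "squares", "cubes"]
        deg = len(coeffs) - 1
        terms = [f"{c} {names[deg - i]}"
                 for i, c in enumerate(coeffs) if c != 0]
        return " and ".join(terms)

    print(to_rhetorical([1, 2, 1]))  # prints "1 squares and 2 things and 1 units"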

Symbolic algebra, where full symbolism is used, is generally credited in the first instance to the French mathematician François Viète (1540–1603), followed by René Descartes (1596–1650), though traces can be discerned in the writings of some Arabic mathematicians as early as the 13th century.

* * *

In my next two articles in this short series, I’ll say a bit about al-Khwārizmī and take a look at the contents of his seminal book on algebra. In particular, I’ll give his answer to that perennial student question, “What is algebra good for?”

I use terms like “Arabic mathematics” in the standard historical fashion to refer to the mathematics done where and when the primary language for scholastic texts was Arabic. Mathematics, like all of science, belongs to the world.

When did algebra begin?

The second in a series. See the November 20 entry, “What is algebra?” for the first.

Two key features of algebra as we understand the word today are:

1. Reasoning about numbers by recognizing patterns across numbers;

2. Solving a problem by introducing a term for an unknown and then, starting with what is known, reasoning to determine its value.

We first see the emergence of both features of algebra in the mathematics of ancient Babylonia, around 2,000 BCE.

Several hundred of the many thousands of Babylonian cuneiform-inscribed clay tablets that have been found are devoted to mathematics. They show that those ancient mathematicians had systematic procedures for solving geometric problems involving the determination of lengths and areas of figures. Today, we would solve those kinds of problems using linear and quadratic equations and indeterminate systems of linear equations. Their methods amounted to a form of geometric algebra that could be applied to solve problems beyond overtly geometric examples such as calculating the perimeters or areas of various plane figures or the volumes of solid objects: arithmetic problems arising in trade and commerce, for example, and other financial transactions such as inheritance. In addition, the Babylonians considered problems that seemed to have had no practical application, pursuing them purely for recreation. Although they described their procedures in terms of specific lengths and areas, they did so in a way that made it clear the procedures applied in general, and in that sense they were starting to think algebraically, by recognizing patterns across quantities.

Moreover, some of their writings show the second characteristic feature of algebra, namely introducing an unknown and then reasoning to find its value. In their case, however, the unknown was not numeric but geometric – an unknown line on which they performed geometrical operations to get the answer.
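A famous tablet problem (the first on the tablet catalogued as BM 13901) shows the procedure at work. The scribe is told that the area of a square plus its side comes to 3/4, and must find the side; in modern symbols, x² + x = 3/4. The recipe runs step by step: take half of the coefficient of the side, getting 1/2; square it, getting 1/4; add that to the given 3/4, getting 1; take the square root, getting 1; subtract the half, leaving 1/2, the required side. Carried out on concrete lengths and areas, those steps are exactly what we now call completing the square.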

In reasoning with unknown quantities, the Babylonians went further than other early civilizations with a mathematical tradition, such as the Egyptians, the Chinese, and the early Greeks, all of the first millennium BCE. Our knowledge of the mathematics of those peoples comes from works such as the Rhind papyrus, The Nine Chapters of the Mathematical Art, and Euclid’s Elements, respectively. The approach described in those documents was, like that of the Babylonians, fundamentally geometric and exhibited reasoning about patterns of quantities, but we do not find the introduction of an unknown followed by an argument to determine its value.

It is with the work of the Greek mathematician Diophantus (ca. 210–290 CE) that we first find clearly recognizable algebra, where the unknowns represent numbers whose values are to be determined. Around 250 CE, Diophantus, who lived in Alexandria in Egypt, wrote a multi-volume work, Arithmetica, which, its title notwithstanding, was an algebra book. Its author used letters (literals) to denote the unknowns and to express equations, but that is a purely notational distinction. He was also one of the first mathematicians to use negative numbers in calculations. He showed how to solve equations by using two techniques called restoration and confrontation. In modern terms, these correspond more or less (but not precisely) to (1) adding a quantity to both sides of an equation to eliminate a negative term on one side, and (2) eliminating like terms from both sides. He used these techniques to solve polynomial equations involving powers up to the sixth.
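A small example of the two operations, restated in modern notation: faced with x² + 10 − 3x = 5x + 4, restoration adds 3x to both sides to remove the subtracted term, giving x² + 10 = 8x + 4, and confrontation then cancels the common 4 from the two sides, leaving x² + 6 = 8x.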

Almost four hundred years later, the Indian mathematician Brahmagupta (598–668 CE) likewise displayed recognizable algebra, in his book Brahmasphutasiddhanta, where he described the first complete arithmetic solution (including zero and negative solutions) to quadratic equations.

Following Diophantus and Brahmagupta, the next major step in the development of algebra – and it was huge – took place in the period generally referred to as “Arabic mathematics” or “Muslim mathematics”, a significant outpouring of mathematical activity stretching from the 8th century to the end of the 16th. Indeed, the word algebra itself comes from the Arabic word al-jabr, which occurs in the title of a highly influential book by the Persian mathematician al-Khwārizmī, completed around 830: al-Kitab al-mukhtasar fi hisab al-jabr wa’l-muqābala. The phrase al-jabr wa’l-muqābala translates literally as “restoration and confrontation,” but more loosely means “solving an equation.”

That period will be the focus of my next article on algebra.

What is algebra?

We hear a lot about the importance of all children mastering algebra before they graduate from high school. But what exactly is algebra, and is it really as important as everyone claims? And why do so many people find it hard to learn?

Answering these questions turns out to be a lot easier than, well, answering a typical school algebra question, yet surprisingly, few people can give good answers.

First of all, algebra is not “arithmetic with letters.” At the most fundamental level, arithmetic and algebra are two different forms of thinking about numerical issues. (I should stress that in this article I’m focusing on school arithmetic and school algebra. Professional mathematicians use both terms to mean something far more general.)

Let’s start with arithmetic. This is essentially the use of the four numerical operations addition, subtraction, multiplication, and division to calculate numerical values of various things. It is the oldest part of mathematics, having its origins in Sumeria (primarily today’s Iraq) around 10,000 years ago. Sumerian society reached a stage of sophistication that led to the introduction of money as a means to measure an individual’s wealth and mediate the exchange of goods and services. The monetary tokens eventually gave way to abstract markings on clay tablets, which we recognize today as the first numerals (symbols for numbers). Over time, those symbols acquired an abstract meaning of their own: numbers. In other words, numbers first arose as money, and arithmetic as a means to use money in trade.

It should be noted that counting predates numbers and arithmetic by many thousands of years. Humans started to count things (most likely family members, animals, seasons, possessions, etc.) at least 35,000 years ago, as evidenced by the discovery of bones with tally marks on them, which anthropologists conclude were notched to provide what we would today call a numerical record. But those early humans did not have numbers, nor is there any evidence of any kind of arithmetic. The tally marks themselves were the record; the marks referred directly to things in the world, not to abstract numbers.

Something else to note is that arithmetic does not have to be done by the manipulation of symbols, the way we are taught today. The modern approach was developed over many centuries, starting in India in the first half of the First Millennium, adopted by the Arabic-speaking traders in the second half of the Millennium, and then transported to Europe in the 13th Century. (Hence its present-day name, “Hindu-Arabic arithmetic.”) Prior to the adoption of symbol-based, Hindu-Arabic arithmetic, traders performed their calculations using a sophisticated system of finger counting or a counting board (a board with lines ruled on it on which small pebbles were moved around). Arithmetic instruction books described how to calculate using words, right up to the 15th Century, when symbol manipulation began to take over.

Many people find arithmetic hard to learn, but most of us succeed, or at least pass the tests, provided we put in enough practice. What makes it possible to learn arithmetic is that the basic building blocks of the subject, numbers, arise naturally in the world around us, when we count things, measure things, buy things, make things, use the telephone, go to the bank, check the baseball scores, etc. Numbers may be abstract — you never saw, felt, heard, or smelled the number 3 — but they are tied closely to all the concrete things in the world we live in.

With algebra, however, you are one more step removed from the everyday world. Those x’s and y’s that you have to learn to deal with in algebra denote numbers, but usually numbers in general, not particular numbers. And the human brain is not naturally suited to think at that level of abstraction. Doing so requires quite a lot of effort and training.

The important thing to realize is that doing algebra is a way of thinking, and it is a way of thinking that is different from arithmetical thinking. Those formulas and equations, involving all those x’s and y’s, are merely a way to represent that thinking on paper. They are no more algebra than a page of musical notation is music. It is possible to do algebra without symbols, just as you can play an instrument without being able to read music. In fact, traders and other people who needed it used algebra for 3,000 years before the symbolic form was introduced in the 16th Century. (That earlier way of doing algebra is nowadays referred to as “rhetorical algebra,” to distinguish it from the symbolic approach common today.)

There are several ways to come to an understanding of the difference between arithmetic and (school) algebra.

  • Algebra involves thinking logically rather than numerically.
  • In arithmetic you reason (calculate) with numbers; in algebra you reason (logically) about numbers.
  • Arithmetic involves quantitative reasoning with numbers; algebra involves qualitative reasoning about numbers.
  • In arithmetic, you calculate a number by working with the numbers you are given; in algebra, you introduce a term for an unknown number and reason logically to determine its value.

The above distinctions should make it clear that algebra is not doing arithmetic with one or more letters denoting numbers, known or unknown.

For example, putting numerical values for a, b, c in the familiar formula

x = (−b ± √(b² − 4ac)) / 2a

in order to find the numerical solutions to the quadratic equation

ax² + bx + c = 0

is not algebra, it is arithmetic.

In contrast, deriving that formula in the first place is algebra. So too is solving a quadratic equation not by the formula but by the standard method of “completing the square” and factoring.
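For the record, the derivation runs as follows, and every step is reasoning about the unknown x in general rather than computing with particular numbers. Starting from ax² + bx + c = 0, divide through by a and move the constant term across:

x² + (b/a)x = −c/a

Add (b/2a)² to both sides, so that the left side becomes a perfect square:

(x + b/2a)² = (b² − 4ac)/4a²

Take square roots of both sides and solve for x:

x = (−b ± √(b² − 4ac)) / 2a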

When students start to learn algebra, they inevitably try to solve problems by arithmetical thinking. That’s a natural thing to do, given all the effort they have put into mastering arithmetic, and at first, when the algebra problems they meet are particularly simple (simple by the teacher’s classification, that is), this approach works.

In fact, the stronger a student is at arithmetic, the further they can progress in algebra using arithmetical thinking. For example, many students can solve the quadratic equation x² = 2x + 15 using basic arithmetic and no algebra at all.
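A typical arithmetical line of attack: the equation says that some number, multiplied by itself, gives 15 more than twice the number. Trying small numbers in turn quickly lands on 5, since 5 × 5 = 25 and 2 × 5 + 15 = 25. That is pure arithmetic (guess, compute, check), with no reasoning about an unknown.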

Paradoxically, or so it may seem, those better students may find it harder to learn algebra, because to do algebra, for all but the most basic examples, you have to stop thinking arithmetically and learn to think algebraically.

Is mastery of algebra (i.e., algebraic thinking) worth the effort? You bet — though you’d be hard pressed to reach that conclusion based on what you will find in most school algebra textbooks. In today’s world, most of us really do need to master algebraic thinking. In particular, you need to use algebraic thinking if you want to write a macro to calculate the cells in a spreadsheet like Microsoft Excel. This one example alone makes it clear why algebra, and not arithmetic, should now be the main goal of school mathematics instruction. With a spreadsheet, you don’t need to do the arithmetic; the computer does it, generally much faster and with greater accuracy than any human can. What you, the person, have to do is create that spreadsheet in the first place. The computer can’t do that for you.

It doesn’t matter whether the spreadsheet is for calculating scores in a sporting competition, keeping track of your finances, running a business or a club, or figuring out the best way to equip your character in World of Warcraft: you need to think algebraically to set it up to do what you want. That means thinking about or across numbers in general, rather than in terms of (specific) numbers.
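To make this concrete, imagine a (hypothetical) league scores sheet with wins in column B and points per win in column C. The formula you type into cell D2 is something like =B2*C2, and the season total at the foot of the column is =SUM(D2:D20). Neither formula mentions any particular number; each expresses a relationship that holds whatever values the cells happen to contain. Writing such formulas is reasoning about numbers in general, which is exactly the algebraic thinking described above.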

Of course, the need for algebra does not make it any easier to learn — though I think that spreadsheets can provide today’s students with more meaningful and fulfilling applications than the problems about trains leaving stations or garden hoses filling swimming pools that my generation had to endure. But in a world where our very national livelihood depends on staying ahead of the technology curve, it is crucial that we equip our students with the kind of thinking skills today’s world requires. Being able to use computers is one of those skills. And being able to use a computer to do arithmetic requires algebraic thinking.

In future postings I’ll describe the growth of algebra through the ages.

Steve Jobs remembered

The fact that I, along with millions of other people, learned of Steve Jobs’ death by way of a text message or email sent to my iPhone indicates just how huge was the Apple co-founder and CEO’s impact on the way many of us go about our business and live our lives. I can’t imagine any other CEO of a large corporation whose death would be so widely mourned.

By chance, the Apple Store in Jobs’ home town of Palo Alto, where I live, is right across the street (University Avenue) from the Peet’s Coffee Shop where I go for my morning latte. As I left the coffee house the morning after his death was announced, I noticed some flowers and a couple of lighted candles had been left on the sidewalk in front of the Apple Store window. I walked over and took a couple of photos. A day later, that small shrine had grown considerably.

Jobs had lived less than a mile from my home, so on the Saturday morning I detoured past his (surprisingly modest) suburban-style house on my way to the supermarket, where I found precisely what I had anticipated: a huge outpouring of grief and remembrances.

Again it struck me: this reaction to the passing of a billionaire CEO? I cannot imagine a similar response to the death of Bill Gates, Larry Page, Sergey Brin, Mark Zuckerberg, or any of the other technology elite. Somehow, Jobs did more than provide people with useful consumer products; he touched their emotions.

To be sure, not everyone is an Apple fan. But those who are have a deep attachment to the brand. People tend not merely to like Apple stuff; rather, they LOVE it. And that is surely a tribute to superb design – design both for appearance and for usability.

I can’t say I’ve ever been a fan of anyone or anything, but I am a sucker for great design. When I first saw, and used, a Macintosh (one of the first generation), it quite literally changed my career, and in due course my life. As a mathematician with research interests in computation and a side interest (more of a hobby) in computational number theory, I was using computers long before the personal computer came on the scene. Back then (and this was the 1970s), a computer was a slave that you commanded to do things for you. Even worse, it was a dumb slave, so you had to formulate those commands with extreme care. A single missing or misplaced comma would cause the machine to grind to a halt.

All that changed with the Mac. As a user, you were no longer issuing commands to be acted upon, you were HANDLING INFORMATION. It was your stuff, and you were the one performing the action. Or so it seemed. In reality, you were being fooled by a cleverly designed interface. But that was the genius of the Mac; it took something intrinsically alien to human beings, computation, and presented it in a way that we find natural and instinctive.

Well, actually, that genius was not Apple’s, rather that of a remarkable group of researchers at the Xerox Palo Alto Research Center (PARC) who had developed this new approach to computing over several years. To Xerox (and many of the researchers at PARC), the goal was to build computers to support office workers. Jobs’ genius was recognizing that, properly packaged and marketed, this approach to computing could turn computers into mass-market consumer products.

When I first started to use a Mac, I realized that there was huge value to be had in viewing reasoning not as a PROCESS of logical deduction to arrive at a conclusion (the classical view of mathematical reasoning), but as GATHERING INFORMATION in order to reach a decision. That shift in viewpoint led to my book Logic and Information, published in 1990, and my entire research career since then has followed on the heels of that book.

Were it not for the Mac, I doubt I would have shifted my research the way I did. It would not have been enough to read about the WIMP interface (windows, icons, menus, pointer), or even to watch someone else using it. It was the powerful sensation of DOING it yourself, of physically manipulating items of information, that made all the difference. The Mac was not a device you used. It was something you experienced. And that is a profound difference.

As became clear through his entire career, Jobs had a deep appreciation for the importance of making technology something people experience – not merely use.

By pure chance, just two months before Jobs died, I published a short e-book in which I compared him with one of the greatest innovators of all time, the thirteenth century mathematician Leonardo of Pisa, known more commonly today by the modern nickname Fibonacci. Leonardo changed the world by writing a book that introduced modern arithmetic to the western world.

The stories of what these two men did exhibit remarkable similarities. Leonardo packaged arithmetic and, through the medium of parchment, gave personal computing to the masses. Jobs made personal computing accessible to everyone through the medium of silicon. Neither individual was an inventor. Their genius was taking something alien and complex and making it accessible – and friendly – to all.

For more details on the Jobs – Leonardo connection, see my MAA column “Devlin’s Angle” for August of this year.

[Image: Apple homepage, October 6, 2011]


