Philosophy, Maths, Logic and Computers

Interview by Richard Marshall.

'Mathematicians are among the most innately philosophical creatures on the planet. Mathematics is a strange activity: it involves talking about things in funny ways, reveling in abstraction, and finding deep meaning in utterances that sound like gibberish to everyone else.'

'Logic is still the black sheep of the mathematics community. What I love about logic is that it touches philosophy, mathematics, computer science, linguistics, cognitive science, and other disciplines. That's part of what makes it so interesting. But it also means that logic is easily orphaned. Philosophers think it looks too much like mathematics, and mathematicians think it looks too much like computer science.'

'The only way I know of getting at mathematical metaphysics and epistemology is to start with mathematical method. Mathematics is designed to enable us to reason efficiently and effectively, and that has a strong influence on the kinds of objects we talk about and the way we talk about them. I can't see how to make sense of the nature of mathematical objects without understanding their role in mathematical thought.'

'David Hilbert is one of the most influential mathematicians of all time. He made seminal contributions throughout mathematics, to algebra, number theory, analysis, mathematical physics, combinatorics, and more. He developed an interest in logic and the foundations of mathematics around the turn of the twentieth century. Around that time, it was discovered that naive uses of set-theoretic reasoning and reasoning about the infinite give rise to contradictions.'

'Those of us who work in logic are watching the recent developments in deep learning with trepidation. AI has long been divided between symbolic, logic-based methods on the one hand, and approximate, statistical techniques on the other. The fact that neural networks have had such striking successes with games like Go and chess leaves us wondering whether they will supplant symbolic methods entirely.'

Jeremy Avigad is a professor in the Department of Philosophy and the Department of Mathematical Sciences at Carnegie Mellon, and is associated with the university's interdisciplinary program in Pure and Applied Logic. He is interested in mathematical logic and proof theory, formal verification and automated reasoning, and the history and philosophy of mathematics. Here he discusses the relationship between philosophy and mathematics, philosophy of mathematics and analytic philosophy, early twentieth-century mathematical innovation, Hilbert and Henri Poincaré, mathematical logic, the distinction between syntax and semantics, not being dogmatic about foundations, change in mathematics, the role of computers in mathematical enquiry, the modularity of mathematics, whether mathematics is discovered or designed, why David Hilbert is important, and formal verification, automated reasoning, and AI.

3:16:  What made you become a philosopher?

Jeremy Avigad: My graduate degree is in mathematics, specifically in logic, but I have been thinking about mathematics for almost as long as I have been doing it. It was Carnegie Mellon that turned philosophy into a profession for me. I am eternally grateful to my department for recognizing that the foundational work I was trained to do is philosophically important, and for giving me room to use that background to explore the history and philosophy of mathematics.


3:16: You’re interested in the philosophy of mathematics. Do philosophical concerns get raised in the average mathematician’s workday, or is the philosophy something like ornithology is to birds – it would be useful if they accessed it – but they don’t?

JA: Mathematicians are among the most innately philosophical creatures on the planet. Mathematics is a strange activity: it involves talking about things in funny ways, reveling in abstraction, and finding deep meaning in utterances that sound like gibberish to everyone else. What mathematicians do on a day-to-day basis is guided by judgments about what it means to do mathematics and what it means to do it well: what kinds of questions are natural, what kinds of problems are interesting, what kinds of solutions are appropriate, and so on. Mathematicians talk about these things all the time, perhaps not with the kinds of rigor and precision that we would like to see as philosophers, but these views are central to what they do.

Philosophy, when done well, can help us think better about what we are doing. Philosophy of mathematics in particular can help us articulate the goals of mathematics and understand its methods, and understand how the methods are suited to the goals. Philosophers are good at developing rigorous ways of talking about things like these, but we need to keep in mind that, when we do philosophy of mathematics, we are talking about mathematics and not some empty bead game. Philosophers and mathematicians bring different skills to the table, but we're thinking about the same thing, and can help one another.

3:16: Philosophy of mathematics is very important for analytic philosophy, isn’t it – both as a subject of enquiry and as a landmark in the broader philosophical landscape – can you give us an overview of this importance as you see it?

JA: As I see it, what makes mathematics special is the clarity and precision of its language and methods of inference. What is most important are the rules of communication: how I am expected to make a claim, how you are allowed to challenge it, how I am allowed to respond, and so on. The messiness of the empirical world is kept at bay, and mathematics provides mechanisms that enable us to come to agreement as to whether we have a theorem. Maybe we are looking for a theorem that will help us build airplanes or predict the stock market, but even if that doesn't pan out as well as we had hoped, a theorem is still a theorem, and so a piece of mathematics.

The narrow focus on communication and means of coming to agreement makes it easier to think about mathematics than other aspects of our daily lives. Ordinary language is complicated and messy, but mathematical language is regimented and neat. The concepts that come up in ordinary discourse are unruly, but mathematical concepts are carefully controlled. It is impossible to make sense of all the cognitive mechanisms we use to get by in our everyday lives, but mathematical reasoning is amenable to systematic study.

That's not to say that natural language and other forms of reasoning aren't important. But mathematics is a good place to set up a base camp, and we can make it further if we explore from there.

3:16: The early twentieth century seems to have been a time of foundations and mathematical innovation. You say that these innovations were a reaction to the set-theoretic and nonconstructive methods that had become prevalent. So first, can you sketch for us what these two things claimed – why were they seen as the best way of approaching maths at the time?

JA: For most of its history, mathematics was about algorithms and calculation. Euclid's Elements was about geometric construction. Algebra was about solving equations. Probability was about predicting odds in games of chance and making statistical forecasts. Calculus was about explaining the motion of the planets and making predictions about systems changing over time.

In the nineteenth century, as mathematicians pushed the bounds of subjects like these, the technical details grew unwieldy. As the community coped with the growing complexity, some began to insist that the goal of mathematics was not to carry out calculations per se, but, rather, to obtain a proper conceptual understanding, enabling us to reason about mathematical objects in a manner that can underwrite computation but is not tied to it. From that perspective, calculation was seen as a surface manifestation of a much deeper body of knowledge.

This conceptual approach played out in a number of ways, including algebraic abstraction, foundational reduction, the use of informal set-theoretic language, and the use of more liberal means of describing infinitary mathematical objects and structures. But much of it also involved suppressing calculational detail and adopting methods that are at odds with a strict computational understanding.

[Poincaré]

3:16: So why did the innovators such as Hilbert and Henri Poincaré challenge these ideas? What did they put in their place?

JA: Many of Poincaré's innovations can be understood in these terms. For example, in the nineteenth century it was well understood that systems of differential equations could be used to model a wide range of physical phenomena, but it is often impossible to carry out accurate calculations and make reliable predictions on the basis of such a description. Poincaré showed that even in cases like that it is often possible to obtain useful qualitative information about a system, for example, by abstracting its geometric or topological properties.

David Hilbert was a master of the new techniques. One of his early successes was to settle a longstanding question in the theory of algebraic invariants, showing that certain kinds of algebraic expressions exist without showing how to calculate them. Throughout his career, Hilbert was a great champion of the new methods, as evidenced by his famous proclamation that no one will drive us from the set-theoretic paradise that Georg Cantor had created. At the same time, he was sensitive to concerns that the new methods strayed far from explicit reasoning about finite objects, which he viewed as representing the concrete content of mathematics. His foundational work was devoted to explaining how infinitary reasoning about abstract, ideal objects can be understood to have concrete content.

3:16: Another big innovation happening at the end of the nineteenth century and the beginning of the twentieth was mathematical logic. Can you say something about this and why it wasn’t easy to fit into a generally accepted taxonomy of maths at the time which categorized maths as being either geometry or arithmetic?

JA: It was during that time period that logic became a branch of mathematics. Since ancient times, mathematics had been characterized as the science of magnitude, which bifurcates into the study of continuous magnitudes, namely geometry, and discrete magnitudes, that is, arithmetic. In the mid-nineteenth century, George Boole argued for a new characterization of mathematics in order to find a place for logic. He proposed that we think of mathematics as the science of symbolic calculation, noting that we can calculate with propositions just as we can calculate with numbers. The algebraization of logic in the hands of Boole and De Morgan, and later of others such as Peirce and Schröder, went a long way towards making logic feel more mathematical.

3:16: So has logic become a mature mathematical discipline now and is this because of a distinction between syntax and semantics developed within the logic community rather than any changes in the maths community?

JA: I don't think the distinction between syntax and semantics is the reason that logic has become a part of mathematics, but, rather, one of the results of that transformation. In the early twentieth century, mathematicians developed rigorous ways to reason about symbolic expressions and their semantic interpretations, and both of these played an important part in the development of the theory.

That said, logic is still the black sheep of the mathematics community. What I love about logic is that it touches philosophy, mathematics, computer science, linguistics, cognitive science, and other disciplines. That's part of what makes it so interesting. But it also means that logic is easily orphaned. Philosophers think it looks too much like mathematics, and mathematicians think it looks too much like computer science. Computer science is a natural home, since symbolic expressions and their interpretation are fundamental to that subject. But confining logic to computer science imposes constraints on the kinds of research that can be done, and it would be a shame if we lost sight of its philosophical and mathematical relevance.

3:16: Why and how is semantics important for communicating and sharing mathematical knowledge, and why is Zermelo-Fraenkel set theory important in this respect?

JA: I was trained as a proof theorist in the Hilbert tradition, so I tend to favor syntax over semantics. In other words, I focus on symbolic expressions and their rules of use, and worry less about reference and meaning. So I personally think of set theory as a communal collection of linguistic norms, a rule book and style guide for communicating mathematical arguments and ideas. Having these articulated clearly was especially helpful in the early twentieth century, because it provided ways of unifying different branches of mathematics and making sense of the new methods of abstraction.

That said, I am not dogmatic about foundations. Set theory provides an idealized account of mathematical activity, one that has been informative and helpful. Categorical logic provides complementary ideas and insights, and the different perspectives are compatible. Mathematics is what it is, and the more ways we have to understand it, the better.

3:16: Has mathematics moved from being constructive to something else after the nineteenth century and does that mean that maths as currently understood is very different from what it has been for most of its history? Is this linked to the downgrading of computation in maths which followed from the discovery of paradoxes stemming from overly naive use of set-theoretic language and methods?

JA: Mathematics is always changing. What has remained constant throughout history is the emphasis on clarity and precision, and the discipline's ability to establish a communal consensus as to what it takes to settle a claim definitively. I wouldn't want to suggest that mathematics in 1850 and mathematics in 1950 were completely different things. But yes, there are fundamental differences in the language and patterns of reasoning that one sees in 1850 and 1950, and these reflect interesting differences in the way mathematicians understood the subject and its goals.

What became clear in the late nineteenth and early twentieth century is that there is a tension between, on the one hand, the desire to develop powerful abstractions that help us solve harder problems and extend our cognitive reach, and, on the other hand, the desire to relate those abstractions to the problems of measurement and prediction that motivated the subject in the first place. The tension is still with us today. Developing powerful theories and abstractions is important. So is using mathematics to build airplanes, manage the economy, and cure disease. It is fascinating to see how the discipline manages to do both at the same time.

3:16: So what is the role of computers in mathematical enquiry – can they offer evidence for mathematical assertions and/or mathematical understanding – and what is meant by ‘mathematical understanding’ in this context?

JA: Until this point, I have been talking about using mathematics to support computation. This question goes the other way: how can we use computers to support mathematical reasoning? Computer scientists use the phrase "formal methods" to refer to logic-based computational methods that can be used to reason about complex hardware and software. I am interested in exploring how these methods can be used in mathematics as well.

In a field known as interactive theorem proving, computers are used to check mathematical proofs down to axiomatic primitives, providing strong guarantees that the results are correct. In automated reasoning, computers are used to discover new mathematical results. These applications are in their early stages, and mathematicians have a right to be skeptical that the technology has much to offer them right now. But the methods hold enormous potential, and I have no doubt that in the long run they will fundamentally transform what we can do.
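
To give a flavor of what this looks like in practice, here is a minimal sketch in the Lean proof assistant. The choice of system and the toy example are purely illustrative: a simple fact about the natural numbers is stated, a proof is given by appeal to a library lemma, and a command asks the system which axioms the finished proof ultimately depends on.

```lean
-- A toy theorem about natural numbers, proved by appeal to a lemma from
-- Lean's library.  The kernel checks the resulting proof term against the
-- primitive rules of the underlying type theory.
theorem add_comm_example (m n : Nat) : m + n = n + m :=
  Nat.add_comm m n

-- Ask which axiomatic primitives the proof ultimately rests on.
-- (Here the answer is: none beyond the type theory itself.)
#print axioms add_comm_example
```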

Famous results like the four-color theorem and Thomas Hales' proof of the Kepler conjecture show that there are interesting theorems that can be reduced to computations that are too long to carry out by hand. This can be disconcerting. We want our mathematics not only to tell us that a theorem is true, but also to help us understand why it is true, and we may not feel that a long computation provides the right kind of understanding. But the mathematical community has begun to reconcile itself to the fact that sometimes a computational proof is all we have a right to expect, and all that we really need. In that case, the understanding we have gained may be located in the fact that we have learned how to reduce the problem to a computation, and why the reduction works.

Mathematicians are pragmatists. We want to know the answers. Computers fundamentally alter the kinds of things we can discover and verify, and if we can put them to good use, we will. This will continue to challenge the way we think about mathematics and mathematical understanding, and I expect that mathematics will look very different a century from now, perhaps as different as 1950 mathematics would look to the 1850 mathematician. In the meanwhile, we will continue to grapple with the new technology.

3:16: Is mathematics modular and if it is, why is it epistemically useful to think of maths in this way? What alternatives does this model supplant?

JA: The word "modular" is a term of art in a number of disciplines. If you go to the library and take out a book on software engineering, it will tell you that it is important to design software in modular ways. It will tell you that this makes software more robust, more reliable, easier to understand, and easier to adapt and modify, and it will tell you why.

I have made a case that it is helpful to think about the mathematical literature – definitions, theorems, and proofs – in these terms. Reasoning effectively in mathematics is largely a matter of information management. It requires finding the abstractions that suppress the things that don't matter and let us focus on the things that do. When we think about mathematics in these terms, lots of things we see happening there start to make more sense.
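
A toy example in Lean may make the analogy concrete; the definitions and names below are invented solely for illustration, and the sketch assumes a recent Lean 4 toolchain in which the omega decision procedure is available. A small definition together with a couple of lemmas forms a little "module": client proofs that use only the lemmas are insulated from how the definition happens to be spelled out.

```lean
-- A small "interface": a definition plus the lemmas clients are meant to use.
def isEven (n : Nat) : Prop := ∃ k, n = 2 * k

theorem isEven_zero : isEven 0 :=
  ⟨0, rfl⟩

theorem isEven_add {a b : Nat} (ha : isEven a) (hb : isEven b) :
    isEven (a + b) :=
  match ha, hb with
  | ⟨i, hi⟩, ⟨j, hj⟩ => ⟨i + j, by rw [hi, hj]; omega⟩

-- A client proof that relies only on the two lemmas above.  If the definition
-- of isEven were later reorganized but the lemmas preserved, this proof would
-- not need to change.
example : isEven (0 + 0) :=
  isEven_add isEven_zero isEven_zero
```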

Philosophers sometimes dismiss the pragmatics of everyday mathematical activity as methodology, as opposed to metaphysics or epistemology, which, on their view, is where all the action is. But the only way I know of getting at mathematical metaphysics and epistemology is to start with mathematical method. Mathematics is designed to enable us to reason efficiently and effectively, and that has a strong influence on the kinds of objects we talk about and the way we talk about them. I can't see how to make sense of the nature of mathematical objects without understanding their role in mathematical thought.

3:16: Is maths something that was designed or something that was discovered? I guess I’m wondering whether you think mathematics fits into a broadly naturalistic or Platonic account of metaphysics and epistemology?

JA: If forced to choose, I'd come down on the side of design, but I think the dichotomy is misleading. You can equally ask whether wheels and levers were invented or discovered. I suppose it is more common to say that they were invented, but you can also say that we discovered that wheels and levers are useful for moving things, and that we learned how to put them to good use. In a similar way, we often talk about the invention of the transistor, but the 1956 Nobel Prize was awarded for the discovery of the transistor effect.

When we do mathematics, we are participating in a communal choice to talk about isosceles triangles and prime numbers, and to talk about them in certain ways. The story of how the community reached consensus as to how to do this is a complex one, but the mathematics we have today is the result of countless decisions, big and small, and, in that sense, I think it is best to think of mathematical concepts as being designed. But we settled on those concepts because we discovered that they are useful for our purposes. Once again, the story as to how they are useful and what they are useful for is complex. But I think that is what the philosophy of mathematics should try to understand.

I don't think we have to choose between a naturalistic and a Platonistic account. Bill Tait summed it up nicely in one of his essays. Mathematics is designed to help us make sense of the physical world, and that plays an important part in understanding why mathematics is the way it is. But something becomes a piece of mathematics only when we fix the definitions, assumptions, and rules of inference, and then we only have to answer to those. Whether or not a theorem or a calculation is correct depends only on whether we have followed the rules correctly. Whether or not it applies in any particular physical setting is a scientific question, and one that is adjudicated by other means.

[Hilbert]


3:16: Is Hilbert the go-to guy for you concerning metamathematics? What’s so important about Hilbert for the philosopher and in particular the philosopher of maths? (It may be useful to tell us a little about him as he will be unfamiliar to many readers outside maths!)

JA: David Hilbert is one of the most influential mathematicians of all time. He made seminal contributions throughout mathematics, to algebra, number theory, analysis, mathematical physics, combinatorics, and more. He developed an interest in logic and the foundations of mathematics around the turn of the twentieth century. Around that time, it was discovered that naive uses of set-theoretic reasoning and reasoning about the infinite give rise to contradictions. In the early 1920s, Hilbert gave a mature presentation of what has come to be known as 'Hilbert's program', which was designed to put the use of such methods on a stable foundation. He proposed to describe the new methods with clear axioms and rules, and then prove, using minimal, uncontroversial methods, that these axioms and rules are free from contradiction. (The German term for that is widerspruchsfrei, now translated to English as consistent.)

Gödel's 1931 incompleteness theorems show that Hilbert's program cannot succeed as it was originally formulated: no reasonable theory of mathematics can prove its own consistency, so it is certainly impossible to establish the consistency of a powerful body of mathematics using only a small portion thereof. But one can interpret the program more broadly as having the goal of modeling portions of mathematical reasoning using axioms and rules, and trying to understand what the system can and cannot do, especially with respect to the concrete, computational parts of the subject. Construed that way, the program has been a success.

I began my career working firmly in that tradition, and it has governed the way I think about mathematics ever since. What makes Hilbert's program possible is that the language and inferential norms of mathematics admit precise description, and, in my opinion, that should be the starting point for the philosophy of mathematics. That's not to deny that a good theory of abstract objects might help us make sense of the subject, but, ultimately, the data we need to explain are the definitions, theorems, conjectures, questions, and arguments that we see in the literature. No matter what we say about what is happening on the inside, our philosophical theories have to square with the outward manifestations.

I also admire Hilbert in a broader sense for being able to bring mathematical methods to bear on broadly philosophical problems in a down-to-earth, no-nonsense sort of way. He was a pragmatist. He is often described as proclaiming that mathematics is nothing more than a meaningless symbol game, but, of course, he had way too much respect for mathematics for this to be an accurate characterization of his views. The point of his sometimes fiery rhetoric was that, insofar as we can model mathematics in formal terms, the question as to whether those methods are consistent is a precise mathematical question, not a vague philosophical one.

3:16: And to end on an outlier - does your understanding of formal verification and automated reasoning suggest to you that some kind of AI is possible that would look very like human thinking and pass the Turing test?

JA: Formal verification and automated reasoning are mature fields, but the methods are still surprisingly limited when it comes to mathematical reasoning. With interactive theorem provers, it is helpful to have automation that can justify small inferences, rather than forcing users to spell out all the details by hand. But even the kinds of straightforward inferential steps that one finds in an undergraduate textbook currently require more user input and guidance than we would like. Automation is good at performing large, homogeneous reasoning tasks very quickly, but not so good at locating and combining heterogeneous facts from an underlying mathematical library.
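
To make the point concrete, here is the sort of small inference at issue, sketched in Lean; the examples are invented for illustration, and they assume a recent Lean 4 toolchain in which the omega decision procedure for linear arithmetic is available. One call to the automation settles steps that a textbook would pass over in silence, but that would otherwise have to be assembled by hand from library lemmas.

```lean
-- Two steps a textbook would treat as obvious.  The omega decision procedure
-- for linear arithmetic over Nat and Int discharges each in a single call;
-- without such automation, the user would have to locate and chain the
-- appropriate library lemmas explicitly.
example (n : Nat) (h : 3 ≤ n) : 2 * n + 1 ≤ 3 * n := by
  omega

example (a b : Int) (h : a < b) : a + 1 ≤ b := by
  omega
```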

Those of us who work in logic are watching the recent developments in deep learning with trepidation. AI has long been divided between symbolic, logic-based methods on the one hand, and approximate, statistical techniques on the other. The fact that neural networks have had such striking successes with games like Go and chess leaves us wondering whether they will supplant symbolic methods entirely. But so far, machine learning techniques have had even more limited success when it comes to mathematical reasoning.

I am confident that computers will eventually be able to carry out the kinds of mathematical inferences we do today. The fact that we can learn and communicate mathematics means that there are shared patterns of rules and expectations, and I see no reason to think that these cannot be mechanized. But we have a long way to go. I don't know whether it will require better symbolic techniques or better statistical techniques, and whether formal mathematical knowledge will come from carefully curated databases or from setting machine learning loose on the web. We have a lot to learn, and it will be really interesting to see where the technology takes us.

By the way, this is another place where mathematics, computer science, and philosophy can come together. Mathematicians are good at doing mathematics, computer scientists are good at designing algorithms, and philosophers are good at uncovering the norms, assumptions, and expectations that are implicit in a practice. We could leave AI to the computer scientists, but I think we'll make better progress with a more varied perspective. But that requires overcoming academic pressures that tend to push towards specialization and discourage interaction. There is a large culture gap separating mathematicians from computer scientists, but I am happy to see that they are still talking to each other, and learning from one another, when it comes to mathematical reasoning. I am more worried about philosophy of mathematics. The field has become pretty isolated and far removed from the subject it is supposed to illuminate.

3:16: And finally, are there five books you could recommend that would take us further into your philosophical world?

JA: I recommend spending time with any of the seminal works in the history of mathematics and mathematical thought. 

Read Euclid's Elements, and think about the structure of its claims and arguments. Think about why Euclid takes the whole numbers to start with 2, and then think about why we don't do the same. 

Spend time with Dirk Struik's excellent Source Book in Mathematics, and see how firmly early algebra was based on a geometric foundation. Notice also how reluctant early algebraists were to add up heterogeneous quantities like areas and lengths, and think about why it doesn't bother us to do that today.

Read Descartes' Geometry side by side with his philosophical writing. Look at the ways that Newton and Leibniz conceived of the new calculus.

William Ewald's From Kant to Hilbert: A Source Book in the Foundations of Mathematics is also really wonderful. Seeing Dedekind and Cantor introduce new ways of talking about sets and functions, and thinking about how and why they did, can help us understand the way we talk about sets and functions today.

3:16: And where do you stand on some opposing philosophical positions?  

JA: For most of the positions you mention, I have no special insight. I'll pick my battles. But on these four I do have a view:

a. A priori knowledge: yes or no?

Yes.

b. Abstract objects: Platonism or nominalism?

Both.

d. Analytic-synthetic distinction: yes or no?

Yes.

l. Logic: classical or non-classical?

Classical.




ABOUT THE INTERVIEWER

Richard Marshall is biding his time.

Buy his new book here or his first book here to keep him biding!

End Times Series: the index of interviewees