Interview by Richard Marshall.

'In science, a classical example of the application of experimental robustness is the one that led to the discovery of the atom. Several different experiments were devised and all of them converged on the same result in favour of the atomic nature of matter. The underlying idea was that it would be such a coincidence if different tools gave the same result, that when this happened, scientists’ confidence immediately increased in the hypothesis under test.'

'I think it is very difficult to have a genuine interdisciplinary approach in philosophy, as it means getting up from the armchair, just when it is becoming comfortable. As a compromise, the metaphor could be to have a portable armchair to carry around across disciplines. One like those summer chairs you can bring to the seaside. In contrast to classical armchair philosophers, I have a movable armchair, which I bring with me to sit in front of the vast sea of scientific enterprise, be it in the laboratory, the workroom or the field.'

Chiara Lisciandra is an Assistant Professor of Philosophy at the University of Groningen, Faculty of Economics and Business. Before moving to Groningen, she was a Postdoctoral Researcher at the University of Helsinki, Center of Excellence in the Philosophy of the Social Sciences. Her primary research interests are in general philosophy of science, moral psychology and philosophy of economics. Here she discusses the philosophical term of art 'robustness' in philosophy of science, de-idealisation, methodologies in science and epistemology, Robert Hudson's scepticism and realism vs anti-realism. She then discusses norms, descriptive norms and finally her views on x-phi. Autumn creeps in, leaves are falling, thoughts start moving across the cooling earth...

3:AM: What made you become a philosopher?

Chiara Lisciandra: It was chance. When I finished high school, I was torn between the many options university had to offer. After some hesitation, I decided on philosophy. For one thing, I thought that I would learn a method of study that I could use when I had finally made up my mind about what to become. For another, I was attracted by the subject, but I couldn’t say if it was love or just a temporary crush. Eventually, with a leap of faith, I started my philosophical journey.

When I started studying philosophy, I soon encountered decision theory. This came as a relief! I discovered that philosophers were exploring the extent to which our decisions are driven by irrational factors, such as emotions, intuition, imagination and wishes. Arguments were advanced in favour of the view that our decisions would most likely be worse if we were rational decision makers who exclusively applied logic and probability theory. And that was indeed my case with philosophy: I am glad that, through a mix of factors and a twist of fate, I chose it.

3:AM: In the science and epistemology world of philosophy, ‘robustness’ is an important term of art which you’ve been working on. To begin with, could you say what you think this term means and why it is important? What are the stakes?

CL: Robustness… one word, several meanings! Over the last few years, together with my colleagues from the University of Helsinki, I have spent quite some time trying to disentangle the different senses of robustness and their epistemic justification.

Let me explain what robustness is by means of an example. Imagine that you were an architect or a structural engineer and you had to design a bridge to connect two islands. To start, you first build a physical model of the bridge. The model is quite accurate: it includes water, boats, trees, people, cars, etc. To test the stability of the model, you simulate the effect of air and wind on the bridge. What you find is that the model works quite well and is robust under these factors. However, the model is a simplification of a real-world bridge. How can we ensure that the bridge will be safe, even when we build it in the real world?

A similar question applies to scientific models. Models in science are based on assumptions, which are simplifications of real-world systems, in a similar way to the model of the bridge. How can we apply the result of a model to real-world phenomena, where the initial simplifications do not hold? The idea behind robustness analysis is that if the result of a model holds under different assumptions, each of which captures certain possible aspects of the real-world phenomenon, then our confidence in the result of the model is higher than before we proved its robustness.

You asked me why robustness is important. In short, I think it is important because in science, we cannot but use assumptions that abstract from how things really are. Scientists work with idealisations, caricatures, as-if assumptions, etc. Yet, by manipulating falsities, they find ways to get closer to the truth.

3:AM: Is robustness analysis a version of de-idealisation?

CL: This brings me to what I mentioned before about different senses of robustness analysis. Loosely speaking, de-idealisation is a version of robustness analysis. They are both strategies to validate unrealistic models. However, de-idealisation is based on different principles than robustness analysis, or so I argue. De-idealisation is a method of progressively enriching our models by adopting more accurate representations of a real-world phenomenon. Recall our toy model of the bridge: imagine that we gradually made the model more realistic, by adding the force of water, the weight of people, cars and trucks, etc. Piece by piece, the model becomes more accurate.

Robustness analysis, on the other hand, remains at the same level of abstraction. In theoretical models, as for instance in physics and economics, realistic and unrealistic aspects are deeply intertwined with one another. This makes it inappropriate to talk about de-idealisation when replacing any of them with different ones. In these cases, the underlying idea is that if a result is invariant across conditions, then the result does not strictly depend on the falsifications used to represent the target system.

To go back to our bridge model, we can use different assumptions to describe how people move on the bridge. They will all be only partial representations of how individuals really behave. Nevertheless, if the result is robust across conditions, then we have an indication that the result does not depend on the particular way in which we represent them.
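
To make the idea concrete, here is a minimal Python sketch; it is purely illustrative, not drawn from the interview, and every function name and number in it is invented. The toy result "the bridge holds" is checked under three different, equally simplified assumptions about how pedestrians load the bridge.

```python
import random

# Toy robustness check: estimate the peak load on a bridge under three
# different (and equally simplified) assumptions about how pedestrians
# distribute themselves. Every name and number here is made up.

BRIDGE_CAPACITY_KG = 50_000   # hypothetical design capacity
N_PEDESTRIANS = 400
AVG_WEIGHT_KG = 75

def load_uniform():
    # Assumption 1: pedestrians spread out evenly along the bridge.
    return N_PEDESTRIANS * AVG_WEIGHT_KG

def load_clustered():
    # Assumption 2: a crowd bunches up on one span, adding extra load.
    return N_PEDESTRIANS * AVG_WEIGHT_KG * 1.3

def load_random(trials=1_000):
    # Assumption 3: occupancy fluctuates randomly; take the worst case seen.
    worst = 0.0
    for _ in range(trials):
        n = sum(random.random() < 0.6 for _ in range(2 * N_PEDESTRIANS))
        worst = max(worst, n * AVG_WEIGHT_KG)
    return worst

results = {f.__name__: f() for f in (load_uniform, load_clustered, load_random)}
for name, load in results.items():
    print(f"{name:15} peak load = {load:8.0f} kg, safe = {load < BRIDGE_CAPACITY_KG}")

# The qualitative result "the bridge holds" is robust if it comes out True
# under every assumption, even though none of the assumptions is realistic.
print("robust:", all(load < BRIDGE_CAPACITY_KG for load in results.values()))
```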

3:AM: What is your argument regarding robustness analysis and tractability of modelling and what difference does this make to assumptions about methodologies in science and epistemology?

CL: Let me again explain the main issue by means of a toy model. Imagine that you and I were playing with LEGO. Suppose that the bricks in our LEGO model fulfil different functions. Some bricks are there for structural reasons, i.e. they keep the building up. Other bricks are there to organise the space, others still for decoration, etc. I think of tractability assumptions in theoretical models as being comparable to the structural bricks in a LEGO model. Tractability assumptions have certain mathematical properties, which make the result possible, in the same way as structural bricks make a LEGO model stand up. When we take a tractability assumption out of our model, we have to be careful that our mathematical ‘building’ stands firm.

The idea behind robustness analysis is to introduce certain changes into a model and compare the results across conditions. However, when we replace tractability assumptions with different ones, we have to consider the further consequences that changing them will bring about. If robustness analysis were a ‘surgical’ operation, we could replace one single assumption with a different one and observe the result across conditions. When things are interconnected, though, change rarely comes in isolation. By changing one brick, we might need to change another brick, and yet another brick, until our building no longer resembles the original one. We might end up with models that are quite different from one another and, thus, difficult to compare.

3:AM: Robert Hudson has put forward a sceptical argument regarding robustness analysis in the context of the empirical sciences, hasn’t he? What’s his argument, and is it justified, do you think?

CL: The case for robustness analysis in the context of the empirical sciences is similar but not identical to the case of theoretical models. It always involves the comparison of results across different conditions. However, rather than changing the initial assumptions or the parameters of a model, in the case of experimental robustness, scientists vary the experimental set-up and observe the consequences of this variation. If the result does not vary across conditions, they have an indication that the result of the experiment is not an artefact of the experimental apparatus they are using.

In science, a classical example of the application of experimental robustness is the one that led to the discovery of the atom. Several different experiments were devised and all of them converged on the same result in favour of the atomic nature of matter. The underlying idea was that it would be such a coincidence if different tools gave the same result, that when this happened, scientists’ confidence immediately increased in the hypothesis under test.
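
As a toy gloss on this "it would be such a coincidence" reasoning, and not an argument made in the interview, the following Bayesian sketch uses invented probabilities: if each of several independent methods has only a modest chance of spuriously pointing to atoms, the chance that they all do so at once is tiny, so their joint agreement pushes the posterior probability of the atomic hypothesis up sharply.

```python
# Toy Bayesian gloss on the "it would be such a coincidence" reasoning.
# All of the probabilities below are invented for illustration.

prior_h = 0.5            # prior probability that matter is atomic
p_pos_given_h = 0.8      # chance one imperfect method points to atoms if true
p_pos_given_not_h = 0.3  # chance it does so spuriously if false
n_methods = 5            # number of independent methods that all agree

# Assuming the methods are independent given the hypothesis, the likelihood
# of unanimous agreement is a product, so the spurious term shrinks fast.
likelihood_h = p_pos_given_h ** n_methods          # 0.8**5 ~= 0.33
likelihood_not_h = p_pos_given_not_h ** n_methods  # 0.3**5 ~= 0.002

posterior_h = (likelihood_h * prior_h) / (
    likelihood_h * prior_h + likelihood_not_h * (1 - prior_h)
)

print(f"posterior after {n_methods} converging methods: {posterior_h:.3f}")  # ~0.99
```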

In his latest book, Hudson looks at a number of episodes from the history of science. Beyond the discovery of atoms and molecules, he discusses the debate on dark matter and dark energy. These examples are considered to be paradigmatic cases in favour of robustness analysis. Hudson, however, contests this view. His argument is that it was not the convergence of different methods on the same result that provided a knockdown argument in favour of the hypothesis under test. It was the reliability of one particular method over the others that convinced the scientists of the validity of their hypothesis. This is what Hudson calls ‘reliable process reasoning’. According to him, robustness does not provide a warrant of accuracy; what does is rather the fact that a certain result emerges from the experimental ‘tool’ that we consider to be the most reliable one.

3:AM: Is reliable process reasoning an alternative to robustness analysis?

CL: I wouldn’t say that it is an alternative to robustness analysis. In the best-case scenario, we have reliable methods at our disposal; it is when such reliability is not a given that we might have to proceed via robustness analysis instead.

3:AM: Does your work around modelling shed light on the discussion about the realism or anti-realism of scientific theories? Roy Sorensen wondered some time ago whether the idealisations used in modelling required one to adopt some kind of anti-realism and suggested we could resist this move – do you agree with him on this or are you comfortable with not taking a realist position here?

CL: I think that we can all live with idealisations in scientific models and be realists at the same time. Uskali Mäki has defended this view at length. My work does not directly shed light on issues related to the realism/antirealism debate. My main goal is to provide scientists with epistemic arguments for the methods they use in their disciplines.

3:AM: You’ve also looked at the role of norms. One question you’ve looked at is how other people’s opinions affect judgements of norm transgressions. Could you give an example of the kind of thing you’re thinking about here and how you go about answering the question?

CL: True, beyond topics in philosophy of science, I have been working on the role of norms in social groups. These topics may seem quite distant from one another, but they are actually closer than they at first seem. In one case, I am interested in laws of nature; in the other, in norms of society. There is an important difference, though. While laws of nature hold indefinitely, norms of society are created by individuals, and as such they can change.

My work on norms is about what makes people follow norms and break them. One question is how other people’s opinions affect judgments about the violation of norms. In a family of experiments that I designed at Tilburg University together with Matteo Colombo and Marie Nilsenova, we tested whether different types of norms show different resistance to conformity effects. What we found is that moral norms are more insulated from conformity effects than other types of norms. More specifically, we developed a taxonomy of norms on the basis of the tendency we have to break norms when we are in a group of people with views different from ours.

3:AM: Is this approach generalisable outside of the paradigm of moral norms?

CL: The main result of our experiment is that moral norms are subject to conformity effects less than other groups of norms, for instance social and decency norms. The latter are norms that establish what counts as disgusting and what does not within a certain society. According to our research, individuals tend to change their judgment more easily in the case of social norms and disgusting behaviours than in the case of moral norms.

3:AM: Another question you’ve worked on is why there are descriptive norms. So what’s a descriptive norm and why do we have them?

CL: Descriptive norms are things such as fashions, fads and all sorts of trends. They are a curious class of behaviours because, unlike other types of norms, they are not strictly needed. They do not offer solutions to interaction problems and their prescriptions are neither right nor wrong per se. Yet, they emerge all the time in social groups, hold for a while, and eventually disappear. Why are there descriptive norms? In a paper with Ryan Muldoon and Stephan Hartmann, we offer a possible explanation for this phenomenon. We argue that in the same way as we look for regularities in the natural world, we look for regularities in the social world. In the latter case, however, by observing certain behaviours, we infer that they may indicate the existence of regularities and, if so, we may start following them. In so doing, we provide further evidence of these norms, in a chain of feedback effects that eventually brings such norms into existence. In this sense, by looking for norms, we create them.
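
As a rough illustration of this feedback idea, here is a minimal agent-based sketch in Python; it is not the model from the paper with Muldoon and Hartmann, and the adoption rule and all parameters are invented. Agents read the observed frequency of a behaviour as evidence of a regularity and adopt it with probability equal to that frequency, so a small random cluster of adopters can snowball into a widespread norm.

```python
import random

# Minimal feedback-loop sketch: agents read the observed frequency of a
# behaviour as evidence of a regularity and adopt it accordingly. The
# adoption rule and all parameters are invented for illustration; this is
# not the model from the Muldoon/Lisciandra/Hartmann paper.

N_AGENTS = 200
ROUNDS = 30

def simulate(seed=1):
    rng = random.Random(seed)
    # Start with a small random fraction of agents showing the behaviour.
    adopted = [rng.random() < 0.05 for _ in range(N_AGENTS)]
    history = []
    for _ in range(ROUNDS):
        freq = sum(adopted) / N_AGENTS
        history.append(freq)
        new_state = []
        for is_adopter in adopted:
            if is_adopter:
                # Current adopters occasionally drop the behaviour.
                new_state.append(rng.random() > 0.05)
            else:
                # Others treat the observed frequency as evidence of a norm
                # and adopt with probability equal to that frequency.
                new_state.append(rng.random() < freq)
        adopted = new_state
    return history

history = simulate()
print(" -> ".join(f"{freq:.2f}" for freq in history[::5]))
# A behaviour that nobody strictly needs can still snowball into a widespread
# regularity once enough people start reading it as one.
```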

3:AM: How does your work interface, if at all, with experimental philosophy? Are you sympathetic to x-phi and do you think that interdisciplinary work in epistemology and philosophy of science is unavoidable?

CL: As a philosopher of science, I am interested in scientific discovery, the principles of scientific reasoning and the scientific method. To explore scientists’ work properly, I need to get my hands on the same tools that scientists use. For this reason, I often get involved in the design of experiments, in the formulation of models and simulations, and in whatever quantitative methods I can approach. This helps me to get a better grip on different scientific methods. But this is just my approach. Philosophers may already have a good understanding of scientific tools even without actually trying them out.

Overall, however, I think it is very difficult to have a genuine interdisciplinary approach in philosophy, as it means getting up from the armchair, just when it is becoming comfortable. As a compromise, the metaphor could be to have a portable armchair to carry around across disciplines. One like those summer chairs you can bring to the seaside. In contrast to classical armchair philosophers, I have a movable armchair, which I bring with me to sit in front of the vast sea of scientific enterprise, be it in the laboratory, the workroom or the field.

3:AM: And finally, are there five books you could recommend to the readers here at 3:AM that will take us further into your philosophical world?

CL: This is a difficult question! Since I have to make a selection, I will give a list in chronological order, one that reflects how my philosophical world has changed over time.

1) Wittgenstein, Ludwig (1969) On Certainty. This is the last book that Wittgenstein wrote and the first one that I read in philosophy. Just saying!

2) Stich, Stephen (1990) The Fragmentation of Reason. I read this book when I was preparing my master’s thesis and it made the job less painful! It is one of the books that made me glad to have chosen to study philosophy.

3) Lewis, David (1969) Convention: A Philosophical Study. This is a pillar of our discipline. I liked it from the very beginning, the acknowledgements, where Lewis thanks the French patisserie in L.A. where the book was written. I hope I will visit it one day.

4) Woodward, James (2003) Making Things Happen. This is another masterpiece. It provides the most comprehensive overview of everything that has ever been written on causation.

5) Koestler, Arthur (1959) The Sleepwalkers: A History of Man’s Changing Vision of the Universe. This is the most recent book I have read. It reflects on the psychology of scientific discovery in astronomy from antiquity to Newton. It was a very nice way of bringing my portable armchair up to the planets and stars!

ABOUT THE INTERVIEWER
Richard Marshall is still biding his time.

Buy his book here to keep him biding!