An Engine Not A Camera – Chapter 1 – Performing Theory?


The book concerns two interconnected stories that MacKenzie illustrates with an example from the October 1987 stock market crash. First, the growth of markets that exchange not stocks but ‘derivatives’. As we all know, the share of our economy built on such derivatives is huge – MacKenzie quotes the market as being worth $273 trillion. Second, the “emergence of modern theories of financial markets”, rendered in elegant mathematics, which began in the 1950s and became significant in the 1960s and 70s. MacKenzie details the snobbery directed towards such models by economics departments, as a result of which such theories were largely developed in business schools. The book is about the relationship between these two stories – “What were the effects on financial markets of the emergence of an authoritative theory of those markets?”.

What are these models/theories? “The models discussed in this book are verbal and mathematical representations of markets and of economic processes”. They are simplified representations that end in precise mathematical formulations. They may contain complex economic thinking but are often computationally very simple – the Black-Scholes-Merton model of option pricing is a simple differential equation, a version of the heat or diffusion equation of physics. These models could be worked through with pen and paper, but it is useful to have a computer in the background, since one of the strengths of financial models is that they can be tested against real data. Such attempts are discussed later in the book.

The maths in the study of finance sits against the background of wider changes in economics: the move from a verbal and pluralist tradition with little maths to one dominated by maths and statistics after the strong neoclassical turn following World War II, which squeezed almost all other versions of economics out of the contemporary economic academy (be they Keynesian, Marxist, Historical, Institutional or otherwise). The peak of this approach came in the early 1950s, when the Arrow-Debreu-McKenzie model of General Equilibrium “proved”, mathematically at least, that market equilibria are Pareto-efficient – that competitive markets tend towards efficient resource allocation. Parallel to this mathematicisation was the recovery, after the Great Depression and the ascendancy of interventionist Keynesianism, of confidence in markets as a whole.
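For reference, the differential equation in question – quoted here in its standard textbook form rather than from MacKenzie – is the Black-Scholes partial differential equation:

```latex
% Black-Scholes PDE: V is the option value, S the stock price,
% sigma the stock's volatility, r the riskless interest rate.
\[
\frac{\partial V}{\partial t}
  + \frac{1}{2}\sigma^{2} S^{2} \frac{\partial^{2} V}{\partial S^{2}}
  + r S \frac{\partial V}{\partial S}
  - r V = 0
\]
% Under a suitable change of variables this reduces to the
% one-dimensional heat equation u_tau = u_xx, hence the analogy
% with diffusion in physics.
\]
```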

The models of financial economists are incredibly simple, which leads to suspicion they may be too simple. MacKenzie takes us over some of the assumptions of these models. Typically these involve:

  • stocks and shares can be bought and sold at market prices with no effect on those prices
  • there are no transaction costs (e.g. commissions on transactions)
  • stocks can be sold short without penalty
  • money can be borrowed risklessly

The canonical response to the claim that the assumptions of economics were unrealistic was, from 1953 onwards, to refer people to Milton Friedman’s infamous essay “The Methodology of Positive Economics”. It is the defence that financial economics mounted both against external detractors and against those within economics who did not see financial economics as economics proper. In this essay positive (“what is”) economics is distinguished from normative (“what should be”) economics. The goal of positive economics is to be predictive; the performance of a given theory can be judged by its ability, or inability, to make predictions. Therefore to assess a theory by its assumptions was misguided – the acid test is (in Friedman’s words) “whether it yields sufficiently accurate predictions”. In this paper we find echoes of Popper’s theory that a scientific theory is never proved, only disproved – i.e. falsification – though the influence of one on the other is rather murky. Friedman threw something into the mix by saying a hypothesis should be rejected if “its predictions are contradicted (‘frequently’ or more often than an alternative hypothesis)” – allowing much room for professional judgement as to whether this was the case. As MacKenzie notes, by the standards of strict falsification almost all the models of financial markets in the book would be disregarded, since their conclusions are empirically false. But financial economists did not disregard them – “and they were right to do so” – hence Friedman’s views were not “a precise methodological prescription for how economics should be done”. Economic theory was not a camera attempting to perfectly capture the world but rather an engine for analysing it. This metaphor is the guiding one for the book. The point here is:

that a model’s assumptions were “unrealistic” did not generally count, in the epistemic culture of financial economics, as a valid argument against a model

Here the central contention of the book is given:

Financial economics, I argue, did more than analyze markets; it altered them. It was an “engine” in a sense not intended by Friedman: an active force transforming the environment, not a camera passively recording it.

Here a key theorist is economic sociologist and sociologist of science Michel Callon, whom Nick referred to in his paper. He posits that in order to function, modern markets need an “anthropology of calculation” that makes possible the calculations that actors are supposed to perform. The agents and goods involved in a calculation must be disentangled and framed – it must be established which elements count in the calculation and which do not. This is contrasted with ethnologists’ talk of entangled objects: cultural and religious artifacts that cannot be subject to market transactions. This recalled for me some of the anthropological discussions of gift exchange and the gift economy, as I assume they come from similar places. But MacKenzie notes we should not overstate the case that the market deals only in disentangled objects – the market itself requires a slew of its own entangled objects in order to function properly – yet the comparison draws attention to the social, cultural and technological conditions that make markets possible.

MacKenzie breaks off to describe such a process. For futures markets to work, the underlying asset has to be standardised, and this standardisation is a form of the disentangling and framing Callon describes. Take, for example, the agricultural futures markets of the early Chicago Mercantile Exchange and Board of Trade. Such a market in futures was only possible with some degree of standardisation. In the past, sacks of grain kept the grain and the grower together, but with transport in rail cars and storage in grain elevators, all the grain got mixed together in the large elevators. Ownership was now retained via a paper receipt exchangeable for the same quantity of grain, but not the same exact grains (which you had previously just kept in your own sack). Hence there needed to be some kind of standardisation, which was a technical and social process: a bushel became a standard of weight, measured by the scales at the top of the elevator, with the grain graded into No. 1 and No. 2 types.

This process of “homogeneous abstraction”, in which grains were “disentangled at least partially from their heterogeneous physical reality”, allowed the market to function at all. A future works by entering into a contract to buy a set amount of stuff at a certain price at a certain time. So with this innovation I could make a contract to buy 5,000 bushels of wheat at time x in the future at price y. “Such a contract had no link to any particular physical entity, and because its terms were standardized it was not connected permanently to those who initially entered into it”. The difference was large: if I wanted to be free of this contract, I didn’t have to go back to the person I originally made the contract with; I could instead go to a third party and make an equal-but-opposite contract. When the contract fell due I could in theory take delivery of my grain, but mostly I would settle the contract by paying (or receiving) the difference between the price in the contract and the current market price. The “spot” price, for immediate exchange, and the futures price were thus tied together. This disentanglement was a social matter, dependent on people setting the standards that defined it.
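To make the cash-settlement arithmetic concrete, here is a minimal sketch in Python; the prices, quantities and function name are my own illustrative inventions, not MacKenzie’s:

```python
# Toy sketch of cash-settling a standardized futures contract.
# All numbers are invented for illustration.

def settle_long_future(contract_price, spot_price, quantity):
    """Instead of taking physical delivery, the holder of a long futures
    position receives (or pays) the difference between the spot price at
    expiry and the agreed contract price, times the quantity."""
    return (spot_price - contract_price) * quantity

# Agree today to buy 5,000 bushels at $4.00; at expiry wheat trades at $4.25.
# Cash settlement gives the same economic result as taking delivery of the
# grain and immediately reselling it at the spot price.
print(settle_long_future(4.00, 4.25, 5000))  # 1250.0
```

Offsetting works the same way: an equal-but-opposite short contract locks in exactly this difference, which is why a third party never needs to know who originally entered the contract.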

This infrastructure is diverse. Grain futures required steam-powered elevators, grain inspectors, crowded pits and contracts that separated this activity from gambling. A technical innovation had changed the character of the market. What Callon claims is that economics is part of this infrastructure: “economics, in the broadest sense of the term, performs, shapes and formats the economy” rather than neutrally observing it. The idea, then, is that financial economics did something like the innovation of the steam-powered elevators: it added something, changed the way the market worked.

One famous example of economics formatting an economy is the Chicago Boys’ infamous intervention into the Chilean economy after being trained at the University of Chicago with Friedman and pals. In Pinochet’s state they established a monetarist free market – as we all know, by force. I should note that the interesting thing about this is that it shows the difference between liberalism and neoliberalism. Liberalism for the most part assumes the market just came about when things were left alone by a night-watchman state. By contrast, the neoliberals knew that markets had to be created, despite their occasional appeals to markets’ naturalness – neoliberalism is not the same as laissez-faire. I digress; the Chilean case:

is a vivid example of a general phenomenon. The academic discipline of economics does not always stand outside the economy, analyzing it as an external thing; sometimes it is an intrinsic part of economic processes. Let us call the case in which economics plays the latter role the performativity of economics.

With reference to J.L. Austin, MacKenzie defines performative utterances as “utterances that do something” – “in saying what I do I actually perform the activity” – e.g. apologising, betting, making a contract. Here MacKenzie drops in a diagram illustrating the various forms of performativity. On the outermost ring is generic performativity: if economics is used in the ‘real world’ by market participants, policy makers or regulators, we have generic performativity. Finding this is straightforwardly empirical – you just look at whether economics is used in a given case by a group of people. Effective performativity is closer to the centre of the diagram and more specific. This is what we are really after, the stronger meaning of performativity: “to determine what effect, if any, the use of economics has had on the economic process in question”. For effective performativity to hold, the use of economic theories in markets must make a difference – it must, for example, make something possible in a market that was impossible before, or make the market or economic process different from what it would be were the theory not used. Economics must not be, as Mirowski and Nik-Khah have argued, simply epiphenomenal, making no difference. The perfect test would be to compare a situation where a certain theory was used with one where it was not, and measure directly whether it had any effect.

The most important and intriguing cases sit at the centre of the diagram – what MacKenzie labels Barnesian performativity, after sociologist Barry Barnes. Here, for example, a monarch calling Robin Hood an outlaw makes him an outlaw. In economics, “the use of a model (or some other aspect of economics) makes it ‘more true’”. MacKenzie admits that the opposite, counterperformativity, is also possible: using an economic theory can make it less true.

This form of performativity is similar to Robert K. Merton’s notion of a self-fulfilling prophecy, yet MacKenzie dislikes that term. First, his own term incorporates the sense that this strong Barnesian form is a subset of the lesser forms he has discussed. Second, self-fulfilling ‘prophecy’ suggests we are only talking about beliefs and world-views, but performativity concerns not only the beliefs in the heads of market participants but also the material structures set up around them, which exist whether the actors like them or not, or even know about them. Thirdly, ‘self-fulfilling prophecy’ suggests a pathology, that some false belief was made to be true. But “It is emphatically not my intention to imply that in respect to finance theory”. To say that the Black-Scholes-Merton option-pricing theory is performative is not to say that you could come up with any old set of equations and they would “make themselves true” – and I think this is an important point MacKenzie is making. Had the formula lost its users money, it would have been dropped like a stone. The point is that the Black-Scholes-Merton model established not merely a way of working out the theoretical price of options, but also an account of the processes determining those prices. It established the way economists thought about a range of issues – “It affected how market participants and regulators thought about options, and it still does so, even if the phase of Barnesian performativity of the original Black-Scholes-Merton model has passed.”

But how would we detect Barnesian performativity? By comparing market conditions and patterns of prices before and after a model was widely adopted. If these conditions and prices moved into greater conformity with the model, we would have a candidate example of Barnesian performativity. We would not have proved performativity, as the shift could have happened for other reasons, but it would be a hint along the way. Yet this is a complex process: it is difficult to judge whether patterns of market prices have moved towards or away from what Black-Scholes-Merton, for example, predicts they should be, because what such a model actually predicts is itself difficult to ascertain. The model “yields an option price only after the characteristics of the option and the values of the parameters of the Black-Scholes-Merton equation have been set”, and one of these parameters, the volatility of the stock, is not directly observable. Problems come from ‘the other end’ as well – market prices are often difficult to pin down in the ‘real world’, which forces a detour into the philosophy of science. The ‘world price’ of cotton, for example, is the result of averaging and a variety of subjective calls. Even when we restrict the data set to the prices at which transactions were actually made, we face a slew of problems. Mark Rubinstein got hold of the Chicago Exchange Board’s transaction data but still faced difficulties: options would trade at various prices while the underlying stock did not change at all, so Rubinstein was forced to take a weighted average. This involved excluding data he saw as problematic – the scrambles in the first and last 1,000 seconds of the day – the former because transactions hanging over from the previous day were still being executed, the latter because prices were being pushed around by traders trying to influence the market makers.
Both these types of transaction were deemed “artificial pricing”. This might look like a simple statistical choice, but it embedded a sense of normativity in the data – the most frantic periods of trading were treated as not ‘natural’. MacKenzie drops in Latour: sometimes the words we use don’t really refer to discrete, easily observable things. Reproduction and replication in econometrics are also just as problematic as in the natural sciences. A test might contradict a previous one, with no real way of telling whether the first test was faulty, or the new one, or some external variable such as historical variation is responsible. Theories require auxiliary assumptions in addition to the theory itself, so a result that appears to falsify a model might be due to those assumptions being wrong. Hence things like ‘cleaning’ data, although required for any attempt at science, not just economic science, involve theory. The original data against which finance theory was tested were the records collected by the Center for Research in Security Prices at the University of Chicago. Sifting this data required an algorithm to work out whether data-entry errors had occurred, but this meant assuming, somewhat arbitrarily, that an error had occurred whenever a ‘large’ change appeared. Testing whether the “real world” moves towards a theory is tough. Should we abandon the attempt? No:

to do that would abandon a central question: Has finance theory helped create the world it posited – for example, a world that has been altered to conform better to the theory’s initially unrealistic assumptions?
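The kind of ‘large change’ error filter described above can be sketched in a few lines; the 20% threshold and the function name are my own illustrative choices, not CRSP’s actual algorithm:

```python
# Toy version of a data-'cleaning' rule: treat any price that jumps by
# more than max_rel_change from the previous observation as a suspected
# data-entry error. The threshold is arbitrary - exactly the theory-laden
# choice discussed in the text.

def flag_suspect_prices(prices, max_rel_change=0.20):
    """Return the indices of prices whose relative change from the
    previous observation exceeds max_rel_change."""
    suspects = []
    for i in range(1, len(prices)):
        rel_change = abs(prices[i] - prices[i - 1]) / prices[i - 1]
        if rel_change > max_rel_change:
            suspects.append(i)
    return suspects

series = [100.0, 101.0, 99.5, 140.0, 100.5, 100.0]
print(flag_suspect_prices(series))  # [3, 4]
```

Note that a genuine jump to 140.0 gets flagged twice – on the way up and on the way back down – even if it reflected real news rather than a typo; this is the sense in which ‘cleaning’ the data already embeds a theory of how prices ought to behave.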

Whether the myriad sociologists, political philosophers, activists, anti-capitalists and theologians are right in saying that economics has remade the world in its own image is a vital question, and an imperative one despite the methodological problems. Contributing to these kinds of political debates about the usefulness of markets seems to be MacKenzie’s aim:

To try to understand how finance theory has “aligned, transformed [and] constructed its world – which is also everyone’s world, the world of investment, savings, pensions, growth, development, wealth, and poverty” – may, in contrast, contribute something to conversations about markets.

This book will contribute to these debates as well as tracing a technological history of markets. It draws extensively on interviews with participants, triangulated for accuracy. MacKenzie then gives an overview of the book; I will skip some of the detail here, as we are going to be looking at each of the eight remaining chapters in real detail later.


4 Responses to “An Engine Not A Camera – Chapter 1 – Performing Theory?”

  1. Alex Says:

    Hope this isn’t too long. It is quite important I felt at this stage to ensure everyone knows what MacKenzie is and is not doing and why.

  2. Nick Says:

    Interesting take on the invention of the grain elevator. Echoes analyses in William Brown’s book, “American Colossus: the Grain Elevator, 1843 to 1943” (Colossal Books, 2009), which not only concerns market formation, but urbanism and modernity, as well.

  3. Jared Says:

    Thanks for this detailed discussion. I really hope you’re able to keep posting on this.

    I also hope MacKenzie’s argument doesn’t depend this much on claims about market microstructure.

    1. “Chicago Exchange Board” – Options on most Chicago Mercantile Exchange products trade electronically 24 hours, 5 days per week. But even for Chicago Board Options Exchange equity options and other products with fewer trading hours, beginning and end of session blips simply aren’t that big of a deal, and even if they were it wouldn’t be because of some flaw in or performative effect of the pricing model. Exceptions – when some mutual fund has a massive “market on close” order that gives you a price spike – prove the rule. And if prices at 9:31 or 3:59 are “artificial,” then so are prices at noon.

    2. “One of these parameters, the volatility of a stock is not acknowledged to be observable.” Acknowledged by whom? 30-day historical volatility and present implied volatility have a correlation north of .8 for equity indexes, which means you can get the “close enough” theoretical price of an option using only recent price data. Additionally, you don’t have to commit to a model to get implied vol: there are model-free estimates, most prominently the VIX and VIX-style indexes published by the CBOE.

    3. “Options would trade at various prices when the underlying stock did not change at all” – since price is only one of the variables that affect the price of an option, one would certainly hope so!

    4. The need for data-cleaning is a non-issue. Many exchanges have “realistic price” filters so that you can’t offer an illiquid contract worth $10 for $10000 and catch me out if I send an order to buy at the market price. You can check a weird data point in one asset against the data for other highly correlated assets. And whenever I have to clean data, it’s usually because an entry is missing entirely, or somebody added an extra zero or two, or something obvious and stupid like that. One of the ways the SEC catches insider trading (when they choose to make the occasional token effort) is precisely by filtering and finding outlier prices that don’t fit expectations. Those suspiciously-expensive put options on a company that’s going to announce terrible earnings tomorrow aren’t erroneous data points that somehow fail to reflect reality – they’re actually more accurate than the other prices on the board precisely because the buyers have more information.

    I don’t accept the Efficient-market Hypothesis on either macro or micro levels. But I also don’t see how microstructural statistical noise helps establish the substantive claims of the book.

  4. Alex Says:

    Thanks very much for your comments and I really hope you continue to comment. I will hopefully have the second chapter up tomorrow.

    I think by the end of the book I’m going to be mostly unsatisfied that he has empirically grounded his thesis. Which is a bit of a problem.

    As regards point 2, this is discussed in the next chapter. I can’t remember precisely why, but it is to do with the construction of the equation. As to 3, what I mean is that at the exact same time the stock had two different prices – as the book says, “can the equilibrium price please stand up”. Point 4 will be discussed in a ridiculous amount of detail in a later chapter.


Comments are closed.