Friday 29 March 2019

Price versus policy for reducing emissions

At the New Zealand Initiative, Matt Burgess makes an important point about the advantages of using price rather than policy to reduce emissions. Prices act as information signals that allocate resources to their best uses in the economy. This, as Matt points out, means that prices can easily do what policymakers find almost impossible: deal with the trade-offs involved in choosing between different methods of reducing emissions. When should we stop investing in one particular method of emissions reduction and invest in other methods instead? Policymakers find this an almost insurmountable problem.
When governments want to reduce emissions, they have a choice between using policy or price.

Policy includes rules – for example, 100% of electricity must be generated from renewables – as well as incentive payments, such as electric vehicle subsidies.

Alternatively, governments can price carbon using cap-and-trade, or tax carbon directly.

The fact that emissions occur in millions of places in the economy strongly affects the relative performance of policy and price.

Consider the following question: At what share of renewable electricity does further investment in renewable electricity cease to be competitive with other ways of reducing emissions?

For policymakers, this is an astonishingly difficult question.

It is not just a matter of working out how the per tonne carbon abatement cost rises as the share of renewables approaches 100%. That is hard enough.

It is also about understanding the consequences for downstream users of electricity, who comprise the rest of the economy.

At a very high share of renewables, the cost of electricity will tend to increase. For downstream users, that affects emissions: If electricity costs more, they will be less willing to switch from petrol to electric vehicles, or to switch their industrial processes from coal or gas to electricity.

For policymakers, working out how the share of renewables affects overall emissions is impossibly complicated.

But for a carbon price, whether through cap-and-trade or a tax, discovery of the ‘right’ share of renewable electricity is easy.

Confronted with the relative cost of emissions-intensive coal and gas generation against green alternatives, buyers of electricity decide their willingness to pay.

For some users, green energy is attractive. For others, coal and gas have real advantages, which means a high willingness to pay.

For a problem like emissions, price enables discovery of the answer to non-obvious questions like how much coal and gas generation to retain. Price can access information that is lost to top-down policy.
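The cost-discovery point can be made concrete with a small numerical sketch. The two sectors, their quadratic abatement-cost curves and the abatement target below are all invented for illustration; the only point being made is that a uniform carbon price leads each emitter to abate until its marginal cost equals the price, which equalises marginal costs across sources and so hits any given reduction at least total cost, while a uniform mandate that ignores relative costs does not.

```python
# Illustrative only: two emitters with made-up abatement cost curves.
# Cost of abating q tonnes: C(q) = a * q^2, so marginal cost is MC(q) = 2*a*q.

def abatement_at_price(a, price):
    """Each emitter abates until its marginal cost 2*a*q equals the carbon price."""
    return price / (2 * a)

def total_cost(a, q):
    return a * q ** 2

a_power, a_transport = 1.0, 4.0   # transport abatement is 4x as costly at the margin
target = 30.0                     # total tonnes to be abated

# Price-based: find the price at which total abatement hits the target.
# q_power + q_transport = p/2 + p/8 = target
price = target / (1 / (2 * a_power) + 1 / (2 * a_transport))
q_p = abatement_at_price(a_power, price)
q_t = abatement_at_price(a_transport, price)
cost_price = total_cost(a_power, q_p) + total_cost(a_transport, q_t)

# Policy-based: mandate that each sector abate the same amount.
cost_mandate = total_cost(a_power, target / 2) + total_cost(a_transport, target / 2)

print(f"carbon price: ${price:.2f}/tonne")
print(f"abatement split: power {q_p:.1f}t, transport {q_t:.1f}t")
print(f"total cost under price:   ${cost_price:.2f}")
print(f"total cost under mandate: ${cost_mandate:.2f}")
```

With these made-up curves the price automatically loads most of the abatement onto the low-cost sector, without anyone needing to know the cost curves in advance; the equal-shares mandate achieves the same total reduction at roughly 56% higher cost.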

Policy’s disadvantage is measurable. A survey of the literature on the performance of government emissions reduction programmes reveals governments spend perhaps $5 to avoid harm from emissions worth $1, on average.

Under cap-and-trade like an Emissions Trading Scheme (ETS), retaining coal and gas generation does not increase overall emissions. These high-emissions generators stay in business only by outcompeting alternative emissions sources for the right to emit.
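The mechanics of that point can also be sketched numerically. The emitters and their per-tonne permit values below are invented; the sketch only shows that under a fixed cap, permits flow to whoever values them most (which is what an auction or secondary trading accomplishes), and total emissions equal the cap regardless of who ends up holding the permits.

```python
# Illustrative only: a fixed cap allocated by willingness to pay.
# Each emitter's value per tonne of emitting (e.g. profit per tonne) is made up.

cap = 100  # total permits (tonnes), fixed by the scheme

emitters = {
    "coal_plant": {"value_per_tonne": 80, "tonnes_wanted": 60},
    "gas_plant":  {"value_per_tonne": 50, "tonnes_wanted": 50},
    "cement":     {"value_per_tonne": 30, "tonnes_wanted": 40},
}

# Permits go to the highest-value uses first, as trading would arrange.
allocation = {}
remaining = cap
for name, e in sorted(emitters.items(),
                      key=lambda kv: kv[1]["value_per_tonne"], reverse=True):
    got = min(e["tonnes_wanted"], remaining)
    allocation[name] = got
    remaining -= got

total_emissions = sum(allocation.values())
print(allocation)       # coal stays in business by outbidding other emitters...
print(total_emissions)  # ...but total emissions still equal the cap
```

In this toy allocation the coal plant keeps emitting only because it outbids lower-value emitters for permits; if its per-tonne value fell below cement's, the same mechanism would reallocate the permits, and total emissions would still sum to the cap.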

The government's recent description of the ETS as its “main tool” for achieving its emissions targets is a step in the right direction.

Monday 25 March 2019

Social media's content challenge

Moderating content in a polarized political climate while also respecting the value of free speech is a challenge still vexing social media companies. Thomas Kadri of the Yale Information Society Project comments.

Friday 8 March 2019

A voluminous congressional attack on free political speech

A massive new plan unveiled by Democrats is a wish list of restrictions on free political speech. If this kind of attack on free speech can occur in the US, why not in New Zealand?

Tuesday 5 March 2019

Jacob Vigdor on the Seattle Minimum Wage

An interesting discussion of many of the issues surrounding the effects of minimum wage increases, how to identify those effects, and how to estimate their size. Well worth the hour to listen to.

Jacob Vigdor of the University of Washington talks with EconTalk host Russ Roberts about the impact of Seattle's minimum wage increases in recent years. Vigdor, along with others from the Evans School of Public Policy and Governance, has tried to measure the change in employment, hours worked, and wages for low-skilled workers in Seattle. He summarizes those results here, arguing that while some workers earned higher wages, some or all of the gains were offset by reductions in hours worked and a reduction in the rate of job creation, especially for low-skilled workers.

Monday 4 March 2019

Partial versus general equilibrium: the theory of the firm example

A point worth making about the modern models of the theory of the firm is that they illustrate a general issue to do with post-1970 microeconomics, namely, the retreat from the use of general equilibrium (GE) models.

Historian of economic thought Roger Backhouse writes that
 “[i]n the 1940s and 1950s general equilibrium theory [ ...] became seen as the central theoretical framework around which economics was based” (Backhouse 2002: 254)
and that by the
“[ ...] early 1960s, confidence in general equilibrium theory, and with it economics as a whole, was at its height, with Debreu’s Theory of Value being widely seen as providing a rigorous, axiomatic framework at the centre of the discipline” (Backhouse 2002: 261), 
but
“[ ...] there were problems that could not be tackled within the Arrow-Debreu framework. These include money (attempts were made to develop a general-equilibrium theory of money, but they failed), information, and imperfect competition. In order to tackle such problems, economists were forced to use less general models, often dealing only with a specific part of the economy or with a particular problem. The search for ever more general models of general competitive equilibrium, that culminated in Theory of Value, was over” (Backhouse 2002: 262).
As early as 1955 Milton Friedman was suggesting that to deal with “substantive hypotheses about economic phenomena” a move away from Walrasian towards Marshallian analysis was required. When reviewing Walras’s contribution to GE, as developed in Walras’s famous Elements of Pure Economics, Friedman argued,
“[e]conomics not only requires a framework for organizing our ideas [which Walras provides], it requires also ideas to be organized. We need the right kind of language; we also need something to say. Substantive hypotheses about economic phenomena of the kind that were the goal of Cournot are an essential ingredient of a fruitful and meaningful economic theory. Walras has little to contribute in this direction; for this we must turn to other economists, notably, of course, to Alfred Marshall” (Friedman 1955: 908).
By the mid-1970s microeconomic theorists had largely turned away from Walras and back to Marshall, at least insofar as they returned to using partial equilibrium analysis to investigate economic phenomena such as strategic interaction, asymmetric information and economic institutions.

All the models considered in this book are partial equilibrium models, but in this regard the theory of the firm is no different from most of the microeconomic theory developed since the 1970s. Fields such as incentive theory, incomplete contract theory, game theory, industrial organisation and organisational economics have largely turned their backs, presumably temporarily, on GE theory and have worked almost exclusively within a partial equilibrium framework. This illustrates the point that there is a close relationship between the economic mainstream and the theory of the firm: when the mainstream forgoes general equilibrium, so does the theory of the firm.

One major path of influence from the mainstream of modern economics to the development of the theory of the firm has been via contract theory. But contract theory is an example of the mainstream’s increasing reliance on partial equilibrium modelling. Contract theory grew out of the failures of GE. As Salanie (2005: 2) has argued,
“[t]he theory of contracts has evolved from the failures of general equilibrium theory. In the 1970s several economists settled on a new way to study economic relationships. The idea was to turn away temporarily from general equilibrium models, whose description of the economy is consistent but not realistic enough, and to focus on necessarily partial models that take into account the full complexity of strategic interactions between privately informed agents in well defined institutional settings”.
The Foss, Lando and Thomsen classification scheme for the theory of the firm clearly illustrates the movement of the current theory of the firm literature away from GE towards partial equilibrium analysis. The scheme divides the contemporary theory into two groups based on which of the standard assumptions of GE theory is violated when modelling issues to do with the firm. The theories are divided into either a principal-agent group, based on violating the ‘symmetric information’ assumption, or an incomplete contracts group, based on the violation of the ‘complete contracts’ assumption. The reference point approach extends the incomplete contracts grouping to situations where ex-post trade is only partially contractible.

The introduction of the entrepreneur, as in the models proposed by Silver, Spulber and by Foss and Klein, also challenges, albeit in a different way, the standard GE model since, as William Baumol noted more than 40 years ago, the entrepreneur has no place in formal neoclassical theory.
“Contrast all this with the entrepreneur’s place in the formal theory. Look for him in the index of some of the most noted of recent writings on value theory, in neoclassical or activity analysis models of the firm. The references are scanty and more often they are totally absent. The theoretical firm is entrepreneurless−the Prince of Denmark has been expunged from the discussion of Hamlet” (Baumol 1968: 66).
The reasons for this are not hard to find. Within the formal model, the ‘firm’ is a production function or production possibilities set: it is simply a means of creating outputs from inputs. Given input prices, technology and demand, the firm maximises profits subject to its production plan being technologically feasible. The firm is modelled as a single agent who faces a set of relatively uncomplicated decisions, e.g. what level of output to produce, how much of each input to utilise, etc. Such ‘decisions’ are not decisions at all; they are simple mathematical calculations, implicit in the given conditions. The ‘firm’ can be seen as a set of cost curves and the ‘theory of the firm’ as little more than a calculus problem. In such a world there is a role for a ‘decision maker’ (manager) but no role for an entrepreneur.
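The sense in which the textbook firm's ‘decisions’ are just calculations implicit in the given conditions can be illustrated with a made-up production function. With f(x) = √x, output price p and input price w, profit p·f(x) − w·x is maximised where p·f′(x) = w, which solves to x* = (p/2w)²; nothing in the solution calls for judgment, let alone entrepreneurship.

```python
# Illustrative only: the neoclassical 'firm' as a calculus problem.
# Made-up production function f(x) = sqrt(x), output price p, input price w.

def optimal_input(p, w):
    """First-order condition p * f'(x) = w, with f'(x) = 1/(2*sqrt(x)),
    solves to x* = (p / (2*w))**2."""
    return (p / (2 * w)) ** 2

def profit(p, w, x):
    return p * x ** 0.5 - w * x

p, w = 10.0, 2.5
x_star = optimal_input(p, w)         # (10 / 5)^2 = 4 units of input
print(x_star, profit(p, w, x_star))  # everything follows mechanically from p and w

# A brute-force search over a grid finds the same answer -- the 'decision'
# really is just arithmetic.
grid = [i / 100 for i in range(1, 1001)]
best = max(grid, key=lambda x: profit(p, w, x))
print(best)
```

The grid search and the first-order condition agree exactly, which is the point: once prices and technology are given, the ‘firm’ has nothing left to decide.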

The necessity of violating basic assumptions of GE theory in order to model the firm suggests that, as it stands, GE cannot deal easily with firms or other important economic institutions. Bernard Salanie has noted that,
“[ ...] the organization of the many institutions that govern economic relationships is entirely absent from these [GE] models. This is particularly striking in the case of firms, which are modeled as a production set. This makes the very existence of firms difficult to justify in the context of general equilibrium models, since all interactions are expected to take place through the price system in these models” (Salanie 2005: 1).
This would suggest that if GE models are to become a ubiquitous tool of microeconomic analysis - including the analysis of issues to do with non-market organisations such as the firm - then developing models which can account for information asymmetries, contractual incompleteness, strategic interaction, the existence of institutions and the like is not so much desirable as essential. One catalyst for developing such a new approach to GE is that partial equilibrium models can obscure the importance of the theory of the firm for overall resource allocation, a point more easily appreciated in a GE framework.