Tuesday, 30 April 2013

The battle of Bretton Woods

From VoxEU.org comes this audio in which Benn Steil of the Council on Foreign Relations talks to Romesh Vaitilingam about his book ‘The Battle of Bretton Woods: John Maynard Keynes, Harry Dexter White, and the Making of a New World Order’. They discuss the ‘realpolitik’ of the 1944 conference and the scheming of the two central characters, as well as lessons for today’s efforts to reform the Eurozone and the international monetary system.

EconTalk this week

James Galbraith of the University of Texas and author of Inequality and Instability talks with EconTalk host Russ Roberts about inequality. Galbraith argues that much of the mainstream analysis of inequality in the economics literature is flawed. Galbraith looks at a variety of different measures and ways of analyzing income data. In the podcast he focuses on how much of measured inequality is due to changes in specific counties or industries. Other topics discussed include the state of economics in the aftermath of the Great Recession and the importance of the government safety net and other social legislation.

Why parties may deliberately write incomplete contracts

Incomplete contracts are big news in economic theory these days. Economists and lawyers would generally agree that almost all contracts are incomplete. It is simply too costly for the contracting parties to anticipate the many contingencies that may occur and to write down unambiguously how to deal with them. Incomplete contracts have been used to explain, among other things, the boundaries of firms, the internal organisation of firms, firms’ financial decisions, the costs and benefits of privatisation, and the choice between inter- and intra-firm trade in the organisation of international trade. But there is an obvious question you have to ask: Why are contracts incomplete?

In a new NBER working paper, More is Less: Why Parties May Deliberately Write Incomplete Contracts, Maija Halonen-Akatwijuka and Oliver D. Hart look at this question.

First what doesn't (fully) work when thinking about the reasons for incompleteness:
The idea that transaction costs or bounded rationality are a total explanation for this is not convincing. In many situations some states of the world or outcomes are verifiable and easy to describe, appear relevant, and yet are not mentioned in a contract. A leading example is a breach penalty. A contract will usually specify the price the buyer should pay the seller if trade occurs as intended, but may not say what happens if there is a breach or under what conditions breach is justified. Of course, sophisticated parties often do include breach penalties in the form of liquidated damages but this is far from universal.

A second example concerns indexation. Since a worker’s marginal product varies with conditions in the industry she works in as well as the economy as a whole we might expect to see wages being indexed on variables correlated with industry profitability such as share prices or industry or aggregate unemployment, as well as to inflation. Such an arrangement might have large benefits, allowing wages to adjust and avoiding inefficient layoffs and quits of workers (see, e.g., Weitzman (1984) and Oyer (2004)). Indeed Oyer (2004) argues that high tech firms grant stock options to employees to avoid quits. Yet the practice does not seem a common one overall. Similarly, in the recent financial crisis many debt contracts were not indexed to the aggregate state of the economy; if they had been the parties might have been able to avoid default, which might have had large benefits both for them and for the economy as a whole.

How do we explain the omission of contingencies like these from a contract? One possibility is to argue that putting any contingency into a contract is costly – some of these costs may have to do with describing the relevant state of the world in an unambiguous way – and so if a state is unlikely it may not be worth including it (see, e.g., Dye (1985), Shavell (1980)). This is often the position taken in the law and economics literature (see, e.g., Posner (1986, p. 82)). However, this view is not entirely convincing. First, states of the world such as breach are often not that unlikely and not that difficult to describe. Second, while the recent financial crisis may have been unlikely ex ante, now that it has happened the possibility of future crises seems only too real. Moreover, finding verifiable ways to describe a crisis does not seem to be beyond the capability of contracting parties. Thus one might expect parties to rush to index contracts on future crises. We are not aware of any evidence that this is happening.

A second possibility is to appeal to asymmetric information (see, e.g., Spier (1992)). The idea is that suggesting a contingency for inclusion in a contract may signal some private information and this may have negative repercussions. Such an explanation does not seem very plausible in the case of financial crises – where is the asymmetry of information about the prospects of a global crisis? – but it may apply in other cases. For example, if I suggest a (low) breach penalty you may deduce that breach is likely and this may make you less willing to trade with me. Or if you suggest that my wage should fall if an industry index of costs rises I may think that you are an expert economist who already knows that the index is likely to rise.

Even in these cases asymmetric information does not seem to be a complete answer. Asymmetric information generally implies some distortion in a contract but not that a provision will be completely missing. For example, in the well-known Rothschild-Stiglitz (1976) model, insurance companies offer low risk types less than full insurance to separate them from high risk types. But the low risk types are not shut out of the market altogether – they still obtain some insurance (and the high risk types receive full insurance). Indeed to explain why a contingency might be omitted from a contract Spier assumes a fixed cost of writing or enforcing contractual clauses in addition to asymmetric information.
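Purely as an illustration of the kind of indexation being discussed, a simple pass-through rule can be sketched. The functional form, numbers and parameter names here are my assumptions, not anything from the paper:

```python
# Illustrative wage indexation rule: the wage moves with an industry
# index relative to the index's level when the contract was signed.
def indexed_wage(base_wage, index_now, index_at_signing, weight=1.0):
    # weight = 0 reproduces a fixed (unindexed) wage;
    # weight = 1 passes index movements through to the wage fully.
    return base_wage * (1 + weight * (index_now / index_at_signing - 1))

# A 10% rise in the index with full pass-through raises the wage 10%.
print(indexed_wage(1000.0, 110.0, 100.0))            # prints 1100.0
print(indexed_wage(1000.0, 110.0, 100.0, weight=0))  # prints 1000.0
```

The puzzle the paper addresses is why even a small positive weight, which would let wages adjust and avoid inefficient layoffs, is so rarely written into contracts.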
What do Halonen-Akatwijuka and Hart offer that is new? They analyse when and why parties will deliberately write incomplete contracts even when contract-writing costs are zero. Their approach is based upon the ideas first formulated in Hart and Moore (2008), which sees contracts as "reference points". Under this approach a contract is viewed as delineating what parties feel they are entitled to. Importantly, the parties to a contract do not feel entitled to outcomes which are outside the set of outcomes specified by the contract, but they may feel entitled to different outcomes within those specified in the contract. If a party does not receive what he feels entitled to he is "aggrieved" and "shades" (or cuts back) on performance. This creates deadweight losses. Halonen-Akatwijuka and Hart write,
We have argued that adding a contingency of the form, “The buyer will require an extra good or service in event E”, has a benefit and a cost. The benefit is that there is less to argue about in event E; the cost is that the reference point provided by the extra service in event E may increase argument costs in states outside E. Similarly indexing a price or wage to an exogenous variable has the benefit that if this variable tracks the buyer’s value and seller’s cost closely then breakdown in trade can be avoided; but the cost that if the index does not track value and cost closely the reference point provided by the indexation may make renegotiation harder when trade does break down.

Our principal result is that the relative benefit and cost of adding a contingency or indexing will be sensitive to how closely the parties agree about what is a reasonable division of surplus when an incomplete contract is renegotiated. The benefit is likely to exceed the cost when parties have very different views about what is a reasonable division of surplus, but the opposite will be the case if they have shared views. Under the latter conditions an incomplete contract will be strictly optimal. Our results can shed light on why wage indexation, although observed in some situations (see Card (1986) and Oyer (2004)), is not more common.

It is worth considering how our theory’s implications differ from those of a theory based on asymmetric information. Consider the Nanny example [see below] in the introduction where the question is why a late fee is not introduced. The asymmetric information explanation would be that introducing the late fee might signal to the Nanny that the employer knows that he is unpunctual, which makes the job less attractive. But this problem could presumably be solved through the choice of a high late fee. Or take the case of wage indexation. If an employee is offered a contract whereby the wage is indexed on some signal, the employee might think that the employer already knows that the signal will be such that the employee’s wage is low, making the contract less attractive. But this would suggest that in an optimal contract the wage should not vary much with the index, not that it should not vary at all. Only by introducing costs of contractual clauses (as in Spier (1992)) can one explain a complete lack of indexation.

In contrast in our theory, introducing a late fee or any amount of indexation has a discontinuous effect: it introduces a brand new reference point. We have seen that in some circumstances the cost of doing this outweighs the benefit.

Our theory also has different implications from the asymmetric information one regarding the timing of incompleteness. Signaling favorable private information is particularly important at the beginning of a relationship. In our theory one possible explanation for similar views about the division of surplus is the history of the relationship between the buyer and the seller. If the parties have interacted before they may have grown to know and like each other, with the implication that each will become more generous about sharing surplus (see the social influence theory of Kelman (1958)). Therefore we would expect contracts to become less complete in long-term relationships, but be more complete when such relationships are formed -- in contrast to the asymmetric information theory.

Finally, our approach may also be able to explain why parties often use general rather than specific language in contracts. For example, parties negotiating acquisitions frequently include a clause that excuses the buyer if the target seller suffers a “material adverse change” (see Schwartz and Scott (2010)). According to our theory the advantage of a general clause is that it creates a neutral reference point: In terms of the model of Section 2 it is like describing states s2-s4, rather than event E, as a situation where the add-on should be provided. In contrast spelling out particular contingencies that qualify as a material adverse change may complicate renegotiation in other contingencies that are not easily described but where the parties also intended to excuse the buyer. Asymmetric information theories do not seem to have much to say about this issue.
The Nanny example,
Suppose that you hire a Nanny to work Monday-Friday from 9am-5pm for $600 per week ($15 per hour). There is a chance that you will get stuck in traffic and will be late. Should you include a late fee of, say, $30 per hour in the Nanny’s contract? (Being late is a verifiable contingency.) Including the late fee could prevent bad feelings later on about how much the Nanny should be paid when you are late. But if you include the late fee, it may create some expectation by the Nanny concerning what she should receive if, say, you need her to work on the weekend. (There may be several reasons for you to want her to work on the weekend—some business, some pleasure— and it may be difficult to distinguish between these in advance.) She might feel that $30 per hour is the appropriate reference point for such an arrangement, whereas you might feel that $15 per hour is. If you and the Nanny have similar views about what is reasonable absent a reference point, it may be better to leave the late fee out and renegotiate as needed.
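The arithmetic of the example, and the gap between the two reference points, can be sketched as follows (the figures come from the example; the eight-hour weekend day is an assumption for illustration):

```python
# Nanny example: base contract is 5 days x 8 hours at $15/hour.
hours_per_week = 5 * 8
hourly_wage = 15
weekly_pay = hours_per_week * hourly_wage  # $600, as in the contract

# If a $30/hour late fee is written in, it may become the Nanny's
# reference point for other extra work, e.g. a weekend day.
weekend_hours = 8
nanny_expects = 30 * weekend_hours     # $240: anchored on the late fee
employer_expects = 15 * weekend_hours  # $120: anchored on the base wage

# The gap between the two reference points is what the parties would
# argue over; in Hart and Moore (2008) aggrievement over such gaps
# leads to "shading" and deadweight loss.
gap = nanny_expects - employer_expects
print(weekly_pay, nanny_expects, employer_expects, gap)  # 600 240 120 120
```

Leaving the late fee out keeps the weekend negotiation anchored only on the $15 base wage the parties already share, which is the sense in which less can be more.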

Sunday, 28 April 2013

50 years of "A Behavioral Theory of the Firm"

In his latest weekly newsletter from economicprincipals.com David Warsh writes about attending a program celebrating the fiftieth anniversary of the publication of A Behavioral Theory of the Firm by Richard Cyert and James March. Warsh says,
The corporate landscape today is, of course, all but unrecognizable compared to what it was when March and Cyert wrote in 1963. The outlines of any number of vivid company stories loomed as I listened to the panels and papers at the conference. In fact, a bountiful new field of organizational economics has grown up in the nexus of the interest in organizations that March shared with Kenneth Arrow, Ronald Coase, Oliver Williamson, Herbert Simon and Sidney Winter.

That field was an empty lot when March started, not long after finishing his PhD, at Yale. A Behavioral Theory of the Firm was a broadside aimed at standard textbook economics, especially the branch of it known as the theory of the firm. The book begins,
The “firm” of the theory of the firm has few of the characteristics that we have come to identify with actual business firms. It has no complex organization, no problems of control, no standard operating procedures, no budget, no controller, no aspiring “middle management.”
In fact the neoclassical model can be seen as a model with no firms at all! As Nicolai Foss remarks,
“With perfect and costless contracting, it is hard to see room for anything resembling firms (even one-person firms), since consumers could contract directly with owners of factor services and wouldn’t need the services of the intermediaries known as firms”.
Behavioural models of the firm have been developed since the 1950s. In these models it is assumed that there is a separation between ownership and control. Behavioural theorists consider the consequences of conflict between self-interested groups within firms for the way in which firms make decisions on price, output etc. The emphasis in these models is on the internal relations of the firm, with little attention being paid to the external relations between firms.

Although some of the seminal work on the behavioural theories can be traced back to work by Herbert Simon in the 1950s, the theory was largely developed by Cyert and March in their book A Behavioral Theory of the Firm, and it is with their names that the theory remains associated today.

In behavioural theory the corporation has a multiplicity of different goals. Ultimately these goals are set by top management via a continual process of bargaining between the groups within the firm. An important point here is that the goals take the form of aspiration levels rather than strict maximisation constraints. Attainment of the aspiration level ‘satisfices’ the firm: the behavioural firm’s behaviour is ‘satisficing’ in contrast to the maximising behaviour of the traditional firm. The firm seeks levels of profits, sales, rate of growth etc that are ‘satisfactory’, not those that are maxima. Satisficing is seen as rational behaviour given the limited information, time and computational skills of the firm’s management. The behavioural theory redefines rationality: rationality is now ‘bounded rationality’.

Cyert and March argue that there are two sources of uncertainty that a firm has to deal with. The first is uncertainty that arises from changes in market conditions, that is, from changes in tastes, products and methods of production. The second is uncertainty arising from the behaviour of competitors. According to the behavioural theory the first form of uncertainty is avoided, as much as it can be, by search activity, by spending on R&D and by concentrating on short-term planning. A difference between the traditional and behavioural theories is the importance given in the behavioural theory to the short run, at the expense of the long run. To avoid competitor-originated uncertainty, Cyert and March argue that firms operate within a ‘negotiated environment’, that is, firms act collusively with their competitors.

The instruments the behavioural firm uses in decision-making are the same as in the traditional theories. Both theories consider output, price and sales strategy as the major instruments. The difference between the theories lies in the way firms choose the values of these instruments. In the neoclassical theory such values are selected so as to maximise long-run profits. In the behavioural theory the choice is made so that the outcome is the ‘satisficing’ level of sales, profits, growth etc. The behavioural theory also assumes that the firm learns from its experience. In the beginning a firm isn't a rational institution in the neoclassical sense of ‘global’ rationality. In the long run the firm may tend towards global rationality, but in the short run there is an important adaptive process of learning. Firms make mistakes; there is trial and error from which the firm learns. In a sense the firm has memory and learns via its past experience.

An aspect of the firm neglected by the traditional theory is the allocation of resources within the firm and the decision-making process that leads to that allocation. In the neoclassical theory the firm reacts to its environment, the market, while the behavioural theory assumes that firms have some discretion and do not take the constraints of the market as definite and impossible to change. The important point here is that the behavioural theory looks at the mechanisms for the allocation of resources within the firm, while the neoclassical theory examines the role of the market, or price, mechanism for the allocation of resources between the different sectors of the economy. The concept of ‘slack’ is used by Cyert and March to refer to payments made to groups within the organisation over and above those needed to keep that group in the organisation. Slack is therefore the same as ‘economic rent’ accruing to a factor of production in the traditional theory of the firm. What is significant about the behavioural school is their analysis of the stabilising role of ‘slack’ on the activities of the firm. Changes in slack payments in periods of good and bad business mean that the firm can maintain its aspiration levels despite the changes to its environment.

The behavioural theories can be seen as an early attempt to develop a theory of the firm at the level of the individual firm, a theory which, as Oliver Williamson has said of the Cyert and March (1963) book, was an attempt to “pry open what had been a black box, thereupon to examine the business firm in more operationally engaging ways”. But the success of this attempt was limited in economics. Williamson’s interaction with people such as Herbert Simon, Richard Cyert and James March while he was at Carnegie-Mellon University did play a role in the development of the transaction cost theory of the firm but outside of this the behavioural/managerial theories have had little effect on the mainstream economic theories of the firm. Consider the representation of the firm in standard microeconomics textbooks. If you look at both undergraduate and graduate microeconomics textbooks, it is difficult to find a discussion of behavioural or managerial models. Koutsoyiannis (1979) is one of the few that gives serious attention to these models, and it is now more than 30 years old.

In fact the impact of the behavioural theories may have been greater in management than economics. Argote and Greve claimed in a 2007 paper that A Behavioral Theory of the Firm “continues to be one of the most influential management books of all time”.

Friday, 26 April 2013

Reinhart and Rogoff: responding to criticisms

From the New York Times:
  • Debt, Growth and the Austerity Debate
    Last week, three economists at the University of Massachusetts, Amherst, released a paper criticizing our findings. They correctly identified a spreadsheet coding error that led us to miscalculate the growth rates of highly indebted countries since World War II. But they also accused us of “serious errors” stemming from “selective exclusion” of relevant data and “unconventional weighting” of statistics — charges that we vehemently dispute.
    The academic literature on debt and growth has for some time been focused on identifying causality. Does high debt merely reflect weaker tax revenues and slower growth? Or does high debt undermine growth?

    Our view has always been that causality runs in both directions, and that there is no rule that applies across all times and places.
  • Reinhart and Rogoff: Responding to Our Critics
    These critics, Thomas Herndon, Michael Ash and Robert Pollin, identified a spreadsheet calculation error, but also accused us of two “serious errors”: “selective exclusion of available data” and “unconventional weighting of summary statistics.”

    We acknowledged the calculation error in an online statement posted the night we received the article, but we adamantly deny the other accusations.

    They neglected to report that we included both median and average estimates for growth, at various levels of debt in relation to economic output, going back to 1800. Our paper gave significant weight to the median estimates, precisely because they reduce the problem posed by data outliers, a constant source of concern when doing archival research that reaches far back into economic history spanning several periods of war and economic crises.

    When you look at our median estimates, they are actually quite similar to those of the University of Massachusetts researchers.
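Reinhart and Rogoff's point about medians and outliers is easy to illustrate with made-up numbers (these are not their data, just a toy series): a single extreme observation drags the mean far more than the median.

```python
import statistics

# Hypothetical growth rates (percent) for one debt bucket; the -8.0
# is a single outlier year of the kind archival data can contain.
growth = [2.1, 2.4, 1.9, 2.2, -8.0]

mean_growth = statistics.mean(growth)      # pulled down by the outlier
median_growth = statistics.median(growth)  # barely affected

print(round(mean_growth, 2), median_growth)  # prints 0.12 2.1
```

This is why giving weight to median estimates is a standard defence against outliers when working with long, messy historical series.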

Butler on Smith on government debt and growth

The relationship between government debt and growth is much in the news right now, but worries about that relationship are not new: Adam Smith worried about the issue.

In his book "Adam Smith - A Primer" Eamonn Butler writes that Smith ends his "An Inquiry into the Nature and Causes of the Wealth of Nations" with a warning about the effects of a large national debt (see the Wealth of Nations, Book V, Ch. III). Butler explains,
By issuing debt, governments draw capital away from investment and growth, and steer it towards present consumption - in the shape of government activities - which means that growth necessarily falters. In addition, government borrowing allows politicians to take on more functions and boost their power, without having to ask the people for more tax. And they often find ways of avoiding repayments anyway. For these reasons, national debt is not just a benign transfer from one group to another: it is a real threat to liberty and therefore a real threat to prosperity. (Butler p.71)

The costs of inflation

Dr. Steve Horwitz explains the cause and costs of inflation. Horwitz is the Charles A. Dana Professor of Economics at St. Lawrence University in Canton, NY.

Interesting blog bits

A debate on efficient/inefficient institutions.
  • Pirate Democracy? Acemoglu and Robinson argue that democracy arises when nondemocratic elites are forced to cede power to the previously disenfranchised, not as a way to solve some inefficiency. Acemoglu and Robinson claim that the view that democracy solves an inefficiency, referred to as the “efficient institutions view”, is wrong, and that it underlies Peter Leeson's account of pirate democracy. In Acemoglu and Robinson's view pirate democracy is better explained in terms of power.
  • Why did Pirates Choose Democracy? Peter Leeson replies to Acemoglu and Robinson. Leeson says Acemoglu and Robinson are right to characterise his view of pirate institutions as efficient. Leeson claims that pirates chose a system of democracy and separated powers to solve a principal-agent problem, namely to stop abuse of power by their captains, and he says the evidence shows this.
  • Efficient Organization among Pirates? Acemoglu and Robinson counter that they find the general presumptions upon which the efficient institutions view rests fairly unconvincing. What exactly are the forces that will ensure that institutions are efficient? And efficient for whom? Note that legitimate ships, the 18th-century merchantmen from which pirates were drawn, were not democratic, but pirates were. Why? If democracy was efficient for pirates, why not for merchantmen too?
  • Efficient Institutions are Context Dependent. Leeson replies that democracy’s cost was far higher for merchantmen than for pirates. Merchantmen were organized and outfitted by external financiers—wealthy landlubbers who had commercial expertise and capital, but weren’t sailors and thus hired seamen to sail their ships. To make sure crewmembers didn’t shirk, embezzle cargo, or steal the vessels they sailed on in owners’ absence, owners appointed officers to monitor and control them. Allowing crewmembers to democratically elect their officers instead would’ve been extremely costly in this context. Merchant sailors who could choose their officers democratically would have an incentive to elect the opposite kind of officer from what owners wanted—the kind of officer who would let sailors do whatever they pleased, destroying voyages’ profitability. Pirate ships, in contrast, weren’t organized or financed by external landlubbers. Pirates stole their vessels jointly: they were both the owners and employees of their ships. Because of this, democracy’s major potential cost on merchantmen—the prospect of crewmembers electing lax officers and thus undermining voyages’ profitability—was absent on pirate ships. Pirates who elected lax officers, qua employees, would’ve undermined their own interest, qua owners. Their incentive was therefore to elect the kind of officer that maximized profit. Consequently, for pirates, democracy was cheap.

Thursday, 25 April 2013

A sentence I just don't understand

From Richard Murphy at the Tax Research UK blog. Murphy is discussing what he calls "bankrupt ideas":
Third, that utility is a useful economic concept. It isn’t. Distribution matters.
First, I just don't see how distribution mattering means utility isn't a useful concept in economics. Second, if you don't use utility, what do you assume is the objective of households/consumers?

Issues with the Labour/Greens power policy

Bernard Doyle heads the investment strategy group at JBWere here in New Zealand and he has an interesting article over at the interest.co.nz website on the unintended consequences of the Labour/Greens power policy. Doyle writes,
The first blush of the NZ Power policy is another variation of models that have been tried, tested and failed for well over a century. That is, the state looks askance at the messy process of market-discovered price and production and figures it can do a better job.

Labour says as much in its policy document, stating: “No one plans the New Zealand energy sector and ensures it operates for the benefit of all New Zealanders”.
What Labour really means is that the government doesn't plan the energy sector. But that is the entire point of a free market: the government doesn't plan industries. Firms and consumers do the planning. As Hayek put it in his essay, The Use of Knowledge in Society,
The answer to this question is closely connected with that other question which arises here, that of who is to do the planning. It is about this question that all the dispute about "economic planning" centers. This is not a dispute about whether planning is to be done or not. It is a dispute as to whether planning is to be done centrally, by one authority for the whole economic system, or is to be divided among many individuals.
So the energy sector is planned but within a market system it is planned by individual firms and households. For Labour, there is one authority, them, who will do the planning.

Doyle continues,
The idea of a central planner co-ordinating supply and prices is superficially alluring. But almost invariably it ends in either taxpayer funded over-supply or rationing.

A brief look at New Zealand’s own history in the energy sector provides ample evidence, with the Think Big projects of the 1970’s an example of well-meaning but ultimately financially crippling supply-side state intervention.

The Clyde Dam was built on price assumptions that are still distant 30 years later.

It is illuminating that Labour cites California, Virginia, South Africa and Brazil as poster children for the centrally planned electricity model.

A quick scan of media headlines in three out of the four markets from the last quarter alone shows significant supply problems:

“California Girds for Electricity Woes”, Wall St Journal February 2013
“Biggest Crisis Since 2008 Looms for South African Mines: Energy”, Bloomberg March 2013
“Fears grow of Brazil power shortages”, Financial Times January 2013

The Californian example highlights just one of the many ways a central planner can come unstuck:
Changes in California's market have attracted lots of new generation; the state expects to have 44% more generating capacity than it needs next year. Grid officials say they expect the surplus to fall to 20% by 2022, though it will remain high for about a decade. However, the surplus generating capacity doesn't guarantee steady power flow.

Even though California has a lot of plants, it doesn't have the right mix. Many of the solar and wind sources added in recent years have actually made the system more fragile, because they provide power intermittently.
The electricity market is extraordinarily complex – the notion that a central planner can sit, Wizard of Oz-like, making long term planning, production and price decisions more efficiently than thousands of minds working in a market process is hopeful.

Of course there is a role for the Government in the economy, including the electricity sector. It is as a regulator, not a player.

None of this is to say that the current system, which sets prices at the marginal cost of the most expensive generation, is perfect.

Whilst likely more efficient, there is a trade-off that electricity consumers bear versus an average price model. So a discussion on New Zealand’s electricity market is a worthwhile exercise.

However, 1. New Zealand has had this debate before and found strongly in favour of the market model, and 2. We believe there should always be a strong bias toward the status quo given the increased risk and detrimental impact on investor sentiment that constantly changing regulatory regimes engender and for which NZ already has a poor reputation.
Another part of the current debate revolves around the so-called "super-profits" that the power companies are said to be making. But are these profits "super"? This report from Stuff.co.nz calls the superness of the power companies' profits into question.
The country's biggest electricity firms generated enough profits over the past five years to buy every man, woman and child in New Zealand a new 40-inch Sony flat screen television.

That's according to an analysis of power sector earnings since the global financial crisis, which showed Contact Energy, Genesis Energy, Meridian Energy, Mighty River Power and TrustPower collectively earned net profits of $3.2 billion.

But while the figures are eye-catching, and appear to add weight to a Labour-Greens bid to reorganise the sector under a cheaper single-buyer model, the investment community notes profits have been slipping for years.

Operating earnings before interest, tax, depreciation, amortisation and other fair value movements have risen 5.6 per cent across the sector on a compound annual growth basis since 2008.

But over the same period, average profitability across the sector has fallen 3.6 per cent - a level analysts say shows these businesses are not extracting unreasonable amounts of profit from customers.

William Curtayne, a senior investment analyst at Milford Asset Management, said that while New Zealand had one of the highest rates of electricity inflation among developed countries, much of that had been to redress a past imbalance.

Businesses subsidised household electricity through higher rates in the 1970s and 1980s.

That's now unwinding, which is why retail power prices have been on the rise, but he said correcting for that showed New Zealand real power inflation was in the middle of the OECD average.

"In my view companies are not making super profits, earnings are flat, and there's been a pullback in demand."
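The "compound annual growth basis" in the figures quoted above is just the constant yearly rate that links a starting value to an ending value. A minimal sketch, using hypothetical numbers rather than the sector's actual earnings:

```python
# Compound annual growth rate (CAGR): the constant yearly growth rate that
# takes `start` to `end` over `years` years.
def cagr(start: float, end: float, years: int) -> float:
    return (end / start) ** (1 / years) - 1

# Hypothetical illustration: a value growing at 5.6% per year for five years.
start = 100.0
end = start * (1 + 0.056) ** 5
print(f"{cagr(start, end, 5):.3f}")  # prints 0.056, recovering the rate
```

Note that a positive CAGR in operating earnings is consistent with falling average profitability, as the article observes, since profitability is measured relative to a growing asset and revenue base.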
Labour has been quoting the Wolak report's figure of $4.3b as the size of these "super-profits". But as Seamus Hogan has noted before, that report has come in for some "trenchant criticisms".

There is also the problem of what effect yet another rapid regulatory shift will have on the wider economy. Investors are more likely to demand a higher risk premium before they invest in the country. Regime uncertainty is never good for an economy.

Tuesday, 23 April 2013

The difference between economists and non-economists

Chris Dillow writes at his blog Stumbling and Mumbling
Many economists say "markets facilitate mutually beneficial trade, so we should have free(ish) ones in organs, drugs, prostitution etc." To which non-economists go "yuck, that's disgusting."
Yes, economists see the world differently from non-economists, even when the non-economist is their mother. Economist Harold Winter writes,
In one of the chapters we discussed that day, the author argued that it may be sensible social policy to allow overworked emergency rooms [of hospitals] to refuse care to patients who couldn't pay or who didn't have insurance. As I told my mother about this, I went into "professor mode" and ranted on for about ten minutes, showing her my passion for economic reasoning. When I was finished, she had a scared look in her eyes and then simply said, "You're a monster!"

What use is a graduate programme?

Recently, when thinking about developments in university education, Bill Kaye-Blake asked, What's the point of academic research? Is it to be 'critic and conscience of society' or is it to 'advance knowledge and understanding'? Or both? Whatever the answer, the results of research are certainly one of the two major outputs of universities. The second main output relates to the results of teaching both undergraduate and graduate students of the institution. With regard to teaching I wish to ask, What use is a graduate programme?

What does a university department need to be serious? Can a department live without a graduate programme? If a department within a university didn't have such a programme, what effect would it have on that department? How would the department hold current staff, and how would it attract new staff? What effect would it have on students?

In terms of student numbers obviously you would lose all your graduate students, but if you are planning to do away with a grad programme I guess it would be because it didn't have many students to begin with, so it wouldn't be a great loss as far as the bean-counters are concerned. One may argue that a small but high-quality programme is worth keeping on grounds of quality rather than quantity: can graduates of the programme get good jobs in good places, and can they succeed in good overseas Ph.D. programmes?

But what effect would the closure of a graduate programme have on undergraduate student numbers? If you can't do a graduate degree at a given university you have to ask, Why do an undergraduate degree there? Given that you are going to have to move to another university for grad school, why not just move to the second university for your undergrad degree as well? Importantly, in New Zealand, unlike for example the U.S., an undergrad degree in the subject you want to do grad work in is normally assumed, so there is a more direct link between undergraduate and graduate degrees in the New Zealand system. This means moving universities is more difficult, since your undergrad training may not integrate easily into the grad programme you are moving to. This gives students an incentive to do their undergrad and grad work at the same university. For some subjects this may not matter much since an undergraduate degree is the terminal degree, e.g. engineering, but for others where an advanced degree is necessary for employment, e.g. clinical psychology or economics, it matters more.

So if you only teach undergrads you have to teach in such a way that your students are able to move to any other university in the country for their graduate work. This is not a simple issue to deal with, given the differences in what is assumed about students' backgrounds in different graduate programmes. Knowing that students have to leave their current institution to do graduate work gives other institutions an incentive to head-hunt the best of the local students, thereby damaging the local undergraduate programme.

Thus there are push factors making students want to leave an institution without a postgrad programme, and there will be pull factors as well as other universities try to attract good undergraduate students. Any resulting falloff in undergraduate students could reach the point where the undergraduate programme is also seen by management as unsustainable.

Knowing this, can a move by university administrators to close a department's graduate programme just be seen as a signal that they want an excuse to close the department itself?

EconTalk this week

Edward Glaeser of Harvard University and author of The Triumph of Cities talks with EconTalk host Russ Roberts about American cities. The conversation begins with a discussion of the history of Detroit over the last century and its current plight. What might be done to improve Detroit's situation? Why are other cities experiencing similar challenges to those facing Detroit? Why are some cities thriving and growing? What policies might help ailing cities and what policies have helped those cities that succeed? The conversation concludes with a discussion of why cities have such potential for growth.

Sunday, 21 April 2013

Do entrepreneurs matter?

According to Sascha O Becker and Hans K. Hvide over at VoxEU.org the short answer is yes.

Becker and Hvide write
Governments try to boost competitiveness through a vast array of policies. Newly founded firms have the potential to give new impetus to an economy. The question arises: what matters more, the horse (i.e. the products) or the jockey (i.e. the owner-manager) in the life of young firms? Do entrepreneurs matter and should they be encouraged by economic policy? Few empirical studies focus on how entrepreneurs affect the performance and value of firms (see Syverson 2011). If entrepreneurs personally embed a major part of the value of the firm, it will be difficult to pledge the value of the firms to outside investors, which in turn leads to liquidity constraints and underinvestment in entrepreneurial firms (as in Hart and Moore 1994).
Brynjolfsson (1994) and Rabin (1993) are additional papers which highlight the problems that reliance on human capital can have for the development of firms. While the Brynjolfsson model is distinct from the Rabin model, they are complementary. The relationship between information, ownership and authority is central to both papers. Rabin works within a framework utilising an adverse selection model and shows that the adverse selection problems can be such that, in some cases, an informed party has to take over the firm to show that their information is indeed useful. The Brynjolfsson model is a moral hazard type framework which deals with the issue of incentives for an informed party to maximise uncontractible effort.

Brynjolfsson argues that the increased importance of information technology will result in reduced integration and smaller firms insofar as this increased reliance on IT leads to better informed workers, who need incentives; enables more flexibility and less lock-in in the use of physical assets; and allows direct coordination among agents, reducing the need for centralised coordination. On the other hand, the Brynjolfsson framework suggests that more integration will result from information technology where network externalities or informational economies of scale support the centralised ownership of assets, and where it facilitates the monitoring, and thus contractibility, of agents' actions. Clearly in any given case more than one of these phenomena may be important.

Within the Rabin framework it is suggested that firms are more likely to trade through markets when informed parties are also superior providers of productive services that are related to their information. But if, on the other hand, information is a firm’s only competitive advantage, it is likely to obtain control over assets, possibly by buying firms that currently own those assets.

Becker and Hvide argue that human capital/personalities are important for firms. Looking into the death of a firm’s founder during the first ten years of a company’s existence, the data suggest that entrepreneurs matter – they are the ‘glue’ that holds a business together.
We expected businesses that experienced the death of a founder-entrepreneur to have some kind of a dip in performance immediately after the death owing to the upheaval, but we anticipated there would be a bounce back. However, the results were quite surprising. Even four years after the death, most firms show no sign of recovering and the negative effect on performance appears to continue even further beyond that, [...]

A simple explanation for our findings could be reverse causality: poor firm performance leads to entrepreneurs having a higher probability of dying. To deal with this possibility, we look at whether there are pre-treatment differences between treated and matched controls. We do not find evidence of pre-treatment effects, [...]. This suggests that reverse causality is not a major force behind our findings.

For how long in a firm's life does the entrepreneur matter? The very youngest companies suffered most after the founder’s death, but significant effects were still felt by companies that were up to ten years old. The degree of ownership the founder had retained matters. The death of a founder with a 50% stake had about half the impact of losing a founder who had retained a majority shareholding. The level of formal education of the founder also showed a strong correlation with the damage that person’s death could have. Those with the most highly educated founders experienced the largest drops in sales performance after the founder’s death. There was no difference between the results for family and non-family companies, between rural and urban businesses, or when comparisons were made between different sectors.

It could simply be that the founder was a fantastic salesperson who generated a disproportionately high level of sales. On the other hand, it could be down to a leadership effect, where the founder-entrepreneur inspires the employees to perform as best they can and, without that presence, the drive slips away.

Possibly, entrepreneur death induces a voluntary shutdown by heirs of unprofitable firms that provided the entrepreneur with private benefits, so that there is no social loss. Using quantile regressions, we find strong negative effects of entrepreneur death on sales and assets also among successful firms. The bankruptcy code in Norway is similar to Chapter 7 in the US bankruptcy code, i.e., bankruptcy is associated with creditors taking control and is not 'voluntary' as in Chapter 11 in the US bankruptcy code. We find that firms where the entrepreneur dies have twice the probability of going bankrupt. This, again, is evidence supporting that entrepreneurs create value.

Another concern is that many firms in our database are very small, and possibly motivated by providing tax or private benefits to the entrepreneur.

Fortunately, a substantial fraction of firms in our database are not tiny, even in the first year – the 75th percentile for book value of assets and number of employees in the first year of operations is about $400,000 and four, respectively.
The conclusions of the article are
All our results are consistent with a simple mechanism: entrepreneurs personally embed a major part of the value of the firm, and the entrepreneur vanishing has a large negative impact. The death of the founder appears to shift the firm outcome distribution to the left. For firms in the lower part of the outcome distribution, the consequence is a higher probability of closing down, while for firms higher up in the quality distribution, the effect will be a significant reduction in firm growth.
  • Brynjolfsson, Erik (1994). ‘Information Assets, Technology, and Organization’, Management Science, 40(12): 1645-62.
  • Hart, O and J Moore (1994), 'A Theory of Debt Based on the Inalienability of Human Capital', Quarterly Journal of Economics, 109, 841-879.
  • Rabin, Matthew (1993). ‘Information and the Control of Productive Assets’, Journal of Law, Economics, and Organization, 9(1) Spring: 51-76.
  • Syverson, C (2011), “What Determines Productivity?”, Journal of Economic Literature 49, 326-365.
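The paper's concluding mechanism, a leftward shift of the firm-outcome distribution, can be illustrated with a toy simulation. This is entirely my own construction, not the authors' model: outcomes are drawn from an arbitrary normal distribution and the founder's death is represented as a fixed downward shift.

```python
import random
import statistics

random.seed(42)

# Toy firm outcomes before the founder's death (hypothetical parameters).
before = [random.gauss(1.0, 0.5) for _ in range(10_000)]

# The founder's death shifts the whole outcome distribution left.
shift = 0.3
after = [x - shift for x in before]

threshold = 0.0  # firms with outcomes below this close down

def closure_rate(outcomes, threshold):
    """Fraction of firms falling below the closure threshold."""
    return sum(o < threshold for o in outcomes) / len(outcomes)

def percentile(outcomes, p):
    """p-th percentile via the statistics module's 99 cut points."""
    return statistics.quantiles(outcomes, n=100)[p - 1]

# Lower tail: more firms fall below the closure threshold after the shift.
print(closure_rate(before, threshold), closure_rate(after, threshold))
# Upper tail: the 90th-percentile outcome is lower too (reduced growth).
print(percentile(before, 90), percentile(after, 90))
```

The same shift thus shows up as a higher closure probability at the bottom of the distribution and as reduced growth at the top, which is the pattern the quantile regressions in the paper pick up.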

Friday, 19 April 2013

Hogan writes to David and David (updated)

Over at the Offsetting Behaviour blog Seamus Hogan has an excellent open letter to David Shearer and David Parker. He writes,
Dear David and David,

I have read with interest the policy document you released yesterday: New Zealand Power, Energising New Zealand. I wonder if you could clarify a few points for me.
  1. In the document and the associated speeches, you quote the Wolak report's figure of $4.3b of, in your words, "super profits". Have either of you read the report, or any of the trenchant criticisms of that report? (A bit egotistically, I can suggest work that I was involved in, here, here, and here, but there are others.)
  2. You say that "prices are rising faster than in many of our major competitor countries", and show a graph comparing the price trend in a number of countries since 1986. Let's leave aside the question of what is meant by "competitor country". Is it your position that prices were correct in New Zealand in 1986? Elsewhere you say that your new agency, New Zealand Power, will set prices based on operating costs and a fair return on capital. Is it your position that prices were generating a fair return on capital in 1986?
  3. You say that the faster rate of price growth in New Zealand "undermines the competitiveness of our economy". But one of your graphs shows that real industrial prices have remained about constant since 1986 and commercial prices have fallen. What exactly do you mean by "competitiveness"? 
  4. Your graph shows that the faster increase of prices relative to other countries has been fairly steady since 1986 albeit with an acceleration around 2000. Since your explanation for this price trend is a lack of competition in the market and the use of marginal-cost rather than average-cost pricing, is it your position that these factors have been changing steadily over the past 25 years, accelerating during the period of the last Labour government? Is it possible that the trend might be attributable to steady increases in demand over time and regulatory obstacles to power companies building new capacity? 
  5. You say that selling assets will "push up power prices even more as foreign and corporate investors look to maximise profits". Is it your position that the state-owned electricity companies are not currently looking to maximise profit, even though that is their fiduciary duty under the State-Owned Enterprises Act? 
  6. You state that the Wolak report found that the four big generators made "super profits of $4.3b at the expense of consumers". You also state that hydro generators earn "super profits" by using free water to generate electricity that is sold at the same price as generators using more expensive methods. Do you think this is what Wolak meant when he calculated the excess profits earned? Have you read the Wolak report? 
  7. As I noted earlier, you state that price will be set based on operating costs and a fair return to capital. But the Wolak report assumed that there was excess capacity in New Zealand so that a competitive market would have produced prices based only on operating costs. Are you stating that Wolak's $4.3b figure is overstated? Have you read the Wolak report? 
  8. Drawing on a report you have commissioned from BERL, you state that your policy will create 5,000 jobs and boost the economy by $450 million per annum. In their report, BERL state that they are assuming an economy with deficient demand so that unemployed resources are available to the industrial and commercial sector with no opportunity cost. In citing that figure as an on-going per annum benefit, are you stating that it is your view that the economy will remain in a state of deficient aggregate demand forever, and that your government would take no other action to increase demand?
  9. And if you have time, could you ask BERL whether it is not an oxymoron to have a computable general equilibrium model, and then state that "the model's calculation of the impacts on the government accounts exclude the direct loss of revenue from lower generator dividends and lower tax receipts from the generator's reduced profits".
  10. By the way, did you know that one of the implicit assumptions Wolak used in his report implied that there was no efficiency loss from the putative overcharging, just a transfer from users to taxpayers? If you accept this report, wouldn't it be easier just to use the tax and benefit system to transfer money back to poorer consumers? Have you read the Wolak report?
Kindest Regards....
I have to ask: if this idea works for electricity, why not for food, petrol and who knows what else? Also, if monopoly is bad, how is it that monopsony is suddenly good? Can we get iPredict to set up a contract on the chances of Seamus getting a reply, or of anyone getting any sensible comment from either David on this?

Update: Kiwiblog writes on Electricity Prices and notes
David Parker is on record as saying a single buyer will increase the cost of power. Simon Bridges quotes from a 2006 cabinet paper by Parker:
“As Minister of Energy he said that “a single buyer would likely result in higher capital and operating costs”. He went on to say that: “The risks involved in changing arrangements could be significant. The resulting uncertainty could lead to investment proposals being put on hold. Direct implementation costs could be large.” And, he admitted that “The single buyer would be relatively poor at sustaining pressure on operational costs.”

Thursday, 18 April 2013

Adam Smith: a man of the left or right? (updated)

I shall put aside the obvious response that, since the left/right distinction developed from the political divisions of the French Revolution and Smith died in 1790, it is somewhat pointless to think about Smith in terms that had no meaning in his time. Instead I shall concentrate on the recent trend in Smith studies that concerns itself with the extent to which Smith’s ideas can be distanced from the more vociferous of his free market admirers such as Milton Friedman, James Buchanan and F.A. Hayek. In the field of political economy there has developed a line of argument that sees Smith’s ideas associated not with the concerns of the right or of liberals (or libertarians, for any Americans reading this) but with the contemporary left’s concerns with fairness, equality and social justice.

In a forthcoming article (“Adam Smith: Left or Right?”) in the journal Political Studies well-known Adam Smith scholar Craig Smith writes
Amartya Sen (2009) has drawn inspiration from Smith in developing his own theory of social justice and Samuel Fleischacker (2004) has made the case for reading Smith as a precursor of modern notions of social justice. Iain McLean (2006), on the other hand, makes the stronger claim that Smith’s true legacy lies, not with the libertarian economists of the Adam Smith Institute, but rather with the social democrats of the John Smith Institute. In all three cases the broad claim is that there are grounds for associating Smith with the modern egalitarian idea of social justice understood as the state-backed redistribution of wealth to ameliorate the effects of poverty.
Smith expands on this by saying,
Fleischacker offers perhaps the most detailed version of the argument under consideration. He admits that Smith wrote in a period prior to the modern notion of distributive justice and that this leads Smith to consider justice in the commutative sense favoured by the natural law tradition, but he goes on to argue that Smith helped to point the way towards the notion of distributive justice that animates the contemporary left (Fleischacker, 2004, p. 213). Fleischacker accepts that there are both libertarian and egalitarian themes in Smith’s work and that he can thus be read as providing a legacy for both contemporary positions (Fleischacker, 2004, p. 19), but in his view Smith’s abiding concern for the poor brings him closer in spirit to the contemporary left (Fleischacker, 2004, p. 265). Fleischacker’s argument is based on the idea that Smith does not operate with an absolute and pre-social, moralised notion of property rights and that as a result of this Smith has no principled reason to consider it unjust to use ‘redistributive taxation to help the poor’ (Fleischacker, 2004, p. 145). What replaces the principled objection in this reading is a case-by-case assessment of the likely success of particular government attempts to alleviate poverty, with a presumption against the likely success of such activity drawn from Smith’s distrust of the political process. This leads to a ‘Smithian’ state which, while unlikely to be as extensive as the modern welfare state (Fleischacker, 2004, p. 236), is nonetheless open to the use of politics to pursue the goals of egalitarian distributive justice. Fleischacker then argues that the contemporary left has much to learn about the pursuit of its goals from Smith’s criticism of state bureaucracies and his stress on competition.
Craig Smith argues against those who would claim Adam Smith for the left, in terms of an adherence to social justice, by explaining,
[...] that not only would Smith have been dubious about the modern conception of social justice, but that he actually takes care to draw a conceptual distinction between his notion of justice and the sort of redistributive and welfare programmes that we understand under the vague catch-all notion of social justice. My point is not that Smith was unconcerned with the situation of the poor; it is rather that he makes a quite clear philosophical distinction between this concern and the concept of justice. I want to claim that we should not dismiss this distinction as merely a feature of the language that Smith inherited from his predecessors. Instead I want to take his attempt at conceptual clarity seriously and suggest that the normative distinctions Smith draws might prove to be a further lesson for the contemporary left in addition to the empirical and social theoretical points that Fleischacker concedes (Fleischacker, 2004, p. 226). Put another way, for the purposes of this article it does not matter how much of a role Smith allowed for deliberate attempts to ameliorate the effects of poverty; what matters is that he does not conduct this discussion in terms of justice.
In the conclusion to his paper Smith argues that in Adam Smith's ideas we see the existence of a sphere of devolved (local level) human activity distinct from the political concerns of the state. Craig Smith continues,
The proper conceptual vocabulary for this sphere then is clearly distinct from the vocabulary both of justice and of beneficence. Justice has its place in Smith’s vision of society, but that place is specific and limited and this must surely give us pause in attempting to relate Smith’s thought to modern conceptions of social justice. Indeed, at least one conclusion that might be drawn from the reading presented here is that, far from offering us a theory or even an inspiration for a theory of social justice, Smith actually gives us good grounds to want to keep some conceptual distance between ideas of justice, police and beneficence. That he is wary of any automatic reliance on the political process and the state to pursue our social objectives is admitted even by those such as Fleischacker who want to reclaim Smith for the left. As Fleischacker (2004, p. 241) also admits, this points us toward a presumption against the state and a presumption in favour of private action by voluntary associations of individuals. But if this is the locus for the exercise of beneficence and the provision of public works then we are dealing with something very different from the modern debates about intra-national transfers or even international transfers and distributive patterns.
Craig Smith goes on to say that what this implies for a ‘Smith-based’ notion of distributive or social justice is clear,
we should take more seriously Smith’s silence on modern distributive justice, his desire to place conceptual distance between beneficence and justice, his distrust of the political process and his temperamental distaste for utopianism. And we should pay more attention to his localist, prudential category of police and his desire to press a normative distinction between strict principles of justice and political or beneficent decisions guided by expediency. These are not accidental aspects of Smith’s thinking, however imperfectly they are carried over into his own policy prescriptions. They suggest a very different understanding of the normative ideal of justice and one that might actually give us good reasons to doubt the efficacy of thinking about our moral obligations to the poor and welfare provision in terms of social justice.
When thinking of our moral obligations a related question about Adam Smith’s thinking is raised by Maria Pia Paganelli in a chapter forthcoming in the Oxford Handbook on Adam Smith. Paganelli asks why Smith promotes free markets and argues that he promotes them for at least two reasons: efficiency and morality. In terms of morality Paganelli argues that Smith thought that markets can foster morality just as much as morality can foster markets. Paganelli concludes her chapter by noting,
Adam Smith favours commerce on grounds of both morality and efficiency. Commerce is intertwined with morals, it supports moral development and at the same time it is supported by it. Commerce requires morals for its functioning and gives the conditions under which people can live, can live freely, and can live morally.
Returning to the question of whether Adam Smith was “left or right” James Otteson writes in the epilogue to his 2011 book Adam Smith,
He [Smith] was instead an old-fashioned liberal: favoring individual liberty, endorsing state institutions to protect this liberty, and, where they conflicted, favoring the individual over the state as a default. But he was also a sceptical empiricist. He favored free trade, free markets, and a government robust but limited to the enforcement of a few central tasks not because they comported with a priori principles but because they seemed to work.
It is worth noting that this sceptical empiricist approach to markets, trade and government, rather than an a priori principle approach, would most likely disqualify Smith as a libertarian, at least of the Randian or Nozickean kind.

Otteson goes on to say,
Smith’s concern with the poor leads some commentators to suggest that he must have been a proto-“progressive” liberal, since, as some believe, only progressive liberals care about the poor. Samuel Fleischacker, for example, argues that Smith’s concern for the poor is one reason to see him as “left-leaning” rather than “right-leaning”. Concern for the poor is, however, hardly the exclusive provenance of the political left. And Smith’s strong arguments in favor of decentralization of power, competition, and free markets would seem to put him rather on the right of today’s political spectrum than on the left.
Otteson's conclusion is that Smith is a classical liberal, which is consistent with the arguments made above, but, if accepted, this does mean Smith is not a man of the left.

Update: Adam Smith scholar Gavin Kennedy comments on this post at his Adam Smith's Lost Legacy blog.

Tuesday, 16 April 2013

EconTalk this week

Jeffrey Sachs of Columbia University and author of The Price of Civilization talks with EconTalk host Russ Roberts about the state of the American economy. Sachs sees the current malaise as a chronic problem rather than a short-term challenge caused by the business cycle. He lists a whole host of issues he thinks policymakers need to deal with including the environment, inequality, and infrastructure. He disagrees with the Keynesian prescriptions for stimulating the economy and believes that the federal government budget deficits are a serious problem. The conversation closes with a discussion of the state of economics.

Unemployment is bad for employment 2

I have made the point before that the longer you are unemployed, the less likely it is that you will become employed, and more evidence confirming it comes from this article in the Washington Post:
Here’s one big reason why America’s unemployment crisis may be here to stay. Thanks to the lasting effects of the recession, there are currently 4.7 million workers who have been out of work for at least 27 weeks. And new research suggests that employers will almost never consider hiring them.

Matthew O’Brien reports on a striking recent experiment by Rand Ghayad of Northeastern University. He sent out 4,800 fake resumes at random for 600 job openings. And what he found is that employers would rather call back someone with no relevant experience who’s only been out of work for a few months than someone with more relevant experience who’s been out of work for longer than six months.

In other words, it doesn’t matter how much experience you have. It doesn’t matter why you lost your previous job — it could have been bad luck. If you’ve been out of work for more than six months, you’re essentially unemployable. Many companies won’t even consider you for a job.
One obvious question this raises is, Are companies irrationally discriminating against the long-term unemployed or do they have good reasons for screening out these applicants? The Washington Post article writes,
Privately, many employers worry that someone who’s been out of work for six months “may have outdated skills, or may be a short-timer who is desperate enough to take any work now but will leave when something better comes along.”
One worry with this is that the current cyclical unemployment problems could become structural and very long-lasting.

Local authority chief executives are boosting their pay packets by "empire-building"

The following link is to a TV3 news item in which Dr Glenn Boyle, professor of finance at Canterbury University, discusses his recent paper in which he shows that some local authority chief executives are boosting their pay packets by "empire-building". Boyle found that councils which collect the most revenue per ratepayer also pay their chief executives the most. The pay rate is not related to things like the amount of infrastructure the council controls, but is related to the additional hiring of bureaucrats.


Scoop reports,
A new study by UC finance professor Glenn Boyle and former UC student Scott Rademaker found councils which collect the most revenue per ratepayer pay their chief executives the most.

"While this could indicate that chief executives with more revenue to manage have more complex jobs, and hence deserve to be paid more, it turns out that the additional revenue is primarily used to employ additional council personnel," Professor Boyle says.

"The more bureaucrats a council chief executive is able to employ, relative to the size of their ratepayer base, the greater the remuneration he or she is able to extract on average. Chief executives who have increased personnel costs the most during the 2005-10 period have, on average, received the biggest pay rises during that time. In short, council chief executives are being rewarded for good old-fashioned empire building.

Friday, 12 April 2013

Margaret Thatcher’s economic legacy

From VoxEU.org comes a couple of articles looking at Margaret Thatcher’s economic legacy. John Van Reenen argues:
Margaret Thatcher’s economic legacy lives on. This column provides a markedly balanced assessment of her mistakes and achievements. Most pressingly, Thatcherism left the UK failing to properly think about long-run investment, especially in infrastructure, in the skills of those at the lower end of the ability distribution and in innovation. The UK is addressing some of these problems, but this failure to invest in prosperity is the main challenge we face as a nation over the next 50 years.
while Nicholas Crafts explains that
The policies of the Conservative governments led by Margaret Thatcher between 1979 and 1990 remain highly controversial more than 20 years later. In many respects, they represented a sharp break with the earlier postwar period and this was certainly true of supply-side policies relevant to growth performance. Reforms of fiscal policy were made including the restructuring of taxation by increasing VAT while reducing income-tax rates and, notably, by indexing transfer payments to prices rather than wages while aiming to restore a balanced budget. Industrial policy was downsized as subsidies were cut and privatisation of state-owned businesses was embraced while deregulation, including most notably of financial markets with the ‘Big Bang’ in 1986, was promoted. Legal reforms of industrial relations further reduced trade union bargaining power which had initially been undermined by rising unemployment. In general, these changes were accepted rather than reversed by Labour after 1997.

In fact, before, during and after Thatcher, government policy moved in the direction of increasing competition in product markets. In particular, protectionism was discarded with liberalisation through GATT negotiations, entry into the European Community in 1973, the retreat from industrial subsidies and foreign-exchange controls in the Thatcher years, and the implementation of the European Single Market legislation in the 1990s. Trade liberalisation reduced price-cost margins. The average effective rate of protection fell from 9.3% in 1968 to 4.7% in 1979, and 1.2% in 1986 (Ennew et al. 1990), subsidies were reduced from £9bn (at 1980 prices) in 1969 to £5bn in 1979 and £0.3bn in 1990 (Wren 1996), and import penetration in manufacturing rose from 20.8% in 1970 to 40.8% by 2000.
Van Reenen sees Thatcher's policies as a failure.
Nevertheless, there are many important economic and social failures that are part of the Thatcher legacy. First, there was a tremendous growth of inequality both in pre-tax incomes and through changes to tax and benefit policies that favoured the rich. [...]. Some of this inequality was addressed by the Labour governments through tax credits and the minimum wage, but the share of income going to the top 1% continued to rise inexorably, driven by the financial sector. This was the second failure – excessive deregulation of financial services starting with the Big Bang in 1986, but continuing until the eve of the 2007 crisis. Even free markets need to be properly regulated. Third, her early years were marked by a failure to understand that the public employment service needs to be active in helping people find jobs. A major mistake was splitting benefit offices from job centres and pushing many unemployed onto disability benefits (which are much harder to escape from) in an effort to massage down the unemployed claimant count statistics. Unemployment claims peaked at over three million in 1986 when Restart was launched – a policy that finally put more effort into getting the unemployed searching for work and was deepened under the New Deal policies after 1997.

Finally, and perhaps most importantly, there has been a failure of long-run investment: in infrastructure, in the skills of those at the lower end of the ability distribution, and in innovation. The UK addressed some of its problems, but this failure to invest in prosperity is the main challenge we face as a nation over the next 50 years. The LSE Growth Commission has put forward some proposals to deal with this – let’s hope the current generation of political leaders takes heed.
Crafts concludes by saying,
In sum, Thatcherism was a partial solution to the problems which had led to earlier underperformance, in particular, those that had arisen from weak competition (Crafts 2012). The reforms encouraged the effective diffusion of new technology rather than greater invention and worked more through reducing inefficiency than promoting investment-led growth. They addressed relative economic decline through improving TFP and reducing the NAIRU. At the same time, the short-term implications were seriously adverse for many workers as unemployment rose and manufacturing rapidly shed two million jobs while income inequality surged, to no small extent as a result of benefit reforms.

Indeed, any judgement on Thatcherism turns heavily on value judgements concerning the relative importance of income distribution and economic growth as policy objectives. The 1980s saw a very rapid increase in the Gini coefficient by about nine percentage points, which has turned out to be largely permanent. Ultimately, the Thatcher experiment was about making a liberal market economy work better. There will be those who think a German-style coordinated market economy is preferable. That was not really an option available to Mrs Thatcher but in any event it was hardly a vision of which she approved.
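For readers unfamiliar with the measure Crafts cites: the Gini coefficient is a standard summary of income inequality, ranging from 0 (everyone has the same income) to 1 (one person has everything), so a rise of nine percentage points over a decade is a large shift. A minimal sketch of how it is computed, using made-up income figures purely for illustration (not UK data):

```python
# Gini coefficient from a list of incomes, using the standard
# sorted-sum form of the mean-absolute-difference definition:
#   G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n
# where x_1 <= ... <= x_n are the sorted incomes.

def gini(incomes):
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return 2 * weighted / (n * total) - (n + 1) / n

equal = [100, 100, 100, 100, 100]   # perfectly equal incomes
unequal = [10, 20, 30, 40, 400]     # one person takes most of the total

print(round(gini(equal), 3))    # -> 0.0
print(round(gini(unequal), 3))  # -> 0.64
```

A nine-percentage-point rise means, roughly, moving a noticeable fraction of the distribution toward the second pattern.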

John Taylor's Hayek Lecture at Duke University

The title of his lecture, given on Wednesday, 10 April 2013, is Why We Still Need to Read Hayek:

Wednesday, 10 April 2013

EconTalk this week

Anat Admati of Stanford University talks with EconTalk host Russ Roberts about her new book (co-authored with Martin Hellwig), The Bankers' New Clothes. Admati argues that the best way to reduce the fragility of the banking system is to increase capital requirements--that is, require banks to finance their activities with a greater proportion of equity rather than debt. She explains how debt magnifies returns and losses while making each bank more fragile. Despite claims to the contrary, she argues that the costs of reducing debt are relatively small for society as a whole while the benefits are substantial.

Winston Peters on exporters' tax rate

From Stuff
New Zealand First leader Winston Peters says his party wants to cut the corporate tax rate for exporters from 28 per cent to 20 per cent.
More mercantilism from Peters. Will it never die? Why not lower the tax rate for all businesses? What is so special about exporters? In short, nothing. Adam Smith pointed out nearly 240 years ago that "Consumption is the sole end and purpose of all production" and that the measure of a country's true wealth is the total of its production and commerce. That is, a country's wealth is what the people of that country can consume. The great 19th century French economic pamphleteer Frédéric Bastiat wrote, "Consumption is the end, the final cause, of all economic phenomena, and it is consequently in consumption that their ultimate and definitive justification is to be found." Note also that exports are things that we produce and send to other (overseas) people. That is, they are goods and services that we produce but do not consume, and thus they lower our welfare. Imports, on the other hand, are goods and services that other countries produce and send to us to increase our consumption. This means imports increase our welfare. So imports are welfare increasing and exports are welfare decreasing. Therefore "imports are good; exports are bad".

But this raises the question of why we bother to export at all rather than just import. The obvious answer is that exports are how we pay for our imports. If we want people to send their goods and services to us, we have to send our goods and services to them in exchange. Adam Smith also noted that in any free exchange both sides must benefit: the buyer profits, just as the seller does, because the buyer values what he gives up less than the goods he obtains. That's why we trade at all.

Wednesday, 3 April 2013

EconTalk this week

Eric Topol of the Scripps Research Institute and the author of The Creative Destruction of Medicine talks with EconTalk host Russ Roberts about the ideas in his book. Topics discussed include "evidence-based" medicine, the influence of the pharmaceutical industry, how medicine is currently conducted for the "average" patient, the potential of genomics to improve health care and the power of technology, generally, to transform medicine.