Wednesday 31 December 2014

Audio on how macroeconomics has changed since the crisis

Wouter den Haan, Professor of Economics and Co-Director of the Centre for Macroeconomics, London School of Economics and CEPR Research Fellow, is interviewed at VoxEU.org by Viv Davies.
Macroeconomics has changed in a number of ways since the global crisis. For example, there is now more emphasis on modelling the financial sector, self-fulfilling panics, herd behaviour and the new role of demand. This Vox Talk discusses these changes as well as those areas in macroeconomics that are currently perhaps not researched enough. Wouter den Haan explains the inadequacy of the conventional 'rational expectations' approach, quantitative easing, endogenous risk and deleveraging, and refers to current CEPR research that reflects the changes. He concludes by reminding us that the 'baby boomers' issue could be the basis of the next crisis.

A direct link to the audio is available here.

Adam Smith, classical economics and the firm, or rather the lack of a firm

I have written before on The firm in classical economics and noted Mark Blaug's famous point that the classical economists simply
"[ ... ] had no theory of the firm".
Blaug is not the only one to argue in this way. Kenneth Arrow explains,
"[i]n classical theory, from Smith to Mill, fixed coefficients in production are assumed. In such a context, the individual firm plays little role in the general equilibrium of the economy. The scale of any one firm is indeterminate, but the demand conditions determine the scale of the industry and the demand by the industry for inputs. The firm's role is purely passive, and no meaningful boundaries between firms are established".
When writing about Adam Smith's approach to the firm Philip L. Williams says,
"[t]he firm was disembodied and became a unit in which resources congeal in the productive process. When we come to examine the equilibrium/value theory of The Wealth of Nations it will be shown that, in that context, the firm is little more than a passive conduit which assists in the movement of resources between alternative activities"
while Michael Best states simply that
"Adam Smith did not elaborate a theory of the firm".
Howard Bowen argues in a similar fashion:
"[ ... ] economists of the classical tradition had usually assumed that the level and distribution of income and the allocation of resources were determined by forces that could be understood without a detailed theory of the firm. [ ... ] Everything else would be settled by the impersonal forces of the market, and there would be no need to consider in detail the decisions and actions of the individual firm".
When looking at Adam Smith's magnum opus, An Inquiry into the Nature and Causes of The Wealth of Nations, it should be noted that Smith begins with a discussion of the division of labour at the microeconomic level, the famous pin factory example, but quickly moves the analysis to the market level. When discussing Smith's approach to the division of labour Paul McNulty comments,
"[h]aving conceptualized division of labor in terms of the organization of work within the enterprise, however, Smith subsequently failed to develop or even to pursue systematically that line of analysis. His ideas on the division of labor could, for example, have led him toward an analysis of task assignment, management, or organization. Such an intra-firm approach would have foreshadowed the much later - indeed, quite recent - efforts in this direction by Herbert Simon, Oliver Williamson, Harvey Leibenstein, and others, a body of work which Leibenstein calls "micro-microeconomics". [ ... ] But, instead, Smith quickly turned his attention away from the internal organization of the enterprise, and outward toward the market and the realm of exchange, perhaps because he found therein both the source of division of labor, in the "propensity in human nature ... to truck, barter and exchange" and its effective limits".
So the question raised by all of this is: why no theory of the firm in classical economics? At least part of the answer may be that the theories being analysed in classical economics are macroeconomic theories; they are theories of the production of an entire economy rather than (microeconomic) theories of firm production. D. P. O'Brien remarked that
"[c]lassical economics ruled economic thought for about 100 years. It focused on macroeconomic issues and economic growth. Because the growth was taking place in an open economy, with a currency that (except during 1797-1819) was convertible into gold, the classical writers were necessarily concerned with the balance of payments, the money supply, and the price level. Monetary theory occupied a central place, and their achievements in this area were substantial and - with their trade theory - are still with us today".
Nicolai Foss and Peter Klein note that classical economics was largely carried out at the aggregate level with microeconomic analysis acting as little more than a handmaiden to the macro-level investigation,
"[e]conomics began to a large extent in an aggregative mode, as witness, for example, the "Political Arithmetick" of Sir William Petty, and the dominant interest of most of the classical economists in distribution issues. Analysis of pricing, that is to say, analysis of a phenomenon on a lower level of analysis than distributional analysis, was to a large extent only a means to an end, namely to analyze the functional income distribution".
O'Brien makes the same basic point by noting the differences in emphasis between classical and neoclassical economics:
"[t]he core of neo-Classical economics is the theory of microeconomic allocation, to which students are introduced in their first year in an elementary and largely intuitive form, and which receives increasingly sophisticated statements during succeeding years of study. On top of this, as a sort of icing on the cake, comes the macroeconomics theory of income determination, with, in little attached boxes so to speak, theories of growth and trade appended. But the approach of the Classical economists was the very reverse of this. For them the central propositions of economics concerned macroeconomic problems. Their focus above all was on the problem of growth, and the macroeconomic distribution conclusions which followed from their view of growth. On the one hand, international trade, at least for Smith, was inextricably bound up with all this: on the other, the microeconomic problems of value and microdistribution took their place as subsets of the greater whole".
So the lack of a theory of the firm may reflect a more general lack of microeconomics in the writings of the classical school. Classical economics was largely about the macroeconomic problems of growth, monetary policy and trade. It took the "neoclassical revolution" of the 1870s to bring about a theory of firm-level production, but even this was not a theory of the firm. Development of such theories had to wait until as recently as the 1970s, when Coase's ideas on the firm started to be taken seriously.

Tuesday 30 December 2014

EconTalk this week

James Tooley, Professor of Education at Newcastle University, talks to EconTalk host Russ Roberts about low-cost for-profit private schools in the slums and rural areas of poor countries. Tooley shows how surprisingly widespread private schools are for the poor and how effective they are relative to public schools, where teacher attendance and performance can be very disappointing. The conversation closes with a discussion of whether public schooling should remain the ideal in poor countries.

A direct link to the audio is available here.

Access to higher education and the value of a university degree

Governments sometimes promote reforms that increase access to education for a large share of the population, normally with little thought as to what the actual outcomes will be. These reforms may lower the returns to education by altering returns to skills, education quality, and peer effects. In a new column at VoxEU.org, Nicola Bianchi examines the case of a 1961 Italian reform that increased enrolment in university STEM majors among students who had previously been denied access. The reform ultimately failed to raise their incomes.

Bianchi writes,
In a recent working paper, I illustrate the effects of a 1961 Italian reform that led to a 216% increase in enrolment in university STEM (Science, Technology, Engineering, and Mathematics) programmes over a mere eight years (Bianchi 2014). I find that:
  • The reform increased enrolment in university STEM majors among students that had been previously denied access, but ultimately failed to raise their incomes.
  • The enrolment expansion lowered the value of a STEM education by crowding out university spending and generating negative peer effects.
  • Due to lower returns to a university STEM degree, some students with the potential to succeed in STEM turned to other university programs.
He continues,
Italian high schools offer different curricula. Until 1960, a student who graduated from a university-prep high school (licei) could enroll in university in any major. A student who graduated from a technical high school for industry-sector professionals (istituti industriali) could enroll in only a few majors and most often did not enroll in university at all. In 1961, the Italian government allowed graduates with a technical diploma to enroll in university STEM majors for the first time. Technical graduates embraced this opportunity to the extent that freshman enrolment in STEM programs had increased by 216% by 1968 [...]

To analyze the effects of this reform, I collected high school records, university transcripts, and income tax returns for the population of students that completed high school in Milan between 1958 and 1968. I chose Milan because it is Italy’s commercial capital and second largest city. It has the thickest market for university graduates and university-type jobs, and is believed to be the place where a university graduate can earn the highest returns.
The reform resulted in higher university access but a lower value of education.
The reform was successful in increasing university access among students with a technical diploma. After 1961, many technical students enrolled in university and completed their degrees. However, I find little evidence that technical students gained positive returns to university STEM education. This is an important result for two reasons:
  • STEM degrees were leading to high-paying occupations and
  • the outside option of technical students was to enter the labour market with just a high school diploma.
To explain these findings, I lay out a simple framework in which enrolment expansions affect returns to education through three main channels:
  • higher supply leads to lower wages,
  • higher enrolment crowds university resources and decreases the quality of education,
  • learning is lower in classes with students from different types of high school.
The reform brought crowding of university spending, negative peer effects, and changes in major choices.
Several findings suggest that the enrolment expansion following the policy implementation lowered the returns to a STEM degree. To analyze changes in the value of a university education, I focus on the students who were not directly affected by the reform – the graduates from university-prep high schools. Among these students, returns to STEM education declined after 1961 to the point of erasing the pre-reform income premium associated with a STEM degree.

This decline can be partially explained by a lower amount of skills acquired in STEM majors after 1961. I find that human capital (measured by absolute grades) decreased more in STEM courses in which resources became more crowded and in which the entry of technical students had greater disruptive potential. Overall, lower resources per student can explain 31% of the income decline, while the change in class composition can explain another 37.3%. The remaining share can be attributed to higher supply of workers with a university STEM education or possibly other minor channels of general equilibrium effects.

By decreasing the value of STEM education, the reform might have deprived STEM majors of talented students. After 1961, many more students with a university-prep diploma decided to enroll in university majors that were still not accessible to technical students. This effect was concentrated among the students with higher high school grades.
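As a quick check of the decomposition quoted above, the share left over for the supply channel follows by simple subtraction. A minimal sketch in Python (the 31% and 37.3% shares are the ones Bianchi reports; the residual attribution is his, not computed here):

    # Decomposition of the post-1961 decline in returns to a STEM degree,
    # using the shares reported by Bianchi (2014).
    crowding_share = 0.310      # lower university resources per student
    composition_share = 0.373   # changed class composition (peer effects)

    # Whatever is left is attributed to the higher supply of STEM graduates
    # and other minor general equilibrium channels.
    residual_share = 1.0 - crowding_share - composition_share
    print(f"residual share: {residual_share:.1%}")   # -> residual share: 31.7%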
Policy conclusions?
There are instances in which students should invest more in education. For students who do not have the resources to pay for education, public intervention is needed to improve access, but it should just ease the financial constraints of students under-investing in education. Public intervention should not take the form of greatly expanded education provision by state-controlled universities. The inefficiencies in the public provision of education might be magnified by enrolment expansions and might limit the benefits for targeted students.

Monday 29 December 2014

Presentations on F. A. Hayek and the Nobel Prize

In a follow-up to the previous posting on the talk by Israel Kirzner, the other presentations given that day are embedded below.

Dr. Peter Boettke Introduces F. A. Hayek Nobel Laureate Panel


Dr. Eric Maskin's Presentation on F. A. Hayek and the Nobel Prize


Dr. Vernon Smith's Presentation on F. A. Hayek and the Nobel Prize and Q&A

Sunday 28 December 2014

Was Stalin necessary for Russia’s economic development?

An often asked question. In a 2013 column at VoxEU.org Anton Cheremukhin, Mikhail Golosov, Sergei Guriev and Aleh Tsyvinski attempted to answer it. Soviet Russia’s industrialisation was a pivotal episode in the 20th century, and economic historians have spent decades debating the role of Stalin’s policies in bringing it about. This column argues that Stalin’s industrialisation was disastrous even in purely economic terms. The brutal policy of collectivisation devastated productivity, both in manufacturing and in agriculture. The massive welfare losses in the years 1928-40 outweighed any hypothetical gains from Stalin’s policies after 1940, and Russia would have been better off under a continuation of the ‘New Economic Policy’.

So in other words, no!

In more detail, the debate about Stalin’s economic policies focuses on three issues: Did Stalin’s industrialisation policies pay off in terms of economic welfare? Would Russia have industrialised without Stalin? Would Russia have done better without Stalin?

In the 1920s Russia outperformed its pre-1913 trend in per capita GDP, doubled its investment to GDP ratio, and moved about 30% of its labour force from agriculture to manufacturing and services. But this does not prove that this transformation was driven by Stalin’s policies. It may well be the case that industrialisation and growth would have happened anyway. We also do not know to what extent Soviet citizens benefitted from Stalin’s policies in the short and the long run – compared to a reasonable counterfactual.
Recent research

In a recent paper, we address these issues by constructing a new dataset and using modern macroeconomic tools (Cheremukhin et al. 2013). In particular, we study a standard two-sector neoclassical model of unbalanced growth that has been extensively employed in the literature to analyse industrialisation and structural change in other contexts. We do not assume that markets are frictionless; instead, we allow for distortions (or ‘wedges’) in product and factor markets.

Certainly, Stalin’s economy was not a neoclassical market economy. We do not assume that markets worked in the Soviet Union nor do we use Soviet price data. Still, the neoclassical model helps us to analyse the planned economy. We calculate the magnitude of the wedges that would make a neoclassical economy with wedges perform similarly to Stalin’s economy in terms of real variables (such as output in the agricultural and manufacturing sectors, labour and capital used in these two sectors, private and government consumption etc.). We then perform the same exercise for the Russian economy before WWI and for the economy of Japan (which turns out to be a convenient benchmark for the Russian economy).

Once we calculate the wedges in the product market, the capital market, the labour market, and the intertemporal wedge, we describe how to connect these wedges to actual policies carried out by Stalin’s government (such as collectivisation or industrialisation) and to distortions present in the economy. We then compare Stalin’s economic performance to a counterfactual scenario in which we extrapolate the Tsarist pre-1913 trends. Essentially, we estimate what would have happened had the Tsarist economy continued with the same productivity trends and distortions as it had before 1913. Moreover, as we can quantify the role of each wedge and understand connections between policies and wedges, we can evaluate the contribution of each policy and of each external factor to the performance of the Stalinist and Tsarist economies.
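To fix ideas, here is a minimal sketch of what wedge accounting of this kind typically looks like in a two-sector setting. This is a generic illustration in the spirit of the business cycle accounting literature, not necessarily the authors' exact specification:

    % Two-sector wedge accounting (illustrative, not the authors' exact
    % specification). Sectors: agriculture (A) and manufacturing (M).
    % An intersectoral labour wedge \tau^L_t measures the gap between the
    % marginal revenue products of labour in the two sectors:
    \[
      \frac{p^M_t \, F^M_L(K^M_t, L^M_t)}{p^A_t \, F^A_L(K^A_t, L^A_t)}
        = 1 + \tau^L_t ,
    \]
    % while an intertemporal wedge \tau^I_t distorts the consumption Euler
    % equation:
    \[
      u'(c_t) = \beta \,(1 + \tau^I_t)\,(1 + r_{t+1})\, u'(c_{t+1}).
    \]
    % With all wedges equal to zero the frictionless neoclassical allocation
    % is recovered; the wedges are chosen so that the model reproduces the
    % observed sectoral outputs, inputs, and consumption.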

The disaster of collectivisation

Our analysis of wedges clearly divides Stalin’s industrialisation years 1928-40 into two sub-periods. Up until the mid-1930s, the economic policies resulted in dramatic spikes in wedges and in substantial drops in total factor productivity in both the manufacturing and the agricultural sector. We connect these changes to the hectic and violent attempts to expropriate the peasants through ‘collectivisation’ of land and to the policy of the ‘price scissors’ (forcing peasants to sell agricultural output at below-market prices). The expropriations from peasants were an essential part of Stalin’s policy – the government needed to export grain to pay for the import of industrial equipment. Also, the resulting impoverishment of peasants was supposed to increase their incentives to move to the cities. Naturally, these policies would not have been possible without the use of violence. In 1929, there were 1,300 peasant riots with more than 200,000 participants (Khlevnyuk 2009). In March 1930 alone, there were more than 6,500 riots with 1.4 million peasants participating. All these riots were brutally suppressed.

The collectivisation resulted in a dramatic fall in agricultural productivity and, subsequently, in an unprecedented famine which cost about 6 million lives. The fall in output, in turn, undermined the attempts to import the necessary industrial capital. GDP stagnated and the first five-year plan was not fulfilled.

Given the disastrous outcomes of collectivisation, the government retreated and pursued more balanced policies. From 1935, the wedges normalised and declined to pre-1913 levels and even lower. Agricultural TFP rose back to the long-run trends; manufacturing TFP increased but stayed substantially below the trend (actually, at the level of 1913). On the other hand, by the end of the 1930s, the Soviet economy had more capital and labour in the modern sector than the pre-1913 trends would imply.

Figure 2. TFP in the manufacturing and agricultural sectors



To evaluate the costs and benefits of Stalin’s industrialisation, we calculate Russian consumers’ welfare (discounted utility). We find that Stalin’s economic policies resulted in exceptionally large losses of welfare in 1928-40 (about 24% of aggregate consumption relative to the counterfactual based on pre-1913 TFP growth trends and wedges).
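Welfare losses of this kind are usually stated in consumption-equivalent terms. Here is a sketch of the standard construction, offered as a gloss on the 24% figure rather than as the authors' exact formula:

    % Consumption-equivalent welfare loss (illustrative). Let c_t be
    % consumption under Stalin's policies and \hat{c}_t consumption in the
    % counterfactual. The loss \lambda solves
    \[
      \sum_{t=1928}^{1940} \beta^{\,t-1928}\, u\!\big((1+\lambda)\,c_t\big)
        = \sum_{t=1928}^{1940} \beta^{\,t-1928}\, u(\hat{c}_t),
    \]
    % i.e. consumption under Stalin would have had to rise by a fraction
    % \lambda (here about 0.24) in every year of 1928-40 to leave households
    % as well off as in the counterfactual.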

Counterfactual analysis and the role of international trade

In order to obtain the upper bound of Stalin’s potential long-term contribution to the Soviet economy, we consider a scenario where WWII did not happen and Stalin’s industrialisation continued – with the capital levels and wedges achieved in the late 1930s. Being generous to Stalin, we assumed that he would manage to grow TFP in both sectors at pre-1913 rates. Under these assumptions we found that post-1940, Stalin’s economy would have brought Soviet citizens non-trivial gains (about 16% of aggregate consumption). However, once we compare the short-run losses in 1928-40 and the post-1940 long-run gains, we find that losses still outweigh the gains. The lifetime welfare of the generation born in 1928 would be 1% lower under Stalin.

We also carry out comparisons with Japan and with a scenario in which Russia would have continued with the dual-track ‘New Economic Policy’ that was conducted before Stalin’s ‘Great Turn’ in 1928. Both scenarios result in higher welfare for the Soviet citizens.

Figure 3. Counterfactual analysis



Furthermore, we study the role of foreign trade. Part of the success of Stalin’s industrialisation was due to the drastic reduction in foreign trade caused by the isolation of the Soviet Union (see Figure 4). Most likely, the reduction in trade would have happened anyway had Russia remained a market economy. Indeed, the Great Depression resulted in a substantial fall in Russia’s terms of trade; in a market economy, this would have resulted in a reallocation of labour and capital from agriculture to industry.

Figure 4. The isolation of the Soviet Union



Conclusion

Therefore our answer to the ‘Was Stalin Necessary?’ question is a definite ‘no’. Even though we do not consider the human tragedy of famine, repression and terror, and focus on economic outcomes alone, and even when we make assumptions that are biased in Stalin’s favour, his economic policies underperform the counterfactual. We believe Stalin’s industrialisation should not be used as a success story in development economics, and should instead be studied as an example where brutal reallocation resulted in lower productivity and lower social welfare.
Refs:
  • Cheremukhin, A, M Golosov, S Guriev, and A Tsyvinski (2013), “Was Stalin Necessary for Russia’s Economic Development?”, NBER Working Paper No. 19425.
  • Khlevnyuk, Oleg (2009), Master of the house: Stalin and his inner circle, Yale University Press.

The Economics of World War I. 5 and 6

Two more in the series of posts from The Economics of World War I at VoxEU.org.
The US learned the wrong lessons from WWI
Hugh Rockoff, 04 October 2014
World War I profoundly altered the structure of the US economy and its role in the world economy. However, this column argues that the US learnt the wrong lessons from the war, partly because a halo of victory surrounded wartime policies and personalities. The methods used for dealing with shortages during the war were simply inappropriate for dealing with the Great Depression, and American isolationism in the 1930s had devastating consequences for world peace.

World War I: Why the Allies won
Stephen Broadberry, 11 November 2014
In the massive circumstances of total war, economic factors play the deciding role. Historians emphasise size in explaining the outcome of WWI, but this column argues that quality mattered as well as quantity. Developed countries mobilised resources in disproportion to their economic size – the level of development acted as a multiplier. With their large peasant sectors, the Central Powers could not maintain agricultural output as wartime mobilisation redirected resources from farming. The resulting urban famine undermined the supply chain behind the war effort.

Dr. Israel Kirzner's keynote address on F. A. Hayek and the Nobel Prize

To reflect on the significance of Hayek’s Nobel Prize and the various strands of influence his work has had in subsequent decades of scholarship, the Mercatus F. A. Hayek Program for Advanced Study in Philosophy, Politics, and Economics hosted a keynote speech and panel discussion by some of Hayek’s most prominent colleagues and interlocutors. They discussed the breadth of Hayek’s vision, his contribution, and its influence on the research of other elite economic thinkers. The keynote address was given by Dr. Israel Kirzner.

Saturday 27 December 2014

How effective is the minimum wage at supporting the poor?

This is an important question, since one good reason for supporting a minimum wage would be if it were an effective antipoverty policy measure. Such a belief relies on two assumptions: first, that raising the minimum wage will increase the incomes of poor families; and second, that the minimum wage imposes little or no public or social costs.
The policy debate over the minimum wage principally revolves around its effectiveness as an antipoverty program. A popular image used by both sides of the debate consists of families with breadwinners who earn low wages to support their children. Policies that raise the wages of these workers increase their earnings and contribute to their escaping poverty. As a counterbalance to this impact, opponents of the minimum wage argue that wage regulation causes some low-wage workers to lose their jobs and suffer income drops. The issue, then, becomes a tradeoff: some low-income breadwinners will gain and others will lose. Promoters of the minimum wage retort that employment losses are quite small and, consequently, the workers who gain far outnumber those who lose.

In addition to potential adverse employment effects, opponents of minimum wages further counter the belief that the minimum wage assists poor families by documenting that many minimum-wage workers are not breadwinners of low-income families. They are, instead, often teenagers, single heads of household with no children, or not even members of low-income families. Promoters of the minimum wage admit that some of these groups may also benefit from the wage increase, but since few workers lose jobs, they contend that the minimum wage still benefits low-income families with children.

The notion that the minimum wage can be increased with little or no economic cost underlies many advocates’ assessments of the effectiveness of the minimum wage in its antipoverty role. Most economists agree that imposing wage controls on labor will not raise total income in an economy; indeed, elementary economics dictates that such market distortions lead to reduced total income implying fewer overall benefits than costs. If, however, one presumes that employment losses do not occur and total income does not fall, then the minimum wage debate becomes a disagreement over how it redistributes income. The efficacy of a minimum wage hike as an antipoverty program depends on who benefits from the increase in earnings and who pays for these higher earnings. Whereas a number of studies have documented who benefits, who pays is far less obvious. But someone must pay for the higher earnings received by the low-wage workers.

At the most simplistic level, the employer pays for the increase. However, businesses don't actually pay, for they are merely conduits for transactions among individuals. Businesses have three possible responses to the higher labor costs imposed by the minimum wage. First, they can reduce employment or adjust other aspects of the employment relationship (e.g., fewer fringe benefits or training opportunities), in which case some low-wage workers pay themselves through loss of their jobs or by receiving fewer non-salary benefits; second, firms can lose profits, in which case owners pay; and, third, employers can increase prices, wherein consumers pay.

Of these three sources, the idea that low-wage workers bear any cost of the minimum wage has been largely dismissed by proponents in recent years based on several (albeit much disputed) studies that found little or no job loss following historical increases in federal and state minimum wages. While the extra resources needed to cover higher labor costs could theoretically come out of profits, several factors suggest that this source is the least likely to bear costs. Capital and entrepreneurship are highly mobile and will eventually leave any industry that does not yield a return comparable to that earned elsewhere. This means that capital and entrepreneurship, and hence profits, will not bear any significant portion of a “tax” imposed on a particular factor of production. Stated differently, employers in low-wage industries are typically in highly competitive industries such as restaurants and retail stores, and the only option for these low profit margin industries becomes lowering exposure to low-wage labor or raising prices. With jobs presumed to be unaffected, this leaves higher prices as the most likely candidate for covering minimum wage costs. In fact, supporters of minimum and living wage initiatives often admit that slight price increases pay for higher labor costs following minimum wage hikes.
The above comes from a new paper, How Effective Is the Minimum Wage at Supporting the Poor?, by Thomas MaCurdy of the Department of Economics at Stanford University.

MaCurdy sets out to evaluate the redistributive effects of the minimum wage adopting the view implicitly held by its advocates, that is, the study examines the antipoverty effectiveness of this policy presuming that firms raise prices to cover the full amount of their higher labor costs induced by the rise in wages.
In particular, the analysis simulates the economy taking into account both who benefits and who pays for a minimum wage increase assuming that its costs are all passed on solely in the form of higher consumer prices. The families bearing the costs of these higher prices are those consumers who purchase the goods and services produced with minimum-wage labor. In actuality, most economists expect some of these consumers would respond to the higher prices by purchasing less, but such behaviors directly contradict the assertion of no employment effects since lower purchases mean that fewer workers would be needed to satisfy demand. Consequently, to keep faith with the view held by proponents, the simulations carried out in this study assume that consumers do not alter their purchases of the products and services produced by low-wage labor and they bear the full cost of the minimum wage rise. This approach, then, maintains the assumption of a steady level of employment, the “best-case” scenario asserted by minimum-wage proponents. Although highly stylized and probably unrealistic, the following analysis demonstrates that the minimum wage can have unintended and unattractive distributional effects, even in the absence of the employment losses predicted by economic theory.
To evaluate the distributional impacts of an increase in the minimum wage, MaCurdy investigates the circumstances in the U.S.A. in the 1990s, when the federal minimum wage increased from US$4.25 in 1996 to US$5.15 in 1997. (In 2014 dollars, this increase corresponds to a change from US$6.40 to US$7.76.)
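The 2014-dollar figures are a simple inflation adjustment. Backing the implied deflator out of the numbers quoted above gives a quick consistency check (a sketch, not an official CPI calculation):

    # Back out the implied 1996->2014 deflator from the quoted figures,
    # then check the second conversion.
    old_min, new_min = 4.25, 5.15    # federal minimum wage, US$, 1996 and 1997
    old_in_2014 = 6.40               # quoted 2014-dollar value of US$4.25

    deflator = old_in_2014 / old_min                                   # about 1.506
    print(f"US$ {new_min} in 2014 dollars: {new_min * deflator:.2f}")  # -> 7.76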
To identify families supported by low-wage workers and to measure effects on their earnings and income, this analysis uses data from waves 1-3 of the 1996 Survey of Income and Program Participation (SIPP). To translate the higher earnings paid to low-wage workers into the costs of the goods and services produced by them, this study relies on national input-output tables constructed by the Minnesota Impact Analysis for Planning (IMPLAN) Group, matched to a time period comparable with SIPP’s. To ascertain which families purchase the goods and services produced by low-wage workers and how much more they pay when prices rise to pay for minimum wage increases, this study uses data from the Consumer Expenditure Survey (CES), again matched to the same time period as SIPP’s. The contribution of this study is not to estimate the distribution of benefits of the minimum wage, nor is it to estimate the effect on prices; both of these impacts have already been done in the literature. Instead the goal of this paper is to put the benefits and cost sides together to infer the net distributional impacts of the minimum wage on different categories of families and to translate this impact into a format readily accessible to economists and policymakers.

To provide an economic setting for evaluating the distributional measures presented here, this study develops a general equilibrium (GE) framework incorporating minimum wages. [Details of the GE model are given in the Appendix to the paper] This model consists of a two-sector economy with the two goods produced by three factors of production: low-wage labor, high-wage labor, and capital. A particular specification of this GE model justifies the computations performed in the analysis, and entertaining alterations in its behavioral elements permits an assessment of how results might change with alternative economic assumptions. The model proposed here goes well beyond what is currently available in the literature, which essentially relies on a Heckscher-Ohlin approach with fixed endowments (supplies) of labor and capital inputs. In contrast, the GE model formulated in this study admits flexible elasticities for both input supplies and for consumer demand, as well as a wide range of other economic factors.
As to results: remember that the exercise described in this paper simulates the distributional impacts of the rise in the federal minimum wage from US$4.25 to US$5.15 implemented in 1996-97.
Following the assumptions maintained by advocates, the simulation presumes (i) that low-wage workers earned this higher wage with no change in their employment or any reduction in other forms of compensation, (ii) that these higher labor costs were fully passed on to consumers through higher prices, and (iii) that consumers simply paid the extra amount for the goods produced by low-wage labor with no change in the quantities purchased. The cost of this increase is about 15 billion dollars, which was nearly half the amount spent by the federal government on such antipoverty programs as the federal EITC, AFDC/TANF, or the Food Stamp program. The analysis assesses the extent to which various categories of families benefit from higher earnings, and how much more these groups pay as consumers through higher prices. Combining these two sides yields a picture of who gains and who pays for minimum wage increases, including the net effects for families.
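The accounting behind the simulation can be summarised in a few lines: each family's net impact is its extra low-wage earnings minus the extra it pays in prices, with the cost side spread across families in proportion to their spending on goods produced with minimum-wage labour. A stylised sketch with invented toy numbers (the paper itself uses SIPP earnings, IMPLAN input-output tables, and CES spending data):

    # Stylised accounting identity behind MaCurdy's simulation: net impact =
    # extra earnings from the wage hike minus extra spending when the full
    # cost is passed through to prices. Toy numbers, for illustration only.
    families = [
        # (label, annual low-wage hours, spending on low-wage-intensive goods)
        ("poor family, low-wage worker",     1000, 12_000),
        ("poor family, no low-wage worker",     0, 10_000),
        ("rich family, low-wage worker",     1000, 40_000),
        ("rich family, no low-wage worker",     0, 35_000),
    ]

    wage_hike = 5.15 - 4.25                  # US$0.90 per hour
    total_cost = sum(h for _, h, _ in families) * wage_hike
    total_spend = sum(s for _, _, s in families)
    implied_tax = total_cost / total_spend   # price rise acts like a sales tax

    for label, hours, spend in families:
        net = hours * wage_hike - spend * implied_tax
        print(f"{label:34s} net impact: {net:+9.2f}")

Even in this toy version the pattern MaCurdy finds emerges: poor families without a low-wage worker are net losers, while rich families containing a low-wage worker are net winners.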

On the benefit distribution side, as other research has shown, the picture portrayed by this analysis sharply contradicts the view held by proponents of the minimum wage. Low-wage families are typically not low-income families. The increased earnings received by the poorest families are only marginally higher than those received by the wealthiest. One in four families in the top fifth of the income distribution has a low-wage worker, which is the same share as in the bottom fifth. Virtually as much money goes to the highest-income families as to the lowest. While advocates compare wage levels to the poverty threshold for a family to make the case for raising the minimum wage, less than $1 in $5 of the additional earnings goes to families with children that rely on low-wage earnings as their primary source of income. Moreover, as a pretax increase, 22% of the incremental earnings are taxed away as Social Security contributions and state and federal income taxes. The message of these findings is clear: contrary to conventional wisdom, raising the minimum wage is a wasteful way of targeting the poor.

Turning to who pays the costs of an increase in the federal minimum wage through higher prices, the analysis reveals that the richest fifth of families do pay a much larger share (three times more) than those in the poorest fifth. This outcome reflects the fact that wealthier families simply consume much more. However, when viewed as a percentage of expenditures, the picture looks far less appealing. Expressed as a percentage of families’ total nondurable consumption, the extra costs from higher prices are slightly above 0.5% for families at large. The picture worsens further when one considers costs as a percentage of the types of consumption normally included in the calculation of state sales taxes, a calculation that excludes a number of necessities such as food and health care. Here, the implied costs approximately double as a percentage of expenditure. More important, the minimum-wage costs as a share of “taxable” annual expenditures fall monotonically with families’ income. In other words, the costs imposed by the minimum wage are paid in a way that is more regressive than a sales tax.

On net, the minimum wage does redistribute income slightly in favor of lower-income families, with higher-income families paying more in increased prices than they benefit from the rise in their earnings. However, adverse impacts occur within income groups. Whereas less than one in four low-income families benefit from a minimum wage increase of the sort adopted in 1996, all low-income families pay for this increase through higher prices, rendering three in four low-income families net losers. Meanwhile, many higher-income families are net winners.

Political support for the minimum wage largely depends on the apparent clarity of who benefits and the inability to trace who pays for the wage increase, irrespective of whether costs are paid through higher prices, or lower profits, or cutbacks in jobs or employee benefits. As shown in this study, the benefits created by the minimum wage go to families essentially evenly distributed across the income distribution; and, when minimum wage increases are paid through higher prices, the induced rise in consumption expenditures mimics the imposition of a sales tax with a higher tax rate enacted on the goods and services purchased disproportionately by low-income families. Consequently, a minimum-wage increase effectively emulates the imposition of a “national tax” that is more regressive than a typical sales tax, with its proceeds allocated to families unrelated to their income. This characterizes the income transfer properties of the minimum wage, properties that many might not expect of an antipoverty program.
The highlighted sections are some of the more important takeaway bits from the study.

In summary, MaCurdy adopts a “best-case” scenario taken from minimum-wage advocates. His study projects the consequences of the increase in the national minimum wage instituted in 1996 on the redistribution of resources among rich and poor families. Under this scenario, the minimum wage increase acts like a regressive sales tax in its effect on consumer prices and is in fact even more regressive than a typical state sales tax. With the proceeds of this national sales tax collected to fund benefits, the 1996 increase in the minimum wage distributed the bulk of these benefits to the one in four families with a low-wage worker, spread nearly evenly across the income distribution. Far more poor families suffered reductions in resources than gained. As many rich families gained as poor families. These income transfer properties of the minimum wage document its considerable inefficiency as an antipoverty policy.

(HT: thanks to Tim Worstall for pointing out the study.)

Returns to innovation

Recently Tim Worstall has been reminding us of a 2004 paper, “Schumpeterian Profits in the American Economy: Theory and Measurement”, by William D. Nordhaus. The point of the paper is that entrepreneurs capture less than 3% of the social returns to their innovations. The paper's abstract reads:
The present study examines the importance of Schumpeterian profits in the United States economy. Schumpeterian profits are defined as those profits that arise when firms are able to appropriate the returns from innovative activity. We first show the underlying equations for Schumpeterian profits. We then estimate the value of these profits for the non-farm business economy. We conclude that only a minuscule fraction of the social returns from technological advances over the 1948-2001 period was captured by producers, indicating that most of the benefits of technological change are passed on to consumers rather than captured by producers.
Back in 2004 Don Boudreaux blogged on the paper at the Cafe Hayek blog. He said,
In a recent NBER working paper – “Schumpeterian Profits in the American Economy: Theory and Measurement” – Yale economist William Nordhaus estimates that innovators capture a mere 2.2% of the total “surplus” from innovation. (The total surplus of innovation is, roughly speaking, the total value to society of innovation above the cost of producing innovations.) Nordhaus’s data are from the post-WWII period.

The smallness of this figure is astounding. If it is anywhere close to being an accurate estimate, the implication is that “society” pays a paltry $2.20 for every $100 worth of welfare it enjoys from innovating activities.

Why do innovators work so cheaply? One possible reason is alluded to by Nordhaus himself: excess optimism. Nordhaus suggests that over-optimism might explain the late 1990s tech-market equity bubble. The social gains from innovation were in fact very large, but the ability of investors to capture more than a small sliver of these gains – rather than see these gains flow to consumers in the form of lower prices and improved products – proved undoable.

Another possible explanation for why innovators work so cheaply is that the prospects, few as they might be, for capturing gargantuan shares of the gains from innovation are sufficiently attractive that even rational, well-informed entrepreneurs and investors perform and fund innovating activities, each hoping that he or she will be among the tiny but inordinately lucky handful of entrepreneurs and investors who personally do capture a much-much-greater-than-normal share of the value of their innovative endeavors.

Whatever the reason, Nordhaus’s empirical evidence supports (at least my) casual observation that innovative economic activity yields benefits that are both enormous and widespread.
Boudreaux has now added an Addendum to the above blog posting noting the implications of Nordhaus's paper for the current debate about income inequality. He writes,
Nordhaus’s findings are relevant also to discussions of income inequality. His findings show that successful entrepreneurs have already, in the very process of succeeding in the market and becoming wealthy, increased the wealth of ‘society’ – have ‘given’ to others – far more than each successful entrepreneur has increased his or her own individual wealth. This process of enhancing the economic well-being of countless others through successful market innovation is neither intended nor choreographed by government, but this fact doesn't make the results any less real or significant.

True, in a society in which people are not all equally innovative and driven and risk-tolerant, the measured monetary results of such successful innovation are that some individuals (the successful entrepreneurs) gain more wealth than is gained by other individuals (those who passively prosper simply by being a consumer and worker in an innovation-filled market economy). Measured monetary incomes, therefore, do become less equal.

But why do we so seldom hear from the fairness-obsessed, we’re-all-in-this-together crowd any expressions of concern about the great inequality of net contributions to total wealth? Where is the concern over this “unfairness”? Compared to successful market entrepreneurs, people who choose to consume much leisure or who remain consistently afraid to risk their wealth on entrepreneurial ventures enjoy over their lifetimes a higher ratio of wealth-increases to their own contributions-to-wealth. If we are to be concerned with cosmic fairness or “social justice” or “inequality,” why is this inequality one that is or ought to be ignored?
I still find, like Boudreaux, the smallness of the 2.2% figure astounding. This does tell us that "society" gets a very good deal out of entrepreneurs, and thus instead of complaining about the absolute size, in terms of the number of dollars, of the 2.2%, we should just be very happy with our 97.8%. Incentives matter, and such a small percentage is a small price to pay for the incentive it gives for generating innovation and growth.
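For concreteness, the arithmetic behind that split is just Nordhaus's appropriation share applied to each $100 of total surplus:

    # Nordhaus's estimate: producers appropriate about 2.2% of the social
    # surplus from innovation; consumers get the rest.
    appropriation_share = 0.022
    per_100_to_innovators = 100 * appropriation_share        # US$2.20
    per_100_to_consumers = 100 * (1 - appropriation_share)   # US$97.80
    print(per_100_to_innovators, per_100_to_consumers)       # -> 2.2 97.8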

Friday 26 December 2014

An alternative view of the Mississippi System and the South Sea Bubble

From Richard Cantillon: Entrepreneur and Economist by Antoin E. Murphy
Before dealing with the complex interrelationship between [John] Law and Cantillon it is necessary to explain the monetary environment in which they operated, an environment popularized in the public eye through images of bubbles, speculative mania, and irrational behaviour during what amounted to the first major stock market boom in Europe in 1718–20. This boom involved what were popularly known as the Mississippi System and the South Sea Bubble. The popular image of these stock market booms needs to be counterbalanced by the alternative view that interprets the Mississippi System and the South Sea Company as serious attempts at financial innovation, as efforts to free policy from the suffocating constraints imposed on it by the management of the war-induced national debt. These financial innovations failed, not because the idea behind them was fundamentally flawed, but because their projectors, along with their political backers, were over adventurous and too many ‘invisible hands’ found their way into the public purse (p. 65).

Wednesday 24 December 2014

Some proper economics research for a change: LBW decisions and bias by umpires.

One of the most important questions in economics has to do with whether pressure from home crowds affects the decision making of sports officials. A new column at VoxEU.org investigates this problem using new data from cricket matches. The authors find that neutral umpires decrease the bias against away teams, making neutral officials very important for a fair contest.

'Leg before wicket' decisions in cricket provide a fascinating setting in which to study bias in decision making by umpires.
Umpiring decisions in cricket provide a fascinating case study in which to study the issue. In the first place, decisions such as whether the batsman is out ‘leg before wicket’ (LBW) require significant judgement from the umpire in a very short period of time (less than 10 seconds). At least until recently, umpires have had complete discretion over these decisions, which can have crucial impacts on the outcome of matches (Chedzoy 1997). Unusually amongst professional sports, international cricket continues to use officials of the same nationality as the home team. Throughout most of the history of test cricket, both umpires were from the same country as the home team. In 1994, the regulations were changed and one of the umpires was required to be from a neutral country. From 2002, both umpires were required to be neutral. In One Day International (ODI) cricket, there is still one home and one neutral umpire in most matches. Unsurprisingly, cricket fans and sometimes players have long held suspicions that decisions by home umpires tend to favour the home team. The notorious altercation between the former England cricket captain Mike Gatting and Pakistani umpire Shakoor Rana in 1987 led to an international diplomatic incident, the ramifications of which were felt for many years.

Despite this, academic study of officials’ decision making in cricket has been limited to a handful of articles. An investigation by Sumner and Mobley in the New Scientist in 1981 was the first to focus on leg before wicket decisions against home and away teams, followed by Crowe and Middeldorp (1996) and Ringrose (2006). Although these articles broadly concluded that away teams suffer more leg before wicket decisions against them than do home teams, none was able to establish statistically meaningful links between neutrality of umpire and decisions against home and away teams.
Recent research by Ian Gregory-Smith, David Paton and Abhinav Sacheti takes a new look at the issue.
It was this issue that we sought to address in our recent article (Sacheti et al. 2014). We collected data from 1,000 test matches played between 1986 and 2012 from ESPNCricInfo. The changes to regulations about neutral umpires provided us with an ideal ‘natural experiment’; in our sample, around 20% of matches were umpired by home officials, 35% by one home and one neutral official, and 45% by two neutral officials. We also controlled for the quality of team, venue (as each pitch may have distinct characteristics making it more or less conducive to leg before wicket decisions than others), and even the experience of the umpires in the match, among other things. With these controls in place, we found striking results, as shown in Table 1 below.
  • During the period when there were two home umpires, home teams had a clear advantage.
Reading across the columns for the Home batting marginal effect in Table 1, batsmen in away teams were given out leg before wicket about 16% more often than batsmen in home teams.
  • However, with one neutral umpire, the bias against away teams receded to 10%, and in the matches with two neutral umpires there was no home advantage at all.
It would thus seem that having neutral officials is very important for a fair contest.

Table 1. Negative binomial model of number of leg before wicket (LBW) decisions per innings
Notes: (i) Robust standard errors in brackets, clustered by match; (ii) *Significant at the 10% level. **Significant at the 5% level. ***Significant at the 1% level; (iii) ‘Home marginal effect’ is calculated as the Average Marginal Effect (see Cameron and Trivedi 2010, p.576); (iv) Controls are umpire experience; log of overs; innings; country level dummies for each home team; batting team effects and bowling team effects; (v) For the full table of results and a battery of robustness checks see Sacheti et al. (2014).
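For readers who want to see what a specification of this kind looks like in code, here is a minimal sketch of a negative binomial count model with match-clustered standard errors, in the spirit of (but not identical to) Sacheti et al. (2014); the data file and all variable names are placeholders:

    # Minimal sketch: negative binomial model of LBW decisions per innings,
    # standard errors clustered by match. Placeholder data and names.
    import pandas as pd
    import statsmodels.api as sm

    df = pd.read_csv("lbw_innings.csv")   # hypothetical innings-level data

    # Away-batting dummy, umpire-regime dummies, and a few of the paper's
    # controls (umpire experience, log of overs, innings number).
    X = df[["away_batting", "one_neutral_umpire", "two_neutral_umpires",
            "umpire_experience", "log_overs", "innings_number"]].astype(float)
    # Bias term: the away team batting under two home umpires
    # (the omitted regime).
    X["away_x_home_umpires"] = df["away_batting"] * df["home_umpires"]
    X = sm.add_constant(X)

    model = sm.GLM(df["lbw_count"], X, family=sm.families.NegativeBinomial())
    result = model.fit(cov_type="cluster", cov_kwds={"groups": df["match_id"]})
    print(result.summary())

The coefficient of interest is the interaction between batting away and being officiated by home umpires; a positive, significant estimate would correspond to the bias reported in Table 1.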
So the next question is: crowd pressure or favouritism?
An obvious question is whether the apparent bias in favour of home teams was caused by crowd pressure. We examined this by comparing results between the first two innings and the final two innings of test matches. The rationale is that crowds tend to be higher in the early stages of a test match and decline significantly later on (Hynds and Smith 1994). We found that the advantage to home teams from home umpires was strongest in the final two innings of the match. So, there is little evidence that bias towards home teams from home umpires was driven primarily by crowd pressure.
What, you may ask, of the decision review system (DRS)?
In our sample there were 71 matches in which the decision review system was in place. Leg before wicket appeals or decisions in these matches can be referred to a third umpire who has the benefit of watching a slow-motion replay of the appeal or decision. All these matches had two neutral umpires, so we cannot use these data to identify any effect of favouritism by home umpires. However, any differences between home and away teams in referred decisions could indicate favouritism by neutral umpires towards home (or away) teams. Out of the 389 referred leg before wicket decisions in our sample, almost exactly the same proportion went against the away team as against the home team. This is consistent with our main finding that neutral umpires do not display bias.
So, conscious or unconscious favouritism?
It is important to note that our results do not necessarily suggest that home umpires deliberately tended to favour their own team. It is possible that home umpires could favour home teams sub-consciously. Our research does not attempt to examine the motivations of umpires. It is clear, however, that the introduction of neutral umpires in test cricket overcame the problem of home bias. This finding is important given the continued presence of home umpires in One Day Internationals and also because some commentators are suggesting a return to home umpires in test cricket on the grounds that new technology such as the decision review system makes it easier to reduce poor decision making. However, whilst the decision review system offers a ‘check’ of umpires’ decisions, it still allows some subjective decisions to stay in favour of the on-field umpire’s call. So in the light of our results, any proposal to revert to home umpires in test cricket should be treated with some caution.
Refs:
  • Cameron, A C and Trivedi, PK (2010), Microeconometrics using Stata, Texas: StataCorp LP.
  • Chedzoy, O B (1997), “The effect of umpiring errors in cricket”, The Statistician, 46, 529-540.
  • Crowe, S M and Middeldorp, J (1996), “A Comparison of Leg Before Wicket Rates Between Australians and Their Visiting Teams for Test Cricket Series Played in Australia, 1977-94”, The Statistician, 45, 255-262.
  • ESPNcricinfo (2010-12). Available from http://www.cricinfo.com (First accessed on December 5 2010).
  • Hynds, M and Smith, I (1994), “The demand for test match cricket”, Applied Economics Letters, 1, 103-106.
  • Ringrose, T J (2006), “Neutral umpires and leg before wicket decisions in test cricket”, Journal of Royal Statistical Society: Series A (Statistics in Society), 169, 903-911.
  • Sacheti, A, Gregory-Smith, I and Paton, D (2014), “Home bias in officiating: evidence from international cricket”, Journal of the Royal Statistical Society: Series A (Statistics in Society).
  • Sumner, J and Mobley, M (1981), “Are cricket umpires biased?” New Scientist, 91, 29-31.

Management and productivity: An interview with Nicholas Bloom

The following comes from an interview with Stanford University economist Nicholas Bloom available online at "Econ Focus", the economics magazine of the Federal Reserve Bank of Richmond.
EF: Another branch of your research has focused on how management practices affect firm and country productivity. Why do you think management practices are so important?

Bloom: My personal interest was formed by working at McKinsey, the management consulting firm. I was there for about a year and a half, working in the London office for industrial and retail clients.

There's also a lot of suggestive evidence that management matters. For example, Lucia Foster, John Haltiwanger, and Chad Syverson found using census data that there are enormous differences in performance across firms, even within very narrow industry classifications. In the United Kingdom years ago, there was this line of biscuit factories — cookie factories, to Americans — that were owned by the same company in different countries. Their productivity variation was enormous, with these differences being attributed to variations in management. If you look at key macro papers like Robert Lucas' 1978 "span of control" model or Marc Melitz's 2003 Econometrica paper, they also talk about productivity differences, often linking this with management.

Economists have, in fact, long argued that management matters. Francis Walker, a founder and the first president of the American Economic Association, ran the 1870 U.S. census and then wrote an article in the first year of the Quarterly Journal of Economics, "The Source of Business Profits." He argued that management was the biggest driver of the huge differences in business performance that he observed across literally thousands of firms.

Almost 150 years later, work looking at manufacturing plants shows a massive variation in business performance; the 90th percentile plant now has twice the total factor productivity of the 10th percentile plant. Similarly, there are massive spreads across countries — for example, U.S. productivity is about five times that of India.

Despite the early attention on management by Francis Walker, the topic dropped down a bit in economics, I think because "management" became a bad word in the field. Early on I used to joke that when I turned up at seminars people would see the "M-word" in the seminar title and their view of my IQ was instantly minus 20. Then they'd hear the British accent, and I'd get 15 back. People thought management was quack doctor research — all pulp-fiction business books sold in airports.

Management matters, obviously, for economic growth — if we could rapidly improve management practices, we would quickly end the current growth slowdown. It also matters for public services. For example, schools that regularly evaluate their teachers, provide feedback on best practices, and use data to spot and help struggling students have dramatically better educational outcomes. Likewise, hospitals that evaluate nurses and doctors to provide feedback and training, address struggling employees, and reward high performers provide dramatically better patient care. I teach my Stanford students a case study from Virginia Mason, the famous Seattle hospital that put in place a huge lean-management overhaul and saw a dramatic improvement in health care outcomes, including lower mortality rates. So if I get sick, I definitely want to be treated at a well-managed hospital.

EF: How much of the productivity differences that you just discussed are driven by management?

Bloom: Research from the World Management Survey that Raffaella Sadun, John Van Reenen, and I developed suggests that management accounts for about 25 percent of the productivity differences between firms in the United States. This is a huge number; to give you a benchmark, IT or R&D appears to account for maybe 10 percent to 20 percent of the productivity spread based on firm and census data. So management seems more important even than technology or innovation for explaining variations in firm performance.
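
As a rough illustration of what a claim like "management accounts for about 25 percent of the productivity differences" amounts to, here is a hypothetical variance-decomposition sketch. The management scores and TFP figures are simulated, not from the World Management Survey; the point is only the mechanics of attributing a variance share.

# Simulate log TFP so that a management score truly accounts for 25%
# of its variance, then recover that share by regressing TFP on
# management (slope and R^2 computed by hand).
import numpy as np

rng = np.random.default_rng(1)
n = 5_000
management = rng.normal(size=n)                      # standardised score
other = rng.normal(size=n)                           # IT, R&D, luck, ...
log_tfp = 0.5 * management + np.sqrt(0.75) * other   # mgmt share = 0.25

beta = np.cov(management, log_tfp)[0, 1] / np.var(management, ddof=1)
fitted = beta * management
r2 = np.var(fitted) / np.var(log_tfp)
print(f"share of TFP variance attributed to management: {r2:.2f}")  # ~0.25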

Coincidentally, you do the same exercise across countries and it's also about 25 percent. The share is actually higher between the United States and Europe, where it's more like a third, and it's lower between the United States and developing countries, where it's more like 10 to 15 percent.

Now, you may not be surprised to learn that there are significant productivity differences between India and the United States. But you look at somewhere like the United Kingdom, and it's amazing: Its productivity is about 75 percent of America's. The United Kingdom is a very similar country in terms of education, competition levels, and many other things. So what causes the gap? It is a real struggle to explain what it is beyond, frankly, management.

EF: What can policy do to improve management practices?

Bloom: I think policy matters a lot. We highlight five policies. One is competition. I think the key driver of America's management leadership has been its big, open, and competitive markets. If Sam Walton had been based in Italy or in India, he would have five stores by now, probably called "Sam Walton's Family Market." Each one would have been managed by one of his sons or sons-in-law. Whereas in America, Walmart now has thousands of stores, run by professional nonfamily managers. This expansion of Walmart has improved retail productivity across the country. Competition generates a lot of diversity through rapid entry and exit, and the winners get big very fast, so best practices spread rapidly in competitive, well-functioning markets.

The second policy factor is rule of law, which allows well-managed firms to expand. Having visited India for the work with Benn Eifert, Aprajit Mahajan, David McKenzie, and John Roberts, I can say this: The absence of rule of law is a killer for good management. If you take a case to court in India, it takes 10 to 15 years to come to fruition. In most developing countries, the legal system is weak; it is hard to successfully prosecute employees who steal from you or customers who do not pay their invoices, leading firms to use family members as managers and to supply only a narrow group of trusted customers. This makes it very hard to be well managed — if most firms have the son or grandson of the founder running the firm, working with the same customers as 20 years ago, then it shouldn't be surprising that productivity is low. These firms know that their sons are often not the best managers, but at least they will not rampantly steal from the firm.

The third policy factor is education, which is strongly correlated with management practices. Educated and numerate employees seem to more rapidly and effectively adopt efficient management practices.

The fourth policy factor is foreign direct investment, as multinational firms help to spread management best practices around the world. Multinational firms are typically incredibly well run, and that spills over. It's even true in America, where its car industry has benefited tremendously from Honda, Toyota, Mitsubishi, and Volkswagen. When these foreign car manufacturers first came to America, they achieved far higher levels of productivity than domestic U.S. firms, which forced the American car manufacturers to improve to survive.

The fifth policy factor is flexible labor regulation, which allows firms to adopt strong management practices unimpeded by government. In places like France, you can't fire underperformers, and as a result it's very hard to enforce proper management.

EF: Do you expect America's productivity advantage to continue?

Bloom: On the above five criteria, the United States scores an "A" on four of them; the exception is education, where we score a "C." The United States has a weak school system and poor education standards compared to a number of our competitors. For example, based on OECD PISA [Programme for International Student Assessment] scores, the U.S. educational system ranks in the mid-20s on math, below many European and East Asian countries. So improving educational standards is the most obvious way to improve management and ultimately growth, because poor education makes it harder to manage our firms. Fixing U.S. education will take more funding. But most importantly, it will require dismantling the cobweb of restrictions that teachers unions and politicians have put on schools, like tenure and seniority-based pay.

If you fix these five drivers of management, you're 95 percent of the way there. Most other factors seem of secondary importance compared to the big five of competition, rule of law, education, foreign direct investment, and regulations.

EF: Management practices can be viewed as "soft" technologies, compared to so-called "hard" technologies such as information technology. Do you see anything special about the invention and adoption of these "soft" technologies relative to "hard" technologies?

Bloom: The only distinction is that hard technologies, like my Apple iPhone, are protected by patents, whereas process innovations are protected by secrecy.

The late Zvi Griliches, a famous Harvard economist, broke it down into two groups: process and product innovations. Most people who think of innovation think of product innovations like the shiny new iPhone or new drugs. But actually a lot of it is process innovations, which are largely management practices.

Good examples would be Frederick Winslow Taylor and scientific management 100 years ago, or Alfred Sloan, who turned a struggling General Motors into the world's biggest company. Sloan pushed power and decision-making down to lower-level individuals and gave them incentives — the arrangement that came to be called the M-form firm. It seems perfectly standard now, but back then firms were very hierarchical, almost Soviet-style. And then there was modern human resources from the 1960s onward — the idea that you want to measure people, promote them, and give them rewards. Most recently, we have had "lean manufacturing," pioneered by Toyota and spreading widely from the 1990s onward, which is now reaching health care and retail. This focused on data collection and continuous improvement.

These have been major milestones in management technologies, and they've changed the way people have thought. They were clearly identified innovations, and I don't think there's a single patent among them. These management innovations are a big deal, and they spread right across the economy.

In fact, there's a management technology frontier that's continuously moving forward, and the United States is pretty much at the front with firms like Walmart, GE, McDonald's, and Starbucks. And then behind the frontier there are a bunch of laggards with inferior management practices. In America, these are typically smaller, family-run firms.

EF: What are the key challenges for future research on management?

Bloom: One challenge is measurement. We want to improve our measurement of management, which is narrow and noisy.

The second challenge is identification and quantification: finding out what causes what and its magnitude. For example, can we quantify the causal impact of better rule of law on management? I get asked by institutions like the World Bank and national governments which policies have the most impact on management practices and what the size of that impact would be. All I can do is give the five-factor list I've relayed here; it's very hard to give any ordering, and there are definitely no dollar signs on them. I would love to be able to say that spending $100 million on a modern court system will deliver $X million in extra output per year.

One way to get around this — the way macroeconomists got around it — is to gather great data going back 50 years and then exploit random shocks to isolate causation. This is what we are trying to do with the World Management Survey. The other way is a bit more deliberate: to run field experiments working with specific firms across countries.
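
The "exploit random shocks" strategy Bloom describes is essentially instrumental variables. A minimal sketch with simulated data (the shock, the management measure and all effect sizes are invented for illustration) shows why a random shock helps isolate causation where a naive regression fails:

# Unobserved "ability" drives both management and output, so OLS
# overstates the effect of management. The shock Z moves management
# but affects output only through it, so the Wald ratio
# cov(Z,Y)/cov(Z,M) recovers the true effect.
import numpy as np

rng = np.random.default_rng(2)
n = 20_000
ability = rng.normal(size=n)                        # unobserved confounder
z = rng.normal(size=n)                              # random shock
m = 0.5 * z + ability + rng.normal(size=n)          # management practices
y = 1.0 * m + 2.0 * ability + rng.normal(size=n)    # output; true effect 1.0

ols = np.cov(m, y)[0, 1] / np.var(m, ddof=1)        # biased upward (~1.9)
iv = np.cov(z, y)[0, 1] / np.cov(z, m)[0, 1]        # close to 1.0
print(f"OLS estimate: {ols:.2f}, IV estimate: {iv:.2f}")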
In the New Zealand context it's interesting to note Bloom's comments on the effects of foreign direct investment. Multinational firms help improve the standard of management in the host country and thus help improve productivity. Add to this the point that Eric Crampton noted about foreign firms paying their employees more, and foreign investment in New Zealand looks better than the anti-FDI crowd would have us believe. Bloom also highlights the advantages of a flexible labour market. And given the small size of the internal New Zealand market, Bloom's point about the importance of competition to good management practices emphasises the need to keep New Zealand open to trade so that local producers face as much competition as possible from foreign firms.

Four principles for an effective state

The following four principles come from the 2014 Nobel prize winner in economics, Jean Tirole. The original column was posted at VoxEU.org on 16 July 2007 (reposted 13 October 2014). Tirole argues that if the French state is to meet the expectations of its citizens, it will have to become more effective. To do this requires, in Tirole's view, a four-pronged approach: restructuring, competition, evaluation and accountability.
Restructuring
Many countries have undertaken fundamental governmental reforms based on a consensus between political parties and unions. In the 1990s, the Swedish Social Democratic government made large cuts in the civil service. Ministers, who formulate overall strategy and make decisions on resource allocation, have to rely on a small number of civil servants. Operational details must therefore be delegated to a large number of independent agencies, each of which can recruit and remunerate its employees as it chooses. These independent agencies operate under strict budgetary limits that ensure the sustained delivery of public services.

Around the same time, Canada cut government expenditure by 18.9% without social turmoil – and without greatly reducing health, justice, or housing programmes. They did this while maintaining tax levies, so the result was a reduced public deficit and falling public debt. Spending that could not be clearly justified in terms of the resulting service to the public was pruned. Subsidies for entrepreneurial projects and privatisation facilitated the elimination of one in six positions in the civil service. Indeed the sort of government reorganisation undertaken in Canada could only be dreamed of in France with its often nightmarish collection of laws and fiscal regulations. The Canadians have a single service for the calculation and collection of taxes and a one-stop-shop for government-business relations.
Competition
Contrary to common beliefs in France, head-on competition can produce high quality public services. In telecommunications, most countries, including France, have put a universal service obligation fund in place, which is compatible with competition between providers. It protects the smallest firms while ensuring that services are available in all regions of the country or to poor consumers.

When it comes to education, several countries (Belgium, the UK, Sweden) have tried voucher systems that give everyone access to education but create competition among schools for students. Such a system must be accompanied by clear and openly available information on schools so parents can make informed choices and “insider-ism” can be avoided (something that arose from the competition among the tracks in the French education system).

Competition can also be created via standardisation. In the healthcare realm, using more systematic comparisons between hospitals, or between the private and public sectors could help control costs. Sometimes the cost of treatment for a given disease varies by a factor of 2.5 with the variation having nothing to do with patient selection.
Evaluation
Every action of the State must be subject to a double independent evaluation. The first should be before the action: Is public intervention necessary? What are the costs and benefits? The second is after. Did it work? Was it cost effective? On this point, it would be necessary to require that the audit recommendations (for example, those of the Audit Court) be either followed according to a strict schedule, or rejected with a convincing justification.
Accountability
The 2001 Law (LOLF), adopted on the basis of a left-right consensus, is a small revolution in a country accustomed to the logic of budgetary processes. Embracing the logic of effectiveness, the law aims to transform public sector managers into true owners where their obligation to produce results goes hand in hand with the freedom to manage. Putting this principle into practice is certainly difficult. First of all, the objectives need to be clear and easily verifiable. Then, “accountability” must be introduced. For that, the objectives can’t be collective (as the failure of control of health expenses has shown), but must be the subject of rewards or sanctions. Lastly, one should be wary of the pernicious effects of “multi-tasking”. Incentives that are related to an easily measured objective (for example, the cost per student for a university, which can be easily reduced by teaching large numbers of students in large lecture halls) can cause one to ignore equally important objectives that one has neglected to measure (such as the quality of teaching or research). In other words, to construct good incentives, one has to evaluate actions comprehensively. That way, it’s clear that giving regulated enterprises more responsibility should go hand in hand with stricter safety and quality controls. The need for such controls is clear from the experience of British telecoms in 1984 and more recently, of British railways.
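
Tirole's multi-tasking warning can be given a toy numerical form. The sketch below is not from Tirole; it is a minimal illustration in the spirit of the Holmström-Milgrom multi-tasking model, with all parameters invented. An agent splits effort between a measured task (the cost-per-student objective) and an unmeasured one (teaching quality); as the bonus on the measured task rises, effort on the unmeasured task is crowded out entirely.

# Toy multi-tasking model: the agent chooses effort e1 on a measured
# task (bonus b1) and e2 on an unmeasured one (weak intrinsic reward
# b2), with quadratic costs and efforts that substitute (k > 0).
import numpy as np

def agent_effort(b1, b2, c=1.0, k=0.5):
    # Interior optimum solves the first-order conditions of
    # max b1*e1 + b2*e2 - (c/2)*(e1**2 + e2**2) - k*e1*e2
    e = np.linalg.solve(np.array([[c, k], [k, c]]), np.array([b1, b2]))
    if e.min() < 0:  # corner solution: the unrewarded task is abandoned
        i = int(np.argmin(e))
        e[i] = 0.0
        e[1 - i] = (b1, b2)[1 - i] / c
    return e

for b1 in (0.3, 0.6, 1.2):  # raise the bonus on the measured task
    e1, e2 = agent_effort(b1, b2=0.3)
    print(f"bonus {b1:.1f}: measured effort {e1:.2f}, unmeasured {e2:.2f}")

With these numbers, raising the measured-task bonus from 0.3 to 0.6 already drives effort on the unmeasured task to zero, which is exactly the perverse effect Tirole warns incentive designers about.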
The French state does have something of a bad history when it comes to effectiveness. French SOEs, for example, have not always been run well. In 1997 the Economist ('Banking's Biggest Disaster', vol. 344, issue 8024, 5 July: 69-71) noted the near-bankruptcy of the then state-owned bank Credit Lyonnais. The magazine pointed out that the then French finance minister, Dominique Strauss-Kahn, had to admit that the bank had probably lost around Ffr100 billion (around US$17 billion). The bank had to be bailed out three times in the 1990s. The total cost to the French taxpayer of the whole debacle has been estimated at between US$20 billion and US$30 billion. Improving on such a record shouldn't be too difficult.

Tuesday 23 December 2014

EconTalk this week

Joshua Angrist of the Massachusetts Institute of Technology talks to EconTalk host Russ Roberts about the craft of econometrics--how to use economic thinking and statistical methods to make sense of data and uncover causation. Angrist argues that improvements in research design along with various econometric techniques have improved the credibility of measurement in a complex world. Roberts pushes back and the conversation concludes with a discussion of how to assess the reliability of findings in controversial public policy areas.

A direct link to the audio is available here.

Sunday 21 December 2014

An updated version of "The Past and Present of the Theory of the Firm"

EconTalk from last week

Gary Marcus of New York University talks with EconTalk host Russ Roberts about the future of artificial intelligence (AI). While Marcus is concerned about how advances in AI might hurt human flourishing, he argues that truly transformative smart machines are still a long way away and that to date, the exponential improvements in technology have been in hardware, not software. Marcus proposes ways to raise standards in programming to reduce mistakes that would have catastrophic effects if advanced AI does come to fruition. The two also discuss "big data's" emphasis on correlations, and how that leaves much to be desired.

A direct link to the audio is available here.

Thursday 11 December 2014

EconTalk for many, many weeks

Thomas Piketty of the Paris School of Economics and author of Capital in the Twenty-First Century talks to EconTalk host Russ Roberts about the book. The conversation covers some of the key empirical findings of the book along with a discussion of their significance.

Martha Nussbaum of the University of Chicago and author of Creating Capabilities talks with EconTalk host Russ Roberts about an alternative to GDP for measuring economic performance at the national level. She is a proponent of the capabilities approach that emphasizes how easily individuals can acquire skills and use them, as well as the capability to live long and enjoy life. Nussbaum argues that government policy should focus on creating capabilities rather than allowing them to emerge through individual choices and civil society.

David Autor of the Massachusetts Institute of Technology talks with EconTalk host Russ Roberts about the future of work and the role that automation and smart machines might play in the workforce. Autor stresses the importance of Michael Polanyi's insight that many of the things we know and understand cannot be easily written down or communicated. Those kinds of tacit knowledge will be difficult for smart machines to access and use. In addition, Autor argues that fundamentally, the gains from machine productivity will accrue to humans. The conversation closes with a discussion of the distributional implications of a world with a vastly larger role for smart machines.

EconTalk host Russ Roberts is interviewed by long-time EconTalk guest Michael Munger about Russ's new book, How Adam Smith Can Change Your Life: An Unexpected Guide to Human Nature and Happiness. Topics discussed include how economists view human motivation and consumer behavior, the role of conscience and self-interest in acts of kindness, and the costs and benefits of judging others. The conversation closes with a discussion of how Smith can help us understand villains in movies.

Luigi Zingales of the University of Chicago's Booth School of Business talks with EconTalk host Russ Roberts about Zingales's essay, "Preventing Economists' Capture." Zingales argues that just as regulators become swayed by the implicit incentives of dealing with industry executives, so too with economists who study business: supporting business interests can be financially and professionally rewarding. Zingales outlines the different ways that economists benefit from supporting business interests and ways that economists might work to prevent that influence or at least be aware of it.

Robert Solow, Professor Emeritus at the Massachusetts Institute of Technology and Nobel Laureate, talks with EconTalk host Russ Roberts about his hugely influential theory of growth and the inspiration to create a model that better reflected the stable long-term growth of an economy. Solow contends that capital accumulation cannot explain a significant portion of the economic growth we see. He makes a critical distinction between innovation and technology, and then discusses his views on Milton Friedman and John Maynard Keynes.

Daron Acemoglu, the Elizabeth and James Killian Professor at the Massachusetts Institute of Technology, talks with EconTalk host Russ Roberts about his new paper co-authored with James Robinson, "The Rise and Fall of General Laws of Capitalism," a critique of Thomas Piketty, Karl Marx, and other thinkers who have tried to explain patterns of data as inevitable "laws" without regard to institutions. Acemoglu and Roberts also discuss labor unions, labor markets, and inequality.

Becky Liddicoat Yamarik, Hospice Palliative Care Physician, talks to EconTalk host Russ Roberts about the joys and challenges of providing care for terminally ill patients. The two discuss the services palliative care provides, how patients make choices about quality of life and when to stop receiving treatment, conflicts of interest between patients and families, and patients' preparedness to make these decisions.

Nobel Laureate Vernon L. Smith of Chapman University talks to EconTalk host Russ Roberts about how Adam Smith's book, The Theory of Moral Sentiments has enriched his understanding of human behavior. He contrasts Adam Smith's vision in Sentiments with the traditional neoclassical models of choice and applies Smith's insights to explain unexpected experimental results from the laboratory.

Emily Oster of the University of Chicago talks with EconTalk host Russ Roberts about why U.S. infant mortality is twice that in Finland and high relative to the rest of the world, given high income levels in the United States. The conversation explores the roles of measurement and definition along with culture to understand the causes of infant mortality in the United States and how it might be improved.

Nick Bostrom of the University of Oxford talks with EconTalk host Russ Roberts about his book, Superintelligence: Paths, Dangers, Strategies. Bostrom argues that when machines exist which dwarf human intelligence they will threaten human existence unless steps are taken now to reduce the risk. The conversation covers the likelihood of the worst scenarios, strategies that might be used to reduce the risk and the implications for labor markets, and human flourishing in a world of superintelligent machines.

James Otteson of Wake Forest University talks to EconTalk host Russ Roberts about his new book, The End of Socialism. Otteson argues that socialism (including what he calls the "socialist inclination") is morally and practically inferior to capitalism. Otteson contrasts socialism and capitalism through the views of G. A. Cohen and Adam Smith. Otteson emphasizes the importance of moral agency and respect for the individual in his defense of capitalism. The conversation also includes a discussion of the deep appeal of the tenets of socialism such as equality and the impulse for top-down planning.

Propriety and Prosperity: Adam Smith

A new collection of essays on Adam Smith has been announced. The following is from Gavin Kennedy at his Adam Smith's Lost Legacy blog:
Propriety and Prosperity

New Studies on the Philosophy of Adam Smith

$115.00

ISBN 9781137320681

Publication Date December 2014

Palgrave Macmillan

This book is a collection of specially commissioned chapters from philosophers, economists and political scientists, focusing on Adam Smith's two main works Theory of Moral Sentiments and Wealth of Nations. It examines the duality which manifests itself as an apparent contradiction: that is, how does one reconcile the view of human nature expounded in Theory of Moral Sentiments (sympathy and benevolence) and the view of human nature expounded in Wealth of Nations (self-interest)? New work by philosophers has uncovered the complex and nuanced connections between Smith's account of economic and moral motivation. His economic theory has presented conceptual challenges: the famous 'invisible hand' has proved an elusive concept much in need of scrutiny.

'Prosperity' in the title captures the economic side of Smith's thought. 'Propriety' points to his ethics. In recent philosophical scholarship two major shifts have occurred. One is that the originality of Smith's moral theory has been rediscovered and recognised. His account of sympathy is significantly different from Hume's: his idea of the 'impartial spectator' is independent, rich and complex and he is alert to the phenomenon of self-deception. The second shift is that Smith's image as an economic liberal has been drastically revised, reclaiming him from current ideological use in defence of free markets and the minimal state. Smith links economics, politics and ethics through notions of justice and utility in subtle ways that make the labels 'economic liberal' and 'laissez-faire theorist' at best inadequate and at worst misleading.

This collection was put together with a view to bringing Smith to a mainstream philosophy audience while simultaneously informing Smith's traditional constituency (political economy) with philosophically finessed interpretations.


1. Introduction; David F. Hardwick and Leslie Marsh
PART I: CONTEXT


2. Adam Smith as a Scottish Philosopher; Gordon Graham 

3. Friendship in Commercial Society Revisited: Adam Smith on Commercial Friendship; Spyridon Tegos

4. Adam Smith and French Political Economy: Parallels and Differences; Laurent Dobuzinskis

5. Adam Smith: 18th Century Polymath; Roger Frantz

PART II: PROPRIETY

6. Indulgent Sympathy and the Impartial Spectator; Joshua Rust

7. Adam Smith on Sensory Perception: A Sympathetic Account; Brian Glenney

8. Adam Smith on Sympathy: From Self-Interest to Empathy; Gloria Zúñiga y Postigo

9. What My Dog Can Do: On the Effect of The Wealth of Nations I.ii.2; Jack Weinstein

PART III: PROSPERITY

10. Metaphor Made Manifest: Taking Seriously Smith's 'Invisible Hand'; Eugene Heath

11. The 'Invisible Hand' Phenomenon in Philosophy and Economics; Gavin Kennedy

12. Instincts and the Invisible Order: The Possibility of Progress; Jonathan B. Wight

13. The Spontaneous Order and the Family; Lauren K. Hall

14. Smith, Justice and the Scope of the Political; Craig Smith


Contributing Authors
Laurent Dobuzinskis, Simon Fraser University, Canada

Roger Frantz, San Diego State University, USA

Brian Glenney, Gordon College, Massachusetts, USA

Gordon Graham, Princeton Theological Seminary, USA

Lauren K. Hall, Rochester Institute of Technology, USA

David F. Hardwick, The University of British Columbia, Canada

Eugene Heath, State University of New York at New Paltz, USA

Gavin Kennedy, Heriot-Watt University, UK

Leslie Marsh, The University of British Columbia, Canada

Joshua Rust, Stetson University, USA

Craig Smith, University of Glasgow, UK

Vernon L. Smith, Chapman University, USA

Spyridon Tegos, The University of Crete, Greece

Jack Weinstein, University of North Dakota, USA

Jonathan Wight, University of Richmond, USA

Gloria Zúñiga y Postigo, Ashford University, USA
This looks like a very interesting set of essays, but the price!! Demand curves do slope downwards. I sometimes wonder if publishers know this simple fact.

Book award to Foss and Klein

This is from the website of the Society for the Development of Austrian Economics
2014 FEE Prize for the best book in Austrian economics is awarded to Nicolai Foss and Peter G. Klein’s Organizing Entrepreneurial Judgment

Comments read by Virgil Storr, Vice President of the SDAE, during the presentation of the 2014 FEE Prize for the best book in Austrian economics:

In Organizing Entrepreneurial Judgment: A New Approach to the Firm, Nicolai Foss and Peter G. Klein bridge the gap between studies of entrepreneurship and the theory of the firm. Despite both concepts becoming increasingly appreciated and studied by contemporary scholars in economics and management, Foss and Klein show that the existing theoretical and practical literature too often fails to adequately connect these theories. Seeking to correct this failing and drawing on insights from Austrian economics, Foss and Klein examine entrepreneurship as judgment decisions made by management under conditions of uncertainty. They show that these judgments are drivers of the economy and keys to understanding firm performance and organization.

For meaningfully adding to the Austrian literature on entrepreneurship and applying Austrian insights about entrepreneurship to the theory of the firm, the prize committee determined that Organizing Entrepreneurial Judgment: A New Approach to the Firm is deserving of the 2014 FEE Prize for the best book in Austrian Economics.
In a working paper of mine I say this about the Foss and Klein book (footnotes removed),
Foss and Klein 2012

A second recent approach to the firm that doesn't fit well into the Foss, Lando and Thomsen classification, but which also emphasises the entrepreneur, is that of Foss and Klein (2012) (FK). FK see their work as offering a theory of the entrepreneur centred around a combination of Knightian uncertainty and Austrian capital theory. While such a basis places their work outside the conventional theory of the firm, FK see themselves "not as radical, hostile critics, but as friendly insiders" (Foss and Klein 2012: 248).

To understand the FK theory, first consider the one-person firm. For FK it is the incompleteness of markets for judgement that explains why an entrepreneur has to form his own firm. Here "judgement" refers to business decision-making in situations involving Knightian uncertainty, that is, circumstances in which the range of possible future outcomes, let alone the likelihood of any individual outcome, is unknown. Thus decision-making about the future must rely on a kind of understanding that is subjective and tacit, one that cannot be parameterised in a set of formal, explicit decision-making rules. But then how can we tell great/poor judgement from good/bad luck? A would-be entrepreneur may not be able to communicate his "vision" of a new way to satisfy future consumer desires in such a manner that other people can assess its economic validity. If the nascent entrepreneur cannot verify the nature of his idea, then he is unlikely to be able to sell his "expertise" across the market - as a consultant or advisor - or to become an employee of a firm utilising his "expertise", due to adverse selection/moral hazard problems, and thus he will have to form his own firm to commercialise this "vision". This reasoning for the formation of a firm is not entirely without precedent. Working within a standard property rights framework, Rabin (1993) and Brynjolfsson (1994) show that an informed agent may have to set up a firm to benefit from his information, for adverse selection and moral hazard reasons respectively. In addition, the inability to convey his "vision" to capital markets will limit an entrepreneur's ability to borrow to finance the purchase of any non-human assets he requires. This means the entrepreneur cannot be of the Kirznerian penniless type. Non-human assets are important because judgemental decision-making is ultimately about the arrangement of the non-human capital that the entrepreneur owns or controls. Capital ownership also strengthens the bargaining position of the entrepreneur relative to other stakeholders and helps ensure the entrepreneur is able to appropriate the rents from his "vision".

Turning to the multi-person firm, FK argue that the need for experimentation with regard to production methods is the underlying reason for the existence of the firm. Given that assets have many dimensions or attributes that only become apparent via use, discovering the best uses for assets, or the best combination of assets, requires experimenting with the uses of the assets involved. Entrepreneurs therefore seek out the least-cost institutional arrangement for experimentation. Using a market contract to coordinate collaborators leaves the entrepreneur open to hold-up: collaborators can threaten to veto any changes in the experimental set-up unless they are granted a greater proportion of the quasi-rents generated by the project. By forming a firm and making the collaborators employees, the entrepreneur gains the right to redefine and reallocate decision rights among the collaborators and to sanction those who do not utilise their rights effectively. This means that the entrepreneur can avoid the haggling and redrafting costs involved in the renegotiation of market contracts, which can make a firm the least-cost institutional arrangement for experimentation.

With regard to the boundaries of the firm, FK argue that when a firm is large enough to conduct activities exclusively within its borders - so that no reference to an outside market is possible - the organisation will become less efficient, since the entrepreneur will not be able to make rational judgements about resource allocation. When there are no markets for the means of production, there are no monetary prices, and thus the entrepreneur will lack the information needed about the relative scarcity of resources to make rational decisions about resource allocation and about whether entrepreneurial profits exist. This implies that as firms grow in size, and thus do more internally, they become less efficient due to the increasing misallocation of resources driven by the lack of market prices. But the boundaries of firms seem to be such that firms stop growing well before outside markets for the factors of production are eliminated and market prices become unavailable. So while this idea can explain why one big firm cannot produce everything, it seems less able to tell us why the boundaries of actual firms are where they are. Real firms seem too small for the lack of outside markets and prices to be driving large inefficiencies.

For FK the internal organisation of a firm depends on the dispersion of knowledge within the firm. The entrepreneur will typically lack the information or knowledge needed to make optimal decisions, and so has to delegate decision-making authority to those who have, at least more of, the necessary information or knowledge. In doing this the firm is able to exploit locally held knowledge without having to codify it for internal communication or motivate managers to explicitly share their knowledge. But the benefits of delegation, in terms of better utilising dispersed knowledge, need to be balanced against its costs: duplication of effort due to a lack of coordination of activities, moral hazard, the creation of new hold-up problems, and the difficulty of aligning incentives.

The things that set the FK approach apart from the mainstream are the importance given in their theory to the entrepreneur and the fact that they develop their theory from a combination of Knightian uncertainty and Austrian capital theory. But, unlike Spulber (2009), the questions they set out to answer are standard in that they want to explain the existence, boundaries and organisation of the firm.
Refs:
  • Brynjolfsson, Erik (1994). 'Information Assets, Technology, and Organization', Management Science, 40(12): 1645-62.
  • Foss, Nicolai J. and Peter G. Klein (2012). Organizing Entrepreneurial Judgment: A New Approach to the Firm, Cambridge: Cambridge University Press.
  • Rabin, Matthew (1993). 'Information and the Control of Productive Assets', Journal of Law, Economics and Organization, 9(1) Spring: 51-76.
  • Spulber, Daniel F. (2009). The Theory of the Firm: Microeconomics with Endogenous Entrepreneurs, Firms, Markets, and Organizations, Cambridge: Cambridge University Press.