Thursday, 31 January 2013
The French gold sink and the great depression
Douglas Irwin of Dartmouth College talks on The French Gold Sink and the Great Depression. The discussants are Charles Calomiris of Columbia University and James Hamilton of University of California, San Diego.
Antidumping protection hurts exports
Hylke Vandenbussche and Jozef Konings argue in an article at VoxEU.org that there is evidence to suggest that old-fashioned protection can have an unexpected negative effect on firms that are part of a global value chain. In an increasingly globalised world, exporters’ success seems to positively depend on the free entry of imports rather than the other way round.
Protection is often viewed as a powerful instrument to help domestic firms raise their sales at the expense of foreign importers. But this view is now being challenged by recent research showing that the effects of protection really depend on the international orientation of the firms, i.e. whether they are exporters or not. Protected firms that are well integrated in global value chains may actually lose sales whenever the imports of inputs are subject to protection. This observation may not come as a surprise, but it is important to realise that trade policy has not kept pace with this aspect of globalisation.
What does this tell us about trade policy?
The main reason is that many of the current WTO rules governing trade protection stem from an era when trade models predicted that all domestic firms would benefit from import protection. Traditional theory models assumed that all firms in the protected industry are import-competing and only sell domestically. However, in recent years an increasing number of papers have shown that even within narrowly defined industries, firms can be very different. Some firms only produce for the domestic market, others mainly export or sell both domestically and internationally. Thus, the question that can be raised is whether all domestic firms benefit from import protection, given that some of the protected firms may be exporters.
Trade policy often uses protection as an instrument to shield the domestic import-competing sector. If this policy does not take its negative externality on protected firms’ exports into account, it may have negative long-run consequences. Firms today no longer operate within the confines of a single country or market, and their operations are increasingly international. Two decades ago, when firms mainly sold domestically, import protection laws may have been an effective way to temporarily boost a country’s trade surplus and current account. This is no longer the case today. In an increasingly globalised world, exporters’ success seems to positively depend on the free entry of imports rather than the other way round.
So we have another reason, if we needed one, to be anti-protection.
Things you learn at 2am
In their book The Marketplace of Christianity, Robert B. Ekelund Jr., Robert F. Hebert and Robert D. Tollison note that in 1805:
the Spanish Index of Forbidden Books issued under the aegis of the Catholic Church banned hundreds of titles, including Adam Smith's The Wealth of Nations and Burke's Reflections on the Revolution in France.
I can't quite see what Adam would have said that would get him banned.
The gains from trade
Chris Dillow at the Stumbling and Mumbling blog misses the point of the gains from trade:
Mario Balotelli's transfer to AC Milan highlights the Marxian critique of capitalism.
How can Man City get £19m for a player who is so obviously flawed? The answer's simple. It's because their great wealth means they did not need to sell him, and so could drive a hard bargain. The party in the strongest bargaining position gets most of the surplus from any trade.
How can Chris possibly know the bargaining positions of the parties? How can he know the surplus that each party gets from the bargain? The important point to note here is that this trade was voluntary: both parties will only have agreed to it if they think they will gain from the bargain, and the parties themselves are the only ones who know the amount of surplus they expect to get.
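To see the point about surplus in a worked example, here is a minimal sketch of an asymmetric Nash bargaining split. All the numbers and the little Python helper below are my own illustrative assumptions, not figures from the actual transfer: the stronger party's bargaining weight determines its share of the surplus, but as long as the trade is voluntary both sides end up with a non-negative gain, and only they know the underlying valuations.

```python
# Toy illustration of how the gains from a voluntary trade can be split.
# All numbers are invented; none come from the actual transfer.

def split_surplus(buyer_value, seller_reservation, seller_bargaining_power):
    """Price under asymmetric Nash bargaining: the seller captures a share of
    the total surplus equal to its bargaining power (between 0 and 1)."""
    total_surplus = buyer_value - seller_reservation
    assert total_surplus >= 0, "no voluntary trade without a non-negative surplus"
    price = seller_reservation + seller_bargaining_power * total_surplus
    return price, price - seller_reservation, buyer_value - price

# A seller that does not need to sell can hold out for most of the surplus...
print(split_surplus(buyer_value=25, seller_reservation=15, seller_bargaining_power=0.8))
# ...while with weak bargaining power it keeps far less. Either way both sides gain,
# and an outsider observing only the price cannot recover the underlying valuations.
print(split_surplus(buyer_value=25, seller_reservation=15, seller_bargaining_power=0.2))
```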
Tuesday, 29 January 2013
Rewards to grad school
Recently we had the Ministry of Education releasing figures on what students can earn after they graduate. Now we have Jason Sorens writing at the Pileus blog saying Don't Go to Grad School:
It’s not just PhD programs that aren’t worth it any more. Law school applications have plummeted. Full-time MBA’s in the United States are of doubtful value at best, especially when opportunity cost is considered. Even medical degrees are now a huge financial risk.
Instead of going to graduate school, students would be better advised to do more with their undergraduate degrees. The value of studying math is difficult to overstate. From engineering to biomedicine to insurance and finance, understanding calculus and advanced statistics opens doors. This is true regardless of whether a BA is useful mostly for human capital development or for signaling (math is hard for most people). I recommend a minor in math to most undergraduates. Alternatively, computer programming and web development can be self-taught — you don’t even need to go to college.
The rewards that the Ministry says are there are not the ones you want to think about. As Sorens notes, you need to take into account the opportunity costs of getting a degree to judge whether it is worthwhile, and once you do you may well find it isn't. Also consider lifetime earnings, not just five years' worth.
Or you could just think of your education as a consumption good rather than an investment good.
The flattening hierarchy: not
Over the past few decades one of the things that management gurus, consultants and the popular business press have argued is that firms are flattening their hierarchies. Flattening typically refers to the elimination of layers in a firm's hierarchy (can't say I've seen much of it in universities) and the broadening of managers' spans of control. The alleged benefits flow primarily from pushing decisions downward to enhance market responsiveness and improve accountability and morale.
The questions this gives rise to are: has flattening actually occurred, and where it has, has it delivered on its promise? These questions are examined in a new paper in the Fall 2012 issue of the California Management Review. The paper, The Flattened Firm: Not As Advertised, is by Julie Wulf. Wulf writes,
[ ... ] I set out to investigate the flattening phenomenon using a variety of methods, including quantitative analysis of large datasets and more qualitative research in the field involving executive interviews and a survey on executive time use [ ... ]. Using a large-scale panel dataset of reporting relationships, job descriptions, and compensation structures in a sample of over 300 large U.S. firms over roughly a 15-year period, my co-authors and I began by characterizing the shifting “shape” of each company’s hierarchy. We focused on the top of the pyramid: after all, it is the CEO and other members of senior management who make the resource-allocation decisions that ultimately determine firm strategy and performance. Then, to dig deeper into how decisions are made in flattened firms, we complemented the historical data analysis with exploratory interviews with executives—what CEOs say—and analysis of data on executive time use—what CEOs do.
We discovered that flattening has occurred, but it is not what it is widely assumed to be. In line with the conventional view of flattening, we find that CEOs eliminated layers in the management ranks, broadened their spans of control, and changed pay structures in ways suggesting some decisions were in fact delegated to lower levels. However, using multiple methods of analysis, we find other evidence sharply at odds with the prevailing view of flattening. In fact, flattened firms exhibited more control and decision making at the top. Not only did CEOs centralize more functions, such that a greater number of functional managers reported directly to them (e.g., CFO, CHRO, CIO); firms also paid lower-level division managers less when functional managers joined the top team, suggesting more decisions at the top. Furthermore, CEOs report in interviews that they flattened to “get closer to the businesses” and become more involved, not less, in internal operations and subordinate activities. Finally, our analysis of time use indicates that CEOs of flattened firms allocate more time to internal interactions. Taken together, the evidence suggests that flattening transferred some decision rights from lower-level division managers to functional managers at the top. Flattening is associated with increased CEO involvement with direct reports—the second level of top management—suggesting a more hands-on CEO at the pinnacle of the hierarchy.
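For readers who have not met the two measures, here is a small sketch of what "layers" and "span of control" mean when computed from a table of reporting relationships. The firm and the reporting data below are entirely hypothetical; the point is only to show the quantities being measured, not how Wulf and her co-authors actually process their panel dataset.

```python
# Hypothetical reporting relationships: employee -> direct manager (None = CEO).
reports_to = {
    "ceo": None,
    "cfo": "ceo", "chro": "ceo", "cio": "ceo",
    "div_head_a": "ceo", "div_head_b": "ceo",
    "controller": "cfo", "treasurer": "cfo",
    "plant_mgr_a": "div_head_a", "sales_mgr_a": "div_head_a",
    "plant_mgr_b": "div_head_b",
}

def depth(employee):
    """Number of reporting layers between an employee and the CEO (CEO = 0)."""
    layers = 0
    while reports_to[employee] is not None:
        employee = reports_to[employee]
        layers += 1
    return layers

def span_of_control(manager):
    """Number of people reporting directly to a given manager."""
    return sum(1 for boss in reports_to.values() if boss == manager)

# "Flattening" = fewer layers below the CEO and/or a broader CEO span of control.
print("layers below the CEO:", max(depth(e) for e in reports_to))
print("CEO span of control: ", span_of_control("ceo"))
```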
EconTalk this week
Peter Boettke of George Mason University talks with EconTalk host Russ Roberts about his book, Living Economics. Boettke argues for embracing the tradition of Smith and Hayek in both teaching and research, arguing that economics took a wrong turn when it began to look more like a branch of applied mathematics. He sees spontaneous order as the central principle for understanding and teaching economics. The conversation also includes a brief homage to James Buchanan who passed away shortly before this interview was recorded.
More agreement among economists
It is often said that economists always disagree, but is this true? If the results of a recent NBER working paper are anything to go by, economists seem to agree a lot. The paper is Views among Economists: Professional Consensus or Point-Counterpoint? by Roger Gordon and Gordon B. Dahl. The abstract reads,
To what degree do economists disagree about key economic questions? To provide evidence, we make use of the responses to a series of questions posed to a distinguished panel of economists put together by the Chicago School of Business. Based on our analysis, we find a broad consensus on these many different economic issues, particularly when the past economic literature on the question is large. Any differences are unrelated to observable characteristics of the Panel members, other than men being slightly more likely to express an opinion. These differences are idiosyncratic, with no support for liberal vs. conservative camps.
Interesting that differences are idiosyncratic with no support for a liberal vs. conservative divide. I'm sure non-economists would have thought there would be such a divide.
The questions referred to above can be found here.
One interesting question the panel was asked had to do with China-US Trade:
Question A: Trade with China makes most Americans better off because, among other advantages, they can buy goods that are made or assembled more cheaply in China.
85% of responses were either "Strongly agree or Agree". I think the other 15% were "Did not answer". I can't find anyone who "Disagreed or Strongly disagreed".
On Free Trade:
Question A: Freer trade improves productive efficiency and offers consumers better choices, and in the long run these gains are much larger than any effects on employment.
85% "Strongly agree or Agree", 5% "Uncertain"; the rest, I think, are "Did not answer". I can't find anyone who "Disagreed or Strongly disagreed".
Another question was on Ticket Resale:
Laws that limit the resale of tickets for entertainment and sports events make potential audience members for those events worse off on average.
68% "Strongly agree or Agree", only 8% "Strongly disagree or Disagree".
On Buy American:
Federal mandates that government purchases should be “buy American” unless there are exceptional circumstances, such as in the American Recovery and Reinvestment Act of 2009, have a significant positive impact on U.S. manufacturing employment.
49% "Strongly disagree or Disagree". 10% "Agree". 0% "Strongly agree".
On Rent Control:
Local ordinances that limit rent increases for some rental housing units, such as in New York and San Francisco, have had a positive impact over the past three decades on the amount and quality of broadly affordable rental housing in cities that have used them.
81% "Strongly disagree or Disagree". 2% "Agree". 0% "Strongly agree".
Monday, 28 January 2013
Is the financial sector too large?
A question asked by Greg Mankiw over at his blog. He points us to two possible answers.
The first of these is The Growth of Modern Finance by Robin Greenwood and David Scharfstein. Their abstract reads:
The U.S. financial services industry grew from 4.9% of GDP in 1980 to 7.9% of GDP in 2007. A sizeable portion of the growth can be explained by rising asset management fees, which in turn were driven by increases in the valuation of tradable assets, particularly equity. Another important factor was growth in fees associated with an expansion in household credit, particularly fees associated with residential mortgages. This expansion was itself fueled by the development of non-bank credit intermediation (or “shadow banking”). We offer a preliminary assessment of whether the growth of active asset management, household credit, and shadow banking – the main areas of growth in the financial sector – has been socially beneficial.
They conclude,
Our objective in this paper has been to understand the activities that contributed to the growth of finance between 1980 and 2007, and to provide a preliminary assessment of whether and in what ways society benefited from this growth.
Our overall assessment comes in two parts. First, a large part of the growth of finance is in asset management, which has brought many benefits including, most notably, increased diversification and household participation in the stock market. This has likely lowered required rates of return on risky securities, increased valuations, and lowered the cost of capital to corporations. The biggest beneficiaries were likely young firms, which stand to gain the most when discount rates fall. On the other hand, the enormous growth of asset management after 1997 was driven by high fee alternative investments, with little direct evidence of much social benefit, and potentially large distortions in the allocation of talent. On net, society is likely better off because of active asset management but, on the margin, society would be better off if the cost of asset management could be reduced.
Second, changes in the process of credit delivery facilitated the expansion of household credit, mainly in residential mortgage credit. This led to higher fee income to the financial sector. While there may be benefits of expanding access to mortgage credit and lowering its cost, we point out that the U.S. tax code already biases households to overinvest in residential real estate. Moreover, the shadow banking system that facilitated this expansion made the financial system more fragile.
The second is Is Finance Too Big? by John H. Cochrane. He concludes,
Greenwood and Scharfstein’s big picture is illuminating. The size of finance increased, at least through 2007, because fee income for refinancing, issuing, and securitizing mortgages rose; and because people moved assets to professional management; asset values increased, leading to greater fee income to those businesses. Compensation to employees in short supply – managers – increased, though compensation to others – janitors, secretaries – did not. Fee schedules themselves declined a bit.
To an economist, these facts scream “demand shifted out.” Some of the reasons for that demand shift are clearly government policy to promote the housing boom. Some of it is “government failure,” financial engineering to avoid ill-conceived regulations. Some of it – the part related to high valuation multiplied by percentage fees – is temporary. Another part – the part related to the creation of private money-substitutes – was a social waste, has declined in the zero-interest rate era, and does not need to come back. The latter can give us a less fragile financial system, which is arguably an order of magnitude larger social problem than its size.
The persistence of very active management, and very high fees, paid by sophisticated institutional investors, such as nonprofit endowments, sovereign wealth funds, high-wealth individuals, family offices, and many pension funds, remains a puzzle. To some extent, as I have outlined, this pattern may reflect the dynamic and multidimensional character of asset-market risk and risk premiums. To some extent, this puzzle also goes hand in hand with the puzzle why price discovery seems to require so much active trading. It is possible that there are far too few resources devoted to price discovery and market stabilization, i.e. pools of cash held out to pounce when there are fire sales. It is possible that there are too few resources devoted to matching the risk-bearing capacities of sophisticated investors with important outside income or liability streams to the multidimensional time-varying bazaar of risks offered in today’s financial markets.
Surveying our understanding of these issues, it is clearly far too early to make pronouncements such as “There is likely too much high-cost, active asset management,” or “society would be better off if the cost of this management could be reduced,“ with the not-so-subtle implication ( “Could be?” By whom I wonder?) that resources devoted to greater regulation (by no less naïve people with much larger agency problems and institutional constraints) will improve matters.
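Cochrane's point about fee income related to "high valuation multiplied by percentage fees" can be seen in a line or two of arithmetic: when fees are charged as a percentage of assets under management, fee income rises mechanically with valuations even if quoted fee rates drift down. The figures below are invented purely for illustration; nothing here comes from Greenwood and Scharfstein's data.

```python
# Toy arithmetic: fee income scales with asset values even as fee schedules decline a bit.
# All figures are invented for illustration.
aum_1980, fee_rate_1980 = 1_000, 0.010   # assets under management and %-of-AUM fee
aum_2007, fee_rate_2007 = 3_000, 0.008   # higher valuations and inflows raise AUM

fees_1980 = aum_1980 * fee_rate_1980
fees_2007 = aum_2007 * fee_rate_2007
print(f"fee income rises from {fees_1980:.0f} to {fees_2007:.0f} "
      f"({fees_2007 / fees_1980 - 1:.0%}) despite a lower fee rate")
```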
Friday, 25 January 2013
Reasons for liking a carbon tax
This article from the National Journal argues in favour of a carbon tax:
To paraphrase Ronald Reagan paraphrasing Will Rogers, some people around here never met a tax they didn’t dislike. Others have met just one: a carbon tax.
The reasons that some economists like a carbon tax are outlined here.
A number of the nation’s leading conservative economists, who as a rule do not like taxes, are touting some benefits to a federal carbon tax. That group includes Gregory Mankiw, a former Romney adviser and George W. Bush-era chairman of the Council of Economic Advisors; Douglas Holtz-Eakin, Sen. John McCain’s 2008 chief economic adviser; and Art Laffer, progenitor of Reagan’s treasured Laffer Curve.
Such a tax could raise an estimated $1.5 trillion over 10 years and help wean the country from carbon-intensive fuels. And with Congress set for a season of budget fights and a possible effort to overhaul the tax code, the carbon tax is likely to reenter the conversation about getting America’s fiscal house in order.
So, it’s worth understanding why the economics of a carbon tax might make it appealing to some conservative economists, and why many political arguments about taxes don't apply to it.
The basic reasoning is,
A carbon tax is a special kind of tax called a Pigovian tax, named after 20th-century British economist Arthur Pigou.
Normally, a competitive market produces just the right amount of a good. If there are not enough people selling glue, its price will rise and people will cash in by selling more glue. If too many people are selling glue, the price will go down, and some people will find it’s not worth their while to sell glue anymore. Either way, the market should settle at the point where the cost of producing more glue is equal to the value people place on that additional glue.
But Pigou realized that if a producer wasn’t paying for the full cost of producing a good, they would produce too much of it anyway and everyone else would foot the bill. Imagine that making glue is expensive because it costs a lot to cart away all the horse carcasses used in its production. There’s not going to be a lot of glue because only people who really like glue will be willing to pay to produce it.
Now imagine that instead of carting away the dead horses, glue factories realize they can dump them in nearby rivers for free. All of a sudden, it becomes a lot cheaper to make glue, so the price goes down. At a price like this, you can’t afford not to buy glue, so people consume more of it, and new glue factories pop up.
It all looks like economic growth, until the dead horses start piling up, and people start getting sick. Then they get a bunch of medical bills and the government has to spend money cleaning up the river. The sticky-fingered glue barons don’t mind much, because they can afford to buy the expensive houses upriver, and when the cost of cleanup gets spread to everyone, the cost to them is a pittance compared to their newfound glue fortunes.
Meanwhile, the tape users are fuming. They’re getting sick from glue they don’t even use, and the horse-dredgings are driving up their tax bill. And because a bunch of the former tape-makers have jumped on the glue bandwagon, there’s now a tape shortage. It’s a mess.
When you account for the costs of sickness and cleanup, each tub of glue costs $20 to produce. But the glue factories don’t pay for this, so they can sell glue at a going rate of $12. Glue that’s only worth $12 is being made at a cost of $20, so $8 is being wasted on each new tub of glue.
In this case, Pigou would prescribe an $8 tax on glue. Now, it costs glue factories $20 to produce glue, and only people willing to pay that much for glue will buy it. Less glue is produced, so fewer dead horses end up in the river, and the revenue raised from the tax can be used to deal with the problems caused by the ones that do.
Coase, in his famous 1960 article, The Problem of Social Cost, points out some problems with Pigou's approach.
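For anyone who wants the glue story as a worked calculation, here is a minimal sketch. The linear demand curve and the specific numbers are my own assumptions, chosen only to be consistent with the $12 price, $20 social cost, and $8 tax in the passage.

```python
# The glue story as a worked Pigovian tax calculation (all numbers assumed).
private_cost = 12.0        # what a tub costs the factory to make and sell
external_cost = 8.0        # sickness and river cleanup per tub, paid by others
social_cost = private_cost + external_cost   # $20 per tub in total

def quantity_demanded(price):
    """Tubs bought at a given price, assuming willingness to pay = 30 - 0.5 * Q."""
    return max((30.0 - price) / 0.5, 0.0)

q_no_tax = quantity_demanded(private_cost)   # factories ignore the external cost
q_with_tax = quantity_demanded(social_cost)  # an $8 Pigovian tax makes them face it

# Every tub between the two quantities is worth less to its buyer than the $20 it
# really costs society, so producing it wastes resources (a triangle of losses).
waste = 0.5 * (q_no_tax - q_with_tax) * external_cost
print(f"output without the tax: {q_no_tax:.0f} tubs; with the tax: {q_with_tax:.0f} tubs")
print(f"value wasted by the untaxed overproduction: ${waste:.0f}")
```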
Administrative bloat at universities
This is a topic much discussed at universities ... by non-administrators. Arnold Kling writes at his askblog blog:
In universities, I would argue that the growth in administrators is symptomatic, not an independent cause. The problem is what is known in the software business as scope creep or feature bloat. The more you add features to software, the more complex it becomes, and the harder it becomes to manage. Organizations are the same way.
Universities, like government, add new programs with alacrity, while almost never discarding old programs. Any university today has many more majors, many more activities, and many more technologies in use than was the case 30 years ago.
How do you introduce efficiency and cost saving at universities? Narrow scope and reduce features. Do students choose your school because of the chemistry department? If not, then get rid of it. Better to have three excellent departments than dozens of mediocre ones. Let students take courses on line in the ones that you do not cover.
The basic point is that if you really want to reduce administrative overhead, you have to think in terms of radically reducing scope. The downside of trying to do this is the fight you would get from powerful groups of insiders who have much to lose. Canterbury has tried to get rid of academic programs and has been far from successful at it. Interestingly, even when academic staff do go, the number of administrators that go seems much smaller.
Thursday, 24 January 2013
Mercantilism and its contemporary relevance
Peter Boettke over at the Coordination Problem argues that,
[Dani] Rodrik claims that "The liberal model has become severely tarnished, owing to the rise in inequality and the plight of the middle class in the West, together with the financial crisis that deregulation spawned." But the reason for this is precisely because for the past 6 decades it hasn't only been the Asian countries that have pursued Mercantilist policies (as Rothbard explained 50 years ago).
One may have thought that Adam Smith killed off mercantilism more than two hundred years ago, but no. As Boettke notes, the struggle between liberalism and mercantilism has been long and, if Rodrik is anything to go by, has yet to be won. One thing people like Rodrik are good at is blaming recent, and not so recent, economic problems on liberalisation and deregulation despite the fact that there have been few serious attempts at either. As Boettke adds,
So while Rodrik is basically right when he says "The history of economics is largely a struggle between two opposing schools of thought, 'liberalism' and 'mercantilism'", he is off the mark when he states that "Economic liberalism, with its emphasis on private entrepreneurship and free markets, is today’s dominant doctrine." This is true only in rhetoric, but not in the reality of economic policy practice.
Every where we turn in our economic lives we can see the grabbing hand of the state. Throughout the western world we have bloated public budgets, the manipulation of money and credit, obstructionist regulations, and numerous measures to weaken the discipline of profit and loss. In short, we have state controlled market economies.
This looks more like a mercantilist world than an economically liberal one.
Wednesday, 23 January 2013
Lifetime earnings matter, not five years
Eric Crampton writes,
Students choosing majors may well worry about employment options after graduation. The NZ Ministry of Education has released some data on post-study outcomes by major. Read the whole thing here.
Basically the Ministry has looked at (ex)student earnings, and other outcomes, five years after graduation. But this is worthless, since what matters is lifetime earnings with the full opportunity cost of gaining a qualification taken into account. One wonders what such a calculation would look like. Given the opportunity costs of gaining higher qualifications it may not look good.
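Since the post wonders what such a calculation would look like, here is a minimal sketch of a lifetime-earnings comparison in present-value terms. Every number (salaries, growth rates, fees, the discount rate, the length of the degree) is an invented assumption; a serious version would plug in the Ministry's actual figures and follow earnings over the whole working life, not just the first five years.

```python
# Back-of-the-envelope lifetime earnings comparison (every number is invented).

def present_value(cashflows, discount_rate=0.05):
    return sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cashflows))

years_working = 45

# Path 1: start work immediately on a lower salary with slower growth.
no_degree = [40_000 * 1.01 ** t for t in range(years_working)]

# Path 2: three years of fees and no earnings (the opportunity cost is the
# salary not earned while studying), then a higher, faster-growing salary.
study_years, annual_fees = 3, 8_000
with_degree = [-annual_fees] * study_years + [
    55_000 * 1.02 ** t for t in range(years_working - study_years)
]

pv_no_degree = present_value(no_degree)
pv_with_degree = present_value(with_degree)
print(f"present value without the degree: {pv_no_degree:,.0f}")
print(f"present value with the degree:    {pv_with_degree:,.0f}")
print("degree pays off" if pv_with_degree > pv_no_degree else "degree does not pay off")
```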
Tuesday, 22 January 2013
The minimum wage-employment debate .... again
Debates about the economic effects and the merits of the minimum wage date back at least as far as the introduction of minimum wages. In a 2008 book "Minimum Wages" by David Neumark and William Wascher it is concluded that,
"... [M]inimum wages reduce employment opportunities for less-skilled workers, especially those who are most directly affected by the minimum wage".In the last couple of years two studies have appeared that question the empirical methods and conclusions in much of the recent literature (Allegretto et al., 2011; Dube et al., 2010). Dube et al. (2010, hereafter DLR) and Allegretto et al. (2011, hereafter ADR) have put forward a severe critique of the state panel-data approach, including the work discussed at length in Neumark and Wascher (2008). The essence of the argument in DLR and ADR is summarized in a review of "Minimum Wages" by Dube which draws heavily on the findings from the two papers he co-authored:
" ... [V]ariation over the past two decades in minimum wages has been highly selective spatially, and employment trends for low-wage workers vary substantially across states … This has tended to produce a spurious negative relationship between the minimum wage and employment for low wage workers – be it for sectors such as restaurant and retail or for demographic groups such as teenagers”ADR argue without reservation that their results overturn the conclusion that minimum wages reduce employment of low-skilled workers:
"Interpretations of the quality and nature of the evidence in the existing minimum wage literature … must be revised substantially. Put simply, our findings indicate that minimum wage increases – in the range that have been implemented in the United States – do not reduce employment among teens" (ADR, 2011, p. 238).Similarly, DLR conclude that there are
"no detectable employment losses from the kind of minimum wage increases we have seen in the United States” (DLR, 2010, p. 962).Now there is a new NBER working paper, Revisiting the Minimum Wage-Employment Debate: Throwing Out the Baby with the Bathwater? by David Neumark, J.M. Ian Salas, and William Wascher, NBER Working Paper No. 18681, January 2013, that sets out to evaluate this new research because of the strong challenge it poses to the large body of prior research that found that minimum wages reduce employment of low skilled workers.
Neumark, Salas and Wascher note that
[ ... ] the central element of this new research is the issue of how to construct counterfactuals for the places where minimum wages are increased. The authors of both studies argue that one must compare places that are geographically proximate to have valid controls, because, according to them, minimum wage changes are correlated with unobserved economic shocks to areas that can confound the estimation of minimum wage effects. Consequently, much of the analysis focuses on the validity of this criticism, and on the approaches these studies take to address this potential problem. The overriding concern we have with these studies is that their research designs, out of concerns about avoiding minimum wage variation that is potentially confounded with other sources of employment change, discard a great deal of valid identifying information – throwing out the identifying “baby” along with, or worse yet instead of, the contaminated bathwater. Our findings, in a nutshell, indicate that neither the conclusions of these studies nor the methods they use are supported by the data.
In part the conclusions reached are
Throughout the long-running debate about the employment effects of minimum wages, the empirical evidence has focused on similar questions: How does a minimum wage affect employment? Which workers are affected? And how do we ensure that we are getting a valid comparison that isolates the effect of the minimum wage?
Given the ongoing ebb and flow of this debate, it would have been shortsighted to think that the 2008 book that two of us wrote (Neumark and Wascher, 2008), despite surveying a massive amount of evidence, would have settled the issue. And indeed it has not. In particular, echoing long-standing concerns in the minimum wage literature, Dube et al. (2010) and Allegretto et al. (2011) attempt to construct better counterfactuals for estimating how minimum wages affect employment. When they narrow the source of identifying variation – looking either at deviations around state-specific linear trends or at within-region or within-county-pair variation – they find no effects of minimum wages on employment, rather than negative effects. Based on this evidence, they argue that the negative employment effects for low-skilled workers found in the literature are spurious, and generated by other differences across geographic areas that were not adequately controlled for by researchers.
Our analysis suggests, however, that their methods are flawed and lead to incorrect conclusions. In particular, neither study makes a compelling argument that its methods isolate more reliable identifying information (i.e., a better counterfactual). In one case – the issue of state-specific trends – we explicitly demonstrate the problem with their methods and show how more appropriate ways of controlling for unobserved trends that affect teen employment lead to evidence of disemployment effects that is similar to that reported in past studies. In the other case – identifying minimum wage effects from the variation within Census divisions or, even more narrowly, within contiguous cross-border county pairs – we show that the exclusion of other regions or counties as potential controls is not supported by the data.
We think the central question to ask is whether, out of their concern for avoiding minimum wage variation that is potentially confounded with other sources of employment change, ADR and DLR have thrown out so much useful and potentially valid identifying information that their estimates are uninformative or invalid. That is, have they thrown out the “baby” along with – or worse yet, instead of – the contaminated “bathwater”? Our analysis suggests they have. Moreover, despite the claims made by ADR and DLR, the evidence that their approaches provide more compelling identifying information than the standard panel data estimates that they criticize is weak or non-existent.
In addition, when the identifying variation they use is supported by the data, the evidence is consistent with past findings of disemployment effects. Thus, our analysis substantially undermines the strong conclusions that ADR and DLR draw – that there are “no detectable employment losses from the kind of minimum wage increases we have seen in the United States” (DLR, 2010, p. 962), and that “Interpretations of the quality and nature of the evidence in the existing minimum wage literature …, must be revised substantially” (ADR, 2011, p. 238).
Can one come up with a dataset and an econometric specification of the effects of minimum wages on teen and low-skilled employment that does not yield disemployment effects? As in the earlier literature, the answer is yes. But prior to concluding that one has overturned a literature based on a vast number of studies, one has to make a much stronger case that the data and methods that yield this answer are more believable than the established research literature, and convincingly demonstrate why the studies in that literature generated misleading evidence. Our analysis indicates that the studies by Allegretto et al. (2011) and Dube et al. (2010) fail to meet these standards. Based on this evidence, we continue to believe that the empirical evidence indicates that minimum wages pose a tradeoff of higher wages for some against job losses for others, and that policymakers need to bear this tradeoff in mind when making decisions about increasing the minimum wage.
So the standard results still stand, but given the nature of the debate you can be sure that this is not the last we have heard of this topic.
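For readers who have not seen the econometrics being argued over, the sketch below simulates a small state-year panel and runs the two kinds of specification at issue: a standard two-way fixed effects regression of (log) teen employment on the (log) minimum wage, and the same regression with state-specific linear trends added, in the spirit of ADR. It assumes pandas and statsmodels are available; the simulated data and specifications are only my illustration of the general approach, not the actual datasets or models used by Neumark, Salas, and Wascher or by DLR and ADR.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate a toy state-year panel with a known negative minimum wage effect.
rng = np.random.default_rng(0)
rows = []
for s in range(20):                                   # 20 hypothetical states
    state_trend = rng.normal(0, 0.01)                 # state-specific employment trend
    for t, year in enumerate(range(1990, 2011)):
        # A subset of states raises its minimum wage from 2000 onwards.
        log_minwage = np.log(5.0) + (0.10 if s < 5 and year >= 2000 else 0.0)
        log_emp = 0.5 - 0.15 * log_minwage + state_trend * t + rng.normal(0, 0.05)
        rows.append(dict(state=f"s{s}", year=year, t=t,
                         log_minwage=log_minwage, log_emp=log_emp))
panel = pd.DataFrame(rows)

# Standard two-way (state and year) fixed effects specification.
fe = smf.ols("log_emp ~ log_minwage + C(state) + C(year)", data=panel).fit()
# The same specification with state-specific linear trends added.
fe_trends = smf.ols("log_emp ~ log_minwage + C(state) + C(year) + C(state):t",
                    data=panel).fit()

print("elasticity, two-way FE:       ", round(fe.params["log_minwage"], 3))
print("elasticity, FE + state trends:", round(fe_trends.params["log_minwage"], 3))
```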
- Allegretto, Sylvia A., Arindrajit Dube, and Michael Reich. 2011. “Do Minimum Wages Really Reduce Teen Employment? Accounting for Heterogeneity and Selectivity in State Panel Data.” Industrial Relations, Vol. 50, No. 2, April, pp. 205-240.
- Dube, Arindrajit, T. William Lester, and Michael Reich. 2010. “Minimum Wage Effects Across State Borders: Estimates Using Contiguous Counties.” Review of Economics and Statistics, Vol. 92, No. 4, November, pp. 945-64.
- Neumark, David, and William L. Wascher. 2008. Minimum Wages. Cambridge, MA: MIT Press.
Measuring the knowledge economy
The problem noted in the previous post on how to measure productivity in the internet age is just one example of the problems with measuring most aspects of the economy in the, so-called, knowledge economy.
This is a problem that I have written a little on. There is a section on this problem, called "Mr Bean(counter) measures the economy", in Oxley, Walker, Thorns and Wang (2008). It is written,
Much time and effort is expended by many national and international organisations in an attempt to measure the economy or economies of the world.
While the measuring of the “standard” economy is funny enough, when we move to the measurement of the “knowledge economy” measurement goes from the mildly humorous to the outright hilarious. Most attempts to measure, or even define, the information or knowledge economy border on the farcical: the movie version should be called, "Mr Bean(counter) Measures the Economy".
There are substantial challenges to be overcome in any attempt to measure the knowledge society\economy. These are at both the theoretical and the method level. A more consistent set of definitions are required as are more robust measures that are derived from theory rather than from what is currently or conveniently available. In order to identify the size and composition of the KBE one inevitably faces the issue of quantifying its extent and composition. Economists and national statistical organisations are naturally drawn to the workhorse of the ‘System of National Accounts’ as a source of such data. Introduced during WWII as a measure of wartime production capacity, the change in Gross Domestic Product (GDP) has become widely used as a measure of economic growth. However, GDP has significant difficulties in interpretation and usage (especially as a measure of wellbeing) which has led to the development of both ‘satellite accounts’ - additions to the original system to handle issues such as the ‘tourism sector’; ‘transitional economies’ and the ‘not-for-profit sector’ and alternative measures for example, the Human Development Indicator and Gross National Happiness . GDP is simply a gross tally of products and services bought and sold, with no distinctions between transactions that add to wellbeing, and those that diminish it. It assumes that every monetary transaction adds to wellbeing, by definition. Organisations like the ABS and OECD have adopted certain implicit\explicit definitions, typically of the Information Economy-type, and mapped these ideas into a strong emphasis on impacts and consequences of ICTs. The website (http://www.oecd.org/sti/information-economy) for the OECD’s Information Economy Unit states that it:
“. . . examines the economic and social implications of the development, diffusion and use of ICTs, the Internet and e-business. It analyses ICT policy frameworks shaping economic growth productivity, employment and business performance. In particular, the Working Party on the Information Economy (WPIE) focuses on digital content, ICT diffusion to business, global value chains, ICT-enabled off shoring, ICT skills and employment and the publication of the OECD Information Technology Outlook.”
Furthermore, the OECD’s Working Party on Indicators for the Information Society has
“. . . agreed on a number of standards for measuring ICT. They cover the definition of industries producing ICT goods and services (the “ICT sector”), a classification for ICT goods, the definitions of electronic commerce and Internet transactions, and model questionnaires and methodologies for measuring ICT use and e-commerce by businesses, households and individuals. All the standards have been brought together in the 2005 publication, Guide to Measuring the Information Society . . . “ (http://www.oecd.org/document/22/0,3343,en_2649_201185_34508886_1_1_1_1,00.html)
The whole emphasis is on ICTs. For example, the OECD’s “Guide to Measuring the Information Society” has chapter headings that show that their major concern is with ICTs. Chapter 2 covers ICT products; Chapter 3 deals with ICT infrastructure; Chapter 4 concerns ICT supply; Chapter 5 looks at ICT demand by businesses; while Chapter 6 covers ICT demand by households and individuals. As will be shown below several authors have discussed the requirements for, and problems with, the measurement of the knowledge\information economy. As noted above most of the data on which the measures of the knowledge economy are based comes from the national accounts of the various countries involved. This does raise the question as to whether or not the said accounts are suitably designed for this purpose. There are a number of authors who suggest that in fact the national accounts are not the appropriate vehicle for this task. Peter Howitt argues that:
“. . . the theoretical foundation on which national income accounting is based is one in which knowledge is fixed and common, where only prices and quantities of commodities need to be measured. Likewise, we have no generally accepted empirical measures of such key theoretical concepts as the stock of technological knowledge, human capital, the resource cost of knowledge acquisition, the rate of innovation or the rate of obsolescence of old knowledge” (Howitt 1996: 10).
Howitt goes on to make the case that because we cannot correctly measure the inputs to, and the outputs of, the creation and use of knowledge, our traditional measures of GDP and productivity give a misleading picture of the state of the economy. Howitt further claims that the failure to develop a separate investment account for knowledge, in much the same manner as we do for physical capital, results in much of the economy’s output being missed by the national income accounts.
In Carter (1996) six problems in measuring the knowledge economy are identified:
1) The properties of knowledge itself make measuring it difficult,
2) Qualitative changes in conventional goods: the knowledge component of a good or service can change making it difficult to evaluate their “levels of output” over time,
3) Changing boundaries of producing units: for firms within a knowledge economy, the boundaries between firms and markets are becoming harder to distinguish,
4) Changing externalities and the externalities of change: spillovers are increasingly important in an knowledge economy,
5) Distinguishing ‘meta-investments’ from the current account: some investments are general purpose investments in the sense that they allow all employees to be more efficient,
6) Creative destruction and the “useful life” of capital: knowledge can become obsolete very quickly and as it does so the value of the old stock drops to zero.
Carter argues that these issues result in it being problematic to measure knowledge at the level of the individual firm. This results in it being difficult to measure knowledge at the national level as well since the individual firms’ accounts are the basis for the aggregate statistics and thus any inaccuracies in the firms’ accounts will compromise the national accounts.
Haltiwanger and Jarmin (2000) examine the data requirement for the proper measurement of the information economy. They point out that changes are needed in the statistical accounts which countries use if we are to deal with the information\knowledge economy. They begin by noting that improved measurement of many “traditional” items in the national accounts is crucial if we are to understand fully Information Technology (IT’s) impact on the economy. It is only by relating changes in traditional measures such as productivity and wages to the quality and use of IT that a comprehensive assessment of IT’s economic impact can be made. For them, three main areas related to the information economy require attention:
1) The investigation of the impact of IT on key indicators of aggregate activity, such as productivity and living standards,
2) The impact of IT on labour markets and income distribution and
3) The impact of IT on firm and on industry structures.
Haltiwanger and Jarmin outline five areas where good data are needed:
1) Measures of the IT infrastructure,
2) Measures of e-commerce,
3) Measures of firm and industry organisation,
4) Demographic and labour market characteristics of individuals using IT, and
5) Price behaviour.
In Moulton (2000) the question is asked as to what improvements we can make to the measurement of the information economy. In Moulton’s view additional effort is needed on price indices and better concepts and measures of output are needed for financial and insurance services and other “hard-to-measure” services. Just as serious are the problems of measuring changes in real output and prices of the industries that intensively use computer services. In some cases output, even if defined, is not directly priced and sold but takes the form of implicit services which at best have to be indirectly measured and valued. How to do so is not obvious. In the information economy, additional problems arise. The provision of information is a service which in some situations is provided at little or no cost via media such as the web. Thus on the web there may be less of a connection between information provision and business sales. The dividing line between goods and services becomes fuzzier in the case of e-commerce. When Internet prices differ from those of brick-and-mortar stores do we need different price indices for the different outlets? Also the information economy may affect the growth of Business-to-Consumer sales, new business formation and in cross-border trade. Standard government surveys may not fully capture these phenomena. Meanwhile the availability of IT hardware and software results in the variety and nature of products being provided changing rapidly.
Moulton also argues that the measures of the capital stock used need to be strengthened, especially for high-tech equipment. He notes that one issue with measuring the effects of IT on the economy is that IT enters the production process often in the form of capital equipment. Much of the data entering inventory and cost calculations are rather meagre and needs to be expanded to improve capital stock estimates. Yet another issue with the capital stock measure is that a number of the components of capital are not completely captured by current methods, an obvious example being intellectual property. Also research and development and other intellectual property should be treated as capital investment though they currently are not. In addition to all this Moulton argues that the increased importance of electronic commerce means that the economic surveys used to capture its effects need to be expanded and updated.
In Peter Howitt’s view there are four main measurement problems for the knowledge economy:
1) The “knowledge-input problem”,
2) The “knowledge-investment problem”,
3) The “quality improvement problem”,
4) The “obsolescence problem”.
To deal with these problems Howitt makes a call for better data. But it’s not clear that better data alone is the answer, to both Howitt’s problems and the other issues outlined here. Without a better theory of what the “knowledge economy” is and the use of this theory to guide changes to the whole national accounting framework, it is far from obvious that much improvement can be expected in the current situation.
One simple question is to which industry or industries and\or sector or sectors of the economy can we tie knowledge\information production? When considering this question several problems arise. One is that the “technology” of information creation, transmission and communication pervades all human activities so cannot fit easily into the national accounts categories. It is language, art, shared thought, and so on. It is not just production of a given quantifiable commodity. Another issue is that because ICT exists along several different quantitative and qualitative dimensions production can not be added up. In addition if much of the knowledge in society is tacit, known only to individuals, then it may not be possible to measure in any meaningful way. If on the other hand knowledge is embedded in an organisation via organisational routines, see Becker (2004) for a review of this literature, then again it may not be measurable. Organisational routines may allow the knowledge of individual agents to be efficiently aggregated, much like markets aggregate information, even though no one person has a detailed understanding of the entire operation. In this sense, the organisation “possesses” knowledge which may not exist at the level of the individual member of the organisation. Indeed if, as Hayek can be interpreted as saying, much of the individual knowledge used by the organisation is tacit, it may not even be possible for one person to obtain the knowledge embodied in a large corporation.
There has also been considerable effort made to measure the information\knowledge society by national and international organisations such as UNESCO, the UN and the EU. That these efforts differ in their outcomes reflects, to a certain degree, different understandings of what the knowledge society is and thus different ways of modelling it. Some documents follow the process of knowledge production to sort out indicators and themes and tend to include measures on i) prerequisites for knowledge production (information infrastructure, knowledge, skill and human capital) and ii) knowledge production (R&D) itself. For example, in “Advancement of the Knowledge Society: Comparing Europe, the US and Japan” (European Foundation for the Improvement of Living and Working Conditions 2004), all indicators are sorted by whether they measure a prerequisite for the advancement of the knowledge society or whether they measure the outcomes of a knowledge society.
Other documents use different criteria to select indicators. The UN model initiated in “Understanding Knowledge Societies in Twenty Questions and Answers with the Index of Knowledge Societies” (Department of Economic and Social Affairs 2005), for example, categorises indicators along three dimensions: assets, advancement and foresightedness. When putting together its “Knowledge Society Barometer” (European Foundation for the Improvement of Living and Working Conditions 2004a), ‘The European Foundation for the Improvement of Living and Working Conditions’ considers notions such as information, knowledge, knowledge-value societies and sustainable development as parts of a ‘jigsaw puzzle’ which makes up their knowledge society framework. It seems to indicate that the knowledge society is viewed as a result of the integration of concerns of the previous conceptualisation of societies. Thus, the different frameworks also suggest the influence of organisational agenda\priorities in defining the knowledge society.
Despite the difference in frameworks and indicators, there are some common themes. These include human capital, innovation, ICT development and the context dimension. The human capital theme includes variables on the levels of people’s skills and education which reflect the size of the pool of educated people. Included in the innovation theme are variables showing innovation investment, procedures, capacities and networks. There are diverse indicators under the ICT theme; yet, they can be categorised as either resources or access. The former refers to the information infrastructure while the latter is related to the accessibility of information in people’s life and work. The context dimension always includes variables on socio-economic, political and institutional conditions for knowledge production.
Obviously, these themes are crucial for measuring the knowledge society. However, these measures are not without their pitfalls. One basic problem for these measures is caused by the “knowledge problem”. In some cases, knowledge is understood partially and information and knowledge are treated as exchangeable terms. As a result, some documents focused entirely on measuring the information economy while talking about the knowledge economy and society. Other documents mentioned the difference between tacit and explicit knowledge, the distinction between information and knowledge, and thus, the distinction between the information society and the knowledge society while they failed to employ appropriate variables to reflect the distinctions, due to data availability. Among these documents, we do see a gradually shifting understanding and discourse on the knowledge society. For example, “UNESCO World Report: Towards Knowledge Societies” could be seen as a leading document in initiating the paradigm shift from the information society to the knowledge one. It acknowledges that “the idea of the information society is based on technological breakthroughs. The concept of knowledge societies encompasses much broader social, ethical and political dimensions” (UNESCO 2005: 17). At the same time, another document prepared by UNESCO on statistical challenges shows difficulties in identifying the relevant data within the existing measurement frameworks.
In addition, the knowledge problem raises other issues to do with the choice of indicators in each of the major themes. For example, human capital is measured according to people’s formal education and skills based on human capital approaches. This inevitably ignores people’s tacit knowledge and knowledge between people. There are a number of sociological studies which show that even within the economic domain people are not rational actors but their economic performance is significantly affected by social, cultural and political structures in which they are embedded.
Similarly, the measurement of innovation in these documents seems to focus mainly on the production of scientific knowledge in laboratories. This is inconsistent with the Mode-2 knowledge production initiated by Gibbons (1994) in the knowledge society in which science and society co-evolve. Also the measurement of innovation fails to distinguish the role of inventions from that of innovations. Consequently, it is difficult to see how they can measure the economic value of innovation and at the same time attach a social value to it. Regarding ICTs, it seems that the widely accepted practice is to enumerate the physical infrastructure or, at best, measure access to information. There is a misunderstanding on the relationship between technology and human beings here. It is not technology but human beings and their interactions that constitute so-called society and its institutions. Thus, the function of ICTs is not only their capacity to provide additional new connections but also their potential for opening or closing forms of personal, social and economic capacities, relationships and power-plays (Dutton, 2004). Mansell and Wehn’s (1998) INEXSK approach would be a valuable endeavour to integrate the dimension of human beings, their knowledge and ICTs in the knowledge society measurement (Mansell and Wehn 1998).
Another problem with measures of the knowledge society is confusing the knowledge economy with the knowledge society. Generally, there are two kinds of documents on the measurement of the knowledge society. One group focuses on measuring the knowledge economy although they mention the concept of the knowledge society. The foci of the measurement are human capital, innovation and ICT development. A representative document is “Measuring a Knowledge-based Economy and Society: An Australian Framework” prepared by the Australian Bureau of Statistics. The document’s author claims that this framework
“does not attempt to cover all knowledge in the economy and society . . . [and] offer a comprehensive treatment of a knowledge-based society although it does address those social elements which potentially affect economic change or are affected by it” (Australian Bureau of Statistics 2002: 15)
Another group of documents considers both economic and technological features and social conditions and outcomes of the knowledge society. Two representative documents here would be, “Advancement of the Knowledge Society: Comparing Europe, the US and Japan” (European Foundation for the Improvement of Living and Working Conditions 2004) and “Knowledge Society Barometer” (European Foundation for the Improvement of Living and Working Conditions 2004a) published by the European Foundation for the Improvement of Living and Working Conditions. There are some variables reflecting social issues such as social inclusion, quality of life and gender equality in the two documents. However, they failed to see that both the economic and the social are equally important and integrated components in the measurement frameworks. Instead, the social is still treated as the ‘leftover’ after having identified ‘significant’ and ‘measurable’ components for national accounting.
In light of these issues it would seem that a necessary first step along the path towards the correct measurement of the knowledge society\economy would entail the development of a theory of the knowledge society\economy. Such a theory would tell us, among other things, what the knowledge economy is, how - if at all - the knowledge economy\knowledge society differ, how they change and grow, and what the important measurable characteristics are. Based on this, a measurement framework could be developed to deal with, at least some of, the problems outlined above.
All the references in the above can be found in Oxley, Walker, Thorns and Wang (2008).
- Oxley, Les, Walker, Paul, Thorns, David, Wang, Hong (2008). 'The knowledge economy/society: the latest example of “Measurement without theory”?', The Journal of Philosophical Economics, II:1 , 20-54.
EconTalk this week
Kevin Kelly talks with EconTalk host Russ Roberts about measuring productivity in the internet age and recent claims that the U.S. economy has entered a prolonged period of stagnation. Then the conversation turns to the potential of robots to change the quality of our daily lives.
Monday, 21 January 2013
Nine facts about top journals in economics
These nine facts will not be of any concern to most of you, but they will be very important for a small subset of you. So those in the subset take note.
'Publish or perish' has been the rule in academic economics since forever, but there is a widespread perception that publishing in the best journals has become harder and much slower. The nine facts given below include evidence confirming this perception. The number of articles published in top journals has fallen, while the number and length of submissions have risen. I would argue that this basic trend is true not just of the top five journals but of all journals. You do have to ask how the incentives to publish have changed over the years. Have they changed for the better? The profession should consider recalibrating publication demands to reflect this new reality.
- The number of yearly submissions to the five top journals nearly doubled from 1990 to 2012.
- The total number of articles published in the top journals declined from about 400 per year in the late 1970s to around 300 per year in 2010-12. The combination of rising submissions and falling publications led to a sharp fall in the aggregate acceptance rate, from around 15% in 1980 to 6% today.
- The American Economic Review now accounts for 40% of top journal publications in the field, up from 25% in 1970. In contrast, JPE, which also published about one quarter of all top-five articles in the late 1970s, now publishes less than 10% of these articles. Stated differently, in the late 1970s AER and JPE had about equal say in the gatekeeping process that determined publications in the top-five journals. Now AER has four times greater weight than JPE.
- Trends in the length of published articles are also documented: papers published in the top five journals are nearly three times longer today than they were in the 1970s.
- The fifth fact is that the number of authors per paper has increased monotonically over time. In the early 1970s, three quarters of articles were single-authored, and the average number of authors in a paper was 1.3. By the early 1990s, the fraction of single-authored papers had fallen to 50%, and the mean number of authors reached 1.6. Most recently (2011-2012), more than three quarters of papers have at least two authors and the mean number of authors is 2.2.
- Sixth, papers published in the top five economics journals are highly cited. Among those published in the late 1990s, for example, the median article has about 200 Google Scholar citations.
- Seventh, citation-based rankings of the top-five journals reveal interesting patterns. Median citations for articles in The American Economic Review and the Journal of Political Economy tend to be quite similar from year to year – for example, around 100 in the late 1980s, between 250 and 300 in the mid-1990s, and around 130 in 2005. In the earlier years of our sample, articles in Econometrica have about the same median citations as those in AER or JPE. Starting in the 1990s, however, there is a discernible fall in the relative impact of Econometrica articles. Articles in The Review of Economic Studies tend to be the least cited among the top five journals, although its relative position appears to be improving in the last few years. Perhaps the most obvious feature is the dramatic increase in relative citations for articles in The Quarterly Journal of Economics.
- Eighth, using a regression-based analysis, it is shown that citations are strongly increasing in both the length of a paper and the number of coauthors, suggesting that trends in both dimensions may be driven in part by quality competition.
- Ninth, despite the relative stability of the distribution of published articles across fields, there are interesting differences in the relative citation rates of newer and older papers in different fields. In particular, papers in Development and International Economics published since 1990 are more highly cited than older (pre-1990) papers in these fields, whereas recent papers in Econometrics and Theory are less cited than older papers in these fields.
Card and DellaVigna conclude their article by saying,
Overall, these findings have potentially significant implications for academic economists, particularly with regard to the career paths of younger scholars. Most importantly, the competition for space in the top journals has grown fiercer over time. The overall acceptance rate for submissions at the top five journals is about one-third as high today as in the early 1970s. This trend is independent of the trend documented by Ellison (2002) toward longer delays in the adjudication and revision process, and in fact has largely emerged in the decade since Ellison’s original investigation. Both lower acceptance rates and longer delays, however, make it increasingly difficult for any one author to achieve a given set of publication benchmarks. Authors have clearly responded by forming bigger teams, and to the extent that co-authored papers are treated as equivalent to single-authored ones, they have been able to partially mitigate the adverse effects of lower acceptance rates and longer delays.
More blatant self promotion
I have made a few changes to my paper "The Past and Present of the Theory of the Firm". The new version is available at SSRN.
The abstract reads:
In this survey we give a short overview of the way in which the theory of the firm has been formulated within the ‘mainstream’ of economics, both past and present. As to a break point between the periods, 1970 is a convenient, if not entirely accurate, dividing line. The major difference between the theories of the past and the present, as they are conceived of here, is that the focus, in terms of the questions asked in the theory, of the post-1970 literature is markedly different from that of the earlier (neoclassical) mainstream theory. The questions the theory seeks to answer have changed from being about how the firm acts in the market, how it prices its outputs or how it combines its inputs, to questions about the firm’s existence, boundaries and internal organisation. That is, there has been a movement away from the theory of the firm being seen as developing a component of price theory, namely issues to do with firm behaviour, to the theory being concerned with the firm as a subject in its own right.
Friday, 18 January 2013
Kidney exchange and market design
Recent Nobel Prize winner Alvin Roth gives a brief summary of work on "Kidney Exchanges" in the NBER Reporter: Number 4, 2012.
More than 90,000 patients are on the U.S. waiting list for a kidney transplant from a deceased donor, and only 11,000 or so such transplants are accomplished each year. So, the waiting is long and costly, sometimes fatally so. But healthy people have two kidneys and can remain healthy with only one, which also makes it possible to receive a kidney from a living donor -- around 6,000 such transplants were accomplished in 2011. Nevertheless, someone who is healthy enough to donate a kidney may be unable to donate to his or her intended recipient because of various types of donor-recipient incompatibility. This is the origin of kidney exchange. In the simplest case, two incompatible patient-donor pairs exchange kidneys, with each patient receiving a compatible kidney from the other pair's donor. The first kidney exchange in the United States was performed at the Rhode Island Hospital in 2000, when doctors there noticed two incompatible patient-donor pairs who could benefit from exchange. Shortly after that, Tayfun Sonmez, Utku Unver, and I proposed a way to organize a multi-hospital kidney exchange clearinghouse(1), and began discussions with Dr. Frank Delmonico of Harvard Medical School, that soon led to the founding of the New England Program for Kidney Exchange.(2) Together with Itai Ashlagi, we have since assisted in the formation and operation of other kidney exchange networks operating around the country.
For the references follow the link at the top of the message.
In the United States and most of the world it is illegal to buy or sell organs for transplant.(3) As Jevons (1876)(4) noted, one obstacle to two-way barter exchange is the need to find a counterparty who has what you want and also wants what you have. One way to reduce the difficulty of finding these double coincidences is to assemble a large database of interested patient-donor pairs. Another is to consider a larger variety of exchanges than those between just two pairs: for example, a cycle of exchange among three pairs, or a chain that begins with a donation by a non-directed donor (such as a deceased donor, or an altruistic living donor) to the patient in an incompatible patient-donor pair, whose donor "passes it forward" to another such pair or ends the chain with a donation to someone on the waiting list for a deceased donor (that is, the chain ends when a donation is made to a patient who does not have a willing but incompatible live donor).
Our 2003 paper proposed kidney exchange that integrated cyclic exchanges of all sizes and chains beginning with a non-directed donor and ending with a donation to someone without a living donor. We focused on two kinds of incentive issues that seemed likely to be important in a mature system of kidney exchange, both concerned with aligning incentives so as to make it safe and simple to participate. First, we showed how exchanges could be arranged so that they would be in the core of the game, which means that no coalition of patient-donor pairs could go off on their own, or to a competing exchange, and do better than to accept the proposed exchanges. Second, we showed how this could be accomplished in a way that made it a dominant strategy for patients (and their surgeons) to reveal the medical information that determined the desirability of each potential transplant. It is worth noting that the tools we used built on theory that was initially proposed in a very abstract setting: Shapley and Scarf (1974) studied a "top trading cycle" algorithm for trading indivisible goods without money and showed that it produced an allocation in the core,(5) and Roth (1982)(6) showed that the top trading cycle algorithm made it a dominant strategy for traders to reveal their true preferences. Abdulkadiroglu and Sonmez (1999)(7) extended this model to deal with assignment of dormitory rooms when some students already had rooms, some did not, and some rooms might be vacant, so that assignment would involve chains as well as cycles.
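To make the top trading cycle idea a little more concrete, here is a minimal sketch in Python. This is only my illustration of the Shapley-Scarf algorithm in its original abstract setting (one indivisible good per agent, strict preferences over all goods); it is not the clearinghouse software Roth and his co-authors actually built, and the agent names and preference lists are invented for the example.

def top_trading_cycles(preferences):
    # preferences: dict mapping each agent to a list of agents, ordered from the
    # owner of that agent's most-preferred good down to the least preferred.
    # Returns a dict telling each agent whose original good they end up with.
    remaining = set(preferences)
    allocation = {}
    while remaining:
        # Each remaining agent points to the owner of its best remaining good.
        points_to = {a: next(o for o in preferences[a] if o in remaining)
                     for a in remaining}
        # Follow the pointers from any agent; some node must eventually repeat,
        # and the repeated segment is a top trading cycle.
        path, node = [], next(iter(remaining))
        while node not in path:
            path.append(node)
            node = points_to[node]
        cycle = path[path.index(node):]
        for agent in cycle:          # trade along the cycle
            allocation[agent] = points_to[agent]
        remaining -= set(cycle)
    return allocation

# Agents a and b each prefer the other's good; c prefers its own.
prefs = {'a': ['b', 'c', 'a'], 'b': ['a', 'c', 'b'], 'c': ['c', 'a', 'b']}
print(top_trading_cycles(prefs))   # a and b swap, c keeps its own good

In the kidney application the "goods" are donor kidneys and the rankings come from medical compatibility and quality, which is part of why the core and dominant-strategy results from this abstract setting carried over so naturally.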
We observed that the efficient chains and cycles in kidney exchange mostly would be short but occasionally would be long, which presented a logistical problem, since, for incentive reasons, all surgeries in a given exchange would be performed simultaneously (because contracts can't be written on kidneys). This means that even an exchange between two pairs requires four operating rooms and surgical teams, for the two nephrectomies (kidney removal from the donor) and two transplants. A three-way exchange would require six. When we presented this initial proposal to our surgical colleagues, led by Frank Delmonico, they felt it was a critical problem - the prospect of four simultaneous surgeries was daunting enough. They asked us to present a proposal with the more modest aim of organizing exchanges involving only two-way exchanges.
Our new, more limited proposal(8) and the accompanying software formed the basis for organizing the New England Program for Kidney Exchange,(9) and was widely shared and explained and soon adapted for use elsewhere. Almost simultaneously, we began exploring with our surgical colleagues the possibilities of including larger exchanges and chains.(10, 11, 12) (It speaks volumes about the relative publishing speed of Economics and Medicine to note that the follow-up paper which reported in the American Journal of Transplantation how longer exchanges actually had been carried out was published a year later than the publication of the original 2005 NBER Working Paper analyzing such exchanges.)
Although the three-way chain reported in that AJT paper was performed simultaneously (and hence involved six operating rooms and surgical teams), the paper also proposed that chains that begin with a non-directed donor might not need to be performed simultaneously. The argument was a simple cost-benefit analysis. The reason that cyclic exchanges are performed simultaneously is that if they were not, some patient-donor pair would have to give a kidney before getting one, and if the cycle were to be broken subsequently, that pair would suffer a grievous loss. The donor in the pair would have undergone a nephrectomy that yielded no benefit to the recipient in the pair, and there would no longer be a kidney with which to participate in a future exchange.
Now consider a chain that begins with a non-directed donor, who donates to some incompatible patient-donor pair under the understanding that they will subsequently donate to another, and so on. Every pair in this chain will receive a kidney before they donate one. If the chain is broken, then the pair that was scheduled but fails to receive a kidney will be disappointed, but not grievously harmed. They are not worse off than they were before the non-directed donor came forward, and, in particular, they still have a kidney with which to participate in some future exchange. Hence the cost of a broken link in a chain initiated by a non-directed donor is much less than that of a broken link in an exchange among a cycle of patient-donor pairs.
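As a back-of-the-envelope way of seeing this cost-benefit point, the toy Python sketch below (my own illustration, not anything from the papers cited) counts which pairs end up having donated a kidney without receiving one when a non-simultaneous sequence of surgeries is broken off after k completed steps.

def harmed_if_broken(kind, n_pairs, k):
    # Returns the set of pairs whose donor has given a kidney but whose patient
    # has not received one when the planned sequence stops after k steps.
    donated, received = set(), set()
    if kind == "cycle":
        # Step i: pair i's donor gives to pair (i mod n)+1's patient.
        for step in range(1, k + 1):
            donated.add(step)
            received.add(step % n_pairs + 1)
    else:  # "chain" started by a non-directed donor
        # Step 1: the non-directed donor gives to pair 1's patient.
        # Step i > 1: pair (i-1)'s donor gives to pair i's patient.
        for step in range(1, k + 1):
            received.add(step)
            if step > 1:
                donated.add(step - 1)
    return donated - received

print(harmed_if_broken("cycle", 3, 2))   # {1}: pair 1 gave a kidney and got nothing back
print(harmed_if_broken("chain", 3, 2))   # set(): every pair that donated had already received

However the break falls, a chain started by a non-directed donor never leaves a pair having given a kidney for nothing, which is exactly the asymmetry that makes non-simultaneous chains tolerable while non-simultaneous cycles are not.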
In 2007, Mike Rees, a pioneer of kidney exchange and the founder of the Alliance for Paired Donation, which is one of the most active networks, began the first such non-simultaneous chain. It was reported on in Rees et al. (2009), at which point it had accomplished ten transplants (and 20 surgeries), many more than could have been done simultaneously. (13) Since then, non-simultaneous non-directed donor chains have become the fastest growing part of kidney exchange, even though the number of non-directed donors is small. In some cases a non-directed donor has initiated a chain of more than 30 transplants.
Ashlagi and I have worked to understand why long chains are so useful, and how to structure them. As kidney exchange has grown and become a standard tool of transplantation, hospitals are more able to do some exchanges among their own patients. This means the players in the kidney exchange game have changed: where it used to be enough to think of the incentives of patients and donors and their surgeons, now the directors of transplant centers are players, and they see many patient-donor pairs. Their strategy sets now include which pairs to show to a centralized exchange. The present organization of kidney exchanges gives them some incentives to withhold their easy-to-match pairs. This could be fixed by taking account of which hospitals enrolled easy-to-match pairs and using this information (in a sort of "frequent flier program") to give some increased probability of matching to patients at those hospitals.(14) But this faces important political obstacles and has so far not been adopted. Partly as a result of the withholding of easy-to-match pairs, the percentage of patients enrolled in kidney exchange networks that are hard to match, even to a blood-type compatible donor, has skyrocketed.
We can organize patient and donor data in a compatibility graph, in which each node represents a patient and her incompatible donor(s), and an edge goes from one node to another whenever the donor in the first node is compatible with the patient in the second node. As patients have become harder to match, the compatibility graphs have become sparser, that is, they contain fewer edges. When we look at the data of the kidney exchange networks with which we work, there is a densely connected sub-graph of the relatively few fairly easy-to-match pairs, and a sparse sub-graph of many hard-to-match pairs (this is joint work with David Gamarnik and Mike Rees). Within the easy-to-match sub-graph, many patients could be transplanted with the aid of two-way or three-way exchanges, but within the sub-graph of hard-to-match pairs, only long chains offer the chance of transplanting many patients.(15) Non-directed donors have a chance of starting those long chains, and the presence of easy-to-match pairs allows more hard-to-match pairs to be included.
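For readers who like to see the data structure, here is a minimal sketch of such a compatibility graph in Python. The pairs and the blood-type-only compatibility rule are invented for illustration (real programs also screen for tissue-type antibodies and much else), so this is in no way the networks' actual matching software.

# Simple blood-type rules: the patient types each donor type can give to.
CAN_DONATE_TO = {
    "O":  {"O", "A", "B", "AB"},
    "A":  {"A", "AB"},
    "B":  {"B", "AB"},
    "AB": {"AB"},
}

# Each node is an incompatible pair: (donor blood type, patient blood type).
pairs = {1: ("A", "B"), 2: ("B", "A"), 3: ("AB", "O"), 4: ("A", "O")}

# Directed edge (i, j): pair i's donor could give to pair j's patient.
edges = {(i, j)
         for i, (donor, _) in pairs.items()
         for j, (_, patient) in pairs.items()
         if i != j and patient in CAN_DONATE_TO[donor]}

# A two-way exchange is simply a mutual edge.
two_way = {tuple(sorted(e)) for e in edges if (e[1], e[0]) in edges}
print(sorted(two_way))   # [(1, 2)]: pairs 1 and 2 can simply swap kidneys

Pairs 3 and 4, whose patients are blood type O, stand in for the hard-to-match nodes: no donor in this little pool can reach them, so their best hope is a longer chain started by, say, an O-type non-directed donor.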
Despite the growing success that kidney exchange has had in facilitating transplants from living donors, the list of people waiting for kidney transplants from deceased donors continues to grow. Deceased donor organs are a scarce resource of an unusual kind, because their supply depends on decisions to donate made by potential donors (while still living) and their next of kin (immediately afterwards). Consequently there are market design issues associated with how donations are solicited, and how organs are allocated, both of which may influence the donation decision and hence the supply. Judd Kessler and I have begun to investigate this:(16) we begin with an experimental investigation motivated by a priority allocation scheme just put into place in Israel, in which people who have registered as donors will be given some priority in case they need to receive an organ for transplant, and so will members of their immediate family.
While it is natural that economists should investigate institutions that facilitate exchange, many people (including some economists) find it surprising that economists should be helping to design the institutions of kidney exchange. This is a natural outgrowth, however, of two strands in modern economics: market design in general (17), and the study of matching markets. Matching markets are those in which price does not do all the work of determining who gets what, and they include some of the important passages in our lives, from school choice and college admissions to marriage and labor markets. In none of these can you simply choose what you want - you also have to be chosen. In some of these, economists have begun to help design the matching institutions.
Outsourcing doesn't always work
Many firms outsource work they want done. But how many employees outsource their own job? This is from the Globe and Mail
Bob was his company’s best software developer, got glowing performance reviews and earned more than $250,000 a year.
Then one day last spring, Bob’s employer, an American infrastructure company, thought its computer network had been attacked by a virus.
The ensuing forensic probe revealed that Bob’s software code had in fact been the handiwork of a Chinese subcontractor.
Bob was paying a Chinese firm about $50,000 a year to do his work, then spent the day surfing the web, watching cat videos and updating his Facebook page.
“This particular case was pretty unique,” computer security investigator Andrew Valentine, who helped uncover Bob’s scheme, said in an e-mail to The Globe and Mail. “We thought it was actually pretty clever.”
By all accounts, the Chinese contractor did an excellent job and until then it reflected well on Bob.
Bob was fired for violating internal company policy, Mr. Valentine said in his e-mail to The Globe and Mail.
I'm guessing this type of outsourcing only works when the boss doesn't know about it! The negative externality for the other employees of the company is that now that management knows about the high quality of the Chinese contractor, it may start officially outsourcing work to them.
EconTalk this week
Esther Dyson talks with EconTalk host Russ Roberts about the market for attention and how technology has changed how much we pay attention to others, and vice versa. Along the way Dyson reminisces about Steve Jobs, the nature of the start-up and venture capital world, and the future of space travel.
Tuesday, 8 January 2013
Aumann on Shapley and Roth
In the February 2013 Notices of the American Mathematical Society Robert Aumann writes on Shapley and Roth Awarded Nobel Prize in Economics (pdf).
(HT: Market Design)
EconTalk this week
Morten Jerven of Simon Fraser University, author of Poor Numbers, talks with EconTalk host Russ Roberts about the quality of data coming out of Africa on income, growth, and population. Jerven argues that the inconsistency of the numbers and methodology, both across countries and within a country across time, makes many empirical studies of African progress meaningless. The conversation closes with a discussion of what might be done to improve data collection in poor countries.
Monday, 7 January 2013
Roberto Serrano on Lloyd Shapley
When Lloyd Shapley was named co-winner of last year's Nobel Prize in economics many people went, Who? Well here is a link to a paper by Roberto Serrano which explains Shapley's contribution to game theory.
(HT: Market Design)
Saturday, 5 January 2013
Regulatory capture video
In this short video Susan Dudley of George Washington University provides a concise introduction to the concept of regulatory capture:
(HT: Knowledge Problem)
Friday, 4 January 2013
Fiscal cliff video
Economists John Cochrane and Carl Tannenbaum appeared on "Chicago tonight" to discuss the deal done on the "Fiscal Cliff" in the U.S. Regime uncertainty looks like a big problem still for the U.S.
Wednesday, 2 January 2013
The decline of British industry
That the British car industry is not what it once was is obvious, but you are left asking why it has collapsed so dramatically. Part of the answer may be found in the attitudes of the managers of British companies after World War Two. In a short set of notes on the full cost controversy, G. B. Richardson writes,
In the 1950s, when I was Chairman of the Economics Research Group, two directors of Morris Motors (a large automobile manufacturer) with whom we met informally after dinner in an Oxford college, told us that they had set a price for their very successful Morris Minor car at what they considered a fair level. Given that there was then a long waiting list for the vehicle, we asked why they did not charge more. We were told that this would be profiteering, that the firm would not unfairly exploit a current scarcity, as this would be wrong in principle and threaten customer goodwill.
Such a pricing policy may seem "fair" but it comes at a cost, as Richardson notes.
This policy seemed to me then, and would seem to most of us today, commercially unwise; had the firm made the most of the situation then obtaining, it could have accumulated the funds needed for future development and thus been better able to face its competitors, still temporarily disabled after the conflict, once they again got into their stride.
A lack of funds to finance future development of their product range may well have been one factor contributing to the inability of the British car industry to deal with the threat coming from the development of car industries in other countries, mainly, of course, Japan.
But the attitudes of the management of the British companies may also have hampered economic development in other ways, in particular via the professionalism of British management. Richardson goes on to say,
[ ... ] but it is at least possible that British business management was at that time less able and less professional than today. The directors of Morris Motors to whom I referred had remarked to us, apparently with satisfaction, that none of their number were university graduates. The founder of the business, William Morris, later Lord Nuffield, had started his business career by running a cycle shop in Oxford; his successors may have seen this as showing the superfluousness of higher education, [ ... ]
But such a view was not shared by all,
[ ... ] it is notable that Nuffield himself established a graduate college in Oxford named after him.
What all this suggests is that businessmen - and policy makers - are influenced by the nature of the attitudes and world view of their times. This can work against economic development.
Tuesday, 1 January 2013
EconTalk this week
Becky Pettit of the University of Washington and author of Invisible Men talks with EconTalk host Russ Roberts about the growth of the prison population in the United States in recent decades. Pettit describes the magnitude of the increase particularly among demographic groups. She then discusses the implications of this increase for interpreting social statistics. Because the prison population isn't included in the main government surveys used by social scientists, data drawn from those surveys can be misleading as to what is actually happening among demographic groups, particularly the African-American population.