Thursday, 14 April 2016

Consumption is the goal

One of the many things Adam Smith taught us is that the ultimate goal of economic activity is consumption and not, as many people (most famously perhaps the mercantilists) seem to think, production. Adam Smith pointed out more than 240 years ago that "Consumption is the sole end and purpose of all production". We value production as a means of getting to the goal of consumption.

Trade helps consumers while protection helps producers, and thus if production is the ultimate goal then we should support protection. If, on the other hand, we see consumption as the ultimate economic goal, then we should oppose protection. While this point may be well appreciated among economists, it's not as well understood by non-economists. Don Boudreaux has been trying to correct this situation with an opinion piece in the Pittsburgh Tribune-Review. He writes,
Suppose [...] that we accept an opinion held by many advocates of tariffs and other import restrictions — that opinion being that economic policy should be judged not by how well it enables people to consume but, instead, by how well it keeps current producers doing what they do.

“People take pride in their work,” these protectionists observe. “If trade causes them to lose their jobs, they'll lose their dignity. And preventing honest, hardworking people from losing their dignity is reason enough to restrict trade.”

No one doubts that excelling at a job is a source of self-respect and dignity for workers. But what's the root source of this self-respect and dignity? It's not just the worker's knowledge that she is providing well for herself and her family. If providing well for oneself and one's family were sufficient to create self-respect and dignity, then the successful armed robber and arsonist-for-hire would have self-respect and dignity.

Essential to a producer's self-respect and dignity is the belief that he earns his living honestly. The producer takes justified pride in his work not merely because that work pays him well but because that work is socially useful.

Protectionism, however, destroys this source of pride — or, it would destroy this source of pride if protected producers understood the nature of protectionism. Protectionism allows a handful of producers to earn incomes not by serving consumers but, instead, by being served by consumers. Protectionism is a policy, enforced with threats of violence, that prevents consumers from spending their incomes in ways that promote their own best interests; protectionism is a policy of forcing consumers to spend their incomes in ways that promote the interests of current producers.

Protectionism treats production as the ultimate goal of economic activity — a goal that consumption must be made to serve.

Unlike workers and producers who succeed when trade is free, workers and producers who remain in their current jobs only because of trade barriers do not serve their fellow human beings as well as they possibly can. They do not truly earn their incomes. And there is no dignity in that.

Wednesday, 13 April 2016

What are the employment effects of minimum wages?

This question is discussed in a new article from IZA World of Labor. The article is Employment effects of minimum wages by David Neumark (University of California—Irvine, USA, and IZA, Germany).

There is no such thing as a free lunch, and this applies to the minimum wage as much as to anything else. The potential upside is higher wages for affected workers; the downside is that fewer workers could be employed. In policy terms there is a trade-off between these two effects.

Main message
Although a minimum wage policy is intended to ensure a minimal standard of living, unintended consequences undermine its effectiveness. Widespread evidence indicates that minimum wage increases are offset by job destruction. Furthermore, the evidence on distributional effects, though limited, does not point to favorable outcomes, although some groups may benefit.
Neumark's discussion of the evidence on the employment effects of the minimum wage is:
Economists describe the effect of minimum wages using the employment elasticity, which is the ratio of the percentage change in employment to the percentage change in the legislated minimum wage. For example, a 10% increase in the minimum wage reduces employment of the affected group by 1% when the elasticity is −0.1 and by 3% when it is −0.3.
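To make the elasticity arithmetic in Neumark's example concrete, here is a small calculation in Python (my own sketch, using only the numbers quoted above):

```python
def employment_change(pct_minimum_wage_change, elasticity):
    """Percentage change in employment implied by an employment
    elasticity with respect to the legislated minimum wage."""
    return elasticity * pct_minimum_wage_change

# A 10% minimum wage increase under the two elasticities quoted above:
print(employment_change(10, -0.1))  # a 1% fall in employment
print(employment_change(10, -0.3))  # a 3% fall in employment
```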

Through the 1970s, many early studies of the employment effects of minimum wages focused on the US. These studies estimated the effects of changes in the national minimum wage on the aggregate employment of young people, typically 16−19-year-olds or 16−24-year-olds, many of whom have low skills. The consensus of these first-generation studies was that the elasticities for teen employment clustered between −0.1 and −0.3 [1].

Limited evidence from the 1990s challenged this early consensus, suggesting that employment elasticities for teenagers and young adults were closer to zero. But even newer research, using more up-to-date methods for analyzing aggregate data, found stronger evidence of disemployment effects that was consistent with the earlier consensus. Using data through 1999, the best of these studies found teen employment elasticities of −0.12 in the short run and −0.27 in the longer run, thus apparently confirming the earlier consensus: Minimum wages destroy the jobs of young (and hence unskilled) people, and the elasticity ranges between −0.1 and −0.3.

In the early 1990s, a second, more convincing wave of research began to exploit emerging variation in minimum wages across states within the US. Such variation provides more reliable evidence because states that increased their minimum wages can be compared with states that did not, which can help account for changes in youth employment occurring for reasons other than an increase in the minimum wage. A related literature focuses on specific cases of state minimum wage increases. This case study approach offers the advantage of limiting the analysis to a state where the minimum wage increases and another very similar state that is a reasonable comparator. Unfortunately, these results do not necessarily apply in other states and other times.
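The logic of comparing a state that raised its minimum wage with one that did not is a difference-in-differences calculation. A stylised sketch with invented numbers (my illustration, not data from any of the studies discussed):

```python
# Teen employment (in thousands), before and after a minimum wage increase
# in the "treated" state; the "control" state leaves its minimum unchanged.
treated_before, treated_after = 100.0, 97.0   # invented numbers
control_before, control_after = 100.0, 99.0   # invented numbers

# Subtracting the control state's change nets out statewide trends
# (e.g. a recession) common to both states.
effect = (treated_after - treated_before) - (control_after - control_before)
print(effect)  # -2.0: a fall of 2,000 jobs attributed to the minimum wage
```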

An extensive review of this newer wave of evidence looked at more than 100 studies of the employment effects of minimum wages, assessing the quality of each study and focusing on those that are most reliable [2], [3]. Studies focusing on the least skilled were highlighted, as the predicted job destruction effects of minimum wages were expected to be more evident in those studies. Reflecting the greater variety of methods and sources of variation in minimum wage effects used since 1982, this review documents a wider range of estimates of the employment effects of the minimum wage than does the review of the first wave of studies [1].

Nearly two-thirds of the studies reviewed estimated that the minimum wage had negative (although not always statistically significant) effects on employment. Only eight found positive employment effects. Of the 33 studies judged the most credible, 28, or 85%, pointed to negative employment effects. These included research on Canada, Colombia, Costa Rica, Mexico, Portugal, the UK, and the US. In particular, the studies focusing on the least-skilled workers find stronger evidence of disemployment effects, with effects near or larger than the consensus range in the US data. In contrast, few—if any—studies provide convincing evidence of positive employment effects of minimum wages.

One potential exception is an investigation of New Jersey’s 1992 minimum wage increase that surveyed fast-food restaurants in February 1992, roughly two months before an April 1992 increase, and then again in November, about seven months after the increase [4]. As a control group, restaurants were surveyed in eastern Pennsylvania, where the minimum wage did not change. This allowed comparing employment changes between stores in New Jersey and Pennsylvania. The results consistently implied that New Jersey’s minimum wage increase raised employment (as measured by full-time equivalents, or FTEs) in that state. The study constructed a wage gap measure equal to the difference between the initial starting wage and the new minimum wage for fast-food restaurants in New Jersey and equal to zero for those in Pennsylvania. The increase had a positive and statistically significant effect on employment growth in New Jersey (as measured by FTEs), with an estimated elasticity of 0.73. Note that the study did not, as is often claimed, find “no effect” of a higher minimum, but rather a very large positive effect.

A reassessment of this evidence looked at the unusually high degree of volatility in the employment changes found in the data [5]. The new study collected administrative payroll records from fast-food establishments in the same areas from which the initial study had drawn its sample. In the initial survey, managers or assistant managers were simply asked, “How many full-time and part-time workers are employed in your restaurant, excluding managers and assistant managers?” [4]. This question is highly ambiguous, as it possibly refers to the current shift, the day, or the payroll period. In contrast, the administrative payroll data clearly referred to the payroll period. Reflecting this problem, the initial survey data indicated far greater variability than the payroll records did, with some implausible changes.

When the minimum wage effect was re-estimated with the payroll data, the minimum wage increase in New Jersey led to a decline in employment in New Jersey relative to employment in Pennsylvania [5]. The estimated elasticities ranged from −0.1 to −0.25, with many of the estimates statistically significant. In response to these results, the authors of the original study used data from the US Bureau of Labor Statistics on fast-food restaurant employment, this time finding small and statistically insignificant effects of the increase in New Jersey’s minimum wage on employment.

By far the largest number of studies use US data because state-level variation provides the best “laboratory” for estimating minimum wage effects. Many studies focus on the UK, which enacted a national minimum wage in 1999. A national minimum wage poses greater challenges to social scientists, because it is difficult to define what would have happened in the absence of a minimum wage increase. This challenge is reflected in the UK studies. Absent variation in minimum wages across regions in the UK, one recent study examines groups differentially affected by the national minimum wage, finding employment declines for part-time female workers, the most strongly affected. A second study looks at changes in labor market outcomes at ages when the UK minimum wage changes—at 18 and 22—and finds a negative effect at age 18 and at age 21 (a year before the minimum wage increases, which the authors suggest could reflect employers anticipating the higher minimum wage at age 22). However, there are numerous UK studies that do not find disemployment effects.

The current summary differs from many other brief synopses of minimum wage studies, which often point out that some studies find negative effects and others do not. The studies reporting positive or no effects are often given too much weight. Studies suggesting that “we just don’t know” often summarize the literature by citing one or two studies finding positive effects, such as [4], along with a couple of studies reporting negative effects, suggesting that one should not confidently hold the view that minimum wages reduce employment. However, the piles of evidence do not stack up evenly: The pile of studies finding disemployment effects is much taller.

The large review of minimum wage studies also highlights some important considerations when assessing the evidence on minimum wages [2]. First, case-study analyses may cover too little time to capture the longer-run effects of minimum wage changes. Second, case studies focusing on a narrow industry are hard to interpret, since the standard competitive model does not predict that employment will fall in every narrow industry or subindustry when an economy-wide minimum wage goes up.

This view of the overall lessons to be drawn from the large body of research on minimum wages has been contested in a review from 2013 [6], drawing in part on previous meta-analysis. The review uses the estimates displayed in Figure 1 in that meta-analysis to suggest that the best estimates are clustered near zero. However, the figure includes a pronounced vertical line at a zero minimum wage-employment elasticity, creating the illusion that the estimates are centered on zero. This illusion is perhaps further enhanced by including studies with elasticities ranging from nearly −20 (that is, 100 times larger than a −0.2 elasticity) to 5, making it hard to discern whether the graph’s central tendency is closer to 0, −0.1, or −0.2, which is the relevant debate. In fact, the previous meta-analysis reports that the mean across the studies summarized in the graph is −0.19.

Moreover, applying meta-analysis to minimum wage research is problematic. Meta-analysis treats all studies as equally valid, aggregating them to estimate an overall effect. This approach is intuitively appealing for combining estimates from similar experiments that differ mainly in the samples studied, because it turns many small samples into one large one. However, combining minimum wage studies without taking into account the variations in the reliability of their methods and in the groups of workers studied compromises the findings of such meta-analysis.

Two recent revisionist studies find no detectable employment losses from US minimum wage increases [7], [8]. These studies argue that higher minimum wages were adopted in states where the employment of teenagers and other low-skill workers was declining because of deteriorating economic conditions generally, so the negative relationship does not necessarily imply a negative causal effect.

More convincingly, another study suggests that when economic conditions are considered, minimum wage policies have an even stronger effect in reducing employment [9]. That study looks at variations in state minimum wages that arise not from the decisions of state legislators, who could be responding to immediate economic conditions, but from national decisions, which are less likely to respond to state-level economic conditions. The study finds evidence that teenage employment is negatively affected by minimum wage increases, with elasticities as large as −1, although smaller in some cases. This evidence suggests stronger disemployment effects of minimum wages than most other studies find.

Moreover, a review of the two studies finding no detectable employment losses finds that their conclusions are not supported by the data. The review suggests that the data show elasticities nearer to –0.15 for teenagers and some signs of negative employment effects for restaurant workers, although other factors make this hard to estimate [10]. The review concludes that elasticities of employment for groups strongly affected by minimum wage policies are in the range found by many earlier researchers, from –0.1 to –0.2.

Estimates in this range suggest that for groups of workers strongly affected by the minimum wage, disemployment effects are relatively modest. That has led some people to conclude that there are, at most, “small” disemployment effects. However, these elasticities understate the effects on the most affected workers, because even among these groups many workers earn more than the minimum wage. Suppose, for example, that half of teenagers earn the minimum wage and that a rise in the minimum wage sweeps them from the old minimum to the new one. And suppose that the other half of teenagers earn above the new minimum wage and are not affected by the increase. Then, a 10% increase in the minimum wage with a −0.15 elasticity for teens implies that teen employment will decline 1.5%. However, this decline occurs solely among the teenagers earning below the new minimum wage. Since in this example they make up just half of teenagers, their employment must fall 3% to generate a 1.5% decline among all teenagers.
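The dilution argument in this last paragraph can be checked with a few lines of Python (using only the hypothetical numbers in Neumark's example):

```python
share_earning_minimum = 0.5   # half of teens earn the minimum wage
elasticity_all_teens = -0.15  # elasticity measured over ALL teenagers
wage_increase_pct = 10        # a 10% minimum wage increase

# Decline in employment across all teenagers: -1.5%
overall_decline_pct = elasticity_all_teens * wage_increase_pct

# The decline falls entirely on the teens earning below the new minimum,
# so their employment must fall by more to produce the overall figure: -3%
affected_decline_pct = overall_decline_pct / share_earning_minimum
print(affected_decline_pct)
```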


References, along with much additional discussion of the theory of the minimum wage, its distributional effects, the limitations of and gaps in our understanding of its effects, and a summary with policy advice, can be found in the paper. Well worth a read.

The morality of tax avoidance

Emile Yusupoff has written on the above topic at the Adam Smith Institute blog. Thinking about such things is, of course, due to the debates around the Panama Papers leak and whether taking advantage of legal tax avoidance schemes is moral. The claim of many of those upset by the Panama Papers seems to be that it is not, although it is not clear why. Should we just happily give up whatever amount of our income the government demands? And is not doing so a form of theft?

Yusupoff writes,
The view that all taxation is theft may not have much currency outside of hardcore libertarian circles. The opposite view, that there’s no such thing as ‘your money’ and you have an absolute obligation to give up whatever the government thinks is fit, sadly seems to be gaining currency.

The perspective that avoiding tax is inherently theft rests on some very peculiar assumptions. It needs to be accepted that current tax rates are either just, or not high enough, that the right things are being taxed in the right way, and that taking advantage of any loopholes is wrong.

For instance, in order to think that setting up a company to avoid income tax is immoral, you need to assume that: (i) income should be taxed at a higher rate than corporate profits; (ii) there is an obvious and absolute moral distinction between income and profit; and (iii) there are objective grounds to determine when it is legitimate to register a company.

And what about government encouraged avoidance schemes, such as ISAs and tax relief for risky investments? Is it wrong to take advantage of these?

It also needs to be assumed that providing the government with all the funds it demands is moral. It’s easy to talk about hospitals, schools, the roads, defence, and welfare. But that skirts over the real question of whether government should be funding these things at all and, if so, whether they should cost what they do.

It also ignores less palatable areas of expense, such as spending on foreign wars, nuclear weapons, a quixotic and destructive drug war, nonsensical vanity projects, bloated and pointless government departments, and corporate welfare. The same people who attack tax avoidance also (I think correctly) decry much of this, yet remain absolutely committed to the ‘obligation’ to fund the state’s largesse above and beyond what the law requires.

These issues may not have an obvious answer. But that’s exactly why tax dodging cannot just be lazily and self-righteously vilified as ‘disgusting’ by definition.

Tuesday, 12 April 2016

Does a CEO have a duty to lobby?

This question is asked by Luigi Zingales at the Pro-Market (irony anyone) blog. Zingales writes,
If we limit ourselves to describing what businesses do (what we call positive economics), then the answer is obvious. Most CEOs lobby heavily. Not only do they do it, their main investors tell them to do so, as confirmed here by Larry Fink, CEO of Blackrock and one of the largest institutional investors in the world. They lobby not just to redress grievances, but to shape the rules of the game to their own advantage. Alphabet (Google) is not a regulated company (at least in the classical sense of the word), but it is one of the companies spending the most on lobbying. Why? Not only to defend the right to use the massive data it collects, but also to proactively shape the business environment in its favor. Whether one supports “net neutrality” or opposes it, he has to agree that net neutrality greatly favors Google, which fears being charged directly for the massive internet use it generates, while it penalizes telecom companies, which cannot price-discriminate to recover the fixed costs of the network they build. Not surprisingly, Google lobbies very heavily in favor of net neutrality, while telecom companies lobby against it.
Now notice how a socialist like Zingales frames this issue. The problem here is the actions of the evil CEO, out to rape, murder and pillage his way around the world in the name of corporate profits [insert Vincent Price evil laugh]. The evil CEO forces the poor, hapless but nice, caring and social-welfare-maximising politician [insert mental images of Bambi frolicking through the forest] to do his evil bidding.

But wait. The CEO only lobbies to increase corporate profits [insert more Vincent Price evil laughs], so if the politician [more mental images of Bambi frolicking through the forest] took the US government's advice and "just said no", the whole problem would go away. With no payoff to lobbying but still having to pay the costs of lobbying, the CEO [more Vincent Price evil laughs] would find lobbying loss-making and thus stop. To take the Google example above, if the government just ignored Google on net neutrality, what would be the point of lobbying?

The problem with the loony left approach here is that it is framed as a problem with the actions of the firm (of course) and not as a problem with the actions of government. The question that Zingales should have asked is, Does a politician have a duty to ignore lobbying? For a trade to take place there has to be both a supply and a demand. Firms supply lobbying only because politicians demand it.

In addition, note that Zingales is against firms' lobbying but makes no comment on lobbying by any other group. Why? If lobbying by firms is wrong, why isn't lobbying by trade unions, or professional bodies, or churches, or any other group? Why just firms?

Local government picking winners

From Stuff comes news of the Wellington City Council's ability at picking winners,
Wellington City Council is chasing $50,000 it gave to organisers of Wellington Fashion Week, as the man in charge appears to elude efforts to track him down.

Last year the event was cancelled just one week before it was meant to start, and this year it is not going ahead at all.

The debacle comes just months after CallActive, a call centre to which the council gave $300,000 in 2013, folded leaving 60 Wellington-based staff without a job.
The business of local (and central) government is not business, so they should stay away from it. Their track record when involved is not good.

What those who followed Smith also got wrong

The other day I argued that one thing Adam Smith could have done but didn't was develop a theory of the firm. He had building blocks that could have led to some version of such a theory but he didn't go - rightly or wrongly - in this direction.

Given that Smith left the issue of the firm under-theorised, did other economists step in and fill the void? It looks like there was a $100 bill being left on the pavement. As noted in the previous post, the other classical economists followed Smith in paying the firm little heed. But what of the schools of thought that came after the classical school?

It turns out that in the period following the classical economists, with the possible exception of Alfred Marshall, few economists wrote anything much on the firm. When reviewing the contribution of the old institutionalists to the theory of the firm Hodgson (2012: 55) writes, “[ ...] we search in vain for a well-defined ‘theory of the firm’ within the old institutional economics”. Carl M. Guelzo argues that one of the leading old institutionalists, John R. Commons, “[ ...] did not construct a rigorous theory of the firm since this was never his purpose” (Guelzo 1976: 45). With reference to the German historical school Le Texier (2013: 80) writes “[m]embers of the German historical school such as Gustav von Schmoller analysed at length the birth and growth of the business enterprise, but they were more historians than economists. None of these thinkers proposed a theory of the business firm”. When writing about the work of Joseph Schumpeter, Hanappi (2012: 62) says “[a] well-defined theory of the firm thus cannot be found in Schumpeter’s oeuvres”. As to Austrian economics Per Bylund writes, “[b]ut despite the focus in Austrian economics on [ ...] “mundane economics,” and the fact that “the Austrians [have] so many necessary ingredients for a theory of the firm” [ ...], there is no Austrian theory of the firm” (Bylund 2011: 191) and “[w]hereas the theory of the firm has been a neglected area of study in mainstream economics, it has been missing from the Austrian economics literature” (Bylund 2011: 191). Hutchison (1953: 308) comments “[t]he Austrian School, with the exception of Auspitz and Lieben, did not concern themselves much with the analysis of markets and firms, except in respect to their general principle of imputation”. Hutchison also summarised the early neoclassical contributions to the theory of the firm, and markets, as “Jevons has little on the firm. [ ...] 
Walras’s assumptions of perfect competition (maintained virtually throughout) and of fixed technical ‘coefficients’, limited his contribution to the analysis of firms and markets, [ ...]. Pareto’s contribution to the theory of firms and markets were not rounded off, and of very varying value, [...]” (Hutchison 1953: 307). Post-1920 the later neoclassical economists started to develop a theory of firm-level production - see any introductory or intermediate microeconomics textbook for a discussion of the theory - but it is a model of production without firms. Given that the model assumes zero transaction costs, there is no need for the services of the intermediaries known as firms.

In fact it took till around 1970 before anyone decided to take the firm seriously. It was only then that economists such as Oliver Williamson, Armen Alchian, Harold Demsetz, Michael Jensen, William Meckling, Benjamin Klein, Oliver Hart and many others started to consider the firm as an important economic entity in its own right.

This lack of interest in the firm is baffling given the importance of the firm to economic activity, employment, innovation, growth, income generation and general well-being.

Refs.:
  • Bylund, Per L. (2011). ‘Division of Labor and the Firm: An Austrian Attempt at Explaining the Firm in the Market’, Quarterly Journal of Austrian Economics, 14(2): 188-215.
  • Guelzo, Carl M. (1976). ‘John R. Commons and the Theory of the Firm’, The American Economist, 20(2) Fall: 40-6.
  • Hanappi, Gerhard (2012). ‘Schumpeter’. In Michael Dietrich and Jackie Kraff (eds.), Handbook on the Economics and Theory of the Firm (pp. 62-69), Cheltenham: Edward Elgar Publishing Ltd.
  • Hodgson, Geoffrey M. (2012). ‘Veblen, Commons and the theory of the firm’. In Michael Dietrich and Jackie Kraff (eds.), Handbook on the Economics and Theory of the Firm (pp. 55-61), Cheltenham U.K.: Edward Elgar Publishing Ltd.
  • Hutchison, T. W. (1953). A Review of Economic Doctrines 1870-1929, Oxford: Oxford University Press.
  • Le Texier, Thibault (2013). ‘Veblen, Commons, and the Modern Corporation: Why Management Does Not Fit Economics’, Homo Oeconomicus, 30(1) March: 79-98.

Monday, 11 April 2016

Keeping secrets: do we need patents?

Patents are the best-known way for people and firms to protect their intellectual property and gain reward for innovation. But there are well-known problems with patents, and other ways to reward innovation, such as the use of prizes, are often looked at.

One alternative to patents and prizes not often discussed is just to keep your innovation secret. But is it worth it? Secrets are costly to keep and can be found out.

A forthcoming paper, "Keeping Secrets: The Economics of Access Deterrence" by Emeric Henry and Francisco Ruiz-Aliseda, in the American Economic Journal: Microeconomics looks at this question.

The abstract reads:
Keeping valuable secrets requires costly protection efforts. Breaking them requires costly search efforts. In a dynamic model in which the value of the secret decreases with the number of those holding it, we examine the secret holders' protection decisions and the secret breakers' timing of entry, showing that the original secret holder's payoff can be very high, even when protection appears weak, with implications for innovators' profits from unpatented innovations. We show that the path of entry will be characterized by two waves, the first of protected entry followed by a waiting period, and a second wave of unprotected entry. (Emphasis added)
Payoffs to keeping a secret can be high even in a world where protection looks weak, so sometimes shutting the hell up is the best policy.

Of course secrecy can't be the complete solution to rewarding innovation: if it were, firms would always keep their developments secret, but they don't. In many cases they do take out patents.

So what would a Trump presidency be like?

The Boston Globe newspaper has been having some fun on the topic of a Trump presidency. Their front page from Sunday, April 9, 2017 can be found here (pdf).

HT: Greg Mankiw's Blog.

Sunday, 10 April 2016

People are fed up with taking surveys


And who is surprised by this piece of news? But it may matter for the quality of data collected by stats agencies such as StatsNZ. And that matters for policy determination.

In a recent article in the Journal of Economic Perspectives, "Household Surveys in Crisis" by Bruce D. Meyer, Wallace K. C. Mok and James X. Sullivan, it is argued that declining cooperation by survey respondents is adversely affecting the quality of data being collected.

The abstract reads,
Household surveys, one of the main innovations in social science research of the last century, are threatened by declining accuracy due to reduced cooperation of respondents. While many indicators of survey quality have steadily declined in recent decades, the literature has largely emphasized rising nonresponse rates rather than other potentially more important dimensions to the problem. We divide the problem into rising rates of nonresponse, imputation, and measurement error, documenting the rise in each of these threats to survey quality over the past three decades. A fundamental problem in assessing biases due to these problems in surveys is the lack of a benchmark or measure of truth, leading us to focus on the accuracy of the reporting of government transfers. We provide evidence from aggregate measures of transfer reporting as well as linked microdata. We discuss the relative importance of misreporting of program receipt and conditional amounts of benefits received, as well as some of the conjectured reasons for declining cooperation and for survey errors. We end by discussing ways to reduce the impact of the problem including the increased use of administrative data and the possibilities for combining administrative and survey data.
Given the claims we hear about evidence-based policy being the big thing these days, these issues matter. There is little point in basing policy on evidence if that evidence is crap.

Ref.:
  • Meyer, Bruce D., Wallace K. C. Mok and James X. Sullivan. 2015. "Household Surveys in Crisis." Journal of Economic Perspectives, 29(4):199-226.

What is so terrible about this Mossack Fonseca thing?

I can't help thinking not much. One has to keep in mind there is a difference between tax avoidance and tax evasion. Eamonn Butler at the Adam Smith Institute blog makes the point,
The two are different, of course. Theft, fraud, tax evasion and money-laundering are rightly illegal: any firm or country that helps mafia bosses or dictators conceal stolen millions should be exposed and punished. But if you work within the rules and find ways to cut your tax bill, or invest your money in some place where it won’t be taxed within an inch of its life, that is legal and should remain so. Indeed, low-tax jurisdictions act as a safety valve that makes it harder for politicians to oppress their citizens with crippling taxes.

But it is too easy for those politicians to lump together the illegal evasion with the legal avoidance and say that both should be swept away.
And not just politicians, many commentators and journalists who are losing their collective minds over this whole issue make the same mistake.

Butler continues,
That is why politicians hate them. They know that if other places have lower taxes, people will move their money (or their businesses, or even themselves) abroad – so their citizens can no longer be taxed with impunity. It’s pure tax protectionism: governments don’t produce widgets, so they are all in favour of free trade in widgets; but they do produce taxes, so they want to keep out the competition.
So it's competition for thee but not for me! Not that anti-competitive behaviour by governments is all that unusual.

And if governments are so worried about tax havens then there is an obvious solution: lower their taxes.
Low taxes encourage enterprise, investment, growth and freedom. So low-tax jurisdictions don’t need to flout the rules, and it is insulting to suggest that they do: in fact, many have financial sectors that are better regulated than ours. Rather than try to bully them out of existence, the big countries should ditch their tax protectionism, square up to the competition, lower and simplify their own taxes.
What's the bet they won't?

Saturday, 9 April 2016

Things Adam Smith got wrong

In short, not many, and the things he got right far outweigh the things he got wrong, but as James R. Otteson argues in his book "Adam Smith" there were some wrong steps.

In his book Otteson has a chapter on "What Smith Got Wrong". He suggests four things:
  1. Labor Theory of Value,
  2. Happiness and Tranquility,
  3. Committing the Great Mind Fallacy? and
  4. Smithian Limited Government and Human Prosperity.
I'm going to argue here for a fifth thing: Smith missed the opportunity to formulate a theory of the firm. He had building blocks on which to base such a theory, he just didn't develop them.

In particular I would argue he could have expanded his discussions of specialisation and the division of labour, and of joint-stock companies, to formulate some version of a theory of the firm.

Smith opens his magnum opus, An Inquiry into the Nature and Causes of The Wealth of Nations, with a discussion of the division of labour at the microeconomic level, the famous pin factory example, but quickly moves the analysis to the market level. When discussing Smith’s approach to the division of labour McNulty (1984: 237-8) comments,
“[h]aving conceptualized division of labor in terms of the organization of work within the enterprise, however, Smith subsequently failed to develop or even to pursue systematically that line of analysis. His ideas on the division of labor could, for example, have led him toward an analysis of task assignment, management, or organization. Such an intra-firm approach would have foreshadowed the much later−indeed, quite recent−efforts in this direction by Herbert Simon, Oliver Williamson, Harvey Leibenstein, and others, a body of work which Leibenstein calls “micro-microeconomics”. [ ...] But, instead, Smith quickly turned his attention away from the internal organization of the enterprise, and outward toward the market and the realm of exchange, perhaps because he found therein both the source of division of labor, in the “propensity in human nature ... to truck, barter and exchange” and its effective limits”.
Another missed opportunity comes when, from the third edition on, Smith discusses ‘joint-stock companies’. When considering the internal organisation of such firms Smith raises, but does not develop a theory of, what we would today call the principal-agent problems that arise from the separation of ownership from control. Perhaps his most famous remark is,
“[t]he directors of such companies, however, being the managers rather of other people’s money than of their own, it cannot well be expected, that they should watch over it with the same anxious vigilance with which the partners in a private copartnery frequently watch over their own. Like the stewards of a rich man, they are apt to consider attention to small matters as not for their master’s honour, and very easily give themselves a dispensation from having it. Negligence and profusion, therefore, must always prevail, more or less, in the management of the affairs of such a company” (Smith 1776: Book V, Chapter 1, Part III, p. 741).

But “[ ...] Smith neither used the modern terms, “agency” or “corporate governance,” nor developed a general theory−a fact that is often overlooked” (Fleckner 2016: 22).

Perhaps the most obvious reason for this is that Smith wasn't interested in the firm as such. He was, as the title of his book would suggest, interested in economic growth and its nature and causes. This didn't require a theory of the firm in the sense of a theory explaining the existence, boundaries and internal organisation of the firm. There may also be an empirical reason for the firm being overlooked: the relative unimportance of the firm. Until relatively recently firms were simply not a large part of the economy. But it has been pointed out that such an explanation is not wholly convincing. Large firms have existed since before the time of Adam Smith and the classical economists knew this. A more precise, and more defensible, version of the argument would be that the large, vertically integrated and diversified firm was not empirically important until recently.

For whatever reason, the classical economists followed this line of thinking, resulting in a situation which Blaug (1958: 226) could summarise simply by noting that the classical economists “[ ...] had no theory of the firm”.

Refs.:
  • Blaug, Mark (1958). ‘The Classical Economists and the Factory Acts-A Re-Examination’, The Quarterly Journal of Economics, 72(2) May: 211-26.
  • Fleckner, Andreas Martin (2016). ‘Adam Smith on the Joint Stock Company’, Max Planck Institute for Tax Law and Public Finance Working Paper 2016-01 January.
  • McNulty, Paul J. (1984). ‘On the Nature and Theory of Economic Organization: the Role of the Firm Reconsidered’, History of Political Economy, 16(2) Summer: 233-53.
  • Otteson, J. R. (2011). Adam Smith (Major Conservative and Libertarian Thinkers Volume 16, Series Editor: John Meadowcroft), New York: Continuum.
  • Smith, Adam (1776). An Inquiry into the Nature and Causes of the Wealth of Nations, Volumes I and II, R. H. Campbell and A. S. Skinner (general eds.), W. B. Todd (textual ed.), Indianapolis: Liberty Classics, 1981.

Is general equilibrium dead?

As early as 1955 Milton Friedman was suggesting that to deal with "substantive hypotheses about economic phenomena" a move away from Walrasian towards Marshallian analysis was required. When reviewing Walras's contribution to general equilibrium, as developed in his Elements of Pure Economics, Friedman argued,
"[e]conomics not only requires a framework for organizing our ideas, it requires also ideas to be organized. We need the right kind of language; we also need something to say. Substantive hypotheses about economic phenomena of the kind that were the goal of Cournot are an essential ingredient of a fruitful and meaningful economic theory. Walras has little to contribute in this direction; for this we must turn to other economists, notably, of course, to Alfred Marshall" (Friedman 1955: 908).
By the mid-1970s microeconomic theorists had largely turned away from Walras and back to Marshall, at least insofar as they returned to using partial equilibrium analysis to investigate economic phenomena such as strategic interaction, asymmetric information and economic institutions.

If one takes the theory of the firm as an example (see here for an overview of this literature), all the models considered in the contemporary literature are partial equilibrium models. But in this regard the theory of the firm is no different from most of the microeconomic theory developed since the 1970s. Microeconomic fields such as incentive theory, incomplete contract theory, game theory, industrial organisation and organisational economics have largely turned their backs on general equilibrium theory and work almost exclusively within a partial equilibrium framework.

One major path of influence from the mainstream of modern economics to the development of the theory of the firm has been via contract theory. But contract theory is an example of the mainstream’s increasing reliance on partial equilibrium modelling. Contract theory grew out of the failures of general equilibrium. As Salanie (2005: 2) has argued,
“[t]he theory of contracts has evolved from the failures of general equilibrium theory. In the 1970s several economists settled on a new way to study economic relationships. The idea was to turn away temporarily from general equilibrium models, whose description of the economy is consistent but not realistic enough, and to focus on necessarily partial models that take into account the full complexity of strategic interactions between privately informed agents in well-defined institutional settings”.
When surveying theory of the firm literature Foss, Lando and Thomsen (2000) use a classification scheme which clearly illustrates the movement of the current theory of the firm literature away from general equilibrium towards partial equilibrium analysis. The scheme divides the contemporary theory into two groups based on which of the standard assumptions of general equilibrium theory is violated when modelling issues to do with the firm. The theories are divided into either a principal-agent group, based on violating the ‘symmetric information’ assumption, or an incomplete contracts group, based on the violation of the ‘complete contracts’ assumption.

Another recent challenge to standard general equilibrium has come from the introduction of the entrepreneur into the theory of the firm since, as William Baumol noted more than 40 years ago, the entrepreneur has no place in formal neoclassical theory.
“Contrast all this with the entrepreneur’s place in the formal theory. Look for him in the index of some of the most noted of recent writings on value theory, in neoclassical or activity analysis models of the firm. The references are scanty and more often they are totally absent. The theoretical firm is entrepreneurless−the Prince of Denmark has been expunged from the discussion of Hamlet” (Baumol 1968: 66).
The reasons for this are not hard to find. Within the formal model the ‘firm’ is a production function or production possibilities set, it is simply a means of creating outputs from inputs. Given input prices, technology and demand, the firm maximises profits subject to its production plan being technologically feasible. The firm is modelled as a single agent who faces a set of relatively uncomplicated decisions, e.g. what level of output to produce, how much of each input to utilise etc. Such ‘decisions’ are not decisions at all, they are simple mathematical calculations, implicit in the given conditions. The ‘firm’ can be seen as a set of cost curves and the ‘theory of the firm’ as little more than a calculus problem. In such a world there is a role for a ‘decision maker’ (manager) but no role for an entrepreneur.

The necessity of having to violate basic assumptions of general equilibrium theory in order to model the firm suggests that, as it stands, GE cannot deal easily with firms or other important economic institutions. Bernard Salanie has noted that,
“[ ...] the organization of the many institutions that govern economic relationships is entirely absent from these [GE] models. This is particularly striking in the case of firms, which are modeled as a production set. This makes the very existence of firms difficult to justify in the context of general equilibrium models, since all interactions are expected to take place through the price system in these models” (Salanie 2005: 1).
This would suggest that to make general equilibrium models a ubiquitous tool of microeconomic analysis - including the analysis of issues to do with non-market organisations such as the firm - developing models which can account for information asymmetries, contractual incompleteness, strategic interaction, the existence of institutions and the like is not so much desirable as essential. One catalyst for the development of such a new approach to general equilibrium is that partial equilibrium models can obscure the importance of the theory of the firm for overall resource allocation, a point which is more easily appreciated in a general equilibrium framework.

General equilibrium is dead. Long live general equilibrium? We will see.

Refs.:
  • Baumol, William J. (1968). `Entrepreneurship in Economic Theory', American Economic Review, 58(2) May: 64-71.
  • Foss, Nicolai J., Henrik Lando and Steen Thomsen (2000). ‘The Theory of the Firm’. In Boudewijn Bouckaert and Gerrit De Geest (eds.), Encyclopedia of Law and Economics (vol. III, pp. 631-58), Cheltenham U.K.: Edward Elgar Publishing Ltd.
  • Friedman, Milton (1955). ‘Leon Walras and His Economic System’, American Economic Review, 45(5) December: 900-09.
  • Salanie, Bernard (2005). The Economics of Contracts: A Primer, 2nd edn., Cambridge, Mass.: The MIT Press.

Wednesday, 6 April 2016

Do minimum wages stimulate productivity and growth?

This question is asked in a new paper (pdf) under that title by Joseph J. Sabia (San Diego State University, USA, and IZA, Germany).

The basic finding from the paper is that minimum wage increases fail to stimulate growth and can have a negative impact on vulnerable workers during recessions.

The paper also outlines the pros and cons of minimum wages.

The author's main message is,
Empirical evidence provides little support for claims that higher minimum wages will: (i) serve as an engine of economic growth by redistributing income to workers with a relatively high marginal propensity to consume; or (ii) alleviate poverty during economic downturns. Therefore, policymakers wishing to aid low-skilled workers during recessions, or to spur economic growth, should not look to the minimum wage as a policy solution. Rather, means-tested, pro-work cash assistance programs and negative income tax schemes can deliver income to the working poor far more efficiently.

Sunday, 3 April 2016

Per Bylund on "The Problem of Production".

In this audio Per Bylund (Assistant Professor of Entrepreneurship and Records-Johnston Professor of Free Enterprise in the School of Entrepreneurship at Oklahoma State University) discusses his recent book, The Problem of Production: A New Theory of the Firm (Routledge, 2016).

Bylund's theory is rooted in Austrian economics and examines the firm as a part of the market, not as a free-standing entity. In this integrated view, a theory is offered which incorporates entrepreneurship, production, market process and economic development.

The digital revolution, the firm, and economic statistics

As has been noted previously, the digital revolution has given rise to a number of problems to do with the measurement of economic statistics. One particular issue has to do with the effects of the digital revolution on firms and the measurement issues this gives rise to. One reason it is problematic to measure the "knowledge economy" at the national level is that it is difficult to measure knowledge at the level of the individual firm. Part of the reason for this is that none of the orthodox theories of the firm offer us a theory of the “knowledge firm” to guide our measurement. I wrote a section on "The (non)theory of the knowledge firm" in Oxley, Walker, Thorns and Wang (2008).
As has been emphasised by Carter (1996) it is problematic to measure knowledge at the national level in part because it is difficult to measure knowledge at the level of the individual firm. Part of the reason for this is that none of the orthodox theories of the firm offer us a theory of the “knowledge firm” to guide our measurement.

The model of the “firm” found in most microeconomic textbooks does not incorporate knowledge - individual or institutional - or the knowledge worker; it can’t since it isn’t a “theory of the firm” in any meaningful sense. The output side of the standard neoclassical model is a theory of supply rather than a true theory of the firm. In neoclassical theory, the firm is a ‘black box’ there to explain how changes in inputs lead to changes in outputs. The firm is a conceptualisation that represents, formally, the actions of the owners of inputs who place their inputs in the highest value uses, and makes sure that production is separated from consumption. The firm in neoclassical theory “is no more or less than a specialized unit of production, but it can be a one-person unit” (Demsetz 1995: 9).

Given there is no serious modelling of the firm in neoclassical theory, there is no way to deal with the knowledge firm within this framework. There are no organisational problems or any internal decision-making process, in fact, there is no organisational structure at all and thus the advent of the knowledge economy cannot alter this nonexistent structure. As there is no role for managers or employees there can be no knowledge workers in the firm. But the growth in knowledge workers is one of the most important aspects in the development of the knowledge society. And their advent will change the way we think about firms.

Knowledge creators/workers/owners have the potential to be highly internationally mobile (unlike the physical capital or land of the old economy), which has the capacity either to reduce the knowledge divide or to increase it, but importantly at much higher speeds. Buying the necessary knowledge creators/assimilators is like buying physical capital except that ownership of the ‘means of production’ is now vested more in the capital itself (human) than under past modes of production. This has a number of important implications, including helping us understand who wins and who loses from the knowledge economy, and it has the potential to affect our understanding/modelling of the traditional ‘theory of the firm’ - in terms of the Grossman/Hart/Moore (GHM) approach - which is vested in the ownership of physical capital alone.

The modern theory of the firm is based on the work of Grossman and Hart (1986, 1987) and Hart and Moore (1990). Within the GHM approach ownership is defined in terms of residual control over non-human assets: things such as machinery, inventories, buildings, patents, client lists, the firm’s reputation etc. Owner-managers employ labour that cannot work without the physical capital these firms own. Dismissal or resignation requires workers to find other physical-capital-owning organisations (firms) to employ them. On liquidation of the firm, physical capital can be sold and the proceeds disbursed to the owners (shareholders). The standard theory of the firm is thus based on the role of non-human capital in the firm. The definition of a firm, the determinants of the boundaries of a firm - that is, the determinants of vertical integration - the meaning of ownership of the firm and the nature of authority within the firm are all functions of control rights over the firm’s non-human assets. Making non-human assets the centre of the theory means that questions to do with the ownership and control of physical information technology can be addressed, but this concentration on non-human assets means that the theory doesn’t deal with firms based on human assets. However it had been noted from the beginning that the theory could be extended to include human capital. As Hart (1988: 151) argues:
“. . . one difference with previous work is the emphasis on how integration changes control over physical assets. This is in contrast to Coase’s 1937 paper which focuses on the way integration changes an ordinary contractual relationship into one where an employee accepts the authority of an employer (within limits). Note that these approaches are not contradictory. Authority and residual rights of control are very close and there is no reason why our analysis of the costs and benefits of allocating residual rights of control could not be extended to cover human, as well as physical, assets.”
Once we move to a situation where firms may own or need little physical capital, the modern theory of the firm loses much of its main reason for being. Once human capital (labour) becomes the most important, or sole, creator of wealth or value added, modern economic theory is in need of modification. The theory does not, however, lose all relevance. As Hart (1995: 56-7) explains, at least some nonhuman assets are essential to a theory of the firm. To see why this may be so, consider a situation where ‘firm’ 1 acquires ‘firm’ 2, which consists entirely of human capital. The question Hart raises is, what is to stop firm 2’s workers from quitting? Without any physical assets, e.g. buildings, firm 2’s workers would not even have to relocate themselves physically.

If these workers were linked by telephones or computers, which they themselves own, they could simply announce one day that they had decided to become a new firm. For the acquisition of firm 2 by firm 1 to make economic sense there has to be a source of value in firm 2 over and above the human capital of the workers. It makes little sense to buy a ‘firm’ if that ‘firm’ can just get up and walk away. Hart argues there must be some ‘glue’ holding firm 2’s workers in place.

The value which acts as this glue may consist of as little as a place to meet; the firm’s name, reputation, or distribution network; the firm’s files, containing important information about its operations or its customers; or even a contract that prohibits firm 2’s workers from working for competitors or from taking existing clients with them should they quit. The source of value may even just represent the difficulty firm 2’s workers face in co-ordinating a move to another firm. But, Hart points out, without something binding the firm together, the firm becomes a phantom, and as such we should expect that such firms would be flimsy and unstable entities, constantly subject to the possibility of break-up or dissolution.

Thus even a human-capital based firm will involve some nonhuman capital, but the human capital will play the dominant role. The important characteristic of human capital is that it embodies information and knowledge. A theory of the human-capital based firm has to model this co-existence of human and nonhuman capital. Brynjolfsson (1994) deals with the issue by extending the property rights approach to the firm to include information, whether this information is embodied in humans, in the form of human capital, or in artifacts. Rabin (1993) also works within the property rights framework, but extends it by assuming that an agent has information about how to make production more productive which they are willing to sell.

If the firm comprises human capital resources (e.g., a legal or accounting firm) whose accumulated knowledge is the source of wealth creation, the balance of power stemming from the “ownership of the means of production” has changed. Likewise, predictions about what would happen at the dissolution of a knowledge-firm are unclear. Who has the rights to the sell-off of the assets, where these assets are embodied in human beings? How can these assets be sold off? These issues, although important in the context of the economic theory of the firm, may matter less when trying to measure the size or scale of the knowledge economy. However they are likely to have profound effects on the idea of a Knowledge Society in which the balance of (economic) power will change - owners of physical capital losing it to owners of human capital, which, short of slavery, maps one-to-one to each individual. An individual’s economic power would likely vary with their stock of human capital, as would the price they charge to hire it out to others in the form of employment. This in turn affects who wins and who loses from the knowledge society.

While the Brynjolfsson model is distinct from the Rabin model, they are complementary. The relationship between information, ownership and authority is central to both papers. Rabin works within a framework of an adverse selection model and shows that the adverse selection problems can be such that, in some cases, an informed party has to take over the firm to show that their information is indeed useful. The Brynjolfsson model is a moral hazard type framework which deals with the issue of incentives for an informed party to maximise uncontractible effort.

As has been discussed, Hart (1995: 17) noted that the neoclassical model tells us nothing about where a firm’s boundaries will lie or about the size or location of a plant or factory within a given firm. This approach is consistent with every existing firm being a plant or division of one huge firm which produces everything. It is also consistent with every plant or division of each existing firm being a separate and independent firm in its own right. Thus it is not clear in what organisational form production will occur. Will it be organised as a single large factory, several smaller factories or a household? The GHM approach does delineate the boundaries of the firm but still does not tell us anything about the location or size of a plant or factory which is part of the firm. Again the form of production organisation is indeterminate. What will be argued below is that the division of knowledge is one important influence on the form of organisation in which production takes place. The most obvious issue has to do with whether work occurs in a centralised factory, in separate households, or in some combination of the two. This has been an issue since at least the industrial revolution.

The development of ICTs has meant that the costs of moving people as opposed to moving information have risen sharply. The costs involved in sending and receiving information have fallen thanks to technologies such as email and the Internet along with falls in the costs of long distance phone calls and the expanding use of cellular networks. The costs of people moving have not fallen however. Commuting to work via congested city and suburban streets, for example, is at least as difficult as it was two decades ago. The increasing interest in congestion pricing in many cities around the world suggests that traffic problems are not lessening. The ever increasing relative cost of moving people would suggest that the size of the “unit of production” should be moving away from the large factory, so dominant for the last two centuries, towards more home based production, as in the period before the industrial revolution.

The previous sections have briefly outlined the effects of the increasing importance of knowledge for the mainstream theory of the firm. It was argued that the neoclassical production function approach is not a true theory of the firm; rather the firm is portrayed as an uninvestigated, perfectly efficient ‘black box’ which simply turns inputs into outputs without organisational structure. The extensions of the GHM framework offered by Brynjolfsson (1994) and Rabin (1993) inherit the implicit owner/manager restriction of the original GHM framework and thus are of limited value when modelling the knowledge firm. When we turn to the location of production the models suggest that we should, in general terms, see a movement back towards home production, but we are not given a specific relationship between knowledge and plant size or production location.

We are left with an unsatisfactory model of the (knowledge) firm and thus we are unable to give guidance on either empirical or policy questions that flow, via changes to the firm, from the development of the knowledge economy. Firms’ organisational structures are changing in response to the increased prominence of information and knowledge in the production process. In the new economy, not only will we see changes in the location of production, but even where production still takes place within a traditional firm, a factory or office, that firm may have a very different structure and organisation from that which we see today. Rajan and Zingales (2003: 87) argue that we are in fact seeing a new “kinder, gentler firm”. This is a response to the increase in the importance of human capital which, along with increased competition and access to finance, has increased workers’ importance and improved their outside options, thereby changing the balance of power within firms. In Rajan and Zingales’s view “[t]he single biggest challenge for the owners or top management today is to manage in an atmosphere of diminished authority. Authority has to be gained by persuading lower managers and workers that the workplace is an attractive one and one that they would hate to lose. To do this, top management has to ensure that work is enriching, that responsibilities are handed down, and rich bonds develop among workers and between themselves and workers” (Rajan and Zingales 2003: 87).

Cowen and Parker (1997) make a similar point about changing organisational structures. For them, “[i]nformation as a factor of production is making old functional structures and methods of organisation and planning redundant in many areas of business. The successful use of knowledge involves not only its generation, but also its mobilisation and integration, requiring a change in the way it is handled and processed.” (Cowen and Parker 1997: 12). Organisational change, as far as Cowen and Parker are concerned, is the consequence of the increasing need to make use of market principles within the firm and the growing importance of human capital. They note that as far as a firm’s labour force is concerned, “[t]he emphasis now is upon encouraging knowledge acquisition, skills and adaptability in the workforce as critical factors in competitive advantage.” (Cowen and Parker 1997: 32). Firms are obliged to rely more on market based mechanisms as the most efficient way of processing and transmitting information and giving the firm the flexibility and yet also focus it requires. Companies are decentralising their management systems as a way of coping with the uncertainty and pace of change in their markets. The aim is to ensure that those with the required knowledge and right incentives are the ones making the decisions and taking responsibility for the outcomes. Cowen and Parker (1997: 25-8) emphasise how advances in ICTs underlie the ability to be able to combine the advantages of this organisational flexibility with mass production.

But few of these changes are captured or explained by the mainstream theory of the firm. Expanding the orthodox view of the firm to include the new reality of the knowledge economy should be an urgent item on the economic research agenda. It should be noted that such changes to the firm help determine the “winners and losers” from economic change in general. As in all previous “economic revolutions”, this is the ultimate issue to do with the knowledge economy.
All the references in the above can be found in Oxley, Walker, Thorns and Wang (2008).
  • Oxley, Les, Walker, Paul, Thorns, David and Wang, Hong (2008). 'The knowledge economy/society: the latest example of “Measurement without theory”?', The Journal of Philosophical Economics, II(1): 20-54.
A more detailed discussion of these issues can be found in Walker (2010).
  • Walker, P. (2010). "The (non)theory of the knowledge firm". Scottish Journal of Political Economy, v57 no1, February 2010: 1-32.

The digital revolution, and economic statistics

Following on from the previous post, the problems for measuring economic activity brought about by the digital revolution, which has given rise to the so-called "knowledge economy", are an issue that I have written a little on. There is a section on this problem, called "Mr Bean(counter) measures the economy", in Oxley, Walker, Thorns and Wang (2008).
Much time and effort is expended by many national and international organisations in an attempt to measure the economy or economies of the world.

While the measuring of the “standard” economy is funny enough, when we move to the measurement of the “knowledge economy” measurement goes from the mildly humorous to the outright hilarious. Most attempts to measure, or even define, the information or knowledge economy border on the farcical: the movie version should be called, "Mr Bean(counter) Measures the Economy".

There are substantial challenges to be overcome in any attempt to measure the knowledge society\economy, at both the theoretical and the methodological level. A more consistent set of definitions is required, as are more robust measures derived from theory rather than from what is currently or conveniently available. In order to identify the size and composition of the KBE one inevitably faces the issue of quantifying its extent and composition. Economists and national statistical organisations are naturally drawn to the workhorse of the ‘System of National Accounts’ as a source of such data. Introduced during WWII as a measure of wartime production capacity, the change in Gross Domestic Product (GDP) has become widely used as a measure of economic growth. However, GDP has significant difficulties in interpretation and usage (especially as a measure of wellbeing), which have led to the development of both ‘satellite accounts’ - additions to the original system to handle issues such as the ‘tourism sector’, ‘transitional economies’ and the ‘not-for-profit sector’ - and alternative measures, for example the Human Development Index and Gross National Happiness. GDP is simply a gross tally of products and services bought and sold, with no distinction between transactions that add to wellbeing and those that diminish it. It assumes, by definition, that every monetary transaction adds to wellbeing. Organisations like the ABS and OECD have adopted certain implicit\explicit definitions, typically of the Information Economy type, and mapped these ideas into a strong emphasis on the impacts and consequences of ICTs. The website (http://www.oecd.org/sti/information-economy) for the OECD’s Information Economy Unit states that it:
“. . . examines the economic and social implications of the development, diffusion and use of ICTs, the Internet and e-business. It analyses ICT policy frameworks shaping economic growth, productivity, employment and business performance. In particular, the Working Party on the Information Economy (WPIE) focuses on digital content, ICT diffusion to business, global value chains, ICT-enabled off shoring, ICT skills and employment and the publication of the OECD Information Technology Outlook.”
Furthermore, the OECD’s Working Party on Indicators for the Information Society has
“. . . agreed on a number of standards for measuring ICT. They cover the definition of industries producing ICT goods and services (the “ICT sector”), a classification for ICT goods, the definitions of electronic commerce and Internet transactions, and model questionnaires and methodologies for measuring ICT use and e-commerce by businesses, households and individuals. All the standards have been brought together in the 2005 publication, Guide to Measuring the Information Society . . . “ (http://www.oecd.org/document/22/0,3343,en_2649_201185_34508886_1_1_1_1,00.html)
The whole emphasis is on ICTs. For example, the chapter headings of the OECD’s “Guide to Measuring the Information Society” show that its major concern is with ICTs. Chapter 2 covers ICT products; Chapter 3 deals with ICT infrastructure; Chapter 4 concerns ICT supply; Chapter 5 looks at ICT demand by businesses; while Chapter 6 covers ICT demand by households and individuals. As will be shown below, several authors have discussed the requirements for, and problems with, the measurement of the knowledge\information economy. As noted above, most of the data on which measures of the knowledge economy are based come from the national accounts of the various countries involved. This raises the question of whether those accounts are suitably designed for the purpose. A number of authors suggest that in fact they are not. Peter Howitt argues that:
“. . . the theoretical foundation on which national income accounting is based is one in which knowledge is fixed and common, where only prices and quantities of commodities need to be measured. Likewise, we have no generally accepted empirical measures of such key theoretical concepts as the stock of technological knowledge, human capital, the resource cost of knowledge acquisition, the rate of innovation or the rate of obsolescence of old knowledge” (Howitt 1996: 10).
Howitt goes on to make the case that because we cannot correctly measure the inputs to, and the outputs of, the creation and use of knowledge, our traditional measures of GDP and productivity give a misleading picture of the state of the economy. Howitt further claims that the failure to develop a separate investment account for knowledge, in much the same manner as we do for physical capital, results in much of the economy’s output being missed by the national income accounts.
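Howitt's point about the missing knowledge-investment account can be sketched with toy numbers (entirely hypothetical, not from Howitt's paper). When a firm's spending on knowledge creation is expensed as an intermediate input, it nets out of measured output; when it is capitalised, as spending on physical capital is, it counts as production of an asset and enters GDP:

```python
# Toy illustration (hypothetical numbers): how expensing knowledge
# spending, rather than capitalising it, shrinks measured GDP.

final_sales = 1000       # value of final goods and services sold
knowledge_spending = 80  # firms' spending on R&D / knowledge creation

# Expensed: knowledge spending is treated as an intermediate input,
# so it is netted out and adds nothing to measured output.
gdp_expensed = final_sales

# Capitalised: knowledge spending counts as production of a capital
# asset, just as investment in plant and machinery does.
gdp_capitalised = final_sales + knowledge_spending

print(gdp_expensed)     # 1000
print(gdp_capitalised)  # 1080
```

On these made-up figures the whole 80 of knowledge output simply vanishes from the expensed measure, which is the sense in which "much of the economy's output" is missed.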

In Carter (1996) six problems in measuring the knowledge economy are identified:
1) The properties of knowledge itself make measuring it difficult,
2) Qualitative changes in conventional goods: the knowledge component of a good or service can change, making it difficult to evaluate “levels of output” over time,
3) Changing boundaries of producing units: for firms within a knowledge economy, the boundaries between firms and markets are becoming harder to distinguish,
4) Changing externalities and the externalities of change: spillovers are increasingly important in a knowledge economy,
5) Distinguishing ‘meta-investments’ from the current account: some investments are general-purpose investments in the sense that they allow all employees to be more efficient,
6) Creative destruction and the “useful life” of capital: knowledge can become obsolete very quickly, and as it does so the value of the old stock drops to zero.
Carter argues that these issues make it problematic to measure knowledge at the level of the individual firm. This in turn makes it difficult to measure knowledge at the national level, since individual firms’ accounts are the basis for the aggregate statistics, and any inaccuracies in the firms’ accounts will compromise the national accounts.

Haltiwanger and Jarmin (2000) examine the data requirements for the proper measurement of the information economy. They point out that changes are needed in the statistical accounts which countries use if we are to deal with the information\knowledge economy. They begin by noting that improved measurement of many “traditional” items in the national accounts is crucial to a full understanding of the impact of information technology (IT) on the economy. It is only by relating changes in traditional measures such as productivity and wages to the quality and use of IT that a comprehensive assessment of IT’s economic impact can be made. For them, three main areas related to the information economy require attention:
1) The investigation of the impact of IT on key indicators of aggregate activity, such as productivity and living standards,
2) The impact of IT on labour markets and income distribution and
3) The impact of IT on firm and on industry structures.
Haltiwanger and Jarmin outline five areas where good data are needed:
1) Measures of the IT infrastructure,
2) Measures of e-commerce,
3) Measures of firm and industry organisation,
4) Demographic and labour market characteristics of individuals using IT, and
5) Price behaviour.
Moulton (2000) asks what improvements we can make to the measurement of the information economy. In Moulton’s view additional effort is needed on price indices, and better concepts and measures of output are needed for financial and insurance services and other “hard-to-measure” services. Just as serious are the problems of measuring changes in the real output and prices of the industries that intensively use computer services. In some cases output, even if defined, is not directly priced and sold but takes the form of implicit services which at best have to be indirectly measured and valued. How to do so is not obvious. In the information economy, additional problems arise. The provision of information is a service which in some situations is provided at little or no cost via media such as the web; thus on the web there may be less of a connection between information provision and business sales. The dividing line between goods and services becomes fuzzier in the case of e-commerce. When Internet prices differ from those of brick-and-mortar stores, do we need different price indices for the different outlets? The information economy may also affect the growth of business-to-consumer sales, new business formation and cross-border trade, phenomena which standard government surveys may not fully capture. Meanwhile, the availability of IT hardware and software means the variety and nature of the products being provided change rapidly. Moulton also argues that the measures of the capital stock used need to be strengthened, especially for high-tech equipment. He notes that one issue with measuring the effects of IT on the economy is that IT often enters the production process in the form of capital equipment. Much of the data entering inventory and cost calculations is rather meagre and needs to be expanded to improve capital stock estimates.
Yet another issue with the capital stock measure is that a number of the components of capital are not completely captured by current methods, an obvious example being intellectual property. Research and development and other intellectual property should be treated as capital investment, though they currently are not. In addition to all this, Moulton argues that the increased importance of electronic commerce means that the economic surveys used to capture its effects need to be expanded and updated.

In Peter Howitt’s view there are four main measurement problems for the knowledge economy:
1) The “knowledge-input problem”,
2) The “knowledge-investment problem”,
3) The “quality improvement problem”,
4) The “obsolescence problem”.
To deal with these problems Howitt makes a call for better data. But it’s not clear that better data alone are the answer, either to Howitt’s problems or to the other issues outlined here. Without a better theory of what the “knowledge economy” is, and the use of that theory to guide changes to the whole national accounting framework, it is far from obvious that much improvement can be expected in the current situation.

One simple question is: to which industry or industries, and\or sector or sectors, of the economy can we tie knowledge\information production? When considering this question several problems arise. One is that the “technology” of information creation, transmission and communication pervades all human activities and so cannot fit easily into the national accounts categories. It is language, art, shared thought, and so on, not just the production of a given quantifiable commodity. Another issue is that because ICT exists along several different quantitative and qualitative dimensions, production cannot simply be added up. In addition, if much of the knowledge in society is tacit, known only to individuals, then it may not be possible to measure it in any meaningful way. If, on the other hand, knowledge is embedded in an organisation via organisational routines - see Becker (2004) for a review of this literature - then again it may not be measurable. Organisational routines may allow the knowledge of individual agents to be efficiently aggregated, much as markets aggregate information, even though no one person has a detailed understanding of the entire operation. In this sense, the organisation “possesses” knowledge which may not exist at the level of the individual member of the organisation. Indeed if, as Hayek can be interpreted as saying, much of the individual knowledge used by the organisation is tacit, it may not even be possible for one person to obtain the knowledge embodied in a large corporation.

There has also been considerable effort made to measure the information\knowledge society by national and international organisations such as UNESCO, the UN and the EU. That these efforts differ in their outcomes reflects, to a certain degree, different understandings of what the knowledge society is and thus different ways of modelling it. Some documents follow the process of knowledge production in sorting out indicators and themes, and tend to include measures of i) prerequisites for knowledge production (information infrastructure, knowledge, skills and human capital) and ii) knowledge production (R&D) itself. For example, in “Advancement of the Knowledge Society: Comparing Europe, the US and Japan” (European Foundation for the Improvement of Living and Working Conditions 2004), all indicators are sorted by whether they measure a prerequisite for the advancement of the knowledge society or whether they measure the outcomes of a knowledge society.

Other documents use different criteria to select indicators. The UN model initiated in “Understanding Knowledge Societies in Twenty Questions and Answers with the Index of Knowledge Societies” (Department of Economic and Social Affairs 2005), for example, categorises indicators along three dimensions: assets, advancement and foresightedness. When putting together its “Knowledge Society Barometer” (European Foundation for the Improvement of Living and Working Conditions 2004a), the European Foundation for the Improvement of Living and Working Conditions considers notions such as information, knowledge, knowledge-value societies and sustainable development as parts of a ‘jigsaw puzzle’ which makes up its knowledge society framework. This seems to indicate that the knowledge society is viewed as the result of integrating the concerns of previous conceptualisations of societies. The different frameworks thus also suggest the influence of organisational agendas\priorities in defining the knowledge society.

Despite the difference in frameworks and indicators, there are some common themes. These include human capital, innovation, ICT development and the context dimension. The human capital theme includes variables on the levels of people’s skills and education, which reflect the size of the pool of educated people. Included in the innovation theme are variables showing innovation investment, procedures, capacities and networks. There are diverse indicators under the ICT theme; yet they can be categorised as either resources or access. The former refers to the information infrastructure while the latter relates to the accessibility of information in people’s lives and work. The context dimension always includes variables on the socio-economic, political and institutional conditions for knowledge production.

Obviously, these themes are crucial for measuring the knowledge society. However, these measures are not without their pitfalls. One basic problem is caused by the “knowledge problem”. In some cases knowledge is understood only partially, and information and knowledge are treated as interchangeable terms. As a result, some documents focus entirely on measuring the information economy while talking about the knowledge economy and society. Other documents mention the difference between tacit and explicit knowledge, the distinction between information and knowledge, and thus the distinction between the information society and the knowledge society, but fail, owing to data availability, to employ variables that reflect these distinctions. Among these documents we do see a gradually shifting understanding of, and discourse on, the knowledge society. For example, “UNESCO World Report: Towards Knowledge Societies” could be seen as a leading document in initiating the paradigm shift from the information society to the knowledge society. It acknowledges that “the idea of the information society is based on technological breakthroughs. The concept of knowledge societies encompasses much broader social, ethical and political dimensions” (UNESCO 2005: 17). At the same time, another document prepared by UNESCO on statistical challenges shows the difficulties in identifying the relevant data within the existing measurement frameworks.

In addition, the knowledge problem raises other issues to do with the choice of indicators in each of the major themes. For example, human capital is measured according to people’s formal education and skills, based on human capital approaches. This inevitably ignores people’s tacit knowledge and the knowledge held between people. There are a number of sociological studies which show that even within the economic domain people are not simply rational actors: their economic performance is significantly affected by the social, cultural and political structures in which they are embedded.

Similarly, the measurement of innovation in these documents seems to focus mainly on the production of scientific knowledge in laboratories. This is inconsistent with the Mode-2 knowledge production described by Gibbons (1994), in which science and society co-evolve. The measurement of innovation also fails to distinguish the role of inventions from that of innovations. Consequently, it is difficult to see how these documents can measure the economic value of innovation and at the same time attach a social value to it. Regarding ICTs, the widely accepted practice seems to be to enumerate the physical infrastructure or, at best, measure access to information. There is a misunderstanding here about the relationship between technology and human beings. It is not technology but human beings and their interactions that constitute so-called society and its institutions. Thus, the function of ICTs lies not only in their capacity to provide additional new connections but also in their potential for opening or closing forms of personal, social and economic capacities, relationships and power-plays (Dutton 2004). Mansell and Wehn’s (1998) INEXSK approach is a valuable endeavour to integrate the dimensions of human beings, their knowledge and ICTs into the measurement of the knowledge society.

Another problem with measures of the knowledge society is the confusion of the knowledge economy with the knowledge society. Generally, there are two kinds of documents on the measurement of the knowledge society. One group focuses on measuring the knowledge economy, although they mention the concept of the knowledge society; the foci of measurement are human capital, innovation and ICT development. A representative document is “Measuring a Knowledge-based Economy and Society: An Australian Framework”, prepared by the Australian Bureau of Statistics. The document’s author claims that this framework
“does not attempt to cover all knowledge in the economy and society . . . [and] offer a comprehensive treatment of a knowledge-based society although it does address those social elements which potentially affect economic change or are affected by it” (Australian Bureau of Statistics 2002: 15)
Another group of documents considers both the economic and technological features and the social conditions and outcomes of the knowledge society. Two representative documents here are “Advancement of the Knowledge Society: Comparing Europe, the US and Japan” (European Foundation for the Improvement of Living and Working Conditions 2004) and the “Knowledge Society Barometer” (European Foundation for the Improvement of Living and Working Conditions 2004a), both published by the European Foundation for the Improvement of Living and Working Conditions. These two documents contain some variables reflecting social issues such as social inclusion, quality of life and gender equality. However, they fail to treat the economic and the social as equally important, integrated components of the measurement framework. Instead, the social is still treated as the ‘leftover’ after the ‘significant’ and ‘measurable’ components for national accounting have been identified.

In light of these issues it would seem that a necessary first step along the path towards the correct measurement of the knowledge society\economy would entail the development of a theory of the knowledge society\economy. Such a theory would tell us, among other things, what the knowledge economy is, how - if at all - the knowledge economy\knowledge society differ, how they change and grow, and what the important measurable characteristics are. Based on this, a measurement framework could be developed to deal with, at least some of, the problems outlined above.
All the references in the above can be found in Oxley, Walker, Thorns and Wang (2008).
  • Oxley, Les, Walker, Paul, Thorns, David, Wang, Hong (2008). 'The knowledge economy/society: the latest example of “Measurement without theory”?', The Journal of Philosophical Economics, II:1 , 20-54.

Independent review of UK economic statistics

Sir Charles Bean discusses the challenges that the digital revolution, and the resulting "knowledge economy", poses for the measurement of economic statistics. The aim is to ensure that the statistics - and the methodologies used to construct them - evolve to better reflect the modern, complex dynamics of Britain’s economy.


Of course the issues raised are not UK-specific; all countries face the same measurement problems.

Discussion of such problems is not new. For example, Oxley, Walker, Thorns and Wang (2008) argued that a lack of good theory is holding back measurement: just collecting more data is not enough. Many authors do not see that the measurement problems the "knowledge economy" brings cannot be resolved simply by gathering better data. Without well thought out theory, the data we would be seeking would be yet another “known unknown”.

The abstract reads:
The world has embraced a set of concepts (knowledge driven growth) which are seen as the ‘core of future growth and wellbeing’ without any commonly agreed notion of what they are, how they might be measured, and crucially therefore, how they actually do (or might) affect economic growth and social wellbeing. The theory of how the mechanism works lacks important detail.

Ref.:
  • Oxley, Les, Walker, Paul, Thorns, David, Wang, Hong (2008). 'The knowledge economy/society: the latest example of “Measurement without theory”?', The Journal of Philosophical Economics, II:1, 20-54.

Tuesday, 29 March 2016

Economists on trade deals

This comes from Greg Mankiw's blog:
The new IGM Panel poll of prominent economists asks about this proposition:
An important reason why many workers in Michigan and Ohio have lost jobs in recent years is because US presidential administrations over the past 30 years have not been tough enough in trade negotiations.
Only 5 percent agree, while 64 percent disagree. (The rest were uncertain or did not answer.) A previous poll asked about this statement:
Past major trade deals have benefited most Americans.
On this one, 83 percent agreed, and zero percent disagreed.
With regard to the first poll mentioned, David Cutler, who "agreed" with the statement, made an interesting comment,
The phrasing implies that any reduction in jobs is bad. Some shifts in employment are valuable (e.g., fewer sweatshop jobs in the US).
So there may be good even in the bad.

When the responses to the first poll are weighted by each expert's confidence, the share who "agree" falls to 3%. In both the weighted and unweighted responses, 0% "strongly agree". The weighted "disagree" share is 86%. So those who disagree are confident in their position.
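The mechanics of confidence weighting can be sketched with a toy example (the votes and confidence scores below are made up, not the actual IGM panel data): each response counts in proportion to the respondent's self-reported confidence, so a weakly held "agree" moves the tally less than a strongly held "disagree".

```python
# Hypothetical sketch of confidence weighting in an IGM-style poll:
# each vote is weighted by the respondent's confidence score.

responses = [
    # (vote, confidence on a 1-10 scale) -- invented data
    ("agree", 2), ("agree", 1),
    ("disagree", 9), ("disagree", 8), ("disagree", 10),
    ("uncertain", 3),
]

total_weight = sum(conf for _, conf in responses)

def weighted_share(vote):
    """Confidence-weighted share of respondents choosing `vote`."""
    return sum(conf for v, conf in responses if v == vote) / total_weight

# Unweighted, "agree" is 2 of 6 responses (33%); weighted, it shrinks
# because the agree votes were held with low confidence.
print(round(weighted_share("agree"), 2))     # 0.09
print(round(weighted_share("disagree"), 2))  # 0.82
```

This is the same pattern as in the post: low-confidence agreement shrinks under weighting while high-confidence disagreement grows.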

So the vast majority of economists see advantages in trade deals, even if non-economists don't.

Do politicians affect economic outcomes?

For the US at least the answer, in one sense, is yes: the US economy has performed better under Democratic presidents than Republican ones.

But why?

A new paper in the American Economic Review (Vol. 106, Issue 4, April 2016) looks at this question.
Presidents and the US Economy: An Econometric Exploration
Alan S. Blinder and Mark W. Watson

The US economy has performed better when the president of the United States is a Democrat rather than a Republican, almost regardless of how one measures performance. For many measures, including real GDP growth (our focus), the performance gap is large and significant. This paper asks why. The answer is not found in technical time series matters nor in systematically more expansionary monetary or fiscal policy under Democrats. Rather, it appears that the Democratic edge stems mainly from more benign oil shocks, superior total factor productivity (TFP) performance, a more favorable international environment, and perhaps more optimistic consumer expectations about the near-term future. (JEL D72, E23, E32, E65, N12, N42)
When you look at the things that drive the Democratic advantage - benign oil shocks, superior total factor productivity (TFP) performance, a more favorable international environment and more optimistic consumer expectations - then, apart from maybe the more optimistic consumer expectations, it is not obvious how the sitting president could affect the other drivers of economic performance.

So in this sense the answer to the question is no. It looks more like luck than good management on the part of politicians that drives economic success.

Sunday, 20 March 2016

Israel Kirzner on the history and importance of the Austrian Theory of the Market Process

Mercatus Center Academic & Student Programs recently hosted the 2016 Advanced Austrian Seminar at which Dr. Israel M. Kirzner, Professor Emeritus of Economics at New York University, delivered the keynote lecture, “The History and Importance of the Austrian Theory of the Market Process.” In this talk, Professor Kirzner examines the history of thought in Austrian economics, specifically focusing on the developments in the 20th century, to develop a link between the Austrian theory of the market process and the notion of subjectivism as the central idea in Austrian economics.

David Friedman & Bob Murphy - The Chicago Vs. Austrian School Debate

David Friedman and Robert Murphy will compare and contrast the different principles and methods of the Chicago and Austrian schools of economics -- and their impact on ethics and other social sciences.

Peter Boettke on Austrian Economics in the 21st Century at the Legatum Institute

The Legatum Institute's Economics of Prosperity programme hosted a breakfast discussion with Peter Boettke, one of the world’s most prominent Austrian economists. In this short video with the Legatum Institute's Shanker Singham, Boettke discusses the US elections (in particular the rise of populist candidates such as Bernie Sanders and Donald Trump), the issues of inequality and economic distortion, which Boettke believes have come about because of the crony capitalism spawned by a regulatory system captured by the elites, and why 'innovation' could be the 'magic bullet' for improving human capital.