Showing posts with label knowledge economy. Show all posts

Monday, 1 August 2016

High human capital and worker-owned firms

In a recent paper in the Journal of the Knowledge Economy I argued, via a simple model of the firm based on the reference point approach to the firm, that having a firm based on a homogeneous group of human capital leads to a different organisational form than that of a firm which involves a heterogeneous group of human capital.

To illustrate this idea I considered the ownership structure of professional sports teams. Heterogeneity in playing talent (playing talent being the human capital here), and thus in earning potential, is a disincentive to the formation of a worker cooperative, an organisation which normally involves (rough) equality in payment, since those players with the greatest earning potential, and thus the largest outside options, will transfer away from the cooperative to maximise their income streams. Thus a worker-owned team would have few, if any, star players, a handicap in the winner-takes-all world of professional sports.

But this argument could be applied to any worker-owned firm in which there is a range of worker ability. My argument implies that, given a relatively flat compensation structure, high-ability workers would be more likely to leave a worker-owned firm.
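The logic can be sketched with a toy calculation (the numbers are my own, purely illustrative, and this is not the model from the paper): if a cooperative pays every member the average product while outside firms bid something close to a worker's individual product, then every above-average member has an outside option that beats the cooperative wage.

```python
# Toy sketch (illustrative numbers only): a cooperative pays every member
# the average product, while outside firms bid roughly a worker's own
# product. Any above-average worker then has an outside offer that beats
# the cooperative wage, so the most able members are the ones who leave.

def coop_exits(abilities):
    """Return the members predicted to quit an equal-pay cooperative."""
    coop_pay = sum(abilities) / len(abilities)     # flat, egalitarian wage
    return [a for a in abilities if a > coop_pay]  # outside offer > coop pay

team = [50, 60, 70, 80, 140]   # four journeymen and one star
print(coop_exits(team))        # -> [140]: only the star has reason to go
```

The same arithmetic shows why a perfectly homogeneous workforce faces no such pressure: when everyone's product equals the average, no outside offer dominates.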

In a new paper in the latest issue of the Economic Journal, "Equality Under Threat by the Talented: Evidence from Worker-Managed Firms", Gabriel Burdín provides evidence on just this point.

The abstract of the paper reads:
Does workplace democracy engender greater pay equality? Are high-ability individuals more likely to quit egalitarian organisational regimes? The article revisits this long-standing issue by analysing the interplay between compensation structure and quit behaviour in the distinct yet underexplored institutional setting of worker-managed firms. The analysis is based on novel administrative data sources, which allow constructing a simple ordinal measure of the workers' ability type. The article's key findings are that worker-managed firms have a more compressed compensation structure than conventional firms, and high-ability members are more likely than other members to exit. (Emphasis added.)
Thus Burdín finds what you would expect: high-ability workers (players) are more likely to exit worker-owned firms (teams), leaving such firms (teams) at a disadvantage compared to conventional, employee-based firms.

Wednesday, 20 July 2016

How well does GDP measure the digital economy?

A question asked by Timothy Taylor at his Conversable Economist blog. Taylor writes,
Digital technologies aren't just changing the way existing companies communicate and keep records, but are creating new kinds of companies (think Uber, AirBnB, or Amazon) and products (think "free" products like email and web search, or an app like Pokemon Go). Can the old-style methods of measuring GDP keep up? Nadim Ahmad and Paul Schreyer of the OECD tackle this question in "Are GDP and Productivity Measures Up to the Challenges of the Digital Economy?" which appears in the Spring 2016 issue of International Productivity Monitor, which in turn is published by the Ontario-based Centre for the Study of Living Standards. Perhaps a little surprisingly, their overall message is upbeat. Here's the abstract:
"Recent years have seen a rapid emergence of disruptive technologies with new forms of intermediation, service provision and consumption, with digitalization being a common characteristic. These include new platforms that facilitate peer-to-peer transactions, such as AirBnB and Uber, new activities such as crowd sourcing, a growing category of the ‘occasional self-employed’ and prevalence of ‘free’ media services, funded by advertising and ‘Big data’. Against a backdrop of slowing rates of measured productivity growth, this has raised questions about the conceptual basis of GDP, and whether current compilation methods are adequate. This article frames the discussion under an umbrella of the Digitalized Economy, covering also statistical challenges where digitalization is a complicating feature such as the measurement of international transactions and knowledge based assets. It delineates between conceptual and compilation issues and highlights areas where further investigations are merited. The overall conclusion is that, on balance, the accounting framework for GDP looks to be up to the challenges posed by digitalization. Many practical measurement issues remain, however, in particular concerning price changes and where digitalization meets internationalization."
Contrary to this "upbeat" assessment I would argue that there are reasons to think that GDP, as we know it, does not capture much of what happens within the digital/knowledge/information economy, call it what you will. There are substantial challenges to be overcome in any attempt to measure such an economy, at both the theoretical and the methodological level.

To begin with, a more consistent set of definitions is required, as are more robust measures derived from theory rather than from whatever data is currently or conveniently available. Identifying the size and composition of the knowledge based economy inevitably raises the issue of how to quantify it. Economists and national statistical organisations are naturally drawn to the workhorse of the ‘System of National Accounts’ as a source of such data. Introduced during World War II as a measure of wartime production capacity, the change in (real) Gross Domestic Product (GDP) has become widely used as a measure of economic growth.

However, GDP has significant difficulties in interpretation and usage (especially as a measure of well being), which has led to the development of both ‘satellite accounts’ - additions to the original system to handle issues such as the ‘tourism sector’, ‘transitional economies’ and the ‘not-for-profit sector’ - and alternative measures, for example, the Human Development Indicator and Gross National Happiness. GDP is simply a gross tally of products and services bought and sold, with no distinction between transactions that add to well being and those that diminish it. It assumes that every monetary transaction adds to well being, by definition.

Organisations like the Australian Bureau of Statistics and the OECD have adopted certain implicit/explicit definitions, typically of the Information Economy type, and mapped these ideas into a strong emphasis on the impacts and consequences of ICTs. The website (http://www.oecd.org/sti/information-economy) for the OECD’s Information Economy Unit states that it:
“[...] examines the economic and social implications of the development, diffusion and use of ICTs, the Internet and e-business. It analyses ICT policy frameworks shaping economic growth productivity, employment and business performance. In particular, the Working Party on the Information Economy (WPIE) focuses on digital content, ICT diffusion to business, global value chains, ICT-enabled off shoring, ICT skills and employment and the publication of the OECD Information Technology Outlook.”
Furthermore, the OECD’s Working Party on Indicators for the Information Society has
“[...] agreed on a number of standards for measuring ICT. They cover the definition of industries producing ICT goods and services (the “ICT sector”), a classification for ICT goods, the definitions of electronic commerce and Internet transactions, and model questionnaires and methodologies for measuring ICT use and e-commerce by businesses, households and individuals. All the standards have been brought together in the 2005 publication, Guide to Measuring the Information Society [ . . . ]” (http://www.oecd.org/document/22/0,3343,en_2649_201185_34508886_1_1_1_1,00.html).
The whole emphasis is on ICTs. For example, the OECD’s “Guide to Measuring the Information Society” has chapter headings that show that their major concern is with ICTs. Chapter 2 covers ICT products; Chapter 3 deals with ICT infrastructure; Chapter 4 concerns ICT supply; Chapter 5 looks at ICT demand by businesses; while Chapter 6 covers ICT demand by households and individuals.

As will be shown below, several authors have discussed the requirements for, and problems with, the measurement of the knowledge/information economy. As noted above, most of the data on which measures of the knowledge economy are based come from the national accounts of the various countries involved. This raises the question of whether those accounts are suitably designed for the purpose, and a number of authors suggest that in fact they are not. Peter Howitt argues that:
“[...] the theoretical foundation on which national income accounting is based is one in which knowledge is fixed and common, where only prices and quantities of commodities need to be measured. Likewise, we have no generally accepted empirical measures of such key theoretical concepts as the stock of technological knowledge, human capital, the resource cost of knowledge acquisition, the rate of innovation or the rate of obsolescence of old knowledge.” (Howitt 1996: 10).
Howitt goes on to make the case that because we cannot correctly measure the inputs to, and the outputs of, the creation and use of knowledge, our traditional measures of GDP and productivity give a misleading picture of the state of the economy. Howitt further claims that the failure to develop a separate investment account for knowledge, in much the same manner as we do for physical capital, results in much of the economy’s output being missed by the national income accounts.

In Carter (1996) six problems in measuring the knowledge economy are identified:
  1. The properties of knowledge itself make measuring it difficult,
  2. Qualitative changes in conventional goods: the knowledge component of a good or service can change making it difficult to evaluate their ‘levels of output’ over time,
  3. Changing boundaries of producing units: for firms within a knowledge economy, the boundaries between firms and markets are becoming harder to distinguish,
  4. Changing externalities and the externalities of change: spillovers are increasingly important in a knowledge economy,
  5. Distinguishing ‘meta-investments’ from the current account: some investments are general purpose investments in the sense that they allow all employees to be more efficient, and
  6. Creative destruction and the ‘useful life’ of capital: knowledge can become obsolete very quickly and as it does so the value of the old stock drops to zero.
Carter argues that these issues make it problematic to measure knowledge at the level of the individual firm, and hence at the national level as well, since individual firms’ accounts are the basis for the aggregate statistics: any inaccuracies in the firms’ accounts will compromise the national accounts.

Haltiwanger and Jarmin (2000) examine the data requirements for the better measurement of the information economy. They point out that changes are needed in the statistical accounts which countries use if we are to deal with the information/knowledge economy. They begin by noting that improved measurement of many “traditional” items in the national accounts is crucial if we are to understand fully Information Technology’s (IT’s) impact on the economy. It is only by relating changes in traditional measures such as productivity and wages to the quality and use of IT that a comprehensive assessment of IT’s economic impact can be made. For them, three main areas related to the information economy require attention:

  1. The investigation of the impact of IT on key indicators of aggregate activity, such as productivity and living standards,
  2. The impact of IT on labour markets and income distribution, and
  3. The impact of IT on firm and industry structures.
Haltiwanger and Jarmin outline five areas where good data are needed:
  1. Measures of the IT infrastructure,
  2. Measures of e-commerce,
  3. Measures of firm and industry organisation,
  4. Demographic and labour market characteristics of individuals using IT, and
  5. Price behaviour.
In Moulton (2000) the question is asked as to what improvements we can make to the measurement of the information economy. In Moulton’s view additional effort is needed on price indices, and better concepts and measures of output are needed for financial and insurance services and other “hard-to-measure” services. Just as serious are the problems of measuring changes in real output and prices of the industries that intensively use computer services. In some cases output, even if defined, is not directly priced and sold but takes the form of implicit services which at best have to be indirectly measured and valued. How to do so is not obvious.

In the information economy, additional problems arise. The provision of information is a service which in some situations is provided at little or no cost via media such as the web. Thus on the web there may be less of a connection between information provision and business sales. The dividing line between goods and services becomes fuzzier in the case of e-commerce. When Internet prices differ from those of brick-and-mortar stores, do we need different price indices for the different outlets? The information economy may also affect the growth of business-to-consumer sales, new business formation and cross-border trade, and standard government surveys may not fully capture these phenomena. Meanwhile the availability of IT hardware and software means that the variety and nature of the products being provided change rapidly.

Moulton also argues that the measures of the capital stock used need to be strengthened, especially for high-tech equipment. He notes that one issue with measuring the effects of IT on the economy is that IT often enters the production process in the form of capital equipment. Much of the data entering inventory and cost calculations are rather meagre and need to be expanded to improve capital stock estimates.
Yet another issue with the capital stock measure is that a number of the components of capital are not completely captured by current methods, an obvious example being intellectual property. Research and development and other intellectual property should be treated as capital investment, though they currently are not. In addition Moulton argues that the increased importance of electronic commerce means that the economic surveys used to capture its effects need to be expanded and updated.
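The R&D point can be illustrated with a stylised value-added calculation (the figures below are invented for illustration): whether a firm's purchases of R&D services are expensed as an intermediate input or capitalised as investment changes measured value added, and hence measured GDP.

```python
# Stylised illustration (invented numbers) of the R&D accounting point:
# if purchased R&D services are expensed, they are netted out of value
# added like any other intermediate input; if they are capitalised, they
# count as investment and measured value added (and so GDP) is higher.

def firm_value_added(sales, other_intermediate, rnd_purchases, capitalise_rnd):
    """Firm value added under the two accounting treatments of R&D."""
    if capitalise_rnd:
        # R&D reclassified as gross fixed capital formation: no longer
        # subtracted as intermediate consumption.
        return sales - other_intermediate
    # R&D expensed: treated like any other intermediate purchase.
    return sales - other_intermediate - rnd_purchases

print(firm_value_added(1000, 400, 100, capitalise_rnd=False))  # 500
print(firm_value_added(1000, 400, 100, capitalise_rnd=True))   # 600
```

On these numbers, expensing understates the firm's measured contribution to GDP by exactly its R&D outlay, which is the mechanism behind Moulton's (and Howitt's) call for a separate knowledge investment account.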

In Peter Howitt’s view there are four main measurement problems for the knowledge economy:
  1. The “knowledge-input problem”. That is, the resources devoted to the creation of knowledge are underestimated by standard measures.
  2. The “knowledge-investment problem”. The output of knowledge resulting from formal and informal R&D activities is typically not measured.
  3. The “quality improvement problem”. Quality improvements go unmeasured.
  4. The “obsolescence problem”. No account is taken of the depreciation of the stock of knowledge (and physical capital) due to the creation of new knowledge.
To deal with these problems Howitt calls for better data. But it is not clear that better data alone can solve either Howitt’s problems or the other issues outlined here. Without a better theory of what the “knowledge economy” is, and the use of this theory to guide changes to the whole national accounting framework, it is far from obvious that much improvement on the current situation can be expected.

One simple theoretical question is: to which industry or industries and/or sector or sectors of the economy can we tie knowledge/information production? When considering this question several problems arise. One is that the “technology” of information creation, transmission and communication pervades all human activities, and so cannot fit easily into the national accounts categories. It is language, art, shared thought, and so on; it is not just the production of a given quantifiable commodity. Another issue is that because ICT exists along several different quantitative and qualitative dimensions, its production cannot simply be added up. In addition, if much of the knowledge in society is tacit, known only to individuals, then it may not be possible to measure it in any meaningful way. Also, if knowledge is embedded in an organisation via organisational routines then again it may not be measurable. Organisational routines may allow the knowledge of individual agents to be efficiently aggregated, much as markets aggregate information, even though no one person has a detailed understanding of the entire operation. In this sense the organisation “possesses” knowledge which may not exist at the level of any individual member. Indeed if, as Hayek can be interpreted as saying, much of the individual knowledge used by the organisation is tacit, it may not even be possible for one person to obtain the knowledge embodied in a large corporation.

As noted above, Carter (1996) emphasises that it is problematic to measure knowledge at the national level in part because it is difficult to measure knowledge at the level of the individual firm. Part of the reason for this is that none of the orthodox theories of the firm offers a theory of the “knowledge firm” of the kind needed to guide our measurement.

Thus many of the measurement problems of the "knowledge economy" are rooted in the fact that we don't have a good theory of the "knowledge economy" or the "knowledge firm". Without such theories, calls for better data miss the point: "better" data collection alone is not going to improve the measurement of the "digital/knowledge/information economy".

Wednesday, 6 August 2014

Decorporatisation

or the knowledge economy, as some people may call it. At the Stumbling and Mumbling blog Chris Dillow coins a new word, decorporatisation. Chris writes,
In the day job, I've coined a newish word - decorporatization.

My chart shows what I mean. It shows that in recent years we haven't seen just a squeeze on wages, but also a squeeze on profits; the share of these in GDP has fallen recently. One reason for this is that the incomes of the self-employed - measured by the ONS as other incomes - have increased. This hasn't happened because the self-employed are raking it in, but because there are so many more of them.


This is what I mean by decorporatization; we're seeing a shift from corporate sector activity to the self-employed.

Some of this might be due to firms employing freelancers and one-man subcontractors rather than staff - perhaps because transactions costs have fallen. However, whilst this might have increased profits for particular firms, it hasn't increased them in aggregate.
And this is possible. Likely, I would say. But I would put forward another factor, the thing that New Zealand governments used to get very excited about: the knowledge economy. Remember the "knowledge wave" the last Labour government promised us but never seemed to deliver on?

At least insofar as the knowledge economy means an increase in the importance of human capital compared to nonhuman capital in determining the organisational form that businesses will take, it could help explain what Chris is seeing in the data. Rajan and Zingales (2003: 90) argue that
[h]uman capital is replacing inanimate assets as the most important source of corporate capabilities and value. In both their organizational structure and their promotion and compensation policies, large firms are becoming more like professional partnerships.
It is also likely that this change has increased the number of sole proprietorships and small partnerships being formed.

The important point here is that if your firm is based, in the main, around human capital then it is much easier and cheaper to set up than a firm based on nonhuman capital. There is no need to find, negotiate with and keep happy the external financiers who supply the funding required to purchase large amounts of nonhuman capital. With a human capital based firm the human capital owners are much more likely to be able to provide any nonhuman capital required without the need for external funding. For example, if you are a computer scientist using a PC to design small business accounting software, then your firm being a sole proprietorship or a small partnership is feasible, whereas if you are writing software for a supercomputer you are more likely to be an employee of whoever owns or rents the computer. That is, in the first case human capital is the major input to the firm's production process while in the second nonhuman capital plays a much greater role, and this leads to different organisational forms.

So what Chris is noting is an effect of a change in the importance of human capital to the production process, at least in some areas of the economy. More small firms are being set up because it is cheaper to do so; large amounts of expensive nonhuman capital are not necessary.

Monday, 28 March 2011

Technology use and employment protection

A question often asked with regard to productivity across countries is: why is the U.S. more productive than the E.U.? Studies have suggested that much of the difference can be explained by the wider use of information and communication technologies in the U.S. But this just raises the obvious question: why does the U.S. use these technologies more? A new column at VoxEU.org provides new evidence suggesting the answer may lie in differences in employment protection legislation.

The column, Employment protection and technology choice, by Eric Bartelsman, Joris de Wind and Pieter Gautier, notes that until the mid-1990s E.U. productivity had been converging towards U.S. productivity. But since then, U.S. productivity growth has accelerated and the U.S.-E.U. gap has widened. Robert Gordon is one economist who has noted this fact:
[...] since 1995 Europe has experienced a productivity growth slowdown while the United States has experienced a marked acceleration. As a result, just in the past eight years, Europe has already lost about one-fifth of its previous 1950-95 gain in output per hour relative to the United States. Starting from 71 percent of the U. S. level of productivity in 1870, Europe fell back to 44 percent in 1950, caught up to 94 percent in 1995, and has now fallen back to 85 percent. (Gordon 2007: 176).
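Gordon's "about one-fifth" can be checked directly from the figures in the quote (a quick back-of-the-envelope calculation, nothing more):

```python
# Back-of-the-envelope check of Gordon's "one-fifth" claim, using the
# relative productivity levels quoted above (Europe as a % of the US).
gain_1950_95 = 94 - 44     # Europe's relative gain, 1950-1995: 50 points
lost_since_1995 = 94 - 85  # relative decline after 1995: 9 points
print(lost_since_1995 / gain_1950_95)  # 0.18, i.e. roughly one-fifth
```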
One factor that has been put forward to explain this productivity difference is the production and use of information and communication technologies (ICT). Such activity is much lower in the E.U. than in the U.S.

Bartelsman, de Wind and Gautier continue,
Why has the adoption of the new ICT been much slower in the EU? Recent research in Brynjolfsson et al. (2008) and our latest paper (Bartelsman et al. 2010) provides evidence that the adoption of these new technologies is associated with an increase in the variance of firm productivity. For example, implementation of advanced business software like SAP and Oracle requires a new organisational structure and the outcome is inherently uncertain. The variance of firm productivity is therefore relatively large in sectors that intensively use ICT.

For a given firm, adopting a technology with risky outcomes is attractive because the benefits can be scaled up if the outcome is good, while firms can fire workers or exit if things go poorly. Essentially, the ability to close a production unit is a real option that bounds the downward risk.
But what is the role of labour market policy? Bartelsman, de Wind and Gautier explain that one major policy difference between Europe and the U.S. is that employment protection legislation is much stricter in Europe. They
[...] show that the employment share of risky (ICT-intensive) sectors is indeed smaller in the EU than in the US, and that, within Europe, high-protection countries have relatively smaller ICT-intensive sectors than low-protection countries. We then find that countries with strict legislation are relatively less productive.
Bartelsman, de Wind and Gautier go on to say,
In order to explore the mechanism and to establish how much of the US-EU productivity divergence can be explained by stricter employment legislation, we develop a two-sector matching model with endogenous technology choice, i.e. firms can choose between a safe sector with stable productivity and a risky sector with productivity subject to sizable shocks. In the absence of employment protection legislation, the risky sector is relatively attractive because firms have the option to fire workers which bounds the downward risk. Introducing legislation makes it less attractive to use risky technologies, so this establishes the negative relationship between employment protection and the size of the risky sector. Legislation also results in more labour hoarding, i.e. the productivity threshold below which a worker is fired is lower if legislation is stricter. Further, the size of the effect increases as the variance of the shocks in the risky sector increases. This explains why productivity growth is lower in high-protection countries in particular when new technologies with a high variance in profitability become available.
So no matter what your views of the origins of employment protection legislation, the research findings Bartelsman, de Wind and Gautier put forward clearly show that the economic costs of employment protection rise as the available technological opportunities change over time, while the benefits are unaffected.
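A minimal numerical sketch of the mechanism (my own stripped-down version, not the authors' two-sector matching model): the risky technology delivers a productivity shock, and the firm's option to fire bounds the downside at the firing cost. Raising the firing cost, a crude stand-in for stricter employment protection, lowers the expected profit of the risky technology relative to the safe one.

```python
# Stripped-down sketch (my parameters, not the paper's model): a firm
# chooses a safe technology with stable productivity or a risky one whose
# productivity is a shock. With the risky technology the firm keeps the
# worker (payoff p - WAGE) or fires and pays the firing cost, so the
# downside is bounded at -firing_cost. A higher firing cost (stricter
# employment protection) erodes this option value.
import random

random.seed(1)
WAGE = 1.0
SAFE_PROFIT = 0.3   # safe-sector productivity of 1.3, net of the wage

def expected_profit_risky(firing_cost, draws=100_000):
    """Monte Carlo estimate of expected profit from the risky technology."""
    total = 0.0
    for _ in range(draws):
        p = random.uniform(0.0, 2.4)          # sizable productivity shocks
        total += max(p - WAGE, -firing_cost)  # keep worker, or fire at a cost
    return total / draws

for f in (0.0, 0.5, 1.0):
    risky = expected_profit_risky(f)
    choice = "risky" if risky > SAFE_PROFIT else "safe"
    print(f"firing cost {f}: expected risky profit {risky:.2f} -> choose {choice}")
```

With no firing cost the risky technology dominates; as the firing cost rises the safe technology is chosen instead, mirroring the finding that high-protection countries have relatively smaller ICT-intensive (risky) sectors.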

If these research results are right then I can't help but think that they have serious implications for New Zealand: if we really want a "knowledge economy", and to catch up with Australia, then we should not ignore such findings.
  • Bartelsman, Eric J, Pieter A Gautier, and Joris de Wind (2010), “Employment Protection, Technology Choice, and Worker Allocation”, CEPR Discussion Paper 7806.
  • Brynjolfsson, E, A McAfee, M Sorell, and F Zhu (2008), “Scale without mass: business process replication and industry dynamics”, Harvard Business School Working Paper, 07–016.
  • Gordon, R. J. (2007). ‘Why was Europe Left at the Station When America’s Productivity Locomotive Departed?’ In Mary Gregory, Wiemer Salverda, and Ronald Schettkat (eds.), Services and Employment: Explaining the U.S.-European Gap, Princeton: Princeton University Press.

Sunday, 13 March 2011

Problems with measuring the "knowledge economy"

The previous posting on More on the productivity paradox highlighted the problem of the measurement of the modern, new or "knowledge economy". Much time and effort is expended by many national and international organisations in an attempt to measure the economy or economies of the world. While the measuring of the ‘standard’ economy is funny enough, when we move to the measurement of the ‘knowledge economy’ measurement goes from the mildly humorous to the outright hilarious. Most attempts to measure, or even define, the information or knowledge economy border on the farcical: the movie version should be called, "Mr Bean(counter) Measures the Economy".

There are substantial challenges to be overcome in any attempt to measure the knowledge economy, at both the theoretical and the methodological level. A more consistent set of definitions is required, as are more robust measures derived from theory rather than from whatever data is currently or conveniently available. Identifying the size and composition of the knowledge based economy inevitably raises the issue of how to quantify it. Economists and national statistical organisations are naturally drawn to the workhorse of the ‘System of National Accounts’ as a source of such data. Introduced during World War II as a measure of wartime production capacity, the change in (real) Gross Domestic Product (GDP) has become widely used as a measure of economic growth.

However, GDP has significant difficulties in interpretation and usage (especially as a measure of wellbeing), which has led to the development of both ‘satellite accounts’ - additions to the original system to handle issues such as the ‘tourism sector’, ‘transitional economies’ and the ‘not-for-profit sector’ - and alternative measures, for example, the Human Development Indicator and Gross National Happiness. GDP is simply a gross tally of products and services bought and sold, with no distinction between transactions that add to wellbeing and those that diminish it. It assumes that every monetary transaction adds to wellbeing, by definition.

Organisations like the Australian Bureau of Statistics and the OECD have adopted certain implicit/explicit definitions, typically of the Information Economy type, and mapped these ideas into a strong emphasis on the impacts and consequences of ICTs. The website (http://www.oecd.org/sti/information-economy) for the OECD’s Information Economy Unit states that it:
“[...] examines the economic and social implications of the development, diffusion and use of ICTs, the Internet and e-business. It analyses ICT policy frameworks shaping economic growth productivity, employment and business performance. In particular, the Working Party on the Information Economy (WPIE) focuses on digital content, ICT diffusion to business, global value chains, ICT-enabled off shoring, ICT skills and employment and the publication of the OECD Information Technology Outlook.”
Furthermore, the OECD’s Working Party on Indicators for the Information Society has
“[...] agreed on a number of standards for measuring ICT. They cover the definition of industries producing ICT goods and services (the “ICT sector”), a classification for ICT goods, the definitions of electronic commerce and Internet transactions, and model questionnaires and methodologies for measuring ICT use and e-commerce by businesses, households and individuals. All the standards have been brought together in the 2005 publication, Guide to Measuring the Information Society [ . . . ]” (http://www.oecd.org/document/22/0,3343,en_2649_201185_34508886_1_1_1_1,00.html).
The whole emphasis is on ICTs. For example, the OECD’s “Guide to Measuring the Information Society” has chapter headings that show that their major concern is with ICTs. Chapter 2 covers ICT products; Chapter 3 deals with ICT infrastructure; Chapter 4 concerns ICT supply; Chapter 5 looks at ICT demand by businesses; while Chapter 6 covers ICT demand by households and individuals.

As will be shown below, several authors have discussed the requirements for, and problems with, the measurement of the knowledge/information economy. As noted above, most of the data on which measures of the knowledge economy are based come from the national accounts of the various countries involved. This raises the question of whether those accounts are suitably designed for the purpose, and a number of authors suggest that in fact they are not. Peter Howitt argues that:
“[...] the theoretical foundation on which national income accounting is based is one in which knowledge is fixed and common, where only prices and quantities of commodities need to be measured. Likewise, we have no generally accepted empirical measures of such key theoretical concepts as the stock of technological knowledge, human capital, the resource cost of knowledge acquisition, the rate of innovation or the rate of obsolescence of old knowledge.” (Howitt 1996: 10).
Howitt goes on to make the case that because we cannot correctly measure the inputs to, and the outputs of, the creation and use of knowledge, our traditional measures of GDP and productivity give a misleading picture of the state of the economy. Howitt further claims that the failure to develop a separate investment account for knowledge, in much the same manner as we do for physical capital, results in much of the economy’s output being missed by the national income accounts.

In Carter (1996) six problems in measuring the knowledge economy are identified:
  1. The properties of knowledge itself make measuring it difficult,
  2. Qualitative changes in conventional goods: the knowledge component of a good or service can change, making it difficult to evaluate its 'level of output' over time,
  3. Changing boundaries of producing units: for firms within a knowledge economy, the boundaries between firms and markets are becoming harder to distinguish,
  4. Changing externalities and the externalities of change: spillovers are increasingly important in a knowledge economy,
  5. Distinguishing 'meta-investments' from the current account: some investments are general-purpose investments in the sense that they allow all employees to be more efficient, and
  6. Creative destruction and the 'useful life' of capital: knowledge can become obsolete very quickly and, as it does, the value of the old stock drops to zero.
Carter argues that these issues make it problematic to measure knowledge at the level of the individual firm. They also make it difficult to measure knowledge at the national level, since individual firms' accounts are the basis for the aggregate statistics, and any inaccuracies in the firms' accounts will compromise the national accounts.

Haltiwanger and Jarmin (2000) examine the data requirements for the better measurement of the information economy. They point out that changes are needed in the statistical accounts which countries use if we are to deal with the information/knowledge economy. They begin by noting that improved measurement of many “traditional” items in the national accounts is crucial if we are to understand fully Information Technology’s (IT’s) impact on the economy. It is only by relating changes in traditional measures such as productivity and wages to the quality and use of IT that a comprehensive assessment of IT’s economic impact can be made. For them, three main areas related to the information economy require attention:

  1. The investigation of the impact of IT on key indicators of aggregate activity, such as productivity and living standards,
  2. The impact of IT on labour markets and income distribution, and
  3. The impact of IT on firm and industry structures.
Haltiwanger and Jarmin outline five areas where good data are needed:
  1. Measures of the IT infrastructure,
  2. Measures of e-commerce,
  3. Measures of firm and industry organisation,
  4. Demographic and labour market characteristics of individuals using IT, and
  5. Price behaviour.
In Moulton (2000) the question is asked as to what improvements we can make to the measurement of the information economy. In Moulton's view additional effort is needed on price indices, and better concepts and measures of output are needed for financial and insurance services and other "hard-to-measure" services. Just as serious are the problems of measuring changes in the real output and prices of the industries that intensively use computer services. In some cases output, even if defined, is not directly priced and sold but takes the form of implicit services which at best have to be indirectly measured and valued. How to do so is not obvious.

In the information economy, additional problems arise. The provision of information is a service which in some situations is provided at little or no cost via media such as the web, so on the web there may be less of a connection between information provision and business sales. The dividing line between goods and services becomes fuzzier in the case of e-commerce. When Internet prices differ from those of brick-and-mortar stores, do we need different price indices for the different outlets? The information economy may also affect the growth of business-to-consumer sales, new business formation and cross-border trade, phenomena that standard government surveys may not fully capture. Meanwhile the availability of IT hardware and software means that the variety and nature of the products being provided change rapidly.

Moulton also argues that the measures of the capital stock used need to be strengthened, especially for high-tech equipment. He notes that one issue with measuring the effects of IT on the economy is that IT often enters the production process in the form of capital equipment. Much of the data entering inventory and cost calculations is rather meagre and needs to be expanded to improve capital stock estimates.
Yet another issue with the capital stock measure is that a number of the components of capital are not completely captured by current methods, an obvious example being intellectual property: research and development and other intellectual property should be treated as capital investment, though they currently are not. In addition, Moulton argues that the increased importance of electronic commerce means that the economic surveys used to capture its effects need to be expanded and updated.
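Moulton's question about whether Internet and brick-and-mortar outlets need separate price indices can be made concrete with a standard Laspeyres index, which weights current prices by base-period quantities. The sketch below uses invented prices and quantities purely for illustration:

```python
# Laspeyres price index: sum(p_t * q_0) / sum(p_0 * q_0).
# All prices and quantities below are hypothetical.

base_quantities = {"book": 5, "cd": 3}              # basket bought in period 0
base_prices     = {"book": 20.0, "cd": 10.0}        # period 0, in-store
store_prices_t  = {"book": 22.0, "cd": 11.0}        # period t, in-store
online_prices_t = {"book": 18.0, "cd": 8.0}         # period t, online

def laspeyres(p0, pt, q0):
    """Price index for period t relative to period 0, base-period weights."""
    return sum(pt[g] * q0[g] for g in q0) / sum(p0[g] * q0[g] for g in q0)

print(laspeyres(base_prices, store_prices_t, base_quantities))   # approx. 1.10
print(laspeyres(base_prices, online_prices_t, base_quantities))  # approx. 0.88
```

The same basket shows roughly 10 per cent inflation measured in-store and a price fall measured online, which is exactly why the choice of outlet matters for the index.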

In Peter Howitt’s view there are four main measurement problems for the knowledge economy:
  1. The “knowledge-input problem”. That is, the resources devoted to the creation of knowledge are underestimated by standard measures.
  2. The “knowledge-investment problem”. The output of knowledge resulting from formal and informal R&D activities is typically not measured.
  3. The “quality improvement problem”. Quality improvements go unmeasured.
  4. The “obsolescence problem”. No account is taken of the depreciation of the stock of knowledge (and physical capital) due to the creation of new knowledge.
To deal with these problems Howitt calls for better data. But it is not clear that better data alone are the answer, either to Howitt's problems or to the other issues outlined here. Without a better theory of what the "knowledge economy" is, and the use of that theory to guide changes to the whole national accounting framework, it is far from obvious that much improvement in the current situation can be expected.

One simple theoretical question is: to which industry or industries, and/or sector or sectors, of the economy can we tie knowledge/information production? Several problems arise when considering this question. One is that the "technology" of information creation, transmission and communication pervades all human activities and so cannot fit easily into the national accounts categories. It is language, art, shared thought, and so on, not just the production of a given quantifiable commodity. Another issue is that because ICT exists along several different quantitative and qualitative dimensions, production cannot simply be added up. In addition, if much of the knowledge in society is tacit, known only to individuals, then it may not be possible to measure it in any meaningful way. Also, if knowledge is embedded in an organisation via organisational routines, then again it may not be measurable. Organisational routines may allow the knowledge of individual agents to be efficiently aggregated, much as markets aggregate information, even though no one person has a detailed understanding of the entire operation. In this sense the organisation "possesses" knowledge which may not exist at the level of any individual member of the organisation. Indeed if, as Hayek can be interpreted as saying, much of the individual knowledge used by the organisation is tacit, it may not even be possible for one person to obtain the knowledge embodied in a large corporation.

As noted above, Carter (1996) emphasises that it is problematic to measure knowledge at the national level in part because it is difficult to measure knowledge at the level of the individual firm. Part of the reason for this is that none of the orthodox theories of the firm offers us a theory of the "knowledge firm" of the kind needed to guide our measurement.

Thus many of the measurement problems of the "knowledge economy" are rooted in the fact that we don't have a good theory of the "knowledge economy" or of the "knowledge firm". Without such theories, calls for better data miss the point: "better" data collection alone is not going to improve the measurement of the "knowledge economy".

Saturday, 12 March 2011

More on the productivity paradox

Annie Lowrey has an article in Slate in which she asks, why hasn't the Internet helped the American economy grow as much as economists thought it would? And the answer is: maybe it has, but we just don't know how to measure it.
Maybe it is not the growth that is deficient. Maybe it is the yardstick that is deficient. MIT professor Erik Brynjolfsson explains the idea using the example of the music industry. "Because you and I stopped buying CDs, the music industry has shrunk, according to revenues and GDP. But we're not listening to less music. There's more music consumed than before." The improved choice and variety and availability of music must be worth something to us—even if it is not easy to put into numbers. "On paper, the way GDP is calculated, the music industry is disappearing, but in reality it's not disappearing. It is disappearing in revenue. It is not disappearing in terms of what you should care about, which is music."

As more of our lives are lived online, he wonders whether this might become a bigger problem. "If everybody focuses on the part of the economy that produces dollars, they would be increasingly missing what people actually consume and enjoy. The disconnect becomes bigger and bigger."

But providing an alternative measure of what we produce or consume based on the value people derive from Wikipedia or Pandora proves an extraordinary challenge—indeed, no economist has ever really done it. Brynjolfsson says it is possible, perhaps, by adding up various "consumer surpluses," measures of how much consumers would be willing to pay for a given good or service, versus how much they do pay. (You might pony up $10 for a CD, but why would you if it is free?) That might give a rough sense of the dollar value of what the Internet tends to provide for nothing—and give us an alternative sense of the value of our technologies to us, if not their ability to produce growth or revenue for us.
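Brynjolfsson's back-of-the-envelope suggestion, valuing free online goods by consumer surplus (willingness to pay minus price actually paid), can be sketched with a toy calculation. All the figures here are invented for illustration:

```python
# Hypothetical illustration of valuing "free" goods by consumer surplus.
# Consumer surplus per good = willingness to pay - price actually paid.

goods = [
    # (good, willingness_to_pay, price_paid)
    ("streamed album", 10.0, 0.0),    # would have paid $10 for the CD
    ("Wikipedia lookup", 2.0, 0.0),   # free, but worth something
    ("paid download", 12.0, 9.0),     # bought outright
]

def consumer_surplus(items):
    """Sum of (willingness to pay - price paid) across goods."""
    return sum(wtp - price for _, wtp, price in items)

surplus = consumer_surplus(goods)            # 10 + 2 + 3 = 15
revenue = sum(p for _, _, p in goods)        # only the $9 actually paid
print(f"Consumer surplus: ${surplus:.2f}")   # $15.00
print(f"Measured revenue: ${revenue:.2f}")   # $9.00
```

GDP-style accounting sees only the $9 of revenue; the surplus measure suggests the consumer got $15 of value, which is the gap Brynjolfsson is pointing at.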
In short, we are trying to use 20th century measurement technology to measure a 21st century economy. And that just isn't going to work. As Don Boudreaux put it,
what has stagnated isn’t the economy but, rather, economists’ and statisticians’ capacity to measure economic activity and its contribution to human well-being.

Wednesday, 9 March 2011

Chris Trotter on technological innovation and productivity growth

In a posting, Reply to Chris Trotter, at his blog Roger Kerr quotes Chris Trotter as saying,
An interesting graph, Roger.

As you quite rightly state, MFP measures the influence of innovation and technological improvements on the productivity of our business enterprises.

Have you given any thought to the fact that the period of rapid MFP growth depicted in the graph coincides with the widespread adoption of the personal computer in New Zealand workplaces; the opening up of the Internet from 1992 onwards; and the rapid take-up of the mobile phone as an essential tool of business?

All of these technological changes were responsible for substantial productivity gains, but none of them are attributable to the neoliberal economic reforms introduced by Roger Douglas and Ruth Richardson.
Robert Solow famously quipped in a 1987 review of the book “Manufacturing Matters: The Myth of the Post-Industrial Economy” that “[y]ou can see the computer age everywhere but in the productivity statistics”, a remark that has given rise to what is often called the “Solow productivity paradox”. It was not until after 1995 that the effects of computers finally showed up in the U.S. productivity statistics. The point here is that productivity gains can be hard to find, and even when found there is a long lag between the technological innovation and the productivity increases showing up. Computers started to play an increasing role for business in the U.S. in the 1970s, but it was not until the mid-1990s that productivity increases showed up in the data: a delay of some 20-25 years. So the idea that "rapid MFP growth [...] coincides with the widespread adoption of the personal computer in New Zealand workplaces; the opening up of the Internet from 1992 onwards; and the rapid take-up of the mobile phone as an essential tool of business" is implausible simply on timing grounds. Technological innovation and MFP growth simply do not coincide in the way Trotter argues.