Constructing Jurisdictional Advantage

With Professor Maryann Feldman

Click to view as PDF

One of the myths surrounding the I-Cubed (Information, Intangibles, Innovation) Economy is that “place” – the physical location of economic activity – no longer matters. With the “death of distance” we are told that economic activity can occur anywhere – as the current debate over offshoring illustrates. But, in a highly interconnected global economy, place may become even more important. In response to criticisms of offshoring, we hear over and over from corporate leaders that they must go to where the resources and the talent are located. Local intangible assets are becoming key factors in a company’s competitive advantage. And the uniqueness of those local assets becomes ever more important.

Drawing on theories of corporate competitive strategy, Professor Maryann Feldman outlined a new approach to local economic development based on a community’s unique characteristics—arguing that jurisdictional advantage is established through a strategy of differentiation rather than low costs. Professor Feldman is the Jeffrey S. Skoll Chair in Technical Innovation and Entrepreneurship and Professor of Business Economics at the Rotman School of Management, University of Toronto. Prior to joining Rotman, she was Policy Director for the Johns Hopkins Whiting School of Engineering and a research scientist at the university’s Institute for Policy Studies. Her research focuses on innovation, the commercialization of academic research, and the factors that promote technological change and economic growth. A large part of her work concerns the geography of innovation – investigating why innovation clusters spatially and the mechanisms that support and sustain industrial clusters.

Professor Feldman was introduced by Dr. Kent Hughes, Director of the Project on America and the Global Economy at the Woodrow Wilson Center.

In her presentation, Professor Feldman focused on the need for cities and economic regions to construct a jurisdictional advantage: a deliberate and unique combination of social, economic, and political assets that helps a city gain competitive and innovative advantages. Had she been running a political campaign, her slogan probably would have been “it’s the location, stupid.”

Professor Feldman noted that companies build a competitive advantage either by developing a unique product or service or by taking a low-cost approach to providing a product or service. By low cost, she stressed that she did not mean cutting prices and hence profit margins. For example, she pointed to Southwest Airlines’ business strategy, which built an entire, complex system to create its low-cost advantage. To match Southwest’s prices, a competitor essentially had to master and either duplicate or improve on that total system.

In Professor Feldman’s view, cities and regions needed to learn from the strategies of successful companies. She recognized that corporate strategies were relatively simple – their principal goal was profit maximization. Cities have broader goals, including quality of life, security, and protecting the environment, as well as fostering economic growth and job creation.

It was in terms of generating growth that she felt the cities had most to learn from the business approach. Just as she did not see a future for companies that simply cut prices and profits, she did not see long-term success for cities or regions that relied on low wages or luring industry with tax breaks and other incentives. When economic conditions change, the lured company is often the first to leave.

In terms of a strategy for growth, Professor Feldman rejected both the laissez faire approach of passively waiting for the magic of the market place and the kind of active industrial targeting that attracted attention in the United States in the early 1980s. In her view, technology simply changed too rapidly and unpredictably for industrial targeting to work.

Instead, she encouraged cities and regions to focus on constructing a jurisdictional advantage: a unique system that would allow them to capitalize on new innovations while staying flexible to market changes. Before active “construction,” a city or region has to assess its current assets and strengths. The next step is to build on its current competitive advantages. In most cases, a city or region will find local industries that have already stimulated the development of activities that create a supportive cluster.

She gave two examples of how a specific industry had encouraged the formation of related activities in a city or region and then, in turn, been strengthened by them. Her first example was Hollywood. Citing recent work by A.J. Scott, she noted that it was not the weather but rather an innovative approach to filmmaking that helped break the New York monopoly on film production. “The New York based Motion Picture Patent Trust priced films by the foot” regardless of quality. In Hollywood, Thomas Ince pioneered the shooting of movies in discrete segments that were reassembled later, thus lowering costs and eventually giving rise to the studio system. These innovations in filmmaking led to increased demand for actors, technicians, craftsmen, and managers, among others. The Academy of Motion Picture Arts and Sciences also played a critical role in training talent, thereby supporting the industry’s growth.

In New York, Professor Feldman pointed to N.M. Rantisi’s work on the fashion industry, where a series of complementary industries, services, and educational institutions developed that helped the garment industry move to high-value-added production. Other cities’ everyday garment districts failed to develop their own market niches and withered under first southern and then international competition.

In some cases, a low-cost strategy may be successful. But a low-cost strategy is not the same as a strategy built on low wages. Rather, it is a strategy of providing the needed economic infrastructure and services at a lower cost. The K-12 educational system in Edmonton, Canada is one example where providing a high-quality jurisdictional service at low cost has helped promote economic development. However, to be sustainable, any low-cost strategy must be used to create unique assets that cannot be easily replicated by other jurisdictions.

Because economic activity is often path dependent (where you are depends on where you started), many cities attempt to lure key industries to their jurisdictions. Professor Feldman argued that by the time it is clear that an innovation a city or region might desire is going to be a market force, that innovation is already embedded in a supportive cluster elsewhere. She again stressed the importance of building on existing strengths and, where necessary, focusing on attracting the missing industries needed to complete a cluster. To continue building on their existing strengths, city and regional governments must communicate effectively and regularly with their business stakeholders to ensure that regulatory requirements are not constraining further growth.

Professor Feldman closed by noting that she will be going to India in the near future to add an international dimension to her work.

The discussion was moderated by Dr. Kenan Jarboe, President of Athena Alliance. During the discussion, a number of participants noted other studies and research that parallel Professor Feldman’s work. It was especially noted that her work essentially bridges the gap between Michael Porter’s thinking on corporate strategy and his (and others’) work on economic clusters in a new and insightful way.

Part of the discussion focused on how communities could identify their unique advantages and turn disadvantages into advantages. It was noted that the social infrastructure and the entrepreneurial culture of an area play a large role in helping a community identify opportunities based on its local strengths. Serial entrepreneurs are able to switch from business to business as opportunities change.

The issue of the current bandwagon effect of focusing on one or two supposedly key industries also came up in the discussion. As one participant put it, cities and regions go looking for the magical White Buffalo and end up with a White Elephant. Professor Feldman noted that one of the most important aspects of the work may be a greater understanding of what not to do.

Finally, participants raised the issue of states versus local regions. It was noted that while states may be the controlling political jurisdictions, the locus of economic activity is the local region – which may span state boundaries, as the Washington, DC metropolitan region does. Professor Feldman stressed the importance of a regional strategy for competitive advantage. Simply limiting the strategy to existing political boundaries ignores the nature of regional economic linkages.

“Building on Local Information Assets”

Athena Alliance whitepaper by Kenan Patrick Jarboe et al.

Click to view as PDF

Being rural can also be “cool.” As locally developed information assets become the keys to economic success, rural areas need not be left behind. All communities have the opportunity to benefit from capturing and using their local knowledge. In this new age of information and knowledge, rural areas can continue to thrive by being the special places they are.

America is in transition to an information economy. For our nation’s rural areas, the challenge of this transformation is especially acute. Too often left out of the economic mainstream, they are in danger of becoming backwaters of the rapidly flowing changes all around them. Yet, the opportunities for these areas are as real as the dangers. The economic rules are changing; local information assets are becoming an important factor in economic development. These changes can help rural areas leap into the economic mainstream or can leave them further isolated.

It is now a generally accepted principle in economic development that communities should build on their strengths. Mapping of a community’s assets is becoming a standard component of economic and community development. But what are a community’s strengths in the global information economy? How does a community discover, foster, develop and use its local assets? Which assets are truly important in this new economy?

Economic activity is no longer merely a process of combining capital, energy, materials and labor. Growth in the industrial era was achieved by a process that economists call capital deepening. Costs were driven down by economies of scale; productivity increased through incremental changes. In addition, new technological breakthroughs created new products and entire new industries.

In the information economy, growth is achieved through relentless innovation and the application of knowledge. Economists who have studied this process (often called New Growth Theory) describe knowledge as very different from other factors of production, like capital and labor. Knowledge and information are what they call “non-rival,” meaning that more than one person can use them at the same time. For example, many people may be reading this article at the same time. The ideas, concepts and information contained in this article can be used in many communities simultaneously.

As a result, knowledge spillovers make the accumulation of knowledge self-perpetuating and not subject to diminishing returns. This is unlike the physical world, where the laws of energy and of economics dictate diminishing returns on the use of labor and capital. In other words, knowledge creates more knowledge, which creates yet more knowledge. It is the closest thing we have to a perpetual motion machine – a perpetual idea machine.
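The contrast can be made concrete with a stylized formulation in the spirit of New Growth Theory. The notation below is illustrative, loosely following Romer’s endogenous growth model rather than any equation in the text: with the stock of ideas held fixed, output shows diminishing returns to rival inputs like capital; but because ideas are non-rival, the existing stock of knowledge raises the productivity of the people creating new knowledge, so accumulation feeds on itself:

```latex
% Output from rival inputs (capital K, labor L) scaled by
% the non-rival knowledge stock A:
Y = A\,K^{\alpha}L^{1-\alpha}, \qquad 0 < \alpha < 1

% Diminishing returns to capital alone (A and L held fixed):
\frac{\partial^{2} Y}{\partial K^{2}}
  = \alpha(\alpha - 1)\,A\,K^{\alpha - 2}L^{1-\alpha} < 0

% Knowledge accumulation: existing ideas A make researchers H_A
% more productive at creating new ideas, so growth in A
% is self-perpetuating rather than diminishing:
\dot{A} = \delta\,H_{A}\,A
```

Because the same stock A can be used by every producer and by the research sector simultaneously (non-rivalry), doubling all inputs including knowledge more than doubles output. That is the formal sense in which the “perpetual idea machine” escapes the diminishing returns that govern labor and capital.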

And it is not just technological advances. Knowledge and information come in many forms and can be used in many ways – such as expanding consumer choice through more customized products, more individualized service, and greater attention to aesthetics in order to respond to changing consumer tastes.

In the late 1990s, Danny Quah of the London School of Economics began talking about the “weightless economy” – a phrase that Federal Reserve Board Chairman Alan Greenspan has used repeatedly – to describe how more and more of our economic activity has little or no physical manifestation. It is not just services, but knowledge-based goods like software and music – so-called “intangibles.” According to Leonard Nakamura of the Federal Reserve Bank of Philadelphia, the value of US gross investment in intangibles is over a trillion dollars annually. These intangibles include R&D, advertising and marketing, software, financial activities, and the creative activities of writers, artists, and entertainers. In their book, Invisible Advantage, management consultants Jonathan Low and Pam Cohen Kalafut describe a set of intangibles that drive business performance. Based on years of research, they documented the importance of intangibles such as: organizational leadership; technology and business processes; human capital; workplace organization and culture; innovation capacity; intellectual capital; brands; reputation; and alliances and networks.

Knowledge, information and intangibles drive our innovation process. Not all innovation comes out of the research lab. In fact, creative new design and new uses for old products are key parts of innovation. Those ideas bubble up from customers and front-line workers as well as from managers and researchers.

Thus, it is really a combination of factors that are powering this new economy. It is an Information, Intangibles and Innovation Economy (I3 or I-Cubed Economy).

Intangibles cover not only what we normally think of as intellectual capital – patents, copyrights etc. – but also tacit knowledge or know-how. Explicit (or formalized) knowledge is the codified body of knowledge that is captured in books, scientific formulas and blueprints; tacit knowledge is that part of our knowledge base that is intuitive and experiential. Both are needed. Formal knowledge provides the “know-what” needed for technical progress. Tacit knowledge provides the “know-how” to apply that formal knowledge. An expert is someone who knows not only the formal knowledge of his or her field but also has the ability to develop and utilize the relevant tacit knowledge – be it a line worker in a paper mill, a brain surgeon or a computer software developer.

In this I-Cubed Economy, local economic development requires identifying and using these information and intangible assets, especially the tacit knowledge embedded in local worker experience. Tacit knowledge is found in every situation and in every location. In international development, such localized knowledge is often called “indigenous” knowledge. The World Bank describes this knowledge as:

  • Local, in that it is rooted in a particular community and situated within broader cultural traditions; it is a set of experiences generated by people living in those communities.
  • Tacit and, therefore, not easily codifiable.
  • Transmitted orally, or through imitation and demonstration. Codifying it may lead to the loss of some of its properties.
  • Experiential rather than theoretical, gained through experience and trial and error.
  • Learned through repetition, which is a defining characteristic of tradition, even when new knowledge is added.
  • Constantly changing, being produced as well as reproduced, discovered as well as lost, though it is often perceived by external observers as somewhat static.
Tacit knowledge is only partially based in the individual; it also resides in the special circumstances and situation of the community. It has long been known that industries cluster together in certain geographical areas. Economists have shown that this clustering effect is due to more than simply the location of physical resources. The sharing of knowledge, especially tacit knowledge, is a key ingredient in cluster formation. Tacit knowledge is “sticky” – it is not highly mobile or easily transmitted over telecommunications lines. Hence the importance of “being there.” Thus, place still matters in the information economy.

It is often thought that rural areas are at a distinct disadvantage in this race to develop intangible assets. For example, it is difficult for rural areas to put together the scientific and research assets needed for technology-led economic development. Likewise, following the notion of Richard Florida’s The Rise of the Creative Class, more and more urban areas have latched on to the idea that “cool” is a defining intangible asset on which to build their economic base. These assets are commonly seen as an active nightlife and the arts and entertainment centers that will attract young creative people. Rural areas generally don’t have an agglomeration of such features.

But recent studies show that rural areas and smaller communities may be better suited to the commercialization of new products (a specialized form of “innovation”) than to scientific and technical innovation. And small-town and rural lifestyles have their own appeal as “cool.” What sells is a unique or distinct approach – that important intangible asset of “brand” that more and more communities are beginning to recognize.

There are numerous examples of rural communities or regions that have utilized local knowledge to spark development. The Appalachian Center for Economic Networks (ACEnet) in Athens, Ohio has created a local economic cluster centered on the specialty food products industry. Other examples include filmmaking around Wilmington, North Carolina; windsurfing-related sporting goods and apparel in Hood River, Oregon; fishing gear in Woodland, Washington; snowmobile manufacturing in northern Minnesota; and houseboat manufacturing in Kentucky. These areas, and many others, show that it can be economically “cool” to be rural.

The Commission on the Future of the U.S. Economy Act: S. 2747

A PDF copy of the Congressional Record is linked within article

Click to view as PDF


Introduced by Senator Joseph Lieberman (D-Conn.), this legislation creates a Commission on the Future of the U.S. Economy to craft a strategy for confronting our economic challenges. Athena Alliance was deeply involved in developing the key ideas and crafting this legislation. The Commission is similar to the 1980s President’s Commission on Industrial Competitiveness (the Young Commission), which proved to be a successful mechanism for confronting the issues of its time. But the current problems are fundamentally different from those of 20 years ago.

As Senator Lieberman says, “Today, the challenges we face are
exponentially larger and more complex. We’ve entered an information
age where intangible assets such as innovation and knowledge are the
new keys to competitive advantage. These intangibles – including
worker skills and knowledge, informal relationships that feed creativity,
new business methods, and intellectual property — are driving
worldwide economic prosperity. In an age where these knowledge-based
assets are difficult to patent or copyright, intellectual property rights
are difficult to enforce, and information crosses borders freely and
instantaneously, the first Young Commission doesn’t give us all
the answers.”

These challenges echo those set forth in our publication “Competitiveness Revisited.” They include:

  • the fusion
    of manufacturing and services into complex networks and the rise
    of new business models;
  • the need to
    go beyond quality and productivity to address issues of increased
    customization, speed, and responsiveness to customer needs;
  • the broad nature
    of the innovation system that encompasses basic research, technological
    development, venture capital, new product development, design and
    aesthetics, new business models, and the development of new markets;
  • the need for
    new and better ways of fostering the types of skills needed in a
    knowledge and information economy; and,
  • the challenge
    of unlocking the value of underutilized knowledge assets.

The legislation explicitly
recognizes that we are becoming an information and knowledge economy
(or, as I like to put it, the I-Cubed – Information, Innovation
and Intangible – Economy) and that both science-based research
and informal creativity are key factors in the innovation system. It
also explicitly recognizes the importance of information, knowledge,
and other intangible assets in driving economic prosperity, including
worker skills and know-how, informal relationships that feed creativity
and new ideas, high-performance work organizations, new business methods,
intellectual property such as patents and copyrights, brand names, and
innovation and creativity skills.

The Commission is directed to broadly explore these new challenges,
and is specifically charged with developing policy recommendations for:

  • transforming the education and training process into a true system of
    life-long learning;
  • upgrading skills of the United States workforce to compete effectively
    in the new economic environment, including mathematics and science skills,
    critical thinking skills, communication skills, language and intercultural
    awareness, creativity, and interpersonal relations essential for success
    in the information age;
  • promoting a broad system of innovation and knowledge diffusion, including
    non-technological ingenuity and creativity as well as science-based
    research and development;
  • fostering the development of knowledge and information assets in all
    sectors of the United States economy, particularly those sectors of
    the economy in which rates of productivity and innovation have lagged,
    and in United States companies of all sizes, particularly small and
    medium-size companies;
  • developing jobs that are rooted in local skills and local knowledge
    assets in order to lessen displacement resulting from ongoing global
    competition;
  • improving access to, and lowering the cost of, capital by unlocking
    the value to financial markets of underutilized knowledge assets;
  • strengthening the efficiency and stability of the international financial
    system (taking into account the roles of foreign capital and domestic
    savings in economic growth);
  • developing policies and mechanisms for managing the increasing complexity
    of globalization;

  • adjusting to the impacts of global demographic changes in the United
    States, other developed countries, and developing countries;
  • improving economic statistics and accounting principles to adequately
    measure all sectors of the new economic environment, including the value
    of information, innovation, knowledge, and other intangible assets;
    and
  • improving understanding of how the Federal Government supports and invests
    in knowledge and other intangible assets.

The Commission shall submit a report to Congress by March 1, 2006 (or 18 months after its first meeting, whichever is later) on the competitive challenges facing the United States, with conclusions and specific recommendations for legislative and administrative actions. Made up of 17 voting and 5 non-voting members, the Commission would have the power to hire staff, conduct hearings, request information from government agencies, and commission outside studies. $10 million is authorized in appropriations to fund the Commission’s activities.


“Competitiveness Revisited”

Kenan Patrick Jarboe

Click to view as PDF


The United States economy has entered a new era. But it is not exactly
the period of ease and prosperity that we were led to believe it would
be. This new economy is one of relentless change and competition. We
are on a new treadmill. While the industrial economy demanded higher
and higher levels of productivity to drive down costs, this information
economy demands more and more innovation.

If the industrial age was driven by machines and natural resources,
this new innovation age is being driven more and more by people and
intangibles. Foremost are worker skills and know-how, innovative work
organizations, new business methods, brands, and formal intellectual
property such as patents and copyrights. Our economy increasingly runs
not just on technological advances, but also on ways of expanding consumer
choice through more customized products, more individualized service,
and greater attention to aesthetics in order to respond to changing
consumer tastes.

In this economy—the Information, Intangibles, and Innovation Economy
(I-Cubed Economy)—the rules have changed. But public policy has
not caught up with the changes. Many have been putting forward policy
solutions to deal with our current economic challenges, although most
often they have simply recycled proposals left over from the 1980s.
Few are looking at what the economy has become in the past 20-30 years.

One notable exception has been Senator Joseph Lieberman, who has proposed
creating a
bipartisan commission
to tackle this new competitiveness challenge.
Like the President’s Commission on Industrial Competitiveness (Young
Commission) appointed in 1984 by Ronald Reagan, this new group’s
mandate would be to re-think the problems and provide new solutions.
The Young Commission created a bipartisan agenda for responding to the
competitiveness challenges of its time. This new commission would be
charged with updating that consensus and altering it to meet the realities
of our rapidly changing world.

The task does not promise to be easy. Twenty years ago, the U.S. faced
global competition in goods and loss of domestic manufacturing firms;
now it faces the fusion of manufacturing and services and the opening
to international competition of services sectors once thought immune
to such challenges. Then, the operating issues were quality and productivity;
now they are customization, speed, and responsiveness to customer needs.
Then, a key concern was creating a flexible and educated workforce;
now, in addition, we must foster an educational enterprise that can
provide the constantly changing skills required in a knowledge- and
information-intensive economy. Then, the main financial challenge was
reducing the cost of capital; today’s equivalent challenge is
unlocking the value of underutilized knowledge assets and ensuring the
efficiency and stability of the global financial system. Then, the policy
problem was raising awareness of the importance of international trade;
now it is crafting policy appropriate to an increasingly globalized
and interconnected economy.

In the 1980s our focus was on individual firms and industries; now we
must find ways of sustaining networks of firms and of adopting new business
models. Finally, these problems and challenges, as well as myriad new
ideas and technologies, are rapidly sweeping across the domestic and
international economy. Their speed requires that U.S. industry, both
manufacturing and services—as well as the suppliers of financial,
scientific, and human capital—have the capabilities and resources
necessary to prosper and grow in this new environment.

Then: Manufacturing
Now: All sectors of the economy under competitive challenge (fusion of services and manufacturing)

Then: Quality
Now: Customization, speed, and responsiveness to customer needs

Then: Productivity & technological innovation
Now: Innovation, broadly defined – not just technological, not just new products; design (aesthetics); new business models

Then: Awareness of importance of trade
Now: Management of globalization

Then: Skills and flexibility of the workforce
Now: Constantly changing skills

Then: Cost of capital
Now: Efficiency and stability of the financial system; increased access to capital by unlocking the value of underutilized knowledge assets

Not
only are the issues different, but the competitors are different. Then,
it was Japan and the Southeast Asian “tigers.” Now it is
the populous nations of India and China as well as new challengers from
Eastern Europe. All of these nations are rapidly developing their intangible
assets and gearing up their innovation processes.

A key task for the proposed commission would be to understand the new
role of intangible assets. Information and intangible assets power our
innovation process. That process combines both formal research and informal
creativity to yield the productivity and improvement gains needed to
maintain prosperity.

American firms and workers lead the world in creating and using intangible
assets. But the increasing global competition means we must continue
to develop our I-Cubed Economy. No American need be left behind in this
new era of intangible-based global competition.

It will take new solutions, however. Simply more investment in high-tech
and education is not the answer. We need to focus on upgrading and changing
all our production systems to make them more creative and innovative.
Innovation is everyone’s business. Creativity is essentially problem-solving.
And everyone is a problem-solver. We are solving problems from the moment
we wake up in the morning until we close our eyes at night–and our
brain often continues even in our sleep.

Despite the rhetoric that “our people are our most important asset,”
we have yet to unleash the creative, innovative power of our work force.
Companies still see workers as a fungible cost—a cost to be minimized
by utilizing lower-paid overseas labor or by eliminating overtime pay.

Part of the problem is our accounting system, which still hasn’t
figured out a way to count workers as assets rather than expenses. What
gets counted as an asset attracts investments. What gets counted as
an expense is cut.

Thus, this new commission that Sen. Lieberman proposes would have its
work cut out for it. It must go beyond the vision of the past 20 years—while
not neglecting its unfinished business, some of which still merits our
attention—to create the vision for the next 20 years.

We know that economic success can come from harnessing information,
knowledge, and intangible assets. Leading U.S. companies have proven
that they can leverage their innovative capacity, new business models,
brands, and intellectual capital for economic gain. The challenge facing
our government leaders is to help translate that activity into benefits
for all. Let us hope that Lieberman’s commission can create a
framework for ensuring future American prosperity in the new I-Cubed
Economy.

“National Innovation Policy: An Urgent U.S. Need”

Kenan Patrick Jarboe

Click to view as PDF

Over the next few weeks, both major parties and both presidential candidates will be promoting their technology and manufacturing strategies. President Bush has already grown fond of saying that we are in the beginnings of an innovation economy. He is right. We are in an Information-Intangibles-Innovation (I-Cubed) Economy where relentless and continuous improvement is needed to stay competitive. But we don’t have a national innovation policy to keep us on top. Yes, we have a science and technology (S&T) policy and are fashioning a manufacturing policy. But those are not the same as an innovation policy.

Don’t get me wrong: President Bush and Senator Kerry are pushing needed proposals to increase funding for research and development (R&D) and to improve math and science education. Important as they are, though, those proposals would at best power a giant leap forward into the gadget-based industrial economy of the 20th century. They are not policies for the creativity-based information economy of the 21st.

R&D is only one part of innovation. Some innovation is technology driven; some is not. Doing things differently can be extremely important–consider Wal-Mart’s “big-box” marketing and “cross-docking” logistics concepts, Dell’s build-to-order process, or Southwest Airlines’ strategy for quicker turnaround on the ground.

Many technological innovations require organizational innovations as well. As MIT Professor Erik Brynjolfsson’s research points out, much of the productivity gain from new information technology comes from concomitant innovations in organizational structures.

Even new-product development is often more the result of interactions with customers and suppliers than of a breakthrough in the lab. 3M Corporation, a technologically sophisticated company, uses a “lead-user” process to identify and then adapt innovations already employed in leading-edge and related markets. New uses for old products, such as teenagers morphing cell phones into music machines, and creative new designs, such as a more user-friendly web page or a more functional layout of an airport terminal, are also key features of innovation. As is better-tasting coffee.

And employee suggestions add untold product and process innovations to our economy outside the R&D lab. Many companies have instituted “knowledge management” systems to capture and share the innovative knowledge circulating within their workforce.

A 2002 RAND report on innovation, New Foundations for Growth: The U.S. Innovation System Today and Tomorrow, summed it up:

…[W]e immediately think of scientists and engineers working sometimes on their own but most often in laboratories or R&D facilities operated by private industry, by universities, and to some extent by the government. Yet, much innovative activity occurs outside the formal precincts of R&D labs. R&D departments tend to be an artifact of large firm organization. But in all company settings much “fixing” that amounts to innovation is done on the line by employees not principally charged with the innovation task. This type of informal activity too is an element of the national innovation system.

Our public policy often ignores these other facets of innovation.

To use a sports analogy, imagine an NFL coach who concentrates solely on the passing game–working only with the quarterback and the wide receivers. Granted, these are the players that produce many of the TV highlight plays. But as any football fan can tell you, you don’t get to the Super Bowl if you neglect the running game, defense, the kicking game, and special teams.

We have equated innovation only with S&T and have neglected other parts of the game. Is S&T necessary? Yes. Is it sufficient? No.

One of the first things we need to do is figure out how well we are doing. We need to measure differently and better. There are a lot of data on such S&T indicators as R&D expenditures, patents, the number of workers with technology degrees, and student math and science test scores. But the U.S. has no organized means of collecting information on innovation broadly. Our S&T indicators need to be expanded to innovation indicators.

The European Union is already doing this; it will release its third Community Innovation Survey next month. That survey will cover topics such as the number of new or improved products introduced, new markets developed, new processes adopted, and overall expenditures on innovation activities–including not only R&D but also worker training and industrial design. The Australian Bureau of Statistics conducted an innovation survey in the mid-1990s and is preparing a new survey for this year that will cover technology innovation, new products, and organizational innovations. The Organization for Economic Cooperation and Development is in the process of revising its manual for collecting statistics on innovation.

The U.S. needs to institute its own Innovation Survey. Only when we look at the big picture and find out where we really stand can we begin to put together all the pieces: technology, education, creativity, organizational innovation, workforce training. Otherwise, we will have a lopsided team that isn’t going to win the economic Super Bowl.

If you doubt me, just ask Steve Spurrier.

Patent Donations and the Problem of Orphan Technologies

With David Martin and Peter Bloch

Click to view as PDF

Dr. David E. Martin, President and CEO of M-Cam, and Mr. Peter Bloch, COO of Light Years IP, explored different aspects of the orphan patent question. Orphan patents are those that are no longer used by their inventors or owners and are often donated to other institutions in exchange for tax deductions. Dr. Martin opened the discussion by noting that his company has developed intellectual property auditing systems to identify the commercial validity and value of patents. He noted that 30 percent of current patents are “functional forgeries” because they are issued based on the uniqueness of the words used to describe the invention, not on the uniqueness of the invention itself. In addition, he contended that 90 percent of the patents granted in the United States, Europe and Japan were defensive in nature. As fee-based organizations, the patent offices depend on the volume of patents and thus have incentives to grant patents regardless of their ultimate validity.

Dr. Martin noted that private consulting firms were counseling companies to adopt an “abandon or donate” strategy for unused intellectual property for tax savings. He noted that universities have used these donations of intellectual property (IP) (rather than cash) as the private-sector matching funds often required for federal research grants. In some cases, the universities would abandon patents rather than pay the $3,000 maintenance fee each required.

Dr. Martin contrasted the $1.4 billion budget for the U.S. Patent and Trademark Office (PTO) with the $3.8 billion cost to the U.S. taxpayer from tax-deductible donations of patents. The valuation of donated patents is based on the methodology used to determine damages from an infringing patent. Yet, in the case of donated patents, none have had any commercial value. He concluded his opening remarks with a call for the PTO to do a much better job of determining what is really a new invention that deserves the temporary monopoly conferred by the law.

Mr. Bloch elaborated on the history of patent donations. Although such donations have been allowed since 1954, it was only in the 1990s that corporations became more aware of the value of their patent holdings and of the tax benefits of donating unused patents. In response to growing concern about abuse or even outright fraud, Congress began tightening the tax-code provisions for deducting donated patents. This has caused concern, as proponents of patent donation believe that donated patents open new areas of research and have helped universities bring their research closer to market.

After reviewing the current system of patent donations, Mr. Bloch concluded that it did not work. He pointed to a limited number of successes but did not feel that they offset the costs. Most of the incentives provided by tax deductions are for technologies that are already closest to market and easiest to commercialize. However, ending the program completely could also prove to be a mistake. Instead, a broader look at the entire national innovation system is needed to return the focus to technologies that are more difficult to commercialize, where incentives for further development may produce more public benefit.

The speakers for this policy forum were Dr. David Martin and Mr. Peter Bloch. Dr. Martin is CEO and founder of M-CAM, a Charlottesville, Virginia corporation that developed and commercialized the world’s first international intellectual property auditing systems to identify the commercial validity and value of patents. He has been at the forefront of IP management system development for over a decade. Formerly an Assistant Professor at the University of Virginia’s School of Medicine, he has worked with numerous governments on technology transfer policies and intellectual property protection.

Mr. Bloch is the Chief Operating Officer of Light Years IP, a not-for-profit association focused on adapting modern IP marketing, asset management and licensing techniques to help developing countries earn export income. He is a business strategist and multimedia developer with over twenty-five years of experience in all aspects of startup, management and strategic planning for media companies. For the last fifteen years, he has specialized in working with media technology companies as a strategic planning consultant. As a consultant to the International Intellectual Property Institute, he co-authored a recently published research paper, IP Donations: A Policy Review.

Dr. Martin and Mr. Bloch were asked to explore different aspects of the orphan patent question. Orphan patents are those that are no longer used by their inventors or owners and are often donated to other institutions in exchange for tax deductions.

They were introduced by Dr. Kent Hughes, Director of the Project on America and the Global Economy at the Woodrow Wilson Center.

Dr. Martin began his presentation by describing the work of M-CAM on validating patents.

Based on this work, he believes that over 30 percent of the United States patents currently in circulation are “functional forgeries”: their only uniqueness lies in the wording, achieved with a thesaurus, rather than in a novel product or process. He gave the example of a patent for toast issued in July of 2001, in which toast is called “the thermal refreshening and remediation of a bread product.”

According to Dr. Martin, the U.S. Patent and Trademark Office (PTO) is not fulfilling its constitutional charter. The U.S. Constitution says that in exchange for the disclosure of an invention or discovery that advances “science and useful arts,” the inventor may get a limited monopoly. Over 90 percent of the patents in circulation in the United States, Japan and Europe are “defensive” patents. These patents are in stark violation of the Constitution; the grant of a public monopoly was not intended for protectionist self-interest. The grant of a monopoly was in exchange for the disclosure of something that promoted science, technology and industry.

The problem is that the PTO—and the patent offices of Japan and Europe—are fee-based organizations. Thus, their incentive is to grant more and more patents. If they reject a patent, they forgo the annuity value of its maintenance fees—and reduce their own funding.

In 1983, the decision was made to go for quantity over quality, and the PTO became a “customer-service organization.” But who’s the customer and what’s the service? The customer should not be the applicant. The customer should be the public, for whom that exchange of a sovereign grant is supposed to advance the public interest.

Dr. Martin pointed out that the term “orphan patent” is a bit oxymoronic. If it’s disclosed, it is no longer an “orphan”: the public can access it at no cost. If it’s disclosed, the public faces no limitation on what it can do with it, save that it cannot commercially exploit the particular invention embodied in that patent.

The term “orphan technology” has a more legitimate historical basis in the economic dialogue. It comes out of the era that followed the liberation of defense technologies, when Congress decided that it would be a good idea to make those technologies available for commercial exploitation if they no longer had a defense application. For example, gamma-emission detection of breast cancer became available to women under 40. Certain optics and telecommunications technologies also came out of this switch from defense to commercial use.

But orphan technologies were, at the time the term was coined, specifically those technologies that the public had already paid for and in which it had already invested a monopoly interest, yet which the U.S. government, by virtue of its rights to that intellectual property, was doing nothing with.

But, the term orphan technology now implies that there is another use, or a better use, of an invention. Implicit in this concept of technology transfer is the notion that somewhere along the line you’re introducing an economic theory called the “secondary market”—the ability to put a piece of technology, a property interest, what have you, into the hands of parties who can do something with it.

Dr. Martin believes that we pay a lot of money for R&D but don’t get much for each R&D dollar. Most technology transfer dollars are actually trying to offset this funding inefficiency—unless one views technology transfer programs as a jobs program or as an alternative means of funding higher education.

He referred to a study his company did for the Small Business Administration (SBA) on the Small Business Innovation Research (SBIR) and Small Business Technology Transfer (STTR) grants. It found that 40 percent of grant applications coming through SBIR and STTR actually were soliciting funds to pursue an investigation into something that had already been patented.

It was in that investigation that patent donation came to M-CAM’s attention. These and other federal granting mechanisms require recipients to show commercial utility and partnerships with industry in the development phase of funding. The partnership requirement often takes the form of matching grants from private industry. M-CAM found that these contributions were often intellectual property, not cash. At the same time, it became clear that the major accounting firms were hawking an “abandon or donate” strategy as an ideal way to generate phenomenal tax savings.

In one interesting case, the Internal Revenue Service received an information disclosure statement from a taxpayer listing a number of donated patents that the taxpayer never owned. The company that actually owned some of these “donated” patents had never even heard of the firm that had been so kind as to donate them.

From Dr. Martin’s perspective, all of this began to look like a situation in which companies no longer wanted to pay maintenance fees on questionable patents acquired for defensive reasons (often by copying competitor patents to build a protective hedge around their ideas). Rather than pay to maintain those patents, it became both easier and more lucrative to donate them to universities as matching grants.

It has become clear that, post-donation, patents aren’t being maintained. Universities are willing to walk away from patent portfolios valued for tax-donation purposes at millions of dollars for the sake of saving a few thousand dollars in fees.

The result is a system that generates $1.4 billion a year in fees to support a patent office and loses $3.8 billion a year in tax revenues on patent donations.

Part of this rush for patents has been deliberate. From 1980 to 1985, we began to copy the Japanese system of defensive patents. Now, when a Japanese company comes into a technology negotiation with its stack of patents, the U.S. company can counter with its own stack. And then the two companies agree to non-revenue-bearing cross-licenses.

Another problem with the process is how we calculate value. The current patent-valuation methodology is drawn from infringement damages. Yet, at the time of donation, none of the patents actually has a commercial value. To have infringement damages, you have to have commercial consequences; if you don’t have commercial consequences, there is no damage. And if there is no damage, there is no economic value to measure. So, Dr. Martin wondered, why are we using such a methodology to arrive at economic value?

He offered the analogy of a patent as a “No Trespassing” sign. Neither has any affirmative value; neither conveys an affirmative right. Each simply enables you to keep someone else from doing something. And a “No Trespassing” sign is worth exactly what you pay for it. Put the sign on two different “mines”: a minefield and a platinum mine. The sign is still worth only what you paid for it. On the minefield, the sign’s worth is liability avoidance; the owner posts it mainly to put you on notice. On the platinum mine, the sign is worth only as much as the owner is willing to spend to enforce its admonition. In either case, the sign itself has no intrinsic value beyond what it had when you bought it.

Neither the “No Trespassing” sign nor the patent is an asset; each is a contingent liability. There is a burden to do something with that sign to actually achieve what it says. They cost you money to get; they cost you money to enforce; they cost you money to maintain. Where’s the asset side so far? Some will argue that not having an enforcement action brought against you must have value, and that we need to find a GAAP accounting means of putting that on a balance sheet. Dr. Martin is not advocating for or against putting it on the balance sheet. He is pointing out that current regulations have no mechanism to address the point. Hence, we have a policy problem.

Dr. Martin closed with the observation that American policy is based on the mistaken belief that we are the only creative economy, that we are the source of innovation. In all the debate about outsourcing and where jobs are going, the underlying response is to assert that Americans are still the people who invent stuff.

That assumption may get us in trouble in the future. He posed the following scenario: Hoover Dam was constructed with concrete that was calculated to fatigue next year. We have 9 million people living in a desert whose water supply comes exclusively from that location. The intellectual property rights (IPR) for water desalinization—the only way to save those 9 million people from a water catastrophe—are not U.S. owned. They’re owned by foreign interests.

This scenario would end up reversing the roles of the AIDS-drug debate in South Africa, where the U.S. is pushing for strong enforcement of drug companies’ IPR. IPR is great, until it’s our 9 million people who have to deal with a major problem.

He noted that the rest of the world is now using technologies like M-CAM’s forensic analysis of patent enforceability to detect, as he puts it, the patent frauds that are issued every day out of patent offices.

He closed by emphasizing that it is absolutely essential that we wake up to the fact that we need to do a much better job of accounting. We need to better define what invention is, what innovation is, and what a monopoly is worth. After we answer these questions, we must build systems and standards to enforce those definitions. This will require halting the abuse of the ambiguity that has surrounded intellectual property.

The next speaker was Mr. Bloch, who gave an overview of the recently published report by the International Intellectual Property Institute (IIPI), IP Donations: A Policy Review. Whereas Dr. Martin examined the detailed foundations of the issue, namely the validity of the patents themselves, the policy review took a broad, macro-level view.

The issue first came up in 1954, when the IRS clarified certain rulings to allow for the donation of intellectual property. This opportunity went largely unused until the mid-to-late nineties. At that point, corporations began to realize that their intellectual property was becoming increasingly valuable—in some cases even more valuable than their physical plant. Companies and their accountants discovered that maintenance fees on unused patents were becoming a large cost; it was better either to abandon a patent or to donate it. Donation was seen as the preferred option, since companies could take tax deductions of up to 37% of the value of the patent as determined by an appraiser. Patent donations began gaining momentum in 1996, reaching a peak around 2001. Write-offs for donations of patent portfolios have reached the $10 to $20 million range, with numerous large companies now routinely donating patents.
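The abandon-or-donate arithmetic described above can be sketched with a short example. The 37% deduction rate comes from the text; every dollar figure below (appraised value, maintenance fee, remaining term) is a hypothetical assumption chosen only to show why donating could look far more attractive than abandoning:

```python
# Illustrative abandon-vs-donate comparison for an unused patent.
# The 0.37 deduction rate is from the text; the dollar figures are
# hypothetical assumptions, not data from the IIPI report.

def donation_tax_savings(appraised_value: float, deduction_rate: float = 0.37) -> float:
    """Tax write-off from donating the patent at its appraised value."""
    return appraised_value * deduction_rate

def abandonment_savings(maintenance_fee: float, years_remaining: int) -> float:
    """Fees avoided by simply letting the patent lapse."""
    return maintenance_fee * years_remaining

appraised = 1_000_000        # appraiser's valuation (assumed)
fee, years = 3_000, 10       # per-patent maintenance fee and remaining term (assumed)

donate = donation_tax_savings(appraised)    # roughly $370,000
abandon = abandonment_savings(fee, years)   # $30,000
print(f"donate: ${donate:,.0f}  abandon: ${abandon:,.0f}")
```

Under these assumed numbers the donation write-off dwarfs the fees avoided by abandonment, which is precisely the incentive the accounting firms were reportedly selling.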

As a result, some people began looking closely at these donations and found cases of abuse, if not outright fraud. The outcome of that debate is a provision in the current tax bill—S. 1637—that would effectively eliminate tax deductions for patent donations. Many corporate donors claim that this will result in elimination of patent donations altogether.

This concerns the recipient community and others who believe that there is value in the program. The recipients of many of these patents claim that the mechanism has been immensely valuable in bringing technologies closer to market. According to them, it has enabled universities to do research in areas that they may not have been able to before.

Most patent donations are going to the second-tier research universities. These institutions don’t have their own well-funded endowments and research programs. Nor do they have a great number of their own patents. Therefore they wouldn’t have licensing and technology transfer programs if not for the donated patents.

The debate has come down to the intellectual property owners and some in the educational community against the Senate Finance Committee and others who are interested in curbing tax abuse.

It was in this environment that IIPI commissioned the policy review. One of its insights was that there had never been an explicit policy on patent donations as a tool of technology transfer; the de facto policy simply resulted from the IRS’s interpretations of the rules and regulations. No one had looked at the overall picture and the benefit to the taxpayer. The program is a subsidy to corporations, but no one had asked what the public was getting in exchange for the subsidy.

One of the problems is finding out how much the program costs. Discussions with the IRS, the Treasury Department and donors have led Mr. Bloch to conclude that it is almost impossible to come up with any reliable data on the value of the donated patents. The cost is in the tens or hundreds of millions of dollars, but the exact figure is hard to calculate.

The first reason is that most of the patents were not donated until the mid-to-late 1990s, and it may take anywhere from four to ten years to actually commercialize a technology. Companies have been set up as university spin-offs specifically to exploit technologies developed from donated patents, but they haven’t been in business long enough to deliver any commercial results. And the vast majority of donated patents have not even reached the point where a technology has been developed.

The second reason is that there is no measurement system. There is no government agency, no national innovation policy czar, that tracks this. There are no data on who is giving what patents to whom, on the progress of those patents, on whether they are ever commercialized, or on what economic activity is generated. The mission of the PTO is job creation and innovation, but that mission is not tied to any national innovation policy whatsoever.

As an aside, Mr. Bloch noted that there are other consequences of not having a national innovation policy. For example, government funding of basic research has been declining steadily since 1982. And the private funding of basic research is moving offshore, away from U.S. universities.

A third problem is the process itself. It is the technology closest to a commercial application that collects the highest valuations and generates the largest tax deductions. Yet this same technology, precisely because it is closest to market, should be the easiest to license through traditional arrangements and therefore the least in need of the donation process for commercialization.

Mr. Bloch noted that a large company with a new product that is not going to create a billion-dollar market has two choices. It can give the technology to a research institution, which will take it through a bit more research and then sell it; in return, the company gets a subsidy in the form of a tax deduction. The alternative is to license the technology to another company that doesn’t need a billion-dollar market.

Why donate rather than license? According to Mr. Bloch, the answer that normally comes back from business executives varies: we couldn’t find anybody who is interested; we didn’t have the time; it was too complicated; it was easier to donate. He suspects in some cases that it was simply more profitable to donate the patent for the tax deduction than it was to seek a partner to develop the technology.

Mr. Bloch believes that if the taxpayer is going to subsidize technology commercialization, the subsidy should go to technologies that are promising but more difficult to commercialize. Yet it is the more difficult technologies that, under the rules for appraisal, will be given lower values, generate lower tax deductions, and therefore are less likely to be donated.

The conclusion of the policy review: the program doesn’t work. This conclusion holds despite some notable successes; there are technologies that probably wouldn’t have gotten to market without the program. Donation may be a suitable mechanism for subsidizing technology development in the case of orphan drugs, where demand is limited and the big pharmaceutical companies don’t see a return on their investment. In this area, donations or enlightened licensing to research institutions have had positive results.

However, certain proposed changes to the tax code would throw out the program entirely. Rather, we should look at criteria for designing new mechanisms that make promising early-stage technologies available to research institutions and to small businesses. Right now, only a 501(c)(3) organization can receive donations that qualify donors for write-offs; the program thus locks out small businesses as recipients.

The entire complex needs to be looked at carefully: owners of technologies who, for one reason or another, did not pursue them; universities, which are seen as engines of economic growth through their research activities; and small-business innovation programs and other government programs meant to foster commercialization.

Before considering subsidies and tax write-offs, Mr. Bloch stressed, we need to look broadly at what elements should be built into a new program. We also need to determine exactly where the market failures are, and to tie any new mechanism to a national innovation policy.

Crafting Policy for the Information Age: EU Research on Intangibles

Clark Eustace, Executive Chairman of PRISM, presented the findings of the PRISM project, an umbrella organization of European business schools formed to carry on the work of the European Commission’s High-Level Expert Group on the Intangible Economy. Originally, the issue of intangibles was seen primarily as an accounting problem. The further the issue was explored, the clearer it became that it was a larger, more serious issue with far greater implications throughout the economy. In recognition of the broad nature of the problem, PRISM focused on four issue areas: the evolving new theory of the firm; measurement issues; issues for the key interest groups, particularly accountants, bankers and other market-related actors; and implications for EU policy.

One of the group’s main conclusions is that there really isn’t a new economy, but rather a soft revolution in a number of areas, including the asset base, the speed of markets and the nature of the value chain, which is resulting in deeper transformations. This revolution has gone largely undetected over a number of years because systems of measurement aren’t able to pick it up. The transformation has significant implications for EU policy in understanding the productivity of knowledge-intensive services and the creation of intangible goods. It also raises concerns about accounting standards and the adequacy of existing mechanisms of company reporting.

During the discussion, a number of issues were raised concerning innovation and intellectual property rights (IPR). It was pointed out that we still don’t have a good model of the knowledge production process – either inputs or outputs. Thus, we have difficulty in determining the incentives needed, including the role and forms of IPR that might be most appropriate. Similarly, changes to the model of business services (toward commoditization and systemization) and the rise of intangible goods as a category of products that are neither goods nor services have increased the complexity of the situation facing policymakers. The implication is that there is no one-size-fits-all policy.

Some of the implications of the rise of intangibles for the financial system, especially on access to capital and the role of credit rating agencies, were also discussed. Throughout the discussion, the need for continued research on better models and data collection was stressed. Only through both better data and a more complete theory can we understand how the shifts are occurring.

The speaker at this policy forum was Mr. Clark Eustace, Executive Chairman of PRISM. PRISM is one of the leading European research efforts on intangibles. Funded by the European Commission, the PRISM group is a consortium of eight European business schools that has spent the last two years intensively studying the role of intangibles in economic growth. A former senior partner with Price Waterhouse in Europe, Mr. Eustace is an international expert on the economic and accounting issues relating to the expanding intangible economy. He has served in a number of advisory capacities to European governments. Most recently, he was the founding Chairman of the European Commission’s High-Level Expert Group on the Intangible Economy.

He was introduced by Dr. Kent Hughes, Director of the Project on America and the Global Economy at the Woodrow Wilson Center.

Mr. Eustace began by presenting background on the PRISM project. The project is actually an outgrowth of a Brookings Institution project on which he served. The Brookings Task Force on Intangibles looked at the question of what we should be measuring with respect to intangibles, including the issue of high stock market valuations.1 That project served as a wake-up call for officials in Europe, who responded by creating the High-Level Expert Group on the Intangible Economy, which Mr. Eustace was asked to chair. This was a highly successful undertaking that managed, in a short period of time, to produce a detailed overview of the issue.2

Based on that experience, he convinced the European Commission to expand the group and bring in academics for detailed studies on the issues. PRISM was formed as the umbrella organization for that research, with a high-level advisory group comprised of government officials, business leaders and academics. The research is now complete and PRISM has published its executive summary of findings and a large number of background papers. These can be found on its website at http://www.euintangibles.net.

Europe has made this issue of the new economy/digital economy/intangible economy a priority, starting with the Lisbon Accord in 2000. The Lisbon Accord grew out of concern over the gap in economic performance between the US and European economies. This European Commission (EC) agreement set both the research and policy agenda to confront these issues and sent a clear political message that the issue of economic changes was a priority. It is clear that these issues are understood at the highest level of the EC, including EC President Romano Prodi, a respected former economics professor.

Since then, and even before, there have been numerous studies of the economic gap between the US and Europe. As this analysis has progressed, the view of the issues surrounding intangible assets has changed. Originally, Mr. Eustace was asked to get involved in identifying and measuring intangibles primarily because it was seen as an accounting problem. The further the issue was explored, the more everyone realized it was a larger, more serious issue with far greater implications throughout the economy. The question then became how to gain recognition that this was a larger issue affecting major portions of the economy, such as banks and capital markets, and not just the accountants.

Both sides of the Atlantic have now shifted to the view that this is an economic problem, not simply an accounting problem. There is a greater realization that the economic fundamentals have changed. Unfortunately, fragmentation within academic economics was not helpful in confronting the issue.

As the Experts Group delved further into this question, they found that a major part of the academic fracture line concerned measurement theory. They now believe they need new models and a new math. By that they do not mean a new accounting math, such as that proposed by Baruch Lev. Rather, they mean a new way of measuring, similar to the revolution in science that took place with Robert Boyle. Previously, the notion was that you looked at things in a disaggregated fashion. In economics and accounting, you added up things such as inputs or transactions or costs. You then disaggregated each individual item and could analyze it in different ways. At some point, people in the natural sciences realized that there were far too many interactions to deal with in this way. So they created a new model and a new math for measurement and analysis.

That is where the PRISM project started. One goal was to bring together disparate academics from various disciplines to attempt to create a new way of approaching intangibles, including taking a closer look at econometrics.
The group had its first session to flesh out its ideas at the University of Ferrara, led by Professor Patrizio Bianchi. That session led to a focus on four issue areas:
The evolving new theory of the firm.
Measurement issues.
Issues for the key interest groups, particularly the accountants, bankers and other market related actors.
Implications for EU policy.
Two years later, the PRISM group has come to a set of main conclusions.

First, there really isn’t a new economy. There has been no shift comparable to those that occurred at the beginning of the Industrial Revolution. There is, however, a very rapid and largely hidden change in a number of areas, including the asset base, the speed of markets and the nature of the value chain. This is more a soft revolution, as Paul Romer likes to call it. The soft revolution has gone largely undetected over a number of years because our systems of measurement aren’t able to pick it up. And it is resulting in deeper transformations.

Mr. Eustace pointed out that they are still grappling with how to better structure these forces in order to make them more understandable.

The PRISM group’s key findings from their thematic workshops are:
A maturing global economy with surpluses is leading to a commoditization of products and services.
The shift from old mass production to more customized production leads to a shift from economies of scale to economies of scope.
The struggle for comparative advantage shifts from price factors to factors of differentiation such as intangibles. The real question is why the factors of differentiation, rather than price, have now become the key drivers. They’ve always been there, but why, in the last 20 years, have they taken on the preeminent role?
Therefore, firms constantly try to create, maintain or invade monopolies founded on intangibles as a major part of corporate strategy. This has major implications for IPR strategies in service industries, since IPR regimes are currently not suited for protecting those intangible assets.
Hidden investment in business intangibles is now as much as 100% of physical capital. This includes investments in R&D, ICT, training and organizational change. However, variations across European countries are enormous and we don’t really yet understand why.
The economic characteristics of the intangible economy are very different from the manufacturing era.
The corporate response to these shifts has been notable. The economic changes have not occurred suddenly, or from a common cause. But, over time, they have induced important changes in the architecture of the corporate value chain. Value chains have always had a limited life in competitive markets, but they are now eroding faster than ever before. Strategic responses include:
Having an effective “innovation machine.” This has become a must for every enterprise. The process started with the Kaizen process of continuous improvement and has grown into the key notion of companies continually re-inventing themselves at all organizational levels. The most successful organizations have ingrained this into the organizational culture. This need for change and re-invention has become relentless.
The constant search for new modes of monopoly rent. This is what keeps companies in business.
Ways of exploiting unique or difficult-to-replicate capabilities, competences and quasi-assets.
Utilization of networks as key strategic assets. But we don’t yet understand how to measure and evaluate the power and usefulness of networks.
The new dynamics of power have implications for the old corporate governance system. However, the group did not have time to delve into this issue in any depth.
In the area of the policy agenda, PRISM felt that the statistical, market and corporate tracking systems have not kept pace. There is no systematic information, only glimpses. The macro, mezzo and micro systems for picking up information on intangibles are not there. For example, the mezzo-level of the system of real-time information collection in the United States, as exemplified by the SEC, does not exist in Europe.

The issue of intangibles strikes at the very heart of the macro and micro reporting models. Stewardship of intangibles is now a major priority for companies, investors and regulators. There is a proliferation of guidelines and indicators. Yet holistic solutions to measurement and information are slow to mature.

There are steps that can be taken. The first is a better break-out of intangibles as input to the data collection process. This is a task facing those trying to get agreement on corporate reporting. The second is to refine the notion of using multiple layers of reporting, similar to what Price Waterhouse and other big accounting firms are advocating. This starts at the basic transaction level and moves up through higher tiers by adding more value-based indicators.

The business level data problem is especially acute when trying to understand the relationship between costs and value. With the rise of importance of intangibles, we no longer have good analytic models of business that show the relationship between what you put in (costs) and what you take out (value).

With respect to macroeconomic data, the PRISM group’s research, led by the former chief statistician for the OECD, shows that 10% of GDP goes unreported due to our inability to capture data on intangibles. Changing the GDP measure, however, is very difficult since so many policy systems and automatic policy decisions – such as government program funding – are tied to the number.

On the broader policy front, the policy community has relied on the “New Economy” paradigm to light the way for new policy orientations and levers. This paradigm has collapsed, leaving the policy community very confused and creating a vacuum of ideas. As a result, the attention of policymakers in Europe has turned to sponsoring serious investigation of the issue and funding research in the area.

The final result of the current PRISM research efforts was a set of six questions for EU policy makers:
Productivity of Knowledge-Intensive Services: What are the economics of the knowledge production function? What new tools do we need to measure the productivity of knowledge? What should we be measuring and tracking a) at the firm level, and b) at the System of National Accounts (SNA) level?
Intangible goods: What are the characteristics of the intangible goods sector – size, growth, dynamics? How should the SNA be reformulated to track intangible goods? Does the EU have a competition policy for intangible goods?
Accounting standards: Why are EU company accounts silent on assets and liabilities arising from licenses, leases, annuity and executory contracts, which form the backbone of the modern economy?
Investor communications: Do we need a European version of the SEC’s Edgar information system?
Company reporting: What is an appropriate role for EU in fostering a holistic corporate reporting model?
Single European Market: Given the EU’s strategic goal to increase R&D investment from 1.9% to 3% of GDP, to what extent – and in what areas – would a realignment of the EU’s IPR system help?
Stepping back, there appears to be a lot that can be done quickly to patch up some of these areas, such as IPR. There is an enormous amount of nonsense currently in the IPR system. It doesn’t take a lot of new analysis; it will take political will to carry the solutions through.

But there are fundamental issues that need to be addressed. The issues of productivity in the knowledge production function and the value chain in information services need more attention and analytical work. One important effort would be to try to understand the taxonomy of services. For example, how much of the basic value generation of running an airline is similar to that of running a law office? There may be only a handful of truly unique models of value generation in the services industry. If we can understand what those handful of basic models are we can make much more intelligent business and policy decisions, both within and across the industries.

There is also the difficult issue of intangible goods. There is a discrete area of products (goods) that can be stockpiled and traded but that embody intangibles and are affected by patents, copyrights, licenses, etc. Often referred to as the content sector, this is an area we really don’t understand. It needs to be fleshed out. It is a rapidly growing area that has enormous revenue and tax implications. Because it falls in between the existing categories of goods and services—and because it is so complicated to add a third category—this sector often gets ignored.

In the area of accounting standards, the new IASB standards on intangibles will become mandatory in Europe in 2005. The standards are similar to the US FAS 141 & 142, so there is considerable interest in Europe about the American experience. The need to capture data on intangibles may be the impetus for setting up an EDGAR-like system in Europe as a data repository. With the IASB requirements, it will now be possible to gather company data on a pan-European basis.

(The full report can be found at http://www.euintangibles.net)

Dr. Kenan Jarboe, President of Athena Alliance, moderated the Q&A session. He began with a comment about the importance of PRISM as an “observatory” or learning entity for the interested parties. This is similar to what Athena Alliance hopes to do. While he has similar problems with the hype over the concept of a New Economy, there clearly have been structural changes in the economy. These are not the changes that were highlighted during the dot-com bubble, but fundamental shifts in the nature of production.

We still don’t understand what is happening with the knowledge production function. We understand many of the inputs, and we understand some of the outputs, like the number of patents and copyrights – even though these are poor proxies for innovation. But we really don’t understand what happens inside the black box. We don’t know how those costs and inputs get turned into value added products and services.

Dr. Jarboe noted that one comment Mr. Eustace made during his presentation was especially provocative and interesting: that our IPR regime is unsuited for protecting investments in intangible assets. Dr. Jarboe asked Mr. Eustace to elaborate on that point, especially on the problem that a number of other forms of intangible assets, such as trade secrets, tacit knowledge, and networking effects, are not in any way protected by existing IPR.

Mr. Eustace replied that one of the key questions is determining what to protect and finding the line between public and private goods. The pharmaceutical industry and its relationship with the regulatory community is a case in point.

With respect to measuring outputs of the knowledge production function, the OECD is doing some very good work. They are trying to create a measurement model but it is very difficult because of the interconnected nature of the knowledge creation process.

Another issue is that of business process patents, which is to some extent a knee-jerk response to the fuzzy problem of protecting software. Europe is taking its time to watch what is happening in the US. He is not convinced that the business process patent is the right approach – it may be far too crude. This has been driven by the shift in the business service sector toward selling intangible assets – a shift from customized research for each client to selling a codified product to meet a client’s needs. Having a limited set of codified products helps streamline the business and increase productivity and profits. In that case, the key question is how to protect original models – and patents may not be the best mechanism. There are many other forms of IPR – some still being invented – that may be more appropriate.

Dr. Jarboe commented that it would be good to see the list of new forms of IPR that PRISM has uncovered.

He also noted the paradox of business services moving to an industrial model of codifying the answer and selling that codified (standardized) product, and away from the information economy model of customization. Are we simply moving to an industrialized model where services follow manufacturing, or is something different really going on?

Mr. Eustace replied that the business service sector is a maturing industry moving toward commoditization and systemization. That process still has a ways to go. Manufacturing is moving to another phase of exploring the tension between centralized and decentralized – such as the swing between conglomerate and highly focused organizational structures.

A key is understanding that there are many parallel paths. Policymakers have not yet come to understand the multiple models. For example, it took a long time for policymakers in the UK to understand that the services and non-services economies were out of phase – that there are many economies. The implication is that there is no one-size-fits-all policy.

One participant raised the issue of the policy implications. Outside of an academic or intellectual exercise, what are the implications for policy of the study of intangibles? Specifically, issues of control (regulation) and taxation are problematic. Are we better off simply letting that part of the economy run without government interference as the best way to foster the development of these intangibles?

Mr. Eustace replied that it is a difficult question. We can’t, however, simply ignore the situation and continue not to measure, even if measurement might lead to increased government scrutiny. We can’t afford to have such “black holes” in key measurements of our financial system. We are now looking at major corporate balance sheets and simply saying “we haven’t got a clue.”

There’s also the issue of the volatility of markets. The Brookings Institution has done very good work showing how information failures have led to increased volatility.

We have no choice but to move the frontier of measurement further along. The question is how we do it. We need a combination of policy experiments, such as FAS 141 and 142, and academic research to spur fresh ideas. And we need to attract new academics to this area of research.

One participant brought up Christensen’s book on the innovator’s dilemma. One point of this book was that the level of investment in new knowledge creation, i.e. R&D, was not the determining factor in innovation. It was the recognition by the leadership of the company of an entirely new way to use an existing technological development. Thus it was not the infusion of knowledge assets, but the recognition of the implications of that knowledge, which made the difference. Do we run the risk of missing the point by measuring the knowledge inputs rather than what really matters?

Dr. Jarboe noted that measuring the correct inputs is a major problem. And it is not just a matter of measuring R&D, because we fail to measure a number of inputs, such as skills. We can measure the raw amount of R&D expenditures on the input side and the number of patents on the output side, but this doesn’t give us a clear picture of the innovation process. Those measures may not be relevant, they may not have any real correlation with economic growth, and they don’t tell us how to improve the innovation process.

Mr. Eustace noted from his consulting experience that it has always been very difficult to make good decisions about the allocation of resources for innovation, especially in business services. It is important to recognize what we don’t know and to ask how far we should push these measures. Part of the process is to maintain a dialogue between researchers and policymakers so as to understand what is really useful.

Another participant brought up the role of technology and the problem of creating long-term fixed assets in a world of rapid change. Creating fixed assets requires understanding two fixed parameters. First, humans will still have certain needs, such as stimulating work and interaction with other humans. More and more routine jobs that humans do not like to do – including the supposed thought jobs of workers in cubicles – will be done by machines. This will continue to push the envelope of human activity into areas of intellectual endeavor well beyond the notion of goods versus services. Second, there are still only 24 hours in the day. People relate to their environment in both a time and a space sense – for example, by measuring distance to work in terms of time. These are fixed human traits. Ultimately the development of any intangibles needs to relate to these parameters.

Dr. Jarboe noted that the parameter of time is a key factor in the expansion of services. It is true, as Pat Moynihan used to say, that there are no productivity gains in the services because the minute waltz still takes a minute to perform. But with CDs, radio, and the ability to download music on the Internet, that minute waltz can be repeated for people over and over and over again – thereby greatly improving the productivity of those original musicians.

This raised the difficulties of categorizing products as goods or services. For example, listening to music by buying a CD is classified as a good, whereas listening to music by going to a concert is a service. The possibility was raised of switching to a categorization system based on human needs. It was suggested in the mid-seventies to classify the economy by ultimate end use – food, housing, transportation – rather than dividing between goods and services.

Mr. Eustace noted that the problems facing national economic statistics are even more difficult than issues of corporate accounting for intangibles. Not only are the classification systems outmoded, but all of the time series and historical data are based on those outmoded classifications. Going back to modify all of the past data is a horrendous undertaking. At every level there are huge problems. Companies, even in the same sectors, still do not report the same intangibles (such as R&D spending) in the same way. And then there is the political problem of changing the way in which a country reports on its economic situation.

With respect to classification of industries, Dr. Hughes mentioned efforts to structure organizations using IT links into affinity groups that would go from raw materials to finished products. Our economic statistics don’t really capture that.

Mr. Eustace stressed the need for continued research on the data collection problem in order to understand the shifts that are occurring – such as in business service and other sectors that rely heavily on intangibles. There are a number of attempts at reports – such as the UK’s Social Trends report – which try to capture these factors. But they are not enough. We need some very high quality thought pieces to set the frameworks and then lots of good statistics. The key in those high-level pieces is understanding how value is created and destroyed with these shifts.

Dr. Jarboe picked up on the point of the importance of creating value – especially in the current policy environment which is geared toward narrow and specific sectors of the economy. We may end up crafting policies which attempt to create value in a narrow area – based on our view of the economy as a producer of goods – that have unintended consequences elsewhere.

He also noted that there are a number of other reports, such as the PPI New Economy Index and the World Economic Forum’s Competitiveness report, that try to capture data on intangibles. Why don’t these reports get us to where we want to go?

Mr. Eustace replied that one problem is that these reports – such as the Department of Commerce’s Digital Economy report – are high quality but unfocused. They need to be condensed and synthesized. The data needs to be targeted toward helping understanding of what is going on and to be able to put it together in some sort of pattern so that it becomes cogent. That could then become the basis for developing longer term monitoring of the process.

A participant raised a concern over our lack of understanding of the knowledge production function and the creative production function. Without that understanding, issues such as copyright become unknowable. For example, if we don’t understand creators’ incentives for the production of new works, we can’t really craft policies to foster those incentives. Some research is being done on whether IPR – for example, software patents – serves as an incentive for further innovation and development, or rather as a means of protecting the market of existing IPR owners, including through so-called “patent thickets.” If we don’t understand the nature of the market incentives and can’t measure how those incentives work, then we don’t know how the market will respond to different kinds of IPR regimes.

He also raised the issue of the gap between theory and measurement. For example, data from the System of National Accounts feeds directly into models that describe the economy and are used to understand the different sources of growth – the standard growth accounting models. These models are predicated on certain theories of how markets behave. In this intangible economy, is the theory still correct? Do we need a better theory of economic growth in order to improve the statistics?

Mr. Eustace stated that this was out of his field of expertise, but suggested that one starting point for improving the models is to look at how we account for investments. The standard models of production are based on a value-chain that starts with R&D and works through to production and ultimately distribution. The notion was that you could ramp up the value generated in each stage of the value chain.

But in large parts of the economy, especially in intangible goods, there is no separate production. There is an economic production function – but not a separate activity known as production. For example, in software, the value generation lies in what is considered the R&D stage (the generation and testing of the code). Value is also generated downstream in the market distribution channels but not in the traditional physical production stage.

Yet, we don’t have a good understanding or theory of how this works. Here we go back to the issue of mapping the service sectors. We know that certain sectors are identical in the value generating activities – but we don’t have a clear view of the knowledge production function to be able to say where the value is generated. We don’t adequately measure R&D and we don’t know how to measure the downstream activities like setting up marketing channels. And therefore we don’t have a real idea of how much of the economy is made up of these types of activities.

We are even having a hard time conceptualizing the issues. This is one of the goals of PRISM – how to identify and synthesize the issue.

Dr. Jarboe noted the importance of the concept of moving from a transaction-based model to more of a field-based model. In the ’70s and ’80s, we talked about a national system of innovation, not just R&D. But we never connected all the parts. While we have studied the research process, we still have not connected it to the fact that much of the innovation flows from users back up the marketing channels.

A participant pointed out that we need to look at the ultimate needs of the customer; otherwise we can mistake what is of value. For example, what is the need for railroads, or for transportation? The entire role of increased information and IP is to create better ways of accomplishing meaningful objectives – not simply to improve a product.

And, as was pointed out, our national system of accounts does not cover that. It looks at transactions from the production process point of view, not the end user’s point of view.

A second point made was that technology not only improves what we are currently doing but also makes possible that which was not possible before. The more profound issue is the lack of imagination to see what is not already known. How do you measure that?

There were more patents for horseshoe improvements issued after the automobile was invented than before. How do you then value all of those patents granted just as a new technology is developing to make them obsolete?

One participant noted the PRISM quote from Kelvin about the importance of measurement. A counter to that is the quote, possibly from Einstein, that not everything that can be measured is important and not everything that is important can be measured.

Mr. Eustace agreed that without a conceptual framework, we have no way of knowing what is important and should be measured. But the reality is that we need measurement systems and that we need to improve our existing measurement systems without destroying all confidence in them.

The PRISM group came to the conclusion that there were four categories of resources, which is useful as a heuristic. At the softest end are latent resources, which are unknown and maybe unknowable. They cannot be measured, but can at least be identified and discussed. Next are intangible competencies, which are usable and partially codifiable, such as organizational structures, even if they are difficult to recognize and measure. Then there are intangible goods, which can be measured and valued, such as IPR, long-term contracts and royalties. At the hardest end are conventional tangible goods, which can be fully measured and traded.

There are difficulties in all these areas. Patents are difficult to value because of the problem of separating the knowledge out from other factors, like the workforce. In many cases the licensing out of patents is just a mechanism for transferring risks – where others assume the cost/risk of developing the patent into a revenue producing product.

On top of this, we need to move to a fair value accounting system and learn from companies’ experience as they move into it.

Dr. Jarboe pointed out that the value of donated patents has become an issue with the IRS and Senate Finance Committee as they work through the latest changes in the corporate tax code.

One participant questioned what the policy implications of the report were and how the lack of knowledge of intangibles hurts policymaking. For example, one recommendation is for a European version of the SEC’s EDGAR system for financial reporting and another talks of improving access to capital by SMEs. How would these policy recommendations work?

Mr. Eustace replied that the idea of a European EDGAR system or some standard financial disclosure system for Europe (although not necessarily enforcement) is inevitable. The need for transparency in the data for policy decisions is overwhelming. There have already been a number of studies making this point, such as the Winter report. But, just putting together the legal framework for collecting the data will take some time.

Concerning the issue of access to capital, there is a section on this in the PRISM report. The problem is that banks and the financial system do not explicitly and systematically recognize intangibles. For example, credit scoring models do not explicitly include any sort of intangibles. But factors like quality of management, potential market share, sustainability, etc., are all implicitly included. PRISM thought that some transparency would be useful – maybe a best-practice code at the European level. The goal right now is to inject this awareness of the issue of intangibles into the discussion over credit and capital allocations – such as the Basel II bank requirements.

It is also especially important in looking at the activities of the rating agencies – whose activities are not very well scrutinized. These entities need to be incorporated into the process, especially as the role of debt financing has increased in Europe over the past 20 years.

Dr. Jarboe noted that the research of our earlier speaker in this series, Jon Low, showed that stock analysts very much paid attention to intangibles, even if they didn’t know how to quantify or measure them.

One participant noted that the rating agencies have been involved in various financial industry activities on new regulations. Their concerns, however, seem to be different from other financial institutions – and their perspective is different as well. This difference becomes clear from looking at their Congressional testimony on the Enron scandal.

Participants noted the importance of intangibles as a basis for access to capital. If intangibles cannot be used to raise capital, companies may be shut out of the capital market.

It was pointed out that for certain types of intangibles, there are operating secondary markets. For example, copyrights to music are routinely bought and sold – even packaged into portfolios – with valuation based on expected royalties.

One participant raised the problem of how to account for R&D, especially if the technology is highly speculative. Much of the current system requires the expensing of current costs that are really not operating costs but investment in creating new intangible assets. The difficulty lies in measuring the future value of the benefits that consumers might receive.

Mr. Eustace noted the practical difficulties of capitalizing all expenses that go into intangible assets, including the way the accounts can be manipulated. Because of these problems, full capitalization will probably never be accepted. What is needed – and is very possible – is a better break out of where the money is being spent (regardless of how it is treated for amortization purposes). Details of spending on intangibles would help overcome our lack of basic knowledge in the area – and help get rid of the bad notions, such as that there is no R&D in service industries.

Mr. Eustace closed the session by noting that the discussion didn’t even talk about the macro-intangibles – that is, the frameworks that national governments have, or have not, put in place to recognize and deal with the policy questions surrounding intangibles. We have put into place the ability to look comparatively at frameworks across countries in a number of areas, for example, labor law and tax base. But we don’t have such a conceptualization when it comes to intangibles. In part, this is one of the future tasks facing PRISM.

1. Margaret M. Blair and Steven M.H. Wallman, Unseen Wealth: Report of the Brookings Task Force on Intangibles, The Brookings Institution, Washington, DC, 2001.

2. High Level Expert Group, The Intangible Economy – Impact and Policy Issues: Report of the European High Level Expert Group on the Intangible Economy, European Commission, Brussels, October 2000.

The Coming Bust of the Knowledge Economy

Steven Weber of the University of California at Berkeley argued that the production system for knowledge goods is undergoing not just a dramatic technological change but also a shift in mindset that is fueling an economic revolution. This shift has already occurred in the music industry. The meltdown of the telecommunications industry illustrates the magnitude of the risk. And the software and pharmaceutical industries are next to face the threat. As illustrated by the open-source software movement, this new system reverses the traditional notion of intellectual property protection from a right-to-exclude to a right-to-distribute. While the change may increase innovation and productivity, it threatens to undermine, in a Schumpeterian fashion of creative destruction, current investments and the profitability of the existing industries.

The discussion focused on a number of points raised by Professor Weber’s presentation.
Much of the discussion centered on the possibility of an open-source type research process in the pharmaceutical industry. Given the high level of human risk and the resulting regulatory controls in pharmaceuticals, the process would be different. As health care in general moves to a much more personalized and information-intensive activity, broader issues of information regulation – including intellectual property rights – are likely to emerge.


The speaker at this policy forum was Steven Weber of UC Berkeley and the Berkeley Roundtable on the International Economy (BRIE). A leading expert in risk analysis and forecasting, Dr. Weber is Associate Professor of Political Science at UC Berkeley and directs the MacArthur Program on Multilateral Governance at Berkeley’s Institute of International Studies. At BRIE, his research focuses on the political and social change in the knowledge-based economy and the political economy of globalization. Professor Weber actively consults with major corporations, non-profits and government agencies. His latest book, The Success of Open Source, will be published this Fall.

He was introduced by Dr. Kent Hughes, Director of the Project on America and the Global Economy at the Woodrow Wilson Center.

Professor Weber began with four propositions:

  1. Open-source software is remaking the economics of a significant piece of the information processing industry;
  2. Open source is not just a sui generis phenomenon, but a more general illustration of how new production processes arise in complex knowledge goods;
  3. These new production processes are shifting where value-added is found in the production chain, turning highly expensive goods into commodities, and challenging current notions of what is “proprietary” (to the extent that he predicted that proprietary operating system software would become an old-fashioned, “quaint” notion in as soon as ten years); and
  4. While this process of Schumpeterian creative destruction leads to economic progress, the shift has political as well as economic and technological implications.

Professor Weber cautioned that this evolution entails risks that must be defined and prepared for, the largest being the destruction of business models that largely depend on provincial and antiquated notions of what is proprietary and can be controlled. The repercussions for these business models are substantial: the destruction of huge amounts of invested capital and stock valuation. Those heavily vested in these models will “fight tooth and nail” to halt these economic and technological changes.

Telecom and music

The background and justification for these propositions can be found in the recent history of the telecommunications and music industries and in the current situation of the software and pharmaceutical industries.

A difficult but inevitable truth is that there is no viable business model that guarantees the return of IT stocks in general to their valuations before the bursting of the technology bubble. Some companies over-invested in the 1990s and are left with unused capacity and a significant burden of debt. For instance, telecom companies invested heavily in traditional network capabilities just as the Internet began to replace them. Some of these same companies, and others, built out huge amounts of bandwidth on the erroneous assumption that there would be enough content and demand to justify such a huge supply. Substantial capital was borrowed to build cable and fiber-optic bandwidth that is now, at most, an inexpensive commodity.

Other industries are still struggling to adapt to the digital revolution. The music industry faces a substantial decline in CD sales and revenue. Unlike the “meltdown” in the telecommunications industry, Professor Weber believes that the music industry faces a “corrosive loss of legitimacy and confidence.”

Professor Weber argues that it is not just dramatic technological change but rather a shift in mindset that fuels economic revolutions. He reminded the audience that Peter Drucker argued in the nineties that new ideas, not a fascination with technology, fuel economic growth. Drucker compared the nineties to the Industrial Revolution, when it was not the steam engine itself but major organizational innovations – the factory, the corporation, the daily newspaper, the trade union – that led to rapid growth during and after that period.

Napster, a relatively simple piece of technology, profoundly changed how people thought about the music industry by making the record companies’ business model transparent to their end users/customers. Professor Weber asked 150 students in one of his classes last year if it was legitimate to pay for music. Only three students (the slightly older ones) raised their hands. He believed that this was a mind shift that would not easily be changed.

Professor Weber felt that intellectual property politics led to legally inconsistent and problematic solutions in the Napster case. Under the non-circumvention clause of the Digital Millennium Copyright Act (DMCA), it is not only illegal to break an electronic lock on a protected digital good, it is also illegal (with few exceptions) to build a software tool that can be used to open an electronic lock, regardless of the builder’s intentions. As a result, technologies posing any threat to the copyright regime are being constrained, rather than conduct in violation of copyright law being punished.

This preemptive law undermines a major source of innovation for the economy in order to protect the $12 billion music industry. More fundamentally troubling, it gives more protection to a single piece of intellectual property (in this case a song) than it currently gives to much more personal information, such as the information encoded in one’s DNA. He argued that while another song might be written at any time, once someone’s DNA has been decoded and made public, the harm that might be done is irrevocable.

Professor Weber also referred to a proposal by Pam Samuelson (UC Berkeley) and the Electronic Frontier Foundation that would levy a tax on hard drives or broadband connections and distribute the tax-generated funds to copyright holders as compensation for file-sharing. He considered this a “bizarre, stunningly inefficient, and dysfunctional” idea.

Software and pharmaceuticals

Professor Weber then went on to discuss the new threats to software and pharmaceuticals. The software and pharmaceuticals industries are both fundamental to the knowledge economy. The debate on the current IPR regime revolves around the breathtaking evolution of software and its related products and services. Meanwhile, the pharmaceuticals industry represents a large percentage of the U.S. GDP and exports and also has the potential to dramatically affect society with groundbreaking treatments for cancer and other life-threatening illnesses. Still, Professor Weber maintains that it is a “strange and discomforting time” to be involved with either industry.

Professor Weber has detected two distinct views on the future direction of both industries, but especially of the IT sector. Many people optimistically believe that the current declines in stock prices are temporary setbacks and that the markets will return to their mid-nineties positions after a few years, but without the equity bubble that proved to be a distorting distraction. The second view, with which Professor Weber agrees, is that the current economic condition is a major period of industrial reorganization that will fundamentally change the way the markets value these industries.

The software industry represents about two and a half percent of U.S. GDP. Although the whole IT sector was affected by the bubble economy of the 1990s, Professor Weber distinguished between the grossly overvalued dot-com companies and the less irrational overvaluations of traditional software companies that create products with a measurable value for their investors and consumers.

The challenge in the IT world comes from open-source software. Open source is a fundamentally different kind of production process for complex knowledge goods. It has three basic characteristics:

  • the source code that allows people to understand and modify what the software is doing is distributed freely with the software;
  • anyone can redistribute the software without paying royalties to the author of that software; and
  • anyone can modify the software and distribute that software under the same terms.

Open-source software operates under, and is thus pioneering, a fundamentally different IPR regime that is the direct opposite of the proprietary software companies’ strategies. It is characterized by an owner’s “right to distribute, not to exclude,” with only one condition: other users cannot be constrained in how they alter the code. This inverts the current IPR regime, in which “the core notion” holding together the existing proprietary software model, and justifying its valuations, is that the owner of the software code can rightfully exclude others according to its own terms. At a basic level, open-source software turns expensive, protected, service-intensive products into commodities. Since the computer code is open and inexpensive, it also broadens the system-maintenance market. According to Professor Weber, “open-source software is not just a fluke. It is a fundamentally different kind of production process for complex knowledge goods.”

Professor Weber strongly emphasized that this new form of production means new sources of value, but it also means the destruction of old monopoly rents. Companies such as Microsoft, Oracle, and Sun are thus directly challenged by the open-source method of software creation. For example, about forty percent of businesses have turned to Linux running on inexpensive Intel chips to handle the same kinds and magnitude of operations they once ran on much more costly Sun software and hardware. Apache, an open-source program that runs Internet servers, holds 65 percent of the server market. Linux also now runs supercomputer clusters, thereby cutting high-performance computing costs by 30 to 50 percent. Globus is an open-source software project that harnesses computational capacity across the Internet to solve problems. TiVo, Sharp PDAs, and household goods either use or will soon use Linux. Open source is turning what used to be very expensive, highly protected, service-intensive products into commodities – and the market around these commodities is becoming significantly more competitive.

Professor Weber maintained that this actually helps the IT industry because of the fast rate of hardware development and the slow rate of software development. Increasing the rate of innovation and development in software technology would have a disproportionately huge impact on both the IT sector and the entire economy.

Open source has the potential to dramatically remake the software industry – because of both its adaptability and its measurable success in the market – in the way that Napster altered the music business. More significantly, whereas Napster was simply a distribution system for an existing product, open-source software is a free-standing production system that overturns the existing rules of intellectual property. The change is significant not only for software companies. In many industries, such as banking, companies have invested their resources in capital-intensive, proprietary, specialized software that is the basis of their competitive strategy and therefore of their market valuation. The availability of non-proprietary, less expensive open-source alternatives may cause those companies’ valuations to plummet. The result would be the bursting of another financial bubble created by the overvaluation of proprietary software.

Professor Weber also discussed what he viewed as the politically and conceptually fragile IPR regime behind the pharmaceutical and biotechnology industries. Conceptually, the success of open-source software undercuts the argument that the patent system is the only way to ensure investment in innovative activity. There are also well-known problems with the patent system, including the inefficiency of patent pools and the use of patenting as a business strategy for litigation rather than innovation.

Politically, Professor Weber believes that these companies face a number of challenges:

  • The visibility of their business model, as in the music business, foreshadows public pressure and perhaps ultimately, structural changes. Current medicines target a very limited set of human genes while the human genome project’s progress allows for possibly groundbreaking medical treatments that will require public-private and intra-industry collaboration and expansion. Even in the U.S., the pharmaceutical industry’s current business model and reputation do not have the legitimacy and support to obtain and sustain such partnerships;
  • The current debate in developing countries over the price and availability of drugs will soon begin in developed countries. He pointed out that differential pricing for Africa will lead to pushes for lower drug prices for lower income individuals in the U.S. as well;
  • He also foresees drug companies being targeted as villains of public health problems, with some parts of the pharmaceutical industry even being compared to the tobacco industry;
  • With the September 11, 2001 attacks, the anthrax scare, the U.S. government’s threat to take Cipro off patent, and the possibility of a SARS-like outbreak somewhere in the U.S., the pharmaceutical industries can no longer rely on the federal government’s protection for their intellectual property rights.

The pharmaceutical companies typically focus on the development of one or two major drugs. This results in the constant threat of a “biotech bomb” where stock prices collapse as soon as a key drug fails to secure FDA approval or does not prove as effective as initially expected.

Professor Weber pointed out that many people in the pharmaceutical industry, while aware of these problems, fear severe repercussions from Wall Street should they be the first to seek a new business model. The result will not be a meltdown, as in telecom, but rather a possible corrosive loss of confidence, as in the music industry.

Public policy

Professor Weber believes there is good news, bad news, and contingent news in this situation. The bad news is that the technology companies will not return to the same high valuations of the nineties. He believes that the IT industry faces a period of persistent lower growth and that negative financial shocks will continue, leaving policymakers with difficult choices:

  • Pension funds must find different places to invest the capital they had previously put in the technology and pharmaceutical industries;
  • Technology outsourcing to India, China, Taiwan, and other countries will become a target for protectionist policies. At the same time, outsourcing will raise national security concerns. Unlike before, technology companies have been politically mobilized;
  • Government-funded research and development in technology from homeland security initiatives will have a disproportionate impact on technology trajectories in areas that were previously driven by the consumer sector (e.g. wireless networking, ubiquitous sensors, distributed supercomputing).

The good news, Professor Weber believes, comes from the opening of new opportunities. The creation of commodity infrastructure builds the foundation upon which new industries can grow. The famous example is that of the railroads and Sears: the overcapacity that drove down railroad prices enabled a new value chain when someone figured out that it was now affordable to ship goods to consumers who purchased them out of a catalog. IBM is moving decisively in this direction with its embrace of open source and its transition to becoming a global services company. Likewise, Professor Weber expects to see the emergence of service providers in the pharmaceuticals industry, perhaps following the approach of a cosmetics company that successfully provides specific products to targeted segments of the population.

The contingent news is that policymakers can either encourage or hinder this process. Professor Weber discouraged “buying time for systems that do not degrade gracefully.” He argued against desperation leading to “defensive, rigid, and inefficient” actions such as the hard drive tax proposed by the Electronic Frontier Foundation, eternal copyright, and the non-circumvention clause of the DMCA.

Professor Weber believes that the government must inevitably subsidize the reconstruction of these industries, not because of their size, but because they are too essential to the infrastructure of a modern economy. A good federal subsidy strategy will permit experiments in unlicensed space (e.g., WiFi) while simultaneously setting clear boundaries for areas that require regulation in the interest of society (e.g., public-safety communications bandwidth). The government also needs to find a solution to the problem of “first-mover disadvantage,” whereby those who try to move to a new model are financially punished for writing off their old investments. At some point, the government must stop protecting the direct stakeholders and accept the reality of industry losses.

Finally, Professor Weber concluded with the admonition that we are still very, very early in the evolution of industries like software and pharmaceuticals/biotech. These are not even close to being mature industries, and we must never forget that fact as we think and talk about policy in these areas.


Dr. Kenan Jarboe, President of Athena Alliance, moderated the Q&A session. He began by noting a change in one of the political and policy dimensions, specifically the protection of monopolies. Other analysts, notably Rob Atkinson of the Progressive Policy Institute, have made the point that many old monopolies are attempting to protect themselves from changes brought about by new information technologies – for example, realtors and wine stores trying to protect themselves from web-based competition. Yet Professor Weber seems to be saying that new industries – such as software – have matured enough that there is a danger they will seek protection from new forms of economic activity, such as open source.

The previous Athena/Wilson discussion with Kurt Ramin brought up the issues of how R&D costs are to be accounted for and of how accounting rules might affect the outsourcing of R&D. Under the new International Accounting Standards Board (IASB) rules, development costs – once a demonstrable product has been created – are considered an asset and must be capitalized. In the U.S., all R&D costs must be expensed. There are benefits both ways – more expensing means lower profits but also lower taxes. Under both rules, if you buy a patent from outside, it can be capitalized (thus lowering your expense), whereas if you spend the money in-house it must all be expensed. There was some discussion about whether, in the case of pharmaceuticals (where much of the R&D is already done outside), this would lead to further outsourcing of R&D. But, according to Professor Weber’s analysis, this type of outsourcing only works if there is a strong IPR regime under which you can buy the rights.

Professor Weber responded to the question of possible changes in pharmaceutical R&D by noting that the industry’s strength is in production, marketing, and managing the regulatory process. In R&D, its strength is in managing the risks inherent in the scientific enterprise. Small biotech firms are now operating as the de facto R&D arms of the large pharmaceutical companies. That structure could be put at risk by changes in both accounting rules and IPR – creating an inability to buy R&D in a way that it could be protected. While profoundly altering the industry structure, this could also open up the next level of innovation. Partial findings by biotech firms might be open to development by the next generation of pharmaceutical companies in new ways that can be customized for new markets. It leads to the service model of pharmaceutical companies mentioned earlier. The question is how the new business model could be made sustainable: in the current system, the biotech firm sustains its cash flow by selling its latest findings, and it is unclear how this would work in a different system.

Jarboe pointed out that this might become even more problematic if the biotech industry moves to a financial model of raising funds through securitization of its IP, rather than through venture capital. What happens if accounting and IPR rules move in the directions of increasing the risk at the same time that the financial model is moving to use securitization to reduce the risk?

Professor Weber noted that the result might be pharmaceutical companies bringing R&D back inside the company — not necessarily because of the ability to own the knowledge, but because internal knowledge of and insight into the product would give a short-term marketing advantage. The situation may be analogous to the consumer electronics industry of the late 1980s and early 1990s, where control over the basic commodity technology gave companies a short-term development and marketing advantage – at least until the next innovation came along.

Dr. Catherine Mann of the Institute for International Economics raised a note of concern about comparing the computer/software and pharmaceutical industries. The R&D process is very different, as is the time-to-market: a three- to four-month advantage makes a difference in computer chips; it does not in pharmaceuticals, where there is a long clinical-trial period. Another difference lies in network externalities: IT has high network externalities, while pharmaceuticals – with perhaps the exception of HIV/AIDS drugs – have few.

Both industries operate under international IPR rules, specifically the trade-related aspects of intellectual property rights (TRIPS) agreement under the World Trade Organization – which are one-size-fits-all rules. These international rules constrain what national governments can do with their public policy in this area.

Finally, much of the discussion was at the micro level. The real concern should be how changes in the business model, the price of the product, and the way the product is used in the economy affect the performance of the macroeconomy and jobs. Is the cost of this disruptive change, from the standpoint of the financial markets, large enough to cause macroeconomic effects?

Professor Weber commented that the time differences between software and pharmaceuticals are important points. The difference holds for drugs in the narrow sense. But in the larger area that includes medical devices and genetically modified food organisms, the time to market issue is much shorter. And the trend is toward the two industries becoming much more similar rather than diverging.

On the macro-economic issue, the question is how to look at the situation. From a purely macro perspective, value shifts from one part of the economy to another are part of normal economic evolution. Some people are hurt; some are helped. However, the shifts can have real macro-economic effects. For example, the concern over Microsoft stems from the fact that it accounts for half the market valuation of the software sector. If that valuation is the result of an IP valuation bubble even in just that one company, the bursting of that bubble may have large spin-off effects on the entire sector and the whole economy.

Nor is it clear that the new business model being created will return the same monopoly rents, just to a different set of economic actors. There may be a similar industry concentration in a commodity software industry, but the valuations might be much lower than in the old proprietary software model. The result is a destruction of the old system of monopoly rents. This may be good from the macroeconomic perspective that monopoly rents are bad, but the transition process can be extremely painful.

As Kent Hughes then pointed out, monopoly rents are what supposedly drive the innovation process. The process is one of a series of monopoly rents which are eroded over time and replaced by new monopoly rents (due to new innovations).

One participant noted that the cost of innovation in IT is relatively low – with the open-source process actually enhancing the capacity for innovation. In the biotech industry, R&D is a capital-intensive activity, and in a commodity industry, resources are not available to carry out capital-intensive activities. A shift of pharmaceuticals and biotech to a commodity-type industry would therefore have a large macro impact on the innovation system.

Another participant stated that he was less concerned about IT and more concerned about pharmaceuticals. In IT there already is an alternative source of innovation and technology in the open-source movement. In pharmaceuticals, there is no such alternative source of innovation. That gives the pharmaceutical industry an enormous source of political strength for protecting IP in the sector.

Professor Weber commented that the cost of innovation is important, but not as normally thought of. The cost of innovation in pharmaceuticals is largely a function of the regulatory environment. Take the situation in the larger life-sciences industry – specifically the case of genetically modified seeds. Currently, a farmer buys seeds that are tied to a specific pesticide, such as Monsanto’s Roundup. Imagine a different IP regime in which farmers buy a genetic sequence licensed under something like the open-source general public license. They also buy tools to modify that sequence and compile it into seeds customized for their particular fields. They could redistribute those seeds to others with similar types of fields, and they could hire companies for specific services (pesticides, herbicides, fertilizer, irrigation) around that specific configuration of the customized seed and their particular ground characteristics.

In this alternative situation, sources of innovation are very different. Innovation is happening in farmers’ fields with real-time experimentation, rather than in the Monsanto R&D labs. The configuration of value-added is very different – centered more around providing the auxiliary services. The monopoly profits are very different. And the product life-cycle is very different, with products moving in and out of the system very quickly.

As one participant then pointed out, the predicate for this type of system is a relaxed regulatory system. It may be appropriate in some areas, but not in areas of modification of DNA and human health. Such a relaxed regulatory system is unlikely in the biotech areas. Open source is logical in the software area, but not in the biotech area where the consequences of a disastrous outcome, however remote, are enormous.

Professor Weber agreed with this assessment and noted that this is exactly where the discussion needs to focus. Where are the areas where strict regulation is required?

Dr. Jarboe also noted that the line between strict regulation and experimentation is already fuzzy because of the practice by some medical doctors of prescribing drugs, originally approved for one application, to treat conditions for which they were not originally approved. In addition, companies such as IBM are already moving to position themselves to be the information provider for personalized medicine. As the information revolution continues, the regulatory system will be modified and will modify the innovation system in return.

Dr. Mann raised the possibility that rather than pharmaceuticals becoming more like software, software would become more regulated like pharmaceuticals. Certain IT innovations would not be allowed and certain information and information services would be restricted because of issues of homeland security and of privacy. IT services would then become a highly regulated sector.

Professor Weber noted that in this scenario the big IT players become the major interface with the regulatory process – similar to the way that the large pharmaceutical companies manage the FDA regulatory process. This is not an implausible scenario. The pharmaceutical companies then might have a competitive advantage over the IT companies in managing the regulatory process.

Dr. Mann also noted that the pharmaceutical industry is in protectionist mode, the IT industry is not. From a public policy perspective, the IT industry is still very much in an open market position, not even following the “protect the market” stance of the music industry. However, Professor Weber noted that there is a fair amount of protectionism of business models centered on the war over the open source model. For example, Microsoft launches fierce counter attacks every time a local government or country announces that it is going to switch to open source software. While it appears that Microsoft is losing the battle against open source, he expressed a concern over the expansion of this protectionist mindset.

Dr. Jarboe raised a question about where the battles for openness were being fought. Microsoft may be losing the battle to protect operating systems, but other issues are heating up. There is still the issue of business process method patents, where a number of lawsuits will soon reach the courts. There are international efforts to constrain either the use or flow of information. With these other forces pushing toward greater regulation, are the principles behind open source strong enough to push back?

Professor Weber responded that a movement in the IT industry toward a highly regulated and patented system would end up creating more problems. Such an overprotection of the old system would destroy the innovative potential of the industry. The industry may generally understand this – but it also continues to be worried about the opposite scenario where there is no IP protection at all and the industry falls apart.

Dr. Hughes returned to the question of the different nature of the pharmaceutical industry, specifically the role of the public sector. Much of the basic research, and the training of the industry’s scientists, is paid for by the public sector. If the industry changes to this more open-source model, what is the role of the public sector in the R&D process? Does publicly supported research become the source of the pharmaceutical equivalent of Linux?

Professor Weber responded that much of the innovation already comes out of the public sector and universities. The small biotech firms, at least on the West Coast, are an adjunct to academic work. If market-based funding based on the creation of a proprietary product is reduced, R&D activity may return more to academic research environments. It would be a different activity, but not necessarily a less innovative one. Dr. Hughes pointed out it might be analogous to the role that university-based Engineering Research Centers play in the computer industry.

At this point, a participant referenced an EU-US conference on open source held at George Washington University earlier this year. At that conference a step-wise model of IPR was proposed (an open source escrow plan) whereby a program would become open source after it reached a certain valuation. It was also noted that the legal discussions in the Microsoft lawsuits were flawed because they blurred the difference between operating systems and applications. The distinction is most important in the creation of new products; open source may be able to improve the process of software creation.

Professor Weber emphasized the problems of innovation and productivity in software – and that certain business models make those problems even greater. Thus, any improvement in the way in which software is created would have huge macro-economic (general public welfare) benefits. Hardware has progressed tremendously; software has not. Improvements in software creation are therefore a large source of new value-added. However, that value will be created in different parts of the economy, and the shift will be messy and difficult.

The point was also raised about bundling the technology. The industry is beginning to move to a less technology-driven and more user-driven model as it stresses information services over information technology. But the transition to the service model will only occur with a great deal of displacement.

The issue of migration of technology out of the U.S. was raised by another participant. Professor Weber answered that the process was one of dispersion rather than migration. Whether this is a dispersion domestically or internationally may not matter as much as the fact of the dispersion itself. It will become a political issue insofar as it affects the concern over retaining high-wage jobs in the United States.

Dr. Jarboe noted that the whole concept of controlling either migration or dispersion of an information good – which is non-rival and non-excludable – is only possible in strong IPR legal regimes that allow for the information good to be constrained. Otherwise, the information will flow freely of its own accord. The issue becomes one of how to capture the economic rents – by creating a proprietary right, by exploiting first-mover advantages based on the information, or by becoming a service provider building upon a common information base.

Dr. Mann brought up the issue of doing the R&D versus gaining the benefits from that R&D. European pharmaceutical companies are deliberately coming to the U.S. to do their R&D. They believe there is a better climate for R&D here. There are externalities of university arrangements that make it more cost-effective. There is a health care system that will pay more for drugs. Thus, there are lower costs and higher profits for developing the drugs here as opposed to Europe. But, then they take the drugs back to Europe to be sold there. U.S. citizens have essentially paid for all of this – and have reaped the benefits. But the Europeans also get the benefits without paying the costs. Thus, there is a set of economic understructures that influence the international flows of intellectual property.

Professor Weber pointed out that this process describes a mature industry with a relatively slow product development cycle. Most industry profits come from very few drugs. But the industry is likely to change to one where there is a faster rate of innovation.

Dr. Mann asked how this might happen in a political climate where there is a thrust to greater regulation, especially in Europe over Genetically Modified Organisms (GMOs).

Professor Weber responded by saying it would happen in the U.S. first. But it will still matter where the basic research is done. In order for it not to matter, the following would have to be true: that the raw information that comes out of the basic research moves simply and quickly across national borders; that the tacit knowledge gained in the research process is not important; and that the companies will not try to tie up the information and use it to earn high profit margins.

Dr. Jarboe pointed out that this is very close to a description of the open-source model, whereas the traditional product life cycle model describes a situation where R&D is done in the U.S. first and the product is then sold in Europe. That model changes dramatically if the pharmaceutical industry moves to an open-source service model where there is an open knowledge base that is customized at the service delivery point.

Dr. Mann pointed out that the customized system describes information-intensive health care delivery as an IT industry. It may not describe the narrow R&D-based product-specific pharmaceuticals/biotech industry. When the IT-driven health care delivery system runs up against restrictions on information, then it becomes closer to the current highly regulated pharmaceuticals model. Under that highly regulated system, it becomes very difficult to operate an open-source model.

Dr. Jarboe speculated about what might happen when the key product is the information – not the pill. What happens when the most important factor is not the economies of scale in making the pill, but getting information about what goes in it to the point of delivery – say, the local pharmacy that can make it on site? It is the regulation of the information, not the product, that becomes the key intervention point.

Dr. Hughes raised the issue of changes in both the IPR and trade regimes in pharmaceuticals under an open-source approach to research, where companies don’t need to recoup profits by selling overseas, don’t bear the costs of extensive clinical trials, and generic manufacturers can sell anywhere. Do we end up with a graduated pricing system – like a differentiated tariff system?

Dr. Mann pointed out that differential pricing is difficult at the pill level because the product can be bought over the Internet, creating a gray-market problem. In such a situation with no IPR protection, the pharmaceuticals industry claims that there would be no research on new drugs. Many of these new drugs will be so-called life-style drugs, which people will be reluctant to do without.

Professor Weber noted that the pricing issue comes about because the business model for producing those drugs is not transparent. People don’t understand why, if a drug costs eight cents to make, they can’t buy it for nine cents.

Dr. Jarboe commented that the same lack of transparency in the music industry is why college students don’t see the need to pay for music. But Professor Weber stated that in the music industry, it is a generational change by a group (students) who have a good understanding of the issues.

Professor Weber went on to comment on the issue of a system that has an innovation regime as its foundation but a greater level of regulation at the delivery point. Such a system could create huge problems for countries facing acute long-term health care issues due to aging populations. The current system has an implicit assumption of a continued high rate of innovation in dealing with issues of chronic care.

Lynn Sha of the Wilson Center raised the question of regulation. Even in areas that are supposedly tightly regulated, there are places/countries where unregulated activities take place. One of the lessons of the open-source model is that it is better to have these activities out in the open where they can be monitored. If the technology is not developed in the open, it will develop on its own without any public guidance.

Professor Weber noted that this points out that regulation is very difficult with new technologies. It is possible to set up a microbiology lab for roughly $10,000.

Dr. Jarboe raised the concern that as health care moves into a more information-intensive system – and that information becomes more widely available – the regulatory challenge becomes greater. It is more and more important that, as this series of discussions has pointed out, we look closely at the new rules of the information age (regulatory, accounting, IPR, etc.) – and that we work hard to get those rules right.

Audio and written summaries of earlier forums in this series are available at the Athena Alliance website at www.athenaalliance.org.