The Flight of the Creative Class

With Richard Florida and comments by Rob Atkinson


The I-Cubed (Information, Intangibles, Innovation) Economy runs on talent and creativity. No one has made that clearer than Richard Florida, whose 2002 best-seller The Rise of the Creative Class received The Washington Monthly Political Book Award for that year and was later named by Harvard Business Review as one of the top breakthrough ideas of 2004. That book explored the role of creative individuals in the economic life of America’s cities and towns.

In his new book, The Flight of the Creative Class, Florida looks at the growing global competition for talent. According to Florida, the United States, which has long been the preferred destination for the world’s top entrepreneurial, innovative, scientific, artistic and cultural talent, is for the first time losing this key historical advantage. U.S. immigration laws are driving away foreign talent, while improved opportunities and greater tolerance of alternative lifestyles are luring some of the best and brightest from the United States. Florida argues that unless the United States can attract, retain, and grow top-notch creative talent, the increasingly intense competition will continue to weaken the U.S. economy. This policy forum explored these provocative ideas and discussed ways to counter these trends.

Professor Richard Florida is the Hirst Professor in the School of Public Policy at George Mason University and a nonresident Senior Fellow at the Brookings Institution. Previously, he was the Heinz Professor of Economic Development at Carnegie Mellon University, and he has been a visiting professor at MIT and Harvard University’s Kennedy School of Government. Dr. Rob Atkinson is Vice President of the Progressive Policy Institute and Director of PPI’s Technology & New Economy Project. He is the author of the New Economy Index series, which looks at the impact of the new economy on the United States as well as on state and metropolitan economies. He also authored The Past and Future of America’s Economy: Long Waves of Innovation that Power Cycles of Growth.

– – –


Dr. Florida and Dr. Atkinson were introduced by Dr. Kent Hughes, Director of the Program on Science, Technology, America, and the Global Economy at the Woodrow Wilson Center.

Dr. Florida began by noting the critical importance of the work produced by Dr. Atkinson and Dr. Hughes in the last two or three decades. He said it is interesting to be surrounded by people who know competitiveness and have worked on legislation to promote it far more deeply and for longer than he has, but added that he has “dabbled” in the issue since he was a graduate student.

Dr. Florida began by saying that we have a big problem on our hands. He has studied competitiveness issues mainly from the high-tech side and tried to understand the Japanese challenge, but has never before been this nervous. Right now, the United States faces a competitive challenge unlike anything we have ever faced before. In The Flight of the Creative Class, he tried to lay out a threat that is far, far greater and deeper than the emerging giants of India and China.

He noted that you never “wrote the book you thought you wrote”; you wrote the book that is framed by the media and the debate, and then spend most of your life clarifying what you said. The more accurate title of his book might be The Nonarrival of the Creative Class, because its main idea is not that Americans are somehow fleeing to Canada or New Zealand. The point of the book—familiar to those who have read The Rise of the Creative Class—is that the key to economic growth today lies in the ability to mobilize and harness technology and talent. Technology and talent are two of the “Three T’s” in his theory of economic development.

Dr. Florida said that most economists assume that countries are somehow endowed with stocks of technology or human capital, as a factor of production or raw material. His work says something almost inanely simple: they’re not stocks; they’re flows. And the flows of talent and human capital are highly mobile.

The third T is tolerance, which means openness to talented, creative, and knowledgeable people, who tend to come equally in both genders and in all age groups, races, ethnicities, sexual orientations, and family types. Dr. Florida argues that tolerant places are most open to various kinds of talented people. As a result, he has been accused of having a gay and lesbian agenda; trying to undermine the American family as we know it; advocating for cities that are composed entirely of yuppiesophistas, trendoids, and gays; and waging a one-man campaign to undermine Judeo-Christian civilization. But he said his only agenda is to understand how economic growth occurs. This ability to be open, or what his book calls “practically inclusive,” has a big additional or marginal effect on the competitiveness of a country, region, or city.

In Dr. Florida’s first book, he linked the success of large cities such as San Francisco, Boston, and metropolitan Washington, D.C., to having the Three T’s. Then he visited the film studio of Peter Jackson, the director of The Lord of the Rings movies. There in little Wellington, New Zealand—a town of 400,000 people—Jackson had assembled top creative and talented people from around the world. That’s when Dr. Florida concluded that Pittsburgh and Cleveland are not competing against San Francisco and Seattle; competition for talent had gone global.

Two years later, Dr. Florida finally wrote The Flight of the Creative Class, pointing out that America’s core advantage in the world economy is not a lot of raw materials, better manufacturing facilities, a bigger market, or something we called Yankee ingenuity. It’s the fact that over the course of a couple of centuries, America was the most open, tolerant, and inclusive country in the world. Despite our own warts, we were the place that could attract top-notch creative talent, which gave us all of our great, wonderful, technologically innovative industries.

Dr. Florida said his book traces the effect of immigrants, both in general and as individuals, on American economic growth. From Albert Einstein to David Sarnoff, Andrew Carnegie, Andy Grove, and General Georges Doriot—America’s first venture capitalist—immigrants to America contributed much of our country’s science and technology. AnnaLee Saxenian’s studies confirm that nearly one-third of all the high-tech companies started in Silicon Valley during the 1990s were founded by a Chinese or Indian person, and 50 percent of our computer scientists come from outside the United States.

For a long time the United States built its competitive advantage on all Three T’s: technology, talent, and tolerance, but now our historic ability to compete is being damaged by several factors. The first is aggressive competition for talent by other countries and regional units within them. (Dr. Florida noted that he and Dr. Atkinson share a real interest in subnational, regional economic units as opposed to countries.) Dr. Florida’s book contains all kinds of metrics for this competition for talent, but he noted specifically that the United States ranks eleventh in the world for the percentage of its population in the so-called creative class—scientists, engineers, technology people, innovators and entrepreneurs, artists, musicians, writers, members of the design and entertainment industries, and those in the traditional knowledge-based professions. According to his global creativity index—the metric for the Three T’s—the United States ranks fourth, behind Sweden, Japan, and Finland. Some people, particularly among the right wing, have taken potshots at that measure, but it is almost right in line with Michael Porter’s growth-competitiveness index.

As one (but not the only) indicator of competitiveness for human capital, Dr. Florida said it is very useful to look at the flow of foreign students. While other countries, such as Australia, Canada, and the United Kingdom, are increasing their ability to attract foreign students, we are becoming, or appearing to be, more restrictive—both by neglect and by policy. Even when people are not permanently kept out, there are restrictions on visas and great delays. As indicated by an exhaustive study by a Taiwanese Ph.D. student at the University of Toronto and by Dr. Florida’s own focus groups and conversations around the world, it is not simply the restrictions that matter, but also the sense that the United States is becoming a less inclusive country. Measuring the percentage of the decline in numbers of foreign students is the wrong way to look at this, because individuals matter. What if David Sarnoff, Albert Einstein, Enrico Fermi, Sabeer Bhatia, or Jerry Yang had gone somewhere else?

Dr. Florida said another point is completely neglected in the discussion of his work: our failure to harness the knowledge, talent, and intelligence of regular people. This was the subject of two of his previous books, The Breakthrough Illusion and Beyond Mass Production: The Japanese System and Its Transfer to the United States. According to the theory of the creative class, you need to have a large percentage of people in creative occupations, but the places that will win also expand their ability to tap the full creativity of a much broader segment of their population.

It’s like the Toyota production system, which became quite competitive by not only tapping the creative capabilities of MBAs, engineers, and highly ranked people, but also harnessing the knowledge and intelligence of people on the factory floor. Roger Martin, who heads the Rotman School at the University of Toronto and ran the Monitor Corporation of Canada for a couple of decades, says we need to both attract the best and brightest technology people and also harness creative energy in general. Dr. Florida’s theory is that the creative economy for the first time makes the nature of human development and economic development much more alike.

The United States faces three creative problems. The first is that, just as at the dawn of the Industrial Revolution, the rise of the creative economy has created a fundamental economic divide between the creative haves and the creative have-nots. We are a much more class-divided and economically unequal society than we have been in a long time. Quite puzzlingly, that level of economic inequality is a result of the creative economy itself; it is greatest in the very centers of the creative economy, such as Washington, D.C., San Francisco, Raleigh-Durham, and Boston.

Second, there is a housing affordability crisis. As the main centers of creativity and innovation become more successful and consolidate their leads, they are pricing out the next generation of creative people. For example, the young assistant professor who might have been attracted to MIT, Cal Tech, Stanford, or Berkeley 30 years ago simply cannot afford to live in these places anymore.

Third, we have a deep class divide. In the United States, you cannot build a majority political coalition to support a creative economy when only 30 percent of the people participate. There is a recoiling against the creative and innovative economy by the people being left behind. The United States experienced sustainable economic growth and rising living standards in the 1950s and 1960s because the expansion of the industrial economy included many people in what we used to call the blue-collar industrial working class. The debate today no longer is between free markets and tax relief, or supporting certain technology or industrial sectors; it has to be about expanding the benefits and participation in the creative economy to a much broader swath of Americans. This is where the United States faces its biggest challenge.

You can imagine this conversation in Sweden, Finland, Australia, the United Kingdom, or Canada—about growing the technological, innovative, entrepreneurial, creative economy and attaching many more people to that growth engine. But at the national level in the United States, it is difficult to see the creation of politics that can support and sustain expanded participation in a creative economy.

– – –


In his comments on Dr. Florida’s presentation, Dr. Atkinson noted that they are both urban planners who share a distrust of neoclassical economics. He agreed that the competitive advantage today is increasingly determined by innovation and knowledge; growing income inequality is a critical challenge to the nation; and the United States faces unprecedented challenges from competitors. However, he said the title of Dr. Florida’s book underscores that the center of his model is immigration, which exaggerates the case fairly significantly. “If you had a title called The Flight of the Creative Class: How Immigration, Especially of Knowledge Workers, is One of the Many Factors in Competitiveness and Economic Growth, and the U.S. Needs To Do a Bit Better Than It Currently Is, but, Of Course, It Has to Balance Open Borders With Other Concerns, I’m not sure you’d be here.”

Dr. Atkinson said it’s not really true that the competition for talent is global; he saw no data in Dr. Florida’s book on how many Americans were leaving the United States. The Urban Institute estimated that about 200,000 people leave the country each year, and perhaps three-quarters of them are foreign born. He is not sure that’s a very big number and thinks Dr. Florida’s model works at the state and local levels, but not nationally.

In addition, the book’s fundamental premise or thesis is that imported talent is the driver of economic growth. Yet on a recent trip to Finland, which is number one or two on everyone’s list of successful economies, Dr. Atkinson met no one from outside the country. They have been able to do all of the amazing things they’ve done without importing very much talent—if any. Nor are Japan, Denmark, Sweden, and China importing talent. So assuming that you have to bring in smart people makes too strong a case for what certainly is important, but not that important. If we had no immigrants, would we do worse? For every David Sarnoff, there are 10 immigrants you have never heard of who have no skills and just work at jobs every day.

Second, while Dr. Florida believes “openness is everything,” Dr. Atkinson is happy we have a stronger border than we did before 9/11. He definitely wants a student tracking system; if it makes students a little antsy coming here, that’s the price we have to pay for domestic security. Certainly immigration of skilled workers, scientists, and engineers is very important, but at the end of the day that is not what drives economic growth or the migration of jobs overseas. According to Dr. Atkinson, Dr. Florida suggests that many companies are going overseas because there is no domestic talent here, but he counters that it is because operating there costs 10–20 percent of what it costs here. In fact, we have the second-highest rate of people finishing college in the world.

More important, Dr. Florida puts too much emphasis on skills. Although Dr. Atkinson has argued for more skills, almost all of the studies in his own book suggest that the post-1995 productivity rebound in the United States resulted not from skills, but rather from the increased use of innovative technology. Dr. Florida talks about technology in his book, but Dr. Atkinson doesn’t think he gives it as much credence as he should. Dr. Atkinson also is uncomfortable with Dr. Florida’s contention that we need to do more not only on skills such as science, math, and engineering, but also on things such as “validating children entering a glass-blowing class.” That may be really important for an individual child, but as a matter of national policy, given the choice between putting $50 million into the NSF for science education or putting $50 million into the NEA for arts education, he would pick the former because it will lead to economic growth.

Dr. Atkinson agreed that growing income inequality is a challenge, but he was not quite sure what Dr. Florida wants to do about it—for example, how to make cabdrivers, telephone operators, or people who clean buildings more creative. Many U.S. jobs are low-skilled, require little knowledge, and aren’t very pleasant, and somehow figuring out a way to have more creative people would not make those jobs better. In fact, Dr. Atkinson’s recent paper shows that the fastest-growing occupational category between now and 2015 will be low-wage, low-skilled jobs. Certainly some low-wage occupations have a creativity component, but as more of a structuralist, Dr. Atkinson says we need to focus on automating “bad jobs” so that there are fewer of them and providing a higher earned-income tax credit, more progressive taxes, or health-care benefits for everybody.

Finally, Dr. Atkinson said Dr. Florida did a very good job of framing the broad challenges that we face as a country, but the book didn’t provide as much of the specifics as he would have liked. He would have enjoyed something that he could give to Members of Congress and say, “Here’s the bill you need to write; here are the 10 things that government could do right now to improve this process.”

Following Dr. Atkinson’s comments, Dr. Kenan Jarboe, President of the Athena Alliance, moderated the discussion. He began by expressing concern about how to infuse jobs with creativity, using the example of London cabbies who can find any address versus D.C. cab drivers who famously cannot. He added that Dr. Atkinson’s point about automating jobs is the flip side of this issue: many of those “bad jobs” will either go off shore for cheaper labor or disappear, because if you can break the job down enough to ship it offshore, at some point you can break it down enough to automate it.

Dr. Jarboe said in Dr. Florida’s scenario, 30 percent of society is in the creative class and 70 percent essentially is composed of the drawers of water and the hewers of wood. Dr. Florida criticizes us collectively for failing to provide a clear vision of how the broad swath of society can prosper and succeed in this economy. But where does the great middle class come from—especially when you have globalization in which many smart, creative, bright, skilled, educated people will do that creative job at a quarter of the U.S. price, along with the “winner take all” phenomenon?

Dr. Florida responded by saying he spent his life trying to understand why technology-based solutions don’t work, including 20 years in Pittsburgh. Once one of the biggest cities in the United States, today it ranks number 56; a stock of technology alone didn’t keep it competitive. Everyone forgets that he said Three T’s; T number one is technology; T number two is talent or skills; T number three is tolerance. The model says you need all three, not one. If you have one, you’re Pittsburgh, or perhaps Miami—which has a high level of tolerance. What made the United States and its constituent regions particularly competitive was having all Three T’s. His model doesn’t say that openness is everything, but rather that openness is an important component when combined with the other factors.

Second, studies show that low-skilled immigrants are important components of our economic growth; the regions that have attracted low-skilled immigrants have outgrown those that have not. Of course costs matter, but costs aren’t the whole story, and certainly the United States is going to get killed in a cost-based competition. To leverage technology and skills, we have to be open.

Dr. Florida says he did not write the creativity and technology blueprint for the United States. He has no idea how to write it; nor does anyone else. It is a daunting task, and we need to write it together. We need a technology and innovation agenda, but that is not enough—as his mother would say, “that ain’t going to get you a walk in the park.” It also has to be an inclusive agenda; the 30 percent of Americans in the creative class isn’t enough. You need an agenda that links the NEA and the NSF. You have to get beyond this ridiculous welfare reform argument that says, “You don’t have any skills and you can’t do anything, so we’ll pay you off or give you a crappy job.” You have to show that every human being has value if you want to build a political constituency that can take us out of an older industrial age and into a new technology and information age.

It is not our technology, information, or knowledge that binds us together, but rather our people and creativity. Our country needs a creativity agenda, and he would like, in a very small way, to work with everybody at the seminar to create it. If we fragment the agenda into technology, arts, culture, health, human services, etc., we aren’t going to get out of this box.

– – –


The floor was then opened for a question-and-answer period. The first comment came from a participant who highlighted that she was not only a demographer and social scientist by training, but also a mother. She made the point that creativity means doing something outside the status quo. She argued that there were two times in the 20th century when the United States came together, unleashed new creativity, and made everybody want to be like us: World War II, when people who had been excluded, such as women and people of color, got to be part of the American dream; and the Civil Rights Movement.

But the leaders of creativity in the United States today are our children, who are in the database at Johns Hopkins and score 1,200 and above on the SAT in seventh grade. By the time they reach college, they’re just jaded. They don’t want to do any more technology or higher-level math, and then they get sucked up by the people who make money and then go overseas. She asked if the speakers are looking at that or at the work of Frank Levy at MIT on the new division of labor.

Dr. Florida said he knows and recommends Dr. Levy’s work, calling him probably the most important labor economist working in the world today. While harnessing the creativity of children is really important, harnessing the creativity of everyone is important too. In his book, The Fourth Great Awakening, Robert Fogel said the two challenges for our society are to harness the creative energy of young people and of the old people who are being set aside. Dr. Florida is calling for a national dialogue on harnessing this creative energy across the board. Our schools are not set up to do a very good job of this; they were set up to do a good job in the Industrial Revolution, and they did.

Dr. Florida went on to comment that Franklin D. Roosevelt was a very interesting president because he looked the Industrial Revolution in the face and said, “We can either blow up like Europe into this horrible class antagonism or we can bring in blue-collar working people, let them participate, and grow the industrial society.” Someone has to get up and say, “This cannot be built on the backs of the technology elite or knowledge elite or artistic elite.” We have to build the mechanism just as Roosevelt did, including large swaths of the population in this creative economy. Blue-collar jobs were not always good jobs; they used to be low-pay, low-income, very dirty, very dangerous jobs. We made them better jobs through a series of institutional mechanisms, and that is the challenge today.

A participant asked if the flight of the creative class is a flight out of the country or a flight into the country, or both. Dr. Florida replied that it is not so much about Americans leaving the country, but rather our inability to harness the creative energy of our own people. Dr. Jarboe then noted that Dr. Florida’s book says students are leaving, and Dr. Florida explained that the rate of increase in the number of foreign students we attract has slowed. It’s not that everyone is abandoning the United States and Americans are all going abroad, but to some degree we are losing the global competition for talent. We still rank among the three most competitive countries, but we are losing what used to be an incredible advantage in harnessing the creative energy of students, young professionals, and technology-based people.

Another participant said Dr. Atkinson mentioned that creativity and economic growth are inversely correlated with the incarceration rate, and that certainly the incarceration rate in this country is far higher than those of Canada and the U.K. In addition, it’s been said the New Deal programs were the seed bed for the next generation of Republicans, who say, “I did it on my own, why can’t those people do it?” So you almost had a growth of intolerance among people who forgot that they climbed the rungs with the aid of government programs.

A participant with the American Electronics Association asked how to break the link between competitiveness and high-tech or high-skilled immigration, when foreign nationals comprise 40 percent of the graduates in areas at the core of the technology industry, such as engineering, science, and math. Dr. Atkinson responded by saying high-skilled immigration is an important component of our success and he is skeptical that we will train enough scientists and engineers. If we want to have scientists and engineers, for the short run we will have to import them.

However, he disagrees with Dr. Florida on the need to import people who have never finished fourth grade. There is no question that when you import people into this country our economy grows bigger, but do we want to be a bigger country? Many of the problems Dr. Florida raises, such as high housing prices, are a direct result of population growth. No one can make the case that a high level of immigration in and of itself raises per capita living standards; it just raises GDP. His goal is not to have more people, but rather for his son to have an income that’s 30 or 50 or 100 percent higher than his.

A participant said Dr. Florida’s points about classes in the United States reminded her of Homeland, a book about post-9/11 America becoming very scared about being left out of the creative economy, and the racism and hate-filled politics that emanate from that fear. In America we have been able to constantly reinvent ourselves and provide mobility through the classes, but now that is harder to do. She said it is ominous that we have written off a whole generation of inner-city kids and sent them to prisons; we haven’t had an ethic of education. Dr. Florida replied that this is the narrative of his work: you grow an economy by developing the creativity of each and every individual. The winners are those countries that develop the most creativity from the inside and attract the most from the outside.

Technology, innovation, entrepreneurship, and immigration will only get us to what we used to call in the business literature “silos,” but not to cross-functional teams—or “we.” In the absence of that, there is a recoiling against the very centers of innovation and technology, the places loaded with what one of his critics called “yuppiesophistas, trendoids, and gays.” That kind of loaded language is indicative of a reaction against the very propulsive drivers—highly concentrated, very spiky, big peaks of innovation, entrepreneurship, creativity, and technological growth. So the country becomes locked in a devastating culture war and political polarization that makes it almost impossible to address this agenda in a careful, astute, and knowledge-driven way.

Dr. Jarboe noted a recent series of articles in The Wall Street Journal on inequality in the United States, one of which used Mexico as an example of a country with almost no upward mobility for those at the bottom. Do we need mobility inside the country as well as openness to immigration? Dr. Florida said yes; his book points to both the ability to attract talent from outside and the ability to harness the creative capabilities of Americans, including low-skilled people. His father, the son of an immigrant with an eighth-grade education who worked in a factory, told him that it was not the CEOs and business minds that made the factory productive, but rather the talent, knowledge, and creativity of the men who worked there.

Another participant asked about immigrant entrepreneurship, and in particular how to inspire creativity in the staid environment of multinational corporations whose franchises often are owned by immigrants. Dr. Florida said in his new article in the Harvard Business Review on the SAS Institute in Cary, North Carolina, he described how Jim Goodnight fostered creativity in his software company. Goodnight does not outsource anything at the company, because every kind of worker has an intrinsic creative drive: the software people want to make great technology; the salespeople want to meet their sales quota; the landscaping crew wants to make a remarkable landscape. According to Dr. Florida, a segment of the business community is clamoring to find out how to tap into the creative energy of various workers, not because they’re altruistic or do-gooders, but because they want to get a competitive edge. He added that it would be a great idea to look specifically at immigrant entrepreneurs in an entrepreneurship project he has underway at George Mason University.

Chuck Wessner of the Board on Science, Technology, and Economic Policy at the National Academies said the flight of talent seems to be fairly self-evident, and we certainly are helping East Asia and Europe to gain it. However, tolerance doesn’t seem related to very rapid rates of growth or competitiveness, as in the case of Singapore. Growing economies have a trade policy, which means they are very careful about what they allow to happen: they use currencies carefully, acquire technology, and focus on national autonomy and military contributions with a long-term view—something that we seem to have moved away from. He does not see the tolerance, but rather a much more active, less ideological use of integrated trade and technology policies. Could we adopt more thoughtful and long-term policies like those of the Chinese, Japanese, Germans, and French?

In response, Dr. Florida pointed out that although Singapore may not be the most open and tolerant country in the world, it reportedly has dramatically increased spending on arts and culture and is changing its policy on the treatment of gays and lesbians. Sweden has made aggressive strides to open and liberalize its immigration policy, and countries such as Finland, Canada, Australia, and the United Kingdom are ramping up their ability to attract and retain foreign talent—although this is not so much the case in countries such as Germany and Japan.

Remaining an open country is a key to U.S. economic success. Openness is not the whole equation—you also need a great business climate, sensible trade policies, and sensible innovation, science and technology, and human policies that we don’t have—but it is a critical part. Dr. Florida is trying to make this part of the conversation, because no country has to beat us for us to lose significantly. If Sweden takes 2 percent, the United Kingdom takes 3 percent, Canada takes 10 percent, and Australia takes 5 percent, the cumulative effect will be quite significant.

Dr. Atkinson said he absolutely agrees with Dr. Wessner: technology and trade are driving these big changes and the occupational structure we see today. He added that Dr. Florida ranks the Japanese fifth in tolerance and the United States twentieth, but the Japanese do not want other people to come to their country, whereas anyone can come into America and be an American. So he questions both the metric and the importance of tolerance.

In response, Dr. Florida asked, if you had to bet over the long term on the ability to create cutting-edge innovations, would you choose a closed Japanese immigration system or a relatively open system like those of the United States or Canada? Regarding measures of tolerance, he based the index on the work of Ronald Inglehart of the University of Michigan—the only person who has surveyed attitudes on this over 40 years. He is working with the Gallup Organization to conduct his own worldwide survey to measure locational preferences, numbers of immigrants, and numbers of gay and lesbian people. Without those measures, you use reported attitudes: a self-expression index and a so-called secular versus traditional religious index. The United States does very well on the self-expression index but terribly on the secular versus traditional religious index—for which the country pays a big penalty in its rankings.

Regarding Dr. Florida’s comments on jurisdictional advantage, Jason Jordan of the American Planning Association asked if cities or regions could adopt policies—particularly in terms of urban design or urban systems—to improve their creative competitiveness. Dr. Florida said the great shortcoming of his book was its inability to address these issues at the subnational scale, where the importance of the Three T’s will really show up.

By tolerance at the subnational level, Dr. Florida means several things: being open to different kinds of people and less segregated, as well as other factors that are very hard to measure, such as the quality of the environment or the arts and cultural community. Most people do this with completely inane measures, such as the number of symphony performances or acres of park land, but there are many measures of quality of place that matter. This jurisdictional advantage is very important because people do not choose location based only on where they can get the highest-paying job; they choose based on where they can find economic opportunity, a greater labor market, and the quality of life they desire. Most city planners think of quality of life as a great place to play golf and raise kids in a traditional nuclear family, but it varies among young people, older single people, and the gay and lesbian population. The places with jurisdictional advantage provide a wide portfolio of services and amenities that can attract a large number of people.

A participant asked how Reuters’ decision to shift editorial jobs, which might be considered creative jobs, to India, apparently based on the bottom line and the smallest possible skill set necessary to edit stories, fits into the Three T’s. Dr. Florida replied that costs matter; as Dr. Jarboe said, things that can be standardized, routinized, or rationalized will be moved to where there is a cost advantage. He added that if we in the United States act on not just a technology agenda but also a creativity agenda, we can combine those people who work in editorial positions with new technology or other writing opportunities to add value in new and unique ways. But clearly many jobs are subject to moving around the world, and functions that can be routinized and standardized will be moved first. The only possible advantage we have is moving up the value chain of not only innovation and technology, but also creativity across the board.

In closing, Dr. Florida told a story about sitting around the room with Tony Blair’s top economic advisers as they discussed the need for an information technology center and a high-tech strategy, to be more like the United States. He asked who the richest people in England were. The answer was Paul McCartney, Mick Jagger, David Bowie, and Elton John, which prompted him to ask whether the U.K. government had ever really thought about its music industry. Dr. Florida admitted it is a silly example. But instead of seeing the loss of U.S. industries through the blinders of steel, autos, and consumer electronics, we have to think broadly about the areas where we can gain competitive advantage, which is in these quintessentially creative industries.


Constructing Jurisdictional Advantage

With Professor Maryann Feldman

One of the myths surrounding the I-Cubed (Information, Intangibles, Innovation) Economy is that “place” – the physical location of economic activity – no longer matters. With the “death of distance” we are told that economic activity can occur anywhere – as the current debate over offshoring illustrates. But, in a highly interconnected global economy, place may become even more important. In response to criticisms of offshoring, we hear over and over from corporate leaders that they must go to where the resources and the talent are located. Local intangible assets are becoming key factors in a company’s competitive advantage. And the uniqueness of those local assets becomes ever more important.

Drawing on theories of corporate competitive strategy, Professor Maryann Feldman outlined a new approach to local economic development based on a community’s unique characteristics—arguing that jurisdictional advantage is established through a strategy of differentiation rather than low costs. Professor Feldman is the Jeffrey S. Skoll Chair in Technical Innovation and Entrepreneurship and Professor of Business Economics at the Rotman School of Management, University of Toronto. Prior to joining Rotman, she held the position of Policy Director for the Johns Hopkins Whiting School of Engineering. She was also a research scientist at the Institute for Policy Studies at the university. Her research focuses on the areas of innovation, the commercialization of academic research, and the factors that promote technological change and economic growth. A large part of her work concerns the geography of innovation – investigating the reasons why innovation clusters spatially and the mechanisms that support and sustain industrial clusters.

Professor Feldman was introduced by Dr. Kent Hughes, Director of the Project on America and the Global Economy at the Woodrow Wilson Center.

In her presentation, Professor Feldman focused on the need of cities and economic regions to construct a jurisdictional advantage: a deliberate and unique combination of social, economic, and political assets that helps a city gain competitive and innovative advantages. If she had been running in a political race, her campaign slogan probably would have been “it’s the location, stupid.”

Professor Feldman noted how companies built a competitive advantage either by developing a unique product or service or by using a low-cost approach to providing a product or service. By low cost, she stressed that she did not mean cutting prices and hence profit margins. For example, she pointed to Southwest Airlines’ business strategy, which built an entire, complex system to create its low-cost advantage. To meet Southwest’s prices, a competitor essentially had to master and either duplicate or improve on that total system.

In Professor Feldman’s view, cities and regions needed to learn from the strategies of successful companies. She recognized that corporate strategies were relatively simple – their principal goal was profit maximization. Cities have broader goals that include quality of life, security, and protecting the environment, as well as fostering economic growth and job creation.

It was in terms of generating growth that she felt the cities had most to learn from the business approach. Just as she did not see a future for companies that simply cut prices and profits, she did not see long-term success for cities or regions that relied on low wages or luring industry with tax breaks and other incentives. When economic conditions change, the lured company is often the first to leave.

In terms of a strategy for growth, Professor Feldman rejected both the laissez faire approach of passively waiting for the magic of the market place and the kind of active industrial targeting that attracted attention in the United States in the early 1980s. In her view, technology simply changed too rapidly and unpredictably for industrial targeting to work.

Instead, she encouraged cities and regions to focus on constructing a jurisdictional advantage, a unique system that would allow them to capitalize on new innovations while staying flexible to market changes. Before active “construction,” a city or region had to assess its current assets and strengths. The next step would be to build on its current competitive advantages. In most cases, a city or region will find local industries that have already stimulated the development of activities that create a supportive cluster.

She gave two examples of how a specific industry had encouraged the formation of related activities in a city or region and then, in turn, been strengthened by them. Her first example was Hollywood. Citing recent work by A.J. Scott, she noted that it was not the weather but rather an innovative approach to filmmaking that helped break the New York monopoly on film production. “The New York based Motion Picture Patent Trust priced films by the foot” regardless of quality. In Hollywood, Thomas Ince pioneered the practice of shooting movies in discrete segments that were reassembled later, thus lowering costs and eventually giving rise to the studio system. The innovations in filmmaking led to an increased demand for actors, technicians, craftsmen, and managers, among others. The Academy of Motion Picture Arts and Sciences also played a critical role in training talent, thereby supporting the industry’s growth.

In New York, Professor Feldman pointed to N.M. Rantisi’s work on the fashion industry, where a series of complementary industries, services, and educational institutions developed that helped the garment industry move to high-value-added production. Other cities’ everyday garment districts failed to develop their own market niches and withered under first southern and then international competition.

In some cases, a low-cost strategy may be successful. But a low-cost strategy is not the same as a strategy built on low wages. Rather, it is a strategy of providing the needed economic infrastructure and services at a lower cost. The educational system (K-12) in Edmonton, Canada, is an example where providing a high-quality jurisdictional service at low cost has helped promote economic development. However, to be sustainable, any low-cost strategy must be used to create unique assets that cannot be easily replicated by other jurisdictions.

Because economic activity is often path dependent (where you are depends on where you started), many cities attempt to lure key industries to their jurisdiction. Professor Feldman argued that by the time it is clear that an innovation a city or region might desire is going to be a market force, it is already embedded in a supportive cluster elsewhere. She again stressed the importance of building on existing strengths and, where necessary, focusing on attracting the missing industries needed to complete a cluster. To continue to build on their existing strengths, city and regional governments must communicate effectively and regularly with their business stakeholders to ensure that regulatory requirements are not constraining further growth.

Professor Feldman closed by noting that she will be going to India in the near future to add an international dimension to her work.

The discussion was moderated by Dr. Kenan Jarboe, President of Athena Alliance. During the discussion, a number of participants noted other studies and research that parallel Professor Feldman’s work. It was especially noted that her work essentially bridges the gap between Michael Porter’s thinking on corporate strategy and his (and others’) work on economic clusters in a new and insightful way.

Part of the discussion focused on how communities could identify their unique advantages and turn disadvantages into advantages. It was noted that the social infrastructure and the entrepreneurial culture of an area play a large role in helping a community identify opportunities based on its local strengths. Serial entrepreneurs are able to switch from business to business as the opportunities change.

The issue of the current bandwagon effect of focusing on one or two supposedly key industries also came up in the discussion. As one participant put it, cities and regions go looking for the magical White Buffalo and end up with a White Elephant. Professor Feldman noted that one of the most important aspects of the work may be a greater understanding of what not to do.

Finally, participants raised the issue of states versus local regions. It was noted that while states may be the controlling political jurisdictions, the locus of economic activity is the local region – which may span state boundaries, as does the Washington, DC metropolitan region. Professor Feldman stressed the importance of a regional strategy for competitive advantage. Simply limiting the strategy to existing political boundaries ignores the nature of regional economic linkages.

Patent Donations and the Problem of Orphan Technologies

With David Martin and Peter Bloch

Dr. David E. Martin, President and CEO of M-CAM, and Mr. Peter Bloch, COO of Light Years IP, explored different aspects of the orphan patent question. Orphan patents are those that are no longer used by their inventors or owners and are often donated to other institutions in exchange for tax deductions. Dr. Martin opened the discussion by noting that his company has developed intellectual property auditing systems to identify the commercial validity and value of patents. He noted that 30 percent of current patents are “functional forgeries” because they are issued based on the uniqueness of the words used to describe the invention rather than on the uniqueness of the invention itself. In addition, he contended that 90 percent of the patents granted in the United States, Europe, and Japan were defensive in nature. As fee-based organizations, the patent offices depend on the volume of patents and thus have incentives to grant patents regardless of their ultimate validity.

Dr. Martin noted that private consulting firms were counseling companies to adopt an “abandon or donate” strategy for unused intellectual property to generate tax savings. He noted that universities have used these donations of intellectual property (IP), rather than cash, as the private-sector matching funds often required for federal research grants. In some cases, the universities would abandon patents rather than pay the $3,000 maintenance fee each patent required.

Dr. Martin contrasted the $1.4 billion budget for the U.S. Patent and Trademark Office (PTO) with the $3.8 billion cost to the U.S. taxpayer from tax-deductible donations of patents. The valuation of donated patents is based on the methodology used to determine damages from an infringing patent. Yet none of the donated patents has had any commercial value. He concluded his opening remarks with a call for the PTO to do a much better job of determining what is really a new invention that deserves the temporary monopoly conferred by the law.

Mr. Bloch elaborated on the history of patent donations. Although such donations have been allowed since 1954, it was in the 1990s that corporations became more aware of the value of their patent holdings and of the tax benefits of donating unused patents. In response to growing concern about abuse or even outright fraud, Congress began tightening the tax code’s provisions for deducting donated patents. This has caused concern, as proponents of patent donation believe that donated patents lead to new areas of research and have helped universities bring their research closer to market.

After reviewing the current system of patent donations, Mr. Bloch concluded that it did not work. He pointed to a limited number of successes but did not feel that they offset the costs. Most of the incentives provided by tax deductions are for technologies that are already closest to market and easiest to commercialize. However, ending the program completely could also prove to be a mistake. Instead, a broader look at the entire national innovation system is needed to return the focus to technologies that are more difficult to commercialize, where incentives for further development may produce more public benefit.

The speakers for this policy forum were Dr. David Martin and Mr. Peter Bloch. Dr. Martin is CEO and founder of M-CAM, a Charlottesville, Virginia corporation that developed and commercialized the world’s first international intellectual property auditing systems to identify the commercial validity and value of patents. He has been at the forefront of IP management system development for over a decade. Formerly an Assistant Professor at the University of Virginia’s School of Medicine, he has worked with numerous governments on technology transfer policies and intellectual property protection.

Mr. Bloch is the Chief Operating Officer of Light Years IP, a not-for-profit association focused on adapting modern IP marketing, asset management and licensing techniques to help developing countries earn export income. He is a business strategist and multimedia developer with over twenty-five years of experience in all aspects of startup, management and strategic planning for media companies. For the last fifteen years, he has specialized in working with media technology companies as a strategic planning consultant. As a consultant to the International Intellectual Property Institute, he co-authored a recently published research paper, IP Donations: A Policy Review.

Dr. Martin and Mr. Bloch were asked to explore different aspects of the orphan patent question.

They were introduced by Dr. Kent Hughes, Director of the Project on America and the Global Economy at the Woodrow Wilson Center.

Dr. Martin began his presentation by describing the work of M-CAM on validating patents.

Based on this work, he believes that over 30 percent of the United States patents currently in circulation are “functional forgeries”: their only uniqueness is in the wording, achieved with a thesaurus, not in a novel product or process. He gave the example of a patent for toast issued in July 2001, in which toast is called “the thermal refreshening and remediation of a bread product.”

According to Dr. Martin, the U.S. Patent and Trademark Office (PTO) is not fulfilling its constitutional charter. The U.S. Constitution says that in exchange for the disclosure of an invention or discovery that advances “science and useful arts,” the inventor may get a limited monopoly. Over 90 percent of the patents in circulation in the United States, Japan, and Europe are “defensive” patents. These patents are in stark violation of the Constitution; the grant of a public monopoly was not meant for protectionist self-interest. The grant of a monopoly was in exchange for the disclosure of something that promoted science, technology, and industry.

The problem is that the PTO—and the patent offices of Japan and Europe—are fee-based organizations. Thus, their incentive is to grant more and more patents. If they reject a patent, they obviate any annuity value of the maintenance fees—and reduce their funding.

In 1983, the decision was made to go for quantity over quality, and the PTO became a “customer-service organization.” But who is the customer and what is the service? The customer should not be the applicant. The customer should be the public, for whom, in that exchange of a sovereign grant, there is supposed to be an advancement of the public interest.

Dr. Martin pointed out that the term “orphan patent” is a bit oxymoronic. If an invention is disclosed, it is no longer an “orphan”: the public can access it at no cost. Once it is disclosed, the public faces no limitation on what it can do with it, except that it cannot commercially exploit the particular thing embodied in that particular patent.

The term “orphan technology” has a more legitimate historical basis in the economic dialogue. It comes out of the era that followed the liberation of defense technologies, where Congress decided that it would be a good idea to try to make those technologies available for commercial exploitation if they no longer had a defense application. For example, the under-40 female population was able to get gamma-emission detection of breast cancer. Certain optics and certain telecommunications also came out of this switch from defense to commercial technologies.

But orphan technologies were, at the time the term was coined, specifically those technologies that the public had already paid for and invested a monopoly interest in, but that the U.S. government, because of its holding rights on that intellectual property, was not doing anything with.

But, the term orphan technology now implies that there is another use, or a better use, of an invention. Implicit in this concept of technology transfer is the notion that somewhere along the line you’re introducing an economic theory called the “secondary market”—the ability to put a piece of technology, a property interest, what have you, into the hands of parties who can do something with it.

Dr. Martin believes that we pay a lot of money for R&D but do not get much for each R&D dollar. Most technology transfer dollars actually go toward offsetting this funding inefficiency, unless one views technology transfer programs as a jobs program or an alternative means of funding higher education.

He referred to a study his company did for the Small Business Administration (SBA) on the Small Business Innovation Research (SBIR) and Small Business Technology Transfer (STTR) grants. It found that 40 percent of grant applications coming through SBIR and STTR actually were soliciting funds to pursue an investigation into something that had already been patented.

It was in that investigation that patent donation came to M-CAM’s attention. Applicants for these and other federal grants are required to show commercial utility and partnerships with industry in the development phase of funding. The partnership requirement often takes the form of matching grants from private industry. M-CAM found that these matching donations were often intellectual property, not cash. At the same time, it became clear that the major accounting firms were hawking an “abandon or donate” strategy as an ideal way to generate phenomenal tax savings.

In one interesting case, the Internal Revenue Service received an information disclosure statement from a taxpayer listing a number of donated patents that the company never owned. The company that did own some of these “donated” patents had never even heard of the company that had been so kind as to donate them.

From Dr. Martin’s perspective, all of this began to look like a situation where companies no longer wanted to pay maintenance fees on questionable patents acquired for defensive reasons (often by copying competitor patents to build a protective hedge around their ideas). Rather than pay to maintain those patents, it became both easier and more lucrative to donate them to universities as matching grants.

It has become clear that, post-donation, patents are not being maintained. Universities are willing to walk away from patent portfolios valued for tax-donation purposes at millions of dollars for the sake of saving a few thousand dollars in fees.

The result is a system that generates $1.4 billion a year in fees to support a patent office and loses $3.8 billion a year in tax revenues on patent donations.

Part of this rush for patents has been deliberate. From 1980 to 1985, we began to copy the Japanese system of defensive patents. Now, when a Japanese company comes into a technology negotiation with its stack of patents, the U.S. company can counter with its own stack. And then the two companies agree to non-revenue-bearing cross-licenses.

Another problem with the process is how we calculate value. The current patent-valuation methodology is drawn from infringement damages. Yet, at the time of donation, none of the patents actually has a commercial value. To have infringement damage, you have to have commercial consequences; if you don’t have commercial consequences, there is no damage. And if you have no damage, there is no economic consequence. So, Dr. Martin wondered, why are we using such a methodology to arrive at economic value?

He offered the analogy of a patent as a “No Trespassing” sign. There’s no affirmative value in either; it conveys no affirmative right. It simply enables you to keep someone else from doing something. And a “No Trespassing” sign is worth exactly what you pay for it. Put the “No Trespassing” sign on two different “mines”: a minefield and a platinum mine. The sign is still worth only what you paid for it. The “No Trespassing” sign on the minefield is worth liability avoidance. The minefield owner is doing that more than anything else just to put you on notice. The “No Trespassing” sign on the platinum mine is only worth how much the owner is willing to enforce that admonition. In either case, the sign itself has no intrinsic value other than the intrinsic value it had when you bought it.

Both the “No Trespassing” sign and the patent are not an asset; they are a contingent liability. There is a burden to do something with that sign to actually achieve what it says. They cost you money to get; they cost you money to enforce; they cost you money to maintain. Where’s the asset side so far? Some will argue that not having an enforcement action brought against you must have value, and that we need to find a GAAP accounting means of putting that on a balance sheet. Dr. Martin is not advocating for or against putting it on the balance sheet. He is pointing out that current regulations don’t have a mechanism to address the point. Hence, we have a policy problem.

Dr. Martin closed with the observation that American policy is based on the mistaken belief that we are the only creative economy, that we are the source of innovation. In all the debate about outsourcing and where jobs are going, the underlying response is to assert that Americans are still the people who invent stuff.

That assumption may get us in trouble in the future. He posed the following scenario: Hoover Dam was constructed with concrete that was calculated to fatigue next year. We have 9 million people living in a desert whose water supply is exclusively from that location. The intellectual property rights (IPR) for water desalinization—which is the only way you can save the 9 million people in the desert from a water catastrophe—are not U.S. owned. They’re owned by foreign interests.

This scenario would end up reversing the roles in the AIDS-drug debate in South Africa, where the U.S. is pushing for strong enforcement of drug companies’ IPR. IPR is great, until it’s our 9 million people who have to deal with a major problem.

He noted that the rest of the world is now using technologies like M-CAM’s forensic analysis of patent enforceability to detect, as he puts it, the patent frauds that are issued every day out of patent offices.

He closed by emphasizing that it is absolutely essential that we wake up to the fact that we need to do a much better job of accounting. We need to better define what invention is, what innovation is, and what a monopoly is worth. After we answer these questions, we must build systems and standards to enforce the answers. This will require halting abuse of the ambiguity that has surrounded intellectual property.

The next speaker was Mr. Bloch, who gave an overview of the recently published report by the International Intellectual Property Institute (IIPI), IP Donations: A Policy Review. Whereas Dr. Martin looked at the detailed foundations of this issue concerning validity of the patents, the policy review looked at the broad macro level.

The issue first came up in 1954, when the IRS clarified certain rulings to allow for the donation of intellectual property. This opportunity was not really used until the mid-to-late nineties. At that point, corporations began to realize that their intellectual property was becoming increasingly valuable, in some cases even more valuable than their physical plant. Companies and their accountants discovered that maintenance fees on unused patents were becoming a large cost; it was better either to abandon the patent or to donate it. Donation was seen as the preferred option, since companies could take tax deductions of up to 37% of the value of the patent as determined by an appraiser. Patent donations began gaining momentum in 1996, reaching a peak around 2001. Write-offs for donated patent portfolios have reached the $10 to $20 million range, with numerous large companies now routinely donating patents.

As a result, some people began looking closely at these donations and found cases of abuse, if not outright fraud. The outcome of that debate is a provision in the current tax bill—S. 1637—that would effectively eliminate tax deductions for patent donations. Many corporate donors claim that this will result in elimination of patent donations altogether.

This concerns the recipient community and others who believe that there is value in the program. The recipients of many of these patents claim that the mechanism has been immensely valuable in bringing technologies closer to market. According to them, it has enabled universities to do research in areas that they may not have been able to before.

Most patent donations are going to the second-tier research universities. These institutions don’t have their own well-funded endowments and research programs. Nor do they have a great number of their own patents. Therefore they wouldn’t have licensing and technology transfer programs if not for the donated patents.

The debate has come down to the intellectual property owners and some in the educational community against the Senate Finance Committee and others who are interested in curbing tax abuse.

It was in this environment that IIPI commissioned the policy review. One of its insights was that there was no explicit policy on patent donations as a tool of technology transfer; it was a de facto policy resulting from the IRS’s interpretations of rules and regulations. No one had looked at the overall picture and the benefit to the taxpayer. This is a subsidy to corporations, but no one had asked what the public was getting in exchange for the subsidy.

One of the problems is finding out how much the program costs. Discussions with the IRS, the Treasury Department, and donors have led Mr. Bloch to conclude that it is almost impossible to come up with any reliable data on the value of the donated patents. The cost is in the tens or hundreds of millions of dollars, but the exact figure is difficult to calculate.

The first reason is that most of the patents were not donated until the mid-to-late 1990s. However, it may take anywhere from four to ten years to actually commercialize a technology. Companies have been set up as spin-offs of universities specifically to exploit technologies developed as a result of donated patents, but they have not been in business long enough to deliver any commercial results. And the vast majority of donated patents have not even gotten to the point where a technology has been developed.

The second reason is that there is no measurement system. There is no government agency, no national innovation policy czar who tracks this. There is no data on who is giving what patents to whom, on the progress of the patents, whether or not they are ever commercialized and on what economic activity is generated. The mission of the PTO is job creation and innovation but that is not tied to any national innovation policy whatsoever.

As an aside, Mr. Bloch noted that there are other consequences of not having a national innovation policy. For example, government funding of basic research has been declining steadily since 1982. And the private funding of basic research is moving offshore, away from U.S. universities.

A third problem is the process itself. It’s the technology that is closest to a commercial application that collects the highest valuations and gets the highest tax donations. Yet, this same technology, because it is closest to market, should be the easiest to license through traditional arrangements and thereby be less in need of the donation process for commercialization.

Mr. Bloch noted that a large company with a new product that is not going to create a billion-dollar market has two choices. They can give it to a research institution that will take it through a little bit more research and then sell it. As a result, the company gets a subsidy in the form of a tax deduction. The alternative would be to license the technology to another company that doesn’t need a billion-dollar market.

Why donate rather than license? According to Mr. Bloch, the answer that normally comes back from business executives varies: we couldn’t find anybody who is interested; we didn’t have the time; it was too complicated; it was easier to donate. He suspects in some cases that it was simply more profitable to donate the patent for the tax deduction than it was to seek a partner to develop the technology.

Mr. Bloch believes that if the taxpayer is going to subsidize technology commercialization, the subsidy should go to technologies which are promising but more difficult to commercialize. But it’s the more difficult technologies which, under the rules for appraisal, will be given lower values, get lower tax deductions, and therefore are less likely to be donated.

The conclusion of the policy review: the program doesn’t work. This conclusion holds despite the fact that there have been some notable successes. There are some technologies that probably wouldn’t have gotten to market without this program. This may be a suitable mechanism for subsidizing technology development in the case of orphan drugs— there’s limited demand for the drug and the big pharmaceutical companies don’t see the return on their investment. In this area, donations or enlightened licensing to research institutions has had positive results.

However, certain proposed changes to the tax code would throw out the program entirely. Rather than discarding it, we should look at criteria for designing new mechanisms to make promising technologies in early-stage development available to research institutions and to small businesses. Right now, only a 501(c)(3) can receive donations for which donors get write-offs. Thus, the program locks out small businesses as recipients.

The entire complex needs to be looked at carefully: owners of a technology who, for one reason or another, didn’t pursue it; universities, which are seen as engines of economic growth through their research activities; and small business innovation programs and other government programs to foster commercialization.

Before considering subsidies and tax write-offs, Mr. Bloch stressed that we need to look broadly at what elements should be built into a new program. We also need to determine exactly where the market failures are, and to tie it to a national innovation policy.

Crafting Policy for the Information Age: EU Research on Intangibles

Clark Eustace, Executive Chairman of PRISM, presented the findings of the PRISM project, an umbrella organization of European business schools that was formed to carry on the work of the European Commission’s High-Level Expert Group on the Intangible Economy. Originally, the issue of intangibles was seen primarily as an accounting problem. The further the issue was explored, the more it was realized that it was a larger, more serious issue with far greater implications throughout the economy. In recognition of the broad nature of the problem, PRISM focused on four issue areas: the evolving new theory of the firm; measurement issues; issues for the key interest groups, particularly the accountants, bankers and other market-related actors; and implications for EU policy.

One of the group’s main conclusions is that there really isn’t a new economy, but rather a soft revolution in a number of areas, including the asset base, the speed of markets and the nature of the value chain, which is resulting in deeper transformations. This revolution has gone largely undetected over a number of years because systems of measurement aren’t able to pick it up. This transformation has significant implications for EU policy in areas of understanding the productivity of knowledge-intensive services and the creation of intangible goods. It also raises concerns over accounting standards and the adequacy of existing mechanisms of company reporting.

During the discussion, a number of issues were raised concerning innovation and intellectual property rights (IPR). It was pointed out that we still don’t have a good model of the knowledge production process – either inputs or outputs. Thus, we have difficulty in determining the incentives needed, including the role and forms of IPR that might be most appropriate. Similarly, changes to the model of business services (toward commoditization and systemization) and the rise of intangible goods as a category of products that are neither goods nor services have increased the complexity of the situation facing policymakers. The implication is that there is no one-size-fits-all policy.

Some of the implications of the rise of intangibles for the financial system, especially on access to capital and the role of credit rating agencies, were also discussed. Throughout the discussion, the need for continued research on better models and data collection was stressed. Only through both better data and a more complete theory can we understand how the shifts are occurring.

The speaker at this policy forum was Mr. Clark Eustace, Executive Chairman of PRISM. PRISM is one of the leading European research efforts on intangibles. Funded by the European Commission, the PRISM group is a consortium of eight European business schools that has spent the last two years intensively studying the role of intangibles in economic growth. A former senior partner with Price Waterhouse in Europe, Mr. Eustace is an international expert on the economic and accounting issues relating to the expanding intangible economy. He has served in a number of advisory capacities to European governments. Most recently, he was the founding Chairman of the European Commission’s High-Level Expert Group on the Intangible Economy.

He was introduced by Dr. Kent Hughes, Director of the Project on America and the Global Economy at the Woodrow Wilson Center.

Mr. Eustace began by presenting background on the PRISM project. The project is actually an outgrowth of a Brookings Institution project on which he served. The Brookings Task Force on Intangibles looked at the question of what we should be measuring with respect to intangibles, including the issue of high stock market valuations.1 This project served as a wake-up call for officials in Europe, who responded by creating a High-Level Expert Group on the Intangible Economy, which he was asked to chair. This was a highly successful undertaking which managed in a short period of time to produce a detailed overview of the issue.2

Based on that experience, he convinced the European Commission to expand the group and bring in academics for detailed studies on the issues. PRISM was formed as the umbrella organization for that research, with a high-level advisory group comprised of government officials, business leaders and academics. The research is now complete and PRISM has published its executive summary of findings and a large number of background papers. These can be found on its website at http://www.euintangibles.net.

Europe has made this issue of the new economy/digital economy/intangible economy a priority, starting with the Lisbon Accord in 2000. The Lisbon Accord grew out of concern over the gap in economic performance between the US and European economies. This European Commission (EC) agreement set both the research and policy agenda to confront these issues and sent a clear political message that the issue of economic changes was a priority. It is clear that these issues are understood at the highest level of the EC, including EC President Romano Prodi, a respected former economics professor.

Since then, and even before, there have been numerous studies about the economic gap between the US and Europe. As this analysis has progressed, the view of the issues surrounding intangible assets has changed. Originally, the reason that Mr. Eustace was asked to get involved in identifying and measuring intangibles was primarily that it was seen as an accounting problem. The further the issue was explored, the more everyone realized it was a larger, more serious issue that had far greater implications throughout the economy. The question then became one of how to gain support for the recognition that this was a larger issue affecting major portions of the economy, such as banks and capital markets, and not just the accountants.

Both sides of the Atlantic have now shifted to the view that this is an economic problem, not simply an accounting problem. There is a greater realization that the economic fundamentals have changed. Unfortunately, fragmentation within academic economics was not helpful in confronting the issue.

As the Expert Group delved further into this question, they found that a major part of the academic fracture line concerned measurement theory. They now believe they need new models and a new math. By that they do not mean a new accounting math, such as that proposed by Baruch Lev. Rather, they mean a new way of measuring, similar to that of the revolution in science that took place with Robert Boyle. Previously, the notion was that you looked at things in a disaggregated fashion. In economics and accounting, you added up things such as inputs or transactions or costs. You then disaggregated each individual item and could analyze it in different ways. At some point, people in the natural sciences realized that there were far too many interactions to deal with in this way. So they created a new model and a new math for measurement and analysis.

That is where the PRISM project started. One goal was to bring together disparate academics from various disciplines to attempt to create a new way of approaching intangibles, including taking a closer look at econometrics.
The group had its first session to flesh out their ideas at the University of Ferrara, led by Professor Patrizio Bianchi. That session led to focusing on four issue areas:
The evolving new theory of the firm.
Measurement issues.
Issues for the key interest groups, particularly the accountants, bankers and other market related actors.
Implications for EU policy.
Two years later, the PRISM group has come to a set of main conclusions.

First, there really isn’t a new economy. There have not been shifts similar to those at the beginning of the Industrial Revolution. There is, however, a very rapid and largely hidden change in a number of areas, including the asset base, the speed of markets and the nature of the value chain. This is more a soft revolution, as Paul Romer likes to refer to it. The soft revolution has gone largely undetected over a number of years because our systems of measurement aren’t able to pick it up. And it is resulting in deeper transformations.

Mr. Eustace pointed out that they are still grappling with how to better structure these forces in order to make them more understandable.

The PRISM group’s key findings from their thematic workshops are:
A maturing global economy with surpluses is leading to a commoditization of products and services.
The shift from old mass production to more customized production leads to a shift from economies of scale to economies of scope.
The struggle for comparative advantage shifts from price factors to factors of differentiation such as intangibles. The real question is why the factors of differentiation, rather than price, have now become the key drivers. They’ve always been there, but why, in the last 20 years, have they taken on the preeminent role?
Therefore, firms constantly try to create, maintain or invade monopolies founded on intangibles as a major part of corporate strategy. This has major implications for IPR strategies in service industries, since IPR regimes are currently not suited to protecting those intangible assets.
Hidden investment in business intangibles is now as much as 100% of physical capital. This includes investments in R&D, ICT, training and organizational change. However, variations across European countries are enormous and we don’t really yet understand why.
The economic characteristics of the intangible economy are very different from the manufacturing era.
The corporate response to these shifts has been notable. The economic changes have not occurred suddenly, or from a common cause. But, over time, they have induced important changes in the architecture of the corporate value chain. Value chains always had a limited life in competitive markets, but they are now eroding faster than ever before. Strategic responses include:
Having an effective “innovation machine.” This has become a must for every enterprise. The process started with the Kaizen process of continuous improvement and has grown into the key notion of companies continually re-inventing themselves at all organizational levels. The most successful organizations have ingrained this into the organizational culture. This need for change and re-invention has become relentless.
The constant search for new modes of monopoly rent. This is what keeps companies in business.
Ways of exploiting unique, or difficult-to-replicate capabilities, competences and quasi-assets.
Utilization of networks as key strategic assets. But we don’t yet understand how to measure and evaluate the power and usefulness of networks.
The new dynamics of power have implications for the old corporate governance system. However, the group did not have time to delve into this issue in any depth.
In the area of the policy agenda, PRISM felt that the statistical, market and corporate tracking systems have not kept pace. There is no systematic information, only glimpses. The macro, mezzo and micro systems for picking up information on intangibles are not there. For example, the mezzo-level of the system of real-time information collection in the United States, as exemplified by the SEC, does not exist in Europe.

The issue of intangibles strikes at the very heart of the macro and micro reporting models. Stewardship of intangibles is now a major priority for companies and investors (and regulators). There is a proliferation of guidelines and indicators. Yet holistic solutions to measurement and information are slow to mature.

There are steps that can be taken. The first is a better break-out of intangibles as input to the data collection process. This is a task facing those trying to get agreement on corporate reporting. The second is to refine the notion of using multiple layers of reporting, similar to that which Price Waterhouse and other big accounting firms are advocating. This starts at the basic transaction level and moves up to higher tiers by adding more value-based indicators.

The business level data problem is especially acute when trying to understand the relationship between costs and value. With the rise of importance of intangibles, we no longer have good analytic models of business that show the relationship between what you put in (costs) and what you take out (value).

With respect to macroeconomic data, the PRISM group’s research, led by the former chief statistician for the OECD, shows that 10% of GDP goes unreported due to our inability to capture data on intangibles. Changing the GDP measure, however, is very difficult since so many policy systems and automatic policy decisions – such as government program funding – are tied to the number.

On the broader policy front, the policy community has relied on the “New Economy” paradigm to light the way for new policy orientations and levers. This paradigm has collapsed, leaving the policy community very confused and creating a vacuum of ideas. As a result, the attention of policymakers in Europe has turned to sponsoring serious investigation of the issue and funding research in the area.

The final result of the current PRISM research efforts was six questions for EU policy makers:
Productivity of Knowledge-Intensive Services: What are the economics of the knowledge production function? What new tools do we need to measure the productivity of knowledge? What should we be measuring and tracking a) at the firm level, and b) at the System of National Accounts (SNA) level?
Intangible goods: What are the characteristics of the intangible goods sector – size, growth, dynamics? How should the SNA be reformulated to track intangible goods? Does the EU have a competition policy for intangible goods?
Accounting standards: Why are EU company accounts silent on assets and liabilities arising from licenses, leases, annuity and executory contracts, which form the backbone of the modern economy?
Investor communications: Do we need a European version of the SEC’s Edgar information system?
Company reporting: What is an appropriate role for EU in fostering a holistic corporate reporting model?
Single European Market: Given the EU’s strategic goal to increase R&D investment from 1.9 to 3% of GDP, to what extent – and in what areas – would a realignment of the EU’s IPR system help?
Stepping back, there appears to be a lot that can be done quickly to patch up some of these areas, such as IPR. There is an enormous amount of nonsense currently in the IPR system. It doesn’t take a lot of new analysis; it will take political will to carry the solutions through.

But there are fundamental issues that need to be addressed. The issues of productivity in the knowledge production function and the value chain in information services need more attention and analytical work. One important effort would be to try to understand the taxonomy of services. For example, how much of the basic value generation of running an airline is similar to that of running a law office? There may be only a handful of truly unique models of value generation in the services industry. If we can understand what those handful of basic models are we can make much more intelligent business and policy decisions, both within and across the industries.

There is also the difficult issue of intangible goods. There is a discrete area of products (goods) that can be stockpiled and traded but that embody intangible content and are affected by patents, copyrights, licenses, etc. Often referred to as the content sector, this is an area we really don’t understand. It needs to be fleshed out. It is a rapidly growing area that has enormous revenue and tax implications. Because it falls in between the existing categories of goods and services – and because it is so complicated to add a third category – this sector often gets ignored.

In the area of accounting standards, the new IASB standards on intangibles will become mandatory in Europe in 2005. The standards are similar to the US FAS 141 & 142, so there is considerable interest in Europe about the American experience. The need to capture data on intangibles may be the impetus for setting up an EDGAR-like system in Europe as a data repository. With the IASB requirements, it will now be possible to gather company data on a pan-European basis.

(The full report can be found at http://www.euintangibles.net)

Dr. Kenan Jarboe, President of Athena Alliance, moderated the Q&A session. He began with a comment about the importance of PRISM as an “observatory” or learning entity for the interested parties. This is similar to what Athena Alliance hopes to do. While he has similar problems with the hype over the concept of a New Economy, there clearly have been structural changes in the economy. These are not the changes that were highlighted during the dot-com bubble, but fundamental shifts in the nature of production.

We still don’t understand what is happening with the knowledge production function. We understand many of the inputs, and we understand some of the outputs, like the number of patents and copyrights – even though these are poor proxies for innovation. But we really don’t understand what happens inside the black box. We don’t know how those costs and inputs get turned into value added products and services.

Dr. Jarboe noted that one comment Mr. Eustace made during his presentation was especially provocative and interesting: that our IPR regime is unsuited for protecting investments in intangible assets. Dr. Jarboe asked Mr. Eustace to elaborate on that point, especially on the problem that a number of other forms of intangible assets, such as trade secrets, tacit knowledge and networking effects, are not in any way protected by existing IPR.

Mr. Eustace replied that one of the key questions is determining what to protect and finding the line between public and private goods. The pharmaceutical industry and its relationship with the regulatory community is a case in point.

With respect to measuring outputs of the knowledge production function, the OECD is doing some very good work. They are trying to create a measurement model but it is very difficult because of the interconnected nature of the knowledge creation process.

Another issue is that of business process patents, which is to some extent a knee-jerk response to the fuzzy problem of protecting software. Europe is taking its time to watch what is happening in the US. He is not convinced that the business process patent is the right approach – it may be far too crude. This has been driven by the shift in the business service sector toward selling intangible assets – a shift from customized research for each client to selling a codified product to meet a client’s needs. Having a limited set of codified products helps streamline the business and increase productivity and profits. In that case, the key question is how to protect original models – and patents may not be the best mechanism. There are many other forms of IPR – some still being invented – that may be more appropriate.

Dr. Jarboe commented that it would be good to see the list of new forms of IPR that PRISM has uncovered.

He also noted the paradox of business services moving toward an industrial model of codifying the answer and selling that codified (standardized) product, and away from the information economy model of customization. Are we simply moving to an industrialized model where services follow manufacturing, or is something different really going on?

Mr. Eustace replied that the business service sector is a maturing industry moving toward commoditization and systemization. That process still has a ways to go. Manufacturing is moving to another phase of exploring the tension between centralized and decentralized – such as the swing between conglomerate and highly focused organizational structures.

A key is understanding that there are many parallel paths. Policymakers have not yet come to understand the multiple models. For example, it took a long time for policymakers in the UK to understand that the services and non-services economies were out of phase – that there are many economies. The implication is that there is no one-size-fits-all policy.

One participant raised the issue of the policy implications. Outside of an academic or intellectual exercise, what are the implications for policy of the study of intangibles? Specifically, issues of control (regulation) and taxation are problematic. Are we better off simply letting that part of the economy run without government interference as the best way to foster the development of these intangibles?

Mr. Eustace replied that it is a difficult question. We can’t, however, simply ignore the situation and continue to not measure even if that might lead to increased government scrutiny. We can’t afford to have such “black holes” in key measurements of our financial system. We are now looking at major corporate balance sheets and simply saying “we haven’t got a clue.”

There’s also the issue of the volatility of markets. The Brookings Institution has done very good work in showing how information failures have led to increased volatility.

We have no choice but to move the frontier of measurement further along. The question is how we do it. We need a combination of policy experiments, such as FAS 141 and 142, and academic research to spur fresh ideas. And we need to attract new academics to this area of research.

One participant brought up Christensen’s book on the innovator’s dilemma. One point of the book was that the level of investment in new knowledge creation, i.e. R&D, was not the determining factor in innovation. It was the recognition by the leadership of the company of an entirely new way to use an existing technological development. Thus it was not the infusion of knowledge assets but the recognition of the implications of that knowledge that made the difference. Do we really run the risk of missing the point by measuring the knowledge inputs, which might not be what matters?

Dr. Jarboe noted that measuring the correct inputs is a major problem. And it is not just measuring the R&D because we don’t measure a number of inputs such as skill inputs. We can measure the raw amount of R&D expenditures on the inputs and the number of patents on the outputs, but this doesn’t give us a clear picture of the innovation process. Those measures may not be relevant, they may not have any real correlation with economic growth, and they don’t tell us how to improve the innovation process.

Mr. Eustace noted from his consulting experience that it has always been very difficult to make good decisions about the allocation of resources for innovation, especially in business services. It is important to recognize what we don’t know and to ask how far we push these measures. Part of the process is to maintain a dialogue between researchers and policymakers so as to understand what is really useful.

Another participant brought up the role of technology and the problem of creating long-term fixed assets in a world of rapid change. Creating fixed assets requires understanding two fixed parameters. First, humans will still have certain needs, such as stimulating work and interaction with other humans. More and more routine jobs that humans do not like to do – including the supposed thought jobs of workers in cubicles – will be done by machines. This will continue to push the envelope of human activity into areas of intellectual endeavor well beyond the notion of goods versus services. Second, there are still only 24 hours in the day. People relate to their environment in both a time and a space sense – for example, by measuring distance to work in terms of time. These are fixed human traits. Ultimately, the development of any intangibles needs to relate to these parameters.

Dr. Jarboe noted that the parameter of time is a key factor in the expansion of services. It is true, as Pat Moynihan used to say, that there are no productivity gains in the services because the minute waltz still takes a minute to perform. But with CDs, radio and the ability to download music on the Internet, that minute waltz can be repeated for people over and over again – thereby greatly improving the productivity of those original musicians.

This raised the difficulties of categorizing products as goods or services. For example, listening to music by buying a CD is classified as a good, whereas listening to music by going to a concert is a service. The possibility was raised of switching to a categorization system based on human needs. It was suggested in the mid-seventies to classify the economy by ultimate end use – food, housing, transportation – rather than dividing between goods and services.

Mr. Eustace noted that the problems facing national economic statistics are even more difficult than the issues of corporate accounting for intangibles. Not only are the classification systems outmoded, all of the time series and historical data are based on those outmoded classifications. Going back to modify all of the past data is a horrendous undertaking. At every level there are huge problems. Companies, even in the same sectors, still do not report the same intangibles (such as R&D spending) in the same way. And then there is the political problem of changing the way in which a country reports on its economic situation.

With respect to classification of industries, Dr. Hughes mentioned efforts to structure organizations using IT links into affinity groups that would go from raw materials to finished products. Our economic statistics don’t really capture that.

Mr. Eustace stressed the need for continued research on the data collection problem in order to understand the shifts that are occurring – such as in business service and other sectors that rely heavily on intangibles. There are a number of attempts at reports – such as the UK’s Social Trends report – which try to capture these factors. But they are not enough. We need some very high quality thought pieces to set the frameworks and then lots of good statistics. The key in those high-level pieces is understanding how value is created and destroyed with these shifts.

Dr. Jarboe picked up on the point of the importance of creating value – especially in the current policy environment which is geared toward narrow and specific sectors of the economy. We may end up crafting policies which attempt to create value in a narrow area – based on our view of the economy as a producer of goods – that have unintended consequences elsewhere.

He also noted that there are a number of other reports, such as the PPI New Economy Index and the World Economic Forum’s Competitiveness report, that try to capture data on intangibles. Why don’t these reports get us to where we want to go?

Mr. Eustace replied that one problem is that these reports – such as the Department of Commerce’s Digital Economy report – are high quality but unfocused. They need to be condensed and synthesized. The data needs to be targeted toward helping understanding of what is going on and to be able to put it together in some sort of pattern so that it becomes cogent. That could then become the basis for developing longer term monitoring of the process.

A participant raised a concern over our lack of understanding of the knowledge production function and the creative production function. Without that understanding issues such as copyright become unknowable. For example, if we don’t understand the creators’ incentives for the production of new works, we can’t really craft policies to foster those incentives. Some research is being done on whether IPR, for example software patents, serve as an incentive for further innovation and development or rather as a means of protecting the market of existing IPR owners, including through the means of so-called “patent thickets.” If we don’t understand the nature of the market incentives and can’t measure how those incentives work, then we don’t know how the market will respond to different kinds of IPR regimes.

He also raised the issue of the gap between theory and measurement. For example, data from the System of National Accounts feeds directly into models that describe the economy and are used to understand the different sources of growth – the standard growth accounting models. These models are predicated on certain theories of how markets behave. In this intangible economy, is the theory still correct? Do we need a better theory of economic growth in order to improve the statistics?

Mr. Eustace stated that this was out of his field of expertise, but suggested that one starting point for improving the models is to look at how we account for investments. The standard models of production are based on a value-chain that starts with R&D and works through to production and ultimately distribution. The notion was that you could ramp up the value generated in each stage of the value chain.

But in large parts of the economy, especially in intangible goods, there is no separate production. There is an economic production function – but not a separate activity known as production. For example, in software, the value generation lies in what is considered the R&D stage (the generation and testing of the code). Value is also generated downstream in the market distribution channels but not in the traditional physical production stage.

Yet we don’t have a good understanding or theory of how this works. Here we come back to the issue of mapping the service sectors. We know that certain sectors are similar in their value-generating activities – but we don’t have a clear enough view of the knowledge production function to say where the value is generated. We don’t adequately measure R&D, and we don’t know how to measure downstream activities like setting up marketing channels. As a result, we have no real idea of how much of the economy is made up of these types of activities.

We are even having a hard time conceptualizing the issues. This is one of the goals of PRISM – to identify and synthesize them.

Dr. Jarboe noted the importance of moving from a transaction-based model to more of a field-based model. In the ’70s and ’80s, we talked about a national system of innovation, not just R&D. But we never connected all the parts. While we have studied the research process, we still have not connected it to the fact that much of the innovation flows from users back up through the marketing channels.

A participant pointed out that we need to look at the ultimate needs of the customer; otherwise we can mistake what is of value. For example, what is the need for railroads, or for transportation generally? The entire role of increased information and IP is to create better ways of accomplishing meaningful objectives – not simply to improve a product.

And, as was pointed out, our national system of accounts does not cover that. It looks at transactions from the production process’s point of view, not the end user’s.

A second point made was that technology not only improves what we are currently doing but also makes possible that which was not possible before. The more profound issue is the lack of imagination to see what is not already known. How do you measure that?

More patents for horseshoe improvements were issued after the automobile was invented than before. How do you value all of those patents, granted just as a new technology is developing that will make them obsolete?

One participant noted the PRISM quote from Kelvin about the importance of measurement. A counter is the quote, often attributed to Einstein, that not everything that can be measured is important, and not everything that is important can be measured.

Mr. Eustace agreed that without a conceptual framework, we have no way of knowing what is important and should be measured. But the reality is that we need measurement systems, and we need to improve the existing ones without destroying all confidence in them.

The PRISM group came to the conclusion that there were four categories of resources, which is useful as a heuristic. At the softest end are latent resources, which are unknown and maybe unknowable; they cannot be measured but can at least be identified and discussed. Next are intangible competencies, which are usable and partially codifiable – such as organizational structures – even if they are difficult to recognize and measure. Then there are intangible goods, which can be measured and valued, such as IPR, long-term contracts, and royalties.

There are difficulties in all these areas. Patents are difficult to value because of the problem of separating the knowledge from other factors, like the workforce. In many cases the licensing-out of patents is just a mechanism for transferring risk – others assume the cost and risk of developing the patent into a revenue-producing product.

On top of this, we need to move to a fair-value accounting system and learn from companies’ experience as they make that move.

Dr. Jarboe pointed out that the value of donated patents has become an issue with the IRS and Senate Finance Committee as they work through the latest changes in the corporate tax code.

One participant questioned what the policy implications of the report were and how the lack of knowledge of intangibles hurts policymaking. For example, one recommendation is for a European version of the SEC’s EDGAR system for financial reporting and another talks of improving access to capital by SMEs. How would these policy recommendations work?

Mr. Eustace replied that the idea of a European EDGAR system or some standard financial disclosure system for Europe (although not necessarily enforcement) is inevitable. The need for transparency in the data for policy decisions is overwhelming. There have already been a number of studies making this point, such as the Winter report. But, just putting together the legal framework for collecting the data will take some time.

Concerning the issue of access to capital, there is a section on this in the PRISM report. The problem is that banks and the financial system do not explicitly and systematically recognize intangibles. For example, credit-scoring models do not explicitly include any sort of intangibles, though factors like quality of management, potential market share, and sustainability are all implicitly included. PRISM thought that some transparency would be useful – maybe a best-practice code at the European level. The goal right now is to inject awareness of intangibles into the discussion over credit and capital allocation – such as the Basel II bank requirements.

It is also especially important to look at the rating agencies, whose activities are not well scrutinized. These entities need to be incorporated into the process, especially as the role of debt financing in Europe has increased over the past 20 years.

Dr. Jarboe noted that the research of our earlier speaker in this series, Jon Low, showed that stock analysts very much paid attention to intangibles, even if they didn’t know how to quantify or measure them.

One participant noted that the rating agencies have been involved in various financial industry activities on new regulations. Their concerns, however, seem to be different from other financial institutions – and their perspective is different as well. This difference becomes clear from looking at their Congressional testimony on the Enron scandal.

Participants noted the importance of intangibles as a basis for access to capital. If intangibles cannot be used to raise capital, companies may be shut out of the capital market.

It was pointed out that for certain types of intangibles, there are operating secondary markets. For example, copyrights to music are routinely bought and sold – even packaged into portfolios – with valuation based on expected royalties.
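The valuation approach described – pricing copyrights from expected royalties – can be sketched as a discounted cash flow calculation. This is a hypothetical illustration: all figures, parameter names, and the decay/discount rates are invented, not drawn from the discussion.

```python
# Hypothetical sketch: valuing a music-copyright portfolio as the
# present value of its expected royalty streams. A decay rate stands
# in for declining airplay over time; a discount rate stands in for
# risk and the time value of money. All numbers are illustrative.

def copyright_value(first_year_royalty, decay=0.10, discount=0.12, years=20):
    """Present value of a royalty stream that shrinks by `decay` per
    year and is discounted at `discount` over a finite horizon."""
    value = 0.0
    for t in range(1, years + 1):
        royalty = first_year_royalty * (1 - decay) ** (t - 1)
        value += royalty / (1 + discount) ** t
    return value

# A portfolio of three titles, keyed by their first-year royalties ($):
portfolio = [100_000, 40_000, 25_000]
total = sum(copyright_value(r) for r in portfolio)
```

In practice a buyer would substitute title-by-title royalty histories for the stylized decay curve, but the structure – expected royalties in, a price out – is what makes these secondary markets workable.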

One participant raised the problem of how to account for R&D, especially when the technology is highly speculative. Much of the current system requires expensing costs that are really not operating costs but investments in creating new intangible assets. The difficulty lies in measuring the future value of the benefits that consumers might receive.

Mr. Eustace noted the practical difficulties of capitalizing all expenses that go into intangible assets, including the ways the accounts can be manipulated. Because of these problems, full capitalization will probably never be accepted. What is needed – and is very possible – is a better breakout of where the money is being spent (regardless of how it is treated for amortization purposes). Details of spending on intangibles would help overcome our lack of basic knowledge in the area – and help get rid of bad notions, such as the idea that there is no R&D in service industries.
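The expensing-versus-capitalizing distinction at issue can be shown with a toy calculation; the figures are invented. Cash spent is identical either way – only reported profit differs, which is why Mr. Eustace’s call for disclosure of where the money goes matters more than the amortization treatment itself.

```python
# Toy illustration (invented figures) of expensing vs. capitalizing
# a $300 R&D outlay against $1,000 of revenue. The cash flow is the
# same in both cases; only year-one reported profit changes.

def profit_expensed(revenue, rd_outlay):
    """Year-one profit if the whole R&D outlay is expensed at once."""
    return revenue - rd_outlay

def profit_capitalized(revenue, rd_outlay, years):
    """Year-one profit if the outlay is capitalized and amortized
    straight-line over `years` years."""
    return revenue - rd_outlay / years

p_expense = profit_expensed(1_000, 300)        # immediate hit to profit
p_capital = profit_capitalized(1_000, 300, 3)  # cost spread over 3 years
```

Expensing reports lower profit (and lower taxes) up front; capitalizing reports higher profit now at the cost of amortization charges later.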

Mr. Eustace closed the session by noting that the discussion didn’t even talk about the macro-intangibles – that is, the frameworks that national governments have, or have not, put in place to recognize and deal with the policy questions surrounding intangibles. We have put into place the ability to look comparatively at frameworks across countries in a number of areas, for example, labor law and tax base. But we don’t have such a conceptualization when it comes to intangibles. In part, this is one of the future tasks facing PRISM.

1. Margaret M. Blair and Steven M.H. Wallman, Unseen Wealth: Report of the Brookings Task Force on Intangibles, The Brookings Institution, Washington, DC, 2001.

2. High Level Expert Group, The Intangible Economy – Impact and Policy Issues: Report of the European High Level Expert Group on the Intangible Economy, European Commission, Brussels, October 2000.

The Coming Bust of the Knowledge Economy

Steven Weber of the University of California at Berkeley argued that the production system for knowledge goods is undergoing not just a dramatic technological change but also a shift in mindset that is fueling an economic revolution. This shift has already occurred in the music industry. The meltdown of the telecommunications industry illustrates the magnitude of the risk, and the software and pharmaceutical industries are next to face the threat. As illustrated by the open-source software movement, this new system reverses the traditional notion of intellectual property protection from a right to exclude to a right to distribute. While the change may increase innovation and productivity, it threatens to undermine, in a Schumpeterian process of creative destruction, current investments and the profitability of existing industries.

The discussion focused on a number of points raised by Professor Weber’s presentation. Much of it centered on the possibility of an open-source-type research process in the pharmaceutical industry. Given the high level of human risk and the resulting regulatory controls in pharmaceuticals, the process there would look different. As health care becomes a much more personalized and information-intensive activity, broader issues of information regulation – including intellectual property rights – are likely to emerge.


The speaker at this policy forum was Steven Weber of UC Berkeley and the Berkeley Roundtable on the International Economy (BRIE). A leading expert in risk analysis and forecasting, Dr. Weber is Associate Professor of Political Science at UC Berkeley and directs the MacArthur Program on Multilateral Governance at Berkeley’s Institute of International Studies. At BRIE, his research focuses on the political and social change in the knowledge-based economy and the political economy of globalization. Professor Weber actively consults with major corporations, non-profits and government agencies. His latest book, The Success of Open Source, will be published this Fall.

He was introduced by Dr. Kent Hughes, Director of the Project on America and the Global Economy at the Woodrow Wilson Center.

Professor Weber began with four propositions:

  1. Open-source software is remaking the economics of a significant piece of the information processing industry;
  2. Open source is not just a sui generis phenomenon, but a more general illustration of how new production processes arise in complex knowledge goods;
  3. These new production processes are shifting where value-added is found in the production chain, turning highly expensive goods into commodities, and challenging current notions of what is “proprietary” (to the extent that he predicted that proprietary operating system software would become an old-fashioned, “quaint” notion in as soon as ten years); and
  4. While this process of Schumpeterian creative destruction leads to economic progress, the shift has political as well as economic and technological implications.

Professor Weber cautioned that this evolution entails risks that must be defined and prepared for, the largest being the destruction of business models that depend largely on provincial and antiquated notions of what is proprietary and can be controlled. The repercussions for these business models are substantial: the destruction of huge amounts of invested capital and stock valuation. Those heavily vested in these models will “fight tooth and nail” to halt these economic and technological changes.

Telecom and music

The background and justification for these propositions can be found in the recent history of the telecommunications and music industries and in the current situation of the software and pharmaceutical industries.

A difficult but inevitable truth is that there is no viable business model that will return IT stocks in general to their valuations before the bursting of the technology bubble. Some companies over-invested in the 1990s and are left with unused capacity and a significant burden of debt. For instance, telecom companies invested heavily in network capabilities just as the Internet began to replace them. Some of these same companies, and others, built out huge amounts of bandwidth on the erroneous assumption that there would be enough content and demand to justify such a huge supply. Substantial capital was borrowed to build cable and fiber-optic bandwidth that is now, at most, an inexpensive commodity.

Other industries are still struggling to adapt to the digital revolution. The music industry faces a substantial decline in CD sales and revenue. Rather than a telecom-style “meltdown,” Professor Weber believes the music industry faces a “corrosive loss of legitimacy and confidence.”

Professor Weber argues that it is not just dramatic technological change but rather a shift in mindset that fuels economic revolutions. He reminded the audience that Peter Drucker argued in the nineties that new ideas, not a fascination with technology, fuel economic growth. Drucker compared the nineties to the Industrial Revolution, where it was not the steam engine but major organizational innovations – the factory, the corporation, the daily newspaper, the trade union – that led to rapid growth during and after that period.

Napster, a relatively simple piece of technology, profoundly changed how people thought about the music industry by making the record companies’ business model transparent to their end users/customers. Professor Weber asked 150 students in one of his classes last year if it was legitimate to pay for music. Only three students (the slightly older ones) raised their hands. He believed that this was a mind shift that would not easily be changed.

Professor Weber felt that intellectual property politics led to legally inconsistent and problematic outcomes in the Napster case. Under the non-circumvention clause of the Digital Millennium Copyright Act (DMCA), it is not only illegal to break an electronic lock on a protected digital good; it is also illegal (with few exceptions) to build a software tool that can be used to open such a lock, regardless of the builder’s intentions. As a result, technologies posing any threat to the copyright regime are being constrained, rather than conduct that actually violates copyright law being punished.

This preemptive law undermines a major source of innovation for the economy in order to protect the $12 billion music industry. More fundamentally troubling, it gives more protection to a single piece of intellectual property (in this case, a song) than is currently given to far more personal information, such as the information encoded in one’s DNA. He argued that while another song can be written at any time, once someone’s DNA has been decoded and made public, the harm that might be done is irrevocable.

Professor Weber also referred to a proposal by Pam Samuelson (UC Berkeley) and the Electronic Frontier Foundation that would levy a tax on hard drives or broadband connections and distribute the funds to copyright holders as compensation for file-sharing. He considered this a “bizarre, stunningly inefficient, and dysfunctional” idea.

Software and pharmaceuticals

Professor Weber then went on to discuss the new threats to software and pharmaceuticals. The software and pharmaceuticals industries are both fundamental to the knowledge economy. The debate on the current IPR regime revolves around the breathtaking evolution of software and its related products and services. Meanwhile, the pharmaceuticals industry represents a large percentage of the U.S. GDP and exports and also has the potential to dramatically affect society with groundbreaking treatments for cancer and other life-threatening illnesses. Still, Professor Weber maintains that it is a “strange and discomforting time” to be involved with either industry.

Professor Weber has detected two distinct views on the future direction in both industries, but especially in the IT sector. Many people optimistically believe that the current declines in stock prices are temporary setbacks and that the markets will return to their mid-nineties positions after a few years, but without an equity bubble that proved to be a distorting distraction. The second theory, which Professor Weber agrees with, is that the current economic condition is a major period of industrial reorganization that will fundamentally change the way the markets value these industries.

The software industry represents about two and a half percent of U.S. GDP. Although the whole IT sector was affected by the bubble economy of the 1990s, Professor Weber distinguished between the grossly overvalued dot-com companies and the less irrational overvaluations of traditional software companies that create products with measurable value for their investors and consumers.

The challenge in the IT world comes from open-source software. Open source is a fundamentally different kind of production process for complex knowledge goods. It has three basic characteristics:

  • the source code that allows people to understand and modify what the software is doing is distributed freely with the software;
  • anyone can redistribute the software without paying royalties to the author of that software; and
  • anyone can modify the software and distribute that software under the same terms.

Open-source software operates under – and is thus pioneering – a fundamentally different IPR regime, one directly opposed to the proprietary software companies’ strategies. It is characterized by an owner’s “right to distribute, not to exclude,” under only one condition: other users cannot be constrained in how they alter the code. This is the exact opposite of the current IPR regime, in which “the core notion” holding together the existing proprietary software model, and justifying its valuations, is that the owner of software code can rightfully exclude others on its own terms. At a basic level, open-source software turns expensive, protected, service-intensive products into commodities. Since the code is open and inexpensive, it also broadens the system-maintenance market. According to Professor Weber, “open-source software is not just a fluke. It is a fundamentally different kind of production process for complex knowledge goods.”

Professor Weber strongly emphasized that this new form of production means new sources of value, but it also means the destruction of old monopoly rents. Companies such as Microsoft, Oracle, and Sun are thus directly challenged by the open-source method of software creation. For example, about forty percent of businesses have turned to Linux running on inexpensive Intel chips for the same kinds and magnitude of operations they once ran on much more costly Sun software and hardware. Apache, an open-source program that runs Internet servers, holds 65 percent of the server market. Linux now runs supercomputer clusters as well, cutting high-performance computing costs by 30 to 50 percent. Globus is an open-source project that harnesses computational capacity across the Internet to solve problems. TiVo, Sharp PDAs, and household goods either use or will soon use Linux. Open source is turning what used to be very expensive, highly protected, service-intensive products into commodities – and the market around these commodities is becoming significantly more competitive.

Professor Weber maintained that this actually helps the IT industry because of the fast rate of hardware development and the slow rate of software development. Increasing the rate of innovation and development in software technology would have a disproportionately huge impact on both the IT sector and the entire economy.

Open source has the potential to dramatically remake the software industry because of both its adaptability and its measurable success in the market, much as Napster altered the music business. More significantly, whereas Napster was simply a distribution system for an existing product, open-source software is a free-standing production system that overturns the existing rules of intellectual property. The change is significant not only for software companies. In many industries, such as banking, companies have invested heavily in capital-intensive, proprietary, specialized software that forms the basis of their competitive strategy and therefore of their market valuation. The availability of non-proprietary, less expensive open-source alternatives may cause these companies’ valuations to plummet. The result would be the bursting of another financial bubble created by the overvaluation of proprietary software.

Professor Weber also discussed what he viewed as a politically and conceptually fragile IPR regime behind the pharmaceutical and biotechnology industries. Conceptually, the success of open-source software undercuts the argument that the patent system is the only way to ensure investment in innovative activity. There are also well-known problems with the patent system, including the inefficiency of patent pools and the use of patenting as a business strategy for litigation rather than innovation.

Politically, Professor Weber believes that these companies face a number of challenges:

  • The visibility of their business model, as in the music business, foreshadows public pressure and perhaps ultimately, structural changes. Current medicines target a very limited set of human genes while the human genome project’s progress allows for possibly groundbreaking medical treatments that will require public-private and intra-industry collaboration and expansion. Even in the U.S., the pharmaceutical industry’s current business model and reputation do not have the legitimacy and support to obtain and sustain such partnerships;
  • The current debate in developing countries over the price and availability of drugs will soon begin in developed countries. He pointed out that differential pricing for Africa will lead to pushes for lower drug prices for lower income individuals in the U.S. as well;
  • He also foresees drug companies being targeted as villains of public health problems, with some parts of the pharmaceutical industry even being compared to the tobacco industry;
  • With the September 11, 2001 attacks, the anthrax scare, the U.S. government’s threat to take Cipro off patent, and the possibility of a SARS-like outbreak somewhere in the U.S., the pharmaceutical industry can no longer rely on the federal government’s protection of its intellectual property rights.

Pharmaceutical companies typically focus on the development of one or two major drugs. This creates the constant threat of a “biotech bomb,” in which stock prices collapse as soon as a key drug fails to secure FDA approval or proves less effective than initially expected.
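The "biotech bomb" dynamic can be illustrated with a simple expected-value calculation. The probabilities and payoffs below are invented for illustration; the point is only that when a valuation rests on one or two candidates, a single failure removes most of it.

```python
# Illustrative sketch (invented figures) of the "biotech bomb":
# a pipeline's expected value is the probability-weighted sum of
# its candidate drugs' projected payoffs. A single FDA failure on
# the lead drug zeroes out most of that value at once.

def pipeline_value(drugs):
    """Expected value of a pipeline: sum of approval probability
    times projected payoff for each (probability, payoff) pair."""
    return sum(p * payoff for p, payoff in drugs)

before = pipeline_value([(0.6, 5_000), (0.3, 1_000)])  # two candidates, $M
after  = pipeline_value([(0.0, 5_000), (0.3, 1_000)])  # lead drug fails
```

With only two candidates, the failure of the lead drug cuts the expected pipeline value by more than ninety percent here, which is the stock-price collapse the text describes.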

Professor Weber pointed out that many people in the pharmaceutical industry, while aware of these problems, fear severe repercussions from Wall Street should they be the first to seek a new business model. The result will not be a meltdown, as in telecom, but rather a possible corrosive loss of confidence, as in the music industry.

Public policy

Professor Weber believes there is good news, bad news, and contingent news in this situation. The bad news is that the technology companies will not return to the same high valuations of the nineties. He believes that the IT industry faces a period of persistent lower growth and that negative financial shocks will continue, leaving policymakers with difficult choices:

  • Pension funds must find different places to invest the capital they had previously put in the technology and pharmaceutical industries;
  • Technology outsourcing to India, China, Taiwan, and other countries will become a target for protectionist policies. At the same time, outsourcing will raise national security concerns. Unlike before, technology companies are now politically mobilized;
  • Government-funded research and development in technology from homeland security initiatives will have a disproportionate impact on technology trajectories in areas that were previously driven by the consumer sector (e.g. wireless networking, ubiquitous sensors, distributed supercomputing).

The good news, Professor Weber believes, comes from the opening of new opportunities. The creation of commodity infrastructure builds the foundation upon which new industries can grow. The famous example is that of the railroads and Sears: overcapacity drove down railroad prices and allowed the creation of a new value chain once someone figured out that goods could now be shipped affordably to consumers who purchased them out of a catalog. IBM is moving decisively in this direction with its embrace of open source and its transition to a global services company. Likewise, Professor Weber expects to see the emergence of service providers in the pharmaceutical industry, perhaps following the approach of a cosmetics company that successfully provides specific products to targeted segments of the population.

The contingent news is that policymakers can either encourage or hinder this process. Professor Weber discouraged “buying time for systems that do not degrade gracefully.” He argued against desperation leading to “defensive, rigid, and inefficient” actions such as the hard drive tax proposed by the Electronic Frontier Foundation, eternal copyright, and the non-circumvention clause of the DMCA.

Professor Weber believes that the government must inevitably subsidize the reconstruction of these industries, not because of their size, but because they are too essential to the infrastructure of a modern economy. A good federal subsidy strategy will permit experiments in unlicensed space (e.g., Wi-Fi) while simultaneously setting clear boundaries for areas that require regulation in the interest of society (e.g., public-safety communications bandwidth). The government also needs to solve the problem of “first-mover disadvantage,” whereby those who try to move to a new model are financially punished for writing off old investments. At some point, the government must stop protecting the direct stakeholders and accept the reality of industry losses.

Finally, Professor Weber concluded with the admonition that we are still very early in the evolution of industries like software and pharmaceuticals/biotech. These are not even close to being mature industries – a fact we must never forget as we think and talk about policy in these areas.


Dr. Kenan Jarboe, President of Athena Alliance, moderated the Q&A session. He began by noting a change in one of the political and policy dimensions, specifically the protection of monopolies. Other analysts, notably Rob Atkinson of the Progressive Policy Institute, have made the point that many old monopolies are attempting to protect themselves from changes brought about by new information technologies – e.g., realtors and wine stores trying to shield themselves from web-based competition. Yet Professor Weber seems to be saying that the new industries – such as software – have matured enough that there is a danger they will seek protection from new forms of economic activity, such as open source.

The previous Athena/Wilson discussion with Kurt Ramin brought up the issues of how R&D costs are to be accounted for and how accounting rules might affect outsourcing of R&D. Under the new International Accounting Standards Board (IASB) rules, development costs – once a demonstrable product has been created – are considered an asset and must be capitalized. In the U.S., all R&D costs must be expensed. There are benefits both ways – more expensing means lower profits but also lower taxes. Under both rules, if you buy a patent from outside, it can be capitalized (thus lowering your expenses), whereas if you spend the money in-house it must all be expensed. There was some discussion about whether in the case of pharmaceuticals (where much of the R&D is already done outside) this would lead to further outsourcing of R&D. But according to Professor Weber’s analysis, this type of outsourcing only works under a strong IPR regime in which the rights can be bought.

Professor Weber responded to the question of possible changes in pharmaceutical R&D by noting that the industry’s strength is in production, marketing, and managing the regulatory process. In R&D, its strength is in managing the risks of the scientific enterprise. Small biotech firms now operate as the de facto R&D arms of the large pharmaceutical companies. That structure could be put at risk by changes in both accounting rules and IPR – creating an inability to buy R&D in a way that can be protected. While profoundly altering the industry structure, this could also open up the next level of innovation: partial findings by biotech firms might be open to development by the next generation of pharmaceutical companies in new ways, customized for new markets. It leads to the service model of pharmaceutical companies mentioned earlier. The question is how the new business model can be sustained. In the current system, the biotech firm sustains its cash flow by selling its latest findings; it is unclear how this would work in a different system.

Jarboe pointed out that this might become even more problematic if the biotech industry moves to a financial model of raising funds through securitization of its IP, rather than through venture capital. What happens if accounting and IPR rules move in the directions of increasing the risk at the same time that the financial model is moving to use securitization to reduce the risk?

Professor Weber noted that the result might be pharmaceutical companies bringing R&D back inside the company – not necessarily because of the ability to own the knowledge, but because the internal knowledge and insight about the product would give a short-term marketing advantage. The situation may be analogous to the consumer electronics industry of the late 1980s and early 1990s, where control over the basic commodity technology gave companies a short-term development and marketing advantage – at least until the next innovation came along.

Dr. Catherine Mann of the Institute for International Economics raised a note of concern about comparing the computer/software and pharmaceutical industries. The R&D processes are very different, as is the time to market: a three-to-four-month advantage makes a difference in computer chips; it doesn’t in pharmaceuticals, where there is a long clinical-trial period. Another difference lies in network externalities: IT has high network externalities; pharmaceuticals, with perhaps the exception of HIV/AIDS drugs, have few.

Both industries operate under international IPR rules – specifically the trade-related aspects of intellectual property rights (TRIPS) agreement under the World Trade Organization – which are one-size-fits-all rules. These international rules constrain what national governments can do with their public policy in this area.

Finally, much of the discussion has been at the micro level. The real concern should be how changes in the business model, the price of the product, and its use in the economy affect the performance of the macro-economy and jobs. Is the cost of this disruptive change, from the standpoint of the financial markets, large enough to cause macroeconomic effects?

Professor Weber commented that the time differences between software and pharmaceuticals are important points. The difference holds for drugs in the narrow sense. But in the larger area that includes medical devices and genetically modified food organisms, the time to market issue is much shorter. And the trend is toward the two industries becoming much more similar rather than diverging.

On the macro-economic issue, the question is how to look at the situation. From a purely macro perspective, value shifts from one part of the economy to another are part of normal economic evolution. Some people are hurt; some are helped. However, the shifts can have real macro-economic effects. For example, the concern over Microsoft stems from the fact that it accounts for half the market valuation of the software sector. If that valuation is the result of an IP valuation bubble even in just that one company, the bursting of that bubble may have large spin-off effects on the entire sector and the whole economy.

Nor is it clear that a new business model that is being created will return the same monopoly rents – just to a different set of economic actors. There may be a similar industry concentration in a commodity software industry, but the valuations might be much less than in the old proprietary software model. The result is a destruction of the old system of monopoly rents. This may be good from the macro-economic perspective that monopoly rents are bad. But the transition process can be extremely painful.

As Kent Hughes then pointed out, monopoly rents are what supposedly drive the innovation process. The process is one of a series of monopoly rents which are eroded over time and replaced by new monopoly rents (due to new innovations).

One participant noted that the cost of innovation in IT is relatively low – with the open-source process actually enhancing the capacity for innovation. In the biotech industry, R&D is a capital-intensive activity. In a commodity industry, resources are not available to carry out capital-intensive activities. A shift of pharmaceuticals and biotech to a commodity-type industry would therefore have a large macro impact on the innovation system.

Another participant stated that he was less concerned about IT and more concerned about pharmaceuticals. In IT there already is an alternative source of innovation and technology in the open-source movement. In pharmaceuticals, there is no such alternative source of innovation. That gives the pharmaceutical industry an enormous source of political strength for protecting IP in the sector.

Professor Weber commented that the cost of innovation is important, but not as normally thought of. The cost of innovation in pharmaceuticals is largely a function of the regulatory environment. Take the situation in the larger life-sciences industry – specifically the case of genetically modified seeds. Currently, a farmer buys seeds that are tied to a specific pesticide, such as Monsanto’s “Round-Up.” Imagine a different IP regime where farmers buy a genetic sequence licensed under something like the open-source general public license. They also buy tools to modify that sequence and compile it into seeds customized for their particular fields. They could also redistribute those seeds to others with similar types of fields. They could hire companies for specific services (pesticides, herbicides, fertilizer, irrigation) around that specific configuration of the customized seed and their specific ground characteristics.

In this alternative situation, sources of innovation are very different. Innovation is happening in farmers’ fields with real-time experimentation, rather than in the Monsanto R&D labs. The configuration of value-added is very different – centered more around providing the auxiliary services. The monopoly profits are very different. And the product life-cycle is very different, with products moving in and out of the system very quickly.

As one participant then pointed out, the predicate for this type of system is a relaxed regulatory system. It may be appropriate in some areas, but not in areas of modification of DNA and human health. Such a relaxed regulatory system is unlikely in the biotech areas. Open source is logical in the software area, but not in the biotech area where the consequences of a disastrous outcome, however remote, are enormous.

Professor Weber agreed with this assessment and noted that this is exactly where the discussion needs to focus. Where are the areas where strict regulation is required?

Dr. Jarboe also noted that the line between strict regulation and experimentation is already fuzzy because of the practice by some medical doctors of prescribing drugs, originally approved for one application, to treat conditions for which they were not originally approved. In addition, companies such as IBM are already moving to position themselves to be the information provider for personalized medicine. As the information revolution continues, the regulatory system will be modified and will modify the innovation system in return.

Dr. Mann raised the possibility that rather than pharmaceuticals becoming more like software, software would become more regulated like pharmaceuticals. Certain IT innovations would not be allowed and certain information and information services would be restricted because of issues of homeland security and of privacy. IT services would then become a highly regulated sector.

Professor Weber noted that in this scenario the big IT players become the major interface with the regulatory process – similar to the way that the large pharmaceutical companies manage the FDA regulatory process. This is not an implausible scenario. The pharmaceutical companies then might have a competitive advantage over the IT companies in managing the regulatory process.

Dr. Mann also noted that the pharmaceutical industry is in protectionist mode while the IT industry is not. From a public policy perspective, the IT industry is still very much in an open-market position, not even following the “protect the market” stance of the music industry. However, Professor Weber noted that there is a fair amount of protectionism of business models centered on the war over the open-source model. For example, Microsoft launches fierce counterattacks every time a local government or country announces that it is going to switch to open-source software. While it appears that Microsoft is losing the battle against open source, he expressed concern over the expansion of this protectionist mindset.

Dr. Jarboe raised a question about where the battles for openness were being fought. Microsoft may be losing the battle to protect operating systems, but other issues are heating up. There is still the issue of business process method patents, where a number of lawsuits will soon reach the courts. There are international efforts to constrain either the use or flow of information. With these other forces pushing toward greater regulation, are the principles behind open source strong enough to push back?

Professor Weber responded that a movement in the IT industry toward a highly regulated and patented system would end up creating more problems. Such an overprotection of the old system would destroy the innovative potential of the industry. The industry may generally understand this – but it also continues to be worried about the opposite scenario where there is no IP protection at all and the industry falls apart.

Dr. Hughes returned to the question of the different nature of the pharmaceutical industry, specifically the role of the public sector. Much of the basic research, and the training of the industry’s scientists, is paid for by the public sector. If the industry changes to this more open-source model, what is the role of the public sector in the R&D process? Does publicly supported research become the source of the pharmaceutical equivalent of Linux?

Professor Weber responded that much of the innovation already comes out of the public sector and universities. The small biotech firms, at least on the West Coast, are an adjunct to academic work. If market-based funding based on the creation of a proprietary product is reduced, R&D activity may return more to academic research environments. It would be a different activity, but not necessarily less innovative. Dr. Hughes pointed out it might be analogous to the role that university-based Engineering Research Centers play in the computer industry.

At this point, a participant referenced an EU-US conference at George Washington University earlier this year on open source. At that conference a step-wise model of IPR was proposed (open source escrow plan) whereby a program would become open source after it reached a certain valuation. It was also noted that the general legal discussions of Microsoft in the lawsuits were flawed because they blurred the difference between operating system and applications. The distinction is most important in the creation of new products; open source may be able to improve the process of software creation.

Professor Weber emphasized the problems of innovation and productivity in software – and that certain business models make those problems even greater. Thus, any improvement in the way in which software is created would have huge macro-economic (general public welfare) benefits. Hardware has progressed tremendously; software has not. Improvements in software creation are large sources for new value-added. However, value will be created in different parts of the economy and that shift will be messy and difficult.

The point was also raised about bundling the technology. The industry is beginning to move to a less technology-driven and more user-driven model as it stresses information services over information technology. But the transition to the service model will only occur with a great deal of displacement.

The issue of migration of technology out of the U.S. was raised by another participant. Professor Weber answered that the process was one of dispersion rather than migration. Whether this is a dispersion domestically or internationally may not matter as much as the fact of the dispersion itself. It will become a political issue insofar as it affects the concern over retaining high-wage jobs in the United States.

Dr. Jarboe noted that the whole concept of controlling either migration or dispersion of an information good – which is non-rival and non-excludable – is only possible in strong IPR legal regimes that allow for the information good to be constrained. Otherwise, the information will flow freely of its own accord. The issue becomes one of how to capture the economic rents – by creating a proprietary right, by exploiting first-mover advantages based on the information, or by becoming a service provider building upon a common information base.

Dr. Mann brought up the issue of doing the R&D versus gaining the benefits from that R&D. European pharmaceutical companies are deliberately coming to the U.S. to do their R&D. They believe there is a better climate for R&D here. There are externalities of university arrangements that make it more cost-effective. There is a health care system that will pay more for drugs. Thus, there are lower costs and higher profits for developing the drugs here as opposed to Europe. But, then they take the drugs back to Europe to be sold there. U.S. citizens have essentially paid for all of this – and have reaped the benefits. But the Europeans also get the benefits without paying the costs. Thus, there is a set of economic understructures that influence the international flows of intellectual property.

Professor Weber pointed out that this process describes a mature industry with a relatively slow product development cycle. Most industry profits come from very few drugs. But the industry is likely to change to one where there is a faster rate of innovation.

Dr. Mann asked how this might happen in a political climate where there is a thrust to greater regulation, especially in Europe over Genetically Modified Organisms (GMOs).

Professor Weber responded by saying it would happen in the U.S. first. But it will still matter where the basic research is done. For it not to matter, the following would have to be true: that the raw information that comes out of the basic research moves simply and quickly across national borders; that the tacit knowledge gained in the research process is not important; and that the companies will not try to tie up the information and use it to sustain high profit margins.

Dr. Jarboe pointed out that this is very close to a description of the open-source model whereas the traditional product life cycle model describes a situation where R&D is done in the U.S. first and then sold in Europe. That model changes dramatically if pharmaceuticals moves to an open-source service model where there is an open knowledge base that is customized at the service delivery point.

Dr. Mann pointed out that the customized system describes information-intensive health care delivery as an IT industry. It may not describe the narrow R&D-based product-specific pharmaceuticals/biotech industry. When the IT-driven health care delivery system runs up against restrictions on information, then it becomes closer to the current highly regulated pharmaceuticals model. Under that highly regulated system, it becomes very difficult to operate an open-source model.

Dr. Jarboe speculated about what might happen when the key product is the information – not the pill. What happens when the most important factor is not the economies of scale in making the pill, but getting information about what goes in it to the point of delivery – say, the local pharmacy that can make it on site? It is the regulation of the information, not the product, that becomes the key intervention point.

Dr. Hughes raised the issue of changes in both the IPR and trade regime in pharmaceuticals under an open-source approach to research, where companies don’t need to recoup profits by selling overseas, don’t have the costs of extensive clinical trials, and generic manufacturers can sell anywhere. Do we end up with a graduated pricing system – like a differentiated tariff system?

Dr. Mann pointed out that differential pricing is difficult at the pill level because the product can be bought over the Internet, creating a gray-market problem. In such a situation with no IPR protection, the pharmaceuticals industry claims that there would be no research on new drugs. Many of these new drugs will be so-called life-style drugs, which people will be reluctant to do without.

Professor Weber noted that the pricing issue comes about because the business model for producing those drugs is not transparent. People don’t understand why, if a drug costs eight cents to make, they can’t buy it for nine cents.

Dr. Jarboe commented that the same lack of transparency in the music industry is why college students don’t see the need to pay for music. But Professor Weber stated that in the music industry, it is a generational change by a group (students) who have a good understanding of the issues.

Professor Weber went on to comment on the issue of a system that has an innovation regime as its foundation but a greater level of regulation at the delivery point. Such a system could create huge problems for countries facing acute long-term health care issues due to aging populations. The current system has an implicit assumption of a continued high rate of innovation in dealing with issues of chronic care.

Lynn Sha of the Wilson Center raised the question of regulation. Even in areas that are supposedly tightly regulated, there are places/countries where unregulated activities take place. One of the lessons of the open-source model is that it is better to have these activities out in the open where they can be monitored. If the technology is not developed in the open, it will develop on its own without any public guidance.

Professor Weber noted that this points out that regulation is very difficult with new technologies. It is possible to set up a microbiology lab for roughly $10,000.

Dr. Jarboe raised the concern that as health care moves into a more information-intensive system – and that information is more widely available – the regulatory challenge becomes greater. It is more and more important that, as this series of discussions has pointed out, we look closely at the new rules in the information age (regulatory, accounting, IPR, etc.) – and that we work hard to get the rules right.

Audio and written summaries of earlier forums in this series are available at the Athena Alliance website at www.athenaalliance.org.

Counting Intangibles: The International View

hosted by
Athena Alliance and
the Project on America and the Global Economy of the Woodrow Wilson Center

Held at the Woodrow Wilson International Center for Scholars

Summary

Kurt Ramin, Commercial Director of International Accounting Standards Committee Foundation, presented the broad activities of the International Accounting Standards Board (IASB) in the area of financial reporting and intangible assets.1 After outlining the structure of IASB, he first described the Board’s current projects. Second, he spelled out some of the differences between the US Financial Accounting Standards Board (FASB) and the IASB – and the convergence between the two systems. Third, Mr. Ramin mentioned some of the work that the IASB is doing to develop standards in the field of intangibles. Specifically, he discussed how intangibles are currently treated under IAS 38 and how the forthcoming standards on business combinations will treat intangibles. He concluded with a description of how the new eXtensible Business Reporting Language (XBRL) might improve financial reporting.

In the discussion, Mr. Ramin and the participants noted that accounting rules have an enormous impact. Part of the Enron debacle could be traced to how little information Enron had to put in the various financial footnotes to its reported accounts. The consequences of accounting standards can also have a significant impact on the process of innovation. For instance, under FASB rules, costs for research and development are expensed. That is, they are subtracted from reported earnings in the year in which they are incurred. Under IASB rules, however, only research costs are expensed, while development costs can be amortized or deducted over a number of years. Likewise, the different treatment of internally versus externally acquired intellectual property could affect the R&D strategy and structure of a major company.
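The earnings effect of this difference can be sketched with some simple arithmetic. The figures below are entirely hypothetical, and the two branches are simplified stand-ins for the treatments described above (full expensing under FASB rules versus expensing research while amortizing development under IASB rules):

```python
def year_one_earnings(operating_income, research_cost, development_cost,
                      capitalize_development=False, amortization_years=5):
    """Reported earnings in the year the R&D spending is incurred."""
    if capitalize_development:
        # IASB-style treatment: research is expensed immediately, but
        # development is capitalized and amortized over several years,
        # so only one year's amortization hits current earnings.
        rd_charge = research_cost + development_cost / amortization_years
    else:
        # FASB-style treatment: all R&D is expensed as incurred.
        rd_charge = research_cost + development_cost
    return operating_income - rd_charge

# Hypothetical firm: 100 operating income, 20 research, 30 development.
print(year_one_earnings(100, 20, 30))                               # 50
print(year_one_earnings(100, 20, 30, capitalize_development=True))  # 74.0
```

The same R&D program thus produces noticeably different first-year earnings under the two regimes, which is why the choice of standard can shape a company’s R&D strategy.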

As the ongoing discussion over accounting for certain financial instruments, such as stock options, shows, accounting for intangibles is a complex and difficult activity. Establishing and enforcing the right rules will be critical to allocating capital to intangible as well as tangible investments.

The speaker at this policy forum was Mr. Kurt Ramin, the first Commercial Director of the International Accounting Standards Committee Foundation. A recognized expert on issues of the knowledge industry, Mr. Ramin served on the High Level Expert Group of the European Union on “The Intangible Economy Impact and Policy Issues.” A former partner at PricewaterhouseCoopers LLP, he has extensive experience as CFO to several different companies in the US and Germany. Mr. Ramin was expressing his personal opinions and not necessarily the opinions of the International Accounting Standards Board or the Foundation.

He was introduced by Dr. Kent Hughes, Director of the Project on America and the Global Economy at the Woodrow Wilson Center.

Mr. Ramin began by emphasizing the nature of the problem, including the large amount of the general catch-all accounting category “goodwill” still left in the financial system. He then went on to outline the structure and work of the London-based International Accounting Standards Board (IASB) and its organizational relationship with the Foundation. He emphasized the truly international nature of the Board and noted that a range of countries are represented on the various decision-making bodies of the IASB. He also noted that the Board of Trustees of the Foundation is deliberately geographically balanced, with Paul Volcker, former Chair of the US Federal Reserve Board, serving as Chair of the Board of Trustees.

Mr. Ramin briefly discussed the various ways in which the standards under development by the IASB are beginning to be used. He reminded the participants that the European Commission requires listed EU companies to prepare consolidated accounts using the IASB’s International Financial Reporting Standards (IFRS) by 2005.

Mr. Ramin touched on broad activities of the IASB. Current projects include:

issues of first-time adoption of the new International Accounting Standards (IAS);
general and specific improvements in standards on financial instruments;
issues of business combinations, including accounting for intangibles;
share-based payments; and
the convergence of standards, especially between the IASB and the FASB.
The business combinations project is divided into two phases. The first set of standards (Business Combinations 1) will come out in March 2004; it will be followed by Business Combinations 2 in September. The project on business combinations will set definitions, outline application of the purchase method based on fair value, set standards for accounting for goodwill and intangible assets and for the treatment of liabilities, and define measurement of the identifiable net assets.

As part of these standards, intangible assets will generally be amortized over a definite life. However, if the assets do not have a definite life, an impairment test will be used instead. Of course, the assets must be measurable and meet other tests that define an asset. The big change from previous practice is that pooling of interests in acquisitions will not be allowed. As a result, companies will be forced to identify and value specific intangible assets. According to Mr. Ramin, a new industry appears to be developing to help companies value these assets more carefully.

In developing these business combinations standards, IASB is coordinating with the International Valuations Standards Committee, especially on using the same definitions and measures.

Mr. Ramin then went on to touch upon the issue of share-based payments (commonly referred to in the US as stock payments and stock options). The IASB is moving to require that these payments be recognized in financial statements as an expense. The final standard is expected in December. It will have significant differences from the current FASB standards.

Following up on that point, Mr. Ramin spelled out some of the differences between FASB and the IASB. US standards – Generally Accepted Accounting Principles (GAAP) – are actually now rule-based rather than principle-based. IAS and IFRS started off as principle-based. However, the number of pages of these standards has grown, now running to some 1,600 pages, as problems with creating flexible principles surfaced. Still, IAS/IFRS remain a narrower set of principles than the GAAP rules.

Another major difference between GAAP and IAS/IFRS concerns valuations, especially in business combinations. Fair value, as opposed to historic cost, is an overriding principle within IAS/IFRS, including the revaluation of property, plant and equipment. Standards concerning the basis of consolidation are also different, with IAS/IFRS looking at control rather than the GAAP rule of majority voting rights.

An important difference with respect to intangibles is in the area of research and development (R&D). IAS 38 allows a company to capitalize the development costs part of R&D whereas GAAP requires all of R&D to be expensed.

Both the IASB and FASB have more detailed summaries of the differences between IAS and GAAP available – see http://www.iasc.org.uk and http://www.fasb.org.

Various joint projects between IASB and FASB focus on these convergence issues. Two specific differences that will be subject to discussions between FASB and IASB concern segment reporting and post-employment benefits.

IASB’s goal is to have IASB and FASB standards similar enough that the 1,200-plus non-American companies currently listed on American exchanges would not have to translate their European reports into American standards. To see the current differences, Mr. Ramin referred participants to Form 20-F filings by European companies with the Securities and Exchange Commission.

Mr. Ramin then went on to detail IASB’s approach to intangible assets, as described in IAS 38. He emphasized that this standard will effectively be changed as the new standards concerning business combinations are developed.

For the IASB, an intangible asset is “an identifiable, non-monetary asset without physical substance held for use in the production or supply of goods or services, for rent to others, or for administrative purposes.” To determine an asset’s fair value, IAS 38 requires the price be that which would be used in an arm’s length transaction. The impairment loss is that which exceeds its recoverable amount.

IAS 38 governs both intangible assets internally generated and those externally generated but separate from goodwill. Something is only recognized as an asset if it is identifiable, controlled, has probable future benefits specifically attributable to the asset, and its costs can be reliably measured. The criterion of identifiability is often the most difficult to meet; it is hard to describe the specific unit being classified as an asset or to link it to a specific marketable product or process.

If the item cannot be recognized as an asset, its cost must be expensed as incurred if it is an internally generated item or included in the category of goodwill if it is purchased in an acquisition. Therefore, research (but not development), start-up, training and advertising costs must be treated as expenses.

In addition, IAS 38 specifically prohibits the recognition as assets of internally generated items such as brands, mastheads, publishing titles, and customer lists and items similar in substance. Thus, the costs of these items must also be expensed.

Intangible assets can be measured either using historical costs or fair market value. Currently, amortization assumes a rebuttable presumption that the useful life of the asset will not exceed 20 years. If a longer period is used, an annual impairment test is allowed.
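The two treatments just described – straight-line amortization over a definite useful life versus an impairment test when no definite life can be established – can be sketched as follows. The figures are hypothetical and the functions are simplified illustrations, not the actual IAS 38 measurement rules:

```python
def amortized_value(cost, useful_life_years, years_elapsed):
    """Straight-line amortization: write the asset down evenly to zero
    over its definite useful life."""
    remaining = max(useful_life_years - years_elapsed, 0)
    return cost * remaining / useful_life_years

def impairment_tested_value(carrying_amount, recoverable_amount):
    """Impairment test: write the asset down only when its carrying
    amount exceeds the amount recoverable from it."""
    return min(carrying_amount, recoverable_amount)

# A 10.0 intangible amortized over the presumed 20-year life is
# carried at 7.5 after five years, regardless of its market value.
print(amortized_value(10.0, 20, 5))        # 7.5

# An indefinite-life asset carried at 10.0 but recoverable at only
# 6.0 fails the impairment test and is written down to 6.0.
print(impairment_tested_value(10.0, 6.0))  # 6.0
```

The practical difference is that amortization produces a predictable, mechanical decline in carrying value, while the impairment test leaves the value unchanged until evidence of a loss appears – which is why estimating fair value and useful life becomes so much more important under the new standards.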

The result of these standards will be greater emphasis on disclosure of intangible assets and more attention to their fair market value and useful life.

In general, a three-pronged approach is needed in the entire area of corporate reporting.2 The first is global GAAP. According to Mr. Ramin, this is achievable as we are right on the verge of convergence of standards. The second is industry-specific standards. This is especially important for intangibles, which are different in each industry. Third is the need for greater company-specific reporting.

Mr. Ramin then went on to describe the potential of eXtensible Business Reporting Language (XBRL) for improving financial reporting. XBRL allows for information to be tagged at the lowest possible level. The data can then be combined in whatever manner is most useful. Likewise, aggregated information can be disaggregated as necessary. Different data can be combined in different ways for different reporting purposes. Such tagging also improves data comparability, in part by forcing some common definitions. In addition, XBRL will allow for conversion of data – an especially important problem in the multi-lingual European environment. (For more information, see http://www.xbrl.org.)
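The aggregation idea behind XBRL can be illustrated with a toy example. This is not actual XBRL syntax (which is XML-based); the tag names and figures below are invented, and the point is only that data tagged at the lowest level can be rolled up along any dimension a report requires:

```python
# Hypothetical line items, each tagged at the lowest level with a
# concept tag and a business-segment dimension (illustrative only).
line_items = [
    {"tag": "ResearchExpense",    "segment": "Pharma",  "amount": 20},
    {"tag": "DevelopmentExpense", "segment": "Pharma",  "amount": 30},
    {"tag": "ResearchExpense",    "segment": "Devices", "amount": 5},
]

def aggregate(items, by):
    """Roll the tagged figures up along any dimension ('tag' or 'segment')."""
    totals = {}
    for item in items:
        totals[item[by]] = totals.get(item[by], 0) + item["amount"]
    return totals

# The same underlying data, combined two different ways:
print(aggregate(line_items, "tag"))
# {'ResearchExpense': 25, 'DevelopmentExpense': 30}
print(aggregate(line_items, "segment"))
# {'Pharma': 50, 'Devices': 5}
```

Because every figure carries its own tags, the choice of report structure is deferred to the reader of the data rather than fixed by the preparer – the property Mr. Ramin pointed to in the Enron example.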

With the push for greater disclosure, the ability to consolidate and disaggregate the data in various ways will be more and more important. For example, in the Enron case, the ability to disaggregate the data using XBRL would have been helpful in seeing through the various complex financial transactions.

Combining and linking data at the company and industry level is especially important for understanding intangibles. Taxonomies are needed for different industries – and are being created using XBRL tags based on the IASB standards.

Thus, according to Mr. Ramin, we are well on the way to more meaningful reporting data, including reporting on intangibles.

Dr. Kenan Jarboe, President of Athena Alliance, moderated the Q&A session. He opened with a question about the seriousness of the conceptual problems. A case in point is the difference in treatment between an intangible asset internally generated and one that is acquired from outside. He quoted from the April 2002 FASB Special Report on the new economy that there is no conceptual rationale for treating the two differently. That same report notes that “the rationale underlying FASB Statements 2 and 86 and IAS 38 does not provide a useful conceptual basis for a reconsideration of accounting for intangible assets.”3 He also noted that the Garten Task Force on Strengthening Financial Markets essentially concluded that intangibles could not be measured, only disclosed. Given this lack of conceptual underpinnings, can we really hope to make progress in accounting for intangible assets?

Mr. Ramin stated that the first goal was to identify the intangibles. Progress has been made on the issue of amortization of goodwill – so that it is not simply uniformly written down over the life of an asset, typically between 20 and 40 years. However, more steps may be needed to force disclosure, which will aid in the identification process. It was noted that FASB had a disclosure project underway which was curtailed due to a lack of resources. The new IASB business combination standard will have the end result of forcing more disclosure.

A participant noted that the ability to generate revenues is already a criterion for determining an asset. Why is that revenue not used for the valuation of intangibles, even for internally generated assets? Mr. Ramin responded that the problem was the reliability of the data. Part of the problem is the mixed accounting model, where some values are at fair market value while others are at historical cost. He believes that fair value should be used for all valuations.

Another participant, however, took exception to the use of fair value for intangibles. Fair value requires an estimate of future cash flows – which can change radically and differ from expert to expert. The value of the asset can fluctuate widely, making mush out of financial statements and further infecting the system with uncertainty. The result is greater investor confusion. Mr. Ramin’s response was that while it was very difficult to put a fair value on intangibles, the mixture of historical and fair value produces greater confusion for investors by creating data incomparability.

The question was raised about whether the financial markets are already capturing the imputed value of intangibles – whether or not they show up in the balance sheet or as an accounting footnote.

With regard to footnotes, the observation was made that the degree of information complexity is increasing. The business environment and financial transactions are increasingly complex. Information that is disclosed in footnotes is often sketchy and hard to integrate with other financial data. The problem, as one participant phrased it, is not one of creating additional complexity, but revealing complexity.

One participant reminded the group of the different purposes of the information – especially for accounting and for financial analysis purposes. In the case of accounting, there needs to be a clear, understandable set of books in order to avoid the apples and oranges comparison problem.

Mr. Ramin observed that the system of multi-level reporting, as embedded in the XBRL system, may help overcome that problem.

The group then discussed the problems and differences of mark-to-market and fair value. In the case of some assets, such as financial instruments, a market exists that can be used to calculate value. For many intangible assets, no such markets exist.

In addition, even where a market for assets exists, it may give differing values due to short-term factors. The example of real estate values was given: the actions of the Federal government in settling the S&L crisis drove down the value of real estate as the market was flooded with liquidated properties. Mark-to-market accounting in this case would have resulted in a 20% to 30% write-down across the American economy. However, another participant argued that if a fair valuation process had been in place, the problem would have been manifest, and possibly avoided.

The other problem with mark-to-market concerns the thinness of some markets. For example, Enron was trading in a spot market for electricity, whereas most electricity contracts are long-term. To avoid volatility, especially in thinly traded markets, one participant suggested using moving averages or a range of values.
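The smoothing suggestion above can be sketched in a few lines; the price series and the five-period window below are illustrative assumptions, not figures from the discussion.

```python
# Sketch of the volatility-smoothing idea for thinly traded markets:
# a trailing moving average dampens one-off price spikes.
def moving_average(prices, window=5):
    """Return one trailing average per full window of prices."""
    return [sum(prices[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(prices))]

# Hypothetical spot prices with the spikes typical of a thin market.
spot = [30, 32, 95, 31, 29, 33, 88, 30]
smoothed = moving_average(spot, window=5)
```

Reporting the smoothed series, or a range of values over each window, reduces the period-to-period volatility that concerned participants.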

The issue of companies’ responses to these changes was raised. Will the structure within a particular industry change as it becomes more or less advantageous from a financial accounting standpoint for a company to hold intangible assets?

It was noted that such responses may vary from industry to industry. The key is to identify the nature of a particular intangible asset and then determine how that asset is used in various industries. The rules need to be tied to the asset, not the industry.

One participant pointed out that the value of an intangible asset is not just based on factors within the company, but also external factors. A company located in an area known for specialized knowledge in that company’s product might be valued higher than one located elsewhere. How can those types of assets be valued?

Mr. Ramin responded that the approach for accounting is to break the situation down into units of value. The accounting problem concerning intangibles is in identifying the units. It was pointed out that these items used to be handled in the general category of goodwill – and that the task now is to add specificity. A number of countries are undertaking efforts to identify specific intangibles. But the problem remains difficult.

It was pointed out that community-based intangibles, as distinct from company-based intangibles, are often referred to as social capital. Social capital, especially in innovative localities such as Silicon Valley, is a very real asset, but one that is very difficult to capture in an accounting sense.

It was also noted that Enron was not simply an accounting problem but one of corruption and malfeasance. Enforceability of the rules is crucial, not just the understandability of the data.

A question came up about the IASB’s methods for calculating the value of fixed-price stock options, including concerns over the use of the Black-Scholes model. Mr. Ramin stated that IAS requires use of fair value, but the specific requirements of the standard are still being developed. It was noted by one participant that FASB is just beginning to look at the specifics of the fair value method. A discussion followed of the various methods for calculating options, including problems of non-transferability of the option and lack of vesting.
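For reference, the Black-Scholes model under discussion prices a European call option from five inputs; the figures below are hypothetical, and the model notably ignores the non-transferability and vesting issues participants raised.

```python
import math

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_scholes_call(s, k, t, r, sigma):
    """Black-Scholes value of a European call: spot s, strike k,
    time to maturity t in years, risk-free rate r, volatility sigma."""
    d1 = (math.log(s / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    return s * norm_cdf(d1) - k * math.exp(-r * t) * norm_cdf(d2)

# Hypothetical at-the-money option grant, one year to maturity.
price = black_scholes_call(s=100.0, k=100.0, t=1.0, r=0.05, sigma=0.20)
```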

The complexity of option pricing models raised an important question of transparency. Concern was raised as to whether investors and analysts would be able to “look through” the financial statements in order to be able to understand the assumptions and rationale for how certain assets were valued. Multilevel reporting helps in this area by giving investors and analysts better tools for understanding the financial details of a company. However, it may also raise the level of complexity. As a result, the role of the analyst as an information intermediary may increase in importance as we increase the complexity of the reporting system.

It was noted that the standard data building blocks as in the XBRL system would allow for multiple and competing reports by various analysts. The richness of the information would therefore be enhanced.

A question was raised about what is happening in other countries. Mr. Ramin noted that various countries have undertaken projects on intangibles for various reasons. For example, the European Union effort was focused on intangible measures in national economies, especially on knowledge workers, and on policymaking options.

One participant raised the issue of measurement and management (“what gets measured gets managed”). The concern is that intangibles are part of the larger system that may never be measured, especially the value of relationships – both internal and external. Thus, the value of the team is different from the value of the individual members – and the value of the team may never be adequately captured.

At this point, the discussion returned to the issue of how that measurement is defined, specifically as an asset or as an expense, and how companies react to those definitions. The case was raised of R&D. As discussed earlier, internal R&D is an expense. The results of external R&D, such as a patent, are assets that can be amortized over a number of years. This different treatment of internally versus externally acquired intellectual property could affect companies’ strategies and structures with respect to outsourcing R&D.

Participants discussed whether companies in some industries have spun off their R&D functions. The example was raised of a hypothetical pharmaceutical company. If the company develops a new drug internally, it cannot take a yearly charge against earnings to reflect the declining value of the patent. The situation can be quite different, however, if a company buys a small, innovative company with a valuable patent. Anything the company pays beyond the book value (including the buildings, equipment, and accounts receivable) will be recorded as goodwill – a broad category that includes many intangibles. In this case, GAAP allows the company to reduce taxable earnings (and hence increase cash profits) by amortizing (or deducting against earnings) the goodwill over time. In effect, the interaction of the accounting rules and the tax code encourages a firm to rely on outside research rather than innovating itself.
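The arithmetic behind this acquisition incentive can be made concrete; the dollar figures, the 20-year straight-line schedule, and the 35 percent tax rate below are all illustrative assumptions.

```python
# Buying innovation: the premium paid over book value is recorded as
# goodwill, and amortizing that goodwill yields an annual deduction.
def goodwill_effects(purchase_price, book_value, years, tax_rate):
    goodwill = purchase_price - book_value
    annual_amortization = goodwill / years        # straight-line write-off
    annual_tax_saving = annual_amortization * tax_rate
    return goodwill, annual_amortization, annual_tax_saving

# Hypothetical deal: pay $100M for a firm with $60M of book value.
goodwill, amort, saving = goodwill_effects(
    purchase_price=100e6, book_value=60e6, years=20, tax_rate=0.35)
```

Under these assumptions the acquirer books $40M of goodwill and deducts $2M a year, a deduction unavailable had the same drug been developed in-house.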

It was noted that the structure of R&D activities will follow the efficiencies of the transactions. Organizations form because internal transactions are more efficient than external. However, if there is an efficient market for intangibles, such as R&D, those assets will be acquired from outside.

The difference between FASB and IASB was once again noted. Under FASB rules, R&D costs are expensed, that is, they are subtracted from reported earnings in the year in which they are incurred. Under IASB rules, however, only the research costs are expensed, while development costs can be amortized or deducted over a number of years. Because the impact of development costs on earnings is spread out over a number of years, the company operating under IASB rules will show greater earnings in its first year. If the American company is concerned about reporting higher quarter-to-quarter earnings, it may decide to reduce costs that must be expensed and instead buy additional capital equipment that can be depreciated over time.
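A stylized comparison of the two regimes makes the first-year gap visible; the revenue and cost figures and the five-year amortization period are illustrative assumptions.

```python
# FASB: research and development are both expensed in year one.
# IASB: research is expensed; development is amortized over several years.
def first_year_earnings(revenue, research, development, amort_years):
    fasb = revenue - research - development
    iasb = revenue - research - development / amort_years
    return fasb, iasb

fasb_earnings, iasb_earnings = first_year_earnings(
    revenue=100.0, research=10.0, development=20.0, amort_years=5)
```

With these numbers the IASB-style firm reports first-year earnings of 86 against 70 for the FASB-style firm, even though the underlying cash flows are identical.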

One participant noted that this example points out the need to look at different industries. Some industries are more reliant on intangibles than others.

The discussion ended with the observation that accounting standards are under renewed scrutiny in the United States and around the world. One participant noted that the degree of debate over the pricing of financial instruments highlights how far we have to go on the problem of intangibles. Options, for all their complexity, are relatively well studied and well understood – especially when compared to other intangibles. Defining and accounting for these other intangibles is therefore that much more difficult.

In a complex global economy, a single approach to accounting may conceal as much as it reveals. Multiple approaches and rules may be needed. Establishing and enforcing the right set of rules will be critical to allocating capital to intangible as well as tangible investments.

1. Please note that Mr. Ramin was expressing his personal opinions and not necessarily the opinions of the IASB or the Foundation.

2. Taken from Samuel A. DiPiazza, Jr. and Robert G. Eccles, Building Public Trust, John Wiley & Sons, New York, 2002.

3. Wayne S. Upton, Jr., Business and Financial Reporting, Challenges from the New Economy, Special Report, Financial Accounting Series No. 219-A, Financial Accounting Standards Board, April 2001, p. 108.


The Invisible Advantage

Summary
Jonathan Low presented the findings from his research on intangibles as the drivers of company value. Financial markets are already using intangibles such as leadership, reputation, brands, human capital and intellectual capital as important factors in determining companies’ value. His research shows that there is a direct correlation between these factors and financial outcomes of companies. Companies need to do a better job of identifying and managing these intangible assets.

Policymakers also need to be aware of the importance of these factors. The current system of measuring what is driving business value – based on traditional financial measures – is seriously broken. New systems that account for intangibles are needed if we are to properly allocate capital or develop effective economic policy (such as tax policies).

In his role as respondent, John Mitchell raised the basic question of finding a balance between corporate and individual rights over these intangibles assets. The discussion that followed raised a number of questions including: how to account for non-controllable intangible assets, the role and priority for public policy in this area, and the barriers to and need for greater disclosure of this type of information.

The speaker for this first Issues Dialogue on Owning and Counting Intangibles was Jonathan Low of the Cap Gemini Ernst & Young Center on Business Innovation. Mr. Low is the co-author of Invisible Advantage: How Intangibles are Driving Business Performance. He was introduced by Dr. Kent Hughes, Director of the Project on America and the Global Economy at the Woodrow Wilson Center.

Mr. Low began by outlining the two topics he wished to cover. The first was to summarize both his research and the research of others in the area of intangibles. The second was to discuss some of the policy implications. A number of agencies, including the SEC, the Federal Reserve Board and the Financial Accounting Standards Board (FASB) are looking into this issue in the United States while the EU and the International Accounting Standards Board are also involved in policy-making on the subject. It is an issue that affects a number of public policy areas, such as intellectual property rights, tax policy, and others.

Mr. Low’s basic message is that markets are already valuing intangibles. Markets are already looking at factors such as leadership, reputation, and credibility as important factors in determining companies’ value. His research shows that there is a direct correlation between these factors and financial outcomes of the company. Company managers need to be aware of these factors – as do policymakers.

Many business leaders and government officials already recognize the importance of intangibles in decision-making and capital allocation. However, most officials have not taken a proactive stance in the management of intangibles.

Mr. Low’s list of intangibles includes:

  • Management
    • Leadership
    • Strategy Execution
    • Communication & Transparency
  • Relationships
    • Brand Equity
    • Reputation
    • Alliances & Networks
  • Organization
    • Technology and Processes
    • Human Capital
    • Workplace Organization & Culture
    • Innovation
    • Intellectual Capital
    • Adaptability

Contrary to the belief that intangibles can’t be measured, most companies are already capturing 70 percent of this data. Yet a recent survey of chief financial officers conducted by Mr. Low’s organization revealed that they believe the financial information they are required to capture and disclose to regulatory agencies is “utterly worthless” to managers. The challenge is to marry the traditional financial information with the data on intangibles.

Mr. Low’s research into intangibles was sparked by an interest in the increasing gap between book value and market value of companies. That gap has continued to grow. This suggests that the current system of measuring what is driving business value is seriously broken. It also reinforces the belief that economic activity has less and less to do with the production of tangible goods and more and more to do with intangibles. If we are to properly allocate capital or develop tax policy, for example, it is important that we understand and can properly measure these intangibles.
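The gap itself is simple arithmetic; the figures below are hypothetical, chosen only to show the scale such gaps can reach.

```python
# The book-to-market gap: market capitalization minus accounting book
# value is one crude proxy for the intangibles the balance sheet misses.
def intangible_gap(market_cap, book_value):
    """Return the implied intangible value and the market-to-book ratio."""
    return market_cap - book_value, market_cap / book_value

# Hypothetical firm: $300B market cap on $50B of book value.
implied_intangibles, market_to_book = intangible_gap(300e9, 50e9)
```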

The role of intangibles as value drivers continues to grow relative to the value of tangible assets. Research shows that market value is less and less correlated with the value of tangible assets (plant and equipment). Companies are clearly getting more market bang for their investment buck in intangible assets rather than in tangible assets.

As Mr. Low and his colleagues began to look into this issue of the gap between market and book value, they first surveyed institutional investors to better understand what these investors look at in companies. This survey revealed that 35 percent of their portfolio allocation decisions were based on non-financial (intangible) factors. Interestingly, information gathered by institutional investors on intangibles comes from sources other than the companies themselves. A study of sell-side analysts (those who work for investment banks, brokerage houses, etc.) showed that the more they referred to intangible factors, the more accurate were their quarterly earnings projections. The top intangible factors that investors and analysts look at are:

  • Strategy Execution
  • Management
  • Credibility
  • Quality of Strategy
  • Innovativeness
  • Ability to Attract Talented People
  • Market Position
  • Management Experience
  • Quality of Executive Compensation
  • Quality of Major Processes
  • Research Leadership

Another study by Mr. Low’s organization of initial public offerings (IPOs) showed that intangibles were the only significant difference between the successful offerings and those which failed to increase in value. Traditional financial measures had no statistical correlation with future stock value. But intangibles did. The most important intangible was the alignment between corporate strategy and employee interests – something that in fact can be measured.

A survey of senior executives came to the same conclusion: the most important factors they were concerned about were intangible. However, there were significant gaps between the importance of the information and the quality of the information these managers were receiving from their own companies. According to Mr. Low, 81 percent of the senior executives surveyed said that the information they were getting on these intangible factors was not very good.

Yet, there is strong statistical evidence that if you can close the gap between the importance of the information and the quality of the information, you can improve stock market performance, compound annual growth and return on investment. This presents an important opportunity to both business managers and policymakers in that better capturing, measuring and managing intangibles can greatly increase company performance.

Interestingly, investments in technology were not found to be a differentiating factor in performance. The markets had already discounted technology investment. Technology is considered the ante needed just to be in the game. What separated the winners from the losers were the organizational factors needed to effectively utilize technology.

When grouped together into a value creation index, these intangibles explained as much of companies’ market value as traditional financial performance measures do. In addition, improvement in this value creation index corresponds to a specific improvement in market value.

These value drivers include innovation, quality, knowledge of the customer, human capital, alliances, technology, brand equity, leadership, and environment. Alliances, brand equity, technology, and human capital were the most important drivers of value in non-durable manufacturing.

Each of these drivers can be measured using a number of specific indicators. Most of this data is already collected by companies. It is therefore possible to describe a company’s specific value drivers and then manage the company to improve those intangibles.
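One minimal way to sketch such a composite index is to normalize each driver’s indicator score and average them; the driver names follow the list above, but the scores, the 0-10 scale, and the equal weighting are illustrative assumptions rather than Mr. Low’s actual methodology.

```python
# Toy value creation index: min-max normalize each driver score, then
# take the equal-weighted average, yielding a number between 0 and 1.
def value_creation_index(scores, lo=0.0, hi=10.0):
    """Equal-weighted average of normalized driver scores."""
    return sum((s - lo) / (hi - lo) for s in scores.values()) / len(scores)

# Hypothetical indicator scores on a 0-10 scale.
drivers = {
    "innovation": 8, "quality": 7, "customer knowledge": 6,
    "human capital": 9, "alliances": 5, "technology": 7,
    "brand equity": 8, "leadership": 9, "environment": 6,
}
index = value_creation_index(drivers)
```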

Case studies have shown over and over the importance of intangibles such as reputation, intellectual property and R&D, intellectual capital, and leadership. For example, Coca-Cola learned the value of reputation during the contamination scare in Europe.

While the rest of the world is already moving ahead with accounting principles to capture intangibles, the U.S. is moving somewhat more slowly. A number of European nations have specific proposals on measuring intangibles. The International Accounting Standards Board has issued its standard on the subject, International Accounting Standard (IAS) 38.

According to IAS 38, an intangible asset is an identifiable non-monetary asset without physical substance held for use in the production or supply of goods or services, for rental to others, or for administrative purposes. Under IAS 38, an asset is recognized if and only if:

  • The asset is identifiable
  • The asset is controlled
  • Future benefits specifically attributable to the asset are probable
  • The cost can be reliably measured

As Mr. Low’s research shows, intangibles are identifiable, future benefits can be determined and cost can be measured. Control of intangibles can be a somewhat more difficult subject.

It follows from the recognition criteria that all of the following costs should be recognized as expenses:

  • All expenditures on research, not development
  • Start-up costs
  • Training costs
  • Advertising costs

According to Mr. Low, it is important for both business leaders and those interested in public policy to look at and understand the IASB guidelines since they are defining the nature of the discussion.

In summary, Mr. Low stressed the need to determine your critical intangibles, measure and benchmark those intangibles, undertake initiatives to improve your performance on key intangibles and communicate what you’re doing.

The last point is often forgotten. Right now there is a great opportunity to improve the information available to investors by including information on intangibles. It is to a company’s advantage to disclose this information and be as transparent as possible.

By including intangibles, we can do a better job in allocating capital – which is the key to future economic growth.



Dr. Kenan Jarboe, President of Athena Alliance, outlined his organization’s interest in the subject of intangibles. Information assets are the fuel of the information economy. There is no separate group of workers involved in the creation of intangibles – everyone in every location is involved in the creation and use of intangibles. The key issue is understanding how we utilize intangibles so that everyone benefits from the transformation to the information economy. Both accounting and intellectual property rules are necessary for the utilization of these information assets, as a market requires both being able to value an asset and to own that asset. For that reason, it is important that we get the rules correct to make sure that the market works for the benefit of all.


Dr. Jarboe then introduced the respondent, John Mitchell, who addressed the controllability or ownership issue. Mr. Mitchell is the principal of the law firm Interaction Law and was formerly Legal Director for Public Knowledge, a public policy organization concerned with issues of the public information commons. He began by pointing out his interest was more on the human side of the human capital formulation and on the intellectual side rather than the property side of intellectual property.

Mr. Mitchell’s basic question is where to draw the line. Implicit in our understanding of the importance of education in building human capital is the notion that the individual acquiring that capital cannot sell it. Yet implicit in some of the notions about increasing and protecting intellectual property, human capital and other intangibles is a transfer of human capital from the individual to the corporate balance sheet. There are already a number of ways to capture employees’ intangible assets and human capital so that they do not leave the company, such as trade secrets and non-compete covenants. Such protection is consistent with accounting requirements, such as those under IAS 38, for controlling an asset. The danger is that these types of protection could go too far and decrease the value of these assets to the individual – as when someone who has a marketable skill can no longer market that skill because of these types of restrictions.
He went on to point out that there may be a tension between these attempts to protect intangibles and the ways in which intangibles are used. For example, being considered one of the “best places to work” is considered a positive intangible of a company. Yet restrictive clauses on employees in an attempt to capture and protect their human capital would undermine a company’s rating as a “best place to work.”

According to Mr. Mitchell, it has been a recognized public policy goal to find a balance between corporate and individual rights. For example, many states place limits on non-compete covenants so as not to deprive the general economy of certain skills and knowledge.

In the last few years there has been an increase in the areas covered under the rubric of intellectual property rights. For example, while the copyright on a book prohibited reproduction and distribution, it did not restrict the owner of that book from giving it or lending it to someone else. Nor did the copyright holder have any say over the technology used to produce the book. Now, in the digital age, these concepts are under review. Copyright is being used to leverage control over digital technologies, which may be dangerous both to corporations and to society at large. In addition, intellectual property rights are being asserted in areas where traditionally they have not been. For example, there is the case of a major retailer attempting to assert copyright over when announcements of future sale prices can be released.

Parenthetically, tied in with issue of control is the issue of who is placing the value on the asset.

Mr. Mitchell cited the video rental market as an example where more flexible control over intellectual property led to the creation of a revenue stream for the movie studios that would not otherwise have existed. The asset of the rental rights of videos could not be controlled by the movie studios and therefore could not be booked as an asset under the accounting rules. Yet that market has become a major source of revenues.

A more current example is the issue of open source software. Open source software has become both a major investment and a revenue source for companies such as IBM and HP. How will the accounting rules cope with this asset that is not under the control of the company?

In these cases, the lack of control over intangible assets may be more valuable to companies than having complete control. There is a parallel with the role of public goods, such as the highway system. No individual company owns or controls the asset, yet it is vital to companies’ prosperity. These public assets can end up helping one company more than another. But it is important to society at large that public investment in these assets continue. The same is true in human capital, where we cannot simply rely on companies seeking to identify and invest in the best and the brightest in exchange for exclusive rights to that human capital.

A key question is to identify which assets are best left to the public. We must then resist over-privatizing those assets which are better left to the commonwealth.



During the question and answer period a number of issues were raised:

Dr. Jarboe began the discussion by raising the difference between managing intangibles as expenses, which business leaders traditionally attempt to minimize, and managing intangibles as assets, which requires long-term investment. This issue is tied to the complicated issue of taxes, where assets must be depreciated over a long period of time while expenses are written off immediately, thereby lowering taxes. It also ties into the accounting concepts of assets versus liabilities.

Mr. Low stressed that the key for management of intangibles is the ability to identify and measure the intangible, regardless of whether it gets classified as an asset or expense. Understanding how intangibles work, how they can be created and how (and to what extent) they can be controlled is important.

The question was asked specifically which countries have done the most work in this area. Mr. Low pointed to the Scandinavian countries, southern European countries and Asian countries (China, Korea and Taiwan) as examples.

One participant pointed out that we have always accounted for these intangibles, lumping them into the category of “goodwill.” Mr. Low noted that oftentimes the category of goodwill covers a variety of concepts that are not really drivers of value. He raised the example of the notion of synergy in mergers. According to his research, the true underlying values in mergers are intangibles such as brands, reputation, human capital and the like. The key is to identify the specific intangibles rather than simply utilize the inarticulate category of goodwill. Dr. Jarboe pointed out that we may be in the same situation economists found themselves in a number of years ago when studying productivity, where much of the gains were found to come from the catch-all residual category.

Another participant raised the issue of the nature of human capital, specifically how human capital is increased through learning. That improvement increases the market value of that human capital. Presumably, that potential increase in value would be reflected in compensation. In addition, the trend towards contingency work and independent contractors would undercut the notion of non-compete agreements. Thus, the market would presumably be correcting the balance between company and individual control over their intangible assets.

Mr. Mitchell pointed out, however, that there may be external benefits to society not captured in these market transactions. The danger is that non-compete agreements may lock up an intangible asset so that it is not fully utilized, to the detriment of society as a whole. It is also difficult to separate out what is the public and what is the private investment in an intangible such as human capital. In addition, it is more difficult for individuals to value and measure their intangible assets – leading to information asymmetries that distort the functioning of the market.

He went on to point out the danger in relying upon private agreements, using the example of some entertainment companies that are attempting to circumvent the first-sale rule (the restriction on the right of a copyright holder to prevent further sale or rental) through end-user licensing agreements. The market might correct for this by providing a discount to those who agree, for example, not to resell a book. But society may suffer from the restrictions this would impose on the used-book market.

As an aside, Mr. Low mentioned that there is a software package being developed for bank loan officers to be able to incorporate educational levels as an intangible in making loan decisions. Dr. Jarboe pointed out the importance of making sure this software captures all forms of human capital and skills, both law degrees and plumbing skills. Such a broad definition of human capital would give poorer communities and individuals better access to financial assets needed to power economic growth.

It was also mentioned that for airlines, airport landing rights are important intangible assets. What is the role of the market vs. the government in the allocation of this asset?

A question was asked about policy priorities. Mr. Low stated that he believed the top priority was in the area of comparability of data. While the issue of comparability of data for accounting purposes covers a number of topics, it includes the ability to identify and measure these intangible assets. It also includes questions about what information companies are required to disclose, so that investors get a clearer picture of companies’ situations. According to Mr. Low, we need to reform the accounting system, which is no longer adequate to supply the information needed to value and manage today’s companies and to make informed public policy. One role for the Federal government would be to sponsor better research on these issues.

He also emphasized the need for increased interaction between U.S. agencies dealing with these issues and the activities of international bodies such as IASB.

The final question concerned why companies don’t make information on their intangibles more readily available. Mr. Low stated it was a classic case of information asymmetry. Part of the problem is that there are too many factors to focus on. In part, this speaks to the issue of corporate governance, especially concerning the combined role of chairman and CEO. One of the arguments that has been made for splitting these positions is the problem of information overload.

Mr. Low ended the session with a positive note that increasingly he finds companies understand the need to both capture and disclose information on their intangible assets. Progress is being made.

