Innovation in response to COVID-19

New data from a UK survey indicates that the COVID-19 pandemic is jump-starting innovation (at least in the UK). The survey, The Business Response to Covid-19: the CEP-CBI survey on technology adoption, looked at four types of innovation:

  • Introduction of new products or services
  • Adoption of digital technologies such as customer relationship management systems, remote working technologies, mobile technologies, cloud computing, automation, and AI. (Process innovations)
  • Adoption of digital capabilities, such as e-commerce, advanced analytics, and cyber-security. (Process innovations)
  • Adoption of new management practices

[Note: While I am not completely clear on the difference between digital technologies and digital capabilities, the difference is apparently enough to produce significantly different levels of adoption – see below.]

The survey reports that 60% of firms indicated that they adopted new digital technologies or new management practices; 38% adopted new digital capabilities and 45% introduced new products or services. Of firms adopting new digital technologies or new management practices, 95% did so because of the pandemic. The corresponding figure was 90% for adoption of digital capabilities and only 75% for new products or services.

Answers to the survey also indicated that the overwhelming number of firms (90%) expect the innovations to be permanent changes, not just temporary measures to get through the crisis.

The lower overall rate of new products and services, as well as the lower direct response to the pandemic, suggests that process innovations are what businesses are focused on.

The results are in stark contrast to innovation in “normal” times. The UK Innovation Survey for the years 2016-2018 indicated that only 13% of businesses were involved in process innovation and 18% were involved in product innovation.

The results make intuitive sense. With lower overall demand and a concern over day-to-day operating sustainability, the pandemic is causing companies to look at greater efficiencies rather than the latest new thing.

Not surprisingly, companies saw macroeconomic uncertainty as the biggest barrier to innovation (with that uncertainty in the UK complicated by Brexit). Following that were what we have generally seen as barriers to innovation: financial constraints and a number of factors I would lump together as absorptive capacity (e.g. lack of skills, resistance to change, lack of information, lack of technological infrastructure to support new digital technologies, applicability doubts, etc.).

Finally, the survey found – not surprisingly – that companies that had adopted new digital technologies or capabilities pre-pandemic were significantly more likely to innovate in response to the crisis.

The good news here is that more firms are responding to the crisis with innovations, especially process innovation. The trick is to help them sustain that activity – especially for what I would call the new-to-innovation companies.

Intangible employment flatlines in Sept

This morning’s employment data from BLS for September is rather disappointing (even though the unemployment rate dropped significantly). Employment rose by only 661,000, compared to the 850,000 economists expected. Almost all of that growth was in tangible-producing goods and services industries. Similar to the previous months, increases occurred in industries that require a physical presence with customers, specifically Accommodation & Food Services and Trade, Transportation & Utilities.

On the intangible-producing side of the economy, employment grew modestly in almost every industry. But those gains were offset by a large drop in government employment, which the BLS says was mainly in state and local education – probably reflecting the slow return to the classroom.

Once again, under normal circumstances this would be a positive increase. However, in the age of COVID-19, this is only a modest rebound in employment. And keep in mind the worrisome trend of furloughed workers being permanently let go. Much more needs to be done.

Whatever happened to “competitiveness”?

As I was doing some research on the current debate on industrial policy (see last week’s posting), I was struck by something missing. Economic competitiveness rarely gets mentioned in policy discussions nowadays. The concern that we need to compete with other nations is common, especially with respect to competing with China. But the rubric of “competitiveness” as a framework for policymaking seems to have disappeared from the public discourse.

First, a little history. The term came into the public discourse in 1984 when President Reagan created the President’s Commission on Industrial Competitiveness (aka the Young Commission), chaired by then-HP CEO John Young. In part, the Commission seems to have been set up to counter the focus on industrial policy by the Democrats in an election year.

Whatever the reason for its establishment, the Young Commission succeeded in changing the terms of the debate. It shifted the focus 90 degrees from policies for specific industries (often referred to as vertical policies) to crosscutting policies that (theoretically) benefit all industries (horizontal policies). Their 1985 report Global Competition: The New Reality stated that “Competitiveness can be defined as the degree to which a nation can, under free and fair market conditions, produce goods and services that meet the test of international markets while at the same time maintaining or expanding the real incomes of its citizens.” Note two important concepts: the test of international markets and rising standards of living.

Young then went on to form the Council on Competitiveness in 1986 to carry on the Commission’s work, and the Omnibus Trade and Competitiveness Act of 1988 addressed a number of competitiveness issues, including establishing a Competitiveness Policy Council (something I was heavily involved in).

While the CPC had its funding eliminated in 1997, other organizations, such as the Council on Competitiveness, carried on the work. The World Economic Forum (WEF) annually publishes its Global Competitiveness Index, and IMD (the International Institute for Management Development) issues its World Competitiveness Rankings. Most recently, the Council on Competitiveness established a National Commission on Innovation and Competitiveness Frontiers.

Over the years the framework for analyzing competitiveness has evolved only slightly. The Young Commission report laid out four pillars of competitiveness:

  • technology;
  • capital;
  • human resources;
  • and, trade.

The Competitiveness Policy Council started with eight issue areas:

  • capital formation;
  • education;
  • training;
  • public infrastructure;
  • corporate governance and financial markets;
  • trade policy;
  • manufacturing; and,
  • critical technologies.

WEF’s Competitiveness Index covers 12 pillars:

  • institutions;
  • infrastructure;
  • ICT adoption;
  • macroeconomic stability;
  • health;
  • skills;
  • product market;
  • labor market;
  • financial system;
  • market size;
  • business dynamism; and,
  • innovation capacity.

All of these provide a framework of components for understanding a nation’s competitiveness.

The Council on Competitiveness’ National Commission on Innovation and Competitiveness Frontiers has taken a slightly different approach. Rather than looking at components of competitiveness, they identified three challenges:

  • Developing and Deploying at Scale Disruptive Technologies
  • Exploring the Future of Sustainable Production and Consumption, and Work
  • Optimizing the U.S. Innovation System

Based on analysis by a working group in each area, they developed a nine-point action plan, to be explored further:

  • Build a Diverse Pipeline of Innovators – Encourage and support more women, and racial and ethnic minorities in the pursuit of innovation and entrepreneurship.
  • Prepare America’s Workforce for the Future – Invest more in STEM education and worker retraining for coming market disruptions.
  • Expand the U.S. Map of Innovation Investment Hubs – Build more diverse engines for innovation across the United States.
  • Secure U.S. Capabilities in Critical Technologies – Including microelectronics, artificial intelligence, and biotechnology.
  • Strengthen U.S. Economic Resiliency – Regain control of critical supply chains and reduce dependency on China and other foreign sources.
  • Confront China’s plans for technological, military, and commercial supremacy.
  • Amplify U.S. University Investments – Particularly in technology transfer, commercialization and industry engagement.
  • Bridge the “Valley of Death” Gap in Innovation – Grow government investment in small business innovation, startups, and the testing of new technologies.
  • Deepen the Sustainability Culture in U.S. Businesses – Including more efficient use of energy, use of cleaner energy, and more sustainable materials sourcing.

These are all good ideas. And they expand the problem-set implied in the original definition of competitiveness to include environmental sustainability.

However, I wish they were more explicitly tied to a competitiveness framework. For example, short-term thinking in companies’ management practices and in the financial markets was identified as an area of concern in the Young Commission and CPC reports—growing out of their frameworks that explicitly looked at financing and the financial markets as a component of competitiveness.

We need a way to look systematically at the foundations of our economic competitiveness. Just as monitoring one’s personal health is better than waiting for a diagnosis and way better than just treating the symptoms, we need a mechanism to go beyond the current problems.

Taking this more comprehensive approach would complement the work of existing organizations, such as the Council on Competitiveness. I would note that the law creating the Competitiveness Policy Council is still on the books. Maybe it is time to revive it. The argument is being made that Congress should resurrect the Office of Technology Assessment to provide more systematic, comprehensive, and long-term analysis on technology issues. The same argument holds true for the Competitiveness Policy Council.

You say tomato, I say … Industrial Policy

As I have noted before, the concept of industrial policy is making a resurgence. But it is beginning to sound like the old song, “You say tomato, I say tomahto” (or the modern version, “you say GIF, I say gif”). In this case, you say industrial policy, someone else says innovation policy – and they really mean technology policy.

Here are a few examples. Caleb Watney in “Untangling innovation from industrial policy” argues that we shouldn’t use the term anymore — it is too encompassing and therefore vague. Adam Thierer in “On Defining ‘Industrial Policy‘” takes the narrower approach, focusing on “developing or retrenching selected industries to achieve national economic goals” (a definition he borrows from historian Ellis Hawley as published in the 1986 AEI book The Politics of Industrial Policy). Dylan Gerstel and Matthew P. Goodman (From Industrial Policy to Innovation Strategy: Lessons from Japan, Europe, and the United States) seem to wrap industrial policy in the rubric of innovation policy, by which they really mean technology policy.

These discussions recount the debate over industrial policy in the 1980s. As someone who was heavily involved in those debates, I find the current discussions have a feeling of déjà vu. Back then, I tried to make sense of the various conceptual frameworks and approaches to industrial policy (see “A Reader’s Guide to the Industrial Policy Debate“). Many of today’s arguments fall into those same rubrics.

In some cases, the debate is on aid to a specific industry (e.g. save the semiconductor industry, save the auto industry, save the …) – what I have called the problem-solving approach. The danger in this approach is that it fails to consider the system-wide effect of these actions on the economy as a whole. In other cases, the debate focuses on a specific policy (e.g. trade protection, Buy America, R&D tax credits) in an instrument-specific approach. The danger here is the law of the hammer: give a child a hammer and everything looks like a nail. In other words, the policy instrument is used not necessarily because it is appropriate but because it is available.

In another version, industrial policy is essentially manufacturing policy (what I labeled back in the 1980s the “reindustrialization” approach). And while I am a strong manufacturing-matters advocate, I also strongly believe that industrial policy goes beyond manufacturing.

Two other versions of industrial policy routinely appear in the debate. I had labeled these the “industrial triage” and “reallocation” approaches. In industrial triage, some industries are doing fine while others are doomed to decline, so attention should be paid to those where government help would be the most effective. The reallocation approach is similar in that it seeks to move resources from “sunset” industries to “sunrise” industries. While not using the same terms as before, part of the current debate over “industries of the future” is built upon this argument. Of course, both of these are subject to the classic criticism of “picking winners and losers.” Advocates for these approaches often respond with the argument that picking winners is exactly what we should be doing.

Only occasionally raised in the debate is the most comprehensive version, what I labeled “structural industrial policy,” which focuses on the entire production system and multiple economic objectives. The earlier example I gave of this approach is the Japanese developmental state. It should be noted, however, that such an approach is difficult to establish and maintain due to a number of issues, including political capture.

I would also note that the politics of industrial policy has changed very little over the past few decades. With some notable exceptions (such as Senator Marco Rubio), liberals argue for a more activist industrial policy while conservatives argue for less. As a result, we tend to have a fear-driven policy that appeals to both liberals and conservatives. Today, we have a fear-of-China industrial policy. In the 1980s it was a fear-of-Japan industrial policy. In the 1950s and ’60s we had a fear-of-Russia industrial policy. One could even argue that Hamilton’s Report on Manufactures and Henry Clay’s American System were in part fear-of-Britain industrial policies.

I understand the political value of the fear-of approach and the reluctance to advocate for a systemic structural approach. Absent a national security rationale, such a more comprehensive approach brings attacks of “centralized planning.” But without the more systemic structural view, industrial policy devolves into piecemeal reactive actions. Such an outcome simply reinforces the criticism that industrial policy is ineffective and captured by special interests. To avoid that outcome, we need to raise the debate to a higher conceptual level.

Understanding Intangibles as an Investment

One of the difficulties in managing intangibles has been making them understandable and measurable. The Corrado, Hulten, and Sichel (C-H-S) framework has done an excellent job of this at the macroeconomic level. But at the firm level, little has changed on that subject since my working paper Reporting Intangibles back in 2005.

That may be changing. Michael J. Mauboussin and Dan Callahan at Morgan Stanley have produced an excellent primer for investors: One Job: Expectations and the Role of Intangible Investments.

The paper looks at the nature of intangibles and how to measure them. When it comes to information provided to investors, however, Mauboussin and Callahan’s bottom line is that the bottom line has lost its usefulness. “It used to be that earnings were on the income statement and investments were recorded mostly on the balance sheet,” they note. “The rise of intangible investments means that the bottom line is now a mix of earnings and investment.” Since accounting standards “do a poor job of reflecting the rise of intangible investment,” their advice is that a “thoughtful investor’s best response is to make the adjustments necessary to see the world as it is.”

Thankfully, the article provides some guidance on how to make that adjustment and allocate information provided by companies between expenses and investments. They cite a 2017 paper by Luminita Enache and Anup Srivastava (E-S), “Should Intangible Investments Be Reported Separately or Commingled with Operating Expenses? New Evidence.” As Mauboussin and Callahan describe it:

Their [Enache and Srivastava] approach starts with total SG&A [sales, general, and administrative costs] and subtracts R&D and advertising, generally considered intangible investments, to come up with what they call “Main SG&A.” They then assess what part of Main SG&A is necessary to maintain the business (“Maintenance Main SG&A”) and designate the remaining Main SG&A to intangible investments (“Investment Main SG&A”). Maintenance Main SG&A, which is matched with sales, captures costs such as office and warehouse rents, customer delivery costs, and sales commissions. Investment Main SG&A reflects spending that seeks to build organizational assets and includes employee training, customer acquisition costs, and software development.

Updated to 2019 data, this approach is generally consistent with the macroeconomic findings, based on C-H-S.
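The arithmetic of the E-S split can be sketched in a few lines. This is a hypothetical illustration only: all dollar figures are made up, and the `maintenance_share` parameter is an assumption for the example — in the actual Enache-Srivastava paper the maintenance portion is estimated econometrically, not taken as a given fraction.

```python
def split_sga(total_sga, rd, advertising, maintenance_share):
    """Illustrative E-S-style split of SG&A into maintenance vs. intangible investment.

    maintenance_share: assumed fraction of Main SG&A needed simply to
    maintain the business (a stand-in for the paper's estimation step).
    """
    # R&D and advertising are generally considered intangible investments,
    # so they are removed first to get "Main SG&A"
    main_sga = total_sga - rd - advertising
    # Portion matched with sales (rents, delivery costs, sales commissions)
    maintenance_main = main_sga * maintenance_share
    # Remainder builds organizational assets (training, customer acquisition,
    # software development)
    investment_main = main_sga - maintenance_main
    return {
        "main_sga": main_sga,
        "maintenance_main_sga": maintenance_main,
        "investment_main_sga": investment_main,
        "total_intangible_investment": rd + advertising + investment_main,
    }

# Hypothetical company, figures in $ millions
result = split_sga(total_sga=1000.0, rd=150.0, advertising=50.0,
                   maintenance_share=0.5)
print(result)
```

The point of the exercise is visible in the last line: under these assumed numbers, reported operating expenses hide a substantial amount of intangible investment that never appears on the balance sheet.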

Of course, this does not answer all of the questions about accounting for intangibles. For example, there is still the question of valuation of an asset that cannot easily be separated from the value of the ongoing enterprise. And, as Mauboussin and Callahan note, applying the E-S framework can be difficult.

But still, the Mauboussin and Callahan article is a useful step forward and should be widely read.

More on beyond the Endless Frontier Act

In an earlier posting I discussed the proposed Endless Frontier Act and a counterproposal, “Improving the Endless Frontier Act” (by Windham, Hill, and Cheney), recently published in Issues in Science and Technology. That issue also contained three other articles on the same topic.

One of those is a direct defense of the Endless Frontier Act: “To Compete with China, America Needs the Endless Frontier Act.” In it, MIT President Rafael L. Reif argues that:

We need to fill the gap in the nation’s research system that opened with the disappearance of institutions such as Bell Labs, which did pathbreaking research that resulted in revolutionary technologies such as the transistor. And we also need to accelerate the movement of technological advances into the marketplace.

He agrees that other mechanisms, such as DARPA-like entities, would be useful (a key argument of Windham, Hill, and Cheney). But the heart of his argument is the need for “a new Technology Directorate at the National Science Foundation to fund use-inspired basic research, primarily at universities, focused on advancing key technology areas.”

He does acknowledge the point I raised about existing NSF programs:

Some current NSF programs do nod in the direction of technology development, and those efforts could remain in the existing directorates. They are no substitute for a new directorate that would have the sole purpose of fostering basic research explicitly to advance specific technological solutions, whether those solutions relate to artificial intelligence, climate change, or cyber security.

The key concepts here are basic research and university-based. In contrast to the argument of Windham, Hill, and Cheney, Reif believes that university-based, use-inspired basic research is the best way to plug what he refers to as the Bell Labs gap.

Melissa Flagg and Paul Harris, in “How to Lead Innovation in a Changed World,” disagree. They argue that the Endless Frontier Act is based on a science and technology system that no longer exists.

When [Vannevar] Bush called on government to take the lead [in The Endless Frontier], industry and philanthropic spending were declining, but today the opposite is the case. …  Industry R&D now accounts for the largest portion (70% in 2018) of national spending. Even with an increase in federal government investment, it is unlikely that this would change.

Furthermore, in recent decades, the nature of industry R&D and its relationship with government has transformed in ways Bush wouldn’t recognize. … The tech companies have created their own ecosystems of science collaboration, talent pipelines with universities, and innovation centers offshore.  …

The globalized nature of the leading US technology companies highlights how the broader science and technology system has changed. Over just the past 20 years, global investment in S&T has tripled to over $2.2 trillion. The United States now accounts for just over 25% of the global total, with Chinese science having grown quickly to an almost equivalent size. But that leaves half of global science and technology made up by a diverse group of countries led by Japan, Germany, South Korea, India, France, and the United Kingdom.

They argue that:

US S&T policy must now progress from its successful postwar framework to a new framework fit for the twenty-first century. Without this, future policy interventions run the risk of wasting scarce resources, or worse, reinforcing existing problems and inequalities. The nation needs more systematic analysis of all the different inputs for science and technology—funding, yes, but also human capital, infrastructure, and the policy and regulatory framework—and new approaches to optimizing these in what is a very different environment.

Thus, while Reif argues that the Endless Frontier Act is needed because it goes beyond “just doing more of the same,” Flagg and Harris argue that the Act is exactly doing more of the same.

Greg Tassey takes a different tack in “The Endless Frontier Act Could Foster Technology Job Growth Across the United States.” Tassey begins by supporting the legislation. But by the fourth paragraph, he goes beyond the Act and points out a major area of needed improvement:

To be truly successful, however, the new measure must include a “master recipe” or conceptual model of the major technical, organizational, and institutional elements necessary to create an effective innovation hub. Modern technology evolves through a series of research and development phases, production steps, and marketing efforts—all of which require some degree of public input. Each set of suppliers—representing different tiers in the high-tech supply chain—requires special technology, capital, labor, and infrastructure assets. These resources, in turn, are applied at different phases of the development life cycle—each with its own needs for public technical infrastructure support

Thus, the proposed legislation should identify the full set of institutions that will support both a technology’s evolution and its subsequent production and commercialization, as well as a policy mechanism for evaluating and analyzing those roles. The range and complexity of these assets is varied, and therefore must be systematically analyzed in terms of the policy tools that should be incorporated in the legislation.

The rest of the article is a tour de force of what a national Technology-Based Economic Development (TBED) policy would look like. Tassey, a former Chief Economist at NIST and a leading expert on the subject, has written much over the years on the need for and the vision of an American technology and manufacturing policy. This article is a good summary of the latest thinking on technology policy.

These four articles (the three discussed above and the one I discussed before) agree on one point: we need to rethink and update our S&T policies. As Rafael Reif stated, “Just doing more of the same will not get us where we need to be.”

What that means is unclear. Let’s continue to have the debate on where we need to be and how to get there. My hope is that the Endless Frontier Act will stimulate that debate. My concern is that in the rush to do something, it will short-circuit the debate. And that would be a terrible missed opportunity.

A warning from Dr. Fauci

Dr. Fauci and his colleague David Morens have published a thoughtful paper putting the current pandemic in perspective – and offering a warning. The paper, “Emerging Pandemic Diseases: How We Got to COVID-19,” was published last month in the prestigious journal Cell.

In addition to being a short history of pandemics and a tutorial on infectious diseases, it clearly places the COVID-19 pandemic (and the SARS-CoV-2 virus that causes the illness) in the context of the broader interaction between humans and nature. As such, their analysis offers a grave warning for the future:

SARS-CoV-2 is a deadly addition to the long list of microbial threats to the human species. It forces us to adapt, react, and reconsider the nature of our relationship to the natural world. Emerging and re-emerging infectious diseases are epiphenomena of human existence and our interactions with each other, and with nature. As human societies grow in size and complexity, we create an endless variety of opportunities for genetically unstable infectious agents to emerge into the unfilled ecologic niches we continue to create. There is nothing new about this situation, except that we now live in a human-dominated world in which our increasingly extreme alterations of the environment induce increasingly extreme backlashes from nature.

Science will surely bring us many life-saving drugs, vaccines, and diagnostics; however, there is no reason to think that these alone can overcome the threat of ever more frequent and deadly emergences of infectious diseases. Evidence suggests that SARS, MERS, and COVID-19 are only the latest examples of a deadly barrage of coming coronavirus and other emergences. The COVID-19 pandemic is yet another reminder, added to the rapidly growing archive of historical reminders, that in a human-dominated world, in which our human activities represent aggressive, damaging, and unbalanced interactions with nature, we will increasingly provoke new disease emergences. We remain at risk for the foreseeable future. COVID-19 is among the most vivid wake-up calls in over a century. It should force us to begin to think in earnest and collectively about living in more thoughtful and creative harmony with nature, even as we plan for nature’s inevitable, and always unexpected, surprises. [emphasis added]

In other words, we will continue to be at risk from new infectious diseases long after the current pandemic is gone — due to our own behavior toward nature.

You can’t say we haven’t been warned.

Employment trend continues in August

It looks like the US labor market is continuing its recovery – ever so slowly. This morning the BLS reported that employment in August rose by 1.4 million, in line with expectations.

Employment in both tangible- and intangible-producing industries rose. Similar to the previous months, increases occurred in industries that require a physical presence with customers, specifically Accommodation & Food Services and Trade, Transportation & Utilities.

On the intangible-producing side of the economy, employment in Professional & Business Services and Educational & Health Services grew. Notably, employment in the tangible-producing parts of Educational & Health Services declined – specifically in nursing and residential care facilities and child care centers.

Telecommunications also saw a decline in employment, while all other industries saw modest gains. Manufacturing employment continued to grow, but at a very weak pace. And Government employment continued to grow, adding almost 600,000 over the past two months.

Once again, under normal circumstances this would be a very positive increase. However, in the age of COVID-19, this is only a modest rebound in employment. And keep in mind the worrisome trend of furloughed workers being permanently let go.

Much more needs to be done.

The Endless Frontier Act and charting a direction for technology policy

One of the long-standing issues in technology policy is improving the process of turning new technologies into economic activities. While often referred to as technology transfer, the concept of “translation research” has taken hold as the key component of the innovation model. It is at the heart of the so-called “valley of death” where, in the linear model of advancing from the results of basic research to technology development to commercial product, the process falls short. Under this model, funding is available from mostly public sources for the first stages. Basic and some applied research is seen as a public good which by its very nature cannot attract adequate private funding. Private funding is available for the later stages of technology development when a working prototype or some other form of proof of concept is available and a company can take the technology to a commercializable scale. The lack of funding for the transition stage [the translation of an idea to a product] creates this valley of death where the technology is no longer a pure public good nor yet a purely private good (capable of generating a return on investment). [Note: for my critique of the linear model, see here.]

A key issue then is who should fund and undertake this translation research.

Earlier this year, a bipartisan, bicameral group of legislators [Senators Chuck Schumer (D-NY) and Todd Young (R-IN); Representatives Ro Khanna (D-CA) and Mike Gallagher (R-WI)] introduced a bill to increase investment in technology development and commercialization. The Endless Frontier Act (S. 3832 and H.R. 6978) would expand the National Science Foundation (NSF) into the National Science and Technology Foundation by creating a Directorate for Technology with a budget of $100 billion over five years ($20 billion per year) to fund university-based technology development (see summary here). [Note that the entire FY2020 NSF budget is $8.3 billion.]

While the goal of increasing investment in technology development and commercialization is laudable, questions have been raised as to whether the Act is the best structure to achieve this goal.

One such critique is by three long-standing, respected technology policy experts: Patrick Windham, Christopher Hill, and David Cheney. [In the interest of full disclosure, I should point out that they are long-time friends and colleagues.] Published in the Summer 2020 edition of Issues in Science and Technology, their article “Improving the Endless Frontier Act” argues to let NSF be NSF and not task it with something beyond its ken. They argue instead for an expansion of Defense Advanced Research Projects Agency (DARPA)-like organizations within mission-oriented agencies.

The heart of their argument is the difference between science and engineering and the role of the research university in the innovation process:

“In the first place, most radical new technologies in recent decades have come from solutions-driven engineering work funded by agencies such as DARPA and NASA or from companies driven by market opportunities. On the government side, DARPA has played a decisive role in developing advanced materials, the personal computer, the internet, and more recently advanced prosthetics and RNA vaccines. Several government organizations, in particular the Naval Research Laboratory, were central to the development of the Global Positioning System. Government technology agencies draw on important fundamental research funded by NSF, and they themselves also fund additional university research in support of their technological goals—but as part of larger R&D programs that fund a wide range of R&D performers, including companies that often go on to commercialize these new technologies. University discoveries contribute to such work, but university research alone did not create most of the radical new technologies of our time.

Although the incentives and capabilities at universities and at NSF itself are well-aligned with basic research and open publication, universities are not equipped to undertake large applied engineering projects, much less to translate the resulting new technologies into products and processes. Putting universities in the lead on technology development misunderstands both their role and their capabilities in the innovation system. It would also risk diluting the valuable basic research role that universities and NSF play.”

I have been a supporter of the “civilian DARPA” approach going back to my days as a staffer to Senator Jeff Bingaman. I remember well (as will the article’s authors) the debate between the Advanced Civilian Technology Agency (ACTA) and the Advanced Technology Program (ATP) in what became the Omnibus Trade and Competitiveness Act of 1988. For a number of reasons, the program approach (ATP) prevailed over the agency approach (ACTA).

Since that battle, thinking has shifted from the creation of a single civilian DARPA to multiple agency/technology entities. Over the years there have been numerous calls for the creation of DARPA-like entities to address technologies ranging from education to cybersecurity.

As the article notes, the Advanced Research Projects Agency-Energy (ARPA-E) at the Department of Energy is an example of a successful mission-oriented entity. Created in 2009, ARPA-E funds (according to their website) “high-potential, high-impact energy technologies that are too early for private-sector investment.” Funding spans the range of energy technologies from energy generation to storage to use.

One must be careful, however, not to assume that any DARPA-like-sounding agency in fact operates like DARPA. An example is the question of whether the Biomedical Advanced Research and Development Authority (BARDA) is truly a DARPA-like entity. Unlike DARPA or ARPA-E, BARDA has a much more limited technological focus. Part of the Department of Health and Human Services (HHS) Office of the Assistant Secretary for Preparedness and Response, BARDA “supports the transition of medical countermeasures such as vaccines, drugs, and diagnostics from research through advanced development towards consideration for approval by the FDA and inclusion into the Strategic National Stockpile.” Think anthrax, Ebola, and Zika as well as COVID-19. [It should be noted that DARPA has an active Biological Technologies Office.]

BARDA also has unique authorities, combining an R&D role with an operational one. In its R&D role, BARDA not only provides funding for development of new vaccines and other medical countermeasures, it also helps guide those products through the FDA approval process and the manufacturing scale-up phase. Operationally, it is the lead government agency for procurement and stockpiling of these products. Thus, it is a unique combination of developer and customer.

Given the challenge of designing a successful organization, I would recommend that the Government Accountability Office (GAO) be tasked to undertake a more detailed analysis of DARPA, ARPA-E, and BARDA to distill lessons learned. A starting point might be a re-review of a 30-year-old but still relevant discussion of the underlying issues by Alic and Robyn entitled “Designing a Civilian DARPA.”

In addition to increasing DARPA funding and creating DARPA-like (ARPA-E-like) agencies, the authors propose setting up a Technology Frontier pilot program in NSF and increasing funding for the existing Manufacturing USA institutes.

I strongly support increased funding for the Manufacturing USA institutes.

But I’m a little unsure of the need for a Technology Frontier pilot at NSF.

Somewhat surprisingly, neither the article (nor, as far as I can tell, any other discussion of the Act) mentions existing NSF programs geared toward doing exactly what the Act hopes to accomplish. NSF’s Engineering Research Center (ERC) program has been around since 1985. ERCs are university-based, multi-institution consortia that undertake interdisciplinary research and technology translation activities. According to NSF, the program has funded 75 ERCs and resulted in over 200 spinoff companies and over 850 patents. ERC outcomes include minimally invasive surgery technologies and high-speed internet technologies. Recently NSF announced $104 million in funding over 5 years to support 4 new centers in the areas of cryogenics for biological systems, electric vehicle re-charging technologies, quantum networks, and Internet of Things for precision agriculture.

A few years ago, the National Academies did a study on the future of NSF’s Engineering Research Centers. Their report called for shifting the direction of these centers to transdisciplinary convergent research focused on the Grand Challenges for Engineering. NSF embraced this concept, embedding convergent research into its plans for funding Gen-4 Engineering Research Centers. NSF also established a Convergence Accelerator (C-Accel) program focused on transitioning research into practice. The 2019 program pilot provided $39 million to 43 teams in two topic areas: Harnessing the Data Revolution and the Future of Work at the Human-Technology Frontier.

Rather than start up another new pilot program, I would suggest building upon and expanding the ERC/C-Accel programs.

One final point. The authors also critique the proposal for pre-determining which technologies should be supported. I agree that it is all too tempting to build political support by targeting the hot new technologies of the day as opposed to letting the funding follow the technology:

“The US government actually has a better way to identify and fund promising new areas of research and technology. National leaders set overall priorities while researchers and agency experts scan for new scientific and technical opportunities and propose new R&D directions. Then agency leaders and, for big initiatives, the White House and Congress vet these ideas and decide which to support. The result is a flexible federal system that identifies new opportunities, reviews them, and creates a diverse and high-quality portfolio of R&D programs.

The top-down portion—statements of overall national priorities—consists of annual White House memoranda on presidential R&D priorities (including one for fiscal year 2021), agency planning documents, and congressional laws. The bottom-up portion is a remarkable American strength. Instead of looking only at current technologies, researchers and agency technical experts constantly scan for the next big things in their fields and propose new initiatives. Sometimes agency directors directly evaluate these ideas and decide which to support, as was the case with the National Nanotechnology Initiative. Sometimes NSF workshops and National Academies meetings test and refine new R&D proposals before policy leaders consider them, as with Academies reports that help set priorities in chemistry, space sciences, and other fields. Some of the resulting investments are large, such as multiagency initiatives in high-performance computing and nanotechnology, while many others are smaller or even experimental “seedling” projects. By funding a wide range of existing and new R&D areas—funding an overall R&D portfolio—federal agencies do not just develop today’s technologies; they also begin investing in the technologies of the future. Confining technology development support to a relative short list of predetermined areas that can only be updated every four years or so seems sure to result in a system far less dynamic than the current one.”

[I would just add a note that this is related to, but does not directly address, the other debate over R&D funding: curiosity-driven versus use-driven research. See also Pasteur’s Quadrant and Highly Integrative Basic and Responsive (HIBAR) research.]

Let me conclude by lauding the efforts of Senators Schumer and Young and Representatives Khanna and Gallagher. It is my hope that the Endless Frontier Act will spur action toward increasing funding for the critical task of commercializing new technologies. It is also my hope that the Act will spur a vigorous and productive debate over how best to channel that investment. The article by Windham, Hill, and Cheney is an important part of that debate.

COVID-19 and the ventilator surge

This spring saw an example of industrial surge capacity as companies such as Ford and GM revamped factories to produce medical ventilators in response to the COVID-19 pandemic. While the pandemic rages on, enough time has passed that we can look back at the surge and ask whether it was a success or a failure. The answer, as this article from today’s Washington Post illustrates, is clearly both. The surge produced a lot of ventilators (so it was a success); too many, in fact (so it was a failure, according to some).

The ventilator case (which I’m sure will end up in business school courses) illustrates both the strengths and weaknesses of the surge process. A successful surge requires a high level of adaptability. Such adaptability is beneficial for normal operations in a rapidly changing market environment. However, an ad hoc surge process risks the overcommitment of valuable resources. Having contingency plans that can be adapted to the situation would be more effective and efficient.