In an earlier posting, I argued that knowledge management might come close to being the “killer app” needed to spark a major boom in AI use.
Daron Acemoglu, writing in Project Syndicate, has laid out a somewhat contrary argument. He notes (as many others have) that productivity increases are coming from the automation of routine cognitive tasks.
“Early adoption of generative AI has naturally occurred where it performs reasonably well, meaning tasks for which there are objective measures of success, such as writing simple programming subroutines or verifying information. Here, the model can learn on the basis of outside information and readily available historical data.”
But other tasks – "evaluating applications, diagnosing health problems, providing financial advice – do not have such clearly defined objective measures of success, and often involve complex context-dependent variables (what is good for one patient will not be right for another). In these cases, learning from outside observation is much harder, and generative AI models must rely instead on the behavior of existing workers." As a result, he argues, productivity gains from AI will be lower than many expect.
Such cases are certainly more of a challenge. But relying on the behavior of existing workers is half of what knowledge management is all about. Capturing the expertise of workers is what makes knowledge management so important and so difficult. It calls for articulating the unarticulated. Often referred to as Polanyi's Paradox ("we know more than we can tell"), tacit knowledge underpins much of what we refer to as expertise.
David Autor argues that the ability to capture and share expertise is what makes AI so powerful. In an article published earlier this year, he contends that “By providing decision support in the form of real-time guidance and guardrails, AI could enable a larger set of workers possessing complementary knowledge to perform some of the higher stakes decision-making tasks currently arrogated to elite experts like doctors, lawyers, coders and educators. This would improve the quality of jobs for workers without college degrees, moderate earnings inequality, and — akin to what the Industrial Revolution did for consumer goods — lower the cost of key services such as healthcare, education and legal expertise.”
However, Autor forcefully articulates the need for AI, like any tool, to be grounded in foundational expertise. “By shortening the distance from intention to result, tools enable workers with proper training and judgment to accomplish tasks that were previously time consuming, failure-prone or infeasible. Conversely, tools are useless at best — and hazardous at worst — to those lacking relevant training and experience. A pneumatic nail gun is an indispensable time-saver for a roofer and a looming impalement hazard for a home hobbyist.” Getting knowledge management right is critical, as a recent story on Boeing in the Wall Street Journal points out.
I noted above that capturing tacit knowledge is half of the task of knowledge management. The other half is sharing learning. Expertise is not fixed. It changes and evolves. There is no set pool of knowledge waiting to be uncovered. Learning is a process, not an end point. Businesses and governments will have to understand that deploying AI is likewise a constant process. A train-once-and-done approach is tempting – and dangerous.
In the Industrial Economy, managers and workers understood that tools wore out. In the age of AI, we need to recognize that expertise and knowledge can become obsolete. A train-once-and-done approach would lock in existing "good" practices. It would freeze expertise and undercut the development of judgment. Thus, any AI and knowledge management system needs a built-in mechanism for dynamic renewal. That mechanism must be built on a process of constant monitoring and accurate evaluation.
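What such a renewal mechanism might look like in practice is open-ended, but the core loop is simple: keep scoring the system's answers against current human judgment, and flag when performance drifts. A minimal sketch, with the class name, window size, and threshold all hypothetical choices rather than anything from the article:

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class RenewalMonitor:
    """Hypothetical sketch: score a model's answers against fresh
    expert judgment and flag when expertise appears to have gone stale."""
    window: int = 50            # how many recent cases to evaluate over
    threshold: float = 0.85     # agreement rate below this triggers renewal
    results: list = field(default_factory=list)

    def record(self, model_answer, expert_answer) -> None:
        # Each new case is scored against current human judgment,
        # not against the historical training data.
        self.results.append(model_answer == expert_answer)
        self.results = self.results[-self.window:]

    def needs_renewal(self) -> bool:
        # Only judge once a full window of recent evidence has accumulated.
        if len(self.results) < self.window:
            return False
        return mean(self.results) < self.threshold
```

The point of the sketch is the design choice, not the code: evaluation is continuous and anchored to present-day expert behavior, so the system signals when retraining is due instead of silently freezing yesterday's practices in place.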
There is another reason why AI needs constant monitoring, evaluation, and adjustment. AI is, as one researcher put it, often dumb as a rock. A recent Washington Post story recounts some of the more hilarious examples of how Google got it wrong. The author, Shira Ovide, advises that "With the generative AI from Google, OpenAI's ChatGPT and Microsoft's Copilot, you should assume that they're wrong until proved otherwise." Others have pointed out that the problems of "hallucinations" (made-up answers) and incorrect information may be a built-in feature, not a fixable bug. Now often referred to as "slop," this trait of AI will require ongoing human oversight.
The bottom line is that AI has tremendous potential, especially as a tool for knowledge management. But realizing that potential may be harder than many originally believed. And humans will, as far as we can see, remain an essential part of the system.