History Of AI In 33 Breakthroughs: The First Expert System

In the early 1960s, computer scientist Ed Feigenbaum became interested in “creating models of the thinking processes of scientists, especially the processes of empirical induction by which hypotheses and theories were inferred from data.” In April 1964, he met geneticist (and Nobel Prize winner) Joshua Lederberg, who told him how experienced chemists use their knowledge of how compounds tend to break up in a mass spectrometer to make guesses about a compound’s structure.

Recalling in 1987 the development of DENDRAL, the first expert system, Lederberg remarked: “…we were trying to invent AI, and in the process discovered an expert system. This shift of paradigm, ‘that Knowledge IS Power’ was explicated in our 1971 paper [On Generality and Problem Solving: A Case Study Using the DENDRAL Program], and has been the banner of the knowledge-based-system movement within AI research from that moment.”

Expert systems represented a new stage in the evolution of AI, a shift away from its initial emphasis on general problem-solvers that tried to express human reasoning in code, i.e., drawing inferences and arriving at logical conclusions. The new focus was on knowledge, specifically the heuristic knowledge of specialized (narrow) domain experts.

Feigenbaum explained heuristic knowledge (in his 1983 talk “Knowledge Engineering: The Applied Side of Artificial Intelligence”) as “knowledge that constitutes the rules of expertise, the rules of good practice, the judgmental rules of the field, the rules of plausible reasoning... In contrast to the facts of the field, its rules of expertise, its rules of good guessing, are rarely written down.”

Pamela McCorduck in This Could Be Important: My Life and Times with the Artificial Intelligentsia, 2019:

“In 1965, Feigenbaum and Lederberg gathered a superb team, including philosopher Bruce Buchanan and later Carl Djerassi (one of the ‘fathers’ of the contraceptive pill) plus some brilliant graduate students who would go on to make their own marks in AI. The team began to investigate how scientists interpreted the output of mass spectrometers. To identify a chemical compound, how did an organic chemist decide which, out of several possible paths to choose, would be likelier than others? The key, they realized, is knowledge—what the organic chemist already knows about chemistry. Their research would produce the Dendral program (for dendritic algorithm, tree-like, exhibiting spreading roots and branches) with fundamental assumptions and techniques that would completely change the direction of AI research.”

The experience with DENDRAL informed the development of the Stanford team’s next expert system, MYCIN (the common suffix associated with many antimicrobial agents), designed to assist physicians in diagnosing blood infections. Feigenbaum used MYCIN to illustrate the various aspects of knowledge engineering, stating that expert systems must explain to the user how they arrived at their recommendations, “otherwise, the systems will not be credible to their professional users.”
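The flavor of a rule-based system that can justify its conclusions, the property Feigenbaum insisted on, can be sketched in a few lines. MYCIN itself was written in Lisp and its real knowledge base is not reproduced here; the rules and facts below are invented purely for illustration:

```python
# Minimal forward-chaining sketch of a MYCIN-style rule system.
# The rules and facts are invented examples, not MYCIN's actual knowledge base.

RULES = [
    # (rule name, premises that must all hold, conclusion to assert)
    ("R1", {"gram_negative", "rod_shaped", "anaerobic"}, "suggests_bacteroides"),
    ("R2", {"suggests_bacteroides", "blood_culture_positive"}, "treat_for_bacteroides"),
]

def infer(evidence):
    """Fire rules until no new facts appear, recording which rule produced each fact."""
    facts = set(evidence)
    derivation = {}  # fact -> (rule name, premises used)
    changed = True
    while changed:
        changed = False
        for name, premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                derivation[conclusion] = (name, premises)
                changed = True
    return facts, derivation

def explain(fact, derivation, depth=0):
    """Answer the user's 'WHY?' by unwinding the derivation chain."""
    pad = "  " * depth
    if fact not in derivation:
        return f"{pad}{fact}: given as evidence"
    name, premises = derivation[fact]
    lines = [f"{pad}{fact}: concluded by rule {name}"]
    lines += [explain(p, derivation, depth + 1) for p in sorted(premises)]
    return "\n".join(lines)

evidence = {"gram_negative", "rod_shaped", "anaerobic", "blood_culture_positive"}
facts, derivation = infer(evidence)
print(explain("treat_for_bacteroides", derivation))
```

The explanation trace, not the inference itself, is the point: the chain of fired rules is exactly the justification a physician would demand.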

As happened again and again with new breakthroughs throughout the history of AI, expert systems generated a lot of hype, excitement, and false predictions. Expert systems were “the new new thing” in the 1980s, and it was estimated that two-thirds of the Fortune 500 companies applied the technology in daily business activities, only to end in the “AI Winter” of the late 1980s.

Already in 1983, Feigenbaum identified the “key bottleneck” that would lead to their eventual demise, the challenge of scaling the knowledge acquisition process: “The knowledge is currently acquired in a very painstaking way that reminds one of cottage industries, in which individual computer scientists work with individual experts in disciplines painstakingly to explicate heuristics. In the decades to come, we must have more automatic means for replacing what is currently a very tedious, time-consuming, and expensive procedure. The problem of knowledge acquisition is the key bottleneck problem in artificial intelligence.”

The automation of knowledge acquisition eventually happened, but not via the methods envisioned at the time. In 1988, members of the IBM T.J. Watson Research Center published “A statistical approach to language translation,” heralding the shift from rule-based to probabilistic methods of machine translation, and reflecting another shift in the evolution of AI to “machine learning” based on statistical analysis of known examples, not comprehension and “understanding” of the task at hand.

And while knowledge for Feigenbaum was the heuristic knowledge of experts in very specific domains, knowledge became, especially after the advent of the Web, every digitized entity accessible over the internet (and beyond) that could be mined and analyzed by machine learning, and over the last decade, by its more advanced version, “deep learning.”

In his 1987 personal history of the development of DENDRAL, Lederberg wrote about Marvin Minsky’s criticism of generate-and-test paradigms, that for “any problem worthy of the name, the search through all possibilities will be too inefficient for practical use.” Lederberg: “He had chess playing in mind with 10^120 possible move paths. It is true that equally intractable problems, like protein folding, are known in chemistry and other natural sciences. These are also difficult for human intelligence.”
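DENDRAL’s answer to Minsky’s objection was to fold the domain constraints into the generator itself, so that impossible candidates are never enumerated. A toy version of the contrast (the “chemistry” here, fragment masses and the target, is invented for illustration) might look like:

```python
from itertools import product

# Toy generate-and-test: find counts of molecular fragments whose masses
# sum to an observed total. Masses and the target are invented examples.
MASSES = {"CH2": 14, "OH": 17, "NH2": 16}
TARGET = 90
MAX_COUNT = 6  # allow at most 6 of each fragment

def naive_generate_and_test():
    """Enumerate every complete assignment, then test each one --
    the exhaustive search Minsky argued would be intractable."""
    tested, hits = 0, []
    for counts in product(range(MAX_COUNT + 1), repeat=len(MASSES)):
        tested += 1
        if sum(c * m for c, m in zip(counts, MASSES.values())) == TARGET:
            hits.append(counts)
    return tested, hits

def pruned_generate(prefix=(), total=0):
    """Interleave the constraint into the generator: abandon any partial
    assignment whose mass already exceeds the target. Safe because all
    masses are positive, so the running total can only grow."""
    masses = list(MASSES.values())
    if total > TARGET:
        return 0, []  # pruned branch: none of its completions can succeed
    if len(prefix) == len(masses):
        return 1, ([prefix] if total == TARGET else [])
    tested, hits = 1, []
    for c in range(MAX_COUNT + 1):
        t, h = pruned_generate(prefix + (c,), total + c * masses[len(prefix)])
        tested += t
        hits += h
    return tested, hits

naive_n, naive_hits = naive_generate_and_test()
pruned_n, pruned_hits = pruned_generate()
print(naive_n, pruned_n)  # the pruned generator visits far fewer nodes
```

Both searches find the same answers; the constrained generator simply refuses to expand branches the mass-spectrometry data has already ruled out, which is the essence of DENDRAL’s plan-generate-test strategy.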

In November 2020, DeepMind’s AlphaFold model, a deep learning system designed to predict the three-dimensional structures of proteins, achieved remarkably accurate results. In July 2022, DeepMind announced that AlphaFold could predict the structure of some 200 million proteins from 1 million species, covering just about every protein known to science.
