History Of AI In 33 Breakthroughs: The First ‘Thinking Machine’
Many histories of AI start with Homer and his description of how the lame blacksmith god Hephaestus fashioned for himself self-propelled tripods on wheels and “golden” assistants, “in appearance like living young women” who “from the immortal gods learned how to do things.”
I prefer to stay as close as possible to the notion of “artificial intelligence” in the sense of intelligent humans actually creating, not just imagining, tools, mechanisms, and concepts for assisting our cognitive processes or automating (and imitating) them.
In 1308, Catalan poet and theologian Ramon Llull completed Ars generalis ultima (The Ultimate General Art), further perfecting his method of using paper-based mechanical means to create new knowledge from combinations of concepts.
Llull devised a system of thought that he wanted to impart to others to assist them in theological debates, among other intellectual pursuits. He wanted to create a universal language using a logical combination of terms. The tool Llull created consisted of seven paper discs or circles that listed concepts (e.g., attributes of God such as goodness, greatness, eternity, power, wisdom, love, virtue, truth, and glory) and could be rotated to create combinations of concepts and produce answers to theological questions.
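As a modern illustration only, rotating inscribed discs against one another amounts to enumerating combinations of the listed concepts. The nine attributes below come from the text; the pairing procedure is a sketch of the combinatorial idea, not a reconstruction of Llull’s actual figures.

```python
from itertools import combinations

# The nine divine attributes named in the text.
attributes = ["goodness", "greatness", "eternity", "power", "wisdom",
              "love", "virtue", "truth", "glory"]

# Turning one disc against another pairs every concept with every other;
# itertools.combinations enumerates exactly those pairings.
pairs = list(combinations(attributes, 2))

for a, b in pairs[:3]:
    print(f"{a} + {b}")
print(len(pairs))  # 9 choose 2 = 36 pairings
```

With more discs (larger `r`), the space of conceptual combinations grows rapidly, which is precisely what made the method feel like an engine for generating knowledge.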
Llull’s system was based on the belief that only a limited number of undeniable truths exist in all fields of knowledge and that by studying all combinations of these elementary truths, humankind could attain the ultimate truth. His art could be used to “banish all erroneous opinions” and to arrive at “true intellectual certitude removed from any doubt.”
In early 1666, 19-year-old Gottfried Leibniz wrote De Arte Combinatoria (On the Combinatorial Art), an extended version of his doctoral dissertation in philosophy. Influenced by the works of previous philosophers, including Ramon Llull, Leibniz proposed an alphabet of human thought. All concepts are nothing but combinations of a relatively small number of simple concepts, just as words are combinations of letters, he argued. All truths may be expressed as appropriate combinations of concepts, which in turn can be decomposed into simple ideas.
Leibniz wrote: “Thomas Hobbes, everywhere a profound examiner of principles, rightly stated that everything done by our mind is a computation.” He believed such calculations could resolve differences of opinion: “The only way to rectify our reasonings is to make them as tangible as those of the mathematicians, so that we can find our error at a glance, and when there are disputes among persons, we can simply say: Let us calculate, without further ado, to see who is right” (The Art of Discovery, 1685). In addition to settling disputes, the combinatorial art could provide the means to compose new ideas and inventions.
“Thinking machines” has been the common portrayal in modern times of the new, mechanical, incarnations of these early descriptions of cognitive aids. Already in the 1820s, for example, the Difference Engine—a mechanical calculator—was referred to by Charles Babbage’s contemporaries as his “thinking machine.”
More than a century and a half later, computer software pioneer Edmund Berkeley wrote in his 1949 book Giant Brains: Or Machines That Think: “These machines are similar to what a brain would be if it were made of hardware and wire instead of flesh and nerves… A machine can handle information; it can calculate, conclude, and choose; it can perform reasonable operations with information. A machine, therefore, can think.”
And so on, to today’s gullible media, over-promising AI researchers, highly-intelligent scientists and commentators, and certain very rich people, all assuming that the human brain is nothing but a “meat machine” (per AI pioneer Marvin Minsky) and that calculations and similar computer operations are tantamount to thinking and intelligence.
In contrast, Leibniz—and Llull before him—were anti-materialists. Leibniz rejected the notion that perception and consciousness can be given mechanical or physical explanations. Perception and consciousness cannot possibly be explained mechanically, he argued, and therefore could not be physical processes.
In Monadology (1714), Leibniz wrote: “One is obliged to admit that perception and what depends upon it is inexplicable on mechanical principles, that is, by figures and motions. In imagining that there is a machine whose construction would enable it to think, to sense, and to have perception, one could conceive it enlarged while retaining the same proportions, so that one could enter into it, just like into a windmill. Supposing this, one should, when visiting within it, find only parts pushing one another, and never anything by which to explain a perception. Thus it is in the simple substance, and not in the composite or in the machine, that one must look for perception.”
For Leibniz, no matter how complex the inner workings of a “thinking machine,” nothing about them reveals that what is being observed are the inner workings of a conscious being. Two and a half centuries later, the founders of the new discipline of “artificial intelligence,” materialists all, assumed that the human brain is a machine and therefore could be replicated with physical components, with computer hardware and software. They believed that they were well on their way to finding the basic computations, the universal language of “intelligence,” to creating a machine that would think, decide, and act just like humans, or even better than humans.
This is when being rational was replaced by being digital.
The founding document of the discipline, the 1955 proposal for the first AI workshop, stated that it is based on “the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” Twenty years later, Herbert Simon and Allen Newell, in their Turing Award lecture, formalized the field’s goals and convictions as the Physical Symbol System Hypothesis: “A physical symbol system has the necessary and sufficient means for general intelligent action.”
Soon thereafter, however, AI started to shift paradigms, from symbolism to connectionism: from defining (and programming) every aspect of learning and thinking to statistical inference, finding connections or correlations so that learning arises from observations or experience.
With the advent of the Web and the creation of lots and lots of data in which to find correlations, buttressed by advances in the power of computers and the invention of sophisticated statistical analysis methods, we have arrived at the triumph of “deep learning,” and its contribution to the very large improvements in computers’ ability to perform tasks such as identifying images, responding to questions, and textual analysis.
Recently, some new tweaks to deep learning have produced AI programs that can write (“this stuff is like… alchemy!” said one of the creators of the creative machine), engage in conversations (“I felt the ground shift under my feet … increasingly felt like I was talking to something intelligent,” said another AI creator), and create images from text input, even videos.
In 1726, Jonathan Swift published Gulliver's Travels, in which he described (possibly as a parody of Llull’s system) a device that generates random permutations of word sets. The professor in charge of this invention “showed me several volumes in large Folio already collected, of broken sentences, which he intended to piece together, and out of those rich materials to give the world a complete body of all Arts and Sciences.”
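Swift’s engine can be imitated in a few lines. This is a toy sketch only: the vocabulary is invented for illustration, and each “turn of the handle” simply draws a random arrangement of words, producing exactly the sort of broken sentences the Lagado professor collected.

```python
import random

# An invented vocabulary standing in for the professor's word frame.
vocabulary = ["the", "art", "of", "science", "turns", "every", "wheel"]

def crank(rng: random.Random, length: int = 5) -> str:
    """One turn of the handle: a random arrangement of distinct words."""
    words = rng.sample(vocabulary, k=length)  # sample without replacement
    return " ".join(words)

rng = random.Random(1726)  # seeded with the year of publication
for _ in range(3):
    print(crank(rng))
```

Almost every turn yields nonsense; the professor’s hope was that, given enough turns, meaningful fragments would accumulate. That is the brute-force gamble the next paragraph points to.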
There you have it, brute-force deep learning in the 18th century. Over a decade ago, when the new-old discipline of “data science” emerged, bringing to the fore the sophisticated statistical analysis that is the foundation of deep learning, some observers and participants reminded us that “correlation does not imply causation.” A Swift today would probably add: “Correlation does not imply creativity.”