This week for #OpenLearning17 we’re reading excerpts from Doug Engelbart’s “Augmenting Human Intellect: A Conceptual Framework.” And here I am now, in reflecting upon this text, putting into practice the very “conceptual framework” that it so perspicaciously advanced more than half a century ago. We so take for granted today this ability to display, manipulate, and rearrange text and other symbols that represent our thought, in a networked environment that invites comment and annotation, that we forget how revolutionary these proposals were at the time. A time when computing was equated with calculation, and when “AI” could only mean “artificial intelligence,” not “augmented intelligence.”
So, to the text. For me, Engelbart highlighted two crucial features of human intellect. First, our intellects operate largely upon external representations of meaning, that is, symbols: “texts, sketches, diagrams, lists, etc.” When ideas, concepts, and information can be displayed visually, we no longer have to hold those representations solely in our minds, which frees up our cognitive capacities for deeper-level thinking about the meanings inscribed therein. We have had such external representations and displays of interior thought since at least the time of the ancient cave paintings and, of course, the development of pictograms and writing. So there has already been “considerable” augmentation of human intellect for some time.
What Engelbart realized is that computational devices, such as the graphical interface and the mouse, could make possible the display of these symbolic representations of thought in a way that greatly facilitated our capacity to “manipulate” and “rearrange” them, thus stimulating our thinking and leading to possible new discoveries and insights. When texts, diagrams, charts, images, etc. are thus displayed, then “characteristically ‘human’ activities” can come to the fore, such as “playing with forms and relationships to ask what develops,” and “rearranging” texts and diagrams to discover new patterns and more intricate syntheses of what were previously perhaps disparate elements. Electronic representations of symbolic elements, as compared to print, afford a far more expeditious modification and rearrangement of elements. And it is that playful modification and rearrangement that can lead to new understandings and breakthroughs.
These ideas reinforcing the importance of the visual arrangement of data for the possibility of insight recalled for me some lessons that I learned many years ago from one of my philosophical mentors, Bernard Lonergan. In his work on cognitional theory, Lonergan, recovering elements from Aristotle and Aquinas, developed the notion of “insight into phantasm,” meaning that genuine intellectual discovery is a matter of apprehending an intelligible pattern or form in a presentation of sensible data (that is, data perceived by our senses or generated by our imagination). “Phantasm” here can be broadly understood as “image.” Lonergan often remarked on the importance of the diagram for inference and insight, with reference, for example, to the role of diagrams in mathematical discovery. [For a fuller (though difficult!) account, see Bernard Lonergan, Verbum: Word and Idea in Aquinas, chapter 1, section 4.]
More generally, then, the presentation and arrangement of symbolic data is crucial, and any given arrangement may be more or less conducive to discovery. That is why Engelbart’s observation about “playing” with and “rearranging” the materials we are dealing with strikes such a resonant chord. Even when we are dealing only with text, the ease of recombining and manipulating our words and phrases enables our writing to reach a suitable form of expression more quickly. And when we are dealing with multiple modalities of information, composing is facilitated even further by the digital tools that augment and amplify our native intellect. Gardner Campbell coined the lovely phrase “combinatorial disposition” to describe the penchant one brings to materials for rearranging them into increasingly intricate patterns.
The second major takeaway is that, as we are social animals, our intellects work most effectively not in isolation but in connection with others. When thoughts and ideas are externally represented, they thereby enter to some degree a public space, which creates the opportunity for the shared and collaborative rearrangement and manipulation of symbolic elements and the emergence of collective insight and intelligence. Engelbart refers to “team cooperation” in this essay and later uses the term “collective IQ.” So each person on a team can have simultaneous access to the same materials and “symbol structure,” and by working in “parallel independence” with one another, the team advances far faster than could any single individual. The common space allows each person to contribute their own particular terminologies, intuitions, and hunches, while the interaction of these distinct perspectives spurs the refinement of a more adequate and precise set of explanatory concepts. This is made possible by the technical advancements of networked computers and hyperlinks, but again, the technological developments are important only insofar as they position the human intellect to more readily take advantage of its innately social and dialogical nature.
So now, as to the title of this post … the true and only AI. The notion of “artificial intelligence,” so much more prominent in recent awareness and discourse, was first proposed (or at least the term was coined) by computer scientist John McCarthy in 1955, less than a decade before Engelbart’s groundbreaking essay. Interestingly, McCarthy and his team were active at Stanford during the same period as Engelbart, and I wonder whether there was any interaction between them.
[Shortly after publishing this post, I came across this 2011 article by John Markoff, “A Fight to Win the Future: Computers vs. Humans,” which begins with precisely this juxtaposition between two labs at Stanford:
At the dawn of the modern computer era, two Pentagon-financed laboratories bracketed Stanford University. At one laboratory, a small group of scientists and engineers worked to replace the human mind, while at the other, a similar group worked to augment it.
In 1963 the mathematician-turned-computer scientist John McCarthy started the Stanford Artificial Intelligence Laboratory. The researchers believed that it would take only a decade to create a thinking machine.
Also that year the computer scientist Douglas Engelbart formed what would become the Augmentation Research Center to pursue a radically different goal — designing a computing system that would instead “bootstrap” the human intelligence of small groups of scientists and engineers.
For the past four decades that basic tension between artificial intelligence and intelligence augmentation — A.I. versus I.A. — has been at the heart of progress in computing science as the field has produced a series of ever more powerful technologies that are transforming the world.]
I have no expertise in this field, but I think it most unfortunate that “artificial intelligence” has enjoyed such prominence as compared to “augmented intelligence.” It depends upon how you define “intelligence,” of course, but on philosophical (and, ultimately, theological) grounds, I don’t think there is any such thing as “artificial intelligence.” Machines and systems might be said to be “intelligent” in that they are highly developed and programmed, even to the point of altering that programming in response to external conditions. I agree with the philosopher John Searle, whose “Chinese Room Argument” refutes the idea that a machine could have a mind in exactly the same sense that humans do. While I can certainly appreciate the achievements and benefits of a machine such as IBM’s Watson, it does not, in my opinion, possess “intelligence” of the sort that was necessary to create it in the first place.
With Engelbart, I contend that “augmented intelligence” is the true (and only) AI (an allusive nod there to cultural critic Christopher Lasch and his book, The True and Only Heaven). A focus on augmented intelligence can push us to identify what is truly distinctive about human intelligence even as we benefit from the increasingly complex tools which that intelligence has fashioned. Many examples of the amplification of human intelligence by digital tools can be found in Clive Thompson’s 2013 book, Smarter Than You Think: How Technology Is Changing Our Minds for the Better. As a chess player myself, I was particularly taken with chapter one, “The Rise of the Centaurs,” which describes the rise of computerized chess programs and their effect upon the game. The pivotal moment came in 1997, when then world champion Garry Kasparov was defeated in a match by IBM’s Deep Blue supercomputer. Thompson goes on to explain that in the last two decades a “new form of chess intelligence” has emerged, in which the strongest players are neither grandmasters alone nor machines alone, but human/machine teams. As Kasparov noted, “Human strategic guidance combined with the tactical acuity of a computer is overwhelming.” [Chess engines today have gone far beyond even what Deep Blue accomplished, and you can freely (as in beer) access some of them, such as Stockfish (open source), right in your browser.]
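[For the curious, here is a minimal sketch of what this human/machine teaming can look like in practice, using the open-source python-chess library to consult a locally installed Stockfish. This is just an assumption-laden illustration, not anything from Thompson’s book: the engine path and time limits are placeholders you would adjust for your own setup.

```python
# A minimal "centaur" sketch: the human chooses the opening move,
# then asks Stockfish for an evaluation and a suggested reply.
# Assumes the python-chess library and a local Stockfish binary;
# ENGINE_PATH is a hypothetical install location.
import chess
import chess.engine

ENGINE_PATH = "/usr/local/bin/stockfish"  # placeholder path

board = chess.Board()   # standard starting position
board.push_san("e4")    # the human plays 1. e4

with chess.engine.SimpleEngine.popen_uci(ENGINE_PATH) as engine:
    # Brief engine analysis of the current position.
    info = engine.analyse(board, chess.engine.Limit(time=0.5))
    print("Engine evaluation:", info["score"])

    # The engine's preferred reply, under the same time limit.
    result = engine.play(board, chess.engine.Limit(time=0.5))
    print("Engine suggests:", board.san(result.move))
```

The division of labor mirrors Kasparov’s point: the human supplies the strategic guidance (the choice of opening), while the engine supplies the tactical acuity (the evaluation and the candidate reply).]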
Thompson concludes, generalizing from the chess experience: “We’re all playing advanced chess these days. We just haven’t learned to appreciate it.” So I guess that means we are all centaurs as well?