Musings with Mo

learning, design, technology

Month: February 2017

“Magical Combinations:” Notes from a Ted Talk

How does one respond to the blooming, buzzing mind of Ted Nelson? I don’t really know…trying to follow the associative trails in that mind is quite the adventure. I’ll take my fallback approach, which is to gather some fragments or nuggets of thought (or, as Jon Udell might now term them, “segments of interest”), throw them together on the page, and see what happens. This is after watching the video interview and hacking my way through Computer Lib/Dream Machines. Maybe a composition of sorts will emerge, though it will at this point be a largely impressionistic set of quick jottings.

Ted tells us that one of the reasons he got into computers was that “I wanted a satisfactory system of taking notes.”  That resonates; most of us, I think, have our personal knowledge management systems that we tweak and seek to improve. I was reminded of our previous discussions of a “web annotation infrastructure” and also of John Stewart’s “Digital Note-taking in the History Classroom.” These open notebook practices facilitate that rearrangement and recombination of ideas that Engelbart spoke of. Nelson says that wanting to use his filecards in many places (in other words, to move them around and rearrange them) led to the notion of “transclusion” (a term I’d never heard of until Udell’s use of it a couple of weeks ago) and thence to the possibility of new compositions of ideas and expressions. And sometimes, just maybe, those new compositions, open to interaction with the compositions of others, can lead to “magic combinations,” structured and designed in novel and complex ways. Again, carrying forward the torch of Engelbart.
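The core of transclusion can be sketched in a few lines: a document holds references to shared fragments rather than copies, so the same card can appear in many compositions and an edit shows up everywhere it is transcluded. This is an illustrative toy (the names and structure here are invented, not Nelson’s actual Xanadu design):

```python
# A minimal sketch of transclusion: documents hold *references* to
# shared fragments rather than copies, so one edit propagates to
# every composition that transcludes the fragment.

fragments = {}  # fragment id -> current text

def write_fragment(fid, text):
    fragments[fid] = text

class Document:
    def __init__(self, fragment_ids):
        self.fragment_ids = list(fragment_ids)  # references, not copies

    def render(self):
        # Resolve each reference at read time.
        return " ".join(fragments[fid] for fid in self.fragment_ids)

write_fragment("card1", "Computers are media,")
write_fragment("card2", "not calculators.")

essay = Document(["card1", "card2"])
notes = Document(["card2"])  # the same card, rearranged elsewhere

print(essay.render())  # Computers are media, not calculators.
write_fragment("card2", "not mere calculators.")
print(notes.render())  # the edit propagates to every transclusion
```

Moving a filecard to a new place, on this model, is just adding its id to another document’s list; nothing is duplicated, so the compositions stay in sync.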

The branches of philosophy…well, this is definitely a novel mind at work. Schematics (“the study of structure and representation”), normatics (“the philosophy of ‘shoulds'”), gamics (“the structure of situations and maneuvers in situations”), and idemics (“the representations of the tokens that we present our ideas with and argue with and manipulate as we redesign ideas”). I don’t claim to understand much if any of this, but what caught my attention with regard to idemics (sp?) are his examples of “designing sitcoms, theology, and software.” Nelson may be the first person to ever juxtapose those three domains…what on earth could he possibly mean? I’m a theologian by disciplinary background, and I would love to hear more! This is a tease, especially since he apparently would not have had much use for theology, if Wikipedia is to be believed.

Quickly scanning Computer Lib/Dream Machines, there is too much to respond to, but my highlighter got busy at the description of “fantics,” by which Nelson means “the art and science of getting ideas across, both emotionally and cognitively … ‘presentation’ could be a general word for it” (319). Now I had already been stunned earlier in the essay with a quote from Bertrand Russell–“The reason is, and by rights ought to be, slave to the emotions” (306). Bertrand Russell (channelling Hume?) said this?  I mean, yes, embodied cognition and all that, but… And now Ted goes on to flesh it out. What comes across in a presentation always has a dual character: explicit declarative structure imbued with connotative impressions, feelings, and sensibilities. And so, as we consider our pedagogies:

The reader or viewer always gets feelings along with information, even when the creators of the information think that its ‘content’ is much more restricted…. Fantics is thus concerned with both the arts of effect–writing, theater, and so on–and the structures and mechanisms of thought, including the various traditions of the scholarly event (article, book, lecture, debate, and class). These are all a fundamentally inseparable whole, and technically-oriented people who think that systems to interact with people, or teach, or bring up information, can function on some “technical” basis–with no tie-ins to human feelings, psychology, or the larger social structure–are kidding themselves and everyone else (319).

This is the work of “writers and editors, movie-makers and lecturers, radio announcers and layout people and advertising people” … and, it seems, the work of teachers and learners. Let’s make some magic combinations.

Augmented Intelligence: The True and Only AI

This week for #OpenLearning17 we’re reading excerpts from Doug Engelbart’s “Augmenting Human Intellect: A Conceptual Framework.” And here I am now, in reflecting upon this text, putting into practice the very “conceptual framework” that it so perspicaciously advanced more than half a century ago. We so take for granted today this ability to display and manipulate and rearrange text and other symbols that represent our thought, in a networked environment that invites comment and annotation, that we forget how revolutionary these proposals were at the time. A time when computing was equated with calculation, and when speaking of AI could only mean a reference to “artificial intelligence,” not “augmented intelligence.”

So, to the text. For me, Engelbart highlighted two crucial features of human intellect. First, our intellects operate largely upon external representations of meaning, that is, symbols: “texts, sketches, diagrams, lists, etc.” When ideas, concepts, and information can be displayed visually, we no longer have to hold those representations solely in our minds, which thus frees up our cognitive capacities for deeper-level thinking about the meanings inscribed therein. We have had such external representations and displays of interior thoughts since at least the time of the ancient cave paintings and, of course, the development of pictograms and writing. So there has already been “considerable” augmentation of human intellect for some time.

What Engelbart realized is that computational devices, such as the graphical interface and the mouse, could make possible the display of these symbolic representations of thought in a way that greatly facilitated our capacity to “manipulate” and “rearrange” them, thus stimulating our thinking and leading to possible new discoveries and insights. When texts, diagrams, charts, images, etc. are thus displayed, then “characteristically ‘human’ activities” can come to the fore, such as “playing with forms and relationships to ask what develops,” and “rearranging” texts and diagrams to discover new patterns and more intricate syntheses of what were previously perhaps disparate elements. Electronic representations of symbolic elements, as compared to print, afford a far more expeditious modification and rearrangement of elements. And it is that playful modification and rearrangement that can lead to new understandings and breakthroughs.

These ideas reinforcing the importance of the visual arrangement of data for the possibility of insight recalled for me some lessons that I learned many years ago from one of my philosophical mentors, Bernard Lonergan. In his work on cognitional theory, Lonergan, recovering elements from Aristotle and Aquinas, developed the notion of “insight into phantasm,” meaning that genuine intellectual discovery is a matter of apprehending an intelligible pattern or form in a presentation of sensible data (that is, data perceived by our senses or generated by our imagination). “Phantasm” here can be broadly understood as “image.” Lonergan often remarked on the importance of the diagram for inference and insight, with reference, for example, to the role of diagrams in mathematical discovery. [For a fuller (though difficult!) account, see Bernard Lonergan, Verbum: Word and Idea in Aquinas, chapter 1, section 4.]

More generally then, the presentation and arrangement of symbolic data is crucial, and any given arrangement may be more or less conducive to discovery. That is why Engelbart’s observation about “playing” with and “rearranging” the materials we are dealing with strikes such a resonant chord. Even if we are only dealing with text, the ease of recombining and manipulating our words and phrases enables our writing to more quickly reach a suitable form of expression. And when we are dealing with multiple modalities of information, composing is even further facilitated by the digital tools that augment and amplify our native intellect. Gardner Campbell coined the lovely phrase “combinatorial disposition” to describe the penchant that one brings to materials in order to rearrange them in increasingly intricate patterns.

The second major takeaway is that, as we are social animals, our intellects work most effectively, not in isolation but in connection with others. When thoughts and ideas are externally represented, they thereby enter to some degree a public space, which creates the opportunity for the shared and collaborative rearrangement and manipulation of symbolic elements and the emergence of collective insight and intelligence. Engelbart refers to “team cooperation” in this essay and later uses the term “collective IQ.” So each person on a team can have simultaneous access to the same materials and “symbol structure,” and by working in “parallel independence” with one another, the team advances exponentially faster than could any single individual. The common space allows each person to contribute their own particular terminologies, intuitions, and hunches, while the interaction of these distinct perspectives spurs the refinement and precision of a more adequate set of explanatory concepts. This is made possible by the technical advancements of networked computers and hyperlinks, but again, the technological developments are only important insofar as they position the human intellect to more readily take advantage of its innately social and dialogical nature.

So now, as to the title of this post … the true and only AI. The notion of “artificial intelligence,” so much more prominent in recent awareness and discourse, was first proposed (or at least the term was coined) by computer scientist John McCarthy in 1955, less than a decade before Engelbart’s groundbreaking essay. Interestingly, McCarthy and his team were also active at Stanford during the same time period as Engelbart was, and I wonder if there was any interaction between them.

[Shortly after publishing this post, I came across this 2011 article by John Markoff, “A Fight to Win the Future: Computers vs. Humans,” which begins with precisely this juxtaposition between two labs at Stanford:

At the dawn of the modern computer era, two Pentagon-financed laboratories bracketed Stanford University. At one laboratory, a small group of scientists and engineers worked to replace the human mind, while at the other, a similar group worked to augment it.

In 1963 the mathematician-turned-computer scientist John McCarthy started the Stanford Artificial Intelligence Laboratory. The researchers believed that it would take only a decade to create a thinking machine.

Also that year the computer scientist Douglas Engelbart formed what would become the Augmentation Research Center to pursue a radically different goal — designing a computing system that would instead “bootstrap” the human intelligence of small groups of scientists and engineers.

For the past four decades that basic tension between artificial intelligence and intelligence augmentation — A.I. versus I.A. — has been at the heart of progress in computing science as the field has produced a series of ever more powerful technologies that are transforming the world.]

I have no expertise in this field, but I think it most unfortunate that “artificial intelligence” has enjoyed such prominence as compared to “augmented intelligence.” Now it depends upon how you define “intelligence,” of course, but on philosophical (and, ultimately, theological) grounds, I don’t think there is such a thing as “artificial intelligence.” Machines and systems might be said to be “intelligent” in that they are highly developed and programmed, even to the point of altering that program in response to external conditions. I agree with the philosopher John Searle, whose “Chinese Room” argument challenges the idea that a machine could have a mind in exactly the same sense that humans do. While I can certainly appreciate the achievements and benefits of a machine such as IBM’s Watson, it does not, in my opinion, possess “intelligence” of the sort that was necessary to create it in the first place.

With Engelbart, I contend that “augmented intelligence” is the true (and only) AI (an allusory nod there to cultural critic Christopher Lasch and his book, The True and Only Heaven). A focus on augmented intelligence can push us to identify what is truly distinctive about human intelligence even as we benefit from the increasingly complex tools which that intelligence has fashioned. Many examples of the amplification of human intelligence by digital tools can be found in Clive Thompson’s 2014 book, Smarter Than You Think: How Technology Is Changing Our Minds for the Better. As a chess player myself, I was particularly taken with chapter one, “The Rise of the Centaurs,” which describes the rise of computerized chess programs and their effect upon the game. The pivotal moment came in 1997, when then world champion Garry Kasparov was defeated in a match by IBM’s Deep Blue supercomputer. Thompson goes on to explain that in the last two decades, a “new form of chess intelligence” has emerged, in which the strongest players are neither grandmasters themselves nor machines themselves, but human/machine teams. As Kasparov noted, “Human strategic guidance combined with the tactical acuity of a computer is overwhelming.” [Chess engines today have gone far beyond even what Deep Blue accomplished, and you can freely (as in beer) access some of them, such as Stockfish (open source), right in your browser.]

Thompson concludes, generalizing from the chess experience: “We’re all playing advanced chess these days. We just haven’t learned to appreciate it.” So I guess that means we are all centaurs as well?

Notes and Trails

I’m still processing this week’s #openlearning17 webinar, “‘As We May Think’, Annotation, and Liberal Learning: A Conversation with Jon Udell and Jeremy Dean.” And I find myself wanting to use the very practices and ideas that were discussed therein, annotating the conversation and creating my own knowledge structure by connecting ideas and concepts in a trail that makes sense to me. I guess I could use a video annotator that I’m familiar with, such as Classroom Salon, or perhaps produce a transcript that could itself be annotated. But in the interests of time, I’ll forego those options and write up just a few thoughts.

There are lots of possible places to start, but let’s go with Jon Udell’s observation about the “sea change” that occurs when we annotate a document and are able to extract the nuggets relevant to our inquiry…a phrase, a sentence, a number in a table, a cell in a spreadsheet, etc. With annotation, you have a resource that is more granular than the document as a whole, a resource that has its own URL (like a document or an entire webpage). Not only does this nugget have its own URL, but it can be tagged to become part of a larger path or group of related items.
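That granularity can be made concrete with a toy model: each annotation is its own addressable resource, anchored to a quoted span of a source document and carrying tags, and grouping by tag yields an associative trail across documents. (The field names and URL scheme below are invented for illustration; this is not any real annotation service’s data model.)

```python
# A toy "annotation infrastructure": each annotation gets its own
# (fabricated) URL, points at a nugget inside a source document, and
# carries tags; grouping by tag assembles trails across documents.
import itertools
from collections import defaultdict

_ids = itertools.count(1)

def annotate(source_url, quote, note, tags):
    return {
        "id": f"https://notes.example/ann/{next(_ids)}",  # hypothetical URL
        "source": source_url,
        "quote": quote,   # the nugget: more granular than the whole document
        "note": note,
        "tags": set(tags),
    }

def trails(annotations):
    # Each tag collects annotations from many documents into one trail.
    by_tag = defaultdict(list)
    for ann in annotations:
        for tag in ann["tags"]:
            by_tag[tag].append(ann["id"])
    return dict(by_tag)

a1 = annotate("https://example.org/as-we-may-think", "memex",
              "Bush's associative device", ["trails"])
a2 = annotate("https://example.org/augmenting", "symbol structure",
              "Engelbart's framework", ["trails", "augmentation"])
print(trails([a1, a2])["trails"])  # two nuggets, one associative trail
```

The point of the sketch is simply that once a nugget is addressable and taggable, the trail lives at the level of ideas, not whole pages.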

Why is this such a game-changer (to shift our change metaphors)? Perhaps because now we can get a more accurate representation, in the exterior public space of the Web, of the kind of connections being forged in the inquirer’s brain. My thinking is not primarily a matter of linking texts and documents as such, but of connecting ideas and concepts and discrete pieces of data. And annotation provides some of the “architectural glue” for holding the structure together. I hadn’t really thought previously about annotation as an “infrastructure” that goes beyond just creating marginalia on a single digital text (which might be the “digital facelift” understanding of it, to use Clay Shirky’s phrase). So it’s not just the annotation itself, but this “annotation infrastructure,” the way that the associated link and tags can be combined and recombined, that is so powerful.

And, it seems that this interplay between what’s taking place in an inquirer’s brain and what’s being represented in the open space of the Web is another theme running through the conversation. In discussing the nature of a “tag,” Gardner Campbell posited a juxtaposition between the tag as (i) a shared conceptual marker (thus public and open), and (ii) the tag in an individual’s associative trail, which is meaningful in the network of that person’s brain. Later, in a related observation, he spoke of two ways to think about associative trails (which, it seems, is what a tag fundamentally is). First, they are connections that thinkers make and want to share as part of a record of activity and a larger effort to make those kinds of links into another layer of meaning (again, the public/open sense). Second, tags are a kind of context by which we can understand where ideas come from in the first place. A well-chosen tag prompts us to think about our own thinking, to reflect upon how it is that one idea is connected to another in our own minds. Here Gardner laments the difficulty of getting students to engage in this metacognition, or in focusing on anything beyond what they perceive an instructor to prescribe and expect and to have already created for them.

But to become more aware of the web of connections and associations that exists in one’s brain is to realize that this web is made up not only of explicitly conceptual links, but also largely unconscious links generated by our lived experience, which may seem irrelevant to the investigation or study at hand, but which provide a rich context for thinking about how we think. Creative ideas and insights may well emerge from these seemingly unrelated associations in our minds. So as well, to learn from another person is to discover something of the web inside that person’s mind.

This is to say that as we seek to build knowledge by accessing resources and creating connections on the web, the resources available to us are not just texts and other documents but people. Learning involves engaging with others on a human level. Jon Udell, I believe, used the phrase “context is a service we provide each other.” I’m still puzzling out what this means, but I’m guessing that it refers to the ideal of accessing and processing information, not in an abstract and depersonalized manner, but within the context of other learners and practitioners in “trust networks.” And that’s the difference between reading “As We May Think” on my own, and working through it in this community, this network of fellow learners.
