Musings with Mo

learning, design, technology

Author: Morris Pelzel

Social Bookmarks: Either a Tagger or a Searcher Be

Social bookmarking remains an important tool of personal knowledge management (PKM) for many of us. Recently, the news came out that Delicious, the original social bookmarking platform, was sold to the rival service Pinboard and has been deprecated; users can still access previously saved bookmarks, but will not be able to add new ones. Pinboard developer Maciej Ceglowski explained in a blog post that he bought Delicious so that “a fascinating piece of web history … wouldn’t disappear.” Part of that history is the promotion of tagging as a means of organizing content and information.

I’ve been a user of the other main social bookmarking service, Diigo, for almost a decade. Lately it seems that a lot of the cool kids have switched to Pinboard, and, as Pinboard describes itself as “social bookmarking for introverts,” perhaps I should too. Alan Levine wrote about making the switch to Pinboard a couple of years ago. But Diigo has worked well for me, so I’m sticking with it for now. Both services have users that I like to follow…for example, my colleague Brett Boessen in Diigo, and Alan Levine and Alan Jacobs in Pinboard. And I’ve also started following Laura Gibbs in Diigo (I met the delightful Laura at Domains17 earlier this month). Laura recently posted on “My Favorite Features of Diigo and Canvas,” and mentioned Diigo tagging and tag combinations, noting that “teaching your students about tagging in general and your tagging practice in particular can increase their access to the resources you’ve bookmarked, while also expanding their digital literacy.”

Laura’s post got me thinking about my own tagging practices (or lack thereof) with my bookmarks (of which I have several thousand). At first I tagged carefully and tried to be very intentional about building a folksonomy of categories that would both reflect and shape how I organize ideas. I found, though, that the process seemed to be creating more cognitive load than it was worth. Perhaps I deliberated too anxiously about creating the correct categories, got too worried about consistency, or wanted a system that was too neat and logically arranged. Tagging worked for me in the limited instances when I was curating items that students could consult for a particular course, but more generally I fell out of the habit of applying tags to my bookmarks. At last count, over 2,600 of them are “untagged.”

But, my library of bookmarks is searchable, and so, instead of referencing my collection by tags, I search for keywords to find what I’m looking for. Now, I feel that in some way I’m violating one of Jon Udell’s ways of “thinking like the web” by not using tags to structure my information. But everyone has to do what works for them, and I’ve found that searching with a string of appropriately chosen keywords works for me. Perhaps keywords in a search query function more or less like tags. Now, I don’t worry about how to designate a new bookmark with the right labels…rather, I count on searching over the entire collection to turn up what I need.
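For the technically curious, the tradeoff I’m describing can be sketched in a few lines of Python (a toy illustration; the class, URLs, and data are invented): a tag index only retrieves what someone remembered to tag, while full-text search over descriptions also surfaces untagged items.

```python
# Toy bookmark store contrasting the two retrieval styles: an explicit tag
# index versus free-text keyword search. All names here are illustrative.

class Bookmarks:
    def __init__(self):
        self.items = []  # list of (url, description, set_of_tags)

    def add(self, url, description, tags=()):
        self.items.append((url, description, set(tags)))

    def by_tag(self, tag):
        """Tag lookup: only finds items someone deliberately tagged."""
        return [url for url, _, tags in self.items if tag in tags]

    def search(self, *keywords):
        """Keyword search: matches any description containing all keywords."""
        kws = [k.lower() for k in keywords]
        return [url for url, desc, _ in self.items
                if all(k in desc.lower() for k in kws)]

marks = Bookmarks()
marks.add("https://example.com/a", "Engelbart on augmenting human intellect",
          ["engelbart"])
marks.add("https://example.com/b", "Notes on Engelbart and collective IQ")  # untagged!

print(marks.by_tag("engelbart"))           # only the tagged bookmark
print(marks.search("engelbart", "notes"))  # finds the untagged one too
```

The untagged bookmark is invisible to the tag index but perfectly findable by search, which is more or less my experience with those 2,600 untagged bookmarks.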

Still, I wondered if there had been any studies done on the relative merits of tagging vs. searching for information retrieval. About ten years ago, just as the idea of creating a folksonomy over a collection of documents was becoming more widespread, a study was conducted that “compared the search information retrieval (IR) performance of folksonomies from social bookmarking Web sites against search engines and subject directories.” The abstract (full article paywalled) stated that “The search engines in the study had the highest precision and recall, but the folksonomies fared surprisingly well. Del.icio.us was statistically indistinguishable from the directories in many cases. Overall the directories were more precise than the folksonomies but they had similar recall scores.” Maybe there are more recent analyses, but if so I’m not aware of them.
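For anyone unfamiliar with the study’s two metrics, they are simple ratios, sketched here in Python with made-up document IDs and judgments:

```python
def precision_recall(retrieved, relevant):
    """Precision: fraction of retrieved items that are relevant.
    Recall: fraction of relevant items that were retrieved."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical single query: four documents judged relevant, three retrieved,
# two of which are actually relevant -> precision 2/3, recall 1/2.
relevant = {"d1", "d2", "d3", "d4"}
print(precision_recall({"d1", "d2", "d5"}, relevant))
```

A system can score high on one and low on the other, which is why the study reports both: a folksonomy might recall many relevant bookmarks while also returning lots of noise.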

There are other contexts in which I attempt to use tagging more consistently—putting blog posts into categories, labelling emails, tagging annotations, for example. Still, I wonder: is it the case that search has gotten so good that tagging is less important than we might otherwise think it to be? Again, that goes against my sense that tagging is one of those fundamental pieces of web architecture, like hyperlinking, syndication, and aggregation, that should be in our “user innovation toolkit.” But I want to throw this post out there to see how others are approaching information retrieval.

Gleanings from the Fields of #Domains17

On the drive up and back from North Texas to OKC for #Domains17, I observed the wheat harvest in full swing. The custom crews were making their yearly journey starting from the southern plains and rolling northward, following the ripening grain. This photo was taken by my nephew James Pelzel, as a crew moved through his beautiful field about an hour south of the Texas-Oklahoma border. I’m using it with his permission here (shh, I copied it from a Facebook post). 

As I thought about it, these sights suggested a way for me to organize a few thoughts about the conference (or the #notaconference, as Alan Levine so aptly noted). “Gleanings” are a gathering up of fragments after the harvest, and that’s what I want to do after Domains17 … gather up a few ideas after the rich presentations and conversations at the event itself, as well as the many fruitful reflections and materials people have shared in the past week.

I love the way Adam Croom started off with a call to hospitality, to making the space our own but challenging us to make new connections beyond the familiar and the comfortable. Even so, I resonated as well with Amy Collier’s post about belonging and the feeling of being “at someone else’s party.” For different reasons than Amy, I am sure, and again in spite of the generous welcome and friendliness that I experienced. I guess that’s partly my own reservedness and insecurity, but also because my position at my current school is facing an uncertain funding future, so I am on the market for a new position in instructional technology and design. I’ve helped to get a strong digital pedagogy program going at Austin College these last three years, but AC is one of those small liberal arts colleges facing a budget crisis and making hard choices about staffing. Coming to Domains was at once an uplifting shot in the arm and a reminder to me that the seeds we have planted toward implementing a Domains-inspired program and vision at AC are very possibly not coming to fruition. So I had to explain all that to people I met (that’s after the usual explanation about Austin College not being in Austin).

Nonetheless, there were many exciting moments for me, and I just want to touch on a few of them here, in no particular order. First, it was a thrill to meet Jon Udell and to listen again to his presentation about a “user-innovation toolkit” built around the Hypothesis annotation application. I say “again” because I was fortunate enough to participate earlier this spring in the #OpenLearning17 cMOOC put on by Gardner Campbell and friends, which featured a webinar with Gardner, Jon, and Jeremy Dean on the “annotation infrastructure” and liberal learning. I wrote about that in “Notes and Trails,” which includes the video of the webinar. Jon spoke there of the “sea change” that comes about when granular web resources—a phrase, a sentence, a number in a table, a cell in a spreadsheet—can be given their own URLs and thus become identifiable “small pieces” that can be referenced, combined, and rearranged in novel and creative ways. Jon referred to these pieces as “segments of interest” that are taken from larger documents and pages.

In that context, Gardner Campbell, channeling Doug Engelbart, coined the phrase “combinatorial disposition” to refer to the recombining and rearranging of small pieces of material in the search for greater cohesion, connection, and intricacy of pattern. And it occurs to me that this was a theme running through Domains17 in many ways. For example, Tim Klapdor, in “Beyond the LMS,” sketches a vision of combining systems and applications (through APIs) that is undergirded by this sense of combinatorial disposition. His key phrase, “from small pieces loosely joined to small pieces deeply connected” captured for me some of the movement going on at the conference … when we consider applications, ideas, and “segments of interest” in their most basic units, we are more free to arrange and rearrange them and in the process discover patterns and harmonies that enable deeper levels of learning.

In the same vein, Keegan Long-Wheeler, in “Domains Inside the LMS,” led us through a process of combining and remixing tools and applications inside the framework of Canvas. Again, I thought that, behind the specific instructions for how to make this happen, what was operative was a disposition for imagining how different pieces of the web could come together to create an ever more connected learning environment. As was noted, even Jim Groom’s foundations seemed to be shaken by the possibilities of an open LMS …

… but maybe it’s the other way around, and it’s the LMS crowd (well, Canvas at least) that is coming around to a new way of thinking. Perhaps Domains “in the belly of the beast” is another Trojan horse move in which transformation will occur from the inside out rather than by direct confrontation.

The Domains fair, as others have pointed out, was a masterstroke for the opening period on Monday morning. With about a dozen projects to peruse and chat about, it was a good way for introductions and conversations to begin in a low-key atmosphere of greeting and mingling. I enjoyed learning, for example, about the TRU Writer SPLOT and discussing that with Brian Lamb. If I understood Brian correctly, part of the impetus for developing this tool was the declining state of web literacy among many students. This raised for me the question, “Does creating a simplified interface really address that problem, or does it perhaps exacerbate it?” I don’t have a clear answer to that question, but I certainly wouldn’t discount the value of such an application.

The importance of the project became even more evident to me the next day, as Tanya Dorey-Elias and Alan Levine, in “Small Steps Go a Long Way: Designing Learning Tools with WordPress,” described how ready access to the means of digital storytelling could make such a difference especially for those most in need of telling their stories.

Another session that gave me lots to think about was “We’re not DOOMed: From Student Tutor to Instructional Technologist,” with Jenna and Jarrett Azar from Muhlenberg and Jess Reingold from UMW. I’m so impressed with the way the Muhlenberg team developed a program of student digital assistants and worked that into their larger Domains project. It’s the kind of thing I had been hoping to do at Austin College! And Jess’s presentation as a new graduate now working as an instructional technologist was wonderful. I appreciated her typology of the four common types of tech learners …

… with a GIF to illustrate each type:

One of the things I increasingly realize in attending events like this is that every presentation, every slide is a lesson, not only in the explicit “content” or subject matter at hand, but in the means chosen to communicate and convey the message. Jess chose an imaginative way to move beyond a mere bullet list and to leave her audience with a set of memorable images to consolidate what she was trying to express. In terms of takeaway for me:

And this is not to dismiss the valuable and incredible work that IT staffs and help desks do for us, but rather to point out how that can be complemented so effectively with the kinds of programs being developed at Muhlenberg and UMW.

The keynote by Martha Burtis, “Neither Locked Out Nor Locked In. Finding a Path Through Domain of One’s Own,” gave us both a refresher on the history of the project and a set of challenges for thinking through where we go from here. Thinking beyond the pragmatic benefits of web literacy, e-portfolio, and data ownership to really “knowing” the web … to “thinking like the web,” to use the phrase of Jon Udell that has inspired me so much. So this knowing is not just a “knowing about” this or that practical feature of Domains or of the web, but is itself a mode of thinking about anything that reflects the very dynamism of the web. Reminds me of the distinction between writing “on the web” and writing “of the web” … two very different things.

Martha challenged us to think metaphorically, symbolically, poetically about the web:

Wow, that’s a tough one for me, as I usually too quickly move to concepts and definitions in my thought. But all the more important, then, to try to get to that more fundamental level of imagination. For starters, to state the obvious, “web” itself is already a metaphor of a certain kind of concrete space that we find in nature. What came to mind next is the image of the “rabbit hole,” which references a kind of experience we’ve all had on the web. Not particularly original, I know … I recall, for example, some posts of CogDog on the subject (e.g., “Hypernoting from Books: Prairie Dogging the Rabbit Holes”). Then I thought of a labyrinth … again, not that original, but suggestive to me of a maze of pathways and forking trails, some of which lead to dead ends, but others of which lead to discovery.

Finally, Jim Luke’s presentation, “Running Errands for Ideas: Starting a DoOO at a Community College” related one faculty member’s doggedness in turning the vision of Domains and the open web into reality at Lansing Community College. Jim built to a stirring revolutionary call at the end, which I captured in this tweet:

So, for the record, I am aware that it is not necessary to repeat the same hashtag in a tweet. Naturally, the one tweet of mine that got the most circulation and attention is the one that makes it look like I don’t quite know what I’m doing. Oh well …

There was so much more, but that gives just a sample of the generative and inspiring get-together that was #Domains17. Truly outstanding planning and execution by Lauren Brumfield, Adam Croom, Jim Groom, Tim Owens, and everyone involved. Thanks, and looking forward to #Domains18 :).

“Magical Combinations:” Notes from a Ted Talk

How does one respond to the blooming, buzzing mind of Ted Nelson? I don’t really know…trying to follow the associative trails in that mind is quite the adventure. I’ll take my fallback approach, which is to gather some fragments or nuggets of thought (or, as Jon Udell might now term them, “segments of interest“), throw them together on the page, and see what happens. This is after watching the video interview and hacking my way through Computer Lib/Dream Machines. Maybe a composition of sorts will emerge, though it will at this point be a largely impressionistic set of quick jottings.

Ted tells us that one of the reasons he got into computers was that “I wanted a satisfactory system of taking notes.” That resonates; most of us, I think, have our personal knowledge management systems that we tweak and seek to improve. I was reminded of our previous discussions of Hypothesis as a “web annotation infrastructure” and also of John Stewart’s “Digital Note-taking in the History Classroom.” These open notebook practices facilitate that rearrangement and recombination of ideas that Engelbart spoke of. Nelson says that wanting to use his filecards in many places (in other words, to move them around and rearrange them) led to the notion of “transclusion” (a term I’d never heard of until Udell’s use of it a couple of weeks ago) and thence to the possibility of new compositions of ideas and expressions. And sometimes, just maybe, those new compositions, open to interaction with the compositions of others, can lead to “magic combinations,” structured and designed in novel and complex ways. Again, carrying forward the torch of Engelbart.
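For readers who haven’t met “transclusion” before, here is a minimal sketch of the idea (all names and text invented for illustration): instead of copying a passage, a document holds a live pointer into another document, and the pointer is resolved each time the document is rendered.

```python
# A toy model of transclusion: documents hold either literal text or live
# references (doc_id, start, end) into other documents, resolved at read time.

documents = {
    "notes": "Engelbart sought to augment, not replace, the human intellect.",
}

def span(doc_id, start, end):
    """A transcluded span: a pointer into another document, not a copy."""
    return ("ref", doc_id, start, end)

def render(parts):
    out = []
    for part in parts:
        if isinstance(part, tuple) and part[0] == "ref":
            _, doc_id, start, end = part
            out.append(documents[doc_id][start:end])  # resolved live
        else:
            out.append(part)
    return "".join(out)

essay = ["As Engelbart put it: ", span("notes", 20, 40), "."]
print(render(essay))  # pulls "augment, not replace" from the source

# A same-length edit to the source shows through in every transclusion
# (real systems track spans more robustly than raw character offsets):
documents["notes"] = "Engelbart sought to amplify, not replace, the human intellect."
print(render(essay))
```

The second render reflects the edited source without the essay being touched, which is the whole point: the filecard lives in one place and appears in many.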

The branches of philosophy…well, this is definitely a novel mind at work. Schematics (“the study of structure and representation”), normatics (“the philosophy of ‘shoulds’”), gamics (“the structure of situations and maneuvers in situations”), and idemics (“the representations of the tokens that we present our ideas with and argue with and manipulate as we redesign ideas”). I don’t claim to understand much if any of this, but what caught my attention with regard to idemics (sp?) are his examples of “designing sitcoms, theology, and software.” Nelson may be the first person to ever juxtapose those three domains…what on earth could he possibly mean? I’m a theologian by disciplinary background, and I would love to hear more! This is a tease, especially since he apparently would not have had much use for theology, if Wikipedia is to be believed.

Quickly scanning Computer Lib/Dream Machines, there is too much to respond to, but my highlighter got busy at the description of “fantics,” by which Nelson means “the art and science of getting ideas across, both emotionally and cognitively … ‘presentation’ could be a general word for it” (319). Now I had already been stunned earlier in the essay with a quote from Bertrand Russell–“The reason is, and by rights ought to be, slave to the emotions” (306). Bertrand Russell (channelling Hume?) said this?  I mean, yes, embodied cognition and all that, but… And now Ted goes on to flesh it out. What comes across in a presentation always has a dual character: explicit declarative structure imbued with connotative impressions, feelings, and sensibilities. And so, as we consider our pedagogies,

The reader or viewer always gets feelings along with information, even when the creators of the information think that its ‘content’ is much more restricted…. Fantics is thus concerned with both the arts of effect–writing, theater, and so on–and the structures and mechanisms of thought, including the various traditions of the scholarly event (article, book, lecture, debate, and class). These are all a fundamentally inseparable whole, and technically-oriented people who think that systems to interact with people, or teach, or bring up information, can function on some “technical” basis–with no tie-ins to human feelings, psychology, or the larger social structure–are kidding themselves and everyone else (319).

This is the work of “writers and editors, movie-makers and lecturers, radio announcers and layout people and advertising people” … and, it seems, the work of teachers and learners. Let’s make some magic combinations.

Augmented Intelligence: The True and Only AI

This week for #OpenLearning17 we’re reading excerpts from Doug Engelbart’s “Augmenting Human Intellect: A Conceptual Framework.” And here I am now, in reflecting upon this text, putting into practice the very “conceptual framework” that it so perspicaciously advanced more than half a century ago. We take it so much for granted today, this ability to display and manipulate and rearrange text and other symbols that represent our thought, in a networked environment that invites comment and annotation, that we forget how revolutionary these proposals were at the time. A time when computing was equated with calculation, and when speaking of AI could only mean a reference to “artificial intelligence,” not “augmented intelligence.”

So, to the text. For me, Engelbart highlighted two crucial features of human intellect. First, our intellects operate largely upon external representations of meaning, that is, symbols: “texts, sketches, diagrams, lists, etc.” When ideas, concepts, and information can be displayed visually, we no longer have to hold those representations solely in our minds, which thus frees up our cognitive capacities for deeper-level thinking about the meanings inscribed therein. We have had such external representations and displays of interior thoughts since at least the time of the ancient cave paintings and, of course, the development of pictograms and writing. So there has already been “considerable” augmentation of human intellect for some time.

What Engelbart realized is that computational devices, such as the graphical interface and the mouse, could make possible the display of these symbolic representations of thought in a way that greatly facilitated our capacity to “manipulate” and “rearrange” them, thus stimulating our thinking and leading to possible new discoveries and insights. When texts, diagrams, charts, images, etc. are thus displayed, then “characteristically ‘human’ activities” can come to the fore, such as “playing with forms and relationships to ask what develops,” and “rearranging” texts and diagrams to discover new patterns and more intricate syntheses of what were previously perhaps disparate elements. Electronic representations of symbolic elements, as compared to print, afford a far more expeditious modification and rearrangement of elements. And it is that playful modification and rearrangement that can lead to new understandings and breakthroughs.

These ideas reinforcing the importance of the visual arrangement of data for the possibility of insight recalled for me some lessons that I learned many years ago from one of my philosophical mentors, Bernard Lonergan. In his work on cognitional theory, Lonergan, recovering elements from Aristotle and Aquinas, developed the notion of “insight into phantasm,” meaning that genuine intellectual discovery is a matter of apprehending an intelligible pattern or form in a presentation of sensible data (that is, data perceived by our senses or generated by our imagination). “Phantasm” here can be broadly understood as “image.” Lonergan often remarked on the importance of the diagram for inference and insight, with reference, for example, to the role of diagrams in mathematical discovery. [For a fuller (though difficult!) account, see Bernard Lonergan, Verbum: Word and Idea in Aquinas, chapter 1, section 4.]

More generally then, the presentation and arrangement of symbolic data is crucial, and any given arrangement may be more or less conducive to discovery. That is why Engelbart’s observation about “playing” with and “rearranging” the materials we are dealing with strikes such a resonant chord. Even if we are only dealing with text, the ease of recombining and manipulating our words and phrases enables our writing to more quickly reach a suitable form of expression. And when we are dealing with multiple modalities of information, composing is even further facilitated by the digital tools that augment and amplify our native intellect. Gardner Campbell coined the lovely phrase “combinatorial disposition” to describe the penchant that one brings to materials in order to rearrange them in increasingly intricate patterns.

The second major takeaway is that, as we are social animals, our intellects work most effectively, not in isolation but in connection with others. When thoughts and ideas are externally represented, they thereby enter to some degree a public space, which creates the opportunity for the shared and collaborative rearrangement and manipulation of symbolic elements and the emergence of collective insight and intelligence. Engelbart refers to “team cooperation” in this essay and later uses the term “collective IQ.” So each person on a team can have simultaneous access to the same materials and “symbol structure,” and by working in “parallel independence” with one another, the team advances exponentially faster than could any single individual. The common space allows each person to contribute their own particular terminologies, intuitions, and hunches, while the interaction of these distinct perspectives spurs the refinement and precision of a more adequate set of explanatory concepts. This is made possible by the technical advancements of networked computers and hyperlinks, but again, the technological developments are only important insofar as they position the human intellect to more readily take advantage of its innately social and dialogical nature.

So now, as to the title of this post … the true and only AI. The notion of “artificial intelligence,” so much more prominent in recent awareness and discourse, was first proposed (or at least the term was coined) by computer scientist John McCarthy in 1955, less than a decade before Engelbart’s groundbreaking essay. Interestingly, McCarthy and his team were also active at Stanford during the same time period as Engelbart was, and I wonder if there was any interaction between them.

[Shortly after publishing this post, I came across this 2011 article by John Markoff, “A Fight to Win the Future: Computers vs. Humans,” which begins with precisely this juxtaposition between two labs at Stanford:

At the dawn of the modern computer era, two Pentagon-financed laboratories bracketed Stanford University. At one laboratory, a small group of scientists and engineers worked to replace the human mind, while at the other, a similar group worked to augment it.

In 1963 the mathematician-turned-computer scientist John McCarthy started the Stanford Artificial Intelligence Laboratory. The researchers believed that it would take only a decade to create a thinking machine.

Also that year the computer scientist Douglas Engelbart formed what would become the Augmentation Research Center to pursue a radically different goal — designing a computing system that would instead “bootstrap” the human intelligence of small groups of scientists and engineers.

For the past four decades that basic tension between artificial intelligence and intelligence augmentation — A.I. versus I.A. — has been at the heart of progress in computing science as the field has produced a series of ever more powerful technologies that are transforming the world.]

I have no expertise in this field, but I think it most unfortunate that “artificial intelligence” has enjoyed such prominence as compared to “augmented intelligence.” Now it depends upon how you define “intelligence,” of course, but on philosophical (and, ultimately, theological) grounds, I don’t think there is such a thing as “artificial intelligence.” Machines and systems might be said to be “intelligent” in that they are highly developed and programmed, even to the point of altering that program in response to external conditions. I agree with the philosopher John Searle, whose “Chinese Room Argument” refutes the idea that a machine could have a mind in exactly the same sense that humans do. While I can certainly appreciate the achievements and benefits of a machine such as IBM’s Watson, it does not, in my opinion, possess the “intelligence” of the sort that was necessary to create it in the first place.

With Engelbart, I contend that “augmented intelligence” is the true (and only) AI (an allusory nod there to cultural critic Christopher Lasch and his book, The True and Only Heaven). A focus on augmented intelligence can push us to identify what is truly distinctive about human intelligence even as we benefit from the increasingly complex tools which that intelligence has fashioned. Many examples of the amplification of human intelligence by digital tools can be found in Clive Thompson’s 2014 book, Smarter Than You Think: How Technology Is Changing Our Minds for the Better. As a chess player myself, I was particularly taken with chapter one, “The Rise of the Centaurs,” which describes the rise of computerized chess programs and their effect upon the game. The pivotal moment came in 1997, when then world champion Garry Kasparov was defeated in a match by IBM’s Deep Blue supercomputer. Thompson goes on to explain that in the last two decades, a “new form of chess intelligence” has emerged, in which the strongest players are neither grandmasters themselves nor machines themselves, but human/machine teams. As Kasparov noted, “Human strategic guidance combined with the tactical acuity of a computer is overwhelming.” [Chess engines today have gone far beyond even what Deep Blue accomplished, and you can freely (as in beer) access some of them, such as Stockfish (open source), right in your browser.]

Thompson concludes, generalizing from the chess experience: “We’re all playing advanced chess these days. We just haven’t learned to appreciate it.” So I guess that means we are all centaurs as well?

Notes and Trails

I’m still processing this week’s #openlearning17 webinar, “‘As We May Think’, Annotation, and Liberal Learning: A Conversation with Jon Udell and Jeremy Dean.” And I find myself wanting to use the very practices and ideas that were discussed therein, annotating the conversation and creating my own knowledge structure by connecting ideas and concepts in a trail that makes sense to me. I guess I could use a video annotator that I’m familiar with, such as Classroom Salon, or perhaps produce a transcript that could itself be annotated with Hypothesis. But in the interests of time, I’ll forego those options and write up just a few thoughts.

There are lots of possible places to start, but let’s go with Jon Udell’s observation about the “sea change” that occurs when we annotate a document and are able to extract the nuggets relevant to our inquiry…a phrase, a sentence, a number in a table, a cell in a spreadsheet, etc. With annotation, you have a resource that is more granular than the document as a whole, a resource that has its own URL (like a document or an entire webpage). Not only does this nugget have its own URL, but it can be tagged to become part of a larger path or group of related items.
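To make that concrete, here is roughly what such an addressable annotation looks like, sketched as a Python dict loosely following the W3C Web Annotation data model (the id URL is invented; the quoted phrase is from Bush’s “As We May Think”):

```python
# A sketch of how an annotated nugget becomes its own web resource, loosely
# modeled on the W3C Web Annotation data model. The id URL is hypothetical.

annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "id": "https://example.org/annotations/8ac53f2e",  # the nugget's own URL
    "type": "Annotation",
    "body": [
        # A comment and two tags; in this model, tags are bodies too.
        {"type": "TextualBody", "purpose": "commenting",
         "value": "A key segment of interest for our inquiry."},
        {"type": "TextualBody", "purpose": "tagging", "value": "openlearning17"},
        {"type": "TextualBody", "purpose": "tagging", "value": "memex"},
    ],
    "target": {
        "source": "https://example.org/as-we-may-think",  # the larger document
        "selector": {
            "type": "TextQuoteSelector",  # anchors the exact phrase selected
            "exact": "wholly new forms of encyclopedias will appear",
        },
    },
}

# The selected phrase, not the whole page, is now the unit of reference:
print(annotation["id"], "->", annotation["target"]["selector"]["exact"])
```

Because the annotation has its own id and carries tags, it can be fetched, linked, and grouped into trails independently of the page it was made on, which is what makes it more than marginalia.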

Why is this such a game-changer (to shift our change metaphors)? Perhaps because now we can get a more accurate representation, in the exterior public space of the Web, of the kind of connections being forged in the inquirer’s brain. My thinking is not primarily a matter of linking texts and documents as such, but of connecting ideas and concepts and discrete pieces of data. And Hypothesis provides some of the “architectural glue” for holding the structure together. I hadn’t really thought previously about Hypothesis as an “infrastructure” that goes beyond just creating marginalia on a single digital text (which might be the “digital facelift” understanding of it, to use Gardner Campbell’s phrase). So it’s not just the annotation itself, but this “annotation infrastructure,” the way that the associated link and tags can be combined and recombined, that is so powerful.

And, it seems that this interplay between what’s taking place in an inquirer’s brain and what’s being represented in the open space of the Web is another theme running through the conversation. In discussing the nature of a “tag,” Gardner Campbell posited a juxtaposition between the tag as (i) a shared conceptual marker (thus public and open), and (ii) the tag in an individual’s associative trail, which is meaningful in the network of that person’s brain. Later, in a related observation, he spoke of two ways to think about associative trails (which, it seems, is what a tag fundamentally is). First, they are connections that thinkers make and want to share as part of a record of activity and a larger effort to make those kinds of links another layer of meaning (again, the public/open sense). Second, tags are a kind of context by which we can understand where ideas come from in the first place. A well-chosen tag prompts us to think about our own thinking, to reflect upon how it is that one idea is connected to another in our own minds. Here Gardner laments the difficulty of getting students to engage in this metacognition, or in focusing on anything beyond what they perceive an instructor to prescribe and expect and to have already created for them.

But to become more aware of the web of connections and associations that exists in one’s brain is to realize that this web is made up not only of explicitly conceptual links, but also largely unconscious links generated by our lived experience, which may seem irrelevant to the investigation or study at hand, but which provide a rich context for thinking about how we think. Creative ideas and insights may well emerge from these seemingly unrelated associations in our minds. So as well, to learn from another person is to discover something of the web inside that person’s mind.

This is to say that as we seek to build knowledge by accessing resources and creating connections on the web, the resources available to us are not just texts and other documents but people. Learning involves engaging with others on a human level. Jon Udell, I believe, used the phrase “context is a service we provide each other.” I’m still puzzling out what this means, but I’m guessing that it refers to the ideal of accessing and processing information, not in an abstract and depersonalized manner, but within the context of other learners and practitioners in “trust networks.” And that’s the difference between reading “As We May Think” on my own, and working through it in this community, this network of fellow learners.

Happy (Associative) Trails To You, Revisited

Reading “As We May Think” again is like revisiting an old friend, and in this case it prompted me to recall a classroom experience that I had a couple of years ago, during a previous engagement with Vannevar Bush. I developed a new course called “How the Web Works” to teach at Austin College for the summer of 2015 and, grateful for the generosity of openness, I liberally borrowed elements from several existing cMOOCs and created a mashup of “Thought Vectors in Concept Space,” “DS 106,” and “Digital Citizenship” (thanks Gardner, CogDog, and Chris!). Summer enrollment at AC is quite small, and it turns out I had only one student, named Chris. At the same time, I was finishing up another short cMOOC that Christina Hendricks was facilitating, “Teaching with WordPress,” which helped me to think through the process of setting up my course on WP. So much generosity! In that context, I wrote a blog post entitled “Happy (Associative) Trails to You,” describing the pathways Chris and I wound through during one particular class meeting. It’s kinda long, but it seemed to pull a lot of things together for me that may be relevant to #openlearning17, so I’m going to reproduce it here on my new site.

Although #TWP15 has ended its formal run, the network lives on (I hope!), and so I’m going to keep blogging about this WordPress-based course, “How the Web Works.” It also occurred to me after today’s class meeting that sometimes it would be a helpful practice to blog about a particular meeting as a reflective exercise, to think out loud, and to capture a bit of what transpired to help clarify my own thinking.

I realized toward the end of our time today that we were on an “associative” trail that was largely serendipitous and unpredictable. An associative trail, for those new to the term, was the phrase Vannevar Bush used to describe a sequence of nodes of information and ideas, one prompting the next, through which one could move in an emergent process of discovery (see his famous 1945 article, “As We May Think,” which foreshadows the development of hypertext). Bush was describing an information retrieval and management system modeled on the way our brains work, namely, by a process of association in moving from one item of consciousness to the next. Just for kicks, you can check out this video by Gardner Campbell demonstrating how he followed an associative trail through the stacks of a library and elsewhere to discover new resources for one of his classes. And here’s a cool account of a serendipitous “distraction trail” by Alan Levine (aka CogDog).

So something analogous to that happened in class today. On a regular basis I’m trying to introduce new knowledge tools for students to incorporate into their learning toolkit, and so today I started out by demonstrating Diigo for social bookmarking and annotation. I discussed how I use my Diigo library and network in my workflow, and then turned our shared screen over to Chris for him to set up his account and to add some relevant bookmarks. As I watched him learn to navigate the basic elements of the application, I had occasion to explain one crucial difference between the free and premium versions of Diigo–the ability to cache weblinks. It’s all well and good to bookmark a resource, but if the link rots or gets broken, you’re out of luck if you don’t have a cached version or a local copy of the resource. Now I didn’t require Chris to get a premium account, but this was an opportunity for a bit of discussion about the impermanence of the web, and I was able to reference a speech given earlier this year by Vint Cerf, a pioneer developer of basic internet protocols, warning about the possibility of a “digital Dark Age.” (Of course I found the reference almost immediately because I had bookmarked an article about the speech on Diigo.)
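For readers without a premium account, the same protection against link rot can be improvised. Here is a minimal sketch of keeping a dated local copy of a bookmarked page, using only the Python standard library; the function and file names are my own illustration, not a Diigo feature.

```python
# Sketch: save a local, date-stamped copy of a bookmarked page
# so the content survives even if the original link rots.
import urllib.request
from datetime import date
from pathlib import Path


def safe_name(url: str) -> str:
    """Derive a filesystem-safe slug from a URL (capped at 80 chars)."""
    return "".join(c if c.isalnum() else "_" for c in url)[:80]


def cache_bookmark(url: str, cache_dir: str = "bookmark_cache") -> Path:
    """Download the page at `url` into `cache_dir` and return the saved path."""
    Path(cache_dir).mkdir(exist_ok=True)
    path = Path(cache_dir) / f"{date.today()}_{safe_name(url)}.html"
    with urllib.request.urlopen(url) as resp:
        path.write_bytes(resp.read())
    return path
```

A periodic run over one’s bookmark export would give a crude personal archive; services like the Wayback Machine do this far more robustly, but the principle is the same.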

From there I described how I use Diigo (and Twitter) networks to discover new resources and to share references with others, and how I also use a service called “If This Then That” (IFTTT) to automatically tweet the title and URL of any resource that I save in my Diigo library. That led to Chris setting up his own free IFTTT account on the spot, and creating his first “recipe,” to automatically tweet the title and URL of new blog posts from his WordPress site. We then launched into a broader discussion of APIs and how applications talk to each other and share information, and why that’s important in an open web environment. I asked Chris if he had any experience with APIs, and he mentioned the Steam gaming distribution platform, on which he is quite active. It seems that on Steam, players can develop “mods” (modifications) to existing games that other players can download and add to their version of the game. But sometimes mods can conflict, and so Steam has developed a way to bundle mods together and determine whether there will be any conflicts among them and with the base game before they are downloaded and installed.
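The “if this then that” glue pattern that IFTTT automates can be sketched in a few lines: watch a feed for items you haven’t seen, then hand each new one to an action. This is an illustrative sketch only; `post_tweet` is a hypothetical stand-in for a real Twitter API call, and the parsing assumes a plain RSS 2.0 feed.

```python
# Sketch of the trigger/action pattern behind an IFTTT recipe:
# trigger = new item in an RSS feed; action = post a tweet.
import xml.etree.ElementTree as ET


def new_items(rss_xml: str, seen: set) -> list:
    """Return (title, link) pairs for feed items whose links are not in `seen`."""
    root = ET.fromstring(rss_xml)
    items = []
    for item in root.iter("item"):
        title = item.findtext("title", "")
        link = item.findtext("link", "")
        if link not in seen:
            seen.add(link)          # remember it so we only act once
            items.append((title, link))
    return items


def post_tweet(text: str) -> None:
    """Hypothetical action; a real recipe would call the Twitter API here."""
    print(text)


def run_recipe(rss_xml: str, seen: set) -> None:
    """One polling pass: tweet every item not yet seen."""
    for title, link in new_items(rss_xml, seen):
        post_tweet(f"{title} {link}")
```

IFTTT hides all of this behind a web form, but conceptually a recipe is exactly this loop: one service’s output feed becomes another service’s input, via their public APIs.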

As Chris was demonstrating this by showing me his Steam site, I noticed some of the games he was playing, and we started talking about those. Mostly they were historical immersion role-playing games such as Crusader Kings II and Europa Universalis IV. I realized we had an opportunity to open up a whole new space for discussing games and learning, as I recalled two resources dealing specifically with immersive game experiences for learning history at the undergraduate level. First, Reacting to the Past, a classroom simulation experience originating at Barnard College and now used at over 300 colleges and universities in the US and abroad. According to the RTTP website,

Reacting to the Past (RTTP) consists of elaborate games, set in the past, in which students are assigned roles informed by classic texts in the history of ideas. Class sessions are run entirely by students; instructors advise and guide students and grade their oral and written work. It seeks to draw students into the past, promote engagement with big ideas, and improve intellectual and academic skills. Reacting to the Past was honored with the 2004 Theodore Hesburgh Award (TIAA-CREF) for outstanding innovation in higher education.

Secondly, I called to mind an initiative called Play the Past, which operates from a similar conceptual framework but is more oriented toward actual gameplay rather than classroom simulation. PTP is a community of academics and developers pursuing the study of gaming, learning, and cultural heritage:

Collaboratively edited and authored, Play the Past is dedicated to thoughtfully exploring and discussing the intersection of cultural heritage (very broadly defined) and games/meaningful play (equally broadly defined). Play the Past contributors come from a wide variety of backgrounds, domains, perspectives, and motivations (for being interested in both games and cultural heritage) – a fact which is evident in the wide variety of topics we tackle in our posts.

As we looked at the PTP home page, it just so happened that the latest article featured there was an interview with the developers of a World War I immersive simulation game called Verdun (after one of the iconic battles of that war). Turns out that this is one of Chris’s favorite games; he has played it extensively, and has even presented about it at a gaming conference. Interestingly, if you google “Verdun,” the first return is not, as you might have expected, the Wikipedia article on the battle (that’s no. 3), but the page for the game on Steam. So we chatted about how the game places the player inside the experience of WWI and conveys the futility and senselessness of the war, along with historical information about the military strategies and maneuvers, the political dynamics of the conflict, and the cultural and economic contexts in which the war played out.

So now Chris is going to write up a blog post about his experience playing Verdun, complete with a “let’s play” video demonstration documenting his gameplay with commentary. And when the class had started a couple of hours earlier, I had no idea we were going to get here…a completely “emergent” development that we reached by an associative trail. Now it’s not likely that every class will exhibit such a clear sequence of leaps from one idea to the next, but I thought this instance was striking enough to be worth writing about.



Hello #OpenLearning17

I’m excited to join the party at Open Learning: A Connectivist MOOC for Faculty Collaboratives. While I’ve been blogging for the last couple of years at Digital Pedagogy @ Austin College, I’ve inexplicably put off writing and speaking in a space that is truly my own, in a voice that is truly my own. The hypocrisy of it, to be honest, is that I encourage my faculty colleagues and our students to claim their own online domains, to think and write and work in public, to blog and to create in an open connected network. Even though I wasn’t really doing it myself!

For whatever reason, the chance to be a part of this course has broken through my inertia and resistance. Especially after hearing Gardner Campbell’s recent keynote at the OU Academic Tech Expo (video coming soon!) and catching his interviews with Laura Gogia, I felt that now was “the acceptable time” to make the leap. I’m grateful that you guys have opened the course to those of us beyond your fair state of Virginia.

Looking forward to learning with all of you…and welcoming any advice and suggestions for shaping up this here blog and website!

