Category: OpenLearning

Open for Blogging: Greetings, #openlearning18

There is a certain genre of blog post that is an apologetic acknowledgement of a lengthy lapse in blogging activity, coupled with a hopeful if perhaps uncertain commitment to “do better” going forward. So here you go … “bless me, reader, for I have sinned; it’s been about nine months since my last blog post.”

I’m glad #openlearning18 is up and running and nudging me to write here again, in this place that I started for #openlearning17. And I’m doing so from a new place and position, as Director of Academic Technology and the Digital Liberal Arts Collaborative (DLAC) at Grinnell College, a small liberal arts college about an hour east of Des Moines, Iowa. Last year at this time I was at Austin College as an academic technologist, but the funding for the position came to an end last summer. I started the position at Grinnell on September 11, so these last few months have been occupied with transition and new beginnings. Getting the blog started again is one of those new beginnings that I’m excited about, especially as one of my roles at the college is precisely to encourage students and faculty to start up their own blog engines.

What do I hope to gain from #openlearning18? Are you asking about learning outcomes? (just kidding, Gardner). I hope to continue being part of a growing network of learners interested in promoting digital literacies and open practices in higher ed, so (re)connecting with old colleagues and new is a big part of it for me. Getting that nudge back into blogging is an incentive as well. In fact, I’m doubling up on social accountability here, as I’m writing this post at a Friday afternoon collaborative work session with a group of students here at Grinnell who are part of our Vivero Digital Scholarship Fellows program. We have what amounts to a co-working space every Friday afternoon. I think I’m more committed to getting my post written here than I would be by myself in my office.

I’m also hoping to learn more about this “faculty collaborative” model of faculty development to inform our programs at Grinnell. Now that I’m getting a bit more settled in here, we’re starting to think on a longer-term scale about what we want the digital liberal arts to look like at Grinnell. I think that vision will at a minimum involve three components—a framework for articulating desired student digital competencies and literacies (along the lines of the model developed at Bryn Mawr); the development of a digital portfolio practice and culture; and, supporting both of those, a more robust version of Domain of One’s Own. We currently have a web-hosting arrangement with Reclaim Hosting but, oddly, this is generally available only to faculty and not to students.

What do I hope to contribute to #openlearning18? Well, I hope to be a supportive colleague and friend, a good “node in the network,” and an active participant in the group. Maybe I can add some helpful resources and links to the mix and offer some perspective from where I stand. The great thing about these kinds of networked opportunities is that you usually can’t predict in advance what you will learn or contribute. So I’m looking forward to unexpected joys and discoveries!

I have to add that it was really fun to meet (most of) the directors of #openlearning recently as they presented at the EDUCAUSE Learning Initiative conference in New Orleans. Their session got me all excited again for what #openlearning17 was and what #openlearning18 will be.

So, in the spirit of what Gardner Campbell said there, I’m glad to once again be a part of this network. Let’s deliver!

“Magical Combinations:” Notes from a Ted Talk

How does one respond to the blooming, buzzing mind of Ted Nelson? I don’t really know…trying to follow the associative trails in that mind is quite the adventure. I’ll take my fallback approach, which is to gather some fragments or nuggets of thought (or, as Jon Udell might now term them, “segments of interest”), throw them together on the page, and see what happens. This is after watching the video interview and hacking my way through Computer Lib/Dream Machines. Maybe a composition of sorts will emerge, though at this point it will be a largely impressionistic set of quick jottings.

Ted tells us that one of the reasons he got into computers was that “I wanted a satisfactory system of taking notes.”  That resonates; most of us, I think, have our personal knowledge management systems that we tweak and seek to improve. I was reminded of our previous discussions of Hypothes.is as a “web annotation infrastructure” and also of John Stewart’s “Digital Note-taking in the History Classroom.” These open notebook practices facilitate that rearrangement and recombination of ideas that Engelbart spoke of. Nelson says that wanting to use his filecards in many places (in other words, to move them around and rearrange them) led to the notion of “transclusion” (a term I’d never heard of until Udell’s use of it a couple of weeks ago) and thence to the possibility of new compositions of ideas and expressions. And sometimes, just maybe, those new compositions, open to interaction with the compositions of others, can lead to “magic combinations,” structured and designed in novel and complex ways. Again, carrying forward the torch of Engelbart.
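To make “transclusion” a bit more concrete, here is a minimal sketch of the idea in Python. This is my own illustration, not Nelson’s design: the card names and the {{include:…}} marker syntax are invented for the example. The point is that a note card lives in exactly one place and is pulled into any number of compositions by reference, so editing the card updates every composition that includes it.

```python
import re

# A tiny note store: each card exists exactly once.
cards = {
    "nelson-notes": "I wanted a satisfactory system of taking notes.",
    "engelbart-link": "Rearranging symbol structures stimulates new insight.",
}

def transclude(text: str, store: dict) -> str:
    """Expand {{include:card-id}} markers with the referenced card's text.

    The card is never copied into the composition's source; it is pulled
    in by reference at render time, so an edit to the card shows up in
    every composition that transcludes it.
    """
    return re.sub(
        r"\{\{include:([\w-]+)\}\}",
        lambda m: store.get(m.group(1), "[missing card]"),
        text,
    )

composition = ("Nelson: {{include:nelson-notes}} "
               "Engelbart: {{include:engelbart-link}}")
print(transclude(composition, cards))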

The branches of philosophy…well, this is definitely a novel mind at work. Schematics (“the study of structure and representation”), normatics (“the philosophy of ‘shoulds’”), gamics (“the structure of situations and maneuvers in situations”), and idemics (“the representations of the tokens that we present our ideas with and argue with and manipulate as we redesign ideas”). I don’t claim to understand much if any of this, but what caught my attention with regard to idemics (sp?) are his examples of “designing sitcoms, theology, and software.” Nelson may be the first person ever to juxtapose those three domains…what on earth could he possibly mean? I’m a theologian by disciplinary background, and I would love to hear more! This is a tease, especially since he apparently would not have had much use for theology, if Wikipedia is to be believed.

Quickly scanning Computer Lib/Dream Machines, there is too much to respond to, but my highlighter got busy at the description of “fantics,” by which Nelson means “the art and science of getting ideas across, both emotionally and cognitively … ‘presentation’ could be a general word for it” (319). Now I had already been stunned earlier in the essay with a quote from Bertrand Russell–“The reason is, and by rights ought to be, slave to the emotions” (306). Bertrand Russell (channelling Hume?) said this?  I mean, yes, embodied cognition and all that, but… And now Ted goes on to flesh it out. What comes across in a presentation always has a dual character: explicit declarative structure imbued with connotative impressions, feelings, and sensibilities. And so, as we consider our pedagogies,

The reader or viewer always gets feelings along with information, even when the creators of the information think that its ‘content’ is much more restricted…. Fantics is thus concerned with both the arts of effect–writing, theater, and so on–and the structures and mechanisms of thought, including the various traditions of the scholarly event (article, book, lecture, debate, and class). These are all a fundamentally inseparable whole, and technically-oriented people who think that systems to interact with people, or teach, or bring up information, can function on some “technical” basis–with no tie-ins to human feelings, psychology, or the larger social structure–are kidding themselves and everyone else (319).

This is the work of “writers and editors, movie-makers and lecturers, radio announcers and layout people and advertising people” … and, it seems, the work of teachers and learners. Let’s make some magic combinations.

Augmented Intelligence: The True and Only AI

This week for #OpenLearning17 we’re reading excerpts from Doug Engelbart’s “Augmenting Human Intellect: A Conceptual Framework.” And here I am now, in reflecting upon this text, putting into practice the very “conceptual framework” that it so perspicaciously advanced more than half a century ago. We take it so much for granted today, this ability to display and manipulate and rearrange text and other symbols that represent our thought, in a networked environment that invites comment and annotation, that we forget how revolutionary these proposals were at the time. A time when computing was equated with calculation, and when speaking of AI could only mean a reference to “artificial intelligence,” not “augmented intelligence.”

So, to the text. For me, Engelbart highlighted two crucial features of human intellect. First, our intellects operate largely upon external representations of meaning, that is, symbols: “texts, sketches, diagrams, lists, etc.” When ideas, concepts, and information can be displayed visually, we no longer have to hold those representations solely in our minds, which frees up our cognitive capacities for deeper-level thinking about the meanings inscribed therein. We have had such external representations and displays of interior thought since at least the time of the ancient cave paintings and, of course, the development of pictograms and writing. So there has already been “considerable” augmentation of human intellect for some time.

What Engelbart realized is that computing innovations such as the graphical interface and the mouse could make possible the display of these symbolic representations of thought in a way that greatly facilitates our capacity to “manipulate” and “rearrange” them, thus stimulating our thinking and leading to possible new discoveries and insights. When texts, diagrams, charts, images, etc. are thus displayed, then “characteristically ‘human’ activities” can come to the fore, such as “playing with forms and relationships to ask what develops,” and “rearranging” texts and diagrams to discover new patterns and more intricate syntheses of what were previously perhaps disparate elements. Electronic representations of symbolic elements, as compared to print, afford far more expeditious modification and rearrangement. And it is that playful modification and rearrangement that can lead to new understandings and breakthroughs.

These ideas reinforcing the importance of the visual arrangement of data for the possibility of insight recalled for me some lessons that I learned many years ago from one of my philosophical mentors, Bernard Lonergan. In his work on cognitional theory, Lonergan, recovering elements from Aristotle and Aquinas, developed the notion of “insight into phantasm,” meaning that genuine intellectual discovery is a matter of apprehending an intelligible pattern or form in a presentation of sensible data (that is, data perceived by our senses or generated by our imagination). “Phantasm” here can be broadly understood as “image.” Lonergan often remarked on the importance of the diagram for inference and insight, with reference, for example, to the role of diagrams in mathematical discovery. [For a fuller (though difficult!) account, see Bernard Lonergan, Verbum: Word and Idea in Aquinas, chapter 1, section 4.]

More generally then, the presentation and arrangement of symbolic data is crucial, and any given arrangement may be more or less conducive to discovery. That is why Engelbart’s observation about “playing” with and “rearranging” the materials we are dealing with strikes such a resonant chord. Even if we are only dealing with text, the ease of recombining and manipulating our words and phrases enables our writing to more quickly reach a suitable form of expression. And when we are dealing with multiple modalities of information, composing is even further facilitated by the digital tools that augment and amplify our native intellect. Gardner Campbell coined the lovely phrase “combinatorial disposition” to describe the penchant that one brings to materials in order to rearrange them in increasingly intricate patterns.

The second major takeaway is that, as we are social animals, our intellects work most effectively not in isolation but in connection with others. When thoughts and ideas are externally represented, they thereby enter, to some degree, a public space, which creates the opportunity for the shared and collaborative rearrangement and manipulation of symbolic elements, and for the emergence of collective insight and intelligence. Engelbart refers to “team cooperation” in this essay and later uses the term “collective IQ.” Each person on a team can have simultaneous access to the same materials and “symbol structure,” and by working in “parallel independence” with one another, the team advances far faster than any single individual could. The common space allows each person to contribute their own particular terminologies, intuitions, and hunches, while the interaction of these distinct perspectives spurs the refinement and precision of a more adequate set of explanatory concepts. This is made possible by the technical advances of networked computers and hyperlinks, but again, the technological developments are only important insofar as they position the human intellect to more readily take advantage of its innately social and dialogical nature.

So now, as to the title of this post … the true and only AI. The notion of “artificial intelligence,” so much more prominent in recent awareness and discourse, was first proposed (or at least the term was coined) by computer scientist John McCarthy in 1955, less than a decade before Engelbart’s groundbreaking essay. Interestingly, McCarthy and his team were also active at Stanford during the same time period as Engelbart was, and I wonder if there was any interaction between them.

[Shortly after publishing this post, I came across this 2011 article by John Markoff, “A Fight to Win the Future: Computers vs. Humans,” which begins with precisely this juxtaposition between two labs at Stanford:

At the dawn of the modern computer era, two Pentagon-financed laboratories bracketed Stanford University. At one laboratory, a small group of scientists and engineers worked to replace the human mind, while at the other, a similar group worked to augment it.

In 1963 the mathematician-turned-computer scientist John McCarthy started the Stanford Artificial Intelligence Laboratory. The researchers believed that it would take only a decade to create a thinking machine.

Also that year the computer scientist Douglas Engelbart formed what would become the Augmentation Research Center to pursue a radically different goal — designing a computing system that would instead “bootstrap” the human intelligence of small groups of scientists and engineers.

For the past four decades that basic tension between artificial intelligence and intelligence augmentation — A.I. versus I.A. — has been at the heart of progress in computing science as the field has produced a series of ever more powerful technologies that are transforming the world.]

I have no expertise in this field, but I think it most unfortunate that “artificial intelligence” has enjoyed such prominence as compared to “augmented intelligence.” It depends upon how you define “intelligence,” of course, but on philosophical (and, ultimately, theological) grounds, I don’t think there is such a thing as “artificial intelligence.” Machines and systems might be said to be “intelligent” in that they are highly developed and programmed, even to the point of altering that programming in response to external conditions. I agree with the philosopher John Searle, whose “Chinese Room Argument” refutes the idea that a machine could have a mind in exactly the same sense that humans do. While I can certainly appreciate the achievements and benefits of a machine such as IBM’s Watson, it does not, in my opinion, possess “intelligence” of the sort that was necessary to create it in the first place.

With Engelbart, I contend that “augmented intelligence” is the true (and only) AI (an allusive nod there to cultural critic Christopher Lasch and his book, The True and Only Heaven). A focus on augmented intelligence can push us to identify what is truly distinctive about human intelligence even as we benefit from the increasingly complex tools which that intelligence has fashioned. Many examples of the amplification of human intelligence by digital tools can be found in Clive Thompson’s 2014 book, Smarter Than You Think: How Technology Is Changing Our Minds for the Better. As a chess player myself, I was particularly taken with chapter one, “The Rise of the Centaurs,” which describes the rise of computerized chess programs and their effect upon the game. The pivotal moment came in 1997, when then-world champion Garry Kasparov was defeated in a match by IBM’s Deep Blue supercomputer. Thompson goes on to explain that in the last two decades a “new form of chess intelligence” has emerged, in which the strongest players are neither grandmasters nor machines by themselves, but human/machine teams. As Kasparov noted, “Human strategic guidance combined with the tactical acuity of a computer is overwhelming.” [Chess engines today have gone far beyond even what Deep Blue accomplished, and you can freely (as in beer) access some of them, such as Stockfish (open source), right in your browser.]
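Since Stockfish is freely available, you can experiment with this centaur-style division of labor yourself. Below is a rough sketch using the python-chess library, assuming a Stockfish binary is installed locally (the path is a placeholder; adjust it for your system): the human proposes strategic candidates, the engine measures their tactical consequences.

```python
import chess
import chess.engine

# Assumption: a local Stockfish binary; adjust the path for your system.
ENGINE_PATH = "stockfish"

board = chess.Board()  # the standard starting position

# A centaur workflow in miniature: the human supplies candidate moves,
# the engine supplies an evaluation of each.
candidates = ["e2e4", "d2d4", "g1f3"]

engine = chess.engine.SimpleEngine.popen_uci(ENGINE_PATH)
for uci in candidates:
    move = chess.Move.from_uci(uci)
    board.push(move)  # try the human's idea
    info = engine.analyse(board, chess.engine.Limit(depth=12))
    # Report the evaluation from White's point of view.
    print(uci, info["score"].white())
    board.pop()  # restore the position for the next candidate
engine.quit()
```

The human still decides which plans are worth considering; the engine simply makes the tactical consequences of each plan visible, which is the “augmentation” Kasparov describes.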

Thompson concludes, generalizing from the chess experience: “We’re all playing advanced chess these days. We just haven’t learned to appreciate it.” So I guess that means we are all centaurs as well?

Notes and Trails

I’m still processing this week’s #openlearning17 webinar, “‘As We May Think’, Annotation, and Liberal Learning: A Conversation with Hypothes.is’s Jon Udell and Jeremy Dean.” And I find myself wanting to use the very practices and ideas that were discussed therein, annotating the conversation and creating my own knowledge structure by connecting ideas and concepts in a trail that makes sense to me. I suppose I could use a video annotator that I’m familiar with, such as Classroom Salon, or perhaps produce a transcript that could itself be annotated with Hypothes.is. But in the interests of time, I’ll forgo those options and write up just a few thoughts.

There are lots of possible places to start, but let’s go with Jon Udell’s observation about the “sea change” that occurs when we annotate a document and are able to extract the nuggets relevant to our inquiry…a phrase, a sentence, a number in a table, a cell in a spreadsheet, etc. With annotation, you have a resource that is more granular than the document as a whole, a resource that has its own URL (like a document or an entire webpage). Not only does this nugget have its own URL, but it can be tagged to become part of a larger path or group of related items.

Why is this such a game-changer (to shift our change metaphors)? Perhaps because now we can get a more accurate representation, in the exterior public space of the Web, of the kind of connections being forged in the inquirer’s brain. My thinking is not primarily a matter of linking texts and documents as such, but of connecting ideas and concepts and discrete pieces of data. And Hypothes.is provides some of the “architectural glue” for holding the structure together. I hadn’t really thought previously about Hypothes.is as an “infrastructure” that goes beyond just creating marginalia on a single digital text (which might be the “digital facelift” understanding of it, to use Clay Shirky’s phrase). So it’s not just the annotation itself, but this “annotation infrastructure,” the way that the associated link and tags can be combined and recombined, that is so powerful.
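That infrastructure is tangible: Hypothes.is exposes public annotations through a search API, so a tag really does function as an addressable trail you can walk programmatically. Here is a hedged sketch in Python; the tag is just an example, and I’m relying on the API’s publicly documented field names.

```python
import requests

# Query Hypothes.is's public search API for annotations sharing a tag.
resp = requests.get(
    "https://api.hypothes.is/api/search",
    params={"tag": "openlearning17", "limit": 5},
    timeout=10,
)
resp.raise_for_status()

for row in resp.json()["rows"]:
    # Each annotation is an addressable resource in its own right,
    # with its own id, independent of the page it annotates.
    print(row["id"])
    print(row["uri"])                                  # the annotated page
    print(row.get("tags", []))                         # the trail(s) it belongs to
    print(row.get("links", {}).get("incontext", ""))   # link to the annotation itself
    print("---")
```

Each “nugget” comes back as its own resource with its own links and tags, which is exactly the granularity Udell is pointing to.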

And, it seems that this interplay between what’s taking place in an inquirer’s brain and what’s being represented in the open space of the Web is another theme running through the conversation. In discussing the nature of a “tag,” Gardner Campbell posited a juxtaposition between the tag as (i) a shared conceptual marker (thus public and open), and (ii) the tag in an individual’s associative trail, which is meaningful in the network of that person’s brain. Later, in a related observation, he spoke of two ways to think about associative trails (which, it seems, is what a tag fundamentally is). First, they are connections that thinkers make and want to share as part of a record of activity, and as part of a larger effort to make those kinds of links another layer of meaning (again, the public/open sense). Second, tags are a kind of context by which we can understand where ideas come from in the first place. A well-chosen tag prompts us to think about our own thinking, to reflect upon how it is that one idea is connected to another in our own minds. Here Gardner laments how difficult it is to get students to engage in this metacognition, or to focus on anything beyond what they perceive an instructor to have prescribed, expected, and already created for them.

But to become more aware of the web of connections and associations in one’s brain is to realize that this web is made up not only of explicitly conceptual links, but also of largely unconscious links generated by our lived experience. Those links may seem irrelevant to the investigation or study at hand, yet they provide a rich context for thinking about how we think. Creative ideas and insights may well emerge from these seemingly unrelated associations in our minds. So, as well, to learn from another person is to discover something of the web inside that person’s mind.

This is to say that as we seek to build knowledge by accessing resources and creating connections on the web, the resources available to us are not just texts and other documents but people. Learning involves engaging with others on a human level. Jon Udell, I believe, used the phrase “context is a service we provide each other.” I’m still puzzling out what this means, but I’m guessing that it refers to the ideal of accessing and processing information, not in an abstract and depersonalized manner, but within the context of other learners and practitioners in “trust networks.” And that’s the difference between reading “As We May Think” on my own, and working through it in this community, this network of fellow learners.

Happy (Associative) Trails To You, Revisited

Reading “As We May Think” again is like revisiting an old friend, and in this case it prompted me to recall a classroom experience from a couple of years ago, during a previous engagement with Vannevar Bush. I developed a new course called “How the Web Works” to teach at Austin College in the summer of 2015 and, grateful for the generosity of openness, I liberally borrowed elements from several existing cMOOCs, creating a mashup of “Thought Vectors in Concept Space,” “ds106,” and “Digital Citizenship” (thanks Gardner, CogDog, and Chris!). Summer enrollment at AC is quite small, and it turned out I had only one student, named Chris. At the same time, I was finishing up another short cMOOC that Christina Hendricks was facilitating, “Teaching with WordPress,” which helped me think through the process of setting up my course on WP. So much generosity! In that context, I wrote a blog post entitled “Happy (Associative) Trails to You,” describing the pathways Chris and I wound through during one particular class meeting. It’s kinda long, but it seemed to pull a lot of things together for me that may be relevant to #openlearning17, so I’m going to reproduce it here on my new site.
————————————————————————————————–

Although #TWP15 has ended its formal run, the network lives on (I hope!), and so I’m going to keep blogging about this WordPress-based course, “How the Web Works.” It also occurred to me after today’s class meeting that it would sometimes be a helpful practice to blog about a particular meeting as a reflective exercise: to think out loud and to capture a bit of what transpired, to help clarify my own thinking.

I realized toward the end of our time today that we were on an “associative” trail that was largely serendipitous and unpredictable. An associative trail, for those new to the term, was the phrase Vannevar Bush used to describe a sequence of nodes of information and ideas, one prompting the next, through which one could move in an emergent process of discovery (see his famous 1945 article, “As We May Think,” which foreshadows the development of hypertext). Bush was describing an information retrieval and management system modeled on the way our brains work, namely, by a process of association in moving from one item of consciousness to the next. Just for kicks, you can check out this video by Gardner Campbell demonstrating how he followed an associative trail through the stacks of a library and elsewhere to discover new resources for one of his classes. And here’s a cool account of a serendipitous “distraction trail” by Alan Levine (aka CogDog).

So something analogous to that happened in class today. On a regular basis I’m trying to introduce new knowledge tools for students to incorporate into their learning toolkits, and so today I started out by demonstrating Diigo for social bookmarking and annotation. I discussed how I use my Diigo library and network in my workflow, and then turned our shared screen over to Chris for him to set up his account and add some relevant bookmarks. As I watched him learn to navigate the basic elements of the application, I had occasion to explain one crucial difference between the free and premium versions of Diigo: the ability to cache bookmarked pages. It’s all well and good to bookmark a resource, but if the link rots or gets broken, you’re out of luck without a cached version or a local copy. I didn’t require Chris to get a premium account, but this was an opportunity for a bit of discussion about the impermanence of the web, and I was able to reference a speech given earlier this year by Vint Cerf, a pioneering developer of basic internet protocols, warning about the possibility of a “digital Dark Age.” (Of course I found the reference almost immediately, because I had bookmarked an article about the speech on Diigo.)
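For those without a premium account, a rough do-it-yourself stand-in (my own sketch, not Diigo’s actual caching mechanism) is to snapshot a page locally at the moment you bookmark it:

```python
import datetime
import pathlib
import requests

def cache_bookmark(url: str, cache_dir: str = "bookmark_cache") -> pathlib.Path:
    """Save a local snapshot of a bookmarked page, so the bookmark still
    points at something readable if the live link later rots."""
    resp = requests.get(url, timeout=15)
    resp.raise_for_status()
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    safe_name = url.replace("://", "_").replace("/", "_")
    out_dir = pathlib.Path(cache_dir)
    out_dir.mkdir(exist_ok=True)
    out = out_dir / f"{safe_name}-{stamp}.html"
    out.write_text(resp.text, encoding="utf-8")
    return out

print(cache_bookmark("https://example.com"))  # e.g., a page just bookmarked
```

It only captures the HTML, not images or scripts, but even that is enough to preserve the substance of most articles against link rot.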

From there I described how I use Diigo (and Twitter) networks to discover new resources and to share references with others, and how I also use a service called “If This Then That” (IFTTT) to automatically tweet the title and URL of any resource that I save in my Diigo library. That led to Chris setting up his own free IFTTT account on the spot and creating his first “recipe,” which automatically tweets the title and URL of each new blog post on his WordPress site. We then launched into a broader discussion of APIs: how applications talk to each other and share information, and why that matters in an open web environment. I asked Chris if he had any experience with APIs, and he mentioned the Steam gaming distribution platform, on which he is quite active. It seems that on Steam, players can develop “mods” (modifications) to existing games that other players can download and add to their version of the game. But sometimes mods conflict, so Steam has developed a way to bundle mods together and determine whether there will be any conflicts among them, or with the base game, before they are downloaded and installed.
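IFTTT hides the plumbing, but a recipe like Chris’s is really just two APIs talking to each other. As a hedged illustration of what happens under the hood (the site URL is a placeholder; modern WordPress installs expose posts at /wp-json/wp/v2/posts), here is roughly what “new blog post, then tweet” amounts to:

```python
import html
import requests

SITE = "https://example.wordpress.site"  # placeholder blog URL

# Ask the blog's REST API for its newest post.
resp = requests.get(f"{SITE}/wp-json/wp/v2/posts",
                    params={"per_page": 1}, timeout=10)
resp.raise_for_status()
post = resp.json()[0]

# Compose the announcement the way the recipe does: title + URL.
title = html.unescape(post["title"]["rendered"])
tweet = f"New post: {title} {post['link']}"
print(tweet)  # an actual recipe would hand this string to the Twitter API
```

Seeing the recipe as two HTTP calls, one reading and one writing, is a nice concrete entry point into what an “open web environment” makes possible.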

As Chris was demonstrating this by showing me his Steam site, I noticed some of the games he was playing, and we started talking about those. Mostly they were historical immersion role-playing games such as Crusader Kings II and Europa Universalis IV. I realized we had an opportunity to open up a whole new space for discussing games and learning, as I recalled two resources dealing specifically with immersive game experiences for learning history at the undergraduate level. First, Reacting to the Past, a classroom simulation experience originating at Barnard College and now used at over 300 colleges and universities in the US and abroad. According to the RTTP website,

Reacting to the Past (RTTP) consists of elaborate games, set in the past, in which students are assigned roles informed by classic texts in the history of ideas. Class sessions are run entirely by students; instructors advise and guide students and grade their oral and written work. It seeks to draw students into the past, promote engagement with big ideas, and improve intellectual and academic skills. Reacting to the Past was honored with the 2004 Theodore Hesburgh Award (TIAA-CREF) for outstanding innovation in higher education.

Second, I called to mind an initiative called Play the Past, which operates from a similar conceptual framework but is more oriented toward actual gameplay rather than classroom simulation. PTP is a community of academics and developers pursuing the study of gaming, learning, and cultural heritage:

Collaboratively edited and authored, Play the Past is dedicated to thoughtfully exploring and discussing the intersection of cultural heritage (very broadly defined) and games/meaningful play (equally broadly defined). Play the Past contributors come from a wide variety of backgrounds, domains, perspectives, and motivations (for being interested in both games and cultural heritage) – a fact which is evident in the wide variety of topics we tackle in our posts.

As we looked at the PTP home page, it just so happened that the latest article featured there was an interview with the developers of a World War I immersive simulation game called Verdun (after one of the iconic battles of that war). It turns out that this is one of Chris’s favorite games; he has played it extensively and has even presented on it at a gaming conference. Interestingly, if you google “Verdun,” the first result is not, as you might have expected, the Wikipedia article on the battle (that’s no. 3), but the page for the game on Steam. So we chatted about how the game places the player inside the experience of WWI, conveying the futility and senselessness of the war alongside historical information about the military strategies and maneuvers, the political dynamics of the conflict, and the cultural and economic contexts in which the war played out.

So now Chris is going to write up a blog post about his experience playing Verdun, complete with a “let’s play” video demonstration documenting his gameplay with commentary. And when the class had started a couple of hours earlier, I had no idea we were going to get here…a completely “emergent” development that we reached by an associative trail. Now it’s not likely that every class will exhibit such a clear sequence of leaps from one idea to the next, but I thought this instance was striking enough to be worth writing about.


Hello #OpenLearning17

I’m excited to join the party at Open Learning: A Connectivist MOOC for Faculty Collaboratives. While I’ve been blogging for the last couple of years at Digital Pedagogy @ Austin College, I’ve inexplicably put off writing and speaking in a space that is truly my own, in a voice that is truly my own. The hypocrisy of it, to be honest, is that I encourage my faculty colleagues and our students to claim their own online domains, to think and write and work in public, to blog and to create in an open connected network. Even though I wasn’t really doing it myself!

For whatever reason, the chance to be a part of this course has broken through my inertia and resistance. Especially after hearing Gardner Campbell’s recent keynote at the OU Academic Tech Expo (video coming soon!) and catching his interviews with Laura Gogia, I felt that now was “the acceptable time” to make the leap. I’m grateful that you guys have opened the course to those of us beyond your fair state of Virginia.

Looking forward to learning with all of you…and welcoming any advice and suggestions for shaping up this here blog and website!

