At ELI last month I had the opportunity to hear Dr. Satya Nitta speak twice on IBM’s Watson AI system (I also got some conference crud that turned into bronchitis, so I’m really late posting this). My first opportunity was in a session sponsored by the EDUCAUSE Leading Academic Transformation group, a more informal Q&A session (which, full disclosure, I helped facilitate); the second was his morning keynote. What struck me was how different these two engagements were and how different the takeaway likely was depending on which session you attended. So this isn’t really a tale of two AIs but rather two different tellings of one AI.
The LAT session was designed to provide a more in-depth look at one of the keynote topics. In an ideal world, it would have come after Dr. Nitta’s keynote, but logistically it needed to be before. My co-facilitator, Thomas Cavanagh, and I had a chance to speak with Dr. Nitta on the phone before the conference and agreed on a format for the group session. We had some pre-reading suggested by Dr. Nitta for folks who were interested (Dawn of the Age of Cognitive Assistants or Chatbots in Education and Limitations of AI). At the session we made sure we had reasonably well-balanced tables and then did a round of three exercises to get people thinking about AI in higher education: the promise of AI, the pitfalls of AI, and some possible campus applications. After each set of table conversations, we asked each table to share something and had Dr. Nitta respond. If you were in that session, you were likely left with an impression of an AI expert who understands the challenges and limitations of AI, and with a powerful sense that humans need to stay involved in any high-stakes decisions or recommendations AIs might make. I thought it was a very invigorating and thoughtful conversation.
The next day was the keynote. Like most keynotes, Dr. Nitta spoke for quite a while from a prepared presentation about the power of AI in education. Unfortunately, it lacked any of the nuanced discussion we heard in the LAT session, and it sparked comments on the back channel like this:
OH GOD NOW WE'RE ON TO WATSON AND SESAME WORKSHOP AND IS NOTHING SACRED #eli2017 "It will be friends with children"
— Dr. Lee Skallerup Bessette (@readywriting) February 14, 2017
It was as if the talk had been authored by someone with a completely different perspective on AI. Whereas our LAT conversation included specific references to the idea of the AI being an advisor to the expert (i.e., AI providing information to advisors to use with students), the keynote lacked any of that. If you went just to the LAT session, you got a sense that there is a place for AI in education, but one that requires thought and continuous human presence. If you went just to the keynote, you got a sense of AI as a replacement for human interaction. If you went to both, you were probably confused (I know I was).
So how do I reconcile this? Well, honestly, I can’t. They were two different conversations with the same person about the same AI. The differences in venue and structure between the two could explain at least some of the disparity, but that feels like a pale answer at best. What I do know is that the LAT session brought some important perspective and in-depth conversation to the issue. I think this is one of the valuable assets LAT brings to the community, and I hope we can do more of these at future conferences.