My cousin Ed is teaching an undergraduate course called “Technology and Society”. A similar course was one of my own grad school favorites, so I was delighted when he invited me to make a guest appearance last term. Preparing to speak with the class led me down a number of fun rabbit holes, including looks at the “human side” of eLearning and at artificial intelligence. Like me, many eLearning Guild community members are technology enthusiasts, and it’s easy to let the allure of a new tool or approach draw us to the next shiny object. The trick, of course, is to be sure that we don’t lose sight of the human.

A few things emerged from our class discussion.

What makes us human? Does being human always matter?

I offered the students a number of screenshots of online chats with retailers and other businesses. They could not tell which were AI-powered chatbots and which were human agents. When I asked them to defend their answers, they said things like, “I think it’s human because there’s an exclamation mark” or “I think that’s the chatbot because the answer seems very mechanical”. In every case, they’d chosen wrong. It seemed hard for them to grasp that the chatbot was simply producing the natural language it had been programmed to use, exclamation marks included. For that matter, as many readers likely know, human agents are often just following a script, too.

We moved on to a conversation about Jill Watson, Georgia Tech’s first AI teaching assistant (TA), who has answered thousands of student questions—from instructor office hours to naming conventions for assignments. Students—this was in an AI course, mind you—did not catch on until well into the first semester she was deployed that she was not a human TA; even then, they still wanted to nominate her for TA of the year. And as to the value of being “human”: Students preferred Jill to a human TA because she didn’t get impatient or snarky.

And we discussed Woebot, the AI conversational agent based on cognitive behavioral therapy. A recent trial showed not only that Woebot was therapeutic, but also that its anonymity was a primary advantage, especially for young people uncomfortable with disclosing a mental health problem.

Wherefore identity?

As the class included many self-described gamers, I wanted to talk with them about research on identity in games, particularly the role of the avatar as “identity”. I’d just read about a conference presentation describing gamers’ practice of gender-swapping with avatars. By a show of hands, I found that 100 percent of the students in the class had at one time or another created an avatar of another gender. Those who identified as male said they found that people were nicer to them when they used a female avatar. Those who identified as female said that in heavily “male” games, like war-based experiences, playing as a male reduced the misogyny they otherwise encountered. There are myriad issues around identity in gaming, along with accompanying questions of social expectations. Additional research is available in the text and in the resources offered in this work from Pamela Livingston.

Another point of discussion: The eLearning Guild’s upcoming DevLearn conference has four keynote speakers; one of them is Sophia the robot. Within minutes of the announcement, someone on social media referred to Sophia as a chatbot with a “plastic sex toy” body. Would a good-looking male robot be described that way? Would we tolerate a human woman being described that way? The class didn’t think so. And back to the conversation about what makes us “human”: At what point do we apply basic social rules to AI-based products like chatbots and conversational agents? What is their identity, even if they are not “human”? After parents complained that Amazon Echo’s Alexa was teaching kids to be rude—letting them simply bark orders into the air—Amazon responded by programming the AI to reward those who say please and thank you to her, er, it. As we look to the future, what behaviors and norms are we reinforcing in our treatment of increasingly human technologies?

CAN we improve on humans?

AI and machine learning can already outperform humans at some tasks: scanning more X-rays in a day than a human might scan in a lifetime, or searching the entire body of US case law in seconds. Physically, we are looking at new technologies that include synthetic blood that promises to pump up a soldier’s speed and endurance, robotic exoskeletons that will enable a human to lift 300 pounds, and bionic eye lenses that promise superhuman vision. Ever-evolving technologies, surgeries, and medical treatments promise to reduce the incidence of blindness and deafness.

SHOULD we improve on humans?

Conversations around the ethics of some technological advances (such as gene editing with the goal of eliminating certain diseases) are fascinating and run longer than this column allows, but we had some interesting discussion about benefits, risks, unanticipated consequences, and tradeoffs. Ultimately, the conversation looped from how much we should modify a person in pursuit of “normality” back to where we started: What, then, does it mean to be human?

This experience helped to inspire this month’s research report, The Human Side of Technology and Learning, in which we explore the ways humans are already being augmented and enhanced; in which accessibility for all moves closer to reality; in which humans may become faster and stronger; in which those of us in L&D will be working on solutions for these employees; and yes, in which we will help to train the software that will displace some human workers.