At the recent DevLearn conference, I walked the Expo Hall. I checked out some folks I was interested in, and of course chatted with those I know. One thing struck me, however. It’s not ubiquitous, but it came up again and again: a lack of real learning science in learning technology products. And that’s a problem.

This isn’t new, I should add. It was at least partly the motivation behind the Serious eLearning Manifesto. That initiative, published in 2014, laid out eight principles that separate serious from traditional eLearning. It was driven by four of us who had walked yet another expo hall and seen shiny new logos and flashy new stands, with no real underlying change.

Drivers of design

The problem is that product design is being driven by customer desires. And I get it: it’s hard to sell something that customers don’t want. But there’s an alternative.

Historically, we moved from hand-crafted learning solutions to tools that made it easy. Whether Authorware (driven by one of the Manifesto authors, Michael Allen) or Flash, tools made it easier to do learning design. The events of 2001 changed all that, however.

Once travel became aversive for a variety of reasons, we started down a path of looking to eLearning as an alternative. Given that the standards for training weren’t high (and, sadly, still aren’t), expectations for savings and efficiency were. And the advent of the ‘rapid eLearning’ tools only accelerated the problem.

By the famous engineering mantra of “fast, cheap, or good; pick two”, good wasn’t a winner. Training, viewed as necessary but fundamentally unvalued, wasn’t going to be the source of a digital revolution. Instead, the desires and promises were to take the PowerPoints and PDFs, get them up on the screen, and add a quiz.

With pressure for efficiency over effectiveness, and a lack of measurement, there wasn’t a driver for aught else. I recall an engagement with a provider who was developing their next-generation platform. I asked, “Who owns the vision?” The response was that this was being done like an internet startup: product managers would compete for resources for their different features. Which may be fine for consumer tools, but not for learning. Not surprisingly, it didn’t become a marketable product. Learning requires expertise that you can’t simply pull from the market; it requires an understanding of the fundamentals.

There has been a drive, however, for ‘learner experience’ (even to the point of making it a buzzword). The phenomenon of learners staying away in droves led to an emphasis on making it ‘fun’. Without an understanding of the difference between trivial and ‘hard’ fun, however, what emerged were templates that tart up what would otherwise be ‘drill and kill’. So, you can get quiz-show game formats, or competitive overlays drawing on sports metaphors. What you don’t have, however, is real learning science.

Sizzle without the steak

Yet what organizations need is not the ability to recite new facts. That can be automated! What’s instead needed is the ability to make better decisions. And to do that, you need tools that make it easy to embed decisions.

Note that the tools can increasingly handle this, but it’s been grafted on; it’s not intrinsic. It’s like adding social capabilities to an LMS: it makes it possible, but it’s not coming from the core DNA.

What learning science would and should promote is concepts, examples, and most importantly, meaningful practice. There should be a rich suite of practice objects aligned to the types of decisions that we’re likely to face.

And that may map to drag-and-drop, and even multiple choice. It’s not to say that you can’t do good learning design in the standard tools; it’s just that it’s not their core focus, and you have to work harder to make it so.

And again, I get it. Why would you abandon familiar approaches that have communities of users available for assistance? When the organizational pressures aren’t for quality learning outcomes, it’s hard to push for investments that bend away from quick conversion.

There’s one other problem that bedevils our field. The differences between learning that’s well designed and well produced, and learning that’s merely well produced, are subtle. I’ve faced numerous instances where I’m forced to explain why a particular design is better. Once I’ve done so, they’re singing the Hallelujah chorus, but having to do so is problematic. We’re not trusted to do what’s right in too many instances! If it looks like school, with content presentation and a quiz, it must be learning, right?

Where do we go wrong?

The problem is that the market is customer-driven. A vendor literally told me that they didn’t look to learning science in building their product. Instead, they relied solely on what their customers said they wanted, gathered through focus groups, surveys, and the like.

Why is this a problem? For one, there’s a documented disconnect between what learners think is good for their learning, and what actually works. We’re running on folk psychology. Similarly, customers ask for what they think they want. And they don’t know what they really need.

Too much of this industry is driven by myths and misconceptions. And, of course, hype. When folks can’t distinguish the nuances of microlearning or workflow learning, we have a problem. When you can still hear talks mentioning learning styles, Dale’s Cone, and other robustly debunked approaches, it’s clear that folks should not be trusting what’s being bandied about.

Is there a role, then, for vendors to emphasize what customers need, not just what they want? It’s a legitimate question, but I think the answer is at least a blend. There’s market education to be undertaken. While vendors might not push their customers to understand better learning (though a case can be made), they should at least make it easy to do good learning design.

There’s a concept in interface design called a ‘forcing function’, where the design makes the right action the easiest (or only) path forward. The notion of a ‘hook’ for behavior change is similar. Imagine a cultural shift in our industry, where we aligned our tools and practices with what’s known about how people learn. What would the outcomes be?

It starts by finding people who know the learning science, and then putting them in positions to influence what’s being done. And yet, it’s been noticeable to me for years that most learning technology companies don’t have anyone who understands learning anywhere near their C-suite. Would changing that help energize learning as an evidence-based profession? It may not be sufficient, but I argue it’s necessary.

To start, get some basic background in learning science. Then hold yourselves, and your vendors, to account. It’s time to put the final touches of professionalism on our profession. Let’s make ourselves proud.