Many designers have read or heard about VR, or have learned to create VR applications and experiences as part of their work. And yet there are critical steps involved in actually getting VR to effectively address skill development. VR is not the same kind of learning experience as watching a video: it is an active experience rather than a passive viewing or demonstration, and assessing the results to determine whether the experience was successful for the learner is a process that must be built into the design model.

How do you include these missing levels of analysis in your course or curriculum development and piloting process? That is the question I took to Tony Bevilacqua, founder and CEO of Cognitive3D.

Introducing Human Performance Analysis

Bill Brandon: I'd like to start by asking about your background in industrial training and in the design of applications for it.

Tony Bevilacqua: My training and experience have really been on the analytics side. At my company we've been focusing on building analytics technology for virtual, augmented, and mixed reality for the last five years. Prior to that I provided very similar capabilities: my last startup focused on the mobile space, understanding how people interact with mobile applications, games, those types of things. We started Cognitive3D in 2015 with an eye on games, entertainment, and VR. Where we found the most value with our software was actually heading down the direction of training simulation and consumer research, and that's where we've taken the tools today.

BB: Are there industries that you’ve focused on?

TB: I’d say things that involve danger and complexity are typical use cases. So oil and gas, utilities, scenarios that involve complex, potentially dangerous situations and expensive equipment, with large setup, large tear-down, and the participation of instructors.

BB: How are the design and development of VR applications for skill development different from other instructional design work?

TB: We're not typically involved with the actual instructional design. We focus on collecting data on human performance in conjunction with immersive content. It's all about collecting data from virtual reality, wrapping measurement around it, and helping our customers understand how well they're doing in these scenarios.

To say that another way, we focus on the actual analytics, the human performance analysis. We don't create content; we work with our customers who are creating the experiences. What we do is analyze the actual work that's being done, the steps that are actually taken in the field, and help build out the scenario in terms of how we should measure it and what the key performance indicators (KPIs) are going to be from an organizational perspective. We don't actually build the experience itself, but we're typically involved in identifying what success means.

Our human performance evaluation tools, SceneExplorer and the Objectives System, are used for data collection and after-action review so that managers can measure results from VR training. In terms of observing users and how they interact with a particular scenario, many of our customers use our data to iterate on that experience over time. Where specific user behaviors don't match expectations, or where particular users fail to understand a particular scenario, the Objectives System allows after-action changes. That means you can re-assess data based on a different set of criteria, without having the employee go back through the experience again. You can change the assessment criteria to understand different perspectives. We definitely act as a facilitator in that particular area.
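
To make that re-assessment idea concrete, here is a minimal sketch in Python. This is not Cognitive3D's actual API; the step names, criteria, and function are all hypothetical. It shows how the same recorded session could be scored against two different sets of criteria without the employee repeating the experience:

```python
from dataclasses import dataclass

# Hypothetical recorded session: ordered step events captured during one
# VR training run, as (step name, seconds elapsed since session start).
recorded_steps = [
    ("don_ppe", 12.0),
    ("isolate_power", 45.5),
    ("open_valve", 88.0),
    ("verify_pressure", 130.0),
]

@dataclass
class Criterion:
    step: str            # step that must appear in the recording
    max_seconds: float   # latest acceptable completion time

def assess(recording, criteria):
    """Re-score an existing recording against a set of criteria."""
    completed = {step: t for step, t in recording}
    return {
        c.step: c.step in completed and completed[c.step] <= c.max_seconds
        for c in criteria
    }

# Criteria used during the pilot.
pilot_criteria = [Criterion("don_ppe", 30), Criterion("open_valve", 120)]
# Stricter criteria adopted later; same recording, no re-run needed.
revised_criteria = [Criterion("don_ppe", 10), Criterion("verify_pressure", 120)]

print(assess(recorded_steps, pilot_criteria))    # {'don_ppe': True, 'open_valve': True}
print(assess(recorded_steps, revised_criteria))  # {'don_ppe': False, 'verify_pressure': False}
```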

BB: Could you give me a thumbnail description of what human performance analysis involves?

TB: We spatially record all the human interactions from VR experiences. We measure positional data, how people are using their hands, how they're using equipment, the things that they're looking at with their eyes (eye tracking). We look at things like biometric sensor data. And we mash those things up into a 3-D visualization that a trainer, the employees themselves, management, whoever, can actually go back and review to see exactly what those users did during the training experience.
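
To give a feel for the kinds of data he is describing, here is a hedged sketch of what one sampled frame of such a spatial recording might look like. The schema is illustrative only, not Cognitive3D's actual data model:

```python
from dataclasses import dataclass
from typing import Optional, Tuple, List

Vec3 = Tuple[float, float, float]  # (x, y, z) in meters, scene coordinates

@dataclass
class TelemetryFrame:
    """One sampled moment of a VR training session (illustrative schema)."""
    timestamp: float                  # seconds since session start
    head_position: Vec3               # where the user's head is in the scene
    left_hand: Vec3                   # controller or tracked-hand positions
    right_hand: Vec3
    gaze_target: Optional[str]        # object the eye tracker reports as looked at
    heart_rate_bpm: Optional[float] = None  # optional biometric sensor data

# A recording is simply a time series of frames; an after-action review
# tool replays these frames inside the original 3-D scene.
session: List[TelemetryFrame] = [
    TelemetryFrame(0.0, (0, 1.7, 0), (-0.3, 1.1, 0.2), (0.3, 1.1, 0.2), "control_panel", 72.0),
    TelemetryFrame(0.1, (0, 1.7, 0.05), (-0.3, 1.2, 0.3), (0.3, 1.0, 0.2), "pressure_gauge", 73.0),
]
```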

The workflow

BB: Let's put this into context. What's the typical workflow? When does the client bring you in? What do you turn over to them, and how do they use that?

TB: Clients typically bring us in about midway through a project. They've already made determinations about which particular scenarios they want to move into virtual, augmented, or mixed reality. They have typically done their initial instructional design about how they could create these experiences, and they have started development on one or more of them. At that stage, our process is to come in, assess the type of work they're trying to do and how it would be most efficiently measured, and then provide a number of tools that they can integrate with their application. It's a lot different than shooting a video. The client needs to have the 3-D assets created; those go into Unity or Unreal Engine. The client needs to build instructional design around those, build the interaction capabilities, all those types of things. We are not involved in those pieces. The client can then add our SDK (software development kit) to their training experience at the very end of that development and get the analytics capabilities I'm talking about. That's our approach. We work with a wide variety of companies doing a wide variety of scenarios, and they can simply pull our tools into their distinct 3-D VR applications.
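
The integration step he describes, adding an analytics SDK at the end of development, typically amounts to a thin layer around the existing training code. The sketch below is conceptual only: it is written in Python for brevity even though real integrations are engine-native (C# for Unity, C++ for Unreal), and none of these class or method names are Cognitive3D's actual API.

```python
# Conceptual shape of an analytics-SDK integration (hypothetical API).
class AnalyticsSession:
    def __init__(self, api_key: str, scene_id: str):
        self.api_key, self.scene_id = api_key, scene_id
        self.events = []

    def begin(self):
        # A real SDK would start capturing head/hand/gaze telemetry here.
        self.events.append(("session_start", 0.0))

    def log_event(self, name: str, timestamp: float):
        # Custom events mark instructionally meaningful moments.
        self.events.append((name, timestamp))

    def end_and_upload(self):
        # A real SDK would batch-upload the recording to a dashboard.
        print(f"uploading {len(self.events)} events for scene {self.scene_id}")

# The training application's own logic stays unchanged; the SDK calls
# are simply added around it late in development.
session = AnalyticsSession(api_key="YOUR_KEY_HERE", scene_id="pump-room-v2")
session.begin()
session.log_event("valve_opened", 88.0)
session.end_and_upload()
```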

BB: The value that you add with the human performance analysis is... what?

TB: We're typically providing answers to these questions: How well did this user do? How compliant were they? How efficient were they in the process? That can be applied to a variety of scenarios, and we provide the answers to those questions in a very efficient way. We can assess with a large variety of sensors by setting up the tools in our dashboard. This gives clients a wealth of analytical data without their having to build those sensor systems from scratch. We license our technology so they don't have to build an assessment platform in 3-D space; they're basically licensing an out-of-the-box assessment tool.
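
As an illustration, compliance and efficiency can both be derived from the same recorded step timeline. The metric definitions below are assumptions for the sketch, not the vendor's formulas:

```python
# Hypothetical recorded step timeline: (step, seconds since start).
recorded = [("don_ppe", 12.0), ("isolate_power", 45.5), ("open_valve", 88.0)]
required_order = ["don_ppe", "isolate_power", "open_valve"]
baseline_seconds = 75.0  # assumed expert completion time for this scenario

def compliance(recorded, required_order):
    """Fraction of required steps completed, provided they occur in order."""
    done = [s for s, _ in recorded if s in required_order]
    in_order = done == [s for s in required_order if s in done]
    return (len(done) / len(required_order)) if in_order else 0.0

def efficiency(recorded, baseline_seconds):
    """Expert baseline time divided by the trainee's elapsed time (capped at 1)."""
    elapsed = recorded[-1][1]  # time of the last completed step
    return min(1.0, baseline_seconds / elapsed)

print(compliance(recorded, required_order))    # 1.0
print(efficiency(recorded, baseline_seconds))  # ~0.85
```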

BB: After the project is finished and you've turned everything over to the client and the client continues with their training, can they use what you've developed to assess the performance of individual learners?

TB: The way the platform works is that it's integrated into the training content, and that content is perpetually delivered. Ideally you build that content and use it for a long period of time, so our platform is involved in individual assessment on a day-to-day basis, and it can be fully integrated with an existing LMS platform. That means results can be programmatically assessed using the Objectives System and put into the LMS for a particular experience; the Objectives System is individually assessing each training session. And then the people doing instructional design, looking at the bigger picture of the scenarios and how effective they are in the field, may go further, such as using after-action review in SceneExplorer to assess how individuals are doing on a one-by-one basis.
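
One common way to push such results into an LMS programmatically is an xAPI (Experience API) statement. The interview does not specify the mechanism, so treat this as an assumed example with a placeholder endpoint and credentials; the statement structure itself follows the xAPI specification:

```python
import requests  # assumes the `requests` package is installed

# Hypothetical LRS/LMS endpoint and credentials.
LRS_URL = "https://lms.example.com/xapi/statements"
AUTH = ("api_user", "api_password")

# Minimal xAPI statement reporting a scored VR training session.
statement = {
    "actor": {"mbox": "mailto:trainee@example.com", "name": "Trainee"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/completed",
             "display": {"en-US": "completed"}},
    "object": {"id": "https://example.com/vr/pump-room-v2",
               "definition": {"name": {"en-US": "Pump Room Shutdown (VR)"}}},
    "result": {"score": {"scaled": 0.85}, "success": True, "completion": True},
}

resp = requests.post(
    LRS_URL, json=statement, auth=AUTH,
    headers={"X-Experience-API-Version": "1.0.3"},
)
resp.raise_for_status()
```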

BB: And all of that would typically be missing from a VR development process.

TB: Definitely. When we first see clients, they are doing pilots or their initial discovery work in an immersive space, and they typically don't have this step in their plan. They may have hard-coded some assessments, or they've done some one-time assessment and put it right into the application. There's no real flexibility there, no capability for change or augmentation. Usually we're brought into projects that are a little more mature: they're looking to scale, and they're looking at getting their pilot results into an LMS in a scalable way. They're looking for a programmatic approach to task evaluation, and that's where we fit.