Tech giants are reeling from months of criticism over security breaches, lax privacy protections, and other damage caused by the use and misuse of their products. It’s often said that science and technology advance so quickly that regulation and ethics cannot keep up. But while some abuses of technology might be hard to anticipate, it is possible to build risk mitigation into design and development processes, enabling ethics to keep pace with technological development. A new “ethics toolkit” seeks to encourage this foresight.

The teams and companies developing and marketing new technologies, particularly those based on artificial intelligence (AI), can focus on potential uses, both beneficial and harmful, and anticipate a broad range of scenarios. Doing so would empower them to build in safeguards against some of the worst possible abuses of their products. A partnership of two organizations is eager to help them do just that with its Ethical OS guide and toolkit.

According to Wired.com, the “new guidebook shows tech companies that it’s possible to predict future changes to humans’ relationship with technology, and that they can tweak their products so they’ll do less damage when those eventual days arrive.” Ethical OS is the product of the Institute for the Future and the Tech and Society Solutions Lab.

Ethical OS guides developers, executives, product managers, and others in “warming up your foresight muscles and kicking off important conversations with your team.” The free kit comprises asynchronous eLearning for developers, a downloadable checklist, a set of scenarios intended to spark discussion about the long-term impact of technologies being developed, and strategies to guide developers toward ethical action to mitigate potential harms.

The kit describes eight “risk zones”; developers can focus on the areas most relevant to their product. The zones are: Truth, Disinformation, and Propaganda; Addiction and the Dopamine Economy; Economic and Asset Inequalities; Machine Ethics and Algorithmic Biases; Surveillance State; Data Control and Monetization; Implicit Trust and User Understanding; and Hateful and Criminal Actors.

L&D teams might, for example, focus on Machine Ethics and Algorithmic Biases, pausing before they rush to automate and incorporate AI into eLearning and performance support to ask whether the AI engines perpetuate discrimination or exacerbate bias. The checklist prompts developers to consider whether there is any recourse or accountability for people affected by the algorithms in their products, and whether those algorithms are transparent or “black boxes.”
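To make that checklist item concrete, here is a minimal sketch, in Python, of one way a team might audit an AI-driven recommender for group-level bias. The audit log, group labels, helper names, and the 80 percent “four-fifths” threshold are illustrative assumptions for this example, not part of the Ethical OS toolkit itself.

```python
# Sketch: check whether a recommender's positive outcomes are
# distributed very unevenly across demographic groups.
from collections import defaultdict

def selection_rates(records):
    """Rate at which each group receives a positive outcome."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for group, selected in records:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate."""
    return min(rates.values()) / max(rates.values())

# Hypothetical audit log: (demographic group, was the learner
# recommended for the advanced track?)
log = [("A", True), ("A", True), ("A", False),
       ("B", True), ("B", False), ("B", False)]

rates = selection_rates(log)
ratio = disparate_impact_ratio(rates)
print(rates, round(ratio, 2))
if ratio < 0.8:  # the common "four-fifths" rule of thumb
    print("Warning: outcomes differ sharply across groups; review the model.")
```

A check like this does not prove an algorithm is fair, but running it routinely gives a team the kind of early warning, and the paper trail for accountability, that the checklist is asking about.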

Additionally, the section on Data Control and Monetization is relevant to L&D teams that collect data about learners. Thinking through what information is collected and how it is used can improve the learner experience, strengthen security, and support compliance with GDPR and other data privacy regulations.
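As an illustration, the sketch below shows a data-minimization pass over a learner record before it reaches analytics: keep only the fields the analysis needs and replace the direct identifier with a salted one-way hash. The field names, salt handling, and hashing scheme are assumptions for the example, not requirements of the toolkit or of GDPR.

```python
# Sketch: minimize and pseudonymize a learner record before analytics.
import hashlib

ANALYTICS_FIELDS = {"course_id", "score", "completion_time"}  # only what's needed

def pseudonymize(learner_id: str, salt: str) -> str:
    """Replace a direct identifier with a salted one-way hash."""
    return hashlib.sha256((salt + learner_id).encode()).hexdigest()[:16]

def minimize(record: dict, salt: str) -> dict:
    """Drop fields analytics does not need; pseudonymize the identifier."""
    reduced = {k: v for k, v in record.items() if k in ANALYTICS_FIELDS}
    reduced["learner"] = pseudonymize(record["learner_id"], salt)
    return reduced

raw = {"learner_id": "jdoe@example.com", "name": "J. Doe",
       "course_id": "SEC-101", "score": 92, "completion_time": 3600}
print(minimize(raw, salt="rotate-me-regularly"))
```

Deciding up front which fields belong in `ANALYTICS_FIELDS` is exactly the kind of design-time conversation the Data Control and Monetization checklist is meant to provoke.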

The Institute for the Future is a 50-year-old nonprofit organization that counts foresight training among its core missions. Its researchers create tools, including massively multiplayer online games, and offer guidance to governments, businesses, and nonprofits in anticipating and preparing for future dilemmas and scenarios.

The Tech and Society Solutions Lab is a project of Omidyar Network, which invests in pro-social entrepreneurship while also seeking to mitigate the unintended harmful consequences of emerging technologies.

“We can’t predict the future. But that shouldn’t mean we can’t systematically build safeguards against future risk directly into our design and development processes,” Paula Goldman and Raina Kumra of the Tech and Society Solutions Lab wrote in their announcement of the toolkit’s release on August 7.

For L&D teams, Ethical OS takes much of the pain out of ethical design and development, providing a toolkit that makes it easy to build foresight into content planning, hold ethics-focused discussions, and work through a checklist to ensure that products anticipate and mitigate potential abuses of technology that could harm learners or degrade the learner experience.