Today's post is courtesy of board member Alexandra Levit. Alexandra's new book, Humanity Works: Merging Technologies and People for the Workforce of the Future, is available now.
There's a lot of hand-wringing these days around the idea that ever-smarter computers will soon render human employees obsolete. There are many reasons this is unlikely to happen, but one of the most important is that humans and machines are simply better together.
In Cognitive Collaboration, a Deloitte University Press paper, authors Jim Guszcza, Harvey Lewis, and Peter Evans-Greenwood reminded us of what the pioneering computer scientist and psychologist J. C. R. Licklider argued about artificial intelligence back in 1960, in his paper “Man-Computer Symbiosis.” “Rather than speculate about the ability of computers to implement human-style intelligence, Licklider believed computers would complement human intelligence,” wrote the Deloitte authors. “He argued that humans and computers would develop a symbiotic relationship, the strengths of one counterbalancing the limitations of the other.”
How might this work? Well, we already see it today when we use apps like Google Translate and Waze. Humans specify the goals and criteria, and algorithms do the heavy lifting with the data, surfacing the most relevant insights and options to aid decision-making. In many cases we've already identified the sweet spot where artificial intelligence adds the most value: a routine task performed over a large data set.
But when it comes to a novel situation or problem, you need a human to formulate hypotheses and decide which ones to test because, as the Deloitte authors pointed out, algorithms lack the conceptual understanding and commonsense reasoning needed to do anything more than make inferences from structured hypotheses. Human judgment is absolutely required to keep algorithms and their output in check.
Cognitive scientists might like to think that AI decision-making processes are modeled on human ones, but this is far from the case. Artificial minds are not only less biased; they also don't fatigue, they apply consistent effort regardless of circumstance, they can pull the most relevant ideas out of Big Data systems in mere seconds, and they can examine so many sources simultaneously that making an accurate prediction about a future situation is a piece of cake.
Notice, though, that I said less biased, not unbiased. The Deloitte authors cautioned us to avoid outsourcing tasks associated with fairness, societal acceptability, and morality to AI systems. Algorithms cannot be assumed to be fair or objective simply because they use hard data, and oversight is required. “Recent examples of algorithmic bias include online advertising systems that have been found to target career-coaching service ads for high-paying jobs more frequently to men than women, and ads suggestive of arrests more often to people with names commonly used by black people,” the Deloitte paper shared. And: “If the data used to train an algorithm reflect unwanted pre-existing biases, the resulting algorithm will likely reflect, and potentially amplify, these biases.”
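To make the “bias in, bias out” point concrete, here is a minimal, hypothetical sketch (not from the Deloitte paper): we fabricate a synthetic hiring dataset in which past decisions penalized one group at equal skill, then train a simple classifier on it. The model learns and reproduces the historical gap. Every name and number below is invented purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)   # 0 = group A, 1 = group B (synthetic)
skill = rng.normal(0, 1, n)     # skill identically distributed in both groups

# Historical decisions embed the "unwanted pre-existing bias": at equal
# skill, group B was systematically penalized.
hired = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Probe the trained model with two identical candidates who differ only
# in group membership: it has learned the historical penalty.
probe = np.array([[0.0, 0.0], [0.0, 1.0]])
print(model.predict_proba(probe)[:, 1])  # group B scores markedly lower
```

The hard data here are perfectly accurate records of what happened; the model is still unfair, which is exactly why the human oversight the paper calls for, such as auditing score gaps between otherwise identical candidates, has to be designed in deliberately.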
In other words, the solution is not to trust that AI will eliminate bias on its own, but to teach humans how to recognize and correct it so we can mold smart machines in our own new and improved image. And this means, of course, that we still have our jobs.