
Too often, engineers are brainwashed into thinking they can create an impeccable artificial intelligence (AI) model — a blank slate they release into the wild for independent learning. They think: “If I create flawless math on top of the right infrastructure, I’ll have the perfect model.” Train the algorithm, let it run free, and that’s the end of the story, right?
Unfortunately, no. Just like human intelligence, artificial intelligence requires continuous learning to advance its expertise.
Here’s what to do instead.
Train, test, validate, repeat
Training a commercially applied AI is not a one-and-done exercise. It requires regular validation to confirm the AI is working as it should. Otherwise, you’re practically begging for bias to worm its way in. Look no further than the troubling example of an AI designed to predict criminal recidivism that turned out to be biased against Black people. And who can forget the now-infamous fiasco that was Microsoft’s Tay, a well-intentioned chatbot experiment that quickly soured? These and countless other examples underscore the need for continuous human validation of AI to keep it on its intended trajectory.
In addition to mitigating bias, human validation helps AI keep up with changing knowledge. Take language, for example. The meanings of words constantly evolve. As the father of a teenager, I can personally attest that by the time a new slang term goes mainstream (“lit!”), a trendier alternative has already replaced it (“savage!”). If the only education we give chatbots is the initial data sets we train them on, how will they keep up with the changing ways people talk to them? Like human intelligence, the only way artificial intelligence can adapt to accommodate a growing body of knowledge is if we continually…