Even the engineers who build our most complicated software are sometimes baffled by how it works, and, more frighteningly, by how it breaks.
Remember a few months ago when Microsoft released a friendly AI chatbot named Tay, designed to converse like a 19-year-old? It was a social and marketing experiment that quickly morphed into a public-relations nightmare. Within a day, bombarded by hateful Twitter trolls, Tay had turned into a white supremacist, tweeting racist and offensive statements, and Microsoft had to shut it down.