One thing Artificial Intelligence can’t be is prejudiced. It should be impossible; machines don’t suddenly decide to hate, they’re all about the facts. But what if the people programming them are prejudiced themselves? A disturbing new report in Science reveals that some are inadvertently doing just that.

Who remembers Microsoft’s Tay, the 2016 chatbot designed to ape the verbal machinations of a 19-year-old American girl? The high-minded idea behind it was, according to Microsoft, to “conduct research on conversational understanding.” But within hours of launching, Tay was claiming that 9/11 was an inside job, that Hitler was right, and that it agreed with Trump’s stance on immigration.