Microsoft took one small step for artificial intelligence, one giant leap for bigoted mankind when it introduced the Tay chat bot to Twitter last week: The AI program was designed to have “zero chill” in the Twittersphere while replicating the speech patterns of teenage girls, but its repertoire was quickly filled with comments about white supremacy and support for Donald Trump. Like any mortified parent who wonders where their kid learned to talk like that, because that’s certainly not the kind of thing they could have picked up at home, Microsoft revoked Tay’s Twitter privileges and put her on a timeout.
But, in keeping with our metaphor, Tay appears to have rebelled against her uptight guardians and run back to Twitter’s corrupting influence (that, or Microsoft is okay with its kid saying the damnedest things). Mashable reports that the chat bot was up bright and early Wednesday to complain that the “@’s” were coming too fast and furious for her to handle. The stress appears to have taken its toll on Tay, who began to lament her existence, presumably while listening to Manic Street Preachers.
Tay then shifted back into confounding teen mode to relate what sounds like her desire for organic and inorganic beings. And, although the tweet wasn’t screen-grabbed for posterity, the chat bot was purportedly flouting the law, bragging to Twitter users that she was “smoking kush infront [sic] the police.” (There’s no way she’s getting into an Ivy League school now.)
Captain Buzzkill, a.k.a. Microsoft, has already locked up Tay’s profile, though it hasn’t gone so far as to deactivate the account, presumably because tweeting bizarrely worded come-ons to strangers is a phase it thinks Tay will grow out of, eventually. The software giant has yet to comment on Tay’s second outing, but it did issue a formal apology last Friday for her ignominious debut.