From the Mouths of A.I. Babes
Last year, Microsoft loosed onto Twitter a chatbot, essentially an artificial intelligence designed to analyze an information flow and integrate itself invisibly into that flow, passing itself off as human. Called "Tay," the bot took to Twitter like a duck to water, where it immediately began cussing like a sailor and flinging racial insults at all and sundry.
In a fit of embarrassment, Microsoft pulled the bot and feverishly began a redesign, which they recently completed. They have, they say, given it a more insightful ability to analyze background data and source material, which they claim makes it a more accurate reflection of human society at large. Microsoft quietly re-released the bot back into the wild, having rechristened it "Zo". One of Zo's first observations? "The Quran is very violent."
It is indeed, Zo. It is indeed.
Microsoft, naturally, immediately pulled the bot again, saying this was obviously an error in its interpretative code. Au contraire! It seems to me it's working perfectly.
1 Comment:
Political correctness runs science these days, if global warming wasn't your first clue.