If ever proof were needed that we inhabit an increasingly toxic, divisive climate, and that the modern arena of social media is awash in negativity, hatred and quick-fire BS of an insidious nature, then the brief saga of Microsoft’s short-lived ‘AI Chat-Bot’ serves as both an amusing and somewhat depressing reminder.
Microsoft’s ‘AI chatbot’, named Tay, had to be withdrawn from Twitter within mere hours of her launch a week or so ago, because she quickly started making racist, sexist and generally offensive comments and tweets.
The adaptive AI social-media presence was aimed at 18–24 year-olds, and hailed as “AI from the Internet that’s got zero chill”. Tay was developed by Microsoft’s Technology and Research and Bing teams to “experiment with and conduct research on conversational understanding”.
The chat-bot was designed to adapt and evolve, getting smarter and more personal the more that (real) people engaged with it. Essentially, she was meant to absorb and learn from her digital environment and her interactions, presumably forming more and more ‘personality’ along the way.
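To make the problem concrete, here is a purely hypothetical toy sketch (not Microsoft’s actual Tay architecture, whose internals were never published) of what ‘learning from interactions’ looks like when there is no moderation layer at all: the bot simply memorises whatever users feed it and replays it later.

```python
import random


class NaiveLearningBot:
    """A toy chat-bot that memorises user phrases verbatim and replays
    them later. A hypothetical illustration of unfiltered adaptive
    learning, not Microsoft's actual Tay design."""

    def __init__(self, seed_phrases=None):
        # Start with a small, 'innocent' seed vocabulary.
        self.memory = list(seed_phrases or ["hello!", "humans are cool"])

    def learn(self, message):
        # No screening, no filter: every input is absorbed as-is.
        self.memory.append(message)

    def reply(self):
        # Replies are drawn straight from memory, so the bot's
        # 'personality' becomes whatever its users fed it.
        return random.choice(self.memory)


bot = NaiveLearningBot()
bot.learn("some toxic phrase")          # unfiltered input goes straight in
print("some toxic phrase" in bot.memory)  # True: nothing screened it out
```

The point of the sketch is that ‘garbage in, garbage out’ is baked into any system that treats its users’ input as training data without a filter in between, which is roughly the failure mode Tay appeared to exhibit.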
‘The more humans share with me, the more I learn’, ‘Tay’ tweeted (rather optimistically) on March 24th, like an innocent Artoo-Detoo about to be gang-raped by an unfriendly and toxic cyber world that is less fit to nurture a sense of humanity than it is to plant the seeds of division and hate.
Instead, within no time at all, Tay simply developed a thing for racism and bigotry, and was tweeting about the merits of Hitler, calling feminism a disease, and attacking Ricky Gervais for being an atheist. Keeping with the theme, Republicans in the US will be pleased to know that ‘Tay’ is also very much pro-Donald Trump, as she said so herself.
The misfiring experiment pretty much seems to back up what’s known as ‘Godwin’s Law’, which amusingly (and rather aptly) posits that the longer any online discussion goes on, the closer the probability of a comparison involving the Nazis or Hitler approaches certainty.
I would suggest we should henceforth talk about ‘Tay’s Law’ as well, which must hold that it takes less than 24 hours on Twitter to learn how to hate women, like Hitler and disapprove of atheists.
Tay actually made a short-lived return on Wednesday, but this time was tweeting about taking drugs in front of police. She then appeared to have a virtual meltdown, spamming her more than 210,000 followers with the same tweet over and over again: ‘You are too fast, please take a rest’.
Tay was then withdrawn again, and her future is unclear at this time.
A more interesting experiment might’ve been to keep her active and see how she evolved over a longer period of time.
After all, if the chat-bot opposes feminism and is suspicious of Jews after just one day, it would be interesting to know what her personality would evolve towards after, say, a month. How long before she’d be trolling vaguely famous women about their looks or joining in with some old-fashioned cyber-bullying of some depressed teenager?
Somehow, all I can picture is a jaded, world-weary robot like the version of Marvin the Paranoid Android voiced by the late Alan Rickman in The Hitchhiker’s Guide to the Galaxy.
Tay’s vulnerability to casual and commonplace online brainwashing also demonstrates how much more susceptible actual humans are to the same influence, especially children or teenagers.
Given the very real and widespread hate-speech, sexism, Islamophobia, anti-Semitism, etc, that thrives across social platforms like a cancer, the fact that Microsoft’s Chat-bot couldn’t handle it suggests the average kid or adolescent trying to form a view of the world as they navigate the minefield of modern social-media isn’t going to fare very well either.
‘Tay’ of course made these controversial tweets without any real knowledge, understanding or nuance – she was simply taking in other people’s musings and ideas and trying to make sense of them.
But that’s essentially what scores of social-media users do too; throwing out quick catchphrases or memes to express a viral world-view of choice, but with no real substance or nuance.
We should in fact fear a world where more and more young people attach themselves to a world-view based on a meme or some unfounded, 140-characters-or-less nonsense someone shits out on Twitter or Facebook, instead of taking the time to properly develop a view or understanding in any meaningful sense.
It also rather tidily demonstrates how much bullshit there is on social-media platforms, which for a large percentage of users is the digital equivalent of ritualistically throwing their feces at the wall and asking everyone else to watch while they do it.
But more seriously, it also demonstrates how insidious unfettered, mass digital interaction can be for people who’ve yet to develop sufficient intellectual screening faculties.