AI for the Straight Guy: Don’t believe the headlines
Provocative headlines about AI are everywhere these days, but scratch beneath the surface and you'll find little more than clickbait.
Always read past the headline
AI has been called racist, sexist, homophobic and psychopathic in hyperbolic headlines so often that you’d be forgiven for believing it was some terrifying technology that will discriminate across all possible social and demographic lines. Here are some snippets of recent articles about Artificial Intelligence – within a few paragraphs, you learn the real reason behind the headline:
- Amazon scrapped ‘sexist AI’ tool – “it was clear that the system was not rating candidates in a gender-neutral way because it was built on data accumulated from CVs submitted to the firm mostly from males”.
- A beauty contest was judged by AI and the robots didn’t like dark skin - “the result was flawed because the data set used to train the AI (artificial intelligence) had not been diverse enough.”
- Google’s new AI bot thinks gay people are bad – “It seems that the system had biases programmed into its training data.”
The media are well versed in producing exaggerated and misleading stories, and headlines like these aren’t limited to Artificial Intelligence, but I wanted to focus on this area because it is important not to let such stories distort your understanding of AI technology.
My chocolate cake tastes like chocolate?
They say you are what you eat but, in the case of AI, you are what you are programmed to be. If you make a cake and chocolate is on your list of ingredients, don’t be surprised when it tastes like chocolate. By the same token, if you put bad data into your AI tool, don’t be surprised when you get bad data out. Unfortunately, ‘Programmers used poor data and got poor results’ doesn’t make for a snappy or eye-catching headline.
Many of the biases found in these algorithms aren’t placed there intentionally – most emerge over time from repeated patterns in the data. The perception is that algorithms start off impartial and neutral and somehow become racist, sexist and so on by themselves. In reality, they are amplifying prejudices that already exist in the technology industry and in society as a whole. AI trained on bad data can itself turn bad. After all, “Machine Learning” is just a fancy way of saying “finding patterns in data”.
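To make that concrete, here is a deliberately tiny, hypothetical sketch of "finding patterns in data" gone wrong. It is not Amazon's actual system – all the CVs and the scoring rule are made up – but it shows the mechanism: a naive screener that simply learns which words appear in historically accepted versus rejected CVs will penalise a word like "women's" if it only ever appeared in a rejected CV, with no programmer ever writing a biased rule.

```python
from collections import Counter

# Made-up training data: historically accepted and rejected CV snippets.
# Note the skew: the only CV mentioning "women's" happens to be rejected.
accepted = [
    "captain of the chess club, software engineer",
    "software engineer, led the robotics team",
    "software developer, chess club member",
]
rejected = [
    "software engineer, captain of the women's chess club",
]

def tokenize(cv):
    """Split a CV snippet into lowercase words, dropping commas."""
    return cv.replace(",", "").lower().split()

def train(accepted, rejected):
    """Naive per-word score: +1 per appearance in accepted CVs, -1 per rejected."""
    good = Counter(w for cv in accepted for w in tokenize(cv))
    bad = Counter(w for cv in rejected for w in tokenize(cv))
    return {w: good[w] - bad[w] for w in set(good) | set(bad)}

def score(model, cv):
    """Score a new CV by summing the learned word scores."""
    return sum(model.get(w, 0) for w in tokenize(cv))

model = train(accepted, rejected)

# Two otherwise identical CVs: the one mentioning "women's" scores lower,
# because that word only ever co-occurred with rejection in the data.
print(score(model, "software engineer, women's chess club"))  # prints 4
print(score(model, "software engineer, chess club"))          # prints 5
```

The bias here lives entirely in the training data: change the historical examples and the same code produces a different verdict.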
Fixing the issue
Creating unbiased AI software may not be as simple as getting the code right. Experts argue that the solution instead requires a number of significant changes across the industry. We explore what these might be in the second part of this post, which you can read here.