“This post was written for you by our state-of-the-art Artificial Intelligence”
What do you think when you read that sentence?
If you have some knowledge of AI, chances are you know that statistical and mathematical methods were used to complete the task of writing the post. You know that we’re not at the stage of having general artificial intelligence. As my colleague Helmi put it in her great article, “The true magic of Machine Learning is that it’s exceptionally great at pattern recognition, but not so talented in understanding if those patterns should be amplified further or not.”
If you are an average citizen, however, who gathers their information on AI from media and culture, your imagination may conjure a very different image. Maybe you are excited that an intelligent being understands your needs and interests and creatively writes something for you. Maybe you feel anxiety that your copywriting position will be taken over by AI, along with many other jobs in the current market. Or you fear that this takes us closer and closer to the scenario of countless sci-fi stories, where robots take over humanity.
Whichever images come to mind, you have nothing to be ashamed of. It’s understandable, given the information circulating around us. In fact, as Kate Crawford mentioned in her lecture, the word “intelligence” itself evokes the image of something more than just a tool designed for a specific task. The word, together with the cultural baggage around it, can easily mislead users of AI services into expecting unrealistic scenarios, and let others avoid responsibility for “AI’s” actions.
From various discussions, I see how this phenomenon poses a dilemma for the owners, designers and developers of AI services and products. How should they be transparent about using AI?
Well, so far I see three options:
1) Keep quiet that you are using AI.
… or, actually, please don’t do that. There are documented cases where opaque use of AI caused public distrust and harm. Most expert guidelines will tell you to be transparent. So let’s try another one:
2) Tell your users that the service uses Artificial Intelligence and…
As noted above, merely mentioning AI use can lead to misunderstandings. Try to describe in plain terms what AI means in this product, what it does, and what it cannot do. Or, even better, help educate society about AI.
3) Use another name for AI.
A colleague of mine suggested the term “smart algorithm”. But does that actually move us forward? The word “algorithm” seems to have its problems too. What about the term coined by Sami Kaski at FCAI’s AI Day 2020? Consider this:
“This post was written for you by our state-of-the-art Artificial Stupidity”
This post was originally published on our Aalto course: Critical AI and Data Justice.