Citizen Trust through AI Transparency

May 2019 – Oct 2019
Main tasks: Leading user research, prototyping, and creating design guidelines
Role: Lead UX Researcher
Company: Saidot
The problem
AI brings great opportunities to the public sector, but it can also harm society when used unethically. We can already observe cases of opaque and erroneous AI in the public sector that have decreased citizen trust. The Finnish startup Saidot addresses this problem by bringing transparency, fairness and accountability to AI services.
My actions
I led the citizen involvement in the study. I planned, organized and conducted 21 in-depth interviews, a design workshop and user testing with Finnish citizens and residents, and collaborated with designers from other organizations along the way. I then analyzed, documented and communicated the results to the team and project stakeholders. Throughout the work, I used various co-design strategies and qualitative data analysis methods, supported by tools such as Atlas.ti. The detailed methodology can be found in my master's thesis.
Results
As a result, we created guidelines to help public institutions design and develop public AI services. We also shared the documented findings in the form of citizen personas and tables summarizing citizens' transparency needs and concerns. The project stakeholders praised the results highly. The work helped build a bridge of understanding between public officials and citizens, and Saidot used it to develop its transparency model. In 2020, Saidot helped the cities of Helsinki and Amsterdam publish AI registers intended to increase the transparency of AI services for citizens.