Update: Workshop proceedings published
This post is an adaptation of an invited presentation given at the Workshop on Artificial Intelligence and Civil Society Participation in Policy-Making Processes in December 2020, hosted by the Global Studies Institute, University of Geneva.
Under what conditions can artificial intelligence support the participation of civil society in policy-making?
AI is part of the larger development of digitization. As such, the digital competencies of the population need to be further strengthened if we want people to contribute autonomously to political and social life (see BAKOM, 2018 for the Swiss strategy; Carretero et al., 2017; Büchi & Vogler, 2017). There is a common perception that “everyone” uses the internet, but many digital inequalities remain, and insofar as AI is another step in the process of digitization, it will tend to reproduce unequal access and outcomes. For instance, whereas more than 90% of the Swiss population use the internet, there are still large age gaps between younger and older adults in mobile internet use, a more recent technological innovation (see https://mediachange.ch/research/wip-ch-2019). From a theoretical perspective on technology adoption and society as a communication system, the acquisition of new knowledge and skills is often proportional to what people already have and know, which exacerbates social inequalities over time (Büchi, 2017; Rogers, 2003; Tichenor et al., 1970). When information flow or innovation increases – as is the case with AI developments – disadvantaged groups keep falling behind and are asked by elites and policy-makers to “catch up”.
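The proportional-growth mechanism behind this claim (Büchi, 2017; Rogers, 2003; Tichenor et al., 1970) can be made concrete with a minimal simulation; the function name, starting values, and growth rate below are hypothetical, chosen only to illustrate the dynamic, not drawn from any of the cited studies:

```python
def simulate_gap(start_low: float, start_high: float,
                 growth_rate: float, periods: int) -> list[float]:
    """Return the absolute skill gap after each period, assuming each
    group's gain is proportional to its current level (a Matthew effect)."""
    low, high = start_low, start_high
    gaps = []
    for _ in range(periods):
        low *= 1 + growth_rate   # disadvantaged group also improves...
        high *= 1 + growth_rate  # ...but the advantaged group gains more
        gaps.append(high - low)
    return gaps

gaps = simulate_gap(start_low=10, start_high=20, growth_rate=0.1, periods=5)
# Both groups improve, yet the absolute gap widens every period.
assert all(later > earlier for earlier, later in zip(gaps, gaps[1:]))
```

The point of the sketch is that no group has to stagnate for inequality to grow: identical relative gains on unequal starting positions are enough to widen absolute differences.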
Digital inequality research generally discerns three levels: the first is access to the digital technology in question; the second is having the requisite usage skills and engaging in the various activities the technology affords; the third concerns tangible outcomes such as finding relevant information (van Deursen & Helsper, 2015; Büchi et al., 2018). Ultimately – sometimes within a lifetime, sometimes intergenerationally – these outcomes feed back into the positions individuals occupy in the social structure, which determined access in the first place: social position markers like education are associated with the access to and use of digital innovations (van Dijk, 2020). There is no reason to assume this basic mechanism of technology effects in society does not apply to AI. However, compared to, for example, the diffusion of smartphones, AI shows clearly that what is at stake is not just a technical artifact; AI is a whole assemblage of knowledge and values (e.g., Cath, 2018; Jobin et al., 2019; Sujon & Dyer, 2020), requiring a range of competencies to put its myriad applications to good use (as well as a social negotiation of what “good” use means).
If the governance of AI ultimately aims at quality of life (see SEFRI, 2020 for Switzerland), the effects of AI applications will be embedded in an existing social, cultural, and political structure and will interact with all other social processes. If the benefits increase through more widespread adoption of AI applications, the harms will also tend to increase: it is very difficult to intervene selectively in this cycle and find policies that reduce harms without also reducing benefits. AI at various levels may well increase efficiency and welfare, but the same technologies may lead to privacy breaches and have detrimental effects on other aspects that we have decided to value as a society. Just as books can support education or social media can support political participation (that is, not by themselves), there will be many undoubtedly beneficial effects of AI. Yet, as argued, the benefits may tend to advantage the already advantaged, as has been the case with many technological developments. Implementations of AI to support civil society’s participation in policy-making processes can thus only make sense as one of many tools, accompanied by initiatives that address long-standing social inequalities. Successful expansion of participation in policy-making cannot rely on technological solutionism and needs to consider the socially unequal preconditions for, and externalities of, AI applications.
BAKOM Bundesamt für Kommunikation. (2018). Strategie Digitale Schweiz. http://www.infosociety.ch
Büchi, M. (2017). Digital inequalities: Differentiated Internet use and social implications [Doctoral dissertation, University of Zurich]. https://doi.org/10.5167/uzh-148989
Büchi, M., & Vogler, F. (2017). Testing a digital inequality model for online political participation. Socius, 3, 1–13. https://doi.org/10.1177/2378023117733903
Büchi, M., Festic, N., & Latzer, M. (2018). How social well-being is affected by digital inequalities. International Journal of Communication, 12, 3686–3706. http://ijoc.org/index.php/ijoc/article/view/8780
Carretero, S., Vuorikari, R., & Punie, Y. (2017). DigComp 2.1: The Digital Competence Framework for Citizens with eight proficiency levels and examples of use (JRC106281). Publications Office of the European Union. https://doi.org/10.2760/00963
Cath, C. (2018). Governing artificial intelligence: Ethical, legal and technical opportunities and challenges. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 20180080. https://doi.org/10.1098/rsta.2018.0080
Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399. https://doi.org/10.1038/s42256-019-0088-2
Rogers, E. M. (2003). Diffusion of innovations (5th ed.). Free Press.
SEFRI Secrétariat d’État à la formation, à la recherche et à l’innovation. (2020). Intelligence artificielle. https://www.sbfi.admin.ch/sbfi/fr/home/bfi-politik/bfi-2021-2024/transversale-themen/digitalisierung-bfi/kuenstliche-intelligenz.html
Sujon, Z., & Dyer, H. T. (2020). Understanding the social in a digital age. New Media & Society, 22(7), 1125–1134. https://doi.org/10.1177/1461444820912531
Tichenor, P. J., Donohue, G. A., & Olien, C. N. (1970). Mass media flow and differential growth in knowledge. The Public Opinion Quarterly, 34(2), 159–170. https://www.jstor.org/stable/2747414
Van Deursen, A., & Helsper, E. J. (2015). The third-level digital divide: Who benefits most from being online? In L. Robinson, S. R. Cotten, J. Schulz, T. M. Hale, & A. Williams (Eds.), Studies in Media and Communications (Vol. 10, pp. 29–52). Emerald. https://doi.org/10.1108/S2050-206020150000010002
Van Dijk, J. (2020). The digital divide. Polity.