Perm University Herald. Juridical Sciences. 2021. Issue 3 (53)

Title: ARTIFICIAL INTELLIGENCE'S ALGORITHMIC BIAS: ETHICAL AND LEGAL ISSUES
Authors:

Yu. S. Kharitonova, Lomonosov Moscow State University

ORCID: 0000-0001-7622-6215
ResearcherID: K-7495-2016
Articles in «Scopus» & «Web of Science»:
DOI: 10.17072/1995-4190-2019-43-121-145
DOI: 10.17072/1995-4190-2020-49-524-549
DOI: 10.17072/1995-4190-2016-34-451-460

V. S. Savina, Plekhanov Russian University of Economics

ORCID: 0000-0002-8385-9421
ResearcherID: G-2782-2014
Articles in «Scopus» & «Web of Science»:
DOI: 10.17072/1995-4190-2020-49-524-549

F. Pagnini, LOYTEC Electronics GmbH

ORCID: 0000-0003-4618-0740
ResearcherID: AAU-6991-2021
Articles in «Scopus» & «Web of Science»: ---
Requisites: Kharitonova Yu. S., Savina V. S., Pagnini F. Predvzyatost' algoritmov iskusstvennogo intellekta: voprosy etiki i prava [Artificial Intelligence's Algorithmic Bias: Ethical and Legal Issues]. Vestnik Permskogo universiteta. Juridicheskie nauki – Perm University Herald. Juridical Sciences. 2021. Issue 53. Pp. 488–515. (In Russ.). DOI: 10.17072/1995-4190-2021-53-488-515
Annotation:

Introduction: this paper focuses on the legal problems of applying artificial intelligence technology to solving socio-economic problems. The convergence of two disruptive technologies, Artificial Intelligence (AI) and Data Science, has brought about a fundamental transformation of social relations in various spheres of human life. A transformational role was played by classical areas of artificial intelligence such as algorithmic logic, planning, knowledge representation, modeling, autonomous systems, multi-agent systems, expert systems (ES), decision support systems (DSS), simulation, pattern recognition, image processing, and natural language processing (NLP), as well as by special areas such as representation learning, machine learning, optimization, statistical modeling, mathematical modeling, data analytics, knowledge discovery, complexity science, computational intelligence, event analysis, behavior analysis, social network analysis, deep learning, and cognitive computing. These AI and Big Data technologies are used in various business spheres to simplify and accelerate decision-making of different kinds and significance. At the same time, self-learning algorithms create or reproduce inequalities between participants in circulation and lead to discrimination of all kinds because of algorithmic bias.
Purpose: to define, based on the analysis of Russian and foreign scientific concepts, the areas and directions of legal regulation of algorithmic bias in the application of artificial intelligence.
Methods: empirical methods of comparison, description, and interpretation; theoretical methods of formal and dialectical logic; special scientific methods such as the legal-dogmatic method and the method of interpretation of legal norms.
Results: artificial intelligence has many advantages (it allows us to improve creativity, services, and lifestyles, to enhance security, and to solve various problems), but at the same time it causes numerous concerns because of its potentially harmful effects on individual autonomy, privacy, and fundamental human rights and freedoms. Algorithmic bias exists even when the algorithm developer has no intention to discriminate, and even when the recommendation system does not accept demographic information as input: even in the absence of this information, by closely analyzing the similarities between products and users, the algorithm may recommend a product to a very homogeneous set of users (see the illustrative sketch following the annotation). The identified problems and risks of AI bias should be taken into consideration by lawyers and developers and should be mitigated to the fullest extent possible, both when developing ethical principles and requirements and in the field of legal policy and law at the national and supranational levels. The legal community sees an opportunity to address algorithmic bias through various kinds of declarations, policies, and standards to be followed in the development, testing, and operation of AI systems.
Conclusions: if left unaddressed, biased algorithms could lead to decisions with a disparate collective impact on specific groups of people even without the programmer's intent to make such a distinction. Studying the anticipated and unintended consequences of applying AI algorithms is especially necessary today because current public policy may be insufficient to identify, mitigate, and remedy the effects of such non-obvious bias on participants in legal relations.
Solving the issues of algorithmic bias by technical means alone will not produce the desired results. The world community recognizes the need to introduce standardization and to develop ethical principles that would ensure proper decision-making with the application of artificial intelligence. It is necessary to create special rules restricting algorithmic bias. Regardless of the areas where such violations are revealed, they share the standard features of unfair behavior by participants in social relations and can be qualified as violations of human rights or of fair competition. Algorithmic bias can be minimized by requiring that data be introduced into circulation in a form that does not allow explicit or implicit segregation of various groups of society: it should become possible to analyze only data stripped of explicit group attributes while still reflecting society in its full diversity. As a result, the AI model would be built on the analysis of data from all socio-legal groups of society.
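The recommender effect noted in the Results section can be made concrete with a short, self-contained sketch. The Python code below (synthetic data; all variable names and probabilities are hypothetical illustrations, not taken from the article) builds an item-similarity recommender that never receives the users' group attribute, yet ends up targeting its recommendation at a set of users dominated by one group, because purchase behavior acts as a proxy for group membership.

```python
# A minimal sketch (synthetic data, hypothetical numbers) of proxy-driven
# algorithmic bias: the model never sees a demographic attribute.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items = 200, 6

# Two social groups; `group` is kept only to audit the outcome below.
group = rng.integers(0, 2, size=n_users)

# Purchase probabilities differ by group, so the purchase history itself
# becomes a proxy for group membership.
probs = np.where(group[:, None] == 0,
                 [0.9, 0.8, 0.7, 0.1, 0.1, 0.1],   # group 0 favors items 0-2
                 [0.05, 0.1, 0.1, 0.9, 0.8, 0.7])  # group 1 favors items 3-5
history = (rng.random((n_users, n_items)) < probs).astype(float)

# Item-based collaborative filtering: cosine similarity of co-purchases.
norms = np.linalg.norm(history, axis=0) + 1e-9
sim = (history.T @ history) / np.outer(norms, norms)
np.fill_diagonal(sim, 0.0)

# Recommend, to buyers of item 0 who lack it, the item most co-purchased
# with item 0.
rec = int(np.argmax(sim[0]))
targeted = (history[:, 0] == 1) & (history[:, rec] == 0)

print(f"item recommended to buyers of item 0:  {rec}")
print(f"share of group 0 among all users:      {(group == 0).mean():.2f}")
print(f"share of group 0 among targeted users: {(group[targeted] == 0).mean():.2f}")
# Typical run: the population is ~0.50 group 0, but the targeted set is
# ~0.80+ group 0 -- a homogeneous audience despite demographic-free input.
```

Dropping the demographic column is thus not enough on its own, which is what motivates the call, in the conclusions above, for circulating data in a form that precludes implicit as well as explicit segregation.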

Keywords: artificial intelligence; algorithmic bias; human rights; robotic performance; self-learning software; human-machine interaction; risks of artificial intelligence applications; standardization of artificial intelligence systems
References: 1. Weizenbaum J. Vozmozhnosti vychislitel'nykh mashin i chelovecheskiy razum: Ot suzhdeniy k vychisleniyam [Computer Power and Human Reason. From Judgment to Calculation]. Moscow, 1982. 369 p. (In Russ.).
2. Savel'ev A. I. Kommentariy k Federal'nomu zakonu ot 27 iyulya 2006 g. № 149-FZ "Ob informatsii, informatsionnykh tekhnologiyakh i zashhite informatsii" (postateynyy) [Commentary on the Federal Law No. 149-FZ of July 27, 2006 'On Information, Information Technologies and Information Protection' (Article-by-Article)]. Moscow, 2015. 320 p. (In Russ.).
3. Kharitonova A. R. Sokhrannost' i anonimnost' personal'nykh dannykh v sotsial'nykh setyakh [Security and Anonymity of Personal Data in Social Media]. Predprinimatel'skoe pravo. Prilozhenie "Pravo i Biznes"- Entrepreneurial Law. 'Law and Business' Supplement. 2019. Issue 4. Pp. 48–55. (In Russ.).
4. Kharitonova Yu. S. Kontekstnaya (povedencheskaya) reklama i pravo: tochki peresecheniya [Contextual (Behavioral) Advertising and Law: Points of Intersection]. Pravo v sfere Interneta: sbornik statey [Law in the Internet Sphere: Collection of Articles]; M. Z. Ali, D. V. Afanas'ev, V. A. Belov et al.; ed. by M. A. Rozhkova. Moscow, 2018. 528 p. (In Russ.).
5. Kharitonova Yu. S., Savina V. S. Tekhnologiya iskusstvennogo intellekta i pravo: vyzovy sovremennosti [Artificial Intelligence Technology and Law: Challenges of Our Time]. Vestnik Permskogo universiteta. Juridicheskie nauki – Perm University Herald. Juridical Sciences. 2020. Issue 3. Pp. 524–549. DOI: 10.17072/1995-4190-2020-49-524-549. (In Russ.).
6. Adelman L. Unnatural Causes: Is Inequality Making Us Sick? Preventing Chronic Disease. 2007. Vol. 4. Issue 4. (In Eng.).
7. Agniel D., Kohane I. K., Weber G. M. Biases in Electronic Health Record Data due to Processes within the Healthcare System: Retrospective Observational Study. BMJ. 2018. Vol. 361. Issue 8151. k1479. (In Eng.).
8. Angwin J., Larson J., Mattu S., Kirchner L. Machine Bias. ProPublica. May 23, 2016. Available at: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing. (In Eng.).
9. Baeza-Yates R. Data and Algorithmic Bias in the Web. Proceedings of the 8th ACM Conference on Web Science. May 2016. (In Eng.).
10. Bakshy E., Messing S., Adamic L. A. Exposure to Ideologically Diverse News and Opinion on Facebook. Science. 2015. Vol. 348. Issue 6239. Pp. 1130–1132. DOI: 10.1126/science.aaa1160. (In Eng.).
11. Barr A. Google Mistakenly Tags Black People as 'Gorillas,' Showing Limits of Algorithms. The Wall Street Journal. July 1, 2015. Available at: https://www.wsj.com/articles/BL-DGB-42522. (In Eng.).
12. Bartolini C., Lenzini G., Santos C. An Agile Approach to Validate a Formal Representation of the GDPR. JSAI: Annual Conference of the Japanese Society for Artificial Intelligence. Ed. by K. Kojima, M. Sakamoto, K. Mineshima, K. Satoh. Springer, Cham, 2019. Pp. 160–176. (In Eng.).
13. Bengio Y. Learning Deep Architectures for AI. Now Publishers Inc., 2009. (In Eng.).
14. Benkler Y. Don't Let Industry Write the Rules for AI. Nature. 2019. Issue 569(7754). Pp. 161–162. DOI: https://doi.org/10.1038/d41586-019-01413-1. (In Eng.).
15. Buolamwini J., Gebru T. Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of the 1st Conference on Fairness, Accountability and Transparency. PMLR. 2018. Vol. 81. Pp. 77–91. (In Eng.).
16. Burns N. Why We Should Expect Algorithms to Be Biased. MIT Technology Review. June 24, 2016. Available at: https://www.technologyreview.com/s/601775/why-we-should-expect-algorithms-to-be-biased/. (In Eng.).
17. Casacuberta D. Bias in a Feedback Loop: Fuelling Algorithmic Injustice. CCCB LAB. May 9, 2018. (In Eng.).
18. Chiappa S. Path-Specific Counterfactual Fairness. Proceedings of the AAAI Conference on Artificial Intelligence. 2019. Vol. 33. Issue 1. Pp. 7801–7808. (In Eng.).
19. Colleoni E., Rozza A., Arvidsson A. Echo Chamber or Public Sphere? Predicting Political Orientation and Measuring Political Homophily in Twitter Using Big Data. Journal of Communication. 2014. Issue 64(2). Pp. 317–332. (In Eng.).
20. Courtland R. Bias Detectives: the Researchers Striving to Make Algorithms Fair. Nature. 2018. Issue 558(7710). Pp. 357–360. (In Eng.).
21. Del Vicario M., Bessi A., Zollo F., Petroni F., Scala A., Caldarelli G. et al. The Spreading of Misinformation Online. Proceedings of the National Academy of Sciences. 2016. Issue 113(3). Pp. 554–559. (In Eng.).
22. Dixon B. What is Algorithmic Bias? TechTalks. March 26, 2018. Available at: https://bdtechtalks.com/2018/03/26/racist-sexist-ai-deep-learning-algorithms/. (In Eng.).
23. Edizel B., Bonchi F., Hajian S., Panisson A., Tassa T. FaiRecSys: Mitigating Algorithmic Bias in Recommender Systems. International Journal of Data Science and Analytics. 2020. Issue 9(2). Pp. 197–213. (In Eng.).
24. Fitzpatrick T. B. 'Soleil et peau' [Sun and Skin]. Journal de Médecine Esthétique. 1975. Issue 2. Pp. 33–34. (In Fr.).
25. Floridi L. Translating Principles into Practices of Digital Ethics: Five Risks of Being Unethical. Philosophy & Technology. 2019. Issue 32(2). Pp. 185–193. (In Eng.).
26. Garcia M. Racist in the Machine: The Disturbing Implications of Algorithmic Bias. World Policy Journal. 2016. Issue 33(4). Pp. 111–117. (In Eng.).
27. Garimella K., De Francisci Morales G., Gionis A., Mathioudakis M. Political Discourse on Social Media: Echo Chambers, Gatekeepers, and the Price of Bipartisanship. Proceedings of the 2018 World Wide Web Conference on World Wide Web. International World Wide Web Conferences Steering Committee. 2018. Pp. 913–922. (In Eng.).
28. Hacker P. Teaching Fairness to Artificial Intelligence: Existing and Novel Strategies against Algorithmic Discrimination under EU Law. Common Market Law Review. 2018. Issue 55(4). (In Eng.).
29. Hagendorff T. The Ethics of AI Ethics: an Evaluation of Guidelines. Minds and Machines. 2020. Issue 30(1). Pp. 99–120. (In Eng.).
30. Hajian S., Bonchi F., Castillo C. Algorithmic Bias: From Discrimination Discovery to Fairness-Aware Data Mining. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2016. Pp. 2125–2126. (In Eng.).
31. Hao K. This Is How AI Bias Really Happens – And Why It's So Hard to Fix. MIT Technology Review. February 4, 2019. (In Eng.).
32. Heidari H., Nanda V., Gummadi K. P. On the Long-Term Impact of Algorithmic Decision Policies: Effort Unfairness and Feature Segregation through Social Learning. Available at: https://arxiv.org/abs/1903.01209. (In Eng.).
33. Ignatieff M. Political Polarization in the American Public. Annual Colloquium on Fundamental Rights. Brussels, 2016. (In Eng.).
34. Jackson J. R. Algorithmic Bias. Journal of Leadership, Accountability & Ethics. 2018. Vol. 15. Issue 4. Pp. 55–65. (In Eng.).
35. Jobin A., Ienca M., Vayena E. The Global Landscape of AI Ethics Guidelines. Nature Machine Intelligence. 2019. Issue 1(9). Pp. 389–399. (In Eng.).
36. Klingenberg C. O., Borges M. A. V., Antunes Jr J. A. V. Industry 4.0 as a Data-Driven Paradigm: A Systematic Literature Review on Technologies. Journal of Manufacturing Technology Management. 2019. Vol. 32. Issue 3. Pp. 570–592. (In Eng.).
37. Koene A., Dowthwaite L., Seth S. IEEE P7003™ Standard for Algorithmic Bias Considerations: Work in Progress Paper. Proceedings of the International Workshop on Software Fairness. May 2018. Pp. 38–41. (In Eng.).
38. Koren Y. Collaborative Filtering with Temporal Dynamics. Communications of the ACM. 2010. Issue 53(4). Pp. 89–97. (In Eng.).
39. Lipton Z. C. The Foundations of Algorithmic Bias. 2016. Available at: http://approximatelycorrect.com/2016/11/07/the-foundations-of-algorithmic-bias/. (In Eng.).
40. Ludbrook F., Michalikova K. F., Musova Z., Suler P. Business Models for Sustainable Innovation in Industry 4.0: Smart Manufacturing Processes, Digitalization of Production Systems, and Data-Driven Decision Making. Journal of Self-Governance and Management Economics. 2019. Issue 7(3). Pp. 21–26. (In Eng.).
41. Mäs M., Bischofberger L. Will the Personalization of Online Social Networks Foster Opinion Polarization? 2015. Available at: http://dx.doi.org/10.2139/ssrn.2553436. (In Eng.).
42. Moutafis R. We're Facing a Fake Science Crisis, and AI Is Making It Worse: Journals Are Retracting More and More Papers Because They're Not by the Authors They Claim to Be. June 8, 2021. Updated June 9, 2021. Available at: https://builtin.com/artificial-intelligence/ai-fake-science. (In Eng.).
43. Panch T., Szolovits P., Atun R. Artificial Intelligence, Machine Learning and Health Systems. Journal of Global Health. December 2018. Issue 8(2). 020303. DOI: 10.7189/jogh.08.020303. (In Eng.).
44. Panch T., Mattie H., Atun R. Artificial Intelligence and Algorithmic Bias: Implications for Health Systems. Journal of Global Health. 2019. Issue 9(2). DOI: 10.7189/jogh.09.020318. (In Eng.).
45. Pariser E. The Filter Bubble: What the Internet Is Hiding from You. Penguin UK, 2011. (In Eng.).
46. Paulus J. K., Kent D. M. Predictably Unequal: Understanding and Addressing Concerns that Algorithmic Clinical Prediction May Increase Health Disparities. NPJ Digital Medicine. 2020. Issue 3(1). Pp. 1–8. (In Eng.).
47. Rodrigues R. Legal and Human Rights Issues of AI: Gaps, Challenges and Vulnerabilities. Journal of Responsible Technology. 2020. Vol. 4. 100005. (In Eng.).
48. Schmidt A. L., Zollo F., Del Vicario M., Bessi A., Scala A., Caldarelli G. et al. Anatomy of News Consumption on Facebook. Proceedings of the National Academy of Sciences. 2017. Issue 114(12). Pp. 3035–3039. (In Eng.).
49. Schmidt A. L., Zollo F., Scala A., Quattrociocchi W. Polarization Rank: A Study on European News Consumption on Facebook. arXiv preprint arXiv:1805.08030. 2018. (In Eng.).
50. Schroeder D., Chatfield K., Singh M., Chennells R., Herissone-Kelly P. Ethics Dumping and the Need for a Global Code of Conduct. Equitable Research Partnerships. Springer, 2019. (In Eng.).
51. Shaulova T. Artificial Intelligence vs. Gender Equality. International Relations and Dialogue of Cultures. 2019. Issue 7. Pp. 52–54. DOI: 10.1870/HUM/2304-9480.7.04. (In Eng.).
52. Simonite T. When It Comes to Gorillas, Google Photos Remains Blind. Wired. January 11, 2018. (In Eng.).
53. Sîrbu A., Pedreschi D., Giannotti F., Kertész J. Algorithmic Bias Amplifies Opinion Fragmentation and Polarization: A Bounded Confidence Model. PLoS ONE. 2019. Issue 14(3). Available at: https://doi.org/10.1371/journal.pone.0213246. (In Eng.).
54. Smith L. Unfairness by Algorithm: Distilling the Harms of Automated Decision-Making. Future of Privacy Forum. December 11, 2017. Available at: https://fpf.org/2017/12/11/unfairness-by-algorithm-distilling-the-harms-of-automated-decision-making. (In Eng.).
55. Stojanovic L., Dinic M., Stojanovic N., Stojadinovic A. Big-Data-Driven Anomaly Detection in Industry (4.0): An Approach and a Case Study. 2016 IEEE International Conference on Big Data (Big Data). 2016. Pp. 1647–1652. (In Eng.).
56. Strickland E. Racial Bias Found in Algorithms That Determine Health Care for Millions of Patients. IEEE Spectrum. October 24, 2019. (In Eng.).
57. Wael W. D. Artificial Intelligence. SC 42 Overview, ITU Workshop on AI and Data Commons. Geneva, Switzerland, January 2020. Available at: https://www.itu.int/en/ITU-T/ext-coop/ai-data-commons/Documents/ISO_IEC%20JTC1%20SC%2042%20Keynote_Wael%20Diab.pdf. (In Eng.).
58. Winfield A. An Updated Round Up of Ethical Principles of Robotics and AI. April 18, 2019. (In Eng.).
Received: 03.03.2021
Financing:

The research was prepared within the framework of the Program for Development of the Interdisciplinary Scientific and Educational School of Moscow University "Mathematical Methods of Analysis of Complex Systems".
