Journal of Digital Technologies and Law

Algorithmic Discrimination and Privacy Protection

EDN: ktizpw

Objective: the emergence of digital technologies such as artificial intelligence has become a challenge for states across the world. It carries many risks of human rights violations, including violations of the right to privacy and personal dignity, which makes research in this area highly relevant. This article therefore analyses the role played by algorithms in cases of discrimination, focusing on how algorithms may implement biased decisions based on personal data. The analysis helps assess how the Artificial Intelligence Act proposal can regulate the matter so as to prevent the discriminatory effects of using algorithms.

Methods: the study relies on empirical and comparative analysis. Comparative analysis made it possible to compare existing regulation with the provisions of the Artificial Intelligence Act proposal, while empirical analysis was used to examine existing cases that demonstrate algorithmic discrimination.

Results: the study shows that the Artificial Intelligence Act needs to be revised because it remains at a definitional level and is not sufficiently empirical. The author offers ideas on how to improve it and make it more empirical.

Scientific novelty: the innovation of this contribution lies in its multidisciplinary study of discrimination, data protection, and their impact on empirical reality in the sphere of algorithmic discrimination and privacy protection.

Practical significance: the article's beneficial impact is its focus on the fact that algorithms obey instructions given on the basis of the data that feed them. Lacking abductive capabilities, algorithms merely act as obedient executors of orders. The results of the research can serve as a basis for further research in this area, as well as in the law-making process.

About the Author

E. Falletti
Università Carlo Cattaneo – LIUC

Elena Falletti – PhD, Assistant Professor

Corso Matteotti 22, Castellanza, 21053

Competing Interests:

The author declares no conflict of interest




For citations:

Falletti E. Algorithmic Discrimination and Privacy Protection. Journal of Digital Technologies and Law. 2023;1(2):387–420. EDN: ktizpw

This work is licensed under a Creative Commons Attribution 4.0 License.

ISSN 2949-2483 (Online)