Explainable Artificial Intelligence and Legal Ethos: Developing Key Performance Indicators for ‘G20 Giants’
https://doi.org/10.21202/jdtl.2025.26
EDN: uiujcv
Abstract
Objective: to examine the “right to explanation” through the lens of the PEEC doctrine (public interest, environmental sustainability, economic development, criminal justice) and to develop key performance indicators that reflect the socio-cultural characteristics of different countries and ensure adaptability, transparency and cultural relevance in the regulation of explainable artificial intelligence.
Methods: the research uses a distinctive methodological approach that combines the iterative processes of soft systems methodology with a theoretical framework based on the PEEC principles. This integration makes it possible to comprehensively examine the social, economic, political and legal regimes of the ‘G20 Giants’ – the United States of America, the Federal Republic of Germany, Japan, the Republic of India, the Federative Republic of Brazil and the Russian Federation – when designing key performance indicators. The proposed key performance indicators can be used to assess the transparency and accountability of artificial intelligence systems, simplifying data collection and practical implementation in various cultural contexts. The developed model reflects actual societal needs in decision-making that relies on artificial intelligence technologies.
Results: the study proposes a new legal model for regulating explainable artificial intelligence based on a system of key performance indicators. Beyond addressing the challenges of regulating explainable artificial intelligence across diverse cultural, ethical and legal settings, the model ensures that the regulatory framework duly accounts for anthropocentric considerations, as it is aimed at unlocking the true potential of artificial intelligence. The proposed approach promotes the effective use of artificial intelligence technologies for the benefit of society from the perspective of sustainable development.
Scientific novelty: the work applies a distinctive scientific approach that takes into account cultural, ethical, socio-economic and legal differences when developing a legal framework for regulating explainable artificial intelligence. This allows the legal framework to be adapted to various national conditions while contributing to the responsible governance of artificial intelligence through a system of checks and balances.
Practical significance: the results obtained allow government agencies and developers of artificial intelligence systems to apply the proposed legal model in practice to ensure the transparency and explainability of these technologies. Careful adjustment of the proposed key performance indicators to the specifics of individual states will optimize them for universal use. Although all five key performance indicators are relevant for the ‘G20 Giants’, their relative significance depends on the socio-cultural and legal conditions of a particular state. Further research should cover a wider range of jurisdictions, including other developed and developing countries, in order to adapt the regulation of explainable artificial intelligence to various national and global requirements.
About the Authors
N. Bhatt
India
Neelkanth Bhatt – PhD, Head of the Department & Associate Professor, Department of Civil Engineering, Government Engineering College.
Near Mavdi-Kankot Road, Rajkot, Pin Code 360 005, Gujarat
Scopus Author ID: https://www.scopus.com/authid/detail.uri?authorId=58919442100
WoS Researcher ID: https://www.webofscience.com/wos/author/record/KRO-8652-2024
Google Scholar ID: https://scholar.google.com/citations?user=L7K-e3IAAAAJ
Competing Interests:
The author declares no conflict of interest.
J. N. Bhatt
India
Jaikishen Nathalal Bhatt – Bachelor of Commerce, Retired Social Security Officer, Employees’ State Insurance Corporation.
Panchdeep Bhavan, Ashram Road, Ahmedabad, Pin 380 009, Gujarat
Competing Interests:
The author declares no conflict of interest.
References
1. Bhatt, N. (2025). Crimes in the Age of Artificial Intelligence: a Hybrid Approach to Liability and Security in the Digital Era. Journal of Digital Technologies and Law, 3(1), 65–88. https://doi.org/10.21202/jdtl.2025.3
2. Bhatt, N., & Bhatt, J. (2023). Towards a novel eclectic framework for administering artificial intelligence technologies: A proposed ‘PEEC’ doctrine. EPRA International Journal of Research and Development (IJRD), 8(9), 27–36. https://doi.org/10.13140/RG.2.2.11434.18888
3. Checkland, P., & Poulter, J. (2020). Soft Systems Methodology. In M. Reynolds, & S. Holwell (Eds.), Systems Approaches to Making Change: A Practical Guide (pp. 201–253). Springer, London. https://doi.org/10.1007/978-1-4471-7472-1_5
4. Comin, D., & Hobijn, B. (2011). An exploration of technology diffusion. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.1116606
5. Doshi-Velez, F., Kortz, M., Budish, R., Bavitz, C., Gershman, S., O’Brien, D., Schieber, S., Waldo, J., Weinberger, D., Weller, A., & Wood, A. (2017). Accountability of AI Under the Law: The Role of Explanation. arXiv, abs/1711.01134. https://doi.org/10.2139/ssrn.3064761
6. Eckhardt, G. (2002). Culture’s Consequences: Comparing Values, Behaviors, Institutions and Organisations Across Nations. Australian Journal of Management, 27(1), 89–94. https://doi.org/10.1177/031289620202700105
7. Edwards, L., & Veale, M. (2018). Enslaving the Algorithm: From a “Right to an Explanation” to a “Right to Better Decisions”? IEEE Security & Privacy, 16, 46–54. https://doi.org/10.1109/MSP.2018.2701152
8. Gacutan, J., & Selvadurai, N. (2020). A statutory right to explanation for decisions generated using artificial intelligence. International Journal of Law and Information Technology, 28(3), 193–216. https://doi.org/10.1093/ijlit/eaaa016
9. Gilpin, L. H., Bau, D., Yuan, B. Z., Bajwa, A., Specter, M., & Kagal, L. (2018). Explaining Explanations: An Overview of Interpretability of Machine Learning. In 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA), Turin, Italy, 2018 (pp. 80–89). https://doi.org/10.1109/DSAA.2018.00018
10. Hacker, P., Krestel, R., Grundmann, S., & Naumann, F. (2020). Explainable AI under contract and tort law: Legal incentives and technical challenges. Artificial Intelligence and Law, 28(4), 415–439. https://doi.org/10.1007/s10506-020-09260-6
11. Irwan, M., & Mursyid, M. (2025). AI-Driven Traffic Accidents: A Comparative Legal Study. Artes Libres Law and Social Journal, 1(1), 1–20. https://doi.org/10.12345/jxt3j717
12. Jan, J., Alshare, K. A., & Lane, P. L. (2024). Hofstede’s cultural dimensions in technology acceptance models: a meta-analysis. Universal Access in the Information Society, 23(2), 717–741. https://doi.org/10.1007/s10209-022-00930-7
13. Malgieri, G. (2019). Automated decision-making in the EU Member States: The right to explanation and other “suitable safeguards” in the national legislations. Computer Law & Security Review, 35(5), 105327. https://doi.org/10.1016/j.clsr.2019.05.002
14. Peters, U., & Carman, M. (2024). Cultural bias in explainable AI research: A systematic analysis. Journal of Artificial Intelligence Research, 79, 971–1000. https://doi.org/10.1613/jair.1.14888
15. Prabhakaran, V., Qadri, R., & Hutchinson, B. (2022). Cultural incongruencies in artificial intelligence. arXiv preprint arXiv:2211.13069. https://doi.org/10.48550/arXiv.2211.13069
16. Ribeiro, L. H. da C., Silva, C. M. da, & Viana, P. W. P. (2024). Artificial intelligence as a tool for predicting crime in large Brazilian cities. Revista FT, 28. https://doi.org/10.5281/zenodo.11100354
17. Taylor, E. (2023). Explanation and the Right to Explanation. Journal of the American Philosophical Association, 10(3), 467–482. https://doi.org/10.1017/apa.2023.7
18. Triandis, H. C. (2018). Individualism and collectivism. Routledge. https://doi.org/10.4324/9780429499845
- An innovative legal model for regulating explainable artificial intelligence was developed based on the PEEC doctrine, which integrates public interests, environmental sustainability, economic development, and criminal justice;
- A system of five comprehensive key performance indicators was proposed to assess the explainability of artificial intelligence systems: clarity and trust index, bias reduction index, AI carbon footprint index, AI socio-economic benefit-cost ratio, and cultural and legal accountability score;
- A comparative legal analysis of the “right to explanation” in the six “G20 Giants” – the USA, Germany, Japan, India, Brazil and Russia – was conducted, taking into account their socio-cultural and legal characteristics;
- Differentiated target values of the key performance indicators were established depending on the risk level of the decisions made: for high-risk decisions, the clarity and trust index should reach 90–100%, while for strategic decisions it should reach 70–90% (a schematic illustration follows below).
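The article does not publish formulas or reference code for these indicators, so the following Python sketch is purely illustrative: the five KPI names and the risk-tiered targets for the clarity and trust index are taken from the points above, while the data structure, field names and the pass/fail check are hypothetical assumptions about how such a scoring scheme might be operationalized.

```python
# Illustrative sketch only (not from the article): KPI names and the risk-tiered
# clarity targets come from the highlights above; everything else is assumed.
from dataclasses import dataclass


@dataclass
class XaiKpiReport:
    """Scores for the five proposed explainability KPIs (percentages, except the
    socio-economic benefit-cost ratio, which is a plain ratio)."""
    clarity_and_trust: float               # clarity and trust index, %
    bias_reduction: float                  # bias reduction index, %
    carbon_footprint: float                # AI carbon footprint index
    benefit_cost_ratio: float              # AI socio-economic benefit-cost ratio
    cultural_legal_accountability: float   # cultural and legal accountability score


# Target bands for the clarity and trust index, as stated above:
# 90-100 % for high-risk decisions, 70-90 % for strategic decisions.
CLARITY_TARGETS = {
    "high_risk": (90.0, 100.0),
    "strategic": (70.0, 90.0),
}


def clarity_target_met(report: XaiKpiReport, risk_tier: str) -> bool:
    """Return True if the clarity and trust index falls within the target band
    for the given risk tier ('high_risk' or 'strategic')."""
    low, high = CLARITY_TARGETS[risk_tier]
    return low <= report.clarity_and_trust <= high


if __name__ == "__main__":
    # Hypothetical report for a high-risk, AI-assisted decision system.
    report = XaiKpiReport(
        clarity_and_trust=93.0,
        bias_reduction=85.0,
        carbon_footprint=40.0,
        benefit_cost_ratio=2.1,
        cultural_legal_accountability=78.0,
    )
    print(clarity_target_met(report, "high_risk"))  # True
```

As the article argues, the relative weighting of the remaining four indicators would have to be tuned to each state’s socio-cultural and legal conditions before any aggregate assessment could be drawn from such a report.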
For citations:
Bhatt N., Bhatt J.N. Explainable Artificial Intelligence and Legal Ethos: Developing Key Performance Indicators for ‘G20 Giants’. Journal of Digital Technologies and Law. 2025;3(4):660-676. https://doi.org/10.21202/jdtl.2025.26. EDN: uiujcv






















































