Journal of Digital Technologies and Law


Ethical-Legal Models of the Society Interactions with the Artificial Intelligence Technology

EDN: ekeudk

Objective: to explore the current state of artificial intelligence technology and to form prognostic ethical-legal models of society's interactions with this end-to-end technology.

Methods: the key research method is modeling. In addition, the comparative, abstract-logical and historical methods of scientific cognition were applied.

Results: four ethical-legal models of society's interactions with artificial intelligence technology were formulated: the tool model (based on the use of an artificial intelligence system by a human), the xenophobia model (based on competition between a human and an artificial intelligence system), the empathy model (based on empathy and co-adaptation between a human and an artificial intelligence system), and the tolerance model (based on mutual exploitation of and cooperation between humans and artificial intelligence systems). Historical and technical prerequisites for the formation of these models are presented. Possible legislative responses to the use of this technology are described, such as selective regulation, refusal to regulate, or full-scale intervention in the technological sector of the economy. The models are compared by the criteria of implementation conditions, advantages, disadvantages, the character of "human – artificial intelligence system" relations, probable legal effects, and the need for regulation or for its rejection in the sector.

Scientific novelty: the work assesses existing opinions and approaches published in the scientific literature and mass media, and analyzes technical solutions and problems of the recent past and present. Theoretical conclusions are supported by references to applied situations of public or legal significance. The work takes an interdisciplinary approach, combining legal, ethical and technical components, which, in the author's opinion, is a requirement for any modern socio-humanitarian research on artificial intelligence technologies.

Practical significance: the artificial intelligence phenomenon is associated with the fourth industrial revolution; hence, this digital technology must be researched in a multi-aspect, interdisciplinary way. The approaches elaborated in the article can be used in the further technical development of intelligent systems, in the improvement of branch legislation (for example, civil and labor law), and in the formation and modification of ethical codes governing the development, introduction and use of artificial intelligence systems in various situations.

About the Author

D. V. Bakhteev
Ural State Law University named after V. F. Yakovlev
Russian Federation

Dmitriy V. Bakhteev – Doctor of Law, Associate Professor, Department of Criminology

21 Komsomolskaya Str., 620137, Ekaterinburg

Competing Interests:

The author declares no conflict of interest.




For citations:

Bakhteev D.V. Ethical-Legal Models of the Society Interactions with the Artificial Intelligence Technology. Journal of Digital Technologies and Law. 2023;1(2):520–539. EDN: ekeudk


This work is licensed under a Creative Commons Attribution 4.0 License.

ISSN 2949-2483 (Online)