Objective: the rapid expansion of telemedicine in clinical practice and the increasing use of Artificial Intelligence have raised many privacy issues and concerns among legal scholars. Due to the sensitive nature of the data involved, particular attention should be paid to the legal aspects of such systems. This article aims to explore the legal implications of the use of Artificial Intelligence in the field of telemedicine, especially when continuous learning and automated decision-making systems are involved; in fact, providing personalized medicine through continuous learning systems may represent an additional risk. Particular attention is paid to vulnerable groups, such as children, the elderly, and severely ill patients, due to both the digital divide and the difficulty of expressing free consent.
Methods: comparative and formal legal methods made it possible to analyze the current regulation of Artificial Intelligence and to establish its correlations with the regulation of telemedicine, the GDPR, and other instruments.
Results: the legal implications of the use of Artificial Intelligence in telemedicine were explored, especially where continuous learning and automated decision-making systems are involved. The author concluded that providing personalized medicine through continuous learning systems may represent an additional risk and offered ways to minimize it. The author also focused on the issue of informed consent of vulnerable groups (children, the elderly, severely ill patients).
Scientific novelty: the existing risks and issues arising from the use of Artificial Intelligence in telemedicine are explored, with particular attention to continuous learning systems.
Practical significance: the results achieved in this paper can be used in the lawmaking process concerning the use of Artificial Intelligence in telemedicine and as a basis for future research in this area, as well as contributing to the limited literature on the topic.
Objective: to analyze current technological and legal theories in order to define the content of the principle of transparency of artificial intelligence functioning from the viewpoint of legal regulation, the choice of applicable means of legal regulation, and the establishment of objective limits to legal intervention in the technological sphere through regulatory impact.
Methods: the methodological basis of the research is the set of general scientific (analysis, synthesis, induction, deduction) and specific legal (historical-legal, formal-legal, comparative-legal) methods of scientific cognition.
Results: the author critically analyzed the norms and proposals for normative formalization of the principle of artificial intelligence transparency, proceeding from the impossibility of achieving full technological transparency of artificial intelligence. It is proposed to discuss options for managing algorithmic transparency and accountability, based on an analysis of the social, technical and regulatory problems created by algorithmic systems of artificial intelligence. It is proved that transparency is an indispensable condition for recognizing artificial intelligence as trustworthy. It is proved that the transparency and explainability of artificial intelligence technology are essential not only for personal data protection, but also in other situations of automated data processing, when, in order to make a decision, the technological data lacking in the input information are taken from open sources, including those that do not have the status of personal data storage. It is proposed to legislatively stipulate an obligatory audit and to introduce a standard establishing a compromise between the abilities and advantages of the technology, the accuracy and explainability of its results, and the rights of the participants of civil relations. Introducing certification of artificial intelligence models whose application is obligatory will solve the issues of liability of the subjects obliged to apply such systems. In the context of the professional liability of professional subjects, such as doctors, military personnel, or corporate executives of a juridical person, the obligatory application of artificial intelligence should be restricted if sufficient transparency is not provided.
Scientific novelty: the interdisciplinary character of the research made it possible to reveal the impossibility and groundlessness of requirements to completely disclose the source code or architecture of artificial intelligence models. The principle of artificial intelligence transparency may be satisfied by elaborating and granting to the data subject, and to the person to whom a decision made as a result of automated data processing is addressed, the right to refuse the use of automated data processing in decision-making and the right to object to decisions made in such a way.
Practical significance: determined by the actual absence of sufficient regulation of the principle of transparency of artificial intelligence and the results of its functioning, as well as of the content and features of implementing the right to explanation and the right to objection of the decision subject. The most fruitful way to establish trust in artificial intelligence is to recognize this technology as part of a complex sociotechnical system, which mediates trust, and to improve the reliability of these systems. The main provisions and conclusions of the research can be used to improve the legal mechanism of ensuring the transparency of artificial intelligence models applied in state governance and business.
Objective: to reveal the problems associated with the legal regulation of public relations in which artificial intelligence systems are used, and to rationally comprehend the possibility, currently discussed by legal scientists, of endowing such systems with the status of a legal subject.
Methods: the methodological basis of the research is formed by the general scientific methods of analysis and synthesis, analogy, abstraction and classification. Among the legal methods primarily applied in the work are the formal-legal, comparative-legal and systemic-structural methods, as well as the methods of law interpretation and legal modeling.
Results: the authors present a review of the state of artificial intelligence development and its introduction into practice at the time of the research. The legal framework in this sphere is considered; the key current concepts of endowing artificial intelligence with a legal personality (individual, collective and gradient legal personality of artificial intelligence) are reviewed. Each approach is assessed, and conclusions are made as to the most preferable amendments to the current legislation, which is ceasing to correspond to reality. The growing inconsistency is due to the accelerated development of artificial intelligence and its spread into various sectors of the economy, the social sphere and, in the near future, public management. All this testifies to the increased risk of a break between legal matter and the changing social reality.
Scientific novelty: the scientific approaches that endow artificial intelligence with a legal personality are classified. Within each approach, the key points are identified whose use will make it possible in the future to create legal constructs based on combinations, avoiding extremes and preserving the balance between the interests of all parties. The optimal way to define the legal status of artificial intelligence might be to include intellectual systems in the list of objects of civil rights, while differentiating the legal regulation of artificial intelligence as an object of law from that of an “electronic agent” as a quasi-subject of law. The demarcation line should be drawn depending on the functional differences between intellectual systems, bearing in mind that not only a robot but also a virtual intellectual system can be considered an “electronic agent”.
Practical significance: the research materials can be used in preparing proposals for amendments and additions to the current legislation, as well as in developing academic courses and writing textbooks on topics related to the regulation of the use of artificial intelligence.
Objective: the emergence of digital technologies such as Artificial Intelligence has become a challenge for states across the world. It has brought many risks of human rights violations, including violations of the right to privacy and the dignity of the person, which makes research in this area highly relevant. This article therefore aims to analyse the role played by algorithms in discrimination cases. It focuses on how algorithms may implement biased decisions using personal data. This analysis helps assess how the Artificial Intelligence Act proposal can regulate the matter to prevent the discriminatory effects of using algorithms.
Methods: the methods used were empirical and comparative analysis. Comparative analysis made it possible to compare existing regulation with the provisions of the Artificial Intelligence Act proposal. Empirical analysis made it possible to examine existing cases that demonstrate algorithmic discrimination.
Results: the study’s results show that the Artificial Intelligence Act needs to be revised, because it remains on a definitional level and is not sufficiently empirical. The author offers ideas on how to improve it and make it more empirical.
Scientific novelty: the innovation of this contribution lies in its multidisciplinary study of discrimination, data protection and the impact on empirical reality in the sphere of algorithmic discrimination and privacy protection.
Practical significance: the beneficial impact of the article is its focus on the fact that algorithms obey instructions given on the basis of the data that feed them. Lacking abductive capabilities, algorithms merely act as obedient executors of orders. The results of the research can be used as a basis for further research in this area, as well as in the law-making process.
Objective: to review the modern scientific approaches to regulating relations in the sphere of using artificial intelligence technologies, and to reveal the main features and limitations of the risk-oriented and technological approaches in order to determine the directions of their further development.
Methods: the methodological basis of the research is a set of scientific cognition methods, including the general scientific dialectic method and the universal scientific methods (analysis and synthesis, comparison, summarization, structural-functional, and formal-logical methods).
Results: it was determined that the use of the risk-oriented approach implies building constructive models of risk management. A significant issue in using this approach is the grounds for classifying artificial intelligence technologies as high-risk. When determining the risk level of using artificial intelligence technologies, the following criteria should be applied: the type of artificial intelligence technology, its sphere of use, and the level of potential harm to the environment, human health and other fundamental human rights.
In turn, the central issue of the technological approach is the necessity and limits of regulation in the sphere of developing and using artificial intelligence technologies. First, interference in this sphere must not create obstacles to the development of technologies and innovations. Second, a natural reaction of a regulator to newly emerging objects and subjects of turnover is the “imperfect law syndrome”. At the same time, a false impression of a lack of legal regulation may produce the opposite effect – duplication of legal norms. To solve the problem of duplicating legal requirements, it is first of all necessary to resolve the question of whether artificial intelligence technologies, or certain types of software applications, need to be regulated at all.
Scientific novelty: a review of the main approaches to regulating relations in the sphere of developing and using artificial intelligence technologies was carried out; the opportunities and limitations of their use were revealed; further directions of their development were proposed.
Practical significance: the main provisions and conclusions of the research can be used for determining the optimal approaches to regulating the sphere of digital technologies and for improving the legal regulation of the studied sphere of social relations.
Objective: to research the problem of determining the subject of a legally relevant act performed with the participation of artificial intelligence, as well as the distribution of responsibility for the consequences of its performance.
Methods: to illustrate the problematic and practical significance of the issue of legal personality of artificial intelligence, we chose automated procurements for public and corporate needs; the methodological basis of the research is the set of methods of scientific cognition, including comparison, retrospective analysis, analogy, and synthesis.
Results: using the example of the sector of competitive procurements for public and corporate needs, the evolution of the automation of economic relations up to the introduction of artificial intelligence was analyzed. Successfully tested responses to the challenges of the stage-by-stage introduction of digital technologies into economic relations were demonstrated, along with the corresponding modifications of legal regulation. Based on the current level of technological development, prospective questions associated with the legal regulation of economic relations implemented with the use of artificial intelligence are formulated, first of all the question of defining the subject of a deal effected with the participation of artificial intelligence. As an invitation for discussion, after analyzing jurists’ conclusions about the probable variants of the legal status of artificial intelligence, the author proposes variants of answers to the question of its legal personality when effecting a deal. To solve the issue of responsibility for decisions resulting from the operation of the algorithms of a software and hardware package, several models are proposed for distributing such responsibility among its creator, owner, and other persons whose actions might influence the results of the algorithm’s functioning. The proposed conclusions may be used to develop normative regulation both as a set and individually.
Scientific novelty: based on the analysis of evolution of the practices of using digital technologies in procurement, the work formulates potential legal problems, determined by the constant automation of economic relations, and proposes legal constructs to solve such problems.
Practical significance: the conclusions and proposals of this work are of prospective significance for the conceptual comprehension and normative regulation of electronic procurement tools at both the corporate and national levels.
Objective: international law obligates states to prosecute those who have violated the laws of armed conflict, particularly now that the international community has the International Criminal Court (ICC). That is why the aim of the paper is to determine responsibility for crimes committed with the use of AI-based autonomous weapons in accordance with the provisions of the Rome Statute of the ICC.
Methods: doctrinal analysis made it possible to study experts’ positions on responsibility for crimes committed with the use of AI-based autonomous weapons in accordance with the provisions of the Rome Statute of the ICC.
Results: this paper argues that the ICC can exercise jurisdiction only over natural persons who allegedly have committed the crimes under its jurisdiction, not over autonomous weapons themselves. The paper argues that persons who facilitate the commission of the alleged crimes are highly likely to be criminally responsible for providing the means for the alleged crimes to be committed by AI-based autonomous weapons under Article 25(3)(c) of the Rome Statute, and concludes that the Rome Statute provides a solution even for crimes involving AI-based autonomous weapons.
Scientific novelty: this paper addresses the highly relevant issue of responsibility for crimes committed with the use of AI-based autonomous weapons in accordance with the provisions of the Rome Statute of the ICC.
Practical significance: the results achieved in the paper can be used in designing regulation for AI-based autonomous weapons. They can also be used as a basis for future research in the sphere of liability related to AI-based autonomous weapons and AI in general.
Objective: to summarize and analyze the approaches established in criminal procedural science regarding the use of artificial intelligence technologies, and to elaborate the author’s approach to the prospects of the transformation of criminal procedural proving under the influence of artificial intelligence technologies.
Methods: the methodological basis of the research is the integration of the general, general scientific and specific legal methods of legal science, including the abstract-logical, comparative-legal and prognostic methods.
Results: the main areas of using artificial intelligence technologies in the criminal procedure are defined, such as the prophylaxis and detection of crimes, the organization of preliminary investigation, the criminological support of crime investigation, and the assessment of evidence at the pre-trial and trial stages. The author concludes that the rather optimistic approach to this issue, established in the science of criminal procedure, significantly outstrips the actually existing artificial intelligence technologies. The main requirements are identified which the use of artificial intelligence in collecting evidence in a criminal case should satisfy. The author pays attention to the problems of using artificial intelligence technologies in conducting expert assessments, which require an improved methodology of forensic work. The prospects of transforming the criminal-procedural proving process under the introduction of artificial intelligence technologies are considered. A conclusion is substantiated that the assessment of evidence by mathematical algorithms using preset values of the quality of each piece of evidence contradicts the principle of free assessment of evidence in the criminal procedure. The author concludes that today there are no sufficient grounds for endowing artificial intelligence with legal personality in the process of proving.
Scientific novelty: the work presents an attempt to consider the role of artificial intelligence in criminal-procedural proving; it specifies the requirements to be met by this technology during evidence collection and analyzes the prospects of transforming the proving process under the introduction of artificial intelligence technologies.
Practical significance: the main provisions and conclusions of the research can be used to improve a mechanism of legal regulation of artificial intelligence technologies in the criminal procedure.
Objective: the spread and wide application of Artificial Intelligence raise ethical questions in addition to data protection concerns. That is why the aim of this paper is to examine the ethical aspects of Artificial Intelligence and to give recommendations for its use in labor law.
Methods: the research is based on the methods of comparative and empirical analysis. Comparative analysis made it possible to examine the provisions of modern labor law in the context of the use of Artificial Intelligence. Empirical analysis made it possible to highlight the ethical issues related to Artificial Intelligence in the world of work by examining disputable cases of the use of Artificial Intelligence in different areas, such as healthcare, education, transport, etc.
Results: the private law aspects of the ethical issues of Artificial Intelligence were examined in the context of the ethical and labor law issues that affect the selection process conducted with Artificial Intelligence and the treatment of employees as a set of data on the employers’ side. The author outlined the general aspects of ethics and the issues of digital ethics, and described individual international recommendations related to the ethics of Artificial Intelligence.
Scientific novelty: this research focuses on the examination of the ethical issues of the use of Artificial Intelligence in a specific field of private law – labor law. The authors give recommendations on the ethical aspects of the use of Artificial Intelligence in this specific field.
Practical significance: the research contributes to the limited literature on the topic. The results of the research can be used in the lawmaking process and as a basis for future research.
Objective: to explore the current state of artificial intelligence technology in order to form prognostic ethical-legal models of society’s interaction with this end-to-end technology.
Methods: the key research method is modeling. In addition, the comparative, abstract-logical and historical methods of scientific cognition were applied.
Results: four ethical-legal models of society’s interaction with artificial intelligence technology were formulated: the tool model (based on the use of an artificial intelligence system by a human), the xenophobia model (based on competition between a human and an artificial intelligence system), the empathy model (based on empathy and the co-adaptation of a human and an artificial intelligence system), and the tolerance model (based on mutual exploitation and cooperation between humans and artificial intelligence systems). The historical and technical prerequisites for the formation of these models are presented. Scenarios of the legislator’s reaction to the use of this technology are described, such as the need for selective regulation, the rejection of regulation, or a full-scale intervention into the technological sector of the economy. The models are compared by the criteria of implementation conditions, advantages, disadvantages, the character of “human – artificial intelligence system” relations, probable legal effects, and the need for regulation or the rejection of regulation in the sector.
Scientific novelty: the work assesses the existing opinions and approaches published in the scientific literature and mass media, and analyzes the technical solutions and problems occurring in the recent past and present. Theoretical conclusions are confirmed by references to applied situations of public or legal significance. The work uses an interdisciplinary approach combining legal, ethical and technical constituents, which, in the author’s opinion, is a criterion for any modern socio-humanitarian research into artificial intelligence technologies.
Practical significance: the artificial intelligence phenomenon is associated with the fourth industrial revolution; hence, this digital technology must be researched in a multifaceted and interdisciplinary way. The approaches elaborated in the article can be used for the further technical development of intellectual systems, for the improvement of sectoral legislation (for example, civil and labor law), and for forming and modifying ethical codes in the sphere of the development, introduction and use of artificial intelligence systems in various situations.
Objective: based on a study of crime statistics, national legislation and the norms of international law, to give a legal assessment of the restrictions on the right to worship implemented with the use of artificial intelligence technologies in China.
Methods: the methodological basis of the research is the set of methods of scientific cognition, including specific sociological (analysis of statistical data and other documents), formal-legal (examining legal categories and definitions), formal-logical (analysis and synthesis), general scientific (induction, deduction), and other methods.
Results: the work researches the prerequisites for using artificial intelligence technologies in China to control public relations arising in the course of religious activity, both in the digital space and beyond; analyzes the legal framework of the measures implemented; gives a legal assessment of the restrictions on religious freedom imposed using artificial intelligence technologies; and forecasts the further development of Chinese legislation and foreign policy associated with religious freedom. Additionally, the work analyzes the materials of human rights organizations aimed at countering the Chinese policy of “sinicisation” and “de-extremification” of ethnic and religious minorities, including through control and propaganda carried out with modern digital technologies.
Scientific novelty: the work researches China’s attempt to regulate the challenges related to religious activity arising during the rapid digitalization of society and the state, which the country faces as a developing, multinational and multi-confessional state. The established restrictions on religious freedom involving artificial intelligence technologies are considered along with the relevant criminal statistics. A legal assessment of the use of artificial intelligence as a tool for restricting the right to worship is given from the standpoint of international law, as well as with account of Chinese national legislation.
Practical significance: the research results can be used to elaborate a consistent legal framework for using artificial intelligence technologies to counteract extremism.
Objective: the paper aims to define the problems that juridical theory and practice face with the progress of AI technologies in everyday life, and to correlate these problems with the human-centered approach to exploring artificial intelligence (Human-Centered AI).
Methods: the research critically analyzes the relevant literature from various disciplines: jurisprudence, sociology, philosophy, and computer sciences.
Results: the article articulates the prospects and problems the legal system confronts with the advancement of digital technologies in general and AI tools in particular. The identified problems are correlated with the provisions of the human-centered approach to AI. The authors acknowledge the necessity for AI inventors, as well as the owners of companies participating in the race to develop artificial intelligence technologies, to place humans, not machines, at the focus of attention as the primary value. In particular, special effort should be directed towards collecting and analyzing high-quality data for the development of artificial intelligence tools, taking into account that today AI tools are only as effective as the data on which they are trained.
The authors formulate three principles of human-centered AI for the legal sphere: 1) a human as a necessary link in the chain of making and executing legal decisions; 2) the need to regulate artificial intelligence at the level of international law; 3) the formulation of “taboos” on the introduction of artificial intelligence technologies.
Scientific novelty: the article represents one of the first attempts in the Russian-language scientific literature to outline the prospects of developing a human-centered AI methodology in jurisprudence. Based on an analysis of the specialized literature, the authors formulate three principles of including artificial intelligence in juridical theory and practice according to the assumptions of a human-centered approach to AI.
Practical significance: the principles and arguments the article advances can be helpful in the legal regulation of artificial intelligence technologies and their harmonious inclusion into legal practices.