Journal Article Archive
The authors discuss the problem of digital facial recognition technologies in the context of the implementation of individual rights and freedoms. The analysis focuses on whether the use of these technologies is legitimate and on the interpretation of the provisions behind the underlying procedures. The authors note a significant range of goals to be addressed through the use of smart digital systems already at the goal-setting stage: economy, business, robotics, geological research, biophysics, mathematics, avionics, security systems, health, etc. Growing volumes of data and a broader range of technologically complex decision-making objectives make it necessary to systematize traditional methods and to develop new decision-making methodologies and algorithms. Progress in machine learning and neural networks will transform today’s digital technologies into self-sustained, self-learning systems intellectually superior to the human mind. Video surveillance coupled with smart facial recognition technologies serves above all public security purposes and can considerably impact modern society. The authors analyze legal approaches to upholding human rights as digital facial recognition systems are increasingly introduced into social practices in Russia, the European Union, the United Kingdom, the United States, and China. The purpose of the article is to shed light on the details of the statutory regulation of AI systems used for remote biometric identification of persons. Methods: formal logic, comparison, analysis, synthesis, correlation, generalization. Conclusions: the analysis confirms that facial recognition technologies are progressing considerably faster than their legal regulation. Deployment of such technologies makes ongoing surveillance possible, a form of collecting information on the private lives of individuals. The authors note that accounting for these factors requires amending national law in order to define the status of and rules of procedure for such data, as well as the ways to inform natural persons that information associated with them is being processed.
The last few years have witnessed a rapid penetration of artificial intelligence (AI) into different walks of life, including medicine, the judicial system, public governance, and other important activities. Despite the multiple benefits of these technologies, their widespread dissemination raises serious concerns as to whether they are trustworthy. The article analyzes the key factors behind public mistrust in AI and discusses ways to build confidence. To understand the reasons for mistrust, the author invokes the historical context, social study findings, and judicial practice. Special focus is placed on the security of AI use, the visibility of AI to users, and responsibility for decision-making. The author also discusses the current regulatory models in this area, including the development of a universally applicable legal framework, regulatory sandboxes, and self-regulation mechanisms for the sector, with multidisciplinary collaboration and adaptation of the existing legal system becoming key factors in this process. Only this approach will produce a balanced development and use of AI systems in the interest of all stakeholders, from vendors to end users. For a more exhaustive coverage of this subject, the following methods are employed: the general methods of analysis, synthesis, and systematization, and special legal (comparative legal and historical legal) research methods. In analyzing the available data, the author argues for a comprehensive approach to making AI trustworthy. The following hypothesis is proposed on the basis of the study’s findings: trust in AI is a cornerstone of efficient regulation of AI development and use in various areas. The author is convinced that, with AI made transparent, safe, and reliable, and subject to human oversight through adequate regulation, the government will maintain purposeful collaboration between humans and technology, thus setting the stage for AI use in critical infrastructures affecting the life, health, and basic rights and interests of individuals.
On October 18, 2024, the XIII International Scientific and Practical Conference “Law in the Digital Age” was held at the Faculty of Law of the Higher School of Economics (HSE). This year it was devoted to the topic of artificial intelligence (AI) and law, considered from the standpoint of both private and public law. The conference covered the civil law regime of artificial intelligence technologies and of objects created with their use, artificial intelligence and intellectual property law, and the topic of generative content and the protection of the interests of copyright holders. The topic of regulation and self-regulation of artificial intelligence, including artificial intelligence in Legal Tech, was also highlighted, as were the introduction of artificial intelligence technologies in labor relations (successes, failures, prospects) and the criminal law protection of digital economy and finance entities using elements of artificial intelligence. Thus, the conference attempted a comprehensive discussion of the role of law in the development of AI technologies. This approach made it possible to show the relationship between the methods of legal regulation in this area and their interaction in creating conditions for the development of AI technologies. The conference raised both practical and theoretical issues of the development of law under the new conditions, as well as problems in the development of legal education.
The article discusses the legal regulation by the Interparliamentary Assembly of the CIS Member States of social relations involving AI and other advanced information technologies, identifiable regulatory gaps, the conceptual framework, possible use scenarios and related risks, and the range of problems to be addressed by regulation on a priority basis. It contains a brief overview of how AI-related social relations are regulated in the CIS member states. While all these countries admit the importance of such regulation, none has developed a clear understanding of a number of issues, which underscores the relevance of developing a draft model law on AI technologies. The authors demonstrate the following common problems of regulating these relations in the CIS member states: identifying the regulatory scope and the parties concerned and, importantly, addressing the issues of liability, including which party (AI technology rights holder, developer, system operator, etc.) will assume a particular type of liability (administrative, civil, financial, criminal) and in what circumstances. Another important aspect is also discussed: digitization and advanced digital technologies are shaping “new” digital personal rights, which the authors analyze and briefly review. The study aims to identify trends and opportunities for the public regulation of AI and other advanced digital applications. With this in mind, the authors discuss possible regulatory vectors in the given area in light of the risks related to the operational specifics of digital technologies, and identify groups of social relations to be adequately addressed by legal regulation. With digitization covering an ever wider range of social relations, the problems to be addressed by law include the protection of personal rights and the prevention of discrimination against individuals and economic agents. The article employs a number of scientific methods of inquiry, both general and special research methods, including the formal law method. The general research methods include systemic, dialectic, structural systemic, analytical/synthetic, inductive and deductive methods, abstraction, and simulation. The article concludes that, while the CIS countries are at different regulatory stages in the discussed area, there is no comprehensive regulation, with only individual provisions in place to govern specific aspects of AI use. A model law, once developed, will make it possible to lay the groundwork for comprehensive regulation of the discussed relations in national legislation.
The article explores the impact of AI on legal systems globally. It highlights how technology, particularly AI, disrupts social order and power dynamics, necessitating legal adaptations. The article categorizes global AI regulatory responses into four types: no response, reliance on existing tech regulations, fragmented solutions, and unified approaches. The European Union (EU) has adopted a unified approach with the Artificial Intelligence Act (AIA), aiming to harmonize AI rules, address risks, and stimulate AI development. The United States employs a piecemeal approach with the National Artificial Intelligence Initiative Act of 2020 and various state laws and executive orders. Australia lacks specific AI legislation but has an AI Action Plan focusing on economic benefits and talent development. South Africa’s National AI Policy Framework emphasizes economic transformation and social equity. The African Union’s Continental AI Strategy aims for socio-economic transformation while addressing AI risks. Canada has a Voluntary Code of Conduct and a proposed Artificial Intelligence and Data Act (AIDA). The article critiques current AI regulations for incomplete definitions and a lack of focus on the broader societal purpose of AI, stressing the need for regulations to consider ethical dimensions and societal impacts. It concludes that AI regulation must balance innovation with social order, human dignity, and safety, emphasizing the urgent need to address AI’s energy and water consumption to prevent potential global instability.