Journal Article Archive

The Artificial Intelligence Influence on Structure of Power: Long-Term Transformation (2025)
Issue: No. 2 (2025)
Authors: Nizov Vladimir A.

Integration of artificial intelligence (AI) into public administration marks a pivotal shift in the structure of political power, transcending mere automation to catalyze a long-term transformation of governance itself. The author argues that AI’s deployment disrupts the classical foundations of liberal democratic constitutionalism — particularly the separation of powers, parliamentary sovereignty, and representative democracy — by enabling the emergence of algorithmic authority (algocracy), where decision-making is centralized in opaque, technocratic systems. Drawing on political theory, comparative case studies, and interdisciplinary analysis, the researcher traces how AI reconfigures power dynamics through three interconnected processes: the erosion of transparency and accountability due to algorithmic opacity; the marginalization of legislative bodies as expertise and data-driven rationality come to dominate policymaking; and the ideological divergence in AI governance, reflecting competing visions of legitimacy and social order. The article highlights that AI’s influence extends beyond technical efficiency, fundamentally altering the balance of interests among social groups and institutions. While algorithmic governance promises procedural fairness and optimized resource allocation, it risks entrenching epistocratic rule — where authority is concentrated in knowledge elites or autonomous systems — thereby undermining democratic participation. Empirical examples, such as AI-driven predictive policing and legislative drafting tools, illustrate how power consolidates in executive agencies and technocratic networks, bypassing traditional checks and balances. The study examines the paradox of trust in AI systems: while citizens in authoritarian regimes exhibit high acceptance of algorithmic governance, democracies grapple with legitimacy crises as public oversight diminishes. The author contends that the “new structure of power” will hinge on reconciling AI’s transformative potential with safeguards for human dignity, pluralism, and constitutionalism. The article proposes a reimagined framework for governance — one that decentralizes authority along lines of thematic expertise rather than institutional branches, while embedding ethical accountability into algorithmic design. The long-term implications demand interdisciplinary collaboration, adaptive legal frameworks, and a redefinition of democratic legitimacy in an era where power is increasingly exercised by code rather than by humans.

Transparency in Public Administration in the Digital Age: Legal, Institutional, and Technical Mechanisms (2025)
Issue: No. 2 (2025)
Authors: Kabytov Pavel P., Nazarov Nikita A.

The article offers a comprehensive analysis of the highly relevant topic of ensuring the transparency and explainability of public administration bodies amid the ever-increasing introduction of automated decision-making systems and artificial intelligence into their operations. The authors focus on the legal, organisational and technical mechanisms designed to implement the principles of transparency and explainability, as well as on the challenges to their operation. The purpose is to describe the existing and proposed approaches in a comprehensive and systematic manner, identify the key risks caused by the non-transparency of automated decision-making systems, and critically evaluate the potential of various tools to minimise such risks. The methodological basis of the study comprises general scientific methods (analysis, synthesis, the systems approach) and specific methods of legal science, including formal legal and comparative legal analysis. The work explores the conceptual foundations of the principle of transparency of public administration under conditions of technological transformation. In particular, it explores the “black box” problem, which undermines trust in state institutions and creates obstacles to judicial protection. The article analyses preventive (ex ante) legal mechanisms, such as mandatory disclosure of the use of automated decision-making systems, of the order and logic of their operation, and of information on the data used, as well as the introduction of pre-audit, certification and human rights impact assessment procedures. Ex post legal mechanisms are also reviewed, including the evolving concept of the “right to explanation” of a particular decision, the use of counterfactual explanations, and ensuring that users have access to the data that gave rise to a particular automated decision. The authors pay particular attention to the inextricable link between legal requirements and institutional and technical solutions. The main conclusion is that none of the mechanisms under review is universally applicable. The necessary effect can only be achieved through their comprehensive application, adaptation to the specific context and level of risk, and close integration of legal norms with technical standards and practical tools. The study highlights the need to further improve laws detailing the responsibilities of developers and operators of automated decision-making systems, and to foster a culture of transparency and responsibility so as to maintain the accountability of public administration in the interests of society and every citizen.

Trust in Artificial Intelligence: Regulatory Challenges and Prospects (2025)
Issue: No. 2 (2025)
Authors: Vashurina Svetlana S.

The last few years have witnessed the rapid penetration of artificial intelligence (AI) into different walks of life, including medicine, the judicial system, public governance and other important activities. Despite the multiple benefits of these technologies, their widespread dissemination raises serious concerns as to whether they are trustworthy. The article provides an analysis of the key factors behind public mistrust in AI and discusses ways to build confidence. To understand the reasons for mistrust, the author draws on the historical context, the findings of social research, and judicial practice. Special focus is placed on the security of AI use, the visibility of AI to users, and responsibility for decision-making. The author also discusses the current regulatory models in this area, including the development of a universally applicable legal framework, regulatory sandboxes and self-regulation mechanisms for the sector, with multidisciplinary collaboration and the adaptation of the current legal system becoming key factors in this process. Only such an approach will produce the balanced development and use of AI systems in the interests of all stakeholders, from vendors to end users. For a more exhaustive coverage of the subject, the following methods are employed: the general methods of analysis, synthesis and systematization, and special legal (comparative legal and historical legal) research methods. In analyzing the available data, the author argues for a comprehensive approach to making AI trustworthy. Based on the study’s findings, the following hypothesis is proposed: trust in AI is the cornerstone of efficient regulation of AI development and use across various areas. The author is convinced that, with AI made transparent, safe and reliable, and provided with human oversight through adequate regulation, the government will be able to maintain purposeful collaboration between humans and technology, thus setting the stage for AI use in critical infrastructures affecting the life, health and basic rights and interests of individuals.

Shaping Artificial Intelligence Regulatory Model: International and Domestic Experience (2025)
Issue: No. 2 (2025)
Authors: Buryaga Vladimir O., Djuzhoma Veronika V., Artemenko Egor A.

The article analyses AI regulatory models in Russia and other countries. The authors discuss key regulatory trends, principles and mechanisms, with a special focus on balancing incentives for technological development against the minimization of AI-related risks. Attention centers on three principal approaches: “soft law”, experimental legal regimes (ELR) and technical regulation. The research methodology covers a comparative legal analysis of AI-related strategic documents and legislative initiatives, such as the national strategies approved by the U.S., China, India, the United Kingdom, Germany and Canada, as well as regulations and codes of conduct. The authors also explore the domestic experience, including the 2030 National AI Development Strategy and the AI Code of Conduct, as well as the use of ELR under the Federal Law “On Experimental Legal Regimes for Digital Innovation in the Russian Federation”. The main conclusions can be summed up as follows. The vast majority of countries, including the Russian Federation, have opted for “soft law” (codes of conduct, declarations), which provides flexible regulation while avoiding excessive administrative barriers. Experimental legal regimes are crucial for validating AI applications, as they allow technologies to be tested in a controlled environment; in Russia, ELR are widely used in transportation, healthcare and logistics. Technical regulation, including standardization, helps foster security and confidence in AI, and the article notes the widespread development of national and international standards in this field. Special regulation (along the lines of the European Union AI Act) has not yet become widespread, although a draft law based on the risk-oriented approach is currently under discussion in Russia. The authors argue for the gradual, iterative development of a legal framework for AI to avoid erecting rigid regulatory barriers prematurely. They also note the importance of international cooperation and of adapting best practices to shape an efficient regulatory system.

Digital Technologies and Forensic Examination of Copyright Works (2025)
Issue: No. 1 (2025)
Authors: Buzova N.V.

With the ability to enable remote trial sessions and to promptly find and forward documents, digital technologies are increasingly used in judicial proceedings worldwide, including in Russia. However, in view of possible risks, artificial intelligence is used in courts only in test mode, with the forensic examination of copyright works being one likely application. The article discusses the benefits and risks of AI when used for forensic examination. It is argued that AI can serve only as a tool for forensic examination, and that common approaches applicable to all copyright works, as well as expert opinion templates, should be developed and made available to judges.

The Application of Artificial Intelligence in China’s Criminal Justice System (2025)
Issue: No. 1 (2025)
Authors: Guo Zhiyuan, Yang Jiajia

Influenced by advanced technologies, China’s criminal justice system has in recent years begun integrating artificial intelligence (AI) to assist judicial decision-making. AI has entered various areas such as criminal investigation, prosecution assistance, and sentencing support. However, the Chinese legal system has not yet comprehensively addressed the regulation of judicial AI technology. This paper aims to explore the application of AI in China’s criminal justice system and to propose a systematic regulatory framework for its future development. Part I provides an overview of the specific application scenarios of AI in China’s criminal justice system. Part II analyzes the general characteristics of judicial AI and the benefits it brings to the justice system. Part III examines the challenges limiting the further development of judicial AI and the potential risks associated with its application. Part IV proposes an inclusive regulatory framework to balance the tension and potential conflicts between judicial fairness and technological advancement. This research seeks to enhance the understanding of AI applications in China’s criminal justice system and to identify and prevent potential judicial risks arising from their use.

Model Regulation of Artificial Intelligence and other Advanced Technologies (2025)
Issue: No. 1 (2025)
Authors: Tereschenko Ludmila K., Tokolov Alexander V.

The article discusses the legal regulation by the Interparliamentary Assembly of the CIS Member States of social relations involving AI and other advanced information technologies, identifiable regulatory gaps, the conceptual framework, an analysis of possible use scenarios and related risks, and the range of problems to be addressed by regulation on a priority basis. It contains a brief overview of how AI-related social relations are regulated in the CIS member states. While all these countries recognize the importance of such regulation, none has developed a clear understanding of a number of issues, which underscores the relevance of drafting a model law on AI technologies. The authors identify the following problems common to the regulation of these relations in the CIS member states: defining the regulatory scope and the parties concerned and, importantly, addressing the issues of liability, including which party (the AI technology rights holder, developer, system operator, etc.) will assume a particular type of liability (administrative, civil, financial, criminal) and in which cases. Another important aspect is also discussed and briefly analysed: digitization and advanced digital technologies are shaping “new” digital personal rights. The study aims to identify trends and opportunities for the public regulation of AI and other advanced digital applications. With this in mind, the authors discuss possible regulatory vectors in the given area in light of the risks related to the operational specifics of digital technologies, and identify the groups of social relations to be adequately addressed by legal regulation. With digitization covering an ever wider range of social relations, the problems to be addressed by law include the protection of personal rights and the prevention of discrimination against individuals and economic agents. The article employs a number of scientific methods of inquiry, both general and special, including the formal legal method. The general research methods include the systemic, dialectical, structural-systemic, analytical and synthetic, inductive and deductive methods, as well as abstraction and modelling. The article concludes that the CIS countries are at different regulatory stages in the discussed area and that comprehensive regulation is lacking, with only individual provisions in place to govern specific aspects of AI use. A model law, once developed, will make it possible to lay the groundwork for comprehensive regulation of these relations in national legislation.

A Comparative Perspective on the Future of Law in a Time of Artificial Intelligence (2025)
Issue: No. 1 (2025)
Authors: Cornelius Steve

The article explores the impact of AI on legal systems globally. It highlights how technology, particularly AI, disrupts social order and power dynamics, necessitating legal adaptation. The article categorizes global AI regulatory responses into four types: no response, reliance on existing technology regulations, fragmented solutions, and unified approaches. The European Union (EU) has adopted a unified approach with the Artificial Intelligence Act (AIA), aiming to harmonize AI rules, address risks, and stimulate AI development. The United States employs a piecemeal approach with the National Artificial Intelligence Initiative Act of 2020 and various state laws and executive orders. Australia lacks specific AI legislation, but it has an AI Action Plan focusing on economic benefits and talent development. South Africa’s National AI Policy Framework emphasizes economic transformation and social equity. The African Union’s Continental AI Strategy aims for socio-economic transformation while addressing AI risks. Canada has a Voluntary Code of Conduct and a proposed Artificial Intelligence and Data Act (AIDA). The article critiques current AI regulations for incomplete definitions and a lack of focus on the broader societal purpose of AI, stressing the need for regulations to consider ethical dimensions and societal impacts. It concludes that AI regulation must balance innovation with social order, human dignity, and safety, and emphasizes the urgent need to address AI’s energy and water consumption to prevent potential global instability.
