Journal Article Archive
The integration of artificial intelligence (AI) into public administration marks a pivotal shift in the structure of political power, transcending mere automation to catalyze a long-term transformation of governance itself. The author argues that AI’s deployment disrupts the classical foundations of liberal democratic constitutionalism — particularly the separation of powers, parliamentary sovereignty, and representative democracy — by enabling the emergence of algorithmic authority (algocracy), in which decision-making is centralized in opaque, technocratic systems. Drawing on political theory, comparative case studies, and interdisciplinary analysis, the researcher traces how AI reconfigures power dynamics through three interconnected processes: the erosion of transparency and accountability due to algorithmic opacity; the marginalization of legislative bodies as expertise and data-driven rationality come to dominate policymaking; and ideological divergence in AI governance, reflecting competing visions of legitimacy and social order. The article highlights that AI’s influence extends beyond technical efficiency, fundamentally altering the balance of interests among social groups and institutions. While algorithmic governance promises procedural fairness and optimized resource allocation, it risks entrenching epistocratic rule — where authority is concentrated in knowledge elites or autonomous systems — thereby undermining democratic participation. Empirical examples, such as AI-driven predictive policing and legislative drafting tools, illustrate how power consolidates in executive agencies and technocratic networks, bypassing traditional checks and balances. The study examines the paradox of trust in AI systems: while citizens in authoritarian regimes exhibit high acceptance of algorithmic governance, democracies grapple with legitimacy crises as public oversight diminishes.
The author contends that the “new structure of power” will hinge on reconciling AI’s transformative potential with safeguards for human dignity, pluralism, and constitutionalism. The article proposes a reimagined framework for governance — one that decentralizes authority along lines of thematic expertise rather than institutional branches, while embedding ethical accountability into algorithmic design. The long-term implications demand interdisciplinary collaboration, adaptive legal frameworks, and a redefinition of democratic legitimacy in an era where power is increasingly exercised by code rather than by humans.
The article offers a comprehensive analysis of the highly relevant problem of ensuring the transparency and explainability of public administration bodies amid the ever-increasing introduction of automated decision-making and artificial intelligence systems into their operations. The authors focus on the legal, organisational, and technical mechanisms designed to implement the principles of transparency and explainability, as well as on the challenges to their operation. The purpose is to describe the existing and proposed approaches in a comprehensive and systematic manner, to identify the key risks caused by the opacity of automated decision-making systems, and to evaluate critically the potential of various tools to minimise such risks. The methodological basis of the study comprises general scientific methods (analysis, synthesis, the systems approach) and specialised methods of legal science, including doctrinal and comparative legal analysis. The work explores the conceptual foundations of the principle of transparency in public administration under conditions of technological transformation. In particular, it examines the “black box” problem, which undermines trust in state institutions and creates obstacles to judicial protection. It analyses preventive (ex ante) legal mechanisms, such as mandatory disclosure of the use of automated decision-making systems, of the order and logic of their operation, and of information on the data used, as well as the introduction of pre-audit, certification, and human rights impact assessment procedures. Ex post legal mechanisms are also reviewed, including the evolving concept of the “right to explanation” of a particular decision, the use of counterfactual explanations, and ensuring that users have access to the data that gave rise to a particular automated decision. The authors pay particular attention to the inextricable link between legal requirements and institutional and technical solutions.
The main conclusion is that none of the mechanisms under review is universally applicable. The desired effect can be achieved only through their comprehensive application, their adaptation to the specific context and level of risk, and the close integration of legal norms with technical standards and practical tools. The study highlights the need to further refine legislation detailing the responsibilities of developers and operators of automated decision-making systems, and to foster a culture of transparency and responsibility that maintains public administration accountability in the interests of society and every citizen.
Informational privacy, often referred to as data privacy or data protection, concerns an individual’s right to control how their personal information is collected, used, and shared. Recent AI developments have captivated the world, and the Indian population, too, is living through the cyber-revolution. India is gradually becoming dependent on technology for the majority of services obtained in daily life. The use of the internet and the Internet of Things leaves traces of digital footprints that generate big data. This data can be personal as well as non-personal in nature. Such data about individuals can be used to infer their socio-economic profile, culture, lifestyle, and personal information, such as love life, health, well-being, sexual preferences, sexual orientation, and various other individual traits. Issues such as data breaches, however, have also exposed users of information technology to various risks, including cyber-crimes and other fraudulent practices. This article critically analyses the recently enacted Digital Personal Data Protection Act, 2023 (DPDP Act) in light of the following questions: How does it address the issues of informational privacy and data processing? What measures have been envisaged under the DPDP Act for the protection of informational privacy? How are individual rights with respect to data protection balanced against the legitimate state interest in ensuring the safety and security of the nation? Is this right available only against the State, or against non-State actors as well? Having critically analysed the DPDP Act, the article calls for its further refinement in various areas, suggesting, in particular, that the Act should require critical decisions based on personal data to undergo human review, ensuring they are not solely the result of automated data processing.