Journal Article Archive
Informational privacy, often referred to as data privacy or data protection, concerns an individual’s right to control how their personal information is collected, used and shared. Recent developments in AI have captivated the world, and the Indian population, too, is living through the cyber revolution. India is gradually becoming dependent on technology for the majority of services obtained in daily life. Use of the internet and the Internet of Things leaves digital footprints that generate big data. This data can be personal as well as non-personal in nature. Such data about individuals can be utilised to understand their socio-economic profile, culture and lifestyle, as well as personal information such as love life, health, well-being, sexual preferences, sexual orientation and various other individual traits. Issues like data breaches, however, have also exposed users of information technology to various types of risks, such as cyber-crimes and other fraudulent practices. This article critically analyses the recently enacted Digital Personal Data Protection Act, 2023 (DPDP Act) in light of the following questions: How does it address the issues of informational privacy and data processing? What measures does the DPDP Act envisage for the protection of informational privacy? How are individual rights with respect to data protection balanced against the legitimate state interest in ensuring the safety and security of the nation? Is this right available only against the State, or against non-State actors as well? Having critically analysed the DPDP Act, the article calls for its further refinement in several areas; in particular, it suggests that the Act should require critical decisions based on personal data to undergo human review, ensuring they are not solely the result of automated data processing.
The article analyses AI regulatory models in Russia and other countries. The authors discuss key regulatory trends, principles and mechanisms, with a special focus on balancing incentives for technological development against the minimization of AI-related risks. Attention centers on three principal approaches: “soft law”, experimental legal regimes (ELR) and technical regulation. The research methodology covers a comparative legal analysis of AI-related strategic documents and legislative initiatives, such as the national strategies approved by the U.S., China, India, the United Kingdom, Germany and Canada, as well as regulations and codes of conduct. The authors also explore domestic experience, including the 2030 National AI Development Strategy and the AI Code of Conduct, as well as the use of ELR under the Federal Law “On Experimental Legal Regimes for Digital Innovation in the Russian Federation”. The main conclusions can be summed up as follows. The vast majority of countries, including the Russian Federation, have opted for “soft law” (codes of conduct, declarations), which provides flexible regulation while avoiding excessive administrative barriers. Experimental legal regimes are crucial for validating AI applications by allowing technologies to be tested in a controlled environment; in Russia, ELR are widely used in transportation, health care and logistics. Technical regulation, including standardization, helps foster security and confidence in AI, and the article notes the widespread development of national and international standards in this field. Special regulation (along the lines of the European Union AI Act) has not yet become widespread; a draft law based on the risk-oriented approach is currently under discussion in Russia. The authors argue for the gradual, iterative development of a legal framework for AI to prevent rigid regulatory barriers from emerging prematurely.
They also note the importance of international cooperation and of adapting best practices in shaping an efficient regulatory system.