Abstract

Artificial Intelligence’s application in Carbon Capture and Storage technology presents significant benefits but also poses new challenges in legal regulation. Given the potential for transboundary effects and the need for globally coordinated action, the existing legal principles need to be re-evaluated and expanded upon to ensure the responsible use of AI, especially in Carbon Capture and Storage technology.

Thus, firstly, this Blog Post gives a thematic introduction to Carbon Capture and Storage technology in India. Secondly, this Blog Post examines the use of Artificial Intelligence in Carbon Capture and Storage technology. Thirdly, this Blog Post critically analyzes the need for regulating the use of Artificial Intelligence in Carbon Capture and Storage technology. Lastly, this Blog Post offers recommendations to address the absence of appropriate laws and regulations governing the use of Artificial Intelligence in Carbon Capture and Storage technology in India.

In continuation of The Regulatory Gap in AI-enabled Carbon Capture and Storage Technology - Part I.

C. International Law and Jurisprudence

The role of international environmental law, particularly as delineated in international declarations and exemplified in case law, is significant in framing the context for AI’s application in CCS technology. A detailed exploration of these international instruments and the related case law is needed to fully comprehend the potential impact of AI laws on CCS technology.

The duty to not cause environmental damage is fundamental to the principles of international environmental law. It is codified in Principle 21 of the Stockholm Declaration (1972) and Principle 2 of the Rio Declaration (1992), which urge States to avoid damaging the environment of other States or regions beyond national jurisdiction. The principle is rooted in customary international law, which arises from consistent State practice accompanied by a sense of legal obligation (opinio juris).

One could argue that the advent of AI and its applications, such as CCS, falls under the purview of the aforementioned principle. This is particularly salient, given that deploying AI in CCS technology could lead to environmental consequences transcending national borders. However, the abstractness of the principle, coupled with the nascent nature of AI and its specific applications in CCS, presents challenges. Determining the extent and nature of the responsibility that rests on a State’s shoulders when using AI in CCS technology involves a high degree of interpretation and assumption. The legal community has yet to establish a consensus on interpreting these principles in light of AI developments.

Examining the application of the aforementioned principle in landmark international jurisprudence can provide some guidance. The International Court of Justice (ICJ), in its Advisory Opinion on the Legality of the Threat or Use of Nuclear Weapons (1996), posited that the general obligation of States to ensure that activities within their jurisdiction and control respect the environment of other States or areas beyond national control is now part of the “corpus” of international law relating to the environment. Similar reasoning was upheld in its Gabčíkovo-Nagymaros (Hungary/Slovakia) judgment of 1997. Thus, when a State adopts the use of AI in its CCS technology, it bears an international obligation to be mindful of the impacts and consequences of such use on other States beyond its jurisdiction and control.

Further, the spirit of the aforementioned jurisprudence can guide the direction of AI laws relating to CCS technology. They underline a clear principle that States must ensure their activities do not cause environmental harm to others. Therefore, as AI’s role in CCS technology evolves, States must ensure that AI does not catalyze environmental damage, whether through data mismanagement, unintended side-effects of CCS processes, or other means. This duty is not only moral but also, arguably, a legal obligation that could be interpreted from these judgments. The Trail Smelter arbitration and the ICJ’s ruling in the Corfu Channel case (1949) are further examples of the acknowledgment of this duty across different forms of dispute resolution. The essence of these decisions highlights the obligation of States to prevent their territories from being used to the detriment of other States. The use of AI in CCS technology certainly falls within the ambit of these decisions because CCS technology, when powered by AI, could potentially lead to substantial environmental impact not limited to the boundaries of the State deploying it.

For instance, if an AI model incorrectly assesses carbon sequestration capacity and causes an oversaturation of carbon deposits, it could lead to unanticipated carbon leaks. These leaks may extend beyond national boundaries, causing environmental damage in other jurisdictions. This is not mere speculation; evidence suggests that CCS operations could indeed exert transboundary effects, including through potential transboundary migration of injected CO2 and common CCS operations between States.
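To make this failure mode concrete, consider the following minimal Python sketch. It is purely illustrative: the site capacity, the ±15% noise model, and the safety factor are all assumptions rather than data from any real CCS project. It contrasts injecting against a model’s raw point estimate with injecting against a conservative lower bound:

```python
# Purely illustrative: why acting on an AI capacity model's point estimate
# is riskier than acting on a conservative lower bound. All figures invented.
import random

random.seed(42)

def model_capacity_estimate(true_capacity_mt: float) -> float:
    """Stand-in for an AI model's prediction of storable CO2 (megatonnes).
    Real models carry error; here we simulate it as +/-15% noise."""
    return true_capacity_mt * random.uniform(0.85, 1.15)

def precautionary_plan(estimate_mt: float, uncertainty: float,
                       safety_factor: float = 0.8) -> float:
    """Inject against a discounted lower bound, not the raw point estimate."""
    return estimate_mt * (1 - uncertainty) * safety_factor

true_capacity = 100.0  # megatonnes the site can actually hold (unknown to the operator)
estimate = model_capacity_estimate(true_capacity)

naive_plan = estimate  # trust the model outright
cautious_plan = precautionary_plan(estimate, uncertainty=0.15)

print(f"model estimate: {estimate:6.1f} Mt")
print(f"naive plan    : {naive_plan:6.1f} Mt -> exceeds true capacity: {naive_plan > true_capacity}")
print(f"cautious plan : {cautious_plan:6.1f} Mt -> exceeds true capacity: {cautious_plan > true_capacity}")
```

The structure, not the numbers, is the point: a regulatory requirement for explicit uncertainty margins of this kind would translate the precautionary principle discussed below into an operational constraint.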

The risk multiplies with the proliferation of AI-based systems across digital ecosystems, making them virtually transnational. This could lead to outcomes that are geographically dispersed, sparking a chain reaction of events affecting end-user rights and the global environment.

Considering these possibilities, it is imperative to incorporate AI’s role in CCS technology within the scope of these legal principles. It is of utmost necessity to address the transnational nature of these operations, the potential for transboundary effects, and the responsibility of States for the actions originating within their territories.

Consequently, the application of AI in CCS technology needs to be embedded within the broader framework of the principles of international environmental law. Especially relevant are principles such as the responsible use of AI and the conduct of proper societal and ecological impact assessments. The precautionary principle particularly underscores the need for risk management.

However, the reality is that the existing legal framework is insufficiently equipped to tackle the unique challenges presented by the intersection of AI and CCS technology. The application of AI in this domain calls for more comprehensive, targeted regulatory measures. These measures must address not only the development and design of AI systems but also their deployment in CCS and related climate operations.

Moreover, the rapid evolution of AI systems is producing autonomous decision-making capabilities increasingly detached from human intervention or involvement. As AI systems become more autonomous, the risks associated with their operation multiply. It is vital, therefore, to establish a robust legal framework that provides oversight over these systems, regulating their operation and ensuring they do not cause environmental harm.

D. Regulating the Usage of AI in CCS Technology through EU’s AI Act, 2023

The recent paradigm shifts in global AI legislation, punctuated by the EU’s Artificial Intelligence Act of June 2023 (hereinafter referred to as the “AI Act”), have far-reaching implications for various domains. One such critical field where the ramifications of this legislation reverberate intensely is the use of AI in CCS technology.

The intersection of AI and CCS technology creates a multifaceted, unprecedented fusion of the technological and environmental spheres. This synergy raises the critical question of legal parameters and governance. The following analysis seeks to critically examine how the EU’s AI Act may inform, influence, and inevitably govern the application of AI in CCS technologies.

    i. Interpreting the AI Act through the lens of CCS Technology

The European Union’s AI Act, a landmark legal instrument, is a comprehensive framework designed to manage the risks and challenges posed by AI technologies while bolstering their beneficial applications. The relevance of such legislation to CCS technology is not mere abstract conjecture but is grounded in the elemental characteristics of the technology itself.

The AI Act, under Article 5, enumerates certain “Prohibited AI practices” that are deemed to pose an unacceptable level of risk to individual rights and safety. For instance, the AI Act prohibits practices that may manipulate political campaigns or violate privacy through inappropriate use of facial recognition technology.

A careful reading of the AI Act informs us that while CCS technology may not necessarily implicate the specific prohibited practices outlined in the AI Act, the general ethos of these prohibitions is instructive. The principle of safety and respect for individual rights, as embedded in these prohibitions, could potentially translate into regulations concerning the safety of data handling in CCS technology and the ethical implications of AI’s decision-making processes within it.

    ii. Implication of Fines and Penalties under the AI Act for CCS Technology

One of the core components of the AI Act is its provision for administrative fines in cases of violation of its terms. Specifically, a breach of the “Prohibited AI practices,” as delineated in Article 5, can trigger a fine of up to €40,000,000, or up to 7% of a company’s global turnover in the prior year. The severity of these penalties underscores the EU’s commitment to maintaining ethical AI practices and protecting individual rights and safety.
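For a sense of scale, the following back-of-the-envelope Python sketch computes that exposure, assuming (per the June 2023 Parliament text) that the higher of the fixed cap and the turnover-based cap applies; the operator and its turnover figure are hypothetical:

```python
# Illustrative only: maximum fine exposure for a breach of Article 5,
# assuming the higher of the two caps applies (June 2023 Parliament text).
def max_article5_fine(global_turnover_eur: float) -> float:
    fixed_cap = 40_000_000.0                   # EUR 40 million
    turnover_cap = 0.07 * global_turnover_eur  # 7% of prior-year global turnover
    return max(fixed_cap, turnover_cap)

# A hypothetical CCS operator with EUR 2 billion in global turnover:
print(f"EUR {max_article5_fine(2_000_000_000):,.0f}")  # EUR 140,000,000
```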

The implications for CCS technology are significant. While the actual processes involved in CCS might not directly implicate individual rights or safety, the broader AI ecosystem involved in optimizing these CCS processes does hold the potential for violations. AI, the driving force behind these CCS processes, handles massive volumes of data, and therein lies the potential for mishandling, misinterpretation, or unethical data usage. Should any such violations occur, the stringent penalties outlined in the AI Act would apply, thereby providing a robust accountability mechanism.

    iii. Obligations under the AI Act and their Relevance to the CCS Technology

The AI Act introduces an array of obligations, including for AI systems that pose a limited risk. These obligations are not merely punitive but are preventative and constructive in nature. They include, among others, an obligation to register “Foundation Models” in an EU database prior to entering the market and transparency obligations for providers of generative AI systems.

These obligations offer a profound perspective on the potential regulatory obligations for AI within CCS technology. The registration requirement for “Foundation Models” reinforces the call for transparency in AI processes, a principle that could also extend to CCS. By ensuring that all foundational AI models used within CCS technology are registered, the AI Act can ensure a level of accountability and traceability, thereby providing a failsafe against unethical practices or manipulations.

Further, the transparency obligations under the AI Act, particularly the need to disclose when content is generated by AI, have far-reaching implications for CCS technology. Given the significance of AI in driving CCS processes, any decision, prediction, or output generated by the AI must be clearly labeled as such, ensuring transparency and reinforcing trust in the technology.
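One lightweight way such labeling could be operationalized is sketched below; the dataclass, its field names, and the model identifier are hypothetical illustrations, not anything the AI Act itself prescribes:

```python
# Hypothetical provenance wrapper: every AI-generated CCS output carries an
# explicit machine-generated label. Field names are illustrative, not statutory.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class LabeledOutput:
    value: float                   # e.g., a predicted injection rate
    model_id: str                  # which AI system produced the output
    generated_by_ai: bool = True   # the explicit disclosure flag
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def predict_injection_rate(site: str) -> LabeledOutput:
    # A real implementation would query the model with site-specific inputs;
    # here a constant stands in for the prediction (tonnes of CO2 per hour).
    return LabeledOutput(value=1.8, model_id="ccs-optimizer-v1")

out = predict_injection_rate("site-A")
print(f"[AI-generated: {out.generated_by_ai}] model={out.model_id} value={out.value}")
```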

    iv. The AI Act and its Institutional Framework – Implications for CCS Technology

The institutional framework envisaged by the AI Act is arguably one of its most pivotal components. It provides for an EU AI Office for filing complaints regarding AI and a “national supervisory authority” in each Member State to oversee the implementation and ongoing use of the AI Act.

For CCS technology, the implications of this institutional framework are profound. The existence of a designated authority for overseeing AI’s application within the technology could facilitate more efficient, effective, and ethical use of AI. Moreover, providing a platform for lodging complaints against unethical or harmful AI practices within CCS processes could significantly boost public trust and engagement in CCS technology.

    v. The Controversial Aspects of the AI Act and their Relevance to CCS Technology

Notably, the AI Act is not devoid of controversy. It has been subject to criticism, particularly from the technology industry, which argues that the Act may limit AI development and impede global competitiveness for AI developers within the EU. In this context, it is necessary to critically examine how these controversial aspects of the Act could influence the use of AI in CCS technology.

There is a legitimate concern that a stringent regulatory regime might inadvertently stifle innovation in AI applications for CCS. By imposing heavy fines and rigorous compliance obligations, the AI Act could disincentivize the integration of AI into CCS processes, thereby hindering technological advancements in the fight against climate change.

However, while acknowledging these legitimate concerns, it is equally imperative to consider the potential benefits of such a comprehensive regulatory framework. The AI Act not only aims to prevent harmful AI practices but also endeavors to foster an AI ecosystem that respects individual rights and democratic values. It promotes transparency, accountability, and ethical AI practices, which could significantly enhance the public’s trust in AI applications in CCS technology.

Moreover, by establishing legal safeguards against the misuse of AI, the AI Act could arguably stimulate responsible innovation, incentivizing AI developers to design systems that not only improve the efficacy of CCS processes but also uphold the principles of safety, transparency, and respect for human rights.

    vi. The EU AI Act – A Model Legislation in India?

While the EU AI Act may stand as a well-articulated regulatory framework for AI, including its applications in CCS technology, adopting it in its totality in India would present distinct challenges. Primarily, the onerous requirements set forth by the EU AI Act might inadvertently hinder the ease of doing business in India, particularly for small and medium-sized enterprises (“SMEs”) that may lack the resources to comply with such stringent standards.

Furthermore, the renewable energy conditions in the EU and India differ substantially. The EU, with its established renewable energy sector, may have the luxury to implement rigorous AI regulations without jeopardizing its broader energy and climate goals. On the other hand, India's energy landscape is marked by a delicate balance between burgeoning demand, the imperative to expand renewable energy, and the need to enhance energy access across diverse industries. A direct replication of the EU AI Act could impede India's efforts to innovate within the CCS domain, potentially stalling vital progress in climate change mitigation.

Additionally, the socioeconomic and political contexts in India are notably different from the EU. A legal framework transplanted from the EU might not align seamlessly with India’s unique cultural, economic, and regulatory environment. Such a misalignment might create friction and uncertainty, leading to a stifling of localized innovation.

In conclusion, while the principles and standards of the EU AI Act offer valuable insights, their wholesale adoption in India would require careful consideration of factors such as the suitability and relevance of the EU AI Act to Indian conditions, along with any other factors Indian policymakers may deem fit, keeping in view the specific needs of businesses, the energy sector, and the technology sector. It would be prudent for Indian policymakers to craft laws and regulations that draw inspiration from the EU AI Act but tailor them to India’s unique energy landscape, business ecosystem, and societal needs. Such a nuanced approach would safeguard against over-burdensome laws and regulations while still promoting the responsible and innovative use of AI in CCS technology within the Indian context.

Conclusion: What is the Way Forward for India?

Addressing the legal and regulatory gap for AI in CCS in India necessitates an approach that is comprehensive in scope, drawing from existing legislative instruments while recognizing and accommodating the unique challenges posed by AI technology. The Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011 (SPDI Rules), the proposed Digital Personal Data Protection Bill, 2022, and the proposed Digital India Act, 2023, are among the crucial legislative references in this regard.

Firstly, an essential step towards plugging this gap is integrating AI governance principles into existing data privacy laws, especially those governing SPDI. The SPDI Rules primarily govern data collection and processing in India. However, they were formulated at a time when the use of AI was not as pervasive, and consequently, they do not adequately address AI-related considerations. With the rise of AI in CCS and other sectors, there is a pressing need to update these rules to accommodate AI-specific concerns. Such updates could include enhanced consent mechanisms for AI-related data processing, stricter data minimization and storage limitation principles, and robust rights to explanation for AI-based decisions. Further, basic principles from the GDPR may be imported into the 2011 Rules. Additionally, the EU’s AI Act, 2023 could act as a base to establish stricter controls on the collection, use, and sharing of data by AI systems. This could include more robust consent mechanisms, greater transparency about how data is used, and stricter penalties for data breaches.
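By way of illustration only, an enhanced consent gate for AI-related processing could take the following shape; the record structure and purpose strings are invented for this sketch and are not drawn from the SPDI Rules or the GDPR:

```python
# Hypothetical sketch of a purpose-limited consent check before AI processing.
# The fields and purpose names are invented, not taken from any statute.
from dataclasses import dataclass

@dataclass(frozen=True)
class ConsentRecord:
    subject_id: str
    purposes: frozenset   # purposes the data principal actually consented to

def may_process(record: ConsentRecord, purpose: str) -> bool:
    """AI-related processing proceeds only for an explicitly consented purpose."""
    return purpose in record.purposes

consent = ConsentRecord("subject-42", frozenset({"ccs_monitoring"}))
print(may_process(consent, "ccs_monitoring"))   # True
print(may_process(consent, "model_training"))   # False: fresh consent needed
```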

Secondly, the proposed 2022 Bill can play a pivotal role in the future regulation of AI in India. The 2022 Bill, as it currently stands, includes several provisions that could influence AI use, such as data localization requirements, enhanced individual rights, and a more robust enforcement mechanism. As the 2022 Bill is still under consideration, it would be prudent to ensure its final form is AI-ready. This could involve clarifying the bill's position on AI technologies, specifying the responsibilities of data fiduciaries in AI deployments, and providing for more robust regulatory oversight on high-risk AI use cases.

Thirdly, under the broader umbrella of the Digital India initiative, the draft 2023 Act presents another legislative opportunity to address the AI regulatory gap. As a part of the draft 2023 Act's mandate to transform India into a digitally empowered society, it could be leveraged to foster a healthy AI ecosystem in India that promotes innovation while safeguarding individual rights. For example, the draft 2023 Act could include provisions for AI literacy and capacity building, support for AI research and development, and policies to encourage responsible AI adoption in the public and private sectors.

Fourthly, the Electricity Act, 2003 (“the 2003 Act”) needs to be updated through careful amendments to align with current technological advancements in AI integration with CCS. While the introduction of new laws, such as the proposed 2022 Bill and the draft 2023 Act, is a significant step toward regulating the data and digital sphere, there is a clear necessity to dovetail these legal frameworks with the 2003 Act. This can be done as follows:

(i) Provisions must be crafted within the 2003 Act to specifically address the ethical, environmental, and safety concerns related to the utilization of AI in CCS technology. This would involve setting clear standards, compliance requirements, and regulatory oversight mechanisms for AI-enabled CCS processes.

(ii) Synergies must be created between the 2003 Act and the emerging digital laws to ensure a cohesive regulatory landscape that covers all facets of AI's application in CCS, including data privacy, cybersecurity, and consumer protection.

Thus, these amendments would foster responsible innovation, ensuring that AI's transformative potential within CCS technology is harnessed without compromising legal, ethical, or environmental standards.

Fifthly, given that the regulatory challenge posed by AI is not limited to any one area of law, it would be beneficial to adopt a cross-sectoral approach when addressing this regulatory gap. This could include reviewing and updating other relevant laws and regulations that govern energy and electricity (the 2003 Act and its allied Rules and Regulations), environmental protection (the Environment (Protection) Act, 1986), and competition (the Competition Act, 2002) to ensure that they are prepared for the age of AI. Such a holistic approach would ensure that India's legal framework can effectively manage the risks and harness the benefits of AI in CCS and other sectors.

Sixthly, new legislation could introduce requirements for AI transparency, mandating that AI systems used in CCS technologies be designed in a way that allows their decision-making processes to be understood by humans. This could involve using explainability techniques such as feature importance, partial dependence plots, or SHAP (SHapley Additive exPlanations) values. In addition, AI system developers might be required to provide documentation explaining the system’s design, training process, and expected operation.
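As a minimal illustration of one of the named techniques, the Python sketch below computes permutation feature importance for a toy model trained on synthetic reservoir data; the feature names and target are invented, and a real CCS model would be far more complex:

```python
# Minimal sketch: permutation feature importance on synthetic reservoir data.
# Feature names and the target relationship are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(10, 30, n),      # reservoir pressure (MPa)
    rng.uniform(0.05, 0.3, n),   # porosity (fraction)
    rng.uniform(30, 90, n),      # temperature (deg C)
])
# Synthetic "storable CO2" target, dominated by porosity, plus noise
y = 200 * X[:, 1] + 0.5 * X[:, 0] + rng.normal(0, 2, n)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, imp in zip(["pressure", "porosity", "temperature"],
                     result.importances_mean):
    print(f"{name:12s} importance: {imp:8.3f}")
```

Documentation of this kind, or equivalent SHAP summaries, is the sort of artifact a regulator could require developers to file alongside the system documentation contemplated above.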

Seventhly, ethical considerations also need to be at the forefront of new legislation. There is an urgent need to ensure that AI systems are developed and used in a way that aligns with societal values and ethical norms. This is particularly pertinent in the context of CCS, where the potential for negative impacts on the environment and human health is significant. New legislation could establish a set of ethical principles for AI use in CCS technologies, such as fairness, transparency, and accountability. It could also establish mechanisms for ethical oversight of these technologies, such as ethical review boards.

Eighthly, introducing AI in CCS technologies could lead to significant disruptions in the job market. As AI systems become more capable, there is the potential for job displacement. As such, new legislation may be needed to ensure that workers are protected and that adequate measures are in place to help those affected by job displacement reskill and find new employment.

Lastly, international coordination will be essential in regulating AI use in CCS technologies. Given the global nature of climate change and technological innovation, national laws will not be sufficient to address the complexities and challenges of this issue. Instead, international legal frameworks will need to be developed. Such frameworks could establish global standards for AI use in CCS technologies, facilitate information and technology sharing between countries, and create mechanisms for dispute resolution. Developing these international legal frameworks will not be easy, given the diverse interests and perspectives involved. However, it will be crucial to ensure that the benefits of AI use in CCS technologies can be realized in an ethical, sustainable, and equitable way.

Therefore, while India’s current laws may provide a starting point, there is a clear and urgent need for new legislation, like the EU’s AI Act, 2023, to regulate the use of AI in CCS technologies. This legislation must be forward-looking, adaptable, and comprehensive, addressing various issues, including transparency, data privacy, ethics, employment, and international coordination. The challenge is significant, but so are the potential rewards – a legal framework that can foster innovation and ensure the responsible and effective use of AI in our ongoing efforts to combat climate change.

Disclaimer

The views and opinions expressed by the Authors are personal.

About the Authors

Mr. Varun Pathak is a Partner (Dispute Resolution) at Shardul Amarchand Mangaldas & Co., New Delhi. He is an Advocate-on-Record at the Supreme Court of India. He has completed his LL.M. in Corporate and Commercial Laws from the London School of Economics (LSE).

Mr. Pushpit Singh is a 5th-Year B.B.A. LL.B. Student at Symbiosis Law School, Hyderabad. He is a freelance Corporate and Disputes Paralegal. He is also an Indian Institute of Arbitration and Mediation (IIAM) Panel Arbitrator.

Editorial Team

Managing Editor: Naman Anand
Editors-in-Chief (Blog): Abeer Tiwari & Muskaan Singh
Editor-in-Chief (Journal) and Senior Editor: Hamna Viriyam
Associate Editor: Pushpit Singh
Junior Editor: Ishaan Sharma
