{"id":7547,"date":"2023-08-11T10:18:03","date_gmt":"2023-08-11T04:48:03","guid":{"rendered":"https:\/\/ijpiel.com\/?p=7547"},"modified":"2023-08-11T10:18:04","modified_gmt":"2023-08-11T04:48:04","slug":"the-regulatory-gap-in-ai-enabled-carbon-capture-and-storage-technology-part-ii","status":"publish","type":"post","link":"https:\/\/ijpiel.com\/index.php\/2023\/08\/11\/the-regulatory-gap-in-ai-enabled-carbon-capture-and-storage-technology-part-ii\/","title":{"rendered":"The Regulatory Gap in AI-enabled Carbon Capture and Storage Technology- Part II"},"content":{"rendered":"\n<p style=\"text-align: justify;\"><strong style=\"color: #000000; font-size: x-large;\"><span style=\"font-family: 'Cormorant Garamond';\">Abstract<\/span><\/strong><\/p>\n\n\n\n<p style=\"text-align: justify;\"><span style=\"font-size: large; color: #000000;\">Artificial Intelligence\u2019s application in Carbon Capture and Storage technology presents\nsignificant benefits but also poses new challenges in legal regulation. Given the potential for\ntransboundary effects and the need for globally coordinated action, the existing legal principles\nneed to be re-evaluated and expanded upon to ensure the responsible use of AI, especially in\nCarbon Capture and Storage technology.<\/span><\/p>\n\n\n\n<p style=\"text-align: justify;\"><span style=\"font-size: large; color: #000000;\">Thus, <em>firstly<\/em>, this Blog Post gives a thematic introduction to Carbon Capture and Storage\ntechnology in India. <em>Secondly<\/em>, this Blog Post examines the use of Artificial Intelligence in Carbon\nCapture and Storage technology. <em>Thirdly<\/em>, this Blog Post critically analyzes the need for regulating\nthe use of Artificial Intelligence in Carbon Capture and Storage technology. 
<em>Lastly<\/em>, this Blog Post gives recommendations on how to solve the conundrum of the absence of appropriate laws and regulations governing the usage of Artificial Intelligence in Carbon Capture and Storage technology in India.<\/span><\/p>\n\n\n\n<p style=\"text-align: justify;\"><span style=\"font-size: large; color: #000000;\"><em>In continuation of The Regulatory Gap in AI-enabled Carbon Capture and Storage Technology- Part I.<\/em><\/span><\/p>\n\n\n\n<p style=\"text-align: justify;\"><em><strong style=\"color: #000000; font-size: x-large;\"><span style=\"font-family: 'Cormorant Garamond';\">C. International Law and Jurisprudence<\/span><\/strong><\/em><\/p>\n\n\n\n<p style=\"text-align: justify;\"><span style=\"font-size: large; color: #000000;\">The role of international environmental law, particularly as delineated in international declarations and exemplified in case laws, bears significance in framing the context for AI\u2019s application in CCS technology. A detailed, exhaustive exploration of these international instruments and related case laws is needed to fully comprehend the potential impact of AI laws on CCS technology.<\/span><\/p>\n\n\n\n<p style=\"text-align: justify;\"><span style=\"font-size: large; color: #000000;\">The duty not to cause environmental damage is fundamental to the principles of international environmental law. This is codified in Principle 21 of the Stockholm Declaration (1972) and Principle 2 of the Rio Declaration (1992), <a href=\"https:\/\/www.sciencedirect.com\/science\/article\/abs\/pii\/S175058362200175X?dgcid=rss_sd_all\">urging States to avoid<\/a> damaging the environment of other States or regions beyond national jurisdiction. 
This principle has its roots in the legal doctrine of customary international law, whereby States undertake a consistent practice due to a sense of legal obligation.<\/span><\/p>\n\n\n\n<p style=\"text-align: justify;\"><span style=\"font-size: large; color: #000000;\">One could argue that the advent of AI and its applications, such as CCS, falls under the purview of the aforementioned principle. This is particularly salient, given that deploying AI in CCS technology could lead to environmental consequences transcending national borders. However, the abstractness of the principle, coupled with the nascent nature of AI and its specific applications in CCS, presents challenges. Determining the extent and nature of the responsibility that lies on a State\u2019s shoulders when using AI in CCS technology involves a high degree of interpretation and assumption. The legal community has yet to establish a consensus on interpreting these principles in light of AI developments.<\/span><\/p>\n\n\n\n<p style=\"text-align: justify;\"><span style=\"font-size: large; color: #000000;\">Examining the application of the aforementioned principle in landmark international jurisprudence can provide some guidance. The International Court of Justice (ICJ), in its Advisory Opinion on the Legality of the Threat or Use of Nuclear Weapons (1996), <a href=\"https:\/\/www.sciencedirect.com\/science\/article\/abs\/pii\/S175058362200175X?dgcid=rss_sd_all\">posited that the general obligation<\/a> of States to ensure that activities within their jurisdiction and control respect the environment of other States or areas beyond national control is now part of the \u201ccorpus\u201d of international law relating to the environment. A similar reasoning was upheld in its Gab\u010d\u00edkovo-Nagymaros (Hungary\/Slovakia) judgment of 1997. 
Thus, this shows that when one <a href=\"https:\/\/www.sciencedirect.com\/science\/article\/abs\/pii\/S175058362200175X?dgcid=rss_sd_all\">State adopts the use of AI<\/a> in its CCS technology, it bears an international obligation to be mindful of the impacts and consequences of the usage of such AI in CCS on the other States beyond its jurisdiction and control.<\/span><\/p>\n\n\n\n<p style=\"text-align: justify;\"><span style=\"font-size: large; color: #000000;\">Further, the spirit of the aforementioned jurisprudence can guide the direction of AI laws relating to CCS technology. These decisions underline a clear principle that States must ensure their activities do not cause environmental harm to others. Therefore, as AI\u2019s role in the CCS technology evolves, States must ensure that AI does not catalyze environmental damage, whether through data mismanagement, unintended side-effects of CCS processes, or other means. This duty is not only moral but also, arguably, a legal obligation that could be interpreted from these judgments. The Trail Smelter arbitration and the ICJ\u2019s ruling in the Corfu Channel case (1949) are further examples of the acknowledgment of this duty in other <a href=\"https:\/\/www.sciencedirect.com\/science\/article\/abs\/pii\/S175058362200175X?dgcid=rss_sd_all\">forms of dispute resolution<\/a>. The essence of these decisions highlights the obligation of States to prevent their territories from being used to the detriment of other States. 
The use of AI in the CCS technology certainly falls\nwithin the ambit of these decisions because CCS technology, when powered by AI, could\npotentially lead to substantial environmental impact, not limited to the boundaries of the State\ndeploying it.<\/span><\/p>\n\n\n\n<p style=\"text-align: justify;\"><span style=\"font-size: large; color: #000000;\">For instance, if an AI model incorrectly assesses carbon sequestration capacity and causes an\noversaturation of carbon deposits, it could lead to unanticipated carbon leaks. These leaks may\nextend beyond national boundaries, causing environmental damage in other jurisdictions. This is\nnot mere speculation; evidence suggests that CCS operations could indeed exert transboundary\neffects, including through potential transboundary migration of injected CO2 and <a href=\"https:\/\/www.sciencedirect.com\/science\/article\/abs\/pii\/S175058362200175X?dgcid=rss_sd_all\">common CCS\noperations between States<\/a>.<\/span><\/p>\n\n\n\n<p style=\"text-align: justify;\"><span style=\"font-size: large; color: #000000;\">The risk multiplies with the proliferation of AI-based systems across digital ecosystems, making\nthem virtually transnational. This could lead to outcomes that are geographically dispersed,\nsparking off a chain reaction of events affecting end-user rights and the global environment.<\/span><\/p>\n\n\n\n<p style=\"text-align: justify;\"><span style=\"font-size: large; color: #000000;\">Considering these possibilities, it is imperative to incorporate AI\u2019s role in CCS technology within\nthe scope of these legal principles. 
It is of utmost necessity to address the transnational nature of these operations, the potential for transboundary effects, and the responsibility of States for the actions originating within their territories.<\/span><\/p>\n\n\n\n<p style=\"text-align: justify;\"><span style=\"font-size: large; color: #000000;\">Consequently, the application of AI in CCS technology needs to be embedded within the broader framework of the principles of international environmental law. Especially relevant are principles such as the responsible use of AI and the conduct of proper societal and ecological impact assessments. <a href=\"https:\/\/www.sciencedirect.com\/science\/article\/abs\/pii\/S175058362200175X?dgcid=rss_sd_all\">The precautionary principle particularly<\/a> underscores the need for risk management.<\/span><\/p>\n\n\n\n<p style=\"text-align: justify;\"><span style=\"font-size: large; color: #000000;\">However, the reality is that the existing legal framework is insufficiently equipped to tackle the unique challenges presented by the intersection of AI and CCS technology. The application of AI in this domain calls for more comprehensive, targeted regulatory measures. These measures must address not only the development and design of AI systems but also their deployment in <a href=\"https:\/\/www.sciencedirect.com\/science\/article\/abs\/pii\/S175058362200175X?dgcid=rss_sd_all\">CCS and related climate operations<\/a>.<\/span><\/p>\n\n\n\n<p style=\"text-align: justify;\"><span style=\"font-size: large; color: #000000;\">Moreover, the rapid evolution of AI systems leads to autonomous decision-making capabilities, detached from human intervention or involvement. <a href=\"https:\/\/www.sciencedirect.com\/science\/article\/abs\/pii\/S175058362200175X?dgcid=rss_sd_all\">As AI systems become increasingly autonomous<\/a>, the risks associated with their operation multiply. 
It is vital, therefore, to establish a robust legal framework that would provide oversight of these systems, regulating their operation and ensuring they do not cause environmental harm.<\/span><\/p>\n\n\n\n<p style=\"text-align: justify;\"><em><strong style=\"color: #000000; font-size: x-large;\"><span style=\"font-family: 'Cormorant Garamond';\">D. Regulating the Usage of AI in CCS Technology through EU\u2019s AI Act, 2023<\/span><\/strong><\/em><\/p>\n\n\n\n<p style=\"text-align: justify;\"><span style=\"font-size: large; color: #000000;\">The recent paradigm shifts in global AI legislation, punctuated by the <a href=\"https:\/\/www.europarl.europa.eu\/doceo\/document\/TA-9-2023-0236_EN.pdf\">EU\u2019s Artificial Intelligence Act of June 2023<\/a> (hereinafter referred to as the \u201c<strong>AI Act<\/strong>\u201d), have far-reaching implications for various domains. One such critical field where the ramifications of this legislation reverberate intensely is the use of AI in CCS technology.<\/span><\/p>\n\n\n\n<p style=\"text-align: justify;\"><span style=\"font-size: large; color: #000000;\">The intersection of AI and CCS technology essentially creates a multifaceted, unprecedented fusion of the technological and environmental spheres. This synergy raises the critical question of legal parameters and governance. The following analysis seeks to critically examine how the EU\u2019s AI Act may inform, influence, and inevitably govern the application of AI in CCS technologies.<\/span><\/p>\n\n\n\n<p style=\"text-align: justify;\"><em><strong style=\"color: #000000; font-size: x-large;\"><span style=\"font-family: 'Cormorant Garamond';\">&nbsp;&nbsp;&nbsp;&nbsp;i. 
Interpreting the AI Act through the lens of CCS Technology<\/span><\/strong><\/em><\/p>\n\n\n\n<p style=\"text-align: justify;\"><span style=\"font-size: large; color: #000000;\">The European Union\u2019s AI Act, a landmark legal instrument, is a comprehensive framework designed to manage the risks and challenges posed by AI technologies while bolstering their beneficial applications. The criticality of such legislation in the CCS technology is not merely an abstract conjecture but is grounded in the elemental characteristics of this specific technology.<\/span><\/p>\n\n\n\n<p style=\"text-align: justify;\"><span style=\"font-size: large; color: #000000;\">The AI Act, under <a href=\"https:\/\/www.lexology.com\/library\/detail.aspx?g=2f340a18-81d7-4aa9-b883-d473d9170fa4\">Article 5, enumerates certain \u201cProhibited AI practices\u201d<\/a> that are deemed to pose an unacceptable level of risk to individual rights and safety. For instance, the AI Act prohibits AI practices that may manipulate political campaigns or violate privacy through inappropriate use of facial recognition technology.<\/span><\/p>\n\n\n\n<p style=\"text-align: justify;\"><span style=\"font-size: large; color: #000000;\">A careful reading of the AI Act informs us that while CCS technology may not necessarily implicate the specific prohibited practices outlined in the AI Act, the general ethos of these prohibitions is instructive. The principle of safety and respect for individual rights, as embedded in these prohibitions, could potentially translate into regulations concerning the safety of data handling in the CCS technology and the ethical implications of AI\u2019s decision-making processes within the CCS technology.<\/span><\/p>\n\n\n\n<p style=\"text-align: justify;\"><em><strong style=\"color: #000000; font-size: x-large;\"><span style=\"font-family: 'Cormorant Garamond';\">&nbsp;&nbsp;&nbsp;&nbsp;ii. 
Implication of Fines and Penalties under the AI Act for CCS Technology<\/span><\/strong><\/em><\/p>\n\n\n\n<p style=\"text-align: justify;\"><span style=\"font-size: large; color: #000000;\">One of the core components of the AI Act is its provision for administrative fines in cases of\nviolation of its terms. Specifically, a breach of the \u201cProhibited AI practices,\u201d as delineated in\n<a href=\"https:\/\/www.lexology.com\/library\/detail.aspx?g=2f340a18-81d7-4aa9-b883-d473d9170fa4\">Article 5, can trigger a fine of up to \u20ac40,000,000, or up to 7% of a company\u2019s global turnover in\nthe prior year<\/a>. The severity of these penalties underscores the EU\u2019s commitment to maintaining\nethical AI practices and protecting individual rights and safety.<\/span><\/p>\n\n\n\n<p style=\"text-align: justify;\"><span style=\"font-size: large; color: #000000;\">The implications for the CCS technology are significant. While the actual processes involved in\nCCS might not directly implicate individual rights or safety, the broader AI ecosystem involved in\noptimizing these CCS processes does hold the potential for violations. AI, the driving force\nbehind these CCS processes, handles massive volumes of data, and there lies the potential for\nmishandling, misinterpretation, or unethical data usage. Should any such violations occur, the\nstringent penalties outlined in the AI Act would apply, thereby providing a robust accountability\nmechanism.<\/span><\/p>\n\n\n\n<p style=\"text-align: justify;\"><em><strong style=\"color: #000000; font-size: x-large;\"><span style=\"font-family: 'Cormorant Garamond';\">&nbsp;&nbsp;&nbsp;&nbsp;iii. Obligations under the AI Act and their Relevance to the CCS Technology<\/span><\/strong><\/em><\/p>\n\n\n\n<p style=\"text-align: justify;\"><span style=\"font-size: large; color: #000000;\">The AI Act introduces an array of obligations aimed at regulating AI models that pose a limited\nrisk. 
These obligations are not merely punitive but are preventative and constructive in nature. They include, among others, an obligation to register \u201c<a href=\"https:\/\/www.lexology.com\/library\/detail.aspx?g=2f340a18-81d7-4aa9-b883-d473d9170fa4\">Foundation Models<\/a>\u201d with an EU database prior to entering the market and transparency obligations for creators of generative AI systems.<\/span><\/p>\n\n\n\n<p style=\"text-align: justify;\"><span style=\"font-size: large; color: #000000;\">These obligations offer a profound perspective on the potential regulatory obligations for AI within the CCS technology. The registration requirement for \u201cFoundation Models\u201d reinforces the call for transparency in AI processes, a principle that could also extend to CCS. By ensuring that all foundational AI models used within the CCS technology are registered, the AI Act can ensure a level of accountability and traceability, thereby providing a failsafe against unethical practices or manipulations.<\/span><\/p>\n\n\n\n<p style=\"text-align: justify;\"><span style=\"font-size: large; color: #000000;\">Further, the transparency obligations under the AI Act, particularly the need to disclose when content is generated by AI, have far-reaching implications for CCS technology. Given the significance of AI in driving CCS processes, any decision, prediction, or output generated by the AI must be clearly labeled as such, ensuring transparency and reinforcing trust in the technology.<\/span><\/p>\n\n\n\n<p style=\"text-align: justify;\"><em><strong style=\"color: #000000; font-size: x-large;\"><span style=\"font-family: 'Cormorant Garamond';\">&nbsp;&nbsp;&nbsp;&nbsp;iv. The AI Act and its Institutional Framework \u2013 Implications for CCS Technology<\/span><\/strong><\/em><\/p>\n\n\n\n<p style=\"text-align: justify;\"><span style=\"font-size: large; color: #000000;\">The institutional framework envisaged by the AI Act is arguably one of its most pivotal components. 
It provides for an <a href=\"https:\/\/www.lexology.com\/library\/detail.aspx?g=2f340a18-81d7-4aa9-b883-d473d9170fa4\">EU AI Office<\/a> for filing complaints regarding AI and a \u201c<a href=\"https:\/\/www.lexology.com\/library\/detail.aspx?g=2f340a18-81d7-4aa9-b883-d473d9170fa4\">national\nsupervisory authority<\/a>\u201d in each Member State to oversee the implementation and ongoing use of\nthe AI Act.<\/span><\/p>\n\n\n\n<p style=\"text-align: justify;\"><span style=\"font-size: large; color: #000000;\">For CCS technology, the implications of this institutional framework are profound. The\nexistence of a designated authority for overseeing AI\u2019s application within the technology could\nfacilitate more efficient, effective, and ethical use of AI. Moreover, providing a platform for\nlodging complaints against unethical or harmful AI practices within CCS processes could\nsignificantly boost public trust and engagement in CCS technology.<\/span><\/p>\n\n\n\n<p style=\"text-align: justify;\"><em><strong style=\"color: #000000; font-size: x-large;\"><span style=\"font-family: 'Cormorant Garamond';\">&nbsp;&nbsp;&nbsp;&nbsp;v. The Controversial Aspects of the AI Act and their Relevance to CCS Technology<\/span><\/strong><\/em><\/p>\n\n\n\n<p style=\"text-align: justify;\"><span style=\"font-size: large; color: #000000;\">Notably, the AI Act is not devoid of controversy. It has been subject to criticism, particularly\nfrom the technology industry, which argues that the Act may limit AI development and impede\nglobal competitiveness for AI developers within the EU. In this context, it is necessary to\ncritically examine how these controversial aspects of the Act could influence the use of AI in\nCCS technology.<\/span><\/p>\n\n\n\n<p style=\"text-align: justify;\"><span style=\"font-size: large; color: #000000;\">There is a legitimate concern that a stringent regulatory regime might inadvertently stifle\ninnovation in AI applications for CCS. 
By imposing heavy fines and rigorous compliance\nobligations, the AI Act could disincentivize the integration of AI into CCS processes, thereby\nhindering technological advancements in the fight against climate change.<\/span><\/p>\n\n\n\n<p style=\"text-align: justify;\"><span style=\"font-size: large; color: #000000;\">However, while acknowledging these legitimate concerns, it is equally imperative to consider the\npotential benefits of such a comprehensive regulatory framework. The AI Act not only aims to\nprevent harmful AI practices but also endeavors to foster an AI ecosystem that respects\nindividual rights and democratic values. It promotes transparency, accountability, and ethical AI\npractices, which could significantly enhance the public\u2019s trust in AI applications in CCS\ntechnology.<\/span><\/p>\n\n\n\n<p style=\"text-align: justify;\"><span style=\"font-size: large; color: #000000;\">Moreover, by establishing legal safeguards against the misuse of AI, the AI Act could arguably\nstimulate responsible innovation, incentivizing AI developers to design systems that not only\nimprove the efficacy of CCS processes but also uphold the principles of safety, transparency, and\nrespect for human rights.<\/span><\/p>\n\n\n\n<p style=\"text-align: justify;\"><em><strong style=\"color: #000000; font-size: x-large;\"><span style=\"font-family: 'Cormorant Garamond';\">&nbsp;&nbsp;&nbsp;&nbsp;vi. The EU AI Act \u2013 A Model Legislation in India?<\/span><\/strong><\/em><\/p>\n\n\n\n<p style=\"text-align: justify;\"><span style=\"font-size: large; color: #000000;\">While the EU AI Act may stand as a well-articulated regulatory framework for AI, including its\napplications in CCS technology, adopting it in its totality in India would present distinct\nchallenges. 
Primarily, the onerous requirements set forth by the EU AI Act might inadvertently\nhinder the ease of doing business in India, particularly for small and medium-sized enterprises\n(\u201c<strong>SMEs<\/strong>\u201d) that may lack the resources to comply with such stringent standards.<\/span><\/p>\n\n\n\n<p style=\"text-align: justify;\"><span style=\"font-size: large; color: #000000;\">Furthermore, the renewable energy conditions in the EU and India differ substantially. The EU,\nwith its established renewable energy sector, may have the luxury to implement rigorous AI\nregulations without jeopardizing its broader energy and climate goals. On the other hand, India&#39;s\nenergy landscape is marked by a delicate balance between burgeoning demand, the imperative to\nexpand renewable energy, and the need to enhance energy access across diverse industries. A\ndirect replication of the EU AI Act could impede India&#39;s efforts to innovate within the CCS\ndomain, potentially stalling vital progress in climate change mitigation.<\/span><\/p>\n\n\n\n<p style=\"text-align: justify;\"><span style=\"font-size: large; color: #000000;\">Additionally, the socioeconomic and political contexts in India are notably different from the\nEU. A legal framework transplanted from the EU might not align seamlessly with India\u2019s unique\ncultural, economic, and regulatory environment. 
Such a misalignment might create friction and uncertainty, leading to a stifling of localized innovation.<\/span><\/p>\n\n\n\n<p style=\"text-align: justify;\"><span style=\"font-size: large; color: #000000;\">In conclusion, while the principles and standards of the EU AI Act offer valuable insights, their wholesale adoption in India would require careful consideration of factors such as the Act\u2019s suitability and relevance to the specific needs of Indian businesses, the energy sector, and the technology sector, along with any other factors that Indian policymakers may deem fit. It would be prudent for Indian policymakers to craft laws and regulations that may draw inspiration from the EU AI Act but tailor them to India\u2019s unique energy landscape, business ecosystem, and societal needs. A more nuanced approach would safeguard against over-burdensome laws and regulations while still promoting the responsible and innovative use of AI in CCS technology within the Indian context.<\/span><\/p>\n\n\n\n<p style=\"text-align: justify;\"><strong style=\"color: #000000; font-size: x-large;\"><span style=\"font-family: 'Cormorant Garamond';\">Conclusion: What is the Future Way Forward for India?<\/span><\/strong><\/p>\n\n\n\n<p style=\"text-align: justify;\"><span style=\"font-size: large; color: #000000;\">Addressing the legal and regulatory gap for AI in CCS in India necessitates an approach that is comprehensive in scope, drawing from existing legislative instruments while recognizing and accommodating the unique challenges posed by AI technology. 
The Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011 (SPDI Rules), the proposed Personal Data Protection Bill, 2022, and the draft Digital India Act, 2023, are among the crucial legislative references in this regard.<\/span><\/p>\n\n\n\n<p style=\"text-align: justify;\"><span style=\"font-size: large; color: #000000;\"><em>Firstly<\/em>, an essential step towards plugging this gap is integrating AI governance principles into existing data privacy laws, especially those governing SPDI. The SPDI Rules primarily govern data collection and processing in India. However, they were formulated at a time when the use of AI was not as pervasive, and consequently, they do not adequately address AI-related considerations. With the rise of AI in CCS and other sectors, there is a pressing need to update these rules to accommodate AI-specific concerns. Such updates could include enhanced consent mechanisms for AI-related data processing, stricter data minimization and storage limitation principles, and robust rights to explanation for AI-based decisions. Further, basic principles from the GDPR may be imported into the 2011 Rules. Additionally, the EU\u2019s AI Act, 2023 could act as a base to establish stricter controls on the collection, use, and sharing of data by AI systems. This could include more robust consent mechanisms, greater transparency about how data is used, and stricter penalties for data breaches.<\/span><\/p>\n\n\n\n<p style=\"text-align: justify;\"><span style=\"font-size: large; color: #000000;\"><em>Secondly<\/em>, the proposed 2022 Bill can play a pivotal role in the future regulation of AI in India. The 2022 Bill, as it currently stands, includes several provisions that could influence AI use, such as data localization requirements, enhanced individual rights, and a more robust enforcement mechanism. 
As the 2022 Bill is still under consideration, it would be prudent to ensure its final form is AI-ready. This could involve clarifying the bill&#39;s position on AI technologies, specifying the responsibilities of data fiduciaries in AI deployments, and providing for more robust regulatory oversight on high-risk AI use cases.<\/span><\/p>\n\n\n\n<p style=\"text-align: justify;\"><span style=\"font-size: large; color: #000000;\"><em>Thirdly<\/em>, under the broader umbrella of the Digital India initiative, the draft 2023 Act presents another legislative opportunity to address the AI regulatory gap. As a part of the draft 2023 Act&#39;s mandate to transform India into a digitally empowered society, it could be leveraged to foster a healthy AI ecosystem in India that promotes innovation while safeguarding individual rights. For example, the draft 2023 Act could include provisions for AI literacy and capacity building, support for AI research and development, and policies to encourage responsible AI adoption in the public and private sectors.<\/span><\/p>\n\n\n\n<p style=\"text-align: justify;\"><span style=\"font-size: large; color: #000000;\"><em>Fourthly<\/em>, the 2003 Act needs to be updated through careful amendments to align with the current technological advancements in AI integration with CCS. While the introduction of new laws, such as the proposed 2022 Bill and the draft 2023 Act, is a significant step toward regulating the data and digital sphere, there is a clear need to dovetail these legal frameworks with the 2003 Act. 
This can be done as follows:<\/span><\/p>\n\n\n\n<p style=\"text-align: justify;\"><span style=\"font-size: large; color: #000000;\">(i)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;The provisions must be crafted within the 2003 Act to specifically address the ethical, environmental, and safety concerns related to the utilization of AI in CCS technology. This would involve setting clear standards, compliance requirements, and regulatory oversight mechanisms for AI-enabled CCS processes.<\/span><\/p>\n\n\n\n<p style=\"text-align: justify;\"><span style=\"font-size: large; color: #000000;\">(ii)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Synergies must be created between the 2003 Act and the emerging digital laws to ensure a cohesive regulatory landscape that covers all facets of AI&#39;s application in CCS, including data privacy, cybersecurity, and consumer protection.<\/span><\/p>\n\n\n\n<p style=\"text-align: justify;\"><span style=\"font-size: large; color: #000000;\">Thus, these amendments would foster responsible innovation, ensuring that AI&#39;s transformative potential within CCS technology is harnessed without compromising legal, ethical, or environmental standards.<\/span><\/p>\n\n\n\n<p style=\"text-align: justify;\"><span style=\"font-size: large; color: #000000;\"><em>Fifthly<\/em>, given that the regulatory challenge posed by AI is not limited to any one area of law, it would be beneficial to adopt a cross-sectoral approach when addressing this regulatory gap. 
This could include reviewing and updating other relevant laws and regulations that govern energy and electricity (2003 Act and its allied Rules and Regulations), environmental protection (Environment (Protection) Act, 1986), and competition (Competition Act, 2002) to ensure that they are prepared for the age of AI. Such a holistic approach would ensure that India&#39;s legal framework can effectively manage the risks and harness the benefits of AI in CCS and other sectors.<\/span><\/p>\n\n\n\n<p style=\"text-align: justify;\"><span style=\"font-size: large; color: #000000;\"><em>Sixthly<\/em>, new legislation could introduce requirements for AI transparency, mandating that AI systems used in CCS technologies be designed in a way that allows their decision-making processes to be understandable by humans. This could involve using explainability techniques such as feature importance, partial dependence plots, or SHAP (SHapley Additive exPlanations) values. In addition, AI system developers might be required to provide documentation explaining the system\u2019s design, training process, and expected operation.<\/span><\/p>\n\n\n\n<p style=\"text-align: justify;\"><span style=\"font-size: large; color: #000000;\"><em>Seventhly<\/em>, ethical considerations also need to be at the forefront of new legislation. There is an urgent need to ensure that AI systems are developed and used in a way that aligns with societal values and ethical norms. This is particularly pertinent in the context of CCS, where the potential for negative impacts on the environment and human health is significant. New legislation could establish a set of ethical principles for AI use in CCS technologies, such as fairness, transparency, and accountability. 
It could also establish mechanisms for ethical oversight of these technologies, such as ethical review boards.<\/span><\/p>\n\n\n\n<p style=\"text-align: justify;\"><span style=\"font-size: large; color: #000000;\"><em>Eighthly<\/em>, introducing AI in CCS technologies could lead to significant disruptions in the job market. As AI systems become more capable, there is the potential for job displacement. As such, new legislation may be needed to ensure that workers are protected and that there are adequate measures in place to help those affected by job displacement to reskill and find new employment.<\/span><\/p>\n\n\n\n<p style=\"text-align: justify;\"><span style=\"font-size: large; color: #000000;\"><em>Lastly<\/em>, international coordination will be essential in regulating AI use in CCS technologies. Given the global nature of climate change and technological innovation, national laws will not be sufficient to address the complexities and challenges of this issue. Instead, international legal frameworks will need to be developed. Such frameworks could establish global standards for AI use in CCS technologies, facilitate information and technology sharing between countries, and create mechanisms for dispute resolution. Developing these international legal frameworks will not be easy, given the diverse interests and perspectives involved. However, it will be crucial to ensure that the benefits of AI use in CCS technologies can be realized in an ethical, sustainable, and equitable way.<\/span><\/p>\n\n\n\n<p style=\"text-align: justify;\"><span style=\"font-size: large; color: #000000;\">Therefore, while India\u2019s current laws may provide a starting point, there is a clear and urgent need for new legislation, like the EU\u2019s AI Act, 2023, to regulate the use of AI in CCS technologies. 
This legislation must be forward-looking, adaptable, and comprehensive,\naddressing various issues, including transparency, data privacy, ethics, employment, and\ninternational coordination. The challenge is significant, but so are the potential rewards: a legal framework that can foster innovation and ensure the responsible and effective use of AI in our\nongoing efforts to combat climate change.<\/span><\/p>\n\n\n\n<p style=\"text-align: justify;\"><strong style=\"color: #000000; font-size: x-large;\"><span style=\"font-family: 'Cormorant Garamond';\">Disclaimer<\/span><\/strong><\/p>\n\n\n\n<p style=\"text-align: justify;\"><span style=\"font-size: large; color: #000000;\"><strong><em>The views and opinions expressed by the Authors are personal.<\/em><\/strong><\/span><\/p>\n\n\n\n<p style=\"text-align: justify;\"><strong style=\"color: #000000; font-size: x-large;\"><span style=\"font-family: 'Cormorant Garamond';\">About the Authors<\/span><\/strong><\/p>\n\n\n\n<p style=\"text-align: justify;\"><span style=\"font-size: large; color: #000000;\">Mr. Varun Pathak is a Partner (Dispute Resolution) at Shardul Amarchand Mangaldas &amp; Co.,\nNew Delhi. He is an Advocate-on-Record at the Supreme Court of India. He has completed his\nLL.M. in Corporate and Commercial Laws from the London School of Economics (LSE).<\/span><\/p>\n\n\n\n<p style=\"text-align: justify;\"><span style=\"font-size: large; color: #000000;\">Mr. Pushpit Singh is a 5th-Year B.B.A. LL.B. Student at Symbiosis Law School, Hyderabad. He is\na freelancing Corporate and Disputes Paralegal. 
He is also an Indian Institute of Arbitration and\nMediation (IIAM) Panel Arbitrator.<\/span><\/p>\n\n\n\n<p style=\"text-align: justify;\"><strong style=\"color: #000000; font-size: x-large;\"><span style=\"font-family: 'Cormorant Garamond';\">Editorial Team<\/span><\/strong><\/p>\n\n\n\n<p style=\"text-align: justify;\"><span style=\"font-size: large; color: #000000;\"><em>Managing Editor: Naman Anand<\/em><br><em>Editors-in-Chief (Blog): Abeer Tiwari &amp; Muskaan Singh<\/em><br><em>Editor-in-Chief (Journal) and Senior Editor: Hamna Viriyam<\/em><br><em>Associate Editor: Pushpit Singh<\/em><br><em>Junior Editor: Ishaan Sharma<\/em><\/span><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Abstract Artificial Intelligence\u2019s application in Carbon Capture and Storage technology presents significant benefits but also poses new challenges in legal regulation. Given the potential for transboundary effects and the need for globally coordinated action, the existing legal principles need to be re-evaluated and expanded upon to ensure the responsible use of AI, especially in Carbon 
[&hellip;]<\/p>\n","protected":false},"author":258,"featured_media":7587,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_et_pb_use_builder":"","_et_pb_old_content":"","_et_gb_content_width":"","footnotes":"","wp_social_preview_title":"","wp_social_preview_description":"","wp_social_preview_image":0},"categories":[1],"tags":[],"_links":{"self":[{"href":"https:\/\/ijpiel.com\/index.php\/wp-json\/wp\/v2\/posts\/7547"}],"collection":[{"href":"https:\/\/ijpiel.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/ijpiel.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/ijpiel.com\/index.php\/wp-json\/wp\/v2\/users\/258"}],"replies":[{"embeddable":true,"href":"https:\/\/ijpiel.com\/index.php\/wp-json\/wp\/v2\/comments?post=7547"}],"version-history":[{"count":39,"href":"https:\/\/ijpiel.com\/index.php\/wp-json\/wp\/v2\/posts\/7547\/revisions"}],"predecessor-version":[{"id":7586,"href":"https:\/\/ijpiel.com\/index.php\/wp-json\/wp\/v2\/posts\/7547\/revisions\/7586"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/ijpiel.com\/index.php\/wp-json\/wp\/v2\/media\/7587"}],"wp:attachment":[{"href":"https:\/\/ijpiel.com\/index.php\/wp-json\/wp\/v2\/media?parent=7547"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/ijpiel.com\/index.php\/wp-json\/wp\/v2\/categories?post=7547"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/ijpiel.com\/index.php\/wp-json\/wp\/v2\/tags?post=7547"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}