We stand at the cusp of a digital transformation that has been touted as the Fourth Industrial Revolution. The increased use of Artificial Intelligence (AI) has transformed systems of production, management, and governance across industries. The pace, scope, and impact of new technologies alter the way we live, work, and associate with each other. The study of AI and its moral responsibility becomes relevant against this backdrop.
AI refers to “…the science and engineering of making intelligent machines, especially intelligent computer programs…”. Measured against the benchmark of the “Turing Test”, AI may be categorized as weak (or narrow) and strong. Weak or narrow AI performs specific tasks with a high degree of human supervision and decision making, while strong AI (artificial general intelligence) can undertake multiple tasks with human-like intelligence. The potential of strong AI has prompted concerns about the creation of a super-intelligence (a system whose intellect exceeds human cognitive performance) that may pose risks to humanity. Even where the programmer has benevolent intentions, AI’s ability to make decisions without human intervention has the potential to cause harm.
The creation of strong or largely autonomous AI raises interesting questions of ethics and morality. Because such AI has the potential to act in its own right, debates have arisen on whether AI should have legal personality and whether its actions should attract moral responsibility. This article examines the moral challenges posed by the deployment of AI in the construction industry in the context of the existing Indian legal framework.
Application of AI in the Construction Industry and Its Expanding Scope
AI has the capacity to revolutionize the construction industry. Transformative software in construction design can propose solutions and design alternatives, including offsite methods of construction. Wearable technology can monitor and collect data to direct worker performance, and facial recognition software can verify worker conformance with safety measures.
At present, the use of AI in the Indian construction industry appears to be restricted to narrow AI, although there is scope for the deployment of strong AI. The industry is using AI for site security, identification and prevention of risks, data collection, analysis of accidents, and general performance improvement. AI facilitates the improvement of on-site health and safety while also reducing costs. As a result of the COVID-19 pandemic, sensor-based disinfectant tunnels and drone-based spraying systems for contactless sanitization have become more prevalent.
The use of AI in construction has several benefits:
1. As the need for human intervention is reduced or eliminated, businesses face lower costs of manpower;
2. Site monitoring and data collection allow for more accurate predictive analyses as to budget, schedule, and the like. The recent launch of Colliers India’s AI-driven platform (CoGence) is indicative of a shift towards AI-driven industry solutions;
3. Data collection together with analytical tools can help identify onsite risks and mitigate health and safety issues that may otherwise arise on construction sites;
4. Data collection can lead to effective maintenance, productivity efficiencies, and reductions in cost;
5. Scheduling and managing engineering work can be undertaken through algorithms.
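The fifth benefit above lends itself to a concrete sketch. Algorithmic scheduling of engineering work typically reduces to a precedence-constrained computation of each task’s earliest finish time (the critical-path method). The following is a minimal illustration only; the task names, durations, and dependencies are hypothetical and not drawn from any real project or product.

```python
# Illustrative sketch: a minimal critical-path scheduler for
# precedence-constrained construction tasks. All task names and
# durations are hypothetical.

# task -> (duration in days, list of prerequisite tasks)
tasks = {
    "excavation": (5, []),
    "foundation": (10, ["excavation"]),
    "framing":    (15, ["foundation"]),
    "plumbing":   (7, ["framing"]),
    "electrical": (6, ["framing"]),
    "inspection": (2, ["plumbing", "electrical"]),
}

def earliest_finish(tasks):
    """Compute each task's earliest finish time via memoized recursion."""
    memo = {}
    def finish(name):
        if name not in memo:
            duration, prereqs = tasks[name]
            # A task starts only after its slowest prerequisite finishes.
            start = max((finish(p) for p in prereqs), default=0)
            memo[name] = start + duration
        return memo[name]
    return {name: finish(name) for name in tasks}

schedule = earliest_finish(tasks)
print(schedule["inspection"])  # earliest overall completion, in days
```

The finish time of the final task gives the minimum project duration; in a real deployment, the same structure would be fed from live site data rather than a hand-written dictionary.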
Moral Responsibility and Action
Moral and Legal Responsibility
The rise in the application of AI raises the question of assigning moral agency when individuals act on the instructions and recommendations of AI. For example, if there is a fatality at a construction site owing to an individual acting upon information from an AI system, who bears liability and responsibility? AI algorithms also run the risk of perpetuating the biases of their creators. A multitude of ethical and moral issues thus arise from the use of AI for construction industry workers.
Problems in assigning culpability are exacerbated by the use of Black Box AI (i.e., machine learning systems that solve problems without being explicitly programmed). The AI produces insights based on a data set, but the end-user is unaware of the process by which the conclusion was reached, and the system operates with minimal human intervention. Illustratively, when an architect sketches the blueprint of a construction site, the computer interprets the drawing and generates new input with suggestions that the architect can sketch over; new solutions emerge in this way which neither machine nor human could have fathomed alone. Thus, while Black Box AI outputs may be accurate, the predictions are not traceable: they rely on proprietary algorithms with imprecisely described processes. This ambiguity could lead to difficulties in assessing who is responsible where liability arises.
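The opacity described above can be sketched with a toy ensemble model: each member derives its own private decision threshold from the data, and the end-user receives only the aggregate label, never the internal reasoning. This is a simplified illustration under stated assumptions, not a depiction of any real construction AI system; all names and data are invented.

```python
# Toy illustration of "black box" behaviour: the caller receives only
# a final risk label; the internal thresholds and votes that produced
# it are hidden implementation details. All data here is synthetic.

import random

class OpaqueRiskModel:
    def __init__(self, training_data, n_members=25, seed=42):
        rng = random.Random(seed)
        lo, hi = min(training_data), max(training_data)
        # Each ensemble member picks its own threshold within the
        # data's range; these are never exposed to the end-user.
        self._thresholds = [rng.uniform(lo, hi) for _ in range(n_members)]

    def predict(self, reading):
        # Majority vote of the hidden members; only the label escapes.
        votes = sum(reading > t for t in self._thresholds)
        return "high-risk" if votes > len(self._thresholds) / 2 else "low-risk"

# Synthetic sensor readings from a hypothetical site monitor.
model = OpaqueRiskModel(training_data=[10, 20, 30, 40, 50])
print(model.predict(55))  # exceeds every learned threshold
print(model.predict(5))   # below every learned threshold
```

Even in this trivial case, the worker or supervisor on the receiving end sees only “high-risk” or “low-risk”; asking *why* requires access to internals the system does not disclose, which is precisely the traceability gap the paragraph above describes.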
Generally, where an individual or body corporate exercises elective choice to act morally or immorally, that person is presumed to have complete knowledge of the consequences of that action, and is, therefore, accountable for their choices. It is unclear whether independent and autonomous AI can or should be subject to such accountability.
While being programmed, AI needs to be taught to exercise restraint in what it ought to and ought not to decide, thereby constraining the AI from intentionally acting immorally. Where weak AI is concerned, it is likely that the AI will almost always adopt moral principles from the human actor(s) controlling or monitoring it. Therefore, the moral evaluation of AI and its actions is of primary importance.
The unquantifiable nature of human emotion creates difficulties in the AI ecosystem, as an employer’s increased reliance on AI in decision making could mean a lack of consideration for the interests of workers. Even in situations where the AI merely assists a human, the AI’s inputs may be given more credence because they are considered more objective, being unaffected by emotion. The nuances of human empathy and similar emotions would be glossed over in favour of objective data that operates within the parameters of profit and productivity set by the employer. Therefore, if X is late to the construction site for 10 days because X’s mother is ill, the AI recording workers’ entry times will report X’s delay, which a human supervisor might have overlooked.
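The attendance example can be made concrete with a sketch of a context-blind, rule-based reporting routine: every late entry is flagged against a fixed parameter, and there is simply no input through which the mitigating circumstances a human supervisor might weigh can reach the rule. The names, thresholds, and records below are hypothetical.

```python
# Sketch of a context-blind attendance monitor. The rule operates only
# on timestamps; a worker's circumstances (e.g. caring for an ill
# parent) are invisible to it. All names and records are hypothetical.

from datetime import time

SHIFT_START = time(8, 0)  # employer-set parameter

def flag_late_entries(entries):
    """Return (worker, day) pairs whose entry time exceeds SHIFT_START."""
    return [(worker, day) for worker, day, arrived in entries
            if arrived > SHIFT_START]

records = [
    ("X", 1, time(8, 45)),  # late: family illness, invisible to the rule
    ("X", 2, time(8, 50)),
    ("Y", 1, time(7, 55)),  # on time
]

print(flag_late_entries(records))
```

Every one of X’s late entries is reported mechanically; the discretion a supervisor might exercise would have to be designed into the system as an explicit input, which is exactly what such systems typically lack.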
The idea of the private sphere being sacrosanct can be traced to Aristotle’s distinction between the private and public spheres. The former, associated with family and domestic life, was outside the scope of the public sphere of political activity. The understanding of privacy as valuable implied that there was no room for interference. However, the invasion of privacy through the constant surveillance and monitoring of workers via wearable technology, facial recognition systems, and other monitoring devices is gaining widespread moral acceptance. This, of course, results in a trade-off between the safety of the worker and their privacy. Proponents argue that the cost of disregarding individual privacy is outweighed by the benefit of worker safety.
Foucault coined the term ‘governmentality’ to argue that governments shape the behaviour of a population by using procedures and processes of administration. Surveillance was explained as one such process for carving out a disciplined society: the premise is that those who know they are being watched behave better. Similarly, in the construction industry, individual workers are subjected to different forms of surveillance through wearable technology, facial and palm recognition devices, and other monitoring devices. The impact of this constant surveillance within the construction industry is twofold:
1. Employers make decisions based on the information collated by these devices. This could create biases, such as discrimination against workers for health conditions revealed by the devices. Conclusions that identify patterns pertaining to specific castes or genders could likewise lead to discrimination.
2. The use of devices results in a single-minded focus on the productivity of the worker. The devices have an ethical deficit, owing to programming that has no consideration for the intended beneficiary – the worker, who has free will, consciousness, and certain human rights. For example, working overtime at negligible wages with zero ‘idle time’ might be beneficial for the industry but has an adverse effect on the worker. A worker is often not in a position to voice concerns against these conditions, or to pursue idle time, at the risk of being let go. The acceptance of these working conditions, owing to the fear of being easily replaced in a labour-surplus market, has taken away the choice of dissent.
The real problem arises when individuals stop questioning the impact of such scrutiny and give up personal data in the belief that it is necessary for averting accidents and maximizing efficiency. It is the normalization of surveillance that poses a threat to the right to privacy and individual autonomous action. Failing to identify this as a problem reduces the endeavour to avoid it.
Extant Statutes and the Regulatory Framework for AI
Specific Regulation for AI
Although there is no national regulatory framework or policy document for AI from the Government of India, guiding documents have been issued by the NITI Aayog (the successor to the Planning Commission). Additionally, the NITI Aayog has set up the National Strategy for Artificial Intelligence and AIRAWAT (AI Research, Analytics, and Knowledge Assimilation platform). The documents issued by the NITI Aayog include the National Strategy for Artificial Intelligence, working documents towards Responsible AI, and a document on Operationalizing Principles for Responsible AI.
These documents focus on the responsible use of AI and indicate the government’s inclination towards its widespread use and development. The policy landscape for AI also includes the Digital India initiative, an Artificial Intelligence task force set up in 2017, and four committees set up by the Ministry of Electronics and Information Technology to prepare a roadmap for a national AI programme. However, despite all these working papers and documents, the law has not been amended to recognize, or account for, the increasing prevalence and development of AI.
The absence of a definition of AI in Indian law creates legal ambiguity as to what qualifies as AI. The current framework’s silence on AI’s legal personality is a significant challenge, since intention is of utmost importance in criminal jurisprudence and, in the absence of legal status, intention cannot be attributed. Because AI is not afforded legal personality, liability in situations in which AI acts autonomously is unclear. This leads to questions as to who should be responsible for the actions or application of AI – the programmer, the person giving instructions, or the AI itself.
In 2021, the Copyright Office in India recognized an AI tool, RAGHAV (an AI painting application), as the co-author of a copyright-protected artistic work. While this appears to be the first occasion on which AI was afforded legal personhood in India, the need of the hour is to expand extant legislation.
Protection of Workers’ Data
While Indian labour law does not contain provisions for the protection of data collected from workers through monitoring and surveillance, Sections 72A and 43A of the Information Technology Act, 2000 (IT Act) will apply. Section 72A of the IT Act penalizes the unlawful disclosure of personal information in breach of contract. Under Section 43A of the IT Act, where a body corporate is negligent in implementing and maintaining reasonable security practices for the protection of sensitive personal data or information which it possesses, deals with, or handles, that body corporate will be liable to pay damages to persons affected by such negligence. ‘Sensitive data’ is defined as including medical history, biometric information, and physiological or mental health conditions, and Rule 5 stipulates that while collecting sensitive data, the data collector must seek consent in writing and must ensure that the collection is lawful and necessary.
Although the IT Act and the Rules prescribed thereunder attempt to protect personal data, impose checks on bodies corporate, and provide for recourse, bodies corporate collecting or sharing data are not required to ensure that the person whose data is collected fully understands the terms laid out. Secondly, there is no rule on the renewal of consent, which implies that initial consent would justify data collection even when new devices are used, provided that the purpose of the collection and the person with whom the information is shared remain the same.
The Supreme Court of India has recognized the right to privacy as an implicit fundamental right under Article 21 of the Constitution (see R. Rajagopal v. State of Tamil Nadu). The Court, in K.S. Puttaswamy v. Union of India, has also observed that there is a need for a comprehensive legislative framework for data protection. Following the decision in Puttaswamy, the Government began working on a Personal Data Protection Bill. In November 2021, a Joint Parliamentary Committee (JPC) recommended that the proposed Personal Data Protection Bill be expanded to protect both personal and non-personal data. The Bill is now known as the Data Protection Bill, 2021.
The revised text of the Bill is not yet available. However, prior to the JPC’s recommendations, the proposed Bill labelled employees as ‘data principals’ and provided them with rights including the right to access, and to withdraw consent. It appears that employers will be considered ‘data fiduciaries’, required to give employees notice of their rights vis-à-vis their data. However, it is unclear if or how ‘employee’, ‘employer’, or ‘employment’ will be defined. As a result, it remains to be seen whether all workers in the construction industry will be afforded rights and protections under the Bill.
Prior to the JPC’s recommendations, the Bill required the express consent of data principals for the collection and sharing of sensitive data, including health data and biometric information. Data fiduciaries were required to assess the impact of new technologies on data protection and/or the processing of sensitive data.
The Way Forward
The government appears to have a positive outlook on AI, given that it has published policy papers and approved expenditure on machine learning. However, in the absence of specific legislation governing AI to address issues such as legal personality and the attribution of liability and/or intent, ethical and moral concerns will remain unresolved.
Outcome-based budgeting focuses on the qualitative effects of a proposed measure by sanctioning finances based on its outcomes. Outcome-based budgeting of AI could be the legislature’s starting point: it would focus attention on the effects AI would have on caste and gender dynamics, which shape the lives of workers in the construction industry.
As the use of AI for commercial purposes becomes more prevalent, it is imperative that the Legislature implement a framework for AI which, amongst other things, clearly defines and attributes responsibility and accountability for the actions of AI, ensures transparency, and addresses concerns regarding the privacy of persons whose data is collected. Any legislation in this regard must balance the need for digital assistance and the efficiencies it can create against the social and ethical implications of eliminating human intervention and/or reducing the workforce. Further, the law must be flexible and capable of evolving to accommodate new agents, considering the ever-expanding scope of AI.
About the Author
Ms. Puja Raghavan is an Associate at Bharucha & Partners, Delhi.
Managing Editor: Naman Anand
Editors-in-Chief: Jhalak Srivastav & Aakaansha Arya
Senior Editor: Gaurang Mandavkar
Associate Editor: Muskaan Aggarwal
Junior Editor: Harshita Tyagi
Preferred Method of Citation
Puja Raghavan, “Moral Responsibility and Action in the Use of Artificial Intelligence in Construction” (IJPIEL, 5 January 2022).