Harry Law
Lessons from 1972 in an age of doomers and optimists

Tim West, Visualising AI by DeepMind

“If the present growth trends continue unchanged, the limits to growth on this planet will be reached sometime within the next hundred years.” That was the primary claim of the 1972 report The Limits to Growth, written by a team of MIT researchers including Donella H. Meadows, Dennis L. Meadows, Jørgen Randers, and William W. Behrens III. Commissioned by the Club of Rome, a nonprofit founded in April 1968 by Italian industrialist Aurelio Peccei and OECD Director-General for Scientific Affairs Alexander King, the report detailed the results of a simulation modelling economic and population growth in a world with a finite supply of resources.

The central thesis of The Limits to Growth was simple: if the world's consumption patterns and population growth continued at the rates documented in the 1970s, the earth's ecosystems would collapse sometime within the next hundred years. The report echoed concerns raised by American biologist Paul Ehrlich in his 1968 bestseller The Population Bomb, which predicted that growing demand for resources would outstrip supply and lead to mass starvation. Ehrlich’s commentary belongs to the tradition of Malthusianism, which holds that population tends to grow faster than the resources available to sustain it, and that the resulting scarcity eventually acts as a check on further growth.

While the methodology of The Limits to Growth was criticised at publication and beyond (though it has been defended in recent years), the report has become a much-cited case study for historians of science and technology interested in how values circulate, stabilise, and shape the political environment.

The origins of the report can be traced back to the formation of the Club of Rome itself. In 1965, Aurelio Peccei gave a speech to an international consortium of bankers set up to support industrialisation in Latin America; it was translated into English and read by Alexander King, the Scottish scientist then serving as head of science at the OECD. According to the Club of Rome’s account, “the two found that they shared a profound concern for the long-term future of humanity and the planet, what they termed the modern predicament of mankind.”

In the spring of 1968, Peccei and King hosted thirty scientists, educators, economists, humanists, and industrialists at Rome’s Lincean Academy. Founded in 1603 and housed in the baroque Palazzo Corsini, the academy derived its name from the Italian for lynx, an animal whose sharp vision was deemed to symbolise the observational prowess required in scientific practice. The meeting marked the birth of the Club of Rome, initially conceived as an informal collective “to foster understanding of the varied but interconnected components that make up the global system.” Crucial to the formation of the group was an intention to “bring that new understanding to the attention of policymakers and the public globally; and thus, promote new policy initiatives and action.”

At the core of its philosophy was the concept of the problématique, which held that the problems of humankind––be they environmental, economic, or social––could neither be viewed nor solved in isolation. As a 1970 proposal from the group explained: "It is this generalized meta-problem (or meta-system of problems) which we have called and shall continue to call the "problematic" that inheres in our situation."

Shortly after the Club of Rome’s inception, MIT systems professor Jay Forrester offered to use computer models to study the web of complex problems represented by the problématique. Receptive to the proposal, the group commissioned a team of researchers from MIT to study the implications of global economic growth by examining five basic factors: population, agricultural production, non-renewable resource depletion, industrial output, and pollution. Led by Professor Dennis Meadows, the study aimed to define the physical boundaries of population growth and the limitations imposed by economic activities by examining what they characterised as a set of interconnected problems.
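To give a sense of the kind of feedback structure such a model captures (though not the actual equations or data of World3, which are far richer), here is a deliberately simple stock-and-flow sketch. Every parameter, and the crude 'scarcity brake' on growth, is an illustrative assumption rather than a figure from the report.

```python
# A toy stock-and-flow simulation in the spirit of (but far simpler than) World3.
# All parameter values are illustrative assumptions, not figures from the report.

def run(years=200, population=3.6e9, resources=1.0, growth_rate=0.02):
    """Simulate a population growing exponentially against a finite resource stock."""
    history = []
    for year in range(1970, 1970 + years):
        consumption = 1e-11 * population        # resource use scales with population
        resources = max(resources - consumption, 0.0)
        scarcity_brake = resources              # 1.0 = abundant, 0.0 = exhausted
        # Growth slows as resources dwindle, then turns into decline once exhausted.
        population *= 1 + growth_rate * scarcity_brake - 0.01 * (1 - scarcity_brake)
        history.append((year, population, resources))
    return history

for year, pop, res in run()[::25]:
    print(f"{year}: population ≈ {pop / 1e9:.1f}bn, resources remaining ≈ {res:.2f}")
```

Even this miniature version produces the familiar shape of the report's scenarios: a period of growth followed by decline once the resource stock is run down.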

Conclusions and controversy


In 1972, The Limits to Growth was published. Selling millions of copies worldwide, the document sought to scrutinise problems including the paradox of poverty amidst abundance, environmental degradation, institutional distrust, runaway urban growth, and economic disruptions such as inflation. The team from MIT developed a computer model, known as World3, based on the work of Jay Forrester. This model, which aimed to understand the causes and consequences of exponential growth in the global socioeconomic system, led the group to conclude:
That if the present growth trends in population, industrialisation, pollution, food production, and resource depletion continue unchanged, the limits to growth on the planet will be reached sometime within the next one hundred years. The most probable result will be a rather sudden and uncontrollable decline in both population and industrial capacity…It is possible to alter these growth trends and to establish a condition of ecological and economic stability that is sustainable far into the future…The state of global equilibrium could be designed so that the basic material needs of each person on earth are satisfied and each person has an equal opportunity to realise his individual human potential.
Despite its popularity, The Limits to Growth provoked criticism soon after its publication. In his paper The Computer That Printed Out W*O*L*F*, the American academic Carl Kaysen condemned its "familiar, indeed fashionable thesis", which he felt overstated the scale of the problem. Peter Passell, an economist, published a 1972 article in the New York Times stating that “The Limits to Growth is best summarized not as a rediscovery of the laws of nature but as a rediscovery of the oldest maxim of computer science: Garbage In, Garbage Out.”

These criticisms took issue with the central claim at the heart of The Limits to Growth: should aggregate demand for resources increase as the world’s population grows and per capita income rises, the world would eventually run out of these precious resources. As Matthew Kahn recently explained, “Economists have tended to be more optimistic that ongoing economic growth can slow population growth, accelerate technological progress and bring about new goods that offer consumers the services they desire without the negative environmental consequences associated with past consumption.”

This dynamic was central to a 1973 rebuttal by the Science Policy Research Unit at the University of Sussex, which concluded that the simulations in The Limits to Growth were extremely sensitive to a small number of key assumptions. By this account, even minor changes to important variables would lead to huge swings when extrapolated far into the future. As a result, the Sussex group suggested that the MIT projections were unduly pessimistic because of faults in the underlying methodology and data on which they were based. In response, however, the MIT group countered that their arguments had been either misunderstood or wilfully misrepresented. They argued that the critics had failed to suggest any alternative model for the interaction of growth processes and resource availability, and "nor had they described in precise terms the sort of social change and technological advances that they believe would accommodate current growth processes."
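The point about sensitivity is easy to illustrate with some back-of-the-envelope arithmetic (the figures below are purely illustrative and not drawn from either the MIT or the Sussex models): shifting an assumed growth rate by half a percentage point produces very different values once compounded over a century or more.

```python
# Compounding a small difference in an assumed growth rate over a long horizon.
# All numbers are illustrative, not taken from World3 or the Sussex critique.

initial_index = 100.0   # arbitrary index value for 1970
horizon = 130           # years of extrapolation, i.e. out to 2100

for rate in (0.015, 0.020, 0.025):   # 2% plus or minus half a percentage point
    projected = initial_index * (1 + rate) ** horizon
    print(f"assumed growth of {rate:.1%}: index in 2100 ≈ {projected:,.0f}")
```

The highest projection ends up more than three times the lowest, which is the Sussex group's argument in miniature: over long horizons, the choice of input assumptions can dominate the output.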
A figure from The Limits to Growth showing the 'Standard Run', in which consumption continues at 1970 rates. Depletion of nonrenewable resources leads to a collapse of industrial production, with growth halting before 2100

More recently, some have proposed that the models presented in The Limits to Growth deserve a second look. Graham Turner of the Australian Commonwealth Scientific and Industrial Research Organisation, for example, used data from the UN to argue that modern indicators closely resembled the so-called 'Standard Run' from 1972 (a 'business as usual' scenario with few modifications of human behaviour). According to Turner, while birth rates and death rates were both slightly lower than projected, these effects cancelled each other out, leaving the growth in world population in line with the forecasts.

Futures, policy, and AI


This post is not a commentary on whether or not the predictions made in The Limits to Growth were accurate. Given that the most severe effects in the model do not appear until towards the end of the twenty-first century, it remains difficult to say with any certainty whether its predictions will come to pass. Where The Limits to Growth is instructive, however, is in understanding the way in which predictions about tomorrow can be used to inform the policy environment today.

Futures are powerful things. They are generated through an unstable field of language, practice, and materiality in which actors and agents compete for the right to represent progress and deterioration. In scientific practice, visions of potential futures are often deployed to stimulate a desire to see potential technologies become realities. Historians interested in the futurity of science and technology generally take one of two approaches. The first, the ‘sociotechnical imaginaries’ perspective, suggests that futures play a generative role in shaping socioeconomic, scientific, and political life at the point of production. They permeate the sphere of public policy, reconfiguring the socioeconomic environment by determining which citizens are included in—and which are excluded from—the benefits of scientific development.

Characterised as ‘futurology’, the second approach explores the role that forecasting plays in contesting sites of action. It examines the way in which prediction can be used to control or protest futures and the role that “fictive scripts” play when deployed by scientists to win and sustain patronage. Analysis of the way in which futures are used by governments to manage the economy, and of the processes by which socioeconomic models of the future become entangled with political ideology, is central to this approach. While both ultimately seek to describe what the future could look like and prescribe what the future should look like, The Limits to Growth is generally associated with the second group. It demonstrates the power of predictions, coarse-grained or not, as tools for reorienting the policy environment through forecasts and extrapolation.

AI is a technology deeply entangled with notions of the future, and nowhere is that clearer than with respect to conceptions of highly advanced models whose capabilities might in many domains surpass those of a human. Researchers, civil society, and policymakers ask when this style of AI will arrive, what impact it will have on the social and economic fabric of society, what dangers it poses, and how likely they are to manifest.

Recent calls for international bodies such as an Intergovernmental Panel for Artificial Intelligence modelled on the Intergovernmental Panel on Climate Change, or an International Atomic Energy Agency equivalent for AI, bring into focus the urgency with which the issue is being considered. The current climate of deep interest in AI safety has provided a window for action to design, develop and deploy governance mechanisms for powerful models in a manner that minimises harm and maximises benefit. But we should remember that today’s action space may not last indefinitely.

Right now, many believe that timelines for extremely powerful models are short. What happens if such systems have not materialised in the coming years, for example, and those who predicted their arrival are accused of overestimating the rate of progress? Some have suggested that we may well inadvertently enter a period in which risks are higher than ever (and in which the will to act is most necessary) but in which governance efforts at the national and supranational level are harder to secure.
But The Limits to Growth episode shows us that it is not only the accuracy of predictions that matters. To predict is to influence, and to influence is to exert power. Just as it would be unwise to avoid careful consideration of how best to use the planet’s resources because predictions about their rate of depletion may not stack up, ensuring that the right governance structures exist to manage a world with highly capable models should not depend on precisely when those models arrive.
How the International Atomic Energy Agency came to be and what its creation can tell us about a sibling agency to regulate powerful AI models
Champ Panupong Techawongthawon, Visualising AI by DeepMind

The prospect of new global institutions to govern powerful models is featuring with increasing prominence and regularity in public policy discussions. In recent months, there have been calls for an Intergovernmental Panel for Artificial Intelligence modelled on the Intergovernmental Panel on Climate Change, an AI-focused group similar in scope to the European Organization for Nuclear Research (CERN), and an International Atomic Energy Agency equivalent for AI.

Perhaps the most widely discussed of these is an ‘IAEA for AI’, whose proponents suggest that the agency’s role in promoting the peaceful use of nuclear energy addresses a challenge analogous to ensuring the safe use of AI around the world. Understanding how the IAEA came to be is a useful exercise for those seeking to draw lessons from the organisation to inform global AI governance approaches. To that end, this post summarises the story behind its creation.

Atoms for Peace

The story of the IAEA is the story of the end of the Second World War. Seizing an opportunity to mould a new international order in the aftermath of the conflict, the US sought to shape the emergence of institutions such as the International Monetary Fund and the United Nations, supplied Europe with financial support to rebuild via its Marshall Plan, and constructed alliances with the goal of ‘containing’ the Soviet Union.

This was the context in which the US State Department Panel of Consultants on Disarmament, a group made up of officials including Robert Oppenheimer and representatives from academia and civil society, published an influential report on nuclear policy in January 1953. The panel strongly recommended that the US government adopt a policy of greater transparency with the American public regarding the capabilities of nuclear technology and the risks associated with its development. Specifically, the report (p.43) recommended "a policy of candor toward the American people—and at least equally toward its own elected representatives and responsible officials—in presenting the meaning of the arms race." The argument put forward by the panel was that once the Soviet Union developed its own offensive nuclear capabilities, there would be no scenario in which the US could maintain the asymmetrical advantage it had held during the concluding years of the conflict.

Accepting the recommendations of the report, US President Dwight D. Eisenhower gave a speech to the United Nations General Assembly in 1953 in which he proposed the creation of an international body to regulate and promote the peaceful use of nuclear power. The speech, which became known as the Atoms for Peace address, attempted to balance fears of nuclear proliferation with promises of the peaceful use of uranium in nuclear reactors. (We should note, however, that Atoms for Peace would later come to refer to a broader initiative including measures such as the declassification of nuclear power information and the commercialisation of atomic energy.) The speech itself was notable in that it outlined the basis for an international agency whose mandate would be to encourage the peaceful use of nuclear fission:
The governments principally involved, to the extent permitted by elementary prudence, should begin now and continue to make joint contributions from their stockpiles of normal uranium and fissionable materials to an international atomic energy agency. We would expect that such an agency would be set up under the aegis of the United Nations…The more important responsibility of this atomic energy agency would be to devise methods whereby this fissionable material would be allocated to serve the peaceful pursuits of mankind. Experts would be mobilized to apply atomic energy to the needs of agriculture, medicine and other peaceful activities.
William Burr of the National Security Archive has proposed, however, that “what Eisenhower left unsaid was that a proposal to donate fissionable materials would put pressure on the Soviet Union.” As US Atomic Energy Commission Chairman Lewis Strauss explained (p.3), the amount suggested by the US to be donated to the IAEA would be a “figure which we could handle from our stockpile, but which it would be difficult for the Soviets to match.”


Towards the IAEA

Following the speech, Eisenhower and Secretary of State John Foster Dulles were prepared to advance the IAEA predominantly as an American project without full international support. Such willingness proved to be unnecessary when a host of global powers, including the USSR, made clear that they did not wish to be excluded from what seemed to be a globally significant project.

By the autumn of 1954, members of the United Nations were locked in negotiations over the new agency's statute. Burr has suggested that the USSR initially showed little interest in the American proposal for safeguards against the proliferation of nuclear technologies, which US officials viewed as a crucial function of the IAEA to ensure that fissile materials were not misused. Subsequent US planning centred on what State Department official Gerard C. Smith and others perceived as a core challenge: the promotion of atomic energy presented a "worldwide security problem" because spent reactor fuel could be repurposed for military purposes.

For Smith and colleagues, the proposed IAEA could serve as a control mechanism to prevent a "fourth country" (in addition to the US, USSR, and UK) from developing nuclear weapons by blocking the diversion of nuclear resources intended for peaceful purposes towards military activities. To ensure the appropriate use of nuclear resources, the American government proposed that the new organisation be staffed with international civil servants who would oversee resources provided by the agency or by member states through bilateral agreements. To develop a safeguards policy that could gain broad acceptance, the US initiated discussions with allies and partners including Canada, the UK, and Australia.

By the end of 1958, this group had coalesced around the position that "100% effective control was impossible under any system and that audit and spot inspection would provide as effective control as could reasonably be expected." Some members of the group believed that, because total control could not be guaranteed, an effective compromise would be the introduction of inspectors who could act as a deterrent while maximising the effective scope of the agency. As one Canadian official explained, the concept was "analogous to having available policemen in sufficient number to deter the criminal but not to have one policeman assigned to each potential criminal." US officials agreed with this approach, believing it would be less costly and more politically acceptable than a system whereby inspectors were stationed on a permanent basis.

Continuing their discussions from earlier in the year, the so-called ‘Ottawa Group’, which included the US, UK, and Canada, reached agreement on uniform standards for safeguards to be applied when nuclear exporters sold certain “trigger items” such as uranium, fissile material, reactors, or isotope enrichment plants. In this approach, “safeguards should be presented not as an imposition by the supplier but as a joint duty of supplier and recipient flowing from the fundamental responsibility of Governments to ensure that fissile materials are not misused.”

The first General Conference of the new International Atomic Energy Agency in Vienna in 1957

The newly formed IAEA secretariat drafted versions of the safeguards policy, which were debated at the agency’s general conferences held in Vienna between 1957 and 1960 and at independent meetings of the Ottawa group of countries during the intervening years. However, the inspector-based safeguards system favoured by the US, UK, and others proved controversial. India, for example, viewed the proposed approach as “discriminatory in character” due to what it saw as a settlement that favoured countries with the industrial and technical capacity to refine fissile materials. During a private discussion with US, British, and French officials, USSR representative Vasilii Emelianov displayed what the group described as "complete indifference to safeguards and complete skepticism [as] to the effectiveness of any system."

In January 1961, the IAEA’s Board of Governors approved the final policy, known as INFCIRC 26, mandating that safeguards would be applied to significant quantities of existing fissile materials, the production of fissionable materials, and nuclear facilities. This included spot checks by IAEA officials to ensure that countries were abiding by the commitments contained within the document. Throughout the process, the document underwent numerous amendments and revisions, leading one of the early safeguards officials to later describe (p.55) INFCIRC 26 as "one of the most convoluted pieces of verbal expression in history" which "few people could comprehend, except in long discussion with the handful that did." Despite this, on 31 January 1961, the IAEA finalised the INFCIRC 26 safeguards document with a vote of 17 to 6 at the annual meeting.

An IAEA for AI?

While it is tempting to look to the IAEA for inspiration for the governance of AI, we should remember that the agency was a product of a world very different from the one we live in today. It was created using different assumptions to solve different problems based on different motivations.

The IAEA was founded to secure international agreement for an inspection programme whose purpose was to monitor how states were using fissile materials. Today, by contrast, universities and the private sector lead AI development. Nuclear technology is organised around a scarce resource, fissile material, which can be detected in its raw form and at the point at which the intensive process of refinement begins. And while it is possible to process radioactive materials, one cannot produce the raw materials on which the technology is based. That creates a very different dynamic vis-à-vis powerful machine learning models, which can be built anywhere in the world provided there is appropriate access to data, compute, and technical capability.

While it took the better part of a decade for the seed of an IAEA to germinate into a fully fledged organisation backed by international law, the Ottawa group of countries rapidly developed and consolidated the core ideas behind the organisation in the wake of the Atoms for Peace speech. The majority of risks associated with nuclear technology were already known, whereas the risk profile of today’s machine learning models is likely to increase as models become more capable. As a result, we have a window to study these systems––and their risks––to make sure we alight on the best ideas that work for the future, not just those that worked well in the past.

The danger with any model, image, or metaphor is that it holds us hostage. The map is not the territory, though it can certainly prove useful should we examine it closely enough to see its strengths, deficiencies, and nuances first hand. As we do so we will likely find that we need to move beyond coarse-grained comparisons and tailor our institutions to the nature of the technology we seek to regulate.
A review of 'Risky Artificial Intelligence: The Role of Incidents in the Path to AI Regulation' by Giampiero Lupo of the Institute of Applied Sciences and Intelligent Systems, National Research Council of Italy.
Champ Panupong Techawongthawon, Visualising AI by DeepMind

Earlier this week, Giampiero Lupo of the Institute of Applied Sciences and Intelligent Systems, National Research Council of Italy, wrote in the Law, Technology and Humans journal on the role of 'incidents' in shaping AI regulation. Lupo draws comparisons between AI and other new, complex technologies––such as the advent of nuclear power and the introduction of the automobile––to connect the cataloguing and analysis of AI incidents (defined as situations in which AI systems caused, or nearly caused, real-world harm) with the ability to effectively regulate artificial intelligence.

The paper suggests that the spread of advanced technology tends to result in a divide between those excited by its novelty and those wary of its risks, while the coexistence of new and old technologies typically introduces novel challenges that require regulation in order to safely integrate them into society. It reminds us that, as with other advanced technologies, concerns about safety, novel risks, and sudden change can disrupt the systems, structures, and arrangements that people are accustomed to and comfortable with.

On the one hand, norms can control technological change, restricting its scope and impact on the existing order; on the other, it is the sense of displacement that new technology brings that prompts a rush to regulate in order to restore or protect normalcy. This dynamic is particularly important "because AI refers to autonomous entities by definition and involves technologies that can act as intelligent agents that receive perceptions from the external environment and perform actions autonomously." As the paper later acknowledges, however, 'AI' is challenging to define: the technology exists as a constellation of data, people, power, and technological practice. AI systems might be agentic or tool-based, use-case specific or generalist, follow hard-coded rules or 'learn' from examples. Many of today's systems are not capable of 'autonomous' action, and fewer still are under a restrictive definition. (Autonomy should not, however, be confused with capability or the potential to cause harm.)

According to the paper, the regulation of emerging technologies (including AI) can be connected to two drivers: uncertainty about the potential impact of a technology and its risks, and information gathering in the aftermath of undesirable or unintentional events that result in harm, injury, or damage. The latter, which are the primary focus of the author, are referred to as ‘incidents’. Incidents shape the regulation of emerging technologies because, as technology becomes more sophisticated, it becomes harder to identify safety issues and their impact on individuals, society, and the environment in advance of their use.


Historical parallels: from automotive to AI

The theoretical basis through which AI incidents are connected with regulation is the notion that crises provide insight into the operation and effects of new technology. Drawing on Yale sociologist Charles Perrow's Normal Accident Theory, which suggests that situations exist wherein the complexity of systems unavoidably leads to accidents, the paper proposes that the sophistication of AI makes incidents both inevitable and unpredictable. In this framing, it is only after a crisis that safety issues can be identified, the technology's impact fully understood, and its inner workings revealed. Describing such moments as 'crises' is perhaps particularly apt when using the clinical definition: 'the turning point of a disease when an important change takes place, indicating either recovery or death.' Perrow's idea, which was originally formulated in the wake of the Three Mile Island disaster, is applied by the author to the automobile industry in the United States.
High risks and numerous casualties also had an impact on regulatory regimes that tried to improve the safety of the use of automotive technology. For instance, the 50,000 lives lost per year in 1966 in the United States contributed to the change of paradigm from ‘auto-safety’ to ‘crashworthiness’. The auto-safety paradigm was based on the assumption that as soon as nobody hits each other, no one will get hurt; therefore, this approach focuses on the ‘three Es’: (1) engineering roads to limit the possibility of collisions and equipping vehicles with reliable brakes and steering, (2) educating drivers and pedestrians to avoid collisions and (3) drafting and enforcing rules of the road to discipline drivers’ behaviour. In contrast, the ‘crashworthiness’ paradigm diffused since the late 1960s considered that a number of incidents on the road are unavoidable; therefore, car manufacturers had to design and implement technologies like seat belts and airbags that limit the impact of incidents on the human body.
By this account, the move away from 'auto-safety' and towards 'crashworthiness' represented a shift of responsibility for the consequences of incidents away from drivers and towards the developers of automobiles. Drawing a parallel with contemporary AI systems, the paper suggests that both are "characterised by a complex interaction between technology and human agents." As a result, determining where responsibility lies when failure occurs can prove challenging. For the automobile industry, the question is whether the driver, the car’s manufacturer (or possibly a third party or the quality of transport infrastructure) is to blame. For AI, the question is more complex given the scale of the 'value chain' underpinning modern-day systems: the developers building the technology, those deploying models for enterprise and consumer applications, and individual users all have the capability to cause harm. For both the AI and automotive industries, how best to determine liability amongst different parties in the aftermath of an incident can be a challenging question to answer.


AI incidents and AI regulation

The bulk of the paper explores the link between AI incidents and regulation by examining how incident analysis can inform the regulatory agenda. To do so, the paper describes two phenomena as reactions to the uncertainties and risks associated with the rapid diffusion of AI. First, there is the adoption of so-called 'soft laws': frameworks, guidelines, and ethical principles associated with the development and deployment of AI. Second, there is national and supranational legislation whose goal is to design and implement regulatory frameworks governing the use of AI. The research draws on AI incident databases to understand their potential for law-making and compares legislation (focusing on the European Union's AI Act) with AI incident data. The databases used by the paper include: Where in the World is AI?, an interactive database managed by the Responsible AI Institute; the AI Incident Database, developed by the Responsible AI Collaborative; and the AI, Algorithmic, and Automation Incidents and Controversies (AIAAIC) Repository, a database containing incidents and controversies involving AI grouped by labels including type of harm, location, and system provider.
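To make the comparison concrete, here is a minimal sketch of the kind of tallying such databases support; the record structure and the example entries are hypothetical, and are not drawn from the actual schema of any of the repositories named above.

```python
# A hypothetical incident record and a simple tally by sector.
# Field names and entries are illustrative only, not the AIAAIC schema.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Incident:
    title: str
    sector: str       # e.g. "police", "government", "technology"
    harm_type: str    # e.g. "privacy", "financial", "physical"
    country: str

incidents = [
    Incident("Facial recognition misidentification", "police", "privacy", "US"),
    Incident("Automated benefits decision error", "government", "financial", "NL"),
    Incident("Chatbot offering unsafe advice", "technology", "physical", "UK"),
]

by_sector = Counter(incident.sector for incident in incidents)
total = sum(by_sector.values())
for sector, count in by_sector.most_common():
    print(f"{sector}: {count / total:.1%} of recorded incidents")
```

It is proportions of this sort, computed over hundreds of real entries, that the paper sets against the categories singled out by soft-law documents and the AI Act.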

The first area of study is the Where in the World is AI? map, which reports incidents and contains general information about how AI is used around the world. The resource classified 323 out of 430 (75%) news articles written about AI as ‘harmful’, while only 22% were deemed to contain positive views about the technology. The paper suggests that "the media’s approach may contribute to the diffusion of a diffident attitude towards the use of AI that may also influence the evaluation of experts or policymakers involved in law-making." With respect to 'soft laws' such as ethical guidelines, the analysis found that 37% of the 108 documents investigated included sentences indicating potential positive outcomes caused by AI. The study also found that the Where in the World is AI? map advocated for a strategy of best practice analysis, which is present in the European Union's AI Act. We should note, however, that reporting is not itself an incident. While the tone and tenor of reporting shapes the way in which information is consumed, how best to delineate between the impact on the policy environment of reporting on incidents and that of the incidents themselves remains an open question.

Next, the paper draws on the AIAAIC repository to consider the different sectors most commonly affected by AI incidents. The analysis suggests that, after the 'technology' sector responsible for building and deploying AI, the sector most commonly affected by AI incidents is 'government' (21.54%). While the paper notes that the high number of AI incidents associated with government usage is not reflected in soft laws (1.85% of the documents address AI applied in public administration), the AI Act categorises several types of AI applied in governmental services as 'high-risk' applications. While the study acknowledges that "it is not possible in this paper to assess how much the empirical reality of AI incidents in the government sector has influenced the EC strategy" it also proposes that it is "plausible that highly publicised events may have somehow affected the inclusion of some systems in the high-risk category."

The AIAAIC data show that a large proportion of reported incidents took place in the 'police' sector (39.8%), while the 'justice' sector accounted for 5.6% of incidents. The paper notes that––despite the relative lack of incidents associated with the second category––the AI Act regulates different types of AI systems applied in the judiciary by classifying them as high risk. The study suggests that this shows "an evident concern, partially corroborated by the empirical reality of incidents, towards such systems." Given the small number of incidents related to the use of AI in the legal system, however, it is difficult to determine the extent to which the decision to classify such systems as high risk is corroborated by the evidence presented by incident databases, partially or otherwise. Similarly, despite the high number of incidents involving illegitimate surveillance (129 of the 871 incidents in the AIAAIC database), the study found that ethical documents largely overlooked the issue, with only 13% of the documents investigated focused on regulating AI-based surveillance systems.


Conclusion: The case for incident analysis in AI policy

The paper makes a strong argument for viewing incidents as significant drivers of the regulatory environment. The case for viewing AI incidents––in addition to uncertainty about the potential impact of the technology and its risks––as relevant factors for decoding the genesis of regulatory proposals is a compelling one. Perrow's Normal Accident Theory and the introduction of historical case studies focused on nuclear energy and the automotive industry provide a powerful lens through which to view the relationship between AI incidents and regulation. And while there is no 'smoking gun' whose presence neatly confirms or refutes the role of incidents in shaping policy, connecting AI incident data to specific governance proposals is a welcome attempt at analysing the process of regulatory development.

Read the full paper by Giampiero Lupo, Institute of Applied Sciences and Intelligent Systems, National Research Council of Italy here.
