Organisation

The concept of ‘organisation’ refers both to an entity - an organisation - acting in pursuit of a set of goals, and to the process of organising. Every new technical advancement in information-gathering, processing and circulation has created affordances and challenges for how organisation is structured as a process, and for what it enables; think of the telegraph, the printing press, the digital computer, or the internet. That the dominant form of organisation today is achieved through techno-social units that persist across geographies and generations is in large part owed to these technologies. The confrontation of the AI stack with the prevailing systems of organisation has the capacity to inaugurate a new chapter in how organisations function, how they are structured, their daily operations and their mandates, as well as in the more fundamental sense of what kinds of societies they enable.

While ‘culture’ is something that happens everywhere all the time within societies, what becomes recognised as culture is a result of collective recognition through the variable lenses of community, audience, and nation.1 In most democratic societies, the cultural sector is positioned in the role of organisational mediator: it mirrors that collective recognition in its historical, contemporary and future-oriented forms, and transforms it into a resource with, at the very least, a thin notion of publicness attached to it (i.e. access). More generally, cultural institutions conserve and generate meaning through the information encapsulated by their collections, archives, and activities. At the same time, new types of cultural organisations emerge to attend to new societal needs and conditions. When culture is deeply imbricated with information flows and digital interfaces at every scale, cultural organisations are, at once, users and producers of advanced technological societies, investors in and investees of their social and economic affordances.2

At present, the cultural sector might not figure as a prominent space for determining how AI shapes, and will be shaped by, host societies. However, its role as an organisational mediator of collective cultural recognition inadvertently makes it the most critical, and least obvious, battlefield for the advance of AI as an existentially and operationally transformative technology.3 To understand why that is so, and how that role can be performed with greater strategic acumen and direction, it is necessary to attend to how the core elements of today’s AI - DATA, MODEL and COMPUTE - are recoding what organisations are and can be across the cultural space.4 This chapter highlights the emerging fronts that present-day cultural institutions should be aware of as they face questions concerning how their identity, mission, and operational frameworks dovetail with questions of AI’s publicness.

Data is an essential resource without which AI models could not exist. The most recent generation of large language models (LLMs) relies on vast pools of online data to shape its capabilities. Given that all organisations, and those who interact with them, produce data, the question of how data is owned, governed and made available to train AI models becomes a key operational, and possibly existential, one for many organisations.

In December 2023, The New York Times filed a lawsuit against Microsoft and OpenAI for copyright infringement concerning OpenAI’s use of the publisher’s content to train ChatGPT. The case marks, to date, the apogee of an evolving tug of war between two perspectives: ‘fair use’ versus ‘copyright infringement/unjust enrichment’. The former aligns with a vision of AI technology as a creative and transformative augmentation of networked intelligence. The latter foregrounds concerns about proprietary technologies expropriating from the commons. While the notion of ‘fair use’ is specific to the US jurisdiction, and ‘copyright’ is only one possible means for challenging the status quo at present, this dichotomy underlines the lack of agreement as to how the value transfer from data to model should be structured both economically and legally.5

All organisations become interfaces between data producers and AI models. This condition expands the remit of publicness to include organisations that don’t necessarily have an explicit public mission, transforming them into dormant data keepers. What follows situates this emerging organisational mandate within the broader negotiations of data relations, highlighting the specific new modalities for cultural organisations to consider as an aspect of their operations.

The Dawn of Organisational Self-Knowledge

An organisation’s internal materials - archives, CRM records, briefs, minutes, contracts, etc. - hold a wealth of knowledge that, when networked, can provide a richer level of insight into how the organisation functions and what it can infer about its own operations. Different types of data could be cross-integrated through applications built on an LLM-driven interface, via which data from a range of intra-organisational and extra-organisational sources are integrated and queried. These AI applications would sit at a meta level, distinct from traditional data management software such as Salesforce.6 7 The prospect of customisable AI models that learn from organisation-specific materials, making everything about an organisation queryable in natural language, should be on organisational agendas.
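
A minimal sketch of what such an LLM-driven interface could look like in practice, using retrieval over embedded internal documents. The `embed` and `ask_llm` functions are hypothetical stand-ins for whichever embedding model and LLM an organisation adopts; the file names and answer are illustrative, not a description of any specific product.

```python
# Minimal sketch of an LLM-driven interface over internal documents
# (retrieval-augmented querying). `embed` and `ask_llm` are hypothetical
# stand-ins for an embedding model and an LLM endpoint of the
# organisation's choosing.

from dataclasses import dataclass
import numpy as np

@dataclass
class Document:
    source: str   # e.g. "minutes/2023-11-board.txt", "crm/export.csv"
    text: str

def embed(texts: list[str]) -> np.ndarray:
    """Hypothetical embedding call: returns one vector per text."""
    rng = np.random.default_rng(0)          # placeholder vectors only
    return rng.normal(size=(len(texts), 384))

def ask_llm(prompt: str) -> str:
    """Hypothetical LLM call (any hosted or locally run model)."""
    return "<model answer grounded in the retrieved excerpts>"

def query_archive(question: str, docs: list[Document], k: int = 3) -> str:
    doc_vecs = embed([d.text for d in docs])
    q_vec = embed([question])[0]
    # Cosine similarity between the question and every internal document.
    sims = doc_vecs @ q_vec / (
        np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec))
    top = np.argsort(sims)[::-1][:k]
    context = "\n\n".join(f"[{docs[i].source}]\n{docs[i].text}" for i in top)
    prompt = (f"Answer using only these internal excerpts:\n{context}\n\n"
              f"Question: {question}")
    return ask_llm(prompt)

docs = [Document("minutes/2023-11-board.txt", "The board discussed digitisation."),
        Document("contracts/venue-hire.txt", "Hire terms for the east wing.")]
print(query_archive("What did the board decide about digitisation?", docs))
```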

Within organisations, the role of Data Officer emerged in response to data protection laws such as GDPR and carries the primary responsibility of safeguarding personal data.8 The recognition that organisations are keepers of many different forms of data will require further expansion of related roles; this remit will intersect with organisational strategic priorities in novel and challenging ways. While data policy questions around privacy, security, access permissions and other risks will reach a new level of complexity, there are significant affordances to integrating AI tools in organisations in order to increase organisational self-knowledge. Through language, these applications can participate in - rather than be subject to - the organisation’s agenda setting.9

Rather than merely processing information or automating workflows, AI tools are likely to alter the logics, forms, and interfaces of organisations, which are materially designed around human resources, time, and knowledge constraints. However, this can only happen if organisations know enough about the data they are producing, and how to advocate for its diverse applications and utilities (both internally and externally). This means advancing technical literacy across the organisation’s workforce, so that workers are able to take a proactive approach to shaping how the technology transforms their organisation rather than being subjected to top-down deployment of ready-made products (see Minotaur vs Centaur Framework below).

Latent Data Troves

There are complete art historical movements that are not even covered in current datasets. And therefore they are exiting our digital cultural memory. Plus the issue of online context collapse that evades the way we understand histories and amongst them of course art histories. Our responsibility as museum workers is to expand those datasets [and their attendant contexts] and account for everything that the data set does not include.

- Noam Segal

Public institutions including census registrars, libraries, and healthcare services understand themselves to be holders of valuable data, and embrace their role as advocates (and brokers) for its use.10 For example, one of the many NHS datasets is the National COVID-19 Chest Imaging Database (NCCID), created during the pandemic as part of the AI in Imaging programme at the NHS AI Lab.11 The dataset was instrumental in advancing new NHS AI screening technologies for health and social care.12

Data is not this mystery thing. Data might be a photograph. It might be a tweet.

- Oonagh Murphy

With the coming integration of AI systems across many domains, organisations that hold cultural datasets can now consider their responsibilities and priorities to be operationally aligned with those of other industries and organisations that hold valuable public data. Whereas cultural organisations have tended to confine their identity affiliations to other cultural organisations, prioritising data stewardship and governance means that they can think strategically about their datasets alongside other types of organisations trying to achieve similar structural dynamics related to their internally set (by policy) or externally mandated (by law) obligations.

Some libraries and research institutes already consider their collections, and the metadata around those collections, to be essential to the services they provide (e.g. the British Library, the Natural History Museum).13 However, the ability to leverage these datasets requires robust infrastructure: on the one hand, to create and maintain the datasets (digitisation, analysis, and distribution); on the other, to develop new capabilities when training models with these datasets (i.e. making new internal or public-facing applications). In many instances, the development of infrastructure has been outsourced to the private sector. For example, cultural institutions across the globe have relied on Google Arts & Culture to digitise their collections. Meanwhile, the NHS recently contracted Palantir to create a distribution system for patient data access.14

In order for organisations to make the best use of the value that their data holds as a public resource in the coming era of training data economies, transferable learnings and AI-specific know-how will need to emerge as a form of professional training. Cultural institutions will need to reorient some of their extant definitions of public mission and value with respect to data. This includes understanding how to correctly price the labour of maintaining and evolving the institutions at the foundation of these datasets, as well as how to structure the technical, economic and legal terms on which their relationships with developers of AI models are to be constructed.

To address the inegalitarian harms of datafication - and develop socially beneficial alternatives - will require democratising data social relations: moving from individual data subject rights, to more democratic institutions of data governance.

- Salomé Viljoen 15

Data Is Relational

Publics produce many types of data. Data, in aggregate, produces publics - groups who may not have recognised themselves as allies, but who, through networked datasets, form a collective able to bargain for data rights.16 Much is missed in the metaphor of data as private property. Data becomes economically useful and valuable in relation to other data.17 Data is not owned, as such; its value is produced, contextualised and operationalised as indexable insights into the world, whether that means Uber using public transport network data for logistics calculations, or ChatGPT using Reddit posts to build an articulate language model.

In its current form, legislation such as the GDPR in the EU can only protect the interests of individuals who incur harm from the improper use of their data, rather than protecting the interests of digital publics whose behaviours are both shaped by, and constitute, the value being extracted from them. While GDPR is necessary for personal security, data on the open market for analysis or model training is only valuable in relation to other data, which places the paradigm of individual data ownership under stress.18 Considering protections and bargaining vehicles from a relational perspective highlights the need for data governance at the organisational scale.19

Trusted Data Stewards

We’re at the beginning of public organisations making a developmental shift: realising that they are owners of data that belongs to the public. These actors will need to understand that the data they hold might be meaningful for citizens and that they might be mandated to provide this data in a useful way. There needs to be a big cultural shift to seeing providing data and governing data as a service for the public sector.

- Theresa Züger

One way to address the question of organisation around relational data is not through the lens of private ownership, but by considering new supporting infrastructure for trusted data stewardship. A novel family of organisational forms has been proposed and tested under the wider umbrella of data coalitions: data trusts, data collectives, data unions, data cooperatives.20 Their governance models reflect a commitment to a thick public framework of claims over data as a collectively-stewarded resource. However, for the time being, data coalitions remain a niche experimental space and are legally ambiguous. This is in part a product of the immaturity of the ecosystem. There is an insufficient level of awareness amongst organisations about how the economic and infrastructural landscape is being reshaped by AI, and about what collective responses are required to steer the political economy of data markets, and their governance forms, towards ensuring thick forms of publicness.

Trust and verifiability protocols between data producers and stewarding organisations, as well as between stewarding organisations and data clients, will form the necessary scaffolding for this emerging ecosystem, in which ‘escrow agents’ or ‘data trustees’ can act as trusted intermediaries to broker terms of data and value exchange. New tools are emerging: cryptographic protocols such as zero-knowledge proofs, which allow information to be verified without revealing its contents, and federated learning, which allows AI models to be trained without centralising data.21 22
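
As a rough illustration of the federated approach, the sketch below implements federated averaging across three hypothetical data stewards: each trains a small model locally and shares only weights, never raw records. The linear model and synthetic data are illustrative assumptions, not a description of any deployed system.

```python
# Minimal sketch of federated averaging (FedAvg): each data steward
# trains locally and shares only model weights, never raw records.
# The tiny linear model and synthetic data are illustrative only.

import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One steward's training pass on data that never leaves its premises."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # least-squares gradient
        w -= lr * grad
    return w

def federated_round(weights, stewards):
    """Aggregate locally trained weights, weighted by dataset size."""
    updates = [local_update(weights, X, y) for X, y in stewards]
    sizes = np.array([len(y) for _, y in stewards], dtype=float)
    return np.average(updates, axis=0, weights=sizes)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
stewards = []
for _ in range(3):                       # three independent data holders
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    stewards.append((X, y))

w = np.zeros(2)
for _ in range(20):                      # 20 communication rounds
    w = federated_round(w, stewards)
print(w)                                 # approaches [2.0, -1.0]
```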

Sovereign AI & Dataset Gatekeeping

The idea of ‘Sovereign AI’ addresses the question of ensuring a nation’s ability to produce AI capabilities using its own infrastructure, data, workforce, and business networks.23 Insights into cultural complexities often live in the public cultural sector, spaces where cultural meaning, archiving and contestation happen. This poses questions about ownership at every level of the AI stack, but particularly relating to the datasets that nations wish to control, preserve, and advance. For example, Spain is currently building its own publicly funded LLM, which will feature Spanish, Catalan, and Basque, as well as other languages spoken in Spain.24

In these instances, existing public cultural institutions are positioned most closely to the production and provision of data services. They may be in a strong position to shape the narrative around how cultural data and resources are built, deployed, and stewarded.

Soft Power Diplomacy

We see the opportunity of accompanying the technical transformation that is happening anyway, which is preparing the international system for: its impact on nation states, [on] the concept of nation states, on how to organise a framework. And I'm not just talking about regulations, but a real international, a global or even planetary governance framework that goes beyond our human-centric approach. And preparing, ultimately, our stakeholders for that. We like to say that in the future, all diplomacy will be tech diplomacy.

- Martin Rauchbauer

At the level of the cultural sector, public institutions must understand themselves as intermediaries of the role of AI in society. As with any significant technocultural shift, this is merely an extension of their existing soft power in shaping contemporary cultural narratives and cultivating historically and contextually grounded avant-garde voices and ideas. The two main barriers at present are a resistance to making technical expertise a priority, and the challenge of developing the intersectoral standards that the scale of foundation models’ inputs (datasets) and outputs (societal impacts) demands. For example, the impacts of AI models cut across the domains of education, language and visual culture, and require cultural organisations, policy-makers, media companies, researchers and tech companies to form alliances. Working groups formed across institutions, or deliberative assemblies for public input, could lead to the achievement of concrete goals and the production of useful frameworks. One example of such a collaborative framework formed around shared objectives is the UN SDG Data Alliance, a multi-stakeholder partnership for producing, governing, and sharing geospatial data and processing capabilities across developing countries in order to produce insights oriented towards achieving the UN’s sustainable development goals. Similarly, institutions across domains are not only keepers of their own data, but stand to benefit from forming coalitions around mutual objectives, new use cases and governance practices.

LLMs represent an exceptionally narrow strand of a very diverse research field that takes a range of approaches to what ‘intelligence’ means. By choosing to make AI synonymous with LLMs, one ends up making narrow claims on ‘intelligence.’

- Agnes Cameron

Models are ambiguous entities that are proving hard to define both legally and conceptually. There is an inherent tension between the modus operandi of the law and that of technical development. Judgements made through civil society mechanisms - whether philosophical or legal - that make prescriptive demands on those developing AI systems may lag behind technological development, and expectations codified into legal concepts may elide technical decision-making processes that appear opaque (even to those developing them). Nevertheless, incoming legislation, such as the EU AI Act or the Blueprint for an AI Bill of Rights in the US, as well as the work of dedicated public AI-oriented organisations, sets some of the apex terms of the negotiation between the tech sector and the public.25 Given that we are still at the stage where foundation models are yet to be defined as legal and public-facing entities, what happens culturally in different societies, as well as what positions and narratives cultural institutions adopt and perpetuate, will matter.

Disambiguating Technical Opacity and Transparency

Models are seen as opaque because some of their decision-making processes remain unexplained. ‘AI explainability’ is a term used in long-running debates about AI, referring to attempts at philosophical anatomies of the decision-making process in models, to computer science approaches to ‘explainable AI’ architectures, and to explainability mechanisms which provide post hoc explanatory theories as to why an AI system chose one outcome over another.26 27

But at the organisational scale, ‘transparency’ and ‘accountability’ mean something quite distinct, going beyond the explicability or interpretability of software-level considerations alone.28 Instead, questions are posed about the entire AI training pipeline as a holistic technology: what training data were used, on what architecture, based on which design decisions, and with what fine-tuning methods. In such cases, how a certain decision was made by the system, rather than an explanation as to why, becomes the much more pertinent organisational question. Crucially, this should be separated out from the educational function of teaching a public how AI systems work, a project that has been explored by many states, organisations and artists.29
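
A minimal sketch of the kind of pipeline-level transparency record an organisation might request or publish, covering the questions listed above. The field names and example values are illustrative assumptions, not an established standard, though the intent is close to that of model cards.

```python
# Illustrative sketch of a pipeline-level transparency record an
# organisation might request from, or publish about, a model it uses.
# Field names and values are invented for the example.

from dataclasses import dataclass, field

@dataclass
class TrainingPipelineRecord:
    model_name: str
    architecture: str                 # e.g. "decoder-only transformer"
    training_datasets: list[str]      # sources, licences, collection dates
    design_decisions: list[str]       # e.g. filtering rules, dedup policy
    fine_tuning_methods: list[str]    # e.g. "instruction tuning", "RLHF"
    known_limitations: list[str] = field(default_factory=list)

record = TrainingPipelineRecord(
    model_name="collection-assistant-v1",
    architecture="decoder-only transformer (hypothetical)",
    training_datasets=["digitised catalogue records (CC0)",
                       "public web text snapshot, 2023"],
    design_decisions=["personal data removed before training"],
    fine_tuning_methods=["instruction tuning on curatorial Q&A pairs"],
    known_limitations=["sparse coverage of non-European art movements"],
)
print(record.training_datasets)
```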

Delineating explicability and transparency from ‘demystifying’ complex systems

The situation at other layers of the AI stack is evolving. More scrutiny is being directed at datasets - a subject of increasing interest to artists, who find this media-centred domain to be native to their skillset (see Chapter 2). Each tech company deploys a divergent strategy in relation to hardware. While Meta has brought its chip R&D in-house to reduce reliance on NVIDIA, HuggingFace adopts a seemingly noncompetitive strategy - or distributes its risk exposure, depending on how its operation is interpreted - by partnering with multiple hardware suppliers.30 OpenAI, meanwhile, is seeking to raise trillions to build its own chip manufacturing venture.31

The model is at the heart of the AI technology pipeline as far as the specific directionality of AI’s impacts on society is concerned. While it is true that models would not be able to perform without access to data and compute, it is the model, once fine-tuned, that determines the nature of the compression, and the kinds of interpretation, reasoning, and decision-making that can emerge in the application of the technology. For this reason, it might be useful, for the sake of analysis, to isolate the model as a separate entity in some negotiations (e.g. when it is necessary to locate accountability for specific outcomes), and to see it as integrally connected to the data and compute levels in others (e.g. antitrust, and ensuring that no monopoly or collusion occurs across the data-model-compute trifecta).

Cultivating Trust and Verifiability

Trust and verifiability protocols that are technically integrated across the AI stack will become a necessity as our interactions with opaque systems increase. Not only will users want to know whether model outputs can be trusted; there will also be a need to verify our own identities (i.e. proof of personhood) as well as the credentials of the media around us.32 The development of capabilities to track provenance and attribution, by bridging blockchain infrastructure with the running of models, is a new frontier that promises major advances in the years to come.33

Compression is possession

- Matt Prewitt

Models Offer a New Compression Technique

The search for legal definitions and governance solutions cannot be collapsed into technical fixes, and there are strategic reasons not to commit to a singular ontological lens and legal pathway. Therefore, it is appropriate to experiment with different metaphorical framings that depart from the core logics of the foundation model as it exists today and that can inform deeper shifts in the ways in which governance for the public good is enacted.

Models compress data and communicate their insights in various human- and machine-readable ways. They offer a new compression technique which, at a minimum, allows for ‘middle of the road’ synthesis and summary of a vast collection of sources, and, at best, produces new kinds of syntheses. Functionally, this process is analogous to other compression techniques such as writing and printing, with the important caveat that the scale of AI model compression transcends previous human-industrial capabilities. Just as the development of copyright and patent law tracked the development of compression technologies at a new scale (in that case, the printing press), it is appropriate to consider whether a new IP category is necessary for the scale of compression that foundation models instantiate.
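
To give a rough sense of that scale, the back-of-the-envelope arithmetic below compares an assumed training corpus to an assumed model size; all figures are illustrative assumptions, not measurements of any particular model or dataset.

```python
# Back-of-the-envelope illustration of the scale of model compression.
# All figures are illustrative assumptions.

params = 70e9            # a 70-billion-parameter model
bytes_per_param = 2      # 16-bit weights
corpus_tokens = 10e12    # a ~10-trillion-token training corpus
bytes_per_token = 4      # roughly four characters of text per token

model_size_gb = params * bytes_per_param / 1e9
corpus_size_tb = corpus_tokens * bytes_per_token / 1e12

print(f"model:  ~{model_size_gb:,.0f} GB")     # ~140 GB
print(f"corpus: ~{corpus_size_tb:,.0f} TB")    # ~40 TB
print(f"ratio:  ~{corpus_size_tb * 1000 / model_size_gb:,.0f}x")  # ~286x
```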

What does it mean to build in the public interest? Having people weigh in on every decision isn't the answer, so we’re in the process of understanding how much of that is useful, knowing how to infer their interest and creating governance structures in which they can participate.

- Saffron Huang

New Public Mission Mechanisms

The idea of publicly funded foundation models is slowly taking hold. Several recent announcements report investments in this public mission: Chicago’s Argonne National Laboratory has started to build the trillion-parameter model AuroraGPT.34 The UK government has confirmed a large investment in supercomputing facilities in Bristol dedicated to building Isambard-AI. Spain has announced a large language model trained in Spanish and in the country’s co-official languages, located at the Barcelona Supercomputing Centre.

Research initiatives such as the Collective Intelligence Project (CIP) position themselves as incubators for new governance models for the age of AI. Departing from the notion that new levers of collective input are required for a technology that leverages collective intelligence, CIP is focused on designing mechanisms for public participation in the development of large language models at companies such as OpenAI and Anthropic.35 For now, CIP positions its work as a series of experiments identifying the right avenues for bringing the public into the developmental process.

Such an initiative reflects the importance of a new class of third parties that can operate as an ingress point where mediation is required between a public and an organisation (typically a corporation) developing a societally significant technology. What is critical to bear in mind is that the mediation is not purely diplomatic, but offers mechanisms that can be technically integrated. With many AI models today relying on media production as training data for a certain subset of compressions and queries, the cultural sector offers a context-specific domain for the incubation of new mechanisms for such public scrutiny.

Alignment’s Shadow Negotiations

Within the narrow, technical definition of ‘alignment’, measures need to be designed to ensure that AI models pursue their objectives in a way that does not cause negative externalities. However, there is today a broader politics around ‘alignment’ that elevates the concern with negative externalities to the level of existential risk for humanity.36 The latter view is propagated by a group known as Effective Altruists (EA), which includes many prominent tech-world figures such as Elon Musk, Sam Bankman-Fried, and Facebook co-founder Dustin Moskovitz. Lobbying efforts by this broad group drawn from the Silicon Valley establishment have been influencing political policy with their own agendas (either directly or through affiliated research institutes and donor organisations).37 Given that these entities are unaccountable to public funding or oversight, the UK Parliament has voiced concerns over regulatory capture if officials rely on a narrow pool of private sector-funded expertise.38

The technical question of alignment - as to what constitutes a public good or a public risk - is being popularly debated within the cultural field; this could jeopardise the potential value of the debate, especially when its terms are in turn leveraged as narratives within lobbying efforts. While cultural institutions may not be directly involved in these shadow negotiations, they are platforms for advancing publicly-interfaced narratives, and as such bear a responsibility to understand how their positioning plays into larger narrative wars.

Ways of Seeing in Latent Space

Cultural organisations are curators and stewards of past and future archives of what society deems valuable as cultural data, through the production of physical collections and, increasingly, digitised archives. The narratives that weave connections between these data points become part of the institution’s criteria of truth. It is the mandate of these organisations to introduce new data points that challenge their pre-existing views (e.g. the decolonisation of collections), or, more broadly, challenge who is ‘important’ and why. They may also offer an evolutionary timeline for such changes (e.g. evolving historical context).

A similar process takes place in the training of AI models on media and metadata to create a model’s embeddings: the low-dimensional mathematical silhouette of high-dimensional real-world information about the data. When embeddings are generated, they expose relationships between concepts, i.e. their context and meaning. Creating an intervention at the level of embeddings would mean being part of a foundation model’s development team. However, fine-tuning by adding further context and specificities can also build new meanings into the model and change its outputs. This opens up a new organisational space with an ‘old’ mandate - creating frameworks for how meaning is mediated through AI - that cultural workers, artists, critics, audiences, etc. are well-positioned to take up.
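
A toy illustration of how embeddings expose relationships between concepts: nearby vectors stand for related meanings. The three-dimensional vectors below are invented for the example; a real model’s embeddings are learned from data and have hundreds or thousands of dimensions.

```python
# Illustrative sketch of how embeddings encode relationships between
# concepts. The vectors are invented for the example, not taken from
# any real model.

import numpy as np

embeddings = {
    "portrait":      np.array([0.90, 0.10, 0.20]),
    "self-portrait": np.array([0.85, 0.15, 0.25]),
    "landscape":     np.array([0.20, 0.90, 0.10]),
}

def cosine(a, b):
    """Similarity between two concepts in the embedding space."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

for a in embeddings:
    for b in embeddings:
        if a < b:
            print(f"{a} ~ {b}: {cosine(embeddings[a], embeddings[b]):.2f}")
# "portrait" and "self-portrait" score highest: nearby vectors stand for
# related concepts, which is the layer that fine-tuning can reshape.
```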

Minotaur vs Centaur Framework of Operational Transformation

Attempts at ‘digital transformation’ within the cultural sector have highlighted the struggle of cultural organisations to respond operationally to new technological landscapes. While this has often been attributed to a lack of relevant technical literacy and know-how within cultural organisations, an equally important, but underexplored, dimension lies in the relative poverty of organisational design innovation, as well as the lack of capacity for the risk-taking necessary to transform how organisations function. The stereotypical formulation sees cultural organisations forced to play ‘catch-up’, with external forces pushing the cultural sector to streamline investment into surface-level applications. While such an approach has definite benefits when it comes to certain software integrations (e.g. Salesforce, Mailchimp, Notion), reducing technical innovation within organisations to narrow systems migrations for existing functions and KPIs forecloses the opportunity to explore more meaningful, holistic innovation.

That transformation can only arise where a sufficient degree of engagement with the technical affordances meets a daring approach to rethinking organisations from the ground up. While finding the sweet spot between ‘operational’ and ‘transformational’ when interfacing with new technological landscapes may be ambitious, particularly as there is no universal out-of-the-box solution, the gambit has the potential to offer organisations the kind of edge they will likely need in circumstances where the general relevance of cultural institutions is constantly brought into question.

The contrasting approaches of top-down digital product integration versus ground-up, decentralised experimentation with new tools by an organisation’s employees dovetail with the ‘Minotaur versus Centaur’ approaches to integrating AI tools into organisations. The former follows the familiar lines of vertical centralised control, where workers are subjected to planning and management by a newly integrated AI application to which they are expected to conform. The latter describes human workers managing how AI will carry out narrower tasks within workflows. The Centaur (human-headed, horse-bodied chimera) approach involves organisations adopting bottom-up operational R&D for specific teams as well as for the entire organisation. This means granting workers agency over how AI applications are integrated, whilst promoting documentation, reflection, and a proactive stance on modifying workflows and operational habits. This approach is also much better suited to iterating on holistic organisational innovation, given the risks associated with transforming operations in a centralised manner.39

The training and running of models requires GPUs (Graphics Processing Units), which are highly complex and resource-intensive to produce. Manufacturing these cutting-edge semiconductor devices requires state-of-the-art fabrication plants capable of etching transistors only a few nanometers wide, entailing billions of dollars in initial investment for construction, and highly specialised production pipelines. Unlike the data and model elements of the AI stack, where inroads into negotiating their publicness are currently present from an organisational perspective, negotiating the publicness of compute is a highly constrained political game.

Supply-chain Geopolitics

The production of GPUs involves a sophisticated global supply chain for high-purity materials and components, necessitating robust logistics and international cooperation. The need for political and economic stability, achieved through cooperation or domination, to sustain such a long-term and capital-intensive venture means that only a handful of countries can realistically aspire to establish and maintain their own GPU manufacturing capabilities.40 US restrictions on exports by the three main chip producers, all US-based, to China and parts of the Middle East paint an explicit picture of the geopolitical stakes at the heart of the control over computational technologies.41

IP Empire

GPUs are complex pieces of hardware that incorporate various technologies protected by intellectual property rights, including patents and copyrights. IP law covers a wide range of elements, from the hardware design (e.g. the architecture, specific circuits, and manufacturing techniques) to the software that drives them (e.g. drivers, firmware, and application programming interfaces (APIs)). Additionally, GPUs may involve third-party technologies licensed by the GPU manufacturer to enhance their product’s capabilities, such as specialised computing algorithms or software libraries. Navigating the maze of intellectual property rights and international and national regulations adds another layer of complexity for new entrants, making market entry available only to actors who can deal with challenges of this scale. For example, OpenAI’s ambitious efforts to raise trillions to create its own transnational chip-building coalition between chip-makers, model-makers, and power suppliers are likely to be replicated by rivals.42

State as a Broker of Compute Allocations

While the UK might have formulated policy to support UK-based AI innovation, it is still reliant on other nations and organisations to supply it with hardware, at least for now. The new open science Isambard-AI Supercompute Cluster, for example, is being provisioned with 5,000 Nvidia GH200 superchips, and the American multinational Hewlett Packard Enterprise won the tender for the project.43 At present, the state is the central broker of how access to and distribution of compute are to be governed. For the cultural sector to develop its own AI capabilities and to participate in the evolving public AI ecosystem, arguments and mechanisms for influencing the state’s rationale for allocating computing resources will be required.44 45

Experiments & Research for a Less Computationally Intensive AI Stack

Some projects are exploring decentralised computation as a means of changing the way that users can access and share computational resources, potentially removing the reliance on centralised data storage.46 However, the decentralised approach would require major advances in reducing the computational intensity of deep learning. The scale of individual cultural projects may offer a productive testing site for these alternative technological visions; however, the cultural sector must equally be careful about promoting ‘pastoral visions’ of minor tech that are only viable within relatively privileged settings of western formally democratic states.47

Footnotes

  1. The 1982 Mexico Declaration on Cultural Policies by UNESCO defines culture as the distinct spiritual, material, intellectual, and emotional features characterising a society and encompasses arts, lifestyle, human rights, value systems, traditions, and beliefs which form part of the organisation’s remit to protect heritage and foster creativity as a right. UNESCO, About the Culture Sector [link].

  2. Michel Feher, Rated Agency: Investee Politics in a Speculative Age (2019).

  3. Benjamin Bratton, A New Philosophy Of Planetary Computation (2022) [link].

  4. Separating the data and model layers can create an artificial distinction, as they are deeply technically entangled. The distinction reflects the fact that they should be treated as different types of resources because they stem from different sources.

  5. The case of Oracle America, Inc. v Google, Inc., filed in 2010, highlighted the volatility of relying on ‘fair use’ in the contexts of digital economies and product development. The case concerned the claim that Google’s use of the declaring code and structure, sequence and organisation (SSO) of the Java APIs, as originally developed by Sun Microsystems, constituted fair use. The Federal Circuit ruled in favour of Oracle America, finding that the use did not constitute fair use because it allowed Google, as the developer of the Android platform, to enter a market with a competitive advantage over Oracle America. The Supreme Court reversed the decision, instead emphasising the transformative nature of the usage as key to constituting ‘fair use’.

  6. 'Though best known for generating convincing text and images, LLMs such as OpenAI’s GPT-4, and Google’s Gemini are likely to have greater social impacts as the executive centre for complex systems that integrate additional tools for learning about the world and acting on it.' See Seth Lazar, Frontier AI Ethics (2022) [link].

  7. This is related to, but distinct from, existing management tools such as Salesforce or Notion developing LLM capabilities in order to augment their products.

  8. Even organisations that do not explicitly maintain databases are still responsible for the personal information of their employees and associates under GDPR regulations.

  9. Seth Lazar, Frontier AI Ethics (2022) [link].

  10. Citizens’ data is governed in two main categories: personal data that identifies a natural person, including financial records, online browsing and consumer activity; and sensitive or ‘special category’ data relating to health, biometrics, race, religion, sexual orientation, political opinions and ethnic origin. See Information Commissioner’s Office, What is personal data? [link].

  11. The National COVID-19 Chest Imaging Database (NCCID) is part of the AI in Imaging programme at the NHS AI Lab. NHS England, National COVID-19 Chest Imaging Database (NCCID) [link].

  12. NHS England, AI in imaging [link].

  13. For example, the Natural History Museum in London has scanned 80 million specimens for the Distributed System of Scientific Collections (DiSSCo) [link]. Natural History Museum, Digital Collections [link].

  14. NHS England, Federated Data Platform [link].

  15. Salomé Viljoen, A Relational Theory of Data Governance (2021) [link].

  16. Here, ‘the public’ is not a pre-existing entity, but rather a cluster of temporarily organised communities negotiating future trajectories for the larger community’s benefit. See John Dewey, The Public and Its Problems (1927).

  17. Mercedes Bunz and Photini Vrikki, From Big to Democratic Data: Why the Rise of AI Needs Data Solidarity (2022) [link].

  18. Salomé Viljoen, A Relational Theory of Data Governance (2021) [link].

  19. Sylvie Delacroix, Sustainable Data Rivers? (2023) [link].

  20. See RadicalxChange [link], Open Data Institute [link], Data Empowerment Fund [link], Aapti Institute [link], Data Futures Institute at King’s College London [link], Data Trust Initiative [link] and Ada Lovelace Institute [link].

  21. Andrew Trask, Introducing OpenMined Research (2020) [link].

  22. Priyanka Mary Mammen, Federated Learning: Opportunities and Challenges (2021) [link].

  23. Keith Strier for Nvidia, What Is Sovereign AI? (2024) [link].

  24. Computerworld, Spain Will Create Foundational AI Model in Local Languages (2024) [link].

  25. Executive Office of the President of the United States, Blueprint for an AI Bill of Rights [link].

  26. Scott Robbins, A Misdirected Principle with a Catch: Explicability for AI (2019).

  27. ‘The ability to explain the model generally is now the gold standard for building trust and deployment of artificial intelligence systems in critical domains.’ See Dwivedi et al., Explainable AI (XAI): Core Ideas, Techniques, and Solutions (2023).

  28. Scott Robbins, A Misdirected Principle with a Catch: Explicability for AI (2019).

  29. See Alasdair Milne, Turret Theory: Training the human with dmsfctn’s Godmode Epochs (2023) [link].

  30. Santosh Janardhan, Reimagining Our Infrastructure for the AI Age (2023) [link].

  31. Tong et al., ‘Exclusive: ChatGPT-owner OpenAI is exploring making its own AI chips’ Reuters (2023) [link].

  32. Puja Ohlhaver et al., Compressed to 0: The Silent Strings of Proof of Personhood (2024) [link].

  33. Steve Vassallo, AI x Blockchain: The Next Level (2023) [link].

  34. Keumars Afifi-Sabet, The GPT to Rule Them All: Training for One Trillion Parameter Model Backed by Intel and US Government Has Just Begun (2023) [link].

  35. Anthropic & Collective Intelligence Project, Collective Constitutional AI: Aligning a Language Model with Public Input (2023) [link].

  36. See Centre for the Study of Existential Risk, Risks from Artificial Intelligence [link].

  37. See Centre for Effective Altruism, History [link].

  38. 'Throughout our inquiry we encountered mounting concern about regulatory capture. This might occur through lobbying, or because officials lack technical know‑how and come to rely on a narrow pool of private sector expertise to inform policy and standards. Similar problems may emerge from groupthink. These might lead to regulatory frameworks which favour a select group of commercial rather than public interests, for example by creating barriers to new competitors entering the market'. Communications and Digital Committee, Large Language Models and Generative AI (2024) [link].

  39. Robert J. Sparrow & Adam Henschke, Minotaurs, Not Centaurs: The Future of Manned-Unmanned Teaming (2023) [link]. Also see: Ethan Mollick, On Holding Back the Strange AI Tide (2023) [link].

  40. Lucy Rodger et al., Inside the Miracle of Modern Chip Manufacturing (2024) [link].

  41. Stephen Nellis and Max A. Cherney, US Curbs AI Chip Exports from Nvidia and AMD to Some Middle East Countries (2023) [link].

  42. Keach Hagey & Asa Fitch, Sam Altman Seeks Trillions of Dollars to Reshape Business of Chips and AI (2024) [link].

  43. See Unprecedented £225m investment to create UK's most powerful supercomputer in Bristol (2023) [link].

  44. ‘Bridging the digital divide and developing a sustainable compute ecosystem will depend on how well leaders are able to navigate their relationships with private-sector companies, which will be key partners when it comes to enabling growth’. See Bridget Boakye et al., State of Compute Access: How to Bridge the New Digital Divide (2023). [link].

  45. For example, research is currently being undertaken towards policy recommendations revolving around the use of subsidies and taxes to promote socially beneficial deployments of AI through compute allocations.

  46. See Golem [link].

  47. Daniel Chávez Heras, The Digital Pastoral: a Minor Critique of Minor Tech (2023) [link].