The common conception of what an ‘artist’ does involves the production of creative work. This very broad definition encompasses other, narrower labels supported by social conventions and organisational niches, including fine artists, medium-specific artists, contemporary artists, artists who work for hire (e.g. designers, illustrators, and creative directors), and artists as defined by their industries (e.g. film directors and fashion designers). Artists who work with advanced technologies cut across all of these domains, frequently operating across several simultaneously. Currently, artistic work is organised according to the economic specificities and societal expectations attached to each genre or form. The entanglement between how artists work and what they produce means that the integration of AI into individual and organisational workflows sets up a dynamic of feedback and emergence that exceeds the framing of automation and the question of whether machines will replace artists. This chapter refers to artists working across the AI tech stack, engaging with the application and model layers through consumer-facing and specialist tools.1
In order to do justice to the breadth of unfolding transitions and scenarios in the context of AI, a comprehensive treatment of the different operational planes that constitute the structural core of AxAT artistic work is necessary. FAE identifies labour, crafting, aesthetics, systems, worldbuilding, and tech development as the six planes according to which the unique qualities of artistic work can be meaningfully rendered in relation to AI technology as a medium. The question of what happens on these planes when AI systems are deployed may offer insights into wider questions AI technologies are posing for society.
The increasing integration of AI across various sectors is sparking widespread fear of becoming obsolete (FOBO). According to an IMF report, 60% of workers in advanced economies will be impacted by AI through job automation, and there will be changes in how critical decisions, such as hiring and disciplinary action, are made.2
Concurrently, government efforts, such as the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence in the US and the EU AI Act, are attempts to use regulatory frameworks to safeguard workers whilst legislating other guardrails for AI applications.4
All media is now training data.
- Herndon Dryhurst Studio
The AI models that we have today derive from the human labour which comprises
its training data. Large models are trained on vast quantities of text and
imagery originating from the long arc of human cultural production, from chat
rooms to digitised centuries-old encyclopaedias. In the early days of Web2,
Flickr, Tumblr and DeviantArt were used by artists to share their work with a
newly emerging distributed audience. The shift from Web1 to the Web2 platform
economy model turned everyone into a ‘content creator’, whether they were
artists or not, all whilst centralising bodies of data, which today serve as
training data for AI models.6
Leverage IP for Opt In Returns
IP litigation from artists (predominantly in the USA) claims that AI training uses artists’ work without permission, and without the financial returns they are entitled to under law. In the USA, at least 10 cases were launched against AI developers and owners in 2023. One landmark case has been brought to the UK courts by Getty Images.7
However, the longer-term efficacy of litigation is questionable, given the high costs and resource barriers that most artists cannot overcome, as well as the possibility that pending judgments will favour AI companies.
I think we can be quite imaginative in how we use the IP framework of recognition and value in a creative work, and have more innovative ways of compensating and remunerating people in that process.
- Reema Selhi
Leverage Spawning for Opt Out Bargaining Power
One alternative is to foster an environment for meaningful bargaining and collective action. IP has great utility in this case, but safeguards are needed to prevent monopolisation by platforms that squeeze revenue streams to the point that remuneration for most artists becomes negligible. Such safeguards can’t come from quantifiable mechanisms like pricing; instead, they need to emerge from such tried and tested routes as collective bargaining.
The brainchild of founder Jordan Meyer and artists Holly Herndon and Mat Dryhurst, Spawning introduces a consent layer to the internet by giving users the possibility to opt out of their data being used in training.9
While the opt-out strategy may resemble a protectionist measure that is only as good as the willingness of the actors behind the training to enforce it, it offers a new form of bargaining power not just to artists but to all internet users.
Data Brokerage for a Networked Commons
With an increasing awareness of data as shadow labour, data brokerage is
likely to emerge as an AI-era form of collective bargaining. In sharp contrast
to the individualising tendencies of the art market, artists now have an
opportunity to collectivise around their economic value as training data
producers and to test new modes of organising with trusted data stewards (see
Chapter 1), such as forming data unions or data trusts.11
Emerging Accountability Mechanisms
When the outputs generated from such tools as Midjourney, Stable Diffusion and ChatGPT are considered as artistic artefacts, questions of provenance, permission and attribution emerge. The IP of the artefact produced by generative AI is currently either automatically in the public domain (e.g. Midjourney), or is assigned to the prompter (e.g. GPT-4).12
The concept of ‘crafting’ as an ‘activity involving skill in making things by hand’ may seem anachronistic for the era of contemporary art - in which art-making is generally led by conceptual rather than skill-based concerns - and too anthropocentric for the era of AI.18
These tools offer access and insight into the various capabilities of deep learning models. For instance, ChatGPT is an interface into LLMs such as GPT-4 and their transformer architecture. Other tools that use API access to build on top of GPT-4 may be fine-tuned on additional expert information (e.g. CaseText, a legal advice chatbot), but they are indebted, both financially and technically, to the foundation model.22
Iteration at the Core of Synthetic Crafting
Iterative and durational learning are intrinsic to crafting for humans and for machine learning. Refinement follows reflection and is never finished; it merely pauses. Every time a model is prompted, it ‘runs a course’ of the model architecture. Different prompts can produce variable results. These iterative runs can be seen as agents completing a task. Each prompted run produces learnings that inform the next. Working iteratively can lead to a more profound understanding of what the tools offer, as it allows for testing them out and taking them to new places.
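This prompt-refine-prompt loop can be pictured as a toy sketch in plain Python. Everything here is invented for illustration: the `generate` stub stands in for a real model call, and the target string stands in for the artist’s intended result. The loop keeps only the changes that improve the output, mirroring how each iterative run can inform the next.

```python
import random

random.seed(0)

TARGET = "sunlit forest, oil on canvas"  # stand-in for the result being iterated towards

def generate(prompt: str) -> str:
    # Stand-in for a model run: it simply echoes the prompt.
    # In practice this would be a call out to a generative model.
    return prompt

def score(output: str) -> int:
    # How close is this run's output to what we are after?
    return sum(a == b for a, b in zip(output, TARGET))

prompt = "?" * len(TARGET)
best = score(generate(prompt))
for _ in range(5000):
    # Tweak the prompt in one place, run the model, and keep the change
    # only if the output improves: learnings inform the next 'agent'.
    i = random.randrange(len(TARGET))
    candidate = prompt[:i] + random.choice("abcdefghijklmnopqrstuvwxyz ,") + prompt[i + 1:]
    s = score(generate(candidate))
    if s > best:
        prompt, best = candidate, s

print(best, "of", len(TARGET), "characters matched")
```

The point of the sketch is not the search algorithm itself, but that refinement is cumulative: each run’s outcome is carried forward into the next attempt rather than discarded.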
This type of commitment to working closely with a technological system can give those engaged in synthetic crafting unique insights through competency, reciprocal learning, and higher exposure to the glitches and edge cases that a new model might reveal over time. It also means that artists who have skills that predate AI’s advance, and that speak to the sensibilities being harnessed, have an essential knowledge base for mastering and expanding the possibilities of these tools. For example, when Adobe Suite was first released, painters, photographers and physical retouchers were well placed to leverage and explore the tools as they had a deep understanding of how image production works.24
Edging into Uncharted Territory
Typically, we are inclined to downgrade models as we believe these older, less optimised versions offer untapped potential for exploring unique sensibilities and narratives, a feature that the newer commercial models oftentimes don’t accommodate.
— CROSSLUCID
We are still at the very beginning of understanding the particular affordances of AI technologies. Though certain model architectures are developed for specific purposes, their long-tail affordances might emerge later. In such instances, the core affordances of a model are only discovered when it is adapted from one deployment context - often its original intended purpose - and placed in a new one.25
From Deskilling to Reskilling
Delegating skilled labour to a black-box technology potentially reduces agency over production processes. What is gained in efficiency risks being lost to deskilling, and could disincentivise sustained engagement in honing a pre-existing craft or set of skills. Certain skills, such as animation or the writing of code, are becoming subject to varying degrees of automation, putting a wider range of present-day skill-sets and forms of labour at risk.
However, historically, the loss and acquisition of human skills has always been a dynamic process; while some skills may become less widespread, others proliferate, and yet others make a come-back following an era of near total disappearance as cultural actors seek novel tools and ways of working, and as social and market demands shift.26
Though the impact of the new AI tools is still unfolding, a number of specific consequences can already be identified. Tendencies in modern and contemporary art led to macrotrends of ‘embodied’ deskilling, refocusing on cognitive or conceptual contributions in artistic practice. This yielded a loosening of constraints, important for both conceptual and new media practices in the 20th century, and the turn towards research-driven practice more recently. With this latter phase, and the internal pressures of the artistic labour market, came the need for maintaining the stereotypical ‘knowledge worker’ skill set, while simultaneously requiring ‘hard’ practical and technical skills.
The need for technical skills and digital literacy has led to a reskilling phase in education: for example, in the computational arts, which are often self-taught, and in the growing sector of specialised institutional educational programmes such as those provided at the School of Digital Arts, Manchester Metropolitan University, University of the Arts London, as well as New York University’s Tisch School of the Arts.27
Overall, this reskilling has culminated in a proliferation of specialisations within the machine learning domain. Here, specific expertise and literacies - e.g. in game AI, synthetic data, generative systems, language models, specific coding languages for integration work, such as C++ for machine learning and Unreal Engine - can be combined with specific artistic or cultural skills or knowledge to produce re-triangulated specialisations. Consider, for example, the case of the poet Sasha Stiles, who fine-tunes small language models, or the graphic designer Eric Hu, who builds bespoke generative systems for the layout of assets.29
From Model Querying to Seamless Intermediation
The ascent of prompt-engineering as an AI-era craft-based skill is another example, one which requires a certain level of domain-specific knowledge and erudition.30
Prompt-engineering can also be seen as an emerging method to ‘query’ an AI model as a search space - a contingent practice in drawing out important insights. Yet, prompt-engineering as it exists today might be specific to our moment, as newer more malleable interfaces are developed. Examples of potential futures might include user-friendly programming interfaces, such as the music app Semilla.ai, but also cloud service platforms such as NVIDIA’s Omniverse in which many models and software environments can be used together.31
The last decade has seen the re-emergence of artificial neural networks as the dominant computational paradigm in AI. Machine learning outputs from ‘early’ systems, for instance Google’s DeepDream, were initially interpreted as oneiric generative worlds, with their aesthetics capturing the imagination of the general public.32
Forays into aesthetic research, including data visualisations that surface and scrutinise the algorithmic tendencies of AI, became a core domain of artists’ work in the late 2010s.33
Convergence Engines The stabilisation of a specific aesthetic can be
explained with recourse to the data that the model is trained on, but this
does not tell the whole story. Machine learning architectures have an
optimising logic which, in part, determines their outputs. This could be
understood as the artificial neural network’s tendency to converge during the
optimisation process. Optimisation strives to locate the ‘median’ result
within the dataset. The delivery mechanism of algorithmic feeds produces a
similar convergence effect: ‘mid’ content is self-perpetuating.36
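A minimal numerical sketch of this convergence tendency (plain Python, a one-parameter toy rather than a neural network, with arbitrary invented data): minimising squared error over a dataset pulls the single free parameter towards the dataset’s mean - the ‘median’ result described above.

```python
# Toy dataset; the exact numbers are arbitrary.
data = [1.0, 2.0, 3.0, 10.0]
mean = sum(data) / len(data)  # 4.0

w = 0.0    # the model's single 'weight'
lr = 0.1   # learning rate
for _ in range(500):
    # Gradient of mean squared error with respect to w.
    grad = sum(2 * (w - x) for x in data) / len(data)
    w -= lr * grad  # each optimisation step pulls w towards the mean

print(round(w, 4))  # converges to 4.0, the mean of the data
```

Note that the optimiser lands on the average even though no individual data point equals 4.0: the ‘mid’ result is a property of the optimisation, not of any single example in the data.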
Style Capture Artists with a large enough presence in prevalent datasets
(e.g. Common Crawl or LAION-5B), or artists whose portfolios belong to a
highly indexed dataset (in relation to a historical period, style, canon,
etc.) might have their style ‘captured’ by generative AI models trained on
relevant datasets. To an extent, this can also happen when an artist or content producer works within a prevalent genre that is itself captured, with implications for the competitive viability of their work. In either case, users can produce stylistic facsimiles simply by prompting models.38
Recombinant Aesthetics in the New Weird
It is not only extant styles that can be pulled from AI models. There is a possibility of composing new styles by exploring new connections between embeddings (i.e. compressed representations of data). This can be done in the vein of crossing styles, e.g. ‘Breaking Bad by Balenciaga,’ or it can allow for communities to play into their favourite tropes of fan lore while shifting their storylines in divergent directions (e.g. ‘Snow Crash but feminist’) creating forks, and adding fan-created storylines and characters.39
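One way to picture this recombination is as interpolation between embedding vectors. The sketch below is a hedged toy: the three-dimensional vectors and the style labels are invented stand-ins for a real model’s much larger embedding space.

```python
def lerp(a, b, t):
    # Blend two embedding vectors: t=0 returns a, t=1 returns b,
    # intermediate values of t land between the two 'styles'.
    return [(1 - t) * x + t * y for x, y in zip(a, b)]

style_a = [0.9, 0.1, 0.4]  # stand-in for a compressed representation of one style
style_b = [0.2, 0.8, 0.6]  # stand-in for another

blend = lerp(style_a, style_b, 0.5)  # a midpoint between the two styles
print(blend)
```

In a real pipeline, a point like `blend` would be decoded by the model back into an image, text or sound; here it simply illustrates that ‘new’ styles can be composed as positions between existing ones.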
The most important thing when we're talking with artists is to understand their workflow. They're creating new ways to use the technology in a pipeline. They're combining different algorithms, software and hardware and pushing the limits in a way we don’t often see.
- Heather Schoell
Different artists engage with AI tools in very different ways. Though there are many strategies for engaging or intervening at different levels of the AI stack, there is a continuum of proximity and depth of engagement, which leads to different forms of cross-pollination between artistic practices and the systems at the heart of evolving AI tools. The nature of engagement is determined not only by the artist’s technical skillset, but also by the nature of the tools themselves. Is the tool’s code open source? Is a pre-trained model freely available?40
The different modalities in which models are intervened upon as a medium by artists.
Modalities for AI Tool Deployment
There is already a long history of open machine learning tools that coder-artists have been using.41
Again, if the code is open source, a very small minority of artists, with the right support, can build out alternatives. An example of this is KaiberAI, a generative video tool that was developed through support from Gray Area, an art and technology organisation based in San Francisco.43
Model-Making as Meaning-Making
Architectures are generic; weights are indebted to labour, and are much more idiosyncratic.
- Moisés Horta Valenzuela
In the age of foundation models, computations across unprecedentedly vast cultural archives derive meaning through correlations within a model’s embedding spaces. If culture and art-making are to play the role of diverging from inherited convention toward new modes of meaning-making, some of that work will need to take place within the technical system of meaning production itself: the AI model.44
The manipulation of AI models through weights - known as fine-tuning - holds unique potential for artists to perform interrogative work. The architecture of a neural network (i.e. GAN, CNN, transformer, etc.) only becomes a valuable meaning-making agent after training on data. As such, it is the training and fine-tuning of an AI model that engenders the formal aspects of artistic expression, whether individual or collective. Working in this mode, artists become gatherers of datasets, explorers of embedding spaces, engineers of new architectures, and fine-tuners of models. They use their own models, existing models, and combinations of the two in their production pipelines. The schematic ontologies as to where artistic expression originates, who its rightful holder is, and on what terms (e.g. IP and moral rights), will have major bearing on what we consider art to be, how it is to be legally and economically acknowledged, and, in turn, what the agency of its meaning-making in society could be.46
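The division of labour between architecture and weights can be sketched at miniature scale in plain Python (the data points and the linear ‘architecture’ are invented for illustration, not drawn from any real model): the function shape stays generic, while training and then fine-tuning only move the weights.

```python
def model(w, b, x):
    # The 'architecture': a fixed, generic function shape, y = w*x + b.
    return w * x + b

def fit(w, b, pairs, steps=5000, lr=0.01):
    # Gradient descent on squared error over (x, y) pairs:
    # training changes the weights, never the architecture.
    for _ in range(steps):
        for x, y in pairs:
            err = model(w, b, x) - y
            w -= lr * err * x
            b -= lr * err
    return w, b

# 'Pre-training' on one body of data: the weights learn y ~ 2x...
w, b = fit(0.0, 0.0, [(1, 2), (2, 4), (3, 6)])
# ...then fine-tuning the *same architecture* on different data
# shifts the weights towards y ~ 2x + 1.
w2, b2 = fit(w, b, [(1, 3), (2, 5)])
print(round(w2, 2), round(b2, 2))
```

The generic `model` function is the counterpart of an off-the-shelf architecture; everything idiosyncratic - and everything indebted to the data gathered for training - lives in `w` and `b`.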
Systems Builders
Machine learning tools are being integrated into broader artistic complex systems which function beyond the production of visual or textual artefacts. The integration of AI and machine learning within simulations, video games, sensory apparatuses and surveillance systems mirrors the technology’s wider societal deployment, where AI becomes more embedded, rather than being the focal point of the artist’s attention.
- Creative AI Lab
Some artists, instead of looking inward at the internal operations of AI tools, look outward, building tools into more complex, operative systems for research or production. These artists craft tools, craft with tools, or craft tools into bigger systems. Some artists build interesting pipelines without notable technical knowledge by pulling together proprietary tools and techniques into novel workflows.47
This approach, which takes the system as an artform, is less common than methods which give primacy to some combination of process and artefact. An example is the work of 0rphan Drift and Etic Lab’s Interspecies Communication Research Initiative (2021), which aimed to build a reciprocal training programme for a machine learning agent and an octopus, adopting what they termed a ‘whole-systems approach’.48
From artistic practice emerge new approaches to the technical and narrative aspects of machine learning, continuing a lineage of artistic endeavour that not merely critiques but builds (computational) systems as tools for communication and analysis. Often these systems become a byproduct with ecosystem-level value, either by pressing forward with a specific technical innovation - for example, Delta_Ark’s work to integrate reinforcement learning into Unreal Engine - or by delivering a tool for the community to build into future workflows, as with SemillaAI. From an organisational point of view, these projects often yield public interest and sometimes have positive impacts in their pursuit of divergent technical systems that decouple from the profit motive. However, they do require much greater resourcing, since they often necessitate teams of diverse technical and academic expertise.
We're trying to simulate ecosystems with agents that are highly adaptive. So that might mean agents with LLMs that you can talk to. But it also means that there are agents that can evolve new behaviours. We're trying to make these really complex simulated ecosystems, and trying to make artworks from inside them.
- Delta_Ark
Recombinant Bubble Worlds
Artists’ modus operandi cannot be generalised. They are always seeking new ways of working, and this often means combining technologies and bringing different technical components into interaction as part of new workflows. Increasingly, this involves building tools into worlds.49
Game engines, like Unreal Engine or Unity, are currently the software environment through which experimental workflows and combinations of tooling are developed in order to build virtual worlds.50
Agent-training Arenas
A lifeform, or a life, is a compositional space.
- Ian Cheng
Simulated bubble worlds provide a place to develop both new machine learning techniques and new philosophies for understanding them.55
This approach is not restricted to the cultural domain: simulations are used in other contexts to train AI agents. Startup Imbue (formerly Generally Intelligent) developed their own simulated training environment called Avalon wherein they conduct benchmarking experiments for reinforcement learning agents.56
Complex System Choreography
This perspective on the artist - thinking across ‘system’ and ‘world’ - focuses
more on the artist as performing a type of work with their unique skill set
instead of viewing them as a metaphysically distinct type of agent who can
offer opaque ethical insight into new future technologies. This echoes the art critic Jack Burnham’s 1968 essay ‘Systems Esthetics’: AI shifts the conventional role of the artist and creative away from end-content producer (primarily of visuals, imagery and text) towards choreographer of complex information systems.57
We're not trying to compete, we simply intend to remain the hosting platform that keeps the community engaged and coming back.
- Madisen Taylor
The existence of art and technology as a professional nexus would not have
been possible without a new set of structural relationships that were
inaugurated in the 1950s involving the public sector, technology industry, and
artists interested in technologies as an exploratory and conceptual
medium.58
Exploratory Use-Case Space
The combination of artists’ appetite for risk-taking and experimentation with the low risks of art as a deployment context makes collaborations between technology developers and artists fertile soil for use-case exploration. This is true for the product-ready level of technology, but even more so for R&D or community-building phases around early-stage technological applications. The specific affordances of artistic training and socialisation allow artists to approach new technologies with a critical perspective, which in some cases includes experience in cross-deploying new technologies in unexpected contexts. Under the right conditions, this can be used effectively, revealing new affordances by means of experimental use.
Apart from surfacing valuable insights and use-case prototyping, case studies can help develop a public-facing narrative around the technology. In the context of a highly abstract and complex technology such as AI, both of these capabilities are of great value and public significance. Key to the success of such programmes going forward will be the recognition of the inherent value provided by artists and arts organisations in these experimental contexts.
Artist as Influencer
Technologies oriented towards creative pursuits will inevitably leverage
artists in their demonstration campaigns.59
Case for Institutional Brokerage and Intermediation
As generative as collaborations between artists and industry can be, they are fraught with potential misalignments. Contractual misunderstandings, cultural differences, power imbalances and the sheer asymmetry of operating scales between typical artist studios and technology companies often lead to frictions and even collaborative breakdowns. Furthermore, within the technology sector, such programmes are usually initiated and argued for through business cases by employees who see merit in such collaborations, but whose main job lies elsewhere. The relative fragility and lack of longevity of such programmes is, in part, a result of these operational challenges.
In contrast to the 1950s, 1990s, and even the 2010s, today there are a number of institutions, as well as smaller organisations, who specialise in supporting artistic practices that are specifically engaged with advanced technologies such as AI.61
Footnotes
-
Creative-ai.org database and ml4a.net are two examples of resources for artists wanting to learn how to engage with AI tools. ↩
-
IMF, Gen-AI: Artificial Intelligence and the Future of Work (2024) [link]. ↩
-
“Whether a union exists or not, industry and labour can establish formal mechanisms to incorporate workers into AI development and deployment.” See Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (2023) [link]. ↩
-
There are also clear positive externalities of such tools being integrated into artists’ administrative processes (which are typically significantly under-resourced). The streamlining and optimising of grant-writing, artist’s statements, bios, and other collateral communication materials associated with maintaining an artistic practice are examples. ↩
-
Early Web2 platforms such as DPChallenge were later adopted as training data. See Leonardo Impett, Touch Up My Appearance (2023). ↩
-
Emilia David, Getty Lawsuit Against Stability AI to Go to Trial in the UK (2023) [link]. ↩
-
Michael M. Grynbaum and Ryan Mac, The Times Sues OpenAI and Microsoft Over A.I. Use of Copyrighted Work (2023) [link]. ↩
-
Melissa Heikkilä, Artists Can Now Opt Out Of The Next Version Of Stable Diffusion (2022) [link]. ↩
-
Sylvie Delacroix, Neil D Lawrence, Bottom-up Data Trusts: Disturbing The ‘One Size Fits All’ Approach To Data Governance, International Data Privacy Law, Volume 9, Issue 4, November 2019, p. 236-252, [link]. ↩
-
Muhammad Shafay et al., Blockchain For Deep Learning: Review And Open Challenges (2023) [link]. ↩
-
Peter Osborne, The Postconceptual Condition (2018). ↩
-
Maurice Tuchman, A Report on the Art and Technology Program of the Los Angeles County Museum of Art, 1967-1971 (1971). ↩
-
Sadie Plant, Zeroes and Ones (1997). ↩
-
Geoff Cox et al., Live Coding: A User's Manual (2022). ↩
-
i.e. fine-tuning on specific datasets, which can offer a specialist level of insight into a particular remit of information - for example, a model fine-tuned on case law and a set of legal cases within specific jurisdictions. ↩
-
‘Jailbreaking’ methods circumvent developer restrictions, usually at the software level, to broaden the range of possibilities a user can achieve with a tool. ↩
-
See Quantel Paintbox, a forerunner to Photoshop, early access to which was granted to Richard Hamilton. Lev Manovich, Software Takes Command (2013). ↩
-
For example AlphaGo, a machine learning software initially designed to play the board game Go, has been redeveloped into the architecture used for AlphaFold, a tool able to accurately predict the 3D structures of proteins, a capability that is expected to accelerate research into the development of novel medicines, including for Alzheimer’s disease. See Demis Hassabis, AI Could Solve Some of Humanity’s Hardest Problems. It Already Has. (2023) [link]. ↩
-
See Leroi-Gourhan, André, Gesture and Speech (1964 [1993]). ↩
-
The School of Digital Arts at MMU was a £35m investment in an art school with state-of-the-art production facilities, focusing on media production studios, immersive technologies, a gallery and music spaces; it opened in 2021. ↩
-
The breadth of methodologies intersecting with AI, from the humanities and beyond, can be explored via the ‘Resources’ strand of the Creative AI Lab Database [link]. ↩
-
See Sasha Stiles's practice [link]. See Allison Parrish’s practice [link]. ↩
-
Prompt engineering commonly refers to shaping the text prompt given to a foundation model in a particular way, in order to shape the results it returns. This can involve adding clarifications or additional information to steer it away from a default path already identified by users of that model. ↩
-
See Moisés Horta Valenzuela’s more malleable interface SemillaAI [link]. ↩
-
For example, Gray Area and Google’s exhibition DeepDream: The Art of Neural Networks (2016). ↩
-
See Anna Ridler’s Traces of Things (2018) or Refik Anadol’s Archive Dreaming (2017). ↩
-
Where early AI systems used symbolic or rule-based AI to define the terms of intelligent systems, connectionist AI allowed associations to emerge through a model’s processing of datasets. See Geoffrey E. Hinton, Preface to the Special Issue on Connectionist Symbol Processing (1990) [link]. ↩
-
Scott Robbins, A Misdirected Principle with a Catch: Explicability for AI (2019). ↩
-
Günseli Yalcinkaya, How did everything get so mid? (2023) [link]. ↩
-
Broad et al., Active Divergence with Generative Deep Learning -- A Survey and Taxonomy (2021) [link]. See Terrence Broad’s website for more insight into their practice [link]. ↩
-
For example text-to-image models such as DALL-E 3 can reproduce works in the style of famous painters, e.g. Hieronymus Bosch, although it is designed to refuse prompts that explicitly request the style of living artists. While it might refuse to produce works in the style of David Hockney, for example, there remain questions around whether it could be prompt engineered into doing so. ↩
-
Open weights refers to releasing only the pretrained parameters or weights of the artificial neural network model itself. ↩
-
See Machine Learning for Art repository [link] and Serpentine & King’s College London’s Creative AI Lab Database [link]. ↩
-
Artists who engage in ‘auditing’ also usually engage in other modes, too. See, for example, Delta_Ark’s engagement with reinforcement learning for Unreal Engine [link]. ↩
-
Mercedes Bunz, ‘The calculation of meaning: on the misunderstanding of new artificial intelligence as culture’ (2019) [link]. ↩
-
Mercedes Bunz & Eva Jäger, Inquiring the Backends of Machine Learning Artworks: Making Meaning by Calculation (2021). ↩
-
Holly+, a model and artwork created by Holly Herndon and Mat Dryhurst, illustrates the complexity of artists producing work across the AI tech stack. In this work, the artists created a dataset from consenting participants, training a model together with engineer Jules LaPlace and fine-tuning it on Herndon’s own voice. The work also explores recombinant IP with a strategy for shared ownership of assets created using the model [link]. ↩
-
For more on this see Alasdair Milne, ‘Re-engineering a Concept of Collaboration for Machine Learning Artistic Practice’ (thesis, forthcoming, 2024). ↩
-
Interspecies Communication Research Initiative (ISCRI): A Cephalopod ↔ Machine Encounter (2021) [link]. ↩
-
See What Models Make Worlds: Critical Imaginaries of AI exhibition at Ford Foundation Gallery (2023). Curated by Mashinka Firunts Hakopian & Meldia Yesayan [link]. ↩
-
See Future Art Ecosystems 2: Art x Metaverse (2022) [link]. ↩
-
Agnieszka Polska proposes that AI tools might lead to the proliferation of individual worlds, or ‘paracosms’, democratising access to siloed world-dwelling. These paracosms could remain individual or become collective, depending on how one builds out the infrastructure. Agnieszka Polska, Future Paracosms and Their Infrastructures (2023) [link]. ↩
-
Elsewhere, Autonomous Worlds, for example, use blockchain protocols to allow worlds to become decoupled from singular ownership, affording them the possibility of persisting indefinitely. See the Autonomous Worlds reader (2023) [link]. ↩
-
See DeepMind’s ‘Scalable Instructable Multiworld Agent’ in A generalist AI agent for 3D virtual environments (2024) [link]. ↩
-
The term ‘bubble worlds’ emerged from a conversation between the authors and Vincent Brisebois who heads Omniverse at NVIDIA about the future of the media ecosystem in March 2024. ↩
-
Whether these truth-claims can be exported into our world remains a primary challenge for simulated scenarios and worldbuilding artistic research agendas alike. ↩
-
Generally Intelligent, Avalon: A Benchmark for RL Generalization Using Procedurally Generated Worlds (2022) [link]. ↩
-
W. Patrick McCray, Making Art Work: How Cold War Engineers and Artists Forged a New Creative Culture (2020). ↩
-
'“Alex [Reben] is one of the first people we share our new models with,” said Natalie Summers, a spokeswoman for OpenAI'. Reben was OpenAI’s first artist-in-residence in 2023. Leslie Katz, An Artist in Residence on A.I.’s Territory (2023) [link]. ↩
-
See community-based marketplaces for on-chain art such as fx(hash) [link] or Zien [link]. ↩
-
Apart from Serpentine Arts Technologies, such organisations include Gray Area in San Francisco, HEK in Basel, Museum of the Moving Image and NEW INC in New York, Buffalo AKG Art Museum in Buffalo, and TRANSFER Gallery and Bitforms Gallery, amongst others. ↩