AI will be the artistic movement of the 21st century — Quartz

Most agree that AI will be the defining technology of our time, but our predictions tend to differ wildly. Either AI will become the perfect servant, ushering in a new era of productivity and leisure one weather report at a time (Hi, Alexa), or it will overpower us, relegating humanity to the ash heap of biological history (I see you, Elon).

But there’s a slice of gray between the two that we should consider: what if AI became a peer and collaborator, rather than a servant or overlord?

Let’s take art as an example. The history of art and the history of technology have always been intimately linked. In fact, artists – and entire movements – are often defined by the tools available to do the job. The precision of flint knives (the high technology of the Stone Age) allowed humans to carve the first pieces of figurative art in mammoth ivory. The Old Masters used the camera obscura to render scenes of extraordinary depth. In 2018, artists work in all the media available to them, such as fluorescence microscopy, 3D bioprinting, and mixed reality, further expanding the possibilities for self-expression and investigation.

The defining art-making technology of our time will be AI. But it won’t be the artificial intelligence of our past imagination – it will be the augmented intelligence of the present. While “artificial intelligence” still evokes the idea of autonomous machines that, after a period of algorithmic maturation, will ruthlessly and inevitably outperform their human creators, “augmented intelligence” reflects the pragmatic truth of the matter: sophisticated technologies that enhance our abilities but still need human intelligence to set the rules and chart the course.

By working with AI, artists can harness chaos and complexity to find unexpected cues and beauty in noise.

When applied to artistic creation, you can think of it as a collaboration between the human artistic mind and advanced intelligent technologies. Artists collaborate for many reasons: the search for a greater sum of combined talents (illustrator and writer), inspirational feedback loops (a jazz improvisation duo), or simply for the unexpected contributions that come from the friction of partnership (two humans who have never danced together before). Tools and technologies help artists realize their expression, but artists don’t collaborate with brushes, saxophones, or styluses—they wield them. A collaborator brings value to the creative process through intelligence, insight, or inspiration.

AI is unlike any of our previous art technologies. By working with AI, artists can harness chaos and complexity to find unexpected cues and beauty in noise. We can analyze, recode, and connect to values and patterns that are beyond our reach. AI can provide tools of extraordinary precision to artists who are, on the whole, perhaps better suited to tangential and divergent thinking.

[Image] Rama Allen – Still from a film created using experimental motion capture and CG techniques, mixed with ink.

But AI can’t do everything. While AI can compute complex systemic analyses, humans provide the thunderbolt: the patterns within patterns, the novel connection, and the intuitive leap into something totally new.

When these abilities are combined, we get an aesthetic dialogue similar to the one employed in jazz improvisation. During a jam session, the musicians feed off one another’s cues: key changes, flourishes, tempo shifts, rhythmic variations. They eschew the written code of music and venture out together to create a sound without expectations, where the whole is greater than the sum of its parts. Even though it appears effortless and instantaneous to outsiders, this high-wire artistic act exists only thanks to the formed ideas, abilities, and intelligence of the partners in that moment.

This is the creative feedback loop, the heart of artistic collaboration.

The output of this expression is categorically different from any art previously made by man throughout history.

Artists collaborating with AIs can produce much the same result. In a similar improv session, an artist provides an input, the AI processes it and returns something, and the artist responds to whatever is produced. By reacting to whatever the AI has made, the artist provides another input to the system, and the human-machine feedback loop begins to spin. By distilling the essence of an artist’s expression, translating it into technology, and then letting the technology render it in its own language, we can find new inspirations and methods of expression.
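For readers who think in code, the human-machine feedback loop can be sketched as a toy program. Everything here is hypothetical: `ai_transform` and `artist_respond` are stand-ins invented for illustration, not any real system’s API.

```python
import random


def ai_transform(phrase, rng):
    """Toy stand-in for the AI collaborator: re-render the input in its own way."""
    variation = list(phrase)
    rng.shuffle(variation)  # the machine returns something unexpected
    return variation


def artist_respond(phrase):
    """Toy stand-in for the artist: keep part of the machine's output, then extend it."""
    return phrase[: len(phrase) // 2 + 1] + ["new-idea"]


def feedback_loop(seed_phrase, rounds=3, seed=0):
    """Spin the loop: artist input -> AI output -> artist response -> AI input ..."""
    rng = random.Random(seed)
    phrase = list(seed_phrase)
    history = [phrase]
    for _ in range(rounds):
        machine_output = ai_transform(phrase, rng)  # AI renders it in its own language
        phrase = artist_respond(machine_output)     # artist reacts, providing new input
        history.append(phrase)
    return history


history = feedback_loop(["motif-a", "motif-b", "motif-c"])
```

The point of the sketch is the shape of the process, not the contents: each party’s output becomes the other’s input, and the accumulated `history` is the collaboration.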

I believe this change in creative dynamics amounts to a new artistic language. As artists, we can now truly collaborate with a tool to tap into new capabilities, engage greater complexities, explore possibilities, and thereby create a new kind of art. The output of this expression is starkly different from any art previously made by humans throughout history, and this intelligent contribution inspires further investigation into the meanings of authorship, creativity, and art.

I propose that we call this new artistic language “augmented art”.

Many practices are percolating in this new era of augmented art. For example, interdisciplinary artist Sougwen Chung creates collaborative art with her robot, Drawing Operations Unit: Generation 2 (DOUG). Through drawing sessions, she trained an AI that learns from her drawing style and collaborates with her by interpreting her gestures in its own way, in turn influencing her own drawing behavior. DOUG has its own innate behaviors and works with her as a collaborative artist, turning her art-making practice into a real-time duet.

Last year, I explored the artist/technology feedback loop with See Sound, an art-making tool that translates the human voice into digital sculptures, inspiring us to modulate our voices in real time to create the shapes we see in our minds. Materials, orientation, shape, and volume are defined by subtle vocal variables: timbre, pitch, volume, dissonance, and attack. The result is a “voiceprint” that is also a multi-track audio loop, allowing anyone to create beautiful sculptures and musical compositions with nothing but their voice.
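A mapping like the one See Sound describes – vocal variables driving sculptural attributes – could be sketched as follows. The specific fields, weights, and formulas below are invented assumptions for illustration; the article does not describe See Sound’s actual mapping.

```python
from dataclasses import dataclass


@dataclass
class VoiceFrame:
    """One analyzed slice of voice input (hypothetical feature set)."""
    timbre: float      # brightness, 0..1
    pitch: float       # fundamental frequency in Hz
    volume: float      # loudness, 0..1
    dissonance: float  # roughness, 0..1
    attack: float      # sharpness of onset, 0..1


@dataclass
class SculptureParams:
    """Sculptural attributes named in the article: material, orientation, shape, volume."""
    material_roughness: float
    orientation_deg: float
    shape_spikiness: float
    size: float


def voice_to_sculpture(f: VoiceFrame) -> SculptureParams:
    # Hypothetical mapping: each sculptural attribute is driven by one or two
    # vocal variables, clamped to sensible ranges.
    return SculptureParams(
        material_roughness=min(1.0, 0.5 * f.timbre + 0.5 * f.dissonance),
        orientation_deg=f.pitch % 360.0,
        shape_spikiness=min(1.0, 0.7 * f.attack + 0.3 * f.dissonance),
        size=f.volume,
    )


params = voice_to_sculpture(VoiceFrame(timbre=0.4, pitch=220.0, volume=0.8,
                                       dissonance=0.2, attack=0.9))
```

The design choice worth noting is that the mapping is deterministic: the same vocal gesture always yields the same form, which is what makes real-time modulation learnable for the performer.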

This year, I’ll take it a step further by presenting a vision for the future of live music performance at SXSW. We used deep learning algorithms to train an AI to beatbox live with a human: the vocal phenomenon, beatbox champion, and Harvard artist-in-residence Reeps One.

We’ve created an AI that will duet and battle with Reeps One, analyzing his voice, intonation, and rhythms to create new rhythmic accompaniments and melodies voiced using a remix of Reeps One’s vocal samples. In other words (or sounds), the human Reeps One will perform with the machine Reeps One, except the machine is not bound by the physical limitations of vocal cords and breathing. The added complexities of recognizing an extraordinarily expressive instrument – the human voice – and creating art in real time make for a true encounter between artist and machine. The premiere will be accompanied by a mixed-reality, sound-responsive floating cathedral that functions as a piece of impossible scenography: a prototype of what concert experiences and productions might look like in the future.

These projects ultimately raise the fundamental question of what it means to be creative. We feed the software with what we consider beautiful, allowing it to identify common attributes of that material and create permutations within those boundaries. In the case of our SXSW project and other human-machine collaborations, this creates improvisational fodder for the human – unexpected twists and turns that elude creative prediction, the sum being something entirely new.
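The two-step process described above – identify common attributes of a curated corpus, then create permutations within those boundaries – can be illustrated with a deliberately simple sketch. The attribute names (`tempo`, `density`) and the min/max "learning" are assumptions made up for this example; real systems learn far richer structure.

```python
import random


def learn_bounds(examples):
    """Identify common attributes of the material: per-attribute min/max
    across everything we fed the software."""
    keys = examples[0].keys()
    return {k: (min(e[k] for e in examples), max(e[k] for e in examples))
            for k in keys}


def permute_within(bounds, n, seed=0):
    """Create permutations inside the learned boundaries."""
    rng = random.Random(seed)
    return [{k: rng.uniform(lo, hi) for k, (lo, hi) in bounds.items()}
            for _ in range(n)]


# A toy corpus of "things we consider beautiful", reduced to two attributes.
corpus = [
    {"tempo": 90.0, "density": 0.3},
    {"tempo": 120.0, "density": 0.7},
]
bounds = learn_bounds(corpus)
candidates = permute_within(bounds, n=5)
```

The sketch also makes the article’s philosophical point concrete: every `candidate` lies strictly inside the envelope of what it was shown, which is exactly why the machine’s output reads as permutation rather than self-expression.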

But is the machine’s participation true creativity? An expression of “self”? Or is it simply the result of algorithms crossed with stored data?

The truth is that we have a hard time defining creativity. We don’t know where flashes of inspiration and ideas come from. We squint vaguely within ourselves and assign poetic language and best guesses to where muse and artist intersect, but it’s hard to define in a clearly structured set of rules. Ironically, that is the very first step needed to create an algorithm for beauty, imagination, and serendipity.

Until we do, technology will not be able to generate, reflect on, or imagine its own art without human input. Even the most advanced deep learning techniques achieve only a sophisticated mimicry. (Though one could argue this is analogous to human artists, who harness the sum of their experiences to influence their art.) Still, it is a far cry from the self-contained musings of a 3-year-old with paint on their fingers.

With this understanding, we look to the future and remember that what technology can do is limited only by the imagination of the person using it. As artists collaborating with technology, we sift through the possibilities. We are on a mission of discovery to find new ways to express ourselves with our increasingly sophisticated partners: painting, writing, sculpting, and making beautiful music.

Together.

Christopher S. Washington