What does memory mean to us? MASARY Studios, a Boston-based media arts studio, set out to create an interactive installation that would explore the question. Memory Diffusion is designed to absorb, display and encourage new memories.
On a wide staircase in a high-traffic area of the Boston Arts Academy, passersby can see a projected image of themselves on a large LED display, enhanced by lighting elements that cascade over the staircase. Two live-input cameras capture and store the images.
“We knew we wanted to do something with two cameras watching the space,” says Jeremy Stewart, MASARY tech director and media artist. “Somehow the cameras should analyze what they were seeing and use that to selectively store snippets of new memories,” he says, adding that the artificial intelligence would have to categorize what’s in front of the camera and simultaneously correlate that with what’s already stored.
Stewart says the project came out of a philosophical discussion. “What does it mean to create new memories of a particular place and to remember something in relation to what’s happening in the present?” What they created was a technological response to Henri Bergson’s idea that we are constantly creating memories, detailed representations of what we see as we move through the world. Many are dormant until some new related experience revives them. When they are re-experienced, memories collide with present experience in non-linear ways.
A deep feature extractor, a convolutional neural network, analyzes incoming images and compares them with recorded events using machine learning clustering and dimensionality reduction algorithms. Through these comparisons, the artwork selectively recalls stored memories based on how similar they are to new entries. Memories are modified, further analyzed, and segmented into areas of interest. Like our memories, these digital images return fragmented and altered. Viewers can experience, in some form, other events that happened in front of the camera, and their own relationship to those events, as past data collides with the present.
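MASARY's systems are proprietary, but the recall mechanism described here, reducing the dimensionality of stored feature vectors and then retrieving the entry most similar to a new one, can be sketched in plain NumPy. The feature dimension, PCA rank, thresholds, and function names below are illustrative assumptions, not the studio's actual code:

```python
import numpy as np

def reduce_dim(features, k=8):
    """Project feature vectors onto their top-k principal components (PCA via SVD)."""
    mean = features.mean(axis=0)
    centered = features - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:k].T, vt[:k], mean

def recall(memory_bank, query):
    """Return the index of the stored memory most similar to the query (cosine similarity)."""
    bank = memory_bank / np.linalg.norm(memory_bank, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    sims = bank @ q
    return int(np.argmax(sims)), float(np.max(sims))

# Toy memory bank: 100 stored "memories", each a 4000-value feature vector.
rng = np.random.default_rng(0)
memories = rng.normal(size=(100, 4000))
reduced, components, mean = reduce_dim(memories, k=8)

# Features of a new camera frame, compared in the reduced space.
new_frame = memories[42] + 0.01 * rng.normal(size=4000)  # nearly identical to memory 42
new_reduced = (new_frame - mean) @ components.T
idx, sim = recall(reduced, new_reduced)
print(idx)  # index of the closest stored memory
```

A real installation would cluster the reduced vectors as well, so that recall can pull whole groups of related moments rather than single frames.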
“For a permanent artwork in a public space, we needed something that could be reliably much more flexible than classification convolutional neural networks,” Stewart says. “We used pre-trained convolutional neural networks built to classify or identify contents of the photos, but we modified them to use them instead as feature extractors. The model we used was originally trained to identify 499-500 different kinds of things—a car, train, dog, cat, person, [and more] over many many iterations, so they reliably perform at a state-of-the-art level.” Since even that was too specific and reductionist, MASARY ripped layers from the trained network so the AI no longer produces a discrete dog or cat but something more abstract. In the end, “the neural network instead produces approximately 4000 abstract values for every new camera image analyzed.”
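A rough illustration of that modification is to run the network only up to its penultimate layer, discarding the classification head. The toy network below uses random weights as stand-ins for pretrained parameters; the layer sizes and function names are invented for the example, with only the roughly 4,000-value output matching the figure Stewart gives:

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(x):
    return np.maximum(x, 0.0)

# A toy "pretrained" network: image input -> hidden layers -> 500 class scores.
# Random weights stand in for parameters learned over many training iterations.
W1 = rng.normal(size=(1024, 4096)) * 0.02    # input -> hidden
W2 = rng.normal(size=(4096, 4000)) * 0.02    # hidden -> penultimate (~4000 values)
W_cls = rng.normal(size=(4000, 500)) * 0.02  # penultimate -> 500 class scores

def classify(x):
    """Full network: ends in discrete class scores (dog, cat, car, ...)."""
    return extract_features(x) @ W_cls

def extract_features(x):
    """Same network with the classifier head removed: abstract feature values."""
    return relu(relu(x @ W1) @ W2)

image = rng.normal(size=1024)  # stand-in for a camera frame
features = extract_features(image)
print(features.shape)          # (4000,) abstract values per analyzed image
```

Stripping the head this way trades nameable categories for a vector that can be compared, clustered, and recalled, which is exactly what the installation needs.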
“Once installed, it’s only comparing what it sees to what’s been seen and stored previously. You can set it up, and it can run,” Stewart adds. If the software doesn’t find a similar stored memory, the new image is treated as unique, so the camera will store that in its database of memories.
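That store-or-recall decision can be sketched as a similarity threshold against the memory bank. The threshold value and data layout here are assumptions for illustration, not the installation's actual parameters:

```python
import numpy as np

def cosine_sim(a, b):
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def observe(memory_bank, features, threshold=0.9):
    """Recall the closest stored memory, or store the new image as unique.

    Returns (index of recalled memory or None, updated bank).
    """
    if memory_bank:
        sims = [cosine_sim(m, features) for m in memory_bank]
        best = int(np.argmax(sims))
        if sims[best] >= threshold:
            return best, memory_bank      # similar enough: recall it
    memory_bank.append(features)          # nothing similar: store as a new memory
    return None, memory_bank

rng = np.random.default_rng(2)
bank = []
first = rng.normal(size=4000)
observe(bank, first)                      # unique, so it is stored
recalled, bank = observe(bank, first + 0.01 * rng.normal(size=4000))
print(recalled, len(bank))                # prints: 0 1
```

Because the check runs against everything previously stored, the system needs no retraining once installed; the bank simply grows whenever the cameras see something sufficiently new.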
Stewart is not aware of other art-making software that does what MASARY’s proprietary systems make possible. These systems aren’t available to others, but MASARY is developing them for future short-term and permanent installations of its own. “Historically, there’s not a machine learning-based approach that looks for areas of motion or change and analyzes structure or shape.”
“There is a great deal of public discussion about all the AI tools that can generate beautiful images and what it means for artists,” says Stewart, who believes the technology can enable deeper creative work.
In this art, technology precedes aesthetics. “Once the videos get recalled, we can put them to use in different ways, based on movement and color and shape.” Stewart says artists can also learn from the way this technology organizes images. “Maybe we can aesthetically treat things that way.”