Recently, I wrapped up one of my long-in-the-making projects: I Homunculi. I had been working on and off on this idea for the last two years. I Homunculi is a random, interactive scroll film, where the viewer controls the directionality AND speed of the image flow with the mouse, or with gestures if their device uses a touch screen. The work will soon be available for free directly on the website, in the “Others” section.
Today, let’s delve back into the genesis of the content of the film.
I think this will help clarify what I did and did not “actually do” in this project.
To create the images, I consulted an online video archive around 2017 and downloaded footage around certain themes. Of course, there was always a degree of chance in the picking process. In short, I found myself with a series of videos that was pretty “dark” in tone (agriculture, health education, medicine, old horror movies, films about mental illness, strange experiments, including the revival of disembodied animals, and so on…).
In a Construct3 program, I also generated a sort of technical animation. It consisted of coloured dots endlessly filling a screen-like canvas: a random colour was picked for each dot, which then appeared at a random position on the canvas.
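If you are curious about what that looks like in code terms, here is a rough Python sketch of the same idea (it is not the original Construct3 project, which was built with its visual event system; the canvas size, dot size and frame count are arbitrary assumptions):

```python
# Rough sketch (not the original Construct3 project): dots in random colours
# appear at random positions and slowly fill the canvas, frame by frame.
import random
from PIL import Image, ImageDraw

WIDTH, HEIGHT = 640, 360     # assumed canvas size
DOTS_PER_FRAME = 50          # assumed number of new dots per frame
FRAMES = 100                 # assumed length of the animation

canvas = Image.new("RGB", (WIDTH, HEIGHT), "black")
draw = ImageDraw.Draw(canvas)

for frame in range(FRAMES):
    for _ in range(DOTS_PER_FRAME):
        x, y = random.randrange(WIDTH), random.randrange(HEIGHT)
        colour = tuple(random.randrange(256) for _ in range(3))
        draw.ellipse([x - 2, y - 2, x + 2, y + 2], fill=colour)
    canvas.save(f"noise_frame_{frame:04d}.png")  # dots accumulate over the frames
```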
In parallel to this, I discovered the realm of “deep dreaming.” I had become interested in the idea of applying the deep dreaming process to moving images, and I had already experimented with this in my movie Where to Forage? (Edit – 07-01-22: More recently, I am exploring deep dreaming once more in the creation of “imaginary characters”; I will post a new article soon about this matter.)
Deep dream image generation is an AI process: a trained neural network “takes a look” at a picture and attempts, over a limited number of “iterations” and more transformative “dreams-within-dreams”, to recognize certain images within the picture. To see one thing in another. Essentially, it is a creative re-application of image recognition AI technology.
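For the technically inclined, here is a minimal sketch of the general mechanism as I understand it, written with TensorFlow/Keras (this is not DeepDreamer’s code; the layer names, image size, step size and number of steps are assumptions of mine). The image is nudged, by gradient ascent, in whatever direction most excites chosen layers of a pretrained network, so the network starts to “see” its familiar motifs everywhere. The loop at the end plays the role of the “iterations”; the “dreams-within-dreams”, as I understand them, roughly correspond to re-running the whole process on the result or at growing scales, which I leave out here.

```python
# Minimal deep-dream-style sketch: amplify whatever the chosen layers of a
# pretrained network respond to in the image (gradient ascent on the input).
import numpy as np
import tensorflow as tf
from tensorflow import keras

base = keras.applications.InceptionV3(weights="imagenet", include_top=False)
layer_names = ["mixed3", "mixed5"]   # assumed layers; other layers hallucinate other textures
extractor = keras.Model(base.input,
                        [base.get_layer(n).output for n in layer_names])

def dream_step(img, step_size=0.01):
    with tf.GradientTape() as tape:
        tape.watch(img)
        activations = extractor(img)
        # The "loss" measures how strongly the chosen layers respond to the image.
        loss = tf.add_n([tf.reduce_mean(a) for a in activations])
    grads = tape.gradient(loss, img)
    grads /= tf.math.reduce_std(grads) + 1e-8   # normalize the gradient
    return img + step_size * grads              # ascent: push the response up

img = keras.utils.load_img("frame.jpg", target_size=(512, 512))  # placeholder file
img = keras.applications.inception_v3.preprocess_input(
    np.array(img)[None].astype("float32"))
img = tf.convert_to_tensor(img)

for _ in range(50):                             # the "iterations"
    img = dream_step(img)
# (To save the result, one would undo the preprocessing; omitted here.)
```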
What is interesting is that “recognizing,” for this network, meant producing an image.
As the network relied on a pool of specific JPEG images, I thought it might be interesting to feed the random colour noise animation created in Construct3 into a network of this “deep dreaming” kind. At the time, I had an old PowerPC Mac tower on which I had bought a copy of a program called DeepDreamer, and I could somehow run it (albeit at an incredibly slow pace) on this PowerPC. Feeding the random colour noise animation into the neural network led me to the unusual creation of what I called the “strange wall”.
This strange wall was only one of the results of the deep dreaming process applied to the random colour noise animation. It was an unusual mesh which looked to me like the bottom of some prehistoric pool, layered with seashells, even more so after I decomposed it into three monochromatic sequences (red, green, blue). The strange wall was the static video of this techno-biological texture.
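The decomposition itself is a simple operation. As an indication (the file names are placeholders, and this is only one of many ways to do it), a single frame can be split into its three monochromatic components like this:

```python
# Sketch: split one frame into its red, green and blue components,
# each saved as a separate monochromatic (grayscale) image.
from PIL import Image

frame = Image.open("strange_wall_frame.png").convert("RGB")  # placeholder file
red, green, blue = frame.split()   # one 8-bit channel each
red.save("frame_red.png")
green.save("frame_green.png")
blue.save("frame_blue.png")
```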
Now, back to the archive footage, which is also important in this process. What I did at this point was a simple superimposition of the strange wall over an edit of the archive footage. While I did not keep the archive footage itself, it still shows through the superimposition edit and all of the remaining creative process (illustrated below).
At this point, the next step was to reapply the same idea and feed the superimposition edit into a neural network. However, a new problem arose. The superimposition edit was much longer than the random colour noise animation I had previously fed into the network to create the strange wall. I discovered that DeepDreamer was helpless against such a long video.
One thing I tried was running DeepDreamer on a more recent Mac, my Dad’s. I would send him these weird video bits to convert. However, the process was long and tedious, and the images were not that interesting.
I tried to use a variety of software to break the video down into a sequence of thousands of JPEGs. I told myself it would be more realistic to transform the frames individually, but I met only failure.
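(In retrospect, this step is not hard to do programmatically. Here is one possible way, sketched with OpenCV in Python; the file names are placeholders, and this is not the software I was trying at the time.)

```python
# Sketch: break a video down into a numbered sequence of JPEG frames.
import os
import cv2

os.makedirs("frames", exist_ok=True)
video = cv2.VideoCapture("superimposition_edit.mp4")  # placeholder file name

index = 0
while True:
    ok, frame = video.read()
    if not ok:                                   # no more frames to read
        break
    cv2.imwrite(f"frames/frame_{index:05d}.jpg", frame)
    index += 1

video.release()
print(f"Extracted {index} frames")
```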
At this point, I stopped working on this experiment and decided to try something different. In the darkness, I rephotographed the superimposition edit with my good ol’ Olympus OM-D EM-5, relying on the slow shutter feature of the camera. I also remember flooding the tabletop where my computer screen stood with coffee! The wet desk partially reflected the image of the screen during the recording. For a while, I thought the resulting footage was the final ending of this project.
I made a movie called Intermittence using those images and sent it around, but in truth it fell short of the original intentions. While it was an interesting piece in its own right, the future would lead me once again to work with the strange material from which it was formed…
A year or so passed before I returned to the superimposition cut. I think I might have made more deep dream attempts with the material between the initial attempt and that return, but I have barely any recollection of them. I assume they were another total failure.
But more recently, I had finally learned to work in Linux, an operating system much like macOS and Windows, with the small difference that it affords increased flexibility to its user. With Linux I had also begun using the terminal, although with minimal efficiency. This was an “old school” way of working: to move through your computer files, you typed the required command and path, and the same went for copying or deleting a file or folder. Like that, almost everything could be accomplished by typing in the command line. Yes, I could accomplish most tasks, but sometimes it required some research.
A bit later, I also acquired a new computer with a new graphics card, on which I installed a fresh Ubuntu OS. I was able to install NVIDIA CUDA, which allowed me to run image-transformation algorithms on my computer.
Luckily, I found an interesting Python library called Keras which provided me with the needed elements to do this. Equipped with this specific computer and OS, I could quickly run the deep dreaming transformation over a single frame of the superimposition cut. But the superimposition cut had almost 19 000 frames: even at 30 seconds of manual treatment per image, the whole movie would have taken roughly 160 hours of non-stop work, a long, long time.
But at the very least, I was encouraged by the fact that it was technically doable this time, unlike before, when DeepDreamer would struggle with even just a few images.
And finally, the solution came in a flash, even quicker than I had thought it would. After thinking about the problem for only a few moments, I realized I could simply loop the Keras command line instruction from the terminal! With a few internet keywords, I found an example of a command line loop and used it in combination with Keras. Since the process was now automated, I ran it every night. It took about 13 nights to convert all the frames! Incredible to think back on it!
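To give an idea of what such an automation looks like, here is a sketch in Python (my actual solution was a plain loop typed in the terminal, and the script name deep_dream.py is a placeholder): it walks through all the extracted frames and skips the ones already processed, so the job can be stopped in the morning and resumed the next night.

```python
# Sketch of the automation idea: run a deep dream script over every frame,
# skipping the ones already done so the job can resume night after night.
import glob
import os
import subprocess

os.makedirs("dreamed", exist_ok=True)

for src in sorted(glob.glob("frames/frame_*.jpg")):
    dst = os.path.join("dreamed", os.path.basename(src))
    if os.path.exists(dst):          # already processed on a previous night
        continue
    # "deep_dream.py" is a placeholder for whatever script does the transformation.
    subprocess.run(["python", "deep_dream.py", src, dst], check=True)
```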
When I recompiled the frames, the moving images I obtained were given another transformation: they were turned into black-and-white, heavily contrasted images. These, in turn, were made blue. These are the images used in the final artwork I Homunculi.
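As an indication of what that last treatment looks like on a single frame (the exact values and tools I used are not shown here, and the contrast factor below is an arbitrary assumption):

```python
# Sketch of the final treatment on one frame: black and white,
# heavy contrast, then a blue tint.
from PIL import Image, ImageEnhance, ImageOps

frame = Image.open("dreamed/frame_00000.jpg").convert("L")  # black and white
frame = ImageEnhance.Contrast(frame).enhance(3.0)           # assumed contrast factor
blue_frame = ImageOps.colorize(frame, black="black", white="blue")
blue_frame.save("final_frame_00000.png")
```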
I hope you appreciated the summary of this unusual creative journey. I will be writing another post on I Homunculi in the near future. For now, I think this one is fairly long already.