Video / kinetic installation
iPad, 23” widescreen TFT color LCD screen, pumps, DC motors, steel, Plexiglas, water, typographical material, computer, audio speakers and custom software
Video is unquestionably the most captivating medium in fine arts today, but the enchantment of a moving image burns hot and short. I work with this medium from a strongly sculptural perspective. I have been searching for ways that video could enrich my work and decided that the combination of moving image and kinetic sculpture provides an interesting angle.
Kinetic art had its prime time in the 1950s and 1960s and lay considerably dormant for a long time after that. Lately it seems that many young sculptors are starting to pick up on kinetics again. One reason for this is the rapid development of user-friendly software and hardware components, as well as the low cost of electronic components to experiment with. Also, artists like myself have developed our own kind of “gypsy technology,” whereby we cannibalize existing consumer electronics to make them usable in our art projects. It is possible to learn the basics in a short time and to find online communities that provide help and inspiration. Being independent of industry and outside expertise, and able to apply these tools directly to one's work, makes it fairly easy to integrate ideas of movement, light and sound into one's working process.
VIDEOKAFFE has been a playroom for like-minded artists who challenged, cooperated with, inspired and pushed each other to create something new and surprising.
Personally, this unique exhibition concept has been a thrilling experience, and I owe a great deal to my collaborating colleagues as well as to Sebastian Ziegler, the curator of this event.
The starting point for my work was an application for a smartphone. I have to say that I was a baby in this field; a prepaid phone of the most modest design had always been sufficient for me. Suddenly finding myself in possession of this sophisticated gizmo, I searched for ways to make it work to my benefit and make my life easier.
As a German who has moved to Finland, it is naturally the language that creates boundaries.
So I was quite interested in this field. I found an application (no advertisement, but you can find the name in the appendix) that recognizes printed words using optical character recognition and instantly translates them into the desired language.
The words are displayed in the original context on the original background, and the translation is performed in real time without a connection to the Internet.
Once the program has identified letters, it calculates their rotation and the perspective from which the viewer is seeing them. Then it tries to recognize each letter by consulting a library of reference font sets. (This application was created to help tourists understand signs and menus, and is not 100% accurate.) It's a bit like typing into the Google search field: with every added letter you get the suggestion, “Did you mean this?”
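The app's real algorithm is not public, so the following is only a toy sketch under my own assumptions of the matching step described above: each extracted glyph, after rotation and perspective correction, is compared cell by cell against a small library of reference bitmaps, and the closest match wins, much like a search engine's “Did you mean…?” suggestion. The 3×3 bitmaps and scoring are purely illustrative.

```python
# Hypothetical 3x3 reference bitmaps (1 = ink, 0 = background).
# A real OCR engine uses far richer features; this only shows the idea.
REFERENCE_FONTS = {
    "I": [(0, 1, 0),
          (0, 1, 0),
          (0, 1, 0)],
    "L": [(1, 0, 0),
          (1, 0, 0),
          (1, 1, 1)],
    "T": [(1, 1, 1),
          (0, 1, 0),
          (0, 1, 0)],
}

def match_letter(glyph):
    """Return (letter, score) of the closest reference bitmap.

    score is the fraction of cells that agree: 1.0 is a perfect match,
    anything lower reflects noise in the extracted glyph.
    """
    def score(ref):
        cells = [g == r
                 for glyph_row, ref_row in zip(glyph, ref)
                 for g, r in zip(glyph_row, ref_row)]
        return sum(cells) / len(cells)

    best = max(REFERENCE_FONTS, key=lambda k: score(REFERENCE_FONTS[k]))
    return best, score(REFERENCE_FONTS[best])

# A "T" with one corrupted cell still matches "T", just less confidently.
noisy_t = [(1, 1, 1),
           (0, 1, 0),
           (0, 0, 0)]
letter, confidence = match_letter(noisy_t)
```

The point of the sketch is that the matcher always returns *some* letter, however bad the fit; with random input such as wave patterns, that guessing is exactly what produces words out of noise.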
One day I was sitting at the waterfront of the marina in Vuosaari, killing time browsing my apps, and in a playful attempt I activated the above-mentioned app and directed the phone's built-in camera at the sea. The sun was shining and a mild wind was rippling the surface of the water. There was a lot of contrast that day between the crests and troughs of the waves.
Suddenly the program was trying to read meaning into the random noise of the waves, and words popped up on the display. The oracle-like messages in the water mesmerized me for hours. This happens because the program runs the image through a filter to remove shadows. Text is sharp, so the filter removes whatever is not sharp and makes the image black and white, to help figure out where letters are. What remains are black-and-white contrasts that may or may not be letters. The unpredictable game is on. For VIDEOKAFFE I wanted to recreate a version of this captivating experience.
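As a minimal sketch of the filtering idea described above (my own guess at it, not the app's actual pipeline): soft gradients such as shadows are suppressed by keeping only pixels whose local contrast exceeds a threshold, yielding a black-and-white map in which letter-like shapes are then sought. The threshold value and neighborhood are assumptions for illustration.

```python
def binarize_sharp(image, threshold=50):
    """Return a 0/1 map: 1 where horizontal/vertical contrast is sharp.

    image is a list of rows of grayscale values (0-255). Smooth ramps
    (shadows) produce small differences and are dropped; crisp edges
    (printed text) produce large differences and are kept.
    """
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            right = image[y][min(x + 1, w - 1)]
            below = image[min(y + 1, h - 1)][x]
            contrast = abs(image[y][x] - right) + abs(image[y][x] - below)
            out[y][x] = 1 if contrast > threshold else 0
    return out

# A gentle shadow gradient vanishes; a hard edge survives at the jump.
shadow = [[10, 20, 30, 40]]   # smooth ramp: no pixel marked
edge   = [[0, 0, 255, 255]]   # crisp step: marked where it jumps
```

Sunlit ripples are full of exactly such crisp highlights, which is why the wave surface passes this stage and reaches the letter matcher at all.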
The original idea was to have a high-resolution Wi-Fi camera scanning the water surface of the nearby Aurajoki canal and to project the program's output with a beamer.
Time and budget were not favorable at this point, so I decided on a more sculptural laboratory setting.
The Plexiglas pool (60 × 90 × 10 cm) inside the gallery contains approx. 40 liters of water.
Two pumps and two additional DC-motor-powered paddles keep the water in motion.
Hovering above is an adjustable arm holding an iPad running the application.
Its camera lens is pointed down at the pool's water surface. The tablet is linked by an HDMI cable to a wall-mounted flat screen that shows its interface at a larger scale.
This setup was a purely theoretical approach, and the results were meager.
What nature is really good at is delivering a lot of visual noise, and my little pool could not match it.
The next step I took was to add submersible plastic letters floating in the water.
These were partly recognized but clogged the pump filtration system and collected in certain areas. The next approach was silicone-molded floating letters. This created surprising results, especially because the constantly moving water surface distorted the textual content. The program now runs in a mode that translates from English into German. I hope you can enjoy the results at this stage; I will surprise you with a more sophisticated version in the near future, but for now it's Videokaffe: fresh from the oven.
So, having tried to explain my approach to the underlying concept of the exhibition and the way this work developed, I have to point out an additional aspect. The work at this particular exhibition had a soundscape created by the Brazilian artist Fernando Visockis Macedo. This is his statement:
“My work is created, imagined and thought through sound. How to search for the hidden sounds surrounding us and amplify them aesthetically; how to bring different sounds into the field of visuality; how to affect the spectator's perception by playing with and sculpting such an abstract, rich material?
My research is then, in a wider sense, really close to translation. That's why, when Thomas first told me his idea for the installation, I could see from the very first moment an opening for adding a new layer without changing the concept of the work abruptly: translating into sound the same material that is being used for generating the text. For that I developed a patch in Pure Data that seeks the pixels with the most luminance in a live webcam feed pointed at the plexiglass. The ripples of the water keep changing these brightest pixels, generating a great input for manipulating a synthesizer and a sound sample collected from a water stream back in my country, translating a simple light variation into an abstract moving soundscape, a generative sound piece that reinforces and adds new context to the installation.
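Fernando's piece runs in Pure Data; purely as an illustration of the idea he describes, the luminance-tracking step could be sketched in Python as follows. The mapping of position to pitch and brightness to amplitude is my own assumption, not a description of his patch.

```python
def brightest_pixel(frame):
    """Return (x, y, luminance) of the pixel with the highest luminance.

    frame is a list of rows of grayscale values (0-255), standing in
    for one webcam image of the rippling water surface.
    """
    best = (0, 0, frame[0][0])
    for y, row in enumerate(frame):
        for x, lum in enumerate(row):
            if lum > best[2]:
                best = (x, y, lum)
    return best

def to_synth_params(frame, low_hz=110.0, high_hz=880.0):
    """Map the brightest pixel to hypothetical (frequency, amplitude).

    The x position picks a pitch within [low_hz, high_hz]; the luminance
    (0-255) sets the amplitude (0.0-1.0). As ripples move the highlight
    around, successive frames drift through pitch space, giving the
    generative quality described above.
    """
    x, y, lum = brightest_pixel(frame)
    width = len(frame[0])
    freq = low_hz + (high_hz - low_hz) * x / max(width - 1, 1)
    amp = lum / 255.0
    return freq, amp
```

Driving an actual synthesizer or a water-stream sample from these two numbers, frame after frame, yields the kind of abstract moving soundscape the statement describes.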