Nebula is a performance conceived as a final project for the class Sensors, Body, and Motion in Spring 2017. The piece uses a feedback process to generate visuals that are roughly composed to accompany, and be accompanied by, a piece of music. The setup involves a human performer, a Logitech webcam, a laptop screen, rear projection, and a set of sensors for manipulating the inputs. Initially, I intended to mount one webcam on each hand: the first aimed at the computer screen in front of the performer, the other aimed at the rear projection or the audience. A mix of these two signals would be fed to the output video on the computer screen, then re-captured by the first webcam, creating a multi-source feedback loop. For this performance, I reduced the system to a single webcam.
The performance is partly composed and partly relies on performer improvisation. I identified several gestures or specific replicable visuals that I could use in the composition process:
Hue cycle - activated by a thumb press
Fractal feedback - started by pulling the hand away from the screen
Saturation change - activated by bending the left index finger
Pixelation/screen effect - activated by bringing the camera very close to the screen
Pulsing color flow - activated by bringing the camera up and to the right from the bottom of the output screen
For this performance, I decided to use existing audio rather than compose my own. The music that you hear in the video is a piece called 4101 by Roly Porter.
The sensors that I settled on were two flex resistors and one force sensitive resistor (FSR). The flex sensors and FSR are attached to a glove: each flex sensor produces a value mapped to the bend of one of my two index fingers, and the FSR sits on my right thumb, functioning as a trigger when pressed against another finger. The right index finger determines the mix amount between the two video feeds. A fully straightened finger sends 100% of one video source to the output, a 50% bend mixes the two sources evenly, and a fully bent finger means only the other video source appears in the input.
The FSR acts as a trigger for an envelope in Isadora, which feeds into the HSL Adjust actor and performs a sweep on the Hue offset of the secondary video input.
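The trigger-plus-envelope behavior can be approximated outside Isadora as a timed ramp retriggered on each press. The pressure threshold and sweep duration below are my own assumptions, not values from the actual patch:

```cpp
#include <algorithm>

// Stand-in for the Isadora envelope driven by the FSR: a thumb press
// above a pressure threshold retriggers a linear ramp that sweeps the
// hue offset of the secondary video input through a full 360-degree
// cycle. Threshold and sweep time are assumed values.
class HueSweep {
public:
    // Call once per frame with the raw FSR reading (0-1023).
    void update(int fsrReading, double dtSeconds) {
        bool pressed = fsrReading > kThreshold;
        if (pressed && !wasPressed_) elapsed_ = 0.0; // retrigger on press edge
        wasPressed_ = pressed;
        if (elapsed_ < kSweepSeconds) elapsed_ += dtSeconds;
    }

    // Current hue offset in degrees, 0 to 360.
    double hueOffset() const {
        double t = std::min(elapsed_ / kSweepSeconds, 1.0);
        return 360.0 * t;
    }

private:
    static constexpr int kThreshold = 300;       // assumed FSR trigger level
    static constexpr double kSweepSeconds = 2.0; // assumed sweep duration
    bool wasPressed_ = false;
    double elapsed_ = kSweepSeconds; // start idle (sweep already finished)
};
```

Detecting the press edge (rather than the raw pressure level) keeps a sustained thumb press from restarting the sweep every frame.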
To put together the entire setup, I sewed the flex sensors and FSR into a pair of gloves. The sensors connect to a small circuit I soldered on perfboard, mounted alongside the Arduino RedBoard on a cardboard holder attached to my forearm with a velcro band.
The Arduino code I wrote for this project can be seen below: