PROJECT BACKGROUND
Distorting Reality by Colin Higgs
“Poetry is a mirror which makes beautiful that which is distorted”
― Percy Bysshe Shelley
medium: interactive installation
space requirements: 1.5 square metres
This piece is an interactive work in which a camera captures images of a spectator and re-renders each image as a set of discrete lines or points that the spectator can then manipulate with a mouse. As they move the mouse they distort the image by displacing the points or lines, and the mouse position also changes the generative sound.
Creative Motivation
The motivation for the work comes from my background in film and TV, in which there is a continual need to reinvent the video image, for example when producing unique promotional titles for the start of a film or TV programme. The hardest task is to reinvent the image freshly enough that the audience seeing it becomes captivated by it. What I liked about this work is the immediacy of the results and their connection to the person. Viewers see a reflection of themselves in the work, together with the realisation that they can distort this image and that doing so changes the representation of who they are.
Future Development
Having multiple and varied ways of changing the image representation adds to the experience of playing with the piece, so the main goal would be to expand the different ways a person can activate different aspects of the distortion process.
Why make the work? We only see ourselves in the world through some kind of echo or reflection, whether verbal, sonic, tactile or visual. It is lovely to change that process into something slightly different, whereby people can sculpt a live video representation of themselves.
Image distortion is also a big part of the way I see the world. The following images were part of a visual diary I kept in Tokyo, Japan.
I perceive the world through distorted images
THE RESULTS
MAXMSP CODED INPUTS FOR VIDEO:
The starting point for the video work was a pre-existing online tutorial: https://youtu.be/AFaPc9ElQD4
The tutorial did not work as given (the coded parameters had no initial values), so it was only by experimenting with it that I was able to make the initial patch run. The adaptations to the patch were to capture images from a camera and to take snapshots when triggered by a clap or other loud sound. A further change was to add "depth" to the image by offsetting each point along the z axis by a value derived from pixel brightness. Another addition takes mouse values from the MAXMSP jit.window and feeds them into the audio patch to control the centre frequency of the generated drone sound.
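As an illustration only (the real patch does this inside Jitter, not Python), a minimal NumPy sketch of the brightness-to-depth idea might look like the following; the depth_scale factor and coordinate ranges are arbitrary assumptions:

import numpy as np

def brightness_to_depth(frame_rgb, depth_scale=0.5):
    """Map pixel brightness to a per-pixel z offset for a point mesh.

    frame_rgb: (H, W, 3) array of floats in 0..1 from the camera.
    depth_scale: arbitrary factor controlling how far bright pixels
                 are pushed along the z axis.
    Returns an (H, W, 3) array of x, y, z positions.
    """
    h, w, _ = frame_rgb.shape
    brightness = frame_rgb.mean(axis=2)        # simple brightness estimate
    ys, xs = np.mgrid[0:h, 0:w]
    x = xs / (w - 1) * 2.0 - 1.0               # normalise x to -1..1
    y = 1.0 - ys / (h - 1) * 2.0               # normalise y to -1..1, flipped
    z = brightness * depth_scale               # brighter pixels sit further forward
    return np.stack([x, y, z], axis=2)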
An alternative patch used two forces tied to the mouse to bounce the image around: an attractive force and a repulsive force.
The forces at play follow a simple Newtonian physics scheme. The force "F" applied to each particle is:
F = Vf / (r*r)
where
Vf = Ptarg - Pparticle (vector from the particle to the target)
r = length of Vf
Pparticle = current particle x and y position
Ptarg = current mouse x and y position
a = F / M (M is the mass of the particle)
arep = an arbitrary repulsive acceleration
M = arbitrary; it can be a noise value between 0 and 1, or simply 1
Vnew = Vold + a + arep (update the particle velocity)
Pnew = Pold + Vnew (update the particle position)
This was coded in jit.gen.
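The actual implementation lives in jit.gen, but the same per-frame update can be sketched in plain Python. This NumPy version is illustrative only, with arbitrary choices for the mass and the repulsive term:

import numpy as np

def update_particles(pos, vel, mouse, mass=1.0, a_rep=0.0005, eps=1e-6):
    """One frame of the attract/repel particle update.

    pos, vel : (N, 2) arrays of particle positions and velocities
    mouse    : (2,) array, current mouse x, y (the target Ptarg)
    mass     : arbitrary mass M (could also be per-particle noise in 0..1)
    a_rep    : arbitrary repulsive acceleration pushing particles away
    """
    vf = mouse - pos                                    # Vf = Ptarg - Pparticle
    r = np.linalg.norm(vf, axis=1, keepdims=True) + eps # r = length of Vf
    f = vf / (r * r)                                    # F = Vf / (r*r), attraction
    a = f / mass                                        # a = F / M
    repel = -vf / r * a_rep                             # small push away from the mouse
    vel = vel + a + repel                               # Vnew = Vold + a + arep
    pos = pos + vel                                     # Pnew = Pold + Vnew
    return pos, vel

# e.g. pos, vel = update_particles(np.random.rand(1000, 2),
#                                  np.zeros((1000, 2)),
#                                  np.array([0.5, 0.5]))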
MAXMSP CODED INPUTS FOR SOUND:
Having studied this patch it was relatively straightforward to adapt and simplify it, so that building the "line~" parameter inputs became simpler. The patch was further changed to take the mouse position from the video patch and use it to control the centre frequency of the drone.
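For illustration, a hypothetical mapping from a normalised mouse position to a filter centre frequency could look like the sketch below; the actual scaling and frequency range used in the patch may differ:

def mouse_to_centre_freq(mouse_x, f_min=80.0, f_max=2000.0):
    """Map a normalised mouse x position (0..1) to a centre frequency in Hz.

    An exponential curve keeps equal mouse movements sounding like equal
    pitch steps; f_min and f_max are arbitrary example bounds.
    """
    mouse_x = min(max(mouse_x, 0.0), 1.0)
    return f_min * (f_max / f_min) ** mouse_x

# e.g. mouse_to_centre_freq(0.5) -> 400.0 Hz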
I tried out a different roving filter for the drone sound using a cascade filter, but I still preferred the original biquad filter.
The making of the drone sound is quite straightforward. It uses multiple closely spaced copies of the original waveform. I tried spacing the copies using the twelve equal-tempered semitone ratios (multiply the base frequency by "x", reaching the octave at 2.0):

1.000  C
1.059  C#
1.122  D
1.189  D#
1.260  E
1.335  F
1.414  F#
1.498  G
1.587  G#
1.682  A
1.782  A#
1.888  B
2.000  C (octave)
However, the closer the frequencies the better the sound result.
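Those multipliers are just 2^(n/12); the short Python check below also shows the kind of much tighter, arbitrarily chosen spacing that ended up sounding better:

# Semitone multipliers are 2**(n/12); a much smaller step gives the
# closer spacing that produced the better drone result.
semitone_ratios = [2 ** (n / 12) for n in range(13)]    # 1.0 ... 2.0
close_ratios = [1 + n * 0.002 for n in range(13)]       # arbitrary tight detune
print([round(r, 3) for r in semitone_ratios])
print(close_ratios)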
Also, the use of freqshift~ was really important. It gave a depth to the drone sound frequencies that are added together; without it the sound was too hollow. I would say that having multiple instances of freqshift~ would make the sound even richer. freqshift~ was an easy way to add a closely matching duplicate waveform; the same result could be achieved mathematically.
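The extra depth comes from the slow beating between a tone and a slightly shifted copy of it. The rough Python illustration below uses a plain sine pair and an arbitrary 2 Hz shift rather than the actual drone:

import numpy as np

sr = 44100
t = np.arange(sr * 2) / sr                 # two seconds of samples
base = np.sin(2 * np.pi * 220.0 * t)       # original tone
shifted = np.sin(2 * np.pi * 222.0 * t)    # copy shifted up by 2 Hz
mix = 0.5 * (base + shifted)               # the sum beats at 2 Hz, which
                                           # thickens an otherwise hollow sound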
Instructions for compiling and running your project.
The setup at the pop-up show was as follows. A USB camera was attached to a Mac mini with a mouse and speakers, and the result was good. A small piece of code was added to make the window full screen and to hide the menus unless hovered over, as shown below. MAXMSP captured the camera with a jit.grab object, and the external USB camera was selected from the video device list.
The setup on the laptop is as follows:
Just run the patches and everything should work. Change the jit.gl.mesh draw mode to points. Make the partimage2 window full screen and move the mouse around.
What software tools were used?
The data inputs: All processed through MAXMSP.
Data outputs: all outputs sent via MAXMSP
FILTER INFORMATION (mainly for my own comprehension)
Biquad:
Filtergraph~
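In the patch filtergraph~ is typically used to supply coefficients to biquad~, but the underlying filter is the standard two-pole, two-zero difference equation. As a reference sketch only (using the widely known RBJ "cookbook" bandpass formulas, not the exact values from the patch):

import math

def bandpass_biquad_coeffs(f0, q, sr=44100.0):
    """RBJ-cookbook bandpass (0 dB peak gain) biquad coefficients.

    Returns (b0, b1, b2, a1, a2) normalised by a0, i.e. the filter
    y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2].
    """
    w0 = 2 * math.pi * f0 / sr
    alpha = math.sin(w0) / (2 * q)
    a0 = 1 + alpha
    b0 = alpha / a0
    b1 = 0.0
    b2 = -alpha / a0
    a1 = -2 * math.cos(w0) / a0
    a2 = (1 - alpha) / a0
    return b0, b1, b2, a1, a2

# e.g. coefficients for an 800 Hz centre frequency with Q of 4:
# print(bandpass_biquad_coeffs(800.0, 4.0))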
Summary of running the installation:
Install MAXMSP on the computer and run figireout08B.maxpat, making sure all three patches are in the same folder:
figireout08B.maxpat – visual image capture and manipulation
Drone_Synth_06_single – generated sound; changes when the mouse moves
myWave2.maxpat – different drone waveforms to explore
Optionally attach speakers or an external camera. To use the USB camera, open "p detectaface", click "getvdevlist" and select the correct camera in p vdev/input.
Or use a laptop with a built-in camera. Set the draw mode of the partimage2 jit.gl.mesh to points and make the floating window "partimage2" full screen, or use the <esc> key. Move the trackpad or mouse around.
A reflection on how successful your project was in meeting your creative aims.
Creatively the results were perfect. Having the ability to programme the jit.gen code gave me total control over the forces at play and meant the forces could also evolve.
However, it was a bit sluggish. With over 400,000 particles being controlled it was not that fast, and adding the sound interaction would slow performance further. Given the choice I could use two computers to alleviate this problem, but it is still annoying.
What works well?
I loved the drone sound, and creating a particle system in MAXMSP was amazing.
What challenges did you face?
I didn't know how to create either the drone sound or the gen particles, so everything was challenging at first. However, the basic drone creation was quite straightforward and very educational. The Gen coding is still quite difficult, as examples are sparse and the explanations cryptic. The biggest hurdle was understanding how to use the different Gen objects.
What might you change if you had more time?
Developing different drone sounds that change over time would be nice, as would making the image representation more 3D in the view and exploring more forces.
FURTHER RECORDINGS
References:
MAXMSP – Max/MSP/Jitter, Cycling '74, https://cycling74.com
Online tutorial used as the starting point for the video patch: https://youtu.be/AFaPc9ElQD4