03 Distorting Reality MAXMSP

PROJECT BACKGROUND

Distorting Reality by Colin Higgs

“Poetry is a mirror which makes beautiful that which is distorted”
― Percy Bysshe Shelley

medium: interactive installation
space requirements: 1.5 square metres

 

This piece is an interactive work that uses a camera to capture images of a spectator and replicates that image as a set of discrete lines or points, which the person can then manipulate with a mouse. As the person moves the mouse they distort the image by moving the points or lines. They also change the generative sound depending on the position of the mouse.

  

 

Creative Motivation

The motivation for the work comes from my background in film and TV, in which there is a continual need to reinvent the video image, for example when producing unique promotional titles for the start of a film or TV programme. The hardest task is to reinvent the image freshly so that the audience seeing it becomes captivated by it. What was nice about this work is the immediacy of the results and the connection of the result to the person. They see a reflection of themselves in the work, together with the realisation that they can distort this image, and they see how that changes the representation of who they are.

Future Development

Having multiple and varied ways of changing the image representation adds to the experience of the person playing with the piece, so the main goal would be to expand the different ways the person can activate different aspects of the distortion process.

Why make the work? We only see ourselves in the world through some kind of echo or reflection process, whether verbal, sound, touch or a visual representation. It's lovely to change that process into something slightly different whereby people can sculpt a video representation of themselves live.

Image distortion is also a big part of the way I see the world. The following images were part of a visual diary I kept in Tokyo, Japan.

 

I perceive the world through distorted images

 

THE RESULTS

 

 

 

 

 

 

MAXMSP CODED INPUTS FOR VIDEO:

The starting point for the video work was a pre-existing online tutorial: https://youtu.be/AFaPc9ElQD4

The tutorial didn’t work as given (it had no initial values for the coded parameters), so it was only by playing around with it that I was able to make the initial patch work. The adaptations to the patch were to code in the capture of images from a camera and to take snapshots when triggered by a clap or other loud sound. A further change was to add “depth” to the image by mapping pixel brightness to displacement in the z direction. A final addition was taking mouse values from the MAXMSP jit.window and feeding them into the audio patch as values to control the centre frequency of the generated drone sound.
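
The brightness-to-depth idea is easiest to see outside the patch. Below is a minimal Python/NumPy sketch of that mapping; the real version lives inside the MAXMSP patch, and the function name and depth_scale value here are illustrative assumptions, not the patch's own values.

# Illustrative sketch of mapping pixel brightness to a z displacement.
import numpy as np

def brightness_to_depth(rgb_frame, depth_scale=1.0):
    # Simple luminance estimate from the RGB planes (values 0..1).
    luma = (0.299 * rgb_frame[..., 0]
            + 0.587 * rgb_frame[..., 1]
            + 0.114 * rgb_frame[..., 2])
    # Brighter pixels are pushed further along the z axis.
    return luma * depth_scale

# Example: a random 480x640 "camera frame" produces a matching z-depth map.
frame = np.random.rand(480, 640, 3)
z = brightness_to_depth(frame, depth_scale=2.0)
print(z.shape, z.min(), z.max())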

An alternative patch used two forces tied to the mouse to bounce the image around: an attractive force and a repulsive force.

The forces at play were a simple Newtonian physics model. The force F applied to each particle is defined as:

Vf = Ptarg - Pparticle    (vector from the particle to the mouse)
r = length of Vf
F = Vf / (r*r)

Pparticle = current particle x and y position
Ptarg = current mouse x and y position

a = F / M    (M is the mass of the particle)

arep = arbitrary repulsive force

M = arbitrary; it could be noise values (between 0 and 1) or simply 1

Vnew = Vold + a + arep    (calculate the particle velocity)

Pnew = Pold + Vnew    (calculate the particle position)

This was coded in jit.gen.
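
For readers who don't use Gen, here is a minimal Python/NumPy sketch of the same update step described above. It is not the actual jit.gen code; the repulsion falloff, the epsilon guard and the example values are assumptions made for illustration.

# One step of the attract/repel mouse forces (pos, vel: N x 2 arrays).
import numpy as np

def update_particles(pos, vel, mouse, mass=1.0, repel_strength=0.0, eps=1e-6):
    vf = mouse - pos                               # vector from particle to mouse
    r = np.linalg.norm(vf, axis=1, keepdims=True)  # r = length of Vf
    f = vf / (r * r + eps)                         # F = Vf / (r*r)  (attractive)
    a = f / mass                                   # a = F / M
    a_rep = -repel_strength * vf / (r + eps)       # arbitrary repulsive force, pushing away
    vel = vel + a + a_rep                          # Vnew = Vold + a + arep
    pos = pos + vel                                # Pnew = Pold + Vnew
    return pos, vel

# Example: 1,000 particles pulled towards a mouse position at (0.5, 0.5).
pos = np.random.rand(1000, 2)
vel = np.zeros_like(pos)
pos, vel = update_particles(pos, vel, mouse=np.array([0.5, 0.5]), repel_strength=0.1)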

 

MAXMSP CODED INPUTS FOR SOUND:

The starting point for the sound work was this pre-existing online tutorial: https://youtu.be/BOh7ysTFkiI

Having studied this patch, it was relatively straightforward to adapt and simplify it so that generating the “line~” parameter inputs became simpler. The patch was further changed by taking the mouse position from the video patch and using it to control the centre frequency of the drone.
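
The mouse-to-frequency mapping is just a scaling step. Below is a hedged Python sketch of one way to do it; the exponential (per-octave) mapping and the 80 Hz to 2000 Hz range are illustrative assumptions, not the values used in the actual Drone_Synth patch.

# Scale a normalised mouse x position (0..1) to a filter centre frequency.
def mouse_to_centre_freq(mouse_x, f_low=80.0, f_high=2000.0):
    mouse_x = min(max(mouse_x, 0.0), 1.0)          # clamp to the window range
    return f_low * (f_high / f_low) ** mouse_x     # exponential sweep sounds even per octave

print(mouse_to_centre_freq(0.0))   # 80 Hz
print(mouse_to_centre_freq(0.5))   # ~400 Hz
print(mouse_to_centre_freq(1.0))   # 2000 Hz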

I tried out a different roving filter for the drone sound using a cascade~ filter, but I still preferred the original biquad~ filter.

The making of the drone sound is quite straightforward. It uses multiple closely spaced (slightly detuned) copies of the original waveform. I tried using different pitches based on the equal-tempered chromatic scale:

multiply the base frequency by “x” to reach each semitone step up to the octave:
1.000 C
1.059 C#
1.122 D
1.189 D#
1.260 E
1.335 F
1.414 F#
1.498 G
1.587 G#
1.682 A
1.782 A#
1.888 B
2.000 C (octave)

However, the closer the frequencies the better the sound result.
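
The multipliers in the table above are the equal-tempered semitone ratios 2^(n/12); this small Python check reproduces them for one octave.

# Reproduce the semitone multipliers listed above.
notes = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B", "C"]
for n, name in enumerate(notes):
    print(f"{2 ** (n / 12):.3f}  {name}")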

Also, the use of freqshift~ was really important. It gave a depth to the drone sound frequencies that are added together; without it the sound was too hollow. I would say having multiples of this freqshift~ would make the sound even richer. freqshift~ was an easy way to add a closely matching duplicate waveform. The same result could be achieved mathematically.
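
To see why a closely shifted duplicate thickens the sound, here is a hedged Python sketch: summing a tone with a copy a few Hz away produces slow beating rather than a static, hollow tone. This is not the freqshift~ implementation itself, and the 110 Hz base and 3 Hz offset are illustrative values only.

# Sum a tone with a slightly shifted copy to hear/see the beating effect.
import numpy as np

sr = 44100
t = np.arange(sr) / sr                     # one second of sample times
base = np.sin(2 * np.pi * 110.0 * t)       # base tone
shifted = np.sin(2 * np.pi * 113.0 * t)    # duplicate shifted up by 3 Hz
drone = 0.5 * (base + shifted)             # amplitude slowly beats at 3 Hz
print(drone.shape, drone.max())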

 

 

 

 

Instructions for compiling and running your project.

The setup at the pop-up show was as follows. A USB camera was attached to a Mac mini with a mouse and speakers. The result was good. A small piece of code was added to make the screen full size and to hide the menus when they were not hovered over, as shown below. MAXMSP captured the camera with a jit.grab object, and the external USB camera was selected from the video device list.

 

The setup on the laptop is as follows: 

Just run the patches and everything should work. Change the jit.gl.mesh drawing mode to points, make the partimage2 window full screen, and move the mouse around.

 

 

What software tools were used?


Data inputs: all processed through MAXMSP.
Data outputs: all sent via MAXMSP.

 

 

 

FILTER INFORMATION (mainly for my own comprehension)

 

Biquad:

biquad~ implements a two-pole, two-zero filter using the following equation:

y[n] = a0 * x[n] + a1 * x[n-1] + a2 * x[n-2] - b1 * y[n-1] - b2 * y[n-2]

You can specify the coefficients a0, a1, a2, b1 and b2 as signals or floats. (If you make the filter explode by making the b coefficients too high, you can recover, after lowering them, with the clear message or by turning the audio on and off.)

In the last tutorial, we discussed how filters could be expressed as equations, e.g.

y[n] = 0.5 * x[n] + 0.5 * y[n-1]

The 0.5 values in the equation above set the respective gains of the different samples used in the filter. If we wanted a more flexible filter, we could generalize it so that those numbers are variable, e.g.:

y[n] = A * x[n] + B * y[n-1]

By modifying the values of A and B, we could control the frequency response of this filter. While the math behind this operation is beyond the scope of this tutorial, it is generally true that the more energy given to the delayed output sample (the y[n-1] term), the smoother the output and the more the high frequencies are suppressed.

A fairly standard tactic in digital filter design is to create a filter equation that can perform any kind of standard filtering operation (lowpass, bandpass, etc.) on an input signal. The most common implementation of this is called the biquadratic filter equation (or biquad). It consists of the following equation:

y[n] = A * x[n] + B * x[n-1] + C * x[n-2] - D * y[n-1] - E * y[n-2]

This equation uses the incoming sample (x), the last two incoming samples, and the last two outgoing samples (y) to generate its filter. (Another term for a biquadratic filter is a two-pole, two-zero filter, because it has four delay coefficients to affect its behavior.) By adjusting the five coefficients (A, B, C, D, E), you can generate all manner of filters.
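
As a concrete reference, here is the same difference equation written out as a plain Python loop. The coefficient values in the example are arbitrary placeholders for illustration, not a particular filter design from the patch.

# Direct implementation of the biquad difference equation quoted above:
# y[n] = a0*x[n] + a1*x[n-1] + a2*x[n-2] - b1*y[n-1] - b2*y[n-2]
def biquad(x, a0, a1, a2, b1, b2):
    y = []
    x1 = x2 = y1 = y2 = 0.0          # delayed input/output samples
    for xn in x:
        yn = a0 * xn + a1 * x1 + a2 * x2 - b1 * y1 - b2 * y2
        x2, x1 = x1, xn              # shift the input history
        y2, y1 = y1, yn              # shift the output history
        y.append(yn)
    return y

# Example: a gentle smoothing (lowpass-like) coefficient set on a step input.
print(biquad([1.0] * 8, a0=0.2, a1=0.2, a2=0.2, b1=-0.3, b2=0.0)[:4])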

 

 

Cascade

 

Cascaded series of biquad filters

 

Filtergraph~

The horizontal axis of the filtergraph~ object’s display represents frequency and the vertical axis represents amplitude. The curve displayed reflects the frequency response of the current filter model. The frequency response is the amount that the filter amplifies or attenuates the frequencies present in an audio signal. The biquad~ (or cascade~) objects do the actual filtering based on the coefficients that filtergraph~ provides.

The cutoff frequency (or center frequency) is the focal frequency of a given filter’s activity. Its specific meaning is different for each filter type, but it can generally be identified as a transitional point (or center of a peak/trough) in the graph’s amplitude curve. It is marked in the display by a colored rectangle whose width corresponds to the bandwidth of the filter.

The bandwidth (the transitional band in Hz) is the principal range of a filter’s effect, centered on the cutoff frequency. The edges of a filter’s bandwidth are located where the frequency response has a 3 dB change in amplitude from the cutoff or center frequency. Q (also known as resonance) describes filter “width” as the ratio of the center/cutoff frequency to the bandwidth. Using Q instead of bandwidth lets us move the center/cutoff frequency while keeping a constant bandwidth across octaves. The Q parameter for shelving filters is often called S (or slope), although it is ostensibly the same as Q.

The filter’s gain is the linear amplitude at the center or cutoff frequency. The interpretation of the gain parameter depends somewhat on the type of filter. The gain may also affect a shelf or large region of the filter’s response.
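
A quick worked example of the Q/bandwidth relation described above: since Q is the centre frequency divided by the bandwidth, a fixed Q means the bandwidth grows with the centre frequency. A tiny Python check (with illustrative numbers):

# Q = centre frequency / bandwidth, so bandwidth = centre frequency / Q.
def bandwidth_hz(centre_freq, q):
    return centre_freq / q

for fc in (250.0, 500.0, 1000.0):
    print(fc, "Hz centre, Q=2 ->", bandwidth_hz(fc, 2.0), "Hz bandwidth")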

 

 

 

Summary in running the installation:

Install MAXMSP on the computer, make sure all three patches are in the same folder, and run figireout08B.maxpat:

figireout08B.maxpat – visual image capture and manipulation
Drone_Synth_06_single – generated sound changes when mouse moves
myWave2.maxpat – different drone wave forms to explore

Optionally attach speakers or an external camera. Select the USB camera by opening “p detectaface”, clicking “getvdevlist” and choosing the correct camera in p vdev/input.

Or use a laptop with a built-in camera. Set the draw mode of the partimage2 jit.gl.mesh to points and make the floating “partimage2” window full screen.

Or use the <esc> key. Move the trackpad or mouse around.

 

A reflection on how successful your project was in meeting your creative aims.

Creatively the results were perfect. Having the ability to programme the jit.gen code gave me total control over the forces at play and meant the forces could also evolve.
However, performance was a bit sluggish. With over 400,000 particles being controlled it was not that fast, and adding the sound interaction slowed it down further. Given the choice I could use two computers to alleviate this problem, but it is still annoying.

 

What works well?  

I loved the drone sound, and creating a particle system in MAXMSP was amazing.

What challenges did you face? 

I didn’t know how to create either the drone sound or the Gen particles, so everything was challenging at first. However, the basic drone creation was quite straightforward and was very educational. The Gen coding is still quite difficult, as examples are sparse and the explanations are cryptic. The biggest hurdle was understanding how to use the different Gen objects.

What might you change if you had more time?

Developing different drone sounds that change over time would be nice, as would changing the image representation to be more 3D in the view and exploring more forces.

 

FURTHER RECORDINGS

 

 

References:

 

MAXMSP 

https://youtu.be/AFaPc9ElQD4

https://youtu.be/BOh7ysTFkiI