Volume-based triggering in Max/MSP.

More recently I have been using Max/MSP both as a tool for facilitating real-time interaction between a performer and a synthesis engine, and as a structuring device.

One of the simplest ways to achieve the first of these is to analyse an audio stream and wait for a certain kind of behaviour before triggering an action. An example of this is displayed in the patch below: when the amplitude of the signal at the dac~ exceeds a certain threshold, a bang is sent to sfplay~, which starts audio playback. This behaviour, while technically very simple, produces a somewhat convincing result. However, it is not without its issues. In this situation, one needs to compose the triggers into the piece. This is fine if they are an integral part of the piece, but if one wants to emulate the idea of a machine following a performer, and is using this kind of technology to achieve it, then it is very limiting. It ain't no AI.
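
As an illustration, here is a minimal sketch in Python (not Max) of the logic the patch implements: follow the peak amplitude of each block of audio and fire a one-shot trigger when it crosses a threshold, roughly what a chain like [peakamp~] → [> 0.5] → [sel 1] banging an [sfplay~] does in Max. The threshold values and the on_trigger callback here are placeholders for illustration, not part of the actual patch.

```python
def make_threshold_trigger(threshold=0.5, rearm_below=0.2,
                           on_trigger=lambda: print("bang")):
    """Return a per-block processor that fires once per threshold crossing."""
    armed = True  # only fire again after the level has dropped back down

    def process_block(samples):
        nonlocal armed
        peak = max(abs(s) for s in samples)  # crude amplitude follower
        if armed and peak > threshold:
            armed = False
            on_trigger()   # in the patch: bang the sfplay~ to start playback
        elif not armed and peak < rearm_below:
            armed = True   # hysteresis, so one loud note = one trigger
        return peak

    return process_block


if __name__ == "__main__":
    import math

    trigger = make_threshold_trigger(threshold=0.5)
    # Fake input: silence, a loud burst, then silence again.
    blocks = [
        [0.0] * 64,
        [0.8 * math.sin(2 * math.pi * 440 * n / 44100) for n in range(64)],
        [0.0] * 64,
    ]
    for block in blocks:
        trigger(block)  # prints "bang" once, on the loud block
```

The hysteresis (re-arming only once the level falls back below a lower bound) matters in practice; without it, a single sustained note would fire the trigger on every block.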

In many ways this technique addresses the old problem of combining live instruments with fixed media. Ordinarily, a performer's rhythmical sensibilities must be adapted so that they can play along with the tape, whereas in a situation where the machine responds to the performer, that dynamic is reversed. More thoughts later.