Interpretation of Data
Originally this was titled "natural process models/parameter mapping"
in the syllabus, and it still is, more or less, about that. One of the main
points we tried to make throughout the term was how data can be
scaled, thinned, or otherwise altered to 'fit' musical parameters for
synthesis, audio DSP processing, etc. (this is more generically
called 'data transcoding' these days). Instead of using a
model of a natural process, which we have more or less already done
in past classes this term, we decided to show two other techniques for
generating data.
generating data. I showed how to build a generic image-interpreter
app in Max/MSP that allowed drawn lines and then an imported image
to be interpreted as sound. The audio mapping was fairly simple
(time along the x-axis, frequency along the y, intensity of resynthesis
derived from the RGB pixel values), but it was a nice example of how
an interface can be designed to reflect a particular way of thinking
about music construction.
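The mapping itself is independent of Max/MSP. As a minimal sketch (not the actual patch), here is the same idea in Python: each row of a grayscale image becomes a sine partial, columns are read left-to-right as time slices, and pixel brightness controls that partial's amplitude. All names, the frequency range, and the sample rate are illustrative assumptions.

```python
import math

def image_to_audio(pixels, duration=1.0, sr=8000, f_lo=200.0, f_hi=2000.0):
    """Additive resynthesis of a grayscale 'image' (rows of 0..1 values).
    x-axis -> time, y-axis -> frequency, brightness -> partial amplitude.
    (Illustrative sketch; parameters are arbitrary choices.)"""
    n_rows = len(pixels)
    n_cols = len(pixels[0])
    n_samples = int(duration * sr)
    out = [0.0] * n_samples
    for r, row in enumerate(pixels):
        # Map row index to frequency; the top row gets the highest pitch.
        frac = r / max(n_rows - 1, 1)
        freq = f_hi - frac * (f_hi - f_lo)
        for i in range(n_samples):
            col = min(int(i / n_samples * n_cols), n_cols - 1)
            amp = row[col]  # pixel brightness drives this partial's intensity
            out[i] += amp * math.sin(2 * math.pi * freq * i / sr)
    # Normalize so the mix never clips, whatever the image contents.
    peak = max(abs(s) for s in out) or 1.0
    return [s / peak for s in out]

# A tiny 3x4 'image': one bright horizontal line in the middle row,
# which should resynthesize as a steady mid-range tone.
img = [[0.0, 0.0, 0.0, 0.0],
       [1.0, 1.0, 1.0, 1.0],
       [0.0, 0.0, 0.0, 0.0]]
audio = image_to_audio(img)
```

A drawn line that rises from bottom-left to top-right of the image would, under this mapping, come out as an upward glissando.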
Bryan then showed an approach similar in spirit (a particular way
of thinking about music construction) but very different in design:
a model of Ligeti's Étude No. 1, "Désordre",
constructed in OpenMusic and used as the basis for subsequent
compositional manipulation.
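One feature often cited in analyses of "Désordre" is that the two hands repeat similar accent patterns whose total lengths differ, so the hands drift out of phase and eventually realign. As a generic sketch of that principle (not Bryan's OpenMusic model; the function and pattern lengths are illustrative assumptions), the drift can be computed directly:

```python
from math import lcm

def downbeat_drift(len_a, len_b):
    """For two looping accent patterns of len_a and len_b pulses that
    start together, return the offset (in pulses) between the hands at
    each repetition of pattern A, plus the realignment point (the lcm).
    Illustrative sketch of the phase-shifting principle only."""
    realign = lcm(len_a, len_b)
    drift = [(k * len_a) % len_b for k in range(realign // len_a + 1)]
    return drift, realign

# An 8-pulse loop against a 7-pulse loop: the offset grows by one pulse
# per repetition until the two downbeats coincide again.
drift, realign = downbeat_drift(8, 7)
```

Having the process as data like this is what makes it useful as a compositional model: pattern lengths, pitch material, and the realignment span all become parameters open to manipulation.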