chorAIle




hardware:   Apple MacBook Air (M1), Eurorack modular system, Behringer Model D synth,
              Behringer 2600 synth

software:   Apple Logic Pro X, RTcmix, Max/MSP

I didn't really use AI in any deep way to make these pieces. What I said on the main chorAIle web page was pretty accurate -- I just instructed ChatGPT to write a 4-voice, 16-bar, 16th-century counterpoint chorale. I did tell it to output the pitches as MIDI note numbers and to represent the duration of a quarter note as 1.0. It was already smart enough to do that, and it wrote the parts separately. I was able to interpret this data using RTcmix (rtcmix~) in Max/MSP to control the modular synth gear.
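To make that data format concrete, here is a minimal sketch in Python (not the actual Max/MSP patch or rtcmix~ score; the notes and tempo shown are hypothetical) of how one voice in that format -- MIDI note numbers with durations where a quarter note is 1.0 -- can be turned into note events with start times, durations in seconds, and frequencies, which is essentially the interpretation step before the data drives the synth gear.

    # Sketch only: interpret one voice of ChatGPT-style chorale data.
    # Each event is (MIDI note number, duration in quarter notes, where 1.0 = quarter).

    TEMPO_BPM = 72  # assumed tempo for illustration

    # hypothetical soprano line
    soprano = [(67, 1.0), (69, 1.0), (71, 2.0), (72, 1.0), (71, 1.0), (69, 2.0)]

    def midi_to_hz(note):
        """Equal-tempered frequency for a MIDI note number (A4 = 69 = 440 Hz)."""
        return 440.0 * 2 ** ((note - 69) / 12)

    def quarters_to_seconds(quarters, bpm=TEMPO_BPM):
        """Convert a duration in quarter notes to seconds at the given tempo."""
        return quarters * 60.0 / bpm

    start = 0.0
    for note, quarters in soprano:
        dur = quarters_to_seconds(quarters)
        print(f"start {start:5.2f}s  dur {dur:4.2f}s  freq {midi_to_hz(note):7.2f} Hz")
        start += dur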

I know I could have worked more with ChatGPT to sculpt the output prior to realizing it as sound. I did do a little refining: mainly setting different soprano lines for ChatGPT to use and restricting the scale-pitch choices. Specifying much more was contrary to my original intention -- to feature the AI algorithm itself. I'm also aware of the huge electrical energy toll that many AI operations take. Even though this project represents a ridiculously tiny amount of that energy, it's kind of a ridiculous use of AI, tiny or otherwise. Plus I didn't want to get trapped in the 'tweak the output forever' rabbit-hole. I was more interested in synthesizing the chorales.

The last one (the 4th) was very curious, with ChatGPT writing out the same three chords over and over. I think I over-specified the soprano line on that one. I do like it, though!

Piece website: