Here are some links to "pd" web pages (and Marlon's patch) if you want to hear the original Marlon Feld sound:
In order to learn how to process incoming audio (real-time or read from a soundfile), we coded a very basic 'mixer' instrument that reads audio in and writes it (plays it) out. The instrument is designed so that we can set the amount of time to skip on the input (for a soundfile, that is -- how far to go into the soundfile before we start reading samples), set when we schedule the output, and set the duration we want to read/write. For added fun we included the capability to modify the amplitude dynamically (PField control) so that we could use an amplitude envelope. We did this for the pan control (left-right) also.
Here is the code for the SIMPLEMIX::init() member function (I've removed most of the comments, they are intact in the source file you can download):
int SIMPLEMIX::init(double p[], int n_args)
{
    if (rtsetoutput(p[0], p[2], this) == -1)
        return DONT_SCHEDULE;
    if (outputChannels() > 2)
        return die("SIMPLEMIX", "Use mono or stereo output only.");
    if (rtsetinput(p[1], this) == -1)
        return DONT_SCHEDULE;

    inchan = p[4];
    if (inchan >= inputChannels())
        return die("SIMPLEMIX", "You asked for channel %d of a %d-channel input.",
            inchan, inputChannels());

    // set up our input object
    theInput = new Ortgetin(this);

    amp = p[3];
    pan = p[5];

    return nSamps();
}
The rtsetinput(p[1], this) call works like rtsetoutput(), but it takes only a parameter for the amount of time (in seconds) to skip into the input soundfile (this should always be "0" for real-time audio input), along with the "this" pointer to the instrument/note. The duration is set in rtsetoutput().
The other new addition to this code besides the setting of the inchan variable is the creation and assignment of a new Ortgetin() object. It is declared in SIMPLEMIX.h like this:
Ortgetin *theInput;

and it is the object we will use to get incoming audio samples in our SIMPLEMIX::run() member function. Here is the code for that function:
int SIMPLEMIX::run()
{
    float out[2];
    float in[2];

    for (int i = 0; i < framesToRun(); i++) {
        if (--resetter <= 0) {
            doupdate();
            resetter = getSkip();
        }

        // Grab the current input sample, scaled by the amplitude multiplier.
        theInput->next(in);
        out[0] = in[inchan] * amp;

        // If we have stereo output, use the pan pfield.
        if (outputChannels() == 2) {
            out[1] = out[0] * (1.0 - pan);
            out[0] *= pan;
        }

        // Write this sample frame to the output buffer.
        rtaddout(out);

        // Increment the count of sample frames this instrument has written.
        increment();
    }

    // Return the number of frames we processed.
    return framesToRun();
}
Because we are updating our amp and pan variables in the SIMPLEMIX::doupdate() member function, we already have amp-enveloping capabilities. Just use a maketable() in the scorefile to build the envelope and use it in the "amplitude" PField (p[3]).
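For example, a scorefile fragment along these lines would apply a rise/sustain/decay envelope to SIMPLEMIX (the soundfile name and library path are placeholders; the pfields follow the order in init() above: outskip, inskip, dur, amp, inchan, pan):

```
rtsetparams(44100, 2)
load("./libSIMPLEMIX.so")
rtinput("mysoundfile.aif")

ampenv = maketable("line", 1000, 0,0, 1,1, 9,1, 10,0)
SIMPLEMIX(0, 0, 3.5, ampenv, 0, 0.5)
```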
One final note about the SIMPLEMIX code: We didn't set the value of the resetter variable for PField-updating in our SIMPLEMIX::init() member function (in the past examples we have said "resetter = 0;"). In the 'constructor' for the SIMPLEMIX instrument, we do this:
SIMPLEMIX::SIMPLEMIX() : resetter(0) { }
What we are aiming to do is to create an RTcmix instrument capable of filtering an input signal with a set of broad band-pass filters ("formants") to produce the vowel-like sounds that were in Marlon's original pd patch. Our first attempt is FORMANT1, which will instantiate only one of these filters.
Changing the code of SIMPLEMIX to do this is almost trivial. Here is the setup in FORMANT1::init():
int FORMANT1::init(double p[], int n_args)
{
    if (rtsetoutput(p[0], p[2], this) == -1)
        return DONT_SCHEDULE;
    if (outputChannels() > 2)
        return die("FORMANT1", "Use mono or stereo output only.");
    if (rtsetinput(p[1], this) == -1)
        return DONT_SCHEDULE;

    inchan = p[6];
    if (inchan >= inputChannels())
        return die("FORMANT1", "You asked for channel %d of a %d-channel input.",
            inchan, inputChannels());

    // set up our input object
    theInput = new Ortgetin(this);

    // set up our filter
    // Oreson(float SR, float centerFreq, float bandwidth[, Scale scaling])
    theFilt = new Oreson(SR, p[4], p[5]);

    amp = p[3];
    pan = p[7];

    return nSamps();
}
Using Oreson() is also easy:
int FORMANT1::run()
{
    float out[2];
    float in[2];

    // Each loop iteration processes 1 sample frame.
    for (int i = 0; i < framesToRun(); i++) {
        if (--resetter <= 0) {
            doupdate();
            resetter = getSkip();
        }

        // Grab the current input sample, scaled by the amplitude multiplier.
        theInput->next(in);

        // filter it!
        out[0] = theFilt->next(in[inchan]) * amp;

        // If we have stereo output, use the pan pfield.
        if (outputChannels() == 2) {
            out[1] = out[0] * (1.0 - pan);
            out[0] *= pan;
        }

        // Write this sample frame to the output buffer.
        rtaddout(out);

        // Increment the count of sample frames this instrument has written.
        increment();
    }

    // Return the number of frames we processed.
    return framesToRun();
}
That's it!
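A hypothetical scorefile for testing FORMANT1 might look like this (soundfile name and library path are placeholders; the pfields follow the order in init() above: outskip, inskip, dur, amp, centerfreq, bandwidth, inchan, pan):

```
rtsetparams(44100, 2)
load("./libFORMANT1.so")
rtinput("mysoundfile.aif")

FORMANT1(0, 0, 3.5, 1.0, 2250, 34, 0, 0.5)
```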
RTcmix Instrument: FORMANT3
Changing our one-formant instrument into a three-formant (vocal-like!) instrument is very easy -- we just duplicate the filter two more times. Here is the relevant code in the FORMANT3::init() member function:
// set up our filters
// Oreson(float SR, float centerFreq, float bandwidth[, Scale scaling])
theFilt1 = new Oreson(SR, p[4], p[5]);
theFilt2 = new Oreson(SR, p[6], p[7]);
theFilt3 = new Oreson(SR, p[8], p[9]);
In FORMANT3::run(), the input sample is sent through all three filters in parallel, and their outputs are summed:

// Grab the current input sample, scaled by the amplitude multiplier.
theInput->next(in);

// filter it!
in[inchan] *= amp;
out[0] = theFilt1->next(in[inchan]);
out[0] += theFilt2->next(in[inchan]);
out[0] += theFilt3->next(in[inchan]);
We could instead chain the filters in series, each one processing the previous filter's output:

// Grab the current input sample, scaled by the amplitude multiplier.
theInput->next(in);

// filter it!
in[inchan] *= amp;
out[0] = theFilt1->next(in[inchan]);
out[0] = theFilt2->next(out[0]);
out[0] = theFilt3->next(out[0]);
To use this with our CHAOS instruments, we created a score file using the bus_config system to interconnect RTcmix instruments (see the bus_config tutorial for more information on how this scorefile command works):
rtsetparams(44100, 2)
load("/Users/brad/papes/CLASS2012/SOFTspring/week6/FORMANT3/libFORMANT3.so")
load("/Users/brad/papes/CLASS2012/SOFTspring/week6/CHAOS1/libCHAOS1.so")

bus_config("CHAOS1", "aux 0-1 out")
bus_config("FORMANT3", "aux 0-1 in", "out 0-1")

Rvalue = makeconnection("mouse", "x", 3.0, 3.999, 3.5, 10)
repeat_value = makeconnection("mouse", "y", 0, 100, 7, 10)

CHAOS1(0, 1000.0, 40000, Rvalue, repeat_value, 0.5)

amp = 1.0
FORMANT3(0, 0, 999, amp/2, 280, 34, 2250, 34, 2900, 340, 0, 0.5)
The last signal-processing instrument we coded in class was designed to show how to 'buffer' a set of input samples for additional treatment. Many DSP operations (such as FFT transforms) need to do this. The STRETCHY instrument does a very simple time-expansion on an input soundfile -- it reads a buffer of input samples and then writes that same buffer out several times before reading the next buffer of input samples. Obviously this will expand the soundfile in time by the repeat-factor of the buffers. The STRETCHY::init() member function looks like this:
int STRETCHY::init(double p[], int n_args)
{
    if (rtsetoutput(p[0], p[2], this) == -1)
        return DONT_SCHEDULE;
    if (outputChannels() > 2)
        return die("STRETCHY", "Use mono or stereo output only.");
    if (rtsetinput(p[1], this) == -1)
        return DONT_SCHEDULE;

    inchan = p[5];
    if (inchan >= inputChannels())
        return die("STRETCHY", "You asked for channel %d of a %d-channel input.",
            inchan, inputChannels());

    // set up our input object
    theInput = new Ortgetin(this);

    amp = p[3];
    stretchfactor = p[4];
    stretchcounter = 0;
    pan = p[6];

    return nSamps();
}
int STRETCHY::run()
{
    float out[2];
    float in[2];

    // load up the buffer if we need a new load
    if (--stretchcounter <= 0) {
        for (int i = 0; i < framesToRun(); i++) {
            theInput->next(in);
            inbuffer[i] = in[inchan];
        }
        stretchcounter = stretchfactor;
    }

    // now write out that buffer
    for (int i = 0; i < framesToRun(); i++) {
        if (--resetter <= 0) {
            doupdate();
            resetter = getSkip();
        }

        // Grab the current buffered sample, scaled by the amplitude
        // multiplier. (We apply amp here, not when filling the buffer,
        // so it isn't applied twice and so PField updates take effect.)
        out[0] = inbuffer[i] * amp;

        // If we have stereo output, use the pan pfield.
        if (outputChannels() == 2) {
            out[1] = out[0] * (1.0 - pan);
            out[0] *= pan;
        }

        // Write this sample frame to the output buffer.
        rtaddout(out);

        // Increment the count of sample frames this instrument has written.
        increment();
    }

    // Return the number of frames we processed.
    return framesToRun();
}
The samples read in this 'wrapped' loop get stored in an array called inbuffer. Where did this array come from? In the STRETCHY.h file, it is declared like this:
class STRETCHY : public Instrument {
public:
    ...
private:
    ...
    float *inbuffer;
    ...
};
STRETCHY::STRETCHY() : inbuffer(NULL), resetter(0) { }
int STRETCHY::configure()
{
    // RTBUFSAMPS is the maximum number of sample frames processed for each
    // call to run() below.
    inbuffer = new float [RTBUFSAMPS];
    return inbuffer ? 0 : -1;  // IMPORTANT: Return 0 on success, and -1 on failure.
}
Note that because we initialized inbuffer to NULL in the constructor, we can check whether or not it was allocated properly and return either a "0" or a "-1" from the STRETCHY::configure() member function. The configure() member function runs just prior to the instrument executing; if it returns "-1", the instrument will not execute. Why do we allocate the array here? Because it helps with memory management. If we allocated the input buffer when the note was being scheduled (i.e. in the STRETCHY::init() member function), the memory would be tied up for the entire run of the RTcmix score until the note executes. If you have many, many calls to STRETCHY, this can become a memory-load problem, especially if you are using RTcmix on an iDevice.
Also, because we are such good and wonderful programmers, we deallocate the memory when the note is finished in our 'destructor':
STRETCHY::~STRETCHY() { delete [] inbuffer; }
We can also apply an envelope to each buffered chunk of samples (for example, to smooth the discontinuities at the points where the repeated buffers splice together). To do this, we use the getPFieldTable() Instrument class function:
int STRETCHY::init(double p[], int n_args)
{
    ...
    // set up our input object
    theInput = new Ortgetin(this);

    // get the envelope table stuff for each sample chunk
    thenvelope = (double *)getPFieldTable(5, &thenvelopelength);

    amp = p[3];
    ...
    return nSamps();
}
int STRETCHY::run()
{
    float out[2];
    float in[2];

    // load up the buffer if we need a new load
    if (--stretchcounter <= 0) {
        float thenvelopelocation = 0.0;
        float thenvelopeskipper = (float)thenvelopelength / (float)framesToRun();
        for (int i = 0; i < framesToRun(); i++) {
            theInput->next(in);
            inbuffer[i] = in[inchan] * thenvelope[(int)thenvelopelocation];
            thenvelopelocation += thenvelopeskipper;
        }
        stretchcounter = stretchfactor;
    }

    // now write out that buffer
    for (int i = 0; i < framesToRun(); i++) {
        ...

    return framesToRun();
}
Also, STRETCHY only works on soundfiles. It is difficult to pause the Real World while you wait to load the next set of input sound samples into the input buffer. Let me know if you figure out how to do this.