Chapter 1: The Digital Representation of Sound


What is frequency? Essentially, it’s a measurement of how often a given event repeats in time. If you subscribe to a daily paper, then the frequency of paper delivery could be described as once per day, seven times per week. When we talk about the frequency of a sound, we’re referring to how many times a particular pattern of amplitudes repeats during one second. Not all waveforms or physical vibrations repeat exactly (in fact almost none do!). But many vibratory phenomena, especially those in which we perceive some sort of pitch, repeat approximately regularly. If we assume that in fact they are repeating, we can measure the rate of repetition, and we call that the waveform’s frequency. 





Sine Waves

A sine wave is a good example of a repeating pattern of amplitudes, and in some ways it is the simplest. That’s why sine wave motion is sometimes referred to as simple harmonic motion. Let’s arbitrarily fix an amplitude scale to run from –1 to 1, so the sine wave goes from 0 to 1 to 0 to –1 to 0. If the complete cycle of the sine wave’s curve takes one second to occur, then we say that it has a frequency of one cycle per second (cps), or one hertz (Hz); 1,000 Hz is one kilohertz (kHz). The frequency range of sound (or, more accurately, of human hearing) is usually given as 0 Hz to 20 kHz, but our ears don’t fuse very low-frequency oscillations (0 Hz to 20 Hz, called the infrasonic range) into a pitch. Low frequencies just sound like beats. These numbers are just averages: some people hear pitches as low as 15 Hz; others can hear frequencies significantly higher than 20 kHz. A lot depends on the amplitude, the timbre, and other factors. The older you get (and the more rock n’ roll you listened to!), the more insensitive your ears become to high frequencies (a natural biological phenomenon called presbycusis).
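As a minimal sketch of these ideas, here is how a sine wave could be generated in Python as a list of amplitude values between –1 and 1 (the sample rate of 44,100 samples per second is an assumption for illustration, not something specified above):

```python
import math

def sine_wave(freq_hz, duration_s, sample_rate=44100):
    """Generate one amplitude value per sample for a sine wave
    whose amplitudes range from -1 to 1."""
    n_samples = int(duration_s * sample_rate)
    return [math.sin(2 * math.pi * freq_hz * t / sample_rate)
            for t in range(n_samples)]

# A 1 Hz sine wave sampled over one second completes exactly one cycle:
cycle = sine_wave(1, 1.0)
print(round(cycle[0], 3))    # 0.0  (the cycle starts at zero)
print(round(max(cycle), 3))  # 1.0  (it peaks at +1 a quarter of the way through)
```

Doubling `freq_hz` packs twice as many of these cycles into the same one-second span, which is exactly what a higher frequency means.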





Period

When we talk about frequency in music, we are referring to how often a sonic event happens in its entirety over the course of a specified time segment. For a sound to have a perceived frequency, it must be periodic (repeat in time). Since the period of an event is the length of time it takes the event to occur, it’s clear that the two concepts (periodicity and frequency) are closely related, if not pretty much equivalent. The period of a repeating waveform is the length of time it takes to go through one cycle. The frequency is the inverse: how many times the waveform repeats that cycle per unit time. We can understand the periodicity of sonic events just as we understand that the period of a daily newspaper delivery is one day. Since a 20 Hz tone by definition is a cycle that repeats 20 times a second, in 1/20th of a second one cycle goes by, so a 20 Hz tone has a period of 1/20, or 0.05, second. Now the "thing" that repeats is one basic unit of this regularly repeating wave, such as a sine wave (at the beginning of this section there’s a picture of two sine waves together). It’s not hard to see that the time it takes for one copy of the basic wave to recur (or move through whatever medium it is in) is proportional to the distance from crest to crest (or between any two successive corresponding points, for that matter) of the sine wave. This distance is called the wavelength of the wave (or of the periodic function). In fact, if you know how fast the wave is moving, it is easy to figure out the wavelength from the period. Physically, the wavelength is directly proportional to the period (and thus inversely proportional to the frequency). Wavelength is a spatial measure: it says how far the wave travels in space during one period. We measure it in distance, not time. The speed of sound (s) is about 345 meters/second. To find the wavelength (w) for a sound of a given frequency (f), first we invert the frequency (1/f) to get its period (p), and then we use the following simple formula:

w = p × s

Using the formula, we find that the wavelength of a 1 Hz tone is 345 meters, which makes sense, since a 1 Hz tone has a period of 1 second, and sound travels 345 meters in one second! That’s pretty far, until you realize that, since these waveforms are usually symmetrical, if you were standing, say, at 172.5 meters from a vibrating object making a 1 Hz tone and right behind you was a perfectly reflective surface, it’s entirely possible that the negative portion of the waveform might cancel out the positive and you’d hear nothing! While this is a rather extreme and completely hypothetical example, it is true that wave cancellation is a common physical occurrence, though it depends on a great many parameters. 
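These relationships can be sketched in a few lines of Python, using the 345 m/s speed of sound given above:

```python
SPEED_OF_SOUND = 345.0  # meters per second, as given in the text

def period(freq_hz):
    """Period in seconds: the inverse of frequency (p = 1/f)."""
    return 1.0 / freq_hz

def wavelength(freq_hz):
    """Wavelength in meters: how far sound travels in one period (w = p * s)."""
    return period(freq_hz) * SPEED_OF_SOUND

print(wavelength(1))    # 345.0 meters, as in the example above
print(wavelength(20))   # 17.25 meters for a 20 Hz tone
print(wavelength(440))  # roughly 0.78 meters for concert A
```

Note how higher frequencies give shorter wavelengths: the wave repeats more often, so each cycle covers less distance.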

Pitch

Musicians usually talk to each other about the frequency content of their music in terms of pitch, or sets of pitches called scales. You’ve probably heard someone mention a G minor chord, a blues scale, or a symphony in C, but has anyone ever told you about the new song they wrote with lots of 440 cycles per second in it? We hope not! Humans tend to recognize relative relationships, not absolute physical values. And when we do, those relationships (especially in the aural domain) tend to be logarithmic. That is, we don’t perceive the difference (subtraction) of two frequencies, but rather the ratio (division). This means that it is much easier for most humans to hear or describe the relationship or ratio between two frequencies than it is to name the exact frequencies they are hearing. And in fact, for most of us, the exact frequencies aren’t even very important—we recognize "Row, Row, Row Your Boat" regardless of what frequency it is sung at, as long as the relationships between the notes are more or less correct. The common musical term for this is transposition: we hear the tune correctly no matter what key it’s sung in.
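A small sketch makes the transposition idea concrete: multiplying every frequency in a melody by the same ratio shifts its key but leaves all the ratios between notes, and hence the tune, intact. (The three frequencies below are just illustrative choices, not anything specified in the text.)

```python
def transpose(freqs, ratio):
    """Transpose a melody by multiplying every frequency by the same ratio.
    The intervals (ratios between successive notes) are unchanged."""
    return [f * ratio for f in freqs]

# Three illustrative melody notes starting on A (440 Hz):
melody = [440.0, 495.0, 550.0]

# Shift the whole melody up an octave (ratio 2:1) -- still the "same tune":
print(transpose(melody, 2))  # [880.0, 990.0, 1100.0]

# The note-to-note ratios are identical before and after:
print([round(b / a, 3) for a, b in zip(melody, melody[1:])])  # [1.125, 1.111]
```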





Once again, this is all part of that logarithmic perception "thing" we’ve been yammering on about, because the way we describe that increase is by logarithms and exponentials. Here’s a simple example: the difference to our ears between 101 Hz and 100 Hz is much greater than the difference between 1,001 Hz and 1,000 Hz. We don’t hear a change of 1 Hz in each case; instead we hear a change of 1,001/1,000 (= 1.001) as compared to a much bigger change of 101/100 (= 1.01).

Intervals and Octaves

So we don’t really care about the linear, or arithmetic, differences between frequencies; we are almost solely interested in the ratio of two frequencies. We call those ratios intervals, and almost every musical culture in the world has some term for this concept. In Western music, the 2:1 ratio is given a special importance, and it’s called an octave. It seems clear (though not totally unarguable) that most humans tend to organize the frequency spectrum between 20 Hz and 20 kHz roughly into octaves, which means powers of 2. That is, we perceive the same pitch difference between 100 Hz and 200 Hz as we do between 200 Hz and 400 Hz, 400 Hz and 800 Hz, and so on. In each case, the ratio of the two frequencies is 2:1. We sometimes call this base-2 logarithmic perception. Many theorists believe that the octave is somehow fundamental to, or innate and hardwired in, our perception, but this is difficult to prove. It’s certainly common throughout the world, though a great deal of approximation is tolerated, and often preferred!
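Base-2 logarithmic perception can be sketched directly: taking the base-2 logarithm of the ratio of two frequencies gives the size of the interval between them in octaves.

```python
import math

def interval_in_octaves(f1, f2):
    """Size of the interval between two frequencies, measured in octaves:
    the base-2 logarithm of their ratio."""
    return math.log2(f2 / f1)

print(interval_in_octaves(100, 200))  # 1.0 -- one octave
print(interval_in_octaves(200, 400))  # 1.0 -- the same perceived interval

# The 1 Hz examples from above: the same arithmetic difference is a
# roughly ten-times-smaller interval at the higher frequency.
print(round(interval_in_octaves(100, 101), 4))    # ~0.0144
print(round(interval_in_octaves(1000, 1001), 4))  # ~0.0014
```

Every octave, wherever it falls in the spectrum, comes out as exactly 1.0, which is the "same pitch difference" the text describes.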





One thing is clear, however: to have pitch, we need frequency, and thus periodic waveforms. Each of these three concepts implies the other two. This relationship is very important when we discuss how frequency is not used just for pitch, but also in determining timbre. To get some sense of this, consider that the highest note of a piano is around 4 kHz. What about the rest of the range, the roughly 16 kHz of available sound? It turns out that this larger frequency range is used by the ear to determine a sound’s timbre. We will discuss timbre in Section 1.4. Before we move on to timbre, though, we should mention that pitch and amplitude are also related. When we hear sounds, we tend to compare them, and think of their amplitudes, in terms of loudness. The perceived loudness of a sound depends on a combination of factors, including the sound’s amplitude and frequency content. For example, given two sounds of very different frequencies but at exactly the same amplitude, the lower-frequency sound will often seem softer. Our ear tends to amplify certain frequencies and attenuate others.

Fletcher-Munson Curves

When looking at Figure 1.17 for the Fletcher-Munson curves, note how the curves start high in the low frequencies, dip down in the mid-frequencies, and swing back up again. What does this mean? Well, humans need to be very sensitive to the mid-frequency range. That’s how, for instance, you can tell immediately if your mom’s upset when she calls you on the phone. (Phones, by the way, cut off everything above around 7 kHz.) Most of the sounds we need to recognize for survival purposes occur in the mid-frequency range; low frequencies are not too important for survival. The nuances and tiny inflections of speech and most other sounds tend to happen in the 500 Hz to 2 kHz range, and we have evolved to be extremely sensitive in that range (though it’s hard to say which came first, the evolution of speech or the evolution of our sensitivity to speech sounds). This mid-frequency sensitivity is probably a universal human trait and not all that culturally dependent. So, if you’re traveling far from home, you may not be able to understand the person at the café table next to you, but if you whistle the Fletcher-Munson curves, you’ll have a great time together.
©Burk/Polansky/Repetto/Roberts/Rockmore. All rights reserved.