This chapter looks at sound from the computer's point of view. In order to access the sound commands on some computers, it is necessary to POKE numeric values into various registers. This is clearly a very user-unfriendly way of creating sounds and music but, thanks to the ingenious programming of the BBC micro's OS, we have a much easier and more versatile method of controlling the sound output - through the BASIC SOUND and ENVELOPE commands.
Even a few casual experiments with these commands will reveal how complex and difficult they can sometimes be to control. The User Guide, excellent though it is, devotes only a total of 20 pages to the sound facilities: more information is required to get the best from the system. The problem here is exactly the same as the one we faced when we started to learn BASIC. The computer has a set way of operating and, in order to control it, we must give it instructions in its own terms. This means we need to know something about the properties of sound and how to convert this information into a program the computer can understand.
Sound is sometimes difficult to understand because we are dealing with something we cannot see. A sound is produced when an object is struck or rubbed or, in scientific terms, otherwise excited. This causes the object to vibrate which in turn causes the air to vibrate. These vibrations are sensed by the ear and we perceive them as sound.
Sound does not only travel through air, it travels through gases, liquids and solids. You can see sound waves passing through water by tapping the side of a rain barrel. However, in a vacuum, such as on the moon or in space, there is nothing for sound vibrations to travel through and such environments are totally silent. When you next watch a space movie and see a spaceship - or planet - blow up with an ear-shattering explosion, you know that the vibrations produced by such an explosion would have nothing to travel through and the spectacle would, in reality, be accompanied by silence. Don't let this deter you from including suitable sound effects in your games. Films are made for entertainment.
Musical instruments which produce sound by being struck include the drums, piano, gong and xylophone. Stringed instruments such as the violin produce sound by being rubbed with a bow. Brass instruments such as the trumpet and trombone are played by blowing and vibrating the lips: this excites the air inside the instrument, which vibrates at a pitch determined by the length of the brass tubing (the longer the tubing, the lower the note). The flute is played by blowing across the mouthpiece to excite the air column inside it. The same principle is at work when you blow across the mouth of a bottle. Instruments such as the oboe, clarinet and saxophone contain a reed which vibrates in response to vibrations from the lips. Clearly, unless we hook up the computer to some outboard equipment, we cannot create sound in this way.
Sound vibrations travel in a series of waves and different sounds produce different waveforms. If we play a sound through a microphone and feed it into an oscilloscope, we can see what its waveform looks like.
A sine wave is a pure tone which is usually only produced by a tuning fork or by electronic means. It is possible to produce many sounds by combining sine waves in the right proportions: this process is known as additive synthesis, because waves are added together, and it is used in some commercial synthesisers. Because of the large number of sine waves you often need to add, it is a costly and time-consuming process.
The following program will plot sine waves according to the amplitude (loudness) and frequency (pitch) you input. It demonstrates how frequency and amplitude affect the waveform.
10 REM PROGRAM 1.1
20 REM Sine Wave Plotter
30
40 MODE 4
50 REM Define Windows
60 VDU28,0,4,39,0
70 VDU24,0;0;1279;850;
80
90 REPEAT
100 INPUT"Frequency (1-10)",Freq
110 INPUT"Amplitute (50-400)",Amp
120 PROCSine
130 PRINT"Press SPACE to enter another
wave"'"'C' to clear screen, 'F' to fini
sh"
140 REPEAT
150 Key=GET
160 UNTIL Key=32 OR Key=67 OR Key=70
170 IF Key=67 THEN CLG
180 UNTIL Key=70
190 MODE7
200 END
210
220 DEF PROCSine
230 VDU29,0;450;
240 MOVE0,0
250 FOR Time=0 TO 1279 STEP 10
260 DRAWTime,Amp*SIN(RAD(Freq*Time))
270 NEXT Time
280 ENDPROC
The program houses a simple graph-plotting procedure between lines 250 and 270. Line 260, which we will extend later, performs the calculations. All the VDU calls are very well explained in the User Guide.
At line 150, GET returns the ASCII value of the input character (see the User Guide page 263).
Try inputting 1 for frequency and 50 for amplitude to begin with. If you increase the frequency you will see how it bunches the waves closer together. Frequency is normally measured in 'cycles per second' and the higher the frequency, the more cycles occur every second. Our frequency figures are scaled down for the program and you can assume an input of 1 represents a frequency of around 100 cycles per second. An increase in amplitude will make the wave taller without affecting the frequency: in other words, the volume will increase but the pitch will remain the same.
If you replace line 260 with:
260 DRAWTime,Amp*SIN(RAD(Freq*Time))+Amp*SIN(RAD(Freq*2*Time))
you are adding two sine waves together, and can see the effects of additive synthesis. Notice how the waveform changes. You can add more sine waves in line 260 by modifying the variable Freq in the expression:
Amp*SIN(RAD(Freq*Time))
and tagging it on to the line with a + as in the above example. If you modify the value of Amp, as in:
Amp*.5*SIN(RAD(Freq*Time))
you are reducing the amplitude, so its effect on the final wave will be reduced. These additions are known as harmonics and they are what makes each sound distinctive. Most sounds we hear in everyday life have quite complex waveforms and are made up from many sine waves of varying frequencies and amplitudes. More examples are given in the section about timbre.
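Putting these pieces together, a complete line 260 with a half-amplitude third harmonic added would read as follows (the choice of harmonic and weighting here is simply for illustration):
260 DRAWTime,Amp*SIN(RAD(Freq*Time))+Amp*.5*SIN(RAD(Freq*3*Time))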
Another form of synthesis, known as subtractive synthesis, takes a waveform and filters out certain harmonics. A tone control is a simple filter and blocks out the higher frequencies as you increase its effect. This method is more common than additive synthesis and is in general use in most synthesiser systems.
Before you start filtering, you need something to filter. A sine wave, consisting of only one frequency, would be of little use. The best waveforms are those which contain a lot of harmonics, which give you plenty of body to chip away from: most synthesisers offer triangular, square and sawtooth waveforms. The triangular wave is very like a sine wave but contains a few harmonics, the square wave sounds a little like a clarinet and the sawtooth wave produces a sound with reed-like qualities. Type and run this program:
10 FOR Pitch=1 TO 253 STEP 4
20 SOUND1,-15,Pitch,10
30 NEXT Pitch
This will play 64 notes, each a semitone apart: it covers the five octave range of the sound chip. Can you tell which type of waveform the sound chip is using to make the sounds? As you listen, you will notice that the lower notes sound quite rich and full but, as the notes rise, they seem more percussive and lose their warmth. The lower notes almost have a clarinet-like quality about them and you would be right in thinking that the sound chip produces a square wave. In reality, it is a distorted square wave and its waveform alters as the pitch alters. This is why you hear a change in tonal quality as the notes get higher.
Just as sound is caused by vibrations in the air, so the sound chip generates its sounds with electrical vibrations. Basically, it sets up a series of oscillations: the higher the pitch, the faster the oscillations. These oscillations are sent to the loudspeaker, which vibrates at the same frequency and produces a sound: it is not necessary to know exactly how it does this, but the result is, perhaps obviously, an electronic sound. We will see during the course of this book what restrictions this places on our programming.
In order for a sound to exist at all, it must have four parameters:
1) pitch
2) volume
3) duration
4) timbre
The sounds produced by acoustic instruments are actually very complex and change throughout their duration. We will look at these four aspects of sound and see how they relate to musical instruments.
The pitch of a note means how high or low it is on the musical scale, and the word frequency is often used synonymously with it. (Frequency is more properly an attribute of the waveform, in terms of how many times it vibrates or oscillates per second. The human ear can sense sounds with a frequency of between 20 and 20,000 cycles per second, scientifically referred to as hertz and abbreviated to Hz. The upper frequency limit drops as a person gets older, but no one should have any trouble hearing the range of the BBC sound chip.)
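If you want to relate frequencies in hertz to the pitch values used by the SOUND command, the following sketch makes the conversion. It assumes, as the pitch table in the User Guide suggests, that pitch value 53 is middle C (roughly 261.6 Hz) and that each step of 4 is one semitone; the variable names are mine, not part of any listing in this book.
10 REM Find the nearest SOUND pitch value for a frequency in Hz
20 REM Assumes pitch 53 = middle C (about 261.6 Hz), 4 steps per semitone
30 INPUT"Frequency in Hz",Hz
40 Pitch=53+INT(48*LN(Hz/261.6)/LN(2)+.5)
50 IF Pitch<0 OR Pitch>255 THEN PRINT"Outside the chip's range":END
60 PRINT"Nearest pitch value is ";Pitch
70 SOUND1,-15,Pitch,20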
A tune consists of a series of pitches which have a definite relation to each other. In western music, this is based on the scale we get from a piano, where each note is a semitone away from its neighbours: the previous program demonstrated this. The notes are grouped into sections, which we will look at later, to form scales such as C major, B minor, etc.
On a piano, the pitch of the notes is fixed. You can't, unless the piano is out of tune, play in the cracks. Even if you are tone deaf, as long as you hit the right keys you will produce pleasant music. Other instruments, such as those of the string and brass family, require more control over pitch and notes can be 'slurred' from one to the other. If this takes place over several notes it is known as a portamento and sounds like this:
10 FOR Pitch=53 TO 149
20 SOUND1,-15,Pitch,1
30 NEXT Pitch
This is a favourite sound easily created on most synthesisers. The same thing on a piano, harp or xylophone would sound something like this:
10 FOR Pitch=53 TO 149 STEP 4
20 SOUND1,-15,Pitch,1
30 NEXT Pitch
Here, we are playing in semitones and you can hear the discrete pitches: this is known as a glissando. Both effects are much used by jazz musicians and can add a human touch to synthesised music.
The BBC micro divides each semitone into four and, when we get down to this minute level of note division, the notes tend to blur into one another as this shows:
10 FOR Pitch=101 TO 149
20 SOUND1,-15,Pitch,10
30 NEXT Pitch
You can probably still hear the separate pitches, but they are generally too close together for normal western ears to appreciate musically. If you replace line 10 by:
10 FOR Pitch=1 TO 255
you will hear that the scale is uneven in parts, indicating that the pitches produced by the sound chip are not equally spaced. Oriental music uses pitches which are less than a semitone apart, which is why it often seems out of tune to westerners.
Volume is how loud or quiet a sound is and, at first, loudness as a quality of sound may seem rather simple and not as important as the others. It is not quite as straightforward as that.
Many factors affect the perceived volume of a sound. Reverberation, echo, vibrato and duration all tend to increase volume, as does the addition of harmonics. For example, a sound lasting 1/100th or even 1/10th of a second will not seem as loud as a sound lasting one second. For most purposes this will make little difference to our experiments and we can simply set volume levels as we require them but, as you will have heard from some of the previous program examples, the volume tends to alter with pitch. If you are writing a tune in two or three parts and set all the parts at the same volume level, you may find that, at certain points in the tune, some lines get lost behind others. This is a result both of the properties of sound and of the sound chip, and can only be overcome by altering the volume of individual lines and notes where required. You will find that generally it is not a serious problem.
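As a minimal sketch of the kind of adjustment involved (the pitches, volumes and durations here are chosen purely for illustration), the following plays a two-part chord twice, first with both channels at full volume and then with the lower part pulled back so that it no longer swamps the upper one:
10 REM Both parts at full volume
20 SOUND1,-15,53,40:SOUND2,-15,149,40
30 REM The same chord with the lower part quieter
40 SOUND1,-10,53,40:SOUND2,-15,149,40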
The loudness of a sound will vary during its production. For example, a piano, xylophone or any other percussive instrument produces a note which sounds immediately upon playing and then dies away. A violin takes just a fraction of a second before its note reaches full volume. Brass instruments sound with a sharp attack, even when played quietly, as an initial gust of breath is required to start the air in the tube vibrating. This variation in volume is called the 'loudness contour', or envelope of a sound, and plays an important part in determining instrument characteristics. Try this:
40 FOR Volume=-15 TO -1
50 SOUND1,Volume,53,1
60 NEXT Volume
It sounds like a percussive instrument being tapped smartly. Now try this:
10 FOR Volume=-1 TO -15 STEP -1
20 SOUND1,Volume,53,1
30 NEXT Volume
This sounds like a recording of an instrument being played backwards. It sounds unnatural, and it is, because most sounds don't happen that way: they don't work up to a crescendo and then stop. If you run both programs together, you will hear how the sound becomes more natural.
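In other words, with both fragments typed in together the complete program is:
10 FOR Volume=-1 TO -15 STEP -1
20 SOUND1,Volume,53,1
30 NEXT Volume
40 FOR Volume=-15 TO -1
50 SOUND1,Volume,53,1
60 NEXT Volume
The note now swells up and dies away in one continuous movement.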
The ability to produce backward sounds is useful in synthesis and we can make use of it on the BBC micro to create lots of interesting effects.
Rather than control the SOUND command with a FOR. . . NEXT loop, we can use the ENVELOPE command to create a predetermined set of volume characteristics like this:
10 ENVELOPE1,1,0,0,0,0,0,0,127,-1,-1,-1,126,1
20 FOR Pitch=53 TO 101
30 SOUND1,1,Pitch,10
40 NEXT Pitch
This creates a percussive envelope and produces a piano-like sound. Alter line 20 to:
20 FOR Pitch=149 TO 245 STEP 5
and notice how, because of the change in pitch, it sounds more like a xylophone. Change line 20 back, and alter line 10 to:
10 ENVELOPE1,1,0,0,0,0,0,0,3,-100,0,0,126,0
This gives us our backward sound and, if it did not cut off so sharply, it could form the start of a violin-type envelope. If you alter the Pitch values, notice how it loses its violin-like quality.
All these effects use the same sound generator and demonstrate how the ear can be deceived by clever control of envelope parameters. We will look at this more closely later on.
Again, the complexity of a note's duration can be deceptive: in order for a sound to exist at all, it needs some duration. As far as the BBC micro is concerned, this will not normally be below 1/20th of a second. This is the minimum time you are able to allot a note in the SOUND command, although the step intervals in the ENVELOPE command go up in steps of 1/100th of a second.
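The duration parameter of the SOUND command is measured in those same twentieths of a second, so, as a quick illustration:
10 SOUND1,-15,53,1:REM the shortest note - 1/20th of a second
20 SOUND1,-15,53,20:REM one second
30 SOUND1,-15,53,100:REM five seconds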
From a psychological point of view, it is interesting to note the difference in time perception between individuals. Time seems to pass more quickly or slowly according to the events surrounding the individual. A boring after-dinner speaker may think that he has had the floor for fifteen minutes when he has been talking for half an hour: his listeners may think that three quarters of an hour have passed. There seems to be little evidence to show that a good musical appreciation of pitch, volume and timbre will endow a person with a good sense of time, as timing sense is not normally dependent upon the ear.
Having a sense of timing plays a great part in the creative production of music. Consider: the only attributes of music a piano player has control over are volume and time. The timbre and pitch are determined by the instrument and the composer. A performance is judged, however subconsciously, upon accent, rhythm and phrasing - and of such things are great musicians made.
Coordination is often regarded as being of prime importance to a musician. The ability to perform accurately and at speed is at the root of a competent musical performance. A person may be quick and accurate, quick and inaccurate, slow and accurate or slow and inaccurate, all in varying degrees. There is a natural limit to the speed at which a musician can play, but this does not determine how good a musician is. Rather, the way a musician makes fine alterations in the timing of a piece will affect the performance.
Such movements and timings can be measured, but are well beyond the scope of this book. However, we can arrange a simple motility test which will be of use not only to musicians but to anyone wanting to develop quick reactions. Motility is a measurement of speed and accuracy in movement and can be measured by tapping a key or a pencil and recording the average number of taps made each second.
The following program does this and records the number of taps made in a five-second period.
10 REM PROGRAM 1.2
20 REM Motility Tester
30
40 ON ERROR GOTO 290
50
60 ENVELOPE1,11,16,4,8,2,1,1,100,0,0,-100,100,100
70 REM Turn off Auto Repeat
80 *FX11,0
90
100 REPEAT
110 Score=0
120 CLS
130 PRINTTAB(4,6)"Tap the RETURN key repeatedly"'" as quickly as possible and with"'" the minimum of movement."
140 Begin=GET
150 TIME=0
160
170 REPEAT
180 Tap=INKEY(0):IF Tap=13 Score=Score+1
190 UNTIL TIME>=500
200
210 PRINTTAB(16,10)"STOP"
220 SOUND1,1,100,20
230 PRINTTAB(6,12)"Your MOTILITY rating is"'Score/5;" taps per second"
240 PRINTTAB(8,15)"Another try (Y/N)?"
250 REPEAT:Key=GET AND &DF:UNTIL Key=89 OR Key=78
260 UNTIL Key=78
270
280 REM Turn On Auto Repeat
290 *FX12,0
300 END
The important part of the program lies between lines 170 and 190 which increment the variable, Score, when the user presses RETURN. The rest of the program is well REMed. For the curious and the impatient, the envelope at line 60 is examined in detail in Chapter 6.
Practice will generally increase your motility rating only slightly. An average for normal adults will be around 8.5 taps per second, rising to 9.3 after two or three weeks' practice. Men tend to average half a tap per second faster than women.
For such an exercise to be valid as a test of musical ability, the test should be made on a movement similar to the one used during performance. A pianist, therefore, should be tested on a piano key and a violinist on a violin string. On simple tapping tests such as the one provided by the program, the highest speed recorded is about 12 taps per second, although rates as high as 15 have been reported.
Timbre (pronounced 'tam-burr') or tone colour is that quality of a sound which enables us to distinguish between two sound sources producing sounds at the same pitch. It is usually very much affected by pitch and the sound envelope; for example, we know that the low notes of a clarinet have a sound quality different to that of the high notes. This is evident in the BBC micro's sound generator, too, as we have already heard.
Tone colour is a result of the combination of harmonics in a sound. We saw the effects of adding sine waves in the Sine Wave Plotter program. Load the program again and alter line 260 to:
260 DRAWTime,Amp*SIN(RAD(Freq*Time))+Amp*1/2*SIN(RAD(Freq*2*Time))+Amp*1/3*SIN(RAD(Freq*3*Time))+Amp*1/4*SIN(RAD(Freq*4*Time))+Amp*1/5*SIN(RAD(Freq*5*Time))
This adds the second, third, fourth and fifth harmonics to the fundamental, or main, tone, which is why it's so long. The fundamental is usually the strongest, that is loudest, frequency and gives the note its pitch. If you run the program, you will see that it produces a waveform like a sawtooth, from which it gets its name. If you add more harmonics in the same proportion, you will iron out the bumps and produce a better-looking sawtooth.
Alter line 260 again:
260 DRAWTime,Amp*SIN(RAD(Freq*Time))+Amp*1/3*SIN(RAD(Freq*3*Time))+Amp*1/5*SIN(RAD(Freq*5*Time))+Amp*1/7*SIN(RAD(Freq*7*Time))
This draws a square waveform, similar to the one produced by the BBC micro's sound chip. This time we are adding the odd harmonics and if you add more you will get a squarer wave.
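If you would rather not keep typing ever longer versions of line 260, one possible modification (it is not part of the original listing, and Harms is a new variable) is to let a loop sum the harmonics for you:
115 INPUT"Harmonics (1-10)",Harms
250 FOR Time=0 TO 1279 STEP 10
252 Y=0
254 FOR Harm=1 TO Harms
256 Y=Y+Amp/Harm*SIN(RAD(Freq*Harm*Time))
258 NEXT Harm
260 DRAWTime,Y
270 NEXT Time
With every harmonic included this approaches the sawtooth; change line 254 to FOR Harm=1 TO Harms STEP 2 and only the odd harmonics are summed, giving the square-wave shape again.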
You can experiment by adding various other harmonics, even by altering the SIN function to COS, and you will produce some quite complex waveforms. If you get a more detailed book about sound synthesis and find harmonic analyses of instrument waveforms, you will be able to work out which sine waves are required to produce the sound.
As far as the BBC micro is concerned, we have no control over the waveform but, by clever use of the SOUND and ENVELOPE commands, we can trick the ear into thinking that what it hears is something other than a dressed-up distorted square wave. This is because the ear tends to take more notice of the envelope of a sound than the actual timbre. There are limits, however: we will be testing and exploring these throughout the book.