Lesson 6 - AM synthesis and playback
Transcript
00:09 - Lesson number 6: AM synthesis and playback. In this tutorial I am going to present amplitude modulation synthesis, or “AM” for short. We will use this technique to process a sound file stored on our PC, which we will learn how to read into Pure Data. AM is based on a sound signal called the “carrier”, which might be a sinusoidal wave or, as we will see, a more complex sound such as a recording of an instrument or a voice. The carrier is modulated in its volume, or amplitude. What does it mean to modulate? 00:49 - It means that we need a second sound signal, called the “modulator”, usually a sinusoidal wave, whose shape drives the movement of the volume. In other words, the volume, or amplitude, of the carrier signal is varied in proportion to that of the modulator. It is as if someone were moving the volume up and down several times per second. How many times? The number defined by the frequency of the modulator signal. To produce AM synthesis the frequency of the modulator must be greater than 20 Hz, because below that threshold we would not be creating any synthesis. Instead, we would only be applying an audio effect to the carrier signal, known as “tremolo”. The difference is that AM, like other kinds of synthesis, modifies the spectrum (the frequency content) of the carrier, while a tremolo only changes its amplitude.

01:47 - I won’t delve too much into the theory behind this, due to time constraints and because this tutorial is meant to be practical. Let’s make a brand new patch and create two oscillators, one for the carrier and one for the modulator. 02:38 - Let’s add a signal multiplier “*~” and connect both oscillators to it, the carrier to the left inlet and the modulator to the right inlet. 02:50 - Now we have the carrier signal modulated at the modulator’s frequency. 02:59 - So if we connected a “dac~” we would seem to be done. Actually that is not true, because there are a couple of things we need to take care of. The first thing you need to know is that a waveform such as the one we are using in this patch oscillates at a certain rate, which corresponds to its frequency, inside a range that goes from -1 to +1.

03:23 - Why am I saying this? Because although what we implemented does modulate the amplitude (look at the multiplier), it is not AM yet. What we just did is actually called “ring modulation”, or “RM”, and the effect it produces on the sound is slightly different. To create true amplitude modulation we need to rescale the range of our modulator from [-1, +1] to [0, 1]. Doing this just requires some very basic math. First we add 1 to the modulator signal in order to bring the range into the positive domain. I used the object “+~” because we are operating on a signal, not on plain numbers. 04:12 - Okay, now the range of our modulator goes from 0 to 2 and there is no negative part left. If we divide it by 2, with the object “/~”, we bring the signal down into the range 0 to 1. That is what we need. Let’s connect this to the multiplier, and this is the algorithm that implements AM synthesis. There are still some interesting things I want to show you. Earlier I said that AM particularly suits complex sounds, but we already know that “osc~” produces the most basic sound, a sinusoidal wave. So let’s get rid of the oscillator we used for the carrier.
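As a rough sketch of what the patch looks like at this point, here is a minimal AM patch written out in Pd's plain-text file format (save it as a .pd file and open it in Pure Data). The carrier frequency of 440 Hz, the modulator frequency of 30 Hz, the object positions, and the “dac~” at the bottom are placeholder choices of mine, not values from the video. Removing the “+~ 1” and “/~ 2” objects and connecting the modulator straight into the right inlet of “*~” would give you the ring-modulation version instead.

    #N canvas 100 100 480 360 12;
    #X text 30 10 AM: the carrier is multiplied by a modulator rescaled to 0..1;
    #X obj 30 50 osc~ 440;
    #X obj 240 50 osc~ 30;
    #X obj 240 100 +~ 1;
    #X obj 240 150 /~ 2;
    #X obj 30 220 *~;
    #X obj 30 290 dac~;
    #X connect 1 0 5 0;
    #X connect 2 0 3 0;
    #X connect 3 0 4 0;
    #X connect 4 0 5 1;
    #X connect 5 0 6 0;
    #X connect 5 0 6 1;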
Another solution would be to use an oscillator that produces a more complex waveform, such as “phasor~”, which produces a sawtooth wave. 05:08 - Let’s check the help file for a moment. 05:17 - If we set a frequency of 5,000 Hz here, we can clearly see that this waveform is different from the ones we have met so far. Nevertheless, I have the feeling that this waveform is still not complex enough. I just wanted to show you this object because you might want to use it in the future, but for the moment we don’t need it. What I want to do now, rather, is to show you how we can read a sound file stored on the PC into Pure Data. The sound file I provide you, containing a voice recording, definitely has a more complex waveform and thus will produce more appealing results. In order to read a sound file we need to introduce a new object, “readsf~”.

06:06 - We also add an argument: 2. What does it mean? Two is the number of channels the sound file has: it is stereo, so it has two channels, one left and one right. Let’s connect a second multiplier, this time with a fixed argument of 0.5, to the outlet of the multiplier that is connected to our oscillator. 06:35 - We must also create a “dac~” to go out to our sound card. 06:44 - Now we connect the left-most outlet of “readsf~” to the topmost multiplier, and we connect the multiplier at the bottom to the left inlet of the “dac~”. We copy and paste this chain and connect it in the same way, as you can see. 07:01 - Did you understand what I did? We said that our sound file has two channels. This means that we need to run AM on both channels, left and right. 07:13 - Since the amplitudes could add up and we might get distortion, I put another multiplier at the end to halve the volume of each channel.

We are almost done now; we just need to figure out how to tell “readsf~” which file to read. This can be done in a few steps. First, let’s create two messages and connect both to “readsf~”. 07:47 - In the first one I type “0”: we will need this to stop the playback. By the way, don’t forget to check the help file of “readsf~”. 08:00 - In the other message we type “open $1, 1”. What does it mean? Open a file located somewhere; I used the “$” symbol because the path itself is a variable. After that has been done, comma, start playing it: 1. 08:26 - Let’s also create a “print” object and connect the latter message to it. I’ll explain later why we are doing this. The last step is to get the path of the file I want to read. To do this I need two objects: a “bang” and an “openpanel”. 08:56 - What “openpanel” does is open a window that lets us choose a sound file stored on the hard disk. We need the “bang” to trigger “openpanel”, which will pass the path of the chosen file as a variable to the “open” message. 09:14 - Let’s try it. 09:39 - Very nice, it works!

We still have one last issue to figure out. As you have noticed, the file plays only once, but most likely we would prefer to create a loop so that we don’t have to select the file every time. 09:54 - Unfortunately there is no message we can send to “readsf~” to turn looping on, so we need to use a trick. 10:05 - The right-most outlet of “readsf~” sends out a “bang” when it finishes playing the file. Let’s connect a bang to it to check. 10:41 - Here we go. This means that I can create a message that tells “readsf~” to open the same file at the same path again and play it. That’s actually very easy to do.
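Here is a sketch, again in Pd's text file format, of roughly how the file-reading version of the patch could look. The structure (one “*~” per channel fed by “readsf~ 2”, a shared modulator chain, the “*~ 0.5” stages, and the open/stop messages with “print”) follows what is described above, but the 100 Hz modulator frequency, the object positions, the comment texts, and the default parameters of the GUI bang are my own placeholders.

    #N canvas 100 100 560 460 12;
    #X obj 30 30 bng 15 250 50 0 empty empty empty 17 7 0 10 -262144 -1 -1;
    #X obj 30 70 openpanel;
    #X msg 30 110 open \$1 \, 1;
    #X msg 200 110 0;
    #X obj 200 170 print;
    #X obj 30 170 readsf~ 2;
    #X obj 400 30 osc~ 100;
    #X obj 400 70 +~ 1;
    #X obj 400 110 /~ 2;
    #X obj 30 240 *~;
    #X obj 150 240 *~;
    #X obj 30 290 *~ 0.5;
    #X obj 150 290 *~ 0.5;
    #X obj 30 360 dac~;
    #X text 30 400 click the bang to choose a stereo sound file;
    #X text 390 150 modulator rescaled to the range 0..1;
    #X connect 0 0 1 0;
    #X connect 1 0 2 0;
    #X connect 2 0 5 0;
    #X connect 2 0 4 0;
    #X connect 3 0 5 0;
    #X connect 5 0 9 0;
    #X connect 5 1 10 0;
    #X connect 6 0 7 0;
    #X connect 7 0 8 0;
    #X connect 8 0 9 1;
    #X connect 8 0 10 1;
    #X connect 9 0 11 0;
    #X connect 10 0 12 0;
    #X connect 11 0 13 0;
    #X connect 12 0 13 1;

Clicking the bang opens the file dialog, the resulting path replaces $1 in the “open $1, 1” message, and the same message is echoed to “print” so the path shows up in the log.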
Let’s take a message, type “open” followed by a space, and then copy and paste into it the exact path of the file we want to read. Where can we find this path? 11:15 - That’s the reason why we used “print”: if you look in the “Log” window you will see the path of the file we read a few minutes ago. 11:26 - Let’s copy and paste it inside the message and add a comma followed by “1”. 11:56 - Now we are really done! Let’s try it again.

12:28 - Very good, it works as expected! In this tutorial we not only implemented amplitude modulation synthesis, we also built an easy and effective system to read and loop sound files, which you can reuse whenever you need it. The next tutorial is the last one of the first part, and we will see how to implement frequency modulation synthesis.
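For reference, here is a minimal sketch of just the looping trick, with the AM part left out for brevity. The path /path/to/voice.wav is a placeholder: you would paste the actual path printed in the log, exactly as described above. The end-of-file bang from the rightmost outlet of “readsf~” re-triggers the same “open …, 1” message, so the file starts again as soon as it finishes; the “0” message is there to stop playback.

    #N canvas 100 100 460 280 12;
    #X msg 30 30 open /path/to/voice.wav \, 1;
    #X msg 280 30 0;
    #X obj 30 100 readsf~ 2;
    #X obj 30 170 dac~;
    #X text 30 210 the rightmost outlet bangs at the end of the file and re-triggers the open message;
    #X connect 0 0 2 0;
    #X connect 1 0 2 0;
    #X connect 2 0 3 0;
    #X connect 2 1 3 1;
    #X connect 2 2 0 0;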
Example Patch: