Audio Technology
Introduction
(Basic Concepts of Digital Audio)
Iwan Sonjaya, MT
0816 956 829
iwankuliah@gmail.com
What is sound?
Sound is a physical phenomenon caused by the vibration of matter
(e.g., a violin string). As the matter vibrates, pressure variations are
created in the surrounding air. These pressure waves propagate through the
air, and when a wave reaches the human eardrum, a sound is heard.
The ear receives one-dimensional (1-D) waves.
Sound
• A physical phenomenon produced by the vibration of an object
• The vibration of an object in the form of a signal
The waveform repeats at regular intervals, or periods. A sound
with a recognizable periodicity is called music; non-periodic
sounds are called noise.
A sound's frequency is the reciprocal of its period.
Frequency Ranges
The frequency range is divided into:
Infrasonic: 0 Hz to 20 Hz
Audio-sonic: 20 Hz to 20 kHz (the range of human hearing)
Ultrasonic: 20 kHz to 1 GHz
Hypersonic: 1 GHz to 10 THz
• In multimedia we are mainly concerned with the audio-sonic range.
Amplitude
The amplitude of a sound is the displacement of the air pressure from its quiescent state,
which humans perceive subjectively as loudness or volume. Sound pressure levels are
measured in decibels (dB).
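As a sketch of how the dB scale works, the standard sound-pressure-level formula compares a pressure to the 20 µPa threshold of hearing (the function name and sample values here are illustrative, not from the text):

```python
import math

def spl_db(pressure_pa, reference_pa=20e-6):
    """Sound pressure level in dB, relative to the 20 uPa threshold of hearing."""
    return 20 * math.log10(pressure_pa / reference_pa)

print(round(spl_db(20e-6)))  # 0 dB: the threshold of hearing itself
print(round(spl_db(1.0)))    # 94 dB: 1 Pa, a common calibration level
```

Every factor-of-10 increase in pressure adds 20 dB, which is why the scale compresses the enormous range of audible pressures into manageable numbers.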
Audio Representation on Computers
A computer measures the amplitude of the waveform at regular time intervals and
generates a series of sample values. The mechanism that converts an audio signal into
digital samples is the analog-to-digital converter (ADC). A digital-to-analog converter
(DAC) performs the opposite conversion.
Audio Sampling
1. Determine the number of samples per second;
2. At each time interval, measure the amplitude;
3. Store the sample rate and the individual amplitudes.
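The three steps above can be sketched in Python. The 440 Hz sine wave and 8 kHz sample rate are arbitrary example values, not from the text:

```python
import math

def sample_wave(freq_hz, duration_s, sample_rate, amplitude=1.0):
    """Step 1: fix the number of samples per second (sample_rate).
    Step 2: measure the amplitude at each time interval.
    Step 3: return the rate together with the list of amplitudes."""
    n_samples = int(sample_rate * duration_s)
    samples = [amplitude * math.sin(2 * math.pi * freq_hz * t / sample_rate)
               for t in range(n_samples)]
    return sample_rate, samples

rate, samples = sample_wave(freq_hz=440, duration_s=0.01, sample_rate=8000)
print(len(samples))  # 80 samples for 10 ms at 8 kHz
```

A real ADC measures a physical voltage at each tick; here the sine function stands in for the analog signal being measured.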
Quantization: assigning each sample a discrete value. The resolution of a sample value depends
on the number of bits used in measuring the height of the waveform.
A waveform sampled with 3-bit quantization takes only eight possible
values: 0.75, 0.50, 0.25, 0.00, -0.25, -0.50, -0.75, and -1.00. An 8-bit quantization
yields 256 possible values; 16 bits yield 65,536 values.
Quantization introduces noise (quantization error).
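A minimal sketch of uniform quantization, reproducing the eight 3-bit levels listed above (the helper name is illustrative):

```python
def quantize(sample, bits):
    """Map a sample in [-1.0, 1.0) onto one of 2**bits discrete levels,
    mimicking an ADC's limited resolution."""
    levels = 2 ** bits
    step = 2.0 / levels             # width of one quantization interval
    index = int((sample + 1.0) / step)
    index = min(index, levels - 1)  # clamp the +1.0 edge case
    return -1.0 + index * step

# With 3 bits there are only 8 possible output values:
values = sorted({quantize(x / 100.0, 3) for x in range(-100, 100)})
print(values)  # [-1.0, -0.75, -0.5, -0.25, 0.0, 0.25, 0.5, 0.75]
```

The difference between `sample` and `quantize(sample, bits)` is the quantization error; it is this error, spread across the recording, that is heard as quantization noise.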
File size versus quality
1: Sampling at higher rates more accurately captures high-frequency content.
2: Audio resolution determines the accuracy with which a sound can be digitized.
3: Using more bits yields a recording that sounds more like its original.
4: High sample rate with high resolution = large files.
Here are the formulas for determining the size (in bytes) of a digital recording:
For monophonic recording:
sampling rate * duration of recording in seconds * bit resolution / 8
For stereo recording:
sampling rate * duration of recording in seconds * bit resolution / 8 * 2
Thus the formula for a 10-second mono recording at 22.05 kHz, 8-bit resolution would be:
22050 * 10 * 8/8 * 1 = 220,500 bytes
A ten-second stereo recording at 44.1 kHz, 16-bit resolution would be:
44100 * 10 * 16/8 * 2 = 1,764,000 bytes
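The two worked examples can be reproduced with a small helper (the function name is illustrative):

```python
def recording_size_bytes(sample_rate, seconds, bit_depth, channels=1):
    """Size in bytes = rate * duration * (bits / 8) * channels."""
    return sample_rate * seconds * bit_depth // 8 * channels

# The two worked examples from the text:
print(recording_size_bytes(22050, 10, 8, channels=1))   # 220500
print(recording_size_bytes(44100, 10, 16, channels=2))  # 1764000
```

Note how the stereo/16-bit/44.1 kHz recording is eight times larger than the mono one: doubling each of rate, depth, and channel count multiplies the size.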
MIDI (Musical Instrument Digital Interface) versus digital audio
A MIDI interface is a small device that plugs directly into the computer's serial port and
allows the transmission of music signals. Note that MIDI does not produce
sound; it only produces the parameters that are sent to a device that
translates those numbers into sound.
Electrical Specification:
1: Asynchronous serial interface
2: 8 data bits, 1 start bit, and 1 stop bit
3: Logic 0 is current ON
4: Rise and fall times <= 2 microseconds
5: Shielded twisted-pair cable, 50 feet maximum length
The data format specifies the instrument, the beginning and end of a note,
frequency, and sound volume. These data are grouped into MIDI messages.
A message contains one to three bytes:
1: The first byte is a status byte, used to address the message to a specific channel.
2: The remaining bytes are data bytes;
the number of data bytes depends on the status byte.
The MIDI standard specifies 16 channels and identifies 128 instruments. For
example, 0 is the piano, 40 the violin, 73 the flute, etc.
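As a sketch of the status-byte/data-byte layout described above, here is how a three-byte Note On message could be assembled (the helper is illustrative; 0x9 in the high nibble is the standard Note On status):

```python
def note_on(channel, note, velocity):
    """Build a 3-byte MIDI Note On message:
    status byte = 0x9 high nibble (Note On) plus a 4-bit channel number,
    followed by two data bytes (note number and velocity, each 0-127)."""
    assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
    return bytes([0x90 | channel, note, velocity])

msg = note_on(channel=0, note=60, velocity=100)  # middle C on channel 1
print(msg.hex())  # '903c64'
```

The status byte carries both the message type and the channel, which is why a single cable can address 16 channels.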
Q1: Does one channel correspond to one instrument?
If an instrument is defined as a MIDI device, then typically yes: one channel will
send information to one of the instruments in the MIDI chain. If, on the other
hand, an instrument is defined as the sound patch or voice being played (i.e., piano,
tuba, violin), then yes: one MIDI channel carries voice-message information for
a single patch only.
Q2: Can a user define a new instrument?
No. MIDI contains only control information. The instrument heard
depends entirely on the MIDI device used to decode the stream.
Q3: Can physical modeling be included?
Q4: What goes through the MIDI cable?
[Figure: voltage vs. time, a serial bit stream alternating between lo and hi levels]
1: Timed pulses of electricity – 31,250 per second
Q5: What are the MIDI message types?
MIDI messages are divided into two different types:
1: Channel messages: channel messages go only to specified devices.
A - Channel voice messages describe music by defining pitch, amplitude, duration, and other sound qualities.
B - Channel mode messages determine the way a receiving MIDI device responds to channel voice messages.
2: System messages: system messages are not channel-specific and go to all devices in the MIDI system.
MIDI versus digital audio
In contrast to MIDI data, digital audio data are the actual representation of sound, stored in the form of thousands of individual samples (i.e. MIDI data are device dependent; digital audio are not).
MIDI data have several advantages:
1. MIDI data are much more compact than digital audio files, and the size of a MIDI file is completely independent of playback quality. In general, MIDI files are 200 to 1,000 times smaller than digital audio files.
2. Because MIDI files are small, they don't take up as much RAM, disk space, or CPU resources.
3. MIDI data are completely editable.
Now for the disadvantages:
1. Because MIDI data represent instructions to musical instruments rather than sound itself, you can be certain that playback will be accurate only if the MIDI playback device is identical to the device used for production.
2. MIDI cannot easily be used to play back spoken dialog.
MIDI Devices:
1. Sound generator: The principal purpose of the generator is to produce an audio signal that becomes sound when fed into a loudspeaker.
2. Microprocessor: The microprocessor communicates with the keyboard to know what notes the musician is playing, and with the control panel to know what commands the musician wants to send.
3. Keyboard: The sound intensity of a tone depends on the speed and acceleration of the key press.
4. Control panel: The control panel controls those functions that are not directly concerned with notes and their durations.