If a recording of someone’s very rare voice is representable by mp4 or whatever, could monkeys typing out code randomly exactly reproduce their exact timbre + tone + overall sound?

I don’t get how we can get rocks to think + exactly transcribe reality in the ways they do!

Edit: I don’t get how audio can be fossilized/reified into plaintext

Short answer: to record a sound, take samples of the sound “really really often” and store them as a sequence of numbers. Then to play the sound, create an electrical signal by converting those digital numbers to a voltage “really really often”, then smooth it, and send it to a speaker.

Slightly longer answer: you can actually take a class on this, typically called Digital Signal Processing, so I’m skipping over a *lot* of details. Like a lot a lot. Like hundreds of pages of dense mathematics a lot.

First, you need something to convert the sound (pressure variation) into an electrical signal. Basically, you want the electrical signal to look like how the audio sounds, but bigger and in units of voltage. You basically need a microphone.

So as humans, the range of pitches of sounds we can hear is limited. We typically classify sounds by frequency, or how often the sound wave “goes back and forth”. We can think of only sine waves for simplicity because any wave can be broken up into sine waves of different frequencies and offsets. (This is not a trivial assertion, and there are some caveats. Honestly, this warrants its own class.)

So each sine wave has a frequency, i.e. how many times per second the wave oscillates (“goes back and forth”).

I can *guarantee* that you as a human cannot hear any pitch with a frequency higher than 20,000 Hz. It’s not important to memorize that number if you don’t intend to do technical audio stuff; it’s just important to know that the number exists.

So if I recorded any information above that frequency, it would be a waste of storage. So let’s cap the frequency that gets recorded at something. The listener literally cannot tell the difference.

Then, since we have a maximum frequency, it turns out that, once you do the math, you only need to sample at a frequency of exactly *twice* the maximum you expect to find. So for an audio track, 2 × 20,000 Hz = 40,000 times *per second* that we sample the sound. It is typically a bit higher for various technical reasons, hence why 44,100 Hz and 48,000 Hz sample frequencies are common.

So if you want to record exactly 69 seconds of audio, you need 69 seconds × 44,100 [samples/second] = 3,042,900 samples. Assuming space is not at a premium and you store the file with zero compression, each sample is stored as a number in your computer’s memory. The samples need to be stored in order.
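The arithmetic above is simple enough to sketch in a few lines of Python (the 69-second duration is just the example from the text):

```python
# Nyquist criterion: sample at (at least) twice the highest frequency to keep.
max_audible_hz = 20_000             # rough upper limit of human hearing
nyquist_rate = 2 * max_audible_hz   # minimum sample rate in samples/second

sample_rate = 44_100                # a common real-world rate, a bit above the minimum
duration_s = 69                     # length of the recording in seconds

num_samples = duration_s * sample_rate
print(nyquist_rate)   # 40000
print(num_samples)    # 3042900
```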

To reproduce the sound in the real world, we feed the numbers, in order, at the *same frequency* (the sample frequency) we recorded them at, into a device that works as follows: for each number it receives, the device outputs a voltage that is proportional to the number it is fed, until the next number comes in. This is called a Digital-to-Analog Converter (DAC).

Now at this point you do have a sound, but it generally has wasteful high-frequency content that can disrupt other devices. So it needs to get smoothed out with a filter. Send this voltage to your speakers (to convert it to pressure variations that vibrate your ears, which convert the signal to an electrical signal that is sent to your brain) and you’ve got sound.
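That “hold the voltage until the next number comes in” behavior is called a zero-order hold. A minimal sketch (the sample values here are made up for illustration):

```python
def zero_order_hold(samples, repeat):
    """Hold each sample value for `repeat` output steps, like a DAC
    holding its output voltage until the next number arrives."""
    held = []
    for s in samples:
        held.extend([s] * repeat)
    return held

# Each input sample becomes a flat "stair step" in the output.
stairs = zero_order_hold([0.0, 1.0, -0.5], repeat=3)
print(stairs)  # [0.0, 0.0, 0.0, 1.0, 1.0, 1.0, -0.5, -0.5, -0.5]
```

The staircase shape is exactly the high-frequency content that the smoothing filter afterwards rounds off.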

Easy peazy, hundreds of pages of calculus squeezy!

Yes, but it is astronomically unlikely to happen before you or the monkeys die.

If you have any further questions about audio signal processing, I would be literally thrilled to answer them.

When you talk about a sample, what does that actually mean? Like I recognize that the frequency of oscillations will tell me the pitch of something, but how does that actually translate to a chunk of data that is useful?

You mention a sample being stored as a number, which makes sense, but how is that number utilized? Again assuming uncompressed, if my sample “value” comes up as 420, does that include all of the necessary components of that sound bite in a 1/44100th of a second? How would a sample at value 421 compare? Is this like a RGB type situation where you’d have multiple values corresponding to different attributes of the sample (amplitude, frequencies, and I’m sure other things)? Is a single sample actually intelligible in isolation?

First, the sound in the real world has to be converted to a fluctuating voltage. Then, this voltage signal needs to be converted to a sequence of numbers.

Here’s a diagram of the relationship between a voltage signal and its samples:

The blue continuous curve is the sine wave, and the red stems are the samples.

A sample is the value [1] of the signal at a specific time. So the samples of this wave were chosen by reading the signal’s value every so often.
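In code, “reading the signal’s value every so often” is just evaluating the waveform at multiples of the sample period. A sketch using a sine wave (the 440 Hz frequency is my choice of example, not from the text):

```python
import math

SAMPLE_RATE = 44_100   # samples per second
FREQ = 440.0           # sine frequency in Hz, chosen as an example

def sample_sine(num_samples):
    """Sample sin(2*pi*f*t) at t = n / SAMPLE_RATE for n = 0, 1, 2, ..."""
    return [math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE)
            for n in range(num_samples)]

x = sample_sine(100)
print(len(x))   # 100 samples, covering 100/44100 seconds of sound
```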

One of the central results of Fourier Analysis is that frequency information *determines* the time signal, and *vice versa* [2]. If you have the time signal, you have its frequency response; you just gotta run it through a Fourier Transform. Similarly, if you have the frequencies that made up the signal, you have the time signal; you just gotta run it through an inverse Fourier Transform. This is not obvious.

Frequency really comes into play in the ADC and DAC processes because we know ahead of time that a maximum useful frequency exists. It is not trivial to prove this, but one of the results of Fourier Analysis is that you can only represent a signal with a finite number of frequencies if there is a maximum frequency above which there is no signal information. Otherwise, a literally infinite number of numbers, i.e. an infinite sequence, would be required to recover the signal. [2]
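The “samples determine frequencies and vice versa” claim can be demonstrated with a naive discrete Fourier transform and its inverse. This is the textbook O(N²) version, not the FFT a real library would use, but the round trip is the point:

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform: samples -> frequency bins."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT: frequency bins -> original samples."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

x = [0.0, 1.0, 0.0, -1.0]        # a tiny test signal
roundtrip = idft(dft(x))
print([round(v.real, 6) for v in roundtrip])  # the original samples, up to rounding error
```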

So for sampling and representing signals, the importance of frequency is really the fact that a maximum frequency exists, which allows our math to stop at some point. Frequency also happens to be useful as a tool for analysis, synthesis, and processing of signals, but that’s for another day.

The number tells the DAC how big a voltage needs to be sent to the speaker at a given time. I run through an example below.

The value 420 for a sample is meaningless without specifying the range that samples live in. Typically, we either choose the range -1 to 1 for floating-point calculations, or -2^(n-1) to 2^(n-1) - 1 when using n-bit integer math [7]. If designed correctly, a sample that’s outside the range will be “clipped” to the minimum or maximum, whichever is closer.

However, once we specify a digital range for digital signals to “live in”, if the signal value is within range, then yes, it does in fact contain all the necessary components [6] for that sound bite in a 1/44100th of a second.

As an example [3], let’s say that the 69th sample has a value of 0.420, or x[69] = 0.420. For simplicity, assume that all digital signals can only take values between Dmin = -1 and Dmax = 1 for the rest of this comment. Now, let’s assume that the DAC can output a maximum voltage of Vmax = 5V and a minimum voltage of Vmin = -7V [4]. Furthermore, let’s assume that the relationship between the digital value and the output voltage is exactly linear, and the sample rate is 44100 Hz. Then, (69/44100) seconds after the audio begins, regardless of what happened in the past, the DAC will be commanded to output a voltage Vout (calculated below) for a duration of (1/44100) seconds. After that, the number specified by x[70] will command the DAC to spit out a new voltage for the next (1/44100) seconds.

To calculate Vout, we need to fill in the equation of a line.

Vout(x) = (Vmax - Vmin) / (Dmax - Dmin) × (x - Dmin) + Vmin

Vout(x) = (5V - (-7V)) / (1 - (-1)) × (x - (-1)) + (-7V)

Vout(x) = 6(x + 1) - 7 [V]

Vout(x) = 6x + 6 - 7 [V]

Vout(x) = 6x - 1 [V]

As a check,

Vout(Dmin) = Vout(-1) = 6×(-1) - 1 = -7V = Vmin ✓

Vout(Dmax) = Vout(1) = (6×1) - 1 = 5V = Vmax ✓

At this point, with respect to this DAC I have “designed”, I can always convert from a digital number to an output voltage. If x > 1 for some reason, we output Vmax. If x < -1 for some reason, we output Vmin. Otherwise, we plug the value into the line equation we just fitted. The DAC does this for us 44100 times per second.

For the sample x[69]=0.420:

Vout(x[69]) = 6•x[69] - 1 [V] = 6×0.420 - 1 = 1.520V.

A sample value of 0.421 would yield Vout = 1.526V, a difference of 6mV from the previous calculation.
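The whole worked example can be checked in a few lines, using the same made-up DAC from above (digital range [-1, 1], output range [-7V, +5V]):

```python
D_MIN, D_MAX = -1.0, 1.0    # digital range
V_MIN, V_MAX = -7.0, 5.0    # DAC output range in volts (the "weird" example range)

def vout(x):
    """Linear digital-to-voltage mapping, with clipping at the rails."""
    x = max(D_MIN, min(D_MAX, x))  # clip out-of-range samples first
    return (V_MAX - V_MIN) / (D_MAX - D_MIN) * (x - D_MIN) + V_MIN

print(vout(-1.0))              # -7.0 -> Vmin, matching the check above
print(vout(1.0))               # 5.0  -> Vmax, matching the check above
print(round(vout(0.420), 3))   # 1.52 V, the sample from the example
print(round(vout(0.421) - vout(0.420), 6))  # 0.006 V = the 6 mV difference
```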

And how does changing a sample from 0.420 to 0.421 affect how it’s going to sound? Well, if that’s the only difference, not much. They would sound practically (but not theoretically) identical. However, if you compare two otherwise identical tracks except that one is rescaled by a factor of 1.001, then the rescaled track will be very slightly louder. How slight really depends on your speaker system.

I have used a linear relationship because it is the ideal case [1] and keeps the math simple.

However, as long as the relationship between the digital value and the output voltage is monotonic (only ever goes up or only ever goes down), a designer can compensate for a nonlinear relationship. What kinds of nonlinearities are present in the ADC and DAC (besides any discussed previously) differ by the actual architecture of the ADC or DAC.

Nope. R, G, and B can be adjusted independently, whereas the samples are mapped [5] one-to-one with frequencies. Said differently: you cannot adjust sample values and frequency response independently. Said another way: samples carry the same information as the frequencies. Changing one automatically changes the other.

Nope. Practically, your speaker system might emit a very quiet “pop”, but that pop is really because the system is being asked to quickly change from “no sound” to “some sound” a lot faster than is natural.

Hope this helps. Don’t hesitate to ask more questions 😊.

[1] Actually, it is ideally *proportional* to the value of the sample, what is termed a (non-dynamic) linear relationship, which is the best you can get with DSP because digital samples have no units! In real life, it could be some non-linear relationship with the voltage signal, especially if the device sucks.

[2] Infinite sequences are perfectly acceptable for analysis and design purposes, but to actually crunch numbers and put DSP into practice, we need to work with finite memory.

[3] Sample indices typically start at 0 and must be integers.

[4] Typically, you’ll see either a range of [0, something] volts or [-something, +something] volts; however, to expose some of the details I chose a “weird” range.

[5] If you’ve taken linear algebra: the way computers actually do the Fourier Transform, i.e. transforming a set of samples into its frequencies, is by baking the samples into a tall matrix, then multiplying the sample matrix by an FFT matrix to get a new matrix representing the weights of the frequencies you need to add to get back the original signal. The FFT transformation matrix is *invertible*, meaning that there exists a unique matrix that undoes whatever changes the FFT matrix can possibly make. All Fourier Transforms are invertible, although the continuous Fourier Transform is too “rich” to be represented as a matrix product.

[6] I have assumed for simplicity that all signals have been mono, i.e. one speaker channel. However, musical audio usually has two channels in a stereo configuration, i.e. one signal for the left and one signal for the right. For stereo signals, you need *two* samples at every sample time, one from each channel at the same time. In general, you need to take one sample *per channel* that you’re working with. Basically, this means just having two mono ADCs and DACs.

[7] Why 2^n and not 10^n? Because computers work in binary (base 2), not decimal (base 10).