what are the best types of DAC?
Originally Posted by MacGyver, Jun 22 2005, 02:38 PM
Oh, and referring to your 24-bit Burr Brown DAC is akin to talking about your Hewlett-Packard printer or your Memorex hard drive. Burr-Brown is simply the manufacturer (actually, Texas Instruments purchased that division years ago), so mentioning it when talking about the DAC doesn't really tell you anything.
Saying it's a 24-bit DAC, or a Sigma-Delta DAC, now that tells you something useful.
the only way to tell the difference would be using a dyno to see the power it makes at certain rpm's
much like using special equipment to measure the difference in thd, sampling etc. and the list goes on and on.
i can write a freaking essay about DAC's. but i dont feel like writing 3-5 pages worth of info because i dont think most people would understand and also there are a few things i dont even understand. with that said, i would most likely leave some unanswered questions.
those of you interested, i can copy and paste where i got the info from.
im sure now you guys know why i didnt want to reply.
to understand DAC's completely, you have to understand pretty much everything in here. there are a few things i dont get, but hey, i still learned a great amount of info from him.
copied and pasted from werewolf on ECA.
Let's start by describing a fundamental difference between analog & digital signals. We all know that an analog circuit (like an active crossover filter) requires a power supply to operate. But that's essentially it ... once powered, you simply apply an analog input signal and the circuit responds by producing an analog output signal ... simple.
But digital signals are fundamentally different. Not only because the signal is represented by digital 1's and 0's (with great advantages like insensitivity to amplitude noise), but because something else is needed to convey or transport a digital signal ... namely, the "clock" signal.
Now there really are two types of digital systems in this world: one is "event driven", where a digital signal changing state is enough to cause further activity in a sequential chain, and the other is "timebase" driven, where a change of a digital signal is not recognized until the clock, or master timekeeper, registers that change. The world of digital audio (and most other systems for that matter) is in the SECOND category ... in addition to the digital signal itself, there needs to be a "clock" signal or master time keeper that registers any logic state change and fundamentally drives any future activity. This is very apparent even at the so-called gate level ... look at an analog opamp, and you only see analog inputs & outputs (in addition to the power supply). But look at a digital flip-flop, and you see an input, output and CLOCK input. The digital input can change all day long, but the flop will not recognize (or latch, or store) that input until the clock edge comes along.
So all digital circuitry of interest to us requires not only the digital inputs & outputs, but a master time-keeper signal ... the clock ... as well. Now Digital-to-Analog Converters are no different ... they require digital inputs (of course) to create an analog output, but they also require a master clock signal as well. In fact, the clock signal is ULTRA IMPORTANT to the DAC especially ... not only does it control the logic that latches or reads the digital input signal, but it provides the fundamental timebase for the whole conversion process. Each digital word that a DAC receives represents one sample of some analog signal at ONE EXACT POINT IN TIME. That precise moment in time must be faithfully reproduced by the DAC, in order for the analog output signal to be a faithful representation of the original analog event. It has been said (quite accurately) that the RIGHT sample at the WRONG time is in fact, the WRONG sample. I'll go one step further ... you can have the best DAC in the world, but if you don't feed it with a clean, low-jitter clock signal, you've got a handful of junk.
What level of timebase accuracy are we talking about for high fidelity digital-to-analog conversion? Let me tell ya, it's friggin scary ... here's a simple example :
Let's say I have a 1 volt peak signal, which has been converted to a digital signal by a 16 bit ADC. So I have 2**16 quantization levels covering a 2 volt peak-to-peak signal, so each LSB or bit corresponds to about 30 microvolts. Now a full-scale 10kHz signal (for example) will have a maximum slope of 2*pi*10kHz*1V, or 63 millivolts per microsecond. So how long (in time) will it take for that 10kHz signal to span one LSB at the 16-bit level? Simple :
30 microvolts (LSB size) divided by 63 millivolts per microsecond (rate of change of signal) = 480 picoseconds !!
Yes, about half a nanosecond (a nanosecond being one billionth of a second). Bottom line : if your DAC clock doesn't have substantially LESS than 1 nanosecond of jitter (timing noise or inaccuracy), you're kidding yourself if you think you've got accurate 16 bit conversion. Imagine (or rather, calculate) how clean the clock must be for 24 bit conversion !!!!
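That arithmetic is easy to replay in a few lines of code. This just re-runs the post's own example numbers (1 V peak sine, 16-bit conversion over a 2 V peak-to-peak range, 10 kHz tone) and then repeats the calculation at the 24-bit level; nothing here comes from a real DAC datasheet:

```python
import math

# Replaying the jitter-budget arithmetic from the example above.
vpp = 2.0                             # full-scale range, volts
lsb16 = vpp / 2**16                   # one 16-bit step: ~30.5 microvolts
max_slope = 2 * math.pi * 10e3 * 1.0  # peak slope of a 1 V, 10 kHz sine, in V/s
                                      # (~63 millivolts per microsecond)

t_lsb16 = lsb16 / max_slope           # time for the signal to cross one 16-bit LSB
t_lsb24 = (vpp / 2**24) / max_slope   # same calculation at the 24-bit level

print(f"16-bit: {t_lsb16*1e12:.0f} ps, 24-bit: {t_lsb24*1e12:.1f} ps")
```

This prints roughly 486 ps for the 16-bit case (the "about 480 picoseconds" above) and under 2 ps for 24 bits, which is why the clock cleanliness requirement gets so scary.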
So in digital audio, DACs need VERY clean clock signals. Everybody with me so far? Next post I'll describe the industry standard for where DACs in fact GET their clock signals, and why it's a TERRIBLE approach. Finally in a third post I'll describe some options for improvement, cool?
Now there would appear to be contradiction, because I said that DAC's need a clock signal ... in fact, a very clean clock signal ... in addition to the digital data itself in order to function. And anybody who's played with outboard DACs knows that you just send the DAC one and only one signal ... the industry standard S/PDIF signal, over optical cable or coax ... and it works.
SO WHERE THE HELL DOES THE DAC GET ITS CLOCK?
Well you might think ... that's simple, there's probably a crystal oscillator (the cleanest clock source known to an electrical engineer) on the DAC board that supplies the DAC its clock ... what's the big fookin deal? Simple answer ... because it won't work. You see, the CD transport has a clock of its own, that's used to spin the CD, read the data, and send the data out to the DAC. And while the crystal oscillator in the transport is some multiple of 44.1kHz, and you would surely use a 44.1kHz clock (or multiple) on your DAC board, those two clocks will NEVER be "in sync" precisely ... and they must be precisely "in sync" in order to communicate the digital data. The two clocks might start off "in sync", but will wander away from each other like two wristwatches would.
So what really happens? In addition to the DAC chip itself, the outboard DAC has another little IC on it called a "digital interface receiver" or DIR. Its job is to receive the incoming digital data stream, and RECOVER a clock from the data stream to send to the DAC. To do this, it uses a well-known circuit called a phase locked loop or PLL, which has its own Voltage Controlled Oscillator (or VCO) embedded in a feedback loop that compares the VCO clock output with the incoming data stream. It essentially controls its internal VCO clock so that it keeps "in sync" with the incoming data stream. And THAT clock is sent to the DAC.
Sounds clever ... and it is. Its main (make that only) virtue is that this technique allows for ONLY A SINGLE communication channel (coax or optical fiber) to send digital audio from one device to another. Only need one "wire" to send digital audio ... the receiving device will have a DIR (with a VCO in a PLL ) to recover the clock ... no need to send the clock SEPARATELY.
Can't argue with the "economy" of the solution. But the problem my friends is this : you end up clocking your precious, ultra-precise DAC with a RECOVERED clock. And no matter how good the PLL in the DIR is, a recovered clock will NEVER be as clean as a crystal oscillator clock. The recovered clock will have jitter ... in fact, a bad form called "data dependent" jitter (cuz the clock was recovered from the data in the first place), and while you can filter it to some extent with loop filter components in the PLL, it's still just a fundamentally bad way to communicate digital audio from a transport to a DAC ... cheap, yes ... but high performance, no.
Should point out that there have been many fine attempts to live within the constraints of this system, and do the best job possible to filter this jitter with low bandwidth PLL's, or cascades of PLL's ... Zapco's little outboard DAC comes to mind
But there are alternatives ... and that's my next post
OK alternatives ... some are "compatible" with S/PDIF, some are not.
1. Adam mentioned one, I2S (eye-squared-ess). A format different from S/PDIF (even enhanced by Ultra-Analog, I2Se) whereby digital audio data and clock are sent separately from device to device. Adam knows about it cuz you'll find it in Perpetual Technologies devices. A real improvement ... no need to "recover" a clock from the data stream.
This is not bad at all, but still not as good as possible. What you really want to do, is to PUT THE CRYSTAL OSCILLATOR RIGHT NEXT TO THE DAC. The most precise timing source should be located right at the device that cares most for precision timing ... the DAC. It's that simple ... and anything else is sub-optimal. But how can you do this? I've already said that you can't have a crystal oscillator right at the DAC, and one at the transport, and have them communicate. Well here's the options :
1. A big memory buffer on the outboard DAC board. The outboard DAC will still have a DIR ... it will read the data sent by the transport, and store that data in a big memory buffer (RAM) on the DAC board. Then, slightly delayed in time, the DAC and its local oscillator can read the data out of the memory on ITS OWN TIMEBASE. The memory buffer serves to "isolate" or "decouple" the two clocks that are not "in sync". I don't know if anyone has actually commercialized this approach ... but it could be compatible (almost) with S/PDIF. It would take a big frikkin RAM though ...
2. Put the crystal oscillator right next to the DAC, WHERE IT BELONGS, and send the clock signal BACK to the transport. The transport is then "slaved" to the DAC, instead of the other way around. And of course the transport sends the data forward to the DAC. Not compatible with S/PDIF ... requires TWO signals between the DAC and transport (oh the horror): one clock signal sent FROM the DAC TO the transport, one data signal from the transport to the DAC. My beloved Wadia transport/processor system (in the home listening room) does precisely this.
And so does a little company called LC Audio ... shown to me by one Jason Winslow (thanks dude!). They offer an aftermarket modification for CD players, and separate transport/DAC systems, that will implement this option. Caution : not a trivial modification by any means. But it's good engineering my friends.
3. Asynchronous sample rate conversion. Relatively new technology, fully compatible with S/PDIF. Here's how it works : it's essentially a much more complex DIR, which reads the incoming S/PDIF data just like any other DIR. But it also accepts another, completely "asynchronous" (meaning not "in sync") clock from the crystal oscillator that resides right next to your DAC. So the incoming data comes from one timebase (the transport), while the outgoing data is timed from a completely separate timebase (the DAC). And this nifty little device will actually do some fancy DSP (OK, not really fancy, just plain old interpolation) to actually CALCULATE the proper audio samples according to the desired output rate. Good stuff, this ... it allows communication between two "out-of-sync" timebases by actually calculating the correct audio samples. Yes, it WORKS. And you can find it in Bel Canto's latest DAC, and you can get a Crystal Semiconductor or Analog Devices Evaluation board with their ASYNC SRC's on them
OK I'm tired All this clocking stuff really pertains to ANY type of DAC ... although some are more sensitive to timing jitter than others.
I will start a new thread one day soon to discuss different types of DACs : one-bit, delta-sigma, multi-bit ... as well as oversampling & upsampling, what it means and more importantly what it doesn't mean.
But for now remember ... a DAC is only as good as the clock you feed it !!! So from now on, everyone is officially prohibited from talking about DACs ... one bit, 20 bit, 24 bit, etc ... unless they mention the clocking scheme in the same post Goodnite my friends
Sonic effects of jitter ...
Couple things to consider. First, increased jitter will not raise a DAC's noise level (defining noise to be any artifacts present in the absence of signal). Why? Well let's say you put all zeros into the DAC for a signal, expecting to get analog "zero" out. Well it doesn't really matter if those "zeros" are jittered in time ... the net result is still zero ... so no noise "added" by jitter.
In fact, the same statement is true for any DC signal the DAC is trying to reproduce ... DC samples just don't care if there's timing jitter ... the output is still "DC". If all the samples are the same (this is the DC case), then a timing error cannot be reproducing a "wrong sample" ... does this make sense?
Of course DC is not in the audible bandwidth, but from this extreme case we can conclude that LOW FREQUENCY signals are LESS sensitive to timing jitter. They just don't change rapidly enough, so timing errors do not easily translate into voltage errors (that conversion happens through the slope, or rate of change, of the signal).
So the conclusion is, no ADDED noise with lots of jitter, and LOW FREQUENCY (bass) signals are less affected. What does happen with lots of jitter is this : a distortion or "smearing" of the higher frequency signals, along with the "defocusing" or blurring of the soundstage cues that rely on high frequency signals. And in the worst case, probably some audible "harshness" in the higher registers as well.
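The slope argument is easy to see numerically. Here's a toy simulation with my own exaggerated numbers (10 ns RMS of random gaussian jitter, far worse than any real transport): apply the same timing noise to the sample instants of a bass tone and a treble tone and compare the resulting voltage errors.

```python
import math
import random

random.seed(0)
fs = 44_100
jitter_rms = 10e-9            # 10 ns RMS -- deliberately huge, for illustration

def rms_error(freq, n=10_000):
    """RMS voltage error of a unit sine sampled with jittered timestamps."""
    errs = []
    for k in range(n):
        t = k / fs
        dt = random.gauss(0.0, jitter_rms)       # the timing error on this sample
        errs.append(math.sin(2 * math.pi * freq * (t + dt))
                    - math.sin(2 * math.pi * freq * t))
    return math.sqrt(sum(e * e for e in errs) / n)

low = rms_error(100)          # bass tone
high = rms_error(10_000)      # treble tone
```

The 10 kHz tone picks up roughly 100x the RMS error of the 100 Hz tone from identical jitter, because the error a timing offset produces is proportional to the signal's slope, and slope scales with frequency.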
Ok I'm not real happy with my last post so let me try a summary, and a bit more info :
1. From the perspective of the WHOLE recording/playback chain, the absolute best way to improve resolution well below the LSB (and enjoy some other benefits as well) is to dither the signal right before quantization. Of course, this must be done by the CD makers in the recording or mastering process.
2. Equally 'of course', the engineer designing a CD processor for CD playback has no control over the mastering process. So, if the designer has knowledge that most recordings are NOT well dithered, he/she may be motivated to search for a technique to improve the situation. But to the best of my knowledge, any attempts to uncover true INFORMATION below the LSB of an undithered recording are guesswork. May be sonically pleasing, however, particularly at a time when most recordings were not well dithered. So I suspect there was a time when a process like ALPHA made sense ... but I believe that time is past.
3. For properly dithered recordings, any such algorithms after the quantization (like during CD playback) are completely unnecessary, and if they add anything, it can ONLY be noise.
4. The guesswork of which I speak in my second point above is in NO WAY comparable to filling in the musical information BETWEEN 44.1kHz samples in digital audio. Digital audio by nature is a sampled-data process, and at some point during analog playback the information between those samples must be "filled-in". But thanks to Mr. Nyquist, it can be mathematically proven that a bandlimited analog signal can be COMPLETELY recovered from its samples ... hence the process to fill in audio between samples, simply called interpolation or smoothing, is not guesswork, but follows a strict mathematical model. There are some limitations in practice to be sure ... but nothing comparable to the Nyquist Theorem (that's a strong word, mi amigos) exists for "filling in" information below the LSB of an un-dithered recording.
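For the curious, that "strict mathematical model" for filling in between samples can be sketched directly. This is plain Whittaker-Shannon interpolation on made-up numbers (a 1 kHz tone sampled at 44.1 kHz); the only "limitation in practice" visible here is that the infinite sum has to be truncated:

```python
import math

fs = 44_100.0                  # CD sample rate
f = 1_000.0                    # a bandlimited test tone, well under fs/2
N = 20_000
samples = [math.sin(2 * math.pi * f * n / fs) for n in range(N)]

def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def reconstruct(t):
    """Whittaker-Shannon interpolation: the bandlimited signal at ANY time t
    is a sinc-weighted sum of the stored samples (truncated to N terms here)."""
    return sum(samples[n] * sinc(t * fs - n) for n in range(N))

t = 10_000.5 / fs              # a point exactly BETWEEN two stored samples
approx = reconstruct(t)
exact = math.sin(2 * math.pi * f * t)
```

Even at a point midway between samples, the reconstructed value agrees with the true waveform very closely; the small residual comes only from truncating the sum, not from any guesswork.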
Ok I think I'm done for now ... does this make sense?
part 2
Alright, lots to discuss tonite, for the interested reader.
And we'll do things a bit different in this thread. It will be an "interactive" thread, with a couple homework assignments But the grades don't count
We shall start ... at the beginning (of course) ... with the process of analog-to-digital conversion (ADC). Fear not, gentle reader, DACs will be discussed in due course as well.
We all recognize an analog signal when we hear one, or (more visually) see one on an oscilloscope. It's a continually-varying waveform ... we'll use voltage as an example, but could be current, air pressure, etc. By continually varying, we mean it takes on a different value (maybe only slightly different) at EACH POINT in time ... no matter how finely, or broadly, we observe the time axis. It's "always there", continually changing (except the extreme case of DC, of course, which is constant in time).
Now the first thing to understand, or accept as simply true for now, is the so-called Nyquist Theorem. This landmark piece of work simply states that : a bandlimited analog signal (meaning a signal with no frequency content above a certain point, like 20kHz) can be COMPLETELY recovered from only SAMPLES of the signal, providing you sample fast enough. What's fast enough? At least twice the rate of the highest frequency. Simple, concise, and undeniably true.
This theorem really is the basis for digital audio ... period. Because it says that instead of communicating, or storing, the entire analog waveform ... which is exactly what a magnetic tape or phonograph record does ... we can take "samples" of the analog waveform at DISCRETE points in time, and store only these samples ... because THAT'S ALL THE INFORMATION WE NEED TO COMPLETELY CHARACTERIZE, AND ULTIMATELY RECOVER LATER, THE ENTIRE SIGNAL.
How fast do we sample? Well with the (somewhat controversial) knowledge that the limit of human hearing is 20kHz, a "sample rate" of 44.1kHz was chosen for the CD standard ... remember, we have to be (at least) twice the highest frequency of interest.
What if the analog waveform in fact has some spectral content ABOVE half the sampling frequency (22.05kHz, to be exact)? Ouch ... that's bad news, because a very bad thing called "aliasing" will happen when that signal is sampled at a rate of 44,100 times a second. Topic for later ... right now, just recognize that aliasing must be avoided. So what's done in practice is this : right before the analog waveform is sampled at DISCRETE points in time, 44,100 times a second, the analog waveform is passed through a low-pass filter called (cleverly) an "anti-alias" filter. Sharp rolloff, bandwidth about 20kHz. More on this later ...
So to summarize "post the first" in this long, interactive thread, the ADC process begins by passing the analog voltage waveform through a 20kHz low-pass, anti-alias filter. Then, the analog signal is SAMPLED at discrete points in time ... 44,100 times a second.
But even after the signal is sampled, it's still "kinda analog", because one sample will be 0.731 volts, the next sample maybe 0.274 volts, etc ... still sounds "analog" rather than "digital" so the next post will deal with what happens next : a process called QUANTIZATION. Everybody with me? Please humor the lecturer this evening, and give me some feedback before the next post
So far, we have only 'sampled' the analog waveform. But we rest assured in the knowledge that, thanks to Mr. Nyquist, we have not yet introduced ANY ERRORS ... because later on, we can completely recover the signal from its samples. But those samples are still kinda analog ...
So now we do the true analog-to-digital conversion process, and convert these 'kinda analog' voltage samples to DIGITAL WORDS (yes, we really do call 'em words). We will see that this process ... called quantization ... does in fact introduce errors (and believe me when I tell you, a person could EASILY devote an entire career to the study of quantization).
What better way than to proceed with an example? And we'll ask for our very first homework assignment shortly Let's say we have an analog voltage waveform that spans the range from 0.10 volts to 0.90 volts ... can take on ANY value between these two limits (chosen for simplicity). Furthermore, let's say we want to build ourselves a 3-bit ADC to "digitize" this waveform (OK, so 3-bits doesn't sound very high-end ... but I caution you to not judge this book by its cover ... plus it's a lot easier as an example).
So here's how the converter will work : if the analog voltage sample (let's say we've done the sampling described above) is BETWEEN 0.10 and 0.20 volts, we give to that "analog" sample the "digital" word : 000 . If the analog voltage sample is BETWEEN 0.20 and 0.30 volts, we give the analog sample the new digital value of : 001 . So let's construct the following table :
Analog Voltage Sample Value Corresponding Digital Word
0.10 --> 0.20 000
0.20 --> 0.30 001
0.30 --> 0.40 010
0.40 --> 0.50 011
0.50 --> 0.60 100
0.60 --> 0.70 101
0.70 --> 0.80 110
0.80 --> 0.90 111
So here's an example : analog voltage sample of 0.37 would be assigned a digital value of : 010. But please note, that an analog voltage sample of 0.34 would be assigned the SAME digital value ... so here, for the first time, we introduce an ERROR ... called (cleverly) QUANTIZATION ERROR, or (somewhat incorrectly, but vastly used), QUANTIZATION NOISE.
In the case of my example of 0.37, the actual quantization error would be 0.02 ... because we "expect" the digital word 010 to correspond precisely with the analog voltage halfway in its region, or 0.35 (and 0.37 - 0.35 = 0.02).
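If you'd rather see the table as code, here's a minimal sketch of the 3-bit quantizer (the bin edges and midpoints are exactly the ones from the table above; the function name and interface are my own):

```python
def quantize_3bit(v):
    """Toy 3-bit quantizer for analog samples in [0.10, 0.90), using the
    0.1 V bins from the table. Returns (digital word, quantization error)."""
    if not 0.10 <= v < 0.90:
        raise ValueError("sample outside the converter's range")
    code = int((v - 0.10) / 0.10)     # which 0.1 V bin the sample lands in
    word = format(code, "03b")        # e.g. bin 2 -> "010"
    midpoint = 0.15 + 0.10 * code     # the voltage we "expect" for this word
    return word, v - midpoint
```

For example, quantize_3bit(0.37) returns ("010", ~0.02), and quantize_3bit(0.34) returns the SAME word with error ~-0.01, just as in the example (the errors are approximate only because of floating point).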
So why would anybody put a precious analog signal through such a "noisy" process? Because, once the signal is digitized into these digital words, it is (virtually) immune to further errors in storage and/or communication (this is the essence of why we like digital). And if we use ENOUGH BITS to quantize the signal (more than 3), the quantization noise will be very small indeed ... much lower than other forms of noise that plague analog storage & communication.
How do I build such a converter out of real circuits? Well, the type of analog-to-digital conversion described here is pretty straightforward to build. I might start with a 1 volt voltage reference, and use a resistor-divider chain to establish all the "boundary" voltages (0.1, 0.2, etc), creating a series, or bank, or array of voltages. Then I use a series of comparators, all of which have one input tied to the analog signal, and the other input tied to one of each of the voltage references. The comparator outputs will tell me what digital word to assign to the analog input. Make sense? The whole key, however, for this process to work is this : I must VERY ACCURATELY establish ALL of these "boundary" voltages, to compare my signal against.
And if you think that this technique, for 16-bit conversion, means I must establish voltage references that are PRECISE to within ONE PART in 2**16=65,536 ... that's where you would be RIGHT. A VERY difficult challenge ... more on this later.
Now for the homework! This one is simple. Use the 3 bit converter I built, pick a voltage somewhere in the range of 0.1 to 0.9, and report back the corresponding digital word and its quantization error. I know it's simple ... but humor me, the next assignment will be a lot more fun
I won't proceed until I hear 3 answers (hey, it's my thread)
OK stay focused (yep I have the Wadia, cuz they do clocking right, digital volume control right ... no need for preamp)
Alright ... so we have built ourselves a 3-bit ADC, and uncovered something called quantization noise in the process. Let me make a few points :
1. Obviously, this concept is extendable to 16 bits.
2. These 16-bit words are EXACTLY the audio signal information stored on a CD (plus some error correcting codes, control bits, etc.).
3. To build an ADC as I've described gets more and more difficult, as the number of bits increases ... because the PRECISION needed in the analog circuitry (the resistor string that sets up the voltages, and the comparators) gets real tough, real fast.
But there's another way ... And I'll introduce by way of example, and ask for a second homework assignment The algorithm I'll describe is pretty simple, but tedious. You can do your homework by hand, or write a little program if you're so inclined.
Here's the deal. Let's say you only have ONE bit to quantize a signal ... and it takes on a value of zero (0) or one (1) ... that's it. And we have a sampled analog signal just like before, values between 0.10 and 0.90. Here's the algorithm I want you to "run" :
Pick a starting value for the digital bit : let's say 0
Pick a starting value for a variable we'll use called SUM : let's say 0.00 (it's an "analog" variable, with decimal value like the input signal).
1. Subtract the digital bit value (0 or 1) from the analog input sample. We'll call this difference DIFF.
2. Add DIFF to the current value of SUM ... this will be our "running summation".
3. If SUM is greater than 0.5, the next digital bit will be one (1). If SUM is less than 0.5, the next digital bit will be zero (0).
Then go back to step one, with the new digital bit value you've found. Simple Two things to remember :
Step 1 is a SUBTRACTION, and never clear the SUM value (although if an error finds its way into SUM, it won't matter in the long run). Keep track of the "string" of ones and zeros you generate ... because I'll ask you to AVERAGE them after you've run this algorithm 10, 20 ... or maybe 100 cycles. Here's my example :
Analog input sample = 0.37
Bit = 0
Sum = 0.00
1. Diff = 0.37 - 0 = 0.37
2. Sum = 0.00 + 0.37 = 0.37
3. Sum is less than 0.5, so Bit = 0
1. Diff = 0.37 - 0 = 0.37
2. Sum = 0.37 + 0.37 = 0.74
3. Sum is greater than 0.5, so Bit = 1
1. Diff = 0.37 - 1 = -0.63
2. Sum = 0.74 - 0.63 = 0.11
3. Sum is less than 0.5, so Bit = 0
And so on ... everybody got it?
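For anyone doing the homework with a little program, here's one way to write the loop. It's a direct transcription of steps 1 through 3 above (the variable and function names are mine):

```python
def delta_sigma(x, n_cycles):
    """Run the DIFF/SUM feedback loop for a constant 'analog' input x,
    returning the one-bit stream it generates."""
    bit, total = 0, 0.0                  # starting values, as in the post
    stream = []
    for _ in range(n_cycles):
        diff = x - bit                   # step 1: subtract the fed-back bit
        total += diff                    # step 2: running summation (never cleared)
        bit = 1 if total > 0.5 else 0    # step 3: threshold picks the next bit
        stream.append(bit)
    return stream

stream = delta_sigma(0.37, 1000)
average = sum(stream) / len(stream)      # the "homework" average
```

For an input of 0.37 the first three bits come out 0, 1, 0, matching the worked example, and after a thousand cycles the average of the bitstream settles very close to 0.37.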
So pick an analog input sample like before, run this little algorithm by hand or with a little program, and report back what's happening to the "average" value of the bitsteam you generate (meaning, for example, 25 "ones" out of 100 bits will have an average of 0.25). Take your time ... 3 answers will trigger a new post
Run the algorithm ... did I explain it well? Follow my example? Can do it by hand, to create a "string" of bits maybe 20 bits long ...
take the average of the ones & zeros. Then run it thru maybe 10 more cycles, take the average over all 30 bits and see what the average is doing ... trust me, it's worth the effort, I promise ... or your money back
I REALLY want this to be clear ... don't be afraid to ask any questions at all, I'll answer till my bedtime or until the moon brings out the worst in me ...
1. In our first example, we described one way to digitize an analog signal with our 3-bit ADC. We showed how to create a multi-bit digital word that has a direct one-to-one correspondence with each analog "samples", which would usually be taken at a "Nyquist Rate" of at least 2 times the highest frequency of interest. For CD, these words are actually 16 bits long, and the sampling frequency is 44.1kHz. And it takes some very precise analog electronics to build such a converter.
2. Our second example shows a different way to digitize an analog signal ... this time, using only a measly one-bit digital "word" ... although better to think about a 'string of bits', oftentimes called a "bitstream". Now mach_y has certainly uncovered something very interesting ... and very, very powerful : Even a measly single bit bitstream can represent an analog sample with EXTREMELY high precision, providing that : we use a whole bunch of one's and zero's, and we create them with a smart little feedback loop like the algorithm I described.
Now about that "whole bunch" of bits needed ... and the question about digitizing samples that aren't just "DC", but more interesting analog waveforms. For those of you that went through this exercise, it's probably easy to believe that the bitstream generated would still accurately "capture" or represent a CHANGING analog waveform, providing that the RATE at which you generate the 1's and 0's is MUCH, MUCH faster than the rate of change of the analog input. For example, let's say that after you generate maybe a thouand 1's and 0's to represent an analog input of 0.37, the analog input changes to 0.39. I've still had a thousand digital "samples" to represent 0.37 before my input even changed, so a "slowly" changing analog input probably isn't going to upset my little converter.
And here we introduce the concept of OVERSAMPLING. A bitstream generated by our simple algorithm will still be a faithful, in fact high precision representation of a CHANGING analog input, providing that the bits are generated at a rate MUCH higher than the previously mentioned Nyquist rate. How high for digital audio? Sample rates of 3 Megahertz are quite common. One way to think about the situation is this : As mach_y discovered, the quantization "error" associated with this technique can in fact be very, very small ... his averages were VERY close to the input analog value ... in fact, the LONGER he runs the algorithm, the SMALLER the error would be ... so we uncover a wonderful principle of data conversion : you can in effect TRADE speed for accuracy ... I can use very low precision digital "samples" (can't get much lower than one bit!) to characterize a signal, with almost ARBITRARILY HIGH precision, providing I sample FAST enough ... and use some clever algorithms in the process.
Now there's another advantage to oversampling ... that pertains to DACs as well as ADCs. It has to do with using digital filters instead of analog filters ... but we'll discuss that later.
And nope, nothing to do with MP3 compression or decompression You guys always jump to conclusions This will be a slow, deliberate process ... feel free to ask more questions though.
Simple take-away : even a very LOW precision digital signal (one bit, in fact!) can represent an analog waveform with (arbitrarily) HIGH precision ... or low quantization noise ... providing that : the analog waveform is "low bandwidth" or slowly changing, compared to the rate or bandwidth of the bitstream.
Remember that mach_y's resolution of the analog input was MUCH higher (meaning lower quantization error) than our 3-bit ADC could ever hope to be (unless we introduce our 3-bit ADC to a little technique known as dither ... but that's a whole 'nother topic! ... told ya quantization was fascinating ...)
you guys that have patiently stuck with this thread, even done some homework ... here's a few punchlines to reward you :
CONGRATULATIONS! You have successfully built what we call :
An Oversampled, One-bit, Delta-Sigma Analog to Digital Converter ... and even followed it with a simple Decimation Filter.
Remember those two dummy variables, DIFF & SUM? Delta, Sigma ...
Oversampled we explained, or really just begun ... and of course One-Bit should be pretty obvious by now ...
Decimation Filter? That averaging I asked you to do? Yes, averaging is a simple digital low-pass filter ... Decimation we'll return to "anon" ...
And oh yeah ... that one-bit bitstream you generated? That's EXACTLY the digital format stored on the new SACD ... yes, the new Super Audio CD.
So there you have it, for now. Our first example showed the multi-bit audio words stored on regular, old CD's. And the second example demonstrates the format stored on new SACD's.
And guess what ... we can convert from one format to another That's what we'll talk about next ... because even long before SACD's came along, our second example became the preferred method for ultimately generating the 16-bit words stored on regular old CD's.
And my final teaser, this whole process can be "run in reverse" (so to speak) to build digital audio DACs
Still having fun ??? honestly hope so ...
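For anyone who'd like to see that homework loop written out, here's a minimal sketch of the first-order delta-sigma modulator you just built, followed by the crude averaging decimation. The variable names (`integ`, `feedback`) are mine, not from the homework:

```python
def delta_sigma_1bit(samples):
    """First-order delta-sigma modulator: encode samples in [-1, +1] as a
    one-bit (+1/-1) stream. DIFF = input minus the fed-back output bit;
    SUM = running integral of DIFF; the quantizer keeps only the sign."""
    bits = []
    integ = 0.0      # the SUM (integrator state)
    feedback = 0.0   # last output bit, fed back around the loop
    for s in samples:
        integ += s - feedback                   # the DIFF, accumulated
        feedback = 1.0 if integ >= 0 else -1.0  # one-bit quantizer
        bits.append(feedback)
    return bits

# Decimation by simple averaging: a long block of one-bit samples recovers
# the slowly-varying input with far more than one bit of precision.
stream = delta_sigma_1bit([0.25] * 6400)
recovered = sum(stream) / len(stream)   # lands very close to 0.25
```

Every output is just +1 or -1, yet the average tracks the DC input ... exactly the "low precision bits, high precision average" punchline above.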
Alright guys, let's wrap this section up for now with a comparison, of sorts, between the two ways we've discussed to lay down 16-bit audio samples at 44.1kHz on a CD :
1. The so-called Nyquist converter. Analog signal first passes through a sharp rolloff, analog anti-alias filter, bandwidth right about 20kHz. From there the signal is sampled at 44.1kHz, and finally digitized by "comparing" the signal to an array of precise voltage references.
Almost never done anymore because of two fundamental, very difficult analog problems. First, the sharp rolloff analog anti-alias filter ... component drift, phase problems like crazy, etc. Second, the precision needed in the voltage references ... mismatch causes bad converter distortion. It's an ugly picture indeed ...
2. The oversampled, delta-sigma approach. We'll walk through the signal path, but first remember a couple of things : one, the analog signal (with a 20kHz bandwidth of interest) is oversampled at (typically) 3MHz ... which means the ANALOG anti-alias filter can be MUCH more gradual ... maybe a gentle rolloff that only starts at 40 or even 80kHz (we're preventing aliasing, but the sample rate is very high ... means a more gradual analog filter will work). Two, no precision or matching needed in the actual conversion. The real benefit of a one-bit converter is this : mismatch can only cause a harmless gain or offset error ... one bit means only two values or "points" on a curve, and two points define a straight line, pure and simple (it's deviations from a straight-line transfer function that cause distortion).
So the analog signal first passes through a GRADUAL analog anti-alias filter, which just needs to provide healthy attenuation by 3MHz. Next, the signal is sampled at maybe 3MHz (or higher), and then digitized by an algorithm very similar to the one you guys built. In short, yes it's only one-bit ... and hence very "noisy". But the algorithm is designed to make sure that the noise is "shaped" in frequency so that the noise is VERY LOW in the low frequency band of interest (20kHz). This is why even simple averaging, like what you guys did, "reveals" the high resolution possible ... averaging is a form of low-pass filtering, which in this case "removed" a lot of the one-bit quantization noise at high frequencies.
In fact, the one-bit signal is then sent to a DIGITAL low-pass filter, not unlike the averaging process you guys did. This is interesting ... it's really this digital filter that : removes most of that one-bit quantization noise, "revealing" higher precision digital words, and ... get this ... provides the SHARP anti-aliasing needed before the final step of "decimation" ... which is simply lowering the sampling rate (a sampling process itself) back down to 44.1kHz. Yes, a digital filter provides the real sharp anti-aliasing needed for 44.1kHz samples !!

Why do we like a digital anti-alias instead of analog? No component drift, no chance of power supply noise creeping in, and finally ... can be implemented as FIR with perfectly linear phase. By the way, an FIR filter is really NOTHING MORE than a "weighted" averaging filter, that takes a running average of many (much more than 64) one-bit samples to produce a higher precision digital word. And as a nice bonus, it turns out that the computation required for the FIR is not bad at all in a decimation environment ... since there's no feedback in this filter structure, you never need to calculate the outputs you're going to ignore after you downsample to 44.1kHz.

So that's it, the one-bit bitstream is digitally filtered by a long FIR lowpass filter, generating higher precision digital words ... and we only need about every 64th word (after filtering) to supply the 16 bit words at 44.1kHz.
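That "never compute the outputs you'll throw away" trick can be sketched in a few lines. The 64-tap boxcar (plain averaging) below is a deliberately crude stand-in for a properly designed FIR, just to show the mechanics:

```python
def fir_decimate(bits, taps, ratio):
    """FIR low-pass + decimate in one pass. Because an FIR has no feedback,
    we only ever compute the outputs we intend to keep: every ratio-th one."""
    n = len(taps)
    out = []
    for i in range(n - 1, len(bits), ratio):   # jump straight to kept outputs
        out.append(sum(taps[k] * bits[i - k] for k in range(n)))
    return out

# 64 one-bit samples in, one higher-precision word out:
words = fir_decimate([1.0] * 256, [1 / 64] * 64, 64)
```

With an IIR (feedback) filter you'd have to run every sample through the recursion before discarding 63 out of 64 results ... here the skipped outputs simply never exist.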
Needless to say, ever since about the late 80's, the second option became the preferred, highest performance and most cost effective way to record music according to the CD Audio standard. Then in the late 90's (or so), when we figured out how to stuff more data onto CD's, the whole filtering/decimation step was eliminated and the one-bit bitstream was stored on SACD. Very similar resolution in the 20kHz band to CD, but no longer sharply bandlimited to 20kHz (cuz that was provided by the decimation filter, needed for anti-alias with 44.1kHz sampling). So yes, SACD has higher bandwidth than CD, but the resolution in those "ultrasonic" bands decreases rapidly (cuz the quantization noise is increasing rapidly).
Well constant readers, I think that about does it for the ADC side of the story If interest remains high, we'll tackle DACs next, coooool?
part 3
Alright patient readers, long time overdue. Let's start a discussion about DACs ... theory & operation. I'll try to explain basic principles & variations, maybe dispel a few myths along the way, cool? Haven't (yet) figured out how to make this thread interactive like the last one ... which should be a relief to most of you
We're gonna go one step at a time. The ONLY way to really do DACs justice is to run a sort-of "parallel" discussion, where we describe some principles in BOTH the time domain & frequency domain. These domains are NOT independent by the way (there goes one myth down the drain right off the bat!), it's just that some concepts are more easily explained in one domain than the other.
Let's start with the data stored on a CD. The audio "samples" are 16 bit digital words, reflecting a sampling rate of 44.1kHz. In very broad strokes, the digital-to-analog conversion process must perform two fundamental functions : one, convert these digital words to analog values (voltage or current), and two, provide a filtering function ... about which we'll talk at some length.
In this first post, I want to carefully describe, in both time & frequency domains, more about what that digital audio data on the CD looks like. If you read that previous post, you already have a sense about the "time" domain picture. We have 16 bit digital words, created through a process of sampling & quantization (few different techniques to arrive at this goal, with different tradeoffs in real world implementations), which represent the "amplitude" (voltage, for example) of the music signal at DISCRETE points in time. And we are comforted in the knowledge that somehow the analog signal can be COMPLETELY recovered from these samples alone, if we sampled fast enough in the first place (fast enough being twice the bandwidth of the original signal) ... 44.1kHz in this case. In fact, the whole DAC "process" describes this signal recovery. So an analog signal (music), varying CONTINUALLY in time, is sampled DISCRETELY in time, then quantized to 16 digital bits. The quantization process adds noise ... no doubt, no escaping it ... although there's room for cleverness here ... whereas the sampling process itself only forces the signal to be bandlimited to less than half the sampling rate.
Now, what does the CD data look like in the frequency domain? What is the full "spectral content" of this string of 16 bit digital words, sampling rate of 44.1kHz? Actually, it's quite simple ... and if you grasp this concept, DACs are simple to understand. The frequency domain picture of this "discrete time sequence" is simply this: the full 20kHz audio band is present & accounted for in the frequency domain, along with exact "replicas" of the 20kHz information spaced every 44.1kHz. That's it
In other words, whatever frequency content you have from DC to 20kHz is duplicated around 44.1kHz, 88.2kHz, 132.3kHz, ... on & on forever. This "spectral replication" is an artifact of the original sampling process.
By the way, I've taken a slight liberty in glossing over the fact the original music actually spans -20kHz to +20kHz ... yes, negative frequencies exist in signal processing land! So in fact, the -20kHz to +20kHz content is present, centered at 0Hz ... but an identical "replica" is centered at 44.1kHz, 88.2kHz, etc. ALL DISCRETE TIME SEQUENCES have a periodically repeating spectrum like this.
But don't worry about those negative frequencies ... we don't have to be precise enough here to consider them. The only thing we need to know, is that in the frequency domain, the discrete-time or sampled sequence has the 20kHz info, but it is also REPEATED every 44.1kHz.
We call these periodic duplications ... IMAGES.
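A quick way to convince yourself those images are real: once sampled at 44.1kHz, a 1kHz tone and a tone at 1kHz + 44.1kHz produce the *identical* sequence of samples. A plain-Python check (the tone frequency and length here are just arbitrary choices for the demo):

```python
import math

fs = 44100.0   # sampling rate
f0 = 1000.0    # a 1 kHz tone
N = 512        # number of samples to compare

tone  = [math.sin(2 * math.pi * f0 * n / fs) for n in range(N)]
image = [math.sin(2 * math.pi * (f0 + fs) * n / fs) for n in range(N)]

# Sample-for-sample identical: the discrete-time sequence cannot tell
# f0 from f0 + fs, which is exactly why the spectrum repeats every fs.
worst = max(abs(a - b) for a, b in zip(tone, image))
```

The two lists agree to floating-point precision ... the sampled data genuinely contains both frequencies, and it's the reconstruction filter's job to keep the one we want.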
hey guys ...
Well we introduced a one-bit representation of a signal in the previous thread on ADC's. We saw that even though such "coarse" quantization must (of course) have lots of quantization "error" or "noise", that noise can be kept VERY LOW in the 20kHz bandwidth providing you do 2 things: one, sample much faster than the Nyquist Rate, and two, perform the quantization in a little feedback loop that essentially "shapes" the quantization noise in the frequency domain so that it's very LOW at low frequencies.
In fact ALL of these principles were shown in the homework assignment, where we found a one-bit signal to be a VERY accurate representation, on average (which means for low frequencies ... averaging is nothing but a crude low-pass filter), of a low frequency signal (our example was DC, but easily extendable to low freq AC). Trust me ... that little example I asked you guys to do was FULL of all the interesting principles we need.
We'll return to 1-Bit conversion soon, but for now let me say this : from a signal processing perspective, the "number of bits" in a conversion tells you very little.
Mathematically, is 1-bit lower noise than 16-bit? From this information alone, all you can say for sure is that the TOTAL noise of a 1-bit must be higher (worse) than the 16-bit ... 1-bit must give you more noise, cuz you only have 1-bit for pete's sake. BUT, what you don't know is how that noise power is DISTRIBUTED in the frequency domain. The process of oversampling in the quantization process allows you to "shape" that 1-bit noise so that it's very low over 20kHz ... in fact, can be quite lower than 16 bit. So for an apples-to-apples comparison, you must compare : how many bits you're using, what sampling rate is used, and finally what noise shaping algorithm is employed. Only then can you tell if 1-bit or 16-bit has better noise performance in a given bandwidth of interest. It's all determined by the 3 main items : resolution in bits, sampling rate, and noise shaping algorithm (if any).
Ultimately the ADC must output 16 bit words at 44.1kHz for CD storage ... but oversampled ADC's can be used first, and through a DSP operation called decimation (the averaging filter is a crude example) we can create 16 bit words @ 44.1kHz for storage on a CD. Or, skip the decimation and store the 1-bit "bitstream" on SACD
And Kev is correct ... as we pointed out in the last thread, all this fancy digital signal processing is employed so that we can actually build a converter that has only 1 bit. The one-bit converter enjoys an advantage over ANY higher bit converter : LINEARITY. One bit means two "states", two states means only 2 points on the converter characteristic function, and two points define a straight line ... it's that simple. No chance for mis-matched analog components to cause converter non-linearity.
Now before we move onward with our DAC discussion, let me add one more thing about the description of the signal stored on the CD. As we said, 16 bit words at 44.1kHz in the "time" domain, and a faithful 20kHz bandwidth plus all the frequency images at multiples of 44.1kHz in the "frequency" domain. But there's also quantization noise present ... it came about when we quantized the signal to 16 bits. In the time domain, this is the error introduced when we used a 16-bit word to represent an instantaneous analog voltage (or current) ... in the last thread we did a 3-bit example that demonstrated this "error" or "noise".
Now, this quantization error is also manifest in the frequency domain. In a sense, it's really just a higher noise floor over the entire 20kHz bandwidth ... and of course, the identical noise floor is "replicated" in those image bands as well, at multiples of 44.1kHz. But it can be quite a bit more complicated ... for example, if dithering is used in the quantization process, the noise floor is actually a bit HIGHER. Why the hell would you do that? Simply put, to decorrelate the quantization noise from the signal. One day I'll do another thread devoted solely to dither ... I love this topic. Sampling is simple, but quantization is endlessly fascinating.
So I think that completes our picture of what the data stored on the CD looks like ... in both the time and frequency domains.
dunder ... haven't heard about TI's technology. My eyes glaze over from all the marketing hype surrounding digital amps, digital speakers, digital air, digital ears , ... post a linky and I'll have a peek. Happy to help with your homework if I can ...
matt ... haven't spun a vinyl disc in over a decade! I was in deep too ... VTA adjustments, stylus cleaning & demagnetizing, potions & elixirs ... but then I saw the light when someone suggested that to really enjoy digital, stop listening to analog altogether ! never looked back
And I think it's informative to summarize what's improved about digital audio recording & reproduction over these last twenty years, to help explain why early digital certainly did NOT live up to the full performance possibilities of the medium. Here's my take, probably missing some:
1. The quantization process itself does nasty things to low level signals ... here especially quantization is a very unpleasing distortion, not additive noise. BUT the proper use of dither in the recording process (or any quantization process) has all but eliminated this early problem.
2. Converters (ADCs, DACs) with poor linearity, especially for low level signals (near the zero crossing). Bad differential nonlinearity in the early converters, combined with the above point on non-dithered quantization, destroyed any chance of low-level signal integrity and all the sonic cues dependent on it. Of course converters have improved DRAMATICALLY in this regard ... single bit & multi-bit.
3. Timing jitter. Took a little while to really appreciate this, cuz there's no real direct analogy in analog (OK, maybe wow & flutter, but at very different frequency extremes). But here again, lots of substantial improvement in the last 2 decades : better clock recovery with better jitter filtering, asynchronous sample rate conversion, and perhaps just more attention to the issue (clock buffering, board layouts, etc.).
4. Digital filters in audio have DEFINITELY improved. Better understanding of quantization effects in IIR structures, higher precision available at affordable prices, appreciating the pre-echo & post-echo associated with too much passband ripple ... all these effects have no real counterpart in analog processing, took awhile to develop real understanding & solutions.
5. Analog filters for digital audio have DEFINITELY improved. This includes minimizing their use altogether in oversampled systems, plus the appreciation that digital audio systems really do benefit from surprisingly wide bandwidth analog circuits (op amps) ... not because there's more information to be revealed (CD limit is a brick wall in this regard), but because after conversion from digital, the analog signal contains the "residue" of lots of high frequency junk ... residual image energy, residual quantization noise. And if the analog circuitry cannot process this high freq "junk" happily (linearly), then demodulation INTO the audible band can result.
Well that's my top five over 2 decades. We've come quite a long way actually ... I'd say the digital medium has improved quite a bit faster than the analog one, in its early childhood. Which sounds better TODAY? That's an exercise left to the reader ... but if you look at the pace of improvement (the "slope" of the curve, not just a single point), you gotta believe we've only just begun to reveal the real potential of digital audio
high time we resurrect this thread, no ? I do want to say that it's my intention to just help share some things I've learned over the years, I love to teach it's never my intention to come off as some arrogant asshole or know-it-all, so if that's the case let me know & I apologize in advance. ok enuf of that
when last we spoke, we ended with 16 bit words on the CD, representing samples of the original analog waveform every 44.1kHz. In the frequency domain, we have the full 20kHz bandwidth faithfully represented ... except for the addition of quantization noise that came about when we quantized to 16 bits ... PLUS identical 'images' of that bandwidth centered at all integer multiples of 44.1kHz.
Now we must do 2 things : first, convert that digital signal, or string of samples, to analog. That's what this post is about. Much like our ADC discussion, there are fundamentally two techniques that can be used ... a straightforward 'Nyquist' technique, and an oversampled one. We'll first focus on the Nyquist technique, and later come back to oversampling.
A Nyquist DAC is pretty simple really ... you just take each 16 bit word, one at a time, and convert to an "equivalent" analog voltage (or current) value. This is typically done with a set of switches connected to a resistor string or "ladder". The resistor ladder is connected to a main voltage reference (plus ground), and the switches are controlled by the digital word that we're trying to convert. The idea is that the resistor string will create, through simple voltage (or current) division, many attenuated voltage levels from the main voltage reference (one level for each bit is common), and the switches will direct some combination of these voltages (currents) to the output. Using our old 3 bit example, let's say you want to convert the code 011 from a voltage reference that spans 0.1V to 0.9V ... well a resistor ladder and corresponding switches would probably be designed to generate an output voltage of 0.45V (halfway between the 0.4V and 0.5V bit boundaries) for this input code. Really just running our first 3 bit ADC in reverse.
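That 3-bit example in code — the midpoint mapping and the 0.1V-0.9V reference span are straight from the paragraph above; the function name is just mine:

```python
def nyquist_dac_3bit(code, v_lo=0.1, v_hi=0.9):
    """Map a 3-bit code to the midpoint of its quantization interval --
    the job the resistor ladder and switches do via voltage division."""
    lsb = (v_hi - v_lo) / 2 ** 3        # one step: 0.1 V in this example
    return v_lo + (code + 0.5) * lsb

nyquist_dac_3bit(0b011)   # 0.45 V, halfway between 0.4 V and 0.5 V
```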
Simple, right ? Well it is ... but the subtle signal processing is not quite that easy. Because, one must NOW ask the question (since we're now in the analog domain) : when we convert one sample to its analog equivalent, what does the analog output value do UNTIL the next sample arrives? Yes, the next sample will come 1/44.1kHz seconds later, and at that time we'll compute a new output value, but what happens IN BETWEEN these samples? One seemingly reasonable thing to do is to just "hold" the previous analog value constant until the next sample comes along. Quite common really ... it's called a "zero order hold". But let me stop for now ... because all the fun in DACs is about what you do BETWEEN those original samples
OK this post is about "filling in" the signal between the samples.
We've described one way to convert the digital samples to analog samples, and then described one reasonable, but crude, way to "fill in" the analog signal BETWEEN the samples ... namely, just "hold" the last value until the next sample comes along (time domain). This will create a classic "staircase" looking analog signal, which does of course bear some resemblance to the original signal (that got sampled way back), but certainly not identical. So there must be a better way ... how do we know? Well, that Nyquist theorem tells us we can COMPLETELY recover the original analog signal from its samples (ideally).
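The zero-order hold is practically a one-liner ... each sample is simply repeated until the next one arrives, which is exactly where the staircase comes from (the `ratio` argument, how finely we model the time between samples, is my own illustration):

```python
def zero_order_hold(samples, ratio):
    """Hold each sample flat for `ratio` fine-grained time steps:
    the classic staircase output of a ZOH DAC."""
    return [s for s in samples for _ in range(ratio)]

zero_order_hold([0.2, 0.7, -0.1], 4)
# each value held flat for 4 steps before jumping to the next
```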
So let me pose a question : what's the absolute best "process", or signal processing block/function, that we can perform on the analog samples to COMPLETELY & ACCURATELY recover the complete signal from its samples?
In other words, what's the BEST way to fill-in the signal between the samples?
HINT : the answer is ALOT more obvious in one domain (time or frequency) than the other ... and why it's good to have the "parallel" discussion in both domains when describing DACs.
yes the correct answer is the "sinc" time domain function ... but you would never know it by looking in the time domain. Instead, consider the frequency domain ... where we said the sampled signal has the full 20kHz spectral content faithfully represented, plus "images" centered at multiples of the sampling rate. How do you preserve the low frequency info, and eliminate the high frequency images? Why an ideal LOW PASS FILTER of course!
Looks like a brick wall or "box" (gating function) in the frequency domain. Time domain impulse response : sin(x)/x.
And now our Nyquist-rate DAC discussion is complete. Convert those digital samples from the disc to analog samples, then pass the signal thru a very good analog low-pass filter, and signal recovery is complete.
Just remember : "filling-in" the signal between the samples (time domain) is EXACTLY equivalent to filtering the images (frequency domain). Never forget this ...
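Here's a sketch of that ideal sinc reconstruction in code. The value at any continuous time t is a sum over ALL samples, each weighted by a shifted sinc ... in practice the sum must be truncated, so treat this as the textbook ideal, not a buildable filter:

```python
import math

def sinc(x):
    """Normalized sinc: sin(pi*x)/(pi*x), with sinc(0) = 1."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def reconstruct(samples, t, fs):
    """Ideal 'fill-in' between samples: sample n contributes
    samples[n] * sinc(fs*t - n) to the value at continuous time t."""
    return sum(s * sinc(fs * t - n) for n, s in enumerate(samples))
```

At the sample instants t = n/fs, every sinc but one passes through zero, so the original samples are reproduced exactly ... and in between, the sincs "fill in" the unique band-limited signal.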
Now, our Nyquist DAC has a couple difficulties. First is that matching required for 16 bit, or higher, precision in the analog components. Second is this great analog low-pass filter we have to build ... needs to have a pretty steep rolloff, don't want to add much noise (thermal, power supply), low temperature drift, etc. Quite a burden on analog circuitry ...
Of course there's a better way ... next post.
So let's summarize so far. One method is Nyquist DAC ... convert to analog, filter with analog. Sample rate never changes from 44.1kHz.
Second method is to perform some DSP (digital interpolation filter) to "fill-in" some digital samples between the 44.1kHz samples ... thereby increasing the sample rate before you convert to analog. This will greatly relax the order of analog filtering required. Please note that this increase in sample rate, if you're starting from data at 44.1kHz, does NOT magically increase the info you're getting from the disc ... it is NOT the same as having sampled & stored the original signal at a higher sample rate ... it only allows us to digitally filter the images, or equivalently digitally "fill-in" the signal between samples to aid the "smoothing" process.
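A minimal sketch of that digital interpolation step: zero-stuff by 4, then low-pass with an FIR. The 4-tap boxcar in the usage line is a deliberately crude stand-in for a real interpolation filter (with it, the result degenerates to a zero-order hold ... which neatly shows that the filter quality is everything):

```python
def interpolate_4x(samples, taps):
    """Raise the sample rate 4x: insert three zeros after each sample,
    then FIR low-pass to 'fill in' between the originals. No new
    information is added -- only smoothing of what was already there."""
    stuffed = []
    for s in samples:
        stuffed.extend([4.0 * s, 0.0, 0.0, 0.0])  # gain of 4 restores level
    out = []
    for i in range(len(stuffed)):
        acc = 0.0
        for k, t in enumerate(taps):
            if i - k >= 0:
                acc += t * stuffed[i - k]
        out.append(acc)
    return out

interpolate_4x([1.0, 2.0], [0.25] * 4)   # crude boxcar taps -> staircase
```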
Some subtle benefits to oversampling ... One is that the conversion to analog will now be performed at a higher "effective" sample rate ... 4, 8, 32, even 128 times faster than 44.1kHz. And this can help some non-ideal things like jitter, analog thermal noise ... cuz these artifacts are now spread over a wider bandwidth, so the energy from these artifacts that falls into a 20kHz bandwidth is lessened.
Now, as long as we're thinking about "oversampling" in the realm of DACs to use digital filters for image rejection ... lets talk about quantization We started with 16 bit words on the disc ... how much precision do we need to carry through the DSP? And who remembers our ADC discussion, where we were motivated to trade off speed for accuracy ... sample fast enough so that we could use a "smart" 1-bit converter (noisy, yes, but not in a 20kHz bandwidth!) where we could rely on the inherent linearity of single bit conversion?
Well we can use the same principle in DACs IF we increase the sample rate high enough ... typically to 2 or 3 MegaHertz ... we can ultimately quantize the signal all the way down to ONE BIT (this time, with a digital equivalent of a delta-sigma modulator). Why the strong motivation to quantize down to one bit digitally? Because, when we finally do convert to analog, one-bit converters are INHERENTLY LINEAR ... can't be screwed up by component mismatch.
So now we have all the principles we need for 2 types of DACs :
1. Nyquist. Convert samples to analog at 44.1kHz, filter with an analog filter to reject images, or fill-in between samples. 2 problems : Analog matching required in the conversion, high-order analog filter needed (noisy, drifty, phasy, etc.)
2. Oversampled. Use DSP to "interpolate" between samples. Adds no new information, just digitally "fills in" between samples. Greatly relaxes the amount of post-DAC filtering required in the analog domain. In this context, we also have the opportunity to quantize or truncate the precision down to single bit ... again, without disrupting the 20kHz bandwidth ... so that our real DAC will be inherently linear, no need to rely on precision-matched analog components. Many parallels to the oversampled ADCs we discussed ... for example, we needed to DECIMATE in an oversampled ADC to get back down to 44.1kHz, whereas we do the inverse, INTERPOLATE, in a DAC to increase from 44.1kHz.
Of course the second option has become the favorite ... with varying degrees of oversampling, and varying degrees of word length reduction.
As a final note to this crazy long saga, I'll finish with some posts about some (rather subtle) disadvantages to the one-bit approach, and why IMHO we still see healthy competition between multi-bit and single bit ... although oversampling itself is just too damn compelling! Even most multi-bit converters employ oversampling to some extent ... that first 2x buys you a LOT of relief in the analog filter stages
maybe this post can wrap it up
Oversampling, upsampling ... yes, essentially the same thing, only a difference of connotation i would say. Oversampling refers to the digital interpolation associated with DACs, and oftentimes implies that wordlengths will get reduced (maybe all the way to 1 bit) in the process ... but fear not, that's OK because you have wider bandwidth over which to spread the quantization noise power. Upsampling usually does not include a reduction in wordlength precision, and is often implemented independent of DACs. What they have in common : digital filtering of images, or equivalently, digital "smoothing" between samples. NO INFO GETS ADDED, absolutely NOT the same thing as having sampled faster in the first place.
Multi-bit versus single bit, yes the punchline First, oversampling just makes sense in EITHER technology ... the benefits include: digital image filtering versus analog, plus certain artifacts of the final digital-to-analog conversion process, like jitter, are spread over a wider bandwidth and consequently tend to disrupt the audio spectrum less. It's a very good thing ... but remember, digital filters can be built bad just like analog
One-bit pros/cons : The single biggest advantage is simply this : LINEARITY. Component mismatch can NOT cause harmonic distortion of the waveform. NO multi-bit technology can claim this ... it's a real gem. The problem that plagues single-bit converters is this : there's an enormous amount of quantization noise power ... should be, the signal is only represented by one bit ! ... and it's hard to keep it where it belongs. Here's what I mean : quantizing to only one bit (in order to enjoy that linearity advantage) must introduce a LOT of quantization noise ... but the feedback loops or algorithms are designed so that MOST of that noise power is at very high frequencies, well above 20kHz. And the idea in a single bit DAC is simply that this noise can be easily filtered by an analog filter. BUT, even the most mild nonlinearities at those high frequencies in these analog filters can DEMODULATE that noise back down into the audio band ... and it's bad news cuz that noise is very signal dependent, but not in a nice way like images that reduce in amplitude as the signal itself reduces. In fact, it's easy to show (Parseval's relation) that the high frequency quantization noise of a 1-bit converter must increase as the signal level decreases ... VERY bad news if some of it is getting back down to audio band thru circuit non-idealities.
And the real "Achilles heel" of single bit converters is the PRISTINE cleanliness required of the Voltage Reference. All DAC's need voltage references for the conversion, and all DACs have a "multiplicative" nature to the VREF ... meaning that noise on the VREF "multiplies" thru the conversion process. And this simple math operation can WREAK UNHOLY HAVOC on single bit converters ... again, by demodulating all that quantization noise from high freqs down to audio.
So what am I saying? The demands on the analog circuitry associated with single bit converters are STEEP. No mismatch to worry about, but you gotta use very high quality circuits (like wide bandwidth opamps, so they're linear at high freqs) and design yourself a CLEAN voltage reference. And big caps on the VREF can HURT more than they help ... cuz the magnetic loop areas get bigger, so even though electrical interference reduces, magnetic interference gets worse. Trust me when I say this took many long nights to sort out in digital audio.
Multi-bit pros/cons : do not enjoy the fundamental linearity of the single bit converters, but a LOT less high frequency noise to worry about. Meaning, that the post-DAC filtering job is easier to get right.
Bottom line : well you're starting to see a lot of hybrid converters these days, that mix the two technologies to avoid the pitfalls of both. It's my personal opinion that 1-bit converters ultimately hold the most promise ... technology speed continues to increase (at quite a predictable pace), diminishing the difficulties associated with 1-bit converters, while the matching required by multi-bit converters does not improve. But always remember that the circuitry associated with the 1-bit converter (Vref, filters) must be VERY good stuff (low noise, high linearity) or else you've got a pile of junk on your hands.
And finally, remember when we described SACD back on the ADC thread? Well we all know that SACD is FUNDAMENTALLY a single bit technology/representation ... so the essence of the principle is sound indeed ... but the implementation with real-world electronics requires excellent analog engineering.
Well I'm done, except for questions Ask away !
Let's start by describing a fundamental difference between analog & digital signals. We all know that an analog circuit (like an active crossover filter) requires a power supply to operate. But that's essentially it ... once powered, you simply apply an analog input signal and the circuit responds by producing an analog output signal ... simple.
But digital signals are fundamentally different. Not only because the signal is represented by digital 1's and 0's (with great advantages like insensitivity to amplitude noise), but because something else is needed to convey or transport a digital signal ... namely, the "clock" signal.
Now there really are two types of digital systems in this world: one is "event driven", where a digital signal changing state is enough to cause further activity in a sequential chain, and the other is "timebase" driven, where a change of a digital signal is not recognized until the clock, or master timekeeper, registers that change. The world of digital audio (and most other systems for that matter) is in the SECOND category ... in addition to the digital signal itself, there needs to be a "clock" signal or master time keeper that registers any logic state change and fundamentally drives any future activity. This is very apparent even at the so-called gate level ... look at an analog opamp, and you only see analog inputs & outputs (in addition to the power supply). But look at a digital flip-flop, and you see an input, output and CLOCK input. The digital input can change all day long, but the flop will not recognize (or latch, or store) that input until the clock edge comes along.
So all digital circuitry of interest to us requires not only the digital inputs & outputs, but a master time-keeper signal ... the clock ... as well. Now Digital-to-Analog Converters are no different ... they require digital inputs (of course) to create an analog output, but they also require a master clock signal as well. In fact, the clock signal is ULTRA IMPORTANT to the DAC especially ... not only does it control the logic that latches or reads the digital input signal, but it provides the fundamental timebase for the whole conversion process. Each digital word that a DAC receives represents one sample of some analog signal at ONE EXACT POINT IN TIME. That precise moment in time must be faithfully reproduced by the DAC, in order for the analog output signal to be a faithful representation of the original analog event. It has been said (quite accurately) that the RIGHT sample at the WRONG time is in fact, the WRONG sample. I'll go one step further ... you can have the best DAC in the world, but if you don't feed it with a clean, low-jitter clock signal, you've got a handful of junk.
What level of timebase accuracy are we talking about for high fidelity digital-to-analog conversion? Let me tell ya, it's friggin scary ... here's a simple example :
Let's say I have a 1 volt peak signal, which has been converted to a digital signal by a 16 bit ADC. So I have 2**16 quantization levels covering a 2 volt peak-to-peak signal, so each LSB or bit corresponds to about 30 microvolts. Now a full-scale 10kHz signal (for example) will have a maximum slope of 2*pi*10kHz times 1 volt, or 63 millivolts per microsecond. So how long (in time) will it take for that 10kHz signal to span one LSB at the 16-bit level? Simple :
30 microvolts (LSB size) divided by 63 millivolts per microsecond (rate of change of signal) = 480 picoseconds !!
Yes, about half a nanosecond ... and a nanosecond is just one billionth of a second. Bottom line : if your DAC clock doesn't have substantially LESS than 1 nanosecond of jitter (timing noise or inaccuracy), you're kidding yourself if you think you've got accurate 16 bit conversion. Imagine (or rather, calculate) how clean the clock must be for 24 bit conversion !!!!
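For the curious, here's that arithmetic as a tiny script (the function name and structure are mine, just to make the numbers checkable ... the physics is exactly the calculation above):

```python
# Jitter budget for N-bit conversion of a full-scale sine, following the
# numbers in the post (1 V peak signal, 10 kHz). Names are illustrative.
import math

def jitter_budget(bits, f_sig_hz, v_peak=1.0):
    """Time for a full-scale sine to slew one LSB: a rough jitter limit."""
    lsb = 2.0 * v_peak / 2**bits                  # LSB size over a 2*v_peak span
    max_slew = 2.0 * math.pi * f_sig_hz * v_peak  # max slope, volts per second
    return lsb / max_slew                          # seconds

t16 = jitter_budget(16, 10e3)   # ~486 ps, in line with the post's ~480 ps
t24 = jitter_budget(24, 10e3)   # under 2 ps for 24-bit conversion ... scary
print(f"16-bit budget: {t16*1e12:.0f} ps, 24-bit budget: {t24*1e12:.1f} ps")
```

Run it and you'll see the 24-bit budget is 256 times tighter than the 16-bit one ... every extra bit halves the allowable jitter.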
So in digital audio, DACs need VERY clean clock signals. Everybody with me so far? Next post I'll describe the industry standard for where DACs in fact GET their clock signals, and why it's a TERRIBLE approach. Finally in a third post I'll describe some options for improvement, cool?
Now there would appear to be a contradiction, because I said that DAC's need a clock signal ... in fact, a very clean clock signal ... in addition to the digital data itself in order to function. And anybody who's played with outboard DACs knows that you just send the DAC one and only one signal ... the industry standard S/PDIF signal, over optical cable or coax ... and it works.
SO WHERE THE HELL DOES THE DAC GET ITS CLOCK?
Well you might think ... that's simple, there's probably a crystal oscillator (the cleanest clock source known to an electrical engineer) on the DAC board that supplies the DAC its clock ... what's the big fookin deal? Simple answer ... because it won't work. You see, the CD transport has a clock of its own, that's used to spin the CD, read the data, and send the data out to the DAC. And while the crystal oscillator in the transport is some multiple of 44.1kHz, and you would surely use a 44.1kHz clock (or multiple) on your DAC board, those two clocks will NEVER be "in sync" precisely ... and they must be precisely "in sync" in order to communicate the digital data. The two clocks might start off "in sync", but will wander away from each other like two wristwatches would.
So what really happens? In addition to the DAC chip itself, the outboard DAC has another little IC on it called a "digital interface receiver" or DIR. Its job is to receive the incoming digital data stream, and RECOVER a clock from the data stream to send to the DAC. To do this, it uses a well-known circuit called a phase locked loop or PLL, which has its own Voltage Controlled Oscillator (or VCO) embedded in a feedback loop that compares the VCO clock output with the incoming data stream. It essentially controls its internal VCO clock so that it keeps "in sync" with the incoming data stream. And THAT clock is sent to the DAC.
Sounds clever ... and it is. Its main (make that only) virtue is that this technique allows for ONLY A SINGLE communication channel (coax or optical fiber) to send digital audio from one device to another. Only need one "wire" to send digital audio ... the receiving device will have a DIR (with a VCO in a PLL ) to recover the clock ... no need to send the clock SEPARATELY.
Can't argue with the "economy" of the solution. But the problem my friends is this : you end up clocking your precious, ultra-precise DAC with a RECOVERED clock. And no matter how good the PLL in the DIR is, a recovered clock will NEVER be as clean as a crystal oscillator clock. The recovered clock will have jitter ... in fact, a bad form called "data dependent" jitter (cuz the clock was recovered from the data in the first place), and while you can filter it to some extent with loop filter components in the PLL, it's still just a fundamentally bad way to communicate digital audio from a transport to a DAC ... cheap, yes ... but high performance, no.
Should point out that there have been many fine attempts to live within the constraints of this system, and do the best job possible to filter this jitter with low bandwidth PLL's, or cascades of PLL's ... Zapco's little outboard DAC comes to mind
But there are alternatives ... and that's my next post
OK alternatives ... some are "compatible" with S/PDIF, some are not.
Adam mentioned one already : I2S (eye-squared-ess). A format different from S/PDIF (even enhanced by Ultra-Analog, I2Se) whereby digital audio data and clock are sent separately from device to device. Adam knows about it cuz you'll find it in Perpetual Technologies devices A real improvement ... no need to "recover" a clock from the data stream.
This is not bad at all, but still not as good as possible. What you really want to do, is to PUT THE CRYSTAL OSCILLATOR RIGHT NEXT TO THE DAC. The most precise timing source should be located right at the device that cares most for precision timing ... the DAC. It's that simple ... and anything else is sub-optimal. But how can you do this? I've already said that you can't have a crystal oscillator right at the DAC, and one at the transport, and have them communicate. Well here's the options :
1. A big memory buffer on the outboard DAC board. The outboard DAC will still have a DIR ... it will read the data sent by the transport, and store that data in a big memory buffer (RAM) on the DAC board. Then, slightly delayed in time, the DAC and its local oscillator can read the data out of the memory on ITS OWN TIMEBASE. The memory buffer serves to "isolate" or "decouple" the two clocks that are not "in sync". I don't know if anyone has actually commercialized this approach ... but it could be compatible (almost) with S/PDIF. It would take a big frikkin RAM though ...
2. Put the crystal oscillator right next to the DAC, WHERE IT BELONGS, and send the clock signal BACK to the transport. The transport is then "slaved" to the DAC, instead of the other way around. And of course the transport sends the data forward to the DAC. Not compatible with S/PDIF ... requires TWO signals between the DAC and transport (oh the horror): one clock signal sent FROM the DAC TO the transport, one data signal from the transport to the DAC. My beloved Wadia transport/processor system (in the home listening room) does precisely this.
And so does a little company called LC Audio ... shown to me by one Jason Winslow (thanks dude!). They offer an aftermarket modification for CD players, and separate transport/DAC systems, that will implement this option. Caution : not a trivial modification by any means. But it's good engineering my friends.
3. Asynchronous sample rate conversion. Relatively new technology, fully compatible with S/PDIF. Here's how it works : it's essentially a much more complex DIR, which reads the incoming S/PDIF data just like any other DIR. But it also accepts another, completely "asynchronous" (meaning not "in sync") clock from the crystal oscillator that resides right next to your DAC. So the incoming data comes from one timebase (the transport), while the outgoing data is timed from a completely separate timebase (the DAC). And this nifty little device will actually do some fancy DSP (OK, not really fancy, just plain old interpolation) to actually CALCULATE the proper audio samples according to the desired output rate. Good stuff, this ... it allows communication between two "out-of-sync" timebases by actually calculating the correct audio samples. Yes, it WORKS. And you can find it in Bel Canto's latest DAC, and you can get a Crystal Semiconductor or Analog Devices Evaluation board with their ASYNC SRC's on them
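To make the interpolation idea concrete, here's a toy sketch of my own ... NOT how the Crystal or Analog Devices parts actually do it (real ASRC chips use long polyphase filters), just plain linear interpolation to show how output samples get CALCULATED on a completely independent output timebase:

```python
# Toy asynchronous sample-rate conversion: input samples arrive on one
# timebase (44.1 kHz), output samples are computed on another (48 kHz).
# Linear interpolation between neighbors stands in for the real DSP.
import math

fs_in, fs_out = 44100.0, 48000.0           # two unrelated clocks
f_sig = 1000.0                              # 1 kHz test tone
x = [math.sin(2*math.pi*f_sig*n/fs_in) for n in range(2000)]

y = []
for m in range(2000):
    t = m * fs_in / fs_out                  # output instant, in input-sample units
    i = int(t)
    if i + 1 >= len(x):                     # ran out of input data
        break
    frac = t - i
    y.append(x[i]*(1.0 - frac) + x[i+1]*frac)   # interpolate between neighbors

# y now approximates the same 1 kHz tone, resampled to 48 kHz
err = max(abs(y[m] - math.sin(2*math.pi*f_sig*m/fs_out)) for m in range(len(y)))
print(f"worst-case interpolation error: {err:.4f}")
```

Even this crude two-point interpolation lands within a fraction of a percent on a 1 kHz tone ... the real parts do far better by filtering over many more samples.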
OK I'm tired All this clocking stuff really pertains to ANY type of DAC ... although some are more sensitive to timing jitter than others.
I will start a new thread one day soon to discuss different types of DACs : one-bit, delta-sigma, multi-bit ... as well as oversampling & upsampling, what it means and more importantly what it doesn't mean.
But for now remember ... a DAC is only as good as the clock you feed it !!! So from now on, everyone is officially prohibited from talking about DACs ... one bit, 20 bit, 24 bit, etc ... unless they mention the clocking scheme in the same post Goodnite my friends
Sonic effects of jitter ...
Couple things to consider. First, increased jitter will not raise a DAC's noise level (defining noise to be any artifacts present in the absence of signal). Why? Well let's say you put all zeros into the DAC for a signal, expecting to get analog "zero" out. Well it doesn't really matter if those "zeros" are jittered in time ... the net result is still zero ... so no noise "added" by jitter.
In fact, the same statement is true for any DC signal the DAC is trying to reproduce ... DC samples just don't care if there's timing jitter ... the output is still "DC". If all the samples are the same (this is the DC case), then a timing error cannot be reproducing a "wrong sample" ... does this make sense?
Of course DC is not in the audible bandwidth, but from this extreme case we can conclude that LOW FREQUENCY signals are LESS sensitive to timing jitter. They just don't change rapidly enough, so timing errors do not easily translate into voltage errors (that conversion happens through the slope, or rate of change, of the signal).
So the conclusion is, no ADDED noise with lots of jitter, and LOW FREQUENCY (bass) signals are less affected. What does happen with lots of jitter is this : a distortion or "smearing" of the higher frequency signals, along with the "defocusing" or blurring of the soundstage cues that rely on high frequency signals. And in the worst case, probably some audible "harshness" in the higher registers as well.
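A quick simulation makes the frequency dependence obvious. This is just an illustrative model (the 1 ns RMS jitter figure is an arbitrary choice of mine) ... evaluate the same sine at slightly wrong instants and measure the voltage error:

```python
# Sketch: why jitter hurts high frequencies more than bass. The voltage
# error from a timing error is (roughly) slope times jitter, and slope
# scales with frequency.
import math, random

random.seed(0)
fs = 44100.0
sigma = 1e-9                                 # 1 ns RMS jitter (illustrative)
jit = [random.gauss(0.0, sigma) for _ in range(4096)]

def rms_jitter_error(f_sig):
    errs = []
    for n, dt in enumerate(jit):
        t = n / fs
        # sample taken at the jittered instant vs. the correct instant
        errs.append(math.sin(2*math.pi*f_sig*(t + dt)) - math.sin(2*math.pi*f_sig*t))
    return math.sqrt(sum(e*e for e in errs) / len(errs))

low, high = rms_jitter_error(100.0), rms_jitter_error(10000.0)
print(f"RMS error at 100 Hz: {low:.2e}, at 10 kHz: {high:.2e}")
```

Same clock, same jitter ... yet the error at 10kHz comes out roughly a hundred times worse than at 100Hz, exactly because the error rides on the signal's slew rate.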
Ok I'm not real happy with my last post so let me try a summary, and a bit more info :
1. From the perspective of the WHOLE recording/playback chain, the absolute best way to improve resolution well below the LSB (and enjoy some other benefits as well) is to dither the signal right before quantization. Of course, this must be done by the CD makers in the recording or mastering process.
2. Equally 'of course', the engineer designing a CD processor for CD playback has no control over the mastering process. So, if the designer has knowledge that most recordings are NOT well dithered, he/she may be motivated to search for a technique to improve the situation. But to the best of my knowledge, any attempts to uncover true INFORMATION below the LSB of an undithered recording are guesswork. May be sonically pleasing, however, particularly at a time when most recordings were not well dithered. So I suspect there was a time when a process like ALPHA made sense ... but I believe that time is past.
3. For properly dithered recordings, any such algorithms after the quantization (like during CD playback) are completely unnecessary, and if they add anything, it can ONLY be noise.
4. The guesswork of which I speak in my second point above is in NO WAY comparable to filling in the musical information BETWEEN 44.1kHz samples in digital audio. Digital audio by nature is a sampled-data process, and at some point during analog playback the information between those samples must be "filled-in". But thanks to Mr. Nyquist, it can be mathematically proven that a bandlimited analog signal can be COMPLETELY recovered from its samples ... hence the process to fill in audio between samples, simply called interpolation or smoothing, is not guesswork, but follows a strict mathematical model. There's some limitations in practice to be sure ... but nothing comparable to the Nyquist Theorem (that's a strong word, mi amigos) exists for "filling in" information below the LSB of an un-dithered recording.
Ok I think I'm done for now ... does this make sense?
part 2
Alright, lots to discuss tonite, for the interested reader.
And we'll do things a bit different in this thread. It will be an "interactive" thread, with a couple homework assignments But the grades don't count
We shall start ... at the beginning (of course) ... with the process of analog-to-digital conversion (ADC). Fear not, gentle reader, DACs will be discussed in due course as well.
We all recognize an analog signal when we hear one, or (more visually) see one on an oscilloscope. It's a continually-varying waveform ... we'll use voltage as an example, but could be current, air pressure, etc. By continually varying, we mean it takes on a different value (maybe only slightly different) at EACH POINT in time ... no matter how finely, or broadly, we observe the time axis. It's "always there", continually changing (except the extreme case of DC, of course, which is constant in time).
Now the first thing to understand, or accept as simply true for now, is the so-called Nyquist Theorem. This landmark piece of work simply states that : a bandlimited analog signal (meaning a signal with no frequency content above a certain point, like 20kHz) can be COMPLETELY recovered from only SAMPLES of the signal, providing you sample fast enough. What's fast enough? At least twice the rate of the highest frequency. Simple, concise, and undeniably true.
This theorem really is the basis for digital audio ... period. Because it says that instead of communicating, or storing, the entire analog waveform ... which is exactly what a magnetic tape or phonograph record does ... we can take "samples" of the analog waveform at DISCRETE points in time, and store only these samples ... because THAT'S ALL THE INFORMATION WE NEED TO COMPLETELY CHARACTERIZE, AND ULTIMATELY RECOVER LATER, THE ENTIRE SIGNAL.
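For the skeptical, here's a small sketch of that recovery in action ... sinc interpolation of a 5kHz tone sampled at 44.1kHz. (My own toy: a finite sum over 400 stored samples, so it's only approximate near the edges, but it nails the signal in the middle, at a point BETWEEN the samples.)

```python
# Nyquist recovery sketch: rebuild a bandlimited signal from its SAMPLES
# alone, via (truncated) sinc interpolation, the ideal "fill in between
# samples" operation.
import math

fs = 44100.0
f_sig = 5000.0                               # well below fs/2 = 22.05 kHz
N = 400
x = [math.sin(2*math.pi*f_sig*n/fs) for n in range(N)]   # the stored samples

def reconstruct(t):
    """Bandlimited interpolation: a sinc kernel centered on every sample."""
    total = 0.0
    for n, xn in enumerate(x):
        u = t*fs - n
        total += xn * (1.0 if u == 0 else math.sin(math.pi*u)/(math.pi*u))
    return total

# Evaluate BETWEEN stored samples, near the middle (away from edge effects)
t = 200.5 / fs
print(reconstruct(t), math.sin(2*math.pi*f_sig*t))   # nearly identical
```

The reconstructed value between the samples matches the original analog waveform to a few parts in a thousand, and the agreement only improves with more samples ... the theorem delivers.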
How fast do we sample? Well with the (somewhat controversial) knowledge that the limit of human hearing is 20kHz, a "sample rate" of 44.1kHz was chosen for the CD standard ... remember, we have to be (at least) twice the highest frequency of interest.
What if the analog waveform in fact has some spectral content ABOVE half the sampling frequency (22.05kHz, to be exact)? Ouch ... that's bad news, because a very bad thing called "aliasing" will happen when that signal is sampled at a rate of 44,100 times a second. Topic for later ... right now, just recognize that aliasing must be avoided. So what's done in practice is this : right before the analog waveform is sampled at DISCRETE points in time, 44,100 times a second, the analog waveform is passed through a low-pass filter called (cleverly) an "anti-alias" filter. Sharp rolloff, bandwidth about 20kHz. More on this later ...
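Here's a quick demonstration of why aliasing is so nasty ... a 30kHz tone and a 14.1kHz tone (44.1 minus 30) produce literally the SAME samples at 44.1kHz, so once sampled, you can never tell them apart:

```python
# Aliasing sketch: sin(2*pi*30000*n/fs) equals -sin(2*pi*14100*n/fs) at
# every sample instant when fs = 44100, since 30000 = 44100 - 14100.
import math

fs = 44100.0
samples_30k = [ math.sin(2*math.pi*30000*n/fs) for n in range(100)]
samples_14k = [-math.sin(2*math.pi*14100*n/fs) for n in range(100)]

worst = max(abs(a - b) for a, b in zip(samples_30k, samples_14k))
print(f"largest difference between the two sample sets: {worst:.2e}")
```

The difference is zero to within floating-point round-off ... which is exactly why the ultrasonic junk has to be filtered out BEFORE sampling.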
So to summarize "post the first" in this long, interactive thread, the ADC process begins by passing the analog voltage waveform through a 20kHz low-pass, anti-alias filter. Then, the analog signal is SAMPLED at discrete points in time ... 44,100 times a second.
But even after the signal is sampled, it's still "kinda analog", because one sample will be 0.731 volts, the next sample maybe 0.274 volts, etc ... still sounds "analog" rather than "digital" so the next post will deal with what happens next : a process called QUANTIZATION. Everybody with me? Please humor the lecturer this evening, and give me some feedback before the next post
So far, we have only 'sampled' the analog waveform. But we rest assured in the knowledge that, thanks to Mr. Nyquist, we have not yet introduced ANY ERRORS ... because later on, we can completely recover the signal from its samples. But those samples are still kinda analog ...
So now we do the true analog-to-digital conversion process, and convert these 'kinda analog' voltage samples to DIGITAL WORDS (yes, we really do call 'em words). We will see that this process ... called quantization ... does in fact introduce errors (and believe me when I tell you, a person could EASILY devote an entire career to the study of quantization).
What better way than to proceed with an example? And we'll ask for our very first homework assignment shortly Let's say we have an analog voltage waveform that spans the range from 0.10 volts to 0.90 volts ... can take on ANY value between these two limits (chosen for simplicity). Furthermore, let's say we want to build ourselves a 3-bit ADC to "digitize" this waveform (OK, so 3-bits doesn't sound very high-end ... but I caution you to not judge this book by its cover ... plus it's a lot easier as an example).
So here's how the converter will work : if the analog voltage sample (let's say we've done the sampling described above) is BETWEEN 0.10 and 0.20 volts, we give to that "analog" sample the "digital" word : 000 . If the analog voltage sample is BETWEEN 0.20 and 0.30 volts, we give the analog sample the new digital value of : 001 . So let's construct the following table :
Analog voltage sample range      Corresponding digital word
0.10 --> 0.20                    000
0.20 --> 0.30                    001
0.30 --> 0.40                    010
0.40 --> 0.50                    011
0.50 --> 0.60                    100
0.60 --> 0.70                    101
0.70 --> 0.80                    110
0.80 --> 0.90                    111
So here's an example : analog voltage sample of 0.37 would be assigned a digital value of : 010. But please note, that an analog voltage sample of 0.34 would be assigned the SAME digital value ... so here, for the first time, we introduce an ERROR ... called (cleverly) QUANTIZATION ERROR, or (somewhat incorrectly, but vastly used), QUANTIZATION NOISE.
In the case of my example of 0.37, the actual quantization error would be 0.02 ... because we "expect" the digital word 010 to correspond precisely with the analog voltage halfway in its region, or 0.35 (and 0.37 - 0.35 = 0.02).
So why would anybody put a precious analog signal through such a "noisy" process? Because, once the signal is digitized into these digital words, it is (virtually) immune to further errors in storage and/or communication (this is the essence of why we like digital). And if we use ENOUGH BITS to quantize the signal (more than 3), the quantization noise will be very small indeed ... much lower than other forms of noise that plague analog storage & communication.
How do I build such a converter out of real circuits? Well, the type of analog-to-digital conversion described here is pretty straightforward to build. I might start with a 1 volt voltage reference, and use a resistor-divider chain to establish all the "boundary" voltages (0.1, 0.2, etc), creating a series, or bank, or array of voltages. Then I use a series of comparators, all of which have one input tied to the analog signal, and the other input tied to one of each of the voltage references. The comparator outputs will tell me what digital word to assign to the analog input. Make sense? The whole key, however, for this process to work is this : I must VERY ACCURATELY establish ALL of these "boundary" voltages, to compare my signal against.
And if you think that this technique, for 16-bit conversion, means I must establish voltage references that are PRECISE to within ONE PART in 2**16=65,536 ... that's where you would be RIGHT. A VERY difficult challenge ... more on this later.
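Here's a little behavioral model of that comparator-bank (flash) converter ... the threshold list stands in for the resistor-string taps, and the function name is mine, not a real part:

```python
# Minimal model of the flash (comparator-bank) 3-bit ADC described above.
# A resistor string sets the boundary voltages; each comparator asks
# "is the input above this boundary?", and we count the yes's.
thresholds = [0.1 + 0.1*k for k in range(1, 8)]    # boundaries 0.2 .. 0.8

def flash_adc_3bit(v):
    """Return (digital word, quantization error) for 0.1 <= v <= 0.9."""
    code = sum(1 for t in thresholds if v >= t)    # comparator outputs, summed
    word = format(code, "03b")                     # 3-bit binary word
    midpoint = 0.15 + 0.1*code                     # center of the code's bin
    return word, v - midpoint

print(flash_adc_3bit(0.37))   # ('010', ~0.02) -- matches the post's example
print(flash_adc_3bit(0.34))   # ('010', ~-0.01) -- SAME word, different error
```

Note that 0.37 and 0.34 really do land on the same word, 010 ... that collapse of many analog values onto one code IS the quantization error.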
Now for the homework! This one is simple. Use the 3 bit converter I built, pick a voltage somewhere in the range of 0.1 to 0.9, and report back the corresponding digital word and its quantization error. I know it's simple ... but humor me, the next assignment will be a lot more fun
I won't proceed until I hear 3 answers (hey, it's my thread)
OK stay focused (yep I have the Wadia, cuz they do clocking right, digital volume control right ... no need for preamp)
Alright ... so we have built ourselves a 3-bit ADC, and uncovered something called quantization noise in the process. Let me make a few points :
1. Obviously, this concept is extendable to 16 bits.
2. These 16-bit words are EXACTLY the audio signal information stored on a CD (plus some error correcting codes, control bits, etc.).
3. To build an ADC as I've described gets more and more difficult, as the number of bits increases ... because the PRECISION needed in the analog circuitry (the resistor string that sets up the voltages, and the comparators) gets real tough, real fast.
But there's another way ... And I'll introduce it by way of example, and ask for a second homework assignment. The algorithm I'll describe is pretty simple, but tedious. You can do your homework by hand, or write a little program if you're so inclined.
Here's the deal. Let's say you only have ONE bit to quantize a signal ... and it takes on a value of zero (0) or one (1) ... that's it. And we have a sampled analog signal just like before, values between 0.10 and 0.90. Here's the algorithm I want you to "run" :
Pick a starting value for the digital bit : let's say 0
Pick a starting value for a variable we'll use called SUM : let's say 0.00 (it's an "analog" variable, with decimal value like the input signal).
1. Subtract the digital bit value (0 or 1) from the analog input sample. We'll call this difference DIFF.
2. Add DIFF to the current value of SUM ... this will be our "running summation".
3. If SUM is greater than 0.5, the next digital bit will be one (1). If SUM is less than 0.5, the next digital bit will be zero (0).
Then go back to step one, with the new digital bit value you've found. Simple Two things to remember :
Step 1. is a SUBTRACTION, and never clear the SUM value (although if an error finds its way into SUM, it won't matter in the long run). Keep track of the "string" of One's and Zero's you generate ... because I'll ask you to AVERAGE them after you've run this algorithm 10, 20 ... or maybe 100 cycles. Here's my example :
Analog input sample = 0.37
Bit = 0
Sum = 0.00
1. Diff = 0.37 - 0 = 0.37
2. Sum = 0.00 + 0.37 = 0.37
3. Sum is less than 0.5, so Bit = 0
1. Diff = 0.37 - 0 = 0.37
2. Sum = 0.37 + 0.37 = 0.74
3. Sum is greater than 0.5, so Bit = 1
1. Diff = 0.37 - 1 = -0.63
2. Sum = 0.74 - 0.63 = 0.11
3. Sum is less than 0.5, so Bit = 0
And so on ... everybody got it?
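If you'd rather let a machine do the tedious part, the DIFF/SUM loop transcribes directly into a few lines (same three steps, same starting values as my worked example):

```python
# The DIFF / SUM algorithm from the post, transcribed literally.
# bit and sum start at 0, exactly as in the worked example.
def delta_sigma(x, n_cycles):
    """Run the one-bit loop on a fixed input x; return the bitstream."""
    bit, s = 0, 0.0
    bits = []
    for _ in range(n_cycles):
        diff = x - bit                   # step 1: subtract previous bit
        s += diff                        # step 2: running summation
        bit = 1 if s > 0.5 else 0        # step 3: compare SUM against 0.5
        bits.append(bit)
    return bits

stream = delta_sigma(0.37, 100)
print(stream[:10])                       # begins 0, 1, 0, ... as in the example
print(sum(stream) / len(stream))         # the average hovers right around 0.37
```

Run it for 100 cycles and the average of the bits lands within about 0.01 of the 0.37 input ... run it longer and it creeps closer still. That's the punchline of the homework.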
So pick an analog input sample like before, run this little algorithm by hand or with a little program, and report back what's happening to the "average" value of the bitstream you generate (meaning, for example, 25 "ones" out of 100 bits will have an average of 0.25). Take your time ... 3 answers will trigger a new post
Run the algorithm ... did I explain it well? Follow my example? Can do it by hand, to create a "string" of bits maybe 20 bits long ...
take the average of the one's & zero's. Then run it thru maybe 10 more cycles, take the average over all 30 bits and see what the average is doing ... trust me, it's worth the effort, I promise ... or your money back
I REALLY want this to be clear ... don't be afraid to ask any questions at all, I'll answer till my bedtime or until the moon brings out the worst in me ...
1. In our first example, we described one way to digitize an analog signal with our 3-bit ADC. We showed how to create a multi-bit digital word that has a direct one-to-one correspondence with each analog "sample", which would usually be taken at a "Nyquist Rate" of at least 2 times the highest frequency of interest. For CD, these words are actually 16 bits long, and the sampling frequency is 44.1kHz. And it takes some very precise analog electronics to build such a converter.
2. Our second example shows a different way to digitize an analog signal ... this time, using only a measly one-bit digital "word" ... although better to think about a 'string of bits', oftentimes called a "bitstream". Now mach_y has certainly uncovered something very interesting ... and very, very powerful : Even a measly single bit bitstream can represent an analog sample with EXTREMELY high precision, providing that : we use a whole bunch of one's and zero's, and we create them with a smart little feedback loop like the algorithm I described.
Now about that "whole bunch" of bits needed ... and the question about digitizing samples that aren't just "DC", but more interesting analog waveforms. For those of you that went through this exercise, it's probably easy to believe that the bitstream generated would still accurately "capture" or represent a CHANGING analog waveform, providing that the RATE at which you generate the 1's and 0's is MUCH, MUCH faster than the rate of change of the analog input. For example, let's say that after you generate maybe a thousand 1's and 0's to represent an analog input of 0.37, the analog input changes to 0.39. I've still had a thousand digital "samples" to represent 0.37 before my input even changed, so a "slowly" changing analog input probably isn't going to upset my little converter.
And here we introduce the concept of OVERSAMPLING. A bitstream generated by our simple algorithm will still be a faithful, in fact high precision representation of a CHANGING analog input, providing that the bits are generated at a rate MUCH higher than the previously mentioned Nyquist rate. How high for digital audio? Sample rates of 3 Megahertz are quite common. One way to think about the situation is this : As mach_y discovered, the quantization "error" associated with this technique can in fact be very, very small ... his averages were VERY close to the input analog value ... in fact, the LONGER he runs the algorithm, the SMALLER the error would be ... so we uncover a wonderful principle of data conversion : you can in effect TRADE speed for accuracy ... I can use very low precision digital "samples" (can't get much lower than one bit!) to characterize a signal, with almost ARBITRARILY HIGH precision, providing I sample FAST enough ... and use some clever algorithms in the process.
Now there's another advantage to oversampling ... that pertains to DACs as well as ADCs. It has to do with using digital filters instead of analog filters ... but we'll discuss that later.
And nope, nothing to do with MP3 compression or decompression You guys always jump to conclusions This will be a slow, deliberate process ... feel free to ask more questions though.
Simple take-away : even a very LOW precision digital signal (one bit, in fact!) can represent an analog waveform with (arbitrarily) HIGH precision ... or low quantization noise ... providing that : the analog waveform is "low bandwidth" or slowly changing, compared to the rate or bandwidth of the bitstream.
Remember that mach_y's resolution of the analog input was MUCH higher (meaning lower quantization error) than our 3-bit ADC could ever hope to be (unless we introduce our 3-bit ADC to a little technique known as dither ... but that's a whole 'nother topic! ... told ya quantization was fascinating ...)
you guys that have patiently stuck with this thread, even done some homework ... here's a few punchlines to reward you :
CONGRATULATIONS! You have successfully built what we call :
An Oversampled, One-bit, Delta-Sigma Analog to Digital Converter ... and even followed it with a simple Decimation Filter.
Remember those two dummy variables, DIFF & SUM? Delta, Sigma ...
Oversampled we explained, or really just begun ... and of course One-Bit should be pretty obvious by now ...
Decimation Filter? That averaging I asked you to do? Yes, averaging is simple digital low-pass filter ... Decimation we'll return to "anon" ...
And oh yeah ... that one-bit bitstream you generated? That's EXACTLY the digital format stored on the new SACD ... yes, the new Super Audio CD.
So there you have it, for now. Our first example showed the multi-bit audio words stored on regular, old CD's. And the second example demonstrates the format stored on new SACD's.
And guess what ... we can convert from one format to another That's what we'll talk about next ... because even long before SACD's came along, our second example became the preferred method for ultimately generating the 16-bit words stored on regular old CD's.
And my final teaser, this whole process can be "run in reverse" (so to speak) to build digital audio DACs
Still having fun ??? honestly hope so ...
Alright guys, let's wrap this section up for now with a comparison, of sorts, between the two ways we've discussed to lay down 16-bit audio samples at 44.1kHz on a CD :
1. The so-called Nyquist converter. Analog signal first passes through a sharp rolloff, analog anti-alias filter, bandwidth right about 20kHz. From there the signal is sampled at 44.1kHz, and finally digitized by "comparing" the signal to an array of precise voltage references.
Almost never done anymore because of two fundamental, very difficult analog problems. First, the sharp rolloff analog anti-alias filter ... component drift, phase problems like crazy, etc. Second, the precision needed in the voltage references ... mismatch causes bad converter distortion. It's an ugly picture indeed ...
2. The oversampled, delta-sigma approach. We'll walk through the signal path, but first remember a couple of things : one, the analog signal (with a 20kHz bandwidth of interest) is oversampled at (typically) 3MHz ... which means the ANALOG anti-alias filter can be MUCH more gradual ... maybe a gentle rolloff that only starts at 40 or even 80kHz (we're preventing aliasing, but the sample rate is very high ... means a more gradual analog filter will work). Two, no precision or matching needed in the actual conversion. The real benefit of a one-bit converter is this : mismatch can only cause a harmless gain or offset error ... one bit means only two values or "points" on a curve, and two points define a straight line, pure and simple (it's deviations from a straight-line transfer function that cause distortion).
So the analog signal first passes through a GRADUAL analog anti-alias filter, which just needs to provide healthy attenuation by 3MHz. Next, the signal is sampled at maybe 3MHz (or higher), and then digitized by an algorithm very similar to the one you guys built. In short, yes it's only one-bit ... and hence very "noisy". But the algorithm is designed to make sure that the noise is "shaped" in frequency so that the noise is VERY LOW in the low frequency band of interest (20kHz). This is why even simple averaging, like what you guys did, "reveals" the high resolution possible ... averaging is a form of low-pass filtering, which in this case "removed" a lot of the one-bit quantization noise at high frequencies.
In fact, the one-bit signal is then sent to a DIGITAL low-pass filter, not unlike the averaging process you guys did. This is interesting ... it's really this digital filter that : removes most of that one-bit quantization noise, "revealing" higher precision digital words, and ... get this ... provides the SHARP anti-aliasing needed before the final step of "decimation" ... which is simply lowering the sampling rate (a sampling process itself) back down to 44.1kHz. Yes, a digital filter provides the real sharp anti-aliasing needed for 44.1kHz samples !!

Why do we like a digital anti-alias filter instead of analog? No component drift, no chance of power supply noise creeping in, and finally ... it can be implemented as an FIR with perfectly linear phase. By the way, an FIR filter is really NOTHING MORE than a "weighted" averaging filter, that takes a running average of many (much more than 64) one-bit samples to produce a higher precision digital word. And as a nice bonus, it turns out that the computation required for the FIR is not bad at all in a decimation environment ... since there's no feedback in this filter structure, you never need to calculate the outputs you're going to ignore after you downsample to 44.1kHz.

So that's it : the one-bit bitstream is digitally filtered by a long FIR lowpass filter, generating higher precision digital words ... and we only need about every 64th word (after filtering) to supply the 16 bit words at 44.1kHz.
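Putting the pieces together in one sketch of my own (a plain 64-sample boxcar average stands in for the long FIR ... a real decimation filter does far better, so treat the numbers as illustrative only):

```python
# Whole-chain sketch: one-bit delta-sigma modulation of a slowly varying
# input at a 64x oversampled rate, then a crude decimation filter (boxcar
# average of each 64-bit block) and 64:1 downsampling.
import math

OSR = 64                                   # oversampling ratio
n_out = 200                                # decimated output words to produce
bit, s = 0, 0.0                            # modulator state, as in the homework
decimated, targets = [], []

for m in range(n_out):
    block_sum = 0
    block_target = 0.0
    for k in range(OSR):
        n = m*OSR + k
        x = 0.5 + 0.3*math.sin(2*math.pi*n/4096)   # slowly varying input
        diff = x - bit                     # the DIFF / SUM loop again
        s += diff
        bit = 1 if s > 0.5 else 0
        block_sum += bit
        block_target += x
    decimated.append(block_sum / OSR)      # one output word per 64 bits
    targets.append(block_target / OSR)     # what the word "should" be

worst = max(abs(d - t) for d, t in zip(decimated, targets))
print(f"worst error of the decimated words: {worst:.3f}")
```

Even with this crude boxcar, every decimated word lands within a few hundredths of the true signal ... a long, properly designed FIR pushes that error down to the 16-bit level.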
Needless to say, ever since about the late 80's, the second option became the preferred, highest performance and most cost effective way to record music according to the CD Audio standard. Then in the late 90's (or so), when we figured out how to stuff more data onto CD's, the whole filtering/decimation step was eliminated and the one-bit bitstream was stored on SACD. Very similar resolution in the 20kHz band to CD, but no longer sharply bandlimited to 20kHz (cuz that was provided by the decimation filter, needed for anti-alias with 44.1kHz sampling). So yes, SACD has higher bandwidth than CD, but the resolution in those "ultrasonic" bands decreases rapidly (cuz the quantization noise is increasing rapidly).
Well constant readers, I think that about does it for the ADC side of the story. If interest remains high, we'll tackle DACs next, coooool?
part 3
Alright patient readers, long time overdue. Let's start a discussion about DACs ... theory & operation. I'll try to explain basic principles & variations, maybe dispel a few myths along the way, cool? Haven't (yet) figured out how to make this thread interactive like the last one ... which should be a relief to most of you
We're gonna go one step at a time. The ONLY way to really do DACs justice is to run a sort-of "parallel" discussion, where we describe some principles in BOTH the time domain & frequency domain. These domains are NOT independent by the way (there goes one myth down the drain right off the bat!), it's just that some concepts are more easily explained in one domain than the other.
Let's start with the data stored on a CD. The audio "samples" are 16 bit digital words, reflecting a sampling rate of 44.1kHz. In very broad strokes, the digital-to-analog conversion process must perform two fundamental functions : one, convert these digital words to analog values (voltage or current), and two, provide a filtering function ... about which we'll talk at some length.
In this first post, I want to carefully describe, in both time & frequency domains, more about what that digital audio data on the CD looks like. If you read that previous post, you already have a sense about the "time" domain picture. We have 16 bit digital words, created through a process of sampling & quantization (few different techniques to arrive at this goal, with different tradeoffs in real world implementations), which represent the "amplitude" (voltage, for example) of the music signal at DISCRETE points in time. And we are comforted in the knowledge that somehow the analog signal can be COMPLETELY recovered from these samples alone, if we sampled fast enough in the first place (fast enough being twice the bandwidth of the original signal) ... 44.1kHz in this case. In fact, the whole DAC "process" describes this signal recovery. So an analog signal (music), varying CONTINUALLY in time, is sampled DISCRETELY in time, then quantized to 16 digital bits. The quantization process adds noise ... no doubt, no escaping it ... although there's room for cleverness here ... whereas the sampling process itself only forces the signal to be bandlimited to less than half the sampling rate.
Now, what does the CD data look like in the frequency domain? What is the full "spectral content" of this string of 16 bit digital words, sampling rate of 44.1kHz? Actually, it's quite simple ... and if you grasp this concept, DACs are simple to understand. The frequency domain picture of this "discrete time sequence" is simply this: the full 20kHz audio band is present & accounted for in the frequency domain, along with exact "replicas" of the 20kHz information spaced every 44.1kHz. That's it
In other words, whatever frequency content you have from DC to 20kHz is duplicated around 44.1kHz, 88.2kHz, 132.3kHz, ... on & on forever. This "spectral replication" is an artifact of the original sampling process.
By the way, I've taken a slight liberty in glossing over the fact that the original music actually spans -20kHz to +20kHz ... yes, negative frequencies exist in signal processing land! So in fact, the -20kHz to +20kHz content is present, centered at 0Hz ... but an identical "replica" is centered at 44.1kHz, 88.2kHz, etc. ALL DISCRETE TIME SEQUENCES have a periodically repeating spectrum like this.
But don't worry about those negative frequencies ... we don't have to be precise enough here to consider them. The only thing we need to know, is that in the frequency domain, the discrete-time or sampled sequence has the 20kHz info, but it is also REPEATED every 44.1kHz.
We call these periodic duplications ... IMAGES.
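A little NumPy sketch (my own illustration, not from the thread; the 5 kHz tone and 1024-sample window are arbitrary) makes these images concrete: evaluating the discrete-time Fourier transform straight from its definition, any sampled sequence has exactly the same magnitude at f, f + 44.1kHz, f + 88.2kHz, ...

```python
import numpy as np

fs = 44_100                                 # CD sampling rate
n = np.arange(1024)
x = np.sin(2 * np.pi * 5_000 * n / fs)      # a 5 kHz tone, sampled

def dtft_mag(x, f, fs):
    """Magnitude of the discrete-time Fourier transform at frequency f (Hz)."""
    k = np.arange(len(x))
    return abs(np.sum(x * np.exp(-2j * np.pi * f * k / fs)))

# The spectrum of ANY discrete-time sequence repeats every fs: the 5 kHz
# content shows up again at 5 kHz + 44.1 kHz, + 88.2 kHz, and so on forever.
print(dtft_mag(x, 5_000, fs))               # strong component at 5 kHz
print(dtft_mag(x, 5_000 + fs, fs))          # identical "image" at 49.1 kHz
print(dtft_mag(x, 5_000 + 2 * fs, fs))      # and again at 93.2 kHz
```

The three printed magnitudes are equal (to floating-point precision) because shifting the evaluation frequency by fs multiplies each term by exp(-2jπn), which is exactly 1 for integer n.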
hey guys ...
Well we introduced a one-bit representation of a signal in the previous thread on ADC's. We saw that even though such "coarse" quantization must (of course) have lots of quantization "error" or "noise", that noise can be kept VERY LOW in the 20kHz bandwidth providing you do 2 things: one, sample much faster than the Nyquist Rate, and two, perform the quantization in a little feedback loop that essentially "shapes" the quantization noise in the frequency domain so that it's very LOW at low frequencies.
In fact ALL of these principles were shown in the homework assignment, where we found a one-bit signal to be a VERY accurate representation, on average (which means for low frequencies ... averaging is nothing but a crude low-pass filter), of a low frequency signal (our example was DC, but easily extendable to low freq AC). Trust me ... that little example I asked you guys to do was FULL of all the interesting principles we need.
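As a frequency-domain companion to that homework, here's an editorial sketch (a first-order delta-sigma loop with my own toy numbers: a half-scale 1 kHz tone at a 64x-oversampled rate) showing the one-bit quantization noise being "shaped" up and away from the audio band:

```python
import numpy as np

fs = 64 * 44_100                          # 64x oversampled rate (~2.82 MHz)
n = 1 << 16
t = np.arange(n) / fs
x = 0.5 * np.sin(2 * np.pi * 1_000 * t)   # 1 kHz tone, half amplitude

# First-order delta-sigma loop: quantize to +/-1, feed the error back.
acc = 0.0
bits = np.empty(n)
for i in range(n):
    bits[i] = 1.0 if acc >= 0 else -1.0
    acc += x[i] - bits[i]

# Where did the quantization noise power end up? Window and look at the PSD.
spec = np.abs(np.fft.rfft(bits * np.hanning(n))) ** 2
freqs = np.fft.rfftfreq(n, 1 / fs)
inband = spec[(freqs > 6_000) & (freqs < 20_000)].mean()   # audio band, off the tone
highband = spec[freqs > 500_000].mean()                    # way up high
print(inband < highband)   # noise is shaped away from the audio band: True
```

The per-bin noise power in the upper audio band comes out orders of magnitude below the per-bin power up near megahertz frequencies, which is exactly the trade the feedback loop buys you.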
We'll return to 1-Bit conversion soon, but for now let me say this : from a signal processing perspective, the "number of bits" in a conversion tells you very little.
Mathematically, is 1-bit lower noise than 16-bit? From this information alone, all you can say for sure is that the TOTAL noise of a 1-bit must be higher (worse) than the 16-bit ... 1-bit must give you more noise, cuz you only have 1-bit for pete's sake. BUT, what you don't know is how that noise power is DISTRIBUTED in the frequency domain. The process of oversampling in the quantization process allows you to "shape" that 1-bit noise so that it's very low over 20kHz ... in fact, it can be quite a bit lower than 16-bit. So for an apples-to-apples comparison, you must compare: how many bits you're using, what sampling rate is used, and finally what noise shaping algorithm is employed. Only then can you tell if 1-bit or 16-bit has better noise performance in a given bandwidth of interest. It's all determined by the 3 main items: resolution in bits, sampling rate, and noise shaping algorithm (if any).
Ultimately the ADC must output 16 bit words at 44.1kHz for CD storage ... but oversampled ADC's can be used first, and through a DSP operation called decimation (the averaging filter is a crude example) we can create 16 bit words @ 44.1kHz for storage on a CD. Or, skip the decimation and store the 1-bit "bitstream" on SACD
And Kev is correct ... as we pointed out in the last thread, all this fancy digital signal processing is employed so that we can actually build a converter that has only 1 bit. The one-bit converter enjoys an advantage over ANY higher bit converter : LINEARITY. One bit means two "states", two states means only 2 points on the converter characteristic function, and two points define a straight line ... it's that simple. No chance for mis-matched analog components to cause converter non-linearity.
Now before we move onward with our DAC discussion, let me add one more thing about the description of the signal stored on the CD. As we said, 16 bit words at 44.1kHz in the "time" domain, and a faithful 20kHz bandwidth plus all the frequency images at multiples of 44.1kHz in the "frequency" domain. But there's also quantization noise present ... it came about when we quantized the signal to 16 bits. In the time domain, this is the error introduced when we used a 16-bit word to represent an instantaneous analog voltage (or current) ... in the last thread we did a 3-bit example that demonstrated this "error" or "noise".
Now, this quantization error is also manifest in the frequency domain. In a sense, it's really just a higher noise floor over the entire 20kHz bandwidth ... and of course, the identical noise floor is "replicated" in those image bands as well, at multiples of 44.1kHz. But it can be quite a bit more complicated ... for example, if dithering is used in the quantization process, the noise floor is actually a bit HIGHER. Why the hell would you do that? Simply put, to decorrelate the quantization noise from the signal. One day I'll do another thread devoted solely to dither ... I love this topic. Sampling is simple, but quantization is endlessly fascinating.
So I think that completes our picture of what the data stored on the CD looks like ... in both the time and frequency domains.
dunder ... haven't heard about TI's technology. My eyes glaze over from all the marketing hype surrounding digital amps, digital speakers, digital air, digital ears, ... post a linky and I'll have a peek. Happy to help with your homework if I can ...
matt ... haven't spun a vinyl disc in over a decade! I was in deep too ... VTA adjustments, stylus cleaning & demagnetizing, potions & elixirs ... but then I saw the light when someone suggested that to really enjoy digital, stop listening to analog altogether! never looked back
And I think it's informative to summarize what's improved about digital audio recording & reproduction over these last twenty years, to help explain why early digital certainly did NOT live up to the full performance possibilities of the medium. Here's my take, probably missing some:
1. The quantization process itself does nasty things to low level signals ... here especially quantization is a very unpleasing distortion, not additive noise. BUT the proper use of dither in the recording process (or any quantization process) has all but eliminated this early problem.
2. Converters (ADCs, DACs) with poor linearity, especially for low level signals (near the zero crossing). Bad differential nonlinearity in the early converters, combined with the above point on non-dithered quantization, destroyed any chance of low-level signal integrity and all the sonic cues dependent on it. Of course converters have improved DRAMATICALLY in this regard ... single bit & multi-bit.
3. Timing jitter. Took a little while to really appreciate this, cuz there's no real direct analogy in analog (OK, maybe wow & flutter, but at very different frequency extremes). But here again, lots of substantial improvement in the last 2 decades: better clock recovery with better jitter filtering, asynchronous sample rate conversion, and perhaps just more attention to the issue (clock buffering, board layouts, etc.).
4. Digital filters in audio have DEFINITELY improved. Better understanding of quantization effects in IIR structures, higher precision available at affordable prices, appreciating the pre-echo & post-echo associated with too much passband ripple ... all these effects have no real counterpart in analog processing, took awhile to develop real understanding & solutions.
5. Analog filters for digital audio have DEFINITELY improved. This includes minimizing their use altogether in oversampled systems, plus the appreciation that digital audio systems really do benefit from surprisingly wide bandwidth analog circuits (op amps) ... not because there's more information to be revealed (CD limit is a brick wall in this regard), but because after conversion from digital, the analog signal contains the "residue" of lots of high frequency junk ... residual image energy, residual quantization noise. And if the analog circuitry cannot process this high freq "junk" happily (linearly), then demodulation INTO the audible band can result.
Well that's my top five over 2 decades. We've come quite a long way actually ... I'd say the digital medium has improved quite a bit faster than the analog one, in its early childhood. Which sounds better TODAY? That's an exercise left to the reader ... but if you look at the pace of improvement (the "slope" of the curve, not just a single point), you gotta believe we've only just begun to reveal the real potential of digital audio
high time we resurrect this thread, no ? I do want to say that it's my intention to just help share some things I've learned over the years, I love to teach it's never my intention to come off as some arrogant asshole or know-it-all, so if that's the case let me know & I apologize in advance. ok enuf of that
when last we spoke, we ended with 16 bit words on the CD, representing samples of the original analog waveform every 44.1kHz. In the frequency domain, we have the full 20kHz bandwidth faithfully represented ... except for the addition of quantization noise that came about when we quantized to 16 bits ... PLUS identical 'images' of that bandwidth centered at all integer multiples of 44.1kHz.
Now we must do 2 things: first, convert that digital signal, or string of samples, to analog. That's what this post is about. Much like our ADC discussion, there are fundamentally two techniques that can be used ... a straightforward 'Nyquist' technique, and an oversampled one. We'll first focus on the Nyquist technique, and later come back to oversampling.
A Nyquist DAC is pretty simple really ... you just take each 16 bit word, one at a time, and convert to an "equivalent" analog voltage (or current) value. This is typically done with a set of switches connected to a resistor string or "ladder". The resistor ladder is connected to a main voltage reference (plus ground), and the switches are controlled by the digital word that we're trying to convert. The idea is that the resistor string will create, through simple voltage (or current) division, many attenuated voltage levels from the main voltage reference (one level for each bit is common), and the switches will direct some combination of these voltages (currents) to the output. Using our old 3 bit example, let's say you want to convert the code 011 from a voltage reference that spans 0.1V to 0.9V ... well a resistor ladder and corresponding switches would probably be designed to generate an output voltage of 0.45V (halfway between the 0.4V and 0.5V bit boundaries) for this input code. Really just running our first 3 bit ADC in reverse.
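That code-to-voltage mapping is easy to sketch numerically (an editorial illustration; the 0.1-0.9V span and midpoint convention come straight from the 3-bit example above):

```python
# A toy 3-bit "Nyquist" DAC for a reference spanning 0.1 V to 0.9 V.
# Each of the 8 codes maps to the midpoint of its 0.1 V-wide bin, which is
# what the resistor-ladder divider would be designed to produce.
V_LO, V_HI, BITS = 0.1, 0.9, 3

def dac_out(code):
    step = (V_HI - V_LO) / (1 << BITS)    # 0.1 V per code
    return V_LO + (code + 0.5) * step     # midpoint of the code's bin

print(dac_out(0b011))   # 0.45 V, halfway between the 0.4 and 0.5 V boundaries
print(dac_out(0b000))   # 0.15 V, bottom code
print(dac_out(0b111))   # 0.85 V, top code
```

A real ladder DAC does this division with matched resistors and switches rather than arithmetic, which is exactly where the component-matching burden comes from.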
Simple, right? Well it is ... but the subtle signal processing is not quite that easy. Because, one must NOW ask the question (since we're now in the analog domain): when we convert one sample to its analog equivalent, what does the analog output value do UNTIL the next sample arrives? Yes, the next sample will come 1/44.1kHz seconds later, and at that time we'll compute a new output value, but what happens IN BETWEEN these samples? One seemingly reasonable thing to do is to just "hold" the previous analog value constant until the next sample comes along. Quite common really ... it's called a "zero order hold". But let me stop for now ... because all the fun in DACs is about what you do BETWEEN those original samples
OK this post is about "filling in" the signal between the samples.
We've described one way to convert the digital samples to analog samples, and then described one reasonable, but crude, way to "fill in" the analog signal BETWEEN the samples ... namely, just "hold" the last value until the next sample comes along (time domain). This will create a classic "staircase" looking analog signal, which does of course bear some resemblance to the original signal (that got sampled way back), but certainly not identical. So there must be a better way ... how do we know? Well, that Nyquist theorem tells us we can COMPLETELY recover the original analog signal from its samples (ideally).
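The staircase is trivial to mimic in NumPy (my own sketch; the 2 kHz tone, phase offset, and 8 "analog" points per sample period are arbitrary illustration values):

```python
import numpy as np

fs = 44_100
n = np.arange(16)
samples = np.sin(2 * np.pi * 2_000 * n / fs + 1.0)   # a few samples of a 2 kHz tone

# Zero-order hold: repeat each sample value until the next one arrives.
# At 8 "analog" points per sample period this draws the classic staircase.
hold = np.repeat(samples, 8)

print(hold[:8])    # the first sample value, held flat for a whole period
```

Plot `hold` against time and you get the staircase exactly; every flat tread is one sample value surviving until the next conversion instant.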
So let me pose a question: what's the absolute best "process", or signal processing block/function, that we can perform on the analog samples to COMPLETELY & ACCURATELY recover the complete signal from its samples?
In other words, what's the BEST way to fill-in the signal between the samples?
HINT : the answer is A LOT more obvious in one domain (time or frequency) than the other ... and why it's good to have the "parallel" discussion in both domains when describing DACs.
yes the correct answer is the "sinc" time domain function ... but you would never know it by looking in the time domain. Instead, consider the frequency domain ... where we said the sampled signal has the full 20kHz spectral content faithfully represented, plus "images" centered at multiples of the sampling rate. How do you preserve the low frequency info, and eliminate the high frequency images? Why an ideal LOW PASS FILTER of course!
Looks like a brick wall or "box" (gating function) in the frequency domain. Time domain impulse response: sin(x)/x.
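Here's a hedged numerical sketch of that sinc "fill-in" (not from the thread; the 1 kHz tone and the +/-200-sample truncation are my own choices — a real system can't use the infinitely long ideal kernel, so the recovery is only approximate):

```python
import numpy as np

fs = 44_100
T = 1 / fs
n = np.arange(-200, 200)                   # sample indices (truncated kernel support)
x_n = np.sin(2 * np.pi * 1_000 * n * T)    # samples of a 1 kHz tone

def reconstruct(t):
    """Ideal recovery: sum of samples weighted by shifted sinc kernels.
    np.sinc is the normalized sinc, sin(pi*u)/(pi*u)."""
    return np.sum(x_n * np.sinc((t - n * T) / T))

# Evaluate the reconstruction BETWEEN two samples and compare to the truth.
t_mid = 3.5 * T
print(reconstruct(t_mid))                  # the "filled-in" value
print(np.sin(2 * np.pi * 1_000 * t_mid))   # the original signal at that instant
```

The two printed values agree to a few parts in a thousand, and the residual is purely the kernel truncation: at the sample instants themselves the sinc kernels are exactly 1 or 0, so the samples are reproduced untouched.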
And now our Nyquist-rate DAC discussion is complete. Convert those digital samples from the disc to analog samples, then pass the signal thru a very good analog low-pass filter, and signal recovery is complete.
Just remember : "filling-in" the signal between the samples (time domain) is EXACTLY equivalent to filtering the images (frequency domain). Never forget this ...
Now, our Nyquist DAC has a couple of difficulties. First is the matching required for 16-bit, or higher, precision in the analog components. Second is this great analog low-pass filter we have to build ... needs to have a pretty steep rolloff, don't want to add much noise (thermal, power supply), low temperature drift, etc. Quite a burden on analog circuitry ...
Of course there's a better way ... next post.
So let's summarize so far. One method is Nyquist DAC ... convert to analog, filter with analog. Sample rate never changes from 44.1kHz.
Second method is to perform some DSP (digital interpolation filter) to "fill-in" some digital samples between the 44.1kHz samples ... thereby increasing the sample rate before you convert to analog. This will greatly relax the order of analog filtering required. Please note that this increase in sample rate, if you're starting from data at 44.1kHz, does NOT magically increase the info you're getting from the disc ... it is NOT the same as having sampled & stored the original signal at a higher sample rate ... it only allows us to digitally filter the images, or equivalently digitally "fill-in" the signal between samples to aid the "smoothing" process.
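A sketch of that digital interpolation (editorial illustration, not the thread's own code: 4x oversampling of a 1 kHz tone, with a short 65-tap windowed-sinc FIR standing in for a production interpolation filter):

```python
import numpy as np

L = 4                                    # 4x oversampling
fs = 44_100
n = np.arange(256)
x = np.sin(2 * np.pi * 1_000 * n / fs)   # 1 kHz tone at 44.1 kHz

# Step 1: zero-stuff -- insert L-1 zeros between samples. This raises the
# sample rate to 176.4 kHz but leaves the images at 44.1k, 88.2k, ... intact.
up = np.zeros(len(x) * L)
up[::L] = x

# Step 2: a digital lowpass (the interpolation filter) removes those images,
# which is exactly the same thing as "filling in" the zeros. A short
# Hamming-windowed sinc FIR, cut off at the old Nyquist frequency:
k = np.arange(-32, 33)
h = np.sinc(k / L) * np.hamming(len(k))
y = np.convolve(up, h, mode='same')

# y now approximates the same 1 kHz tone, sampled 4x faster -- no new
# information added, just digitally smoothed, so the analog filter that
# follows can be far gentler.
print(y[128], x[32])    # an interpolated on-sample point vs the original sample
```

Note the on-sample points come back untouched (the sinc kernel is 1 at zero and 0 at every other original-sample offset); only the in-between points are newly computed.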
Some subtle benefits to oversampling ... One is that the conversion to analog will now be performed at a higher "effective" sample rate ... 4, 8, 32, even 128 times faster than 44.1kHz. And this can help some non-ideal things like jitter, analog thermal noise ... cuz these artifacts are now spread over a wider bandwidth, so the energy from these artifacts that falls into a 20kHz bandwidth is lessened.
Now, as long as we're thinking about "oversampling" in the realm of DACs to use digital filters for image rejection ... let's talk about quantization. We started with 16 bit words on the disc ... how much precision do we need to carry through the DSP? And who remembers our ADC discussion, where we were motivated to trade off speed for accuracy ... sample fast enough so that we could use a "smart" 1-bit converter (noisy, yes, but not in a 20kHz bandwidth!) where we could rely on the inherent linearity of single bit conversion?
Well we can use the same principle in DACs IF we increase the sample rate high enough ... typically to 2 or 3 MegaHertz ... we can ultimately quantize the signal all the way down to ONE BIT (this time, with a digital equivalent of a delta-sigma modulator). Why the strong motivation to quantize down to one bit digitally? Because, when we finally do convert to analog, one-bit converters are INHERENTLY LINEAR ... can't be screwed up by component mismatch.
So now we have all the principles we need for 2 types of DACs :
1. Nyquist. Convert samples to analog at 44.1kHz, filter with an analog filter to reject images, or fill-in between samples. 2 problems : Analog matching required in the conversion, high-order analog filter needed (noisy, drifty, phasy, etc.)
2. Oversampled. Use DSP to "interpolate" between samples. Adds no new information, just digitally "fills in" between samples. Greatly relaxes the amount of post-DAC filtering required in the analog domain. In this context, we also have the opportunity to quantize or truncate the precision down to a single bit ... again, without disrupting the 20kHz bandwidth ... so that our real DAC will be inherently linear, no need to rely on precision-matched analog components. Many parallels to the oversampled ADCs we discussed ... for example, we needed to DECIMATE in an oversampled ADC to get back down to 44.1kHz, whereas we do the inverse, INTERPOLATE, in a DAC to increase from 44.1kHz.
Of course the second option has become the favorite ... with varying degrees of oversampling, and varying degrees of word length reduction.
As a final note to this crazy long saga, I'll finish with some posts about some (rather subtle) disadvantages to the one-bit approach, and why IMHO we still see healthy competition between multi-bit and single bit ... although oversampling itself is just too damn compelling! Even most multi-bit converters employ oversampling to some extent ... that first 2x buys you A LOT of relief in the analog filter stages
maybe this post can wrap it up
Oversampling, upsampling ... yes, essentially the same thing, only a difference of connotation i would say. Oversampling refers to the digital interpolation associated with DACs, and oftentimes implies that wordlengths will get reduced (maybe all the way to 1 bit) in the process ... but fear not, that's OK because you have wider bandwidth over which to spread the quantization noise power. Upsampling usually does not include a reduction in wordlength precision, and is often implemented independent of DACs. What they have in common : digital filtering of images, or equivalently, digital "smoothing" between samples. NO INFO GETS ADDED, absolutely NOT the same thing as having sampled faster in the first place.
Multi-bit versus single bit, yes the punchline. First, oversampling just makes sense in EITHER technology ... the benefits include: digital image filtering versus analog, plus certain artifacts of the final digital-to-analog conversion process, like jitter, are spread over a wider bandwidth and consequently tend to disrupt the audio spectrum less. It's a very good thing ... but remember, digital filters can be built badly just like analog
One-bit pros/cons : The single biggest advantage is simply this : LINEARITY. Component mismatch can NOT cause harmonic distortion of the waveform. NO multi-bit technology can claim this ... it's a real gem. The problem that plagues single-bit converters is this : there's an enormous amount of quantization noise power ... should be, the signal is only represented by one bit! ... and it's hard to keep it where it belongs. Here's what I mean: quantizing to only one bit (in order to enjoy that linearity advantage) must introduce A LOT of quantization noise ... but the feedback loops or algorithms are designed so that MOST of that noise power is at very high frequencies, well above 20kHz. And the idea in a single-bit DAC is simply that this noise can be easily filtered by an analog filter. BUT, even the most mild nonlinearities at those high frequencies in these analog filters can DEMODULATE that noise back down into the audio band ... and it's bad news cuz that noise is very signal dependent, but not in a nice way like images that reduce in amplitude as the signal itself reduces. In fact, it's easy to show (Parseval's relation) that the high frequency quantization noise of a 1-bit converter must increase as the signal level decreases ... VERY bad news if some of it is getting back down to the audio band thru circuit non-idealities.
And the real "Achilles' heel" of single bit converters is the PRISTINE cleanliness required of the Voltage Reference. All DACs need voltage references for the conversion, and all DACs have a "multiplicative" nature to the VREF ... meaning that noise on the VREF "multiplies" thru the conversion process. And this simple math operation can WREAK UNHOLY HAVOC on single bit converters ... again, by demodulating all that quantization noise from high freqs down to audio.
So what am I saying? The demands on the analog circuitry associated with single bit converters are STEEP. No mismatch to worry about, but you gotta use very high quality circuits (like wide bandwidth opamps, so they're linear at high freqs) and design yourself a CLEAN voltage reference. And big caps on the VREF can HURT more than they help ... cuz the magnetic loop areas get bigger, so even though electrical interference reduces, magnetic interference gets worse. Trust me when I say this took many long nights to sort out in digital audio.
Multi-bit pros/cons : do not enjoy the fundamental linearity of the single bit converters, but A LOT less high frequency noise to worry about. Meaning, the post-DAC filtering job is easier to get right.
Bottom line : well you're starting to see a lot of hybrid converters these days, that mix the two technologies to avoid the pitfalls of both. It's my personal opinion that 1-bit converters ultimately hold the most promise ... technology speed continues to increase (at quite a predictable pace), diminishing the difficulties associated with 1-bit converters, while the matching required by multi-bit converters does not improve. But always remember that the circuitry associated with the 1-bit converter (Vref, filters) must be VERY good stuff (low noise, high linearity) or else you've got a pile of junk on your hands.
And finally, remember when we described SACD back on the ADC thread? Well we all know that SACD is FUNDAMENTALLY a single bit technology/representation ... so the essence of the principle is sound indeed ... but the implementation with real-world electronics requires excellent analog engineering.
Well I'm done, except for questions. Ask away!
Yes, I oversimplified a bit when I said it was pointless to mention Burr-Brown when discussing DACs... however, how do you compare two DACs? I could easily find a DAC that would blow away one chosen at random from TI/Burr-Brown, but not only would most not know what specs to look for when it comes to comparisons, they're probably not going to know the manufacturer of the DAC in the first place... car head manufacturers rarely list that type of info.
Part I
While the info you posted is quite encompassing of a particular subject, it has only a grazing relevance to what we're discussing here for two reasons. First off, the info about clock jitter is aimed towards systems that need to transfer the clock signal between various components (i.e., home systems, like Phil's setup, where the DAC and CD pickup are in different locations/boxes)... not the case in a single unit, such as a car stereo, since they can merely run the clock signal wherever they need to, no reclocking or PLL'ing necessary. Second, this type of error will affect all systems roughly equally (for all intents and our purposes here), regardless of the DAC since it's a function of the clock signal itself... these are not $500 DACs.
Part II
A lot of good talk and examples on ADCs... not really relevant to our discussion here since that's the flip side of the analog/digital world, but useful info to have for anyone who's interested in the subject.
Part III
Pretty much summed up what I've already posted, with a few extra details thrown in for good measure. No new info, just rehashed and recovered...
For two single page posts, I thought I did a pretty good job of covering everything you need to know when choosing DACs...
home systems, like Phil's setup, where the DAC and CD pickup are in different locations/boxes

i tried a few, including one of Pioneer's 'legendary' Legato Link DAC setups... it just didn't beat it. (but the Legato Link did come closest)
Originally Posted by MacGyver,Jun 23 2005, 02:38 PM
Yes, I oversimplified a bit when I said it was pointless to mention Burr-Brown when discussing DACS... however, how do you compare two DACs? I could easily find a DAC that would blow away one chosen at random from TI/Burr-Borwn, but not only would most not know what specs to look for when it comes to comparisons, they're probably not going to know the manufacturer of the DAC in the first place... car head manufacturers rarely list that type of info.
you sound so defensive as if you have something to prove. there is no need to be so defensive. do you have some kind of complex where other people cannot post more descriptive info?
actually, i was just happy to see both discussions. i was aware of some of the subject discussed, but learned a lot and refreshed some older things from the back of my memory as well.
thanks both for posting.
Originally Posted by MR_ASDF,Jun 23 2005, 10:53 PM
whats your point? that my info is incorrect? i never once said your info was wrong. i just said your comparison is more like apples to oranges. im sure many people here would like to get correct answers and comparisons.
you sound so defensive as if you have something to prove. there is no need to be so defensive. do you have some kind of complex where other people cannot post more descriptive info?
Uhhh, my point was just as I posted it. I oversimplified about the Burr-Brown part since I didn't feel it was a point worth belaboring, but as I stated later, it's still not that relevant to mention Burr-Brown. If you were comparing two engines, you would compare the specs, not the manufacturers. There are a large number of TI/Burr-Brown DACs, so if you absolutely must mention the manufacturer, why not mention the specific part number, too? At least that information would let others look up the specs, which is where the useful info comes in. No one ever says "Product X has a Maxim DAC" or "I use an Analog Devices DAC", so why bother with "It's a Burr-Brown DAC"? Am I making more sense now?

Part I is pretty irrelevant to the original question... the information is correct, it's just not relevant to the question at hand. Part II was mostly on ADCs, not DACs. Part III was merely a rehash of what had already been stated.
Defensive?
You're free to post more descriptive info, if you think it would help. I was just summing up 50 pages of info that you cut and pasted. My post should be moved before yours, kind of like an outline for what you posted, if you will. If people don't care about the recording side of things, they could skip past Part II... but they wouldn't know that until reading a good number of pages, for fear of possibly skimming past useful info on DACs.

A complex?
<shrug> whatever... I think people all too often read WAY too much into posts...