I think I did not make clear enough what my "problem" is ...
Disclaimer: this is all my very own interpretation of things, based on experience and knowledge where possible.
I agree with everything Mr. Katz says, BUT what he leaves out (explicitly or implicitly) is exactly what this is about here.
In fact it's IMHO the same with you (Andrey and Edward) above, and it makes the explanations (if any) too simple, or makes them fail;
In my earlier long post, I tried to explain the difference between erroneous reading and normal time jitter. You don't go into that (oh, not that you really should have, but it is important for what is happening IMO).
I say: at the DAC level, samples are missed out completely. Look at the picture below; from left to right is the time domain, and the height represents the voltage level (= volume). Now, one sideways step, as you can recognize in the picture, represents one sample. Mind you, this consists of two bytes per channel (for 16 bit audio).
One sample (just made up) for two respective channels may look like :
00010101 11001000 (left)
00100011 01011100 (right)
(big endian notation, which is not even true, but never mind)
Look at the picture again. The topmost part (left channel) shows a value change at 36.4 (never mind the meaning of that number) and the next step shows 37.6.
Now, time jitter in the DAC might just point at the 37.6 sample, while actually 36.4 should be pointed at. See this as an arrow from above pointing downwards at the samples, the time running by at crazy speed; the pointer should just stop once per ~0.0000227 seconds (44.1 kHz sample rate) and pick the sample, while the pointing of the pointer is directed by one clock (in the DAC), and the samples running by are directed by another clock (!) (in the PC).
Sidenote: the thing about the clocks is a tad more complex, because in either case it is the DAC that paces the samples arriving (the PC just makes the samples ready in a buffer, and this already works differently for SPDIF and USB).
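To make the "pointer" idea a bit more concrete, here is a minimal sketch in Python (just my interpretation in toy form; all numbers, including the hugely exaggerated timing error, are made up):

# Toy illustration of the "pointer": the DAC should pick one sample every
# 1/44100 s, but a timing error can push the pick onto the neighbouring sample.
FS = 44100
T  = 1.0 / FS                              # one sample period (~0.0000227 s)
samples = [35.0, 36.4, 37.6, 38.1, 37.9]   # value-steps as in the picture

jitter = 0.6 * T   # hugely exaggerated timing error of 0.6 sample periods

for n in range(len(samples)):
    t_ideal  = n * T                       # where the pointer should drop in
    t_actual = t_ideal + jitter            # where it really drops in
    picked   = min(round(t_actual / T), len(samples) - 1)
    if picked != n:
        print("step", n, ": wanted", samples[n], ", pointer picked", samples[picked])

The point being: when the pick is off, the value you get is the neighbouring sample, not garbage.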
This explanation of time jitter, as how I (!) see it happening in the DAC, is different from erroneous reading:
Below is the sample of the left channel again, but now underneath is what was read (at either place where reading can go wrong) :
00010101 11001000 (left original)
01010101 11001000 (left read)
The above is what everybody is talking about (you both, Mr. Katz), and (reading the bytes big-endian and taking the corresponding voltage) this is an error of 16384 on original data of about 5600. Note that this implies a voltage spike to almost 4 times (roughly 12 dB over) the original level, and it happens within one sample.
Do note: this is perceived as inaudible (because it lasts far too short), but it is dead wrong anyway.
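Putting my own arithmetic on that single-bit error as a small sketch (the decimal values are just what those two bit patterns work out to):

import math

# Arithmetic behind the single-bit read error (16-bit sample, big-endian notation).
original = int("0001010111001000", 2)    # = 5576
misread  = int("0101010111001000", 2)    # = 21960, one bit flipped

print(misread - original)                     # 16384 = weight of the flipped bit
print(misread / original)                     # ~3.9, so almost 4 times the level
print(20 * math.log10(misread / original))    # ~11.9 dB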
In the DAC, this kind of error can happen only because of poor TTL levels (TTL: the standard voltages for 0 and 1). Thus, when the raised voltage stays below the official minimum level for a 1, the 1 can be read as a 0.
Poor TTL levels can be caused by cabling and many more things (PSU, etc.).
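As a small sketch of what I mean with a poor TTL level (the 2.0 V / 0.8 V thresholds are the classic TTL input levels; the rest is made up):

# Sketch: a logic "1" whose voltage has sagged below the minimum high-level
# input voltage can no longer be trusted to be read as a 1.
V_IH_MIN = 2.0   # classic TTL: at or above this it is guaranteed to read as 1
V_IL_MAX = 0.8   # classic TTL: at or below this it is guaranteed to read as 0

def read_bit(voltage):
    if voltage >= V_IH_MIN:
        return 1
    if voltage <= V_IL_MAX:
        return 0
    return None   # undefined region: may be read as 0 or 1

print(read_bit(3.3))   # healthy 1 -> 1
print(read_bit(1.4))   # sagged 1 (poor cabling, PSU, ...) -> undefined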
Again different would be "time jitter" in reading out a (double) byte;
Note that this is what you talk about from the CDR point of view, but I personally do not believe it happens in other hardware.
Thus, think of the time-pointer again as explained above, but now implying that the individual bits themselves are read wrongly.
NOTE: with the poor TTL levels it still can, but that aside ... no. BUT ... on the other hand, it would still be subject to time jitter, because TTL voltage highs and lows have a rise and fall time, and it would still depend on where the "pointer" drops in (looks); see the sketch below.
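A sketch of that last point: a 0-to-1 transition takes time to rise, so where the "pointer" drops in during the edge decides which bit you get (the 5 ns rise time is made up):

# Sketch: sampling during a rising edge - the bit you read depends on the
# exact sampling instant, i.e. on timing jitter.
V_IH_MIN  = 2.0      # classic TTL: read as 1 at or above this
V_HIGH    = 3.3      # the level the line eventually reaches
RISE_TIME = 5e-9     # made-up rise time of 5 ns

def voltage_during_rise(t):
    # linear ramp from 0 V up to V_HIGH over RISE_TIME seconds
    return min(V_HIGH, V_HIGH * t / RISE_TIME)

for t in (1e-9, 3e-9, 6e-9):     # three slightly different sampling instants
    v = voltage_during_rise(t)
    print(t, "->", 1 if v >= V_IH_MIN else 0)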
To put the above in another perspective, also look at this :
No matter what I do to influence the DAC (because that's what XX does in each of its sound engines), when the data is read back from the digital out of the soundcard (or even the DAC when possible), it reads back as the original data. Do note that at least *I* test this with real reading back, and not with silly tests like DTS signals at the receiver (which would not even prove anything in Vista (!)) or being able or not to influence the digital volume level.
What does this tell ?
It tells us that everything up to the input of the DAC was OK regarding TTL levels, so *IF* the DAC were reading erroneously, then suddenly right there it all wouldn't cope??
No ... (of course it could, but I just don't believe that).
Also note that if TTL levels (or better, the resulting voltage highs and lows) were not okay, there would be no way of error-checking that (I think). And if there were, the DAC could cope with it just the same ... (per implementation).
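For clarity, the read-back test I mean is nothing more than a bit-for-bit comparison of what went in and what came out of the digital chain; a minimal sketch (file names are made up, and in practice the two captures first have to be aligned):

# Minimal sketch of a read-back test: compare the original data with what was
# captured from the digital out, byte for byte.
with open("original_track.raw", "rb") as f:
    sent = f.read()
with open("captured_from_digital_out.raw", "rb") as f:
    received = f.read()

print("bit perfect" if sent == received else "data was altered somewhere")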
As a sidestep, we now go to the CD(R);
Your (and everybody's) story about the pits and lands and the time jitter induced *there* ... true IMO. But mind you, this is at the bit level! (comparable to the TTL level thing from above). So, when jitter is induced from reading the CD, we'd get
00010101 11001000 (left original)
01010101 11001000 (left read)
this again. And this would just be truly happening.
Now remember, when I state (haha) that this is not what is happening inside the DAC, and that instead complete samples are missed (and repeated) there, this has a very different outcome sound-wise. The errors would be outrageously smaller.
Look at the picture again; when the pointer reads the 36.4 again instead of the 37.6, this is an actual difference (error) of 1.2 decimal, which is quite different from 16384 ...
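Side by side, with the numbers from above (treating the picture's vertical scale the same way as the sample values, just like the comparison itself does):

# Comparing the two kinds of errors described above.
bit_flip_error    = 16384        # one flipped high-order bit in a 16-bit sample
sample_slip_error = 37.6 - 36.4  # picking the neighbouring sample instead

print(bit_flip_error / sample_slip_error)   # ~13653 times larger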
Note that it is nature which causes the difference to be so small, because natural dynamics from one sample to the next can't be that large.
It is not for nothing that I always express "a zillion things are wrong at audio playback", because to me it is obvious that a misread of 16384 just happens in a normal CDP. The better the box, the less it will be, but still.
Also, I know how the sound is perceived to stay the same when I inject wrong samples on purpose. You just won't hear it. Or ...
... In the end, obviously yes, you do, and this is what it is all about.
If you hear the upcoming 0.9s-0 you might wonder: were 80% of the samples read wrongly before, or is it 80% wrong now? That is how huge the difference is ...
Never mind that bits (!) are read wrongly from the CD all over the place. At the other end things are 10 times worse ... (mind you, that's my perception, and each improvement of XX just proves it true).
So where do we stand regarding the subject of mapping the jitter (?) produced by XXHighEnd onto a burned CDR?
I don't know.
The most logical basis for it to happen must indeed be the PSU. But if that were true, there's so much more to make consistent;
When XX draws PSU power via the processor asking for it, and those tiny levels would influence the laser capacity ... beyond my comprehension.
When XX draws PSU power via the processor asking for it, and that influences the rise and fall times of TTL levels ... yes. The smallest difference would be enough to make a difference. How this is consistent with reading back the data bit-perfectly ... probably my error in thinking that that proves something.
How a 1:1 mapping can emerge from supposed jitter signatures, with XX playing at 100% speed and the burning process burning at, say, 1200% speed ... could be, but then by "resonance" only.
The only real explanation might be that so many things are going wrong, and that the going wrong at XX playback is so persistent, that it influences everything. But mind you, this then must be about something we don't know yet (by far).
I'd like to think in the area of synchronous processes which are time-constrained at the same time, thus leaving things out. But this can't be it, because then the data on the various burned versions would have to be different.
This is crazy ...