Haha Klaus, sharp as always!
First off, the 1/10th of a sample is theoretical, simply because in practice a buffer cannot be that small; it would make no sense. However, it can be estimated indirectly, by watching the headroom that is left and the variation of the free space in the buffer.
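For the techies, a minimal sketch of that indirect measurement (hypothetical code, nothing from the player; get_free_frames() only stands in for whatever your driver exposes):

    #include <stdio.h>

    /* Stub standing in for a real driver query; here it just wobbles
       a bit so the sketch runs on its own. */
    static unsigned get_free_frames(void)
    {
        static unsigned t = 0;
        return 400 + (t++ % 7);
    }

    int main(void)
    {
        const double sample_rate = 44100.0;
        unsigned min_free = ~0u, max_free = 0;

        for (int i = 0; i < 100000; i++) {
            unsigned free_now = get_free_frames();
            if (free_now < min_free) min_free = free_now;
            if (free_now > max_free) max_free = free_now;
        }

        /* The spread between min and max free space, expressed in
           time, bounds how late the feeding thread ever was. */
        printf("fill variation: %u frames (~%.4f ms)\n",
               max_free - min_free,
               (max_free - min_free) * 1000.0 / sample_rate);
        return 0;
    }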
Another angle is using a (special) timer with an interval far too low to be useful at all; only when it is set to 0.05 ms does it reach the limit of the 2.4GHz Core2Duo I use, with that thread running on one core only (but still inaudible to me). One 44.1K audio sample lasts about 0.023 ms -> my presented (!) math was a little bit off, granted. BUT, in what I describe here a. I used one core and b. I used 44.1K. At 88.2K the results are nearly the same, implying another factor of 2 "better". So I'm on the safe side here. Okay?
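Just to put that arithmetic on the table (nothing new, only the numbers above computed out):

    #include <stdio.h>

    int main(void)
    {
        /* How long one sample lasts at the two rates, against the
           0.05 ms timer tick from the post. */
        double rates[] = { 44100.0, 88200.0 };
        for (int i = 0; i < 2; i++)
            printf("%.0f Hz -> one sample = %.4f ms\n",
                   rates[i], 1000.0 / rates[i]);
        printf("timer tick            = 0.0500 ms\n");
        /* Prints 0.0227 ms and 0.0113 ms: one 0.05 ms tick spans
           roughly 2 samples at 44.1K and 4 at 88.2K. */
        return 0;
    }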
Again, this is useless in practice. Apart from the fact that the sampling frequency would never be that low, it is already useless because no timer is stable at such a high rate. Also, the code executed in between is influenced relatively too much, so it really can't be measured. However, when we talk about the phenomenon of latency as such, it would certainly hold. Mind you, "latency" comes from DAW applications (better said: it is very useful there), and there it would apply, although in practice I think you would have to say that the latency varies from, say, 0.02 ms to 0.07 ms or whatever would come out exactly, because of a. the variance in the timer and b. the code itself, which stalls the timer by a varying amount.
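For who wants to see that timer variance for himself, a little sketch (Windows high-resolution counter; this is my own illustration, not code from the player):

    #include <stdio.h>
    #include <windows.h>

    int main(void)
    {
        LARGE_INTEGER freq, prev, now;
        double min_ms = 1e9, max_ms = 0.0;

        QueryPerformanceFrequency(&freq);
        QueryPerformanceCounter(&prev);

        for (int i = 0; i < 100000; i++) {
            QueryPerformanceCounter(&now);
            double dt_ms = (now.QuadPart - prev.QuadPart) * 1000.0
                         / (double)freq.QuadPart;
            if (dt_ms < min_ms) min_ms = dt_ms;
            if (dt_ms > max_ms) max_ms = dt_ms;
            prev = now;
        }

        /* max_ms jumps whenever the thread gets preempted; that is
           exactly the variance which makes sub-0.05 ms timing moot. */
        printf("delta between reads: %.6f .. %.6f ms\n", min_ms, max_ms);
        return 0;
    }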
*ALL* of it is useless once you see that none of this is what it is really about, no matter how much we tend to think it is. We talked about this before, and very, very carefully I'd like to say that this is sort of proven by Linux, which can go (far) under the latency figures of the #1 and #2 Engines under XP, with XX still showing off there (without you being there, I know).
For whatever it is worth, I mentioned it in the original post just and only because people like to know this. So again, the fact that it operates in, say, the real-time domain *for me* only serves to put out the real message: there is no way this player will be influenced by its near environment (other services etc.). And remember, that was just my objective!
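For the curious, the general idea of running in the real-time domain looks like this in plain Windows calls (a sketch of the principle only; I am not saying the player does it exactly this way):

    #include <stdio.h>
    #include <windows.h>

    int main(void)
    {
        /* The most aggressive combination Windows offers; a runaway
           loop at this level can starve the whole machine, so take
           care. */
        SetPriorityClass(GetCurrentProcess(), REALTIME_PRIORITY_CLASS);
        SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL);

        /* Without sufficient rights Windows quietly grants only
           HIGH_PRIORITY_CLASS, so check what we actually got. */
        printf("priority class now: 0x%lx\n",
               (unsigned long)GetPriorityClass(GetCurrentProcess()));

        /* ... feed the audio buffer here, undisturbed by services ... */
        return 0;
    }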
This objective was worthwhile for the sole reason that otherwise we keep tweaking the hell out of our PCs to make them produce better sound. We all know it, and we all tend to listen to the in-themselves valid tweaks, even up to switching off the PC's monitor. I just created something that allows us to avoid those stupidities, and the real-time "figures" are just a means of putting the dumb theories to rest.
Remember, in the end it is all about jitter.
I said it before: Vista is great. But it is a stupid shame that nobody is able to help out on the elementary things in it. That is why it took me over 4 months to get a real grasp of it, and I can tell you, there is another 25% left for me to catch.
I hope this was a useful answer for you. And don't forget (as I think I said in the original post already): those tiny buffers just do not exist, so there is no way to exploit this super-low latency. Okay, perhaps on a 32-voice polyphonic synthesizer, but there 10 ms really is sufficient already. So go figure.
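To make that buffer point concrete (just arithmetic; the frame counts are examples I picked, not settings from the player):

    #include <stdio.h>

    int main(void)
    {
        /* Even a tiny real-world buffer dwarfs a tenth of a sample. */
        double rate = 44100.0;
        int frames[] = { 441, 64, 1 };   /* 10 ms, small ASIO-ish, absurd */
        for (int i = 0; i < 3; i++)
            printf("%4d frames at 44.1K = %7.4f ms\n",
                   frames[i], frames[i] * 1000.0 / rate);
        /* 441 frames is the 10 ms from above; a tenth of one sample
           would be 0.0023 ms, which no real buffer gets near. */
        return 0;
    }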
Peter