Ok, let's start with what happened to the sound;
The first thing to notice is loads of punch and better bass: deeper, more roaring. The freshness is at a level I haven't heard before, and the fragility that goes with it tells me it's not fake (like the false freshness of distortion). What you notice all over is a degree of balance that is simply right. It all fits.
Voices have more body and the lower midrange generally comes more forward. You're in the room with the musicians.
I can play loud in a way I don't think I could before.
"Magic" would be a nice description indeed.
But still, something is wrong. There seems to be a high-frequency layer somewhere; not exactly what I'd call sibilance, but it's there. It made me shut off the whole system while listening to Ennio Morricone, with his high-pitched violins and female background voices. Or maybe I should just have lowered the volume.
BUT, and I'm not sure yet, I think this is caused by me hopping over to Core Appointment Scheme-1, which contributes to the better bass and also adds that "on/off" quality I hear. It seems a bit like being back on W7. Notice that Scheme-1 (for a 4-core processor) puts the sound on the 2nd core and everything else *not* on the 2nd core (mutually exclusive). So, for theories ... (and that's why I did it).
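For those who want to picture what that "mutually exclusive" appointment could look like, here is a minimal sketch in C#. It only illustrates the idea of giving the sound engine one core and pushing everything else off it; the process names and affinity masks are my assumptions for this example, not the actual Scheme-1 implementation.

using System;
using System.Diagnostics;

// Sketch only: pin the sound engine to core 2 of a 4-core CPU and move
// every other process off that core. Not the actual Scheme-1 code.
class CoreSchemeSketch
{
    static void Main()
    {
        IntPtr soundMask  = (IntPtr)0x2;   // binary 0010 = only the 2nd core
        IntPtr othersMask = (IntPtr)0xD;   // binary 1101 = cores 1, 3 and 4

        foreach (Process p in Process.GetProcesses())
        {
            try
            {
                // "XXEngine3" is the playback engine named later in this post;
                // treating everything else as "the rest" is the assumption here.
                p.ProcessorAffinity = p.ProcessName.StartsWith("XXEngine3")
                    ? soundMask
                    : othersMask;
            }
            catch
            {
                // System processes will refuse this; just skip them.
            }
        }
    }
}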
People may want to try whether they perceive the same as I do with Scheme-1 (it's not so popular, I think).
OK, this is all beginning to become one large, complex thing, and it may all be related. On to SFS (Split File Size) ...
As I described elsewhere (3 weeks back or so), I had already noticed some counter-intuitive behaviour with a large SFS vs a small one: the large one implies no I/Os at all, the small one implies many more, while I could see the PC being so much more busy "with" the large SFS;
As you may recall, right from the start I attributed this to "memory" in general: things like memory far away vs memory close by, remapping, or whatever else I could think of. Well, that was not it ...
Of course we are doing relatively crazy things by using "arrays" of hundreds of megabytes. Not that a PC can't cope with that, but things get rather exponential when they're not done the "right" way. Uhm ... the right way ?
I think it was somewhere in 2008 that I rewrote the whole memory management in order to keep dot-net away from my precious arrays, and that worked. This is all about the "managed" stuff dot-net implies, and "managed" largely means that the developer doesn't need to take care of throwing out unused arrays and such; the "OS" takes care of it. One of the huge problems (for all dot-net developers) is that the OS throws out unused memory (read : makes it available again) only once it thinks there's some spare time to do it. So, what do we do ? We set our Thread Priority to Real Time, because someone (me) thought it could have it, and now the OS itself gets even less time to deal with this "garbage collection". But it will try ... It will try to free that 500MB array and merge it into other free space, in an attempt to create contiguous blocks of memory as large as possible, so a next request for a large array can get that space instead of an "Out of memory" because that one (say 500MB) contiguous space is not available, never mind that 1GB is free in total.
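To give an idea of what "keeping dot-net away from my precious arrays" can mean in code, here is one common approach: allocate the big buffers as unmanaged memory, so the garbage collector never moves or compacts them. This is only a sketch of the general idea, not the actual 2008 memory manager of XX.

using System;
using System.Runtime.InteropServices;

// Sketch: a large buffer the GC knows nothing about, so it will never
// copy it around or try to reclaim it behind your back.
class UnmanagedBuffer : IDisposable
{
    public IntPtr Pointer { get; private set; }
    public long   Size    { get; private set; }

    public UnmanagedBuffer(long sizeInBytes)
    {
        Size    = sizeInBytes;
        Pointer = Marshal.AllocHGlobal(new IntPtr(sizeInBytes)); // raw, unmanaged memory
        GC.AddMemoryPressure(sizeInBytes);   // hint so the managed heaps don't over-grow
    }

    public void Dispose()
    {
        if (Pointer == IntPtr.Zero) return;
        Marshal.FreeHGlobal(Pointer);        // freed when *we* decide, not the GC
        GC.RemoveMemoryPressure(Size);
        Pointer = IntPtr.Zero;
    }
}

// Usage: using (var buf = new UnmanagedBuffer(500L * 1024 * 1024)) { ... }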
To summarize the above (not that you need to understand it): back in 2008 I was able to make the total memory used (for everything) stable in this regard, not knowing what it all implied - with the OS always trying and trying to be smarter than me (which of course is impossible, haha). So, those who were around back then may recall that after the 8th track played you could suddenly receive an Out of memory, only because the OS was too late in freeing memory which was needed again (by XX). Long live dot-net.
But that was solved.
The past week I have been reading and reading about how the various versions of the Windows OS tried to improve on this, how obsolete memory is copied from one space to the next, and later to a next space again, and among other things I learned how our 500MB blocks of memory are copied and copied and copied, until maybe at last they are thrown out for good (and who hasn't watched the memory usage going up and down without reason ? -> of course you must know what's happening under the hood, but I have always been very surprised, knowing exactly what XX is doing).
And so I started tweaking the nastiest things, in order to once again be smarter than the OS and stay ahead of all this;
If you saw the end result in code it would all look quite simple, but it took me days to get there. I won.
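The post doesn't reveal the actual tweak, but to sketch the kind of move it is: instead of letting the runtime free those 500MB blocks whenever it feels like it (possibly while playing at Real Time priority), you can do all of that work yourself at a moment of your choosing, for example right before a track starts. A minimal illustration, with no claim that this is what XX really does:

using System;

// Sketch: force the whole cleanup into a quiet moment that *you* choose,
// so nothing of it is left to be done during playback.
static class GcOnMyTerms
{
    public static void CleanUpNow()
    {
        GC.Collect();                      // full, blocking collection
        GC.WaitForPendingFinalizers();     // let finalizers run now, not later
        GC.Collect();                      // sweep what the finalizers released
    }
}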
The result ? Well, for one thing that may make some sense to you: with 3GB of RAM and a 1GB RAMDisk, thus 2GB of available memory, I can now use an SFS of 260MB at 8x Arc Prediction Upsampling. Before, this was 100MB. Notice that the difference is not just 160MB, because you have to look at the net result, and the 160MB gross difference becomes something like 8x that in my case (very roughly). So, I can simply use about 1.2GB more memory now. Btw, in total this comes to over 1.6GB in use, with 400MB left for the OS.
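For who wants to check the numbers, this is just the rough arithmetic from the text lined up; nothing here is measured, and the 8x factor is the "very roughly" from above:

using System;

// The figures from the text, nothing more.
class SfsArithmetic
{
    static void Main()
    {
        long availableMB    = 3072 - 1024;        // 3GB RAM minus the 1GB RAMDisk = ~2GB
        long grossSfsGainMB = 260 - 100;          // SFS raised from 100MB to 260MB = 160MB
        long netGainMB      = grossSfsGainMB * 8; // roughly 8x at 8x Arc Prediction = ~1.2GB

        long totalInUseMB   = 1600;               // "over 1.6GB" in total, per the text
        long leftForOsMB    = availableMB - totalInUseMB; // = ~400MB left for the OS

        Console.WriteLine($"Extra usable: ~{netGainMB}MB, left for the OS: ~{leftForOsMB}MB");
    }
}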
In the program itself I changed nothing; I only manipulated how the OS works.
Of course I can now also increase the RAMDisk to 2GB and use my (max) 100MB SFS from before ...
Very roughly speaking (hence far from correct, but hopefully understandable): with a low SFS the memory chunks the OS has to deal with are small, hence they take fewer CPU cycles to process, whereas the large chunks take way more. BUT -and this is the tricky part- especially with a large SFS the OS could find itself nearly out of memory, and thus be forced to clear the unused memory. When this happens all is OK, but again there's a but : only in the later version I have here. Not in yours. In yours it could still build up again, and depending on many things (like priority) the OS might or might not get out of its "squeeze".
The stupid thing is, no matter what, the memory is actually always full of unused data, and it is only a matter of "when shall I (the OS) free it".
As I said, it is all one complex matter of related things, and then we started to create RAMDisks. Ah, good for the sound. But no, not good for the SFS thing, because the OS was again cornered by it: less memory available, so more need to free the old stuff.
In my system, the OS was spending over 80% of my CPU cycles on this, with a perpetual result of nothing (which I so smartly caused back in 2008). So: 80% forever, result zilch, and still the memory not available to me. Now ? Something like 0.001% or even 0.000%.
While I could see all of this happening, besides not being sure how my manipulation would work out on another OS (as said, MS keeps trying to improve on this), I could also see that it didn't always work. So, the next task was to find something I could use from within the program to check this "data". This is what I spent my time on yesterday, and my "solved" for this topic came when I was able to manage that. So, at this moment I'll receive a message when too many CPU cycles are being spent on rubbish, and a restart (or two) of XX solves it.
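The post doesn't say which "data" exactly is being checked, but one counter that exposes "CPU cycles spent on rubbish" for a dot-net process is "% Time in GC". A sketch of that idea only; the process name and the 10% threshold are arbitrary examples here, not XX's actual values:

using System;
using System.Diagnostics;
using System.Threading;

// Sketch: warn when too much CPU time goes to garbage collection.
class GcTimeWatcher
{
    static void Main()
    {
        var gcTime = new PerformanceCounter(".NET CLR Memory", "% Time in GC", "XXEngine3");

        gcTime.NextValue();                // first read is only a baseline
        Thread.Sleep(1000);
        float percent = gcTime.NextValue();

        if (percent > 10f)                 // threshold is an arbitrary example
            Console.WriteLine($"{percent:F1}% of CPU time spent in GC - a restart may help.");
    }
}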
Maybe for you it will never work (in the first version of it), so we'll have to see. In any case it is configurable, so you can leave it out anyway (and compare SQ).
Of course I don't like the restarts, so what I'll try next is to perform this "restart" myself.
I'm not sure yet, but I think it is fair to conclude that any copying XXHighEnd does (to the RAMDisk) is also related, because this too consumes memory which won't be freed - hence it will push the OS (further) into the corner again. Whether my tweaks solve that or not I don't know (for you to compare if you want), but notice that this is now also related to the option of just not starting Playback while the conversions and copying are done anyway. The result ? XXHighEnd will quit, which by itself *will* mark all the used memory as obsolete; next you start XXEngine3.exe (Alt-P) and from there on everything is fresh.
Yep, it gets more and more complicated ...
One more thing :
I tested how all of this works with Virtual Memory shut off. Well, then it does *not* work. Think of it like this :
My SFS of 260MB (resulting in over 1.6GB in use) will force the OS to swap out everything that can be swapped out. This is how it (the OS) ends up using under 400MB of memory (600-700MB is more normal). Without Virtual Memory this can't happen, and thus my SFS has to be lower. But how much lower ? You'll never know, and maybe after 50 minutes of playing some scheduled task starts and you'll be out of memory (the XXEngine3 memory can't be swapped out). This has become far more apparent, because the extra 1.2GB I can use was previously just unused space, which the OS could always fall back on before running out of memory. Not anymore, because now I use it ...
This too has "SFS" implications, because without virtual memory the OS *has* to actually free the memory (while I'm running at Real Time priority), whereas with virtual memory it can just free a block of sufficient size (and notice that that block is already in virtual memory, as I found long ago), which is a far easier task for the OS.
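If you run without a pagefile and want to see how much headroom is actually left before picking an SFS, the "Available MBytes" counter shows the free physical memory. Just a sketch; the numbers in this post were not obtained this way:

using System;
using System.Diagnostics;

// Sketch: read the free physical memory, which is all the headroom you have
// once Virtual Memory is shut off.
class FreeMemoryCheck
{
    static void Main()
    {
        var freeMB = new PerformanceCounter("Memory", "Available MBytes");
        Console.WriteLine($"Physical memory still available: {freeMB.NextValue():F0} MB");
    }
}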
All in all, this is how our judgement of SFS size can vary so much, and if you are able to reason out how to set what and when, well, you must be mad.
Peter
PS: I am still not 100% sure that the SFS size doesn't matter to the sound, and to me it looks like a smaller size is better; this is supported by the theory that with memory fully used (virtual memory being present) some stressing must still be going on, as long as the OS wants to do something it just can't do. And keep in mind how we developers tend to deal with such a thing : try once every 20ms whether the thing that couldn't be done can be done now. That loop will be there forever, because it can never succeed as long as playback continues.
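That "try once every 20ms" pattern in its bare form, for who wants to see why it never goes away during playback (sketch only):

using System;
using System.Threading;

// Sketch: the typical retry loop. As long as playback keeps the memory in use,
// tryFreeMemory() keeps returning false and the loop keeps waking up, forever.
static class RetryEvery20ms
{
    public static void KeepTrying(Func<bool> tryFreeMemory)
    {
        while (!tryFreeMemory())
        {
            Thread.Sleep(20);   // come back in 20ms and try again
        }
    }
}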