Title: SFS Solved Post by: PeterSt on October 20, 2010, 07:31:07 pm Ok, I thought I'd open a new topic to tell you all that I have now completely finished the "SFS (Split File Size) issue". Issue ? haha. And the sound is ... :whistle: Title: Re: SFS Solved Post by: Marcin_gps on October 20, 2010, 07:42:41 pm yupi :) when is the new version expected? On Christmas? haha
Title: Re: SFS Solved Post by: manisandher on October 20, 2010, 09:52:11 pm You have no idea how happy it will make me not having to mess around with the SFS. So, is that one variable down, 99 still to go?
Mani. Title: Re: SFS Solved Post by: boleary on October 21, 2010, 05:52:14 am So, putting a "master tone control" on the face of the GUI wasn't the SFS solution? Ha! :) Look forward to hearing it. Right now I'm blown away by how good special mode sounds with Ramdisk and the desktop.
Title: Re: SFS Solved Post by: PeterSt on October 21, 2010, 10:25:02 am Ok, let's start with what happened to the sound;
First thing to notice is loads of punch and better bass. More deep, more roaring. The freshness is of a level unheard, and the fragility going with that tells me it's not fake (like the freshness of distortion). What you can notice all over is a degree of balance which is so good. All fits. Voices have more body and generally the lower mid comes more forward. In the room with the musicians. I can play loud in a way I don't think I could before. "Magic" would be a nice description indeed.

But still something is wrong. There seems to be a high frequency layer somewhere, not exactly described as sibilance. But there. It caused me to shut off the whole system while listening to Ennio Morricone with his higher pitched violins and background women. Or maybe I should have just lowered the volume. :scratching: BUT, and I'm not sure yet, I think this is caused by me hopping over to Core Appointment Scheme-1, which contributes to the better bass, and also adds that level of "on/off" I hear. It's a bit back to W7 as it seems. Notice that Scheme-1 (for a 4 core processor) has the sound on the 2nd Core, and everything else *not* on the 2nd core (mutually exclusive). So, for theories ... (and that's why I did it). People may try whether they perceive the same as I do with Scheme-1 (it's not so popular I think). Ok, all is beginning to become one large complex thing, and all may be related.

On to SFS (Split File Size) ... As I described elsewhere (3 weeks back or so), I already noticed some counter-intuitive behaviour with a large SFS vs a small one, the large implying no I/Os at all, the small implying many more, while I could see the PC being so much more busy "with" the large SFS; As you may recall, right from the start I attributed this to "memory" in general, and stuff like memory far away vs memory closer, remapping, or whatever I could think of. Well, that was not it ...

Of course we are doing relatively crazy things with using "arrays" of hundreds of Mega Bytes. Not that a PC wouldn't be able to cope with it, but things get rather exponential when doing things not the "right" way. Uhm ... the right way ? I think it was in 2008 somewhere that I rewrote the whole memory management in order to keep dot-net away from my precious arrays, and that worked. This is all about the "managed" stuff dot-net implies, and the "managed" part is largely about the developer not needing to take care of throwing out unused arrays and such, because the "OS" takes care of it. One of the huge problems (for all dot-net developers) is that the OS throws out unused memory (read : makes it available again) once it thinks there's some spare time to do it. So, what do we do ? we set our Thread Priority to Real Time because someone (me) thought it could have it, and next the OS itself gets less time to deal with this "garbage collection". But it will try ... It will try to free that 500MB array and merge it into other free space, in an attempt to create as large as possible contiguous blocks of memory, so a next request for a large array can get that space, instead of an "Out of memory" because that one (say 500MB) contiguous space is not available, never mind 1GB is free in total.

The above summarized (and not that you need to understand), back in 2008 I was able to make the total (for everything) memory used stable in this regard, not knowing what it all implied - and the OS always trying and trying to be smarter than me (which of course is impossible, haha).
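For readers who want to picture the kind of .NET mechanics being described here - a large sample buffer kept out of the garbage collector's reach, plus "Real Time" priorities - a minimal hedged sketch in C# could look like this. The sizes and names are invented for illustration; this is not XXHighEnd's actual code:

```csharp
using System;
using System.Diagnostics;
using System.Runtime;
using System.Runtime.InteropServices;
using System.Threading;

class PlaybackBufferSketch
{
    static void Main()
    {
        // One large sample buffer allocated up front (say, an SFS-sized chunk).
        byte[] sfsBuffer = new byte[260 * 1024 * 1024];

        // Pin it so the garbage collector cannot move or compact it during playback.
        GCHandle pin = GCHandle.Alloc(sfsBuffer, GCHandleType.Pinned);
        try
        {
            // Ask the CLR to avoid intrusive collections while we are time-critical.
            GCSettings.LatencyMode = GCLatencyMode.LowLatency;

            // Raise priorities the way the post describes ("Real Time").
            Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.RealTime;
            Thread.CurrentThread.Priority = ThreadPriority.Highest;

            // ... fill sfsBuffer with converted samples and feed the audio device here ...
        }
        finally
        {
            pin.Free(); // hand the buffer back to normal GC bookkeeping afterwards
        }
    }
}
```

The point of the sketch is only the trade-off Peter describes: the higher the playback priorities, the less room the garbage collector gets to clean up behind buffers like this.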
So, those who were around back then may recall that after the 8th track played you suddenly could receive an Out of memory, and only because the OS was too late to free memory which was needed again (by XX). Long live dot-net. But that was solved.

The past week I have been reading and reading about how the various versions of Windows OSes tried to improve on this, how obsolete memory is copied from one space to a next space, and later to a next space again, and among other things I learned how our 500MB blocks of memory are copied and copied and copied, until maybe at last it's thrown out definitively (and who did not watch the memory usage going up and down without reason ? -> of course you must know what's happening under the hood, but I have always been very surprised, exactly knowing what XX is doing). And so I started tweaking with the most nasty things, in order to again be smarter than the OS, and be ahead of all this; If you'd see the end result in coding it's all quite simple, but it took me days to get there. I won.

The result ? well, for one thing that may make some sense to you, I - with 3GB of RAM and a 1GB RAMDisk, thus 2GB of available memory - can now use 260MB of SFS at 8x Arc Prediction Upsampling. Before this was 100MB. Notice that this is not a difference of 160MB because you have to look at the net result, and the 160MB gross difference is something like 8x that in my case (very roughly). So, I can just use 1.2GB more memory now. Btw, in total this comes to over 1.6GB, 400MB left for the OS. In the program itself I changed nothing, but I manipulated the OS' working. Of course I now can also increase the RAMDisk to 2GB, and use my (max) 100MB SFS from before ... :yes:

Very roughly speaking (hence far from correct, but hopefully understandable), with a low SFS the memory chunks the OS wanted to deal with are small, hence take fewer CPU cycles to deal with, whereas the large chunks take way more. BUT - and this is the tricky part - especially with the large SFS the OS could find itself near out of memory, and thus forced the unused memory to be cleared. When this happens all is ok, but again but : in the later version I have here. Not yours. In yours it still could build up again, and depending on so many things (like priority) the OS could get out of its "squeeze" or not. The stupid thing is, no matter what, the memory is actually always full of unused data, and it is only a matter of "when shall I (the OS) free it".

As I said, all is one complex matter of related things, and we started to create RAMDisks. Ah, good for sound. But no, not good for the SFS thing, because the OS was again cornered by it; less memory available, so more need to free the old stuff. In my system, the OS was spending over 80% of my own CPU cycles on this, with an everlasting result of nothing (which I so smartly caused in 2008 :swoon:). So, 80% forever, result zilch and still the memory not available to me. Now ? something like 0,001 or even 0,000%.

While I could see this all happening, besides not being sure how my manipulation would work out on another OS (as said, MS always tries to improve on it), I also could see it didn't always work. So, the next task was to find something I could use from within the program that could check this "data". This is what I spent my time on yesterday, and my "solved" from this topic came when I was able to manage that. So, at this moment I'll receive a message when too many CPU cycles are spent on rubbish, and a restart (or two) of XX solves it.
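The natural way to read such a figure from within a .NET program is the CLR's "% Time in GC" performance counter; whether that is exactly the check Peter built in isn't stated, so take this as a hedged sketch - the instance name and the threshold are invented:

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class GcTimeCheckSketch
{
    static void Main()
    {
        string instance = "XXEngine3";           // hypothetical process instance name
        const float warnThresholdPercent = 10f;  // made-up limit for "too many cycles on rubbish"

        // ".NET CLR Memory" / "% Time in GC" is a standard counter published by the CLR.
        using (var gcTime = new PerformanceCounter(".NET CLR Memory", "% Time in GC", instance))
        {
            gcTime.NextValue();                  // first sample just primes the counter
            Thread.Sleep(1000);
            float percent = gcTime.NextValue();

            if (percent > warnThresholdPercent)
                Console.WriteLine("Warning: {0:F1}% of CPU time is spent in garbage collection.", percent);
            else
                Console.WriteLine("GC overhead looks fine: {0:F1}%.", percent);
        }
    }
}
```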
Maybe with you it will never work (in the first version of it), so we'll have to see about that. In any case it is configurable, so you can leave it out anyway (and compare SQ). Of course I don't like the restarts, so what I'll try next is to perform this "restart" myself.

Not sure yet, but I think it is fair to conclude that any copying XXHighEnd does (to the RAMDisk) also is related, because this too consumes memory, and it won't be freed - hence it will cause the OS to be (more) in the corner again. Whether my tweaks solve that or not I don't know (for you to compare if you want), but notice that here this now also is related to the option of just not starting Playback while the conversions and copying are done anyway. The result ? XXHighEnd will quit, which by itself *will* mark all the used memory as obsolete, while next you start XXEngine3.exe (Alt-P) and from there on all is fresh. Yep, it gets more and more complicated ...

One more thing : I tested how this all works with Virtual Memory being shut off. Well, then it does *not* work. Think of it like this : My SFS of 260MB (resulting in over 1.6GB) will force the OS to swap out all which can be swapped out. This is how it ends up using under 400MB of memory (more normal is 600-700MB). Without Virtual Memory this can't happen, and thus my SFS has to be lower. But how much ? You'll never know, and maybe after 50 minutes of playing some scheduled task starts, and you will be out of memory (the XXEngine3 memory can't be swapped out). This has become far more apparent, because the extra 1.2GB I can now use was before just unused space, which the OS could always fall back on before running out of memory. Not so anymore, because I now use that ... This too has "SFS" implications, because without virtual memory the OS *has* to free the memory (while I'm running at Real Time priority), while with virtual memory it can just free a block of sufficient size (and notice that block already is in virtual memory, as I found long ago), and this is a far easier task for the OS.

All 'n all this is how our judgement on SFS size can vary very much, and if you are able to reason out how to set what when, well, you must be mad. Peter

PS: I am still not 100% sure whether the SFS size doesn't matter to the sound, and to me it looks like a smaller size is better; this is supported by the theory that with a fully used memory (virtual memory being there) still some stressing must be going on, as long as the OS wants to do something it just can't. And keep in mind how we developers tend to deal with such a thing : try once every 20ms whether whatever it is that can't be done, can be done now. It will be there forever, because it never can be done as long as playback continues. :heat: Title: Re: SFS Solved Post by: Marcin_gps on October 21, 2010, 11:18:13 am If I understood you right, there's a long way to solve this for everyone :scratching:
So just for starters, I should turn the virtual memory back on, right? Title: Re: SFS Solved Post by: Nick on October 21, 2010, 11:22:42 am Peter,
My hat's off to you, that is one great piece of diagnostic work to get to the bottom of SFS, and it sounds like solving the problem has led to fundamental changes - I'm really looking forward to hearing the results. Also a great explanation; it may just be me, but having a bystander's view of the elegance you craft into HighEnd under the bonnet really adds to the appreciation and enjoyment of using it. Nick. Title: Re: SFS Solved Post by: PeterSt on October 21, 2010, 12:15:17 pm Nick, thanks.
Marcin, Quote If I understood you right, there's a long way to solve this for everyone :scratching: No, why ? I just don't know at this moment how it will work out for everyone. It should be OK though. But this is why I wanted that check in; if it's not right, the message will be there. Quote So just for starters, I should turn the virtual memory back on, right? Mwah, maybe not. Not now. Things work so differently in the version you use that I can't predict what will happen. So, it seems a waste of time. 0.9z-3 will be up in the weekend. (that's what I started to think in Title: Re: SFS Solved Post by: Marcin_gps on October 21, 2010, 12:24:05 pm Thank you! Fingers crossed ;)
Title: Re: SFS Solved Post by: CoenP on October 21, 2010, 02:37:45 pm The OS messing around with memory all the time and compromising SQ, hmm I've read this before in a different context.
Or maybe not so different. Now I understand why the designers of the NovaPhysicsGroup memory player (PC & DAC & screen in a box) bothered to write their own 'memory OS' memory-management code (as advertised on their old site - if executing such code is possible at all, because all their stated digital theories are questionable). Save for a probably well-integrated DAC and analog stage, their memory-management philosophy must be key to the raved-about sound. I don't believe the 'Rur' or quantum-mechanical jitter voodoo blah blah has anything to do with superior sound. Have we entered the last frontier....? Title: Play In Ramdisk? Post by: goon-heaven on October 21, 2010, 03:50:25 pm Peter,
Is it possible to play directly in RamDisk? i.e. don't copy into OS jungle memory? Would this not avoid the heap issue? i.e. copy music into RamDisk. Defrag. Play? :tomatoes: Steve Ministry of Silly Questions Title: Re: SFS Solved Post by: PeterSt on October 21, 2010, 04:17:23 pm Hahaha, nice thought. But then XXHighEnd would stop being a memory player, and the smallest possible portions would go into RAM (always needed).
But who says - these days - a memory player is a good thing ? Ok, I (still) do. :) :) Peter Title: Re: SFS Solved Post by: Josef on October 21, 2010, 05:19:42 pm Quote Hahaha, nice thought. But then XXHighEnd would stop being a memory player, and the smallest possible portions would go into RAM (always needed). Peter, could you elaborate on this? I'm not sure what goon-heaven had in mind, but I was wondering what would happen if you used memory-mapped files. It would cut down XX memory requirements to (essentially) ~0 (compared with the current SFS setting). It would leave more memory for the RAMDisk. It would avoid the problems that you seem to be having with memory management (although I think in essence you are solving self-inflicted problems because you are using .Net). It would, in theory, solve all SFS issues and provide more predictable resource usage instead of the flurry of I/Os you have now when you load the next track. In theory, it would also have a net negative effect on SQ when playing from HDD/SSD, but shouldn't it be equal (maybe even better?) if a RAMDisk is used? Of course: theory & practice don't necessarily go hand-in-hand, so I'm curious if you have any experience with the memory-mapped-files approach (especially in tandem with a RAM disk) you'd like to share with us? Title: Re: SFS Solved Post by: Josef on October 21, 2010, 05:36:27 pm Quote The OS messing around with memory all the time and compromising SQ, hmm I've read this before in a different context. I'm afraid the OS is not the problem here. Peter's post seems to me to be describing problems he is having with .Net memory management and, specifically, garbage collection. If memory is allocated directly from the OS (instead of via .Net) it is trivial to lock it in RAM and keep the OS's hands off: No paging, no garbage collection, no object reference counts yadayadayada - nothing. Just a piece of memory sitting there, quietly, undisturbed........ But if you go via .Net.....well, it is _still_ possible, but not nearly as simple..... Title: Re: SFS Solved Post by: PeterSt on October 21, 2010, 11:22:08 pm Josef, your last post ... all correct, but completely solved.
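What Josef sketches in that last post - memory allocated outside .NET and locked into physical RAM - corresponds to the Win32 VirtualLock route. A rough, hedged illustration follows; the size is arbitrary, error handling is minimal, and nothing here claims this is what XXHighEnd actually does:

```csharp
using System;
using System.Runtime.InteropServices;

class LockedBufferSketch
{
    // Win32 calls that pin a region of virtual memory into physical RAM.
    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool VirtualLock(IntPtr lpAddress, UIntPtr dwSize);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool VirtualUnlock(IntPtr lpAddress, UIntPtr dwSize);

    static void Main()
    {
        const int sizeBytes = 64 * 1024 * 1024;           // 64 MB, arbitrary example size
        UIntPtr size = new UIntPtr((uint)sizeBytes);
        IntPtr buffer = Marshal.AllocHGlobal(sizeBytes);  // unmanaged: the GC never touches it

        // Note: locking large regions normally also requires raising the process
        // working-set size (SetProcessWorkingSetSize) beforehand.
        if (VirtualLock(buffer, size))
            Console.WriteLine("Buffer locked in physical RAM: no paging, no GC involvement.");
        else
            Console.WriteLine("VirtualLock failed, error " + Marshal.GetLastWin32Error());

        // ... fill the buffer with sample data and play from it ...

        VirtualUnlock(buffer, size);
        Marshal.FreeHGlobal(buffer);
    }
}
```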
Your post before that one ... NO ... not familiar with that. So, I will investigate it. Sounds good to say the least ! :clever: Title: Re: SFS Solved Post by: Josef on October 22, 2010, 12:05:18 am Quote Your post before that one ... NO ... not familiar with that. So, I will investigate it. Sounds good to say the least ! I'm not an expert but looking at this it seems rather simple to implement: http://www.developer.com/net/article.php/3828586/Using-Memory-Mapped-Files-in-NET-40.htm Title: Re: SFS Solved Post by: Josef on October 22, 2010, 12:35:17 am Quote Josef, your last post ... all correct, but completely solved. Great! Although, reading what you wrote today, let me reserve some doubt until I see it :) Here's why: Looking at the current version, I noticed a rather strange memory usage pattern and I'm wondering if you got rid of it completely in the next version? For example, if I set SFS to 100MB I (it turns out naively) expected to see a 100MB buffer somewhere. Oh yeah, it is there. But there is another 100MB one. Hmmm..., ok. And then, there is yet another one. And, finally, one more, yet this time 'only' 50MB in size. So, a 100MB SFS setting causes XX to allocate 350MB? Sounds like either a bug or some heavy tricks are going on? :) Title: Re: SFS Solved Post by: PeterSt on October 22, 2010, 12:41:39 am Of course I can't tell whether you see "live" memory or dead, but a native track of 16/44.1 :
- has to be read into memory;
- has to expand to whatever it is (like 24/192);
- has to have a "cyclic" counterpart (Gapless, doubles the latter);
- has to be stuffed into the device's buffer (also cyclic, but this is only a fraction of it all).
If you see more, and it's not dead, please let me know ! Peter Title: Re: SFS Solved Post by: Josef on October 22, 2010, 01:39:58 am Not sure what your definition of 'live' is, but it sure seems 'live' to me as it is owned by the XXEngine3 process and definitely taking up RAM space :)
Check out below those LargeObjects and total stats. Title: Re: SFS Solved Post by: PeterSt on October 22, 2010, 09:14:31 am Hi Josef, thanks.
But isn't this (see below) exactly what I described ? The only difference is that you upsample 2x (if I interpret it correctly). Also, be careful about when you look, because in your version things can change along the way, even at an "unlucky" track boundary (it may grow again). This has all been eliminated. Peter Title: Re: SFS Solved Post by: Flecko on October 22, 2010, 10:13:25 am Quote First thing to notice is loads of punch and better bass. More deep, more roaring. The freshness is of a level unheard Yeehaa! :yahoo: You mentioned that you use all four cores with the appointment schemes. I just have two :( Do you think it could be worth investing in a quad core? I need more RAM too, so it might be a new machine :) Title: Re: SFS Solved Post by: Josef on October 22, 2010, 12:16:13 pm Quote But isn't this (see below) exactly what I described ? The only difference is that you upsample 2x (if I interpret it correctly). Nope: no upsampling, no funny business: just a single 'straight' 16/44 WAV track. Quote - has to be read into memory; - has to expand to whatever it is (like 24/192); - has to have a "cyclic" counterpart (Gapless, doubles the latter); So you're saying that's where those 3x100MB buffers come from, i.e. 3.5x SFS memory is needed 'by design'? If so, OK; I could see 2x SFS being needed for gapless, as in SFS1 holding the current track and SFS2 holding the next (or, if SFS is small, SFS1 holding the current 'piece' and SFS2 holding the next 'piece'). But if that's how it works then buffer #1 above just screams to be eliminated, as it seems relatively easy to do any conversion as the file is being read instead of reading the whole file and then converting the whole thing. Sounds like a perfect application for the memory-mapped-files approach...? (btw: does 'conversion' mean e.g. x-upsampling/Arc prediction stuff? I'd sure like my WAVs _not_ to be 'converted' :) as I don't use any of that stuff...) Come to think of it, do I understand correctly that this 'conversion' is happening _during_ music playback? (as I can see that the next track (or 'track piece') is being loaded by a 2nd thread during playback of the current track ('piece') just a little before it ends)? Wouldn't it be an idea to instead do this 'conversion' when 'Copy to XX' is ticked in the parent process? Then all file copying/conversions (i.e. a really nasty torrent of I/Os & CPU cache-killing memory copies) would be done _before_ music starts playing (assuming 'Start XXEngine during conversion' is not checked). Wouldn't that mean you'd both need far less memory and have a lot less 'system disturbance' during playback? Or have I misunderstood completely? :) Title: Re: SFS Solved Post by: PeterSt on October 22, 2010, 02:02:15 pm Quote Or have I misunderstood completely? :) A little, yes. Haha. You wouldn't want to wait for all that, and you'd need an 8x (etc. etc.) larger RAMDisk. But you may try AI filtering to get the idea. About the memory sizes ... The tracks are always upgraded to 32 bits if you are using a more-than-16-bit DAC. Even if you are not, the memory is reserved like that. If this ends up in well-meant (!) advice on how I should organize the program, please let's stop. I mean, I have better things to do than explaining to you (or anyone) why it works like it works. About the memory mapped thing (I'm fully open to that) ... I don't see the benefits and only the contrary. But maybe I see it wrongly; Remember : this is all not about saving memory or anything; if that were the case XX wouldn't be a memory player.
So what happens with the MMF's is that we can treat a file as if it were memory, while in the meantime the portion needed is read into memory. Then what ? then we have the same as we have now; it could have been nice or maybe helpful if I hadn't made all this stuff already, but I did - and so now it will be the other way around : I am (we are) in the hands of Bill Gates as to how this all works and what it implies. No thank you !

If it were so that nothing's read into memory and a disk could be treated like memory, *then* we'd have another case. But it doesn't work like that (I'd like to say it works the other way around, but that is not really the situation because in fact the former is true, but only "logically"; not physically). You can see how ridiculous it is (ok, through my eyes) in the examples of inter-process communication. So, we write something to memory (which we do, but which ends up on the "mapped disk"), and now another can read from the same memory address and communicate. :swoon::swoon::swoon::swoon::swoon: Just write some ".dat" files (you know), and be normal !

So, in the end we deal with memory but read/write from/to disk (RAMDisk of course), and next we think we'll have a good response ? yea, maybe, if I first load the part of the file concerned into memory (which, I think, can't even be avoided for *any* response). But if I'd behave transparently (for myself) and operate at the byte level, knowing that disk transfers go at the block level, what do you think the result will be ? The only thing I don't know - hence couldn't find out/prove - is at what level transfers go to the RAMDisk, because remember, whatever cluster size I defined, writing to the RAMDisk didn't matter a single fraction of a second. So, supposing this happens at the byte level anyway, *then* there's a chance for this. But I won't believe this, because e.g. copying to the RAMDisk just takes way longer than copying an internal array (writing to it), which easily goes within 0.1sec for 500MB or so.

No, the real idea would be the opposite of being a memory player to "solve" this. Ehh, solve what ? :) Peter Title: Re: SFS Solved Post by: Josef on October 22, 2010, 02:31:15 pm And, since I'm probably _really_ getting on your nerves by now (hope you can see it's with seriously good intentions although my tone sometimes may not show it :) ) may I suggest that you do _not_ kill the XXEngine process every time a playlist is done?
Why? Well, now that you seem to have solved your .NET memory management issues (I'm guessing by sticking to blittable types and pinning down SFS buffers?) and are seeing (hearing) tremendous SQ improvements (as you reported), I am sure you will be able to see the value of not having to 'recreate the universe' every time a new playlist is started: By having XXEngine 'stick' around with its initially allocated RAM you will be helping the OS minimize memory starvation issues by the simple fact of not working against it! (which I'm afraid you are doing at the moment by killing & restarting the Engine with every playlist, which inevitably increases memory fragmentation....) Title: Re: SFS Solved Post by: PeterSt on October 22, 2010, 03:42:33 pm Hmm ...
This requires a bit of upside-down thinking from my side; Technically this would be easy to make, but it would require something like "as soon as the OS is up" and always before XXHighEnd starts. So, then XXHighEnd will start on top of it, instead of "under" it (memory wise). Should be better, especially when you use an SFS near the memory limit. Also - and as I found by now - the few rounds I need to get everything out of the way (GC) are not needed at all when using a low SFS. This (to me) means that when the several rounds are needed, either no possibility exists for freeing memory at that combination of things, or (more realistically) the dead memory generations (0, 1, 2) just need to promote. But for sure XXHighEnd.exe will be in some small space somewhere when it starts first, with XXEngine3 on top of it, leaving that small space of XXHE uncontiguous at that moment, which the OS won't like. I'm blahblahing a bit, but I think your suggestion makes sense (or at least has a good chance of making it better). Yes, I will try that. Peter Title: Re: SFS Solved Post by: Josef on October 22, 2010, 04:20:16 pm I already sent the previous post without seeing your latest:
Quote If this ends up in well-meant (!) advice on how I should organize the program, please let's stop. I mean, I have better things to do than explaining to you (or anyone) why it works like it works. I'm sad to hear you saying you're not open to 'well-meant advice', but it's your forum and you make the rules here, so I guess I'll have to shut up. I hope it's fair though to let me respond to your latest post? Quote it could have been nice or maybe helpful if I hadn't made all this stuff already, but I did - and so now it will be the other way around : I am (we are) in the hands of Bill Gates as to how this all works and what it implies. No thank you ! Fair enough, it's your product. Considering the ease with which this can be tested I'd personally at least try it: even if it does not work in terms of SQ it might reveal something interesting that was not known previously, which just might be useful.... But then, that's just me being curious. I guess I'd have to do it myself in order to find out, LOL... Quote If it were so that nothing's read into memory and a disk could be treated like memory, *then* we'd have another case. That is the point I was hoping you would catch on to, and IMHO the single most important improvement to XX (or, rather, to audio players in general) with potentially the greatest benefits: Look at your RAMDisk. It is nothing but software. A device driver to be exact. And it sits right alongside the OS. In fact, it is on a 'per tu' basis with the OS kernel. Compare that with the current situation of XX being a 'user' process (as opposed to system) running in what is essentially a virtual machine on top of the OS (.NET). What if XX could be written as a device driver (a la RAMDisk) but with a twist: It 'knows' about its contents not being 'virtual disk files' but sound data that can be directly accessed without having to go through the OS file I/O layer at all? And by virtue of effectively being a part of the OS, what other layers would also be removed? (not having to deal with the VM, much less garbage collection, etc. etc.). And because it sits there (like the RAMDisk) it is 'stable' and forever unchanging (except for contents, of course :) ) Now, how much closer is that to 'bare metal', what new possibilities would that open, and how far more could XX go and sound then? Boggles the mind.... Quote You can see how ridiculous it is (ok, through my eyes) in the examples of inter-process communication. So, we write something to memory (which we do, but which ends up on the "mapped disk"), and now another can read from the same memory address and communicate. Just write some ".dat" files (you know), and be normal ! I see that you have caught on to this one too: I was hoping you'd see the possibility of using MMF to 'replace' the *.dat & *.dao files. I don't, however, agree that it is 'ridiculous', at least as a matter of principle: I believe most people here can agree that introducing the RAMDisk brought positive changes in SQ. And, as you said yourself, it is not clear to you (nor to anyone else, me included) just WHY this is so. Is it a crazy notion to suppose that it has at least something to do with eliminating SSD/HDD disk-induced I/O interrupts, since such do not exist with a RAMDisk? And if this at least sounds plausible, wouldn't using MMF instead of writing/reading *.dat/*.dao to disk be beneficial purely on principle, as it eliminates disk I/O? (Yes, I know it may be a moot point if dat/dao are being written to the RAMDisk, but it still _has to_ make a difference as the path through the OS is very different.
Maybe that difference turns out to be small so it's not worth it, but would that make it 'ridiculous'?) BTW - I may be wrong (I don't have the logs with me) but if I remember correctly, when you either write or read the dat/dao files they have 'non-cacheable' flags set. Those files are so small they can fit quite nicely in the OS cache and, as a free bonus, you are likely to avoid file I/O interrupts when reading them back in from the Engine, as there is a high probability they'll be served from cache. I.e. you may get MMF benefits (at least in this case) without having to use MMF (since you seem to have a particular dislike for them.) Quote But if I'd behave transparently (for myself) and operate at the byte level, knowing that disk transfers go at the block level, what do you think the result will be ? I don't know! And you don't either! That's the whole point, LOL! :) What I was (obviously unsuccessfully) trying to communicate is that if there is an interesting approach that has not been tried before, it might be worthwhile to check it out (if it does not require too much time). Obviously, the proposal mentioned above, to write XX as a music-player device driver, is a LOT of work, but trying out MMF really isn't. (BTW if you are interested in pursuing the device-driver approach maybe some people would be willing to help just because they find it interesting – for example maybe it could even be me (God forbid) ....) But to try to answer your question in a more concrete way: I speculate you would _not_ have to worry about speed. I have read about projects using MMF to provide type-ahead suggestions (you know, like when you start typing on Google and it provides suggestions in a drop-down as you type) on very, very large files: orders of magnitude larger than any sound file you will EVER have - bigger than a 32-bit 384kHz Beethoven's 9th, gapless, LOL :) And they had to do dynamic binary search on that file too, so they were jumping back & forth around that mega-giga-tera file without problems, where, in contrast, you only have to move sequentially forward. And the OS is prepared for it, so it will preload SFS sections before you need them - the OS is sometimes not that bad, you know, and they have removed most of the code Bill Gates wrote in his days.... :) Quote So, supposing this happens at the byte level anyway, *then* there's a chance for this. But I won't believe this, because e.g. copying to the RAMDisk just takes way longer than copying an internal array (writing to it), which easily goes within 0.1sec for 500MB or so. Yes, it _has_ to be slower as you are going through the whole OS file I/O stack, which can easily be seen in a debugger: there is _no_ difference compared to SSD/HDD disk access, so it can never do 500MB in 0.1 sec - but why would you need that? The sound card doesn't need anywhere near that amount of data in that time period, even with 8x oversampling etc. And you can ask MMF to open a view on the whole file instead of SFS chunks - As mentioned, it would read in as you move forward and you would not need to manage memory because MMF memory _does not count_ in your process set! So we could have even bigger RAMDisks as XX would be using only a couple of MBs :) But OK, I guess I misunderstood the purpose of this forum: I foolishly believed it was a place to discuss possible improvements to XX in particular and explore new boundaries in computer-based audio reproduction in general. Seems I got it all wrong, so I apologize and promise to shut up.
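For concreteness, the memory-mapped-file route Josef keeps advocating looks roughly like this with .NET 4.0's System.IO.MemoryMappedFiles. This is a hedged sketch only - the path and chunk size are made up, and whether it would help or hurt SQ is exactly the open question in this thread:

```csharp
using System;
using System.IO;
using System.IO.MemoryMappedFiles;

class MmfTrackReaderSketch
{
    static void Main()
    {
        string path = @"D:\Music\track.wav";     // hypothetical track file
        const int chunkSize = 4 * 1024 * 1024;   // 4 MB per step, arbitrary

        long fileLength = new FileInfo(path).Length;

        using (var mmf = MemoryMappedFile.CreateFromFile(
            path, FileMode.Open, null, 0, MemoryMappedFileAccess.Read))
        using (var view = mmf.CreateViewAccessor(0, fileLength, MemoryMappedFileAccess.Read))
        {
            var chunk = new byte[chunkSize];

            // Walk the file sequentially; the OS pages each region in on demand,
            // so the process never holds the whole track as one big .NET array.
            for (long offset = 0; offset < fileLength; offset += chunkSize)
            {
                int count = (int)Math.Min(chunkSize, fileLength - offset);
                view.ReadArray(offset, chunk, 0, count);
                // ... convert / upsample this chunk and hand it to the device buffer ...
            }
        }
    }
}
```

Note the contrast with the current design: nothing large lives on the .NET heap here, but in exchange every read goes through the OS file I/O and paging machinery - which is precisely the trade-off Peter objects to above.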
Title: Re: SFS Solved Post by: PeterSt on October 22, 2010, 05:52:31 pm Quote But OK, I guess I misunderstood the purpose of this forum: I foolishly believed it was a place to discuss possible improvements to XX in particular and explore new boundaries in computer-based audio reproduction in general. Seems I got it all wrong, so I apologize and promise to shut up. So ... You put a post A here, to which I respond in a first half indicating that you don't need to interfere with some things, because it only takes me the time to explain to you that you're not right. Next I spend time on your good ideas from the same post A in a second half - which is exactly what your last post E is about, we both have a nice post C and D in between, and now you suddenly end with this ? what happened to you ? Maybe read the gist of the sequence again and come to the conclusion that you read only half of both my and your posts ? I really don't get it. About the .dat files ... oh dear, I'd better not have brought that up. Not in front of your eyes (yes, I remember now :)). So, I suggest we don't act foolish (me included), get back your good mood, or reverse whatever it is that needs reversing. Ok ? Let's go on. Quote I don't know! And you don't either! Wrong, I do know. It will be way too slow, as I explained. Please don't get me wrong now : The fact that you don't know while I do should not lead to any strange remark about it being my forum and my rules (I say this just in case). The point is, it will (apparently) need explanation I don't feel the need to give. This is not your fault, but the coincidence of me having the advantage of knowing what is all happening in there. Look : Quote there is _no_ difference compared to SSD/HDD disk access, so it can never do 500MB in 0.1 sec - but why would you need that? How do you think the expanded / filtered / upsampled etc. track emerges ? Must I explain this ? isn't it the most obvious ? Please read back the last couple of posts, and see how I appreciate your ideas which are unrelated to my way of working - and how I proceeded on that. And still will, whether you (by now) like it or not. Hahaha. And oh, I didn't read it back, but I don't think I said I wouldn't try the RAMDisk thing either. I only predicted (or at least wanted to) that it will cause the opposite of what's generally needed : less interference from the OS. If you can explain that to me otherwise, I'd really (!) like to hear it, but I think you just extensively agreed by your own outlay. Or ? Having it all at some driver level ... also nice. But (obviously) the next thing that will happen is that Engine3 runs as a service, so it's already closer to that. Next ? next I don't see the difference (with driver level) at this moment, but possibly this difference will be there because of OS priorities etc.; I don't know yet. Alright, let's end this kind of silly post with : if you don't stop smoking I'll start drinking. It's time for that anyway, the latter. Again, thanks, and I mean that. Peter PS: Quote so I apologize and promise to shut up. I just dragged you in. :fool::) Title: Re: SFS Solved Post by: PeterSt on October 22, 2010, 06:14:33 pm Quote What if XX could be written as a device driver (a la RAMDisk) but with a twist: It 'knows' about its contents not being 'virtual disk files' but sound data that can be directly accessed without having to go through the OS file I/O layer at all? FYI : For KS I had that running. Really. In an indirect way, but still the net effect was just like that.
But, it can only work for certain devices, on XP it wouldn't work at all, etc. At the time I didn't want to dive into "interference" at the driver level, which actually would be needed. Today it could still be a tad "too far" for me, but I really like the idea even better than before (knowing more about what's happening etc.). :grazy: Title: Re: SFS Solved Post by: Telstar on October 22, 2010, 09:30:35 pm Quote At the time I didn't want to dive into "interference" at the driver level, which actually would be needed. Today it could still be a tad "too far" for me, but I really like the idea even better than before (knowing more about what's happening etc.). :grazy: I don't have the technical background to judge this, but I would attempt having the engine work at driver level and see if it changes the SQ or not (while still using the ramdisk of course). Title: Re: SFS Solved Post by: Josef on October 22, 2010, 10:45:33 pm Quote So, I suggest we don't act foolish (me included), get back your good mood, or reverse whatever it is that needs reversing. Ok ? Only if I get an invitation to a DAC audition and get a glass of nice red wine too! Quote How do you think the expanded / filtered / upsampled etc. track emerges ? Must I explain this ? isn't it the most obvious ? Believe it or not NO, but I see now what you mean and I'm afraid this time you misunderstood me (for a change). I can see you want ultra speed to write to memory so this CPU-intensive 'stressful' task of converting a track (well, only if you use 8x Arc etc., otherwise it's not so stressful, but OK let's assume the worst case) can be done as quickly as possible - but you don't, repeat, don't need the same speed for _reading_ the file from disk - in fact, you cannot have it even if you wanted it. What? Well, as we seem to misunderstand each other frequently let me try to explain using an example: - Suppose we have XX written as a device driver (_not_ a 'RAMDisk' - let's call it an 'Xtra Xeroxed audio' driver or 'XX driver' for short :) ). - The XX driver has a defined RAM size that it uses as a music data buffer - just like you would set up your RAMDisk size now. - Unlike the RAMDisk however, the XX driver does NOT appear as a disk D: E: F: whatever, because it is NOT an implementation of a disk I/O interface (it's an 'audio playback' driver after all - just repeating to make sure I'm obvious). - Access to the XX driver is controlled and only possible via the XX GUI. Otherwise it is completely invisible (except that your usable RAM is visibly reduced by the XX driver's buffer size, also just like with the RAMDisk now). Ok, that was easy - But how does it work? In 6 steps: Step 1: The GUI starts working just like it does now: it prepares the *.dat/*.dao instruction files and sends them to the XX driver (You'd have to use a proprietary API or whatever method you like (oh yes, MMFs could work too, LOL) as you won't be able to 'just copy a file' because the XX driver does not have a file I/O interface at all!) Step 1 addendum: BUT, when 'Copy to XX' is checked it does NOTHING! (apart from noting the setting with all the others in the dat/dao instruction files it just sent to the XX driver) Step 1 addendum 2: The XX driver (which replaces what is now XXEngine3.exe) has full, absolute, unlimited control over its allocated buffer! It does NOT have to go through the OS file I/O stack to write to its RAM! Hence, it can do ultra-fast conversion in its own buffer, which is only limited by the speed of the computer. IMHO this is not really that important (see Step 3) but I'm making a note as you seem to think it is.
Step 2: So, our perky XX driver opens tracks in playlist (specified by dat/dao) via MMFs! (I can hear you: 'Oh my God here he goes again I just told him it's baaaaad, noooooooo......') Ahem, why MMFs? Because that way XX driver needs 0, repeat, zero memory to read _all_ tracks regardless of their size! So we can have XX driver set to use 3GB RAM if we wanted to, and we do, so damn the RAM we want more music! (notice, we do NOT have nor will ever need a RAMDisk!). And if we had 8GB RAM we'd make a bloody 7GB XX driver because we're audio freaks and have dedicated audio computer damn it etc etc. you get a picture....(not that it wouldn't work on a 2GB RAM Intel Atom notebook but our XX driver buffer would naturally have to be proportionally smaller....) Step 3: As XX reads in tracks one by one (for sake of argument, let's say it reads 10MB 'chunks' at a time - it really could be anything, does not matter as you are limited by HDD/SDD disk speed anyhow) each 'chunk' gets 'converted' right there 'in-flight' and bytes fresh & ready for sound card are put in XX driver's internal buffer starting from offset 0. Step 4. Playback thread starts playing music as soon as first chunk is 'converted' OR as soon as buffer is completely full (similar to current 'start engine while converting' setting so it's under user control) In any case, loading thread stops loading/converting when buffer is full. Step 5. Music plays, and plays, and plays and generally absolutely nothing interesting happens....track 1, gapless into track 2, I did it myyyy waaayyy, track 3, she loves you, yeah, yeah, yeah, track 4....track 7?8?9? Step 6. Ooops, XX driver sees there's only, say, 20 seconds of music left in the buffer - what do we do? Panic! Or maybe not: Go to Step 3, repeat until playlist exhausted. When playlist is done (or if user stops playback) XX GUI gets reactivated same as now. Notice when you start new playlist absolutely _nothing_ has changed in memory layout: XX driver is there all the time with its ultra fast ultra simple (single!) buffer and will just write new playlist content over previous....no allocations, deallocations, garbage collections, VM swapping, no nothing.....just music playing forever, undisturbed............ Title: Re: SFS Solved Post by: Marcin_gps on November 04, 2010, 10:35:15 pm Peter, are you familiar with that (http://msdn.microsoft.com/en-us/library/ms190730.aspx)? Could it be of any use with XXHE?
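To make Josef's Steps 3-6 above a little more concrete, here is a toy, hedged sketch of that refill scheme in C# - one fixed buffer, a loader that tops it up in chunks, and a playback loop that asks for more when roughly 20 seconds remain. All names, sizes and thresholds are invented, only the byte accounting is modelled, and nothing like this exists in XXHighEnd today:

```csharp
using System;

class RefillLoopSketch
{
    const long BufferBytes   = 64L * 1024 * 1024;   // the driver's fixed buffer (arbitrary)
    const long ChunkBytes    = 10L * 1024 * 1024;   // Josef's "10MB chunks"
    const long BlockBytes    = 17640;               // ~100 ms of 16/44.1 stereo per device write
    const long RefillAtBytes = 20L * 176400;        // ~20 seconds of 16/44.1 stereo left

    static long loaded;                              // bytes loaded + converted so far
    static long played;                              // bytes handed to the device so far
    static long playlistBytes = 200L * 1024 * 1024;  // pretend the playlist is ~200 MB of audio

    static void Main()
    {
        Refill();                                    // Steps 3/4: fill before playback starts
        while (played < playlistBytes)
        {
            played += BlockBytes;                    // Step 5: drain one block towards the device
            if (loaded - played < RefillAtBytes)     // Step 6: running low? top up again
                Refill();
        }
        Console.WriteLine("Playlist done; buffer and memory layout unchanged throughout.");
    }

    static void Refill()
    {
        // Step 3: load/convert chunk by chunk until the buffer is "full" or the playlist ends.
        // In a real player the conversion (upsampling etc.) would happen right here.
        while (loaded - played < BufferBytes && loaded < playlistBytes)
            loaded += Math.Min(ChunkBytes, playlistBytes - loaded);
    }
}
```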
Greets, Marcin Title: Re: SFS Solved Post by: PeterSt on November 04, 2010, 11:26:25 pm Yes, but it is not relevant here because with the continuous access as required here (each 1ms or less including priorities) nothing will be swapped out. :)