Ok, so you have been busy !
The way the OS acts is explained in Microsoft's technical papers. You can search for more info at Microsoft TechNet.
Yes, before I posted my large post yesterday I read into all of it, and my post did not (try to) contradict that.
Btw, note that half of my post was about reasoning out what Task Manager actually shows under "Swap file" (or Paging file, I translated it from Dutch), with the conclusion that it should read as Virtual Memory (including or excluding the Swap file, depending on whether that has been switched on or not). So, my earlier statement that Vista always keeps on using some swap file was wrong. Instead, it always keeps on paging to another part of memory (which is what you said from the beginning), but this is so because the OS can't directly use the full available physical memory.
The latter is my finding, or better : my conclusion from everything in the white papers, because it is not described directly. Both "processes" - 1. not being able to use the full memory and 2. paging to the part which is available - are described separately, and not in the combination I present : by a kind of accident, the memory which can't be used can be used after all, because it can serve as additional virtual memory.
What you describe as "the kernel determines its size" (or similar) is therefore only partly true. What happens is that the kernel is able to determine unused memory space and dedicate it as paging area. Also note that the memory cannot be fully utilized in all cases, which again is a matter of (faulty) drivers.
Now, correct me if I'm wrong, but judging from your original subject we both seem to have our own subject. And oh, I think they are both equally important.
Thus, your subject seems to be the explanation of how the OS deals with available physical memory vs. physical memory that is not available but can be extended to virtual memory ...
... while my subject is the stupidity of that, and why it can't work (the other half of the large post from yesterday tries to explain that).
Well, I don't blame anyone who can't understand what I have written in that post, but you can't counter it by explaining the theories of operation. And please keep in mind : I already knew them by the time I wrote that post (which was not so the day before yesterday).
Important for you is that we agree, or at least that is how I see it. However, we only agree about the theory of operation, and not about the flaws following from it (remember, you didn't go into my subjects, and explained how it works instead). So :
So what I thought of it before (in the topic I referred to) didn't change a single bit now that I know how it works, and you could say that I could see how things work without really knowing it.
Of course one thing changed : you *can* switch off paging to disk, hence no disk I/O will occur for that reason after doing so.
Things may look trivial (the theory vs. how it works out in practice), but IMO it really is not;
When you said that obviously the CPU is involved when the additional (!) virtual memory gets full, I claim that what happens there is b*llsh*t : the CPU activity there is unnecessary, and besides that it *has* to run at the highest priority because of the processes involved.
The main point behind the latter is the fact that it is not the normally available physical memory which gets full, but the paging area which gets full.
And most important is that it gets full not because of swapped-out data, but because of data prefetched in case something will swap out. I know, you said something like "you can't know what's in that area", but please trust me, I do -> each byte of user data which is needed in the normal physical memory is copied to that additional memory as well (at low priority), just in case the normal physical memory gets full.
Again, with some time this can be seen rather easily by looking at Task Manager.
This all is rather unrelated to any theory of operation, because this happens with the normal swap file just the same. That is, this is what I expect but cannot see without additional investigation. So, what I expect is that when the additional virtual memory gets full, the swap file comes into play, and any additionally needed memory in the paging area then goes to the swap file.
This means - in that case - that each user byte loaded goes to the swap file as well and takes I/Os. Also, as soon as that user byte is freed, I/Os emerge again to free that byte from the swap file. Now :
Since I now know better what happens when the swap file has been shut off, there's a kind of interesting conclusion (until proven wrong) :
As I said in the long post from yesterday, the 700MB or so which can't be utilized in my system (which is so because of driver/mobo issues) actually is an advantage. Huh ?
Yes, I think so; where my system starts off with 600MB of memory used by the OS (could be 500MB depending on things), I have 700MB spare in there. Now, coincidentally, I have another 700MB of additional virtual memory, so both balance out. And keep in mind : each user byte in normal available physical memory goes to the additional virtual memory as well, and I have 700MB of memory for playing music without the stress of the additional virtual memory getting full and the operations to solve that. Note that in the best circumstances (Mem box) this allows me to play two subsequent "full CD tracks", because I actually have 1400MB free and they can just be utilized. However, when the second track is loaded, playback will be interrupted briefly near the end of track 1.
What I may now claim (must think about it somewhat further) is that the less additional virtual memory you have, the earlier stress time applies. Also - and this might be the most important conclusion if it's true -
when the swap file is simply active, stress time does not occur (ok, unless the set limit of the swap file is reached).
astacus21, in order to judge these phenomena you will get nowhere by proving the theory of operation to be right (or wrong for that matter); instead you must do what I did : just start playing tracks, keeping in mind that one minute of 44.1/16 takes around 10MB, and watch and watch and watch (Task Manager). You will notice that the green figure (the normal available physical memory actually in use) is not consistent with your thinking, which is caused by the Managed Code phenomenon, and you will also see that the derivative of the green figure, the total virtual memory in use, grows/shrinks linearly with the green figure, starting off with no user programs loaded (the, say, 600MB). Now, two happenings may break the linearity :
1. Normal available physical memory gets full
2. Additional virtual memory gets full.
Ad 1.
Will cause anomalies which the OS is able to recover from (stress time).
Ad 2.
No problem, because the data is available in the paging area, which is in additional virtual memory.
Watch out though when 1. above happened first; I claim an Out of Memory will occur then.
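Side note : the "one minute of 44.1/16 takes around 10MB" figure I use when watching Task Manager is simple arithmetic; a quick sketch (plain uncompressed stereo PCM assumed) :

```python
# Uncompressed PCM data rate for CD audio (44.1 kHz / 16-bit / stereo).
sample_rate = 44100      # samples per second
bytes_per_sample = 2     # 16 bits = 2 bytes
channels = 2             # stereo

bytes_per_second = sample_rate * bytes_per_sample * channels
bytes_per_minute = bytes_per_second * 60

print(bytes_per_second)            # 176400 bytes/s
print(bytes_per_minute / 1024**2)  # roughly 10.1 MiB per minute
```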
Of further importance when judging the theory of operation vs. practice is that this all is not about one single program (similar to your 2GB tests of last night), nor about two programs, but about one program dynamically allocating and freeing (huge amounts of) memory all the time, with the OS taking care of the freeing - and one of the *reasons* to do that is reaching memory limits.
Looking closely, you can see that (somehow) the OS is not "smart" right after XX started playing (better : right after a reboot, and this has *always* bugged me no matter what kind of sound engine I used, no matter XP or Vista); just start playing, and see the green figure grow and grow until it reaches the physical limit (my 1350MB or so), and then it drops back. After that has happened once, smartness kicks in and everything becomes predictable.
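If you want to reproduce this growing (and then dropping) green figure without playing tracks, here is a minimal sketch of mine (block size and timing are arbitrary choices, roughly mimicking the ~10MB per minute of a 44.1/16 track) :

```python
import time

def allocate_in_steps(n_blocks, block_mb=10, pause=0.5):
    """Allocate n_blocks blocks of block_mb MB each, pausing between
    allocations so the growth stays visible in Task Manager."""
    blocks = []
    for _ in range(n_blocks):
        blocks.append(bytearray(block_mb * 1024 * 1024))
        time.sleep(pause)
    return blocks

# e.g. : blocks = allocate_in_steps(50)  # watch the green figure grow ~500MB
#        del blocks                      # ... and watch it drop back
```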
Side note : so, so many users (and I am one of them) reported (let alone those who did not report it) that things go wrong somewhere, somehow, at track 4 or 5 or 6 in a first playing session. In all cases there is no explanation (by me). But I *know* the hard way the OS gets rid of obsolete memory plays a role there ...
Back to the case : I think I dare say that I have found the explanation for the difference between my system never having problems (swap file On) and a system like e.g. the one from the topic I pointed to (0.9u-12 --> Hiccups and Clicks), where Edward should try to set the Swap file ON.
Then there is this one :
Earlier in this topic I claimed that with XXHighEnd audio playback, having applied the settings as described in "How I tweaked my Vista virtually dead", the Swap file will not be used (and by now we have seen that it will occasionally be used in stress time situations). However :
I think this (Swap file not used) only applies to those who have an amount of additional virtual memory at least as large as the normal available physical memory, hence who have poor systems like mine. Hahaha. Thus, when you have far less (relative) space in the additional virtual memory area, the limit there is reached (long) before the normal available physical memory is full, and the Swap file *will* be used. Oh man, so where this encourages shutting it off, the problems only get worse because of stress time.
astacus21, I realize that you can't all test this so easily with your 4GB of memory, and I don't even know how much of that additional virtual memory you have. Note though that if it is a lousy 700MB (lousy compared to the 4GB), you can hardly encounter problems there, just because I don't with my 700MB.
The conclusions you draw about the 2GB and "one process" seem a bit dangerous to me, because (so far) I didn't bump into anything that described "one process". For your (further) mind-setting, think of this :
One 170-minute WAV (must be a hell of a bootleg btw) is not something we really need to play. But a 60-minute 96/24 (or higher sample rate) track is, and off the top of my head this would be around 2000MB. I can assure you that this must be treated by one single process. But :
In order to get playback going, we must assume that this is needed twice at the very least. In this case, this is dealt with by two threads, which I would call processes in this matter. But would they be, in this respect ?
Side note : I plan to cut large (byte-wise) tracks into pieces when needed, so actually it won't be a problem.
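By the way, the size of such a 60-minute 96/24 track is easy to verify with the same kind of arithmetic (uncompressed stereo PCM assumed) - it indeed sits right against the 2GB-per-process boundary :

```python
# Uncompressed PCM size of a 60-minute 96 kHz / 24-bit / stereo track.
sample_rate = 96000
bytes_per_sample = 3     # 24 bits = 3 bytes
channels = 2
seconds = 60 * 60        # 60 minutes

total_bytes = sample_rate * bytes_per_sample * channels * seconds
print(total_bytes)             # 2073600000
print(total_bytes / 1024**2)   # roughly 1977 MiB
```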
It was a surprise for me, but after a while I remembered that when you start XXHE in 64-bit, it plays in a 32-bit emulation mode. So this way you can't play anything above 2GB.
Yeah, that's a good one. I would never have thought of that. But be careful :
I can't look into it right now, but I am fairly sure that the process playing the role here (which would be XXEngine3) is *not* compiled for x86. I will let you know later.
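In case it helps : whether an exe was compiled for x86 can be read from its PE header. Here is a sketch of mine (not XXHighEnd code; the file name in the example is just an assumption). Note that for .NET assemblies compiled as "AnyCPU" the machine field still says x86, so for a managed exe this alone is not conclusive - the CorFlags would need checking too :

```python
import struct

def pe_machine(header: bytes) -> str:
    """Return the target machine of a Windows executable, given (at least)
    the first few KB of the file. 0x014C = x86, 0x8664 = x64."""
    # The DOS header stores the offset of the PE header at 0x3C.
    pe_offset = struct.unpack_from("<I", header, 0x3C)[0]
    if header[pe_offset:pe_offset + 4] != b"PE\x00\x00":
        raise ValueError("not a PE file")
    machine = struct.unpack_from("<H", header, pe_offset + 4)[0]
    return {0x014C: "x86", 0x8664: "x64"}.get(machine, hex(machine))

# e.g. : pe_machine(open("XXEngine3.exe", "rb").read(4096))
```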
Well, this was my typing for the day.