Well, what to say. Still, I don't think this digs up everything that is really going on. Therefore today's (say, 2011) recordings should be analysed as well, and by that I mean those without severe compression.
Personally, I don't think this is the answer to the change. It will merely be about the digital manipulation that recording engineers seem to know sh*t about. Well, that's logical if we look at what XXHighEnd does, even in the bit-perfect realm.
(The ear judges loudness by the average signal level, not the peak signal level.)
A nice thought, but it doesn't get down to the merits IMO;
instead, the average level is brought up hugely, and yes, then it just *is* louder on average. Almost the same "statement", but still different (more logical, I think).
Also, this doesn't happen because the peaks are squeezed out (in, actually), but because technically the bottom part is boosted while the peaks stay the same. This is different from just squeezing "in" the peaks, because that by itself wouldn't change the general level; it only allows the peaks to fit without hard-cutting them, which would be worse for the result. So squeezing the peaks allows the level not to be dropped in order to let the peaks fit. The result, of course, is a louder level when this is utilized.
Again, it may come down to the same thing, but maybe this explains better what really happens and why.
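To make the mechanism above concrete, here is a toy sketch of my own (not from the article, and not any real mastering tool): boost the quiet "bottom" of the signal while soft-limiting so the peaks stay near their original ceiling. The peak level barely moves, but the average (RMS) level, which the ear judges loudness by, goes up a lot. The `upward_compress` function and its tanh curve are assumptions chosen purely for illustration.

```python
import numpy as np

def peak(x):
    """Peak level: the largest absolute sample value."""
    return np.max(np.abs(x))

def rms(x):
    """Average level (RMS), which tracks perceived loudness far better than the peak."""
    return np.sqrt(np.mean(x ** 2))

def upward_compress(x, gain=4.0):
    """Illustrative upward compression: boost everything, then soft-limit
    with tanh so the peaks land near the original ceiling instead of
    being hard-cut (clipped)."""
    ceiling = peak(x)
    return ceiling * np.tanh(gain * x / ceiling)

# Toy "music": a mostly quiet signal with sparse full-scale peaks.
rng = np.random.default_rng(0)
signal = 0.1 * rng.standard_normal(48000)
signal[::4800] = 1.0  # occasional full-scale transients
loud = upward_compress(signal)

print(f"peak before/after: {peak(signal):.3f} / {peak(loud):.3f}")
print(f"rms  before/after: {rms(signal):.3f} / {rms(loud):.3f}")
```

Running this, the peak stays at roughly the same ceiling while the RMS rises severalfold: the track simply *is* louder on average, without the peaks ever having been hard-cut.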
In fact, some pop records can be considered creations of the producer, not the putative musician whose name is on the album cover.
But we may wonder whose fault this is. I mean, I think it is fairly rare that musicians want the best sound (if it were about SQ and not about what's commercially best). Of course, the mixing engineer at the live event does his best, but I don't think he is after the best SQ. Then again, this is not "hi-fi" of course, and the mixing engineer knows that too. But the whole point is: the musicians do not care much. I have seen too many easily avoidable flaws to think otherwise.
That my last thoughts above are related to the sound by now (roughly) starting to be better than at the performance itself is possibly why I think like this in the first place. And so today's problem for any engineer will be that he or she is not able to hear back what's really on the recording. I don't want to bring the NOS1 into play again, but it really does make all the difference in the world. That is, I don't see how a digital end result (which it obviously always will be these days) can ever sound good anywhere with the D/A devices used in that studio or elsewhere. Monitor speakers will matter too, but I think we know what matters by far the most.
Continuing on the above, this was different in those old days too. There was no digital to inadvertently destroy, and the tapes listened to for mixing (if at all, see the article) only needed to go to LP, which was a fairly straightforward process. In any case it was much more straightforward than today's process of "digital nothing", while in the meantime we already nag about "bit perfect" (justified or not). OpAmps may destroy, but digital manipulation destroys more: it takes out the inconsistency. No analogue-fluent transients when things are not OK, but sudden anomalies, which we are sensitive to.
Both changes (those OpAmp consoles and digital) happened in the same era to some degree (of course, digital came later), and maybe it needs some further digging into which change causes what. That things really started to change in the early 70's is totally clear though, but this seems to be about the general sound (OpAmps?).
Peter