Mark Russinovich certainly knows his stuff. Late last year he posted a blog entry called “Pushing the Limits of Windows: Virtual Memory”. I was again searching for material related to pagefiles and found this highly technical explanation of how Windows memory works.
It is possible to figure out the best pagefile size from this article, but it is more of an art than a science:
So how do you know how much commit charge your workloads require? You might have noticed in the screenshots that Windows tracks that number and Process Explorer shows it: Peak Commit Charge. To optimally size your paging file you should start all the applications you run at the same time, load typical data sets, and then note the commit charge peak (or look at this value after a period of time where you know maximum load was attained). Set the paging file minimum to be that value minus the amount of RAM in your system (if the value is negative, pick a minimum size to permit the kind of crash dump you are configured for). If you want to have some breathing room for potentially large commit demands, set the maximum to double that number.
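That sizing rule can be sketched as a tiny Python function. The function name, the unit handling, and the 1 GB crash-dump floor are my own illustration, not anything from the article; the logic itself (minimum = peak commit charge minus RAM, maximum = double the minimum) is straight from the quoted paragraph:

```python
def pagefile_bounds(peak_commit_gb, ram_gb, crash_dump_floor_gb=1):
    """Suggest pagefile minimum/maximum sizes in GB using the rule
    from Russinovich's article: minimum is peak commit charge minus
    RAM; if that is negative, fall back to a size that permits your
    configured crash dump (the floor here is a placeholder value).
    Maximum is double the minimum, for commit-demand headroom."""
    minimum = peak_commit_gb - ram_gb
    if minimum <= 0:
        # More RAM than peak commit: size only for the crash dump.
        minimum = crash_dump_floor_gb
    return minimum, 2 * minimum

# Example: 8 GB of RAM, workloads that peak at 11 GB of commit charge.
print(pagefile_bounds(11, 8))  # (3, 6) -> 3 GB minimum, 6 GB maximum
```

Note that this only computes the numbers; you still set them yourself in the Virtual Memory dialog (or via wmic/Set-CimInstance), and the peak commit charge has to come from observing Process Explorer under real load, as the quote describes.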
You will need to understand the basics of virtual memory and the commit limit before this will start to make sense. In general it helps to overestimate, but not by too much. As RAM grows to several GB, it makes good sense to drop the old formulas (such as 1.5× RAM) and instead focus on what would actually be useful. Otherwise you will end up wasting gigabytes of disk space on a pagefile that does not need to be that big.