There is a little misconception about paging files that has been spread since the early days of Windows, and that's the story about having it on a separate partition. You see, the bloke who originally said that meant another partition on a second HDD (on the secondary IDE controller, preferably). That will increase performance a fair bit; however, having it on a separate partition on the same drive is just as likely to decrease performance, depending on how far away from the currently used data sectors the swap file is, because it makes the read head move about more to access both the swap file and the data. Having it on another drive, however, means the second drive can be accessing the swap file while the first is simultaneously accessing data. Computers are good at appearing to do two things at once, but they can't simultaneously access the swap file and read other data if both are on the same drive, just in separate partitions.
Also, fragmentation of the paging file is only a problem when it gets mixed in with the normal user data. If you set a fixed size for your paging file (i.e. min and max identical) and then defrag your drive, the defragger will put the whole paging file together in one spot, and it will not get fragmented in the future since it is one file that never moves. Doing this is just as effective as having it on a separate partition on the same drive (sometimes more effective, because defragging will place the page file close to the user data rather than farther away, like a separate partition would).
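For reference, setting min = max like that is what the Virtual Memory dialog ends up writing into the registry. Roughly, it looks like this (sizes in MB; the 768 MB figure is just an example, pick whatever suits your RAM):

```
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\
  Session Manager\Memory Management
    "PagingFiles" (REG_MULTI_SZ) = "C:\pagefile.sys 768 768"
                                    ; path, initial MB, maximum MB
                                    ; identical numbers = fixed-size file
```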
Having larger clusters only helps your page file if they are no bigger than the page size. By that I mean that Windows and other OSes don't swap variable-sized data chunks in and out of the swap file; pages are always a fixed size, which is likely to be about 4K. So if you make your cluster size 16K, to swap a 4K page in and out of the swap file Windows has to move a minimum of 16K for each page, reducing the efficiency of your swap file by a huge amount!
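The arithmetic behind that can be sketched in a few lines (the 4K page size is the usual x86 figure, and the function name is mine, not anything from Windows):

```python
PAGE_SIZE_KB = 4  # typical x86 memory page size

def swap_io_per_page(cluster_kb: int) -> int:
    """Minimum KB of disk I/O to move one 4K page, given the cluster size."""
    # Disk I/O can't touch less than a whole cluster, so round up.
    clusters_needed = -(-PAGE_SIZE_KB // cluster_kb)  # ceiling division
    return clusters_needed * cluster_kb

for cluster in (4, 16, 32):
    io = swap_io_per_page(cluster)
    print(f"{cluster:>2}K clusters: {io}K moved per 4K page "
          f"({io // PAGE_SIZE_KB}x overhead)")
```

With 16K clusters every 4K page swap drags 16K across the bus, a 4x penalty; at 32K it's 8x.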
Hope this was helpful
:cya: :cya: :cya:
Exactly why I set the Partition in the center of the Hard Drive. From what I have read from Microsoft, between a dynamic MFT and a dynamic Page File there is much fragmentation that can and will occur. But in dealing with the common Computer user, these functions are set up in a dynamic method to prevent excess Hard Drive usage and, of course, tampering with system settings. They love everything to be automatic. I plan on setting up a 2nd Hard Drive soon with RAID-type striping; that will be interesting.

Cluster size..... speaking of "Old Windows".... how many files do you have that are 16kb and less? Not too many anymore, but I remember when Hard Drives were first made and everything was on a 5-1/4" floppy.... well, let's not go there... hehe. All my Hard Drives are set to a 16kb Cluster Size. All files on my Hard Drives are transferred in 16kb chunks, not 4kb chunks. It swaps 16kb clusters, and I do lose a bit of HD space doing that, but not efficiency. I got a 26% increase in Hard Drive performance (as per Mad Onion) on the HDD test after reformatting at 16kb. That's a big difference, especially when you're running a portable computer that considers 5600 RPM a fast HD speed and has a stingy buffer. Man, I think the Fire-Wire External Drives are faster than this now,... Rats. But A-K is right, there is way more to tweaking than a quick read in a Forum.
Warning...don't let your children try that one at home.
But remember, we're not just talking file clusters here, we're talking memory. When the swap file is being used we're not swapping data from a 16kb cluster on one partition to another; the data is coming out of memory onto the swap file and back into memory, and that doesn't work by the file-format conventions we all think of so conveniently. So you may find it actually works better with smaller clusters; the only real way to find out is to test swap file efficiency at different cluster sizes, using a program that swaps data in and out of memory.
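If anyone wants to try that, here's a rough sketch of such a test (my own code, nothing official): time how long it takes to touch every page in a big buffer, and run it with a size well past your physical RAM so the swap file actually gets exercised, then compare timings after reformatting with a different cluster size.

```python
import time

def touch_pages(size_mb: int, page_size: int = 4096) -> float:
    """Write one byte per 4K page across a size_mb buffer; return seconds taken."""
    buf = bytearray(size_mb * 1024 * 1024)
    start = time.perf_counter()
    for offset in range(0, len(buf), page_size):
        buf[offset] = 1  # touching each page forces the OS to make it resident
    return time.perf_counter() - start

if __name__ == "__main__":
    # Small demo size; push this past your installed RAM to really hit the swap file.
    print(f"{touch_pages(64):.3f} s to touch 64 MB worth of pages")
```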
Also, speaking of cluster size efficiency: I haven't tested it, but I believe different file systems are better at different cluster sizes. NTFS should be better than FAT32 at smaller cluster sizes. What are you running?
And speaking of Windows default settings, most of them are designed for the lowest common denominator, so that anybody's Tasmanian 2nd cousin can screw up their computer :lol: . (If you're not Australian you may not understand the Tasmanian bit; just find any Australian and ask them.)
:cya: :cya: :cya:
The default cluster size under FAT16 and FAT32 was dynamic, ranging from 4kb to 32kb depending on the size of the Hard Drive. Anybody who has a HD bigger than 32 Gig running Windows 98* is running 32kb clusters. Now THAT's eating up some extra space, but was/is pretty fast. When NTFS was implemented in Windows 2000, the default was made a permanent 4kb cluster size. This was done because in the Windows NTFS environment, Disk Compression does not work with any cluster size other than 4kb. Microsoft wants Disk Compression available to all the Tasmanian's cousins (that was funny) who buy XP. NTFS is very efficient with compression, but I don't use Disk Compression, I use the Computer Store.... to purchase more storage when needed.... hehe.
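The FAT32 defaults the format tool picks go roughly like this, as a quick lookup (break points recalled from Microsoft's published defaults, so treat them as approximate):

```python
# Rough lookup of FAT32 default cluster sizes by partition size
# (break points recalled from Microsoft's format defaults -- approximate).

FAT32_DEFAULTS = [
    (8, 4),      # up to 8 GB  -> 4kb clusters
    (16, 8),     # 8-16 GB     -> 8kb
    (32, 16),    # 16-32 GB    -> 16kb
    (2048, 32),  # over 32 GB  -> 32kb (up to FAT32's 2 TB ceiling)
]

def default_cluster_kb(partition_gb: float) -> int:
    """Default FAT32 cluster size in kb for a partition of the given size in GB."""
    for max_gb, cluster_kb in FAT32_DEFAULTS:
        if partition_gb <= max_gb:
            return cluster_kb
    raise ValueError("partition too big for FAT32")

print(default_cluster_kb(40))  # a 40 Gig drive lands in the 32kb bracket
```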
The debate rages on over the better filing system.... the two of us are a good example.. lol. I prefer larger clusters, you don't. It increased the performance of my Portable's Hard Drive by 26%..... granted, some other factors may have played a part, e.g. a fresh format/clean install. But that was done a year ago and performance has not dropped off appreciably. WOW.... a year ago, hmmmm, getting time for a Spring Cleaning Format again.
I run NTFS on XP. I like it very much, but it has too much "overhead" running with all the background services, and needs a lot of tweaking and trimming. When you read all the problems people are having with BF1942, it seems half of them are a "Hardware or Driver Issue" and the other half are the Operating System coughing up a Wallaby's fur-ball. (They have hair,... right?)
I have this thing about eating popcorn.... my wife hates it. By the handful.... I stuff my face by the handful. She eats one kernel at a time. Drives me nuts watching her eat popcorn. I feed my computer the way I feed myself. Like I say, it's my preference and it works for me. It may not work on, say, VonMeyer's system, or yours. The number of platters and read/write heads in the Hard Drive might make a huge difference.....?, amongst many other variables.
Say, have you tinkered with RAID Drives yet? I see RAID is supported in XP, and I'm dying to try it with a couple of the newer Hard Drives with the 8meg buffers. I also see that the 10,000 RPM "Serial ATA Interface" Hard Drives are hitting the streets. Any words on that, A-K? Perhaps that alone is better than RAID? I picked a good time to start planning the construction of my next computer in October.
Anyway, don't let your didgeri-do become a didgeri-don't (attempt at Australian humor).
Yes, I have had a machine running RAID, but the performance sucked big time, even worse than running one drive. It was very puzzling for a while, till I did some research on the Promise RAID controller I was using and found a list of HDDs it recommended NOT using for RAID, and, you guessed it, I had two of the ones they recommended against.
So: two brand new 60Gb Hard Drives, a brand new RAID-capable Mobo and WinXP Pro later, big bummer :mad:
As for Serial ATA, you can RAID them on Mobos with two controllers. However, current tests indicate that a single SATA drive is not much more than 15% better than current HDDs, because the Serial ATA interface only runs at 150MB/s, not much faster than ATA/133. The next iteration will be 300MB/s and will need new HDDs, so you may want to hold off for a while.
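For what it's worth, that 15% figure lines up with the raw interface numbers:

```python
# Quick sanity check on the interface-bandwidth gap mentioned above.
sata1 = 150  # MB/s, first-generation Serial ATA (1.5 Gb/s link, 8b/10b encoded)
pata = 133   # MB/s, Ultra ATA/133

gain_pct = (sata1 - pata) / pata * 100
print(f"SATA-1 headroom over ATA/133: {gain_pct:.1f}%")  # about 12.8%
```

And that's only interface headroom; the drive mechanics behind it are the same, so real-world gains are smaller still.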
Oh yes, and my Didgere-don't, thank you very much for that :lol:
:cya: :cya: :cya: