The Design of Software (CLOSED)

A public forum for discussing the design of software, from the user interface to the code architecture. Now closed.

The "Design of Software" discussion group has been merged with the main Joel on Software discussion group.

The archives will remain online indefinitely.

Speed tests for large files

Okay, I'm trying out various ways of loading large data files (50-200MB) using Windows XP, but I'm getting erratic results because of Windows caching the files when they've been loaded the first time.

Without rebooting between each test, anyone got any ideas how to take the caching out of the equation?

Thanks, I'm drawing a complete mental blank here. :(
GFX guy
Wednesday, December 06, 2006
 
 
Randomize the files each time, but keep the file size constant. However, that leads to another problem: Windows will need to flush the cache every time. You may or may not be able to take that into account.

I know there are a couple of system functions you can call to flush the cache, but if you compile them into the program, you can no longer really measure the true load time; you'd have to measure the cache check + cache load time versus the cache check + cache flush + load time.
TheDavid
Wednesday, December 06, 2006
 
 
In Unix terms, "touch" them--change their modification timestamp to "now". This should make Windows think they're changed, so it loads from disk instead of from cache. I don't know if Windows implements touch; you could install UnxUtils (and add it to your PATH) to get Gnu touch, or simply write a script to do the same thing.
Samuel Reynolds
Wednesday, December 06, 2006
 
 
One of the ways to load them is with the FILE_FLAG_NO_BUFFERING flag of the CreateFile API.
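
Roughly like this, assuming you just want the raw uncached read path (an untested sketch; the file name and chunk size are placeholders, and NO_BUFFERING means the buffer and read size must both be sector aligned):

#include <windows.h>
#include <stdio.h>

int main()
{
    // Open the file so reads bypass the system cache entirely.
    // "bigfile.dat" is just a placeholder for one of the test files.
    HANDLE h = CreateFileA("bigfile.dat", GENERIC_READ, FILE_SHARE_READ, NULL,
                           OPEN_EXISTING, FILE_FLAG_NO_BUFFERING, NULL);
    if (h == INVALID_HANDLE_VALUE) return 1;

    // With NO_BUFFERING the read size must be a multiple of the sector size
    // and the buffer must be sector aligned; VirtualAlloc gives page alignment.
    const DWORD chunk = 64 * 1024;
    void* buf = VirtualAlloc(NULL, chunk, MEM_COMMIT, PAGE_READWRITE);

    DWORD got = 0;
    unsigned __int64 total = 0;
    while (ReadFile(h, buf, chunk, &got, NULL) && got > 0)
        total += got;

    printf("read %I64u bytes uncached\n", total);
    VirtualFree(buf, 0, MEM_RELEASE);
    CloseHandle(h);
    return 0;
}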
Christopher Wells
Wednesday, December 06, 2006
 
 
What is the purpose of the speed tests? What are you actually trying to achieve?
Neville Franks
Thursday, December 07, 2006
 
 
http://www.microsoft.com/technet/sysinternals/FileAndDisk/Sync.mspx is intended to make sure any pending writes to the disk are done. But the page states that it flushes all file system data and works with removable drives, so it would make sense if it also cleared the read cache (it wouldn't make sense to be able to read a file from cache when it lives on a disk you've removed), so it might be what you are looking for.

Please provide feedback if it works.
ThMoJe
Thursday, December 07, 2006
 
 
> I don't know if Windows implements touch

I don't know if there is a command line tool, but there are some APIs that help: GetFileTime and SetFileTime.
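
Something like this should serve as a home-grown touch (a minimal, untested sketch; whether bumping the timestamp actually evicts the file from the read cache is the open question above):

#include <windows.h>

// A home-grown "touch": bump the last-write time to now.
// The path is a placeholder for one of the test files.
BOOL TouchFile(const char* path)
{
    HANDLE h = CreateFileA(path, FILE_WRITE_ATTRIBUTES,
                           FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                           OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE) return FALSE;

    FILETIME now;
    GetSystemTimeAsFileTime(&now);               // current time as a FILETIME
    BOOL ok = SetFileTime(h, NULL, NULL, &now);  // creation, last-access, last-write
    CloseHandle(h);
    return ok;
}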
Adrian
Thursday, December 07, 2006
 
 
I don't know the exact cache strategy of Windows, but how about providing more files than the computer has RAM? E.g. when you have 1GB of RAM, provide files with a total size of 2GB or even some more. Use the next file for any new operation; when you've used the last file, wrap around back to the first.
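
Something like this could drive the wrap-around (a sketch only; the file names, run count, and timing harness are placeholders, and the actual load goes where the comment is):

#include <windows.h>
#include <stdio.h>

int main()
{
    // Pool of test files whose combined size exceeds physical RAM;
    // the names and count here are placeholders.
    const char* files[] = { "test00.dat", "test01.dat", "test02.dat",
                            "test03.dat", "test04.dat", "test05.dat" };
    const int count = sizeof(files) / sizeof(files[0]);

    LARGE_INTEGER freq;
    QueryPerformanceFrequency(&freq);

    for (int run = 0; run < 20; ++run)
    {
        const char* path = files[run % count];   // wrap around to the first file

        LARGE_INTEGER t0, t1;
        QueryPerformanceCounter(&t0);

        // ... load 'path' with whichever method is being measured ...

        QueryPerformanceCounter(&t1);
        printf("%s: %.3f s\n", path,
               (double)(t1.QuadPart - t0.QuadPart) / (double)freq.QuadPart);
    }
    return 0;
}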
Secure
Thursday, December 07, 2006
 
 
Unscientific method: do a big "Find in Files" operation in your IDE.
Richie Hindle
Thursday, December 07, 2006
 
 
What is the issue?

When I timed this stuff a while ago, I found the best approach seemed to be loading files in one go, using a blocking ReadFile call, after opening them using FILE_FLAG_SEQUENTIAL_SCAN. (I don't recall FILE_FLAG_SEQUENTIAL_SCAN changing the speed, but it played much nicer with other running programs.)

I had great difficulty getting the allegedly quicker ReadFileEx-with-OVERLAPPED-and-FILE_FLAG_NO_BUFFERING to work reliably on the PC when reading large chunks. But I didn't fret too much, figuring blocking ReadFile shouldn't be that much slower. You might get some small advantage from the system being able to DMA straight to the final location in memory rather than to the disk cache, but the cost of the memory copies should vanish next to the disk access time. Besides, each new chunk can be copied out concurrently with the next chunk being loaded, so you should pay only for the last part.
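
For what it's worth, the whole-file blocking read I mean is roughly this (a stripped-down sketch; the path is a placeholder and error handling is minimal):

#include <windows.h>
#include <stdlib.h>

// Open with the sequential-scan hint and pull the whole file in with one
// blocking ReadFile. Files here are assumed to be well under 4 GB.
char* LoadWholeFile(const char* path, DWORD* outSize)
{
    HANDLE h = CreateFileA(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                           OPEN_EXISTING, FILE_FLAG_SEQUENTIAL_SCAN, NULL);
    if (h == INVALID_HANDLE_VALUE) return NULL;

    DWORD size = GetFileSize(h, NULL);
    char* buf = (char*)malloc(size);

    DWORD got = 0;
    BOOL ok = ReadFile(h, buf, size, &got, NULL);  // single blocking read
    CloseHandle(h);

    if (!ok || got != size) { free(buf); return NULL; }
    *outSize = size;
    return buf;
}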
Tom_
Thursday, December 07, 2006
 
 
"What is the issue?"

Oh, that looks ruder than was meant when put in Garamond! I mean, apart from the disk caching issue, what are you trying to optimize, and what have you tried already? (And hence my presentation of my conclusions from the stuff I tried, even though I have no idea how to flush Windows's disk cache ;)
Tom_
Thursday, December 07, 2006
 
 
Thanks for the help, guys.

Firstly, Sync didn't flush the read cache at all.

The NO_BUFFERING flag placed far too many restrictions on the file chunk size; it would have stopped me experimenting in a lot of ways.

Finally, I resorted to just loading the same file over and over again (at least ten times) until the speed stabilized, and used that figure as a comparison. It was long-winded, but it seemed to work; I found which loading method was the fastest.

For those who asked, I'm trying out different ways of loading in chunks of video/audio data (separately, interleaved, etc.) for my program to work with. And because it's only for my program, I don't need to use standardized methods (I'm not streaming the data).

Many thanks, you helped a lot!
GFX guy
Thursday, December 07, 2006
 
 
You don't want to use FILE_FLAG_NO_BUFFERING. The only reason to use that flag is when the file is shared and you want to ensure you are getting a record straight from the disk. The speed degradation that results from using FILE_FLAG_NO_BUFFERING may not show up on your desktop, but the same code run on a high-speed server with fiber SCSI and expensive disk controllers will crawl. I was doing some experimentation with that today, as my current project involves caching up to 200 gigabytes on a NUMA-compliant blade array.

Anyway, the synchronous calls do not scale, so if this is a server, I would avoid them. In my testing, the fastest disk I/O is overlapped I/O with IOCP, disk buffering enabled.
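
A bare-bones sketch of that combination, overlapped reads completed through an I/O completion port with buffering left on (only one read in flight for brevity; a real server would keep several outstanding, and the file name and chunk size are placeholders):

#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

int main()
{
    // Overlapped handle, system cache left enabled (no FILE_FLAG_NO_BUFFERING).
    HANDLE file = CreateFileA("bigfile.dat", GENERIC_READ, FILE_SHARE_READ, NULL,
                              OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL);
    if (file == INVALID_HANDLE_VALUE) return 1;

    // Associate the file with a new completion port.
    HANDLE iocp = CreateIoCompletionPort(file, NULL, 1, 0);
    if (!iocp) return 1;

    const DWORD chunk = 256 * 1024;
    char* buf = (char*)malloc(chunk);
    unsigned __int64 offset = 0, total = 0;

    for (;;)
    {
        OVERLAPPED ov = {0};
        ov.Offset     = (DWORD)(offset & 0xFFFFFFFF);
        ov.OffsetHigh = (DWORD)(offset >> 32);

        // Issue the read; it either completes through the port or fails
        // immediately (e.g. ERROR_HANDLE_EOF past the end of the file).
        if (!ReadFile(file, buf, chunk, NULL, &ov) &&
            GetLastError() != ERROR_IO_PENDING)
            break;

        DWORD got = 0;
        ULONG_PTR key = 0;
        OVERLAPPED* pov = NULL;
        if (!GetQueuedCompletionStatus(iocp, &got, &key, &pov, INFINITE) || got == 0)
            break;

        total  += got;
        offset += got;
    }

    printf("read %I64u bytes\n", total);
    free(buf);
    CloseHandle(iocp);
    CloseHandle(file);
    return 0;
}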
MBJ
Thursday, December 07, 2006
 
 
"Finally, I resorted to just loading in the same file over and over again (at least ten times) until the speed stabilized, and used that figure as a comparison. It was long winded, but seemed to work, I found which method of loading was the fastest."

So, you forced the file into cache and then optimized the read from cache.  I'm fairly certain that this doesn't approximate what the users of your application will experience, and you have just tuned for a scenario that is not likely to actually occur.

You might want to revise the strategy into reading a bunch of other large files before reading the test file.  Make sure to do the performance tuning on the first access of the test file.

Your results will also be very hardware-dependent.  I once made the mistake of reading a file one byte at a time.  I was looking for special codes and I had to do some funky translations.  Anyway, it worked great on my test system, but the first client read in a file off a network drive and it was reading at around 10KB per minute.  I changed to reading the file in chunks, like I should have from the beginning, and it worked great.
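
The fix amounted to something along these lines (an illustrative sketch in plain stdio; the path, chunk size, and scanning logic are placeholders):

#include <stdio.h>
#include <stdlib.h>

// Chunked version of the scan: read 64 KB at a time and walk the buffer in
// memory instead of issuing one I/O call per byte.
void ScanFileInChunks(const char* path)
{
    FILE* f = fopen(path, "rb");
    if (!f) return;

    const size_t chunk = 64 * 1024;
    char* buf = (char*)malloc(chunk);

    size_t got;
    while ((got = fread(buf, 1, chunk, f)) > 0)
    {
        for (size_t i = 0; i < got; ++i)
        {
            // ... look for the special codes / do the translation here ...
        }
    }

    free(buf);
    fclose(f);
}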
JSmith
Monday, December 11, 2006
 
 
JSmith, you don't get what the OP was actually trying to do. All your points are moot.
Captain Pedant
Saturday, December 16, 2006
 
 

This topic is archived. No further replies will be accepted.
