The Joel on Software Discussion Group (CLOSED)

A place to discuss Joel on Software. Now closed.



Disk vs. Memory speed

Hey folks, let's think about hardware for just a minute.  Is it just me, or has the performance ratio between CPU speed and mass storage become seriously lopsided?

I look at my computer use these days (compiling code, photo manipulation, maybe a bit of video) and all my time is spent waiting for the disk.  The CPU gauge rarely moves past 10%.

If I were running Intel, I'd shut down the CPU development labs (thanks guys, great job, got all the GHz and cores I need) and get those folks working on doing -something- to speed up access to mass storage.  It's strangling today's computing.

Think about it - a disk seek is, what, 3.3ms these days?  That's 3,300,000ns.  DRAM runs at about 300 MHz these days, or 3.3 ns/cycle.  Nice factor of a million there, but heck, I'd be happy to get a factor of a thousand back.  Can you imagine?  Windows would boot in five seconds.  Bad as it is now, I'll bet people would crack open their wallets again if they could buy that kind of speedup.
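The arithmetic works out; a quick back-of-envelope script using the post's own figures (which are round assumptions, not measurements):

```python
# Back-of-envelope check of the latency gap, using the figures from the post.
disk_seek_ns = 3.3e6   # 3.3 ms seek time, in nanoseconds
dram_cycle_ns = 3.3    # ~300 MHz DRAM, one cycle in nanoseconds

ratio = disk_seek_ns / dram_cycle_ns
print(f"disk seek / DRAM cycle: {ratio:,.0f}x")  # about a factor of a million

# Clawing back "a factor of a thousand" would leave seeks around 3.3 microseconds.
print(f"seek after a 1000x speedup: {disk_seek_ns / 1000:,.0f} ns")
```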
J. Peterson
Friday, January 30, 2009
Are Intel still the world's biggest maker of flash memory?
It's hard to keep track with mergers and shutdowns at the moment.
Martin
Friday, January 30, 2009
Intel produces the fastest solid-state PC hard drive, according to Linus.

But I agree with the OP.  I'm honestly looking forward to the day when they just can't make CPUs any faster and multi-core doesn't make much of a difference.

I think the next revolution of computing will happen when we can get rid of hard drives and volatile RAM.  We already have 64-bit processors, so we can address a huge amount of memory.  If that memory were fast, persistent, and large, we would need an entirely different kind of operating system and perhaps entirely different kinds of software.
Almost H. Anonymous
Friday, January 30, 2009
The cycle time of DRAM is not the same as the seek time on a hard disk, it's more like the clock speed of the disk interface bus. Even with the fastest current DRAM technologies, the time between presenting the DRAM with an address, and when you can actually read the first data for that address, is still between 30 and 60 ns. While that is MUCH faster than disk seek times, it's also MUCH slower than the base DRAM cycle time.
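Putting illustrative numbers on that distinction (the 50 ns first-word latency and 5 ms seek below are assumed round figures, not specs of any particular part):

```python
# Illustrative comparison: DRAM first-word latency vs. raw cycle time vs. disk seek.
dram_cycle_ns = 3.3     # interface clock period (~300 MHz)
dram_access_ns = 50.0   # assumed first-word latency, within the 30-60 ns range cited
disk_seek_ns = 5.0e6    # assumed 5 ms average seek

# True random access to DRAM is roughly 15x slower than its cycle time suggests...
print(f"DRAM access vs cycle: {dram_access_ns / dram_cycle_ns:.0f}x")
# ...so the real DRAM-to-disk gap is about 5 orders of magnitude, not 6.
print(f"disk seek vs DRAM access: {disk_seek_ns / dram_access_ns:,.0f}x")
```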

Otherwise, yes, the access speed of physical disks has been MUCH slower than DRAM for many years (at least 3 orders of magnitude), which is why we have memory buffers for disk data. Now, if the OS that you use has boneheaded scheduling and replacement policies for the disk buffers, then you can still feel as if you are always waiting on disk access (because you are). Most OSs (at least the ones which you would find on the desktop) have some boneheadedness in the disk buffer management policies, so don't assume that I'm just picking on the market leader.
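A toy model of the buffer management being described - a minimal LRU cache sketch, not any real OS's policy (class and block names invented for illustration):

```python
from collections import OrderedDict

class LRUDiskBuffer:
    """Toy disk-buffer cache with least-recently-used replacement."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()   # block number -> data, in recency order
        self.hits = self.misses = 0

    def read_block(self, block):
        if block in self.cache:
            self.hits += 1
            self.cache.move_to_end(block)        # mark as recently used
        else:
            self.misses += 1                     # would cost a real disk seek
            if len(self.cache) >= self.capacity:
                self.cache.popitem(last=False)   # evict the least-recently-used block
            self.cache[block] = f"data-{block}"
        return self.cache[block]

buf = LRUDiskBuffer(capacity=3)
for b in [1, 2, 3, 1, 2, 4, 1]:   # re-reads of blocks 1 and 2 hit the cache
    buf.read_block(b)
print(buf.hits, buf.misses)  # prints: 3 4
```

A boneheaded policy (say, evicting the *most* recently used block) would turn those hits into seeks, which is exactly the "always waiting on disk" feeling described above.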
Jeffrey Dutky
Friday, January 30, 2009
"Think about it - a disk seek is, what, 3.3ms these days?"

You should be so lucky. My Velociraptor manages five or six and it's easily the fastest seek available. Most other disks are lucky to manage ten or twelve.

The way things are going, SSDs will replace hard disks in another year or so. Then seek time will be almost zero.

You want Intel to do something ... they're currently king of SSDs. For the right amount of money they'll sell you stuff which totally blows away all existing hard disks.
Jimmy Jones
Friday, January 30, 2009
"I'm honestly looking forward to the day when they just can't make CPU's any faster and multi-core doesn't make much of a difference."

In certain applications today, multi-core doesn't make much of a difference.

From Sandia National Laboratories:

"More chip cores can mean slower supercomputing . . .

"A Sandia team simulated key algorithms for deriving knowledge from large data sets. The simulations show a significant increase in speed going from two to four multicores, but an insignificant increase from four to eight multicores. Exceeding eight multicores causes a decrease in speed. Sixteen multicores perform barely as well as two, and after that, a steep decline is registered as more cores are added.

"The problem is the lack of memory bandwidth as well as contention between processors over the memory bus available to each processor."
Deprecated
Saturday, January 31, 2009
Hasn't the performance ratio ALWAYS been seriously lopsided?

This is why I'm skeptical of the idea of virtual memory. A programmer should have control over whether something's in core or on disk. It makes all the difference in the world. You can abstract the distinction with object orientation if you want but don't try to hide it completely from the coder.

I propose three classes of storage for an object: RAM, disk with a RAM cache, and disk. Choose the one that makes the most sense for a given case.
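Those three classes could be sketched roughly like this (hypothetical interface; the class and method names are invented for illustration):

```python
# Sketch of the three proposed storage classes: RAM, disk + RAM cache, disk.
import os
import tempfile

class RamStore:
    """Everything in memory: fastest, volatile, capacity-limited."""
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data[key]

class DiskStore:
    """Everything on disk: slow, but persistent and large."""
    def __init__(self, path):
        self.path = path
        os.makedirs(path, exist_ok=True)
    def put(self, key, value):
        with open(os.path.join(self.path, key), "w") as f:
            f.write(value)
    def get(self, key):
        with open(os.path.join(self.path, key)) as f:
            return f.read()

class CachedDiskStore(DiskStore):
    """Disk-backed with a RAM cache in front: the middle ground."""
    def __init__(self, path):
        super().__init__(path)
        self._cache = {}
    def put(self, key, value):
        self._cache[key] = value
        super().put(key, value)       # write through to disk
    def get(self, key):
        if key not in self._cache:    # cache miss: go to disk
            self._cache[key] = super().get(key)
        return self._cache[key]

store = CachedDiskStore(tempfile.mkdtemp())
store.put("greeting", "hello")
print(store.get("greeting"))  # served from the RAM cache, persisted on disk
```

The point is that the programmer, not the VM subsystem, picks the class per object.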

I once had a 10MB hard drive. Now I've got a RAM capacity two orders of magnitude above that. And I still need temp files sometimes.
Rowland
Saturday, January 31, 2009
Haven't you heard? Disk is the new tape.

Sequential speeds have gotten better over the years, with improvements in interfaces and linear densities. Seek speeds, not so much.
Mark Ransom
Monday, February 02, 2009

This topic is archived. No further replies will be accepted.

Powered by FogBugz