OK, let me try to be more helpful. Here are some good backgrounders
---------------------
Also recommended: the Herlihy/Shavit book, blogs by Sutter and Duffy, and the memory-allocation thread.
------------
Dave Harris

As you say, biological systems use different mechanisms at different levels of scale. Whilst having a single, scalable mechanism - or at least a family of related mechanisms - to do the job might be intellectually satisfying, I don't think it's *necessary* in order to solve current problems. Certainly, the required mechanisms should be built into the platform and programmers actively prevented from reinventing the wheel - something that, after 40-odd years in IT, I see time and again.

We used message-passing in operational systems in the early 70s but, IIRC, shared memory was used for certain tasks as well. However, neither of these had anything to do with multi-processing, simply multi-threading. Interestingly, the same message-passing mechanism could be used within a mainframe, within its I/O processors, and between the former & latter - a nice symmetry :)

Probably the first STM-like mechanism was implemented on IBM 370 mainframe computers with the "Compare and Swap" instruction, which was carried over into the Intel x86 architecture. I seem to remember that STM becomes untenable when the number of threads working on the same data gets too high.

The disc latency problem looks like being mostly solved as SSDs become more widespread. The major delay associated with magnetic discs is simply the mechanical movement of the disc itself and the head. SSDs remove both of these. Over the next few years their real speed will increase in line with Moore's Law, something that magnetic discs cannot do, eventually approaching RAM speeds.

April 13 at 2:38am