Reflections on science, technology, and computing — leavened by personal experience


Our fascination with large instruments and massive computing often blinds us to discovery via many instruments at smaller scales.

Extraordinary parallelism, unprecedented data locality, and adaptive resilience: these are daunting architecture, system software, and application challenges for exascale computing.

Why do we, as researchers and practitioners, have this deep and abiding love of computing? Why do we compute? I suspect it reflects a deeper, more primal yearning, one that underlies all of science and engineering and that unites us in a common cause: the insatiable desire to know and understand. From terascale…

Not that long ago, a megabyte was a lot of storage, whether primary or secondary. Not long ago, supercomputers were defined by the number of megaflops they achieved. Times change. Bigger is not just bigger; bigger is different. Quantitative change begets qualitative change.

You want to be the first person to design a successful transistorized computer system, not the last person to design a vacuum tube computer. As I frequently told my graduate students at Illinois, the great thing about parallel computing is that the question never changes ("How can I increase performance?") but the answers do. Babbage…

Cloud services now run on the largest computing systems ever built on this planet, with service reliability expectations far higher than those we demand of scientific applications. Thus, I believe there are lessons from cloud computing that are potentially applicable to computational science.