High-Performance Computing R&D

Last week, the U.S. House of Representatives passed, by a voice vote, the High-Performance Computing R&D Act of 2007. The bill (see the press release) amends the original 1991 High-Performance Computing and Communications (HPCC) act. Co-sponsored by Brian Baird (D-WA) and Judy Biggert (R-IL), it updates several key provisions of the now fifteen-year-old legislation. Not surprisingly, given how fast technology changes, some of the provisions in the original authorization are now dated. Moreover, agency and interagency collaboration experiences have exposed the need for some process changes.

Versions of this bill have been introduced in the House during the last three Congressional sessions, but it has never passed the Senate. As Peter Harsha, CRA Director of Government Affairs, noted in the CRA blog, the bill has stalled in the Senate for reasons unrelated to the bill’s content (i.e., Senate traffic jams and disputes between the House Science and Senate Commerce committees over other legislation). This year, we expect this situation to change, with the Senate passing the bill as well, leading to a new authorization for high-performance computing.

Substantively, the House bill specifies, in paragraph (1)(C), that the cognizant federal agencies must

provide for sustained access by the research community in the United States to high-performance computing systems that are among the most advanced in the world in terms of performance in solving scientific and engineering problems, including provision for technical support for users of such systems

In my 2003 testimony to the House Science Committee at the “Supercomputing: Is the U.S. on the Right Path?” hearing, I raised some of the challenges we face, both technically and politically, in realizing such a goal.

Achieving high performance for complex applications requires a judicious match of computer architecture, system software and software development tools. Most researchers in high-end computing believe the key reasons for our current difficulties in achieving high performance on complex scientific applications can be traced to (a) inadequate research investment in software and (b) use of processor and memory architectures that are not well matched to scientific applications. … we must begin a coordinated research and development effort to create high-end systems that are better matched to the characteristics of scientific applications. To be successful, these efforts must be coordinated across agencies in a much deeper and tighter way than in the past. This will require a broad, interagency program of basic research into computer architecture, system software, programming models, software tools and algorithms.

Community concerns, my 2003 and 2004 House Science Committee testimony, and the recommendations of the 2005 report of the President’s IT Advisory Committee (PITAC) on computational science (which I chaired) mean that the 2007 bill contains a key roadmap provision: the agencies must

develop and maintain a research, development, and deployment roadmap for the provision of high-performance computing systems under paragraph (1)(C)

This roadmap requirement is something I passionately believe is critical. At present, the federal agency coordination for HPC is tactical rather than strategic. Although HPC is a critical enabler for the missions of the agencies, it is not the primary mission of any agency. My hope is that the roadmap, together with a requirement for formal review of progress against the roadmap every two years, will illuminate some of our problems. These include the need for a more balanced, long-term R&D program that includes development of new architectures, improved programming systems (ease of use and productivity), better algorithms and new scientific and national security applications.

In 2006, the function of PITAC was added to the President’s Council of Advisors on Science and Technology (PCAST). As a member of PCAST, I now co-chair (with George Scalise, President of the Semiconductor Industry Association) the review of the Networking and Information Technology Research and Development (NITRD) program. As part of this process, we continue to examine interagency coordination for IT research and expect to produce a report later this year.

Computing is the enabler of 21st century scientific discovery, and high-performance computing is the catalyst for breakthrough discovery in all scientific disciplines. Today, computing is the third pillar of the research process, complementing theory and experiment. We must continue to push, to make a difference, to shape the future. High-performance computing is the key.


Discover more from Reed's Ruminations: The Past, Present, and Future