Turing Award Won by Programmer Who Paved Way for Supercomputers

As a young researcher at Argonne National Laboratory outside Chicago in the late 1970s, Jack Dongarra helped write computer code called Linpack.

Linpack offered a way to do complex math on what we now call supercomputers. It became a vital tool for scientific laboratories as they expanded the boundaries of what computers could do, from forecasting weather patterns to modeling economies and simulating nuclear explosions.

On Wednesday, the Association for Computing Machinery, the world’s largest society of computing professionals, said Dr. Dongarra, 71, would receive this year’s Turing Award for his work on fundamental concepts and code that allowed computer software to keep pace with the hardware inside the world’s most powerful machines. Awarded since 1966 and often called the Nobel Prize of computing, the Turing Award comes with a $1 million prize.

In the early 1990s, using the Linpack (short for linear algebra package) code, Dr. Dongarra and his colleagues also developed a new kind of test that could measure the power of a supercomputer: how many calculations it could perform with each passing second. This became the primary means of comparing the fastest machines on earth, understanding what they could do and how they needed to change.
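The core idea of that test, time the solution of a large dense linear system and convert the work into floating-point operations per second, can be sketched in a few lines. This is an illustration only, not the official benchmark code (the real High-Performance Linpack benchmark also checks the residual of the computed solution and uses a tunable problem size); the function name `linpack_style_gflops` is ours:

```python
import time
import numpy as np

def linpack_style_gflops(n=1000, seed=0):
    """Illustrative Linpack-style measurement: time the solution of a
    dense n x n linear system Ax = b and report billions of
    floating-point operations per second."""
    rng = np.random.default_rng(seed)
    a = rng.standard_normal((n, n))
    b = rng.standard_normal(n)
    start = time.perf_counter()
    np.linalg.solve(a, b)  # LU factorization plus triangular solves
    elapsed = time.perf_counter() - start
    # Classic Linpack operation count for solving a dense system:
    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2
    return flops / elapsed / 1e9

print(f"{linpack_style_gflops():.2f} GFLOP/s")
```

Because the operation count grows as n³, larger problems keep the processor busy longer relative to memory traffic, which is why the benchmark stresses raw arithmetic speed.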

“People in science often say, ‘If you can’t measure it, you don’t know what it is,’” said Paul Messina, who oversees the Energy Department’s Exascale Computing Project, which seeks to create software for the country’s top supercomputers. “That’s why Jack’s work is important.”

Dr. Dongarra, now a professor at the University of Tennessee and a researcher at the nearby Oak Ridge National Laboratory, specialized during his years in Chicago in linear algebra, a form of mathematics that underpins many of computer science’s most ambitious tasks. It covers everything from computer simulations of climates and economies to the artificial intelligence techniques meant to mimic the human brain. Linpack, a software library developed with researchers at several American labs, helped scientists perform this math on a wide range of machines.
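The library pattern Linpack established, factor a matrix once and then reuse that factorization for many right-hand sides, survives in its successor LAPACK, which modern tools still wrap. A minimal sketch using SciPy’s LAPACK wrappers (an illustration of the pattern, not Linpack itself):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Factor A once (LU decomposition with pivoting, via LAPACK's dgetrf) ...
a = np.array([[4.0, 3.0],
              [6.0, 3.0]])
lu, piv = lu_factor(a)

# ... then solve A x = b cheaply for any right-hand side (via dgetrs).
x = lu_solve((lu, piv), np.array([10.0, 12.0]))
print(x)  # solution of the 2x2 system: x = [1, 2]
```

Separating the expensive factorization from the cheap solves is what lets simulation codes tackle many scenarios against the same underlying matrix.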

“Basically, you need these algorithms whenever you’re dealing with problems in engineering, physics, the natural sciences or economics,” said Ewa Deelman, a professor of computer science at the University of Southern California who specializes in software used by supercomputers. “They let scientists do their work.”

Over the years, as he continued to improve and expand Linpack and built libraries for new kinds of machines, Dr. Dongarra also developed algorithms that could increase the power and efficiency of supercomputers. As the hardware inside the machines improved, so did the software.

In the early 1990s, scientists could not agree on the best way to measure the progress of supercomputers. So Dr. Dongarra and his colleagues created the Linpack benchmark and began publishing a list of the world’s 500 most powerful machines.

Updated and released twice a year, the Top500 list (which omits the space between “Top” and “500”) fostered competition among scientific laboratories to see who could build the fastest machine. What started as a fight for bragging rights gained an extra edge as laboratories in Japan and China challenged the traditional strongholds in the United States.

“There’s a direct parallel between how much computing power you have within a country and what problems you can solve,” Dr. Deelman said.

The list is also a way of understanding how the technology is evolving. In the 2000s, it showed that the most powerful supercomputers were those that connected thousands of smaller computers into one giant whole, each equipped with the same kind of computer chips used in desktop PCs and laptops.

In the years that followed, it tracked the rise of the “cloud computing” services from Amazon, Google and Microsoft, which also connect vast numbers of smaller machines.

These cloud services are the future of scientific computing, Dr. Dongarra said in an interview, as Amazon, Google and other Internet giants create new kinds of computer chips that can train AI systems with a speed and efficiency that was never possible in the past.

“These companies are building chips tailored to their own needs, and that will have a big impact,” he said. “We will rely more on cloud computing and eventually move away from the ‘big iron’ machines that sit in national laboratories today.”

Scientists are also developing a new type of machine called a quantum computer, which can make today’s machines look like toys. As the world’s computers continue to evolve, they will need new benchmarks.

“Manufacturers will brag about these things,” Dr. Dongarra said. “The question is: What is the reality?”
