
Assistant professor Ali Akoglu, of the electrical and computer engineering department, breaks out the giant graphics processing chips he recently received from Nvidia.

UA Engineering Selected as High-Performance Parallel Computing Teaching Center


May 24, 2011
Graphics technology company Nvidia selects the University of Arizona department of electrical and computer engineering as one of its teaching centers.

Nvidia supports almost 50 CUDA Teaching Centers around the world -- in Europe, China, the Middle East, and North America.

CUDA stands for "compute unified device architecture," Nvidia's name for the platform that lets programmers use its graphics hardware for general-purpose, high-performance computing. Nvidia's approach to achieving ever greater computing power is to use hundreds, even thousands, of computer graphics chips running in parallel.
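To give a flavor of that approach, the sketch below is a minimal, illustrative CUDA C program (not code from the UA project): a "kernel" function is written once, and the GPU runs it simultaneously on thousands of threads, each adding one pair of array elements. The array size and names are hypothetical.

```cuda
#include <cuda_runtime.h>
#include <stdio.h>

// Kernel: each GPU thread computes one element of the sum.
__global__ void vectorAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x; // this thread's element index
    if (i < n)                                     // guard threads past the end
        c[i] = a[i] + b[i];
}

int main(void) {
    const int n = 1 << 20;             // one million elements (illustrative)
    size_t bytes = n * sizeof(float);

    // Host (CPU) arrays.
    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device (GPU) arrays; copy the inputs across the PCIe bus.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vectorAdd<<<blocks, threads>>>(da, db, dc, n);

    // Copy the result back (this call waits for the GPU to finish).
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]);

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```

The key idea is the division of labor: the CPU orchestrates memory transfers and kernel launches, while the GPU's many simple cores do the arithmetic in parallel.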

Nvidia's graphics technology can be found everywhere from the latest multitasking 4G smartphones to the fastest supercomputer in the world, the Tianhe-1A in the National Supercomputing Center in Tianjin, China.

Heading the project at UA will be high-speed computing expert Ali Akoglu, assistant professor in the electrical and computer engineering department and a member of the BIO5 Institute. When he approached Nvidia about making UA a CUDA Teaching Center, he saw an opportunity to enhance the engineering curriculum and to expand computing facilities for the entire UA campus.

"The learning center will allow parallel processing to be integrated throughout the curriculum, not just studied at a senior level." Akoglu said. "Industry needs this knowledge. If our students have this knowledge when they graduate, they are like gold to computer companies."

Supercomputer Sharing

Akoglu also plans to have this knowledge cascade from UA engineering out to the wider UA community. "We can say to other departments, send us your students and we'll train them, and you can have supercomputer capabilities in your lab," he said. "I want to attract other departments to help them accelerate their discovery."

To make computers faster, silicon chip designers cram as many transistors as possible into a central processing unit, or CPU. Because CPUs have to handle different types of operations while accessing increasing amounts of memory, the laws of physics and diminishing returns mean that chip performance cannot simply keep expanding forever just by squeezing in more and more transistors.

But the rules are different for chips that only process parallel streams of graphical information. Life is much simpler for graphics processing units, or GPUs, which have fewer kinds of tasks to perform. This simplicity allows chip designers to make GPUs that can easily outperform CPUs on parallel workloads.

Nvidia, founded almost 20 years ago, was the first company to release a programmable GPU. Today, some of the world's fastest supercomputers use multiple GPUs working in parallel to run high-performance computing applications such as wind-tunnel simulations, molecular modeling, and weather forecasting.

Nvidia will provide high-performance hardware, GeForce GTX 480 graphics cards and a Tesla C2070 computing processor, plus funding for a teaching assistant. Akoglu's department will match the funding. Akoglu is also involved in the iPlant Collaborative, which will help fund workshops.

iPlant was established in 2008, when the National Science Foundation awarded a UA-led team $50 million to create a global center that would enable plant, computer and information scientists from around the world to research plant biology's biggest challenges.

Two-Way Flow

The CUDA Teaching Center will give UA students and researchers access to a range of Nvidia resources, such as testing and development systems, online seminars, podcasts, and teaching materials.

In return, Nvidia gets vital academic feedback to help it improve the state of parallel computing research and education. Nvidia also creates a larger pool of graduates who are well trained in parallel processing. One way Akoglu will provide this feedback is by monitoring scientific papers arising from CUDA-related research by his own and other UA labs.

"We want to build a unit that's a magnet for the UA research community, and we'll need to build a UA-wide team," Akoglu said. "If we are successful, UA could apply to become a CUDA Research Center."

The push to become a CUDA Research Center reflects Akoglu's long-term dream of establishing the UA as a supercomputing hub for researchers in all disciplines who need high-performance computers for work such as gene mapping, climate modeling, and air- and spacecraft simulations. "I want to be the enabler of fundamental research for the UA community," he said.