December 17, 2019
Affiliate Profile: Xulong Tang
Xulong Tang, assistant professor, School of Computing and Information, University of Pittsburgh

Xulong Tang, assistant professor in the University of Pittsburgh’s Department of Computer Science, is in the right place at the right time.

Tang’s research interests are in high-performance computing and advanced computer architectures. He sees the collaborative spirit of the Modeling and Managing Complicated Systems (momacs) Institute in Pitt’s School of Computing and Information (SCI) as aligning perfectly with some of the most cutting-edge work in his chosen field.

“The way modeling happens is that you divide a big modeling job into small modeling jobs,” Tang said. “It’s kind of like the idea of ‘divide and conquer.’”

That fits exactly with the idea behind parallel computing, which runs many small tasks simultaneously and is an integral part of the work being carried out by momacs. Modeling complex systems requires a whole host of cross-stack collaborations spanning, Tang said, applications, algorithms, compilers, systems, and chip design.

By divvying up the large-scale work of modeling into a number of smaller processes, Tang’s momacs colleagues can create and run increasingly sophisticated models.
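The "divide and conquer" idea Tang describes can be illustrated with a minimal sketch (not drawn from Tang's own research): a large job is split into independent chunks, the chunks run in parallel, and the partial results are combined. The `simulate_chunk` task here is a hypothetical stand-in for one small modeling job.

```python
# Illustrative divide-and-conquer parallelism: split one large job
# into independent chunks and run them simultaneously.
from concurrent.futures import ProcessPoolExecutor

def simulate_chunk(values):
    # Stand-in for one small modeling task; here, a sum of squares.
    return sum(v * v for v in values)

def run_model(data, n_workers=4):
    # Divide the big job into smaller, independent pieces...
    chunk = max(1, len(data) // n_workers)
    pieces = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    # ...run the pieces in parallel, then combine the partial results.
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        return sum(pool.map(simulate_chunk, pieces))

if __name__ == "__main__":
    # Produces the same answer as a serial loop, but the chunks
    # can execute on separate cores at the same time.
    print(run_model(list(range(1000))))
```

Real large-scale modeling distributes work across many machines or GPU cores rather than a handful of local processes, but the shape of the decomposition is the same.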

But they need Tang’s help to do it.

“Momacs really encourages people to collaborate with each other along the entire software-hardware spectrum,” Tang said. Experts on both the hardware and software sides of the stack are encouraged to talk, sharing their understanding of a developing model in order to streamline its design and prevent emerging problems.

“We cannot achieve optimal performance and modeling without understanding the problem,” Tang said.

Once the momacs team understands the problem in terms of its specific application domain, he went on, they can then ask how they might accelerate the modeling process and run the model.

“So this is really collaborative and the mission of momacs aligns very well with current developments in computer science research,” Tang said.

That collaboration continues, Tang went on, even after a model has been developed. Team members focused on database security weigh in, while systems people like himself determine how best to run the model. Other team members determine how system architecture, chip design, and other facets can work together to make the model run and, just as importantly, how to make it run faster.

He gave the example of building a model to predict earthquakes. Tang’s work supports both building and running such models, and in both cases speed would be critical. A momacs team tasked with building the model would need to incorporate a vast quantity of historical information to inform its predictions. But the model’s run time would also have to be extremely fast—an earthquake-prediction system that operated slowly, or gave warnings when it was too late to act on them, wouldn’t be worth much.

Having these kinds of discussions, and taking the time to understand a problem like earthquakes from the perspective of the domain expert—a geologist, in Tang’s example—is at the heart of momacs’ work, and underscores why collaboration is such a deep-rooted part of the Institute’s culture.

The emphasis on these kinds of productive conversations extends beyond the walls of momacs. Tang anticipates working with Pitt experts to see how he can support their work. As a PhD student at Penn State University, he participated in research led by biology professors seeking to understand the edges of a cell through medical imaging. The researchers used a machine-learning model to automate a previously manual and extremely time-consuming process of altering the images.

Tang’s area of particular interest is in graphics processing units, or GPUs, a common platform for modern applications. His research explores how to use GPUs for specific tasks effectively and efficiently—how to “divide and conquer,” in other words. He seeks to increase the degree of parallelism in a computer’s architecture to make sure that an application can give the user the performance they want across multiple areas—clear graphics with quick run times, and without draining a battery in just a couple of hours.

Balancing those demands requires creativity in computer architecture, as well as adaptability. The days of one-size-fits-all computing cores are long gone.

“Computers are going from homogeneous to heterogeneous,” Tang said. “You’ll use different GPU cores to accomplish different tasks. The architecture platforms are going to actually change themselves to match the application requirements, and applications are going to change their code such that they run better on a particular system.”