Location: CSIE R103
Speaker: Prof. Hung-Wei Tseng, University of California, Riverside
Host: Prof. Chia-Ling Yang
The significance of artificial intelligence (AI) and machine learning (ML) applications has changed the landscape of computer systems: AI accelerators are emerging in a wide range of devices, from mobile phones to data center servers. Beyond the direct performance gains in AI/ML workloads, the introduction of AI/ML accelerators brings a new flavor of computation, the matrix processing model, which any matrix-based algorithm can in theory leverage. However, the highly application-specific designs of these accelerators pose hurdles for a wider spectrum of workloads.
In this talk, Hung-Wei will discuss state-of-the-art AI/ML accelerators and share his experience in transforming existing algorithms into AI/ML-specific functions. Hung-Wei's research group has demonstrated up to a 288x speedup over modern CPUs for database join operations by using NVIDIA's tensor cores. If the design of AI/ML accelerators can be extended to support more matrix operations, a set of matrix applications, including dynamic-programming-based algorithms, can achieve more than a 10x speedup over conventional GPUs.
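As a rough illustration of the idea (a minimal sketch, not the group's actual implementation), an equi-join can be recast as a matrix multiplication over one-hot indicator matrices; the multiplication is exactly the kind of dense operation tensor cores accelerate. All relation names and key values below are made up:

```python
import numpy as np

# Hypothetical relations: each tuple carries an integer join key.
R_keys = np.array([3, 1, 4, 1])   # join keys of relation R's tuples
S_keys = np.array([1, 5, 3])      # join keys of relation S's tuples
domain = 6                        # assume keys fall in [0, domain)

# One-hot (indicator) matrices: row i marks the key of tuple i.
R = np.zeros((len(R_keys), domain))
R[np.arange(len(R_keys)), R_keys] = 1
S = np.zeros((len(S_keys), domain))
S[np.arange(len(S_keys)), S_keys] = 1

# R @ S.T yields a |R| x |S| match matrix: entry (i, j) is 1
# exactly when R's tuple i and S's tuple j share a join key.
M = R @ S.T
pairs = np.argwhere(M == 1)  # index pairs of matching tuples
```

On a GPU, the `R @ S.T` step would be dispatched to tensor cores, turning the join's match-finding into hardware-accelerated matrix math.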
Finally, Hung-Wei will talk about a new programming paradigm that these hardware accelerators enable: simultaneous and heterogeneous multithreading (SHMT). By using GPUs and TPUs at the same time, SHMT achieves a 2x speedup over state-of-the-art GPU implementations. Hung-Wei will also discuss some extensions that are essential to making the upcoming revolution in general-purpose computing successful.
Hung-Wei is an associate professor in the Department of Electrical and Computer Engineering at the University of California, Riverside. He leads the Extreme Scale Computer Architecture Laboratory, focusing on accelerating applications through generalized computing on tensor processors and AI/ML accelerators, and on leveraging intelligent data storage systems to streamline the data processing pipeline. Hung-Wei's research has been recognized with a Facebook Faculty Research Award and an IEEE Micro "Top Picks from Computer Architecture" selection in 2020 for accelerating data-intensive applications by revisiting the storage system design. He received his Ph.D. from the Department of Computer Science and Engineering at the University of California, San Diego.