Date: 2023-12-22, 14:20-15:30
Location: CSIE R103
Speaker: Wei-Lin Chiang, PhD candidate, UC Berkeley
Host: Prof. Chih-Jen Lin
In this talk, I will share lessons learned from the development of two pioneering open-source projects at UC Berkeley: SkyPilot and Vicuna.
SkyPilot is an open-source framework developed at UC Berkeley to facilitate the deployment of AI and batch jobs across various cloud providers. It offers users a cloud-agnostic interface, providing increased GPU availability and cost savings. SkyPilot adopts a unique approach by viewing all clouds and regions as a unified entity, termed the Sky. Jobs are directed to the cheapest available resources in the Sky, ensuring efficient management and reduced cloud costs, with support for spot instance recovery (potentially across zones, regions, and clouds) and automatic cleanup of idle cloud instances.
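As an illustrative sketch (not from the talk), a SkyPilot job is typically declared as a task YAML and launched with the `sky` CLI; the field names below follow SkyPilot's task format, but the accelerator type, file names, and commands are hypothetical placeholders:

```yaml
# Hypothetical SkyPilot task definition (task.yaml).
# SkyPilot picks the cheapest cloud/region/zone that can satisfy `resources`.
resources:
  accelerators: A100:1   # request one A100 GPU; placeholder choice

setup: |
  # Runs once when the cluster is provisioned (placeholder commands).
  pip install -r requirements.txt

run: |
  # The actual job (placeholder command).
  python train.py
```

A task like this would be launched with something along the lines of `sky launch -c mycluster task.yaml`; idle-cluster cleanup of the kind mentioned above is exposed through SkyPilot's autostop feature.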
Next, I'll introduce Vicuna, one of the first high-quality open-source LLMs. Building on the foundational work of Llama, Vicuna pushes the boundaries of conversational AI models, drawing significant interest from the AI community. It garnered 100+ citations and millions of downloads on Hugging Face within the first few months of its release.
I'll also share our experiences in deploying these open-source projects to end-users from many labs and institutions for diverse use cases, including AI training on GPUs/TPUs, AI serving, and CPU batch jobs.
Wei-Lin Chiang is currently a PhD student at UC Berkeley working with Prof. Ion Stoica. He obtained his M.S. and B.S. from National Taiwan University under the supervision of Prof. Chih-Jen Lin.
His research is at the intersection of AI systems and cloud computing. One of his notable projects, SkyPilot, is an intercloud broker system tailored for AI workloads; the open-source system was published at NSDI'23 and has been adopted by many AI labs and organizations. Wei-Lin also led the creation of Vicuna, a high-quality open-source LLM fine-tuned from Llama, and FastChat, an open platform for training, serving, and evaluating chat LLMs. More details can be found on his personal website: https://infwinston.github.io/