Welcome to our Distributed Training project page!

Students: Cheng Wan and Haoran You

As machine learning advances, both models and datasets are growing at an unprecedented pace. Our team tackles this challenge by developing new techniques for distributed training.

Our work so far has focused on reducing the communication overhead of distributed training for graph neural networks (GNNs). Through a combination of rigorous theoretical analysis and extensive empirical validation, our methods deliver substantial improvements in both communication efficiency and training convergence, enabling seamless training on graphs with up to 100 million nodes and pushing the boundaries of what is achievable in GNN-based learning.
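
As a rough, generic illustration of where communication arises in this setting (not a description of our specific method), the sketch below shows the halo-exchange step of partition-based distributed GNN training with PyTorch's `torch.distributed`; the partition layout, function name, and precomputed send/receive counts are assumptions made purely for the example.

```python
# Generic sketch (not our method): the communication step in partition-based
# distributed GNN training. Each rank owns a shard of node features and must
# fetch "halo" features of remote neighbors before every aggregation layer.
import torch
import torch.distributed as dist

def exchange_halo_features(local_feats, send_idx, send_counts, recv_counts):
    """Exchange boundary-node features with all other ranks.

    local_feats : (num_local_nodes, hidden) features owned by this rank
    send_idx    : local node indices requested by peers, concatenated in rank order
    send_counts : rows sent to each peer       (assumed precomputed from the partition)
    recv_counts : rows received from each peer (assumed precomputed from the partition)
    """
    send_buf = local_feats[send_idx].contiguous()
    recv_buf = torch.empty(sum(recv_counts), local_feats.size(1),
                           dtype=local_feats.dtype, device=local_feats.device)
    # This all-to-all is typically the dominant communication cost; shrinking
    # its volume (e.g., via better partitioning, caching, or compression) is
    # the goal of communication-efficient distributed GNN training.
    dist.all_to_all_single(recv_buf, send_buf,
                           output_split_sizes=list(recv_counts),
                           input_split_sizes=list(send_counts))
    # Remote features are concatenated with local ones before aggregation.
    return torch.cat([local_feats, recv_buf], dim=0)
```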

We are also actively working on accelerating distributed training of Transformer models. Given their central role in language and vision tasks, our efforts aim to push the limits of training speed and efficiency, empowering researchers and practitioners to tackle ever larger and more complex problems in language and image understanding and generation.
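
For context, the snippet below is a standard data-parallel baseline rather than the accelerated techniques under development here: a small Transformer wrapped in PyTorch's DistributedDataParallel, which all-reduces gradients across workers at each step. The model size, hyperparameters, and training loop are illustrative placeholders.

```python
# Baseline sketch only: data-parallel Transformer training with PyTorch DDP,
# where gradients are all-reduced across workers during backward().
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group("nccl")              # assumes launch via torchrun
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # A small encoder stack standing in for a real language/vision model.
    model = torch.nn.TransformerEncoder(
        torch.nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True),
        num_layers=6,
    ).cuda()
    model = DDP(model, device_ids=[local_rank])
    optim = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for _ in range(10):                          # toy loop with random data
        x = torch.randn(8, 128, 512, device="cuda")
        loss = model(x).pow(2).mean()
        loss.backward()                          # gradient all-reduce happens here
        optim.step()
        optim.zero_grad()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```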

At the core of our work is a commitment to advancing distributed training techniques to keep pace with the ever-growing demands of machine learning. We welcome you to join us as we explore new territory, optimize training processes, and contribute to the evolution of machine learning.

Corresponding Publications: