Kaiming He's New Role: Distinguished Scientist at Google DeepMind
Kaiming He joins Google DeepMind as a Distinguished Scientist, continuing his pioneering and influential research in AI and deep learning.

Congratulations to Google.
Recently, a netizen shared that they had received an internal company email welcoming Kaiming He to Google, indicating that he has joined the company in some capacity.
A check of Kaiming He's personal homepage confirmed that he has indeed joined Google, though in a part-time role as a Distinguished Scientist at Google DeepMind.

Personal homepage: https://people.csail.mit.edu/kaiming/
Details about Kaiming He's specific research at Google are not yet available, but his recent publications suggest where his focus lies. His team recently released a paper titled "Mean Flows for One-step Generative Modeling," which was also highlighted at a CVPR workshop.
In his presentation, He pointed out that before AlexNet, layer-wise training methods such as Deep Belief Networks (DBNs) and Denoising Autoencoders (DAEs) were popular. After AlexNet, recognition models generally adopted end-to-end training, which simplified both design and optimization. Interestingly, today's generative models resemble that earlier layer-wise paradigm: diffusion models generate images through T denoising steps, and autoregressive models generate tokens one at a time. This raises the question: will the history of layer-wise training repeat itself in generative modeling? Could generative modeling also evolve toward end-to-end, one-step training?
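To make the contrast concrete, here is a minimal, hypothetical PyTorch sketch of the two sampling styles discussed above: a conventional T-step sampler that integrates an instantaneous-velocity network, versus a one-step sampler that uses an average-velocity network in the spirit of MeanFlow. The function names, signatures, and time convention (noise at t = 1, data at t = 0) are illustrative assumptions, not the paper's actual code.

```python
import torch

# Hypothetical networks (illustrative only):
#   v_net(z, t)    -> instantaneous velocity at time t
#   u_net(z, r, t) -> average velocity over the interval [r, t] (MeanFlow-style)
# Data is assumed flattened to shape (batch, dim); noise lives at t = 1, data at t = 0.

def multi_step_sample(v_net, z_noise, steps=100):
    """Conventional flow/diffusion-style sampling: many small Euler steps from noise to data."""
    z = z_noise
    dt = 1.0 / steps
    t = torch.ones(z.shape[0], 1, device=z.device)
    for _ in range(steps):
        z = z - dt * v_net(z, t)  # follow the instantaneous velocity a little at a time
        t = t - dt
    return z

def one_step_sample(u_net, z_noise):
    """One-step sampling: a single jump z_0 = z_1 - (1 - 0) * u(z_1, 0, 1)."""
    z = z_noise
    r = torch.zeros(z.shape[0], 1, device=z.device)
    t = torch.ones(z.shape[0], 1, device=z.device)
    return z - (t - r) * u_net(z, r, t)
```

The multi-step loop mirrors the step-by-step character of diffusion sampling, while the one-step version is the end-to-end counterpart the talk asks about.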
For those interested, the slides from Kaiming He's latest CVPR lecture are available: "Kaiming He CVPR Lecture: Towards End-to-End Generative Modeling."
From Gaokao Top Scorer to AI Scholar with Over 710,000 Citations
In 2003, Kaiming He scored 900 points on the Gaokao, ranking first in Guangdong Province, and was admitted to Tsinghua University's Department of Physics. After graduating, he pursued a PhD at the Chinese University of Hong Kong under Prof. Xiaoou Tang.
He interned at Microsoft Research Asia in 2007 under Dr. Jian Sun. After earning his PhD in 2011, he joined Microsoft Research Asia as a researcher. In 2016, he moved to Facebook AI Research as a research scientist, and in 2024 he joined MIT as an associate professor.

Kaiming He's research has received multiple awards. In 2009, his paper "Single Image Haze Removal Using Dark Channel Prior" won the CVPR Best Paper Award. In 2016, he received the CVPR Best Paper Award again for ResNet, and he was a CVPR 2021 Best Paper finalist. His work on Mask R-CNN won the ICCV 2017 Best Paper Award (Marr Prize), and he was also a co-author of that year's Best Student Paper.
According to Google Scholar, his citations now exceed 710,000.

Since joining MIT, Kaiming He has also proven popular among students, with his courses drawing wide interest.
His Iconic Works
Kaiming He's most famous work is ResNet, published in 2016, which has been cited over 280,000 times and was recognized by Nature as one of the most cited papers of the 21st century.

The paper, "Deep Residual Learning for Image Recognition," won the CVPR 2016 Best Paper Award, and the residual connection it introduced has become a core building block of modern models such as Transformers, AlphaGo Zero, and AlphaFold.
In 2021, he published "Masked Autoencoders Are Scalable Vision Learners" (MAE), which quickly became a hot topic in computer vision. The method masks a large random subset of image patches and trains an autoencoder to reconstruct the missing pixels, showing that this simple recipe scales well for visual pre-training.
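The core idea of residual learning is simple: instead of learning a direct mapping H(x), each block learns a residual F(x) and outputs x + F(x). Below is a minimal, hypothetical PyTorch sketch of a basic residual block; the class name and layer sizes are illustrative, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    """Minimal residual block: output = ReLU(x + F(x)), with F = conv-BN-ReLU-conv-BN."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = self.bn2(self.conv2(self.relu(self.bn1(self.conv1(x)))))
        return self.relu(x + residual)  # identity shortcut: gradients flow straight through

# Example: a 64-channel block applied to a dummy feature map.
block = BasicResidualBlock(64)
out = block(torch.randn(1, 64, 32, 32))
```

The identity shortcut is what lets very deep networks train stably, and it is this connection pattern that later architectures reuse.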
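For readers curious what that masking looks like in code, here is a minimal, hypothetical sketch of random patch masking, assuming the image has already been split into patch tokens of shape (batch, num_patches, dim); the function name and shapes are illustrative and not taken from the official MAE code.

```python
import torch

def random_masking(patches: torch.Tensor, mask_ratio: float = 0.75):
    """Keep a random subset of patch tokens; the encoder sees only the visible ones."""
    B, N, D = patches.shape
    num_keep = int(N * (1 - mask_ratio))
    noise = torch.rand(B, N, device=patches.device)   # one random score per patch
    ids_shuffle = noise.argsort(dim=1)                # random permutation of patch indices
    ids_keep = ids_shuffle[:, :num_keep]              # indices of the visible patches
    visible = torch.gather(patches, 1, ids_keep.unsqueeze(-1).expand(-1, -1, D))
    return visible, ids_shuffle                       # shuffle order lets a decoder restore positions

# Example: 196 patches (14x14) of dimension 768, 75% masked -> 49 visible tokens.
tokens = torch.randn(2, 196, 768)
visible, order = random_masking(tokens)
```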

Many newcomers to AI are surprised to discover that Kaiming He is a key author behind so many influential papers. Even while working in industry, his research attitude has been regarded as a benchmark: he publishes only a few first-author papers each year, but they are consistently impactful.
His straightforward, accessible way of explaining complex ideas, avoiding tricks and unnecessary formalism, is also a distinct advantage in teaching.
Finally, congratulations to Google once again, and we look forward to Kaiming He's continued groundbreaking work there.