Breaking News | Jason Wei, Pioneer of Chain-of-Thought, Joins Meta; Machine Heart Confirms His Slack Account Is Gone

Jason Wei, co-creator of Chain-of-Thought prompting, is reportedly leaving OpenAI for Meta amid the company's ongoing talent raid, and his Slack account has already been deactivated. The move signals another major shift in AI research teams.


Meta continues its aggressive talent hunt from OpenAI!

This might be the most high-profile talent acquisition by Zuckerberg so far.

Recently, a senior AI journalist at Wired reported that "multiple sources confirm that renowned OpenAI researcher Jason Wei and fellow scientist Hyung Won Chung are leaving to join Meta."

Both of their Slack accounts have been deactivated. Machine Heart also confirmed with OpenAI insiders that "(Jason Wei's) Slack is gone," but whether he has officially joined Meta remains unconfirmed.


Jason Wei is a well-known research scientist at OpenAI and the lead author of the influential Chain-of-Thought (CoT) prompting paradigm; Hyung Won Chung is likewise a core contributor to OpenAI's GPT models.


He is the first author of the CoT paper, which has been cited over 17,000 times.

If you need a reminder: last December, during OpenAI's product launch events, Hyung Won Chung sat next to Sam Altman, with Jason Wei on the far right. Both are MIT graduates who previously worked at Google, and now they may be reunited at Meta.


Shortly after the news leaked, Jason Wei did not respond directly, but he posted a detailed blog-style thread on Twitter discussing the asymmetry of verification and "Verifier's Law."


Here is the translated content of his original tweet:

"Over the past year, I’ve become a passionate RL enthusiast, constantly reflecting on RL principles, which taught me an important lesson about living well."

"A core concept in RL is always being in an 'on-policy' state: instead of mimicking others’ successful trajectories, it’s better to take your own actions and learn from the environment’s rewards. I realized that imitation learning is useful initially but, once a model can generate reasonable trajectories, it’s better to learn solely from its own experience."

"This applies to life too. We start by mimicking (school education), but even after graduation, we tend to study others’ successes and try to imitate. Ultimately, I learned that to surpass others, you must forge your own path, take risks, and learn from your environment."

"I prefer to review data extensively and conduct ablation studies to understand system components. These experiments, though time-consuming, have given me unique insights into effective RL strategies. Following my passion makes me more fulfilled, and I believe I am paving a stronger path for myself and my research."

"In short, imitation is good in the beginning, but to truly excel, you need to switch to 'on-policy' RL, leveraging your strengths and avoiding weaknesses."

Next, let’s look at the backgrounds of these two researchers.

Jason Wei

Jason Wei is the first author of the pioneering CoT paper, "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models." He joined Google right after undergrad, where he promoted the CoT concept, led early instruction-tuning work, and co-authored papers on the emergent abilities of large models.
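
For readers unfamiliar with the technique, here is a minimal sketch of what chain-of-thought prompting looks like in practice; the word problems and wording below are invented for illustration and are not quoted from the paper.

```python
# Minimal illustration of chain-of-thought prompting: the few-shot exemplar
# spells out intermediate reasoning steps, encouraging the model to do the
# same before giving its final answer. The problems are made up for this sketch.
STANDARD_PROMPT = """\
Q: A box holds 4 pens. Mia buys 3 boxes and gives away 2 pens. How many pens does she keep?
A: The answer is 10.

Q: A shelf holds 6 books. Sam fills 5 shelves and lends out 7 books. How many books remain?
A:"""

COT_PROMPT = """\
Q: A box holds 4 pens. Mia buys 3 boxes and gives away 2 pens. How many pens does she keep?
A: 3 boxes of 4 pens is 12 pens. Giving away 2 leaves 12 - 2 = 10. The answer is 10.

Q: A shelf holds 6 books. Sam fills 5 shelves and lends out 7 books. How many books remain?
A:"""

# The only difference is the worked-out reasoning in the exemplar answer; with
# CoT the model tends to reason "5 shelves of 6 books is 30 ... 30 - 7 = 23"
# rather than guessing a number directly.
print(STANDARD_PROMPT)
print(COT_PROMPT)
```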

In February 2023, he joined OpenAI, where he contributed to GPT-4 and has worked on reasoning models and Deep Research.


His papers on Google Scholar have been cited over 77,000 times, with the top two being the CoT paper and the GPT-4 technical report.


Hyung Won Chung

Born in Korea, Chung is a research scientist at OpenAI focusing on LLM research and applications.


He graduated from MIT and worked at Google for over three years, contributing to projects such as PaLM (540 billion parameters), BLOOM (176 billion parameters), and Flan-T5. He joined OpenAI in 2023.


At OpenAI, Chung played key roles in projects such as o1-preview (September 2024), the o1 release (December 2024), and Deep Research (February 2025), and he led the training of the Codex mini model.

He frequently appears at major OpenAI events and shares insights, including at Stanford CS25 lectures.


Chung's contributions have helped advance the o1 series, spanning reasoning, information retrieval, and reinforcement learning strategies, and cover the pipeline from theory to application.

With the departure of Wei and Chung, OpenAI faces significant talent loss.
