China Wins the IMO Again with Six Golds and Two Perfect Scores; AI Fails to Secure a Medal
China's team dominates the IMO with six gold medals and two perfect scores, crushing the competition, while AI models fail to win even a bronze, highlighting the gap between human and AI performance.

Congratulations to the Chinese team!
Early Saturday morning, news broke that China had reclaimed the IMO (International Mathematical Olympiad) championship, with all six team members winning gold and two of them achieving perfect scores.

According to the score report, China totaled 231 points: full marks on the first five problems (6 contestants × 7 points × 5 problems = 210) plus 21 points on the sixth problem, the highest of any team on that problem.
The Chinese team members:
- Deng Zhewen (High School Year 2), Wuhan Wuchang Experimental Middle School
- Xu Qiming (High School Year 2), Wuhan Jingkai Foreign Language High School
- Tan Hongyi (High School Year 2), Wuhan Jingkai Foreign Language High School
- Zhang Hengye (High School Year 2), Chongqing Bashu Middle School
- Dong Zhenyu (High School Year 3), Hangzhou Xuejun Middle School
- Deng Leyan (High School Year 1), Shanghai High School

Deng Zhewen and Xu Qiming have both been selected for the national team in two consecutive years. Since China first participated in the IMO in 1985, 17 contestants have made the team twice, including Peking University assistant professor Wei Dongyi, who achieved perfect scores in 2008 and 2009.
China won the IMO team championship every year from 2019 to 2023.
Last year’s champion, the US team, took five golds and one silver this time, ranking second.

Third place went to South Korea with four golds and two silvers, while Japan took three golds, two silvers, and one bronze, with Akiyo Satoshi winning gold with a perfect score.
Canada, meanwhile, fielded a team made up entirely of students of Chinese descent, earning two golds, two silvers, and one bronze to finish twelfth, with Warren Bei also winning gold with a perfect score.

The International Mathematical Olympiad (IMO) is a global high school mathematics competition, known as the “World Cup of Mathematics.”
First held in Romania in 1959, IMO has grown into an annual event with top students from over 100 countries across five continents competing each year.
This year marks the 66th IMO, held on the Sunshine Coast in Queensland, Australia, beginning July 15. It is the second time Australia has hosted the IMO; the first was in Canberra in 1988.

IMO 2025 Problems
The competition typically includes six problems over two days, with 4.5 hours each day. Participants solve three problems daily, each worth 7 points, for a total of 42 points. The problems cover algebra, geometry, number theory, and combinatorics, testing problem-solving skills and mathematical knowledge.
This year's medal cutoffs were 35 points for gold, 28 for silver, and 19 for bronze. The first five problems were reportedly not especially difficult for top contestants, resulting in 72 gold medals, 19 more than last year.
The sixth problem, however, was extremely challenging: only six contestants worldwide solved it, and five of them finished with perfect overall scores.
Problems 1-6 (the problem statements appeared as images in the original post)
Everyone, what do you think about this year's difficulty? Share your thoughts in the comments.
AI Models’ IMO 2025 Results: No Medals Awarded
Finally, you might be curious how large AI models fared on the latest IMO problems. In this other “arena,” no large model managed to win a medal.
Gemini 2.5 Pro performed best, scoring 31% (13 of 42 points). Previously criticized for citing non-existent theorems in the USAMO evaluation, Gemini 2.5 Pro showed improvement on IMO 2025.
Grok 4, released only recently, performed relatively poorly: many of its answers were brief and lacked explanation, and, as on other MathArena benchmarks, its solutions often lacked depth or complete proofs.

On MathArena, researchers hired IMO-level human judges to evaluate the models' answers immediately after the problems were released; the average cost per answer to each problem was at least $3.
It appears that AI still has a long way to go to match human-level intelligence in top-tier competitions.