OpenAI's IMO Gold Medal Announcement Sparks Controversy Over Hype and Timing

OpenAI announced winning an IMO gold medal, causing public debate over publicity tactics, timing, and fairness, with other teams like Google DeepMind and ByteDance also involved in the competition.


Last weekend, Sam Altman announced with much fanfare that an experimental large language model from OpenAI had achieved gold-medal performance at the 2025 International Mathematical Olympiad (IMO), scoring 35/42 and placing it among the world's top math competitors.


Interestingly, reports indicate that not only OpenAI but also Google DeepMind secured IMO gold medals in this competition. However, compared to OpenAI’s high-profile announcement, Google’s approach was more low-key.

Google's low profile is understandable. According to Joseph Myers, a member of the IMO organizing committee and a two-time IMO gold medalist, the IMO had asked AI companies whose models took part not to overshadow the students, and suggested that results be announced a week after the closing ceremony — out of respect for the human participants, and to allow time to verify the AI solutions and their formats.


However, OpenAI announced the results before the closing ceremony (or, according to researcher Noam Brown, shortly after). The IMO judges and coordinators generally viewed OpenAI’s approach as impolite and inappropriate.

Unfortunately, OpenAI appears more interested in hype and publicity, even at the cost of overshadowing the students: it announced the results yesterday.


Around the same time, OpenAI staff were celebrating their model's IMO gold medal, with the announcement landing close to the official closing time of 4 PM local time on July 19.

It's also reported that Google DeepMind and ByteDance's Seed team will announce their IMO results today or soon, with scores that have been officially evaluated.

Additionally, Joseph Myers noted that OpenAI did not cooperate with IMO testing, and none of the 91 official IMO coordinators rated OpenAI’s solutions.

In contrast, DeepMind appears to have followed the rules and patiently waited for the official announcement.


Harmonic, a startup focused on mathematical AI, confirmed this: “To preserve the sanctity of the student competition, the IMO board requires participating AI companies to wait until July 28 to release results.”

It seems OpenAI did not comply with this rule and released the results early.


Thang Luong, head of the reasoning team at Google DeepMind, commented that official IMO grading guidelines do exist but are not publicly available, and without grading against those standards, a result cannot be officially declared. Since OpenAI's reported 35/42 sits exactly on the gold cutoff, any deduction would drop the result to a silver medal, not gold.


IMO gold medalist Jasper shared a similar view: the contest typically comprises six problems worth 7 points each, for a maximum of 42. This year's gold-medal cutoff is 35 points, silver 28, and bronze 19. Even a minor deduction would therefore drop OpenAI to silver, and Jasper believes that, judging from OpenAI's published solutions, some points would likely have been deducted.
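To make the arithmetic behind Jasper's point concrete, here is a minimal sketch (using the cutoffs cited above, which vary from year to year) showing how a total score maps to a medal — and why a single deducted point matters at exactly 35:

```python
# Illustrative only: cutoffs are the 2025 figures cited in the text
# (gold >= 35, silver >= 28, bronze >= 19); six problems x 7 points = 42 max.
def medal(score: int) -> str:
    """Map a total IMO score to a medal under the cited cutoffs."""
    if not 0 <= score <= 42:
        raise ValueError("score must be between 0 and 42")
    if score >= 35:
        return "gold"
    if score >= 28:
        return "silver"
    if score >= 19:
        return "bronze"
    return "none"

print(medal(35))  # OpenAI's reported score sits exactly on the gold cutoff
print(medal(34))  # one deducted point would mean silver
```

The key observation is that 35/42 leaves zero margin: any deduction under official coordination flips the classification.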

Terence Tao also pointed out that, although the questions remain the same, the test format is crucial. A student who might not win a bronze under standard conditions could win gold under a modified format. So, whether OpenAI truly earned the gold medal remains uncertain.


If, as suggested above, OpenAI's result ultimately amounts to a silver medal, the reversal would be an embarrassing one, given its earlier publicity claiming a gold-level achievement.


OpenAI has responded through researcher Noam Brown, who stated that they announced the results after the closing ceremony: he had personally contacted an IMO organizer, who advised publishing at that time. Brown also emphasized that no one told them they could only publish a week later.

Furthermore, Brown mentioned that IMO officials had contacted OpenAI months earlier, asking them to submit solutions in Lean format (machine-verifiable proofs), but OpenAI declined.


This can be read as a rebuttal to the earlier criticism — in effect: the official rule requiring AI companies to wait until after the closing ceremony applied to formal participants, and OpenAI did not formally participate.

This response has sparked intense online discussion. What do you think about this situation?

