ukiyo journal - A New News Media Connecting Japan and the World

Humanity vs. AI ─ The "Battle of Knowledge" Seen in the Math Olympiad and the Next Frontier: The "Mathematical Limits" Presented by Gemini and OpenAI

July 23, 2025, 12:05

1. Opening—The Venue Stirred by "35 Points"

Sunshine Coast, Queensland, Australia. At a convention center swept by sea breezes, 641 young mathematicians from 112 countries gathered for the 66th International Mathematical Olympiad (IMO): three problems on the first day and three on the second, with 4.5 hours of written work each day. Two giant brains, Google DeepMind's "Gemini Deep Think" and an undisclosed model from OpenAI, were brought in as "unofficial participants."


When the first day's report announced that Gemini had solved 5 problems for 35 points, the audience buzzed, and timelines on X (formerly Twitter) instantly filled with "#AImath" and "#GeminiGold" (Phys.org).


2. Gold Medal Achieved, But Not a Complete Victory

The gold medal line at the IMO typically corresponds to the top 10% of participants. This year the threshold was 35 points, and 67 students took gold, 5 of them with a perfect 42 points, the maximum across six problems worth 7 points each (Reuters). Gemini and OpenAI both scored 35 points but fell short of a perfect score. "Humans still hold the lead," ran the symbolic headline of an AFP article (Phys.org).


3. AI's Strategy—Solving in "Natural Language"

What drew attention was that both companies had their AI write proofs directly in natural language. Traditionally, AI mathematics research translated problems into formal languages such as Coq or Lean and solved them with proof-search algorithms. This time it was neither a Llama-based transformation nor GPT-4-style "Chain-of-Thought." Gemini was prompted to **"not expand its thinking too broadly but dig deeply,"** finishing within the 4.5-hour limit (Reuters). Meanwhile, OpenAI researcher Noam Brown revealed, "We significantly scaled the test-time compute," adding, "It was very expensive" (Reuters).
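The formal-language pipeline mentioned above encodes a statement so that a proof checker can verify every inference mechanically. As a minimal illustration (a toy statement, not from either company's system), a machine-checked proof in Lean 4 looks like this:

```lean
-- A machine-checked proof: the Lean kernel verifies every step,
-- in contrast to a natural-language proof graded by human judges.
theorem sum_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

In this setting, "solving" a problem means producing a term the checker accepts; the natural-language approach used at this IMO skips the encoding step entirely, and the output is graded like a human contestant's script.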


4. Enthusiasm and Skepticism on Social Media

 


  • "1/N Finally achieved the long-standing AI challenge!" OpenAI researcher Alexander Wei's thread garnered 30,000 likes in four days (X, formerly Twitter).

  • Health science startup DINQ offered congratulations: "🏅Congrats! OpenAI takes gold at IMO 2025!" (X, formerly Twitter).

  • On Reddit's /r/math, a thread mocking the computational cost ("Is $2,000 per million tokens a joke?") surged (Reddit).

  • Fields Medalist Terence Tao warned in an interview that AI evaluations should recognize the difference between a lab environment, which allows retries and collaboration, and the constraints of an exam setting (The Times of India).
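The cost complaint above can be made concrete with a back-of-envelope estimate. The $2,000-per-million-tokens rate is the figure quoted in the Reddit thread; the per-problem token count below is purely an illustrative assumption, since neither lab has disclosed actual usage:

```python
# Back-of-envelope cost of heavily scaled test-time compute.
# $2,000 per million tokens is the rate quoted in the Reddit thread;
# the token count per problem is an illustrative assumption.

PRICE_PER_TOKEN = 2000 / 1_000_000  # USD per token at the quoted rate

def contest_cost(problems: int, tokens_per_problem: int) -> float:
    """Total USD cost of generating tokens across all contest problems."""
    return problems * tokens_per_problem * PRICE_PER_TOKEN

# If each of the 6 IMO problems consumed ~10 million sampled tokens:
print(f"${contest_cost(6, 10_000_000):,.0f}")  # → $120,000
```

Even under these rough assumptions, the cost lands far beyond what any human contestant's participation requires, which is what fueled the thread's sarcasm.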


5. Who is the "Winner"?—Impact on Mathematics Education

A female student (17) from the Korean team, interviewed at the venue, laughed, "AI's answers are easy to read, but 'insight' is still human." Her advisor was positive, saying, "I want to use Gemini in class to compare solution variations." Meanwhile, the Japanese delegation revealed plans to propose "transparency in scoring criteria and anti-cheating measures for AI" to the international committee.


In educational settings, a dichotomy has already emerged: whether to "delegate homework to AI or view it as the best tutor." Finland announced a pilot program to "introduce LLM interactive proof analysis into the high school mathematics curriculum" the day after the competition.


6. Cutting-Edge Research—Challenges to Unresolved Problems

Professor Jung of Brown University boldly predicted, "The era when AI and mathematicians submit papers to arXiv as 'co-authors' will come within a year" (Reuters). In fact, Google hinted in a blog post that it has established a "Gemini-Research" team to tackle three themes: the Riemann hypothesis, algebraic geometry, and topological quantum field theory. If realized, this could redefine the "standard moves" of theorem proving and shake the very definition of mathematical creativity.


7. Challenges—Computational Resources, Environment, and Fairness

OpenAI's power consumption was estimated at about 1.3 MWh, roughly three hours of operation for a water-cooled data center. Beyond the environmental impact, there is a risk of widening resource disparities among participating countries. The IMO committee is discussing a proposal, from next year onward, to separate AI participation into an official category with a power-consumption cap.
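To put the roughly 1.3 MWh estimate in perspective, a quick household comparison helps; the 10 kWh/day household figure below is an assumed average for illustration, not from the article:

```python
# Rough scale comparison for the reported ~1.3 MWh estimate.
# Assumption: an average household uses about 10 kWh per day (illustrative).

MODEL_RUN_MWH = 1.3          # reported estimate for the contest run
HOUSEHOLD_KWH_PER_DAY = 10   # assumed average household consumption

# Convert MWh to kWh, then divide by daily household usage.
days = MODEL_RUN_MWH * 1000 / HOUSEHOLD_KWH_PER_DAY
print(f"~{days:.0f} days of one household's electricity")  # → ~130 days
```

On these assumptions, one contest run consumed about four months of a household's electricity, which illustrates why a power cap is on the committee's agenda.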


Additionally, the detection of "hybrid cheating," where humans might add to AI's answers, remains unresolved. DeepMind is reportedly developing "self-proving tokens" with holographic signatures.


8. Future Vision—Collaboration or Competition

In summary, this IMO demonstrated to the world that **"humans narrowly won, but AI is already in the same ring."**
The hashtags #TeamHuman and #TeamAI on social media are often discussed in a competitive framework. However, students on the ground are beginning to accept AI as "rivals who compete and teach each other."


At the next IMO in 2026 (scheduled for Serbia), creating an AI category separate from the human competition is becoming more realistic. Whether the "35-point" barrier loses its meaning or a new ceiling is set, the future of mathematics will play out in the margin where blackboard chalk and silicon transistors meet.


Reference Articles

Humans Beat AI in International Math Contest, AI Scores Gold Medal-Level Marks
Source: https://phys.org/news/2025-07-humans-ai-international-math-contest.html



© ukiyo journal - A New News Media Connecting Japan and the World. All rights reserved.