"Is AI 'on par with humans'? The Core of the AGI Debate Unveiled by NVIDIA's Top Executive's Statement"

"Is AI 'on par with humans'? The Core of the AGI Debate Unveiled by NVIDIA's Top Executive's Statement"

"AGI Has Already Been Achieved" - What Did NVIDIA CEO's Statement Mean? Why Social Media Was Divided

NVIDIA CEO Jensen Huang's statement, "I think AGI has already been achieved," has sent ripples through the AI industry. However, taking this statement to mean "AI has finally reached human-level intelligence" would be quite an overstatement. In reality, he made this comment on Lex Fridman's podcast, based on a rather specific premise.

In the conversation in question, Fridman defined AGI as "AI that can effectively handle human jobs: start, grow, and run a billion-dollar tech company." Huang replied, "I think now. I think we've achieved AGI." However, he immediately qualified this: while AI might create small, temporarily buzz-worthy services for monetization, "the possibility of 100,000 agents creating NVIDIA itself is zero." In other words, he did not credit current AI with the ability to build and maintain a complex organization over the long term.

This combination of a strong claim and an immediate reservation is what sparked the controversy. In 2024, Huang himself said that if AGI is defined as "the ability to pass a wide range of human tests," it could be achieved within five years. At first glance, his position seems to have jumped to "we've already reached it" in about two years, but it is more natural to read this as a shift in the yardstick for AGI: from "human-equivalent universal intelligence" to "agent capability that can generate economic value."

So why is there strong opposition to this view? The biggest reason is that there is no settled definition of AGI in the first place. The Verge likewise describes AGI as a vague concept that has sparked intense debate among tech executives, engineers, and the general public in recent years. Furthermore, the 2026 International AI Safety Report points out that while current general-purpose AI has made significant strides in specific fields like mathematics, science, and coding, its performance remains "jagged": it excels at some difficult tasks while failing at seemingly simple ones, and it still produces hallucinations and unstable outputs. In other words, while AI increasingly appears to surpass humans on benchmarks and under limited conditions, it still looks rough when judged as comprehensive, robust intelligence.

The assessment by the Brazilian experts cited in the linked article aligns with this point. Álvaro Machado Dias of Unifesp and Esther Luna Colombini of Unicamp are reported to argue that current AI remains a highly specialized system and can hardly be called general intelligence in the human sense. Machines may outperform humans at some tasks, but that does not immediately amount to "human-level intelligence" as a whole.


The split reaction on social media also stems from this definitional gap. On X, enthusiastic posts interpreted the statement as "finally crossing the threshold," while skeptics countered that "they just lowered the bar." The trend summary on X likewise characterized the online reaction as a mix of excitement and skepticism.

The optimistic side sees the very fact that "AI capable of starting a billion-dollar company" has become a realistic framing as a historic change. Indeed, on X and LinkedIn there were takes like "AGI is no longer a distant future" and "the threshold is already behind us." For this camp, what matters is not whether AI is philosophically equivalent to humans, but whether it is starting to deliver results in place of humans in business and product development.

On the other hand, the skeptical tone was quite harsh. On X, sarcasm like "a statement from someone selling GPUs" spread, while on Reddit, quips recasting AI as "artificial greed inflation" and reactions like "if you can change the definition at will, anything can be AGI" gained traction. Some expressed concern that "if it really is AGI, white-collar jobs will soon disappear," but there were also practical counterarguments to the effect that current models are still weak at the kind of open-ended problems a junior developer could handle intuitively.

On Hacker News, the discussion was calmer. Commenters pointed out that, however extreme the headline seemed, Huang was merely answering a narrow premise, and that the article's framing did much of the work. At the same time, practical limitations of current models were highlighted, such as losing context when handling large codebases or redoing already completed tasks. What emerges here is that the social-media debate has shifted from "Is AI smart?" to "Is it stable enough to be entrusted with work as a human equivalent?"

In the end, this statement was less a declaration that AGI has arrived than an event that exposed the struggle over where to place the meaning of the word AGI. If you look only at the ability to build valuable apps quickly, attract users, and accelerate specific tasks, AI is already beginning to match humans in some scenarios. But when you consider comprehensive intelligence, including long-term strategy, recovery from failure, context retention, and adaptation to an ambiguous real world, it is still too early to declare "we've reached it." As of 2026, the state of the art is better described as "a very powerful tool that has begun to economically replace some human functions" than as "human-like intelligence."


Source URLs

G1 Globo article on Huang's statement and why the idea is contested
https://g1.globo.com/tecnologia/noticia/2026/03/25/ceo-da-nvidia-diz-que-inteligencia-artificial-atingiu-nivel-humano-por-que-ideia-e-contestada.ghtml

Lex Fridman Podcast #494 transcript. Primary source of Jensen Huang's statement
https://lexfridman.com/jensen-huang-transcript/

The Verge report covering Huang's statement and his immediate reservation
https://www.theverge.com/ai-artificial-intelligence/899086/jensen-huang-nvidia-agi

Reuters article reporting Huang's 2024 view that AGI "could be achieved within five years"
https://www.reuters.com/technology/nvidia-ceo-says-ai-could-pass-human-tests-five-years-2024-03-01/

International AI Safety Report 2026, surveying current general-purpose AI capabilities and limitations, including hallucinations and instability
https://internationalaisafetyreport.org/publication/international-ai-safety-report-2026

Article summarizing Brazilian expert comments introduced in the G1 article
https://dailyjournal.news/news/2026-03-25/ceo-da-nvidia-diz-que-ia-atingiu-nivel-humano-mas-especialistas-contestam

Trend page aggregating the topic on X, confirming the mix of supportive and skeptical reactions
https://x.com/i/trending/2036153415601132001

An example of positive reception: a reaction on X/LinkedIn interpreting the statement as "crossing the threshold"
https://www.linkedin.com/posts/guillermoflor_breaking-nvidias-ceo-jensen-huang-just-activity-7442149074503487491-26Co

An example of skeptical reaction: posts on X reading the remark as an immediate walk-back and as "logic from someone selling GPUs"
https://x.com/TukiFromKL/status/2036196478582985178
https://x.com/SirClmnt/status/2036200878307164651

Reddit thread gathering sarcasm, job anxiety, and pushback on the definition
https://www.reddit.com/r/technology/comments/1s1vhsf/nvidia_ceo_jensen_huang_says_i_think_weve/

Hacker News discussion of the headline framing and the practical limitations of current AI
https://news.ycombinator.com/item?id=47495966