AI hallucinations are solvable, artificial general intelligence about 5 years away: NVIDIA’s Jensen Huang

FP Staff March 20, 2024, 15:28:32 IST

NVIDIA chief Jensen Huang expects AGI, or artificial general intelligence, to arrive well before 2030. In fact, he says humans will see the first AGIs within the next five years.

NVIDIA CEO Jensen Huang believes AGI is about five years away from being achieved.

Artificial General Intelligence, or AGI, is one of the biggest talking points in the world of AI, and a major milestone that almost everyone currently working on AI hopes will arrive soon. If one were to go by what Jensen Huang, CEO of NVIDIA, believes, we will have AGI within the next five years.

AGI promises a massive leap forward in technological capabilities. AGI, often dubbed “strong AI” or “human-level AI,” represents the potential for machines to exhibit cognitive abilities akin to or surpassing those of humans. Unlike regular or narrow AI, which specializes in specific tasks, AGI is envisioned to excel across a wide spectrum of cognitive domains.

At NVIDIA’s annual GTC developer conference, CEO Jensen Huang addressed the press, offering insights into the trajectory of AGI and grappling with the existential questions it raises. While acknowledging the significance of AGI, Huang expressed weariness with the persistent inquiries surrounding the topic, attributing this fatigue to frequent misinterpretations of his statements by the media.

The emergence of AGI prompts profound existential considerations, questioning humanity’s control and role in a future where machines may surpass human capabilities. Central to these concerns is the unpredictability of AGI’s decision-making processes and objectives, potentially diverging from human values and priorities—a theme explored in science fiction for decades.

Despite the insistence of some press outlets on eliciting a timeline for AGI’s development, Huang emphasized the challenge of defining AGI and cautioned against sensationalist speculation. Drawing parallels to tangible milestones like New Year’s Day or reaching a destination, Huang underscored the importance of consensus on measurement criteria for AGI attainment.

Offering a nuanced perspective, Huang proposed achievable benchmarks for AGI, suggesting a timeframe of five years for specific performance criteria. However, he emphasized the necessity of clarity in defining AGI’s parameters for accurate predictions.

Addressing concerns about AI hallucinations (instances where AI generates plausible yet inaccurate responses), Huang advocated for a solution rooted in thorough research. He proposed a “retrieval-augmented generation” approach, akin to basic media literacy, where the AI verifies answers against reliable sources before responding. Particularly for critical domains like health advice, Huang recommended cross-referencing multiple sources to ensure accuracy.
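
Huang’s description corresponds to the standard RAG pattern: retrieve supporting passages from a trusted corpus first, then answer only from what was retrieved. Below is a minimal Python sketch of that idea; the toy corpus, the word-overlap retriever, and the generate_answer() stand-in are hypothetical illustrations, not NVIDIA’s or any production system’s implementation.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# Everything here is a toy stand-in: a real system would use a vector
# database for retrieval and a language model for generation.

TRUSTED_CORPUS = [
    "Retrieval-augmented generation grounds model output in source documents.",
    "AGI has no single agreed-upon definition or measurement criteria.",
    "Cross-referencing multiple sources reduces the risk of hallucinated answers.",
]

def retrieve(question: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank passages by naive word overlap with the question (toy scoring),
    keeping only passages that share at least one word."""
    q_words = set(question.lower().split())
    scored = [(len(q_words & set(p.lower().split())), p) for p in corpus]
    return [p for score, p in sorted(scored, reverse=True) if score > 0][:k]

def generate_answer(question: str, passages: list[str]) -> str:
    """Stand-in for a model call: answer strictly from retrieved passages,
    and abstain when nothing relevant was found."""
    if not passages:
        return "I could not verify an answer in the trusted sources."
    return "Based on the sources: " + " ".join(passages)

question = "What does retrieval-augmented generation do?"
evidence = retrieve(question, TRUSTED_CORPUS)
print(generate_answer(question, evidence))
```

Retrieving k > 1 passages mirrors the cross-referencing of multiple sources that Huang recommends for sensitive topics such as health advice.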

In essence, Huang’s insights shed light on the complexities of AGI development and the imperative of responsible AI governance to mitigate potential risks. As AI continues to advance, stakeholders must navigate ethical considerations and deploy strategies to ensure AI systems align with human values and serve society’s best interests.
