Nvidia takes on Tesla with what Jensen Huang calls the ‘ChatGPT moment’ for self-driving

“The ChatGPT moment for physical AI is here — when machines begin to understand, reason, and act in the real world.”

So said Nvidia (NVDA) CEO Jensen Huang at CES in Las Vegas, throwing down the robotic gauntlet in a statement about the GPU maker’s latest autonomous driving move.

Nvidia’s Alpamayo is a chain-of-thought, reasoning-based vision-language-action (VLA) model for self-driving cars and robotaxis, which the company said is designed to integrate perception, language, and action planning when making decisions.

Huang played a video of Alpamayo in action during his presentation. In it, a test vehicle navigated the streets of San Francisco, performing maneuvers like those of a human driver, without any interventions.

The big question now is whether Nvidia has created a system that is superior to Tesla’s (TSLA) and is on par with what Alphabet’s (GOOG, GOOGL) Waymo is doing with its best-in-class robotaxis.

Nvidia founder and CEO Jensen Huang introduces Alpamayo autonomous vehicles during Nvidia Live at CES 2026, ahead of the annual Consumer Electronics Show in Las Vegas, on Jan. 5, 2026. (Patrick T. Fallon/AFP via Getty Images)

Huang is extremely bullish on autonomous driving. The CEO said he sees a future with a billion autonomous cars on the road. Nvidia has been plugging away at self-driving tech for over 10 years now. Last year, Huang predicted physical AI solutions like autonomous driving would be a “multitrillion-dollar” opportunity.

At CES, Huang said the upcoming Mercedes-Benz CLA EV would be the first to implement Nvidia’s full autonomous driving stack, which includes Alpamayo, in Q1, and that Nvidia plans to have autonomous robotaxis — like Waymo’s — on the road with partners like Uber (UBER) and Lucid (LCID) by 2027. Currently, Alpamayo is a Level 2 advanced driver assistance system, meaning it can operate autonomously but requires human supervision.

All the major players in the autonomous space share the same goal: Level 4 autonomy, meaning the cars are fully self-driving within a geographic zone. While Waymo has achieved this in some markets, Tesla’s FSD and Nvidia’s current DRIVE Hyperion system are at Level 2 for now. Nvidia is targeting Level 4 capability with Alpamayo soon.

Katie Driggs-Campbell, a professor at the Grainger College of Engineering at the University of Illinois, told Yahoo Finance that she’s impressed so far by Nvidia’s progress, though she warns that PR hype can overtake reality.

On its face, Alpamayo is a step beyond what Tesla is doing with its closed FSD system, which powers its EVs and robotaxis, Driggs-Campbell said, though Tesla’s system requires supervision at this time, meaning it operates at Level 2 autonomy. Nvidia says Alpamayo is aiming for Level 4 when fully deployed, and Tesla aims to get FSD there as well with further software updates.

Tesla’s FSD is an end-to-end neural network trained on massive amounts of real-world fleet driving data to make perception and control decisions. Tesla shifted from rule-based control and separate modules, per an Elon Musk edict, to a single neural network system that goes from camera input to vehicle control outputs, with no explicit reasoning.
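To make that contrast concrete, here is a toy Python sketch. Everything in it (the function, the stand-in "network," the thresholds) is invented for illustration and is not Tesla's actual code; the point is only that an end-to-end policy maps raw pixels straight to controls without exposing any human-readable intermediate steps.

```python
# Toy illustration only: NOT Tesla's real code. An end-to-end policy is
# a single opaque learned mapping from camera pixels to vehicle controls.
# A trivial function stands in here for the trained neural network.

def end_to_end_policy(camera_frames):
    """Pixels in, controls out, with no intermediate reasoning exposed."""
    # Stand-in "network": average brightness of the first frame.
    frame = camera_frames[0]
    brightness = sum(sum(row) for row in frame) / (len(frame) * len(frame[0]))
    # The policy outputs actions, but never says *why* it chose them.
    throttle = 0.5 if brightness > 0.5 else 0.0
    return {"steer": 0.0, "throttle": throttle}

frame = [[0.9] * 3 for _ in range(3)]
controls = end_to_end_policy([frame])
# Engineers can grade the outputs, but the decision process stays hidden.
```

In a real system the stand-in function would be a deep network trained on fleet driving data, but the interface is the same: inputs and actions, with no explanation in between.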

How Tesla’s FSD model solves for self-driving. (Tesla via the Understanding AI Substack)

Tesla’s FSD is also a closed system, and the company has released few details about how it actually works. For instance, after Nvidia’s Alpamayo reasoning-based presentation, Tesla CEO Elon Musk claimed “that’s just exactly what Tesla is doing” in its most recent FSD release, but it’s difficult to validate that claim.

Based on what’s currently known, the Tesla neural network learns patterns from data (like millions and millions of driving videos) and can perform driving actions, but it doesn’t produce explicit reasoning (for example, stopping because a pedestrian might step through the intersection). Decision processes are essentially hidden and can’t be examined after the fact.

“Tesla is taking a more traditional deep learning approach, where it is in some ways, a simpler problem, where you have your input images and any other sensor data that they want to include, and the output actions that they get from a bunch of driving examples [to train the model],” Driggs-Campbell said. “And I would say one of the cool things about Tesla is they have a really easy way to just take data from anyone who’s driving a Tesla.”

Tesla has produced close to 9 million vehicles since its inception, with most capturing and sending visual data back to Tesla.

The issue is that Tesla’s system is a “black box,” meaning engineers don’t know what’s going on inside the neural network. All that can be measured is whether the driving outputs were satisfactory.

This is where Alpamayo and Waymo differ from Tesla, and it’s helpful to discuss by way of an example.

Say an Alpamayo-equipped car approaches an intersection and sees that the traffic light is nonfunctioning. In theory, Alpamayo would be able to “reason” through the tricky situation by processing visual information, translating it into a language-based solution (for example: stop at the intersection, spot any obstacles, then drive through), and then planning and executing that action with vehicle movement.
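That perceive-reason-act loop can be sketched in toy form. The function names, plans, and traffic-light logic below are hypothetical stand-ins, not Nvidia's API; the point is that the model emits a human-readable reasoning trace alongside its plan, unlike a pure end-to-end policy.

```python
# Hypothetical sketch of a perceive -> reason -> act loop, loosely
# following the article's description of VLA models. All names and
# logic are invented for illustration.

def perceive(scene):
    # Turn raw scene data into structured observations.
    return {"light": scene.get("traffic_light"),
            "obstacles": scene.get("obstacles", [])}

def reason(obs):
    # Emit a language-based trace alongside the chosen plan.
    if obs["light"] == "inoperative":
        trace = ["traffic light is dark -> treat intersection as all-way stop",
                 "check for obstacles before proceeding"]
        plan = "stop" if obs["obstacles"] else "stop_then_go"
    else:
        trace, plan = ["light functioning -> obey signal"], "obey_signal"
    return trace, plan

def act(plan):
    # Map the plan to (throttle, brake) commands.
    return {"stop": (0.0, 1.0),
            "stop_then_go": (0.2, 0.0),
            "obey_signal": (0.4, 0.0)}[plan]

trace, plan = reason(perceive({"traffic_light": "inoperative", "obstacles": []}))
throttle, brake = act(plan)
```

The key difference from an end-to-end approach is the trace: engineers can inspect the stated reasoning after the fact, not just the final control outputs.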

Nvidia’s solution is likely similar to that of Waymo, which has been good about showing its work.

How Waymo’s two-system approach works. (Waymo)

Waymo calls its modular solution a “two-system approach,” or “thinking fast and slow.” Large systems like Waymo’s and Alpamayo are known as foundation models, though Tesla also calls FSD a foundation model due to its vast training on videos of the physical world.

In Waymo’s case, System 1 is reactive, mimicking an automatic or instinctual response to sensor inputs. System 2 is deliberate, reasoning through tasks to determine intermediate steps. Both feed into what the company calls its “world decoder,” which decides the best plan of action for the vehicle (see graphic above), though explicit rules can override the system’s thinking.

“Almost all systems are going to have some safeguards [explicit rules] in there, since there are some hard constraints that you don’t need things to reason about in theory … [For example,] we know you shouldn’t go off the road,” Driggs-Campbell said. “There is some level of monitoring and reasoning, but the challenge comes when you have systems that don’t really play together, you do sometimes get some issues where [you’re] jittering or some back and forth, and it can cause weird problems.”
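A toy sketch of that fast/slow split with hard-rule overrides, loosely modeled on the article's description of Waymo's approach; the function names, thresholds, and rules are invented for illustration, not Waymo's actual implementation.

```python
# Illustrative only: a two-system arbiter with safety-rule overrides,
# invented to mirror the article's description. Not real Waymo code.

def system1_fast(sensors):
    # Reactive: instant, instinct-like response to immediate hazards.
    return "brake" if sensors["obstacle_m"] < 5 else "continue"

def system2_slow(sensors):
    # Deliberate: reasons through intermediate steps (toy version).
    return "reroute" if sensors["road_closed"] else "continue"

def world_decoder(fast, slow, sensors):
    # Hard safety constraints can override either system's output.
    if sensors["off_road"]:
        return "stop"  # explicit rule; no reasoning needed
    return fast if fast == "brake" else slow

sensors = {"obstacle_m": 3, "road_closed": True, "off_road": False}
decision = world_decoder(system1_fast(sensors), system2_slow(sensors), sensors)
```

The jitter Driggs-Campbell describes would show up here if the two systems disagreed and the arbiter flip-flopped between their outputs frame to frame.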

The example above of nonfunctioning traffic lights was a real-world difficulty that Waymo robotaxis encountered in San Francisco in late December, when its two-system and rules-based model couldn’t navigate a power outage that made traffic signals inoperable.

A Tesla Robotaxi drives along South Congress Avenue in Austin, Texas, on June 22, 2025. (Reuters/Joel Angel Juarez)

Musk said Tesla’s FSD-powered Robotaxis, which are currently testing in San Francisco with safety drivers, were not fazed by the San Francisco power outage, but the way its vehicles come to a solution is different from models like Alpamayo and Waymo.

So, while Tesla Robotaxis functioned during the San Francisco power outage, engineers would not be able to discern how they solved the issue because there is no reasoning output, making it difficult to refine or fix a system that may have used a faulty pattern or bad data when making a driving decision.

That makes it seem like the big reasoning-based foundation models, Alpamayo and Waymo with its two-system setup, might be better. But again, it’s not that simple.

Wall Street analysts at banks like Morgan Stanley, Deutsche Bank, and Wedbush Securities believe Tesla FSD supervised is the gold standard, powered by a massive data source and deployed at scale across Tesla EVs.

Nvidia founder and CEO Jensen Huang speaks during Nvidia Live at CES 2026, ahead of the annual Consumer Electronics Show in Las Vegas, Nevada, on Jan. 5, 2026. (Patrick T. Fallon/AFP via Getty Images)

Driggs-Campbell acknowledged the advantages of Tesla’s model — neural networks can make quicker decisions with a lighter computational load — but noted that it’s harder to say whether it’s “best.”

She added that foundation models like those in Waymo’s system and Nvidia’s Alpamayo appear to be showing promising results, outperforming Tesla’s deep-learning approach. But there are still a number of hurdles.

“I think the translation from these really big reasoning models to online performance is a huge challenge,” Driggs-Campbell said. “They take a long time. It’s very much like a human — reasoning takes an order of seconds, whereas often, when you’re actively driving, you want something much, much quicker.”

There’s a “gap” between what currently seems to be working, like FSD, and the “next-gen” models like Alpamayo and Waymo’s, she said.

In other words, Tesla FSD represents the current industry standard for advanced driver assistance: trained on massive data and deployed at scale, but still fundamentally reactive and supervised.

Alpamayo, by contrast, embodies a next-gen direction: autonomous systems that reason about situations, offering more transparent decision logic and potentially better safety and predictability in complex or rare scenarios. But such models may need more time to work out the kinks and improve speed.

Ultimately, Tesla FSD and reasoning models like Alpamayo are two different approaches to a shared goal of fully autonomous driving, which some have called a task harder than “landing on the moon.”

“Solving the long tail of real-world edge cases is incredibly hard,” Musk wrote of the roughly 1% of driving scenarios that both kinds of models struggle to solve, though he encouraged Huang and Nvidia in their shared pursuit.

“I honestly hope they succeed,” he said, though he added that he wasn’t “losing any sleep” about the latest update.

Pras Subramanian is Lead Auto Reporter for Yahoo Finance. You can follow him on X and on Instagram.
