# Nvidia released an AI model for autonomous driving
At the NeurIPS AI conference in San Diego, California, Nvidia announced Alpamayo-R1 — an open vision-language model with reasoning capabilities, designed for autonomous driving.
Such neural networks are capable of processing text and images, allowing vehicles to “see” their surroundings and make decisions based on the information received.
The new tool is built on Cosmos-Reason, Nvidia's reasoning model. Nvidia released the Cosmos model family in January and introduced additional solutions in August.
“Previous autonomous driving models have struggled in complex situations — at busy intersections, ahead of an unexpected lane closure, or with a car double-parked in a bike lane. Reasoning gives autonomous vehicles common sense, allowing them to drive at a human level,” the company noted.
Technologies like Alpamayo-R1 are crucial for companies striving to achieve Level 4 autonomous driving, according to Nvidia's blog.
The model considers the possible trajectories and scenarios, then uses contextual data to select the optimal route.
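Nvidia has not published how Alpamayo-R1 actually ranks routes; purely as an illustration of the general idea, trajectory selection can be sketched as scoring candidate paths against contextual costs (all names and the 1-D "world" below are invented for this example):

```python
# Toy illustration of trajectory selection (not Nvidia's algorithm):
# score candidate trajectories against contextual costs and pick the best.

def score_trajectory(trajectory, context):
    """Lower is better: penalize proximity to obstacles and missing the goal."""
    obstacle_cost = sum(
        1.0 / (1e-6 + min(abs(p - o) for o in context["obstacles"]))
        for p in trajectory
    )
    goal_cost = abs(trajectory[-1] - context["goal"])
    return obstacle_cost + 10.0 * goal_cost

def select_trajectory(candidates, context):
    return min(candidates, key=lambda t: score_trajectory(t, context))

# 1-D toy world: positions along a lane, one obstacle at 5.0, goal at 10.0.
context = {"obstacles": [5.0], "goal": 10.0}
candidates = [
    [0.0, 2.5, 5.0, 7.5, 10.0],   # drives straight through the obstacle
    [0.0, 3.0, 6.5, 8.5, 10.0],   # swings around it
]
best = select_trajectory(candidates, context)
print(best)  # → [0.0, 3.0, 6.5, 8.5, 10.0]
```

In a real planner the cost terms would come from perceived scene context rather than hand-set constants, but the select-the-minimum-cost-candidate structure is the same.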
The company hopes that the new tool will give autonomous vehicles “common sense” that will enable them to make more effective complex decisions while driving.
The model has been uploaded to GitHub and Hugging Face. Along with it, the company has added step-by-step guides, resources for inference, and post-training workflows. The entire toolkit is called Cosmos Cookbook.
The materials are designed to help developers better utilize and train neural networks for individual tasks.
## Solutions based on Cosmos
Nvidia reported on the “virtually limitless capabilities” of applications based on Cosmos. Among the latest examples, the company mentioned:
- LidarGen — the world's first model for generating lidar data in autonomous vehicle simulation;
- Cosmos Policy — a framework for turning large pre-trained video models into reliable robot policies, i.e. sets of rules that define their behavior;
- ProtoMotions3 — a solution for training robots on realistic scenarios.
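In robotics, a "policy" is simply a mapping from observations to actions. As a minimal sketch of that concept (the function and observation fields below are invented and unrelated to Cosmos Policy's actual interface):

```python
# Minimal illustration of a robot "policy": a function mapping an
# observation to an action, here for a toy pick-and-place task.

def gripper_policy(observation: dict) -> str:
    """Return the next action name given the current observation."""
    if not observation["object_visible"]:
        return "search"
    if observation["distance_to_object"] > 0.05:
        return "approach"
    if not observation["gripper_closed"]:
        return "grasp"
    return "lift"

print(gripper_policy({"object_visible": True,
                      "distance_to_object": 0.02,
                      "gripper_closed": False}))  # → grasp
```

Frameworks like Cosmos Policy aim to learn such mappings from large pre-trained video models instead of hand-writing the rules.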
Nvidia promotes physical artificial intelligence as a new direction for its AI processors. The company's CEO, Jensen Huang, has repeatedly emphasized that this field will become the next wave of AI development.
The chipmaker is betting on the robotics sector. In August, it released a new Jetson AGX Thor module for $3499. The company refers to the processor as the “brain of the robot.”
In October, Huang stated that artificial intelligence has reached a “success spiral.” According to him, significant improvements in neural networks lead to increased investments in the technology, which further “boosts” the field.
As a reminder, Nvidia's third-quarter revenue came to $57 billion, up 62% from the same period last year.
*Source: ForkLog*