April 1, 2023

NVIDIA's metaverse dream is more than just Omniverse | Geek Park

When it comes to the metaverse, NVIDIA is impossible to ignore.

Whether it is the ray tracing technology needed to render the virtual world more realistically, or the artificial intelligence and massive computing power the metaverse demands, NVIDIA provides a series of technologies and platform support. At this year's NVIDIA GTC 2022 conference, beyond the RTX 40 series graphics cards, the most eye-catching announcements were the updates to a series of metaverse-related products and technologies.

After officially releasing NVIDIA Omniverse (hereinafter "Omniverse"), the "engineer's metaverse", last year, NVIDIA has now launched the Omniverse Cloud service. Founder and CEO Jensen Huang once said: "Through Omniverse in the cloud, we can connect teams around the world to jointly design, build, and operate virtual worlds and digital twins."

For NVIDIA, Omniverse remains the platform that truly unifies the computer graphics, artificial intelligence, technical computing, and physical simulation it excels at. All of NVIDIA's imagination and planning for the metaverse can be seen in Omniverse.

At the Rebuild conference in May this year, we invited He Zhan, Omniverse Lead at NVIDIA China, to discuss their thinking on Omniverse, the "engineer's metaverse". This time, with higher-compute GPUs, the autonomous driving chip Thor, and the release of Omniverse Cloud, what new thinking does NVIDIA have about the metaverse? Has its goal of the "engineer's metaverse" shifted?

On September 29, at Geek Park's Rebuild 2022, Founder Park host Wang Shi and NVIDIA China Senior Technical Marketing Manager Shi Chengqiu talked about NVIDIA's new thinking on the metaverse.


Shi Chengqiu as a guest on Geek Park's "Rebuild" program | Source: Screenshot of the live broadcast

Besides the metaverse: what's new at the GTC conference

Founder Park: The Thor chip launched this year pushes computing power directly to 2,000 TOPS and outright replaces the Atlan chip that was planned for mass production in 2024, rather than arriving as a follow-up upgrade. What was the consideration behind this?

Shi Chengqiu: There are two key points for autonomous vehicles.

The first point is that the sensors on the vehicle produce data from many sources, such as lidar, radar, and cameras. The data sampled is not of a single type but highly diverse: high-precision maps and environment data; interactions inside and outside the car, such as pedestrians suddenly crossing or road signs along the route; even the conversations, mouth movements, and facial expressions of people in the car, which may be captured for human-machine interaction. Adding up all these data sources, the amount of data this automotive-grade computer must process every second is enormous.

Second, the data must be redundant. Vehicle safety is paramount: a single camera is enough to capture what is ahead, but adding radar and lidar provides different levels of redundancy. This ensures that when one device fails or is blocked, for example a camera obscured by another vehicle or by foliage, the other devices can still provide safe, redundant data in real time.
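
As a minimal illustration of this redundancy idea (a hypothetical sketch, not NVIDIA's actual DRIVE software; all sensor functions are made up), a perception loop can fall back to radar or lidar whenever a camera frame is unavailable:

```python
import random
from typing import Optional

# Hypothetical sensor readers; a real stack would query the actual devices.
def read_camera() -> Optional[dict]:
    # Simulate occasional occlusion or dropout.
    return None if random.random() < 0.2 else {"source": "camera", "obstacle_m": 42.0}

def read_radar() -> dict:
    return {"source": "radar", "obstacle_m": 41.5}

def read_lidar() -> dict:
    return {"source": "lidar", "obstacle_m": 41.8}

def fused_distance() -> dict:
    """Prefer the camera, but keep radar/lidar in the loop so a blocked
    sensor never leaves the planner without a distance estimate."""
    readings = [r for r in (read_camera(), read_radar(), read_lidar()) if r]
    # Fuse by taking the most conservative (closest) obstacle estimate.
    return min(readings, key=lambda r: r["obstacle_m"])

if __name__ == "__main__":
    print(fused_distance())
```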

Redundancy and diversity create a massive real-time processing load on the automotive-grade computer for autonomous driving. Our products need to reach a very high level of computing power to meet the challenges of such a complex computing environment, so this year we launched the more advanced Thor, based on the latest Ada Lovelace architecture. In addition to third-generation ray tracing, it also adds the combination of artificial intelligence and neural graphics.

Founder Park: Thor is actually based on the Ada Lovelace architecture. What are the core innovations of this new architecture, and what problems does it solve?

Shi Chengqiu: The Ada Lovelace architecture can be approached from many angles; it can be called a pinnacle of graphics and image processing. For example, the RTX 40 series products gamers are excited about, the RTX 6000 Ada that our data center customers will soon use, and the L40 series are all designed on the Ada Lovelace architecture.

The Ada Lovelace architecture adds the latest third-generation ray tracing cores, which enable real-time ray tracing when rendering professional graphics or playing games, with even movie-quality content potentially reaching 24 frames per second. With its tensor operations for artificial intelligence, it can use FP8 low-precision arithmetic to do very fast graphics and image prediction; that is the concept of neural graphics. A large number of new functions have also been added, and AI can even generate an entire additional frame on its own.
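
As a rough illustration of the low-precision idea behind FP8 (a hypothetical NumPy simulation of mantissa rounding, not NVIDIA's actual FP8 formats or Tensor Core code): fewer mantissa bits mean smaller data and faster math at the cost of precision, which can be acceptable for perceptual prediction tasks such as frame generation.

```python
import numpy as np

def quantize(x: np.ndarray, mantissa_bits: int) -> np.ndarray:
    """Crude simulation of reduced precision: keep roughly
    `mantissa_bits` bits of mantissa for each value."""
    exp = np.floor(np.log2(np.abs(x) + 1e-30))
    scale = 2.0 ** (exp - mantissa_bits)
    return np.round(x / scale) * scale

x = np.random.randn(1_000_000).astype(np.float32)
for bits in (23, 10, 3):   # FP32 mantissa, FP16-like, FP8(E4M3)-like
    err = np.abs(quantize(x, bits) - x).mean()
    print(f"{bits:2d} mantissa bits -> mean abs error {err:.2e}")
```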

Founder Park: Should the computing power of smart cars be centralized or distributed? How is NVIDIA positioned here?

Shi Chengqiu: In today's automotive market, the cockpit instruments, on-board computers, autonomous driving system, entertainment system, and HUD display may each be handled by different processors, or even by multiple separate computers and operating systems. It has worked that way for a long time, for two reasons. The first is that computing power was not sufficient; one central computer could not handle so many complicated tasks. The other is that each company has its own system, and no manufacturer had tried to launch a platform that integrates all the systems on the vehicle.

First of all, NVIDIA's computing power needs to be strong enough. Second, our platform must be compatible with the underlying computing modes of these operating systems. If enough computing power can be provided to integrate the systems on the car, that is a better direction for production, verification, and maintenance, and more beneficial to users and car owners.

Founder Park: This time Grace Hopper was also announced for recommender systems, yet search and recommendation hardly seem like a growth area. Why does Grace Hopper focus on the "recommender system"?

Shi Chengqiu: As a deep learning accelerator, the GPU runs recommender-system algorithms much faster than a CPU. Grace Hopper has more than 500 GB of memory; the Grace CPU and Hopper GPU are connected through the high-bandwidth NVLink interconnect, so the large memory can be accessed by the Hopper GPU at any time. That means the capability of this system goes far beyond the computing and acceleration power of a single GPU, because the Arm-based Grace CPU can handle many single-threaded, latency-sensitive tasks that are not well suited to the massively parallel GPU, while the caching and memory-access mechanisms between the two are coordinated over NVLink.
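
A minimal sketch of why large, coherently addressable CPU-attached memory matters for recommenders (a hypothetical toy in NumPy, not NVIDIA's Merlin or Grace software; all sizes and names are made up): the embedding tables dwarf GPU memory, so the hot path is gathering a few rows per request from a huge table and handing them to the accelerator.

```python
import numpy as np

# A recommender embedding table: 50 million items x 128 floats ~ 25.6 GB,
# already larger than many GPUs' memory; production tables are far bigger.
N_ITEMS, DIM = 50_000_000, 128
print(f"full table would be ~ {N_ITEMS * DIM * 4 / 1e9:.1f} GB")

# Toy stand-in kept tiny so the script actually runs.
table = np.random.rand(10_000, DIM).astype(np.float32)

def score(user_vec: np.ndarray, item_ids: np.ndarray) -> np.ndarray:
    """Gather a handful of embedding rows and score them. On a system like
    Grace Hopper the gather can read CPU-attached memory directly over
    NVLink instead of staging copies through a narrower bus."""
    gathered = table[item_ids]          # sparse gather from the big table
    return gathered @ user_vec          # dense math, ideal for the GPU

user = np.random.rand(DIM).astype(np.float32)
print(score(user, np.array([3, 42, 977])))
```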

The UGC (user-generated content) on short-video platforms is stored on the service providers' servers, and that content needs neural networks to label it algorithmically. In terms of computing mode, software frameworks, scheduling convenience, and data scale, Grace Hopper is the most suitable product for these data centers. We believe the data center of the future must keep iterating: deep learning algorithms have developed very rapidly over the past ten years, and there may be all kinds of new algorithms to come, so the hardware must be ready ahead of the software. When a software idea is proposed, there should already be matching computing power available to validate it.

In addition, today's search and recommendation systems no longer deal only with text; they also need to handle complex content such as voice and video, and natural language processing itself is complicated. All of this requires deep neural networks, AI, and large amounts of computing power.

Founder Park: NVIDIA's current direction seems to be moving closer to the AI application level, for example the medical imaging AI framework MONAI and Tokkio. Will you provide AI applications directly in the future?

Shi Chengqiu: These are actually demos, which show the effects NVIDIA's existing software frameworks and hardware can achieve and the services they can provide. Users can then do in-depth secondary development according to their own needs. For example, assisted diagnosis of medical imaging can indicate that there may be a problem at a certain position in a CT scan, prompting the doctor to pay extra attention to it; but that is not NVIDIA's final application. Medical-grade equipment is strictly regulated, and NVIDIA must co-develop with customers to make a medical device that complies with local laws and meets the health authority's requirements. This is enough to show that NVIDIA will not provide applications directly.

The first thing NVIDIA needs to ensure is the computing power and functionality of the hardware; on top of that it builds software development kits and runtime frameworks in the middle, stacking layer upon layer of software, so that users finally get an out-of-the-box development environment. NVIDIA builds an out-of-the-box development environment, not an out-of-the-box application. Artificial intelligence is not a consumer good; it has to be built on the complex ecosystem constructed by the whole industry, so NVIDIA will not launch AI application products directly. What we do is provide a full-software-stack service layer on the platform.

Omniverse is committed to building an equal metaverse

Founder Park: In terms of products, why do you want to make Omniverse Nucleus and Omniverse Cloud? What is the difference between them and general databases and clouds?

Shi Chengqiu: "Database" is not quite the right description. What we build is called MDL (Material Definition Language), a material description language that can describe an object's roughness, gravity, degree of light reflection, and so on; it can be thought of as a large, all-encompassing library of every object in the metaverse. We have been working on MDL for a long time, and users can add materials freely; there are some materials unique to China, and many partners help us create them. Putting MDL into Omniverse lets the builders and participants of the metaverse freely deploy and use these materials, which is important.

The metaverse needs unified, continuously available on-demand computing power to realize the idea that "everyone is equal in the metaverse". With NVIDIA Omniverse Cloud we hope that metaverse builders can go to the cloud and participate in designing the Nucleus-based metaverse ecosystem without computing power being a hindrance: cloud GPU computing ensures that users can take part in designing, building, verifying, training, and deploying the metaverse. Omniverse Cloud stores MDL materials through Nucleus; data assets retain their modeling and the complete workflow, and then connect to an ecosystem of hundreds of third-party partners through Omniverse Connectors to create digital assets for the entire metaverse. This is one of our design intentions.

Founder Park: We noticed that Omniverse is trying to unify AI, data processing, and more into one product. Is this also Omniverse's future product direction?

Shi Chengqiu: It's like this: Omniverse has to serve many different needs. Someone may need to do complex AI training, for robots or for autonomous driving. First, the computing power required is relatively high; second, the environment in Omniverse needs to be a 1:1 digital twin, for example faithfully reproducing streets and cities, and letting the vehicle experience spring, summer, autumn, and winter, and weather from sunshine to clouds, rain, and snow, so that the vehicle can close the hardware-in-the-loop during training and believe it is training in the real world. In addition, the environment has to give the same feedback as the real world when the car collides with something; hitting a wall, a telephone pole, an animal, or a person each produces different feedback.

All of this requires simulation, but also artificial intelligence, ray tracing, graphics computing, deep neural networks, and even language models, because human-computer interaction is involved too. These elements naturally integrate into one system; in fact, when all the elements come together, we think of it as the metaverse.

Founder Park: This time, some hardware systems were also released, such as NVIDIA OVX. What is the thinking behind it?

Shi Chengqiu: OV is the abbreviation of Omniverse; X is a suffix NVIDIA often uses, representing two things, Extreme and Acceleration. OVX is designed to be the accelerator for the metaverse.

Our OVX is powerful: it contains eight graphics cards plus very advanced networking, CPUs, and storage. The newly released OVX uses the Ada Lovelace-architecture L40 GPU. Combined into OVX, these advanced components provide computing power for the metaverse, and an OVX SuperPOD composed of multiple OVX systems provides the underlying hardware for an Omniverse computing cluster.

OVX will be available soon, but it can be considered our reference design; our partners will provide NVIDIA-certified OVX systems and demos for users. Clusters of stacked OVX systems can supply the computing power the metaverse needs.


OVX hardware system | Source: NVIDIA official website

Founder Park: What application extensions does Omniverse currently have?

Shi Chengqiu: Omniverse is rich in extensions. For example, designers can collaborate online: the technical demonstration NVIDIA released at this GTC conference was completed by engineers in different countries dividing the work and collaborating online through Omniverse. In that process each person is responsible for their own part and uses different applications; Omniverse is the application that ties together the collaboration, construction, creation, and viewing of a 3D world whose work has to be divided.

For example, when building the metaverse it is impossible to draw everything on Earth one by one, so you can use AI to generate it. Omniverse Replicator can generate 3D models from camera input, and these models can be imported into the metaverse in real time. This is called synthetic data: it is physically accurate and conforms to the physical laws of nature.
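
As a rough illustration of the synthetic-data idea (a generic, hypothetical sketch, not the actual Omniverse Replicator API): a generator randomizes scene parameters and emits each sample together with automatically correct labels, because the ground truth is read straight from the scene description rather than hand-annotated.

```python
import random
from dataclasses import dataclass

@dataclass
class Sample:
    lighting: float          # randomized sun intensity
    weather: str             # randomized environment
    objects: list            # placed assets with known poses
    labels: list             # ground truth comes "for free"

def generate_sample() -> Sample:
    """Randomize the scene, then record ground truth directly from the
    scene description instead of labelling images by hand."""
    objects = [
        {"asset": random.choice(["car", "pedestrian", "sign"]),
         "pos": (random.uniform(-50, 50), random.uniform(-5, 5))}
        for _ in range(random.randint(1, 10))
    ]
    labels = [{"class": o["asset"], "pos": o["pos"]} for o in objects]
    return Sample(
        lighting=random.uniform(0.1, 1.0),
        weather=random.choice(["clear", "rain", "snow", "fog"]),
        objects=objects,
        labels=labels,
    )

dataset = [generate_sample() for _ in range(1000)]
print(dataset[0])
```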

Omniverse enables all creators and users of the metaverse to better manage and run this simulated world, letting an entire team jointly design, program, optimize, deploy, and train a series of applications and services built on deep neural networks inside Omniverse. For example, self-driving vehicles, robotic arms, and service robots can all be trained directly in the metaverse.

Founder Park: Not long ago, you launched Omniverse ACE. What is NVIDIA’s understanding of virtual humans in the future?

Shi Chengqiu: NVIDIA's cloud-native development tool is called ACE (Avatar Cloud Engine), the cloud engine for avatars. Our bar for virtual humans is very high: they must be realistic, conform to the laws of nature, be physically accurate, and use ray tracing; hair, skin, expressions, and mouth shapes must all match the person. That is the level a virtual human should reach.

On this basis it has to be combined with ray tracing, artificial intelligence, physical simulation, and so on. Omniverse includes a product called Audio2Face, which can automatically project a spoken passage onto a CG character's face; it understands the passage from the surrounding text, automatically recognizes emotions such as joy, anger, and sorrow, and projects rich facial expressions accordingly.

Omniverse also has a component, Machinima, which uses machine learning to create cinematic scenes. From motion captured by a single ordinary consumer camera, human bones, joints, body language, and every movement can be mapped 1:1 onto the virtual human in real time.

NVIDIA wants to be the basic service provider of the metaverse

Founder Park: How does NVIDIA define the metaverse?

Shi Chengqiu: From Web 1.0 to Web 2.0 to today's Web3, first the mobile Internet emerged, and then everyone became always online, anytime and anywhere. The metaverse should also be always online.

NVIDIA defines the metaverse as using USD (Universal Scene Description, an extensible universal language for describing virtual worlds) to connect all digital assets. Everything on Earth may eventually enter the metaverse: existing third-party ISV software describes these digital assets, USD serves as the bridge, and digital assets in various formats are connected through Omniverse Connectors to become a real-time 3D Internet.
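
As a small illustration of how USD describes a scene (a minimal sketch using the open-source `pxr` Python bindings of USD; the file name and prim names are made up for the example):

```python
# Requires the open-source USD Python bindings (the `pxr` package).
from pxr import Usd, UsdGeom, Gf

# A stage is the container for one composed USD scene description.
stage = Usd.Stage.CreateInMemory("factory_twin.usda")

# Prims form a hierarchy, much like files in a directory tree.
world = UsdGeom.Xform.Define(stage, "/World")
robot = UsdGeom.Xform.Define(stage, "/World/RobotArm")

# A simple cube standing in for geometry imported from a CAD tool via a Connector.
base = UsdGeom.Cube.Define(stage, "/World/RobotArm/Base")
base.GetSizeAttr().Set(0.5)

# Position the robot in the shared scene; other tools can layer edits on top.
robot.AddTranslateOp().Set(Gf.Vec3d(2.0, 0.0, 0.0))

# The textual .usda form is what gets exchanged between applications.
print(stage.GetRootLayer().ExportToString())
```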

People in the past never imagined the Internet would become such an important part of life, and what the metaverse will ultimately become is just as unpredictable. As a provider of full-software-stack services, NVIDIA can focus on doing well what it does best: hardware computing power, software capabilities, SDKs, and a series of frameworks, acting as a basic service provider that helps users build the metaverse together.

Founder Park: Why does the virtual world NVIDIA builds pursue such similarity with the real world? Jensen Huang even said that "the laws of particle physics, the laws of gravity, the laws of electromagnetism... pressure and sound" should all apply in the metaverse.

Shi Chengqiu: In its initial stage the metaverse will inevitably be a process of users feeling their way and exploring, but if the metaverse has no intersection with the real world, it will end up as nothing more than a sci-fi world. Early science fiction can be seen as a preview of the future world, and today's metaverse may likewise be an expression of what real society will look like at some future point; the development of science and technology adds fuel to that. For example, a retail store in the United States uses augmented reality to project its store plan from the metaverse onto the items displayed in the store, and can immediately get feedback on how to arrange displays more rationally and increase customers' desire to buy. That is how the metaverse interacts with reality.

In the metaverse you can verify, train, imagine, and rehearse all kinds of conjectures, deploy wild ideas, and eventually connect them to the physical world at some point in time. NVIDIA's GPU accelerators exist to accelerate the development of artificial intelligence; we hope that by building a series of hardware and software stacks and environments we can help all kinds of computing environments achieve acceleration, because we believe the metaverse will one day land in the real world. The virtual world must ultimately be coupled with the real world, and the value of the virtual world should flow back to the real world to complete the loop.

Founder Park: AIGC (AI-generated content) has been hot recently. What is NVIDIA pursuing in this space?

Shi Chengqiu: From content provided by early service providers, to content provided by users, and now to content generated by AI itself, this is an inevitable three-step evolution. The richness of the future metaverse's content will still rely on AI-generated content.

Today a few strokes in GauGAN can help a user generate a masterly painting; that is AIGC content based on generative adversarial networks. At this year's graphics conference SIGGRAPH, an NVIDIA paper on AI-generated content won a Best Paper Award; its core idea is to inverse-render static images into a dynamic 3D model.

One of the most important features of the RTX 40 series graphics cards released this time is DLSS 3, the third version of Deep Learning Super Sampling. It can generate complete frames: one 4K frame is equivalent to four 1080p frames, yet only about 1/8 of the computation is conventional rendering. For every two 4K output frames, only one frame is actually rendered, at 1080p; the remaining 7/8 is computed by artificial intelligence on the Tensor Cores of the Ada Lovelace architecture.
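
The 1/8 figure follows from simple pixel arithmetic (a quick check, assuming the AI-generated frame costs no conventional rendering):

```python
# Pixel counts per frame.
pix_4k = 3840 * 2160        # 8,294,400
pix_1080p = 1920 * 1080     # 2,073,600
print(pix_4k / pix_1080p)   # 4.0 -> one 4K frame = four 1080p frames

# Per pair of output 4K frames: one is rendered at 1080p and upscaled,
# the other is generated entirely by the Tensor Cores.
rendered = 1 * pix_1080p
native = 2 * pix_4k
print(rendered / native)    # 0.125 -> 1/8 of the native rendering work
```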

Based on DLSS 3, various technical demonstrations achieve multiplicative frame-rate improvements; that is AIGC generated by the artificial intelligence running on the Ada architecture's Tensor Cores. AIGC is a very cutting-edge technology, and it is also the computing approach NVIDIA has long insisted on for graphics, to the point that applying AIGC this way has become what is now called neural graphics.

Founder Park: A game world is really a world with limited rules, but the metaverse should be generated in real time. How does NVIDIA meet this challenge?

Shi Chengqiu: Take games as an example. Game NPCs in the 1990s were relatively simple, with only fixed routines that became boring once you knew them. Now, when designing games, NPCs can be driven by artificial intelligence to learn the patterns and trajectories of human play; the NPCs' logic and behavior keep evolving, bringing greater challenge and more fun to the game.

Using deep neural networks, NPCs can not only learn human patterns but also learn autonomously. To learn fast and well, the demands on deep neural networks and artificial intelligence are very high, requiring powerful computing and a supporting back-end platform. NVIDIA provides Omniverse Cloud and proposes the concept of a GDN (Graphics Delivery Network): when a complex metaverse scene needs immersion or interaction, the GDN automatically identifies the computing resources closest to the user and accelerates with the most suitable GPUs, greatly improving the user experience.
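
A toy sketch of the "nearest compute" idea behind a graphics delivery network (hypothetical node list and latencies, not NVIDIA's GDN implementation):

```python
from dataclasses import dataclass

@dataclass
class EdgeNode:
    region: str
    rtt_ms: float        # measured round-trip time to the user
    free_gpus: int

def pick_node(nodes: list) -> EdgeNode:
    """Stream from the lowest-latency node that still has a free GPU,
    so interaction latency stays low even as load shifts around."""
    candidates = [n for n in nodes if n.free_gpus > 0]
    return min(candidates, key=lambda n: n.rtt_ms)

nodes = [
    EdgeNode("tokyo", 18.0, 0),
    EdgeNode("seoul", 34.0, 4),
    EdgeNode("singapore", 71.0, 12),
]
print(pick_node(nodes))   # -> seoul: the nearest node with capacity
```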

Founder Park: NVIDIA does hardware, system software, and application frameworks. Why do all of them? Is NVIDIA going to be a vertically integrated company?

Shi Chengqiu: We don't really have that much ambition. First of all, NVIDIA is a semiconductor company: it designs the chips and then hands them to partners for manufacturing. Secondly, our products have excellent performance; not pushing them to their limits would be a waste, and we have world-class team members, which is why the idea of building a software stack was proposed.

From every generation of data center GPU to the Jetson edge computer, everything is tied together by CUDA, so users' code and products do not need to be ported; it is a unified architecture. Code written for device A can also run on device B, and runs even better after optimization. On top of the CUDA architecture sit a variety of SDKs, and to make things easier for users and developers, NVIDIA provides open-source containers that can be used right after downloading.
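
To illustrate the portability point (a minimal sketch using the third-party CuPy library as a stand-in, not NVIDIA's own SDKs): the same Python runs unchanged on any CUDA-capable device, from a data center GPU to a Jetson module, only faster or slower.

```python
# Requires a CUDA-capable GPU and the third-party CuPy library.
import cupy as cp

def normalize(x):
    """Standardize a vector on whatever CUDA GPU is present; the code does
    not care which generation or form factor is doing the work."""
    return (x - x.mean()) / x.std()

x = cp.random.rand(1_000_000).astype(cp.float32)
print(float(normalize(x).std()))   # ~1.0 on any CUDA device
```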

NVIDIA positions itself as a platform provider with a full hardware and software stack: from the underlying drivers and architecture, to the development languages across the product line, to development components, middleware, frameworks, and finally reference designs. With a single registered account you can use this comprehensive full-stack platform solution. That is NVIDIA's definition of itself, and the ultimate goal is to do the hardware well and let users squeeze out its performance more conveniently and efficiently.

Founder Park: NVIDIA's applications span fields including healthcare, autonomous driving, physics and chemistry research, and cutting-edge science. Where do you draw your circle of competence, and which layers do you not touch?

Shi Chengqiu: We basically don't define our own boundaries, because the GPU is a processor that can be used for general-purpose computing. Of course it is more efficient in some fields, especially large-scale parallel computation, and we have CPU projects as well. When we see demand in the market, we study how to deliver products that meet it; when a vertical industry segment has demand, industry-specific SDKs appear, such as for the currently popular sub-tracks of self-driving cars, healthcare, robotics, and so on.

As long as the market and users have demand, we will plan for it.
