Path tracing vs. ray tracing explained


Every few years, there seems to be an amazing new technology that promises to make games look more realistic than ever. Over the decades we've had shaders, tessellation, shadow mapping, and ray tracing – and now there's a new kid on the block: path tracing.

So if you're looking for the skinny on this latest development in graphics technology, you've come to the right place. Let's dive into the world of rendering and follow the path of light and learning.

What is path tracing?

The short and sweet answer to this question is: path tracing is just ray tracing. The equations for modeling the behavior of light are the same, the data structures used to speed up the search for ray-triangle intersections are the same, and modern GPUs use the same hardware units to accelerate the process. Both are also very computationally intensive.

But wait. If it's really the same, why does path tracing have a different name, and what benefit does it give game programmers? Path tracing differs from ray tracing in that instead of tracking lots of rays throughout the entire scene, the algorithm only follows the most likely path of the light.


We've covered how rays work before (see: Deep Dive: Rasterization and Ray Tracing), but a brief overview of the process is needed here. The frame starts as normal: the graphics card processes all the geometry – all the triangles that make up the scene – and stores it in memory.

After a bit of additional processing to organize that information so the geometry can be searched faster, the ray tracing begins. For each pixel in the frame, a ray is cast from the camera into the scene.

Well, not in the literal sense – a vector equation is generated, with parameters set by where the camera and the pixel are located. Each ray is then checked against the scene's geometry, and this is the first computationally heavy part of ray tracing. Fortunately, the latest GPUs from AMD and Nvidia come with dedicated hardware units to speed up this process.
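As a concrete sketch of that vector equation, here's how a renderer might build the primary ray for a pixel. This is a minimal Python example assuming a pinhole camera at the origin looking down the -z axis; the function name and field-of-view default are illustrative, not from the article:

```python
import math

def primary_ray(px, py, width, height, fov_deg=90.0):
    """Build a camera ray for pixel (px, py) in a width x height image.

    The ray is the parametric line P(t) = origin + t * direction -- the
    'vector equation' mentioned above. Assumes a pinhole camera at the
    origin, looking down the -z axis.
    """
    aspect = width / height
    scale = math.tan(math.radians(fov_deg) / 2)
    # Map the pixel centre to normalized device coordinates in [-1, 1]
    x = (2 * (px + 0.5) / width - 1) * aspect * scale
    y = (1 - 2 * (py + 0.5) / height) * scale
    # Normalize the direction vector
    length = math.sqrt(x * x + y * y + 1)
    origin = (0.0, 0.0, 0.0)
    direction = (x / length, y / length, -1 / length)
    return origin, direction
```

A real renderer would also fold in the camera's position and orientation via a view matrix; this sketch keeps the camera fixed for clarity.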

If a ray and an object do intersect, another calculation works out exactly which triangle in the mesh was hit, and that triangle's color then contributes to the color of the pixel.
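The article doesn't name the exact intersection test GPUs use, but the Möller–Trumbore algorithm is a standard way to check a ray against a single triangle. A minimal Python sketch:

```python
def ray_triangle(orig, direction, v0, v1, v2, eps=1e-8):
    """Moller-Trumbore ray/triangle intersection test.

    Returns the distance t along the ray to the hit point, or None if
    the ray misses the triangle. Vectors are plain (x, y, z) tuples.
    """
    def sub(a, b):   return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
    def dot(a, b):   return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
    def cross(a, b): return (a[1]*b[2]-a[2]*b[1],
                             a[2]*b[0]-a[0]*b[2],
                             a[0]*b[1]-a[1]*b[0])

    edge1, edge2 = sub(v1, v0), sub(v2, v0)
    h = cross(direction, edge2)
    a = dot(edge1, h)
    if abs(a) < eps:          # Ray is parallel to the triangle's plane
        return None
    f = 1.0 / a
    s = sub(orig, v0)
    u = f * dot(s, h)         # First barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, edge1)
    v = f * dot(direction, q) # Second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * dot(edge2, q)     # Distance along the ray
    return t if t > eps else None
```

Dedicated ray-tracing hardware runs tests like this (plus the bounding-volume traversal that narrows down which triangles to test) in fixed-function units rather than shader code.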

But light rarely hits an object and gets completely absorbed. In reality, there's plenty of reflection and refraction, so to render the most realistic image possible, new vector equations are created: one each for the reflected and refracted rays.

In turn, these rays are traced until they too hit an object, and the sequence continues until a chain of rays eventually bounces back to a light source in the scene. Starting from the original primary ray, the total number of rays traced through the scene grows exponentially with each bounce.
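To make that exponential growth concrete, here's a toy calculation, assuming every hit spawns exactly one reflected and one refracted ray (two children per bounce – an assumption for illustration, since real scenes vary):

```python
def total_rays(pixels, bounces, splits_per_hit=2):
    """Count the rays traced when every hit spawns `splits_per_hit`
    child rays. Per pixel this is the geometric series
    1 + s + s^2 + ... + s^bounces."""
    per_pixel = sum(splits_per_hit ** b for b in range(bounces + 1))
    return pixels * per_pixel
```

Under those assumptions, a single pixel with a three-bounce limit already needs 1 + 2 + 4 + 8 = 15 rays, and a 1920×1080 frame with five bounces works out to over 130 million.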

Rinse and repeat across all the other pixels in the frame, and the end result is a realistically lit scene… although a fair amount of additional processing is still required to assemble the final image.

But even with the most powerful GPUs and CPUs, a fully ray-traced frame takes an inordinate amount of time to produce – far, far longer than a traditional frame rendered with pixel shaders.

And this is where path tracing enters the picture, if you'll pardon the pun.

When more work means less work

James Kajiya introduced the initial concept of path tracing in 1986, when he was a researcher at the California Institute of Technology. He showed that the problem of the processor grinding through an ever-increasing number of rays could be solved through statistical sampling of the scene (specifically, Monte Carlo methods).

Conventional ray tracing involves calculating the exact path of each ray's reflection or refraction, tracing it all the way to one or more light sources. With path tracing, multiple rays are generated for each pixel, but they bounce off in a random direction. This repeats every time a ray hits an object, and continues until a light source is reached or a predetermined bounce limit is hit.
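The "bounce in a random direction" step can be sketched as a uniform hemisphere sampler. Real path tracers typically use cosine-weighted or material-aware (BRDF) sampling instead; this is just the simplest version, in Python:

```python
import math
import random

def random_hemisphere_direction(normal):
    """Pick a random unit direction in the hemisphere around a surface
    normal -- the random-bounce step of path tracing.

    Uses rejection sampling of the unit sphere, then flips the result
    onto the normal's side. Uniform, not cosine-weighted.
    """
    while True:
        # Rejection-sample a point inside the unit sphere (not at 0)
        d = tuple(random.uniform(-1, 1) for _ in range(3))
        n2 = sum(c * c for c in d)
        if 0 < n2 <= 1:
            break
    inv = 1.0 / math.sqrt(n2)
    d = tuple(c * inv for c in d)  # Normalize to the sphere's surface
    # Flip into the hemisphere around the normal if needed
    if sum(a * b for a, b in zip(d, normal)) < 0:
        d = tuple(-c for c in d)
    return d
```

Because each bounce picks one random continuation instead of spawning both a reflected and a refracted child, a single path stays a single chain of rays no matter how many bounces it takes.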

Perhaps this in and of itself doesn’t seem like a huge change in the amount of computing required, so where’s the magic part?

Not all of those rays get used to create the final color of the pixel. Only a certain number of them are sampled, and the algorithm converges on a near-ideal path for the bouncing light, from the camera to the light source. The number of samples per pixel can then be adjusted to fine-tune the final image.
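Averaging those samples is the heart of the Monte Carlo estimate: each traced path yields one noisy radiance value, and the pixel's final value is their mean. A hypothetical sketch, where `sample_radiance` stands in for tracing one full path:

```python
def estimate_pixel(sample_radiance, samples_per_pixel):
    """Monte Carlo estimate of a pixel's value: the average of many
    noisy one-path samples. More samples means less noise (the
    variance of the estimate falls roughly as 1/N)."""
    total = 0.0
    for _ in range(samples_per_pixel):
        total += sample_radiance()  # One random path from camera to light
    return total / samples_per_pixel
```

This is why raising the samples-per-pixel setting in a path tracer trades render time for a cleaner, less grainy image.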

Despite the extra heap of math and coding involved, the end result is that there are far fewer rays to process, even though path tracing typically shoots dozens of rays per pixel. Casting rays and computing their interactions is what drags performance down compared to normal rendering, so using fewer rays to color a pixel is clearly a good thing.

But here's the really smart part: fewer rays usually mean less realistic lighting, yet since the bulk of a pixel's color in the frame is determined by the primary rays, dropping most or all of the secondary ones doesn't affect things as much as you might think.

Now, if the scene has a lot of surfaces that reflect and refract light, such as glass or water, then those secondary rays do become significant. To get around this, either the algorithm is tweaked to account for the mix of ray types a scene should have, or those specific surfaces in the view path are handled with full ray tracing.

A good developer will use the full range of rendering tools at their disposal: rasterization with shaders, path tracing, and full ray tracing. Juggling all of this takes a lot of work, but at the end of the day it means less work for the hardware to deal with.

Why is path tracing in the news now?

Several times over the past few years we've seen headlines about a ray tracing mod being added to an old classic, but most of those actually refer to path tracing. We saw it back in 2019 with beta mods for Crysis and Quake 2, and more recently with the unofficial Half-Life ray tracing mod and the Classic Doom mod. All of them are, in fact, path tracing.

There was also a tweet from Dihara Wijetunga, an R&D graphics engineer at AMD, announcing his project that updates the original Return to Castle Wolfenstein with a path-traced renderer.

[Screenshots: three comparison pairs – the original renderer, then the same scene with path tracing enabled]

As mentioned above, in 2019 Nvidia announced a Quake II remaster with a path-traced renderer to help promote its RTX technology. This was initially the work of one person, Christoph Schied, who created the conversion (technically known as Q2VKPT) as part of a research project. With contributions from other graphics technology experts, Quake II RTX was born – the first popular game to use path tracing for all of its lighting.

The original models and textures are still there; the only things that changed were how the surfaces are lit and how shadows are created. Still images aren't the best way to show how effective the new lighting model is, but you can grab a free copy for yourself, or watch this video…

Packed with cool phrases like multiple importance sampling and variance reduction algorithms, the project highlighted two things: first, path tracing looks really cool, and second, it's still very demanding for developers and hardware alike. If you want to appreciate the complexity of the math involved, read Chapter 47 of Ray Tracing Gems II.

But where the likes of Quake II RTX show what can be achieved by path tracing everything, the likes of Control and Cyberpunk 2077 show that stunning graphics can be achieved by mixing rendering methods – rasterization and shading still rule the roost, with ray tracing handling reflections and shadows.

So we're still some way off from seeing every game rendered using nothing but path tracing.

On the way to a better future

Although relatively new to the world of real-time rendering, path tracing is definitely here to stay. We've already seen the results in games, and path tracing has long been used to great effect in offline renderers such as Blender, as well as in filmmaking, with the likes of Autodesk Arnold and Pixar's RenderMan.

There's no sign of GPUs approaching any kind of ceiling on compute power yet, so while ray tracing – traditional or path traced – is still very demanding, ever more powerful hardware will keep appearing on the market over the years.

All of this means that future PC game developers will surely explore any rendering technology that produces stunning graphics at achievable performance, and path tracing has the potential to do exactly that.

There are the current consoles to consider as well. The Xbox Series X and PlayStation 5 both support "traditional" ray tracing, but since their GPUs will be relatively old in just a few years, developers will be looking to exploit every possible shortcut to squeeze the last drop of performance out of these machines before moving on to the next generation of consoles.

So there you have it – path tracing, the faster cousin of ray tracing. It looks almost as good and runs a lot faster. Given the constant advances in home computing, graphics technology, and performance, it won't be long before we see the computer graphics of the latest blockbuster movies in our favorite games as well.

Keep reading: Explainers at TechSpot

