this post was submitted on 13 Dec 2024
50 points (100.0% liked)

Explain Like I'm Five

top 21 comments
[–] Shirasho@lemmings.world 51 points 1 week ago (2 children)
[–] asudox@discuss.tchncs.de 11 points 1 week ago

the shortest and most understandable answer yet

[–] neidu3@sh.itjust.works 8 points 1 week ago

And sometimes it goes floing (refraction)
And on rare occasions sproing (diffraction)

[–] captain_aggravated@sh.itjust.works 32 points 1 week ago (2 children)

So in the real world, light obeys all kinds of laws of physics. Photons, which are somehow particles and waves simultaneously, are emitted from a light source and travel in straight lines until they encounter some matter, where they either bounce off or are absorbed and re-emitted. Our eyes fairly precisely detect the number and wavelength of photons coming from the direction we are looking, which allows us to glean information about what objects are out in the world.

Simulating that with a computer takes a lot of math, because you would have to simulate the paths of a LOT of photons. For a very long time, computers, especially ones consumers could afford, just couldn't do that, especially not in real time for video game graphics.

So through the '90s and 2000s, video game developers came up with shortcuts for creating reasonable approximations of lighting effects. These are a pain to figure out, but they look reasonable and run much faster than doing the actual lighting physics. By and by, graphics cards started coming with circuitry specifically to pull off these shortcuts, and the small programs designed to run on graphics cards to apply these effects are called "shaders." You may have heard that term if you've been around gaming for a while.

Ray Tracing is the technique of doing the actual optical physics problem to render the graphics instead of using those shortcuts. Like I said earlier, there is a lot more math involved here, but since you're simulating the laws of physics you can get much more realistic lighting effects this way.
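
To make that concrete, here's a deliberately tiny toy sketch of the idea in Python (the single-sphere scene, the light direction, and every name here are invented for illustration, not taken from any real renderer): shoot one ray per pixel from a camera, check whether it hits the sphere, and shade by how directly the surface faces the light.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(x / n for x in v)

WIDTH, HEIGHT = 40, 20                       # a tiny ASCII "screen"
SPHERE_CENTER, SPHERE_RADIUS = (0.0, 0.0, 3.0), 1.0
LIGHT_DIR = normalize((-1.0, 1.0, -1.0))     # direction *toward* the light

def hit_sphere(origin, direction):
    """Distance along the ray to the sphere surface, or None on a miss."""
    oc = tuple(o - c for o, c in zip(origin, SPHERE_CENTER))
    b = 2.0 * dot(direction, oc)
    c = dot(oc, oc) - SPHERE_RADIUS ** 2
    disc = b * b - 4.0 * c                   # direction is unit length, so a == 1
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

for row in range(HEIGHT):
    line = ""
    for col in range(WIDTH):
        # One ray per pixel, shot from a camera sitting at the origin.
        x, y = (col / WIDTH) * 2 - 1, 1 - (row / HEIGHT) * 2
        ray = normalize((x, y, 1.0))
        t = hit_sphere((0.0, 0.0, 0.0), ray)
        if t is None:
            line += " "                      # the ray flew off into the background
        else:
            # Lambert shading: brighter where the surface faces the light.
            p = tuple(t * r for r in ray)
            n = normalize(tuple(pc - sc for pc, sc in zip(p, SPHERE_CENTER)))
            line += ".:-=+*#%@"[min(8, int(max(0.0, dot(n, LIGHT_DIR)) * 9))]
    print(line)
```

Run it and you get a shaded ASCII sphere; a real ray tracer is this same loop with many more object types, bounces, and samples.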

Things like Pixar movies or Final Fantasy: The Spirits Within used ray tracing techniques to render their animation with realistic lighting, but they took minutes or even hours to render a single frame. It's also how the graphics in Myst and Riven were made: during production the developers ray traced the graphics, then stored the results as pictures that a home computer of the time could easily display.

More recently, starting with Nvidia's RTX 2000 series graphics cards, publicly available hardware has become capable of doing all that math in real time, allowing video games to have very realistic lighting drawn by the game engine on the fly. This promises two things:

  1. Better or more realistic lighting effects than are possible with shaders. Things like shadows falling on your character's gun, or everything in the environment that glows casting pools of light and shadow. This has been realized to a point, though there are still more computations to do, so it runs slower; turning ray tracing on usually comes with a decrease in frame rate.

  2. Easier development. I'm not sure this has actually been achieved yet, but theoretically, once your game engine has ray-traced lighting built into it, you should be able to design your scene, populate it with objects and light sources, and it should just work. The problem is that there are still so many graphics cards in use that either outright can't run real-time ray tracing or do so very poorly, so developers still have to support the older shader approach; in practice it has actually complicated, not simplified, game design.

[–] RvTV95XBeo@sh.itjust.works 5 points 1 week ago (1 children)

Reflections are a great example of this. Real, calculated reflections are a relatively new concept in video games, but lots of old games were able to replicate the effect by creating an "upside down" world, with duplicates of everything, where reflective surfaces are actually windows peering into the upside-down. It's an OK facsimile, but it requires specific conditions to be met and can't be applied broadly.

[–] kuberoot@discuss.tchncs.de 2 points 1 week ago

It's not just an OK facsimile, it's basically almost perfect - the only thing it's missing is interaction with lighting. That said, it absolutely is limited to only perfectly flat surfaces, and limited in terms of how many different planes you can have, usually just one for water or a big mirror in a bathroom.

I will mention: consider the game Portal. Every portal in that game effectively does the work needed to create a real-time planar reflection, using an extra camera that renders the world from a transformed point of view, with adjusted clipping.
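
For anyone curious about the math: both tricks (the duplicated upside-down world, and Portal's extra camera) boil down to the same reflect-across-a-plane formula. A minimal sketch in Python (the function and all names are hypothetical, not from any engine):

```python
def reflect_across_plane(point, plane_point, plane_normal):
    """Mirror a 3D point across a flat plane (plane_normal must be unit length)."""
    # Signed distance from the point to the plane, measured along the normal.
    d = sum((p - q) * n for p, q, n in zip(point, plane_point, plane_normal))
    return tuple(p - 2 * d * n for p, n in zip(point, plane_normal))

# A floor mirror at height y = 0: the "upside-down world" is every vertex
# (or, in the extra-camera approach, the camera position itself) pushed
# through this function before rendering.
print(reflect_across_plane((2.0, 3.0, 5.0), (0.0, 0.0, 0.0), (0.0, 1.0, 0.0)))
# -> (2.0, -3.0, 5.0)
```

This also shows why the trick only works for perfectly flat mirrors: one plane gives one cheap transform, while a curved or bumpy surface would need a different reflection for every point.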

[–] yonder@sh.itjust.works 2 points 1 week ago (1 children)

I would like to add that the way movies use ray tracing (usually called path tracing in that context) is very different from how games use it. While animated movies simulate every ray of light to create the entire image, games typically use ray tracing only for reflections and global illumination, while the rest of the image is still rendered using traditional techniques. (I'm no expert, though I have spent a bunch of time using Blender and playing around with Minecraft ray tracing mods.)

[–] Blackmist@feddit.uk 2 points 1 week ago

Yeah, I think the Quake II RTX version uses it to completely render the scene. That's about the level of graphics we're at for whole scene path tracing.

RT is in its PS1 era right now. We're like 15 years away from games that have modern graphics (by today's standards) and are fully path traced.

[–] lime@feddit.nu 32 points 1 week ago (1 children)

ray tracing mimics the way light bounces around on surfaces, in real time. it does this by tracing rays through the scene (in practice usually backwards, from the camera toward the light sources) and colouring the objects they hit. the rays then "bounce" off the objects and repeat the same colouring step. this needs to be done for every pixel you can see.
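
a quick back-of-the-envelope count of why "every pixel" is expensive (the numbers below are illustrative, not from any particular game):

```python
pixels = 1920 * 1080      # one ray per pixel at 1080p
samples_per_pixel = 1     # real-time ray tracing often affords only 1-2
bounces = 2               # each bounce spawns a follow-up ray
fps = 60

rays_per_frame = pixels * samples_per_pixel * (1 + bounces)
print(f"{rays_per_frame:,} rays per frame")         # 6,220,800
print(f"{rays_per_frame * fps:,} rays per second")  # 373,248,000
```

hundreds of millions of rays per second, and that's with a bare minimum of samples and bounces.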

in regards to it changing video games: idk, how much do you care about lighting?

[–] Tanoh@lemmy.world 2 points 1 week ago

Something quite important to keep in mind is that this is nothing new, there have been raytracers since at least the 70s. However, they were never anywhere near real time. A simple scene with just a few simple objects could take hours to render.

That it is now possible with much more complex meshes, more lights, and much higher resolutions, and also many times per second, shows how much faster (and more specialised) the hardware has become.

[–] wesker@lemmy.sdf.org 23 points 1 week ago

Ray Tracing I believe is the CEO of GeForce.

[–] hanke@feddit.nu 20 points 1 week ago (1 children)

If you want pretty graphics this is good.

If you want many frames this is bad.

[–] bluGill@fedia.io 5 points 1 week ago (1 children)

There has long been the claim that CPUs (these days GPUs, but the claim predates GPUs) that can ray trace your games with plenty of frames are just around the corner. So far that hasn't happened, and most people working on CPUs/GPUs are pessimistic about it. Maybe you could ray trace something simple (Tetris?) in real time, but modern games put in too many objects.

[–] Gerudo@lemm.ee 4 points 1 week ago (1 children)

You're exactly right. If graphics quality stood still for a couple of years, ray tracing speed would catch up and reach parity. We keep pushing more polygons and other things that keep putting ray tracing a bit behind.

[–] entropicdrift@lemmy.sdf.org 2 points 1 week ago (1 children)

That's kind of why it started to become feasible, right? Graphics quality has only incrementally improved over the last decade or so, vs geometrically improving in decades past

[–] Gerudo@lemm.ee 2 points 1 week ago

I mean, that's been my opinion. You can only add so many polygons, so much bump mapping, texture resolution, etc., before you hit a plateau. The rest is 100% lighting.

I think as things like DLSS and other frame generation tech get better, ray tracing will eventually become the norm.

[–] donuts@lemmy.world 14 points 1 week ago (1 children)

It's a lighting technique that creates more realistic visuals, especially reflections.

The technique lets the GPU "track" / calculate how the light should actually travel and what it bounces off of (like light in real life), compared to the pre-calculated methods used before.

So it will cost you processing power and therefore frames, but it greatly increases the visual accuracy of lighting, shadows and reflections.

A simple but effective example is Minecraft with Ray Tracing. The following video showcases the difference:

https://youtu.be/9qxfavtUs7w

Notice how the game looks completely different. That's what lighting can do for a game.

Obviously this is less noticeable when it's a game with realistic graphics and a lot of time spent on getting the (prebaked) lighting just right.

holy shit, RTX in minecraft is looking good!

the same channel you posted created a camera obscura in Minecraft, which is something that only works with ray tracing: https://www.youtube.com/watch?v=AE7LWV-BFoA

[–] Carrolade@lemmy.world 7 points 1 week ago

It's a hardware intensive process that tries to make lighting as realistic as possible. So, which areas are illuminated, which are in shadow. From an artistic perspective, this is very important to how a user visually processes any particular image.

As for whether it will change video games: no, not really. Just better graphics.

[–] e0qdk@reddthat.com 4 points 1 week ago

Games need to figure out what color to show for each pixel on the screen. Imagine shooting lines out from your screen into the game world and seeing what objects they run into. Take whatever color that object is supposed to be and put it on the screen. That's the basic idea.

To make it look better, you can repeat the process each time one of the lines hits an object. Basically, when a line hits an object, make more lines -- maybe a hundred or a thousand or whatever the programmer picks -- and then see what those lines run into as they shoot out from the point in all directions. Mix the colors of the objects they run into and now that becomes the color you put on screen.

You can repeat that process again and again with more and more bounces. As you add more and more bounces it gets slower though -- since there are so many lines to keep track of!

When you've done as many bounces as you want to do then you can shoot out lines one last time to all the lights in the game. If there is an object in the way blocking a light, the color for the object you're trying to figure out will be darker since it's in a shadow!
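
Putting the whole recipe together (shoot a line, spawn more lines at each hit, mix the colors, and fire one last line at the light for shadows), here is a self-contained toy version in Python. The two-sphere scene, the sample counts, and every name are invented for illustration; a real renderer is far more careful about how it picks its bounce directions.

```python
import math, random

# A made-up scene of spheres: (center, radius, colour with components 0..1).
SCENE = [
    ((0.0, -100.5, 3.0), 100.0, (0.8, 0.8, 0.8)),  # a huge sphere as the "floor"
    ((0.0, 0.0, 3.0), 0.5, (0.9, 0.2, 0.2)),       # a red ball resting on it
]
LIGHT_POS = (5.0, 5.0, -2.0)

def dot(a, b): return sum(x * y for x, y in zip(a, b))
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(x / n for x in v)

def nearest_hit(origin, direction):
    """The closest (distance, sphere) this line runs into, or None."""
    best = None
    for sphere in SCENE:
        center, radius, _ = sphere
        oc = sub(origin, center)
        b = 2.0 * dot(direction, oc)
        c = dot(oc, oc) - radius * radius
        disc = b * b - 4.0 * c
        if disc < 0:
            continue
        t = (-b - math.sqrt(disc)) / 2.0
        if t > 1e-4 and (best is None or t < best[0]):  # 1e-4 avoids self-hits
            best = (t, sphere)
    return best

def trace(origin, direction, bounces_left):
    hit = nearest_hit(origin, direction)
    if hit is None:
        return (0.5, 0.7, 1.0)                  # missed everything: sky colour
    t, (center, _, colour) = hit
    point = tuple(o + t * d for o, d in zip(origin, direction))
    normal = normalize(sub(point, center))
    # Shadow: one last line shot at the light; anything in the way darkens us.
    to_light = normalize(sub(LIGHT_POS, point))
    lit = 1.0 if nearest_hit(point, to_light) is None else 0.2
    shade = lit * max(0.0, dot(normal, to_light))
    if bounces_left == 0:
        return tuple(c * shade for c in colour)
    # Bounce: spawn a handful of new lines and mix in whatever they hit.
    n_samples = 8                               # movies use hundreds or thousands
    mixed = [0.0, 0.0, 0.0]
    for _ in range(n_samples):
        d = normalize(tuple(random.gauss(0.0, 1.0) for _ in range(3)))
        if dot(d, normal) < 0:                  # keep directions above the surface
            d = tuple(-x for x in d)
        bounced = trace(point, d, bounces_left - 1)
        for i in range(3):
            mixed[i] += bounced[i] / n_samples
    return tuple(c * (0.6 * shade + 0.4 * m) for c, m in zip(colour, mixed))

# Colour seen along one line shot from the camera straight ahead, 2 bounces deep.
print(trace((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), bounces_left=2))
```

Notice how the line count explodes: 2 bounces at 8 samples each already means 73 traced lines for a single pixel, which is exactly why more bounces get slow.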

Figuring out what color something is like that, by bouncing lines off objects repeatedly, is an old and simple idea... but it's hard to do quickly. So, most games until very recently did not work that way. They used other clever tricks instead that were much faster but made it hard to draw reflections and shadows. Games with those other techniques usually did not look as good -- but you could actually play them on old computers.

[–] Fizz@lemmy.nz 1 points 1 week ago

I always disable it because it makes games look weird. I refuse to believe it's realistic lighting.