On video game graphics - An overview of in-game graphics techniques, effects and post-processing
- https://www.shroudoftheavatar.com/forum/index.php?threads/on-video-game-graphics.54435/
There was another discussion on the forums about various visual components usually found in video games and I figured I'd create some kind of index for those who are curious and are interested in understanding a bit more. That way, we spare the other threads from derailing the conversation.
Keep in mind, I'll try to add more information as I find the time to talk about different effects or if someone has a particular question about something. Typically these apply just the same to computer animation or general CGI, with the exception that video games tend to use shortcuts instead of doing proper simulations.
Post-Effect. An effect that is rendered on top of the computed image. Post effects are usually crude alternatives that mimic things that would take too long to simulate properly but are, in contrast, very light on the CPU and GPU.
"SweetFX" has been a very popular tool for gamers to customize post-effects of their own on top of whatever game they're playing. Whenever it's featured within the game or done by "SweetFX", a Post-Effect takes what the game outputs and draws on top of it.
Ambient Occlusion (video game: Post-Effect). The effect of occluding ambient light. Emphasizes crevices by darkening areas where light is less likely to hit. It's generally used to make objects look more tangible in their given space. It's particularly noticeable in down-cast lighting environments where there are few to no shadows to drown out the effect.
In the old days, that information used to be "baked" (or drawn onto) the texture file directly, which meant that the texture was meant to be used for one particular object specifically. The reason we did it that way is that computers generally weren't fast enough; a real Ambient Occlusion pass takes a long time to process.
Thankfully, we now have a few techniques that mimic this pass with little to no cost. So in the context of video games, they're Post-Effects. It looks a lot better than AO baked into the texture, primarily because baked AO doesn't take surrounding objects (like walls or floors) into account. Screen Space Ambient Occlusion (or SSAO for short) is the most common one, with HBAO and HDAO being some of the available alternatives.
They're fast but they're not as accurate as the "real" Ambient Occlusion pass. Notice that they all share "AO" in their names.
A cool side effect is that, now, we can use a particular texture for anything. A wood texture could be used for a table, floor, chairs, a door, etc... but before, we had to give each object its own specific texture if we wanted the same Ambient Occlusion effect. Saves tons of memory!
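If you're curious what SSAO boils down to, here's a rough Python sketch of the core idea: darken a pixel when nearby depth samples sit in front of it. The depth buffer, sample counts and numbers are all made up for illustration; real implementations run in a shader with view-space positions, not a 4x4 array.

```python
# Toy SSAO pass: darken a pixel when nearby depth samples sit in front of it.
# "depth" is a 2D list of per-pixel depths (small = near, large = far). Illustrative only.

import random

def ssao_factor(depth, x, y, radius=2, samples=8, bias=0.02):
    """Return an occlusion factor in [0, 1]; 1 = fully lit, lower = more occluded."""
    h, w = len(depth), len(depth[0])
    center = depth[y][x]
    occluded = 0
    for _ in range(samples):
        sx = min(max(x + random.randint(-radius, radius), 0), w - 1)
        sy = min(max(y + random.randint(-radius, radius), 0), h - 1)
        # A neighbour noticeably closer to the camera than us counts as an occluder.
        if depth[sy][sx] < center - bias:
            occluded += 1
    return 1.0 - occluded / samples

# Tiny 4x4 depth buffer: a "pit" in the middle that should come out darker.
depth = [[0.4, 0.4, 0.4, 0.4],
         [0.4, 1.0, 1.0, 0.4],
         [0.4, 1.0, 1.0, 0.4],
         [0.4, 0.4, 0.4, 0.4]]

print(ssao_factor(depth, 1, 1))  # inside the pit: neighbours are closer, so darker
print(ssao_factor(depth, 0, 0))  # flat corner: mostly unoccluded
```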
Frustum Culling. The effect of hiding objects that would normally be unseen by the camera. Computers are stupid. If you tell one to draw a scene, it will take EVERY object into its computation. That's bad considering that cameras (our virtual eyes in this environment) don't have peripheral vision, so the computer shouldn't even try to render the surfaces of objects the camera isn't looking at. Depending on the complexity of the scene, drawing multiple objects on screen can be taxing on the computer. So Frustum Culling is used to tell the camera to ignore objects.
This is a common thing that is practically default in any game engine. The "draw distance" setting in a video game will increase or reduce the Frustum's range.
Because this is about not drawing the surface of the object, shadows are typically still included even if the object falls outside the frustum cage. In the image above, only the green and blue objects are computed, while the red objects only "exist" for shadows. Typically, the distance at which you want to draw shadows is shorter than the max frustum range... so that not ALL red objects in the distance are taken into account.
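For those who like code, here's a minimal sketch of the frustum test itself, assuming one bounding sphere per object and inward-facing planes (one common way to do it; these names aren't any engine's actual API):

```python
# Toy frustum test: cull an object when its bounding sphere is entirely outside
# any one of the frustum's planes. Planes are (normal, distance) with normals
# pointing inward. Illustrative only.

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def sphere_in_frustum(center, radius, planes):
    for normal, d in planes:
        # Signed distance from the sphere's centre to the plane.
        if dot(normal, center) + d < -radius:
            return False          # completely behind this plane -> cull
    return True                   # inside or intersecting every plane -> draw

# A crude "frustum": near/far planes along +Z plus left/right planes at 45 degrees.
planes = [
    ((0, 0, 1), -1),              # near plane at z = 1
    ((0, 0, -1), 100),            # far plane at z = 100
    ((0.707, 0, 0.707), 0),       # left
    ((-0.707, 0, 0.707), 0),      # right
]

print(sphere_in_frustum((0, 0, 10), 1, planes))    # in front of the camera -> True
print(sphere_in_frustum((0, 0, -5), 1, planes))    # behind the camera -> False
```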
Occlusion Culling. The effect of hiding objects that would normally be unseen by the camera. This takes Frustum Culling and pushes it one step further by hiding objects that are obscured from view by other objects (ie: a table behind a wall, or a house behind another, larger house).
For this to work, the game objects typically need to be pre-accounted for, as the game needs to know where each object is in relation to the others while the camera moves around. Games that use dynamically placed objects need alternative solutions to cull objects. Only the objects that are pre-accounted for will be hidden from view.
Alternatives that strive to replicate Occlusion Culling perfectly (without pre-computation) are taxing on the computer and are not recommended for scenes with large quantities of objects.
Typically, custom-crafted Occlusion Culling alternatives (without pre-computation) will not include shadows once the object is hidden.
In Shroud of the Avatar, a custom-crafted Occlusion Culling script is used for many things but, as far as visual settings go, it's used to hide props/decorations inside people's houses. When the option is toggled "off", all objects within the frustum are computed for rendering regardless of whether or not you see them.
The reason why you might want to disable Occlusion Culling is that, sometimes, depending on the script (when it's custom made), hiding/showing large quantities of objects simultaneously (say, suddenly displaying every decoration inside a house) might create a hitch in your frame-rate. So, for those who want a more stable FPS, leaving the toggle "off" might be the solution (at the cost of a lower average frame-rate, of course).
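Here's a toy sketch of the "pre-accounted for" idea, using a precomputed visibility table keyed by which cell the camera is in. The cell names and object lists are invented for illustration; this is not necessarily how SotA's custom script works.

```python
# Rough sketch of precomputed visibility: the world is split into cells, and for
# each cell we bake (offline) the set of object IDs that can possibly be seen
# from anywhere inside it. At runtime we just look that set up. Illustrative only.

baked_visibility = {
    "street":       {"house_exterior", "lamp_post", "tree"},
    "house_inside": {"table", "chair", "bed", "house_exterior"},
}

def visible_objects(camera_cell, all_objects):
    visible = baked_visibility.get(camera_cell, set(all_objects))
    # Anything not in the baked set is occluded from this cell and can be skipped.
    return [obj for obj in all_objects if obj in visible]

all_objects = ["house_exterior", "lamp_post", "tree", "table", "chair", "bed"]
print(visible_objects("street", all_objects))        # interior props are culled
print(visible_objects("house_inside", all_objects))  # street clutter is culled
```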
HDR (Post-Effect). The effect of rendering images with [H]igh [D]ynamic [R]ange values. Our human eyes adapt easily to lighting; the iris closes when it's bright and opens when it's dark. That way, on average, we're able to see as much information as possible.
Typically in photography, the process requires you to take multiple pictures at various exposures, because the dark image will contain information about bright areas (like the details of the clouds in the sky) and the bright image will contain information about dark areas (like the details inside a dog house, or your backyard when the house casts a shadow over it). By mixing these together, you end up with an HDR image that has every detail the human eye could possibly see at any given time.
For those who are good at math, science or computing: the darkest pixel on your image is at 0.0f, the brightest pixel is at 1.0f, and every pixel's RGB values are converted accordingly.
So 0.0f, what would be a black pixel, is now whatever dark brown or green or whatever (and vice-versa for 1.0f), and now you have every other floating-point value to use as a gradient.
On a normal image, black is 0.0f, but if your darkest colour is a brown that sits at 0.6f, you're wasting 0.4f worth of values that would otherwise be used for other colours in an HDR image.
Your rainbow on the right side has more colours with HDR.
In cinema, we can take fish-eye HDR photos and use that as lighting information to do special effects with. You have a scene in your movie where a car blows up? No problem, take that HDR photo and use it to do lighting on the car (I'll talk about Global Illumination later) and the HDR can also be mapped to use as reflections for metallic surfaces. There will always be detail wherever it's needed.
Video games use a similar technique more commonly referred to as a "Cube Map" (not going to cover this here because it's not an effect). It's called a Cube Map because skyboxes were essentially cubes composed of 6 images that would make up an entire sky (top is sky, sides are sky with horizon, bottom is ground, etc.). Cube Maps aren't used for lighting but, a couple of years ago, they were practically the only way to add reflections to surfaces in a convincing way.
Now-a-days, with the awesome rendering power of "Physically Based Rendering" that many video game engines support (including Unity 5, for those who wanted context for SotA's graphical changes half a year ago), we can ALMOST use HDR like the movie industry does.
We can still use the HDR image as a CubeMap, but now we also get the benefit of rendering images directly with HDR values (0.0f-1.0f), and it happens every frame. With these new values, you can apply various types of HDR filters (the most common one being tone-mapping) that will generally give a richer image overall.
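As a concrete example of tone-mapping, here's a tiny sketch using the Reinhard operator, one common curve. The luminance values are invented; engines use fancier curves, but the "squash bright values back into what the monitor can show" idea is the same.

```python
# Minimal tone-mapping sketch (Reinhard operator): squash HDR intensities, which
# can be far above 1.0, back into the 0..1 range a monitor can display, while
# keeping detail in both dark and bright areas. Purely illustrative values.

def reinhard(hdr_value, exposure=1.0):
    v = hdr_value * exposure
    return v / (1.0 + v)          # 0 stays 0, very bright values approach 1

for luminance in (0.05, 0.5, 1.0, 4.0, 16.0):   # 16.0 = "staring at the sun"
    print(f"{luminance:5.2f} -> {reinhard(luminance):.3f}")
```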
So when you have HDR toggled "on", your game will mimic your eyes adjusting to the brightness of a scene and will give you the details your eyes are expecting without squinting.
If you have HDR toggled "off", stop squinting, it does nothing. The only reason why you'd want to toggle this "off" is if you're distracted by the transition between light and dark areas.
BLOOM (Post-Effect).
The effect of adding glow to bright light. In layman's terms, it takes the bright pixels from your image within a given range and brightens the ones surrounding them. The effect is multiplicative so, the more bright pixels you have close to each other, the brighter the pixels become.
It tends to counteract the effect of HDR (which is to see detail regardless of brightness) and, in a sense, that's what it's supposed to do... but there is such a thing as excessive:
See, while HDR allows us to see bright and dark areas, bright areas (particularly light sources) should still hide some detail in order for the image to be convincing. In an HDR photograph, it's already part of the image; for video games, we have to put that in.
So HDR allows us to see the details in a given scene and Bloom works off of that. Bloom can work regardless of whether HDR is on or off; it just works with the brightest pixels.
(the image directly above is a comparison between 2 sets of bloom effects)
For it to work properly, though, the Bloom pass needs to go after the HDR pass, and preferably after every other pass except for Blur, Lens Flare and Fog.
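To make the "threshold, blur, add back" idea concrete, here's a toy Python sketch of a bloom pass over a single row of pixels. Real bloom works on the full 2D frame with proper Gaussian blurs; the numbers here are purely illustrative.

```python
# Toy bloom pass over a 1D row of pixels: keep only the pixels above a brightness
# threshold, blur them, then add the blurred result back onto the original image.

def bloom(row, threshold=0.8, strength=0.5):
    # 1. Bright-pass: everything below the threshold contributes nothing.
    bright = [max(p - threshold, 0.0) for p in row]
    # 2. Cheap blur: average each bright pixel with its neighbours.
    blurred = []
    for i in range(len(bright)):
        neighbours = bright[max(i - 1, 0):i + 2]
        blurred.append(sum(neighbours) / len(neighbours))
    # 3. Add the glow back on top of the source image.
    return [p + strength * g for p, g in zip(row, blurred)]

row = [0.1, 0.1, 1.5, 0.1, 0.1]           # one very bright pixel (an HDR value > 1)
print([round(p, 3) for p in bloom(row)])  # neighbours of the bright pixel pick up glow
```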
A (subtle) limitation of Bloom is that, because it's a post-effect, it can't draw a glow effect if the object isn't seen by the camera (regardless of whether or not the object is culled)... and that's EVEN IF the glow would have otherwise come into view. There are no bright pixels to initiate the glow. It's not so much an issue for objects that are obscured by others, but it can be noticeable for objects that would otherwise be within peripheral vision (which cameras lack).
GLOBAL ILLUMINATION. The effect of simulating light bouncing off surfaces; probably one of the most important aspects of CGI for generating convincing images.
The theory is that the human eye sees an object's colour because the object absorbs all the other colours. This is subtractive (surface) colour mixing at work.
We see black because the surface absorbs all the colours and our eyes get nothing... and we see white because the surface absorbs nothing and our eyes get every colour. In the example of the red sphere in the image above, red is being bounced off the object... meaning red can also hit other surfaces, not just our eyes.
In practice, if you put a brightly coloured object on a white surface, the white surface will be slightly tinted depending on how strong your lighting is. It's why we have guys holding up what is typically a white (or chrome) card when shooting movies outside: to bounce light back onto the actors:
In CGI, we follow the same principles.
The cool thing about using the computer is that we can practically turn any object into a source of light. So an HDR image can be used on a skybox and we can make the skybox act as a light source. So instead of having to do an elaborate lighting setup like this:
... we can use the skybox and get something like this:
And it'll look natural because we're actually using the environment to light our objects. Colours, intensity, reflections, etc. There's little to no guessing involved.
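If you want to see roughly how "the skybox as a light source" works, here's a toy sketch that samples a made-up sky function over a surface point's hemisphere. The sky function is a stand-in for reading an HDR environment map, and the sampling is deliberately crude; it's not any engine's actual lighting code.

```python
# Rough sketch of using the sky as a light source: for a surface point, sample a
# handful of random directions, keep the ones above the surface, look up the sky
# colour in each direction, weight by the angle to the surface normal, and average.

import math, random

def sky_colour(direction):
    # Blue-ish above the horizon (brighter near straight up), brown-ish below.
    up = max(direction[2], 0.0)
    if direction[2] > 0:
        return (0.3 + 0.2 * up, 0.4 + 0.3 * up, 0.6 + 0.4 * up)
    return (0.3, 0.25, 0.2)

def diffuse_from_sky(normal, samples=64):
    total = [0.0, 0.0, 0.0]
    for _ in range(samples):
        d = [random.uniform(-1, 1) for _ in range(3)]
        length = math.sqrt(sum(c * c for c in d)) or 1.0
        d = [c / length for c in d]
        cos_angle = sum(n * c for n, c in zip(normal, d))
        if cos_angle <= 0:
            continue                      # below the surface: contributes nothing
        colour = sky_colour(d)
        for i in range(3):
            total[i] += colour[i] * cos_angle
    return [round(c / samples, 3) for c in total]

print(diffuse_from_sky((0, 0, 1)))   # upward-facing surface: lit by the bright sky
print(diffuse_from_sky((0, 0, -1)))  # downward-facing surface: lit by the dim ground
```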
Everything has a cost, though. Global Illumination is one of the most computer intensive steps for creating convincing images. Depending on the complexity of the scene and the resolution of your image, it can take a few HOURS to compute.
With the constraint of having to render 60+ images every second, how do video games do it? Well, we're actually using a really old trick: light maps.
When games first went 3D, we didn't have the computing power to dynamically render shadows on objects. When the level artist was done creating all the art for a level, they would bake all of the lighting information onto a separate texture file that would then be superimposed over the regular texture in the game. That texture file containing the lighting information is called a "light map".
That's how you get games looking like this:
Light sources no longer needed to be computed on the environments in real-time (because they were already computed beforehand) and were only used to light dynamic objects like characters and doors.
Now we can do all this stuff (shadows) in real-time, but game engines these days (including Unity) can also bake light maps on their own. If your game project doesn't need dynamic lighting, it can take advantage of automatically pre-computed light maps so that your game runs really fast.
All the modern shooters (and other AAA games with static environments, like Dark Souls and racing games) use light maps in conjunction with real-time shadows. Light maps are, then, primarily used to generate Global Illumination. Pre-compute the Global Illumination, render dynamic lights in real-time, and you get a pretty picture.
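As a rough sketch of that combination, assuming a lightmap value has already been baked for the surface point and the dynamic lights are simple Lambert lights (all names and numbers invented for illustration):

```python
# Sketch of combining baked and real-time lighting: the lightmap stores the
# slow-to-compute bounced (indirect) light per surface point, and the direct
# contribution from dynamic lights is added back every frame. Illustrative only.

def shade(albedo, baked_indirect, dynamic_lights, normal):
    direct = 0.0
    for light_dir, intensity in dynamic_lights:
        # Simple Lambert term for each real-time light.
        cos_angle = max(sum(n * d for n, d in zip(normal, light_dir)), 0.0)
        direct += intensity * cos_angle
    return [round(a * (baked_indirect + direct), 3) for a in albedo]

wood = (0.6, 0.4, 0.2)                      # surface colour
baked = 0.35                                # value read from the baked lightmap
lights = [((0.0, 0.0, 1.0), 0.8)]           # one dynamic light shining straight down

print(shade(wood, baked, lights, (0.0, 0.0, 1.0)))   # baked bounce + dynamic light
print(shade(wood, baked, [], (0.0, 0.0, 1.0)))       # lightmap only (light turned off)
```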
What does that mean for Shroud of the Avatar? Well, unfortunately, it means that SotA and games like it cannot and will not look as good as the top-looking video games. That's because it cannot take advantage of Global Illumination, due to the dynamic nature of player housing and decorations (all of those things would need to already exist during the baking process).
There are a few tricks game developers can do to "sort of" have Global Illumination in a dynamic environment, but the amount of memory it would take for a game that size would be astronomical. At least the problem is primarily isolated to cities and villages, where housing is available.
V-SYNC. The act of synchronizing video outputs. Computers take multiple steps in order to output an image to you:
- CPU: computes all of the game elements. Where the players are, what the server is doing, how much health you have, etc.
- GPU (video card): takes what the CPU came up with and puts it all together visually.
- Monitor: draws what the GPU tells it to draw.
Ever used an old CRT monitor? Ever stared at it thinking the display was blinking? If so, you were probably running at a much higher screen resolution than you should have. See, typically, a monitor will run at 75Hz or 72Hz (televisions run at 60Hz... at least the old ones used to). That's how fast it can refresh the screen. If you've ever increased the screen's resolution, you might have noticed that the screen started blinking and that (in the settings) your monitor is now set to a lower value like 60Hz or so.
That's because you've just increased the number of pixels it needs to draw. That lady who flips the letters on Wheel of Fortune: how much longer do you think it'd take her if she had to flip an entire wall's worth of letters?
Ever seen people use computers on TV and noticed a line that moves from top to bottom? The people in the show don't see it, but you see it because the video camera that was used to film the footage was actually faster at recording the image than the computer monitors were at displaying the image on screen.
What does that have to do with video games? Well, monitors haven't really sped up over the years. My computer monitor at work is running at 75Hz (you can see yours in the advanced "display" settings in Windows). Video cards, however, have sped up quite a bit. This will greatly vary depending on your FPS while playing games but, what happens if your video card goes faster than your monitor? You get screen tearing: parts of two different frames end up on screen at the same time.
Why does that happen? Because by the time your monitor has gotten a third of the image done, the video card has already given it another frame to draw. The monitor doesn't start over from scratch, it just keeps on drawing... only, this time, it continues with the new information. It's not as obvious on newer LED screens, but the image looks distorted. The faster the video card, the more distorted the image; the higher the monitor's refresh rate (Hertz), the less distorted the image will be.
So what the V-Sync toggle does is force your video card to wait for your monitor to finish drawing the image all the way to the bottom (which is what the V stands for: vertical) before presenting another frame.
So what's the point of having a powerful video card, then? Well, it's so that, when the monitor finishes drawing, it's quick to hand over another frame to draw. Bam! New image! Next!
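Here's a simulated sketch of the difference, with the "rendering" and the vertical refresh both faked with sleeps; no real graphics API involved, just the timing logic.

```python
# Sketch of what V-Sync changes in the render loop: without it, frames are
# presented as fast as the GPU can make them; with it, the loop waits for the
# next vertical refresh before presenting. Timings are simulated, not real APIs.

import time

REFRESH_HZ = 60
REFRESH_INTERVAL = 1.0 / REFRESH_HZ

def run(frames, render_time, vsync):
    next_refresh = time.perf_counter()
    start = time.perf_counter()
    for _ in range(frames):
        time.sleep(render_time)                 # pretend the GPU renders a frame
        if vsync:
            next_refresh += REFRESH_INTERVAL
            wait = next_refresh - time.perf_counter()
            if wait > 0:
                time.sleep(wait)                # idle until the monitor is ready
    elapsed = time.perf_counter() - start
    print(f"vsync={vsync}: about {frames / elapsed:.0f} FPS")

run(frames=30, render_time=0.004, vsync=False)  # uncapped: FPS well above 60
run(frames=30, render_time=0.004, vsync=True)   # capped near the 60Hz refresh rate
```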
Another side effect of toggling this feature "on" is that, if your video card has to wait, then it's not generating as much heat because it's not working as hard. Super great for laptops. The only downside is that you won't see any big numbers next to the FPS counter.
Another alternative is to simply buy a monitor designed for gaming. Those sweet babies run at 144Hz (although their resolutions tend to cap at 1080p... because gaming).
STEREOSCOPY. Displaying images in 3 dimensions. In my line of work, I deal with "3D" every day. I walk from my home to work, I sit on my chair and I also interact with human beings. All of these things are in 3D. Not only because they have 3 dimensions visually, but because I interact with them in 3D space as well.
Even on a flat 2D computer screen, when I'm not programming and designing games in Unity (which has me dealing with 3D vectors and coordinates), I'm building 3D objects, animating them, etc. All of my daily routine requires me to think and interact in 3-dimensional space.
So what's that got to do with "Stereoscopy"? Think of a stereo sound system: that thing has two speakers. Stereoscopy is really about providing two images; one for each eye.
Ever seen a 3DS in action? Watched a movie with 3D glasses on? The Oculus Rift? The Vive? That's stereoscopy! The idea here is to trick your mind into thinking that you're seeing an object (whether it's flat or not) in 3D space.
Okay cool! "3D" is about space, and "Stereoscopy" is about seeing in "3D space". Is it hard to do? In cinema? Absolutely and expensive too!
For video games we have it a little easier, though, because we don't have the constraints of physical cameras. Basically, all we have to do is add another camera in our 3D space and make sure the two are properly distanced from one another to mimic the eyes.
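In code, the two-camera setup really is about that simple. Here's a sketch assuming a 64mm eye separation and a known camera "right" axis; the numbers are illustrative and this isn't any VR SDK's actual API.

```python
# Sketch of the stereo camera setup: take the one "mono" camera position and
# derive a left and a right camera by offsetting along the camera's right axis
# by half the eye separation. Each eye then renders its own image.

EYE_SEPARATION = 0.064   # ~64 mm, a commonly quoted average interpupillary distance

def stereo_cameras(position, right_axis, separation=EYE_SEPARATION):
    half = separation / 2.0
    left  = tuple(p - half * r for p, r in zip(position, right_axis))
    right = tuple(p + half * r for p, r in zip(position, right_axis))
    return left, right

camera_pos = (10.0, 1.7, 5.0)       # eye height of a standing avatar
camera_right = (1.0, 0.0, 0.0)      # camera's local "right" direction

left_eye, right_eye = stereo_cameras(camera_pos, camera_right)
print(left_eye)    # (9.968, 1.7, 5.0) -> render the scene once from here...
print(right_eye)   # (10.032, 1.7, 5.0) -> ...and once from here, every frame
```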
The problems come in three parts:
- Your hardware is rendering two images per frame, doubling the strain. For a non-nauseating experience, you absolutely need a steady frame-rate. No dips.
- Because you're using two different cameras, your zDepth and other post-effects won't necessarily match. 3D engines (like Unity) have to be properly tuned to support stereoscopy which means there are steps involved to make sure the output looks intentional for both cameras. You know how I said #1 doubles the strain? Well, it's actually more than that.
- Occlusion culling has to be checked for two cameras instead of one. You know how I said #1 doubles the... yeah, you get the idea.
The 3DS has an 800x240 screen resolution, which would look abysmal on a computer screen or a VR headset. Fallout 4, currently being demoed on the PlayStation VR, runs at a resolution of 960xRGBx1080 for each eye (which is a decent resolution). The PlayStation VR goggles support displays at 120Hz (to keep those FPSs as high as possible) and 90Hz (which is a standard for VR sets).
ZDEPTH
The zDepth is information that the computer stores temporarily while it draws objects on screen during rendering. It's basically a black-and-white image that tells you where objects are, for reference.
It's an extra step that takes very little memory and spares the computer from trying to figure out where objects are over and over again during a single frame. It's a neat little trick that every 3D engine takes advantage of, and it has multiple uses.
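Here's a toy sketch of the depth buffer doing its job during rendering; a "pixel" here is just a list entry and the depth values are invented.

```python
# Sketch of the depth buffer at work: when two objects want to draw to the same
# pixel, the one closer to the camera wins, and its depth is what gets kept for
# later passes (fog, depth of field, god rays, ...). Illustrative only.

def draw_pixel(colour_buffer, depth_buffer, x, y, colour, depth):
    if depth < depth_buffer[y][x]:       # closer than what is already there?
        depth_buffer[y][x] = depth
        colour_buffer[y][x] = colour

w, h = 2, 1
colour = [["sky"] * w for _ in range(h)]
zdepth = [[1.0] * w for _ in range(h)]   # 1.0 = "nothing drawn yet / far plane"

draw_pixel(colour, zdepth, 0, 0, "wall", 0.8)   # a wall, fairly far away
draw_pixel(colour, zdepth, 0, 0, "table", 0.3)  # a table in front of the wall: wins
draw_pixel(colour, zdepth, 0, 0, "house", 0.9)  # a house behind the wall: rejected

print(colour[0][0], zdepth[0][0])   # -> table 0.3
```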
GOD RAYS (Post-Effect / billboards). The proper term is "crepuscular rays", where crepusculum is the Latin word for twilight. God rays are an effect that mimics the light shafts people see when there's a contrast between atmospheric light and unlit areas.
In cinema, this effect is called "Volumetric Lighting". It's more expensive to render this way but it can give your lights some added (and often needed) realism.
In video games, there are two unrelated ways to achieve this:
The most common one is a post-effect filter applied to the camera. If the sun is visible to the camera, the filter will brighten the pixels that lie along the path from the sun to the camera, so long as that path isn't blocked by an object.
Because it's a post-effect (and because you want to draw things faster), it doesn't compute any ray-casts (the act of drawing a line from a point in a given direction until it hits a surface). The game knows at all times where the sun and camera are, and uses the zDepth information to determine where the objects are.
This is the effect we usually talk about when you go to a game's "video settings" and see a toggle for god rays or light shafts. It has little to no performance cost but can be annoying if it's not properly calibrated.
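Here's a toy version of that filter, collapsed down to one row of pixels so the idea stays visible. The sample counts, sun position and depth values are all invented; real versions use many more samples over the full 2D frame.

```python
# Toy screen-space god-ray filter along a single row of pixels: from each pixel,
# step toward the sun's position on screen and accumulate light, but only across
# steps where the depth buffer says the sky (not an object) is visible.

def god_rays(row_depth, sun_x, steps=8, intensity=0.1):
    glow = []
    for x in range(len(row_depth)):
        light = 0.0
        for s in range(1, steps + 1):
            # Sample position, stepping from this pixel toward the sun.
            sample_x = int(x + (sun_x - x) * s / steps)
            if row_depth[sample_x] >= 1.0:      # depth 1.0 = sky, nothing blocking
                light += intensity
        glow.append(round(light, 2))
    return glow

# Depth row: sky everywhere except an object covering pixels 3-4. Sun at pixel 0.
row_depth = [1.0, 1.0, 1.0, 0.4, 0.4, 1.0, 1.0, 1.0]
print(god_rays(row_depth, sun_x=0))   # pixels "behind" the object receive less glow
```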
While the first method creates godrays directly towards the camera, the second method is used to draw any other volumetric lighting.
Now-a-days, we can achieve this with shaders (miniature programs/scripts that tell the video card what to do when light hits an object's surface; like windows) that will give more realistic volumetric lighting effects. The downside is that most shader solutions tend to be expensive when you have a large amount of surface area (either one big object, or multiple tiny objects) using those shaders. The faster shaders are usually tied to specific tech on certain video cards.
The cheaper "old school" way is to use planes... or more specifically cylinders or cubes and use texture alphas/transparency to make it look like light shafts.
In a lot of cases, you can't tell the difference.
You can put a lot of these in your scene so long as you don't overdo it. A crazy amount of overlapping transparencies can bog down your computer.
DEPTH OF FIELD (Post-Effect). The effect of blurring out objects that are outside the focal range. It uses the zDepth to figure out how far objects are from a particular focal point in front of the camera.
While it can be distracting when the effect is exaggerated, depth of field is an important aspect of CGI as it gives a sense of scale to every object in the scene.
Depth of field can turn objects of any size...
and make them look like something else entirely.
We also use it a lot in cinema but, for many, many years, a lot of practical effects (non-CGI) relied on making sure that everything looked as evenly in focus as possible so that tricks like this (image below) could be pulled off more convincingly:
One thing to note, however, is that, in cinema, the blur is computed more realistically, because we can get away with spending more time to render. In video games, the blur is merely a filter that gets applied on top of the image, similar to the way you'd apply a Gaussian blur in Photoshop.
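Here's a sketch of that filter idea: the blur radius per pixel is just a function of its zDepth versus the focal distance. The constants are invented for illustration; real implementations shape the falloff more carefully.

```python
# Sketch of video-game style depth of field: the blur applied to each pixel is a
# simple function of how far its zDepth is from the focal distance; nothing
# optical is actually simulated. Illustrative values only.

def blur_radius(pixel_depth, focal_depth, focal_range=2.0, blur_per_unit=0.5, max_radius=8.0):
    distance_from_focus = abs(pixel_depth - focal_depth)
    # Everything within focal_range of the focus stays sharp; blur grows beyond it.
    amount = max(distance_from_focus - focal_range, 0.0) * blur_per_unit
    return min(amount, max_radius)       # radius in pixels for the blur filter

focal_depth = 10.0   # the character we're focused on, 10 units away
for depth in (1.0, 9.0, 10.0, 14.0, 60.0):
    print(f"object at depth {depth:5.1f} -> blur radius {blur_radius(depth, focal_depth):.1f}px")
```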
BUMP/NORMAL MAPPING.
Polygons are what establish the shape of an object. The more polygons you have, the more detailed your objects can be.
While polygons are flat, we actually have tricks to smooth out the surface.
Problem is, we still need polygons to depict detail and, in the example of the elephant, we'd need more polygons to more accurately render its eyes and wrinkles.
But polygons are taxing on the CPU, especially if the polygons are used to portray a character that twists and bends. Wait, aren't graphics done by the GPU? Well, yes, but not entirely. GPUs are like painters; they're good at drawing, so if the computer says "draw me an explosion" or "draw me a house" it can do that without any problems. However, you still need the engineer (the CPU) to make some things work properly... in this case, moving bones and making sure that the geometry follows appropriately. The technical limitations require us to use the least amount of polygons possible, but the art asks us to add more detail.
So what do we do? We use "bump maps", specialized textures that tell the object's surface how it should react to light. The basic principle is that each polygon has a Normal (a 3D vector... you know, X,Y,Z) that represents the surface's alignment. In most cases, the Normal points up, and that tells the computer how to draw things like reflections, where dark areas are, etc. If you change its alignment, you're actually modifying how the surface responds to light.
What a bump map does is allow us to change the normal per pixel (of the texture, not the screen) along the surface of the polygon. It can make a flat wall a little bit more convincing.
You'd see things like this in games that came out around the year 2000 or so. But a bump map is just a grayscale image; it has height values but it doesn't really have vector data to replace the Normal of the polygon. So it can be useful for adding detail on flat surfaces (like a wall) but it won't be enough for round surfaces (like the elephant).
What's really neat about computer graphics is that, sometimes, things just come together quite nicely.
Computers draw images using additive colour mixing, and we achieve that by shooting red, green and blue light. It's no accident that Photoshop images have 3 main channels. Colours use RGB, 3D vectors use XYZ...
[1,0,0] is red, [0,1,0] is green, [0,0,1] is blue. Each axis of a normal vector is a value between -1 and 1, while each colour channel is a value between 0 and 1. So if we remap the values so that -1.0f becomes 0.0f and 1.0f stays 1.0f (0.0f becomes 0.5f, etc.), we can basically store each axis in a grayscale image. We only need 3 grayscale images and a texture file has at least 3 channels. So that's perfect!
(each channel in the image above is represented in a monochrome colour but should be in grayscale)
Bam! We now have 3D Vectors per pixel to replace the Normal of the polygon.
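For the curious, the packing and unpacking really is just a remap. Here's a tiny sketch, assuming the usual convention of storing each component of a [-1, 1] normal in a [0, 1] colour channel; it's also why flat normal maps look pale blue.

```python
# Sketch of packing a surface normal into an RGB pixel and unpacking it again:
# each component of the normal lives in [-1, 1], each colour channel in [0, 1],
# so the conversion is just a remap. The "straight up" normal (0, 0, 1) encodes
# to (0.5, 0.5, 1.0) -- that familiar light blue.

import math

def encode_normal(n):
    return tuple((c + 1.0) / 2.0 for c in n)      # [-1,1] -> [0,1] per channel

def decode_normal(rgb):
    n = [c * 2.0 - 1.0 for c in rgb]              # [0,1] -> [-1,1] per channel
    length = math.sqrt(sum(c * c for c in n))
    return tuple(c / length for c in n)           # renormalise (real textures round to 8 bits)

flat_up = (0.0, 0.0, 1.0)
tilted = (0.5, 0.0, 0.866)                        # leaning 30 degrees to the right

print(encode_normal(flat_up))                     # (0.5, 0.5, 1.0) -> that pale blue colour
print(decode_normal(encode_normal(tilted)))       # round-trips back to the tilted normal
```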
So now (since 2006 or so) we can do things like this:
[Polygons] > [Phong "smooth" shading] > [Normal "Bump" Maps] > [Final]
ANTI-ALIASING (Post-Effect).
RESAMPLING. The act of recomputing an image to a different size, angle, warp, etc. For computer games, when you see the option in the video settings, it's typically associated with re-sizing the image.
There are a wide number of algorithms to achieve this effect so I'm just going to cover the basics:
Drawing a ton of pixels can be taxing on the GPU. The thing is, pixels are hardware-bound to the monitor you're using... they're physical things built into your monitor. Hell, if I was stuck running all my games at 2560x1440, I'd have a frame-rate problem (and possibly a heating issue) with some of the more demanding games out there... especially back when I was using a laptop in clamshell mode. You could tell the computer to use a lower resolution, or you could tell your computer to draw certain elements at a lower resolution.
That allows you to keep your screen resolution high and have things such as cursors and text stay crisp while, at the same time, rendering the game's viewport (the thing that's actually intensive) at a much lower resolution.
- "Downsampling" or "Downscaling" tells the GPU to use fewer pixels; it creates a blurrier image but is faster to compute.
- "Upscaling" tells the GPU to use the same amount or more pixels than your monitor. If you're upscaling higher than your monitor's resolution, it'll use the extra pixels to try to smooth out the image... kind of like an alternative to Anti-Aliasing.
ANISOTROPIC FILTERING.
LOD.
TESSELLATION (shader). The act of subdividing geometry to allow for greater detail.
Bump and Normal maps are great and all, as they add surface detail to what would otherwise be smooth, but what if you wanted to add extruded detail to surfaces?
If you read my explanation about Normal/Bump mapping, we've already established that polygons can* get heavy on the CPU (*it really depends on what you do with them). There's also the issue that scenes with lots of polygons are really taxing on memory... but what if we were to fake the polygons? I mean, the polygons would be as real as they could be in a virtual space, but they wouldn't be part of the object's data.
What tessellation does is take an object and subdivide its geometry based on the information stored in a texture called a "displacement map" (seriously, all textures are "maps"). The more variation there is in the displacement map, the more subdivision it does... and THEN, it uses that displacement map to deform the new geometry.
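Here's a toy version of that "subdivide where the displacement map varies, then displace" order, flattened down to a single edge so it fits in a few lines. Real tessellation runs per-triangle on the GPU; everything here is invented for illustration.

```python
# Toy tessellation on a 1D strip of geometry: where the displacement map varies
# a lot, split the edge into more segments, then push each new vertex up by the
# displacement value. Illustrative only.

def tessellate_edge(x0, x1, displacement, max_splits=4, threshold=0.05):
    d0, d1 = displacement(x0), displacement(x1)
    variation = abs(d1 - d0)
    # More variation in the displacement map -> more subdivision.
    splits = min(int(variation / threshold), max_splits)
    points = []
    for i in range(splits + 2):
        t = i / (splits + 1)
        x = x0 + (x1 - x0) * t
        points.append((round(x, 2), round(displacement(x), 2)))   # displace the vertex
    return points

displacement = lambda x: 0.3 if 0.4 < x < 0.6 else 0.0   # a bump in the middle

print(tessellate_edge(0.0, 0.5, displacement))   # reaches the bump: gets subdivided
print(tessellate_edge(0.6, 1.0, displacement))   # flat region: stays two vertices
```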
(Ghost of a Tale is a Unity game; it looks amazing)
(Notice the stairs' original shape compared to the final render)
And it's all done by the video card... AFTER the CPU has told the GPU what to do. The cool thing about it is that you can essentially control how far the effect reaches into your scene; practically automating your [L]evel [O]f [D]etail (aw ****, I should probably cover that too). Unlike traditional LODs, though, the quality (and performance hit) can only go up, not down... but you could use tessellation to render really close objects with higher fidelity and then have things LOD as usual in the distance.
So what's the catch? This is too good to be true, right? Well, honestly, there's very little downside to this new trick. The problem is that it's a video card thing and (currently) requires DirectX 11 or higher. Meaning, if your video card can't do it, or if you're on a Mac or Linux machine, you're SoL. That's probably why Shroud of the Avatar doesn't take advantage of it.