Merge pull request #67 from RefuX/master
A few more tweaks
lwjglgamedev authored Oct 21, 2023
2 parents f46d584 + c250148 commit 7169d7d
Showing 6 changed files with 18 additions and 16 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -38,4 +38,4 @@ To all the readers that have contributed with corrections, improvements and idea
- [Kai Burjack](https://github.com/httpdigest)
- [Mjrlun](https://github.com/Mjrlun)
- [Rongcui Dong](https://github.com/rongcuid)
- - [Jsmrd Roome](https://github.com/RefuX)
+ - [James Roome](https://github.com/RefuX)
6 changes: 3 additions & 3 deletions bookcontents/chapter-08/chapter-08.md
@@ -54,7 +54,7 @@ The `loadModel` method receives an identifier associated to the model, the path

- `aiProcess_GenSmoothNormals`: This will try to generate smooth normals for all the vertices in the mesh.
- `aiProcess_JoinIdenticalVertices`: This will try to identify and combine duplicated vertices.
- - `aiProcess_Triangulate`: This will transform each face of the mesh into a triangle (which is why we expect when loading that data into the GPU). If a face is made up of more than three indices, it will split that face into as many triangles as needed.
+ - `aiProcess_Triangulate`: This will transform each face of the mesh into a triangle (which is what we need when loading the data into the GPU). If a face is made up of more than three indices, it will split that face into as many triangles as needed.
- `aiProcess_FixInfacingNormals`: This tries to identify normals that point inwards and reverse their direction.
- `aiProcess_CalcTangentSpace`: This calculates the tangents and bitangents for each mesh. We will not use this data immediately, but we will need it when we apply light effects later on.
- `aiProcess_PreTransformVertices`: This removes the node graph and pre-transforms all vertices with the local transformation matrices of their nodes. Keep in mind that this flag cannot be used with animations.
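
These flags are OR-ed together into a single mask. As an illustration, a minimal sketch (the helper class is hypothetical, not the book's exact code) of how they can be passed to Assimp's `aiImportFile` through LWJGL:

```java
import org.lwjgl.assimp.AIScene;

import static org.lwjgl.assimp.Assimp.*;

// Hypothetical helper, shown only to illustrate how the flags above are combined
public class ModelLoaderSketch {
    public static AIScene load(String modelPath) {
        int flags = aiProcess_GenSmoothNormals | aiProcess_JoinIdenticalVertices
                | aiProcess_Triangulate | aiProcess_FixInfacingNormals
                | aiProcess_CalcTangentSpace | aiProcess_PreTransformVertices;
        AIScene aiScene = aiImportFile(modelPath, flags);
        if (aiScene == null) {
            throw new RuntimeException("Error loading model [" + modelPath + "]: " + aiGetErrorString());
        }
        return aiScene;
    }
}
```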
@@ -342,7 +342,7 @@ public class Texture {
}
```

- The `Texture` class defines the `recordedTransition` attribute to control if the texture has already been recorder to transition to the final layout or not (more on this later). We use the stb function `stbi_load` to load an image file. This function receives as a parameter the path to the file, three `IntBuffer`s to return the width , the height and the color components of the image. It also receives the desired number of color components (`4` in our case, which represents RGBA). This function returns a `ByteBuffer` with the contents of the image if it has success and fills up the `IntBuffer` used as output parameters. After that, we create a Vulkan buffer which will be used to transfer the contents to the image. Then, we create a Vulkan image. It is interesting to review the usage flags we are using in in this case:
+ The `Texture` class defines the `recordedTransition` attribute to control if the texture has already been recorded to transition to the final layout or not (more on this later). We use the stb function `stbi_load` to load an image file. This function receives as parameters the path to the file and three `IntBuffer`s used to return the width, the height and the number of color components of the image. It also receives the desired number of color components (`4` in our case, which represents RGBA). This function returns a `ByteBuffer` with the contents of the image if it succeeds and fills up the `IntBuffer`s used as output parameters (the loading call is sketched after the usage flags below). After that, we create a Vulkan buffer which will be used to transfer the contents to the image. Then, we create a Vulkan image. It is interesting to review the usage flags we are using in this case:

- `VK_IMAGE_USAGE_TRANSFER_DST_BIT`: The image can be used as a destination of a transfer command. We need this, because in our case, we will copy from a staging buffer to the image.
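
The loading call referenced above can be sketched as follows (hypothetical helper class, simplified error handling):

```java
import java.nio.ByteBuffer;
import java.nio.IntBuffer;

import org.lwjgl.system.MemoryStack;

import static org.lwjgl.stb.STBImage.*;

// Hypothetical sketch of the stbi_load usage described above
public class TextureLoadSketch {
    public static ByteBuffer load(String fileName, IntBuffer width, IntBuffer height) {
        try (MemoryStack stack = MemoryStack.stackPush()) {
            IntBuffer channels = stack.mallocInt(1);
            // Request 4 components (RGBA) regardless of what the file actually stores
            ByteBuffer buf = stbi_load(fileName, width, height, channels, 4);
            if (buf == null) {
                throw new RuntimeException("Could not load [" + fileName + "]: " + stbi_failure_reason());
            }
            // Caller copies it into a staging buffer and then calls stbi_image_free
            return buf;
        }
    }
}
```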

@@ -663,7 +663,7 @@ public class VulkanModel {
}
```
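
The transition to the final layout mentioned above is recorded through an image memory barrier. A hedged sketch (parameter choices are illustrative, not necessarily the book's exact code):

```java
import org.lwjgl.system.MemoryStack;
import org.lwjgl.vulkan.VkCommandBuffer;
import org.lwjgl.vulkan.VkImageMemoryBarrier;

import static org.lwjgl.vulkan.VK10.*;

// Hypothetical sketch: transition an image from transfer destination to shader read-only
public class TransitionSketch {
    public static void recordTransition(VkCommandBuffer cmdHandle, long image) {
        try (MemoryStack stack = MemoryStack.stackPush()) {
            VkImageMemoryBarrier.Buffer barrier = VkImageMemoryBarrier.calloc(1, stack)
                    .sType(VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER)
                    .oldLayout(VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL)
                    .newLayout(VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL)
                    .srcQueueFamilyIndex(VK_QUEUE_FAMILY_IGNORED)
                    .dstQueueFamilyIndex(VK_QUEUE_FAMILY_IGNORED)
                    .image(image)
                    .srcAccessMask(VK_ACCESS_TRANSFER_WRITE_BIT)
                    .dstAccessMask(VK_ACCESS_SHADER_READ_BIT);
            barrier.subresourceRange()
                    .aspectMask(VK_IMAGE_ASPECT_COLOR_BIT)
                    .baseMipLevel(0)
                    .levelCount(1)
                    .baseArrayLayer(0)
                    .layerCount(1);
            // Wait for the transfer write before any fragment shader reads the image
            vkCmdPipelineBarrier(cmdHandle, VK_PIPELINE_STAGE_TRANSFER_BIT,
                    VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT, 0, null, null, barrier);
        }
    }
}
```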

- `VulkanModel` class will no longer store a list of meshes but a list of materials (which will hold references to meshes). Therefore, the `vulkanMaterialList` attribute needs to be removed. We need to change also the `transformModels` method to load the textures:
+ The `VulkanModel` class will no longer store a list of meshes but a list of materials (which will hold references to meshes). Therefore, the `vulkanMeshList` attribute needs to be removed. We also need to change the `transformModels` method to load the textures:

```java
public class VulkanModel {
    ...
}
```
10 changes: 5 additions & 5 deletions bookcontents/chapter-09/chapter-09.md
@@ -458,7 +458,7 @@ public class Camera {

This class, in essence, stores the view matrix, which can be modified by the different methods that it provides to change its position, apply rotation or displace around the scene. It uses the JOML library to calculate the up and forward vectors used to displace it.
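
For illustration, a minimal sketch (simplified, hypothetical field names) of how JOML can derive the forward direction from the view matrix to displace the camera:

```java
import org.joml.Matrix4f;
import org.joml.Vector3f;

// Hypothetical sketch of a JOML-based camera displacement
public class CameraSketch {
    private final Matrix4f viewMatrix = new Matrix4f();
    private final Vector3f position = new Vector3f();
    private final Vector3f direction = new Vector3f();

    public void moveForward(float inc) {
        // positiveZ extracts the +Z axis of the view matrix; negated, it points forward
        viewMatrix.positiveZ(direction).negate().mul(inc);
        position.add(direction);
        recalculate();
    }

    private void recalculate() {
        // Rotation is omitted here; a full camera would apply rotateX/rotateY first
        viewMatrix.identity().translate(-position.x, -position.y, -position.z);
    }
}
```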

- The camera is part now of the scene:
+ The camera is now part of the scene:

```java
public class Scene {
    ...
}
```

@@ -481,7 +481,7 @@ We will see later on how to use the camera while recording the render commands.

## Dynamic uniform buffers

- Up to now, we have create the buffers associated to uniforms though descriptor sets of `VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER` type. There is another type which can use a single buffer and a descriptor set, passing a region of that buffer to the shaders when binding the descriptor sets. These are called dynamic uniform buffers. They can be used to reduce the number of individual buffers an descriptor sets, for example when passing material properties to the shaders. This is the showcase we will use to explain its usage. Therefore, we will start by including the diffuse color in the fragment shader:
+ Up to now, we have created the buffers associated to uniforms through descriptor sets of `VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER` type. There is another type which can use a single buffer and a descriptor set, passing a region of that buffer to the shaders when binding the descriptor sets. These are called dynamic uniform buffers. They can be used to reduce the number of individual buffers and descriptor sets, for example when passing material properties to the shaders. This is the showcase we will use to explain its usage. Therefore, we will start by including the diffuse color in the fragment shader:

```glsl
#version 450
...
```

@@ -534,7 +534,7 @@ public abstract class DescriptorSetLayout {
}
```

- A dynamic uniform buffer will allow us to create a single buffer which will hold all the data for all the possible materials, while passing a specific window to that buffer to the shaders for the specific material to be used while rendering. These descriptor sets use the `VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER_DYNAMIC` type. As you can image we will need also a new descriptor set type that we will use for the materials. We will create a new class named `DynUniformDescriptorSet` which will inherit from `SimpleDescriptorSet`. This class will use the `VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER_DYNAMIC` type and will have an extra parameter for the `size`. If you remember from previous descriptor sets, we just used the size of the buffer that holds the uniform values. In this case is different, the buffer will hold the values for all the materials, but this new `size` parameter will not be the size of that large buffer. It will be be the size in bytes of the data associated to a single material. You can think about it as the size of one of the slices of that buffer that we can associate to a uniform. We will see later on how to calculate these slices.
+ A dynamic uniform buffer will allow us to create a single buffer which will hold all the data for all the possible materials, while passing a specific window of that buffer to the shaders for the specific material to be used while rendering. These descriptor sets use the `VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER_DYNAMIC` type. As you can imagine, we will also need a new descriptor set type that we will use for the materials. We will create a new class named `DynUniformDescriptorSet` which will inherit from `SimpleDescriptorSet`. This class will use the `VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER_DYNAMIC` type and will have an extra parameter for the `size`. If you remember, for previous descriptor sets we just used the size of the buffer that holds the uniform values. In this case it is different: the buffer will hold the values for all the materials, but this new `size` parameter will not be the size of that large buffer. It will be the size in bytes of the data associated to a single material. You can think about it as the size of one of the slices of that buffer that we can associate to a uniform. We will see later on how to calculate these slices.
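
As a preview of that calculation, a hedged sketch (the helper name is hypothetical): every slice must be padded up to the device's `minUniformBufferOffsetAlignment` limit, so the dynamic offsets always land on a valid boundary:

```java
import org.lwjgl.vulkan.VkPhysicalDeviceLimits;

// Hypothetical helper: round size up to the next multiple of the alignment limit
public final class AlignmentSketch {
    public static long calcAlignedSize(long size, VkPhysicalDeviceLimits limits) {
        long alignment = limits.minUniformBufferOffsetAlignment();
        return alignment > 0 ? ((size + alignment - 1) / alignment) * alignment : size;
    }
}
```

The shared buffer would then be `alignedSize * numMaterials` bytes long, and the slice for material `i` would start at offset `i * alignedSize`.
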
```java
public abstract class DescriptorSet {
...
}
```

@@ -550,7 +550,7 @@

## Completing the changes

- Now it is the turn to modify the `ForwardRenderActivity` class. We start be fining new attributes for the descriptors associated to the materials, and a new descriptor set for the uniforms that will hold the view matrices associated to the camera. As it has been described before, the `Pipeline.PipeLineCreationInfo pipeLineCreationInfo` record has also been modified to control if we will use blending or not.
+ Now it is time to modify the `ForwardRenderActivity` class. We start by defining new attributes for the descriptors associated to the materials, and a new descriptor set for the uniforms that will hold the view matrices associated to the camera. As described before, the `Pipeline.PipeLineCreationInfo pipeLineCreationInfo` record has also been modified to control if we will use blending or not.
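
When recording the render commands, the slice for the current material is selected by passing a dynamic offset while binding the descriptor set. A hedged sketch (names and set numbering are illustrative):

```java
import org.lwjgl.system.MemoryStack;
import org.lwjgl.vulkan.VkCommandBuffer;

import static org.lwjgl.vulkan.VK10.*;

// Hypothetical sketch: bind a dynamic uniform descriptor set with a per-material offset
public class DynOffsetSketch {
    public static void bind(VkCommandBuffer cmdHandle, long pipelineLayout, long descriptorSet,
                            int materialIdx, int alignedMaterialSize) {
        try (MemoryStack stack = MemoryStack.stackPush()) {
            // The dynamic offset picks this material's slice inside the shared buffer
            vkCmdBindDescriptorSets(cmdHandle, VK_PIPELINE_BIND_POINT_GRAPHICS, pipelineLayout,
                    0, stack.longs(descriptorSet), stack.ints(materialIdx * alignedMaterialSize));
        }
    }
}
```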

```java
public class ForwardRenderActivity {
    ...
}
```

@@ -897,4 +897,4 @@ With all of these changes you will be able to see the Sponza model. You will be

<img src="screen-shot.png" title="" alt="Screen Shot" data-align="center">

- [Next chapter](../chapter-10/chapter-10.md)
+ [Next chapter](../chapter-10/chapter-10.md)
4 changes: 2 additions & 2 deletions bookcontents/chapter-10/chapter-10.md
@@ -17,7 +17,7 @@ Hence, with deferred shading we perform two rendering phases. The first one, is

All that information is stored in attachments, as the depth attachment used in previous chapters.

- The second pass is called the lighting phase. This phase takes a shape that fills up all the screen and generates the final color information, using lighting, for each fragment using as inputs the attachment outputs generated in the previous phase. When are will performing the lighting pass, the depth test in the geometry phase will have already removed all the scene data that is not be seen. Hence, the number of operations to be done are restricted to what will be displayed on the screen.
+ The second pass is called the lighting phase. This phase takes a shape that fills up the whole screen and generates the final color information, using lighting, for each fragment, using as inputs the attachment outputs generated in the previous phase. When performing the lighting pass, the depth test in the geometry phase will have already removed all the scene data that will not be seen. Hence, the number of operations to be done is restricted to what will be displayed on the screen.

## Attachments

@@ -1399,4 +1399,4 @@ With all these changes, you will get something like this:

Do not despair, it is exactly the same result as in the previous chapter. You will see in the next chapter how we will dramatically improve the visuals. In this chapter we have just set the basis for deferred rendering.

- [Next chapter](../chapter-11/chapter-11.md)
+ [Next chapter](../chapter-11/chapter-11.md)
6 changes: 3 additions & 3 deletions bookcontents/chapter-11/chapter-11.md
@@ -885,7 +885,7 @@ public class LightingRenderActivity {
...
```

- The lighting vertex shader (`geometry_vertex.glsl`) has not been modified at all. However, the lighting fragment shader (`geometry_fragment.glsl`) has been heavily changed. It starts like this:
+ The lighting vertex shader (`lighting_vertex.glsl`) has not been modified at all. However, the lighting fragment shader (`lighting_fragment.glsl`) has been heavily changed. It starts like this:

```glsl
#version 450
...
```

@@ -1080,7 +1080,7 @@ public class Render {
}
```

- Also, since we are discarding semi-transparent objects, we can remove the re-ordering the models which set up ones that have no transparencies first:
+ Also, since we are discarding semi-transparent objects, we can remove the re-ordering of the models:
```java
public class Render {
...
}
```

@@ -1161,4 +1161,4 @@ With all these changes, you will get something like this:

<img src="screen-shot.png" title="" alt="Screen Shot" data-align="center">

- [Next chapter](../chapter-12/chapter-12.md)
+ [Next chapter](../chapter-12/chapter-12.md)
6 changes: 4 additions & 2 deletions bookcontents/chapter-16/chapter-16.md
@@ -1,6 +1,6 @@
# Indirect drawing

- Until this chapter, we have rendered the models by binding their material uniforms, their textures, their vertices and indices buffers and submitting one draw command for each of the meshes they are composed. In this chapter, we will start our way to a more efficient wat of rendering, we will begin the implementation of a bind-less render. This type of rendering does not receive a bunch of draw commands to draw the scene, instead they relay on indirect drawing commands. Indirect draw commands are, in essence, draw commands stored in a buffer that obtain the parameters required to perform the operation from a set of global buffers. This is a more efficient way of drawing because:
+ Until this chapter, we have rendered the models by binding their material uniforms, their textures, their vertex and index buffers and submitting one draw command for each of the meshes they are composed of. In this chapter, we will start our way to a more efficient way of rendering: we will begin the implementation of a bind-less render. This type of rendering does not receive a bunch of draw commands to draw the scene; instead, it relies on indirect drawing commands. Indirect draw commands are, in essence, draw commands stored in a buffer that obtain the parameters required to perform the operation from a set of global buffers. This is a more efficient way of drawing because:

- We remove the need to perform several bind operations before drawing each mesh.
- We just need to record a single draw call.
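
To make this concrete, a hedged sketch (helper names are illustrative) of filling one indirect command and submitting the whole buffer with a single call:

```java
import org.lwjgl.vulkan.VkCommandBuffer;
import org.lwjgl.vulkan.VkDrawIndexedIndirectCommand;

import static org.lwjgl.vulkan.VK10.vkCmdDrawIndexedIndirect;

// Hypothetical sketch of recording indirect draws
public class IndirectDrawSketch {
    public static void fillCommand(VkDrawIndexedIndirectCommand.Buffer cmds, int i,
                                   int numIndices, int firstIndex, int instanceIdx) {
        cmds.get(i)
                .indexCount(numIndices)
                .instanceCount(1)
                .firstIndex(firstIndex)
                .vertexOffset(0)
                .firstInstance(instanceIdx);
    }

    public static void record(VkCommandBuffer cmdHandle, long indirectBuffer, int drawCount) {
        // One call replaces one vkCmdDrawIndexed invocation per mesh
        vkCmdDrawIndexedIndirect(cmdHandle, indirectBuffer, 0, drawCount,
                VkDrawIndexedIndirectCommand.SIZEOF);
    }
}
```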
@@ -2166,4 +2166,6 @@ public class Main implements IAppLogic {

The results will be exactly the same as in chapter 14, but now we have the basis of a bind-less pipeline.

- <img src="../chapter-14/screen-shot.gif" title="" alt="Screen Shot" data-align="center">
+ <img src="../chapter-14/screen-shot.gif" title="" alt="Screen Shot" data-align="center">
+
+ [Next chapter](../chapter-17/chapter-17.md)
