Refactor buffer and descriptor management into rendergraph #526
```cpp
template <typename DataType>
class GPUMemoryBuffer {
protected:
    const Device &m_device;
    VkBuffer m_buffer{VK_NULL_HANDLE};
    VmaAllocation m_alloc{VK_NULL_HANDLE};
    VmaAllocationInfo m_alloc_info{};
    std::string m_name;

public:
    /// Construct the GPU memory buffer without specifying the actual data to fill it with, only the memory size
    /// @param device The device wrapper
    /// @param buffer_usage The buffer usage flags
    /// @param memory_usage The VMA memory usage flags which specify the required memory allocation
    /// @param name The internal debug marker name of the GPU memory buffer
    GPUMemoryBuffer(const Device &device, const VkBufferUsageFlags &buffer_usage, const VmaMemoryUsage &memory_usage,
                    std::string name)
        : m_device(device), m_name(std::move(name)) {
        // Note: use m_name from here on, because the name parameter has been moved from
        spdlog::trace("Creating GPU memory buffer of size {} for {}", sizeof(DataType), m_name);
        const auto buffer_ci = make_info<VkBufferCreateInfo>({
            .size = sizeof(DataType),
            .usage = buffer_usage,
            .sharingMode = VK_SHARING_MODE_EXCLUSIVE,
        });
        const VmaAllocationCreateInfo alloc_ci{
            .flags = VMA_ALLOCATION_CREATE_MAPPED_BIT,
            .usage = memory_usage,
        };
        if (const auto result =
                vmaCreateBuffer(m_device.allocator(), &buffer_ci, &alloc_ci, &m_buffer, &m_alloc, &m_alloc_info);
            result != VK_SUCCESS) {
            throw VulkanException("Error: GPU memory buffer allocation for " + m_name + " failed!", result);
        }
        m_device.set_debug_marker_name(m_buffer, VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT, m_name);
        vmaSetAllocationName(m_device.allocator(), m_alloc, m_name.c_str());
    }

    /// Construct the GPU memory buffer and specify the actual data to fill it with
    /// @param device The device wrapper
    /// @param data A pointer to the data to fill the GPU memory buffer with
    /// @param buffer_usage The buffer usage flags
    /// @param memory_usage The VMA memory usage flags which specify the required memory allocation
    /// @param name The internal debug marker name of the GPU memory buffer
    GPUMemoryBuffer(const Device &device, const DataType *data, const VkBufferUsageFlags &buffer_usage,
                    const VmaMemoryUsage &memory_usage, std::string name)
        : GPUMemoryBuffer(device, buffer_usage, memory_usage, std::move(name)) {
        update(data);
    }

    GPUMemoryBuffer(const GPUMemoryBuffer &) = delete;
    GPUMemoryBuffer(GPUMemoryBuffer &&) noexcept;
    virtual ~GPUMemoryBuffer();

    GPUMemoryBuffer &operator=(const GPUMemoryBuffer &) = delete;
    GPUMemoryBuffer &operator=(GPUMemoryBuffer &&) = delete;

    [[nodiscard]] VmaAllocation allocation() const {
        return m_alloc;
    }

    [[nodiscard]] VmaAllocationInfo allocation_info() const {
        return m_alloc_info;
    }

    [[nodiscard]] VkBuffer buffer() const {
        return m_buffer;
    }

    [[nodiscard]] const std::string &name() const {
        return m_name;
    }

    /// Update the buffer data
    /// @note This method automatically deduces the size of the data type from the template type
    /// @param data The data to update the buffer with
    void update(const DataType *data) {
        std::memcpy(m_alloc_info.pMappedData, data, sizeof(DataType));
    }
};
```
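To illustrate the size-deduction idea outside of Vulkan, here is a minimal sketch where a plain heap allocation stands in for the VMA allocation; `SizedBuffer` and `CameraUBO` are hypothetical names, not engine code:

```cpp
#include <cstddef>
#include <cstring>
#include <string>
#include <utility>
#include <vector>

// Stand-in for the templated GPUMemoryBuffer: the allocation size is
// deduced from the template parameter instead of being passed in by hand.
template <typename DataType>
class SizedBuffer {
    std::vector<unsigned char> m_storage;
    std::string m_name;

public:
    explicit SizedBuffer(std::string name) : m_storage(sizeof(DataType)), m_name(std::move(name)) {}

    // Size is implied by DataType, so no size parameter is needed here
    void update(const DataType *data) {
        std::memcpy(m_storage.data(), data, sizeof(DataType));
    }

    [[nodiscard]] std::size_t size() const { return m_storage.size(); }
    [[nodiscard]] const std::string &name() const { return m_name; }
    [[nodiscard]] const void *data() const { return m_storage.data(); }
};

// Example payload type; sizeof(CameraUBO) is the deduced buffer size
struct CameraUBO {
    float view[16];
    float proj[16];
};
```

The caller never states a byte count, which is exactly what removes the manual size bookkeeping from the current wrappers.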
For example, this is part of our current staging buffer code:

```cpp
[[nodiscard]] VkBuffer create_staging_buffer(const void *data, const VkDeviceSize data_size,
                                             const std::string &name) const {
    // Create a staging buffer for the copy operation and keep it until the CommandBuffer exceeds its lifetime
    m_staging_bufs.emplace_back(m_device, name, data_size, data, data_size, VK_BUFFER_USAGE_TRANSFER_SRC_BIT,
                                VMA_MEMORY_USAGE_CPU_ONLY);
    return m_staging_bufs.back().buffer();
}
```

but we have an entire wrapper class just for this:

```cpp
UniformBuffer::UniformBuffer(const Device &device, const std::string &name, const VkDeviceSize &buffer_size)
    : GPUMemoryBuffer(device, name, buffer_size, VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT, VMA_MEMORY_USAGE_CPU_TO_GPU) {}
```
I was experimenting with new code during the rendergraph refactoring, but I decided it's too much code to touch in one pull request, so this will be a separate refactoring afterwards.
We could set the
As you can see, vertex buffers and index buffers can be simplified a little with this as well!
So to create a staging buffer with this simplified code:

```cpp
[[nodiscard]] VkBuffer create_staging_buffer(const MyData *data) const {
    // Staging buffers don't really need a name
    m_staging_bufs.emplace_back(m_device, data, USAGE::STAGING_BUFFER);
    return m_staging_bufs.back().buffer();
}
```

To create a uniform buffer:

```cpp
m_uniform_buffers.emplace_back(m_device, MyUniformBufferObject, USAGE::UNIFORM_BUFFER, "my name");
```
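A `USAGE` enum like this could centralize the `VkBufferUsageFlags`/`VmaMemoryUsage` pairing in one place. A minimal sketch of the mapping idea, using strings as stand-ins for the real Vulkan and VMA constants (the enum layout and the vertex/index memory choices are assumptions, not existing engine code):

```cpp
#include <string>

// Hypothetical usage categories; real code would return
// VkBufferUsageFlags and VmaMemoryUsage instead of strings
enum class USAGE { STAGING_BUFFER, UNIFORM_BUFFER, VERTEX_BUFFER, INDEX_BUFFER };

struct BufferConfig {
    std::string buffer_usage;  // stands in for VkBufferUsageFlags
    std::string memory_usage;  // stands in for VmaMemoryUsage
};

// Central mapping so callers no longer pass both flag sets by hand
inline BufferConfig config_for(const USAGE usage) {
    switch (usage) {
    case USAGE::STAGING_BUFFER:
        return {"VK_BUFFER_USAGE_TRANSFER_SRC_BIT", "VMA_MEMORY_USAGE_CPU_ONLY"};
    case USAGE::UNIFORM_BUFFER:
        return {"VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT", "VMA_MEMORY_USAGE_CPU_TO_GPU"};
    case USAGE::VERTEX_BUFFER:
        return {"VK_BUFFER_USAGE_VERTEX_BUFFER_BIT", "VMA_MEMORY_USAGE_GPU_ONLY"};
    case USAGE::INDEX_BUFFER:
        return {"VK_BUFFER_USAGE_INDEX_BUFFER_BIT", "VMA_MEMORY_USAGE_GPU_ONLY"};
    }
    return {};
}
```

The staging and uniform pairings above match the flags already used in the snippets in this issue; the vertex/index ones are my guesses.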
This will be a large refactoring, so I'm glad I've decided not to do it in the rendergraph refactoring!
The interesting aspect of this is that

```cpp
template <typename VertexType, typename IndexType = std::uint32_t>
class MeshBuffer {
```

is already a template, so we would simply need to pass on the template types in the
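Forwarding the `MeshBuffer` template parameters into the templated buffers could look roughly like this. This is a sketch with a simplified stand-in buffer type (`TypedBuffer` is hypothetical), just to show that the element sizes become compile-time properties:

```cpp
#include <cstddef>
#include <cstdint>

// Simplified stand-in for the templated GPU buffer
template <typename DataType>
struct TypedBuffer {
    static constexpr std::size_t element_size = sizeof(DataType);
};

// MeshBuffer passes its own template types on to its buffers, so
// vertex and index element sizes are deduced instead of asserted
template <typename VertexType, typename IndexType = std::uint32_t>
class MeshBuffer {
public:
    TypedBuffer<VertexType> vertex_buffer;
    TypedBuffer<IndexType> index_buffer;
};

// Example vertex layout
struct Vertex {
    float position[3];
    float color[3];
};
```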
In addition, I could get rid of these assertions in the refactoring:

```cpp
assert(device.device());
assert(device.allocator());
assert(!name.empty());
assert(vertex_count > 0);
assert(index_count > 0);
```

And order the parameters so that
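To make concrete why the size-related assertions can go away: with the templated buffers the element size is a compile-time constant, so only the dynamic element count still needs a runtime check. A small hedged sketch (`required_bytes` is a hypothetical helper):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// The element size comes from the template parameter at compile time;
// only the runtime-variable count remains to be validated
template <typename DataType>
std::size_t required_bytes(const std::vector<DataType> &elements) {
    assert(!elements.empty() && "buffer must contain at least one element");
    return elements.size() * sizeof(DataType);
}
```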
EDIT: This class is absolutely redundant!
On second thought, it might be best to refactor it so it's all part of the rendergraph. There is really no reason why there should be any additional wrapper for this, as we would need to pass it into the rendergraph anyways. This means we should let the rendergraph create all the buffers (not sure about staging buffers yet) and give out handles to them. This could be done as it's done at the moment with raw pointers, or with smart pointers.
The same applies to descriptors.
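Letting the rendergraph own all buffers and give out handles could be sketched like this. It's a toy model with integer handles; all names are hypothetical and none of this matches the real rendergraph API:

```cpp
#include <cstddef>
#include <cstdint>
#include <string>
#include <utility>
#include <vector>

// Toy buffer record standing in for the real VkBuffer/VmaAllocation pair
struct BufferResource {
    std::string name;
    std::size_t size{0};
};

// Opaque handle handed out instead of raw pointers to wrapper objects
using BufferHandle = std::uint32_t;

class RenderGraph {
    std::vector<BufferResource> m_buffers;

public:
    // The rendergraph creates the buffer itself and only returns a handle
    BufferHandle add_buffer(std::string name, const std::size_t size) {
        m_buffers.push_back({std::move(name), size});
        return static_cast<BufferHandle>(m_buffers.size() - 1);
    }

    [[nodiscard]] const BufferResource &buffer(const BufferHandle handle) const {
        return m_buffers.at(handle);
    }
};
```

With handles, the rendergraph stays the single owner, which sidesteps the raw-pointer vs smart-pointer lifetime question for callers entirely.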
Is your feature request related to a problem?

There's no real reason we need that `UniformBuffer` wrapper class. It just inherits from `GPUMemoryBuffer` and sets the buffer type. Why is there a wrapper for this, but no wrapper for staging buffers then? This makes no sense.

Description

In addition, we could turn `GPUMemoryBuffer` into a template, so it automatically deduces the size of the required memory buffer. (This needs more discussion though, as we will still need `GPUMemoryBuffer`s where we pass in a raw pointer and the size of the object manually.)

Alternatives
Keep everything as it is (more complicated than necessary and without any benefit)
Affected Code
The Vulkan memory management code of our engine.
Operating System
All
Additional Context
None