Improvements from TheForge (see description)
The work was performed by a collaboration of TheForge and Google. I am merely splitting it up into smaller PRs and cleaning it up.

This is the most "risky" PR so far, because the previous ones were miscellaneous changes aimed at either [improving debugging](https://github.com/godotengine/godot/pull/90993) (e.g. device lost), [improving the Android experience](https://github.com/godotengine/godot/pull/96439) (adding Swappy for better frame pacing + pre-transformed swapchains for slightly better performance), or harmless [ASTC improvements](https://github.com/godotengine/godot/pull/96045) (better performance by simply toggling a feature when available). This PR, however, contains larger modifications aimed at improving performance or reducing memory fragmentation. With greater modifications come greater risks of bugs or breakage.

Changes introduced by this PR:

**Transient render targets.** TBDR GPUs (e.g. most of Android + iOS + M1 Apple) support rendering to render targets that are not backed by actual GPU memory (everything stays in cache). This works as long as the load action isn't `LOAD` and the store action is `DONT_CARE`. This saves VRAM (it also makes it painfully obvious when a mistake introduces a performance regression). It is particularly useful when doing MSAA and keeping the raw MSAA content is not necessary (see the first sketch below).

**Immutable samplers.** Some GPUs run faster when the sampler settings are hard-coded into the GLSL shaders (instead of being dynamically bound at runtime). This required changes to the GLSL shaders, PSO creation routines, descriptor creation routines, and descriptor binding routines (see the second sketch below).

- `bool immutable_samplers_enabled = true`

Setting it to false enforces the old behavior. Useful for debugging bugs and regressions. Immutable samplers require that the samplers stay... immutable, hence this boolean is useful if that promise gets broken. We might want to turn this into a `GLOBAL_DEF` setting.
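For illustration, the first sketch: roughly what a memoryless MSAA attachment looks like at the Vulkan level. This is a minimal sketch, not this PR's code; `device`, `width`, `height`, `format` and the `find_lazily_allocated_memory_type()` helper are assumed to exist.

```cpp
// A 4x MSAA color attachment that, on TBDR hardware, never has to touch VRAM.
VkImageCreateInfo image_info = {};
image_info.sType = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO;
image_info.imageType = VK_IMAGE_TYPE_2D;
image_info.format = format;
image_info.extent = { width, height, 1 };
image_info.mipLevels = 1;
image_info.arrayLayers = 1;
image_info.samples = VK_SAMPLE_COUNT_4_BIT;
image_info.tiling = VK_IMAGE_TILING_OPTIMAL;
// TRANSIENT_ATTACHMENT promises the driver the contents never leave tile memory.
image_info.usage = VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT | VK_IMAGE_USAGE_TRANSIENT_ATTACHMENT_BIT;

VkImage msaa_color = VK_NULL_HANDLE;
vkCreateImage(device, &image_info, nullptr, &msaa_color);

VkMemoryRequirements reqs;
vkGetImageMemoryRequirements(device, msaa_color, &reqs);

// Back it with a LAZILY_ALLOCATED memory type. The allocation is deferred, and as
// long as the render pass uses loadOp != LOAD and storeOp == DONT_CARE for this
// attachment, it typically never happens at all.
VkMemoryAllocateInfo alloc_info = {};
alloc_info.sType = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO;
alloc_info.allocationSize = reqs.size;
alloc_info.memoryTypeIndex = find_lazily_allocated_memory_type(reqs.memoryTypeBits); // assumed helper

VkDeviceMemory memory = VK_NULL_HANDLE;
vkAllocateMemory(device, &alloc_info, nullptr, &memory);
vkBindImageMemory(device, msaa_color, memory, 0);
```

The new `lazily alloc:` entry added to `update_perf_report()` in the diff below reports how much memory ended up in this category.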
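And the second sketch, for immutable samplers; again plain Vulkan rather than this PR's actual code (`device` and an already-created `linear_sampler` are assumed):

```cpp
// The sampler is baked into the descriptor set layout; it can never be rebound.
VkDescriptorSetLayoutBinding binding = {};
binding.binding = 0;
binding.descriptorType = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER;
binding.descriptorCount = 1;
binding.stageFlags = VK_SHADER_STAGE_FRAGMENT_BIT;
// With pImmutableSamplers set, vkUpdateDescriptorSets() ignores the sampler for
// this binding: the sampler is part of the layout (and thus of the PSO) itself.
binding.pImmutableSamplers = &linear_sampler;

VkDescriptorSetLayoutCreateInfo layout_info = {};
layout_info.sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO;
layout_info.bindingCount = 1;
layout_info.pBindings = &binding;

VkDescriptorSetLayout set_layout = VK_NULL_HANDLE;
vkCreateDescriptorSetLayout(device, &layout_info, nullptr, &set_layout);
```

This is also why the boolean above matters: if a uniform later tries to bind a different sampler on such a binding, the promise is broken.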
**Linear descriptor pools.** Instead of creating dozens/hundreds/thousands of `VkDescriptorSet` every frame that need to be freed individually when they are no longer needed, they all get freed at once by resetting the whole pool. Once the whole pool is no longer in use by the GPU, it gets reset and its memory recycled. Descriptor sets that are created to be kept around for longer or forever (i.e. not created and freed within the same frame) **must not** use linear pools. There may be more than one pool per frame: how many pools per frame Godot ends up with depends on their capacity, which is controlled by `rendering/rendering_device/vulkan/max_descriptors_per_pool`. (A Vulkan sketch of this pattern follows after the next section.)

- **Possible improvement for later:** It should be possible for Godot to adapt how many descriptors per pool are needed on a per-key basis (i.e. grow their capacity like `std::vector` does) after rendering a few frames, which would be better than the current solution of having a single global value for all pools (`max_descriptors_per_pool`) that the user needs to tweak.
- `bool linear_descriptor_pools_enabled = true`

Setting it to false enforces the old behavior. Useful for debugging bugs and regressions. Setting it to false is required when working around driver bugs (e.g. Adreno 730).

**Resetting command pools instead of individual command buffers.** A ridiculous optimization. Ridiculous because the original code should've done this in the first place. Previously Godot was doing the following:

1. Create a command buffer **pool**. One per frame.
2. Create multiple command buffers from the pool in point 1.
3. Call `vkBeginCommandBuffer` on the cmd buffer from point 2. This resets the cmd buffer because Godot requests the `VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT` flag.
4. Add commands to the cmd buffers from point 2.
5. Submit those commands.
6. On frame N + 2, recycle the buffer pool and cmd buffers from points 1 & 2, and repeat from step 3.

The problem here is that step 3 resets each command buffer individually. Initially Godot used to have one cmd buffer per pool, so the impact was very low; but that's no longer the case (especially with the Adreno workarounds that force splitting compute dispatches into a new cmd buffer, more on this later). Still, Godot keeps a very low number of command buffers per frame. The recommended method is to reset the whole pool, which resets all of its cmd buffers at once (as sketched below). Hence the new steps are:

1. Create a command buffer **pool**. One per frame.
2. Create multiple command buffers from the pool in point 1.
3. Call `vkBeginCommandBuffer` on the cmd buffer from point 2, which is already reset/empty (see step 6).
4. Add commands to the cmd buffers from point 2.
5. Submit those commands.
6. On frame N + 2, recycle the buffer pool and cmd buffers from points 1 & 2, call `vkResetCommandPool`, and repeat from step 3.

**Possible issues:** @dariosamo added `transfer_worker`, which creates a command buffer pool:

```cpp
transfer_worker->command_pool = driver->command_pool_create(transfer_queue_family, RDD::COMMAND_BUFFER_TYPE_PRIMARY);
```

As expected, validation was complaining that command buffers were being reused without being reset (that's good: we now know the Validation Layers will warn us of wrong use). I fixed it by adding:

```cpp
void RenderingDevice::_wait_for_transfer_worker(TransferWorker *p_transfer_worker) {
	driver->fence_wait(p_transfer_worker->command_fence);
	driver->command_pool_reset(p_transfer_worker->command_pool); // ! New line !
```

**Secondary cmd buffers are subject to the same issue, but I didn't alter them. I discussed this with Dario and he is aware of it.** Secondary cmd buffers are currently disabled due to other issues (they're disabled on master).

- `bool RenderingDeviceCommons::command_pool_reset_enabled`

Setting it to false enforces the old behavior. Useful for debugging bugs and regressions. There's no other reason for this boolean; once it becomes well tested, it could possibly be removed entirely.
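To make the linear descriptor pool idea from two sections above concrete, here is a minimal Vulkan sketch of the pattern; not this PR's code (`device`, `set_layout` and a fence wait for the frame are assumed):

```cpp
// One "linear" pool per frame in flight. No FREE_DESCRIPTOR_SET_BIT: sets
// allocated from it cannot be freed one by one, only all at once.
VkDescriptorPoolSize pool_size = { VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, 1024 };
VkDescriptorPoolCreateInfo pool_info = {};
pool_info.sType = VK_STRUCTURE_TYPE_DESCRIPTOR_POOL_CREATE_INFO;
pool_info.maxSets = 1024; // analogous to max_descriptors_per_pool
pool_info.poolSizeCount = 1;
pool_info.pPoolSizes = &pool_size;

VkDescriptorPool frame_pool = VK_NULL_HANDLE;
vkCreateDescriptorPool(device, &pool_info, nullptr, &frame_pool);

// During the frame: allocate freely, never call vkFreeDescriptorSets().
VkDescriptorSetAllocateInfo alloc_info = {};
alloc_info.sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_ALLOCATE_INFO;
alloc_info.descriptorPool = frame_pool;
alloc_info.descriptorSetCount = 1;
alloc_info.pSetLayouts = &set_layout;

VkDescriptorSet set = VK_NULL_HANDLE;
vkAllocateDescriptorSets(device, &alloc_info, &set);

// Frames later, once the GPU is provably done with this frame (fence signaled),
// every set from this pool dies at once and its memory is recycled.
vkResetDescriptorPool(device, frame_pool, 0);
```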
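The command pool change looks like this in plain Vulkan; again a sketch rather than this PR's code (`device`, `graphics_family`, `frame_pool` and `cmd_buffers` are assumed):

```cpp
// The pool is created WITHOUT VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT,
// so individual vkBeginCommandBuffer() calls no longer pay for implicit resets
// and the driver can back the pool with a cheap linear allocator.
VkCommandPoolCreateInfo pool_info = {};
pool_info.sType = VK_STRUCTURE_TYPE_COMMAND_POOL_CREATE_INFO;
pool_info.queueFamilyIndex = graphics_family;
vkCreateCommandPool(device, &pool_info, nullptr, &frame_pool);

// On frame N + 2, after waiting on the frame's fence: one call resets every
// command buffer that was allocated from this pool.
vkResetCommandPool(device, frame_pool, 0);

// Each vkBeginCommandBuffer() now starts from an already-reset buffer (step 3).
VkCommandBufferBeginInfo begin_info = {};
begin_info.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO;
begin_info.flags = VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT;
vkBeginCommandBuffer(cmd_buffers[0], &begin_info);
```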
**Binding multiple uniform sets at once.** Adds `command_bind_render_uniform_sets` and `add_draw_list_bind_uniform_sets` (+ compute variants). They perform the same work as `add_draw_list_bind_uniform_set` (note singular vs. plural), but on multiple consecutive uniform sets, thus reducing graph and draw call overhead. (The first sketch at the end of this description shows what this maps to in Vulkan.)

- `bool descriptor_set_batching = true;`

Setting it to false enforces the old behavior. Useful for debugging bugs and regressions. There's no other reason for this boolean; once it becomes well tested, it could possibly be removed entirely.

**Splitting the swapchain blit into its own cmd buffer.** Godot currently does the following:

1. Fill the entire cmd buffer with commands.
2. `submit()`
   - Wait with a semaphore for the swapchain.
   - Trigger a semaphore to indicate when we're done (so the swapchain can present).
3. `present()`

The optimization opportunity here is that 95% of Godot's rendering is done offscreen; a fullscreen pass then copies everything to the swapchain. Godot practically never renders directly to the swapchain. The problem with this is that the GPU has to wait for the swapchain to be released **to start anything**, when we could start *much earlier*. Only the final blit pass must wait for the swapchain. TheForge changed it to the following (more complicated; I'm simplifying the idea):

1. Fill the entire cmd buffer with commands.
2. In `screen_prepare_for_drawing`, do `submit()`.
   - There are no semaphore waits for the swapchain.
   - Trigger a semaphore to indicate when we're done.
3. Fill a new cmd buffer that only does the final blit to the swapchain.
4. `submit()`
   - Wait with a semaphore for the `submit()` from step 2.
   - Wait with a semaphore for the swapchain.
   - Trigger a semaphore to indicate when we're done (so the swapchain can present).
5. `present()`

(The second sketch at the end of this description shows this semaphore chain in plain Vulkan.)

Dario discovered this problem independently while working on a different platform. **However, TheForge's solution had to be rewritten from scratch:** the complexity needed to achieve it was high and quite difficult to maintain with the way Godot works now (after the Übershaders PR). On the other hand, re-implementing the solution became much simpler, because Dario already had to do something similar to fix an Adreno 730 driver bug: implement splitting command buffers. **This is exactly what we need!** Thus it was rewritten using this existing functionality for a new purpose.

To achieve this, I added a new argument, `bool p_split_cmd_buffer`, to `RenderingDeviceGraph::add_draw_list_begin`, which is only set to true by `RenderingDevice::draw_list_begin_for_screen`. The graph will split the draw list into its own command buffer.

- `bool split_swapchain_into_its_own_cmd_buffer = true;`

Setting it to false enforces the old behavior. This might be necessary for consoles which follow an alternate solution to the same problem. If not, then we should consider removing it.

**Destroying SPIR-V modules after initialization.** PR #90993 added `shader_destroy_modules()`, but it was not actually in use. This PR adds several places where `shader_destroy_modules()` is called after initialization to free up the memory of SPIR-V structures that are no longer needed.
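First sketch (descriptor set batching): at the Vulkan level, what `add_draw_list_bind_uniform_sets` ultimately saves is repeated bind calls; binding N consecutive sets is a single `vkCmdBindDescriptorSets` with `descriptorSetCount = N`. Plain Vulkan, not this PR's code; `cmd`, `pipeline_layout` and the sets are assumed:

```cpp
// Three uniform sets occupying consecutive set indices 0..2.
VkDescriptorSet sets[3] = { set0, set1, set2 };

// One call with firstSet = 0 and descriptorSetCount = 3...
vkCmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, pipeline_layout,
		/* firstSet */ 0, /* descriptorSetCount */ 3, sets, 0, nullptr);

// ...replaces three separate calls (and three passes through the render graph):
// vkCmdBindDescriptorSets(cmd, ..., 0, 1, &set0, 0, nullptr);
// vkCmdBindDescriptorSets(cmd, ..., 1, 1, &set1, 0, nullptr);
// vkCmdBindDescriptorSets(cmd, ..., 2, 1, &set2, 0, nullptr);
```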
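Second sketch (the split submit): the semaphore chain described in the swapchain section above, in plain Vulkan. Not this PR's code; the command buffers, semaphores, `queue` and `frame_fence` are assumed:

```cpp
// Submit 1: all offscreen rendering. No waits, so the GPU starts immediately,
// long before the swapchain image is available.
VkSubmitInfo offscreen = {};
offscreen.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;
offscreen.commandBufferCount = 1;
offscreen.pCommandBuffers = &offscreen_cmd;
offscreen.signalSemaphoreCount = 1;
offscreen.pSignalSemaphores = &offscreen_done;
vkQueueSubmit(queue, 1, &offscreen, VK_NULL_HANDLE);

// Submit 2: only the final blit waits, both for submit 1 and for the swapchain.
VkSemaphore waits[] = { offscreen_done, swapchain_image_available };
VkPipelineStageFlags wait_stages[] = { VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT,
		VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT };
VkSubmitInfo blit = {};
blit.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;
blit.waitSemaphoreCount = 2;
blit.pWaitSemaphores = waits;
blit.pWaitDstStageMask = wait_stages;
blit.commandBufferCount = 1;
blit.pCommandBuffers = &blit_cmd;
blit.signalSemaphoreCount = 1;
blit.pSignalSemaphores = &render_finished; // vkQueuePresentKHR() waits on this.
vkQueueSubmit(queue, 1, &blit, frame_fence);
```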
This commit is contained in:
parent aa8d9b83f6
commit c77cbf096b
24 changed files with 983 additions and 200 deletions
```diff
@@ -566,9 +566,12 @@ String RenderingDevice::get_perf_report() const {
 }
 
 void RenderingDevice::update_perf_report() {
-	perf_report_text = " gpu:" + String::num_int64(gpu_copy_count);
+	perf_report_text = "";
+	perf_report_text += " gpu:" + String::num_int64(gpu_copy_count);
 	perf_report_text += " bytes:" + String::num_int64(copy_bytes_count);
 
+	perf_report_text += " lazily alloc:" + String::num_int64(driver->get_lazily_memory_used());
+
 	gpu_copy_count = 0;
 	copy_bytes_count = 0;
 }
```
```diff
@@ -2639,6 +2642,15 @@ RenderingDevice::FramebufferFormatID RenderingDevice::framebuffer_get_format(RID
 	return framebuffer->format_id;
 }
 
+Size2 RenderingDevice::framebuffer_get_size(RID p_framebuffer) {
+	_THREAD_SAFE_METHOD_
+
+	Framebuffer *framebuffer = framebuffer_owner.get_or_null(p_framebuffer);
+	ERR_FAIL_NULL_V(framebuffer, Size2(0, 0));
+
+	return framebuffer->size;
+}
+
 bool RenderingDevice::framebuffer_is_valid(RID p_framebuffer) const {
 	_THREAD_SAFE_METHOD_
 
```
```diff
@@ -2954,11 +2966,33 @@ Vector<uint8_t> RenderingDevice::shader_compile_binary_from_spirv(const Vector<S
 }
 
 RID RenderingDevice::shader_create_from_bytecode(const Vector<uint8_t> &p_shader_binary, RID p_placeholder) {
+	// Immutable samplers :
+	// Expanding api when creating shader to allow passing optionally a set of immutable samplers
+	// keeping existing api but extending it by sending an empty set.
+	Vector<PipelineImmutableSampler> immutable_samplers;
+	return shader_create_from_bytecode_with_samplers(p_shader_binary, p_placeholder, immutable_samplers);
+}
+
+RID RenderingDevice::shader_create_from_bytecode_with_samplers(const Vector<uint8_t> &p_shader_binary, RID p_placeholder, const Vector<PipelineImmutableSampler> &p_immutable_samplers) {
 	_THREAD_SAFE_METHOD_
 
 	ShaderDescription shader_desc;
 	String name;
-	RDD::ShaderID shader_id = driver->shader_create_from_bytecode(p_shader_binary, shader_desc, name);
+
+	Vector<RDD::ImmutableSampler> driver_immutable_samplers;
+	for (const PipelineImmutableSampler &source_sampler : p_immutable_samplers) {
+		RDD::ImmutableSampler driver_sampler;
+		driver_sampler.type = source_sampler.uniform_type;
+		driver_sampler.binding = source_sampler.binding;
+
+		for (uint32_t j = 0; j < source_sampler.get_id_count(); j++) {
+			RDD::SamplerID *sampler_driver_id = sampler_owner.get_or_null(source_sampler.get_id(j));
+			driver_sampler.ids.push_back(*sampler_driver_id);
+		}
+
+		driver_immutable_samplers.append(driver_sampler);
+	}
+	RDD::ShaderID shader_id = driver->shader_create_from_bytecode(p_shader_binary, shader_desc, name, driver_immutable_samplers);
 	ERR_FAIL_COND_V(!shader_id, RID());
 
 	// All good, let's create modules.
```
```diff
@@ -3029,6 +3063,12 @@ RID RenderingDevice::shader_create_from_bytecode(const Vector<uint8_t> &p_shader
 	return id;
 }
 
+void RenderingDevice::shader_destroy_modules(RID p_shader) {
+	Shader *shader = shader_owner.get_or_null(p_shader);
+	ERR_FAIL_NULL(shader);
+	driver->shader_destroy_modules(shader->driver_id);
+}
+
 RID RenderingDevice::shader_create_placeholder() {
 	_THREAD_SAFE_METHOD_
 
```
```diff
@@ -3086,12 +3126,12 @@ void RenderingDevice::_uniform_set_update_shared(UniformSet *p_uniform_set) {
 	}
 }
 
-template RID RenderingDevice::uniform_set_create(const LocalVector<RD::Uniform> &p_uniforms, RID p_shader, uint32_t p_shader_set);
+template RID RenderingDevice::uniform_set_create(const LocalVector<RD::Uniform> &p_uniforms, RID p_shader, uint32_t p_shader_set, bool p_linear_pool);
 
-template RID RenderingDevice::uniform_set_create(const Vector<RD::Uniform> &p_uniforms, RID p_shader, uint32_t p_shader_set);
+template RID RenderingDevice::uniform_set_create(const Vector<RD::Uniform> &p_uniforms, RID p_shader, uint32_t p_shader_set, bool p_linear_pool);
 
 template <typename Collection>
-RID RenderingDevice::uniform_set_create(const Collection &p_uniforms, RID p_shader, uint32_t p_shader_set) {
+RID RenderingDevice::uniform_set_create(const Collection &p_uniforms, RID p_shader, uint32_t p_shader_set, bool p_linear_pool) {
 	_THREAD_SAFE_METHOD_
 
 	ERR_FAIL_COND_V(p_uniforms.is_empty(), RID());
```
```diff
@@ -3142,6 +3182,9 @@ RID RenderingDevice::uniform_set_create(const Collection &p_uniforms, RID p_shad
 		driver_uniform.type = uniform.uniform_type;
 		driver_uniform.binding = uniform.binding;
 
+		// Mark immutable samplers to be skipped when creating uniform set.
+		driver_uniform.immutable_sampler = uniform.immutable_sampler;
+
 		switch (uniform.uniform_type) {
 			case UNIFORM_TYPE_SAMPLER: {
 				if (uniform.get_id_count() != (uint32_t)set_uniform.length) {
```
```diff
@@ -3457,7 +3500,7 @@ RID RenderingDevice::uniform_set_create(const Collection &p_uniforms, RID p_shad
 		}
 	}
 
-	RDD::UniformSetID driver_uniform_set = driver->uniform_set_create(driver_uniforms, shader->driver_id, p_shader_set);
+	RDD::UniformSetID driver_uniform_set = driver->uniform_set_create(driver_uniforms, shader->driver_id, p_shader_set, p_linear_pool ? frame : -1);
 	ERR_FAIL_COND_V(!driver_uniform_set, RID());
 
 	UniformSet uniform_set;
```
```diff
@@ -3503,6 +3546,10 @@ void RenderingDevice::uniform_set_set_invalidation_callback(RID p_uniform_set, I
 	us->invalidated_callback_userdata = p_userdata;
 }
 
+bool RenderingDevice::uniform_sets_have_linear_pools() const {
+	return driver->uniform_sets_have_linear_pools();
+}
+
 /*******************/
 /**** PIPELINES ****/
 /*******************/
```
```diff
@@ -3782,6 +3829,7 @@ Error RenderingDevice::screen_create(DisplayServer::WindowID p_screen) {
 Error RenderingDevice::screen_prepare_for_drawing(DisplayServer::WindowID p_screen) {
 	_THREAD_SAFE_METHOD_
 
+	// After submitting work, acquire the swapchain image(s)
 	HashMap<DisplayServer::WindowID, RDD::SwapChainID>::ConstIterator it = screen_swap_chains.find(p_screen);
 	ERR_FAIL_COND_V_MSG(it == screen_swap_chains.end(), ERR_CANT_CREATE, "A swap chain was not created for the screen.");
 
```
```diff
@@ -3918,7 +3966,7 @@ RenderingDevice::DrawListID RenderingDevice::draw_list_begin_for_screen(DisplayS
 	clear_value.color = p_clear_color;
 
 	RDD::RenderPassID render_pass = driver->swap_chain_get_render_pass(sc_it->value);
-	draw_graph.add_draw_list_begin(render_pass, fb_it->value, viewport, RDG::ATTACHMENT_OPERATION_CLEAR, clear_value, true, false, RDD::BreadcrumbMarker::BLIT_PASS);
+	draw_graph.add_draw_list_begin(render_pass, fb_it->value, viewport, RDG::ATTACHMENT_OPERATION_CLEAR, clear_value, true, false, RDD::BreadcrumbMarker::BLIT_PASS, split_swapchain_into_its_own_cmd_buffer);
 
 	draw_graph.add_draw_list_set_viewport(viewport);
 	draw_graph.add_draw_list_set_scissor(viewport);
```
```diff
@@ -4354,37 +4402,69 @@ void RenderingDevice::draw_list_draw(DrawListID p_list, bool p_use_indices, uint
 		}
 	}
 #endif
+	thread_local LocalVector<RDD::UniformSetID> valid_descriptor_ids;
+	valid_descriptor_ids.clear();
+	valid_descriptor_ids.resize(dl->state.set_count);
+	uint32_t valid_set_count = 0;
+	uint32_t first_set_index = 0;
+	uint32_t last_set_index = 0;
+	bool found_first_set = false;
 
-	// Prepare descriptor sets if the API doesn't use pipeline barriers.
-	if (!driver->api_trait_get(RDD::API_TRAIT_HONORS_PIPELINE_BARRIERS)) {
-		for (uint32_t i = 0; i < dl->state.set_count; i++) {
-			if (dl->state.sets[i].pipeline_expected_format == 0) {
-				// Nothing expected by this pipeline.
-				continue;
-			}
+	for (uint32_t i = 0; i < dl->state.set_count; i++) {
+		if (dl->state.sets[i].pipeline_expected_format == 0) {
+			continue; // Nothing expected by this pipeline.
+		}
 
+		if (!dl->state.sets[i].bound && !found_first_set) {
+			first_set_index = i;
+			found_first_set = true;
+		}
+		// Prepare descriptor sets if the API doesn't use pipeline barriers.
+		if (!driver->api_trait_get(RDD::API_TRAIT_HONORS_PIPELINE_BARRIERS)) {
 			draw_graph.add_draw_list_uniform_set_prepare_for_use(dl->state.pipeline_shader_driver_id, dl->state.sets[i].uniform_set_driver_id, i);
 		}
 	}
 
 	// Bind descriptor sets.
-	for (uint32_t i = 0; i < dl->state.set_count; i++) {
+	for (uint32_t i = first_set_index; i < dl->state.set_count; i++) {
 		if (dl->state.sets[i].pipeline_expected_format == 0) {
 			continue; // Nothing expected by this pipeline.
 		}
+
 		if (!dl->state.sets[i].bound) {
-			// All good, see if this requires re-binding.
-			draw_graph.add_draw_list_bind_uniform_set(dl->state.pipeline_shader_driver_id, dl->state.sets[i].uniform_set_driver_id, i);
-
-			UniformSet *uniform_set = uniform_set_owner.get_or_null(dl->state.sets[i].uniform_set);
-			_uniform_set_update_shared(uniform_set);
-
-			draw_graph.add_draw_list_usages(uniform_set->draw_trackers, uniform_set->draw_trackers_usage);
-
-			dl->state.sets[i].bound = true;
+			// Batch contiguous descriptor sets in a single call
+			if (descriptor_set_batching) {
+				// All good, see if this requires re-binding.
+				if (i - last_set_index > 1) {
+					// If the descriptor sets are not contiguous, bind the previous ones and start a new batch
+					draw_graph.add_draw_list_bind_uniform_sets(dl->state.pipeline_shader_driver_id, valid_descriptor_ids, first_set_index, valid_set_count);
+
+					first_set_index = i;
+					valid_set_count = 1;
+					valid_descriptor_ids[0] = dl->state.sets[i].uniform_set_driver_id;
+				} else {
+					// Otherwise, keep storing in the current batch
+					valid_descriptor_ids[valid_set_count] = dl->state.sets[i].uniform_set_driver_id;
+					valid_set_count++;
+				}
+
+				UniformSet *uniform_set = uniform_set_owner.get_or_null(dl->state.sets[i].uniform_set);
+				_uniform_set_update_shared(uniform_set);
+				draw_graph.add_draw_list_usages(uniform_set->draw_trackers, uniform_set->draw_trackers_usage);
+				dl->state.sets[i].bound = true;
+
+				last_set_index = i;
+			} else {
+				draw_graph.add_draw_list_bind_uniform_set(dl->state.pipeline_shader_driver_id, dl->state.sets[i].uniform_set_driver_id, i);
+			}
 		}
 	}
 
+	// Bind the remaining batch
+	if (descriptor_set_batching && valid_set_count > 0) {
+		draw_graph.add_draw_list_bind_uniform_sets(dl->state.pipeline_shader_driver_id, valid_descriptor_ids, first_set_index, valid_set_count);
+	}
+
 	if (p_use_indices) {
 #ifdef DEBUG_ENABLED
 		ERR_FAIL_COND_MSG(p_procedural_vertices > 0,
```
```diff
@@ -4549,6 +4629,22 @@ void RenderingDevice::draw_list_draw_indirect(DrawListID p_list, bool p_use_indi
 	_check_transfer_worker_buffer(buffer);
 }
 
+void RenderingDevice::draw_list_set_viewport(DrawListID p_list, const Rect2 &p_rect) {
+	DrawList *dl = _get_draw_list_ptr(p_list);
+
+	ERR_FAIL_NULL(dl);
+#ifdef DEBUG_ENABLED
+	ERR_FAIL_COND_MSG(!dl->validation.active, "Submitted Draw Lists can no longer be modified.");
+#endif
+
+	if (p_rect.get_area() == 0) {
+		return;
+	}
+
+	dl->viewport = p_rect;
+	draw_graph.add_draw_list_set_viewport(p_rect);
+}
+
 void RenderingDevice::draw_list_enable_scissor(DrawListID p_list, const Rect2 &p_rect) {
 	ERR_RENDER_THREAD_GUARD();
 
```
```diff
@@ -4873,37 +4969,70 @@ void RenderingDevice::compute_list_dispatch(ComputeListID p_list, uint32_t p_x_g
 		}
 	}
 #endif
+	thread_local LocalVector<RDD::UniformSetID> valid_descriptor_ids;
+	valid_descriptor_ids.clear();
+	valid_descriptor_ids.resize(cl->state.set_count);
 
-	// Prepare descriptor sets if the API doesn't use pipeline barriers.
-	if (!driver->api_trait_get(RDD::API_TRAIT_HONORS_PIPELINE_BARRIERS)) {
-		for (uint32_t i = 0; i < cl->state.set_count; i++) {
-			if (cl->state.sets[i].pipeline_expected_format == 0) {
-				// Nothing expected by this pipeline.
-				continue;
-			}
+	uint32_t valid_set_count = 0;
+	uint32_t first_set_index = 0;
+	uint32_t last_set_index = 0;
+	bool found_first_set = false;
 
+	for (uint32_t i = 0; i < cl->state.set_count; i++) {
+		if (cl->state.sets[i].pipeline_expected_format == 0) {
+			// Nothing expected by this pipeline.
+			continue;
+		}
+
+		if (!cl->state.sets[i].bound && !found_first_set) {
+			first_set_index = i;
+			found_first_set = true;
+		}
+		// Prepare descriptor sets if the API doesn't use pipeline barriers.
+		if (!driver->api_trait_get(RDD::API_TRAIT_HONORS_PIPELINE_BARRIERS)) {
 			draw_graph.add_compute_list_uniform_set_prepare_for_use(cl->state.pipeline_shader_driver_id, cl->state.sets[i].uniform_set_driver_id, i);
 		}
 	}
 
 	// Bind descriptor sets.
-	for (uint32_t i = 0; i < cl->state.set_count; i++) {
+	for (uint32_t i = first_set_index; i < cl->state.set_count; i++) {
 		if (cl->state.sets[i].pipeline_expected_format == 0) {
 			continue; // Nothing expected by this pipeline.
 		}
 
 		if (!cl->state.sets[i].bound) {
-			// All good, see if this requires re-binding.
-			draw_graph.add_compute_list_bind_uniform_set(cl->state.pipeline_shader_driver_id, cl->state.sets[i].uniform_set_driver_id, i);
+			// Descriptor set batching
+			if (descriptor_set_batching) {
+				// All good, see if this requires re-binding.
+				if (i - last_set_index > 1) {
+					// If the descriptor sets are not contiguous, bind the previous ones and start a new batch
+					draw_graph.add_compute_list_bind_uniform_sets(cl->state.pipeline_shader_driver_id, valid_descriptor_ids, first_set_index, valid_set_count);
+
+					first_set_index = i;
+					valid_set_count = 1;
+					valid_descriptor_ids[0] = cl->state.sets[i].uniform_set_driver_id;
+				} else {
+					// Otherwise, keep storing in the current batch
+					valid_descriptor_ids[valid_set_count] = cl->state.sets[i].uniform_set_driver_id;
+					valid_set_count++;
+				}
+
+				last_set_index = i;
+			} else {
+				draw_graph.add_compute_list_bind_uniform_set(cl->state.pipeline_shader_driver_id, cl->state.sets[i].uniform_set_driver_id, i);
+			}
 
 			UniformSet *uniform_set = uniform_set_owner.get_or_null(cl->state.sets[i].uniform_set);
 			_uniform_set_update_shared(uniform_set);
 
 			draw_graph.add_compute_list_usages(uniform_set->draw_trackers, uniform_set->draw_trackers_usage);
 
 			cl->state.sets[i].bound = true;
 		}
 	}
 
+	// Bind the remaining batch
+	if (valid_set_count > 0) {
+		draw_graph.add_compute_list_bind_uniform_sets(cl->state.pipeline_shader_driver_id, valid_descriptor_ids, first_set_index, valid_set_count);
+	}
 	draw_graph.add_compute_list_dispatch(p_x_groups, p_y_groups, p_z_groups);
 	cl->state.dispatch_count++;
 }
```
```diff
@@ -4986,37 +5115,68 @@ void RenderingDevice::compute_list_dispatch_indirect(ComputeListID p_list, RID p
 		}
 	}
 #endif
+	thread_local LocalVector<RDD::UniformSetID> valid_descriptor_ids;
+	valid_descriptor_ids.clear();
+	valid_descriptor_ids.resize(cl->state.set_count);
 
-	// Prepare descriptor sets if the API doesn't use pipeline barriers.
-	if (!driver->api_trait_get(RDD::API_TRAIT_HONORS_PIPELINE_BARRIERS)) {
-		for (uint32_t i = 0; i < cl->state.set_count; i++) {
-			if (cl->state.sets[i].pipeline_expected_format == 0) {
-				// Nothing expected by this pipeline.
-				continue;
-			}
+	uint32_t valid_set_count = 0;
+	uint32_t first_set_index = 0;
+	uint32_t last_set_index = 0;
+	bool found_first_set = false;
 
+	for (uint32_t i = 0; i < cl->state.set_count; i++) {
+		if (cl->state.sets[i].pipeline_expected_format == 0) {
+			// Nothing expected by this pipeline.
+			continue;
+		}
+
+		if (!cl->state.sets[i].bound && !found_first_set) {
+			first_set_index = i;
+			found_first_set = true;
+		}
+
+		// Prepare descriptor sets if the API doesn't use pipeline barriers.
+		if (!driver->api_trait_get(RDD::API_TRAIT_HONORS_PIPELINE_BARRIERS)) {
 			draw_graph.add_compute_list_uniform_set_prepare_for_use(cl->state.pipeline_shader_driver_id, cl->state.sets[i].uniform_set_driver_id, i);
 		}
 	}
 
 	// Bind descriptor sets.
-	for (uint32_t i = 0; i < cl->state.set_count; i++) {
+	for (uint32_t i = first_set_index; i < cl->state.set_count; i++) {
 		if (cl->state.sets[i].pipeline_expected_format == 0) {
 			continue; // Nothing expected by this pipeline.
 		}
 
 		if (!cl->state.sets[i].bound) {
-			// All good, see if this requires re-binding.
-			draw_graph.add_compute_list_bind_uniform_set(cl->state.pipeline_shader_driver_id, cl->state.sets[i].uniform_set_driver_id, i);
+			if (i - last_set_index > 1) {
+				// If the descriptor sets are not contiguous, bind the previous ones and start a new batch
+				draw_graph.add_compute_list_bind_uniform_sets(cl->state.pipeline_shader_driver_id, valid_descriptor_ids, first_set_index, valid_set_count);
+
+				first_set_index = i;
+				valid_set_count = 1;
+				valid_descriptor_ids[0] = cl->state.sets[i].uniform_set_driver_id;
+			} else {
+				// Otherwise, keep storing in the current batch
+				valid_descriptor_ids[valid_set_count] = cl->state.sets[i].uniform_set_driver_id;
+				valid_set_count++;
+			}
+
+			last_set_index = i;
 
 			UniformSet *uniform_set = uniform_set_owner.get_or_null(cl->state.sets[i].uniform_set);
 			_uniform_set_update_shared(uniform_set);
 
 			draw_graph.add_compute_list_usages(uniform_set->draw_trackers, uniform_set->draw_trackers_usage);
 
 			cl->state.sets[i].bound = true;
 		}
 	}
 
+	// Bind the remaining batch
+	if (valid_set_count > 0) {
+		draw_graph.add_compute_list_bind_uniform_sets(cl->state.pipeline_shader_driver_id, valid_descriptor_ids, first_set_index, valid_set_count);
+	}
+
 	draw_graph.add_compute_list_dispatch_indirect(buffer->driver_id, p_offset);
 	cl->state.dispatch_count++;
 
```
```diff
@@ -5253,6 +5413,7 @@ void RenderingDevice::_submit_transfer_worker(TransferWorker *p_transfer_worker,
 
 void RenderingDevice::_wait_for_transfer_worker(TransferWorker *p_transfer_worker) {
 	driver->fence_wait(p_transfer_worker->command_fence);
+	driver->command_pool_reset(p_transfer_worker->command_pool);
 	p_transfer_worker->staging_buffer_size_in_use = 0;
 	p_transfer_worker->submitted = false;
 
```
```diff
@@ -5770,7 +5931,8 @@ void RenderingDevice::swap_buffers() {
 
 	// Advance to the next frame and begin recording again.
 	frame = (frame + 1) % frames.size();
-	_begin_frame();
+
+	_begin_frame(true);
 }
 
 void RenderingDevice::submit() {
```
```diff
@@ -5788,7 +5950,7 @@ void RenderingDevice::sync() {
 	ERR_FAIL_COND_MSG(is_main_instance, "Only local devices can submit and sync.");
 	ERR_FAIL_COND_MSG(!local_device_processing, "sync can only be called after a submit");
 
-	_begin_frame();
+	_begin_frame(true);
 	local_device_processing = false;
 }
 
```
```diff
@@ -5892,14 +6054,22 @@ uint64_t RenderingDevice::get_memory_usage(MemoryType p_type) const {
 	}
 }
 
-void RenderingDevice::_begin_frame() {
+void RenderingDevice::_begin_frame(bool p_presented) {
 	// Before beginning this frame, wait on the fence if it was signaled to make sure its work is finished.
 	if (frames[frame].fence_signaled) {
 		driver->fence_wait(frames[frame].fence);
 		frames[frame].fence_signaled = false;
 	}
 
-	update_perf_report();
+	if (command_pool_reset_enabled) {
+		bool reset = driver->command_pool_reset(frames[frame].command_pool);
+		ERR_FAIL_COND(!reset);
+	}
+
+	if (p_presented) {
+		update_perf_report();
+		driver->linear_uniform_set_pools_reset(frame);
+	}
+
 	// Begin recording on the frame's command buffers.
 	driver->begin_segment(frame, frames_drawn++);
```
```diff
@@ -5948,15 +6118,11 @@ void RenderingDevice::_end_frame() {
 	driver->end_segment();
 }
 
-void RenderingDevice::_execute_frame(bool p_present) {
-	// Check whether this frame should present the swap chains and in which queue.
-	const bool frame_can_present = p_present && !frames[frame].swap_chains_to_present.is_empty();
-	const bool separate_present_queue = main_queue != present_queue;
-	thread_local LocalVector<RDD::SwapChainID> swap_chains;
-	swap_chains.clear();
-
-	// Execute command buffers and use semaphores to wait on the execution of the previous one. Normally there's only one command buffer,
-	// but driver workarounds can force situations where there'll be more.
+void RenderingDevice::execute_chained_cmds(bool p_present_swap_chain, RenderingDeviceDriver::FenceID p_draw_fence,
+		RenderingDeviceDriver::SemaphoreID p_dst_draw_semaphore_to_signal) {
+	// Execute command buffers and use semaphores to wait on the execution of the previous one.
+	// Normally there's only one command buffer, but driver workarounds can force situations where
+	// there'll be more.
 	uint32_t command_buffer_count = 1;
 	RDG::CommandBufferPool &buffer_pool = frames[frame].command_buffer_pool;
 	if (buffer_pool.buffers_used > 0) {
```
```diff
@@ -5964,6 +6130,12 @@ void RenderingDevice::_execute_frame(bool p_present) {
 		buffer_pool.buffers_used = 0;
 	}
 
+	thread_local LocalVector<RDD::SwapChainID> swap_chains;
+	swap_chains.clear();
+
+	// Instead of having just one command; we have potentially many (which had to be split due to an
+	// Adreno workaround on mobile, only if the workaround is active). Thus we must execute all of them
+	// and chain them together via semaphores as dependent executions.
 	thread_local LocalVector<RDD::SemaphoreID> wait_semaphores;
 	wait_semaphores = frames[frame].semaphores_to_wait_on;
 
```
```diff
@@ -5973,45 +6145,57 @@ void RenderingDevice::_execute_frame(bool p_present) {
 		RDD::FenceID signal_fence;
 		if (i > 0) {
 			command_buffer = buffer_pool.buffers[i - 1];
-			signal_semaphore = buffer_pool.semaphores[i - 1];
 		} else {
 			command_buffer = frames[frame].command_buffer;
-			signal_semaphore = frames[frame].semaphore;
 		}
 
-		bool signal_semaphore_valid;
 		if (i == (command_buffer_count - 1)) {
-			// This is the last command buffer, it should signal the fence.
-			signal_fence = frames[frame].fence;
-			signal_semaphore_valid = false;
+			// This is the last command buffer, it should signal the semaphore & fence.
+			signal_semaphore = p_dst_draw_semaphore_to_signal;
+			signal_fence = p_draw_fence;
 
-			if (frame_can_present && separate_present_queue) {
-				// The semaphore is required if the frame can be presented and a separate present queue is used.
-				signal_semaphore_valid = true;
-			} else if (frame_can_present) {
+			if (p_present_swap_chain) {
 				// Just present the swap chains as part of the last command execution.
 				swap_chains = frames[frame].swap_chains_to_present;
 			}
 		} else {
+			signal_semaphore = buffer_pool.semaphores[i];
 			// Semaphores always need to be signaled if it's not the last command buffer.
-			signal_semaphore_valid = true;
 		}
 
-		driver->command_queue_execute_and_present(main_queue, wait_semaphores, command_buffer, signal_semaphore_valid ? signal_semaphore : VectorView<RDD::SemaphoreID>(), signal_fence, swap_chains);
+		driver->command_queue_execute_and_present(main_queue, wait_semaphores, command_buffer,
+				signal_semaphore ? signal_semaphore : VectorView<RDD::SemaphoreID>(), signal_fence,
+				swap_chains);
 
 		// Make the next command buffer wait on the semaphore signaled by this one.
 		wait_semaphores.resize(1);
 		wait_semaphores[0] = signal_semaphore;
 	}
 
-	// Indicate the fence has been signaled so the next time the frame's contents need to be used, the CPU needs to wait on the work to be completed.
 	frames[frame].semaphores_to_wait_on.clear();
 }
 
+void RenderingDevice::_execute_frame(bool p_present) {
+	// Check whether this frame should present the swap chains and in which queue.
+	const bool frame_can_present = p_present && !frames[frame].swap_chains_to_present.is_empty();
+	const bool separate_present_queue = main_queue != present_queue;
+
+	// The semaphore is required if the frame can be presented and a separate present queue is used;
+	// since the separate queue will wait for that semaphore before presenting.
+	const RDD::SemaphoreID semaphore = (frame_can_present && separate_present_queue)
+			? frames[frame].semaphore
+			: RDD::SemaphoreID(nullptr);
+	const bool present_swap_chain = frame_can_present && !separate_present_queue;
+
+	execute_chained_cmds(present_swap_chain, frames[frame].fence, semaphore);
+
+	// Indicate the fence has been signaled so the next time the frame's contents need to be
+	// used, the CPU needs to wait on the work to be completed.
+	frames[frame].fence_signaled = true;
 
 	if (frame_can_present) {
 		if (separate_present_queue) {
 			// Issue the presentation separately if the presentation queue is different from the main queue.
-			driver->command_queue_execute_and_present(present_queue, wait_semaphores, {}, {}, {}, frames[frame].swap_chains_to_present);
+			driver->command_queue_execute_and_present(present_queue, frames[frame].semaphore, {}, {}, {}, frames[frame].swap_chains_to_present);
 		}
 
 		frames[frame].swap_chains_to_present.clear();
```