This is just an API change for all the `set_bind_group` calls. Calls
that pass a `Some()` argument should have unchanged behavior. The `None`
cases are left as TODOs.
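A minimal sketch of a call site after this change, assuming a wgpu-style `RenderPass`; the helper and its name are illustrative, only the `Option` argument reflects the change itself:

```rust
use wgpu::{BindGroup, RenderPass};

// Illustrative helper; only the Option passed to set_bind_group
// reflects the API change described above.
fn bind_globals<'a>(pass: &mut RenderPass<'a>, globals: &'a BindGroup) {
    // Some(...) keeps the previous behavior unchanged.
    pass.set_bind_group(0, Some(globals), &[]);
    // pass.set_bind_group(0, None, &[]); // the None case is still a TODO
}
```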
This adds a flag in msl::PipelineOptions that attempts to rewrite all
Metal vertex entry points to use a vertex pulling technique (a usage
sketch follows the list). It does this by:
1) Forcing the _buffer_sizes structure to be generated for all vertex
entry points. The structure has additional buffer_size members that
contain the byte sizes of the vertex buffers.
2) Adding new args to vertex entry points for the vertex id and/or
the instance id and for the bound buffers. If there is an existing
@builtin(vertex_index) or @builtin(instance_index) param, then no
duplicate arg is created.
3) Adding code at the beginning of the function for vertex entry points
to compare the vertex id or instance id against the lengths of all the
bound buffers, and force an early-exit if the bounds are violated.
4) Extracting the raw bytes from the vertex buffer(s) and unpacking
those bytes into the bound attributes with the expected types.
5) Replacing the varyings input and instead using the unpacked
attributes to fill any structs-as-args that are rebuilt in the entry
point.
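A minimal sketch of opting into the transform; the exact field name on `msl::PipelineOptions` and its `Default` impl are assumptions here, not taken from the change itself:

```rust
use naga::back::msl;

fn msl_pipeline_options() -> msl::PipelineOptions {
    // ASSUMPTION: the field name is illustrative; check
    // msl::PipelineOptions for the flag this change actually adds.
    msl::PipelineOptions {
        vertex_pulling_transform: true,
        ..Default::default()
    }
    // The result is then handed to msl::write_string(...) as usual.
}
```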
A new naga test is added which exercises this flag and demonstrates the
effect of the transform. The MSL generated by this test passes
validation.
Eventually this transformation will be the default, always-on behavior
for Metal pipelines, though the flag may remain so that naga
translation tests can be run with and without the transformation.
* Added tests for texture zero init
Sadly, they typically pass even if texture zero init isn't doing its job.
However, they form nice isolated examples for testing texture initialization.
It may be possible to dirty the texture memory beforehand to ensure zero init actually did its job.
* texture init tracker
* tracking texture init requirements for bind groups, transfers and render targets
* texture clears for texture init
* queue submit
* write_texture
* Enforce presence of either render target or copy_dst flag
* also clear render targets using buffer copies
COPY_DST usage is now enforced on all textures
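A sketch of the reasoning behind the COPY_DST enforcement, with an assumed helper name (wgpu-core's actual bookkeeping differs in detail):

```rust
use wgpu::TextureUsages;

// Zero-init clears may fall back to buffer-to-texture copies, so every
// texture needs COPY_DST in its internal usage set, whatever the user
// requested. (Assumed helper; not the actual wgpu-core code.)
fn internal_usages(user_usages: TextureUsages) -> TextureUsages {
    user_usages | TextureUsages::COPY_DST
}
```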
* adjust ImageSubresourceRange.layer_range calculation for 3D textures
init_tracker now has a `discard` function to reset single data points back to uninitialized
uses the new standardized `partition_point` function (sketched below)
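An illustrative sketch of the idea only; names and layout are assumptions, not wgpu-core's actual init tracker. It keeps a sorted list of still-uninitialized ranges and updates it with the standard `partition_point`:

```rust
use std::ops::Range;

#[derive(Debug, Default)]
struct InitTracker {
    /// Sorted, non-overlapping ranges that are still uninitialized.
    uninitialized: Vec<Range<u32>>,
}

impl InitTracker {
    /// Put a single data point back into the uninitialized state,
    /// mirroring the `discard` function described above.
    fn discard(&mut self, pos: u32) {
        // First range that starts beyond `pos`.
        let idx = self.uninitialized.partition_point(|r| r.start <= pos);
        if idx > 0 && self.uninitialized[idx - 1].contains(&pos) {
            return; // already uninitialized
        }
        let grows_prev = idx > 0 && self.uninitialized[idx - 1].end == pos;
        let grows_next = idx < self.uninitialized.len()
            && self.uninitialized[idx].start == pos + 1;
        match (grows_prev, grows_next) {
            // The point bridges two ranges: merge them.
            (true, true) => {
                let next = self.uninitialized.remove(idx);
                self.uninitialized[idx - 1].end = next.end;
            }
            (true, false) => self.uninitialized[idx - 1].end = pos + 1,
            (false, true) => self.uninitialized[idx].start = pos,
            (false, false) => self.uninitialized.insert(idx, pos..pos + 1),
        }
    }
}
```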
* track init state for discarded textures from renderpasses
missing:
* init on the fly if a discard is found within the same command buffer
* handling of discarding only stencil or only depth
* added tests for zero init after discard
* discarded surfaces are now tracked in a separate struct, piping all inits through a utility function
this allows resolving discard/init_action interactions
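A hypothetical shape for that bookkeeping; the names and fields are assumptions for illustration, not the real wgpu-core types:

```rust
/// Stand-in id type; wgpu-core uses its own id machinery.
type TextureId = u64;

/// One surface (mip level + array layer) that was discarded and must
/// therefore be treated as uninitialized again.
struct TextureSurfaceDiscard {
    texture: TextureId,
    mip_level: u32,
    layer: u32,
}

/// Collects a command buffer's discards so the utility function can
/// resolve discard/init_action interactions in one place.
#[derive(Default)]
struct SurfacesInDiscardState(Vec<TextureSurfaceDiscard>);
```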
* Move various memory init code to separate mod
CommandBufferTextureMemoryActions is now fully encapsulated
* implemented discard init fixups for everything but render passes
* render passes now also cause discard fixups
* fixup_discarded_surfaces now takes an iterator instead of a Drain
* Add memory init test for discarding depth targets
* handle divergently discarded depth/stencil target
* comment & clippy fixes
* fix collect_zero_buffer_copies_for_clear_texture yielding block-breaking copies
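Assuming "blocks" here refers to a format's texel blocks (e.g. 4x4 for BC formats), the alignment rule behind the fix can be sketched as follows; the helper is hypothetical, not the actual function:

```rust
/// Round a copy extent up to a whole number of texel blocks so a
/// zeroing copy never splits a block.
/// e.g. align_extent_to_blocks(10, 4) == 12 for 4x4 BC blocks.
fn align_extent_to_blocks(extent: u32, block_dim: u32) -> u32 {
    extent.div_ceil(block_dim) * block_dim
}
```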
* [pr feedback] minor cleanup in zero_init_texture_after_discard, `use` hygiene
* [pr feedback] fix bug in ImageSubresourceRange range utils
* [pr feedback] fix texture tracker check, bundle transition_texture on init, cleanups
* Implemented Drop for InitTrackerDrain
* remove incorrect comment about extents in add_pass_texture_init_actions
* Fix unit test & clippy issues in init_tracker