//! This example assumes that you've seen hello-compute and/or repeated-compute
//! and thus have a general understanding of what's going on here.
//!
//! The README has an explainer on what exactly this example does, what
//! workgroups are, and the meaning of `@workgroup_size(size_x, size_y, size_z)`.
//! Also see the comments in shader.wgsl.
//!
//! Only parts specific to this example will be commented.

use wgpu::util::DeviceExt;

async fn run() {
    let mut local_a = [0i32; 100];
    for (i, e) in local_a.iter_mut().enumerate() {
        *e = i as i32;
    }
    log::info!("Input a: {local_a:?}");
    let mut local_b = [0i32; 100];
    for (i, e) in local_b.iter_mut().enumerate() {
        *e = i as i32 * 2;
    }
    log::info!("Input b: {local_b:?}");

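    // Standard wgpu setup: instance -> adapter -> device/queue. Downlevel
    // default limits are requested so the example also runs on less capable
    // (downlevel) adapters.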
    let instance = wgpu::Instance::default();
    let adapter = instance
        .request_adapter(&wgpu::RequestAdapterOptions::default())
        .await
        .unwrap();
    let (device, queue) = adapter
        .request_device(
            &wgpu::DeviceDescriptor {
                label: None,
                features: wgpu::Features::empty(),
                limits: wgpu::Limits::downlevel_defaults(),
            },
            None,
        )
        .await
        .unwrap();

    let shader = device.create_shader_module(wgpu::ShaderModuleDescriptor {
        label: None,
        source: wgpu::ShaderSource::Wgsl(std::borrow::Cow::Borrowed(include_str!("shader.wgsl"))),
    });

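    // Both input arrays live in STORAGE buffers that the shader reads and
    // writes in place; COPY_SRC lets us copy the results back out afterwards.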
    let storage_buffer_a = device.create_buffer_init(&wgpu::util::BufferInitDescriptor {
        label: None,
        contents: bytemuck::cast_slice(&local_a[..]),
        usage: wgpu::BufferUsages::STORAGE | wgpu::BufferUsages::COPY_SRC,
    });
    let storage_buffer_b = device.create_buffer_init(&wgpu::util::BufferInitDescriptor {
        label: None,
        contents: bytemuck::cast_slice(&local_b[..]),
        usage: wgpu::BufferUsages::STORAGE | wgpu::BufferUsages::COPY_SRC,
    });
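    // One mappable staging buffer is shared for reading back both storage
    // buffers. MAP_READ generally only combines with COPY_DST, so results are
    // first copied into this buffer and then mapped.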
    let output_staging_buffer = device.create_buffer(&wgpu::BufferDescriptor {
        label: None,
        size: std::mem::size_of_val(&local_a) as u64,
        usage: wgpu::BufferUsages::COPY_DST | wgpu::BufferUsages::MAP_READ,
        mapped_at_creation: false,
    });

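    // Both bindings are read-write storage buffers (`read_only: false`) since
    // the shader writes its results back into the same buffers it reads from.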
    let bind_group_layout = device.create_bind_group_layout(&wgpu::BindGroupLayoutDescriptor {
        label: None,
        entries: &[
            wgpu::BindGroupLayoutEntry {
                binding: 0,
                visibility: wgpu::ShaderStages::COMPUTE,
                ty: wgpu::BindingType::Buffer {
                    ty: wgpu::BufferBindingType::Storage { read_only: false },
                    has_dynamic_offset: false,
                    min_binding_size: None,
                },
                count: None,
            },
            wgpu::BindGroupLayoutEntry {
                binding: 1,
                visibility: wgpu::ShaderStages::COMPUTE,
                ty: wgpu::BindingType::Buffer {
                    ty: wgpu::BufferBindingType::Storage { read_only: false },
                    has_dynamic_offset: false,
                    min_binding_size: None,
                },
                count: None,
            },
        ],
    });
    let bind_group = device.create_bind_group(&wgpu::BindGroupDescriptor {
        label: None,
        layout: &bind_group_layout,
        entries: &[
            wgpu::BindGroupEntry {
                binding: 0,
                resource: storage_buffer_a.as_entire_binding(),
            },
            wgpu::BindGroupEntry {
                binding: 1,
                resource: storage_buffer_b.as_entire_binding(),
            },
        ],
    });

    let pipeline_layout = device.create_pipeline_layout(&wgpu::PipelineLayoutDescriptor {
        label: None,
        bind_group_layouts: &[&bind_group_layout],
        push_constant_ranges: &[],
    });
    let pipeline = device.create_compute_pipeline(&wgpu::ComputePipelineDescriptor {
        label: None,
        layout: Some(&pipeline_layout),
        module: &shader,
        entry_point: "main",
    });

    //----------------------------------------------------------

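    // Record the compute pass. Nothing runs on the GPU until the finished
    // command buffer is submitted to the queue below.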
    let mut command_encoder =
        device.create_command_encoder(&wgpu::CommandEncoderDescriptor { label: None });
    {
        let mut compute_pass = command_encoder.begin_compute_pass(&wgpu::ComputePassDescriptor {
            label: None,
            timestamp_writes: None,
        });
        compute_pass.set_pipeline(&pipeline);
        compute_pass.set_bind_group(0, &bind_group, &[]);
        // Note that since each workgroup will cover both arrays, we only need to
        // cover the length of one array.
        compute_pass.dispatch_workgroups(local_a.len() as u32, 1, 1);
    }
    queue.submit(Some(command_encoder.finish()));

    //----------------------------------------------------------

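    // Read the GPU results back into the local arrays, one buffer at a time,
    // through the shared staging buffer.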
    get_data(
        &mut local_a[..],
        &storage_buffer_a,
        &output_staging_buffer,
        &device,
        &queue,
    )
    .await;
    get_data(
        &mut local_b[..],
        &storage_buffer_b,
        &output_staging_buffer,
        &device,
        &queue,
    )
    .await;

log::info!("Output in A: {local_a:?}");
|
|
log::info!("Output in B: {local_b:?}");
|
|
}
|
|
|
|
async fn get_data<T: bytemuck::Pod>(
    output: &mut [T],
    storage_buffer: &wgpu::Buffer,
    staging_buffer: &wgpu::Buffer,
    device: &wgpu::Device,
    queue: &wgpu::Queue,
) {
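    // Copy the results out of the storage buffer into the mappable staging
    // buffer so they can be read on the CPU.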
    let mut command_encoder =
        device.create_command_encoder(&wgpu::CommandEncoderDescriptor { label: None });
    command_encoder.copy_buffer_to_buffer(
        storage_buffer,
        0,
        staging_buffer,
        0,
        std::mem::size_of_val(output) as u64,
    );
    queue.submit(Some(command_encoder.finish()));
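    // Map the staging buffer for reading: `map_async` only queues the request,
    // `device.poll` drives it to completion on native, and the oneshot channel
    // lets us await the result so the same code also works on the web.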
    let buffer_slice = staging_buffer.slice(..);
    let (sender, receiver) = futures_intrusive::channel::shared::oneshot_channel();
    buffer_slice.map_async(wgpu::MapMode::Read, move |r| sender.send(r).unwrap());
    device.poll(wgpu::Maintain::Wait);
    receiver.receive().await.unwrap().unwrap();
    output.copy_from_slice(bytemuck::cast_slice(&buffer_slice.get_mapped_range()[..]));
    staging_buffer.unmap();
}

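// On native we can block the thread on the async `run` with pollster; on the
// web the future is spawned onto the browser's event loop and log output goes
// to the JavaScript console.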
fn main() {
    #[cfg(not(target_arch = "wasm32"))]
    {
        env_logger::builder()
            .filter_level(log::LevelFilter::Info)
            .format_timestamp_nanos()
            .init();
        pollster::block_on(run());
    }
    #[cfg(target_arch = "wasm32")]
    {
        std::panic::set_hook(Box::new(console_error_panic_hook::hook));
        console_log::init_with_level(log::Level::Info).expect("could not initialize logger");

        wgpu_example::utils::add_web_nothing_to_see_msg();

        wasm_bindgen_futures::spawn_local(run());
    }
}