Files
AmosLewis c199ac78eb Add decompose of aten._scaled_dot_product_flash_attention.default
This decomposition was recently added to PyTorch.
Here is the PyTorch PR: https://github.com/pytorch/pytorch/pull/117390
This decomposition is required for lowering the ChatGLM model in torch-mlir.
Here is the issue: https://github.com/llvm/torch-mlir/issues/2730
2024-01-16 03:03:14 +00:00
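
For context, a minimal sketch of the math such a decomposition lowers the fused op to, namely softmax(Q K^T / sqrt(d)) V. The function name and tensor shapes below are illustrative only, not the actual torch-mlir rewrite pattern:

    import math
    import torch

    def sdpa_decomposition(query, key, value):
        # Reference math for scaled dot-product attention:
        # softmax(Q @ K^T / sqrt(head_dim)) @ V
        scale = 1.0 / math.sqrt(query.size(-1))
        scores = torch.matmul(query, key.transpose(-2, -1)) * scale
        attn = torch.softmax(scores, dim=-1)
        return torch.matmul(attn, value)

    # Sanity check against PyTorch's fused kernel (illustrative shapes:
    # batch=2, heads=4, seq_len=16, head_dim=8).
    q, k, v = (torch.randn(2, 4, 16, 8) for _ in range(3))
    ref = torch.nn.functional.scaled_dot_product_attention(q, k, v)
    assert torch.allclose(sdpa_decomposition(q, k, v), ref, atol=1e-5)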