[Backend] Make ConvertTritonGPUToLLVMPass's tmaMetadata a member (#2271)

.. instead of an option.

This partially addresses https://github.com/openai/triton/issues/2265 so
that printing a pass pipeline in textual form no longer crashes.

It is not a proper solution, though: pass results should be stored in the
IR, not returned through a pointer argument.
This commit is contained in:
Christian Sigg
2023-09-11 16:16:54 +02:00
committed by GitHub
parent 3747843143
commit f6828e1a6f
4 changed files with 14 additions and 8 deletions


@@ -27,9 +27,6 @@ def ConvertTritonGPUToLLVM : Pass<"convert-triton-gpu-to-llvm", "mlir::ModuleOp"
        Option<"computeCapability", "compute-capability",
               "int32_t", /*default*/"80",
               "device compute capability">,
-       Option<"tmaMetadata", "tma-metadata",
-              "mlir::triton::gpu::TMAMetadataTy*", /*default*/"nullptr",
-              "tma metadata to the runtime">,
        Option<"target", "target", "enum Target", "mlir::triton::Target::Default",
               "compile for target compatible LLVM",
               "llvm::cl::values("


@@ -21,7 +21,8 @@ enum Target { NVVM, ROCDL, Default = NVVM };
 std::unique_ptr<OperationPass<ModuleOp>> createConvertTritonGPUToLLVMPass();
 std::unique_ptr<OperationPass<ModuleOp>>
-createConvertTritonGPUToLLVMPass(const ConvertTritonGPUToLLVMOptions &options);
+createConvertTritonGPUToLLVMPass(int32_t computeCapability, Target target,
+                                 mlir::triton::gpu::TMAMetadataTy *tmaMetadata);
 } // namespace triton