CLANG -> CPU (#9189)

chenyu
2025-02-20 18:03:09 -05:00
committed by GitHub
parent f986e12f91
commit 2e7c2780a9
51 changed files with 149 additions and 147 deletions

View File

@@ -7,7 +7,7 @@
print("******** first, the runtime ***********")
-from tinygrad.runtime.ops_clang import ClangJITCompiler, MallocAllocator, CPUProgram
+from tinygrad.runtime.ops_cpu import ClangJITCompiler, MallocAllocator, CPUProgram
# allocate some buffers
out = MallocAllocator.alloc(4)
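
This hunk is from tinygrad's low-level abstractions walkthrough; only the import path changes. A condensed sketch of the surrounding code after the rename might look like the following (the `_copyin`/`_as_buffer` helper names follow that walkthrough and may differ between tinygrad versions):

```python
from tinygrad.runtime.ops_cpu import ClangJITCompiler, MallocAllocator, CPUProgram

# allocate three raw 4-byte buffers (one int32 each)
out = MallocAllocator.alloc(4)
a = MallocAllocator.alloc(4)
b = MallocAllocator.alloc(4)

# copy input values in (little-endian int32)
MallocAllocator._copyin(a, memoryview(bytearray([2, 0, 0, 0])))
MallocAllocator._copyin(b, memoryview(bytearray([3, 0, 0, 0])))

# compile a tiny C kernel to machine code, wrap it in a runnable program, and call it
lib = ClangJITCompiler().compile("void add(int *out, int *a, int *b) { out[0] = a[0] + b[0]; }")
fxn = CPUProgram("add", lib)
fxn(out, a, b)

# copy the result back out and check it
val = MallocAllocator._as_buffer(out).cast("I").tolist()[0]
assert val == 5
```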
@@ -34,7 +34,7 @@ assert val == 5
print("******** second, the Device ***********")
DEVICE = "CLANG" # NOTE: you can change this!
DEVICE = "CPU" # NOTE: you can change this!
import struct
from tinygrad.dtype import dtypes
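
The next section of the same walkthrough does the identical add through the `Device` abstraction. A rough sketch under the new name, assuming `Device[...]` exposes `allocator`, `compiler`, and `runtime` (and the same `_copyin`/`_as_buffer` helpers) as in that walkthrough:

```python
import struct
from tinygrad import Device

DEVICE = "CPU"   # after this commit; previously "CLANG"
dev = Device[DEVICE]

# same add as above, but every piece is looked up through the Device object
out, a, b = dev.allocator.alloc(4), dev.allocator.alloc(4), dev.allocator.alloc(4)
dev.allocator._copyin(a, memoryview(bytearray(struct.pack("I", 2))))
dev.allocator._copyin(b, memoryview(bytearray(struct.pack("I", 3))))

lib = dev.compiler.compile("void add(int *out, int *a, int *b) { out[0] = a[0] + b[0]; }")
fxn = dev.runtime("add", lib)   # for the CPU device this is CPUProgram
fxn(out, a, b)
assert dev.allocator._as_buffer(out).cast("I").tolist()[0] == 5
```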
@@ -90,7 +90,7 @@ out = a.alu(Ops.ADD, b)
# schedule the computation as a list of kernels
sched, _, becomes_map = create_schedule_with_vars(out.sink())
-for si in sched: print(si.ast.op) # NOTE: the first two convert it to CLANG
+for si in sched: print(si.ast.op) # NOTE: the first two convert it to CPU
# NOTE: UOps are no longer mutable, the scheduler gives you a map to lookup which BUFFER the result was written to
out = becomes_map[out]
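
The hunk above uses scheduler internals (`create_schedule_with_vars`, `becomes_map`). A hedged, higher-level way to see the same schedule from the Tensor API, assuming `Tensor.schedule()` is available in this version:

```python
from tinygrad import Tensor

out = Tensor([2]) + Tensor([3])
# each ScheduleItem is one kernel (or copy) that will later run on the CPU device
for si in out.schedule(): print(si.ast.op)
```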

View File

@@ -38,7 +38,7 @@ The `Allocator` class is responsible for managing memory on the device. There is
The `Program` class is created for each loaded program. It is responsible for executing the program on the device. As an example, here is a `CPUProgram` implementation which loads the program and runs it.
-::: tinygrad.runtime.ops_clang.CPUProgram
+::: tinygrad.runtime.ops_cpu.CPUProgram
options:
members: true
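
For readers of this page, an illustrative (not authoritative) sketch of the shape a `Program` implementation takes, inferred from how `CPUProgram` is constructed with a name plus compiled bytes and then called with raw buffers; check tinygrad/device.py for the exact signature in your version:

```python
class MyProgram:
  """Illustrative skeleton only; the keyword arguments are assumptions."""
  def __init__(self, name: str, lib: bytes):
    # e.g. mmap `lib` as executable memory, or hand it to a driver
    self.name, self.lib = name, lib
  def __call__(self, *bufs, vals=(), wait=False):
    # dispatch the kernel over the raw device buffers; report timing when wait=True
    raise NotImplementedError
```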

View File

@@ -31,13 +31,13 @@ These control the behavior of core tinygrad even when used as a library.
Variable | Possible Value(s) | Description
---|---|---
DEBUG | [1-6] | enable debugging output, with 4 you get operations, timings, speed, generated code and more
-GPU | [1] | enable the GPU backend
+GPU | [1] | enable the GPU (OpenCL) backend
CUDA | [1] | enable CUDA backend
AMD | [1] | enable AMD backend
NV | [1] | enable NV backend
METAL | [1] | enable Metal backend (for Mac M1 and after)
METAL_XCODE | [1] | enable Metal using macOS Xcode SDK
-CLANG | [1] | enable Clang backend
+CPU | [1] | enable CPU (Clang) backend
LLVM | [1] | enable LLVM backend
BEAM | [#] | number of beams in kernel beam search
DEFAULT_FLOAT | [HALF, ...] | specify the default float dtype (FLOAT32, HALF, BFLOAT16, FLOAT64, ...), defaults to FLOAT32
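
A minimal sketch of forcing the renamed backend from Python (these variables are normally passed on the command line, and must be set before tinygrad is imported):

```python
import os
os.environ["CPU"] = "1"    # force the CPU (Clang) backend; was CLANG=1 before this commit
os.environ["DEBUG"] = "2"  # more verbose timing output

from tinygrad import Tensor, Device
print(Device.DEFAULT)      # expected to print CPU
print((Tensor([1.0, 2.0]) + Tensor([3.0, 4.0])).numpy())
```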

View File

@@ -17,7 +17,7 @@ from tinygrad import Device
print(Device.DEFAULT)
```
-You will see `CUDA` here on a GPU instance, or `CLANG` here on a CPU instance.
+You will see `CUDA` here on a GPU instance, or `CPU` here on a CPU instance.
## A simple model
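
Whatever the default is, a backend can also be requested per tensor; a small sketch using the renamed device string:

```python
from tinygrad import Tensor

t = Tensor([1, 2, 3], device="CPU")  # was device="CLANG" before this commit
print(t.device)                      # CPU
```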

View File

@@ -1,6 +1,6 @@
# Runtimes
-tinygrad supports various runtimes, enabling your code to scale across a wide range of devices. The default runtime can be automatically selected based on the available hardware, or you can force a specific runtime to be default using environment variables (e.g., `CLANG=1`).
+tinygrad supports various runtimes, enabling your code to scale across a wide range of devices. The default runtime can be automatically selected based on the available hardware, or you can force a specific runtime to be default using environment variables (e.g., `CPU=1`).
| Runtime | Description | Requirements |
|---------|-------------|--------------|
@@ -10,6 +10,6 @@ tinygrad supports various runtimes, enabling your code to scale across a wide ra
| [METAL](https://github.com/tinygrad/tinygrad/tree/master/tinygrad/runtime/ops_metal.py) | Utilizes Metal for acceleration on Apple devices | M1+ Macs; Metal 3.0+ for `bfloat` support |
| [CUDA](https://github.com/tinygrad/tinygrad/tree/master/tinygrad/runtime/ops_cuda.py) | Utilizes CUDA for acceleration on NVIDIA GPUs | NVIDIA GPU with CUDA support |
| [GPU (OpenCL)](https://github.com/tinygrad/tinygrad/tree/master/tinygrad/runtime/ops_gpu.py) | Accelerates computations using OpenCL on GPUs | OpenCL 2.0 compatible device |
-| [CLANG (C Code)](https://github.com/tinygrad/tinygrad/tree/master/tinygrad/runtime/ops_clang.py) | Runs on CPU using the clang compiler | `clang` compiler in system `PATH` |
+| [CPU (C Code)](https://github.com/tinygrad/tinygrad/tree/master/tinygrad/runtime/ops_cpu.py) | Runs on CPU using the clang compiler | `clang` compiler in system `PATH` |
| [LLVM (LLVM IR)](https://github.com/tinygrad/tinygrad/tree/master/tinygrad/runtime/ops_llvm.py) | Runs on CPU using the LLVM compiler infrastructure | llvm libraries installed and findable |
| [WEBGPU](https://github.com/tinygrad/tinygrad/tree/master/tinygrad/runtime/ops_webgpu.py) | Runs on GPU using the Dawn WebGPU engine (used in Google Chrome) | Dawn library installed and findable. Download binaries [here](https://github.com/wpmed92/pydawn/releases/tag/v0.1.6). |
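
A short sketch of picking one of these runtimes explicitly and moving data between them (the `.to()` call assumes a second runtime, e.g. CUDA, is actually available on the machine):

```python
from tinygrad import Tensor

t = Tensor.ones(4, device="CPU")  # the renamed C/clang runtime
print(t.device, t.numpy())
# t = t.to("CUDA")                # move to another runtime if one is present
```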