mirror of
https://github.com/tinygrad/tinygrad.git
synced 2026-01-09 15:08:02 -05:00
remove ScheduleItem and merge it with ExecItem (#13759)
* remove ExecItem and merge it with ScheduleItem
* less diff
* fix issues
* min diff
* don't change bufs in _lower
* min diff
* update
* revert
* fixes
* diff
@@ -38,25 +38,19 @@ optim.schedule_step() # this will step the optimizer without running realize
 # The weight Tensors have been assigned to, but not yet realized. Everything is still lazy at this point
 # l1.uop and l2.uop define a computation graph
 
-from tinygrad.engine.schedule import ScheduleItem
-schedule: List[ScheduleItem] = Tensor.schedule(l1, l2)
+from tinygrad.engine.schedule import ExecItem
+schedule: List[ExecItem] = Tensor.schedule(l1, l2)
 
 print(f"The schedule contains {len(schedule)} items.")
 for si in schedule: print(str(si)[:80])
 
 # *****
-# 4. Lower a schedule.
+# 4. Lower and run the schedule.
 
-from tinygrad.engine.realize import lower_schedule_item, ExecItem
-lowered: List[ExecItem] = [lower_schedule_item(si) for si in tqdm(schedule)]
-
-# *****
-# 5. Run the schedule
-
-for ei in tqdm(lowered): ei.run()
+for si in tqdm(schedule): si.run()
 
 # *****
-# 6. Print the weight change
+# 5. Print the weight change
 
 print("first weight change\n", l1.numpy()-l1n)
 print("second weight change\n", l2.numpy()-l2n)
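The user-facing effect of this hunk is that the items returned by `Tensor.schedule` can now be run directly, with lowering folded into `run()`. The shape of that API can be sketched with a toy stand-in (plain Python, no tinygrad; the names `ExecItem`, `ast`, `bufs`, `prg`, and `run` come from this diff, everything else here is illustrative, not tinygrad's real implementation):

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class ExecItem:
    # ast: what compute to run; bufs: which buffers to run it on (per the docs)
    ast: Callable[[List[float]], List[float]]
    bufs: List[List[float]]
    prg: Optional[Callable] = None  # populated by lowering, not by the scheduler

    def lower(self) -> None:
        # stand-in for realize/codegen: turn the ast into a runnable program once
        if self.prg is None:
            ast = self.ast
            self.prg = lambda bufs: bufs.__setitem__(0, ast(bufs[0]))

    def run(self) -> None:
        # post-change flow: one call both lowers (if needed) and executes
        self.lower()
        self.prg(self.bufs)

# a two-"kernel" schedule, loosely analogous to `Tensor.schedule(l1, l2)`
schedule = [
    ExecItem(ast=lambda xs: [x + 1 for x in xs], bufs=[[1.0, 2.0]]),
    ExecItem(ast=lambda xs: [x * 2 for x in xs], bufs=[[3.0]]),
]
for si in schedule: si.run()  # mirrors `for si in tqdm(schedule): si.run()`
```

Before the change, the explicit `lower_schedule_item` step produced the runnable objects; after it, each schedule item lazily acquires its program the first time it runs.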
@@ -17,15 +17,15 @@ The `UOp` graph specifies the compute in terms of low level tinygrad ops. Not al
 
 ## Scheduling
 
-The [scheduler](https://github.com/tinygrad/tinygrad/tree/master/tinygrad/engine/schedule.py) converts the graph of UOps into a list of `ScheduleItem`. One `ScheduleItem` is one kernel on the GPU, and the scheduler is responsible for breaking the large compute graph into subgraphs that can fit in a kernel. `ast` specifies what compute to run, and `bufs` specifies what buffers to run it on.
+The [scheduler](https://github.com/tinygrad/tinygrad/tree/master/tinygrad/engine/schedule.py) converts the graph of UOps into a list of `ExecItem`. One `ExecItem` is one kernel on the GPU, and the scheduler is responsible for breaking the large compute graph into subgraphs that can fit in a kernel. `ast` specifies what compute to run, and `bufs` specifies what buffers to run it on.
 
-::: tinygrad.engine.schedule.ScheduleItem
+::: tinygrad.engine.schedule.ExecItem
 
 ## Lowering
 
-The code in [realize](https://github.com/tinygrad/tinygrad/tree/master/tinygrad/engine/realize.py) lowers `ScheduleItem` to `ExecItem` with
+The code in [realize](https://github.com/tinygrad/tinygrad/tree/master/tinygrad/engine/realize.py) lowers `ExecItem` by populating its `prg` field with
 
-::: tinygrad.engine.realize.lower_schedule
+::: tinygrad.engine.realize.run_schedule
 
 There's a ton of complexity hidden behind this, see the `codegen/` directory.
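The docs hunk above keeps the claim that the scheduler's job is breaking one large compute graph into per-kernel subgraphs. That idea can be caricatured in a few lines (a toy over a linear chain of op names; tinygrad's real fusion rules live in `schedule.py` and are far more involved, so treat the cut-at-REDUCE rule as purely illustrative):

```python
from typing import List

def schedule_chain(ops: List[str]) -> List[List[str]]:
    """Split a chain of ops into kernel-sized groups, cutting at fusion
    boundaries. Here the only boundary is a REDUCE, which ends its kernel."""
    kernels, current = [], []
    for op in ops:
        current.append(op)
        if op == "REDUCE":
            kernels.append(current)
            current = []
    if current:
        kernels.append(current)
    return kernels

# one "graph", three resulting kernels
kernels = schedule_chain(["LOAD", "MUL", "ADD", "REDUCE", "ADD", "REDUCE", "NEG"])
print(len(kernels))  # 3
```

Each resulting group plays the role of one schedule item: one kernel's worth of compute, later lowered to a runnable program.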