Timing and Context can now be used as decorators (#1385)

* Context and Timing can now be used as decorators

* Using Timing decorator in quickstart.md

The time formatting is better, and Timing is a useful tool to learn.

Old: Time: 3.5260659999912605
New: Time: 3526.14 ms
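The formatting change amounts to rendering the elapsed seconds as milliseconds with two decimal places. A minimal sketch of such a conversion (`format_time` is a hypothetical helper for illustration, not tinygrad's actual code):

```python
def format_time(seconds: float) -> str:
    # Render a duration in milliseconds with two decimal places, e.g. "3526.14 ms".
    return f"{seconds * 1e3:.2f} ms"

# The old raw float-seconds value becomes readable:
print(format_time(3.5260659999912605))
```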

* Updated env_vars documentation for Context

* Added test for Context decorator

* Put new import on same line as others
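The dual context-manager/decorator behavior described in the title is the pattern `contextlib.ContextDecorator` provides in the standard library. The sketch below is a minimal stand-in for illustration, not tinygrad's actual `Timing` implementation:

```python
import time
from contextlib import ContextDecorator

class Timing(ContextDecorator):
    # Stand-in: prints "<prefix><elapsed> ms" when the block or call finishes.
    def __init__(self, prefix=""):
        self.prefix = prefix
    def __enter__(self):
        self.st = time.perf_counter_ns()
        return self
    def __exit__(self, *exc):
        et = time.perf_counter_ns() - self.st
        print(f"{self.prefix}{et * 1e-6:.2f} ms")
        return False

# context-manager form, as used in quickstart.md
with Timing("Time: "):
    sum(range(100_000))

# decorator form enabled by the same class
@Timing("Time: ")
def evaluate():
    return sum(range(100_000))

evaluate()
```

Inheriting from `ContextDecorator` is what lets a single class support both spellings: the decorator form simply wraps each call in `with self:`.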
Author: Alex Telon
Date: 2023-08-02 02:16:10 +02:00 (committed by GitHub)
Commit: b66361843a (parent d9d1372dd0)
4 changed files with 58 additions and 36 deletions


@@ -9,7 +9,7 @@ We need some imports to get started:
 ```python
 import numpy as np
-import time
+from tinygrad.helpers import Timing
 ```
 ## Tensors
@@ -221,23 +221,22 @@ We will be using the same batch size of 64 and will be evaluating for 1000 of th
 # set training flag to false
 Tensor.training = False
-st = time.perf_counter()
-avg_acc = 0
-for step in range(1000):
-  # random sample a batch
-  samp = np.random.randint(0, X_test.shape[0], size=(64))
-  batch = Tensor(X_test[samp], requires_grad=False)
-  # get the corresponding labels
-  labels = Y_test[samp]
+with Timing("Time: "):
+  avg_acc = 0
+  for step in range(1000):
+    # random sample a batch
+    samp = np.random.randint(0, X_test.shape[0], size=(64))
+    batch = Tensor(X_test[samp], requires_grad=False)
+    # get the corresponding labels
+    labels = Y_test[samp]
 
-  # forward pass
-  out = net(batch)
+    # forward pass
+    out = net(batch)
 
-  # calculate accuracy
-  pred = np.argmax(out.numpy(), axis=-1)
-  avg_acc += (pred == labels).mean()
-print(f"Test Accuracy: {avg_acc / 1000}")
-print(f"Time: {time.perf_counter() - st}")
+    # calculate accuracy
+    pred = np.argmax(out.numpy(), axis=-1)
+    avg_acc += (pred == labels).mean()
+  print(f"Test Accuracy: {avg_acc / 1000}")
 ```
## And that's it
@@ -266,23 +265,22 @@ from tinygrad.jit import TinyJit
 def jit(x):
   return net(x).realize()
-st = time.perf_counter()
-avg_acc = 0
-for step in range(1000):
-  # random sample a batch
-  samp = np.random.randint(0, X_test.shape[0], size=(64))
-  batch = Tensor(X_test[samp], requires_grad=False)
-  # get the corresponding labels
-  labels = Y_test[samp]
+with Timing("Time: "):
+  avg_acc = 0
+  for step in range(1000):
+    # random sample a batch
+    samp = np.random.randint(0, X_test.shape[0], size=(64))
+    batch = Tensor(X_test[samp], requires_grad=False)
+    # get the corresponding labels
+    labels = Y_test[samp]
 
-  # forward pass with jit
-  out = jit(batch)
+    # forward pass with jit
+    out = jit(batch)
 
-  # calculate accuracy
-  pred = np.argmax(out.numpy(), axis=-1)
-  avg_acc += (pred == labels).mean()
-print(f"Test Accuracy: {avg_acc / 1000}")
-print(f"Time: {time.perf_counter() - st}")
+    # calculate accuracy
+    pred = np.argmax(out.numpy(), axis=-1)
+    avg_acc += (pred == labels).mean()
+  print(f"Test Accuracy: {avg_acc / 1000}")
 ```
You will find that the evaluation time is much faster than before and that your accelerator utilization is much higher.
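The Context half of this change is not visible in the quickstart diff. As a rough sketch of the same pattern applied to configuration overrides (a hypothetical stand-in that swaps values in a plain dict, not tinygrad's actual `Context`, which manages its own context variables):

```python
from contextlib import ContextDecorator

class Context(ContextDecorator):
    # Stand-in: temporarily overrides entries in a settings dict,
    # restoring the previous values on exit.
    settings = {"DEBUG": 0}

    def __init__(self, **overrides):
        self.overrides = overrides

    def __enter__(self):
        self.saved = {k: Context.settings[k] for k in self.overrides}
        Context.settings.update(self.overrides)
        return self

    def __exit__(self, *exc):
        Context.settings.update(self.saved)
        return False

@Context(DEBUG=2)
def noisy():
    return Context.settings["DEBUG"]

print(noisy())                    # -> 2, override active inside the call
print(Context.settings["DEBUG"])  # -> 0, restored afterwards
```

Because `ContextDecorator` enters and exits the context around every call, the decorated function sees the overrides only for its own duration.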