docs: fix broken links and update is_floating_point (#6023)
* docs: fix broken links and update is_floating_point. Broken links would only show as INFO and not an error.
* make doc anchors warn
@@ -2,7 +2,7 @@
 
 ## Overview
 
-The main aspect of HCQ-compatible runtimes is how they interact with devices. In HCQ, all interactions with devices occur in a hardware-friendly manner using [command queues](#commandqueues). This approach allows commands to be issued directly to devices, bypassing runtime overhead such as HIP or CUDA. Additionally, by using the HCQ API, these runtimes can benefit from various optimizations and features, including [HCQGraph](#hcqgraph) and built-in profiling capabilities.
+The main aspect of HCQ-compatible runtimes is how they interact with devices. In HCQ, all interactions with devices occur in a hardware-friendly manner using [command queues](#command-queues). This approach allows commands to be issued directly to devices, bypassing runtime overhead such as HIP or CUDA. Additionally, by using the HCQ API, these runtimes can benefit from various optimizations and features, including [HCQGraph](#hcqgraph) and built-in profiling capabilities.
 
 ### Command Queues
@@ -97,7 +97,7 @@ Each HCQ-compatible device must allocate two signals for global synchronization
 
 ### HCQ Compatible Allocator
 
-The `HCQAllocator` base class simplifies allocator logic by leveraging [command queues](#commandqueues) abstractions. This class efficiently handles copy and transfer operations, leaving only the alloc and free functions to be implemented by individual backends.
+The `HCQAllocator` base class simplifies allocator logic by leveraging [command queues](#command-queues) abstractions. This class efficiently handles copy and transfer operations, leaving only the alloc and free functions to be implemented by individual backends.
 
 ::: tinygrad.device.HCQAllocator
     options:
@@ -4,7 +4,7 @@
 
 A typical runtime consists of the following parts:
 
-- [Compiled](#device)
+- [Compiled](#compiled)
 - [Allocator](#allocator)
 - [Program](#program)
 - [Compiler](#compiler)
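For orientation, those four pieces are reachable from the public `Device` entry point. A minimal sketch, assuming a recent tinygrad (the default backend string and the `allocator`/`compiler` attribute names may differ between versions):

```python
from tinygrad import Device

# Device.DEFAULT is the backend tinygrad picked for this machine, e.g. "CPU", "CUDA", or "METAL"
print(Device.DEFAULT)

dev = Device[Device.DEFAULT]   # a Compiled instance for the default backend
print(type(dev.allocator))     # that backend's Allocator
print(type(dev.compiler))      # that backend's Compiler
```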
@@ -40,7 +40,7 @@ In tinygrad, you can do [`x.conv2d(w, b)`](tensor/ops.md/#tinygrad.Tensor.conv2d
 
 ### tinygrad is lazy
 
-When you do `a+b` in tinygrad, nothing happens. It's not until you [`realize`](tensor/index.md/#tinygrad.Tensor.realize) the Tensor that the computation actually runs.
+When you do `a+b` in tinygrad, nothing happens. It's not until you [`realize`](tensor/properties.md#tinygrad.Tensor.realize) the Tensor that the computation actually runs.
 
 ### tinygrad requires @TinyJit to be fast
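That laziness is easy to see from the REPL. A minimal sketch, assuming a recent tinygrad:

```python
from tinygrad import Tensor

a = Tensor([1.0, 2.0])
b = Tensor([3.0, 4.0])
c = a + b          # nothing runs yet: c is just a lazy graph node
c = c.realize()    # the kernel is compiled and executed here
print(c.numpy())   # [4. 6.]
```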
@@ -32,6 +32,10 @@ nav:
 #extra_css:
 #- css/tinygrad.css
 
+validation:
+  links:
+    anchors: warn
+
 markdown_extensions:
   - attr_list
   - admonition
@@ -3000,17 +3000,17 @@ class Tensor:
 
   def is_floating_point(self) -> bool:
     """
-    Returns `True` if the tensor contains floating point types, i.e. is one of `dtype.half`, `dtype.float`,
-    `dtype.double`, `dtype.default_float`, `dtype.float16`, `dtype.float32`, `dtype.float64`, `dtype.bfloat16`.
+    Returns `True` if the tensor contains floating point types, i.e. is one of `dtype.float64`, `dtype.float32`,
+    `dtype.float16`, `dtype.bfloat16`.
 
     ```python exec="true" source="above" session="tensor" result="python"
-    t = Tensor([8, 9], dtype=dtypes.float)
+    t = Tensor([8, 9], dtype=dtypes.float32)
     print(t.is_floating_point())
     ```
     """
     return dtypes.is_float(self.dtype)
 
-  def size(self, dim=None) -> Union[sint, Tuple[sint, ...]]:
+  def size(self, dim:Optional[int]=None) -> Union[sint, Tuple[sint, ...]]:
     """
     Return the size of the tensor. If `dim` is specified, return the length along dimension `dim`. Otherwise return the shape of the tensor.
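As a quick check of the behaviour the updated docstring and signature describe, a minimal sketch assuming a recent tinygrad (`dtypes` is exported from the top-level package):

```python
from tinygrad import Tensor, dtypes

t = Tensor([8, 9], dtype=dtypes.int32)
print(t.is_floating_point())   # False: int32 is not a floating point dtype

u = Tensor.ones(3, 4)
print(u.size())    # (3, 4): the full shape when dim is None
print(u.size(1))   # 4: the length along dimension 1
```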