Elementwise
Elementwise ops operate on a per-element basis; they don't change the shape of the tensor.
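For example, a quick shape check (a minimal sketch using relu, which is documented below; Tensor is assumed imported from tinygrad, as in the examples that follow):
t = Tensor([[1., -2.], [-3., 4.]])
print(t.relu().shape)
(2, 2)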
Unary Ops (math)¤
logical_not¤
logical_not() -> Tensor
Computes the logical NOT of the tensor element-wise.
print(Tensor([False, True]).logical_not().numpy())
[ True False]
Source code in tinygrad/tensor.py
neg¤
neg() -> Tensor
Negates the tensor element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).neg().numpy())
[ 3. 2. 1. -0. -1. -2. -3.]
Source code in tinygrad/tensor.py
log¤
log() -> Tensor
Computes the natural logarithm element-wise.
See: https://en.wikipedia.org/wiki/Logarithm
print(Tensor([1., 2., 4., 8.]).log().numpy())
[0. 0.6931 1.3863 2.0794]
Source code in tinygrad/tensor.py
log2¤
log2() -> Tensor
Computes the base-2 logarithm element-wise.
See: https://en.wikipedia.org/wiki/Logarithm
print(Tensor([1., 2., 4., 8.]).log2().numpy())
[0. 1. 2. 3.]
Source code in tinygrad/tensor.py
log10¤
log10() -> Tensor
Computes the base-10 logarithm element-wise.
See: https://en.wikipedia.org/wiki/Logarithm
print(Tensor([1., 2., 4., 8.]).log10().numpy())
[0. 0.301 0.6021 0.9031]
Source code in tinygrad/tensor.py
exp¤
exp() -> Tensor
Computes the exponential function element-wise.
See: https://en.wikipedia.org/wiki/Exponential_function
print(Tensor([0., 1., 2., 3.]).exp().numpy())
[ 1. 2.7183 7.3891 20.0855]
Source code in tinygrad/tensor.py
exp2¤
exp2() -> Tensor
Computes the base-2 exponential function element-wise.
See: https://en.wikipedia.org/wiki/Exponential_function
print(Tensor([0., 1., 2., 3.]).exp2().numpy())
[1. 2. 4. 8.]
Source code in tinygrad/tensor.py
sqrt¤
sqrt() -> Tensor
Computes the square root of the tensor element-wise.
print(Tensor([1., 2., 3., 4.]).sqrt().numpy())
[1. 1.4142 1.7321 2. ]
Source code in tinygrad/tensor.py
rsqrt¤
rsqrt()
Computes the reciprocal of the square root of the tensor element-wise.
print(Tensor([1., 2., 3., 4.]).rsqrt().numpy())
[1. 0.7071 0.5774 0.5 ]
Source code in tinygrad/mixin/math.py
sin¤
sin() -> Tensor
Computes the sine of the tensor element-wise.
print(Tensor([0., math.pi/2, math.pi, 3*math.pi/2, 2*math.pi]).sin().numpy())
[ 0. 1. -0. -1. 0.]
Source code in tinygrad/tensor.py
cos¤
cos() -> Tensor
Computes the cosine of the tensor element-wise.
print(Tensor([0., math.pi/2, math.pi, 3*math.pi/2, 2*math.pi]).cos().numpy())
[ 1.0000e+00 0.0000e+00 -1.0000e+00 -2.3842e-07 1.0000e+00]
Source code in tinygrad/tensor.py
tan¤
tan() -> Tensor
Computes the tangent of the tensor element-wise.
print(Tensor([0., math.pi/4, math.pi/2, 3*math.pi/4, math.pi]).tan().numpy())
[ 0. 1. inf -1. 0.]
Source code in tinygrad/tensor.py
asin¤
asin() -> Tensor
Computes the inverse sine (arcsine) of the tensor element-wise.
print(Tensor([-0.9, -0.6, -0.3, 0., 0.3, 0.6, 0.9]).asin().numpy())
[-1.1198 -0.6435 -0.3047 0. 0.3047 0.6435 1.1198]
Source code in tinygrad/tensor.py
acos¤
acos() -> Tensor
Computes the inverse cosine (arccosine) of the tensor element-wise.
print(Tensor([-0.9, -0.6, -0.3, 0., 0.3, 0.6, 0.9]).acos().numpy())
[2.6906 2.2143 1.8755 1.5708 1.2661 0.9273 0.451 ]
Source code in tinygrad/tensor.py
atan¤
atan() -> Tensor
Computes the inverse tangent (arctan) of the tensor element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).atan().numpy())
[-1.249 -1.1071 -0.7854 0. 0.7854 1.1071 1.249 ]
Source code in tinygrad/tensor.py
trunc¤
trunc()
Truncates the tensor element-wise.
print(Tensor([-3.5, -2.5, -1.5, -0.5, 0.5, 1.5, 2.5, 3.5]).trunc().numpy())
[-3. -2. -1. -0. 0. 1. 2. 3.]
Source code in tinygrad/mixin/math.py
ceil¤
ceil()
Rounds the tensor element-wise towards positive infinity.
print(Tensor([-3.5, -2.5, -1.5, -0.5, 0.5, 1.5, 2.5, 3.5]).ceil().numpy())
[-3. -2. -1. -0. 1. 2. 3. 4.]
Source code in tinygrad/mixin/math.py
floor¤
floor()
Rounds the tensor element-wise towards negative infinity.
print(Tensor([-3.5, -2.5, -1.5, -0.5, 0.5, 1.5, 2.5, 3.5]).floor().numpy())
[-4. -3. -2. -1. 0. 1. 2. 3.]
Source code in tinygrad/mixin/math.py
round¤
round() -> Tensor
Rounds the tensor element-wise with rounding half to even.
print(Tensor([-3.5, -2.5, -1.5, -0.5, 0.5, 1.5, 2.5, 3.5]).round().numpy())
[-4. -2. -2. 0. 0. 2. 2. 4.]
Source code in tinygrad/tensor.py
isinf¤
isinf(detect_positive=True, detect_negative=True)
Checks the tensor element-wise to return True where the element is infinity, otherwise returns False.
print(Tensor([1, float('inf'), 2, float('-inf'), float('nan')]).isinf().numpy())
[False True False True False]
Source code in tinygrad/mixin/math.py
isnan¤
isnan()
Checks the tensor element-wise to return True where the element is NaN, otherwise returns False.
print(Tensor([1, float('inf'), 2, float('-inf'), float('nan')]).isnan().numpy())
[False False False False True]
Source code in tinygrad/mixin/math.py
isfinite¤
isfinite()
Checks the tensor element-wise to return True where the element is finite, otherwise returns False.
print(Tensor([1, float('inf'), 2, float('-inf'), float('nan')]).isfinite().numpy())
[ True False True False False]
Source code in tinygrad/mixin/math.py
lerp¤
lerp(end, weight) -> Tensor
Linearly interpolates between self and end by weight.
print(Tensor([1., 2., 3.]).lerp(Tensor([4., 5., 6.]), 0.5).numpy())
[2.5 3.5 4.5]
Source code in tinygrad/tensor.py
square¤
square()
Squares the tensor element-wise.
Equivalent to self*self.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).square().numpy())
[9. 4. 1. 0. 1. 4. 9.]
Source code in tinygrad/mixin/math.py
clamp¤
clamp(min_=None, max_=None)
Clips (clamps) the values in the tensor between min_ and max_ element-wise.
If min_ is None, there is no lower bound. If max_ is None, there is no upper bound.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).clamp(-1, 1).numpy())
[-1. -1. -1. 0. 1. 1. 1.]
Source code in tinygrad/mixin/math.py
clip¤
clip(min_=None, max_=None)
Alias for Tensor.clamp.
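Since clip is an alias, it behaves exactly like clamp; reusing the clamp example above:
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).clip(-1, 1).numpy())
[-1. -1. -1. 0. 1. 1. 1.]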
Source code in tinygrad/mixin/math.py
sign¤
sign() -> Tensor
Returns the sign of the tensor element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).sign().numpy())
[-1. -1. -1. 0. 1. 1. 1.]
Source code in tinygrad/tensor.py
abs¤
abs() -> Tensor
Computes the absolute value of the tensor element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).abs().numpy())
[3. 2. 1. 0. 1. 2. 3.]
Source code in tinygrad/tensor.py
reciprocal¤
reciprocal() -> Tensor
Computes 1/x element-wise.
print(Tensor([1., 2., 3., 4.]).reciprocal().numpy())
[1. 0.5 0.3333 0.25 ]
Source code in tinygrad/tensor.py
Unary Ops (activation)¤
relu¤
relu()
Applies the Rectified Linear Unit (ReLU) function element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).relu().numpy())
[0. 0. 0. 0. 1. 2. 3.]
Source code in tinygrad/mixin/math.py
sigmoid¤
sigmoid()
Applies the Sigmoid function element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).sigmoid().numpy())
[0.0474 0.1192 0.2689 0.5 0.7311 0.8808 0.9526]
Source code in tinygrad/mixin/math.py
logsigmoid¤
logsigmoid() -> Tensor
Applies the LogSigmoid function element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).logsigmoid().numpy())
[-3.0486 -2.1269 -1.3133 -0.6931 -0.3133 -0.1269 -0.0486]
Source code in tinygrad/tensor.py
hardsigmoid¤
hardsigmoid(alpha=1/6, beta=0.5)
Applies the Hardsigmoid function element-wise.
NOTE: default alpha and beta values are taken from torch
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).hardsigmoid().numpy())
[0. 0.1667 0.3333 0.5 0.6667 0.8333 1. ]
Source code in tinygrad/mixin/math.py
elu¤
elu(alpha=1.0) -> Tensor
Applies the Exponential Linear Unit (ELU) function element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).elu().numpy())
[-0.9502 -0.8647 -0.6321 0. 1. 2. 3. ]
Source code in tinygrad/tensor.py
celu¤
celu(alpha=1.0) -> Tensor
Applies the Continuously differentiable Exponential Linear Unit (CELU) function element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).celu().numpy())
[-0.9502 -0.8647 -0.6321 0. 1. 2. 3. ]
Source code in tinygrad/tensor.py
selu¤
selu(alpha=1.67326, gamma=1.0507) -> Tensor
Applies the Scaled Exponential Linear Unit (SELU) function element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).selu().numpy())
[-1.6706 -1.5202 -1.1113 0. 1.0507 2.1014 3.1521]
Source code in tinygrad/tensor.py
swish¤
swish()
See .silu()
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).swish().numpy())
[-0.1423 -0.2384 -0.2689 0. 0.7311 1.7616 2.8577]
Source code in tinygrad/mixin/math.py
silu¤
silu()
Applies the Sigmoid Linear Unit (SiLU) function element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).silu().numpy())
[-0.1423 -0.2384 -0.2689 0. 0.7311 1.7616 2.8577]
Source code in tinygrad/mixin/math.py
relu6¤
relu6()
Applies the ReLU6 function element-wise.
print(Tensor([-9., -6., -3., 0., 3., 6., 9.]).relu6().numpy())
[0. 0. 0. 0. 3. 6. 6.]
Source code in tinygrad/mixin/math.py
hardswish¤
hardswish()
Applies the Hardswish function element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).hardswish().numpy())
[-0. -0.3333 -0.3333 0. 0.6667 1.6667 3. ]
Source code in tinygrad/mixin/math.py
tanh¤
tanh()
Applies the Hyperbolic Tangent (tanh) function element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).tanh().numpy())
[-0.9951 -0.964 -0.7616 0. 0.7616 0.964 0.9951]
Source code in tinygrad/mixin/math.py
sinh¤
sinh() -> Tensor
Applies the Hyperbolic Sine (sinh) function element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).sinh().numpy())
[-10.0179 -3.6269 -1.1752 0. 1.1752 3.6269 10.0179]
Source code in tinygrad/tensor.py
cosh¤
cosh() -> Tensor
Applies the Hyperbolic Cosine (cosh) function element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).cosh().numpy())
[10.0677 3.7622 1.5431 1. 1.5431 3.7622 10.0677]
Source code in tinygrad/tensor.py
atanh¤
atanh() -> Tensor
Applies the Inverse Hyperbolic Tangent (atanh) function element-wise.
print(Tensor([-0.9, -0.6, -0.3, 0., 0.3, 0.6, 0.9]).atanh().numpy())
[-1.4722 -0.6931 -0.3095 0. 0.3095 0.6931 1.4722]
Source code in tinygrad/tensor.py
asinh¤
asinh() -> Tensor
Applies the Inverse Hyperbolic Sine (asinh) function element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).asinh().numpy())
[-1.8184 -1.4436 -0.8814 0. 0.8814 1.4436 1.8184]
Source code in tinygrad/tensor.py
acosh¤
acosh() -> Tensor
Applies the Inverse Hyperbolic Cosine (acosh) function element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).acosh().numpy())
[ nan nan nan nan 0. 1.317 1.7627]
Source code in tinygrad/tensor.py
hardtanh¤
hardtanh(min_val=-1, max_val=1)
Applies the Hardtanh function element-wise.
print(Tensor([-1.5, -1.0, -0.5, 0., 0.5, 1.0, 1.5]).hardtanh().numpy())
[-1. -1. -0.5 0. 0.5 1. 1. ]
Source code in tinygrad/mixin/math.py
erf¤
erf() -> Tensor
Applies the error function element-wise.
See: https://en.wikipedia.org/wiki/Error_function
print(Tensor([-1.5, -1.0, -0.5, 0., 0.5, 1.0, 1.5]).erf().numpy())
[-0.9661 -0.8427 -0.5205 0. 0.5205 0.8427 0.9661]
Source code in tinygrad/tensor.py
gelu¤
gelu()
Applies the Gaussian Error Linear Unit (GELU) function element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).gelu().numpy())
[-0.0036 -0.0454 -0.1588 0. 0.8412 1.9546 2.9964]
Source code in tinygrad/mixin/math.py
quick_gelu¤
quick_gelu()
Applies the Sigmoid GELU approximation element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).quick_gelu().numpy())
[-0.0181 -0.0643 -0.1542 0. 0.8458 1.9357 2.9819]
Source code in tinygrad/mixin/math.py
leaky_relu¤
leaky_relu(neg_slope=0.01)
Applies the Leaky ReLU function element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).leaky_relu().numpy())
[-0.03 -0.02 -0.01 0. 1. 2. 3. ]
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).leaky_relu(neg_slope=0.42).numpy())
[-1.26 -0.84 -0.42 0. 1. 2. 3. ]
Source code in tinygrad/mixin/math.py
mish¤
mish() -> Tensor
Applies the Mish function element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).mish().numpy())
[-0.1456 -0.2525 -0.3034 0. 0.8651 1.944 2.9865]
Source code in tinygrad/tensor.py
softplus¤
softplus(beta=1.0) -> Tensor
Applies the Softplus function element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).softplus().numpy())
[0.0486 0.1269 0.3133 0.6931 1.3133 2.1269 3.0486]
Source code in tinygrad/tensor.py
softsign¤
softsign() -> Tensor
Applies the Softsign function element-wise.
print(Tensor([-3., -2., -1., 0., 1., 2., 3.]).softsign().numpy())
[-0.75 -0.6667 -0.5 0. 0.5 0.6667 0.75 ]
Source code in tinygrad/tensor.py
Elementwise Ops (broadcasted)¤
add¤
Adds self and x.
Equivalent to self + x.
Supports broadcasting to a common shape, type promotion, and integer, float, and boolean inputs.
Tensor.manual_seed(42)
t = Tensor.randn(4)
print(t.numpy())
[-0.5144 1.085 0.9089 -0.0841]
print(t.add(20).numpy())
[19.4856 21.085 20.9089 19.9159]
print(t.add(Tensor([[2.0], [3.5]])).numpy())
[[1.4856 3.085 2.9089 1.9159]
[2.9856 4.585 4.4089 3.4159]]
Source code in tinygrad/mixin/math.py
sub¤
Subtracts x from self.
Equivalent to self - x.
Supports broadcasting to a common shape, type promotion, and integer, float, and boolean inputs.
Tensor.manual_seed(42)
t = Tensor.randn(4)
print(t.numpy())
[-0.5144 1.085 0.9089 -0.0841]
print(t.sub(20).numpy())
[-20.5144 -18.915 -19.0911 -20.0841]
print(t.sub(Tensor([[2.0], [3.5]])).numpy())
[[-2.5144 -0.915 -1.0911 -2.0841]
[-4.0144 -2.415 -2.5911 -3.5841]]
Source code in tinygrad/tensor.py
mul¤
Multiplies self and x.
Equivalent to self * x.
Supports broadcasting to a common shape, type promotion, and integer, float, and boolean inputs.
Tensor.manual_seed(42)
t = Tensor.randn(4)
print(t.numpy())
[-0.5144 1.085 0.9089 -0.0841]
print(t.mul(3).numpy())
[-1.5431 3.2549 2.7267 -0.2523]
print(t.mul(Tensor([[-1.0], [2.0]])).numpy())
[[ 0.5144 -1.085 -0.9089 0.0841]
[-1.0287 2.17 1.8178 -0.1682]]
Source code in tinygrad/mixin/math.py
div¤
div(
x: Tensor | ConstType,
reverse=False,
rounding_mode: Literal["trunc", "floor"] | None = None,
) -> Tensor
Divides self by x.
Equivalent to self / x.
Supports broadcasting to a common shape, type promotion, and integer, float, and boolean inputs.
div performs true division.
Tensor.manual_seed(42)
t = Tensor.randn(4)
print(t.numpy())
[-0.5144 1.085 0.9089 -0.0841]
print(t.div(3).numpy())
[-0.1715 0.3617 0.303 -0.028 ]
print(Tensor([1, 4, 10]).div(Tensor([2, 3, 4])).numpy())
[0.5 1.3333 2.5 ]
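The rounding_mode parameter in the signature above selects integer-style rounding of the quotient: "floor" rounds toward negative infinity, "trunc" toward zero. A minimal sketch of the expected values (print formatting may differ):
print(Tensor([-4, 7, 5]).div(Tensor([2, -3, 8]), rounding_mode="floor").numpy())
[-2 -3 0]
print(Tensor([-4, 7, 5]).div(Tensor([2, -3, 8]), rounding_mode="trunc").numpy())
[-2 -2 0]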
Source code in tinygrad/tensor.py
idiv¤
Divides self by x.
Equivalent to self // x.
Supports broadcasting to a common shape, type promotion, and integer inputs.
idiv performs integer division (truncate towards zero).
print(Tensor([-4, 7, 5, 4, -7, 8]).idiv(Tensor([2, -3, 8, -2, 3, 5])).numpy())
[-2 -2 0 -2 -2 1]
Source code in tinygrad/mixin/math.py
mod¤
Mod self by x.
Equivalent to self % x.
Supports broadcasting to a common shape, type promotion, and integer inputs.
print(Tensor([-4, 7, 5, 4, -7, 8]).mod(Tensor([2, -3, 8, -2, 3, 5])).numpy())
[ 0 -2 5 0 2 3]
Source code in tinygrad/tensor.py
bitwise_xor¤
Computes the bitwise XOR of self and x.
Equivalent to self ^ x.
Supports broadcasting to a common shape, type promotion, and integer and boolean inputs.
print(Tensor([-1, -2, 3]).bitwise_xor(Tensor([1, 0, 3])).numpy())
[-2 -2 0]
print(Tensor([True, True, False, False]).bitwise_xor(Tensor([True, False, True, False])).numpy())
[False True True False]
Source code in tinygrad/mixin/math.py
bitwise_and¤
Computes the bitwise AND of self and x.
Equivalent to self & x.
Supports broadcasting to a common shape, type promotion, and integer and boolean inputs.
print(Tensor([2, 5, 255]).bitwise_and(Tensor([3, 14, 16])).numpy())
[ 2 4 16]
print(Tensor([True, True, False, False]).bitwise_and(Tensor([True, False, True, False])).numpy())
[ True False False False]
Source code in tinygrad/mixin/math.py
bitwise_or¤
Computes the bitwise OR of self and x.
Equivalent to self | x.
Supports broadcasting to a common shape, type promotion, and integer and boolean inputs.
print(Tensor([2, 5, 255]).bitwise_or(Tensor([4, 4, 4])).numpy())
[ 6 5 255]
print(Tensor([True, True, False, False]).bitwise_or(Tensor([True, False, True, False])).numpy())
[ True True True False]
Source code in tinygrad/mixin/math.py
bitwise_not¤
bitwise_not() -> Tensor
Computes the bitwise NOT of self.
Equivalent to ~self.
print(Tensor([0, 2, 5, 255], dtype="int8").bitwise_not().numpy())
[-1 -3 -6 0]
print(Tensor([True, False]).bitwise_not().numpy())
[False True]
Source code in tinygrad/tensor.py
lshift¤
Computes left arithmetic shift of self by x bits. self must have unsigned dtype.
Equivalent to self << x.
print(Tensor([1, 3, 31], dtype=dtypes.uint8).lshift(2).numpy())
[ 4 12 124]
Source code in tinygrad/tensor.py
rshift¤
Computes right arithmetic shift of self by x bits. self must have unsigned dtype.
Equivalent to self >> x.
print(Tensor([4, 13, 125], dtype=dtypes.uint8).rshift(2).numpy())
[ 1 3 31]
Source code in tinygrad/tensor.py
pow¤
Computes self raised to the power of x.
Equivalent to self ** x.
print(Tensor([-1, 2, 3]).pow(2.0).numpy())
[1 4 9]
print(Tensor([-1, 2, 3]).pow(Tensor([-1.5, 0.5, 1.5])).numpy())
[-2147483648 1 5]
print((2.0 ** Tensor([-1, 2, 3])).numpy())
[0.5 4. 8. ]
Source code in tinygrad/tensor.py
maximum¤
Computes element-wise maximum of self and x.
print(Tensor([-1, 2, 3]).maximum(1).numpy())
[1 2 3]
print(Tensor([-1, 2, 3]).maximum(Tensor([-4, -2, 9])).numpy())
[-1 2 9]
Source code in tinygrad/tensor.py
minimum¤
Computes element-wise minimum of self and x.
print(Tensor([-1, 2, 3]).minimum(1).numpy())
[-1 1 1]
print(Tensor([-1, 2, 3]).minimum(Tensor([-4, -2, 9])).numpy())
[-4 -2 3]
Source code in tinygrad/tensor.py
where¤
Returns a tensor of elements selected from either x or y, depending on self.
output_i = x_i if self_i else y_i.
cond = Tensor([[True, True, False], [True, False, False]])
print(cond.where(1, 3).numpy())
[[1 1 3]
[1 3 3]]
Tensor.manual_seed(42)
cond = Tensor.randn(2, 3)
print(cond.numpy())
[[ 0.9779 0.4678 0.5526]
[-0.3288 -0.8555 0.2753]]
print((cond > 0).where(cond, -float("inf")).numpy())
[[0.9779 0.4678 0.5526]
[ -inf -inf 0.2753]]
Source code in tinygrad/tensor.py
copysign¤
copysign(other) -> Tensor
Returns a tensor with the magnitude of self and the sign of other, element-wise.
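A minimal example (the values are exact; print formatting may differ):
print(Tensor([-1., 2., -3.]).copysign(Tensor([1., -1., 1.])).numpy())
[ 1. -2. 3.]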
Source code in tinygrad/tensor.py
logaddexp¤
logaddexp(other) -> Tensor
Calculates (self.exp()+other.exp()).log(), element-wise.
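In other words, logaddexp returns log(exp(self) + exp(other)) element-wise. A minimal example (values rounded to 4 decimals; print formatting may differ):
print(Tensor([-1., 0., 1.]).logaddexp(Tensor([1., 0., -1.])).numpy())
[1.1269 0.6931 1.1269]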
Source code in tinygrad/tensor.py
Casting Ops¤
cast¤
cast(dtype: DTypeLike) -> Tensor
Casts self to the given dtype.
t = Tensor([-1, 2.5, 3], dtype=dtypes.float)
print(t.dtype, t.numpy())
dtypes.float [-1. 2.5 3. ]
t = t.cast(dtypes.int32)
print(t.dtype, t.numpy())
dtypes.int [-1 2 3]
t = t.cast(dtypes.uint8)
print(t.dtype, t.numpy())
dtypes.uchar [255 2 3]
Source code in tinygrad/tensor.py
bitcast¤
bitcast(dtype: DTypeLike) -> Tensor
Bitcasts self to the given dtype of the same itemsize.
self must not require a gradient.
t = Tensor([-1, 2, 3], dtype=dtypes.int32)
print(t.dtype, t.numpy())
dtypes.int [-1 2 3]
t = t.bitcast(dtypes.uint32)
print(t.dtype, t.numpy())
dtypes.uint [4294967295 2 3]
Source code in tinygrad/tensor.py
float¤
float() -> Tensor
Convenience method to cast self to a float32 Tensor.
t = Tensor([-1, 2, 3], dtype=dtypes.int32)
print(t.dtype, t.numpy())
dtypes.int [-1 2 3]
t = t.float()
print(t.dtype, t.numpy())
dtypes.float [-1. 2. 3.]
Source code in tinygrad/tensor.py
half¤
half() -> Tensor
Convenience method to cast self to a float16 Tensor.
t = Tensor([-1, 2, 3], dtype=dtypes.int32)
print(t.dtype, t.numpy())
dtypes.int [-1 2 3]
t = t.half()
print(t.dtype, t.numpy())
dtypes.half [-1. 2. 3.]
Source code in tinygrad/tensor.py
int¤
int() -> Tensor
Convenience method to cast self to an int32 Tensor.
t = Tensor([-1.5, -0.5, 0.0, 0.5, 1.5])
print(t.dtype, t.numpy())
dtypes.float [-1.5 -0.5 0. 0.5 1.5]
t = t.int()
print(t.dtype, t.numpy())
dtypes.int [-1 0 0 0 1]
Source code in tinygrad/tensor.py
bool¤
bool() -> Tensor
Convenience method to cast self to a bool Tensor.
t = Tensor([-1, 0, 1])
print(t.dtype, t.numpy())
dtypes.int [-1 0 1]
t = t.bool()
print(t.dtype, t.numpy())
dtypes.bool [ True False True]
Source code in tinygrad/tensor.py
bfloat16¤
bfloat16() -> Tensor
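Presumably a convenience method to cast self to a bfloat16 Tensor, mirroring float and half above; a hedged sketch (assumes the device supports bfloat16):
t = Tensor([-1, 2, 3], dtype=dtypes.int32)
print(t.bfloat16().dtype)
dtypes.bfloat16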
Source code in tinygrad/tensor.py
double¤
double() -> Tensor
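Likewise, presumably a convenience method to cast self to a float64 Tensor; a hedged sketch (assumes the device supports double precision):
t = Tensor([-1, 2, 3], dtype=dtypes.int32)
print(t.double().dtype)
dtypes.double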
Source code in tinygrad/tensor.py