some progress on batchnorms (draft) (#147)

* number of categories for efficientnet

* need layer_init_uniform

* merge fail

* merge fail

* batchnorms

* needs work

* needs work: how to determine training

* pow

* needs work

* reshape was needed

* sum with axis

* sum with axis and tests

* broken

* works again

* clean up

* Update test_ops.py

* using sum

* don't always update running_stats

* space

* self

* default return running_stats

* passes test

* need to use mean

* merge

* testing

* fixing pow

* test_ops had a line dropped

* undo pow

* rebase
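
Taken together, the commits above circle around one piece of machinery: a BatchNorm2D that reduces over the batch and spatial axes ("sum with axis", "need to use mean"), reshapes the per-channel statistics for broadcasting ("reshape was needed"), updates running_stats only in training mode ("don't always update running_stats"), and falls back to the stored statistics otherwise ("default return running_stats"). Below is a minimal NumPy illustration of that logic; it is a sketch of the idea, not the PR's actual tinygrad code, and the eps/momentum defaults are assumptions.

import numpy as np

class BatchNorm2D:
  # Illustrative sketch only; eps/momentum defaults are assumed, not from the PR.
  def __init__(self, sz, eps=1e-5, momentum=0.1):
    self.eps, self.momentum = eps, momentum
    self.weight = np.ones(sz, dtype=np.float32)    # gamma
    self.bias = np.zeros(sz, dtype=np.float32)     # beta
    self.running_mean = np.zeros(sz, dtype=np.float32)
    self.running_var = np.ones(sz, dtype=np.float32)

  def __call__(self, x, training=False):
    # x has shape (N, C, H, W)
    if training:
      # "sum with axis" / "need to use mean": reduce over batch and spatial dims.
      mean = x.mean(axis=(0, 2, 3))
      var = ((x - mean.reshape(1, -1, 1, 1)) ** 2).mean(axis=(0, 2, 3))
      # "don't always update running_stats": only update while training.
      self.running_mean = (1 - self.momentum) * self.running_mean + self.momentum * mean
      self.running_var = (1 - self.momentum) * self.running_var + self.momentum * var
    else:
      # "default return running_stats": inference uses the stored statistics.
      mean, var = self.running_mean, self.running_var
    # "reshape was needed": per-channel stats must broadcast over (N, C, H, W).
    xn = (x - mean.reshape(1, -1, 1, 1)) / np.sqrt(var.reshape(1, -1, 1, 1) + self.eps)
    return self.weight.reshape(1, -1, 1, 1) * xn + self.bias.reshape(1, -1, 1, 1)

bn = BatchNorm2D(8)
y = bn(np.random.randn(4, 8, 16, 16).astype(np.float32), training=True)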
Author: Marcel Bischoff
Date: 2020-12-10 01:14:27 -05:00
Committed by: GitHub
Parent: 5d46df638a
Commit: d204f09316

@@ -37,19 +37,21 @@ if __name__ == "__main__":
   Tensor.default_gpu = os.getenv("GPU") is not None
   TINY = os.getenv("TINY") is not None
+  TRANSFER = os.getenv("TRANSFER") is not None
   if TINY:
     model = TinyConvNet(classes)
+  elif TRANSFER:
+    model = EfficientNet(int(os.getenv("NUM", "0")), classes, has_se=True)
+    model.load_weights_from_torch()
   else:
     model = EfficientNet(int(os.getenv("NUM", "0")), classes, has_se=False)
-  #model = EfficientNet(int(os.getenv("NUM", "0")), classes, has_se=True)
-  #model.load_weights_from_torch()
   parameters = get_parameters(model)
   print("parameters", len(parameters))
   optimizer = optim.Adam(parameters, lr=0.001)
   #BS, steps = 16, 32
-  BS, steps = 64 if TINY else 16, 1024
+  BS, steps = 64 if TINY else 16, 2048
   for i in (t := trange(steps)):
     samp = np.random.randint(0, X_train.shape[0], size=(BS))
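
A note on the last change in the hunk: "BS, steps = 64 if TINY else 16, 2048" parses as the tuple (64 if TINY else 16, 2048), so only the batch size depends on TINY; the step count is fixed, raised here from 1024 to 2048. The sketch below restates the environment-variable switches from the diff; the invocations in the comments assume a script name, since the file name is not shown in this view.

# Assumed invocations; the GPU/TINY/TRANSFER/NUM variables come from the diff above:
#   GPU=1 TINY=1 python <script>.py          -> TinyConvNet, BS=64
#   GPU=1 TRANSFER=1 NUM=0 python <script>.py -> EfficientNet with torch weights
import os

TINY = os.getenv("TINY") is not None
TRANSFER = os.getenv("TRANSFER") is not None
# Tuple assignment: batch size varies with TINY, steps is always 2048.
BS, steps = 64 if TINY else 16, 2048
print(BS, steps)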