1 minute read

This is another post in the series on lessons learned building machine learning systems.

Something I have found very useful from personal experience is to use assertions to check the sizes of your tensors.

Pepper your machine learning code with assertions that validate tensor sizes, like this:

assert dot_product.size() == input_exercise_bias.size()  # (batch_size,)
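A minimal sketch of what this looks like in context, using hypothetical sizes and the tensor names from the snippet above (the dimensions here are assumptions for illustration only):

```python
import torch

# Hypothetical sizes for illustration only.
batch_size, hidden_dim = 42, 16

hidden_state_batch = torch.randn(batch_size, hidden_dim)
input_exercise_embedding = torch.randn(batch_size, hidden_dim)
input_exercise_bias = torch.randn(batch_size)

# Row-wise dot product between hidden states and embeddings.
dot_product = torch.einsum("ij,ij->i", hidden_state_batch, input_exercise_embedding)
assert dot_product.size() == (batch_size,)
assert dot_product.size() == input_exercise_bias.size()  # (batch_size,)

logits = dot_product + input_exercise_bias
assert logits.size() == (batch_size,)
```

Because `torch.Size` is a subclass of `tuple`, comparing a size against a plain tuple like `(batch_size,)` works directly.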

This may sound redundant and excessive, but you'll be glad it is there when it fires after you change something somewhere else:

dot_product = torch.einsum("ij,ij->i", hidden_state_batch, input_exercise_embedding)
>       assert dot_product.size() == input_exercise_bias.size()  # (batch_size,)
E       AssertionError

It triggered because a tensor changed from torch.Size([42, 1]) to torch.Size([42]):

Out[4]: torch.Size([42, 1])
Out[5]: torch.Size([42])
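This kind of mismatch is dangerous precisely because PyTorch often does not crash on it: broadcasting silently expands the computation instead. A small sketch of the failure mode (the shapes are taken from the example above; the variable names are illustrative):

```python
import torch

batch_size = 42
dot_product = torch.randn(batch_size)     # shape (42,)
bias_2d = torch.randn(batch_size, 1)      # shape (42, 1)

# Without an assertion this does NOT crash: broadcasting silently
# expands the sum to shape (42, 42) -- a subtle, hard-to-find bug.
silently_wrong = dot_product + bias_2d
assert silently_wrong.size() == (batch_size, batch_size)

# A shape assertion surfaces the mismatch immediately instead:
try:
    assert dot_product.size() == bias_2d.size()  # (batch_size,)
except AssertionError:
    print("shape mismatch:", dot_product.size(), "vs", bias_2d.size())
```

The assertion turns a silent numerical bug into a loud, immediate failure at the exact line where the shapes diverge.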

You may find resistance to this idea because it does not produce clean code, but you will be very happy once one of these assertions fires.

It is easy to assume that we do not make mistakes. The problem is that we do.

These assertions are just quick and hacky in-place checks, so they are not a replacement for proper tests but a supplement to them.

Assertions on the shapes of your tensors are a low-effort, high-reward hack. So use them wisely.