My first two months at Snappet have been great!

The project that I picked up is called Big Calibration. Big Calibration is the process of taking all of the pupils' answers and all of the questions in the Snappet platform and doing a single big recalculation of one of the core features of our model in production. The problem with Big Calibration was that it was broken, and I was tasked with fixing it (actually, I still am).

Of course, without some context this sounds like plain gibberish. So first let me explain what Snappet is, and then what Big Calibration actually is.

What is Snappet?

Snappet is an online educational learning platform for primary schools. What this means is that we have created software that teachers can use in primary school to teach math and language. In schools that use Snappet, pupils do math and language exercises on tablets. We provide the tablets, the educational platform, and teacher training. The teachers teach the Snappet method through the tablets, voilà!

The vague, hand-wavy explanation of how Snappet works internally is that every student has a thing called an “ability score” and every exercise has a thing called a “difficulty score”. We try to give students exercises at the right level by matching students of a given ability to exercises with the right difficulties, and we use some sophisticated machine learning to do so.
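The post doesn't spell out Snappet's actual model, but the general idea behind ability/difficulty matching can be illustrated with the simplest item-response-theory model there is, the one-parameter (Rasch-style) model. This is just a sketch of the concept, not Snappet's implementation; the function name and scale are made up for illustration:

```python
import math

def p_correct(ability: float, difficulty: float) -> float:
    """Probability a student answers an exercise correctly under a
    simple one-parameter (Rasch-style) model: it depends only on the
    gap between the student's ability and the exercise's difficulty."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# When ability equals difficulty, the student has a ~50% chance of
# answering correctly -- that's the "right level" sweet spot.
print(p_correct(0.0, 0.0))  # 0.5
```

Calibration, in this picture, means choosing the ability and difficulty scores that best explain the observed right/wrong answers.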

Big Calibration

Knowing that, Big Calibration is the process of taking all answers from all students since the start of the platform's existence and recalculating the ability and difficulty scores of the whole platform. This might sound easy, but let me tell you: we have more than 1.3 million (!) unique exercises, more than 3.2 million unique users (with a big asterisk), and more than 3.8 billion (!!) answers in total.

So we take all the answers of all the students, and all the exercises with their difficulties, and we just put them into Big Calibration. Poof! Except, the problem was, it didn’t work. Big Calibration stopped working for the Netherlands because of the sheer amount of data we have collected. We have so many answers and so many students that our TensorFlow jobs just kept crashing.
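The post doesn't say what the eventual fix was, but a common way to stop a training job from falling over on billions of rows is to stream the data in fixed-size chunks instead of materializing everything in memory at once. A minimal sketch of that idea, with made-up names and sizes:

```python
def iter_batches(answers, batch_size=100_000):
    """Yield fixed-size chunks of an answer stream, so a calibration
    job never has to hold billions of answers in memory at once."""
    batch = []
    for answer in answers:
        batch.append(answer)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # flush the final, possibly smaller, chunk
        yield batch

# Each chunk can then be fed to the model as a separate training step,
# keeping peak memory bounded by batch_size rather than the dataset size.
for batch in iter_batches(range(10), batch_size=4):
    print(len(batch))  # 4, 4, 2
```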

Of course, I had just started working here, and, well, you know what seemed like a nice challenge? Yeah, fixing Big Calibration. I have to admit I partly took this on myself because I was really hungry for a technical challenge, something to sink my teeth into.

From a career perspective I’m not sure whether it was a smart move to start with such a hard and deeply technical project as my first. I still don’t feel like I have a really good overview of the whole platform, so that’s what I want to focus on for the next couple of months.

I’ve been writing a ton of Python code, unit tests, integration tests, SQL scripts, the whole shebang. I think I rewrote around 80% of the original Big Calibration code base, using the starting code as a springboard. Rewriting legacy code is scary because you’re never really sure where you’re going; there were no real working tests to jump off from, so I started with a semi-blank slate. What really helped was the guidance of a senior engineer who kept my confidence high every step of the way. Sometimes, when I felt unsure what to do, he came through and gave me new confidence to keep pushing forward.

What tech have I been working with?

Most of my time is spent writing Python or SQL, really, which is nice. I feel like I’ve become much better at SQL after actually having to use it. I’m amazed at the flexibility of “plain SQL”, and that such a big platform with billions of answers is handled by “just” some SQL tables.

Our models in production run in Docker containers on SageMaker. Docker I’m familiar with; SageMaker I wasn’t, but the API is well documented. For our code pipelines and CI/CD we use Azure DevOps, which I had no experience with, but it is straightforward enough. I don’t feel confident enough to set up those pipelines from scratch yet, but I’ve fixed and debugged some of the CloudFormation templates when I had to.