The average data scientist or ML practitioner spends a significant amount of time designing and running machine learning experiments, and waiting for them to complete. This involves trying out various training algorithms, engineering features, changing preprocessing steps to get more homogeneous data, tuning different hyperparameters, and evaluating models on different datasets.
A lot goes into creating and running experiments, yet often the only record we keep of their performance is the source code of the best-performing run. This is why we hear the following phrases so frequently:
“It was working yesterday” – a reproducibility issue.
“I don’t remember what the actual scores were, but using feature X didn’t help” – a documentation issue.
“I fixed a bug, but I ran so many previous experiments with that bug” – a code dependency issue.
“I am using the same parameters as experiment 4, why is it not working?” – a reproducibility and documentation issue.
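All four complaints trace back to runs whose parameters, metrics, and code version were never written down. Even without a dedicated tracking tool, a few lines of Python can capture that context for every run. The sketch below is illustrative only: the log_experiment helper and the experiment_logs directory are hypothetical names, not part of any particular library.

```python
import json
import subprocess
import time
from pathlib import Path

LOG_DIR = Path("experiment_logs")  # hypothetical location; adjust to taste


def current_git_commit() -> str:
    """Record the exact code version, so 'it was working yesterday' is checkable."""
    try:
        return subprocess.check_output(
            ["git", "rev-parse", "HEAD"], text=True
        ).strip()
    except (subprocess.CalledProcessError, FileNotFoundError):
        return "unknown"


def log_experiment(name: str, params: dict, metrics: dict) -> Path:
    """Persist parameters, metrics, and code version for a single run."""
    LOG_DIR.mkdir(exist_ok=True)
    record = {
        "name": name,
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "git_commit": current_git_commit(),  # ties results to the code that produced them
        "params": params,    # answers "what parameters did experiment 4 use?"
        "metrics": metrics,  # answers "what were the actual scores?"
    }
    path = LOG_DIR / f"{name}_{int(time.time())}.json"
    path.write_text(json.dumps(record, indent=2))
    return path


# Example: record one training run
log_experiment(
    name="feature_x_ablation",
    params={"learning_rate": 0.01, "use_feature_x": False},
    metrics={"val_accuracy": 0.87},
)
```

Dedicated tools such as MLflow or Weights & Biases implement the same idea with richer querying and UIs, but even a minimal log like this makes every one of the complaints above answerable from a file on disk.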
What you’ll learn
Slides on Experiment Management for Machine Learning can be found here.