How is Data Science different from ML?

IceCream Labs
2 min read · Oct 21, 2022

--

It’s elementary, my dear Watson, this Data Science! It’s all about TensorFlow, Keras, Python, R, OCR, YOLO, Spark, NLP, BERT, BART, Fart… NO SHIT, SHERLOCK!

Let’s get one thing straight: it’s Data Science, for heaven’s sake, not Framework Science or Model Science! True, AI/ML is a phenomenal enabler for data science, but at the end of the day it’s exactly that, an enabler: a means to an end, not the goal. While the boundary between Data Science and ML gets blurrier with each passing day and ML becomes ever more accessible, let’s pause and appreciate the importance of the ‘Data’ in Data Science.

At the height of the COVID pandemic, we worked with a large medical school in the US that wanted to leverage ML to reduce human intervention in the application review process for student admissions. The existing process required an evaluator (usually a resident doctor) to read each application and recommend whether or not the applicant should be called for an interview. And as with any human process, there’s an element of variability in how each evaluator reads and interprets an application, or even in how the same evaluator interprets what they’re reading over the course of the application cycle (a 3–4 month window). So the case for moving to ML was quite straightforward and well established. But the question remained: would a heartless ML program be able to show the same empathy that an evaluator would? How do you train an ML model to make the application pool ‘inclusive’? How do you compare the application of a single mother trying to get into medical school against a more privileged applicant with straight As?

Train an NLP model to read a Statement of Purpose or a Letter of Recommendation — easy peasy lemon squeezy! But how do you score or rank the application?
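The reading half really is the easy part nowadays. Here is a minimal sketch of it using Hugging Face’s text-classification pipeline; everything in it is illustrative, not our actual system: the checkpoint name is a placeholder, and in practice you would fine-tune your own BERT-style model on historical application outcomes.

```python
# Sketch: classifying an essay with a fine-tuned transformer.
# "our-org/sop-scorer" is a hypothetical placeholder, not a real checkpoint.
from transformers import pipeline

scorer = pipeline("text-classification", model="our-org/sop-scorer")

sop = "Growing up, I watched my mother work two jobs while raising three kids..."
result = scorer(sop, truncation=True)[0]
print(result["label"], round(result["score"], 2))  # e.g. INTERVIEW 0.91
```

Getting a label out of a model is trivial. Turning that label into a fair rank across very different lives is the hard part.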

Our recall had to be high: we could not afford to reject a deserving candidate. Precision also had to be high: ML had to narrow down the application pool for this to make economic sense for the school.
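In concrete terms, against a held-out set of past human decisions (the labels below are made up for illustration, not the school’s data):

```python
from sklearn.metrics import precision_score, recall_score

# 1 = "call for interview", 0 = "do not call". Purely illustrative numbers.
y_true = [1, 1, 1, 0, 0, 1, 0, 1]  # what the human evaluators decided
y_pred = [1, 0, 1, 0, 1, 1, 0, 1]  # what the model recommended

# Recall: of the deserving candidates, how many did the model keep? -> 0.8
print("recall:", recall_score(y_true, y_pred))
# Precision: of those the model kept, how many truly deserved it? -> 0.8
print("precision:", precision_score(y_true, y_pred))
```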

Initially, with SOTA models, our recall was around 50%, miserably below the acceptable range. We had to magically breathe empathy into these models. And that’s when we started diving into the data!

The dataset had multiple subsets based on the socio-economic, cultural, and ethnic backgrounds of the applicants. We began untangling these subsets, and the mystery slowly unravelled! What started as a single one-size-fits-all model was now cut to size, and each subset got its own model. Finally, we hit our target recall scores in the high 80s. The devil lies in the details… nah, it lies in the data!!
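For the curious, here is a rough sketch of that “one model per subset” idea. The file, the column names, and the classifier are all hypothetical stand-ins for our actual pipeline; the point is the routing, not the estimator.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# "applications.csv" and every column name below are illustrative assumptions.
df = pd.read_csv("applications.csv")
features = ["gpa", "mcat", "essay_score"]  # hypothetical numeric features

# Train one model per cohort instead of a single one-size-fits-all model.
models = {}
for cohort, group in df.groupby("cohort"):
    model = LogisticRegression(max_iter=1000)
    model.fit(group[features], group["interviewed"])
    models[cohort] = model

def recommend(app: pd.Series) -> int:
    """Route an application to the model trained on its own cohort."""
    return int(models[app["cohort"]].predict(app[features].to_frame().T)[0])
```

Splitting the data this way means an applicant is scored against patterns learned from comparable applications, which is what let the models behave more ‘empathetically’ than one global model ever did.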


IceCream Labs

IceCreamLabs enables businesses to leverage AI to solve complex problems quickly and cost-effectively.