The ML Skills That Don’t Transfer to AI (and What Does)
How the Shift from Traditional ML to AI Completely Changed My Approach to Building Intelligent Systems
I spent six years working on machine learning in FinTech; six months ago, I started learning AI through online courses and by building prototypes. What surprised me wasn’t the technical complexity, but how fundamentally different the thinking process is.
In this post, I’ll walk through the biggest surprises in my learning journey. This is for you if you’re:
An analyst exploring AI tools in your workflows
An engineer considering a transition into AI engineering
An enthusiast curious about how AI development actually works
From Data Detective to Conversation Designer
In my days as an ML Engineer, building models felt like detective work. I’d spend countless hours performing this loop:
Dig through data
Uncover meaningful patterns
Craft mathematical transformations
This is what feature engineering looked like: crafting features such as debt-to-income ratio for credit models, all in an effort to find data transformations that would capture signal and help my models learn effectively.
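For the curious, here’s a minimal sketch of that kind of transformation in pandas. The column names and numbers are made up for illustration, not from a real credit system:

```python
import pandas as pd

# Toy applicant data; columns are illustrative only
applicants = pd.DataFrame({
    "monthly_debt": [850, 2400, 400],
    "monthly_income": [5000, 4800, 3200],
})

# A classic engineered feature for credit models:
# debt-to-income ratio, one number that captures repayment pressure
applicants["debt_to_income"] = (
    applicants["monthly_debt"] / applicants["monthly_income"]
)
print(applicants)
```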
With LLMs, I found myself doing something completely different. Instead of mathematical transformations, I’m crafting natural language instructions that guide the model’s reasoning process. At times, it feels more like conversation design than AI engineering.
I recently took on some client work classifying sentiment in review data. In the old ML world, this would’ve taken me weeks of building an NLP pipeline for the dataset. Instead, I could feed the raw review text directly to an LLM and watch the results improve with every prompt tweak.
What felt almost magical was how pre-trained LLMs already understand context, sarcasm, and nuanced language so well that I only needed to refine my prompt instructions to produce results that beat dedicated sentiment tools like VADER and pre-trained BERT classifiers.
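Here’s roughly what that workflow looks like. I’m using the OpenAI Python client as one example; the model name and prompt wording are illustrative, not a recipe:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def classify_sentiment(review: str) -> str:
    # The entire "pipeline" is one instruction plus the raw text
    prompt = (
        "Classify the sentiment of this product review as exactly one of: "
        "positive, negative, neutral. Watch for sarcasm.\n\n"
        f"Review: {review}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip().lower()

print(classify_sentiment("Oh great, it broke on day two. Five stars."))
```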
In short, both approaches require domain knowledge, and there are surprising parallels:
ML: capture signal mathematically through feature engineering
LLM: express intent through natural language instructions
When working with AI tools, I find myself adopting a “coach” role more than an “engineer” role. Instead of thinking about the best mathematical transformations to capture patterns, I now find myself coaching and instructing LLMs to perform specific tasks by crafting better prompts.
The Explainability Challenge
With traditional ML, I could use SHAP values and feature importance to explain decisions. When a loan application was rejected, I could point to specific numerical contributions: “debt-to-income ratio contributed -0.3…”
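As a rough sketch of that workflow, here’s a toy example pairing XGBoost with the shap library; the data, labels, and feature names are invented for illustration:

```python
import pandas as pd
import shap
import xgboost as xgb

# Toy loan data; features and labels are invented
X = pd.DataFrame({
    "debt_to_income": [0.2, 0.5, 0.8, 0.35],
    "credit_age_years": [10, 3, 1, 7],
})
y = [1, 0, 0, 1]  # 1 = approved

model = xgb.XGBClassifier(n_estimators=10).fit(X, y)

# SHAP attributes each prediction to per-feature contributions,
# so you can point to exactly how much each input pushed the decision
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
print(shap_values[0])  # contributions for the first applicant
```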
With AI systems, explainability is both more intuitive and more challenging. Chain-of-thought reasoning allows models to walk through their decision-making process step by step in natural language. This feels more transparent than mathematical weights, but the challenge lies in trusting that reasoning:
While we can follow the logical steps, we can’t always verify the underlying knowledge or catch subtle biases that might influence the model’s chain of thought.
When working with LLMs, I like to include “walk through this step by step” in the prompt for more complex tasks to understand the reasoning process. I’m still figuring out how much to trust these explanations. They’re more human-readable than SHAP values, but harder to verify.
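For example, here’s a minimal version of such a prompt; the wording is my own, not a canonical template:

```python
review = "The battery lasts all week, but support ignored three of my emails."

# Asking the model to reason out loud makes its logic inspectable,
# even if the underlying knowledge still can't be fully verified
prompt = f"""Classify the sentiment of this review as positive, negative, or mixed.
Walk through this step by step, then give your final answer on its own line.

Review: {review}"""
```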
The Data Efficiency Revolution
Traditional ML needed thousands or tens of thousands of labeled examples for even simple classification tasks. I remember projects where data collection took months before we could start model development.
In contrast, modern AI systems can achieve remarkable performance with dramatically less labeled data through:
Few-shot prompting: providing a few examples in the prompt for the LLM to learn from (see the sketch after this list).
Fine-tuning: providing a dataset as small as a few dozen examples to adjust the weights in the final layers of the LLM for more context-specific results.
Pre-trained LLMs already contain vast amounts of learned knowledge. In some cases, a model needs only a handful of examples to adapt to your specific use case.
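A minimal few-shot sketch, again using the OpenAI client; the three labeled examples are invented and stand in for what used to be thousands of rows:

```python
from openai import OpenAI

client = OpenAI()

messages = [
    {"role": "system",
     "content": "Classify each review as positive, negative, or neutral."},
    {"role": "user", "content": "Review: Oh wonderful, another delayed shipment."},
    {"role": "assistant", "content": "negative"},  # teaches sarcasm handling
    {"role": "user", "content": "Review: Does exactly what the listing promised."},
    {"role": "assistant", "content": "positive"},
    {"role": "user", "content": "Review: Arrived yesterday; haven't opened it yet."},
    {"role": "assistant", "content": "neutral"},
    # The actual review we want classified
    {"role": "user", "content": "Review: The zipper broke the first time I used it."},
]

resp = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(resp.choices[0].message.content)  # expected: negative
```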
Additionally, LLMs excel at generating synthetic training data. You can often bootstrap your dataset by having the model generate diverse examples of the patterns you want to capture. This completely changes project timelines: you can now prototype and test ideas in hours!
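One way to bootstrap that, sketched with an illustrative prompt (the category and count are made up):

```python
from openai import OpenAI

client = OpenAI()

# Ask the model to manufacture the edge cases you want covered
prompt = (
    "Generate 10 short product reviews that are negative but phrased "
    "sarcastically, one per line, no numbering."
)
resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
synthetic_reviews = resp.choices[0].message.content.splitlines()
```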
What This Means for Your Work
Six months in, I’m still learning, but these mindset shifts have changed how I approach AI projects:
From feature engineering to prompt engineering
From mathematical explainability to chain-of-thought reasoning
From training ML models with large datasets to efficiently fine-tuning LLMs with dozens of examples
The analytical thinking from traditional ML still matters, but it’s applied to fundamentally different problems.
Whether you’re adding AI to your workflow or considering a deeper career shift, understanding these differences will help you work with AI systems more effectively.
As I continue this journey, I want to make sure I’m writing about what’s useful to you. I’m planning my next few posts and would love your input.
Got something else in mind? Comment below :)