
Your AI Engineers Aren't AI Engineers

February 17, 2026

Our team reviewed a resume last week from someone calling themselves a "Senior AI Engineer." $220K salary at their last company. Two years of "AI experience." LinkedIn full of posts about GPT-4 and machine learning.

I asked them to explain how transformer attention mechanisms work.

Blank stare.

I asked about their experience with fine-tuning models.

"We mostly use the OpenAI API."

I asked what vector database they'd used in production.

"Is that like PostgreSQL?"

This person wasn't an AI engineer. They were a software engineer who learned to call APIs and got a title bump during the hype cycle.

And they're not alone. The market is absolutely flooded with people like this right now.

The Problem Nobody's Talking About

Every company is desperately trying to hire AI engineers. Boards are asking about AI strategy. Customers are demanding AI features. Competitors are shipping AI products.

So CTOs are scrambling to find "AI talent" before they get left behind.

Here's what's actually happening.

  1. You post a job for "AI Engineer" or "Machine Learning Engineer." You get 300 applications. Half of them have AI or ML in their title. Their resumes list PyTorch, TensorFlow, LangChain, vector databases.
  2. You interview five candidates. They all talk confidently about large language models, embeddings, RAG architectures. They've clearly read the same blog posts you have.
  3. You hire someone. They seem smart. They have the right buzzwords. They're expensive, so they must be good, right?
  4. Three months later you realize they can't actually build what you need. They can wire up APIs. They can implement tutorials they found on GitHub. But when you need custom model work or actual ML engineering, they're lost.

You just spent $60K in salary plus another $30K in recruiting fees on someone who isn't what you thought you were hiring.

And the worst part? Your actual AI initiative is now three months behind, your team is frustrated, and you're back to square one.

Why This Is Happening Right Now

The AI boom created a gold rush. And whenever there's a gold rush, people rebrand.

What happened in 2023-2024: Thousands of software engineers saw the salary premiums for AI roles and thought "I can learn this." They took a Coursera course. They built a chatbot using the OpenAI API. They added "AI Engineer" to their LinkedIn.

Recruiters who don't understand the technical differences started placing these people in AI roles because they had the keywords.

Companies desperate for AI talent lowered their bar because they couldn't find "real" AI engineers and convinced themselves "we can train them up."

Salary data aggregators started showing "AI Engineer" as a distinct role with premium comp, which created even more incentive to rebrand.

The result is a market where the signal-to-noise ratio is maybe one in ten: for every real AI engineer with deep expertise, there are nine people who learned to use ChatGPT's API and call themselves AI engineers.

What Real AI Engineering Actually Requires

Let me be very specific about what separates real AI engineering from API integration with good marketing.

Real AI engineers understand the math.

They can explain backpropagation. They understand attention mechanisms in transformers. They know why you'd use one activation function versus another. They've read the actual papers, not just the blog summaries.

This doesn't mean they need a PhD. But they need to understand what's happening under the hood. Because when something breaks or doesn't perform well, you can't debug it if you don't understand it.
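To make "under the hood" concrete, here's the level I mean: deriving a gradient by hand via the chain rule and checking it numerically. This is a toy illustration in plain NumPy, not production code, and the numbers are made up.

```python
import numpy as np

# Tiny "network": y_hat = w * x + b, with mean squared error loss.
# Backpropagation here is just the chain rule, written out by hand.
rng = np.random.default_rng(0)
x = rng.normal(size=8)
y = 3.0 * x + 1.0          # ground truth: w = 3, b = 1
w, b = 0.5, 0.0            # untrained parameters

y_hat = w * x + b
loss = np.mean((y_hat - y) ** 2)

# Chain rule: dL/dw = mean(2 * (y_hat - y) * x), dL/db = mean(2 * (y_hat - y))
dw = np.mean(2 * (y_hat - y) * x)
db = np.mean(2 * (y_hat - y))

# Sanity check against a finite difference -- the standard way to
# verify a hand-derived gradient before trusting it.
eps = 1e-6
dw_num = (np.mean(((w + eps) * x + b - y) ** 2) - loss) / eps
```

An engineer who can do this for one parameter can debug a framework's autograd when it misbehaves; an engineer who can't is stuck when the library's answer looks wrong.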

Real AI engineers have trained models.

Not fine-tuned a pre-trained model using a library that abstracts everything away. Actually trained models from scratch or near-scratch. They understand learning rates, batch sizes, regularization, overfitting. They've watched training curves and debugged why a model won't converge.

This matters because fine-tuning a foundation model is a completely different skill from building ML systems yourself. Both are valuable. But if you need someone who can do custom model work and you hire someone who's only ever fine-tuned, you're in trouble.
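The knobs above aren't abstract. Even a three-parameter linear model trained with mini-batch SGD shows why the learning rate matters: too small and you wait forever, too large and the loss diverges instead of just training slower. A toy sketch, with all values illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(256, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.01 * rng.normal(size=256)   # slightly noisy labels

def train(lr, epochs=200, batch_size=32):
    """Plain mini-batch SGD on a linear model with squared-error loss."""
    w = np.zeros(3)
    for _ in range(epochs):
        idx = rng.permutation(len(X))          # reshuffle each epoch
        for start in range(0, len(X), batch_size):
            batch = idx[start:start + batch_size]
            err = X[batch] @ w - y[batch]
            w -= lr * 2 * X[batch].T @ err / len(batch)  # gradient step
    return w

loss_good = np.mean((X @ train(lr=0.05) - y) ** 2)           # converges
loss_bad = np.mean((X @ train(lr=2.0, epochs=5) - y) ** 2)   # blows up
```

Someone who has watched real training curves recognizes the second case instantly; someone who has only called a fine-tuning endpoint has never had to.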

Real AI engineers understand infrastructure and scale.

They know how to set up training pipelines. They understand distributed training. They know what happens when you try to run inference on models that are too big for your hardware. They've dealt with the nightmare of versioning models and data.

Most "AI engineers" have only ever run things locally or on a managed platform. They've never deployed a model to production at scale. They've never optimized inference latency. They've never dealt with model drift in production.

Real AI engineers know when not to use AI.

This is the big one. The best AI engineers I know spend half their time talking companies out of using AI for problems that don't need it.

Fake AI engineers want to use AI for everything because that's all they know how to do. Real AI engineers understand the tradeoffs and will tell you when a simple heuristic or classical ML approach is better than a giant transformer.

The Questions That Separate Real from Fake

If you're hiring for AI roles right now, here are the questions that will immediately reveal who actually knows what they're doing.

Don't ask: "What's your experience with AI?"

Ask: "Walk me through a model you've trained from scratch. What architecture did you choose and why? What problems did you run into?"

Fake AI engineers will give you generic answers about using TensorFlow or talk about fine-tuning someone else's model. Real AI engineers will go deep on architectural decisions, data preprocessing challenges, and training instability they debugged.

Don't ask: "What AI tools have you used?"

Ask: "Explain how attention mechanisms work in transformers and why they're better than RNNs for certain tasks."

Fake AI engineers will struggle or give you a memorized definition. Real AI engineers will explain the intuition, the math, and give you examples of when you'd use each approach.
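For reference, the core computation is small enough to sketch in a few lines of NumPy. This is a single attention head with no masking, batching, or learned projections, so it's an illustration of the mechanism, not a usable layer:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V.

    Every query position mixes information from ALL key positions in
    one step -- unlike an RNN, which must pass state along sequentially.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # (n_q, n_k) similarities
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # each row sums to 1
    return weights @ V, weights

rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, 5, 4))   # 5 tokens, d_k = 4
out, weights = scaled_dot_product_attention(Q, K, V)
```

A candidate who can write this from memory and explain why the `sqrt(d_k)` scaling is there has internalized the mechanism; a candidate who can only say "attention is how the model knows what to focus on" has read the blog posts.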

Don't ask: "Have you worked with large language models?"

Ask: "What's the difference between fine-tuning, RLHF, and RAG? When would you use each approach?"

Fake AI engineers think these are all basically the same thing. Real AI engineers understand they're completely different techniques with different tradeoffs and use cases.
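To illustrate just how different they are, here's the retrieval half of a toy RAG setup. The `embed` function is a deliberately crude stand-in for a real embedding model (it hashes words into a count vector), and the documents are made up:

```python
import numpy as np

def embed(text, dim=64):
    """Crude bag-of-words embedding: hash each word into a bucket,
    count, then normalize to unit length for cosine similarity."""
    v = np.zeros(dim)
    for word in text.lower().split():
        v[sum(ord(c) for c in word) % dim] += 1.0
    return v / (np.linalg.norm(v) + 1e-9)

docs = [
    "Our refund policy allows returns within 30 days.",
    "The API rate limit is 100 requests per minute.",
    "Offices are closed on public holidays.",
]
doc_vecs = np.stack([embed(d) for d in docs])

def retrieve(query, k=1):
    sims = doc_vecs @ embed(query)                # cosine similarity
    return [docs[i] for i in np.argsort(sims)[::-1][:k]]

# The retrieved text gets pasted into the prompt -- the model itself is
# never modified, which is the key contrast with fine-tuning and RLHF.
context = retrieve("what is the API rate limit?")[0]
prompt = f"Answer using only this context:\n{context}\n\nQuestion: ..."
```

RAG changes what the model reads at inference time; fine-tuning changes the weights; RLHF changes the weights using a preference signal. Conflating the three is a reliable tell.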

Don't ask: "What vector databases have you used?"

Ask: "Explain how approximate nearest neighbor search works and why we need it for vector databases at scale."

Fake AI engineers have used Pinecone because a tutorial told them to. Real AI engineers understand the actual computer science problem being solved.
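Here's the kind of intuition I'd want a candidate to have, sketched as a toy random-projection LSH index. This is one classic ANN family chosen for brevity; production systems use more sophisticated structures like HNSW or IVF, and the sizes below are illustrative:

```python
import numpy as np

# The point of ANN: at query time we compare against ONE small bucket
# of candidates instead of scanning all 10,000 vectors.
rng = np.random.default_rng(7)
data = rng.normal(size=(10_000, 32))
planes = rng.normal(size=(32, 12))          # 12 random hyperplanes

def bucket(v):
    # Which side of each hyperplane the vector falls on -> a 12-bit id.
    return int("".join(str(b) for b in (v @ planes > 0).astype(int)), 2)

index = {}
for i, v in enumerate(data):
    index.setdefault(bucket(v), []).append(i)

def ann_search(q):
    candidates = index.get(bucket(q), [])   # ~10000 / 2**12 vectors on average
    if not candidates:
        return None                         # real systems probe nearby buckets
    dists = np.linalg.norm(data[candidates] - q, axis=1)
    return candidates[int(np.argmin(dists))]
```

The tradeoff an engineer should be able to articulate: nearby vectors usually hash to the same bucket, so you trade a small chance of missing the true nearest neighbor for a huge reduction in comparisons.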

Don't ask: "What's your experience with prompt engineering?"

Ask: "When would you fine-tune a model versus use retrieval-augmented generation versus just better prompting?"

Fake AI engineers think prompt engineering is AI engineering. Real AI engineers see prompting as one small tool in a much larger toolkit.

The Three Types of "AI Engineers" in the Market

After vetting hundreds of people with AI titles over the past 36 months, we've seen three distinct types.

Type 1: API Integrators (70% of the market)

These are software engineers who learned to call OpenAI, Anthropic, or Cohere APIs. They can build applications that use foundation models. They understand prompt engineering, function calling, maybe RAG if they've done it before.

They're valuable for certain things. If you're building a chatbot or adding LLM features to your product, they can do that.

But they can't train models. They can't do custom ML work. They can't optimize model performance beyond tweaking prompts. And they'll be completely lost if you need anything that requires actual machine learning expertise.

They're not bad engineers. They're just not AI engineers in the traditional sense. They're integration engineers who happen to integrate with AI APIs.

Type 2: ML Engineers (20% of the market)

These are real machine learning engineers. They have CS degrees or equivalent. They've taken ML courses. They understand the theory. They can train models.

But they might not have production experience. They might have mostly done Kaggle competitions or academic projects. They understand the math but haven't dealt with the operational reality of ML systems at scale.

They're trainable and valuable, especially at earlier stage companies. But they're not senior AI engineers. They need mentorship and time to level up.

Type 3: Real AI Engineers (10% of the market)

These are people who understand both the theory and the practice. They've trained models from scratch. They've deployed ML systems to production. They've dealt with data pipelines, model monitoring, inference optimization, all of it.

They can have conversations about recent papers. They can debug training instability. They can make architectural decisions. They know when to use AI and when not to.

These people are rare and expensive and worth every penny. But most of them aren't on the job market because they're already well compensated at top companies or working on their own projects.

What This Means for Your Hiring

If you're a CTO or engineering leader trying to build AI capabilities right now, here's what you need to understand.

First, be honest about what you actually need. If you're adding chatbot features to your product, you don't need a real AI engineer. You need a good software engineer who understands API integration and can learn LangChain or whatever framework makes sense.

Don't pay AI engineer salaries for API integration work. That's a waste of money and you'll end up with people who are overqualified and bored or underqualified and overpaid.

Second, if you actually need real AI/ML work, be prepared for how hard the search is. Real AI engineers are maybe 10% of the people claiming to be AI engineers. You need a completely different vetting process. Technical screens need to go much deeper. You might need to hire consultants to help evaluate candidates if you're not technical enough to do it yourself.

Third, consider alternatives to hiring. If you need AI capabilities for a specific project, bringing in a consulting firm or specialized contractor might be smarter than trying to hire full-time. Real AI engineers don't want to join a company to work on one project. They want to work somewhere AI is core to the business.

Fourth, don't believe the resume. Someone who worked at OpenAI or DeepMind for six months as a junior engineer is not the same as someone who's been doing ML research for five years. Someone who lists "PyTorch" on their resume might have used it once in a tutorial.

Dig deep. Ask technical questions. Request code samples or previous work. Talk to references about their actual contributions, not just whether they were "nice to work with."

The Uncomfortable Truth

Most companies trying to hire AI engineers right now are going to fail.

Not because AI engineers don't exist. But because the supply and demand are so mismatched that finding real talent is nearly impossible without deep networks or specialized help.

The companies winning at AI hiring right now are doing one of three things.

They're paying absurd money to pull people from FAANG companies or AI research labs. We're talking $400K to $600K total comp for senior people. Most companies can't or won't do this.

They're training up strong software engineers internally. This works but takes 12 to 18 months and requires you to have someone senior enough to mentor them. Most companies don't have that person.

They're working with specialized firms that have already built networks of vetted AI talent. This is the fastest path but requires admitting you can't do it yourself, which many CTOs struggle with.

What doesn't work is posting a job on LinkedIn, interviewing whoever applies, and hoping you get lucky. That worked for general software engineering. It doesn't work for AI because the noise is too high and the stakes are too high.

What We've Learned

We've been vetting AI and ML engineers for more than 36 months now.

Out of every 100 people who claim AI expertise, maybe 10 to 15 are legitimate ML engineers and maybe three to five are truly senior AI engineers.

The vetting process takes 10x longer than general software engineers because you need to go so much deeper technically. You can't rely on resume screening. You can't rely on coding tests. You need real technical conversations about architecture, theory, and past projects.

We've also learned that geography matters more than people think. The best AI talent is clustered in very specific locations: SF, Seattle, Boston, London, certain parts of Europe.

And we've learned that companies often don't actually need what they think they need. Half the "we need an AI engineer" conversations turn into "you actually need a senior backend engineer who understands ML APIs." Which is much easier to find.

If your company is trying to build AI capabilities, you need to understand what you're actually looking for and how rare it is.

  • Don't hire "AI engineers" who are really just software engineers with rebranded titles. You'll waste money and time.
  • Don't assume everyone with the right resume is actually qualified. Most aren't.
  • Don't try to DIY this unless you have someone internal who can properly evaluate AI talent. Most CTOs can't, even if they're strong technologists.

The market is full of people claiming AI expertise they don't have. The companies that figure out how to separate signal from noise will win. The companies that don't will spend a lot of money on mediocre hires and wonder why their AI initiatives keep failing.
