The Skills Taxonomy Trap (Or: How We Wasted Six Months Categorizing)
It started innocently enough.
"Before we build redeployment pathways, we need a comprehensive skills taxonomy. How can we match people to roles if we don't have a standardized skills framework?"
This made perfect sense. You can't do skills-based workforce planning without knowing what skills actually are, right?
So we built one. A beautiful, comprehensive skills taxonomy.
Eight hundred distinct skills. Twelve major categories. Three proficiency levels for each skill. Cross-referenced to 200+ role families. Endorsed by department heads. Reviewed by L&D. Blessed by the CHRO.
Six months of work. Dozens of stakeholder meetings. Hundreds of hours of subject matter expert time.
We launched it with a big announcement. We asked managers to assess their teams. We built the infrastructure to track skills across the organization.
Three months later, nobody was using it.
Not because people were resistant. Because it was too perfect to be useful.
Here's what we learned the expensive way.
The Perfectionist's Fallacy
When you're building a skills taxonomy from scratch, every decision feels critical:
"Should 'Python' be a separate skill from 'Python for Data Analysis'?"
"Is 'Project Management' too broad? Should we break it into Agile, Waterfall, and Hybrid?"
"How do we differentiate 'Basic Excel' from 'Advanced Excel'? What's the dividing line?"
These questions feel important because you're building the foundation. Get it wrong, and everything built on top of it is wrong too, right?
Wrong.
The perfectionist trap is believing you can design the right taxonomy before you use it. You can't. Because the "right" taxonomy depends entirely on decisions you haven't made yet:
- How will managers actually assess skills?
- What redeployment decisions will you make?
- How will employees interact with this?
- What data will you actually be able to maintain?
You don't know the answers until you start using the system. Which means your perfect taxonomy is almost certainly solving problems you don't actually have while missing problems you do.
What Our Perfect Taxonomy Got Wrong
Remember that question about Python vs. Python for Data Analysis?
We decided they should be separate skills. Made sense at the time. Data science is a different use case from general-purpose programming.
But when managers started assessing employees, nobody knew how to make the distinction. Does someone who writes Python scripts to clean datasets count as "Python for Data Analysis"? What about someone who uses Python for automation?
Managers started asking HR for clarification. HR asked us. We debated it for two weeks and issued guidance: "If the primary purpose is analytical insights, use 'Python for Data Analysis.' Otherwise use 'Python.'"
Nobody read the guidance. Managers made arbitrary decisions. The data became inconsistent. Nobody trusted it.
Meanwhile, we'd lumped all manufacturing equipment operation under "Production Equipment Operation" because we didn't think the specific equipment mattered.
Turns out, it matters enormously. Operating a CNC machine is completely different from operating an injection molding machine. The skills don't transfer. But our taxonomy treated them as interchangeable.
When we tried to build redeployment pathways, the system suggested that CNC operators could easily transition to injection molding roles. Operations leadership looked at our recommendations and immediately stopped trusting the entire system.
The lesson: We spent six months optimizing for theoretical consistency. We should have spent six weeks optimizing for practical decisions.
The Decisions That Actually Matter
Here's what I wish someone had told us before we started:
Your skills taxonomy doesn't need to be perfect. It needs to support specific decisions.
The only decisions that mattered for us were:
- Redeployment decisions: Can person A do job B, or what do they need to learn?
- Development decisions: What skills should we invest in training?
- Hiring decisions: What skills can't we develop internally and must acquire externally?
That's it. Everything else was noise.
If we'd started with those three decisions, our taxonomy would have looked completely different:
Instead of 800 standardized skills, we would have had 40-50 skills that actually determined redeployment viability.
Instead of three proficiency levels for everything, we would have had binary assessments for most skills ("Can do this job-critical task or can't") and detailed assessments only for differentiating skills.
Instead of perfect consistency, we would have had practical categories that matched how managers actually think about capabilities.
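As a rough sketch of what that decision-first structure could look like (the skill names, fields, and levels below are hypothetical, not our actual taxonomy): most skills carry a simple can/can't flag, and only the handful that genuinely differentiate candidates get graded, behavior-anchored levels.

```python
# Hypothetical sketch: a decision-focused skill record. Most skills are binary
# ("can do this job-critical task or can't"); only differentiating skills get
# graded, behavior-anchored levels.
from dataclasses import dataclass, field

@dataclass
class SkillDefinition:
    name: str
    differentiating: bool = False           # graded only if it drives a real decision
    levels: tuple = ("can't do", "can do")  # binary by default

@dataclass
class EmployeeProfile:
    employee_id: str
    assessed: dict = field(default_factory=dict)  # skill name -> one of that skill's levels

# A few dozen skills tied to actual redeployment decisions, not 800 standardized ones
taxonomy = [
    SkillDefinition("set up CNC lathe"),
    SkillDefinition("read maintenance schematics"),
    SkillDefinition("PLC troubleshooting", differentiating=True,
                    levels=("never done it", "can do it with help", "can do it independently")),
]

profile = EmployeeProfile("emp-001", {
    "set up CNC lathe": "can do",
    "PLC troubleshooting": "can do it with help",
})
print(profile.assessed)
```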
The Healthcare System That Started With Decisions
Healthcare network. They needed to redeploy clinical staff as patient volumes shifted between specialties and locations.
They didn't start by building a taxonomy. They started by asking: "What redeployment decisions do we need to make in the next 90 days?"
The answer: "We need to know if ICU nurses can work in Med-Surg units, if Med-Surg nurses can work in Emergency, and if any administrative staff can be trained for patient-facing roles."
So they built a taxonomy for exactly that:
- ICU-specific clinical skills (12 skills)
- Med-Surg capabilities (8 skills)
- Emergency department requirements (10 skills)
- Patient-facing requirements for admin staff (6 skills)
Total: 36 skills. Assessments took 20 minutes per person. Completed across 2,000 employees in three weeks.
Good enough? Absolutely. They identified 200 nurses who could flex between units with minimal training. They found 40 admin staff who could transition to patient-facing coordination roles.
Did their taxonomy cover every nursing capability in perfect detail? No. Did it support the decisions they needed to make? Yes.
That's the only thing that matters.
The pattern: Start with decisions. Build the minimum taxonomy needed to support those decisions. Expand only when you need to support new decisions.
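Here is a minimal sketch of that pattern, using made-up skill names and a made-up training threshold rather than the network's real data: for each person and each target unit, list the missing required skills and flag anyone who could flex with minimal training.

```python
# Hypothetical sketch: which people can flex to which units with minimal training.
# Skill names, IDs, and the training threshold are all assumptions for illustration.
unit_requirements = {
    "Med-Surg": {"telemetry monitoring", "wound care", "medication administration"},
    "Emergency": {"triage", "trauma response", "medication administration"},
}

person_skills = {
    "nurse-114": {"telemetry monitoring", "medication administration", "triage"},
    "nurse-207": {"wound care", "medication administration"},
}

MAX_TRAINING_GAP = 1  # assume one missing skill can be closed with short training

for person, skills in sorted(person_skills.items()):
    for unit, required in unit_requirements.items():
        missing = required - skills
        if len(missing) <= MAX_TRAINING_GAP:
            gap = ", ".join(sorted(missing)) or "nothing"
            print(f"{person} can flex to {unit} (needs training on: {gap})")
```

Nothing fancy: set differences and a threshold. The hard work is deciding which 36 skills go in the sets, not the matching logic.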
The Iteration Advantage
Here's what nobody tells you about skills taxonomies: they're never finished.
New skills emerge. Technology changes. Roles evolve. Your taxonomy from six months ago is already partially obsolete.
This is why perfect upfront design is a trap. You're optimizing a snapshot when you need a video.
The organizations getting skills-based workforce planning right don't have perfect taxonomies. They have evolving taxonomies that improve through use.
How the Manufacturing Company Learned to Iterate
Remember us? The company that spent six months building the perfect taxonomy?
After it failed, we started over. But this time we changed the approach:
- Week 1: Identified five critical redeployment decisions we needed to make in the next quarter
- Week 2: Listed the minimum skills needed to make those decisions
- Week 3: Had managers assess those skills for relevant employees
- Week 4: Made redeployment decisions based on the data
Total time: one month. Total skills: 18.
Was it comprehensive? No. Did it work? Yes.
Then we did it again for the next five decisions. And again. And again.
Six months later, we had a skills taxonomy with 120 skills. But unlike our first attempt, these 120 skills had all been tested against real decisions. We knew they were relevant because we'd used them.
We'd also learned which skills didn't matter. Remember that debate about Python vs. Python for Data Analysis? Turns out we never needed that distinction. When we made actual redeployment decisions, that granularity was irrelevant.
The pattern: Build minimum viable taxonomy. Use it for real decisions. Learn what's missing. Add what you need. Remove what you don't. Repeat.
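A toy sketch of how that loop might be tracked (the skills and decision log below are illustrative, not our data): count which skills a real decision actually touched, keep those, and put the rest up for removal or for a future use case.

```python
# Hypothetical sketch: prune the taxonomy based on what real decisions used.
from collections import Counter

taxonomy = {"Python", "Python for Data Analysis", "CNC operation", "preventive maintenance"}

# Each entry: a real decision and the skills that actually influenced it
decision_log = [
    {"decision": "operator -> maintenance tech", "skills_used": {"CNC operation", "preventive maintenance"}},
    {"decision": "analyst backfill", "skills_used": {"Python"}},
]

usage = Counter()
for record in decision_log:
    usage.update(record["skills_used"])

keep = {skill for skill in taxonomy if usage[skill] > 0}
review_for_removal = taxonomy - keep  # e.g. the Python vs. Python-for-Data-Analysis split

print("keep:", sorted(keep))
print("review for removal:", sorted(review_for_removal))
```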
The Employee Experience Problem
Here's another thing we got wrong: we built our taxonomy for HR's needs, not employees' needs.
When we asked employees to self-assess against 800 skills, the response was overwhelmingly: "This is exhausting and I don't understand half these descriptions."
Eight hundred skills means eight hundred decisions. "Am I proficient at this? What does proficient even mean? Do I have this skill or not?"
Most employees gave up after 50-100 skills. The ones who finished produced untrustworthy data because they were just clicking through to be done.
What Actually Works for Employees
The organizations getting good skills data from employees do three things:
1. Keep it short. Thirty to fifty skills maximum for any individual assessment.
2. Make proficiency levels obvious. Not "Basic, Intermediate, Advanced." That's subjective. Instead: "I can do X task independently" or "I need help with X" or "I've never done X." Concrete observable behaviors.
3. Focus on what's relevant to the employee. Don't ask a production worker to assess data analysis skills. Don't ask an engineer to assess interpersonal conflict resolution. Only ask about skills that might actually matter for their career paths.
When you do this, completion rates skyrocket and data quality improves dramatically.
The lesson: A shorter taxonomy that people actually complete beats a comprehensive taxonomy that nobody finishes.
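As an illustration of what a short, behavior-anchored assessment could look like (the role families, skills, and wording here are invented, not a real instrument): response options are concrete observable statements, and each employee only sees the 30-50 items relevant to their likely paths.

```python
# Hypothetical sketch: a short, role-relevant self-assessment with
# behavior-anchored options instead of "Basic / Intermediate / Advanced".
ASSESSMENT_OPTIONS = [
    "I've never done this",
    "I can do this with help",
    "I can do this independently",
]

MAX_ITEMS = 50  # keep any individual assessment to 30-50 items

def build_assessment(role_family: str, skills_by_role: dict) -> list:
    """Return only the items an employee in this role family would actually see."""
    relevant = skills_by_role.get(role_family, [])[:MAX_ITEMS]
    return [{"skill": skill, "options": ASSESSMENT_OPTIONS} for skill in relevant]

skills_by_role = {
    "production": ["set up CNC lathe", "run first-article quality checks", "log downtime events"],
    "engineering": ["review tolerance stack-ups", "run FEA simulations"],
}

for item in build_assessment("production", skills_by_role):
    print(item["skill"], "->", " / ".join(item["options"]))
```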
The Manager Assessment Problem
We also assumed managers would be good at assessing employee skills. They weren't.
Not because managers don't know their people. Because they don't know what "proficient" means in your taxonomy.
When we asked managers to rate employees on "Project Management" skills, we got wildly inconsistent results:
- Some managers rated everyone highly because "they all manage their work"
- Some managers had impossibly high standards and rated everyone low
- Some managers didn't understand the skill at all and gave random ratings
The data was useless for comparisons across teams.
What Works Better Than Manager Assessment
The companies getting reliable skills data use multiple inputs:
Inference from work history. If someone has been a Python developer for three years, they probably have Python skills. If they've led projects, they probably have project management skills. AI can extract much of this from HR systems, project records, and work samples.
Demonstrated skills. What have they actually done? Projects completed. Certifications earned. Tools used. Contributions shipped.
Selective deep assessment. For critical skills where accuracy really matters, do structured assessments. For everything else, infer from observable data.
This approach scales. Manager time is expensive. Automated inference is cheap. Use expensive manager time only where it genuinely adds value.
The pattern: Infer what you can. Validate what matters. Don't ask busy managers to rate 800 skills for 12 people.
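A simplified sketch of that split, with invented inference rules and field names (a real version would pull from your HRIS, project records, and certification data): infer likely skills from work history, then route only the decision-critical ones to a structured assessment.

```python
# Hypothetical sketch: infer skills from observable history, validate only what matters.
work_history = [
    {"employee": "emp-042", "role": "Python developer", "years": 3,
     "certifications": ["PMP"], "projects_led": 4},
    {"employee": "emp-077", "role": "Production supervisor", "years": 6,
     "certifications": [], "projects_led": 1},
]

def infer_skills(record: dict) -> set:
    """Toy inference rules; a real system would use richer signals (and AI extraction)."""
    skills = set()
    if "python" in record["role"].lower() and record["years"] >= 2:
        skills.add("Python")
    if record["projects_led"] >= 2 or "PMP" in record["certifications"]:
        skills.add("Project Management")
    if "supervisor" in record["role"].lower():
        skills.add("Team Leadership")
    return skills

CRITICAL_SKILLS = {"Project Management"}  # accuracy matters here, so validate explicitly

for record in work_history:
    inferred = infer_skills(record)
    validate = inferred & CRITICAL_SKILLS
    print(record["employee"],
          "| inferred:", sorted(inferred),
          "| send to structured assessment:", sorted(validate))
```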
The "Industry Standard" Trap
When we were building our taxonomy, we spent weeks researching industry standards.
SFIA framework. O*NET database. LinkedIn Skills Graph. Industry-specific skills ontologies.
We thought: "If we align with industry standards, we'll have portability and credibility."
This was partly right and mostly wrong.
Industry standards are built for labor market analysis and job board matching. They're not built for internal redeployment decisions.
When you adopt an industry standard taxonomy wholesale, you inherit problems:
- Skills that are irrelevant to your organization but present in the standard
- Granularity mismatches (too detailed in some areas, too vague in others)
- Categories that don't map to how your business actually operates
When Standards Help and When They Hurt
Use industry standards as inspiration, not gospel.
If you're hiring data scientists, knowing the industry-standard skills for that role is valuable. You want to speak the same language as the external market.
But if you're redeploying internal production workers to maintenance tech roles, industry standards don't help. You need a taxonomy that captures your specific equipment, processes, and transition pathways.
The lesson: Borrow from standards when hiring externally. Build custom for internal decisions.
What We Should Have Done Instead
If I could redo our skills taxonomy project, here's the approach:
Month 1: Pilot with One Use Case
Pick one critical redeployment decision: "Can production operators transition to maintenance tech roles?"
Build minimum taxonomy: 15-20 skills that differentiate operators who can transition from those who can't.
Assess 50 people. Make redeployment decisions. Measure results.
Month 2: Validate and Expand
Did the redeployment decisions work? Did people succeed in new roles?
If yes, which skills actually predicted success? Drop skills that didn't matter. Add skills that were missing.
Pick second use case. Build taxonomy for that. Repeat.
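A back-of-the-envelope sketch of that validation step (the skills and outcomes below are made up): compare success rates for people with and without each assessed skill, keep the skills that actually separated the two groups, and drop the ones that didn't.

```python
# Hypothetical sketch: which pilot skills actually predicted transition success?
pilot_results = [
    {"skills": {"read schematics", "PLC basics"}, "succeeded": True},
    {"skills": {"forklift certification"}, "succeeded": False},
    {"skills": {"read schematics", "PLC basics", "forklift certification"}, "succeeded": True},
    {"skills": {"read schematics"}, "succeeded": False},
]

def success_rate(outcomes: list) -> float:
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

all_skills = set().union(*(r["skills"] for r in pilot_results))
for skill in sorted(all_skills):
    with_skill = [r["succeeded"] for r in pilot_results if skill in r["skills"]]
    without_skill = [r["succeeded"] for r in pilot_results if skill not in r["skills"]]
    # A big gap suggests the skill is predictive; no gap suggests it can be dropped
    print(f"{skill}: success with = {success_rate(with_skill):.0%}, "
          f"without = {success_rate(without_skill):.0%}")
```

With real pilot data you would want more than a handful of cases before dropping anything, but even a crude comparison like this beats debating skill definitions in the abstract.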
Months 3-6: Build Infrastructure
Once you know your taxonomy works for real decisions, build infrastructure:
- Manager assessment tools
- Employee self-assessment
- Learning pathways connected to skills
- Reporting and analytics
But only after you've validated that the underlying skills model is useful.
Total time to useful system: Three months instead of nine.
Total skills at launch: Forty instead of 800.
Probability of adoption: Dramatically higher because people see value immediately.
The Real Goal of Skills Taxonomy
Here's the thing everyone forgets: skills taxonomy isn't the goal. Decisions are the goal.
You don't need a skills taxonomy to feel smart. You need a skills taxonomy to:
- Identify who can do what
- Build redeployment pathways
- Prioritize development investments
- Reduce dependence on external hiring
If your taxonomy helps with those things, it's working. If it's beautiful but unused, it's failed.
Six months building a perfect taxonomy that nobody uses is worse than one month building an imperfect taxonomy that transforms workforce planning.
Where This Leaves Us
Skills-based workforce planning is genuinely transformative. But the transformation doesn't come from perfectly categorizing capabilities.
It comes from using skills data to make better, faster talent decisions.
Start with decisions. Build minimum viable taxonomy. Use it. Learn from it. Iterate.
Don't spend six months building perfect infrastructure. Spend six weeks building good-enough infrastructure and six months learning how to use it effectively.
Your taxonomy should be a tool, not a masterpiece. Tools get better through use. Masterpieces gather dust in museums.
We learned this the expensive way. You don't have to.