A course that ticked every box. Solid structure. SME-approved. Interactive. And still, six weeks later, nothing changed. No behaviour shift. No adoption movement. The learning evaporated somewhere between delivery and work.
Here’s the question that most post-mortems never ask: What if the content was never the problem?
The Instinct to Fix Content Is the Trap
When a learning experience underperforms, the diagnosis is almost automatic: shorten the modules, add examples, improve the visuals, make it more interactive. These fixes feel productive. And that feeling is exactly what leads us in the wrong direction.
The real problem lies in a spot that almost always goes unnoticed – in the gap between what a designer intends and what a learner actually understands. Content quality doesn’t close that gap. It can’t. Because the gap isn’t about quality. It’s about meaning.
Learning doesn’t fail at the content level. It fails in translating your intent to the learner’s level of thinking. And that distinction changes everything about how you diagnose a broken learning experience.
Why Two People Can Read the Same Course and Learn Different Things
There’s a concept from cognitive and social science called intersubjectivity and it explains more about learning failure than most instructional design frameworks do.
Meaning is not something you embed in text and graphics. It’s something that gets built in the space between designer and learner, shaped by prior knowledge, cultural context, cognitive style, and lived experience. The same paragraph, the same scenario, the same worked example – different learners construct genuinely different meanings from them. Neither reading is wrong. They’re just operating from different maps.
Think about a typical SaaS onboarding course. For someone migrating from a similar tool, it reads like a familiar map with new street names. For someone from a different professional background entirely, it can read like a foreign language – technically clear, structurally invisible. The content didn’t change. The learner’s cognitive entry point did.
When we design, we work toward a clear objective and shape meaning around it. But too often, that meaning is built for us to understand – making the designer the learner, rather than the actual audience.
The Default Learner Built into Standard Instructional Design
Most learning experience design rests on invisible assumptions: that learners share a common knowledge baseline, that linear sequencing works universally, that language clear to the designer is clear to the learner, that engagement mechanics perform equally across different cognitive styles.
These aren’t careless mistakes. They’re the natural consequence of designing from a single point of view.
This is where Universal Design for Learning (UDL) research becomes instructive beyond its traditional application. When a neurodivergent learner struggles with a course that “everyone else” seems to navigate fine, the issue is rarely the learner. It’s the narrowness of the design. That struggle is a diagnostic signal – a sharp version of something that blunts learning outcomes across your entire audience.
Standard instructional design isn’t neutral. It has a default learner hardcoded into it. And that learner rarely matches the actual people in your LMS.
What Happens When You Design from the Learner’s Meaning-Making Process
Platform adoption: 5% to 75%. Onboarding: redesigned from 11 hours to 1.
These are the outcomes Sara Hounshell documents from her enterprise L&D work, and they weren’t achieved through better content. They were achieved by asking a different question at the start of the process.
Hounshell’s background is in Deaf education, a context where meaning-making can never be assumed. Every lesson required explicit negotiation of how a learner was constructing understanding, not just what information was being delivered. When she brought that lens into product strategy, the results were measurable precisely because the question had changed:
How is this person actually making sense of what’s in front of them?
That question restructures everything downstream – sequencing, modality, language choices, entry points, assumed context. It moves the design process from content-out to learner-in.
The Question That Separates Good L&D From Effective L&D
The content trap is seductive because it’s concrete. There’s always something to fix – a module to tighten, a visual to improve, a scenario to rewrite. It keeps teams busy without touching the root cause.
The harder question cuts differently: Whose meaning are we designing for?
Join us in Episode 7 of The Learning Buzz on April 16 to explore how leading L&D teams design for meaning, not just content, and drive measurable outcomes.
