We’ve Done This Before
Remember when we in L&D adapted software development’s waterfall model into our ADDIE approach to learning design, and its agile methods into SAM?
It wasn’t a stretch. The logic was sound. Software development had wrestled with the problem of building complex things in stages, managing dependencies, testing before deployment, and iterating based on feedback. Learning design had the same challenges. So we borrowed the frameworks, reshaped them for our context, and made them our own.
It’s one of the things this industry does well. We’ve always been borrowers and adapters, taking ideas from cognitive science, organizational psychology, UX design, and agile development, then weaving them into how we build learning experiences.
So why have we stopped?
Right now, in tech culture, three frameworks are reshaping how high-performing teams work with AI. They’re not abstract theories. They’re open-source tools and methodologies with thousands of users, built by people like Garry Tan (President and CEO of Y Combinator), Jesse Vincent, and Lex Christopherson. They address a problem that L&D teams are going to face very soon, if they haven’t already: how do you organize human-AI collaboration so that AI doesn’t just speed things up, but actually changes the quality of what you produce?
The three frameworks are Superpowers, GSD (Get Stuff Done), and Gstack. And there’s a fourth, our own, that we believe ties them together for L&D: the (AI) DAPT Framework from Apposite Learning Labs.
This article introduces each framework, explains what it does in its original tech context, and explores how learning design and development teams can adapt its principles for AI-assisted L&D work.
Superpowers: Composable Skills for AI Agents
What It Is
Superpowers, created by Jesse Vincent at Prime Radiant, is a software development methodology built around the idea that AI agents shouldn’t operate as generic assistants. Instead, they should have modular, composable “skills”—distinct capabilities that activate based on context. When you’re brainstorming, the agent operates with one set of behaviors. When you’re executing a plan, it shifts into a different mode entirely.
The framework has gained significant traction in the developer community, with over 55,000 GitHub stars. Its core insight is structural: AI performs better when you give it a defined job rather than an open brief. Each “skill” bundles intent, constraints, and execution logic into a single unit, so the AI knows not just what it can do, but what it should do in a given moment.
The L&D Adaptation
Learning design is not one activity. It’s a sequence of cognitively distinct activities that most teams collapse into a single workflow. Needs analysis requires different thinking than content development. Assessment design requires different thinking than visual design. Review and QA require different thinking than stakeholder communication.
When we use AI as a generic assistant for all of these, we get generic results. The Superpowers principle suggests a different approach: define distinct AI “skills” for each phase of learning design work.
AI performs better when you give it a defined job rather than an open brief. This is as true in learning design as it is in software engineering.
Imagine an AI workflow where the tool operates in “Needs Analysis” mode, asking probing questions, challenging assumptions, and refusing to jump to solutions. It then shifts to “Content Architecture” mode, structuring learning sequences, applying cognitive load principles, and organizing information hierarchically. Then it shifts again to “Assessment Design” mode, generating items aligned to specific objectives, checking for alignment, and flagging gaps.
Each mode has its own constraints and priorities. The AI doesn’t try to do everything at once. It does one thing well, then hands off to the next skill.
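To make the idea concrete, here is a minimal sketch of what skill-based prompting could look like, written in plain Python. The skill names, intents, constraints, and the `system_prompt` helper are our own illustrations for L&D, not definitions from the Superpowers repository.

```python
# A minimal sketch of Superpowers-style composable "skills" for learning
# design. These skill definitions are illustrative, not taken from the
# Superpowers codebase.

from dataclasses import dataclass

@dataclass
class Skill:
    name: str
    intent: str       # what this mode is for
    constraints: str  # what it must and must not do in this mode

SKILLS = {
    "needs_analysis": Skill(
        name="Needs Analysis",
        intent="Probe the performance problem before proposing solutions.",
        constraints="Ask clarifying questions and challenge assumptions; "
                    "refuse to draft content or recommend formats.",
    ),
    "content_architecture": Skill(
        name="Content Architecture",
        intent="Structure learning sequences from validated objectives.",
        constraints="Organize information hierarchically and manage "
                    "cognitive load; do not write learner-facing copy yet.",
    ),
    "assessment_design": Skill(
        name="Assessment Design",
        intent="Generate items aligned to specific objectives.",
        constraints="Flag any objective with no matching item, and any "
                    "item with no matching objective.",
    ),
}

def system_prompt(skill_key: str) -> str:
    """Bundle a skill's intent and constraints into one system prompt,
    so the AI knows not just what it can do but what it should do."""
    s = SKILLS[skill_key]
    return (f"You are operating in {s.name} mode.\n"
            f"Intent: {s.intent}\n"
            f"Constraints: {s.constraints}")

# Each phase gets a defined job, not an open brief.
print(system_prompt("needs_analysis"))
```

Each prompt would feed whatever AI client your team uses; the point of the design is that intent and constraints travel together as a single unit.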
Key Takeaway for L&D Teams
Stop treating AI as a single tool. Design distinct modes or prompting frameworks for each phase of your learning design process. The modularity isn’t overhead; it’s what makes the output consistently good.
GSD: Structured Phases, Fresh Contexts
What It Is
GSD (Get Stuff Done), created by Lex Christopherson, is a workflow framework that solves what developers call “context rot”—the quality degradation that happens when an AI assistant tries to handle too many different cognitive tasks in a single session. The longer a session runs, the more prior context accumulates, and the worse the AI’s output becomes.
GSD’s solution is structural: break every complex project into three distinct phases—plan, execute, and review—each with its own clean context window. The AI gets a fresh start for each phase, operating with full attention on the current task rather than carrying the baggage of everything that came before.
The framework has gained rapid adoption, trusted by engineers at companies like Amazon, Google, and Shopify. Its core principle is that speed-to-quality matters more than speed-to-output.
The L&D Adaptation
Context rot isn’t a software problem. It’s a thinking problem. And L&D teams experience it constantly.
Consider a typical AI-assisted content development session. You start by discussing the learning objectives with AI. Then you ask it to draft a storyboard. Then you ask it to generate assessment items. Then you ask it to review its own work. By the end of the session, the AI is operating with a muddled context—part strategy conversation, part content draft, part quality review. The output reflects that muddiness.
GSD’s principle of phase separation maps directly onto learning design, as the code sketch after this list illustrates:
- Plan phase: Define learning objectives, map the performance gap, outline the content architecture. This is the strategic thinking.
- Execute phase: Develop content, build interactions, create assessments. This is the production work. Give the AI a clean context with just the plan as input.
- Review phase: Evaluate the output against the plan. Check alignment, quality, accuracy. Again, with a clean context, the AI reviews with fresh eyes.
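The sketch below shows one way to enforce those clean handoffs, assuming a generic chat-completion client. The `chat` stub and the three phase functions are our illustration of the principle, not GSD’s actual implementation.

```python
# An illustrative sketch of GSD-style phase separation for L&D work.
# Each phase starts from an empty context and receives only the artifact
# the previous phase produced, so no "context rot" carries over.

def chat(system: str, user: str) -> str:
    """Stand-in for a real AI client call; replace with your provider's API."""
    return f"[model reply to: {system[:50]}...]"

def plan_phase(project_brief: str) -> str:
    # Fresh context: only the brief goes in. Output is the plan.
    return chat(
        system="Define learning objectives, map the performance gap, "
               "and outline the content architecture.",
        user=project_brief,
    )

def execute_phase(plan: str) -> str:
    # Fresh context: only the approved plan goes in, not the planning chatter.
    return chat(
        system="Develop content, interactions, and assessments from this plan.",
        user=plan,
    )

def review_phase(plan: str, draft: str) -> str:
    # Fresh context: the reviewer sees the plan and draft with fresh eyes.
    return chat(
        system="Evaluate the draft against the plan for alignment, "
               "quality, and accuracy.",
        user=f"PLAN:\n{plan}\n\nDRAFT:\n{draft}",
    )

plan = plan_phase("Brief: reduce onboarding errors in claims processing.")
draft = execute_phase(plan)
review_notes = review_phase(plan, draft)
```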
The deeper insight from GSD is that speed should be measured differently. Most L&D teams adopting AI measure speed-to-deliverable: how fast can we produce a course? GSD suggests measuring speed-to-validated-learning: how fast can we get to something we’ve confirmed actually works?
GSD should mean “get to validated learning faster.” Not “get to SCORM packages faster.”
This means using AI to accelerate prototyping and testing, not just production. Build three rough versions of an interaction and test them with learners before you polish one. That’s GSD for L&D.
Key Takeaway for L&D Teams
Separate your AI-assisted workflow into distinct phases with clean handoffs. Don’t let the AI carry strategy, production, and review conversations in one session. The phase separation is what keeps quality high.
Gstack: Role-Based AI Collaboration
What It Is
Gstack, created by Garry Tan, President and CEO of Y Combinator, is perhaps the most ambitious of the three frameworks. It turns a single AI assistant into a virtual team of specialists, each with its own role, priorities, and constraints. Using Gstack, Tan reported shipping 600,000 lines of production code in 60 days—part-time, while running YC full-time.
The framework defines 23 distinct “skills” organized around startup roles: CEO (product thinking and strategy), Designer (visual and UX decisions), Engineering Manager (architecture and planning), QA Engineer (testing and verification), Release Manager (deployment), and Document Engineer (documentation). Each role has its own mode of operation, and the AI shifts between them based on what the work requires.
The key philosophical insight: AI should not stay in one generic mode. It needs explicit cognitive gears. A product thinking mode that challenges your assumptions is fundamentally different from an execution mode that implements your decisions precisely.
The L&D Adaptation
Learning design teams already operate with implicit role distinctions. The person doing needs analysis thinks differently than the person writing storyboards, who thinks differently than the person building interactions, who thinks differently than the person running QA.
But when we use AI, we collapse all of these roles into one undifferentiated assistant. “Hey AI, help me with this course.” That’s like asking one person to simultaneously be the strategist, the writer, the developer, the reviewer, and the learner advocate. No human team would work that way. Why do we expect AI to?
The Gstack adaptation for L&D would define explicit AI roles, sketched in code after this list:

- Learning Strategist mode: Challenges your framing, asks “are we solving the right problem?”, pushes back on assumptions about what learners need.
- Content Architect mode: Structures learning sequences, applies instructional design principles, organizes content for cognitive load management.
- Content Developer mode: Writes, builds, and produces based on the architecture. Focused on quality execution, not strategy.
- QA and Review mode: Evaluates output against objectives, checks accessibility, identifies gaps, flags inconsistencies.
- Learner Advocate mode: Reads everything from the learner’s perspective. Flags jargon, questions relevance, challenges engagement assumptions.
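As a sketch of what that could look like in practice, the snippet below runs each role over the same draft in its own fresh context. The role charges, the `chat` stub, and the `multi_role_review` helper are hypothetical; Gstack’s actual 23 skills are defined for software roles, not learning design.

```python
# Illustrative sketch of Gstack-style role-based review, adapted for L&D.
# The role definitions are our own, not Gstack's skill files.

ROLES = {
    "Learning Strategist": "Challenge the framing. Ask whether we are "
        "solving the right problem; say so if this should be a job aid.",
    "Content Architect": "Check sequencing, hierarchy, and cognitive "
        "load against the stated objectives.",
    "QA Reviewer": "Check output against objectives, accessibility, and "
        "consistency. Be specific about what fails and where.",
    "Learner Advocate": "Read everything as the learner. Flag jargon, "
        "question relevance, challenge engagement assumptions.",
}

def chat(system: str, user: str) -> str:
    """Stand-in for a real AI client call."""
    return f"[feedback from {system.split('.')[0]}]"

def multi_role_review(draft: str) -> dict[str, str]:
    """Run every role over the same draft, each in a fresh context,
    so each critique comes from a different set of priorities."""
    return {
        role: chat(system=f"You are the {role}. {charge}", user=draft)
        for role, charge in ROLES.items()
    }

for role, notes in multi_role_review("Module 1 storyboard ...").items():
    print(f"{role}: {notes}")
```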
The power of this approach isn’t just efficiency. It’s that each role brings different values and priorities to the same content. The Learning Strategist might say, “This objective isn’t worth building a course for. It’s a job aid.” The Learner Advocate might say, “This scenario feels contrived. Real learners won’t engage with it.” These are the kinds of challenges that a generic AI assistant never makes.
Key Takeaway for L&D Teams
Define explicit roles for AI in your learning design workflow. Don’t use the same generic prompt for strategy, production, and review. Each role should have its own priorities, constraints, and permission to challenge the work from a different angle.
The (AI) DAPT Framework: The Strategic Spine
From Operational Patterns to Strategic Methodology
Superpowers, GSD, and Gstack each offer powerful operational patterns for how teams can work with AI. But operational patterns need a strategic methodology underneath them. Without a framework for deciding what to analyze, what competencies to develop, and how to track progress, even the best operational patterns produce activity without direction.
This is where the (AI) DAPT Framework, developed by Apposite Learning Labs, enters the picture.
(AI) DAPT is a proprietary five-phase methodology designed specifically for organizations integrating AI into their learning and development workflows. Where Superpowers tells you to modularize your AI skills, (AI) DAPT tells you which skills to build and why. Where GSD tells you to separate phases, (AI) DAPT tells you what each phase should accomplish strategically. Where Gstack tells you to define roles, (AI) DAPT provides the competency map that determines which roles matter most for your context.
| Letter | Phase | Focus |
|---|---|---|
| A | Analyze | Map workflows, identify AI opportunity zones, evaluate organizational readiness |
| D | Define | Articulate vision for human-AI collaboration, set integration goals aligned to business outcomes |
| A | Assess | Evaluate tools against needs, run pilots, measure adoption friction and quality improvements |
| P | Plan | Design phased rollout with quick wins, create upskilling pathways, build support structures |
| T | Track | Monitor adoption metrics, measure business impact, identify best practices, iterate on strategy |
How (AI) DAPT Connects the Frameworks
The (AI) DAPT Framework isn’t a competitor to Superpowers, GSD, or Gstack. It’s the layer that sits above them, providing strategic direction for operational decisions.
During the Analyze phase, you identify which workflows in your L&D process would benefit from AI integration. This is where you decide whether Superpowers-style skill modularity, GSD-style phase separation, or Gstack-style role definition would add the most value to each workflow.
During the Define phase, you articulate what competencies your team needs to work effectively with AI in these new patterns. This isn’t just technical skill. It includes the judgment to know when AI output needs human override and the confidence to challenge AI-generated content.
During the Assess phase, you evaluate the gap between where your team is and where it needs to be. You run pilots using the operational frameworks, measuring not just speed but quality, alignment, and learner impact.
During the Plan phase, you design a phased rollout strategy. Perhaps you start with GSD-style phase separation for content development, then layer in Superpowers-style skill modularity for review processes, then build toward full Gstack-style role-based AI collaboration.
During the Track phase, you monitor adoption, measure impact, and iterate. The critical question at this stage: is AI integration changing what the learner experiences, or is it only changing the speed at which we produce the same thing?
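As a purely hypothetical illustration of that question, a Track-phase log might record production speed and learner impact side by side, so one is never reported without the other. The field names and logic below are our invention, not part of the (AI) DAPT materials.

```python
# Hypothetical Track-phase record: speed-to-deliverable alongside
# learner-impact signals. All field names are illustrative inventions.

from dataclasses import dataclass

@dataclass
class PilotRecord:
    project: str
    days_to_ship: float                  # speed-to-deliverable
    days_to_validated_prototype: float   # speed-to-validated-learning
    assessment_score_delta: float        # pre/post change, percentage points
    behavior_change_observed: bool       # reported by managers post-rollout

def changed_the_learner_experience(r: PilotRecord) -> bool:
    """Faster production only counts if learner outcomes moved too."""
    return r.assessment_score_delta > 0 or r.behavior_change_observed

pilot = PilotRecord("Claims onboarding", 18.0, 9.0, 12.5, True)
print(changed_the_learner_experience(pilot))  # True
```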
The (AI) DAPT Difference
Operational frameworks tell you how to work with AI. The (AI) DAPT Framework tells you why, what to prioritize, and how to measure whether it’s actually working. It’s the strategic spine that gives operational patterns direction and accountability.
The Bigger Picture: What Changes for the Learner?
There’s a bias worth naming in this entire conversation. Most discussions about AI in L&D center on the provider’s economics: how AI makes us faster, cheaper, more efficient. The frameworks above can easily be read through that lens. Adopt Superpowers to modularize your workflow, adopt GSD to reduce rework, adopt Gstack to do more with fewer people.
But that framing misses the more important question.
If your AI workflow overhaul just means you ship the same course 40% faster, you’ve optimized the wrong thing.
The real test of these frameworks, adapted for L&D, is whether they change the learner’s experience. Does Superpowers-style modularity produce learning that’s better aligned to performance needs, because the AI was operating in “Needs Analysis” mode instead of rushing to content? Does GSD-style phase separation produce content that’s more rigorously validated, because review happened with fresh context instead of fatigued attention? Does Gstack-style role definition produce more learner-centered design, because a dedicated “Learner Advocate” mode challenged every assumption?
These are the questions that matter. And they’re the questions that the (AI) DAPT Framework’s Track phase is designed to answer.
Getting Started
You don’t need to adopt all four frameworks at once. The most practical entry point depends on where your team feels the most friction today.
- If your AI output feels inconsistent or generic, start with Superpowers principles. Define distinct AI modes for different phases of your work.
- If you’re producing a lot but quality is uneven, start with GSD principles. Separate planning, production, and review into distinct sessions with clean contexts.
- If AI feels like a bolt-on rather than a team member, start with Gstack principles. Define explicit roles for AI in your workflow and give each role its own priorities.
- If you need strategic direction for the whole effort, start with (AI) DAPT. Map your workflows, define the competencies you need, assess the gaps, plan the rollout, and track the impact.
We’ve always been borrowers in L&D. We took waterfall and made ADDIE. We took agile and made SAM. We took UX research and made learner experience design. The question now is whether we’ll do the same with the frameworks emerging from AI-native teams, or whether we’ll let this moment pass and wonder, a few years from now, why our workflows still feel like 2023 with a chatbot bolted on.
The frameworks are here. The adaptation is ours to make.
About Apposite Learning Labs
Apposite Learning Labs is the experimental and thought-leadership arm of Apposite Learning Solutions, a 13-year-old learning design and development company based in the US and India. Apposite serves enterprise clients including Microsoft, Novartis, Deloitte, and other Fortune 500 organizations. The (AI) DAPT Framework is a proprietary methodology developed by Apposite for organizations integrating AI into learning and development workflows.
For more information: https://appositelearning.com/
