Skills Will Eat AI

Software engineering (SE) is about building long-lived software systems that meet user needs under resource and technological constraints. It is not synonymous with “coding,” the colloquial term for the low-level SE discipline of construction, or implementation. This distinguishes SE from, say, a non-technical entrepreneur’s ability to create a functional prototype of their latest idea within a matter of hours (good for them, though there is little evidence that such prototypes ever become the basis of a sustainable, revenue-generating business). If the task of the software engineer were merely to code, that is, to produce executable or compilable artifacts in a given programming language or software stack, the role would not be about engineering at all.

Software ate the world, AI is eating software, and skills will eat AI!

Emerging AI Caveats Emphasize Human-in-the-Loop

Long-lived systems are evolvable, extensible, and maintainable. They must remain usable and adapt to ever-changing user needs and technological advancements. The timeless principles and practices that control these qualities are more important than ever in the age of AI. AI (and when I say AI, I mean specifically Generative AI) helps us churn out code faster, but it can also do something more important: help us uphold SE principles in the systems we build, provided we know these principles, establish governance procedures based on them, and are able to express them. It is a formidable tool in a software engineer’s toolbox, one that changes how we work and interact with the systems we build. However, it has one significant caveat: AI is a statistical technique, and as such is susceptible to reversion to the mean. Consider how AI behaves when trying to fix a bug in code it has generated. If the first attempt is unsuccessful, it tends to enter a vicious cycle. To break out, it must be forced to introduce noise into its output in a random-walk-style search. If these attempts also fail, the system ends up in an unsalvageably messy state. In the meantime, the AI may declare false victory, backed by self-created checks that superficially pass. The only solution is to roll the system back and start all over again.

Countless developer anecdotes tell similar stories that contradict the bold sound bites about the end of SE and SE jobs. AI may one day overcome some of these difficulties with algorithm-in-the-loop approaches that provide independent validation of the generated output to break out of its regressive behavior, but for the foreseeable future, SE will remain a human-in-the-loop process, with AI having already earned the status of an indispensable augmentation. What works well for tasks that tolerate approximate solutions (occasionally off, but often good enough) is not enough for enduring software systems.

In the midst of the AI hype, evidence and anecdotes about the caveats of AI are starting to emerge. They suggest:

  • The productivity effects of AI are variable and, when positive, exhibit diminishing returns as the codebase grows [4]. Junior and senior developers experience significantly different productivity effects: junior developers receive an initial speed-up but regress rapidly and suffer a latent productivity hit, while senior developers tend to reap more long-lasting benefits [2].
  • Undisciplined use of AI can degrade the correctness, quality, architectural integrity, and maintainability of systems, causing technical debt and security vulnerabilities to grow rather than shrink. Only a small fraction of AI-generated systems compile or run without modification [2, 4].
  • AI reliance is resulting in widespread cognitive decline in core skills, with serious gaps in technical knowledge and practical know-how, and glaring deficiencies in abstraction skills, conceptual reasoning, and critical thinking. Peggy Storey’s excellent recent blog termed this AI-use-induced decline cognitive debt [1].
  • The dizzying multiplicity and evolving nature of AI tools, and the lack of well-codified, disciplined best practices, impose a considerable and growing intellectual burden, or fatigue, on developers [3]. Recent findings by a UC Berkeley team [5] indicate that AI fatigue is real: software engineers report spending a disproportionate amount of effort reviewing and correcting their colleagues’ generated code and being forced to multi-task with frequent, distracting context switches.

Somebody Must Know What Good Software Is

In a recent LinkedIn post, Addy Osmani, Director of Cloud AI at Google, quoted Boris Cherny, who had emphasized that “great engineers are more important than ever.” Cherny had said that a human in the loop must produce the prompts, talk to customers and other teams, and decide what to build next. Adding to this insight, people who actually know what they are doing have to design and build the next generation of software development tools that others will use to build software, as well as the platforms on which that software runs. Here is an excerpt from Osmani’s post:

Boris created Claude Code. His point here is important–when AI handles the code generation, the engineer’s value shifts to the decisions above code: 1. What do we build? 2. Why? For whom? 3. And how it all fits together. The bottleneck was always judgment, taste, and systems thinking. AI just made that more obvious… The leverage feels highest when humans stay crisp on “what problem, what constraints, what good looks like” and let agents explore the solution space aggressively underneath that.

What good looks like hasn’t changed much in the last two decades [6]: many principles are timeless while others evolve. And, again, somebody must possess this knowledge to be able to express it.

Everything Gets Devoured After Becoming Mainstream

In 2011, the venture capitalist Marc Andreessen coined the famous phrase “software is eating the world.” Since as early as 2017, Nvidia CEO Jensen Huang has amended it by declaring that “AI is eating software,” only to backtrack recently, labelling investors’ selloff of software stocks “illogical.” He posited that “AI enhances and uses software rather than replacing the companies that build it.”

When AI has finished devouring software, if it ever does, its exponential growth will flatten; AI may even become a boring topic. This new phase will mark the re-emergence of engineering principles, augmented by AI usage. Engineering software was never about churning out as much code as possible, and it will not be in the foreseeable future. Acute cognitive debt and the vengeful re-emergence of real, core skills will eventually devour AI.

Implications for SE Education

How does all this relate to SE education? AI, rather than making SE education moot, brings it to the foreground. SE education is now more important than ever. High-level concepts and skills, such as those concerning requirements, design, architecture, conceptual integrity, and technical best practices, along with their documented forms, have always been a bit abstract for students, who could barely amortize their value in a mere semester-long project. Now the fruits of these concepts are instantly useful, even essential, as input to AI. Producing these artifacts no longer feels like a mere intellectual exercise to students, but a skill they must master to effectively puppeteer AI with prompts grounded in engineering principles. As a case in point, students are expressing, for the first time in my career, a genuine appreciation of diagramming and system design skills.

Education programs should position themselves to tackle the impending cognitive debt crisis by focusing on the higher-level SE disciplines and preventing knowledge gaps from restraining, or even negating, the huge potential of AI. 

References

  1. How Generative and Agentic AI Shift Concern from Technical Debt to Cognitive Debt. M.A. Storey, University of Victoria, Canada, February 2026. https://margaretstorey.com/blog/2026/02/09/cognitive-debt/
  2. Beyond Code: GenAI’s Strategic Impact on the Entire Software Development Lifecycle. DefineX Group, January 2026.
    https://www.definex.com/wp-content/uploads/2025/10/BEYOND-CODE_GenAIs-Strategic-Impact-on-the-Entire-Software-Development-Lifecycle.pdf 
  3. “AI fatigue is real and nobody talks about it: A software engineer warns there’s a mental cost to AI productivity gains.” Business Insider, February 2026.
    https://www.businessinsider.com/ai-fatigue-burnout-software-engineer-essay-siddhant-khare-2026-2 
  4. The AI Boardroom Gap: Global insights to close the AI boardroom gap and unlock a clear path to AI enterprise success in 2026. Chapter 3 – AI-Assisted Software Development & Chapter 4 – AI System Engineering. Software Improvement Group, January 2026. https://www.softwareimprovementgroup.com/wp-content/uploads/ai-boardroom-gap-2026.pdf
  5. AI Doesn’t Reduce Work–It Intensifies It. Harvard Business Review, February 2026.
    https://hbr.org/2026/02/ai-doesnt-reduce-work-it-intensifies-it 
  6. What’s Good Software, Anyway? Hakan Erdogmus, IEEE Software, 2007.
    https://www.computer.org/csdl/magazine/so/2007/02/s2005/13rRUwdrdNP

The 4C Model for Defining User Stories: Context, Card, Conversation, and Confirmation

Writing user stories that effectively support software product development is difficult for students new to the practice. At ICSE’23, during the 5th International Workshop on Software Engineering Education for the Next Generation (SEENG), I presented a position paper addressing this challenge by extending the existing 3C model for defining user stories with an extra C for ‘context’. The format is targeted at interactive software systems and inspired by a grounded theory study in which the observed product managers provided context by basing most user stories on concrete, validated design artifacts, such as wireframes.

The 4C Model

In addition to defining user stories using the existing 3C model (Card, Conversation, and Confirmation), I ask my students to start defining a user story by first providing its context (as illustrated in Figure 1).

Fig. 1. The 4 C’s of user stories: (concrete and validated) Context, Card, Conversation, and Confirmation.

The user story context is defined by:

  • The name of the larger feature or epic that encompasses the story.
  • A meaningful name for the story summarizing the covered behavior.
  • A concrete and validated design artifact from which the story can be derived (if applicable). The design artifact visually represents a concrete idea for the solution, like a wireframe. It has been validated by stakeholders (see more on that below). It provides the team with a shared understanding of the story’s context and serves as a starting point to derive the story Card, Conversation, and Confirmation. Note that more than one story could potentially be derived from the same design artifact.

The concrete and validated design artifact also reminds students that a creative process needs to occur BEFORE one starts specifying user stories with acceptance criteria. Concrete visual design artifacts (like sketches or wireframes) are effective at supporting the creative process because they allow us to refine our understanding of the problem and the solution at the same time (which is important because it is impossible to fully understand the problem before moving to the solution). In contrast, user stories with acceptance criteria are abstract textual artifacts that poorly support the creative process and are more helpful for guiding implementation.

Extended INVEST Criteria

In addition to satisfying the well-known INVEST criteria (Independent [when possible], Negotiable, Valuable, Estimable, Small, and Testable), each user story should satisfy the following additional criteria:

  • Contextualized: The story is situated in its broader context via a concrete design artifact. It is straightforward to understand the story’s relationships with its encompassing feature or epic and with surrounding stories.
  • Understandable: The behavior covered by the story is easy to understand by stakeholders, especially by developers in charge of implementation (and by faculty and teaching assistants in charge of evaluating the story).
  • Validated: Stakeholders have validated that the encompassing feature or epic satisfies their needs: The feature or epic is useful, usable, and delightful from a user perspective, competitive from a business perspective, and feasible from a technical perspective. This extends the INVEST ‘Valuable’ criterion. The idea is to identify problems very early on, before writing detailed user stories or code, in order to reduce rework.

Example

Figure 2 presents an example of the application of the 4C model.

Fig. 2. The 4 C’s of user stories: An example

The above story is contextualized (it belongs to the Donate Items epic and is related to the provided wireframe), understandable (the behavior covered is easy to comprehend), and satisfies the stakeholders’ needs (assuming that the wireframe has been properly validated by stakeholders).
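For readers without access to Figure 2, the following sketch illustrates what a 4C-style story might look like. It is built around the Donate Items epic mentioned above, but the story name, wireframe reference, conversation notes, and acceptance criteria are illustrative assumptions, not the actual content of the figure.

```text
Context:
  Epic: Donate Items
  Story name: Select items to donate
  Design artifact: a validated wireframe of the donation screen
                   (hypothetical; stands in for the figure's artifact)

Card:
  As a donor, I want to select the items I wish to donate,
  so that the charity knows what to expect in my donation.

Conversation (open questions for the team):
  - Can a donor deselect an item before confirming?
  - Is there a maximum number of items per donation?

Confirmation (acceptance criteria):
  - Given the donor is on the donation screen shown in the wireframe,
    when they select one or more items and confirm,
    then the selected items appear in their donation summary.
```

Note how the Context section ties the story back to its epic and to the validated design artifact, so that the Card, Conversation, and Confirmation can all be derived from a shared, concrete starting point.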

Conclusion

After experimenting with the 4C model over four semesters, with positive initial results, I posit that the model helps students generate stories that are easier to create and review while supporting the development of innovative solutions that satisfy stakeholders’ needs. However, this conclusion is based on expert judgment and anecdotal evidence. Further research is necessary to rigorously evaluate the effectiveness of the proposed 4C model.

Reference

This blog is a short summary of the following position paper:

Cécile Péraire, “Learning to Write User Stories with the 4C Model: Context, Card, Conversation, and Confirmation,” IEEE/ACM 5th International Workshop on Software Engineering Education for the Next Generation (SEENG) at ICSE’23, 2023.
