Large language models have exposed a gap in computer science education: students can generate code before they can reason about it. The issue is not tool usage itself. The issue is replacing understanding with autocomplete.

When students rely on generated fixes they cannot evaluate, progress looks fast but fundamentals remain fragile. A curriculum designed for the pre-LLM era needs to shift from syntax production to computational reasoning.

What to De-emphasize

Syntax drills and boilerplate-heavy assignments should take less classroom time. LLMs are genuinely good at scaffolding classes, fixing minor syntax errors, and producing repetitive code. Assessing students on these tasks now measures patience more than competence.

What to Emphasize Instead

1) Mental models of execution. Students should understand stack versus heap allocation, data layout, and control flow well enough to debug without guessing.

2) Problem decomposition. The durable skill is mapping unfamiliar problems into solvable components, not memorizing canonical answers.

3) Code reading and critique. Real work is often maintenance. Students need repeated practice auditing code for correctness, complexity, and edge cases.

4) Trade-off reasoning. System choices depend on constraints: latency, reliability, team capacity, and deployment risk. These decisions are rarely one-prompt answers.
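The code-reading practice in item 3 can be made concrete with a short audit exercise. The snippet below is a hypothetical Python example, not drawn from any specific curriculum: a plausible-looking binary search of the kind a model might generate, carrying a subtle termination bug, followed by a corrected version.

```python
# Hypothetical audit exercise: spot the defect in a generated-looking
# binary search, then justify the fix. Function names are illustrative.

def first_ge_buggy(xs, target):
    """Intended: index of the first element >= target in sorted xs
    (len(xs) if no such element).
    Audit finding: when hi == lo + 1 and xs[mid] < target, the update
    lo = mid makes no progress, so the loop never terminates for some
    inputs (e.g. xs=[1, 3], target=4)."""
    lo, hi = 0, len(xs)
    while lo < hi:
        mid = (lo + hi) // 2
        if xs[mid] < target:
            lo = mid          # BUG: should be mid + 1
        else:
            hi = mid
    return lo

def first_ge(xs, target):
    """Corrected version: lo = mid + 1 strictly shrinks the search
    interval on every iteration, so termination is guaranteed."""
    lo, hi = 0, len(xs)
    while lo < hi:
        mid = (lo + hi) // 2
        if xs[mid] < target:
            lo = mid + 1      # interval shrinks -> loop terminates
        else:
            hi = mid
    return lo
```

An exercise like this rewards exactly the skills the list describes: tracing control flow by hand, reasoning about loop invariants, and checking edge cases (empty input, target larger than every element) rather than trusting plausible-looking output.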

Assessment Needs to Change

Closed-book exams are increasingly disconnected from modern engineering, while unrestricted take-homes are easy to outsource to models. A better blend includes oral defenses, live modification tasks, and review-based assessments where students must explain and adapt their own submissions.

A Better Goal for AI in Education

We should teach students to use AI as a multiplier for judgment, not a substitute for it. The strongest graduates will be those who can interrogate generated code, spot hidden assumptions, and make principled decisions under real constraints.