Back in, say, 2023, interactions with AI systems were simple. We typed in a request and the model replied. At the time, the theory was that better phrasing produced better answers. With enough persistence, we learned that a small tweak in wording could swing the result from useless to surprisingly good. We thought of prompting as upgraded instruction writing, somewhere between asking a smart colleague for help and filling out a very literal form.
On the surface, our interactions may look similar, but what’s happening inside has changed, both in the advancement of the systems and in how we work with them. Prompting has been evolving from wording to structure, then from structure to process, and, increasingly, from process to thinking. Underneath that familiar chat interface, there has been a not-so-subtle shift in how problems get framed, explored and shared.
It helps to think about three overlapping phases: an instruction phase, a structure phase, and a workflow-and-thinking phase. Each has its own strengths and blind spots, and each becomes easier to see when we lay them out side by side.

In the early instruction phase, prompting was essentially about telling the model what to do in a single shot. Techniques like zero-shot prompting took advantage of the model’s pre-training by describing a task without supplying examples, then letting the system infer the rest from patterns it had been exposed to during training.¹ This was liberating! We could ask for summaries, explanations, classifications, or basic creative work without having to collect data or build a custom model.
IBM and others presented zero-shot prompting as a practical way to tap into a model’s generalization ability: explain the task clearly, give enough context and let the system handle a new problem by drawing on its learned representations rather than task-specific fine-tuning.¹ ² In principle, that meant we could move faster, but in practice, it meant we were leaning heavily on whatever the model had already absorbed, with limited visibility into how it was reasoning.
Limitations to the zero-shot approach showed up early. Zero-shot prompts could produce fluent responses that sounded confident while completely missing the requester’s intent.¹ ² Summaries would emphasize the wrong aspects, classifications would reflect subtle biases, explanations would be plausible but incomplete. Users tried to fix this by writing more precise instructions, adding more detail, or stacking constraints into a single, increasingly long message. Sometimes that helped, but often it didn’t.
Because of that unsatisfactory experience, the structure phase of prompt engineering started to emerge. People realized that changing the shape of a prompt often mattered more than changing individual words. Few-shot prompting, which adds examples directly into the prompt, gave the model a template to imitate and improved consistency.¹ ² Chain-of-thought prompting, which nudges the model to reason step by step, helped with multi-step problems by making intermediate reasoning explicit.³ Even without formal terminology, practitioners were discovering that the prompt was less a request and more a small design space.
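The structural move from zero-shot to few-shot is easy to see in code. The sketch below assembles a few-shot prompt as plain text; the classification task, labels, and helper name are illustrative, not taken from any particular library or provider.

```python
# Minimal sketch: assembling a few-shot prompt as plain text.
# The sentiment task and examples here are illustrative only.

def build_few_shot_prompt(instruction, examples, query):
    """Concatenate an instruction, worked examples, and the new input
    so the model has a template to imitate."""
    parts = [instruction, ""]
    for text, label in examples:
        parts.append(f"Text: {text}")
        parts.append(f"Sentiment: {label}")
        parts.append("")
    parts.append(f"Text: {query}")
    parts.append("Sentiment:")
    return "\n".join(parts)

prompt = build_few_shot_prompt(
    "Classify the sentiment of each text as positive or negative.",
    [("The release went smoothly.", "positive"),
     ("The rollout broke everything.", "negative")],
    "Support resolved my ticket in minutes.",
)
print(prompt)
```

The point is not the string concatenation but the shape: the examples set a pattern, and the trailing "Sentiment:" leaves exactly one slot for the model to fill.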

Guides on prompt engineering began to systematize these prompt patterns. They presented zero-shot prompting as a baseline, then recommended moving to few-shot, structured outputs, or reasoning prompts when tasks grew more complex.¹ ³ The guiding question we asked ourselves changed from “What is the right sentence to type here?” to “How should the task be staged so that the model has a sensible path to follow?” It was still easy to treat this as a bag of tricks, but the underlying change was more significant than that: prompting was becoming more thoughtful, a way of shaping the interaction with the model, not just a wall of text.
This structural awareness has since expanded into something closer to workflow design. Instead of trying to get everything into one perfect message, users have started to treat prompting as a sequence of moves. We might begin with a rough summary, then ask for missing perspectives, then request counterarguments, and finally ask the model to synthesize a refined version that reflects the full range of inputs. The original goal didn’t change, but the process to reach it became multi-faceted.
This kind of sequencing is especially visible in how education has come to use AI to promote critical thinking. When students use AI tools in the course of their lessons, instructors who want to preserve genuine thought no longer focus only on forbidding “answer-seeking” prompts. They ask students to show the full interaction: initial question, intermediate refinements, challenges to the model’s response, and final evaluation. The prompt becomes part of the learning event. The student’s choices about what to ask, how to push back, and where to add context are treated as evidence of thinking rather than as something that happens offstage.

The same pattern is showing up in the workplace. Ask an AI system for a report and accept the first draft, and we are operating in the instruction phase, even if we use sophisticated phrasing. Ask for an initial draft, interrogate its assumptions, insert external constraints, and then ask the model to reconcile everything into a revised version, and now we’re closer to a workflow. The individual prompts matter much less than the overall shape of the conversation.
At this point, prompting begins to overlap with how people think, not just how they phrase. Writing a prompt in this newer context means choosing what inputs are relevant to the task, which constraints really matter, which perspectives should be represented, and where uncertainty should be exposed rather than glossed over. Those decisions demonstrate thoughtful judgment and influence how the model’s output evolves over multiple iterations.
It might be useful here to introduce the idea of prompting literacy. It involves at least four practical skills that have emerged as usage has matured.
First, problem framing. Before the model can do useful work, someone has to decide what the task actually is, how success will be judged, and what kind of answer and format are appropriate. Resources that describe zero-shot prompting often implicitly assume that this framing has already happened: the user knows what they want and can state it clearly.¹ ² In practice, framing is rarely settled up front: users rely on the interaction to clarify the problem itself, revising their questions as they see how the model interprets them.
Second, process design. Work on human–machine creativity emphasizes that combining people with AI does not automatically produce better ideas. Without something that looks like a deliberate structure, performance will plateau.⁴ Researchers studying creative human–AI teams describe this in terms of stages: ideation, critique, revision, integration.⁴ Someone has to decide when the model generates, when humans intervene, and how feedback gets incorporated. The prompt design skill is the visible part of a larger process that governs the whole interaction.
Third, evaluation and revision. Most systems can now generate plausible output with minimal guidance, at least on straightforward tasks. That makes evaluation more important, not less. Users should treat the first response as a draft to be challenged, checked against external sources, and reconstructed under different constraints. This is a far more effective form of prompting literacy than simply trying new phrasings until something “looks right”: prompts become hypotheses about how to steer the model, which users then test and refine.
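Treating a prompt as a hypothesis can be sketched as a small generate-check-revise loop. The model call below is a stub standing in for a real LLM API, and the checklist of required terms is an illustrative stand-in for whatever evaluation criteria a real user would apply.

```python
# Sketch of prompt-as-hypothesis: generate, check against explicit
# criteria, and fold failed checks back into a revised prompt.
# `call_model` is a stub for a real LLM API; criteria are illustrative.

def call_model(prompt):
    # Placeholder response; a real implementation would query an LLM.
    return "Draft summary covering scope and timeline."

def prompt_and_revise(base_prompt, required_terms, max_rounds=3):
    """Re-prompt until the response mentions every required term."""
    prompt = base_prompt
    for round_num in range(1, max_rounds + 1):
        response = call_model(prompt)
        missing = [t for t in required_terms if t not in response.lower()]
        if not missing:
            return response, round_num
        # Revise the hypothesis: make the failed checks explicit.
        prompt = f"{base_prompt}\nBe sure to address: {', '.join(missing)}."
    return response, max_rounds

response, rounds = prompt_and_revise(
    "Summarize the project plan.", ["scope", "timeline"])
```

Even this toy version makes the key shift visible: the loop exits on an explicit evaluation, not on the user’s first impression of the output.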
Fourth, role assignment. Research on human–AI teams highlights the importance of clarifying who is responsible for what.⁴ Should the model propose options while humans decide? Should the model critique human ideas, or the other way around? Should the model draft content while humans supply structure, or should humans outline and the model fill in details? Prompting, then, is used to assign roles. A simple change from “Write a plan” to “Generate three alternatives that challenge our current plan, then list trade-offs for each” is as much role design as it is wording.
IBM’s own materials reflect this shift away from single-turn interactions. They describe zero-shot prompting as a powerful starting point for new tasks, but more advanced materials move quickly into multi-step flows, tool use, and orchestration.¹ ³ Where guides once focused on single prompts, they now emphasize prompt sequences and reusable templates that capture entire workflows rather than one-off tricks.³
The same trend appears when discussing creativity. Work from IMD and related research argues that human–machine collaborations require explicit design if they are going to deliver better creative outcomes.⁴ Gains are limited when teams simply add AI to an existing process without rethinking how ideas move between human and machine.⁴ When teams restructure the process so that models are used to challenge assumptions, explore alternatives and expose blind spots at specific points in a project, results consistently improve.
Treating zero-shot prompting as a universal default was a transitional phase. It’s still useful for quick tasks and first passes, but as soon as traceable reasoning, creative exploration, or multi-stakeholder review is involved, more structured, multi-step interactions are required.¹ ² ³
We believe that the trend is to design for interaction sequences. A team might standardize on patterns such as “draft → critique → expand perspectives → integrate constraints → finalize,” with specific prompts at each step. These patterns can then be refined, shared, and adapted, with prompting functioning as a coordination mechanism for human and AI.
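A standardized pattern like “draft → critique → expand perspectives → integrate constraints → finalize” can be sketched as a small pipeline of prompt templates. Here `call_model` is a stub standing in for a real chat API, and the stage templates are illustrative placeholders for whatever a team would actually standardize on.

```python
# Sketch of a multi-stage prompting workflow. `call_model` is a stub
# for a real chat API; the stage templates are illustrative.

def call_model(prompt):
    # Placeholder: a real implementation would send `prompt` to an LLM.
    return f"[model response to: {prompt[:40]}...]"

STAGES = [
    ("draft",     "Write a first draft addressing: {input}"),
    ("critique",  "List the weakest assumptions in this draft:\n{input}"),
    ("expand",    "Add perspectives missing from this critique:\n{input}"),
    ("integrate", "Reconcile the draft with these constraints:\n{input}"),
    ("finalize",  "Produce a final version incorporating:\n{input}"),
]

def run_workflow(task):
    """Feed each stage's output into the next; keep a trace for review."""
    trace, current = [], task
    for name, template in STAGES:
        current = call_model(template.format(input=current))
        trace.append((name, current))
    return trace

trace = run_workflow("a rollout plan for the new billing system")
for name, output in trace:
    print(name, "->", output)
```

Keeping the trace, rather than only the final output, is what lets a team refine, share, and audit the pattern itself.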

For education and skill development, the implication is to help users develop the critical-thinking literacies of effective prompting: problem framing, process design, evaluation and revision, and role assignment. A student who can explain why they chose a particular sequence of prompts, how they evaluated each response, and how they integrated external information is doing something very different from a student who just generated an answer and copied it.
Of course, models will continue to require fewer explicit instructions to produce acceptable answers on straightforward tasks. Improvements in training and context handling already allow systems to infer more from shorter prompts. At the same time, work on human–AI teaming and the growing emphasis on multi-step workflows suggest that the real gains will come from better interaction design rather than from more polished single prompts.¹ ³ ⁴ Prompting will feel like steering a joint investigation.
Prompting started as a way to tell machines what to do, but it is turning into a way to think and explore with them. Knowing the right keywords still helps, but the more the field evolves, the more it is about how people frame problems, structure interactions, and guide the development of ideas through a process. The chat window hasn’t changed much, but the work happening inside it sure has.
Notes
1. IBM, “What Is Zero-Shot Prompting?” IBM Think, accessed 2026.
2. GeeksforGeeks, “Zero-Shot Prompting,” 2025. https://www.geeksforgeeks.org/nlp/zero-shot-prompting/
3. Adaline.ai, “Understanding Zero-Shot Prompting in 2025.” https://www.adaline.ai/blog/understanding-zero-shot-prompting-in-2025
4. IMD, “Why Human-Machine Teams Need Deliberate Design to Be Creative.” https://www.imd.org/ibyimd/artificial-intelligence/why-human-machine-teams-need-deliberate-design-to-be-creative/
