
Stop Prompting AI, Start Directing It

7 min read
AI
Design Thinking
UX Design
Prompt Engineering

A practical guide to working with AI as a designer: how to prompt with purpose, build shared context, and keep your critical thinking front and center.


Vibe coding isn't a new concept, but the way we think about it keeps evolving. When the term first started getting thrown around, the reactions were predictable: panic, hot takes, and a lot of "now everyone can design" and "designers need to learn to code." It created real confusion, especially among people who don't fully understand what designers actually do. The noise has settled, but working well with AI is still something most designers are figuring out.

AI, at least for now, can't think critically. That's still our job, and it's why experienced designers remain valuable. What AI can do is act as a second brain: fast, tireless, and genuinely useful when you give it the right direction. Getting the most out of it means being intentional about how you work with it.

Prompting as Direction

Think of AI as your collaborator. Working with it should be a conversation, not a series of one-way commands. When you treat it like a search engine, you're doing it wrong. When you treat it like a thoughtful colleague you brief before a working session, you'll get much better results.

Every prompt is a form of direction. The quality of what you get back is directly tied to the quality of what you put in.

  • Give it context. What project is this for? What's already been decided? What are you trying to solve right now?
  • Explain why it matters. The reasoning behind a decision helps AI make better suggestions. "We're simplifying this because users were dropping off" produces more useful output than "simplify this."
  • Define constraints. What can't change? What has to be true? Constraints focus the output and prevent AI from filling gaps with assumptions.

You're designing the intent, not just the output.

Structure Your Prompts Like a Mini PRD

Before you write a prompt, understand the problem you're trying to solve. This is your UX critical thinking at work, and it's not something you can skip or hand off to the AI. The prompt is where that thinking gets translated into direction.

A simple structure that works:

  • Context: What is this for?
  • User: Who is this for?
  • Goal: What needs to happen?
  • Constraints: What must be true?
  • Output: What do you want back?

In practice it might look like this: "Context: a mobile banking app for first-time investors. User: someone who just placed their first trade and is waiting for confirmation. Goal: notify them of trade status without causing anxiety. Constraints: must work with reduced motion settings, can't rely on color alone to convey status. Output: a notification component with copy suggestions."

That's a prompt that gives AI something real to work with. Compare it to "design a notification component" and the difference in what you'd get back is significant.
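If you find yourself writing prompts like this often, the structure is easy to capture as a small template so no field gets skipped. A minimal sketch in Python; the function and its field names are illustrative, mirroring the mini-PRD sections above, not any real tool's API:

```python
# Assemble the mini-PRD fields into one prompt string.
# Field names mirror the Context / User / Goal / Constraints / Output structure.
def build_prompt(context: str, user: str, goal: str,
                 constraints: list[str], output: str) -> str:
    lines = [
        f"Context: {context}",
        f"User: {user}",
        f"Goal: {goal}",
        "Constraints: " + "; ".join(constraints),
        f"Output: {output}",
    ]
    return "\n".join(lines)

prompt = build_prompt(
    context="a mobile banking app for first-time investors",
    user="someone who just placed their first trade and is waiting for confirmation",
    goal="notify them of trade status without causing anxiety",
    constraints=[
        "must work with reduced motion settings",
        "can't rely on color alone to convey status",
    ],
    output="a notification component with copy suggestions",
)
print(prompt)
```

The point isn't automation for its own sake; a template like this forces you to notice when a field is empty, which usually means the thinking isn't done yet.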

This is the thinking we're already doing as product designers. The difference is that with AI, you need to make it explicit before you start, not after. The clearer you are going in, the more useful the collaboration becomes. AI can help you build on solid thinking, but it can't supply the thinking for you.

Build a Shared Brain

Here's something most people don't think about: by default, AI has no memory between sessions. Every time you open a new conversation, it starts completely fresh. It doesn't remember your product, your users, your decisions, or the conversation you had yesterday. Without context, you're starting from zero every single time.

Markdown files solve this. Before starting a project, create a small set of "thinking docs" that capture the things the AI needs to know. At the start of a session, you reference them, and the AI is immediately up to speed. This is the shared brain idea: you hold the knowledge, the docs carry it forward.

The key is keeping each file focused on one thing. A doc that tries to cover everything ends up covering nothing well. Think of each file as answering a single question clearly, not as a place to dump everything you know about the project. Short and specific beats long and comprehensive every time.

You can have Claude help you create these with your direction. The structure below is a good starting point.

Some files worth creating:

project-overview.md

  • What the product is
  • Who it's for
  • Core problems
  • Success criteria

design-principles.md

  • Your product values (clarity, accessibility, speed, etc.)
  • Interaction patterns
  • Dos and don'ts. This one often gets missed. It can be just as important to tell the AI what you don't want as what you do.

ui-system.md

  • Colors, typography, and spacing. If you're using something like Tailwind, make sure it knows.
  • Component behaviors
  • Tone of UI copy

feature-briefs/

Unlike the files above, which cover the project as a whole, this is a folder where each feature gets its own focused doc. When you're working on something specific, you reference just that file rather than loading in everything at once.

  • One file per feature
  • Problem → approach → edge cases

Then you reference it directly: "Using the feature brief for the scan results panel, suggest improvements to reduce cognitive load."

If you're working in Cursor, you can reference files directly and iterate on the implementation with full context.

Be Explicit About Constraints

If you don't give AI constraints, it will make assumptions. And those assumptions are almost always generic. It will design for an average user on an average device in an average context, which rarely matches the product you're actually building.

Constraints are what turn a generic suggestion into a useful one. They're the difference between "here's a notification pattern" and "here's a notification pattern that works on mobile, respects reduced motion, and doesn't rely on color alone to convey status."

The more specific you are upfront, the less time you spend correcting output that missed the mark.

Some constraints worth defining:

  • Platform: Web, mobile, or browser extension. Each has different interaction patterns and limitations.
  • Density: An enterprise tool used by power users all day has different needs than a consumer app someone opens once a week.
  • Accessibility: Call out specific requirements: WCAG level, known user needs, or anything your design principles doc already defines.
  • Performance: If animations need to be minimal, load times matter, or the product runs in low-bandwidth environments, say so.
  • Brand and tone: If there's a voice, a visual language, or things that are explicitly off-brand, include them.

You don't need all of these every time. But the ones that are relevant to your problem should be in the prompt, not assumed.
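One low-effort way to keep relevant constraints from being forgotten is to store them as named, reusable snippets and pick the ones that apply per prompt. A sketch under assumptions; the category keys echo the list above, and the example constraint strings are invented for illustration:

```python
# Reusable constraint snippets, keyed by the categories listed above.
# The specific wording here is illustrative, not from any real project.
CONSTRAINTS = {
    "platform": "This is a browser extension; popup width is limited.",
    "density": "Enterprise tool used all day by power users; favor information density.",
    "accessibility": "Meet WCAG 2.1 AA; don't rely on color alone.",
    "performance": "Runs in low-bandwidth environments; keep assets minimal.",
    "brand": "Tone is calm and plain-spoken; no exclamation points in UI copy.",
}

def relevant_constraints(*keys: str) -> str:
    """Join just the constraints that matter for this prompt."""
    return "Constraints: " + " ".join(CONSTRAINTS[k] for k in keys)

# Only include what's relevant to the problem at hand:
print(relevant_constraints("platform", "accessibility"))
```

Writing a constraint down once and reusing it also keeps the wording consistent across sessions, which matters when the AI has no memory of how you phrased it last time.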

Iterate Like a Designer, Not a Developer

A developer iterates to make something work. A designer iterates to make it right for the user. Those are different goals, and it's easy to lose sight of the second one when AI is generating output quickly and it all looks pretty polished.


As designers, we're trained to think critically and ask hard questions. Don't stop doing that just because AI gave you an answer. Treat AI outputs like rough comps, not final answers, and keep the conversation going.

  • "What am I not considering?"
  • "What are the tradeoffs here?"
  • "Where could this break?"
  • "What would an accessibility expert flag?"

That same instinct applies to edge cases and states, the parts that are easy to skip when you're moving fast but that real users will absolutely run into.

  • Empty states
  • Error states
  • Loading states
  • Permission issues
  • Unexpected user behavior

Try asking: "What edge cases would this feature need to handle?" You'll often get back a list of things you hadn't thought about yet.

Keep a Feedback Loop with Real Output

One of the risks with AI-generated output is that it looks polished fast. A prototype that renders cleanly with mock data can create a false sense of confidence. Things break when real users show up with real data, real edge cases, and real behavior you didn't anticipate.

The feedback loop doesn't change just because AI is involved. If anything, it matters more.

  • Test it in the browser with real content. Mock data hides a lot. Text overflows, images break layouts, empty states get exposed.
  • Refine based on real user behavior. What made sense in the prompt or the prototype may not hold up when someone actually uses it.
  • Talk with developers early. AI can suggest an interaction or generate a component, but a developer will tell you if it's feasible, performant, or consistent with how the rest of the product is built. Don't wait until handoff to find out something doesn't work.

The speed AI gives you is only useful if the output holds up under real conditions. Validate as you go.

Don't Skip Human Judgement

AI can generate fast, but speed isn't the same as good judgement. It doesn't know your users the way you do, it doesn't carry the context of decisions your team has already made, and it has no stake in whether the product actually works for the people using it. That's your job.

  • Product decisions still require context. AI doesn't know about the technical constraints your team is working within, the research that informed a previous decision, or the business tradeoffs that ruled something out six months ago. It will confidently suggest things that have already been tried, debated, or ruled out. You have to know the difference.
  • Accessibility still needs manual validation. Automated checks catch a subset of issues, but they miss plenty. Screen reader behavior, keyboard navigation, focus management, and real-world assistive technology use can only be properly evaluated by a human. AI can help you think through accessibility, but it can't replace testing with real users or a thorough manual audit.
  • Good UX still depends on taste and experience. Taste isn't a soft skill. It's pattern recognition built over years of seeing what works, what fails, and why. AI can produce something that looks correct and still feel off in ways that are hard to articulate but immediately obvious to an experienced designer. Trust that instinct.

Vibe coding isn't about replacing design thinking. It's about extending your capacity to act on it. The designers who get the most from AI aren't the ones who prompt the most cleverly. They're the ones who still ask the hard questions, still advocate for the user, and still know when to push back on the output. That part doesn't change.
