AI design isn’t a novelty anymore — it’s rapidly becoming a key part of how modern designers operate. In this article, I explore where today’s tools provide real value, how they fit into existing workflows, and what it takes to start building an AI-enhanced practice.
The focus isn’t just on solo workflows or flashy demos — it’s about how AI can be thoughtfully introduced into structured environments, especially in larger organizations where collaboration, design systems, and development processes are already in place.
The fast track: where AI already delivers
Let’s cut to the chase: the clearest wins right now are in prototyping and layout generation. Thanks to new AI-powered tools, design artifacts no longer need to be built from scratch. You can generate usable layouts in minutes, accelerating the “think-out-loud” phase and enabling teams to quickly explore, communicate, and refine ideas together.
While manual sketching and grayscale wireframes still have their place, especially for brainstorming or highly custom concepts, AI tools now deliver clickable, testable outputs that feel like real prototypes for digital products. I often use my sketches as prompts to kick off new AI threads and get there faster. These outputs are highly customizable and support rapid iteration, making them valuable for early exploration, feedback, and team alignment.
That said, the outputs from today’s AI tools aren’t production-ready on their own for businesses running managed platforms. They provide a strong foundation for further refinement and development, but still need accessibility work and alignment with business systems. I’ll unpack all of that in this article, offering ways to gain value from AI design technology today and a look at what we can expect in the near future.
Understanding the AI-design landscape
With a growing number of AI-powered design tools entering the market, it’s important to evaluate how they differ, not just in output, but in how they integrate with real workflows. The comparison below highlights key features that shape their usability across teams, from solo designers to scaled product organizations.

AI-assisted design tools: from early testing to uncovering business value
Earlier this year, my team and I tested several emerging AI design tools — UX Pilot, Vercel v0, and Lovable — to understand their practical value in structured design environments. We found them surprisingly easy to learn, with intuitive interfaces that designers can get productive with in a matter of hours. However, our testing revealed two distinctly different approaches and a critical industry gap.
- UX Pilot focuses on prompt-based UI generation with Figma integration, outputting HTML/CSS that designers can iterate on within familiar workflows.
- Vercel v0 takes a code-first approach, generating React/Tailwind directly but requiring manual recreation in Figma for design-centric teams (a rough illustration of this kind of output appears below).
- Lovable emerged as a strong middle ground, converting prompts into full React applications while maintaining export capabilities for design handoff.
Both v0 and Lovable showed value for rapid prototyping, but our testing confirmed what the comparison chart suggests: integration with existing design workflows remains the key challenge. The tools excel at generating starting points but require significant manual effort to align with our production systems, so we limited our work to proofs of concept and kept broader adoption on the back burner.
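To make that gap concrete, here is a rough, hypothetical illustration of the kind of React/Tailwind component a code-first generator such as v0 might produce from a simple prompt. It is not actual tool output; the point is that the markup is perfectly reasonable, yet every color, spacing, and radius value is a generic utility class rather than a token or component from your own design system.

```tsx
// Illustrative only: the sort of React/Tailwind component a code-first
// generator might return for "a product card with image, title, price,
// and a call-to-action button". Not actual output from any specific tool.
type ProductCardProps = {
  title: string;
  price: string;
  imageUrl: string;
  onAddToCart: () => void;
};

export function ProductCard({ title, price, imageUrl, onAddToCart }: ProductCardProps) {
  return (
    <div className="max-w-sm rounded-xl bg-white p-4 shadow-md">
      {/* Generic utility classes instead of design-system tokens or components */}
      <img src={imageUrl} alt={title} className="h-40 w-full rounded-lg object-cover" />
      <h3 className="mt-3 text-lg font-semibold text-gray-900">{title}</h3>
      <p className="mt-1 text-sm text-gray-500">{price}</p>
      <button
        onClick={onAddToCart}
        className="mt-4 w-full rounded-lg bg-blue-600 py-2 text-white hover:bg-blue-700"
      >
        Add to cart
      </button>
    </div>
  );
}
```

Output like this is useful scaffolding, but mapping values such as `bg-blue-600` onto governed tokens and shared components is exactly the manual alignment work described above.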
59% of developers use AI for core development responsibilities like code generation, whereas only 31% of designers use AI in core design work like asset generation. It’s also likely that AI’s ability to generate code is coming into play — 68% of developers say they use prompts to generate code, and 82% say they’re satisfied with the output. Simply put, developers are more widely finding AI adoption useful in their day-to-day work, while designers are still working to determine how and if these tools best fit into their processes.
— Figma’s 2025 AI report (April 2025): Perspectives from designers and developers.
Then Figma changed everything.
In May 2025, Figma launched Make, a set of native AI capabilities that sidesteps much of the integration friction we had identified. Unlike the third-party tools we’d been testing, Figma’s approach leverages existing patterns and team workflows directly. Make transforms prompts into functional prototypes while working within your established Figma environment.
This shift validates what our testing had suggested: the most successful AI adoption wouldn’t come from the most sophisticated standalone tools, but from solutions that work within existing design operations.
For designers, the natural path appears to be staying within Figma, with Make powered by Anthropic under the hood. I’m a fan of Anthropic as a creative resource with business acumen — one that adds value where it counts: early idea generation, expressed rapidly in layouts, for proofs of concept and problem solving.
In my workflow, it has proven to be a low-friction accelerant: it stays in-platform and is easy to learn. The technology is new enough that I’m still honing my prompting craft on it, but early testing has been very promising. I suspect designer adoption will stick, and Figma could be the key to reversing the trend of designers engaging less with AI tools.
For enterprise teams evaluating these tools, the distinction between standalone capabilities and operational integration has become critical. While early tools like UX Pilot and v0 remain valuable for specific use cases, the platform consolidation happening around design systems suggests that architectural maturity — not tool sophistication — will determine AI adoption success.
Current limitations: where friction remains
Despite their strengths, AI design tools still require significant manual effort to align with real-world product workflows. For teams operating within structured design systems, tokenized libraries, or governed component sets, AI outputs would likely need to be rebuilt or restructured before they can scale across production environments.
Common issues may include:
- Visual styles that don’t align with your design system.
- Excessive inline styling and unnecessary nesting (illustrated in the sketch after this list).
- Generic placeholder components requiring replacement.
- Inconsistency when generating related screens or flows.
- Inadequate accessibility implementation.
- Challenges integrating outputs with existing codebases.
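To illustrate the first two issues, here is a small before-and-after sketch: the kind of hard-coded, inline-styled markup AI tools often generate, followed by the same banner expressed through design tokens. The `tokens` object is a placeholder for whatever your design system actually exposes (CSS variables, a theme object, or a Tailwind config).

```tsx
// Before: typical generated output with hard-coded values and inline styles.
export function CtaBanner() {
  return (
    <div style={{ background: "#1a73e8", padding: "24px", borderRadius: "12px" }}>
      <span style={{ color: "#fff", fontSize: "20px", fontWeight: 600 }}>
        Start your free trial
      </span>
      <button style={{ marginLeft: "16px", background: "#fff", color: "#1a73e8" }}>
        Sign up
      </button>
    </div>
  );
}

// After: the same banner expressed with design tokens instead of magic values.
// This `tokens` object is a stand-in for whatever your design system provides.
const tokens = {
  color: { primary: "var(--color-primary)", onPrimary: "var(--color-on-primary)" },
  space: { lg: "var(--space-lg)" },
  radius: { md: "var(--radius-md)" },
} as const;

export function CtaBannerTokenized() {
  return (
    <div
      style={{
        background: tokens.color.primary,
        padding: tokens.space.lg,
        borderRadius: tokens.radius.md,
      }}
    >
      <span style={{ color: tokens.color.onPrimary, fontWeight: 600 }}>
        Start your free trial
      </span>
      <button style={{ background: tokens.color.onPrimary, color: tokens.color.primary }}>
        Sign up
      </button>
    </div>
  );
}
```

The visual result is identical; the difference is that the second version follows your system when tokens change, which is what production readiness ultimately requires.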
While platform-native tools like Figma’s AI capabilities reduce some integration friction by working within existing design systems, the fundamental challenges of refinement, accessibility, and production readiness remain.
Additionally, achieving optimal results requires developing effective prompting skills, and making them reusable — essentially learning the “language” each AI tool responds to best.
Bottom line: AI delivers the initial layout, but refinement, proper structure, and cohesive integration still require human expertise. Even with improved integration pathways, design judgment and systematic thinking remain irreplaceable.
Rethinking AI’s role in the design lifecycle
Rather than expecting AI tools to deliver polished, production-ready outcomes (particularly at enterprise scale), it’s more productive to think of them as accelerators of momentum — tools that unblock the early stages of thinking, layout, and collaboration. Whether through third-party integrations or platform-native capabilities, the core value remains the same.
The current limitations don’t make AI ineffective — they simply call for redefining where it’s most valuable today. And that value starts to multiply when AI is used properly within an existing design practice.
Start small, at low risk
Design teams working within structured systems and sprint cycles can begin integrating AI without disrupting core processes. A practical entry point is to run a low-risk pilot on early deliverables, such as wireframes, layout foundations, or initial prototypes.
Used this way, AI doesn’t replace designers — it amplifies their capabilities. By accelerating the creation of foundational structure, AI frees up time for higher-level thinking. Fewer design cycles mean less churn, and that translates to better-tested, more resilient products. The key is to evaluate results alongside your traditional workflow and use those insights to guide smarter, broader adoption.
Sidebar: how prompting works (and why it’s a skill)
Prompting an AI layout tool doesn’t mean crafting one perfect sentence — it’s an iterative design dialogue. You start broad, then refine the layout step-by-step through a series of prompts, much like guiding a junior designer.
You might say:
→ “Create a marketing homepage with a hero and product cards.”
→ “Make the hero full-width.”
→ “Add a testimonial section.”
→ “Try a sidebar layout.”
AI performs best with either creative freedom or light, sequential guidance. Overloading it with detailed, all-in-one instructions will muddy the results. Instead, break requests into smaller, actionable steps until you get to the desired result.
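For readers who want to see the same dialogue expressed in code, here is a minimal, hypothetical sketch of scripting that iteration against a chat-style generation endpoint. The URL and response shape are placeholders, not a real product API; the key idea is that each refinement carries the conversation history rather than cramming everything into one giant prompt.

```ts
// Hypothetical sketch: iterating on a layout through a chat-style generation
// API. The endpoint and response shape are placeholders, not a real product API.
type Message = { role: "user" | "assistant"; content: string };

async function refineLayout(history: Message[], prompt: string): Promise<Message[]> {
  const messages = [...history, { role: "user" as const, content: prompt }];
  const res = await fetch("https://example.com/api/generate-layout", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ messages }),
  });
  const { layout } = await res.json(); // assume the service returns HTML/JSX
  return [...messages, { role: "assistant", content: layout }];
}

// Broad first, then small sequential refinements, mirroring the prompts above.
let thread: Message[] = [];
thread = await refineLayout(thread, "Create a marketing homepage with a hero and product cards.");
thread = await refineLayout(thread, "Make the hero full-width.");
thread = await refineLayout(thread, "Add a testimonial section.");
thread = await refineLayout(thread, "Try a sidebar layout.");
```

The same pattern applies when you prompt interactively in a tool’s UI: broad first, then one small, specific adjustment at a time.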
Many tools now support multi-modal inputs, expanding what you can feed into the AI:
- URLs: “Make it like example.com”.
- Figma: Reference your established designs.
- Reference images: Upload sketches or wireframes.
- Image assets: Provide PNGs or SVGs you may want to include.
- Structured text: Feed it markdown, product descriptions, or UI copy.
The Platform Advantage: Platform-native tools like Figma Make operate differently — they can read your existing visual styles and patterns directly from your Figma files. This means prompting becomes more about refining design decisions within your established visual environment rather than starting from scratch.
Whether you’re working with standalone tools or platform-native capabilities, prompting remains a core design competency. Like any skill, it improves with practice — and it’s already shaping how we collaborate with these new tools. Easing the practice into your team’s workflow will help them upskill for the next wave of AI-assisted design technology.
Checklist: how to evaluate AI tooling for design
If you’re experimenting with AI tools, here are practical criteria to help structure your evaluation (a lightweight scoring sketch follows the list):
- How quickly can it go from prompt to layout?
- How well does it map to your design system (tokens, spacing, components)?
- Is the generated code usable by engineering?
- Does it follow accessibility best practices?
- Can prompts be refined iteratively with consistent results?
- Does it accept helpful external context (URLs, Figma, markdown)?
- Can it be tested in a real sprint or story without major overhead?
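One lightweight way to keep pilots comparable is to score every tool against the same criteria. The sketch below is purely illustrative: the criteria mirror the checklist above, and the weights are made up, so adjust both to your team’s priorities.

```ts
// A minimal scoring sketch for the checklist above. Criteria names and
// weights are illustrative; adapt them to your own evaluation.
type Criterion =
  | "promptToLayoutSpeed"
  | "designSystemFit"
  | "codeUsability"
  | "accessibility"
  | "iterationConsistency"
  | "externalContext"
  | "sprintReadiness";

const weights: Record<Criterion, number> = {
  promptToLayoutSpeed: 1,
  designSystemFit: 2, // weighted higher for teams with governed systems
  codeUsability: 2,
  accessibility: 2,
  iterationConsistency: 1,
  externalContext: 1,
  sprintReadiness: 1,
};

// Scores are 1-5 per criterion, gathered during a pilot.
function evaluateTool(name: string, scores: Record<Criterion, number>): void {
  const criteria = Object.keys(weights) as Criterion[];
  const total = criteria.reduce((sum, c) => sum + weights[c] * scores[c], 0);
  const max = criteria.reduce((sum, c) => sum + weights[c] * 5, 0);
  console.log(`${name}: ${total}/${max}`);
}

evaluateTool("Hypothetical tool A", {
  promptToLayoutSpeed: 5,
  designSystemFit: 2,
  codeUsability: 3,
  accessibility: 2,
  iterationConsistency: 3,
  externalContext: 4,
  sprintReadiness: 3,
});
```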
What we might see in the next 6–24 months
The landscape has shifted faster than many of us expected in 2025, with some predictions already becoming reality. Rather than trying to forecast exact timelines, it’s more useful to look at what’s actually emerging and what it might mean for teams making decisions today.
Multiple integration approaches are emerging
We’re seeing different ways AI tools connect to design workflows, each with trade-offs:
Figma’s Make works natively within its platform ecosystem. Protocol-based connections like Figma’s MCP server offer a different approach — your coding tools can talk to your design files through standardized interfaces.
Teams may end up using a mix of approaches rather than picking just one. The question becomes which approach fits your specific constraints and workflow needs.
What this means for planning
If you’re evaluating AI design tools, the technical capabilities might matter less than how well they fit your existing operations. My sense is that teams with organized design foundations may have advantages, but the most practical approach remains starting small and building organizational fluency, as I’ve suggested earlier in this article.
The big picture
- Native platform AI (like Figma Make) and protocol-based integration (like MCP) represent different approaches.
- Each has distinct trade-offs for workflow integration.
- Starting small remains practical regardless of which tools emerge.
Final thoughts: don’t wait for perfect — start now
AI design tools are powerful enough to change how we work today. Don’t wait for perfect tools or perfect workflows. Start small, test often, and strengthen your foundations as you experiment. The teams that build AI fluency now will be ready, not just when the tools catch up, but when the industry shifts beneath them.
The ground is already shifting. The question isn’t whether AI will transform design work, but how well you’ll be positioned to shape that transformation. Start building now, and you’ll have a hand in defining what comes next.
The article originally appeared on Medium.
Featured image courtesy: Jim Gulsen.