How to Transform Your Workflow and Boost Productivity with AI-Powered UX/UI Tools

I set a challenge for myself: to combine an idea for a product I’d like to work on with experimenting with AI tools that can speed up the UX process. I wanted to explore what AI-powered tools are out there, what they offer, and how I could integrate them into my design process in the future.
The product idea was a user-centric web app that simplifies discovering, browsing, and accessing research computing resources — HPC (high-performance computing), cloud platforms, and datasets. Since I’ve worked in the quantum and cloud computing space, I had insights into the users and their challenges. This allowed me to verify whether the information I was given was accurate.
I didn’t go through the entire design process, as the tools had limitations, and I would need an actual prototype and real users to test with. My process stopped once I had designed screens reflecting the generated ideas.
Research: AI-Supported User Insights
Objective: Understand the challenges researchers face when accessing computing resources. I used AI tools to gather insights already recorded online.

Elicit ⭐ (Winner)
Elicit is an AI-powered research assistant that helps extract insights from research papers. It works like this: you ask a question, and it generates a summary from a set number of papers, along with a fully customizable table. You can select which columns to display.
For me, the “Main Findings” column was particularly useful, as I wanted a summary of insights without having to go through entire research papers. Because of this table with summaries, Elicit became my top choice. The UI is very intuitive, and the features directly addressed my needs.


Consensus
Consensus also generates insights from academic papers, presenting them in a summary format with categorized insights and citations for each argument, making it easy to reference the original source. When copying text, it provides different citation formats.
In my case, I was primarily looking for insights. The output felt similar to ChatGPT’s format, which wasn’t exactly what I needed. I chose Elicit instead because it allowed me to dive into individual papers and their findings more effectively.


Scite
Scite’s answers were the least consumable. It presented a large block of text with no clear structure, with references to papers crammed on the right side. I struggled to distinguish the question from the answer and the references. The lack of visual hierarchy made it frustrating to navigate.
An interesting part was that Scite shows the logic behind how it interprets your search query. You can manually edit the searches, which could potentially help refine how you search.
Even if the content is valuable, if it’s not presented in a digestible way, users will abandon it for tools that are actually pleasant to use. Sorry, Scite — but it’s a no from me.
Research Synthesis: AI-Supported Insights Clustering
Objective: Synthesize raw insights and cluster them into themes. Clustering would further help me understand the challenges users face when accessing computing resources. I was primarily interested in the challenges and solutions cluster but remained open to the themes AI would propose.

Dovetail ⭐ (Winner)
It allows me to import all my research notes and generate a summary of insights with corresponding tags, which I can easily adjust. Additionally, it provides a holistic view of all insights in a Kanban view, letting me move cards around as needed.

It organized my raw, unstructured research notes into a table. However, on the free version, I could only ask one question, which felt very limiting. The table was useful for what I intended to do, but the one-question limit didn’t give me enough flexibility to refine or adjust it.


Airtable
My Airtable experience was unexpected. I was prompted to create an application, which wasn’t what I anticipated. After inserting my raw notes, I was presented with a few suggested questions and added some new ones myself. The notes were restructured, but I found the layout confusing — plus, I wasn’t sure why I needed an app in the first place.
After tinkering with it for a while, I finally found the table I was looking for. However, I didn’t discover anything particularly groundbreaking from their AI features. If anything, it added unnecessary complexity rather than simplifying the experience.




Looppanel
Looppanel was the most confusing app of them all. After pasting my raw notes, I received something called AI Notes — essentially just chunks of my raw notes. I spent quite some time unsure of what to do next until I stumbled upon the Analysis tab.
There, I realized I had to manually tag hundreds of notes. My immediate thought? I don’t need a tool for this — I can do it myself. Then, almost magically, I discovered an AI Analysis option. Finally, some progress. It categorized all the AI Notes and assigned tags.
But the tags were long, unreadable, and overwhelming. There were so many that it became painful to look at. That said, I did like the Insights tab on the right — it finally provided a summarized view of my notes. Plus, you can generate more insights if needed (though I didn’t check their accuracy).
Overall, it was a frustrating experience that still requires a lot of manual work to get right.


Miro
My old-school friend Miro. I came here hoping to generate concept cards, but that didn’t go as planned. So, I decided to test its note-summarizing features instead. I was curious about the clustering and sticky note generation. At first glance, it seemed to do a decent job, but when I looked closer, those stickies didn’t really mean much.
The way the content was structured felt more like article titles rather than meaningful sentences that captured what people actually said. Without direct quotes, they weren’t very useful — more like hashtag generation than an actual insight summary.
Then I tried the summary option. It generated a document, which I found… boring. Why would I use Miro for this when I could just ask ChatGPT? Since Miro is a whiteboarding tool, I expected a more visual summary, not just plain text.
I initially came to create concept cards — something I could present to users, with text and visuals, to get their initial reactions. But I didn’t find any such option. A document summary isn’t something I can take and immediately start testing with.
Miro AI still has some work to do.
Trends Analysis: AI-Supported Insights
ChatGPT recommended I explore keyword trend searches, so I tried using tools like Semrush, Exploding Topics, and Google Trends. I wrote prompts about challenges users face with cloud computing and accessing computing resources, but unfortunately, nothing meaningful came out of it. It felt like a dead end.
Personas: AI-Supported User Segmentation
Objective: Generate personas based on the product idea. I didn’t have much insight — if any — about the problem area, so I was fully dependent on what AI would generate.

It can generate personas based on a competitor’s domain or Google Analytics data (if you have access). I was quite excited about the idea of simply providing a competitor’s website and getting AI-generated personas based on it.
But this was a big disappointment.
After the persona was generated, all the useful information was locked behind a paywall. Not sure what the point is of giving people that kind of incentive, only to immediately let them down.


UXPressia
It can generate a persona based on a prompt about your product, and I found this experience very reliable. The persona was detailed, well-organized, and actually made sense.
On a free account, you can generate only one persona, which is limiting but still useful. A nice touch is that you can customize the persona, though the options are somewhat restricted, and layout-wise the output feels a bit outdated.
Export options include PDF, PNG, CSV, and PPTX. You can also apply your branding, but — of course — that’s locked behind the paid plan (which seems to be the case for most of the platform’s features).
One standout feature: UXPressia lets you generate journey maps based on the personas you create, which is a big plus.


It generates a persona based on a prompt about your product or service. The output is quite simplistic, with no customization options, but surprisingly accurate.
If you just need a quick, no-frills persona on the go, this tool does the job well.



QoQo.ai (Figma plugin)
This Figma plugin can generate personas based on prompts you provide, such as demographics and scenarios. You can also customize what information appears in the persona.
However, the output is difficult to consume — it’s just a long block of text, essentially what ChatGPT would generate. I tried selecting different formats like stickies and presentations, but I still got the same list-style output.
One interesting feature (which I’ve seen in other tools) is the ability to chat with the persona. I liked this — it can spark new ideas and simulate a basic conversation with a user. But, honestly, I could just do the same thing with ChatGPT.



Instant Personas ⭐ (Winner)
This chatbot-based tool creates personas by asking contextual questions about the product and audience. It starts by defining archetypes and then generates personas. The experience felt structured, and the fact that it created four personas was impressive. However, I had to sign up for a trial since it’s not a free tool.
The visual layout of the personas was the best I’ve seen among the tools I tested. There’s also a chat function that lets you ask questions about the personas or your product, making it interactive. The content of the personas wasn’t exhaustive, but considering it generated multiple personas, that’s understandable.
The personas generated were:
- Aspiring Innovator
- The Data-Driven Strategist
- The Seasoned Researcher
- The Collaborative Networker
One strange thing I noticed was that you can’t get a subscription anymore — it says it’s sold out. I’m not sure what that even means, but I hope they’re not shutting down. Also, signing out of the platform was more difficult than it needed to be.
Features: AI-Supported Feature Generation and Prioritization
Objective: Based on the user insights, I wanted to explore tools that could recommend features to test with users and help prioritize them effectively.

Taskade ⭐ (Winner)
Taskade was the only tool that seemed to offer a clear path toward creating a prioritized list of features. It works through AI agents that you ask questions, and you can attach notes for them to analyze. It feels like it’s built on something similar to ChatGPT’s technology. However, it lacks deeper reasoning — for instance, it doesn’t explain which feature would be the most viable to test for an MVP or why a certain feature should be prioritized over others.
If you’re dealing with features that are vastly different, it starts to feel like you’re building two different products, which I think is where the prioritization needs more context. I ended up doing some manual selection anyway. It’s possible that by adjusting the prompt, I could have received different sets of priorities tailored to different product ideas.
For my focus — improving access to computing resources — the output leaned toward features that assist with the research aspect, which wasn’t exactly what I needed. I don’t see Taskade pinpointing exactly which feature to build to solve a specific problem, though it could outline a rough project plan. One limitation I ran into was the trial: there’s only so much you can do on the free plan before hitting the wall, so if you want to make full use of it, a subscription is a must.
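To add the missing prioritization reasoning myself, I could fall back on a lightweight scoring framework such as RICE (reach × impact × confidence ÷ effort). A minimal sketch in Python — all feature names and numbers below are made up for illustration, not output from any of these tools:

```python
# RICE score: (reach * impact * confidence) / effort — higher is better.
# Illustrative features for a research computing resource portal.
features = {
    "Unified resource search": {"reach": 800, "impact": 2.0, "confidence": 0.8, "effort": 3},
    "One-click HPC access": {"reach": 300, "impact": 3.0, "confidence": 0.5, "effort": 5},
    "Dataset bookmarking": {"reach": 500, "impact": 1.0, "confidence": 0.9, "effort": 1},
}

def rice(f):
    """Compute the RICE score for one feature's estimates."""
    return f["reach"] * f["impact"] * f["confidence"] / f["effort"]

# Rank features from highest to lowest score.
ranked = sorted(features, key=lambda name: rice(features[name]), reverse=True)
for name in ranked:
    print(f"{name}: {rice(features[name]):.0f}")
```

Even a toy table like this makes the trade-off explicit (a cheap, well-understood feature can outrank a flashier one), which is exactly the "why this feature first" reasoning the AI output lacked.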


Notion AI
Notion AI was another tool I tried for generating feature ideas, and it did a solid job. The ideas were quite different from what Taskade generated, so the two tools could be complementary.
The downside, however, is the strict limitation on usage. Notion really locks you down with only 1–2 questions before hitting the “limit wall.” This means you’d better make your questions count from the start. It’s not the most generous in terms of free access. On the plus side, you can also ask Notion AI to help you generate prompts for wireframes, which could be handy if you want a solid foundation for your design process.
Ideation: AI-Driven Concepts and Wireframes
Objective: Visualize the features for fast user research turnarounds or to get started when working on wireframes, using the prompts provided by AI.
Concept cards
I tried generating concept cards, which I think are a brilliant way to gather initial insights from users without locking them into usability aspects. Concept cards allow you to capture their first impressions quickly, without the need for detailed wireframes. However, the tools I tried didn’t quite get me to the end product.
- Miro AI generated a document, but it was text-based only.
- Notion AI did the same, and when I asked for visuals, it just added emojis.
- Figma AI also failed to create a concept card.
It seems that while there are tools that can provide useful input for creating concept cards, none of them were able to deliver a fully-formed, shareable product (like a PDF or PNG).




Galileo AI
I started using the app and was immediately presented with an example conversation that led to the creation of an app flow. It was a great way to learn and follow the same process. I really liked the conversational nature of the app, which provides multiple design solution variants. This is incredibly helpful for narrowing down the right solution quickly.
However, things started to fall apart when I tried asking it to make changes to the design. The app couldn’t handle that and completely failed. My takeaway is that while it can be useful for generating sample ideas for screens, it’s not yet capable of designing a full screen on the fly. To get better results, I found that a detailed, precise prompt was necessary — essentially, getting it right the first time. A helpful tip: try generating the prompt using another AI tool. Machines tend to understand each other better!
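Since a detailed, precise prompt turned out to be the key, here is a small Python sketch of one way to assemble such a prompt from structured fields before pasting it into a text-to-UI tool. Every field name and phrase here is my own illustration, not anything Galileo itself expects:

```python
def build_design_prompt(product, screens, audience, style, constraints):
    """Assemble a detailed text-to-UI prompt from structured fields.

    Being explicit up front matters because these tools handle iteration
    poorly — you want to get the prompt right the first time.
    """
    lines = [
        f"Design {len(screens)} screens for {product}.",
        f"Target audience: {audience}.",
        f"Visual style: {style}.",
        "Screens:",
    ]
    lines += [f"- {name}: {purpose}" for name, purpose in screens.items()]
    lines += [f"Constraint: {c}" for c in constraints]
    return "\n".join(lines)

# Example use, with made-up content for the research computing web app.
prompt = build_design_prompt(
    product="a research computing resource portal",
    screens={
        "Dashboard": "overview of available HPC, cloud, and dataset resources",
        "Search": "filterable catalogue of computing resources",
    },
    audience="academic researchers with limited HPC experience",
    style="clean, light theme, card-based layout",
    constraints=["desktop-first", "accessible colour contrast"],
)
print(prompt)
```

The same structured-fields idea works whether you fill the template by hand or, as suggested above, let another AI tool draft the field values for you.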
Galileo does provide some customization options like font and colors. While it’s not super flexible yet, the output is nice enough, and for the first round of designs, I feel it’s quite satisfactory.
What worked well was being able to copy and paste the designs into Figma. I did run into some export issues, though not right away. The Figma output was well-structured into layers, which made it easy to work with, although there were no components. Still, the export provided enough to work with, and the overall output blew me away.
I’m definitely excited to use this tool in future projects.

Uizard
Uizard’s UI was less intuitive and felt too busy, in my opinion. The outcomes were far less impressive compared to Galileo AI. On top of that, I quickly hit the AI’s limits after only about two screens. I used the same prompt as I did with Galileo, so the differences in the outcomes were quite noticeable. The UI didn’t feel polished, and it lacked the “skills of a good UX/UI designer,” which Galileo AI clearly demonstrated.



Readdy AI ⭐ (Winner)
This tool was sick. The output was not only accurate but also looked like a professional dashboard. It works based on a text prompt, and you’re chatting to provide the AI with further guidance. There’s some interactivity built into the screens generated, highlighting and making elements pop, which is really nice when you want to present it to someone.
The downside is that there’s no way to copy the design directly to Figma, although it generates the code for you.
Once you provide a text prompt, it spits out instructions on how it will interpret your prompt and fills in the blanks about what it will generate. I found it fascinating because you can also use that to generate your own wireframes. You can tweak the response to get an even better wireframe.
I asked it to make a prototype, and suddenly, all of the charts were interactive. This is brilliant! I was able to generate a whole prototype, and the screens were actually connected to each other. It’s not advanced prototyping yet, but this can be very useful for people just wanting to show app ideas to their clients.
You can preview your designs in full screen, which is very handy for any sort of presentation.
Just like with many other tools, you’ll have a hard time adjusting the screens. When I asked it to generate another screen connected to one it had already generated, it got stuck, and I had to start a new chat to get a new result.
I love that they’re showcasing different versions of the designs, and you can easily flip between them.
Prototyping: AI-Augmented High-Fidelity Design
Objective: Create polished, interactive designs that reflect user feedback and research based on the elements I have been able to generate so far using AI-powered tools.


Visily AI
This app works by having you provide a prompt, and it generates a simple flow for you. I used their text-to-design feature, asking it to design five screens and to incorporate the branding theme from my website. The quality of the outcome isn’t great, but it’s not terrible either. There’s a lot of hit and miss in terms of content and interactions. It places interactive points on elements that don’t make sense, and some pieces of content feel quite random. Overall, I would definitely prefer using Galileo AI to generate the screens and doing simple prototyping with Figma. I don’t think this tool offers much, if any, advantage as a prototyping tool.


This tool is basically an app builder without the need for code. For prototyping, it doesn’t make much sense to me. I tried generating a flow, but it only offers sample templates, which may be useful for some use cases. Otherwise, you have to build the entire product from scratch. I didn’t find any AI-powered tools like I did in Visily AI, so this tool was more of a miss for my purposes.
To Conclude
I was, in general, pleasantly surprised by where designer tools are heading 🚀. The LLM-based tools are the most useful right now, so research synthesis and feature idea generation are where they shine 🔍💡. Wireframe creation is also catching up, but it still needs work before designers can fully integrate it into their workflows 🖥️. I doubt it will ever beat manual tweaking — writing out every change you want while fighting with the AI and correcting its misreadings takes a massive amount of energy 😤. You’re better off opening Figma and correcting it yourself 🔧. So, I see the wireframing tools as great assistants that speed up workflows ⚡. Prototyping-wise, I haven’t discovered anything significant yet 🤔. But I feel that our work is just getting interesting 👀.
Recommendations for Improving These Tools:
- Ensure the conversation flows smoothly. Ask users about their next steps. If the chat isn’t providing the desired results, direct them to customer assistance. I got stuck many times, not getting what I wanted, and I never saw a solution for this issue.
- If the user needs to start a new chat to get the inquiry right, notify them. It’s not always clear that starting a fresh chat is necessary to generate something meaningful.
- Present users with an example flow to demonstrate how the app works. LLMs and AI tools are still new, and it’s not always obvious to people how to use them, especially if it’s something other than ChatGPT.
- When users are not getting the desired results, show them examples of what they can do next. If you’re stuck and don’t see prompts or guidance, it’s easy to abandon the app without coming back.
- If you’re offering AI features, ensure the credit limit is reasonable. Allow users to play around with the tool. If they’re blocked after just a few tries, they haven’t even had the chance to see the product’s value. A happy user on the free version is more likely to upgrade to the paid version.
- Once the user inputs a prompt, show them how it will be interpreted and allow them to edit it. Depending on the use case, these interpretations will vary greatly, and you want to make sure the user gets the best output. Having a back-and-forth conversation ensures the output is of the highest quality, and users maintain control over what they’re creating.
- I often felt the AI features were crammed into other parts of the product, making them hard to notice. There should be a separate flow for the AI features, allowing users to focus on the creation process. Otherwise, users are left fighting visual and cognitive overload.
- Some tools use a lot of fancy names for the AI agents, AI teams, AI projects — everything AI. It becomes overwhelming to understand what’s what. Stick to simple vocabulary and introduce visual elements that make AI features stand out. Provide clear descriptions of what users can achieve with these features — “magic names” don’t mean anything without context.