Skill Description AI Checker

This project focused on integrating an AI checker into our student recruitment platform to improve open-ended skill descriptions. It was my first experience designing an AI feature, and I used AI prototyping throughout the design process. Unlike typical UX projects, it required experimenting with dynamic outputs and iterative testing to shape the user experience.
Project timeline

May 2025

Project type

Work project

UX/UI Design

Ruoxin You

Product Manager

Adam Maddocks

My Contribution

Research
UX/UI Design
Prototype
Test

Context
Skill Description: Between Questionnaire and Resume
Our company recruits students for seasonal work at camps and resorts. Within our recruitment system, applicants must complete a section called Skills, where they list their abilities, indicate their proficiency level, and briefly describe how they use those skills. This section is a hybrid between an open-ended question and a resume: applicants use their answers to showcase their skills, but the format doesn't require the structure or formality of a traditional CV.
Because most of our applicants are students with little or no CV-writing experience, they often struggle to provide effective answers. To support them, we integrated an AI checker to review their responses, inspire more complete answers, and help them refine their descriptions.
Challenge
  • The process of refining answers is highly dynamic, as users may edit their responses multiple times, requiring consideration of the entire workflow.
  • AI output is inherently unpredictable, so we needed to design safeguards to ensure a smooth user experience.
  • Designing for AI features was new to our team, so we needed to establish best practices from scratch.
Process
First Glance: Just a Quick Tweak

This started as a small side project. The goal was to add an AI checker to the Skills section of our platform, similar to features we had researched in other products, so users could see feedback on their responses based on predefined prompts.

The Product Manager wrote prompts informed by prior applicant answers and interviewer feedback. The AI evaluates each response at one of three levels (a rough code sketch follows the list):

  • Poor level: a single sentence stating the skill with minimal detail
  • Moderate level: briefly describes the skill level without supporting evidence
  • Strong level: clearly describes the skill level and provides relevant evidence, such as qualifications or experience
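
To make the setup concrete, here is a minimal, hypothetical sketch of how such a rubric check could be wired up in TypeScript. The real prompts belong to our PM and aren't reproduced here; the endpoint call, model name, and fallback behaviour below are assumptions for illustration, not our production code.

```typescript
// Hypothetical sketch: ask the model to grade a skill description against the rubric.
// The real prompts were written by our PM; this only illustrates the shape of the call.
type SkillLevel = "poor" | "moderate" | "strong";

const RUBRIC_PROMPT = `You are reviewing a student's skill description.
Classify it as one of: poor, moderate, strong.
- poor: a single sentence stating the skill with minimal detail
- moderate: briefly describes the skill level without supporting evidence
- strong: clearly describes the skill level and gives relevant evidence,
  such as qualifications or experience.
Reply with the level only.`;

async function gradeDescription(description: string): Promise<SkillLevel> {
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini", // placeholder model name
      messages: [
        { role: "system", content: RUBRIC_PROMPT },
        { role: "user", content: description },
      ],
    }),
  });
  const data = await response.json();
  const answer = (data.choices?.[0]?.message?.content ?? "").trim().toLowerCase();

  // Safeguard against unpredictable output: fall back to "poor" so the user still gets guidance.
  const levels: SkillLevel[] = ["poor", "moderate", "strong"];
  return levels.includes(answer as SkillLevel) ? (answer as SkillLevel) : "poor";
}
```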

I quickly reviewed AI design requirements and researched similar features in live products. Because it seemed simple, I wasn’t given much time. Our PM and I held a brief meeting to explore a few options and chose one that fit our limited development resources. We implemented and shipped it.

Challenge
Discovering the Drop-Off Problem

A few weeks after launch, our PM reported a problem: most users stopped at the “Moderate” skill level and didn’t try to improve their descriptions further. As a result, they weren’t discovering or using the grammar refinement feature.

We initially considered two approaches:

  • Use design changes or improved AI output to encourage students to write more.
  • Release the grammar refinement feature across all three skill levels instead of only at the final level.

After talking to our customer support team, I discovered many students simply had little to say because of limited experience. Even when contacted directly, they often left their skill descriptions mostly empty.

This insight shifted our approach. We decided to keep showing content-refinement suggestions while also allowing grammar and structure refinement for students who genuinely had little more to add. Together with the PM and the support team, we tweaked the AI checker logic (a code sketch follows the list):

  • Poor level: show only content-refinement suggestions.
  • Moderate level: show both content-refinement suggestions and the grammar-refined output.
  • Strong level: show only the grammar-refined output.
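
A minimal sketch of this branching, with hypothetical names, might look like the following; it only illustrates the rules above, not our actual implementation.

```typescript
// Hypothetical sketch of the tweaked checker rules: which outputs to surface per level.
type SkillLevel = "poor" | "moderate" | "strong";

interface CheckerOutput {
  contentSuggestions?: string; // ideas for what else the student could mention
  grammarRefinement?: string;  // polished rewrite of the student's own text
}

function buildCheckerOutput(
  level: SkillLevel,
  suggestions: string,
  refinedText: string
): CheckerOutput {
  switch (level) {
    case "poor":
      // Not enough substance yet: only nudge the user toward more content.
      return { contentSuggestions: suggestions };
    case "moderate":
      // Offer both: ideas to expand, plus a polished version of what is already there.
      return { contentSuggestions: suggestions, grammarRefinement: refinedText };
    case "strong":
      // The content is complete: only offer the grammar/structure refinement.
      return { grammarRefinement: refinedText };
    default:
      // Safeguard for unexpected levels: show nothing rather than something misleading.
      return {};
  }
}
```
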
Challenge
Untangling a Confusing First Draft

With the feature rules defined, I moved to ideation. Initially, I wanted to mimic our previous pattern: AI content suggestions as a paragraph on top, with a preview box showing the refined version and an “Accept” button.

But this quickly felt confusing, especially at the Moderate level, where both types of output appear. The Accept button looked like a one-click solution for everything, but in reality it only refined the writing, not the content. On top of that, the previous logic reloaded the interface every time the user changed the content, which conflicted with the new workflow.

Process
Rethinking Interaction Patterns

After reading AI design guidelines, examining advanced AI products, and brainstorming with ChatGPT, I grouped my ideas into two directions:

  • Make content and grammar refinement visually distinct (e.g., dividers, tags).
  • Split the process: show content-refinement suggestions first, then give users a button to run grammar refinement.
Challenge
A Curveball from Our Manager

When I presented these ideas to the product and tech team, our tech manager preferred the second option and suggested automatically re-running the AI check every time the content changed.

My first thought was, honestly, a bit of panic. It seemed like it could be annoying, with constantly shifting output and no way to pause it. But the rest of the team liked the idea. Rather than rejecting it outright, I proposed we explore it further.

I was the only one with reservations, so I decided not to push back immediately, but to experiment with design trials after the meetings.

Challenge
Prototyping to Break the Deadlock

I still held the opposite opinion and was debating whether to push back or propose a compromise, such as keeping auto-check but adding a pause button so users could turn it off. Then I realized I could prototype their idea to demonstrate the friction directly. Since I had been keeping up with AI tools, I quickly built a demo, using ChatGPT to refine prompts and Manus to generate an interactive prototype.

Check the demo here ->

Surprisingly, the demo showed the experience wasn’t as annoying as I feared. Because this was more like a questionnaire than a resume builder, users naturally wrote short sentences. Setting the auto-check with a slight delay (I tested 1 second) actually helped me complete my own skill description faster.
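
For reference, the trigger behaviour in the demo boiled down to a simple debounce. The sketch below is illustrative only; the function names are hypothetical and it is not the demo's actual code.

```typescript
// Hypothetical sketch of the auto-check trigger: re-run the AI check only after
// the user has paused typing for a short delay (the demo used roughly 1 second).
const AUTO_CHECK_DELAY_MS = 1000;

let pendingCheck: ReturnType<typeof setTimeout> | undefined;

function onDescriptionChanged(description: string): void {
  // Reset the timer on every keystroke so the check fires once typing pauses.
  if (pendingCheck !== undefined) clearTimeout(pendingCheck);
  pendingCheck = setTimeout(() => runAiCheck(description), AUTO_CHECK_DELAY_MS);
}

async function runAiCheck(description: string): Promise<void> {
  // Placeholder for the actual call to the checker backend.
  console.log("Checking:", description);
}
```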

I shared the demo with colleagues; none found it annoying, and they liked the feature. That’s when I changed my mind and agreed to implement automatic checking.

Design
Final Design in Action
Impact

According to feedback from our customer service team, the quality of participants’ answers has noticeably improved since introducing the AI checker.

Note: at present, we don't have quantitative data to demonstrate this impact, for a few reasons:

  1. The AI process runs on ChatGPT’s API, so we can’t access detailed process data.
  2. Our customer service team still calls participants to refine their skill descriptions, making it hard to isolate the AI’s effect from human intervention.
  3. We are a small team with limited capacity to monitor and analyze these metrics, although there are areas we could track in the future.
Key Takeaways and Reflection

This project was full of surprises:

  • AI is dynamic. Unlike traditional UX, AI output can’t be “designed” directly. We learned to involve real AI output from the start and design around it.
  • AI prototyping is powerful. It allowed us to test ideas quickly and hand over clearer specs to developers. I’ve since used it on other complex interactions.

Although this started as a small trial, it revealed the potential of AI-powered UX in our product. I’m continuing to learn and experiment with AI design to bring even more value to users.