The Project
Club & League Connect (“C&L”) is a sports administration web app that, together with a few sibling apps, provides a full suite of tools sports organizations and teams need to manage their players, coaches, and events.
At 20 years young, this behemoth of a web app was finally due the usability care it deserved. C&L had grown unabated for years, continuously adding features and becoming more useful to clients. But before long, it outgrew its original foundation and became a chore to use for even its most practiced users.

Quick Stats
At the time the project started, C&L served roughly…
Goal
To collect and measure user feedback about the entire app.
My Role
I conducted many of the user interviews, advised on fixes for development, and eventually presented our findings to the executive team.
Results
Our findings helped lead to the decision to sunset the app entirely.
Research goals
Our purpose was far-reaching but simple: to collect and measure user feedback about the entire app. The product team, veterans of the app, had a good grasp on many of their users’ concerns. What they were missing was a complete understanding of how widespread and severe the issues were. The data we were to collect would be used to inform the product’s roadmap and facilitate the allocation of resources to the most troubled — and impactful — areas.
The team
Our remote UX team consisted of myself and two brilliant designers/researchers, Adam Bland and Jacqueline Schillinger, in a flat team structure. We divided the workload evenly and contributed roughly equally to each part of the process.
Some specific tasks I did for the project
- Conducted user interviews
- Organized data to create and prioritize solutions for issues raised in meetings
- Worked with the product manager to fit fixes into development sprints
- Compiled and presented findings to the executive team
Recruiting participants
Realizing that passive data collection in the app would be impossible with the available developer time, we decided that the best means to collect a wealth of diverse feedback from our users would be video interviews. We set up a recruiter on the login page, and before long, the volunteers came pouring in.
Pouring.


The stars of the study
There was a lot of variety in the makeup of our userbase and in how people used the app, but here is a quick breakdown of those we most frequently spoke with:
- Administrators: Management types. Not usually involved with individual players or games, more focused on financial reports.
- Coaches: The people on the field and online after the game.
- Volunteers: Typically, parents of players, meaning lots of turnover each season. Not every organization used volunteers, but those that did absolutely depended on them.
A little more about our stars:
- They regularly spent hours in the app each week during their active seasons
- They may or may not have influenced their organization’s decision to use C&L
- Their length of experience with C&L ranged from a few scant months to the entire life of the product
- They each tended to repeat an assortment of tasks within the broader feature set (no one uses every part of the app)


What we asked them
A bump in the road
Just about any scheme involving humans is going to have surprises, and this was no exception. The most significant impediment to our research was very human, indeed: our registrants were ghosting us.
On a typical day during the research period, we had two to five 25-minute interviews scheduled. Each interview was attended by two members of the UX team: one to ask the questions and one to take notes.
After the study commenced, we would have entire days when none of our interviewees showed up, or when only half of them did. It was demoralizing, did nothing for our study, and wasted an enormous amount of time.

Throughout the study, we came up with several ideas that might explain the attendance issues, but none of them panned out.
Some of our hypotheses
- The reminders must not be getting sent! (They were)
- The emails are going to spam folders? (They weren’t)
- The people that sign up weeks ahead forget or stop caring by the time of the appointment? (A pet theory of mine for a while, but after collecting the data, it didn’t hold water.)
The findings
Despite these setbacks, we were able to gather a wealth of great feedback, from broad sentiments to specific issues and, in one case, even pages of detailed suggestions from a developer. The people we talked to were passionate about seeing the product get better, even if they were frustrated by its current state (and most were very frustrated).
Throughout the study, we recorded and took notes on the interviews, which we collected and tagged in Dovetail. This made it effortless to visualize the scale of each issue, large and small, and to measure the demand for certain oft-requested features.
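Dovetail handled the aggregation for us, but the underlying idea is simple enough to sketch. Below is a minimal TypeScript illustration of tallying tagged comments per theme; the data shape, tag names, and sample quotes are hypothetical, not our actual Dovetail taxonomy.

```typescript
// Hypothetical shape of a tagged interview comment exported from our notes.
interface TaggedComment {
  participantId: string;
  quote: string;
  tags: string[]; // tags are not exclusive; one comment can carry several
}

// Tally how many comments carry each tag, most frequent first.
function tagFrequencies(comments: TaggedComment[]): [string, number][] {
  const counts = new Map<string, number>();
  for (const { tags } of comments) {
    for (const tag of tags) {
      counts.set(tag, (counts.get(tag) ?? 0) + 1);
    }
  }
  return [...counts.entries()].sort((a, b) => b[1] - a[1]);
}

// Example: overlapping tags are why proportions don't sum neatly across categories.
const sample: TaggedComment[] = [
  { participantId: "p01", quote: "I can never find the report I need.", tags: ["negative", "navigation"] },
  { participantId: "p02", quote: "Search works, but it is painfully slow.", tags: ["negative", "search"] },
];
console.log(tagFrequencies(sample)); // [["negative", 2], ["navigation", 1], ["search", 1]]
```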
Feedback breakdown
Following are some of these visualizations of the feedback, built from the associated tags. These labels are not mutually exclusive, and many comments were tagged as multiple kinds of pain points. For that reason, the proportions are more useful for broad context than for directly comparing numbers across categories.
That said, there’s no getting around the fact that negative feedback was by far the most common.
The 3 buckets of feedback
[Chart: feedback volume across the three buckets]
The most reported pain points
Comparing different groups was great, but charting the contrast within one section proved even more helpful. It enabled us to quickly home in on common themes in the feedback.
The top 10 most commonly stated problems
[Chart: the top 10 most commonly stated problems]
Making a list
This study was conducted over a relatively long period, around two months, and we weren’t just twiddling our thumbs between interviews.
While feedback streamed in, we tracked issues huge, tiny, and everything in between in a “hit list.”
Using a simple formula, we gauged the “immediate priority” of each issue based on how much positive impact a fix might create versus the estimated difficulty and time required for the designers and developers to create and deploy it.
For designers, “0 effort” typically meant the fix only needed to be documented. A 1 might be designing a single element, and a 2 a page or interaction. We played it a little fast and loose, but it proved a handy tool for coordinating with the product team.
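In spirit, the score boiled down to impact relative to effort. Here’s a minimal TypeScript sketch of that kind of scoring; the field names, scales, and division-based formula are illustrative assumptions rather than our actual arithmetic.

```typescript
// Hypothetical hit-list entry. Effort uses the small integer scale described
// above (0 = just document it, 1 = one element, 2 = a page or interaction).
interface HitListItem {
  issue: string;
  impact: number;       // estimated positive impact of a fix
  designEffort: number; // design work needed
  devEffort: number;    // rough development estimate on a similar scale
}

// "Immediate priority": impact relative to total effort. The +1 keeps
// zero-effort items from dividing by zero (they rank highest, as they should).
function immediatePriority(item: HitListItem): number {
  return item.impact / (item.designEffort + item.devEffort + 1);
}

const hitList: HitListItem[] = [
  { issue: "Tab order skips Last Name", impact: 3, designEffort: 0, devEffort: 1 },
  { issue: "Find/Search naming mismatch", impact: 1, designEffort: 0, devEffort: 1 },
  { issue: "Rework results navigation", impact: 3, designEffort: 2, devEffort: 3 },
];

// Cheapest, highest-impact fixes float to the top of the list.
hitList.sort((a, b) => immediatePriority(b) - immediatePriority(a));
console.log(hitList.map((item) => item.issue));
```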

Designing solutions
We uncovered a wealth of problems throughout the study, both from listening to users and from exploring the app on our own. While we didn’t have time to address everything we encountered (or, frankly, even a small fraction of it), a few problems stood out as potentially quite impactful while requiring relatively little design work.
Finding problems
One part of the app that we heard about over and over was the “Find” interface. It was the backbone of many of our users’ workflows, and many found it unbearable to use. It was an excellent example of an area we could improve with surgical precision and minimal effort.
Some of these problems were fairly obvious at a glance; others were specific to particular workflows, and they were driving some of our users up the wall.
A sample of the problems
- The layout of the input fields is ambiguous, and the functionality is counterintuitive: pressing Tab in “First Name” takes the user to “Parent First Name” rather than “Last Name” (see the sketch after this list).
- The terminology isn’t consistent, and uses both “Find” and “Search” in different areas.
- The results navigation is functional but lacking from a UX/IA perspective.
- Users often need to scroll the results all the way to the right, past a lot of less important columns, to find the info they need.
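The tab-order problem is the kind of zero-design fix described in the next section: it needed a written spec, not a mockup. In layouts like this, the usual culprit is markup ordered by column while the fields read in rows, so Tab follows the DOM rather than the reading order. Here’s a hedged TypeScript sketch of the stopgap; the field IDs are invented, since we never saw C&L’s source.

```typescript
// The durable fix is to reorder the markup so the DOM matches the visual
// reading order; then no explicit tab management is needed at all.
// As a stopgap when the markup can't be touched, sequential tabindex values
// force Tab to walk the fields in reading order. (Positive tabindex values
// are a last resort, since they override the page-wide tab sequence.)
const readingOrder = [
  "first-name",        // hypothetical field IDs
  "last-name",         // Tab from First Name should land here...
  "parent-first-name", // ...not here
  "parent-last-name",
];

readingOrder.forEach((id, index) => {
  const field = document.getElementById(id);
  if (field) field.tabIndex = index + 1;
});
```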


Solving the problems
For issues like those just mentioned, which didn’t require much more user feedback or testing, we’d put together a quick mockup to alleviate the problem. Some fixes required no design work at all, mainly behavioral changes whose appearance wouldn’t need alteration, such as the tab order of the input fields. In such cases, we’d simply write up how things should function instead and pass a ticket along to the product and development teams.
For more involved fixes, we would mock up proposed changes as time allowed, and would revisit them as necessary when they were slated for development.
Putting it all together
While it was nice to look at the charts and pat ourselves on the back for collecting all that data, that’s not where we stopped. Waiting at the end of this long process was a room of executives to whom we’d present our findings and propose solutions.
To that end, we started digging into the feedback at a more granular level: tying themes together, pulling out impactful segments from our audio recordings, and putting it all together into a cohesive story of where the product stood — and paths we could take to improve C&L for its users.
Documenting problems
We then collected the most common and problematic findings into neat, concise documents, supporting them with embedded audio and text quotes as well as a few screen recordings. Some examples of these are below.
The voice of the customer
At the end of this process, we carried our users’ views directly to the highest reaches of the executive team.
The sheer volume of complaints had a significant effect, sure, but nothing is more effective than hearing a frustrated user’s impassioned feedback firsthand. Watching the faces of our audience as we played some of the harsher recordings was one of my personal project highlights.
Our work sparked a lot of lively conversation — during and after the presentation. C&L was finally going to get what it so desperately needed.
The nuclear option
A few months later, it was announced that C&L would be winding down.
The company had purchased another product (with a much better user experience) to replace it.
It was not exactly what we were aiming for, and it certainly wasn’t among our proposed plans, but the decision will likely prove to be an improvement in the long run.
Takeaways
On the one hand, it can be frustrating for a project to be scrapped after pouring so much time and sweat into it. On the other hand, just looking at the app filled me with whole new levels of anxiety. The universe is undoubtedly better off without the user experience tragedy that was C&L. Sometimes the quickest and cheapest solution is to trash the product in question.
You always expect to be surprised during user research, but this project taught me not to assume how the data and insights gleaned might be put to use.