UX Product Designer · Provo, Utah · Available for new roles
I run user interviews before the first wireframe, track behavior with FullStory and Sigma after launch, and write JIRA tickets for developers in between. Highlighted in this portfolio are one shipped AI-assisted application, one concept project, and six features deployed across GoReact and RainFocus — from WCAG 2.1 compliance that unlocked higher-ed contracts, to a self-serve payment flow that cut the sales team out of every learner transaction.
Design Process
Ten steps — not as a checklist, but as a commitment. Most UX processes end at step 07. Steps 06, 08, and 10 — stakeholder alignment, implementation support, and measuring results after launch — are where the work either holds together or falls apart. Every GoReact project in this portfolio ran to step 10.
The Product UX Designer, Product Manager, and Engineering Lead collaborate across all ten steps, drawing on ideas from inside and outside the business. This isn't a waterfall — each step informs the others, research can reopen discovery, testing can reframe the problem, and the loop continues as long as the work demands. That loop is already changing shape. Collaborating in real time with a tool like Claude — generating a working concept, testing it with real users the same day, bringing the findings back to iterate immediately, and testing again — compresses what once required separate design and engineering sprints into a single cycle, and redefines what a product designer with strong research instincts is capable of delivering.
Frame the core challenge, constraints, and success criteria before any design work begins.
Research real users through interviews and observation to surface what they actually need.
Map the existing landscape to identify gaps, patterns, and opportunities the design must address.
Sketch and wireframe rapidly to explore the solution space before committing to any direction.
Put rough designs in front of real users, learn fast, and refine based on what breaks.
Bring product, engineering, and leadership into the design before it becomes costly to change.
Produce specs, component documentation, and handoff materials that engineers can build from directly.
Stay present during build to answer questions, review work in progress, and catch drift early.
Validate the near-shipped experience against the original problem definition with real participants.
Assess outcomes against success criteria and document what the design actually changed.
One shipped AI-assisted application built with Claude, six shipped features at GoReact and RainFocus — full process, no visuals due to NDA. One speculative concept project with complete visuals.
I designed and built a full progressive web application using Claude as my development partner — no prior software deployment experience. LockedIn is a merit-based player evaluation system for my U13 youth soccer team that tracks attendance, fitness, and effort across a 23-practice season. This case study documents what happens when a product designer uses AI to ship real software to real users.
Shipped a production PWA used by 15 players, 3 coaches, and their parents — with real-time data sync, voice-coached practice timers, a merit-based scoring system, a template library of US Soccer-aligned practices, and Spotify integration — all built through AI conversation with no prior deployment experience.
As a youth soccer coach working with 12–13 year olds, I needed a system to make position selection fair, transparent, and effort-based rather than subjective. Players and parents deserved to see that playing time was earned through consistent attendance, fitness effort, and engagement — not favoritism. No existing tool combined practice tracking, player evaluation, and coaching tools in one place. I also needed a voice-coached practice timer so my assistant coaches could run structured sessions independently.
I'm a product designer, not a software engineer. I had never deployed an application. Instead of learning a framework, I partnered with Anthropic's Claude to build LockedIn through iterative conversation — describing what I needed, reviewing the output, testing on my phone, and refining. Over multiple sessions spanning weeks, we built a ~5,500-line single-file vanilla JavaScript PWA with no build step, no framework, and no dependencies beyond Firebase and a few CDN libraries. Every design decision was mine. Claude translated those decisions into working code.
The application has seven core systems, each designed and iterated through AI conversation:
Merit Scoring Engine — A 6-point weekly maximum across attendance (1 pt), fitness (1–2 pts), and effort (1–3 pts) with a 5-week repeating cycle. Cumulative scores determine position selection tiers. Game bonuses reward players who meet certain thresholds. The system is transparent — every player and parent can see their scores.
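A minimal sketch of the weekly scoring logic in the app's vanilla JavaScript, assuming a per-week record with attendance, fitness, and effort fields (the field and function names are illustrative, not the shipped identifiers):

```javascript
// Illustrative weekly merit calculation; point values mirror the system above.
function weeklyMeritScore(week) {
  const attendance = week.attended ? 1 : 0;                          // 1 pt for attending
  const fitness = week.attended ? Math.min(week.fitnessPts, 2) : 0;  // 1–2 pts
  const effort = week.attended ? Math.min(week.effortPts, 3) : 0;    // 1–3 pts
  return Math.min(attendance + fitness + effort, 6);                 // 6-pt weekly cap
}

// Cumulative season score drives position selection tiers.
const cumulativeScore = (weeks) =>
  weeks.reduce((sum, week) => sum + weeklyMeritScore(week), 0);
```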
Practice Timer with Voice Coaching — A timer that runs through structured practice activities and exercises with Google Cloud TTS voice announcements. The voice calls out activity transitions, exercise names, durations, and remaining practice time. Audio is amplified through the Web Audio API GainNode for outdoor use. Beep tones signal transitions.
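The amplification step is a standard Web Audio API pattern. A minimal sketch, assuming the Google Cloud TTS response has already been decoded into an AudioBuffer (variable names and the boost level are illustrative):

```javascript
// Boost decoded TTS audio above normal playback volume for outdoor use.
const audioCtx = new (window.AudioContext || window.webkitAudioContext)();

function playAnnouncement(ttsBuffer, boost = 2.5) {
  const source = audioCtx.createBufferSource();
  source.buffer = ttsBuffer;

  const gain = audioCtx.createGain();
  gain.gain.value = boost;                       // values > 1 amplify the signal

  source.connect(gain).connect(audioCtx.destination);
  source.start();
}
```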
Recording Interface — Per-player, per-practice data entry for Yes/No activity completion, RPE (Rate of Perceived Exertion) self-ratings, coaching observations (positive/negative), wellness checks, and arrival times. Every data point feeds the merit scoring engine.
Player Stats & Analytics — Season-long analytics with Chart.js visualizations including RPE trends, attendance streaks, team comparison radar charts, and arrival time patterns. Coaches see every player; players and parents see only their own data.
Individual Development Plans — Per-player IDP pages with customizable habit tracking questions and coach notes. Coaches can save and manage IDPs through the library system.
Library System — Save, organize, and import entire practices, individual activities, or single exercises. Includes 6 pre-built US Soccer U13 templates following the Play-Practice-Play methodology. A left-nav library page with search, sort, and folder organization (Practices, Activities, Exercises, Templates, Stats, IDPs).
Spotify Integration — PKCE OAuth flow connecting to Spotify's Web Playback SDK. Coaches can play music during practice with automatic volume ducking during voice announcements.
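The ducking behavior is a small wrapper around the SDK's volume control. A sketch assuming `player` is a connected Spotify Web Playback SDK instance and `speak()` resolves when the announcement finishes (names and volume levels are illustrative):

```javascript
// Duck Spotify playback while a voice announcement plays, then restore it.
async function announceWithDucking(player, speak) {
  await player.setVolume(0.15);   // lower the music during the announcement
  await speak();                  // play the TTS announcement to completion
  await player.setVolume(0.8);    // restore practice-level volume
}
```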
Every architectural decision was driven by my constraints as a non-engineer deploying to real users:
Single HTML file — no build step, no bundler, no npm. I could drag-and-drop deploy to Cloudflare Pages.
Firebase Auth + Firestore — multi-device sync so coaches, players, and parents share the same data.
Network-first service worker — after discovering that a cache-first strategy caused data loss on redeployment, I switched to network-first with an offline fallback.
LocalStorage + Firestore dual-write — every save writes locally first (instant), then syncs to Firestore (durable). A pending queue retries failed syncs when connectivity returns.
Seed-only-if-empty pattern — after a data-loss incident during redeployment, I built a guard that only seeds Firestore on true first-time setup, preventing defaults from overwriting real data.
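A minimal sketch of that dual-write pattern, assuming the Firebase compat SDK is loaded from a CDN; the collection name, storage keys, and function names are illustrative rather than the app's actual identifiers:

```javascript
// Save locally first (instant), then sync to Firestore; queue failures for retry.
async function saveRecord(key, data) {
  localStorage.setItem(key, JSON.stringify(data));   // instant local write

  try {
    await firebase.firestore().collection('records').doc(key).set(data);
  } catch (err) {
    const queue = JSON.parse(localStorage.getItem('pendingSync') || '[]');
    queue.push({ key, data });                       // remember the failed write
    localStorage.setItem('pendingSync', JSON.stringify(queue));
  }
}

// Flush the pending queue when connectivity returns.
window.addEventListener('online', async () => {
  const queue = JSON.parse(localStorage.getItem('pendingSync') || '[]');
  const remaining = [];
  for (const item of queue) {
    try {
      await firebase.firestore().collection('records').doc(item.key).set(item.data);
    } catch (err) {
      remaining.push(item);                          // keep anything that still fails
    }
  }
  localStorage.setItem('pendingSync', JSON.stringify(remaining));
});
```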
This project demonstrates that a product designer with strong domain knowledge and clear design thinking can use AI to ship real software — not a prototype, not a mockup, but a production application with authentication, real-time data sync, third-party API integrations, and actual users. The design decisions were mine. The architecture was collaborative. The code was AI-generated and human-directed. The result is a tool that my team uses every practice.
Every case study in this portfolio is protected by NDA — the outcomes are real, but the visuals can't be shown. This concept project exists to change that. It's a speculative redesign of the coach's clip-tagging workflow for a sports video analysis tool, targeting a documented usability problem that forces coaches to watch the same game film multiple times. I'm a licensed soccer coach. I know this problem firsthand. This project shows my full process — from research through prototype — with nothing held back.
A unified single-pass tagging interface — where stats, clip cuts, and playlist assignments are treated as simultaneous properties of one moment — can cut post-game film processing time by 50% or more for coaches working without video staff.
The clip-tagging interface forces coaches to make two complete passes through the same game film to do what should be a single operation — tag a stat, cut a clip, and assign a playlist simultaneously. For the high school or club coach working alone after a full day of teaching or work, that time cost is prohibitive enough to abandon the features entirely.
Hudl is the dominant video analysis platform in high school and collegiate athletics. Coaches use it to review game film, tag statistical events, cut highlight clips, and share footage with players and recruiters. The platform is deeply embedded in team workflows at thousands of programs around the world.
However, a persistent and well-documented friction point (see sources below) affects the most time-constrained segment of Hudl's user base — the high school and club coach who does not have access to a video coordinator, analytics staff, or dedicated film time. For these coaches, the post-game film workflow requires a minimum of two complete passes through game footage: one to tag statistics, and a second to cut and assign clips. For a 90-minute soccer match, this routinely means three or more hours of solo post-game work before a single coaching insight reaches players.
This concept project proposes a redesigned tagging interface that enables coaches to simultaneously tag statistics, cut clips, and assign playlist destinations as a single unified operation during one playback pass — reducing post-game film processing time by an estimated 50% or more for the time-pressured teacher-coach persona.
The coach using Hudl alone. The popular image of Hudl is a college football program with a dedicated analyst tagging every snap during the game. That user exists. But Hudl's market penetration is deepest at the high school level, where the reality is significantly different. The typical high school or competitive club coach using Hudl is a full-time teacher, administrator, or working professional who coaches on the side. They have no video staff. They arrive home after a Friday night game — sometimes past midnight — and face a backlog of film work that must be completed before the next session.
The double-tagging problem — mechanics. Hudl's current film review interface separates two logically related operations into distinct modes that cannot be performed simultaneously:
| Operation | Time Cost (90-min match) | Watch Passes Required |
|---|---|---|
| Stat Tagging Pass | 60–90 min | 1 full pass |
| Clip Cutting Pass | 45–75 min | 1 full pass |
| Playlist Assignment | 20–40 min | Review + navigate |
| Total (current workflow) | 2.5–3.5+ hours | 2–3 passes |
| Target (proposed) | 60–90 min | 1 pass |
The compounding effect is significant: a coach who wants to use both stat analysis and player-specific clip sharing must invest three or more hours per game. For a program playing 15–20 games per season, that is 45–70 hours of post-game film work. Most coaches are not spending that time — they are choosing between stats and clips, or abandoning both in favor of simply sharing the raw game film link.
Four personas drive this project. Full persona documents are a deliverable in Step 02. Summaries below are for brief context only.
The core hypothesis is structural: the reason coaches cannot tag stats and cut clips simultaneously is not a technical limitation — it is a modal interface architecture that treats these as separate tasks. They are not separate tasks. They are the same task: identifying a meaningful moment in the film and attaching multiple attributes to it simultaneously.
Success is measured through prototype testing and qualitative research. The following criteria define a successful outcome:
| Criterion | Measurement Method | Target |
|---|---|---|
| Time-on-task reduction | Comparative test vs. current Hudl workflow | ≥ 40% reduction |
| Task completion rate | Tag + clip + playlist in one pass on prototype | ≥ 80% success |
| Coach confidence rating | Post-task Likert scale (1–5) | ≥ 4.0 mean |
| Feature adoption intent | "Would you use this instead of current workflow?" | ≥ 70% yes |
| Net new steps | Step count audit vs. current workflow | Same or fewer steps |
Constraints
Assumptions
The following questions are unresolved at the start of this project and will be addressed through research, testing, and design iteration:
The following sources provide direct evidence that the two-pass workflow problem is not a design assumption — it is a publicly documented, multi-year complaint from real Hudl users across sports and levels. Each source is specific enough to locate and read.
This scope definition establishes the precise boundaries of the redesign effort — what is being designed, what is explicitly excluded, and what assumptions are being carried into execution. It exists to prevent scope creep and to give any product or engineering reviewer a clear picture of what a real sprint or project plan would need to contain.
These criteria define what a successful outcome looks like for this project, measured through usability testing in Steps 05 and 09. Each criterion is tied to a specific measurement method and a target threshold. They exist so that the design evaluation is not subjective — the redesign either passes or it doesn't.
| Criterion | Measurement Method | Target | Test Step |
|---|---|---|---|
| Time-on-task reduction | Timed task: full post-game tag session, prototype vs. Hudl base platform | ≥ 40% faster | Step 05 & 09 |
| Single-pass task completion | % of participants who complete tag + clip + playlist in one playback pass without backtracking | ≥ 80% | Step 05 & 09 |
| Coach confidence rating | Post-task Likert scale (1–5): "I felt confident I could use this workflow after every game" | Mean ≥ 4.0 | Step 05 & 09 |
| Feature adoption intent | Closed question: "Would you use this instead of your current post-game workflow?" | ≥ 70% yes | Step 09 |
| No regression for basic users | Tech-reluctant veteran persona completes basic clip-share task with no added errors vs. current workflow | 0 net new errors | Step 05 |
| Step count parity | Interaction audit: steps to complete one fully tagged moment (stat + clip + playlist) in new interface vs. current two-pass workflow | Same or fewer total steps | Step 07 |
| Assist comparison intent | Among Assist subscribers tested: "Would this interface reduce your need for the Assist service?" | Directional — no threshold | Step 09 |
These criteria are the limit of what speculative concept work can prove. They cannot measure real-world adoption at scale, long-term behavior change, or the business impact of cannibalizing Assist revenue. Those questions require a shipped product and instrumented usage data — the kind that FullStory and Sigma would provide in a real product context. The open questions log in Step 10 will name these explicitly.
Primary users: the high school head coach who is also a full-time teacher or the part-time club coach who has a full-time day job — managing film review late at night after a game, without a video coordinator. Secondary user: the assistant coach who assists with tagging but lacks training on the platform. Research approach: interviews with coaches from Liverpool FC International Academy Utah and other club-level programs, supplemented by public G2, App Store, and coaching forum reviews to validate pain points at scale.
A fourth persona has been added specifically in response to Hudl Assist: the Assist subscriber — a coach or program that already pays for Hudl's human-analyst breakdown service (launched ~2018, upgraded through 2025). This persona is valuable as a contrast case. If Assist already solves the two-pass problem for them at an added cost, what does that reveal about the value of the underlying workflow? Research with this persona will explore willingness to pay, whether they still interact with raw film after receiving Assist breakdowns, and what capabilities they wish existed in the base interface. This directly sharpens the design question: the solution should deliver Assist-level outcomes through interface, not service.
Name: Coach Marcus Webb (composite)
Role: Varsity high school soccer head coach + full-time biology teacher
Age: 38
Program level: 3A public high school, 22-player roster
Tech comfort: Medium — uses Hudl, avoids advanced features
Hudl tier: Base subscription (school-funded). No Assist.
Games: Fridays at 6pm, home and away. Returns home 9:30–10:30pm.
Teaching load: 5 periods/day, planning periods used for grading. No free blocks for film during school hours.
Film window: Friday night 10pm–midnight, Saturday morning before family wakes up.
Team meeting: Sunday afternoon. Film insights need to reach players by then.
Name: Coach Daniela Reyes (composite)
Role: U16 girls club soccer coach + full-time marketing manager
Age: 32
Program level: Competitive club, travel team, 18-player roster
Tech comfort: High — power smartphone user, frustrated by clunky UX
Hudl tier: Base subscription (club-funded). No Assist.
Games: Weekends, 8am–5pm tournament blocks. Hotel stays 6–8 weekends/year.
Work schedule: Full-time M–F, remote. Coaches Tuesday/Thursday evenings + weekends.
Film window: Sunday evenings after tournaments. Device: laptop, occasionally iPad.
Parent/player expectations: High. Parents expect film access and individual feedback. Recruiting pressure for 4–5 players on roster.
Name: Coach Bill Hartman (composite)
Role: High school varsity football offensive coordinator + PE teacher
Age: 56
Program level: 4A public high school, 45-player program
Tech comfort: Low — uses Hudl only because the program requires it
Hudl tier: Base subscription (school-funded). No Assist. The head coach uses Assist; Bill doesn't.
Games: Friday nights. Film session is mandatory Saturday morning for the full coaching staff.
Workflow: Head coach or team manager handles film upload, tagging, and clip cutting. Bill watches the film in the group session but does not work in Hudl independently.
Experience: 26 years coaching. Has used VHS, DVD, Krossover, Hudl. Views technology as a means, not an end.
Device: School-provided laptop. Uses email and Microsoft Teams. No personal film work at home.
Name: Coach Sarah Kim (composite)
Role: Varsity high school lacrosse head coach + AD's assistant
Age: 42
Program level: Competitive 5A, 28-player roster, 3 state titles in 8 years
Tech comfort: High — data-driven, has used analytics dashboards professionally
Hudl tier: Hudl Assist (program pays ~$1,400/year). Receives tagged breakdowns within 12–24 hours of upload.
Assist workflow: Uploads game film Friday night. Receives Hudl Assist breakdown by Saturday morning: full stat tagging, event markers, and automated per-player clip playlists for all athletes.
Remaining manual work: Still returns to raw film for custom clip packages — specific play sequences, scheme illustrations, recruiting reels. Assist's auto-playlists are player-by-player, not thematic.
Annual cost awareness: Knows exactly what she pays. Renews every year but questions the value during off-season.
Relationship to base platform: Still logs into base Hudl weekly. Assist removes the two-pass problem for routine review but doesn't eliminate all manual film work.
Research was conducted across 11 coaching interviews — 3 teacher-coaches, 2 club coaches, 2 tech-reluctant veterans (observed in team meeting context), and 4 Assist subscribers — supplemented by structured analysis of 47 publicly available reviews from G2, the Hudl community forum, App Store reviews tagged with workflow-related keywords, and the coaching subreddits r/bootroom and r/SoccerCoaching. The interview protocol was semi-structured, 45–60 minutes, conducted over video call. Participants were recruited through Liverpool FC International Academy Utah, coaching networks in Utah and Nevada, and a publicly posted screener.
Interviews were recorded with participant consent and transcribed. Affinity diagramming was used to identify themes across participants. The synthesis below is organized by finding, not by persona, because the most useful patterns emerged across persona boundaries.
Every non-Assist participant described film work as a time competition — a fixed window between game end and the next team interaction where coaching insights have to be extracted, organized, and delivered. The window varies (Friday night, Saturday morning, Sunday evening) but its compression is constant. The critical finding: coaches do not lack motivation to use advanced features; they lack time to make two passes. This distinction matters because it changes the design question. The problem is not feature discoverability or interface learnability — it is workflow structure.
Three of 5 non-Assist participants described a specific pattern: stat tagging happens in session one (often very late at night, while fatigued) and is completed at a low level of detail. Clip cutting, requiring a second full pass, is deferred and frequently abandoned. The result is an asymmetry: stat data accumulates in Hudl but never converts to player-facing film content.
All 5 non-Assist participants had developed workarounds for the clip-delivery problem — and none had labeled them as workarounds until directly prompted. Sharing a game link with timestamps in a group text was the most common (4 of 5). Screenshot-plus-annotation via Canva or Notes appeared in 2 cases. One coach printed still-frame grabs from the film and taped them to a whiteboard for the team meeting. These workarounds represent Hudl feature abandonment, not coach failure. Coaches adapted to the interface's constraints rather than fighting them — but in doing so, they reduced Hudl to a video hosting service, not a coaching tool.
Assist subscribers described near-unanimous satisfaction with the core value proposition — getting a tagged breakdown without doing the tagging themselves. All 4 confirmed it had eliminated the two-pass problem for routine post-game review. However, all 4 also described a second layer of manual work that Assist does not address: theme-based clip packages (e.g., "all our transition moments from the last three games"), recruiting reels for individual players, and play-sequence illustrations for upcoming opponent preparation.
Assist delivers player-by-player clip playlists — every moment a specific athlete appears. What it does not deliver is intent-based clip selection: a coach pulling the 6 moments that illustrate a specific tactical concept for a specific practice session. That work still happens in the base interface, still requires a manual pass, and still takes the same amount of time as before Assist.
Six of 7 non-Assist coaches reported doing their primary film work between 10pm and midnight the night of the game. Task completion quality at this time was described by all 6 as "lower than I want it to be." Specific behaviors reported: tagging fewer event types than intended, using generic catch-all tags rather than granular ones, stopping before the end of the film, and skipping documentation of opponent patterns entirely. The fatigue-degradation loop is self-reinforcing: low-quality tags reduce the analytical value of the data, reducing the coach's confidence that the stat tagging is worth doing at all.
Design implication: the interface must support high-quality output with low cognitive load at 11pm on a Friday, not just at 9am on a Thursday. Every additional decision the interface requires is a decision that fatigued coaches simplify or skip.
Both observed tech-reluctant veterans operated inside programs with staff structures that insulated them from the interface. In both cases, a head coach, team manager, or "tech-savvy assistant coach" handled all Hudl administration. The veteran coach consumed the output — watching film in team meetings, clicking shared links — but never operated the interface independently. This is significant for the redesign: the veteran's primary risk is not from new features (which he won't use), but from visual changes to the core player experience (the film player itself, the shared link landing page) that could disorient him in the team meeting context. Preserving the film-watching experience is a higher priority than exposing new tagging functionality to this persona.
Behavioral archetypes are distinct from personas. Personas describe who users are — their background, context, and goals. Archetypes describe how users behave inside a specific workflow, regardless of which persona they belong to. A single coach can exhibit multiple archetypes depending on the game, the week, and the time available. These archetypes emerged from interview synthesis and are used to stress-test design decisions: each proposed interface change should be evaluated against what it does to each archetype, not just each persona.
Definition: Does one thing during film review — either stat tagging or clip cutting — but never both in the same session. Chooses based on which deliverable is due first. The other deliverable is deferred indefinitely.
Who shows this behavior: Teacher-Coach (Marcus archetype) most consistently. Some Club Coaches during tournament weekends when time is extremely compressed.
Trigger condition: Available time is less than what both passes require. The coach calculates (usually unconsciously) that one complete pass is better than two incomplete ones.
Design implication: The Single-Passer is the primary target user for a unified tagging interface. They already want to do both — the interface structure is what forces them to choose. Eliminating the mode-switch between stat tagging and clip cutting collapses their two-pass obligation into one. This is the highest-ROI archetype for the redesign.
Definition: Shares game film with players or parents via a direct link plus manually typed timestamps in a separate message, rather than cutting and sharing discrete clips. This is the most common workaround identified in research (4 of 5 non-Assist participants).
Who shows this behavior: Club Coaches (Daniela archetype) most visibly. Also appears in Teacher-Coaches during high-volume weeks. Rarely in programs with video coordinators.
Trigger condition: Coach has watched the film and knows which moments matter, but does not have the time or inclination to go back through the clip-cutting interface a second time to formalize those moments as discrete deliverables.
Design implication: The Timestamp Sender is actively avoiding the clip-cutting interface because the cost of returning to it exceeds the benefit they perceive. A single-pass interface that creates a clip at the moment of tagging — no return visit required — would eliminate the workaround and likely increase engagement with the clip-sharing feature significantly. The design must make clip creation feel as lightweight as typing a timestamp.
Definition: Defers all film work to a single large session the morning after the game — doing stat tagging, clip cutting, and film review in one long block. This approach preserves quality by avoiding late-night fatigue degradation, but compresses the delivery window and crowds out other weekend obligations.
Who shows this behavior: Coaches with young children or family obligations that make late-night work impossible. More common among women coaches in the sample. Also appears in coaches with longer commutes after evening games.
Trigger condition: Deliberate scheduling choice. The coach has accepted late delivery to preserve work quality. Often describes this as a "system" they've developed.
Design implication: The Saturday-Morning Editor already consolidates work into one session by choice — they are a natural fit for a unified interface even if the time pressure is lower. They are more likely to complete all deliverables if the interface rewards doing them together. Their specific concern is total session length: they want to finish before family obligations resume, so efficiency within the session matters even when urgency is lower.
Definition: Assigns film work to an assistant coach, team manager, or technically skilled player (often a senior or team captain). The head coach provides direction — "clip these three moments, tag all our defensive transitions" — and reviews the output rather than producing it.
Who shows this behavior: Head coaches at larger, better-resourced programs. Also appears in coaches who are confident in their strategic decisions but describe themselves as "not good at the tech side."
Trigger condition: Access to a trusted collaborator with better Hudl proficiency. Often born from necessity (head coach doesn't have time) and formalized into a regular division of labor.
Design implication: The Delegator is a secondary user of the tagging interface (their delegate is the primary user) but is the primary consumer of the output. The redesign must not introduce so much complexity that Delegators can no longer hand off the workflow to less-experienced assistants. The unified interface should be learnable by a first-year assistant coach, not just an experienced head coach. Also: the Delegator's preferences drive what gets tagged, but not how. Interface decisions that make delegation easier (e.g., shareable tagging templates, annotation handoff, draft review mode) have high value for this archetype.
Film work does not happen in a dedicated, distraction-free environment. It happens in stolen time — between other obligations, at the margins of the day, often while fatigued. The following temporal patterns emerged from research:
| Session Type | Typical Time | Duration | Mental State | Who Does This |
|---|---|---|---|---|
| Post-game Friday night | 10pm – 12am | 60–120 min | Tired, emotionally activated (win/loss), social media distraction high | Teacher-Coach, Club Coach (tournament eve) |
| Saturday morning | 6am – 8am | 90–180 min | Rested, focused, time-bounded by family wake-up | Saturday-Morning Editor archetype |
| Sunday afternoon/evening | 3pm – 9pm | 60–90 min | Calm but distracted by weekend obligations | Club Coach (tournament return day) |
| Team meeting in-session | Varies (during meeting) | Real-time, 30–90 min | Alert, public-facing, pressure to project competence | All personas consuming output |
| Hotel room / tournament travel | 10pm – 1am | 30–60 min | Very tired, poor network, unfamiliar device setup | Club Coach exclusively |
Key design implication: The interface must perform well at 11pm on a Friday, on a home network, on a 4-year-old laptop, by a person who has been on their feet since 7am. That is the median use condition — not a stress test. Every additional modal, confirmation dialog, or mode-switch is a cognitive tax on someone who has no cognitive surplus to spend.
All 11 participants described their film sessions as interrupted. The type and frequency of interruption varied by persona and session time:
Design implication: The interface must be tolerant of interruption. A coach who steps away mid-tag and returns 15 minutes later should not lose context. The current position in the film, any open tagging panels, and any partially completed tags should persist. This is a requirement, not a nice-to-have, given the environment the interface actually operates in.
These notes document structured interviews with four Hudl Assist subscribers conducted in February–March 2026. The interviews were designed to answer a specific design question: if Assist already solves the two-pass problem for paying subscribers, what does that reveal about what a base-platform solution must do — and where will it fall short? The Assist subscriber is a contrast case, not a primary design target. Their experience shapes scope boundaries and reveals which problems a base-platform fix cannot credibly promise to solve.
All participants are identified by role and tenure only. Quotes are lightly edited for clarity.
Uploads game film within 2 hours of final whistle using the Hudl app on an iPhone. Assist breakdown is available by 9am Saturday — full stat event markers and per-player clip playlists for all 28 players. Reviews breakdown on laptop Saturday morning, approximately 45 minutes. Sends player playlists directly from the Assist dashboard with no additional editing.
Still returns to base platform 2–3 times per week for: (1) thematic clip packages for practice installation, (2) custom reels for 3 college-bound seniors, (3) opponent film review and tagging. Assist does not cover opponent film. For opponent work, she performs the full two-pass workflow manually — stat tagging pass, then clip cutting pass — the same workflow that non-Assist coaches use for all games.
His head coach manages Assist subscription and handles all film upload and breakdown distribution. He is a consumer of Assist output, not an operator. Receives a shared link to the Assist breakdown each Saturday. Reviews it on his own, then uses the base platform to pull specific formations and play sequences for his weekly install meeting. Estimates 3 hours/week in the base platform despite having Assist — all of it on offensive scheme preparation, which Assist's breakdown does not organize thematically.
Manages Assist subscriptions for 6 teams in a competitive club program. Assist cost is ~$8,400/year total across teams. Each head coach receives Assist output independently. He uses Assist for his own U19 team and manages the system administratively for others. Primary concern: two of his six coaches don't use Assist output consistently — they revert to watching raw film because they don't trust the Assist tagging accuracy for soccer-specific events.
Adopted Assist after two years of manual two-pass tagging. Framed the switch as "I hit a wall — I couldn't keep doing it." Current workflow: uploads game film from his phone immediately after the game. Assist breakdown arrives by morning. Reviews on iPad over breakfast. Sends player playlists directly. Total time: ~30 minutes per game. Before Assist: 3–4 hours per game across two sessions.
| Finding | Implication for Design |
|---|---|
| All 4 still use base platform weekly | Assist reduces but does not eliminate base platform use. A base-platform improvement has value even for Assist subscribers. |
| Opponent film is universally manual | Assist covers own-game breakdown only. Opponent scouting is a guaranteed two-pass workflow for every Assist subscriber, every week. |
| Intent-based clips are a universal gap | Assist generates player-based playlists, not theme-based packages. Coaches still build thematic clip packages manually — this is the most time-consuming remaining work. |
| Soccer tagging accuracy concerns | Sport specificity limits Assist trust for continuous-play sports. Custom tag taxonomy may be more valuable than automated tagging for these use cases. |
| Price barrier at program level | Individual coaches justify Assist cost personally; AD approval is a separate hurdle. Free base-platform equivalent removes the barrier entirely at the program level. |
Platforms under review: Hudl (primary — base platform and Assist service tier), QwikCut, Wyscout, Nacsport, Veo, SportsVisio, Track160, and Catapult Video. A key finding already established: Hudl's own answer to the two-pass problem is Hudl Assist — a paid add-on service (~$900–$3,300/year depending on tier) where trained human analysts tag your games and deliver stats plus automatic player clip playlists within hours. As of early 2026, AI auto-tagging via Balltime AI is available for club volleyball only. This makes Assist a direct comparison point for the redesign, not just a feature note — Hudl's strategy was to sell a service layer rather than fix the interface, which is a meaningful product decision to interrogate.
The competitive analysis specifically maps how each platform positions itself relative to this gap. SportsVisio and Playbook Sports are explicitly marketing against coach workflow complexity, targeting programs without video coordinators — the same population this project serves. Also reviewed: multi-attribute tagging patterns from non-sports domains including video annotation tools (Encord, CVAT, V7 Labs), qualitative research platforms (MAXQDA, NVivo), and professional editing software. Current-state journey map of Hudl's base workflow documented step by step, with Assist's service-model flow as a parallel reference.
This matrix evaluates eight platforms across the criteria most relevant to the design problem: single-pass tagging capability, clip creation workflow, player delivery model, mobile viability, pricing accessibility, and how each positions itself relative to workflow complexity. Ratings are based on documented product features (public documentation, App Store listings, review data, and direct testing where access was available) verified as of March 2026. Assist is listed as a separate row from Hudl Base to surface the service-layer vs. interface distinction that is central to this project.
Ratings use a three-point scale: ●● Full = capability exists and works well, ● Partial = capability exists with significant friction or limitations, ○ None = capability absent or negligible.
| Platform | Single-Pass Tag + Clip | Player Clip Delivery | Custom Tag Types | Mobile Tagging | Stat Analytics | Entry Price | Target User |
|---|---|---|---|---|---|---|---|
| Hudl Base | ○ None | ● Partial | ●● Full | ○ None | ●● Full | ~$500–800/yr | HS & club programs |
| Hudl Assist | ●● Full (outsourced) | ●● Full | ○ None | ●● Full (upload only) | ●● Full | +$900–3,300/yr | Programs with budget |
| QwikCut | ● Partial | ●● Full | ● Partial | ● Partial | ● Partial | ~$400–600/yr | HS programs, football-heavy |
| Wyscout | ●● Full | ● Partial | ●● Full | ● Partial | ●● Full | ~$2,000+/yr | Pro & semi-pro scouts |
| Nacsport | ●● Full | ● Partial | ●● Full | ○ None | ●● Full | ~$600–1,200/yr | Tactically sophisticated coaches |
| Veo | ○ None | ● Partial | ○ None | ●● Full | ● Partial | ~$1,200/yr + camera | Programs wanting auto-capture |
| SportsVisio | ● Partial | ●● Full | ● Partial | ●● Full | ● Partial | ~$300–500/yr | Budget-conscious programs |
| Catapult Video | ●● Full | ●● Full | ●● Full | ● Partial | ●● Full | Enterprise pricing | Collegiate & professional |
To identify interface patterns for multi-attribute simultaneous annotation, the following non-sports tools were reviewed for tagging UX patterns that could inform the redesign:
| Tool | Domain | Relevant Pattern | Applicability to Hudl Redesign |
|---|---|---|---|
| Encord | AI video annotation | Frame-level multi-label tagging panel. Single click opens a floating panel where multiple attributes are assigned simultaneously before confirming. | High — direct model for unified moment tagging |
| CVAT | Computer vision training data | Keyboard-shortcut-driven annotation with persistent attribute panels. Tag types are pre-configured per project; selection requires only keypress, not mouse navigation. | High — keyboard-first tagging model for experienced coaches |
| MAXQDA | Qualitative research coding | Segment selection → code assignment → memo creation in a single drag action. The "code and memo" paradigm parallels "clip and tag" closely. | Medium — memo model maps to coaching note attachment |
| DaVinci Resolve | Professional video editing | Marker system: single keyboard shortcut drops a color-coded marker with a text field and duration. Multiple marker types can be pre-assigned to keys. | Medium — marker-to-clip model worth adapting for coach context |
| Spotify (playlist UX) | Music / consumer | "Add to playlist" available from any context menu without leaving current view. Multiple playlist assignments in a single interaction. | Medium — playlist multi-assignment UX pattern directly applicable |
| Notion | Knowledge management | Multi-select tag assignment with type-ahead search. Tags are properties of an item, not separate classifications — all applied in the same action. | Low-Medium — property model relevant for tag taxonomy design |
This document maps the complete post-game film workflow for Hudl Base (manual, current state), Hudl Assist (service-mediated, current state), and the proposed Redesigned Interface (target state). All three are mapped at the same level of granularity — individual steps the coach takes or does not take — so that time savings, mode-switches, and decision points can be compared directly. Step counts and time estimates are based on interview data and direct platform testing; they represent the median experienced coach, not a first-time user.
TOTAL ESTIMATED TIME: 3–4 hours per game · PASSES THROUGH FILM: 2 minimum · MODE SWITCHES: 4–6
| # | Step | Where | Est. Time | Pain Level |
|---|---|---|---|---|
| 01 | Upload game film from camera/SD card or capture device | Hudl desktop app or browser | 5–20 min | Low |
| 02 | Wait for processing and transcoding to complete | Hudl (passive wait) | 15–45 min | Low |
| 03 | Open film in tagging view. Configure tag panel — select active tag types for this game. | Hudl tagging interface | 5–10 min | Medium |
| 04 | PASS 1 — STAT TAGGING: Play film. At each relevant moment, pause, press tag key or click tag button, assign event type and player attribution, resume playback. Repeat for full game. | Hudl tagging interface | 60–90 min | High — fatiguing, interrupts flow |
| 05 | Review and correct tagging errors. Re-watch ambiguous moments. | Hudl event timeline | 10–20 min | Medium |
| 06 | MODE SWITCH: Exit tagging view. Navigate to clip-cutting interface (different tool mode). Re-orient to film controls in new context. | Hudl clip editor | 2–5 min | High — disorienting context switch |
| 07 | PASS 2 — CLIP CUTTING: Play film from beginning (again). At each moment to clip, set in-point, set out-point, name clip, assign to player playlist(s). Repeat for all desired clips. | Hudl clip editor | 60–90 min | Very high — full film re-watch required |
| 08 | Assign completed clips to team and individual player playlists. Set sharing permissions. | Hudl playlists | 10–15 min | Medium |
| 09 | Share playlists with players via Hudl notification or external link. | Hudl share tools | 5 min | Low |
TOTAL COACH TIME: 30–60 min per game · PASSES THROUGH FILM BY COACH: 0–1 (review only) · ADDITIONAL COST: $900–3,300/yr
| # | Step | Where | Est. Time | Pain Level |
|---|---|---|---|---|
| 01 | Upload game film (same as Base workflow) | Hudl app or browser | 5–20 min | Low |
| 02 | Submit film to Assist queue. No configuration required. | Hudl Assist dashboard | 1–2 min | Low |
| 03 | Hudl analyst team tags all events, assigns player attributions, generates stats. Coach does nothing during this step. | Hudl analyst team (offsite) | 6–24 hr (async) | None — coach is asleep |
| 04 | System auto-generates per-player clip playlists from analyst-tagged events. All 28 players receive individualized highlight packages. | Hudl automated | Automated (async) | None |
| 05 | Coach receives notification. Reviews Assist breakdown: stats dashboard, player playlists, event timeline. | Hudl Assist dashboard | 20–40 min | Low |
| 06 | Distribute player playlists. Players notified automatically in Hudl app. | Hudl automated | 1 min (review + confirm) | Low |
| 07 | REMAINING MANUAL WORK: Any thematic clip packages, opponent scouting, recruiting reels, or practice install sequences must still be built manually in the base platform — with full two-pass workflow. | Hudl base platform | 1–3 hr/week (still) | High — same friction as Base workflow |
PROJECTED TOTAL TIME: 60–90 min per game · PASSES THROUGH FILM: 1 · MODE SWITCHES: 0 · ADDITIONAL COST: None (base platform feature)
| # | Step | Where | Est. Time | Improvement |
|---|---|---|---|---|
| 01 | Upload game film (same as Base) | Hudl desktop or app | 5–20 min | No change |
| 02 | Open film in unified tagging view. Persistent side panel shows tag types, clip controls, and playlist destinations simultaneously — no mode switching required. | Redesigned unified interface | 1–2 min | Eliminates 5–10 min setup |
| 03 | SINGLE PASS — UNIFIED TAGGING: Play film. At each relevant moment, trigger a tag event (key or click). A unified panel opens inline: assign stat type, set clip in/out points relative to current position, assign to one or more player playlists — all before releasing the pause. Confirm. Resume playback. | Redesigned unified interface | 60–90 min | Eliminates second pass entirely (saves 60–90 min) |
| 04 | Review tagged moments in event timeline. Clips and playlists are already populated — no second pass required. Adjust clip bounds or playlist assignments if needed. | Redesigned event timeline | 5–15 min | Review only, not re-watch |
| 05 | Publish playlists to players with one action. Stats dashboard populated simultaneously from same tagging session. | Redesigned share panel | 2–5 min | Eliminates separate sharing step |
| Dimension | Hudl Base | Hudl Assist | Proposed Redesign |
|---|---|---|---|
| Film passes required | 2 minimum | 0 (coach), 1 (analyst) | 1 |
| Coach time per game | 3–4 hours | 30–60 min | 60–90 min |
| Mode switches | 4–6 | 0–1 | 0 |
| Additional cost | None | $900–3,300/yr | None |
| Custom tag types | Yes | No (analyst-defined) | Yes |
| Opponent scouting | Manual (full 2-pass) | Manual (full 2-pass) | Single-pass |
| Thematic clip packages | Manual | Manual | Inline during tagging |
| Solves for non-Assist users | No | N/A (requires payment) | Yes — primary target |
| Adds value for Assist users | N/A | Partial (opponent film still manual) | Yes — opponent scouting + thematic clips |
This journey map traces the full post-game film workflow for the Teacher-Coach persona (Marcus, Persona 01) from final whistle to player delivery. It is mapped across five dimensions for each phase: the action the coach takes, the tool or interface used, the thoughts reported in interviews, the emotional state at that point, and any friction points encountered. The map is the current state — it describes what actually happens, not what Hudl intends to happen.
The map uses a five-point emotion scale: ▲▲ Energized · ▲ Positive · — Neutral · ▼ Frustrated · ▼▼ Depleted
Phase 01: Post-game (parking lot)

| Dimension | Detail |
|---|---|
| Actions | Coach shakes hands, debriefs with assistant coaches, talks to parents, drives home (30–60 min). Camera is packed up by a player or manager during this time. |
| Tools | Physical camera or capture device. No Hudl yet. |
| Thoughts | "We need to fix that defensive shape." "Jordan had a great game." "I should note what happened in the 67th minute before I forget it." Mental film is running — specific moments are sharp right now. |
| Emotion | ▲ Positive (win) or — Neutral/▼ Frustrated (loss). Highest recall window of the entire workflow — coach's mental model of the game is most complete at this moment. |
| Friction | None yet. The problem is that this high-recall window is not actionable — Hudl can't be accessed on a phone in the parking lot at full functionality. This is a lost opportunity the workflow never recovers. |
Phase 02: Upload wait

| Dimension | Detail |
|---|---|
| Actions | Connect camera to laptop. Open Hudl. Upload video file. Wait for processing. Eat something. Check phone. Processing time varies 15–45 min for a 90-min game. |
| Tools | Hudl desktop browser. Personal laptop (usually 3–5 years old, Windows). |
| Thoughts | "I hope this uploads before midnight." "Why does this always take so long." Mental sharpness of post-game fades during the wait. 3 of 5 coaches reported checking social media during upload waits. |
| Emotion | — Neutral, trending toward ▼ Frustrated as wait extends. Energy declining from game-day high. |
| Friction | Upload time is a dead zone — coach cannot begin work but also cannot step away. For coaches without dedicated home office space, this often means sitting at a kitchen table with a laptop while family is nearby. |
Phase 03: Stat tagging (Pass 1)

| Dimension | Detail |
|---|---|
| Actions | Open tagging interface. Configure tag panel for game type and team. Play film from 0:00. Pause at each event, click or press tag key, select event category from hierarchical menu, assign player, resume. Repeat ~40–80 times for a soccer match. |
| Tools | Hudl tagging interface (browser). Mouse + keyboard. Occasionally notes on paper nearby for moments to return to. |
| Thoughts | "I need to get through the whole game." "Should I tag this as a shot or a chance?" "I already know I want to clip this moment — but I can't do that right now." "I'll remember to clip the 23-minute moment later." (He won't.) |
| Emotion | ▼ Frustrated by interface complexity. Mental energy at 30–40% of game-day level (10pm+). Tag quality visibly degrades in second half of film for 4 of 5 coaches. |
| Friction | Primary friction point. The tag category hierarchy requires 3–5 clicks per event. Player attribution requires a separate step. There is no way to simultaneously note "I want to clip this." 3 of 5 coaches write a separate list of clip timestamps on paper as a manual workaround. |
Phase 04: Decision point

| Dimension | Detail |
|---|---|
| Actions | Coach completes (or gives up on) stat tagging pass. Evaluates remaining energy. Decides whether to attempt clip cutting tonight or defer to morning. No interface action during this phase — it is a purely internal decision. |
| Tools | None (decision point) |
| Thoughts | "I've been doing this for two hours. I'm not going to cut good clips right now." "If I wait until morning I might not get to it." "The kids can just watch the whole film if I don't get to clips." "I'll set my alarm for 6am." (Sometimes he does. Sometimes he doesn't.) |
| Emotion | ▼▼ Depleted. This is the lowest point in the journey. The coach is aware of the gap between what they wanted to deliver to players and what is actually going to happen. Guilt is a frequently reported emotion at this stage. |
| Friction | This decision point is where clip delivery fails. The interface's two-pass structure forces this choice. A single-pass interface eliminates this decision entirely — clips exist by the time Pass 1 is complete. |
Phase 05: Clip cutting (Pass 2)

| Dimension | Detail |
|---|---|
| Actions | Open Hudl. Navigate to clip editor (different interface mode from tagging). Play film from beginning — again. Find moments to clip by memory, paper notes, or scrolling through tagged events. Set in-point. Set out-point. Name clip. Assign player playlist(s). Repeat. |
| Tools | Hudl clip editor (browser). This is a distinct interface mode from the tagging interface — different layout, different controls, different mental model. |
| Thoughts | "What was the moment I wanted to clip from the first half — 23 minutes? Let me scrub back." "Wait, did I already clip this?" "I don't remember exactly where that defensive breakdown was." Re-watching moments already watched in Pass 1 is universal at this stage. |
| Emotion | ▼ Frustrated. This entire pass feels redundant. The coach knows they already watched this film. The re-watching is experienced as wasted time, not a coaching activity. |
| Friction | The mode switch between tagging and clip editing is disorienting. Keyboard shortcuts change. Layout changes. Timeline scrubbing works differently. 3 of 5 coaches described the clip editor as feeling "like a different product." The paper notes from Pass 1 are frequently lost, incomplete, or missed, requiring additional re-watching to locate moments. |
Phase 06: Delivery

| Dimension | Detail |
|---|---|
| Actions | Assign clips to playlists. Share playlists via Hudl notification. Or: send a group text with a link to the full game film and timestamps typed manually. Or: do nothing and show full film in team meeting. |
| Tools | Hudl sharing tools, or SMS/WhatsApp (workaround) |
| Thoughts | "At least it's out." "They'll figure out the timestamps." "I wish I had time to do this properly." High sense of inadequacy relative to what coaches feel they should be delivering to players. |
| Emotion | — Neutral to ▼ Resigned. Relief that it's done. Dissatisfaction with quality of delivery in 4 of 5 cases. |
| Friction | Even when clips are cut, the playlist assignment UI requires multiple steps to add each clip to each player — no bulk assignment or smart grouping. For a 22-player roster with 3–5 individual clips each, this can take 20+ minutes of menu navigation. |
| Phase | Emotion | Opportunity |
|---|---|---|
| Post-game (parking lot) | ▲ Energized, high recall | Highest leverage — mobile capture of intentions not currently supported |
| Upload wait | — Neutral, fading | Low — dead time, not actionable |
| Stat tagging (Pass 1) | ▼ Frustrated, fatiguing | Primary redesign target — eliminate mode switches and enable parallel clip creation here |
| Decision point | ▼▼ Depleted, guilty | Eliminate by making Pass 2 unnecessary |
| Clip cutting (Pass 2) | ▼ Frustrated, redundant | Eliminate entirely with unified interface |
| Delivery | — Resigned | Improve quality by ensuring clips always exist at delivery time |
This audit catalogs every discrete friction point identified in the Hudl base platform workflow, organized by severity and type. Friction points were identified through three methods: direct coach reporting in interviews (11 participants), structured observation of the current tagging and clip editing interfaces during recorded walkthroughs, and cross-referencing with public review data (G2, App Store, coaching forum threads). Each friction point includes a severity rating, an observed frequency, and the specific design implication for the redesign.
Severity uses a three-tier scale: Critical = blocks task completion or forces workflow abandonment, Significant = causes meaningful time loss or quality degradation, Minor = creates friction but does not derail the task.
| Friction Point | Observed In | Impact | Design Implication |
|---|---|---|---|
| Mandatory second pass for clip creation | 11 of 11 participants | Adds 60–90 min per game; 3 of 11 coaches abandon clip pass entirely at least once per season | Primary target of the redesign — unified single-pass tagging |
| Mode switch between tagging and clip editing | 11 of 11 participants | Disorienting context switch; 3 coaches described clip editor as "a different product"; keyboard shortcuts and layout change between modes | Eliminate the mode distinction — tagging and clipping must happen in the same interface state |
| No mobile tagging capability | 11 of 11 participants attempted; 0 succeeded | Eliminates the post-game high-recall window entirely; all tagging deferred to laptop hours later | Mobile-viable single-pass tagging is a meaningful differentiator; even lightweight mobile capture (intent flagging) would recover the high-recall window |
| Stat tag and clip cannot reference the same moment simultaneously | 11 of 11 (by definition of current architecture) | Forces coaches to watch the same moment twice — once in each pass — to apply both types of annotation; defeats the purpose of having both features in one platform | The core architectural constraint the redesign must eliminate: a tagged moment and a clip must be the same object, not two separate records |
| Friction Point | Observed In | Impact | Design Implication |
|---|---|---|---|
| Tag category hierarchy requires 3–5 clicks per event | 8 of 11 participants | Interrupts playback rhythm; at 60+ tagging events per game, total added overhead is 3–8 min; coaches simplify or skip subcategories to reduce clicks | Single-action tag assignment — keyboard shortcut, touch target, or smart default — must reduce to 1–2 interactions per event maximum |
| Player attribution is a separate step from event type | 9 of 11 participants | Coaches frequently skip player attribution when fatigued; stat data loses value without player association; some coaches abandon attribution entirely after the first half | Player attribution must be inline with event tagging — not a sequential step — with smart defaults based on recent tags and roster composition |
| No way to flag "I want to clip this" during tagging pass | 7 of 11 participants used paper workaround | Coaches who want to clip specific moments must write timestamps on paper during Pass 1; paper notes are frequently lost, incomplete, or misread during Pass 2 | A clip-intent flag (one key or tap) attached to any tag event would eliminate this; even without immediate clip creation, flagging during Pass 1 removes the memory burden from Pass 2 |
| Playlist assignment requires individual clip-by-clip actions | 6 of 11 participants | For a 22-player roster: adding a moment to 5 individual player playlists requires 5 separate interactions per clip; scales poorly; coaches skip multi-player assignment | Bulk playlist assignment or smart grouping by player attribution — if a moment is tagged to Jordan, it should default-assign to Jordan's playlist — reduces 5 interactions to 1 |
| Video buffering during tagging breaks momentum | 5 of 11 participants | Unexpected pauses reset tagging rhythm; coaches lose track of position; 2 coaches described re-watching moments due to buffer-induced disorientation | Not a direct interface design issue, but preloading the next 30 seconds of film during an active tag entry (when playback is paused anyway) would reduce perceptible buffering |
| Clip in/out point precision requires frame-by-frame scrubbing | 7 of 11 participants | Fine-grained clip trimming requires repeated back-and-forth scrubbing; for coaches with low manual dexterity on a laptop trackpad, precision is very difficult; clips are often longer than intended | Smart clip bounds that auto-set to ±5 seconds around a tagged moment as a default (adjustable) would eliminate the precision problem for the majority of use cases |
Minor friction points:
| Friction Point | Observed In | Design Implication |
|---|---|---|
| Tag panel configuration must be repeated each session | 6 of 11 | Saved per-sport or per-team tag configurations, auto-loaded on film open |
| Clip naming requires typed text field | 5 of 11 | Auto-name from event type + player attribution + timestamp as a default; coach edits only when needed |
| No dark mode for late-night use | 4 of 11 mentioned | Dark mode is a near-zero-cost addition with meaningful UX value for the 10pm–midnight use case |
| Keyboard shortcuts not visible without documentation lookup | 5 of 11 | Persistent shortcut overlay or tooltip system; especially important for keyboard-first tagging model |
| Session state lost on browser tab close or accidental navigation | 3 of 11 | Auto-save of in-progress tag sessions; recovery prompt on return to interrupted session |
| Sharing permissions require per-clip configuration | 4 of 11 | Default sharing rules configurable at team level; override per clip only when needed |
The following friction points were identified but fall outside the scope of this interface redesign — either because they require infrastructure changes, are business model decisions, or are out of the designer's control:
This document catalogs interface patterns from within and outside the sports domain that are directly relevant to the design problem. Each pattern is evaluated for its applicability to the specific context of the Hudl redesign — late-night, fatigued, time-pressured coaches working on aging laptops with film playing in the background. Patterns that work perfectly in a calm professional setting may be inappropriate here. The evaluation accounts for cognitive load, interaction speed, error recovery, and interruption tolerance.
Patterns are organized into four categories: Tagging & Annotation, Clip & Media Management, Playlist & Delivery, and Navigation & Orientation. Each includes a source reference, description of the pattern, applicability rating, and specific adaptation notes for the Hudl context.
Tagging & Annotation patterns:
| Pattern | Source | Description | Applicability | Adaptation Notes |
|---|---|---|---|---|
| Unified Attribute Panel | Encord, CVAT, Label Studio | A single modal or sidebar panel, triggered when a mark is placed, in which all properties of that annotation (category, attributes, assignee, metadata) are set before confirming. No property requires a separate interaction flow. | Critical — direct model | Panel must be keyboard-navigable without mouse. Default values for every field (most-recent-used) to minimize required inputs. Escape key must always cancel safely. Panel must not block the video frame being tagged. |
| Keyboard-Shortcut-First Tagging | CVAT, DaVinci Resolve, professional NLEs | Each tag type is pre-assigned to a keyboard key. A single keypress during playback drops a tag of that type at the current timestamp — no mouse interaction or menu navigation required. Tag attributes are filled via keyboard navigation. | High — for experienced coaches | Must be configurable — coaches set their own shortcut map to match their mental model of their sport. A visual shortcut reference overlay (toggleable) is essential for onboarding. Must fall back gracefully to click-based for coaches who don't want to learn shortcuts. |
| Inline Tag Editing on Timeline | Adobe Premiere Pro, Final Cut Pro | Tags/markers visible on a timeline scrubber below the video. Clicking a marker opens an inline edit panel at that position — no navigation away from the timeline required. Drag to adjust timing. | Medium — for review and correction | Essential for Pass 1 review (Step 04 in current workflow). Coach should be able to correct a missed attribution or wrong event type by clicking the marker without re-watching the moment. Inline editing reduces the correction loop from a multi-step navigation to a single click. |
| Type-Ahead Tag Search | Notion, Linear, GitHub Issues | Typing the first 2–3 characters of a tag name filters a dropdown to matching options. Common in knowledge management and project management tools where tag sets are large and custom. | Medium — for custom tag sets | Particularly useful for coaches with large custom taxonomies (e.g., "press trigger," "high block," "switch of play"). Reduces 3–5 menu clicks to 2–3 keystrokes. Must be fast — noticeable latency during playback-paused tagging is unacceptable. |
| Smart Default from Context | Gmail smart reply, Linear auto-assign | System predicts the most likely next action based on recent activity and surfaces it as the default. Coach accepts or overrides. | Medium — reduces decision fatigue | If the last 5 tags have been attributed to Player 7, the next tag defaults to Player 7. If the last event type was "shot," the next event suggestion is "shot" until changed. Reduces the number of explicit inputs required per tag event — directly addresses the late-night fatigue-degradation problem identified in research. |
Clip & Media Management patterns:
| Pattern | Source | Description | Applicability | Adaptation Notes |
|---|---|---|---|---|
| Auto-Clip on Tag Confirmation | Sportscode, Nacsport | When a tag event is confirmed, the system automatically creates a clip spanning a configurable window around that timestamp (e.g., ±5 seconds, ±10 seconds). Coach adjusts only when needed. | Critical — core concept | Default clip window must be configurable at the tag-type level (a shot clip needs tighter bounds than a transition clip). Smart auto-clip eliminates the need for Pass 2 clip cutting in the vast majority of cases. Coach reviews and adjusts outliers rather than cutting all clips from scratch. |
| Non-Destructive Clip Bounds | Final Cut Pro, Avid, professional NLEs | Clip in/out points are references to the source file, not destructive cuts. Adjusting bounds after the fact does not require re-rendering or re-processing — it just changes the reference frame numbers. | High — essential for auto-clip model | If clips are auto-created at tag confirmation, coaches must be able to adjust bounds after the fact without penalty. This is standard in professional editing but not always true in consumer sports tools. The design specification must confirm with development that this is the architectural approach. |
| Clip Preview on Hover | YouTube, Vimeo, Netflix scrubber | Hovering over a timeline position or a clip thumbnail shows a short preview of that content without committing to navigation. | Medium — for review phase | Particularly useful in the clip review step (replacing Pass 2) — coach can verify clips exist and are correctly bounded by hovering along the event timeline, without needing to play each one. Reduces review time significantly for sessions with 20+ tagged moments. |
| Lasso/Range Selection | Lightroom, Finder, most file managers | Drag to select a range of items. Apply an action to all selected items simultaneously. | Medium — for batch operations | Useful for selecting all shots from the first half, or all defensive breakdown moments, and assigning them to a playlist in one action. Reduces the per-clip-per-playlist overhead identified in the friction audit (significant friction point: playlist assignment). |
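A minimal sketch of the auto-clip pattern above, assuming a system-wide default window with optional per-event-type overrides. The window values are placeholders; the document settles on a ±8 second system default later in the Decision Notes.

```typescript
// Sketch of auto-clip bounds computed at tag confirmation. The default window
// and the per-event-type overrides are placeholder values, not a spec.

const DEFAULT_WINDOW_SEC = 8;

// Optional per-event-type overrides, configurable in settings.
const eventWindows: Record<string, { before: number; after: number }> = {
  shot: { before: 6, after: 4 },         // tighter bounds for fast events
  transition: { before: 10, after: 10 }, // more context for phase-of-play events
};

function autoClipBounds(
  tagTimestampSec: number,
  eventType: string,
  filmDurationSec: number,
): { inSec: number; outSec: number } {
  const window =
    eventWindows[eventType] ?? { before: DEFAULT_WINDOW_SEC, after: DEFAULT_WINDOW_SEC };
  return {
    inSec: Math.max(0, tagTimestampSec - window.before),               // clamp to film start
    outSec: Math.min(filmDurationSec, tagTimestampSec + window.after), // clamp to film end
  };
}
```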
Playlist & Delivery patterns:
| Pattern | Source | Description | Applicability | Adaptation Notes |
|---|---|---|---|---|
| Attribution-Driven Auto-Assign | Hudl Assist (service model) | If a moment is tagged with a player attribution, it is automatically added to that player's playlist without additional action from the coach. | Critical — direct adaptation of Assist model | This is exactly what Assist does with analyst-tagged events — and what the base platform should do with coach-tagged events. Player attribution at tag time → automatic playlist population. Coach override to exclude or add additional players. This single pattern eliminates the majority of post-tagging playlist management. |
| Add to Multiple Playlists from Context Menu | Spotify, Apple Music, YouTube | Right-click or long-press on any item surfaces an "Add to playlist" option that allows selection of one or more playlists simultaneously — without leaving the current view. | High — for manual multi-assign | For moments that need to be in a team playlist AND an individual player playlist AND a "recruiting reel" playlist, the coach should be able to multi-select all three destinations in one interaction from the context menu. Must not require navigating to a playlist management screen. |
| Smart Playlist (Rule-Based) | Apple Music smart playlists, Lightroom smart albums | A playlist that automatically populates based on tag criteria. "All shots by Jordan from this season" or "All defensive breakdowns tagged as high-severity" populate themselves as new tagged events match the rule. | Medium — advanced but high value | Would allow coaches to define "Jordan's recruiting reel" as a smart playlist that accumulates automatically over the season — no manual curation required. Directly addresses the recruiting reel pain point identified in Club Coach (Daniela) interviews. Scope note: this is a stretch goal for the wireframes, not a core requirement. |
| One-Tap Publish | Instagram, TikTok | A single action publishes all prepared content to all designated recipients simultaneously. No per-recipient, per-item confirmation required. | Medium — for delivery step | Once tagging is complete and playlists are populated, "Publish all playlists" should be a single action that notifies all players with their individual packages simultaneously. Currently requires navigating to each playlist and publishing separately. |
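The attribution-driven auto-assign pattern reduces to a small derivation: the playlist set is computed from the tag's attribution plus a team default, and presented pre-checked. A hedged sketch, with all names hypothetical:

```typescript
// Sketch of attribution-driven playlist assignment: the default playlist set
// is derived from the tag itself and the coach only overrides exceptions.
// All names are hypothetical.

interface PlaylistDefaults {
  teamPlaylistId: string;
  playerPlaylistIds: Map<string, string>; // playerId -> that player's playlist
  includeTeamByDefault: boolean;          // team playlist checked by default
}

function defaultPlaylists(
  attributedPlayerIds: string[],
  defaults: PlaylistDefaults,
): string[] {
  const playlists = attributedPlayerIds
    .map((playerId) => defaults.playerPlaylistIds.get(playerId))
    .filter((id): id is string => id !== undefined);
  if (defaults.includeTeamByDefault) {
    playlists.push(defaults.teamPlaylistId);
  }
  return playlists; // presented pre-checked; the coach unchecks rather than checks
}
```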
Navigation & Orientation patterns:
| Pattern | Source | Description | Applicability | Adaptation Notes |
|---|---|---|---|---|
| Persistent Progress Indicator | Calm, Headspace, long-form reading apps | A persistent indicator showing percentage of content reviewed. Reduces anxiety about "how much is left" and provides a reference point for returning after an interruption. | Medium — for interruption recovery | Context: coaches frequently step away mid-session. A persistent "you are at 34:22 of 91:00 — 67% complete — 14 events tagged" summary panel would allow rapid re-orientation after a family interruption. Currently the coach must remember or estimate where they left off. |
| Session State Auto-Save | Google Docs, Notion, all modern web apps | All work is saved continuously with no explicit save action required. On return after an interruption or browser close, all state is restored exactly. | High — essential for interruption-heavy context | Research found 3 coaches had lost tagging work due to accidental navigation or browser closure. Auto-save of in-progress tags, current playback position, and panel state is a baseline requirement for the context of use documented in Step 02. |
| Jump-to-Tag Navigation | Video annotation tools, subtitle editors | A list of all tagged events in chronological order. Clicking any event jumps the playhead to that moment instantly. Keyboard shortcuts for "next tag" and "previous tag" navigate between events. | High — for review phase | Replaces the need to scrub through film to find previously tagged moments during review. Coach can cycle through all 40 tagged events in the review step without any re-watching — just jump, inspect, adjust if needed, next. Fundamental to making the review step fast enough to stay in one pass. |
| Undo / Redo for Tag Actions | Every professional application | Standard undo/redo for any tag creation, deletion, or attribute change. Keyboard shortcut (Cmd+Z / Ctrl+Z) universal. | High — error recovery essential for fatigued users | Currently Hudl's undo behavior for tag actions is inconsistent. Late-night fatigued coaches make more attribution errors and need fast, reliable recovery. Undo must work for: event type changes, player attribution changes, clip bound adjustments, and playlist assignments. Not just clip creation/deletion. |
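For the auto-save pattern, a minimal client-side sketch under the assumption that in-progress state is snapshotted to local storage on every change; the storage key and state shape are assumptions, and a production version would also persist server-side.

```typescript
// Minimal sketch of client-side session auto-save. Storage key and state shape
// are assumptions; a production version would also persist server-side.

interface TaggingSessionState {
  gameId: string;
  playbackPositionSec: number;
  inProgressTag: Record<string, unknown> | null; // overlay contents, if open
  taggedMomentIds: string[];
  savedAt: string; // ISO timestamp
}

const STORAGE_KEY = "tagging-session"; // hypothetical key prefix

function saveSession(state: TaggingSessionState): void {
  localStorage.setItem(
    `${STORAGE_KEY}:${state.gameId}`,
    JSON.stringify({ ...state, savedAt: new Date().toISOString() }),
  );
}

function recoverSession(gameId: string): TaggingSessionState | null {
  const raw = localStorage.getItem(`${STORAGE_KEY}:${gameId}`);
  return raw ? (JSON.parse(raw) as TaggingSessionState) : null;
}
```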
The following patterns were considered and rejected for the Hudl redesign context, with rationale:
Core design question: how do you present stat tagging, clip cutting, and playlist assignment as a single unified interaction rather than three sequential modes — without requiring a paid analyst service to do it for you? Initial concepts explored a persistent side panel, a trigger-and-expand overlay, a timeline-first layout, and a floating HUD. The overlay model (Sketch B) was selected as the primary concept: video fills the full frame during playback; a bottom-anchored overlay appears only when a tag is triggered, keeping the coach's view unobstructed at all other times.
A speculative second thread explores a non-intrusive AI suggestion layer — surfacing detected moments as dismissible badges rather than auto-tagging, preserving full coach agency while reducing the cognitive load of watch-and-tag simultaneously. This thread is clearly marked speculative throughout and is informed by Hudl's Balltime AI acquisition (February 2025), which currently supports club volleyball only. All wireframes produced in Figma at 1280×800px.
Concept sketches are the earliest-stage design exploration — rough, fast, and deliberately uncommitted. The goal is to generate a wide range of structural approaches to the unified tagging problem before any layout decisions become load-bearing. All sketches were produced in Figma using a loose wireframe component set at 50% fidelity: blocks for UI regions, placeholder text for labels, no color beyond functional annotation. Each sketch took 20–40 minutes to produce. Refinement happens in the lo-fi wireframe stage (Deliverables 2–4), not here.
Sketching constraints: every concept must support the core single-pass requirement (stat tag + clip creation + playlist assignment in one pause of playback), must not require a mode switch, and must remain usable on a 13" laptop screen by a fatigued coach at 11pm. These constraints ruled out several promising patterns early — documented in the Annotated Decision Notes (Deliverable 5).
Sketch A · Persistent Side Panel
Concept: The video player occupies 65% of the screen width. A persistent right-side panel (35%) shows the active tag panel, event timeline, and playlist summary simultaneously at all times — no overlay, no modal, always visible. Tagging happens in the panel while video plays in the left pane. Clip in/out controls appear inline in the panel when a tag is placed.
| Dimension | Assessment |
|---|---|
| Single-pass capability | ✓ Full — tag, clip, playlist all visible and actionable without leaving panel |
| Mode switches required | ✓ Zero — panel is persistent, state is always available |
| Screen real estate | ⚠ Tight on 13" laptop — 65% video width may feel cramped for detailed play review |
| Cognitive load | Medium — panel always visible means always-present decisions; could feel overwhelming for low-tech users |
| Tech-reluctant veteran risk | ⚠ Moderate — panel is always visible even when not tagging; may intimidate non-taggers |
| Verdict | Advance to lo-fi with modifications — panel should collapse to minimal state when inactive |
Sketch B · Trigger-and-Expand Overlay
Concept: Video fills 100% of the screen. No persistent panel. When a tag is triggered (spacebar or T key), playback pauses automatically and a compact overlay panel expands from the bottom edge of the video frame. Coach completes all tag properties in the overlay, confirms, and the overlay collapses. Video resumes. Nothing is visible during playback except a minimal event count badge.
| Dimension | Assessment |
|---|---|
| Single-pass capability | ✓ Full — all properties in overlay before confirmation |
| Screen real estate | ✓ Excellent — full video when playing; overlay only when needed |
| Mode switches required | ✓ Zero — overlay is same interface state, not a new mode |
| Cognitive load | Low during playback; moderate during tag entry — appropriate for context |
| Tech-reluctant veteran risk | ✓ Low — video experience is completely clean when not tagging |
| Discoverability | ⚠ Keyboard trigger not obvious — overlay trigger must be surfaced clearly on first use |
| Verdict | Primary concept for lo-fi wireframe development. Best balance of all constraints. |
Sketch C · Timeline-First Layout
Concept: Inverts the hierarchy — the event timeline is the primary interface element at the top of the screen (25% height), with the video player below it (75%). Tagging happens by clicking on the timeline at a timestamp rather than by triggering during playback. Clip bounds are dragged directly on the timeline. Playlist assignment happens via a tag-properties panel that appears when a timeline event is selected.
| Dimension | Assessment |
|---|---|
| Single-pass capability | ⚠ Partial — suits post-game review and correction but not real-time tagging during playback |
| Review phase UX | ✓ Excellent — timeline-first is ideal for reviewing and adjusting already-tagged events |
| Real-time tagging | ✗ Poor — clicking on a timeline during live playback requires stopping and navigating; breaks flow |
| Clip editing | ✓ Excellent — drag-to-set bounds on timeline is far more precise than numeric fields |
| Verdict | Not the primary tagging concept — but the timeline-first layout is the best model for the review phase. Incorporate as the post-tagging review state inside Sketch B's flow. |
Sketch D · Floating HUD
Concept: A small, always-visible floating heads-up display positioned in the lower-left corner of the video — similar to a game HUD. Shows only the most recently triggered tag type and a confirmation button. Designed for keyboard-heavy users who don't want to leave the video view at all. Full tag properties are set via keyboard navigation within the HUD without the panel growing or displacing the video.
| Dimension | Assessment |
|---|---|
| Speed for experienced coaches | ✓ Fastest possible — eyes stay on video, hands on keyboard |
| Learnability | ✗ Low — requires memorizing keyboard navigation; no discoverability |
| Playlist assignment | ✗ HUD is too small to present multi-playlist selection clearly |
| Mobile/touch viability | ✗ HUD targeting on touch is unreliable |
| Verdict | Rejected as primary concept. Keyboard shortcut pattern from this sketch is valuable — incorporated as an optional power-user layer on top of Sketch B, not as a standalone layout. |
This wireframe set covers the primary tagging interaction — the moment a coach triggers a tag during playback through to confirmation and resume. It documents four distinct states: the idle/playing state, the triggered/overlay-open state, the confirmation state, and the post-session review state. Each state is shown at the structure level only — no visual design, no color beyond functional annotation, no typography decisions. These are architecture documents, not design mockups.
All wireframes are produced in Figma at 1280×800px (13" laptop viewport). Component names are noted where they correspond to the component set being built in parallel. Annotations reference findings from the Friction Point Audit and Pattern Library directly.
State 1 · Idle/playing
The baseline state: video is playing, no active tag entry in progress. The interface is as close to a pure video player as possible during this state.
State 2 · Triggered/overlay open
Triggered when the coach presses T or clicks "Tag moment." Playback pauses. An overlay panel expands from the bottom of the video frame upward. The video frame remains visible above the overlay — the coach can see the paused frame they are tagging throughout.
State 3 · Confirmation
After the coach presses Enter/Confirm, the tag is saved, the clip is created (with auto-set bounds), and the overlay collapses. A brief confirmation toast appears at the bottom of the frame — then video resumes automatically after 1.5 seconds unless the coach explicitly pauses again.
State 4 · Post-session review
After the coach finishes watching the film (or at any point they choose to review), clicking [● 9 tagged] transitions to the review state. The layout switches to the timeline-first model from Sketch C: the timeline dominates the top half, the video player occupies the bottom half. All tagged events are visible as interactive markers on the timeline.
The keyboard shortcut system supports the full tagging workflow without mouse interaction for coaches who prefer it. Shortcuts are displayed in a persistent reference panel (toggled with ?) and visible as labels on all UI elements during the first three sessions.
| Key | Action | State |
|---|---|---|
| T | Trigger tag overlay (pause playback) | Playing |
| Enter | Confirm tag with current values | Overlay open |
| Esc | Cancel tag, resume playback | Overlay open |
| S | Confirm stat only (no clip) | Overlay open |
| Z | Undo last tag | Any |
| Tab | Move to next field in overlay | Overlay open |
| J / K | Jump to previous / next tagged event | Any |
| [ / ] | Nudge clip in-point / out-point −2s / +2s | Overlay open |
| 1–9 | Select event type by configured shortcut | Overlay open |
| Space | Play / pause | Any |
| ? | Toggle shortcut reference panel | Any |
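One way to enforce the State column above is a small, state-gated dispatch map. A sketch covering a subset of the table; key names and action identifiers are illustrative, and only the dispatch logic is shown.

```typescript
// Sketch of a state-gated shortcut map. Key names and action identifiers
// are illustrative; handler bodies are left to the caller.

type UiState = "playing" | "overlayOpen" | "review";

const shortcuts: Record<string, { states: UiState[] | "any"; action: string }> = {
  t: { states: ["playing"], action: "openTagOverlay" },
  Enter: { states: ["overlayOpen"], action: "confirmTag" },
  Escape: { states: ["overlayOpen"], action: "cancelTag" },
  s: { states: ["overlayOpen"], action: "confirmStatOnly" },
  z: { states: "any", action: "undoLastTag" },
  " ": { states: "any", action: "togglePlayback" },
  "?": { states: "any", action: "toggleShortcutPanel" },
};

function dispatchShortcut(key: string, current: UiState): string | null {
  const entry = shortcuts[key];
  if (!entry) return null;
  const active = entry.states === "any" || entry.states.includes(current);
  return active ? entry.action : null; // caller routes the action name to a handler
}
```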
The playlist panel is the delivery layer of the redesign — where tagged moments become organized, publishable packages for players. The core problem it solves: in the current Hudl interface, clips must be individually assigned to playlists after cutting, and playlists must be published individually. The redesigned panel makes playlist management a zero-marginal-effort consequence of the tagging session, not a separate administrative task. This wireframe set covers the playlist management view, the per-player delivery preview, filter-based playlist creation, and the publish flow.
Accessible from the review state or from the main navigation. Shows all playlists generated or updated in the current session. Each playlist shows its clip count, total duration, last-updated timestamp, and publish status.
Clicking [Preview] for any player opens a full-screen preview of exactly what that player will receive — a sequenced playlist of their clips with the coach's note attached to each one (if notes were added during tagging).
Accessed via [+ New playlist from filter]. Allows the coach to create a thematic playlist — "All defensive breakdowns," "Jordan's shots this month," "Press trigger moments for film session" — by filtering the full tagged event library by any combination of criteria. The resulting playlist auto-populates immediately and can be published or saved for future use.
Publishing sends all prepared playlists to all attributed players simultaneously. The flow is designed to be a single confident action, not a multi-step wizard.
The AI Suggest Layer does not replace the coach as tagger. The primary concept (Sketch B / Unified Tagging UI) remains the core design. The AI layer is a non-intrusive suggestion surface that appears when the system detects a potentially taggable moment — reducing the cognitive load of watching and tagging simultaneously, without removing coach agency over what gets tagged and how.
Every AI suggestion in this layer follows a single principle: the coach confirms, rejects, or ignores every suggestion. Nothing is tagged automatically. The AI suggests a moment and an event type. The coach decides whether it is tagged, how it is attributed, and whether a clip is created. This principle directly addresses the trust concern raised by the club DOC (Participant C) in the Assist Subscriber Interview Notes — "some of my coaches don't trust the Assist tagging accuracy for soccer-specific events." Suggestion + confirmation preserves trust while reducing effort.
| Behavior | AI Suggest Layer Does | AI Suggest Layer Does NOT Do |
|---|---|---|
| Moment detection | Surfaces a suggestion badge when computer vision detects a likely taggable event | Tag the event automatically; add to any playlist; create any clip |
| Event classification | Pre-fills the event type field in the overlay with its best classification | Lock in the event type; prevent the coach from overriding it |
| Player attribution | Suggests a player based on jersey number recognition if available | Attribute to a player definitively; bypass confirmation |
| Rejection | Disappears silently when the coach presses Esc or continues without responding | Persist, repeat, or log rejected suggestions as negative training data without consent |
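A sketch of the contract the table describes: a suggestion is an inert object that can only pre-fill the overlay and is saved through the same confirm action as a manual tag. Field names and the confidence scale are assumptions.

```typescript
// Sketch of the confirm-or-ignore contract: a suggestion is an inert object
// that can only pre-fill the overlay; it never becomes a saved tag on its own.
// Field names are assumptions.

interface AiSuggestion {
  timestampSec: number;
  suggestedEventType: string;
  suggestedPlayerId?: string; // only when jersey recognition is confident
  confidence: number;         // 0..1, surfaced to the coach, never auto-applied
}

// Returns editable draft values for the overlay, not a persisted tag.
// The only path to persistence is the same confirm action used for manual tags;
// dismissing a suggestion simply drops the object.
function prefillOverlay(suggestion: AiSuggestion) {
  return {
    eventType: suggestion.suggestedEventType, // editable
    playerIds: suggestion.suggestedPlayerId ? [suggestion.suggestedPlayerId] : [],
    source: "ai-suggested" as const,          // shown as attribution in the overlay
  };
}
```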
When the AI detects a likely taggable moment during playback, a small badge appears at the edge of the video frame. It does not pause playback, does not animate aggressively, and does not block the view. The coach can act on it, ignore it, or dismiss it — all without interrupting their watch experience.
When the coach presses T with an active suggestion, the tag overlay opens with AI-suggested values pre-filled — but all fields remain editable and the coach must confirm explicitly. The overlay is identical to the non-AI version (State 2 from the Unified Tagging UI) except for the pre-filled values and the source attribution.
A secondary value of the AI layer: it catches moments the coach missed. If the coach was distracted (a child interruption, a phone notification) during a detectable event, the AI creates a "missed event" marker in the timeline. After the session, the coach can review missed events and tag them from the timeline without re-watching the full film.
This pattern directly addresses the Interruptions finding from the Context of Use Notes — coaches are interrupted during film sessions, and currently have no way to recover missed moments except by re-watching. The AI suggestion layer makes missed-moment recovery a fast, structured review rather than an unguided re-watch.
This design thread requires real-time computer vision processing during video playback — specifically event detection and, ideally, jersey number recognition. As of March 2026, Hudl's Balltime AI technology (acquired February 2025) performs this for club volleyball only. Extending it to soccer and other field sports involves:
Decision notes document the reasoning behind every significant design choice made during the lo-fi phase — including choices to reject an approach, defer a feature, or accept a known tradeoff. They are organized by decision type and cross-referenced to the research and competitive work from Steps 02 and 03. The goal is to make the design rationale auditable: if a reviewer, stakeholder, or future designer questions a choice, the reasoning exists in writing, linked to specific evidence. This is especially important for a speculative concept project where design decisions cannot be validated by production data.
| Decision | Chosen Approach | Alternatives Considered | Rationale | Research Source |
|---|---|---|---|---|
| Primary tagging trigger | Single key (T) pauses playback and opens overlay | Click-only; voice trigger; always-on panel | Keyboard trigger is fastest for experienced coaches (Sketch D finding). Paired with a visible button for discoverability (non-keyboard users). Voice trigger rejected: coaches are often in family environments at 11pm. | Context of Use Notes §04; Sketch D assessment |
| Auto-pause on tag trigger | Playback pauses automatically when overlay opens | Coach manually pauses; overlay opens over live video | Coaches tagging while video plays produce lower-quality tags (wrong timestamps, missed attributions). Auto-pause ensures the tagged moment is visible and still during entry. Minor cost: coach cannot tag in real-time flow. Acceptable given interview data showing coaches already pause manually. | Step 02 Finding 04 (fatigue degrades quality); Interview synthesis |
| Auto-resume after confirmation | Video resumes automatically 1.5 seconds after confirm | Coach manually presses play; no auto-resume | Manual play after every tag is a friction tax at 40–80 tag events per game. 1.5-second window allows coach to read the confirmation toast. Auto-resume makes the tagging flow continuous rather than stop-start. Undo (Z) is available during and after the window. | Friction Point Audit §02 (tag count per game estimate); Pattern Library §04 |
| Overlay vs. modal | Overlay anchored to video bottom edge; video remains visible above | Full-screen modal; sidebar panel; floating dialog | Coach must be able to see the paused frame they are tagging during entry — the moment may be ambiguous (was that a shot or a cross?) and visual reference matters. A full-screen modal obscures the video. A sidebar changes the video proportions during entry. Overlay preserves full video width above the entry area. | Sketch A/B comparison; Friction Point Audit §01 (mode switch disorientation) |
| Pre-filled defaults | Event type and player pre-fill from last-used values | Always blank; sport-default pre-fill; AI-predicted pre-fill | Coaches tagging sequences (e.g., five consecutive Jordan shots in a dominant half) currently repeat the same entries multiple times. Last-used pre-fill reduces this to a single Enter keypress per repeat. Sport-default pre-fill rejected: default event type varies too much by game context. AI pre-fill is the speculative thread — documented separately. | Step 02 Interview Synthesis Finding 04 (fatigue); Pattern Library §01 (Smart Default) |
| Decision | Chosen Approach | Alternatives Considered | Rationale | Research Source |
|---|---|---|---|---|
| Auto-clip creation | Clip created automatically on tag confirmation with ±8s default bounds | Coach sets bounds manually every time; clip creation is a separate opt-in step | The core value proposition of the redesign is eliminating Pass 2. If clip creation requires manual bounds every time, Pass 2 is still embedded inside the overlay — just compressed. Auto-clip with adjustable defaults makes clip creation a zero-effort default. The [SKIP CLIP] option preserves the stat-only path for coaches who don't want clips. | Friction Point Audit §01 (mandatory second pass); Pattern Library §02 (Auto-Clip on Tag) |
| Default clip window (±8s) | ±8 seconds around tag timestamp as system default | ±5s; ±10s; event-type-specific defaults from day one | ±8s (16-second clips) captures most soccer events with context — a shot includes the build-up touch and the follow-through. ±5s is too tight for fast sequences. ±10s generates very long clips for minor events. Event-type defaults are the right long-term model but require per-event configuration on first use, which is friction for new users. System-wide default first; per-event-type config available in settings. | Direct observation of Hudl clip editor; coach interviews (clip length preferences) |
| Attribution-driven playlist auto-assign | Selecting Player X automatically adds clip to Player X's playlist | Coach manually assigns to each playlist; rule-based auto-assign configured in advance | Player attribution is already required for stat accuracy. If the stat is attributed to Jordan, the clip is unambiguously Jordan's highlight — auto-assigning to Jordan's playlist is the correct behavior in virtually 100% of cases. The coach unchecks rather than checks. Rule-based pre-configuration adds setup burden before first use; attribution-based is implicit and requires no setup. | Step 02 Interview Synthesis Finding 02 (workarounds); Pattern Library §03 (Attribution-Driven Auto-Assign) |
| Team playlist default | Team playlist checked by default for all clips | Team playlist opt-in; event-type rules for team playlist | Coaches want every game clip in the team library for film sessions. Opt-in would require checking Team for every tag — net new friction on every event. The coach unchecks for private/sensitive moments (a player's injury moment, a defensive breakdown that could embarrass a specific player) — rare cases that justify the override being opt-out rather than opt-in. | Coach interviews (team playlist universality) |
Testing was conducted with 8 coaches from Liverpool FC International Academy Utah and broader club-level contacts — a population representative of Hudl's core high school and club market. Sessions used a 3-condition design: Condition A (Hudl base platform, current), Condition B (redesign prototype), and Condition C (Hudl Assist walkthrough). Participants completed the conditions across three sessions spaced one week apart, using a Latin square rotation to control for learning effects; two participants ran a reduced protocol, noted in the session summaries.
The Assist comparison (Condition C) was deliberate: if a coach on Assist already gets stats and clips without doing film review themselves, what do they lose? Testing revealed that all 6 Assist walkthrough participants independently raised the same gap — you can't build thematic clip packages from Assist output, and the tags "aren't mine." These findings directly shaped what the redesign needed to preserve and what it could simplify. Six design changes were made as a result of testing; five design decisions were challenged and survived unchanged.
Most usability studies compare one design against a baseline. This study uses three conditions because the research question is more specific: the redesign is not just competing against Hudl's current interface — it is implicitly competing against Hudl Assist, the paid service that already solves the two-pass problem for coaches who can afford it. Testing only Redesign vs. Base would answer "is this better than nothing?" Testing all three answers the more important question: does this interface deliver enough of Assist's time savings that a coach wouldn't need to pay for Assist? The answer has direct product strategy implications that a two-condition study cannot surface.
Participants are randomly assigned to a condition order using a Latin square rotation to control for learning effects. All participants experience all three conditions across three sessions spaced one week apart. The one-week gap is intentional: it simulates real coach usage cadence (one game per week) and reduces within-session carry-over between conditions.
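A sketch of the rotation scheme, assuming a cyclic 3×3 Latin square assigned by participant order. P1 and P2 match the rotations recorded in the session notes; the mapping for the remaining participants is illustrative.

```typescript
// Sketch of the cyclic 3×3 Latin square rotation. Each condition appears
// once per rotation and once per session position across the three rotations.

type Condition = "A" | "B" | "C";

const rotations: ReadonlyArray<ReadonlyArray<Condition>> = [
  ["A", "B", "C"], // rotation 1 (P1)
  ["B", "C", "A"], // rotation 2 (P2)
  ["C", "A", "B"], // rotation 3
];

function conditionOrder(participantIndex: number): ReadonlyArray<Condition> {
  return rotations[participantIndex % rotations.length];
}

// P1 (index 0) -> A, B, C   P2 (index 1) -> B, C, A   P3 (index 2) -> C, A, B ...
```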
| Condition | Interface | What Coach Does | Time Budget Estimate |
|---|---|---|---|
| A — Baseline | Hudl Base platform (current) | Full two-pass workflow: stat tagging pass, then clip-cutting pass, then playlist assignment. No coaching assistance, no workarounds permitted during session. | ~180–240 min |
| B — Redesign | Prototype — Unified Tagging UI (Step 04) | Single-pass workflow: trigger-and-expand overlay for each moment, auto-clip creation, attribution-driven playlist assignment. One pass through film. | ~60–90 min (projected) |
| C — Assist Reference | Hudl Assist walkthrough (researcher-facilitated) | Upload film, submit to Assist queue, receive and review breakdown, distribute auto-playlists. Researcher narrates the Assist steps the coach cannot perform directly (analyst tagging). Total time represents the coach-facing portion only. | ~30–60 min |
Each participant is given access to a 90-minute soccer game film pre-loaded in a test Hudl account. The film is a real game (from a consenting program), not a synthetic recording. The task is as close to a real post-game session as possible.
| # | Task | Metric Captured | Completion Criteria |
|---|---|---|---|
| T1 | Tag a minimum of 8 stat events across the full game film — at least 3 different event types, at least 3 different player attributions. | Time to complete, event count, event type variety, player attribution completeness, error count | 8 tagged events with attribution visible in event timeline |
| T2 | Cut clips for 5 of the tagged events and assign each to the appropriate player playlist. | Time from T1 completion to T2 completion, mode-switch count, clip accuracy (correct in/out points ±3s), playlist assignment accuracy | 5 clips present in correct player playlists |
| T3 | Publish the team playlist and at least 2 individual player playlists. | Time to publish, number of actions required, error count | Playlists visible to recipient accounts |
Same game film, same task requirements. The prototype is presented in Figma Prototype mode at 1280×800 with simulated video playback (pre-cut video segments that advance when the participant interacts with the interface). The interaction fidelity is high enough that timing data is meaningful, though not identical to a fully built product.
| # | Task | Metric Captured | Completion Criteria |
|---|---|---|---|
| T1 | Tag a minimum of 8 stat events using the unified overlay — stat type, player attribution, auto-clip, and playlist assignment completed for each in a single overlay interaction. | Time to complete, single-pass compliance (did participant attempt to make a second pass?), overlay abandonment rate, default acceptance rate (how often did participant accept pre-filled values without change?) | 8 tagged events with clips and playlist assignments, all created within one playback pass |
| T2 | Use the review timeline to locate a specific event (researcher names a tagged moment: "find the 34-minute shot"), adjust the clip bounds, and add a coaching note. | Time to locate event via timeline (vs. scrubbing), clip adjustment accuracy, note attachment success | Correct event selected, bounds adjusted, note visible in player preview |
| T3 | Publish all playlists using the single-action publish flow. | Time to publish, number of actions required, error count | All playlists marked as published in the prototype |
| Metric | Method | Condition | Success Criteria (from Step 01 Success Criteria) |
|---|---|---|---|
| Total task time (T1+T2+T3) | Stopwatch, session recording timestamp | A & B | Condition B ≥ 40% faster than Condition A |
| Single-pass compliance | Observation — did participant rewind and re-tag? | B only | ≥ 80% complete tag + clip + playlist in one pass |
| Coach confidence (post-task Likert) | "I felt confident I could use this workflow after every game" — 1–5 scale | A & B | Condition B mean ≥ 4.0 |
| Feature adoption intent | "Would you use this instead of your current post-game workflow?" — closed Y/N | B only | ≥ 70% yes |
| No regression for basic users | Veteran persona (Persona 03) completes clip-share task with no added errors vs. Condition A | A & B | 0 net new errors |
| Step count (interaction audit) | Screen recording interaction count — clicks + keypresses to complete one fully tagged moment | A & B | Condition B ≤ Condition A step count per tagged moment |
| Assist comparison intent | "Would this interface reduce your need for the Assist service?" — directional, no threshold | C walkthrough participants only | Directional — no threshold |
| Error rate per tagged event | Count of incorrect attributions, missed clip assignments, wrong event types requiring correction | A & B | Secondary — compare directionally |
| Think-aloud sentiment | Frequency of positive vs. frustrated verbal expressions during task, logged by researcher | A & B | Secondary — qualitative triangulation |
Administered after each condition session. Takes approximately 5 minutes. Responses are condition-specific — participants complete a version of the questionnaire immediately after each of their three sessions.
This document contains structured notes from all 24 testing sessions (8 participants × 3 conditions). Notes follow a consistent format: pre-session context, task-by-task observation log, post-task questionnaire scores, and a researcher synthesis written within 24 hours of the session. Direct quotes are preserved verbatim and marked with quotation attribution. Time measurements are from session recordings (timestamps noted). Observations marked [OBS] are researcher inferences; quotes marked [P#] are participant statements. Participant numbers correspond to internal tracking and do not map to any external identifier.
| Metric | Condition A (Base) | Condition B (Redesign) | vs. Target |
|---|---|---|---|
| Median total task time (T1+T2+T3) | 194 min | 81 min | ✓ 58% faster — exceeds ≥40% target |
| Single-pass compliance | N/A (2-pass by design) | 7 of 8 participants (88%) | ✓ Exceeds ≥80% target |
| Coach confidence (mean Likert, 1–5) | 2.9 | 4.2 | ✓ Exceeds ≥4.0 target |
| Feature adoption intent ("Would you use this?") | N/A | 6 of 8 Yes, 1 Maybe, 1 No | ✓ 75% yes — exceeds ≥70% target |
| No regression — Persona 03 basic task | 2 errors baseline | 2 errors in prototype | ✓ 0 net new errors |
| Median step count per tagged moment | 11 interactions | 4 interactions | ✓ 64% reduction — well under baseline |
| Error rate per tagged event | 0.31 errors/event | 0.14 errors/event | ↓ 55% — directional improvement |
| Assist comparison intent (Cond. C) | — | — | 6 of 8: "this would reduce my need for Assist" (directional ✓) |
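For transparency, a small sketch showing how the headline percentages in this table follow from the reported medians; the values are copied directly from the rows above.

```typescript
// Sketch deriving the headline deltas from the condition medians reported above.

const conditionA = { totalMin: 194, stepsPerMoment: 11, errorsPerEvent: 0.31 };
const conditionB = { totalMin: 81, stepsPerMoment: 4, errorsPerEvent: 0.14 };

const pctReduction = (a: number, b: number): number => ((a - b) / a) * 100;

console.log(pctReduction(conditionA.totalMin, conditionB.totalMin).toFixed(0));             // 58 -> "58% faster"
console.log(pctReduction(conditionA.stepsPerMoment, conditionB.stepsPerMoment).toFixed(0)); // 64 -> "64% reduction"
console.log(pctReduction(conditionA.errorsPerEvent, conditionB.errorsPerEvent).toFixed(0)); // 55 -> "55% lower error rate"
```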
P1 · CONDITIONS: A → B → C (Latin square rotation 1) · SESSIONS: Oct 11, Oct 18, Oct 25 · TOTAL RECORDED TIME: 4h 22min
P2 · CONDITIONS: B → C → A (Latin square rotation 2) · SESSIONS: Oct 12, Oct 19, Oct 26
Full session notes for P3–P8 follow the same format as P1–P2. Key excerpts and findings from each are documented here; complete verbatim transcripts are held in the research archive.
| Participant | Role | Cond. A Time | Cond. B Time | Adoption Intent | Most Significant Quote or Finding |
|---|---|---|---|---|---|
| P3 | Teacher-Coach, 9 yrs | 211 min | 87 min | Yes | "I didn't have to write anything down. I kept waiting for the moment where I'd need to write something down, and it never came." |
| P4 | Club Coach, 11 yrs | 188 min | 79 min | Yes | Discovered undo (Z key) after a misattribution. Used it 3 times during session. "I've wanted this for years. You have no idea." |
| P5 | Teacher-Coach, 4 yrs | 223 min | 91 min | Maybe | Concerned about auto-resume: "I don't love that it just starts again. I want to control when it plays." Auto-resume flagged as a polarizing feature. (See Iteration Log §01.) |
| P6 | Club Coach, 3 yrs | 196 min | 85 min | Yes | Struggled with clip bound adjustment — found nudge buttons (+2s / -2s increments) too coarse. Wanted finer control. (See Iteration Log §03.) |
| P7 | Tech-Reluctant Veteran, 22 yrs | N/A (basic task only) | No added errors on basic clip-share task vs. A | N/A (basic task only) | "I just did the same thing I always do — clicked the link and watched the film. I didn't even notice the [T] thing until you pointed it out." |
| P8 | Club Coach, 7 yrs | 179 min | 72 min | No | Already uses Veo for film capture and prefers Veo's auto-clip feature. Would not switch to Hudl redesign unless Hudl also addressed the camera/capture side of the workflow. Out-of-scope concern; noted for Step 10 open questions. |
6 of 8 participants completed the Condition C Assist walkthrough (P7 and P8 were excluded — P7 due to role constraints, P8 due to Veo preference making the Assist comparison non-applicable). Key themes across the 6 walkthroughs:
Every design decision that changed as a result of testing is documented here in chronological order. Each entry records: the finding that prompted the change, the original design, the revised design, the rationale for the specific revision chosen, and the session or sessions that surfaced the issue. Entries are numbered for cross-reference in the Before/After Wireframe Pairs (Deliverable 4). The log is the design's paper trail — it makes visible that every revision was evidence-driven, not preference-driven.
Changes are classified by type: Critical = addressed immediately, affected test validity if not fixed, Significant = addressed before hi-fi phase, Minor = logged for hi-fi consideration, no immediate change to lo-fi prototype.
ILG-01 · TYPE: Significant · SURFACED: Sessions P5 (Oct 26), P6 (Oct 27) · BEFORE/AFTER: See Wireframe Pair 01
Before: After tag confirmation, video resumes automatically after 1.5 seconds. A progress bar in the toast counts down the auto-resume. Coach presses Space to delay.
After: Auto-resume is a setting, off by default. Default behavior: video stays paused after confirmation; coach presses Space or clicks Play to resume. Auto-resume can be enabled in session settings for coaches who prefer continuous flow.
Finding: P5 described auto-resume as feeling "pushy" — the interface was making a decision the coach felt was theirs to make. P6 independently described the same feeling. P1–P4 had no issue with auto-resume. The split (2 negative, 4 neutral-positive out of 6 who experienced it) is enough to justify making it configurable rather than changing the default for everyone.
Rationale for opt-in rather than removal: P1 specifically praised the auto-resume as reducing fatigue across a long tagging session. Removing it entirely would regress P1's experience. The settings toggle preserves both preferences. The default is off (manual resume) because the penalty for an unexpected auto-resume (missing the next moment) is higher than the penalty for manual play (one extra keypress).
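A minimal sketch of the revised behavior, assuming auto-resume becomes an opt-in playback setting; the setting name and delay constant are illustrative, not a spec.

```typescript
// Sketch of the revised post-confirmation behavior from ILG-01.
// Setting name and delay value are illustrative.

interface PlaybackSettings {
  autoResumeAfterConfirm: boolean; // off by default after ILG-01
  autoResumeDelayMs: number;       // 1500 in the original design
}

function onTagConfirmed(settings: PlaybackSettings, resume: () => void): void {
  if (!settings.autoResumeAfterConfirm) {
    return; // default: stay paused; coach presses Space or clicks Play
  }
  setTimeout(resume, settings.autoResumeDelayMs); // opt-in continuous flow
}
```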
ILG-02 · TYPE: Significant · SURFACED: Sessions P1 (Oct 18), P3 (Oct 19), P4 (Oct 19), P5 (Oct 26) · BEFORE/AFTER: See Wireframe Pair 02
Before: Overlay had 5 fields: Event Type, Player, Clip Bounds, Playlists, Note (optional, placeholder text "Add a coaching note…"). Note field was always visible.
After: Note field removed from the default overlay, replaced with a collapsed [+ Add note] trigger that expands inline only when pressed. Overlay now has 4 always-visible fields + 1 optional expansion.
Finding: 4 of 6 participants with a fully visible note field made no use of it but reported feeling like they "should" add a note — the empty field created a mild sense of obligation. Tab-navigation skipped it in all 4 cases. In the Annotated Decision Notes, this was flagged as an open question. Testing confirmed: the note field in the default overlay adds cognitive overhead with no value for the majority of tags.
Rationale: Notes are valuable for specific clips (recruiting reel moments, corrective feedback for a player) but not for the majority of tagged events. The collapsed [+ Add note] trigger is discoverable without being mandatory. Coaches who want to add notes regularly can learn the trigger quickly; coaches who never want notes don't see the empty field on every tag.
ILG-03 · TYPE: Significant · SURFACED: Session P6 (Oct 27) · BEFORE/AFTER: See Wireframe Pair 03
Before: Clip bound controls: ◀ −2s and +2s ▶ nudge buttons. Each press moved the in/out point by exactly 2 seconds. No finer control available in the overlay.
After: Nudge buttons changed to 1-second increments. A [Fine] toggle switches to frame-by-frame (0.04s) mode. A secondary [Scrub] option opens a mini-timeline directly in the overlay for drag-based adjustment. Default remains ±8s auto-bounds.
Finding: P6 tried to trim a clip to start exactly on a tackle contact — a frame-precise requirement. At 2-second increments, she overshot by one press and then overshot in the other direction. Spent 90 seconds adjusting a bound that should have taken 10 seconds. Described it as "more annoying than just going back and doing it in the clip editor" — which directly undercuts the redesign's value proposition for clip precision.
Rationale for tiered approach rather than frame-by-frame default: Frame-level precision in the overlay would require either a video preview (increasing overlay size significantly) or a numeric timecode input (requiring mental arithmetic while fatigued). 1-second nudge handles the majority of adjustment needs; the [Fine] toggle and [Scrub] option serve edge cases without burdening the default interaction with a complex control set.
ILG-04 · TYPE: Minor → addressed in lo-fi update · SURFACED: Sessions P5 (Oct 26), P6 (Oct 27) · BEFORE/AFTER: See Wireframe Pair 04
Finding: 2 of 8 participants did not discover the T key trigger during the session — they used only the on-screen [T] button. Both described the single-letter button as "not obviously a button." P7 (tech-reluctant veteran) did not notice the button at all until explicitly pointed to it.
Revision: Button label changed from "[T]" to "[Tag ⌨]" with the keyboard icon serving as an affordance signal. A first-session tooltip ("Press T at any moment to tag it — or click here") appears on first use and is permanently dismissible. The tooltip addresses the discoverability open question flagged in the Annotated Decision Notes §05 without requiring a full onboarding wizard (which was rejected in the Pattern Library).
ILG-05 · TYPE: Minor · SURFACED: Session P4 (Oct 19) · BEFORE/AFTER: No wireframe pair — settings-level change only
Finding: P4 tagged a "Defensive error — positional" event and was surprised when it defaulted to the Team playlist. She explicitly did not want the team to see an individual player's defensive error in the shared team view — she wanted it only in that player's individual playlist as corrective feedback. The Team default had correctly been described in the Decision Notes as having an opt-out override, but P4 did not notice the unchecking option in time (she had already confirmed before realizing).
Revision: Team playlist default is now configurable per event type in settings. "Corrective" or "Development" event types (user-defined) default to player-only; "Highlight" and "Tactical" types default to team + player. This requires a one-time setup step but prevents the default from creating unwanted team-visible content for coaches who use corrective event types routinely.
ILG-06 · TYPE: Minor · SURFACED: Sessions P1 (Oct 18), P3 (Oct 19) · BEFORE/AFTER: See Wireframe Pair 05
Finding: 2 participants (P1 and P3) expressed mild uncertainty after confirmation about whether the auto-clip had captured the right moment. Neither opened the review timeline to check — they trusted the system and moved on — but both mentioned in the post-session debrief that they would have felt more confident if they could "see a little of what got clipped" in the confirmation moment.
Revision: Confirmation toast expanded to include a 3-second thumbnail preview of the first frame of the auto-clip. The toast grows from one line (text only) to a small two-column layout: text summary on the left, thumbnail on the right. Coach can verify the clip frame visually without opening the review timeline. Toast auto-dismisses or collapses after the thumbnail is displayed for 1.5 seconds.
Open question this revision creates: The expanded toast is slightly larger — it may occlude the bottom portion of the video frame more than the text-only version. To be tested in the hi-fi phase (Step 09) against the thumbnail-free version.
| Issue | Participant(s) | Disposition |
|---|---|---|
| Mobile / iPad tagging capability request | P2 (emphatic), P3 (mentioned) | Logged for future scope. Confirmed as a genuine unmet need. Not addressed in this prototype phase. See Step 10 Open Questions. |
| Veo integration / camera-side workflow | P8 | Out of scope for interface redesign. Logged as a product strategy note — Hudl's relationship with camera platforms is a separate problem. P8's non-adoption due to Veo preference is noted as a boundary condition on this design's addressable market. |
| Keyboard shortcut configurability | P2 (wanted to remap T to a different key) | Keyboard remapping is a settings-level feature. Noted for hi-fi; not blocking at lo-fi stage. |
| Bulk-tag mode for fixed sequences | P4 (wanted to tag a 12-pass sequence as a single event) | Interesting edge case — a coach tagging a "possession sequence" as a single extended clip rather than individual pass events. The current model handles this with clip-bound extension but not with a dedicated sequence-tag mode. Logged for future exploration. |
| Dark mode request | P1, P3 (both mentioned late-night use) | Listed as a Minor friction point in the Friction Point Audit. Confirmed by testing. CSS-level change for hi-fi; no wireframe changes needed. |
Each wireframe pair documents one change made as a result of testing — the state of the interface before the testing finding, and the revised state after. Pairs are numbered in sequence and cross-referenced to the corresponding Iteration Log entries (ILG-01 through ILG-06; ILG-05 required no wireframe pair, so there are five pairs). Each pair includes the ASCII wireframe states, the specific change highlighted, and a note on what the pair is intended to demonstrate. These are lo-fi documentation artifacts — they do not represent the final visual design.
Pair 01 · Auto-resume default (ILG-01)
✗ P5, P6: "I don't love that it just starts again"
✓ Coach controls resume. Auto-resume available in ⚙ Settings → Playback.
What this pair demonstrates: A feature that is beneficial for high-frequency taggers (P1: "that's what makes the session feel fast") and disruptive for lower-cadence or more deliberate coaches (P5, P6). The revision surfaces the same capability through a setting rather than forcing a behavioral choice. No feature is removed — coach preference determines the behavior.
Pair 02 · Note field collapsed (ILG-02)
✗ 4 of 6 participants felt obligated to fill the note field; skipped it via Tab but experienced friction
✓ Identical field count when needed; invisible when not. Tab order no longer requires skipping.
What this pair demonstrates: Progressive disclosure — show the minimum required fields by default, expand only when the coach actively wants more. The note field is not removed; it is placed behind a one-tap trigger. This is a pure reduction in default cognitive load with zero loss of capability.
Pair 03 · Clip bound nudge precision (ILG-03)
✗ P6: overshot twice, 90-second adjustment for a 10-second task
✓ 1s default handles most cases; Fine + Scrub serve precision needs without cluttering the default
What this pair demonstrates: The cost of coarse defaults on precision-sensitive interactions. 2-second increments were chosen initially to keep the control simple — but they made the common task of fine-tuning a clip start point harder than the current Hudl editor, which undermines the redesign's core promise. 1-second default with tiered options restores precision without complexity.
Pair 04 · Tag trigger discoverability (ILG-04)
✗ P5, P6: didn't read [T] as a button; P7: didn't notice it at all
✓ Button reads as a button; keyboard affordance visible; tooltip handles first-use discovery
What this pair demonstrates: A label legibility issue masquerading as a feature discoverability issue. "[T]" is a keyboard shortcut notation, not a button label — it communicates to experienced users and fails for everyone else. "[Tag ⌨]" communicates the action AND the shortcut simultaneously. The tooltip handles first-use onboarding without a wizard. Small change, large discoverability impact.
Pair 05 · Confirmation toast thumbnail (ILG-06)
✗ P1, P3: "I would have felt more confident if I could see what got clipped"
✓ Coach can verify clip frame visually before continuing. Trust in auto-clip model increases.
What this pair demonstrates: The value of low-cost confirmation signals. A thumbnail adds approximately 60×45px to the toast — a minor layout change — but it converts an abstract confirmation ("clip saved") into a concrete visual verification. The coach's trust in the auto-clip system is directly related to how often they can see that it captured the right thing. This change invests in that trust without adding an explicit review step.
Several design decisions were challenged during testing and survived unchanged. These are worth noting explicitly because they demonstrate that the testing validated those choices — not that testing was ignored.
In a real product environment, this step involves presenting research findings and design direction to product leadership, engineering leads, and the stakeholder who owns the Assist product line — each with a different primary concern. The presentation is structured to address four concerns simultaneously: the human problem for the product director, research credibility for the skeptic in the room, the technical feasibility signal for the engineering lead, and the cannibalization question — honestly and head-on — for the revenue stakeholder.
The Assist context sharpens the business framing substantially. Hudl's decision to solve the two-pass problem via a paid analyst service rather than a redesigned interface is a product strategy choice — one that generates revenue from the pain point rather than eliminating it. The deliverables in this step make the case for a base-platform fix, quantify the cannibalization risk with specificity, and offer three strategic paths forward. The deck ends not with a recommendation on the revenue model — that decision belongs to the stakeholders — but with a clear ask: a green light to invest design resources in validating this at higher fidelity.
In a real product environment, this presentation would be delivered to a cross-functional audience: a product director or VP of Product, an engineering lead, a revenue or growth stakeholder who owns the Assist product line, and possibly a customer success or community manager who has direct exposure to coach complaints. Each of these roles has a different primary concern — and the presentation is structured to address all four rather than optimizing for one. The business case section is not an afterthought; it is the first thing the revenue stakeholder will evaluate, and it must hold up to scrutiny before the design work is given airtime.
This document presents the deck slide-by-slide as annotated narrative. In the actual Figma-produced deck, each section below corresponds to one or more slides. Speaker notes appear in the annotations.
SLIDES 1–4 · ~5 MINUTES
"The Two-Pass Problem: A Case for Unified Tagging in Hudl's Base Platform"
Speculative concept project · March 2026 · [Designer name]
Speaker note: Lead with what this is: a speculative concept project grounded in research, not an official Hudl proposal. The goal of this presentation is to make the case that a specific workflow problem — documented, quantified, and tested — is worth solving at the interface level, and to show what that solution looks like. If this were a live product review, the ask at the end of this deck would be: fund a hi-fi prototype for a broader validation study.
Visual: split-screen portrait of the teacher-coach persona (left) and the club coach (right), with a single stat below each. Left: "3.2 hours average post-game film time. 1.1 hours of it is a second pass they already did." Right: "4 of 5 coaches send a timestamp in a group text instead of a Hudl clip. Not because they don't know how. Because the workflow doesn't give them time."
Speaker note: Don't lead with the solution. Lead with the person. These are real coaches — composite profiles built from 11 interviews plus 47 public reviews. The numbers are not made up: the session timing data comes from Step 05 usability testing (Condition A median: 194 minutes). The timestamp workaround was identified in 4 of 5 non-Assist coaches in interview research. This slide exists to make the problem human before it becomes a metric.
Visual: a simplified flow diagram of the current Hudl workflow — two parallel bars representing Pass 1 (stat tagging, ~88 min) and Pass 2 (clip cutting, ~74 min) with a mode-switch arrow between them labeled "navigate to clip editor." A third bar below both shows the redesigned single-pass flow (~81 min total). The diagram makes the redundancy architectural, not behavioral.
Speaker note: The most important thing this slide communicates is that this is not a coach problem. It is not that coaches lack discipline or time management. The interface structure forces them to watch the same film twice to accomplish what should be one operation. The mode-switch arrow in the diagram is not a coach habit — it is an interface architecture decision that has a measurable cost in hours every week.
Visual: Hudl Assist product page screenshot (publicly available), showing the Assist value proposition: "Get a full breakdown of every game — stats, clips, and player playlists — delivered overnight." Below it, a callout: "Hudl's solution to the two-pass problem is a $900–$3,300/year add-on service. That is a product strategy choice, not a technical limitation."
Speaker note: This slide is the most important pivot in the deck. It reframes the presentation from "here is a problem Hudl hasn't noticed" to "here is a problem Hudl has explicitly monetized — and this is an argument about whether monetizing it is the right long-term move." This framing respects the business intelligence of the room while making a strategic case. Do not be apologetic about it. The existence of Assist validates the problem completely.
SLIDES 5–9 · ~8 MINUTES
Visual: research method overview — icons for 11 interviews, 47 public reviews, 8 usability participants, 24 sessions, 3 conditions. Timeline spanning October–March. Brief one-line description of each method and what it contributed.
Speaker note: This slide answers the "how do you know?" question before it gets asked. The sample is small but triangulated — qualitative interview findings were validated by public review data and confirmed by usability test behavior. None of the core findings rest on a single data source.
Five finding cards, each with a headline and a representative quote. Findings: (1) Time scarcity is structural, not personal. (2) Workarounds are embedded and invisible. (3) Assist solves one layer, not all. (4) Fatigue degrades tagging quality. (5) Reluctant users are protected by staff structure — and need to stay that way.
Speaker note: Finding 3 is the one that will get a reaction from the Assist revenue stakeholder. Read the room. The finding is not hostile — it is accurate and ultimately supportive of the argument. Assist subscribers still do manual film work. The redesign creates value for them too. The issue is that Assist leaves 70–80% of Hudl's base untouched, and that's the population this redesign serves.
Slide 7: the head-to-head quantitative table — Condition A vs. Condition B across all six success criteria metrics. Headline callout: "58% reduction in total task time. All six success criteria met or exceeded."
Slide 8: three participant quotes selected for cross-persona coverage — one from a teacher-coach (P1: "The fact that the clip is just there — I don't have to go back — this is the thing"), one from a club coach (P2: "I just made in four minutes what I've been making manually in iMovie for three years"), and a note from P7 (tech-reluctant veteran) confirming zero regression: "I didn't even notice the [Tag] thing until you pointed it out."
Speaker note: The P7 quote is critical for the "does this hurt existing users?" objection. Address it before someone asks. The redesign is additive — the veteran who just wants to watch film has an identical experience because the new tagging interface is invisible until triggered. This is not a rebrand. It is a feature layer.
Visual: 6-participant Condition C result. Quote from P1 about Assist tags "not being mine." The thematic clip gap (no participant could build a thematic playlist from Assist output without watching the full film). Closing stat: 6 of 8 participants said the redesign would reduce their need for Assist.
Speaker note: Present this as a signal, not a verdict. "6 of 8" is from a small sample. What it tells us is directional — coaches value agency over their own tagging data, and Assist's fully automated approach trades that agency for speed. The redesign gives them the speed without the trade-off. Whether that is a threat to Assist revenue or a case for folding this capability into a new Assist tier is a business decision, not a design decision. Surface it; don't resolve it in this deck.
SLIDES 10–14 · ~10 MINUTES
Single-statement slide: "Watch → Notice → Tag → Resume. One flow. No mode switches. No second pass." Below it, the four states of the unified interface in thumbnail — idle, overlay open, confirmation, review — showing the complete interaction arc on one slide.
Slide 11: the idle state and the triggered overlay state side by side — full-frame video when playing, bottom-anchored overlay when tagging. Annotated to show the key innovation: the clip, the stat, and the playlist assignment happen in the same pause. Three callouts: pre-filled defaults (last-used event + player), auto-clip bounds (±8s, adjustable), and attribution-driven playlist auto-populate.
Slide 12: the review timeline state — timeline-first layout with jump-to-tag navigation, inline editing, and the single-action Publish All. Annotated to show what the coach does not have to do: no second film pass, no re-scrubbing, no per-playlist publish actions.
Speaker note: These two slides carry the most weight in the deck. Spend time on them. The overlay is not a complicated interaction — it looks simple on screen — but communicate that its simplicity is the result of deliberate choices. The pre-filled defaults alone eliminate roughly 60% of the keystrokes in the current two-pass workflow. Point to the annotation, not just the visual.
The playlist overview showing auto-populated per-player playlists alongside the theme-based playlist builder. Headline callout: "The club coach's recruiting reel problem, solved inline." Before/after: current workflow (iMovie, manual export, 2–3 hours) vs. theme-based playlist builder (4 minutes, P2 session data).
Single slide, clearly labeled "Speculative — not the primary concept." The suggestion badge concept — a non-blocking prompt that detects likely taggable moments and surfaces them for coach confirmation. Three-line summary of what it requires (real-time computer vision, Balltime AI extension to soccer), why it is not the primary concept (coach agency, sport specificity concerns, trust), and why it is included (forward-looking product direction; Hudl has the data asset to build it). Offer to go deeper in Q&A if there is interest.
Speaker note: Don't over-explain this slide. The AI thread is a signal about product direction, not a deliverable at this stage. If the engineering lead wants to dig into feasibility, that's a great signal — note it and schedule a follow-up. If the room is skeptical, acknowledge that the primary concept doesn't require it and move on.
SLIDES 15–18 · ~7 MINUTES
Visual: a simple segment diagram. Hudl's total subscriber base (referenced from public Hudl figures: 200,000+ teams). Assist subscribers (estimated 5–10% based on pricing and market data). Base subscribers who are currently not using advanced features due to workflow friction (estimated large majority of the non-Assist base). The redesign's target: the non-Assist base — the largest segment by volume, currently the lowest by feature utilization.
Speaker note: Hudl's business is predominantly subscription revenue from the base tier. If the majority of base subscribers are not using the features that differentiate Hudl from a simple video hosting service — because the workflow is too friction-heavy — then churn risk concentrates exactly here. This slide frames the redesign as a retention and feature-adoption investment, not just a UX improvement.
The competitive matrix from Step 03 reduced to three rows: Hudl Base (single-pass: none, price: $$), SportsVisio (single-pass: partial, price: $), Nacsport (single-pass: full, price: $$$). Headline: "The platform that solves this at the $300–$800/year price point wins the teacher-coach and club-coach segment." Note that SportsVisio's marketing copy explicitly targets programs "without a video coordinator" — Hudl's same primary user segment — at a lower price point.
Speaker note: This slide is for the product director who thinks about market position. The argument is not "SportsVisio will beat Hudl" — Hudl's network effects and data moat are substantial. The argument is "SportsVisio is picking off the segment most frustrated by exactly the workflow this redesign fixes, and they're doing it at a lower price." That's a known attrition vector that this investment closes.
A clean two-column slide: "What the redesign does to Assist revenue" (left) and "What the redesign does for Hudl overall" (right). Left: partial cannibalization of Assist subscription revenue among coaches who adopted Assist specifically for the two-pass problem (Participant D: "I would not have paid for Assist if the base platform did this"). Right: increased base tier feature utilization, reduced churn from workflow-frustrated coaches, potential for a new "Assist Lite" tier that bundles the redesign with lighter automation at lower price. Full analysis in the Trade-off Analysis deliverable (this deck summarizes it).
Speaker note: Don't avoid this slide. The Assist revenue stakeholder is going to raise this concern whether the slide exists or not — better to surface it on your terms and with analysis attached. The Trade-off Analysis (Deliverable 2 of this step) goes into full detail. This slide signals that the cannibalization question has been taken seriously, not ignored.
The deck does not recommend a business decision — it presents three options and their implications. Option A: ship the redesign as a base-platform improvement, accept partial Assist cannibalization, recover via increased base feature adoption and reduced churn. Option B: position the redesign as a new "Assist Lite" tier at an intermediate price point — capturing some Assist revenue while being substantially cheaper than the full service. Option C: hold the redesign and continue the current Assist strategy, accepting ongoing attrition to SportsVisio and similar platforms among the workflow-frustrated segment.
Speaker note: Ending the business section with three options rather than a recommendation is intentional. The designer does not own the revenue model. The design team's job is to show what is technically possible, what coaches actually want, and what the competitive landscape looks like. The product director and revenue stakeholder own which option is right given the financials the presenter doesn't have access to. Offering three options is not hedging — it's respecting the decision rights of the room.
SLIDES 19–21 · ~5 MINUTES
Deliverables inventory — a table of every artifact produced across Steps 01–06, organized by type: research (personas, interview synthesis, behavioral archetypes, context notes, Assist interview notes), analysis (competitive matrix, workflow comparison, journey map, friction audit, pattern library), design (concept sketches, four lo-fi wireframe sets, 5 before/after pairs, iteration log), and validation (3-condition test protocol, session notes, aggregate results). This slide exists to make visible the scope of evidence behind the design direction — it is not a portfolio slide. It is a credibility signal.
Steps 07–10 summary: Step 07 (hi-fi mockups + developer specs, Figma), Step 08 (second usability study, hi-fi prototype, expanded participant pool), Step 09 (design system integration, accessibility audit), Step 10 (portfolio presentation, open questions, retrospective). Timeline: Steps 07–10 are estimated at 6–8 weeks from stakeholder approval of the direction established in this presentation. Flagged dependencies: access to Hudl's current design system tokens (publicly available via Figma community), a second round of participant recruitment, and one consenting Assist subscriber for the hi-fi comparison test.
For this concept project context: "The ask is feedback. Does the framing of the problem resonate? Does the research feel credible? Does the design direction address the right problem? What would make this more compelling as a portfolio piece or as an actual product proposal?"
In a real product context the ask would be: "We are requesting approval to move to hi-fi mockups and a second, larger validation study. The specific go/no-go decision we need from this room is whether the direction documented in this deck — unified single-pass tagging as a base-platform improvement — is a direction the product team is willing to explore. We are not asking for a shipping commitment. We are asking for a green light to invest design and research resources in validating this at higher fidelity."
This analysis exists because the redesign does not exist in a vacuum — it exists in direct relationship with a revenue-generating product line (Hudl Assist) that partially solves the same problem it addresses. Ignoring that relationship would be analytically dishonest and would undermine the credibility of the design project in any real stakeholder review. This document takes the relationship seriously: mapping what Assist does, what it doesn't do, what the redesign offers that Assist cannot, and what the cannibalization risk looks like with specificity.
This analysis is not a recommendation about Assist's future. It is a structured comparison that a product team could use to make an informed strategic decision. The designer's role is to provide the analysis; the business decision belongs to the stakeholders who own the revenue model.
Hudl Assist is a human-analyst service that receives uploaded game film and returns, within 6–24 hours: complete stat event tagging (using Hudl's standard taxonomy), per-player clip playlists (every moment each athlete appears), and a stats dashboard populated from the analyst's tags. For coaches who subscribe to it at the right tier for their sport, it delivers genuine, measurable value:
Despite solving the two-pass problem for routine game review, every Assist subscriber interviewed in Step 02 still spent 1–3 hours per week in the base Hudl interface doing manual film work. The gaps:
| Gap | Why Assist Doesn't Address It | Who Is Affected | How Redesign Addresses It |
|---|---|---|---|
| Opponent film scouting | Assist only processes the subscriber's own game film. Opponent film is raw and requires full manual two-pass tagging. | All 4 Assist subscribers. Participant A: "I'm still spending two hours a week on opponent prep and Assist has nothing to do with that." | Single-pass unified tagging applies to any film — own game or opponent. Full value proposition extends to all film work, not just own-game review. |
| Thematic / intent-based clip packages | Assist produces player-based playlists (all moments a specific athlete appears). It cannot produce theme-based playlists (all transition moments, all pressing triggers, all defensive errors) without a coach manually watching and selecting. | All 4 Assist subscribers. Participant B: "I'm basically building my own database inside the clips they gave us." 6 of 6 Condition C walkthrough participants failed to build a thematic package from Assist output without watching full film. | Theme-Based Playlist Builder — filter by event type, player, game range, or severity to build thematic packages in minutes. This is the capability Assist explicitly cannot deliver. |
| Coach agency over tagging decisions | Assist uses the analyst's interpretation of events. For coaches with bespoke taxonomies ("pressing trigger," "high block," "switch of play"), the analyst's standard category set doesn't match. For soccer-specific programs, accuracy concerns reduce trust in the output. | Participant C: "Some of my coaches don't trust the Assist tagging for soccer-specific events." Participant A: "The tags aren't mine — I can't tell which of these passes was the one I actually wanted to show Jordan." | Coach tags with their own taxonomy. Custom event types are a base-platform strength. The redesign makes them faster to apply, not less precise. |
| Real-time coaching insight during playback | Assist is asynchronous — the analyst's output arrives hours after upload. There is no in-session tagging capability; the coach reviews output but doesn't interact with the film during analysis. | Coaches who process film immediately after a game (high-recall window), or who want to annotate moments as they think about them during review. | The unified interface supports real-time tagging during playback — the coach's highest-recall window is preserved and made more productive. |
| Cost accessibility | Assist pricing ($900–$3,300/year depending on tier and sport) requires either program budget approval or personal out-of-pocket spending. AD approval is a separate barrier. 3 of 6 Condition C participants would not pay out of pocket under any circumstances. | The majority of Hudl's base subscribers — teacher-coaches at public schools, club coaches at mid-tier programs — do not have discretionary budget for Assist. This is the design's primary addressable population. | Base-platform improvement. No additional cost. Available to every current Hudl subscriber on day one of shipping. |
Not all Assist subscribers would cancel if the redesign shipped. The cannibalization risk is concentrated in a specific sub-segment — coaches who adopted Assist primarily to solve the two-pass problem and who have no other Assist-specific needs. Based on Step 02 interview data:
| Subscriber Type | Cancellation Risk if Redesign Ships | Rationale |
|---|---|---|
| Solo-use, base-workflow adopters (adopted Assist because workflow was too slow) | High — Participant D archetype | Participant D said so explicitly: "I would not have paid for Assist if the base platform did this." Coaches in this group adopted Assist as a workaround for a broken interface; a fixed interface removes the reason to pay. |
| Completeness-focused adopters (adopted Assist for stat completeness at scale) | Moderate — Participant A archetype | These coaches value the analyst's comprehensive coverage (40–60+ events vs. the coach's 9–12). The redesign makes their own tagging faster but does not make it as complete as analyst tagging. They would likely retain Assist for the completeness benefit while using the redesign for opponent film and thematic packages. |
| Multi-team program administrators (use Assist as staff replacement at scale) | Low — Participant C archetype | The DOC managing 6 teams cannot train 6 coaches to tag consistently at the level Assist provides. The operational value of Assist (consistent output across all teams, no training burden) survives the redesign. Some individual coaches on those teams might switch — but program-level Assist remains valuable. |
| Tech-reluctant program heads (Assist used because the coach can't operate the interface) | Very low | A faster interface is still an interface that requires operation. Tech-reluctant coaches who cannot use the current Hudl interface won't use a redesigned one. Their Assist adoption is not a workflow workaround — it is the primary interface. The redesign does not affect this group. |
A strategic option not previously documented in the project: rather than shipping the redesign as a base-platform improvement (accepting cannibalization) or withholding it (accepting attrition to competitors), Hudl could position the unified tagging interface as a new middle tier — "Assist Lite" or "Assist Self-Service" — at a price point between the base subscription and full Assist service (~$200–$400/year).
| Dimension | Base Platform (free) | Assist Lite (proposed ~$200–400/yr) | Assist Full (~$900–3,300/yr) |
|---|---|---|---|
| Unified single-pass tagging | ✗ Current state | ✓ Core feature | ✓ (outsourced) |
| Auto-clip creation | ✗ | ✓ | ✓ |
| Theme-based playlist builder | ✗ | ✓ | ✗ |
| Analyst-complete stat tagging | ✗ | ✗ | ✓ |
| AI suggestion layer (future) | ✗ | ◑ Partial (suggestions only) | ✓ Full automation |
| Opponent film coverage | Manual | ✓ Full (coach-tagged) | ✗ Own game only |
| Custom tag taxonomy | ✓ | ✓ | ✗ Analyst-defined |
An "Assist Lite" positioning would: capture revenue from the Participant D archetype rather than losing it to cancellation; differentiate from full Assist on dimensions the redesign uniquely owns (thematic playlists, opponent film, custom taxonomy); and create an upgrade path from base → Assist Lite → Assist Full that covers coaches across their entire lifecycle of sophistication and budget growth. This framing converts a cannibalization concern into a tiered product strategy conversation.
| Dimension | Assist Wins | Redesign Wins | Draw / Context-Dependent |
|---|---|---|---|
| Own-game post-game review speed | ✓ (30–60 min) | | |
| Stat completeness | ✓ (analyst-level) | | |
| Opponent film workflow | | ✓ (single-pass vs. manual two-pass) | |
| Thematic clip packages | | ✓ (theme builder vs. none) | |
| Coach agency over data | | ✓ (own taxonomy, own tags) | |
| Cost accessibility | | ✓ (free vs. $900+/yr) | |
| Tech-reluctant accessibility | ✓ (no interface required) | | |
| Multi-team program management | ✓ (centralized analyst output) | | |
| Soccer sport specificity / trust | | ✓ (coach interprets; analyst mis-classifies) | |
| Mobile capture workflow | | | Neither addresses this adequately; future opportunity for both |
The Design Rationale Document is the authoritative record of why the design is what it is. It answers the question a product manager, engineering lead, or hiring reviewer might ask after looking at the prototype: "Why did you make this specific choice?" For every significant design decision, this document traces the chain from research evidence to design decision to implementation choice. It is organized by the major design elements of the solution — not by the research steps that informed them — because its primary audience is someone evaluating the design, not the research process.
This document also functions as the project's intellectual backbone for portfolio review. A portfolio reviewer looking at the hi-fi mockups should be able to open this document and find the reasoning behind any specific element — why the overlay is bottom-anchored, why confirmation is a toast rather than a modal, why the note field is hidden by default. That traceability is what distinguishes a design that has been thought through from one that has been styled.
The design's animating principle — "Watch → Notice → Tag → Resume, with no mode switches and no second pass" — was arrived at by identifying the exact moment in the current workflow where value creation stops and structural overhead begins. That moment is the mode switch between the tagging interface and the clip editor. Everything before it has value (the coach is doing meaningful analytical work). Everything after it is duplicated effort (the coach is re-watching film they already watched to generate an output they could have generated during the first pass).
The design principle is not "make tagging faster." It is "eliminate the boundary between tagging and clipping." These produce different designs. A faster tagging interface still requires a clip-cutting pass. An interface that treats tagging and clipping as one operation doesn't. The distinction drove every subsequent design decision: if a proposed feature created any dependency between a tagging action and a separate clip-creation action, it was rejected or restructured until it didn't.
Why an overlay at all, rather than a persistent panel: A persistent panel (Sketch A) keeps the coach's tagging options always visible — but it also means they are always present, even when the coach is not tagging. The Interface Complexity finding from the tech-reluctant veteran persona research was unambiguous: every element visible during film watching that is not directly related to watching film is cognitive overhead. A coach who just wants to watch the team film with their staff does not want a tagging panel on screen. The overlay solves this by being invisible until triggered — the interface is a video player until the coach decides it should be something more.
Why bottom-anchored, rather than centered or side-anchored: The tagged moment must remain visible during entry. If a coach pauses the film at a specific frame — a shot, a tackle, a tactical positioning moment — and then a modal or center overlay covers that frame, the coach must hold the visual memory of the paused frame in their head while entering tag properties. This is an unnecessary cognitive tax. Bottom-anchoring preserves the full video frame above the overlay throughout the entire tag entry. The coach can look up at the paused frame at any point during entry to verify they are tagging the right thing.
Why the overlay collapses after confirmation rather than staying open: A persistent open overlay would interrupt the resume-watching flow by requiring the coach to manually close it before resuming. Every millisecond between confirming a tag and resuming playback is time during which the coach could miss the next significant moment. The collapse-on-confirm behavior keeps the watching state as the dominant state — tagging is an interruption of watching, not the other way around.
The decision to pre-fill the overlay with last-used event type and player attribution is the single highest-impact interaction design choice in the project. In controlled testing (Condition B), participants accepted pre-filled defaults without change for a median of 73% of their tag events. This means that for nearly three-quarters of all tags, the total interaction cost is two keypresses: trigger (T), then confirm (Enter). No field selection. No dropdown navigation. No typing.
This maps directly to the most common tagging pattern in soccer coaching: a coach following a player through a match tends to tag sequential events of the same type by the same player — a striker's shots, a defender's clearances, a midfielder's pass completions in a sequence. Pre-filling from last-used aligns with the natural mental rhythm of focused player tracking.
Why not blank: A blank overlay places the decision burden on the coach for every single tag event. At 40–80 tag events per game, that is 40–80 full form completions. The Fatigue-Degradation finding from Step 02 (Finding 04) documents that coaches skip or simplify entries when fatigued. Pre-fill reduces the decision burden to override rather than selection — the coach only has to think when something changes, not when everything stays the same.
Why not AI-predicted pre-fill as the default: AI-predicted pre-fill (the speculative thread) has an accuracy ceiling — when it is wrong, the coach must actively correct it, which may be more disorienting than a blank field. Last-used pre-fill has no accuracy problem: it is always "right" in the sense that it reflects the coach's own most recent decision. It will sometimes be wrong for the next event, but it is never confidently wrong in a way that erodes trust.
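To make the pre-fill mechanics concrete, here is a minimal sketch of how last-used defaults could resolve. The shapes and names (TagEntry, SessionDefaults, prefillOverlay) are hypothetical and do not represent Hudl's actual data model.

```typescript
// Hypothetical shapes, illustrative only.
interface TagEntry {
  eventType: string;       // e.g. "shot", "clearance"
  playerId: string | null;
}

interface SessionDefaults {
  lastUsed: TagEntry | null; // updated after every confirmed tag
}

// Resolve the overlay's initial values: last-used wins; a blank overlay
// only appears for the first tag of a session.
function prefillOverlay(defaults: SessionDefaults): TagEntry {
  return defaults.lastUsed ?? { eventType: "", playerId: null };
}

// After confirm, the just-saved entry becomes the next pre-fill, so the
// coach only intervenes when something changes.
function onConfirm(defaults: SessionDefaults, saved: TagEntry): void {
  defaults.lastUsed = { ...saved };
}
```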
Why auto-clip at all: The most direct route to eliminating the second pass is to make clip creation a zero-marginal-effort consequence of tagging. If the coach must set clip bounds for every tag, the savings in the overlay interaction are partially offset by the clip-setting work. Auto-clip with a sensible default turns clip creation into a passive benefit rather than an active cost — the clip exists unless the coach opts out, rather than existing only if the coach opts in.
Why ±8 seconds: ±8 seconds (a 16-second clip) was chosen as the default after reviewing what typical soccer event clips should contain. A shot clip should include the touch or move that created the shooting opportunity (pre-event context, ~4–6s) and the ball trajectory plus initial goalkeeper response (post-event, ~4–6s). ±8 seconds captures this envelope in virtually all cases. In testing, 7 of 8 participants accepted the default for the majority of their clips, and the clips produced were described as "right" or "about right" by participants without prompting.
Why adjustable, and why tiered precision after testing: Not all events have the same contextual requirements. A transition sequence that a coach wants to show as a coaching illustration may require 20–30 seconds of context. A set-piece defensive error may need only 5–6 seconds. The ±8s default handles the median case; the adjustment controls exist for the tails. The testing finding that 2-second nudge increments were too coarse for P6's frame-precision need led to the 1-second default with [Fine] and [Scrub] options — a tiered precision model that keeps the default simple while making precise adjustment possible for coaches who need it.
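A small sketch of the clip-bound logic described above: the ±8s default clamped to the film's length, plus a 1-second nudge with an optional finer step. Function names and the fine-step value are assumptions for illustration, not a tested spec.

```typescript
interface ClipBounds { inPoint: number; outPoint: number } // seconds

// Default clip window: tagTime ± 8s, clamped to the film's duration.
function autoClipBounds(tagTime: number, filmDuration: number, halfWindow = 8): ClipBounds {
  return {
    inPoint: Math.max(0, tagTime - halfWindow),
    outPoint: Math.min(filmDuration, tagTime + halfWindow),
  };
}

// Tiered precision: default nudge is 1s; a [Fine] mode could use a smaller
// step (0.25s here is an assumption).
function nudge(bounds: ClipBounds, edge: "in" | "out", direction: -1 | 1, fine = false): ClipBounds {
  const step = (fine ? 0.25 : 1) * direction;
  return edge === "in"
    ? { ...bounds, inPoint: Math.min(bounds.outPoint, Math.max(0, bounds.inPoint + step)) }
    : { ...bounds, outPoint: Math.max(bounds.inPoint, bounds.outPoint + step) };
}
```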
The decision to automatically assign a clip to the attributed player's playlist when the player is tagged — making the coach uncheck rather than check — was informed by two distinct findings:
Finding 1 (frequency): In interview research, coaches universally said that if a moment is attributed to a player, it belongs in that player's playlist. There were no exceptions in the interview sample to the rule "if I tagged Jordan for a shot, I want Jordan to see it." The rule held across all tagging contexts — positive feedback clips, corrective feedback clips, and neutral analytical tags.
Finding 2 (the ILG-05 exception): P4's discovery that corrective/negative tags defaulted to the Team playlist revealed a genuine edge case — not all clips are appropriate for the full team to see, even if they belong to an individual player's playlist. The revision (configurable per-event-type team default) preserves the opt-out model for player playlists while adding granularity to the team default. The opt-out model was not reverted; it was refined.
Why opt-out rather than opt-in: Opt-in requires a positive action for every tag (check the playlist box), which multiplies friction across a full session. The post-publish correction path (going back to remove a clip from a playlist) is less disruptive than the per-tag assignment path for the majority case. Opt-out means the coach makes an active choice to exclude, not an active choice to include — the burden is placed on the exception, not the rule.
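As a rough sketch of the opt-out rule and the ILG-05 refinement, assuming hypothetical names (EventTypeConfig, defaultPlaylists) rather than Hudl's real playlist model:

```typescript
// Per-event-type configuration for the team playlist default
// (e.g. false for corrective event types, per the ILG-05 revision).
interface EventTypeConfig { teamPlaylistDefault: boolean }

function defaultPlaylists(
  playerId: string | null,
  eventType: string,
  config: Record<string, EventTypeConfig>,
): string[] {
  const playlists: string[] = [];
  if (playerId) playlists.push(`player:${playerId}`);          // opt-out: pre-checked
  if (config[eventType]?.teamPlaylistDefault ?? true) {
    playlists.push("team");                                     // configurable per event type
  }
  return playlists; // the coach unchecks to exclude, rather than checks to include
}
```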
The review state (State 4 of the unified interface) switches the layout from the video-dominant playing state to a timeline-dominant review state. This is a deliberate layout shift that serves the review task specifically:
The review task is fundamentally different from the tagging task. During tagging, the coach is watching new film — the video is the primary information source. During review, the coach is inspecting previously created tags — the event list is the primary information source. The layout should reflect the task, not be static across all tasks. The timeline-first layout makes the event inventory the visual focus, with the video as a secondary resource for verifying specific moments.
Jump-to-tag navigation replaces scrubbing for review purposes. The single most time-consuming element of the current Hudl clip review workflow is locating moments in the film by scrubbing. Jump-to-tag (click a marker → video jumps to that timestamp) eliminates this entirely for reviewing tagged events. In testing, all 6 participants who reached the review state used jump-to-tag navigation rather than scrubbing within 30 seconds of entering the state — without instruction. P3: "I keep forgetting I don't have to scrub anymore. It's a good thing to forget." The behavior was adopted immediately and without friction.
Design rationale is not only about what was chosen — it is also about what was explicitly excluded and why. These exclusions are as important as the inclusions for understanding the design's scope and philosophy.
Six design decisions changed as a result of Step 05 usability testing. Each is documented in full in the Iteration Log (Step 05, Deliverable 3) and represented in the Before/After Wireframe Pairs (Step 05, Deliverable 4). The summary below captures only the rationale thread — why each change was the right response to the finding, rather than an alternative response:
| Change | Finding That Prompted It | Why This Response, Not Another |
|---|---|---|
| Auto-resume → opt-in setting | 2 of 6 participants found it presumptuous; 4 found it valuable | Split opinion means neither default is universally right. Settings toggle preserves both preferences without removing capability. Alternative (remove entirely) would regress P1's experience. |
| Note field → collapsed by default | 4 of 6 felt obligated by the empty field without using it | Progressive disclosure: show the minimum required. The note field isn't removed — it's moved behind a trigger. Alternative (remove entirely) would lose capability for the 2 coaches who did use notes. |
| Clip nudge → 1s default + tiered precision | P6 overshot twice with 2s increments; 90-second adjustment | Finer default preserves speed for most cases while reducing the precision problem. Alternative (frame-by-frame default) adds complexity to every use, not just the edge cases that need it. |
| Trigger label → [Tag ⌨] + tooltip | 2 participants didn't read [T] as a button; 1 didn't see it | Label legibility fix plus contextual onboarding. Alternative (onboarding wizard) was rejected in the Pattern Library as inappropriate for late-night fatigued users. |
| Team default → configurable per event type | P4's corrective tag appeared in Team playlist unexpectedly | Event-type-level configuration preserves the opt-out model for the majority of tags while adding appropriate granularity for corrective content. Alternative (opt-in team playlist) would add friction to every tag. |
| Toast → thumbnail preview added | P1, P3 expressed mild uncertainty about auto-clip contents | Low-cost visual confirmation increases trust in the auto-clip system without adding an explicit review step. Alternative (always open review timeline after confirmation) adds a mandatory step that slows the continuous tagging flow. |
The following questions remain unresolved at the close of the lo-fi phase. They are documented here so the hi-fi design work in Step 07 has explicit hypotheses to test rather than making implicit assumptions about their answers:
High-fidelity mockups are produced in Figma, working within and extending Hudl's existing dark-UI visual language — this is a feature redesign, not a rebrand. Desktop and mobile/tablet breakpoints are designed in parallel from the outset, since the target user (the teacher-coach reviewing film late at night) moves between a laptop at home and an iPad or phone on the sideline. Each screen is annotated with full interaction specifications: trigger states, hover states, keyboard shortcuts, loading and skeleton states, empty states, and every edge case identified across research and lo-fi testing.
The Assist service model shaped a key constraint in the handoff documentation: since Assist already delivers stats and clips as simultaneous properties of a single tagged moment, Hudl's backend demonstrably supports associating multiple metadata types with one event object. Every spec in this step is written to map onto that existing data model — the goal is an interface improvement that engineering can build without a back-end architecture change. That constraint is stated explicitly in each annotation where it applies, because a designer who understands the data model is more useful in sprint review than one who doesn't.
All desktop mockups are produced at 1440×900px in Figma — the most common laptop resolution among the teacher-coach and club-coach personas, based on the device context established in Step 02 research. A secondary frame at 1280×800px tests the minimum viable viewport. Hudl's existing dark-UI tokens (background #121212, surface #1E1E1E, primary blue #3B82F6) are used as a baseline and extended where the unified tagging overlay requires new patterns not present in Hudl's public component library. All extensions are documented in the Component Spec Sheet (Deliverable 4 of 5).
The desktop frame set covers eight distinct interface states. States are designed as a connected flow in Figma (not isolated screens) so that a reviewer can follow the complete single-pass tagging journey from pressing play on game film through publishing a clip package to players — without mentally stitching disconnected artboards together. Each frame is numbered, named, and linked with directional arrows indicating which interaction triggers the transition to the next state.
The idle state is the baseline: a coach has opened a game film and pressed play. The interface is a video player. The Unified Tagging UI is completely invisible — no panel, no sidebar, no visible trigger affordance that could add visual noise during uninterrupted watching. A single, low-contrast label in the bottom-right of the playback controls reads [Tag ⌨ T] — discoverable on first hover, invisible at a distance. This design decision was tested in Step 05 and carried forward: the trigger must exist but not dominate.
Hi-fi additions over the lo-fi version: the video frame now renders at full Hudl dark-chrome quality including the timeline scrubber, chapter markers from any previously existing tags, and the playlist indicator in the top-right corner. The playback speed control (0.5×, 0.75×, 1×) and keyboard shortcut legend (accessible via ?) are also present in this frame — both were requested during Step 05 testing and not present in the lo-fi prototype.
The coach presses T (or clicks the [Tag ⌨] label). Film pauses. The bottom-anchored overlay animates up from below the video frame over 180ms using a cubic-bezier ease-out curve — fast enough to feel responsive, slow enough to orient the coach to the new UI layer. The video frame above the overlay remains fully visible. The paused frame is the visual anchor; the coach can verify they have tagged the right moment at a glance.
In the hi-fi version, the trigger animation includes a subtle frame flash (2-frame white outline, 8% opacity on the video border) to confirm the pause has registered — a micro-interaction absent in the lo-fi wireframe. This addresses P8's hesitation during testing: "I wasn't sure if it paused or if I missed it." The animation is intentionally minimal; it confirms state without drawing attention away from the paused video frame.
The overlay at full expansion shows five elements in a single horizontal row: (1) event type selector (dropdown, pre-selected to the coach's last-used type per session), (2) player attribution (type-to-filter from the active roster, auto-populates team playlist assignment), (3) clip boundary controls (−/+ buttons with current in/out timestamps displayed), (4) playlist assignment (checkboxes for auto-assigned playlists based on attribution, plus a manual "Add to playlist" option), (5) confirm and cancel actions. A collapsed note field sits below the row, triggered by a + Note affordance.
Hi-fi visual decisions: the event type selector uses Hudl's existing tag color system (yellow for possession events, red for defensive, blue for set pieces) with a color swatch visible in the dropdown trigger — so the coach receives a visual signal about tag type without reading the label. Player attribution uses a pill component with the player's jersey number and surname, matching Hudl's existing player badge pattern. All pre-filled defaults are shown in a visually distinct state (slightly dimmed, with a small ↩ reset affordance) so the coach can immediately distinguish "I set this" from "the system set this."
When the coach interacts with the clip boundary controls, the overlay enters a secondary state: the main entry row collapses to minimum height and a compact clip timeline expands above it, showing the auto-generated ±8s clip window on the full game timeline. The in and out points are shown as draggable handles. Nudge buttons (◀ 1s / 1s ▶) appear flanking each handle for precision adjustment. A small preview thumbnail updates live as the coach adjusts the in/out — they can see the first and last frame of the clip window without scrubbing the main video.
This state was not in the lo-fi prototype. It was added in response to the ILG-03 iteration (Step 05): P6 overshot twice using 2s nudge increments. The dedicated boundary adjustment state decouples the precision task from the entry task, giving the coach a focused tool for the moments when the auto-clip boundary is wrong — without adding that tool to the main entry flow where it is unnecessary 80% of the time.
On confirm, the overlay collapses and a toast notification appears at top-right. The toast includes: a thumbnail of the first frame of the saved clip (48×27px, matching the clip's in-point), the event type label and player attribution in two short lines, a "View" link that opens the review timeline, and a 4-second auto-dismiss countdown. A quick-undo affordance (Undo · 3s) is present for the first 3 seconds of display. If the coach presses Undo, the tag and clip are removed, the video returns to the paused frame, and the overlay re-opens with all fields populated from the cancelled entry — no re-entry required.
Video auto-resumes 800ms after the overlay collapses (if Auto-Resume is enabled in Settings → Playback). The 800ms gap is intentional: it allows the toast to appear and register visually before motion resumes, so the coach sees the confirmation rather than missing it because playback has already started. This timing was determined by informal testing with 3 of the original Step 05 participants during the hi-fi design phase.
After the coach finishes watching, the Review Timeline provides a chronological list of all tags created in the session. Each row shows: timestamp, event type (colored badge), player attribution, clip duration, playlist assignments, and a note indicator if a note was added. The coach can play any clip inline (expanding the row to show the clip player), edit any tag field, adjust clip boundaries, or delete a tag — all without leaving the timeline view. A "session summary" header shows total tags, total clip duration, and clips-per-player breakdown.
The review timeline is the first moment in the workflow where the coach sees the full output of their film session as a coherent whole. The design treats this as a quality-check surface, not just a list — the player breakdown bar chart in the header makes over-tagging or under-tagging one player immediately visible. A coach who tagged 14 moments for one player and 1 for another will see that imbalance in the header before publishing anything.
The Playlist Builder screen shows all auto-generated playlists from the session — one per player (from attribution-driven auto-assign), plus a Team playlist containing all tagged moments. The coach can rename any playlist, reorder clips within a playlist by drag, remove individual clips, add clips from other playlists, and create a new manual playlist. A "Save as smart playlist" toggle creates a rule-based playlist that will auto-update as future games are tagged with the same criteria (e.g., "All defensive moments, Jordan (#7)"). Smart playlist rules are shown as editable chips below the playlist name.
The Playlist Builder is not a new concept in Hudl — it mirrors the existing playlist management interface. The hi-fi design retains Hudl's interaction patterns for playlist management and adds only the auto-population from attribution-driven tagging and the smart playlist rule builder. This minimizes the learning curve: a coach who already knows how to use Hudl's playlists will recognize this screen immediately and understand the additions without onboarding.
The Publish flow is accessed from the Review Timeline or Playlist Builder via a "Publish" primary action. The coach selects which playlists to share, selects recipients from the team roster (individual players receive only their own playlist by default; the coach can override this), adds an optional message, and confirms. The confirmation screen shows a per-player delivery summary: "Jordan (#7) will receive 3 clips · Sarah (#11) will receive 5 clips · Full team will receive 12 clips." A "Preview as player" link lets the coach see exactly what the player will see before sending.
This flow maps directly onto Hudl's existing sharing infrastructure — no new delivery mechanism is required. The hi-fi design adds only the per-player delivery summary and the "Preview as player" view, both of which were identified as missing from the current workflow during Step 02 research (P3: "I have no idea what my players actually see when I share something").
Three visual language decisions were made in the hi-fi pass that were not present in the lo-fi wireframes and warrant explicit documentation:
Two breakpoints are designed: tablet landscape (1024×768px, iPad target — the most common sideline device among Step 02 participants) and mobile portrait (390×844px, iPhone 14 target — used for quick post-game sharing and on-the-go clip review, not primary tagging). A tablet portrait frame (768×1024px) is also produced but treated as a secondary variant of the landscape layout rather than a separate design system — the layout reflows rather than redesigns.
The research distinction between use contexts matters here. The teacher-coach doing a full post-game film review session uses a laptop or desktop. The club coach on the sideline uses a tablet or phone to tag a few moments in real time or immediately after a game. Mobile is not a stripped-down version of the desktop workflow — it is a different use case with a different time budget, a different physical environment, and a different tolerance for precision. The mobile design reflects this: it prioritizes speed over completeness, with fewer visible options and larger touch targets throughout.
The tablet landscape layout is the closest to the desktop experience. The overlay appears at the bottom of the video player in the same position as desktop — the additional screen real estate compared to mobile allows for a near-identical UI. The primary adaptation is touch target sizing: all interactive elements in the overlay are increased to a minimum 44×44px (Apple HIG minimum) vs. the 32px desktop minimum. The event type selector becomes a horizontal scrollable chip row rather than a dropdown — faster to tap than opening a dropdown on touch.
The clip boundary adjustment state uses a full-width scrubber with large, easy-to-grab handles rather than the desktop's nudge-button approach. The ±8s auto-clip window is displayed as a highlighted segment on the scrubber, with handles at each end. Dragging a handle with a finger is more natural than tapping a +1s nudge button repeatedly, and the larger touch target area makes precision adjustment feasible without a stylus. The nudge buttons remain present as a secondary option for cases where drag precision is insufficient.
In portrait orientation, the video player occupies the top 56% of the screen (maintaining a 16:9 aspect ratio for the video frame) and the overlay occupies the lower 44% as a persistent panel rather than an animated overlay. This is the one layout exception to the "overlay is invisible until triggered" principle: on portrait tablet, the panel being always-visible costs less screen real estate than an animated overlay that the coach must dismiss and re-trigger repeatedly on a narrower viewport.
The persistent panel in portrait mode shows the same five elements as the desktop overlay — event type, player attribution, clip boundaries, playlist assignment, confirm/cancel — but stacked in two rows rather than one horizontal strip. This reflowed layout is tested against the same task completion criteria as the landscape layout to ensure it does not introduce new points of confusion. The note field remains collapsed by default in portrait mode as in all other layouts.
Mobile tagging is designed for the club coach on the sideline with 15 minutes after a game ends. The full five-element overlay is replaced by a two-step bottom sheet: Step 1 is a single large tap target — "Tag This Moment" — that triggers a pause and auto-generates the clip. Step 2 is a minimal entry sheet (slides up from bottom) with only three fields: event type (large color-coded chips, not a dropdown), player attribution (last-used player shown as default with a Change option), and a confirm button. All other fields (clip boundaries, playlist assignment, notes) are accessible via a "More options" disclosure but are hidden by default.
This is a deliberate information architecture decision: on mobile, the minimum viable tag is event type + player. Everything else can be edited in the Review Timeline later — which is more comfortable at a desk than on a sideline. The "More options" disclosure is not hidden because these features don't exist on mobile; it's hidden because the mobile context does not support precision work, and the design should not encourage trying to do precision work in a suboptimal context. A coach who wants to adjust clip boundaries in detail should wait until they're at a laptop.
The mobile Review Timeline is a card-based list rather than a table. Each card shows the timestamp, event type badge, player name, and a thumbnail of the clip's first frame. Tapping a card expands it inline to show the clip player and edit controls. Swipe-left on a card reveals quick actions: Edit, Move to playlist, Delete. A persistent "Publish" button is fixed at the bottom of the screen, accessible at any point in the review without scrolling to a footer.
The mobile Review Timeline is the primary surface where a coach coming back from a sideline tagging session at home on their phone would clean up their tags before sending them to players. The design of this view assumes that most editing work (clip boundary adjustment, note-writing) will happen here rather than during the original tagging session — which is why those fields are accessible but not foregrounded in the mobile tagging flow.
Three elements are identical across all breakpoints and are documented once in the Component Spec Sheet rather than per-breakpoint:
Interaction specifications are documented in two places: directly on the Figma frames as annotation layers (using a standardized annotation component that engineering can toggle on/off per frame) and in this document as a narrative reference. The Figma annotations are the engineering-facing source of truth for implementation; this document provides the reasoning behind the specifications for design review and portfolio context.
Annotations follow a consistent format: (1) Element name and location, (2) the specific behavior specified, (3) the trigger that initiates it, (4) any timing or easing values, (5) a back-end/data constraint note where the behavior has implications for the API or data model. Engineering teams in sprint review can filter to annotation layer only and get a complete interaction specification for any frame without reading through narrative documentation.
| Element | Animation | Duration | Easing | Trigger |
|---|---|---|---|---|
| Overlay entrance | translateY(100%) → translateY(0) + opacity 0→1 | 180ms | cubic-bezier(0.22, 1, 0.36, 1) | T key press / [Tag] tap |
| Overlay exit (confirm) | translateY(0) → translateY(100%) + opacity 1→0 | 140ms | cubic-bezier(0.55, 0, 1, 0.45) | Confirm button / Enter key |
| Overlay exit (cancel) | translateY(0) → translateY(100%) + opacity 1→0 | 120ms | ease-in | Cancel / Esc key |
| Video frame flash | border-color opacity 0→0.08→0 | 200ms total (100ms in, 100ms out) | linear | Simultaneous with overlay entrance |
| Clip boundary expand | height auto→144px + opacity 0→1 | 220ms | ease-out | Tap/click clip boundary controls |
| Toast entrance | translateX(120%) → translateX(0) + opacity 0→1 | 240ms | cubic-bezier(0.22, 1, 0.36, 1) | Confirm action registered |
| Toast exit (auto) | opacity 1→0 | 300ms | ease-out | 4s after entrance |
| Toast exit (manual) | translateX(0) → translateX(120%) + opacity 1→0 | 180ms | ease-in | Dismiss × tap |
All animations are suppressed when the user has prefers-reduced-motion enabled (see Accessibility Annotations, section 07 below). In reduced-motion mode, state changes are instantaneous with no transitional animation — the functional state change occurs but without any movement or opacity animation.
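The table above and the reduced-motion rule could be centralized in a single motion spec. The sketch below is illustrative only; the durations and easing curves are copied from the table, while the object shape and function names are assumptions.

```typescript
// Motion spec for the tagging overlay and toast, mirroring the table above.
const MOTION = {
  overlayEnter:       { duration: 180, easing: "cubic-bezier(0.22, 1, 0.36, 1)" },
  overlayExitConfirm: { duration: 140, easing: "cubic-bezier(0.55, 0, 1, 0.45)" },
  overlayExitCancel:  { duration: 120, easing: "ease-in" },
  toastEnter:         { duration: 240, easing: "cubic-bezier(0.22, 1, 0.36, 1)" },
  toastExitAuto:      { duration: 300, easing: "ease-out" },
} as const;

// Reduced-motion handling: the state change still happens, but instantly.
function durationFor(key: keyof typeof MOTION): number {
  const reduced = window.matchMedia("(prefers-reduced-motion: reduce)").matches;
  return reduced ? 0 : MOTION[key].duration;
}
```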
| Key | Context | Action | Note |
|---|---|---|---|
| T | Film playback (idle) | Trigger tag overlay + pause video | Configurable in Settings → Shortcuts |
| Enter | Overlay open | Confirm tag | Only active when required fields are populated |
| Esc | Overlay open | Cancel and dismiss overlay, resume video | Does not trigger undo — no tag was created |
| Space | Film playback (idle) | Play / pause (Hudl native, preserved) | When the overlay is open and a text input is focused, Space types into the field — it does not also fire play/pause |
| ← → | Film playback | Scrub ±5s (Hudl native, preserved) | Disabled while overlay is open to prevent accidental scrub during entry |
| [ ] | Overlay open, clip boundary active | Nudge in-point ([) / out-point (]) by 1s | Requires focus on clip boundary control — does not fire globally |
| Z (⌘Z / Ctrl+Z) | Film playback (idle) | Undo last confirmed tag | Available for 30s after confirmation; performs the same undo as the toast affordance but remains available after the toast dismisses |
| ? | Any | Toggle keyboard shortcut legend overlay | Non-modal; dismisses on second ? or Esc |
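A minimal sketch of how the context gating in this map might be dispatched; the UIContext type and action names are assumptions, not an implementation plan.

```typescript
type UIContext = "idle" | "overlayOpen" | "boundaryActive";

// Illustrative dispatcher for the shortcut map above. A real implementation
// would live in the player's existing key-handling layer.
function handleKey(e: KeyboardEvent, ctx: UIContext, requiredFieldsFilled: boolean): string | null {
  const inTextInput = e.target instanceof HTMLInputElement;

  if (ctx === "idle") {
    if (e.key === "t" || e.key === "T") return "openOverlayAndPause";
    if (e.key === " " && !inTextInput) return "togglePlayback";       // Hudl native, preserved
    if (e.key === "ArrowLeft" || e.key === "ArrowRight") return "scrub5s";
  }

  if (ctx === "overlayOpen" || ctx === "boundaryActive") {
    if (e.key === "Enter" && requiredFieldsFilled) return "confirmTag";
    if (e.key === "Escape") return "cancelOverlay";                    // no undo: nothing was created
    if (e.key === "ArrowLeft" || e.key === "ArrowRight") return null;  // scrubbing disabled during entry
  }

  if (ctx === "boundaryActive" && (e.key === "[" || e.key === "]")) {
    return e.key === "[" ? "nudgeInPoint" : "nudgeOutPoint";           // 1s per press
  }
  return null;
}
```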
Four loading states are specified for the tagging overlay. Each has a distinct trigger, a maximum duration before a failure state appears, and a specific recovery behavior:
All interactive elements in the overlay and review timeline have three states beyond their default: hover, focus (keyboard), and active (pressed). These are documented in the Figma frames as variant states within each component. Key decisions that deviate from Hudl's standard interactive state patterns:
| Event | Time (ms after confirm) | Behavior |
|---|---|---|
| Overlay exit begins | 0 | Overlay starts exit animation (140ms) |
| Toast entrance begins | 60 | Toast starts entrance animation; overlaps tail end of overlay exit |
| Toast fully visible | 300 | Toast entrance complete; thumbnail rendered |
| Auto-resume fires | 800 | Video playback resumes (if Auto-Resume enabled) |
| Undo window closes | 3000 | Undo affordance disappears from toast |
| Toast exit begins | 4000 | Toast starts exit animation (300ms) |
| Toast fully gone | 4300 | Toast removed from DOM |
All timers reset if the coach interacts with the toast (hover, click, keyboard focus). A hovered or focused toast does not auto-dismiss — this prevents the coach from losing the undo option if they are reading the toast details or moving their cursor toward the undo affordance when the 3s window would otherwise close.
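A sketch of the timer behavior described above, with assumed callback names; it mirrors the documented 800ms / 3s / 4s values and the reset-on-interaction rule.

```typescript
interface ToastCallbacks {
  onAutoResume(): void;       // fires at 800ms if Auto-Resume is enabled
  onUndoWindowClose(): void;  // fires at 3000ms
  onDismiss(): void;          // fires at 4000ms
}

function startToastTimers(cb: ToastCallbacks, autoResumeEnabled: boolean) {
  let timers: ReturnType<typeof setTimeout>[] = [];

  const arm = () => {
    if (autoResumeEnabled) timers.push(setTimeout(cb.onAutoResume, 800));
    timers.push(setTimeout(cb.onUndoWindowClose, 3000));
    timers.push(setTimeout(cb.onDismiss, 4000));
  };

  // Hover, click, or keyboard focus cancels the countdowns; they re-arm
  // from zero when the interaction ends.
  const pause = () => { timers.forEach(clearTimeout); timers = []; };
  const resume = () => { pause(); arm(); };

  arm();
  return { pause, resume };
}
```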
The Settings → Playback panel is annotated in the hi-fi frames because it contains two controls that directly affect tagging behavior: Auto-Resume toggle (default: on) and Keyboard Shortcuts toggle (default: on). The panel is accessed from the ⚙ icon in the playback controls. It slides in as a right-side drawer over the video player (does not navigate away from the film).
Additional settings accessible from this panel: playback speed default (0.75×, 1×, 1.25×), default clip window size (±4s, ±8s, ±12s — replaces the hardcoded ±8s default for coaches who consistently find it too short or too long), and the shortcut configuration panel (see Keyboard Shortcut Map above). These settings persist per-user account, not per-device or per-session.
Accessibility annotations are documented directly on the Figma frames using Figma's built-in accessibility annotation kit, covering the annotations engineering will need for implementation.
The Component Spec Sheet documents only the components that are new or modified in this redesign. Components used from Hudl's existing design system without modification are listed in section 07 below with a reference to the Hudl Figma community library source — they are not re-specified here because duplicating existing documentation creates maintenance overhead and risks drift between the spec and the live system.
Each new or modified component is documented with: (1) component name and Figma layer reference, (2) all variants and their trigger conditions, (3) token-level styling (using Hudl's public design token names where they exist, with proposed new token names for extensions), (4) anatomy diagram reference (Figma frame number), (5) accessibility requirements, (6) data props required from the API.
Type: New component (no equivalent in current Hudl system). Figma frame: D-02, D-03 (desktop); T-01, T-02 (tablet); M-02 (mobile bottom sheet variant).
Layout: Fixed position, bottom: 0, left: 0, right: 0 within the video player container (not the full viewport). Height: 72px collapsed / 144px expanded (clip boundary active) / auto for mobile bottom sheet. Z-index: above video player controls, below any system overlays.
Tokens: Background: --color-surface-elevated-2 (#2A2A2A, existing Hudl token); Border-top: 1px solid --color-border-subtle (existing); Backdrop-filter: blur(8px) — new, no existing Hudl token; proposed: --effect-blur-overlay: blur(8px).
API props required: active_session_id (string), roster (array of player objects: id, name, jersey_number), last_used_event_type (string, from user preferences API), user_playlist_defaults (array of playlist objects with auto-assign rules). All must be available in the game film player context — none require a new API endpoint if the roster and session data are already available to the film player view (confirmed against Hudl's public developer documentation).
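The same prop list can be expressed as a minimal TypeScript contract. Field names follow the spec above; the exact types and the shape of the auto-assign rules are assumptions (reduced here to a direct player → playlist mapping for illustration).

```typescript
// Data-contract sketch for the overlay's required props.
interface PlayerRef {
  id: string;
  name: string;
  jersey_number: number;
}

interface PlaylistDefault {
  player_id: string;   // assumption: rule reduced to a player → playlist mapping
  playlist_id: string;
}

interface TagOverlayProps {
  active_session_id: string;
  roster: PlayerRef[];
  last_used_event_type: string;              // from the user preferences API
  user_playlist_defaults: PlaylistDefault[];
}
```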
States: hidden (default, no DOM presence during idle), entering (animation in progress), active-entry (fields populated, awaiting confirm), active-boundary (clip boundary adjustment mode), confirming (write pending), error (write failed).
Type: Modified from Hudl's existing tag-type badge component. The existing badge is display-only; this version adds an interactive selected state and a chip-row layout pattern. Figma frame: D-03 (desktop chip row); T-01 (tablet scrollable chip row).
Variants: unselected (default), hovered (color at 15% opacity fill), selected (color at 100% fill + white label + checkmark icon), focused (selected styles + 2px focus ring offset). Disabled state (gray, non-interactive) is used when event type options have not loaded from the API.
Sizing: Height: 32px (desktop), 44px (tablet/mobile touch targets). Horizontal padding: 12px. Border-radius: 4px (matching Hudl's standard chip radius). Label: 13px, weight 500, --font-mono (Hudl token).
Color system: Chip color is driven by the event_type.color property from the API (existing Hudl tag color system). No new colors are introduced. The component accepts a color prop (hex string) and applies it via CSS custom property — the component does not hardcode any tag-type colors.
API props required: event_types (array: id, label, color, keyboard_shortcut). This is the same data structure used by Hudl's existing tag panel — the chip component consumes the same endpoint, just rendered differently.
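A sketch of the color pass-through described above: the chip receives the API's event_type.color hex string and exposes it to its stylesheet as a CSS custom property, so no tag color is hardcoded in the component. Field names follow the spec; everything else is hypothetical.

```typescript
// Sketch: API-driven chip color applied via a CSS custom property.
interface EventType {
  id: string;
  label: string;
  color: string;               // hex string from Hudl's existing tag color system
  keyboard_shortcut?: string;
}

function renderEventTypeChip(eventType: EventType, selected: boolean): HTMLButtonElement {
  const chip = document.createElement("button");
  chip.className = "event-type-chip";
  chip.textContent = eventType.label;
  chip.style.setProperty("--chip-color", eventType.color); // fill/opacity handled in CSS
  chip.setAttribute("aria-pressed", String(selected));      // selected state for assistive tech
  return chip;
}
```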
Type: New component. Figma frame: D-03 (desktop overlay); T-01 (tablet); M-03 (mobile).
Anatomy: Jersey number badge (circular, 20px diameter, jersey_number as text, background: team primary color from API) + player surname label (13px, --font-sans, truncated at 120px max-width with ellipsis) + × dismiss affordance (16px icon, appears on hover/focus of the pill when a player is selected).
States: empty (shows "Add player" placeholder text, no badge), loading (shimmer skeleton, 3 pill-width skeleton items), populated-default (shows last-used player, dimmed 60% to indicate system-set default), populated-confirmed (full opacity, coach has actively selected or confirmed this player), error (player not found in roster — shows "Player not found" in error color with a manual text fallback).
Typeahead behavior: Clicking or focusing the pill opens an inline typeahead dropdown. Filtering is client-side against the pre-loaded roster array (not a new search API call per keystroke). Dropdown shows jersey number + full name + position. Keyboard: ↑↓ to navigate, Enter to select, Esc to dismiss without selecting. Maximum 6 results visible before scroll.
Attribution → playlist auto-assign logic: When a player is selected, their default playlist ID (from user_playlist_defaults) is automatically checked in the playlist assignment field. This is a client-side operation — no API call at selection time. The playlist write only happens on tag confirm.
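A sketch of the two client-side operations above: filtering the pre-loaded roster as the coach types, and pre-checking the selected player's default playlists. The `PlaylistDefault` shape (keyed by player_id) is an assumption about how the auto-assign rules resolve.

```typescript
// Sketch: client-side roster filter and playlist pre-check — no network call
// until the tag is confirmed.
interface Player { id: string; name: string; jersey_number: number; }
interface PlaylistDefault { player_id: string; playlist_id: string; }

function filterRoster(roster: Player[], query: string, limit = 6): Player[] {
  const q = query.trim().toLowerCase();
  if (!q) return roster.slice(0, limit);
  return roster
    .filter(p => p.name.toLowerCase().includes(q) || String(p.jersey_number) === q)
    .slice(0, limit);                     // max 6 results visible before scroll
}

function defaultPlaylistsFor(playerId: string, defaults: PlaylistDefault[]): string[] {
  // Checked in the playlist field on selection; the write still happens only on confirm.
  return defaults.filter(d => d.player_id === playerId).map(d => d.playlist_id);
}
```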
Type: New component (desktop nudge variant) + modified scrubber (tablet/touch variant). Figma frames: D-04 (desktop expanded state); T-03 (tablet scrubber state).
Desktop nudge variant anatomy: In-point display (timestamp, editable on click) + ◀ 1s nudge button + [clip duration display] + 1s ▶ nudge button + out-point display (timestamp, editable on click). The clip duration display updates live as nudge buttons are pressed. Maximum clip duration is capped at 120s (2 minutes) — an attempt to create a 2+ minute clip surfaces a warning: "Clips over 2 min may be trimmed in playback. Adjust or continue."
Touch scrubber variant anatomy: A 100% width timeline strip showing the full game scrubber (reduced height: 8px track vs. 16px in the main playback controls). The auto-clip window is highlighted as a colored segment. In and out handles are 32px diameter circular drag targets. A live preview thumbnail (48×27px) appears above the active handle during drag, showing the frame at the handle's current position.
API interaction: The in/out timestamps written to the tag object are absolute timestamps relative to the game film start (in seconds, float precision). The ±8s default is calculated client-side from the current playback position at the moment T is pressed. Nudge increments modify the in/out timestamps directly — no server round-trip during adjustment. The final in/out values are submitted as part of the tag write on confirm.
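A sketch of the client-side window math: ±8s (or the coach's configured default) around the T-press position, clamped to the film bounds, with the 120s cap checked before confirm. Pure functions, no server round-trip; all names are illustrative.

```typescript
// Sketch: default clip window calculation and the 2-minute cap check.
interface ClipWindow { inPoint: number; outPoint: number; } // absolute seconds, float

function defaultClipWindow(pressTime: number, filmDuration: number, halfWindow = 8): ClipWindow {
  return {
    inPoint: Math.max(0, pressTime - halfWindow),             // never before the film start
    outPoint: Math.min(filmDuration, pressTime + halfWindow), // never past the film end
  };
}

function exceedsMaxClipDuration(w: ClipWindow, maxSeconds = 120): boolean {
  return w.outPoint - w.inPoint > maxSeconds; // triggers the "over 2 min" warning
}

// Edge-case example: T pressed at 5s in a 90-minute film yields an asymmetric window.
console.log(defaultClipWindow(5, 5400)); // { inPoint: 0, outPoint: 13 }
```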
Type: New component (extension of Hudl's existing notification toast, which is text-only). Figma frame: D-05 (desktop); T-04 (tablet); M-05 (mobile).
Anatomy: Thumbnail (48×27px, rounded 2px, generated from clip in-point frame) + event type color bar (4px left border in tag-type color) + primary line (event type label + player name, 13px weight 500) + secondary line (clip duration + playlist count, 12px muted color) + "View" link (opens Review Timeline inline) + "Undo" link (visible 0–3s only) + × dismiss button.
Thumbnail generation: The toast renders with a shimmer placeholder while the thumbnail generates server-side. Thumbnail generation is triggered immediately on confirm (not on toast display) to minimize the loading window. If the thumbnail has not arrived within 1.5s of the toast displaying, the placeholder is replaced with an event-type color swatch — the thumbnail never blocks toast display.
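A sketch of that race: the toast renders immediately and swaps in whichever arrives first — the server-generated thumbnail or the 1.5s timeout, which falls back to the event-type color swatch. `fetchThumbnail` is a hypothetical promise resolving to a thumbnail URL.

```typescript
// Sketch: thumbnail vs. 1.5s timeout; the thumbnail never blocks toast display.
type ToastVisual = { kind: "thumbnail"; url: string } | { kind: "swatch"; color: string };

async function resolveToastVisual(
  fetchThumbnail: Promise<string>,
  eventTypeColor: string,
): Promise<ToastVisual> {
  const timeout = new Promise<null>(resolve => setTimeout(() => resolve(null), 1500));
  const url = await Promise.race([fetchThumbnail.catch(() => null), timeout]);
  return url ? { kind: "thumbnail", url } : { kind: "swatch", color: eventTypeColor };
}
```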
Undo behavior: Pressing Undo calls the tag delete endpoint (same endpoint used by the Review Timeline delete action). On success: toast dismisses, overlay re-opens with all fields pre-populated from the undone tag, video remains paused. On failure: inline error in toast — "Couldn't undo — check your connection." The tag is considered saved; the coach can delete it from the Review Timeline.
Type: New component. Figma frame: D-06 (desktop timeline); M-04 (mobile card variant).
Desktop row anatomy (collapsed): Timestamp (DM Mono, 12px, min-width 56px) + event type badge (color swatch, 8×8px + label, 12px) + player attribution pill (same component as overlay, display-only mode) + clip duration (12px muted) + playlist chips (up to 3 visible, +N for more) + inline play button (16px icon) + edit button (pencil icon, 16px) + delete button (trash icon, 16px). Row height: 48px collapsed.
Desktop row anatomy (expanded): Row expands to show an inline clip player (16:9, max 320px wide, right-aligned within the row). Edit mode replaces the display-only fields with the same editable components used in the overlay — event type chip row, player attribution pill with typeahead, clip boundary control. Edit save calls the tag update endpoint and updates the row optimistically while the write completes. Edit cancel restores the previous values client-side, with no API call.
Mobile card variant: All the same data, card layout instead of row. Thumbnail (80×45px) in top-left. Event type badge + player name + timestamp stacked to the right of thumbnail. Clip duration below. Swipe-left reveals action buttons (Edit, Move, Delete). Tap to expand → inline clip player fills card width.
The following components from Hudl's Figma Community library are used in the hi-fi mockups without modification. They are listed for engineering reference — implementation should use the live Hudl component library, not the mockup assets.
Edge case screens are not the same as error states. Error states (write failure, roster loading failure) are documented in the Interaction Specs and Component Spec Sheet because they are part of a primary component's specification. Edge cases are situations that fall outside the primary use flow but that a coach will encounter in real use — and that a developer will have to make a decision about if the design doesn't address them first. An unspecified edge case is a decision deferred to engineering, and engineering decisions about user-facing edge case behavior are usually the wrong defaults.
Each edge case below documents: the situation, the coach context in which it arises, the designed behavior, and the rationale for choosing that behavior over alternatives. Where the edge case screen is in the Figma file, the frame number is noted. All edge case frames are in a dedicated "Edge Cases" page in the Figma file, separate from the primary flow frames.
Situation: A coach presses T, confirms a tag, and immediately presses T again within 400ms — before the toast has finished animating in. This happens in high-density stretches of film (a corner kick sequence with 4–5 events in 30 seconds).
Designed behavior: The second T press fires immediately after the overlay exit animation begins — it does not wait for the first toast to fully render. The new overlay opens, the previous toast renders simultaneously in the top-right corner (briefly overlapping with the new overlay open state). Multiple toasts stack vertically, oldest at top, newest below.
Rationale: Blocking the second T press until the first toast completes would interrupt a coach mid-sequence. The coach's primary task is watching film and tagging; the toast is feedback, not a gate. Stacking toasts is visually busy but time-limited (each auto-dismisses after 4s) and does not interrupt the flow. Figma frame: EC-01.
Situation: A coach presses T within the first 8 seconds of a film (before enough footage exists for the standard ±8s clip window) or within 8 seconds of the end of the film.
Designed behavior: The auto-clip window clips to the available footage. If T is pressed at 5s, the clip window is 0s–13s (the in-point clamps to 0s rather than the −3s a full ±8s window would imply, which would fall before the film starts). The clip boundary control reflects this: the in-point nudge button is disabled (grayed, with tooltip "Already at video start") when the in-point is at 0s. The out-point works normally.
Rationale: Surfacing an error ("can't tag here") would be confusing and incorrect — the coach can absolutely tag this moment, they just get a shorter clip. The asymmetric clip window is the correct behavior and should be presented without special treatment. The disabled nudge button is sufficient feedback that the boundary cannot extend further in that direction. Figma frame: EC-02a (start), EC-02b (end).
Situation: The roster API call fails or returns an empty array. The player attribution pill cannot populate its typeahead.
Designed behavior: The player attribution field shows "Type player name or number" as a free-text input. Tags created with a free-text player attribution are saved with a raw_player_name string property rather than a player_id reference. On the Review Timeline, these tags show with a "Manual — not linked to player profile" indicator and a "Link to player" action that allows the coach to retrospectively assign the tag to a roster player. The attribution-driven playlist auto-assign does not fire for unlinked tags.
Rationale: Blocking tag creation because the roster didn't load is unacceptable in a time-sensitive film review context. The coach can still create meaningful tags without roster data; the linking step can happen later. The "not linked" indicator is necessary so the coach knows which tags need attention before publishing player playlists. Figma frame: EC-03.
Situation: The coach confirms a tag, but the write API call fails (timeout, 500 error, or lost connection).
Designed behavior: The overlay remains open. The Confirm button returns from its loading state to its default state. An inline error banner appears above the overlay row: "Tag couldn't be saved — check your connection. [Retry] [Save for later]". "Retry" re-fires the write call. "Save for later" saves the unsaved tag data to local storage and adds a "Pending — not yet saved" item to the Review Timeline. Pending items show a sync icon (⟳) and are auto-retried in the background. When a pending item successfully saves, its ⟳ icon is replaced by the standard tag row and a brief success toast appears.
Rationale: A coach who loses connectivity at a key moment in film review should not lose the tag they just created. Local storage fallback is a standard offline-first pattern and is explicitly noted in the spec as a back-end coordination requirement — the pending tag format must match the confirmed tag format so the same write endpoint can be used for both. Figma frames: EC-04a (write failure state), EC-04b (pending item in Review Timeline), EC-04c (auto-save success).
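A sketch of the "Save for later" path: the unsaved tag is parked in localStorage in the same shape the write endpoint expects, so background retry can replay it through the normal confirm call. The storage key and payload fields are assumptions for illustration.

```typescript
// Sketch: localStorage pending queue with background retry.
interface PendingTag {
  session_id: string;
  event_type_id: string;
  player_id?: string;
  raw_player_name?: string;   // used when the roster never loaded (edge case above)
  in_point: number;
  out_point: number;
}

const PENDING_KEY = "tagging.pending";   // hypothetical storage key

function saveForLater(tag: PendingTag): void {
  const queue: PendingTag[] = JSON.parse(localStorage.getItem(PENDING_KEY) ?? "[]");
  queue.push(tag);
  localStorage.setItem(PENDING_KEY, JSON.stringify(queue));
}

async function retryPending(writeTag: (t: PendingTag) => Promise<void>): Promise<void> {
  const queue: PendingTag[] = JSON.parse(localStorage.getItem(PENDING_KEY) ?? "[]");
  const stillPending: PendingTag[] = [];
  for (const tag of queue) {
    try { await writeTag(tag); }          // same endpoint as a normal confirm
    catch { stillPending.push(tag); }     // keep for the next retry pass
  }
  localStorage.setItem(PENDING_KEY, JSON.stringify(stillPending));
}
```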
Situation: The coach manually adjusts the out-point beyond the end of the film, or adjusts the in-point before 0s.
Designed behavior: The nudge button stops at the film boundary and becomes disabled. If the coach types a timestamp directly into the in/out point field that exceeds the boundary, the field turns red inline and the Confirm button becomes disabled until a valid value is entered. The invalid value is not submitted to the API.
Rationale: Client-side validation before the API call prevents a class of errors that would result in confusing API failure responses. The inline red state on the timestamp field is the standard Hudl error state for form inputs and requires no new component work. Figma frame: EC-05.
Situation: The coach presses T at a timestamp within 2 seconds of an existing tag's timestamp, with the same event type and the same player attribution.
Designed behavior: A non-blocking inline warning appears below the Confirm button: "Similar tag exists at [timestamp] — same event and player. Create anyway?" with "Yes, create" and "View existing" links. "View existing" opens the matching tag in the Review Timeline (in a side panel, not navigating away from the overlay). The coach can create the duplicate or dismiss the warning — no tag is blocked outright.
Rationale: Duplicate tags can be intentional (two separate moments that happen to share the same event type and player) or accidental (the coach double-pressed T). A blocking error would prevent legitimate duplicate creation. A warning with a view-existing option gives the coach the information to make the right choice without coercing one answer. The 2-second window and same-event-same-player matching criteria are conservative enough to avoid false positives in normal use. Figma frame: EC-06.
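A sketch of the duplicate check: warn, never block, when a candidate tag falls within 2 seconds of an existing tag with the same event type and the same player. It runs client-side against the session's already-loaded tags; all names are illustrative.

```typescript
// Sketch: conservative similar-tag detection for the non-blocking warning.
interface SessionTag { id: string; timestamp: number; event_type_id: string; player_id?: string; }

function findSimilarTag(
  candidate: Pick<SessionTag, "timestamp" | "event_type_id" | "player_id">,
  existing: SessionTag[],
  windowSeconds = 2,
): SessionTag | undefined {
  return existing.find(t =>
    Math.abs(t.timestamp - candidate.timestamp) <= windowSeconds &&
    t.event_type_id === candidate.event_type_id &&
    t.player_id === candidate.player_id
  );
}
```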
Situation: The coach has configured a keyboard shortcut (Settings → Shortcuts) that conflicts with a Hudl native shortcut or a browser default.
Designed behavior: The Shortcut Configuration panel validates against a conflict list in real time as the coach types a new shortcut. If a conflict is detected, the input field shows an inline warning: "This key is used by [Hudl: Scrub forward]. Using it here will override that behavior in film review." The coach can proceed (explicitly accepting the override) or choose a different key. Browser defaults (e.g., ⌘W, ⌘R) are on a hard block list — the panel will not allow them to be mapped regardless of coach choice, with the message: "This key is reserved by your browser and can't be remapped." Figma frame: EC-07.
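A sketch of that two-tier validation: browser-reserved combinations are hard-blocked, Hudl-native keys surface an override warning, everything else passes. The reserved-key lists below are illustrative, not Hudl's actual inventory.

```typescript
// Sketch: shortcut validation with a hard block list and a conflict warning tier.
type ShortcutCheck =
  | { status: "ok" }
  | { status: "blocked"; message: string }
  | { status: "conflict"; message: string };

const BROWSER_RESERVED = new Set(["Meta+W", "Meta+R", "Ctrl+W", "Ctrl+R"]);
const HUDL_NATIVE: Record<string, string> = {
  "ArrowRight": "Scrub forward",
  "ArrowLeft": "Scrub back",
  "Space": "Play / pause",
};

function validateShortcut(combo: string): ShortcutCheck {
  if (BROWSER_RESERVED.has(combo)) {
    return { status: "blocked", message: "This key is reserved by your browser and can't be remapped." };
  }
  if (combo in HUDL_NATIVE) {
    return {
      status: "conflict",
      message: `This key is used by [Hudl: ${HUDL_NATIVE[combo]}]. Using it here will override that behavior in film review.`,
    };
  }
  return { status: "ok" };
}
```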
Situation: The coach opens Settings → Shortcuts to customize their keyboard shortcuts — the panel that was left unspecified at the end of the lo-fi phase (Design Rationale, section 09: "Keyboard shortcut configurability — P2 wanted to remap the T key. The scope and layout of that panel are unresolved").
Designed behavior: The Shortcuts panel shows a table of all configurable shortcuts: Action (label) + Current Key + a "Change" button per row. Pressing "Change" puts that row's key field into capture mode — the next keypress (excluding the hard block list) sets the new shortcut. The panel shows a reset-to-defaults button at the bottom. All shortcut changes take effect immediately and persist to the user's account preferences. Figma frames: EC-08a (panel default), EC-08b (capture mode), EC-08c (conflict warning state).
Situation: A coach opens a game film for the first time after the unified tagging feature is enabled on their account. They have never seen the overlay and do not know the T trigger exists.
Designed behavior: On first film open, a single contextual tooltip appears anchored to the [Tag ⌨ T] trigger label: "New: tag clips without leaving the film. Press T to try it." The tooltip auto-dismisses after 6 seconds or on any coach interaction. It does not reappear. No further onboarding is shown until the coach has used the overlay at least once — the second onboarding touchpoint appears in the Review Timeline on first visit, anchored to the session summary header: "All your tags are here. Edit, adjust clips, or publish to players." This is not a wizard; each contextual hint is specific to the element it is anchored to and appears exactly once. The onboarding state is tracked server-side per user account (two boolean flags: has_seen_overlay_tooltip, has_seen_timeline_tooltip).
Rationale: A full onboarding wizard for a film review tool at 11pm is the wrong solution — this persona does not have time or patience for a multi-step tour. Contextual single-appearance tooltips deliver the minimum information needed to discover each capability without coercing the coach to engage with onboarding before doing their actual work. Figma frames: EC-09a (film open tooltip), EC-09b (review timeline tooltip).
Situation: Auto-Resume is enabled. The coach confirms a tag, video resumes at 800ms, and then — while the film is playing — the coach decides to undo the tag (presses ⌘Z or clicks Undo in the still-visible toast). They are now watching live film with the undo action fired.
Designed behavior: Undo pauses the video. The tag is deleted. The overlay re-opens with all fields pre-populated from the undone tag. The playback position is scrubbed back to the moment the tag was created (±0s, not ±8s — the exact timestamp of the T press). The coach is returned to the paused state at the tagged moment with the overlay open, as if the original T press had just fired. They can re-confirm with different values, or cancel and resume from the original position.
Rationale: An undo that fires while the film is playing but doesn't return the coach to the original context is not a complete undo — the film has advanced. Scrubbing back to the tagged moment and re-opening the overlay is the only behavior that gives the coach the full ability to correct their action. The playback scrub is a programmatic action (not a user-initiated scrub) and is the one case where the design overrides the coach's current playback position on their behalf. This is documented explicitly in the annotation for this edge case so engineering understands the intentionality. Figma frame: EC-10.
A fully interactive Figma prototype simulates the complete single-pass tagging workflow — from pressing play on game film through tagging a moment, adjusting the clip window, and confirming the clip with auto-playlist assignment. The prototype is publicly linkable and embedded directly in this portfolio so a hiring team or product reviewer can interact with the solution in the browser without downloading anything or logging into Figma.
The prototype is built with three comparison flows as its structural backbone: the current Hudl base workflow (the two-pass problem as it exists today), the Hudl Assist service model (what coaches receive when the analyst does the tagging for them), and the proposed redesign (one pass, coach-controlled, no additional cost, same output quality). This three-way structure is not a stylistic choice — it is the argument. A reviewer who can walk through all three flows back-to-back understands the design decision in a way that looking at screens alone cannot convey. A short narrated walkthrough video accompanies the prototype for contexts where interaction is not possible (mobile preview, async review, screen recording).
The Figma prototype covers three flows across one shared file with three top-level sections. Each section is a self-contained flow — a reviewer can start at the beginning of any flow without having completed the others. Navigation between flows is handled by a persistent "Switch Flow" strip at the top of each frame (A · B · C, with the active flow highlighted), so a reviewer can jump between the current Hudl workflow and the redesign at any point during their evaluation.
The prototype is built on the hi-fi mockup frames from Step 07, with Figma prototype connections added. It is not a separate file — it is the Step 07 Figma file with prototype mode enabled. This means any change to the hi-fi frames is immediately reflected in the prototype, eliminating the drift that occurs when a prototype and its source mockups are maintained separately. All frames used in the prototype are tagged with a "PROTO" prefix in the Figma layer panel to distinguish them from edge case and documentation frames in the same file.
Flow A reproduces the current Hudl two-pass workflow as faithfully as a static prototype can represent a live video platform. The goal is not to mock Hudl's existing UI — it is to give a reviewer the visceral experience of the problem. A reviewer who has not personally sat through a two-pass film session needs to feel the redundancy before they can evaluate whether the redesign genuinely solves it.
Frame sequence: (A-01) Film player — playback in progress. (A-02) Pause and open Tagging panel — stat selection. (A-03) Add stat, resume playback. (A-04) [Time jump indicator: "40 minutes of game film later…"] (A-05) Return to film — clip-cutting pass begins. (A-06) Locate the tagged moment on the timeline — scrub to find it. (A-07) Set clip boundaries — in and out points, separately from the tag. (A-08) Assign clip to playlist — separate action from the tag and the clip. (A-09) Session complete — total time counter shown vs. redesign estimate.
The time jump indicator at A-04 is a deliberate narrative device. A static prototype cannot reproduce 40 minutes of film review, but a clearly labeled frame that acknowledges the time gap keeps the reviewer oriented without requiring them to sit through a simulation. The time counter at A-09 provides the quantitative anchor: "This workflow took 3h 10m in participant testing. Flow B covers the same game in 58 minutes."
Interaction fidelity: Flow A prototype interactions are minimal — the goal is to communicate the sequence of mode switches, not to recreate Hudl's platform. Tappable hotspots advance through frames; the scrubber and playlist panel in Hudl's actual UI are represented as static high-fidelity screenshots rather than interactive components. Flow A is the evidence, not the design.
Flow B is the full interactive prototype of the redesign. Every primary interaction in the tagging workflow is implemented as a live Figma connection — not a hotspot that advances a frame, but an interactive component that changes state within the frame. This distinction matters: a hotspot prototype proves that a sequence of screens exists; an interactive component prototype proves that the interaction model is coherent enough to implement in Figma, which is a reasonable proxy for its implementability in a real product.
Frame sequence: (B-01) Film player — idle state, [Tag ⌨ T] label visible at low opacity. (B-02) T pressed — overlay entrance animation plays (Figma smart animate), video pauses, overlay fully visible. (B-03) Event type chip row — tapping a chip updates the selected state in place. (B-04) Player attribution — typing "7" filters the roster to Jersey #7; tapping selects and populates the pill. (B-05) Playlist assignment auto-populates based on player selection. (B-06) Confirm — overlay exit animation plays, toast entrance animation plays, video frame shows paused state with thumbnail in toast. (B-07) Auto-resume — film plays again (represented by a looping video still). (B-08) [Repeat loop] T pressed at a second moment — demonstrates back-to-back tagging without mode switch. (B-09) Review Timeline — session summary, three tagged clips listed. (B-10) Expand a row — inline clip player, edit controls. (B-11) Playlist Builder — auto-populated playlists shown. (B-12) Publish — per-player delivery summary, confirm.
Interactive components used in Flow B: Event type chip row (component set with selected/unselected variants, Figma interactive component); Player attribution typeahead (simulated with two frames: pre-filter and post-filter, connected by a hotspot on the "7" keystroke representation); Clip boundary nudge buttons (each press increments the timestamp display — implemented with a Figma variable); Confirmation toast (Smart Animate from overlay-open to toast-visible state). All other transitions use Smart Animate with the easing curves specified in the Interaction Specs (Deliverable 3 of 5, Step 07).
Flow C documents what a coach receives when they use Hudl Assist — not an interface the coach uses, but a service that delivers outputs to them. This flow was included as Condition C in Step 05 usability testing and resurfaces here because it is the most important comparison for a product team evaluating whether the redesign is worth building. Assist is Hudl's own answer to the two-pass problem; Flow C makes that explicit so a reviewer understands exactly what the redesign is positioned against.
Frame sequence: (C-01) Coach uploads game film to Hudl — standard upload interface. (C-02) Submit to Assist queue — coach selects Assist service, confirms sport and event type, submits. (C-03) [Wait state — analyst queue indicator: "Your game is being reviewed. Typical turnaround: 4–6 hours."] (C-04) Coach receives notification — "Your Assist breakdown is ready." (C-05) Assist output — pre-tagged timeline with all moments tagged, player clips auto-generated, playlists pre-built. (C-06) Coach reviews output — can view clips, share playlists, but cannot edit tag timestamps or clip boundaries without re-entering the base workflow. (C-07) Share playlists — identical to Flow B publish step; same delivery mechanism. (C-08) Comparison summary card — Assist vs. Redesign side-by-side on key dimensions.
Frame C-06 is the one frame in the entire prototype that is not a neutral presentation — it surfaces the capability gap that all 6 Assist walkthrough participants raised independently in Step 05 testing: the output is correct but the coach has lost authorship. The frame shows the pre-built playlists with a subtle annotation: "Tags are the analyst's, not yours. Editing requires returning to base workflow." This is not a critique of Assist — it is an honest representation of the trade-off, documented in the Step 06 Trade-off Analysis. A reviewer evaluating the redesign should understand this gap before they assess whether the redesign offers something Assist does not.
Three fidelity decisions were made deliberately and warrant explanation for any reviewer who has opinions about prototype craft.
The Figma prototype is published with "Anyone with the link can view" access — no Figma account is required to interact with it in presentation mode. The share link opens directly in prototype presentation mode, bypassing the Figma editor. Mobile viewers can interact with the prototype on iOS/Android via the Figma mobile app or in a mobile browser (prototype interactions are touch-compatible).
A QR code linking to the prototype is included in the Three-Way Workflow Comparison Frame (Deliverable 4 of 4) for print or slide contexts. The prototype link is also embedded in this portfolio page via an iframe (Deliverable 2 of 4, Embedded Interactive Preview) so that a reviewer reading this case study can transition directly from reading about the design to interacting with it, in the same browser window.
These are documented here rather than discovered by a reviewer mid-interaction.
A link to a Figma prototype requires the reviewer to open a new tab, wait for Figma to load (8–15 seconds on a cold start), orient themselves to the Figma interface, and then find the start of the prototype. Each of those steps is friction between the reviewer's decision to engage with the prototype and the moment they are actually interacting with it. A portfolio reviewer evaluating ten candidates in an afternoon feels that friction at every step they have to complete before seeing the work.
The embedded preview removes all of that friction: the prototype loads inside the page the reviewer is already on. The reviewer can read about the design rationale and immediately test the interaction — in the same browser window, without losing their reading position. The embed also solves a specific problem for async portfolio review: a reviewer watching this page with a hiring team over a screen share can demo the prototype without leaving the portfolio, which keeps the conversation anchored to the context around the embed rather than cutting to a raw Figma file.
The Figma prototype is embedded using Figma's standard embed URL format (https://www.figma.com/embed?embed_host=share&url=[prototype_url]) inside an <iframe> with allowfullscreen and allow="clipboard-write" attributes. The iframe is wrapped in a 16:9 aspect-ratio container using the padding-top: 56.25% technique, so the embed scales correctly to any viewport width without a fixed pixel height.
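A sketch of that embed construction: Figma's standard embed URL inside a 16:9 padding-top wrapper so the iframe scales with the viewport. The prototype URL is a placeholder argument, not the real link.

```typescript
// Sketch: build the responsive Figma embed described above.
function buildPrototypeEmbed(prototypeUrl: string): HTMLDivElement {
  const wrapper = document.createElement("div");
  wrapper.style.position = "relative";
  wrapper.style.paddingTop = "56.25%";     // 16:9 aspect-ratio container

  const iframe = document.createElement("iframe");
  iframe.src = `https://www.figma.com/embed?embed_host=share&url=${encodeURIComponent(prototypeUrl)}`;
  iframe.allowFullscreen = true;
  iframe.allow = "clipboard-write";
  iframe.loading = "lazy";                 // defers loading until scrolled into view
  iframe.style.position = "absolute";
  iframe.style.top = "0";
  iframe.style.left = "0";
  iframe.style.width = "100%";
  iframe.style.height = "100%";
  iframe.style.border = "0";

  wrapper.appendChild(iframe);
  return wrapper;
}
```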
The embed renders Figma's presentation mode directly — the reviewer sees the prototype starting frame with the flow switcher strip (A · B · C) at the top, not the Figma editor or file browser. The Figma toolbar at the bottom of the embed (zoom, fullscreen, comments) is visible but does not interfere with prototype interactions. A "Fullscreen" button below the iframe wrapper allows the reviewer to expand the prototype to fill their viewport for a higher-fidelity interaction session.
The embed wrapper is responsive to viewport width. At viewport widths above 900px, the iframe fills the full content column width (matching the width of the cs-process-step-body container). At viewports below 900px, the iframe is replaced by an "Open prototype in new tab" button — the Figma embed at small viewport sizes does not provide a usable interaction surface for a desktop prototype, and a link is more useful than a cramped iframe.
The breakpoint at which the fallback triggers is 900px, not 768px (the standard tablet breakpoint), because Flow B requires horizontal overlay interaction that becomes unusable below 900px of viewport width. The mobile prototype variant (a separate Figma flow built at 390×844px) is linked separately below the iframe with a "View mobile prototype" button for reviewers on small screens.
Some corporate network environments block figma.com iframe embeds (content security policies at financial services firms and healthcare organizations often restrict third-party iframes). The embed wrapper includes a JavaScript onerror handler that detects iframe load failure and replaces the embed container with a graceful fallback: a static thumbnail of the prototype starting frame (the film player with the overlay visible) and a "This embed was blocked by your network — view prototype directly in Figma" link.
The fallback thumbnail is a 1440×900px screenshot of the prototype's primary state (overlay open, event type chips visible, player attribution populated, clip boundary controls showing). It is not a random frame — it is the most information-dense single frame in the prototype, selected to give the reviewer maximum visual context even if they cannot interact with the prototype at all.
The Figma iframe is lazy-loaded — it does not begin loading until the reviewer scrolls the embed into the visible viewport. This prevents the Figma embed (which initiates multiple network requests on load) from degrading the page's initial load performance for reviewers who may not reach the prototype section. The lazy-load is implemented via the loading="lazy" attribute on the iframe element, supported natively in all modern browsers.
While the iframe is loading, a skeleton placeholder with the Figma logo and "Loading prototype…" text fills the 16:9 container. The skeleton uses the same shimmer animation pattern used throughout the rest of the portfolio page for loading states — visual consistency rather than a jarring spinner in an otherwise static context. The skeleton automatically hides when the iframe reports a successful load via the onload event.
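A sketch of the load and failure handling around the embed: hide the shimmer skeleton once the iframe reports load, and swap in the blocked-network fallback if it errors, as the spec above describes. Selectors and the direct-link argument are hypothetical.

```typescript
// Sketch: skeleton hide on load, graceful fallback on embed failure.
function wireEmbedStates(
  iframe: HTMLIFrameElement,
  skeleton: HTMLElement,
  container: HTMLElement,
  prototypeDirectUrl: string,
): void {
  iframe.addEventListener("load", () => {
    skeleton.hidden = true;               // shimmer disappears once Figma loads
  });

  iframe.addEventListener("error", () => {
    container.innerHTML = "";             // replace the embed with the fallback
    const fallback = document.createElement("a");
    fallback.href = prototypeDirectUrl;   // direct Figma presentation-mode link
    fallback.textContent =
      "This embed was blocked by your network — view prototype directly in Figma";
    container.appendChild(fallback);
    // The spec also shows a static 1440×900 thumbnail of the prototype's primary
    // state above this link; omitted here for brevity.
  });
}
```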
Below the iframe wrapper, two secondary links are provided: (1) "Open in new tab" — opens the Figma presentation mode URL directly, for reviewers who prefer a full-browser prototype experience. (2) "View mobile prototype" — opens the mobile-viewport variant of Flow B, built at 390×844px, which demonstrates the simplified two-step tagging flow designed for the club coach on a sideline (documented in Hi-Fi Mockups — Mobile/Tablet, Step 07 Deliverable 2).
A third link — "Download walkthrough PDF" — provides a non-interactive but printable version of the three flows as annotated screen sequences. This is produced for contexts where an interactive prototype cannot be used: printed portfolio reviews, emailed attachments, or conferences. The PDF is generated from the Figma file using Figma's built-in PDF export, with annotation overlays enabled.
The walkthrough video serves two audiences with different needs. The first is the async portfolio reviewer — a hiring manager or design lead who has 8 minutes for a video but 45 minutes of interactive prototype in front of them and no time to explore it unprompted. The video gives them the curated version: what to look at, why it was designed this way, and what the key decisions were. The second is the reviewer who cannot interact with the prototype at all — mobile viewer, embedded link blocked, or screen recording context. The video is the complete fallback.
The video is intentionally not a screen recording of someone clicking through the prototype passively. Every second of screen time has narration that adds information not visible in the UI — the research finding that motivated a decision, the alternative that was rejected, the constraint that shaped a choice. A reviewer who watches the video should come away knowing more about the design's reasoning than one who only interacted with the prototype.
| Section | Content | Runtime |
|---|---|---|
| 00:00 – 01:30 | The Problem in 90 Seconds — the two-pass workflow demonstrated, the time cost quantified, the coach context established | 1m 30s |
| 01:30 – 04:00 | The Redesign Walkthrough — complete Flow B from T-press through publish, narrated with interaction callouts | 2m 30s |
| 04:00 – 06:00 | Design Decisions Narrated — three key decisions explained: why the overlay is bottom-anchored, why auto-resume fires at 800ms, why the mobile flow is two steps not five | 2m 00s |
| 06:00 – 07:30 | The Three-Way Comparison — Flow A (baseline), Flow C (Assist), Flow B (redesign) compared on four metrics | 1m 30s |
| 07:30 – 08:00 | What's next — Step 09 validation testing, open questions, where to find the full case study documentation | 0m 30s |
Total runtime: 8 minutes. A short-form cut (3 minutes, sections 1 and 2 only) is produced separately for contexts with a strict time limit — job application video submissions, conference lightning talks, LinkedIn native video. Both versions link to the full case study and prototype.
The opening section does not begin with a product overview. It begins with the experience of the problem. The screen shows Hudl's current tagging interface — the stat panel open, a game on screen — and the narration is: "This is what a post-game film session looks like for a high school football coach at 11pm on a Friday. First, you watch the whole game and tag every stat. Then you watch it again and cut every clip. Then you assign the clips to playlists. Then you publish them. That's three and a half hours for a 90-minute game. We tested this with eight coaches. The average time was three hours and ten minutes."
The narration continues over a screen recording of the Flow A prototype, with a red annotation tracking the "current task" at each step. At the mode-switch moment between the tagging pass and the clip-cutting pass, the video pauses on a white frame with text: "This is the second pass. Same film. Different mode. 1h 20m of work the coach already did." The visual pause is 3 seconds — long enough to let the point land, short enough not to feel like a lecture.
The walkthrough section follows the complete Flow B sequence. Narration is interaction-specific — it explains what the reviewer is seeing and why each interaction is the way it is, rather than describing the visual state, which is visible without narration.
This section steps out of the prototype flow and addresses three design decisions directly — the ones a senior design reviewer is most likely to question:
Decision 1: Why is the overlay bottom-anchored? Screen shows the overlay in place, then an animated alternative showing a center modal over the video. "A center modal covers the paused frame. The coach tagged this moment because something happened in that frame — a player position, a ball contact. If the modal covers it, the coach has to hold that image in their head while they enter the tag. The bottom anchor preserves the frame above the overlay throughout the entire entry. The coach can look up at any time."
Decision 2: Why does auto-resume fire at 800ms, not immediately? Screen shows the toast entrance timing against a timeline. "If auto-resume fired immediately on confirm, the video would start moving before the toast fully renders. The coach wouldn't see the confirmation. 800ms is enough time for the toast to appear and register visually before the film starts moving again. We tested this informally with three of the original participants. Under 600ms, two of three missed the toast. At 800ms, all three saw it."
Decision 3: Why is the mobile flow two steps, not five? Screen shows the desktop overlay and the mobile bottom sheet side by side. "The teacher-coach at a desk has time for precision. The club coach on a sideline with 15 minutes after a game does not. Two fields — event type and player — are the minimum viable tag. Everything else can be fixed in the Review Timeline later, at a desk, where precision work is actually comfortable. The mobile interface is not a stripped-down desktop. It's a different tool for a different context."
The final substantive section uses the Three-Way Workflow Comparison Frame (Deliverable 4 of 4) as its visual. The frame appears on screen and the narration walks through the four comparison rows in order.
The video is recorded using Loom for screen capture and narration, exported at 1080p. Screen resolution is set to 1440×900px (matching the prototype frame size) before recording. The Figma prototype is displayed in full-screen presentation mode during the walkthrough sections. Annotations are added in post-production using ScreenFlow or DaVinci Resolve — red circles for interaction callouts, white text cards for section headers and key data points.
Captions are generated using Loom's auto-caption feature and manually reviewed for accuracy — specifically for the Hudl-specific terms (Assist, clip-tagging, two-pass) and the participant quotes that appear in Section 4. The video is published to Vimeo (unlisted, no ads) and embedded in this portfolio via Vimeo's standard iframe embed. A direct download link (MP4, 1080p) is provided below the embed for reviewers who prefer a local file.
The short-form 3-minute cut is identical in production quality but uses jump cuts rather than the longer pause-and-land technique used in the full version. It is produced after the full version is locked — not as a separate recording, but as an edit of the same raw footage.
The Three-Way Workflow Comparison Frame is a single Figma artboard (1920×1080px, 16:9) that presents the full comparison of the three workflows — Hudl Base, Hudl Assist, and the Redesign — in a format designed for a product team presentation. It is the one frame in the entire project that is optimized for a display context (a projector or large monitor in a meeting room) rather than a portfolio reading context.
Three design decisions shaped this frame specifically: (1) The redesign column is not highlighted or visually privileged over the other two. The comparison frame is evidence, not advocacy — it presents the data and lets the viewer draw conclusions. A comparison frame that visually champions one option is not a comparison; it's a pitch. (2) Every data point on the frame has a source footnote — either a Step 05 testing result, a Step 02 research finding, or a Step 03 competitive analysis figure. Nothing on the frame is asserted without evidence. (3) The frame is readable at 1920×1080px from a distance of 4–6 meters, which is the typical projector viewing distance in a conference room. The minimum font size on the frame is 16pt at 1920px, which renders to approximately 11pt at a 1080p projection — legible from the back of most conference rooms.
The frame is a three-column grid (Hudl Base · Hudl Assist · Redesign) with four primary row groups and a header row. Each column header includes: the workflow name, a one-sentence description, and a cost indicator (Free / +$900–3,300/yr / Free). The column widths are equal — the redesign does not get more space than the alternatives. Row groups are separated by a 1px horizontal rule and a row group label in the left margin.
The frame is designed to be navigated row by row in a presentation — a facilitator can step through each row group as a talking point, spending more time on the rows where the audience has questions. The four row groups are not ordered from "most important" to "least important" — they are ordered from "most intuitive to understand" (step breakdown) to "most nuanced to discuss" (capability summary). A presenter can stop at any row and field questions without the remaining rows being out of context.
Row 1 shows each workflow as a horizontal step sequence using numbered circles connected by arrows — the same visual language used in the Step 03 journey map. Each step is labeled with a 3–5 word description and a time estimate from Step 05 testing (for Hudl Base and the Redesign) or Step 02 research notes (for Assist, which was not directly timed in testing).
| Workflow | Steps | Coach-Active Steps | Mode Switches |
|---|---|---|---|
| Hudl Base | ① Open film → ② Tag stats (pass 1) → ③ Complete game → ④ Return to film → ⑤ Cut clips (pass 2) → ⑥ Assign to playlists → ⑦ Publish | 7 of 7 | 3 (stat → clip → playlist) |
| Hudl Assist | ① Upload film → ② Submit to Assist → ③ Wait (4–6h) → ④ Receive output → ⑤ Review breakdown → ⑥ Share playlists | 4 of 6 (steps 3 and partial 5 are analyst-performed) | 0 (no clip-cutting pass) |
| Redesign | ① Open film → ② Tag + clip + assign (single pass, per moment) → ③ Review Timeline → ④ Publish | 4 of 4 | 0 (no second pass) |
The mode switch count is the single most clarifying number on the frame for an engineering audience. Three mode switches in the base workflow means three distinct UI contexts that the coach must navigate between — and three separate product surfaces that must be maintained. The redesign reduces this to zero, which is also a maintenance and product complexity reduction, not just a UX improvement.
Row 2 presents the time data from Step 05 usability testing (Hudl Base and Redesign) and Step 02 research (Assist). The data is presented as a bar chart within each column cell — identical scale across all three columns so the bars are directly visually comparable. A data table below the bars provides the exact figures and their sources.
| Metric | Hudl Base | Hudl Assist | Redesign | Source |
|---|---|---|---|---|
| Avg. session time (coach-active) | 3h 10m | 35m | 58m | Step 05 testing (Base + Redesign); Step 02 research (Assist) |
| Avg. time on clip-cutting pass only | 1h 22m | 0m | 0m | Step 05 testing, timed separately |
| Errors per session (missed clips) | 4.2 avg | 0 (analyst-performed) | 1.1 avg | Step 05 testing, observer count |
| Confidence rating (post-session) | 3.1 / 5 | 3.6 / 5 | 4.3 / 5 | Step 05 post-session questionnaire |
The confidence rating row is the most important finding on this frame and is the one most likely to prompt discussion. The Assist output scores 3.6/5 on coach confidence — higher than the base workflow (3.1) but lower than the redesign (4.3). The Assist walkthrough notes from Step 05 explain why: coaches who received the Assist output felt they couldn't vouch for the accuracy of the tags because they hadn't made them. Four participants used the word "trust" unprompted — specifically, that they trusted their own tags more than an analyst's.
Row 3 is a qualitative row, presented as a 5-cell icon matrix per column (five capability dimensions, each rated as Full / Partial / None with an icon and a one-line description). This row is the most nuanced on the frame because authorship and control are not binary — Assist provides some control (coaches can delete tags, share playlists) but not others (coaches cannot edit tag timestamps or create thematic packages from Assist output without returning to the base workflow).
| Capability | Hudl Base | Hudl Assist | Redesign |
|---|---|---|---|
| Tag timestamps controlled by coach | ✓ Full | ✗ None | ✓ Full |
| Clip boundaries set by coach | ✓ Full | ✗ None (fixed by analyst) | ✓ Full (with adjustable default) |
| Thematic clip packages | ◑ Partial (possible but time-costly) | ✗ None (by-play-type only) | ✓ Full (smart playlists + note field) |
| Real-time / sideline tagging | ✗ None (desktop-only UI) | ✗ None (requires upload first) | ✓ Full (mobile flow designed for sideline) |
| Edit tags after session | ✓ Full | ◑ Partial (can delete; can't edit timestamps) | ✓ Full (Review Timeline) |
Row 4 is the bottom row of the frame and the most product-strategy-facing. It presents three data points per column: annual cost above base Hudl subscription, the primary persona served (from the Step 02 persona framework), and a one-sentence strategic positioning statement.
| | Hudl Base | Hudl Assist | Redesign |
|---|---|---|---|
| Annual cost above base | $0 | $900–$3,300 | $0 |
| Primary persona | All coaches — full control, high time cost | Programs with budget — low time cost, low authorship | Teacher-coach & club coach — full control, low time cost |
| Strategic position | "Full platform, coach does the work" | "Pay for speed, give up authorship" | "Speed of Assist, authorship of base — no extra cost" |
The strategic positioning row is the one that a revenue stakeholder will focus on. The redesign's positioning — "Speed of Assist, authorship of base — no extra cost" — is a direct challenge to the Assist value proposition for the segment of coaches who adopted Assist specifically because the base workflow was too slow (not because they preferred delegation). The Step 06 Trade-off Analysis documents this in detail; Row 4 of the comparison frame surfaces the tension without resolving it, because resolution is a product strategy decision, not a UX one.
The comparison frame uses a restrained visual vocabulary by design. Three colors: Hudl's dark surface (#1E1E1E background), white for primary data, and a single accent color (the same blue used throughout the portfolio, #6BBFFF) for callout annotations and the row group labels. The redesign column does not get a different background, a colored border, or any other visual treatment that would telegraph a preference — the data makes the case without visual advocacy.
Typography is set at a minimum 16pt (Instrument Sans for data labels, DM Mono for row group labels and source footnotes). The 16pt minimum ensures legibility at projection distance without requiring the frame to sacrifice information density. Source footnotes are set at 11pt and positioned below each row group in a consistent location — small enough not to distract during a presentation but findable when a reviewer asks "where does that number come from?"
The comparison frame is produced as a standalone deliverable precisely because it is designed for multiple uses beyond the prototype.
This second round of testing uses the hi-fi Figma prototype produced in Step 07 rather than the lo-fi wireframes used in Step 05. The participant pool expands from 8 to 14 — the original 8 participants return for a longitudinal comparison, and 6 new participants from outside the original recruitment network are added to test whether the design's learnability generalizes beyond the coaches who influenced it. The test structure maintains the three-condition design from Step 05 (Hudl Base, Hudl Assist reference, Redesign) to enable direct before-and-after comparison of performance metrics.
The central research question sharpens in this round: the lo-fi test established that the redesign is faster and less error-prone than the base workflow. The hi-fi test asks whether it remains so at full visual and interaction fidelity — and adds a dimension that lo-fi testing could not address: does the redesign change what Assist subscribers actually want? Participant D in Step 05 said, unprompted, "I would not have paid for Assist if the base platform did this." That quote is a hypothesis. Step 09 is the test.
14 participants across three sessions each (42 total sessions). 8 returning participants from Step 05 enable longitudinal comparison; 6 new participants (recruited through Utah Youth Soccer Association contacts and a Nevada club network outside the original Liverpool FC International Academy pipeline) test generalizability. The Latin square condition rotation from Step 05 is maintained — every participant completes all three conditions, and condition order is counterbalanced so that each condition appears in each ordinal position equally often across participants.
One protocol change from Step 05: participants in Condition B (Redesign) now use the hi-fi Figma prototype in presentation mode on a dedicated laptop, rather than a researcher-facilitated lo-fi walkthrough. Participants control the prototype themselves — they navigate flows, interact with components, and encounter edge cases without researcher guidance. The researcher observes and records without intervening unless the participant is blocked for more than 90 seconds (the ceiling at which task abandonment is recorded).
| Condition | Interface | Change from Round 1 | Participants |
|---|---|---|---|
| A — Baseline | Hudl Base platform (live, current) | No change — same Hudl interface. New participants experience this fresh; returning participants have now used Hudl for an additional 6+ months, increasing ecological validity. | All 14 |
| B — Redesign | Hi-fi Figma prototype (Step 07), self-navigated | Major fidelity upgrade from lo-fi researcher-guided walkthrough. Participants interact with the prototype independently. All six design changes from Step 05 iteration log are incorporated. | All 14 |
| C — Assist Reference | Hudl Assist walkthrough (researcher-facilitated) + Assist subscriber deep interview | Protocol extended: Assist subscriber participants (P12, P13, P14 — recruited specifically as current Assist users) complete an additional 30-minute structured interview after the walkthrough. See Deliverable 4 for full findings. | All 14 for walkthrough; P12–P14 for extended interview |
All participants complete the same 4-task sequence in each condition. Tasks are identical across conditions — the same game film segment, the same tagging goals — so that condition differences are attributable to the interface, not to task variation. The game film segment is a 28-minute second-half recording (the same segment used in Step 05 to allow direct comparison).
| # | Task | Success Criteria | Timed? |
|---|---|---|---|
| T1 | Tag 5 specific moments identified by the researcher (event type + player specified) | All 5 tagged with correct event type and correct player attribution | Yes — per-tag and total |
| T2 | Create a clip package of all defensive moments for one player | Clip package contains correct clips; player attribution correct on all | Yes — total |
| T3 | Adjust clip boundaries on 2 of the tagged moments (specific moments indicated) | Boundaries adjusted to within ±2s of researcher-specified in/out points | Yes — per-clip |
| T4 | Publish the clip package to one player and a team channel | Publish action completed; per-player and team delivery confirmed in interface | No — completion only |
T3 (clip boundary adjustment) is new in Round 2 — it was not in the Step 05 task set. It was added specifically to test the hi-fi clip boundary control (the dedicated timeline expansion state added as a result of the ILG-03 iteration). T3 is particularly important for Condition A (Hudl Base) because boundary adjustment in the base workflow requires re-entering a separate clip editor — the mode switch overhead is directly measurable in this task.
| Metric | Condition A (Hudl Base) | Condition B (Redesign) | Condition C (Assist Ref.) |
|---|---|---|---|
| T1 avg. time — 5 tags | 48m 22s | 11m 04s | N/A (analyst-performed) |
| T2 avg. time — clip package | 31m 18s | 4m 52s | 6m 10s (review only, no creation) |
| T3 avg. time — boundary adj. (2 clips) | 14m 07s | 2m 33s | N/A (boundaries set by analyst, not editable without mode switch) |
| T4 completion rate | 13/14 (1 abandoned) | 14/14 | 14/14 |
| Total session time (T1–T4) | 2h 58m avg. | 54m avg. | 38m avg. (coach-active only) |
| Missed clips (observer-counted) | 3.8 avg. | 0.9 avg. | 0 (analyst-tagged) |
| Task abandonment events | 3 total (T2 ×2, T4 ×1) | 0 | 0 |
The T3 boundary adjustment result is the sharpest finding in the table. The hi-fi clip boundary control reduces boundary adjustment time from 14m 07s in Condition A to 2m 33s — an 82% reduction for a task that is architecturally identical in both workflows. The difference is entirely interface: Condition A requires navigating to a separate clip editor; Condition B surfaces the adjustment inline within the tagging overlay.
| Metric | Round 1 — Lo-Fi (n=8) | Round 2 — Hi-Fi (n=8) | Change |
|---|---|---|---|
| T1 avg. time — 5 tags | 13m 40s | 10m 18s | −25% — ILG-04 (trigger legibility) + ILG-03 (nudge increment) |
| Missed clips | 1.4 avg. | 0.7 avg. | −50% — ILG-06 (toast thumbnail) |
| Confidence rating (1–5) | 3.9 avg. | 4.5 avg. | +0.6 — visual fidelity increase + ILG-06 thumbnail confidence cue |
| Auto-resume confusion events | 2 total (P1, P3) | 0 | −100% — ILG-01 fully resolved |
| Note field over-entry | 4 of 8 entered notes they didn't intend to | 0 | −100% — ILG-02 fully resolved |
The zero-event results for auto-resume confusion and note field over-entry are the most direct validation of the Step 05 iteration log. Both ILG-01 and ILG-02 moved an element from "show by default" to "hidden until needed" — and zero recurrence in a larger, higher-fidelity test confirms both calls were correct.
| Hypothesis | Stated in | Result | Evidence |
|---|---|---|---|
| H1: Single-pass reduces session time ≥40% vs. base | Step 06 Design Rationale §09 | ✓ Validated | 54m vs. 2h 58m = 70% reduction. Exceeds threshold. The 40% figure was conservative — the 70% reflects elimination of both the clip-cutting pass and the separate boundary-editing workflow. |
| H2: Attribution-driven auto-assign reduces manual playlist management to near-zero | Step 04 Sketch B annotation | ✓ Validated | 11 of 14 completed T2 without touching the playlist assignment field. 3 made one manual adjustment each — expected override use, not a failure of auto-assign. |
| H3: Bottom-anchored overlay will not occlude critical game film content | Step 07 Hi-Fi Mockups Desktop | ✓ Validated | Zero observer-noted occlusion events across all sessions. 72px height constraint held throughout all edge case scenarios including the clip boundary expansion state. |
| H4: Mobile two-step flow faster than desktop five-element flow for rapid sideline tagging | Step 07 Hi-Fi Mockups Mobile §03 | ◑ Partially validated | Mobile T1: 8m 44s vs. desktop 11m 04s — faster for tagging. But mobile T3: 6m 12s vs. 2m 33s desktop. Onboarding copy should reinforce deferral of boundary adjustment to desktop for mobile users. |
Task times are measured from "begin" to task completion, recorded per task on a dedicated stopwatch — not derived from session-level timestamps. Two timing categories are distinguished: active time (participant is directly performing the task) and transition time (navigating between modes, waiting for page loads, reorienting after a mode switch). This distinction is critical: the tag-entry and clip-creation tasks themselves are not inherently slow in Condition A; the mode switching between them is.
| Task | Cond. A Active | Cond. A Transition | Cond. A Total | Cond. B Active | Cond. B Transition | Cond. B Total |
|---|---|---|---|---|---|---|
| T1 — 5 tags | 31m 04s | 17m 18s | 48m 22s | 10m 51s | 0m 13s | 11m 04s |
| T2 — clip package | 18m 44s | 12m 34s | 31m 18s | 4m 40s | 0m 12s | 4m 52s |
| T3 — boundary adj. | 6m 22s | 7m 45s | 14m 07s | 2m 19s | 0m 14s | 2m 33s |
| T4 — publish | 4m 11s | 1m 03s | 5m 14s | 3m 48s | 0m 09s | 3m 57s |
| Total | 60m 21s | 38m 40s | 99m 01s | 21m 38s | 0m 48s | 22m 26s |
Of the average Condition A task time across T1–T4 (99m 01s), 38 minutes and 40 seconds, or 39%, is transition time: navigating between modes, not doing analytical work. Condition B reduces transition time to 48 seconds across the entire session. The redesign doesn't just make the task work itself faster; it eliminates nearly all of the overhead between tasks.
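To make the active/transition distinction concrete, here is a minimal TypeScript sketch of the aggregation behind the split reported above. The type and field names are illustrative assumptions; the session stopwatch logs were not structured as code.

```typescript
// Illustrative aggregation of per-task stopwatch data into the two timing categories.
// Field names are hypothetical, not the actual logging schema used in the sessions.
interface TaskTiming {
  condition: "A" | "B";
  task: "T1" | "T2" | "T3" | "T4";
  activeSeconds: number;     // time spent directly performing the task
  transitionSeconds: number; // mode switches, page loads, reorientation
}

function transitionShare(timings: TaskTiming[], condition: "A" | "B"): number {
  const rows = timings.filter((t) => t.condition === condition);
  const active = rows.reduce((sum, t) => sum + t.activeSeconds, 0);
  const transition = rows.reduce((sum, t) => sum + t.transitionSeconds, 0);
  return transition / (active + transition);
}

// Condition A averages from the table: 2,320s of transition in 5,941s total ≈ 0.39.
```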
| Mode Switch | Avg. Time Cost | Participants Requiring 2nd Attempt | Primary Failure Mode |
|---|---|---|---|
| Tagging panel → Clip editor | 4m 22s | 6 of 14 | Couldn't locate the clip in the timeline after returning from the tag panel; had to re-scrub |
| Clip editor → Playlist assignment | 3m 18s | 4 of 14 | Playlist assignment UI not discoverable from clip editor; required separate panel navigation |
| Playlist assignment → Back to film | 2m 44s | 8 of 14 | Participants couldn't reliably return to the correct timestamp; most scrubbed from the beginning rather than using chapter markers |
| Return-to-film reorientation | 3m 51s per pass | N/A — universal | All participants required time to re-establish game context after each mode switch, especially late in the session when fatigue was a factor |
| Clip editor boundary adjustment | 7m 45s (T3) | 5 of 14 | Navigating into the clip editor, finding the correct clip, setting both in and out points, and returning required multiple orientation steps absent in Condition B's inline adjustment |
| Participant | Step 05 Lo-Fi (T1) | Round 2 Hi-Fi S1 (T1) | Round 2 Hi-Fi S2 (T1, optional) |
|---|---|---|---|
| P1 | 14m 22s | 11m 08s | 9m 44s |
| P2 | 13m 10s | 10m 33s | 8m 51s |
| P3 | 15m 04s | 12m 22s | — |
| P4 | 12m 44s | 10m 01s | 9m 02s |
| P5 | 16m 58s | 13m 44s | — |
| P6 | 13m 22s | 10m 48s | 9m 18s |
| P7 | 10m 14s | 8m 33s | — |
| P8 | 14m 00s | 11m 30s | 10m 11s |
Participants who completed an optional repeat session averaged 9m 28s for T1, vs. the 11m 04s average on first hi-fi exposure — an additional 14% improvement from a single repeat session. In real use, a coach tagging one game per week would reach this level of familiarity within 2–3 sessions. The redesign's speed benefit compounds with practice.
Task time is a behavioral measure — it captures what participants did, not how they felt about it. Confidence and satisfaction ratings are attitudinal — they capture whether participants believe the output is correct, whether they'd use the interface again, and whether they'd recommend it. For a tool used by coaches to create content shared with players, confidence in the output is not secondary to speed: a coach who tags faster but doubts the accuracy of their clips may not publish them at all. The distinction matters particularly for Condition C, where the Assist output was objectively accurate but coach confidence scores were unexpectedly low.
The post-session questionnaire is administered after each condition session individually (not at the end of all three — to capture condition-specific reactions before cross-condition comparisons contaminate responses). It uses a 5-point Likert scale for all rated items, with one open-response field. Two additions over the Step 05 instrument: the "authorship confidence" dimension (Q3) and the NPS proxy question.
| # | Question | Dimension |
|---|---|---|
| Q1 | I am confident that all moments I intended to tag were successfully tagged. | Capture confidence |
| Q2 | I am confident that the clips reflect the correct moments in the game film. | Clip accuracy confidence |
| Q3 | I am confident that the tags and clips reflect my own coaching intent — not someone else's interpretation. | Authorship confidence (new) |
| Q4 | I would be comfortable sharing these clips with my players without reviewing them first. | Publish confidence |
| Q5 | The amount of time this workflow required was acceptable for a regular post-game session. | Time satisfaction |
| Q6 | I found the interface intuitive and easy to use. | Usability satisfaction |
| Q7 | I would use this workflow for every game if it were available to me. | Intent to use |
| NPS | On a scale of 0–10, how likely are you to recommend this workflow to a colleague coach? | NPS proxy |
| Question | Cond. A (Base) | Cond. B (Redesign) | Cond. C (Assist) |
|---|---|---|---|
| Q1 — Capture confidence | 2.9 | 4.6 | 3.8 |
| Q2 — Clip accuracy confidence | 3.1 | 4.4 | 3.9 |
| Q3 — Authorship confidence | 4.7 | 4.8 | 1.9 |
| Q4 — Publish confidence | 3.0 | 4.5 | 3.2 |
| Q5 — Time satisfaction | 1.6 | 4.7 | 4.3 |
| Q6 — Usability satisfaction | 3.2 | 4.5 | 3.7 |
| Q7 — Intent to use | 2.4 | 4.8 | 3.1 |
| NPS proxy (0–10) | 3.4 | 8.9 | 5.2 |
Q3 (Authorship confidence) is the most analytically important row. Conditions A and B score nearly identically (4.7 vs. 4.8) — both produce coach-authored outputs. Condition C scores 1.9 — the lowest score across all conditions and all questions. This is not about Assist's output accuracy; participants acknowledged the tags are correct. It is about professional ownership of the analytical work. The redesign closes the speed gap with Assist while maintaining the authorship score coaches associate with self-created tags.
Q1 (Capture confidence) — Condition A: 2.9. Coaches who complete the tagging pass then return for the clip-cutting pass often discover they missed a moment. By the time they're cutting clips, they no longer have the context to know if the miss was intentional or accidental. The uncertainty produces a 2.9 score. Condition B's 4.6 reflects the confirmation toast and undo capability — the coach knows whether a tag registered immediately, before resuming the film.
Q4 (Publish confidence) — Condition C: 3.2. Significantly lower than Condition B (4.5) despite Condition C producing more clips. The lower score is driven by the authorship gap in Q3: coaches are reluctant to share clips they didn't make, even when they believe the clips are accurate. P13 (Assist subscriber): "I always re-watch the Assist clips before I send them to players. Not because they're wrong — because I want to make sure I'm standing behind them." P13's Condition B publish confidence score was 5/5.
| Theme | A | B | C | Representative quote |
|---|---|---|---|---|
| Would use every game | — | 11 | 4 | "I would do this every Sunday night without being asked." (P2, Cond. B) |
| Time cost is prohibitive | 12 | — | — | "I don't actually do this for every game. I can't." (P8, Cond. A) |
| Clips don't feel like mine | — | — | 9 | "The clips are right but I didn't make them. That bothers me more than I thought it would." (P12, Cond. C) |
| Toast thumbnail is reassuring | — | 8 | — | "That little preview — I could see it got the right moment. That's all I needed." (P6, Cond. B) |
| Want smart playlist to persist | — | 6 | — | "Can I save this playlist rule and use it for the next game automatically?" (P3, Cond. B) |
| Would replace Assist with this | — | — | 3 | "If the base platform did this, I would cancel Assist." (P14, Cond. C) |
| Score Range | Cond. A | Cond. B | Cond. C |
|---|---|---|---|
| Promoters (9–10) | 0 of 14 | 11 of 14 | 2 of 14 |
| Passives (7–8) | 2 of 14 | 3 of 14 | 5 of 14 |
| Detractors (0–6) | 12 of 14 | 0 of 14 | 7 of 14 |
| Average score | 3.4 | 8.9 | 5.2 |
Zero detractors for Condition B is the most striking single number in the Step 09 dataset. No participant who used the redesign prototype scored it below 7 on the recommend question — including P10 and P11, who had the highest error rates and slowest sessions in Condition B. The Condition A result (12 of 14 detractors) is consistent with the qualitative finding that coaches use Hudl's base tagging workflow because it is the only option, not because they like it.
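For readers less familiar with the banding, the sketch below shows how the 0–10 recommend scores map to the promoter, passive, and detractor counts in the table above, using standard NPS thresholds. The per-participant scores implied by the usage note are hypothetical; only the distribution and averages were reported.

```typescript
// Standard NPS banding applied to the 0-10 recommend question.
type Band = "promoter" | "passive" | "detractor";

function band(score: number): Band {
  if (score >= 9) return "promoter";
  if (score >= 7) return "passive";
  return "detractor";
}

function netPromoterScore(scores: number[]): number {
  const promoters = scores.filter((s) => band(s) === "promoter").length;
  const detractors = scores.filter((s) => band(s) === "detractor").length;
  return ((promoters - detractors) / scores.length) * 100;
}

// Condition B's distribution (11 promoters, 3 passives, 0 detractors of 14)
// corresponds to a net score of roughly +79, alongside the 8.9 average above.
```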
Coaches who pay $900–$3,300/year above their base Hudl subscription specifically to have analysts tag their film have already concluded that the base interface's time cost is unacceptable. They are the most sensitive indicator of whether the redesign's value proposition is real: if a coach would cancel Assist in favor of the redesign, the redesign has solved the problem the service was paid to solve. P12, P13, and P14 were recruited specifically as current, active Assist subscribers and agreed to complete all three conditions and a 30-minute structured debrief interview after the Condition C walkthrough — conducted before their Condition B session to avoid priming.
| | P12 | P13 | P14 |
|---|---|---|---|
| Role | Head coach, U16 club soccer, Utah | Head coach, U18 club soccer, Nevada | Head coach, U14 club lacrosse, Utah |
| Assist tier | Standard (~$1,200/yr) | Standard (~$1,400/yr) | Basic (~$900/yr) |
| Reason for adopting Assist | "I was spending Sunday nights until 1am on film. My spouse gave me an ultimatum." | "I have a full-time job. Three hours per game was not sustainable." | "My assistant coach left. I couldn't do it alone." |
Key interview questions are documented in the full protocol (Figma file, Assist Interview page).
All three participants adopted Assist because the time cost of the base workflow had become unsustainable — not because they preferred having an analyst tag their film.
This is the central finding of the Assist subscriber interviews. Assist is not satisfying a preference for delegation; it is satisfying a preference for usable output when the cost of producing it yourself is too high. The redesign does not create a new preference — it removes the cost barrier that made Assist necessary for this population.
P14's framing — "a report vs. coaching" — is the clearest articulation of the authorship gap in any participant response across either round of testing. It is used as an anchor quote in the Three-Way Comparison Frame (Step 08, Deliverable 4) and in the walkthrough video narration.
Two of three participants (P12, P14) stated cancellation intent contingent only on the feature being available. P13's condition — reliability — is addressed by the local storage fallback in EC-04 (network failure edge case, Step 07 Deliverable 5). P13's concern is exactly the scenario that fallback was designed to prevent.
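EC-04 is a design specification rather than shipped code, but the pattern it describes is simple: keep the coach's tag locally when a network save fails, then retry once the connection returns. A minimal sketch of that pattern, with hypothetical names:

```typescript
// Sketch of the EC-04 fallback: persist a tag locally if the network write fails,
// then flush the queue when connectivity returns. All names are hypothetical.
interface PendingTag {
  id: string;
  payload: unknown;
}

const QUEUE_KEY = "pendingTags";

async function saveTag(tag: PendingTag, post: (t: PendingTag) => Promise<void>) {
  try {
    await post(tag);
  } catch {
    // Network failure: keep the coach's work locally instead of losing it.
    const queue: PendingTag[] = JSON.parse(localStorage.getItem(QUEUE_KEY) ?? "[]");
    queue.push(tag);
    localStorage.setItem(QUEUE_KEY, JSON.stringify(queue));
  }
}

async function flushQueue(post: (t: PendingTag) => Promise<void>) {
  const queue: PendingTag[] = JSON.parse(localStorage.getItem(QUEUE_KEY) ?? "[]");
  const stillPending: PendingTag[] = [];
  for (const tag of queue) {
    try {
      await post(tag);
    } catch {
      stillPending.push(tag); // keep anything that still fails for the next retry
    }
  }
  localStorage.setItem(QUEUE_KEY, JSON.stringify(stillPending));
}

// e.g. window.addEventListener("online", () => flushQueue(postTagToServer));
```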
These three capabilities (complete coverage without watching, turnaround during coach downtime, and tournament-weekend batch processing) define the honest scope boundary of the redesign's value proposition. The redesign is not a replacement for Assist for coaches operating at tournament scale, or for coaches whose primary need is comprehensive coverage with zero active time. It is a replacement for coaches whose primary need is time-efficient, author-controlled film output on a regular weekly schedule — the teacher-coach and club-coach personas the project has been designed for throughout.
The Final Iteration Notes document what changed as a result of Round 2 testing, what was challenged and survived, what was deferred, and what the honest methodological limitations of this testing program are. It is the last design decision record before Step 10's reflection and retrospective — it draws a line between "what the design is, based on the evidence" and "what remains genuinely open." A design project that arrives at Step 10 without this document is presenting conclusions; one that includes it is presenting evidence.
| Decision | Challenge from Round 2 | Alternative Considered | Why Original Survived |
|---|---|---|---|
| Overlay is 72px tall — dense component | P9 and P11 both described the overlay as "a lot to look at" on first exposure | Increase to 96px, reduce density | Both reactions were first-session only. Round 2 Session 2 data showed no recurrence and times consistent with other participants. First-impression density is preferable to permanent screen real estate loss. 72px constraint holds. |
| Auto-resume is opt-in (disabled by default) | P1, P2, P6 frustrated to find it disabled by default again in Round 2 (prototype was reset between rounds) | Default to opt-out for returning users | The default must be consistent for all users at first exposure. Returning participant frustration is a prototype artifact (reset state), not a product signal. Default remains opt-in. |
| Player attribution optional for confirm | P10 completed all T1 tags without assigning a player — using event-type-only tags and adding players in the Review Timeline | Make attribution required for confirm | P10's approach is a valid workflow variant. Making attribution required prevents this use and adds friction to all users. The Review Timeline "link to player" action (EC-03) supports P10's workflow explicitly. Optional field design holds. |
| NPS administered per-condition, not at end of all three | P3 and P8 asked to revise their Condition A NPS after completing Condition B: "I didn't realize how bad it was until I saw the alternative" | Administer NPS at end of all three conditions | End-of-study NPS would contaminate condition-specific attitudinal data with cross-condition comparison effects. Per-condition design is methodologically correct. The desire to revise is noted as a finding — the contrast effect of the redesign makes the baseline's problems more salient — not a protocol error. |
Step 10 inherits three unresolved questions — stated as honest open edges of what this testing program can answer:
Step 10 is the close of the project — the point at which the evidence accumulated across nine prior steps is assembled into a coherent account of what was learned, what changed, and what the design ultimately is and isn't. It is not a victory lap. A final section that only summarizes successes is less useful than one that also names the limits of the work, the decisions that remain genuinely open, and the things that would be done differently with the benefit of hindsight.
The strategic question this project set out to probe — whether a base-platform interface redesign could solve the problem Hudl Assist was built to solve, for the segment of users who adopted Assist as a workaround rather than a preference — has a more specific answer at the close of Step 09 than it did at the opening of Step 01. This section documents that answer directly, names what it cannot answer, and explains why raising the question is more valuable than either ignoring it or overclaiming the conclusion.
This document summarizes outcomes across the full project. It is not a repeat of the Step 09 test results — those are documented in detail in Deliverables 1 through 5 of Step 09. This summary answers a different question: across 10 steps, two rounds of user testing, 14 participants, 42 test sessions, and every design iteration in between, what did this project actually produce and what is the quality of the evidence behind it?
The summary is written for three audiences with different needs: a hiring reviewer who wants to understand what was designed and why the evidence matters; a product team at Hudl (or a company like it) who wants to know whether this is a real product opportunity; and the designer themselves, who needs an honest account of what was learned to carry into the next project. All three versions of the summary are the same document — the evidence is the same regardless of who is reading it.
The project's design hypothesis, stated in Step 01 and carried through every subsequent step: a unified single-pass tagging interface — one that treats tagging, clipping, and playlist assignment as simultaneous properties of a single tagged moment rather than three sequential operations — would demonstrably reduce the time and cognitive load of a post-game film review session without requiring a back-end architecture change or a new pricing tier.
The verdict, based on the evidence produced in Steps 05 and 09: supported, with qualifications.
| Metric | Round 1 — Lo-Fi (Step 05, n=8) | Round 2 — Hi-Fi (Step 09, n=14) | Direction |
|---|---|---|---|
| T1 avg. time — Redesign (5 tags) | 13m 40s | 11m 04s | ↓ 19% — iteration improvements |
| T1 avg. time — Base (5 tags) | 46m 22s | 48m 22s | → Stable — Hudl baseline unchanged |
| Missed clips — Redesign | 1.4 avg. | 0.9 avg. | ↓ 36% — ILG-06 thumbnail confirmed |
| Confidence rating — Redesign | 3.9 / 5 | 4.5 / 5 | ↑ +0.6 — fidelity + toast thumbnail |
| Auto-resume confusion events | 2 total | 0 | ↓ 100% — ILG-01 confirmed |
| Note field over-entry events | 4 of 8 participants | 0 | ↓ 100% — ILG-02 confirmed |
| Intent to use (Q7, 1–5) | Not measured in Round 1 | 4.8 / 5 (Redesign) · 2.4 / 5 (Base) | 2.4 pt gap in favor of redesign |
| NPS proxy — Redesign | Not measured in Round 1 | 8.9 avg. · 0 detractors | Strong directional signal |
| Authorship confidence (Q3) | Not measured in Round 1 | 4.8 (Redesign) · 1.9 (Assist) | 2.9 pt gap — durable differentiator |
The most important row in this table is the last one. The authorship confidence gap between the redesign and the Assist reference condition — 4.8 vs. 1.9 — was not a metric that existed at the start of this project. It was surfaced by the research, formalized in the questionnaire instrument, and validated in the test data. It is the finding that most clearly defines what the redesign offers that Assist structurally cannot: coach ownership of the analytical output, at a speed that makes that ownership sustainable.
Nine substantive design changes were made across the two iteration cycles (6 in Step 05, 3 in Step 09). The changes are documented in full in their respective iteration logs. A summary of the categories of change — and what they collectively reveal about the design process — is more useful here than a repeat of each individual change:
Three core design decisions made in the earliest phases of the project — the concept sketches in Step 04 — survived without modification through both rounds of testing, all nine iterations, and the full fidelity progression from rough sketch to hi-fi prototype. They are worth naming explicitly because surviving intact through a rigorous process is stronger evidence of correctness than starting intact:
The project's most strategically significant finding: among the three Assist subscribers recruited for Step 09, all three adopted Assist as a workaround for the base interface's time cost, not as a preferred workflow. Two stated they would cancel their subscriptions if the redesign were available. The third named a reliability threshold — addressed by the EC-04 local storage fallback — as the only remaining condition.
This finding is stated with its limitations visible: n=3 is not a population estimate. The three participants were recruited through coaching networks, not randomly sampled from Hudl's Assist subscriber base. It is possible — perhaps likely — that the wider Assist subscriber population includes coaches who genuinely prefer analyst delegation and would not cancel for the redesign. The data from P12, P13, and P14 is a strong directional signal, not a market sizing study.
What the data does say with more confidence: the authorship gap (Q3 score 1.9 for Assist vs. 4.8 for the redesign) is a real, named, felt experience for coaches who receive analyst-tagged output. It is not about accuracy — participants acknowledge the tags are correct. It is about professional ownership of the analytical work. This gap is not closable by improving Assist's output quality; it is structural to the service model. That is a finding worth surfacing regardless of what a product team decides to do with it.
Every finding in this project is assessed against the same standard: what is the quality of the evidence behind it, and what would it take to strengthen that evidence? This table covers the four primary claims the project makes:
| Claim | Evidence Quality | What Would Strengthen It |
|---|---|---|
| Single-pass tagging reduces session time by ~70% vs. the base workflow | Moderate. Consistent across 14 participants and 2 test rounds. Controlled task, not real use. Prototype, not production. | A/B deployment study with real users on real games. Longitudinal data across a full season. Uncontrolled task scope. |
| The redesign can be implemented without a back-end architecture change | High for the claim as stated. Verified against Hudl's public API documentation and the Assist data model. Low for "no back-end work at all" — the thumbnail generation and local storage sync are implementation work. | Engineering review by a Hudl engineer with access to the private API and codebase. |
| Coaches prefer author-controlled output over analyst-tagged output when both are available at the same time cost | High for the directional claim. Q3 authorship scores (4.8 vs. 1.9) are consistent and large. Qualitative data from all three Assist subscriber interviews corroborates the scores. | Larger sample. Assist subscribers recruited via Hudl directly rather than through a coaching network, to reduce selection bias. |
| Some Assist subscribers would cancel if the redesign shipped | Weak as a population claim. Strong as a directional signal from n=3 who were self-selected for the study and may not represent the broader subscriber base. | A survey of Assist subscribers asking the adoption-reason question used in the P12/P13/P14 interview. Market research on the workaround-vs-preference split in the subscriber population. |
Design reflection is not self-congratulation and it is not self-criticism. It is the honest account of what the process revealed that the designer didn't already know — about the problem, about the users, about their own assumptions, and about where the work is genuinely strong versus where it is held up by favorable conditions. A portfolio reflection that only summarizes what went well is not a reflection; it is a highlight reel. This document attempts something different.
The most consequential design decision in this project was not an interface decision — it was the problem framing in Step 01. Defining the problem as "the two-pass workflow creates structural overhead that cannot be eliminated by improving either pass individually — it requires collapsing the two passes into one" determined almost everything that followed. An alternative problem frame — "tagging is too slow," "clip-cutting is too tedious," "playlist management is too complex" — would have produced solutions that optimized individual steps without addressing the mode-switching overhead between them. The Step 09 Task Time Analysis makes this visible in numbers: 39% of the average Condition A session was transition time between modes, not task time within them. A design that made tagging faster or clip-cutting easier would have left 39% of the session time completely untouched.
This is the lesson I want to carry forward most: the quality of the problem definition is more decisive than the quality of any individual design decision that follows from it. A precise problem frame produces a coherent solution space. A vague one produces a collection of features that solve related but distinct problems and may not add up to a meaningful improvement for the user.
Three findings from research — Steps 02 and 03 — were not predictable from the problem definition alone and changed the design in ways that matter:
If forced to name one design decision that most clearly separates this design from an alternative that "also solves the two-pass problem," it is this: the clip is created at the moment of tagging. Not as a separate step triggered by a button. Not as a batch process that runs at the end of the session. At the moment the coach presses T and confirms the tag, a clip exists. The coach doesn't create a clip — they create a tag that includes a clip by definition.
This decision required treating the tagging overlay not as a form that captures metadata about a clip the coach will create later, but as an interface that creates a complete asset — tag, clip, and playlist assignment — in a single action. The distinction is architectural (the data object is different) and experiential (the coach never thinks about clip creation as a separate task). It is also the decision that makes the single-pass workflow possible: if clip creation were decoupled from tagging, even by a small amount of user effort, the second pass would return. The mode switch is not between tagging and clip-cutting as UI surfaces; it is between tagging and clip-cutting as mental operations. Collapsing them requires making them literally the same action, not just adjacent actions on the same screen.
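A sketch of what that distinction looks like as a data object: confirming a tag yields one asset that already carries clip boundaries and a playlist assignment. The type names and default values below are illustrative assumptions, not Hudl's actual data model.

```typescript
// Illustrative only: the point of the single-pass design is that confirming a tag
// produces a complete asset, not metadata for a clip created later.
interface TaggedMoment {
  eventType: string;            // set in the tagging overlay
  playerId?: string;            // optional attribution (supports P10's workflow)
  clip: { inSeconds: number; outSeconds: number }; // boundaries exist immediately
  playlistIds: string[];        // auto-assigned from attribution, overridable
  note?: string;                // hidden until needed (ILG-02)
}

function confirmTag(atSeconds: number, eventType: string, playerId?: string): TaggedMoment {
  const PRE_ROLL = 8;   // default clip padding in seconds; illustrative values
  const POST_ROLL = 6;
  return {
    eventType,
    playerId,
    clip: { inSeconds: atSeconds - PRE_ROLL, outSeconds: atSeconds + POST_ROLL },
    playlistIds: playerId ? [playerId] : [], // attribution drives auto-assign
  };
}
```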
The design is strongest in the things that have been tested repeatedly and survived. Three elements meet this standard:
The design has limits that are not failures of execution — they are honest boundaries of what a single-pass interface can do:
Three things about my own design process became visible through this project that weren't fully visible before it:
This note was first sketched in Step 06 as the business framing for the Stakeholder Presentation Deck, where it was deliberately structured as three options rather than a recommendation — because the decision about what to do with this design belongs to product leadership with access to financials, subscriber data, and strategic priorities that a designer on a speculative concept project does not have. That framing is maintained here. This note presents the clearest account of what the evidence implies, names the questions it cannot answer, and stops short of telling a product team what to decide. A designer who makes that distinction is more useful in a product review than one who doesn't.
The Step 03 competitive analysis identified two dynamics in the sports video platform market that are relevant to this design's strategic position:
The Step 06 Stakeholder Presentation Deck presented three options without recommending one. At the close of Step 09, the evidence has a stronger shape than it did in Step 06. The options are revisited here with updated framing — not to recommend one, but to characterize what the Step 09 data implies about each:
The cannibalization question — "will this redesign reduce Assist revenue?" — was raised in Step 06 and is sharper now than it was then. The honest answer, with the evidence this project produced:
Yes, for some Assist subscribers. P12, P13, and P14 represent coaches who would cancel. The question is what proportion of the full Assist subscriber population they represent. The Step 09 sample cannot answer that. A survey instrument based on the interview protocol — specifically the question "Why did you originally subscribe to Assist?" with options that distinguish between "the base interface was too slow" and "I preferred not to do the work myself" — would produce a more reliable estimate of the at-risk subscriber share.
Not necessarily for all Assist subscribers. The three capabilities identified in Finding 4 of the Step 09 Assist Subscriber Interview Findings — complete coverage without watching, turnaround during coach downtime, tournament-weekend batch processing — are not replicated by the redesign. Coaches whose primary value from Assist is one of these three capabilities are not at significant cancellation risk. They are paying for something the redesign does not offer.
The most useful thing a product team could do with this finding is segment their Assist subscriber base by adoption reason. If the majority of subscribers are in the "workaround" category, Option A's cannibalization risk is manageable and the retention benefit (preventing churn to SportsVisio) may exceed the Assist revenue impact. If the majority are in the "genuine preference" category, Option A's cannibalization risk is low regardless — and the redesign's primary value is competitive retention, not Assist cannibalization.
Independent of the Assist question, the redesign has a competitive case that is worth stating directly: Hudl's retention in the teacher-coach and club-coach segment depends in part on those coaches not finding SportsVisio's pitch compelling. SportsVisio's pitch — single-pass tagging, lower price, same core functionality — is compelling to the coaches most frustrated by the two-pass workflow. The redesign removes the feature gap that makes SportsVisio's pitch credible.
Platform migration is high-friction — coaches have years of game film, established rosters, and player accounts in Hudl. The barrier to switching is real. But coaches who are actively frustrated with a workflow they spend 3+ hours on every week are the coaches most likely to evaluate alternatives seriously. P5 — the coach who stopped doing film review entirely because of time cost — is exactly the coach most at risk of leaving Hudl for a platform that makes the workflow sustainable. The redesign is, among other things, a retention tool for that persona.
A designer working on a speculative concept project can produce: a clear problem definition supported by research; a validated design direction supported by testing; a specific account of what the data implies for the strategic question; and an honest statement of what the data cannot answer. This project has produced all four.
What this project cannot produce: a revenue model for an Assist Lite tier; a quantitative estimate of the at-risk Assist subscriber share; an engineering feasibility assessment from inside Hudl's codebase; or a recommendation about which strategic option is correct given Hudl's financial position, product roadmap, and competitive priorities. These require access this project does not have.
If this project were being presented to a Hudl product team, the ask would be the one stated in the Step 06 Stakeholder Presentation Deck, Slide 21: not a shipping commitment, and not a decision about the revenue model — but a green light to invest design and research resources in answering the questions this project has identified but cannot answer. The next step is not "ship this." The next step is "validate the adoption-reason split in the Assist subscriber base, run an engineering feasibility review, and run a 90-day deployment study with a cohort of teacher-coaches." That is a specific, scoped ask that the evidence from this project supports.
A design project that ends with no open questions has either been explored with extraordinary thoroughness or has not been examined honestly. This project has unanswered questions — some because the scope did not allow for them, some because the evidence produced was directional rather than conclusive, and some because they are the kind of question that only deployment data can answer. Naming them is not a weakness of the project; it is a description of what comes next.
The questions are organized by the type of work that would be required to answer them — research, design, engineering, or product strategy — so that a hypothetical continuation team could prioritize them by their own available resources and expertise.
Some questions are outside this project's scope not because they weren't asked, but because they require access or expertise that a speculative concept project simply cannot have. They are named here because a product team reviewing this work should know that the designer is aware of them:
"What I'd do differently" is the most honest thing a designer can write about their own work. It is also the most useful for a hiring reviewer: it demonstrates not just what was done, but what the designer learned from doing it — which is the relevant indicator of what they'll do better next time. This document is written as if the project were starting over today, with the benefit of everything learned in the nine steps that preceded this one. It does not identify failures; it identifies decisions that were correct given the information available at the time and that I would make differently given what I know now.
If forced to name the single change that would have the greatest positive effect on the quality of this project — the one that would not just improve an individual step but change the trajectory of the whole — it is this: recruit an Assist subscriber in the first round of Step 02 interviews.
The Assist subscriber is the most strategically important persona in this project. They are the coach who has already concluded that the base interface is too slow, found an external solution, and is paying for it despite its limitations. They are the user whose behavior most directly tests the redesign's core claim. And they are the user whose voice is absent from the first 6 steps of the project, present only as a composite inference drawn from base-tier user interviews and a competitive analysis of the Assist service.
Having a real Assist subscriber in the Step 02 interview set — before the problem was framed, before the competitive analysis was written, before the first sketch was drawn — would have made the Assist question not a late-arriving strategic complication but a founding design constraint. The authorship gap, the workaround motivation, the reliability threshold, the tournament-scale scope boundary — all of these were discovered in Step 09. All of them could have been known in Step 02.
The project would have been different. Whether it would have been better is genuinely uncertain — there is an argument that developing the design against the base-tier user persona first, then stress-testing it against the Assist subscriber persona later, produced a more robust solution than designing for both simultaneously from the beginning. But the honest answer is: I don't know, because I didn't try the other order. That's the open edge of this reflection, and it's the right place to end a project that has tried throughout to be clear about what it knows and what it doesn't.
GoReact is a cloud-based, interactive video skills assessment platform whose library was confusing and inefficient — organizing files was difficult, sharing content didn't work reliably, and there was no search capability. I was the sole designer assigned to reimagine it from the ground up.
Became one of GoReact's most powerful features — enabling cross-account content management, reusable templates, and reliable sharing at scale.
GoReact, a cloud-based, interactive video skills assessment platform, provides a library for managing and organizing video content and learning resources. The old library was confusing — organizing files was difficult, sharing saved media didn't always work properly, and there was no search capability. The product team wanted to make the library easy to navigate, easy to share, and a useful place to save reusable documents.
I started by analyzing the old library, reviewing support tickets and client conversations to identify what wasn't functioning. Once I had a clear picture, I created wireframes to review with the product manager and lead developer. I iterated across primary personas and user journeys until we had a strong solution to take into high-fidelity. After user testing and stakeholder presentations, I finalized mockups and wrote JIRA tickets for development. All designs accounted for both desktop and mobile views.
GoReact's application did not meet WCAG or higher-education/government accessibility requirements — putting contractual obligations at risk. I led UX design for the entire compliance effort, working against findings from an external accessibility audit firm.
Achieved verified compliance, established company-wide accessibility standards, and unlocked additional higher-education contracts.
GoReact engaged Tenon.io to conduct an accessibility audit and create a VPAT. Problems addressed included: keyboard tabbing paths; ARIA patterns for JAWS, NVDA, and VoiceOver screen readers; icon and text color contrast (WCAG 4.5:1 standard); tooltip and focus states; application zooming for vision impairments; Windows High Contrast mode support; keyboard drag-and-drop; and clear heading and landmark regions for assistive technology navigation.
I researched WCAG guidelines and mocked up solutions in Adobe XD for each identified issue. Then I scheduled interviews with GoReact users who had accessibility needs — gathering direct feedback on what worked. I revised designs, documented developer requirements, and implemented changes. We ran follow-up Tenon.io audits iteratively until our conformance report met required standards. The PM, technical lead, and I then codified company-wide accessibility standards for all designers and developers.
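To illustrate one of the fix categories above, here is a sketch of keyboard-accessible tooltip behavior: shown on focus as well as hover, exposed through aria-describedby, and dismissible with Escape. It is illustrative only, not the shipped GoReact implementation, which was delivered as designs and developer requirements.

```typescript
// Illustrative keyboard-accessible tooltip wiring, not the shipped GoReact code.
function attachTooltip(trigger: HTMLElement, tooltip: HTMLElement) {
  if (!tooltip.id) tooltip.id = `tooltip-${Math.random().toString(36).slice(2)}`;
  tooltip.setAttribute("role", "tooltip");
  trigger.setAttribute("aria-describedby", tooltip.id);

  const show = () => { tooltip.hidden = false; };
  const hide = () => { tooltip.hidden = true; };

  trigger.addEventListener("focus", show);   // keyboard users, not just mouse hover
  trigger.addEventListener("blur", hide);
  trigger.addEventListener("mouseenter", show);
  trigger.addEventListener("mouseleave", hide);
  trigger.addEventListener("keydown", (e) => {
    if (e.key === "Escape") hide();          // dismissible, per WCAG 1.4.13
  });

  hide(); // hidden by default
}
```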
GoReact had no in-app payment mechanism. Learners had to use a bookstore code or contact an in-house team — an unscalable process the business urgently needed to solve. I designed the full self-serve payment experience.
Launched self-serve in-app payments, eliminating sales friction and paving the way for full in-app licensing.
GoReact was free for instructors but learners had no way to pay within the application — they had to use a bookstore-provided code or contact an in-house team. Neither was scalable. The business needed a way for learners to pay as they created their accounts.
I researched in-app payment patterns across multiple applications, then sketched concepts before translating them to wireframes. I reviewed these with another PM and the technical lead, then ran ~10 remote user interviews. After identifying what was and wasn't working, I iterated to high-fidelity and wrote JIRA tickets. We withheld the feature until thoroughly tested, then released to a small user group first — using FullStory and follow-up interviews to iterate quickly before full rollout.
The product team lacked a scalable, direct way to understand user sentiment in the moment. I designed an in-app survey system integrated with Airtable, Sigma, Slack, and FullStory — including AI-assisted response categorization at scale.
Dramatically expanded the user feedback pool in a non-intrusive way, delivering continuous actionable insights that accelerated product decisions.
The modal design followed existing product patterns. The real challenge was building a taxonomy of topics to categorize open-ended responses at scale — so we could filter by area of the application. We tuned the survey's appearance cadence carefully: frequent enough to capture sufficient data, infrequent enough not to frustrate users. We adjusted the cadence iteratively after launch.
After collecting several hundred responses, I manually categorized them to build the taxonomy. We then used an AI assistant in Airtable to categorize new responses — with human spot-checks and prompt iteration. Airtable data fed into Sigma dashboards showing category distribution by star rating, and into a Slack channel that included the FullStory replay link for every new response — allowing us to immediately watch the journey that prompted the feedback and create JIRA tickets in real time.
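As a sketch of the kind of glue that pipeline describes, here is a hypothetical response-to-Slack step that attaches the FullStory replay link to each categorized response. The webhook URL and field names are placeholders, not the production integration.

```typescript
// Hypothetical sketch of the response-to-Slack step described above.
interface SurveyResponse {
  stars: number;        // 1-5 rating
  comment: string;      // open-ended text
  category: string;     // AI-assigned topic, spot-checked by a human
  fullStoryUrl: string; // session replay link for the journey behind the feedback
}

const SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/..."; // placeholder

async function notifyTeam(response: SurveyResponse): Promise<void> {
  // Slack incoming webhooks accept a JSON payload with a "text" field.
  await fetch(SLACK_WEBHOOK_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      text:
        `${response.stars}★ · ${response.category}\n` +
        `"${response.comment}"\nReplay: ${response.fullStoryUrl}`,
    }),
  });
}
```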
RainFocus, a cloud-based event marketing and management platform, needed high-fidelity visuals to explore how attendee data could be embedded onto personal conference accounts. The concept: modular code that event managers could configure to customize the attendee experience.
Supported successful pitches that secured consistent, ongoing work with the event management teams at both Oracle and Cisco.
RainFocus, an event management software company in its early stages, needed high-fidelity mockups for client presentations. The project explored how to embed data onto personal web accounts of conference attendees — what the UI might look like, and what kind of data could be surfaced. The vision was modular code blocks that an event manager could add or remove to customize the attendee experience.
I consulted with RainFocus's CEO and sales team on the concepts being discussed with Oracle and Cisco — then visualized those ideas using Adobe Illustrator and Photoshop. The designs served as talking-point artifacts and concept exploration tools rather than developer-ready specs, enabling the sales team to have concrete, visual conversations with enterprise clients. The resulting work secured ongoing relationships with both Fortune 500 companies.
RainFocus's service team relied on developers to create session catalogs, registration pages, and exhibitor forms for every event. I designed two interconnected internal tools — a Collection Builder and a Widget Builder — enabling a WYSIWYG approach with no developer required.
Service team could generate catalogs, login pages, and registration flows without developer support — dramatically reducing turnaround time per event.
Event managers needed to create information collections — session catalogs, speaker listings, exhibitor directories — output as embeddable HTML/JavaScript for landing pages or emails. Since RainFocus handled all client setup at the time, the in-house service team was my actual target user. I explored options through sketches and journey maps in consultation with the service team, PM, and UX colleagues — then delivered high-fidelity desktop designs in Adobe XD.
The Widget Builder let the service team create forms, session catalogs, login pages, and registration processes — outputting embeddable HTML/JavaScript. I used Adobe Muse for high-fidelity prototyping since Adobe XD was still in beta. The final product evolved from my designs, but the core concept — giving the service team a visual builder without programmer support — became a reality, dramatically speeding up event setup.
I'm Robert EM Spencer, a UX Product Designer based in Provo, Utah. I've spent years working across the full design lifecycle — from discovery and wireframing through stakeholder alignment, developer handoff, and measuring results after launch.
At GoReact, I was the sole designer on the library redesign — one of the platform's highest-complexity PWA features — and jointly led the full WCAG 2.1 compliance project from a failing Tenon.io audit to verified conformance, codifying standards for every designer and developer on the team. At RainFocus, I designed the mockups that won Oracle, Cisco, Samsung, Gartner, and VMware as clients. I also served as acting PM on GoReact's in-app payment project: ran 10+ user interviews, wrote every JIRA ticket, and managed a staged rollout monitored with FullStory.
The library redesign became one of GoReact's most powerful features. The accessibility project unlocked higher-education contracts the sales team couldn't close before. When the in-app survey system launched, the product team went from guessing to acting on hundreds of categorized user responses — with a FullStory session replay attached to every single one.
Currently prototyping AI-assisted design and research workflows using Figma Make, ChatGPT, Claude, and Google Stitch — building and testing prompt patterns for research synthesis, design critique, and rapid concept generation. Fluent in Spanish; lived and worked in Guatemala, Chile, and Costa Rica for 11 years, which directly shapes how I conduct user research with diverse populations.
Currently open to new opportunities. Let's talk about what you're building.