How to Run User Acceptance Testing That Actually Catches Issues (and Doesn’t Turn Into Chaos)
This playbook gives you two solid ways to run UAT (Azure DevOps first, Excel second), with formats, templates, and a simple workflow for turning “it didn’t work” into trackable Bugs or scoped Enhancements.
You’re here because…
- Testers give feedback like “it doesn’t work” and you can’t reproduce it.
- UAT results come back scattered across email/Teams and nothing is trackable.
- You need a clean way to tie UAT back to requirements/design, with repeatable results.
- Your client is split: some are technical (ADO is great), some aren’t (Excel is easier).
Fast path: run UAT in 60 minutes
- Pick your tool
- If the client can support it → Azure DevOps UAT
- If stakeholders need simple → Excel UAT pack
- Define scope
- 5–10 “happy path” journeys + 5–10 “edge cases”
- Include: security role checks, key forms, key automations, and one reporting/view scenario
- Publish the rules
- What “Pass” means
- What to capture on a Fail (step #, expected vs actual, screenshot, record link/GUID)
- Where failures go (ADO bug/enhancement or Excel comments)
- Pilot first
- 1 tester runs 3–5 cases
- Fix the script format before you send it to everyone
What good UAT looks like
Good UAT is not “click around.” It’s:
- Traceable: each test ties back to a requirement/design
- Repeatable: someone else can run it and get the same result
- Actionable: failures include enough detail to reproduce without a meeting
If you only remember one rule:
A UAT test case is successful when someone who didn’t write it can run it.
Consultant law #12: If the steps aren’t specific, the results will be vibes.
Option 1: Azure DevOps UAT (recommended when possible)
Best when the client can handle the tool and wants traceability, collaboration, and clean tracking.
What it is
Using Azure DevOps Test Plans to manage:
- Test Plans (your UAT cycle / release)
- Test Suites (group by feature/requirement)
- Test Cases (step-by-step scripts + expected results)
What your tests should include in ADO
At minimum, each test case should have:
- Title with an ID style: TC### – <Feature> – <Scenario>
- Linked requirement/design (story/backlog item + spec link)
- Steps + expected results per step
- Attachments when helpful (screenshots, sample data notes)
- Tags (UAT, Regression, Security, Flow, etc.)
Best practices for writing tests in ADO
1) Link tests to requirements/design
- So you can answer: “What requirements are covered?” without guessing.
2) Write steps like a script
- Each step = action + expected result (so a different person can run it).
3) Reuse shared steps
- Shared steps reduce copy/paste drift (e.g., “Login”, “Create record”).
4) Keep discussion in one place
- Use comments on the test case or the bug—not Teams threads.
- Use @mentions to pull in the right owner.
5) Use “Follow” strategically
- People follow the items they care about instead of emailing the entire project group.
The underrated ADO benefit: UAT feedback becomes trackable work (no copy/paste)
This is where ADO earns its keep.
When a test fails → create a Bug from the test
Workflow you teach clients:
- Tester runs the test case and comments what failed (step # + what happened).
- Create a Bug linked back to that test case (so you always know what exposed it).
- Bug gets: owner, severity, target release, comment thread, attachments.
- After fix, re-run the same test case to confirm it’s actually resolved.
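If you want to script the "create a Bug linked to the test case" step, the Azure DevOps work item REST API accepts a JSON-patch body. Below is a minimal Python sketch; the organization/project values, title, and severity string are placeholders you would replace, and you should verify the "Tested By" link reference name against your process template:

```python
import base64
import json
import urllib.request

ADO_ORG = "your-org"          # placeholder: your ADO organization
ADO_PROJECT = "your-project"  # placeholder: your ADO project

def build_bug_payload(title, repro_steps, severity, test_case_url):
    """Build the JSON-patch body the ADO work item API expects for a new Bug."""
    return [
        {"op": "add", "path": "/fields/System.Title", "value": title},
        {"op": "add", "path": "/fields/Microsoft.VSTS.TCM.ReproSteps",
         "value": repro_steps},
        {"op": "add", "path": "/fields/Microsoft.VSTS.Common.Severity",
         "value": severity},
        # "Tested By" link back to the test case that exposed the bug
        {"op": "add", "path": "/relations/-",
         "value": {"rel": "Microsoft.VSTS.Common.TestedBy-Forward",
                   "url": test_case_url}},
    ]

def create_bug(payload, pat):
    """POST the payload (needs a personal access token with work item write scope)."""
    url = (f"https://dev.azure.com/{ADO_ORG}/{ADO_PROJECT}"
           "/_apis/wit/workitems/$Bug?api-version=7.0")
    auth = base64.b64encode(f":{pat}".encode()).decode()
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json-patch+json",
                 "Authorization": f"Basic {auth}"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

This is a sketch, not a turnkey integration; most teams just use the "Create bug" button in Test Plans, but the payload shape is useful if you ever bulk-import failures.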
When someone asks for a new feature → create an Enhancement / User Story
If a tester says:
- “It should also notify the supervisor”
- “Can we add a new field here?”
That’s not a bug. That’s an Enhancement (User Story / Backlog Item), linked and prioritized separately.
Simple rule for clients:
- Bug = “It doesn’t meet the expected result / requirement.”
- Enhancement = “It would be nice if it also did X.”
Why ADO is great
- Centralized tracking: test runs, outcomes, progress
- Requirement/design traceability (coverage)
- Collaboration: comments, @mentions, notifications
- Failed tests → Bugs, new requests → Enhancements (linked and searchable)
- Easy to run UAT again next release without rebuilding your process
Where ADO struggles
- Clients must learn the tool (some won’t)
- Access/licensing/setup overhead (depends on org)
- Can feel like “more process” (but that’s also why it’s trackable)
Do not be a hero: Don’t force ADO on a stakeholder group that won’t use it. You’ll end up running two UAT processes anyway.
Option 2: Excel UAT pack (simple + stakeholder-friendly)
Best when testers are non-technical or you need a clean “email it out, collect it back” workflow.
What it is
One Excel sheet used as the UAT tracker.
What the sheet should include (columns)
- Test Case ID (TC###)
- Objective
- Preconditions (role, data needed)
- Test Steps
- Expected Results
- Actual Results
- Pass/Fail
- Tester
- Date
- Comments
No “Evidence link” column needed. If someone has screenshots, they can paste into Comments or attach them in the email thread.
Two ways to run Excel UAT (both valid)
Approach A: Shared workbook (everyone edits the same file)
Usually hosted in SharePoint/OneDrive.
Pros
- One source of truth (no “latest version” confusion)
- Progress is visible in real time
- Easier to summarize results at the end
Cons
- Permissions can be annoying (especially external clients)
- People can overwrite rows / accidentally break formatting
- Some testers dislike live collaboration
- If someone sorts the whole sheet and saves, it can confuse everyone
Good fit when
- Internal UAT team
- Client has solid M365 setup
- You want a live scoreboard during the UAT window
Best practice if you do this
- Assign each tester a range of TC IDs (clear ownership)
- Freeze header row
- Agree: filter is fine; avoid sorting the entire sheet
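If you have more than a handful of testers, even the "assign each tester a range of TC IDs" step is worth scripting so nobody's slice overlaps. A small sketch (tester names are illustrative):

```python
def assign_ranges(tc_ids, testers):
    """Split an ordered list of TC IDs into contiguous slices, one per tester.

    Earlier testers absorb the remainder, so slice sizes differ by at most one.
    """
    n, k = len(tc_ids), len(testers)
    size, extra = divmod(n, k)
    assignments, start = {}, 0
    for i, tester in enumerate(testers):
        end = start + size + (1 if i < extra else 0)
        assignments[tester] = tc_ids[start:end]
        start = end
    return assignments
```

Paste the output into the kickoff email so ownership is unambiguous before anyone opens the workbook.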
Approach B: Individual copies (email out, collect back)
Each tester gets a copy (or a subset) and sends it back completed.
Pros
- No collaboration conflicts
- Works even when client access is limited
- Simple for non-technical testers (“fill it out and send it back”)
Cons
- You must merge results (someone becomes the librarian)
- Formatting drift (people will “improve” your sheet)
- Harder to see progress mid-UAT
Good fit when
- External stakeholders
- Limited SharePoint access
- You need the simplest possible workflow
Best practice if you do this
- Give each tester a slice of TC IDs, not the full mega-sheet
- Name files consistently: UAT_<Name>_<Date>.xlsx
- Have one owner responsible for merging into the master tracker
Why Excel works
- Familiar and low friction
- Great for non-technical stakeholders
- Easy to distribute without tool training
Where Excel struggles
- Consolidation and real-time tracking (especially with individual copies)
- No built-in threaded discussion/notifications (compared to ADO)
ADO vs Excel (quick guidance)
- If the client needs traceable + collaborative → ADO
- If the client needs easy + low training → Excel
Hybrid approach that works well
- Draft scripts in Excel → move into ADO if/when the client is ready.
When a test fails: the debugging workflow
This is what makes UAT useful instead of noisy.
What testers should capture (minimum)
- Environment (Dev/UAT/Prod + URL)
- Test Case ID
- Exact step number where it failed
- Expected vs actual
- Screenshot (or short recording if timing-related)
- Record link + GUID (if applicable)
- Error text copied (not paraphrased)
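That minimum-capture list doubles as a checklist you can enforce before a failure reaches the build team. A small sketch; the field names are illustrative, so rename them to match your tracker or bug form:

```python
# Illustrative field names mirroring the minimum-capture list above
REQUIRED_FIELDS = ["Environment", "Test Case ID", "Failed Step #",
                   "Expected", "Actual", "Error Text"]

def missing_details(report):
    """Return the minimum-capture fields that are empty or absent,
    so an incomplete failure can be bounced back to the tester
    instead of triggering a reproduce-it meeting."""
    return [f for f in REQUIRED_FIELDS if not str(report.get(f, "")).strip()]

# Example: missing_details({"Environment": "UAT", "Test Case ID": "TC007"})
# lists the fields the tester still owes you.
```

Run it over the Fail rows (or bug form submissions) and send back anything incomplete; the point is that the checklist is mechanical, not a judgment call.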
How the build team should respond
- Reproduce using the same user role/security context
- Classify: data vs security vs automation (flow/plugin) vs UX/form
- Log it properly:
- ADO: create Bug or Enhancement
- Excel: capture in Comments and mark Fail
- Fix → re-run the same TC
Definition of Done
UAT is “done” when:
- ☐ Every feature/requirement in scope has coverage (at least one test)
- ☐ All Blocker/High issues are resolved or explicitly accepted
- ☐ Every failure has enough detail to reproduce without a meeting
- ☐ Sign-off is documented
- ☐ Known issues and enhancements are listed with next steps
Copy/paste templates
ADO test case (minimum fields)
- Title: TC### – <Feature> – <Scenario>
- Steps: action + expected per step
- Links: requirement/work item (and design/spec link)
- Attachments: screenshots/supporting docs (when helpful)
- Tags: UAT, Regression, Security, Flow, etc.
Excel test case row (TC###)
- Test Case ID: TC###
- Objective:
- Preconditions: (user role, required data, environment)
- Test Steps: 1. 2. 3.
- Expected Results:
- Actual Results:
- Pass/Fail:
- Comments:
- Tester / Date:
Bug report (ADO or Excel)
- Title: [UAT][TC###] <short description>
- Environment:
- User/Role tested:
- Step # that failed:
- Steps to reproduce:
- Expected vs Actual:
- Screenshot / Error text:
- Record URL/GUID:
- Severity: Blocker/High/Med/Low
- Notes: patterns (only role X, only after step Y)
Enhancement request (ADO recommended)
- Title: [Enhancement] <short description>
- Reason / value:
- Current behavior:
- Proposed behavior:
- Priority: Now / Next / Later
- Related test case / requirement link:
Other tools teams use
If a client already has a tool they love, use it. The UAT success factor is the structure, not the platform.
- Jira shops often use add-ons like Zephyr or Xray for test management.
- Mature QA orgs may use dedicated tools like TestRail.
- Some M365-heavy orgs use Microsoft Lists for lightweight tracking.