The Day UAT Lied to Everyone
The Most Misunderstood Phase in the Lifecycle
User Acceptance Testing (UAT) is supposed to answer one critical business question:
“Does this solution support our actual business process?”
Too often, in the heat of a Salesforce delivery cycle, it answers a completely different question:
“Can we legally approve this release so we can go home?”
We have all been there. The Sandbox is ready, the stakeholders are notified, and the testing begins. The results come back 100% green. Sign-offs are gathered. Confidence is high.
And then, two days into Production, the support tickets start piling up. The users are frustrated. The logic is holding, but the process is broken.
What happened? UAT lied. Not maliciously, but effectively. It didn't validate the system; it merely confirmed the team's own biases.
How UAT Becomes Performative
In many Salesforce projects, UAT is performative art. It happens late in the timeline, often squeezed between a delayed build and a hard go-live date.
Business users, who have their own "day jobs," are handed a script. They are told exactly:
- What to test.
- Where to click.
- What data to enter.
- What "should" happen.
They follow the script. They confirm the outcome matches the text on the screen. They sign the document.
Nothing explodes because they walked the exact path the developer paved for them. They didn't test the system; they tested the Happy Path.
Salesforce-Specific Blind Spots
Salesforce amplifies this bias because the platform is so malleable. To ensure UAT "goes smoothly," project teams often inadvertently rig the game.
1. The "Just Make It Work" Permissions
To avoid "unnecessary" friction during UAT, Admins often grant testers elevated permissions.
- "Oh, you can't see that field? Let me just add you to this Permission Set Group."
- "You can't save? I'll temporarily switch you to a different Profile."
The test passes. But in Production, those temporary fixes don't exist. The actual end-users hit a wall of Insufficient Privileges errors because the security model wasn't tested—it was bypassed.
2. The Clean Data Fallacy
UAT usually happens in a Partial Copy sandbox with curated data.
- Flows run on records with perfect fields.
- Validation Rules don't trigger because the data meets all criteria.
Real users don't have clean data. They have records imported from spreadsheets in 2019. They have missing fields, duplicate contacts, and legacy Record Types that the new Flow wasn't designed to handle. UAT never saw these edge cases because the environment didn't contain them.
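One practical counter to the clean-data fallacy is to seed the UAT sandbox with deliberately messy records before testing starts. A minimal sketch follows; the field names, the legacy Record Type labels, and the flaw ratios are all illustrative assumptions, not taken from any real org's schema.

```python
import csv
import io
import random

# Hypothetical record shape -- these field names are illustrative only.
FIELDS = ["Name", "Email", "Phone", "RecordType", "AnnualRevenue"]
# Retired Record Types that still linger on old rows (hypothetical labels).
LEGACY_RECORD_TYPES = ["Partner_Legacy", "Reseller_2019", ""]

def make_dirty_rows(n, seed=42):
    """Generate records with the flaws real data has: blanks,
    duplicates, junk values, and legacy Record Types the new Flow never saw."""
    rng = random.Random(seed)
    rows = []
    for i in range(n):
        rows.append({
            "Name": f"Account {i}",
            "Email": f"user{i}@example.com" if rng.random() > 0.2 else "",  # ~20% missing
            "Phone": f"555-01{i:02d}" if rng.random() > 0.3 else "",        # ~30% missing
            "RecordType": rng.choice(["Customer"] * 3 + LEGACY_RECORD_TYPES),
            "AnnualRevenue": rng.choice(["100000", "", "N/A"]),             # junk values
        })
    # Inject exact duplicates, as a 2019 spreadsheet import would.
    rows.extend(rng.sample(rows, k=max(1, n // 10)))
    return rows

def to_csv(rows):
    """Serialize the dirty rows to CSV for a data-loader import."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()
```

If the new Flow or Validation Rule survives a load of these rows, it has earned far more trust than a hundred green checkmarks against curated data.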
The Silent Contract of UAT
There is often an unspoken agreement between the project team and the business testers:
“We won’t test things that could block delivery, and you won’t surprise us with bad news.”
QA is frequently excluded from UAT interpretation. We see a binary result: Passed. We don't see that the user skipped steps 4 through 7 because "they didn't seem relevant." We don't see that they only tested one Opportunity Record Type instead of all five.
Context is lost. Nuance is erased. And UAT becomes a dangerous rubber stamp.
The QA Correction: Trust but Verify
UAT becomes dangerous when it replaces QA validation. In Salesforce, UAT confirms expectations, not system robustness. Those are different things.
Strong QA does not ask: "Did UAT pass?"
Strong QA asks: "What was excluded from UAT?"
QA Takeaways
- Audit the Testers: Check the User record of the people doing UAT. Do they have the exact Profile and Permission Sets of a Production user? If they have "Modify All Data," the test is void.
- Monitor the Logs: While UAT is happening, watch the debug logs. Did they actually trigger the complex Flow, or did they manually update the field? Did they hit a caught exception that the UI swallowed?
- Challenge the Script: If the UAT script says "Enter 100 in the Amount field," try entering -100 or ABC. If the users aren't doing negative testing, you must do it twice as hard in QA.
- Verify Data Volume: UAT users rarely test bulk actions. If the business process involves uploading a CSV, ensure someone tests at that volume, even if the UAT user only creates one record manually.
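The first takeaway, auditing the testers, can be scripted. Below is a minimal sketch that runs over an exported list of PermissionSetAssignment rows (for example, the result of a SOQL query selecting `Assignee.Name`, `PermissionSet.Name`, and `PermissionSet.PermissionsModifyAllData`). The dict shape and the `audit_uat_testers` helper are assumptions for illustration; adapt them to however you extract the data from your org.

```python
def audit_uat_testers(assignments, production_baseline):
    """Flag UAT testers whose permissions diverge from a real end user.

    assignments: list of {"tester": str, "perm_set": str, "modify_all": bool}
                 (a simplified stand-in for a PermissionSetAssignment export)
    production_baseline: set of permission set names a Production user holds
    Returns {tester: [reasons the sign-off should be questioned]}
    """
    findings = {}
    for row in assignments:
        reasons = findings.setdefault(row["tester"], [])
        if row["modify_all"]:
            # "Modify All Data" bypasses sharing and field-level security -- the test is void.
            reasons.append(f'{row["perm_set"]}: grants Modify All Data')
        if row["perm_set"] not in production_baseline:
            reasons.append(f'{row["perm_set"]}: not assigned to Production users')
    # Keep only testers with at least one problem.
    return {tester: reasons for tester, reasons in findings.items() if reasons}

# Example run with made-up testers and permission set names:
baseline = {"Sales_User", "CPQ_Read"}
rows = [
    {"tester": "Ana", "perm_set": "Sales_User", "modify_all": False},
    {"tester": "Ben", "perm_set": "Admin_Temp", "modify_all": True},
]
print(audit_uat_testers(rows, baseline))
# Ana is clean; Ben is flagged on both counts.
```

Any tester who appears in the output tested a different system than the one your users will get.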
The Uncomfortable Truth
UAT is not about discovering defects in the code. It is about discovering misalignment in the process.
If we let UAT become a formality, we are effectively deciding that Production is our real test environment.
UAT didn't lie. It just wasn't asked the right questions.
How do you ensure your business users go off-script during UAT?