The Test Case That Proved Nothing
The Illusion of Green Lights
In the dashboard of any ALM tool—whether it's Jira, Azure DevOps, or a spreadsheet—there is nothing more satisfying than a column of green checkmarks.
A test case was written. A human executed it. The result was "Pass." Evidence was attached.
And yet, it is entirely possible that this test proved absolutely nothing.
Failure Mode: Execution Without Intent
We often confuse the act of testing with the value of testing. We treat execution as a ritual: follow the steps, click the buttons, log the result. But if the test case lacks a specific hypothesis, the green checkmark is just noise.
How Execution Becomes Ritual
Test cases are often written as instruction manuals rather than validation experiments. They describe the How (clicks, inputs, navigation) but ignore the Why (risk, logic, system behavior).
The Ritual:
- Navigate to Opportunity.
- Click New.
- Enter Name: "Test Opp".
- Click Save.
- Expected Result: Record saves successfully.
This test will pass 99% of the time. But what did it validate?
- Did it check that the "Big Deal Alert" Flow fired? No.
- Did it verify that the Integration User can't edit the amount? No.
- Did it ensure the Sharing Rules calculated correctly? No.
It only proved that Salesforce (the platform) is capable of saving a record. We already knew that. Salesforce tests its own platform. You are supposed to test your implementation.
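The gap is easy to see in miniature. Below is a hedged sketch against a toy in-memory model (Opportunity, Org, and save are illustrative stand-ins, not real Salesforce APIs): the ritual test asserts only that the save succeeded, so a silently deactivated "Big Deal Alert" automation sails through green.

```python
# Toy model, NOT the Salesforce API: illustrates why "it saved" proves nothing.

class Opportunity:
    def __init__(self, name, amount=0):
        self.name = name
        self.amount = amount

class Org:
    def __init__(self):
        self.records = []
        self.alerts = []

    def save(self, opp):
        self.records.append(opp)
        # Bug: the "Big Deal Alert" automation was deactivated in a deploy.
        # if opp.amount > 100_000:
        #     self.alerts.append(opp.name)
        return True

org = Org()
assert org.save(Opportunity("Test Opp"))               # ritual test: green
assert org.save(Opportunity("Big Deal", amount=250_000))  # still green...
print("alerts fired:", org.alerts)                     # ...but nothing fired
```

Both assertions pass, the dashboard shows two green checkmarks, and the broken alert ships.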
Salesforce Amplifies the Problem
In custom web development, testing basic UI interactions is necessary because a broken JavaScript file might prevent a button from working.
In Salesforce, the UI is incredibly stable. The "Save" button works. The standard page layout loads. If your test focuses on the surface layer ("Did the page load?"), you are testing things that rarely break.
Meanwhile, the real risks—conflicting Automation, Permission Set gaps, and Order of Execution issues—operate entirely beneath the surface.
A test case that validates the interface while ignoring the engine is a waste of time.
The Missing Question
Most weak test cases answer the question:
“Did it work?”
Strong test cases answer:
“What risk did we just reduce?”
If you cannot articulate exactly what failure mode you are hunting, you aren't testing; you are touring the application.
The "So What?" Test
Let’s look at that Opportunity test again.
Weak Test:
“Create Opportunity and save successfully.”
- Result: Pass.
- Value: Near Zero.
Strong Test:
“Create Opportunity with Amount > $100k as a Standard User to verify that the Approval Process locks the record.”
- Result: Pass/Fail.
- Value: High. We validated logic, security, and process.
The difference isn't the effort. The difference is the Intent.
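The strong version can be sketched the same way. Everything below is a hypothetical stand-in (the threshold, the lock flag, the profile check are assumptions for illustration, not Salesforce internals), but notice that the assertions now target the risk itself: the lock and the permission boundary, not the save.

```python
# Toy approval model, NOT the Salesforce API: the test targets the lock, not the save.

APPROVAL_THRESHOLD = 100_000  # assumed business rule for this sketch

class Opportunity:
    def __init__(self, amount):
        self.amount = amount
        self.locked = False

def save(opp):
    # Approval process locks big deals on save.
    if opp.amount > APPROVAL_THRESHOLD:
        opp.locked = True

def can_edit(opp, profile):
    # Only admins may edit a locked record.
    return not opp.locked or profile == "System Administrator"

opp = Opportunity(amount=250_000)
save(opp)
assert opp.locked, "approval process should lock deals over $100k"
assert not can_edit(opp, "Standard User")
assert can_edit(opp, "System Administrator")
print("validated: lock fired and permission boundary held")
```

If the approval logic or the permission check ever regresses, this test goes red. The ritual version never would.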
The QA Correction: Defend Your Test
Every test case must be able to defend its own existence. As a QA lead, challenge your suite with this rule:
“If I deleted this test case tomorrow, what risk returns to the system?”
- If the answer is "We might deploy a broken Flow," keep the test.
- If the answer is "We won't know if the 'Save' button works," delete the test.
The Deeper Problem
Execution-focused QA creates a dangerous metric: High Coverage, Low Confidence.
Management sees 500 passing tests and assumes the system is bulletproof. But if 400 of those tests are just "ritual clicking," the system is wide open to defects.
Coverage without intent is just movement. It consumes budget, burns time, and hides the real bugs behind a wall of green.
QA Takeaways
- Audit Your Titles: If a test title is "Verify Save," rename it to "Verify [Specific Logic] on Save." If you can't, delete it.
- Test the Invisible: Explicitly check the SystemModstamp, the background field updates, and the email logs. Don't trust the UI alone.
- Kill the Ritual: Stop writing steps for "Login." Start writing steps for "Assume the persona of a restricted user."
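The title audit can even be automated. A minimal sketch (the word list and titles are hypothetical examples, and real suites need human judgment on top): flag any title that names only mechanics and no specific behavior.

```python
# Hypothetical title audit: flag test names that describe mechanics
# ("save", "login", "click") rather than the logic under test.

RITUAL_WORDS = {"save", "login", "click", "loads", "opens", "page"}

def is_ritual(title: str) -> bool:
    words = set(title.lower().replace("verify", "").split())
    # Ritual if every remaining word is pure mechanics.
    return bool(words) and words <= RITUAL_WORDS

suite = [
    "Verify Save",
    "Verify Login",
    "Verify Approval Process locks Opportunity > $100k on Save",
]
for title in suite:
    print(f"{'RITUAL ' if is_ritual(title) else 'KEEP   '}{title}")
```

The first two titles get flagged; the third survives because it names the specific logic being validated.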
Tests don’t create quality. Understanding does.
Go look at your last passed test case. Did it prove the feature works, or did it just prove the button clicks?