02.04 — Risk-based Testing & Prioritisation

Salesforce Testing Quadrants

A structured, Salesforce-adapted framework for grouping test conditions, managing effort, and prioritising tests in a scalable and repeatable way.

---

02.04.01 — Why the Quadrants Work for Salesforce

  • Why classical ISTQB levels don't fit Salesforce
  • Platform constraints → predictable risk patterns
  • How Quadrants simplify analysis & prioritisation
  • The role of Quadrants in reducing redundant test cases
  • When (and when not) to use Quadrants

---

02.04.02 — Quadrant 1: Platform-facing, configuration, core checks

  • What belongs in Q1
  • Typical Q1 issues: CRUD, FLS, validation rules, layouts
  • Why Q1 tests are fast and high-value
  • Q1 as “platform integrity check”
  • Examples from real SFDC projects

---

02.04.03 — Quadrant 2: Business-facing, functional, integration flows

  • What belongs in Q2
  • Why Q2 represents true business workflows
  • Detecting broken logic early
  • Integration flow examples
  • When a test should not be classified as Q2

---

02.04.04 — Quadrant 3: Exploratory, critique, business reality

  • Real-user behaviour simulation
  • Discovering hidden logic & UX gaps
  • Exploratory techniques for Salesforce
  • How Q3 adds value beyond scripted tests
  • Typical Q3 findings (with examples)

---

02.04.05 — Quadrant 4: Technical critique, limits, edge cases

  • Stressing Apex, Flows, integrations
  • Governor limits: what testers should care about
  • Negative testing & destructive scenarios
  • When Q4 is mandatory
  • Typical Q4 failure patterns

---

02.04.06 — How to Assign Test Conditions to Quadrants

  • Step-by-step method
  • Resolving Q1 vs Q2 confusion
  • Resolving Q2 vs Q3 confusion
  • Recognising Q4 triggers
  • Distribution guidelines (80/15/5 rule)

---

02.04.07 — Case Study: Full Quadrant Mapping for One Requirement

  • Starting from a raw requirement
  • Extracting visible test conditions
  • Identifying hidden conditions
  • Mapping all conditions to quadrants
  • Producing a final mini test plan
  • Before vs After: how Quadrant mapping changes analysis

02.04.01 — Why the Quadrants Work for Salesforce

Salesforce projects do not behave like classic software development.
You do not test a product the team built — you test a solution assembled inside a platform with its own rules, limits, constraints and automation engine.

This distinction makes the Testing Quadrants uniquely effective for Salesforce, because they allow testers to split their work into platform-facing risk and business-facing risk, and then decide how deep each scenario needs to go.

This lesson explains why the quadrants work, why they’re better than ISTQB-style levels for Salesforce, and how they reduce risk, effort and confusion on every project.

---

🌍 1. Salesforce Testing Happens Inside a Platform, Not Inside Code

Traditional test levels (unit → integration → system → acceptance) assume full control over the tech stack.

Salesforce does not give you that.

  • You cannot influence how Apex is executed on the platform.
  • You cannot change underlying database behaviour.
  • You cannot bypass governor limits.
  • You cannot control how Flow runtime resolves transactions.
  • You cannot isolate components in the same way as in microservices.

Instead, Salesforce gives you:

  • configuration tools,
  • declarative automation,
  • Apex code with strict limits,
  • layered security,
  • metadata-driven UI.

The Quadrants model works because it reflects how work is actually delivered on Salesforce — a hybrid of configuration, automation, code, and business workflow.

---

🧭 2. Why Quadrants Cover the Real Risk Areas in Salesforce

The Quadrants split testing according to two axes:

  • Platform-facing ←→ Business-facing
  • Support the team ←→ Critique the product

This creates four natural categories:

  1. Q1 — Configuration, metadata, CRUD/FLS, permissions, core behaviour
  2. Q2 — Functional flows, integrations, user journey
  3. Q3 — Exploratory testing, reality checks, business unpredictability
  4. Q4 — Technical limits, performance, governor limits, edge-case destruction

Every test condition found during analysis naturally fits one of these layers.

This is why testers with strong analysis skills are faster:

they know exactly which quadrant a condition belongs to.

---

⚡ 3. Why Quadrants Reduce Time and Cost

Most project delays come from testing the wrong things at the wrong time.

Quadrants fix that by enabling:

✔ Rapid early checks (Q1)

You validate the data model, field types, field-level security, record access and configuration before spending hours building data or running long flows.

Q1 catches 30–40% of defects in minutes, not hours.

✔ Fast functional validation (Q2)

Here you verify that the requirement actually works from the user's perspective.

✔ Real-world scenarios (Q3)

Business users rarely follow the perfect path.
Q3 recreates their unpredictable behaviour — especially important in Sales Cloud or Service Cloud.

✔ Breaking attempts (Q4)

If something involves automation + looping + integrations + large datasets, Q4 testing is mandatory.

Without Quadrants, teams either:

  • spend too much time on Q2 flows too early, or
  • forget Q1 completely and find simple issues three days before UAT.

Quadrants eliminate both problems.

---

🧩 4. Case Study: Process Builder → Apex Rewrite Failure
(A real project scenario)

A large project migrated multiple Process Builders into Apex classes.

Everything worked perfectly… when tested using System Administrator.

The team ran only Q2-level functional tests:

  • create record → update → automation fires → result OK.

But Process Builder runs in system context, while Apex (if written with sharing) does not.

The admin account masked all permission issues.

When the release reached Production:

  • half the users couldn’t execute the automation,
  • cross-object updates silently failed,
  • invoices could not be generated,
  • critical flows stopped mid-transaction.

What actually went wrong?

This was a Q1 failure, not a Q2 one.

Nobody checked:

  • CRUD on objects modified by the Apex automation
  • FLS on fields accessed by the logic
  • record access rules for starting users
  • required permission sets

A single Q1 checklist run would have exposed the entire risk before any functional testing.
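
Such a checklist run can even be scripted. Below is a hedged sketch in anonymous Apex; the username and the Invoice__c object are placeholders, not names taken from the project.

    // Q1 sketch (anonymous Apex): does the intended business user actually hold
    // Create/Edit on the objects the new automation touches?
    // 'sales.user@example.com' and 'Invoice__c' are placeholder names.
    Id userId = [SELECT Id FROM User WHERE Username = 'sales.user@example.com' LIMIT 1].Id;

    List<ObjectPermissions> perms = [
        SELECT SobjectType, PermissionsCreate, PermissionsEdit
        FROM ObjectPermissions
        WHERE SobjectType IN ('Opportunity', 'Invoice__c')
          AND ParentId IN (SELECT PermissionSetId
                           FROM PermissionSetAssignment
                           WHERE AssigneeId = :userId)
    ];

    // No row with PermissionsEdit = true for an object the automation updates
    // means the user cannot perform the update the logic relies on.
    for (ObjectPermissions p : perms) {
        System.debug(p.SobjectType + ' create=' + p.PermissionsCreate +
                     ' edit=' + p.PermissionsEdit);
    }

If nothing in that output grants Edit on an object the Apex writes to, the release is already at risk, regardless of what admin-run functional tests say.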

This is why Quadrants matter.

---

🎯 5. Summary: Why Quadrants Are the Correct Model for Salesforce

  • Salesforce has its own architecture, limits and execution model.
  • ISTQB-style levels do not map to the platform.
  • Quadrants separate platform risk from business risk — exactly what Salesforce needs.
  • Q1 catches misconfiguration early.
  • Q2 validates intended behaviour.
  • Q3 validates real user behaviour.
  • Q4 validates platform behaviour under pressure.

If you skip Quadrants, you test in the wrong order.
If you use Quadrants, you test in the right order — with less effort and higher accuracy.

---

Next lesson:

02.04.02 — Quadrant 1: Platform-facing, configuration, core checks


02.04.02 — Quadrant 1: Platform-facing, configuration, core checks

Quadrant 1 (Q1) focuses on the foundation of every Salesforce requirement:
the configuration, metadata, permissions and platform rules that must exist before any business scenario can ever work.

If something is wrong here, no functional test will ever pass, no matter how experienced the tester is.

Q1 is the quadrant that catches:

  • missing fields, wrong field types, incorrect picklists
  • broken validation rules
  • CRUD/FLS issues
  • wrong OWD or sharing
  • page layout inconsistencies
  • Flow configuration mismatches
  • automation firing under wrong conditions
  • dependency failures (Record Types, criteria, entry conditions)

This is the fastest, cheapest and most reliable quadrant in Salesforce testing — and the one most teams underuse.

---

🌍 1. Why Q1 Exists

Salesforce is metadata-driven.
This means the behaviour of the system is determined not by code, but by configuration stored in the platform.

If the metadata is wrong:

  • the system will not behave as intended,
  • automation may not fire,
  • or it may fire when it shouldn’t,
  • users may not even see the fields required to execute the process.

Q1 ensures the foundation is correct before the actual “testing” begins.

---

🔍 2. What Q1 Covers

A. Data Model

Check:

  • field type, length, precision, scale
  • required/not required
  • default values
  • picklist values
  • controlling ↔ dependent fields
  • relationship behaviour (lookup vs master-detail)
  • record type assignment
  • validation rules
  • field history tracking

Even a “simple” new field can break five other things.

---

B. Security: CRUD, FLS, Sharing

Testers check:

  • object-level access (Create, Read, Edit, Delete)
  • field-level access (read-only, hidden, editable)
  • record-level access (OWD, role hierarchy, sharing rules)
  • permission set assignments
  • profile mismatches
  • differences between sandbox users

A requirement is never valid until CRUD/FLS is validated for the correct actor, not for Admin.
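
One hedged way to pin this down is a small Apex test run as the intended actor. The profile name and the fields asserted below are assumptions for illustration; the describe-based assertions are evaluated in the context set by System.runAs, so they reflect that actor's access rather than the admin running the test.

    // Hedged Q1 sketch as an Apex test. 'Sales Rep' (profile name) and the
    // Amount field are placeholders; swap in the real actor and fields.
    @IsTest
    private class Q1SecurityCheckTest {

        @IsTest
        static void intendedActorHasRequiredAccess() {
            User rep = [SELECT Id FROM User
                        WHERE Profile.Name = 'Sales Rep' AND IsActive = true
                        LIMIT 1];

            System.runAs(rep) {
                System.assert(Schema.sObjectType.Opportunity.isUpdateable(),
                              'Actor has no Edit access on Opportunity');
                System.assert(Schema.sObjectType.Opportunity.fields.Amount.isAccessible(),
                              'Amount field is hidden from the actor by FLS');
                System.assert(Schema.sObjectType.Opportunity.fields.Amount.isUpdateable(),
                              'Amount field is read-only for the actor');
            }
        }
    }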

---

C. Declarative Logic (Flows, validation, assignment rules)

Q1 validates:

  • entry conditions
  • stop conditions
  • required data
  • correct variables
  • correct update targets
  • no infinite loops
  • correct error handling

Many Flow defects are not functional — they are configuration-level mistakes.

---

D. UI Configuration

Check:

  • page layout visibility
  • Lightning Record Pages (components, filters)
  • button visibility
  • Quick Actions
  • dynamic visibility rules

Users cannot follow a business process if they can’t see the required UI components.

---

⚡ 3. Why Q1 Saves Enormous Time

Running a functional test (Q2) can take 5–20 minutes.
Running a Q1 checklist takes 30–90 seconds.

Q1 typically catches:

  • 40% of all defects in early stages
  • 60–70% of configuration-related defects
  • 90% of CRUD/FLS issues
  • nearly all validation rule conflicts

This is where senior testers beat juniors — not by clicking faster, but by diagnosing the underlying structure.

---

🧩 4. Case Study: Flow That Worked Only for Admins
(based on real project notes)

A project delivered a complex Flow for invoice generation.
Admin tests passed perfectly.
Functional tests also looked fine… because the tester unknowingly used a user with elevated permissions.

Once regular users tried:

  • Flow failed silently
  • invoices were not generated
  • error logs showed record access failures
  • dependent lookups were not visible
  • Flow actions attempted to write to fields blocked by FLS

Q1 Findings (after re-analysis):

  • The user lacked Edit on a related object
  • One of the Flow steps attempted to update a field hidden via FLS
  • OWD was set to Private, but Flow assumed visibility
  • A validation rule blocked updates unless Admin bypass was active

All of this would have been caught before any functional testing if Q1 had been executed properly.
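
The record-access part of that list can also be checked directly. A hedged sketch, run by an admin in anonymous Apex, with placeholder IDs:

    // Q1 sketch: record-level access for the user who actually runs the Flow.
    // Both IDs below are placeholders; an admin runs this query on behalf of
    // the end user.
    Id flowUserId = '005000000000000AAA';   // the real end user, not an admin
    Id recordId   = 'a01000000000000AAA';   // the record the Flow tries to update

    UserRecordAccess access = [
        SELECT RecordId, HasReadAccess, HasEditAccess
        FROM UserRecordAccess
        WHERE UserId = :flowUserId AND RecordId = :recordId
    ];

    // HasEditAccess = false with OWD set to Private explains a silent Flow
    // failure: the automation assumed visibility the user never had.
    System.debug('read=' + access.HasReadAccess + ' edit=' + access.HasEditAccess);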

---

✔ 5. Summary

Q1 ensures:

  • configuration is valid
  • data model is correct
  • users have proper access
  • UI is visible
  • automation conditions make sense

Q1 is not optional.
It is the bedrock of Salesforce test design.

---

Next lesson:

02.04.03 — Quadrant 2: Business-facing, functional, integration flows


02.04.03 — Quadrant 2: Business-facing, functional, integration flows

If Q1 checks whether the system is built correctly,
Quadrant 2 (Q2) checks whether the system behaves correctly.

Q2 represents:

  • full functional validation,
  • realistic workflow execution,
  • ensuring the requirement actually delivers business value.

Q2 is where testers follow the user journey and confirm that the logic, UI, automation and data behave correctly in real business scenarios.

---

🌍 1. What Q2 Covers

✔ A. Positive-path functional testing
The “happy path” — does the requirement actually work?

Examples:

  • creating an Opportunity triggers correct stage logic
  • Case routing assigns the case to the right queue
  • Flow updates the correct records
  • Apex logic populates the right fields

---

✔ B. Integration of multiple metadata components

Most Salesforce features are not isolated.

For example:

  • Flow writes to fields → validation rules fire → triggers update → automation launches

Q2 validates the full chain, not the individual parts.

---

✔ C. Cross-object logic and end-to-end steps

Even a simple requirement touches:

  • UI
  • Data model
  • Automation
  • Security
  • Record access
  • Notifications
  • Integrations

Q2 verifies the full journey, not just one component.

---

✔ D. Expected business outcome, not technical behaviour

Q2 always answers the question:

“Does this deliver the value the business expects?”

Not:

  • “Does the button work?”

but:

  • “Does clicking this button achieve the expected business effect?”

Salesforce testers must think like business analysts at this stage.
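
To make this concrete, here is a hedged Apex test sketch for one of the Q2 examples above, Case routing. The queue name, profile name and routing criteria are assumptions; the point is that the assertion targets the business outcome (the Case lands in the right queue), not a technical detail.

    // Hedged Q2 sketch: a web Case should end up in the expected queue.
    // 'Support Tier 1', 'Standard User' and the routing criteria are placeholders.
    @IsTest
    private class CaseRoutingQ2Test {

        @IsTest
        static void webCaseIsRoutedToSupportQueue() {
            Group supportQueue = [SELECT Id FROM Group
                                  WHERE Type = 'Queue' AND Name = 'Support Tier 1'
                                  LIMIT 1];
            User agent = [SELECT Id FROM User
                          WHERE Profile.Name = 'Standard User' AND IsActive = true
                          LIMIT 1];

            System.runAs(agent) {
                Case c = new Case(Subject = 'Router test', Origin = 'Web',
                                  Priority = 'High');

                // Fire assignment rules on insert, as a real web-to-case record would
                Database.DMLOptions opts = new Database.DMLOptions();
                opts.assignmentRuleHeader.useDefaultRule = true;
                c.setOptions(opts);

                Test.startTest();
                insert c;
                Test.stopTest();

                c = [SELECT OwnerId FROM Case WHERE Id = :c.Id];
                System.assertEquals(supportQueue.Id, c.OwnerId,
                                    'Web case should land in the Support Tier 1 queue');
            }
        }
    }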

---

🔍 2. Why Q2 Depends on Q1

If Q1 isn’t done:

  • missing CRUD breaks updates
  • missing fields break flows
  • incorrect picklists break automation
  • wrong OWD breaks visibility
  • UI filters hide critical components

Q2 must never start until Q1 passes.

Otherwise you spend hours debugging… configuration issues.

---

⚡ 3. Common Q2 Mistakes (from real projects)

❌ Testing with System Administrator
This hides:

  • security issues
  • record access problems
  • automation failures
  • validation conflicts
  • UI component visibility issues

❌ Not using realistic data
Salesforce logic often behaves differently with:

  • record volume
  • optional fields left empty
  • dependent picklists
  • complex relationships
  • historical data

❌ Testing only the simplest scenario
Most real business usage happens in the messy middle, not in the perfect path.

---

🧩 4. Case Study: Opportunity Stage Logic Breaks in Real Flows
(from a real Sales Cloud project)

A project implemented stage-dependent automation:

  • validation rules
  • mandatory fields
  • pricing logic
  • product synchronization

Admin tests passed.
Simple QA tests (Q2 level, but oversimplified) also passed.

But real salespeople reported:

  • blocked updates
  • missing products
  • pricing resets
  • validation rules triggering at wrong times

Root Cause (found after reproducing business flows)

Testers had only checked:

  • Stage → required fields → save

But actual reps:

  • added 10+ products
  • updated discounts
  • added opportunity team members
  • removed one product mid-process
  • changed currency
  • reopened Opportunity to correct data

Findings:

  • Flow recalculated pricing incorrectly after product removal
  • Validation rule conflicted with stage rollback
  • Required field became invisible due to record type change
  • Apex logic double-counted discounts

All these issues required Q2-level business flow simulation, not “positive path checking.”

---

✔ 5. Summary

Q2 validates:

  • the requirement works in practice
  • all automation works together
  • business value is delivered
  • the full user journey behaves as expected

Q2 is the heart of Salesforce functional testing
but only after Q1 guarantees the foundation is correct.

---

Next lesson:

02.04.04 — Quadrant 3: Exploratory, critique, business reality


02.04.04 — Quadrant 3: Exploratory, critique, business reality

Quadrant 3 (Q3) is where testing becomes real.

It is the first quadrant where we stop asking:

“Does it work?”

and begin asking:

“Does it still work when used by real humans, under real conditions, with real business data?”

Q3 is business-facing, exploratory, and intentionally critical.
While Q1 and Q2 verify correctness, Q3 verifies survivability.

If Q1 is the blueprint check,
and Q2 is the functional validation,
then Q3 is the reality check.

---

🌍 1. Purpose of Q3

Salesforce implementations often look perfect on paper:

  • clean diagrams,
  • perfect acceptance criteria,
  • simplified test scenarios.

But businesses do not operate in “perfect scenario mode.”
Real users:

  • click in the wrong order,
  • change their mind halfway,
  • go back a step,
  • reopen old records,
  • skip optional fields,
  • attach inconsistent files,
  • work with incomplete data,
  • process high volume in short bursts.

Q3 tests whether the solution can survive real-world imperfections.

---

🔍 2. What Q3 Covers

A. Exploratory testing with business context

This is not freestyle clicking.
Exploration is structured around:

  • typical user habits,
  • known shortcuts,
  • previous production incidents,
  • business exceptions,
  • variations of the process.

Examples:

  • What happens if a Case is assigned incorrectly and then reassigned?
  • What if a sales rep adds products, removes them, adds them again, changes currency, and then changes stage?
  • What if a Flow is triggered with partial data because the user saved too early?

---

B. Testing realistic multi-step user behaviour

Real users rarely follow a straight line.

Q3 examples:

  • Working with 20+ products instead of two
  • Editing Opportunity Team mid-process
  • Reopening a previously closed Case
  • Mid-process role reassignment
  • Saving incomplete drafts
  • Switching between mobile and desktop

---

C. Cross-process interactions

Salesforce is not siloed.

Q3 asks:

  • What happens when Sales Cloud and Service Cloud logic overlap?
  • What happens when CPQ installations push changes to objects shared with custom automation?
  • Does the solution handle updates coming from integration partners?

---

D. Resolution of contradictions within requirements

Many real defects come from:

  • mismatched assumptions,
  • unclear business rules,
  • edge conditions that were not documented at all.

Example:

A requirement says:

“Create invoice automatically when Opportunity is Closed Won.”

But real business process says:

  • some Oppties require approval first
  • some require finance team validation
  • some require missing data to be captured
  • some require manual product cleanup

Q3 reveals these contradictions.

---

⚡ 3. When Q3 Should Be Executed

Q3 is performed:

  • after Q1 confirms configuration correctness,
  • after Q2 confirms functional behaviour,
  • before execution of full E2E regression suites,
  • before UAT preparation (often testers find issues earlier than clients).

Q3 is also ideal after major refactors, especially migrations from:

  • Process Builder → Flow
  • Flow → Orchestrator
  • Apex rewrites
  • Entire Sales Process redesigns

Because in refactors, everything “works” functionally, but nothing works the same way.

---

🧩 4. Case Study: One Requirement → Fifteen Real-world Breakages
(based on a real project case and its documentation)

Requirement (simplified):

“When Opportunity moves to Stage = Proposal, generate a Quote, apply discount logic, and notify the Account Owner.”

Q1/Q2 outcome:

All passed.
Everything worked “as documented.”

Q3 findings (realistic behaviour simulation):

  1. Sales rep changed Opportunity Owner mid-process → notifications failed
  2. Discount logic broke when more than 8 products were added
  3. Quote generation failed when Opportunity Currency ≠ Account Currency
  4. Removing a product after discount calculation caused negative totals
  5. Reopening Opportunity produced duplicate Quotes
  6. Approval Process suppressed notifications
  7. Discount Flow executed twice due to record-trigger mismatch
  8. Sales team used a “Save & New” shortcut, bypassing a screen flow
  9. Quote PDF button was hidden for specific profiles
  10. Discount above threshold required finance validation — missing in requirement
  11. Large Opportunities (50+ products) hit CPU limit
  12. Product family filter did not apply to updated product bundles
  13. Stage rollback removed mandatory fields that Flow depended on
  14. Integration pushed product updates during editing, causing Flow re-entry
  15. Validation rule for related object blocked Quote updates

Lesson:

Q3 exposes the difference between a requirement and reality.
This is where senior testers deliver their strongest value.

---

🛠 5. Techniques That Work Best in Q3

✔ Session-based exploratory testing
Define:

  • mission
  • boundaries
  • targets
  • risks
  • reporting structure

✔ Data variety testing
Test with (a data factory sketch follows after this list of techniques):

  • minimal data
  • maximal data
  • incorrect but realistic data
  • incomplete data
  • historical data

✔ Behavior variation
Simulate:

  • fast rep clicking
  • slow rep working through forms
  • inexperienced users making mistakes
  • switching users mid-flow

✔ Event timing manipulation
Example:

  • What if integration sync happens during edit?
  • What if approval happens while the Flow is running?
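
For the data-variety technique above, a small factory keeps the session moving; the field choices below are illustrative assumptions, not the project's data model.

    // Sketch of a data-variety factory for Q3 sessions: "minimal", "maximal"
    // and "incomplete" data one method call away during exploration.
    public class Q3DataFactory {

        // Bare minimum the platform will accept
        public static Opportunity minimal() {
            return new Opportunity(Name = 'Q3 minimal',
                                   StageName = 'Prospecting',
                                   CloseDate = Date.today());
        }

        // Heavily populated record, the way long-lived production data looks
        public static Opportunity maximal(Id accountId) {
            return new Opportunity(Name = 'Q3 maximal ' + 'x'.repeat(60),
                                   AccountId = accountId,
                                   StageName = 'Proposal',
                                   CloseDate = Date.today().addDays(90),
                                   Amount = 9999999.99,
                                   Description = 'Imported legacy notes, renegotiated twice, '
                                               + 'owner changed, currency corrected.');
        }

        // Optional fields deliberately left empty, mimicking an early save
        public static Opportunity incomplete(Id accountId) {
            return new Opportunity(Name = 'Q3 draft',
                                   AccountId = accountId,
                                   StageName = 'Prospecting',
                                   CloseDate = Date.today());
        }
    }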

---

✔ 6. Summary

Q3 validates:

  • business realism
  • system resilience
  • user behaviour variability
  • undocumented assumptions
  • cross-process conflicts
  • process survival under non-ideal conditions

Q3 ensures:

“Even if users behave like users, the system still holds.”

Without Q3:

  • UAT becomes defect-heavy,
  • clients lose trust,
  • production issues occur under load or variation.

---

Next lesson:

02.04.05 — Quadrant 4: Technical critique, limits, edge cases

02.04.05 — Quadrant 4: Technical Critique, Limits, Edge Cases

Quadrant 4 (Q4) is the most technical quadrant in Salesforce testing.
It focuses on breaking the system deliberately — but in a controlled, analytical, risk-driven way.

Where Q3 tests business reality,
Q4 tests platform reality.

The question here is no longer “Does it work?”
nor “Does it survive real usage?”

Q4 asks:

“Where is the technical breaking point — and does that breaking point matter for this project?”

This quadrant is the least understood by clients and by junior testers.
Yet it prevents the most expensive failures.

---

🔥 1. Purpose of Q4

Salesforce is a shared, multi-tenant platform with strict execution rules:

  • governor limits
  • heap limits
  • record-locking behaviour
  • Flow runtime limits
  • SOQL limits
  • DML limits
  • CPU time limits
  • asynchronous execution dependencies
  • automation recursion control

Most real production failures are caused by breaching those constraints, not by incorrect business rules.

Q4 validates:

  • limits
  • scalability
  • concurrency
  • data volume behaviour
  • technical design resilience
  • automation chain stability

---

⚡ 2. What Q4 Includes

A. Governor Limit Stress Testing

Testing scenarios such as:

  • Opportunities with 100+ products
  • Accounts with 10,000+ related Cases
  • Mass updates via integration
  • Bulkified Apex vs non-bulkified Apex
  • Flows looping through large datasets

You check not only whether it works, but whether it fails predictably.
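
A hedged sketch of such a check as an Apex test: one Opportunity with 200 line items, closed inside Test.startTest/stopTest, with limit consumption logged while the (assumed) automation chain runs. Product and pricebook setup is deliberately simplified.

    @IsTest
    private class OpportunityBulkLimitTest {

        @IsTest
        static void closedWonSurvives200LineItems() {
            Account acc = new Account(Name = 'Bulk account');
            insert acc;

            Product2 prod = new Product2(Name = 'Widget', IsActive = true);
            insert prod;
            PricebookEntry pbe = new PricebookEntry(
                Pricebook2Id = Test.getStandardPricebookId(),
                Product2Id = prod.Id, UnitPrice = 100, IsActive = true);
            insert pbe;

            Opportunity opp = new Opportunity(Name = 'Bulk opp', AccountId = acc.Id,
                StageName = 'Proposal', CloseDate = Date.today(),
                Pricebook2Id = Test.getStandardPricebookId());
            insert opp;

            List<OpportunityLineItem> items = new List<OpportunityLineItem>();
            for (Integer i = 0; i < 200; i++) {
                items.add(new OpportunityLineItem(OpportunityId = opp.Id,
                    PricebookEntryId = pbe.Id, Quantity = 1, UnitPrice = 100));
            }
            insert items;

            Test.startTest();
            opp.StageName = 'Closed Won';
            update opp;   // fires the synchronous automation chain under test

            // Capture consumption inside the fresh limit context opened by startTest
            System.debug('CPU ms: ' + Limits.getCpuTime()
                       + ', SOQL: ' + Limits.getQueries()
                       + ', DML rows: ' + Limits.getDmlRows());
            Test.stopTest();   // flushes async work; a limit breach surfaces here too

            System.assertEquals('Closed Won',
                [SELECT StageName FROM Opportunity WHERE Id = :opp.Id].StageName,
                'Stage update must still succeed at this volume');
        }
    }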

---

B. Performance & Execution Path Variation

You intentionally push the system into:

  • CPU spikes
  • Flow path recursion
  • Record locking cascades
  • Chained automation (Flow → Apex → Flow → Integration)
  • High-volume DML behaviour

---

C. Negative Technical Behaviour

These tests determine if the system handles:

  • simultaneous edits
  • race conditions
  • partial failures
  • rollback behaviour
  • duplicate automation triggers
  • missing async execution slots

---

D. Automation Collision Detection

Salesforce allows:

  • before-save flows
  • after-save flows
  • record-triggered flows
  • invocable flows
  • platform events
  • asynchronous apex
  • schedulers
  • triggers

Q4 verifies that these do not collide in unexpected ways.

---

🧩 3. Case Study: Opportunity With 100 Products — The Typical Hidden Limit Failure
(adapted from a real project that hit CPU timeouts)

Scenario:

A simple process:

“When Opportunity is Closed Won, generate Invoices and associated records.”

Q1 & Q2:

Everything passed.

Q3:

Worked with normal data.

Q4:

Tester builds a large, realistic dataset:

  • Opportunity with ~100 products
  • Mixed product families
  • Additional discount fields
  • Related custom objects
  • Multiple pricebook interactions

Result:

The automation chain (Flow → Apex → Flow) hit:

  • CPU limit
  • DML row limit
  • SOQL limit inside a loop
  • record locking issues

The process collapsed only for heavy Opportunities.

Root Cause (found only in Q4):

  • Apex class was bulkified incorrectly
  • Flow was recalculating totals 3 times
  • Additional DML operations depended on record order
  • Integration pushed updates mid-transaction

Production would have failed for every Enterprise customer with large-volume deals.

Q4 prevented a catastrophic release.

---

🧨 4. Case Study: Case Volume Spike in Service Cloud
(based on your telecommunication platform project)

Requirement:

“System must assign Cases automatically based on routing rules.”

Everything passed until the team simulated a real monthly spike:

  • 12,000+ Cases loaded in under one hour
  • mixed priorities
  • mixed channels
  • multi-tier assignment logic

Findings:

  • Assignment Flow failed silently on some records
  • CPU timeouts caused partial routing
  • Queue-based assignment became inconsistent
  • Duplicate automation caused double-case escalation
  • Integration retry logic created race conditions

Lesson:

Q4 shows that volume amplifies design flaws hidden in Q1–Q3.
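
A scaled-down, hedged sketch of reproducing such a spike from anonymous Apex (real spikes are normally loaded through Data Loader or the Bulk API, as listed in the next section); the volume, field values and routing expectations are illustrative.

    // Q4 sketch: insert a burst of Cases with assignment rules active and
    // partial-success handling, the way an integration batch behaves.
    List<Case> spike = new List<Case>();
    for (Integer i = 0; i < 2000; i++) {
        spike.add(new Case(Subject = 'Spike test ' + i,
                           Origin = 'Web',
                           Priority = Math.mod(i, 3) == 0 ? 'High' : 'Medium'));
    }

    Database.DMLOptions opts = new Database.DMLOptions();
    opts.assignmentRuleHeader.useDefaultRule = true;   // fire routing, as in production
    opts.optAllOrNone = false;                         // allow partial failures

    List<Database.SaveResult> results = Database.insert(spike, opts);

    Integer failed = 0;
    for (Database.SaveResult r : results) {
        if (!r.isSuccess()) failed++;
    }
    // Silent partial failures are exactly the Q4 symptom described above.
    System.debug('Cases inserted: ' + (results.size() - failed) + ', failed: ' + failed);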

---

🛠 5. Techniques That Work Best in Q4

✔ Bulk Data Simulation
Use:

  • Data Loader
  • Postman batch scripts
  • Workbench
  • SOQLFiddle-type queries
  • Custom CSV generators

✔ Concurrency Simulation
Two approaches:

  • multiple users editing the same record
  • API + UI interactions at the same time

✔ Automation Chain Mapping
Create a technical map:

Flow → Apex → Flow → Integration → Apex → Email Alerts
and test:

  • interruptions
  • recursion
  • execution order

✔ Limit Monitoring
Use tools (a limit-snapshot helper is sketched after this list of techniques):

  • Debug Logs
  • Flow Faults
  • Governor Limit Logs
  • Performance Analyzer

✔ High-Variability Test Sets
Especially:

  • long text fields
  • large attachments
  • custom logic on related objects
  • deeply nested record hierarchies
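
For the limit-monitoring technique above, a tiny helper dropped into the entry points of the automation chain makes consumption visible in the debug logs. The class name and the stage labels are illustrative.

    // Minimal limit-snapshot helper for Q4 investigations.
    public class LimitSnapshot {
        public static void log(String stage) {
            System.debug(stage
                + ' | CPU '      + Limits.getCpuTime() + '/' + Limits.getLimitCpuTime() + ' ms'
                + ' | SOQL '     + Limits.getQueries() + '/' + Limits.getLimitQueries()
                + ' | DML rows ' + Limits.getDmlRows() + '/' + Limits.getLimitDmlRows());
        }
    }

    // Usage inside the automation chain, e.g.:
    //   LimitSnapshot.log('before invoice creation');
    //   LimitSnapshot.log('after invoice creation');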

---

🎯 6. Summary

Q4 answers the question:

“Can the system handle the worst realistic technical scenario?”

It protects the project from:

  • CPU errors
  • DML explosions
  • Flow recursion
  • automation collisions
  • concurrency failures
  • volume-based breakdowns
  • multi-object chain failures

Q4 is not executed daily.
But when it is skipped, production incidents are guaranteed — often the kind that business calls “unacceptable.”

---

Next lesson:

02.04.06 — How to Assign Test Conditions to Quadrants

02.04.06 — How to Assign Test Conditions to Quadrants

Assigning test conditions to quadrants is one of the most important skills in Salesforce QA.
It decides:

  • how fast you test,
  • how complete your coverage is,
  • how accurately you estimate effort,
  • whether your team understands your priorities,
  • and whether your test plan actually makes sense.

This lesson gives you a structured, repeatable method that works for every project — whether the requirement is simple (new picklist) or extremely complex (multi-step Flow with Apex fallback and integration logic).

---

1. The Core Rule

Every test condition belongs to one or more quadrants.
But not all quadrants are needed for every condition.

Some conditions need only Q1.
Some need Q1 + Q2.
Some need only Q3.
Some — the dangerous ones — require Q1 + Q2 + Q3 + Q4.

The art of assigning quadrants is understanding the risk profile behind each condition.

---

2. The 4-Step Assignment Method

Step 1 — Identify the type of test condition

Use your categories from Module 02.03:

  • Data model
  • Security
  • Declarative logic
  • Apex logic
  • UI

Each class of requirement naturally maps to specific quadrants.

---

Step 2 — Ask the Six Analysis Questions

A condition is not just a sentence: the answer to each of the questions below can change its quadrant.

The method:

  1. Who is the actor?
  2. Where does the logic enter the process?
  3. How is the logic executed?
  4. Why does the business need it?
  5. What technical solution implements it?
  6. When does the action occur?

  • If “Who” = Admin → Q1 only
  • If “Who” = Business user → Q2/Q3
  • If “What” (technical solution) = Apex → Q4 likely
  • If “Why” = critical for invoicing → Q2/Q3 becomes mandatory

---

Step 3 — Estimate business impact vs technical risk

Two dimensions:

Business-facing impact → Q2/Q3

Examples:

  • “User cannot submit an approval” → high business risk
  • “Discount is miscalculated” → high business risk
  • “Field does not show on page” → moderate business risk

Technical-facing risk → Q1/Q4

Examples:

  • “Flow triggers twice under recursion” → Q4
  • “Permission set incomplete” → Q1
  • “Apex performs DML in a loop” → Q4
  • “Record access incorrect” → Q1 + Q2

The mix determines the quadrant(s).

---

Step 4 — Assign quadrant(s) and justify

Use this rule of thumb:

Type of condition → Quadrant(s):

  • Pure configuration → Q1
  • User-facing functional behaviour → Q2
  • Realistic business flow → Q2 → Q3
  • Unexpected sequences, business realism → Q3
  • Performance / limits / volume → Q4
  • Automation chain stability → Q1 + Q2 + Q4
  • Apex logic → Q1 + Q2 + Q4
  • Security → Q1 + Q2
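
The same rule of thumb can be written down as a small helper so the whole team applies it consistently. This is only a sketch of the table above, not a project utility; the Boolean flags are the assumptions.

    // Sketch: the rule-of-thumb table expressed as a reusable helper.
    // The flags describe the test condition; the result is the suggested quadrant set.
    public class QuadrantAdvisor {

        public static Set<String> suggest(Boolean pureConfiguration,
                                          Boolean userFacingBehaviour,
                                          Boolean realisticBusinessFlow,
                                          Boolean securityRelated,
                                          Boolean apexInvolved,
                                          Boolean automationChain,
                                          Boolean volumeOrLimits) {
            Set<String> quadrants = new Set<String>();

            if (pureConfiguration || securityRelated || apexInvolved || automationChain) {
                quadrants.add('Q1');
            }
            if (userFacingBehaviour || realisticBusinessFlow || securityRelated
                || apexInvolved || automationChain) {
                quadrants.add('Q2');
            }
            if (realisticBusinessFlow) {
                quadrants.add('Q3');
            }
            if (volumeOrLimits || apexInvolved || automationChain) {
                quadrants.add('Q4');
            }
            return quadrants;
        }
    }

    // Example 3 below (Process Builder rewritten as Apex) comes back as Q1 + Q2 + Q3 + Q4:
    //   QuadrantAdvisor.suggest(false, true, true, true, true, true, true);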

---

3. Practical Examples from Real Projects

Example 1 — Validation Rule on Opportunity

Condition:

"User cannot save Opportunity without Product Family.”

Analysis:

  • Who? Sales Rep → business user
  • What? Data model change
  • Why? Business logic
  • Risk? Low technical, moderate business

Assignment:

  • Q1 (field, rule, visibility)
  • Q2 (sales flow)
  • Q3 not needed
  • Q4 not needed

Quadrants: Q1 + Q2

---

Example 2 — New Flow triggers when Case is Closed

This one comes straight from a real project.

Analysis:

  • Declarative logic
  • Multi-object impact (Case → Survey)
  • Business-critical (customer communication)
  • Hidden risk: record-type variations, permission gaps

Assignment:

  • Q1 (flow config, entry conditions, permissions)
  • Q2 (full close-case scenario)
  • Q3 (variations: escalate before close, reassign case, reopen case)
  • Q4 NOT needed unless volume is high

Quadrants: Q1 + Q2 + Q3

---

Example 3 — Apex class rewritten from Process Builder (the case study from 02.04.01)

Scenario:

  • Process Builder → replaced with Apex
  • Apex written with sharing
  • System previously ran as system
  • Hidden access issue discovered only on UAT

Analysis:

  • Who? Business users
  • What? Apex logic with changed execution context
  • Why? Migration/refactoring
  • Risk? Massive

Assignment:

  • Q1 (permissions, technical config)
  • Q2 (functional behaviour)
  • Q3 (business process variation)
  • Q4 (limit behaviour + cross-record access failures)

Quadrants: Q1 + Q2 + Q3 + Q4

This was the scenario that took down part of production — and cost the team several days.

---

Example 4 — High-volume Opportunity closing in telco project
(another real project case)

Condition:

“Closing Opportunity with 100 products should process correctly.”

Analysis:

  • High technical risk (CPU, DML)
  • High business impact
  • Multi-step automation chain

Assignment:

  • Q1 (config)
  • Q2 (normal close scenario)
  • Q3 (unexpected user sequences)
  • Q4 (volume, automation collisions, governor limits)

Quadrants: Q1 + Q2 + Q3 + Q4

---

4. How to Document Quadrant Assignment in Test Analysis

Every condition should include the line:

Quadrant Assignment: Q1 / Q2 / Q3 / Q4 (choose applicable)

Example:

Condition: Calculate invoice amount including discount levels.

Quadrant Assignment: Q1 (config) + Q2 (functional flow) + Q3 (edge business cases)

Risk Level: High

Justification: Multi-level logic + financial impact + process dependency.

---

5. When Multiple Quadrants Are Mandatory

A condition must go into multiple quadrants when:

  • users + automation + data model interact
  • there is Apex involved
  • the scenario has regression-critical status
  • the change affects financial or inventory processes
  • logic is reused across multiple objects or entry points
  • there is cross-cloud interaction (Sales ↔ Service ↔ CPQ etc.)

---

6. Summary

Assigning test conditions to quadrants is not guesswork.
It is a structured framework that gives you:

  • speed
  • risk-based prioritisation
  • predictable effort estimation
  • clarity for PM/PO/Dev
  • airtight justification for your test coverage

The quadrants do not replace test cases.
They tell you whether a condition deserves:

  • a fast check,
  • a realistic flow test,
  • deep exploratory work,
  • or technical destruction testing.

Next lesson:
02.04.07 — Case Study: Full Quadrant Mapping for One Requirement


02.04.07 — Case Study: Full Quadrant Mapping for One Requirement

Refactoring Process Builder → Apex (“with sharing”) and Why It Broke in Production

This case study walks through a real Salesforce scenario where a seemingly simple refactor caused a major production failure.
It is one of the clearest examples of why proper quadrant assignment is essential — and why Salesforce projects fail when Q3 or Q4 are skipped.

We will take one requirement, analyse it, derive conditions, and map them to quadrants using the method from previous lessons.

---

1. Background of the Requirement

A project team decided to eliminate several Process Builders and convert them into Apex classes for performance and maintainability.

Requirement (as provided to testers):

“Replace existing Process Builders with equivalent Apex logic.
Preserve all functional behaviour.
No functional changes expected.”

This statement hides several risks:

  • execution context differences
  • data access differences
  • Apex sharing model
  • bulk behaviour
  • recursion behaviour
  • exception handling differences

The original implementation worked flawlessly because Process Builder always runs in system context.
The rewritten Apex ran with sharing, meaning it respected the record access (sharing) of the user who triggered the automation.

The test team used an Admin actor, which masked the underlying risk.
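
A minimal sketch of that difference, using placeholder names (the real class and objects are not part of the requirement text):

    // Process Builder ran in system context: it saw and updated every record.
    // The rewritten class was declared "with sharing", so its SOQL only returns
    // records the triggering user can see.
    public with sharing class InvoiceAutomation {

        public static void run(Set<Id> opportunityIds) {
            // For a user with restricted record access this list can be empty,
            // so the downstream invoice creation silently does nothing.
            List<Opportunity> opps = [SELECT Id, StageName, AccountId
                                      FROM Opportunity
                                      WHERE Id IN :opportunityIds];

            // ... invoice creation and cross-object updates ...
        }
    }

    // Declaring the class "without sharing" would keep the old record visibility,
    // but that is a deliberate design decision that must be tested, not an accident.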

---

2. Step-by-Step Requirement Analysis

WHO (actor):

  • Sales users submitting records (in production)
  • Admin (mistakenly used during testing)

→ Mismatch between test actor and real actor.

WHAT (technical solution):

  • Multiple Process Builders replaced with a single Apex class
  • “with sharing” keyword added by developers

WHERE (entry point):

  • Triggered on record update (Opportunity → downstream objects)

WHEN (moment of execution):

  • On save, before related logic is executed further
  • Some actions happen synchronously, others scheduled

WHY (business purpose):

  • Improve performance of automation
  • Reduce maintenance complexity

HOW (implementation detail):

  • Apex with SOQL queries, DML operations, field updates, branching logic
  • Dependent on user's CRUD / FLS / record access rights

---

3. Extracted Test Conditions

From the requirement and analysis, we derive the following test conditions:

TC1 — Apex class executes on record update

  • Entry criteria
  • Execution path
  • Branching logic

TC2 — Apex logic produces the same business outcomes as Process Builder

  • Field updates
  • Status changes
  • Related record creation

TC3 — System respects business rules for non-admin users

  • CRUD
  • FLS
  • Record-level access

TC4 — Apex logic handles large datasets

  • Bulk updates
  • Multiple related records
  • Mixed record states

TC5 — No recursive execution occurs

  • Process Builder sequence vs Apex sequence

TC6 — Flow/Apex collision does not occur

  • Other automations still present in the system

TC7 — Error handling is correct

  • Exceptions surfaced or logged?
  • Partial failure vs rollback?

---

4. Assigning Quadrants

Using the quadrant model:

Test condition → Quadrant(s):

  • TC1 (Execution) → Q1 + Q2
  • TC2 (Functional parity) → Q2
  • TC3 (Permissions) → Q1 + Q2
  • TC4 (Volume / bulk) → Q4
  • TC5 (Recursion) → Q4
  • TC6 (Automation collision) → Q1 + Q2 + Q4
  • TC7 (Error handling) → Q2 + Q4

Quadrant Summary

  • Q1: Configuration validation, permission structure, Apex metadata
  • Q2: Functional reproduction of previous business behaviour
  • Q3: Real user behaviour variations
  • Q4: Limits, recursion, technical stability

✔ This requirement requires all four quadrants.

---

5. Why Production Failed

During testing:

  • Admin user had unrestricted access
  • All flows and logic appeared to work
  • No access-dependent paths were triggered
  • No large datasets were used
  • No recursion occurred because test data was simple

On production:

  • Business users with restricted access triggered the Apex
  • “with sharing” caused queries to return zero results
  • Empty datasets led to missing DML operations
  • Result: automation silently failed for legitimate business actions

This was discovered by end users, not by the testing team — a classic symptom of missing Q3 and Q4 coverage.

---

6. Full Quadrant Mapping Diagram


Quadrant Mapping Relationships

  • Q1: Platform configuration, CRUD/FLS, metadata, access → “Is the system configured correctly?”
  • Q2: Functional business flow → “Does the feature work end-to-end for the intended user?”
  • Q3: Realistic business behaviour, variation, exploratory testing → “What happens when users behave like humans?”
  • Q4: Technical limits, recursion, volume, collisions → “Where does the system break under stress or complexity?”

Relationship model:

  • Q1 feeds into Q2 (correct setup enables functional flow)
  • Q2 feeds into Q3 (functional flow becomes real behaviour)
  • Q3 and Q1 expose risks that must be validated in Q4

Mapping:

  • Q1: Does Apex run? Are permissions correct?
  • Q2: Does it behave like Process Builder?
  • Q3: How does a real user behave? Do alternative flows break logic?
  • Q4: Does it survive volume, recursion, collisions, and limits?

A failure in any quadrant → failure of the entire feature.

---

7. Lessons Learned

1. Never use Admin as the primary actor in functional tests.
It guarantees false positives.

2. Any refactoring of automation = high Q3 + Q4 risk.

3. Apex “with sharing” vs “without sharing” must always trigger security testing.

4. Declarative → Apex migrations require full quadrant coverage.
No exceptions.

5. Q4 testing is the only way to detect limit breaches and recursion.

---

8. Summary

This case study demonstrates:

  • how to extract deep test conditions,
  • how quadrant assignment removes blind spots,
  • why skipping Q3/Q4 causes catastrophic failures,
  • how testing becomes a risk-reducing activity rather than just “checking functionality.”

It also sets the stage for Module 02.05, where you’ll learn how to use these assignments to estimate effort and argue for realistic timelines (SRA).
