02.05 — SRA: Salesforce Risk Assessment Framework

02.05.01 — Understanding SRA (Risk Classes, Complexity, Compliance)

Salesforce projects vary enormously in scope, architecture, and business impact. As a result, requirements carry very different levels of risk — and treating every requirement the same way leads to wasted effort, missed defects, and unpredictable delivery quality.

The Salesforce Risk Assessment (SRA) framework is designed to bring structure, consistency, and predictability into test analysis. It helps determine:

  • how risky a requirement is,
  • how complex it will be to analyse and test,
  • how much effort is justified,
  • and which test levels and quadrants must be executed.

---

1. Why SRA Exists in Salesforce Projects

Traditional software projects rely on code-centric complexity.
Salesforce mixes:

  • declarative automation,
  • programmatic logic,
  • shared metadata,
  • permissions,
  • UI layers,
  • integrations,
  • and business-specific workflows.

A small visible change may hide a very large technical footprint.

Common causes of hidden risk:

  • one configuration update triggers multiple flows or Apex,
  • security settings override expected behaviour,
  • Salesforce governor limits introduce edge cases,
  • compliance requirements demand strict documentation,
  • multi-team ownership introduces conflicting assumptions.

SRA solves this by making risk explicit.

---

2. Pillar 1 — Risk Classes (Business + Technical Risk)

Each requirement falls into one of three risk classes:

Risk Class A — High Risk

Applies when the requirement:

  • impacts revenue, reporting, or regulatory workflows,
  • modifies high-traffic processes (sales, service, case management),
  • touches automation chains involving Flows + Apex,
  • can break integrations or external dependencies,
  • affects large numbers of users or data records.

Testing approach:

  • full analysis
  • multiple quadrants
  • exploratory testing
  • negative scenarios
  • edge cases and limit testing

---

Risk Class B — Medium Risk

Applies when the requirement:

  • modifies standard processes without critical impact,
  • introduces new logic but in low-volume areas,
  • adds moderate complexity (1–2 flows, validation logic, UI rules),
  • affects a limited group of users.

Testing approach:

  • detailed functional testing
  • selective quadrant coverage
  • limited exploratory checks

---

Risk Class C — Low Risk

Applies when the requirement:

  • contains simple configuration updates,
  • adds UI elements without logic,
  • adjusts existing picklists/labels,
  • has no impact on security, automation, or integrations.

Testing approach:

  • smoke checks
  • short checklist-based execution
  • documentation updates if required

---

3. Pillar 2 — Complexity Rating

Risk tells you how dangerous a requirement is.
Complexity tells you how large the testing surface is.

Evaluate complexity based on:

  • number of involved components (Flow, Apex, Validation Rules, Page Layout, Sharing)
  • branching logic and decision points
  • volume of test data needed
  • integrations involved
  • permissions and profiles required
  • number of user roles affected
  • number of steps in the functional path

Example scale:

  • Low Complexity — 1–2 components, one user role, no automation
  • Medium Complexity — 3–5 components, multi-step flow, conditional visibility
  • High Complexity — automation chains, Apex, integrations, multiple roles

Complexity influences estimation and prioritisation.
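The example scale above can be turned into a rough helper for consistent ratings. This is a sketch only; the function name, inputs, and thresholds are assumptions derived from the scale, not an official formula:

```python
def complexity_rating(components: int, user_roles: int, has_automation_chain: bool) -> str:
    """Rough complexity rating following the example scale above.

    components: number of involved components (Flow, Apex, Validation
        Rules, Page Layout, Sharing, ...)
    user_roles: number of user roles affected
    has_automation_chain: True if automation chains, Apex, or
        integrations are involved
    """
    if has_automation_chain or components > 5:
        return "High"
    if components >= 3 or user_roles > 1:
        return "Medium"
    return "Low"  # 1-2 components, one user role, no automation
```

For example, `complexity_rating(4, 2, False)` returns `"Medium"`, matching the "3–5 components, multi-step flow" tier above.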

---

4. Pillar 3 — Compliance Requirements

Some requirements automatically elevate risk due to regulatory or audit constraints.

Examples:

  • GDPR data handling
  • financial audit processes
  • medical/pharma compliance
  • mandatory traceability
  • logs required for external auditors
  • processes subject to ISO, SOC2, PCI, HIPAA

Compliance adds obligations:

  • documentation requirements,
  • enforced testing visibility,
  • strict traceability from requirement → test objective → test case → result,
  • additional negative scenarios (e.g., access denial, retention limits).

---

5. Combining the Three Pillars

A full SRA assessment considers:

Risk Class + Complexity + Compliance

Example:

  • Medium risk + High complexity + Compliance
    → treated as High-risk requirement for testing effort.

Another example:

  • High risk + Low complexity
    → still needs deeper business validation, but execution time will be shorter.

This model ensures testing effort scales intelligently with requirement impact.
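The escalation rule illustrated by the examples can be sketched as a tiny helper. The exact policy (high complexity and a compliance flag each bump the class one level towards A, capped at A) is an assumption generalised from the two examples above; adapt it to your project's rules:

```python
def effective_risk_class(risk_class: str, complexity: str, has_compliance: bool) -> str:
    """Combine the three SRA pillars into an effective risk class.

    risk_class: "A" (high), "B" (medium) or "C" (low)
    complexity: "Low", "Medium" or "High"

    Assumed policy: high complexity and a compliance flag each bump the
    class one level towards A, reproducing the example above:
    medium risk + high complexity + compliance -> treated as high risk.
    """
    ladder = ["C", "B", "A"]  # lowest to highest risk
    level = ladder.index(risk_class)
    if complexity == "High":
        level += 1
    if has_compliance:
        level += 1
    return ladder[min(level, 2)]
```

With this sketch, `effective_risk_class("B", "High", True)` returns `"A"`, and a Class A requirement with low complexity stays `"A"` (deeper validation, shorter execution, as noted above).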

---

Summary

SRA provides a repeatable, transparent way to evaluate:

  • business risk,
  • technical risk,
  • testing complexity,
  • and compliance sensitivity.

With SRA in place, test prioritisation becomes objective rather than opinion-based, and effort estimation becomes predictable instead of arbitrary.

02.05.02 — How SRA Impacts Analysis & Prioritization

The Salesforce Risk Assessment (SRA) model does more than classify requirements — it directly shapes how testers analyse requirements, define test conditions, select quadrants, and plan effort. A good SRA assessment turns chaotic testing into an intentionally prioritised process.

Understanding how SRA affects analysis is essential for controlling project scope, ensuring predictable quality, and preventing late-stage surprises.

---

1. SRA Determines the Depth of Requirement Analysis

Not every requirement requires the same analytical depth.

High-risk requirements (Class A)

Require:

  • multiple readings of the requirement,
  • deep exploration of technical architecture,
  • full categorisation (data, security, logic, UI, Apex),
  • identification of hidden conditions behind automation,
  • consultations with BA, PO, Dev, Architect,
  • mapping to multiple quadrants,
  • scenario building and assumptions validation.

Result:
A large set of test conditions and multi-layered objectives.

---

Medium-risk requirements (Class B)

Require:

  • focused analysis,
  • verification of data model, logic, and permissions,
  • identification of potential conflicts,
  • confirmation of business rules.

Result:
Moderate number of test conditions; partially reduced quadrant coverage.

---

Low-risk requirements (Class C)

Require:

  • quick risk scan,
  • verification of impact on other metadata,
  • limited analysis questions.

Result:
Small number of conditions, lightweight testing.

---

2. SRA Directly Influences Test Prioritization

Once risk is known, prioritization becomes objective:

  • Risk Class A — High: tested immediately after deployment to the test environment, because it carries the highest likelihood of breaking core processes and the highest business impact.
  • Risk Class B — Medium: tested after stabilizing High-risk items; moderate impact, dependent on core logic.
  • Risk Class C — Low: tested anytime before release freeze; minimal impact, typically UI or simple config.

This prevents “first in, first tested” chaos and aligns effort to business value.

---

3. SRA Defines Which Quadrants to Execute

The higher the risk, the more quadrants must be included.

High Risk (A)

  • Quadrant 1 — configuration, core checks
  • Quadrant 2 — functional flows
  • Quadrant 3 — exploratory and critique
  • Quadrant 4 — edge cases, limits, technical stress

Medium Risk (B)

  • Q1 + Q2 required
  • Q3 optional
  • Q4 only if touching logic/limits

Low Risk (C)

  • Q1 only, sometimes Q2
  • Q3+Q4 not required unless exceptions apply

SRA therefore determines the shape of the testing effort — not just its intensity.

---

4. SRA Changes How Testers Ask Questions

Risk dictates the type of analytical questioning.

High Risk → “deep-architecture” questions:

  • What automation calls what?
  • Could the process break if inputs are inconsistent?
  • Are there security implications?
  • What happens at scale (large volumes)?
  • Which roles are impacted?

Medium Risk → “clarification” questions:

  • What are the boundary conditions?
  • Who is the actor and what permissions do they need?
  • Which branches of the process must be followed?

Low Risk → “confirmation” questions:

  • Does it work as specified?
  • Is the configuration correct?
  • Does it affect anything else?

---

5. SRA Affects Test Estimation (Even Before ROM)

Even without a formal ROM calculator, risk + complexity predict the effort:

  • High Risk → long analysis, many conditions, broad coverage
  • Medium Risk → balanced effort
  • Low Risk → minimal testing effort

SRA creates an early estimation baseline before detailed test design begins.

---

6. SRA Enables Conflict & Dependency Detection

Higher-risk items almost always overlap with:

  • automation chains,
  • record access,
  • flows,
  • Apex,
  • integrations,
  • business-critical data.

SRA highlights where to expect conflicts before they appear during execution.

Example:

A requirement marked as High Risk with High Complexity almost guarantees dependencies with logic you must uncover before testing.

---

7. SRA Makes Prioritization Visible to PM/PO

SRA is easy to communicate:

  • 3 risk classes,
  • simple complexity scale,
  • explicit compliance flags.

This helps PMs:

  • plan sprints,
  • assign environments,
  • order deployments,
  • avoid blocking testers with low-priority items.

---

Summary

SRA is not merely a classification exercise — it is a decision engine for the entire testing process.

It shapes:

  • analytical depth,
  • question strategy,
  • ordering of work,
  • quadrant coverage,
  • test estimation,
  • communication with PM/PO,
  • overall project predictability.

Properly used, SRA ensures that testing investment aligns with business risk — not with random delivery order.

02.05.03 — Effort Estimation Explained (percentage-based model)

Effort estimation for Salesforce testing does not attempt to predict absolute hours.
Instead, it applies multipliers to the developer’s or consultant’s estimate given during refinement.

The purpose of this model is simple:

Take the base estimation from the delivery team and apply a percentage depending on the type of change.

These percentages come directly from practical project experience and represent the average effort required to analyse, test, document, and report potential issues arising from each category of requirement.

This model is intentionally pragmatic — it is not meant to be mathematically perfect.
It is a practical shortcut that helps test leads and PMs rapidly evaluate testing workload.

---

1. How the Percentage Model Works

  1. Developer/consultant gives a standard refinement estimate.
  2. Tester identifies which technical category the requirement belongs to.
  3. Tester applies the percentage defined for that category.
  4. The result is the testing effort figure used for planning and capacity management.

No formulas, no dependencies — just a fixed multiplier per requirement type.

Example:
If a developer estimates a Flow enhancement as 8h, applying the Flow Testing multiplier (40%) gives:

Testing Effort = 8h × 0.40 = 3.2h

This is only an effort planning indicator, not a strict SLA.

---

2. Percentages Used in This Model

These values come from empirical project data and represent the average proportional effort for:

  • analysis,
  • test preparation and execution,
  • data preparation,
  • bug reporting,
  • simplified test case creation.

Data & Security

  • Data Model: 45%
  • CRUD: 20%
  • Sharings: 20%

---

Core Platform Configuration

  • Core Cloud configuration check (Setup level): 30%
  • Core Cloud configuration functional check: 40%

---

Logic & Automation

  • Flow Testing (Debug): 40%
  • Apex Logic (single trigger level): 30%
  • Formulas and Validation Rules: 35%

---

UI, Integrations & Complex Logic

  • Simple Components: 30%
  • Integrations: 50%
  • Complex Custom Components: 50%
  • Complex Apex Logic: 50%
  • Complex Flows: 50%

---

3. What Each Percentage Includes

Each multiplier covers the following activities:

✔ Test Analysis

  • Understanding the requirement
  • Identifying test conditions
  • Determining actors, data, flows, and dependencies

✔ Test Execution

  • Preparing the test data
  • Running the test (tool-based or UI)
  • Capturing results

✔ Bug Reporting

  • Describing behaviour
  • Providing reproduction steps
  • Recording metadata (environment, profile, test user, input)

✔ Simplified Internal Test Case

Not customer-facing, but:

  • Ensures consistency
  • Allows repeatability
  • Forms the basis for regression

Nothing beyond this scope is included.
Advanced documentation for the client is estimated separately.

---

4. Why This Model Works

1. Developers already perform effort estimation
The testing estimate becomes predictable without needing its own refinement meeting.

2. Requirement types rarely change
Most Salesforce projects repeat the same architectural patterns:

  • Flows
  • Apex
  • Data model
  • Sharings
  • Integrations
  • UI adjustments

This makes percentages surprisingly reliable.

3. PMs understand percentage-based estimation instantly
Instead of abstract testing hours, they see:

  • “Flow = 40% of dev effort”
  • “Complex integration = 50%”
  • “CRUD = 20%”

This aids prioritisation and capacity planning.

4. It keeps the model simple
Testing does not need its own estimation complexity.
It scales with development, which reflects real project behaviour.

---

5. Practical Example

If during refinement the team produces these estimates:

  • Data Model changes — 6h
  • New Flow — 12h
  • Apex Trigger update — 4h
  • UI component enhancement — 8h

Then testing effort would be:

  • Data Model: 6h × 45% = 2.7h
  • Flow: 12h × 40% = 4.8h
  • Apex Logic: 4h × 30% = 1.2h
  • Simple Component: 8h × 30% = 2.4h

This gives predictable workload distribution for the whole sprint.

---

Summary

The percentage-based effort model:

  • relies only on the developer’s base estimate,
  • applies a fixed multiplier based on requirement category,
  • includes analysis, testing, bug reporting, and internal documentation,
  • is simple, repeatable, and easy for PMs to understand,
  • reflects years of practical Salesforce project experience.

It is not meant to be perfect —
it is meant to work.

02.05.04 — Combining SRA + Quadrants for Risk-Weighted Test Planning

SRA defines how risky a requirement is.
Quadrants define how deeply and from which angles it should be tested.

When combined, they create a complete, predictable method for planning test work — from the moment the requirement is refined, through analysis and execution, all the way to effort estimation.

This lesson explains how these two models support each other and how testers should use them together in day-to-day project work.

---

1. Why SRA Alone Is Not Enough

SRA tells you:

  • how big the risk is,
  • how complex the logic is,
  • whether compliance increases sensitivity.

But SRA does not tell you:

  • how many quadrants to execute,
  • what level of exploratory work is required,
  • how much negative or limit testing to include,
  • whether the scenario needs technical stress validation.

That’s where Quadrants come in.

---

2. Why Quadrants Alone Are Not Enough

Quadrants classify test conditions, not requirements.
They do not indicate:

  • priority,
  • business impact,
  • how soon the feature must be tested,
  • how widely the tests must be executed,
  • how much effort should be assigned.

Quadrants only say what type of work is needed.
SRA defines how important it is.

Together they create a complete prioritisation system.

---

3. How SRA Determines Quadrant Coverage

Risk Class A — High risk

  • Almost always requires Q1 + Q2 + Q3,
  • And often Q4 when there is Apex, Flow, or integrations.
  • High-risk requirements get executed earlier in the cycle.

Risk Class B — Medium risk

  • Always Q1 + Q2
  • Q3 optional, depending on actor and branching logic
  • Q4 only when technical triggers exist
  • Executed after High-risk items are stable

Risk Class C — Low risk

  • Q1 only, sometimes Q2
  • No Q3/Q4 unless extraordinary circumstances
  • Executed at any point before release freeze

Risk classification removes guessing.
Quadrants apply the exact testing strategy.

---

4. How Complexity Influences Quadrant Distribution

Even within the same risk class:

  • Low complexity: fewer conditions, minimal Q2, no Q3/Q4
  • Moderate complexity: broader Q2, selective Q3
  • High complexity: expanded Q2, deep Q3, mandatory Q4

Example:

  • A high-risk but low-complexity requirement
    → Q1 + Q2, fast execution.
  • A medium-risk but high-complexity requirement
    → Q1 + Q2 + Q3 (sometimes Q4).

Complexity determines how many conditions need Quadrant assignment.

---

5. The Combined Matrix (SRA × Quadrants)

A practical tool for planning:

High Risk

  • Low complexity: Q1 + Q2
  • Moderate complexity: Q1 + Q2 + Q3
  • High complexity: Q1 + Q2 + Q3 + Q4

---

Medium Risk

  • Low complexity: Q1 + Q2
  • Moderate complexity: Q1 + Q2 + optional Q3
  • High complexity: Q1 + Q2 + Q3 (+ Q4 only for technical logic)

---

Low Risk

  • Low complexity: Q1
  • Moderate complexity: Q1 + Q2
  • High complexity: Q1 + Q2 (+ Q3 only if touching multiple roles)
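The three matrices above can be collapsed into one lookup table. A sketch, with the optional quadrants noted in comments; the data structure is an assumption, while the mappings are copied from the matrices:

```python
# (risk, complexity) -> required quadrants, per the combined matrix above.
QUADRANT_MATRIX = {
    ("High", "Low"): ["Q1", "Q2"],
    ("High", "Moderate"): ["Q1", "Q2", "Q3"],
    ("High", "High"): ["Q1", "Q2", "Q3", "Q4"],
    ("Medium", "Low"): ["Q1", "Q2"],
    ("Medium", "Moderate"): ["Q1", "Q2"],    # Q3 optional
    ("Medium", "High"): ["Q1", "Q2", "Q3"],  # + Q4 only for technical logic
    ("Low", "Low"): ["Q1"],
    ("Low", "Moderate"): ["Q1", "Q2"],
    ("Low", "High"): ["Q1", "Q2"],           # + Q3 only if touching multiple roles
}

def required_quadrants(risk: str, complexity: str) -> list[str]:
    """Look up the baseline quadrant coverage for a classified requirement."""
    return QUADRANT_MATRIX[(risk, complexity)]
```

For example, `required_quadrants("High", "High")` returns all four quadrants, as in the case study later in this module.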

---

6. How to Use the Combined Model in Real Projects

Step 1 — Classify the Requirement Using SRA

Risk + Complexity + Compliance
→ produces the base classification.

Step 2 — Break Requirement into Test Conditions

Using Module 02.03 methodology:

  • Data model
  • Security
  • Declarative logic
  • Apex logic
  • UI

Step 3 — Assign Each Condition to a Quadrant

Every condition lands in one or more quadrants.

Step 4 — Prioritise Based on SRA

High-risk conditions → test first
Low-risk conditions → test later

Step 5 — Estimate Based on Condition Categories

Use the percentage model from previous lesson.

This creates a predictable testing plan based not on “gut feeling” but on structure.

---

7. Case Example (Simplified)

Requirement:

“Closing Opportunity must generate Invoices, update Products, and trigger integration.”

SRA result:
- Risk: High
- Complexity: High
- Compliance: None

Quadrant mapping:
- Q1 → configuration, access, metadata
- Q2 → functional E2E flow
- Q3 → user-driven variations (reopen, edit products, discount changes)
- Q4 → limits (100+ products), automation collisions, recursion, integration retries

Effort estimation:
- Flows: 40%
- Apex logic: 30%
- Integration: 50%

This produces a reliable, transparent testing plan.

---

Summary

Combining SRA with the Quadrant model enables testers to:

  • understand risk,
  • select the right test depth,
  • avoid unnecessary over-testing,
  • detect hidden failures earlier,
  • communicate priorities clearly,
  • and plan effort predictably.

SRA defines importance.
Quadrants define execution strategy.
Together, they form a complete method for risk-weighted test planning in Salesforce projects.

02.05.05 — Case Study: Risk Assessment of a Multi-step Process

Applying SRA, Complexity, and Quadrants to a Real Salesforce Requirement

This case study demonstrates the full process of test analysis using the combined SRA + Quadrants approach.
We move from a raw business requirement → to SRA classification → to test conditions → to quadrant assignment → to testing effort estimation.

The scenario is based on a real pattern observed in Salesforce projects:
a multi-step process triggered by an Opportunity Stage change, involving automation, data updates, integrations, and role-based access.

---

1. Business Requirement (as provided by BA/PO)

“When Opportunity moves to Closed Won, the system must:
1) Generate an Invoice,
2) Create provisioning tasks,
3) Update related Products,
4) Apply updated discount logic,
5) Notify Account Owner and Finance,
6) Trigger integration to external billing.”

This requirement appears simple — but touches multiple platform layers and user roles.

It contains:

  • declarative logic,
  • Apex,
  • automation chaining,
  • field updates,
  • integration flows,
  • permission checks,
  • and dependencies between objects.

Perfect candidate for SRA.

---

2. Initial Analysis (Six Questions Method)

Who?
Sales Reps, Account Owner, Billing Integration User, Finance users.

Where?
Entry point = Opportunity Stage update → Closed Won.

What?
Combination of Flows + Apex + field updates + notifications + integration event.

When?
Immediately on Stage update, before/after commit depending on automation design.

Why?
Business-critical revenue process that triggers provisioning and billing.

How?
Automation chain:

Opportunity → Flow → Apex → Product updates → Notifications → Integration → External Billing System.

This chain already suggests high risk and high complexity.

---

3. SRA Classification

Risk Class: HIGH

Because:

  • impacts revenue and invoicing,
  • affects multiple teams,
  • has multiple automation layers,
  • influences reporting and external systems.

Complexity: HIGH

Because:

  • involves multiple objects,
  • contains logic in several technologies (Flows, Apex),
  • includes branching and conditional logic,
  • integration behaviour depends on data consistency.

Compliance: MEDIUM

No financial regulation component, but the process affects:

  • financial reporting,
  • customer communication,
  • auditability of revenue events.

Final SRA Classification:
HIGH Risk + HIGH Complexity + MEDIUM Compliance

Such requirements are always prioritised early in the test cycle.

---

4. Extracting Test Conditions

From analysis, we derive the following test conditions:

TC1 — Opportunity Stage update → triggers automation

  • Entry criteria validated
  • Single vs multi-user behaviour
  • Action availability per role

TC2 — Invoice is generated correctly

  • Field values
  • Currency, tax, region rules
  • Record access and visibility for Finance

TC3 — Provisioning Tasks created

  • Task creation logic
  • Assignment to correct roles
  • SLA and due-date logic

TC4 — Product updates

  • Recalculated values
  • Discount logic
  • Dependent record updates

TC5 — Notifications

  • Email recipients
  • Templates
  • Behaviour across roles
  • Multi-language or regional versions (if any)

TC6 — Integration Trigger

  • Payload correctness
  • Event firing conditions
  • Retry logic
  • Handling of missing data

TC7 — Error handling

  • Invalid product configuration
  • Missing mandatory fields
  • Incorrect user permissions

TC8 — Large dataset scenario

  • Opportunity with many Products
  • Product variations
  • Volume-based behaviour

A realistic requirement will often generate 8–20+ conditions like these.

---

5. Assigning Quadrants

Now each condition is mapped to one or more quadrants:

  • TC1 — Trigger behaviour
  • TC2 — Invoice generation
  • TC3 — Provisioning tasks
  • TC4 — Product updates
  • TC5 — Notifications
  • TC6 — Integration
  • TC7 — Error handling
  • TC8 — Large dataset

Result:

This requirement requires Q1 + Q2 + Q3 + Q4.

This pattern is extremely common for multi-step sales → finance → provisioning flows.

---

6. Effort Estimation (percentage model)

We assign estimation using developer estimates multiplied by category percentages.

Assume the delivery team provided:

  • Flow enhancement: 12h
  • Apex trigger: 6h
  • Integration callout: 10h
  • Data Model changes: 4h

Testing estimation becomes:

  • Flow Testing: 12h × 40% = 4.8h
  • Apex Logic: 6h × 30% = 1.8h
  • Integration: 10h × 50% = 5.0h
  • Data Model: 4h × 45% = 1.8h

Total testing effort = 13.4h
(before test case documentation for the client, if required)
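The estimation above can be reproduced mechanically. A sketch using the multipliers from lesson 02.05.03; the item list mirrors the developer estimates stated earlier:

```python
# (category, dev estimate in hours, testing multiplier)
items = [
    ("Flow Testing", 12, 0.40),
    ("Apex Logic", 6, 0.30),
    ("Integration", 10, 0.50),
    ("Data Model", 4, 0.45),
]

total = 0.0
for category, dev_hours, multiplier in items:
    effort = dev_hours * multiplier
    total += effort
    # One line per requirement category, matching the table above.
    print(f"{category}: {dev_hours}h x {multiplier:.0%} = {effort:.1f}h")

print(f"Total testing effort = {total:.1f}h")
```

Running it prints the four line items and the 13.4h total shown above.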

This estimate:

  • is easy to explain,
  • predictable for PM/PO,
  • consistent across sprints,
  • and directly linked to real complexity.

---

7. Final Risk-weighted Test Plan (Summary)

Priority of execution:
1. Trigger behaviour (high business risk)
2. Invoice & Product updates (financial impact)
3. Integration behaviour (external dependency)
4. Error handling
5. Exploratory flows (Q3)
6. Limit tests & large datasets (Q4)

Quadrant coverage:
- Q1: Core logic, config, permissions
- Q2: Revenue flow, functional path
- Q3: Real-world user variation
- Q4: Volume, limit testing, collision detection

Why this process works:

  • No hidden conditions remain untested
  • All dependencies are covered
  • PM gets predictable estimates
  • High-risk areas receive the correct focus
  • No over-testing of low-impact areas
  • Clear justification for execution order

---

Summary

This case study shows how to combine:

  • SRA classification,
  • condition extraction,
  • quadrants,
  • and percentage-based estimation

into a single, coherent test plan.

This method prevents under-testing critical processes, provides transparency for delivery teams, and dramatically reduces late-stage project risks — especially in Salesforce environments where multi-step automation is normal and failures can cascade.
