03.03.01 — The Most Common Test Case Mistakes

Lesson goal

This lesson identifies the most common mistakes made when writing test cases in Salesforce projects and explains why they are dangerous.

After this lesson, you should be able to:

  • recognize weak or misleading test cases,
  • understand why certain patterns consistently fail in Salesforce,
  • avoid mistakes that create false confidence instead of real coverage.

Why test case mistakes matter more in Salesforce

In Salesforce projects, a poorly written test case does not just reduce coverage.

It actively:

  • hides configuration errors,
  • masks permission issues,
  • creates false “green” test results,
  • shifts defects into production.

Because Salesforce behavior depends heavily on context (the running user’s permissions, the data in the org, and the automation that fires), even small documentation mistakes can invalidate entire test suites.

Mistake 1: Testing as Admin

This is the most common and most damaging mistake.

Admin users:

  • bypass validation rules,
  • ignore sharing rules,
  • see all fields and actions,
  • can execute automation paths unavailable to real users.

Testing as Admin:

  • hides permission defects,
  • invalidates CRUD and field-level security (FLS) coverage,
  • produces misleading pass results.

Rule:

If a feature is meant for business users, it must never be tested as Admin.
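
The same rule applies to automated coverage. Below is a minimal Apex sketch, assuming the feature targets the standard “Standard User” profile (substitute the profile or permission sets your feature is actually designed for). Note that System.runAs only enforces record sharing, so the DML runs in user mode (available in recent API versions) to surface CRUD and FLS violations as well:

  @isTest
  private class RunAsBusinessUserTest {
      @isTest
      static void featureRunsUnderBusinessProfile() {
          // Build the user from the profile the feature targets, not System Administrator.
          Profile businessProfile = [SELECT Id FROM Profile WHERE Name = 'Standard User' LIMIT 1];
          User businessUser = new User(
              Alias = 'buser',
              LastName = 'BusinessUser',
              Email = 'buser@example.com',
              Username = 'buser' + System.currentTimeMillis() + '@example.com',
              ProfileId = businessProfile.Id,
              EmailEncodingKey = 'UTF-8',
              LanguageLocaleKey = 'en_US',
              LocaleSidKey = 'en_US',
              TimeZoneSidKey = 'Europe/Warsaw'
          );
          insert businessUser;

          System.runAs(businessUser) {
              // runAs enforces record sharing for this user; user-mode DML also
              // enforces the profile's CRUD and FLS, so permission defects surface
              // instead of being hidden by Admin access.
              Account acc = new Account(Name = 'Business User Account');
              Database.insert(acc, AccessLevel.USER_MODE);
          }
      }
  }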

Mistake 2: Using named users

Defining users as:

  • “Test User 1”
  • “John Smith”
  • “Sales User A”

introduces fragility.

Named users:

  • differ between environments,
  • disappear after sandbox refreshes,
  • accumulate unintended permissions over time.

Correct test cases define:

  • roles,
  • profiles,
  • permission sets,
  • access scope.

Users represent permission context, not identity.
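
In test automation the same principle looks like the sketch below: the user is built from a profile and a permission set instead of being looked up by a person’s name. “Sales_Feature_Access” is a hypothetical permission set API name used only for illustration:

  @isTest
  private class PermissionContextTest {
      // A throwaway user defined purely by permission context, not by identity.
      static User newUserWithContext(Id profileId, String permissionSetName) {
          User u = new User(
              Alias = 'ctxuser',
              LastName = 'Context',
              Email = 'ctx@example.com',
              Username = 'ctx' + System.currentTimeMillis() + '@example.com',
              ProfileId = profileId,
              EmailEncodingKey = 'UTF-8',
              LanguageLocaleKey = 'en_US',
              LocaleSidKey = 'en_US',
              TimeZoneSidKey = 'Europe/Warsaw'
          );
          insert u;

          // Access comes from the permission set, not from whoever "John Smith" is today.
          PermissionSet ps = [SELECT Id FROM PermissionSet WHERE Name = :permissionSetName LIMIT 1];
          insert new PermissionSetAssignment(AssigneeId = u.Id, PermissionSetId = ps.Id);
          return u;
      }

      @isTest
      static void userIsDefinedByPermissionContext() {
          Profile p = [SELECT Id FROM Profile WHERE Name = 'Standard User' LIMIT 1];
          User u = newUserWithContext(p.Id, 'Sales_Feature_Access');
          System.runAs(u) {
              // Execute the scenario under the defined permission context.
          }
      }
  }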

Mistake 3: Relying on “existing data”

Statements such as:

  • “Account exists”
  • “Opportunity already created”
  • “Use any available record”

hide assumptions.

This leads to:

  • inconsistent execution,
  • environment-dependent failures,
  • impossible defect reproduction.

Effective test cases define:

  • minimum required data,
  • relationships,
  • record state.
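
A minimal Apex sketch of the same idea: every record the scenario needs is created by the test itself, with the relationship and the record state written out explicitly (standard Account and Opportunity fields only):

  @isTest
  private class OwnTestDataTest {
      @isTest
      static void scenarioCreatesItsOwnData() {
          // Minimum required data: one Account with one related open Opportunity.
          Account acc = new Account(Name = 'Data Ownership Account');
          insert acc;

          Opportunity opp = new Opportunity(
              Name = 'Data Ownership Opp',
              AccountId = acc.Id,                  // relationship is explicit
              StageName = 'Prospecting',           // record state is explicit
              CloseDate = Date.today().addDays(30)
          );
          insert opp;

          // Nothing here depends on "an Account that already exists" in the org.
          System.assertEquals(acc.Id, [SELECT AccountId FROM Opportunity WHERE Id = :opp.Id].AccountId);
      }
  }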

Mistake 4: UI-driven test design

Designing tests based on:

  • page layouts,
  • visible fields,
  • screen flow order

leads to superficial coverage.

UI reflects configuration, but it does not explain logic.

When UI changes:

  • test cases break,
  • intent becomes unclear,
  • maintenance cost increases.

Test design must start from:

  • business behavior,
  • configuration rules,
  • data logic.

UI validation comes last.

Mistake 5: Vague Expected Results

Examples:

  • “System works correctly”
  • “User sees an error”
  • “Record is saved”

These statements do not verify anything meaningful.

Vague Expected Results:

  • allow partial failures to pass,
  • hide incorrect behavior,
  • reduce confidence in test outcomes.

Expected Results must define specific conditions, not general outcomes.
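
The difference shows up clearly in assertions. A minimal Apex sketch, assuming a requirement that the action moves an Opportunity to the “Qualification” stage without touching the Close Date (here the action is simulated with a direct update):

  @isTest
  private class SpecificExpectedResultTest {
      @isTest
      static void expectedResultIsSpecific() {
          Account acc = new Account(Name = 'Expected Result Account');
          insert acc;
          Opportunity opp = new Opportunity(
              Name = 'Expected Result Opp',
              AccountId = acc.Id,
              StageName = 'Prospecting',
              CloseDate = Date.today().addDays(30)
          );
          insert opp;

          // Action under test (stand-in for the flow, trigger or feature being verified).
          opp.StageName = 'Qualification';
          update opp;

          // Vague: "Record is saved" proves almost nothing.
          // Specific: the exact field values the requirement promises.
          Opportunity saved = [SELECT StageName, CloseDate FROM Opportunity WHERE Id = :opp.Id];
          System.assertEquals('Qualification', saved.StageName, 'Stage must be Qualification after the action');
          System.assertEquals(Date.today().addDays(30), saved.CloseDate, 'Close Date must be unchanged');
      }
  }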

Mistake 6: Mixing setup with execution

Including setup actions inside test steps:

  • creates duplication,
  • hides Preconditions,
  • complicates execution.

Setup belongs in Preconditions.

Execution belongs in steps.

Mixing the two:

  • increases test length,
  • reduces clarity,
  • makes Dry Runs ineffective.
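
Apex tests draw the same line with annotations: a @TestSetup method plays the role of the Preconditions, and the test method contains only execution and verification. A minimal sketch:

  @isTest
  private class SetupVsExecutionTest {
      // Preconditions: shared data, built once before the test methods run.
      @TestSetup
      static void makeData() {
          insert new Account(Name = 'Precondition Account');
      }

      // Steps and Expected Result: only execution and verification live here.
      @isTest
      static void accountCanBeRenamed() {
          Account acc = [SELECT Id FROM Account WHERE Name = 'Precondition Account' LIMIT 1];

          Test.startTest();
          acc.Name = 'Renamed Account';
          update acc;
          Test.stopTest();

          System.assertEquals('Renamed Account', [SELECT Name FROM Account WHERE Id = :acc.Id].Name);
      }
  }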

Mistake 7: Hard-coded values and assumptions

Hard-coded values such as:

  • specific record IDs,
  • fixed dates,
  • environment-specific names

reduce test portability.

Salesforce environments change frequently.

Tests should describe:

  • relative conditions,
  • functional relationships,
  • logical states.

Not exact values that expire.
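
In Apex the same rule means no literal record IDs, no fixed calendar dates and no environment-specific names. A minimal sketch; the “Enterprise” Opportunity record type is a hypothetical example and must exist in the org for the lookup to succeed:

  @isTest
  private class NoHardCodedValuesTest {
      @isTest
      static void valuesAreRelativeNotLiteral() {
          // A relative date instead of a fixed one that expires.
          Date closeDate = Date.today().addDays(30);

          // A record type resolved by developer name instead of a hard-coded Id.
          Id enterpriseRtId = Schema.SObjectType.Opportunity
              .getRecordTypeInfosByDeveloperName()
              .get('Enterprise')
              .getRecordTypeId();

          Opportunity opp = new Opportunity(
              Name = 'Relative Values Opp',
              RecordTypeId = enterpriseRtId,
              StageName = 'Prospecting',
              CloseDate = closeDate
          );
          insert opp;

          System.assertNotEquals(null, opp.Id, 'Opportunity should save without environment-specific literals');
      }
  }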

Mistake 8: Overloading single test cases

Trying to test too much in one test case leads to:

  • unclear failures,
  • difficult maintenance,
  • poor defect isolation.

A failing test should answer:

What exactly broke?

If it does not, the test case is too broad.
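
In automated form, splitting usually means several narrowly named test methods instead of one broad one, so the method name alone answers the question. A structural sketch only; the bodies are placeholders for single, focused scenarios:

  @isTest
  private class FocusedTestCasesTest {
      @isTest
      static void discountIsRequiredAboveApprovalThreshold() {
          // one behaviour, one verification
      }

      @isTest
      static void discountAboveThresholdTriggersApprovalRequest() {
          // one behaviour, one verification
      }
  }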

Mistake 9: Ignoring negative paths

Focusing only on “happy paths”:

  • hides validation gaps,
  • misses permission issues,
  • ignores error handling.

Negative paths are often more important than positive ones in Salesforce.

They reveal:

  • misconfigured rules,
  • missing enforcement,
  • unintended access.
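
A minimal Apex sketch of a negative path: the test attempts a forbidden change and asserts that it was blocked with the expected message. The validation rule and its message (“Closed Opportunities cannot be reopened”) are assumptions for illustration; in a full test this would also run under a business-user profile, as in the Mistake 1 sketch:

  @isTest
  private class NegativePathTest {
      @isTest
      static void closedOpportunityCannotBeReopened() {
          Opportunity opp = new Opportunity(
              Name = 'Negative Path Opp',
              StageName = 'Closed Won',
              CloseDate = Date.today()
          );
          insert opp;

          // Attempt the forbidden change and expect the save to be blocked.
          opp.StageName = 'Prospecting';
          Boolean blocked = false;
          try {
              update opp;
          } catch (DmlException e) {
              blocked = true;
              // Assert that the specific rule fired, not just that "an error" appeared.
              System.assert(e.getMessage().contains('Closed Opportunities cannot be reopened'),
                  'Unexpected error message: ' + e.getMessage());
          }
          System.assert(blocked, 'The validation rule should have blocked the update');
      }
  }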

Mistake 10: Writing tests without review

Unreviewed test cases:

  • reflect individual assumptions,
  • embed mental shortcuts,
  • drift away from requirements.

Without review:

  • quality is inconsistent,
  • mistakes propagate,
  • documentation debt grows.

Test cases must be reviewed like code.

Key takeaway

Most Salesforce test case failures are not execution problems.

They are design problems.

Avoiding these common mistakes:

  • improves coverage quality,
  • reduces false confidence,
  • lowers long-term maintenance cost.

High-quality test cases are a deliberate design outcome, not an accident.
