03.01.05 — Object & Field-Specific Test Conditions

Lesson goal

This lesson explains how to identify and define test conditions specific to Salesforce objects and fields.

After this lesson, you should be able to:

  • recognise which object and field attributes influence system behaviour,
  • translate configuration details into concrete test conditions,
  • avoid superficial testing that validates the UI but misses logic.

Why objects and fields deserve dedicated analysis

In Salesforce, most business logic is expressed through:

  • object configuration,
  • field properties,
  • relationships between records.

Even small configuration changes can:

  • alter validation behaviour,
  • affect automation,
  • change visibility or editability,
  • break integrations indirectly.

Testing objects and fields without structured analysis results in partial coverage.

Object-level test conditions

Object-level conditions describe behaviour that applies to the record as a whole.

They often depend on:

  • object permissions (Create, Read, Update, Delete),
  • record ownership and sharing,
  • record lifecycle and status,
  • relationships to other objects.

Typical object-level questions

When analysing an object, ask:

  • Who can create this record?
  • Who can edit or delete it?
  • At which stage does the object become locked?
  • What related records are required for a valid state?

These questions define object-scoped test conditions.
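One way to keep these questions actionable is to record each answer as an explicit, object-scoped condition. The sketch below is a minimal illustration in Python, assuming a hypothetical "Opportunity"-like object; the structure, names, and conditions are invented for illustration, not taken from any real org.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ObjectTestCondition:
    """One object-scoped test condition derived from an analysis question."""
    object_name: str
    question: str
    condition: str

# Hypothetical conditions for an "Opportunity"-like object.
conditions = [
    ObjectTestCondition("Opportunity", "Who can create this record?",
                        "Sales profile can create; read-only profile cannot"),
    ObjectTestCondition("Opportunity", "At which stage does the object become locked?",
                        "Record is no longer editable once status is 'Closed'"),
]

assert all(c.condition for c in conditions)
```

Keeping the originating question next to each condition makes it easy to spot questions that were never answered during analysis.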

Field-level test conditions

Field-level conditions define how individual fields behave under different contexts.

They depend on:

  • field type,
  • requiredness,
  • validation rules,
  • visibility and editability,
  • automation triggered by field changes.

A field is rarely “just a field” in Salesforce.

Required vs conditionally required fields

Some fields are:

  • always required,
  • conditionally required based on other field values,
  • required only for specific record types or users.

Test conditions should explicitly cover:

  • valid combinations,
  • invalid combinations,
  • transitions between states.

Ignoring conditional requiredness is a common source of missed defects.
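Conditional requiredness can be made concrete as a small validation sketch. The rule below is hypothetical (a "Loss Reason" field required only when Stage is "Closed Lost"); it only illustrates how valid combinations, invalid combinations, and transitions each become distinct checks.

```python
def validate_required(record: dict) -> list:
    """Hypothetical conditional-requiredness rule: 'Loss_Reason' is
    required only when Stage is 'Closed Lost'."""
    errors = []
    if not record.get("Name"):
        errors.append("Name is always required")
    if record.get("Stage") == "Closed Lost" and not record.get("Loss_Reason"):
        errors.append("Loss Reason is required when Stage is 'Closed Lost'")
    return errors

# Valid combination: conditional field not yet needed.
assert validate_required({"Name": "Acme deal", "Stage": "Open"}) == []

# Invalid combination: the stage makes the missing field an error.
assert validate_required({"Name": "Acme deal", "Stage": "Closed Lost"}) != []

# Transition: the same record is valid before the stage change
# and invalid after it, so the transition itself needs a test condition.
```

Each branch of the rule maps directly to at least one test condition, which is exactly what gets lost when requiredness is checked only on the default layout.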

Field visibility and editability

Fields may be:

  • visible but read-only,
  • editable only in certain statuses,
  • hidden for specific profiles,
  • dynamically shown or hidden by Lightning configuration.

Visibility does not guarantee editability.

Test conditions must distinguish between:

  • what the user sees,
  • what the user can change,
  • what the system accepts on save.
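The three layers can be modeled separately so that each one gets its own condition. The sketch below uses invented field and attribute names; the point is that "visible", "editable", and "accepted on save" are independent flags, and server-side acceptance must be tested even for fields the UI locks.

```python
from dataclasses import dataclass

@dataclass
class FieldAccess:
    visible: bool           # what the user sees
    editable: bool          # what the user can change in the UI
    accepted_on_save: bool  # what the server accepts, e.g. via the API

def derive_conditions(name: str, access: FieldAccess) -> list:
    """Emit one test condition per layer; no layer implies another."""
    return [
        f"{name} is {'shown' if access.visible else 'hidden'} on the layout",
        f"{name} is {'editable' if access.editable else 'read-only'} in the UI",
        f"save with a changed {name} value is "
        f"{'accepted' if access.accepted_on_save else 'rejected'} by the server",
    ]

# Visible but read-only: a value injected outside the UI should still be rejected.
conds = derive_conditions("Discount", FieldAccess(visible=True, editable=False,
                                                  accepted_on_save=False))
```

A UI-only test would cover the first two conditions and silently skip the third.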

Field dependencies and derived values

Many fields depend on:

  • formulas,
  • roll-up summaries,
  • automation outcomes,
  • integration updates.

Test conditions should identify:

  • which fields are calculated,
  • which fields trigger logic,
  • which fields are outputs rather than inputs.

Testing derived fields as editable inputs is a common mistake.
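A simple classification pass over field metadata makes the input/output split explicit. The metadata shape below is hypothetical (real configuration would come from the org's setup or metadata export); it shows the principle: any field driven by a formula or roll-up is an output and should be tested by changing its inputs, not by editing it directly.

```python
# Hypothetical field metadata: which fields are calculated (outputs).
FIELDS = {
    "Amount":      {"formula": None,            "rollup": False},
    "Total_Items": {"formula": None,            "rollup": True},   # roll-up summary
    "Margin":      {"formula": "Amount - Cost", "rollup": False},  # formula field
}

def is_output(meta: dict) -> bool:
    """A formula or roll-up field is an output, never a direct input."""
    return meta["formula"] is not None or meta["rollup"]

outputs = {name for name, meta in FIELDS.items() if is_output(meta)}
# Only "Amount" remains a genuine input in this example.
```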

Data types and boundary conditions

Field behaviour varies by type:

  • text fields have length limits,
  • numeric fields have precision and scale,
  • date and datetime fields interact with time zones and date-based logic,
  • picklists enforce allowed values.

Boundary conditions should be identified during analysis, not during execution.

Examples:

  • minimum and maximum values,
  • empty vs null behaviour,
  • invalid combinations rejected by validation rules.
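Boundary candidates can be enumerated mechanically once the limits are known from configuration. The sketch below assumes a length-limited text field with hypothetical limits (standard Salesforce text fields commonly cap at 255 characters, but the real limit should come from the field definition).

```python
def boundary_values(min_len: int, max_len: int) -> dict:
    """Boundary candidates for a length-limited text field."""
    return {
        "empty": "",
        "null": None,                       # empty vs null can behave differently
        "at_min": "x" * min_len,
        "at_max": "x" * max_len,
        "over_max": "x" * (max_len + 1),    # should be rejected on save
    }

# Hypothetical limits for a 255-character text field.
cases = boundary_values(1, 255)
assert len(cases["at_max"]) == 255
assert len(cases["over_max"]) == 256
```

Generating the candidates during analysis means execution only has to run them, not invent them.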

Record Types as behaviour modifiers

Record Types are not just UI selectors.

They influence:

  • available fields,
  • picklist values,
  • page layouts,
  • automation paths.

Test conditions must explicitly account for:

  • each relevant record type,
  • transitions between record types,
  • default values assigned per record type.

Ignoring record types leads to incomplete coverage.
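One concrete way to enumerate record-type coverage is per picklist value. The configuration below is invented (two hypothetical record types with different allowed Stage values); it shows how each (record type, value) pair becomes its own condition, plus at least one cross-type rejection check.

```python
# Hypothetical record-type configuration: allowed picklist values differ.
PICKLIST_BY_RECORD_TYPE = {
    "New_Business": {"Prospecting", "Negotiation", "Closed Won"},
    "Renewal":      {"Renewal Draft", "Closed Won"},
}

def conditions_for_picklist(field: str) -> list:
    """One condition per (record type, value) pair, plus a rejection
    check for a value valid in one type but not another."""
    conds = []
    for rtype, values in sorted(PICKLIST_BY_RECORD_TYPE.items()):
        for value in sorted(values):
            conds.append(f"{field}='{value}' is accepted for record type {rtype}")
    conds.append(f"Renewal record rejects {field}='Prospecting' (not in its value set)")
    return conds

conds = conditions_for_picklist("Stage")
```

The list grows with each record type added, which is exactly why record types multiply coverage rather than just changing layouts.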

Relationships and indirect conditions

Relationships introduce indirect test conditions.

Examples:

  • parent record status affecting child behaviour,
  • required related records for process execution,
  • aggregation logic across related records.

These conditions are often not visible in the UI, but they drive system behaviour.

They must be captured explicitly in analysis.
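An indirect condition of the first kind can be sketched as a rule that never inspects the child at all. The rule below is hypothetical (children of a "Closed" parent are locked); the point is that the test condition lives on the relationship, so a child-only test would never find it.

```python
def child_editable(parent_status: str, child: dict) -> bool:
    """Hypothetical indirect condition: children of a 'Closed' parent
    are locked, regardless of the child's own field values."""
    return parent_status != "Closed"

# The child record is identical in both checks; only the parent differs.
assert child_editable("Open",   {"Name": "Line item"}) is True
assert child_editable("Closed", {"Name": "Line item"}) is False
```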

Avoiding UI-driven test design

A common trap is designing test conditions based on:

  • what is visible on the screen,
  • what fields appear on the layout.

In Salesforce, visibility does not equal relevance.

Effective test conditions come from:

  • configuration analysis,
  • data model understanding,
  • logic review.

UI validation is the last step, not the first.

From test conditions to test cases

Once object- and field-level test conditions are identified:

  • scenarios define coverage,
  • test cases control execution,
  • preconditions define context.

Skipping this step results in:

  • duplicated test cases,
  • missed logic paths,
  • fragile execution.

Key takeaway

In Salesforce:

  • objects define scope,
  • fields define behaviour,
  • relationships define complexity.

Object and field analysis is the bridge between requirements and effective test design.

Without it, test cases validate screens instead of systems.
