Auditing is how you answer:
- “Who changed this?”
- “When did it change?”
- “What was it before?”
…with actual evidence, not group chat speculation.
Pre-flight checklist (audit is opt-in)
Before you export anything:
- ✅ Auditing is enabled at the right level:
  - environment/org
  - table/entity
  - and (if relevant) specific columns
- ✅ You have appropriate permissions to read audit history
- ✅ You know your scope:
  - which table(s)
  - which records
  - what date range
- ✅ You know what you’ll do with the output (support ticket, compliance request, investigation, UAT proof)
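Pinning the scope down as data, instead of keeping it in your head, makes the later validation step easier. A minimal sketch; the class and field names are illustrative, not settings in any tool:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AuditExportScope:
    table: str                 # logical name of the audited table
    date_from: date
    date_to: date
    record_ids: list[str] = field(default_factory=list)  # empty = whole table

    def validate(self) -> list[str]:
        """Return a list of problems; an empty list means the scope is sane."""
        problems = []
        if not self.table:
            problems.append("no table chosen")
        if self.date_from > self.date_to:
            problems.append("date range is reversed")
        return problems

# Example: audit export scoped to one table and one quarter.
scope = AuditExportScope("account", date(2024, 1, 1), date(2024, 3, 31))
```

Writing the scope down like this also gives you something concrete to paste into the support ticket or investigation notes.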
Consultant law #11: If auditing wasn’t enabled before the incident, the audit log will not time travel.
Tool: Audit History Extractor
(export audit history with lookup/optionset-friendly output)
What it’s for
Exporting audit history at scale, with readable values for lookups/optionsets in the output (so you can interpret changes without manually decoding everything).
When to use it
Use it when:
- You need audit data for investigations or compliance requests
- You need bulk audit exports (not one record at a time)
- You want a clean spreadsheet/report-ready output
Step-by-step (investigation-ready workflow)
1. Confirm auditing prerequisites (enabled + scope).
2. Choose:
   - table/entity
   - record scope (one specific record vs a set)
   - date range
3. Export audit history.
4. Review key columns:
   - who
   - when
   - field changed
   - old value → new value
5. Filter to the suspect timeframe and fields.
6. Produce a summary: “X changed Y from A to B on date/time.”
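The filter-and-summarize steps above reduce to plain row filtering once the export is in hand. The column names here are illustrative; match them to whatever your export actually produces:

```python
from datetime import datetime

def summarize(rows, field_name, start, end):
    """Filter exported audit rows to one field and timeframe, then produce
    'X changed Y from A to B on date/time' summary lines."""
    lines = []
    for r in rows:
        when = datetime.fromisoformat(r["changed_on"])
        if r["field"] == field_name and start <= when <= end:
            lines.append(
                f'{r["changed_by"]} changed {r["field"]} '
                f'from "{r["old_value"]}" to "{r["new_value"]}" '
                f'on {when:%Y-%m-%d %H:%M} UTC'
            )
    return lines

# Two fabricated export rows; only the March one is in scope.
rows = [
    {"changed_by": "jane", "changed_on": "2024-03-02T14:05:00",
     "field": "creditlimit", "old_value": "10000", "new_value": "50000"},
    {"changed_by": "sam", "changed_on": "2024-01-15T09:00:00",
     "field": "creditlimit", "old_value": "5000", "new_value": "10000"},
]
summary = summarize(rows, "creditlimit",
                    datetime(2024, 3, 1), datetime(2024, 3, 31))
```

The one-line-per-change format is deliberately boring: it pastes straight into a ticket or an investigation summary.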
Common gotchas
- Auditing often isn’t enabled for every column by default.
- Large audit exports can be slow/heavy—scope tightly.
- Time zones can confuse “when” during investigations (be explicit about time zone in your summary).
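The time-zone gotcha is cheap to neutralize: convert once, and name the zone in every timestamp you report. A sketch assuming the export stores UTC timestamps (Dataverse audit timestamps are stored in UTC):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def explicit_local_time(utc_iso: str, tz_name: str) -> str:
    """Turn a UTC timestamp from the export into an unambiguous local string."""
    utc = datetime.fromisoformat(utc_iso).replace(tzinfo=timezone.utc)
    local = utc.astimezone(ZoneInfo(tz_name))
    return local.strftime("%Y-%m-%d %H:%M %Z")

# Late-night UTC change lands on the *next* day in Sydney.
stamp = explicit_local_time("2024-03-02T23:30:00", "Australia/Sydney")
```

“Changed on 2024-03-03 10:30 AEDT” settles arguments that “changed on the 2nd” starts.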
Validation checklist
- Spot-check 1 record using the in-app audit history (if available) and confirm it matches your export.
- Confirm the export includes the fields you care about (and wasn’t limited by audit settings).
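The spot-check can be as mechanical as comparing one exported row against what the in-app audit history shows, key by key. A sketch with illustrative keys:

```python
def spot_check(export_row: dict, in_app_row: dict) -> dict:
    """Return {key: (export_value, in_app_value)} for every mismatch."""
    keys = export_row.keys() | in_app_row.keys()
    return {
        k: (export_row.get(k), in_app_row.get(k))
        for k in keys
        if export_row.get(k) != in_app_row.get(k)
    }

# Matching rows produce an empty report; any drift shows up keyed by field.
mismatches = spot_check(
    {"field": "creditlimit", "old_value": "10000", "new_value": "50000"},
    {"field": "creditlimit", "old_value": "10000", "new_value": "50000"},
)
```

An empty result on one sampled record is weak but cheap evidence that the export is trustworthy; a non-empty result tells you exactly where to look.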
Real scenario
A stakeholder says “this was changed without approval.” Audit export shows:
- exactly who changed it
- when it was changed
- what it changed from/to
Now you can fix process issues with facts, not finger-pointing.
Great guide angles
- Audit prerequisites + the “why is my audit empty?” checklist
- High-volume export tips (scope, batching mindset, date filtering)
- How to interpret audit deltas (especially for option sets/lookups)
- Producing an investigation summary that a client can actually use
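On the option-set angle above: raw audit deltas often carry the underlying integer values, so a label map is what makes before/after readable. The mapping below is invented for illustration; pull the real labels from your table’s metadata:

```python
# Hypothetical option-set label map; the real one comes from table metadata.
STATUS_LABELS = {1: "Open", 2: "On Hold", 3: "Closed"}

def decode_delta(old_raw, new_raw, labels):
    """Render an option-set change as labels, falling back to the raw value."""
    def label(v):
        return labels.get(v, f"<unknown {v}>")
    return f"{label(old_raw)} -> {label(new_raw)}"

delta = decode_delta(1, 3, STATUS_LABELS)
```

The fallback matters: a retired option value in an old audit row should surface as `<unknown 9>`, not crash your report.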
Environment governance & “compare across envs”
(Environment Processes Comparer)
This is the tool for answering:
“Why is Dev different from Prod?”
…without manually opening every process and squinting.
Pick your weapon (quick decision guide)
- Need: Compare process states/config across environments (great for deployment validation and drift detection)
Tool: Environment Processes Comparer
Why: Quickly surfaces what’s missing, inactive, or version-mismatched across environments
Pre-flight checklist (so your comparison means something)
- ✅ You’re comparing the right two environments (e.g., Dev→UAT or UAT→Prod).
- ✅ You know what “process” means in your context (workflows/BPFs/etc.).
- ✅ You have a deployment baseline (expected versions/states).
- ✅ You’ve decided what “matching” looks like:
  - active vs inactive
  - version differences
  - missing processes
Consultant law #4: If you don’t define “expected,” every difference becomes a crisis.
Tool: Environment Processes Comparer
(spot drift, validate deployments, stop guessing)
What it’s for
Comparing process records across environments to identify differences—especially useful after deployments or when troubleshooting behavior inconsistencies.
When to use it
Use it when:
- Something works in Dev but not in Prod
- A process is active in one environment but not another
- You’re doing post-deploy validation and want a sanity check
- You suspect environment drift over time
Step-by-step (post-deploy drift check)
- Connect to Environment A (source) and Environment B (target).
- Load process lists.
- Compare:
- presence/absence
- status (active/inactive)
- version (where applicable)
- Export or record the diff results.
- Fix drift intentionally:
- activate missing processes
- import correct version/solution
- document what changed and why
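Conceptually, the compare step is a dictionary diff over process name → (status, version). The data shapes below are illustrative, not the tool’s actual output format:

```python
def diff_processes(source: dict, target: dict) -> dict:
    """Compare {name: (status, version)} maps from two environments."""
    report = {
        "missing_in_target": sorted(source.keys() - target.keys()),
        "extra_in_target": sorted(target.keys() - source.keys()),
        "mismatched": {},
    }
    for name in source.keys() & target.keys():
        if source[name] != target[name]:
            report["mismatched"][name] = {
                "source": source[name], "target": target[name],
            }
    return report

# Fabricated example: one process missing in Prod, one inactive there.
dev = {"Auto-email on create": ("Active", "1.2"),
       "Escalation BPF": ("Active", "1.0")}
prod = {"Auto-email on create": ("Draft", "1.2")}
report = diff_processes(dev, prod)
```

The three buckets (missing, extra, mismatched) map directly onto the remediation step: activate, import, or document.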
Common gotchas
- Imports don’t always activate processes the way you expect (depends on component type and deployment approach).
- People make “tiny quick fixes” directly in UAT/Prod and forget—drift accumulates.
- Processes can be tied to solution layers; unmanaged edits can complicate comparisons.
Validation checklist
- After remediation:
  - process states match expected
  - the user scenario behaves the same across environments
- Confirm the solution version also matches (don’t only fix symptoms).
Real scenario
“Auto-email sends in Dev but not Prod.” Comparer shows the process is inactive in Prod. You activate it, test once, and go back to being a person.