PCAOB inspectors have cited inadequate journal entry testing documentation in every inspection cycle for the past several years. The findings appear in inspection reports for firms of every size — Big Four, national, and regional — which means the problem is not specific to under-resourced firms. It's a methodology gap that exists across the profession.
AS 2201 is the standard that governs an integrated audit of financial statements and internal control over financial reporting. For journal entry testing, it works in tandem with AS 2110 (Identifying and Assessing Risks of Material Misstatement) and the fraud consideration standard (AS 2401 in the PCAOB literature; AU-C Section 240 is the closely parallel AICPA standard). Reading these together reveals what the documentation threshold actually is — and it's more specific than most workpaper templates reflect.
The Primary Obligation: Paragraph .62 of AU-C 240
AU-C Section 240 paragraph .62 states that when using computer-assisted audit techniques (CAATs) or other audit software to perform JE testing, the auditor should document "the type of tests performed, the data used in those tests, and the results." That's the minimum. The PCAOB overlay, AS 1215 (Audit Documentation), adds that for integrated audits the documentation must be sufficient for an experienced auditor having no previous connection with the engagement to understand the nature, timing, extent, and results of the work performed.
The phrase "experienced auditor having no previous connection with the engagement" is the operative standard. Your documentation must be self-contained. It cannot rely on institutional knowledge, oral explanation, or prior-year files for its meaning. An inspector picking up the workpaper file for the first time must be able to understand what you did and why it was sufficient.
In practice, this means your JE testing documentation needs to stand alone as a description of a complete procedure, not just a record of results.
The Five Elements That Must Be Documented
Drawing from AS 2201, AS 2110, AS 1215, and the fraud standard together, a complete JE testing workpaper for an IT-assisted procedure needs five elements:
1. Population Definition. Describe the complete journal entry population: the source system, the fiscal year and period covered, the company codes or legal entities included, and the total record count. If you excluded any entries from the population (for example, auto-generated system entries or specific document types), document the basis for the exclusion and confirm that excluded entries don't contain the risks you're testing for. (A sketch of this population reconciliation appears after this list.)
2. Testing Objectives and Risk Basis. State what you're testing for and why this population carries fraud risk. The required language is in AU-C 240: you're testing for the risk of management override of controls through the posting of unusual or inappropriate journal entries. Your documentation should make explicit what "unusual or inappropriate" means in the context of this specific engagement — what account types, transaction patterns, or preparer characteristics you defined as indicators of the risk you're addressing.
3. Selection Criteria and Their Basis. Whether you used manual filters, an automated tool, or a combination, document the specific criteria applied. If using an automated tool, document the tool name and version, the parameters configured for this engagement, and the basis for concluding the parameters are appropriate. "We used AuditPulsar version 2.4 with default parameters" is insufficient. "We configured the tool to flag entries posted outside normal business hours (between 8 PM and 6 AM local time) and entries where the preparer had not previously posted to the debit account in the trailing 12-month history, because the control environment at this client lacks mitigating detective controls for these patterns" is sufficient. (A sketch implementing these two criteria follows the list.)
4. Items Selected and Disposition. For each item selected for detailed testing, document the item's identifying information (document number, date, amount, preparer), the reason it was selected (what criterion triggered it), the supporting documentation obtained, and your conclusion about the item. If the item was cleared, explain why it doesn't represent a misstatement or a control deficiency. If it was escalated, describe the escalation path and ultimate disposition. (The sketch after this list includes a starter layout for this selections log.)
5. Coverage Conclusion. Document the basis for concluding that the testing performed is sufficient to address the identified risk. This is where many workpapers are weakest. Stating "we reviewed 60 items from the flagged population" is not a coverage conclusion — it's a fact about what you did. The coverage conclusion explains why reviewing those 60 items, using those criteria, against that population, is sufficient to support a conclusion that the risk of material misstatement from inappropriate journal entries has been addressed.
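To make element 1 concrete, here is a minimal pandas sketch of a population definition with documented exclusions and a completeness reconciliation. The file name, column names, and exclusion rules are hypothetical placeholders, not output from any particular GL system or tool:

```python
import pandas as pd

# Hypothetical full-population extract from the GL system; column names
# are illustrative, not tied to a specific ERP.
population = pd.read_csv("gl_extract_fy2024.csv",
                         parse_dates=["posting_timestamp"])
total_extracted = len(population)

# Document every exclusion explicitly, with a rationale that can be
# carried into the workpaper verbatim.
exclusions = {
    "system-generated depreciation postings (no manual intervention possible)":
        population["doc_type"].eq("DEPR"),
    "statistical/planning documents that never post to the GL":
        population["doc_type"].eq("STAT"),
}

excluded = pd.Series(False, index=population.index)
for rationale, mask in exclusions.items():
    print(f"Excluded {mask.sum():>8,} entries: {rationale}")
    excluded |= mask

tested_population = population.loc[~excluded]

# Completeness reconciliation: included + excluded must equal the
# extract, and the extract count should separately be agreed to a
# system-generated control total.
assert len(tested_population) + int(excluded.sum()) == total_extracted
print(f"Population for testing: {len(tested_population):,} "
      f"of {total_extracted:,} extracted entries")
```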
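And for elements 3 and 4, a sketch of the two example criteria above implemented directly in pandas, feeding a starter selections log. This continues from the population sketch; the history file, the column names (preparer_id, debit_account, entry_id, amount), and the output layout are assumptions for illustration:

```python
import pandas as pd

je = tested_population.copy()  # from the population sketch above

# Criterion 1: posted outside normal business hours (8 PM to 6 AM local).
hour = je["posting_timestamp"].dt.hour
je["flag_off_hours"] = (hour >= 20) | (hour < 6)

# Criterion 2: preparer has not posted to this debit account at any
# point in the trailing 12 months (hypothetical prior-period extract).
history = pd.read_csv("gl_extract_trailing_12m.csv")
known_pairs = set(zip(history["preparer_id"], history["debit_account"]))
je["flag_new_pairing"] = [
    (p, a) not in known_pairs
    for p, a in zip(je["preparer_id"], je["debit_account"])
]

flagged = je[je["flag_off_hours"] | je["flag_new_pairing"]]
print(f"{len(flagged):,} of {len(je):,} entries flagged for detailed review")

# Element 4 starter: one row per selected item with the triggering
# criterion recorded; support reference and conclusion are filled in
# as detailed testing proceeds.
def criteria_hit(row) -> str:
    hits = []
    if row.flag_off_hours:
        hits.append("off-hours posting")
    if row.flag_new_pairing:
        hits.append("new preparer/debit-account pairing")
    return "; ".join(hits)

selections_log = flagged.assign(
    selection_criterion=flagged.apply(criteria_hit, axis=1),
    support_reference="",  # workpaper cross-reference, completed manually
    conclusion="",         # cleared vs. escalated, with rationale
)[["entry_id", "posting_timestamp", "amount", "preparer_id",
   "selection_criterion", "support_reference", "conclusion"]]
selections_log.to_csv("je_selections_log.csv", index=False)
```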
The IT-Assisted Procedure Caveat
When the selection criteria are implemented by an automated tool rather than manually, there's an additional documentation obligation that most firms currently under-address: the auditor must document the basis for relying on the tool's output.
Under PCAOB AS 1015 (Due Professional Care; superseded by AS 1000, General Responsibilities of the Auditor, for audits of fiscal years beginning on or after December 15, 2024), and more specifically AS 2110 paragraphs 56-58, when the auditor uses IT to perform a procedure, they must have a basis for concluding the IT is functioning as intended. For a commercial audit tool, this doesn't require the auditor to independently validate the software's source code. It does require documentation that the tool was used appropriately for this application, that the auditor understands its methodology, and that there are no known limitations that would affect the reliability of its output for this engagement.
In practice, this means your workpapers should include: the vendor's description of the tool's methodology (a product white paper or technical documentation will suffice), the specific configuration applied for this engagement, and a note confirming that you reviewed the tool's documented limitations and determined they don't affect the application here.
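One lightweight way to keep these three pieces together is a structured reliance note filed alongside the tool's methodology exhibit. A minimal sketch follows; the field names and workpaper references are hypothetical, and the parameter values echo the configuration example from the five-elements list:

```python
# Illustrative tool-reliance note for the workpaper file. Field names,
# workpaper references, and the limitations summary are placeholders to
# be replaced with engagement-specific facts.
tool_reliance_note = {
    "tool": "AuditPulsar",
    "version": "2.4",
    "methodology_exhibit": "WP 5210-EX1 (vendor methodology document)",
    "engagement_configuration": {
        "off_hours_window": "20:00-06:00 local time",
        "preparer_history_lookback_months": 12,
    },
    "configuration_basis": (
        "Control environment lacks mitigating detective controls for "
        "off-hours postings and first-time preparer/account pairings; "
        "see risk assessment workpaper."
    ),
    "limitations_review": (
        "Reviewed the vendor's documented limitations, including the "
        "population types where Benford's Law analysis is not applied; "
        "determined none affect the reliability of the output for this "
        "engagement."
    ),
}
```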
AuditPulsar provides a methodology document specifically designed to meet this requirement. It describes the statistical tests and ML models used in the scoring, the training data, the known limitations (including specific population types where Benford's Law analysis is not applied), and the version history of the scoring algorithm. That document is intended to be included as an exhibit in the JE testing workpaper file.
What PCAOB Inspectors Have Actually Found
The 2024 PCAOB inspection report on journal entry testing deficiencies identified several recurring patterns in inadequate workpapers. The most frequently cited: population definitions that exclude entries without a documented rationale; selection criteria described only as "unusual entries," with no definition of what makes an entry unusual on the engagement; workpapers that name the tool used but not how it was configured; and coverage conclusions that restate the testing procedure rather than explain why it is sufficient.
The second-most-cited category is timing. AU-C 240 makes entries posted near the period end a specifically required focus of JE testing. Several inspection findings noted that firms' workpapers didn't demonstrate this focus: the testing covered the full-year population without any analysis of, or enhanced coverage for, the period-end sub-population where fraud risk is highest.
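Continuing the selection-criteria sketch above, the period-end focus can be demonstrated with a short sub-population analysis in the workpaper. The 14-day window below is an illustrative assumption, not a threshold from the standards:

```python
import pandas as pd

# Period-end focus: compare flag counts and coverage for entries posted
# near period end against the full-year population (je and flagged come
# from the selection-criteria sketch above).
fiscal_year_end = pd.Timestamp("2024-12-31")
window_start = fiscal_year_end - pd.Timedelta(days=14)

# In a single-fiscal-year extract, everything on or after window_start
# is the period-end sub-population.
pe_population = je[je["posting_timestamp"] >= window_start]
pe_flagged = flagged[flagged["posting_timestamp"] >= window_start]

print(f"Period-end sub-population: {len(pe_population):,} entries "
      f"({len(pe_population) / len(je):.1%} of the year)")
print(f"Flagged within the window: {len(pe_flagged):,} "
      f"({len(pe_flagged) / max(len(pe_population), 1):.1%} "
      f"of the sub-population)")
# If period-end coverage is thin relative to the assessed risk, the
# workpaper should document the additional procedures performed there.
```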
The third category is unpopulated workpaper templates. Firms that use standard workpaper templates with fill-in fields often leave required fields blank, particularly the "nature and scope of procedures performed" and "basis for the conclusion" fields. A blank field in a PCAOB workpaper is an inspection finding regardless of how thorough the underlying work was.
Documentation Checkpoints for Automated Screening
A practical pre-completion checklist for JE testing workpapers that use automated screening:
1. Does the population definition include the total record count and the source system?
2. Are any excluded entries documented with a rationale?
3. Is the extraction confirmed complete (i.e., no known gaps in the data pulled from the source system)?
4. Does the tool documentation exhibit describe the methodology, version, and known limitations?
5. Does the configuration documentation specify the parameters applied for this engagement?
6. Does the items-tested table include each item's identifier, selection criterion, supporting documentation reference, and conclusion?
7. Does the coverage conclusion explain why the screening criteria are appropriate for the fraud risks identified in this engagement's risk assessment?
8. Does the workpaper include a period-end analysis, or confirm that the period-end population received proportional coverage?
If any of these checkpoints are not met, the workpaper has a documentation gap that a PCAOB inspector or quality control reviewer would likely flag.
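The checklist also lends itself to a mechanical pre-sign-off gate. Here is a minimal sketch; the metadata field names are hypothetical stand-ins for wherever your workpaper tooling actually stores these attributes:

```python
# Pre-completion gate over the checklist above. Field names are
# illustrative; adapt to the engagement file's actual structure.
def documentation_gaps(wp: dict) -> list[str]:
    checks = {
        "population record count and source system documented":
            bool(wp.get("population_count")) and bool(wp.get("source_system")),
        "every exclusion carries a rationale":
            all(e.get("rationale") for e in wp.get("exclusions", [])),
        "extraction completeness confirmed":
            bool(wp.get("completeness_confirmed")),
        "tool methodology/version/limitations exhibit attached":
            bool(wp.get("tool_exhibit_ref")),
        "engagement-specific configuration documented":
            bool(wp.get("configuration")),
        "items-tested table complete (support and conclusion per item)":
            bool(wp.get("items_tested")) and all(
                i.get("conclusion") and i.get("support_reference")
                for i in wp.get("items_tested", [])
            ),
        "coverage conclusion ties criteria to assessed fraud risks":
            bool(wp.get("coverage_conclusion")),
        "period-end sub-population analysis present":
            bool(wp.get("period_end_analysis")),
    }
    return [name for name, ok in checks.items() if not ok]

# Example: a workpaper metadata record with two fields still blank.
workpaper_metadata = {
    "population_count": 412_387,
    "source_system": "GL extract, FY2024",
    "exclusions": [{"doc_type": "DEPR", "rationale": "system-generated"}],
    "completeness_confirmed": True,
    "tool_exhibit_ref": "WP 5210-EX1",
    "configuration": {"off_hours_window": "20:00-06:00"},
    "items_tested": [{"entry_id": "400123",
                      "support_reference": "WP 5211",
                      "conclusion": "cleared"}],
    "coverage_conclusion": "",
    "period_end_analysis": None,
}

for gap in documentation_gaps(workpaper_metadata):
    print("Gap to resolve before sign-off:", gap)
```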
The Coverage Standard Is Not a Number
One common misconception: there's a coverage threshold — 100 items, 5%, or some specific percentage — that constitutes sufficient JE testing. There is no such threshold in the standards. The standard is risk-based: the testing must be sufficient to address the identified fraud risk in the context of the overall audit approach.
A complete-population screening that produces 15 high-risk items for detailed review can be more than sufficient for a small, low-complexity client. A sample of 100 items from a 500,000-entry population with high management override risk may be insufficient. The sufficiency determination is auditor judgment, and the workpaper needs to document the basis for that judgment — not just the count of items tested.
The move toward automated screening tools should make this easier, not harder. When you have complete population coverage with documented scoring criteria, the coverage conclusion is more defensible, not less. The challenge is making sure the documentation reflects the quality of the procedure that was actually performed.