Evidence Validation
What It Does
When an evidence file is uploaded and tagged to a specific PCI DSS requirement, Cortex can validate whether the document adequately covers the content items that the ROC template requires for that requirement. Kliper maintains a validation specification for each requirement — a structured checklist of content items the evidence document must address. These specs are derived from the PCI DSS v4.0.1 ROC template and stored in `document-validation.json`.
Validation Flow
Text Extraction
The uploaded file’s text content is extracted using format-specific parsers:
- PDF — parsed via `pdf-parse`, extracting up to 50,000 characters of text.
- Word (DOCX/DOC) — parsed via `mammoth`, extracting raw text.
- Excel (XLSX/XLS) — converted to CSV per sheet via `xlsx`.
- PowerPoint (PPTX) — slide text extracted from the XML structure.
- Visio (VSDX) — text labels extracted from diagram page XML.
- Text/Config/JSON/XML — read directly as UTF-8.
- Certificates (PEM) — read directly; binary certs (P12/PFX) parsed via OpenSSL.
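The dispatch from file type to parser can be sketched as a simple extension lookup. The function and constant names below are illustrative, not the platform's actual API; only the library names and the 50,000-character PDF cap come from the text above.

```typescript
const MAX_CHARS = 50_000; // extraction cap noted above for PDF text

type Extractor =
  | "pdf-parse" | "mammoth" | "xlsx"
  | "pptx-xml" | "vsdx-xml" | "utf-8" | "openssl";

// Hypothetical extension-to-parser table mirroring the list above.
const extractorByExtension: Record<string, Extractor> = {
  pdf: "pdf-parse",
  docx: "mammoth", doc: "mammoth",
  xlsx: "xlsx", xls: "xlsx",
  pptx: "pptx-xml",
  vsdx: "vsdx-xml",
  txt: "utf-8", conf: "utf-8", json: "utf-8", xml: "utf-8", pem: "utf-8",
  p12: "openssl", pfx: "openssl",
};

function pickExtractor(filename: string): Extractor | undefined {
  const ext = filename.toLowerCase().split(".").pop() ?? "";
  return extractorByExtension[ext];
}

// Apply the character cap to extracted text.
function truncate(text: string): string {
  return text.length > MAX_CHARS ? text.slice(0, MAX_CHARS) : text;
}
```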
Criteria Lookup
The platform looks up the validation specification for the requirement. Each spec contains:
- Requirement ID — e.g.,
3.4.1 - Title — human-readable requirement name
- Type —
documentorevidence - Tag — the document reference tag (e.g.,
DOCFW,EVDFW) - Criteria — an array of specific content items the document should cover
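One entry in `document-validation.json` might look like the following. The property names and the sample criteria text are assumptions based on the fields listed above, not the file's confirmed schema.

```typescript
// Assumed shape of a validation spec entry.
interface ValidationSpec {
  requirementId: string;            // e.g., "3.4.1"
  title: string;                    // human-readable requirement name
  type: "document" | "evidence";
  tag: string;                      // document reference tag, e.g., "DOCFW"
  criteria: string[];               // content items the document must address
}

// Illustrative data; real specs come from the ROC template.
const specs: ValidationSpec[] = [
  {
    requirementId: "3.4.1",
    title: "PAN is masked when displayed",
    type: "document",
    tag: "DOCFW",
    criteria: [
      "Roles with a business need to see full PAN are documented",
      "Masking approach for displayed PAN is described",
    ],
  },
];

function lookupSpec(requirementId: string): ValidationSpec | undefined {
  return specs.find((s) => s.requirementId === requirementId);
}
```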
AI Evaluation
The extracted text and criteria checklist are sent to the AI (OpenAI `gpt-4o-mini`, temperature 0.2) with a structured system prompt that instructs the model to:
- Check every criterion in the checklist.
- Determine whether the document content reasonably addresses each item.
- Provide a brief excerpt (max 120 characters) from the document when a criterion is found.
- Add a note for partial coverage or concerns.
- Never fabricate excerpts — if content is not present, mark it as not found.
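A minimal sketch of how that evaluation request might be assembled is below. The model name and temperature come from the text above; the prompt wording, message structure, and function name are assumptions.

```typescript
interface ChatMessage { role: "system" | "user"; content: string }

function buildEvaluationRequest(documentText: string, criteria: string[]) {
  // System prompt paraphrasing the instructed behaviors listed above.
  const system = [
    "You are validating a compliance evidence document.",
    "Check every criterion in the checklist.",
    "Decide whether the document reasonably addresses each item.",
    "When a criterion is found, quote a brief excerpt (max 120 characters).",
    "Add a note for partial coverage or concerns.",
    "Never fabricate excerpts; if content is not present, mark it not found.",
  ].join("\n");

  const user =
    `Criteria:\n${criteria.map((c, i) => `${i + 1}. ${c}`).join("\n")}` +
    `\n\nDocument:\n${documentText}`;

  return {
    model: "gpt-4o-mini",
    temperature: 0.2,
    messages: [
      { role: "system", content: system },
      { role: "user", content: user },
    ] as ChatMessage[],
  };
}
```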
Validation Statuses
The summary status is derived from the found/total ratio:

| Status | Condition | Meaning |
|---|---|---|
| Complete | All criteria found | Document fully covers the requirement |
| Partial | At least 50% of criteria found, but not all | Document covers most items but has gaps |
| Insufficient | Fewer than 50% of criteria found | Document is missing substantial required content |
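The thresholds in the table above reduce to a small derivation function; this sketch follows the stated ratios, though the platform's internal naming is unknown.

```typescript
type ValidationStatus = "Complete" | "Partial" | "Insufficient";

// Derive the summary status from the found/total ratio, per the table above.
function deriveStatus(found: number, total: number): ValidationStatus {
  if (total > 0 && found === total) return "Complete";
  if (total > 0 && found / total >= 0.5) return "Partial";
  return "Insufficient";
}
```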
What the Assessor Sees
In the Attachments Panel, each file displays its validation status. Expanding the validation result shows:
- A checklist of all criteria with checkmarks (found) or X marks (not found).
- Excerpts from the document that demonstrate coverage.
- Notes on partial coverage or missing items.
- The AI model used and when the validation was performed.
Cortex Autofill — ROC Findings Generation
What It Does
Cortex Autofill generates a draft findings description for a specific PCI DSS requirement. This is the narrative text that appears in the final ROC, describing what the assessor examined, what methods were used, and what was observed.
When to Use It
Autofill is most effective when the assessor has already:
- Uploaded relevant evidence files and tagged them to the requirement.
- Filled in at least some testing procedure responses.
- Selected a finding status (In Place, Not Applicable, Not Tested, Not in Place).
When key inputs are missing, Cortex inserts placeholders (e.g., `[PENDING_RESPONSE]`) rather than fabricating content.
How It Works
Context Assembly
When the assessor triggers autofill on a requirement, the backend assembles a comprehensive context package:
- Reporting instructions — the ROC template’s instructions for this specific requirement.
- PCI DSS guidance — the Purpose, Good Practice, Definitions, and Examples from the PCI DSS v4.0.1 guidance document (loaded from `pci-guidance.json`, covering 200+ requirements).
- Assessor responses — which testing procedures have been filled in and what they contain. Empty procedures are explicitly flagged.
- Evidence files — names and AI-generated summaries of files uploaded to the requirement's section. If files have document reference tags (`doctag-DOCFW`), the tag-to-file mapping is provided so the AI can reference actual file names.
- Finding status — the selected assessment finding (In Place, Not in Place, etc.) and method flags (Compensating Control, Customized Approach).
- Customized Approach Objective — if the Customized Approach method is selected, the requirement’s Customized Approach Objective from PCI DSS guidance is included, and Cortex is instructed to address the objective rather than the standard testing procedures.
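The assembled context package might be modeled as an object like the one below. All field names here are assumptions; only the pieces of context and the flagging of empty procedures come from the list above.

```typescript
interface ProcedureResponse { procedureId: string; text: string; empty: boolean }

// Assumed shape of the autofill context package described above.
interface AutofillContext {
  reportingInstructions: string;
  guidance: { purpose?: string; goodPractice?: string; definitions?: string; examples?: string };
  responses: ProcedureResponse[];
  evidence: { filename: string; summary: string; tag?: string }[];
  finding: "In Place" | "Not Applicable" | "Not Tested" | "Not in Place";
  compensatingControl: boolean;
  customizedApproach: boolean;
  customizedApproachObjective?: string;
}

// Empty testing procedures are explicitly flagged, per the text above.
function flagEmptyResponses(
  raw: { procedureId: string; text: string }[],
): ProcedureResponse[] {
  return raw.map((r) => ({ ...r, empty: r.text.trim() === "" }));
}
```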
AI Generation
The context is sent to OpenAI (`gpt-4o-mini`, temperature 0.3, max 300 tokens) with a system prompt that enforces QSA writing conventions.
Required behaviors:
- Reference evidence by tag name (e.g., “Per DOCFW, firewall rulesets restrict…”).
- State what was examined, what method was used (document review, interview, observation, configuration review), and what was found.
- Write 2–4 sentences maximum.
- Use paragraph form, no bullet points.
- Use placeholders for missing data rather than inventing content.
Prohibited behaviors:
- Generic filler phrases (“thorough examination”, “comprehensive review”, “adequately”, “ensuring that”, “corroborated”, “in accordance with”).
- Restating the requirement text.
- Stating the finding status (the assessor selects that separately).
- Inventing tag names that were not provided.
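The filler-phrase prohibition could be enforced with a simple post-generation check. This lint function is an illustrative sketch, not the platform's actual implementation; the phrase list comes directly from the prohibition above.

```typescript
// Filler phrases the system prompt forbids, per the list above.
const FILLER_PHRASES = [
  "thorough examination",
  "comprehensive review",
  "adequately",
  "ensuring that",
  "corroborated",
  "in accordance with",
];

// Return any prohibited phrases present in a generated draft.
function findFiller(draft: string): string[] {
  const lower = draft.toLowerCase();
  return FILLER_PHRASES.filter((p) => lower.includes(p));
}
```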
Autofill with Compensating Controls
When the assessor selects the Compensating Control method, Cortex adjusts its output to note that Appendix C applies and frames the findings around the compensating control rather than the standard testing procedure.
Autofill with Customized Approach
When the assessor selects the Customized Approach method, Cortex:
- Loads the Customized Approach Objective from PCI DSS guidance for that requirement.
- Instructs the AI to explain how the entity’s implementation meets the Customized Approach Objective, rather than addressing the standard defined approach testing procedures.
- Returns a warning if no Customized Approach Objective exists for the requirement (some requirements are not eligible).
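The objective-or-warning branch can be sketched as follows. The lookup table, objective text, and function name are hypothetical; only the behavior (use the objective when present, warn when absent) is from the text above.

```typescript
// Hypothetical objective lookup; real data comes from PCI DSS guidance.
const objectives: Record<string, string> = {
  "3.5.1": "PAN is rendered unreadable wherever it is stored.", // illustrative text
};

function customizedApproachPrompt(
  requirementId: string,
): { objective: string } | { warning: string } {
  const objective = objectives[requirementId];
  if (!objective) {
    // Some requirements are not eligible for the Customized Approach.
    return { warning: `Requirement ${requirementId} has no Customized Approach Objective.` };
  }
  return { objective };
}
```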
Validation Step Analysis
Cortex also analyzes the reporting instruction text to determine which validation steps are relevant for a requirement. It uses keyword matching to identify required evidence types:

| Keyword in Reporting Instructions | Validation Step Generated |
|---|---|
| “document”, “review”, “examine”, “verify” | Documentation Reviewed |
| “sample”, “test”, “select”, “random” | Samples Taken |
| “interview”, “personnel”, “staff”, “employee” | Personnel Interviewed |
| “technology”, “system”, “component”, “application” | Critical Technologies |
| “configuration”, “setting”, “parameter” | Settings Reviewed |
| “method”, “procedure”, “process”, “approach” | Methods |
| “software”, “application”, “tool”, “solution” | Software |
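The table above maps directly to a keyword-matching function. The keywords and step names come from the table; the function name and data layout are assumptions about how the matching might be implemented.

```typescript
// Keyword-to-step table, transcribed from the table above.
const VALIDATION_STEP_KEYWORDS: [string[], string][] = [
  [["document", "review", "examine", "verify"], "Documentation Reviewed"],
  [["sample", "test", "select", "random"], "Samples Taken"],
  [["interview", "personnel", "staff", "employee"], "Personnel Interviewed"],
  [["technology", "system", "component", "application"], "Critical Technologies"],
  [["configuration", "setting", "parameter"], "Settings Reviewed"],
  [["method", "procedure", "process", "approach"], "Methods"],
  [["software", "application", "tool", "solution"], "Software"],
];

function relevantValidationSteps(reportingInstructions: string): string[] {
  const text = reportingInstructions.toLowerCase();
  const steps: string[] = [];
  for (const [keywords, step] of VALIDATION_STEP_KEYWORDS) {
    if (keywords.some((k) => text.includes(k))) steps.push(step);
  }
  return steps;
}
```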
Chat Interface
Beyond validation and autofill, Cortex provides a conversational interface in the right-side panel of the workbench. Assessors can ask free-form questions about the current requirement:
- “What does PCI DSS 4.0.1 require for Requirement 3.4.1?”
- “What interview questions should I ask about encryption key management?”
- “What evidence should I look for to validate this requirement?”
Cortex chat history is maintained per assessment session. Conversations are not persisted across browser sessions.