Cortex is Kliper’s built-in AI assistant. It serves two distinct functions: evidence validation (checking that uploaded documents cover the required content for a given PCI DSS requirement) and autofill (generating draft ROC findings text based on the assessor’s work). Both features are designed to accelerate the assessment process without replacing assessor judgment. Cortex produces drafts and checklists — the assessor retains full authority over all final content.

Evidence Validation

What It Does

When an evidence file is uploaded and tagged to a specific PCI DSS requirement, Cortex can validate whether the document adequately covers the content items that the ROC template requires for that requirement. Kliper maintains a validation specification for each requirement — a structured checklist of content items the evidence document must address. These specs are derived from the PCI DSS v4.0.1 ROC template and stored in document-validation.json.

Validation Flow

1. Text Extraction

The uploaded file’s text content is extracted using format-specific parsers:
  • PDF — parsed via pdf-parse, extracting up to 50,000 characters of text.
  • Word (DOCX/DOC) — parsed via mammoth, extracting raw text.
  • Excel (XLSX/XLS) — converted to CSV per sheet via xlsx.
  • PowerPoint (PPTX) — slide text extracted from the XML structure.
  • Visio (VSDX) — text labels extracted from diagram page XML.
  • Text/Config/JSON/XML — read directly as UTF-8.
  • Certificates (PEM) — read directly; binary certs (P12/PFX) parsed via OpenSSL.
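As a rough sketch, the format dispatch above could be modeled as an extension-to-parser lookup. The parser names match the libraries the docs mention (pdf-parse, mammoth, xlsx), but the map and function names here are illustrative, not Kliper's actual code:

```typescript
type Extractor =
  | "pdf-parse"   // PDF text extraction
  | "mammoth"     // Word raw text
  | "xlsx"        // Excel sheets to CSV
  | "pptx-xml"    // PowerPoint slide XML
  | "vsdx-xml"    // Visio page XML
  | "utf-8"       // plain text, config, JSON, XML, PEM
  | "openssl";    // binary certificate containers

// Hypothetical mapping from file extension to the parser used for it.
const EXTRACTORS: Record<string, Extractor> = {
  pdf: "pdf-parse",
  docx: "mammoth", doc: "mammoth",
  xlsx: "xlsx", xls: "xlsx",
  pptx: "pptx-xml",
  vsdx: "vsdx-xml",
  txt: "utf-8", conf: "utf-8", json: "utf-8", xml: "utf-8", pem: "utf-8",
  p12: "openssl", pfx: "openssl",
};

function pickExtractor(filename: string): Extractor | null {
  const ext = filename.split(".").pop()?.toLowerCase() ?? "";
  return EXTRACTORS[ext] ?? null; // null = unsupported format
}
```

Unsupported formats return null so the caller can reject the upload (or skip validation) rather than feeding binary noise to the model.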
2. Criteria Lookup

The platform looks up the validation specification for the requirement. Each spec contains:
  • Requirement ID — e.g., 3.4.1
  • Title — human-readable requirement name
  • Type — document or evidence
  • Tag — the document reference tag (e.g., DOCFW, EVDFW)
  • Criteria — an array of specific content items the document should cover
Criteria are filtered on load to remove fragments, notes, and cross-references that were parsed from the ROC template but do not represent actionable validation items (items shorter than 20 characters, notes, and partial fragments are excluded).
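The filtering step above can be sketched as a simple pipeline. The 20-character threshold is from the docs; the specific note and cross-reference patterns are assumptions about what the ROC template fragments look like:

```typescript
// Illustrative criteria filter: keep only actionable validation items.
function filterCriteria(raw: string[]): string[] {
  return raw
    .map((c) => c.trim())
    .filter((c) => c.length >= 20)          // drop short fragments
    .filter((c) => !/^note[:\s]/i.test(c))  // drop template notes (assumed prefix)
    .filter((c) => !/^see\s/i.test(c));     // drop cross-references (assumed prefix)
}
```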
3. AI Evaluation

The extracted text and criteria checklist are sent to the AI (OpenAI gpt-4o-mini, temperature 0.2) with a structured system prompt that instructs the model to:
  • Check every criterion in the checklist.
  • Determine whether the document content reasonably addresses each item.
  • Provide a brief excerpt (max 120 characters) from the document when a criterion is found.
  • Add a note for partial coverage or concerns.
  • Never fabricate excerpts — if content is not present, mark it as not found.
The AI responds in structured JSON for deterministic parsing.
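Because the model's JSON output is untrusted, the response is presumably parsed defensively before use. The shape below mirrors the result example later in this page; the validator itself is an illustrative sketch, not Kliper's actual code:

```typescript
interface ValidationItem {
  criterion: string;
  found: boolean;
  excerpt: string | null; // max 120 characters when found
  note: string | null;
}

// Hypothetical defensive parse of the model's structured JSON reply.
function parseItems(raw: string): ValidationItem[] {
  const data = JSON.parse(raw);
  if (!Array.isArray(data.items)) throw new Error("malformed AI response");
  return data.items.map((it: any) => ({
    criterion: String(it.criterion),
    found: Boolean(it.found),
    // Enforce the 120-char excerpt cap; never keep an excerpt for a miss.
    excerpt: it.found ? String(it.excerpt ?? "").slice(0, 120) : null,
    note: it.note ?? null,
  }));
}
```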
4. Results Returned

The validation result is structured and returned to the assessor:
{
  "requirementId": "3.4.1",
  "title": "PAN rendering requirement",
  "type": "document",
  "tag": "DOCFW",
  "checkedAt": "2026-02-28T14:30:00.000Z",
  "items": [
    {
      "criterion": "Document defines encryption algorithms used for PAN storage",
      "found": true,
      "excerpt": "AES-256 encryption is applied to all PAN data at rest...",
      "note": null
    },
    {
      "criterion": "Document specifies key management procedures",
      "found": false,
      "excerpt": null,
      "note": "No key management section found in document"
    }
  ],
  "summary": {
    "total": 8,
    "found": 6,
    "missing": 2,
    "status": "partial"
  },
  "model": "gpt-4o-mini",
  "tokensUsed": { "input": 4200, "output": 850 }
}

Validation Statuses

The summary status is derived from the found/total ratio:
  • Complete — all criteria found; the document fully covers the requirement.
  • Partial — 50% or more of criteria found, but not all; the document covers most items but has gaps.
  • Insufficient — fewer than 50% of criteria found; the document is missing substantial required content.
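The derivation above reduces to a small function; the thresholds come from the table, the function name is illustrative:

```typescript
type Status = "complete" | "partial" | "insufficient";

// Derive the summary status from the found/total ratio.
function deriveStatus(found: number, total: number): Status {
  if (total > 0 && found === total) return "complete";
  if (total > 0 && found / total >= 0.5) return "partial";
  return "insufficient";
}
```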

What the Assessor Sees

In the Attachments Panel, each file displays its validation status. Expanding the validation result shows:
  • A checklist of all criteria with checkmarks (found) or X marks (not found).
  • Excerpts from the document that demonstrate coverage.
  • Notes on partial coverage or missing items.
  • The AI model used and when the validation was performed.
Validation results are advisory. The AI may miss nuanced coverage or flag items that are addressed indirectly. Assessors should review AI findings and apply professional judgment before finalizing their assessment.

Cortex Autofill — ROC Findings Generation

What It Does

Cortex Autofill generates a draft findings description for a specific PCI DSS requirement. This is the narrative text that appears in the final ROC, describing what the assessor examined, what methods were used, and what was observed.

When to Use It

Autofill is most effective when the assessor has already:
  1. Uploaded relevant evidence files and tagged them to the requirement.
  2. Filled in at least some testing procedure responses.
  3. Selected a finding status (In Place, Not Applicable, Not Tested, Not in Place).
Cortex will work with incomplete data, but it will flag what is missing and use placeholders ([PENDING_RESPONSE]) rather than fabricating content.

How It Works

1. Context Assembly

When the assessor triggers autofill on a requirement, the backend assembles a comprehensive context package:
  • Reporting instructions — the ROC template’s instructions for this specific requirement.
  • PCI DSS guidance — the Purpose, Good Practice, Definitions, and Examples from the PCI DSS v4.0.1 guidance document (loaded from pci-guidance.json covering 200+ requirements).
  • Assessor responses — which testing procedures have been filled in and what they contain. Empty procedures are explicitly flagged.
  • Evidence files — names and AI-generated summaries of files uploaded to the requirement’s section. If files have document reference tags (doctag-DOCFW), the tag-to-file mapping is provided so the AI can reference actual file names.
  • Finding status — the selected assessment finding (In Place, Not in Place, etc.) and method flags (Compensating Control, Customized Approach).
  • Customized Approach Objective — if the Customized Approach method is selected, the requirement’s Customized Approach Objective from PCI DSS guidance is included, and Cortex is instructed to address the objective rather than the standard testing procedures.
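The assembly step above can be sketched as follows. The [PENDING_RESPONSE] placeholder and the warning wording match what this page describes; all field and function names are assumptions, and a real implementation would also fold in the reporting instructions and PCI DSS guidance loaded from pci-guidance.json:

```typescript
interface ProcedureResponse { id: string; text: string }
interface EvidenceFile { name: string; tag?: string; summary: string }

// Hypothetical context assembly: builds the prompt context and flags gaps.
function assembleContext(
  reportingInstructions: string,
  responses: ProcedureResponse[],
  files: EvidenceFile[],
  findingStatus: string,
): { prompt: string; warnings: string[] } {
  const warnings: string[] = [];

  const missing = responses.filter((r) => !r.text.trim()).map((r) => r.id);
  if (missing.length) warnings.push(`Assessor responses missing for: ${missing.join(", ")}`);
  if (files.length === 0) warnings.push("No evidence files uploaded.");

  // Empty procedures are passed through explicitly as placeholders,
  // so the model states gaps instead of fabricating content.
  const responseLines = responses.map(
    (r) => `${r.id}: ${r.text.trim() || "[PENDING_RESPONSE]"}`,
  );
  const fileLines = files.map(
    (f) => `${f.tag ?? "(untagged)"} -> ${f.name}: ${f.summary}`,
  );

  const prompt = [
    `Reporting instructions:\n${reportingInstructions}`,
    `Finding status: ${findingStatus}`,
    `Testing procedure responses:\n${responseLines.join("\n")}`,
    `Evidence files:\n${fileLines.join("\n")}`,
  ].join("\n\n");

  return { prompt, warnings };
}
```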
2. AI Generation

The context is sent to OpenAI (gpt-4o-mini, temperature 0.3, max 300 tokens) with a system prompt that enforces QSA writing conventions.
Required behaviors:
  • Reference evidence by tag name (e.g., “Per DOCFW, firewall rulesets restrict…”).
  • State what was examined, what method was used (document review, interview, observation, configuration review), and what was found.
  • Write 2–4 sentences maximum.
  • Use paragraph form, no bullet points.
  • Use placeholders for missing data rather than inventing content.
Prohibited behaviors:
  • Generic filler phrases (“thorough examination”, “comprehensive review”, “adequately”, “ensuring that”, “corroborated”, “in accordance with”).
  • Restating the requirement text.
  • Stating the finding status (the assessor selects that separately).
  • Inventing tag names that were not provided.
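One way such a prohibition could be enforced after generation is a simple phrase scan over the draft. The banned phrases are the ones listed above; the check itself is an assumption about how the convention might be policed, not a documented Kliper feature:

```typescript
// Filler phrases the system prompt prohibits (from the docs).
const BANNED_PHRASES = [
  "thorough examination",
  "comprehensive review",
  "adequately",
  "ensuring that",
  "corroborated",
  "in accordance with",
];

// Hypothetical post-check: returns any banned phrases present in a draft.
function findFiller(draft: string): string[] {
  const lower = draft.toLowerCase();
  return BANNED_PHRASES.filter((p) => lower.includes(p));
}
```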
3. Result with Warnings

Cortex returns the generated text along with any warnings about incomplete data:
{
  "content": "Per DOCFW, firewall rulesets restrict inbound traffic to required ports and protocols only. Configuration screenshots in EVDFW show deny-all default rules on external-facing interfaces. Network administrator interview confirmed change management procedures are followed for all modifications.",
  "warnings": [
    "Assessor responses missing for: 1.2.3.b, 1.2.3.c",
    "No evidence files uploaded for Requirement 1."
  ]
}
The assessor reviews the draft, edits as needed, and either accepts it into the findings field or discards it.

Autofill with Compensating Controls

When the assessor selects the Compensating Control method, Cortex adjusts its output to note that Appendix C applies and frames the findings around the compensating control rather than the standard testing procedure.

Autofill with Customized Approach

When the assessor selects the Customized Approach method, Cortex:
  1. Loads the Customized Approach Objective from PCI DSS guidance for that requirement.
  2. Instructs the AI to explain how the entity’s implementation meets the Customized Approach Objective, rather than addressing the standard defined approach testing procedures.
  3. Returns a warning if no Customized Approach Objective exists for the requirement (some requirements are not eligible).

Validation Step Analysis

Cortex also analyzes the reporting instruction text to determine which validation steps are relevant for a requirement. It uses keyword matching to identify required evidence types:
Keywords in the reporting instructions map to validation steps:
  • “document”, “review”, “examine”, “verify” → Documentation Reviewed
  • “sample”, “test”, “select”, “random” → Samples Taken
  • “interview”, “personnel”, “staff”, “employee” → Personnel Interviewed
  • “technology”, “system”, “component”, “application” → Critical Technologies
  • “configuration”, “setting”, “parameter” → Settings Reviewed
  • “method”, “procedure”, “process”, “approach” → Methods
  • “software”, “application”, “tool”, “solution” → Software
An Assessor step is always included regardless of keywords. These steps populate the structured prefix section of the requirement answer, ensuring that the ROC includes complete documentation of what was examined.
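The keyword matching above can be sketched directly from the table. The keyword lists and step names come from the docs (note that “application” legitimately appears under two steps); the data structure and function are illustrative:

```typescript
// Keyword lists per validation step, as described in the docs table.
const STEP_KEYWORDS: [string, string[]][] = [
  ["Documentation Reviewed", ["document", "review", "examine", "verify"]],
  ["Samples Taken", ["sample", "test", "select", "random"]],
  ["Personnel Interviewed", ["interview", "personnel", "staff", "employee"]],
  ["Critical Technologies", ["technology", "system", "component", "application"]],
  ["Settings Reviewed", ["configuration", "setting", "parameter"]],
  ["Methods", ["method", "procedure", "process", "approach"]],
  ["Software", ["software", "application", "tool", "solution"]],
];

// Hypothetical sketch: derive validation steps from reporting instructions.
function validationSteps(instructions: string): string[] {
  const lower = instructions.toLowerCase();
  const steps = STEP_KEYWORDS
    .filter(([, keywords]) => keywords.some((k) => lower.includes(k)))
    .map(([step]) => step);
  steps.push("Assessor"); // always included regardless of keywords
  return steps;
}
```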

Chat Interface

Beyond validation and autofill, Cortex provides a conversational interface in the right-side panel of the workbench. Assessors can ask free-form questions about the current requirement:
  • “What does PCI DSS 4.0.1 require for Requirement 3.4.1?”
  • “What interview questions should I ask about encryption key management?”
  • “What evidence should I look for to validate this requirement?”
Cortex responses are contextualized to the specific requirement the assessor is viewing and draw on the PCI DSS guidance data (purpose, good practice, definitions, and examples) for accurate, standard-aligned answers.
Cortex chat history is maintained per assessment session. Conversations are not persisted across browser sessions.