Every PCI DSS assessment depends on evidence — firewall configurations, policy documents, access control screenshots, scan reports, and more. Kliper provides a structured evidence management pipeline that handles file uploads, malware scanning, integrity hashing, AI-powered validation, and document tagging in a single workflow.

Uploading Evidence Files

Evidence files are uploaded through the Attachments Panel on the right side of the Assessment Workbench.
Step 1: Open the Attachments Panel

In the Assessment Workbench, expand the Attachments panel on the right side. The upload area appears at the top of the panel.
Step 2: Upload a File

You have two options:
  • Drag and drop a file directly onto the upload area
  • Click the upload area to open a file browser
A progress bar displays the upload percentage. The file name and upload status appear while the upload is in progress.
Step 3: Link to Current Section (Optional)

If you are viewing a specific requirement (e.g., 1.2.3), check the Link to Current Section checkbox before uploading. This automatically tags the file to the active sub-requirement, making it visible when that requirement is selected.
Step 4: Automatic Processing

After upload completes, the file passes through the security pipeline automatically:
  1. SHA-256 hash computed for integrity verification
  2. Malware scan — ClamAV antivirus and VirusTotal hash lookup run in parallel
  3. Metadata extraction — page count, word count, headers, and preview text extracted
  4. Cortex AI analysis — automatic content analysis, requirement matching, and doctag suggestions
No action is required from you during processing. Results appear in the file’s row as each step completes.
Screenshot of Attachments Panel Upload Area
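The first pipeline step, the integrity hash, can be sketched in a few lines of Python. The `ingest` function and record shape below are illustrative, not Kliper's actual API; the later steps are represented only by placeholder fields that the asynchronous stages would fill in.

```python
import hashlib

def ingest(data: bytes) -> dict:
    # Step 1: SHA-256 integrity hash, computed once at upload time.
    record = {"sha256": hashlib.sha256(data).hexdigest()}
    # Steps 2-4 (malware scan, metadata extraction, Cortex AI analysis)
    # run asynchronously; their results fill in as each step completes.
    record.update(scan_status="pending", metadata=None, cortex=None)
    return record
```

The stored `sha256` value is what later integrity verification compares against.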

Accepted File Types

| Category | Formats |
| --- | --- |
| Documents | PDF, DOCX, DOC, XLSX, XLS, PPTX, PPT, VSDX |
| Images | PNG, JPG, JPEG, GIF, BMP, TIFF, SVG, WebP |
| Text & Config | TXT, CSV, JSON, XML, YAML, LOG, MD, HTML, SQL, CONF, INI |
| Certificates | PEM, CRT, CER, KEY, PUB, CSR, P12, PFX |
| Archives | ZIP |
| Logs | EVTX (Windows Event Logs) |
Executables, scripts, and other potentially dangerous file types are blocked at upload. See Security & AI Trust for the full blocklist.
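An upload gate built from the table above might look like the following sketch. The `ALLOWED_EXTENSIONS` set and `is_accepted` helper are illustrative only; the authoritative blocklist is documented in Security & AI Trust.

```python
# Allowlist assembled from the accepted-file-types table (illustrative).
ALLOWED_EXTENSIONS = {
    "pdf", "docx", "doc", "xlsx", "xls", "pptx", "ppt", "vsdx",      # documents
    "png", "jpg", "jpeg", "gif", "bmp", "tiff", "svg", "webp",       # images
    "txt", "csv", "json", "xml", "yaml", "log", "md", "html",        # text & config
    "sql", "conf", "ini",
    "pem", "crt", "cer", "key", "pub", "csr", "p12", "pfx",          # certificates
    "zip", "evtx",                                                   # archives & logs
}

def is_accepted(filename: str) -> bool:
    """Accept a file only if its extension is on the allowlist."""
    ext = filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    return ext in ALLOWED_EXTENSIONS
```

An allowlist (rather than a blocklist) is the safer default: anything not explicitly accepted, including extensionless files, is rejected.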

The Attachments Table

Each uploaded file appears as a row in the attachments table with the following columns:
| Column | What It Shows |
| --- | --- |
| File Name | Name and file type icon (color-coded: red for PDF, yellow for ZIP, blue for images) |
| Size | Formatted file size (KB / MB / GB) |
| Scan Status | Malware scan result badge — see below |
| Status | Evidence validation status dropdown — see below |
| Cortex AI | AI analysis result with match count — see below |
| Actions | Download, Delete |
Screenshot of Attachments Table

Malware Scan Status

Every file displays a scan status badge immediately after processing:
| Badge | Meaning |
| --- | --- |
| Clean (green checkmark) | File passed all scan engines with no detections |
| Threat (red warning) | File was flagged by one or more scan engines — quarantined and rejected |
| Scanning (spinner) | Scan is currently in progress |
| Pending (clock) | Scan has not started yet |
Hover over the scan badge to see a tooltip with per-engine results:
  • ClamAV — local antivirus scan result
  • VirusTotal — hash lookup result across 70+ antivirus engines with detection count
  • Scanned at — timestamp of the last scan
Screenshot of Scan Status Tooltip
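The parallel scan described above can be sketched with Python's `concurrent.futures`. Both engine wrappers here are stubs standing in for the real local ClamAV call and VirusTotal hash lookup; only the fan-out/fan-in shape is the point.

```python
from concurrent.futures import ThreadPoolExecutor

def clamav_scan(data: bytes) -> bool:
    # Stub for a local ClamAV verdict; True means no detection.
    return b"EICAR" not in data

def virustotal_lookup(sha256: str) -> bool:
    # Stub for a VirusTotal hash lookup; True means no engine flagged the hash.
    return True

def scan(data: bytes, sha256: str) -> str:
    # Run both engines in parallel; a file is Clean only if every engine agrees.
    with ThreadPoolExecutor() as pool:
        clam = pool.submit(clamav_scan, data)
        vt = pool.submit(virustotal_lookup, sha256)
        return "clean" if clam.result() and vt.result() else "threat"
```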

Evidence Validation Status

Each file has a validation status dropdown that tracks the assessor’s review progress:
| Status | Category | Meaning |
| --- | --- | --- |
| Pending | | File uploaded, not yet reviewed |
| QSA Review | Open | Provided by client, awaiting QSA review |
| Not Provided | Open | Expected evidence not yet received |
| Improvements | Open | Evidence received but needs revisions |
| To Discuss | Open | Requires discussion with client |
| Observe | Open | Flagged for observation in formal assessment |
| N/A | Closed | Not applicable to this requirement |
| Recommendation | Closed | Accepted with recommendations noted |
| Accepted | Closed | Evidence fully accepted |
Select the appropriate status from the dropdown to update the file’s review state. Changes persist immediately.
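The Open/Closed split lets you filter for files that still need attention. A minimal sketch follows; the mapping and helper are hypothetical, and since the table above lists no category for Pending, it is treated here as not-yet-closed (an assumption).

```python
STATUS_CATEGORY = {
    "Pending": None,            # no category in the status table
    "QSA Review": "open",
    "Not Provided": "open",
    "Improvements": "open",
    "To Discuss": "open",
    "Observe": "open",
    "N/A": "closed",
    "Recommendation": "closed",
    "Accepted": "closed",
}

def open_items(files: list[dict]) -> list[dict]:
    """Return files whose review is not yet closed."""
    return [f for f in files if STATUS_CATEGORY.get(f["status"]) != "closed"]
```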

Document Tags (Doctags)

Doctags are standardized reference codes that categorize evidence files by their purpose in the PCI DSS assessment. They map files to specific documentation requirements in the ROC template (e.g., DOCFW for firewall documentation, PENTEST for penetration test reports).

How Doctags Are Assigned

Doctags can be assigned in two ways.

AI-Suggested Tags: After Cortex AI analyzes a file, it suggests relevant doctags with confidence scores. In the file’s expanded detail row:
  • Suggested tags appear as outline badges with a confidence percentage (e.g., DOCFW 92%)
  • Click the + button on a suggested tag to assign it
  • Assigned tags appear as solid blue badges with an x to remove
Manual Assignment: In the file’s detail panel, you can manually search for and assign doctags that Cortex did not suggest.
Screenshot of Doctag Assignment

Common Doctags

| Tag | Description | Related Requirements |
| --- | --- | --- |
| DOCFW | Firewall documentation | 1.2.2, 1.2.5, 1.2.6, 1.2.7, 1.5.1 |
| DOCHARD | Hardware documentation | 1.2.1, 1.4.5, 2.1.1 |
| DOCCRYPTO | Cryptography documentation | 3.6.1.1 |
| DOCPWD | Password policy documentation | 8.2.2, 8.2.3, 8.2.4, 8.3.10 |
| DOCPOI | Point-of-sale documentation | 9.5.1 |
| PENTEST | Penetration test reports | 11.4.2, 11.4.3, 11.4.4, 11.4.5 |
| CERTINV | Certificate inventory | 4.2.1.1 |
Over 100 doctags are available, covering every documentation category defined in the PCI DSS v4.0.1 ROC template.

Automatic Section 6.4 Sync

When a doctag is assigned to a file, the platform automatically updates the Section 6.4 Documentation Evidence table in your assessment. Each tagged file creates or updates a row with:
  • Reference — the assigned doctag(s) (e.g., DOCFW, DOCHARD)
  • Document Name — the file name
  • Purpose — AI-generated description of the file’s content
  • Revision Date — the upload date
This sync happens transparently. Manual rows you enter directly in Section 6.4 are preserved alongside auto-generated rows.
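The upsert behavior described above, where auto-generated rows are updated in place while manual rows are left alone, can be sketched as follows. The function name, row keys, and the `auto` marker are assumptions, not the platform's actual schema.

```python
def sync_section_64(table: list[dict], file_name: str, doctags: list[str],
                    purpose: str, revision_date: str) -> None:
    """Create or update the auto-generated Section 6.4 row for a tagged file."""
    for row in table:
        # Only rows this sync created (marked auto) are ever updated;
        # manually entered rows are left untouched.
        if row.get("auto") and row["document_name"] == file_name:
            row.update(reference=", ".join(doctags),
                       purpose=purpose, revision_date=revision_date)
            return
    table.append({"reference": ", ".join(doctags), "document_name": file_name,
                  "purpose": purpose, "revision_date": revision_date, "auto": True})
```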

Cortex AI Analysis

After upload, Cortex AI automatically analyzes each file and provides three kinds of results:

Requirement Matching

Cortex identifies which PCI DSS requirements the file is relevant to. In the expanded file detail row:
  • Each suggested requirement shows the requirement number, title, and a confidence score
  • Confidence is color-coded: green (80%+), amber (50–79%), gray (below 50%)
  • Click Link to associate the file with a requirement, or Unlink to remove the association
Screenshot of Cortex AI Requirement Matches
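The color-coding follows directly from the thresholds above; a minimal sketch:

```python
def confidence_color(score: float) -> str:
    """Map a requirement-match confidence (0-100) to its badge color."""
    if score >= 80:
        return "green"
    if score >= 50:
        return "amber"
    return "gray"
```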

Criteria Validation

If a validation specification exists for the linked requirement, Cortex checks the file against the specific content criteria that the ROC template requires:
| Indicator | Meaning |
| --- | --- |
| Complete (green) | All criteria found in the document |
| Partial (amber) | 50% or more criteria found, some gaps |
| Insufficient (red) | Less than 50% of criteria found |
Expand the validation result to see a per-criterion checklist:
  • Checkmark — criterion found, with a brief excerpt from the document
  • X mark — criterion not found, with a note explaining what is missing
Screenshot of Criteria Validation Results
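The three-level indicator maps onto the criteria coverage ratio; a sketch of that classification (treating a requirement with zero criteria as complete is an assumption):

```python
def validation_status(found: int, total: int) -> str:
    """Classify criteria coverage as complete, partial, or insufficient."""
    if total == 0 or found == total:
        return "complete"  # assumption: no criteria means nothing can be missing
    return "partial" if found / total >= 0.5 else "insufficient"
```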

AI Summary

Each analyzed file receives:
  • Relevance — why this file matters for compliance
  • Summary — AI-generated description of the file’s contents
  • Document Type — classification badge (e.g., PDF, spreadsheet, configuration)
  • Model — which AI model performed the analysis

File Integrity Verification

Every file is SHA-256 hashed at the moment of upload. At any time after upload, you can verify that the file has not been altered in storage.
Step 1: Trigger Verification

Use the Verify action on the file to initiate an integrity check.
Step 2: Review the Result

The platform re-downloads the file from storage, recomputes the SHA-256 hash, and compares it against the stored original:
| Result | Meaning |
| --- | --- |
| Verified | Current hash matches stored hash — file is intact |
| Tampered | Hashes do not match — file has been altered in storage |
The verification timestamp is recorded in the file’s metadata for audit purposes.
Integrity verification is non-destructive and read-only. It does not modify the file or its metadata beyond recording the verification result and timestamp.
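The comparison step can be sketched with Python's standard `hashlib`; the function name and return values mirror the table above but are illustrative:

```python
import hashlib

def verify_integrity(stored_hash: str, current_bytes: bytes) -> str:
    """Recompute SHA-256 over the bytes fetched from storage and compare."""
    current = hashlib.sha256(current_bytes).hexdigest()
    return "verified" if current == stored_hash else "tampered"
```

Because SHA-256 is deterministic, any single-byte change in storage produces a different digest and flips the result to `tampered`.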

File Metadata Preview

Kliper automatically extracts metadata from uploaded files, giving assessors immediate context without downloading:
| File Type | Extracted Preview |
| --- | --- |
| PDF | Page count, word count, text preview (first 500 characters) |
| Word (DOCX/DOC) | Word count, text preview |
| Excel (XLSX/XLS) | Sheet names, row/column counts, header names |
| PowerPoint (PPTX) | Slide count, text preview from first slide |
| Visio (VSDX) | Page count, text labels from diagram elements |
| CSV | Column headers, row count, first 3 rows as preview |
| Images | Format, file size |
| Text/Config | Line count, word count, text preview |
| Certificates (PEM) | Certificate type (certificate, private key, public key, CSR) |
This metadata appears in the file’s expanded detail row and helps assessors identify files without opening them.
Screenshot of File Metadata Preview
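As one concrete case, the CSV row of the table above (column headers, row count, first three rows) can be sketched with the standard `csv` module; the `csv_preview` helper is hypothetical:

```python
import csv
import io

def csv_preview(text: str) -> dict:
    """Extract column headers, data row count, and the first three data rows."""
    rows = list(csv.reader(io.StringIO(text)))
    header, data = rows[0], rows[1:]
    return {"headers": header, "row_count": len(data), "preview": data[:3]}
```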

View Modes and Filtering

The Attachments Panel supports multiple view modes:
| Mode | Behavior |
| --- | --- |
| Section View | Shows only files linked to the currently selected requirement (e.g., files tagged to 1.2.3) |
| All Files | Shows every file uploaded to the assessment, regardless of section |
| Requirement Folder Browser | Browse files organized by requirement number in a hierarchical folder structure |
Toggle between modes using the view buttons at the top of the Attachments Panel.

Bulk Operations

Select multiple files using the checkboxes to perform bulk actions:
  • Download Selected — downloads all selected files
  • Delete Selected — removes all selected files from the assessment
Selection resets automatically when you switch between requirements or view modes.

Downloading and Deleting Files

Download

Click the Download button in the actions column for any file. The file downloads with its original filename. For bulk downloads, select multiple files and click Download Selected.

Delete

Click the Delete button in the actions column. The file is removed from the assessment immediately. If the file was linked to a requirement section, the link is automatically cleared.
File deletion is immediate. Deleted files are soft-deleted (preserved in the database with a deletion timestamp) but are no longer visible in the Attachments Panel.