Kliper provides three integrated analysis tools that give assessors a real-time view of assessment health: Gap Assessment (what’s missing), Risk Scoring (what’s most dangerous), and AI Recommendations (what to do about it). All three are accessible from the assessment workbench tabs.

Screenshot of Analysis Tabs in Workbench

Gap Assessment

The Gap Assessment dashboard identifies which PCI DSS requirements have findings gaps — requirements that are not in place, not tested, or not yet evaluated.

Accessing Gap Assessment

1. Open Your Assessment: Navigate to the assessment from the Engagement Hub or the Assessment Workbench.

2. Select the Gap Analysis Tab: In the assessment view, click the Gap Analysis tab (bar chart icon). The dashboard loads with real-time data calculated from your current assessment answers.

Dashboard Overview

The top of the dashboard displays five summary cards:
| Card | What It Shows |
| --- | --- |
| Compliance Rate | Percentage of requirements that are In Place or Not Applicable, displayed with a progress bar |
| In Place | Count of requirements marked as fully compliant |
| Not In Place | Count of requirements marked as non-compliant (highlighted in red) |
| No Finding | Count of requirements with no finding status recorded (highlighted in amber) |
| Total | Total requirement count with sub-counts for Not Tested and Not Applicable |

Screenshot of Gap Assessment Summary Cards
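The Compliance Rate card's math can be sketched as follows. This is an illustrative reconstruction from the card description above, not the platform's actual code; the function name and status strings mirror the dashboard labels but are assumptions.

```python
def compliance_rate(findings: list[str]) -> float:
    """Percentage of requirements that are In Place or Not Applicable (sketch)."""
    if not findings:
        return 0.0
    # Both In Place and Not Applicable count toward compliance, per the card description.
    compliant = sum(1 for f in findings if f in ("In Place", "Not Applicable"))
    return round(100 * compliant / len(findings), 1)

# Example: 3 of 4 requirements are In Place or Not Applicable.
print(compliance_rate(["In Place", "Not Applicable", "Not In Place", "In Place"]))  # 75.0
```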

Gap Severity Levels

Each requirement is assigned a gap severity based on its finding status:
| Severity | Condition | Action Required |
| --- | --- | --- |
| Critical (red) | Not In Place, or no finding recorded with no justification | Immediate attention — requirement is non-compliant or completely unevaluated |
| High (orange) | Has justification but no finding status selected | Assessor has documented observations but not made a determination |
| Medium (yellow) | Marked as Not Tested | Control exists but was not evaluated during this assessment |
| Low (blue) | Minor documentation gaps | Minimal risk, typically administrative |
| Compliant (green) | In Place or Not Applicable | No gap — requirement is satisfied |

Filtering and Navigation

Filter the requirement list by severity using the filter buttons:
  • All — show all requirements
  • Critical — show only critical gaps
  • High — show only high-severity gaps
  • Medium — show medium-severity gaps
  • Compliant — show only compliant requirements
Toggle between Grouped by Section (requirements organized under their parent section) and Flat View (a single sortable list).

Requirement Detail

Expand any requirement row to see:
  • Full requirement text from PCI DSS v4.0.1
  • Justification preview — the first 200 characters of the assessor’s written justification (if any)
  • Method badges — Compensating Control (CC) or Customized Approach indicators
  • Missing justification alert — a warning when a non-compliant requirement has no supporting documentation
  • Go to Section button — click to navigate directly to that requirement in the workbench
Screenshot of Gap Assessment Detail Row

Refreshing Data

Click the Refresh button to recalculate the gap assessment from the latest assessment answers. The dashboard always computes in real time; no cached data is used.

Risk Scoring

The Risk Scoring dashboard assigns a quantitative risk score (0–100) to every requirement in the assessment, considering multiple factors beyond just the finding status.

Accessing Risk Scoring

1. Open Your Assessment: Navigate to the assessment from the Engagement Hub or the Assessment Workbench.

2. Select the Risk Analysis Tab: Click the Risk Analysis tab (shield icon). The Risk Dashboard loads with per-requirement risk calculations.

Overall Risk Score

The dashboard header displays the Overall Risk Score — a weighted average of all requirement risk scores, presented as a 0–100 score with a risk level badge.
| Risk Level | Score Range | Color |
| --- | --- | --- |
| Critical | 80–100 | Red |
| High | 60–79 | Orange |
| Medium | 40–59 | Yellow |
| Low | 20–39 | Blue |
| None | 0–19 | Green |

Screenshot of Overall Risk Score
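The score-to-level bands translate directly into threshold checks. A minimal sketch, assuming the bands above; the function name is illustrative:

```python
def risk_level(score: float) -> str:
    """Map a 0-100 risk score to its level badge per the band table (sketch)."""
    if score >= 80:
        return "Critical"
    if score >= 60:
        return "High"
    if score >= 40:
        return "Medium"
    if score >= 20:
        return "Low"
    return "None"

print(risk_level(73))  # High
```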

How Risk Scores Are Calculated

Each requirement’s risk score is computed from four weighted factors:
| Factor | Weight | What It Measures |
| --- | --- | --- |
| Finding Risk | 45% | The assessment finding status (Not In Place = 100, Not Tested = 50, In Place = 0) |
| Documentation Risk | 25% | Whether the assessor has written a justification (missing justification on a failing requirement = 100) |
| Completeness Risk | 15% | Whether testing procedure data has been filled in |
| Staleness Risk | 15% | How recently the requirement was last updated (>90 days = 60, >60 days = 40, >30 days = 20) |
Adjustments:
  • Requirements with a Compensating Control receive a 10-point reduction
  • Requirements marked Not Applicable receive a 0 across all factors
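Putting the weights and adjustments together, the per-requirement calculation can be sketched as below. This is a hedged reconstruction of the documented formula; the real platform may compute the individual factor values differently, and all names here are illustrative.

```python
# Weights from the factor table: 45% + 25% + 15% + 15% = 100%.
WEIGHTS = {"finding": 0.45, "documentation": 0.25, "completeness": 0.15, "staleness": 0.15}

def requirement_risk(factors: dict[str, float],
                     compensating_control: bool = False,
                     not_applicable: bool = False) -> float:
    """Combine per-factor scores (each 0-100) into a single 0-100 risk score (sketch)."""
    if not_applicable:
        return 0.0  # Not Applicable receives 0 across all factors.
    score = sum(WEIGHTS[name] * factors.get(name, 0.0) for name in WEIGHTS)
    if compensating_control:
        score -= 10  # documented 10-point reduction for a Compensating Control
    return max(0.0, min(100.0, score))

# Example: failing requirement with no justification, testing data complete,
# last updated ~45 days ago (staleness tier >30 days = 20).
print(requirement_risk({"finding": 100, "documentation": 100,
                        "completeness": 0, "staleness": 20}))  # 73.0
```

At 73.0 this example lands in the High band (60–79), which matches the intuition that a non-compliant, undocumented requirement is a priority even when its testing data is filled in.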

Summary Cards

Four metric cards appear below the overall score:
| Card | What It Shows |
| --- | --- |
| Critical Risk | Count of requirements with risk score 80+ |
| High Risk | Count of requirements with risk score 60–79 |
| No Risk | Count of requirements with risk score below 20 |
| Total Requirements | Total count of assessed requirements |

Finding Distribution

A visual breakdown of all requirements by their finding status:
| Finding | Color |
| --- | --- |
| In Place | Green |
| Not In Place | Red |
| Not Tested | Yellow |
| Not Applicable | Gray |
| No Finding | Orange |

Requirement Risk Detail

Expand any requirement row to see a detailed breakdown:
  • Risk Factor Bars — four horizontal progress bars showing each factor’s individual contribution:
    • Finding Risk (0–100)
    • Documentation Risk (0–100)
    • Completeness Risk (0–100)
    • Staleness Risk (0–100)
  • Identified Issues — a bulleted list of specific problems:
    • “Finding: Not In Place”
    • “No assessment finding recorded”
    • “Not yet tested”
    • “Missing justification/evidence”
    • “Testing procedures incomplete”
    • “Data is stale (60+ days)”
  • Go to Section button — navigate to the requirement in the workbench to address the issues
Screenshot of Risk Detail Breakdown

Filtering

Filter the requirements list by risk level using the filter buttons: All, Critical, High, Medium, Low, No Risk.

AI Remediation Recommendations

The Recommendations panel provides AI-generated suggestions for improving your assessment, identifying weak areas, and strengthening compliance documentation.

Accessing Recommendations

1. Open Your Assessment: Navigate to the assessment workbench.

2. Select the AI Assistant Tab: Click the AI Assistant tab (robot icon). The Recommendations panel opens.

Automatic Recommendations

The platform generates rule-based recommendations based on patterns detected in your assessment data:
| Type | Icon | Example |
| --- | --- | --- |
| Warning | Orange alert | “3 field(s) in this section are empty” |
| Suggestion | Blue lightbulb | “Once complete, request review from QA team” |
| Tip | Purple sparkle | “Ensure all evidence of security controls is documented with screenshots and configuration excerpts” |
| Improvement | Green target | Specific text improvements for brief or vague answers |
Each recommendation card displays:
  • Title — brief summary of the recommendation
  • Description — detailed explanation and suggested action
  • Reasoning — why this recommendation was generated (shown in italics)
  • Confidence — how confident the system is in the recommendation (e.g., “90% confidence”)
  • Action Button — one-click action to navigate to the relevant section or apply suggested text
Screenshot of Recommendations Panel

AI-Powered Suggestions

For more targeted guidance, use the prompt field at the top of the panel:
1. Enter a Prompt: Type a question or request in the text area. Examples:
  • “How can I improve my security controls documentation?”
  • “What evidence should I collect for Requirement 3.4.1?”
  • “Suggest interview questions for encryption key management”

2. Submit: Click Send. Cortex generates context-aware recommendations based on:
  • The current requirement you are viewing
  • Your existing assessment answers
  • The PCI DSS v4.0.1 framework guidance
  • Your assessor role

3. Review and Act: AI-generated recommendations appear in the list with the same card format. Click action buttons to navigate to relevant sections or apply suggested text directly.

Text Improvement Suggestions

For individual answer fields, the AI can analyze your written text and suggest improvements:
  • Passive voice detection — suggests active voice rewrites for clearer findings
  • Date specificity — suggests adding implementation dates in YYYY-MM-DD format
  • Evidence references — suggests adding references to uploaded evidence files
Each suggestion includes the original text, the improved version, reasoning for the change, and a confidence score.

Workflow: Using Analysis Tools Together

The three analysis tools are designed to be used in sequence during assessment review:
1. Identify Gaps: Start with the Gap Assessment dashboard. Filter to Critical and High severity gaps to see which requirements need immediate attention. Note requirements with no finding recorded or missing justifications.

2. Assess Risk: Switch to the Risk Scoring dashboard. Sort by risk score descending to prioritize the highest-risk requirements. Review the risk factor breakdown to understand whether the issue is a missing finding, missing documentation, incomplete testing, or stale data.

3. Get Recommendations: Open the AI Assistant tab. Review automatic recommendations for quick wins. Use the prompt field to ask for specific guidance on the highest-risk requirements identified in the previous step.

4. Address Findings: Use the Go to Section buttons to navigate directly to each requirement in the workbench. Update testing procedures, upload evidence, set finding statuses, and write justifications.

5. Re-Check: Return to the Gap and Risk dashboards and click Refresh. Verify that addressed requirements now show reduced risk scores and resolved gaps.