Prerequisites
Before generating the ROC, ensure the following are complete:

Contact Information (Section 1.1)
Complete all contact fields: assessed entity details (company name, DBA, address, website, primary contact), QSA company information, lead QSA credentials, QA reviewer details, and any associate QSAs, other assessors, or ISAs.
Assessment Dates (Section 1.2)
Fill in the report date, assessment start date, assessment end date, and onsite dates.
Business Description (Section 2.1)
Describe the nature of the business, how cardholder data is stored/processed/transmitted, and any other relevant details.
Scope & Segmentation (Section 3)
Document scope validation results, segmentation details, and validated product usage.
Testing Procedure Responses (Section 7)
Complete the assessor responses for each testing procedure across all applicable requirements. Each response populates a Response-X.Y.Z tag in the final document.

Findings & Statuses
Select a finding status (In Place, Not Applicable, Not Tested, Not in Place) for every requirement. Write or auto-generate findings descriptions for each requirement.
You can generate a ROC at any point during the assessment to preview progress. Incomplete fields will appear as empty in the document. The platform does not block export for incomplete assessments.
Generating the Report
- Navigate to your assessment in the Assessment Workbench or the Engagement Hub.
- Select the Export ROC action.
- The platform generates the DOCX file and triggers a download of a file named {client-name}-roc-{YYYY-MM-DD}.docx.
How the Export Works
The DOCX export engine processes the ROC template through a multi-step pipeline:

Step 1 — Template Loading
The engine loads a Word document template from the templates/ directory. The default template is report.docx, based on the PCI SSC’s official ROC template (Revision 3). Custom templates can be used by specifying a template name.
The template is a standard .docx file (ZIP archive containing XML). Tags are placed inline in the Word document as {{Tag-Name}} placeholders.
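Because a .docx file is a ZIP archive of XML parts, the tag scan can be sketched with the standard library alone. This is an illustrative sketch, not the engine's actual code; the document content and tag names here are made up:

```python
import io
import re
import zipfile

# Hypothetical minimal document.xml body containing two placeholder tags.
DOCUMENT_XML = (
    '<?xml version="1.0" encoding="UTF-8"?>'
    '<w:document><w:body><w:p><w:r><w:t>'
    'Company: {{Client-Name}} ({{Client-DBA}})'
    '</w:t></w:r></w:p></w:body></w:document>'
)

def find_tags(docx_bytes: bytes) -> list[str]:
    """Open the .docx as a ZIP archive and scan word/document.xml for {{...}} tags."""
    with zipfile.ZipFile(io.BytesIO(docx_bytes)) as archive:
        xml = archive.read("word/document.xml").decode("utf-8")
    return re.findall(r"\{\{([^{}]+)\}\}", xml)

# Build a throwaway .docx-shaped archive in memory for the demo.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as archive:
    archive.writestr("word/document.xml", DOCUMENT_XML)

print(find_tags(buf.getvalue()))  # → ['Client-Name', 'Client-DBA']
```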
Step 2 — XML Run Merging
Word frequently splits text across multiple XML <w:r> (run) elements due to spell-check markers, formatting changes, and language attributes. This means a tag like {{Client-Name}} may be split across 3 or more XML elements.
The engine merges adjacent runs within each paragraph so that {{ ... }} tags appear in a single <w:t> element. This ensures reliable tag detection and replacement.
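A minimal sketch of the run-merging idea, under the simplifying assumption that adjacent runs can be collapsed by deleting the XML between consecutive <w:t> elements (a real engine must also reconcile run properties such as <w:rPr>):

```python
import re

# A paragraph where Word has split "{{Client-Name}}" across three runs,
# e.g. around a spell-check or revision boundary.
SPLIT_PARAGRAPH = (
    '<w:p>'
    '<w:r><w:t>{{Cli</w:t></w:r>'
    '<w:r w:rsidR="00AB12CD"><w:t>ent-Na</w:t></w:r>'
    '<w:r><w:t>me}}</w:t></w:r>'
    '</w:p>'
)

def merge_runs(paragraph_xml: str) -> str:
    """Collapse adjacent run boundaries so the paragraph's text sits in one <w:t>."""
    boundary = re.compile(r"</w:t></w:r><w:r[^>]*><w:t[^>]*>")
    return boundary.sub("", paragraph_xml)

print(merge_runs(SPLIT_PARAGRAPH))
# → <w:p><w:r><w:t>{{Client-Name}}</w:t></w:r></w:p>
```

After merging, the full `{{Client-Name}}` tag is detectable with a single regex pass.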
Step 3 — Loop Processing (Table Row Cloning)
For repeating data (tables with multiple rows), the template uses loop open and close tags (for example, the {{#TPSPs}} loop). The engine:

- Identifies the <w:tr> (table row) elements that contain the loop open and close tags.
- Treats the enclosed row(s) as a template.
- Clones the template row once per item in the data array.
- Replaces {{.fieldName}} with each item’s field values.
- Renders boolean fields as checkbox symbols: true → ☒, false → ☐.
| Loop Name | Data Source | Description |
|---|---|---|
| AssociateQSAs | Section 1.1 | Associate QSA names and mentor assignments |
| OtherAssessors | Section 1.1 | Additional assessors with certificate numbers |
| ISAs | Section 1.1 | Internal Security Assessors |
| ValidatedProducts | Section 3.3 | PCI SSC validated product listing references |
| DataFlows | Section 4.2 | Account data flow descriptions |
| SADStorage | Section 4.3 | Sensitive authentication data storage locations |
| TPSPs | Section 4.4 | Third-party service providers |
| InScopeNetworks | Section 4.5 | In-scope network segments (account data) |
| NonADNetworks | Section 4.5 | In-scope network segments (non-account data) |
| Locations | Section 4.6 | Assessment locations and facilities |
| Components | Section 4.7 | In-scope system component types |
| ExtScans | Section 5.1 | External vulnerability scan results |
| IntScans | Section 5.3 | Internal vulnerability scan results |
| SampleSets | Section 6.3 | Sampling sets and methodology |
| DocEvidence | Section 6.4 | Documentation evidence table |
| InterviewEvidence | Section 6.5 | Interview evidence records |
| OtherEvidence | Section 6.6 | Other evidence (observation, configuration review) |
Step 4 — Diagram Image Insertion
For Sections 4.1 (Network Diagrams) and 4.2 (Account Data Flow Diagrams), the engine:- Queries uploaded image files tagged to these sections.
- Embeds the images directly into the Word document at the
{{diagrams-4.1}}and{{diagrams-4.2}}tag locations. - Creates the necessary Word XML relationships for inline image rendering.
Step 5 — Tag Replacement
All remaining {{Tag-Name}} placeholders are replaced with assessment data. The engine maintains a comprehensive tag map covering every section of the ROC:
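A sketch of the replacement pass, assuming a hypothetical slice of the tag map. Leaving unknown tags untouched is consistent with the troubleshooting behavior described later, where unreplaced tags appear verbatim in the output:

```python
import re

# A hypothetical slice of the tag map; the real map covers every ROC section.
TAG_MAP = {
    "Client-Name": "Example Corp",
    "QSA-Name": "Jane Auditor",
    "Report-Date": "2024-06-30",
}

def replace_tags(xml: str, tag_map: dict) -> str:
    """Replace each known {{Tag-Name}} with its value; unknown tags are
    left as-is so they surface visibly in the generated document."""
    def substitute(match: re.Match) -> str:
        name = match.group(1)
        return str(tag_map[name]) if name in tag_map else match.group(0)
    return re.sub(r"\{\{([^{}]+)\}\}", substitute, xml)

xml = "<w:t>{{Client-Name}}, assessed by {{QSA-Name}} on {{Report-Date}}. {{Unknown-Tag}}</w:t>"
print(replace_tags(xml, TAG_MAP))
# → <w:t>Example Corp, assessed by Jane Auditor on 2024-06-30. {{Unknown-Tag}}</w:t>
```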
Section 1 — Contact & Assessment Info
| Tag | Source |
|---|---|
| {{Client-Name}} | Assessed entity company name |
| {{Client-DBA}} | Doing Business As |
| {{Client-Address}} | Company address |
| {{Client-URL}} | Company website |
| {{Client-Contact-Name}} | Primary contact name |
| {{Client-Contact-Phone}} | Primary contact phone |
| {{Client-Contact-Email}} | Primary contact email |
| {{QSAC-Name}} | QSA company name |
| {{QSAC-Address}} | QSA company address |
| {{QSAC-URL}} | QSA company website |
| {{QSA-Name}} | Lead assessor name |
| {{QSA-Phone}} | Lead assessor phone |
| {{QSA-Email}} | Lead assessor email |
| {{QSA-Credentials}} | Lead assessor certificate number |
| {{QA-Name}} | QA reviewer name |
| {{QA-Phone}} | QA reviewer phone |
| {{QA-Email}} | QA reviewer email |
| {{QA-Credentials}} | QA reviewer credentials |
| {{Report-Date}} | Date of report |
| {{Date-of-Kick-off}} | Assessment start date |
| {{Assessment-End-Date}} | Assessment end date |
| {{Onsite-Dates}} | Onsite assessment dates |
Section 2 — Business Description
| Tag | Source |
|---|---|
| {{Biz-Desc}} | Nature of business |
| {{How}} | How cardholder data is stored/processed/transmitted |
| {{Why}} | How services impact security |
| {{Other}} | Other relevant details |
Section 3 — Scope & Segmentation
| Tag | Source |
|---|---|
| {{ScopeVal-*}} | Scope validation fields (results, assessor, methods, documentation) |
| {{Segmentation-*}} | Segmentation details (used, implementation, out-of-scope environments) |
| {{Validated-Products-*}} | PCI SSC validated product usage attestation |
Section 5 — Vulnerability Scans
| Tag | Source |
|---|---|
| {{Is-Initial-External}} | Whether this is the initial external scan |
| {{Ext-Scan-Doc}} | External scan documentation |
| {{Ext-Scan-Comments}} | External scan comments |
| {{ASV-Attestation}} | ASV attestation completion status |
| {{Is-Initial-Internal}} | Whether this is the initial internal scan |
| {{Int-Scan-Doc}} | Internal scan documentation |
| {{Int-Scan-Comments}} | Internal scan comments |
Section 6 — Evidence & Sampling
| Tag | Source |
|---|---|
| {{Ev-Repos-Desc}} | Evidence repository description |
| {{Ev-Controller}} | Evidence repository controller |
| {{Ev-Retention-Ack}} | Evidence retention acknowledgment |
| {{Ev-Assessor-Name}} | Evidence assessor name |
| {{Sampling-*}} | Sampling methodology fields (used, rationale, representative, standardized) |
Section 7 — Findings (auto-generated)
Per-requirement findings are populated dynamically:
- {{Findings-X.Y.Z-N}} — the justification text for requirement X.Y.Z, finding index N.
- {{Response-X.Y.Z.a-N}} — assessor response for testing procedure letter a, row index N.
- {{Response-X.Y.Z}} — all assessor responses for requirement X.Y.Z, concatenated.
Requirement Summary Counts
Summary counts per requirement group are populated as:
- {{rs-OK-N}} — count of “In Place” findings for group N
- {{rs-NA-N}} — count of “Not Applicable” findings
- {{rs-NT-N}} — count of “Not Tested” findings
- {{rs-KO-N}} — count of “Not in Place” findings
- {{rs-CC-N}} — count of compensating controls
- {{rs-CA-N}} — count of customized approach findings
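The counting logic for the status-based tags can be sketched as below. The statuses and group number are made up, and the compensating-control (rs-CC) and customized-approach (rs-CA) counts are omitted since they derive from separate fields rather than the finding status:

```python
from collections import Counter

# Hypothetical finding statuses for one requirement group.
findings = ["In Place", "In Place", "Not Applicable", "Not in Place", "Not Tested", "In Place"]

STATUS_TO_TAG = {
    "In Place": "rs-OK",
    "Not Applicable": "rs-NA",
    "Not Tested": "rs-NT",
    "Not in Place": "rs-KO",
}

def summary_counts(statuses: list[str], group: int) -> dict:
    """Compute the rs-*-N summary tag values for group N from finding statuses."""
    counts = Counter(statuses)
    return {
        f"{tag}-{group}": counts.get(status, 0)
        for status, tag in STATUS_TO_TAG.items()
    }

print(summary_counts(findings, group=1))
# → {'rs-OK-1': 3, 'rs-NA-1': 1, 'rs-NT-1': 1, 'rs-KO-1': 1}
```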
Section 1.8 — Status Lists
Auto-computed lists of requirements by status, with manual override from Section 1.8 notes:
- {{list-NA}} — requirements marked Not Applicable
- {{list-NT}} — requirements marked Not Tested
- {{list-KO-Legal}} — requirements Not in Place with legal exception
- {{list-KO-NotLegal}} — requirements Not in Place without legal exception
- {{list-CC}} — requirements using compensating controls
- {{list-CA}} — requirements using customized approach
Step 6 — Checkbox Resolution
The template uses checkbox tags for selection fields (radio buttons in the assessment UI): the selected option renders as ☒ and all other options as ☐.

Step 7 — Output Generation
The processed document is compressed using DEFLATE and returned as a buffer. The file is named using the pattern {client-slug}-roc-{date}.docx and streamed to the user’s browser for download.
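A sketch of the packaging step, assuming a simple slugging rule (lowercase, non-alphanumeric runs collapsed to hyphens) that the source does not specify:

```python
import datetime
import io
import re
import zipfile

def roc_filename(client_name: str, date: datetime.date) -> str:
    """Build the {client-slug}-roc-{date}.docx filename; the slugging rule
    here (lowercase, non-alphanumerics collapsed to hyphens) is an assumption."""
    slug = re.sub(r"[^a-z0-9]+", "-", client_name.lower()).strip("-")
    return f"{slug}-roc-{date.isoformat()}.docx"

def package_docx(entries: dict) -> bytes:
    """Write the processed XML parts back into a DEFLATE-compressed ZIP buffer."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as archive:
        for path, xml in entries.items():
            archive.writestr(path, xml)
    return buf.getvalue()

print(roc_filename("Example Corp, Inc.", datetime.date(2024, 6, 30)))
# → example-corp-inc-roc-2024-06-30.docx
```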
Custom Templates
Organizations can use custom Word templates by placing .docx files in the templates/ directory. Custom templates must use the same {{Tag-Name}} convention. If a requested template is not found, the engine falls back to the default report.docx.
Troubleshooting
| Issue | Cause | Resolution |
|---|---|---|
| Tags appear as {{Tag-Name}} in output | Tag name does not match the expected convention, or the field was not filled in | Verify the tag name matches the tag map; ensure the corresponding field is populated in the assessment |
| Table rows are empty | Loop data source returned no items | Populate the corresponding assessment section (e.g., fill in the TPSPs table for the {{#TPSPs}} loop) |
| Diagrams missing | No image files tagged to Section 4.1 or 4.2 | Upload network diagram or data flow diagram images and tag them to the correct section |
| Checkboxes show ☐ for all options | Selection field not answered | Select the appropriate radio option in the assessment form |
| Split or garbled text | Word XML run splitting not resolved | This is handled automatically by the run-merging engine; report the issue if it persists |