
Core: CMS page updates, tests, GEO data, prerender #2

Open
martinatkin1-ui wants to merge 1 commit into main from
pr/cr-core

Conversation

@martinatkin1-ui (Owner) commented Apr 17, 2026

Scope (PR1 of 2 — CodeRabbit-sized)

Server, flat-file CMS data, shared libraries, scripts, automated tests, and .github/CODERABBIT-PR-SECTIONS.md.

Merge this PR before `pr/cr-site` so runtime and templates stay aligned.

Includes

  • Page updates admin + API, `lib/page-updates`, link/order helpers
  • `data/geo.json`, `data/page-updates.json`, content tweaks
  • Prerender script updates
  • `npm test` suite (node:test + supertest)

After merge

Rebase or merge `pr/cr-site` onto `main` if needed.

Made with Cursor

Summary by CodeRabbit

  • New Features

    • Added geographic location-based support content with FAQs and resource links
    • Introduced page updates feature for publishing announcements to community and news sections
    • Implemented social media sharing integration for announcements
  • Tests

    • Added comprehensive test coverage for new functionality
  • Documentation

    • Added PR organization and workflow guidelines

…PR plan

- Server: page-updates admin routes, media order helper modules, test data dir
- data: geo.json, page-updates.json, content updates
- lib: page-updates, page-update-links, parse-existing-order, social-formats
- scripts: prerender includes page update routes
- tests: node:test suite with fixtures and HTTP smoke
- .github/CODERABBIT-PR-SECTIONS.md for split reviews

Made-with: Cursor

coderabbitai Bot commented Apr 17, 2026

📝 Walkthrough

Walkthrough

This PR introduces a page-updates feature allowing admins to create, edit, and delete timeline announcements routed to specific pages. It adds library modules for parsing, managing, and formatting updates; integrates page updates into server routes and prerendered pages; includes geo/FAQ data; and provides a comprehensive test suite with fixtures.

Changes

Cohort / File(s) Summary
Documentation & Configuration
.github/CODERABBIT-PR-SECTIONS.md, package.json
Added PR sectioning guidance and Node.js test runner scripts; introduced supertest dev dependency.
Data Files
data/content.json, data/geo.json, data/page-updates.json
Removed trailing newline from content.json; added new geo.json with Wolverhampton support FAQs and resource links; created empty page-updates.json fixture for timeline items.
Library Modules
lib/page-update-links.js, lib/page-updates.js, lib/parse-existing-order.js, lib/social-formats.js
Added four new utility modules: link parser (pipe-delimited format), page-updates manager (routing, category validation, admin helpers), media reordering parser, and social snippet formatter for multi-platform sharing.
Server & Prerender Integration
server.js, scripts/prerender-public-site.js
Enhanced server with admin CRUD routes for page updates (/admin/page-updates/*), SEO/structured data injection (FAQPage schema from geo), social snippet generation; integrated pageUpdates and geo into prerendered page locals.
Test Fixtures
tests/fixtures/minimal-data/cultural-outreach.json, tests/fixtures/minimal-data/news-more.json, tests/fixtures/minimal-data/page-updates.json
Created minimal fixture datasets supporting page-updates test suite with sample outreach/news pages and update items spanning multiple categories and scopes.
Test Suites
tests/http-smoke.test.js, tests/page-update-links.test.js, tests/page-updates-admin.test.js, tests/page-updates.test.js, tests/parse-existing-order.test.js
Added five new test files: smoke tests for core routes, unit tests for link/order parsing, admin category validation, and page-updates routing with fixture-based data filtering.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

Poem

🐰 Page updates, oh what a delight!
Admin forms and social shares burning bright,
Links parsed, media sorted with care,
FAQ schemas floating in the air,
A rabbit's den of features, so fair! 🐇✨

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage — ⚠️ Warning: docstring coverage is 31.25%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.
✅ Passed checks (2 passed)
  • Description Check — ✅ Passed: check skipped because CodeRabbit’s high-level summary is enabled.
  • Title Check — ✅ Passed: the title accurately summarizes the main changes — server/core functionality (CMS page updates, tests, GEO data, and prerender script updates), which are the key additions of this PR.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



@coderabbitai (Bot) left a comment


Actionable comments posted: 8

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
server.js (2)

214-219: ⚠️ Potential issue | 🔴 Critical

Hardcoded session secret + missing production cookie flags.

Two things here:

  1. secret: 'suit-wolverhampton-2026-secret-key' is checked into source. Anyone with read access to the repo (or to a leaked build) can forge admin session cookies, bypassing requireAdmin on every /admin/* route, including the /admin/page-updates/* and /admin/community/save flows added in this PR. Session secrets must come from an environment variable and fail fast if absent in production.
  2. The cookie config sets only maxAge. In production you want secure: true (HTTPS‑only), sameSite: 'lax' (CSRF hardening for the admin POSTs), and an explicit httpOnly: true for clarity. Without secure, the admin session cookie is sent over plain HTTP if the site is ever reached that way, and can be intercepted.
🛡️ Suggested change
-app.use(session({
-  secret: 'suit-wolverhampton-2026-secret-key',
-  resave: false,
-  saveUninitialized: false,
-  cookie: { maxAge: 24 * 60 * 60 * 1000 } // 24h
-}));
+const SESSION_SECRET = process.env.SESSION_SECRET;
+if (!SESSION_SECRET && process.env.NODE_ENV === 'production') {
+  throw new Error('SESSION_SECRET must be set in production');
+}
+app.use(session({
+  secret: SESSION_SECRET || 'dev-only-insecure-secret',
+  resave: false,
+  saveUninitialized: false,
+  cookie: {
+    maxAge: 24 * 60 * 60 * 1000, // 24h
+    httpOnly: true,
+    sameSite: 'lax',
+    secure: process.env.NODE_ENV === 'production'
+  }
+}));

If running behind a proxy in production, also add app.set('trust proxy', 1); so secure cookies work correctly.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server.js` around lines 214 - 219, Replace the hardcoded session secret and
weak cookie options in the app.use(session({...})) block: read the session
secret from an environment variable (e.g., process.env.SESSION_SECRET),
throw/exit immediately if it's missing in production, and update the cookie
config to include httpOnly: true, secure: true, and sameSite: 'lax' (keeping
maxAge). Also, if the app runs behind a proxy, ensure app.set('trust proxy', 1)
is enabled so secure cookies work correctly.

168-173: ⚠️ Potential issue | 🟠 Major

Default admin credentials in source code.

SUITadmin2026! is hashed into data/admin.json on first boot if that file is missing. The plaintext is public in this repo, and the console on startup (line 1624) advertises the same credentials. Any fresh deployment that doesn’t immediately change the password is trivially compromisable.

Consider sourcing the bootstrap password from process.env.ADMIN_INITIAL_PASSWORD (fail to start if unset and admin.json is missing), and/or setting a mustChangePassword: true flag that forces a password change on first login. At minimum, drop the plaintext from the startup banner.
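The suggested bootstrap flow could be sketched like this. This is illustrative only: `bootstrapAdmin`, and the injected `writeJSON`/`hash` parameters, are hypothetical stand-ins for the helpers the review names (`writeJSON`, `bcrypt.hashSync`); the actual server.js may be shaped differently.

```javascript
// Sketch: seed the admin record from an env var instead of a hardcoded
// password, and force a password change on first login. The dependencies
// are injected so the real file-writing/hashing helpers can be swapped in.
function bootstrapAdmin({ adminFileExists, writeJSON, hash }) {
  if (adminFileExists) return; // data/admin.json already present, nothing to do
  const initial = process.env.ADMIN_INITIAL_PASSWORD;
  if (!initial) {
    // Fail fast rather than seeding a publicly known default credential.
    throw new Error('ADMIN_INITIAL_PASSWORD must be set when admin.json is missing');
  }
  writeJSON('admin.json', {
    username: 'admin',
    password: hash(initial),
    mustChangePassword: true // force a reset on first login
  });
}
```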

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server.js` around lines 168 - 173, The code creates a default admin with a
hardcoded plaintext password; change this so that when ADMIN_FILE (ADMIN_FILE /
DATA_DIR) is missing you read process.env.ADMIN_INITIAL_PASSWORD, validate it is
set (exit process with an error if not), hash that value with bcrypt.hashSync
and write via writeJSON, and include a mustChangePassword: true flag in the
persisted admin object to force a reset on first login; additionally remove any
startup banner or console logging that prints the plaintext password (update the
startup banner/logging code to never emit the initial password). Ensure
references to ADMIN_FILE, writeJSON, bcrypt.hashSync, and the startup banner
logging are updated accordingly.
scripts/prerender-public-site.js (1)

168-180: ⚠️ Potential issue | 🟡 Minor

Inconsistent story meta with server.js /stories/:id.

The prerender builds desc from story.quote || story.bio || fullStory and uses `${story.name}'s Story` for the title, while server.js (lines 348‑354) uses story.excerpt || story.fullStory and `${story.name}'s story — addiction recovery, Wolverhampton`. That means prerendered HTML and dynamically rendered HTML will produce different <title>/<meta description> for the same URL, which is bad for SEO canonicalization and for any deploy that flips between static and dynamic serving.

Please unify the two code paths (prerender should match the server’s computed pageTitle/pageDescription/pageKeywords).
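A shared helper along these lines would keep the two code paths in lockstep. The function name is illustrative; the field names (`name`, `excerpt`, `fullStory`) and the title template follow the review's description of server.js, which remains the source of truth.

```javascript
// Sketch of a shared meta helper usable by both server.js and the
// prerender script, so <title>/<meta description> match for the same URL.
function computeStoryMeta(story) {
  const desc = String(story.excerpt || story.fullStory || '').trim();
  return {
    pageTitle: `${story.name}'s story — addiction recovery, Wolverhampton`,
    pageDescription: desc.slice(0, 160) // keep descriptions snippet-sized
  };
}
```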

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/prerender-public-site.js` around lines 168 - 180, The prerender loop
builds inconsistent meta: replace the current desc/title logic inside the
stories loop (where desc is computed and go(...) is called) so it matches the
server.js meta computation by using story.excerpt || story.fullStory for
pageDescription, using the same title template `${story.name}'s story —
addiction recovery, Wolverhampton` for pageTitle, and include pageKeywords
computed the same way the server does; ideally factor the server's meta logic
into a shared helper (e.g., computePageMeta or reuse the server function) and
call that from the prerender script before calling go(...) to ensure pageTitle,
pageDescription, and pageKeywords are identical.
🧹 Nitpick comments (7)
tests/parse-existing-order.test.js (1)

37-39: Consider adding a test for partial-numeric tokens.

Inputs like '1abc,0,2' or '1.5,0,2' are currently accepted (see related comment on lib/parse-existing-order.js). A test asserting these fall back to identity would pin the intended contract.
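To pin the intended contract, a strict variant of the parser could look like the following. This is a sketch, not the actual `lib/parse-existing-order.js`: tokens must be integer-only and form a permutation of `0..length-1`, and anything else falls back to the identity order.

```javascript
// Sketch: strict order parser. Rejects partial-numeric tokens like "1abc"
// or "1.5" that parseInt would silently accept.
function parseExistingOrderStrict(raw, length) {
  const identity = Array.from({ length }, (_, i) => i);
  const parts = String(raw || '').split(',').map((t) => t.trim());
  if (parts.length !== length) return identity;
  if (!parts.every((t) => /^\d+$/.test(t))) return identity; // integer-only tokens
  const nums = parts.map(Number);
  const unique = new Set(nums);
  const inRange = nums.every((n) => n >= 0 && n < length);
  if (unique.size !== length || !inRange) return identity; // must be a permutation
  return nums;
}
```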

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/parse-existing-order.test.js` around lines 37 - 39, Add tests to assert
that partial-numeric tokens fall back to identity: in
tests/parse-existing-order.test.js add cases calling
parsePageUpdateExistingOrder with inputs like '1abc,0,2' and '1.5,0,2' (using
the same length param 3) and assert the result equals [0,1,2]; reference the
existing test that calls parsePageUpdateExistingOrder('0,1,NaN', 3) to mirror
style and placement so the intended contract is pinned.
lib/social-formats.js (1)

6-10: truncate edge case: max <= 1 produces just the ellipsis.

truncate('hello', 1) returns '…' (via slice(0, 0) + '…'). For max === 0 it returns '…' too, which is longer than the requested max. Minor; probably never hit in practice given 280/400-char budgets, but worth a guard:

 function truncate(s, max) {
   const t = String(s || '').trim();
+  if (max <= 0) return '';
   if (t.length <= max) return t;
   return t.slice(0, Math.max(0, max - 1)).trimEnd() + '…';
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@lib/social-formats.js` around lines 6 - 10, The truncate function can return
an ellipsis longer than the requested max when max <= 0; update truncate to
guard against small max values: compute t as now (String(s||'').trim()), then if
max <= 0 return '' to respect the zero-length request, if max === 1 return
t.slice(0,1) (the single char) rather than appending '…', and otherwise keep the
existing logic that slices to Math.max(0, max - 1) and appends '…' for longer
truncation; refer to the truncate function and the local variable t when making
this change.
tests/http-smoke.test.js (1)

7-18: Optional: hoist the require('../server') to module scope.

require is cached, so repeating it in each test is harmless but noisy. A single top-level const app = require('../server'); (or a shared before hook) keeps setup uniform and makes it obvious that all tests share the same app instance.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/http-smoke.test.js` around lines 7 - 18, Hoist the repeated
require('../server') out of each test into module scope so both tests share the
same app instance: add a single top-level const app = require('../server'); (or
create in a before hook) and remove the duplicate per-test requires in the two
tests 'GET /__suit-health responds OK' and 'GET home returns 200 HTML' so the
tests use the shared app variable.
server.js (3)

737-744: Blocking bcrypt on the request thread.

bcrypt.compareSync is CPU‑heavy and blocks the event loop for every login attempt. With a dedicated admin surface the practical risk is small, but it also makes the endpoint easy to DoS via repeated POSTs. Prefer bcrypt.compare (async) and await it.

-app.post('/admin/login', (req, res) => {
+app.post('/admin/login', async (req, res) => {
   const admin = readJSON('admin.json');
-  if (req.body.username === admin.username && bcrypt.compareSync(req.body.password, admin.password)) {
+  const ok = admin && req.body.username === admin.username
+    && await bcrypt.compare(String(req.body.password || ''), admin.password);
+  if (ok) {
     req.session.admin = true;
     return res.redirect('/admin');
   }
   res.render('admin/login', { error: 'Invalid username or password' });
 });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server.js` around lines 737 - 744, The handler for app.post('/admin/login')
currently uses the blocking bcrypt.compareSync; change the route callback to an
async function, replace bcrypt.compareSync(req.body.password, admin.password)
with the asynchronous bcrypt.compare(awaitable) call (await
bcrypt.compare(req.body.password, admin.password)), and handle it inside a
try/catch so any errors return the login view with an error or call next(err);
on successful await compare set req.session.admin = true and redirect as before.
Ensure you still read admin via readJSON('admin.json') and preserve the same
success/failure responses.

1244-1328: Nice composition; minor leak on save failure.

Overall the handler reads cleanly: validate categories → validate title → normalize date → parse links → merge existing media (with optional file removal) → append new uploads → persist. One edge case worth tightening: if loadPageUpdatesRaw() / writeJSON throws after Multer already wrote the newly uploaded files to public/uploads/page-updates/, those files are orphaned on disk but never recorded in page-updates.json. Consider a catch that unlinks req.files on failure before delegating to next(e), mirroring the cleanup you already do for removed existing media.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server.js` around lines 1244 - 1328, In the /admin/page-updates/save request
handler (the app.post(...) block that calls upload.array and later uses
loadPageUpdatesRaw, pageUpdateMediaFromUploadedFile, and writeJSON), add cleanup
of newly uploaded files on any failure before calling next(e): in the catch(e)
block iterate req.files (if any) and unlink each file.path (or compute the
public upload path similar to existing removal logic) with safe try/catch, then
rethrow/next(e); this ensures files written by Multer are removed if persisting
(writeJSON) or processing fails.

286-306: Duplicated FAQ JSON‑LD construction with scripts/prerender-public-site.js.

Same logic as faqPageSchemaFromGeo in the prerender script (lines 49–61). See the refactor comment on that file — extract once to e.g. lib/faq-schema.js and reuse from both sides so dynamic and prerendered /get-help emit identical FAQPage schema.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server.js` around lines 286 - 306, The FAQ JSON‑LD construction in server.js
duplicates the logic in faqPageSchemaFromGeo (used by the prerender script);
extract that logic into a shared helper (e.g., export a function like
buildFaqSchemaFromGeo in a new lib/faq-schema.js) that accepts the geo object
and returns either the FAQPage schema or null, then replace the inline
faqStructuredData creation in server.js to call
buildFaqSchemaFromGeo(res.locals.geo || readJSON('geo.json')) and update the
prerender script to import and call the same buildFaqSchemaFromGeo instead of
faqPageSchemaFromGeo so both runtime and prerendered outputs are identical.
scripts/prerender-public-site.js (1)

49-61: Extract faqPageSchemaFromGeo to a shared module to avoid drift.

This helper is duplicated verbatim in server.js (lines 286–299). Since both the live server and the prerender pipeline need to emit the same FAQ JSON‑LD, a future tweak to the schema shape in one place will silently diverge the other (e.g. static HTML falls out of sync with live responses). Moving it to a small shared lib (e.g. lib/faq-schema.js) and importing from both sides keeps them in lockstep.

♻️ Proposed extraction

Create lib/faq-schema.js:

function faqPageSchemaFromGeo(geo) {
  const faqs = geo && Array.isArray(geo.faqs) ? geo.faqs : [];
  if (!faqs.length) return null;
  return {
    '@context': 'https://schema.org',
    '@type': 'FAQPage',
    mainEntity: faqs.map((f) => ({
      '@type': 'Question',
      name: f.question,
      acceptedAnswer: { '@type': 'Answer', text: f.answer }
    }))
  };
}
module.exports = { faqPageSchemaFromGeo };

Then in this file:

-function faqPageSchemaFromGeo(geo) {
-  const faqs = geo && Array.isArray(geo.faqs) ? geo.faqs : [];
-  if (!faqs.length) return null;
-  return {
-    '@context': 'https://schema.org',
-    '@type': 'FAQPage',
-    mainEntity: faqs.map((f) => ({
-      '@type': 'Question',
-      name: f.question,
-      acceptedAnswer: { '@type': 'Answer', text: f.answer }
-    }))
-  };
-}
+const { faqPageSchemaFromGeo } = require('../lib/faq-schema');

And mirror the same swap in server.js around lines 286–299.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/prerender-public-site.js` around lines 49 - 61, Extract the duplicate
faqPageSchemaFromGeo function into a shared module (e.g. create
lib/faq-schema.js) that exports faqPageSchemaFromGeo, then remove the inline
implementations in both prerender and server code and replace them with
require/import of the shared function; ensure the exported API matches existing
usage (named export faqPageSchemaFromGeo) so calls that rely on the function in
both scripts continue to work unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@data/geo.json`:
- Line 10: The phone number formatting for "Recovery Near You" is inconsistent
between the JSON records; update the "answer" value in the geo.json entry (the
"answer" JSON field containing the Recovery Near You text) so the phone number
uses the spaced format "01902 444 044" to match the other dataset; ensure only
the numeric grouping is changed and other text remains identical.

In `@lib/page-updates.js`:
- Around line 50-67: collectItemPageKeys currently calls getCategoryTargetMap()
per item causing buildUpdateCategoryDefinitions() (and its readJSON calls) to
run N times; fix by creating the category target map once in getPageUpdates (or
memoize getCategoryTargetMap) and pass it into collectItemPageKeys and
itemMatchesPageKey as an optional parameter (e.g., collectItemPageKeys(item,
categoryMap) and itemMatchesPageKey(item, key, categoryMap)); update both
functions to use the provided map when present and fall back to calling
getCategoryTargetMap() only when the arg is undefined so existing callers still
work.
- Around line 20-28: The readJSON function currently swallows JSON parse errors;
update its catch block to capture the error (e.g., catch (err)) and log a
descriptive message including the filename and err.message (or err) via
console.error before returning null so corrupted files like page-updates.json
are visible; keep the rest of readJSON behavior intact.

In `@lib/parse-existing-order.js`:
- Around line 9-15: The current parsing uses parseInt which accepts malformed
tokens like "1abc" or "1.5"; update the token parsing in parse-existing-order.js
(the parts computation) to perform strict numeric validation: trim each token,
reject any token that does not match an integer-only pattern (e.g. /^\d+$/) or
otherwise produce NaN, then convert to Number for the parts array so the
existing ok checks (uniqueness, Number.isInteger, range) correctly fail and
cause the code to fall back to the identity permutation; keep the rest of the
logic (parts, ok) unchanged.

In `@lib/social-formats.js`:
- Around line 45-49: The Twitter/X entry currently uses
truncate(`${title}${teaser ? ` — ${teaser}` : ''} ${url}`, 280) which measures
JS string length and double-counts the URL; instead, budget the 280 chars by
reserving 23 characters for the t.co-wrapped URL plus separators and truncate
only the text portion (title + optional teaser) using truncate(..., 280 - 23 -
N) where N accounts for the space and em dash separators, then append the raw
url after the truncated text; update the twitter object (key "twitter") to call
truncate on the text portion (not including url) and build the final text by
concatenating truncatedText + separator + url, and consider updating the hint to
mention t.co counts as 23 chars and that this is an approximation due to
Twitter's weighted codepoint counting.

In `@package.json`:
- Around line 18-19: The package.json test scripts rely on Node's native glob
expansion which only works unflagged in Node 21+, so update package.json to pin
the minimum Node version by adding or updating the "engines": { "node": ">=21" }
field; keep "test": "node --test \"./tests/*.test.js\"" (or change to a
shell-expanded glob if you prefer cross-version support) and make "test:ci"
CI-friendly by adding a reporter switch such as "node --test --test-reporter=tap
\"./tests/*.test.js\"" so the CI script differs from the local test script.

In `@server.js`:
- Around line 1262-1266: The current logic calls new
Date(publishedRaw).toISOString() directly so an unparseable publishedRaw throws
before the NaN check; change to first construct a Date object and validate it
before calling toISOString: create a temp Date (e.g., const d = publishedRaw ?
new Date(publishedRaw) : new Date()), if Number.isNaN(d.getTime()) set d = new
Date(), then set publishedAt = d.toISOString(); update references to
publishedRaw and publishedAt accordingly so the fallback to "now" actually
occurs instead of throwing.

In `@tests/http-smoke.test.js`:
- Line 20: Typo in the test title string for the test named "GET
/news-more/announcements resolves to200 (static may301 to trailing slash)";
update the literal test description in the test(...) call to read "GET
/news-more/announcements resolves to 200 (static may 301 to trailing slash)" so
spacing is correct (locate the test call in tests/http-smoke.test.js and replace
the title string).


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 5487ae8a-338d-4d08-8d9e-b72deebec061

📥 Commits

Reviewing files that changed from the base of the PR and between 7cc2f6e and 373cad0.

⛔ Files ignored due to path filters (1)
  • package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (19)
  • .github/CODERABBIT-PR-SECTIONS.md
  • data/content.json
  • data/geo.json
  • data/page-updates.json
  • lib/page-update-links.js
  • lib/page-updates.js
  • lib/parse-existing-order.js
  • lib/social-formats.js
  • package.json
  • scripts/prerender-public-site.js
  • server.js
  • tests/fixtures/minimal-data/cultural-outreach.json
  • tests/fixtures/minimal-data/news-more.json
  • tests/fixtures/minimal-data/page-updates.json
  • tests/http-smoke.test.js
  • tests/page-update-links.test.js
  • tests/page-updates-admin.test.js
  • tests/page-updates.test.js
  • tests/parse-existing-order.test.js

Comment thread data/geo.json
"faqs": [
{
"question": "Who can I call for drug or alcohol help in Wolverhampton?",
"answer": "SUIT’s direct line is 01902 328983 (Service User Involvement Team, Paycare House, George Street, WV2 4DX). Recovery Near You, Wolverhampton’s community drug and alcohol treatment service, can be reached on 01902 444044. National helpline FRANK is 0300 123 6600."


⚠️ Potential issue | 🟡 Minor

Verify Recovery Near You phone number formatting consistency.

data/content.json line 118 lists Recovery Near You as "01902 444 044" (space-separated triple), while data/geo.json line 10 formats it as 01902 444044. Minor, but these appear side-by-side in structured data/FAQ markup and should match.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@data/geo.json` at line 10, The phone number formatting for "Recovery Near
You" is inconsistent between the JSON records; update the "answer" value in the
geo.json entry (the "answer" JSON field containing the Recovery Near You text)
so the phone number uses the spaced format "01902 444 044" to match the other
dataset; ensure only the numeric grouping is changed and other text remains
identical.

Comment thread lib/page-updates.js
Comment on lines +20 to +28
function readJSON(filename) {
const p = path.join(DATA_DIR, filename);
if (!fs.existsSync(p)) return null;
try {
return JSON.parse(fs.readFileSync(p, 'utf8'));
} catch {
return null;
}
}

⚠️ Potential issue | 🟡 Minor

Silent JSON parse failures make bad data invisible.

Unlike the readJSON in server.js (which logs to console.error), this one swallows parse errors entirely. If data/page-updates.json gets corrupted (hand‑edit, half‑written save), the feature will silently serve an empty feed with no operator signal. Consider logging at least the filename and error message.

-  try {
-    return JSON.parse(fs.readFileSync(p, 'utf8'));
-  } catch {
-    return null;
-  }
+  try {
+    return JSON.parse(fs.readFileSync(p, 'utf8'));
+  } catch (e) {
+    console.error('[page-updates] Bad JSON:', filename, e.message);
+    return null;
+  }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
function readJSON(filename) {
const p = path.join(DATA_DIR, filename);
if (!fs.existsSync(p)) return null;
try {
return JSON.parse(fs.readFileSync(p, 'utf8'));
} catch {
return null;
}
}
function readJSON(filename) {
const p = path.join(DATA_DIR, filename);
if (!fs.existsSync(p)) return null;
try {
return JSON.parse(fs.readFileSync(p, 'utf8'));
} catch (e) {
console.error('[page-updates] Bad JSON:', filename, e.message);
return null;
}
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@lib/page-updates.js` around lines 20 - 28, The readJSON function currently
swallows JSON parse errors; update its catch block to capture the error (e.g.,
catch (err)) and log a descriptive message including the filename and
err.message (or err) via console.error before returning null so corrupted files
like page-updates.json are visible; keep the rest of readJSON behavior intact.

Comment thread lib/page-updates.js
Comment on lines +50 to +67
function collectItemPageKeys(item) {
const keys = new Set();
const add = (scope, slug) => keys.add(pageKeyStr(scope, slug));

if (item && Array.isArray(item.categories) && item.categories.length) {
const map = getCategoryTargetMap();
item.categories.forEach((cid) => {
const tgts = map.get(String(cid));
if (tgts) tgts.forEach((t) => add(t.scope, t.slug));
});
return keys;
}

if (item && item.scope) {
add(item.scope, item.slug);
}
return keys;
}

⚠️ Potential issue | 🟠 Major

Perf: getCategoryTargetMap() is rebuilt for every item during filtering.

collectItemPageKeys calls getCategoryTargetMap() inside the item.categories branch, and getPageUpdates invokes itemMatchesPageKey inside j.items.filter(...). Each call triggers buildUpdateCategoryDefinitions(), which does two readJSON disk reads (cultural-outreach.json + news-more.json). With N items in page-updates.json, every public render (/community, /community/outreach/:slug, /news-more, /news-more/:slug, etc.) incurs ~2·N synchronous file reads on the request thread.

Build the map once per getPageUpdates call and pass it through, or memoize with a small cache.

♻️ Suggested refactor — hoist the map out of the filter
-function collectItemPageKeys(item) {
+function collectItemPageKeys(item, categoryTargetMap) {
   const keys = new Set();
   const add = (scope, slug) => keys.add(pageKeyStr(scope, slug));

   if (item && Array.isArray(item.categories) && item.categories.length) {
-    const map = getCategoryTargetMap();
+    const map = categoryTargetMap || getCategoryTargetMap();
     item.categories.forEach((cid) => {
       const tgts = map.get(String(cid));
       if (tgts) tgts.forEach((t) => add(t.scope, t.slug));
     });
     return keys;
   }

   if (item && item.scope) {
     add(item.scope, item.slug);
   }
   return keys;
 }

-function itemMatchesPageKey(item, pageKey) {
-  return collectItemPageKeys(item).has(pageKey);
+function itemMatchesPageKey(item, pageKey, categoryTargetMap) {
+  return collectItemPageKeys(item, categoryTargetMap).has(pageKey);
 }
 function getPageUpdates(scope, slug) {
   const j = loadRaw();
   const wantSlug = slug == null || slug === '' ? null : String(slug);

   if (scope === 'news-more' && wantSlug === ANNOUNCEMENTS_SLUG) {
     return [...j.items].sort(
       (a, b) => new Date(b.publishedAt || 0) - new Date(a.publishedAt || 0)
     );
   }

   const pageKey = pageKeyStr(scope, wantSlug);
+  const categoryTargetMap = getCategoryTargetMap();
   return j.items
-    .filter((i) => i && itemMatchesPageKey(i, pageKey))
+    .filter((i) => i && itemMatchesPageKey(i, pageKey, categoryTargetMap))
     .sort((a, b) => new Date(b.publishedAt || 0) - new Date(a.publishedAt || 0));
 }

Consumers that call collectItemPageKeys(item) / itemMatchesPageKey(item, key) without the extra arg continue to work.

Also applies to: 77-91

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@lib/page-updates.js` around lines 50 - 67, collectItemPageKeys currently
calls getCategoryTargetMap() per item causing buildUpdateCategoryDefinitions()
(and its readJSON calls) to run N times; fix by creating the category target map
once in getPageUpdates (or memoize getCategoryTargetMap) and pass it into
collectItemPageKeys and itemMatchesPageKey as an optional parameter (e.g.,
collectItemPageKeys(item, categoryMap) and itemMatchesPageKey(item, key,
categoryMap)); update both functions to use the provided map when present and
fall back to calling getCategoryTargetMap() only when the arg is undefined so
existing callers still work.

Comment on lines +9 to +15
const parts = String(str)
.split(',')
.map((s) => parseInt(String(s).trim(), 10));
if (parts.length !== len) return Array.from({ length: len }, (_, i) => i);
const ok =
new Set(parts).size === len &&
parts.every((n) => Number.isInteger(n) && n >= 0 && n < len);

⚠️ Potential issue | 🟡 Minor

parseInt silently accepts malformed tokens like "1abc" or "1.5".

parseInt('1abc', 10) returns 1 and parseInt('1.5', 10) returns 1. Both pass Number.isInteger and the range/uniqueness checks, so a corrupted form submission could be accepted as a valid permutation with wrong indices rather than falling back to identity. Prefer strict numeric parsing.

♻️ Proposed fix
   const parts = String(str)
     .split(',')
-    .map((s) => parseInt(String(s).trim(), 10));
+    .map((s) => {
+      const t = String(s).trim();
+      return /^-?\d+$/.test(t) ? Number(t) : NaN;
+    });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@lib/parse-existing-order.js` around lines 9 - 15, The current parsing uses
parseInt which accepts malformed tokens like "1abc" or "1.5"; update the token
parsing in parse-existing-order.js (the parts computation) to perform strict
numeric validation: trim each token, reject any token that does not match an
integer-only pattern (e.g. /^\d+$/) or otherwise produce NaN, then convert to
Number for the parts array so the existing ok checks (uniqueness,
Number.isInteger, range) correctly fail and cause the code to fall back to the
identity permutation; keep the rest of the logic (parts, ok) unchanged.

Comment thread lib/social-formats.js
Comment on lines +45 to +49
twitter: {
label: 'X (Twitter)',
text: truncate(`${title}${teaser ? ` — ${teaser}` : ''} ${url}`, 280),
hint: '280-character limit. Start a thread if you need more context.'
},

⚠️ Potential issue | 🟡 Minor

X/Twitter 280-char budget counts t.co-wrapped URLs as 23 chars, not raw length.

truncate(..., 280) uses JS string length, but Twitter wraps all links through t.co (fixed 23 chars regardless of actual URL length). For a long siteOrigin + hash, url.length can exceed 23, causing the snippet to be over-truncated even though the actual posted tweet would fit. Also, Twitter counts by weighted code points (emoji = 2), not .length.

Consider budgeting as 280 - 23 - separators for the text portion and appending the raw URL last, or documenting this as an approximation in the hint.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@lib/social-formats.js` around lines 45 - 49, The Twitter/X entry currently
uses truncate(`${title}${teaser ? ` — ${teaser}` : ''} ${url}`, 280) which
measures JS string length and double-counts the URL; instead, budget the 280
chars by reserving 23 characters for the t.co-wrapped URL plus separators and
truncate only the text portion (title + optional teaser) using truncate(..., 280
- 23 - N) where N accounts for the space and em dash separators, then append the
raw url after the truncated text; update the twitter object (key "twitter") to
call truncate on the text portion (not including url) and build the final text
by concatenating truncatedText + separator + url, and consider updating the hint
to mention t.co counts as 23 chars and that this is an approximation due to
Twitter's weighted codepoint counting.
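A minimal sketch of the suggested budgeting, assuming a fixed 23-character t.co weight (`truncate` here is a stand-in, not the library's helper, and weighted code points are ignored as an approximation):

```javascript
const TCO_WEIGHT = 23; // every link counts as 23 chars once wrapped by t.co
const LIMIT = 280;

// Stand-in truncation helper: clip and append an ellipsis.
function truncate(s, max) {
  return s.length <= max ? s : s.slice(0, Math.max(0, max - 1)) + '…';
}

function tweetText(title, teaser, url) {
  const body = `${title}${teaser ? ` — ${teaser}` : ''}`;
  const budget = LIMIT - TCO_WEIGHT - 1; // reserve link weight + one space
  return `${truncate(body, budget)} ${url}`;
}

// The raw string may exceed 280 chars, but the weighted length
// (text 256 + space 1 + link 23) stays within the limit.
console.log(tweetText('A'.repeat(400), '', 'https://example.org/#long-hash'));
```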

Comment thread package.json
Comment on lines +18 to +19
"test": "node --test \"./tests/*.test.js\"",
"test:ci": "node --test \"./tests/*.test.js\""

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🌐 Web query:

Which Node.js version added glob pattern support to the node --test CLI?

💡 Result:

Node.js version 21.0.0 added glob pattern support to the node --test CLI.


🏁 Script executed:

cat package.json | head -25

Repository: martinatkin1-ui/suit-wolverhampton-website

Length of output: 1173


🌐 Web query:

Node.js 21 node --test glob patterns experimental flag vs unflagged

💡 Result:

In Node.js 21, support for glob patterns in the node --test command was added as a stable feature, not behind an experimental flag. Users can run node --test "/*.test.js" (with quotes to prevent shell expansion), and Node.js internally handles the glob matching using its built-in glob implementation based on minimatch. This was introduced via PR #47653, merged in June 2023, and highlighted in the Node.js 21 release announcement. Prior to Node.js 21 (e.g., v20), glob patterns passed directly to node --test did not work unless the shell expanded them first, which was unreliable (especially with npm scripts). There is no --experimental-test-glob flag; searches for it return no matches. Official docs for v21.x, v22.x, v24.x, and v25.x all document glob support without any experimental prefix. Other experimental test runner flags exist (e.g., --experimental-test-coverage, --experimental-test-isolation), but glob patterns for file discovery have been unflagged since v21.


Verify glob expansion for node --test and adjust Node.js version requirement.

Native glob support in node --test landed in Node 21 (unflagged). On earlier versions, "./tests/*.test.js" is passed literally and node --test will error. The test and test:ci scripts are currently identical—consider making test:ci add --test-reporter=tap (or similar CI-friendly reporter) to differentiate.

Pin a minimum Node via "engines": { "node": ">=21" } so CI and contributors don't silently hit this.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@package.json` around lines 18 - 19, The package.json test scripts rely on
Node's native glob expansion which only works unflagged in Node 21+, so update
package.json to pin the minimum Node version by adding or updating the
"engines": { "node": ">=21" } field; keep "test": "node --test
\"./tests/*.test.js\"" (or change to a shell-expanded glob if you prefer
cross-version support) and make "test:ci" CI-friendly by adding a reporter
switch such as "node --test --test-reporter=tap \"./tests/*.test.js\"" so the CI
script differs from the local test script.
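A minimal package.json fragment matching the suggestion (field values are illustrative; adjust the pin to the Node version actually used in CI):

```json
{
  "engines": { "node": ">=21" },
  "scripts": {
    "test": "node --test \"./tests/*.test.js\"",
    "test:ci": "node --test --test-reporter=tap \"./tests/*.test.js\""
  }
}
```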

Comment thread server.js
Comment on lines +1262 to +1266
const publishedRaw = String(req.body.publishedAt || '').trim();
let publishedAt = publishedRaw
? new Date(publishedRaw).toISOString()
: new Date().toISOString();
if (Number.isNaN(new Date(publishedAt).getTime())) publishedAt = new Date().toISOString();

⚠️ Potential issue | 🟡 Minor

Unreachable fallback for invalid publishedAt — throws 500 instead.

The intent of line 1266 appears to be “if the normalized publishedAt is not a valid date, fall back to now.” But it can never fire: when publishedRaw is an unparseable string, new Date(publishedRaw).toISOString() throws RangeError: Invalid time value on line 1264, the outer try/catch forwards via next(e), and the admin gets a 500 instead of a saved record with a publishedAt of "now".

🐛 Suggested fix
-      const publishedRaw = String(req.body.publishedAt || '').trim();
-      let publishedAt = publishedRaw
-        ? new Date(publishedRaw).toISOString()
-        : new Date().toISOString();
-      if (Number.isNaN(new Date(publishedAt).getTime())) publishedAt = new Date().toISOString();
+      const publishedRaw = String(req.body.publishedAt || '').trim();
+      const parsed = publishedRaw ? new Date(publishedRaw) : new Date();
+      const publishedAt = Number.isNaN(parsed.getTime())
+        ? new Date().toISOString()
+        : parsed.toISOString();
📝 Committable suggestion


Suggested change
const publishedRaw = String(req.body.publishedAt || '').trim();
let publishedAt = publishedRaw
? new Date(publishedRaw).toISOString()
: new Date().toISOString();
if (Number.isNaN(new Date(publishedAt).getTime())) publishedAt = new Date().toISOString();
const publishedRaw = String(req.body.publishedAt || '').trim();
const parsed = publishedRaw ? new Date(publishedRaw) : new Date();
const publishedAt = Number.isNaN(parsed.getTime())
? new Date().toISOString()
: parsed.toISOString();
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server.js` around lines 1262 - 1266, The current logic calls new
Date(publishedRaw).toISOString() directly so an unparseable publishedRaw throws
before the NaN check; change to first construct a Date object and validate it
before calling toISOString: create a temp Date (e.g., const d = publishedRaw ?
new Date(publishedRaw) : new Date()), if Number.isNaN(d.getTime()) set d = new
Date(), then set publishedAt = d.toISOString(); update references to
publishedRaw and publishedAt accordingly so the fallback to "now" actually
occurs instead of throwing.

Comment thread tests/http-smoke.test.js
assert.ok(res.headers['content-type']?.includes('html'));
});

test('GET /news-more/announcements resolves to200 (static may301 to trailing slash)', async () => {

⚠️ Potential issue | 🟡 Minor

Minor: typo in test description.

"resolves to200 (static may301 to trailing slash)" is missing spaces — should read "resolves to 200 (static may 301 to trailing slash)".

Proposed fix
-test('GET /news-more/announcements resolves to200 (static may301 to trailing slash)', async () => {
+test('GET /news-more/announcements resolves to 200 (static may 301 to trailing slash)', async () => {
📝 Committable suggestion


Suggested change
test('GET /news-more/announcements resolves to200 (static may301 to trailing slash)', async () => {
test('GET /news-more/announcements resolves to 200 (static may 301 to trailing slash)', async () => {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/http-smoke.test.js` at line 20, Typo in the test title string for the
test named "GET /news-more/announcements resolves to200 (static may301 to
trailing slash)"; update the literal test description in the test(...) call to
read "GET /news-more/announcements resolves to 200 (static may 301 to trailing
slash)" so spacing is correct (locate the test call in tests/http-smoke.test.js
and replace the title string).
