Nerviq is not just a checklist generator. It is a verification system with an evidence chain, runtime experiments, freshness rules, and a false-positive feedback loop that keeps the catalog grounded in current reality.
Every good recommendation should answer the same questions: where did this idea come from, was it tested, is it current, and what happens if users keep rejecting it?
1. Official source
A check begins with vendor documentation, schema references, changelogs, or first-party platform material. No recommendation should exist without a source trail.
2. Research notes
Nerviq turns raw vendor material into structured research notes: what changed, what is still ambiguous, what contradicts community expectations, and what should become a check.
3. Runtime experiments
Documentation claims are probed against real CLI runs, local fixtures, and controlled repo setups so the catalog does not drift into a docs-only fantasy layer.
4. Check implementation
The verified behavior becomes a concrete check function with explainable metadata such as impact, sourceUrl, confidence, and remediation text.
5. Ongoing monitoring
Checks are monitored for staleness, false positives, and recommendation quality over time. Nerviq treats decay as a product problem, not a footnote.
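A verified check with explainable metadata could be sketched roughly as follows. This is an illustrative shape only, not Nerviq's actual internals: the field names beyond impact, sourceUrl, confidence, and remediation, the CheckResult type, and the example URL are all assumptions.

```typescript
// Hypothetical shape for a verified check. Only impact, sourceUrl,
// confidence, and remediation come from the text; the rest is assumed.
interface CheckResult {
  passed: boolean;
  remediation?: string; // actionable fix text shown to the team
}

interface Check {
  key: string;
  impact: "low" | "medium" | "high";
  sourceUrl: string;    // evidence trail back to first-party material
  confidence: number;   // 0..1, marked down as verification goes stale
  lastVerified: string; // ISO date of the last runtime experiment
  run: (projectRoot: string) => CheckResult;
}

// Made-up example check: flags a missing permission-deny rule.
const permissionDeny: Check = {
  key: "permissionDeny",
  impact: "high",
  sourceUrl: "https://example.com/vendor-docs/permissions", // placeholder
  confidence: 0.9,
  lastVerified: "2024-05-01",
  run: (projectRoot) => ({
    passed: false,
    remediation: `Add an explicit deny rule under ${projectRoot}/config.`,
  }),
};
```

The point of the shape is that every finding carries its own evidence trail: a ranking layer can surface sourceUrl and lastVerified alongside the remediation text instead of asking teams to trust the check blindly.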
These are the top-line proof signals the website uses to explain why Nerviq recommendations should be treated as evidence-backed guidance.
Static truth is not enough for fast-moving agent platforms. Nerviq treats freshness as part of the product, not a maintenance afterthought.
Changelogs, release notes, and watched official docs are monitored so the team knows when a platform has shipped a breaking behavior, deprecated a field, or introduced a new surface worth auditing.
Checks without recent verification are marked down in confidence rather than silently continuing to rank as if they were current.
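The mark-down rule above might look something like this minimal sketch. The thresholds (90 and 180 days) and multipliers are invented for illustration; the text only says confidence is reduced when verification is not recent.

```typescript
// Sketch: reduce a check's effective confidence as its last
// verification ages. Thresholds and multipliers are assumptions.
function effectiveConfidence(
  baseConfidence: number,
  lastVerified: Date,
  now: Date,
): number {
  const days = (now.getTime() - lastVerified.getTime()) / 86_400_000;
  if (days <= 90) return baseConfidence;         // recently verified
  if (days <= 180) return baseConfidence * 0.75; // aging: mark down
  return baseConfidence * 0.5;                   // stale: rank cautiously
}
```

Under these assumed thresholds, a check last verified 200 days ago with base confidence 0.9 would rank at 0.45 instead of silently keeping its original score.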
The methodology is only useful if the recommendations get better under real usage.
Teams can record whether a finding helped, hurt, or did nothing. That creates a per-check signal about recommendation quality in real projects.
Checks with strong positive outcomes can rise. Checks with repeated “not helpful” outcomes can be suppressed, tightened, or sent back into the research and experiment cycle.
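One way this rise-or-suppress loop could be tallied per check is sketched below. The field names, the streak counter, and the suppression threshold of three consecutive "not helpful" outcomes are all assumptions, not documented Nerviq behavior.

```typescript
// Hypothetical per-check feedback state; all names are illustrative.
interface FeedbackState {
  score: number;           // ranking score nudged by outcomes
  notHelpfulStreak: number;
  suppressed: boolean;     // suppressed checks go back to research
}

// Apply one recorded outcome. Repeated negative outcomes suppress the
// check; any non-negative outcome resets the streak.
function applyFeedback(
  state: FeedbackState,
  effect: "positive" | "neutral" | "negative",
  scoreDelta: number,
): FeedbackState {
  const notHelpfulStreak =
    effect === "negative" ? state.notHelpfulStreak + 1 : 0;
  return {
    score: state.score + scoreDelta,
    notHelpfulStreak,
    suppressed: notHelpfulStreak >= 3, // invented threshold
  };
}
```

The design choice worth noting is that suppression is reversible: a suppressed check is not deleted, it is routed back into the research and experiment cycle described earlier.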
    nerviq feedback --key permissionDeny --status accepted --effect positive --score-delta +12

The goal is not just more checks. The goal is a recommendation layer that can explain itself, degrade honestly, and improve under pressure.
Every strong finding should be traceable back to a source, an implementation, and a reason it ranked where it did.
Freshness and re-verification reduce the chance that a once-true recommendation keeps misleading teams after the platform moves on.
False-positive feedback gives Nerviq a way to learn from production usage instead of freezing the catalog in one release snapshot.