How we measure cognitive performance — and what we don't claim.
A short walk through the construct mapping, scoring, aggregation, and privacy posture behind the WelloWork platform. We publish methodology as we generate pilot data.
How are exercises mapped to constructs?
Each task on the platform maps to one primary construct and at most one secondary construct. The primary mapping drives scoring; the secondary mapping is captured for methodology audit but does not contribute to the headline metric. We use established paradigms — N-back, span tasks, symbol-substitution, Posner cueing, Raven-style reasoning, task-switching — adapted for short on-platform sessions.
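The one-primary, at-most-one-secondary rule can be sketched as a small registry. This is a minimal illustration, not the platform's actual schema; the task and construct identifiers are hypothetical:

```python
# Hypothetical task-to-construct registry. Identifiers are illustrative,
# not the platform's real ones.
TASK_CONSTRUCTS = {
    # task_id: (primary construct, optional secondary construct)
    "n_back_2": ("working_memory", "sustained_attention"),
    "digit_span": ("working_memory", None),
    "symbol_substitution": ("processing_speed", None),
    "posner_cueing": ("selective_attention", None),
    "raven_style": ("fluid_reasoning", None),
    "task_switching": ("cognitive_flexibility", "processing_speed"),
}

def primary_construct(task_id: str) -> str:
    """Only the primary mapping feeds the headline metric; the
    secondary mapping is retained for methodology audit only."""
    primary, _secondary = TASK_CONSTRUCTS[task_id]
    return primary
```

The point of the structure is the asymmetry: scoring code reads only the first element, while the second is kept for audit.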
How are scores computed?
Per-session performance is normalised against the employee's own running baseline (z-scored within the trailing 90 days). This deliberately avoids comparing one employee against another at the individual level: population-relative scoring is sensitive to between-person noise that carries no useful signal in a workplace context.
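A minimal sketch of within-person normalisation, assuming sessions arrive as (date, raw score) pairs; the function name and the two-session minimum are illustrative choices, not the platform's documented behaviour:

```python
from datetime import date, timedelta
from statistics import mean, stdev

def baseline_z(sessions, today, window_days=90):
    """Z-score the most recent session against the same employee's
    earlier sessions inside a trailing window.

    sessions: list of (date, raw_score) pairs in chronological order,
    with the last entry being the session to score."""
    cutoff = today - timedelta(days=window_days)
    window = [score for d, score in sessions if d >= cutoff]
    latest, baseline = window[-1], window[:-1]
    if len(baseline) < 2:
        return None  # not enough own-history to normalise against
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return 0.0  # flat baseline: report no deviation
    return (latest - mu) / sigma
```

Note that nothing outside one employee's history enters the computation, which is the property the privacy section below relies on.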
How are aggregations done?
- A minimum team size is enforced before any aggregate is shown to a manager; below the threshold, nothing is displayed.
- Trends are smoothed weekly to avoid single-day spikes driving manager attention.
- Annotations against work events (sprint reviews, releases, on-call rotations) are added by configurable rules.
- Variability metrics are reported separately from level metrics — they answer different questions.
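The first two rules above can be sketched together. The threshold value and function shape here are assumptions for illustration; the real minimum team size is a platform setting:

```python
from statistics import mean

MIN_TEAM_SIZE = 5  # assumed threshold; the real minimum is configurable

def team_weekly_aggregate(member_week_scores):
    """member_week_scores: {employee_id: [daily z-scores for one week]}.
    Returns the team's weekly mean, or None when the team is too small
    for any aggregate to be shown to a manager."""
    if len(member_week_scores) < MIN_TEAM_SIZE:
        return None  # suppress small-team aggregates entirely
    # Smooth within each member first (weekly mean), then average across
    # members, so a single-day spike from one person cannot drive the
    # team trend a manager sees.
    member_means = [mean(days) for days in member_week_scores.values()]
    return mean(member_means)
```

Suppression returns nothing at all rather than a flagged value, so a downstream dashboard cannot accidentally render a small-team number.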
What do we deliberately not claim?
- We do not claim transfer of cognitive training to specific business outcomes (revenue, productivity).
- We do not claim clinical or diagnostic value for biomarker reports.
- We do not claim that ranking individual employees is reliable from short adaptive tasks; we claim only that within-person trends are.
- We do not invent metrics. Every claim ties back to a published construct or to a methodology note we will publish under /research.
How do we publish updates?
Methodology notes will be posted under /research/science-insight as pilot cohorts produce enough data to write something defensible. We will not publish individual customer data, and we will not publish aggregates that don't meet our minimum-team threshold.
Privacy in the methodology
Methodology and privacy are linked. The platform's choice to normalise within an employee, not against a population, is also what makes it harder to "de-anonymise" an aggregate. See privacy by design for the architectural detail.