Trust layer
Protect data quality while you can still do something about it.
Trust Centre applies unpeel's Real, Unique, Engaged framework to every respondent while a study is live, so the team can review problems and act on them before they turn into a reporting argument.
Why this matters now
Respondent fraud is not getting easier to catch. AI-generated responses, device spoofing, click farms, and velocity attacks have made fraudulent participation harder to detect and more prevalent. At the same time, panel providers are expected to handle quality — but a supplier assurance is not the same as the team having visibility into what is actually happening inside their study.
Even when teams review data mid-field, quality issues often turn into an end-of-project problem. By then the fieldwork budget is spent, the timeline has passed, and the stakeholder presentation is already in the calendar. Disputing a sample after the fact is exactly the kind of operational fight insight leaders are trying to avoid.
The Real, Unique, Engaged framework
Trust Centre evaluates every respondent against three dimensions of quality, continuously, while the study is in field.
- Real — is this a genuine person rather than a bot, an emulator, or a fraudulent account? Checks cover device environment, network integrity, identity signals, and bot detection.
- Unique — does each response come from a distinct person who has not already participated in this study? Checks cover duplicate IP, device fingerprint, email, phone number, and matching name-and-date-of-birth signals.
- Engaged — is this respondent paying attention rather than rushing, satisficing, or gaming the survey? Checks cover question-level speeding, patterned grid responses, carousel attention scoring, and open-ended response quality.
Across all three dimensions, 38+ quality metrics feed one Real, Unique, Engaged review workflow, including question-level checks where a whole-survey average would miss the problem.
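To make the Unique dimension concrete, here is a minimal, hypothetical sketch of how duplicate-identity signals might be tracked within a single study. It is not unpeel's implementation: the `Respondent` fields and flag names are illustrative stand-ins for the duplicate IP, device fingerprint, and email checks described above.

```python
from dataclasses import dataclass


@dataclass
class Respondent:
    # Illustrative identity signals; real checks also cover phone
    # number and name-plus-date-of-birth matching.
    ip: str
    device_fingerprint: str
    email: str


class UniqueCheck:
    """Hypothetical sketch: flag respondents whose identity signals
    collide with a respondent already seen in this study."""

    def __init__(self):
        self.seen = {"ip": set(), "device_fingerprint": set(), "email": set()}

    def flags(self, r: Respondent) -> list:
        hits = []
        for signal, values in self.seen.items():
            value = getattr(r, signal)
            if value in values:
                hits.append(f"duplicate_{signal}")
            values.add(value)
        return hits
```

A first respondent produces no flags; a second respondent reusing the same IP would be flagged with `duplicate_ip` for review rather than silently dropped.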

Live review, not just cleanup
Across the market, quality usually shows up as analysis-stage cleanup, post-response scoring, or managed screening behind the scenes. All can help. The gap is that the research team often sees the decision late or sees the outcome without the reasoning.
Experienced teams do sometimes catch issues mid-field manually — the problem is that this rigour is slow, inconsistent, and hard to sustain when fieldwork is moving.
At a glance: the standard approach is broad, rule-based review; Trust Centre's is live, question-level review.
Speeding
- Standard approach: usually judged from total survey duration, which can over-flag people who simply saw a shorter path.
- Trust Centre approach: measured at question level and rolled up across the path the respondent actually saw, so teams can judge whether someone rushed most of their survey instead of just finishing quickly.
Patterning
- Standard approach: repeated answer patterns are often treated as a generic straightlining or attention flag.
- Trust Centre approach: patterning is checked with timing and other question-level signals, so the team is not treating every straightline or patterned response as a quality problem.
Explicit attention checks
- Standard approach: attention checks are often limited to a few standalone trap questions or handled loosely in cleanup later.
- Trust Centre approach: supported structured questions can carry deterministic attention-style review checks, and one failed explicit check is enough to flag the respondent for review.
Open-end quality
- Standard approach: usually checked later through manual review or broad cleanup rules.
- Trust Centre approach: open ends can be monitored in real time for short answers, gibberish, irrelevance, and custom word lists, including banned words that stop progression and flagged words that surface for review.
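The word-list part of that check can be sketched in a few lines. This is an illustrative simplification, assuming per-study `banned` and `flagged` word sets; real gibberish and relevance scoring would involve more than word matching.

```python
import re


def review_open_end(text, banned, flagged, min_words=3):
    """Hypothetical sketch of real-time open-end screening.
    Returns ("block", reasons), ("flag", reasons), or ("pass", [])."""
    words = re.findall(r"[a-z']+", text.lower())
    unique = set(words)
    if unique & banned:
        return ("block", ["banned_word"])   # stops progression immediately
    issues = []
    if len(words) < min_words:
        issues.append("too_short")
    if unique & flagged:
        issues.append("flagged_word")       # surfaces for human review
    return ("flag", issues) if issues else ("pass", [])
```

The key design distinction mirrors the text: a banned word blocks the respondent from progressing, while a short or flagged-word answer only queues the response for review.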
Custom rule checks
- Standard approach: limited ability to configure question-specific consistency checks.
- Trust Centre approach: supported structured question types can compare a prior-answer condition with the current answer, so configured review points flag suspicious mismatches where they happen.
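One way to picture a prior-answer consistency rule, as a hedged sketch: each rule pairs a condition on an earlier answer with an expectation about the current one, and a mismatch raises a named review flag. The rule shape, field names, and example scenario are all hypothetical.

```python
def consistency_flags(answers, rules):
    """Hypothetical sketch: `answers` maps question id to the response,
    each rule pairs a prior-answer condition with an expectation about
    the current answer, and a mismatch flags the respondent for review."""
    flags = []
    for rule in rules:
        if rule["condition"](answers) and not rule["expect"](answers):
            flags.append(rule["name"])
    return flags


# Illustrative rule: a respondent who said they own no car should not
# later name a car brand they drive.
rules = [{
    "name": "no_car_but_brand",
    "condition": lambda a: a.get("owns_car") == "no",
    "expect": lambda a: a.get("car_brand") in (None, "none"),
}]
```

Because the rule fires at the question where the mismatch happens, the flag lands at the configured review point rather than in end-of-project cleanup.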
Reviewer control
- Standard approach: the team often sees a score, a filter, or isolated flags.
- Trust Centre approach: the reviewer sees the rolled-up respondent risk alongside Real, Unique, and Engaged status, then drills into the flags behind each dimension to judge whether the case looks genuinely suspicious.
Trust Centre turns that manual ritual into a live workflow. Flags appear while the study is open; the respondent view shows Real, Unique, and Engaged status, the rolled-up respondent risk, and the underlying signal stack; the team can quarantine or accept cases; and quota replacement can happen before the study closes. If issues only become clear after close, the options are mostly filtering, supplier challenge, or re-fielding.

What happens when someone is flagged
The workflow is designed to support action while the study is still open, not just a cleaner dashboard later. The point is reviewable control, not a black-box auto-reject system.
High-confidence flags can be handled quickly. Borderline cases stay visible so the team can make the call rather than losing good respondents to a single harsh rule.
- Quarantine for review when the team needs to inspect the case
- Fast automatic action for clear-cut problems when that rule is configured
- Override and accept when the researcher judges the flag differently
- Log the decision against the project record for later reference
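The four options above can be sketched as one decision function. This is a hypothetical simplification, assuming a per-study set of auto-reject rules; the flag names and log shape are illustrative, not the product's data model.

```python
from datetime import datetime, timezone

# Assumption: flags configured for fast automatic action on this study.
AUTO_REJECT = {"bot_detected"}


def handle_flag(respondent_id, flag, audit_log, override=None):
    """Hypothetical sketch of the review flow: a researcher override
    wins, configured clear-cut rules act automatically, everything
    else is quarantined for review, and every decision is logged
    against the project record."""
    if override in ("accept", "reject"):
        decision = override            # the researcher makes the call
    elif flag in AUTO_REJECT:
        decision = "reject"            # fast action on a configured rule
    else:
        decision = "quarantine"        # stays visible for human review
    audit_log.append({
        "respondent": respondent_id,
        "flag": flag,
        "decision": decision,
        "at": datetime.now(timezone.utc).isoformat(),
    })
    return decision
```

Note that borderline flags default to quarantine rather than rejection, which matches the point above about not losing good respondents to a single harsh rule, and the appended log entries are what later becomes the audit trail.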

The audit trail that travels with the study
If the business is moving quickly, the insight team does not have time to relitigate data quality from scratch every time someone asks how trustworthy the sample is. Trust Centre makes that conversation easier because the checks and decisions are already attached to the work.
When a stakeholder asks whether the sample can be trusted, the team can show exactly what was checked, what was flagged, and what was decided about each case.
Next step
See the Real, Unique, Engaged framework on a live study.
Book a demo and we will walk through the quality workflow, review logic, and audit trail on a real fieldwork example.