Roundtable Alias performs a series of automated checks on each survey response to identify fraudulent, low-quality, and suspicious content. These checks fall into four broad categories:

  1. Basic Checks: Analyze the content of individual responses for gibberish, off-topic, low-effort, and AI-generated text.

  2. Duplicate Detection: Identifies duplicate and near-duplicate responses within and across participants.

  3. Behavioral Tracking: Monitors participants’ typing patterns and keystroke dynamics to flag suspicious activities and bot-like behavior.

  4. Effort Scoring: Assigns a granular effort score to each response based on factors like length, complexity, relevance, and engagement.

When a response fails one or more checks, Alias flags it in the checks object of the API response, providing detailed information about the specific issues detected.
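As a rough illustration of how you might consume a flagged response, the sketch below walks a hypothetical checks object and collects the names of failed checks. The field names (checks, gibberish, off_topic, low_effort) are assumptions for illustration only; consult the API Reference for the actual schema.

```python
# Illustrative sketch: inspecting a hypothetical `checks` object on an
# API response. Field names here are assumptions, not the documented schema.

def failed_checks(response: dict) -> list[str]:
    """Return the names of all checks that flagged this response."""
    checks = response.get("checks", {})
    return [name for name, flagged in checks.items() if flagged]

example = {
    "response_text": "asdf qwerty",
    "checks": {"gibberish": True, "off_topic": False, "low_effort": True},
}
print(failed_checks(example))  # prints ['gibberish', 'low_effort']
```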

Basic Checks

Alias performs the following basic checks on each response:

  • Gibberish: Flags responses that lack coherent semantic content and are likely to be random or nonsensical.

  • Off-topic: Identifies responses that are unrelated to the question asked and potentially indicative of disengagement or bot activity.

  • Low-effort: Flags responses that provide minimal information or lack sufficient detail to be meaningful.

  • AI-generated Content: Detects responses that appear to be generated by language models like GPT, suggesting potential fraud or automation.
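To make one of these checks concrete, here is a toy heuristic for gibberish detection: it flags text whose alphabetic tokens rarely contain a vowel, since real English words almost always do. This is purely illustrative; Alias's actual detection models are far more sophisticated, and the threshold below is an arbitrary assumption.

```python
# Toy gibberish heuristic (NOT Alias's algorithm): flag text whose
# tokens rarely look like real words.
import re

def looks_like_gibberish(text: str, threshold: float = 0.5) -> bool:
    tokens = re.findall(r"[a-zA-Z]+", text.lower())
    if not tokens:
        return True
    # Real English words almost always contain at least one vowel.
    wordlike = sum(1 for t in tokens if re.search(r"[aeiou]", t))
    return wordlike / len(tokens) < threshold
```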

Duplicate Detection

Alias uses advanced text similarity algorithms to identify duplicate and near-duplicate responses:

  • Self-duplicate: Flags instances where a participant submits similar or identical responses to multiple questions within the same survey.

  • Cross-duplicate: Identifies cases where multiple participants submit similar or identical responses to the same question across different survey submissions.

Duplicate detection helps uncover copy-pasted, templated, or bot-generated responses that undermine data quality.
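One common building block for near-duplicate detection is token-set similarity. The sketch below uses Jaccard similarity with an arbitrary threshold; it is a simplified stand-in for the similarity algorithms Alias uses, not a description of them.

```python
# Illustrative near-duplicate check via Jaccard similarity of word sets
# (a simplified stand-in for Alias's actual text-similarity algorithms).

def jaccard(a: str, b: str) -> float:
    """Fraction of shared unique words between two texts."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta and not tb:
        return 1.0
    return len(ta & tb) / len(ta | tb)

def is_near_duplicate(a: str, b: str, threshold: float = 0.8) -> bool:
    return jaccard(a, b) >= threshold
```

A self-duplicate check would compare a participant's answers across questions; a cross-duplicate check would compare different participants' answers to the same question.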

Behavioral Tracking

By monitoring participants’ typing patterns and keystroke dynamics, Alias can identify suspicious behaviors that deviate from genuine human responses:

  • Abnormal Typing Speed: Flags responses with unusually fast or slow typing speeds that suggest automation or disengagement.

  • Unnatural Typing Patterns: Detects repetitive, scripted, or bot-like typing rhythms and sequences.

  • Pasted Content: Identifies responses that were likely pasted into the survey interface rather than typed naturally.

  • Programmatic Entry: Flags responses that appear to be entered programmatically or through automated means.

Behavioral tracking adds an extra layer of fraud detection by analyzing how participants interact with the survey.
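Two of these signals can be sketched from raw keystroke data: if the submitted text is far longer than the number of recorded keystrokes, much of it was likely pasted, and keystroke timestamps give an overall typing rate. The function names, inputs, and thresholds below are illustrative assumptions, not Alias's implementation.

```python
# Illustrative behavioral signals from raw keystroke data
# (assumed inputs; not Alias's implementation).

def likely_pasted(text: str, keystroke_count: int) -> bool:
    # Typed text needs at least one keystroke per character (usually more,
    # counting corrections), so a large gap suggests a paste event.
    return len(text) > keystroke_count * 2

def keystrokes_per_second(timestamps_ms: list[int]) -> float:
    """Average keystroke rate across the recorded key events."""
    if len(timestamps_ms) < 2:
        return 0.0
    elapsed_s = (timestamps_ms[-1] - timestamps_ms[0]) / 1000
    return (len(timestamps_ms) - 1) / elapsed_s if elapsed_s > 0 else float("inf")
```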

Effort Scoring

Alias assigns an effort score between 1 and 10 to each response based on various linguistic and engagement factors:

  • Length: Longer, more detailed responses generally receive higher effort scores.

  • Complexity: Responses with varied vocabulary, sophisticated sentence structures, and coherent paragraphs score higher.

  • Relevance: Responses that directly address the question and stay on-topic are scored higher.

  • Uniqueness: Responses offering unique insights or perspectives receive higher scores.

  • Engagement: Responses demonstrating genuine engagement, such as providing examples or personal anecdotes, score higher.

Effort scores provide a quick way to assess the overall quality and thoughtfulness of each response.
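The factors above could be combined as in this minimal sketch, which maps crude proxies for length, vocabulary variety, and engagement onto a 1-10 scale. Alias's real scoring uses trained models; the weights, proxies, and keyword list here are made-up assumptions.

```python
# Illustrative composite effort scorer (assumed weights and proxies;
# Alias's real scoring uses trained models).

def effort_score(text: str) -> int:
    words = text.split()
    length_pts = min(len(words) / 10, 4)                          # detail
    vocab_pts = min(len({w.lower() for w in words}) / 10, 3)      # variety
    engagement_pts = 2 if any(w.lower() in {"example", "because", "when"}
                              for w in words) else 0              # engagement
    return max(1, min(10, round(1 + length_pts + vocab_pts + engagement_pts)))
```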

Next Steps

Dive deeper into each type of check, and explore the API Reference for details on the checks object and other response fields.