We’re excited to announce a significant enhancement to Roundtable Alias: user-defined thresholds for effort scores.

Understanding Effort Scores

Before we dive into the changes, let’s revisit what effort scores are and why they matter. Effort scores are numerical values assigned to each survey response, ranging from 1 to 10. These scores are calculated based on various linguistic and behavioral factors, providing a quantitative measure of the respondent’s engagement and the quality of their input.

For example, a response that’s thoughtful, detailed, and relevant might receive a high score of 8 or 9. On the other hand, a single-word answer or a very brief, low-effort response might receive a low score of 2 or 3. These scores help researchers quickly identify high-quality responses for analysis and flag problematic ones for review.

The Previous Approach

Until now, we at Roundtable set a predetermined threshold for what constituted a “low-effort” response. This fixed threshold was used across all surveys to automatically flag responses that fell below it. While this system provided a standardized approach to identifying potentially low-quality data, it had limitations.

As we worked closely with researchers, we realized that a one-size-fits-all threshold wasn’t ideal. Different types of surveys, research goals, and respondent populations often require different standards of what constitutes an acceptable level of effort. What might be considered a low-effort response in an in-depth qualitative study could be perfectly acceptable in a quick pulse survey. Researchers often have the best understanding of their specific research context and what level of effort is appropriate for their particular study.

What We Changed

In response to these insights, we’ve introduced user-defined thresholds for effort scores. Now, instead of relying on our predetermined threshold, researchers can set their own minimum effort score for what they consider an acceptable response.

This change is implemented through a new parameter in our API: low_effort_threshold. When making an API request, researchers can now specify this threshold as an integer between 1 and 10. Any responses with an effort score at or below this threshold will be flagged as “low-effort.”

For example, if a researcher sets low_effort_threshold=5, all responses with an effort score of 5 or lower will be flagged. This gives researchers direct control over the stringency of the effort score filtering.
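To make the flagging rule concrete, here is a minimal Python sketch of the threshold logic described above. The `low_effort_threshold` parameter name and the “at or below” rule come from the API; the function name, response shape, and field names are illustrative assumptions, not our actual implementation.

```python
# Illustrative sketch of low_effort_threshold semantics.
# Only the parameter name and the "at or below" rule are from the API docs;
# the response structure here is a hypothetical example.

def flag_low_effort(responses, low_effort_threshold):
    """Mark each response whose effort score is at or below the threshold."""
    return [
        {**r, "flagged": r["effort_score"] <= low_effort_threshold}
        for r in responses
    ]

responses = [
    {"id": "r1", "effort_score": 3},  # brief, low-effort answer
    {"id": "r2", "effort_score": 5},  # exactly at the threshold
    {"id": "r3", "effort_score": 8},  # detailed, thoughtful answer
]

flagged = flag_low_effort(responses, low_effort_threshold=5)
# With low_effort_threshold=5, r1 and r2 are flagged; r3 is not.
```

Note that the comparison is inclusive: a response scoring exactly 5 is flagged when the threshold is 5, matching the behavior described above.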

Fraud Detection and Effort Scores

It’s important to note that not all low-effort responses are necessarily fraudulent. Some respondents might provide answers that are technically valid but show minimal engagement or thoughtfulness. These could be very brief responses, one-word answers, or responses that don’t fully address the question at hand. While not fraudulent, these low-effort responses can still significantly impact the quality and depth of your data.

User-defined thresholds can play a crucial role in identifying both potential fraud and low-effort responses. By allowing researchers to set appropriate thresholds for their specific surveys, we’re enabling more accurate detection of responses that don’t meet the expected standard of engagement. This helps researchers more effectively screen out potentially fraudulent or low-quality data, protecting the integrity of their research findings.

We welcome your feedback on this new feature and look forward to continuing our journey of innovation in survey research together.