Earlier this week, we launched our API for survey fraud detection. Track any open-ended response and receive an immediate score indicating whether it flags any of our tests.

Our tests currently leverage two cognitive insights:

  1. Eyeball Tests - One of the most effective ways to detect fraudulent open-ended responses is to ask a qualified human (e.g. a behavioral researcher) to look through all the answers and flag them accordingly. Manual review, however, is time-consuming and ripe for bias, so our algorithm leverages past datasets of manual researcher annotations to capture the ‘researcher eyeball’. With this model, we can give a gut check for any open-ended response (a sketch of the general idea follows this list).

  2. Behavioral Tests - GPT and junk bots have behavioral profiles that differ from humans’: typing rhythm, character generation, and copy-and-paste behavior. With JavaScript tracking, we can capture and analyze every change event for any open-ended response. At the scale of the large datasets that survey providers have at their disposal, we have been able to model fine-grained human vs. junk vs. GPT behavior (see the feature sketch below).
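
To make the first insight concrete, here is a minimal, hypothetical sketch of the general approach: train an off-the-shelf text classifier on researcher-annotated responses and use its score as an automated ‘eyeball’. This is illustrative only; the toy data, feature choice, and classifier are our assumptions, not Roundtable’s actual model.

# Illustrative only: a stand-in for a model trained on researcher annotations,
# not Roundtable's actual pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical past dataset: open-ended answers with manual researcher flags
# (1 = flagged as fraudulent or low-quality, 0 = accepted).
answers = [
    "I usually shop online because it saves time on weekends.",
    "asdf asdf asdf",
    "Good product. Good product. Good product.",
    "The checkout flow confused me, especially the coupon step.",
]
labels = [0, 1, 1, 0]

# TF-IDF features plus logistic regression stand in for the 'researcher eyeball'.
eyeball = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
eyeball.fit(answers, labels)

# Score a new response: the probability that a researcher would flag it.
print(eyeball.predict_proba(["good good good good good"])[0][1])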
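
For the second insight, here is a hypothetical sketch of the kind of behavioral features that change events make available. The event shape (a millisecond timestamp plus a text snapshot) is an assumed format for illustration, not the actual output of our JavaScript tracker.

# Illustrative only: deriving simple behavioral features from change events.
def behavioral_features(events):
    """events: list of {"t": ms_timestamp, "text": field_contents} snapshots."""
    deltas = [b["t"] - a["t"] for a, b in zip(events, events[1:])]
    growths = [len(b["text"]) - len(a["text"]) for a, b in zip(events, events[1:])]
    return {
        # Humans typically add a character or two per change event; a large
        # jump suggests a paste, or a bot injecting the whole answer at once.
        "max_chars_per_event": max(growths, default=0),
        # Median time between events: human typing has a characteristic rhythm.
        "median_ms_between_events": sorted(deltas)[len(deltas) // 2] if deltas else 0,
        "num_events": len(events),
    }

# A whole answer appearing in a single event looks like a paste or a bot.
print(behavioral_features([
    {"t": 0, "text": ""},
    {"t": 120, "text": "I usually shop online because it saves time."},
]))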

Simply embed our JavaScript tracker into your Qualtrics / Forsta survey, and call the API with the Python code below.

import requests

url = 'https://roundtable.ai/.netlify/functions/alias-v01'

# Data to be sent in the request
data = {
    "questions": {...},  # Replace with your questions object
    "question_histories": {...},  # Replace with your change history object or leave empty
    "responses:": {...},  # Replace with an object with the participant's final responses
    "survey_id": "survey123",
    "participant_id": "participant123",
    "api_key": "your-api-key"
}

# Make the POST request
response = requests.post(url, json=data)
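
For an end-to-end example, here is the same request with a hypothetical filled-in payload and a check of the result. The field shapes (question text keyed by question ID, change histories as timestamped snapshots) and the response schema are assumptions for illustration; consult the API documentation for the exact formats.

import requests

url = 'https://roundtable.ai/.netlify/functions/alias-v01'

# Hypothetical payload, for illustration only.
data = {
    "questions": {"q1": "Why did you choose this product?"},
    "question_histories": {
        # Assumed shape: timestamped text snapshots from the JS tracker.
        "q1": [{"t": 0, "text": ""}, {"t": 950, "text": "Because it was cheap."}],
    },
    "responses": {"q1": "Because it was cheap."},
    "survey_id": "survey123",
    "participant_id": "participant123",
    "api_key": "your-api-key",
}

response = requests.post(url, json=data)
response.raise_for_status()
# We print the raw JSON rather than guess at specific response fields.
print(response.json())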

In seconds, the researcher can tell whether their respondent was a human or a bot. We’ve designed this for scale: the API can handle millions of participants a month. Try it out, and let us know how we do at support@roundtable.ai.

Survey fraud and bot detection is a cat-and-mouse game. It doesn’t make sense for individual researchers to maintain their own tools and constantly update them every few months to catch new patterns of behavior. We know what we want: high-quality human data. By building models of human survey-taking behavior, we’re able to ensure that researchers have the high-quality data they need to make informed, high-stakes decisions.