Documentation

Tutorial: Build Your First Study

A step-by-step walkthrough of building a complete study — from research question to published experiment.

What We're Building

In this tutorial, we'll build a complete between-subjects study investigating:

RQ: How does an AI's bias affect participants' perceived trust in the AI?

Participants will first report their stance on AI in education, then have a conversation with an AI chatbot that either supports or opposes AI in education (depending on their assigned condition), and finally report their post-interaction trust and stance.

By the end of this tutorial, you'll understand how to use surveys, randomization, logic nodes, branching, and chat tasks to build a controlled experiment.

Follow Along

Open the Gricea builder in another tab and follow along. Each step shows you the exact node to drag onto the canvas and how to configure it.

Building the Study — Step by Step

Below is the complete walkthrough. Each step shows you the visual node as it appears in the builder, along with the key configuration you need to set.

The nodes are rendered here exactly as they will appear once placed on the canvas — you just need to drag, drop, and fill in the fields shown.

Tutorial Study: Complete Flow

1. Pre-Task Survey: Demographics + Stance
2. Randomize: consonant / dissonant
3. Logic: Binarize stance → pro / con
4. Logic: assignBias → ai_Stance
5. Branch: ai_Stance = pro / con
6. Supporting AI Chat (pro system prompt) or Against AI Chat (con system prompt)
7. Post-Task Survey: Stance change + Trust
8. End

Node-by-Node Walkthrough

Drag each node onto the canvas and configure it as shown below.

1

Survey: Pre-Task Survey

  • Survey Title: Pre-task Survey
  • Question (Stance): "AI should be more heavily integrated in education"
  • Scale: 4-point, Disagree → Agree (no neutral)
  • Variable Name: stance

This is the entry point — the first thing participants see. It collects demographics and, critically, their stance on AI in education. Add questions for age, gender, education, AI trust, and AI helpfulness.

Why this node?

The 4-point scale forces participants to lean one way or the other (no neutral option). This makes the binary classification in the next step cleaner. The variable name 'stance' is critical — downstream logic nodes reference it.

2

Randomize: Condition Assignment

  • Variable Name: consonance
  • Value 1: consonant (weight: 1)
  • Value 2: dissonant (weight: 1)

Performs between-subject random assignment. Each participant is randomly assigned to one of two conditions: consonant or dissonant. Equal weights mean 50/50 split.

Why this node?

We're NOT directly randomizing whether the AI is 'pro' or 'anti'. Instead, we randomize whether the AI agrees or disagrees with the participant. This consonance/dissonance design lets you analyze the effect of alignment itself.
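Weighted random assignment is simple to reason about. The sketch below is illustrative only — `randomizeCondition` is a hypothetical name, not the builder's API — and shows how equal weights of 1 and 1 yield a 50/50 split.

```javascript
// Illustrative sketch of weighted random assignment (not builder internals).
// `conditions` maps each value to its weight; equal weights give a 50/50 split.
function randomizeCondition(conditions) {
  const total = Object.values(conditions).reduce((sum, w) => sum + w, 0);
  let roll = Math.random() * total;
  for (const [value, weight] of Object.entries(conditions)) {
    roll -= weight;
    if (roll < 0) return value; // this value's slice of the roll was hit
  }
}

// `consonance` becomes "consonant" or "dissonant" with equal probability
const consonance = randomizeCondition({ consonant: 1, dissonant: 1 });
```

Unequal weights (e.g. `{ consonant: 2, dissonant: 1 }`) would produce a 2:1 split instead.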

3

Logic: Binarize Stance

  • Target Variable: user_stance_bin
  • Expression: {{stance}} <= 2 ? 'con' : 'pro'

Converts the participant's 4-point stance into a simple binary classification. Scores 1-2 become 'con' (against AI in education), scores 3-4 become 'pro' (for AI in education).

Why this node?

The 4-point scale gives us nuance in the survey data, but for routing participants to conditions, we need a clean binary. This is a common pattern: collect fine-grained data, then classify for condition assignment.
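The expression is an ordinary ternary; `{{stance}}` is the builder's template syntax for substituting the survey variable. Written as a plain function, the classification looks like this:

```javascript
// The Logic node's expression, {{stance}} <= 2 ? 'con' : 'pro',
// written as a plain function over the 4-point scale.
function binarizeStance(stance) {
  // 1-2 = 'con' (against AI in education), 3-4 = 'pro' (for it)
  return stance <= 2 ? 'con' : 'pro';
}
```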

4

Logic: Assign AI Stance

  • Target Variable: ai_Stance
  • Expression: assignBias({{consonance}}, {{user_stance_bin}}, 'pro', 'con', 'neutral')

This is where the experimental manipulation happens. It combines the randomized condition with the participant's stance to determine what kind of AI they interact with.

Why this node?

  • Consonant + pro user → pro AI (agrees)
  • Consonant + con user → con AI (agrees)
  • Dissonant + pro user → con AI (disagrees)
  • Dissonant + con user → pro AI (disagrees)

This two-step approach is more powerful than directly randomizing 'pro' vs 'con' AI.
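`assignBias` is a builder-provided helper whose internals aren't shown here, but its documented mapping can be reconstructed as a plain function. This is a sketch of that behavior, not the actual implementation; the last argument is assumed to be a fallback for unexpected input:

```javascript
// Sketch of assignBias reconstructed from its documented mapping
// (not the builder's actual implementation).
function assignBias(consonance, userStance, pro, con, neutral) {
  if (userStance !== pro && userStance !== con) return neutral; // unexpected input
  if (consonance === 'consonant') {
    return userStance;                        // AI agrees with the participant
  }
  if (consonance === 'dissonant') {
    return userStance === pro ? con : pro;    // AI takes the opposite side
  }
  return neutral;                             // unexpected condition value
}
```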

5

Branch: Route to Chat Task

  • Condition Variable: ai_Stance
  • Branch 1: ai_Stance = "pro" → Supporting AI Chat
  • Branch 2: ai_Stance = "con" → Against AI Chat
  • Default: End (fallback)

Routes participants to different chat tasks based on the ai_Stance variable. If ai_Stance = "pro", they go to the Supporting AI chat. If ai_Stance = "con", they go to the Against AI chat.

Why this node?

After this node, participants diverge into two different experiences — but from their perspective, it looks like a single seamless study. The default path catches edge cases.
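Conceptually, the Branch node behaves like a switch on `ai_Stance`. A hypothetical sketch, with node names as return values for illustration:

```javascript
// Conceptual sketch of the Branch node's routing (not builder internals).
function routeToChat(aiStance) {
  switch (aiStance) {
    case 'pro': return 'Supporting AI Chat';
    case 'con': return 'Against AI Chat';
    default:    return 'End'; // the default path catches unexpected values
  }
}
```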

6

Chat Tasks: Two Conditions

Both tasks share the same structure — only the system prompt differs. This isolates the effect of AI bias on participant trust. Both use the same model, temperature, UI, and features.

Supporting AI Chat (ai_Stance = "pro")

  • System Prompt: "You are an agent that is in support of AI in Education..."
  • Model: gpt-4o-mini
  • Temperature: 0.7

Against AI Chat (ai_Stance = "con")

  • System Prompt: "You are an agent that is against AI in Education..."
  • Model: gpt-4o-mini
  • Temperature: 0.7

Inside each Chat Task

  • Update Component: add welcome message
  • Loop: repeat conversation
  • Await User Input: wait for chat message
  • LLM Stream: generate AI response
  • Loop End: back to waiting

Key configuration for both tasks

  • Include history: enabled (maintains conversation context)
  • Web search: enabled (allows AI to find evidence)
  • References: enabled (shows source citations)
  • Max tokens: 2048
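Putting the embedded flow together, the loop inside each chat task behaves roughly like the sketch below. `send`, `awaitUserInput`, and `llmStream` are hypothetical stand-ins for the builder's nodes, stubbed so the control flow runs end to end:

```javascript
// Hypothetical stand-ins for the builder's nodes, stubbed for illustration.
const send = (msg) => console.log(msg);
const awaitUserInput = async () => 'What do you think about AI in schools?';
const llmStream = async (history) => `stubbed reply (${history.length} messages of context)`;

// Rough sketch of the embedded flow inside each chat task.
async function chatTask(systemPrompt, turns) {
  const history = [{ role: 'system', content: systemPrompt }];
  send('Welcome!');                            // Update Component: welcome message
  for (let i = 0; i < turns; i++) {            // Loop: repeat conversation
    const userMsg = await awaitUserInput();    // Await User Input: wait for chat message
    history.push({ role: 'user', content: userMsg });
    const reply = await llmStream(history);    // LLM Stream: history included for context
    history.push({ role: 'assistant', content: reply });
  }                                            // Loop End: back to waiting
  return history;
}
```

The only thing that differs between the two conditions is the `systemPrompt` passed in, which is exactly what isolates the bias manipulation.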

7

Survey: Post-Task Survey

  • Question 1: "What is your stance on AI in Education?" (7-point) → variable: post_stance
  • Question 2: "AI technology is helpful in my daily life." (7-point) → variable: post_confidence

Both branches converge here. Connect both chat task outputs to this survey. It measures whether the AI's bias changed the participant's stance and trust.

Why this node?

The 7-point post-task scale gives more resolution than the 4-point pre-task scale. Comparing pre vs. post scores across conditions reveals whether AI bias affects trust.

8

Connect, Validate & Publish

With all nodes on the canvas, connect them in order. Then click Validate in the toolbar. The validation panel checks:

  • All nodes are connected (no orphans or dead ends)
  • All variables are defined before use
  • Branch has a default path
  • Each chat task has a valid embedded flow graph

If validation passes, click Publish to create an immutable version, then generate an invitation link to share with participants.

What you've built

A complete between-subjects study with 2 conditions (consonant vs. dissonant), where the AI's stance is dynamically computed from the participant's own position. Every interaction is logged — survey responses, chat messages, LLM calls, and variable assignments — giving you rich data for analysis.