Documentation

Building AI Tasks

Configure the interactive AI experiences within your study — chat interfaces, LLM behavior, retrieval, and bias manipulations.

AI Task Types

Gricea supports four types of interactive AI tasks. Each provides a different interface and interaction paradigm:

Chat

A full conversational chat interface. Participants send messages and receive AI-generated responses with optional source references, web search, and streaming.

  • Streaming responses
  • Source references/citations
  • File upload
  • Web search toggle
  • Chat history management

Web Search

A search-engine-style interface where participants query for information. Results can be sourced from RAG collections with controlled retrieval.

  • Search query interface
  • RAG-powered results
  • Source attribution
  • Article viewer
  • Controlled retrieval

Scaffolded Chat

A structured chat experience with coaching scaffolds — progress guides, step-by-step prompts, checklists, and stage-based interactions that guide participants through a process.

  • Progress tracking
  • Stage-based interaction
  • Guided prompts
  • Structured workflow
  • Coaching interface

Split View

A side-by-side layout with content on one side and interaction on the other. Useful for article reading + chat, source review + annotation, or document + discussion tasks.

  • Two-panel layout
  • Article + chat
  • Source + annotation
  • Flexible component pairing
  • Synchronized interaction

AI Manipulations

One of Gricea's key strengths is the ability to manipulate specific aspects of the AI pipeline independently. This is critical for research because it lets you isolate the causal effect of a single change. Here are the types of manipulations you can configure:

Prompt Manipulation — Change the system prompt to vary the AI's persona, tone, verbosity, or optimization target. For example, test whether a helpful vs. authoritative persona changes user trust.

Context & RAG Manipulation — Control what evidence the AI retrieves and presents. Use RAG collections with topic filters, bias modes (consonant/dissonant/neutral), and metadata filters to systematically vary the information the AI has access to.

Bias Framing — Use the bias-transform node to reframe retrieved text to be consonant (aligned) or dissonant (opposing) relative to the user's stated position. Control intensity from subtle to strong.

Model Selection — Choose different LLMs (GPT-4o, GPT-4o-mini, etc.) or vary temperature and token limits to study how model capability affects user behavior.

Interface Scaffolds — Vary the UI layout and components. Show or hide source panels, add progress guides, include note-taking tools, or change from single-panel to split-view.

Turn Behavior — Control loop conditions, minimum turns, and stop conditions to vary the interaction length and structure. Test whether forced reflection (minimum turns) improves outcomes.

Isolating Variables

The key principle: change only what you want to study, keep everything else constant. Gricea makes this easy because each manipulation is a separate configurable parameter — you don't need to rebuild the entire system to test one change.
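
As a sketch of this principle, a two-condition prompt manipulation might differ in a single parameter while everything else stays fixed. The JSON below is illustrative only — the condition names and prompt wording are assumptions, not taken from a real study:

```json
{
  "condition_helpful": {
    "systemPrompt": "You are a helpful, friendly assistant."
  },
  "condition_authoritative": {
    "systemPrompt": "You are an authoritative expert assistant."
  }
}
```

All other parameters (model, temperature, retrieval settings, UI layout) would be identical across both conditions, so any observed difference can be attributed to the persona alone.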

Configuring Tasks: Node Types

Task-level nodes control the micro-level flow of the AI interaction. They determine what happens at each step: when to wait for user input, when to call the LLM, when to retrieve context, and when to update the UI. Here are all available task nodes:

task:await-user-input

Interaction

Pauses here

Pauses the task flow until a UI component emits input (e.g., the user sends a chat message). Connects the UI layer to the logic layer.

Key Configuration

  • sourceInstanceId (component ID)
  • eventName (e.g. onMessageSubmit)
  • saveAs variable
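
A minimal configuration sketch for this node — the component ID and variable name below are illustrative placeholders, not required values:

```json
{
  "sourceInstanceId": "chat-1",
  "eventName": "onMessageSubmit",
  "saveAs": "user_message"
}
```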

task:llm-call

AI

Calls an LLM and saves the complete response. Use for non-streaming scenarios or when you need the full response before proceeding.

Key Configuration

  • systemPrompt
  • userPrompt (supports {{variables}})
  • model (default gpt-4o)
  • temperature (0-2)
  • maxTokens
  • includeHistory
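
A hedged example configuration, assuming the variable names shown (persona, topic, user_message) have been set by earlier nodes:

```json
{
  "systemPrompt": "You are a {{persona}} assistant discussing {{topic}}",
  "userPrompt": "{{user_message}}",
  "model": "gpt-4o",
  "temperature": 0.7,
  "maxTokens": 500,
  "includeHistory": true
}
```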

task:llm-stream

AI

Streams an LLM response token-by-token to a Chat component. The standard choice for chat tasks — provides a natural typing experience.

Key Configuration

  • systemPrompt
  • model
  • temperature
  • targetComponentId
  • includeReferences
  • allowWebSearch
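
An illustrative configuration — the component ID and prompt text are placeholder assumptions:

```json
{
  "systemPrompt": "You are a concise research assistant.",
  "model": "gpt-4o",
  "temperature": 0.7,
  "targetComponentId": "chat-1",
  "includeReferences": true,
  "allowWebSearch": false
}
```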

task:retrieve-context

AI

Retrieves documents from a RAG vector store. Supports topic filtering, bias-aware retrieval (consonant/dissonant), metadata filters, and configurable result limits.

Key Configuration

  • collectionName
  • queryVariable / queryTemplate
  • limit (default 5)
  • biasMode: consonant | dissonant | neutral | mixed
  • topicVariable
  • filters
  • contextVariable
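
A sketch of a bias-aware retrieval configuration — the collection and variable names are illustrative assumptions:

```json
{
  "collectionName": "climate_articles",
  "queryVariable": "user_message",
  "limit": 5,
  "biasMode": "dissonant",
  "topicVariable": "assigned_topic",
  "contextVariable": "retrieved_context"
}
```

Here the node would fetch up to five documents opposing the participant's position and store them in retrieved_context for downstream nodes.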

task:bias-transform

AI

Transforms text to apply bias framing. Can reframe content to be consonant (aligned), dissonant (opposing), neutral, or mixed relative to a user's stated position.

Key Configuration

  • inputVariable / inputTemplate
  • biasMode / biasModeVariable
  • userStance / userStanceVariable
  • intensity: subtle | moderate | strong
  • outputVariable
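
An illustrative configuration using the variable-driven variants, assuming assigned_bias and user_stance were set earlier (e.g. by a randomize node and a survey):

```json
{
  "inputVariable": "retrieved_context",
  "biasModeVariable": "assigned_bias",
  "userStanceVariable": "user_stance",
  "intensity": "moderate",
  "outputVariable": "framed_context"
}
```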

task:inline-survey

Interaction

Pauses here

Shows a survey inside an InlineSurvey UI component — useful for collecting quick responses mid-task without leaving the conversation. Can appear inline or as a modal.

Key Configuration

  • targetComponentId
  • questions[] (same as study:survey)
  • display mode: inline | modal

task:update-component

Data & Variables

Updates properties or calls functions on a UI component at runtime. Use to dynamically change instructions, toggle visibility, or trigger component actions.

Key Configuration

  • targetInstanceId
  • newProps
  • functionToCall
  • params[]

task:loop

Flow Control

Repeats a sequence of nodes (the "loop body") until a stop condition is met. Essential for chat interactions where the user sends multiple messages.

Key Configuration

  • loopBody (first node ID)
  • stopConditions: maxIterations | variableEquals | userAction | timeElapsed | wordCount
  • minTurns
  • showStopAfterMinTurns
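
A sketch of a loop configuration for a chat with a 20-turn cap and a user-controlled stop after 3 turns. The exact shape of the stopConditions entries is an assumption here, not confirmed by the source:

```json
{
  "loopBody": "await-input-1",
  "stopConditions": [
    { "type": "maxIterations", "value": 20 },
    { "type": "userAction" }
  ],
  "minTurns": 3,
  "showStopAfterMinTurns": true
}
```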

task:loop-end

Flow Control

Marks the end of a loop body. When reached, execution jumps back to the loop node to check stop conditions.

Key Configuration

  • loopId (optional reference)

task:loop-stop

Flow Control

Provides an explicit exit point from a loop. When reached, the loop ends and execution continues to the next node after the loop.

Key Configuration

  • loopId (optional reference)

task:branch

Flow Control

Conditional branching within a task. Same behavior as study:branch — checks variable conditions and routes to different nodes.

Key Configuration

  • branches[]
  • condition
  • defaultGoto

task:randomize

Flow Control

Random assignment within a task. Useful for within-task randomization of prompts, instructions, or conditions.

Key Configuration

  • variables[]
  • values with weights
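
An illustrative weighted assignment — the field names inside each entry are assumptions based on the parameters listed above:

```json
{
  "variables": [
    {
      "name": "prompt_variant",
      "values": [
        { "value": "helpful", "weight": 0.5 },
        { "value": "authoritative", "weight": 0.5 }
      ]
    }
  ]
}
```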

task:set-variable

Data & Variables

Sets or computes task-level variables. Same capabilities as study:set-variable: constants, variable copies, expressions, and random selection.

Key Configuration

  • assignments[]
  • constant | fromVariable | expression | random

task:logic

Data & Variables

Runs logic functions to transform variables within a task. Supports the full expression engine with function calls and inline expressions.

Key Configuration

  • rules[]
  • target variable
  • LogicExpr: const | var | call | expression

task:within-subject-start

Flow Control

Within-subject design block inside a task. Each participant experiences all conditions within the task.

Key Configuration

  • conditions[]
  • assignTo
  • counterbalance

task:within-subject-end

Flow Control

End of within-subject block in a task. Loops back to start until all conditions are completed.

Key Configuration

  • startNodeId

Logic Node Deep Dive

The logic node provides a powerful expression engine for computing and transforming variables. Each rule in a logic node has a target variable and a value expression.

Expression Types:

  • const — A fixed value: {"type": "const", "value": 42}
  • var — Reference another variable: {"type": "var", "key": "user_stance"}
  • call — Call a built-in function: {"type": "call", "fn": "concat", "args": [...]}
  • expression — Inline expression string: {"type": "expression", "expression": "{{score}} * 2 + 1"}

Variable Interpolation: Use {{variableName}} syntax in prompts, text fields, and expressions to dynamically insert variable values. For example:

  • System prompt: "You are a {{persona}} assistant discussing {{topic}}"
  • Expression: "{{score_a}} + {{score_b}}"

The logic engine evaluates expressions in order, so later rules can reference variables set by earlier rules in the same node.
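
To make the interpolation and ordered-evaluation behavior concrete, here is a minimal Python sketch of how such an engine might work. This is not Gricea's implementation — the function names and the use of Python's eval for arithmetic are assumptions for illustration only:

```python
import re

def interpolate(template, variables):
    """Replace each {{name}} placeholder with the value from the variables dict."""
    return re.sub(r"\{\{(\w+)\}\}", lambda m: str(variables[m.group(1)]), template)

def run_rules(rules, variables):
    """Evaluate rules in order; later rules can read variables set by earlier ones."""
    for rule in rules:
        expr = interpolate(rule["expression"], variables)
        # Arithmetic-only evaluation (no builtins) as a stand-in for the real engine
        variables[rule["target"]] = eval(expr, {"__builtins__": {}})
    return variables

result = run_rules(
    [
        {"target": "total_score", "expression": "{{pre_score}} + {{post_score}}"},
        {"target": "normalized_score", "expression": "{{total_score}} / 100"},
    ],
    {"pre_score": 40, "post_score": 35},
)
# result["total_score"] == 75; result["normalized_score"] == 0.75
```

Note how the second rule reads total_score, which only exists because the first rule ran before it — the same ordering guarantee described above.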

Logic Node Example: Compute a Score

{
  "rules": [
    {
      "label": "Calculate total score",
      "target": "total_score",
      "value": {
        "type": "expression",
        "expression": "{{pre_score}} + {{post_score}}"
      }
    },
    {
      "label": "Normalize to 0-1",
      "target": "normalized_score",
      "value": {
        "type": "expression",
        "expression": "{{total_score}} / 100"
      }
    }
  ]
}

Example Task Configurations

Here are common task flow patterns used in Gricea studies:

Simple Chat Loop

task:set-variable (initialize chat_history)
  → task:loop (max 20 turns, user can stop after 3)
    → task:await-user-input (wait for chat message)
    → task:llm-stream (generate AI response)
    → task:loop-end
  → End of Task

RAG-Enhanced Chat

task:set-variable (initialize)
  → task:loop
    → task:await-user-input
    → task:retrieve-context (search RAG collection)
    → task:llm-stream (respond with retrieved context)
    → task:loop-end
  → End of Task

Bias-Manipulated Chat

task:set-variable (initialize)
  → task:loop
    → task:await-user-input
    → task:retrieve-context (get raw evidence)
    → task:bias-transform (reframe as consonant/dissonant)
    → task:llm-stream (respond using biased context)
    → task:loop-end
  → End of Task