Documentation

Gricea Platform Guide

Everything you need to design, deploy, and analyze human-AI interaction studies.

Start Here

Tutorial: Build Your First Study

Walk through building a complete between-subjects study investigating how AI bias affects trust — from research question to published experiment.

What is Gricea?

Gricea is a research platform that treats conversational-AI variants as configurable study conditions. It provides a researcher-facing authoring interface for composing multi-stage study procedures and configuring task interfaces, alongside a participant runtime that executes published study versions and logs turn-level interaction traces.

The core design goal is to make experimental manipulations low-cost, scalable, and fine-grained: researchers should be able to author both (i) the controlled study flow (macro procedure, branching, randomization) and (ii) the conversational task flow (micro turn-by-turn interaction, retrieval, generation, streaming, and interface scaffolds) within a unified, auditable framework.

Gricea is developed at the isle Lab at Johns Hopkins University.

Key Concepts

Before diving into the details, here are the core building blocks of Gricea:

Studies — The top-level unit. A study defines the complete participant journey: consent, surveys, AI tasks, and debrief. You design studies as a visual flow graph in the builder, then publish immutable versions for participants.

Tasks — Interactive AI-powered experiences embedded within studies. Tasks have their own node-based flow graphs that control the micro-level interaction: when to wait for user input, when to call the LLM, what context to retrieve, and how to update the interface.

Nodes — The building blocks of both studies and tasks. Nodes represent stages (surveys, chat interfaces), logic (branching, randomization, variable assignment), and AI operations (LLM calls, RAG retrieval, bias transforms).

Measures — The data your study collects. Automatic event logging captures every interaction — survey responses, chat messages, LLM calls, timing data — and all of it is available for export and analysis.

Variables — Data that flows through your study. Survey responses, randomization assignments, computed values, and AI outputs are all stored as variables that downstream nodes can reference.
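To make the node-and-variable model concrete, here is a minimal illustrative sketch in Python. The class and method names (`Study`, `run_survey`, `branch`) are hypothetical stand-ins, not Gricea's actual API; the point is only that upstream nodes write variables and downstream logic nodes read them:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of how variables flow between study nodes.
# Names and structure are illustrative, not Gricea's actual API.

@dataclass
class Study:
    variables: dict = field(default_factory=dict)

    def run_survey(self, name: str, response: int) -> None:
        # A survey node stores the participant's response as a variable.
        self.variables[name] = response

    def branch(self, name: str, threshold: int) -> str:
        # A logic node reads an upstream variable to choose a path.
        if self.variables[name] >= threshold:
            return "high_trust_arm"
        return "low_trust_arm"

study = Study()
study.run_survey("baseline_trust", 6)
print(study.branch("baseline_trust", threshold=4))  # → high_trust_arm
```

In the real builder this wiring is done visually in the flow graph; the sketch just shows the underlying pattern — any node can reference a variable produced earlier in the flow.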

Example Research Questions

Gricea is designed to help researchers investigate questions at the intersection of human behavior and AI system design. Here are examples of research questions it can support:

  • How does AI bias change user opinion? — Manipulate retrieval filters or system prompts to present consonant vs. dissonant evidence, then measure attitude change.
  • How do different conversation strategies affect user reliance? — Vary system prompts, initiative strategies, or scaffolding interfaces across conditions to compare reliance behaviors.
  • How do different personas change user engagement? — Randomize persona-based system prompts and measure engagement through turn count, message length, and satisfaction surveys.
  • How do different privacy system messages affect user behavior? — Present different privacy notices and track information disclosure patterns in chat.
  • Does retrieval grounding reduce overreliance on AI? — Compare chat tasks with and without RAG-retrieved sources to measure verification behavior.
  • How do interface scaffolds shape feedback quality? — Test progress guides, source panels, or checklists as interface conditions while holding the AI pipeline constant.
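Several of the questions above share the same mechanic: randomly assigning each participant to one of several system-prompt conditions. As an illustrative sketch (the condition names and prompt text are invented for this example, not taken from Gricea), that assignment might look like:

```python
import random

# Illustrative sketch: assigning participants to system-prompt conditions.
# Condition names and prompt wording are hypothetical examples.
CONDITIONS = {
    "consonant": "Present evidence that supports the user's stated view.",
    "dissonant": "Present evidence that challenges the user's stated view.",
}

def assign_condition(seed=None):
    """Pick a condition; a per-participant seed makes assignment reproducible."""
    rng = random.Random(seed)
    name = rng.choice(sorted(CONDITIONS))
    return name, CONDITIONS[name]

name, system_prompt = assign_condition(seed=42)
```

In Gricea this assignment would live in a randomization node, with the chosen condition stored as a variable that downstream chat tasks reference when constructing the system prompt.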