
feat: order questions and answers#28

Draft
cbueth wants to merge 1 commit into main from
feat/parse-and-store-answers-to-comprehension-questions

Conversation


@cbueth cbueth commented Dec 9, 2025

Add helpers to get the per-trial question order and to build a table for answers.

Input and Output

Input per session:

  • /eye-tracking-sessions/006_SQ_CH_1_ET1/logfiles/question_order_versions.csv
    • Columns include: question_order_version, local_question_1, local_question_2, bridging_question_1, bridging_question_2, global_question_1, global_question_2
  • A mapping from trial -> stimulus (e.g., trial 1 -> Arg_PISACowsMilk_10).
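As a rough sketch of the input side (the loader name and the trial-map literal below are hypothetical; only the file path and column names come from this PR):

```python
import csv
from pathlib import Path


def load_question_orders(csv_path: Path) -> list[dict]:
    """Read one session's question_order_versions.csv into a list of dicts,
    keyed by the columns listed above (question_order_version,
    local_question_1, ..., global_question_2)."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))


# Hypothetical trial -> stimulus mapping, using the example from the PR text.
stimuli_trial_map = {1: "Arg_PISACowsMilk_10"}
```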

Output per session:

  • results/answers.csv
    • One row per asked question (6 per trial)
    • Columns: trial, stimulus, slot, order_code, question_id, preliminary_dir, preliminary_ts, final_dir, final_ts
    • question_id format: <stimulus_numeric_id><middle><order_code>
      • <middle> is 2 for PISA texts (stimulus name contains "PISA"), else 1
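A minimal sketch of that ID rule, assuming the numeric stimulus ID is the trailing number of the stimulus name (e.g. Arg_PISACowsMilk_10 -> 10); the helper name mirrors construct_question_id from parser.py below, but the exact signature is an assumption:

```python
import re


def construct_question_id(stimulus_name: str, order_code: str) -> str:
    # Trailing number of the stimulus name, e.g. "Arg_PISACowsMilk_10" -> "10"
    # (assumption: the numeric ID is always the final underscore-separated part).
    numeric_id = re.search(r"(\d+)$", stimulus_name).group(1)
    # <middle> is 2 for PISA texts, else 1.
    middle = "2" if "PISA" in stimulus_name else "1"
    return f"{numeric_id}{middle}{order_code}"
```

For example, `construct_question_id("Arg_PISACowsMilk_10", "3")` yields `"1023"`.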

Status: Question order and question_id construction are implemented and tested. Parsing of preliminary/final answers from the logfiles is not. What is the logfile format? Is it the same as this one:
https://github.com/theDebbister/multipleye-preprocessing/blob/4da1819ccf6ab8ff396dac878d966d258dc738c1/tests/MultiplEYE_toy_X_x_1_1/eye-tracking-sessions/001_TOY_X_1_ET1/logfiles/question_order_versions.csv#L1-L5

Implementation

New module: preprocessing/answers/

  • parser.py:
    • parse_question_order(csv_path) reads the session CSV and adds a trial column (is this a fair choice?).
    • construct_question_id(stimulus_name, order_code) builds IDs as <stimulus_numeric><middle><order_code>, where <middle> is 2 when the stimulus name contains PISA, else 1.
  • collect.py:
    • collect_session_answers(question_order_csv, stimuli_trial_map, out_path=None) creates 6 rows per trial with columns: trial, stimulus, slot, order_code, question_id, preliminary_dir, preliminary_ts, final_dir, final_ts.
    • Default output path when out_path=None: <session_dir>/results/answers.csv (session_dir is parent of logfiles). To be changed.
  • io.py: CSV read/write helpers.
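Putting the pieces together, a sketch of what collect_session_answers could look like. Two loud assumptions: each CSV row corresponds to one trial in presentation order, and the answer columns stay empty until logfile parsing lands; the helper _question_id restates the ID rule from parser.py.

```python
import csv
import re
from pathlib import Path

# The six question slots per trial, taken from the input column list above.
QUESTION_SLOTS = [
    "local_question_1", "local_question_2",
    "bridging_question_1", "bridging_question_2",
    "global_question_1", "global_question_2",
]


def _question_id(stimulus_name: str, order_code: str) -> str:
    # <stimulus_numeric_id><middle><order_code>; middle is 2 for PISA texts.
    numeric_id = re.search(r"(\d+)$", stimulus_name).group(1)
    middle = "2" if "PISA" in stimulus_name else "1"
    return f"{numeric_id}{middle}{order_code}"


def collect_session_answers(question_order_csv, stimuli_trial_map, out_path=None):
    rows = []
    with open(question_order_csv, newline="", encoding="utf-8") as f:
        # Assumption: one CSV row per trial, in presentation order.
        for trial, record in enumerate(csv.DictReader(f), start=1):
            stimulus = stimuli_trial_map[trial]
            for slot in QUESTION_SLOTS:
                order_code = record[slot]
                rows.append({
                    "trial": trial, "stimulus": stimulus, "slot": slot,
                    "order_code": order_code,
                    "question_id": _question_id(stimulus, order_code),
                    # Answer parsing is not implemented yet (see Status above).
                    "preliminary_dir": "", "preliminary_ts": "",
                    "final_dir": "", "final_ts": "",
                })
    if out_path is None:
        # <session_dir>/results/answers.csv; session_dir is parent of logfiles/.
        out_path = Path(question_order_csv).parent.parent / "results" / "answers.csv"
    out_path = Path(out_path)
    out_path.parent.mkdir(parents=True, exist_ok=True)
    with open(out_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
        writer.writeheader()
        writer.writerows(rows)
    return rows
```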

Tests

pytest -q tests/unit/preprocessing/answers

Missing

  • preliminary_dir: The first directional key a participant presses on a question screen (one of up, down, left, right), indicating their initial choice among the four positioned answer options.

  • preliminary_ts: The timestamp of that first (preliminary) keypress.

  • final_dir: The directional key that is ultimately submitted as the answer (after any changes of mind), again one of up/down/left/right.

  • final_ts: The timestamp when the final answer is committed.

  • Compute per-question reaction times and detect revisions:

    • maybe rt_initial = preliminary_ts - question_onset_ts
    • maybe rt_final = final_ts - question_onset_ts
    • bool changed = (preliminary_dir != final_dir)
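The proposed derived fields above could be sketched as follows; the function name is hypothetical, and it assumes all timestamps share one numeric clock (e.g. seconds since session start):

```python
def derive_question_timing(preliminary_ts, final_ts, question_onset_ts,
                           preliminary_dir, final_dir):
    """Per-question reaction times and revision flag, per the bullets above."""
    return {
        # Time from question onset to the first (preliminary) keypress.
        "rt_initial": preliminary_ts - question_onset_ts,
        # Time from question onset to the committed (final) answer.
        "rt_final": final_ts - question_onset_ts,
        # Whether the participant changed their mind before committing.
        "changed": preliminary_dir != final_dir,
    }
```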

@cbueth cbueth linked an issue Dec 9, 2025 that may be closed by this pull request
Signed-off-by: Carlson Büth <commit@cbueth.de>
@cbueth cbueth force-pushed the feat/parse-and-store-answers-to-comprehension-questions branch from 8b9234b to 03b7b5e on December 9, 2025, 12:16
@cbueth cbueth self-assigned this Dec 9, 2025
@cbueth cbueth added the enhancement New feature or request label Dec 9, 2025
@theDebbister

I have added a few comments to issue #7 based on our discussion.


Successfully merging this pull request may close these issues.

Parse and store answers to comprehension questions
