An implementation in PsychoPy of the gradual onset continuous performance task (GradCPT; originally developed by Mike Esterman and colleagues).
A (very slow) demo of the GradCPT task
This is a basic template of the GradCPT task with interleaved experience sampling probes. The instructions assume you are using the builder view in PsychoPy but they should be easy enough to apply in the coder view as well. Many of the parameters are customizable by modifying a few lines of code that I've highlighted (see Configuration below).
In the future I'd like to make a template of GradCPT without interspersed experience sampling probes. Those with some prior experience with Python and PsychoPy could accomplish this by removing the loops, routines, and all downstream variables that depend on the experience sampling procedure.
- Install PsychoPy (see instructions).
- Clone this repository (`git clone https://github.com/dbraun31/gradcptpy.git`) or download as zip and extract.
- Open `gradcptpy.psyexp`.
- Modify or execute the program.
Data is saved in two directories:

- `data`: The directory generated by PsychoPy containing all the data.
- `neat_data`: A directory generated by the custom code that stores 'neatly' formatted data. I highly suggest working with this neat data.
Assigning responses to trials in GradCPT is complex (see methods here), and I chose to implement this in an ad hoc R script rather than try to do it in PsychoPy. This means that, in order to assign responses to data, you'll need to follow these steps:

- Install R (here).
- Install `dplyr`.
  - From an R command line, run `install.packages('dplyr')`.
- From a Bash-like command line somewhere inside the repository, execute the `ProcessData.r` script in the `process_data` directory.
  - Eg, when in the root directory of the repository (ie, `gradcptpy`), run `Rscript process_data/ProcessData.r`.
- By default, this script will assign responses to and concatenate all GradCPT data found in the `neat_data` directory. You can adjust this to process only one `.csv` data file by passing the file as an argument.
  - Eg, `Rscript process_data/ProcessData.r neat_data/my_single_file.csv`.

The `ProcessData.r` script will save the output data into the `formatted_data` directory.
The first routine, `estimate_frame_rate`, contains two custom code blocks that much of the experiment depends on. The most important of these for purposes of configuration is the `global_vars` code block. The beginning of this block contains global parameters of GradCPT and experience sampling that can be easily adjusted by updating the corresponding values; a rough sketch of this block appears after the parameter lists below.
GradCPT parameters:

- `N_trials`: Total number of GradCPT trials to be performed per block.
- `N_blocks`: Total number of GradCPT blocks to be performed in the experiment.
- `transition_time`: Total time in seconds between 100% coherence of two successive GradCPT image stimuli.
- `next_es_min`: The minimum number of GradCPT trials before the onset of the next experience sampling probe.
- `next_es_max`: The maximum number of GradCPT trials before the onset of the next experience sampling probe.
- `prop_dom`: Proportion of dominant (city scene) stimuli relative to total stimuli (default is .9).
Experience sampling parameters:

- `es_isi_time`: Total time in seconds between the offset of one experience sampling item and the onset of the next.
- `es_items`: A Python list of dictionaries containing the information for each experience sampling item. You can modify this list to include your own items.
  - Note: The presentation order of these items is pseudo-randomized within a probe such that the last item is always presented last. This was suitable in our lab's experiments, as we ask about the participant's level of confidence in their responses after making all other responses. I'll likely add this as a changeable parameter in the future.
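
To make the configuration concrete, below is a minimal sketch of what the top of the `global_vars` code block might look like. The variable names follow the parameter lists above, but the specific values and the dictionary keys inside `es_items` are assumptions for illustration, not the defaults shipped with the experiment.

```python
# Hypothetical sketch of the configurable section at the top of the global_vars
# code block. Variable names follow the parameter lists above; the values and
# the dictionary keys inside es_items are illustrative assumptions only.

# GradCPT parameters
N_trials = 300          # GradCPT trials per block
N_blocks = 4            # GradCPT blocks in the experiment
transition_time = 0.8   # seconds between 100% coherence of successive stimuli
next_es_min = 30        # minimum trials before the next experience sampling probe
next_es_max = 60        # maximum trials before the next experience sampling probe
prop_dom = 0.9          # proportion of dominant (city scene) stimuli

# Experience sampling parameters
es_isi_time = 0.5       # seconds between offset of one probe item and onset of the next
es_items = [
    {'name': 'attention',
     'text': 'To what extent was your attention on the task just now?',
     'anchors': ['Completely off task', 'Completely on task']},
    {'name': 'confidence',
     'text': 'How confident are you in the responses you just gave?',
     'anchors': ['Not at all confident', 'Extremely confident']},  # presented last
]
```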
Similar to the original version of GradCPT programmed in Matlab, this version transitions between stimuli by pixel-by-pixel linear interpolation over the transition period. This is largely accomplished thanks to NumPy's quite elegant `linspace` function (eg, `np.linspace(image1, image2, transition_steps)`). The stimuli are `(256, 256)` arrays of greyscale values. `transition_steps` is the number of transition frames to generate, and it is determined by both the desired transition time between images and the monitor's refresh rate. So `np.linspace` returns a `(transition_steps, 256, 256)` array of images that transition from one stimulus to another, and one image from this array is sampled on each frame to simulate the transition.
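
As a standalone illustration of this interpolation (outside of PsychoPy), the snippet below assumes a 60 Hz refresh rate and uses random arrays in place of the actual scene stimuli.

```python
# Standalone illustration of the pixel-by-pixel linear interpolation described
# above. The random images and the 60 Hz refresh rate are stand-ins; in the
# experiment the images come from the stimulus set and the refresh rate is
# estimated from the monitor.
import numpy as np

refresh_rate = 60.0                    # Hz (assumed)
transition_time = 0.8                  # seconds between 100% coherence points
transition_steps = int(round(transition_time * refresh_rate))

image1 = np.random.rand(256, 256)      # greyscale stimulus at trial t-1
image2 = np.random.rand(256, 256)      # greyscale stimulus at trial t

# linspace broadcasts over the arrays, returning a (transition_steps, 256, 256)
# stack of frames that fade linearly from image1 to image2
frames = np.linspace(image1, image2, transition_steps)

print(frames.shape)                    # (48, 256, 256) with the values above
# On each screen refresh, the frame for the current step would be drawn,
# e.g. by assigning frames[frame_index] to the PsychoPy image stimulus.
```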
The keyboard is queried once per frame to assess whether a response was made. If there was a response, the program logs the state of the experiment (including the response key and the response time relative to when the stimulus on trial t-1 was at 100% coherence). If there was no keyboard response between the times when two consecutive images (at trials t-1 and t) were at 100% coherence, the program logs this as an omission and saves the state of the experiment at the time the stimulus on trial t was at 100% coherence.
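
Below is a simplified, hypothetical sketch of that per-frame logic. It is not the code block used in the experiment; the `log_response` helper and the trial bookkeeping are placeholders for illustration.

```python
# Simplified, hypothetical sketch of the per-frame response check described
# above. The real custom code block differs in its details (clock resets,
# what gets logged, etc.); log_response() is a placeholder helper.
from psychopy.hardware import keyboard

kb = keyboard.Keyboard()
responded_since_last_coherence = False

def log_response(key, rt, omission=False):
    print({'key': key, 'rt': rt, 'omission': omission})

def on_frame():
    """Called once per screen refresh during a GradCPT trial."""
    global responded_since_last_coherence
    keys = kb.getKeys(waitRelease=False)
    if keys and not responded_since_last_coherence:
        first = keys[0]  # only the first keyboard response per frame is parsed
        # rt is relative to when the trial t-1 stimulus was at 100% coherence
        log_response(key=first.name, rt=first.rt)
        responded_since_last_coherence = True

def on_full_coherence():
    """Called each time a stimulus reaches 100% coherence (i.e., trial t)."""
    global responded_since_last_coherence
    if not responded_since_last_coherence:
        log_response(key=None, rt=None, omission=True)
    responded_since_last_coherence = False
    kb.clock.reset()  # subsequent RTs are measured from this coherence point
```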
- During experience sampling items, I chose to initialize the mouse at a slightly jittered position at the bottom of the screen to prevent a bias to respond similarly to the previous trial (a rough sketch of this appears after this list).
- GradCPT response collection will only parse the first keyboard response per frame.
- When testing the experiment with an 800 ms transition speed, the measured transition speed averaged about 817 ms (probably an extra frame somewhere; SD = 0.001).
- I implemented a 5 s countdown after responses to experience sampling items and after each GradCPT block to prepare participants for the onset of the next GradCPT stimuli.
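
For the jittered mouse initialization mentioned in the first note, a rough sketch might look like the following; the window units, vertical position, and jitter range are assumptions rather than the values used in the experiment.

```python
# Hypothetical sketch of the jittered mouse initialization mentioned above;
# window units, jitter range, and vertical position are assumptions.
import random
from psychopy import visual, event

win = visual.Window(units='norm')
mouse = event.Mouse(win=win)

# Small horizontal jitter near the bottom of the screen before each probe item
jitter_x = random.uniform(-0.1, 0.1)
mouse.setPos(newPos=(jitter_x, -0.9))
```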
Contact Dave Braun with any questions: dave.braun@drexel.edu.
I'm a one-man show right now; feel free to make suggestions or open pull requests. You can open an issue if you notice a bug.