
Infant-controlled trials with help from parents #143

Closed
shariliu opened this issue Apr 29, 2020 · 13 comments
Labels: Participant, Researcher, XS about two days of work

Comments

@shariliu

Most people who run violation-of-expectation studies on infants would love to have infant-controlled trials on Lookit - that is, an option to make the trial last longer or shorter depending on how long the infant is interested in looking.

Apart from real-time eyetracking, I was thinking that we could get parents to help us out. For instance, suppose that you are ok with having the parent look at the screen during habituation (because your main hypothesis is about infants' attention during test trials). We could instruct parents to press a key when their baby is clearly disengaged for a few seconds to stop the trial and start the next one. The duration of the trials could then be used to calculate a habituation criterion.

There are definitely some issues to work through here: What are objective criteria for parents pressing the key? What happens during test trials, when you'd rather have the parent turn around? Instead of involving parents, we have also considered presenting habituation events that are constant in length but get shorter and shorter based on curves of habituation from experiments conducted in the lab.

@kimberscott

This sounds like a really good idea, but yes, will take some planning (including ideally a bit of piloting with families just to see what's doable).

I wonder if it'd be easier to have the parent simply hold down the space bar while the baby is looking, and implement the appropriate thresholds on our end (e.g., wait for N seconds total of disengagement or first continuous N-second lookaway).
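Either threshold could later be computed from timestamped key events. Here's a minimal sketch of that logic (illustrative only; `lookawayStats` and the event shape are invented here, not part of Lookit):

```javascript
// Given timestamped key events from a parent holding a key while the baby
// looks, compute (a) total lookaway time and (b) longest continuous lookaway.
// Events are {time, down} pairs; `down` is true when the key goes down.
function lookawayStats(events, trialEnd) {
    let totalLookaway = 0;
    let longestLookaway = 0;
    let lookawayStart = 0; // trial starts with the key up (baby not yet looking)
    let keyDown = false;
    for (const e of events) {
        if (e.down && !keyDown) {
            // Look begins: close the current lookaway interval.
            const gap = e.time - lookawayStart;
            totalLookaway += gap;
            longestLookaway = Math.max(longestLookaway, gap);
            keyDown = true;
        } else if (!e.down && keyDown) {
            // Look ends: open a new lookaway interval.
            lookawayStart = e.time;
            keyDown = false;
        }
    }
    if (!keyDown) {
        // Trial ended mid-lookaway; count the final interval.
        const gap = trialEnd - lookawayStart;
        totalLookaway += gap;
        longestLookaway = Math.max(longestLookaway, gap);
    }
    return { totalLookaway, longestLookaway };
}
```

With this shape, either criterion (total vs. first continuous N-second lookaway) is just a different comparison against the same event log, so the choice could be deferred to analysis.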

Either way you'd probably want to also incorporate some training with example videos and feedback, which could be standardized for use by multiple labs.

If you're interested in taking the lead on this and want to start with a simple mockup (having researchers/friends with kids try out different methods, to see what best-case parent coding looks like), let me know and I can set up a frame to log e.g. space bar presses so you could compare parent vs. lab coding.

@shariliu
Author

shariliu commented May 1, 2020

I like the idea of having the parent roughly code whether the infant is attending to the screen in real time, throughout the entire trial. I'd be happy to take the lead and tinker with this with help from friends.

The goal is just to get a coarse idea of whether the baby is completely done with a trial so that we don't show them events longer than we need to, but I'd also be curious to know what the best-case reliability would be.

The main thing that gives me pause is coder influence - if we find an effect, how can we be sure that the parent's knowledge of what the baby was seeing didn't influence their coding?

@shariliu
Author

Update: there's lots of excitement about this feature from our lab, so I think it's definitely worth devoting some time to it. If you're still willing to set up a frame that logs key presses (maybe the right arrow key? that's off to the side and accessible if you're holding a baby), sums them up, and ends a trial after xx seconds total or yy seconds of no input, that would be really great. I'm happy to collect data from researchers and friends to assess feasibility and reliability. Let me know how I can help.

@kimberscott

Terrific! I can probably squeeze this in with some other work adding to frames (#72) but depending on how that goes it might have to be after launch.

Can you clarify how you'd ideally like the lookaway thresholds to work? Keeping some flexibility is fine so you can try out different approaches; what I mean is a description at the level of one of the (different, arbitrary) options below:

(A) Right arrow key will be held down by parent when child is looking. Researcher can specify any of the following (all are optional; if not provided that criterion is not used).

  • maximum trial length (after this, trial moves on regardless; timed from start of first look, not stimulus presentation)
  • maximum non-look at start of trial before trial moves on
  • maximum continuous lookaway after first look before trial moves on
  • maximum total lookaway (including time before first look) before trial moves on
  • keycode (other than F1/ctrl-x) that parent can press to move on manually

(B) Some key will be held down by parent when child is not looking. Researcher specifies each of the following:

  • type of criterion: total lookaway (not counting time before first look), continuous lookaway (not counting time before first look)
  • threshold value for the above criterion
  • maximum total trial length from start of stimulus presentation
  • keycode for the key parent should hold down when the child is not looking
  • keycode (other than F1/ctrl-x) that parent can press to move on manually (or none)

Obviously this will just be a starting point to play around with, but there are a bunch of fiddly bits here that may be worth starting to think about.

@shariliu
Author

Option (A) is more similar to what we do in the lab, but could be harder for parents than for us, because they're holding a squirmy, active baby while also coding their attention. Parents will have to sit far enough from the screen to keep the baby from futzing around with the keyboard, but close enough that they themselves can reach it.

For these reasons, Option (B) might be less reliable but easier for the parent. I liked the parameters you added, and I also suggest having an optional way of reporting a coding mistake or bad trial (e.g. parent accidentally held key for too long, the cat walked across the keyboard). This doesn't need to be a keypress necessarily.

Other ideas from our lab about the setup:

  • having the baby sit in a high chair and the parent code from beside/behind them
  • using a mouse instead of a keyboard (not everyone has one, but would allow parent to sit behind the infant, for example)
  • including a metronome track, and having the parent end the trial if the baby disengages from the screen for x beats in a row

@kimberscott

Cool! Let me know once you guys flesh out how you'd want it to work from a technical standpoint. (Or let me know if you happen to want exactly (B) plus a mouse option, and in that case how reporting a bad trial would work.)

@shariliu
Author

shariliu commented Jun 9, 2020

Sorry for the delay - here are my thoughts about what (B) would look like (Kim, your example captures most of what we wanted, so I'm copying most of what you said here, with minor edits):

(B) Some key will be held down by parent when child is not looking. Researcher specifies each of the following:

  • type of criterion: total lookaway (not counting time before first look), continuous lookaway (not counting time before first look)
  • threshold value for the above criterion
  • maximum total trial length from start of stimulus presentation
  • keyboard or mouse input
  • if keyboard, keycode for the key parent should hold down when the child is not looking
  • keycode (other than F1/ctrl-x) that parent can press to move on manually (or none)
  • optional: keycode or mouse input for marking a 'bad trial'. After every trial, before the next one begins, parents have x seconds (with a visible countdown) to indicate whether there was a major distraction during that trial, or whether they made a coding mistake.
  • bad trial parameters: keycode/mouse press (could be inherited from the above parameters), video and audio for countdown, whether the data from bad trials should be discarded for purposes of calculating habituation, whether the same trial should repeat after a bad trial
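The bad-trial window above reduces to a simple check on keypress timestamps. A hypothetical sketch (`trialFlaggedBad` and its arguments are invented for illustration, not proposed frame parameters):

```javascript
// Flag a trial as bad if the parent pressed the designated key at any point
// during the post-trial reporting window of `windowSeconds`.
function trialFlaggedBad(pressTimes, trialEndTime, windowSeconds) {
    return pressTimes.some(
        (t) => t >= trialEndTime && t <= trialEndTime + windowSeconds
    );
}
```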

It may also be worth asking on Slack what people want from this feature. I will do that now, and update this issue with what I learn.

@kimberscott

For the bad trial parameter "whether the data from bad trials should be discarded for purposes of calculating habituation" - is the idea also to make a decision about e.g. whether to move on to another block of trials based on the sequence of looking times? If so, we'd need to know a bit more about the logic for moving on and which parameters researchers are responsible for providing.

@kimberscott

kimberscott commented Sep 8, 2020

Ok, a start is here: https://github.com/lookit/ember-lookit-frameplayer/commits/feature/infant-controlled-trials (copy the most recent commit ID to use it on Lookit). I added a mixin that can be used to convert frames to parent-controlled versions, and created parent-controlled frames exp-lookit-video-infant-control and exp-lookit-images-audio-infant-control.

Example usage; more details are in the frame docs.

    "image-3": {
        "kind": "exp-lookit-images-audio-infant-control",
        "lookawayKey": "p",
        "lookawayType": "total",
        "lookawayThreshold": 2,
        "lookawayTone": "noise",
        "lookawayToneVolume": 0.25,
        "endTrialKey": "q",
        "audio": "wheresremy",
        "images": [
            {
                "id": "remy",
                "src": "wheres_remy.jpg",
                "position": "fill"
            }
        ],
        "baseDir": "https://www.mit.edu/~kimscott/placeholderstimuli/",
        "audioTypes": [
            "mp3",
            "ogg"
        ],
        "autoProceed": true,
        "doRecording": false,
        "durationSeconds": 4,
        "parentTextBlock": {
            "text": "Some explanatory text for parents",
            "title": "For parents"
        },
        "showProgressBar": false
    }

and

    "play-video-twice": {
        "kind": "exp-lookit-video-infant-control",
        "lookawayKey": "p",
        "lookawayType": "total",
        "lookawayThreshold": 2,
        "lookawayTone": "noise",
        "lookawayToneVolume": 0.25,
        "endTrialKey": "q",
        "audio": {
            "loop": false,
            "source": "peekaboo"
        },
        "video": {
            "top": 10,
            "left": 25,
            "loop": true,
            "width": 50,
            "source": "cropped_apple"
        },
        "backgroundColor": "white",
        "autoProceed": true,
        "parentTextBlock": {
            "text": "If your child needs a break, just press X to pause!"
        },
        "requiredDuration": 0,
        "requireAudioCount": 0,
        "requireVideoCount": 2,
        "restartAfterPause": true,
        "pauseKey": "x",
        "pauseKeyDescription": "X",
        "pauseAudio": "pause",
        "pauseVideo": "attentiongrabber",
        "pauseText": "(You'll have a moment to turn around again.)",
        "unpauseAudio": "return_after_pause",
        "doRecording": true,
        "baseDir": "https://www.mit.edu/~kimscott/placeholderstimuli/",
        "audioTypes": [
            "ogg",
            "mp3"
        ],
        "videoTypes": [
            "webm",
            "mp4"
        ]
    }

At least for an initial "training" phase I thought it would probably be helpful to have some indicator of the coding "working." Then parents could try it out with some confidence that they were "doing something" when they pressed the button. (I'm now realizing that you could literally "train" them by displaying a video of a baby looking at a screen from about the angle they'll be watching from, and get a standardized check on parent performance ahead of time!) As a starting point you can choose a tone, noise, or silence while the lookawayKey is being held down.

I still need to add the "bad trial" entry functionality. I'm going to try to combine this with a mixin that standardizes the "establishing connection..." and "uploading..." placeholders; if you're doing an infant-controlled trial you could provide audio/video with a countdown/instructions and use whether the specified key was pressed to detect whether the trial was bad and repeat if desired.

@shariliu
Author

shariliu commented Sep 16, 2020 via email

@kimberscott

Sounds good on looking time. I've set up the first (from beginning of trial or first look, whichever is later) - see https://lookit.github.io/lookit-frameplayer-docs/classes/Exp-lookit-video-infant-control.html#attr_totalLookingTime. Could you let me know if/when you have a specific use case for the second, and I'll set that up based on the exact requirements? (Just suspect it may not be worth trying to cover all our bases yet, versus seeing a few concrete use cases and going from there.)
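The measure described above (looking time summed over parent-reported look intervals, starting from the first look and clipped to the trial's end) could be sketched as follows. This `totalLookingTime` function is an illustrative stand-in, not the frameplayer's actual implementation:

```javascript
// Sum the durations of parent-reported look intervals {start, end}
// (in seconds), clipping any look that outlasts the trial. Time before
// the first look contributes nothing, since it falls in no interval.
function totalLookingTime(looks, trialEnd) {
    let total = 0;
    for (const { start, end } of looks) {
        total += Math.min(end, trialEnd) - start;
    }
    return total;
}
```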

Will plan to set up a visual guide as another indicator option - a border makes sense!

@shariliu
Author

Thanks so much for your work on this - I just tried it out and I love the soft white-noise sound. If you want you can use this video for the second use case (it's a version of the Woodward study, where an agent looks to 2 objects, and then moves towards one of them, then bounces happily at the end). Here, some researchers may want to measure looking time right after this event (in my video, at 8s), provided that the baby was attending for some min # of seconds first. So maybe the parent can provide the same inputs, but only the looking time after xxxx ms in the video gets saved in totalLookingTime?

But I'd say that this is secondary to getting the visual guide - that sounds super exciting and I can't wait to try!

@kimberscott kimberscott added the XS about two days of work label Jan 9, 2021
kimberscott pushed a commit that referenced this issue Jan 23, 2021
…-control frames; allow delaying measurement period relative to stimulus onset. Finishes addressing #143
@kimberscott

Visual indicator & option to delay the looking measurement added in v2.3.0! https://github.com/lookit/ember-lookit-frameplayer/releases/tag/v2.3.0

You can just update the code to the latest version to use these changes.
