Infant-controlled trials with help from parents #143
Shari: Most people who run violation-of-expectation studies on infants would love to have infant-controlled trials on Lookit - that is, an option to make the trial last longer or shorter depending on how long the infant is interested in looking.
Apart from real-time eye tracking, I was thinking that we could get parents to help us out. For instance, suppose that you are OK with having the parent look at the screen during habituation (because your main hypothesis is about infants' attention during test trials). We could instruct parents to press a key when their baby has clearly been disengaged for a few seconds, to stop the trial and start the new one. The durations of the trials could then be used to calculate a habituation criterion.
There are definitely some issues to work through here: What are objective criteria for parents pressing the key? What happens during test trials, when you'd rather have the parent turn around? Instead of involving parents, we have also considered presenting habituation events that are constant in length but get shorter and shorter, based on habituation curves from experiments conducted in the lab.

Comments
Kim: This sounds like a really good idea, but yes, it will take some planning (including, ideally, a bit of piloting with families just to see what's doable). I wonder if it'd be easier to have the parent simply hold down the space bar while the baby is looking, and implement the appropriate thresholds on our end (e.g., wait for N seconds of disengagement in total, or for the first continuous N-second lookaway). Either way, you'd probably also want to incorporate some training with example videos and feedback, which could be standardized for use by multiple labs. If you're interested in taking the lead on this and want to start with a simple mockup to have researchers/friends with kids try out different methods and see what parent coding looks like best-case, let me know and I can set up a frame to log e.g. space bar presses so you could compare parent vs. lab coding.
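To make those two threshold rules concrete, here is a minimal sketch - illustrative only, not Lookit code, with all names invented - of how "N seconds of lookaway in total" versus "first continuous N-second lookaway" could be evaluated over parent-coded intervals:

```js
// Illustrative only: decide whether a trial should end, given parent-coded
// intervals and one or both lookaway criteria (in seconds).
function trialShouldEnd(intervals, { totalLookawaySec, continuousLookawaySec }) {
  let total = 0; // cumulative lookaway time so far
  let run = 0;   // length of the current continuous lookaway
  for (const { lookingAway, durationSec } of intervals) {
    if (lookingAway) {
      total += durationSec;
      run += durationSec;
      if (totalLookawaySec !== undefined && total >= totalLookawaySec) return true;
      if (continuousLookawaySec !== undefined && run >= continuousLookawaySec) return true;
    } else {
      run = 0; // looking back at the screen resets only the continuous criterion
    }
  }
  return false;
}

// e.g. trialShouldEnd(codedIntervals, { continuousLookawaySec: 2 })
```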
Shari: I like the idea of having the parent roughly code whether the infant is attending to the screen in real time, throughout the entire trial. I'd be happy to take the lead and tinker with this with help from friends. The goal is just to get a coarse idea of whether the baby is completely done with a trial, so that we don't show them events longer than we need to, but I'd also be curious to know what the best-case reliability would be. The main thing that gives me pause is coder influence: if we find an effect, how can we be sure that the parent's knowledge of what the baby was seeing didn't contribute to their data?
Shari: Update: there's lots of excitement about this feature in our lab, so I think it's definitely worth devoting some time to it. If you're still willing to set up a frame that logs key presses (maybe the right arrow key? that's off to the side and accessible if you're holding a baby), sums them up, and ends a trial after xx seconds total or yy seconds of no input, that would be really great. I'm happy to collect data from researchers and friends to assess feasibility and reliability. Let me know how I can help.
Kim: Terrific! I can probably squeeze this in with some other work adding to frames (#72), but depending on how that goes it might have to be after launch. Can you clarify how you'd ideally like the lookaway thresholds to work? Keeping some flexibility is fine so you can try out different approaches; what I mean is a description at the level of one of the (different, arbitrary) descriptions below:
(A) The right arrow key will be held down by the parent when the child is looking. The researcher can specify any of the following (all are optional; if not provided, that criterion is not used).
(B) Some key will be held down by the parent when the child is not looking. The researcher specifies each of the following:
Obviously this will just be a starting point to play around with, but there are a bunch of fiddly bits here that may be worth starting to think about.
Shari: Option (A) is more similar to what we do in the lab, but it could be harder for parents than for us, because they're holding a squirmy, active baby at the same time as coding their attention. Parents will have to sit far enough away from the screen to keep the baby from futzing around with the keyboard, but close enough that they themselves can reach it. For these reasons, Option (B) might be less reliable but easier for the parent. I liked the parameters you added, and I also suggest having an optional way of reporting a coding mistake or bad trial (e.g. the parent accidentally held the key too long, or the cat walked across the keyboard). This doesn't necessarily need to be a keypress. Other ideas from our lab about the setup:
Kim: Cool! Let me know once you all flesh out how you'd want it to work from a technical standpoint. (Or if you happen to want exactly (B) but with a mouse option(?), and in that case how reporting a bad trial would work.)
Shari: Sorry for the delay - here are my thoughts about what (B) would look like. (Kim, your example captures most of what we wanted, so I'm copying most of what you said, with minor edits.) (B) Some key will be held down by the parent when the child is not looking. The researcher specifies each of the following:
It may also be worth asking on Slack what people want from this feature. I will do that now and update this issue with what I learn.
Kim: For the bad-trial parameter "whether the data from bad trials should be discarded for purposes of calculating habituation": is the idea also to be making a decision about, e.g., whether to move on to another block of trials based on the sequence of looking times? If so, we'd need to know a bit more about the logic for moving on and what parameters researchers are responsible for providing.
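For concreteness, here is one conventional moving-on rule from lab habituation studies - assumed here purely for illustration, not anything Lookit defines - with bad trials simply dropped before the rule is applied:

```js
// A common lab convention (assumed for illustration): habituation ends once the
// mean looking time over the last `windowSize` trials falls below
// `criterionRatio` times the mean over the first `windowSize` trials.
// Trials flagged as bad are excluded before the rule is applied.
function reachedHabituation(trials, windowSize = 3, criterionRatio = 0.5) {
  const times = trials.filter((t) => !t.bad).map((t) => t.lookingTimeSec);
  if (times.length < 2 * windowSize) return false;
  const mean = (xs) => xs.reduce((a, b) => a + b, 0) / xs.length;
  return mean(times.slice(-windowSize)) < criterionRatio * mean(times.slice(0, windowSize));
}
```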
Kim: OK, a start is here: https://github.com/lookit/ember-lookit-frameplayer/commits/feature/infant-controlled-trials (you can copy the most recent commit ID to use on Lookit). I added a mixin that can be used to convert frames to parent-controlled versions, and created the parent-controlled frames exp-lookit-video-infant-control and exp-lookit-images-audio-infant-control. Example usage (more details in the frame docs):

```json
"image-3": {
    "kind": "exp-lookit-images-audio-infant-control",
    "lookawayKey": "p",
    "lookawayType": "total",
    "lookawayThreshold": 2,
    "endTrialKey": "q",
    "audio": "wheresremy",
    "images": [
        {
            "id": "remy",
            "src": "wheres_remy.jpg",
            "position": "fill"
        }
    ],
    "baseDir": "https://www.mit.edu/~kimscott/placeholderstimuli/",
    "audioTypes": ["mp3", "ogg"],
    "autoProceed": true,
    "doRecording": false,
    "durationSeconds": 4,
    "parentTextBlock": {
        "text": "Some explanatory text for parents",
        "title": "For parents"
    },
    "showProgressBar": false
}
```

and

```json
"play-video-twice": {
    "kind": "exp-lookit-video-infant-control",
    "lookawayKey": "p",
    "lookawayType": "total",
    "lookawayThreshold": 2,
    "endTrialKey": "q",
    "audio": {
        "loop": false,
        "source": "peekaboo"
    },
    "video": {
        "top": 10,
        "left": 25,
        "loop": true,
        "width": 50,
        "source": "cropped_apple"
    },
    "backgroundColor": "white",
    "autoProceed": true,
    "parentTextBlock": {
        "text": "If your child needs a break, just press X to pause!"
    },
    "requiredDuration": 0,
    "requireAudioCount": 0,
    "requireVideoCount": 2,
    "restartAfterPause": true,
    "pauseKey": "x",
    "pauseKeyDescription": "X",
    "pauseAudio": "pause",
    "pauseVideo": "attentiongrabber",
    "pauseText": "(You'll have a moment to turn around again.)",
    "unpauseAudio": "return_after_pause",
    "doRecording": true,
    "baseDir": "https://www.mit.edu/~kimscott/placeholderstimuli/",
    "audioTypes": ["ogg", "mp3"],
    "videoTypes": ["webm", "mp4"]
}
```

Still to do:
- update docs (mixin, frames)
- track & store total looking time until criterion, and the way the trial ended (reached max time, ended by parent, ended by reaching criterion)
- add "bad trial" parameters & a chance to indicate a bad trial

@shariliu Some more questions upon actually setting this up:
- What would be the most useful/standard definition of looking time to store? I'm thinking time from trial start (or first look, if later) until reaching criterion, either including or not including brief lookaways during that time. The idea of precomputing and storing this is that you could relatively easily use it in a selectNextFrame function to decide, for instance, whether the baby has reached a habituation criterion.
- "Bad trial" entry: I'm going to try to combine this with a mixin that standardizes the "establishing connection..." and "uploading..." placeholders; if you're doing an infant-controlled trial, you could provide audio/video with a countdown/instructions, and use whether the specified key was pressed to detect whether the trial was bad and repeat it if desired.
- At least for an initial "training" phase, I suspect it'll be helpful to have some indicator of the coding "working" (e.g. a progress bar "filling up" in the corner of the screen, up to the lookaway threshold, or a low tone that sounds while the lookaway key is pressed). Then parents could try it out with some confidence that they were "doing something" when they pressed the button. (I'm now realizing that you could literally "train" them by displaying a video of a baby looking at a screen from about the angle they'll be watching from, and get a standardized check on parent performance ahead of time!) As a starting point, you can choose a tone, noise, or silence while the lookawayKey is held down.

Shari: Thank you so much, Kim, for setting this up - I'm really excited to try this out. It's on my list for tomorrow/Friday! To respond to your questions:
- Re: which looking-time value to store: I think that either (1) from the beginning of the trial / first look, or (2) from an exact moment in the trial (e.g., the outcome is revealed at 5400 ms in video x; measure looks after that point) would be really useful.
- Re: training: your idea of having the parent practice with visual/audio feedback, potentially using a video of another baby, is fantastic. I wonder whether the indicator would be distracting for the baby if we kept it throughout the study, both during practice and the real experiment (maybe it would be... but I think that having a guide throughout might actually be really helpful for novice infant coders (-: ). I imagine a visual guide might work well - e.g., if the baby's looking on, there's a thick green frame around their window view, and if the baby's looking off, the green frame goes away, becomes black/red, or something else.
More soon. Thank you so much again.
Kim: Sounds good on looking time. I've set up the first option (from the beginning of the trial or the first look, whichever is later) - see https://lookit.github.io/lookit-frameplayer-docs/classes/Exp-lookit-video-infant-control.html#attr_totalLookingTime. Could you let me know if/when you have a specific use case for the second, and I'll set that up based on the exact requirements? (I just suspect it may not be worth trying to cover all our bases yet, versus seeing a few concrete use cases and going from there.) I'll plan to set up a visual guide as another indicator option - a border makes sense!
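As a sketch of how the stored value might feed the habituation decision mentioned earlier - note that the selectNextFrame signature, the frame IDs, and the expData shape shown here are assumptions to be checked against the frame player docs:

```js
// Sketch only: skip ahead once stored totalLookingTime values from
// habituation trials meet a criterion. All IDs and shapes are assumed.
function selectNextFrame(frames, frameIndex, frameData, expData, sequence, child, pastSessions) {
  // Gather totalLookingTime from completed habituation trials (hypothetical IDs).
  const lookingTimes = sequence
    .filter((frameId) => frameId.includes('habituation'))
    .map((frameId) => (expData[frameId] || {}).totalLookingTime)
    .filter((t) => typeof t === 'number');

  // One conventional criterion: mean of the last 3 trials is below half
  // the mean of the first 3 (assumed here for illustration).
  const mean = (xs) => xs.reduce((a, b) => a + b, 0) / xs.length;
  const habituated =
    lookingTimes.length >= 6 &&
    mean(lookingTimes.slice(-3)) < 0.5 * mean(lookingTimes.slice(0, 3));

  // Jump target is hypothetical - it depends on the study structure.
  return habituated ? frames.length - 1 : frameIndex + 1;
}
```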
Shari: Thanks so much for your work on this - I just tried it out, and I love the soft white-noise sound. If you want, you can use this video for the second use case (it's a version of the Woodward study, where an agent looks at two objects, then moves toward one of them, then bounces happily at the end). Here, some researchers may want to measure looking time right after this event (in my video, at 8 s), provided that the baby was attending for some minimum number of seconds first. So maybe the parent can provide the same inputs, but only the looking time after xxxx ms in the video gets saved in totalLookingTime? But I'd say this is secondary to getting the visual guide - that sounds super exciting and I can't wait to try it!
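A minimal sketch of the computation being described, with invented names: count only the looking that occurs after the outcome is revealed at a given onset time:

```js
// Illustrative only: total looking (ms) after `onsetMs`, given coded look
// intervals [{ startMs, endMs }, ...] measured relative to stimulus onset.
function lookingAfterOnset(looks, onsetMs) {
  return looks
    .filter((look) => look.endMs > onsetMs)
    .reduce((sum, look) => sum + (look.endMs - Math.max(look.startMs, onsetMs)), 0);
}

// e.g. lookingAfterOnset(codedLooks, 8000) for the outcome at 8 s in the video above
```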
Referenced commit: "…-control frames; allow delaying measurement period relative to stimulus onset. Finishes addressing #143"
Kim: The visual indicator and the option to delay the looking measurement were added in v2.3.0! https://github.com/lookit/ember-lookit-frameplayer/releases/tag/v2.3.0 You can just update the code to the latest version to use these changes.