Posts: 382
Threads: 22
Joined: Sep 2020
Reputation: 2
05-12-2025, 11:51 AM
(This post was last modified: 05-12-2025, 01:13 PM by Ting.)
Hi Vivienne,
That's great to hear! Okay, we'll make the new update available on the regular production site later this week (probably Wednesday).
I actually found your use of the keypress response very creative. It should work as long as it's set up to accept only one key press: https://hub.findingfive.com/study-gramma.../#multiple. That way, the reaction time you get from that key press is measured from the start of the trial, since there's no previous key press to anchor it.
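Just to illustrate, a keypress response set up that way could look roughly like the snippet below; the keys and timeout here are only placeholder values, so adjust them to your design:
Code:
{
  "type": "keypress",
  "whitelist": [ "c", "m" ],
  "timeout": 10,
  "multiple": false
}
With "multiple" set to false, only one key press is accepted, and its RT is measured from the start of the trial.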
A choice response would work too, but it would involve quite a bit of a hack. Let me know if you'd rather go that route and I can walk you through it as well.
Posts: 8
Threads: 1
Joined: May 2025
Reputation: 0
Hi Ting,
That’s wonderful news, thank you so much! I’m really glad to hear the update will be available on the production site soon. It’s such a great improvement for visual consistency.
And thank you for the clarification regarding the keypress response, that makes a lot of sense! I actually already had "multiple": false set in my response definition, but maybe I’ve made a mistake elsewhere in the setup. I’ll share the current version of my keypress response below, just in case something looks off:
{
  "type": "keypress",
  "whitelist": [
    "c",
    "m"
  ],
  "instruction": "",
  "timeout": 10,
  "feedback": false,
  "multiple": false,
  "scoring": {
    "type": "exact_match",
    "target": "{{correct_key}}"
  }
}
This is what I’ve been using, and ideally I’d like to stick with the keypress response since it seems to be the cleaner solution. Do you see anything here that might explain why the RTs aren’t behaving as expected? So far it hasn’t been working, as far as I can tell.
Of course, I’ll also double-check with my supervisor to make sure she’s fine with me using the keypress response for this kind of design, but from my side it seems like the best fit.
One more question: so far, after running the experiment and looking at the results, I haven’t been able to clearly see whether a participant’s response was correct or incorrect, or whether they pressed “match” or “mismatch” in each case. Is there a way I could add something to make that more transparent in the results output? That would be incredibly helpful for analysis later on.
Thanks again for your time and support, I really appreciate it!
Best regards,
Vivienne
Posts: 382
Threads: 22
Joined: Sep 2020
Reputation: 2
05-12-2025, 09:25 PM
(This post was last modified: 05-12-2025, 09:31 PM by Ting.)
Hello Vivienne,
There are several things to clarify here:
1. Some of the properties set in your experiment aren't part of FindingFive’s study grammar. It looks like they may have come from an AI tool like ChatGPT, which can sometimes invent features that don't exist. For example, there is no "scoring" option for keypress responses. You can check exactly what is supported for keypress responses here: https://hub.findingfive.com/study-gramma.../keypress/
2. Similarly, the setting called “correct_key” that appears in your trial templates isn’t supported either. It won’t cause any errors, but it also won’t have any effect on your experiment.
3. If you need your study to automatically record accuracy, then you’ll need to use the choice response instead. Sorry I didn't realize this before! Only the choice response allows you to define a correct answer using the "target" property: https://hub.findingfive.com/study-gramma...ce/#target
To make the buttons disappear for a choice response, here's what you can do (see the sketch after this list). Define two text stimuli whose "content" is simply a blank space like " ". Then, use the "stimulus reference" feature to define the choices of the choice response: https://hub.findingfive.com/study-gramma...e/#choices. Finally, make sure to define key_mapping: https://hub.findingfive.com/study-gramma...ey_mapping and optionally turn off the hint: https://hub.findingfive.com/study-gramma...oice/#hint
4. If you are switching to the choice response, then this won't matter, but I'm not sure the keypress response actually reports the wrong RT. I just tried a mini version of your study, and the RT reported by the keypress is anchored to the start of the trial (except for the very first trial, which took a bit longer to "cold start" everything). You can see that in my screenshot here:
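To make point 3 a bit more concrete, here is a rough sketch of what the pieces could look like. The stimulus names ("blank_left", "blank_right") and the response name are placeholders I'm making up for this example, and please take the exact stimulus-reference syntax for "choices" from the #choices page linked above rather than from this sketch:
Code:
"blank_left": {
  "type": "text",
  "content": " "
},
"blank_right": {
  "type": "text",
  "content": " "
},
"stroop_response": {
  "type": "choice",
  "choices": [ "blank_left", "blank_right" ],
  "instruction": "",
  "key_mapping": [ "c", "m" ],
  "hint": false
}
You would then set the "target" per condition as described in the #target link in point 3, so that accuracy gets recorded automatically.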
Anyway, let me know if this helps you get closer to the final solution.
Posts: 8
Threads: 1
Joined: May 2025
Reputation: 0
Hi Ting,
Thank you so much for the detailed clarification, that’s incredibly helpful! You’re absolutely right, I had been trying to troubleshoot and pulled in some suggestions generated by ChatGPT, so I really appreciate you pointing out which features are actually supported. That clears things up a lot.
Based on your explanation, I’ve now switched over to using the choice response instead, and I followed all of your instructions: I created two text stimuli with just a blank space as the content, used stimulus references for the choices, added the key_mapping, and turned off the hint. It’s all working really nicely now, thank you again for guiding me through that!
The only small issue I still see is that a tiny box still appears on the screen where the buttons used to be. It’s completely blank, but it looks like the invisible stimuli are still being rendered. Is there any way to make those disappear entirely, so that the screen looks completely clean and only the Stroop stimuli are visible?
Thanks again for your kind and thorough help, I’m getting really close to the setup I need now!
Best regards,
Vivienne
Posts: 382
Threads: 22
Joined: Sep 2020
Reputation: 2
That's great to hear! Let us look into the border on the choice response. I'll get back to you by the end of this week.
Posts: 382
Threads: 22
Joined: Sep 2020
Reputation: 2
Actually, Vivienne, I have a solution that you can use right away. It'll make data analysis a tiny bit more challenging, which I'll explain below along with a suggestion.
Here's how it works. Instead of using references to text stimuli, you can define a choice response like this:
Code:
{
  "type": "choice",
  "choices": [
    " ",
    "  "
  ],
  "instruction": "",
  "key_only": true,
  "target": "  ",
  "key_mapping": [
    "c",
    "m"
  ],
  "hint": false
}
This creates a choice response that operates in "key_only" mode, which removes the buttons. One option is a single space (" ") and the other is a double space ("  "); you can make the second option longer if needed, as long as it's distinct from a single space. In my example, the correct "target" is the double space, but you can easily switch it to the single space for different stimulus conditions.
I’ve tested this, and the result looks perfect - it behaves exactly as you described. It also correctly tracks whether the participant’s response is correct or not, since a single space is distinct from a double space.
The challenge for data analysis is that you'll need to keep track of what the single space versus the double space represents in your output CSV. This can be a bit confusing to remember, but if you recode these values to something more intuitive early in your analysis, it should be manageable. Just be careful during recoding so the two conditions don't get swapped.
Posts: 8
Threads: 1
Joined: May 2025
Reputation: 0
Hi Ting,
That worked perfectly, thank you so much! The setup now looks exactly the way I hoped: no visible buttons, clean design, and the responses are tracked just as needed. I really appreciate the clever workaround with the single and double spaces, I’ll make sure to recode those values during data analysis to avoid any confusion later on.
Now I’m just keeping my fingers crossed that the new width setting for the text stimuli will work just as smoothly once it goes live on the regular server; that would be the final puzzle piece!
Thanks again for all your time, patience, and support throughout this process. I really couldn’t have done it without your help!
Best regards,
Vivienne