  December Grammar Update - 12/27/2021
Posted by: Ting - 12-21-2021, 12:50 PM - Forum: New Study Grammar Features - No Replies

Dear Researchers,

We are excited to announce the December Study Grammar update, which takes effect on December 27, 2021! This month we are bringing a host of new features and optimizations:

New Features

  • Beta support for Safari on macOS, and all browsers on iOS and iPadOS: The FindingFive website is now fully accessible to both participants and researchers using the Safari browser on macOS, as well as all mobile browsers on iOS and iPadOS. Support for Apple devices and software is currently in beta, meaning that some bugs may exist.

    For participants to be able to use Safari to participate in your study, you must launch a new session.
    Existing active sessions are still closed to Safari users, and will only work with Firefox or Chrome on a desktop or laptop.

  • Dark Mode: It is now possible to display trial content in dark mode (black background and white text) by setting the color scheme of a trial to "dark"; a rough sketch follows this list.

  • Duration timer: It is now possible to display a duration timer for visible responses, indicating how long participants have spent on that response. Please check out the documentation on responses for details.
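    As an illustration of the dark mode setting, a trial template might look roughly like this (the exact property name, here assumed to be "color_scheme", and the stimulus/response names are placeholders; please consult the trial template documentation for the precise grammar):

    "my_dark_trial": {
      "type": "basic",
      "color_scheme": "dark",
      "stimuli": ["my_text_stimulus"],
      "responses": ["my_choice_response"]
    }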

Bug fixes:
  • Fixed an issue where the visual feedback for clickable images or videos wasn't properly displayed.
  • Fixed an issue where studies with identical duration settings for a trial and a response within that trial were permitted to run. This setup results in a race condition and is now disallowed.
Other changes:
  • Internal updates to the text, mouse_reset, mouse_position, and keypress responses to improve their responsiveness and the precision of reaction-time calculation.
If you have any questions regarding this grammar update, please leave your comments below. Thanks!


  Bot checks
Posted by: edia - 12-13-2021, 04:28 PM - Forum: General Feedback - No Replies

Hello,
I have been collecting data on FindingFive through MTurk, and recently the quality of the data has been pretty low. It would be helpful to be able to install bot checks at the beginning of the experiment, which would disqualify bots from completing the study!
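Until something like that is built in, one stopgap I can imagine (a rough sketch reusing the choice-response grammar from other threads on this forum; I haven't verified how well it works in practice) is an opening attention-check trial whose answer is obvious to a human but arbitrary to a bot:

"bot_check_response": {
  "type": "choice",
  "instruction": "To show you are not a bot, please select the word 'banana'.",
  "choices": ["apple", "banana", "cherry"],
  "target": ["banana"]
}

Participants who miss the target could then be excluded during analysis, but a built-in check that disqualifies them automatically would be much better.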
-Yev


  Your study contains coding errors. . .
Posted by: sten_knutsen - 12-02-2021, 10:14 AM - Forum: Platform Improvement - Replies (9)

So I've seen the error message "Your study contains coding errors we cannot automatically detect yet" before and I've managed to figure out what went wrong. However, I am completely puzzled by this one.
The study in question has already been fully previewed, and I've launched it as a session twice over the past couple of weeks. Worked perfectly fine, collected data. No code was changed.
Just went to preview the study today (want to modify some parts) and I am getting the above error message and I don't understand why. None of the code has been changed, and it had been previously working just fine!


Some additional information. . .

I also tried to start a new session on this study (which should work since I ran two sessions already) and got the error message "You can't launch a new session because 'int' object has no attribute 'replace'"


  target for response not working in French
Posted by: bisserai - 11-23-2021, 08:37 AM - Forum: Study Grammar & Management - Replies (5)

Hi FF team,

I'm presenting two text options for a choice response, and to make analysis easier I wanted to use the "target" feature of the choice response. I've written my response like so:

Code:
{
  "type": "choice",
  "choices": [
    "C'est un ours noir, qui est petit ?",
    "C'est un petit ours, qui est noir ?"
  ],
  "instruction": " ",
  "key_mapping": [
    "Q",
    "F"
  ],
  "key_only": true,
  "locations": "fixed",
  "target": [
    "C'est un petit ours, qui est noir ?"
  ],
  "delay": 0.5
}

Then when I run the study, I unfortunately get all FALSE values for the response, because the target isn't being processed correctly, I think: in my data file, the "response_target" column only contains " [C " (for comparison, the response column does contain the full "C'est un petit ours, qui est noir ?"). But when I go back to my study and open my response, it doesn't seem to be misinterpreting my quotation marks, so I don't know how to get it to understand which apostrophes to count and which to ignore.
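One workaround I might try (just a guess on my part, not confirmed to fix this) is to avoid the ASCII apostrophe altogether and use the typographic apostrophe ’ (U+2019) in both the choices and the target, so the strings no longer contain the character that seems to be cutting the target short:

"choices": [
  "C’est un ours noir, qui est petit ?",
  "C’est un petit ours, qui est noir ?"
],
"target": [
  "C’est un petit ours, qui est noir ?"
]

But I would still like to know what the intended way of handling apostrophes in targets is.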



Best,
bissera


  Hybrid stimulus and response format
Posted by: Katerina Tsar - 11-22-2021, 06:32 AM - Forum: Experimental Design - Replies (5)

Hello!
I am trying to create an experiment where people rate emojis for valence and arousal. The total number of emojis is 54, divided into 3 lists (20 - 20 - 14). I have randomised the order of the emojis in each list. Now, my problem is the following:
In every list, I want two images to be presented below each emoji; one picture indicating 9 levels of valence and one picture showing nine arousal levels. Below each picture, a rating option (1-9 Likert scale) should be available. This is the code in Trial Templates for list 1:

"list1":{
  "type":"basic",
  "stimuli":["angry","anxious_sweat",
            "beaming","blow_kiss",
            "confused","dizzy",
            "enraged","fearful",
            "grin_eyes","grin_sweat",
            "halo","kissing",
            "neutral","pensive",
            "savouring","smiling",
            "steam_nose","tongue",
            "unamused","winking"],
  "stimulus_pattern":{"order":"random"},
  "auto_advance":true,
  "responses":["valence_resp", "arousal_resp"]
},
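For reference, I imagine each rating response (e.g. "valence_resp") being written as a choice response along these lines, based on the choice-response grammar I've seen in other threads here (I'm not sure this is the right way to express a 1-9 Likert scale, hence question 1 below):

"valence_resp": {
  "type": "choice",
  "instruction": "How pleasant does this emoji feel? (1 = very negative, 9 = very positive)",
  "choices": ["1", "2", "3", "4", "5", "6", "7", "8", "9"],
  "locations": "fixed"
}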


1: Will the rating responses work the way I have included them in the code?
2: How do I include the two images in the stimuli as stable stimuli in each trial?

Thank you very much in advance!
Best,
Katerina


  text response submit button
Posted by: catherct - 11-20-2021, 08:44 AM - Forum: Study Grammar & Management - Replies (1)

Hi community!

I'm trying to key-map the Submit button for text responses, similarly to how I would key-map the continue_button property on basic trial templates. I looked through the response study grammar, and it doesn't seem like we're able to edit the Submit button yet, as it's not recognized as a property. I was wondering if we'll be able to do this in the future?

Or (and this is probably just me being hopeful) if anyone has found a work-around for this?

Thanks in advance!


  log of response positions and keys pressed
Posted by: bisserai - 11-17-2021, 05:41 PM - Forum: Study Grammar & Management - Replies (4)

Hi FF team!

I have a similar question to "keeping track of response choice locations?", except I use key presses and not mouse clicks (this is for my two studies "normingLRbranching" and "normingNeutralIntonation"). I present two text options in random positions and would like a log of 1. which text option appeared in which position, and 2. which key the participant pressed (although I guess if I have one, I can easily figure out the other). The latter would be especially useful for checking whether a participant was just pressing the same key for the entire experiment rather than actually interacting, so if there's a way to log it, that would be fantastic!

Thanks a lot,
bissera


  Two copies of an uploaded audio file
Posted by: HF2021 - 11-17-2021, 05:29 PM - Forum: Platform Improvement - Replies (3)

Hello, I'm not sure whether this is a bug or not but I ran into a couple of interesting issues when creating and uploading audio stimuli.
The first problem: After uploading a set of audio files (.wav) using the batch option in the "Resources files" page, I created a set of corresponding stimuli using batch upload of a CSV file.  The created stimuli remained red and the audio files did not automatically get associated with the created stimuli.  I tried the reverse, first creating the stimuli using CSV batch upload, and then uploading the audio files through the resources files page and they were still not associated with each other (stimuli remained red).  I should note that the name of the stimuli and the name of the audio file are not the same, though the stimuli are populated with the correct audio file name.  For example, a stimulus with the name "emerald" and content:
{
  "type": "audio",
  "content": "5emerald",
  "barrier": false,
  "visible": false
}

Here the audio file is named "5emerald", though the stimulus name is "emerald". Does the batch upload require that both the stimulus name and its content be the same as the audio file name, or is it sufficient for the "content" field to have the correct audio file name? I can of course upload each audio file individually by clicking on the stimulus name, clicking upload, and then uploading the audio file from my computer. This brings me to the second problem:
If I batch upload the same set of stimuli twice from the resource files page, I only get one copy of each audio file (which is great). But if, after batch uploading audio files from the "resource files" page, I additionally upload one of the same previously uploaded audio files from the "stimuli" section (by clicking on the red stimulus button with the right name), I end up with a second copy of that audio file on the "resource files" page: an exact copy with the same name and content (two audio files within the resource files page), and each plays fine. This may not be a bug, but it seemed odd that I would end up with two files with the same name and content at different locations on the resource files page.
Just thought I'd share this experience...
Many thanks!!


  Response Received!
Posted by: HF2021 - 11-15-2021, 02:29 PM - Forum: Study Grammar & Management - Replies (2)

Hello! When I set "submission_point" to false in a basic trial template with an image stimulus, I still get the blue "Response Received!" message between trials. In searching through the forum, I saw that this was previously reported as a bug and fixed. The response type is "choice" with a single choice. I would be grateful for any clarification. Thank you!


  catch trials
Posted by: [email protected] - 11-11-2021, 10:58 AM - Forum: Experimental Design - Replies (4)

Good morning!
I have a question about catch trials and whether it is possible to implement them given my study design. I have 224 experimental trials in total (seven blocks of 32 trials), and I am hoping to put six short breaks after the first six blocks of trials. My trials are randomized, so if I put the blocks into my block sequence in my procedure, the breaks get randomized with the trials (which I don't want).


From the grammar reference, it seems that catch trials would be the best way to implement these consistent breaks into randomized trials. However, I'm not sure if I will be able to use catch trials with my study design. I was wondering if you could help me determine that? 
In my study, each trial has a sequence of six stimuli: a response cue, a blank grid which acts as a pause, a "cued response", another blank grid, a "discrimination response", and finally one more blank grid.

The response cue, which is presented first, auto-advances after 5ms. Then the blank grid is presented, which also auto-advances after 5ms. Then the "cued response" stimulus is presented, to which participants make a keypress response. This stimulus will not advance until participants make a keypress. Participants then see another blank grid which auto-advances after 5ms to the final stimulus, the "discrimination response" stimulus,  to which participants must make another keypress before the trial ends. 

The trial templates are set up so that there's a template for the response cue + first pause (called e.g "Trial8 RC"), a template for the cued response stimulus + discrimination response stimulus (called e.g "Trial8Critical"), and a template for the two pauses that follow the cued response and then the discrimination response stimuli, respectively (called e.g "Trial8Pauses"). 

"Trial8 RC": {
    "type": "basic",
    "stimuli": ["ResponseCueLeft", "BlankGrid5"],
    "auto_advance": true
  },
 
  "Trial8Critical": {
    "type": "basic",
    "stimuli": ["GreenBottomCRS", "BlueTopCRS"],
    "responses": ["TrainingResponseTargetF", "TrainingResponseTargetJ"]
  },
 
  "Trial8Pauses": {
    "type": "basic",
    "stimuli": ["BlankGrid5", "BlankGrid2"],
    "auto_advance": true

The response cue stimulus has a stimulus-level duration of 0.5 seconds with barrier set to "true", and the trial template is set to auto-advance.
In the procedures, this is set up so that the block will look as follows: 

"TrialBlock5": {
      "trial_templates": ["Trial5Critical", "Trial5Pauses"],
      "pattern": {"order": "alternate", "repeat": 1},
      "cover_trials": ["Trial5 RC"].

The reason I don't think I can incorporate catch trials into my study is that I already have the "Trial[x] RC" templates as cover trials. If I put the "Trial[x] RC" templates into "trial_templates": [...] along with Trial[x]Critical and Trial[x]Pauses, then I won't be able to alternate the pauses between the two critical stimuli, because FF won't be able to evenly distribute the stimuli. I've tried using catch trials together with the cover trial (like how you can have both a cover and an end trial), but the stimuli were presented out of order.

Do you have any thoughts on this?

Thank you so much!

Grace


