10-18-2021, 11:01 AM
(This post was last modified: 10-18-2021, 11:02 AM by chingiz.poletaev. Edit Reason: misspellings)
Dear Colleagues,
While trying out my beta study I ran into a problem with data collection. The experiment should collect audio responses; however, the final zip file contains only one (pretty unintelligible) .csv file. No .ogg files, no "value", no "rt" — just that .csv. So my question is: where are the actual audio recordings? Am I doing something wrong? My browser is Chrome. Below is the code for my response unit "picturenaming_response":
{
  "type": "audio",
  "auto_start": true,
  "rerecording_allowed": false,
  "onset_detection": true
}
Should I be using a different response unit? I need participants to describe the pictures aloud, and I need the recordings of those descriptions.
Thank you very much for your input!