A few questions regarding blocked triggered surveys in a study in May:
Ethica ID #12219 in study ID 714 had two surveys blocked at 14:41 UTC on May 11th.
This participant answered a survey scheduled for 8:30 UTC at 14:42, and another at 14:49 (said to be scheduled for 14:48).
a. How can these surveys have been blocked at the same time if they were scheduled at very different times that should not have overlapped?
b. What happened so that the following two surveys were triggered right after that, while they also should not have overlapped (one should have expired by then)?
When the Ethica dashboard “Survey Sessions” shows a survey as blocked, the “Data Quantity Report” shows it as “completed”. I can’t give an example anymore, as this seems to have been resolved in the past few weeks (thank you!). This is important to our next study because we reward participants per completed survey and do this based on the quantity report overview. Would you be able to tell us if this has been resolved for future studies as well?
1. A & B: You are right; as these three surveys were more than 2 hours apart, and the expiry of the survey was set to 30 minutes, they should not have been marked as blocked. The problem was that the participant’s app had been terminated from the night before until about 4 p.m. on May 11th. When they opened the app, it realized there were 3 prompts of survey 3627 waiting for a response. It opened one of them from the morning (which should have been expired) and asked the participant to respond to it (which is why you now see it as completed), and marked the other two as blocked. In fact, it should have been 2 expired surveys and 1 completed, instead of 2 blocked and 1 completed. We will try to find a solution for this. Thanks for pointing this out.
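The intended behavior described above can be sketched as follows. This is a minimal illustration only, not Ethica’s actual code: the function name and data layout are hypothetical, and all schedule times except the 8:30 prompt are assumed.

```python
from datetime import datetime, timedelta

# Sketch of the intended prompt handling on app resume: any pending prompt
# older than its expiry window is marked expired; only a still-valid prompt
# (if any) is shown to the participant.
EXPIRY = timedelta(minutes=30)  # expiry window mentioned in the thread

def triage_pending_prompts(prompts, now):
    """prompts: list of (survey_id, scheduled_time) tuples.
    Returns a dict mapping each prompt to 'expired' or 'show'."""
    statuses = {}
    for survey_id, scheduled in prompts:
        if now - scheduled > EXPIRY:
            statuses[(survey_id, scheduled)] = "expired"
        else:
            statuses[(survey_id, scheduled)] = "show"  # prompt participant
    return statuses

# Example loosely matching the case above: three pending prompts of
# survey 3627, app resumed at 14:41 UTC (the 12:00 and 14:20 times
# are hypothetical placeholders).
now = datetime(2019, 5, 11, 14, 41)
prompts = [
    (3627, datetime(2019, 5, 11, 8, 30)),
    (3627, datetime(2019, 5, 11, 12, 0)),
    (3627, datetime(2019, 5, 11, 14, 20)),
]
result = triage_pending_prompts(prompts, now)
print(sorted(result.values()))  # ['expired', 'expired', 'show']
```

With this triage, the two stale prompts would be marked expired rather than blocked, and only the still-valid one would be offered to the participant.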
2. Yes, this issue was resolved after your feedback. Thanks for pointing this out.
No. Opening and closing the app should not lead to any issues. If it does and you notice that, please let us know. The termination I was referring to happens occasionally when the workload on the phone is too high (for example while watching a YouTube video, during a Skype call, or similar phone usage). It’s explained in more detail here.
I think in the other thread we discussed the problematic survey triggering logic. These issues would be resolved by fixing the surveys and having participants reload the study. To be more specific, the session on Day 17 isn’t marked as expired, and that led to subsequent sessions being marked as blocked.
We don’t see a survey marked as ‘not expired’ on Day 17. We do not understand why this has happened to only 3 participants so far and not to everyone. We worry that updating all surveys at this point in the study will also cause problems; this is something we’ve already encountered in the past. Could you please help us fix it?
I chose to show by participation day rather than date. So the issue stems from Day 17 not being marked as expired, but it looks like the participant is answering sessions of Day 19. Either way, the mobile app’s behavior is unstable with these triggering-logic values.
I can fix the triggering logic for you. Did you mean to set that to an absolute time or a relative one? If relative, how many days after joining should it trigger? If absolute, 2019-12-01 has already passed, so it won’t get triggered anymore; do you want to set it to another absolute date?
Hi Faham, it should have been an absolute date, the same for all 380 participants. As we indicated, we fear that changing the script while the study is running would create other problems, so yesterday we decided to closely monitor the extent of the issue for now. Today we encountered another issue: questionnaires that were filled out and received in the past are now marked as expired. For example, user #17281 filled out 3, 4, and 3 questionnaires respectively over the past three days, and all are now marked ‘expired’. Similarly, questionnaires are listed as expired in Teun’s dashboard, but not expired in mine. Does this relate to the same issue? Can these questionnaires be retrieved? And what is the best and safest way to move forward?
The best thing to do at this point, as mentioned before, is to update the format of those invalid triggering logics to absolute and publish the surveys, and also push reloaded studies to devices so participants get new sessions. I can do these steps with your confirmation. In case any data turns out to be lost due to answered sessions being marked as expired later on, it can still be recovered through participant activity history, although that might take some time. But my suggestion is to fix the corrupt surveys as soon as possible and publish new surveys.
Our apologies for the late response. We again did not want to risk other errors that the update may have caused in the last few days of the study.
Participants do not respond as quickly on weekend days. The study has stopped now.
We do want to follow up with you about some of the issues we found.
We have detected the following issues:
Participants who successfully filled out surveys (which we indeed received) whose submissions disappeared days later
Participants whose surveys remain “grey” and turn to “expired” days later
Participants who have filled out surveys that we never received (with evidence of submission)
The following are concrete examples, matched per issue above. We think the same issues may apply to multiple participants on other days as well.
We compared data files we downloaded from several dates with one another: later dates with previous dates. We were not able to check all files for all days, so these are also just specific cases we found.
17281: on December 12th, surveys of December 10th, 11th and 12th that were previously successfully submitted are now marked as expired. We also see that the device ID of this participant changed on December 10th. The same is true for 16225: the surveys of this participant that were once submitted have been marked expired from November 30th onward. This change happened on December 11th or 12th.
We see this change in device ID with other participants as well; if surveys changed status after a few days, it happened from the date of the device-ID change. Most of these turned from “grey” to “expired”. We don’t know if these could in fact have been submitted (see also our next point). We do find participants for which this is the case (grey → green).
16293: submitted a survey on December 11th at 12:18 (see screenshot of the last page of the survey). In Ethica it is marked as expired at 12:31.
Neither the Ethica dashboard nor Kibana shows the submitted survey (only expired).
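For reference, the export comparison we ran between downloads from different dates can be sketched as follows. This is a simplified illustration: the column names (`user_id`, `session_id`, `status`) are assumptions, and real Ethica export headers may differ.

```python
import csv
import io

def load_statuses(csv_file):
    """Map (user_id, session_id) -> status from an Ethica-style CSV export.
    Column names here are assumptions; adjust to the real export headers."""
    return {(row["user_id"], row["session_id"]): row["status"]
            for row in csv.DictReader(csv_file)}

def status_changes(earlier, later):
    """Return sessions present in both exports whose status changed."""
    return {key: (earlier[key], later[key])
            for key in earlier.keys() & later.keys()
            if earlier[key] != later[key]}

# Tiny demo: a session that was "completed" in the earlier export but
# "expired" in the later one -- the pattern we observed for user 17281.
earlier = load_statuses(io.StringIO(
    "user_id,session_id,status\n17281,s1,completed\n17281,s2,grey\n"))
later = load_statuses(io.StringIO(
    "user_id,session_id,status\n17281,s1,expired\n17281,s2,grey\n"))
changes = status_changes(earlier, later)
print(changes)  # {('17281', 's1'): ('completed', 'expired')}
```

Running this pairwise over the saved daily downloads is how the completed-to-expired flips above were found.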
We have the following questions, again organized per issue:
1a. Can we retrieve all lost data for these participants?
1b. And is there a way to detect all other participants this happened to?
We cannot compare all possible combinations of files, and we only have saved downloads for some days; surveys have also been lost within a single day (we saw this happen).
2a. How can we know for certain that these questionnaires were not submitted and then lost?
2b. How can we know the cause for “grey” surveys: because they were never sent out/received or simply expired?
E.g., participants with broken phones also have “grey” surveys. Should these not be marked as blocked? And some questionnaires are still marked as “grey” even though the Ethica apps have been synced.
3a. Can we retrieve all lost data for these participants? And detect participants with similar issues? We have had more participants with similar issues, but no screenshots.
Or, is it possible to simply receive all datafiles for each survey for each participant?
1a. Any lost response, if it was ever uploaded to our servers, can be recovered, even if the session was marked as expired later on.
1b. We can detect if a session is marked expired while there have been responses uploaded for it. Essentially, we could look for these cases among expired sessions and recover their responses.
2a. If there are lost responses for a session, it can be detected.
2b. We can also check the above against “grey” (unknown) sessions. If any response was received on our servers, we can still recover it.
3a. Yes, we can. However, recovering lost responses is a lengthy process: we essentially need to go through the responses received for this study, match them against the extracted data in the databases, and find the lost ones.
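The scan described above could look roughly like this. It is a sketch only: the statuses and data structures are assumptions for illustration, not Ethica’s actual schema.

```python
# Among sessions marked "expired" or "grey", flag any that nevertheless
# have responses stored on the server -- these are candidates for recovery.
RECOVERABLE_STATUSES = ("expired", "grey")  # assumed status labels

def recoverable_sessions(sessions, uploaded_responses):
    """sessions: dict mapping session_id -> dashboard status.
    uploaded_responses: set of session_ids with server-side responses.
    Returns session_ids whose status hides an uploaded response."""
    return [sid for sid, status in sessions.items()
            if status in RECOVERABLE_STATUSES and sid in uploaded_responses]

# Hypothetical example: s1 and s3 look lost on the dashboard but have
# responses on the server, so they can be recovered.
sessions = {"s1": "expired", "s2": "completed", "s3": "grey", "s4": "expired"}
uploaded = {"s1", "s2", "s3"}
print(recoverable_sessions(sessions, uploaded))  # ['s1', 's3']
```

The lengthy part in practice is the matching step itself, since every received response has to be checked against the extracted data.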
Thank you, Faham. All the best wishes for 2020.
Could we then start a process in which we check for the instances mentioned above for all datapoints and receive a complete dataset of this study from you?
Or what would be the best way to proceed?
And would you be able to estimate the time such a recovery process would take?