Help Forum

Forum (Archive)

This forum is kept largely for historical reasons and for our latest-changes announcements. (It was focused on the older EPPI Reviewer version 4.)

There are many informative posts and answers to common questions, but you may find our videos and other resources more helpful if you are an EPPI Reviewer WEB user.

You can search the forum; if you have questions or require support, please email eppisupport@ucl.ac.uk.


Reconciling disagreements and applying exclusion criteria
New Post
23/06/2011 18:03
 

Hi,

We have just conducted pilot screening on two of our systematic reviews. Four reviewers piloted on our 'Social Funds' review and three on our 'School Feeding' review. As this was more than the usual two reviewers, I assumed I could not complete the agreements automatically.

Therefore, what I did was log in as myself and change my entries where appropriate, marking those documents complete as I went. For this, we are only concerned about disagreements over whether a document should be included or excluded, so only coding disagreements along these lines were discussed and reconciled. Then I went into the 'Review Statistics' tab of the right-hand box and set all the documents in the review to complete (based on my coding).

This all seemed to work fine. However, when I went through the exclusion criteria and assigned documents with an exclude code to be excluded, I found that the numbers didn't add up with our reconciliations. From playing around with it, I now see that the problem was that some documents that should be included (i.e. they had been coded 8 or 9) did not have their include checkbox ticked. When I listed all the (excluded) documents marked with codes 8 and 9 and re-ticked these, the numbers were fine and added up. So my question is: why were these documents initially set as not included? And will this issue have any bearing on our further screening? (We will now be re-screening the included studies based on the full text.)

Sorry for the long email, but I wanted to check that the procedure I have followed is OK and give you as much information as possible.

As ever, many thanks for your help.

Best,

Sam

 
New Post
24/06/2011 16:07
 

Dear Sam,

I've thought about this one a little: at first sight, your procedure seems 100% fine, and I don't see any reason why it should create a problem.
I'll summarise what I understand you've done, to double-check I've got it right.

1. You set up a code set and added the sub-codes (1-9, with names that explain whether an item should be included or excluded).
2. While no item was yet coded (or at code-set creation time), the set was configured for "multiple user data entry". (This is crucial: if it was initially set for single user data entry, and some codes were added at that point, then I could understand the confusing outcome.)
3. Multiple reviewers went through the review items and coded them with the code set described above.
4. You used a comparison report, or the live comparison features, to look for disagreements and made sure your own coding always reflected the final/agreed classification.
5. From review statistics, you "bulk completed" all your coding within the relevant code set. This should have concluded the coding phase; all that was left was to mark the appropriate items as excluded.
6. You used the "assign documents to be included or excluded" button on the main toolbar to exclude the items that had exclusion codes applied (codes 1-7).
7. At the end of the process there were too many items marked as excluded. (I'm assuming you started with all items marked as included.)

It is always a little embarrassing, but I'm afraid that the first explanation I have, if all of my description fits what happened, is human error. The most likely situation to produce this is that the items that were incorrectly excluded had two codes assigned to them, one suggesting inclusion and the other exclusion.
The other possibility is that during the manual reconciliation phase some of the changes you made were not saved on our server. This is theoretically possible, for example if the connection is lost for a short while, but we have been very careful in designing a system that will always show an error in such cases. We reviewed this system very recently and could not find a situation that would not raise an error. Additionally, EPPI-Reviewer checks the communication with the main server every 30 seconds and displays a warning in the lower-left "status" area if the connection is lost. On the other hand, we did notice that some of these errors do not explain what happened in a user-friendly way (the next version will show clearer error messages). The question for you is: did you notice any (obscure) error messages in EPPI-Reviewer during the reconciliation phase?

In short, my hypotheses are:
- the code set was changed to "multiple-user" only after some coding was applied
- human error
- uncommitted data changes
If none of these seems possible to you, then I'm afraid we are back at square one; I do not have other ideas at the moment - sorry!

Sergio

 
New Post
27/06/2011 11:39
 

Sergio,

Thanks for your reply. In response: I'm pretty sure the code sets were set to multiple coding before any coding was done, although the outcome that would have led to seems to make more sense than the other possibilities. Human error is possible, but the studies that I found to be excluded did not have more than one code applied, so this seems unlikely. As for data changes in EPPI, we did have a number of 'debugging' error messages, so this could be an issue.

Anyway, as the reconciliations are now sorted and the inclusions/exclusions applied, everything is OK, and we are now past the screening phase.

Thanks again for your help.

Best,

Sam

 
New Post
27/06/2011 12:14
 

Hi Sam,

If you did see some "debugging" errors during the process, then we need to look no further. I am 99.9% sure that this is what went wrong in your case. As I've said, the next version will have clearer warnings asking users to redo their last changes.

Apologies for the inconvenience,

Sergio

 

 


Copyright 2021 by EPPI-Centre