Dear Sam,
I've thought about this one a little: at first sight, your procedure seems 100% fine, and I don't see any reason why it should create a problem.
I'll summarise what I understand you've done, to double check I've got it right.
You've set up a code set and added the sub-codes (1-9, with names that explain whether an item should be included or excluded).
Before any item was coded (or at code-set creation time), the set was configured for "multiple user data entry" (this is crucial: if it was initially set for single user data entry, and some codes were applied at that point, then I could understand the confusing outcome).
Multiple reviewers went through the review items and coded them with the code set described above.
You have used a comparison report, or the live comparison features, to look for disagreements, and made sure your own coding always reflected the final/agreed classification.
From the review statistics, you have "bulk completed" all your coding within the relevant code set. This should have concluded the coding phase; all that was left was to mark the appropriate items as excluded.
You have used the "assign documents to be included or excluded" button from the main toolbar to exclude the items that had exclusion codes applied (codes 1-7).
At the end of the process, too many items were marked as excluded. (I'm assuming you started with all items marked as included.)
It is always a little embarrassing, but I'm afraid that the first explanation I can offer, if my description above matches what happened, is human error. The most likely way to produce this result is that the items that were incorrectly excluded had two codes assigned to them: one suggesting inclusion and another suggesting exclusion.
The other possibility is that during the manual reconciliation phase some of the changes you made were not saved on our server. This is theoretically possible, for example if the connection is lost for a short while, but we have been very careful to design a system that will always show an error in such cases. We reviewed this system very recently and could not find a situation that would fail to raise an error. Additionally, EPPI-Reviewer checks the communication with the main server every 30 seconds and displays a warning in the lower-left "status" area if the connection is lost. On the other hand, we did notice that some of these errors do not explain what happened in a user-friendly way (the next version will show clearer error messages). So the question for you is: did you notice any (obscure) error message in EPPI-Reviewer during the reconciliation phase?
In short, my hypotheses are:
- the code set was changed to "multiple-user" only after some coding was applied
- human error
- uncommitted data changes
If none of these seems possible to you, then I'm afraid we are back at square one; I do not have other ideas at the moment - sorry!
Sergio