Version 22.214.171.124 is a release with a relatively narrow scope, focused on improvements to the initial phases of Priority Screening (PS) and to comparisons/reconciliations. This release adds a new "export the whole list to RIS" function for EPPI-Reviewer Web and also marks the return of searching OpenAlex by the date of "record creation" (when a given reference was added to OpenAlex), a feature that is especially useful for living reviews/maps.
[Update: 05 May 2023] On May 3rd, the OpenAlex API changed in a way we did not anticipate, which meant that both EPPI-Reviewer versions suddenly stopped being able to obtain OpenAlex references from it. This disrupted most EPPI-Reviewer functionalities that rely on OpenAlex and could produce a number of visible error messages. Version 126.96.36.199 is a "hotfix" designed to resolve this problem and to ensure that all OpenAlex-powered functionalities work again.
Given the change in how OpenAlex represents references, it was also possible to include a small improvement in how OpenAlex references are shown in EPPI-Reviewer (within the "Update review" pages): the "details" view of references now includes one or two direct links to downloadable PDF full-text documents (if present), giving precedence to an open-access version whenever available.
Priority Screening improvements
Generally, we recommend starting all "Screen on Title and Abstract" rounds by explicitly allocating a pre-established number of (randomly selected) references to reviewers. This is useful because it allows reviewers to estimate how many includes are present in the whole screening queue, and thus possibly permits a "stop (screening)" criterion to be set. It also ensures that the Machine Learning component of PS receives a reasonably representative training set, which is crucial to overall PS performance. Finally, explicitly assigning references to reviewers makes it possible to know in advance which comparisons are needed; creating these can otherwise be a time-consuming activity, especially for large teams.
However, it is also possible to start using PS from time zero, without "pre-loading" the machine with any training data. In such cases, the PS list of items is "randomly generated" in the initial phases, and automatically switches to "machine learning driven" when at least 6 includes and at least 6 excludes have been found. Under these circumstances, until now, the "randomised list" was re-shuffled far too often; as a result, in double/multiple screening scenarios, too many items would get screened by one person alone and then de-prioritised as the list was re-randomised. This led to an unduly slow (/long) initial phase, where items were coded but their coding could not be completed, because only one person had seen them.
This scenario has been fixed in this update: the prioritisation algorithm now places records that need a second (or third) screen above others in the queue. These changes apply to the priority screening list in all cases; however, the resulting improvements will be felt mostly when the list is randomised, and are marginal when the list is driven by machine learning.
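The re-ordering idea can be sketched as follows. This is illustrative only, not EPPI-Reviewer's actual code; the item structure and the `screen_count` field are hypothetical stand-ins for "how many reviewers have coded this item so far".

```python
def prioritise(queue, screens_required=2):
    """Order items so that partially screened ones come first (sketch).

    Each item is a dict with hypothetical fields 'id' and 'screen_count'.
    Items screened by at least one, but not all, required reviewers are
    promoted ahead of unscreened items; fully screened items go last.
    """
    def rank(item):
        count = item["screen_count"]
        if 0 < count < screens_required:
            return 0  # needs a second (or third) screen: highest priority
        if count == 0:
            return 1  # not yet screened
        return 2      # screening already complete

    return sorted(queue, key=rank)  # stable sort preserves existing order

queue = [
    {"id": 1, "screen_count": 0},
    {"id": 2, "screen_count": 1},
    {"id": 3, "screen_count": 2},  # complete under double screening
    {"id": 4, "screen_count": 1},
]
print([item["id"] for item in prioritise(queue)])  # → [2, 4, 1, 3]
```

Promoting partially screened items ensures their coding is completed quickly, rather than being stranded when the list re-randomises.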
New feature: auto-comparisons in EPPI-Reviewer Web
When a large team is involved in a multiple coding/screening exercise, a large number of comparisons may be required in order to find and resolve all agreements and disagreements. This is especially true when using PS, as in such cases, which pairs (or triplets) of users see which items is effectively randomised (it depends on when people are active). For example, if 6 reviewers participate, 5+4+3+2+1 = 15 pairwise comparisons are needed to cover all possible pairs of reviewers. If 8 reviewers are involved, 7+6+5+4+3+2+1 = 28 comparisons may need to be created.
Thus, the amount of work required increases with the size of the reviewing team, which is counterproductive and ultimately unnecessary.
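The sums above are just the pairwise count n·(n−1)/2, which grows quadratically with team size; a quick check:

```python
def pairwise_comparisons(n):
    """Number of distinct reviewer pairs among n reviewers: n*(n-1)/2."""
    return n * (n - 1) // 2

for n in (6, 8, 12):
    print(f"{n} reviewers -> {pairwise_comparisons(n)} pairwise comparisons")
# 6 -> 15, 8 -> 28, 12 -> 66
```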
For this reason, we created the much-awaited "auto-comparisons" functionality. This is a "one button" feature, present in the Collaborate tab of EPPI-Reviewer Web. The corresponding button is enabled whenever the root of a coding tool is selected in the Codes column (excluding Administration tools).
Pressing the button will automatically generate three-way comparisons, covering all possible pairs of reviewers for which some coding overlap exists.
When planning and writing the algorithm to do so, we decided to implement the following approach:
- Comparisons include 3 reviewers whenever possible, as this reduces clutter by minimising the overall number of comparisons.
- Comparisons with the highest number of coding overlaps are created first (they will appear before the others).
- 3-way comparisons that include a pair already present in another comparison are allowed. In real-world conditions this does not significantly change the total number of comparisons created, while it can be convenient to have more data displayed in the resulting "reconcile" pages.
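The approach above can be sketched as a greedy pair-covering algorithm. This is a minimal illustration of the idea, assuming a simple `overlap` map of reviewer pairs to overlap counts; it is not EPPI-Reviewer's actual implementation.

```python
from itertools import combinations

def auto_comparisons(overlap):
    """Greedy sketch: cover every reviewer pair that shares coded items.

    `overlap` maps frozenset({a, b}) -> number of items both reviewers
    coded. Returns a list of comparisons (sets of 2-3 reviewers),
    seeded from the largest overlaps first.
    """
    uncovered = {pair for pair, n in overlap.items() if n > 0}
    reviewers = {r for pair in overlap for r in pair}
    comparisons = []
    while uncovered:
        # Seed with the uncovered pair having the most coding overlap...
        a, b = max(uncovered, key=lambda p: overlap[p])
        members = {a, b}
        # ...then add a third reviewer, but only if it covers another pair.
        candidates = [r for r in reviewers - members
                      if any(frozenset({r, m}) in uncovered for m in members)]
        if candidates:
            members.add(max(candidates, key=lambda r: sum(
                overlap.get(frozenset({r, m}), 0) for m in members)))
        comparisons.append(members)
        uncovered -= {frozenset(p) for p in combinations(members, 2)}
    return comparisons

overlap = {frozenset({"A", "B"}): 5,
           frozenset({"A", "C"}): 3,
           frozenset({"B", "C"}): 2}
print(auto_comparisons(overlap))  # one 3-way comparison: [{'A', 'B', 'C'}]
```

Note that, as in the description above, a pair may end up covered by more than one 3-way comparison; the greedy step only guarantees each pair appears at least once.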
These decisions are pragmatic and might not perfectly match everyone's needs and expectations. Please do let us know if you find we should have opted for a different strategy: we are always happy to receive suggestions.
New feature: Export to RIS (all pages)
In the References tab, the Export (to RIS and more) sub-menu has a new "Export to RIS (all pages)" entry. This function produces RIS files, like the older "Export to RIS" function, but instead of exporting only the selected items from the visible page, it exports all items from the current list. Since lists can be very long, this function generates (numbered) RIS files containing up to 4000 references each. This ensures that the EPPI-Reviewer client is unlikely to run out of memory when exporting many (tens of) thousands of references, and that the resulting RIS files do not reach unmanageable sizes.
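The chunking logic behind the numbered files can be illustrated like this (a sketch of the idea only; the 4000-item limit is the one stated above):

```python
def chunk_references(refs, size=4000):
    """Yield (file_number, slice) pairs, splitting refs into numbered
    batches of at most `size` references each (1-based numbering)."""
    for start in range(0, len(refs), size):
        yield start // size + 1, refs[start:start + size]

refs = list(range(9500))  # stand-in for a list of 9500 references
sizes = [(number, len(batch)) for number, batch in chunk_references(refs)]
print(sizes)  # → [(1, 4000), (2, 4000), (3, 1500)]
```

Each batch would then be written to its own numbered RIS file, keeping both client memory use and individual file sizes bounded.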
Re-enabled feature: search by OpenAlex date
In both EPPI-Reviewer versions, it is possible to filter OpenAlex search results by (publication) "date". We have now re-enabled the option to filter by "creation (in OpenAlex)" date. This option was disabled when OpenAlex made the functionality paid-for; we have since purchased access, allowing us to enable this function once again.
It is worth mentioning that this functionality is important when updating living reviews relies on explicit OpenAlex searches: we believe that filtering by "creation (in OpenAlex)" date is the most effective way to ensure complete "coverage" while avoiding duplicates.
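For readers querying OpenAlex directly, the equivalent filter on the works endpoint is (to our knowledge) `from_created_date`, a paid-for feature that requires an API key; the sketch below only builds the request URL, and the key name and value are placeholders:

```python
def openalex_created_since(date, api_key="YOUR_KEY"):
    """Build an OpenAlex works query filtered by creation date (sketch).

    `from_created_date` selects works by the date they were added to
    OpenAlex (a premium filter), which suits living-review updates.
    """
    return ("https://api.openalex.org/works"
            f"?filter=from_created_date:{date}&api_key={api_key}")

url = openalex_created_since("2023-04-01")
print(url)
```

Running such a query periodically, with the date set to the last update, should return only records added to OpenAlex since then.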