Version 220.127.116.11 adds two new, ready-to-use classifiers to the pool of EPPI-Reviewer machine learning features. It is now possible to use them to identify Systematic Reviews and Economic Evaluations by title and abstract. This release also includes a bug fix, an adjustment to Priority Screening settings, and two performance-related enhancements.
Thanks to the Cochrane Crowd, we have had a machine learning classifier that is able to distinguish between randomised controlled trials (RCTs) and other study designs available for more than a year. We are now pleased to add two new classifiers: for systematic reviews and economic evaluations. These types of machine learning models need good data from which the machine can ‘learn’, and we’re grateful to the Centre for Reviews and Dissemination at the University of York for allowing us to use data from two of their databases for this task: the Database of Abstracts of Reviews of Effects (DARE) for the systematic reviews classifier; and the NHS Economic Evaluations Database (NHS EED) for the economic evaluations classifier.
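To illustrate the ‘learning’ step described above, here is a minimal, self-contained sketch of a bag-of-words text classifier. It is purely illustrative: EPPI-Reviewer's real models are trained on far larger datasets (such as DARE and NHS EED) with more sophisticated methods, and the function names and example records below are hypothetical.

```python
# Illustrative sketch only (not EPPI-Reviewer's actual model): a minimal
# Naive-Bayes-style classifier that "learns" to separate study designs
# from labelled titles/abstracts.
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

def train(examples):
    """examples: list of (text, label) pairs. Returns per-label word counts."""
    counts = {}
    for text, label in examples:
        counts.setdefault(label, Counter()).update(tokenize(text))
    return counts

def classify(counts, text):
    """Score each label with Laplace-smoothed log-likelihoods; return the best."""
    vocab = set().union(*counts.values())
    best_label, best_score = None, -math.inf
    for label, words in counts.items():
        total = sum(words.values())
        score = sum(
            math.log((words[w] + 1) / (total + len(vocab)))
            for w in tokenize(text)
        )
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Tiny hypothetical training set: titles labelled by study design.
examples = [
    ("Systematic review and meta-analysis of exercise interventions", "review"),
    ("A systematic review of smoking prevention programmes", "review"),
    ("Randomised controlled trial of a new antihypertensive drug", "other"),
    ("Cohort study of dietary fibre and colorectal cancer", "other"),
]
model = train(examples)

# Apply the trained model to an unlabelled item, as EPPI-Reviewer does
# to a chosen batch of records.
prediction = classify(model, "Systematic review of statin trials")  # -> "review"
```

The same pattern scales up: the production classifiers simply learn from many thousands of labelled title/abstract records rather than four toy examples.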
To use these classifiers ("models" is the technical term used within EPPI-Reviewer), click the "spanner and cog" button on the "Codes" toolbar (top right corner, in the "Codes" tab). This opens a window that allows you to build custom classifiers/models (not a new feature) and to apply existing ones (the "Stage 2:" section). The "Systematic Review" and "Economic Evaluation" models are now available in this window: all users can select them and apply them to a chosen batch of items. Results will appear in the Search tab, as before.
Priority Screening: feature change.
Following consultation with users who highlighted a possible problem, we have adjusted a detail in the "Screening" tab. From this version onwards, only codesets of the "Screening" type can be selected as the set used for Priority Screening; previously, "Standard" codesets could also be used. We think this restriction will be useful because some Priority Screening functionality cannot work as expected when the chosen codeset is of the "Standard" type. Specifically, any feature that relies on the inherent meaning attached to the "Include" and "Exclude" code-types would not function, or would work only in limited ways. These include:
1. Auto-Reconcile (Include/Exclude only): naturally, this feature relies on code-types to discriminate between agreements and disagreements.
2. Tracking progress: the numbers in the bottom-right quadrant and the graph on the top right are gathered by looking at code-types. With a "Standard" codeset, up-to-date figures cannot be collected, so the display would fail to represent the progress made so far.
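The dependency described in point 1 can be sketched in a few lines. This is a hypothetical illustration of the agreement/disagreement logic, not EPPI-Reviewer's internal API; the function name and data shapes are assumptions.

```python
# Hypothetical sketch of why Auto-Reconcile needs Include/Exclude code-types:
# with a "Screening" codeset every code carries a type, so two reviewers'
# decisions can be compared mechanically.

def auto_reconcile(decisions):
    """decisions: list of (reviewer_a, reviewer_b) pairs, where each value
    is an Include/Exclude code-type. Returns (agreements, disagreements)."""
    agreements, disagreements = [], []
    for pair in decisions:
        (agreements if pair[0] == pair[1] else disagreements).append(pair)
    return agreements, disagreements

agreed, disputed = auto_reconcile([
    ("Include", "Include"),  # agreement: can be completed automatically
    ("Include", "Exclude"),  # disagreement: left for manual reconciliation
])
```

With a "Standard" codeset, codes carry no Include/Exclude type, so a comparison like the one above has nothing to work with; the same gap is what breaks the progress figures in point 2.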
Bug fix (applies to Cochrane users only).
If a Cochrane user logged on via Archie without already having an EPPI-Reviewer account, the system allowed the account to be created on the fly (this is not the recommended route). However, the resulting "activate account" email contained a malformed activation link. This problem is now resolved.
Performance enhancements.
Users who had collected a long list of numeric Outcomes data could experience delays when loading the Outcomes list in the Meta-Analysis window; in extreme cases, the list could time out and fail to load. We have rewritten the underlying routines, which can now collect long lists of Outcomes without a significant performance cost.
A similar situation could occur when running complex configurable reports against numerous items; in some cases, the performance cost was high enough to cause a timeout error. As with the Outcomes lists, the underlying routines have been rewritten and should no longer cause problems.
Update 24/07/2018: V 18.104.22.168
This update includes behind-the-scenes changes that should not produce visible effects for ordinary users. They include new features that are currently being tested and should become available to all users in the next version.