HelpForum

Forum (Archive)

This forum is kept largely for historic reasons and for our latest changes announcements. (It was focused around the older EPPI Reviewer version 4.)

There are many informative posts and answers to common questions here, but if you are an EPPI Reviewer WEB user you may find our videos and other resources more useful.

Click here to search the forum. If you do have questions or require support, please email eppisupport@ucl.ac.uk.

<< Back to main Help page

Latest Changes (13/11/2024 - V6.15.5.0)
13/11/2024 13:08
 

Version 6.15.5.0 contains new functionality mostly aimed at facilitating the evaluation of automation features, and of GPT4-generated coding in particular. There are new "all coding reports" designed to streamline the comparison of GPT coding against gold-standard coding, and a new "delete all coding by this person in this coding tool" (destructive!) function, which can speed up prompt-refining iterations. This release also changes, mostly under the hood, how EPPI Reviewer communicates with Cochrane systems, to match changes in Cochrane APIs and policies. Finally, there is a new option that allows users with the "Coding Only" role to access comparisons that involve them and to reconcile disagreements within the Coding Only user interface.

Changes to Cochrane Authentication, Authorisation and Licensing

EPPI Reviewer grants an automatic and specific license to Cochrane users, whereby Cochrane authors can use EPPI Reviewer to work on Cochrane reviews free of charge. At present, Cochrane is undergoing a period of transformation, with the already completed transformation of RevMan Web into a fully-featured product and the (planned) corresponding phasing out of Archie (the online counterpart of RevMan 5). To license Cochrane users, EPPI Reviewer allows people to authenticate with their Cochrane credentials; it then used to communicate with the Archie API to check user rights and to receive the list of reviews in which a given user has an authorship role. With the planned end of Archie, all these systems had to change. Hopefully this will produce no visible effect for the vast majority of Cochrane users; however, a few might be affected by the following changes:

  • To receive the Cochrane licensing, besides having a Cochrane account, people need to be recognised as "Cochrane Authors". The new Cochrane API now returns "user info" data that includes an "Is Cochrane Author" flag. This simplifies the code on the EPPI Reviewer end, but we're told that this flag will now sometimes change from "True" to "False", meaning that inactive users will sometimes lose their Cochrane Author status. When this happens, EPPI Reviewer will detect the fact, unlink the user's EPPI Reviewer account from their (no longer useful, in EPPI Reviewer terms) Cochrane account, and inform the user.
  • Users who later regain their Cochrane Author status will be able to re-link their Cochrane and EPPI Reviewer accounts in a handful of clicks, at any time.
  • How EPPI Reviewer receives the list of Cochrane reviews of which the current user is an author has also changed. From now on, Cochrane will mark reviews as "Active" or "Inactive" and EPPI Reviewer, following Cochrane's explicit instructions, will receive only the list of "Active" reviews. Thus, some reviews might "disappear" in EPPI Reviewer if/when they are marked as "Inactive". Since this is supposed to happen only when reviews are genuinely inactive, we don't expect this to be a problem. Should our expectation prove wrong, please don't hesitate to contact us, and we'll work out together how to deal with the problem.

EPPI Reviewer new features

New "All coding report(s)"

In Review Home\Coding Progress, Review Admins already have an "All Coding" report function, which represents all coding data associated with a coding tool, including incomplete coding and coding associated with duplicates and deleted items; it shows data in an HTML table, where each cell includes all data associated with a given code and a specific item. The availability of GPT4o automatic coding makes it imperative to evaluate its performance on a per-review, per-question basis, at scale. For calculating accuracy levels on sufficiently large test sets, the existing comparison features of EPPI Reviewer are not ideal, and neither is the existing "all coding" report, as it may show a lot of data of different "types" within a single cell.
Therefore, we have extended the "All Coding" report approach, and added two new reports based on the same data.
The new "All Coding" Excel report exports all coding data associated with a coding tool but, unlike the All Coding HTML report, it segregates data across cells and worksheets; it also comes with a number of options designed to facilitate different kinds of off-line analyses. Instead of having one cell per pairwise combination of code and item, cells represent a three-way match between Code, Item and Reviewer. Multiple sheets then replicate this three-way matching, but represent different kinds of data (ticked checkboxes, text in the infobox, PDF text selections, arms, outcomes). In theory (we are still working on it) this allows Excel to be used directly to calculate agreement/disagreement rates.
For those so inclined, we have also made the data used to produce the HTML and Excel reports available in JSON format, which makes it possible to write R, Python (or any other language) scripts to automate such analyses.
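As a flavour of what such a script might look like, the minimal Python sketch below computes a simple per-code agreement rate between two reviewers (for example, GPT4o vs. a gold-standard human coder). The record shape used here is hypothetical: the real JSON schema of the exported data may differ, so treat this as an illustration of the kind of off-line analysis the JSON export enables, not as documentation of its format.

```python
import json

# Hypothetical record shape -- the real "All Coding" JSON export may differ.
records = json.loads("""
[
  {"item_id": 1, "code": "Include", "reviewer": "GPT4o", "checked": true},
  {"item_id": 1, "code": "Include", "reviewer": "Alice", "checked": true},
  {"item_id": 2, "code": "Include", "reviewer": "GPT4o", "checked": true},
  {"item_id": 2, "code": "Include", "reviewer": "Alice", "checked": false}
]
""")

def agreement_rate(records, code, rev_a, rev_b):
    """Share of items where both reviewers made the same checked/unchecked call."""
    by_item = {}
    for r in records:
        if r["code"] == code:
            by_item.setdefault(r["item_id"], {})[r["reviewer"]] = r["checked"]
    # Only items coded by both reviewers contribute to the rate.
    pairs = [v for v in by_item.values() if rev_a in v and rev_b in v]
    if not pairs:
        return None
    return sum(1 for v in pairs if v[rev_a] == v[rev_b]) / len(pairs)

print(agreement_rate(records, "Include", "GPT4o", "Alice"))  # 0.5 for the sample above
```

The same idea extends naturally to per-question accuracy, Cohen's kappa, or any other metric, once the real field names from the export are substituted in.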

New "delete all coding by this person in this coding tool" feature

We're introducing this (radically destructive) functionality mostly to allow iterative prompt refining for GPT4o: evaluating and refining GPT prompts can be time consuming, and might require auto-coding the same batch of items repeatedly. For this reason, being able to delete all coding produced by the robot can save a lot of time. In typical EPPI Reviewer fashion, we've decided not to restrict this function to robot-generated coding only, and also allow bulk deletion of coding from any regular reviewer (with lots of warnings and safeguards: this function destroys data and cannot be undone). This function is available only to people with the Review Admin role, and appears in Review Home\Coding Progress, as a small trash icon next to the "Advanced Bulk Complete\Uncomplete" icons.

Creating multiple child codes in one go

The "add child code" function (available in "Edit Tools", the "Collaborate" tab, and "Item Details") has a new option: "Enter multiple comma separated codes". When ticked, it allows you to type a comma-separated list of code names; EPPI Reviewer will then create multiple codes (with empty descriptions) in one go.
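To make the intended behaviour concrete, here is a tiny, purely illustrative sketch of how a comma-separated entry could be turned into multiple child codes; the function name and the (name, description) shape are hypothetical, not EPPI Reviewer internals.

```python
def parse_child_codes(entry: str):
    """Split a comma-separated entry into (name, description) pairs,
    trimming whitespace and skipping empty names; descriptions start empty."""
    return [(name.strip(), "") for name in entry.split(",") if name.strip()]

print(parse_child_codes("Population, Intervention, Outcome"))
# [('Population', ''), ('Intervention', ''), ('Outcome', '')]
```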

Enhanced user interface for per-item OpenAlex matches

In the "Item Details / OpenAlex" tab, the layout has been redesigned. The different fields from the current item and its possible OpenAlex matches are now highlighted in a manner inspired by the deduplication pages. Even if it isn't as sophisticated as the Deduplication UI, this should greatly facilitate the manual checking of OpenAlex matches.

New review option(s)

In Review Home / edit review: review admins can now enable or disable priority screening. Previously this was only possible through the Account Manager.

In the same place, review admins can also enable/disable the new "Show Comparisons in Coding Only" option (disabled by default). When enabled, comparisons appear in the Coding Only UI, where only comparisons involving the current user are listed. People can then see and use comparisons via the "coding only" interface, if and only if a review admin has activated this functionality.

Bugfixes

In the "Update review \ Bring up to date" tab, searches took an unnecessarily long time to run, to the point of becoming somewhat unreliable. The efficiency of the code running these searches has been greatly improved, which makes the searches a lot quicker and hopefully also reduces the number of times they fail to complete.

In "Edit coding tools", the "aeroplane" buttons allow codes to be moved within a coding tool. When doing so, EPPI Reviewer computes where a given code cannot be moved to, as some moves would create self-referencing loops of parent/child relations. In some cases, it was possible to confuse the UI into performing this calculation for the wrong code. As a result, it was possible to accidentally create these self-referencing loops, which in turn made it impossible for EPPI Reviewer to load the coding tool trees. This problem is now solved.
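The invariant the fix restores can be sketched as follows: a code must never become a child of itself or of any of its own descendants, otherwise the parent/child relations form a loop and the tree cannot be loaded. The sketch below is illustrative only (the names and tree representation are assumptions, not EPPI Reviewer internals).

```python
def forbidden_targets(children, code):
    """Return the set of codes that `code` cannot be moved under:
    itself plus every code in its own subtree.
    `children` maps each code name to a list of its child code names."""
    forbidden = {code}
    stack = [code]
    while stack:  # depth-first walk of the subtree rooted at `code`
        for child in children.get(stack.pop(), []):
            if child not in forbidden:
                forbidden.add(child)
                stack.append(child)
    return forbidden

tree = {"Root": ["A", "B"], "A": ["A1", "A2"]}
print(sorted(forbidden_targets(tree, "A")))  # ['A', 'A1', 'A2']
```

Running this calculation against the wrong code, as the UI bug did, yields the wrong forbidden set and so can let a loop-creating move through.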

Priority Screening training would fail when the to-be-screened list had been exhausted. This bug was introduced in Version 6.15.4.0: in the new Machine Learning environment, a training round would fail, and thus produce no results, if it was submitted after reaching the end of the screening queue (i.e. when there were no items left to "rank"). This meant that the "progress" graph and table would never show the final "this is all done" data point, possibly giving users the impression that some unscreened items were left. This problem is now solved and the "all done" final data point is created correctly.

When creating a new item, the component that shows the list of "uploaded documents" was visible and could sometimes show the list of a previously visited item. The list of uploaded documents is now hidden in the "new item" window.

Account Manager: transfer credit between credit lines.

The EPPI Reviewer online shop allows users to purchase credit and then distribute it to Review and Account subscriptions as and when needed. Sometimes a small sum remains in such credit lines, and it becomes difficult or impossible to use it for the desired renewal. In such cases, transferring credit to a different credit line can help to "aggregate" leftovers and overcome this problem. This functionality is available via an "If needed, you can transfer credit between purchases" link placed above the list of Credit Purchases in the Summary tab.

EPPI Visualiser: loading lists of items

When frequencies, crosstabs and maps are shown in their standalone pages, clicking on elements in the results table automatically loads the corresponding list of items, which appears below the main table of results. From now on, when a user clicks on these links, the page will automatically scroll down to the items list as soon as it is received.

 


Copyright 2021 by EPPI-Centre