Help Forum

Forum (Archive)

This forum is kept largely for historical reasons and for our latest-changes announcements. (It was focused around the older EPPI Reviewer version 4.)

There are many informative posts and answers to common questions here, but you may find our videos and other resources more useful if you are an EPPI Reviewer WEB user.

You can search the forum. If you have questions or require support, please email eppisupport@ucl.ac.uk.


Duplicate coding report
New Post
20/06/2012 14:15
 

Hi,

We are currently double coding our articles to assess quality.  

At the moment our quality assessment tool is set up under a node called 'Quality assessment', with a child code representing each question on the checklist we are using (e.g., '1. Was the sample size adequate?') and child codes under each of these questions to reflect the possible answers (i.e., tick boxes for yes, partial and no, each of which has its own definition added to the description field). To complete the checklist we tick an answer for each question, adding explanatory text where necessary using the 'info' button.

[If it's easier to look at this in context, it's in our 'Exercise' review, and the relevant codeset is QualityAssessment_Quant (Qualsyst).]  

I am now trying to generate coding reports (from the collaborate tab) as per the EPPI-Reviewer manual, and so far I've only been able to generate a report for each of the questions individually. When I select the top-level node, the report that is generated is blank.

I've also tried printing a coding record from each individual study record to compare our coding, but this report currently includes comments (i.e., text coded for each node) from only one of the coders.

Could you please advise whether there's a simple way of generating a report for each paper with full coding from both reviewers, using either of these options, that I'm missing? (Or if there's something I'm perhaps doing incorrectly.)

Alternatively, do I need to reconsider the way this code set is set up?

Many thanks,

Lynley Aldridge

 
New Post
20/06/2012 15:35
 

Hello Lynley,

You can generate a report for each paper in the 'Coding record' tab that you can find in the 'Document details' window.

If you go into one of the papers that two of you have coded (for example, Courneya (2003)) by clicking on 'Go' for that paper in the Documents tab, you will find yourself in the 'Document details' window. On the right side you will see the 'Coding record' tab. If you click on that tab you will see all of the codesets that have been applied to that item by each person who has started any coding on that particular paper. If two people have applied codes from the 'QualityAssessment_Quant (QualSyst)' codeset, you will see two lines in the table for that codeset. If you select those two rows (by checking the boxes on the left side of those rows) and then click on 'Run comparison', a report will be displayed showing how each person coded that item. It is colour coded, so one person's coding is in blue and the other person's coding is in red.

Best regards,

Jeff

 
New Post
21/06/2012 05:52
 

 Hi Jeff,

Thanks for this response - I'm not sure what I was doing wrong, but I have now been able to generate the reports using the 'Coding record' tab.

Our next problem is that we would like to generate a static report of just the codes assigned by each of the two raters for each question  (e.g., yes, partial or no) without the text, and export this to Excel, before reconciliation takes place, so that we can later calculate and report on inter-rater agreement for the quality assessments.

At the moment the only way of doing this that I can see is to generate 14 separate reports (one for each question) using the 'create comparison' option from the collaborate tab.  

Can you advise on any alternatives please?

Many thanks,

Lynley

 
New Post
21/06/2012 12:30
 

Hello Lynley,

The 'Create comparison' method is question-based (i.e., it looks at the child codes directly below the question). It is set up that way so that if someone is running a comparison against their screening criteria, and the screening is set up as a number of includes/excludes, a comparison can be calculated based on agreements and disagreements.

In your case you have 14 questions, each with its own child codes, so I am not sure how you would compare across questions. Would you not want to first determine your agreement/disagreement at the question level? Comparisons across questions would then be based on the individual agreements/disagreements at the individual questions. Or am I perhaps misunderstanding what you are asking?

Best regards,

Jeff

 
New Post
25/06/2012 12:22
 

 Hi Jeff, 

Yes, we would want to determine agreement/disagreement at the question level.

However, I was hoping that there would be an easy way to export the data for all 14 questions into Excel in one go (to preserve the original codings, so that we can calculate and report on inter-rater reliability for each question).  At the moment it looks like we would need to generate 14 separate 'create comparison' reports and then copy and paste the data from each one into Excel.  
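For what it's worth, once all the per-question codings have been pasted into a single spreadsheet and saved as a CSV file, the agreement statistics (e.g., Cohen's kappa per question) can be calculated outside EPPI-Reviewer. A minimal sketch in Python, assuming a hypothetical file and hypothetical 'Question', 'Coder1' and 'Coder2' column names (these would need adjusting to match whatever layout the pasted reports end up with):

import csv
from collections import Counter, defaultdict

def cohens_kappa(pairs):
    """Cohen's kappa for a list of (coder1, coder2) answer pairs."""
    n = len(pairs)
    observed = sum(a == b for a, b in pairs) / n
    counts_a = Counter(a for a, _ in pairs)
    counts_b = Counter(b for _, b in pairs)
    categories = set(counts_a) | set(counts_b)
    expected = sum(counts_a[c] * counts_b[c] for c in categories) / (n * n)
    return 1.0 if expected == 1 else (observed - expected) / (1 - expected)

# Hypothetical export: one row per item per question, one column per coder.
by_question = defaultdict(list)
with open("quality_assessment_comparisons.csv", newline="") as f:
    for row in csv.DictReader(f):
        by_question[row["Question"]].append((row["Coder1"], row["Coder2"]))

for question, pairs in sorted(by_question.items()):
    print(f"{question}: kappa = {cohens_kappa(pairs):.2f} (n = {len(pairs)})")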

We can work around this, but I just thought I'd ask in case there was a relatively simple solution I was missing.

Best wishes,

Lynley

 

 
New Post
25/06/2012 16:05
 

Hello Lynley,

I now understand what you require but I don't think there is an easy way to do this in EPPI-Reviewer other than creating the 14 comparisons.

I have been thinking about how a comparison report containing 14 questions and the response(s) from each coder might appear. Would you still want one row per item, with multiple columns showing the responses from each person for each question? In your scenario you would have 28 response columns in the table (assuming there were 2 coders and 14 questions). Or did you envision it a bit differently?

A comparison report that provides all of this information might be something we should add to EPPI-Reviewer, so I am just trying to get an understanding of what it might look like while still being of the most value to the user.
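To make the idea concrete, here is a rough illustration in Python of the wide layout described above (purely an assumption about how such an export might look, not an existing report): one row per item, with a pair of columns per question holding each coder's response, so per-question agreement can be read off each column pair. Only two of the fourteen questions are shown.

rows = [
    {"ItemID": 101, "Q1_Coder1": "yes", "Q1_Coder2": "yes",
     "Q2_Coder1": "partial", "Q2_Coder2": "no"},
    {"ItemID": 102, "Q1_Coder1": "no", "Q1_Coder2": "partial",
     "Q2_Coder1": "yes", "Q2_Coder2": "yes"},
]

for q in ["Q1", "Q2"]:  # Q1..Q14 in the full checklist
    agree = sum(row[f"{q}_Coder1"] == row[f"{q}_Coder2"] for row in rows)
    print(f"{q}: {agree} of {len(rows)} items agree")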

Best regards,

Jeff

 

