Review guide for analysis best practice, developed at rOpenSci unconf 2017

Related: checkers - a package to assess analysis

# Rationale

While every analysis is different, there are common elements which can strengthen validity, reproducibility, and reusability. These guidelines describe and prioritize those elements, helping analysts develop the strongest analyses and workflows possible while remaining flexible enough for a wide variety of applications and contexts.

# Terminology

## Tiers

Tiers are listed in descending order of importance: focus on Tier 1 elements first, then Tier 2, then Tier 3.

• Tier 1: Must Have - These elements are required for reliable and trustworthy analyses.
• Tier 2: Nice to Have - These elements are recommended for best practice and reproducibility and should be strongly considered.
• Tier 3: Recommended - These elements are ideal best practice.

## Automation

• Fully automatic: Included as elements of checkers.
• Semi-automatic: Can be implemented as custom checks in checkers.
• Human-powered: Cannot be automated. The analyst uses these guidelines to ensure the analysis and report follow best practice for their specific context.

# Prerequisite

Every analysis starts with a clearly defined question. This might include “What patterns do we see?” and other exploratory questions as well as more formal hypotheses, but in either case the questions and analysis plans should be clearly defined before work begins.

# Data

## Know your Data/Data Structure (T1, Human)

In order to conduct robust analyses and make reliable inferences, we have to understand our data.

### Examples

• Is a data dictionary/codebook or other metadata available?
• Are column names reasonable, i.e. meaningful and free of special characters?
• Is the data tidy, with all information contained in rows and columns (not in rownames or labels)?
• Have you visualized your data to better understand missingness and variable distributions?

### Package Suggestions

dplyr and tidyr for tidying data; visdat for visualizing missingness
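
As a minimal sketch of a first look at a dataset (the data frame `df` and its column names are hypothetical), tidying and missingness visualization might look like:

```r
library(dplyr)
library(tidyr)
library(visdat)

# Reshape wide measurement columns into tidy rows (column names illustrative)
tidy_df <- df %>%
  pivot_longer(starts_with("visit_"), names_to = "visit", values_to = "value")

# One-glance views of variable types and of where values are missing
vis_dat(df)
vis_miss(df)

# Distributions of the numeric variables
summary(select(df, where(is.numeric)))
```
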

## Quality Checks (T1, Human)

Check data for reasonable values.

### Examples of unreasonable values

• Pregnant males
• Lab measurements outside biological limits
• Variables that should be continuous but contain character values (e.g. "1,239")

### Package Suggestions

assertr for asserting conditions on data within a pipeline
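
A hedged sketch of such checks with assertr; the data frame `patients` and its columns (`sex`, `pregnant`, `hemoglobin`) are hypothetical. Each failing check halts the pipeline with a report of the offending rows:

```r
library(dplyr)
library(assertr)

patients %>%
  # No pregnant males
  verify(!(sex == "male" & pregnant)) %>%
  # Lab measurement within plausible biological limits (g/dL)
  assert(within_bounds(3, 25), hemoglobin) %>%
  # A supposedly continuous variable really is numeric
  verify(is.numeric(hemoglobin))
```
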

## Rawest data form (T1, Human)

To retain as much information as possible, and to avoid duplicated effort and human error, data should be loaded in the form closest to its original state as is reasonable.

### Examples

• Comma-separated files rather than formatted spreadsheets
• Raw subject-level data rather than summary statistics
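
A minimal sketch of this principle (the file path, column names, and data frame are hypothetical): load the raw subject-level file and derive any summaries in code, so they can always be reproduced from the source.

```r
library(readr)
library(dplyr)

# Load the rawest reasonable form: the subject-level CSV export
subjects <- read_csv("data/raw/subjects.csv")

# Compute summary statistics in code rather than importing them pre-computed
subjects %>%
  group_by(treatment_arm) %>%
  summarise(mean_outcome = mean(outcome, na.rm = TRUE))
```
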

## Check and control for updated source data (T2, Semi-Auto)

Do the column names and dimensions in your current data files match an expected set of names/dimensions? If rows or columns are added or deleted, your results and the stability of your scripts and models might be affected.

### Examples

• Check dim() and names() of all datasets against expected values
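
Such a guard can be sketched as a semi-automatic check (the file path, expected names, and dimensions are hypothetical, recorded when the analysis was last validated):

```r
# Structure recorded when the analysis was last run and validated
expected_names <- c("id", "sex", "age", "outcome")
expected_dim   <- c(500L, 4L)

dat <- read.csv("data/raw/subjects.csv")

# Fail fast if the source file has silently changed shape
stopifnot(
  identical(names(dat), expected_names),
  identical(dim(dat), expected_dim)
)
```
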

## Ownership clearly identified (T3, Human)

At minimum, the source of the data should be noted, along with any necessary acknowledgements. If the data is publicly available and licensed, the license should be included.