Greg Finak [email protected]
This vignette demonstrates how to use DataPackageR to build a data package. DataPackageR aims to simplify data package construction. It provides mechanisms for reproducibly preprocessing and tidying raw data into documented, versioned, and packaged analysis-ready data sets. Long-running or computationally intensive data processing can be decoupled from the usual
R CMD build process while maintaining data lineage.
For demonstration purposes, in this vignette we will subset and package the
mtcars data set.
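To make the goal concrete: despite the mtcars naming, the example data are the built-in cars data set (columns speed and dist, as the output later in this vignette shows), and the processing amounts to a one-line subset. A minimal sketch of the kind of filtering the packaged processing script performs:

```r
# Keep only the cars with speed > 20; this is the object that will be packaged.
cars_over_20 <- cars[cars$speed > 20, ]
nrow(cars_over_20) # 7 of the 50 rows in `cars`
```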
Set up a new data package.
We will set up a new data package based on the
mtcars example in the README. The
datapackage_skeleton() API is used to set up a new package. The user needs to provide:
- R or Rmd files that process data.
- A list of R object names created by those files.
- Optionally, a path to a directory of raw data (will be copied into the package).
- Optionally, a list of additional files that may be dependencies of your R or Rmd data processing files.
```r
library(DataPackageR)

# Let's reproducibly package the cars in the mtcars dataset
# with speed > 20. Our dataset will be called `cars_over_20`.

# Get the code file that turns the raw data into our packaged and
# processed analysis-ready dataset.
processing_code <- system.file(
  "extdata", "tests", "subsetCars.Rmd", package = "DataPackageR"
)

# Create the package framework.
DataPackageR::datapackage_skeleton(
  name = "mtcars20",
  force = TRUE,
  code_files = processing_code,
  r_object_names = "cars_over_20",
  path = tempdir()
  # dependencies argument is empty
  # raw_data_dir argument is empty.
)
```

```
✔ Creating '/tmp/RtmphVE6Rt/mtcars20/'
✔ Setting active project to '/tmp/RtmphVE6Rt/mtcars20'
✔ Creating 'R/'
✔ Writing 'DESCRIPTION'
✔ Writing 'NAMESPACE'
✔ Setting active project to '<no active project>'
✔ Setting active project to '/tmp/RtmphVE6Rt/mtcars20'
✔ Creating 'data-raw/'
✔ Creating 'data/'
✔ Creating 'inst/extdata/'
```
What’s in the package skeleton structure?
The process above has created a DataPackageR source tree named “mtcars20” in a temporary directory. For a real use case, you would pick a path on your filesystem where you could then initialize a new github repository for the package.
The contents of the package are:

```
              levelName
1  mtcars20
2   ¦--data
3   ¦--data-raw
4   ¦   °--subsetCars.Rmd
5   ¦--datapackager.yml
6   ¦--DESCRIPTION
7   ¦--inst
8   ¦   °--extdata
9   ¦--R
10  °--Read-and-delete-me
```
You should fill out the
DESCRIPTION file to describe your data package. It contains a new
DataVersion string that will be automatically incremented when the data package is built if the packaged data has changed.
The user-provided code files reside in
data-raw. They are executed during the data package build process.
A note about the YAML config file.
The datapackager.yml file is used to configure and control the build process.
The contents are:
```yaml
configuration:
  files:
    subsetCars.Rmd:
      enabled: yes
  objects: cars_over_20
  render_root:
    tmp: '780942'
```
The two main pieces of information in the configuration are a list of the files to be processed and the data sets the package will store.
This example packages an R data set named
cars_over_20 (the name was passed to
datapackage_skeleton()), which is created by the subsetCars.Rmd processing script.
The objects must be listed in the yaml configuration file.
datapackage_skeleton() ensures this is done for you automatically.
DataPackageR provides an API for modifying this file, so it does not need to be done by hand.
Further information on the contents of the YAML configuration file, and the API are in the YAML Configuration Details vignette.
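As an illustration of that API, files and objects can be registered in the configuration programmatically; a sketch (the names `anotherScript.Rmd` and `another_object` are hypothetical, and the mtcars20 skeleton from above is assumed to exist in tempdir()):

```r
library(DataPackageR)
pkg_root <- file.path(tempdir(), "mtcars20")
config <- yml_find(pkg_root)                         # read datapackager.yml from the package
config <- yml_add_files(config, "anotherScript.Rmd") # register a new processing file
config <- yml_add_objects(config, "another_object")  # register a new data object
yml_write(config)                                    # write the config back to the package
```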
Where do I put my raw datasets?
Raw data (provided the size is not prohibitive) can be placed in
inst/extdata. The
datapackage_skeleton() API has the
raw_data_dir argument, which will copy the contents of
raw_data_dir (and its subdirectories) into
inst/extdata automatically.
In this example we are reading the
mtcars data set that is already in memory, rather than from the file system.
An API to read raw data sets from within an R or Rmd processing script.
As stated in the README, in order for your processing scripts to be portable, you should not use absolute paths to files. DataPackageR provides an API to point to the data package root directory and the
data subdirectories. These are useful for constructing portable paths in your code to read files from these locations.
For example, to construct a path to a file named “mydata.csv” located in
inst/extdata in your data package source tree:
call DataPackageR::project_extdata_path("mydata.csv") in your R or Rmd file. This would return, e.g., /tmp/RtmphVE6Rt/mtcars20/inst/extdata/mydata.csv
- DataPackageR::project_path() constructs a path to the data package root directory. (e.g., /tmp/RtmphVE6Rt/mtcars20)
- DataPackageR::project_data_path() constructs a path to the data package data subdirectory. (e.g., /tmp/RtmphVE6Rt/mtcars20/data)
Raw data sets that are stored externally (outside the data package source tree) can have their paths constructed relative to the data package root directory returned by DataPackageR::project_path().
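A sketch of how such paths might be built inside a processing script (the file and directory names here are hypothetical; the calls resolve against the package being built):

```r
# A raw file shipped inside the package source tree under inst/extdata:
internal <- read.csv(DataPackageR::project_extdata_path("mydata.csv"))

# A file stored outside the package source tree, addressed relative to the
# package root rather than via a hard-coded absolute path:
external <- read.csv(file.path(dirname(DataPackageR::project_path()),
                               "external_raw_data", "big_file.csv"))
```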
YAML header metadata for R files and Rmd files.
If your processing scripts are Rmd files, the usual yaml header for rmarkdown documents should be present.
If your processing scripts are R files, you can still include a yaml header, but it should be commented with
#' and it should be at the top of your R file. For example, a test R file in the DataPackageR package looks as follows:
```r
#'---
#'title: Sample report from R script
#'author: Greg Finak
#'date: August 1, 2018
#'---
data <- runif(100)
```
This will be converted to an Rmd file with a proper yaml header, which will then be turned into a vignette and indexed in the built package.
Build the data package.
Once the skeleton framework is set up, run the preprocessing code to build
cars_over_20, and reproducibly enclose it in a package.
```r
# Create a temporary library into which we will install the package.
dir.create(file.path(tempdir(), "lib"))
DataPackageR:::package_build(
  file.path(tempdir(), "mtcars20"),
  install = TRUE,
  lib = file.path(tempdir(), "lib")
)
```

```
✔ 1 data set(s) created by subsetCars.Rmd
• cars_over_20
☘ Built all datasets!
Non-interactive NEWS.md file update.
✔ Creating 'vignettes/'
✔ Creating 'inst/doc/'
ℹ Loading mtcars20
Writing NAMESPACE
Writing mtcars20.Rd
Writing cars_over_20.Rd
── R CMD build ─────────────────────────────────────────────────────
* checking for file ‘/tmp/RtmphVE6Rt/mtcars20/DESCRIPTION’ ... OK
* preparing ‘mtcars20’:
* checking DESCRIPTION meta-information ... OK
* checking for LF line-endings in source and make files and shell scripts
* checking for empty or unneeded directories
* looking to see if a ‘data/datalist’ file should be added
WARNING: this package now depends on R (>= 3.5.0)
NB: Added dependency on R >= 3.5.0 because serialized objects in
serialize/load version 3 cannot be read in older versions of R.
File(s) containing such objects:
  ‘mtcars20/data/cars_over_20.rda’
* building ‘mtcars20_1.0.tar.gz’

Next Steps
1. Update your package documentation.
   - Edit the documentation.R file in the package source data-raw subdirectory and update the roxygen markup.
   - Rebuild the package documentation with document().
2. Add your package to source control.
   - Call git init . in the package source root directory.
   - git add the package files.
   - git commit your new package.
   - Set up a github repository for your package.
   - Add the github repository as a remote of your local package repository.
   - git push your local repository to github.

[1] "/tmp/RtmphVE6Rt/mtcars20_1.0.tar.gz"
```
Documenting your data set changes in NEWS.
When you build a package in interactive mode, you will be prompted to input text describing the changes to your data package (one line).
These will appear in the NEWS.md file in the following format:
```
DataVersion: xx.yy.zz
========
A description of your changes to the package

[The rest of the file]
```
Logging the build process.
DataPackageR uses the
futile.logger package to log progress.
If there are errors in the processing, the script will notify you via logging to console and to
/private/tmp/Test/inst/extdata/Logfiles/processing.log. Errors should be corrected and the build repeated.
If everything goes smoothly, you will have a new package built in the parent directory.
In this case we have a new package archive: mtcars20_1.0.tar.gz.
A note about the package source directory after building.
The package source directory changes after the first build.
```
               levelName
1  mtcars20
2   ¦--data
3   ¦   °--cars_over_20.rda
4   ¦--data-raw
5   ¦   ¦--documentation.R
6   ¦   °--subsetCars.Rmd
7   ¦--DATADIGEST
8   ¦--datapackager.yml
9   ¦--DESCRIPTION
10  ¦--inst
11  ¦   ¦--doc
12  ¦   ¦   ¦--subsetCars.html
13  ¦   ¦   °--subsetCars.Rmd
14  ¦   °--extdata
15  ¦       °--Logfiles
16  ¦           ¦--processing.log
17  ¦           °--subsetCars.html
18  ¦--man
19  ¦   ¦--cars_over_20.Rd
20  ¦   °--mtcars20.Rd
21  ¦--NAMESPACE
22  ¦--NEWS.md
23  ¦--R
24  ¦   °--mtcars20.R
25  ¦--Read-and-delete-me
26  °--vignettes
27      °--subsetCars.Rmd
```
Update the autogenerated documentation.
After the first build, the
R directory contains
mtcars20.R, which has autogenerated
roxygen2 markup documentation for the data package and for the
cars_over_20 packaged data.
The Rd files can be found in
man.
The autogenerated documentation source is in the
documentation.R file in
data-raw.
You should update this file to properly document your objects. Then rebuild the documentation:
```r
# Create a temporary library directory.
dir.create(file.path(tempdir(), "lib"))
document(file.path(tempdir(), "mtcars20"), lib = file.path(tempdir(), "lib"))
```

```
Warning in dir.create(file.path(tempdir(), "lib")): '/tmp/RtmphVE6Rt/lib'
already exists
ℹ Updating mtcars20 documentation
ℹ Loading mtcars20
[1] TRUE
```
Updating documentation does not reprocess the data.
Once the documentation is updated in
R/mtcars20.R, the package can be rebuilt with package_build().
Why not just use R CMD build?
If the processing script is time consuming or the data set is particularly large, then
R CMD build would run the code each time the package is installed. In such cases, raw data may not be available, or the environment to do the data processing may not be set up for each user of the data. DataPackageR decouples data processing from package building/installation for data consumers.
Installing and using the new data package.
Accessing vignettes, data sets, and data set documentation.
The package source also contains files in the vignettes and
inst/doc directories that provide a log of the data processing.
When the package is installed, these will be accessible via the vignette() API.
The vignette will detail the processing performed by the
subsetCars.Rmd processing script.
The data set documentation will be accessible via
?cars_over_20, and the data sets via
data("cars_over_20").
```r
# Create a temporary library to install into.
dir.create(file.path(tempdir(), "lib"))
```

```
Warning in dir.create(file.path(tempdir(), "lib")): '/tmp/RtmphVE6Rt/lib'
already exists
```

```r
# Let's use the package we just created.
install.packages(file.path(tempdir(), "mtcars20_1.0.tar.gz"),
                 type = "source", repos = NULL,
                 lib = file.path(tempdir(), "lib"))
lns <- loadNamespace
if (!"package:mtcars20" %in% search())
  attachNamespace(lns("mtcars20", lib.loc = file.path(tempdir(), "lib")))
# use library() in your code

data("cars_over_20") # load the data

# now we can use it.
cars_over_20
```

```
   speed dist
44    22   66
45    23   54
46    24   70
47    24   92
48    24   93
49    24  120
50    25   85
```

```r
# See the documentation you wrote in data-raw/documentation.R.
?cars_over_20

vignettes <- vignette(package = "mtcars20", lib.loc = file.path(tempdir(), "lib"))
vignettes$results
```

```
     Package    LibPath               Item         Title
[1,] "mtcars20" "/tmp/RtmphVE6Rt/lib" "subsetCars" "A Test Document for DataPackageR (source, html)"
```
Using the DataVersion.
Your downstream data analysis can depend on a specific version of the data in your data package by testing the DataVersion string in the DESCRIPTION file.
We provide an API for this:
```r
# We can easily check the version of the data.
DataPackageR::data_version("mtcars20", lib.loc = file.path(tempdir(), "lib"))
```

```
[1] '0.1.0'
```

```r
# You can use an assert to check the data version in reports and
# analyses that use the packaged data.
assert_data_version(data_package_name = "mtcars20",
                    version_string = "0.1.0",
                    acceptable = "equal",
                    lib.loc = file.path(tempdir(), "lib"))
# If this fails, execution stops
# and provides an informative error.
```
Migrating old data packages.
Version 1.12.0 moved away from controlling the build process using
datasets.R and a separate configuration file.
The build process is now controlled via a
datapackager.yml configuration file located in the package root directory. See YAML Configuration Details.
Create a datapackager.yml file.
You can migrate an old package by constructing such a config file using the
construct_yml_config() API:
```r
# Assume I have file1.Rmd and file2.R located in /data-raw, and these
# create 'object1' and 'object2' respectively.
config <- construct_yml_config(code = c("file1.Rmd", "file2.R"),
                               data = c("object1", "object2"))
cat(yaml::as.yaml(config))
```

```yaml
configuration:
  files:
    file1.Rmd:
      enabled: yes
    file2.R:
      enabled: yes
  objects:
  - object1
  - object2
  render_root:
    tmp: '745175'
```
config is a newly constructed yaml configuration object. It can be written to the package directory:
```r
path_to_package <- tempdir() # e.g., if tempdir() was the root of our package.
yml_write(config, path = path_to_package)
```
Now the package at
path_to_package will build with version 1.12.0 or greater.
Reading data sets from Rmd files.
In versions prior to 1.12.1 we would read data sets from
inst/extdata in an
Rmd script using paths relative to
data-raw in the data package source tree.
The old way.
```r
# Read 'myfile.csv' from inst/extdata, relative to data-raw where the Rmd is rendered.
read.csv(file.path("../inst/extdata", "myfile.csv"))
```
R and Rmd scripts are now processed in the directory
render_root defined in the yaml config.
To read a raw data set we can get the path to the package source directory using an API call:
The new way.
```r
# DataPackageR::project_extdata_path() returns the path to the data package
#   inst/extdata subdirectory.
# DataPackageR::project_path() returns the path to the data package
#   root directory.
# DataPackageR::project_data_path() returns the path to the data package
#   data subdirectory.
read.csv(DataPackageR::project_extdata_path("myfile.csv"))
```
We can also perform partial builds of a subset of files in a package by toggling the
enabled key in the yaml config file.
This can be done with the following API:
```r
config <- yml_disable_compile(config, filenames = "file2.R")
yml_write(config, path = path_to_package) # write modified yml to the package.
```

```yaml
configuration:
  files:
    file1.Rmd:
      enabled: yes
    file2.R:
      enabled: no
  objects:
  - object1
  - object2
  render_root:
    tmp: '745175'
```
Note that the modified configuration needs to be written back to the package source directory in order for the changes to take effect.
The consequence of toggling a file to
enabled: no is that it will be skipped when the package is rebuilt, but the data will still be retained in the package, and the documentation will not be altered.
This is useful in situations where we have multiple data sets, and we want to re-run one script to update a specific data set, but not the other scripts because they may be too time consuming.
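A disabled file can later be re-enabled the same way; a sketch (assuming the config and path_to_package objects from the example above):

```r
# Turn file2.R back on so it is reprocessed during the next build.
config <- yml_enable_compile(config, filenames = "file2.R")
yml_write(config, path = path_to_package)
```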
We may have situations where we have multi-script pipelines. There are two ways to share data among scripts.
- filesystem artifacts
- data objects passed to subsequent scripts
File system artifacts.
The yaml configuration property
render_root specifies the working directory where scripts will be rendered.
If a script writes files to the working directory, that is where files will appear. These can be read by subsequent scripts.
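A sketch of the file-artifact pattern (the object and file names are hypothetical; both scripts run with render_root as their working directory):

```r
# In script1.Rmd: write an intermediate result into the working
# directory (render_root).
intermediate <- data.frame(id = 1:3, value = c(10, 20, 30))
saveRDS(intermediate, "intermediate.rds")

# In script2.Rmd: read the artifact back from the same working directory.
intermediate <- readRDS("intermediate.rds")
```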
Passing data objects to subsequent scripts.
A script can access a data object designated to be packaged by previously run scripts using the
datapackager_object_read() API. Suppose
script2.Rmd will run after
script1.Rmd, and
script2.Rmd needs to access a data object that has been designated to be packaged named
dataset1, which was created by
script1.Rmd. This data set can be accessed by
script2.Rmd using the following expression:
```r
dataset1 <- DataPackageR::datapackager_object_read("dataset1")
```
Passing of data objects amongst scripts can be turned off via:
package_build(deps = FALSE)
We recommend the following once your package is created.
Place your package under source control.
You now have a data package source tree. Place your package under version control:
- Run git init in the package source root to initialize a new git repository.
- Create a new repository for your data package on github.
- Push your local package repository to github (see step 7).
This will let you version control your data processing code, and will provide a mechanism for sharing your package with others.
For more details on using git and github with R, there is an excellent guide provided by Jenny Bryan: Happy Git and GitHub for the useR and Hadley Wickham’s book on R packages.
Fingerprints of stored data objects.
DataPackageR calculates an md5 checksum of each data object it stores, and keeps track of them in a file called DATADIGEST.
- Each time the package is rebuilt, the md5 sums of the new data objects are compared against the DATADIGEST.
- If they do not match, the build process checks that the
DataVersion string has been incremented in the
DESCRIPTION file.
- If it has not, the build process will exit and produce an error message.
The DATADIGEST file contains the following:
```
DataVersion: 0.1.0
cars_over_20: 3ccb5b0aaa74fe7cfc0d3ca6ab0b5cf3
```
The DESCRIPTION file has the new
DataVersion:
```
Package: mtcars20
Title: What the Package Does (One Line, Title Case)
Version: 1.0
Authors@R: 
    person("First", "Last", , "first.last@example.com", role = c("aut", "cre"),
           comment = c(ORCID = "YOUR-ORCID-ID"))
Description: What the package does (one paragraph).
License: `use_mit_license()`, `use_gpl3_license()` or friends to pick a license
Encoding: UTF-8
Roxygen: list(markdown = TRUE)
RoxygenNote: 7.2.3
DataVersion: 0.1.0
Date: 2023-03-06
Suggests: 
    knitr,
    rmarkdown
VignetteBuilder: knitr
```