Writes a Data Package and its related Data Resources to disk as a `datapackage.json` file and CSV files. Already existing CSV files of the same name will not be overwritten. The function can also be used to download a Data Package in its entirety.
The Data Resources are handled as follows:

- Resource `path` has at least one local path (e.g. `deployments.csv`): CSV files are copied or downloaded to `directory` and `path` points to the new location of the file(s).
- Resource `path` has only URL(s): resource stays as is.
- Resource has inline `data` originally: resource stays as is.
- Resource has inline `data` as a result of adding data with `add_resource()`: data are written to a CSV file using `readr::write_csv()`, `path` points to the location of the file, and the `data` property is removed. Use `compress = TRUE` to gzip those CSV files.
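The last case can be sketched as follows. This is a minimal example, assuming the frictionless package is installed; the resource name `"iris"` and the built-in `iris` data frame are illustrative choices, not part of this page.

```r
library(frictionless)

# Create an empty Data Package and add a data frame as an inline resource
package <- create_package()
package <- add_resource(package, resource_name = "iris", data = iris)

# With compress = TRUE, the added data are written as a gzipped CSV
# (iris.csv.gz); the resource's "data" property is removed and its
# "path" points to the new file
write_package(package, directory = "my_directory", compress = TRUE)
list.files("my_directory")

# Clean up
unlink("my_directory", recursive = TRUE)
```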
Arguments

- `package`: Data Package object, as returned by `read_package()` or `create_package()`.
- `directory`: Path to local directory to write files to.
- `compress`: If `TRUE`, data of added resources will be gzip compressed before being written to disk (e.g. `deployments.csv.gz`).
Examples
# Load the example Data Package from disk
package <- read_package(
system.file("extdata", "v1", "datapackage.json", package = "frictionless")
)
package
#> A Data Package with 3 resources:
#> • deployments
#> • observations
#> • media
#> Use `unclass()` to print the Data Package as a list.
# Write the (unchanged) Data Package to disk
write_package(package, directory = "my_directory")
# Check files
list.files("my_directory")
#> [1] "datapackage.json"   "deployments.csv"    "observations_1.tsv"
#> [4] "observations_2.tsv"
# No files written for the "media" resource, since it has inline data.
# Clean up (don't do this if you want to keep your files)
unlink("my_directory", recursive = TRUE)