
Set up targets for an existing project.


Usage

use_targets(
  script = targets::tar_config_get("script"),
  scheduler = targets::use_targets_scheduler(),
  open = interactive(),
  overwrite = FALSE,
  job_name = targets::tar_random_name()
)



Arguments

script

Character of length 1, where to write the target script file. Defaults to tar_config_get("script"), which in turn defaults to _targets.R.


scheduler

Character of length 1, type of scheduler for parallel computing. The default is automatically detected from your system (but PBS and Torque cannot be distinguished from SGE, and SGE is the default among the three). Possible values:

  • "multicore": local forked processes on Linux-like systems (treated the same as "multiprocess" in the tar_make_future() options).

  • "multiprocess": local platform-independent multi-process computing.

  • "slurm": SLURM clusters.

  • "sge": Sun Grid Engine clusters.

  • "lsf": LSF clusters.

  • "pbs": PBS clusters. (batchtools template file not available.)

  • "torque": Torque clusters.
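For example, to bypass auto-detection and target a specific scheduler, you can pass the value directly. A hedged sketch (the exact template files written depend on your system):

```r
library(targets)

# Generate _targets.R and supporting files for a SLURM cluster,
# without opening the target script in the IDE.
use_targets(
  scheduler = "slurm",
  open = FALSE
)
```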


open

Logical, whether to open the file for editing in the RStudio IDE.


overwrite

Logical of length 1, whether to overwrite the targets file and supporting files if they already exist.


job_name

Character of length 1, job name to supply to schedulers like SLURM.


Value

NULL (invisibly).


Details

To set up a project-oriented, function-oriented workflow for targets, use_targets() writes:

  1. A target script _targets.R tailored to your system.

  2. Template files "clustermq.tmpl" and "future.tmpl" to configure tar_make_clustermq() and tar_make_future() to a resource manager if detected on your system. They should work out of the box on most systems, but you may need to modify them by hand if you encounter errors.

  3. Script run.R to conveniently execute the pipeline using tar_make(). You can change this to tar_make_clustermq() or tar_make_future() and supply the workers argument to either.

  4. A shell script to conveniently call run.R in a persistent background process. Enter ./ followed by the script's file name in the shell to run it.

  5. If you have a high-performance computing scheduler like Sun Grid Engine (SGE) (or select one using the scheduler argument of use_targets()), then a cluster job script is created. It conveniently executes run.R as a job on a cluster. For example, to run the pipeline as a job on an SGE cluster, submit the script with qsub in the terminal. It should work out of the box on most systems, but you may need to modify it by hand if you encounter errors.
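As a sketch, the generated run.R is typically a thin wrapper around one pipeline function (assuming defaults; the file generated on your system may differ):

```r
# run.R: execute the pipeline with the default local runner.
# Swap in targets::tar_make_clustermq(workers = 2) or
# targets::tar_make_future(workers = 2) to run targets in parallel.
targets::tar_make()
```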

After you call use_targets(), there is still configuration left to do:

  1. Open _targets.R and edit by hand. Follow the comments to write any options, packages, and target definitions that your pipeline requires.

  2. Edit run.R and choose which pipeline function to execute (tar_make(), tar_make_clustermq(), or tar_make_future()).

  3. If applicable, edit clustermq.tmpl and/or future.tmpl to configure settings for your resource manager.

  4. If applicable, configure the cluster job script, "clustermq.tmpl", and/or "future.tmpl" for your resource manager.
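When editing _targets.R by hand, the general shape is a few global options followed by a list of target definitions. A minimal sketch, where get_data() and fit_model() are hypothetical placeholder functions you would define yourself (for example in scripts under R/):

```r
# _targets.R
library(targets)

# Packages that your targets need when they run.
tar_option_set(packages = c("tibble"))

# Load your custom functions, e.g. get_data() and fit_model():
# lapply(list.files("R", full.names = TRUE), source)

# The pipeline: a list of target definitions.
list(
  tar_target(data, get_data()),       # get_data() is a user-defined function
  tar_target(model, fit_model(data))  # fit_model() is a user-defined function
)
```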

After you have finished configuring your project, follow these steps:

  1. Run tar_glimpse() and tar_manifest() to check that the targets in the pipeline are defined correctly.

  2. Run the pipeline. You may wish to call a tar_make*() function directly, or you may run run.R or one of the generated shell scripts.

  3. Inspect the target output using tar_read() and/or tar_load().

  4. Develop the pipeline as needed by manually editing _targets.R and the scripts in R/ and repeating steps (1) through (3).
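The steps above can be sketched as an interactive session, assuming your pipeline defines a target named data (a hypothetical name for illustration):

```r
library(targets)

tar_glimpse()   # dependency graph of the targets only
tar_manifest()  # tabular summary of the target definitions
tar_make()      # run the pipeline
tar_read(data)  # inspect the stored output of the target `data`
```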

See also


Examples

if (identical(Sys.getenv("TAR_INTERACTIVE_EXAMPLES"), "true")) {
tar_dir({ # tar_dir() runs code from a temp dir for CRAN.
use_targets(open = FALSE)
})
}