Retrieve Sequence Databases from NCBI
NOTE: To ensure a sufficiently stable internet connection between R and the NCBI servers, please increase the default timeout setting on your local machine from 60 seconds to at least 300000 seconds before running any retrieval functions via:
options(timeout = 300000)
Getting Started
NCBI stores a variety of specialized databases such as GenBank, RefSeq, Taxonomy, SNP, etc. on its servers. The download.database() and download.database.all() functions implemented in biomartr allow users to download these databases from NCBI. This is particularly useful for downstream analyses such as sequence searches with e.g. BLAST. For this purpose, see the R package metablastr, which aims to seamlessly integrate biomartr-based genomic data retrieval with downstream large-scale BLAST searches.
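As a minimal sketch of the overall workflow (assuming biomartr is installed from CRAN; the database chosen here is just an example), a typical retrieval session looks like this:
# load biomartr and extend the download timeout
library(biomartr)
options(timeout = 300000)
# inspect which databases are available at NCBI
listNCBIDatabases(db = "all")
# download all files of a chosen database into a local folder
download.database.all(db = "swissprot", path = "swissprot")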
List available databases
Before downloading specific databases from NCBI, users might want to list the available databases first. Using the listNCBIDatabases() function, users can retrieve a list of all databases stored at NCBI.
# retrieve a list of available sequence databases at NCBI
biomartr::listNCBIDatabases(db = "all")
[1] "16S_ribosomal_RNA" "16S_ribosomal_RNA-nucl-metadata"
[3] "18S_fungal_sequences" "18S_fungal_sequences-nucl-metadata"
[5] "28S_fungal_sequences" "28S_fungal_sequences-nucl-metadata"
[7] "Betacoronavirus" "Betacoronavirus-nucl-metadata"
[9] "blastdb-manifest" "blastdb-metadata-1-1"
[11] "blastdbv5" "cdd_delta"
[13] "cloud" "env_nr"
[15] "env_nr-prot-metadata" "env_nt"
[17] "env_nt-nucl-metadata" "FASTA"
[19] "human_genome" "human_genome-nucl-metadata"
[21] "ITS_eukaryote_sequences" "ITS_eukaryote_sequences-nucl-metadata"
[23] "ITS_RefSeq_Fungi" "ITS_RefSeq_Fungi-nucl-metadata"
[25] "landmark" "landmark-prot-metadata"
[27] "LSU_eukaryote_rRNA" "LSU_eukaryote_rRNA-nucl-metadata"
[29] "LSU_prokaryote_rRNA" "LSU_prokaryote_rRNA-nucl-metadata"
[31] "mito" "mito-nucl-metadata"
[33] "mouse_genome" "mouse_genome-nucl-metadata"
[35] "nr" "nr-prot-metadata"
[37] "nt" "nt-nucl-metadata"
[39] "pataa" "pataa-prot-metadata"
[41] "patnt" "patnt-nucl-metadata"
[43] "pdbaa" "pdbaa-prot-metadata"
[45] "pdbnt" "pdbnt-nucl-metadata"
[47] "ref_euk_rep_genomes" "ref_euk_rep_genomes-nucl-metadata"
[49] "ref_prok_rep_genomes" "ref_prok_rep_genomes-nucl-metadata"
[51] "ref_viroids_rep_genomes" "ref_viroids_rep_genomes-nucl-metadata"
[53] "ref_viruses_rep_genomes" "ref_viruses_rep_genomes-nucl-metadata"
[55] "refseq_protein" "refseq_protein-prot-metadata"
[57] "refseq_rna" "refseq_rna-nucl-metadata"
[59] "refseq_select_prot" "refseq_select_prot-prot-metadata"
[61] "refseq_select_rna" "refseq_select_rna-nucl-metadata"
[63] "SSU_eukaryote_rRNA" "SSU_eukaryote_rRNA-nucl-metadata"
[65] "swissprot" "swissprot-prot-metadata"
[67] "taxdb" "taxdb-metadata"
[69] "tsa_nr" "tsa_nr-prot-metadata"
[71] "tsa_nt" "tsa_nt-nucl-metadata"
[73] "v4" "v5"
However, in case users already know which database they would like to retrieve, they can filter for the exact files by specifying the NCBI database name. In the following example, all sequence files that are part of the NCBI nr database shall be retrieved. First, listNCBIDatabases(db = "nr") allows users to list all files corresponding to the nr database.
# show all NCBI nr files
biomartr::listNCBIDatabases(db = "nr")
[1] "nr.00.tar.gz" "nr.01.tar.gz"
[3] "nr.02.tar.gz" "nr.03.tar.gz"
[5] "nr.04.tar.gz" "nr.05.tar.gz"
[7] "nr.06.tar.gz" "nr.07.tar.gz"
[9] "nr.08.tar.gz" "nr.09.tar.gz"
[11] "nr.10.tar.gz" "nr.11.tar.gz"
[13] "nr.12.tar.gz" "nr.13.tar.gz"
[15] "nr.14.tar.gz" "nr.15.tar.gz"
[17] "nr.16.tar.gz" "nr.17.tar.gz"
[19] "nr.18.tar.gz" "nr.19.tar.gz"
[21] "nr.20.tar.gz" "nr.21.tar.gz"
[23] "nr.22.tar.gz" "nr.23.tar.gz"
[25] "nr.24.tar.gz" "nr.25.tar.gz"
[27] "nr.26.tar.gz" "nr.27.tar.gz"
[29] "nr.28.tar.gz" "nr.29.tar.gz"
[31] "nr.30.tar.gz" "nr.31.tar.gz"
[33] "nr.32.tar.gz" "nr.33.tar.gz"
[35] "nr.34.tar.gz" "nr.35.tar.gz"
[37] "nr.36.tar.gz" "nr.37.tar.gz"
[39] "nr.38.tar.gz" "nr.39.tar.gz"
[41] "nr.40.tar.gz" "nr.41.tar.gz"
[43] "nr.42.tar.gz" "nr.43.tar.gz"
[45] "nr.44.tar.gz" "nr.45.tar.gz"
[47] "nr.46.tar.gz" "nr-prot-metadata.json"
[49] "nr.47.tar.gz" "nr.48.tar.gz"
[51] "nr.49.tar.gz" "nr.50.tar.gz"
[53] "nr.51.tar.gz" "nr.52.tar.gz"
[55] "nr.53.tar.gz" "nr.54.tar.gz"
[57] "nr.55.tar.gz"
The output illustrates that the NCBI nr database has been separated into several sub-data packages.
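To get a quick impression of how many data packages a database currently comprises, one can simply count the returned archive names; note that the exact number changes over time as NCBI updates the database:
# count the .tar.gz data packages belonging to the nr database
nr_files <- biomartr::listNCBIDatabases(db = "nr")
sum(grepl("\\.tar\\.gz$", nr_files))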
Further examples are:
# show all NCBI nt files
biomartr::listNCBIDatabases(db = "nt")
[1] "nt.00.tar.gz" "nt.01.tar.gz"
[3] "nt.02.tar.gz" "nt.03.tar.gz"
[5] "nt.04.tar.gz" "nt.05.tar.gz"
[7] "nt.06.tar.gz" "nt.07.tar.gz"
[9] "nt.08.tar.gz" "nt.09.tar.gz"
[11] "nt.10.tar.gz" "nt.11.tar.gz"
[13] "nt.12.tar.gz" "nt.13.tar.gz"
[15] "nt.14.tar.gz" "nt.15.tar.gz"
[17] "nt.16.tar.gz" "nt.17.tar.gz"
[19] "nt.18.tar.gz" "nt.19.tar.gz"
[21] "nt.20.tar.gz" "nt.21.tar.gz"
[23] "nt.22.tar.gz" "nt.23.tar.gz"
[25] "nt.24.tar.gz" "nt.25.tar.gz"
[27] "nt.26.tar.gz" "nt.27.tar.gz"
[29] "nt.28.tar.gz" "nt.29.tar.gz"
[31] "nt.30.tar.gz" "nt.31.tar.gz"
[33] "nt.32.tar.gz" "nt.33.tar.gz"
[35] "nt.34.tar.gz" "nt.35.tar.gz"
[37] "nt.36.tar.gz" "nt.37.tar.gz"
[39] "nt-nucl-metadata.json" "nt.38.tar.gz"
[41] "nt.39.tar.gz" "nt.40.tar.gz"
[43] "nt.41.tar.gz" "nt.42.tar.gz"
[45] "nt.43.tar.gz" "nt.44.tar.gz"
[47] "nt.45.tar.gz" "nt.46.tar.gz"
[49] "nt.47.tar.gz" "nt.48.tar.gz"
# show all NCBI RefSeq (only proteomes)
head(biomartr::listNCBIDatabases(db = "refseq_protein"), 20)
[1] "refseq_protein.00.tar.gz" "refseq_protein.01.tar.gz"
[3] "refseq_protein.02.tar.gz" "refseq_protein.03.tar.gz"
[5] "refseq_protein.04.tar.gz" "refseq_protein.05.tar.gz"
[7] "refseq_protein.06.tar.gz" "refseq_protein.07.tar.gz"
[9] "refseq_protein.08.tar.gz" "refseq_protein.09.tar.gz"
[11] "refseq_protein.10.tar.gz" "refseq_protein.11.tar.gz"
[13] "refseq_protein.12.tar.gz" "refseq_protein.13.tar.gz"
[15] "refseq_protein.14.tar.gz" "refseq_protein.15.tar.gz"
[17] "refseq_protein.16.tar.gz" "refseq_protein.17.tar.gz"
[19] "refseq_protein.18.tar.gz" "refseq_protein.19.tar.gz"
# show all NCBI RefSeq (only RNA)
biomartr::listNCBIDatabases(db = "refseq_rna")
[1] "refseq_rna.00.tar.gz" "refseq_rna.01.tar.gz"
[3] "refseq_rna.02.tar.gz" "refseq_rna.03.tar.gz"
[5] "refseq_rna.04.tar.gz" "refseq_rna.05.tar.gz"
[7] "refseq_rna.06.tar.gz" "refseq_rna.07.tar.gz"
[9] "refseq_rna.08.tar.gz" "refseq_rna-nucl-metadata.json"
[11] "refseq_rna.09.tar.gz"
# show NCBI swissprot
biomartr::listNCBIDatabases(db = "swissprot")
[1] "swissprot.tar.gz" "swissprot-prot-metadata.json"
# show NCBI PDB
biomartr::listNCBIDatabases(db = "pdb")
[1] "pdbaa.tar.gz" "pdbnt.tar.gz"
[3] "pdbaa-prot-metadata.json" "pdbnt-nucl-metadata.json"
# show NCBI Human database
biomartr::listNCBIDatabases(db = "human")
1] "human_genome.00.tar.gz" "human_genome.01.tar.gz"
[3] "human_genome-nucl-metadata.json"
Please note that all lookup and retrieval functions will only work properly when a stable internet connection is available.
As a next step, users can combine the listNCBIDatabases() and download.database.all() functions to retrieve all files corresponding to a specific NCBI database.
Download NCBI databases
Using the same search strategy of specifying the database name as described above, users can now download these databases using the download.database.all() function (single files can be retrieved with download.database()). The following examples illustrate the download of entire databases.
Example NCBI nr
# download the entire NCBI nr database
biomartr::download.database.all(db = "nr", path = "nr")
This command retrieves the pre-formatted database version (formatted with makeblastdb).
Using this command, all NCBI nr files are stored in the nr folder (path = "nr"). For each data package, biomartr compares the md5checksum of the downloaded file with that of the file stored online to make sure that internet connection losses didn't corrupt the file. In case you see a warning message notifying you about non-matching md5checksum values, please re-download the corresponding data package by re-running the download.database.all() command. From my own experience this can happen when server or internet connections are not very stable during the download of large data chunks.
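If you would like to verify a file manually, the md5 hash of a downloaded archive can also be computed with base R (the file path below is only an illustrative assumption):
# compute the md5 hash of a downloaded data package (example path)
tools::md5sum("nr/nr.00.tar.gz")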
Example NCBI nt
The same approach can be applied to all other databases mentioned above, e.g.:
# download the entire NCBI nt database
biomartr::download.database.all(db = "nt", path = "nt")
Example NCBI RefSeq
# download the entire NCBI refseq (protein) database
biomartr::download.database.all(db = "refseq_protein", path = "refseq_protein")
Example PDB
# download the entire NCBI PDB database
biomartr::download.database.all(db = "pdb", path = "pdb")
Example NCBI Taxonomy
Download NCBI Taxonomy via:
# download the entire NCBI taxonomy database
biomartr::download.database.all(db = "taxdb", path = "taxdb")
Starting download of the files: taxdb.tar.gz, taxdb.btd, taxdb.bti ...
This download process may take a while due to the large size of the individual data chunks ...
Starting download process of file: taxdb.tar.gz ...
Checking md5 hash of file: taxdb.tar.gz ...
The md5 hash of file 'taxdb.tar.gz' matches!
File 'taxdb/taxdb.tar.gz has successfully been retrieved.
Starting download process of file: taxdb.btd ...
Checking md5 hash of file: taxdb.btd ...
The md5 hash of file 'taxdb.btd' matches!
File 'taxdb/taxdb.btd has successfully been retrieved.
Starting download process of file: taxdb.bti ...
Checking md5 hash of file: taxdb.bti ...
The md5 hash of file 'taxdb.bti' matches!
File 'taxdb/taxdb.bti has successfully been retrieved.
Download process is finished and files are stored in 'taxdb'.
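The retrieved archives are gzipped tar files; if needed, they can be unpacked with base R (the paths below simply mirror the taxdb example above):
# unpack the retrieved taxonomy archive into the 'taxdb' folder
utils::untar("taxdb/taxdb.tar.gz", exdir = "taxdb")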
Example NCBI Swissprot
Download NCBI Swissprot via:
# download the entire NCBI swissprot database
biomartr::download.database.all(db = "swissprot", path = "swissprot")
Example NCBI CDD Delta
Download NCBI CDD Delta via:
# download the entire NCBI CDD Delta database
biomartr::download.database.all(db = "cdd_delta", path = "cdd_delta")
Please note that most of these databases are very large, so users should take care to provide a stable internet connection throughout the download process.
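As a small sanity-check sketch after a long download (assuming the nr example above with path = "nr"), the locally stored files can be compared against the file list reported by NCBI; any file names returned below would still be missing:
# list files reported by NCBI that are not yet present in the local 'nr' folder
setdiff(biomartr::listNCBIDatabases(db = "nr"), list.files("nr"))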