Use the bulk API to index documents
Usage
docs_bulk_index(
  conn,
  x,
  index = NULL,
  type = NULL,
  chunk_size = 1000,
  doc_ids = NULL,
  es_ids = TRUE,
  raw = FALSE,
  quiet = FALSE,
  query = list(),
  digits = NA,
  sf = NULL,
  ...
)
Arguments
- conn
An Elasticsearch connection object; see connect()
- x
A list, data.frame, or character path to a file. Required.
- index
(character) The index name to use. Required for data.frame input, but optional for file inputs.
- type
(character) The type. Default: NULL. Note that type is deprecated in Elasticsearch v7 and greater, and removed in Elasticsearch v8
- chunk_size
(integer) Size of each chunk. If your data.frame is smaller than chunk_size, this parameter is essentially ignored. We write in chunks because at some point, depending on the size of each document and your Elasticsearch setup, writing a very large number of documents in one go becomes slow, so chunking can help. This parameter is ignored if you pass a file name. Default: 1000. See the sketch after this list
- doc_ids
An optional vector (character or numeric/integer) of document ids to use. This vector must be the same length as the number of documents you pass in; if not, an error is thrown. If you pass a factor, it is converted to character. Default: not passed
- es_ids
(boolean) Let Elasticsearch assign document IDs as UUIDs. These are sequential, so there is order to the IDs they assign. If TRUE, doc_ids is ignored. Default: TRUE
- raw
(logical) Get raw JSON back or not. If TRUE you get JSON; if FALSE you get a list. Default: FALSE
- quiet
(logical) Suppress progress bar. Default: FALSE
- query
(list) A named list of query string parameters. Optional. Options include: pipeline, refresh, routing, _source, _source_excludes, _source_includes, timeout, wait_for_active_shards. See the Elasticsearch bulk API docs for details
- digits
The number of digits used by the parameter of the same name in jsonlite::toJSON() to convert data to JSON before it is submitted to your Elasticsearch instance. Default: NA
- sf
Used by jsonlite::toJSON() to convert sf objects. Set to "features" for conversion to GeoJSON. Default: "dataframe"
- ...
Pass on curl options to crul::HttpClient
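A minimal sketch of how chunk_size, doc_ids, and es_ids interact (illustrative only; it assumes a running Elasticsearch instance reachable via connect(), and the index name "chunk_demo" is made up):

con <- connect()
df <- data.frame(name = letters[1:5], size = 1:5)
# with 5 rows and chunk_size = 2, the data is written in three bulk requests
docs_bulk_index(con, df, index = "chunk_demo", chunk_size = 2)
# supply your own document ids; set es_ids = FALSE so doc_ids is not ignored
docs_bulk_index(con, df, index = "chunk_demo", doc_ids = 101:105, es_ids = FALSE)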
Details
For indexing a file already prepared for the bulk API, see docs_bulk(); a sketch of that route follows.
Only data.frames are supported for now.
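A hedged sketch of the file-based route (assumes a running cluster; docs_bulk_prep(), listed under "See also", writes a bulk-format file that docs_bulk() can then index; the index name "cars_file" is made up):

con <- connect()
tf <- tempfile(fileext = ".json")
# write mtcars out as a bulk-format file, then index that file
docs_bulk_prep(mtcars, index = "cars_file", path = tf)
docs_bulk(con, tf)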
See also
Other bulk-functions: docs_bulk(), docs_bulk_create(), docs_bulk_delete(), docs_bulk_prep(), docs_bulk_update()
Examples
if (FALSE) { # \dontrun{
x <- connect()
if (index_exists(x, "foobar")) index_delete(x, "foobar")
df <- data.frame(name = letters[1:3], size = 1:3, id = 100:102)
docs_bulk_index(x, df, 'foobar')
docs_bulk_index(x, df, 'foobar', es_ids = FALSE)
Search(x, "foobar", asdf = TRUE)$hits$hits
# more examples
docs_bulk_index(x, mtcars, index = "hello")
## field names cannot contain dots
names(iris) <- gsub("\\.", "_", names(iris))
docs_bulk_index(x, iris, "iris")
## type can be missing, but index cannot
docs_bulk_index(x, iris, "flowers")
## big data.frame, 53K rows, load ggplot2 package first
# res <- docs_bulk_index(x, diamonds, "diam")
# Search(x, "diam")$hits$total$value
} # }