Preloads a causal language model to speed up subsequent runs.
Usage
causal_preload(
model = getOption("pangoling.causal.default"),
checkpoint = NULL,
add_special_tokens = NULL,
config_model = NULL,
config_tokenizer = NULL
)
Arguments
- model
Name of a pre-trained model or a folder containing one. Models based on "gpt2" should work. See the Hugging Face website.
- checkpoint
Folder of a checkpoint.
- add_special_tokens
Whether to include special tokens. It has the same default as Python's AutoTokenizer method.
- config_model
List with other arguments that control how the model from Hugging Face is accessed.
- config_tokenizer
List with other arguments that control how the tokenizer from Hugging Face is accessed.
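For instance, a typical call might look like the following (a minimal sketch; it assumes pangoling and its Python dependencies are already installed):

# Preload the default causal model (by default "gpt2") so that
# later calls skip the download/initialization cost
causal_preload()

# Or preload a specific pre-trained model by name
causal_preload(model = "gpt2")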
More details about causal models
A causal language model (also called a GPT-like, auto-regressive, or decoder model) is a type of large language model usually used for text generation; it predicts the next word (or, more accurately, the next token) based on the preceding context.
If not specified, the causal model used will be the one set in the global option pangoling.causal.default, which can be accessed via getOption("pangoling.causal.default") ("gpt2" by default). To change the default, use options(pangoling.causal.default = "newcausalmodel").
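For instance, a session-level workflow could look like this (a sketch; "distilgpt2" is just one example of a causal model available on Hugging Face):

# Change the default causal model for the session...
options(pangoling.causal.default = "distilgpt2")
# ...and preload it; causal_preload() reads the global option
causal_preload()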
A list of possible causal models can be found on the Hugging Face website.
Using the config_model and config_tokenizer arguments, it's possible to control how the model and tokenizer from Hugging Face are accessed; see the Python method from_pretrained for details.
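For instance, passing options through these lists might look like this (a hedged sketch: revision and use_fast are arguments of the Python from_pretrained() methods, not of pangoling itself):

# Pin a specific model revision and request the non-fast tokenizer;
# each list is forwarded to the corresponding from_pretrained() call
causal_preload(
  model = "gpt2",
  config_model = list(revision = "main"),
  config_tokenizer = list(use_fast = FALSE)
)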
If errors occur when running a new model, check the status of Hugging Face at https://status.huggingface.co/.
See also
Other causal model helper functions:
causal_config()
