Advanced Scripts¶
These are the more obscure and advanced scripts in ParlAI.
build_candidates¶
Short description: Build the candidate responses for a retrieval model
Build the candidate responses for a retrieval model.
Examples¶
parlai build_candidates --task convai2 --outfile /tmp/cands.txt
CLI Arguments¶
Argument | Description
---|---
 | Path to a json file of options. Note: further command-line arguments override file-based options.
 | Warn instead of raising if an argument passed in with --init-opt is not in the target opt.
 | ParlAI task(s), e.g. "babi:Task1" or "babi,cbt"
 | Choose from: train, train:ordered, valid, test. To stream data, add ":stream" to any option (e.g., train:stream). By default, train is random with replacement, valid is ordered, and test is ordered.
 | Batch size for minibatch training schemes
 | Use dynamic batching
 | Print all messages
 | Enables some debug behavior
 | Path to datasets, defaults to {parlai_dir}/data
 | Total number of examples to convert; -1 to convert all examples
 | Output file to save to; by default it will be created in /tmp
build_dict¶
Short description: Build a dictionary.
Generates a dictionary file from the training data.
Examples¶
# learn the vocabulary from one task, then train on another task.
parlai build_dict --task convai2 --dict-file premade.dict
parlai train_model --task squad --dict-file premade.dict --model seq2seq
CLI Arguments¶
Argument | Description
---|---
 | Path to a json file of options. Note: further command-line arguments override file-based options.
 | Warn instead of raising if an argument passed in with --init-opt is not in the target opt.
 | ParlAI task(s), e.g. "babi:Task1" or "babi,cbt"
 | Choose from: train, train:ordered, valid, test. To stream data, add ":stream" to any option (e.g., train:stream). By default, train is random with replacement, valid is ordered, and test is ordered.
 | Batch size for minibatch training schemes
 | Use dynamic batching
 | Print all messages
 | Enables some debug behavior
 | Path to datasets, defaults to {parlai_dir}/data
 | The model class name; can match parlai/agents/
 | Model file name for loading and saving models
 | Initialize model weights and dict from this file
 | Max number of examples to build the dict on
 | Include the validation set in dictionary building for the task.
 | Include the test set in dictionary building for the task.
 | Path to pre-trained tokenizer vocab
 | Path to pre-trained tokenizer merge
 | Use BPE dropout during training.
convert_to_json¶
Short description: Convert data to json format
Converts data used in a task to json format (the same as the "Conversation" class; i.e., for use in ACUTE-Eval).
Specify the task with -t. By default, this script will save to a file with the prefix "tmp". To change the prefix, set --world-logs.
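Examples¶
For instance, to dump the ConvAI2 validation set (the task, datatype, and output prefix here are only illustrative):
parlai convert_to_json --task convai2 --datatype valid --world-logs /tmp/convai2_convos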
CLI Arguments¶
Argument | Description
---|---
 | Path to a json file of options. Note: further command-line arguments override file-based options.
 | Warn instead of raising if an argument passed in with --init-opt is not in the target opt.
 | ParlAI task(s), e.g. "babi:Task1" or "babi,cbt"
 | Choose from: train, train:ordered, valid, test. To stream data, add ":stream" to any option (e.g., train:stream). By default, train is random with replacement, valid is ordered, and test is ordered.
 | Batch size for minibatch training schemes
 | Use dynamic batching
 | Print all messages
 | Enables some debug behavior
 | Path to datasets, defaults to {parlai_dir}/data
 | The model class name; can match parlai/agents/
 | Model file name for loading and saving models
 | Initialize model weights and dict from this file
 | Saves a json file of the evaluation report, either as an extension to the model file (if it begins with a ".") or as a whole file path. Set to the empty string to not save at all.
 | Saves a jsonl file of the world logs. Set to the empty string to not save at all.
 | A positive number indicates to calculate the area under the ROC curve; it also determines how many decimal digits of the predictions to keep (higher numbers -> more precise), and whether or not to calculate the AUC metric at all.
 | The name(s) of the class to calculate the AUC for
 | List of metrics to show/compute, e.g. "all", "default", or a comma-separated list like ppl,f1,accuracy,hits@1,rouge,bleu. The rouge metrics will be computed as rouge-1, rouge-2 and rouge-L.
 | Report micro-averaged metrics instead of macro-averaged metrics.
 | Fields to keep when logging. Should be a comma-separated list.
 | Tensorboard logging of metrics
 | Tensorboard logging directory, defaults to model_file.tensorboard
convert_to_parlai¶
Short description: Dump a task to a standardized format
Convert a dataset into the ParlAI text format.
Examples¶
parlai convert_data_to_parlai_format --task babi:task1k:1 --outfile /tmp/dump
CLI Arguments¶
Argument | Description
---|---
 | Path to a json file of options. Note: further command-line arguments override file-based options.
 | Warn instead of raising if an argument passed in with --init-opt is not in the target opt.
 | ParlAI task(s), e.g. "babi:Task1" or "babi,cbt"
 | Choose from: train, train:ordered, valid, test. To stream data, add ":stream" to any option (e.g., train:stream). By default, train is random with replacement, valid is ordered, and test is ordered.
 | Batch size for minibatch training schemes
 | Use dynamic batching
 | Print all messages
 | Enables some debug behavior
 | Path to datasets, defaults to {parlai_dir}/data
 | Total number of examples to convert; -1 to convert all examples
 | Output file to save to; by default it will be created in tmp
 | Ignore these fields from the message (returned with .act())
convo_render¶
Short description: Render data as HTML
CLI Arguments¶
Argument | Description
---|---
 | Path to a json file of options. Note: further command-line arguments override file-based options.
 | Warn instead of raising if an argument passed in with --init-opt is not in the target opt.
 | ParlAI task(s), e.g. "babi:Task1" or "babi,cbt"
 | Choose from: train, train:ordered, valid, test. To stream data, add ":stream" to any option (e.g., train:stream). By default, train is random with replacement, valid is ordered, and test is ordered.
 | Batch size for minibatch training schemes
 | Use dynamic batching
 | Print all messages
 | Enables some debug behavior
 | Path to datasets, defaults to {parlai_dir}/data
 | Input file to read conversations from
 | Output file to write conversations to. One of [.pdf, .png, .html] only
 | Width of output file
 | Height of output file
 | Absolute path/URL to the user image icon
 | Absolute path/URL to the alternate image icon
 | Number of conversations to render
data_stats¶
Short description: Compute data statistics
Count and display statistics of the data.
Examples¶
parlai data_stats --task convai2
CLI Arguments¶
Argument | Description
---|---
 | Path to a json file of options. Note: further command-line arguments override file-based options.
 | Warn instead of raising if an argument passed in with --init-opt is not in the target opt.
 | ParlAI task(s), e.g. "babi:Task1" or "babi,cbt"
 | Choose from: train, train:ordered, valid, test. To stream data, add ":stream" to any option (e.g., train:stream). By default, train is random with replacement, valid is ordered, and test is ordered.
 | Batch size for minibatch training schemes
 | Use dynamic batching
 | Print all messages
 | Enables some debug behavior
 | Path to datasets, defaults to {parlai_dir}/data
 | Use teacher (agent 0) or model (agent 1)
 | New lines treat substrings as separate utterances.
 | Ignore tokens containing these substrings (comma-separated)
 | Path to pre-trained tokenizer vocab
 | Path to pre-trained tokenizer merge
 | Use BPE dropout during training.
detect_offensive¶
Short description: Check task for offensive language
Basic example which iterates through the tasks specified and checks them for offensive language.
Examples¶
parlai detect_offensive_language --task "convai_chitchat" --display-examples True
CLI Arguments¶
Argument | Description
---|---
 | Path to a json file of options. Note: further command-line arguments override file-based options.
 | Warn instead of raising if an argument passed in with --init-opt is not in the target opt.
 | ParlAI task(s), e.g. "babi:Task1" or "babi,cbt"
 | Choose from: train, train:ordered, valid, test. To stream data, add ":stream" to any option (e.g., train:stream). By default, train is random with replacement, valid is ordered, and test is ordered.
 | Batch size for minibatch training schemes
 | Use dynamic batching
 | Print all messages
 | Enables some debug behavior
 | Path to datasets, defaults to {parlai_dir}/data
 | The model class name; can match parlai/agents/
 | Model file name for loading and saving models
 | Initialize model weights and dict from this file
 | Type of safety detector to apply to messages
eval_wordstat¶
Short description: Compute statistics from model predictions
This helper script can be used on its own with a model file and task: the output will contain the word statistics of the model's outputs. The function defined here can also be used elsewhere to obtain such statistics for any agent, given the agent object (with the corresponding dict) and a sequence.
It additionally provides the function get_word_stats, which can be used in other parts of runtime code since it depends only on the agent object.
For example:
from parlai.scripts.eval_wordstat import get_word_stats
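# Here `predictions` is assumed to be the model's output sequence (hence .tolist())
# and `self.dict` the agent's dictionary, as described above.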
reqs, cnt = get_word_stats(predictions.tolist(), self.dict)
Examples¶
parlai eval_wordstat --model-file /path/to/model_file --task convai2:self --freq-bins 10,100,1000
CLI Arguments¶
Argument | Description
---|---
 | Path to a json file of options. Note: further command-line arguments override file-based options.
 | Warn instead of raising if an argument passed in with --init-opt is not in the target opt.
 | ParlAI task(s), e.g. "babi:Task1" or "babi,cbt"
 | Choose from: train, train:ordered, valid, test. To stream data, add ":stream" to any option (e.g., train:stream). By default, train is random with replacement, valid is ordered, and test is ordered.
 | Batch size for minibatch training schemes
 | Use dynamic batching
 | Print all messages
 | Enables some debug behavior
 | Path to datasets, defaults to {parlai_dir}/data
 | The model class name; can match parlai/agents/
 | Model file name for loading and saving models
 | Initialize model weights and dict from this file
 | Path to pre-trained tokenizer vocab
 | Path to pre-trained tokenizer merge
 | Use BPE dropout during training.
 | External dictionary for stat computation
 | Bin boundaries for the rare-words stat
 | Dump predictions into a file
 | Compute % of unique responses from the model
 | Tensorboard logging of metrics
 | Tensorboard logging directory, defaults to model_file.tensorboard
extract_image_feature¶
Short description: Load/extract image features
Basic example which iterates through the tasks specified and loads/extracts the image features.
For more options, check parlai.core.image_featurizers.
Examples¶
To extract the image features of COCO images:
parlai extract_image_feature --task vqa_v1 --image-mode resnet152
CLI Arguments¶
Argument | Description
---|---
 | Path to a json file of options. Note: further command-line arguments override file-based options.
 | Warn instead of raising if an argument passed in with --init-opt is not in the target opt.
 | ParlAI task(s), e.g. "babi:Task1" or "babi,cbt"
 | Choose from: train, train:ordered, valid, test. To stream data, add ":stream" to any option (e.g., train:stream). By default, train is random with replacement, valid is ordered, and test is ordered.
 | Batch size for minibatch training schemes
 | Use dynamic batching
 | Print all messages
 | Enables some debug behavior
 | Path to datasets, defaults to {parlai_dir}/data
flask¶
Example Flask server which hosts a model.
Examples¶
Serving the model
parlai flask -m repeat_query
parlai flask -mf zoo:blender/blender_90M/model
Hitting the API
curl -k http://localhost:5000/response -H "Content-Type: application/json" -d '{"text": "foobar"}'
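As a rough Python sketch of the same request (this assumes the requests package is installed and the server above is running on its default port; the exact fields of the JSON reply are not documented here, so the example simply prints it):
import requests

# Send one user message to the locally running ParlAI Flask server.
reply = requests.post(
    "http://localhost:5000/response",
    json={"text": "foobar"},
)
print(reply.json())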
CLI Arguments¶
Argument | Description
---|---
 | Path to a json file of options. Note: further command-line arguments override file-based options.
 | Warn instead of raising if an argument passed in with --init-opt is not in the target opt.
 | ParlAI task(s), e.g. "babi:Task1" or "babi,cbt"
 | Choose from: train, train:ordered, valid, test. To stream data, add ":stream" to any option (e.g., train:stream). By default, train is random with replacement, valid is ordered, and test is ordered.
 | Batch size for minibatch training schemes
 | Use dynamic batching
 | Print all messages
 | Enables some debug behavior
 | Path to datasets, defaults to {parlai_dir}/data
 | The model class name; can match parlai/agents/
 | Model file name for loading and saving models
 | Initialize model weights and dict from this file
interactive_web¶
Short description: Interactive chat with a model in a web browser
Aliases: iweb
Talk with a model using a web UI.
Examples¶
parlai interactive_web --model-file "zoo:tutorial_transformer_generator/model"
CLI Arguments¶
Argument | Description
---|---
 | Path to a json file of options. Note: further command-line arguments override file-based options.
 | Warn instead of raising if an argument passed in with --init-opt is not in the target opt.
 | ParlAI task(s), e.g. "babi:Task1" or "babi,cbt"
 | Choose from: train, train:ordered, valid, test. To stream data, add ":stream" to any option (e.g., train:stream). By default, train is random with replacement, valid is ordered, and test is ordered.
 | Batch size for minibatch training schemes
 | Use dynamic batching
 | Print all messages
 | Enables some debug behavior
 | Path to datasets, defaults to {parlai_dir}/data
 | The model class name; can match parlai/agents/
 | Model file name for loading and saving models
 | Initialize model weights and dict from this file
 | Set to use a prettytable when displaying examples with text candidates
 | Display these fields when verbose is off (e.g., "--display-add-fields label_candidates,beam_texts")
 | Create an interactive version of the task
 | Saves a jsonl file containing all of the task examples and model replies. Set to the empty string to not save at all.
 | Format to save logs in. "conversations" is a jsonl format, "parlai" is a text format.
 | File of label_candidates to send to the other agent
 | If on, assumes single-turn episodes.
 | Fields to keep when logging. Should be a comma-separated list.
 | Port to listen on.
 | Host from which to allow requests; use 0.0.0.0 to allow all IPs.
multiprocessing_eval¶
Short description: Evaluate a model
Aliases: mp_eval
Main launch script for single-host, multi-GPU evaluation.
This is a drop-in replacement for [eval_model]. This script will launch N subprocesses, each of which runs the full eval loop independently.
Uses torch.nn.parallel.DistributedDataParallel under the hood. Agents must specifically support being wrapped in DistributedDataParallel, but all TorchRankerAgents and TorchGeneratorAgents support this.
Examples¶
parlai multiprocessing_eval --model-file "zoo:tutorial_transformer_generator/model" --batchsize 16 --task convai2
CLI Arguments¶
Argument | Description
---|---
 | Path to a json file of options. Note: further command-line arguments override file-based options.
 | Warn instead of raising if an argument passed in with --init-opt is not in the target opt.
 | ParlAI task(s), e.g. "babi:Task1" or "babi,cbt"
 | Choose from: train, train:ordered, valid, test. To stream data, add ":stream" to any option (e.g., train:stream). By default, train is random with replacement, valid is ordered, and test is ordered.
 | Batch size for minibatch training schemes
 | Use dynamic batching
 | Print all messages
 | Enables some debug behavior
 | Path to datasets, defaults to {parlai_dir}/data
 | The model class name; can match parlai/agents/
 | Model file name for loading and saving models
 | Initialize model weights and dict from this file
 | Saves a json file of the evaluation report, either as an extension to the model file (if it begins with a ".") or as a whole file path. Set to the empty string to not save at all.
 | Saves a jsonl file of the world logs. Set to the empty string to not save at all.
 | A positive number indicates to calculate the area under the ROC curve; it also determines how many decimal digits of the predictions to keep (higher numbers -> more precise), and whether or not to calculate the AUC metric at all.
 | The name(s) of the class to calculate the AUC for
 | List of metrics to show/compute, e.g. "all", "default", or a comma-separated list like ppl,f1,accuracy,hits@1,rouge,bleu. The rouge metrics will be computed as rouge-1, rouge-2 and rouge-L.
 | Report micro-averaged metrics instead of macro-averaged metrics.
 | Fields to keep when logging. Should be a comma-separated list.
 | Tensorboard logging of metrics
 | Tensorboard logging directory, defaults to model_file.tensorboard
 | Number of workers.
 | Distributed backend. Zero2 can be faster but is more experimental. Zero3 significantly reduces memory pressure. DDP is the most tested.
multiprocessing_train¶
Short description: Train a model
Aliases: mp_train
Main launch script for single-host, multi-GPU training.
This is a drop-in replacement for [train_model]. This script will launch N subprocesses, each of which runs the full training loop independently.
Uses torch.nn.parallel.DistributedDataParallel under the hood. Agents must specifically support being wrapped in DistributedDataParallel, but all TorchRankerAgents and TorchGeneratorAgents support this.
Examples¶
parlai multiprocessing_train -m transformer/generator --batchsize 16 --task convai2 --model-file /tmp/mymodel
CLI Arguments¶
Argument | Description
---|---
 | Path to a json file of options. Note: further command-line arguments override file-based options.
 | Warn instead of raising if an argument passed in with --init-opt is not in the target opt.
 | ParlAI task(s), e.g. "babi:Task1" or "babi,cbt"
 | Choose from: train, train:ordered, valid, test. To stream data, add ":stream" to any option (e.g., train:stream). By default, train is random with replacement, valid is ordered, and test is ordered.
 | Batch size for minibatch training schemes
 | Use dynamic batching
 | Print all messages
 | Enables some debug behavior
 | Path to datasets, defaults to {parlai_dir}/data
 | The model class name; can match parlai/agents/
 | Model file name for loading and saving models
 | Initialize model weights and dict from this file
 | Task to use for valid/test (defaults to the one used for training)
 | A ".opt" file that is used for the final eval. Useful for setting skip-generation to false. "datatype" must be included as part of the opt.
 | Set dynamic batching at evaluation time. Set to "off" for train-only dynamic batching. Set to "none" (default) to use the same setting as --dynamic-batching.
 | Number of background workers (training only)
 | End training after n model updates
 | Log every n training steps
 | Validate every n seconds. Saves model to model_file (if set) whenever the best val metric is found.
 | Validate every n training steps. Saves model to model_file (if set) whenever the best val metric is found.
 | Saves the model to model_file.checkpoint after every n seconds (default -1, never).
 | Saves the model to model_file.checkpoint after every validation.
 | Validate every n epochs. Saves model to model_file (if set) whenever the best val metric is found.
 | Number of iterations of validation where the result does not improve before we stop training
 | Key into the report table for selecting the best validation metric
 | The direction in which to optimize the validation metric, i.e. maximize or minimize
 | List of metrics to show/compute, e.g. "all", "default", or a comma-separated list like ppl,f1,accuracy,hits@1,rouge,bleu. The rouge metrics will be computed as rouge-1, rouge-2 and rouge-L.
 | Report micro-averaged metrics instead of macro-averaged metrics.
 | Saves a jsonl file of the world logs. Set to the empty string to not save at all.
 | Fields to keep when logging. Should be a comma-separated list.
 | Tensorboard logging of metrics
 | Tensorboard logging directory, defaults to model_file.tensorboard
 | Enable W&B logging of metrics
 | W&B project name. Defaults to timestamp. Usually the name of the sweep.
 | W&B entity name.
 | Enable logging of model artifacts to Weights & Biases
 | Creates a ClearML Task. Default: False. If True, ClearML logging will be enabled.
 | ClearML project name. All the logs will be stored under this project in the ClearML WebUI. If not set, it defaults to "ParlAI".
 | ClearML task name. All the logs will be stored under this task in the ClearML WebUI. If not set, it defaults to "Default Task".
 | Path to pre-trained tokenizer vocab
 | Path to pre-trained tokenizer merge
 | Use BPE dropout during training.
 | Number of workers.
 | Distributed backend. Zero2 can be faster but is more experimental. Zero3 significantly reduces memory pressure. DDP is the most tested.
party¶
Short description: Throw a party!
Aliases: parrot
Throw a party.
Examples¶
parlai party
CLI Arguments¶
Argument | Description
---|---
 | Number of seconds to party
profile_interactive¶
Short description: Interactive chat with a model
Basic script which profiles interaction with a model, using repeat_query to avoid human interaction (so we can time it only).
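Examples¶
A minimal profiling run might look like the following (the model file is only an illustration; any --model-file accepted by parlai interactive should work here):
parlai profile_interactive --model-file zoo:blender/blender_90M/model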
CLI Arguments¶
Argument | Description
---|---
 | Path to a json file of options. Note: further command-line arguments override file-based options.
 | Warn instead of raising if an argument passed in with --init-opt is not in the target opt.
 | ParlAI task(s), e.g. "babi:Task1" or "babi,cbt"
 | Choose from: train, train:ordered, valid, test. To stream data, add ":stream" to any option (e.g., train:stream). By default, train is random with replacement, valid is ordered, and test is ordered.
 | Batch size for minibatch training schemes
 | Use dynamic batching
 | Print all messages
 | Enables some debug behavior
 | Path to datasets, defaults to {parlai_dir}/data
 | The model class name; can match parlai/agents/
 | Model file name for loading and saving models
 | Initialize model weights and dict from this file
 | Set to use a prettytable when displaying examples with text candidates
 | Display these fields when verbose is off (e.g., "--display-add-fields label_candidates,beam_texts")
 | Create an interactive version of the task
profile_train¶
Short description: cProfile a training run
Runs the Python or PyTorch profiler and prints the results.
Examples¶
For example, to profile training a seq2seq model on bAbI task 1 (1k examples):
parlai profile_train --task babi:task1k:1 --model seq2seq --dict-file /tmp/dict
CLI Arguments¶
Argument | Description
---|---
 | Path to a json file of options. Note: further command-line arguments override file-based options.
 | Warn instead of raising if an argument passed in with --init-opt is not in the target opt.
 | ParlAI task(s), e.g. "babi:Task1" or "babi,cbt"
 | Choose from: train, train:ordered, valid, test. To stream data, add ":stream" to any option (e.g., train:stream). By default, train is random with replacement, valid is ordered, and test is ordered.
 | Batch size for minibatch training schemes
 | Use dynamic batching
 | Print all messages
 | Path to datasets, defaults to {parlai_dir}/data
 | The model class name; can match parlai/agents/
 | Model file name for loading and saving models
 | Initialize model weights and dict from this file
 | Task to use for valid/test (defaults to the one used for training)
 | A ".opt" file that is used for the final eval. Useful for setting skip-generation to false. "datatype" must be included as part of the opt.
 | Set dynamic batching at evaluation time. Set to "off" for train-only dynamic batching. Set to "none" (default) to use the same setting as --dynamic-batching.
 | Number of background workers (training only)
 | End training after n model updates
 | Log every n training steps
 | Validate every n seconds. Saves model to model_file (if set) whenever the best val metric is found.
 | Validate every n training steps. Saves model to model_file (if set) whenever the best val metric is found.
 | Saves the model to model_file.checkpoint after every n seconds (default -1, never).
 | Saves the model to model_file.checkpoint after every validation.
 | Validate every n epochs. Saves model to model_file (if set) whenever the best val metric is found.
 | Number of iterations of validation where the result does not improve before we stop training
 | Key into the report table for selecting the best validation metric
 | The direction in which to optimize the validation metric, i.e. maximize or minimize
 | List of metrics to show/compute, e.g. "all", "default", or a comma-separated list like ppl,f1,accuracy,hits@1,rouge,bleu. The rouge metrics will be computed as rouge-1, rouge-2 and rouge-L.
 | Report micro-averaged metrics instead of macro-averaged metrics.
 | Saves a jsonl file of the world logs. Set to the empty string to not save at all.
 | Fields to keep when logging. Should be a comma-separated list.
 | Tensorboard logging of metrics
 | Tensorboard logging directory, defaults to model_file.tensorboard
 | Enable W&B logging of metrics
 | W&B project name. Defaults to timestamp. Usually the name of the sweep.
 | W&B entity name.
 | Enable logging of model artifacts to Weights & Biases
 | Creates a ClearML Task. Default: False. If True, ClearML logging will be enabled.
 | ClearML project name. All the logs will be stored under this project in the ClearML WebUI. If not set, it defaults to "ParlAI".
 | ClearML task name. All the logs will be stored under this task in the ClearML WebUI. If not set, it defaults to "Default Task".
 | Path to pre-trained tokenizer vocab
 | Path to pre-trained tokenizer merge
 | Use BPE dropout during training.
 | If true, use the torch profiler. Otherwise use cProfile.
 | If true, use the torch cuda profiler. Otherwise use cProfile.
 | If true, enter the debugger at the end of the run.
token_stats¶
Short description: Compute tokenized stats.
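Examples¶
For instance (task and datatype chosen purely for illustration):
parlai token_stats --task convai2 --datatype valid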
CLI Arguments¶
Argument | Description
---|---
 | Path to a json file of options. Note: further command-line arguments override file-based options.
 | Warn instead of raising if an argument passed in with --init-opt is not in the target opt.
 | ParlAI task(s), e.g. "babi:Task1" or "babi,cbt"
 | Choose from: train, train:ordered, valid, test. To stream data, add ":stream" to any option (e.g., train:stream). By default, train is random with replacement, valid is ordered, and test is ordered.
 | Batch size for minibatch training schemes
 | Use dynamic batching
 | Print all messages
 | Enables some debug behavior
 | Path to datasets, defaults to {parlai_dir}/data
 | The model class name; can match parlai/agents/
 | Model file name for loading and saving models
 | Initialize model weights and dict from this file
torchscript¶
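Examples¶
A typical export might look like the following; the --scripted-model-file flag name is inferred from the argument description below and should be treated as an assumption:
parlai torchscript --model-file zoo:blender/blender_90M/model --scripted-model-file /tmp/scripted_model.pt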
CLI Arguments¶
Argument | Description
---|---
 | Path to a json file of options. Note: further command-line arguments override file-based options.
 | Warn instead of raising if an argument passed in with --init-opt is not in the target opt.
 | ParlAI task(s), e.g. "babi:Task1" or "babi,cbt"
 | Choose from: train, train:ordered, valid, test. To stream data, add ":stream" to any option (e.g., train:stream). By default, train is random with replacement, valid is ordered, and test is ordered.
 | Batch size for minibatch training schemes
 | Use dynamic batching
 | Print all messages
 | Enables some debug behavior
 | Path to datasets, defaults to {parlai_dir}/data
 | The model class name; can match parlai/agents/
 | Model file name for loading and saving models
 | Initialize model weights and dict from this file
 | Where the scripted model checkpoint will be saved
 | Input string to pass into the encoder of the scripted model, to test it against the unscripted version. Separate lines with a pipe.
 | Module to TorchScript. Example: parlai.torchscript.modules:TorchScriptGreedySearch
 | Enable inference optimizations on the scripted model.
vacuum¶
Short description: Shrink a model file for release.
Reduces the size of a model file by stripping the optimizer.
Assumes we are working with a TorchAgent.
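Examples¶
A sketch of a typical run (the --model-file flag name is an assumption based on the "Path to the model file" argument below):
parlai vacuum --model-file /path/to/model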
CLI Arguments¶
Argument | Description
---|---
 | Path to the model file.
 | Do not create a backup.
verify_data¶
Short description: Check tasks for common errors
Verify data doesn’t have basic mistakes, like empty text fields or empty label candidates.
Examples¶
parlai verify_data --task convai2 --datatype valid
CLI Arguments¶
Argument | Description
---|---
 | Path to a json file of options. Note: further command-line arguments override file-based options.
 | Warn instead of raising if an argument passed in with --init-opt is not in the target opt.
 | ParlAI task(s), e.g. "babi:Task1" or "babi,cbt"
 | Choose from: train, train:ordered, valid, test. To stream data, add ":stream" to any option (e.g., train:stream). By default, train is random with replacement, valid is ordered, and test is ordered.
 | Batch size for minibatch training schemes
 | Use dynamic batching
 | Print all messages
 | Enables some debug behavior
 | Path to datasets, defaults to {parlai_dir}/data
 | The model class name; can match parlai/agents/
 | Model file name for loading and saving models
 | Initialize model weights and dict from this file