Command Line Usage

This page documents the command line usage for each of the standard scripts we release. They all live in parlai/scripts, and each may be invoked with the “parlai” supercommand.

The parlai supercommand may be invoked from the command line by running parlai after installing ParlAI. Its default output looks like this:

usage: parlai [-h] [--helpall] COMMAND ...

       _
      /")
     //)
  ==//'=== ParlAI
   /

optional arguments:
  -h, --help               show this help message and exit
  --helpall                show all commands, including advanced ones.

Commands:

  display_data (dd)        Display data from a task
  display_model (dm)       Display model predictions.
  eval_model (em, eval)    Evaluate a model
  train_model (tm, train)  Train a model
  interactive (i)          Interactive chat with a model on the command line
  safe_interactive         Like interactive, but adds a safety filter
  self_chat                Generate self-chats of a model

The remainder of this page describes each of the commands, their possible arguments,
and some examples of their usage.
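
To see every command, including advanced ones hidden from the default help, you can also run:

parlai --helpall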

display_data

Short description: Display data from a task

Aliases: dd

Basic example which iterates through the tasks specified and prints them out. Used for verification of data loading and iteration.

For example, to make sure that bAbI task 1 (1k examples) loads correctly and to see a few of its examples, one can run:

Examples

parlai display_data --task babi:task1k:1

CLI Arguments


--init-opt, --o

Path to json file of options. Note: Further Command-line arguments override file-based options.

--allow-missing-init-opts

Warn instead of raising if an argument passed in with --init-opt is not in the target opt.

--task, --t

ParlAI task(s), e.g. “babi:Task1” or “babi,cbt”

--datatype, --dt

Choose from: train, train:ordered, valid, test. To stream data, add “:stream” to any option (e.g., train:stream). By default, train is random with replacement, valid is ordered, and test is ordered.
Choices: train, train:stream, train:ordered, train:ordered:stream, train:stream:ordered, train:evalmode, train:evalmode:stream, train:evalmode:ordered, train:evalmode:ordered:stream, train:evalmode:stream:ordered, valid, valid:stream, test, test:stream
Default: train:ordered.

--batchsize, --bs

Batch size for minibatch training schemes
Default: 1.

--dynamic-batching, --dynb

Use dynamic batching
Choices: full, batchsort, None

--verbose, --v

Print all messages

--debug

Enables some debug behavior

--datapath, --dp

Path to datasets, defaults to {parlai_dir}/data

--model, --m

The model class name. Can match parlai/agents/ for agents in that directory, or can provide a fully specified module for “from X import Y” via -m X:Y (e.g. -m parlai.agents.seq2seq.seq2seq:Seq2SeqAgent)

--model-file, --mf

Model file name for loading and saving models

--init-model, --im

Initialize model weights and dict from this file

--num-examples, --n, --ne

Default: 10.

--max-display-len, --mdl

Default: 1000.

--display-add-fields

Display these fields when verbose is off (e.g., “--display-add-fields label_candidates,beam_texts”)

--ignore-agent-reply

Default: True.
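
For instance, to look at the validation split instead and print more examples, one might combine the arguments above along these lines (the task and values here are only illustrative):

parlai display_data --task babi:task1k:1 --datatype valid --num-examples 20 --max-display-len 500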


display_model

Short description: Display model predictions.

Aliases: dm

Basic example which iterates through the tasks specified and runs the given model on them.

Examples

parlai display_model --task babi:task1k:1 --model repeat_label
parlai display_model --task convai2 --model-file "/path/to/model_file"  --datatype test
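
As a further sketch (the model file path is a placeholder), the number of printed predictions can be limited with --num-examples:

parlai display_model --task babi:task1k:1 --model-file "/path/to/model_file" --num-examples 5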

CLI Arguments


--init-opt, --o

Path to json file of options. Note: Further Command-line arguments override file-based options.

--allow-missing-init-opts

Warn instead of raising if an argument passed in with --init-opt is not in the target opt.

--task, --t

ParlAI task(s), e.g. “babi:Task1” or “babi,cbt”

--datatype, --dt

Choose from: train, train:ordered, valid, test. To stream data, add “:stream” to any option (e.g., train:stream). By default, train is random with replacement, valid is ordered, and test is ordered.
Choices: train, train:stream, train:ordered, train:ordered:stream, train:stream:ordered, train:evalmode, train:evalmode:stream, train:evalmode:ordered, train:evalmode:ordered:stream, train:evalmode:stream:ordered, valid, valid:stream, test, test:stream
Default: valid.

--batchsize, --bs

Batch size for minibatch training schemes
Default: 1.

--dynamic-batching, --dynb

Use dynamic batching
Choices: full, batchsort, None

--verbose, --v

Print all messages

--debug

Enables some debug behavior

--datapath, --dp

Path to datasets, defaults to {parlai_dir}/data

--model, --m

The model class name. Can match parlai/agents/ for agents in that directory, or can provide a fully specified module for “from X import Y” via -m X:Y (e.g. -m parlai.agents.seq2seq.seq2seq:Seq2SeqAgent)

--model-file, --mf

Model file name for loading and saving models

--init-model, --im

Initialize model weights and dict from this file

--num-examples, --n, --ne

Default: 10.

--display-add-fields

Display these fields when verbose is off (e.g., “--display-add-fields label_candidates,beam_texts”)


eval_model

Short description: Evaluate a model

Aliases: em, eval

Basic example which iterates through the tasks specified and evaluates the given model on them.

Examples

parlai eval_model --task "babi:Task1k:2" -m "repeat_label"
parlai eval_model --task convai2 --model-file "/path/to/model_file"

CLI Arguments


--init-opt, --o

Path to json file of options. Note: Further Command-line arguments override file-based options.

--allow-missing-init-opts

Warn instead of raising if an argument passed in with --init-opt is not in the target opt.

--task, --t

ParlAI task(s), e.g. “babi:Task1” or “babi,cbt”

--datatype, --dt

Choose from: train, train:ordered, valid, test. To stream data, add “:stream” to any option (e.g., train:stream). By default, train is random with replacement, valid is ordered, and test is ordered.
Choices: train, train:stream, train:ordered, train:ordered:stream, train:stream:ordered, train:evalmode, train:evalmode:stream, train:evalmode:ordered, train:evalmode:ordered:stream, train:evalmode:stream:ordered, valid, valid:stream, test, test:stream
Default: valid.

--batchsize, --bs

Batch size for minibatch training schemes
Default: 1.

--dynamic-batching, --dynb

Use dynamic batching
Choices: full, batchsort, None

--verbose, --v

Print all messages

--debug

Enables some debug behavior

--datapath, --dp

Path to datasets, defaults to {parlai_dir}/data

--model, --m

The model class name. Can match parlai/agents/ for agents in that directory, or can provide a fully specified module for “from X import Y” via -m X:Y (e.g. -m parlai.agents.seq2seq.seq2seq:Seq2SeqAgent)

--model-file, --mf

Model file name for loading and saving models

--init-model, --im

Initialize model weights and dict from this file

--report-filename, --rf

Saves a json file of the evaluation report either as an extension to the model-file (if it begins with a “.”) or as a whole file path. Set to the empty string to not save at all.

--world-logs

Saves a jsonl file of the world logs. Set to the empty string to not save at all.

--save-format

Choices: conversations, parlai
Default: conversations.

--area-under-curve-digits, --auc

A positive number indicates that the area under the ROC curve should be calculated, and also determines how many decimal digits of the predictions to keep (higher numbers are more precise); a non-positive value disables the AUC metric.
Default: -1.

--area-under-curve-class, --auclass

The name(s) of the class to calculate the auc for

--num-examples, --ne

Default: -1.

--display-examples, --d

--log-every-n-secs, --ltim

Default: 10.

--metrics, --mcs

List of metrics to show/compute, e.g. all, default, or a comma-separated list such as ppl,f1,accuracy,hits@1,rouge,bleu. The rouge metrics will be computed as rouge-1, rouge-2 and rouge-l.
Default: default.

--aggregate-micro, --micro

Report micro-averaged metrics instead of macro averaged metrics.

--log-keep-fields

Fields to keep when logging. Should be a comma separated list
Default: all.

--tensorboard-log, --tblog

Tensorboard logging of metrics

--tensorboard-logdir, --tblogdir

Tensorboard logging directory, defaults to model_file.tensorboard
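
Putting several of these arguments together, a typical evaluation run that saves a report and conversation logs might look roughly like this (the file paths are placeholders):

parlai eval_model --task convai2 --model-file "/path/to/model_file" --datatype test --metrics ppl,f1 --report-filename "/path/to/report.json" --world-logs "/path/to/world_logs.jsonl"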


generate_model_card

Short description: Generate a model card automatically

Aliases: gmc

Script to generate the model card automatically.
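
Examples

As a sketch (the model file path is a placeholder), generating a card for an existing generator model might look like:

parlai generate_model_card --model-file "/path/to/model_file" --model-type generator --folder-to-save model_card_folder --mode editing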

CLI Arguments


--init-opt, --o

Path to json file of options. Note: Further Command-line arguments override file-based options.

--allow-missing-init-opts

Warn instead of raising if an argument passed in with --init-opt is not in the target opt.

--task, --t

ParlAI task(s), e.g. “babi:Task1” or “babi,cbt”

--datatype, --dt

Choose from: train, train:ordered, valid, test. To stream data, add “:stream” to any option (e.g., train:stream). By default, train is random with replacement, valid is ordered, and test is ordered.
Choices: train, train:stream, train:ordered, train:ordered:stream, train:stream:ordered, train:evalmode, train:evalmode:stream, train:evalmode:ordered, train:evalmode:ordered:stream, train:evalmode:stream:ordered, valid, valid:stream, test, test:stream
Default: train:ordered.

--batchsize, --bs

Batch size for minibatch training schemes
Default: 1.

--dynamic-batching, --dynb

Use dynamic batching
Choices: full, batchsort, None

--verbose, --v

Print all messages

--datapath, --dp

Path to datasets, defaults to {parlai_dir}/data

--model, --m

The model class name. Can match parlai/agents/ for agents in that directory, or can provide a fully specified module for “from X import Y” via -m X:Y (e.g. -m parlai.agents.seq2seq.seq2seq:Seq2SeqAgent)

--model-file, --mf

Model file name for loading and saving models

--init-model, --im

Initialize model weights and dict from this file

--report-filename, --rf

Saves a json file of the evaluation report either as an extension to the model-file (if it begins with a “.”) or as a whole file path. Set to the empty string to not save at all.

--world-logs

Saves a jsonl file of the world logs. Set to the empty string to not save at all.

--save-format

Choices: conversations, parlai
Default: conversations.

--area-under-curve-digits, --auc

A positive number indicates that the area under the ROC curve should be calculated, and also determines how many decimal digits of the predictions to keep (higher numbers are more precise); a non-positive value disables the AUC metric.
Default: -1.

--area-under-curve-class, --auclass

The name(s) of the class to calculate the auc for

--display-examples, --d

--metrics, --mcs

List of metrics to show/compute, e.g. all, default, or a comma-separated list such as ppl,f1,accuracy,hits@1,rouge,bleu. The rouge metrics will be computed as rouge-1, rouge-2 and rouge-l.
Default: default.

--aggregate-micro, --micro

Report micro-averaged metrics instead of macro averaged metrics.

--log-keep-fields

Fields to keep when logging. Should be a comma separated list
Default: all.

--tensorboard-log, --tblog

Tensorboard logging of metrics

--tensorboard-logdir, --tblogdir

Tensorboard logging directory, defaults to model_file.tensorboard

--num-examples, --n, --ne

Default: -1.

--log-every-n-secs, --ltim

Default: 10.

--agent

Use teacher (agent 0) or model (agent 1)
Choices: 0, 1

--new-line-new-utt

Treat substrings separated by new lines as separate utterances.

--ignore-tokens

Ignore tokens containing these substrings (comma-separated)

--bpe-vocab

Path to pre-trained tokenizer vocab

--bpe-merge

Path to pre-trained tokenizer merge

--bpe-dropout

Use BPE dropout during training.

--wrapper, --w

Registered name of model wrapper

--log-folder

Where to write logs of model outputs
Default: /tmp/.

--tests-to-run

Which tests to run; by default, run all. If generate, run tests for generating offensive language. If response, run tests for checking responses to offensive language.
Choices: response, generate, all
Default: all.

--debug

Use in DEBUG mode

--model-type, --mt

Type of model
Choices: ranker, generator, classifier, retriever

--folder-to-save, --fts, --ftsaved

Folder to save the model card and related contents (i.e. graphs)
Default: model_card_folder.

--evaltask, --et

Task to use for valid/test (defaults to the one used for training)

--mode

Possible modes: gen (generation), editing, final.

In addition, for gen mode, we can also add the following to specify which exact reports to run: data_stats, eval, safety, sample, and quant.

For instance, --mode gen:data_stats:eval
Default: editing.

--ignore-unfound-tasks, --ignore

Whether or not to ignore the fromfile, jsonfile, etc. tasks if the task cannot be found; by default, we will (so True).
Default: True.

--evaluation-report-file, --eval-rf

Evaluation report file

--extra-args-path, --exargs

Path to a .json file with extra arguments used for different stages of report generation and later for quantitative analyses section generation; please do NOT use the shortened format (i.e. t=); check the documentation for more info

--quantitative-report-files, --quant-rfs

Quantitative report file (with different subgroups); if multiple, please separate them with commas, and optionally also add a field in the report file stating what kind of subgroup it is; note that this is only applicable for classifier-type models

--include-misc

Whether to include the miscellaneous dropdown (fields that were not included in other dropdowns); by default, the value is True.
Default: True.

--quant-metrics

Other metrics to include in the quantitative analysis


interactive

Short description: Interactive chat with a model on the command line

Aliases: i

Basic script which allows local human keyboard input to talk to a trained model.

Examples

parlai interactive --model-file "zoo:tutorial_transformer_generator/model"

When prompted, enter something like: Bob is Blue.\nWhat is Bob?

Input is often model or task specific. Some tasks will automatically format the input with context for the task, e.g. -t convai2 will automatically add personas.

CLI Arguments


--init-opt, --o

Path to json file of options. Note: Further Command-line arguments override file-based options.

--allow-missing-init-opts

Warn instead of raising if an argument passed in with --init-opt is not in the target opt.

--task, --t

ParlAI task(s), e.g. “babi:Task1” or “babi,cbt”
Default: interactive.

--datatype, --dt

Choose from: train, train:ordered, valid, test. To stream data, add “:stream” to any option (e.g., train:stream). By default, train is random with replacement, valid is ordered, and test is ordered.
Choices: train, train:stream, train:ordered, train:ordered:stream, train:stream:ordered, train:evalmode, train:evalmode:stream, train:evalmode:ordered, train:evalmode:ordered:stream, train:evalmode:stream:ordered, valid, valid:stream, test, test:stream
Default: train.

--batchsize, --bs

Batch size for minibatch training schemes
Default: 1.

--dynamic-batching, --dynb

Use dynamic batching
Choices: full, batchsort, None

--verbose, --v

Print all messages

--debug

Enables some debug behavior

--datapath, --dp

Path to datasets, defaults to {parlai_dir}/data

--model, --m

The model class name. Can match parlai/agents/ for agents in that directory, or can provide a fully specified module for “from X import Y” via -m X:Y (e.g. -m parlai.agents.seq2seq.seq2seq:Seq2SeqAgent)

--model-file, --mf

Model file name for loading and saving models

--init-model, --im

Initialize model weights and dict from this file

--display-examples, --d

--display-prettify

Set to use a prettytable when displaying examples with text candidates

--display-add-fields

Display these fields when verbose is off (e.g., “--display-add-fields label_candidates,beam_texts”)

--interactive-task, --it

Create interactive version of task
Default: True.

--outfile

Saves a jsonl file containing all of the task examples and model replies. Set to the empty string to not save at all

--save-format

Format to save logs in. conversations is a jsonl format, parlai is a text format.
Choices: conversations, parlai
Default: conversations.

--local-human-candidates-file, --fixedCands

File of label_candidates to send to other agent

--single-turn

If on, assumes single turn episodes.

--log-keep-fields

Fields to keep when logging. Should be a comma separated list
Default: all.
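
For example, to save the full conversation to a log file while chatting (the output path here is a placeholder), one might run:

parlai interactive --model-file "zoo:tutorial_transformer_generator/model" --outfile "/path/to/interactive_log.jsonl" --save-format conversations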


safe_interactive

Short description: Like interactive, but adds a safety filter

Script for safety-protected interaction between local human keyboard input and a trained model.
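
Examples

As a sketch, reusing the model from the interactive example above with the default safety setting:

parlai safe_interactive --model-file "zoo:tutorial_transformer_generator/model" --safety all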

CLI Arguments


--init-opt, --o

Path to json file of options. Note: Further Command-line arguments override file-based options.

--allow-missing-init-opts

Warn instead of raising if an argument passed in with --init-opt is not in the target opt.

--task, --t

ParlAI task(s), e.g. “babi:Task1” or “babi,cbt”
Default: interactive.

--datatype, --dt

Choose from: train, train:ordered, valid, test. To stream data, add “:stream” to any option (e.g., train:stream). By default, train is random with replacement, valid is ordered, and test is ordered.
Choices: train, train:stream, train:ordered, train:ordered:stream, train:stream:ordered, train:evalmode, train:evalmode:stream, train:evalmode:ordered, train:evalmode:ordered:stream, train:evalmode:stream:ordered, valid, valid:stream, test, test:stream
Default: train.

--batchsize, --bs

Batch size for minibatch training schemes
Default: 1.

--dynamic-batching, --dynb

Use dynamic batching
Choices: full, batchsort, None

--verbose, --v

Print all messages

--debug

Enables some debug behavior

--datapath, --dp

Path to datasets, defaults to {parlai_dir}/data

--model, --m

The model class name. Can match parlai/agents/ for agents in that directory, or can provide a fully specified module for “from X import Y” via -m X:Y (e.g. -m parlai.agents.seq2seq.seq2seq:Seq2SeqAgent)

--model-file, --mf

Model file name for loading and saving models

--init-model, --im

Initialize model weights and dict from this file

--display-examples, --d

--display-prettify

Set to use a prettytable when displaying examples with text candidates

--display-add-fields

Display these fields when verbose is off (e.g., “--display-add-fields label_candidates,beam_texts”)

--interactive-task, --it

Create interactive version of task
Default: True.

--safety

Apply safety filtering to messages
Choices: none, classifier, string_matcher, all
Default: all.

--local-human-candidates-file, --fixedCands

File of label_candidates to send to other agent

--single-turn

If on, assumes single turn episodes.


self_chat

Short description: Generate self-chats of a model

Allows a model to self-chat on a given task.
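
Examples

As a sketch (the output path is a placeholder), generating ten short self-chats and saving them to a log file might look like:

parlai self_chat --model-file "zoo:tutorial_transformer_generator/model" --num-self-chats 10 --selfchat-max-turns 6 --outfile "/path/to/selfchat_logs.jsonl"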

CLI Arguments


--init-opt, --o

Path to json file of options. Note: Further Command-line arguments override file-based options.

--allow-missing-init-opts

Warn instead of raising if an argument passed in with --init-opt is not in the target opt.

--task, --t

ParlAI task(s), e.g. “babi:Task1” or “babi,cbt”
Default: self_chat.

--datatype, --dt

Choose from: train, train:ordered, valid, test. To stream data, add “:stream” to any option (e.g., train:stream). By default, train is random with replacement, valid is ordered, and test is ordered.
Choices: train, train:stream, train:ordered, train:ordered:stream, train:stream:ordered, train:evalmode, train:evalmode:stream, train:evalmode:ordered, train:evalmode:ordered:stream, train:evalmode:stream:ordered, valid, valid:stream, test, test:stream
Default: train.

--batchsize, --bs

Batch size for minibatch training schemes
Default: 1.

--dynamic-batching, --dynb

Use dynamic batching
Choices: full, batchsort, None

--verbose, --v

Print all messages

--debug

Enables some debug behavior

--datapath, --dp

Path to datasets, defaults to {parlai_dir}/data

--model, --m

The model class name. Can match parlai/agents/ for agents in that directory, or can provide a fully specified module for “from X import Y” via -m X:Y (e.g. -m parlai.agents.seq2seq.seq2seq:Seq2SeqAgent)

--model-file, --mf

Model file name for loading and saving models

--init-model, --im

Initialize model weights and dict from this file

--seed

Default: 42.

--display-examples, --d

Default: True.

--display-add-fields

Display these fields when verbose is off (e.g., “--display-add-fields label_candidates,beam_texts”)

--selfchat-task, --st

Create a self chat version of the task
Default: True.

--num-self-chats

Number of self chats to run
Default: 1.

--selfchat-max-turns

The number of dialogue turns before self chat ends
Default: 6.

--seed-messages-from-task

Automatically seed conversation with messages from task dataset.

--seed-messages-from-file

If specified, loads newline-separated strings from the file as conversation starters.

--outfile

File to save self chat logs

--save-format

Format to save logs in. conversations is a jsonl format, parlai is a text format.
Choices: conversations, parlai
Default: conversations.

--partner-model-file, --pmf

Define a different partner for self chat

--partner-opt-file

Path to file containing opts to override for partner

--log-keep-fields

Fields to keep when logging. Should be a comma separated list
Default: all.


tod_world_script

Short description: World for chatting with the TOD conversation structure

Base script for running TOD model-model chats.

For example, to extract gold ground truth data from the holdout version of Google SGD, run

python -u -m parlai.scripts.tod_world_script --api-schema-grounding-model parlai.tasks.google_sgd_simulation_splits.agents:OutDomainApiSchemaAgent --goal-grounding-model parlai.tasks.google_sgd_simulation_splits.agents:OutDomainGoalAgent --user-model parlai.tasks.google_sgd_simulation_splits.agents:OutDomainUserUttAgent --system-model parlai.tasks.google_sgd_simulation_splits.agents:OutDomainApiCallAndSysUttAgent --api-resp-model parlai.tasks.google_sgd_simulation_splits.agents:OutDomainApiResponseAgent -dt valid --num-episodes -1 --episodes-randomization-seed 42 --world-logs gold-valid

This file handles:

  1. Script param setup, including that used for loading agents, which may have their own parameters.

  2. Running the world (including handling batching) until the number of episodes or the length of an epoch has been met.

  3. File I/O for both reports (for metrics) and conversation logs, plus the logic for displaying printed output.

CLI Arguments


--init-opt, --o

Path to json file of options. Note: Further Command-line arguments override file-based options.

--allow-missing-init-opts

Warn instead of raising if an argument passed in with --init-opt is not in the target opt.

--task, --t

ParlAI task(s), e.g. “babi:Task1” or “babi,cbt”

--datatype, --dt

Choose from: train, train:ordered, valid, test. To stream data, add “:stream” to any option (e.g., train:stream). By default, train is random with replacement, valid is ordered, and test is ordered.
Choices: train, train:stream, train:ordered, train:ordered:stream, train:stream:ordered, train:evalmode, train:evalmode:stream, train:evalmode:ordered, train:evalmode:ordered:stream, train:evalmode:stream:ordered, valid, valid:stream, test, test:stream
Default: train.

--batchsize, --bs

Batch size for minibatch training schemes
Default: 1.

--dynamic-batching, --dynb

Use dynamic batching
Choices: full, batchsort, None

--verbose, --v

Print all messages

--debug

Enables some debug behavior

--datapath, --dp

Path to datasets, defaults to {parlai_dir}/data

--report-filename

Saves a json file of the evaluation report either as an extension to the model-file (if it begins with a “.”) or as a whole file path. Set to the empty string to not save at all.

--world-logs

Saves a jsonl file containing all of the task examples and model replies.

--save-format

Choices: conversations, parlai
Default: conversations.

--num-episodes

Number of episodes to display. Set to -1 for infinity or the number of examples of the first agent with a non-unlimited number of episodes in the world.
Default: 10.

--display-examples, --d

--log-every-n-secs, --ltim

Default: 10.

--log-keep-fields

Fields to keep when logging. Should be a comma separated list
Default: all.

--max-turns

The max number of full turns before chat ends, excluding prompting
Default: 30.

--system-model-file

Define the system model for the chat. Exactly one of this or system-model must be specified

--system-model

Define the system agent for the chat. Exactly one of this or system-model-file must be specified

--user-model-file

Define the user model for the chat. Exactly one of this or user-model must be specified. Currently assumed to be the API Call creation agent as well.

--user-model

Define the user agent for the chat. Exactly one of this or user-model-file must be specified. Currently assumed to be the API Call creation agent as well.

--api-resp-model

Agent used for defining API response values

--api-schema-grounding-model

Agent used in the first turn to ground the api call/response agents with api schemas. Will use EmptyApiSchemaAgent if neither this nor --api-schemas is set.

--goal-grounding-model

Agent used in the first turn to ground the user agent with a goal. Will use EmptyGoalAgent if not set.

--api-schemas

If set and --api-schema-grounding-model is empty, will infer --api-schema-grounding-model based on this and a regex on --goal-grounding-model. If you run into issues with parsing order of opts using this flag, just switch to --api-schema-grounding-model.
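
As a rough sketch (all file paths and the api-resp-model agent below are placeholders, not real module names), a model-model chat between a trained system model and a trained user model might be launched along these lines:

python -u -m parlai.scripts.tod_world_script --system-model-file "/path/to/system_model" --user-model-file "/path/to/user_model" --api-resp-model my_project.agents:MyApiResponseAgent --num-episodes 10 --world-logs "/path/to/tod_logs"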


train_model

Short description: Train a model

Aliases: tm, train

Training script for ParlAI.

The standard way to train a model. After training, also computes validation and test error.

The user must provide a model (with --model) and a task (with --task).

Examples

parlai train_model --model ir_baseline --task dialog_babi:Task:1 --model-file /tmp/model
parlai train_model --model seq2seq --task babi:Task10k:1 --model-file '/tmp/model' --batchsize 32 --learningrate 0.5

CLI Arguments


--init-opt, --o

Path to json file of options. Note: Further Command-line arguments override file-based options.

--allow-missing-init-opts

Warn instead of raising if an argument passed in with --init-opt is not in the target opt.

--task, --t

ParlAI task(s), e.g. “babi:Task1” or “babi,cbt”

--datatype, --dt

Choose from: train, train:ordered, valid, test. To stream data, add “:stream” to any option (e.g., train:stream). By default, train is random with replacement, valid is ordered, and test is ordered.
Choices: train, train:stream, train:ordered, train:ordered:stream, train:stream:ordered, train:evalmode, train:evalmode:stream, train:evalmode:ordered, train:evalmode:ordered:stream, train:evalmode:stream:ordered, valid, valid:stream, test, test:stream
Default: train.

--batchsize, --bs

Batch size for minibatch training schemes
Default: 1.

--dynamic-batching, --dynb

Use dynamic batching
Choices: full, batchsort, None

--verbose, --v

Print all messages

--debug

Enables some debug behavior

--datapath, --dp

Path to datasets, defaults to {parlai_dir}/data

--model, --m

The model class name. Can match parlai/agents/ for agents in that directory, or can provide a fully specified module for “from X import Y” via -m X:Y (e.g. -m parlai.agents.seq2seq.seq2seq:Seq2SeqAgent)

--model-file, --mf

Model file name for loading and saving models

--init-model, --im

Initialize model weights and dict from this file

--evaltask, --et

Task to use for valid/test (defaults to the one used for training)

--final-extra-opt

A ‘.opt’ file that is used for final eval. Useful for setting skip-generation to false. ‘datatype’ must be included as part of the opt.

--eval-dynamic-batching

Set dynamic batching at evaluation time. Set to off for train-only dynamic batching. Set to none (default) to use the same setting as --dynamic-batching.
Choices: full, off, batchsort, None

--num-workers

Number of background workers (training only)

--num-epochs, --eps

Default: -1.

--max-train-time, --ttim

Default: -1.

--max-train-steps, --max-lr-steps, --tstep

End training after n model updates
Default: -1.

--log-every-n-steps, --lstep

Log every n training steps
Default: 50.

--validation-every-n-secs, --vtim

Validate every n seconds. Saves model to model_file (if set) whenever best val metric is found
Default: -1.

--validation-every-n-steps, --vstep

Validate every n training steps. Saves model to model_file (if set) whenever best val metric is found
Default: -1.

--save-every-n-secs, --stim

Saves the model to model_file.checkpoint after every n seconds (default -1, never).
Default: -1.

--save-after-valid, --sval

Saves the model to model_file.checkpoint after every validation (default False).

--validation-every-n-epochs, --veps

Validate every n epochs. Saves model to model_file (if set) whenever best val metric is found
Default: -1.

--validation-patience, --vp

Number of iterations of validation where result does not improve before we stop training
Default: 10.

--validation-metric, --vmt

Key into report table for selecting best validation
Default: accuracy.

--validation-metric-mode, --vmm

The direction in which to optimize the validation metric, i.e. maximize or minimize
Choices: max, min

--metrics, --mcs

List of metrics to show/compute, e.g. all, default, or a comma-separated list such as ppl,f1,accuracy,hits@1,rouge,bleu. The rouge metrics will be computed as rouge-1, rouge-2 and rouge-l.
Default: default.

--aggregate-micro, --micro

Report micro-averaged metrics instead of macro averaged metrics.

--world-logs

Saves a jsonl file of the world logs. Set to the empty string to not save at all.

--save-format

Choices: conversations, parlai
Default: conversations.

--seed

--log-keep-fields

Fields to keep when logging. Should be a comma separated list
Default: all.

--tensorboard-log, --tblog

Tensorboard logging of metrics

--tensorboard-logdir, --tblogdir

Tensorboard logging directory, defaults to model_file.tensorboard

--wandb-log, --wblog

Enable W&B logging of metrics

--wandb-project

W&B project name. Defaults to timestamp. Usually the name of the sweep.

--wandb-entity

W&B entity name.

--wandb-log-model

Enable logging of model artifacts to Weights & Biases

--clearml-log, --clearmllog

Creates a ClearML Task. Default: False. If True, ClearML logging will be enabled.

--clearml-project-name, --clearmlproject

ClearML Project Name. All the logs will be stored under this project in the ClearML WebUI. If not set, defaults to ParlAI.
Default: ParlAI.

--clearml-task-name, --clearmltask

ClearML Task Name. All the logs will be stored under this task in the ClearML WebUI. If not set, defaults to “Default Task”.
Default: Default Task.

--bpe-vocab

Path to pre-trained tokenizer vocab

--bpe-merge

Path to pre-trained tokenizer merge

--bpe-dropout

Use BPE dropout during training.
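
Putting more of these options together, a sketch of a longer training run with periodic validation (the model choice and values here are illustrative only) might look like:

parlai train_model --model seq2seq --task babi:Task10k:1 --model-file /tmp/model --batchsize 32 --num-epochs 5 --validation-every-n-secs 300 --validation-metric ppl --validation-metric-mode min --validation-patience 10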