Image+Seq2Seq

The Image+Seq2Seq agent is a model that combines image features with a sequence-to-sequence transformer generator. It is a core component of the dodecaDialogue task.

Basic Examples

Train an Image+Seq2Seq model on an image captioning task:

python parlai/scripts/train_model.py -m image_seq2seq -t flickr30k --image-mode resnext101_32x48d_wsl -mf /tmp/model

Train an Image+Seq2Seq model on a dialogue task:

python parlai/scripts/train_model.py -m image_seq2seq -t convai2 -mf /tmp/model

Multi-task train an Image+Seq2Seq model on a dialogue and captioning task:

python parlai/scripts/train_model.py -m image_seq2seq -t flickr30k,convai2 -mf /tmp/model --image-mode resnext101_32x48d_wsl
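
Once a model has been trained with one of the commands above, it can be evaluated with ParlAI's standard evaluation script; the model file and task below simply reuse the values from the examples above:

python parlai/scripts/eval_model.py -mf /tmp/model -t convai2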

ImageSeq2seqAgent Options

Transformer Arguments

-esz, --embedding-size

Size of all embedding layers

Default: 300.

-nl, --n-layers

Default: 2.

-hid, --ffn-size

Hidden size of the FFN layers

Default: 300.

--dropout

Dropout used in Vaswani et al. (2017).

Default: 0.0.

--attention-dropout

Dropout used after attention softmax.

Default: 0.0.

--relu-dropout

Dropout used after ReLU. From tensor2tensor.

Default: 0.0.

--n-heads

Number of multihead attention heads

Default: 2.

--learn-positional-embeddings

Default: False.

--embeddings-scale

Default: True.

--n-segments

The number of segments the model supports. If zero, no segment or language embeddings are used.

Default: 0.

--variant

Chooses locations of layer norms, etc.

Choices: xlm, aiayn.

Default: aiayn. Recommended: xlm.

--activation

Nonlinear activation to use. AIAYN uses relu, but more recent papers prefer gelu.

Choices: relu, gelu.

Default: relu. Recommended: gelu.

--output-scaling

Scale the output of every transformer by this quantity.

Default: 1.0.

--share-word-embeddings

Share the word embeddings table for candidate and context in the memory network.

Default: True.

-nel, --n-encoder-layers

Overrides --n-layers for asymmetrical transformers.

Default: -1.

-ndl, --n-decoder-layers

Overrides --n-layers for asymmetrical transformers.

Default: -1.
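
As a rough illustration of how these architecture flags combine in practice (the sizes below are arbitrary examples, not recommended values), a deeper model using the recommended variant and activation could be launched as:

python parlai/scripts/train_model.py -m image_seq2seq -t convai2 -mf /tmp/model --n-layers 8 --n-heads 16 --embedding-size 512 --ffn-size 2048 --variant xlm --activation gelu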

Torch Generator Agent

--beam-size

Beam size; if 1, greedy search is used.

Default: 1.

--beam-min-length

Minimum length of prediction to be generated by the beam search

Default: 1.

--beam-context-block-ngram

Size of n-grams to block in beam search based on the context. A value <= 0 implies no blocking.

Default: -1.

--beam-block-ngram

Size of n-grams to block in beam search. A value <= 0 implies no blocking.

Default: -1.

--beam-length-penalty

Applies a length penalty. Set to 0 for no penalty.

Default: 0.65.

--inference

Generation algorithm

Choices: nucleus, topk, beam, greedy.

Default: greedy.

--topk

K used in Top K sampling

Default: 10.

--topp

P used in nucleus sampling

Default: 0.9.

--compute-tokenized-bleu

If true, compute tokenized BLEU scores.

Default: False.
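
For example, to decode with beam search and n-gram blocking instead of greedy search, the generation flags above can be combined at evaluation time (the values below are illustrative only):

python parlai/scripts/eval_model.py -mf /tmp/model -t convai2 --inference beam --beam-size 5 --beam-min-length 10 --beam-block-ngram 3 --beam-context-block-ngram 3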

TorchAgent Arguments

-i, --interactive-mode

Whether to run in full interactive mode, i.e. generating text or retrieving from a full set of candidates, which is necessary to actually do full dialogue. However, during training or quick validation (e.g. PPL for generators, or ranking a few candidates for ranking models), you might want this turned off. Typically, scripts can set their preferred default behavior at the start (e.g. eval scripts).

Default: False.

-emb, --embedding-type

Choose between different strategies for initializing word embeddings. The default is random, but embeddings can also be pre-initialized from GloVe or fastText. Pre-initialized embeddings can also be fixed so they are not updated during training.

Choices: random, glove, glove-fixed, fasttext, fasttext-fixed, fasttext_cc, fasttext_cc-fixed.

Default: random.

-embp, --embedding-projection

If pretrained embeddings have a different dimensionality than your embedding size, this is the strategy for projecting them to the correct size. If the dimensions are the same, this is ignored unless you append "-force" to your choice.

Default: random.

--fp16

Use fp16 computations.

Default: False.

--fp16-impl

Implementation of FP16 to use

Choices: apex, mem_efficient.

Default: apex.

-rc, --rank-candidates

Whether the model should parse candidates for ranking.

Default: False.

-tr, --truncate

Truncate input lengths to increase speed / use less memory.

Default: -1.

--text-truncate

Text input truncation length; if not specified, this defaults to the value of --truncate.

--label-truncate

Label truncation length; if not specified, this defaults to the value of --truncate.

-histsz, --history-size

Number of past dialog utterances to remember.

Default: -1.

-pt, --person-tokens

Add person tokens to the history. Adds __p1__ in front of input text and __p2__ in front of past labels, when available, or past utterances generated by the model. These tokens are added to the dictionary during initialization.

Default: False.

--split-lines

Split the dialogue history on newlines and save in separate vectors

Default: False.

--delimiter

Join history lines with this token, defaults to newline

Default: \n.

-gpu, --gpu

Which GPU to use

Default: -1.

--no-cuda

Disable GPUs even if they are available; otherwise, GPUs are used whenever available on the device.

Default: False.
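
As one possible combination of these agent-level flags (the truncation lengths are illustrative, not tuned values), training with person tokens, truncation, and fp16 might look like:

python parlai/scripts/train_model.py -m image_seq2seq -t convai2 -mf /tmp/model --person-tokens True --truncate 512 --label-truncate 128 --fp16 True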

Optimizer Arguments

-opt, --optimizer

Choose between PyTorch optimizers. Any member of torch.optim should be valid.

Choices: adadelta, adagrad, adam, adamw, sparseadam, adamax, asgd, sgd, rprop, rmsprop, optimizer, lbfgs, mem_eff_adam, adafactor.

Default: sgd.

-lr, --learningrate

Learning rate

Default: 1.

-clip, --gradient-clip

Gradient clipping using the L2 norm.

Default: 0.1.

--adafactor-eps

Epsilon values for the Adafactor optimizer: regularization constants for the squared gradient and the parameter scale, respectively.

Default: 1e-30,1e-3. Recommended: 1e-30,1e-3.

-mom, --momentum

If applicable, momentum value for optimizer.

Default: 0.

--nesterov

If applicable, whether to use Nesterov momentum.

Default: True.

-nu, --nus

If applicable, nu value(s) for the optimizer. Can be a single value like 0.7 or a comma-separated tuple like 0.7,1.0.

Default: 0.7.

-beta, --betas

If applicable, beta value(s) for the optimizer. Can be a single value like 0.9 or a comma-separated tuple like 0.9,0.999.

Default: 0.9,0.999.

-wdecay, --weight-decay

Weight decay on the weights.
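
A non-default optimizer configuration could be specified as follows (the learning rate and clipping values are illustrative, not tuned recommendations):

python parlai/scripts/train_model.py -m image_seq2seq -t convai2 -mf /tmp/model -opt adam -lr 1e-4 --betas 0.9,0.999 -clip 0.1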

Learning Rate Scheduler

--lr-scheduler

Learning rate scheduler.

Choices: reduceonplateau, none, fixed, invsqrt, cosine, linear.

Default: reduceonplateau.

--lr-scheduler-patience

LR scheduler patience, in number of validation runs. If using the fixed scheduler, the LR is decayed every <patience> validations.

Default: 3.

--lr-scheduler-decay

Decay factor for LR scheduler, or how much LR is multiplied by when it is lowered.

Default: 0.5.

--max-lr-steps

Number of training steps the scheduler should take after warmup. Training is terminated after this many steps. This should only be set for --lr-scheduler cosine or linear.

Default: -1.

--invsqrt-lr-decay-gamma

Constant used only to find the LR multiplier for the invsqrt scheduler. Must be set for --lr-scheduler invsqrt.

Default: -1.
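
For instance, the plateau scheduler's patience and decay factor can be adjusted together (the values here are illustrative only):

python parlai/scripts/train_model.py -m image_seq2seq -t convai2 -mf /tmp/model --lr-scheduler reduceonplateau --lr-scheduler-patience 5 --lr-scheduler-decay 0.3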

Image Encoder Args

--image-features-dim

Dimensionality of the image features.

Default: 2048.

--image-encoder-num-layers

Number of layers for image encoder

Default: 1. Recommended: 1.

--include-image-token

If true, include an image token (or a no-image token) for each example.

Default: True. Recommended: True.
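
These flags can be set alongside the captioning example from above; for instance, a two-layer image encoder (an illustrative choice, not a recommendation) would be:

python parlai/scripts/train_model.py -m image_seq2seq -t flickr30k -mf /tmp/model --image-mode resnext101_32x48d_wsl --image-encoder-num-layers 2 --include-image-token True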

TransformerGeneratorAgent Options

The TransformerGeneratorAgent options are identical to the ImageSeq2seqAgent options listed above, minus the Image Encoder Args; see the Transformer, Torch Generator Agent, TorchAgent, Optimizer, and Learning Rate Scheduler sections above.