Tutorial

A unified platform for training and evaluating dialog models across many tasks.

Multitask training over many datasets at once.

Supports dialog models in PyTorch, TensorFlow, Theano, Torch, and other frameworks.

Seamless integration of Amazon Mechanical Turk for data collection, training, and human evaluation.


Get Started

Check out our GitHub repository:

Run this command:
git clone https://github.com/facebookresearch/ParlAI.git
cd ParlAI; python setup.py develop
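If the setup completed successfully, Python should now be able to import the package:

Run this command:
python -c "import parlai"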

Examples

Display 10 random examples from task 1 of the "1k training examples" bAbI task:

Run this command:
python examples/display_data.py -t babi:task1k:1
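Under the hood, display_data.py is a thin wrapper around ParlAI's core abstraction: every dataset and model is an agent, and agents exchange messages by taking turns in a world. The Python sketch below shows the idea using the repeat_label agent, which simply echoes each example's label; treat it as an illustration and see the tutorial for the full API:

from parlai.core.params import ParlaiParser
from parlai.core.worlds import create_task
from parlai.agents.repeat_label.repeat_label import RepeatLabelAgent

# Parse standard ParlAI options; here the task is set in code
# rather than on the command line.
opt = ParlaiParser().parse_args(['-t', 'babi:task1k:1'])

agent = RepeatLabelAgent(opt)    # toy agent that repeats the label it sees
world = create_task(opt, agent)  # pairs the agent with the task's teacher

for _ in range(10):
    world.parley()               # one exchange: teacher asks, agent answers
    print(world.display())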

Display 100 random examples while multitasking on the bAbI task and the SQuAD dataset at the same time:

Run this command:
python examples/display_data.py -t babi:task1k:1,squad -n 100

Evaluate an IR baseline model on the validation set of the Movies Subreddit dataset:

Run this command:
python examples/eval_model.py -m ir_baseline -t "#moviedd-reddit" -dt valid

Display the predictions of that same IR baseline model:

Run this command:
python examples/display_model.py -m ir_baseline -t "#moviedd-reddit" -dt valid
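Both of these commands work for any model that is wrapped as a ParlAI agent. As an illustrative sketch (EchoAgent and its one-line reply are made up for this example; only the Agent base class and the observe/act message protocol come from ParlAI), a minimal agent looks like this:

from parlai.core.agents import Agent

class EchoAgent(Agent):
    # Hypothetical toy agent: it replies with the text it just observed.
    # Real models such as ir_baseline put their prediction logic in act().
    def __init__(self, opt, shared=None):
        super().__init__(opt, shared)
        self.id = 'EchoAgent'

    def act(self):
        # the default observe() stored the last message in self.observation
        obs = self.observation
        return {'id': self.id, 'text': obs.get('text', '')}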

Train a simple CPU-based memory network on the "10k training examples" bAbI task 1 with 8 threads (Python processes) using Hogwild (requires zmq and Lua Torch):

Run this command:
python examples/memnn_luatorch_cpu/full_task_train.py -t babi:task10k:1 -nt 8

Train an attentive LSTM model (DrQA) on the SQuAD dataset with a batch size of 32 examples (requires PyTorch and regex):

Run this command:
python examples/drqa/train.py -t squad -bs 32
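Training scripts like this one follow the same loop: on each parley the teacher presents a labeled example and the model agent updates its parameters. A rough sketch of the pattern, leaving out validation, logging, and model saving (which the example scripts handle):

from parlai.core.agents import create_agent
from parlai.core.params import ParlaiParser
from parlai.core.worlds import create_task

# add model args so -m selects the model, just like the commands above
opt = ParlaiParser(True, True).parse_args()
agent = create_agent(opt)        # builds the model named by -m
world = create_task(opt, agent)  # pairs it with the task named by -t

for _ in range(1000):  # fixed number of exchanges for illustration
    world.parley()     # teacher shows an example, the agent trains on it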

For more examples, please read our tutorial. To learn more about ParlAI, see the documentation in the GitHub repository.