
A unified platform for sharing, training and evaluating dialogue models across many tasks.

Many popular datasets available all in one place -- from open-domain chitchat to visual question answering.

A wide set of reference models -- from retrieval baselines to Transformers.

Seamless integration of Amazon Mechanical Turk for data collection, training and human evaluation.



Get Started

To install ParlAI:

Run this command:
pip install parlai
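
If the install succeeded, ParlAI should import cleanly from Python. A minimal check (this assumes the installed package exposes a __version__ attribute, as recent releases do):

import parlai
# Print the installed version string to confirm the package is importable.
print(parlai.__version__)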

Examples

Display 10 random examples from task 1 of the "1k training examples" bAbI task:

Run this command:
parlai display_data --task babi:task1k:1
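
The same command can also be driven from Python through ParlAI's script interface. A minimal sketch, assuming the DisplayData script class from parlai.scripts.display_data (keyword arguments mirror the CLI flags, with dashes replaced by underscores):

from parlai.scripts.display_data import DisplayData

# Show 10 examples from task 1 of the 1k-example bAbI task,
# equivalent to `parlai display_data --task babi:task1k:1`.
DisplayData.main(task='babi:task1k:1', num_examples=10)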

Display 100 random examples from multitasking on the bAbI task and the SQuAD dataset at the same time:

Run this command:
parlai display_data --task babi:task1k:1,squad -n 100

Evaluate an IR baseline model on the validation set of the Movies Subreddit dataset:

Run this command:
parlai eval_model --model ir_baseline --task "#moviedd-reddit" --datatype valid
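
Evaluation can likewise be launched from Python. A sketch assuming the EvalModel script class from parlai.scripts.eval_model:

from parlai.scripts.eval_model import EvalModel

# Evaluate the IR baseline on the validation split of the Movies Subreddit dataset,
# equivalent to the eval_model command above.
EvalModel.main(model='ir_baseline', task='#moviedd-reddit', datatype='valid')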

Display the predictions of that same IR baseline model:

Run this command:
parlai display_model --model ir_baseline --task "#moviedd-reddit" --datatype valid

Train an attentive LSTM model (DrQA) on the SQuAD dataset with a batch size of 32 examples (requires PyTorch and regex):

Run this command:
parlai train_model --model drqa --task squad --batchsize 32 --model-file /tmp/model_drqa
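
Training runs can also be started from Python. A sketch assuming the TrainModel script class from parlai.scripts.train_model (as noted above, the DrQA model requires PyTorch and regex):

from parlai.scripts.train_model import TrainModel

# Train the DrQA reader on SQuAD with batch size 32 and save to /tmp/model_drqa,
# equivalent to the train_model command above.
TrainModel.main(
    model='drqa',
    task='squad',
    batchsize=32,
    model_file='/tmp/model_drqa',
)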

For more examples, please read our tutorials. To learn more about ParlAI, see the rest of the documentation.