Display 10 random examples from task 1 of the "1k training examples" bAbI task:
Run this command:
python examples/display_data.py -t babi:task1k:1
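Under the hood, displaying data amounts to sampling examples from the task and printing their text and label fields. A minimal self-contained sketch of that idea (the toy data and function name here are illustrative stand-ins, not ParlAI's actual internals):

```python
import random

def display_random_examples(dataset, n=10, seed=None):
    """Print n randomly chosen (text, label) pairs from a dataset."""
    rng = random.Random(seed)
    picks = rng.sample(dataset, min(n, len(dataset)))
    for text, label in picks:
        print(f"[text]: {text}")
        print(f"[label]: {label}")
        print("- - - - - - - -")
    return picks

# Toy stand-in for bAbI task 1 examples.
toy_data = [
    ("Mary moved to the bathroom. Where is Mary?", "bathroom"),
    ("John went to the hallway. Where is John?", "hallway"),
    ("Sandra journeyed to the garden. Where is Sandra?", "garden"),
]
picks = display_random_examples(toy_data, n=2, seed=0)
```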
Display 100 random examples from multitasking on the bAbI task and the SQuAD dataset at the same time:
Run this command:
python examples/display_data.py -t babi:task1k:1,squad -n 100
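Passing a comma-separated list to `-t` runs the tasks together as one multitask stream. A hedged sketch of how such interleaving can work, randomly choosing a task for each example and restarting a task's data when it runs out (this is a simplification, not ParlAI's exact scheduling):

```python
import random

def multitask_stream(tasks, n, seed=0):
    """Yield n examples by randomly choosing a task each turn,
    roughly how a multitask world interleaves its sub-tasks."""
    rng = random.Random(seed)
    names = list(tasks)
    iters = {name: iter(tasks[name]) for name in names}
    out = []
    for _ in range(n):
        name = rng.choice(names)
        try:
            out.append((name, next(iters[name])))
        except StopIteration:
            # This task is exhausted; start a new epoch over its data.
            iters[name] = iter(tasks[name])
            out.append((name, next(iters[name])))
    return out

stream = multitask_stream(
    {"babi": ["ex1", "ex2"], "squad": ["q1", "q2", "q3"]}, n=6
)
```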
Evaluate an IR baseline model on the validation set of the Movies Subreddit dataset:
Run this command:
python examples/eval_model.py -m ir_baseline -t "#moviedd-reddit" -dt valid
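An IR (information retrieval) baseline ranks candidate replies by lexical similarity to the input rather than generating text. A crude, self-contained stand-in for that kind of scoring, using length-normalized word overlap (the real `ir_baseline` uses TF-IDF weighting; `score` and `rank_candidates` are illustrative names):

```python
import math
from collections import Counter

def score(query, candidate):
    """Score a candidate reply by word overlap with the query."""
    q = Counter(query.lower().split())
    c = Counter(candidate.lower().split())
    overlap = sum((q & c).values())
    # Normalize so long candidates are not unfairly favoured.
    return overlap / math.sqrt(len(candidate.split()) or 1)

def rank_candidates(query, candidates):
    """Return candidates sorted best-first by their score."""
    return sorted(candidates, key=lambda c: score(query, c), reverse=True)

best = rank_candidates(
    "what is your favourite movie",
    ["i love that movie too", "my favourite movie is alien", "no idea"],
)[0]
```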
Display the predictions of that same IR baseline model:
Run this command:
python examples/display_model.py -m ir_baseline -t "#moviedd-reddit" -dt valid
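Displaying predictions follows ParlAI's basic agent interface: an agent `observe()`s a message and `act()`s to produce a reply. A toy sketch of that loop with a trivial echo agent standing in for a trained model (`EchoAgent` and `display_model` are illustrative names, not ParlAI code):

```python
class EchoAgent:
    """Toy agent with ParlAI's two-method interface:
    observe() receives a message dict, act() returns a reply dict."""
    def observe(self, msg):
        self.last_text = msg.get("text", "")
    def act(self):
        return {"text": "I heard: " + self.last_text}

def display_model(agent, examples):
    """Show the agent's prediction next to each input."""
    replies = []
    for ex in examples:
        agent.observe({"text": ex})
        reply = agent.act()["text"]
        print(f"[text]: {ex}")
        print(f"[prediction]: {reply}")
        replies.append(reply)
    return replies

replies = display_model(EchoAgent(), ["what movies do you like?"])
```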
Train a simple CPU-based memory network on the "10k training examples" bAbI task 1 with 8 threads (Python processes) using Hogwild (requires zmq and Lua Torch):
Run this command:
python examples/memnn_luatorch_cpu/full_task_train.py -t babi:task10k:1 -nt 8
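Hogwild training lets several workers update shared model parameters without any locking, tolerating occasional lost updates in exchange for parallel speed. A thread-based toy sketch of the idea (the real example uses separate Python processes talking to a Lua Torch memory network over zmq):

```python
import threading

def worker(weights, n_updates, lr=0.001):
    # Hogwild-style worker: update shared parameters with no locking.
    # Races can lose an occasional update, which sparse-update
    # training tolerates in practice.
    for _ in range(n_updates):
        for i in range(len(weights)):
            weights[i] += lr

weights = [0.0] * 4
threads = [threading.Thread(target=worker, args=(weights, 100))
           for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```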
Train an attentive LSTM model on the SQuAD dataset with a batch size of 32 examples (requires pytorch and regex):
Run this command:
python examples/drqa/train.py -t squad -bs 32
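The `-bs 32` flag controls how many examples are grouped per training step; since sequences vary in length, each batch is typically padded to its longest member. A minimal sketch of that batching step (function name and pad id 0 are illustrative assumptions, not DrQA's actual code):

```python
def make_batches(examples, batch_size):
    """Group tokenised examples into fixed-size batches, padding each
    batch to the length of its longest sequence (pad id 0)."""
    batches = []
    for i in range(0, len(examples), batch_size):
        chunk = examples[i:i + batch_size]
        maxlen = max(len(seq) for seq in chunk)
        batches.append([seq + [0] * (maxlen - len(seq)) for seq in chunk])
    return batches

batches = make_batches([[1], [2, 3], [4, 5, 6], [7]], batch_size=2)
```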
For more examples, please read our tutorial. To learn more about ParlAI, visit the project website at parl.ai.