Tasks

List of ParlAI tasks defined in the file task_list.py.
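Each dataset below is accessed by passing its task string to the --task flag. For a quick look at any of them, ParlAI's display_data command (also used in the notes below) prints examples to the terminal; a minimal sketch, using SQuAD as an arbitrary task from this list:

  # print a handful of SQuAD examples to the terminal
  parlai display_data --task squad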

  1. QA tasks

  2. Cloze tasks

  3. Goal tasks

  4. ChitChat tasks

  5. Negotiation tasks

  6. Visual tasks

  7. decanlp tasks

QA Tasks

AmazonQA

Usage: --task amazon_qa

Links: website, code

This dataset contains Question and Answer data from Amazon, totaling around 1.4 million answered questions.

AQuA

Usage: --task aqua

Links: arXiv, code

Dataset containing algebraic word problems with rationales for their answers.

bAbI 1k

Usage: --task babi:All1k

Links: arXiv, code

20 synthetic tasks that each test a unique aspect of text and reasoning, and hence test different capabilities of learning models.

Notes

You can access just one of the bAbI tasks with e.g. ‘babi:Task1k:3’ for task 3.
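For example, a minimal sketch of viewing task 3 of the 1k set with ParlAI's display_data command:

  # show examples from bAbI task 3 (1k training examples per task)
  parlai display_data --task babi:Task1k:3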

bAbI 10k

Usage: --task babi:All10k

Links: arXiv, code

20 synthetic tasks that each test a unique aspect of text and reasoning, and hence test different capabilities of learning models.

Notes

You can access just one of the bAbI tasks with e.g. ‘babi:Task10k:3’ for task 3.
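As with the 1k set, a minimal sketch for task 3 of the 10k set:

  # show examples from bAbI task 3 (10k training examples per task)
  parlai display_data --task babi:Task10k:3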

Conversational Question Answering Challenge

Usage: --task coqa

Links: arXiv, code

CoQA is a large-scale dataset for building Conversational Question Answering systems. The goal of the CoQA challenge is to measure the ability of machines to understand a text passage and answer a series of interconnected questions that appear in a conversation. CoQA is pronounced as coca.

HotpotQA

Usage: --task hotpotqa

Links: arXiv, code

HotpotQA is a dataset for multi-hop question answering. The overall setting is that given some context paragraphs (e.g., a few paragraphs, or the entire Web) and a question, a QA system answers the question by extracting a span of text from the context. It is necessary to perform multi-hop reasoning to correctly answer the question.

MCTest

Usage: --task mctest

Links: website, code

Questions about short children’s stories.

Movie Dialog QA

Usage: --task moviedialog:Task:1

Links: arXiv, code

Closed-domain QA dataset asking templated questions about movies, answerable from Wikipedia, similar to WikiMovies.

Movie Dialog Recommendations

Usage: --task moviedialog:Task:2

Links: arXiv, code

Questions asking for movie recommendations.

MTurk WikiMovies

Usage: --task mturkwikimovies

Links: arXiv, code

Closed-domain QA dataset asking MTurk-derived questions about movies, answerable from Wikipedia.

NarrativeQA

Usage: --task narrative_qa

Links: arXiv, code

A dataset and set of tasks in which the reader must answer questions about stories by reading entire books or movie scripts.

Notes

You can access the summaries-only task for NarrativeQA by using the task ‘narrative_qa:summaries’. By default, only stories are provided.
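For example, a minimal sketch of loading the summaries-only variant:

  # use summaries rather than full stories
  parlai display_data --task narrative_qa:summaries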

Natural Questions

Usage: --task natural_questions

Links: paper, website, code

An open-domain question answering dataset. Each example contains a real question that people searched for on Google, the content of a Wikipedia article that was among the top 5 search results for that query, and its annotations. The annotations offer three answer formats: a long answer selected from spans of major content entities in the Wikipedia article (e.g., paragraphs, tables), a short answer selected from one or more short spans of words in the article, or ‘yes/no’. Whether any of these answer formats is present depends on whether the question can be answered from the article; if not, they are left empty.

Notes

Since this task uses ChunkTeacher, it should be used with streaming.
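A minimal sketch of streaming usage, relying on ParlAI's standard streaming datatype:

  # stream the training data rather than loading it all into memory
  parlai display_data --task natural_questions --datatype train:stream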

Question Answering in Context

Usage: --task quac

Links: arXiv, code

Question Answering in Context is a dataset for modeling, understanding, and participating in information seeking dialog. Data instances consist of an interactive dialog between two crowd workers: (1) a student who poses a sequence of freeform questions to learn as much as possible about a hidden Wikipedia text, and (2) a teacher who answers the questions by providing short excerpts (spans) from the text. QuAC introduces challenges not found in existing machine comprehension datasets: its questions are often more open-ended, unanswerable, or only meaningful within the dialog context.

Simple Questions

Usage: --task simplequestions

Links: arXiv, code

Open-domain QA dataset based on Freebase triples.

SQuAD2

Usage: --task squad2

Links: arXiv, code

Open-domain QA dataset answerable from a given paragraph from Wikipedia.

SQuAD

Usage: --task squad

Links: arXiv, code

Open-domain QA dataset answerable from a given paragraph from Wikipedia.

TriviaQA

Usage: --task triviaqa

Links: arXiv, code

Open-domain QA dataset with question-answer-evidence triples.

Web Questions

Usage: --task webquestions

Links: paper, code

Open-domain QA dataset from Web queries.

WikiMovies

Usage: --task wikimovies

Links: arXiv, code

Closed-domain QA dataset asking templated questions about movies, answerable from Wikipedia.

WikiQA

Usage: --task wikiqa

Links: website, code

Open-domain QA dataset based on Wikipedia.

InsuranceQA

Usage: --task insuranceqa

Links: arXiv, code

Task which requires agents to identify high quality answers composed by professionals with deep domain knowledge.

MS_MARCO

Usage: --task ms_marco

Links: arXiv, code

A large scale Machine Reading Comprehension Dataset with questions sampled from real anonymized user queries and contexts from web documents.

QAngaroo

Usage: --task qangaroo

Links: website, code

Reading comprehension with multiple hops, comprising two datasets: WIKIHOP, built on Wikipedia, and MEDHOP, built on paper abstracts from PubMed.

ELI5

Usage: --task eli5

Links: website, code

This dataset contains Question and Answer data from Reddit explainlikeimfive posts and comments.

DREAM

Usage: --task dream

Links: website, code

A multiple-choice answering dataset based on multi-turn, multi-party dialogue.

C3

Usage: --task c3

Links: website, code

A multiple-choice answering dataset in Chinese based on a prior passage.

CommonSenseQA

Usage: --task commonsenseqa

Links: website, code

CommonSenseQA is a multiple-choice QA dataset that relies on commonsense knowledge to predict correct answers.


Cloze Tasks

BookTest

Usage: --task booktest

Links: arXiv, code

Sentence completion given a few sentences as context from a book. A larger version of CBT.

Children’s Book Test (CBT)

Usage: --task cbt

Links: arXiv, code

Sentence completion given a few sentences as context from a children’s book.

QA CNN

Usage: --task qacnn

Links: arXiv, code

Cloze dataset based on a missing (anonymized) entity phrase from a CNN article.

QA Daily Mail

Usage: --task qadailymail

Links: arXiv, code

Cloze dataset based on a missing (anonymized) entity phrase from a Daily Mail article.


Goal Tasks

Coached Conversational Preference Elicitation

Usage: --task ccpe

Links: website, code

A dataset consisting of 502 dialogs with 12,000 annotated utterances between a user and an assistant discussing movie preferences in natural language. It was collected using a Wizard-of-Oz methodology between two paid crowd-workers, where one worker plays the role of an ‘assistant’, while the other plays the role of a ‘user’.

Dialog Based Language Learning: bAbI Task

Usage: --task dbll_babi

Links: arXiv, code

Short dialogs based on the bAbI tasks, but in the form of a question from a teacher, the answer from the student, and finally a comment on the answer from the teacher. The aim is to find learning models that use the comments to improve.

Notes

Tasks can be accessed with a format like ‘parlai display_data -t dbll_babi:task:2_p0.5’, which specifies task 2 with a policy that answers 0.5 of the questions correctly. See the paper for more details of the tasks.

Dialog Based Language Learning: WikiMovies Task

Usage: --task dbll_movie

Links: arXiv, code

Short dialogs based on WikiMovies, but in the form of a question from a teacher, the answer from the student, and finally a comment on the answer from the teacher. The aim is to find learning models that use the comments to improve.

Dialog bAbI

Usage: --task dialog_babi

Links: arXiv, code

Simulated dialogs of restaurant booking.

Dialog bAbI+

Usage: --task dialog_babi_plus

Links: website, paper, code

bAbI+ is an extension of the bAbI Task 1 dialogues with everyday incremental dialogue phenomena (hesitations, restarts, and corrections) which model the disfluencies and communication problems in everyday spoken interaction in real-world environments.

MutualFriends

Usage: --task mutualfriends

Links: website, code

Task where two agents must discover which friend of theirs is mutual based on the friends’ attributes.

Movie Dialog QA Recommendations

Usage: --task moviedialog:Task:3

Links: arXiv, code

Dialogs discussing questions about movies as well as recommendations.

Personalized Dialog Full Set

Usage: --task personalized_dialog:AllFull

Links: arXiv, code

Simulated dataset of restaurant booking focused on personalization based on user profiles.

Personalized Dialog Small Set

Usage: --task personalized_dialog:AllSmall

Links: arXiv, code

Simulated dataset of restaurant booking focused on personalization based on user profiles.

Task N’ Talk

Usage: --task taskntalk

Links: arXiv, code

Dataset of synthetic shapes described by attributes, for agents to play a cooperative QA game.

SCAN

Usage: --task scan

Links: arXiv, website, code

SCAN is a set of simple language-driven navigation tasks for studying compositional learning and zero-shot generalization. The SCAN tasks were inspired by the CommAI environment, which is the origin of the acronym (Simplified versions of the CommAI Navigation tasks).

MultiWOZ 2.0

Usage: --task multiwoz_v20

Links: website, code

A fully labeled collection of human-written conversations spanning multiple domains and topics.

MultiWOZ 2.1

Usage: --task multiwoz_v21

Links: website, code

A fully labeled collection of human-written conversations spanning multiple domains and topics.

OneCommon

Usage: --task onecommon

Links: website, code

A collaborative referring task which requires advanced skills of common grounding under continuous and partially-observable context. This code also includes reference-resolution annotation.

AirDialogue

Usage: --task airdialogue

Links: website, code

Task for goal-oriented dialogue using airplane booking conversations between agents and customers.

ReDial

Usage: --task redial

Links: website, code

Annotated dataset of dialogues where users recommend movies to each other.

GoogleSGD

Usage: --task google_sgd

Links: code

The Schema-Guided Dialogue (SGD) dataset consists of over 20k annotated multi-domain, task-oriented conversations between a human and a virtual assistant.

TaskMaster2

Usage: --task taskmaster2

Links: code

The second version of TaskMaster, containing Wizard-of-Oz dialogues for task-oriented dialogue in 7 domains.


ChitChat Tasks

Blended Skill Talk

Usage: --task blended_skill_talk

Links: code

A dataset of 7k conversations explicitly designed to exhibit multiple conversation modes: displaying personality, having empathy, and demonstrating knowledge.

Cornell Movie

Usage: --task cornell_movie

Links: arXiv, code

Fictional conversations extracted from raw movie scripts.

Dialogue NLI

Usage: --task dialogue_nli

Links: website, arXiv, code

Dialogue NLI is a dataset that addresses the issue of consistency in dialogue models.

DSTC7 subtrack 1 - ubuntu

Usage: --task dstc7

Links: arXiv, code

DSTC7 is a competition which provided a dataset of dialogs very similar to the Ubuntu dataset. In particular, subtrack 1 consists of predicting the next utterance.

Movie Dialog Reddit

Usage: --task moviedialog:Task:4

Links: arXiv, code

Dialogs discussing Movies from Reddit (the Movies SubReddit).

Open Subtitles

Usage: --task opensubtitles

Links: version 2018 website, version 2009 website, related work (arXiv), code

Dataset of dialogs from movie scripts.

Ubuntu

Usage: --task ubuntu

Links: arXiv, code

Dialogs between an Ubuntu user and an expert trying to fix an issue. We use the V2 version, which cleaned the data to some extent.

ConvAI2

Usage: --task convai2

Links: arXiv, website, code

A chit-chat dataset based on PersonaChat for a NIPS 2018 competition.

ConvAI_ChitChat

Usage: --task convai_chitchat

Links: website, code

Human-bot dialogues containing free discussions of randomly chosen paragraphs from SQuAD.

Persona-Chat

Usage: --task personachat

Links: arXiv, code

A chit-chat dataset where paired Turkers are given assigned personas and chat to try to get to know each other.

TaskMaster-1-2019

Usage: --task taskmaster

Links: website, code

A chit-chat dataset by Google AI providing high-quality goal-oriented conversations. The dataset hopes to provoke interest in written vs. spoken language. It consists of two sets of two-person dialogs: spoken dialogs, created using a Wizard-of-Oz methodology, and written dialogs, created by crowdsourced workers who were asked to write the full conversation themselves, playing the roles of both user and assistant.

Twitter

Usage: --task twitter

Links: website, code

Twitter data found on GitHub. No train/valid/test split was provided, so 10k examples were chosen at random for valid and 10k for test.

ConvAI2_wild_evaluation

Usage: --task convai2_wild_evaluation

Links: website, code

Dataset collected during the wild evaluation of ConvAI2 participants’ bots. 60% train, 20% valid, and 20% test splits are chosen at random from the whole dataset.

Image_Chat

Usage: --task image_chat

Links: website, website2, code

202k dialogues and 401k utterances over 202k images from the YFCC100m dataset, using 215 possible personality traits.

Notes

If you have already downloaded the images, please specify the path with the --yfcc-path flag, as the image download script takes a very long time to run.
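For example, a minimal sketch, assuming the images were already downloaded to the hypothetical directory /data/yfcc_images:

  # /data/yfcc_images is a placeholder; point this at your local YFCC100m copy
  parlai display_data --task image_chat --yfcc-path /data/yfcc_images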

Image_Chat_Generation

Usage: --task image_chat:Generation

Links: code

Image Chat task to train generative models.

Wizard_of_Wikipedia

Usage: --task wizard_of_wikipedia

Links: arXiv, code

A dataset with conversations directly grounded with knowledge retrieved from Wikipedia. Contains 201k utterances from 22k dialogues spanning over 1300 diverse topics, split into train, test, and valid sets. The test and valid sets are split into two sets each: one with overlapping topics with the train set, and one with unseen topics.

Notes

To access the different valid/test splits (unseen/seen), specify the corresponding split (random_split for seen, topic_split for unseen) after the last colon in the task. E.g. wizard_of_wikipedia:WizardDialogKnowledgeTeacher:random_split
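For example, a minimal sketch of loading the unseen-topics (topic_split) validation set:

  # valid split with topics not seen in training
  parlai display_data --task wizard_of_wikipedia:WizardDialogKnowledgeTeacher:topic_split --datatype valid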

Wizard_of_Wikipedia_Generator

Usage: --task wizard_of_wikipedia:Generator

Links: code

Wizard of Wikipedia task to train generative models.

Daily Dialog

Usage: --task dailydialog

Links: arXiv, code

A dataset of chitchat dialogues with strong annotations for topic, emotion and utterance act. This version contains both sides of every conversation, and uses the official train/valid/test splits from the original authors.

Empathetic Dialogues

Usage: --task empathetic_dialogues

Links: arXiv, code

A dataset of 25k conversations grounded in emotional situations to facilitate training and evaluating dialogue systems. The dataset has been released under the CC BY-NC license.

Notes

EmpatheticDialoguesTeacher returns examples like so:

  • [text]: context line (previous utterance by ‘speaker’)

  • [labels]: label line (current utterance by ‘listener’)

with additional task specific fields:

  • [situation]: a 1-3 sentence description of the situation that the conversation is based on

  • [emotion]: one of 32 emotion words

Other optional fields:

  • [prepend_ctx]: fasttext prediction on context line - or None

  • [prepend_cand]: fasttext prediction on label line (candidate) - or None

  • [deepmoji_ctx]: vector encoding from deepmoji penultimate layer - or None

  • [deepmoji_cand]: vector encoding from deepmoji penultimate layer for label line (candidate) - or None
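A minimal sketch for browsing the examples and their fields (assuming ParlAI is installed):

  # print EmpatheticDialogues examples to the terminal
  parlai display_data --task empathetic_dialogues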

Image Grounded Conversations

Usage: --task igc

Links: arXiv, code

A dataset of (image, context, question, answer) tuples, comprised of eventful images taken from Bing, Flickr, and COCO.

Holl-E

Usage: --task holl_e

Links: website, code

Sequence of utterances and responses with background knowledge about movies. From the Holl-E dataset.

ReDial

Usage: --task redial

Links: website, code

Annotated dataset of dialogues where users recommend movies to each other.

Style-Controlled Generation

Usage: --task style_gen

Links: code

Dialogue datasets (BlendedSkillTalk, ConvAI2, EmpatheticDialogues, and Wizard of Wikipedia) labeled with personalities taken from the Image-Chat dataset. Used for the style-controlled generation project.

DialoguE COntradiction DEteCtion (DECODE)

Usage: --task decode

Links: arXiv, code

Task to detect whether the last utterance contradicts the previous dialogue history.


Negotiation Tasks

Deal or No Deal

Usage: --task dealnodeal

Links: arXiv, code

End-to-end negotiation task which requires two agents to agree on how to divide a set of items, with each agent assigning different values to each item.


Visual Tasks

FVQA

Usage: --task fvqa

Links: arXiv, code

FVQA is a VQA dataset which requires, and supports, much deeper reasoning. It extends a conventional visual question answering dataset, which contains image-question-answer triplets, with additional image-question-answer-supporting-fact tuples. The supporting fact is represented as a structural triplet, such as <Cat, CapableOf, ClimbingTrees>.

VQAv1

Usage: --task vqa_v1

Links: arXiv, code

Open-ended question answering about visual content.

VQAv2

Usage: --task vqa_v2

Links: arXiv, code

Bigger, more balanced version of the original VQA dataset.

VisDial

Usage: --task visdial

Links: arXiv, code

Task which requires agents to hold a meaningful dialog about visual content.

MNIST_QA

Usage: --task mnist_qa

Links: code

Task which requires agents to identify which number they are seeing. From the MNIST dataset.

CLEVR

Usage: --task clevr

Links: arXiv, code

A visual reasoning dataset that tests abilities such as attribute identification, counting, comparison, spatial relationships, and logical operations.

nlvr

Usage: --task nlvr

Links: website, code

Cornell Natural Language Visual Reasoning (NLVR) is a language grounding dataset based on pairs of natural language statements grounded in synthetic images.

Flickr30k

Usage: --task flickr30k

Links: website, paper1, paper2, code

30k captioned images pulled from Flickr compiled by UIUC.

COCO_Captions

Usage: --task coco_caption

Links: website, code

COCO annotations derived from the 2015 COCO Caption Competition.

Personality_Captions

Usage: --task personality_captions

Links: website, arXiv, code

200k images from the YFCC100m dataset with captions conditioned on one of 215 personalities.

Notes

If you have already downloaded the images, please specify the path with the --yfcc-path flag, as the image download script takes a very long time to run.
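As with Image_Chat, a minimal sketch assuming the images already live at the hypothetical path /data/yfcc_images:

  # /data/yfcc_images is a placeholder; point this at your local YFCC100m copy
  parlai display_data --task personality_captions --yfcc-path /data/yfcc_images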

Image_Chat

Usage: --task image_chat

Links: website, website2, code

202k dialogues and 401k utterances over 202k images from the YFCC100m dataset, using 215 possible personality traits.

Notes

If you have already downloaded the images, please specify the path with the --yfcc-path flag, as the image download script takes a very long time to run.

Image_Chat_Generation

Usage: --task image_chat:Generation

Links: code

Image Chat task to train generative models.

Image Grounded Conversations

Usage: --task igc

Links: arXiv, code

A dataset of (image, context, question, answer) tuples, comprised of eventful images taken from Bing, Flickr, and COCO.


decanlp Tasks

MultiNLI

Usage: --task multinli

Links: arXiv, code

A dataset designed for use in the development and evaluation of machine learning models for sentence understanding. Each example contains a premise and a hypothesis. The model has to predict whether the premise and hypothesis entail, contradict, or are neutral to each other.

IWSLT14

Usage: --task iwslt14

Links: website, code

Task from the 2014 International Workshop on Spoken Language Translation (IWSLT); currently only includes en_de and de_en.

ConvAI_ChitChat

Usage: --task convai_chitchat

Links: website, code

Human-bot dialogues containing free discussions of randomly chosen paragraphs from SQuAD.

SST Sentiment Analysis

Usage: --task sst

Links: website, website2, code

Dataset containing sentiment trees of movie reviews. We use the modified binary sentiment analysis subtask given by the DecaNLP paper.

CNN/DM Summarisation

Usage: --task cnn_dm

Links: website, code

Dataset collected from CNN and the Daily Mail with summaries as labels. Implemented as part of the DecaNLP task.

QA-SRL Semantic Role Labeling

Usage: --task qasrl

Links: website, code

QA dataset implemented as part of the DecaNLP task.

QA-ZRE Relation Extraction

Usage: --task qazre

Links: website, code

Zero-shot relation extraction task, implemented as part of the DecaNLP task.

WOZ restaurant reservation (Goal-Oriented Dialogue)

Usage: --task woz

Links: arXiv, code

Dataset containing dialogues negotiating a restaurant reservation. Implemented as part of the DecaNLP task, focused on changes in the dialogue state.

WikiSQL semantic parsing task

Usage: --task wikisql

Links: website, code

Dataset for parsing sentences to SQL code, given a table. Implemented as part of the DecaNLP task.

MWSC pronoun resolution

Usage: --task mwsc

Links: website, code

Resolving possibly ambiguous pronouns. Implemented as part of the DecaNLP task; the data can be found on the decaNLP GitHub.