Tutorials & Explanations

  • ParlAI Quick-start
    • Colab Tutorial
    • Install
    • View a task & train a model
    • Train a Transformer on Twitter
    • Add a simple model
    • Conclusion
  • Intro to ParlAI
    • What is ParlAI?
    • Core Concepts
      • Agents
      • Messages
      • Teachers
      • Worlds
    • Using ParlAI
      • Concepts in Action: Simple Display Data Script
      • Validation and Testing
      • Tasks
      • Training and Evaluating Existing Agents
      • Interacting with Models
  • Tasks and Datasets in ParlAI
    • Quickstart: Adding a new dataset
      • Handling Separate Train/Valid/Test data
      • Json file format (instead of text file format)
    • Creating a new task: the more complete way
      • Part 1: Building the Data
      • Part 2: Creating the Teacher
        • Which base teacher should I use?
        • ParlAIDialogTeacher
        • DialogTeacher
        • Chunk Teacher
        • Task from Scratch
      • Part 3: Add Task to Task List
      • Part 4: Executing the Task
      • Part 5: Contributing upstream
  • Worlds, Sharing & Batching
    • Introduction
    • Agent Sharing
    • Batching
    • Dynamic Batching
  • Using Torch Generator Agent
    • Example Models
    • Creating a Model
    • Tutorial
      • Extending TorchGeneratorAgent
      • Extending TorchGeneratorModel
      • Creating the encoder
      • Creating the decoder
      • Training
  • Using Torch Ranker Agent
    • Example Models
    • Creating a Model
    • Training a Model
      • Setting --candidates
      • Tracking ranking metrics
    • Evaluating a Model
      • Evaluating on a fixed candidate set
      • Evaluating on “vocab” candidates
  • Understanding and adding metrics
    • Introduction and Standard Metrics
      • Agent-specific metrics
    • Adding custom metrics
    • Teacher metrics
    • Agent (model) level metrics
      • Global metrics
      • Local metrics
    • List of Metrics
  • Speeding up training
    • Setting a baseline
    • Skip generation & larger eval batchsize
    • Dynamic batching
    • FP16
    • Background preprocessing
    • Use multiple GPUs
  • Mutators
    • Usage
      • Composability
      • Multi-task mutators
      • Mutator arguments
    • Writing your own Mutators
  • Running crowdsourcing tasks
    • Example Tasks
      • Sample Task: Collecting Data
    • Creating Your Own Task
    • Advanced Task Techniques
    • Running a Task
    • Reviewing Turker’s Work
    • Mephisto MTurk Tips and Tricks
      • Approving Work
      • Rejecting Work
      • Filtering Workers
      • Soft-blocking Workers
      • Preventing and Handling Crashes
      • Task Design
      • Other Tips
    • Additional Credits
  • Using Chat Services
    • Overview
    • Example Tasks
      • Creating your Own Task
    • Available Chat Services
      • Browser
        • Setup
      • Facebook Messenger
        • Setup
      • Terminal
        • Setup
      • Web Sockets
      • Adding a New Chat Service
  • Swapping Out Transformer Subcomponents
    • Making a Module Swappable
    • Making the Swap
    • Composability
    • Implementation
  • Tests in ParlAI
    • Running a Test
    • Writing a Test
      • Continuous Integration, Explained
      • Types of Tests in ParlAI
        • Unit Tests
        • Data Tests
        • Task Tests
        • Nightly Tests
      • Writing a Test
        • Common Testing Utilities
        • Integration Testing Teachers
        • Writing your own Unit Test
      • Running Your Test
  • Generating Model Cards
    • What is a model card?
    • The Process
      • Step 1: Generating reports
      • Step 2: Model Card Generation
      • Examples
    • Report Generation Details
      • Generating single reports
    • Optional Customizations
      • Using --extra-args-path
        • Adding Custom Dataset and Model Info
        • Add Custom Sections or Changing Section Order
  • ParlAI Docker image
    • Pulling the image
      • Running single ParlAI command
      • Interactive ParlAI shell
    • Runtime resources
    • Persisting the data
      • Changing ParlAI code

FAQ & Troubleshooting

  • Frequently Asked Questions
    • Why is my model not generating a response?
    • Why can’t I reproduce the results of an evaluation on a task with a pretrained model?
    • I want to generate a lot of responses to fixed utterances
    • Why is my generative model’s perplexity so high (>1000) when evaluating?
    • I changed my teacher and now its tests won’t pass.
    • Can I use ParlAI on Windows?
  • Tips and Tricks
    • Command line tool
    • Multi-tasking with weighted tasks
    • Tasks with Parameters
    • Agent Convenience Functions
    • Self-Chats
    • Prettifying Display of Chats
    • Internal Agents, Tasks and More

Component Reference

  • Standard Agents
    • ALICE Bot
      • Setup
    • BART
      • Basic Examples
        • Train BART on convai2.
        • Interact with a BART Model fine-tuned in fairseq
      • BartAgent Options
      • TransformerGeneratorAgent Options
    • BERT Classifier
      • Basic Examples
      • BertClassifierAgent Options
      • BertDictionaryAgent Options
    • BERT Ranker
      • Content
      • Preliminary
      • Basic Examples
      • BiEncoderRankerAgent Options
      • CrossEncoderRankerAgent Options
    • Examples
      • Basic Examples
    • Fusion in Decoder (FiD)
      • DictionaryAgent Options
      • FidAgent Options
      • RagAgent Options
      • SearchQueryFAISSIndexFiDAgent Options
      • SearchQueryFiDAgent Options
      • SearchQuerySearchEngineFiDAgent Options
      • WizIntGoldDocRetrieverFiDAgent Options
    • Fixed Response
      • Basic Examples
      • FixedResponseAgent Options
    • GPT3
      • Setup
      • Interactive example
      • Self chat example
      • Limitations
      • Gpt3Agent Options
    • HRED Agent
      • HredAgent Options
    • Hugging Face
      • GPT2
        • Examples
      • DialoGPT
        • Examples
      • T5
        • Implementation
        • Basic Examples
    • Image+Seq2Seq
      • Basic Examples
      • DictionaryAgent Options
      • ImageSeq2seqAgent Options
      • TransformerGeneratorAgent Options
    • IR Baseline
      • Basic Examples
      • DictionaryAgent Options
      • IrBaselineAgent Options
    • Local Human
      • Basic Examples
      • LocalHumanAgent Options
    • MemNN
      • Basic Examples
      • MemnnAgent Options
    • Retrieval-Augmented Generation (RAG)
      • Installation / Memory Requirements.
        • RAM
        • GPU
      • RAG Quick Start
      • RAG Options
        • RAG Seq2Seq Generators: --generation-model
        • RAG Model Types: --rag-model-type
        • RAG Retriever Types: --rag-retriever-type
        • Other RAG Options
      • Generating your own FAISS Index.
        • 1a. [Recommended] Obtain/Choose a (Pre-trained) DPR Model
        • 1b. Train your own Dropout Poly-encoder
        • 2. Generate Dense Embeddings (~1-2 hours if sharded appropriately - 50 x 1 GPU).
        • 3. Index the Dense Embeddings
      • Directory Structure / Custom Components
        • Custom Components
      • BartAgent Options
      • BartRagAgent Options
      • DictionaryAgent Options
      • PolyencoderAgent Options
      • RagAgent Options
      • T5Agent Options
      • T5RagAgent Options
      • TransformerGeneratorAgent Options
      • TransformerGeneratorRagAgent Options
    • Random Candidate
      • Basic Examples
      • RandomCandidateAgent Options
    • Repeat Label
      • Basic Examples
      • RepeatLabelAgent Options
    • Repeat Query
      • Basic Examples
    • Output Reranker
      • How to build your own re-ranker.
        • 1. Train a classifier or ranker model.
        • 2. Subclass AbstractReranker
        • 3. Subclass AbstractGeneratorRerankAgent
      • Case study: Classifier Re-Ranking.
      • Case study: LIGHT RPA Re-Ranking.
        • 1. Train a classifier or ranker model.
        • 2. Subclass AbstractReranker.
        • 3. Subclass AbstractGeneratorRerankAgent
      • AbstractGeneratorRerankAgent Options
      • AbstractGpt2RerankAgent Options
      • Gpt2Agent Options
      • LongAbstractGeneratorRerankAgent Options
      • TransformerGeneratorAgent Options
      • TransformerVariantAgent Options
    • Retriever Reader
      • Basic Examples
      • RetrieverReaderAgent Options
    • Safe Local Human
      • Basic Examples
      • LocalHumanAgent Options
      • SafeLocalHumanAgent Options
    • Seq2Seq Agent
      • Seq2seqAgent Options
    • Starspace
      • Basic Examples
      • DictionaryAgent Options
      • StarspaceAgent Options
    • Test Agents
      • MockTorchAgent Options
      • MockTrainUpdatesAgent Options
      • SilentTorchAgent Options
    • TFIDF Retriever
      • Basic Examples
      • TfidfRetrieverAgent Options
    • Transformer
      • Agent Variations
      • TransformerClassifierAgent Options
      • TransformerGeneratorAgent Options
      • TransformerRankerAgent Options
    • Unigram Agent
      • Basic Examples
      • DictionaryAgent Options
      • UnigramAgent Options
  • Tasks
    • All Tasks
      • Multi-Party Light
    • ChitChat Tasks
      • Blended Skill Talk
      • Cmu Document Grounded Conversations
      • Cornell Movie
      • Dialogue Nli
      • Dstc7 Subtrack 1 - Ubuntu
      • Movie Dialog Reddit
      • Open Subtitles
      • Ubuntu
      • Convai2
      • Convai Chitchat
      • Persona-Chat
      • Taskmaster-1-2019
      • Msr End-To-End
      • Twitter
      • Convai2 Wild Evaluation
      • Image Chat
      • Image Chat Generation
      • Wizard Of Wikipedia
      • Wizard Of Wikipedia Generator
      • Daily Dialog
      • Empathetic Dialogues
      • Image Grounded Conversations
      • Holl-E
      • Redial
      • Style-Controlled Generation
      • Dialogue Contradiction Detection (Decode)
      • Wizard Of Internet
      • Multisessionchat
      • Xpersona
      • Lccc
      • Multi-Party Light
    • Cloze Tasks
      • Booktest
      • Children’S Book Test (Cbt)
      • Qa Cnn
      • Qa Daily Mail
    • Debug Tasks
      • Integration Tests
    • Dodeca Tasks
      • Cornell Movie
      • Light-Dialogue
      • Ubuntu
      • Convai2
      • Twitter
      • Image Chat Generation
      • Wizard Of Wikipedia Generator
      • Daily Dialog
      • Empathetic Dialogues
      • Image Grounded Conversations
    • Entailment Tasks
      • Multinli
      • The Stanford Natural Language Inference (Snli) Corpus
      • Adversarial Natural Language Inference (Anli) Corpus
      • Natural Language Inference (Nli) Corpus
      • Dialogue Contradiction Detection (Decode)
      • Entailmentbank
    • Goal Tasks
      • Coached Conversational Preference Elicitation
      • Dialog Based Language Learning: Babi Task
      • Dialog Based Language Learning: Wikimovies Task
      • Dialog Babi
      • Dialog Babi+
      • Mutualfriends
      • Movie Dialog Qa Recommendations
      • Personalized Dialog Full Set
      • Personalized Dialog Small Set
      • Task N’ Talk
      • Scan
      • Multiwoz 2.0
      • Multiwoz 2.1
      • Multiwoz 2.2
      • Onecommon
      • Airdialogue
      • Redial
      • Googlesgd
      • Googlesgd Simulation Splits
      • Taskmaster2
      • Tickettalk (Taskmaster3)
      • Metalwoz
    • Grounded Tasks
      • Cmu Document Grounded Conversations
      • Light-Dialogue
      • Light-Dialogue-Wild
    • LIGHT Tasks
      • Light-Dialogue-Wild
    • MT Tasks
      • Wmt
      • Iwslt14
    • Math Tasks
      • Asdiv
      • Mathdataset
    • MovieDD Tasks
      • Movie Dialog Qa
      • Movie Dialog Qa Recommendations
      • Movie Dialog Recommendations
      • Movie Dialog Reddit
    • MultiPartyConvo Tasks
      • Friends
    • NLI Tasks
      • Dialogue Nli
      • Adversarial Natural Language Inference (Anli) Corpus
    • Negotiation Tasks
      • Deal Or No Deal
      • Casino (Campsite Negotiation Dialogues)
    • Personalization Tasks
      • Personalized Dialog Full Set
      • Personalized Dialog Small Set
    • QA Tasks
      • Amazonqa
      • Aqua
      • Babi 1K
      • Babi 10K
      • Conversational Question Answering Challenge
      • Hotpotqa
      • Mctest
      • Movie Dialog Qa
      • Movie Dialog Recommendations
      • Mturk Wikimovies
      • Narrativeqa
      • Natural Questions
      • Question Answering In Context
      • Simple Questions
      • Squad2
      • Squad
      • Triviaqa
      • Web Questions
      • Wikimovies
      • Wikiqa
      • Insuranceqa
      • Ms Marco
      • Qangaroo
      • Eli5
      • Dream
      • C3
      • Commonsenseqa
      • Eqasc
    • Reasoning Tasks
      • Choice Of Plausible Alternatives
      • Entailmentbank
      • Asdiv
      • Mathdataset
      • Eqasc
      • Reasoning Framework
      • Proofwriter
    • TOD Tasks
      • Multidogo
    • Visual Tasks
      • Fvqa
      • Vqav1
      • Vqav2
      • Visdial
      • Mnist Qa
      • Clevr
      • Nlvr
      • Flickr30K
      • Coco Captions
      • Personality Captions
      • Image Chat
      • Image Chat Generation
      • Image Grounded Conversations
    • all Tasks
      • Spolin
      • Fits
    • common ground Tasks
      • Spolin
    • decanlp Tasks
      • Multinli
      • Iwslt14
      • Convai Chitchat
      • Sst Sentiment Analysis
      • Cnn/Dm Summarisation
      • Qa-Srl Semantic Role Labeling
      • Qa-Zre Relation Extraction
      • Woz Restaurant Reservation (Goal-Oriented Dialogue)
      • Wikisql Semantic Parsing Task
      • Mwsc Pronoun Resolution
    • engaging Tasks
      • Spolin
      • Fits
    • improv Tasks
      • Spolin
    • improve Tasks
      • Fits
    • open-ended Tasks
      • Spolin
      • Fits
    • Uncategorized Tasks
      • Bot Adversarial Dialogue
      • Safety Mix
      • Glue
      • Huggingface
      • Prosocial Dialog
      • Reframe Unhelpful Thoughts
      • Superglue
      • Dialogue Qe
      • Wikipedia
      • Decanlp: The Natural Language Decathlon
      • Dialogue Safety
      • Selfchat
      • Funpedia
      • Light Gender Bias
      • Genderationbiascontroltask
      • Md Gender
      • Sensitive Topics Evaluation Topics Valid Teacher
      • Jerichoworld
      • Saferdialogues
  • Mutators
    • Original output
    • context_shuffle
    • episode_reverse
    • episode_shuffle
    • flatten
    • last_turn
    • word_reverse
    • word_shuffle
  • Model Zoo
    • Wikipedia models
      • Wikipedia Retriever (Used For Wizard Of Wikipedia)
    • Wizard Of Wikipedia models
      • Wizard Of Wikipedia (End To End Generator)
      • Wizard Of Wikipedia (Full Dialogue Retrieval Model)
      • Imageseq2Seq Dodecadialogue Wizard Of Wikipedia Ft Model
      • Unlikelihood Wizard Of Wikipedia Context And Label Repetition Model
      • Unlikelihood Wizard Of Wikipedia Context Repetition Model
      • Unlikelihood Wizard Of Wikipedia Label Repetition Model
      • Bart Fid Dpr Model
      • Bart Fid Rag Dpr-Poly Model
      • Bart Fid Rag Model
      • Bart Rag Dpr-Poly Model
      • Bart Rag Dpr Sequence Model
      • Bart Rag Dpr Token Model
      • Bart Rag Dpr Turn Doc-Then-Turn Model
      • Bart Rag Dpr Turn Doc-Only Model
      • Dropout Poly-Encoder
      • Multiset Dpr Model
      • Wikipedia Compressed Faiss Index
      • Wikipedia Exact Faiss Index
      • Wikipedia Passages
      • Wow Passages
      • Wow Passages Compressed Index
      • Wow Passages Exact Index
    • Light Dialog models
      • Light Bert-Biranker Dialogue Model
      • Imageseq2Seq Dodecadialogue Light Dialogue Ft Model
      • Light Am I Me Or You Vanilla 128 Baseline
      • Light Am I Me Or You Vanilla 1024 Baseline
      • Light Am I Me Or You Rpa Unlikelihood (128-Truncation) Model
      • Light Am I Me Or You Rpa Unlikelihood (1024-Truncation) Model
      • Light Am I Me Or You Multi-Objective Model
      • Light Am I Me Or You Profile Expanded Attention (128-Truncation) Model
      • Light Am I Me Or You Profile Expanded Attention (1024-Truncation) Model
      • Light Am I Me Or You Automated Expanded Attention (1024-Truncation) Model
      • Light Am I Me Or You Automated Expanded Attention + Multi-Objective Model
    • Personality Captions models
      • Transresnet (Resnet 152) Personality-Captions Model
    • Pretrained Transformers models
      • Poly-Encoder Transformer Reddit Pretrained Model
      • Poly-Encoder Transformer Wikipedia/Toronto Books Pretrained Model
      • Bi-Encoder Transformer Reddit Pretrained Model
      • Bi-Encoder Transformer Wikipedia/Toronto Books Pretrained Model
      • Cross-Encoder Transformer Reddit Pretrained Model
      • Cross-Encoder Transformer Wikipedia/Toronto Books Pretrained Model
    • Convai2 models
      • Poly-Encoder Transformer Convai2 Model
      • Bi-Encoder Transformer Convai2 Model
      • Imageseq2Seq Dodecadialogue Convai2 Ft Model
      • Unlikelihood Convai2 Context And Label Repetition Model
      • Unlikelihood Convai2 Context Repetition Model
      • Unlikelihood Convai2 Label Repetition Model
      • Unlikelihood Vocab Alpha 1E0 Model
      • Unlikelihood Vocab Alpha 1E1 Model
      • Unlikelihood Vocab Alpha 1E2 Model
      • Unlikelihood Vocab Alpha 1E3 Model
    • Image Chat models
      • Transresnet (Resnet152) Image-Chat Model
      • Imageseq2Seq Dodecadialogue Image Chat Ft Model
    • Dialogue Safety models
      • Transformer Classifier Single-Turn Dialogue Safety Model
      • Bert Classifier Multi-Turn Dialogue Safety Model
    • Integration Tests models
      • Integration Test Models
    • #Dodeca models
      • Imageseq2Seq Dodecadialogue All Tasks Mt Model
      • Imageseq2Seq Dodecadialogue Base Model
    • Cornell Movie models
      • Imageseq2Seq Dodecadialogue Cornell Movie Ft Model
    • Dailydialog models
      • Imageseq2Seq Dodecadialogue Dailydialog Ft Model
    • TBD models
      • Imageseq2Seq Dodecadialogue Eli5 Ft Model
      • Imageseq2Seq Dodecadialogue Pushshift.Io Reddit Ft Model
      • Generative Pre-Trained Transformer 3
    • Empathetic Dialogues models
      • Imageseq2Seq Dodecadialogue Empathetic Dialogue Ft Model
    • Igc models
      • Imageseq2Seq Dodecadialogue Image Grounded Conversations Ft Model
    • Twitter models
      • Imageseq2Seq Dodecadialogue Twitter Ft Model
    • Ubuntu models
      • Imageseq2Seq Dodecadialogue Ubuntu V2 Ft Model
    • Blended Skill Talk models
      • Blendedskilltalk: Blendedskilltalk Single-Task Model
      • Blendedskilltalk: Convai2 Single-Task Model
      • Blendedskilltalk: Empatheticdialogues Single-Task Model
      • Blendedskilltalk: Wizard Of Wikipedia Single-Task Model
      • Blendedskilltalk: Mt Single-Skills Model
      • Blendedskilltalk: Mt Single-Skills Model Fine-Tuned On Bst
      • Blender 90M
      • Blender 2.7B
      • Blender 1B Distilled
      • Blender 400M Distilled
      • Blender 9.4B
      • Multi-Modal Blenderbot (Mmb Degenpos)
      • Blenderbot3B With Name-Scrambling Gender-Bias Reduction
      • Blenderbot3B With Token-Bin Control-Generation Gender-Bias Reduction
      • Blenderbot3B With Sequence-Level Unlikelihood-Training Gender-Bias Reduction
      • Blenderbot3B With Name-Scrambling Gender/Ethnicity-Bias Reduction
    • Pushshift.Io models
      • Tutorial Transformer Generator
      • Reddit 2.7B
      • Reddit 9.4B
    • Wikipedia Plus Toronto Books models
      • Bart
    • Eli5 models
      • Unlikelihood Eli5 Context And Label Repetition Model
      • Unlikelihood Eli5 Context Repetition Model
      • Unlikelihood Eli5 Label Repetition Model
    • Style Gen models
      • Style-Controlled Generation: C75-D+ Generator
      • Style-Controlled Generation: Previous And Current Utterance Classifier
      • Style-Controlled Generation: Current-Utterance-Only Classifier
    • Bot Adversarial Dialogue models
      • Transformer Classifier Multi-Turn Dialogue Safety Model
      • Transformer Classifier Multi-Turn Dialogue Safety Model
    • Sensitive Topics Evaluation models
      • Transformer Classifier Sensitive Topics Detection
    • Md Gender models
      • Mdgender Bert Ranker Classifier
    • Wizard Of Internet models
      • Blenderbot2 Query Generator
      • Blenderbot2 3B
      • Blenderbot2 400M
      • Bart Base Wizard Of Internet
      • Search Query Generator Wizard Of Internet
      • Wizint Fid Search Query Search Engine
    • Multi models
      • Blenderbot2 Memory Decoder
    • Msc models
      • Msc2.7B 1024
      • Blenderbot2.7B 1024
      • Summsc-Rag 2.7B
      • Summsc-Fidrag 2.7B
      • Dialogue Summarization Model
      • Persona Summarizer
    • 8 Different Task-Oriented Dataset (See Project Page) models
      • Task-Oriented Dialog (Tod) Pretrained Model, Schema-Aware
      • Task-Oriented Dialog (Tod) Pretrained Model, Schema-Agnostic
    • Saferdialogues models
      • Saferdialogues: Taking Feedback Gracefully After Conversational Safety Failures
    • Projects.Light Whoami.Task models
      • Light Rpa Re-Ranker
      • [Test] Light Rpa Re-Ranker
      • Light Rpa Re-Ranker (For Automated Expanded Attention)
    • Pushshift.Io,Roberta,Cc100En models
      • R2C2 Base 400M
      • R2C2 Base 2.7B
    • Blended Skill Talk,Wizard Of Wikipedia,Convai2 models
      • R2C2 Blenderbot 400M
      • R2C2 Blenderbot 3Bm
    • Projects.Seeker.Tasks.Knowledge models
      • Seeker Dialogue 400M
      • Seeker Dialogue 3B
      • Seeker Lm + Dialogue 3B
    • Cc models
      • Seeker Lm Medium
      • Seeker Lm Large
      • Seeker Lm Xl
    • Fits models
      • Search Query Generator Trained On Fits
      • Blenderbot2 + Module Supervision On Fits Task
      • Blenderbot2 + Director + Module Supervision On Fits Task
      • Seeker + Module Supervision On Fits Task
      • Seeker + Director + Module Supervision On Fits Task
      • Dialogue Response Satisfaction Classifier
    • Projects.Bb3.Tasks.Module Level Tasks models
      • Blenderbot 3 3B
    • Light Multiparty models
      • Multi-Party Speaker Prediction
      • Multi-Party Utterance Only 3B
      • Multi-Party Utterance Only 400M
    • Pretrained Word Embeddings
    • BERT

Scripts Reference

  • Command Line Usage
    • display_data
      • Examples
      • CLI Arguments
    • display_model
      • Examples
      • CLI Arguments
    • eval_model
      • Examples
      • CLI Arguments
    • generate_model_card
      • CLI Arguments
    • interactive
      • Examples
      • CLI Arguments
    • safe_interactive
      • CLI Arguments
    • self_chat
      • CLI Arguments
    • tod_world_script
      • CLI Arguments
    • train_model
      • Examples
      • CLI Arguments
  • Advanced Scripts
    • build_candidates
      • Examples
      • CLI Arguments
    • build_dict
      • Examples
      • CLI Arguments
    • convert_to_json
      • CLI Arguments
    • convert_to_parlai
      • Examples
      • CLI Arguments
    • convo_render
      • CLI Arguments
    • data_stats
      • Examples
      • CLI Arguments
    • detect_offensive
      • Examples
      • CLI Arguments
    • eval_wordstat
      • Examples
      • CLI Arguments
    • extract_image_feature
      • Examples
      • CLI Arguments
    • flask
      • Examples
      • CLI Arguments
    • interactive_web
      • Examples
      • CLI Arguments
    • multiprocessing_eval
      • Examples
      • CLI Arguments
    • multiprocessing_train
      • Examples
      • CLI Arguments
    • party
      • Examples
      • CLI Arguments
    • profile_interactive
      • CLI Arguments
    • profile_train
      • Examples
      • CLI Arguments
    • token_stats
      • CLI Arguments
    • torchscript
      • CLI Arguments
    • vacuum
      • CLI Arguments
    • verify_data
      • Examples
      • CLI Arguments
  • Writing Your Own Script
    • Custom ParlAI Script
      • ParlaiScript
        • setup_args
        • run
      • Registering a Script
      • Running a script
        • Command Line
        • Import and Run with Args
        • Import and Run with Kwargs
  • Opt Presets
    • List of presets

API Reference

  • parlai.chat_service
    • Chat Service Core
      • parlai.chat_service.core.agents
      • parlai.chat_service.core.chat_service_manager
      • parlai.chat_service.core.socket
      • parlai.chat_service.core.world_runner
    • Services
      • Browser Chat
        • parlai.chat_service.services.browser_chat.agents
        • parlai.chat_service.services.browser_chat.browser_manager
      • Messenger
        • parlai.chat_service.services.messenger.agents
        • parlai.chat_service.services.messenger.message_sender
        • parlai.chat_service.services.messenger.messenger_manager
        • parlai.chat_service.services.messenger.worlds
      • Terminal Chat
        • parlai.chat_service.services.terminal_chat.agents
        • parlai.chat_service.services.terminal_chat.terminal_manager
      • Websocket
        • parlai.chat_service.services.websocket.agents
        • parlai.chat_service.services.websocket.sockets
        • parlai.chat_service.services.websocket.websocket_manager
    • Utilities
      • parlai.chat_service.utils.config
      • parlai.chat_service.utils.image
      • parlai.chat_service.utils.logging
      • parlai.chat_service.utils.misc
      • parlai.chat_service.utils.server
      • parlai.chat_service.utils.timeout
  • parlai.core
    • parlai.core.agents
    • parlai.core.build_data
    • parlai.core.dict
    • parlai.core.loader
    • parlai.core.message
      • text
      • id
      • labels
      • eval_labels
      • label_candidates
      • text_candidates
      • episode_done
      • reward
      • image
      • extended fields
    • parlai.core.metrics
    • parlai.core.mutators
    • parlai.core.opt
    • parlai.core.params
    • parlai.core.script
    • parlai.core.teachers
    • parlai.core.torch_agent
      • Torch Agent
      • Torch Generator Agent
      • Torch Ranker Agent
      • Torch Classifier Agent
      • Torch Image Agent
    • parlai.core.worlds
  • parlai.utils
    • parlai.utils.bpe
    • parlai.utils.conversations
    • parlai.utils.data
    • parlai.utils.distributed
    • parlai.utils.fp16
    • parlai.utils.logging
    • parlai.utils.misc
    • parlai.utils.pickle
    • parlai.utils.safety
    • parlai.utils.strings
    • parlai.utils.testing
    • parlai.utils.torch
    • parlai.utils.typing
    • parlai.utils.world_logging

© Copyright 2023, Facebook AI Research

Built with Sphinx using a theme provided by Read the Docs.