Root
├── data
│   └── (all datasets go to this folder)
├── pipeline
│   ├── DialogueStateManager.py
│   ├── EmotionClassifier.py
│   ├── IntentClassifier.py
│   └── ResponseGenerator.py
├── models
│   ├── emotions
│   │   └── (emotion classification model)
│   ├── intents
│   │   └── (intent classification model)
│   └── gpt2
│       └── (response generator model)
├── pipeline.py (main entry point of the dialogue system)
├── train_emotion.py (for training the emotion classifier)
├── train_intent.py (for training the intent classifier)
├── train_gpt2.py (for training the response generator)
├── check_maxlength.py (for checking the max length of the final prompt)
└── dataset.py (consolidated dataset handling for all training scripts)
For dataset links, please refer to the links in the submitted Final Report.
Models link: link
- Install Anaconda on the machine and create an environment. Make sure CUDA and the GPU drivers are properly installed if you want to use a GPU for fine-tuning or for inference in the pipeline.
  conda create --name <env_name> python=3.9
- Download the repo from GitHub and all the models from the cloud drive.
- Replace the three paths to the model directories for the three models in pipeline.py > load_models().
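As a rough illustration of this step, the three paths edited in load_models() correspond to the models folder in the tree above. The sketch below is hypothetical: the variable names and dictionary layout are assumptions, and the actual code in pipeline.py may organize its paths differently.

```python
# Hypothetical layout of the three model paths edited in load_models().
# The names below are illustrative only; pipeline.py may use different
# variables. Point each entry at your local model directory.
MODEL_PATHS = {
    "emotion": "models/emotions",  # emotion classification model
    "intent": "models/intents",    # intent classification model
    "gpt2": "models/gpt2",         # response generator (GPT-2) model
}

def resolve_model_dir(name: str) -> str:
    """Return the configured directory for one of the three models."""
    return MODEL_PATHS[name]
```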
- Replace the path to the persona dataset with the location where it is stored.
- Go to the root directory of the system:
  cd <path_to_root_dir_of_repo>
- Install all necessary requirements:
  pip install -r requirements.txt
  python -m spacy download en_core_web_sm
- Run the system:
  python pipeline.py
  After the chatbot is prepared as shown in the console, start talking with the chatbot.
- Anytime you want to end the conversation, type exit in the conversation to signal the system to end the process.
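The console loop described above can be sketched as follows. This is a minimal illustration, not the actual pipeline.py code; generate_response stands in for the real response-generation pipeline, and the prompt text is an assumption.

```python
def wants_exit(user_input: str) -> bool:
    """True when the user typed the exit keyword (case-insensitive)."""
    return user_input.strip().lower() == "exit"

def chat_loop(generate_response, read_input=input, write=print):
    """Minimal console loop: read a line, reply, stop on 'exit'."""
    write("Chatbot is ready. Type 'exit' to end the conversation.")
    while True:
        user_input = read_input()
        if wants_exit(user_input):
            write("Goodbye!")
            break
        write(generate_response(user_input))
```

Injecting read_input and write makes the loop easy to drive from tests or a different front end without touching the exit logic.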