Requirements
•  nltk==3.6.2
•  numpy==1.19.5
•  pandas==1.1.5
•  protobuf==3.18.1
•  rouge==1.0.1
•  rouge-score==0.0.4
•  scikit-learn==0.24.2
•  scipy==1.5.4
•  sentence-transformers==2.1.0
•  sentencepiece==0.1.96
•  sklearn==0.0
•  tokenizers==0.10.3
•  torch==1.10.0
•  torchtext==0.11.0
•  torchvision==0.11.1
•  tqdm==4.60.0
•  transformers==4.10.0


Data Format
Text features: JSON
{
    "d_id": {
        "episode_name": title,
        "target_speaker": speaker1,
        "target_utterance": t_utt,
        "context_speakers": [sp_a, ..., sp_b],
        "context_utterances": [utt_a, ..., utt_b],
        "code_mixed_explanation": cmt_exp,
        "sarcasm_target": t_sp,
        "start_time": start_time,
        "end_time": end_time
    },
    ...
}
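A minimal sketch of reading and validating a text-feature file in this schema. The field values below are illustrative placeholders, not real data; the file name mirrors the sample files listed later.

```python
import json

# Build one illustrative entry following the text-feature schema above
# (placeholder values only), write it out, and read it back.
sample = {
    "d_1": {
        "episode_name": "Episode 1",
        "target_speaker": "SPEAKER_A",
        "target_utterance": "Oh, great job!",
        "context_speakers": ["SPEAKER_B", "SPEAKER_A"],
        "context_utterances": ["I dropped the cake.", "Again?"],
        "code_mixed_explanation": "SPEAKER_A taunts SPEAKER_B ...",
        "sarcasm_target": "SPEAKER_B",
        "start_time": 12.5,
        "end_time": 15.0,
    }
}

with open("train_text_sample.json", "w") as f:
    json.dump(sample, f, indent=2)

with open("train_text_sample.json") as f:
    data = json.load(f)

# Each dialogue id maps to one record; speakers and utterances are aligned.
for d_id, entry in data.items():
    assert len(entry["context_speakers"]) == len(entry["context_utterances"])
    print(d_id, entry["target_utterance"])
```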

Audio features: DataFrame
| episode_name | target_speaker | target_utterance | context_speakers | context_utterances | sarcasm_target | code_mixed_explanation | start_time | end_time | audio_feats |

Video features: DataFrame
| episode_name | target_speaker | target_utterance | context_speakers | context_utterances | sarcasm_target | code_mixed_explanation | start_time | end_time | video_feats |
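The audio/video pickles are pandas DataFrames whose rows mirror the JSON fields plus a per-row feature array. A minimal round-trip sketch follows; the feature dimension (128) and all field values are placeholders, not the real extractor sizes:

```python
import numpy as np
import pandas as pd

# Expected column order for the audio-feature DataFrame.
cols = ["episode_name", "target_speaker", "target_utterance",
        "context_speakers", "context_utterances", "sarcasm_target",
        "code_mixed_explanation", "start_time", "end_time", "audio_feats"]

# One illustrative row; "audio_feats" holds a per-utterance feature vector.
df = pd.DataFrame([{
    "episode_name": "Episode 1",
    "target_speaker": "SPEAKER_A",
    "target_utterance": "Oh, great job!",
    "context_speakers": ["SPEAKER_B"],
    "context_utterances": ["I dropped the cake."],
    "sarcasm_target": "SPEAKER_B",
    "code_mixed_explanation": "...",
    "start_time": 12.5,
    "end_time": 15.0,
    "audio_feats": np.zeros(128, dtype=np.float32),  # placeholder dimension
}], columns=cols)

# Save and reload in the '.p' (pickle) format used by the sample files.
df.to_pickle("train_audio_sample.p")
loaded = pd.read_pickle("train_audio_sample.p")
print(loaded.columns.tolist())
print(loaded.loc[0, "audio_feats"].shape)
```

The video DataFrame has the same layout with `video_feats` in place of `audio_feats`.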

Training and Evaluation
•  Place the text, audio, and video feature files (in the formats described above) in the 'Data' folder, organized as follows:
    - Data
        - Text
            - train_text_sample.json
            - val_text_sample.json
            - test_text_sample.json
        - Audio
            - train_audio_sample.p
            - val_audio_sample.p
            - test_audio_sample.p
        - Video
            - train_video_sample.p
            - val_video_sample.p
            - test_video_sample.p
•  Execution
    - Go to the 'Code' directory and run 'python Trimodal-BART-driver-final.py'.
Models will be saved in the 'models' directory, and generated explanations for the val and test sets will be saved in the 'results' folder.
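Before launching training, it can help to verify that the 'Data' folder matches the expected layout. A small sketch (it creates empty placeholder files here so the check passes; in practice the real feature files would already be in place):

```python
from pathlib import Path

# Expected layout of the 'Data' folder, as described above.
layout = {
    "Text": ["train_text_sample.json", "val_text_sample.json", "test_text_sample.json"],
    "Audio": ["train_audio_sample.p", "val_audio_sample.p", "test_audio_sample.p"],
    "Video": ["train_video_sample.p", "val_video_sample.p", "test_video_sample.p"],
}

root = Path("Data")
for sub, files in layout.items():
    (root / sub).mkdir(parents=True, exist_ok=True)
    for name in files:
        (root / sub / name).touch()  # placeholder; real files go here

# Report any expected file that is absent.
missing = [str(root / sub / name)
           for sub, files in layout.items()
           for name in files
           if not (root / sub / name).exists()]
print("missing:", missing)
```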
NOTE: Only partial data is provided at this time. The complete dataset will be released subject to paper acceptance.