Step1: Please download the widely used NYT10 dataset from:

       https://drive.google.com/file/d/1kuqaiebhNnatccUB4aLVSrX1UFuNMabZ/view?usp=sharing
       
Step2: Please unzip nyt.zip and move the files from the nyt directory into data/nyt

Step3: Please run the following code for training our model on the NYT dataset:
(Due to storage limitations, no pretrained KG encoder is included here; the KG encoder is therefore randomly initialized.)

     cd code_xbe/xbe
     bash train_xbe_emnlp.sh

Step4: Please run the following code for testing the trained model:
       cd code_xbe/xbe
       bash test_xbe_emnlp.sh

Please see our GitHub page https://github.com/cl-tohoku/xbe for more details.


Dependencies:

	pytorch-ignite==0.4.5
	torch==1.9.0
	transformers==4.12.2
	scipy==1.8.0
	pytorch-lightning==1.1.1
	sentencepiece==0.1.94
	tqdm
	scikit-learn
	apex
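The pinned dependencies above can be installed in one step; note that NVIDIA Apex is normally built from source rather than installed from PyPI, so it is handled separately below (the commented clone-and-install line is the usual procedure, not something specified in this repository):

```shell
# Install the pinned Python dependencies (versions from the list above;
# the scikit-learn package provides the sklearn module)
pip install pytorch-ignite==0.4.5 torch==1.9.0 transformers==4.12.2 \
    scipy==1.8.0 pytorch-lightning==1.1.1 sentencepiece==0.1.94 \
    tqdm scikit-learn

# NVIDIA Apex is typically built from source:
# git clone https://github.com/NVIDIA/apex && cd apex && pip install -v .
```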


Acknowledgement:
Our implementation is based on the code of the paper "How Knowledge Graph and Attention Help? A Quantitative Analysis into Bag-level Relation Extraction" (Hu et al., 2021). Special thanks for their work.
