Due to time constraints, we are first releasing the key components of our code; the full code will be released soon.



### Step 1: run our model on the LS task
	- gen_candidates.py
		This script generates the top-k substitute candidates for a given target word and its context.
		Before running it, set the model and structure you want to use.
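The core of candidate generation is ranking the model's output distribution over the vocabulary and excluding the target itself. A minimal sketch of that ranking step (the function name `top_k_candidates` and the plain-list inputs are illustrative, not the actual interface of gen_candidates.py):

```python
import math

def top_k_candidates(logits, vocab, target, k=10):
    """Return the k most probable substitutes for `target`,
    excluding the target itself, given model logits over `vocab`."""
    # numerically stable softmax over the vocabulary logits
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    probs = [e / z for e in exps]
    # rank words by probability and drop the target word itself
    ranked = sorted(zip(vocab, probs), key=lambda wp: wp[1], reverse=True)
    return [(w, p) for w, p in ranked if w != target][:k]
```

In practice the logits would come from a pretrained language model scoring the target position in context; this sketch only shows the selection logic.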


### Step 2: basic word processing on the predicted candidates
	- candidates_filter.py:
		This script performs detailed word processing on the candidates, including lemmatization, part-of-speech rechecking, etc.
		After filtering, we can either directly output the remaining candidates (sorted by their probability), as in the
		SC-GR models, or re-rank the candidates following the validation process.
	
	./utils/
		The code in this folder is from Arefyev et al. (2020); thanks for their work and the code released at https://github.com/Samsung/LexSubGen
		Arefyev et al., "Always Keep your Target in Mind: Studying Semantics and Improving Performance of Neural Lexical Substitution".
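A typical lemma-based filtering pass drops candidates that are mere inflections of the target and merges duplicate lemmas. A sketch of that idea, assuming a `lemmatize` callable is supplied (e.g. from NLTK); the function name `filter_candidates` is illustrative and not the actual interface of candidates_filter.py:

```python
def filter_candidates(candidates, target, lemmatize):
    """Drop candidates whose lemma equals the target's lemma and merge
    duplicate lemmas, keeping the highest-probability surface form.
    `candidates` is a list of (word, probability) pairs."""
    target_lemma = lemmatize(target)
    best = {}
    for word, prob in candidates:
        lemma = lemmatize(word)
        if lemma == target_lemma:
            continue  # an inflection of the target is not a substitute
        if lemma not in best or prob > best[lemma][1]:
            best[lemma] = (word, prob)
    # return surviving candidates sorted by probability, highest first
    return sorted(best.values(), key=lambda wp: wp[1], reverse=True)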


### Step 3: re-score the candidates
	This step corresponds to the (+valid) process proposed in the paper https://aclanthology.org/P19-1328/
	We will release our implementation soon.
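The cited paper validates a substitute by checking how much replacing the target changes the sentence's contextual representation. A rough sketch of that re-scoring idea, with an injected `embed` function standing in for a real sentence encoder (the function name `rescore` and the linear mixing weight `alpha` are assumptions, not the paper's exact formulation):

```python
import math

def rescore(sentence, target, candidates, embed, alpha=0.5):
    """Re-rank (word, prob) candidates by mixing the model probability
    with the cosine similarity between the original sentence embedding
    and the embedding of the sentence with the target substituted."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb) if na and nb else 0.0
    base = embed(sentence)
    rescored = []
    for word, prob in candidates:
        sim = cos(base, embed(sentence.replace(target, word, 1)))
        rescored.append((word, alpha * prob + (1 - alpha) * sim))
    return sorted(rescored, key=lambda wp: wp[1], reverse=True)
```

A substitute that preserves the sentence meaning keeps the embedding close to the original, so it rises in the ranking even if its raw probability was lower.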


### Step 4: evaluate LS output on the benchmarks
	Please see the uploaded data folder, where we show our model's output on the LS07/LS14 benchmarks and the steps to evaluate the model with the publicly released bash file.
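For reference, LS07 gold annotations follow the SemEval-2007 format, e.g. `bright.a 1 :: intelligent 3;clever 3;`. A small sketch of parsing such a line and computing a simple precision@1 over predictions (the helper names are illustrative; the official bash scorer computes the full best/oot metrics):

```python
def parse_gold_line(line):
    """Parse one SemEval-2007-style gold line into
    (target, instance_id, {substitute: weight})."""
    head, body = line.split("::")
    target, instance_id = head.split()
    gold = {}
    for entry in body.strip().rstrip(";").split(";"):
        if entry.strip():
            # substitutes may be multi-word, so split off only the weight
            *words, weight = entry.strip().rsplit(" ", 1)
            gold[" ".join(words)] = int(weight)
    return target, instance_id, gold

def precision_at_1(predictions, gold_sets):
    """Fraction of instances whose top-ranked candidate appears in gold.
    `predictions` maps instance id -> ranked candidate list."""
    hits = sum(1 for iid, cands in predictions.items()
               if cands and cands[0] in gold_sets.get(iid, {}))
    return hits / len(predictions)
```

This is only a sanity-check metric; for reported numbers, use the official evaluation scripts mentioned in the data folder.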
