Download the tmu.zip archive to your local computer.

Extract the archive and change into the extracted folder.

===================
Installation of TMU
===================

$ pip install -e .
__________________________________________

================
Run the Demo
================

Go to the folder ./tmu/examples/autoencoder/ and run:

$ python IMDbWordEmbeddingDemo.py
_________________________________________

========
Datasets
========
Billion Word Dataset:
    https://www.kaggle.com/datasets/alexrenz/one-billion-words-benchmark
    https://github.com/ciprian-chelba/1-billion-word-language-modeling-benchmark

Similarity Dataset:
    https://github.com/vecto-ai/word-benchmarks/tree/master/word-similarity/monolingual/en

________________________________________

=========================
Training TM for Embedding
=========================
Go to the main folder ./ and run the preprocessing script for your task:

$ python dataset_preprocessing_similarity.py    (similarity task)
$ python dataset_preprocessing_category.py      (categorization task)

# Saves the words in a pickle file (used by training.py)

$ python training.py

# Saves the embeddings to a folder
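As a rough illustration of the pickle round trip used between the preprocessing, training, and evaluation steps (the file name and vocabulary below are hypothetical, not the ones training.py actually writes):

```python
import pickle
import numpy as np

# Hypothetical tiny vocabulary mapped to embedding vectors.
embeddings = {
    "movie": np.array([0.1, 0.4, 0.3]),
    "film": np.array([0.2, 0.4, 0.2]),
}

# Save the embeddings to disk (the file name is illustrative only).
with open("tm_embeddings.pkl", "wb") as f:
    pickle.dump(embeddings, f)

# Reload them later, e.g. inside an evaluation script.
with open("tm_embeddings.pkl", "rb") as f:
    loaded = pickle.load(f)
```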

__________________________________________

=====================
Intrinsic Evaluation
=====================

For the similarity and categorization tasks, run:

$ python evaluation.py
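A standard intrinsic metric for the similarity task (not necessarily exactly what evaluation.py computes) is the Spearman correlation between human similarity ratings and the cosine similarity of the embedding vectors. A minimal sketch with toy data; real pairs and ratings come from the word-similarity datasets linked above:

```python
import numpy as np
from scipy.stats import spearmanr

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy embeddings and human ratings (illustrative values only).
emb = {
    "cat": np.array([1.0, 0.2, 0.1]),
    "dog": np.array([0.9, 0.3, 0.1]),
    "car": np.array([0.1, 0.9, 0.8]),
}
pairs = [("cat", "dog", 9.0), ("cat", "car", 2.0), ("dog", "car", 2.5)]

model_scores = [cosine(emb[a], emb[b]) for a, b, _ in pairs]
human_scores = [s for _, _, s in pairs]

# Rank correlation between model similarities and human judgments.
rho, _ = spearmanr(model_scores, human_scores)
print(rho)
```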

_________________________________________

=====================
Extrinsic Evaluation
=====================

Download the GloVe embeddings from https://nlp.stanford.edu/projects/glove/

$ python BiLSTM_glove.py
$ python BiLSTM_TM.py
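GloVe files are plain text: one word per line followed by its vector components, separated by spaces. A minimal loader, independent of how BiLSTM_glove.py actually reads them:

```python
import numpy as np

def load_glove(path):
    """Parse a GloVe text file into a {word: vector} dict."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.array(parts[1:], dtype=np.float32)
    return vectors

# Tiny demonstration file in the same format as glove.6B.*.txt.
with open("mini_glove.txt", "w", encoding="utf-8") as f:
    f.write("the 0.1 0.2 0.3\n")
    f.write("movie 0.4 0.5 0.6\n")

glove = load_glove("mini_glove.txt")
```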

-------FIN-------------
