Feb 18, 2020
Thanks for the great description of this interesting project.
I’d like to see more detail on how you (or anyone else) applied USE. Did you have to fine-tune it? Did you build and train a classifier on top of the existing USE module from TF Hub (something like the sketch below)? Did you consider retraining the SentencePiece tokenizer to fit your text better? Any code showing how to use SentencePiece tokenization as part of a USE-based model trained on custom data would be much appreciated. Thanks.
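To be concrete about what I’m picturing: a minimal sketch of a small classifier head on top of the pre-trained USE module from TF Hub, using tf.keras. The module URL, layer sizes, and class count here are my own assumptions, not anything from your post:

```python
import tensorflow as tf
import tensorflow_hub as hub

# Pre-trained USE from TF Hub; set trainable=True to also fine-tune the
# encoder weights instead of training only the classifier head.
encoder = hub.KerasLayer(
    "https://tfhub.dev/google/universal-sentence-encoder/4",  # assumed module
    input_shape=[],    # batch of raw strings; this USE variant tokenizes internally
    dtype=tf.string,
    trainable=False,
)

# Small classifier head on top of the 512-dim sentence embeddings.
model = tf.keras.Sequential([
    encoder,
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),  # assuming two classes
])

model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# model.fit(train_texts, train_labels, epochs=3)  # raw strings in, integer labels out
```

(For the SentencePiece part of the question, I believe it’s the lite and multilingual USE variants that involve SentencePiece tokenization; the sketch above uses the standard module, which accepts raw strings directly.)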