Thomas Packer, Ph.D.
1 min read · Feb 18, 2020


Thanks for the great description of this interesting project.

I’d like to see details on how you (or anyone else) applied USE. Did you have to fine-tune it? Did you build and train a classifier on top of the existing USE module from TF-Hub? Did you consider retraining the SentencePiece tokenizer to better fit your text? Could you share any code showing how to use SentencePiece tokenization as part of a USE-based model trained on custom data? Thanks.


Written by Thomas Packer, Ph.D.

I do data science (QU, NLP, conversational AI). I write applicable-allegorical fiction. I draw pictures. I have a PhD in computer science and I love my family.
