NYU NLP/Text-as-Data Speaker: Tatsunori Hashimoto, 10/22
He He
12:44
If you have questions, please raise your hand and we will unmute you.
Richard Pang
48:38
How about simply using (x, z) as input to the language model, where z is a topic embedding, instead of x only?
Richard Pang
50:30
(Actually similar to Cho’s question)
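A minimal sketch of the (x, z) idea in the question above: condition the language model on a topic embedding z by prepending a learned projection of z to the token embeddings of x. This is only one way to realize the suggestion, and all class, method, and dimension names here are hypothetical, not from the talk.

```python
import torch
import torch.nn as nn

class TopicConditionedLM(nn.Module):
    """Condition an LM on (x, z) by prefixing a projected topic token."""

    def __init__(self, base_lm: nn.Module, topic_dim: int, embed_dim: int):
        super().__init__()
        self.base_lm = base_lm  # assumed to accept input embeddings directly
        self.topic_proj = nn.Linear(topic_dim, embed_dim)

    def forward(self, token_embeds: torch.Tensor, z: torch.Tensor):
        # token_embeds: (batch, seq_len, embed_dim); z: (batch, topic_dim)
        topic_token = self.topic_proj(z).unsqueeze(1)  # (batch, 1, embed_dim)
        # Prepend the topic token so every position can attend to z.
        return self.base_lm(torch.cat([topic_token, token_embeds], dim=1))
```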
Richard Pang
01:20:59
Would truncating losses decrease the density assigned to diverse *but still faithful* tokens/phrases?
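For context on the loss-truncation question, here is a minimal sketch in the spirit of Kang & Hashimoto's loss truncation (ACL 2020): drop the highest-loss fraction of examples in each batch so that noisy, unfaithful references stop dominating the gradient. The paper's actual estimator and dropping schedule differ; `drop_frac` is an illustrative parameter.

```python
import torch

def truncated_loss(per_example_loss: torch.Tensor, drop_frac: float = 0.1):
    """Zero out the top `drop_frac` highest-loss examples in a batch."""
    # Threshold at the (1 - drop_frac) quantile; detach so the threshold
    # itself carries no gradient.
    threshold = torch.quantile(per_example_loss.detach(), 1.0 - drop_frac)
    keep = (per_example_loss.detach() <= threshold).float()
    # Average only over the examples we keep.
    return (per_example_loss * keep).sum() / keep.sum().clamp(min=1.0)

# Example: per-sequence NLLs; the 7.5 outlier is dropped at drop_frac=0.25.
losses = torch.tensor([0.9, 1.1, 0.8, 7.5])
loss = truncated_loss(losses, drop_frac=0.25)
```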
Richard Pang
01:22:15
Thanks!