Comments:
Nice. Can you do a video on how this might be related to reinforcement learning?
This helped me a lot 🤗
Very good explanation, also love how the Vsauce music pops in for a split second when you say "...or are they?" lol
Very good high-level explanation.
Thank you, Boris. Such a clear explanation!
So if I stack an autoregressive LLM on top of a self-supervised LLM (BERT), would it understand the context of the text better?
Best explanation on the topic I've seen or read. Thanks so much.
Great video! I'm taking a deep learning class right now, and this video really helped me understand the idea of self-supervised learning!
Well explained, without going into unnecessary complexities (details) the way traditional professors in class or textbooks do.
Thanks a lot. Continue making videos like these.
Amazing explanation, thank you.
I was not able to get my head around pretext tasks for a long time, and then I found this video. Thanks, man.
Great video! Helps a lot with my research.
Well explained, thx.
Please remove the music; I had to stop 4 times to avoid stress... If your speech is interesting enough, and I think it is, why muddy it with noise?
You explain complex concepts really well! Thanks!
What is this background? It's giving me a headache.
Very well explained, in a simple manner. Thanks, Boris.
By far the best explanation.
A good introduction, thank you! However, in practice we have multiple examples (images) of the same class. Does the model try to maximize the distance between these examples? How does that work?
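Since the question above comes up often with contrastive self-supervised methods, here is a minimal sketch of an NT-Xent (SimCLR-style) contrastive loss, assuming that is the kind of objective being asked about; the function name and shapes are illustrative, not from the video. Without labels, the loss contrasts instances rather than classes: the two augmented views of each image are pulled together as positives, while every other image in the batch is pushed away as a negative, so two images of the same class are only separated insofar as they are different instances.

```python
# Minimal NT-Xent-style contrastive loss sketch (NumPy only; illustrative).
# Positives are two augmented views of the same image; all other images in
# the batch act as negatives. Plain contrastive pretraining has no label
# information, so it never explicitly maximizes distance within a class.
import numpy as np

def nt_xent(z1: np.ndarray, z2: np.ndarray, temperature: float = 0.5) -> float:
    """z1, z2: (batch, dim) embeddings of two views of the same batch."""
    z = np.concatenate([z1, z2], axis=0)              # (2B, dim)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize rows
    sim = z @ z.T / temperature                       # scaled cosine similarity
    np.fill_diagonal(sim, -np.inf)                    # exclude self-pairs
    b = z1.shape[0]
    # For row i, the "correct class" is the index of the other view.
    targets = np.concatenate([np.arange(b, 2 * b), np.arange(0, b)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return float(-log_prob[np.arange(2 * b), targets].mean())

# Usage with random stand-in embeddings (in practice: encoder(aug(x))):
rng = np.random.default_rng(0)
z1, z2 = rng.normal(size=(8, 128)), rng.normal(size=(8, 128))
print(nt_xent(z1, z2))
```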
What machine learning domain would you like to explore next?