Comments:
I have to say, you are very talented at teaching very complex topics. Thank you so much, MIT, for choosing such a brilliant presenter.
Academic by day... bouncer by night
Really cool course! Hi Lex, why does only this fourth lecture have no subtitles (the other four lectures do)? Could you please upload them? Thank you.
Am I the only person who thought the video compression makes his shadow look like a low-resolution shadow map...? Awesome content, great for getting into ML!
A quick question regarding LSTMs: why do we need a separate way of saying "this information isn't important, I don't want to update my weights"? Doesn't gradient descent already take care of this? That is, if we happen to see a feature that is unimportant, won't we compute low gradients, thus telling us we don't need to move the weight by much? Why doesn't that work here?
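Not an official answer, just a minimal sketch I put together (variable names and shapes are made up) of what an LSTM forget gate computes. The gate is a learned, input-dependent function evaluated at every time step and multiplied into the cell state, which is a different mechanism from a single weight simply ending up small after gradient descent.

import numpy as np

def forget_gate(x_t, h_prev, W_f, U_f, b_f):
    # Sigmoid of the current input and previous hidden state; the result lies
    # in (0, 1) and scales the cell state elementwise at this time step.
    z = W_f @ x_t + U_f @ h_prev + b_f
    return 1.0 / (1.0 + np.exp(-z))

# Toy sizes: 3-dimensional input, 4-dimensional hidden/cell state.
rng = np.random.default_rng(0)
x_t, h_prev, c_prev = rng.normal(size=3), rng.normal(size=4), rng.normal(size=4)
W_f, U_f, b_f = rng.normal(size=(4, 3)), rng.normal(size=(4, 4)), np.zeros(4)
f_t = forget_gate(x_t, h_prev, W_f, U_f, b_f)
c_t_partial = f_t * c_prev  # the "forget" part of the cell-state update
print(f_t)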
Thanks for the course. I learned a lot from it. Thanks!
Great course with a huge amount of content in it. I am curious whether the guest talks are available too.
Good video!
Hey, Lex. Really great video! But as English is not my mother tongue, it's sometimes difficult to understand the video very well. It would be nice if you could turn on the CC subtitle option, thanks!
Why are there no subtitles for this video?!
Great video, but I wish there were more math and a more thorough explanation of BPTT and the vanishing gradient problem.
Hello, thanks for uploading these lectures! Can LSTM networks integrate symbolic constructs in natural language learning? Can they help computers understand the relationship between language structure and the real world? For example, if I ask "Why is it only raining outside?", it should know that the roof stops the rain from falling inside. I have a feeling that we are mostly teaching the algorithm to interact with us in some kind of smart language simulation, but at its core it doesn't really understand the meaning of and relationships between words. Do you know of any online references on this?
Some remarks, hopefully helpful for the audience:
1. "You need a lot of data." It depends. A lot of unlabeled data helps you model the world; then you need very little supervised data. Easy problems require little data; hard or badly defined tasks require a lot. You can always pick an easier-to-solve proxy objective and use data augmentation.
2. RNNs handle dynamic lengths; hard-set sequence lengths are for speed. Sentences come in different lengths, so you can't create batches unless you set a hard sequence length and train same-length sentences together in a batch, or fill up sentences that are too short with padding (see the sketch after this comment). If you batch sentences, you can compute on them in parallel. However, if you are trying to predict relations between consecutive sentences, batching/parallelization does not update the weights after each sentence but on all of them at once, making it near impossible to learn inter-sentence (between) relations while still allowing the net to learn intra-sentence (within) relations.
Tip: read Karpathy's blog on RNNs, not the Colah one. Karpathy's is more detailed, allowing you to really grasp what an RNN does. An LSTM is "just" an RNN with attention/gating.
Hope this helps, even if some concepts are very high level.
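Since point 2 above mentions padding, here is a minimal sketch of the idea (my own toy example with made-up token IDs, not from the lecture): variable-length sequences are padded to a common length so they can be stacked into one batch and processed in parallel, with a mask marking the real tokens.

import numpy as np

PAD_ID = 0  # hypothetical ID reserved for padding

def pad_batch(sequences, pad_id=PAD_ID):
    # Pad variable-length token-ID sequences to the length of the longest one,
    # and return a boolean mask marking the real (non-padding) positions.
    max_len = max(len(seq) for seq in sequences)
    batch = np.full((len(sequences), max_len), pad_id, dtype=np.int64)
    mask = np.zeros((len(sequences), max_len), dtype=bool)
    for i, seq in enumerate(sequences):
        batch[i, :len(seq)] = seq
        mask[i, :len(seq)] = True
    return batch, mask

# Three "sentences" of different lengths, as made-up token IDs.
sentences = [[5, 7, 2], [9, 4], [3, 8, 6, 1]]
batch, mask = pad_batch(sentences)
print(batch)  # shape (3, 4); shorter rows are filled with PAD_ID
print(mask)   # True where a real token is present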
Please answer me: what do I need to know to create my own Python deep learning framework? Which books and courses would give me the knowledge for this?
"If you enjoyed the video," he said! Maybe you should rethink what you say.
It is extremely concerning that these students are not expected to know calculus cold. There is no such thing as "but I understand the concepts." You use basic technical skill to check your understanding of concepts, so without knowing your ABCs, you will tend to convince yourself of things that aren't true. There is a lot of democratizing technology out there now where you don't need to know what's going on under the hood, but without at least some knowledge, all you will be able to do is press buttons and make graphs.
amazing lecture
Brilliant
Ooh I recognize what's on the blackboard! It's the spherical coordinate system...
Vanilla is also what we call squares; people who prefer the missionary position are vanilla, lol, just saying.
Is it that most explanations given for RNNs are top-down and most explanations for CNNs are bottom-up?
I love this! <3
Thanks so much for this, Lex. Your lecture was how I finally understood how RNNs work, and it helped me successfully complete my university thesis back in 2017. It's funny how I came across you again through Joe Rogan and your podcast and figured out it's the same dude who helped me through college. I hope you get to be the one who builds robots better than anybody else in the world.
Awesome!
I think that before introducing backprop, it is a good idea to start with the forward mode.
I've never listened to anyone before without understanding anything at all. It's fascinating for me, watching with zero understanding. I'm literally just listening to his words... 😂
This presentation was bad. 1000 thumbs up... for what?
Bruh, I have no idea what he's talking about, but I'm somehow interested.
I still prefer the LSTM (for accuracy) or the GRU (for speed) over the Transformer architecture, for both their ability to learn long-term dependencies and their simplicity.
Where is that sky view 360?
Can someone tell me why those vanishing or exploding gradients occur? Since I am such a dumb guy, I want to relate it to nature.
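For intuition on the question above (a toy illustration of my own, not from the lecture): backpropagation through time multiplies many per-step factors together, so a factor slightly below 1 shrinks the gradient toward zero over a long sequence, while a factor slightly above 1 blows it up.

# Repeated multiplication over 100 "time steps":
# a per-step factor below 1 vanishes, above 1 it explodes.
steps = 100
for factor in (0.9, 1.1):
    grad = 1.0
    for _ in range(steps):
        grad *= factor
    print(f"factor={factor}: gradient scale after {steps} steps = {grad:.3e}")
# Prints roughly 2.7e-05 for 0.9 (vanishing) and 1.4e+04 for 1.1 (exploding).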
You look very cute compared to 2022 here 🤩
Parameter tuning can't be taught? But it can be learned? I wonder if that would be a useful thing to apply ML to?
Beautiful, just beautiful.
So you are familiar with vanilla... of course...
This gave me so much understanding. Thank you for uploading!
How you can talk about something in complex detail yet sound like an illiterate r*tard is amazing, Mr. Friedman.
Is there a playlist for the lectures leading up to this?
Amazing lecture
God DAMN, he was chunky. Really came a long way.
My god, Lex is so lost in this lecture. It's almost like he forgot what he wanted to say when building the presentation.
No wonder he took up podcasting. He is very confused by simple calculus.
I've been following your videos but never knew you are/were a tutor/lecturer... I am going to enjoy this.