Comments:
Great explanation ❤
Could have been better; most of it was speculative when it came to application building, not to mention the laws governing it.
Great video presentation! Martin Keen delivers a superbly layman-friendly elucidation of what is otherwise very "high-tech talk" to people like me who do not come from a tech-based professional background. This type of content is highly appreciated and in fact motivates further learning on these subjects. Thank you IBM, Mr. Keen, and team. Cheers to you all from Sri Lanka.
Can subsequent SFT and RLHF with different, additional, or reduced content change the character of a GPT model, improve it, or degrade it?
Isn't it just using the most likely thing that humans defined, and patterns of what's most expected based on how humans interact and the info put in? That's not complicated. How do they not understand how it works?
Very nice explanation, short and to the point, without getting bogged down in detail that is often misunderstood. I will share this with others.
You could have finished the video by saying that an LLM like ChatGPT could have produced the entire explanation for this video. (I think you hinted at the same.)
The term "large" does not refer to large data; to be precise, it is the number of parameters that is large. So, a slight correction.
How is he writing backwards?
How does ChatGPT know about itself and its own behavior? If you ask questions about those topics, it will answer intelligently and accurately about itself and its own behavior. It will not just spout random patterns from the internet. How does it know this?
What is a quantized version of a model, and how is it created?
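(Editor's note: a minimal sketch of the idea behind the question above. Post-training quantization maps float32 weights to low-bit integers plus a scale factor; production schemes such as the block-wise formats in llama.cpp or bitsandbytes are more elaborate, but the core mechanism is the same. All function names here are illustrative, not from any particular library.)

```python
def quantize_int8(weights):
    """Symmetric per-tensor quantization: float weights -> int8 codes + scale.

    The largest absolute weight is mapped to 127, so every quantized
    value fits in a signed 8-bit integer (1 byte instead of 4).
    """
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize(codes, scale):
    """Recover approximate float weights from int8 codes and the scale."""
    return [c * scale for c in codes]

weights = [0.42, -1.27, 0.003, 0.9]
codes, scale = quantize_int8(weights)   # codes fit in int8: [42, -127, 0, 90]
restored = dequantize(codes, scale)     # close to the originals, small rounding error
```

The trade-off: memory and bandwidth drop roughly 4x, at the cost of a small rounding error in each weight, which usually costs only a little model quality.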
Something tells me "The sky is the limit" here 👀
1 petabyte is not 1m gigabytes, it is 1,000 gigabytes.
I thought this speech was coming from an engineer, but perhaps it is just a hired actor.
So are transformers only for language- and text-related things?
Ugh, corporate videos... the horror.
Did you just mirror the screen so that it looks like you can write right to left? Wow!
But how is it possible for an LLM to innovate when it is trained within the boundaries of human knowledge?
What is meant by "understanding" when referring to "sequences of words"? I mean, what does "understanding" mean in that context?
In this presentation, there was not enough detail on foundation models as a baseline to then explain what LLMs are.