Comments:
Crazy to think that this came out two years ago; the advancement in the field is insane
Python code?
Thank u 😮😮😮😮 amazing description
My guy, this has to be the best tutorial on NeRF I've seen; I finally understood everything
Amazing video, thanks a lot.
Phenomenal video!
Pretty clear and great, thanks to you!!
Beautiful and super intuitive video! Thanks :3
This is an amazing explanation! I have a doubt, though. You talked about the major question of the training images not having information about "density". How are we even computing the loss in that case for each image? You said we compare what we see with what the model outputs. But how does the model give different density information for a particular pixel if we don't have that kind of information in the input? How is having a differentiable function that can backpropagate all the way to the input space any help if we don't have any reference or ground truth for the densities in the training images?
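For anyone stuck on the same question: density needs no ground truth because it enters the rendered pixel color through the differentiable volume-rendering step, so the ordinary photometric (pixel MSE) loss supplies gradients for it. A minimal NumPy sketch of that compositing step; the sample values below are invented purely for illustration:

```python
import numpy as np

def render_pixel(sigmas, colors, deltas):
    """Composite samples along one ray into an RGB pixel via alpha compositing."""
    alphas = 1.0 - np.exp(-sigmas * deltas)        # per-sample opacity from density
    # transmittance: probability the ray reaches each sample unoccluded
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
    weights = trans * alphas                        # density shapes these weights
    return (weights[:, None] * colors).sum(axis=0)  # weighted sum of sample colors

# Three samples along a ray: only the middle one has high predicted density.
sigmas = np.array([0.0, 10.0, 0.0])                 # predicted densities
colors = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
deltas = np.ones(3)                                 # spacing between samples
pixel = render_pixel(sigmas, colors, deltas)        # dominated by the green sample

# The loss is plain MSE against the observed pixel. Because render_pixel is
# differentiable in sigmas, gradients flow into density without density labels.
gt = np.array([0.0, 1.0, 0.0])
loss = ((pixel - gt) ** 2).mean()
```

If the network puts density in the wrong place, the composited color disagrees with the photo and the gradient pushes density around until the rendered views match.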
Loved the video. Learned a lot. Thanks
Is this an in-depth breakdown of photogrammetry, or is this something different?
Thanks for the explanation!!
I was searching for nerf guns; this is better than what I asked for.
I think saying that each scene is associated with one single neural network (the NN is overfitted to that scene) is not correct.
bro you are killin' it, pretty damn good explanation, thanks
so are there really two networks (coarse and fine), or is this some kind of trick?
One of the best NeRF explanations available. Thank you so much, it helped a lot.
Great video. Can you please make one on LeRF (Language Embedded Radiance Fields)?
It's like the end of photogrammetry
THE BEST
It's NeRF or Nothin' 😎
Code?
Gotta present this paper for a seminar at uni, so this video makes it so much easier. Thank you so much for this!
Awesome explanation! Please don't stop making these.
Man, your notes for explaining papers are really clear. I've gotten tons of help from your videos.
You could stack lots of objects, as long as you know the transformation from object to world coordinates and give each object a bounding volume in world space so the ray tracer knows whether to bother calculating it. If you had a supercomputer, you could render worlds with thousands of overlapping and moving objects :D
Thanks for the great explanation. I finally understand the central ideas behind NeRF.
Great video. Thanks Yannic!
I noticed many of the scenes were from UC Berkeley, kinda trippy. The engineering school there gave me a bit of PTSD ngl.
This sounds as if the presentation could be done entirely in a ray-marching shader on the GPU, as I suspect the evaluation of the model can be implemented as a shader.
But can this be used in real time?
This is mind blowing
This kind of pre-digestion of a complex technical paper is very expedient. Thank you.
awesome video! Really appreciate you doing this!
Thank you for the clear-cut and thorough explanation! I was able to follow, and that is definitely saying something because I come from a different world, model-based controls :)
finally something without CNNs. bravo guys.
Really cool. I love getting into this stuff. I'm a comp-sci student in my first year, but I'm considering switching and going for AI. Such an interesting field.
What a time to be alive! ;)
This video helps a lot for newcomers like me to understand NeRF, thanks!
Thanks to your markings and visualizations I can understand a lot more than I could on my own :D
Hi Yannic, I found this video very helpful. Could you do a follow-up on Instant NeRF by Nvidia?
When will this become available in image/video software?
Help! How do they determine depth (density) from a photo? Wouldn't you need prior trained data to know how far away an object is from a single photo?
what do you think the next step after NeRF is?
Nice explanation!
Great explanation, Yannic! I'd like to know if this technique could be used for 3D modelling?
Why is this "overfitting"? Wouldn't overfitting in this case be if the network snapped the rays to the nearest data point with that angle and didn't interpolate?
Excellent explanation. Realtime 3D Street View should be right around the corner now.
A very detailed and meticulous explanation, thanks to you!
Awesome explanation! Thanks for the video.