Comments:
how amazing it is that he set a timer for 15 mins and the vid is 22 mins long
Really nice video! Love the energy and the enthusiasm. Thanks for the help!
wow. you make the subject come alive with excitement and simplicity. you are really gifted. i will take you over hard-to-understand but smart Ph.D. professors from the Ivy League any day.
Was too fast for me
Pretty impressive. This is awesome. Cheers
Lots of thanks, Nick :)
U R GOD MAN, so much thanks
I really like this video. It is great!
This is a very novel and cool way to teach coding. I really enjoyed it, and it was good to see you troubleshoot and get stuff wrong.
where is it used? why?
I think you missed dividing the derivative by 2. Because the cost function is (1/(2 * no. of training examples)) * sum of squared errors, when we take the derivative, the 2 from dl/dw and the 1/2 from the cost function cancel each other. Anyway, it was a cool video, keep up the good work brother
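The cancellation described in that comment can be checked numerically: with the cost J(w) = (1/(2m)) * Σ(w·xᵢ − yᵢ)², the analytic derivative is (1/m) * Σ(w·xᵢ − yᵢ)·xᵢ, with no stray factor of 2, and it should match a finite-difference estimate. A minimal sketch (the data values here are my own illustration, not from the video):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.1, 5.9])
m = len(x)

def cost(w):
    # (1 / 2m) * sum of squared errors -- the 1/2 is there precisely so
    # the 2 from the power rule cancels when we differentiate
    return np.sum((w * x - y) ** 2) / (2 * m)

def grad(w):
    # d/dw: the 2 and the 1/2 cancel, leaving (1/m) * sum(error * x)
    return np.sum((w * x - y) * x) / m

w = 0.5
eps = 1e-6
# Central finite difference as an independent check of the analytic gradient
numeric = (cost(w + eps) - cost(w - eps)) / (2 * eps)
print(grad(w), numeric)
```

If the analytic form carried an extra factor of 2, the two numbers would disagree by a factor of two.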
You should create a model to reduce the pressure during the last minutes, such as finding an optimal time tolerance (±): (15 ± b) 😂😂😂😂. 😢 but we need more videos like this to have a good dataset 😂😂🎉. Thanks man
I've been following your channel for a while now and I always find new cool stuff here. Keep up the good work, it's really helpful. Also, I love your positive personality, you really make complex stuff look entertaining.
Are there any other machine learning/NVIDIA Jetson video tutorials you would recommend?
why is it necessary for x and y to be lists of lists?
Love it!
👍👍👍
hey man! I have a friend from Lyon and you guys have the same surname, haha
Any chance you have roots from there?
ChatGPT won this challenge instantaneously lol :
import numpy as np

# Set the learning rate
learning_rate = 0.01
# Set the number of iterations
num_iterations = 1000
# Define the data points
X = np.array([[0, 1], [1, 0], [1, 1], [0, 0]])
y = np.array([1, 1, 0, 0])
# Initialize the weights
weights = np.zeros(X.shape[1])
# Train the model
for i in range(num_iterations):
    # Compute the predicted values
    y_pred = 1 / (1 + np.exp(-1 * np.dot(X, weights)))
    # Compute the error
    error = y - y_pred
    # Update the weights
    weights += learning_rate * np.dot(X.T, error)
# Print the weights
print("Weights:", weights)
A.I. description of the code: "This script defines a simple dataset with four data points and trains a model using the gradient descent algorithm to learn the weights that minimize the error between the predicted values and the true values. The model uses a sigmoid activation function to make predictions.
The script initializes the weights to zeros, and then iteratively updates the weights using the gradient descent algorithm, computing the predicted values, the error, and the gradient of the error with respect to the weights. The learning rate determines the size of the step taken in each iteration.
After training the model, the final weights are printed out. You can use these weights to make predictions on new data points by computing the dot product of the data points and the weights, and applying the sigmoid function."
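As a concrete illustration of the prediction step that description mentions (the weight values below are placeholders of my own, not the ones the script above learns):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Hypothetical trained weights -- stand-ins for illustration only
weights = np.array([0.4, -0.7])

# New data points: dot product with the weights, then the sigmoid
X_new = np.array([[1, 0], [0, 1], [1, 1]])
probs = sigmoid(np.dot(X_new, weights))
labels = (probs >= 0.5).astype(int)
print(probs, labels)
```

Any probability at or above 0.5 is rounded up to class 1, the rest to class 0.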
I can do this more efficiently
oh god! you forgot to save and i involuntarily kept shouting SAVE IT! SAVE IT!
the essence of deep learning in a few lines of code... awesome
I wonder how long the backpropagation algorithm would take?
so can you please do this algorithm for multiple variables?
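For what it's worth, here is a minimal sketch of the same gradient descent idea extended to multiple input variables (the data, learning rate, and iteration count are my own choices, not from the video):

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 100, 3                         # 100 samples, 3 input variables
X = rng.standard_normal((n, d))
true_w = np.array([1.5, -2.0, 0.5])   # weights we hope to recover
y = X @ true_w + 0.3                  # plus an intercept of 0.3

w = np.zeros(d)
b = 0.0
lr = 0.1
for _ in range(500):
    err = X @ w + b - y
    # Vectorized gradients of the mean squared error
    w -= lr * (2 / n) * (X.T @ err)
    b -= lr * 2 * err.mean()

print("w:", w, "b:", b)
```

The only real change from the single-variable version is that the weight update becomes a matrix-vector product instead of a scalar multiplication.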
Amazing video!! Thank you so much
Where's my $50 gift card? Lol
Thanks for the video, subscribed! A suggestion: this small change to your code would demonstrate a real-world gradient descent solution for linear regression with noisy data. E.g.:
x = np.random.randn(20,1)
noise = np.random.randn(20,1)/10
# w = 5.8, b = -231.9
y = 5.8*x - 231.9 + noise
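Filling out that suggestion into a complete run (the learning rate and iteration count below are my choices; the data setup follows the snippet above, with a fixed seed added so the run is repeatable):

```python
import numpy as np

np.random.seed(0)
x = np.random.randn(20, 1)
noise = np.random.randn(20, 1) / 10
# True parameters: w = 5.8, b = -231.9
y = 5.8 * x - 231.9 + noise

w, b = 0.0, 0.0
learning_rate = 0.1
for _ in range(1000):
    y_pred = w * x + b
    # Gradients of the mean squared error with respect to w and b
    dw = -2 * np.mean(x * (y - y_pred))
    db = -2 * np.mean(y - y_pred)
    w -= learning_rate * dw
    b -= learning_rate * db

print(f"w ~ {w:.2f}, b ~ {b:.2f}")
```

With noise this small, the recovered parameters land close to the true 5.8 and -231.9; scaling the noise up shows how the fit degrades.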
Great video, I like this kind of video where you code some AI task against the clock; you teach us the concepts and show us the reality of implementing it 👏
Well explained 😄👍
Bro, how do you implement gradient descent as weights in K-nearest neighbors?
Nice implementation bro
Please do a video building a NN from scratch!!
Can you explain the NOTEARS algorithm? It would be a great help.
Thanks, waiting for part 5, forza
Great video. Set the time to 20 mins.
Nick, but I thought there are existing algorithms that you can feed your data into? I love the way you're doing it, but is it better to do it your style or to use existing ones?
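On that question: yes, libraries do ship ready-made solvers you can feed data straight into. As one illustration (my own example, not from the video), NumPy's built-in least-squares routine fits a line without any hand-written descent loop:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(50)
y = 3.0 * x + 2.0 + rng.standard_normal(50) * 0.1  # a line plus a little noise

# Append a column of ones so the solver estimates the intercept too
A = np.column_stack([x, np.ones_like(x)])
(w, b), *_ = np.linalg.lstsq(A, y, rcond=None)
print(f"w ~ {w:.2f}, b ~ {b:.2f}")
```

Hand-rolling gradient descent is great for learning what is happening under the hood; in practice a library solver like this is usually the safer default.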
This was oddly intense. Great job Nicholas! Even though you ran out of time, this video is still a win to me. 😉
You are so good at explaining these complicated concepts. Also, if you want to close the Explorer sidebar in VS Code, try Ctrl + B.
i need to say this: you are the gamechanger here!!
as a data scientist with 2+ years of experience, i ALWAYS learn something new with your content! please Nick, never stop doing these things, and also, never lose that smile on your face, even when you're hitting bugs!!
thanks for everything
Great Video!
Would be cool to come back to this and add a visualization during gradient descent using matplotlib to show what is actually happening.
For example, drawing the data points, the regression line, and the individual loss between the line and each data point, and showing stats like current step, w, b, total loss! :)
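A sketch of what that loop could look like, minus the actual drawing (the data, learning rate, and step count are my own choices; the matplotlib calls are left as comments since the per-step stats are the point here):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(20)
y = 2.0 * x + 1.0 + rng.standard_normal(20) * 0.1

w, b, lr = 0.0, 0.0, 0.1
for step in range(201):
    y_pred = w * x + b
    total_loss = np.sum((y - y_pred) ** 2)
    if step % 50 == 0:
        # e.g. plt.scatter(x, y); plt.plot(x, w * x + b); plt.pause(0.1)
        print(f"step {step:3d}  w={w:+.3f}  b={b:+.3f}  loss={total_loss:.4f}")
    # Gradient step on the mean squared error
    dw = -2 * np.mean(x * (y - y_pred))
    db = -2 * np.mean(y - y_pred)
    w -= lr * dw
    b -= lr * db
```

Watching w, b, and the loss tick toward their targets gives most of the intuition even before the plot is wired up.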
Please check the Auto Save option in the File drop-down menu, it's a real time saver 😃
I need to watch the video many times to understand what you are doing
But great work
I love all what you do
Thumb up 👍👍
You can contact us on telegram
Awesome video!! It's pretty cool to see such theoretical concepts coded and explained like this. Keep going Nick!!
Are you reading my mind or something? Every time I'm stuck on a topic, you drop a video about it...
Could you please upload the corrected code to GitHub? I lost track of your logic after "def descend()" etc.