Comments:
I hope you see my questions; you never respond to them. Why did you not fit BaggingClassifier with (x_train, y_train) in the exercise?
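A minimal sketch of fitting the bagged model on the train split, assuming X and y are the feature matrix and target from the exercise (the parameter values here are illustrative, not taken from the video):

from sklearn.model_selection import train_test_split
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

# hypothetical split; stratify keeps the class balance of the diabetes labels
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=10)

bag_model = BaggingClassifier(
    DecisionTreeClassifier(),  # base estimator, passed positionally so it works across sklearn versions
    n_estimators=100,
    max_samples=0.8,           # each bag draws 80% of the training rows, with replacement
    oob_score=True,
    random_state=0,
)
bag_model.fit(X_train, y_train)  # fit on the training split only
print(bag_model.oob_score_, bag_model.score(X_test, y_test))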
Why, during cross validation, are we using the original unscaled X instead of X_scaled? Does it not affect accuracy?
My results of the exercise: SVM standalone 0.8, after bagging 0.8; Decision Tree standalone 0.65, after bagging 0.79. Bagging helps improve accuracy and reduce overfitting, especially in models that have high variance; it is mostly used for unstable models like decision trees.
This was nice and straightforward, and the quip about "copy and paste" was hilarious.
It is a very helpful video for doing my research project!
Thanks a lot for your awesome series!
I also learned that bagging doesn't do much to increase the performance of the model apart from lowering the variance.
My results after completing the exercise:
SVM:
Standalone 0.82
Bagged model 0.87
Decision Trees:
Standalone 0.79
Bagged model 0.84
My bagging model score came out to be 0.8027, SVC: 0.8804, Decision Tree: 0.804.
Why are we fitting our model on X, y?
Then what is the use of x_train and y_train? And there is no use of scaling either if we are training our model on the original X and y.
How do you train on multiple files, provide a label for each individual file, and then classify a file?
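Regarding the X vs. x_train question above: cross_val_score does its own train/test splitting internally, which is why the full X and y are passed to it. One way to also apply scaling without leaking information from the held-out folds is to wrap the scaler and the estimator in a Pipeline; a rough sketch, assuming X and y are the full feature matrix and target:

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# the scaler is re-fit on each training fold, so the held-out fold stays unseen
pipe = make_pipeline(StandardScaler(), SVC())
scores = cross_val_score(pipe, X, y, cv=5)
print(scores.mean())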
Oh my god, my computer had a fever for a month, haha. By the way, thank you sir for your clear explanation!
Thank you so much sir for this ML playlist
One of the most underrated playlists for ML. I hope lots of students will join ❤
I tried clicking on the solution
Now I have a fever
Bagging SVC gave a far better result than bagging the decision tree.
SVC score without bagging 0.87
DecisionTreeClassifier score without bagging 0.76
SVC score with bagging 0.867
DecisionTreeClassifier score with bagging 1.0
Drastic improvement in Decision Tree Classifier
Thank you, sir, for this amazing playlist on machine learning!
I'm not able to see the top row with the column headings in the CSV file downloaded from Kaggle (pima-indians-diabetes.csv).
Am I making a mistake while downloading?
thanks sir
Awesome 🔥
Appreciate your effort bro
Shouldn't you use X_train in the cross-validation calls?
Hi sir, can you make a video on how to combine classifiers like decision tree, random forest, naive bayes, and SVM and get a collective result, like a weighted output?
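Until such a video exists, one way to get a weighted, collective prediction is scikit-learn's VotingClassifier; a rough sketch (the weights and the X_train/X_test names are placeholders, not from the video):

from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

# soft voting averages the predicted class probabilities, weighted per model
voting = VotingClassifier(
    estimators=[
        ('dt', DecisionTreeClassifier()),
        ('rf', RandomForestClassifier()),
        ('nb', GaussianNB()),
        ('svm', SVC(probability=True)),  # probability=True is required for soft voting
    ],
    voting='soft',
    weights=[1, 2, 1, 2],  # hypothetical weights, tune for your data
)
voting.fit(X_train, y_train)
print(voting.score(X_test, y_test))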
Good work, really.
Sir, I don't see any time series forecasting videos; please upload some on that topic.
Best explanation, EVER!!
Excellent explanation, sir. The whole series has been exceptional.
I had one query: how can reducing the size of the dataset decrease variance? Decreasing the number of features might decrease it, but how can decreasing the number of training examples decrease it?
Hi, can you explain further the difference between bagging and bagged trees? I don't really understand the explanation in the video. Thank you so much for your help! Your videos are amazing.
Thank you very much for this video.
When will the boosting video be uploaded?
Your tutorials are not properly structured and are not learner-centric!
That clearly described what the bagging method is. I wish you had a video about boosting as well.
You have to do outlier detection because the max is much higher than the 75% value.
This exercise was a challenge. Thank you. By just taking a pure z-score of the set, some outliers were missed. Basically, all the outliers were the 0s for blood pressure and cholesterol. With those eliminated, I got significantly higher scores than the solution: all bagged models gave a similar 86% accuracy. The biggest jump from non-bagged model to bagged model was the decision tree, which went from 79% accuracy without bagging to 86% with bagging. Also, I did the exercise several months after this video was posted (not sure when it was made), so the libraries (especially SVC) may have improved their defaults.
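A rough sketch of that zero-filtering idea, assuming the Kaggle column names (Glucose, BloodPressure, BMI; your copy of pima-indians-diabetes.csv may name them differently):

import pandas as pd

df = pd.read_csv('pima-indians-diabetes.csv')

# zeros in these columns are physiologically impossible, so drop those rows as outliers
impossible_zero_cols = ['Glucose', 'BloodPressure', 'BMI']
df_clean = df[(df[impossible_zero_cols] != 0).all(axis=1)]
print(df.shape, df_clean.shape)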
Thank you for this wonderful explanation. I have a query here. We scaled X, but everywhere we use X in cross_val_score. Could you please explain why we scaled X?
Thank you so much sir for this ML playlist. Your explanations are simple, exact, and extremely easy to follow. The method that you use of first familiarizing us with theory, then with a practical example and then an exercise is really effective. Looking forward to more of such videos in your ML series. Thanks once again, sir.
At 21:50, in cross_val_score you use X and y; why not x_train and y_train? Can anyone explain this?
By using df.describe(), how can we decide whether outlier removal is necessary or not? Can anyone please help me with my question?
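One common heuristic (not from the video) is to compare each column's min and max from df.describe() against the 1.5 * IQR fences; columns whose extremes fall far outside them are worth inspecting for outliers. A minimal sketch, assuming df is the loaded dataframe:

desc = df.describe()

# columns whose max or min falls outside the usual 1.5 * IQR fences
iqr = desc.loc['75%'] - desc.loc['25%']
upper_fence = desc.loc['75%'] + 1.5 * iqr
lower_fence = desc.loc['25%'] - 1.5 * iqr
suspect = (desc.loc['max'] > upper_fence) | (desc.loc['min'] < lower_fence)
print(suspect[suspect].index.tolist())  # columns that probably need outlier handling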
Thank you for making such a clear video on bagging and RF.
I have one doubt about RF: since RF does row and feature sampling, in feature sampling some of the decision trees might not get the relevant features, or even the features we might want to use. Does this affect accuracy and keep us from getting the result we want?
P.S. I know this is a lot of writing!
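For what it's worth, the amount of feature sampling in a random forest is tunable through max_features; a small sketch (parameter values are illustrative, and X_train/y_train are assumed to come from an earlier split):

from sklearn.ensemble import RandomForestClassifier

# max_features sets how many features each split considers;
# raising it toward 1.0 (all features) lowers the chance that a tree
# never sees an important feature, at the cost of more correlated trees
rf = RandomForestClassifier(n_estimators=200, max_features='sqrt', random_state=0)
rf.fit(X_train, y_train)
print(rf.score(X_test, y_test))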
I don't know why I am getting an output and mean of 1 while using DecisionTreeClassifier and RandomForestClassifier.
I have tried different values, but the result stays the same and I can't find the exact reason.
Can you guys tell me where I have made a mistake? :|
Good presentation and preparation; easy to understand. I would like clarification on why the term "resampling with replacement" is used instead of "sampling with replacement". Is this incidental, or is there a specific reason? Thank you.
I am waiting for the boosting and XGBoost methods, sir.
Thank you, sir, for a very good explanation. Those examples are very good practice for writing code and provide strong motivation.
Thank you for your courses.
I have different code to detect outliers. This code also works very well and is simpler.
Best regards
# flag any value outside the 1.5 * IQR fences, then keep only rows with no flagged column
Q1 = df.quantile(0.25)
Q3 = df.quantile(0.75)
IQR = Q3 - Q1
outlier_condition = (df < (Q1 - 1.5 * IQR)) | (df > (Q3 + 1.5 * IQR))
df3 = df[~outlier_condition.any(axis=1)]
df3.shape
Hi Dhaval, I hope you are doing well. I have a query: at step 35 you provided X and y as input to the model. What if you had provided X_scaled instead of X? I think the accuracy might be different.
You are the best! Thank you
Random forest explanation is superb 👌