Comments:
I am told Kaggle no longer supports SD WebUI. Is it true?
Is it possible to post a tutorial for training dreambooth on kohya_ss?
This is great content!
Hello teacher, isn't there a Turkish version of the video? :(
It's me again :(. I did everything as in the video with a fresh A1111 install (I couldn't check out the exact version you used because the installation got stuck every time after checking out your provided hash) and installed Dreambooth with your .bat file. I used your settings for 12 GB VRAM (I have 10). When hitting "Train" I get the good ol' "torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 16.00 MiB (GPU 0; 10.00 GiB total capacity; 9.14 GiB)" error... I had this before when trying different tutorials and versions. So far I've only gotten training to work once, a week ago, but now I basically always end up with this "out of memory" error. Is there a known way to fix this?
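One workaround that sometimes helps with this kind of near-full-VRAM error (my own suggestion, not something from the video) is to tell PyTorch's CUDA caching allocator to split large blocks, which can reduce fragmentation. Set the environment variable before launching the WebUI; the 512 value here is only a starting point to experiment with:

```shell
# Reduce CUDA allocator fragmentation; max_split_size_mb is a documented
# PYTORCH_CUDA_ALLOC_CONF option. Set this before starting the WebUI.
export PYTORCH_CUDA_ALLOC_CONF="max_split_size_mb:512"
echo "$PYTORCH_CUDA_ALLOC_CONF"
```

On Windows the equivalent is adding `set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512` near the top of webui-user.bat. Lowering batch size and resolution, and closing other programs that hold VRAM, also helps.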
the title of this video made me laugh out loud for a whole day
Well, I'm STILL blocked at the first step of the tutorial: when I enter the first command, I can't use my Stable Diffusion anymore. I get the error:
"RuntimeError: Couldn't install xformers.
Command: "D:\ai test\stable-diffusion-webui\venv\Scripts\python.exe" -m pip install -U -I --no-deps xformers==0.0.20 --prefer-binary
Error code: 1"
I'm starting to really hate Stable Diffusion now; no matter what tutorial I follow, it's always "ERROR ERROR ERROR ERROR" at the VERY FIRST STEP, fuck that's annoying.
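One thing worth trying (my own guess, not part of the tutorial): run the exact pip command the launcher failed on by hand, using the venv's python.exe from the error message, so you can see pip's full error output instead of just "Error code: 1". A sketch that only assembles and prints the command to copy into a command prompt:

```shell
# Build the manual install command from the path shown in the error above.
# Running it by hand in a terminal reveals the real pip failure reason.
VENV_PY='D:\ai test\stable-diffusion-webui\venv\Scripts\python.exe'
CMD="\"$VENV_PY\" -m pip install -U -I --no-deps xformers==0.0.20 --prefer-binary"
echo "$CMD"
```

A common cause is an xformers wheel that does not match the installed torch version, which the full pip output will show.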
THANK YOU SIR😊
Hello Sir, I used 31 photos of myself taken with my iPhone 13. I took them in daylight; they were good but not as sharp as yours. The classification (regularization) images were generated automatically since I didn't provide anything, as I don't have a realistic-image dataset. The model took 9 hours to train. After doing an x/y/z plot comparison, in the last epochs it looks more like me, but the face is always blurred (the issue, I think, is from not having sharp pictures, maybe). I kept all the other settings the same as yours. I prompted a lot of queries yesterday, and out of I guess 2000 images, I was able to find 10 pictures that looked like me. Even of those, only 2-4 were good.
Do you think I should train the model again with a new dataset of pictures taken on a DSLR? Or would something else be better? As I'm a student I don't have much money, and I already spent a lot on the initial training. I want to keep a good model that I can keep using even after 6-7 months for creating my pictures. Your help would be really appreciated, sir.
Currently training on runpod with your patreon scripts, thanks again. 7 more hours before I can test it 😂
Thank you so much for this great tutorial!!! It works perfectly! Please make a tutorial about training clothes
Thank you for providing these remarkable videos!
However, I've gotten stuck at some point, so I want to ask something.
I've tried to make a female idol, so I collected some images from Instagram to get high-resolution images.
- I collected, focal-cropped and resized 20-40 images at 768x1024 resolution
- Captioning was done with WD or BLIP
- 2000 class sample images were generated using the text2image generation tab with tight positive & negative prompts and Hires.fix (R-ESRGAN), at the same resolution as the dataset
- In my Dreambooth version, the Text Encoder Learning Rate, TENC Weight Decay, and TENC Gradient Clip Norm sections don't exist, so I skipped them. The rest is the same as in your video, except the memory attention section.
- Memory Attention causes an error when I choose 'default', so I chose 'xformers'.
As a result, there is only a slight feeling of similarity. It does not generate results like you did.
So I want to ask what I've done wrong in this project.
Thank you for reading this long comment
this is insane, thanks for your videos
Thank you so much for the video. It's really great. I do have one issue. For some reason in image generation during dreambooth training, pictures are being generated with duplicate bodies on top of each other in each image. Has anyone experienced this before and if so what is a good fix?
I have a CUDA out of memory error - same settings and a 3090 24GB :(
I'm having memory issues training with a 12gb rtx3060 if I choose "1024" and the lowest possible settings. Do you know if there is a workaround?
It took me a hell of a lot longer than 45 minutes to get through the video and do everything successfully, but I finally made it and the results are fantastic! Thanks for all that you do.
Under Mixed Precision I don't have 'bf16', only 'fp16'. My GPU is a Quadro RTX 5000 16GB. How can I add it? :)
Ответитьyour english it's so bad
Do you have classification images for women on Patreon?
Personally, I would like to see a workflow similar to this photography tutorial but changing poses and clothing while maintaining physical likeness using various extensions with Stable Diffusion. This Studio Tutorial is another fantastic one. Thanks.
I want to do four portraits: one of myself (a man), my wife, and my young daughters. In each portrait we would be wearing a lucha libre mask. I want the portraits with a 3/4 turn and in high key. Do you think I could use this process to accomplish that? Or would the masks and the desired positioning and lighting be too problematic for this process? Thanks very much.
Why do I feel like I'm back at uni? You can't get a better tutorial than this!!
Nice video thanks!
Great quality, thanks. The results are amazing.
Need help with this: I have 20 base images and 1000 good-quality classification images, using 50 Class Images per Instance, and max training steps per image is set to 150. Do any changes need to be made to class images per instance or max training steps per image?
The rest of the settings I'm using as mentioned in the video.
You make excellent tutorials, and remind us (when needed) where to find the necessary files if we don't have them.
As for all this training, I stopped trying a while back, because I never got the results I wanted, and because it felt like how to do everything changed every week.
I also have only 12 GB VRAM on my video card (3080 Ti), which seems to cause trouble. Can I actually do training with 12 GB VRAM?
You perfected it!! Great
I have to say, thank you, Dr. Your videos are so in-depth, and the way you create scripts for your community is awesome. How do I schedule a consultation with you? Thanks again
Thanks for what you do. I'm really looking forward to the end of my six-hour training (with your settings - 12400 steps on an RTX 4090) to try the new model.
If I have only 12GB VRAM, what about the resolution (class images, output, dataset)? Should I then use 512x512 everywhere?
Please do a Google Colab of this masterpiece; the results are amazing.
Sadly the dreambooth extension just crashes on my Linux PC, no matter what I do. However, I applied what you said to a Fast Dreambooth Colab and produced some very nice images. Thank you!
Can I only produce outputs similar to the resolution of the training set?
ОтветитьThais huy has té bestia tutorials for real
🔥🔥
Can we do all of this stuff on Google Colab?
Thank you very much!
This has been a tremendous tutorial, teacher. I honestly feel the need to buy an RTX 30 or 40 series graphics card. But I can use RunPod instead; if I bought a card, the electricity it burns would cost more than RunPod anyway 😂. By the way, in case I missed it, is there a way to use 2 different people (husband and wife, for example) in a single model? Could we do it by using 2 different models or by merging the two? Thanks in advance.
Is Reliberate better than Realistic Vision?
Well done, teacher.
ОтветитьYou destroyed all limits with your explanitions
Good work, keep on streaming
informative vid!!!
By using regularization images from Unsplash you will actually decrease the model quality, not improve it. Regularization images should be generated by the model you are fine-tuning.
Thank you, teacher. I want to ask a question that's been on my mind: in terms of quality, dreambooth training seems to give better results. LoRA training is faster; is it possible to achieve this quality with LoRA?