Comments:
Pedro is this you???
T2I-Adapter or IP-Adapter in ControlNet? Both are for styling; which is preferable today?
Where do you get the clip_vision preprocessor?
Hi, how do I install the clip_vision preprocessor?
My preprocessor list is different; how do I add a preprocessor to it?
Please help, in my preprocessor list I don't have clip_vision to pick.
I do have the t2i style one though.
Where do I find clip_vision? I have the latest ControlNet and it isn't there.
I can't find the clip_vision preprocessor. Where should I install it from?
Cool tutorial!
And how can I apply the style I got to a batch of images taken from a video sequence?
Ai, how come you have 5 ControlNet options but I only have one?
Thank you so much for this video, but I am getting the error "AttributeError: 'NoneType' object has no attribute 'unsqueeze'" when using this feature.
It doesn't work!!!
If you're getting the warning "StyleAdapter and cfg/guess mode may not works due to non-batch-cond inference", try shortening the prompt to 75 tokens or fewer.
For some reason I can't make this work :( It just makes random pictures.
Why does everyone always assume we're using stupid Windows PCs?
I got this message when I used the color adapter:
RuntimeError: pixel_unshuffle expects height to be divisible by downscale_factor, but input.size(-2)=257 is not divisible by 8
Does anyone know how I can fix it? I did some research last night but still no luck 😭
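If it helps anyone: that error says the input image's height (257 here) must divide evenly by 8, so one possible workaround is cropping or resizing the image to multiples of 8 before uploading it. A minimal sketch of the arithmetic, assuming nothing beyond plain Python (the function name `snap_to_multiple` is my own, not from any tool mentioned in the video):

```python
def snap_to_multiple(width: int, height: int, factor: int = 8) -> tuple[int, int]:
    """Round each dimension down to the nearest multiple of `factor`,
    keeping at least one full block so the result is never zero."""
    snap = lambda x: max(factor, x - x % factor)
    return snap(width), snap(height)

# The failing 257-pixel dimension would become 256:
print(snap_to_multiple(257, 257))  # → (256, 256)
```

Then crop or resize the image to the returned size in any editor (or with Pillow) before feeding it to the color adapter.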
My ControlNet tab disappeared, any idea? There is no ControlNet option in Settings either :c
How do I convert an anime movie frame into a realistic, photograph-like image?
I'm still looking for a way to generate consistent styles with the same character(s) in different scenes (for a picture book, for example) without using Dreambooth to train the faces. Like using img2img with a given character and placing the character in different poses and scenes with the help of ControlNet, without needing to train the character with Dreambooth. Is there a way? In Midjourney you can use the URL of a previously generated image followed by the new prompt to get results with the same character in a new setting.
I use another UI and I don't have the "clip_vision" preprocessor; I can't find where to get it?
Where do you get clip_vision? I can't find it.
Bruh, too fast for the explanation, but thanks for the video.
That is VERY cool!
I want to download the model you used in this video. Please give me the download link.
Nice to see you show what happens when this thing is configured incorrectly, not just a step-by-step without failures. 👍
If I run more than one ControlNet tab I get a CUDA out-of-memory error (8 GB VRAM GPU). Any suggestions?
AttributeError: 'NoneType' object has no attribute 'convert'. Any ideas?
Anyone else finding this slow? I'm on an RTX 3090 but it takes 2 minutes to render one image. Not what I'm used to, hehe.
How can I use this for a sequence?
Hi, may I ask what the pros and cons are of running Stable Diffusion on GC vs. running it locally on a PC? I have an RTX 3070, and when I use ControlNet it runs very slowly and sometimes goes out of memory. Would running on GC be faster? What do you recommend? Thanks in advance.
You've been on fire with the upload schedule. Please don't burn yourself out.
I'm doing amazing things with style transfer; thanks for the guide and the exceptional work 😁
Please, teacher, I need a how-to on batch inpainting with inpaint batch masks from a directory...
Hey, so I had a similar bug with the updates. I completely removed Dreambooth, as it's literally just trouble, and now the automatic updates work again.
I just made a separate install of the kohya_ss GUI for training and such instead; highly recommend it.
Hi Aitrepreneur, which version of Stable Diffusion do you use in your videos? I'm looking for the same one so I can follow along, but I haven't succeeded. Thanks in advance.
Thanks Aitrepreneur for another great video.
For anyone having the error "Error - StyleAdapter and cfg/guess mode may not works due to non-batch-cond inference" when the clip_vision preprocessor is loading and the style doesn't apply:
try this in webui-user.bat: "set COMMANDLINE_ARGS= --xformers --always-batch-cond-uncond". The last parameter, "--always-batch-cond-uncond", did the trick for me.
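For anyone unsure where that line goes: the edit described above would leave webui-user.bat looking roughly like this (a sketch of a stock AUTOMATIC1111 install's file; any other values you already set there should stay as they are):

```bat
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
REM --always-batch-cond-uncond forces batched cond/uncond inference,
REM which avoids the StyleAdapter "non-batch-cond inference" warning
set COMMANDLINE_ARGS=--xformers --always-batch-cond-uncond

call webui.bat
```

Save the file and relaunch the webui for the new arguments to take effect.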
How are you getting these long prompts to work with the style transfer? It seems like the style won't work without a much shorter prompt.
What's the difference between the T2I-Adapter models and the regular ControlNet ones?
It's not working for me. I did exactly the same steps as in the video, I have ControlNet etc., but after rendering with clip_vision/t2iadapter it changes nothing in the photo... just wtf? Tried a lot of times with different backgrounds, and it's always the same photo. Yes, I turned ControlNet on.
Hey Super K, thanks for your amazing video as usual! Unfortunately, for some reason the t2iadapter_style_sd14v1 model is not working for me at all. All the other models work except this one. So I just thought I'd leave my comment here to see if maybe other people with the same problem could fix the issue and lead me in the right direction. Thanks for reading! :)
I wouldn't really call that a style transfer, imo. That painting by Hokusai has a lot of impressionistic elements which don't seem to be transferred over to the new image. The female character displayed still has that very typical "artgerm, greg rutkowski" style look to it. Still a cool feature nonetheless, but the title is misleading. Better to call it "transferring elements from an image".
Wow this is so epic 🤩
You are using the "posex" extension?* German laughter follows: hehehehehehhehehhhehehe hehehehehehe
I get this error when trying to generate, any ideas?
RuntimeError: !(has_different_input_dtypes && !config.promote_inputs_to_common_dtype_ && (has_undefined_outputs || config.enforce_safe_casting_to_output_ || config.cast_common_dtype_to_outputs_)) INTERNAL ASSERT FAILED at "/Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/TensorIterator.cpp":407, please report a bug to PyTorch.
It's astounding to find all these new options in Stable Diffusion. A bit overwhelming if you didn't follow along from the start, but the sheer amount of possibilities nowadays is golden!
Why are these models so small? The ones from another video of yours are 5.71 GB; what is the difference? Also, could you make a video on how to get these models to work on a Colab? I couldn't get them to work even though they show up in the ControlNet options. OpenPose doesn't work for me either: I installed the OpenPose editor and added the model, it all shows up fine, I can make a pose, export it to PNG and then upload it, but it just gets ignored or something. I get a blank image when I hit generate, even though I selected the OpenPose model in ControlNet and hit enable and generate. This is all on Colab; I can't test it on my PC as my GPU is too weak. Thanks for any help.
How do you "enable" clip skip at the top?
clip_vision doesn't work for me :(