TRANSFER STYLE FROM An Image With This New CONTROLNET STYLE MODEL! T2I-Adapter!

Aitrepreneur

1 year ago

64,192 views

Comments:

@johnjohn5932 - 25.12.2023 22:57

Pedro is this you???

@TheMaxvin - 22.10.2023 18:36

T2I-Adapter or IP-Adapter in ControlNet: both are for styling, so which is preferable today?

@danowarkills4093 - 01.09.2023 19:44

Where do you get the clip_vision preprocessor?

@vincentvalenzuela3171 - 17.08.2023 07:01

Hi, how do I install the clip_vision preprocessor?

@chayjohn1669 - 13.06.2023 09:31

My preprocessor list is different. How do I add the preprocessor?

@respectthepiece4833 - 21.05.2023 21:06

Please help: in my preprocessor list I do not have clip_vision to pick. I do have the t2i style model, though.

@therookiesplaybook - 05.05.2023 02:31

Where do I find clip_vision? I have the latest ControlNet and it ain't there.

@zirufe - 28.04.2023 06:25

I can't find Clip Vision Preprocessor. Where should I install it?

@jippalippa - 21.04.2023 01:44

Cool tutorial!
And how can I apply the style I got to a batch of images taken from a video sequence?

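One minimal way to do this, assuming the AUTOMATIC1111 WebUI and that ffmpeg is installed (the file names here are placeholders, not from the video): extract the frames with ffmpeg, e.g. "ffmpeg -i input.mp4 frames/%05d.png", then point the img2img Batch tab at that frames folder with the same ControlNet style settings enabled, so every frame receives the same transfer.
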
@Rocket-Gaming - 20.04.2023 04:38

Ai, how come you have 5 ControlNet tabs but I have only one?

@victorwijayakusuma - 15.04.2023 17:53

Thank you so much for this video, but I am getting the error "AttributeError: 'NoneType' object has no attribute 'unsqueeze'" when using this feature.

@K-A_Z_A-K - 14.04.2023 21:25

It doesn't work!!!

@novalac9910 - 10.04.2023 09:04

If you're getting "Warning: StyleAdapter and cfg/guess mode may not works due to non-batch-cond inference", try shortening the prompt to 75 tokens or fewer.

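A quick way to check that 75-token budget, sketched in Python with Hugging Face's CLIPTokenizer (the package, model name, and sample prompt are assumptions, not something shown in the video):

    # Count CLIP tokens in a prompt; keep the result at 75 or below.
    from transformers import CLIPTokenizer

    tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
    prompt = "a portrait of a woman in the style of The Great Wave off Kanagawa"
    token_count = len(tokenizer(prompt)["input_ids"]) - 2  # minus the BOS/EOS markers
    print(token_count)  # above 75, the style adapter may emit this warning
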
@lujoviste - 30.03.2023 21:45

For some reason I can't make this work :( It just makes random pictures.

@CaritasGothKaraoke - 28.03.2023 23:39

why does everyone always assume we’re using stupid windows PCs?

@KarazP - 21.03.2023 06:12

I got this message when I used the color adapter:

RuntimeError: pixel_unshuffle expects height to be divisible by downscale_factor, but input.size(-2)=257 is not divisible by 8

Does anyone know how I can fix it? I did some research last night but still no luck 😭

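The error says the image fed to the color adapter has a height (257 px here) that is not a multiple of the 8x downscale factor. A minimal Pillow sketch that rounds both sides down to a multiple of 8 before loading the image into ControlNet (file names are placeholders):

    # Resize a reference image so width and height are divisible by 8.
    from PIL import Image

    img = Image.open("reference.png")
    w, h = img.size
    img = img.resize((w - w % 8, h - h % 8))  # e.g. 257 -> 256
    img.save("reference_div8.png")
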
@victorvideoeditor - 19.03.2023 02:50

My ControlNet tab disappeared, any idea? There is no option in Settings > ControlNet :c

@rageshantony2182 - 18.03.2023 21:36

How do I convert an anime movie frame into a realistic, photograph-like image?

@Kontor23 - 16.03.2023 16:02

I'm still looking for a way to generate consistent styles with the same character(s) in different scenes (for a picture book, for example) without using DreamBooth to train the faces. Like using img2img with a given character and placing the character in different poses and scenes with the help of ControlNet, without needing to train the character with DreamBooth. Is there a way? In Midjourney, for example, you can use the URL of a previously generated image followed by the new prompt to get results with the same character in a new setting.

@elmyohipohia936 - 16.03.2023 08:57

I use another UI and I don't have the "clip_vision" preprocessor. I can't find where to get it.

@the_one_and_carpool - 14.03.2023 15:03

Where do you get clip_vision? I can't find it.

@dk_2405 - 14.03.2023 10:27

bruh, too fast for the explanation, but thanks for the video

@inbox0000 - 12.03.2023 13:32

That is VERY cool!

@tuhoci9017 - 11.03.2023 12:08

I want to download the model you used in this video. Please give me the download link.

@devnull_ - 10.03.2023 15:08

Nice to see you show what happens when this thing is configured incorrectly, not only step by step without failures. 👍

@j_shelby_damnwird - 10.03.2023 08:53

If I run more than one ControlNet tab I get a CUDA out-of-memory error (8 GB VRAM GPU). Any suggestions?

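A common mitigation on 8 GB cards (a general suggestion, not something covered in the video) is to launch with memory-saving flags in webui-user.bat, e.g. "set COMMANDLINE_ARGS= --medvram --xformers"; --medvram trades some speed for lower VRAM use. Lowering the generation resolution or enabling one ControlNet unit at a time also helps.
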
@wgxyz - 10.03.2023 00:38

AttributeError: 'NoneType' object has no attribute 'convert'. Any ideas?

@LeroyFilon-xh2wp - 09.03.2023 14:49

Anyone else running this slow? I'm on an RTX 3090, but it takes 2 minutes to render 1 image. Not what I'm used to, hehe.

@iz6996 - 09.03.2023 13:29

How can I use this for a sequence?

@dcpln7 - 09.03.2023 12:27

Hi, may I ask: what are the pros and cons of running Stable Diffusion on Google Colab vs. running it locally on a PC? I have an RTX 3070, and when I use ControlNet it runs very slowly and sometimes runs out of memory. Would running on Colab be faster, and what do you recommend? Thanks in advance.

@jasonhemphill6980 - 09.03.2023 11:47

You've been on fire with the upload schedule. Please don't burn yourself out.

@GeekDynamicsLab - 09.03.2023 09:30

I'm doing amazing things with style transfer. Thanks for the guide and exceptional work 😁

@ariftagunawan - 09.03.2023 07:22

Please, teacher: I need a how-to for inpaint batch processing with an inpaint batch mask directory...

@mrrooter601 - 09.03.2023 06:42

Hey, so I had a similar bug with the updates. I completely removed Dreambooth, as it's literally just trouble, and now the automatic updates work again.

I just made a separate install of the kohya_ss GUI instead for training and stuff; highly recommend.

@qvimera3darts444 - 09.03.2023 00:40

Hi Aitrepreneur, which version of Stable Diffusion do you use in your videos? I'm looking for the same one so I can follow your videos, but I didn't succeed. Thanks in advance.

@mr.random4231 - 08.03.2023 22:36

Thanks Aitrepreneur for another great video.
For anyone having this error: "Error - StyleAdapter and cfg/guess mode may not works due to non-batch-cond inference" when the clip_vision preprocessor loads and the style doesn't apply:
Try this in webui-user.bat: "set COMMANDLINE_ARGS= --xformers --always-batch-cond-uncond". The last parameter, "--always-batch-cond-uncond", did the trick for me.

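For context, here is how that flag sits in a minimal webui-user.bat; the surrounding lines are the stock template from the AUTOMATIC1111 repo, shown as an assumption about a default install:

    @echo off

    set PYTHON=
    set GIT=
    set VENV_DIR=
    rem --always-batch-cond-uncond keeps cond/uncond batched, which the style adapter needs
    set COMMANDLINE_ARGS= --xformers --always-batch-cond-uncond

    call webui.bat
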
@wndrflx - 08.03.2023 22:32

How are you getting these long prompts to work with the style transfer? It seems like style won't work without using a much smaller prompt.

@AbsalonPrieto - 08.03.2023 20:07

What's the difference between the T2I-Adapter models and the regular ControlNet ones?

@ADZIOO - 08.03.2023 19:20

It's not working for me. I did exactly the same steps as in the video; I have ControlNet etc., but after rendering with clip_vision/t2iadapter, nothing changes in the photo... just wtf? Tried a lot of times with different backgrounds; it's always the same photo. Yes, I turned ControlNet on.

@kamransayah - 08.03.2023 18:16

Hey Super K, thanks for your amazing video as usual! Unfortunately, for some reason the t2iadapter_style_sd14v1 model is not working for me at all. All the other models work except this one. So I just thought I'd leave my comment here to see if other people with the same problem have fixed the issue and can point me in the right direction. Thanks for reading! :)

@eyoo369 - 08.03.2023 17:29

I wouldn't really call that a style transfer, imo. That painting by Hokusai has a lot of impressionistic elements which don't seem to be transferred over to the new image. The female character displayed still has that very typical "artgerm, greg rutkowski" style look to it. Still a cool feature nonetheless, but a misleading title. Better to call it "transfer elements from an image".

@sownheard - 08.03.2023 17:21

Wow this is so epic 🤩

@Thozi1976 - 08.03.2023 17:19

You are using the "posex" extension? *German laughter follows* hehehehehehe

@sigmondroland - 08.03.2023 16:43

I get this error when trying to generate, any ideas?

RuntimeError: !(has_different_input_dtypes && !config.promote_inputs_to_common_dtype_ && (has_undefined_outputs || config.enforce_safe_casting_to_output_ || config.cast_common_dtype_to_outputs_)) INTERNAL ASSERT FAILED at "/Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/TensorIterator.cpp":407, please report a bug to PyTorch.

@MathieuCruzel - 08.03.2023 16:37

It's astounding to find all these new options in Stable Diffusion. A bit overwhelming if you didn't follow along from the start, but the sheer amount of possibilities nowadays is golden!

@thegreatdelusion - 08.03.2023 16:02

Why are these models so small? The ones from another video of yours are 5.71 GB. What is the difference? Also, could you make a video on how to get these models to work on a Colab? I couldn't get them to work even though they show up in the ControlNet options. OpenPose doesn't work for me either: I installed the OpenPose editor and added the model, and it all shows up fine. I can make a pose, export it to PNG and then upload it, but it just gets ignored; I get a blank image when I hit generate, even though I selected the OpenPose model in ControlNet and hit Enable and Generate. This is all on Colab. I can't test it on my PC as my GPU is too weak. Thanks for any help.

@valter987 - 08.03.2023 15:03

How do I "enable" Clip skip at the top?

@snatvb - 08.03.2023 14:31

clip_vision doesn't work for me :(
