Comments:
With the introduction of "Realistic Vision V5.1", I'd assume that's the model of choice now, or is 2.0 still the way to go?
Hopefully this gets less complicated, i.e. with less swapping between apps. But this was interesting nonetheless.
Could you do the explanation a little slower, and for Stable Diffusion XL?
EbSynth is so limited to small movements though... How can I get a bigger range of movement?
thank u
Quick question.
My computer doesn't allow me to create 1024×1024 in Stable Diffusion, and if I have 512 png from the grid and 900×900 from Stable Diffusion, it creates a problem in EbSynth: it says the resolution is different.
What if I make 450×450 (not 512) in DaVinci and then make a grid,
and after this generate an image at 900×900 (not 1024) in Stable Diffusion?
Could this work as well in EbSynth?
Or does it only work with 512 images?
Hopefully this makes sense, and sorry for my English.
And also, what's inside the OUTPUT 1-4 files?
I don't understand what's revolutionary about this method. If it's just EbSynth, and Stable Diffusion is only used to create stylized keyframes, it seems to me a very limited tool that doesn't let me realize all my fantasies.
Hopefully someone will answer my question.
I have installed all of this on a MacBook M1; all good so far.
I am doing the same thing with ControlNet, exactly, but when I press Generate it doesn't generate the 4 images I selected but a random one. It's like my ControlNet doesn't work, and I don't know what to do.
Please help!
Hi, my sequence created 240 images, and EbSynth says files are missing. I researched it. Initially my dimensions were wrong: there was something wrong with the Sprite Cutter, so I had to slice the images using Photoshop, which resulted in dimensions of 256px × 256px, and I had to correct them to 512px × 512px manually. Should I try shortening the sequence to 74 frames and try again, since EbSynth is not recognizing the size?
Hello, nice tutorial :)
I wanted to ask you something: why, when I insert an image in grid format, is the grid totally ignored and the output is one big image instead of 4? Am I doing something wrong?
One question: whenever I try the tile technique for generating, the images end up very wacky, even with the same settings and a higher number of steps. What is a solution to this?
I followed these instructions but don't see the lineart model after choosing lineart realistic in the preprocessor. Any suggestions? I want to try this so badly.
Does the method only work correctly with 512 × 512 frames, or also with different non-square dimensions, such as 576 × 1024?
My "lineart realistic" preprocessor is there, but my "lineart" model doesn't appear in the dropdown menu even though it's installed. What do I do?
When I try to render the grid in Stable Diffusion, I only get 1 or 2 people. Does anybody know what the problem might be?
Hi! Love the videos. I have a question: what can I do if EbSynth is just giving me "ugly" results? It looks like it's melting; it's just not working for me.
So I tried it on a video of a girl dancing (upper body), and I get this ugly mess between my keyframes. Does that mean I have to make more keyframes? I have 425 frames in total.
Hi, in DaVinci Resolve, after running the Saver, I get 240 frames as output, but you have only 75 in the 10-second video. Could you please tell me if I am missing something?
Netflix adaptations are gonna go to the next level
Thanx bro
I don't believe you meant it to be offensive; however, no portion of the African diaspora is "colored", as if, were there a default for human skin tone, it would be something other than that of the original Homo sapiens, from whom we all descend and to whom we owe our existence.
Our skin isn't "colored" any more than your skin was erased.
When unsure what to call a Korean, you would identify them by the known general region of origin: Asian. The depiction is of an African woman, not a "colored", not a "woman of color", or some other branding that serves to remove the notion of Africans as being from a place on Earth. This planet is our home too.
Fair tidings.
How did you put in the four folders so that it shows 0000-0025.png? When I drag it in, it shows all the pictures.
This is great, thanks. What is in your original 512 folder that you put in EbSynth? The images before they were done in img2img in SD?
So this method is restricted to:
1. Square video format only?
2. Four different input frames only? 🤨
Why are all the top experts on AI from Scandinavian countries?
Insane tutorial from such a humble artist!! I have a question regarding using this technique for dancing characters, like ballet, for example. As there are a lot of different poses and movements going on, do you think this technique still makes sense, or do more complex movements still work better with a mix of Warp Fusion and ControlNet?
Why, on the Free Sprite Sheet Packer website, when I drop in 4 images, is the result not a 2×2 square?
So AI can reasonably get a white girl to look like a not-white girl.
Great, but still not a practical use case where I'd say, "Gee, I am making a video to accomplish X; I know a great AI tool to achieve this."
Hello there! I hope you are doing well! I just did everything step by step, but I'm getting single odd images and not 4 images like you are getting. Do you have any idea what could be happening here? Thank you very much!
this is totally awesome, thank you
When I select lineart realistic, nothing can be seen in the model dropdown. Have I missed something here?
A neural network that turns our Black friends into people?
Looking forward to video 2 as well... still experimenting with this method. Got some OK results with 1 frame, but it's a little tricky getting the 2×2 frames to diffuse without border changes... and still experimenting with the number of keyframes to avoid blurry interpolation between them... but hey, it's an interesting approach. Looking forward to the next video.
Netflix:
heavy breathing
Is there a reason why, when I drag my Output folders into DaVinci Resolve, it doesn't play them as clips? They're all separate images.
Was 100% expecting "activists" in the comments here, seeing this thumbnail.
Your production is fantastic.
I'm learning a lot from you 😌
Thank you for posting this breakdown of his workflow! Did you ever post part 2? I couldn't see it in your uploads. Thank you!
Why don't you just use the Temporal Kit with txt2img? It's the same steps, just easier, because it handles all the image cropping and EbSynth.
That bouncing giant text constantly makes it tough to get through these videos.
I'm confused. I've got 30 keyframes, and when I drag the 30-keyframe folder into EbSynth, it doesn't add anything to Stop: in the bottom section. Then it says I'm missing keyframe 0001 when I click Generate, but my first frame is 0000?
AI art is cringe
Couldn't one just make the grid image in Photoshop? Also, it would be cool if you explained things a little slower...
Nice job. Kindly share the second part. I have worked on a similar video on my channel; instead of using Stable Diffusion, I used a faceswap method to get a realistic picture. However, it will not apply to a full-body style. I would like to know if there is a possibility to use a Midjourney picture and get an exact pose from a reference image.
Did he hack the AI, or did the AI allow him to hack it?
👍👍!!
Can I do this with the graviti webui site?