Hey this is awesome man! I'd like to try to make some too, do u have a tutorial or anything that can teach me how? I could send it to you after.
Hi mate, thanks :) I don't really know, you should go on the Unstable Diffusion Discord and start talking with people, that's how I learned. I'm basically doing an img2img batch over a base video I'm also making. You can PM me on Discord: Sambalek#8026 or TikTok: @Proteinique
Model used: 70% AbyssOrangeMix2_hard + 30% Bastard_v2_LiveAction
Is there any link to this Bastard model? Or any suggestions on what tag name this model would get? For the other model tag, you can tag your posts with abyss_orange_mix.
This is the oldest post I could find of it. They say they got it from a discord server which, if true, would make the genuine source difficult to track down.
It's a bit strange, it's as if Cream's head was put on the body of a different character. That aside, I don't see why this image would be considered low quality, so it's approved.
Oh come on, this was made before NovelAI went public, and long before better alternatives to NovelAI were released. It's normal that the hands are not that good.
The interesting part is that this picture is pretty because I set up Stable Diffusion wrong.
The desaturation was partially caused by "auto" in the VAE settings. After I set it to nai.vae, it became more saturated (still good tho: https://files.catbox.moe/850cnu.png ). You can try disabling the VAE (set it to "auto" or "none" in settings) for more desaturation.
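For anyone doing the same thing outside the webui: the trick is just attaching a different VAE to the pipeline. A rough sketch with diffusers (assuming a recent version that has AutoencoderKL.from_single_file; the model id and VAE filename below are placeholders, not the exact files from this post):

    import torch
    from diffusers import StableDiffusionPipeline, AutoencoderKL

    # load the base model in fp16; with no explicit VAE the bundled/"auto" one
    # is used, which is what tends to look washed out
    pipe = StableDiffusionPipeline.from_pretrained(
        "path/to/some-anime-model",                 # placeholder
        torch_dtype=torch.float16,
    ).to("cuda")

    # attach a standalone VAE (e.g. an NAI-style one) for more saturated colors
    pipe.vae = AutoencoderKL.from_single_file(
        "path/to/some-vae.safetensors",             # placeholder
        torch_dtype=torch.float16,
    ).to("cuda")

    image = pipe("1girl, cherry blossoms").images[0]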
More negprompts to the god of negprompts! Also, I didn't know you could just describe the image; I thought you had to just list tags and hope the model would understand what you want.
The model used was a 50% merge of Protogen x3.4 and Anything-3.0. Believe it or not, the "aroused" and "horny" keywords are there for the facial expression. "Dreamlikeart" was included by mistake when I reused an old prompt from a different model (the previous model I used was a 50% merge of Dreamlike and Anything-v3).
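If anyone wants to reproduce a merge like that without the webui's Checkpoint Merger tab, a 50% merge is just a weighted sum of the two state dicts. Bare-bones sketch (filenames are placeholders, and this skips the pruning/fp16 conversion the webui can do for you):

    import torch

    # naive 50/50 weighted-sum merge of two SD checkpoints
    a = torch.load("protogen_x3.4.ckpt", map_location="cpu")["state_dict"]
    b = torch.load("anything-v3.0.ckpt", map_location="cpu")["state_dict"]

    alpha = 0.5  # 0.5 = equal parts of both models
    merged = {}
    for key, tensor in a.items():
        if key in b and torch.is_floating_point(tensor):
            merged[key] = (1 - alpha) * tensor + alpha * b[key]
        else:
            merged[key] = tensor  # keys only present in model A are copied as-is

    torch.save({"state_dict": merged}, "protogen_anything_50_50.ckpt")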
Just to be clear: no, it's not Kagome from Inuyasha, and if you'd watched the show you'd know. Stable Diffusion is not great at re-creating established characters (which indeed sucks).
@antlers_anon are you sure there's no hypernetwork selected in your settings tab, or a TI being used to generate this image?
I tried to generate this with your provided mix__2.ckpt with all of the exact same settings (prompt, sampler, CFG scale, seed) and it still results in a fairly realistic non-anime style.
If nothing else, would you be willing to share your embeddings folder and hypernetworks folder? (just a screenshot would probably do as well?)
edit: looks like the firstpass width/height was set to 0x0, so it was just straight-up going for 768x1280 as the initial resolution. I ended up finding that it gets pretty close, especially if I use kl-f8-anime2.vae instead of the novelai/anythingv3 VAE.
I'm guessing that colour correction and some filtering add a little noise.
I was gonna make a meme comment, but I feel like being helpful. :u
Being a Builder gives you access to a few additional features regular users don't have: you can use the mode menu when viewing posts (for quick faving/unfaving and for tag scripts), and you can give other users feedback, to name two of them. And since the Gold and Platinum levels are only sort of used (but not really?), Builder is kind of the default level people get promoted to if they're active in uploading and tagging stuff, I suppose.
Oh I know that now. But this pic is actually an accurate representation of how I felt when the message came in. I was like, "Oh cool! I'm a builder! ....... what's a builder?"
I'd love to know what hypernet was used with this!
I'm almost entirely sure I didn't use anything more than the mentioned model. Did you try using it with the prompt? I can try generating it again to make sure I didn't mess up while copying the settings. I'll report back once I'm near my PC.
@SomeCoolUsername Sorry to disappoint, but I don't really have them. I use this to generate, and none of the metadata gets saved automatically. I only fill out what I can know for sure.
The only piece of information I had access to but didn't add to the metadata field is the Guidance Scale, since I'm not sure whether it's the same thing as CFG Scale. I tend to go with either 8.5 or 9; I think this one was a 9.
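For what it's worth, "Guidance Scale" and "CFG Scale" are the same setting, just named differently between tools. In diffusers, for example, it's the guidance_scale argument (the model path below is a placeholder):

    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "path/to/some-model", torch_dtype=torch.float16   # placeholder
    ).to("cuda")

    # guidance_scale is what the A1111 webui labels "CFG Scale"
    image = pipe("1girl, silver hair, smile", guidance_scale=9.0).images[0]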
Here's a site with a bunch of models to choose from: https://rentry.org/sdmodels It really depends on what you're going for, and in what style. Personally I haven't experimented with many models, but I know that gape NovelAI is better for lewds, while Anything is just great overall but doesn't do amazingly with lewds. Hopefully that helps~
That expression definitely fits Weiss <3 - an astounding picture
Thanks haha, the face probably required the most work. For some reason heavily weighting squinting started putting glasses on her, so I had to heavily weight glasses in the negative. I probably could have just erased the mouth in Krita and drawn a line to let SD see what I wanted, but instead I inpainted each side for 15 minutes till I got what I wanted lol
Hey this is actually a problem I've been having. Thanks for mentioning it. I just learned I need to put it in the VAE folder to make it show up on the list.
Btw, have you tried using --no-half-vae? It helped me get rid of black pictures when generating with NovelAI, Anything, etc.
I wouldn't recommend using the Anything VAE since it would cause some images to be black. Most of the time they would be fine but once every, say, 50-60 images I would get a completely black square.
Switching to vae-ft-ema-560000-ema-pruned.ckpt fixed the issue for me.
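For anyone hitting the same black-square issue outside the webui: --no-half-vae basically keeps the VAE in full precision so its decode doesn't overflow to NaN in fp16. A rough diffusers equivalent (recent version assumed, model id is a placeholder) is to keep only the VAE in fp32 and decode the latents yourself:

    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "path/to/some-anime-model", torch_dtype=torch.float16  # placeholder
    ).to("cuda")

    # keep only the VAE in fp32; the unet and text encoder stay fp16
    pipe.vae = pipe.vae.to(torch.float32)

    # ask the pipeline for raw latents, then decode them in full precision
    latents = pipe("1girl, night sky", output_type="latent").images
    with torch.no_grad():
        decoded = pipe.vae.decode(
            latents.to(torch.float32) / pipe.vae.config.scaling_factor
        ).sample
    image = pipe.image_processor.postprocess(decoded, output_type="pil")[0]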
Interesting. I made the exact same model, with an identical hash, but with absolutely identical settings and prompt I get nothing even close to the art in the post.
Also, there seems to be some problem with the colors.
I can't get the result you're getting. Even though the hashes match, the models might still differ because of the weird way hashes are calculated. In my experience the hash stays the same no matter what weight you use with the add difference method. So either one of us might have made a mistake there (wouldn't be the first time I messed up writing instructions for a mix). You can download the model I'm using from https://mega.nz/folder/XMUzWIAL#i52o1QYOx7j1neujUJzfWw as mix__6.
I'm also using the latent upscaler for the highres fix. You probably won't get the exact same image because of my --xformers but you should get close.
As for the colors, I think I used the vae-ft-mse-840000 one. As redjoe said, it should fix your color problems.
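For reference, the "add difference" merge mentioned here computes A + weight * (B - C), with C usually being the common base model. And if I remember right, the old webui hash only sampled a small chunk of the checkpoint file, which would explain why different merge weights can end up with the same hash. A bare-bones sketch of the merge itself (placeholder filenames):

    import torch

    # "add difference" merge: result = A + weight * (B - C)
    a = torch.load("model_a.ckpt", map_location="cpu")["state_dict"]
    b = torch.load("model_b.ckpt", map_location="cpu")["state_dict"]
    c = torch.load("model_c.ckpt", map_location="cpu")["state_dict"]

    weight = 1.0
    merged = {}
    for key, tensor in a.items():
        if key in b and key in c and torch.is_floating_point(tensor):
            merged[key] = tensor + weight * (b[key] - c[key])
        else:
            merged[key] = tensor

    torch.save({"state_dict": merged}, "mix.ckpt")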
I've recently seen some models mention a 'needed' use of different clip skip values and I wanted to know more about them. Happened to come across this example, thanks!
Glad I could help. If you're going to be working with CLIP skip a lot, I recommend adding it to your main interface. Go into WebUI Settings > User Interface > Quicksettings list and add CLIP_stop_at_last_layers (put a comma between each entry there). Refresh and it'll be up top next to your model selector.
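And if you generate with diffusers instead of the webui, recent versions expose the same thing as a clip_skip argument at call time; the counting convention differs from the webui's, so double-check, but clip_skip=1 (use the penultimate CLIP layer) should correspond to the webui's clip skip 2. The model id below is a placeholder:

    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "path/to/some-anime-model", torch_dtype=torch.float16  # placeholder
    ).to("cuda")

    # clip_skip=1 uses the second-to-last CLIP layer, roughly what the webui
    # calls "Clip skip: 2" (CLIP_stop_at_last_layers = 2)
    image = pipe("1girl, fireworks", clip_skip=1).images[0]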
Looks great! Would you be willing to share the embedding? How many images did you use to train it? I want to train a Yoimiya embedding, and I selected around 70 images. I wonder if that's enough.
Interesting. I made the exact same model, with an identical hash, but with absolutely identical settings and prompt I get nothing even close to the art in the post.
Also, there seems to be some problem with the colors.
This is a problem known as "bruising" (the little purple spots here and there). To fix it, go to WebUI Settings > Stable Diffusion > SD VAE. Set it to anything-v3.0.vae or nai.vae (I'm pretty sure these are identical). I have no idea if this will make your image identical to AA's, but it will fix the bruising and desaturation.
... I don't like ads either, but it's technically not against the rules as far as I know. (I agree we could add large watermarks to the prohibited content list.)
It's not against the rules, but I'd agree it probably should be.
Posts like the two from this user so far just seem like advertisements for their Patreon, and people don't come here to look at stuff that only serves to promote something like that.
I'll probably get downvoted for saying this, but I do have a fanbox where people support me. I'm sure there are some links to it even on this site, and yet I don't really feel like a scumbag T_T Calling out aggressive advertisement like in this post is fine. I don't like ads either, but it's technically not against the rules as far as I know. (I agree we could add large watermarks to the prohibited content list.)
I meant the model. You tagged this one as anydream when I'm fairly certain you meant to tag anything
Jesus, I didn't even catch that. Indeed, I did mean anything, because this image (most of it) was made in the same batch as the others I uploaded a few days ago! Thanks for pointing that out
I was about to say that you should be able to check the metadata to see the tags used for making it, but it seems I forgot to include the tags on the non-AI-upscaled version. I'm certain I had no extra hands in there, but oh well.
Okay, I just checked and I did tag no extra hands, but didn't emphasize it, so maybe that's why.
I suspect that you have mistagged the model you used on this one.
Oh come on, the floating plates may be unrealistic, but at least they are well made, with no anatomy errors or anything like that. By that logic I could say that a girl flying is unrealistic and that alone makes it a bad image; come on, that's stupid.
If it is well done, without serious anatomical errors, it deserves a second chance.
There's a difference between disinterest & poor quality.