Fairly new to Stable Diffusion (I've messed around with it on and off over the last year or so). What would be the best way to output at the above resolution? It's for a desktop background that will stretch over multiple monitors (two 1280x1024 monitors and one 2560x1080 monitor; I plan to split the image into three and just crop off the top for the side monitors), thus the insane width.
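For reference, the splitting step I have in mind would look roughly like this Pillow sketch (the filename and exact panel layout are just placeholders, assuming the final image comes out at exactly 5120x1080):

```python
from PIL import Image

# Split a 5120x1080 wallpaper into three panels, one per monitor:
# [1280 left][2560 center][1280 right], then trim the side panels to 1024 tall.
full = Image.open("wallpaper_5120x1080.png")  # placeholder filename

left   = full.crop((0,    0, 1280, 1080))   # crop box is (left, upper, right, lower)
center = full.crop((1280, 0, 3840, 1080))
right  = full.crop((3840, 0, 5120, 1080))

# Side monitors are only 1024 tall, so crop 56 px off the top of those panels.
left  = left.crop((0, 56, 1280, 1080))
right = right.crop((0, 56, 1280, 1080))

left.save("left_1280x1024.png")
center.save("center_2560x1080.png")
right.save("right_1280x1024.png")
```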
How should I do this? I've heard SD really prefers your generation resolution to be near the training resolution, so what would be the best generation resolution with the correct aspect ratio? It's an SDXL Turbo model. Is it maybe possible to "extend" an image by sending it to img2img with the same seed and settings as the original, and generating an image that looks like it connects to the original? If so, I could just stitch the pieces together after all my generating and upscaling.
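To make the aspect ratio part concrete, this is the back-of-the-envelope math I've been doing: keep the pixel count near what I understand to be SDXL's ~1024x1024 training budget, match the 5120:1080 aspect ratio, and round to multiples of 64 (treat both of those assumptions as things I'd happily have corrected):

```python
import math

target_w, target_h = 5120, 1080   # final stitched wallpaper size
aspect = target_w / target_h      # ~4.74:1
budget = 1024 * 1024              # assumed SDXL training pixel budget

# Solve w*h = budget with w/h = aspect, then snap both to multiples of 64.
h = math.sqrt(budget / aspect)
w = h * aspect

def round64(x):
    return int(round(x / 64) * 64)

gen_w, gen_h = round64(w), round64(h)
print(gen_w, "x", gen_h)                              # ~2240 x 448 with these numbers
print("factor to reach 5120 wide:", target_w / gen_w)
# Note: rounding to 64 shifts the aspect ratio slightly, so the upscaled result
# would need a small crop to land exactly on 5120x1080.
```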
Then there's the matter of upscaling it. Hires latent upscaling doesn't go high enough in width on its own (not to mention my 7800 XT probably couldn't handle it VRAM-wise anyway, though maybe it could if I switch to --lowvram). Do the AI upscalers in img2img prefer integer scale factors like regular scalers do (e.g. 1080p to 4K looks nicer because it's a clean 4x in pixels), or can I put in, say, 3.65x without much impact on the final image quality, and without making the upscaler do weird things and hallucinate? And what's the realistic lowest resolution from which AI upscalers can still produce a good 1080p result? Would 270p, for example, be too low? One option would be generating at 1280x270 (which is exactly 1/4 of 5120x1080), but that could create weird results since the vertical resolution is so far from the training resolution, and pulling 1080p out of 270p sounds like a tall order for the AI upscalers.
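And for the scale factor question, here's the quick arithmetic I'm weighing the candidates against (2240x448 comes from the sketch above; 2560x540 is just the target halved in both directions):

```python
# Required upscale factor from each candidate generation size to 5120x1080,
# and whether that factor works out to a clean integer.
target_w, target_h = 5120, 1080
candidates = [(1280, 270), (2240, 448), (2560, 540)]

for w, h in candidates:
    fx, fy = target_w / w, target_h / h
    print(f"{w}x{h}: {fx:.2f}x wide, {fy:.2f}x tall, "
          f"integer: {fx.is_integer() and fy.is_integer()}")
```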
I'm using automatic1111 btw