Stable Diffusion XL (SDXL) is the latest image generation model from Stability AI, tailored toward more photorealistic outputs with more detailed imagery and composition than previous SD models, including SD 2.

 
We provide a reference script for sampling, but there is also a diffusers integration, which we expect to see more active community development around.
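To make that concrete, here is a minimal text-to-image sketch of the diffusers integration. The pipeline class, the stabilityai/stable-diffusion-xl-base-1.0 model ID, and the use_safetensors flag follow the public SDXL release; the dtype, device, prompt, and output filename are illustrative assumptions.

```python
# Minimal SDXL text-to-image sketch with diffusers (assumes the diffusers,
# transformers, and torch packages are installed and a CUDA GPU is available).
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # official SDXL 1.0 base weights
    torch_dtype=torch.float16,
    use_safetensors=True,
)
pipe.to("cuda")

# Example prompt from the diffusers docs, quoted later in this post.
prompt = "a portrait of an old warrior chief"
image = pipe(prompt=prompt).images[0]
image.save("warrior_chief.png")
```

Swapping in a different prompt or checkpoint ID is the usual first experiment; the rest of the call stays the same.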

This post has a link to my install guide for three of the most popular repos of Stable Diffusion (SD-WebUI, LStein, Basujindal). Artist-inspired styles work well, for example "art in the style of Amanda Sage" at 40 steps. A typical high-res setup: Sampler: DPM++ 2S a; CFG scale range: 5-9; Hires sampler: DPM++ SDE Karras; Hires upscaler: ESRGAN_4x; plus a refiner switch point (a worked refiner example appears later in this post). You can try SDXL 1.0 for yourself at the links below.

A detailed prompt narrows down the sampling space, which is why specific prompts tend to produce better results. On Clipdrop, first describe what you want, and Clipdrop Stable Diffusion XL will generate four pictures for you. ComfyUI added support for the SDXL 0.9 model two weeks ago, but ComfyUI is not easy to use.

It's worth noting that in order to run Stable Diffusion on your PC, you need a compatible GPU installed. The model is also quite large, so make sure you have enough storage space on your device: the .ckpt file contains the entire model and is typically several GB in size. Place model files in your installation's model directory (e.g. C:\stable-diffusion-ui\models\stable-diffusion).

SDXL 0.9 impresses with enhanced detailing in rendering (not just higher resolution, but overall sharpness), with especially noticeable quality of hair, and SDXL 1.0 is a text-to-image model that the company describes as its "most advanced" release to date. But if SDXL wants an 11-fingered hand, the refiner gives up.

Stable Diffusion models are general text-to-image diffusion models and therefore mirror biases and (mis)conceptions that are present in their training data. Stable Diffusion creates an image by starting with a canvas full of noise and denoising it gradually to reach the final output. You can create your own model with a unique style if you want: fine-tuning allows you to train SDXL on your own data. The license lets you modify it, build things with it, and use it commercially. I created a trailer for a lake-monster movie with Midjourney, Stable Diffusion, and other AI tools; hope you all find these useful.

Below are some of the key features: a user-friendly interface that is easy to use right in the browser, and support for various image generation options like size, amount, and mode. Anyway, those are my initial impressions! I hope it maintains some compatibility with SD 2.

Quick tip for beginners: you can change the default settings of Stable Diffusion WebUI (AUTOMATIC1111) in the ui-config.json file. It gives me the exact same output as the regular model.

Stable Diffusion is a latent diffusion model developed by researchers from the Machine Vision and Learning group at LMU Munich, and it is the latest deep learning model to generate brilliant, eye-catching art from simple input text. Having the Stable Diffusion model and even Automatic's Web UI available as open source is an important step to democratising access to state-of-the-art AI tools.

I like how you have put a different prompt into your upscaler and ControlNet than the main prompt: I think this could help to stop getting random heads from appearing in tiled upscales. With a ControlNet model, you can provide an additional control image to condition and control Stable Diffusion generation; this checkpoint corresponds to the ControlNet conditioned on Image Segmentation.
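Since the paragraph above mentions a ControlNet checkpoint conditioned on Image Segmentation, here is a hedged sketch of that workflow in diffusers. The lllyasviel/sd-controlnet-seg and runwayml/stable-diffusion-v1-5 repo IDs are the commonly used public ones; the control-image path and prompt are placeholders.

```python
# Hedged sketch: condition Stable Diffusion 1.5 on a segmentation map via ControlNet.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-seg", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# The control image is a color-coded segmentation map (for example, one made
# with an ADE20K-trained segmenter); here it is simply loaded from disk.
seg_map = load_image("segmentation_map.png")

image = pipe("a modern house by a lake", image=seg_map).images[0]
image.save("controlnet_seg.png")
```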
With 256x256 it was on average 14 s/iteration, so much more reasonable, but still sluggish.

Stable Diffusion XL (SDXL) is a powerful text-to-image generation model that iterates on the previous Stable Diffusion models in three key ways: the UNet is 3x larger, and SDXL combines a second text encoder (OpenCLIP ViT-bigG/14) with the original text encoder to significantly increase the number of parameters. Click on the Dream button once you have given your input to create the image.

On Wednesday, Stability AI released Stable Diffusion XL 1.0. In the thriving world of AI image generators, patience is apparently an elusive virtue, but SDXL 1.0 has proven to generate the highest-quality and most preferred images compared to other publicly available models. Stable Diffusion exhibits proficiency in producing high-quality images while also demonstrating noteworthy speed and efficiency, thereby increasing the accessibility of AI-generated art creation.

You can also run it in the cloud; the only caveat is that you need a Colab Pro account, since the free version of Colab does not offer enough VRAM. There is also a GitHub project that lets you use Stable Diffusion on your own computer. With Git on your computer, use it to copy across the setup files for the Stable Diffusion WebUI. To reduce VRAM pressure in the WebUI, a commonly used launch setting is "set COMMANDLINE_ARGS=--medvram --no-half-vae --opt-sdp-attention". High-resolution inpainting is supported as well; this capability is enabled when the model is applied in a convolutional fashion.

Thanks to a generous compute donation from Stability AI and support from LAION, the original model was trained on 512x512 images from a subset of the LAION-5B database. Stability AI released the pre-trained model weights for Stable Diffusion, a text-to-image AI model, to the general public, and has since shipped releases that add image-to-image generation and other capabilities. NAI is a model created by the company NovelAI, modifying the Stable Diffusion architecture and training method.

This version of Stable Diffusion creates a server on your local PC that is accessible via its own IP address, but only if you connect through the correct port: 7860. (If generation suddenly speeds up, it may simply be that your Chrome crashed, freeing its VRAM.) This is just a comparison of the current state of SDXL 1.0 with the current state of SD 1.5, and that's already after checking the box in Settings for fast loading; I've just been using Clipdrop for SDXL and non-XL models for my local generations.

Download the zip file and use it as your own personal cheat-sheet, completely offline. Using a model is an easy way to achieve a certain style.

The structure of the prompt matters. In general, the best Stable Diffusion prompts will have this form: "A [type of picture] of a [main subject], [style cues]*". For example, use a primary prompt like "a landscape photo of a seaside Mediterranean town".
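That prompt template is easy to encode as a tiny helper. This is purely illustrative; the function and parameter names are my own, not part of any library.

```python
# Builds a prompt of the form "A [type of picture] of a [main subject], [style cues]*".
def build_prompt(picture_type: str, subject: str, *style_cues: str) -> str:
    prompt = f"A {picture_type} of a {subject}"
    if style_cues:
        prompt += ", " + ", ".join(style_cues)
    return prompt

print(build_prompt(
    "landscape photo",
    "seaside Mediterranean town",
    "golden hour", "highly detailed", "sharp focus",
))
# A landscape photo of a seaside Mediterranean town, golden hour, highly detailed, sharp focus
```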
A common failure mode looks like this: "RuntimeError: The size of tensor a (768) must match the size of tensor b (1024) at non-singleton dimension 1." An error of this shape usually indicates that components built for one Stable Diffusion version are being loaded into another, since the model families use different text-embedding widths.

On hardware: I can confirm Stable Diffusion works on the 8 GB model of the RX 570 (Polaris10, gfx803) card; in general, Stable Diffusion requires a 4 GB+ VRAM GPU to run locally. This base model is available for download from the Stable Diffusion Art website.

The key generation parameters: height and width, the height and width of the image in pixels; cfg_scale, how strictly the diffusion process adheres to the prompt text; and an optional seed, so that if a seed is provided, the resulting image is reproducible. Even with good settings, it is common to see extra or missing limbs.

Stable Diffusion is a deep-learning-based text-to-image model. In the context of text-to-image generation, a diffusion model is a generative model that you can use to generate high-quality images from textual descriptions, and a great prompt can go a long way toward getting the best output. Stable Diffusion is a large text-to-image diffusion model trained on billions of images: its training data draws on LAION-5B's 5.85 billion image-text pairs, as well as LAION-High-Resolution, another subset of LAION-5B with 170 million images greater than 1024x1024 resolution (downsampled to 512x512).

ControlNet is a neural network structure to control diffusion models by adding extra conditions; this particular checkpoint corresponds to the ControlNet conditioned on HED Boundary. ControlNet 1.1, the successor to ControlNet 1.0, was released in lllyasviel/ControlNet-v1-1 by Lvmin Zhang. It helps blend styles together!

Building upon the success of the beta release of Stable Diffusion XL in April, SDXL 0.9 runs on consumer hardware but can generate "improved image and composition detail", per Stability AI. Today, Stability AI announced the launch of Stable Diffusion XL 1.0, and SDXL 1.0 + the Automatic1111 Stable Diffusion WebUI is a popular pairing. The solution offers an industry-leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products. I have been using Stable Diffusion UI for a bit now thanks to its easy install and ease of use, since I had no idea what to do or how stuff works. Stable Diffusion is generating a lot of buzz at the moment.

The Stable Diffusion Desktop client is a powerful UI for creating images using Stable Diffusion and models fine-tuned on Stable Diffusion, like SDXL and Stable Diffusion 1.5. To open a terminal on Windows, press the Windows key (it should be on the left of the space bar on your keyboard), wait for the search window to appear, and type cmd. Create an account where the service requires one.

A few scattered notes: certain prompts will probably need to be fed to the 'G' CLIP of the text encoder; in a ComfyUI workflow, the relevant node goes right after the VAE decode node; you can select a standalone .safetensors file as the VAE; the stable-diffusion-2 model is resumed from stable-diffusion-2-base (512-base-ema.ckpt); and the .ckpt file has been converted to 🤗 Diffusers format so both formats are available. After extensive testing, SDXL 1.0 is our fastest API, matching the speed of its predecessor while providing higher-quality image generations at 512x512 resolution. Others are delightfully strange. (I'll see myself out.)

The diffusers integration imports both StableDiffusionXLPipeline and StableDiffusionXLImg2ImgPipeline because SDXL is designed as a two-stage workflow: the base pipeline handles the high-noise portion of sampling and the refiner (an img2img pipeline) finishes the low-noise portion, a technique the authors have termed an ensemble of expert denoisers.
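Here is a hedged sketch of that base-plus-refiner hand-off. The denoising_end and denoising_start parameters and the component sharing follow the documented diffusers SDXL pipelines; the 0.8 switch point is an illustrative stand-in for the "Refiner switch at" setting quoted earlier.

```python
# Hedged sketch of the SDXL "ensemble of expert denoisers" flow in diffusers.
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16, use_safetensors=True,
).to("cuda")
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    text_encoder_2=base.text_encoder_2,  # share components to save VRAM
    vae=base.vae,
    torch_dtype=torch.float16, use_safetensors=True,
).to("cuda")

prompt = "a portrait of an old warrior chief"
switch = 0.8  # fraction of denoising handled by the base model (assumed value)

# The base model runs the high-noise steps and hands off latents, not pixels.
latents = base(prompt=prompt, denoising_end=switch, output_type="latent").images
# The refiner finishes the low-noise steps from those latents.
image = refiner(prompt=prompt, denoising_start=switch, image=latents).images[0]
image.save("refined.png")
```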
With Tiled VAE on (I'm using the one that comes with the multidiffusion-upscaler extension), you should be able to generate 1920x1080 with the base model, both in txt2img and img2img.

On startup, the WebUI logs lines such as "Creating model from config: C:\Users\dalto\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\configs\stable-diffusion\v2-inference.yaml" and "Applying xformers cross attention optimization."

The next version of the prompt-based AI image generator, Stable Diffusion, will produce more photorealistic images and be better at making hands. Stable diffusion technology has emerged as a game-changer in the field of artificial intelligence. To reproduce the issue: start Stable Diffusion; choose a model; input prompts, set the size, and choose the steps (it doesn't matter how many, though the problem may be worse with fewer); CFG scale doesn't matter too much (within limits); then run the generation and look at the output with step-by-step preview on.

Built upon the ideas behind models such as DALL-E 2, Imagen, and LDM, Stable Diffusion is the first architecture in this class which is small enough to run on typical consumer-grade GPUs. Stability AI has officially released the latest version of their flagship image model, Stable Diffusion SDXL 1.0; it stands as the pinnacle of open models for image generation, and this recent upgrade takes image generation to a new level.

Choose your UI: A1111 is the most popular, and InvokeAI is always a good option. Enter a prompt (and, for some hosted tools, a URL) to generate. It has been tried with a base model on an 8 GB M1 Mac. This page can act as an art reference; an example generation used A-Zovya Photoreal [7d3bdbad51]. 🙏 Thanks JeLuF for providing these directions. (I dunno why he didn't just summarize it.)

Compared to previous versions of Stable Diffusion, SDXL leverages a three-times-larger UNet. The SDXL base model performs significantly better than the previous variants, and the model combined with the refinement module achieves the best overall performance. The chart above evaluates user preference for SDXL (with and without refinement) over SDXL 0.9, a follow-up to the Stable Diffusion XL beta, as well as over Stable Diffusion 1.5.

Stability AI is also releasing Stable Video Diffusion, an image-to-video model, for research purposes: SVD was trained to generate 14 frames at resolution 576x1024 given a context frame of the same size. In this newsletter, I often write about AI that's at the research stage, years away from being embedded into everyday products. Experience cutting-edge open-access language models.

To get the SDXL 1.0 base model and a LoRA, head over to the model card page and navigate to the "Files and versions" tab, where you'll want to download both of the files; I have tried putting the base safetensors file in the regular models/Stable-diffusion folder. When a LoRA fails to apply, the traceback runs through the WebUI's Lora extension (extensions-builtin/Lora/lora.py, in load_loras and lora_apply_weights) and ends in the 768-vs-1024 tensor-size mismatch quoted earlier, which points to a LoRA built for a different model family.
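For reference, here is a hedged sketch of loading a LoRA the supported way in diffusers; the LoRA repo ID and filename are placeholders for whatever you downloaded from the "Files and versions" tab. A LoRA trained against SD 1.x will not fit SDXL's layer shapes, which is exactly the mismatch the traceback above reports.

```python
# Hedged sketch: attach a (hypothetical) SDXL LoRA to the base pipeline.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16, use_safetensors=True,
).to("cuda")

# Placeholder repo and filename; use the files from the model card's
# "Files and versions" tab. An SD 1.x LoRA here raises a size-mismatch error.
pipe.load_lora_weights("your-username/your-sdxl-lora", weight_name="lora.safetensors")

image = pipe("a landscape photo of a seaside Mediterranean town").images[0]
image.save("with_lora.png")
```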
Easy Diffusion is a simple way to download Stable Diffusion and use it on your computer. Stable Diffusion can take an English text as an input, called the "text prompt", and generate images that match the text description. There is also DreamStudio, the official web service for generating images with Stable Diffusion; click Login at the top right of the page to get started. Stable Diffusion was announced on stability.ai six days ago, on August 22nd. Open access alone is not sufficient, though, because the GPU requirements to run these models are still prohibitively expensive for most consumers; for a minimum, we recommend looking at 8-10 GB Nvidia models.

It's a LoRA for noise offset, not quite contrast. The base model seems to be tuned to start from nothing and then work its way up to an image. Combine it with the new specialty upscalers like CountryRoads or Lollypop and I can easily make images of whatever size I want without having to mess with ControlNet or third-party tools, although waiting at least 40 s per generation (ComfyUI, the best performance I've had) is tedious. Expect slight differences in contrast, light, and objects between runs. With SDXL 1.0, images will be generated at 1024x1024 and cropped to 512x512.

Unlike models like DALL-E, Stable Diffusion makes its source code available. Deep learning (DL) is a specialized type of machine learning (ML), which is a subset of artificial intelligence (AI). Stable Diffusion gets an upgrade with SDXL 0.9, which ships under the SDXL 0.9 Research License.

Today, we are excited to release optimizations to Core ML for Stable Diffusion in macOS 13.1 and iOS 16.2, along with StableDiffusion, a Swift package that developers can add to their Xcode projects as a dependency to deploy image generation capabilities in their apps. Diffusion Bee epitomizes one of Apple's most famous slogans: it just works. Fooocus is another option on the UI side.

Beyond images, Stable Audio uses the "latent diffusion" architecture that was first introduced with Stable Diffusion to generate music and sound effects in high quality, and Stable Doodle turns simple sketches into images. Create amazing artworks using artificial intelligence.

I made a long guide called [Insights for Intermediates] - How to craft the images you want with A1111, on Civitai; it serves as a quick reference as to what each artist's style yields. Copy the model file and navigate to the Stable Diffusion folder you created earlier. If you guys do this, you will forever have a leg up against Runway ML!

On prompting: it was just my assumption from discussions that the main positive prompt was for common language such as "beautiful woman walking down the street in the rain, a large city in the background, photographed by PhotographerName", and that POS_L and POS_R would be for detailing such as "hyperdetailed, sharp focus, 8K, UHD". If you need the negative prompt field, click the "Negative" button. A classic example prompt: "An astronaut riding a green horse."
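Pulling this section's knobs together (prompt, negative prompt, size, CFG scale, and seed), here is a hedged sketch of a fully parameterized call; the specific values are illustrative, not recommendations.

```python
# Hedged sketch: one SDXL call with the common generation parameters set.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16, use_safetensors=True,
).to("cuda")

generator = torch.Generator(device="cuda").manual_seed(1)  # fixed seed => reproducible
image = pipe(
    prompt="An astronaut riding a green horse",
    negative_prompt="blurry, low quality, extra limbs",  # what to steer away from
    height=1024, width=1024,   # SDXL's native resolution
    guidance_scale=7.0,        # cfg_scale: adherence to the prompt
    num_inference_steps=25,    # the 25-step default mentioned later in this post
    generator=generator,
).images[0]
image.save("astronaut.png")
```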
Please blow them out of the water!!

This step downloads the Stable Diffusion software (AUTOMATIC1111); learn more about A1111 if it's new to you. Then open up your browser, enter "127.0.0.1:7860" into the address bar, and hit Enter. Alternatively, Stable Diffusion is accessible to everyone through DreamStudio, the official image generation app, and SDXL 1.0 is live on Clipdrop as an online demonstration: an artificial intelligence generating images from a single prompt. #SDXL is currently in beta, and in this video I will show you how to use it on Google Colab for free. The fast-stable-diffusion notebooks cover A1111 + ComfyUI + DreamBooth. With ComfyUI it generates images with no issues, but it's about 5x slower overall than SD 1.5.

Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input. Stable Diffusion XL takes this further: it cultivates autonomous freedom to produce incredible imagery, empowers billions of people to create stunning art within seconds, and is the biggest Stable Diffusion model yet. Image diffusion models learn to denoise images to generate output images. The weights ship as stable-diffusion-xl-base-1.0 and stable-diffusion-xl-refiner-1.0; the refiner is a diffusion model that operates in the same latent space as the Stable Diffusion model, so it can be used in combination with the base. License: CreativeML Open RAIL++-M License. Model type: diffusion-based text-to-image generative model. This is the SDXL running on compute from stability.ai.

LAION-5B is the largest, freely accessible multi-modal dataset that currently exists; Stable Diffusion was trained on 2.3 billion English-captioned images from LAION-5B's full collection of 5.85 billion image-text pairs. SD 2.1 shipped with a fixed NSFW filter which could not be bypassed.

For upscaling, I really like tiled diffusion (tiled VAE): I generate at the model's native resolution and then use hires fix to scale it to whatever size I want. The default we use is 25 steps, which should be enough for generating any kind of image. An example prompt: "Cover art from a 1990s SF paperback, featuring a detailed and realistic illustration."

On the training side, there are ongoing discussions of optimal kohya_ss GUI parameters for Kohya DyLoRA, Kohya LoCon, LyCORIS/LoCon, LyCORIS/LoHa, and standard LoRA. It's the guide that I wished existed when I was no longer a beginner Stable Diffusion user. Our language researchers innovate rapidly and release open models that rank amongst the best in the industry; try Stable Audio and Stable LM as well ("The audio quality is astonishing").

To train a diffusion model, there are two processes: a forward diffusion process to prepare training samples, and a reverse diffusion process to generate the images, as sketched below.
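Here is a minimal sketch of that forward (noising) process, using the standard closed form x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps. The schedule constants are conventional DDPM defaults, not anything specific to SDXL.

```python
# Forward diffusion: produce a noisy training sample x_t from a clean x_0.
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)           # linear noise schedule
alpha_bars = torch.cumprod(1.0 - betas, dim=0)  # cumulative signal fraction

def add_noise(x0: torch.Tensor, t: int):
    """Sample x_t ~ q(x_t | x_0); returns the noisy sample and the noise used."""
    eps = torch.randn_like(x0)
    a_bar = alpha_bars[t]
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * eps
    return x_t, eps

x0 = torch.randn(1, 4, 64, 64)  # stand-in latent "image"
x_t, eps = add_noise(x0, t=500)
# Training teaches the UNet to predict eps from (x_t, t, prompt); the reverse
# process then removes that predicted noise step by step at sampling time.
```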
b) For a sanity check, I would try the LoRA model on a painting/illustration-focused Stable Diffusion model (anime checkpoints work) and see if the face is recognizable; if it is, that is an indication to me that the LoRA is trained "enough" and the concept should be transferable for most of my uses. c) Make full use of the sample prompt during training. You will see the exact keyword applied to two classes of images: (1) a portrait and (2) a scene.

I have had much better results using Dreambooth for people pics; Dreambooth is considered more powerful because it fine-tunes the weights of the whole model. Both approaches start with a base model like Stable Diffusion v1.5, and as a rule of thumb, you want anything between 2000 and 4000 steps in total. The SDXL 0.9 base model gives me much(!) better results as well. One published checkpoint was trained for 225,000 steps at resolution 512x512 on "laion-aesthetics v2 5+" with 10% dropping of the text-conditioning to improve classifier-free guidance sampling.

A fun side project: a generator for Stable Diffusion QR codes.

In the diffusers docs, you load the pipeline with from_pretrained(model_id, use_safetensors=True), as in the sketch near the top of this post; the example prompt used there is a portrait of an old warrior chief, but feel free to use your own prompt. Alternatively, you can access Stable Diffusion non-locally via Google Colab; one hosted version runs on Nvidia A40 (Large) GPU hardware. A group of open-source hackers also forked Stable Diffusion on GitHub and optimized the model to run on Apple's M1 chip, enabling images to be generated in about 15 seconds (512x512 pixels, 50 diffusion steps).

Stable Diffusion is a deep learning generative AI model; to understand what Stable Diffusion is, you need to know what deep learning, generative AI, and latent diffusion models are. SDXL 1.0 is an open model representing the next evolutionary step in text-to-image generation models: a significant advancement in image generation capabilities, offering enhanced image composition and face generation that results in stunning visuals and realistic aesthetics. One of the most popular uses of Stable Diffusion is to generate realistic people; one of the prompts was "A robot holding a sign with the text 'I like Stable Diffusion'". Of course no one knows the exact workflow right now (no one that's willing to disclose it, anyway), but using it that way does seem to make it follow the style closely. Let's just generate something: the images below were all produced at 1024x1024.

A typical install flow: enter the commands in PowerShell to set up the environment; clone the web-ui; select "stable-diffusion-v1-4.ckpt"; edit the .yaml config (you only need to do this step the first time, otherwise skip it) and wait for it to process; then launch Stable Diffusion.

Put the refiner in the same folder as the base model, although with the refiner I can't go higher than 1024x1024 in img2img.
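For contrast with the base-to-refiner hand-off sketched earlier, here is a hedged sketch of plain SDXL img2img, where an existing picture is partially re-noised and re-denoised; the input path, prompt, and strength value are illustrative.

```python
# Hedged sketch: plain SDXL img2img starting from an existing image.
import torch
from diffusers import StableDiffusionXLImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16, use_safetensors=True,
).to("cuda")

init_image = load_image("input.png").resize((1024, 1024))
image = pipe(
    prompt="cover art from a 1990s SF paperback, detailed realistic illustration",
    image=init_image,
    strength=0.3,  # fraction of the schedule re-run; higher means bigger changes
).images[0]
image.save("img2img.png")
```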
Note: earlier guides will say your VAE filename has to be the same as your model's, but current UIs let you pick a .safetensors VAE explicitly.

Text-to-image with Stable Diffusion: Stable Diffusion is a text-to-image open-source model that you can use to create images of different styles and content simply by providing a text prompt. It is primarily used to generate detailed images conditioned on text descriptions, though it can also be applied to other tasks such as inpainting, outpainting, and generating image-to-image translations guided by a text prompt. Stable Diffusion's initial training was on low-resolution 256x256 images from LAION-2B-EN, a set of 2.3 billion English-captioned images, and the Stable-Diffusion-v1-5 checkpoint was initialized with the weights of the Stable-Diffusion-v1-2 checkpoint and subsequently fine-tuned for 595k steps at resolution 512x512 on "laion-aesthetics v2 5+" with 10% dropping of the text-conditioning to improve classifier-free guidance sampling. Researchers have also discovered that Stable Diffusion v1 uses internal representations of 3D geometry when generating an image.

Today, Stability AI announced the release of Stable Diffusion XL (SDXL), its latest enterprise-oriented image generation model with excellent photorealism; SDXL is the newest addition to the family of Stable Diffusion models offered to enterprises through Stability AI's API. In the authors' words: "We present SDXL, a latent diffusion model for text-to-image synthesis." With Stable Diffusion XL you can create descriptive images with shorter prompts and generate words within images, and you can keep adding descriptions of what you want, including accessorizing the cats in the pictures. Stable Diffusion XL Online elevates AI art creation to new heights, focusing on high-resolution, detailed imagery. A brand-new model called SDXL is now in the training phase.

There is also a Stable Diffusion Desktop client for Windows, macOS, and Linux, built in Embarcadero Delphi. On weak hardware generation can be slow: default settings (which I'm assuming is 512x512) took about 2-4 min/iteration, so with 50 iterations that is around 2+ hours. I run it following their docs and the sample validation images look great, but I'm struggling to use it outside of the diffusers code.

To get started, download the latest checkpoint for Stable Diffusion from Hugging Face.
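A hedged sketch of doing that download programmatically with huggingface_hub; the filename matches what the SDXL base repo publishes at the time of writing, but check the "Files and versions" tab if it has changed.

```python
# Hedged sketch: fetch the SDXL base checkpoint file from Hugging Face.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="stabilityai/stable-diffusion-xl-base-1.0",
    filename="sd_xl_base_1.0.safetensors",  # assumed current filename in the repo
)
print(path)  # copy this file into models/Stable-diffusion for the WebUI
```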