I’m a Christian and software engineer; I create random graphics projects and websites. Feel free to ask me for help with programming, or about my faith!

  • 1 Post
  • 19 Comments
Joined 1 year ago
Cake day: June 12th, 2023

  • For anyone who is confused: this is exploiting a long-standing soundness bug in the Rust compiler that is still present. The GitHub issue has this comment from the maintainers (a minimal version of the exploit is sketched below the quote):

    we already had a crate published on crates.io before which used this bug to transmute in safe code, see #25860 (comment).

    this issue is a priority to fix for the types team and has been so for years now. there is a reason for why it is not yet fixed. fixing it relies on where-bounds on binders which are blocked on the next-generation trait solver. we are actively working on this and cannot fix the unsoundness before it’s done.
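
    Concretely, here is a minimal version of the exploit (adapted from the well-known example on that issue; the names `UNIT` and `extend` are just illustrative). Coercing the generic fn item to a fn pointer drops the implied bound `'b: 'a`, which lets safe code stretch any lifetime to `'static`:

    ```rust
    // Compiles on stable Rust today because the soundness bug is still open.
    static UNIT: &'static &'static () = &&();

    fn foo<'a, 'b, T>(_: &'a &'b (), v: &'b T) -> &'a T {
        v // fine here: the argument type `&'a &'b ()` implies `'b: 'a`
    }

    fn extend<'a, T>(x: &'a T) -> &'static T {
        // The fn-pointer coercion loses the implied bound, so this
        // unsound signature type-checks without any `unsafe`.
        let f: fn(&'static &'a (), &'a T) -> &'static T = foo;
        f(UNIT, x)
    }

    fn main() {
        let dangling;
        {
            let s = String::from("freed");
            dangling = extend(&s);
        } // `s` is dropped here, but `dangling` still points at it
        println!("{dangling}"); // use-after-free in 100% safe code
    }
    ```

    That lifetime extension is the primitive the crate quoted above built on to offer `transmute` in safe code.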

  • I think the behavior could actually make sense even with real physics, because the vehicle might be designed to mimic what the driver expects rather than expose the raw physics. For example, my car often shuts off the engine when I am not accelerating, because it is a hybrid. So if I don't press the gas pedal, it wouldn't really make sense for the car to move; yet it is designed to artificially engage the engine when no pedal is pressed, to more closely mimic the behavior of non-hybrid cars.

    If most pilots are used to how a vehicle behaves in atmosphere, a spaceship might be designed to mimic that behavior (through weak reverse thrusters or something else) to make it easier for pilots to adjust.

  • The usefulness of ComfyUI is not just making one simple image; it is the ability to completely customize how that image is created.

    For example, I have a workflow that generates a half-resolution preview image, then upscales the latent and puts it through two more sampling nodes. Each of the three sampling nodes has a different prompt, with the focus gradually shifting from content to style (a rough sketch of this kind of graph, in ComfyUI's API format, follows below).

    I have also created a custom upscaling workflow, where the image is first upscaled with a normal upscaler, then re-encoded into latent space and put through just a few sampling steps, then decoded with a tiled VAE decoder (to save VRAM). It produces much better results (more detail and control) than a direct ESRGAN upscale, and the output can even be put through ESRGAN afterward to get a very large image.
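
    For anyone curious what such a graph looks like outside the node canvas, here is a rough sketch of the first workflow above (compressed to two sampling passes for brevity) in ComfyUI's API "prompt" format, queued over the local HTTP endpoint. The checkpoint name, prompt texts, seeds, and resolutions are placeholders, and exact node inputs can differ between ComfyUI versions:

    ```python
    # Sketch: two-pass latent-upscale workflow submitted to a local ComfyUI.
    # All node class types (KSampler, LatentUpscale, VAEDecodeTiled, ...)
    # are stock ComfyUI nodes; filenames and prompts are placeholders.
    import json
    from urllib import request

    graph = {
        # One checkpoint provides the model (0), CLIP (1), and VAE (2).
        "1": {"class_type": "CheckpointLoaderSimple",
              "inputs": {"ckpt_name": "model.safetensors"}},
        # Content-focused prompt for the first (preview) pass.
        "2": {"class_type": "CLIPTextEncode",
              "inputs": {"clip": ["1", 1], "text": "a castle on a cliff"}},
        # Style-focused prompt for the refinement pass.
        "3": {"class_type": "CLIPTextEncode",
              "inputs": {"clip": ["1", 1],
                         "text": "oil painting, thick brushstrokes"}},
        "4": {"class_type": "CLIPTextEncode",
              "inputs": {"clip": ["1", 1], "text": "blurry, low quality"}},
        # Half-resolution latent keeps the preview pass cheap.
        "5": {"class_type": "EmptyLatentImage",
              "inputs": {"width": 512, "height": 512, "batch_size": 1}},
        "6": {"class_type": "KSampler",
              "inputs": {"model": ["1", 0], "positive": ["2", 0],
                         "negative": ["4", 0], "latent_image": ["5", 0],
                         "seed": 42, "steps": 20, "cfg": 7.0,
                         "sampler_name": "euler", "scheduler": "normal",
                         "denoise": 1.0}},
        # Upscale the latent, then resample at partial denoise to add detail.
        "7": {"class_type": "LatentUpscale",
              "inputs": {"samples": ["6", 0], "upscale_method": "nearest-exact",
                         "width": 1024, "height": 1024, "crop": "disabled"}},
        "8": {"class_type": "KSampler",
              "inputs": {"model": ["1", 0], "positive": ["3", 0],
                         "negative": ["4", 0], "latent_image": ["7", 0],
                         "seed": 42, "steps": 12, "cfg": 7.0,
                         "sampler_name": "euler", "scheduler": "normal",
                         "denoise": 0.5}},
        # Tiled decode keeps peak VRAM low on the enlarged latent.
        "9": {"class_type": "VAEDecodeTiled",
              "inputs": {"samples": ["8", 0], "vae": ["1", 2],
                         "tile_size": 512}},
        "10": {"class_type": "SaveImage",
               "inputs": {"images": ["9", 0], "filename_prefix": "two_pass"}},
    }

    # Queue the graph on a locally running ComfyUI instance.
    req = request.Request("http://127.0.0.1:8188/prompt",
                          data=json.dumps({"prompt": graph}).encode("utf-8"),
                          headers={"Content-Type": "application/json"})
    request.urlopen(req)
    ```

    The second sampler's `denoise` is the main knob: low enough to keep the composition from the first pass, high enough to actually add detail at the new resolution.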