Autocorrect hates me, I am sorry.

  • High RAM for MoE models, high VRAM for dense models, and the highest GPU memory bandwidth you can get.

    For Stable Diffusion models (ComfyUI), you want high VRAM and bandwidth; diffusion is a GPU-heavy, memory-intensive workload.

    Software/driver support is very important for diffusion models and ComfyUI, so you will have the best experience with Nvidia cards.

    I think realistically you need 80 GB+ of RAM for things like Qwen-Image quants (roughly 40 GB for the model and another 20-40 GB for LoRA adapters in ComfyUI to get output).

    I run a 128 GB AMD Ryzen AI Max+ 395 rig; Qwen-Image takes 5-20 minutes per 720p result in ComfyUI. Batching helps, and reducing iteration counts during prototyping makes a huge difference. I have not tested since the fall, though, and the newer models are more efficient.
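    As a rough sanity check on those numbers, you can estimate a model's weight footprint from its parameter count and quantization width. A minimal sketch (the ~20B parameter count for Qwen-Image is an assumption, not a measured figure, and this ignores activations, caches, and framework overhead):

    ```python
    def weight_gb(params_billion: float, bits_per_weight: int) -> float:
        """Approximate weight memory in GB: params * bits_per_weight / 8.

        Ignores activation memory, attention caches, and framework overhead,
        so treat the result as a floor, not a budget.
        """
        return params_billion * 1e9 * bits_per_weight / 8 / 1e9

    # Assuming Qwen-Image is roughly a 20B-parameter model:
    print(weight_gb(20, 16))  # bf16 weights: 40.0 GB
    print(weight_gb(20, 8))   # 8-bit quant:  20.0 GB
    print(weight_gb(20, 4))   # 4-bit quant:  10.0 GB
    ```

    The bf16 figure lines up with the ~40 GB model share mentioned above; quantized variants shrink the weights proportionally, which is why quants fit on smaller rigs.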


  • Oof lol

    I’m sorry OP, I know your pain.

    I used to have to work with a vendor who sent all our records as CSVs, usually weekly, and each week had different column headers and formats (date formats changed week to week, as did decimal precision; numbers arrived as text, numbers as dates, etc.). It was as if they ran a different manual extract process each time, which forced us to reconcile everything by hand every week.

    I currently have to work with a large vendor and I hate them. The other day I went to open a ticket for an issue, then saw they already had one. I read it: it was my own ticket from a year ago, with zero responses and a “triage” label applied.

    Every ticket I’ve ever filed is still open, and a few just have comments saying they’ll investigate or pointing me at the docs, which are wrong. Never a follow-up.

    I keep telling my boss we need to dump them; we spend more in salary dealing with their shit than the competition costs. Hell, I could build this in-house in a month, but I don’t have time for that.
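    The weekly CSV chaos from the first vendor story is the kind of thing you can partly defend against with a normalization layer that tries every date format seen so far and coerces text-typed numbers. A minimal sketch; the column names, sample data, and format list here are hypothetical, not from any real feed:

    ```python
    import csv
    import io
    from datetime import datetime

    # Every date format the vendor has used so far (hypothetical list);
    # append to this whenever a new week surprises you.
    DATE_FORMATS = ["%Y-%m-%d", "%m/%d/%Y", "%d-%b-%Y"]

    def parse_date(value: str) -> str:
        """Try each known format; return an ISO 8601 date string."""
        for fmt in DATE_FORMATS:
            try:
                return datetime.strptime(value.strip(), fmt).date().isoformat()
            except ValueError:
                continue
        raise ValueError(f"unrecognized date: {value!r}")

    def parse_number(value: str) -> float:
        """Coerce numbers delivered as text, stripping thousands separators."""
        return float(value.replace(",", "").strip())

    # Hypothetical one-row extract with a US-style date and a quoted,
    # comma-grouped amount stored as text.
    raw = 'invoice_date,amount\n07/20/2025,"1,234.50"\n'
    rows = list(csv.DictReader(io.StringIO(raw)))
    normalized = [
        {"invoice_date": parse_date(r["invoice_date"]),
         "amount": parse_number(r["amount"])}
        for r in rows
    ]
    print(normalized)  # [{'invoice_date': '2025-07-20', 'amount': 1234.5}]
    ```

    The point of funneling every field through one parser is that a new weekly surprise becomes a one-line addition to a format list instead of another manual reconciliation pass.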