My rack is finished for now (because I’m out of money).

Last time I posted I had some jank cables going through the rack, and now we’re using patch panels with color-coordinated cables!

But as is tradition, I’m thinking about upgrades, and I’m looking at that 1U filler panel. A mini PC with a 5060 Ti 16GB or maybe a 5070 12GB would be pretty sick for moving my AI slop generation into my tiny rack.

I’m also thinking about the Pi cluster at the top. Currently that’s running a Kubernetes cluster that I’m trying to learn on. They’re all Pi 4 4GB boards, so I was going to start replacing them with Pi 5 8/16GB. Would those be better price/performance for mostly coding tasks? Or maybe a Discord bot for shitposting.

Thoughts? MiniPC recs? Wanna bully me for using AI? Please do!

  • Korhaka@sopuli.xyz · 10 hours ago

    Ohh nice, I want it. Don’t really know what I would use all of it for, but I want it (but don’t want to pay for it).

    Currently I’ve been thinking of getting an N150 mini PC and setting up Proxmox with a few VMs: at the very least Pi-hole, somewhere to dump some backups, and a web server for a few projects.

  • hendrik@palaver.p3x.de · 18 hours ago

    Well, I always advocate for using the stuff you have. I don’t think a Discord bot needs four new RasPi 5s; that’s likely to run on a single RasPi 3. And as long as they’re sitting idle, it doesn’t really matter which model number they have… So go ahead and put something on your hardware, and buy new ones once you’ve maxed out your current setup.

    I’m not educated on Bazzite. Maybe tools like Distrobox or other container solutions can help with running AI workloads on the gaming rig. It’s likely easier to run a dedicated AI server, but I started learning about quantization and tested some models on my main computer with the help of ollama, KoboldCPP and some random Docker/Podman containers. I’m not saying this is the preferable solution, but it’s definitely enough to get started with AI. And you can always connect the computers within your local network, write some server applications and have them hook into ollama’s API; it doesn’t really matter whether that runs on your gaming PC or a server (as long as the computer in question is turned on…)
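    For example, a minimal sketch of what I mean by hooking into ollama’s API from another machine on the LAN (the hostname and model name are placeholders, and this assumes the API port is reachable from your network):

    ```python
    import requests

    # Placeholder address of whichever machine runs Ollama on the local network.
    OLLAMA_URL = "http://gaming-pc.lan:11434"
    MODEL = "llama3.2:3b"  # any model you've already pulled with `ollama pull`

    resp = requests.post(
        f"{OLLAMA_URL}/api/generate",
        json={"model": MODEL, "prompt": "Suggest one project for a RasPi cluster.", "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    print(resp.json()["response"])
    ```

    The application itself doesn’t care whether that URL points at the gaming PC or a dedicated server.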

    • nagaram@startrek.website (OP) · 14 hours ago

      Ollama and all that runs on it; it’s just the firewall rules and opening it up to my network that are the issue.

      I cannot get ufw, iptables, or anything like that running on it, so I usually just SSH into the PC and work from the CLI, which is mostly fine.

      I want to use Open WebUI so I can feed it notes and books as context, but I need the API, which isn’t open on my network.
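      For reference, this is roughly the check that keeps failing for me; a rough sketch, assuming Ollama is told to listen on all interfaces (OLLAMA_HOST=0.0.0.0 instead of the default 127.0.0.1) and with a made-up IP standing in for the gaming PC:

      ```python
      import requests

      # Assumes Ollama was started with OLLAMA_HOST=0.0.0.0 so it binds to the LAN
      # interface, and that TCP 11434 is allowed through whatever firewall the box runs.
      PC_IP = "192.168.1.50"  # placeholder for the gaming PC's address

      r = requests.get(f"http://{PC_IP}:11434/api/tags", timeout=5)
      r.raise_for_status()
      print("Reachable. Models:", [m["name"] for m in r.json()["models"]])
      ```

      Once that returns instead of timing out, Open WebUI should just need http://<PC_IP>:11434 as its Ollama base URL.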

  • _//(0)(0)\\_@lemmy.world · 1 day ago

    Looking good! Funny that I happened across this post while working on mine as well. As I type this I’m playing with a little 1.5” transparent OLED that will poke out of the rack beside each Pi, scrolling various info (CPU load/temp, IP, LAN traffic, node role, etc.).

      • _//(0)(0)\\_@lemmy.world · 7 hours ago

        Waveshare 1.51” transparent OLED. Comes with the driver board, ribbon & jumpers. If you type it into Amazon it’s the only one that pops up; just make sure it says transparent. Plugs into the GPIO of my Pi 5s. The Amazon listing has a user guide you can download, so make sure to do that; I was having trouble figuring it out until I saw that thing. Runs off a Python script, but once I get it behaving like I want I’ll add it to systemd so it starts on boot.
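        The script itself is nothing fancy; a rough sketch of the kind of stats loop I mean (the actual Waveshare draw call is stubbed out with print(), and the psutil sensor name is my assumption for the Pi):

        ```python
        import socket
        import time

        import psutil  # pip install psutil

        def gather_stats() -> str:
            cpu = psutil.cpu_percent(interval=1)  # CPU load over the last second
            temps = psutil.sensors_temperatures()
            temp = temps["cpu_thermal"][0].current if "cpu_thermal" in temps else 0.0
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            s.connect(("8.8.8.8", 80))  # no packets sent; just picks the LAN-facing interface
            ip = s.getsockname()[0]
            s.close()
            return f"CPU {cpu:.0f}%  {temp:.0f}C  {ip}"

        while True:
            print(gather_stats())  # replace with the OLED driver's draw/scroll call
            time.sleep(2)
        ```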

        Imma dummy so I used ChatGPT for most of it, full …ahem… transparency. 🤷🏻‍♂️

        I’m modeling a little bracket in spaceclaim today & will probably print it in transparent PETG. I’ll post a pic when I’m done!

  • ☂️-@lemmy.ml · 22 hours ago

    for your use case, i’d get an external gpu to plug into one of these juicy thinkstations right there. bonus for the modularity of having an actual gpu instead of relying on whatever crappy laptop gpu mini pc manufacturers put in there.

    you could probably virtualize a sick gaming setup with this rig too. stream it to your phone/laptop.

    nice setup btw.

  • thejml@sh.itjust.works · 1 day ago

    Honestly, if you are delving into Kubernetes, just add some more of those 1L PCs in there. I tend to find them on eBay cheaper than Pis. Last year I snagged 4x 1L Dells with 16GB RAM for $250 shipped. I swapped some RAM around, added some new SSDs, and now have 3x Kube masters, 3x Kube worker nodes, and a few VMs running as a Proxmox cluster across 3 of the 1Ls with 32GB and a 512GB SSD each, and it’s been great. The other one became my wife’s new desktop.

    Big plus: there are so many more x86_64 container images out there compared to Pi-compatible ARM ones.

  • TropicalDingdong@lemmy.world · 1 day ago

    This is so pretty 😍🤩💦!!

    I’ve been considering a micro rack to support the journey, but primarily to house old laptop chassis as I convert them into Proxmox resources.

    Any thoughts or comments on your choice of this rack?

    • nagaram@startrek.website (OP) · 1 day ago

      Not really; not a lot of thought went into the rack choice. I wanted something smaller and more powerful than the several OptiPlexes I had.

      I also decided I didn’t want storage to happen here anymore, because I am stupid and only knew how to pass through disks for TrueNAS. So I had 4 TrueNAS servers on my network and I hated it.

      This was just what I wanted at a price I was good with, like $120. There’s a 3D-printable version, but I wasn’t interested in that. I do want to 3D print rack mounts, though, and I want to make my own custom ones for the Pis to save space.

      But that setup is way cheaper if you have a printer and some patience.

  • Diplomjodler@lemmy.world · 1 day ago

    I’m afraid I’m going to have to deduct one style point for the misalignment of the labels on the mini PCs.

    • nagaram@startrek.website (OP) · 1 day ago

      That’s fair and justified. I have the label maker right now in my hands. I can fix this at any moment and yet I choose not to.

      I’m the man feeding orphans to the orphan-crushing machine. I can stop this at any moment.

  • TexasDrunk@lemmy.world · 1 day ago

    I didn’t even know these sorts of mini racks existed. Now I’m going to have to get one for all my half-sized preamps, if they’ll fit. That would solve like half the problems with my studio room and may help bring back some of my spark for making music.

    I have no recs. Just want to say I’m so excited to see this. I can probably build an audio patch panel.

  • ZeDoTelhado@lemmy.world · 1 day ago

    I have a question about the AI usage on this: how do you do it? Every time I see AI usage mentioned, some sort of 4090 or 5090 comes up, so I am curious what kind of AI usage you can do here.

    • teslasdisciple@lemmy.ca · 1 day ago

      I’m running AI on an old 1080 Ti. You can run AI on almost anything, but the less memory you have, the smaller (i.e. dumber) your models will have to be.

      As for the “how”, I use Ollama and Open WebUI. It’s pretty easy to set up.

      • kata1yst@sh.itjust.works · 3 hours ago

        Similar setup here with a 7900 XTX; it works great, and the 20-30B models are honestly pretty good these days. Magistral, Qwen 3 Coder, and GPT-OSS are most of what I use.

      • ZeDoTelhado@lemmy.world · 1 day ago

        I tried a couple of times with Jen ai and local llama, but somehow it doesn’t work that well for me.

        But at the same time I have a 9070 XT, so not exactly optimal.

    • chaospatterns@lemmy.world · 1 day ago

      Your options are to run smaller models or wait. llama3.2:3b fits in my 1080 Ti’s VRAM and is sufficiently fast. Bigger models will get split between VRAM and system RAM and run slower, but they’ll work.
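      As a very rough rule of thumb (my own back-of-the-envelope numbers, nothing official): a Q4-quantized model needs roughly half a byte per parameter plus some overhead for context, so you can sanity-check what fits before pulling it:

      ```python
      def roughly_fits(params_billion: float, vram_gb: float,
                       bytes_per_param: float = 0.55, overhead_gb: float = 1.5) -> bool:
          """Back-of-the-envelope only: ~0.5-0.6 bytes/parameter for Q4-ish quants,
          plus KV cache/overhead. Real usage depends on the quant and context length."""
          return params_billion * bytes_per_param + overhead_gb <= vram_gb

      print(roughly_fits(3, 11))   # llama3.2:3b on an 11 GB 1080 Ti -> True
      print(roughly_fits(70, 11))  # a 70B model -> False, so it spills into system RAM
      ```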

      Not all models are gen-AI-style LLMs. I also run speech-to-text models on my GPU for my smart home.

    • nagaram@startrek.website (OP) · 1 day ago

      With an RTX 3060 12GB, I have been perfectly happy with the quality and speed of the responses. It’s much slower than my 5060 Ti, which I think is the sweet spot for text-based LLM tasks. A larger context window, provided by more VRAM or a web-based AI, is cool and useful, but I haven’t found the need for that yet in my use case.

      As you may have guessed, I can’t fit a 3060 in this rack. That’s in a different server that houses my NAS. I have done AI on my 2018 EPYC server CPU and it’s just not usable. Even with 109GB of RAM, not usable. Even clustered, I wouldn’t try running anything on these machines; they are for Docker containers and Minecraft servers. Jeff Geerling probably has a video on trying to run an AI on a bunch of Raspberry Pis. I just saw his video using Ryzen AI Strix boards, and that was ass compared to my 3060.

      But for my use case, I am just asking AI to generate simple scripts based on manuals I feed it, or some sort of writing task. I either get it to take my notes on a topic and make an outline that makes sense, which I then fill in, or I feed it finished writing and ask for grammar or tone fixes. That’s fucking it, and it boggles my mind that anyone is doing anything more intensive than that. I am not training anything, and 12GB of VRAM is plenty if I wanna feed it like 10-100 pages of context. Would it be better with a 4090? Probably, but for my uses I haven’t noticed a difference in quality between my local LLM and the web-based stuff.

      • ZeDoTelhado@lemmy.world · 1 day ago

        So it’s not on this rack. OK, because for a second I was thinking you were somehow able to run AI tasks on some sort of small cluster.

        Nowadays I have a 9070 XT in my system. I’ve just dabbled in this, but so far I haven’t been that successful. Maybe I will read more into it to understand it better.

        • nagaram@startrek.website (OP) · 1 day ago

          Ollama + Gemma/DeepSeek is a great start. I have only run AI on my AMD 6600 XT, and that wasn’t great; everything I know says AMD is fine for gaming AI tasks these days, but not really for LLM or gen AI tasks.

          An RTX 3060 12GB is the easiest and best self-hosted option in my opinion: new for <$300, and used for even less. However, I was running a GeForce 1660 Ti for a while, and that’s <$100.

    • nagaram@startrek.website (OP) · 1 day ago

      I was thinking about that now that I have Mac Minis on the mind. I might even just set a Mac Mini on top next to the modem.

    • nagaram@startrek.website (OP) · 1 day ago

      Oh, and my home office setup uses Tiny-in-One monitors, so I configured these by plugging them into my monitor, which was sick.

      I’m a huge fan of this all in one idea that is upgradable.

  • brucethemoose@lemmy.world · 1 day ago

    If you can swing $2K, get one of the new mini PCs with an AMD 395 and 64GB+ RAM (ideally 128GB).

    They’re tiny, low power, and the absolute best way to run the new MoEs like Qwen3 or GLM Air for coding. TBH they would blow a 5060 Ti out of the water, as having a ~100GB VRAM pool is a total game changer.

    I would kill for one on an ITX mobo with an x8 slot.

      • MalReynolds@piefed.social · 1 day ago

        Pretty sure that’s an x4 PCIe slot (admittedly PCIe 5.0 x4, but not many video cards speak PCIe 5.0). I would totally trade a USB4 port for an x8, but these laptop chips are pretty constrained lane-wise.

        • brucethemoose@lemmy.world · 24 hours ago

          It’s PCIe 4.0 :(

          > but these laptop chips are pretty constrained lane-wise

          Indeed. I read Strix Halo only has 16 PCIe 4.0 lanes in addition to its USB4, which is reasonable given it isn’t supposed to be paired with discrete graphics. But I’d happily trade an NVMe slot (still leaving one) for an x8.

          One of the links to a CCD could theoretically be wired to a GPU, right? Kinda like how EPYC can switch its I/O between Infinity Fabric for 2P servers and extra PCIe in 1P configurations. But I doubt we’ll ever see such a product.

          • MalReynolds@piefed.social · 23 hours ago

            > It’s PCIe 4.0 :(

            Boo! Silly me for thinking DDR5 implied PCIe 5.0. What a shame.

            Feels like they’re testing the waters with Halo; hopefully a loud “water’s great, dive in” signal gets through and we get something a bit more fit for desktop use, maybe with more memory (and bandwidth) next gen. Still, gotta love the power usage; it makes for one hell of a NAS / AI inference server (and inference isn’t that fussy about PCIe bandwidth; hell, an eGPU works fine as long as the model/expert fits in VRAM).

      • brucethemoose@lemmy.world · 1 day ago

        Nah, unfortunately it is only PCIe 4.0 x4. That’s a bit slim for a dGPU, especially in the future :(

    • nagaram@startrek.website (OP) · 1 day ago

      I think I’m going to have a harder time fitting a Threadripper in my 10-inch rack than getting any GPU in there.

      • Cocodapuf@lemmy.world · 9 hours ago

        > I think I’m going to have a harder time fitting a Threadripper in my 10-inch rack than getting any GPU in there.

        Well, you could always use a closed-loop CPU cooler. (Not necessarily that one.)

        With the radiator hanging out in back, this shouldn’t need much height.

  • tofu@lemmy.nocturnal.garden · 1 day ago

    Since you seem to be looking for problems to solve with new hardware, do you have a NAS already? Could be tight in 1U but maybe you can figure something out.

    • nagaram@startrek.website (OP) · 1 day ago

      I do already have a NAS. It’s in another box in my office.

      I was considering replacing the Pis with a BOD and passing that through to one of my boxes via USB and virtualizing something. I compromised by putting 2TB SATA SSDs in each box to use for database stuff and then backing that up to the spinning rust in the other room.

      How do I do that? Good question. I take suggestions.