Hey team,

Years ago, my SO gifted me an Alienware Aurora R7.

It has an Intel i7-8700, an Nvidia GTX 1080 (8 GB VRAM), and 16 GB of DDR4 RAM. (What else is relevant to this question?)

My question to you is basically this:

Given that I’m not gaming with it anymore, I want to use it for only two things:

  1. Plex Server
  2. Running random local LLM stuff like Kotaemon (https://github.com/Cinnamon/kotaemon)

Let’s say I have $1000 to throw at a GPU, and I’d like to get 16 GB of VRAM (or 24 GB if possible). I want to install it myself rather than take it into a shop.
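
For context, here’s my back-of-envelope for those numbers (a rough sketch assuming ~4-bit quantized models; the constants are guesses, not gospel):

```python
# Rough VRAM estimate for a locally run, quantized LLM.
# Assumptions: ~0.5 bytes per parameter at 4-bit quantization,
# plus ~20% overhead for the KV cache and context.
def est_vram_gb(params_billion: float,
                bytes_per_param: float = 0.5,
                overhead: float = 1.2) -> float:
    return params_billion * bytes_per_param * overhead

for size in (7, 13, 34):
    print(f"{size}B model: ~{est_vram_gb(size):.1f} GB VRAM")
# -> ~4 GB, ~8 GB, ~20 GB. So 16 GB covers 13B-class models
#    comfortably, and 24 GB opens up 30B-class ones.
```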

What’s a GPU I can buy that will:

  1. fit within my budget?
  2. fit within the chassis?
  3. work with my CPU and motherboard without issues?
  4. cover the needs I’ve detailed (namely Plex transcoding and running ML models)?

I am an absolute noob when it comes to figuring out what hardware to buy (hey, we got us an Alienware sucker here).

So lemmings, help me out! I’d rather not ask ChatGPT.

  • simple@lemm.ee · 24 hours ago

    Not 100% sure about the chassis and compatibility with the rest of your rig, so hopefully someone follows up on this, but in general I would try getting the AMD Radeon RX 9070 XT if you can find it near its list price (~$600). That might be a bit difficult since it just launched, so I would wait until it starts being more available if you can.

    If not that, I’d recommend the 4060 Ti (16 GB). Note that there’s an 8 GB version too; do NOT buy that one. Nvidia is generally better when it comes to running AI, but the new AMD GPUs are pretty good too.

    • damnthefilibuster@lemmy.world (OP) · 23 hours ago

      Thank you for that! I’ll look at the AMD. I thought most ML tools didn’t have out-of-the-box compatibility with AMD, though? Is that no longer the case?

      The 4060 Ti 16 GB version… that sounds good. About $500?

      • simple@lemm.ee · 23 hours ago

        AMD’s compatibility has gotten better over the past year, and the tools that aren’t compatible usually have workarounds. But yeah, Nvidia is probably better if LLMs are important for you.
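
        If you do end up going AMD, here’s a quick sanity check for whether the card is usable (assuming you install a ROCm build of PyTorch; on ROCm, AMD cards show up through the regular torch.cuda API):

        ```python
        import torch  # a ROCm build reports AMD GPUs via torch.cuda

        if torch.cuda.is_available():
            name = torch.cuda.get_device_name(0)
            vram = torch.cuda.get_device_properties(0).total_memory / 1024**3
            print(f"GPU: {name}, {vram:.0f} GB VRAM")
        else:
            print("No GPU visible - check the ROCm/CUDA install")
        ```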

        > The 4060 Ti 16 GB version… that sounds good. About $500?

        More like ~$800-900 unless you’re okay with buying used. The market is pretty darn bad, and it’s gotten SO much worse due to the tariff scare. Like I said, you’re better off waiting a few months if you can.

        • damnthefilibuster@lemmy.world (OP) · 21 hours ago

          I don’t shy away from buying refurb electronics. But is there a problem with buying used GPUs?

          Not looking forward to buying new during this tariff era. So perhaps my local marketplaces might be best…

        • damnthefilibuster@lemmy.world (OP) · 21 hours ago

          Wow, thanks for finding that doc. Yeah, I’ve made it work on my system, but I’d like to use some of the bigger models.
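
          From what I’ve read, partial GPU offload is the usual trick for bigger models: push as many layers as fit into VRAM and run the rest from system RAM. A sketch with llama-cpp-python (the model path and layer count are placeholders to tune for your card):

          ```python
          from llama_cpp import Llama  # pip install llama-cpp-python

          # n_gpu_layers sets how many layers live in VRAM; the rest
          # run from system RAM, so a model bigger than the card's
          # 8 GB still loads, just slower. Raise it until VRAM is full.
          llm = Llama(
              model_path="./model.gguf",  # hypothetical GGUF file
              n_gpu_layers=20,            # -1 would offload every layer
          )

          out = llm("Q: What does Plex do? A:", max_tokens=64)
          print(out["choices"][0]["text"])
          ```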

  • ThrowawayOnLemmy@lemmy.world · 23 hours ago

    These are the exact specs of my old computer.

    I got myself a new one a year ago. My old one is now running Unraid, and runs all my downloading apps and Plex server. Just bumped it up to 80 TB and it’s going strong. Enable GPU support and the 1080 is more than enough for any transcoding that might pop up.
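
    If you want to double-check the encoder before pointing Plex at the GPU, something like this works (assuming an ffmpeg build with NVENC support is installed; Plex ships its own transcoder, but it leans on the same NVENC hardware):

    ```python
    import subprocess

    # List the encoders ffmpeg can see; "h264_nvenc" means the
    # NVIDIA hardware encoder used for GPU transcoding is available.
    out = subprocess.run(
        ["ffmpeg", "-hide_banner", "-encoders"],
        capture_output=True, text=True,
    ).stdout
    print("NVENC found" if "h264_nvenc" in out else "NVENC not found")
    ```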

    LLM stuff is a different question, though; I haven’t begun to dabble with that. But I hear Framework just announced a new desktop that’ll end up being an awesome little box for machine learning.

    • damnthefilibuster@lemmy.world (OP) · 21 hours ago

      Oh yeah, I’ve got an external 140 TB that’s nigh impossible to fill up! I don’t have any downloading apps anymore though. So there’s that. I’ve yet to explore Unraid. I’m stuck with Windows for now because I got on the Windows 11 bleeding edge for some reason I can’t remember. Stepping away means I’d have to wipe or dual boot. And I’m… lazy… haha!

  • Lyra_Lycan@lemmy.blahaj.zone · 20 hours ago

    I had a long-running plan that played out with my GTX 1080. I wanted my server (in my bedroom) to be as quiet as possible, and the (Gigabyte) 1080’s fans, one with a slightly dodgy bearing, weren’t going to cut it. I deshrouded it and ran thermal tests in my main PC. Long story short, once in the server machine it blitzed a local Piper model (<1 s to process voice commands) and Emby transcoding, with a max of 5 GB VRAM used. I know Piper is a far cry from an LLM, but at least you’ll know it’s more than capable of everything else.

    Edit: Here are my notes in case they help anyone lol

  • Acidbath@lemmy.world · 18 hours ago

    For ML and stuff, I’m currently waiting for the release of the 24 GB Intel Battlemage cards.

    Our college has a lot of Alienware Aurora R7s, and I think it’s a pretty okay-sized case. The ones we have are loaded with 4090s, so I assume you should be fine. If you are upgrading anything, make sure your power supply covers the requirements, as in the rough check below.
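
    A very rough way to eyeball that (ballpark TDP figures, and I’m assuming the stock R7 PSU options of 460 W or 850 W; check your own unit’s label):

    ```python
    # Ballpark headroom check; TDPs are rough, not measured draws.
    draws_w = {
        "CPU (i7-8700)": 65,
        "GPU (4060 Ti class)": 165,
        "board/RAM/drives/fans": 100,  # generous lump estimate
    }
    total = sum(draws_w.values())
    for psu_w in (460, 850):  # assumed stock PSU options
        print(f"{psu_w} W PSU: ~{total} W load ({total / psu_w:.0%})")
    ```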

    I’ve never tinkered with Alienware, but two of my friends had issues trying to upgrade CPUs because the bolts and screws were installed super tight. Maybe you won’t have this issue. Best of luck, hope this helps :]

  • Yokozuna@lemmy.world · 18 hours ago

    I have the same GPU and settled on a 4070 Ti Super for my next upgrade. Do what you will with that information and happy gaming!

        • remer@lemmy.world · 23 hours ago

          I think the K80 can only use 12 GB of VRAM for LLM stuff. I’d look into it before buying. I’m no expert.
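
          From what I understand, that’s because the K80 is physically two GPUs on one board with 12 GB each, so frameworks see two separate CUDA devices. A quick way to confirm, assuming PyTorch with CUDA:

          ```python
          import torch

          # A K80 appears as two separate 12 GB CUDA devices, so a
          # single model load only gets 12 GB unless you split it
          # across both (e.g. with a multi-GPU device map).
          for i in range(torch.cuda.device_count()):
              p = torch.cuda.get_device_properties(i)
              print(f"cuda:{i}: {p.name}, {p.total_memory / 1024**3:.0f} GB")
          ```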

        • damnthefilibuster@lemmy.world (OP) · 21 hours ago

          Yeah, I just read up a little on it. Seems it doesn’t have video out? So it only does processing? That’s kinda cool!

          • fuckwit_mcbumcrumble@lemmy.dbzer0.com · 17 hours ago

            It’s only good for data processing and can’t be used for gaming. It’s basically two GTX 760s glued together with a fuck ton of VRAM. Also, you need a server to use it properly: it’s passively cooled, so you can funnel air through it using desk fans, but you cannot use this thing unmodified in a desktop.