Based on DeepSeek Coder, the current SOTA 33B model, which allegedly has GPT-3.5 levels of performance. I’ll be excited to test it once I’ve made exllamav2 quants, and I’ll try to update with my findings on using it as a copilot model.
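
For anyone wanting to try the same, a minimal sketch of the quantization step is below. It assumes exllamav2’s convert.py script with its documented -i/-o/-cf/-b flags; the paths and the 4.0 bits-per-weight target are placeholders, not a recommendation.

```python
# Sketch: producing an exllamav2 quant of a local fp16 model.
# Assumes the exllamav2 repo is cloned alongside this script and
# convert.py is invoked with its documented flags; paths are placeholders.
import subprocess

MODEL_DIR = "models/deepseek-coder-33b-instruct"  # original fp16 weights
WORK_DIR = "work/deepseek-coder-33b"              # scratch dir for measurement passes
OUT_DIR = "quants/deepseek-coder-33b-4.0bpw"      # finished quant lands here

subprocess.run(
    [
        "python", "exllamav2/convert.py",
        "-i", MODEL_DIR,  # input model directory
        "-o", WORK_DIR,   # working directory for intermediate files
        "-cf", OUT_DIR,   # write the compiled quantized model here
        "-b", "4.0",      # target bits per weight
    ],
    check=True,
)
```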

      • noneabove1182@sh.itjust.works (OP) · 10 months ago

        I don’t have a lot of experience with either at this time. I’ve used them here and there for programming questions, but usually I stick to 7B models because I use them for code completion, and I only find that useful if it completes the code before I do lol
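
        A minimal sketch of that kind of local completion call is below. It assumes a llama.cpp-style server exposing an OpenAI-compatible /v1/completions endpoint on localhost; the port, stop sequence, and sampling settings are placeholders, not the one true config.

        ```python
        # Sketch: asking a locally hosted 7B coder model to complete a snippet.
        # Assumes a llama.cpp-style server with an OpenAI-compatible
        # /v1/completions endpoint on localhost:8080; adjust to your setup.
        import requests

        def complete(code_prefix: str, max_tokens: int = 64) -> str:
            resp = requests.post(
                "http://localhost:8080/v1/completions",
                json={
                    "prompt": code_prefix,     # code written so far
                    "max_tokens": max_tokens,  # keep small so it returns fast
                    "temperature": 0.2,        # low temperature suits completion
                    "stop": ["\n\n"],          # stop at the next blank line
                },
                timeout=30,
            )
            resp.raise_for_status()
            return resp.json()["choices"][0]["text"]

        print(complete("def fibonacci(n):\n"))
        ```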

        That said, I’ve had overall good answers from either whenever I’ve decided to pull them out. It feels like WizardCoder should be better since it’s so much newer, but overall it hasn’t been that different. Wish Phind would release an update :(

        • vinnymac@sh.itjust.works · 10 months ago

          That makes sense, thank you for sharing.

          I tend to use Copilot for IDE code completion, and I use the 34B models for automated refactors and code transforms where accuracy is a requirement.