• FLeX@lemmy.world · 1 year ago

    Indispensable, nothing less. lmao

    Have fun when they decide to multiply the price x10 and you’re too dependent to find an alternative, or when it becomes stupid or malevolent 👍

    • warbond@lemmy.world · 1 year ago

      Sorry, I’m not sure I understand how that makes it useless. I get the feeling that you just want to feel smug, so if it makes you feel better, go ahead, I guess.

      • FLeX@lemmy.world · 1 year ago

        Because it’s too fragile and not ready to be used at scale without causing massive damage.

        Not useless for now (even if I’d like to know more about the domains where it’s really “indispensable”), but as useless as a drill with a dead battery the day they decide to cut it off.

        I don’t find it future-proof, as impressive as some results are.

        • DocRekd@lemm.ee · 1 year ago

          Nowadays LLMs can be run on consumer hardware, so the “dead battery” analogy falls short here too.

          • FLeX@lemmy.world · 1 year ago

            With the same efficiency? I’d be interested in an example.

            Why is everyone using these crappy SaaS offerings, then?

            • AdrianTheFrog@lemmy.world · 1 year ago (edited)

              Llama 2 and its derivatives, mostly. A simple local UI is available here.

              Not as good as ChatGPT 3.5 in my experience. It just kinda falls apart on anything too complex, and is a lot more likely to get things wrong.

              I tried it out using the ‘Open-Orca/OpenOrcaxOpenChat-Preview2-13B’ 4-bit 32g model. It’s surprisingly fast at generating; it seems significantly faster than ChatGPT on my 3060 (with ExLlama).

              There are also some models tuned specifically to actually answer your requests instead of the ‘As an AI language model’ kind of stuff.

              Edit: just tried a newer model and it’s a lot better (dolphin-2.1-mistral-7b).
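
              If you’d rather skip the web UI entirely, something like llama-cpp-python can run a quantized model in a few lines. A minimal sketch, assuming you’ve already downloaded a GGUF build of dolphin-2.1-mistral-7b (the file name and prompt format below are placeholders, not from this thread):

              ```python
              # Minimal local-inference sketch with llama-cpp-python.
              # The model file name is hypothetical; use whatever GGUF you downloaded.
              from llama_cpp import Llama

              llm = Llama(
                  model_path="./dolphin-2.1-mistral-7b.Q4_K_M.gguf",
                  n_gpu_layers=-1,  # offload all layers to the GPU (e.g. a 3060) if VRAM allows
                  n_ctx=4096,       # context window size
              )

              out = llm(
                  "USER: Summarize what 4-bit quantization does.\nASSISTANT:",
                  max_tokens=200,
                  stop=["USER:"],  # stop before the model invents the next turn
              )
              print(out["choices"][0]["text"])
              ```

              As I understand it, ExLlama with GPTQ weights (like the setup above the edit) is usually faster on NVIDIA cards; llama.cpp is just the shortest thing to show in a few lines.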

      • FLeX@lemmy.world · 1 year ago

        And you sound like the people who thought crypto would replace credit cards ;)