I’m still baffled at how well Ollama works on paltry hardware like ARM and small VMs. Give it GPUs and it’s amazing.
The next step should be to encrypt data in transit and at rest, so you could purchase GPU power from the cloud while maintaining client-side encryption throughout. That would bring even more power to the masses: imagine giving Ollama a cloud endpoint for remote GPUs it can compute on, without the consumer purchasing any hardware.
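The remote-endpoint half of this already works today, minus the encrypted-compute part: the Ollama CLI talks to whatever server `OLLAMA_HOST` points at, and HTTPS covers encryption in transit. A minimal sketch, assuming a hypothetical endpoint `gpu.example.com` running an Ollama server behind TLS:

```shell
# Point the local Ollama CLI at a remote GPU server.
# gpu.example.com is a hypothetical endpoint; any reachable
# Ollama server works. https:// provides encryption in transit.
export OLLAMA_HOST=https://gpu.example.com

# Subsequent commands run inference on the remote GPUs,
# not on local hardware.
ollama run llama3 "Summarize this paragraph: ..."
```

Encrypted compute on the remote GPU is the missing piece: plain TLS only protects the network hop, while the prompt and weights are still plaintext on the server.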
ARM is not paltry, it’s in small/portable devices because it’s efficient, not weak.
Tell that to groq.