To be honest, I've never tried the publicly available instances of any privacy front-ends (SearxNG, Nitter, Redlib, etc.). I always self-host and route all such traffic through a VPN.
My initial issue with SearxNG was the default selection of search engines. The default inclusion of the Qwant engine caused irrelevant and non-English results to be returned. Currently my selection is limited to Google, Bing, and Brave, as DDG takes around 2 seconds to return results (depending on the VPN server location I'm using).
If you still remember the error messages, I might be able to help fix that.
Though it's a bit off-topic, what exact issues did you face with SearxNG?
On Ubuntu, replacing the Firefox/Thunderbird snap versions with the actual deb versions.
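For anyone curious, a sketch of the usual way to do this on Ubuntu 22.04+, assuming the Mozilla Team PPA (check the PPA page before relying on it):

```shell
# Remove the snap and install the deb from the Mozilla Team PPA
sudo snap remove firefox
sudo add-apt-repository ppa:mozillateam/ppa

# Pin the PPA above the Ubuntu archive so apt doesn't pull the snap shim back in
echo '
Package: firefox* thunderbird*
Pin: release o=LP-PPA-mozillateam
Pin-Priority: 1001
' | sudo tee /etc/apt/preferences.d/mozilla-ppa

sudo apt update
sudo apt install firefox thunderbird
```

Without the pin, an `apt upgrade` can silently swap the deb back for the snap transition package.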
The built-in AI stuff you referred to is nothing but an accelerator for integrating with third-party or self-hosted LLMs. It's quite similar to choosing a search engine in settings. The feature itself is lightweight and can be disabled in settings if not required.
But Pocket can be disabled via about:config, right?
I thought that’s how all those soft forks handled that mess.
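For reference, on Firefox versions that still ship Pocket, the relevant about:config pref is:

```
extensions.pocket.enabled = false
```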
You may self-host SearxNG (via Docker) and avoid direct interaction with search engines - be it Google, Bing, Brave, or DDG.
SearxNG will act as a privacy front-end for you.
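A minimal sketch of that setup, based on the official Docker image (the port mapping and volume path are my choices, not requirements):

```shell
# Run SearxNG from the official image; settings persist in ./searxng
docker run -d --name searxng \
  -p 8080:8080 \
  -v "${PWD}/searxng:/etc/searxng" \
  -e "BASE_URL=http://localhost:8080/" \
  searxng/searxng
```

Then point your browser's default search engine at `http://localhost:8080/search?q=%s` and all queries go through your own instance.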
This is just an add-on BTW. It’s completely up to you to decide if you need this.
My (Docker-based) configuration:
Software stack: Linux > Docker Container > Nvidia Runtime > Open WebUI > Ollama > Llama 3.1
Hardware: i5-13600K, NVIDIA RTX 3070 Ti (8 GB), 32 GB RAM
Docker: https://docs.docker.com/engine/install/
Nvidia Runtime for docker: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html
Open WebUI: https://docs.openwebui.com/
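Roughly the run command I'd expect for this stack, using the bundled Open WebUI + Ollama image with GPU passthrough (container name, ports, and model are examples; see the Open WebUI docs linked above for the canonical form):

```shell
# Single container bundling Open WebUI + Ollama, with NVIDIA GPU support
# (requires the NVIDIA Container Toolkit from the link above)
docker run -d --name open-webui \
  --gpus all \
  -p 3000:8080 \
  -v ollama:/root/.ollama \
  -v open-webui:/app/backend/data \
  --restart always \
  ghcr.io/open-webui/open-webui:ollama

# Pull a model inside the container, e.g.:
docker exec -it open-webui ollama pull llama3.1
```

After that, the web UI is at `http://localhost:3000`.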
No, but the “AI” option available on the Firefox Labs tab in settings allows you to integrate with a self-hosted LLM.
I've had this setup running for a while now.
BTW, the Labs option works better privacy-wise (than the add-on) if you have an LLM running locally, IMO.
Even if they choose to do so in the future, there's usually an about:config entry to disable it.
It's an add-on, not something baked into the browser. It's not on your property in the first place, unless you choose to install it 🙂
In such a scenario, you need to host your choice of LLM locally.
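A minimal local-hosting sketch with Ollama (install command from ollama.com; the model is just an example):

```shell
# Install Ollama (Linux) and pull a model to serve locally
curl -fsSL https://ollama.com/install.sh | sh
ollama pull llama3.1

# Ollama now serves an HTTP API on localhost:11434,
# which the browser/LLM frontend can be pointed at
ollama run llama3.1 "Hello"
```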
I came here to say that my tiny Raspberry Pi 4 consumes ~10 watts, but then, after noticing some people's home server setups and the associated power consumption, I feel like a child in a crowd of adults 😀
Remember NOT to store any private data or log in to any account, as this VM is not hosted on your system.
Use this server just to test out distros quickly.
If you just want to try a Linux distro out, you may use https://distrosea.com/
it’s just more configurable
That's an understatement 😄 The amount of configuration KDE offers is mind-boggling to me. Again, UX and degree of configuration are very subjective matters.
You may try an immutable OS like NixOS. Modern-day kernels have far better hardware support than in earlier days.
Same for me, but via a Cloudflare Tunnel. No need to expose your system to the world unless that's what you want.
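For a quick taste of this, cloudflared has an ad-hoc "quick tunnel" mode (a named tunnel is what you'd use for a persistent setup; the local URL here is a placeholder):

```shell
# Expose a local service via a temporary Cloudflare quick tunnel
# (no inbound ports opened; prints a random trycloudflare.com URL)
cloudflared tunnel --url http://localhost:8080
```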
Well, how many users really have an LLM locally hosted?