

I might be the only person here who thinks that the upcoming quantum bubble has the potential to deliver useful things (but boring useful things, and so harder to build hype on), but stuff like this particularly irritates me:
Quantum fucking ai? Motherfucker.
Best case scenario here is that this is how one department of Google gets money out of the other bits of Google, because the internal bean counters cannot control their fiscal sphincters when someone says “ai” to them.
And back on the subject of builder.ai, there’s a suggestion that it might not have been A Guy Instead, and the whole 700 human engineers thing was a misunderstanding.
https://blog.pragmaticengineer.com/builder-ai-did-not-fake-ai/
I’m not wholly sure I buy the argument, which is roughly that the human engineers were doing ordinary development work and the AI part of the product was real genai tooling, rather than people pretending to be a bot.
I guess the question then is: if they did have a good genai tool for software dev… where is it? Why wasn’t Microsoft interested in it?
Turns out some Silicon Valley folk are unhappy that a whole load of waymos got torched, fantasised that the cars could just gun down the protesters, and used genai video to bring their fantasies to some vague approximation of “life”.
https://xcancel.com/venturetwins/status/1931929828732907882
The author, Justine Moore, is an investment partner at a16z. May her future ventures be incendiary and uninsurable.
(via garbageday.email)
I was reading a post by someone trying to make shell scripts with an llm, and at one point the system suggested making a directory called ~ (which is a shorthand for your home directory in a bunch of unix-alikes). When the user pointed out this was bad, the llm recommended remediation using rm -r ~, which would of course delete all your stuff.
So, yeah, don’t let the approximately-correct machine do things by itself, when a single character substitution can destroy all your stuff.
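To make that failure mode concrete, here’s a minimal sketch of the difference a couple of quote characters make (assuming bash; other shells’ tilde-expansion rules differ slightly):

    # Unquoted, ~ is expanded by the shell to your home directory,
    # so this doesn't create a directory literally named "~":
    mkdir ~            # fails: your home directory already exists

    # Quoting suppresses tilde expansion, so this really does create
    # a directory named "~" in the current directory:
    mkdir './~'

    # Which is why the llm's suggested "fix" is catastrophic: unquoted
    # again, it recursively deletes your entire home directory.
    rm -r ~            # do NOT run this

    # The safe remediation targets the literal directory instead:
    rm -r './~'

Two quote characters are the whole difference between cleaning up a stray directory and wiping your account.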
And JFC, being surprised that something called “YOLO” might be bad? What were people expecting? --all-the-red-flags
LLMs wouldn’t be profitable even if they never paid a penny in license fees. The providers are losing money on every query, and can only be sustained by a firehose of VC money. They’re all hoping for a miracle.
(this probably deserves its own post because it seems destined to be a shitshow full of the worst people, but I know nothing about the project or the people currently involved)
Did you know there’s a new fork of xorg, called x11libre? I didn’t! I guess not everyone is happy with wayland, so this seems like a reasonable enough thing to exist. Then you read the project’s own blurb:
It’s explicitly free of any “DEI” or similar discriminatory policies… [snip]
Together we’ll make X great again!
Oh dear. Project members are of course being entirely normal about the whole thing.
Metux, one of the founding contributors, is Enrico Weigelt, who has reasonable opinions like “everyone except the nazis were the real nazis in WW2”, and also had an anti-vax (and possibly eugenicist) rant on the linux kernel mailing list, as you do.
I’m sure it’ll be fine though. He’s a great coder.
(links were unashamedly pillaged from this mastodon thread: https://nondeterministic.computer/@mjg59/114664107545048173)
Relatedly, the lumping of (useful, actually works in real life, can be used to make products that turn a profit or that people actually want, and sometimes even all of the above at the same time) computer vision and machine learning in with LLMs under the umbrella of “AI” is something I find particularly galling.
The eventual collapse of the AI bubble and the subsequent second AI winter is going to take a lot of useful technology with it that had the misfortune to be standing a bit too close to LLMs.
It isn’t clear that anyone in trump’s government has ever paused to consider that any of their plans might have downsides.
Little table of “ai fluency” from zapier via linkedin: https://www.linkedin.com/posts/wadefoster_how-do-we-measure-ai-fluency-at-zapier-activity-7336442774650556416-nKND
(original source https://old.mermaid.town/@Kymberly/114635617736977394)
The author says it isn’t a requirements checklist, but it does have a column marked “unacceptable”, containing gems like:
Calls AI coding assistants too risky
Has never tested AI-generated code
Relies only on Stack Overflow snippets
Angry goose meme: what was the ai code generator trained on, motherfucker?
I don’t think it’s a stretch to see the independence of spacex classified as a national security risk and have it nationalised (though not called that, because that sounds too socialist), with associated people such as elon declared traitors. It shouldn’t even be that difficult these days, seeing how he’s trashed his own reputation, and it’ll be good to encourage the other plutocrats to stay in line.
Night of the long knives is in the playbook, after all.
AI audio transcription is great.
https://mastodon.social/@nixCraft/114627512725655987
Sean Murray @NoMansSky
Ignore the auto-generated captions. We did not have a secret room hiding deaf kids.
Nintendo never once sent us deaf kids. We were hiding dev-kits. DEV-KITS.
For those of you who haven’t already seen it, r/accelerate is banning users who think they’ve talked to an AI god.
https://www.404media.co/pro-ai-subreddit-bans-uptick-of-users-who-suffer-from-ai-delusions/
There’s some optimism from the redditors that the LLM folk will patch the problem out (“you must be prompting it wrong”), but they assume that the companies somehow just don’t know about the issue yet.
As soon as the companies realise this, red team it and patch the LLMs it should stop being a problem. But it’s clear that they’re not aware of the issue enough right now.
There’s some dubious self-published analysis that coined the term “neural howlround” for some sort of undesirable recursive behaviour in LLMs; I haven’t read it yet (and might not, because it sounds like cultspeak), and it may not actually be relevant to the issue.
It wraps up with a surprisingly sensible response from the subreddit staff.
Our policy is to quietly ban those users and not engage with them, because we’re not qualified and it never goes well.
AI boosters neither claiming expertise in something nor offloading the task to an LLM? Good news, though surprising.
FWIW, maemo still lives… Jolla released their C2 phone, which runs the maemo-descended sailfish OS, about 6 months ago. I don’t know anything about it other than its existence, and that it doesn’t have the N900 form factor 😔
Interesting (in a depressing way) thread by author Alex de Campi about the fuckery by Unbound/Boundless (crowdfunding for publishing, which segued into financial incompetence and stealing royalties), whose latest incarnation might be trying to AI their way out of the hole they’ve dug for themselves.
From the liquidator’s proposals:
We are also undertaking new areas of business that require no funds to implement, such as starting to increase our rights income from book to videogaming by leveraging our contacts in the gaming industry and potentially creating new content based on our intellectual property *utilizing inexpensive artificial intelligence platforms*.
(emphasis mine)
They don’t appear to actually own any intellectual property anymore (due to defaulting on contracts) so I can’t see this ending well.
Original thread, for those of you with bluesky accounts: https://bsky.app/profile/alexdecampi.bsky.social/post/3lqfmpme2722w
It’s the usual “uninspiring right-centrist doesn’t understand why they were elected, implements a bunch of stupid policies that don’t improve things for anyone but some consultants and donors, hands country over to frothing far-right shithead” cycle.
I like that Soylent Green was set in the far off and implausible year of 2022, which coincidentally was the year of ChatGPT’s debut.
I am absolutely certain that letting a hallucination-as-a-service system call the police if it suspects a user is being nefarious is a great plan. This will definitely ensure that all the people threatening their chatbots with death will think twice about their language, and no-one on the internet will ever be naughty ever again. The police will certainly thank anthropic for keeping them up to date with the almost certainly illegal activities of a probably small number of criminal users.
It’s just more llm output, in the style of “imagine you can reason about the question you’ve just been asked; explain how you might have arrived at your answer.” It bears no resemblance to how a neural network functions, nor to the output filters the service providers use.
It’s how the ai doomers get themselves into a flap over “deceptive” models… “omg it lied about its train of thought!” Because of course it didn’t lie; it just emitted a stream of tokens that were statistically similar to something classified as reasoning during training.