Researchers Alex Hanna and Emily M. Bender call on businesses not to succumb to this artificial “intelligence” hype.
I’ve never been scared of AI, I’ve been scared of the idiots who THINK it’s smart (smarter than them, at least) and want to push it onto their employees and customers. My employer has been pushing more and more AI automation for development. If it can actually make sense of the codebases our customers have, then I will happily retire and let it take over the madness that is their code.
I believe in it as a work accelerator, as it helps me think through problems in areas I know well, where I can sort its bullshit out. Editing is faster than writing, and I frequently make typos in content I draft from scratch. ChatGPT has been a godsend for me professionally.
This
Who’s butthurt? Which billionaire is crying?
The only people butthurt are politicians, journalists, and artists who essentially copy and paste shit.
Basically all the jobs that can be done by a trained monkey.
Let’s not insult the trained monkeys; they had to work harder than any of those people ever did :(
Sure, ChatGPT isn’t actually intelligent, but it’s a good approximation. You can ask ChatGPT a technical question, give it a ton of context for the question, and it’ll “understand” all the information you’ve given it and answer your question. That’s much more akin to asking an expert human who takes in the info, understands it, and answers, vs. trying to find the answer via a search engine.
For me and other people in my life, ChatGPT has been intensely helpful job-wise. I do double-check any info it gives, but generally it’s been pretty solid.
ChatGPT and other LLMs are ideal for cases where 100% accuracy is not required. If you’re ok with getting wrong answers 80-90% of the time, then you have a legitimate use case for LLMs.
In terms of technical questions, especially older Microsoft-related stuff, it does very well. My experience hasn’t been anywhere near 80-90% wrong answers. It all depends on the topics you’re asking about, I suppose.
Just like humans.