Good luck, I’m hoping that I can get a worker’s visa through my employer to move to the UK myself.
This is gonna turn into the gamer version of “this is extremely dangerous to our democracy” isn’t it
y’know what, I was gonna do this quietly, but I’m drunk, and so this feels like a time to make bad decisions:
When Yiffit closes at the end of this year, it’s probably going to be the end of my time on Lemmy. The reasons are varied (it’s mostly the tankies) but fundamentally, it boils down to the simple fact that I’ve realized that I come here not to enjoy myself or relax, but because I want to be angry at something. And boy howdy, does Lemmy give me something to be angry about (the tankies).
I’m gonna level with the rest of this platform. As it stands right now, this platform consists of the following:
~60-70%: Reposts of whatever meme is in the vicinity of the front page of /r/all on Reddit
15-20%: Politics (including some of the worst takes I’ve ever seen on the Internet) and memes about the awful takes
9-14%: Memes about Linux
~1%: Actual original content that interests me
So with all that, why would I stick around here? If the shit I see here is mostly crap reposts from Reddit and shit that I either don’t care about or actively dislike, why don’t I just… go to Reddit and browse read-only without an account, like I’ve found myself increasingly doing over the past 6 months or so?
And as for actually interacting with this place, fundamentally, most of my engagement with a platform comes from the comments. And holy fuck are the comments here absolutely fucking awful. It puts me in a mindset where when I go to this website, I’m mentally preparing myself for a session of “upvote the sane takes, downvote the unhinged bullshit, and debate whether it’s worth replying to the particularly unhinged takes and getting into an internet slapfight (hint: the answer is usually no).” And that’s fine to do for a while, but…
I’m tired, boss.
I’m tired of being angry and scared all the fucking time. I’m tired of arguing and exposing myself to the kind of toxic shit that you would normally hear from the angry drunk on the corner who hasn’t showered off the vomit from his last hangover, and you could safely ignore. And for a place that is supposedly mostly populated by leftists (y’know, the groups you would expect to be relatively accepting of minorities and adopt a “live and let live” mindset), holy FUCK I have seen more hate for furries in the ~16 months of using Lemmy than I did in a decade of using Reddit.
And that’s the other problem with this place–the moderation absolutely fucking sucks. On the one hand, you’ve got literal Stalinist admins running Those Instances (.ml, lemmygrad, hexbear, etc.) issuing instance-wide bans for the mildest of lukewarm “maybe harm reduction is good actually” takes, and on the other hand you have mods and admins on the not-fucking-nutso instances so traumatized by Reddit moderation that they’re wringing their hands over “well we don’t want to have the appearance of impropriety and power abuse” while literal, 90s-style BBS trolls run rampant, flooding the platform with shit and making the comments even more toxic than they already are.
Coming from someone who has a few years of moderation experience, here’s a dirty little secret of moderation: You cannot have an absolute, 100% objective standard. The instant you shackle yourself to the standard of objectivity, you open the door for trolls and bad-faith actors to push their discourse up to the very edge of the rules, testing the boundaries, reveling in the game of “how much can I shit all over your community while staying within the rules?” You HAVE to leave yourself at least some wriggle room to ban someone because they’re a shithead without a 5-page essay justifying why, or you’ll find yourself powerless against the inevitable onslaught of galactic martial artists. You should absolutely have methods to appeal and review moderator actions, and I do support transparency whenever possible, but moderation is fundamentally walking a tightrope of very unpleasant judgment calls that will inevitably piss off someone (if nobody else, the person who you banned and the people who want to see your community burn for their amusement).
I think I’m also going to wind down my presence on Mastodon, for similar reasons. In my time there, I’ve been exposed to some of the pettiest, Mean Girls-esque, high school bullshit drama I’ve ever seen in my life. I’ve seen friends scramble to find new instances because some rando neither of us had ever heard of mouthed off to the wrong asshole and got their instance put on a blocklist. I’ve seen groups of people that I would normally love to chill with, swap stories over beers, and generally get to know accuse one another of being racist and/or transphobic, with an astounding lack of grace, forgiveness, and willingness to understand one another’s perspective and actually fucking listen to each other. As with Lemmy, it’s mildly amusing to read up on the latest drama, but one can only spill the tea for so long before they get tired of cleaning up the mess and just want to drink the goddamn tea.
I’m not sure what I’ll do next, but I think I’m generally done with the Fediverse. Say what you will about corporate, centralized social media–and holy FUCK is there a LOT to complain about–but at least their moderation is less susceptible to the kind of bullshit I’ve talked about here, so maybe I’ll make an account on Bluesky–it seems to have reached a critical mass of furries so maybe I’ll be able to find a home there, at least for a good few years.
So yeah. I might get downvoted to hell for this, but I don’t really fucking care at this point. It’s all gonna be gone in a few months, and so will I.
Yup, being nice and polite to the people helping you is the single biggest way to get them to look the other way or have them bend the rules for you. The instant you start playing the asshole card, you usually get strict by-the-letter policy.
As others have said, it’s a very snowbally game. The various characters all grow naturally stronger over the course of the game through gold (to buy items) and experience that you earn by killing minions. The problem is that killing an enemy player and destroying enemy towers grants a lot of gold and experience, so if you fuck up and die (or if you get ganged up on by the enemy team) you can end up making your opponent much stronger. Even if you live but are forced to return to base to heal, the opportunity for free farm or destroying your tower (which also makes it riskier for you to push forward) can make your opponent a lot stronger than you, which lets them kill you more easily, which makes them stronger still. This can also spill over to other lanes, where the opponent you made stronger starts killing your teammates and taking their towers.
There are ways to overcome this snowball–players on killstreaks are worth more gold when they die, you can gang up on a fed opponent and catch them out to nullify their stat advantage, and you can try to help other lanes to get your team stronger. The champions also have different scaling curves: some get a lot of front-loaded baseline damage while others scale better with items, and a select few have theoretically infinite scaling (but are generally much weaker in other areas to compensate). Worst case, this means your team can play super defensive, try to wait out the advantage until they catch up, and win from there. The problem is that all this requires A) communication and quick adaptation from your teammates, B) the opposing team screwing up and not pressing their advantage, and C) your team being willing to try (which may require dragging the game out for over an hour). Needless to say, this is not always the case, and this design makes it very easy to blame another player for the loss (warranted or not).
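The compounding effect is easy to see in a toy model. This is just a sketch of the feedback loop described above; the gold values and the “richer side wins the fight” rule are invented for illustration, not actual game numbers:

```python
# Toy model of snowballing: the side with more gold wins each skirmish,
# and each win grants a kill bounty, compounding the lead.
# All numbers are invented for illustration.
KILL_BOUNTY = 300

gold = {"you": 500, "opponent": 500}
gold["opponent"] += KILL_BOUNTY  # one early death on your side

for skirmish in range(5):
    winner = max(gold, key=gold.get)  # richer side wins the fight
    gold[winner] += KILL_BOUNTY

print(gold)  # {'you': 500, 'opponent': 2300}
```

One early mistake turns into an 1800-gold deficit after five fights, which is why the comeback mechanics (shutdown bounties, ganking the fed player) exist at all.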
Brought to you by the American National Automation Laboratory Corp?
Oh yes, let me just contact the manufacturer for this appliance and ask them to update it to support automated certificate renewa–
What’s that? “Device is end of life and will not receive further feature updates?” Okay, let me ask my boss if I can replace i–
What? “Equipment is working fine and there is no room in the budget for a replacement?” Okay, then let me see if I can find a workaround with existing equipme–
Huh? “Requested feature requires updating subscription to include advanced management capabilities?” Oh, fuck off…
I keep thinking of the anticapitalist manifesto that a spinoff team from the Disco Elysium developers dropped, and this part in particular stands out to me and helps crystallize exactly why I don’t like AI art:
All art is communication — dialogue across time, space and thought. In its rawest, it is one mind’s ability to provoke emotion in another. Large language models — simulacra, cold comfort, real-doll pocket-pussy, cyberspace freezer of an abandoned IM-chat — which are today passed off for “artificial intelligence”, will never be able to offer a dialogue with the vision of another human being.
Machine-generated works will never satisfy or substitute the human desire for art, as our desire for art is in its core a desire for communication with another, with a talent who speaks to us across worlds and ages to remind us of our all-encompassing human universality. There is no one to connect to in a large language model. The phone line is open but there’s no one on the other side.
Yeah, suuuuure you weren’t.
Note that the proof also generalizes to any form of creating an AI by training it on a dataset, not just LLMs. But sure, we’ll absolutely develop an entirely new approach to cognitive science in a few years, we’re definitely not boiling the planet and funneling enough money to end world poverty several times over into a scientific dead end!
You literally were LMAO
Other than that, we will keep incrementally improving our technology and it’s only a matter of time until we get there. May take 5 years, 50 or 500 but it seems pretty inevitable to me.
Literally a direct quote. In what world is this not talking about LLMs?
Did you read the article, or the actual research paper? They present a mathematical proof that any hypothetical method of training an AI that produces an algorithm that performs better than random chance could also be used to solve a known intractable problem, which is impossible with all known current methods. This means that any algorithm we can produce that works by training an AI would run in exponential time or worse.
The paper authors point out that this also has severe implications for current AI–since the AI-by-learning method that underpins all LLMs is fundamentally NP-hard and thus (assuming P ≠ NP) can’t run in polynomial time, “the sample-and-time requirements grow non-polynomially (e.g. exponentially or worse) in n.” They present a thought experiment of an AI that handles a 15-minute conversation, assuming 60 words are spoken per minute (keep in mind the real-world average is roughly 160). The input size n for this conversation would be 60 × 15 = 900. The authors then conclude:
“Now the AI needs to learn to respond appropriately to conversations of this size (and not just to short prompts). Since resource requirements for AI-by-Learning grow exponentially or worse, let us take a simple exponential function O(2^n) as our proxy of the order of magnitude of resources needed as a function of n. 2^900 ∼ 10^270 is already unimaginably larger than the number of atoms in the universe (∼10^81). Imagine us sampling this super-astronomical space of possible situations using so-called ‘Big Data’. Even if we grant that billions of trillions (10^21) of relevant data samples could be generated (or scraped) and stored, then this is still but a miniscule proportion of the order of magnitude of samples needed to solve the learning problem for even moderate size n.”
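The scale mismatch in that quote is easy to sanity-check with a few lines of Python. The 60 wpm and 15-minute figures are the paper’s own thought experiment; everything else is basic arithmetic:

```python
import math

# The paper's thought experiment: a 15-minute conversation at 60 words
# per minute gives an input size of n = 900.
n = 60 * 15

# Using O(2^n) as the proxy for resource requirements, express 2^900
# in orders of magnitude (powers of ten).
orders_of_magnitude = n * math.log10(2)
print(orders_of_magnitude)  # ~270.9, i.e. 2^900 is ~10^270

# The quote's reference points: ~10^81 atoms in the universe, and a
# generous "billions of trillions" (~10^21) of data samples.
shortfall = orders_of_magnitude - 21
print(shortfall)  # still ~250 orders of magnitude short of the space
```

Even granting every optimistic assumption about data collection, the sample budget covers roughly a 10^-250 sliver of the space the quote describes.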
That’s why LLMs are a dead end.
No, tencent is a Chinese tech company, you’re thinking of tenement.
When IT folks say devs don’t know about hardware, in my experience they’re usually talking about the forest-level overview: stuff like how the software being developed integrates into an existing environment, and how to optimize code to fit within the bounds of reality. It may be practical to dump a database directly into memory when it’s a 500 MB testing dataset on your local workstation, but it’s insane to do that with a 500+ GB database in a production environment. Similarly, a program may run fine when it’s using an NVMe SSD, but lots of environments even today still depend on arrays of traditional electromechanical hard drives, because they offer the most capacity per dollar and don’t tend to tombstone suddenly and completely the way flash media does. Then, once the program is in production, it turns out that same program is making a bunch of random I/O calls that could have been optimized into a more sequential request or batched together into a single transaction, and now it runs like dogshit and drags down every other VM, container, or service sharing that array with it. And that’s not accounting for the real dumb shit I’ve read about, like “dev hard-coded their local IP address and it breaks in production because of NAT” or “program crashes because it doesn’t account for network latency.”
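The random-vs-batched I/O point can be sketched with a toy sqlite3 example (the table name and row contents here are made up; the same idea applies to any database client):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
rows = [(i, f"event-{i}") for i in range(1000)]

# Anti-pattern: one INSERT and one commit per row. On a local NVMe SSD
# this feels fine; on a shared spinning-disk array, 1000 separate
# synchronous flushes can drag down every other tenant on the array.
# for row in rows:
#     conn.execute("INSERT INTO events VALUES (?, ?)", row)
#     conn.commit()

# Batched: all 1000 inserts ride in a single transaction and are
# flushed to disk once, as one mostly-sequential write.
with conn:
    conn.executemany("INSERT INTO events VALUES (?, ?)", rows)

count = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]
print(count)  # 1000
```

Both versions produce identical data; the difference only shows up under production-scale load on production-grade storage, which is exactly why it slips past local testing.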
Game dev is unique because you’re either explicitly targeting a single known platform (for consoles) or targeting an extremely wide range of performance specs (for PC), and hitting an acceptable level of performance pre-release is (somewhat) mandatory, so this kind of mindfulness is drilled into game devs much more heavily than it is into business software devs, especially in-house devs. Business development is almost entirely focused on “does it run without failing catastrophically,” and almost everything else–performance, security, cleanliness, resource optimization–gets bare lip service at best.
I gave up on it for now when the questline involving the NPC learning to write broke, and then I started crashing to desktop (without any logs anywhere, either in the Buffout directory or even in Windows’ Event Viewer) every time I left the Swan or fast traveled directly to it, even though traveling to another point literally fifty feet south worked just fine. And since there’s no logs describing the crash, I have no idea how to fix it.
I could probably fix it by uninstalling and re-downloading it, but I have a goddamn data cap that my roommate already blows through every month with the fucking massive updates Fallout 76 has taken to pushing out, and I have zero desire to download 60 GB of data (30 GB base game + 30 GB FOLON) every fucking time I sneeze wrong and make the game start crashing again. =|
The most jarring moment was when I was picking up a prescription for my cat, and on the way home I was driving next to a plain vanilla, factory-stock GMC truck whose hood was literally taller than my entire car. And I don’t drive a Miata or some other subcompact, I drive a freaking Nissan Leaf, so it’s about the size of your average sedan.
Since then it’s like a switch flipped in my brain, and I can’t unsee just how insanely huge modern-day pickup trucks have gotten.
It’s pretty okay. If you like the gameplay loop of scavenging parts to maintain and upgrade your car, and don’t mind the roguelite elements, it’s pretty fun, and it does a good job of creating tension–there’s been multiple occasions where I wanted to loot more but I was out of time and likely to die if I stayed much longer.
The world building is immaculate, but IMO unfortunately the plot doesn’t really pay off, and the ending isn’t… super satisfying. It does enough to drive you along (no pun intended). The best part of the game is easily the soundtrack, and the best song in the soundtrack is easily The Freeze.
I’ll say that Small Saga is fairly short–it’ll only take you about 5-10 hours to beat it, including the optional bosses. I enjoyed it, definitely give it a shot.
Fine, you win, I misunderstood. I still disagree with your actual point, however. To me, intelligence implies the ability to learn in real time, to adapt to changes in circumstance, and to improve itself. Once an LLM is trained, it is static and unchanging until you re-train it with new data and update the model. Even if you strip out the sapience/consciousness-related stuff like the ability to think critically about a scenario, proactively make decisions, etc., an LLM is only capable of regurgitating facts and responding to its immediate input. By design, any “learning” it can do is forgotten the instant the session ends.
The commercial aspect of the reproduction is not relevant to whether it is an infringement–it is simply a factor in damages and Fair Use defense (an affirmative defense that presupposes infringement).
What you are getting at when it applies to this particular type of AI is effectively whether it would be a fair use, presupposing there is copying amounting to copyright infringement. And what I am saying is that, ignoring certain stupid behavior like torrenting a shit ton of text to keep a local store of training data, there is no copying happening as a matter of necessity. There may be copying as a matter of stupidity, but it isn’t necessary to the way the technology works.
You’re conflating whether something is infringement with defenses against infringement. Believe it or not, basically all data transfer and display of copyrighted material on the Internet is technically infringing. That includes the download of a picture to your computer’s memory for the sole purpose of displaying it on your monitor. In practice, nobody ever bothers suing art galleries, social media websites, or web browsers, because they all have ironclad defenses against infringement claims: art galleries & social media include a clause in their TOS that grants them a license to redistribute your work for the purpose of displaying it on their website, and web browsers have a basically bulletproof fair use claim. There are other non-infringing uses such as those which qualify for a compulsory license (e.g. live music productions, usually involving royalties), but they’re largely not very relevant here. In any case, the fundamental point is that any reproduction of a copyrighted work is infringement, but there are varied defenses against infringement claims that mean most infringing activities never see a courtroom in practice.
All this gets back to the original point I made: Creators retain their copyright even when uploading data for public use, and that copyright comes with heavy restrictions on how third parties may use it. When an individual uploads something to an art website, the website is free and clear of any claims for copyright infringement by virtue of the license granted to it by the website’s TOS. In contrast, an uninvolved third party–e.g. a non-registered user or an organization that has not entered into a licensing agreement with the creator or the website (*cough* OpenAI)–has no special defense against copyright infringement claims beyond the baseline questions: was the infringement for personal, noncommercial use? And does the infringement qualify as fair use? Individual users downloading an image for their private collection are mostly A-OK, because the infringement is done for personal & noncommercial use–theoretically someone could sue over it, but there would have to be a lot of aggravating factors for it to get beyond summary judgment. AI companies using web scrapers to download creators’ works do not qualify as personal/noncommercial use, for what I hope are bloody obvious reasons.
As for a model trained purely for research or educational purposes, I believe that it would have a very strong claim for fair use as long as the model is not widely available for public use. Once that model becomes publicly available, and/or is leveraged commercially, the analysis changes, because the model is no longer being used for research, but for commercial profit. To apply it to the real world: when OpenAI originally trained ChatGPT for research, it was on strong legal ground, but when it decided to start making ChatGPT publicly available, it should have thrown out its training dataset and built up a new one using data in the public domain and data it had negotiated licenses for, trained ChatGPT on the new dataset, and then released it commercially. If it had done that, and if individuals had been given the option to opt their creative works out of this dataset, I highly doubt that most people would have any objection to LLMs from a legal standpoint. Hell, OpenAI probably could have gotten licenses to use most websites’ data to train ChatGPT for a song. Instead, it jumped the gun and tipped its hand before it had all its ducks in a row, and now everybody sees just how valuable their data is to OpenAI and prices it accordingly.
Oh, and as for your edit, you contradicted yourself: in your first line, you said “The commercial aspect of the reproduction is not relevant to whether it is an infringement.” In your edit, you said “the infringement happens when you reproduce the images for a commercial purpose.” So which is it? (To be clear, the initial download is infringing copyright both when I download the image for personal/noncommercial use, and also when I download it to make T-shirts with. The difference is that the first case has a strong defense against an infringement claim that would likely get it dismissed in summary, while the cases of making T-shirts would be straightforward claims of infringement.)
There’s a pretty big difference between chatGPT and the science/medicine AIs.
And keep in mind that for LLMs and other chatbots, it’s not that they aren’t useful at all but that they aren’t useful enough to justify their costs. Microsoft is struggling to get significant uptake for Copilot addons in Microsoft 365, and this is when AI companies are still in their “sell below cost and light VC money on fire to survive long enough to gain market share” phase. What happens when the VC money dries up and AI companies have to double their prices (or more) in order to make enough revenue to cover their costs?