Just a serval who gets into all sorts of furry shenanigans.

  • 2 Posts
  • 70 Comments
Joined 1 year ago
Cake day: June 18th, 2023

  • LLMs (I refuse to call them AI, as there’s no intelligence to be found) are simply random word sequence generators based on a trained probability model. Of course they’re going to suck at math, because they’re not actually calculating anything, they’re just dumping what their algorithm “thinks” is the most likely response to user input.

    “The ability to speak does not make you intelligent” - Qui-Gon Jinn
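    An entirely hypothetical toy sketch of the point above: next-token generation is a probability-table lookup-and-sample, so any “math” in the output is just whatever answers tended to follow the prompt in training text (the bigram table, vocabulary, and probabilities below are all made up for illustration):

```python
import random

# Made-up "trained" bigram table: probability of the next token given
# the previous two tokens. No arithmetic is ever performed; the answers
# (right or wrong) are just whatever followed "2 + 2 =" in training text.
bigram_probs = {
    ("2", "+"): {"2": 1.0},
    ("+", "2"): {"=": 1.0},
    ("2", "="): {"4": 0.7, "5": 0.2, "22": 0.1},
}

def next_token(context, rng):
    # Pure lookup-and-sample: fetch the distribution for the last two
    # tokens and draw from it. Nothing in here computes 2 + 2.
    dist = bigram_probs[tuple(context[-2:])]
    tokens = list(dist)
    weights = [dist[t] for t in tokens]
    return rng.choices(tokens, weights=weights)[0]

rng = random.Random()
print(next_token(["2", "+", "2", "="], rng))  # usually "4", sometimes not
```

    Real LLMs condition on much longer contexts with a neural network instead of a lookup table, but the output step is the same: sampling from a predicted distribution, not calculating.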

  • From a technological standpoint, yes. That said, there are some complications. First, the US runs double-stacked intermodal freight, so overhead clearance is a concern. It’s doable (India has many electrified lines that accommodate double-stacked intermodal freight), but it adds to the cost and effort. The second issue is, unfortunately, cost, though not because it’s outright “too expensive”. Rather, the installation would eat too much into the short-term quarterlies of the publicly traded rail companies that own the vast majority of the US’s rail lines. And because they’re publicly traded, even if one of the major rail companies wanted to spend the money to electrify, it would get sued by its shareholders for doing so, since there’s no immediate return on profits. And the final issue? NIMBYs already hate rail as-is; they’d hate the overhead lines even more.

    So, yeah, there are a lot of challenges to electrification unique to the US, almost all of them political in nature. It would be really nice to put in electrified rail again (the late-era PRR and the New Haven were almost fully electrified, but most of that was ripped out after the Penn Central merger; seriously, everyone likes to rag on the New Haven for screwing that up, but honestly the evidence all points to the New York Central’s management team being the real culprits).

  • I’m opposed to #4 on principle. ANY action taken against an account should ALWAYS be taken by a person after direct review. It doesn’t matter whether it can be fixed afterwards; you’re still potentially subjecting people to unfair treatment and profiling. You can have the system notify moderators, but the moderators should be the ones actually deciding whether to limit an account for further investigation, not the auto-mod bot.

    If you implement #4 as-is, I’m just flat-out not going to stick around.
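    The review flow argued for above can be sketched in a few lines (hypothetical names only, not any actual auto-mod implementation): the detector’s entire authority is to put a case in a queue, and limiting an account is a separate step that only a human moderator invokes:

```python
from dataclasses import dataclass, field
from queue import Queue

@dataclass
class Account:
    name: str
    limited: bool = False

@dataclass
class ModerationSystem:
    review_queue: Queue = field(default_factory=Queue)

    def auto_flag(self, account, reason):
        # The bot's only power: put the case in front of a human.
        # It cannot touch the account itself.
        self.review_queue.put((account, reason))

    def moderator_decide(self, limit):
        # A person reviews the next case and makes the actual call.
        account, reason = self.review_queue.get()
        if limit:
            account.limited = True
        return account, reason

mods = ModerationSystem()
suspect = Account("spam-looking-account")
mods.auto_flag(suspect, "burst of identical posts")
assert not suspect.limited         # flagging alone changes nothing
mods.moderator_decide(limit=True)  # only the human review limits it
assert suspect.limited
```

    The design choice is just separation of powers: detection and enforcement live in different functions, so the automated half can never take action on its own.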

    EDIT: Also, I ran into an infinite loading bug when submitting this post.