Just some Internet guy

He/him/them 🏳️‍🌈

  • 3 Posts
  • 989 Comments
Joined 1 year ago
Cake day: June 25th, 2023

  • Sometimes it’s also: is it really important to know? I have complicated opinions about a lot of things, because things are nuanced and complicated in the real world, so even if you ask me, it’s not like I can just be for or against Israel or whatever. And I certainly don’t feel like going over it again and again and again as people keep asking about random topics.

    I swear Americans have this weird thing where everyone needs to have a strong opinion on every topic all the time, and talk about it all the time so they can suss out if you’re leaning Democrat or Republican. It’s so weird. I’m not even American, I can’t do anything about it! I’ll keep my opinions where they belong, in my head, thank you.

    It’s important to be educated about those topics, but I don’t feel the need to make them my entire personality, unlike some people. I have better things to do that actually bring me joy, rather than doom and gloom over things I can’t do anything about.


  • Yeah, it’ll depend on how good your coreboot implementation is. AFAIK it’s pretty good on Chromebooks because Google is behind it, whereas a corebooted ThinkPad might have some downsides to it.

    The slowdowns I would attribute to likely bad power management, because ultimately the code runs on the CPU with no involvement from the BIOS unless you call into it, which should happen very rarely.

    Looking up the article seems to confirm it:

    The main reason it seems for the Dasharo firmware offering lower performance at times was the Core i5 12400 being tested never exceeded a maximum peak frequency of 4.0GHz while the proprietary BIOS successfully hit the 4.4GHz maximum turbo frequency of the i5-12400. Meanwhile the Dasharo firmware never led to the i5-12400 clocking down to 600MHz on all cores as a minimum frequency during idle but there was a ~974MHz minimum instead.

    I’d expect System76 laptops to have a smaller performance gap, if any, since it’s a first-party implementation and it’s in their interest for that stuff to work properly. But I don’t have any coreboot computers so I can’t validate; that’s all assumptions.
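
    If you want to check whether a firmware build is capping your clocks the same way, here’s a minimal sketch, assuming a Linux system with the standard cpufreq sysfs interface (values are reported in kHz):

        # Compare the frequency limits the kernel sees against the CPU's
        # advertised range, to spot a firmware-imposed cap like the 4.0GHz
        # one above. Standard Linux cpufreq sysfs paths, values in kHz.
        from pathlib import Path

        base = Path("/sys/devices/system/cpu/cpu0/cpufreq")

        def read_ghz(name: str) -> float:
            return int((base / name).read_text()) / 1e6

        print("min:", read_ghz("cpuinfo_min_freq"), "GHz")
        print("max:", read_ghz("cpuinfo_max_freq"), "GHz")
        print("current:", read_ghz("scaling_cur_freq"), "GHz")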

    That said, for a 5% performance loss, I’d say it counts as viable. My games VM has a similar hit vs native. I’ve been gaming on Linux since well before Proton and Steam, and I’ve taken much larger performance hits before just to avoid closing all my work to reboot for break-time games.



  • Yes, dual GPU. I set that up like 6 years ago, so its use has changed over time. It used to run Windows but now it’s another Linux VM.

    The reason I still use it is that it serves as a second seat, and it’s very convenient at that. The GPU’s output is connected to the TV, so the TV gets its own dedicated and independent OS, and my wife can use it when I’m not. When the VM isn’t running, I use the card as a render offload, so games get the full power of the better card as well.
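
    For the render offload part, a minimal sketch of how a game can be pushed onto the secondary card, assuming Mesa drivers (DRI_PRIME is Mesa’s offload switch; glxinfo here is just a stand-in for the actual game binary):

        # Run a program on the secondary GPU via Mesa's render offload.
        # DRI_PRIME=1 selects the first non-default GPU; "glxinfo -B"
        # stands in for the game and prints which renderer was used.
        import os
        import subprocess

        env = dict(os.environ, DRI_PRIME="1")
        subprocess.run(["glxinfo", "-B"], env=env, check=True)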

    I also use it for toying with macOS and Windows, because both of those are basically unusable without some form of 3D acceleration. For Windows I use Looking Glass, which makes it feel like near-native performance. I don’t play games in it anymore, but I still need to run Visual Studio to build the Windows exes for some projects.

    This week I also used the second card to test out stuff on Bazzite, because one of my friends finally made the switch and I need to be able to test things out in it, as I have no fucking clue how uBlue works.


  • The BIOS does a lot less than you’d expect; it doesn’t really have an impact on gaming performance. For what it’s worth, I’ve been gaming in a VM for years, and it uses the TianoCore/OVMF/EDK2 firmware with no issues. Once Linux is booted, it doesn’t really matter all that much. You’re not even allowed to use firmware services after the OS is booted (the kernel calls ExitBootServices() early in boot); they’re only meant for bootloaders and simple applications. As long as all the hardware is initialized and configured properly, it shouldn’t matter.



  • A lot of them got sucked into the whole “the government is forcing it on you to control the population” thing, and they simply can’t comprehend that anyone would voluntarily wear what they now consider to be the symbol of submission to the government. In their mind it doesn’t work and never worked, and you’re just virtue signalling your support of the government. It’s wild and a lost cause.

    I’d expect it to get much worse now.


  • That should be mostly the default. My secondary Vega 64 reports using only 3W, which would be worth chasing on a laptop, but I doubt 3W affects your electricity bill. It’s nothing compared to the overall power usage of the rest of the desktop and the monitors. Pretty sure even my fans use more.

    The best way to address this would be to first take proper measurements. Maybe get a Kill A Watt and measure usage with and without the card installed to get the true usage at the wall. Also maybe get a baseline with as little hardware as possible. With that data you can calculate roughly how much it costs to run the PC and how much each component costs, and from there it’s easier to decide if it’s worth it.
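
    The math itself is back-of-envelope once you have real numbers; a sketch, with a made-up electricity rate you’d swap for the one on your bill:

        # Monthly cost of an always-on component from its measured draw.
        # Both numbers are placeholders: substitute your own meter
        # readings and the rate from your actual bill.
        IDLE_WATTS = 3          # measured with vs. without the card
        RATE_PER_KWH = 0.15     # assumed $/kWh

        kwh_per_month = IDLE_WATTS * 24 * 30 / 1000   # 2.16 kWh
        print(f"~${kwh_per_month * RATE_PER_KWH:.2f}/month")  # ~$0.32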

    Just the electric bill being higher isn’t a lot to go on. It could just be that it’s getting cold, or hot. Little details can really throw expectations off. For example, mining crypto during the winter is technically cheaper than not mining for me, because I have electric heat: between 500W in a heating strip and 500W mining crypto, both produce the same amount of heat in the room, but one of them also makes me a few cents as a byproduct. You have to consider that when optimizing for cost rather than maximizing battery life on a laptop.


  • One thing to be careful with when allowing some bending of the rules: some people are going to start testing how far they can bend them. Every time you bend a rule you create a precedent for it as well, and you get into nasty fights of “why was I banned but not them” and have your clemency hit you right back in the face.

    If it’s okay to bend some rules, then that should explicitly be the rule instead. Off-topic discussions, for example: you can have the rule be “all top-level comments should be on topic” as a balance, so off-topic discussions can happen, they just can’t take over the whole comment section. If you allow something, make a mod comment explaining why, for transparency and to set the right expectations: “This post is off-topic but is generating on-topic discussion so we’re keeping it.”

    Similarly, well-designed punishments go a long way. For example, an automatic ban after N warnings can be unfair. What you’re really after is not having to warn the same user every day to stay on topic, so the punishment can be more like “if you get warned more than 3 times during a 14-day period, you will be banned for 7 days”. But sometimes the situation is such that you can rack up 10 warnings in the same thread, so make the punishment account for that: count per thread, or whatever makes sense. Understand the common mistakes community members make and how you can steer them in the right direction without being unnecessarily harsh.
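
    As a sketch of what I mean (names and thresholds are illustrative, not any real mod tool’s API), that kind of rule is just a sliding window over warning timestamps:

        # "More than 3 warnings within a 14-day period => 7-day ban",
        # expressed as a sliding window over warning timestamps.
        # Illustrative only; not any real moderation tool's API.
        from datetime import datetime, timedelta

        WINDOW = timedelta(days=14)
        MAX_WARNINGS = 3

        def should_ban(warnings: list[datetime], now: datetime) -> bool:
            recent = [w for w in warnings if now - w <= WINDOW]
            return len(recent) > MAX_WARNINGS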

    With those two combined, it shouldn’t matter whether you moderate like a robot or not. The expectations are clear, forgiving, and fair, while still enforcing some order for repeat offenders. The rules have the flexibility you need baked in, so you don’t have to bend them.


  • I guarantee there will be questions about the cost of setup, maintenance, and the risks.

    And the time spent moderating it, especially if they run their own instance. At least with Twitter/Facebook/YouTube, you get a lot of moderation for free, whether you agree with it or not.

    And if they use another instance, there are other liability questions about which particular instance to choose. If it’s going to represent an official city account, you’d expect some cybersecurity certifications to be a requirement, and all kinds of stuff like that, even if it’s a free service. The instance admins interfering, possibly steering opinions during city elections, etc.

    Nobody cares about decentralized social networks, the technology, or how terrible the other outlets are. For a municipality, you may want to focus on maintaining multiple channels of communication and ways to reach and engage the most users, and then fold the fediverse into that as one more channel, something they should keep an eye on. They’ll need a way to post the same content to all those channels with the least effort, something easy that a trained intern or clerk can do.

    In this case IMO it might even be better to use something like WordPress with the ActivityPub plugin, or an alternative to it. I imagine a city mostly posts announcements and such, so a blog that serves as the official website, and that you can follow and interact with from the comfort of your preferred social service, sounds a lot more appealing than just another social network without that many users. They can even use more plugins to post to Facebook and Twitter as well, all from one place. Given the age of the board, they’re also more likely to know and care about Threads and Bluesky compatibility, just because those have more users, and bureaucratic decisions are based on numbers. Show them a nice graph of how, by supporting both AP and AT, joining the fediverse captures all the users fleeing Twitter.


  • Post the Hyprland config too?

    Does it make the entire screen green by chance, not just the windows? If the shader applies to the whole screen (Hyprland’s screen_shader, for instance, runs on the final composited output), then setting alpha in it doesn’t really make sense and is probably discarded, since your monitor can’t display transparency. You need to make sure it applies on a per-window basis for the windows to be composited as transparent and show your wallpaper behind them.



  • It’s nicknamed the autohell tools for a reason.

    It’s neat, but most of its functionality is completely useless to most people. The autotools are so old I think they even predate Linux itself, so they’re designed for portability between the UNIXes of the time: they check the compiler’s capabilities and supported features and try to find paths. They also wildly predate package managers, so back when they were the official way to install things there was also a need to check for dependencies, find dependencies, and all that stuff. Nowadays you might as well just write a PKGBUILD if you want to install it, or a Dockerfile. There’s just no need to check for 99% of the stuff the autotools check: everything they test for has probably been a standard compiler feature for at least the last decade, and the package manager can ensure the build dependencies are present.

    Ultimately you end up generating a Makefile via M4 macros through that whole process, so the Makefiles that come out look about as good as any other generated Makefiles from the likes of CMake and Meson. So you might as well just go for a hand-written Makefile, and switch to a better tool when it’s time to generate one.

    (If only C++ build systems caught up to Golang lol)

    At least it’s not node_modules