Here is a real-world example of someone reverse engineering compiled code. It might help you understand what is possible and some of the processes involved. https://nee.lv/2021/02/28/How-I-cut-GTA-Online-loading-times-by-70/
Each Electron app ships its own Chromium runtime. With the prevalence of Electron apps, the result is multiple instances of Chromium running on your machine, and Chromium isn’t lightweight. On top of that, there is the philosophical aspect: do we really need to ship an entire browser just to create a UI? That being said, I understand why so many Electron apps are created. HTML/JS/CSS are powerful, easy to use (IMO), and cross platform. I just try to avoid them and use alternatives to Electron.
For real. Wouldn’t even consider it
Definitely not humanity’s biggest problem. But Chromium becoming the de facto browser creates a situation where one entity controls standards and influences how the web operates, which impacts user choice and freedom and reduces incentives for privacy and security improvements.
This already happened once with IE.
I only use Firefox on desktop, but I doubt it will be a relevant choice much longer.
Edit: wrt telemetry, I was referring to the Android operating system. They collect anything and everything on users and all nearby devices.
I see it as a good thing. Apple is not without faults, but anything that keeps Google from harvesting more data is a win for humanity. The Safari browser is the only thing stopping Google’s browser monopoly. Unfortunately it is forced, and 99% of Apple users probably have no clue they are holding up the last line of defense.
Just get the Pixel 8. Sometimes you have to pay a premium. A bird in the hand is worth two in the bush (a used phone you would still have to track down).
We need answers!
I have two PS4 controllers, one of which came with the console in 2014. They both still work like new, and Bluetooth works flawlessly on iPhone and Windows.
No.
Working as a software developer, creating software is a team effort. The developer doesn’t necessarily come up with the requirements; the business drives what the software becomes. You would have to account for the product team as well. What percentage would go to whom?
How would you quantify usage? Number of API requests? Number of downloads? What if the app only runs locally? Are you going to phone home every time the data parser is fired up and charge users per row of data processed?
What about features being disabled or removed? Refactored by another dev? Now you are talking about algorithms to monitor source control to track who receives residuals. Sounds like a mess.
Sounds like an entire governing body would need to be in charge of how to track residuals. More bureaucracy is bad.
Someone else mentioned responsibility for code after you have left a company. I think one of the most relieving things about getting a new job is the mess of systems you leave behind (only to walk into a new mess).
I’ve signed a contract with every employer I have worked for that states what I work on is their IP. Employees should go in knowing that.
Another issue is fair pay. Ideally everyone would be paid fairly for their work. In the U.S., software engineers are known for being compensated well, so I don’t think that is an issue.
To tie this back to the current situation with writers, a precedent has been set in that industry, where residuals are expected. I do believe there is creativity in software development, but the extent of it varies from person to person. Many people write convoluted code their entire careers that simply gets the job done, oftentimes creating more work than they realize when it comes time to extend it.
This concept also seems to go against the most vociferous pioneers of the industry, who advocate for free and open source software: Torvalds, Stallman, Jimmy Wales, etc.
Vampire Survivors multiplayer on PC
Yo man, looks good! No recommendations here, just live your life brother!
I would second Elixir. Either that or Rust. Sure, both are popular, but for good reason: they are completely different from the languages you already use, and you will be introduced to new paradigms.
As a person who used the same stack as you (albeit TypeScript instead of JavaScript), I think it would be a waste of time to learn C#. It is so close to Java, and learning it may make you hate having to use Java, because C# seems a bit better put together. Even though it runs on Linux and is a good language, I don’t think there is ever a reason to choose it over Java, because M$.