• 2 Posts
  • 85 Comments
Joined 4 years ago
Cake day: June 2nd, 2020

  • Wow, I feel like the most upvoted solutions here don’t work, and meanwhile some obvious and widely known alternatives are being completely overlooked.

    ❌ Inspect Element - Many modern sites don’t even include the full article in the paywalled HTML, so this won’t work (see the sketch after this list). Mousing over elements and deleting them one by one is also tedious: it’s easy to accidentally delete an element that encloses the content you meant to keep, or to drive yourself crazy trying to work out how the elements are nested.

    ❌ uBlock Zapper - Similar to the above: it won’t work on stub articles, and manually zapping elements is janky.

    ❌ Disabling JavaScript - Same problem as the above, because many articles are served as stubs anyway. And the HTML layers that block your view don’t have to be built with JavaScript.

    ❌ Rapidly copying the article into Notepad, or racing to print the page - Same problem as the above: lots of sites only serve the stub of an article. Besides, nobody should live this way, frantically print-screening or copying everything; a quick copy grabs all kinds of gobbledygook from the page that you’ll probably have to filter out by hand.

    ❌ Reader Mode - Your browser’s reader mode will be hit and miss because, again, many sites serve stub articles, and the paywall overlay may simply get formatted into reader mode along with the incomplete article.

    ✅ Archive.is - Works! It serves its own archived snapshot of the page, so it doesn’t depend on what the live site sends you.

    ✅ Pocket and Instapaper - Amazingly, nobody has mentioned these, even though they’re probably the longest-running (dating back to 2007-2008), possibly the most widely known, and among the most consistent solutions that still work to this day. They keep their own caches of articles, so they don’t depend on the full content being visible on the page.

    ✅ Other dedicated extensions - Purpose-built paywall-bypass extensions tend to work, but be careful what you’re signing yourself up for.

    🤷‍♀️ Brave - It works, but it’s a Chromium-based browser, so Google ultimately controls its destiny and can steer Chromium toward frameworks that entrench DRM and Google’s preferred web standards.
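
    To see the “stub article” problem for yourself: if the full text was never sent to the browser, no amount of element-deleting, zapping, or JavaScript-disabling can recover it. Here’s a minimal sketch of one way to check, assuming Python with the requests and beautifulsoup4 packages installed; the URL is a placeholder, not a real article.

    ```python
    # Hypothetical check (my sketch, not one of the tools above): measure how
    # much visible article text a paywalled page actually sends to the client.
    import requests
    from bs4 import BeautifulSoup

    def article_text_length(url: str) -> int:
        """Return the character count of visible <p> text on the page."""
        html = requests.get(url, timeout=10).text
        soup = BeautifulSoup(html, "html.parser")
        return sum(len(p.get_text(strip=True)) for p in soup.find_all("p"))

    # A result of a few hundred characters means a stub: the full article was
    # never sent, so no client-side trick (inspect element, zapper, reader
    # mode) can reveal more than this.
    print(article_text_length("https://example.com/some-paywalled-article"))
    ```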


  • I entirely agree with you about Google perpetually shifting the goalposts, which increases complexity and works to their advantage. I think of the standards and the technology as being, in many ways, integrally related.

    I think the idea, though, is that the web has indeed grown so vast that you effectively need teams of teams to keep up. There are browsers built by small teams of developers, but the fruits of those efforts, imo, are not super promising.

    Opera: moved to Chromium.

    Vivaldi: also on Chromium.

    Midori: moved to Chromium.

    Falkon: Developed by the KDE team. Perhaps the closest example to what you are thinking of. It’s functional but lags well behind modern web standards.

    NetSurf: A remarkable and inspiring small browser written from scratch, but well behind anything like a modern browsing experience.

    Dillo: Amazing for what it is, breathing life into old laptops from the 90s, and part of the incredible software ecosystem that makes Linux so capable of doing more with less. It’s a web browser under a megabyte, but it can barely do more than render text and display images with decent formatting.

    Otter: An attempt to keep the old Opera going, but well behind modern standards. Also probably pretty close to what you are suggesting.

    Pale Moon: An interesting old fork of pre-Quantum Firefox, but again well behind modern web standards.

    All of these examples have either moved to Chromium to keep up or fallen well behind the curve of modern browsers. If Firefox had the compromised functionality of Otter, it might shed what modest market share it still has, not to mention get pilloried in comment sections here on Lemmy by aspiring conspiracy theorists.

    I do love all of these projects and everything they stand for (well, the non-Chromium ones at least), but the evidence suggests this is genuinely hard to do.


  • Every corporation invested in unhealthy ventures will say it is necessary, and they can do it ethically, regardless of how misleading or untrue it is. They will launder their bad behavior through an organization to make it appear more ethical and healthy.

    My guy… you linked to a YouTube documentary about the questionable economics of gold and a blog post about an unreliable certification group associated with the Rainforest Alliance. Not because of anything specific to gold or certifications, but… to illustrate the general idea that corporations can be bad?

    The level of generality you have to zoom out to in order to connect those to Mozilla is the same level of zooming out typically used for QAnon conspiracy theorizing.

    This is exactly the kind of thing people make fun of with Six Degrees of Kevin Bacon. If you’re willing to zoom out to six degrees, you can connect Kevin Bacon to anyone in the history of cinema. That doesn’t prove Kevin Bacon is personally connected to everyone in the history of cinema; what it proves is the frivolousness of reasoning from such stretched-out connections. That goes for historical connections, but also for funding connections and, perhaps most importantly here, for conceptual connections. I would venture that trains of thought hinging on such remote connections are a hallmark of fuzzy thinking, which is why it’s terrible to go from “Rainforest Alliance bad” to “… and therefore Mozilla ad privacy is bad.”
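
    To put a rough number on the six-degrees point (my own back-of-envelope arithmetic, not anything from the thread):

    ```python
    # With even a modest number of costars per actor, six hops of fan-out
    # dwarfs the population of cinema, so a six-degree link to Kevin Bacon
    # carries almost no information.
    costars_per_actor = 50  # assumed average; the exact number barely matters
    print(f"{costars_per_actor ** 6:,}")  # 15,625,000,000 potential six-hop paths
    ```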

    That’s not to say one shouldn’t be concerned about Mozilla’s venture into advertising; it’s that this is a terribly incoherent way of showing it, as liable to produce overextended false positives connecting anything to anything as it is to produce any insight.


  • A fundamental flaw in this is that it still involves user data, even if “anonymized”. You can advertise without any user data.

    Right. The reassurance is supposed to be: “don’t worry, no personalized data is retained.” So, ideally, there is no individual record of you, with your likes, your behaviors, and your browser fingerprint aggregated together with whatever third-party data might be purchased, from which machine-learning inferences could be derived. Instead, there’s a layer of abstraction, or several layers, like “people who watch Breaking Bad also like Parks and Rec and are 12% more likely to be first-generation home buyers.” Several such abstracted identity types can be developed and refined.
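
    Concretely, the pitch amounts to something like the sketch below: per-user events go in, only cohort-level tallies come out. This is a hypothetical illustration; the field names and the suppression threshold are my assumptions, not Mozilla’s actual design.

    ```python
    # Hypothetical "aggregate only" reporting: individual records are read but
    # never emitted; only cohort tallies above a size floor survive.
    from collections import Counter

    K_ANONYMITY_FLOOR = 100  # assumed value: suppress cohorts smaller than this

    def cohort_report(events: list[dict]) -> dict[str, int]:
        """Collapse per-user ad events into cohort-level counts.

        Each event looks like {"user_id": 123, "cohort": "streaming-drama-fans"};
        user_id is never included in the output.
        """
        counts = Counter(event["cohort"] for event in events)
        return {c: n for c, n in counts.items() if n >= K_ANONYMITY_FLOOR}
    ```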

    Okay, but who ordered that? Why is that supposed to satisfy us that privacy is retained? You’re still trying to associate me with an abstract, machine-learned identity that, to the best of your efforts, closely approximates what I like and what is most persuasive to me. I don’t think people who care about privacy feel reassured by the anonymized repurposing of their data.

    At the end of the day, it’s the model itself: the incentives inherent in advertising as an economic model. I don’t know what piecemeal negotiation is supposed to stand in for our interests and reassure us, or whose idea it was that this third way would be fine.