this post was submitted on 08 Nov 2024
272 points (97.6% liked)

Selfhosted


I saw this post and I was curious what was out there.

https://neuromatch.social/@jonny/113444325077647843

I'd like to put my lab servers to work archiving US federal data that's likely to get pulled; climate and biomed data seem most at risk. The most obvious strategy to me seems like setting up mirror torrents on academictorrents. Is anyone compiling a list of at-risk data yet?
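
For the torrent route, here's a minimal sketch using the torf Python library; the announce URL and paths are assumptions on my part, so check academictorrents' upload instructions before publishing:

# Build a .torrent for a directory of scraped data so others can mirror it.
# Assumes `pip install torf`; the tracker URL below is an assumption -- use
# whatever announce URL academictorrents gives you at upload time.
from torf import Torrent

t = Torrent(
    path="/data/noaa-mirror",  # directory containing the archived dataset
    trackers=["https://academictorrents.com/announce.php"],  # assumed
    comment="Mirror of at-risk US federal climate data",
)
t.generate()                    # hash all pieces (slow for large datasets)
t.write("noaa-mirror.torrent")  # upload this file to academictorrents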

top 35 comments
[–] vegetaaaaaaa@lemmy.world 3 points 3 hours ago

shaarli bookmarks + hecat (shaarli_api importer + download_media/archive_webpages processors + html_table exporter for the HTML index)
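
Roughly, the hecat config chains those modules as steps; the module names are the ones above, but treat the option keys in this sketch as assumptions and check the hecat README:

steps:
  - name: import bookmarks from shaarli
    module: importers/shaarli_api
    module_options:
      url: https://links.example.org   # your shaarli instance (placeholder)
      token: SECRETTOKEN               # shaarli API secret
  - name: archive pages locally
    module: processors/archive_webpages
    module_options:
      output_directory: webpages       # key name is an assumption
  - name: download linked media
    module: processors/download_media
    module_options:
      output_directory: media          # key name is an assumption
  - name: build the HTML index
    module: exporters/html_table
    module_options:
      output_file: index.html          # key name is an assumption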

[–] danielquinn@lemmy.ca 1 points 2 hours ago

Monolith can be particularly handy for this. I used it in a recent project to archive the outgoing links from my own site. Coincidentally, if anyone is interested in that, it's called django-cool-urls.
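
A minimal sketch of that pattern, shelling out to monolith from Python (assumes the monolith binary is on PATH; the URL is a placeholder):

import subprocess

urls = ["https://example.com/some-article"]  # outgoing links to snapshot
for url in urls:
    name = url.rstrip("/").rsplit("/", 1)[-1] or "index"
    # monolith bundles the page, CSS, JS and images into one HTML file
    subprocess.run(["monolith", url, "-o", f"{name}.html"], check=True)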

For myself: Wayback It. It saves links to multiple web archives and gives me PDF and WARC files.

For others: Archive Team has a few active projects to save at-risk data, and there is an IRC channel where people can suggest other websites worth saving. They also have a wiki explaining how people can help.

[–] jaxiiruff@lemmy.zip 3 points 6 hours ago

Linkding/Linkwarden

[–] Ludrol@szmer.info 15 points 11 hours ago (1 children)

https://wiki.archiveteam.org/

they have a ready-made VM (the ArchiveTeam Warrior) that downloads stuff in a distributed manner and uploads it to archive.org
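
The Warrior also runs as a plain container if you'd rather not import a VM; the image name and web-UI port here are from memory of their wiki, so double-check there:

docker run -d --name archiveteam-warrior \
	--restart=on-failure \
	-p 8001:8001 \
	atdr.meo.ws/archiveteam/warrior-dockerfile
# then pick a project in the web UI at http://localhost:8001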

[–] AnUnusualRelic@lemmy.world 10 points 9 hours ago* (last edited 9 hours ago) (1 children)

archive.org is hosted in the US and could end up being a valid target. It doesn't strike me as being a very good place to securely store anything nowadays. I'd consider anything hosted in the US to be out.

[–] Ludrol@szmer.info 3 points 4 hours ago

Depends on the threat model.

NOAA and others get underfunded or see a change of management and have to close down open access.

or

Data becomes illegal to possess and the feds start knocking on the Web Archive's doors.

or

The Web Archive does something stupid and gets sued/DDoSed.

Only one of these scenarios is tied to recent events, and it's very unlikely. But redundancy would still be good regardless.

[–] yasser_kaddoura@lemmy.world 28 points 13 hours ago* (last edited 12 hours ago) (2 children)

I have a script that archives to several web archives.

I used to solely depend on archive.org, but after the recent attacks, I expanded my options.

Script: https://gist.github.com/YasserKa/9a02bc50e75e7239f6f0c8f04fe4cfb1

EDIT: Added script. Note that the script doesn't include archiving to ArchiveBox, since its API isn't available in the stable version yet. You can add a function depending on your setup. Personally, I depend on Caddy and Docker, so I use the caddy-exec module [1] to execute commands, with this in my Caddyfile:

route /add {
	# only fire when the request actually carries a ?url= parameter
	@params query url=*
	# run `archivebox add <url>` inside the container via caddy-exec
	exec @params docker exec --user=archivebox archivebox archivebox add {http.request.uri.query.url} {
		timeout 0
	}
}

[1] https://github.com/abiosoft/caddy-exec
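
With that route in place, queueing a page is a single request with the target in the url query parameter (the hostname is a placeholder):

curl "https://caddy.example.org/add?url=https://example.com/page-to-save"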

[–] Appoxo@lemmy.dbzer0.com 1 points 3 hours ago* (last edited 3 hours ago)

I hope you are also donating to those projects if you're uploading multiple copies to their services.

[–] opulentocean@lemm.ee 2 points 13 hours ago (1 children)

Would you be willing to share it?

[–] Krafting@lemmy.world 4 points 11 hours ago (1 children)

I archive YouTube videos that I like with TubeArchivist. I have a playlist for random videos I'd like to keep, and I also subscribe to some of my favourite creators so I can keep their videos, even when I'm offline.

[–] vividspecter@lemm.ee 5 points 10 hours ago (1 children)

I'll add pinchflat as an alternative with the same aim.

[–] Krafting@lemmy.world 3 points 9 hours ago

Seems nice, but you need an external player to watch the content, which can be good for some people. I like the web UI of TubeArchivist myself (even though it could certainly be improved).
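
For context, both TubeArchivist and Pinchflat drive yt-dlp underneath; here's a minimal sketch of that core, with the channel URL and paths as placeholders:

from yt_dlp import YoutubeDL

opts = {
    "outtmpl": "archive/%(uploader)s/%(title)s.%(ext)s",  # on-disk layout
    "download_archive": "archive/seen.txt",  # skip already-fetched videos
    "format": "bestvideo*+bestaudio/best",   # prefer best quality
}
with YoutubeDL(opts) as ydl:
    ydl.download(["https://www.youtube.com/@SomeCreator"])  # placeholder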

[–] otter@lemmy.ca 64 points 20 hours ago (5 children)

One option that I've heard of in the past:

https://archivebox.io/

ArchiveBox is a powerful, self-hosted internet archiving solution to collect, save, and view websites offline.

[–] tomtomtom@lemmy.world 6 points 9 hours ago (1 children)

I am using ArchiveBox; it is pretty straightforward to self-host and use.

However, it is very difficult to archive most news sites (and many other sites) with it. Cookie-consent and similar pop-ups will often render the archived page unusable, and archiving frequently fails outright because bot protection (Cloudflare etc.) kicks in when ArchiveBox tries to access the site.

If anyone else has more success using it, please let me know if I am doing something wrong…
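
One thing I still mean to try (the exact key name is my assumption, so verify against the ArchiveBox configuration docs): pointing its headless Chrome at a real browser profile so pages load with my cookies and consent choices already set:

# key name is an assumption -- check the ArchiveBox configuration docs
archivebox config --set CHROME_USER_DATA_DIR=/data/personas/Default/chrome_profile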

[–] danielquinn@lemmy.ca 1 points 2 hours ago

Monolith has the same problem here. I think the best resolution might be some sort of browser-plugin-based solution where you could say "archive this" and have it push the result somewhere.

I wonder if I could combine a dumb plugin with Monolith to do that... A weekend project perhaps.

[–] CrazyLikeGollum@lemmy.world 9 points 18 hours ago

That looks useful; I might host that. Does anyone have an RSS feed of at-risk data?

[–] ptz@dubvee.org 14 points 20 hours ago* (last edited 20 hours ago)

Going to check that out because....yeah. Just gotta figure out what and where to archive.

[–] M600@lemmy.world 7 points 19 hours ago

This seems pretty cool. I might actually host this.

[–] Boomkop3@reddthat.com 2 points 15 hours ago

Eyy, I want that!

[–] fossilesque@mander.xyz 36 points 19 hours ago (1 children)
[–] PunnyName@lemmy.world 25 points 18 hours ago

Everything is at risk.

[–] catloaf@lemm.ee 10 points 20 hours ago (3 children)

I don't self-host it, I just use archive.org. That makes it available to others too.

[–] Zachariah@lemmy.world 34 points 19 hours ago (1 children)

It’s a single point of failure though.

[–] catloaf@lemm.ee 3 points 10 hours ago

In that they're a single organization, yes, but I'm a single person with significantly fewer resources. Non-availability is a significantly higher risk for things I host personally.

[–] otter@lemmy.ca 33 points 19 hours ago

There was the attack on the Internet Archive recently; are there any good options out there to help mirror some of the data or otherwise provide redundancy?

[–] just_another_person@lemmy.world 0 points 20 hours ago (2 children)

Yes. This isn't something you want your own machines to be doing if something else is already doing it.

[–] jcg@halubilo.social 22 points 19 hours ago (2 children)

But then who backs up the backups?

I guess they back each other up. For example, archive.is can take archives from archive.org, and the saved page reflects the original URL and the original archiving time from the Wayback Machine (though it also notes the Wayback URL it was pulled from and when it was archived from Wayback).

[–] just_another_person@lemmy.world 0 points 19 hours ago

Realize how much they are supporting and storing.

Come back to the comments after.

[–] Deebster@infosec.pub 9 points 19 hours ago

Your argument is that a single backup is sufficient? I disagree, and I think most in the selfhosted and datahoarder communities would too.

[–] chemicalwonka@discuss.tchncs.de 8 points 19 hours ago (1 children)

I use M-Discs for long-term archival.

[–] Boomkop3@reddthat.com 3 points 15 hours ago

I heard recently that some companies have started shipping non-M-Disc media labelled as M-Discs. You may want to have a look.

[–] mesamunefire@lemmy.world 3 points 19 hours ago* (last edited 19 hours ago)

Flash drives and periodic transfers.