[Feature request] Store local copy of webpages #132
Like Raindrop does, allow storing a copy of the webpage.
There's a bunch of options to accomplish this.
Here's a nice list of tools to use for inspiration: awesome web archiving
Comments
Monolith might be a good choice here... it's a CLI tool and works well.
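For reference, here is a minimal sketch of shelling out to the monolith CLI from Node. The `archivePage` helper, the output path, and the timeout are illustrative assumptions, not part of hoarder's codebase; it assumes the monolith binary is installed and on PATH.

```ts
// Hypothetical helper: invoke the monolith CLI to save a self-contained copy of a page.
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const execFileAsync = promisify(execFile);

export async function archivePage(url: string, outputPath: string): Promise<void> {
  // `monolith <url> -o <file>` fetches the page and writes a single HTML file
  // with assets inlined, so nothing external is needed to view it later.
  await execFileAsync("monolith", [url, "-o", outputPath], {
    timeout: 60_000, // give up if the page takes too long to fetch
  });
}
```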
btw, hoarder already stores a local copy of the crawled content. That's what you see in the bookmark preview page. But as of right now, it doesn't include the images. It's also only the readable parts of the page, not the entire page. I've seen monolith before and I think it's cool, I might give it a try :)
@MohamedBassem yes, I've seen that, but in fact the images are a must. I usually store articles with explanatory figures embedded, and text alone makes them useless.
@lardissone that makes sense. I think I can give monolith a try and see how it goes. Will see if I can include it in the next release.
Awesome! You rock!
I got a basic version of monolith working! The output pages are huge though, and because everything is inlined, they sometimes take a while to load. I think I'll ship it disabled by default, and maybe in the future add a button to archive pages on demand.
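A minimal sketch of what "disabled by default" could look like on the crawler side, assuming an opt-in environment flag. The flag name, the storage path, and the `archivePage` import are hypothetical and only illustrate the idea.

```ts
// Hypothetical opt-in gate for full-page archival.
// CRAWLER_FULL_PAGE_ARCHIVE is an assumed env var name, not an actual hoarder setting.
import { archivePage } from "./archive"; // the sketch above

const fullPageArchiveEnabled = process.env.CRAWLER_FULL_PAGE_ARCHIVE === "true";

export async function onBookmarkCrawled(bookmarkId: string, url: string): Promise<void> {
  if (!fullPageArchiveEnabled) {
    return; // default: skip the heavy monolith run entirely
  }
  // Inlined pages are large, so keep them on disk (or in object storage) rather than
  // in the database, keyed by the bookmark id. The path below is purely illustrative.
  await archivePage(url, `/data/archives/${bookmarkId}.html`);
}
```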
@MohamedBassem this is great! thanks for adding this!