Just a webpage that lists plain-text URLs as clickable hyperlinks.
This web application is useful when you have a bulk of links and want to visit them in the browser.
It keeps the last click time of each link and lets you add tags and comments to the links.
It's a standalone web application.
All data is stored and processed only in your browser; no data is sent to external servers.
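A minimal sketch of how such per-link data could be kept in the browser, for example in `localStorage` (the record shape, key name, and helper function below are assumptions for illustration, not the app's actual storage format):

```ts
// Hypothetical record shape; the real app's storage format may differ.
type LinkRecord = {
  url: string;           // the link as it appears in the list
  lastClickTime: number; // Unix timestamp in milliseconds
  tags: string[];
  comment: string;
};

const STORAGE_KEY = "link-records"; // assumed key name

function saveRecord(record: LinkRecord): void {
  const raw = localStorage.getItem(STORAGE_KEY);
  const records: LinkRecord[] = raw ? JSON.parse(raw) : [];
  const index = records.findIndex(r => r.url === record.url);
  if (index >= 0) {
    records[index] = record; // update the existing entry
  } else {
    records.push(record);    // or add a new one
  }
  localStorage.setItem(STORAGE_KEY, JSON.stringify(records));
}
```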
Link parsing is very primitive (see the sketch after this list):
- A URL must start with `https://`, or `http://`.
- URLs must be separated by space character(s).
- Space character(s) followed by `#`, or `//`, mark the start of a line comment.
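A minimal sketch of that parsing logic, assuming whitespace-separated tokens and `#`/`//` comment markers (the function name and details are illustrative, not the app's actual code):

```ts
// Extracts URLs from raw text using the primitive rules above:
// tokens are split on whitespace, a token starting with "#" or "//"
// discards the rest of its line, and only http(s) tokens are kept.
function parseUrls(text: string): string[] {
  const urls: string[] = [];
  for (const line of text.split("\n")) {
    for (const token of line.split(/\s+/)) {
      if (token === "") {
        continue;
      }
      if (token.startsWith("#") || token.startsWith("//")) {
        break; // the rest of the line is a comment
      }
      if (token.startsWith("https://") || token.startsWith("http://")) {
        urls.push(token);
      }
    }
  }
  return urls;
}

// Example: the second URL is ignored because it follows a "#" comment marker.
console.log(parseUrls("https://example.com/a # https://example.com/b"));
// -> ["https://example.com/a"]
```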
The delay is measured in seconds.
It uses UrlCleaner from https://github.com/AlttiRi/string-magic to clean the input URLs.
"Url Cleaner" simply cleans the URLs in the list.
"Url Origin" does not clean the URLs in the list, but internally works with the cleaned ones.
It's useful when a link contains access parameters which change from time to time; if you remove them, the link becomes invalid.
For example:
- https://cdn.discordapp.com/attachments/1000789/12556677/Image_34.jpg?ex=667788&is=665544&hm=ab12ac23ef45
- https://cdn.discordapp.com/attachments/1000789/12556677/Image_34.jpg?ex=778899&is=776655&hm=ba21ca32fe54
With "Url Origin" you can internally normalize (clean) the both links to
So, both links will be treated as exactly the same link: they will share the same last click time, tags, and comment.
Also, it's useful when you don't want to reveal to the server that the link was modified, but you still need to normalize it internally to keep the last click time (tags, comment) of the normalized link.
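A sketch of that normalization, assuming the cleaned key is simply the URL without its query string and hash (the function name and the idea of keying stored data by the cleaned URL are assumptions for illustration):

```ts
// Hypothetical "Url Origin"-style normalization: the displayed link is kept
// untouched, but stored metadata is looked up by a cleaned key.
function cleanUrl(url: string): string {
  const u = new URL(url);
  return u.origin + u.pathname; // drop query parameters and hash
}

const a = "https://cdn.discordapp.com/attachments/1000789/12556677/Image_34.jpg?ex=667788&is=665544&hm=ab12ac23ef45";
const b = "https://cdn.discordapp.com/attachments/1000789/12556677/Image_34.jpg?ex=778899&is=776655&hm=ba21ca32fe54";

// Both links map to the same key, so they share last click time, tags, and comment.
console.log(cleanUrl(a) === cleanUrl(b)); // true
console.log(cleanUrl(a));
// "https://cdn.discordapp.com/attachments/1000789/12556677/Image_34.jpg"
```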
There is also a userscript (href-taker.user.js) to grab links.
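A minimal sketch of what grabbing links from a page could look like (a conceptual illustration only, not the actual href-taker.user.js code):

```ts
// Collects unique http(s) href values from all anchors on the current page.
function grabLinks(): string[] {
  const anchors = document.querySelectorAll<HTMLAnchorElement>("a[href]");
  const urls = new Set<string>();
  for (const a of anchors) {
    // `a.href` is the resolved absolute URL.
    if (a.href.startsWith("https://") || a.href.startsWith("http://")) {
      urls.add(a.href);
    }
  }
  return [...urls];
}

// Example: log the grabbed links as space-separated text,
// ready to be pasted into the link list.
console.log(grabLinks().join(" "));
```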