From a3737898e24911dd11f96a40a105008995de0179 Mon Sep 17 00:00:00 2001
From: Nick Sweeting
Date: Tue, 19 Mar 2019 04:51:49 -0400
Subject: [PATCH] Update README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 809cec71..fa2c485f 100644
--- a/README.md
+++ b/README.md
@@ -35,7 +35,7 @@ You can use it to preserve access to websites you care about by storing them loc
 ```bash
 echo 'http://example.com' | ./archive
 ```
-Just install the dependencies and ipe links into the `./archive` command to start archiving.
+After installing the dependencies, just pipe some new links into the `./archive` command to start your archive.
 
 ArchiveBox is written in Python 3.5 and uses wget, Chrome headless, youtube-dl, pywb, and other common unix tools to save each page you add in multiple redundant formats. It doesn't require a constantly running server or backend, just open the generated `output/index.html` in a browser to view the archive. It can import and export links as JSON (among other formats), so it's easy to script or hook up to other APIs. If you run it on a schedule and import from browser history or bookmarks regularly, you can sleep soundly knowing that the slice of the internet you care about will be automatically preserved in multiple, durable long-term formats that will be accessible for decades (or longer).
 