From e9a5f19f7cbac62c46baaff26ef7290ce25f06be Mon Sep 17 00:00:00 2001
From: Nick Sweeting
Date: Wed, 30 Jan 2019 00:47:58 -0800
Subject: [PATCH] Update README.md

---
 README.md | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 49e9f10e..fc35d1cd 100644
--- a/README.md
+++ b/README.md
@@ -31,8 +31,7 @@
 You can use it to preserve access to websites you care about by storing them locally offline. ArchiveBox works by rendering the pages in a headless browser, then saving all the requests and fully loaded pages in multiple redundant common formats (HTML, PDF, PNG, WARC) that will last long after the original content disappears off the internet. It also automatically extracts assets like git repositories, audio, video, subtitles, images, and PDFs into separate files using `youtube-dl`, `pywb`, and `wget`.
 
-If you run it on a schedule and import your browser history or bookmarks continuously, you can sleep soundly knowing that
-at the end of the day the slice of the internet you care about will be automatically preserved in multiple, durable long-term formats that will last for decades.
+ArchiveBox doesn't require a constantly running server or backend; instead, you just run the `./archive` command each time you want to import new links and update the static output. If you run it on a schedule and import your browser history or bookmarks regularly, you can sleep soundly knowing that the slice of the internet you care about will be automatically preserved in multiple, durable long-term formats that will be accessible for decades (or longer).
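
As a sketch of the scheduled-run workflow the new paragraph describes, a cron entry like the one below could re-run `./archive` against a bookmarks export every night. The install path, schedule, export filename, and log path here are illustrative assumptions, not part of the patch:

```bash
# Hypothetical crontab entry (all paths are assumptions): re-run ArchiveBox
# nightly at 01:00 on an exported bookmarks file, appending output to a log.
0 1 * * * cd /opt/ArchiveBox && ./archive /opt/exports/bookmarks_export.html >> /var/log/archivebox.log 2>&1
```

Because the output is static files, nothing needs to stay running between invocations; each run simply imports the new links and regenerates the index.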