wget -p -k http://somewebsite.com
In short, -p pulls in every file the page needs to display and -k rewrites the links so the copy works offline. From man wget:
-p
--page-requisites
    This option causes Wget to download all the files that are necessary to properly display a given HTML page. This includes such things as inlined images, sounds, and referenced stylesheets.

    Ordinarily, when downloading a single HTML page, any requisite documents that may be needed to display it properly are not downloaded. Using -r together with -l can help, but since Wget does not ordinarily distinguish between external and inlined documents, one is generally left with "leaf documents" that are missing their requisites.

    For instance, say document 1.html contains an "<IMG>" tag referencing 1.gif and an "<A>" tag pointing to external document 2.html. Say that 2.html is similar but that its image is 2.gif and it links to 3.html. Say this continues up to some arbitrarily high number.

    If one executes the command:

        wget -r -l 2 http://<site>/1.html

    then 1.html, 1.gif, 2.html, 2.gif, and 3.html will be downloaded. As you can see, 3.html is without its requisite 3.gif because Wget is simply counting the number of hops (up to 2) away from 1.html in order to determine where to stop the recursion. However, with this command:

        wget -r -l 2 -p http://<site>/1.html

    all the above files and 3.html's requisite 3.gif will be downloaded. Similarly,

        wget -r -l 1 -p http://<site>/1.html

    will cause 1.html, 1.gif, 2.html, and 2.gif to be downloaded. One might think that:

        wget -r -l 0 -p http://<site>/1.html

    would download just 1.html and 1.gif, but unfortunately this is not the case, because -l 0 is equivalent to -l inf, that is, infinite recursion. To download a single HTML page (or a handful of them, all specified on the command-line or in a -i URL input file) and its (or their) requisites, simply leave off -r and -l:

        wget -p http://<site>/1.html

    Note that Wget will behave as if -r had been specified, but only that single page and its requisites will be downloaded. Links from that page to external documents will not be followed. Actually, to download a single page and all its requisites (even if they exist on separate websites), and make sure the lot displays properly locally, this author likes to use a few options in addition to -p:

        wget -E -H -k -K -p http://<site>/<document>

    To finish off this topic, it's worth knowing that Wget's idea of an external document link is any URL specified in an "<A>" tag, an "<AREA>" tag, or a "<LINK>" tag other than "<LINK REL="stylesheet">".
==================================================================
-k
--convert-links
    After the download is complete, convert the links in the document to make them suitable for local viewing. This affects not only the visible hyperlinks, but any part of the document that links to external content, such as embedded images, links to style sheets, hyperlinks to non-HTML content, etc.

    Each link will be changed in one of two ways:

    · The links to files that have been downloaded by Wget will be changed to refer to the file they point to as a relative link.

      Example: if the downloaded file /foo/doc.html links to /bar/img.gif, also downloaded, then the link in doc.html will be modified to point to ../bar/img.gif. This kind of transformation works reliably for arbitrary combinations of directories.

    · The links to files that have not been downloaded by Wget will be changed to include the host name and absolute path of the location they point to.

      Example: if the downloaded file /foo/doc.html links to /bar/img.gif (or to ../bar/img.gif), then the link in doc.html will be modified to point to http://hostname/bar/img.gif.

    Because of this, local browsing works reliably: if a linked file was downloaded, the link will refer to its local name; if it was not downloaded, the link will refer to its full Internet address rather than presenting a broken link. The fact that the former links are converted to relative links ensures that you can move the downloaded hierarchy to another directory.

    Note that only at the end of the download can Wget know which links have been downloaded. Because of that, the work done by -k will be performed at the end of all the downloads.
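As a concrete sketch of the two rewrite rules above (the paths and hostname are hypothetical, and only /bar/img.gif is assumed to have been downloaded):

    A link whose target was downloaded:
        before: <img src="/bar/img.gif">
        after:  <img src="../bar/img.gif">

    A link whose target was not downloaded:
        before: <a href="/baz/other.html">
        after:  <a href="http://hostname/baz/other.html">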
--convert-file-only
    This option converts only the filename part of the URLs, leaving the rest of the URLs untouched. This filename part is sometimes referred to as the "basename", although we avoid that term here in order not to cause confusion.

    It works particularly well in conjunction with --adjust-extension, although this coupling is not enforced. It proves useful to populate Internet caches with files downloaded from different hosts.

    Example: if some link points to //foo.com/bar.cgi?xyz with --adjust-extension asserted and its local destination is intended to be ./foo.com/bar.cgi?xyz.css, then the link would be converted to //foo.com/bar.cgi?xyz.css. Note that only the filename part has been modified. The rest of the URL has been left untouched, including the net path ("//") which would otherwise be processed by Wget and converted to the effective scheme (i.e. "http://").
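A sketch of how this might be combined with recursion and --adjust-extension when priming a local cache; the URL is a placeholder:

    wget -r -p --adjust-extension --convert-file-only http://somewebsite.com/

Unlike -k, this leaves the host and directory portions of every link exactly as served and only renames the final filename component to match what was saved on disk.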