Wwwleech Main Page


Current version: 2.0.2

I developed wwwleech because I am too lame to download 100 links from a webpage by hand. Yes, I know there are programs for this (like wget), but AFAIK they don't support 'referring': for each link that you download, wwwleech sends a Referer: header telling the server from which page the file was linked. Some site maintainers think checking this header is an effective way to keep automated programs from copying their site.
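The trick is easy to reproduce. As an illustration only (this is not wwwleech's own code), here is how a Referer header can be attached to a request with Python's urllib; both URLs are made-up examples, and note that the HTTP header name is historically misspelled "Referer":

```python
import urllib.request

# Build a request for a file, pretending it was clicked on a links page.
# Both URLs here are made-up examples.
req = urllib.request.Request(
    "http://www.example.com/files/demo.zip",
    headers={"Referer": "http://www.example.com/links.html"},
)

# urllib.request.urlopen(req) would now send the Referer header along.
print(req.get_header("Referer"))
```

Any server that checks the header now sees the file being fetched "from" the links page, just as a browser click would look.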
Wwwleech is by no means a program that automatically leeches an entire site, since (currently) you can only leech one URL per invocation. However, with a clever and simple script that uses bjot, a small helper program included with wwwleech, you can easily download multiple files.

Where can I get Wwwleech?


  • (None worth mentioning yet)

Static Binaries

  • wwwleech
  • bjot


2.0.2: July 6th, 1999
-Added check for PROXYHOST environment variable
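Reading such an environment variable is a one-liner; a minimal sketch of the idea (the variable name comes from the changelog, the fallback behaviour is assumed):

```python
import os

def pick_host(target, env=os.environ):
    """Return the host to connect to: PROXYHOST if set, else the target itself."""
    return env.get("PROXYHOST", target)
```

So `PROXYHOST=proxy.local wwwleech ...` would route the connection through the proxy without any extra flags.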

2.0.1: July 6th, 1999
-Added kB/s indicator during downloads.
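A kB/s figure is just bytes over elapsed time; a minimal sketch of the arithmetic (not wwwleech's actual code):

```python
def kb_per_sec(bytes_done, elapsed_seconds):
    """Average download speed in kB/s; guards against a zero time interval."""
    if elapsed_seconds <= 0:
        return 0.0
    return (bytes_done / 1024.0) / elapsed_seconds

print(kb_per_sec(2048, 2.0))  # 2048 bytes in two seconds = 1.0 kB/s
```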

2.0: July 5th, 1999
-Rewrote parse_header so it no longer reads the header byte by byte,
 but in larger blocks. This required thinking ;) to avoid losing bytes
 of everything that follows.
-Now using getopt for command-line argument parsing. RTFS
-added a timeout option
-Multiple debug/verbose levels (0=very terse, 1=interesting info, 2=all)
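The tricky part the 2.0 entry alludes to is that a block read can overshoot the end of the header: whatever was read past the blank line already belongs to the body and must not be thrown away. A sketch of that idea in Python (an illustration, not wwwleech's source):

```python
def read_header(recv, bufsize=4096):
    """Read an HTTP response header in large blocks via recv(bufsize).

    Returns (header, leftover): leftover holds any body bytes that were
    pulled in by the final block read and must be kept for the output.
    """
    data = b""
    while b"\r\n\r\n" not in data:
        block = recv(bufsize)
        if not block:          # connection closed before the header ended
            break
        data += block
    header, sep, leftover = data.partition(b"\r\n\r\n")
    return header + sep, leftover
```

With a real socket you would pass `sock.recv`; the returned leftover is written to the output file first, before reading the rest of the body.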

1.3: June 23rd, 1999
-Added the 'Host:' header to HTTP requests
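For context (an illustration, not wwwleech output): with both the Host and Referer headers in place, a request would look roughly like this on the wire; the host and paths shown are made up:

```python
# A hand-built HTTP/1.0 request line plus headers, terminated by a blank line.
request = (
    "GET /files/demo.zip HTTP/1.0\r\n"
    "Host: www.example.com\r\n"
    "Referer: http://www.example.com/links.html\r\n"
    "\r\n"
)
print(request)
```

The Host line is what lets name-based virtual hosts serve the right site even over HTTP/1.0.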

1.0: May 13th, 1999
First release

Mail comments about this page to the author