With the proliferation of viruses and automatic CGI scanners, an average web server receives several probes and unfriendly hits each day, intentional or not.
A firewall is useless against this type of scan: typically its rules allow all traffic on port 80 and make no attempt to analyze individual requests.
This article proposes a simple mechanism that can help protect a web server against some web scanners.
The examples are given for an Apache web server running on Linux, but the mechanism can easily be adapted to other servers and operating systems.
The idea is to dynamically disallow access from machines that make suspicious requests.
To this end, the server itself generates firewall rules whenever a requested URL cannot be found and looks malicious.
Among all of Apache's configuration options, the one of interest here is 'ErrorDocument', which specifies the action to take for each type of error.
So, in httpd.conf, we specify:
ErrorDocument 404 /cgi-bin/websentry
The sentry script follows. It looks at the requested URL (found in $REQUEST_URI); if the URL is "too long" (more than 200 characters in this case) or contains signatures of well-known viruses or scanners, the script simply denies further access from that source using ipchains:
sudo ipchains -I input -j DENY -s $REMOTE_ADDR

The script also returns a normal error page to the visitor:

The URL you tried to access does not exist on this server.
Either you followed an expired link or typed it wrong.
Go to the server's <a href="/">home page</a>.
In the script, the length test and the signature match look like this (cmd.exe, root.exe and default.ida are signatures left by the Nimda and Code Red worms):

if [ `echo "$REQUEST_URI"|wc -c` -gt 200 ]; then
    sudo ipchains -I input -j DENY -s $REMOTE_ADDR
fi
case "$REQUEST_URI" in
    *cmd.exe*|*root.exe*|*default.ida*) sudo ipchains -I input -j DENY -s $REMOTE_ADDR ;;
esac
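Putting all of the pieces above together, a complete sentry script might look like the following sketch. The signature list, the 200-character threshold and the $GATEWAY_INTERFACE guard are illustrative choices, not necessarily the author's exact script:

```shell
#!/bin/sh
# websentry - invoked by Apache through "ErrorDocument 404 /cgi-bin/websentry"

# Decide whether a requested URL looks like a scan: print "yes" or "no".
suspicious() {
    uri=$1
    # Overlong URLs are typical of buffer-overflow probes.
    if [ `echo "$uri" | wc -c` -gt 200 ]; then
        echo yes
        return
    fi
    case "$uri" in
        # Signatures left by the Nimda and Code Red worms (illustrative list).
        *cmd.exe*|*root.exe*|*default.ida*) echo yes ;;
        *) echo no ;;
    esac
}

# The block below runs only when the web server invokes the script as a CGI.
if [ -n "$GATEWAY_INTERFACE" ]; then
    if [ "$(suspicious "$REQUEST_URI")" = yes ]; then
        # Blacklist the source address for all further traffic.
        sudo ipchains -I input -j DENY -s "$REMOTE_ADDR"
    fi
    # Serve a normal error page to the visitor.
    echo "Content-type: text/html"
    echo
    echo "The URL you tried to access does not exist on this server."
    echo "Either you followed an expired link or typed it wrong."
    echo "Go to the server's <a href=\"/\">home page</a>."
fi
```

The detection logic is kept in a small function so it can be tested without touching the firewall; only the CGI-invoked branch actually calls ipchains.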
Finally, the user the web server runs as must be given the right to call the firewall command. Since making ipchains setuid would be a bit too dangerous, we use sudo and add an entry to /etc/sudoers allowing that user to run ipchains.
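Assuming the web server runs as user nobody and ipchains lives in /sbin (adjust both to your setup), a plausible sudoers entry would be:

```
# /etc/sudoers: let the web server user run ipchains without a password
nobody  ALL = NOPASSWD: /sbin/ipchains
```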
- Logging: it is convenient and straightforward to log the activity of the script: date and time, IP address, requested URL, ...
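A minimal sketch of such logging, assuming a hypothetical log file /var/log/websentry.log writable by the web server user:

```shell
# log_hit - append a timestamped record of the offending request.
# REMOTE_ADDR and REQUEST_URI are set by Apache in the CGI environment.
log_hit() {
    logfile=$1
    echo "`date '+%Y-%m-%d %H:%M:%S'` ${REMOTE_ADDR:-?} ${REQUEST_URI:-?}" >> "$logfile"
}

# The sentry script would call it just before blocking, e.g.:
#   log_hit /var/log/websentry.log
```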
- Forgiveness: with each detected scan, the number of blacklisted source addresses grows, and with it the number of firewall rules on the web server. It is a good idea to reset the firewall regularly, both to keep the number of rules at a reasonable value and because attackers are likely to change IP addresses anyway. A simple cron job will do the trick.
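Assuming the dynamic DENY rules were all inserted into the input chain as above, and that this chain holds no static rules you want to keep, a root crontab line such as the following (the schedule is arbitrary) would clear the blacklist nightly:

```
# flush the input chain every night at 4:00, forgetting all blacklisted hosts
0 4 * * *   /sbin/ipchains -F input
```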
This simple solution adds some extra security to a web server by letting the server itself respond to malicious requests.
It proves most useful against systematic scan scripts that try a list of known vulnerabilities on one server.
However, this solution is not perfect: if a real exploit exists in the web server URL decoding routines (e.g. a buffer overflow), the script will not be called.
Similar projects exist: mod_fortress and mod_security are Apache modules that inspect requests made to the web server and can deny suspect ones. Implemented in C and directly interfaced with Apache, they are likely to be more efficient, but they are triggered for every single request the server receives, not, as in our case, only for requests the server cannot find. Moreover, our script solution is not Apache-specific.
The script is available here
Stéphane Billiart