You need to protect the server from the prying eyes of both local and remote users. The simplest strategy is to create a "www" user for the Web administrator/webmaster and a "www" group for all the users on your system who need to author HTML documents. On Unix systems, edit the /etc/passwd file to make the server root the home directory for the www user, and edit /etc/group to add all authors to the www group.
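For example, on many Unix systems the account and group could be created with commands along these lines (a sketch only; the /www server root path and the author names are assumptions that will vary from site to site):

   # create the www group, then list each HTML author in its /etc/group
   # entry, which might end up reading:  www:*:101:alice,bob
   groupadd www

   # create the www user with the server root (here /www) as its home directory
   useradd -g www -d /www www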
The server root should be set up so that only the www user can write to the configuration and log directories and to their contents. It's up to you whether you want these directories to also be readable by the www group. They should _not_ be world readable. The cgi-bin directory and its contents should be world executable and readable, but not writable (if you trust your local Web authors, you could give them write permission for this directory). Following are the permissions for a sample server root:
drwxr-xr-x   5 www      www       1024 Aug  8 00:01 cgi-bin/
drwxr-x---   2 www      www       1024 Jun 11 17:21 conf/
-rwx------   1 www      www     109674 May  8 23:58 httpd
drwxrwxr-x   2 www      www       1024 Aug  8 00:01 htdocs/
drwxrwxr-x   2 www      www       1024 Jun  3 21:15 icons/
drwxr-x---   2 www      www       1024 May  4 22:23 logs/
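These permissions could be established with commands like the following (a sketch, again assuming the server root is /www):

   chown -R www /www                  # make the www user own everything
   chgrp -R www /www
   chmod 755 /www/cgi-bin             # world readable/executable, writable only by www
   chmod 750 /www/conf /www/logs      # no world access to config and logs
   chmod 700 /www/httpd
   chmod 775 /www/htdocs /www/icons   # group writable for the Web authors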
The document root has different requirements. All files that you want to serve on the Internet must be readable by the server while it is running under the permissions of user "nobody". You'll also usually want local Web authors to be able to add files to the document root freely. Therefore you should make the document root directory and its subdirectories owned by user and group "www", world readable, and group writable:
drwxrwxr-x   3 www      www       1024 Jul  1 03:54 contents
drwxrwxr-x  10 www      www       1024 Aug 23 19:32 examples
-rw-rw-r--   1 www      www       1488 Jun 13 23:30 index.html
-rw-rw-r--   1 lstein   www      39294 Jun 11 23:00 resource_guide.html
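Commands like these would establish those permissions (a sketch; /www/htdocs is the assumed document root, and the set-group-id bit on the directories is an optional touch that, on many Unixes, makes newly created files inherit the www group):

   chgrp -R www /www/htdocs
   find /www/htdocs -type d -exec chmod 2775 {} \;
   find /www/htdocs -type f -exec chmod 664 {} \;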
Many servers allow you to restrict access to parts of the document tree to Internet browsers with certain IP addresses or to remote users who can provide a correct password (see below). However, some Web administrators may be worried about unauthorized _local_ users gaining access to restricted documents present in the document root. This is a problem when the document root is world readable.
One solution to this problem is to run the server as something other than "nobody", for example as another unprivileged user ID that belongs to the "www" group. You can now make the restricted documents group- but not world-readable (don't make them group-writable unless you want the server to be able to overwrite its documents!). The documents are now protected from prying eyes both locally and globally. Remember to set the read and execute permissions for any restricted server scripts as well.
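In concrete terms, if the restricted documents live in a directory called /www/htdocs/private (a hypothetical path), the commands might look like this:

   chgrp -R www /www/htdocs/private
   find /www/htdocs/private -type d -exec chmod 750 {} \;   # no world access
   find /www/htdocs/private -type f -exec chmod 640 {} \;   # group read only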
The CERN server generalizes this solution by allowing the server to execute under different user and group privileges for each part of a restricted document tree. See the CERN documentation for details on how to set this up.
If your server starts up as root but runs as another user (see Q11), then it's especially important that the logs directory not be writable by the user that the server runs as. For example, both the Netscape FastTrack and SuiteSpot servers come with the logs directory writable by the user that the server runs as (i.e. "nobody" if you choose the default configuration values). This can make the effect of some CGI bugs much worse than they would normally be. For example, if a CGI bug enables a remote user to run arbitrary commands on the server, the remote user can also gain root access by exploiting the bug to replace a log file with a symlink to /etc/passwd. When the server restarts, the symlink will result in /etc/passwd being chown'd to the server user. The remote user can now exploit the CGI bug again to add an entry to /etc/passwd.

The suggested workaround is to change the ownership of the logs directory so that it's not writable by the server user, and then create empty log and pid files that are owned by the server user (the server won't start up if it can't open these files). Although this solution is less than optimal, because it allows crackers to tamper with the log files, it is much better than the default configuration. This bug may also be present in other commercial servers. (Thanks to Laura Pearlman for this information.)
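Applied to a typical installation, the workaround might look like this (a sketch; it assumes the server runs as "nobody", that the server root is /usr/local/etc/httpd, and that the log and pid file names match your configuration):

   cd /usr/local/etc/httpd
   chown root logs           # "nobody" can no longer create or replace files here
   chmod 755 logs
   touch logs/access_log logs/error_log logs/httpd.pid
   chown nobody logs/access_log logs/error_log logs/httpd.pid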
Of course, turning off automatic directory listings doesn't prevent people from fetching files whose names they guess at. It also doesn't avoid the pitfall of an automatic text keyword search program that inadvertently adds the "hidden" file to its index. To be safe, you should remove unwanted files from your document root entirely.
The NCSA and Apache servers allow you to turn symbolic link following off completely. Another option allows you to enable symbolic link following only if the owner of the link matches the owner of the link's target (i.e. you can compromise the security of a part of the document tree that you own, but not someone else's part).
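In the NCSA and Apache configuration language, both behaviors are controlled by the Options directive in a per-directory section (a sketch; the document root path is an assumption, and omitting FollowSymLinks from the Options line disables link following entirely):

   <Directory /www/htdocs>
   Options Indexes SymLinksIfOwnerMatch
   </Directory>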
Similarly, if you allow server-side includes, you can disable the risky "exec" form while keeping the others with the directive:

Options IncludesNoExec
This is not the scenario that people warn about when they talk about "running the server as root". That warning is about servers that have been configured to run their _child processes_ as root (e.g. by specifying "User root" in the server configuration file). This is a whopping security hole, because every CGI script that gets launched with root permissions will have access to every nook and cranny in your system.
Some people will say that it's better not to start the server as root at all, warning that we don't know what bugs may lurk in the portion of the server code that controls its behavior between the time it starts up and the time it forks a child. This is quite true, although the source code to all the public domain servers is freely available and there don't _seem_ to be any bugs in these portions of the code. Running the server as an ordinary unprivileged user may be safer. Many sites launch the server as user "nobody", "daemon" or "www". However, you should be aware of two potential problems with this approach:
Consider this scenario: a WWW server has been configured to execute any file ending with the extension ".cgi". Using your ftp daemon, a remote hacker uploads a perl script to your ftp site and gives it the .cgi extension. He then uses his browser to request the newly-uploaded file from your Web server. Bingo! He's fooled your system into executing the commands of his choice.
You can overlap the ftp and Web server hierarchies, but be sure to limit ftp uploads to an "incoming" directory that can't be read by the "nobody" user.
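If you do share the hierarchies, a write-only "incoming" drop box is the usual arrangement (a sketch; the path is hypothetical and the right modes depend on your ftp daemon):

   mkdir /home/ftp/incoming
   chown root /home/ftp/incoming
   chmod 1733 /home/ftp/incoming   # world-writable but unreadable; sticky bit set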
In order to run a server in a chroot environment, you have to create a whole miniature root file system that contains everything the server needs access to. This includes special device files and shared libraries. You also need to adjust all the path names in the server's configuration files so that they are relative to the new root directory. To start the server in this environment, place a shell script around it that invokes the chroot command in this way:
chroot /path/to/new/root /server_root/httpd

Setting up the new root directory can be tricky and is beyond the scope of this document. See the author's book (above) for details. You should be aware that a chroot environment is most effective when the new root directory is as barren as possible. There shouldn't be any interpreters, shells, or configuration files (including /etc/passwd!) in the new root directory. Unfortunately this means that CGI scripts that rely on Perl or shells won't run in the chroot environment. You can add these interpreters back in, but you lose some of the benefits of chroot.
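The wrapper script itself can be very simple (purely illustrative; /www is an assumed new root, and the -f argument, a path relative to the new root, points at the server's configuration file):

   #!/bin/sh
   # start the server inside the chroot jail; the httpd binary, its
   # configuration files, and any shared libraries it needs must
   # already have been copied under /www
   exec chroot /www /server_root/httpd -f /conf/httpd.conf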
Also be aware that chroot only protects files; it's not a panacea. It doesn't prevent hackers from breaking into your system in other ways, such as grabbing system maps from the NIS network information service, or playing games with NFS.
If the server is intended for internal use only, the safest place for it is inside the firewall, along with the other local area network hosts:

other hosts
           \
            server <-----> FIREWALL <------> OUTSIDE
           /
other hosts
However, if you want to make the server available to the rest of the world, you'll need to place it somewhere outside the firewall. From the standpoint of security of your organization as a whole, the safest place to put it is completely outside the local area network:
other hosts
           \
other hosts <----> FIREWALL <---> server <----> OUTSIDE
           /
other hosts
This is called a "sacrificial lamb" configuration. The server is at risk of being broken into, but at least a successful break-in doesn't breach the security of the inner network.
It's _not_ a good idea to run the WWW server on the firewall machine itself. In that configuration, any bug in the server will compromise the security of the entire organization.
There are a number of variations on this basic setup, including architectures that use paired "inner" and "outer" servers to give the world access to public information while giving the internal network access to private documents. See the author's book for the gory details.
The TIS Firewall Toolkit, which includes an http proxy gateway (http-gw), is available at:

ftp://ftp.tis.com/pub/firewalls/toolkit/
The CERN server can also be configured to act as a proxy. I feel much less comfortable recommending it, however, because it is a large and complex piece of software that may contain unknown security holes.
More information about firewalls is available in the books Firewalls and Internet Security by William Cheswick and Steven Bellovin, and Building Internet Firewalls by D. Brent Chapman and Elizabeth D. Zwicky.
The Tripwire package, which checksums important files and warns you when they have been modified, can be obtained at:

ftp://coast.cs.purdue.edu/pub/COAST/Tripwire/
You should also check your access and error log files periodically for suspicious activity. Look for accesses involving system commands such as "rm", "login", "/bin/sh" and "perl", or extremely long lines in URL requests (the former indicate an attempt to trick a CGI script into invoking a system command; the latter an attempt to overrun a program's input buffer). Also look for repeated unsuccessful attempts to access a password protected document. These could be symptomatic of someone trying to guess a password.
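A periodic scan along these lines can be automated (a sketch; the log file location and the exact patterns are assumptions to tune for your own site):

   # flag requests that mention system commands or interpreters
   egrep '/bin/sh|perl|rm%20|login' /usr/local/etc/httpd/logs/access_log

   # flag unusually long request lines (possible buffer overruns)
   awk 'length > 500' /usr/local/etc/httpd/logs/access_log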
Lincoln D. Stein (lstein@cshl.org)
Last modified: Mon Sep 13 13:51:54 EDT 1999