Technical overview of scripts.mit.edu


Development work on the SIPB web scripts service began in Fall 2003 in an attempt to make it easier for members of the MIT community to create and maintain dynamic websites at the institute. Various options for providing this capability were considered, and an early iteration of the web script service was implemented by the end of January 2004. This implementation was later extended to support other “script services”, namely the mail script service and the cron script service (aka the “shortjobs” service). In the years since the web script service was originally launched, several limitations have been removed, and many new features (such as the automatic “wiki, blog, etc” installers and CNAME/virtualhost support) have been added.

At the time of this writing (February 2010), about three thousand Athena lockers have signed up to use scripts.mit.edu, and we serve 25 million connections per month.

The following document describes the current design and implementation of the SIPB script services; it is intended to be a detailed technical overview of the internals of the services rather than a gentle introduction to the services (for documentation intended for potential new users, see http://scripts.mit.edu). This document focuses on the web script service since that service is the most popular (and the most complicated) of the services, but much of the information in this document also applies to the other script services. If you have any questions as you read this document, feel free to contact us by e-mailing scripts@mit.edu. We are still working on improving this documentation, and we would like to hear your questions or other thoughts.

Web script service

The SIPB web script service serves executable content out of the web_scripts subdirectories of users’ AFS home directories using Apache, suexec, and a specially-modified OpenAFS kernel module. In order to thoroughly understand how the web script service works, you need to have a general understanding of how Athena file retrieval normally works and how Apache+suexec web serving normally works.

Background: Apache-suexec

If you already know how Apache-suexec web serving normally works, you might want to skip ahead to the next section.

On most Linux distributions, the Apache processes run as an unprivileged user called something like “www-data” or “apache”. Running all users’ CGI scripts under this one account is undesirable, since the server needs to prevent a malicious user from sending signals to, or otherwise interfering with, other users’ processes. Apache is distributed with a program called suexec that can be used to execute each user’s scripts under that user’s own account on the server.

Files are generally associated with suexec in the system-wide Apache configuration file according to their file extension; for example, the scripts.mit.edu Apache configuration specifies that all file names that end in .php should be handled by suexec.

When Apache receives a “~username/path/to/file” request for a file that is associated with suexec, it changes to the directory containing that file and executes suexec, effectively passing it the username from the ~username part of the URL, the default group on the server corresponding to that username, and the filename of the file that has been requested. For example, an HTTP request for http://jbarnold.scripts.mit.edu/demo/demo.pl would result in the execution of “suexec ~[jbarnold’s uid] [jbarnold’s gid] demo.pl” from the directory /afs/athena.mit.edu/user/j/b/jbarnold/web_scripts/demo (a ‘~’ appears before the uid field in order to inform suexec that this request is a “userdir” request).
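
As an illustration of this hand-off, the sequence Apache performs for the example above might be sketched as the small C program below. This is only a sketch: the suexec path and the numeric uid/gid shown are placeholders, and the real Apache builds its argument vector internally rather than through a helper like this.

    /* Illustrative sketch only: the chdir-then-exec hand-off described above.
     * The suexec path and the numeric uid/gid are placeholders. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Apache first changes into the directory containing the requested file... */
        const char *docdir = "/afs/athena.mit.edu/user/j/b/jbarnold/web_scripts/demo";
        if (chdir(docdir) != 0) {
            perror("chdir");
            return 1;
        }

        /* ...and then invokes suexec with the target user (the leading '~' marks a
         * "userdir" request), the target group, and the requested file name. */
        execl("/usr/sbin/suexec", "suexec", "~12345", "12345", "demo.pl", (char *)NULL);
        perror("execl");   /* reached only if the exec fails */
        return 1;
    }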

Background: Athena file retrieval

If you already know how Athena file retrieval normally works, you might want to skip ahead to the next section.

When a user logs into an Athena dialup (a “dialup” is just MIT’s name for a server that people connect to over SSH or Kerberized telnet in order to access their Athena account), the dialup uses the user’s password to obtain Kerberos tickets and AFS tokens on behalf of the user. These credentials are obtained from servers on the MIT network—specifically, the Kerberos KDC and the AFS protection server—and they empower the machine that holds them (ie, the Athena dialup that the user logged in to) to perform any action that the user is allowed to perform. These credentials are stored locally on the dialup (until the user logs out), and they allow the dialup to access the user’s personal files, mail, etc.

Multiple users can log into a dialup at the same time, and the dialup’s operating system must ensure that users are not allowed to perform actions on behalf of other users (for example, when both jbarnold and presbrey are logged in to a particular dialup, the dialup must ensure that jbarnold does not use presbrey’s AFS tokens in order to request one of presbrey’s private files from an AFS file server). OpenAFS keeps track of which AFS tokens belong to which user, and whenever a user requests a file from AFS, it tries to retrieve that file from the file servers using only that user’s corresponding AFS tokens. The AFS server will deny access if these tokens are insufficient for access to be granted.

Athena file retrieval and scripts.mit.edu

The AFS file retrieval model presents some challenges for the web script service. First, it is undesirable to have users’ full account credentials (ie, their Kerberos tickets or their AFS tokens) residing permanently on scripts.mit.edu since the scripts.mit.edu server system does not need this much access (and since the server system having this much access could cause problems for users if the system were to be compromised by an evildoer).

Instead, it would be preferable if every user could grant the script server limited access to their account (specifically, most users would want to grant the server write access to a part of their home directory—such as a “web_scripts” subdirectory—and no access to the rest of it). The AFS access control list system provides essentially this capability: an AFS user can grant another AFS identity partial access to their AFS home directory and its subdirectories.

One implementation approach for a web script service would be to create a new, isolated Kerberos principal and AFS identity for every user of the service. In order for a person to be able to host scripts using the script services, they would need to have a standard Athena AFS identity (named something like “jbarnold”) and a “scripts-specific” AFS identity (named something like “jbarnold.scripts”). AFS would not be aware of any relationship between these two accounts, so, for example, “jbarnold.scripts” would not automatically have any special access to files accessible by “jbarnold”. The person would have a single account on the scripts.mit.edu server system, and this scripts.mit.edu account would always have access to the person’s scripts-specific credentials (so that the user’s account on scripts.mit.edu would always be able to perform AFS operations as that scripts-specific identity). The person could then grant their scripts-specific AFS identity write access to their web_scripts directory in order to give any programs that they run on the server system this level of access.

If scripts.mit.edu had been established by the administrators of the MIT Kerberos realm, this design would have been a viable option since it achieves the desired security properties. This design does have a distinct disadvantage, however. Since this approach requires creating a “scripts-specific” Kerberos principal and AFS identity for every user of the service, an automated system for creating these credentials would be needed in order for this approach to be tractable for a large user base.

Since scripts.mit.edu was not created by IS&T staff, there was little hope of being able to establish an automated mechanism for receiving scripts-specific Kerberos principals and AFS identities, so another approach was utilized.

Instead of trying to obtain one Kerberos principal and AFS identity per user of the script services, the scripts team instead obtained a single Kerberos principal and AFS identity for scripts.mit.edu. This AFS identity is known as “daemon.scripts”. The name “daemon.scripts” is the Kerberos v4 name of the principal; its full Kerberos v5 name is “daemon/scripts.mit.edu”. This name was chosen because it follows the naming pattern that IS&T uses for other machine-specific credentials (ie, “daemon/HOSTNAME”). The name “daemon.scripts” is a bit of a misnomer, since no daemon on scripts.mit.edu is particularly responsible for these credentials.

Since scripts.mit.edu only possesses tokens for one AFS identity (rather than having separate tokens for every scripts.mit.edu user), the server system must share these credentials between all users in some appropriate manner. The goal is to allow each user’s scripts to access that user’s own data, while ensuring that no user’s scripts can perform arbitrary file operations with the shared tokens (since then that user could read and write anyone else’s files). In other words, the scripts.mit.edu AFS client must enforce its own credential-sharing and access control system locally.

The OpenAFS kernel module on scripts.mit.edu has been modified to perform all operations as the authenticated AFS identity daemon.scripts. In other words, daemon.scripts’ AFS tokens are used to authenticate all operations regardless of which scripts.mit.edu user requested the operation. In order to prevent users’ scripts from accessing the data of other users, the OpenAFS kernel module only allows AFS operations that fairly clearly involve a user’s scripts accessing that user’s own data – specifically, scripts.mit.edu only allows a user’s scripts to use the shared credentials to access the user’s own AFS volume. A user can therefore access their own web_scripts directory, but they cannot improperly access other people’s web_scripts directories.

The OpenAFS kernel module code that needs to perform this additional access check knows little more than the uid of the process requesting the AFS operation and the volume id of the file or directory being accessed. In order to simplify the check, we ensure that every user’s uid on scripts.mit.edu is equal to their Athena home directory’s volume id. The AFS kernel module therefore refuses most AFS operations unless the uid of the process requesting the operation is equal to the volume id of the volume containing the data being accessed.

A few AFS operations that do not satisfy this “uid == volume id” condition are still allowed. If system:anyuser would be allowed to perform the operation, the operation is always allowed. The web server’s account on scripts.mit.edu is allowed to perform any operation that requires only AFS “list” access since Apache expects to be able to “cd” to the directory containing a script before it invokes suexec. We also give special access to the web server for all files that start with “.ht”, such as “.htaccess”.
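
Taken together, the checks described in the last two paragraphs amount to a policy roughly like the sketch below. This is a simplified model for illustration, not the actual kernel patch; the structure, helper names, and the placeholder Apache uid are invented.

    /* Simplified model of the scripts.mit.edu AFS access policy described above.
     * This is not the real kernel patch; types, names, and the Apache uid are
     * invented for illustration. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define APACHE_UID 48          /* placeholder uid for the web server's local account */

    enum access_kind { ACCESS_LIST, ACCESS_READ, ACCESS_WRITE };

    struct afs_request {
        uint32_t caller_uid;       /* local uid of the process making the request */
        uint32_t volume_id;        /* id of the AFS volume being touched */
        const char *basename;      /* final path component being accessed */
        int anyuser_allowed;       /* would system:anyuser be allowed anyway? */
        enum access_kind kind;
    };

    static int scripts_access_allowed(const struct afs_request *r)
    {
        /* Operations that system:anyuser could perform are always allowed. */
        if (r->anyuser_allowed)
            return 1;

        /* The core rule: a process may only use the shared tokens against its
         * owner's own volume, which works because every local uid equals that
         * user's home-directory volume id. */
        if (r->caller_uid == r->volume_id)
            return 1;

        /* The web server may perform "list"-type operations (it must cd into the
         * script's directory before invoking suexec) and may read .ht* files
         * such as .htaccess. */
        if (r->caller_uid == APACHE_UID) {
            if (r->kind == ACCESS_LIST)
                return 1;
            if (r->kind == ACCESS_READ && r->basename &&
                strncmp(r->basename, ".ht", 3) == 0)
                return 1;
        }

        return 0;
    }

    int main(void)
    {
        /* Hypothetical volume id; on scripts.mit.edu it equals the owner's uid. */
        struct afs_request req = { 537235026, 537235026, "index.php", 0, ACCESS_WRITE };
        printf("owner writing own volume: %d\n", scripts_access_allowed(&req));

        req.caller_uid = APACHE_UID;
        req.kind = ACCESS_READ;
        req.basename = ".htaccess";
        printf("apache reading .htaccess: %d\n", scripts_access_allowed(&req));
        return 0;
    }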

You can read the scripts.mit.edu AFS patch here.

Apache-suexec and scripts.mit.edu

We have added some additional security checks to suexec and removed others. The two patches we maintain are the cloexec patch and the scripts patch. The cloexec patch fixes an upstream bug and is not terribly interesting. The other patch implements scripts-specific behavior.

The most important change is a new requirement: if suexec is asked to execute a script on behalf of A_USER, it will only honor the request if that script is accessible from underneath A_USER’s web_scripts directory (in other words, Apache cannot ask suexec to execute scripts that are in a user’s cron_scripts directory and are not web-accessible through some symlink in web_scripts).
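
A rough sketch of this containment requirement is shown below, treating the script path as the full path Apache would have constructed under web_scripts. The real check lives in the scripts suexec patch and differs in detail, and the home-directory path shown is just the usual Athena layout.

    /* Rough sketch of the containment requirement described above: the requested
     * script path must lie underneath the target user's web_scripts directory. */
    #include <stdio.h>
    #include <string.h>

    /* Symlinks inside web_scripts are deliberately not resolved
     * (see the removed checks below). */
    static int under_web_scripts(const char *home_dir, const char *script_path)
    {
        char prefix[4096];
        snprintf(prefix, sizeof prefix, "%s/web_scripts/", home_dir);
        return strncmp(script_path, prefix, strlen(prefix)) == 0;
    }

    int main(void)
    {
        const char *home = "/afs/athena.mit.edu/user/j/b/jbarnold";
        printf("%d\n", under_web_scripts(home,
            "/afs/athena.mit.edu/user/j/b/jbarnold/web_scripts/demo/demo.pl"));  /* 1 */
        printf("%d\n", under_web_scripts(home,
            "/afs/athena.mit.edu/user/j/b/jbarnold/cron_scripts/job.pl"));       /* 0 */
        return 0;
    }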

We have removed the following checks:

  1. We do not care about the chmod bits of the file and its containing directory
  2. We do not care about the uid and gid of the file and its containing directory
  3. We do not care whether we are accessing the script through a symlink

We have modified suexec in the following ways (see the sketch after this list):

  1. It sets PHPRC to the current directory so that we support php.ini files in the current directory
  2. It calls static-cat for a number of extensions that should be served to the web directly
  3. It increases the maximum memory limit for Java (although this is broken on Fedora 11)
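
As a very rough illustration of modifications 1 and 2 above, the final exec step might look something like the sketch below; the extension list and the static-cat path are placeholders, and the real patch is more involved.

    /* Very rough sketch of suexec modifications 1 and 2 above, as they might
     * appear just before the final exec.  Paths and extensions are placeholders. */
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    static int is_static_extension(const char *cmd)
    {
        static const char *exts[] = { ".html", ".css", ".png", ".jpg", ".gif", NULL };
        const char *ext = strrchr(cmd, '.');
        for (int i = 0; ext && exts[i]; i++)
            if (strcmp(ext, exts[i]) == 0)
                return 1;
        return 0;
    }

    static void scripts_exec(const char *cmd)
    {
        char cwd[4096];

        /* (1) Let a php.ini in the script's directory take effect. */
        if (getcwd(cwd, sizeof cwd))
            setenv("PHPRC", cwd, 1);

        /* (2) Hand purely static files to static-cat instead of executing them. */
        if (is_static_extension(cmd))
            execl("/usr/local/bin/static-cat", "static-cat", cmd, (char *)NULL);
        else
            execl(cmd, cmd, (char *)NULL);   /* normal CGI execution */
    }

    int main(int argc, char **argv)
    {
        if (argc == 2)
            scripts_exec(argv[1]);
        return 1;   /* reached only if no argument was given or the exec failed */
    }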

Signup process

When a user runs one of the scripts.mit.edu signup scripts (signup-web, signup-mail, signup-cron, etc), essentially two tasks are performed:

  1. The user’s Athena locker is prepared for the service in question by, for example, creating a web_scripts directory and setting its AFS ACL appropriately. This step varies somewhat depending on what service (web, mail, or cron) the user is signing up for.
  2. The user’s scripts.mit.edu account is created (this step simply involves creating appropriate entries for the user in /etc/passwd and /etc/group). This step is the same regardless of what service the user is signing up for.

Any person or group with an Athena locker – that is, an AFS volume and a Hesiod pointer – can create a scripts.mit.edu account.
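
Concretely, the locker-preparation half might look roughly like the following hypothetical sketch, which drives the standard AFS command-line tools; the exact steps, paths, and ACL bits used by the real signup scripts may differ.

    /* Hypothetical sketch of the locker-preparation step, using standard AFS
     * commands; not the actual signup-web code. */
    #include <stdlib.h>

    int main(void)
    {
        /* Create web_scripts and grant the shared server identity write access. */
        if (system("mkdir -p ~/web_scripts") != 0)
            return 1;
        if (system("fs setacl ~/web_scripts daemon.scripts write") != 0)
            return 1;

        /* The server-side half then creates /etc/passwd and /etc/group entries
         * whose numeric uid is the locker's volume id; that id appears on the
         * "vid =" line of "fs examine". */
        return system("fs examine ~/web_scripts");
    }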

The signup scripts are signup-minimal and signup-web.

Group accounts

Any administrator of an Athena locker can sign up that locker for the script services. Group lockers receive their own account on scripts.mit.edu rather than being served by some other account. Giving group lockers their own account increases the system’s security isolation and is consistent with the OpenAFS kernel module’s “uid == volume id” check (since every group locker is its own AFS volume, the “uid == volume id” check suggests that every group locker should have its own scripts.mit.edu account).

If an administrator of a group’s Athena locker wants to be able to perform actions on scripts.mit.edu on behalf of that group, they can do so by invoking a special “su” program on scripts.mit.edu that will check whether they are indeed an administrator, and, if so, provide them with a shell for performing actions on scripts.mit.edu on behalf of the group.

We determine whether someone is an administrator of a locker by checking whether their username (or any visible list that contains their username) has AFS “a” access to the locker.
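
A simplified sketch of that check appears below, driving the standard “fs listacl” command; the real implementation also expands visible lists (pts groups) that contain the username and handles negative rights, both of which this sketch omits.

    #include <stdio.h>
    #include <string.h>

    /* Return 1 if `username` appears directly on the locker's ACL with the "a"
     * (administer) right.  Simplification: pts groups are not expanded and
     * negative rights are ignored. */
    static int is_locker_admin(const char *locker_path, const char *username)
    {
        char cmd[4096], line[512];
        int found = 0;

        snprintf(cmd, sizeof cmd, "fs listacl %s", locker_path);
        FILE *p = popen(cmd, "r");
        if (!p)
            return 0;

        /* ACL lines have the form "  <entry> <rights>", e.g. "  jbarnold rlidwka". */
        while (fgets(line, sizeof line, p)) {
            char entry[256], rights[64];
            if (sscanf(line, " %255s %63s", entry, rights) == 2 &&
                strchr(rights, 'a') && strcmp(entry, username) == 0)
                found = 1;
        }
        pclose(p);
        return found;
    }

    int main(int argc, char **argv)
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s LOCKER_PATH USERNAME\n", argv[0]);
            return 2;
        }
        printf("%s\n", is_locker_admin(argv[1], argv[2]) ? "admin" : "not admin");
        return 0;
    }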

Autoinstallers and Wizard

At an IAP presentation in January 2005, the scripts.mit.edu maintainers introduced the service’s automatic “wiki, blog, etc” installer system. Over the summer of 2009, the autoinstaller system was substantially revamped by a project named Wizard.

Shared infrastructure

  1. /mit/scripts/bin contains one shell script per software package, named scripts-APPNAME. These shell scripts set up environment variables that tell the rest of the install process what needs to be done to install the software package in question. Every shell script in /mit/scripts/bin calls /mit/scripts/deploy/bin/onathena as its last action.
  2. /mit/scripts/deploy/bin/onathena is a shell script that performs installation work that must be done with the user's own credentials (such as signing up for the scripts and SQL services). Parameters are passed to this script via environment variables set by the /mit/scripts/bin wrappers. Then, if the autoinstall is supported by Wizard, onathena SSHes into a scripts.mit.edu server and invokes Wizard to continue the installation. If the autoinstall is not supported by Wizard, onathena performs common operations such as unpacking the tarball and prompting for commonly needed information (such as an “admin” username), and then connects to scripts.mit.edu over SSH and runs a single Perl script in /mit/scripts/deploy/bin chosen based on the software package being installed.

When a user implicitly requests a sql.mit.edu account by running a scripts.mit.edu auto-installer that requires a MySQL database, a .sql/my.cnf file is created underneath the user’s home directory. This file contains the user’s MySQL password so that future invocations of the automatic installer system do not need to prompt the user for their SQL password. (Additionally, the user can later reference this file in order to learn their MySQL password without needing to contact the service maintainers.)

Old system

The old automatic installer system has the following components:

  1. /mit/scripts/deploy/bin contains a Perl script per software package. These Perl scripts are invoked by onathena, and, unlike the /mit/scripts/bin shell scripts and onathena, they are executed on scripts.mit.edu. We could have structured the system so that these scripts would also run on Athena, but we chose not to do so for various reasons that are no longer important. If the software package that is being installed requires that web-based configuration be performed, then the Perl script corresponding to that software package is responsible for interacting with the web interface in order to perform that configuration. We currently use a widely-available command-line utility called “curl” in order to automatically perform any required web interactions.
  2. /mit/scripts/deploy/bin/onserver.pm is a Perl module that all of the Perl scripts in /mit/scripts/deploy/bin include.
  3. /mit/scripts/deploy contains a tarball for every software package (these are not versioned). The shell script onathena uses these tarballs in order to perform the software installations. /mit/scripts/deploy also contains patch files and, in theory, any other miscellaneous materials needed in order to perform the automatic installations.
  4. /mit/scripts/deploy/updates contains copies of security patches that we have applied to automatic installs. These are not versioned.

Wizard

Wizard is a next-generation autoinstall system that uses Git to make it tractable to update autoinstalls that users have locally modified. Documentation lives at the Wizard website, and a live version of the code lives in /mit/scripts/wizard.

Upgrades

When we need to perform an automatic security upgrade for software installed using the automatic installer system, we first need to determine how many installations need to be upgraded and where they live in AFS. This is done with the parallel-find.pl script, which uses scripts-security-upd credentials to automatically find all matching .scripts-version files or .scripts directories in web_scripts directories in AFS. The automatic installer system grants scripts-security-upd write access to the installation directories by default, and users can remove this access if they want to opt out of the automatic upgrades. You can then use wizard mass-upgrade to perform a mass upgrade of Wizard-enabled autoinstalls.

Technical details

binfmt_misc

Normally, when a file on Linux is made executable (eg, chmod u+x FILENAME) and executed (eg, ./FILENAME), the kernel looks at the first line of the file for a shebang line before trying to execute the file as a machine-code binary. Linux supports a mechanism called binfmt_misc that allows a default interpreter to be associated with a file extension so that files with that extension will be executed by default using that interpreter. scripts.mit.edu uses binfmt_misc in order to assist with serving both dynamic content and static content. (The service uses binfmt_misc for dynamic content so that web scripts, such as .php files, do not need to have a shebang line in order to be executed using the correct interpreter when they are visited from the web. The static content serving system is more complicated and is described in the next section.)
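
For reference, an extension-based binfmt_misc handler is registered by writing a rule of the form “:name:type:offset:magic:mask:interpreter:flags” (with type “E” meaning “match on file extension”) to /proc/sys/fs/binfmt_misc/register, as in the sketch below; the handler name and interpreter path shown are placeholders, not the actual scripts.mit.edu configuration.

    #include <stdio.h>

    int main(void)
    {
        /* ":name:E::extension::interpreter:" -- type 'E' matches on file extension.
         * The handler name and interpreter path here are placeholders. */
        const char *rule = ":scripts-php:E::php::/usr/local/bin/php-cgi-wrapper:";

        FILE *f = fopen("/proc/sys/fs/binfmt_misc/register", "w");
        if (!f) {
            perror("binfmt_misc register");
            return 1;
        }
        if (fputs(rule, f) == EOF)
            perror("write rule");
        fclose(f);
        return 0;
    }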

Static content (static-cat)

Using suexec and CGI only for executable content (PHP, Perl, etc.) while serving all static content (HTML, images, etc.) as a completely unprivileged Apache user (generally called something like “apache” or “www-data”) has several problems. First and foremost, this model makes it quite difficult to use scripts.mit.edu to serve protected content (i.e. content that should only be made available to particular people – for example, as specified by a .htaccess file). If Apache is completely unprivileged, it can only read files that are system:anyuser readable, and anything that is system:anyuser readable can be read by any scripts.mit.edu account. Running Apache as a privileged user (that, for example, has the ability to read any content that the web script service can read) is undesirable because this approach significantly worsens the consequences of an Apache security compromise.

Since giving Apache no special access is undesirable and since giving Apache complete access is also undesirable, other options need to be considered. The web script service currently uses a special “static content serving system” (known as “staticsys”) in order to ensure that files with certain extensions are served to the web using heightened privileges (see http://scripts.mit.edu/faq/50 for a complete list of the file extensions affected by this system). This system effectively allows Apache to read files that both 1) have an extension on the staticsys list and 2) are underneath someone’s web_scripts directory. We consider it acceptable to allow Apache to have read access to these files because files with these extensions are almost always intended to be directly web-server-readable when they appear underneath a web_scripts directory. For example, a user who puts a “.html” file in their web_scripts directory probably expects Apache to be able to read that file, although they might not expect a “.db” file to be readable by the web server by default.

Unfortunately, these access checks (that check whether the file’s extension is on the staticsys list and whether the file is under a web_scripts directory) cannot trivially be performed in the OpenAFS kernel module (or at least we have not done so yet), and so we implement this system using another mechanism.

The static content serving system works by setting up a program (/usr/local/staticsys/static) as the default interpreter for files with extensions on the staticsys list (binfmt_misc, described previously, is used to make this association). “static” is a C program that is basically a glorified GNU “cat” that also prints HTTP header information such as a Content-Type header. Apache is then instructed to treat these file extensions as though they are CGI scripts, and so suexec is executed in order to retrieve the content of these files. The AFS read operations are performed as the user’s own scripts.mit.edu account, and so the files can be retrieved even if they are not readable by Apache (ie, even if they are not system:anyuser readable). These extra forks and executions of “static” certainly carry a performance penalty, but we currently use this system because it provides a relatively desirable set of properties.
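
A stripped-down illustration of what a static-cat-style “interpreter” does is shown below: it prints a CGI header chosen from the file’s extension and then copies the file to standard output. The content-type table here is abbreviated (see the FAQ link above for the real extension list), and the real static-cat handles more cases.

    /* Stripped-down illustration of a static-cat-style interpreter: print a CGI
     * header based on the file's extension, then copy the file to stdout. */
    #include <stdio.h>
    #include <string.h>

    static const char *content_type(const char *path)
    {
        const char *ext = strrchr(path, '.');
        if (ext) {
            if (strcmp(ext, ".html") == 0) return "text/html";
            if (strcmp(ext, ".css")  == 0) return "text/css";
            if (strcmp(ext, ".png")  == 0) return "image/png";
            if (strcmp(ext, ".jpg")  == 0) return "image/jpeg";
        }
        return "application/octet-stream";
    }

    int main(int argc, char **argv)
    {
        if (argc < 2)
            return 1;

        FILE *f = fopen(argv[1], "rb");   /* runs as the user's account, so this can
                                             read files Apache itself cannot */
        if (!f) {
            printf("Status: 404 Not Found\r\nContent-Type: text/plain\r\n\r\nNot found\r\n");
            return 0;
        }

        printf("Content-Type: %s\r\n\r\n", content_type(argv[1]));

        char buf[8192];
        size_t n;
        while ((n = fread(buf, 1, sizeof buf, f)) > 0)
            fwrite(buf, 1, n, stdout);
        fclose(f);
        return 0;
    }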

uid mismatch problems

scripts.mit.edu uids do not match Athena uids since every user’s scripts.mit.edu uid is based on their Athena home directory’s volume id. AFS uids normally do not matter at all since AFS exclusively uses its access control list system in order to determine who may access what.

Unfortunately, some pieces of web software (such as “gallery”, the open source photo gallery package) are “too smart for their own good” and think that there is a problem when they notice that files do not always have the uids that the software might expect on an ordinary filesystem (for example, …). We have therefore modified scripts.mit.edu’s OpenAFS kernel module so that it reports that all files and directories that are in the same volume have the same uid (specifically, it reports that these files have a uid equal to the volume id of the volume that contains them). Programs that check whether their uid matches the uid of their data files therefore are not confused when using this system.

For convenience, the scripts.mit.edu OpenAFS kernel module reports fake gids in order to make it easy to determine the Athena account or scripts.mit.edu locker that created a file. Instead of reporting the gid stored on the AFS server, scripts.mit.edu reports the Athena account/locker that created the file.

Specifically, if a file was created on Athena itself, the modified kernel module reports the uid of the Athena account that created the file; if the file was created on scripts.mit.edu, the modified kernel module reports the volume id of the locker that requested the file’s creation. When a user performs an “ls -al” on scripts.mit.edu, the system converts these numbers into appropriate textual usernames and locker names for display. This conversion is handled properly because we have set up appropriate groups on scripts.mit.edu so that the numbers are mapped to textual names as desired.

The system determines who created the file by looking at the AFS uid of the file as stored on the AFS file server – the AFS servers ensure that the uid of every file reflects the AFS identity that originally created the file. When files are created by scripts.mit.edu, they have a uid of 33554596 since that uid corresponds to the AFS identity “daemon.scripts”.
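
One plausible reading of these attribute-reporting rules is sketched below; this is an invented illustration rather than the actual kernel patch, and the only constant taken from the text is the daemon.scripts AFS id. The treatment of scripts-created files relies on the observation that the “uid == volume id” rule confines such files to the creating locker’s own volume.

    #include <stdint.h>
    #include <stdio.h>
    #include <sys/types.h>

    #define DAEMON_SCRIPTS_AFS_ID 33554596u   /* AFS id of daemon.scripts (see above) */

    /* Every file is reported as owned by its volume's id, which is also the
     * volume owner's local uid on scripts.mit.edu. */
    static uid_t reported_uid(uint32_t volume_id)
    {
        return (uid_t) volume_id;
    }

    /* The reported gid names the creator: the AFS server records the creating
     * AFS identity as the file's owner, and if that identity is daemon.scripts
     * the file was created through scripts.mit.edu, in which case the containing
     * volume's id identifies the locker that requested the creation. */
    static gid_t reported_gid(uint32_t server_owner_id, uint32_t volume_id)
    {
        if (server_owner_id == DAEMON_SCRIPTS_AFS_ID)
            return (gid_t) volume_id;
        return (gid_t) server_owner_id;
    }

    int main(void)
    {
        /* A hypothetical file in volume 537235026, created through scripts. */
        printf("uid=%u gid=%u\n",
               (unsigned) reported_uid(537235026),
               (unsigned) reported_gid(DAEMON_SCRIPTS_AFS_ID, 537235026));
        return 0;
    }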

Analysis of design and implementation

The current scripts.mit.edu design has the following advantages:

  • The design requires only one dedicated Kerberos principal (and corresponding AFS pts id), which is desirable since the project could not obtain a principal per user from MIT in an automated manner (MIT’s IS&T Accounts group generally does grant “/extra” principals upon request, but obtaining one requires effort from both the user and the Accounts staff).
  • The design requires a small, constant number of IP addresses. This was historically a benefit because MIT charged for IP addresses; this is no longer the case. However, obtaining an IP address for every scripts.mit.edu user would still be difficult, since serving them all would require an entire subnet.
  • The design allows users to utilize essentially any programming language with little or no additional work required from the service maintainers (people can install Linux binaries into AFS for use with the system).
  • Linux is a popular web server operating system, and much open source software and many tools work with Linux, Apache, and suexec.
  • The design is significantly more efficient than creating a dedicated virtual machine for every user of the service.
  • Users of the service are not responsible for maintaining their own server system (whereas they might be responsible for system administration if scripts.mit.edu were to be implemented using a virtual-machine-based design).

Of course, the design also presents particular challenges:

  • Securing Linux against local compromise can be difficult and requires frequent, low-latency patching of the system’s software (ie, whenever a security vulnerability in Linux is discovered, we must fix the problem within hours rather than days).
  • Upgrades to the server’s software can potentially cause users’ scripts to break, particularly when users’ scripts rely on software that does not respect backwards compatibility. We do our best to ensure that upgrades proceed smoothly, but some upgrade-related problems still arise occasionally. We try to provide a mechanism for users to confirm that their scripts will continue working through a major server change; in the past, we have set up http://scripts-test.mit.edu one week before major service changes so that people can test their scripts against the new server configuration before the transition occurs.

Core values

  • “Everything should be automated”: Whenever feasible, users should not need to contact a scripts maintainer in order to take advantage of the services that we provide. People tend to be discouraged by the latency and extra work involved with making a non-automated request. Additionally, responding to requests that should be automated consumes some of the valuable time of the system maintainers.
  • “As few added restrictions as possible”: We view scripts.mit.edu as a means of providing every member of the MIT community with a self-maintaining server on MITnet. In general, the service maintainers should not enforce arbitrary limitations that would not exist for someone running their own server. For example, we generally do not enforce any restrictions about how scripts.mit.edu may be used that are not imposed on us by MIT.

Development

If you’ve made it this far, you should help us improve scripts.mit.edu! You can see our list of open development projects at https://scripts.mit.edu/trac/report/3. Feel free to ask questions about any of the tasks on that list (or, even better, volunteer to help us with one of them!). You can reach us by e-mailing scripts@mit.edu.

You can perform a checkout of the repository by running "svn co svn://scripts.mit.edu/trunk"; a web interface for browsing is available at http://scripts.mit.edu/trac/browser/trunk
