Integrating Linux into a large-scale production network running SPARCs and Windows.
Linux has come a long way in these past few years, no longer a geek toy and well on its way to being a mainstream operating system. Linus Torvalds, with tongue firmly planted in cheek, is striving for world domination; however, one of the more intriguing strengths of Linux is its friendly and fruitful coexistence with other systems, UNIX or not. In fact, its standards-based approach is one of my favorite ways of distinguishing between it and certain commercial products.
This being said, I would like to present some experiences with integrating Linux machines into a production computer network of over 1000 nodes, divided into about two-thirds SPARCs running Sun Solaris and one-third PCs running one of the variety of software packages emanating from Redmond, WA.
Any large UNIX site usually employs the operating system’s easy yet effective mechanisms for maintaining a large number of users working on an equally large number of machines. To take part in such a network, Linux needs to be able to use, or even provide, each of these services. At our site, these are:
- NIS: Sun’s Network Information Service, formerly called Yellow Pages. This is a well-proven mechanism for distributing any kind of information that can be represented as lists, such as user accounts, passwords and printer definitions.
- NFS: the Network File System. This allows mounting of remote file systems, typically with a mix of static and automatic mounts. The former are configured on a per-machine basis and the latter are distributed via NIS maps.
- Unified login scripts: a carefully set up and maintained web of shell scripts, providing users with the required settings to work with their respective applications. This eliminates the need for each user to hack together her own environment with all the support implications.
Additionally, it is almost self-evident that a Linux box should be able to access networked or server-based printers via the LPR protocol as well as utilize all other communication protocols like HTTP, NNTP, SMTP or FTP. Linux has a well-deserved reputation for being an excellent performer in this respect.
The Information Source: Enabling NIS
A three-step approach lets a Linux machine participate in an NIS domain, beginning with the installation of the necessary software. In the case of the Red Hat 5.x distributions, this comes in two RPM packages: ypbind and yptools. The former provides the ypbind executable, which must run as a daemon on any NIS client and handles communication with the NIS server. The latter contains various NIS-related tools for querying NIS tables (ypcat, ypmatch, yppoll) and for inspecting or setting the client’s server binding (ypwhich, ypset).
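On a Red Hat system, checking for and installing the packages is a one-liner each (the exact package file names vary by release):

# check whether the NIS client packages are already installed
rpm -q ypbind yptools
# if not, install them from the distribution CD-ROM
rpm -ivh ypbind-*.rpm yptools-*.rpm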
Next, the server-side configuration is modified to let the new NIS client participate in the domain. NIS has only basic security mechanisms, with a common one being the “securenets” list enumerating the networks considered secure to participate in a domain. If your Linux box lives in a different subnet than your other UNIX boxes, make sure that network is present in your server’s securenets list (commonly located in /var/yp/securenets on a Sun server).
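A securenets entry consists of a netmask followed by a network address; the subnet below is purely illustrative:

# /var/yp/securenets on the NIS server: allow the Linux box's subnet
255.255.255.0   192.168.5.0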
The final hurdle is the client-side configuration. First of all, the ypbind daemon needs to know the NIS domain name (another security precaution, although a rather fragile one). This is set in the file /etc/yp.conf, together with either an NIS server name or the instruction to broadcast for a server. The file needs to contain only a single line in this format:
domain my.NIS.domain server bigboy.my.net
The server bigboy.my.net must have an entry in the hosts database, /etc/hosts.
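If it does not already, a single line in /etc/hosts takes care of that (the address is, of course, illustrative):

192.168.5.1   bigboy.my.net   bigboy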
Now the NIS domain name needs to be set. This can be accomplished via the domainname my.NIS.domain command. To make this setting persist even after a reboot, the domain name should also be entered into the system’s network configuration, in the case of Red Hat: /etc/sysconfig/network:
DOMAINNAME=my.NIS.domain
After creating a directory /var/yp/binding for ypbind to store binding information in, ypbind can be started via its script: /etc/rc.d/init.d/ypbind start.
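Taken together, the one-time client setup boils down to three commands:

domainname my.NIS.domain
mkdir -p /var/yp/binding
/etc/rc.d/init.d/ypbind start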
Next, we have to let the system know to actually use NIS to resolve things like hostnames, user IDs and passwords. To do this, edit the file /etc/nsswitch.conf and change the corresponding lines for each service with which you would like to use NIS, e.g.:
passwd:    files nis
shadow:    files nis
group:     files nis
hosts:     nis files dns
automount: files nis
In the above examples, the login program trying to authenticate a user will consult /etc/nsswitch.conf, see the sequence files nis and look for the information in the respective files. Upon failure, it will query the NIS service for the user’s password and shadow entries. If this also fails, the login is denied. The reason to have the entry listed as files nis is that the root user is normally not defined in NIS (this is considered a security hole). In the case of a network problem, looking in the local passwd/shadow files first lets root log in without further problems.
This is basically it. Once the file is edited and ypbind is running correctly (verify this by looking for suspicious messages in /var/log/messages and in the corresponding file on the NIS server), your machine is part of the NIS domain. Of course, you can reboot if that makes you feel better; it also allows you to test that the system will come up with the correct configuration.
You can verify ypbind’s connection to the server (a “binding” in NIS parlance) using the ypwhich command. You can also manually look up information: the command ypmatch joe passwd will show Joe’s entry from the NIS password map.
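A successful session might look like this (Joe’s entry is illustrative, and on a shadowed site the password field will simply read x):

$ ypwhich
bigboy.my.net
$ ypmatch joe passwd
joe:x:501:100:Joe User:/home/joe:/bin/tcsh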
The Net: Enabling NFS
Now that NIS is working, let’s attend to NFS. Depending on whom you listen to, NFS is either an evil beast or the magic bullet for all your user data-related problems. In my opinion, NFS makes a large network with huge amounts of user data easy and transparent to set up, but it carries the heavy performance penalty common to all networked file systems: count on NFS access being on the order of ten times slower than local hard disk access. Slow or not, large sites simply can’t live without it.
That said, setting up an NFS client basically follows the same steps as for the NIS client: software installation, server side configuration and client configuration changes.
NFS requires a kernel built with support for it, typically as a kernel module, though you can compile it into the kernel itself if you wish. If your kernel does not yet have NFS support, enable it under “Filesystems”: go to your kernel source directory (most likely /usr/src/linux) and type make xconfig or make menuconfig. Obviously, to use NFS, the kernel also needs network support enabled. After compiling and installing the NFS module, your system has all the software it needs. I’d suggest installing one piece of optional software, though: showmount. Look for a package called something like nfs*client* on your distribution CD-ROM.
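For a 2.0-series kernel such as the one shipped with Red Hat 5.x, the whole procedure looks roughly like this:

cd /usr/src/linux
make menuconfig        # enable NFS file-system support under "Filesystems"
make dep
make modules
make modules_install
modprobe nfs           # load the freshly built module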
On the NFS server, there is usually a file stating which file systems are exported. Depending on the flavor of UNIX, it can be called /etc/exports (SunOS, Linux, *BSD), /etc/dfs/dfstab (Solaris, other System V variants), or something completely different. An OS-independent way of finding that information is to run the showmount command against the NFS server, e.g., showmount -e bigboy. This will list the exported file systems along with the machines or groups of machines allowed to mount them.
Large sites usually have a need to manage machines in groups. For example, all users’ desktop workstations should be able to mount any of the home directories, whereas only servers might be allowed to mount CDs from a networked jukebox. In NIS, this mechanism is provided by the netgroup map, and chances are the showmount command will list only the netgroups allowed to access specific exports. A sample output would be
/home/ftp   (everyone)
/home       desktops
/var/mail   mailservers
everyone is a special name denoting every machine, while desktops and mailservers are netgroups. Executing
ypmatch -k desktops netgroup
might produce:
desktops: penguin, turkey, heron
For your Linux machine to be able to access the /home NFS share, it must belong to the desktops netgroup; otherwise, the server will deny access.
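Adding the machine is a server-side change: on the NIS master, the netgroup source file (commonly /etc/netgroup) gains one member triple of the form (host,user,domain), and the maps are rebuilt. The new host name tux is illustrative:

# /etc/netgroup on the NIS master
desktops (penguin,,) (turkey,,) (heron,,) (tux,,)

# rebuild and push the NIS maps
cd /var/yp && make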
Once your server lets you in, the last obstacle is advertising the NFS exports to your client. The easiest way to handle this is a permanent mount entry in your /etc/fstab, such as:
bigboy:/export/home /home nfs defaults 0 0
This way, /home would be hard-mounted on each boot. While this approach certainly works very well, it has limitations. At our site, we have a mount point for each user’s home directory; e.g., /home/joe for Joe and /home/sue for Sue. With 1200+ users distributed across ten file servers, hard-mounting each directory would require much housekeeping, and a server replacement or elimination would be a major headache.
Fortunately, there is an elegant way around this, called the automounter. This enterprising little daemon watches a set of mount points specified in files for access by the operating system. Once an access is detected, the automount daemon tries to mount the export belonging to the mount point. Other than a slight delay, neither applications nor users notice a difference from a regular mount. As might be expected, the automounter will release (umount) a mounted file system after a configurable period of inactivity.
To make use of the automounter, install the autofs package and look at the auto.* files it installs in /etc. The first and most important is /etc/auto.master, which lists each mount point to be supervised by the automounter along with its associated map, usually named /etc/auto.mountpoint. Each of these maps follows the basic schema set forth in /etc/auto.misc:
cd  -fstype=iso9660,ro,user  :/dev/cdrom
fd  -fstype=auto,user        :/dev/fd0
In this example, the CD in /dev/cdrom is mounted on /misc/cd with the usual options associated with a CD drive, whereas the floppy currently in drive /dev/fd0 is mounted on /misc/fd. Note that the mounts will not occur until the directory is accessed, e.g., by doing ls /misc/cd, and that the automounter automatically creates each of the mount points listed in the file.
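The auto.master line tying the /misc mount point to this map might read as follows (the timeout, in seconds, is illustrative):

/misc   /etc/auto.misc   --timeout=60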
“Great”, you say, “now, what’s all that got to do with NFS and NIS?” Well, the automount maps are actually lists which can be maintained on the NIS server and distributed to the clients. For example, a typical NIS map named auto.home would look like this:
joe  bigboy:/export/home/2/joe
sue  beanbox:/export/home/sue
Here, then, is the reason to have the huge number of mount points mentioned earlier. If Joe changes jobs and joins the finance department, his home directory can be moved to beanbox. His new entry would then read:
joe beanbox:/export/home/joe
but the mount point on his desktop machine is still /home/joe. In other words, even though he changed to another server, he does not need to adapt any of the environment settings, application data paths or shell scripts he might have. Not convinced? Type grep $HOME $HOME/.* to see how many instances of your home path are actually saved everywhere.
If, during NIS configuration, you edited your /etc/nsswitch.conf to contain the line:
automount: files nis
the automounter will read its startup files from /etc/auto.master. After that, it will query the NIS server for an NIS map named auto.master and will process the entries accordingly. Thus, the above change for user Joe needs to be made only one time on one system (the NIS master), and it will be known to all clients. No entries to forget, no conflicting client configurations. How’s that for efficiency?
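On the NIS master, the auto.master map itself can be as small as a single entry pointing the /home mount point at the auto.home map shown above:

/home   auto.home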
Login Scripts: A Uniform Approach
What we’ve done so far has been largely tech-oriented to let our Linux box be a part of the enterprise network. Login scripts, on the other hand, are to be understood on an administrative level. Sites with only a few users may have no need for them, but if you have to support hundreds or even thousands of users of varying degrees of computer literacy and quite likely in different physical locations, you start to look at the situation differently.
Two of the most widely used shells, tcsh and bash, as well as their precursors csh and sh, use a two-step setup procedure. Without going into too much detail, a file called .login or .profile is executed on login. In addition, each invocation of a new shell (e.g., opening a new xterm window) executes a file called .cshrc (.tcshrc) or .bashrc. All of these files reside in the user’s home directory; there are also system-wide default login and profile scripts (note the missing “.”s) stored in the /etc directory.
When a new user is set up at our site, we give her a default set of .login and .cshrc (the csh variants are the standard shells, but the same could also be done for bash) plus some other files. The only thing that needs to be adjusted is the setting for the default printer in .cshrc:
setenv PRINTER <name-of-default-printer>
Listing 1. Example .login Script
# get primary group
setenv SETUP /usr/local/etc/dotfiles/\
`groups | cut -f1 -d' '`
# set configurations
source $SETUP/setup.WORDPROC
source $SETUP/setup.WEBBROWSER
source $SETUP/setup.OPENWIN
[...]
source .cshrc
# start GUI
source $SETUP/startup.OPENWIN
#
# script continues only after GUI has terminated
rm -rf .wastebasket/* core
logout
An example default .login script is shown in Listing 1. First, the script figures out the primary group (inside the backticks) and loads the variable $SETUP with the path to that group’s setup files, e.g., /usr/local/etc/dotfiles/finance if the user’s primary group is finance. Then, a number of so-called setup files are sourced (included) into the currently running script and the commands in them executed. In the case of the setup.OPENWIN script, it might look like this:
setenv OPENWINHOME /usr/openwin
setenv MANPATH ${MANPATH}:/usr/openwin/man
setenv LD_LIBRARY_PATH ${LD_LIBRARY_PATH}:/usr/openwin/lib
These scripts ensure each user gets the same environment settings for her particular group. Finally, the windowing system (in this case, OpenWindows, Sun’s version of X) is started. The startup.OPENWIN will not return until the user explicitly logs out of the GUI, at which time execution of this .login resumes. It proceeds to delete some files the user may have left behind and logs the user out of the system.
Again, the beauty of this concept is in its simplicity. If we install a new web browser, we need to change only the central setup file to point to the newly installed version. Upon the next login, each user receives the required settings to start it successfully.
The same concept is also employed for the arrangement of each user’s GUI environment. The OpenLook Window Manager, OLWM, as well as most other window managers, comes with an application menu which can be customized to include whichever applications a user might like to access easily. The menu description is stored in a file named ~/.openwin-menu. Again, rather than having everyone create or modify their own menus, this file is merely a link to the central one stored in $SETUP/.openwin-menu. In it, a reference to a private menu stored in $HOME/.openwin-private gives each user an easy chance to add personal items. The central menu files are always carefully maintained to make sure each application works as advertised and are updated each time a new application is brought on-line. Support personnel are grateful they can maneuver a user through the menu by phone while looking at the exact same version of the menu.
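Setting up a new user’s menu thus boils down to a single symbolic link, created once:

ln -s $SETUP/.openwin-menu $HOME/.openwin-menu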
Slipping into the Establishment
Listing 2. Code to Integrate Linux into the Existing Setup Scripts
[...]
# get operating system name
setenv OS `uname`
# common setup for all OSs
source $SETUP/setup.PROXIES
# different setups for different OSs
switch ($OS)
case "SunOS":
    source $SETUP/setup.WORDPROC
    source $SETUP/setup.WEBBROWSER
    source $SETUP/SunOS/setup.OPENWIN
    source $SETUP/SunOS/startup.OPENWIN
    breaksw
case "Linux":
    source $SETUP/setup.WEBBROWSER
    source $SETUP/Linux/setup.X11
    source $SETUP/Linux/startup.X11
    breaksw
[...]
default:
    echo "Not prepared for your OS: $OS."
    echo "You'll only get a shell."
    $SHELL
endsw
[...]
Integrating a Linux machine into this organization requires the basic setup scripts to differentiate between operating systems wherever paths differ or features are not available on every platform. Since OpenWindows is a proprietary Sun package, a way has to be found for a user to get her OpenWindows setup when logging into a Sun box and a reasonably similar X11 setup on a Linux box. To keep the impact on existing scripts low, one good way is to insert the passage shown in Listing 2 into the user’s .login. This example first sources all setups that are common across platforms, like the default HTTP proxy settings and possibly the NNTPSERVER variable used by newsreaders. Then, a switch statement treats each supported operating system independently. In this case, setup.WORDPROC is executed only for SunOS because we have no word processor for Linux. The setup.WEBBROWSER script is shared and lives directly in the $SETUP directory because it can differentiate between operating systems itself; this makes sense if you use the same applications, e.g., gcc and Netscape, across all platforms. The OpenWindows and X11 scripts are platform specific.
Adding support for other systems would be easy to implement in the same way. The “default” statement catches unsupported systems and leaves the user at a shell prompt. Quitting this shell, as well as quitting the GUI in the other cases, will continue .login execution and conveniently log the user out.
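A Linux-side setup.X11 can mirror setup.OPENWIN and need hold little more than the XFree86 path adjustments; a minimal sketch (the contents are illustrative) might look like this:

# $SETUP/Linux/setup.X11 (illustrative sketch)
setenv MANPATH ${MANPATH}:/usr/X11R6/man
setenv LD_LIBRARY_PATH ${LD_LIBRARY_PATH}:/usr/X11R6/lib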
Having made our way through the setup scripts, the last obstacle to tackle is the GUI environment. Both X11 and OpenWindows make use of the user’s .xinitrc script. Luckily, this is just another shell script and can be treated the same way as .login: add a switch (or, in sh syntax, case) statement to distinguish between operating systems. Generally, though, this shouldn’t be necessary if you set up the paths correctly and call whatever X clients .xinitrc starts without full paths. So, rather than having:
/usr/bin/panel &
/usr/X11R6/bin/xload &
/usr/local/bin/wmaker
to start two clients and the window manager, it is much more convenient to write:
if which panel > /dev/null 2>&1 ; then
    panel &
fi
xload &
wmaker
This assumes that both xload and WindowMaker are locatable through $PATH. The GNOME panel application may or may not be present and will be started only if which finds it on the $PATH.
Conclusion
Introducing a renegade operating system like Linux into the inner sanctum of a major company’s production network takes a lot of enthusiasm, persuasion and lobbying, in addition to a fine feeling for nestling it in as smoothly and unobtrusively as possible. If people take notice without being pointed toward the change, something went wrong.
Without sacrificing any of its inherent flexibility, Linux fits the bill almost perfectly. I always take special pride in demonstrating what Linux can do whenever one of its commercial brethren fails to accomplish something satisfactorily, whether the issue is performance, the flexibility of open-source software, or the speed with which the operating system develops. This benefits the whole company and has made Tux a well-liked companion on many a desk as well as in the server rooms. It is a testament to the superiority of this OS and should definitely help Linus toward his ultimate goal after all.
All listings referred to in this article are available by anonymous FTP in the file ftp.linuxjournal.com/pub/lj/listings/issue68/3528.tgz.