Monday, April 30, 2007

Don't Skimp on OS Downloads

In the world of OSS I often do network installs. When I'm downloading a CD I reason that I shouldn't download things I don't need, so I usually choose the minimal net-install CD. With Debian that means I choose the netinst image, and for OpenBSD I choose the cdxx.iso. One problem I've been running into is the availability of those remote repositories.

Today I went to install an OpenBSD 3.9 system to replace a production system. I stepped through the installer with the same options as before, it being a standard install for us. I was hitting my enter key liberally when suddenly I got a prompt I didn't expect. It turned out that the mirror I had been using was no longer accepting anonymous logins. I tried a few other listed mirrors and struck out on three in a row with an invalid directory error. I logged into them via ftp from my laptop and found they only had 4.0 and 4.1 repositories. 3.9 was nowhere to be seen. I did a little hunting and finally found a mirror in the US that had 3.9, but it shook me. I had come to expect repositories to stay available on the internet for quite some time. 3.9 was released in May 2006--only a year ago--yet it was already being taken off major mirrors.

This incident reminds me of how important it can be to download all your packages and keep a stable repository around. If a CD with the whole OS isn't handy, a local repository should be, at least for any OS we install in large numbers.
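
If I do set up a local copy, a simple mirror of the release tree is enough. The following is a rough sketch--the mirror hostname and local path are placeholders, and the directory layout varies a bit from mirror to mirror:

# Mirror the OpenBSD 3.9 release tree (install sets and packages) locally.
wget --mirror --no-parent --no-host-directories --cut-dirs=1 \
    --directory-prefix=/srv/mirror \
    ftp://mirror.example.org/pub/OpenBSD/3.9/

Then during an install I can point the ftp/http source at our own box instead of hoping a public mirror still carries the release.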

I'm glad this was a simple problem to fix, but it would be much easier--and I would rest easier--knowing that we have all our packages on hand.

Wednesday, April 25, 2007

Name That Host

I found that the system I am migrating to as our main file server does not support CIFS well, so I thought I would export via NFS to our existing server and then re-export via CIFS from there. This would add a nice layer of transparency and allow me to reuse a known configuration.

I ran into problems with Samba needing root access to those mounts in order to get into various mode-700 directories. I ended up learning a little more about how Solaris handles the rw= and root= arguments on its shares.
Apparently a network defined by an IP address can be specified as "@xx.xx.xx[.0][/x]", so 192.168.1.0/24 could be "@192.168.1", "@192.168.1.0", or "@192.168.1.0/24". This seems nice and flexible.

Individual hosts cannot be specified by IP address. They must be specified by an LDAP name or a fully-qualified domain name. I tried entering the FQDN of our Samba server but it still wasn't taking. I checked that nsswitch.conf had "files dns" for hosts, but still no luck. Eventually I manually added the host to /etc/hosts and suddenly it worked. You have to be careful with the host definitions of these Solaris-based NFS shares; they can be quite picky about what will work.
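
For reference, the share entry I ended up with looked something like the line below. The path, description, and hostname are placeholders, not our real ones:

# /etc/dfs/dfstab: read/write for the local subnet, root access for the Samba host
# (which is what lets smbd into the mode-700 home directories).
share -F nfs -o rw=@192.168.1,root=samba.example.com -d "home dirs" /export/home

# And the /etc/hosts entry that finally made the root= hostname resolve:
# 192.168.1.10   samba.example.com samba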

See Sun's documentation for full, albeit unhelpful, details.

Tuesday, April 24, 2007

Ubuntu Numbering Scheme

I was talking to my mother about the latest Ubuntu release (yes, we managed to get her to run Linux). I was explaining the concept of LTS versions to her and she mentioned something to the effect that if Ubuntu was already on version 7.04, then it must have been around for a long time. I was pretty sure that Ubuntu hadn't been around for more than a few years, so I decided to look it up. I came across the answer on the Ubuntu FAQ. Scroll down to the bottom and you'll be enlightened.
The version number comes from the year and month of the release--it really is that simple. They've had a total of six releases--Warty Warthog, Hoary Hedgehog, Breezy Badger, Dapper Drake, Edgy Eft, and now Feisty Fawn--but the number of their first release was 4.10. So they've only been around for about two and a half years and get out a release twice a year, once in April and once in October, though Dapper was delayed until June.

Looking at the numbering system I'm struck by its elegance. What better way to number something than by the date it's released? Release dates carry more information than arbitrary version numbers: a version 2.0 of something could be 10 years old, but you know an 01.05 release is only six years old. The only thing it loses is the distinction between major and minor versions, but that might be better addressed by tacking a major version number onto the front as well. Then you could have version 1.01.05, released 05/2001, and bump it to 2.02.01 when you release version 2 in 01/2002. Minor versions are easy to track in between. Best of both worlds? Perhaps just too much trouble.

Friday, April 20, 2007

Subversion on Vista

I've been encouraging our users not to install Vista on any of our systems if they can avoid it. Unfortunately a few copies have snuck in through OEM installations. It can be hard to find systems that still have XP on them, and users are bringing in laptops from home that they bought with Vista pre-installed; now it's too late to "fix" them.

Other than some hardware drivers which won't work on Vista I haven't heard of anyone having much software trouble, but now I've run into my first major problem: Subversion. I cannot seem to find a good SVN client that works under Vista. TortoiseSVN was a staple of our installs with XP, but I can't find an equivalent program for Vista. I'm not even sure TortoiseSVN will ever be ported, because it depends on integration into Explorer.

The only client I've found so far is the Syncro Subversion Client, but at $60 per license I'm inclined to keep searching. It's also Java based, which is probably why it works, but that's not something I want to depend on under Vista either.

Update: Actually TortoiseSVN works on Vista with some known problems. The latest builds look promising, but I haven't tested them.

Thursday, April 19, 2007

What distro should I use on my file server?

A friend of mine just asked me what distribution to use for a file server in his lab at another university. His local tech guy had been pushing for Ubuntu. He knows his way around Linux, but almost entirely as a desktop end-user. Here is what I suggested:

Ubuntu is definitely one of the more solid distributions out there. I'm glad that we're getting our desktops away from older FC distros and onto Ubuntu. One nice thing is that Stanford maintains a local Ubuntu repository, so updates are always fast.

Ubuntu is designed around being a desktop distribution, but they also created a server distribution with a similar philosophy. I'm not sure that it translates all that well. One thing it does do nicely is not try to install a lot by default, but if I were going to build a server I would go with Debian or FreeBSD. I feel like Ubuntu made some compromises in terms of security and structure to make Linux more accessible for desktop users, but on my servers I'm perfectly happy to have the system less accessible as long as I know my way around. However, if you are going to be one of the primary maintainers, Ubuntu might be great for a server. In a lab environment behind firewalls, security isn't as big a deal as it is for public servers.

When setting up a file server, there is also ZFS on the horizon. It has been getting a lot of what I believe is well-deserved hype. It is easy to administer, fast, and stable. However, it was invented by Sun and is under the CDDL license, which is incompatible with the GPL and therefore the Linux kernel. There are currently projects underway to port ZFS to FreeBSD (scheduled for release 7.0) and into the FUSE framework to allow it to work on Linux, but at the moment the only way to get it in a stable distribution is to use Solaris or OpenSolaris. OS X 10.5 will also have ZFS in it! It might be worth looking into a ZFS solution for your large data storage, but if you need the systems now I would recommend Debian or Ubuntu.
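
To give a sense of what "easy to administer" means, here is roughly what building a mirrored pool and sharing a filesystem out of it looks like on Solaris. The pool name and device names are made up for illustration:

# Create a pool of two mirrored pairs, carve out a filesystem, share it over NFS.
# No format, newfs, or vfstab editing required.
zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0
zfs create tank/lab
zfs set sharenfs=on tank/lab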

A note on Ubuntu versions:
There are lots of different flavors of Ubuntu, such as Kubuntu, Xubuntu, and Edubuntu. They all use the same package tree and just have different default desktops and packages: Ubuntu can run KDE, Kubuntu can run Xfce, and so on. Ubuntu is often considered the vanilla flavor, and there really isn't much of a difference. For a server you'll want the Server version, and make sure to grab the LTS release. The Long-Term Support version is guaranteed to have updates available for 3 years on the desktop or 5 years on the server.

Packaging Can Say a Lot

You can tell whether a machine is meant for the enterprise or for SOHO use by the input/output you get out of the box.

Take for example the SunFire X4500 I configured recently. This thing didn't even do VGA output until you logged into the remote management console via serial or TCP/IP and enabled it. It also doesn't have an option for a built-in CD-ROM drive, requiring you to use USB or the web-based KVMS. I went through the whole installation without a mouse and never thought twice about it.

On the other end of the spectrum is the Dell PowerEdge I just started working on. This thing has a VGA port and two USB ports on the front for easy access. Additionally, the configuration assistant CD expects the use of a mouse, as it is actually Linux + Mozilla based.

It makes me think of buying tools at the hardware store. A lot of the good quality tools are fairly unassuming and probably ugly. It's because the engineers were optimizing for performance, not sales, appearance or ease of use. On the other hand, a lot of the lower-quality tools are fairly gaudy with neon colors and fancy foam grips, or they're overly specialized, like a refrigerator-bolt screwdriver. With systems it's the same way. Some systems will try to put everything in front of you and hold your hand, and often in practice they end up being inferior. Other systems give you the tools and let you figure out how to best use them, and these are the systems I prefer. There is a class of both ugly and cheap tools, but please ignore that for the purpose of this discussion; I could have used Windows vs. Linux, but that would have been too easy.

Wednesday, April 18, 2007

VMWare and ZFS not acceptable

While I can get the system to run, I find that the I/O rate with ZFS in VMWare is unacceptable: only about 10 MB/s on a four-disk RAID 10. Even with direct access to the disks, I believe that ZFS's desire to really own the spindles is not very compatible with VMWare's emulation. Xen may be an option, but at this point I'm just going to run with Linux and hardware RAID as the system needs to go into production.
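
For the curious, a crude way to ballpark sequential write speed is something along these lines, with a hypothetical pool mounted at /tank (caching makes this a rough measure at best):

# Write 1 GB of zeros into the pool and note the rate dd reports.
dd if=/dev/zero of=/tank/ddtest bs=1M count=1024
rm /tank/ddtest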

Thursday, April 12, 2007

No hardware support? No problem.

I've been lamenting for a while now that I can't run ZFS under Linux. I know the filesystem has been getting a lot of hype lately, but I believe it is well deserved. After playing with a few of the features I'm mightily impressed. To get a good impression of what it is like, take all the benefits of LVM, add a few new features, then make it easy to use and administer. See this article for more info.
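
A couple of the features that won me over, sketched against a hypothetical pool named tank:

# Snapshots, quotas, and compression are one-liners per filesystem.
zfs snapshot tank/home@before-upgrade
zfs set quota=50G tank/home
zfs set compression=on tank/home
zfs list -t snapshot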

My trouble lately has been that it is only supported under Solaris. There are ports in the works, but nothing is stable yet outside the Solaris kernel. Fortunately for those of us who want to remain open source, there is OpenSolaris. Sadly, it doesn't support the Dell hardware I'm installing on.

Again, to my fortune, VMWare came to the rescue. I was able to get NexentaOS installed in VMWare Server on top of Debian, exporting wonderful ZFS volumes. The major caveat so far is that NexentaOS doesn't support CIFS as far as I can tell, so I'm exporting the volumes to the Debian system through NFS and then re-exporting them with Samba from there. Since the majority of my users are Windows users, that means I'll actually do most of my user administration in the Debian installation. So far it looks pretty good. We'll see how it works in practice.
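
The plumbing between the two systems is simple enough. Roughly, with hostnames, paths, and the share name standing in as placeholders:

# On the NexentaOS guest: share the ZFS filesystem over NFS.
zfs set sharenfs=rw=@192.168.1 tank/data

# On the Debian host: mount it (or add an equivalent /etc/fstab entry).
mount -t nfs nexenta.example.com:/tank/data /srv/data

# Then an smb.conf stanza on Debian re-exports it over CIFS:
# [data]
#    path = /srv/data
#    read only = no
#    valid users = @labusers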

Wednesday, April 11, 2007

Creating and Using a Serial Console

Sometimes we end up with a lot of headless computers, less due to a lack of monitors--I'm thinking of that pile of CRTs in the closet--and more due to a lack of keyboards. In either case, I have multiple machines in racks that don't have monitors and keyboards attached to them all the time. I usually just use SSH to interact with these systems, but sometimes I want a little more, such as when the network is down.

This is where serial consoles come in. When Linux starts it spawns ttys, basically terminals, that you can access to do text-mode logins. It usually spawns 6 of them, but it varies from distribution to distribution. Most people are familiar with these terminals but many don't know that you can spawn them on all sorts of devices as well. All it takes is a little modification of your /etc/inittab file.

You may want to read your man pages a little to get a feel for what getty is doing, but the gist of it is that you add a line like the following to your inittab:

# Serial terminals.
s0:2345:respawn:/sbin/agetty -L -f /etc/issue 9600 ttyS0 vt100

This tells the system that when it goes into runlevel 2, 3, 4, or 5 it should launch agetty and relaunch it whenever it terminates. The arguments afterwards tell it to bind to the device ttyS0 at 9600 baud and output for a vt100 emulator. When a user connects they will see the contents of /etc/issue, and the -L flag says this is a hard-wired line, so there is no need to do modem initialization.
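
Once the line is in, you can have init pick it up without a reboot and then attach from another machine over a null-modem cable. The client-side device name will vary, so treat this as a sketch:

# On the headless box: re-read /etc/inittab and spawn the new getty.
telinit q

# On the machine you're connecting from, at the same speed:
screen /dev/ttyS0 9600
# or: minicom -D /dev/ttyS0 -b 9600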

You can read a more detailed explanation at http://www.vanemery.com/Linux/Serial/serial-console.html

Debian Upgrade Woes

After the recent Debian upgrades I had a few things break on some of my servers.

1. Apparently snmpd only wants to bind to localhost now. I need it to do some monitoring with Nagios from another server. The fix was to revert to the old options in /etc/default/snmpd. This meant removing the user and SMUX options and removing 127.0.0.1 from the end of SNMPDOPTS.
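
For the record, the stock line looked roughly like the commented-out version below, and the second line is what I reverted to. Your paths and logging options may differ, so treat this as a sketch:

# /etc/default/snmpd -- new Debian default, bound to localhost only:
# SNMPDOPTS='-Lsd -Lf /dev/null -u snmp -I -smux -p /var/run/snmpd.pid 127.0.0.1'

# Reverted: drop the user and SMUX options and the trailing 127.0.0.1
# so snmpd listens on all interfaces again.
SNMPDOPTS='-Lsd -Lf /dev/null -p /var/run/snmpd.pid'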

2. Nagios wouldn't restart because of a duplicate definition of check-fast-alive. Apparently it gets defined in both fping.cfg and ping.cfg. I commented out the fping definition.
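
Concretely that just meant commenting out the duplicate block in fping.cfg and leaving the copy in ping.cfg as the live one. The command_line shown here is illustrative, not the exact stock definition:

# /etc/nagios-plugins/config/fping.cfg
# define command{
#         command_name    check-fast-alive
#         command_line    /usr/lib/nagios/plugins/check_fping -H '$HOSTADDRESS$'
#         }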

I'm sure there are more troubles to come.

Tuesday, April 10, 2007

NexentaOS Install CD Has Tetris

Now here's a feature that I think all distributions could learn from. The NexentaOS install CD (alpha6 as of posting) has virtual terminals, one of which is a shell and another of which is a console-based Tetris. Who wouldn't want something fun to do during those long installs? It almost makes me want to install the OS on various systems just so I can try to beat my high scores. It really makes install time fly by, even though it probably lengthens install time considerably by distracting me from important dialogs.

They're All Clones

One thing that frustrates me most about purchasing servers is that all the major vendors are basically the same. Let's look at three of the largest: HP, Dell, and IBM. They all basically market the same systems, but with different names. You can configure identical systems on each vendor's site and find Dell the cheapest, followed by IBM, followed by HP, but when you talk to the sales reps they all seem to come out about the same. It's become a game of which sales rep I can convince to give me the best deal.

The hardware and software are all the same as well, just named differently. Both HP and Dell have configuration assistant CDs that boot into Linux and use a full-screen Mozilla variant, and their management software and Lights-Out Managers are basically the same too.

The vendors all claim that the price differentials you see are due to optimizations such as custom hard-drive firmware. How that justifies a doubling in price over generic hardware isn't clear to me. Shouldn't we all have learned from Google that off-the-shelf hardware is a cost-effective performance solution if you have the expertise to make it work?

At least Sun differentiates itself by reducing interoperability and using more proprietary components, though functionally I haven't seen much of a difference.

All of this doesn't matter that much to me, as I shop mostly for hardware. Home-grown Linux installs aren't too picky about service contracts. At least I get to play the salesmen off each other to get cut-rate prices. When vendors compete to sell the same products to a fixed pool of customers, the customer always wins.

Thursday, April 5, 2007

A new blog

I am a data dad. My job is to take care of the systems and make sure they keep running properly. This means I have to be reactive and proactive. Ultimately I just want to make sure that all my children (systems) are living up to their potential. I want them to be the best that they can be. When my systems work well, I'm proud. They really do all the work, not me. I just help them do it.

This here is a catalog of interesting things that I discover along the way.