PGTS Pty. Ltd.

Setting up a LAN and Web Host with Mandrake

By Gerry Patterson

Setting up a Mandrake web host was relatively simple and often just a matter of following the menus. I wanted to set up a web host, a LAN and a Samba server. The end result was a powerful and versatile combined Web/LAN server that would be quite suitable for a small business. There were some problems, mostly hardware related. Among them were a problem with a mouse and a problem with ethernet that produced the following messages:

    SIOCADDRT: No such device
    task `ip' exit_signal 17 in reparent_to_init

This document is a brief log of what I did and how I did it.

Why Mandrake?

Initially I had intended to set up my network, web host, DNS and mail host using BSD. This is because I am familiar with BSD. It uses "standard" Unix conventions, and it has a well deserved reputation for being robust. Believe it or not, however, I was not able to obtain the latest copy of BSD in Melbourne. Perhaps I was looking in the wrong places? I could have downloaded the latest release, but at the price my ISP was charging it would have been ridiculously expensive (and time consuming). I decided the best option was to have a copy shipped from the USA, which was going to take three weeks (I opted for standard delivery rather than express). In the meantime I already had my new server and naturally I was chomping at the bit. The supplier from whom I purchased the PC had shipped the Mandrake 8.1 distribution with the unit. I had heard some good reports about the Mandrake workstation, so it seemed an obvious candidate for a trial.

The front-end installation GUI looks good, although I remain sceptical about GUI interfaces. I would much prefer a well-documented, reliable command line interface to a poorly documented, flaky GUI interface. Overall the install front-end behaved itself. There were a few odd bugs which were easily remedied. I did not document them all as they occurred, so this list is incomplete:
  1. There is an option to upgrade an existing installation. If you run this after you have done an initial installation, it clobbers parts of the original setup. Actually this could be called an undocumented feature.
  2. There is a built-in firewall that can be launched from netconf. It uses a daemon that checks for new services, together with ipchains. Unfortunately ipchains has been superseded by iptables in the Linux 2.4 kernels.
  3. I could not find any dictionaries for ispell. No problem; I wanted the version that had the HTML option anyway. So I downloaded a more recent version of ispell and tried to build it. The build failed. The error message mentioned a missing ltermcap (the termcap library) whilst trying to compile term.c. In order to get it to work I had to create a logical link as follows:
    	ln -s
    This wasn't serious. Missing or incorrectly named libraries are a common problem for Linux distributions, because of the rapid change in the source code. There were a couple of other examples of these. Unfortunately I did not note them down at the time.
  4. The machine would intermittently crash while running KDE. I did not intend to use KDE, since the machine would be functioning purely as a server. However, this was a genuine bug for this system configuration. Fortunately it was easily resolved by rebuilding the kernel. However, if you are new to Linux, I would not recommend rebuilding the kernel as your first Linux project. Rebuilding the kernel in Linux is not the relatively simple process that it is in BSD. And it is fraught with many pitfalls that can leave an unwary user with a system that will not boot. If you are a new user and you are determined to go ahead with a kernel rebuild in Linux I recommend that you RTFM very carefully.

I did have an initial problem setting up RAID. The server I have opted for has two IDE buses, which meant that it was possible to set up a software RAID. I had opted for RAID-1. The Linux RAID software seemed easy enough to comprehend, and there was a binary distribution on the CD. The RAID-HOWTO was also very helpful. However the Mandrake distribution CD has a GUI front-end, and this did not seem to do manual disk partitioning. Eventually I realized that if I wanted to install the RAID software and partition the disks manually, I would have to select expert mode in the first startup screen. The boot partition cannot be RAID, of course (otherwise the system would not start). So I configured both disks with a small boot partition of identical size. I will have to make sure that a copy of the boot partition is kept on both disks (in case I lose one of them). This will mean that changes to the kernel will have to be made on both partitions. I will write a script to do this. I disabled one of the disks and was pleased to see the system boot with a broken mirror. Even better was the message in /proc/mdstat about the mirror being rebuilt when I brought the disk back. I just hope things go this smoothly if I have a genuine crash!
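A sync script along these lines would do the job. This is only a sketch; the mount point /mnt/boot2 for the second disk's boot partition is an assumption, so adjust it for your own layout:

```shell
#!/bin/sh
# Sketch: keep the second disk's boot partition in step with /boot.
# /mnt/boot2 is an assumed mount point for the second boot partition.

sync_boot() {
    src=$1
    dst=$2
    # copy kernels, initrd images and boot loader files across,
    # preserving permissions and timestamps
    cp -a "$src/." "$dst/" || return 1
    # make sure the copy hits the disk before we call it done
    sync
}

# usage (commented out so the file can be sourced safely):
# sync_boot /boot /mnt/boot2
```

Run it after every kernel rebuild, and both disks stay bootable.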

Mouse Config

For some reason I could not get my mouse to work in console mode. It seemed to work ok in X-windows, but something weird happened during the Mandrake startup. I don't know if this is a hardware problem with my system or a general problem with Shuttle motherboards. I have a Shuttle AV41 Pentium 4 motherboard and I am using the PS/2 mouse port. This is the sort of trivial problem that I normally would not bother looking at. Since I was not intending to use the main server as a workstation, I would only be using the system console. However it is handy to have a mouse for cut and paste when working at the system console.

Also, this new Linux distribution was set up with a nice 90x43 font. This looked very good with the out-of-the-box colour support. I especially liked the language-specific smart colour support for vim. As an aside, I must add that I have become very used to vim. Unfortunately rebuilding the editor is usually one of my last priorities, so I never get round to replacing vi with vim on other systems. It is good to get on a Linux system and have it working. Amazingly, I do have the enhanced version of vim on my Microsoft laptop (running cygwin of course).

In any case, even if I didn't run X, the console on this new machine would be quite useful in character mode. The browser I most often use is a character-based browser called lynx. If you are a BSD user, you will probably know about it already. Otherwise, if you are a GUI-sceptic like me and you haven't already heard of it, you should check it out. (I could also ask: where have you been for the last few years?) When the current Mandrake distribution is on a Pentium 4 with 1.5 GB of RAM, the response is not too shabby. I could see myself doing a lot of work from this particular system console. So I persisted until I solved my little mouse problem. Of course this was a software solution, achieved with the following scripts:

The mouse definition that gets copied into /etc/sysconfig/mouse lives in /usr/local/sbin/ps2_wheel:

		FULLNAME="PS/2|Generic PS2 Wheel Mouse"

This script switches the console mouse to the wheel definition:

		/etc/rc.d/init.d/gpm stop
		cat /usr/local/sbin/ps2_wheel > /etc/sysconfig/mouse
		/etc/rc.d/init.d/gpm start

And this script switches back to the standard definition in /usr/local/sbin/ps2_standard:

		/etc/rc.d/init.d/gpm stop
		cat /usr/local/sbin/ps2_standard > /etc/sysconfig/mouse
		/etc/rc.d/init.d/gpm start

This last script is now called from /etc/rc.d/rc.local. Of course I won't be booting the machine very often, so I probably didn't have to worry about making it a startup script. Don't ask me why this works! If anyone knows about a problem with Linux, Shuttle motherboards and mice, you can email me at the feedback address.

Starting NFS

This was easy. I just used the linuxconf program, then added an entry to /etc/exports of the form:

	/exported/directory   hostname(rw)
The other operating systems which I used to test NFS access were Red Hat Linux and my old BSD workstation. I placed an entry in /etc/fstab on the Red Hat machine, and I used /stand/sysinstall on the BSD machine to configure it as an NFS client. I am not sure how to make BSD auto-mount NFS volumes, so I used a startup script in rc.d to mount the remote drive at startup. There is a security concern with NFS, but a firewall should fix that (more about firewalls later).
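As a sketch, the server-side export and the matching client fstab entry pair up like this (the host names and paths here are assumptions, chosen only for illustration):

```
# on the Mandrake server, in /etc/exports
# ("redhat-box" is a hypothetical client hostname):
/home/shared    redhat-box(rw)

# on the Red Hat client, in /etc/fstab
# ("mandrake-box" is the hypothetical server hostname):
mandrake-box:/home/shared   /mnt/shared   nfs   rw   0 0
```

After editing /etc/exports, running exportfs -a (or restarting the NFS daemons) should make the new export visible to clients.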

Zip drive ppa on the new system

Getting the zip drive working was a matter of adding a mount point and fstab entry for the zip drive, and adding 'ppa' to /etc/modules. This was so simple I wasn't even going to mention it. However, I was never able to get the zip drive working under Linux on my old workstation, which is a rather venerable Pentium. I might have suspected a hardware fault had it not been for the fact that it worked perfectly under DR-DOS, PC DOS 6.2 and Windows 3.1 (ok, I did say it was rather venerable). It is a credit to the versatility of open-source software that this old beast was working so well with four operating systems (DR-DOS, PC-DOS, Linux and BSD). Certainly there aren't any commercially available operating systems that I could have installed and still obtained reasonable performance from. Having said all that, the old workstation must have had some gremlins in the parallel port that made it incompatible with the ppa driver.

The new server didn't miss a beat, however. The installation was as easy as the documentation for the ppa module claimed it would be. The only thing I noticed which was unusual is that the /dev directory has been set up to be non-writeable, and the MAKEDEV script is not in /dev but in /sbin. You still use it in a similar manner to other BSD-like systems, but the node is created with "-" on the end. For example, to create sda4 (for the zip drive) you enter:

	cd /dev
	/sbin/MAKEDEV sda4
	mv sda4- sda4
	chmod pmask sda4
where pmask is the permission mask that you wish to use for your site. This is not really a bug, just an undocumented feature (and it probably is documented somewhere, I just didn't bother trying to find it). I assumed that it was some security feature. No doubt any Linux experts knew about it already. As I said, installing the zip drive was trivial. The only reason I mention it is that I had previously spent several very late nights sweating over the ppa module on my old machine. So this paragraph probably should have the heading of "Problems with Ancient Computer Equipment" (no doubt some would consider a 100MB zip drive in that category).

Although I had intended to devote this machine exclusively to server tasks, the more I used it, the more my enthusiasm waned for my old workstation, which is eight years old and has only 80MB of RAM. Also, my intermittent problem with X had disappeared since I rebuilt the kernel (more about that later). The KDE interface was very slick, as was the browser (konqueror). I normally don't like GUI browsers because of the response time; nothing like a huge pool of RAM and a high speed processor to solve that problem.

Still, I was not sure about starting and stopping X while the server was running. I was not even sure how to stop the X server in Linux, and I could find no mention of it in the documentation. However, I found that by first making sure that all applications are closed, then opening a shell window and assuming super-user, I can terminate xdm without any unpleasant side-effects with the following:
	kill -TERM `cat /var/run/`
After xdm disappears, just type "rm /var/run/" to clean up. Perhaps this is the way that the X server is supposed to be stopped?
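The kill-and-clean-up dance can be wrapped in a small function. This is a sketch only; since I am not certain where xdm keeps its pid file on every system, the caller supplies the path:

```shell
#!/bin/sh
# Sketch: stop xdm via its pid file, then remove the stale pid file.
# The pid file path is an argument (typically somewhere under /var/run).

stop_xdm() {
    pidfile=$1
    [ -f "$pidfile" ] || return 1          # nothing to do without a pid file
    kill -TERM "$(cat "$pidfile")" || return 1
    rm -f "$pidfile"                       # clean up so the next start is tidy
}
```

Called with the xdm pid file as its argument, it replaces both the kill and the rm steps above.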

Setting Up Inetd

Since the Mandrake system will be the web server, I initially intended to use anonymous FTP to allow users to download files from the site. Starting inetd was easy enough. Initially I used a menu config system to get it going; this menu program added the necessary start script to the init scripts. A closer reading of the man pages reveals that the startup scripts can be managed with a utility called chkconfig. After using it a couple of times I decided that chkconfig was more dependable and configurable than the menu driven scripts, and certainly easier than creating the scripts manually. Actually, Linux distributions have largely done away with inetd. You can still get it to run, but most distributions now use xinetd, which has enhanced handling of individual services, especially with regard to security. I also found that I needed to install ftpd from the distribution CD. It looks as though most Linux distributions use an enhanced FTP daemon called proftpd, which seems to be more suited to anonymous FTP. As with most of the installations from the CD, I used the rpm command, which has a rather obtuse command-line, but over the years I have gotten used to it. For example, to install proftpd I would use:

	rpm -iv  /mnt/cdrom/Mandrake/RPMS/proftpd-1.2.2-2mdk.i586.rpm
After all this I decided not to use anonymous FTP. It seemed to me that it was just another potential security hole. So I set up a download directory that could be accessed by HTTP. Initially I had thought that this would be a lot of work, since each additional file would have to be manually added to the HTML list of the download directories. However, some testing with the Apache Extranet server soon confirmed that support for this was built into the distribution.
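The built-in support is Apache's mod_autoindex, which generates the directory listing automatically. A sketch of the relevant httpd.conf fragment follows; the alias and the path are assumptions, chosen only for illustration:

```
# Hypothetical download area served with automatic directory listings.
Alias /downloads/ "/var/www/downloads/"
<Directory "/var/www/downloads">
    Options +Indexes
    IndexOptions FancyIndexing
</Directory>
```

With this in place, dropping a file into the directory is all it takes; Apache renders the index page on the fly.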

Samba and DHCP

I definitely wanted to test samba, since I believe that this will be the most likely choice for customers. I found many documents on the web that strongly advised me to download the latest stable release, as there had been a number of bug fixes, so I downloaded the latest samba gzipped tar and built it. If you do the same on a standard Mandrake installation with samba already installed, be aware that you should manually remove the old smbd and nmbd binaries as well as the other old binaries in the samba suite. I just mv'ed them to smbd.old and nmbd.old (etc.) and created logical links to the binaries in /usr/local/samba/bin (the default location for the build). I struggled with the configuration of samba for a while. The smb.conf file is quite complex. Fortunately there are default smb.conf files in the samba distribution. I found one which was appropriate for my system, copied it to /usr/local/samba/lib, and modified it to get it working with my Windows 2000 laptop. Samba is an excellent product and infinitely cheaper than Windows 2000 Server.

In order to get samba working, however, I needed a DHCP server. Windows works best with DHCP, although it is possible to use static addresses. The instructions in the samba distribution urged me to download the latest stable release of DHCP. This was relatively easy to get working. However I did need to add a line to /etc/dhcpd.conf:

	ddns-update-style ad-hoc
I also had to touch /var/state/dhcp/dhcpd.leases; obviously the dhcp daemon doesn't just create it on the fly. Later I discovered that the ddns-update-style line I inserted in /etc/dhcpd.conf generated DNS update requests to my primary DNS, which had not been built at this stage but would be a BSD system, configured to deny DNS updates (for security). Later, when I had DNS working, the DHCP server caused lots of annoying log messages about someone trying to update my DNS. If the ddns-update-style is set to "none", this does not happen. However this means that Windows DHCP clients on the LAN do not get an entry in DNS, and there are some programs which rely on this to validate clients. Since I will only have one Windows 2000 workstation, I am not fussed. However, if I am configuring for a customer, it may be better to tailor DNS to only allow specific hosts (such as the DHCP server) to update it. This may have implications for an IMAP server. Also, when I installed the DHCP server, the default lease time was set to 10 minutes and the max lease time was two hours. These values make sense for a medium size site with 40 or so desktop clients and 20 or so laptops being plugged in and out of the LAN at various times. However, for a home LAN the values can be increased (otherwise your log files just get cluttered with DHCP lease expirations and acknowledgements).
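For a home LAN, the relevant dhcpd.conf settings might look like the sketch below. The subnet, address range and lease values are assumptions, chosen only to illustrate longer leases and the quieter "none" update style:

```
# Sketch of /etc/dhcpd.conf for a small home LAN (addresses are assumptions).
ddns-update-style none;        # stop the DNS-update log noise
default-lease-time 86400;      # one day, instead of ten minutes
max-lease-time 604800;         # one week, instead of two hours

subnet netmask {
    range;
    option routers;
}
```

With day-long leases, a handful of home machines renew rarely enough that the logs stay readable.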

Apache and Postgres

My web host would be Apache, of course. There really aren't any alternatives worth considering. I wanted to trial an SQL-enabled database engine with the Apache server. I have been meaning to evaluate Postgres for many years, so this was an ideal opportunity to have a close look at this well known GPL RDBMS. I had previously trialed MySQL, and I must say that I was underwhelmed by the product. I have spent nine years working with Oracle and I have worked as an Oracle DBA. In previous incarnations I have worked with IBM mainframe products like DB2. So my expectations for an RDBMS are high. MySQL struck me as a toy RDBMS. It lacked basic features like rollback, and was incapable of running complex SQL (like joins on sub-selects, etc.). I am really only interested in a real RDBMS, and from the reports I have read so far postgres seemed as if it would be a good choice.

On the advice of some documents I found on the web, I downloaded the latest stable copy of postgres from the Postgres home page. After unpacking the tar ball I decided that the procedural language support looked interesting. I was especially interested in the plperl support. I did have a little trouble building it in, however. The README had contained a warning that I might have to rebuild perl (in order to create a shared perl library). The perl rebuild failed. I did not take a note of the error message, because I found a copy of the perl shared library in the Mandrake installation, and I was able to copy it to /usr/local/pgsql/lib. I was disappointed to discover that plperl was not able to access the system RDBMS functions. Also, it was not possible to pass arrays as arguments, although this second restriction may have more to do with the procedural language interface than with the plperl implementation. I had intended to write a function that mimics Oracle's decode function in perl, as my first test of the language. However, without the ability to pass arrays this would be difficult if not impossible. My understanding of the present state of postgres and plperl is that these issues will be addressed soon. I also created the plpgsql procedural language. This was quite a powerful and versatile procedural SQL implementation. There was one "feature" that I found a little annoying: because the functions are defined to the RDBMS as quoted strings, escaped and quoted strings within them have to be double-escaped and/or double-quoted. This can get really ugly if they already are double-escaped or double-quoted, or if you are generating SQL statements dynamically.
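To illustrate the quoting annoyance, here is a hypothetical plpgsql function (the name and body are mine, for illustration only). Because the function body is itself passed to CREATE FUNCTION as a single-quoted string, every literal quote inside it has to be doubled:

```
-- hypothetical function: the body is a quoted string, so the
-- string literal 'hello, ' inside it needs doubled quotes
CREATE FUNCTION greet(text) RETURNS text AS '
    BEGIN
        RETURN ''hello, '' || $1;
    END;
' LANGUAGE 'plpgsql';
```

A body that itself builds quoted SQL dynamically needs yet another level of doubling, which is where it gets really ugly.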

However postgres, in my opinion, is a professional RDBMS which has all the features that you would expect from a serious commercial product. Ok, if you really want industrial strength, Oracle might still be the product of choice. Oracle is more tunable and has a proven record for being robust and efficient. And although DB2 is probably more tunable than Oracle, DB2 is complex and expensive. Postgres would have to be the choice on budget alone. But even if it weren't the cheapest, postgres has features that would make it a good choice for integration with Apache. These include the ability to produce HTML output from queries, and the ability to produce output as raw records separated by a configurable record separator, with each field separated by a configurable field separator. The possibilities for CGI scripts are obvious. The tests I ran indicate that postgres is a very solid product that is modular and powerful. It offers most of the features of ISO SQL and has implemented a few of the Oracle features. Combined with perl and Apache on a Unix platform, it would make an extremely powerful web database engine.
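As a sketch of those CGI possibilities: psql's -H flag emits query results as an HTML table, while -A with -F gives raw rows with a chosen field separator. The database name and queries below are assumptions, for illustration only:

```shell
#!/bin/sh
# Sketch of CGI-style wrappers around psql's output modes.
# "shopdb" and the queries are hypothetical; -H, -A, -t and -F are
# real psql options (HTML mode, unaligned, tuples only, field separator).

cgi_report() {
    db=$1
    query=$2
    echo "Content-type: text/html"
    echo ""
    psql -H -c "$query" "$db"
}

raw_rows() {
    db=$1
    query=$2
    # pipe-separated records, one per line, no headers or footers
    psql -A -t -F '|' -c "$query" "$db"
}
```

Dropped into Apache's cgi-bin, cgi_report serves a query straight to a browser, and raw_rows gives a format that other scripts can parse trivially.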

Dynalink Router

The router that was supplied by my ISP was a Dynalink RTA020. The technical sheet published by Dynalink claimed that it was a firewall, router and a modem capable of NAT and DHCP: a rather impressive list of claims for such a tiny package, and one that, as it turned out, it failed to live up to. The first thing I noticed was that it allowed telnet access, requiring only a password, with no login id. Well, I thought, this is one of the first things I will change. Since it was supposed to be a firewall, I thought I would put in a rule to drop telnet packets. There did not seem to be a way to do this. The rules consisted of nothing more than source and destination addresses and netmasks, a type, a port and an action. However, it was mandatory to specify a source and a destination on every rule (!?!!?).

The documentation that came with the modem was woeful. There was a short paragraph on creating ip filters. It described how to drop packets for a specific port from a known source address inside the private network destined for a known address outside. This poorly written document seemed to miss the point about firewalls entirely. There was no documentation for NAT. I tried the Dynalink website and rang Dynalink support. I was referred to some even less substantial documentation, which was supposed to describe how to open pinholes to an SMTP server inside. Basically the document said I should login to the router with http and click on a button, then go to my (Microsoft) SMTP server and click on another button. There were a few scrappy paragraphs of text and lots of pretty pictures showing screen snapshots. And what does one do if one does not have a Microsoft SMTP server? This really has to be seen to be believed, so I will quote directly from the document:

For applications not listed here, read the instructions/manual of the application to find out which TCP/UDP ports must be opened. Often such instructions can be found from a section on how to configure application behind a firewall or under similar headings.

I already know which ports I want open and which I want closed. What I would like to know is how to open and close them. Oh dear me! It would be funny if it wasn't true! The instructions on how to set up an SMTP server amount to "just click on some buttons and trust Microsoft; for other systems, read the instruction manual". But there is no instruction manual. The other documentation on the Dynalink website was of an equally poor standard.

I did a quick look around the web for other Dynalink modems. The units I found were set up with the default password, and most worrying for these systems is the fact that I was able to crack them. If you are one of these users you should not be sitting fat and happy. You should revoke telnet access now. And be aware that altering the httpd access does not alter the telnet access. The next web denizen who cracks into your router might not be one of the guys wearing a white hat.

I was now concerned about the security of my own system. I had revoked access on the router but it was still letting everything through to the public network. As a firewall it would make a good tea-strainer. There is an instruction set on the Dynalink router, but no documentation! One of the PDF files on the Dynalink website suggested that I use the help command on the router. I had already done this. The help command (where implemented) issues one line per command, and no examples. Not really all that helpful, and very tedious. The documentation for the Dynalink RTA020 is sub-standard and amateurish, and I would not recommend the product to anyone. And I would further urge my ISP to find another product to ship.

The Dynalink modem also uses SNMP, which in light of the recent security concerns about SNMP could be another security hole. As I was focusing on security, I checked the system logs. There had already been several attempts at telnetting into the web host, and one of them had tried sshd. I looked up a couple of these addresses using reverse lookups. It was no-one that I knew, and they had no business trying to telnet into the machine. Perhaps they had mistyped an IP address, or maybe they were just rattling the doors and windows to see if they were locked. I thought of inserting another ethernet card into the web host, thus making it a router. This would enable me to build a real firewall. This is not an ideal situation; I prefer that the firewall be a separate unit, and diskless. In any case this was not to be. After I put the new card in the machine, a couple of disturbing error messages appeared:

	task `ip' exit_signal 17 in reparent_to_init
	SIOCADDRT: No such device

The first one appeared when I used linuxconf to configure the new interface, and the second appeared when I tried to route the interfaces. I searched the web to see if other Linux users had encountered similar problems. There were many, and the fact that they mostly seemed to be using RealTek 8139 chip sets was ominous. On hearing about my woes, one of my colleagues commented drily, "Oh yes, I never buy cheap ethernet cards, it just isn't worth the money you think you might save". I wish I'd spoken to him before I went out and bought the cheapest cards I could find. I tried rebuilding the kernel with another driver but that did no good. I took the card out of the Linux machine and put it in the BSD machine, and it worked as expected! So it was beginning to look like a Linux problem ... Before purchasing the RealTeks I had searched the web for drivers, and I had felt a twinge of uncertainty about the fact that there were two Linux drivers for these devices. The Mandrake distribution contained a driver with the ad hoc name of 8139too (as a replacement for 8139). It seems that my doubts about the driver had been well-founded. I started to consider my options:

  1. Build my own diskless firewall/router with BSD, using etherboot or picobsd.
  2. Tighten up on security by using iptables on the Linux machine and restricting services on the BSD nameserver.
  3. Rebuild the LAN/websites on separate machines using BSD.

Option one is the preferred option, and it will be the first project on my list. Option three might also be a good one to check in the future. However, I will settle for the second option as an interim measure. It is not satisfactory, because the two subnets are sharing the one interface. This makes it a soft route, which could be susceptible to attack. A hard route is safer, and when combined in the same diskless unit with a BSD firewall, practically unbreakable. There is no such thing as a fully secure system on the internet, but compared to most Microsoft systems, these two boxes were fairly well fortified. With so many easier targets, I can only hope that the legendary evil cracker will go after them, rather than my site.

The one good thing that came out of this was the disappearance of the intermittent KDE bug. Rebuilding the kernel had cleaned the cobwebs out. There were a number of options I used when I rebuilt it. I selected the processor type that exactly matched my hardware, I changed the memory settings to suit my hardware, and I removed some network drivers that I knew I would never use. Any of these could have fixed the problem. So far the machine has not missed a beat while running X.
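Going back to the second option for a moment: the interim iptables lockdown can be captured in a rules file for iptables-restore. This is a sketch only; the LAN address range and the open ports are assumptions that would need tailoring to the actual site:

```
# Sketch of a rules file for iptables-restore (addresses are assumptions).
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
# loopback traffic is always fine
-A INPUT -i lo -j ACCEPT
# trust the private LAN (assumed range)
-A INPUT -s -j ACCEPT
# replies to connections the box itself opened
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# the web server stays reachable from outside
-A INPUT -p tcp --dport 80 -j ACCEPT
COMMIT
```

The default-deny INPUT policy means the telnet and sshd door-rattlers simply get dropped, without needing a rule for every service I want closed.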


Conclusion

Having tested the Mandrake installation, I decided to keep it as a server. It seems easy to configure and secure. The Mandrake/KDE combination is also very impressive. The desktop compares favourably with the Microsoft product. It may even be feasible to set up an entire office with the server and desktop software being installed from the one set of disks. Based on what I have seen so far, I would like to trial the StarOffice 6.0 suite.