From Richard Storey on 20 May 1998
To a "newbie" on the edge of installing Linux some of what I read leaves me
concerned that some form of minor shakeout is building up in the Linux
versions arena. It has me confused about which direction to turn because
I'm not really interested in installing a lot of stuff, configuring it and
then finding that 6 mos. later I am out on a limb due to some standards
shift.
I can understand your concern. This is a problem that IT managers face all the time when selecting hardware and software for their companies. It affects the home user too, but no one gets fired over those little disasters (well, it might cause the occasional divorce, but...).
That's why the rule in MIS/IT used to be "no one ever got fired for buying IBM" (and why we see such "devotion" to Microsoft's products today).
However, I can lay your fears to rest on a couple of grounds. This is not "the market" --- it is the free software world. (In this particular case I'm referring to GNU and Linux free software and not merely to a broader world of "open software").
What is this issue about regarding the GNU gcc libraries and some versions shifting to a new standard? I've seen bits of info on it and somewhat understood what it meant, but I'm not a programmer, so I don't get the big picture here.
The debate about glibc2 (Linux libc 6) and libc5 is mostly of concern to programmers. Even as a sysadmin I'm fairly oblivious to it. It's really a bear for package and distribution maintainers (which is probably where the messages you're thinking about are coming from).
There is probably quite a bit of traffic about the pros and cons of each. I won't get into that, mostly because I'm simply not technically qualified to do so. (I'm not a programmer either).
The high-altitude overview is that glibc and libc5 can co-exist, roughly to the same degree that libc5 co-exists with libc4 and a.out co-exists with ELF. Nobody is being left "high and dry." In this respect it is completely different from the shift from DOS and Windows 3.x to Windows '95, and/or from either of those to NT. It's also a far cry from the shameful way that Microsoft and IBM have treated their OS/2 adopters.
Zooming in a little bit I can say that the next major release of most Linux distributions will be glibc based. Most will probably ship with optional libc5 libraries for those who want or need to run programs that are linked against them.
glibc is the reference implementation of the 86Open standard. This should be supported by almost all x86 Unix vendors within the next couple of years. (Hopefully most of us will have moved to PPC, Alpha, Merced [though its release schedule has been stretched], or whatever by then --- but I'm the one with the 10-year-old server that handles all the mail into and out of my domain --- so don't bet on it.)
The hope is that we'll finally have true binary compatibility across the PC Unix flavors. SCO and Sun have traditionally bollixed this up in the interest of their market rivalry --- but the increasing acceptance of Linux and other GNU software makes supporting this standard their only reasonable option. Neither of them can force the market to adopt their own standards (iBCS and the x86 ABI), and the consumer shrink-wrap software market is rapidly shifting to Linux.
It should also be much easier for Linux to keep pace with the rest of GNU development as we adopt glibc. There should be less duplicated effort in porting new library features from glibc/gcc to Linux than there was under all of the previous Linux libc's.
Right now we are in a transition between them, just as we were a couple of years ago when we shifted from a.out to ELF. 'a.out' and ELF are "linking formats" --- different ways of representing the same machine language instructions. They require different loading methods by the kernel in order to execute properly. It is possible (trivial, in fact) to support a.out and ELF on the same system concurrently. In fact I can compile a.out support as a "loadable module" and configure my system to automatically and transparently load that --- which saves a bit of kernel memory when I'm not running any older apps --- but allows me to do so without any concern.
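For instance, on a 2.0-era kernel with a.out support built as a module, it can be loaded by hand, something like this (binfmt_aout is the usual module name, but check it against your own kernel build):

    # Load a.out binary support into the running kernel by hand:
    insmod binfmt_aout

With kerneld running, the module can instead be pulled in on demand the first time an a.out binary is executed, and unloaded again after it's been idle for a while.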
Although shared libraries are completely different from (and independent of) executable formats, the similarity is that we (as users and admins) can mostly just let the programmers and the distribution and package maintainers take care of all that.
Let me try and give some background on this:
Most programs under Linux (and most other modern forms of Unix) are "dynamically linked" against "shared libraries." Windows "DLLs" (dynamically linked libraries) are an example of Microsoft's attempt to implement similar features for their OS. (I believe that Unix shared libraries pre-date OS/2 and MS Windows by a considerable margin.)
Almost all dynamically linked programs are linked against some version of "libc" (which provides the functions that you get when you use a #include <stdio.h> directive in a C program). Obviously your distribution includes the libc that most of its programs are linked against.
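You can see what a given binary is linked against with the 'ldd' command (the path and version number below are just an illustration --- yours will differ):

    $ ldd /bin/ls
            libc.so.5 => /lib/libc.so.5.4.33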
It can also include other versions of the shared libraries. A binary specifies the major version and the minimum minor version of the library that it requires. So a program linked against libc5 might specify that it needs libc5 or libc5.4.
If you only have libc5.3 and the program requires libc5.4 you'll get an error message. If you have libc5.3 and libc5.4 then the program should run fine. Any program that only requires libc5 (which fails to specify the minor version) will get the last version in the libc5.x series (assuming your ldconfig is correct).
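The "last version" lookup works through symlinks that 'ldconfig' maintains, roughly like this (the minor version shown is only an example, and the owner, size, and date columns are elided):

    $ ls -l /lib/libc.so.5*
    lrwxrwxrwx ... /lib/libc.so.5 -> libc.so.5.4.33
    -rwxr-xr-x ... /lib/libc.so.5.4.33

Run 'ldconfig' after adding or removing libraries so its cache and these links stay current.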
This is not to say that the system is perfect. Occasionally you'll find a program like Netscape Navigator or StarOffice that specifies a less specific library than it should (or sometimes it might just have the wrong version specified). When this happens the bugs might be pretty subtle. This is especially true when a program "depends upon a bug in the libraries" (so the fix to the library breaks the programs that are linked to it).
In the worst cases you can just put copies of the necessary (working) libraries into a directory and start the affected program through a small shell script wrapper. That wrapper just exports the environment variables LD_PRELOAD and/or LD_LIBRARY_PATH to point to these libraries or this directory (respectively). These magic environment variables will force the dynamic linker to override its normal linking conventions according to your needs.
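A minimal sketch of such a wrapper, assuming the stashed libraries live in /usr/local/oldlibs and the real binary has been renamed out of the way (both names are made up for this example):

    #!/bin/sh
    # Wrapper: run the real program against a private stash of
    # known-good libraries instead of the system-wide ones.
    LD_LIBRARY_PATH=/usr/local/oldlibs
    export LD_LIBRARY_PATH
    exec /usr/local/bin/someprog.real "$@"

LD_PRELOAD works similarly, but it names specific library files (rather than a search directory) to be loaded ahead of everything else.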
(This is obviously only a problem when the sources to the affected application are unavailable, since re-compiling and re-linking solves the real problem.)
In truly desperate cases you could possibly get a statically linked binary. This would contain all the code it requires and no dynamic linking would be necessary (or possible).
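If you have the sources you can build one yourself; with gcc it's just a flag:

    # Build a statically linked binary --- all library code is copied
    # into the executable, so no shared libraries are needed at run time.
    gcc -static -o myprog myprog.c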
Note that the problem I've just described relates to shared libraries generally. It is not new with the introduction of glibc --- the actual cases where I've had to use LD_PRELOAD were all with libc5.
Some defections at Debian have me wondering about using that version.
I'm not sure I understand this. First, I'm not sure which defections you're referring to. I presume you've read some messages to the effect that some developers or package maintainers refuse to make this migration (to glibc).
More importantly I'm not sure which 'version' you are referring to.
The current version of Debian (1.3) uses libc5. The next version (currently in feature freeze --- slated to be 2.0) uses glibc.
I've read about some major problems with RedHat 5.0, but 4.x isn't compatible with the new GNU gcc libs (right?).
I wouldn't call the problems with Red Hat 5.0 "major." When I was in tech support and QA we used a system of bug categories ranging from "cat 1" to "cat 4." The categories were roughly:

- cat 1: causes data loss or crashes the whole system
- cat 2: dies and is unusable
- cat 3: major function fails
- cat 4: cosmetic

... and the bugs that I've seen from RH5 have all been at cat 3 or lower. (Granted, people might argue about the severity level of various security bugs --- but let's not get into that.)
However, I agree that there have been many bugs in that release --- and that many of these have been glaring and very ugly.
One of the two times that I've had dinner with Erik Troan (he's a senior developer and architect for Red Hat Inc) I asked him why they forced out glibc support so soon and for such a major release.
He gave a refreshingly forthright response by asking:
"How many glibc users were there a month ago?"
(essentially none --- just a few developers)... and:
"How many are out there now?"
Basically it sounds like Red Hat Inc. knew that there were going to be problems with glibc --- and made the strategic decision to ship anyway and force the bugs to be found and fixed. This had to hurt, but it was probably viewed as the only way to move the whole Linux community forward.
I think they could have worked a little bit longer before release (since I really get a bad taste in my mouth when 'vipw' segfaults right after a fresh installation --- 'vipw' is the "vi for your passwd file" that sysadmins use to safely and manually add new accounts or make other changes). I was also hoping they'd wait until the 2.2 kernel was ready so they could wrap both into one new release.
(However, I guess that would have put them about six to eight months behind their own schedule. They seem to put out a new minor version every four to six months --- not quite quarterly.)
At the same time I have to recognize their approach as a service to the community. They have fixed the bugs quickly (and newer pressings of the same CD's contain many of these fixes). Like all shrink wrap software companies Red Hat Inc. is forced (by the distribution channel) to do "silent inlining" (incorporation of bug fixes into production runs without incrementing the version number). This is a sad fact of the industry --- and one that costs users and software companies millions of hours of troubleshooting time and confusion.
(My suggestion was that RH cut monthly CD's of their bug fixes and 'contrib' directory and offer subscriptions and direct orders of those. I don't know if they've ever taken me up on that. My concern is to save some of that bandwidth. I occasionally burn CD's of this sort and hand them out at users groups meetings so they can be shared freely).
Could you explain some of these shifts in the Linux versions field?
Well, I hope this has helped. There have been many sorts of shifts and transitions in Linux over the years. Obviously there were shifts from libc4 to libc5, shifts from a.out to ELF, and shifts between various kernel versions: 0.99 --> 1.0 --> 1.1 --> 1.2 --> 2.0, and the current effort to get 2.2 "out the door."
I think that all of these shifts have been progressive.
The worst one we suffered through was the change in the /proc filesystem structures between 1.2 and 2.0. This was the only time that a newly compiled kernel caused vital "user space" programs to just keel over and die (things like the 'ps' and 'top' commands would segfault).
That was ugly!
There was no easy way to switch between the kernel versions on the same root fs. The best solution at the time seemed to be to keep one root fs with the old "procps" suite on it and another with the new one. You'd then have whichever of these you were using mount all your other filesystems. For those of us that habitually create small root filesystems, with primary and alternate/emergency versions of them, it wasn't too much of a problem.
(I usually use about 63Mb --- sometimes as much as 127Mb --- and mount /usr, /var, and others --- or at least mount /usr and /usr/local and make things like /var, /opt, and /tmp into appropriate symlinks. I just isolated the "bad" binaries by moving them from under /usr to /root/.usr/ and replaced them with symlinks. Then the 'ps' that I called was automatically resolved to the one under my root fs --- which was different depending on which kernel I booted from.)
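In commands, that trick looked roughly like this (using 'ps' as the example and the paths from my description --- adapt them to your own layout):

    # Stash the kernel-specific binary on the root fs itself...
    mkdir -p /root/.usr/bin
    mv /usr/bin/ps /root/.usr/bin/ps
    # ... and point the shared /usr at it:
    ln -s /root/.usr/bin/ps /usr/bin/ps
    # /usr is mounted and shared between configurations; /root/.usr
    # lives on each root fs, so whichever root I boot supplies its
    # own matching 'ps'.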
I see no evidence that glibc (or the 2.2 kernel) will pose these sorts of problems. The worst problem I foresee with glibc is that it is so big. This is a problem for creating boot diskettes. I've heard that it compresses down to almost the same size as libc5 --- which means that glibc-based diskettes might be possible using a compressed initrd (initial RAM disk). At the same time it is probably unnecessary. The main new feature of glibc seems to be its support for NIS (network resolution of things like user account and group information --- things normally done by accessing local files like /etc/passwd and /etc/group). Many of these new features are probably unnecessary on rescue diskettes.
One final note about all these transitions and changes:
You aren't forced to go along for the ride. You can sit back and run Red Hat 4.2 or Debian 1.3 for as long as you like. You can install them now and add glibc alongside them later. You aren't forced to upgrade or change everything at once.
These "old" versions are never really "unsupported" --- there was someone who just released an updated Linux-Lite (based on the 1.0.9 kernel). This is one of the smallest, most stable kernels in the history of the OS. It has been updated to support ELF and have a few bug fixes applied. It can boot in about 2Mb of RAM (which is just enough for an internal router or print server) and handy for embedded applications that need TCP/IP.
Since we have the sources, and we're licensed to modify and redistribute them in perpetuity (the heart of the GPL), we can continue to maintain them as long as we like. Obviously there are some people out there who still do.
Thanks.
RS