There is no guarantee that your questions here will ever be answered. Readers at confidential sites must provide permission to publish. However, you can be published anonymously - just let us know!
From Paul Bussiere
Answered By Mike Orr, Jim Dennis
[Heather] Last month Paul Bussiere wrote in with a submission that raised a valid point, which I published in The Mailbag (http://www.linuxgazette.com/issue67/lg_mail67.html#mailbag/5) and which, pleasantly, has gotten us a few responses from potential authors. It mentioned that TAG had some comments for him, and I linked across, but the thread had escaped my processing script.
Not surprisingly a few people mailed him, wondering why we hadn't answered him. (See this month's Mailbag.) While it's certainly true that every month The Answer Gang does send out answers to a lot more people than you see in print these days, I had definitely intended to see his thread published.
So here it is -- my apologies for the confusion.
Of all the articles I have read on how wonderful Linux is, seldom have I seen any that [cynically] document how the average Windows user can go from mouse-clicking dweeb to Linux junkie.
[Mike] Have you read The Answer Gang column? It's chock-full of problems people have installing and using Linux, and should be a dose of reality for anybody thinking that going from Win to Lin requires no effort. Occasionally we run pieces about things to watch out for when doing your first Linux install.
So, the claim of FREE FREE FREE really isn't so....I've found other places that you can buy a CD copy cheaper but still, some money negates the FREE.
[Mike] Most experienced Linuxers would caution a new user against downloading the OS the first time or getting a $5 CD from Cheap Bytes. The cost of a commercial distribution with a detailed tutorial and reference manual is quite worth it, compared to spending a weekend (or two) getting it right.
Why doesn't Linux do the equivalent of a DOS PATH command? Newbie Me is trying to shutdown my system and I, armed with book, type "shutdown -h now" and am told 'command not found'. But wait, my book says...etc etc....and of course, I now know you have to wander into sbin to make things happen. Why such commands aren't pathed like DOS is beyond me....perhaps that's another HowTo that has eluded me.
[Mike] Linux does have commands for querying and setting the path.
$ echo $PATH
/usr/bin:/bin:/usr/X11R6/bin:/usr/games:.
$ PATH=/home/me/bin:$PATH
$ echo $PATH
/home/me/bin:/usr/bin:/bin:/usr/X11R6/bin:/usr/games:.
The path is an environment variable like any other environment variable, so you set it in your shell and it propagates down to all subcommands. (Actually, that's the way it works in DOS too; DOS just has an extra convenience command to set it.)
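To make such a change survive logout, the usual approach (a sketch here, assuming a bash login shell; the exact startup filename varies by distribution) is to set PATH in a shell startup file:

```shell
# Extend the search path for the current session:
PATH="$PATH:/sbin:/usr/sbin"
export PATH

# To make it permanent, put the same two lines in ~/.bash_profile
# (or ~/.bashrc for non-login shells).

# Verify that /sbin is now on the path:
case ":$PATH:" in
    *:/sbin:*) echo "/sbin is on the path" ;;
    *)         echo "/sbin is missing" ;;
esac
```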
[JimD] Actually, technically the way that environment variables and shell variables work in UNIX is somewhat different from how they work in DOS' COMMAND.COM.
In UNIX shell there are variables. These are either shell/local variables or they are in the "environment." A variable is an association of a value to a name. They are untyped strings. One can move a variable from the shell's heap into the environment using the 'export' (Bourne and friends) or the 'setenv' (csh/tcsh) built-in commands.
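A small demonstration of the distinction (a sketch in Bourne/bash syntax; the variable names are just for illustration):

```shell
# A plain shell variable lives only in this shell's own memory:
LOCAL_VAR="shell only"

# An exported variable is copied into the environment segment,
# so child processes inherit a copy of it:
ENV_VAR="inherited"
export ENV_VAR

# A child shell sees the exported variable but not the local one:
child_sees_env=$(sh -c 'echo "$ENV_VAR"')
child_sees_local=$(sh -c 'echo "$LOCAL_VAR"')

echo "env:   '$child_sees_env'"     # env:   'inherited'
echo "local: '$child_sees_local'"   # local: ''
```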
In either case all that is being done is that the variable and its value are being stored in different memory regions (segments). Here's why:
When any program is started under UNIX it is done via one of the exec*() family of system calls. exec() replaces the currently running program with a new one. That is to say that it overwrites the code, heap and other memory segments of the current process with a new program (and performs a number of initialization and maintenance functions, such as closing any file descriptors that were marked "close on exec", resetting the signal processing masks, etc.).
The environment is the one segment that is NOT overwritten during exec(). This allows the process to retain some vestige of its "former self."
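This survival through exec() is easy to observe from the shell (a sketch; the shell's "exec" builtin, discussed below, is a wrapper around the exec() system call):

```shell
# The inner shell sets and exports a variable, then replaces
# itself via exec with yet another shell. The exported variable
# is still visible after the exec, because the environment
# segment is preserved.
survives=$(sh -c 'KEEP="still here"; export KEEP; exec sh -c "echo \$KEEP"')
echo "$survives"    # still here
```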
Under UNIX all processes are created via the fork() system call. (Under Linux fork() is a special case of the clone() system call --- but the statement is still "mostly" true.) fork() creates an exact copy of a process. Normally the fork()'d processes (now there are two "clones" of one another) immediately go their separate ways. One of them continues one set of operations (usually the parent) while the other handles some other jobs (processing a subshell, handling a network connection/transaction, or going on to exec() a new program).
So, a side effect of the environment handling is that a copy of the environment is passed from a shell to all of its descendants. Note: this is a copy. The environment is NOT an interprocess communications mechanism. At least, it is NOT a bidirectional one.
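The copy-not-IPC point can be demonstrated directly (a sketch; the variable name is arbitrary):

```shell
MESSAGE="original"
export MESSAGE

# The ( ... ) subshell gets its own copy of the environment;
# changing the variable there does not propagate back up:
( MESSAGE="changed in child" )

echo "$MESSAGE"   # still prints: original
```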
(Incidentally any process can also remove items from its environment, or even corrupt it by scribbling sequences of characters that don't follow the NUL-terminated variable=value convention. Also, there are variations of the exec*() system call which allow a process to specify an alternative block of memory --- a pointer to a new environment. In this way a process can prepare a completely new environment for itself.)
Notice that, in UNIX, the notion of a process persists through the execution of multiple programs. The init process forks children which become (exec) shells to handle startup scripts, and "getty" processes to handle login requests. The various rc shell processes spawn off children which become invocations of external commands (like mount, fsck, rm, etc). Some of those children set themselves up as "session leaders" (create their own process groups), detach themselves from the console and "become" various sorts of daemons. Meanwhile the getty processes "become" copies of login, which in turn may become login shells or which (under other versions of the login suite --- particularly PAM versions with logout "cleanup" enabled) may spawn children that become interactive shells.
An interactive shell spawns off many children. EVERY pipe implicitly creates a subprocess. Every "normal" invocation of an external command also creates a subprocess (the shell's own "exec" command being a notable exception: it terminates the shell, causing the current process to "become" a running instance of some other program --- in other words the shell "exec" command is a wrapper around the exec() system call). Some of the subprocesses don't perform any exec(). These are subshells. Thus a command like:
echo foo | read bar
... from bash will create one subshell (child process) which will read a value from the pipeline. (It will then simply exit, since this is a nonsensical example.) A command like:
/bin/echo foo | { read bar; echo $bar$bar ; }
... creates two children (actually a child and a grandchild). The child will create a pipe, fork(), and then exec() the external version of the echo command. Its child (our shell's grandchild) will read from its pipeline, modify its copy of the bar variable, then echo a couple of copies of that value. Note that we don't know (from these examples) whether bar is a shell/local variable or an environment variable. It doesn't matter. If the variable was in our shell's environment then the subshell (the grandchild, in this case) will modify its copy of that environment variable. If the variable didn't exist, the subshell will simply create it as a local variable. If the variable did exist as a shell/local (heap) variable in our shell, it would cease to exist in the child process after the exec() of the /bin/echo command, but a copy of it would still exist (and be overwritten) in the grandchild process.
Meanwhile the original shell process does a wait() system call on its children. In other words it just idly sits by until the work is done, then reaps the result codes (the exit values returned by the subprocesses) and continues.
(Incidentally, the fact that the child process is on the "right" side of these pipe operators is common but not guaranteed. It is the case for the Bourne and bash shells. However, the opposite holds true for newer versions of ksh ('93, or maybe '88 and later?) and zsh; I personally believe that ksh and zsh are doing "The Right Thing (TM)" in this regard --- but it is a nitpick.)
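The subshell behavior, and a portable way around it, can be seen directly (a sketch; under bash or the Bourne shell the first echo shows an empty variable, while under ksh/zsh it would not):

```shell
unset bar
echo foo | read bar
# In bash and the Bourne shell 'read' ran in a subshell, so its
# copy of bar vanished when that child exited:
echo "after pipe:  bar='$bar'"

# Command substitution captures the value in the parent instead:
bar=$(echo foo)
echo "after subst: bar='$bar'"   # bar='foo'
```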
My point here is that the nature of "environment" variables seems to cause new students of UNIX endless confusion. It's really quite easy to understand if you think in terms of the underlying fork() and exec() operations and how they'll affect a process' memory map.
MS-DOS has an "environment" that is similar to the UNIX environment in that it is a set of name/value pairs and it exists in a portion of memory that persists through the execution of new programs. However MS-DOS doesn't have a fork() or similar system call and can't implement pipes as "coprocesses" (with one process writing into a "virtual file" --- an unnamed file descriptor that exists purely in memory and never on a real, physical filesystem).
(MS-DOS handles pipes by creating a temporary file and using "transparent redirection": it executes the writer process, waits for that to complete --- writing all its output into the temp file --- then executes the reader process with transparent input redirection to eat up the contents of the temp file, and finally deletes the temp file itself. This is a pale imitation of how UNIX manages pipes.)
The scary thing about the way that MS-DOS runs these programs is that it marks some state in one region of memory (part of its "reserved/resident" memory), then executes the program. When the external program exits it passes control back to the command interpreter's resident portion. The resident portion then performs a checksum on the "transient portion" of the DOS address space to determine whether that "overlay" needs to be reloaded from the command interpreter's disk image/file. Then it restores some of its state. If it was in the process of executing a batch file it *re-opens* the file, seeks to its previous offset (!) and resumes its read/parse/execute cycle.
I can imagine that experienced UNIX programmers who were never tortured with MS-DOS internals or the nitty-gritty of CP/M are cringing in horror at this model. However, it really makes a lot of sense if you consider the constraints under which MS-DOS was hacked to operate. It was intended to work from floppies (possibly on systems with a single floppy drive and potentially without any resident system filesystem). It needed to work in about 128K or less (that's kilobytes) of RAM, though it might have had as much as 640K to work with.
I guess I get nervous when I see people explaining UNIX semantics in terms of MS-DOS. I've learned too much about the differences between them to be comfortable with that --- and I've seen too many ways in which the analogy can lead to confusion in the UNIX novice. Probably it's silly of me to nitpick on that and bring up these hoary details. MS-DOS is almost dead; so it may be that the less people know about how it worked, the better.
[Mike] /sbin and /usr/sbin should be in the root user's path. If they're not, adjust /root/.bash_profile or /root/.bashrc.
Whether to put the sbin directories in ordinary users' paths is a matter of debate. The debate goes like this:
CON: The sbin directories are for administrative commands that ordinary users would have no reason to use.
PRO: But what about traceroute, ping, route and ifconfig? Ordinary users may want to run these to see if a host is down, find out which ISPs it goes through to reach a host, find out what our IP number is and which hosts are our gateways.
CON: I don't want my users running ping because it increases network load and it can be misused for a DoS attack. As for route and ifconfig, too bad.
PRO: You're a fascist. I'm putting them in my path myself. Nyaa, nyaa, nyaa!
Some programs are borderline so it can be difficult to determine whether they belong in sbin or bin. Also, there are disagreements and uncertainty about what sbin is really for. (I've heard it was originally for statically-linked programs in case their dynamic counterparts weren't running.)
Actually, that was my submission....tongue in cheek.....not exactly questions for the column! Whoops...should have been more specific!
Paul J. Bussiere
[Mike] Submitted to the Mailbag and The Answer Gang. It'll be up to the editor of those sections whether to publish it.
[Heather] And, I decided to publish it both ways, but then I screwed up. Oh well, I'm only human...