May the source be with you, but remember the KISS principle ;-)


Unix Shells

Ksh93 and Bash Shells for Unix System Administrators


Unix shell history

Best Shell Books

Recommended Links

Papers, ebooks, tutorials

Unix Tools

Unix Utilities


Bourne Shell and portability




Bash Built-in Variables

Readline and inputrc 

IFS variable

Command completion


 Bash history and bang commands

Comparison operators

 Arithmetic Expressions in BASH

String Operations in Shell

Bash Control Structures

if statements

Loops in Bash

Usage of pipes with loops in shell

Case statement

Bash select statement

Input and Output Redirection

Advanced filesystem navigation

Shell Prompts Customization

Subshells

Brace Expansion

Examples of .bashrc files

History substitution

Vi editing mode

Pushd and popd

Pretty Printing

Functions

Unix shell vi mode

The Unix Hater's Handbook

Sysadmin Horror Stories

bash Tips and Tricks

Tips

Humor

Etc
"The lyfe so short, the craft so long to lerne,''

Chaucer c. 1340–1400
(borrowed from the bash FAQ)

This collection of links is oriented toward students (initially it was provided as reference material for my university shell programming course) and is designed to emphasize the usage of advanced shell constructs and pipes in shell programming (mainly in the context of ksh93 and bash 3.2+, which have good support for those constructs).  An introductory paper, Slightly Skeptical View on Shell, discusses the shell as a scripting language and as one of the earliest examples of very high level languages.  The page might also be useful for system administrators, who constitute a considerable percentage of shell users and the lion's share of shell programmers.

This page is the main page to a set of sub-pages devoted to shell that collectively are known as Shellorama. The most important are:

I strongly recommend getting a so-called orthodox file manager (OFM). This tool can immensely simplify Unix filesystem navigation and file operations (Midnight Commander, while deficient in handling the command line, can be tried first, as it is an active project and provides ftp and sftp virtual filesystems on remote hosts).

Actually, filesystem navigation in shell is an area of great concern, as there are several serious problems with the current tools for Unix filesystem navigation.  I would say that usage of the cd command (the most common method) is conceptually broken and deprives people of a full understanding of the Unix filesystem; I doubt that it can be fixed within the shell paradigm (C-shell made an attempt to compensate for this deficiency by introducing history and the popd/pushd/dirs troika, but this proved to be neither necessary nor sufficient to compensate for the problems with in-depth understanding of the classical Unix hierarchical filesystem inherent in purely command line navigation ;-). Paradoxically, sysadmins who use OFMs usually have a much better understanding of the power and flexibility of the Unix filesystem than people who use the command line.   All in all, usage of OFMs in system administration represents the Eastern European school of administration, and it might be a better way to administer systems than the typical "North American way".

The second indispensable tool for a shell programmer is Expect. This is a very flexible application that can be used to automate interactive sessions as well as to automate testing of applications.

Usually people who know shell and awk and/or Perl well are considered to be advanced Unix system administrators (this is another way of saying that system administrators who do not know the shell/awk/Perl troika well are essentially various flavors of entry-level system administrators, no matter how many years of experience they have). I would argue that no system administrator can consider himself to be a senior Unix system administrator without in-depth knowledge of both one of the OFMs and Expect.

No system administrator can consider himself to be a senior Unix system administrator without in-depth knowledge of both one of the OFMs and Expect.

An OFM tends to educate the user about the Unix filesystem in a subtle, but definitely psychologically superior way. Widespread use of OFMs in Europe, especially in Germany and Eastern Europe, tends to produce specialists with substantially greater skills at handling Unix (and Windows) filesystems than users who have experience only with more primitive command-line-based navigation tools.

And yes, cd navigation is conceptually broken. This is not a bizarre opinion of the author, this is a fact: when you do not even suspect that a particular part of the tree exists, something is conceptually broken.  People using the command line know only fragments of the filesystem structure, like the blind men who each know only part of the elephant.  Current Unix filesystems, with, say, 13K directories for a regular Solaris installation, are just unsuitable for the "cd way of navigation"; 1K directories was probably OK, but when there are over 10K directories you need something else. Here quantity turns into quality. That's my point.

The page provides rather long quotes from web pages, as web pages are a notoriously unreliable medium and can disappear without a trace.  That makes this page somewhat difficult to browse, but it is not designed for browsing; it is designed as supplementary material for the university shell course and for self-education.

Note: A highly recommended shell site is SHELLdorado by Heiner Steven.
This is a really excellent site with a good coding practice section,
some interesting example scripts, and tips and tricks.

A complementary page with Best Shell Books Reviews is also available. Although the best book selection is to a certain extent individual, the selection of a bad book is not: so this page might at least help you avoid the most common bad books (often the books recommended by a particular university are either weak or boring or both; Unix Shell by Example is one such example ;-). Still, the shell literature is substantial (over a hundred books), and that means you can find a suitable textbook. Please be aware that few authors of shell programming books have the broad understanding of Unix necessary for writing a comprehensive shell book.

IMHO the first edition of O'Reilly's Learning the Korn Shell is probably one of the best and contains a nice set of examples (the second edition is more up to date but generally weaker). The first edition also has the advantage of being available in HTML form (O'Reilly Unix CD). It does not cover ksh93, but it presents ksh in a unique way that no other book does. Some useful examples can also be found in the UNIX Power Tools book (see the archive of all shell scripts (684 KB); the book is available in HTML form in one of the O'Reilly CD Bookshelf collections).

Still, one needs to understand that Unix shells are pretty archaic languages which were designed with compatibility with dinosaur shells in mind (and the Bourne shell is a dinosaur shell by any definition). Designers, even designers as strong as David Korn, were hampered by compatibility problems from the very beginning (in a way it is amazing how much ingenuity they demonstrated in enhancing the Bourne shell; I am really amazed how David Korn managed to extend the Bourne shell into something much more usable and much closer to a "normal" scripting language. In this sense ksh93 stands as a real pinnacle of shell compatibility and a testament to the art of shell language extension).

That means that outside of interactive usage and small one-page scripts, they have generally outlived their usefulness. That's why for more or less complex tasks Perl is usually used (and should be used) instead of shells. While shells have continued to improve since the original C-shell and Korn shell, the shell syntax is frozen in space and time and now looks completely archaic.  There are a large number of problems with this syntax, as it does not cleanly separate lexical analysis from syntax analysis.  Bash 3.2 actually made some progress in overcoming the most archaic features of old shells, but it still has its own share of warts (for example, the last stage of a pipe does not run at the same level as the script encompassing the pipe).

Some syntax features in shell are idiosyncratic, as Steve Bourne played with Algol 68 before starting work on the shell. In a way, he proved to be the most influential bad language designer, the designer who has had the most lasting influence on the Unix environment (that does not exonerate the subsequent designers, who probably could have taken a more aggressive stance on eliminating the initial shell design blunders by marking them as "legacy").

For example, there is very little logic in how different types of blocks are delimited in shell scripts. Conditional statements end with the (broken) classic Algol 68 reversed-keyword syntax: 'if condition; then echo yes; else echo no; fi', but loops are structured like a perverted version of PL/1 (prefixed with do and terminated with done), individual case branches end with ';;', and functions have C-style bracketing "{", "}".  M. D. McIlroy, as Steve Bourne's manager, should be ashamed. After all, at that time the level of compiler construction knowledge was quite sufficient to avoid such blunders (David Gries' book was published in 1971) and Bell Labs staff were not a bunch of enthusiasts ;-).
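To make the inconsistency visible, here is a minimal sketch showing all four bracketing styles side by side (the file names and the function are made up for illustration):

    # Conditionals: Algol 68 style reversed keyword (if ... fi)
    if [ -r /etc/passwd ]; then
        echo "readable"
    else
        echo "not readable"
    fi

    # Loops: do ... done bracketing
    for f in /etc/*.conf; do
        echo "$f"
    done

    # Case branches: each branch terminated with ';;'
    case "$1" in
        start) echo "starting" ;;
        stop)  echo "stopping" ;;
        *)     echo "usage: $0 {start|stop}" ;;
    esac

    # Functions: C-style braces
    say_hello() {
        echo "hello, $1"
    }
    say_hello world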

Also, the original Bourne shell was almost a pure macro language. It performed variable substitution, tokenization and other operations on one line at a time without understanding the underlying syntax. This results in many unexpected side effects. Consider a simple command
rm $file
If the variable $file accidentally contains a space, that will lead to it being treated as two separate arguments to the rm command, with possibly nasty side effects.  To fix this, the user has to make sure every use of a variable is enclosed in quotes, like in rm "$file".
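A minimal demonstration of the word-splitting problem (the file names here are made up; try it in a scratch directory):

    $ touch "old data" data
    $ file="old data"
    $ rm $file            # word splitting: rm receives two arguments, "old" and "data"
    rm: cannot remove 'old': No such file or directory
    $ ls                  # the unrelated file "data" is gone; "old data" survived
    old data
    $ rm "$file"          # quoted: rm receives the single argument "old data"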

Variable assignments in the Bourne shell are whitespace sensitive. 'foo=bar' is an assignment, but 'foo = bar' is not: it is interpreted as the command foo with '=' and 'bar' as two arguments. This is another strange idiosyncrasy.

There is also an overlap between aliases and functions. Aliases are positional macros that are recognized only as the first word of a command, as in the classic alias ll='ls -l'.  Because of this, aliases have several limitations that functions do not.

Functions are not positional and in most cases can emulate alias functionality:
ll() { ls -l $*; }
The curly brackets are some sort of pseudo-commands, so skipping the semicolon in the example above results in a syntax error. As there is no clean separation between lexical analysis and syntax analysis, removing the whitespace between the opening bracket and 'ls' will also result in a syntax error.
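A short sketch of the difference (the alias and function names are arbitrary): an alias is a purely textual substitution of the first word of a command, while a function receives real arguments and can use them anywhere in its body:

    alias ll='ls -l'          # expands only when 'll' appears as the first word
    ll /tmp                   # becomes 'ls -l /tmp'

    # An alias cannot place its arguments in the middle of the expansion;
    # a function can:
    lsgrep() { ls -l "$1" | grep "$2"; }
    lsgrep /tmp '\.log$'      # first argument is the directory, second the pattern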

Since the use of variables as commands is allowed, it is impossible to reliably check the syntax of a script, as substitution can accidentally result in a keyword, as in this example that I found in a paper about fish (not that I like or recommend fish):

if true; then if [ $RANDOM -lt 1024 ]; then END=fi; else END=true; fi; $END
Both bash and zsh try to determine if the command in the current buffer is finished when the user presses the return key, but because of issues like this, they will sometimes fail.

Dr. Nikolai Bezroukov


Old News ;-)

2009 2008 2007 2006 2005 2004 2003 2002

[Nov 08, 2015] Get timestamps on Bash's History
One of the annoyances of Bash is that searching through your history has no context. When did I last run that command? What commands were run at 3am, while on the clock?

The following single line, run in the shell, will provide date and time stamping for your Bash history the next time you log in or run bash.

echo  'export HISTTIMEFORMAT="%h/%d - %H:%M:%S "' >>  ~/.bashrc
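After the next login (or after sourcing ~/.bashrc), entries recorded from that point on carry a timestamp when you run history; the output below is only illustrative of the "%h/%d - %H:%M:%S" format:

    $ history | tail -2
      501  Nov/08 - 14:02:11 cd /var/log
      502  Nov/08 - 14:02:15 ls -ltr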

[Nov 08, 2015] Single command shell accounts

Notable quotes:
"... /etc/ssh/sshd_config ..."
"... /home/restricteduser/.ssh/authorized_keys ..."

There are times when you will want a single purpose user account: an account that cannot get a shell, nor can it do anything but run a single command. This can come in useful for a few reasons; for me, I use it to force an svn update on machines that can't use user-generated crontabs. Others have used this setup to allow multiple users to run some arbitrary command without giving them shell access.

Add the user

Add the user as you'd add any user. You'll need a home directory, as I want to use ssh keys so I don't need a password and it can be scripted from the master server.

 root@slave1# adduser restricteduser
Set the user's password

Select a nice strong password. I like using $ pwgen 32

 root@slave1# passwd restricteduser
Copy your ssh-key to the server

Some Linux distros don't have the following command, in this case, contact your distro mailing list or Google.

 root@master# ssh-copy-id restricteduser@slave1
Lock out the user

Lock the user's password. This contradicts the step above, but it ensures that restricteduser can't update their password.

 root@slave1# passwd -l restricteduser
Edit the sshd config

Depending on your system, this can be in a number of places. On Debian, it's in /etc/ssh/sshd_config. Put the following at the bottom of the file.

 Match User restricteduser
    AllowTCPForwarding no
    X11Forwarding no
    ForceCommand /bin/foobar_command
Restart ssh
 root@slave1# service ssh restart
Add more ssh keys

Add any additional ssh key to /home/restricteduser/.ssh/authorized_keys
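With the Match block in place, any ssh login as restricteduser runs only the forced command, regardless of what the client asks for (/bin/foobar_command is the placeholder from the config above):

 root@master# ssh restricteduser@slave1              # runs /bin/foobar_command, then disconnects
 root@master# ssh restricteduser@slave1 /bin/bash    # the requested command is ignored; the forced command runs instead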

[Nov 08, 2015] A Practical Guide to Linux Commands, Editors, and Shell Programming (3rd Edition)

Average content, but a huge page count (1200 pages). It's unclear what you are paying your money for. Like all Sobell's books it is mainly a reference, not so much a textbook. It tries to cover too much, with (weak and superficial) chapters on Perl, Python and MySQL. He also covers both the apt-get and yum utilities, which have little or nothing to do with shell programming. There are even some OS X notes as well. The older edition (Prentice Hall, July 11, 2005, ISBN-10: 0131478230) is much cheaper and contains mostly the same material (see my review).
Notable quotes:
"... Keep in mind that this is a reference book ..."
"... It's a very dense reference manual that needs the company of a few lighter books to fill in some gaps and provide some more in-depth info on the more interesting (for you) sections. ..."
September 24, 2012 |
ayates83 on December 6, 2012
Excellent reference guide

Upon first getting this book I wasn't sure what to expect. I've read a number of books and online how-to guides dealing with BASH scripting. This book is well organized and presents the material in a way that is easily understood. It was surprising as well to see that it covered OSX which I also use extensively in my day-to-day work. Most scripting guides that I have read just cover a specific Linux distribution.

I think this book would be great for those just beginning to work with Linux as well. I would definitely recommend this book as a supplement to my students as they are learning how to tackle the sometimes daunting task of learning how to write effective scripts for the Linux classes that I teach. The fact that a number of languages were briefly covered makes this book an excellent tool. BASH and python are the most common languages that I see used and that are taught in my class. It's an added bonus to have a section covering perl scripts as well.

The portion of the book I found especially helpful is the command reference section and the appendix on regular expressions. Not being particularly strong with regexes myself, this has become a very valuable quick reference guide when I need to brush up on my syntax.

The only criticism that I would have for the book is the section on MySQL. It just seems out of place in a system scripting/programming book and should rather be included in a book geared more specifically towards systems administration.

Overall, I give this book 4/5 stars.

Adam Yates

Mehon October 23, 2015

4.0 out of 5 stars The good, the bad, and the slightly ugly

I'll keep it short. A full review would take too much time and space.

The good:

This is a very large reference manual, with lots of information in it on a wide variety of subjects. The fact that there is an entire chapter on MySQL is a big plus, as anyone who deals with software professionally NEEDS to know about databases. The more, the better. The indices in the book are some of the best I've seen. Redundancies are not always a bad thing!

The bad:

Probably just a personal gripe, but the author doesn't use strict/warnings in his Perl code. It doesn't seem to be a "Perl for people who already know Perl" chapter. Any intro to Perl should use strict/warnings. Perl lets you do all kinds of crazy things, which is both its greatest strength and its greatest weakness. Using strict/warnings in ALL of your Perl helps you learn to write better code. When you KNOW how to write code and know when to add "no strict" to a subroutine (or just leave it out entirely) to work some Perl magic, then you can take advantage of the added freedoms. Nearly all of the scripts in this chapter need to be partially or completely rewritten if you use strict, but I suppose that's just another way of learning Perl.

The slightly ugly:

Keep in mind that this is a reference book, not Lord of the freakin Rings (unless you spend way too much time with computers, I suppose). There are multiple instances on just about every page of "refer to xyz on page 1###" when you're only on page 3##. There's a lot of flipping around to various sections of the book, like one of those old "Choose your own story" books. Granted, it's tough to pull off a book this thick and dense without doing that, but there was just a LOT of it, and it gets a bit old after a while.

The short version:

It's a very dense reference manual that needs the company of a few "lighter" books to fill in some gaps and provide some more in-depth info on the more interesting (for you) sections.

[Nov 08, 2015] Linux Command Line and Shell Scripting Bible by Richard Blum & Christine Bresnahan

***+  This is an average or slightly above average introductory book, oriented toward newcomers to shell scripting. It looks like the authors just added the (( )) type of arithmetic expression as an afterthought, while it should be taught as a primary construct. The situation is similar with the C-style loop, now available in bash as well. This is one of the few books on shell scripting published after 2005.  Unix Shell Programming, Third Edition by Stephen Kochan & Patrick Wood is a much better book. For everybody but absolute beginners, Classic Shell Scripting (May 1, 2005) might also be a better book, although it belongs to the intermediate level.  But this is definitely a good alternative to Sobell's A Practical Guide to Linux Commands, Editors, and Shell Programming (3rd Edition) published in 2012.
BTW, if somebody pretends to write a scripting bible, he/she should be a scripting god. Neither author qualifies ;-). They are pretty average, run-of-the-mill instructors.
Notable quotes:
"... It has one of the best explanations for sed and gawk I have ever seen. ..."
"... Great examples and clearly described concepts. ..."
January 20, 2015 |
Michael, on May 17, 2015
By the time I bought this book, I had already read a lot of online resources about bash scripting, and I had already been using Linux for two years. I had even read most of the A+ certification book on Linux. Despite that, I was constantly struggling to write bash scripts that worked; this is because so much of the free online documentation on bash scripting is confusing and incomplete. Even when consulting co-workers, they could not explain why so many things I tried to code in a bash script did not work. That's when I decided to buy this book.

The "Linux Command Line and Shell Scripting Bible" cleared up a lot of problems that have been plaguing me for a long time now. I wish that I had started to learn bash scripting with this book, it could have saved me a lot of time. I would highly recommend this for anybody who will use linux.

Let me list some things this book explained to me that I struggled with for years prior:
- When is a subshell made, what are the implications of that, how does variable scoping come into play.
- how can you create, manipulate, and pass around arrays in bash
- how does the "return" statement behave in functions, how to use that in an if statement
- how can you do math in bash
- the differences between [ ] and [[ ]]

Here are some other things I love about this book:
- it has an excellent explanation of how you could parse a command line that follows a complicated pattern like "mycommand --longopt -a -bcf input.txt -- foo bar zop". Before I picked up this book I thought that would be too difficult to do in a bash script.
- It explains how to easily create GUI interfaces for your script.
- It has one of the best explanations for sed and gawk I have ever seen.

Throughout the entire book, everything said is clear and easy to understand, and the authors give you ample examples to demonstrate the point. While the thickness of the book is a bit intimidating, you will find that you can read it pretty fast because a lot of those pages are full of clear examples that you can read quickly.

Metro CA on September 8, 2015

Four Stars

Great examples and clearly described concepts.

[Aug 02, 2015] The Linux Tips HOWTO Detailed Tips

Converting all files in a directory to lowercase. Justin Dossey,

I noticed a few overly difficult or unnecessary procedures recommended in the 2c tips section of Issue 12. Since there is more than one, I'm sending them to you:

         # lowerit
         # convert all file names in the current directory to lower case
         # only operates on plain files--does not change the name of directories
         # will ask for verification before overwriting an existing file
         for x in `ls`; do
           if [ ! -f $x ]; then
             continue
           fi
           lc=`echo $x | tr '[A-Z]' '[a-z]'`
           if [ $lc != $x ]; then
             mv -i $x $lc
           fi
         done

Wow. That's a long script. I wouldn't write a script to do that; instead, I would use this command:
for i in * ; do [ -f $i ] && mv -i $i `echo $i | tr '[A-Z]' '[a-z]'`; done
on the command line.

The contributor says he wrote the script how he did for understandability (see below).

On the next tip, this one about adding and removing users, Geoff is doing fine until that last step. Reboot? Boy, I hope he doesn't reboot every time he removes a user. All you have to do is the first two steps. What sort of processes would that user have going, anyway? An irc bot? Killing the processes is as simple as:

kill -9 `ps -aux |grep ^<username> |tr -s " " |cut -d " " -f2`
Example, username is foo
kill -9 `ps -aux |grep ^foo |tr -s " " |cut -d " " -f2`
That taken care of, let us move to the forgotten root password.

The solution given in the Gazette is the most universal one, but not the easiest one. With both LILO and loadlin, one may provide the boot parameter "single" to boot directly into the default shell with no login or password prompt. From there, one may change or remove any passwords before typing "init 3" to start multiuser mode. Number of reboots: 1. The other way: number of reboots: 2.

Justin Dossey

[Aug 27, 2011] Turn Vim into a bash IDE By Joe 'Zonker' Brockmeier

June 11, 2007 |

By itself, Vim is one of the best editors for shell scripting. With a little tweaking, however, you can turn Vim into a full-fledged IDE for writing scripts. You could do it yourself, or you can just install Fritz Mehner's Bash Support plugin.

To install Bash Support, download the zip archive, copy it to your ~/.vim directory, and unzip the archive. You'll also want to edit your ~/.vimrc to include a few personal details; open the file and add these three lines:

let g:BASH_AuthorName   = 'Your Name'
let g:BASH_Email        = ''
let g:BASH_Company      = 'Company Name'

These variables will be used to fill in some headers for your projects, as we'll see below.

The Bash Support plugin works in the Vim GUI (gVim) and text mode Vim. It's a little easier to use in the GUI, and Bash Support doesn't implement most of its menu functions in Vim's text mode, so you might want to stick with gVim when scripting.

When Bash Support is installed, gVim will include a new menu, appropriately titled Bash. This puts all of the Bash Support functions right at your fingertips (or mouse button, if you prefer). Let's walk through some of the features, and see how Bash Support can make Bash scripting a breeze.

Header and comments

If you believe in using extensive comments in your scripts, and I hope you do, you'll really enjoy using Bash Support. Bash Support provides a number of functions that make it easy to add comments to your bash scripts and programs automatically or with just a mouse click or a few keystrokes.

When you start a non-trivial script that will be used and maintained by others, it's a good idea to include a header with basic information -- the name of the script, usage, description, notes, author information, copyright, and any other info that might be useful to the next person who has to maintain the script. Bash Support makes it a breeze to provide this information. Go to Bash -> Comments -> File Header, and gVim will insert a header like this in your script:

#          FILE:
#         USAGE:  ./
#       OPTIONS:  ---
#          BUGS:  ---
#         NOTES:  ---
#        AUTHOR:  Joe Brockmeier,
#       COMPANY:  Dissociated Press
#       VERSION:  1.0
#       CREATED:  05/25/2007 10:31:01 PM MDT
#      REVISION:  ---

You'll need to fill in some of the information, but Bash Support grabs the author, company name, and email address from your ~/.vimrc, and fills in the file name and created date automatically. To make life even easier, if you start Vim or gVim with a new file that ends with an .sh extension, it will insert the header automatically.

As you're writing your script, you might want to add comment blocks for your functions as well. To do this, go to Bash -> Comment -> Function Description to insert a block of text like this:

#===  FUNCTION  ================================================================
#          NAME:
#       RETURNS:

Just fill in the relevant information and carry on coding.

The Comment menu allows you to insert other types of comments, insert the current date and time, and turn selected code into a comment, and vice versa.

Statements and snippets

Let's say you want to add an if-else statement to your script. You could type out the statement, or you could just use Bash Support's handy selection of pre-made statements. Go to Bash -> Statements and you'll see a long list of pre-made statements that you can just plug in and fill in the blanks. For instance, if you want to add a while statement, you can go to Bash -> Statements -> while, and you'll get the following:

while _; do
done

The cursor will be positioned where the underscore (_) is above. All you need to do is add the test statement and the actual code you want to run in the while statement. Sure, it'd be nice if Bash Support could do all that too, but there's only so far an IDE can help you.

However, you can help yourself. When you do a lot of bash scripting, you might have functions or code snippets that you reuse in new scripts. Bash Support allows you to add your snippets and functions by highlighting the code you want to save, then going to Bash -> Statements -> write code snippet. When you want to grab a piece of prewritten code, go to Bash -> Statements -> read code snippet. Bash Support ships with a few included code fragments.

Another way to add snippets to the statement collection is to just place a text file with the snippet under the ~/.vim/bash-support/codesnippets directory.

Running and debugging scripts

Once you have a script ready to go, and it's testing and debugging time. You could exit Vim, make the script executable, run it and see if it has any bugs, and then go back to Vim to edit it, but that's tedious. Bash Support lets you stay in Vim while doing your testing.

When you're ready to make the script executable, just choose Bash -> Run -> make script executable. To save and run the script, press Ctrl-F9, or go to Bash -> Run -> save + run script.

Bash Support also lets you call the bash debugger (bashdb) directly from within Vim. On Ubuntu, it's not installed by default, but that's easily remedied with apt-get install bashdb. Once it's installed, you can debug the script you're working on with F9 or Bash -> Run -> start debugger.

If you want a "hard copy" -- a PostScript printout -- of your script, you can generate one by going to Bash -> Run -> hardcopy to This is where Bash Support comes in handy for any type of file, not just bash scripts. You can use this function within any file to generate a PostScript printout.

Bash Support has several other functions to help run and test scripts from within Vim. One useful feature is syntax checking, which you can access with Alt-F9. If you have no syntax errors, you'll get a quick OK. If there are problems, you'll see a small window at the bottom of the Vim screen with a list of syntax errors. From that window you can highlight the error and press Enter, and you'll be taken to the line with the error.

Put away the reference book...

Don't you hate it when you need to include a regular expression or a test in a script, but can't quite remember the syntax? That's no problem when you're using Bash Support, because you have Regex and Tests menus with all you'll need. For example, if you need to verify that a file exists and is owned by the correct user ID (UID), go to Bash -> Tests -> file exists and is owned by the effective UID. Bash Support will insert the appropriate test ([ -O _]) with your cursor in the spot where you have to fill in the file name.

To build regular expressions quickly, go to the Bash menu, select Regex, then pick the appropriate expression from the list. It's fairly useful when you can't remember exactly how to express "zero or one" or other regular expressions.

Bash Support also includes menus for environment variables, bash builtins, shell options, and a lot more.

Hotkey support

Vim users can access many of Bash Support's features using hotkeys. While not as simple as clicking the menu, the hotkeys do follow a logical scheme that makes them easy to remember. For example, all of the comment functions are accessed with \c, so if you want to insert a file header, you use \ch; if you want a date inserted, type \cd; and for a line end comment, use \cl.

Statements can be accessed with \a. Use \ac for a case statement, \aie for an "if then else" statement, \af for a "for in..." statement, and so on. Note that the online docs are incorrect here, and indicate that statements begin with \s, but Bash Support ships with a PDF reference card (under .vim/bash-support/doc/bash-hot-keys.pdf) that gets it right.

Run commands are accessed with \r. For example, to save the file and run a script, use \rr; to make a script executable, use \re; and to start the debugger, type \rd. I won't try to detail all of the shortcuts, but you can pull up a reference using :help bashsupport-usage-vim when in Vim, or use the PDF. The full Bash Support reference is available within Vim by running :help bashsupport, or you can read it online.

Of course, we've covered only a small part of Bash Support's functionality. The next time you need to whip up a shell script, try it using Vim with Bash Support. This plugin makes scripting in bash a lot easier.

[Jul 30, 2011] Advanced Techniques

Shell scripts can be powerful tools for writing software. Graphical interfaces notwithstanding, they are capable of performing nearly any task that could be performed with a more traditional language. This chapter describes several techniques that will help you write more complex software using shell scripts.

[Jul 30, 2011] Tuesday's Tips for Unix Shell Scripts

Welcome to Tuesday's Tips for shell scripting.

These tips come from my own scripting as well as answers I have provided to queries in various Usenet newsgroups (e.g., and comp.os.linux.misc).

The series ran from April to September, 2004, at which time I began work on a book of shell scripts. Due to the demands that project made on my time, I was unable to continue the series.

  1. The Easy PATH (28 Sep 2004)
  2. flocate — locate a file (21 Sep 2004)
  3. Toggling a variable (31 Aug 2004)
  4. Redirecting stdout and stderr (24 Aug 2004)
  5. A list of directories (17 Aug 2004)
  6. Learning to read (10 Aug 2004)
  7. New Bash Parameter expansion ( 3 Aug 2004)
  8. Heads up (27 Jul 2004)
  9. Random thoughts (20 Jul 2004)
  10. Useful variables (13 Jul 2004)
  11. Centering text on a line ( 6 July 2004)
  12. Setting multiple variables with one command (29 June 2004)
  13. Adding numbers from a file (22 June 2004)
  14. How many days in the month? (15 June 2004)
  15. Is this a leap year? ( 8 June 2004)
  16. Searching man pages with sman() ( 1 June 2004)
  17. Removing non-consecutive duplicate lines from a file (25 May 2004)
  18. Printing to entire width of screen (18 May 2004)
  19. Extracting multiple values from a string ( 4 May 2004)
  20. Automatic array indexing (27 April 2004)

[Sep 10, 2010] bash iterator trick

The UNIX Blog

A neat little feature I never knew existed in bash is being able to iterate over a sequence of numbers in a more or less C-esque manner. Coming from a Bourne/Korn shell background, creating an elegant iterator is always a slight nuisance, since you would come up with something like this to iterate over a sequence of numbers:

i=1
while [ $i -lt 10 ]; do
    # loop body goes here
    i=`expr $i + 1`
done

Well, not exactly the most elegant solution. With bash on the other hand it can be done as simple as:

for ((i=1; $i<10; i++)); do
    # loop body goes here
done

Simple and to the point.
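For completeness, recent bash versions (3.0 and later) also offer brace expansion for simple fixed numeric ranges, which covers many of the same cases; a small sketch (note that the endpoints must be literals, not variables):

for i in {1..9}; do
    echo "$i"
done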

[Jul 28, 2010] Bash Co-Processes

Linux Journal

One of the new features in bash 4.0 is the coproc statement. The coproc statement allows you to create a co-process that is connected to the invoking shell via two pipes: one to send input to the co-process and one to get output from the co-process.

The first use that I found for this came up while trying to do logging using exec redirections. The goal was to allow you to optionally start writing all of a script's output to a log file once the script had already begun (e.g. due to a --log command line option).

The main problem with logging output after the script has already started is that the script may have been invoked with the output already redirected (to a file or to a pipe). If we change where the output goes when the output has already been redirected then we will not be executing the command as intended by the user.

The previous attempt ended up using named pipes:


echo hello

if test -t 1; then
    # Stdout is a terminal.
    exec >log
else
    # Stdout is not a terminal.
    npipe=/tmp/$$.tmp        # named pipe in /tmp, named after this shell's PID
    trap "rm -f $npipe" EXIT
    mknod $npipe p
    tee <$npipe log &
    exec 1>&-
    exec 1>$npipe
fi

echo goodbye

From the previous article:

Here, if the script's stdout is not connected to the terminal, we create a named pipe (a pipe that exists in the file-system) using mknod and setup a trap to delete it on exit. Then we start tee in the background reading from the named pipe and writing to the log file. Remember that tee is also writing anything that it reads on its stdin to its stdout. Also remember that tee's stdout is also the same as the script's stdout (our main script, the one that invokes tee) so the output from tee's stdout is going to go wherever our stdout is currently going (i.e. to the user's redirection or pipeline that was specified on the command line). So at this point we have tee's output going where it needs to go: into the redirection/pipeline specified by the user.

We can do the same thing using a co-process:

echo hello

if test -t 1; then
    # Stdout is a terminal.
    exec >log
else
    # Stdout is not a terminal.
    exec 7>&1
    coproc tee log 1>&7
    #echo Stdout of coproc: ${COPROC[0]} >&2
    #echo Stdin of coproc: ${COPROC[1]} >&2
    #ls -la /proc/$$/fd
    exec 7>&-
    exec 7>&${COPROC[1]}-
    exec 1>&7-
    eval "exec ${COPROC[0]}>&-"
    #ls -la /proc/$$/fd
fi

echo goodbye
echo error >&2

In the case that our standard output is going to the terminal then we just use exec to redirect our output to the desired log file, as before. If our output is not going to the terminal then we use coproc to run tee as a co-process and redirect our output to tee's input and redirect tee's output to where our output was originally going.

Running tee using the coproc statement is essentially the same as running tee in the background (e.g. tee log &); the main difference is that bash runs tee with both its input and output connected to pipes. Bash puts the file descriptors for those pipes into an array named COPROC (by default): COPROC[0] is the descriptor for reading the co-process's output, and COPROC[1] is the descriptor for writing to its input.

Note that these pipes are created before any redirections are done in the command.

Focusing on the part where the original script's output is not connected to the terminal. The following line duplicates our standard output on file descriptor 7.

exec 7>&1

Then we start tee with its output redirected to file descriptor 7.

coproc tee log 1>&7

So tee will now write whatever it reads on its standard input to the file named log and to file descriptor 7, which is our original standard out.

Now we close file descriptor 7 (remember that tee still has the "file" that's open on 7 opened as its standard output) with:

exec 7>&-

Since we've closed 7 we can reuse it, so we move the pipe that's connected to tee's input to 7 with:

exec 7>&${COPROC[1]}-

Then we move our standard output to the pipe that's connected to tee's standard input (our file descriptor 7) via:

exec 1>&7-

And finally, we close the pipe connected to tee's output, since we don't have any need for it, with:

eval "exec ${COPROC[0]}>&-"

The eval here is required because otherwise bash thinks the value of ${COPROC[0]} is a command name. On the other hand, it's not required in the statement above (exec 7>&${COPROC[1]}-), because in that one bash can recognize that "7" is the start of a file descriptor action and not a command.

Also note the commented command:

#ls -la /proc/$$/fd

This is useful for seeing the files that are open by the current process.

We now have achieved the desired effect: our standard output is going into tee. Tee is "logging" it to our log file and writing it to the pipe or file that our output was originally going to.

As of yet I haven't come up with any other uses for co-processes, at least ones that aren't contrived. See the bash man page for more about co-processes.
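One simple use, shown here only as a sketch of the mechanics (requires bash 4.0+; the co-process name BC is arbitrary): run bc as a named co-process and use it as a calculator server inside the script:

    coproc BC { bc -l; }                  # start bc with its stdin/stdout connected to pipes
    echo "scale=4; 22/7" >&"${BC[1]}"     # write an expression to bc's standard input
    read -u "${BC[0]}" result             # read bc's answer from its standard output
    echo "pi is roughly $result"
    eval "exec ${BC[1]}>&-"               # closing bc's stdin lets the co-process exit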

[Jun 12, 2010] Writing Robust Bash Shell Scripts

Many people hack together shell scripts quickly to do simple tasks, but these soon take on a life of their own. Unfortunately shell scripts are full of subtle effects which result in scripts failing in unusual ways. It's possible to write scripts which minimise these problems. In this article, I explain several techniques for writing robust bash scripts.

Use set -u

How often have you written a script that broke because a variable wasn't set? I know I have, many times.

rm -rf $chroot/usr/share/doc 

If you ran the script above and accidentally forgot to give a parameter, you would have just deleted all of your system documentation rather than making a smaller chroot. So what can you do about it? Fortunately bash provides you with set -u, which will exit your script if you try to use an uninitialised variable. You can also use the slightly more readable set -o nounset.

david% bash /tmp/
david% bash -u /tmp/
/tmp/ line 3: $1: unbound variable

Use set -e

Every script you write should include set -e at the top. This tells bash that it should exit the script if any statement returns a non-true return value. The benefit of using -e is that it prevents errors snowballing into serious issues when they could have been caught earlier. Again, for readability you may want to use set -o errexit.

Using -e gives you error checking for free. If you forget to check something, bash will do it for you. Unfortunately it means you can't check $? as bash will never get to the checking code if it isn't zero. There are other constructs you could use:

if [ "$?"-ne 0]; then echo "command failed"; exit 1; fi 

could be replaced with

command || { echo "command failed"; exit 1; }

or

if ! command; then echo "command failed"; exit 1; fi

What if you have a command that returns non-zero or you are not interested in its return value? You can use command || true, or if you have a longer section of code, you can turn off the error checking, but I recommend you use this sparingly.

set +e
set -e 

On a slightly related note, by default bash takes the error status of the last item in a pipeline, which may not be what you want. For example, false | true will be considered to have succeeded. If you would like this to fail, then you can use set -o pipefail to make it fail.
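A quick interactive check of the difference (output shown is illustrative):

    $ false | true; echo $?
    0
    $ set -o pipefail
    $ false | true; echo $?
    1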

Program defensively - expect the unexpected

Your script should take account of the unexpected, like files missing or directories not being created. There are several things you can do to prevent errors in these situations. For example, when you create a directory, if the parent directory doesn't exist, mkdir will return an error. If you add a -p option then mkdir will create all the parent directories before creating the requested directory. Another example is rm. If you ask rm to delete a non-existent file, it will complain and your script will terminate. (You are using -e, right?) You can fix this by using -f, which will silently continue if the file didn't exist.
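For instance (the paths here are only for illustration):

    mkdir -p /var/tmp/myapp/cache          # creates missing parent directories instead of failing
    rm -f /var/tmp/myapp/cache/stale.lock  # silently succeeds even if the file is already gone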

Be prepared for spaces in filenames

Someone will always use spaces in filenames or command line arguments and you should keep this in mind when writing shell scripts. In particular you should use quotes around variables.

if [ $filename = "foo" ]; 

will fail if $filename contains a space. This can be fixed by using:

if [ "$filename" = "foo" ]; 

When using the $@ variable, you should always quote it, or any arguments containing a space will be expanded into separate words.

david% foo() { for i in $@; do echo $i; done }; foo bar "baz quux"
david% foo() { for i in "$@"; do echo $i; done }; foo bar "baz quux"
baz quux 

I can not think of a single place where you shouldn't use "$@" over $@, so when in doubt, use quotes.

If you use find and xargs together, you should use -print0 to separate filenames with a null character rather than new lines. You then need to use -0 with xargs.

david% touch "foo bar"
david% find | xargs ls
ls: ./foo: No such file or directory
ls: bar: No such file or directory
david% find -print0 | xargs -0 ls
./foo bar 

Setting traps

Often you write scripts which fail and leave the filesystem in an inconsistent state; things like lock files, temporary files or you've updated one file and there is an error updating the next file. It would be nice if you could fix these problems, either by deleting the lock files or by rolling back to a known good state when your script suffers a problem. Fortunately bash provides a way to run a command or function when it receives a unix signal using the trap command.

trap command signal [signal ...]

There are many signals you can trap (you can get a list of them by running kill -l), but for cleaning up after problems there are only 3 we are interested in: INT, TERM and EXIT. You can also reset traps back to their default by using - as the command.

Signal Description
INT Interrupt - This signal is sent when someone kills the script by pressing ctrl-c.
TERM Terminate - this signal is sent when someone sends the TERM signal using the kill command.
EXIT Exit - this is a pseudo-signal and is triggered when your script exits, either through reaching the end of the script, an exit command or by a command failing when using set -e.

Usually, when you write something using a lock file you would use something like:

if [ ! -e $lockfile ]; then
   touch $lockfile
   critical-section
   rm $lockfile
else
   echo "critical-section is already running"
fi

What happens if someone kills your script while critical-section is running? The lockfile will be left there and your script won't run again until it's been deleted. The fix is to use:

if [ ! -e $lockfile ]; then
   trap "rm -f $lockfile; exit" INT TERM EXIT
   touch $lockfile
   critical-section
   rm $lockfile
   trap - INT TERM EXIT
else
   echo "critical-section is already running"
fi

Now when you kill the script it will delete the lock file too. Notice that we explicitly exit from the script at the end of trap command, otherwise the script will resume from the point that the signal was received.

Race conditions

It's worth pointing out that there is a slight race condition in the above lock example between the time we test for the lockfile and the time we create it. A possible solution to this is to use IO redirection and bash's noclobber mode, which won't redirect to an existing file. We can use something similar to:

if ( set -o noclobber; echo "$$" > "$lockfile") 2> /dev/null; then
   trap 'rm -f "$lockfile"; exit $?' INT TERM EXIT

   critical-section

   rm -f "$lockfile"
   trap - INT TERM EXIT
else
   echo "Failed to acquire lockfile: $lockfile."
   echo "Held by $(cat $lockfile)"
fi

A slightly more complicated problem is where you need to update a bunch of files and need the script to fail gracefully if there is a problem in the middle of the update. You want to be certain that something either happened correctly or that it appears as though it didn't happen at all. Say you had a script to add users.

add_to_passwd $user
cp -a /etc/skel /home/$user
chown $user /home/$user -R

There could be problems if you ran out of diskspace or someone killed the process. In this case you'd want the user to not exist and all their files to be removed.

rollback() {
   del_from_passwd $user
   if [ -e /home/$user ]; then
      rm -rf /home/$user
   fi
   exit
}

trap rollback INT TERM EXIT
add_to_passwd $user
cp -a /etc/skel /home/$user
chown $user /home/$user -R
trap - INT TERM EXIT

We needed to remove the trap at the end or the rollback function would have been called as we exited, undoing all the script's hard work.

Be atomic

Sometimes you need to update a bunch of files in a directory at once, say you need to rewrite urls form one host to another on your website. You might write:

for file in $(find /var/www -type f -name "*.html"); do
   perl -pi -e 's/' $file
done

Now if there is a problem with the script you could have half the site referring to the old hostname and the rest referring to the new one. You could fix this using a backup and a trap, but you also have the problem that the site will be inconsistent during the upgrade too.

The solution to this is to make the changes an (almost) atomic operation. To do this make a copy of the data, make the changes in the copy, move the original out of the way and then move the copy back into place. You need to make sure that both the old and the new directories are moved to locations that are on the same partition so you can take advantage of the property of most unix filesystems that moving directories is very fast, as they only have to update the inode for that directory.

cp -a /var/www /var/www-tmp
for file in $(find /var/www-tmp -type f -name "*.html"); do
   perl -pi -e 's/' $file
done
mv /var/www /var/www-old
mv /var/www-tmp /var/www 

This means that if there is a problem with the update, the live system is not affected. Also the time where it is affected is reduced to the time between the two mvs, which should be very minimal, as the filesystem just has to change two entries in the inodes rather than copying all the data around.

The disadvantage of this technique is that you need to use twice as much disk space, and any process that keeps files open for a long time will still have the old files open and not the new ones, so you would have to restart those processes if this is the case. In our example this isn't a problem, as apache opens the files on every request. You can check for processes with files still open by using lsof. An advantage is that you now have a backup from before you made your changes, in case you need to revert.

[May 20, 2010] SFR Fresh bash-4.1.tar.gz (Download & Source Code Browsing)

[Nov 26, 2009] Bash 4.0

Interesting features are the -l and -u options of the declare statement (automatic conversion to lower/upper case). A short sketch of several of these new features follows the list below.

n.  The -p option to `declare' now displays all variable values and attributes
    (or function values and attributes if used with -f).

o.  There is a new `compopt' builtin that allows completion functions to modify
    completion options for existing completions or the completion currently
    being executed.

p.  The `read' builtin has a new -i option which inserts text into the reply
    buffer when using readline.

s.  Changed format of internal help documentation for all builtins to roughly
    follow man page format.

t.  The `help' builtin now has a new -d option, to display a short description,
    and a -m option, to print help information in a man page-like format.

u.  There is a new `mapfile' builtin to populate an array with lines from a
    given file.  The name `readarray' is a synonym.

w.  There is a new shell option: `globstar'.  When enabled, the globbing code
    treats `**' specially -- it matches all directories (and files within
    them, when appropriate) recursively.

x.  There is a new shell option: `dirspell'.  When enabled, the filename
    completion code performs spelling correction on directory names during word completion.

dd. The parser now understands `|&' as a synonym for `2>&1 |', which redirects
    the standard error for a command through a pipe.

ee. The new `;&' case statement action list terminator causes execution to
    continue with the action associated with the next pattern in the
    statement rather than terminating the command.

hh. There are new case-modifying word expansions: uppercase (^[^]) and
    lowercase (,[,]).  They can work on either the first character or
    array element, or globally.  They accept an optional shell pattern
    that determines which characters to modify.  There is an optionally-
    configured feature to include capitalization operators.

ii. The shell provides associative array variables, with the appropriate
    support to create, delete, assign values to, and expand them.

jj. The `declare' builtin now has new -l (convert value to lowercase upon
    assignment) and -u (convert value to uppercase upon assignment) options.
    There is an optionally-configurable -c option to capitalize a value at assignment.

kk. There is a new `coproc' reserved word that specifies a coprocess: an
    asynchronous command run with two pipes connected to the creating shell.
    Coprocs can be named.  The input and output file descriptors and the
    PID of the coprocess are available to the calling shell in variables
    with coproc-specific names.
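A few of these features in action, as a quick sketch (requires bash 4.0 or later; the file names are made up):

    shopt -s globstar
    ls **/*.sh                               # '**' matches recursively through subdirectories

    declare -l name="MiXeD"                  # -l converts the value to lowercase on assignment
    echo "$name"                             # prints: mixed

    mapfile -t lines < /etc/hosts            # read a file into an array, one line per element
    echo "${#lines[@]} lines read"

    grep foo /no/such/file |& tee err.log    # '|&' pipes stderr along with stdout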

[Apr 22, 2009] Advanced Bash Scripting Guide

Changes: Fairly extensive coverage of the version 4.0 Bash release. A great deal of other new material and bugfixes. This is a very important update.

[Apr 22, 2009] » bashstyle-ng 7.7

BashStyle-NG is a graphical tool for changing Bash's behavior and look and feel. It can also style Readline, Nano, and Vim.

It ships with a set of scripts, which are used by the styles shipped with BS-NG, but can also be used separately. Since v6.3 you have the opportunity to create your own prompts. For important notes on how to do so, refer back to the documentation.

If you don't understand an option or something does not work as expected, read the documentation. It's installed in /usr/share/doc/bashstyle-ng/index.html (/usr is the default but may vary if you passed another prefix to configure).

Currently BS-NG ships with 15 pre-defined styles for your prompt. Most of them can be modified via the Custom Prompt Builder. If you want to save your current configuration and re-import it later (or use it for a different user/machine), you can do so via the Profiler (bs-ng-profiler --help).

A standalone (GConf-free) configuration (for faster Bash startup) can be created via the RCGenerator (rcgenerator --help).

[Apr 7, 2009] Mastering Unix Shell Scripting: Bash, Bourne, and Korn Shell Scripting for Programmers, System Administrators, and UNIX Gurus (Paperback)

This is one of the few books with some AIX bias. For example, Chapter 11 is about the AIX Logical Volume Manager. The author also wrote a great book on AIX administration.

by Randal K. Michael (Author)

From foreword:

We urge everyone to study this entire book. Every chapter hits a different topic using a different approach. The book is written this way to emphasize that there is never only one technique to solve a challenge in UNIX. All the shell scripts in this book are real-world examples of how to solve a problem. Thumb through the chapters, and you can see that we tried to hit most of the common (and some uncommon!) tasks in UNIX. All the shell scripts have a good explanation of the thinking process, and we always start out with the correct command syntax for the shell script targeting a specific goal. I hope you enjoy this book as much as I enjoyed writing it. Let's get started.


4.0 out of 5 stars A must-have for all levels of *nix users. , August 1, 2004
By Bindlestiff
This review is from: Mastering UNIX Shell Scripting (Paperback)

The breadth of real-world examples makes the difference between this book and most reference texts. It's true that it's written for korn, but I've had little trouble adapting for Bash; many of the scripts run almost unchanged, and the ones that don't provide a useful opportunity for exercise in adaptation. The author's prose is clear. His attitude is a bit challenging; he says early on that his intention is to teach you how to -solve problems- by shell scripting, NOT to present a ream of canned solutions. This is NOT a reference text for any particular shell; you'll still need plenty of O'Reilly books, a web browser & etc.

This book has enabled me to write a major project using scripting as the glue to hold together a hefty mass of file-moving daemons, fax/paging engines, python UI code, PostGreSQL database engine, networking/email, SSH, and Expect scripts on a Gnu Linux platform. I absolutely could not have done it without this book and I'm very grateful to Mr Michael for his work. If a later edition could more closely serve the needs of the masses by presenting more Bash examples and maybe throwing in a CD it would be a 5-star text.


Command readability and step-by-step comments are just the very basics of a well-written script. Using a lot of comments will make our life much easier when we have to come back to the code after not looking at it for six months, and believe me; we will look at the code again. Comment everything! This includes, but is not limited to, describing what our variables and files are used for, describing what loops are doing, describing each test, maybe including expected results and how we are manipulating the data and the many data fields. A hash mark, #, precedes each line of a comment.

The script stub that follows is on this book's companion web site at go/michael2e. The name is script.stub. It has all the comments ready to get started writing a shell script. The script.stub file can be copied to a new filename. Edit the new filename, and start writing code. The script.stub file is shown in Listing 1-1.

# REV: 1.1.A (Valid are A, B, D, T and P)
#      (For Alpha, Beta, Dev, Test and Production)
#
# PLATFORM: (SPECIFY: AIX, HP-UX, Linux, OpenBSD, Solaris
#            or Not platform dependent)
#
# PURPOSE: Give a clear, and if necessary, long, description of the
#          purpose of the shell script. This will also help you stay
#          focused on the task at hand.
#
# MODIFICATION: Describe what was modified, new features, etc--
#
# set -n # Uncomment to check script syntax, without execution.
#        # NOTE: Do not forget to put the comment back in or
#        #       the shell script will not execute!
# set -x # Uncomment to debug this shell script
#
# End of script

Listing 1-1 script.stub shell script starter listing

The shell script starter shown in Listing 1-1 gives you the framework to start writing the shell script, with sections to declare variables and files, create functions, and write the final section, BEGINNING OF MAIN, where the main body of the shell script is written.


[Oct 27, 2008] Bash Debugger 4.0-0.1

About: The Bash Debugger (bashdb) is a debugger for Bash scripts. The debugger command interface is modeled on the gdb command interface. Front-ends supporting bashdb include GNU-Emacs and ddd. In the past, the project has been used as a springboard for other experimental features such as a timestamped history file (now in Bash versions after 3.0).

Changes: This major rewrite and reorganization of code has numerous bugfixes and has been tested on bash 3.1 and 4.0 alpha. With the introduction of a simple debugger command alias mechanism, there are some incompatibilities. Command aliasing of short commands is no longer hard-wired. New commands of note are "set autoeval" and "step+", taken from ruby-debug. Emacs support has been greatly improved. Long option command-processing is guaranteed.
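
As a quick orientation, here is a hypothetical session sketch (not taken verbatim from the bashdb documentation; the command set follows the gdb-style interface described above and may differ slightly between versions):

bashdb ./myscript.sh datafile    # start the debugger on a script, passing its arguments
# The following commands are typed at the bashdb prompt:
list             # show the source lines around the current statement
break 25         # set a breakpoint at line 25
continue         # run until the breakpoint is hit
step             # execute the next statement, descending into functions
print "$count"   # inspect the current value of a variable
quit             # leave the debugger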

[Oct 27, 2008] Teach an old shell new tricks with BashDiff

BashDiff is a patch for the bash shell that can do an amazing number of things. It extends existing bash features, brings a few of awk's tricks into the shell itself, exposes some common C functions to bash shell programming, adds an exception mechanism, provides features of functional programming such as list comprehension and the map function, lets you talk with GTK+2 and databases, and even adds a Web server right into the standard bash shell.

There are no packages of BashDiff in the openSUSE, Fedora, or Ubuntu repositories. I'll build from source using BashDiff 1.45 on an x86 Fedora 9 machine running bash 3.0. While versions 3.1 and 3.2 of bash are available, the 1.45 BashDiff patch does not apply cleanly to either.
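
The build itself is the usual patch/configure/make cycle. A sketch, assuming the bash 3.0 tarball and the BashDiff patch have already been downloaded (the file names are illustrative, and the patch strip level may need adjusting):

tar xzf bash-3.0.tar.gz              # unpack the stock bash 3.0 sources
cd bash-3.0
patch -p1 < ../bashdiff-1.45.patch   # apply the BashDiff patch (try -p0 if -p1 does not match)
./configure --prefix=/usr/local      # configure as usual
make                                 # build the patched shell
su -c 'make install'                 # install (or run make install as root)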

[Jul 21, 2008] Project details for Advanced Bash Scripting Guide 5.3

Recreational scripts were added, such as an almost full-featured Perquackey clone script, a script that does the "Petals Around the Rose" puzzle, and a crossword puzzle solver script. There is also the usual batch of bugfixes and other new material.

[Apr 3, 2008] -- Excerpt from Linux Cookbook, Part 1

[Mar 31, 2008] Bash Shell Programming in Linux by P. Lutus

Simple but nice intro with good examples

[Mar 30, 2008] Linux, Unix, etc. -- Unix Shell Scripts

These are simple stand-alone scripts. You are unlikely to find anything very impressive here if you already hack your own scripts, but a novice might find some of the ideas new.


Recommended Links

Softpanorama Top Visited

Softpanorama Recommended

Please visit Heiner Steven's SHELLdorado, the best shell scripting site on the Internet.

***** SHELLdorado -- An excellent site by Heiner Steven. Very cute name that captures the fact that the shell is a crown jewel of Unix ;-) IMHO this is the best shell-related site on the Internet. Actively maintained. Highly recommended! Note: This site now has a bulletin that you can subscribe to [Feb 02, 2002]

***** The official Korn Shell Web Site by David Korn, the author of ksh88 and ksh93. See also his son's Home Page (actually I think the Tksh implementation is the best free shell available, but ksh93 is better supported).

***+ Open Directory - Computers Operating Systems Unix Shell -- a decent collection of links.  Could be better...

**** Unix shell scripts by Paul Dunne. Contains links and several useful scripts.

scripting -- this is a member site and can be down

***+ kshweb.html -- a good site by Dana French

*** home -- several dot files of very uneven (often low) quality. Still better than nothing...

Other Links

MAN pages

KornShell 93 Manual Page man pages section 1 User Commands - ksh

Man pages from Neosoft

Bash(1) manual page

pdksh man page

FAQs -- the only Usenet group about shells

Note: see first the Unix FAQ shell index -- it's probably more up-to-date than this document


See also additional material at

Collections of reference materials, cards, etc

Reference cards:

Reference manuals

Desktop Kornshell


Legacy stuff

Public domain Korn Shell (pdksh)

Public domain Korn Shell is closer to ksh88 than to ksh93. It has most of the ksh88 features, few of the ksh93 features, and a number of features of its own.

Old Korn shell (Kornshell 88)



Zsh

Another open source shell that is actively developed. Probably the most powerful free Unix shell, but it definitely suffers from feature creep. Not 100% compatible with ksh93 or bash. Many of the useful features of bash, ksh93, and tcsh were incorporated into zsh, and many original features were added (see An Introduction to the Z Shell). Zsh was originally written by Paul Falstad while he was a student at Princeton. It is now maintained by the members of the zsh-workers mailing list; development is currently coordinated by Zoltan Hidvegi. Z-shell is very popular in Europe, less so in the USA.

Advantages of Zsh include:

Main WEB sites include:

Experimental Scripting Language based shells

Most scripting languages can be used as a shell. Experimental shells exist for Perl and Scheme, but only Tcl managed to get into a commercial-grade product (Tksh). JavaScript can be used as a shell for NT.


Scsh (a Unix Scheme shell) FAQ

Perl shell 0.009       

Perl Shell  Gregor N. Purdy - November 23rd 1999, 18:03 EST

The Perl Shell (psh) is an interactive command-line Unix shell that aims to bring the benefits of Perl scripting to a shell context and the shell's interactive execution model to Perl.

Changes: This version of the Perl Shell adds significant functionality. However, it is still an early development release. New features include rudimentary background job handling and job management, signal handling, filename completion, updates to history handling, and a flexible %built_ins mechanism for adding built-in functions; smart mode is now on by default.

C-shell and tcsh

Tcsh and C shell

C-shell based tutorials

Random Findings


A modular Perl shell written, configured, and operated entirely in Perl. It aspires to be a fully operational login shell with all the features one normally expects. But it also gives direct access to Perl objects and data structures from the command line, and allows you to run Perl code within the scope of your command line. And it's named after one of the greatest characters on Futurama, so it must be good...

RC shell

A very interesting shell with advanced piping support.

Lost Links





Copyright © 1996-2015 by Dr. Nikolai Bezroukov. This site was created as a service to the UN Sustainable Development Networking Programme (SDNP) in the author's free time. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License.

The site uses AdSense, so you need to be aware of Google's privacy policy. If you do not want to be tracked by Google, please disable JavaScript for this site. The site is perfectly usable without JavaScript.

Copyright for the original materials belongs to their respective owners. Quotes are made for educational purposes only, in compliance with the fair use doctrine.

FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available to advance understanding of computer science, IT technology, economic, scientific, and social issues. We believe this constitutes a 'fair use' of any such copyrighted material as provided by section 107 of the US Copyright Law according to which such material can be distributed without profit exclusively for research and educational purposes.

This is a Spartan WHYFF (We Help You For Free) site written by people for whom English is not a native language. Grammar and spelling errors should be expected. The site contains some broken links, as it develops like a living tree...

You can use PayPal to make a contribution, supporting development of this site and speeding up access. In case the site is down, there are two functional mirrors: (the fastest) and


The statements, views, and opinions presented on this web page are those of the author (or referenced source) and are not endorsed by, nor do they necessarily reflect, the opinions of the author's present and former employers, SDNP, or any other organization the author may be associated with. We do not warrant the correctness of the information provided or its fitness for any purpose.

Last modified: November 09, 2015