
Bash as a scripting language


Note: this page is partially based on Nikolai Bezroukov's lectures on this topic in 2013 and 2017, which in turn used Linux Shell Scripting with Bash by Ken O. Burtch (2004) as a textbook. See Best books about Bash and Korn Shell for book recommendations.


Introduction

It is important to separate Bash as a computer language from Bash's role as a command interpreter in Unix/Linux. Of course there are intersections, and the environment affects what you can do and how you can do it. But still...

Shell scripts remain a staple of the Linux world. Despite being one of the oldest languages in use, shell is not dying. Even Perl, which is definitely a better tool for complex scripts, failed to dislodge it. Many scripts reflect processes that are simply a matter of following a series of steps, after which the results are written to the disk. This is the very type of activity scripts are most convenient to handle.

Each script is a regular Unix file and as such is identified by a name. For practicality, script names should not exceed 32 characters and usually consist of lowercase characters, underscores, minus signs, and periods. Spaces and punctuation symbols are permitted, but it is a bad practice to use them.

Filenames do not require a suffix to identify their contents, but following the tradition established by MS-DOS, suffixes are often used to help identify the type of the file. By convention the suffix for scripts is .sh. Other common suffixes include

.txt— A generic text file
.log— A log file
.html— An HTML Web page
.tgz (or .tar.gz)— Compressed file archive

Commands usually have no suffixes.

Shell scripts, text files, executable commands, and other normal files are collectively referred to as regular files. They contain data that can be read or instructions that can be executed. There are also files that are not regular, such as directories or named pipes; they contain unique data or have special behaviors when they are accessed.

PART 1

Creating a Script

By convention, Bash shell scripts have names ending with .sh. For example:

cat hello.sh
#!/bin/bash
# hello.sh
# This is my first shell script
# Joe User 
# Jul 6, 2017

printf "%s\n" "Hello world!"
exit 0 

Lines beginning with number signs (#) are comments. They are notes to the reader and do not affect the execution of a script. Everything between the number sign and the end of the line is ignored by Bash. Comments should always be informative, describing what the script is trying to accomplish, not a blow-by-blow recount of what each command is doing. Too many scripts have no comments at all, or bad, uninformative comments. Their chances of survival are much lower than the chances of well-commented scripts. Clear and informative comments help to troubleshoot and debug obscure problems.

The very first line of a script is the header line. This line begins with #! at the top of the script, flush with the left margin. This character combination identifies the kind of script. Linux uses this information to start the right program to run the script. For Bash scripts, this line is the absolute pathname indicating where the Bash interpreter resides. On most Linux distributions, the first header line is as follows

#!/bin/bash

If you don't know the location of the Bash shell, use the which or whereis command to find it:

which bash
/bin/bash

It is a good practice to provide the name of the author and the purpose of the script in the first two lines after the #!/bin/bash line:

#!/bin/bash
# Compress old files if nobody is on the computer
# John A Doer
... ... ... 

If I do not see such lines in a script, my first reaction is that the script was written by a clueless amateur.

The Bash header line is followed by comments describing the purpose of the script and who wrote it. Some comments might be inserted by a version control system such as Git, which we will discuss later.

Next, after variable declarations, you might wish to specify some options via the shopt command, which are instructions to the Bash interpreter on how to process your script. For example:

shopt -s -o nounset

This command detects some spelling mistakes by reporting undefined variables.

You can execute this script by issuing the bash command:

bash hello.sh

If you invoke the bash command without an argument specifying a script file, a new interactive shell is launched. To exit the new shell and return to your previous session, issue the exit command.

If the hello.sh file were in a directory other than the current working directory, you'd have to type an absolute path, for example:

bash /home/bill/hello.sh

You can make it a bit easier to execute the script by changing its access mode to include execute access. To do so, issue the following command:

chmod ugo+x hello.sh

This gives you, members of your group, and everyone else the ability to execute the file. To do so, simply type the absolute path of the file, for example:

/home/bill/hello.sh

If the file is in the current directory, you can issue the following command:

./hello.sh

You may wonder why you can't simply issue the command:

hello.sh

In fact, this still simpler form of the command will work, so long as hello.sh resides in a directory on your search path. You'll learn about the search path later.

Structural elements of bash script

A Bash script consists of several types of elements: comments, identifiers, numeric and string literals, and keywords.

These elements define the so-called "lexical level" of the language.

Comments exist purely to facilitate human understanding and are discarded by the interpreter. The only exception is the pseudo-comment at the beginning of the script that starts with "#!".

Identifiers are strings without enclosing quotes, starting with a letter. They can contain underscores and digits as well; they can't start with a digit, though.

Numeric literals can be either integer or real (though Bash arithmetic itself is integer-only).

String literals are strings of characters enclosed in either single quotes or double quotes. A single-quoted string cannot contain a single quote (there is no escaping inside single quotes); a double-quoted string can contain a double quote if it is escaped with a backslash.
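For example (a minimal illustration; the variable name user is invented for this sketch):

user='John'
printf "%s\n" 'Hello, $user'      # single quotes: prints Hello, $user literally
printf "%s\n" "Hello, $user"      # double quotes: prints Hello, John
printf "%s\n" "He said \"hi\""    # a double quote inside double quotes is escaped with a backslash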

Bash Keywords

A keyword is an identifier which has a special meaning in the Bash language: keywords represent directives to the interpreter. As such they are distinct from variables -- they do not have any value and are treated "literally", much like punctuation signs such as . , : , +, etc. The following symbols and words provide some examples:

if then else elif  fi
for do in done break
while until function return exit
case esac echo printf  
declare read time type  

To keep scripts understandable, keywords (and the names of built-in commands such as echo, printf, declare, and read, which appear in the list above) should never be used for variable names.

Quoted Strings

By enclosing a command argument within single quotes, you can prevent the shell from expanding any special characters inside the string.

To see this in action, consider how you might cause the echo  command to produce the output $PATH. If you simply issue the command:

echo $PATH

the echo command will print the value of the PATH shell variable. However, by enclosing the argument within single quotes, you obtain the desired result:

echo '$PATH'

Double quotes permit the expansion of shell variables.

Back quotes operate differently; they let you execute a command and use its output as an argument of another command. For example, the command:

dir_listing=`ls`

assigns the output of the ls command to the variable dir_listing.

Bash Variables

Shell variables are widely used within shell scripts, because they provide a convenient way of transferring values from one command to another. Programs can obtain the value of a shell variable and use the value to modify their operation, in much the same way they use the value of command-line arguments.

There are two major types of Bash variables -- numeric (integer) and string. If the variable is not previously declared as numeric, it is assumed to be a string. In other words, the default is string. (Bash arithmetic is integer-only; for floating-point math you need external tools such as bc or awk.)

A Bash variable is written with a leading $ when its value is used, for example on the right side of an assignment statement, and without it on the left side of an assignment statement and in all other cases where it accepts a value (the read statement is another example).

Note: in Bash you should NOT use the dollar sign in front of the variable to the left of the assignment sign. That's an important difference from languages such as Perl, where the dollar sign is used on both sides.

To declare that a variable should accept only numeric values (integers), use the following statement:
declare -i varname

If variable is declared as integer you can perform arithmetic operations on it:

#!/bin/bash
declare -i count
count=12
count=$count+1
printf "%d\n" $count

The declare statement has some other options and can be used to declare an array. All variables can be used as arrays without explicit definition. As a matter of fact, it appears that, in a sense, all variables are arrays, and that assignment without a subscript is the same as assigning to element [0]. Consider the following script:

#!/bin/bash

a=12
echo ${a[0]}
b[0]=13
echo $b

When run it produces:

$ bash arr.sh
12
13

For further options, see the bash man page (search for "^SHELL BUILTINS", then search for "declare").

echo and print statements

The echo command simply prints text on the console. Its -n option suppresses the trailing newline character that echo normally writes, so two successive echo -n commands write their text on a single line.

The built-in printf (print formatted) command prints a message to the screen. It is an upgraded version of the echo command, which has existed in the shell since the very beginning. printf provides better control over output than echo and should be used instead of it. echo can be emulated using an alias or a function.

For example

alias echo='printf "%s\n" '

Bash printf is very similar to the C standard I/O printf() function, but they are not identical. In particular, single- and double-quoted strings are treated differently in shell scripts than in C programs.

The first parameter is a format string describing how the items being printed will be represented. For example, the special formatting code "%d" represents an integer number, and the code "%f" represents a floating-point number.

$ printf "%d\n" 5
5
$ printf "%f\n" 5
5.000000

Include a format code for each item you want to print. Each format code is replaced with the appropriate value when printed. Any characters in the format string that are not part of a formatting instruction are treated as printable characters.

$ printf "There are %d customers with purchases over %d.\n" 50 20000
There are 50 customers with purchases over 20000.

printf  is sometimes used to redirect a variable or some unchanging input to a command. For example, suppose all you want to do is pipe a variable to a command. Instead of using printf, Bash provides a shortcut <<< redirection operator. <<< redirects a string into a command as if it were piped using printf.

The tr command can convert text to uppercase. This example shows an error message being converted to uppercase with both printf and <<<.

$ printf "%s\n" "$ERRMSG" | tr [:lower:] [:upper:]
WARNING: THE FILES FROM THE OHIO OFFICE HAVEN'T ARRIVED.
$ tr [:lower:] [:upper:] <<< "$ERRMSG"
WARNING: THE FILES FROM THE OHIO OFFICE HAVEN'T ARRIVED.

The format codes include the following:

%d— A signed integer
%u— An unsigned integer
%f— A floating-point number
%e— A floating-point number in scientific notation
%s— A string
%c— A single character
%x— An unsigned hexadecimal integer
%o— An unsigned octal integer
%%— A literal percent sign

If a number is too large, Bash reports an out-of-range error.

$ printf "%d\n" 123456789123456789012
bash: printf: warning: 123456789123456789012: Numerical result out of range

For compatibility with C's printf, Bash also recognizes the %i format code and treats it the same as %d.

Also for C compatibility, you can preface the format codes with an l or L to indicate a long number.

The %q format is important in shell script programming; it is discussed in the quoting section in Chapter 5, "Variables."

To create reports with neat columns, a number can precede many of the formatting codes to indicate the width of a column. For example, "%10d" prints a signed number in a column 10 characters wide.

$ printf "%10d\n" 11
        11

Likewise, a negative number left-justifies the columns.

$ printf "%-10d %-10d\n" 11 12
11         12

A number with a decimal point represents a column width and a minimum number of digits (or decimal places with floating-point values). For example, "%10.5f" indicates a floating-point number in a 10-character column with a minimum of five decimal places.

$ printf "%10.5f\n" 17.2
  17.20000

Finally, an apostrophe (') displays the number with thousands groupings based on the current country locale.
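For example, in a locale that supports grouping, such as en_US.UTF-8 (the exact output depends on your locale settings):

$ printf "%'d\n" 1234567
1,234,567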

The \n in the format string is an example of a backslash code for representing unprintable characters; \n indicates that a new line should be started. There are special backslash formatting codes for the representation of other unprintable characters as well.

$ printf "Two separate\nlines\n"
Two separate
lines

Any 8-bit byte or ASCII character can be represented by \0 followed by its octal value.

$ printf "ASCII 65 (octal 101) is the character \0101\n"
ASCII 65 (octal 101) is the character A

printf recognizes numbers beginning with a zero as octal notation, and numbers beginning with 0x as hexadecimal notation. As a result, printf can convert numbers between these different notations.

$ printf "%d\n" 010
8
$ printf "%d\n " 0xF
15
$ printf "0x%X\n " 15
0xF
$ printf "0%o\n " 8
010

Most Linux distributions also have a separate printf command, to be compliant with the POSIX standard.
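You can see both versions with the type -a command (the exact path may differ on your system):

$ type -a printf
printf is a shell builtin
printf is /usr/bin/printf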

More about shell variables

There are multiple system variables in the shell -- variables that are set by the system.

For example, the PATH variable. Of course you can also set it yourself:

PATH=/usr/bin:/usr/sbin:/usr/local/bin

By default, shell variables are typeless and can have both arithmetic and non-numeric values.

You can see a list of the system variables in your environment by issuing the env command. Usually the command produces more than a single screen of output, so you can use a pipe and the more command to view the output one screen at a time:

env | more

Press the Space bar to see each successive page of output. You'll probably see several common shell variables, such as PATH, HOME, SHELL, USER, and PWD.

You can use the value of a shell variable in a command by preceding the name of the shell variable with a dollar sign ($). To avoid confusion with surrounding text, you can enclose the name of the shell variable within curly braces ({}), as in ${HOME}. For example, you can change the current working directory to your home directory by issuing the command:

cd $HOME

An easy way to see the value of a shell variable is to specify the variable as the argument of the echo  command. For example, to see the value of the PATH shell variable, issue the command:

echo $PATH

To make the value of a shell variable available not just to the shell, but to programs invoked by using the shell, you must export the shell variable. To do so, use the export  command, which has the form:

export variable

where variable specifies the name of the variable to be exported. A shorthand form of the command lets you assign a value to a shell variable and export the variable in a single command:

export variable=value

You can remove the value associated with a shell variable by giving the variable an empty value:

variable=

However, a shell variable with an empty value remains a shell variable and appears in the output of the set  command. To dispense with a shell variable, you can issue the unset  command:

unset variable

Once you unset the value of a variable, the variable no longer appears in the output of the set  command.
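A short session illustrating the difference (the variable name MYVAR is arbitrary):

$ MYVAR=value
$ MYVAR=                 # empty value: MYVAR is still a shell variable
$ set | grep '^MYVAR='
MYVAR=
$ unset MYVAR
$ set | grep '^MYVAR='   # prints nothing: the variable is gone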

The Search Path

The special shell variable PATH holds a series of paths known collectively as the search path. This is a very important variable, which is why we will study it separately. Whenever you issue an external command, the shell searches the paths that comprise the search path, seeking the program file that corresponds to the command. The startup scripts establish the initial value of the PATH shell variable, but you can modify its value to include any desired series of paths. You must use a colon (:) to separate each path of the search path.

For example, suppose that PATH has the following value:

/usr/bin:/bin:/usr/local/bin:/usr/bin/X11:/usr/X11R6/bin

You can add a new search directory, say /opt/bin, with the following command:

PATH=$PATH:/opt/bin/

Now, the shell will look for external programs in /opt/bin/ as well as the default directories. However, it will look there last. If you prefer to check /opt/bin first, issue the following command instead:

PATH=/opt/bin:$PATH

The which command helps you work with the PATH shell variable. It checks the search path for the file specified as its argument and prints the name of the matching path, if any. For example, suppose you want to know where the program file for the wc  command resides. Issuing the command:

which wc

will tell you that the program file is /usr/bin/wc, or whatever other path is correct for your system.

 

More on assignment statement

Variables can be assigned strings as well. The string literal assigned should be in either single or double quotes. The difference is that if the string is in double quotes, all variables in it are expanded to their values (so-called macro substitution). For example:

LOGFILE='mylog.txt'
printf "%s\n" $LOGFILE
version=12
LOGFILE="mylog.$version.txt"
printf "%s\n" $LOGFILE

The value of a variable can be printed using the printf command. Here printf takes two arguments: a formatting code and the variable to display. For simple variables the formatting code is "%s\n", and the variable name should appear in double quotes with a dollar sign in front of the name:

$ printf "%s\n" $LOGFILE
mylog.txt

printf also works without a separate format argument; in that case the first argument itself serves as the format string (note that no newline is printed at the end unless the string contains \n). So the statements

printf $HOME
printf $PWD

produce the path to your home directory ($HOME is a system variable which is set by the OS when you log in to your account) concatenated with the path to your current directory, which might not be what you wanted.

echo, the old output statement of Unix shells, produces the line with a trailing newline. Please note that in modern Bash printf has by and large replaced the old echo command and now plays an important role in shell scripting.

Assigning the result of execution of a command

You can assign to a Bash variable the result of the execution of any command or script. This is done via so-called backquoting:

TIMESTAMP=`date`
printf "%s\n" "$TIMESTAMP"

will produce output like:

Wed, Jul 05, 2017 8:50:41 AM

The date shown is the date when the variable TIMESTAMP is assigned its value. The value of the variable remains the same until a new value is assigned.
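Modern scripts usually use the equivalent $(...) form of command substitution, which is easier to read and can be nested without escaping (the backup_name variable is invented for this sketch):

TIMESTAMP=$(date)                         # same effect as the backquote form
backup_name="backup_$(date +%Y%m%d).tar"  # command substitution inside a double-quoted string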

Example of a bash script

#!/bin/sh
#
#     tiger - A UN*X security checking system
#     Copyright (C) 1993 Douglas Lee Schales, David K. Hess, David R. Safford
#
#     Please see the file `COPYING' for the complete copyright notice.
#
# check_system - 06/15/93
#
#-----------------------------------------------------------------------------
#
TigerInstallDir='.'

#
# Set default base directory.
# Order or preference:
#      -B option
#      TIGERHOMEDIR environment variable
#      TigerInstallDir installed location
#
basedir=${TIGERHOMEDIR:=$TigerInstallDir}

for parm
do
   case $parm in
   -B) basedir=$2; break;;
   esac
done

#
# Verify that a config file exists there, and if it does
# source it.
#
[ ! -r $basedir/config ] && {
  echo "--ERROR-- [init002e] No 'config' file in \`$basedir'."
  exit 1
}

. $basedir/config

. $BASEDIR/initdefs

#
# If run in test mode (-t) this will verify that all required
# elements are set.
#
[ "$Tiger_TESTMODE" = 'Y' ] && {
  haveallcmds GREP || exit 1
  haveallfiles BASEDIR WORKDIR || exit 1
  
  echo "--CONFIG-- [init003c] $0: Configuration ok..."
  exit 0
}

#------------------------------------------------------------------------
echo
echo "# Performing system specific checks..."

haveallfiles BASEDIR || exit 1

runtable()
{
  haveallcmds GREP && {
    $GREP -v '^#' |
    while read script
    do
      case "$script" in
	/*)
	if [ $TESTEXEC $script ]; then
	  echo "# Running '$script'..."
	  $script
	else
	  echo "--ERROR-- [misc005w] Can't find $script'..."
	fi
        ;;
        *)
	if [ $TESTEXEC $CONFIG_DIR/$script ]; then
	  echo "# Running '$CONFIG_DIR/$script'..."
	  $CONFIG_DIR/$script
	elif [ $TESTEXEC $SCRIPTDIR/$script ]; then
	  echo "# Running '$SCRIPTDIR/$script'..."
	  $SCRIPTDIR/$script
	else
	  echo "--ERROR-- [misc005w] Can't find $script'..."
	fi
        ;;
      esac
    done
  }
}

for dir in $OS/$REL/$REV/$ARCH $OS/$REL/$REV $OS/$REL $OS
do
  [ $TESTEXEC $BASEDIR/systems/$dir/check ] && {
    echo "# Performing checks for $dir..."
    $BASEDIR/systems/$dir/check
  }
done

[ -r $BASEDIR/check.tbl ] && runtable < $BASEDIR/check.tbl

for dir in $OS/$REL/$REV/$ARCH $OS/$REL/$REV $OS/$REL $OS
do
  [ -r $BASEDIR/systems/$dir/check.tbl ] && runtable < $BASEDIR/systems/$dir/check.tbl
done

Two types of Unix utilities and Bash scripts: Filters vs. utilities

The commands that can be typed at the Bash shell prompt are usually Linux programs stored externally on your file system. Some commands are built into the shell for speed, standardization, or because they can function properly only when they are built-in.

No matter what their source, commands fall into a number of informal categories. Utilities are general-purpose commands useful in many applications, such as returning the date or counting the number of lines in a file.

Filters are commands that read text from standard input, modify it, and write the result to standard output. For example, you can extract the lines that contain a certain word from a text file. Many standard Unix utilities can be used as filters, and you can write your own filters in Bash.
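As a sketch, here is a minimal filter written in Bash itself: it reads standard input line by line and passes through only the lines that contain the word ERROR (the script name filter_errors.sh is invented for this example; grep ERROR does the same job):

#!/bin/bash
# filter_errors.sh -- pass through only lines containing ERROR
while IFS= read -r line
do
   case $line in
      *ERROR*) printf '%s\n' "$line" ;;
   esac
done

Used in a pipeline: cat /var/log/messages | ./filter_errors.sh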

Multiple Commands

Multiple commands can be combined on a single line. How they are executed depends on what symbols separate them.

If each command is separated by a semicolon, the commands are executed consecutively, one after another.

$ printf "%s\n" "This is executed" ; printf "%s\n" "And so is this"
This is executed
And so is this

If each command is separated by a double ampersand (&&), the commands are executed until one of them fails or until all the commands are executed.

$ date && printf "%s\n" "The date command was successful"
Wed Aug 15 14:36:32 EDT 2001
The date command was successful

If each command is separated by a double vertical bar (||), each subsequent command is executed only if the preceding one fails; execution stops as soon as a command succeeds or all the commands are executed.

$ date 'duck!' || printf "%s\n" "The date command failed"
date: bad conversion
The date command failed

Semicolons, double ampersands, and double vertical bars can be freely mixed in a single line.

$ date 'format-this!' || printf "%s\n" "The date command failed" && \
					printf "%s\n" "But the printf didn't!"
date: bad conversion
The date command failed
But the printf didn't!

These are primarily intended as command-line shortcuts: when mixed with redirection operators such as >, a long command chain is difficult to read, and you should avoid such chains in scripts.

The second view on a typical Bash script structure

A well-structured Bash script can be divided into five sections:

  1. The header. The header defines what kind of script this is, who wrote it, what version it is, and what assumptions or shell options Bash uses. A script without a proper header is an unprofessional script.
  2. Declarations of global variables. It is a good practice to declare the variables that you use; countless hours have been spent trying to find an error which turned out to be the result of a misspelled variable in some rarely executed part of a script.
  3. Sanity checks. Verify that the supplied parameters and the environment correspond to the assumptions you made during the creation of the script. Environments tend to change.
  4. The main functionality of the script. Here is where the real action is.
  5. Cleanup. Here you need to remove any temporary files created during execution of the main part of the script, write summary messages to the log if your script runs long, etc.

Declarations of global variables

All declarations that apply to the entirety of the script should occur at the top of the script, beneath the header.

By placing global declarations in one place, you make it easy for someone to refer to them while reading the script.

# Global Declarations

declare -rx SCRIPT=${0##*/}     # a useful Bash idiom that puts the name of the script into the variable

declare -rx who="/usr/bin/who"  # -rx makes the variable read-only and exported; the who command - man 1 who
declare -rx sync="/bin/sync"    # the sync command - man 1 sync
declare -rx wc="/usr/bin/wc"    # the wc command - man 1 wc

In Bash 3.x and 4.x you can disallow the use of undefined variables by using the following command:

shopt -s -o nounset

That makes sense and is a highly recommended practice.
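A sketch of the kind of mistake nounset catches (the variable names are invented for the example):

#!/bin/bash
shopt -s -o nounset
logfile="/tmp/run.log"
printf '%s\n' "$logfil"   # misspelled: bash aborts with "logfil: unbound variable"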

Sanity Checks

The next section, sanity checks, protects the script from unexpected changes in the environment in which it runs. Here you check the environment and, if it does not correspond to what you expect, exit the script rather than do some damage due to the changed environment. Such changes can include a different version of a command, a different location of a command, running as a non-privileged user (for scripts that are designed to run as root), etc.

Normally, when a command runs at the command prompt, Bash searches several directories for the command you want to run. If it can't find the command, perhaps because of a spelling mistake, Bash reports an error. This kind of behavior is good for working interactively with Bash because it saves time and any mistakes are easily corrected with a few keystrokes.

Scripts, on the other hand, run without any human supervision. Before a script executes any statements, it needs to verify that all the necessary files are accessible. All required commands should be executable and stored in the expected locations. These checks are sometimes called sanity checks because they do not let the script begin its main task unless the computer is in a known, or “sane,” state. This is especially important with operating systems such as Linux that are highly customizable: What is true on one computer might not be true on another.

Another way of putting it is that Bash relies on runtime error checking. Most errors are only caught when the faulty statement executes. Checking for dangerous situations early in the script prevents the script from failing in the middle of a task, otherwise making it difficult to determine where the script left off and what needs to be done to continue.

Sometimes system administrators unintentionally delete or change the accessibility of a file, making it unavailable to a script. Other times, changes in the environment can change which commands are executed. Malicious computer users have also been known to tamper with a person's login profile so that the commands you think you are running are not the ones you are actually using.

In the most primitive case you can check whether external commands are in the same places as on the computer on which you wrote the script. Generally you need to verify the location of commands if you are using multiple flavors of Linux, such as SUSE and Red Hat.

# Sanity checks

if [ -z "$BASH" ]  ; then
   printf "$SCRIPT:$LINENO: please run this script with the BASH shell\n" >&2
   exit 1
fi
if [ ! -x "$who" ] ; then
   printf "$SCRIPT:$LINENO: the command $who is not available aborting\n " >&2
   exit 1
fi

if [ ! -x "$wc" ] ; then
   printf "$SCRIPT:$LINENO: the command $wc is not available aborting\n " >&2
   exit 1
fi

There are a couple of elements in these statements that you will not understand yet, but please ignore them for now. They will be explained later.

The actual functionality

When you have verified that the system is sane, the script can proceed to do its work.

# create a backup of my files at the beginning of the session
mybackup="/Scratch/users/$USER/myhome_backup"`date +"%y%m%d"`".tar" 
if [ ! -d  "/Scratch/users/$USER/" ] ; then
   mkdir -p "/Scratch/users/$USER/"
fi
tar cvf "$mybackup" "$HOME"

Here the system variable $HOME is set to your home directory by the system at the beginning of your login session.

Cleanup

Finally, the script needs to clean up after itself. Any temporary files should be deleted, and the script returns a status code to the person or program running the script. In this case, there are no files to clean up.

echo -n Deleting the temporary files... 
rm -f *.tmp
echo Done.

More complex scripts might use a variable to keep track of that status code returned by a failed command.

As seen previously, the exit command unconditionally stops a script. exit can and should include a status code to return to the caller of the script. Return code 0 indicates successful completion of the script (no errors).

 If the status code is omitted, the status of the last command executed by the script is returned. As a result, it’s always best to supply an exit status.

if [ -f "$mybackup" ] ; then 
   exit 0 # all is well
else 
   exit 1 # backup was not created
fi

A script automatically stops when it reaches its end as if there was an implicit exit typed there, but the exit status in this case is the status of the last command executed.

There is also a utility called sleep, which suspends the script execution for a specific number of seconds after which it wakes up and resumes at the next statement after the sleep command.

sleep 5 # wait for 5 seconds

Sleep is useful for placing pauses in the script, enabling the user to read what's been displayed on the screen. Sleep isn’t suitable for synchronizing events, however, because how long a particular program runs on the computer often depends on the system load, number of users, hardware upgrades, and other factors outside of the script's control.

Reading Keyboard Input

The built-in read command stops the script and waits for the user to type something from the keyboard. The text typed is assigned to the variable that accompanies the read command.

printf "%s\n" "Enter the number of days from now you want to include into your backup"
read BACKUP_PERIOD

In this example, the variable BACKUP_PERIOD contains the number of days typed by the user.

There are a number of options for read. First, -p (prompt) is a shorthand feature that combines the printf and read statements: read displays a short message before waiting for the user to respond.

read -p "Enter the number of days from now you want to include into your backup?" BACKUP_PERIOD

The -r (raw input) option disables the backslash escaping of special characters. Normally, read understands escape sequences such as \n when they're typed by the user. Using raw input, read treats the backslash the same as any other character typed on the keyboard. Typically you need -r when you need to enter a Windows-style path, where directories are separated with backslashes.

read -p "Enter a Windows backup path): " -i BACKUP_PATH

The -e option works only interactively, not in shell scripts. It enables you to use Bash's history features to select the line to return: you can use the Up and Down Arrow keys to move through recently typed commands, much like on the Bash command line.

A timeout can be set up using the -t (timeout) switch. If nothing is typed by the end of the timeout period, the shell continues with the next command and the value of the variable is unchanged. If the user starts typing after the timeout period ends, anything typed is lost. The timeout is measured in seconds.

read -t 5 FILENAME # wait up to 5 seconds to read a filename

If your script sets the shell variable TMOUT, Bash times out after the number of seconds in the variable even if -t is not used.

A limit can be placed on the number of characters to read using the -n (number of characters) switch. If the maximum number of characters is reached, the shell continues with the next command without waiting for the Enter/Return key to be pressed.

read -n 10 FILENAME # read no more than 10 characters

If you don't supply a variable, read puts the typed text into a variable named REPLY. Well-structured scripts should avoid this default behavior, to make it clear to a script reader where the value of REPLY is coming from.

While reading from the keyboard, read normally returns a status code of 0.
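One way to use that status code is to detect a timeout (a sketch; the 10-second limit and the default of 7 days are arbitrary):

if read -t 10 -p "Backup period in days? " BACKUP_PERIOD ; then
   printf '%s\n' "Using $BACKUP_PERIOD days"
else
   BACKUP_PERIOD=7   # no answer in time; fall back to a default
   printf '%s\n' "Timed out; using $BACKUP_PERIOD days"
fi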


Standard data streams

The shell provides three standard data streams:

stdin— Standard input, file descriptor 0; where programs read their input from
stdout— Standard output, file descriptor 1; where programs write their normal output
stderr— Standard error, file descriptor 2; where programs write their error messages

By default, most programs read their input from stdin and write their output to stdout. Because both streams are normally associated with a console, programs behave as you generally want, reading input data from the console keyboard and writing output to the console screen. When a well-behaved program writes an error message, it writes the message to the stderr stream, which is also associated with the console by default. Having separate streams for output and error messages presents an important opportunity, as you'll see in a moment.

Although the shell associates the three standard input/output streams with the console by default, you can specify input/output redirectors that associate a stream with a file: < redirects standard input, > redirects standard output, and 2> redirects standard error. To see how redirection works, consider the wc command, discussed below.

Perhaps you can now see the reason for having the separate output streams stdout and stderr. If the shell provided a single output stream, error messages and output would be mingled. Therefore, if you redirected the output of a program to a file, any error messages would also be redirected to the file. This might make it difficult to notice an error that occurred during program execution. Instead, because the streams are separate, you can choose to redirect only stdout to a file. When you do so, error messages sent to stderr appear on the console in the usual way. Of course, if you prefer, you can redirect both stdout and stderr to the same file or redirect them to different files. As usual in the Unix world, you can have it your own way.

A simple way of avoiding annoying output is to redirect it to the null file, /dev/null. If you redirect the stderr stream of a command to /dev/null, you won't see any error messages the command produces.

Just as you can direct the standard output or error stream of a command to a file, you can also redirect a command's standard input stream to a file, so that the command reads from the file instead of the console. For example, if you issue the wc  command without arguments, the command reads its input from stdin. Type some words and then type the end of file character (Ctrl-D) and wc  will report the number of lines, words, and characters you entered. You can tell wc  to read from a file, rather than the console, by issuing a command like:

wc </etc/passwd

Of course, this isn't the usual way of invoking wc. The author of wc  helpfully provided a command-line argument that lets you specify the file from which wc  reads. However, by using a redirector, you could read from any desired file even if the author had been less helpful.

Some programs are written to ignore redirectors. For example, the passwd  command expects to read the new password only from the console, not from a file. You can compel such programs to read from a file, but doing so requires techniques more advanced than redirectors.

When you specify no command-line arguments, many Unix programs read their input from stdin and write their output to stdout. Such programs are called filters. Filters can be easily fitted together to perform a series of related operations. The tool for combining filters is the pipe, which connects the output of one program to the input of another. For example, consider this command:

ls ~ | wc -l

The command consists of two commands, joined by the pipe redirector (|). The first command lists the names of the files in the user's home directory, one file per line. The second command invokes wc by using the -l option, which causes wc to print only the total number of lines, rather than printing the total number of lines, words, and characters. The pipe redirector sends the output of the ls command to the wc command, which counts and prints the number of lines in its input, which happens to be the number of files in the user's home directory.

This is a simple example of the power and sophistication of the Unix shell. Unix doesn't include a command that counts the files in the user's home directory and doesn't need to do so. Should the need to count the files arise, a knowledgeable Unix user can prepare a simple script that computes the desired result by using general-purpose Unix commands.


type command

If you don't know whether a command is built-in, an external command, an alias, or a function, the Bash type command can help. For example:

type cd
cd is a shell builtin

type id
id is /usr/bin/id

NOTE: the type command is similar to the which command, but it also knows about aliases, functions, and built-in commands.

The shopt command

Bash options can be enabled or disabled by commands or as a command switch when Bash is started. For example, to start a Bash session and disallow the use of undefined variables, use this:

$ bash -o nounset

In a script you can disallow the use of undefined variables by using the following command:

shopt -s -o nounset

Historically, the set command was used to turn options on and off. As the number of options grew, set became more difficult to use because options are represented by single letter codes. As a result, Bash provides the shopt (shell option) command to turn options on and off by name instead of a letter. You can set certain options only by letter. Others are available only under the shopt command. This makes finding and setting a particular option a confusing task.

shopt -s (set) turns on a particular shell option. shopt -u (unset) turns off an option. With an option name but without -s or -u, shopt prints the current setting.

shopt -u -o nounset

shopt by itself or with -p prints a list of options and shows whether they are on, excluding the -o options. To see those, you need shopt -o. A list of the letter codes is stored in the shell variable $-.

There are way too many options in Bash, but the good news is that we care about only a few of them:

# shopt -p
shopt -u autocd
shopt -u cdable_vars
shopt -u cdspell
shopt -u checkhash
shopt -u checkjobs
shopt -u checkwinsize
shopt -s cmdhist
shopt -u compat31
shopt -u compat32
shopt -u compat40
shopt -u compat41
shopt -u compat42
shopt -u compat43
shopt -u completion_strip_exe
shopt -s complete_fullquote
shopt -u direxpand
shopt -u dirspell
shopt -u dotglob
shopt -u execfail
shopt -s expand_aliases
shopt -u extdebug
shopt -u extglob
shopt -s extquote
shopt -u failglob
shopt -s force_fignore
shopt -u globasciiranges
shopt -u globstar
shopt -u gnu_errfmt
shopt -s histappend
shopt -u histreedit
shopt -u histverify
shopt -s hostcomplete
shopt -u huponexit
shopt -u inherit_errexit
shopt -s interactive_comments
shopt -u lastpipe
shopt -u lithist
shopt -s login_shell
shopt -u mailwarn
shopt -u no_empty_cmd_completion
shopt -u nocaseglob
shopt -u nocasematch
shopt -u nullglob
shopt -s progcomp
shopt -s promptvars
shopt -u restricted_shell
shopt -u shift_verbose
shopt -s sourcepath
shopt -u xpg_echo

I recommend setting just a couple of them, which are not set by default:

shopt -s lastpipe
shopt -u autocd

Regular expressions in filenames

Shell regular expressions (the patterns used in filename matching, also known as globbing) are of two types: primitive patterns, available in all shells, and extended patterns, enabled in Bash via the extglob option.

The primitive pattern metacharacters are as follows:

*— Matches any string, including the empty string
?— Matches any single character
[set]— Matches any single character from the set; a leading ! or ^ negates the set

NOTE: Bash 3.2 introduced "normal" regular expressions and the =~ operator to use them inside [[ ... ]] conditions.
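A short sketch of the =~ operator inside [[ ]] (the version string here is invented for the example):

ver="release-2.14"
if [[ $ver =~ ([0-9]+)\.([0-9]+) ]] ; then
   printf '%s\n' "major=${BASH_REMATCH[1]} minor=${BASH_REMATCH[2]}"   # prints major=2 minor=14
fi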

PART 2

During a normal day a sysadmin often writes several Bash or Perl scripts. They are called throwaway scripts. Often they perform a specific task related to the current problem that the sysadmin is trying to solve, for example some information collection. Often, if you face a problem, you want to extract the information relevant to the problem from a log file, but the logfile is "dirty" and needs to be filtered from junk before it becomes usable.

The main tool in such circumstances is a subclass of Unix utilities called filters, connected together via two mechanisms available in Unix -- redirection and pipes -- which allow one command's output to be processed as another command's input.

In some specific roles, like web server administrator, extracting relevant information from web server and proxy logs can approach a full-time job.

Standard files

Unix has three standard files: standard input (stdin), standard output (stdout), and standard error (stderr).

Of course, that's mostly by convention. There's nothing stopping you from writing your error information to standard output if you wish. You can even close the three file handles totally and open your own files for I/O.

Redirection and pipes

There are two major mechanisms that increase the flexibility of Unix utilities: redirection of input and output to and from files, and pipes, which connect the output of one command to the input of another.

Before shell executes a command, it scans the command line for redirection characters. These special symbols instruct the shell to redirect input and output. Redirection characters can appear anywhere in a simple command or can precede or follow a command. They are not passed on to the invoked command.

 

Redirection basics

By default Unix/Linux assumes that all output is going to STDOUT, which is assigned to the user's screen/console, called /dev/tty. You can divert messages directed to standard output, for example from commands like printf, to files or other commands. Bash refers to this as redirection.

The most popular is the > operator, which redirects STDOUT to a file. The redirection operator is followed by the name of the file the messages should be written to. For example, to write a timestamped message to a file named /tmp/my.log, you use:

timestamp=`date`
printf "%s\n" "The processing started at $timestamp" > /tmp/my.log

Try to execute:

printf "%s\n" "Hello world" > /dev/tty

You will see that it is typed on your screen exactly the same way as if you had executed the command:

printf "%s\n" "Hello world"

because those two commands are actually identical.

It is important to understand that when messages aren't redirected in your program, the output goes through a special file called standard output. By default, standard output represents the screen: everything sent through standard output is shown on the screen. Bash uses the symbol &1 to refer to standard output, and you can explicitly redirect messages to it. You can redirect to a file the output of the whole script:

bash myscript.sh > mylisting.txt
This is the same as
bash myscript.sh 1> mylisting.txt

In this case any printf statement will write the information not to the screen but to the file you've redirected the output to, in this case the file mylisting.txt.

But you can also redirect each printf statement in your script. Let's see another set of examples:

printf "Don't forget to backup your data" > /dev/tty      # send explicitly to the screen
printf "Don't forget to backup your data"                 # sent to screen via standard output
printf "Don't forget to backup your data >&1              # same as the last one
printf "Don't forget to backup your data >/dev/stdout     # same as the last one
printf "Don't forget to backup your data" > warning.txt   # sent to a file on disk

Using standard output is a way to send all the output from a script and any commands in it to a new destination.

A script doesn't usually need to know where the messages are going: There’s always the possibility they were redirected. However, when errors occur and when warning messages are printed to the user, you don't want these messages to get redirected along with everything else.

Linux defines a second file especially for messages intended for the user called standard error. This file represents the destination for all error messages. Because standard error, like standard output, is a file, standard error can likewise be redirected. The symbol for standard error is &2. /dev/stderr can also be used. The default destination, like standard output, is the screen. For example,

printf "$SCRIPT:SLINENO: No files available for processing" >&2

This command appears to work the same as a printf without the >&2 redirection, but there is an important difference. It displays an error message to the screen, no matter where standard output has been previously redirected.

 The redirection symbols for standard error are the same as standard output except they begin with the number 2. For example

bash myscript.sh 2> myscript_errors.txt

There are several classic types of redirection:

> file— Redirect standard output to file, overwriting it
>> file— Redirect standard output to file, appending to it
< file— Redirect standard input from file
2> file— Redirect standard error to file
<< marker— Here document: read input from the script itself up to marker
<<< string— Here string: read input from a string or variable

Source and target can be expressions. In this case Bash performs command and parameter substitution before using the parameter. File name substitution occurs only if the pattern matches a single file.

The Unix command cat is actually short for "catenate," i.e., link together. It accepts multiple filename arguments and copies them to the standard output. But let's pretend, for the moment, that cat and other utilities don't accept filename arguments and accept only standard input. The Unix shell lets you redirect standard input so that it comes from a file. The notation command < filename does the same as cat with less overhead.

The > operator always overwrites the named file. If a series of printf messages are redirected to the same file, only the last message appears.

To add messages to a file without overwriting the earlier ones, Bash has an append operator, >>. This operator redirects messages to the end of a file.

printf "%s\n" "The processing started at $timestamp" > /tmp/nikolai.log
... ... ... 
printf "There were no errors. Normal exist of the program" >>  /tmp/nikolai.log

In the same way, input can be redirected to a command from a file. The input redirection symbol is <. For example, the utility wc (word count) can calculate the number of lines in a file with the option -l. That means you can count the number of lines in a file using the command:

wc  -l <  $HOME/.bashrc

Again, wc -l counts the lines of a file; in this case, the number of lines in your .bashrc. Printing this information from your .bash_profile script might be a useful reminder that can alert you to the fact that you recently modified your environment, or, God forbid, that your .bashrc file disappeared without a trace :-)

There is also a way to imitate reading from a file inside the script by putting several lines of input directly into the script. The operator <<MARKER treats the lines following it in a script as if they were typed from the keyboard, until it reaches a line starting with the word MARKER. In other words, the lines which are treated as an input file are delimited by a special line using a delimiter that you define yourself. For example, in the following example the delimiter word used is EOF:

cat > /tmp/example <<EOF
this is a test demonstrating how you can 
write several lines of text into 
a file
EOF

If you use >> instead of  >  you can add lines to a file without using any editor:

cat >>/etc/resolv.conf <<EOF
search datacenter.mycompany.com headquarters.mycompany.com
nameserver 10.100.20.5
nameserver 10.100.20.6
EOF

In this example Bash treats the three lines between the cat command and the closing EOF marker as if they were being typed from the keyboard and appends them to the file specified after >> (/etc/resolv.conf in this case). There should be no spaces between << and the EOF marker. Again, the name EOF is arbitrary; you can choose, for example, LINES_END instead. The only important thing is that there should be no lines in your text that start with the same word:

cat >>/etc/resolv.conf <<LINES_END
search datacenter.mycompany.com headquarters.mycompany.com
nameserver 10.100.20.5
nameserver 10.100.20.6
LINES_END

No line of the included text may begin with the marker word; that's why using all caps makes sense in this case.

The data in the << list is known as a here file (or a here document) because the word HERE was often used in Bourne shell scripts as the marker of the end of the input lines.

Bash has another here file redirection operator, <<<, which redirects a variable or a literal.

cat > /tmp/example <<<  "this is another example of piping info into the file" 
To summarize: > and >> redirect output (overwriting and appending, respectively), < redirects input from a file, << reads a here document embedded in the script, and <<< reads a here string from a variable or literal.

Pipes as cascading redirection

Instead of files, the results of a command can be redirected as input to another command. This process is called piping and uses the vertical bar (or pipe) operator |.

who | wc -l # count the number of users

Any number of commands can be strung together with vertical bar symbols. A group of such commands is called a pipeline.

If one command ends prematurely in a series of pipe commands, for example, because you interrupted a command with a Ctrl-C, Bash displays the message "Broken Pipe" on the screen.

See Pipes -- powerful and elegant programming paradigm

PART 3: Understanding somebody else shell scripts

As Bash has a long history, scripts written by other people can use outdated features, but they can also use advanced features that you did not suspect exist.

Arithmetic Expressions

The ((...)) Command

The ((...)) command is equivalent to the let command, except that all characters between the (( and )) are treated as quoted arithmetic expressions. This is more convenient to use than let, because many of the arithmetic operators otherwise have special meaning to the shell. The following commands are equivalent:

$ let "X=X + 1" 

and

$ ((X=X + 1)) 

Before the let and ((...)) commands (which Bash inherited from the Korn shell), the only way to perform arithmetic was with the external expr command. For example, to do the same increment of X using expr:
$ X=`expr $X + 1` 

In tests on a few systems, the let command performed the same operation 35-60 times faster, which is not surprising: expr is an external command, so every use requires spawning a separate process. That is quite a difference.
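In modern Bash the same idea is usually written with the $((...)) arithmetic expansion, which substitutes the value of the expression instead of merely evaluating it. A small sketch:

X=5
((X=X+1))         # arithmetic command: X is now 6; no $ is needed inside (( ))
echo $((X * 2))   # arithmetic expansion: prints 12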

Processing Arguments

You can easily write scripts that process arguments, because a set of special shell variables holds the values of arguments specified when your script is invoked.

For example, here's a simple one-line script that prints the value of its second argument:

echo My second argument has the value $2.

Suppose you store this script in the file second, change its access mode to permit execution, and invoke it as follows:

./second a b c

The script will print the output:

My second argument has the value b.
The complete set of these special variables is:

$0    The command name.
$1, $2, ..., $9    The individual arguments of the command.
$*    The entire list of arguments, treated as a single word.
$@    The entire list of arguments, treated as a series of words.
$?    The exit status of the previous command. The value 0 denotes successful completion.
$$    The process ID of the current process.

Notice that the shell provides variables for accessing only nine arguments. Nevertheless, you can access more than nine arguments. The key to doing so is the shift  command, which discards the value of the first argument and shifts the remaining values down one position. Thus, after executing the shift  command, the shell variable $9 contains the value of the tenth argument. To access the eleventh and subsequent arguments, you simply execute the shift  command the appropriate number of times.
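A minimal sketch: after one shift, the original tenth argument becomes available as $9 (in Bash you can also write ${10} directly):

echo "ninth argument: $9"
shift                        # discard $1; the remaining arguments move down one position
echo "tenth argument: $9"    # what was the tenth argument is now in $9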

Exit Codes

The shell variable $? holds the numeric exit status of the most recently completed command. By convention, an exit status of zero denotes successful completion; other values denote error conditions of various sorts.

You can set the error code in a script by issuing the exit  command, which terminates the script and posts the specified exit status. The format of the command is:

exit status

where status is a non-negative integer that specifies the exit status.
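A minimal sketch of how exit and $? interact (the status 3 is arbitrary; run these lines as a script):

false      # a command that always fails
echo $?    # prints 1, the exit status of false
true       # a command that always succeeds
echo $?    # prints 0
exit 3     # the script terminates and reports status 3 to its caller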

Conditional Logic

A shell script can employ conditional logic, which lets the script take different actions based on the values of arguments, shell variables, or other conditions. The test command lets you specify a condition, which can be either true or false. Conditional commands (including the if, case, while, and until commands) use the test command to evaluate conditions.

The test command

The test command evaluates its arguments and sets the exit status to 0, which indicates that the specified condition was true, or to a non-zero value, which indicates that the specified condition was false. Some commonly used argument forms of the test command are:

-d file      True if file exists and is a directory
-f file      True if file exists and is a regular file
-r file      True if file exists and is readable
-w file      True if file exists and is writable
-x file      True if file exists and is executable
-z string    True if the length of string is zero
s1 = s2      True if the strings s1 and s2 are equal
n1 -eq n2    True if the integers n1 and n2 are equal (similarly -ne, -lt, -le, -gt, -ge)
file1 -ot file2    True if file1 is older than file2

To see the test  command in action, consider the following script:

test -d "$1"
echo $?

This script tests whether its first argument specifies a directory and displays the resulting exit status, a zero or a non-zero value that reflects the result of the test.

Suppose the script were stored in the file tester, to which you have given execute permission. Executing the script might yield results similar to the following:

$ ./tester /
0
$ ./tester /missing
1

These results indicate that the / directory exists and that the /missing directory does not exist.

The if command

The test  command is not of much use by itself, but combined with commands such as the if  command, it is useful indeed. The if  command has the following form:

if command
then
  commands
else
  commands
fi

Usually the command that immediately follows the word if  is a test  command. However, this need not be so. The if  command merely executes the specified command and tests its exit status. If the exit status is 0, the first set of commands is executed; otherwise the second set of commands is executed. An abbreviated form of the if  command does nothing if the specified condition is false:

if command
then
  commands
fi
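For instance, here is a small sketch of an if whose condition is an ordinary command rather than test (the pattern and the file are arbitrary; grep -q suppresses output and only sets the exit status):

if grep -q root /etc/passwd
then
  echo "root is present in /etc/passwd"
fi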

When you type an if  command, it occupies several lines; nevertheless it's considered a single command. To underscore this, the shell provides a special prompt (called the secondary prompt) after you enter each line. Often, scripts are entered by using a text editor; when you enter a script using a text editor you don't see the secondary prompt, or any other shell prompt for that matter.

As an example, suppose you want to delete a file file1 if it's older than another file file2. The following command would accomplish the desired result:

if test file1 -ot file2
then
  rm file1
fi

You could incorporate this command in a script that accepts arguments specifying the filenames:

if test "$1" -ot "$2"
then
  rm "$1"
  echo Deleted the old file.
fi

If you name the script riddance and invoke it as follows:

./riddance thursday wednesday

the script will delete the file thursday if that file is older than the file wednesday.

The case command

The case  command provides a more sophisticated form of conditional processing:

case value in
  pattern1) commands ;;
  pattern2) commands ;;
  ...
esac

The case command attempts to match the specified value against a series of patterns. The commands associated with the first matching pattern, if any, are executed. Patterns are built using characters and metacharacters, like those used for filename globbing (*, ?, [...]). As an example, here's a case command that interprets the value of the first argument of its script:

case $1 in
  -f) echo Force deletion without confirmation ;;
  -i) echo Confirm before deleting ;;
   *) echo Unknown argument ;;
esac

The command echoes a different line of text, depending on the value of the script's first argument. As done here, it's good practice to include a final pattern that matches any value.

The while command

The while  command lets you execute a series of commands iteratively (that is, repeatedly) so long as a condition tests true:

while command
do
  commands
done

Here's a script that uses a while  command to print its arguments on successive lines:

while test $# -gt 0
do
  echo "$1"
  shift
done

The commands that comprise the do part of a while (or another loop command) can include if commands, case commands, and even other while commands. However, scripts rapidly become difficult to understand when this occurs often. You should nest conditional commands within other conditional commands only with due consideration for the clarity of the result, and include comments (#) to clarify difficult constructs.

The until command

The until  command lets you execute a series of commands iteratively (that is, repeatedly) so long as a condition tests false:

until command
do
  commands
done

Here's a script that uses an until  command to print its arguments on successive lines, until it encounters an argument that has the value red:

until test "$1" = red
do
  echo "$1"
  shift
done

For example, if the script were named stopandgo and stored in the current working directory, the command:

./stopandgo green yellow red blue

would print the lines:

green
yellow

The for command

The for  command iterates over the elements of a specified list:

for variable in list
do
  commands
done

Within the commands, you can reference the current element of the list by means of the shell variable $variable, where variable is the name specified following the for. The list typically takes the form of a series of words, which can incorporate filename metacharacters. For example, the following for command:

for i in 2 4 6 8
do
  echo $i
done

prints the numbers 2, 4, 6, and 8 on successive lines.
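Because the list can incorporate filename metacharacters, a common idiom is to iterate over the files matching a pattern. A small sketch (assuming some .txt files exist in the current directory):

for f in *.txt
do
  echo "processing $f"
done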

A special form of the for  command iterates over the arguments of a script:

for variable
do
  commands
done

For example, the following script prints its arguments on successive lines:

for i
do
  echo $i
done

The break and continue commands

The break and continue commands are simple commands that take no arguments. When the shell encounters a break command, it immediately exits the body of the enclosing loop (while, until, or for). When the shell encounters a continue command, it immediately discontinues the current iteration of the loop. If the loop condition permits, other iterations may occur; otherwise the loop is exited.
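A minimal sketch of both commands in a single loop (the argument values are arbitrary):

for arg in alpha "" beta quit gamma
do
  test -z "$arg" && continue    # skip the empty argument and go to the next iteration
  test "$arg" = quit && break   # leave the loop entirely; gamma is never printed
  echo "$arg"
done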

