Softpanorama
May the source be with you, but remember the KISS principle ;-)


AWK one liners


Introduction

One great blunder of the early Unix developers was that they never integrated AWK into the shell. The team that developed AWK was technically head and shoulders above the shell developers before David Korn; Korn was a talented developer, but he came too late to change the situation. Later, the integration of shell and AWK was attempted in Perl, which can be used instead of AWK in many cases (see Perl One Liners). But while AWK is simple and elegant, Perl suffers from overcomplexity.

Simple AWK programs enclosed in single quotes can be passed as a parameter. For example:

awk 'BEGIN { FS = ":" } { print $1 | "sort" }' /etc/passwd
This program
 BEGIN { FS = ":" } { print $1 | "sort" }

prints a sorted list of the login names of all users from /etc/passwd.

If an input file or output file are not specified, AWK will expect input from stdin or output to stdout.
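As a quick illustration (the sample data here is invented), awk behaves like any other filter in a pipeline:

```shell
# With no file argument awk reads stdin; its output goes to stdout,
# so it can be piped or redirected like any other command.
printf 'alpha\nbeta\n' | awk '{ print NR, $0 }'
# 1 alpha
# 2 beta
```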

AWK was a pioneer in introducing regular expressions to Unix (see AWK Regular expressions). But the pattern-matching capabilities of AWK are not limited to regular expressions. Patterns can be

  1. regular expressions enclosed by slashes, e.g.: /[a-z]+/
  2. relational expressions, e.g.: $3!=$4
  3. pattern-matching expressions, e.g.: $1 !~ /string/
  4. or any combination of these, e.g.:
    (substr($0,5,2)=="xx" && $3 ~ /nasty/ ) || /^The/ || /mean$/ || $4>2

(This last example selects lines where the two characters starting in fifth column are xx and the third field matches nasty, plus lines beginning with The, plus lines ending with mean, plus lines in which the fourth field is greater than two.)
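The simpler pattern types can be tried directly on the command line; a sketch using invented sample data:

```shell
# Three lines of test data: a name and two numeric fields
printf 'ann 1 1\nbob 2 3\ncarla 4 4\n' > /tmp/patterns.txt

# Relational expression: print lines where field 2 differs from field 3
awk '$2 != $3' /tmp/patterns.txt          # prints: bob 2 3

# Pattern-matching expression: first field does NOT contain "a"
awk '$1 !~ /a/' /tmp/patterns.txt         # prints: bob 2 3
```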

AWK procedures are enclosed in { curly brackets }. Procedures can assign variables and arrays, print output, and perform flow control.
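For example, a procedure can keep a running total in a scalar variable and per-key counts in an associative array (the grocery data below is invented):

```shell
printf 'apples 3\npears 2\napples 4\n' |
awk '{ total += $2        # scalar: running sum of the second field
       count[$1]++ }      # array: occurrences keyed by first field
     END { print "total:", total
           print "apples lines:", count["apples"] }'
# total: 9
# apples lines: 2
```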

AWK operators by order of (decreasing) precedence

When creating complex expressions in AWK you need to understand the precedence of operators, or use brackets (the safer and simpler way ;-). Don't try to save on brackets!

Here is the precedence of operators in AWK:

 Field reference:			$
  Increment or decrement:		++ --
  Exponentiate:				^
  Multiply, divide, modulus:		* / %
  Add, subtract:			+ -
  Concatenation:			(blank space)
  Relational:				< <= > >= != ==
  Match regular expression:		~ !~
  Logical:				&& || 
  C-style assignment:			= += -= *= /= %= ^=
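Two classic consequences of this table, as a minimal sketch:

```shell
# Exponentiation binds tighter than unary minus, so -1^2 is -(1^2):
awk 'BEGIN { print -1^2 }'             # prints -1
awk 'BEGIN { print (-1)^2 }'           # prints 1

# Concatenation binds tighter than comparison: 1 and 2 concatenate
# to "12" before the == test is applied.
awk 'BEGIN { print (1 2 == "12") }'    # prints 1 (true)
```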

AWK arithmetic functions

AWK has most of the arithmetic functions available in C, for example:

exp, int, log and sqrt
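A quick sketch of those four functions:

```shell
awk 'BEGIN {
    print int(3.9)       # truncates toward zero -> 3
    print sqrt(16)       # -> 4
    print exp(1)         # e, printed with the default OFMT -> 2.71828
    print log(exp(1))    # natural logarithm, so -> 1
}'
```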

AWK string functions

AWK provides string functions such as length, substr, index, split, sub, gsub, and sprintf.

Print output

Output can be unformatted (print) or formatted (printf).
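The difference in one line each (a minimal sketch):

```shell
# print inserts OFS between its arguments and appends ORS (a newline);
# printf emits exactly what the format string specifies.
awk 'BEGIN { print "pi is", 3.14159 }'           # pi is 3.14159
awk 'BEGIN { printf "pi is %.2f\n", 3.14159 }'   # pi is 3.14
```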

Perform flow-control

For-loops:

for   ( [initial expression]; 
		[test expression]; 
		[increment counter expression] ) 
	{ commands } 

example:

for (i = 1; i <= 20; i++)
does 20 iterations
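A complete, runnable version of that loop, summing the numbers 1 through 20:

```shell
# 20 iterations accumulating into sum; 1+2+...+20 = 210
awk 'BEGIN { for (i = 1; i <= 20; i++) sum += i; print sum }'
# 210
```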

If-Then-Else:

if (condition) 
		{ commands1 } 
	[ else 
		{ commands2 } ] 

does commands1 if condition is true; commands2 (or nothing) if false; condition is any expression with relational or pattern-match operators.

Other flow-control commands include while, do ... while, break, continue, next, and exit.
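A sketch combining while, next, and exit in one program (the input lines are invented):

```shell
printf '# comment\n5\n3\nstop\n7\n' |
awk '/^#/    { next }   # skip to the next input line
     /^stop/ { exit }   # abandon the input entirely
     { while ($1 > 0) { print $1; $1-- } }'   # count each number down
```

The line 7 is never printed because exit is reached on the "stop" line first.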

The following command runs a simple awk  program that searches the input file /etc/passwd for the character string `foo' (a grouping of characters is usually called a string; the term string is based on similar usage in English, such as “a string of pearls,” or “a string of cars in a train”):

awk '/foo/ { print $0 }' /etc/passwd

When lines containing `foo' are found, they are printed because `print $0' means print the current line. (Just `print' by itself means the same thing, so we could have written that instead.)

You will notice that slashes (`/') surround the string `foo' in the awk  program. The slashes indicate that `foo' is the pattern to search for. This type of pattern is called a regular expression, which is covered in more detail later (see Regexp). The pattern is allowed to match parts of words. There are single quotes around the awk  program so that the shell won't interpret any of it as special shell characters.

Here is what this program prints:

 $ awk '/foo/ { print $0 }' BBS-list
     -| fooey        555-1234     2400/1200/300     B
     -| foot         555-6699     1200/300          B
     -| macfoo       555-6480     1200/300          A
     -| sabafoo      555-2127     1200/300          C

In an awk  rule, either the pattern or the action can be omitted, but not both. If the pattern is omitted, then the action is performed for every input line. If the action is omitted, the default action is to print all lines that match the pattern.

Thus, we could leave out the action (the print  statement and the curly braces) in the previous example and the result would be the same: all lines matching the pattern `foo' are printed. By comparison, omitting the print  statement but retaining the curly braces makes an empty action that does nothing (i.e., no lines are printed).

Many practical awk  programs are just a line or two. Following is a collection of useful, short programs to get you started. Some of these programs contain constructs that haven't been covered yet. (The description of the program will give you a good idea of what is going on, but please read the rest of the Web page to become an awk  expert!) Most of the examples use a data file named data. This is just a placeholder; if you use these programs yourself, substitute your own file names for data. For future reference, note that there is often more than one way to do things in awk. At some point, you may want to look back at these examples and see if you can come up with different ways to do the same things shown here:



Old News ;-)

The AWK Manual - Useful "One-liners"

Useful awk programs are often short, just a line or two. Here is a collection of useful, short programs to get you started. Some of these programs contain constructs that haven't been covered yet. The description of the program will give you a good idea of what is going on, but please read the rest of the manual to become an awk expert!

awk '{ if (NF > max) max = NF }
END { print max }'
This program prints the maximum number of fields on any input line.
awk 'length($0) > 80'
This program prints every line longer than 80 characters. The sole rule has a relational expression as its pattern, and has no action (so the default action, printing the record, is used).
awk 'NF > 0'
This program prints every line that has at least one field. This is an easy way to delete blank lines from a file (or rather, to create a new file similar to the old file but from which the blank lines have been deleted).
awk '{ if (NF > 0) print }'
This program also prints every line that has at least one field. Here we allow the rule to match every line, then decide in the action whether to print.
awk 'BEGIN { for (i = 1; i <= 7; i++)
print int(101 * rand()) }'
This program prints 7 random numbers from 0 to 100, inclusive.
ls -l files | awk '{ x += $4 } ; END { print "total bytes: " x }'
This program prints the total number of bytes used by files.
expand file | awk '{ if (x < length()) x = length() }
END { print "maximum line length is " x }'
This program prints the maximum line length of file. The input is piped through the expand program to change tabs into spaces, so the widths compared are actually the right-margin columns.
awk 'BEGIN { FS = ":" } { print $1 | "sort" }' /etc/passwd
This program prints a sorted list of the login names of all users.
awk '{ nlines++ }
END { print nlines }'
This program counts lines in a file.
awk 'END { print NR }'
This program also counts lines in a file, but lets awk do the work.
awk '{ print NR, $0 }'
This program adds line numbers to all its input files, similar to `cat -n'.

HANDY ONE-LINERS FOR AWK 22 July 2003 compiled by Eric Pement ...

HANDY ONE-LINERS FOR AWK 22 July 2003

compiled by Eric Pement <pemente@northpark.edu> version 0.22 Latest version of this file is usually at:

http://www.student.northpark.edu/pemente/awk/awk1line.txt

USAGE:

Most of my experience comes from versions of GNU awk (gawk) compiled for Win32. Note in particular that DJGPP compilations permit the awk script to follow Unix quoting syntax '/like/ {"this"}'. However, the user must know that single quotes under DOS/Windows do not protect the redirection arrows (<, >) nor do they protect pipes (|). Both are special symbols for the DOS/CMD command shell and their special meaning is ignored only if they are placed within "double quotes." Likewise, DOS/Win users must remember that the percent sign (%) is used to mark DOS/Win environment variables, so it must be doubled (%%) to yield a single percent sign visible to awk.

If I am sure that a script will NOT need to be quoted in Unix, DOS, or CMD, then I normally omit the quote marks. If an example is peculiar to GNU awk, the command 'gawk' will be used. Please notify me if you find errors or new commands to add to this list (total length under 65 characters). I usually try to put the shortest script first.

FILE SPACING:

 # double space a file
 awk '1;{print ""}'
 awk 'BEGIN{ORS="\n\n"};1'

 # double space a file which already has blank lines in it. Output file
 # should contain no more than one blank line between lines of text.
 # NOTE: On Unix systems, DOS lines which have only CRLF (\r\n) are
 # often treated as non-blank, and thus 'NF' alone will return TRUE.
 awk 'NF{print $0 "\n"}'

 # triple space a file
 awk '1;{print "\n"}'

NUMBERING AND CALCULATIONS:

 # precede each line by its line number FOR THAT FILE (left alignment).
 # Using a tab (\t) instead of space will preserve margins.
 awk '{print FNR "\t" $0}' files*

 # precede each line by its line number FOR ALL FILES TOGETHER, with tab.
 awk '{print NR "\t" $0}' files*

 # number each line of a file (number on left, right-aligned)
 # Double the percent signs if typing from the DOS command prompt.
 awk '{printf("%5d : %s\n", NR,$0)}'

 # number each line of file, but only print numbers if line is not blank
 # Remember caveats about Unix treatment of \r (mentioned above)
 awk 'NF{$0=++a " :" $0};{print}'
 awk '{print (NF? ++a " :" :"") $0}'

 # count lines (emulates "wc -l")
 awk 'END{print NR}'

 # print the sums of the fields of every line
 awk '{s=0; for (i=1; i<=NF; i++) s=s+$i; print s}'

 # add all fields in all lines and print the sum
 awk '{for (i=1; i<=NF; i++) s=s+$i}; END{print s}'

 # print every line after replacing each field with its absolute value
 awk '{for (i=1; i<=NF; i++) if ($i < 0) $i = -$i; print }'
 awk '{for (i=1; i<=NF; i++) $i = ($i < 0) ? -$i : $i; print }'

 # print the total number of fields ("words") in all lines
 awk '{ total = total + NF }; END {print total}' file

 # print the total number of lines that contain "Beth"
 awk '/Beth/{n++}; END {print n+0}' file

 # print the largest first field and the line that contains it
 # Intended for finding the longest string in field #1
 awk '$1 > max {max=$1; maxline=$0}; END{ print max, maxline}'

 # print the number of fields in each line, followed by the line
 awk '{ print NF ":" $0 } '

 # print the last field of each line
 awk '{ print $NF }'

 # print the last field of the last line
 awk '{ field = $NF }; END{ print field }'

 # print every line with more than 4 fields
 awk 'NF > 4'

 # print every line where the value of the last field is > 4
 awk '$NF > 4'

TEXT CONVERSION AND SUBSTITUTION:

 # IN UNIX ENVIRONMENT: convert DOS newlines (CR/LF) to Unix format
 awk '{sub(/\r$/,"");print}'   # assumes EACH line ends with Ctrl-M

 # IN UNIX ENVIRONMENT: convert Unix newlines (LF) to DOS format
 awk '{sub(/$/,"\r");print}'

 # IN DOS ENVIRONMENT: convert Unix newlines (LF) to DOS format
 awk 1

 # IN DOS ENVIRONMENT: convert DOS newlines (CR/LF) to Unix format
 # Cannot be done with DOS versions of awk, other than gawk:
 gawk -v BINMODE="w" '1' infile >outfile

 # Use "tr" instead.
 tr -d '\r' <infile >outfile          # GNU tr version 1.22 or higher

 # delete leading whitespace (spaces, tabs) from front of each line
 # aligns all text flush left
 awk '{sub(/^[ \t]+/, ""); print}'

 # delete trailing whitespace (spaces, tabs) from end of each line
 awk '{sub(/[ \t]+$/, "");print}'

 # delete BOTH leading and trailing whitespace from each line
 awk '{gsub(/^[ \t]+|[ \t]+$/,"");print}'
 awk '{$1=$1;print}'           # also removes extra space between fields

 # insert 5 blank spaces at beginning of each line (make page offset)
 awk '{sub(/^/, "     ");print}'

 # align all text flush right on a 79-column width
 awk '{printf "%79s\n", $0}' file*

 # center all text on a 79-character width
 awk '{l=length();s=int((79-l)/2); printf "%"(s+l)"s\n",$0}' file*

 # substitute (find and replace) "foo" with "bar" on each line
 awk '{sub(/foo/,"bar");print}'           # replaces only 1st instance
 gawk '{$0=gensub(/foo/,"bar",4);print}'  # replaces only 4th instance
 awk '{gsub(/foo/,"bar");print}'          # replaces ALL instances in a line

 # substitute "foo" with "bar" ONLY for lines which contain "baz"
 awk '/baz/{gsub(/foo/, "bar")};{print}'

 # substitute "foo" with "bar" EXCEPT for lines which contain "baz"
 awk '!/baz/{gsub(/foo/, "bar")};{print}'

 # change "scarlet" or "ruby" or "puce" to "red"
 awk '{gsub(/scarlet|ruby|puce/, "red"); print}'

 # reverse order of lines (emulates "tac")
 awk '{a[i++]=$0} END {for (j=i-1; j>=0;) print a[j--] }' file*

 # if a line ends with a backslash, append the next line to it
 # (fails if there are multiple lines ending with backslash...)
 awk '/\\$/ {sub(/\\$/,""); getline t; print $0 t; next}; 1' file*

 # print and sort the login names of all users
 awk -F ":" '{ print $1 | "sort" }' /etc/passwd

 # print the first 2 fields, in opposite order, of every line
 awk '{print $2, $1}' file

 # switch the first 2 fields of every line
 awk '{temp = $1; $1 = $2; $2 = temp}' file

 # print every line, deleting the second field of that line
 awk '{ $2 = ""; print }'

 # print in reverse order the fields of every line
 awk '{for (i=NF; i>0; i--) printf("%s ",$i);printf ("\n")}' file

 # remove duplicate, consecutive lines (emulates "uniq")
 awk 'a !~ $0; {a=$0}'

 # remove duplicate, nonconsecutive lines
 awk '! a[$0]++'                     # most concise script
 awk '!($0 in a) {a[$0];print}'      # most efficient script

 # concatenate every 5 lines of input, using a comma separator
 # between fields
 awk 'ORS=NR%5?",":"\n"' file

SELECTIVE PRINTING OF CERTAIN LINES:

 # print first 10 lines of file (emulates behavior of "head")
 awk 'NR < 11'

 # print first line of file (emulates "head -1")
 awk 'NR>1{exit};1'

  # print the last 2 lines of a file (emulates "tail -2")
 awk '{y=x "\n" $0; x=$0};END{print y}'

 # print the last line of a file (emulates "tail -1")
 awk 'END{print}'

 # print only lines which match regular expression (emulates "grep")
 awk '/regex/'

 # print only lines which do NOT match regex (emulates "grep -v")
 awk '!/regex/'

 # print the line immediately before a regex, but not the line
 # containing the regex
 awk '/regex/{print x};{x=$0}'
 awk '/regex/{print (x=="" ? "match on line 1" : x)};{x=$0}'

 # print the line immediately after a regex, but not the line
 # containing the regex
 awk '/regex/{getline;print}'

 # grep for AAA and BBB and CCC (in any order)
 awk '/AAA/; /BBB/; /CCC/'

 # grep for AAA and BBB and CCC (in that order)
 awk '/AAA.*BBB.*CCC/'

 # print only lines of 65 characters or longer
 awk 'length > 64'

 # print only lines of less than 65 characters
 awk 'length < 65'

 # print section of file from regular expression to end of file
 awk '/regex/,0'
 awk '/regex/,EOF'

 # print section of file based on line numbers (lines 8-12, inclusive)
 awk 'NR==8,NR==12'

 # print line number 52
 awk 'NR==52'
 awk 'NR==52 {print;exit}'          # more efficient on large files

 # print section of file between two regular expressions (inclusive)
 awk '/Iowa/,/Montana/'             # case sensitive

SELECTIVE DELETION OF CERTAIN LINES:

 # delete ALL blank lines from a file (same as "grep '.' ")
 awk NF
 awk '/./'

CREDITS AND THANKS:

Special thanks to Peter S. Tillier for helping me with the first release of this FAQ file.

For additional syntax instructions, including the way to apply editing commands from a disk file instead of the command line, consult:

"sed & awk, 2nd Edition," by Dale Dougherty and Arnold Robbins O'Reilly, 1997

"UNIX Text Processing," by Dale Dougherty and Tim O'Reilly Hayden Books, 1987

"Effective awk Programming, 3rd Edition." by Arnold Robbins O'Reilly, 2001

To fully exploit the power of awk, one must understand "regular expressions." For detailed discussion of regular expressions, see "Mastering Regular Expressions, 2d edition" by Jeffrey Friedl (O'Reilly, 2002).

The manual ("man") pages on Unix systems may be helpful (try "man awk", "man nawk", "man regexp", or the section on regular expressions in "man ed"), but man pages are notoriously difficult. They are not written to teach awk use or regexps to first-time users, but as a reference text for those already acquainted with these tools.

USE OF '\t' IN awk SCRIPTS: For clarity in documentation, we have used the expression '\t' to indicate a tab character (0x09) in the scripts. All versions of awk, even the UNIX System 7 version should recognize the '\t' abbreviation.

AWK One-Liners

Although awk can be used to write programs of some complexity, many useful programs are not complicated. Here is a collection of short programs that you might find handy and/or instructive:
  1. Print the total number of input lines:
    END { print NR }
  2. Print the tenth input line:
    NR == 10
  3. Print the last field of every input line:
    { print $NF }
  4. Print the last field of the last input line:
    { field = $NF}
    END { print field } 
  5. Print every input line with more than 4 fields:
    NF > 4
  6. Print every input line in which the last field is more than 4:
    $NF > 4
  7. Print the total number of fields in all input lines:
    { nf = nf + NF }
    END { print nf }      
  8. Print the total number of lines that contain Beth:
    /Beth/ { nlines = nlines + 1 }
    END { print nlines }      
  9. Print the largest first field and the line that contains it (assumes some $1 is positive):
    $1 > max { max = $1 ; maxline = $0 }
    END { print max, maxline }
           
  10. Print every line that has at least one field:
    NF > 0
  11. Print every line longer than 80 characters:
    length($0) > 80
  12. Print the number of fields in every line, followed by the line itself:
    { print NF, $0 }
  13. Print the first two fields, in opposite order, of every line:
    { print $2, $1 }
  14. Exchange the first two fields of every line and then print the line:
    { temp = $1 ; $1 = $2 ; $2 = temp ; print }
  15. Print every line with the first field replaced by the line number:
    { $1 = NR ; print }
  16. Print every line after erasing the second field:
    { $2 = ""; print }
  17. Print in reverse order the fields of every line:
    { for (i=NF ; i>0 ; i=i-1) printf( "%s ", $i)
           printf("\n")
    }       
  18. Print the sums of the fields of every line:
    { sum = 0
           for ( i=1 ; i<=NF ; i=i+1) sum = sum + $i
           print sum
    }       
  19. Add up all fields in all lines and print the sum:
    { for ( i=1 ; i<=NF ; i=i+1 ) sum = sum + $i}
    END { print sum }       
           
  20. Print every line after replacing each field by its absolute value:
    { for (i=1 ; i<=NF ; i=i+1) if ($i<0) $i=-$i
           print
    }       
Source: The AWK Programming Language



[Jan 2, 2007] An alternative way to pass shell variables into an AWK program from the command line is to use "variable assignment" parameters:

Pseudo-files

AWK knows another way to assign values to AWK variables, like in the following example:

awk '{ print "var is", var }' var=TEST file1 file2

This statement assigns the value "TEST" to the AWK variable var and then reads the files file1 and file2. The assignment works because AWK interprets any command-line argument containing an equal sign ("=") as an assignment rather than a file name.

This example is very portable (even oawk understands this syntax), and easy to use. So why don't we use this syntax exclusively?

This syntax has two drawbacks. The first is that a variable assignment is interpreted by AWK only at the point where the file that follows it would be read; since the BEGIN action is performed before the first file is read, the variable is not available in the BEGIN action.

The second problem is that the order of the variable assignments and of the files is important. In the following example

awk '{ print "var is", var }' file1 var=TEST file2

the variable var is not defined during the read of file1, but during the reading of file2. This may cause bugs that are hard to track down.
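In modern implementations the -v option avoids both problems: the assignment happens before BEGIN runs and is independent of file order. A sketch (/tmp/f1 is an invented file name):

```shell
printf 'line\n' > /tmp/f1

# Positional assignment: var is still empty when BEGIN runs
awk 'BEGIN { print "begin=" var }' var=TEST /tmp/f1      # begin=

# -v assigns the variable before BEGIN is executed
awk -v var=TEST 'BEGIN { print "begin=" var }' /tmp/f1   # begin=TEST
```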

Hartigan-Computer-AWK

EXAMPLES      # is the comment character for awk.  'field' means 'column'

# Print first two fields in opposite order:
  awk '{ print $2, $1 }' file


# Print lines longer than 72 characters:
  awk 'length > 72' file
    

# Print length of string in 2nd column
  awk '{print length($2)}' file


# Add up first column, print sum and average:
       { s += $1 }
  END  { print "sum is", s, " average is", s/NR }


# Print fields in reverse order:
  awk '{ for (i = NF; i > 0; --i) print $i }' file


# Print the last line
      {line = $0}
  END {print line}


# Print the total number of lines that contain the word Pat
  /Pat/ {nlines = nlines + 1}
  END {print nlines}


# Print all lines between start/stop pairs:
  awk '/start/, /stop/' file


# Print all lines whose first field is different from previous one:
  awk '$1 != prev { print; prev = $1 }' file


# Print column 3 if column 1 > column 2:
  awk '$1 > $2 {print $3}' file
     

# Print line if column 3 > column 2:
  awk '$3 > $2' file


# Count number of lines where col 3 > col 1
  awk '$3 > $1 {print i + "1"; i++}' file


# Print sequence number and then column 1 of file:
  awk '{print NR, $1}' file


# Print every line after erasing the 2nd field
  awk '{$2 = ""; print}' file


# Print hi 28 times
  yes | head -28 | awk '{ print "hi" }'


# Print hi.0010 to hi.0099 (NOTE IRAF USERS!)
  yes | head -90 | awk '{printf("hi00%2.0f \n", NR+9)}'


# Replace every field by its absolute value
  { for (i = 1; i <= NF; i=i+1) if ($i < 0) $i = -$i; print}

# If you have another character that delimits fields, use the -F option
# For example, to print out the phone number for Jones in the following file,
# 000902|Beavis|Theodore|333-242-2222|149092
# 000901|Jones|Bill|532-382-0342|234023
# ...
# type
  awk -F"|" '$2=="Jones"{print $4}' filename



# Some looping for printouts
  BEGIN{
	for (i=875;i>833;i--){
		printf "lprm -Plw %d\n", i
	} exit
       }


 Formatted printouts are of the form printf( "format\n", value1, value2, ... valueN)
		e.g. printf("howdy %-8s What it is bro. %.2f\n", $1, $2*$3)
	%s = string
	%-8s = 8 character string left justified
 	%.2f = number with 2 places after .
	%6.2f = field 6 chars with 2 chars after .
	\n is newline
	\t is a tab


# Print frequency histogram of column of numbers
$2 <= 0.1 {na=na+1}
($2 > 0.1) && ($2 <= 0.2) {nb = nb+1}
($2 > 0.2) && ($2 <= 0.3) {nc = nc+1}
($2 > 0.3) && ($2 <= 0.4) {nd = nd+1}
($2 > 0.4) && ($2 <= 0.5) {ne = ne+1}
($2 > 0.5) && ($2 <= 0.6) {nf = nf+1}
($2 > 0.6) && ($2 <= 0.7) {ng = ng+1}
($2 > 0.7) && ($2 <= 0.8) {nh = nh+1}
($2 > 0.8) && ($2 <= 0.9) {ni = ni+1}
($2 > 0.9) {nj = nj+1}
END {print na, nb, nc, nd, ne, nf, ng, nh, ni, nj, NR}


# Find maximum and minimum values present in column 1
NR == 1 {m=$1 ; p=$1}
$1 >= m {m = $1}
$1 <= p {p = $1}
END { print "Max = " m, "   Min = " p }

# Example of defining variables, multiple commands on one line
NR == 1 {prev=$4; preva = $1; prevb = $2; n=0; sum=0}
$4 != prev {print preva, prevb, prev, sum/n; n=0; sum=0; prev = $4; preva = $1; prevb = $2}
$4 == prev {n++; sum=sum+$5/$6}
END {print preva, prevb, prev, sum/n}

# Example of using substrings
# substr($2,9,7) picks out characters 9 thru 15 of column 2
{print "imarith", substr($2,1,7) " - " $3, "out."substr($2,5,3)}
{print "imarith", substr($2,9,7) " - " $3, "out."substr($2,13,3)}
{print "imarith", substr($2,17,7) " - " $3, "out."substr($2,21,3)}
{print "imarith", substr($2,25,7) " - " $3, "out."substr($2,29,3)}

[3.0] Awk Examples, Nawk, & Awk Quick Reference

For example, suppose I want to turn a document with single-spacing into a document with double-spacing. I could easily do that with the following Awk program:

 awk '{print ; print ""}' infile > outfile
Notice how single-quotes (' ') are used to allow using double-quotes (" ") within the Awk expression. This "hides" special characters from the shell you are using. You could also do this as follows:
 awk "{print ; print \"\"}" infile > outfile 
-- but the single-quote method is simpler.

This program does what it is supposed to, but it also doubles every blank line in the input file, which leaves a lot of empty space in the output. That's easy to fix: just tell Awk to print an extra blank line only if the current line is not blank:

 awk '{print ; if (NF != 0) print ""}' infile > outfile
* One of the problems with Awk is that it is ingenious enough to make a user want to tinker with it, and use it for tasks for which it isn't really appropriate. For example, you could use Awk to count the number of lines in a file:
 awk 'END {print NR}' infile
-- but this is dumb, because the "wc (word count)" utility gives the same answer with less bother. "Use the right tool for the job."

Awk is the right tool for slightly more complicated tasks. Once I had a file containing an email distribution list. The email addresses of various different groups were placed on consecutive lines in the file, with the different groups separated by blank lines. If I wanted to quickly and reliably determine how many people were on the distribution list, I couldn't use "wc", since it counts blank lines, but Awk handled it easily:

 awk 'NF != 0 {++count} END {print count}' list
* Another problem I ran into was determining the average size of a number of files. I was creating a set of bitmaps with a scanner and storing them on a floppy disk. The disk started getting full and I was curious to know just how many more bitmaps I could store on the disk.

I could obtain the file sizes in bytes using "wc -c" or the "list" utility ("ls -l" or "ll"). A few tests showed that "ll" was faster. Since "ll" lists the file size in the fifth field, all I had to do was sum up the fifth field and divide by NR. There was one slight problem, however: the first line of the output of "ll" listed the total number of sectors used, and had to be skipped.

No problem. I simply entered:

 ll | awk 'NR!=1 {s+=$5} END {print "Average: " s/(NR-1)}'
This gave me the average as about 40 KB per file.

* Awk is useful for performing simple iterative computations for which a more sophisticated language like C might prove overkill. Consider the Fibonacci sequence:

 1 1 2 3 5 8 13 21 34 ...
Each element in the sequence is the sum of the two previous elements, with the first two elements both defined as 1; the sequence grows exponentially. It is very easy to use Awk to generate it:
 awk 'BEGIN {a = 0; b = 1; while (++x <= 10) {print b; t = a + b; a = b; b = t}}'
This generates the following output:
 1
 1
 2
 3
 5
 8
 13
 21
 34
 55
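The count can also be passed in from the command line with awk's -v option instead of being hard-coded; a small variation (this sketch seeds a=0, b=1 so that the printed sequence begins with the two 1s):

```shell
# Print the first n Fibonacci numbers; n comes from the -v assignment.
awk -v n=10 'BEGIN {a = 0; b = 1; while (++x <= n) {print b; t = a + b; a = b; b = t}}'
```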

UNIX Basics: Examples with AWK (a short introduction)

A colleague of mine used AWK to extract the first column from a file with the command:

awk '{print $1}' file


Easy, isn't it? This simple task does not need complex programming in C; one line of AWK does it. Once we have learned how to extract a column, we can do things such as renaming files (appending .new to every file named in files_list):

cat files_list | awk '{print "mv "$1" "$1".new"}' | sh
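Filenames containing spaces break the generated mv commands, because the shell splits the unquoted names; a slightly more careful sketch double-quotes each name (still not bulletproof, e.g. against names containing double quotes):

```shell
# Generate mv commands with each filename double-quoted; inspect the
# output first, then pipe it to sh to actually perform the renames.
printf 'my file\n' | awk '{printf "mv \"%s\" \"%s.new\"\n", $0, $0}'
```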

... and more:

  1. Renaming within the name:
    ls -1 *old* | awk '{print "mv "$1" "$1}' | sed s/old/new/2 | sh
    (although in some cases it will fail, as in file_old_and_old)
  2. Remove only files:
    ls -l | grep '^-' | awk '{print "rm "$9}' | sh
    or with awk alone:
    ls -l | awk '$1 ~ /^-/ {print $9}' | xargs rm
    Be careful when trying this out in your home directory. We remove files!
  3. Remove only directories:
    ls -l | grep '^d' | awk '{print "rm -r "$9}' | sh
    or
    ls -p | grep '/$' | awk '{print "rm -r "$1}' | sh
    or with awk alone:
    ls -l | awk '$1 ~ /^d/ {print $9}' | xargs rm -r
    Be careful when trying this out in your home directory. We remove things!
  4. Killing processes by name (in this example we kill the process called netscape):
    kill `ps auxww | grep netscape | egrep -v grep | awk '{print $2}'`
    or with awk alone:
    ps auxww | awk '$0~/netscape/&&$0!~/awk/{print $2}' |xargs kill
    This has to be adjusted to fit the ps command on whatever Unix system you are on. Basically it says: if the process is called netscape, and the line is not the 'grep netscape' (or awk) process itself, then print the PID.
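A common variant of the same idea drops the extra grep -v (or the $0!~/awk/ test) entirely: a bracket expression such as [n]etscape matches the running process name but not the literal pattern as it appears in this pipeline's own ps entry:

```shell
# /[n]etscape/ matches "netscape" but not the string "[n]etscape",
# so the pipeline never selects itself; pipe the PIDs to xargs kill.
ps auxww | awk '/[n]etscape/ {print $2}'
```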

Recommended Links

The AWK Programming Language

The awk programming language




Etc

FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available in our efforts to advance understanding of environmental, political, human rights, economic, democracy, scientific, and social justice issues, etc. We believe this constitutes a 'fair use' of any such copyrighted material as provided for in section 107 of the US Copyright Law. In accordance with Title 17 U.S.C. Section 107, the material on this site is distributed without profit exclusively for research and educational purposes. If you wish to use copyrighted material from this site for purposes of your own that go beyond 'fair use', you must obtain permission from the copyright owner.

ABUSE: IPs or network segments from which we detect a stream of probes might be blocked for no less than 90 days. Multiple types of probes increase this period.


Copyright © 1996-2016 by Dr. Nikolai Bezroukov. www.softpanorama.org was created as a service to the UN Sustainable Development Networking Programme (SDNP) in the author's free time. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License.





Disclaimer:

The statements, views, and opinions presented on this web page are those of the author (or the referenced source) and are not endorsed by, nor do they necessarily reflect, the opinions of the author's present or former employers, SDNP, or any other organization the author may be associated with. We do not warrant the correctness of the information provided or its fitness for any purpose.

Last modified: October 20, 2015