Softpanorama


GNU grep Regular Expressions

Dr. Nikolai Bezroukov


Introduction

Unix grep is an old and rather capricious text file search utility that can search either for fixed strings or for regular expressions. I have never managed to write a complex regular expression that works on the first try in "old" grep (before the introduction of Perl regular expressions), despite using Perl daily and, due to my teaching experience, being reasonably well versed in regex syntax. This tutorial is based on the GNU version of grep. Other versions are quite similar and sometimes more powerful, but GNU grep is the de-facto standard, and it currently beats the others by having the -P (Perl regex) option.

Grep regex syntax has three variations: basic (the default), extended (selected with -E), and Perl-compatible (selected with -P).

This "multiple personalities" behavior is very confusing, and that fact essentially spoils the broth. I hate the fact that nobody has had the courage to implement a new standard grep, and that the current implementation carries all the warts accumulated during the 30-plus years of Unix existence.

I highly recommend using the -P option (Perl regular expressions). It makes grep behavior less insane.

The simplest way of using grep is a plain-vanilla string search (the fgrep or grep -F invocation): you can select all lines that contain a certain string in one or more files. For example,

fgrep foo file  
returns all the lines that contain the string "foo" in the file "file".

Another way of using grep is to have it accept data through STDIN instead of having it search a file. For example,

ls | fgrep blah
lists all files in the current directory whose names contain the string "blah".

As for regular expressions, grep is very idiosyncratic in the sense that you need to remember to use a backslash before many special characters in a regular expression. For example:

grep 'if | while'  #-- wrong
grep 'if \|while' #-- will work, please note single quotes 
grep -E "if |while " #-- will also work 
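The working forms above can be tried directly on a small sample file (the file and its contents here are made up for illustration):

```shell
# Create a throwaway sample file (contents are hypothetical).
cat > /tmp/sample.txt <<'EOF'
if x then y
while x do y
for x do y
EOF

# All three commands print the "if" and "while" lines only:
grep 'if \|while' /tmp/sample.txt    # basic regex: alternation needs \|
grep -E 'if |while' /tmp/sample.txt  # extended regex: plain | works
grep -P 'if |while' /tmp/sample.txt  # Perl regex: plain | works
```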

In complex cases it's always easier to use Perl or use grep -P option (Perl regular expression option is available only in GNU grep) than to explore intricacies of grep syntax.


Please note that only recent versions of GNU grep have the -P option, which gives you the ability to use Perl-style regular expressions. Here is the relevant summary from the GNU grep 2.5 documentation:

There are four major variants of grep, controlled by the following options: -G (basic regular expressions, the default), -E (extended regular expressions), -F (fixed strings), and -P (Perl regular expressions).

The simplest example for grep is to find a word in the file:

grep foo myfile
lists the lines that match the regular expression "foo" in the file "myfile".

The name grep is a combination of editor command characters: it comes from the ed command g/re/p, which translates to "global regular expression print". In fgrep the f stands for "fixed" (it searches for fixed strings).

The most primitive regular expression is a plain string. In this case grep returns all lines that contain foo as a substring. There is a special version of grep that does string searching very fast (fgrep, see below).

Another way of using grep is in a pipe, for example,

ls | grep ".bak"
lists all files in the current directory whose names contain the string ".bak". (Note that the unescaped dot matches any character; use '\.bak' to match the dot literally.)
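The difference an escaped dot makes can be seen on a small made-up file list:

```shell
# File names are hypothetical; note that "data_bak" has no dot before "bak".
printf '%s\n' notes.bak data_bak readme.txt > /tmp/files.txt

grep '.bak' /tmp/files.txt   # prints notes.bak AND data_bak: the dot matches any character
grep '\.bak' /tmp/files.txt  # prints only notes.bak: the dot is matched literally
```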

There are also several variants of grep that can search directly in archives, for example gzgrep and bzgrep. gzgrep is a wrapper for grep that invokes grep on compressed or gzip'ed files. All options specified are passed directly to grep. If no file is specified, the standard input is decompressed and fed to grep. Otherwise the given files are uncompressed if necessary and fed to grep.

Special Characters and Quotes

Special Characters

Here we outline the special characters for grep. Note that in egrep (which uses extended regular expressions, which are actually no more functional than basic regular expressions if you use GNU grep), the list of special characters changes: \| in grep plays the role that | plays in egrep and vice versa, and there are other differences (check the man page for details). The following characters are considered special and need to be escaped:

\  .  *  [  ]  ^  $

Note that a $ sign loses its special meaning if characters follow it, and the caret ^ loses its special meaning if other characters precede it.
Square brackets behave a little differently. The rules for square brackets go as follows:

  1. inside [ ], most metacharacters lose their special meaning;
  2. ^ negates the set, but only when it is the first character after [;
  3. ] is treated literally when it is the first character in the set;
  4. a dash - is treated literally when it is the first or last character in the set.

Quotes

Single quotes are the safest to use, because they protect your regular expression from the shell. For example, grep "!" file  will often produce an error (since the shell thinks that "!" is referring to the shell command history) while grep '!' file  will not.

When should you use single quotes ?

The answer is this: if you want to use shell variables, you need double quotes; otherwise always use single quotes.

For example,

grep "$HOME" file 

searches file for the name of your home directory, while

grep '$HOME' file 

searches for the string $HOME
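A short sketch of the difference (the file contents below are made up):

```shell
# Line 1 contains the expanded home directory; line 2 contains the literal string $HOME.
printf '%s\n' "$HOME/notes" 'literal $HOME here' > /tmp/quotes.txt

grep "$HOME" /tmp/quotes.txt  # double quotes: the shell expands $HOME before grep runs
grep '$HOME' /tmp/quotes.txt  # single quotes: grep searches for the string $HOME itself
```

The second command works because in GNU grep a $ that is not at the end of the pattern is treated literally.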

Regular Grep

Usage of grep with regular expressions is pretty idiosyncratic in the sense that you need to remember to use a backslash before special characters in a regular expression, even if you use single quotes. For example:

grep 'if | while' #-- wrong
grep 'if \|while' #-- will work, please note that despite single quotes we need a backslash
egrep 'if|while' # -- will work (this is extended grep)
grep -P 'if|while' # -- also will work (Perl regular expressions are used)

There are actually four major variants of grep, controlled by the following options.

  1. `-G' `--basic-regexp' Interpret the pattern as a basic regular expression. This is the default.
  2. `-E' `--extended-regexp' Interpret the pattern as an extended regular expression.
  3. `-F' `--fixed-strings' Interpret the pattern as a list of fixed strings, separated by newlines, any of which is to be matched.
  4. `-P' `--perl-regexp' Interpret the pattern as a Perl regular expression.

    In addition, two shortcuts, egrep and fgrep, are available: egrep is the same as `grep -E', and fgrep is the same as `grep -F'. Also there is a separate implementation of grep that uses Perl regular expressions, called pcregrep.

In complex cases it's always easier to use Perl than to explore the intricacies of grep syntax. In any case I strongly recommend using option -P if it is available; that's the best way to preserve your sanity.

grep (basic)    grep -E         grep -P
a\+             a+              a+
a\?             a?              a?
a\|b            a|b             a|b
\(expression\)  (expression)    (expression)
a\{m,n\}        a{m,n}          a{m,n}
a\{,n\}         a{,n}           a{,n}
a\{m,\}         a{m,}           a{m,}
a\{m\}          a{m}            a{m}
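The interval rows of the table can be verified directly (the sample input is made up; -x makes grep match whole lines only):

```shell
printf '%s\n' aa aaa aaaa > /tmp/as.txt

# Each command prints only "aaa" (exactly three a's):
grep -x 'a\{3\}' /tmp/as.txt   # basic syntax: braces must be backslashed
grep -Ex 'a{3}' /tmp/as.txt    # extended syntax: plain braces
grep -Px 'a{3}' /tmp/as.txt    # Perl syntax: plain braces
```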

Grep Regular Expressions Warts

In basic regular expressions the metacharacters `?', `+', `{', `|', `(', and `)' lose their special meaning; use the backslashed versions `\?', `\+', `\{', `\|', `\(', and `\)' instead. As the GNU grep manual states:

Traditional egrep  did not support the `{' metacharacter, and some egrep  implementations support `\{' instead, so portable scripts should avoid `{' in `egrep' patterns and should use `[{]' to match a literal `{'.

GNU egrep  attempts to support traditional usage by assuming that `{' is not special if it would be the start of an invalid interval specification. For example, the shell command

egrep '{1'

searches for the two-character string '{1' instead of reporting a syntax error in the regular expression. POSIX.2 allows this behavior as an extension...

To search for a line containing the text hello.gif, the correct command is

grep 'hello\.gif' file 

or

grep -P 'hello\.gif' file

More on Regular Expressions

To match a selection of characters, use [ ]. This is often used to make a search case-insensitive:

[Hh]ello
matches lines containing hello  or Hello

Ranges of characters are also permitted.

There are also some alternate forms (the equivalences below hold in the C locale; the named classes are locale-aware):

[[:alpha:]] is the same as [a-zA-Z]
[[:upper:]] is the same as [A-Z]
[[:lower:]] is the same as [a-z]
[[:digit:]] is the same as [0-9]
[[:alnum:]] is the same as [0-9a-zA-Z]
[[:space:]] matches any whitespace character, including tabs
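A quick demonstration of character classes on a few made-up sample lines:

```shell
printf '%s\n' Hello hello HELLO 12345 > /tmp/classes.txt

grep '[Hh]ello' /tmp/classes.txt           # prints Hello and hello
grep -E '^[[:upper:]]+$' /tmp/classes.txt  # prints HELLO (all upper case)
grep -E '^[[:digit:]]+$' /tmp/classes.txt  # prints 12345 (all digits)
```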

Backreferences

Suppose you want to search for a string which contains a certain substring in more than one place. An example is the heading tag in HTML. Suppose I wanted to search for <H1>some string</H1>. This is easy enough to do. But suppose I wanted to do the same while allowing H2, H3, H4, H5, or H6 in place of H1. The expression <H[1-6]>.*</H[1-6]> is not good enough, since it matches <H1>Hello world</H3>, but we want the opening tag to match the closing one. To do this, we use a backreference.

The expression \n, where n is a digit, matches the contents of the n'th set of parentheses in the expression.

For example:
grep '<H\([1-6]\)>.*</H\1>'
matches what we were trying to match before.
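Here is the backreference in action on a made-up HTML fragment:

```shell
cat > /tmp/head.html <<'EOF'
<H1>Good heading</H1>
<H2>Mismatched heading</H3>
EOF

# \( \) capture the heading level; \1 demands the same digit in the closing tag.
grep '<H\([1-6]\)>.*</H\1>' /tmp/head.html  # prints only the <H1>...</H1> line
```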

Examples

Fgrep:

fgrep -l 'hahaha' * # just the names of matching files
fgrep  'May 16'  /var/logs/https/access # we are searching for a fixed string, so fgrep is better
fgrep -v 'yahoo.com' /var/logs/https/access  # filtering out yahoo.com with the -v option
find . -type f -print | xargs fgrep -l 'hahaha'  

Grep:

Suppose you want to match a specific number of repetitions of a pattern. A good example is an IP address. You could search for an arbitrary dotted-quad IP address like this:

grep -P '[[:digit:]]{1,3}(\.[[:digit:]]{1,3}){3}' file

In the C locale there is actually no difference between [0-9] and [[:digit:]], but the named class is locale-aware.

The same can be done for phone numbers written in 999-999-9999 form:

([[:digit:]]{3}[[:punct:]]){2}[[:digit:]]{4}
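Both patterns can be tried on a made-up log file (the address and phone number below are invented):

```shell
cat > /tmp/log.txt <<'EOF'
connect from 192.168.0.17
call 555-123-4567 returned
no numbers here
EOF

grep -E '[[:digit:]]{1,3}(\.[[:digit:]]{1,3}){3}' /tmp/log.txt       # prints the IP line
grep -E '([[:digit:]]{3}[[:punct:]]){2}[[:digit:]]{4}' /tmp/log.txt  # prints the phone line
```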

To search email that has come from a certain address:

grep -P '^From:.*somebody\@' /var/spool/mail/root

To search several variants of the same name:

grep -P 'Nic?k(olai)? Bezroukov'  # matches "Nick Bezroukov" or "Nikolai Bezroukov"
grep -P 'cat|dog' file            # matches lines containing the word "cat" or the word "dog" 
grep -P '^(From|To|Subject):'     # matches the corresponding part of the email header 

Using -l option

grep -l 'nobody@nowhere' /spam/*

Using -w option and word boundaries

grep -w 'abuse' *
grep '\<abuse' * 
grep 'abuse\>' *

The first command matches only the whole word 'abuse'. The second searches for lines where some word begins with the letters 'abuse', and the third for lines where some word ends with the letters 'abuse'.
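The difference can be seen on a small made-up file:

```shell
printf '%s\n' 'abuser report' 'self-abuse' 'no match here' > /tmp/words.txt

grep '\<abuse' /tmp/words.txt   # "abuser report" and "self-abuse": a word STARTS with abuse
grep 'abuse\>' /tmp/words.txt   # only "self-abuse": a word ENDS with abuse
grep -w 'abuse' /tmp/words.txt  # only "self-abuse": the whole word is exactly abuse
```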

Grep with pipes

The output of grep can also be piped to another program as follows:

ps -ef | grep httpd | wc  -l
The example above counts how many 'httpd' processes are running (the grep process itself may be included in the count; see Tip 1 below).

We can count letters from a particular spammer using the following pipe:

fgrep -l 'badspam@nowhere.com' * | wc -l
During debugging, comments often obscure the logic of the program and interfere with the search for a bug. Here is how to display the non-comment lines of myscript.pl:
grep -v '^#' ~/myscript.pl | less
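For example, on a made-up script:

```shell
cat > /tmp/demo.pl <<'EOF'
#!/usr/bin/perl
# setup section
my $x = 1;   # inline comment survives
print $x;
EOF

# Drops only lines that START with #, including the shebang line:
grep -v '^#' /tmp/demo.pl
```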

Grep is very useful as a simple yet powerful HTML analyzer. Here is how to find HTML tags that are not closed before the line break:

egrep '<[^>]*$' *.html

Tips

Tip 1: How to avoid an extra line when grepping ps output for a string or pattern:

ps -ef | grep '[c]ron'

If the pattern had been written without the square brackets, it would have matched not only the ps output line for cron, but also the ps output line for grep itself, whose command line contains the pattern.
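The trick works because the regex [c]ron still matches the string cron, but it does not match the literal string [c]ron that appears in the grep process's own ps line. A simulation with made-up ps lines:

```shell
# Simulated ps lines (invented); grep -c prints the number of matching lines.
echo 'user 998 grep cron'   | grep -c 'cron'    # 1: the plain pattern matches its own ps line
echo 'user 999 grep [c]ron' | grep -c '[c]ron'  # 0: the bracketed pattern does not match itself
```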

Tip 2: How do I search directories recursively?

grep -r --include='*.html' 'hello' ~/

Newer versions of grep have the -r option. The example above searches for 'hello' in all html files under the user's home directory (note that plain grep -r ~/*.html would read only the files matching the glob, without recursing into subdirectories). For more control over which files are searched, use find and xargs. For example,

find ~ -name '*.html' -type f -print | xargs grep 'hello'  
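When file names may contain spaces or newlines, the null-terminated variant is safer (the directory layout below is made up):

```shell
mkdir -p /tmp/tree/sub
echo 'hello world' > /tmp/tree/a.html
echo 'goodbye'     > '/tmp/tree/sub/b copy.html'   # note the space in the name

# -print0 and -0 pass names separated by NUL bytes, so spaces are harmless:
find /tmp/tree -name '*.html' -type f -print0 | xargs -0 grep -l 'hello'
```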

Tip 3: How do I output context around the matching lines?

grep -C 2 'hello' * # prints two lines of context around each matching line.  
Tip 4: In order to force grep to print the name of the file, append /dev/null to the list of files:
find . -type f -print | xargs fgrep 'hahaha' /dev/null 
Tip 5: Find all the hrefs that point to URLs that mistakenly have a space in them. This example uses Perl regular expressions:
grep -P -i 'href="[^"]* [^"]*"' *.html


Appendix

From the GNU grep manual ("grep: print lines matching a pattern"), section 2.1:

GNU Extensions

`-A num'
`--after-context=num'
Print num lines of trailing context after matching lines.

`-B num'
`--before-context=num'
Print num lines of leading context before matching lines.

`-C num'
`--context=num'
Print num lines of output context.

`--colour[=WHEN]'
`--color[=WHEN]'
The matching string is surrounded by the marker specified in GREP_COLOR. WHEN may be `never', `always', or `auto'.

`-num'
Same as `--context=num' lines of leading and trailing context. However, grep will never print any given line more than once.

`-V'
`--version'
Print the version number of grep  to the standard output stream. This version number should be included in all bug reports.

`--help'
Print a usage message briefly summarizing these command-line options and the bug-reporting address, then exit.

`--binary-files=type'
If the first few bytes of a file indicate that the file contains binary data, assume that the file is of type type. By default, type is `binary', and grep  normally outputs either a one-line message saying that a binary file matches, or no message if there is no match. If type is `without-match', grep  assumes that a binary file does not match; this is equivalent to the `-I' option. If type is `text', grep  processes a binary file as if it were text; this is equivalent to the `-a' option. Warning: `--binary-files=text' might output binary garbage, which can have nasty side effects if the output is a terminal and if the terminal driver interprets some of it as commands.

`-b'
`--byte-offset'
Print the byte offset within the input file before each line of output. When grep  runs on MS-DOS or MS-Windows, the printed byte offsets depend on whether the `-u' (`--unix-byte-offsets') option is used; see below.

`-D action'
`--devices=action'
If an input file is a device, FIFO or socket, use action to process it. By default, action is `read', which means that devices are read just as if they were ordinary files. If action is `skip', devices, FIFOs and sockets are silently skipped.

`-d action'
`--directories=action'
If an input file is a directory, use action to process it. By default, action is `read', which means that directories are read just as if they were ordinary files (some operating systems and filesystems disallow this, and will cause grep  to print error messages for every directory or silently skip them). If action is `skip', directories are silently skipped. If action is `recurse', grep  reads all files under each directory, recursively; this is equivalent to the `-r' option.

`-H'
`--with-filename'
Print the filename for each match.

`-h'
`--no-filename'
Suppress the prefixing of filenames on output when multiple files are searched.

`--line-buffered'
Set the line buffering policy; this can incur a performance penalty.

`--label=LABEL'
Displays input actually coming from standard input as input coming from file LABEL. This is especially useful for tools like zgrep, e.g. gzip -cd foo.gz | grep --label=foo something

`-L'
`--files-without-match'
Suppress normal output; instead print the name of each input file from which no output would normally have been printed. The scanning of every file will stop on the first match.

`-a'
`--text'
Process a binary file as if it were text; this is equivalent to the `--binary-files=text' option.
`-I'
Process a binary file as if it did not contain matching data; this is equivalent to the `--binary-files=without-match' option.
`-w'
`--word-regexp'
Select only those lines containing matches that form whole words. The test is that the matching substring must either be at the beginning of the line, or preceded by a non-word constituent character. Similarly, it must be either at the end of the line or followed by a non-word constituent character. Word-constituent characters are letters, digits, and the underscore.

`-r'
`-R'
`--recursive'
For each directory mentioned in the command line, read and process all files in that directory, recursively. This is the same as the `--directories=recurse' option.

`--include=file_pattern'
When processing directories recursively, only files matching file_pattern will be searched.

`--exclude=file_pattern'
When processing directories recursively, skip files matching file_pattern.

`-m num'
`--max-count=num'
Stop reading a file after num matching lines. If the input is standard input from a regular file, and num matching lines are output, grep  ensures that the standard input is positioned to just after the last matching line before exiting, regardless of the presence of trailing context lines. This enables a calling process to resume a search. For example, the following shell script makes use of it:
while grep -m 1 PATTERN
do
  echo xxxx
done < FILE

But the following probably will not work because a pipe is not a regular file:

# This probably will not work.
cat FILE |
while grep -m 1 PATTERN
do
  echo xxxx
done

When grep  stops after NUM matching lines, it outputs any trailing context lines. Since context does not include matching lines, grep  will stop when it encounters another matching line. When the `-c' or `--count' option is also used, grep  does not output a count greater than num. When the `-v' or `--invert-match' option is also used, grep  stops after outputting num non-matching lines.

`-y'
Obsolete synonym for `-i'.

`-U'
`--binary'
Treat the file(s) as binary. By default, under MS-DOS and MS-Windows, grep  guesses the file type by looking at the contents of the first 32kB read from the file. If grep  decides the file is a text file, it strips the CR  characters from the original file contents (to make regular expressions with ^  and $  work correctly). Specifying `-U' overrules this guesswork, causing all files to be read and passed to the matching mechanism verbatim; if the file is a text file with CR/LF  pairs at the end of each line, this will cause some regular expressions to fail. This option has no effect on platforms other than MS-DOS and MS-Windows.

`-u'
`--unix-byte-offsets'
Report Unix-style byte offsets. This switch causes grep  to report byte offsets as if the file were Unix style text file, i.e., the byte offsets ignore the CR  characters which were stripped. This will produce results identical to running grep  on a Unix machine. This option has no effect unless `-b' option is also used; it has no effect on platforms other than MS-DOS and MS-Windows.

`--mmap'
If possible, use the mmap  system call to read input, instead of the default read  system call. In some situations, `--mmap' yields better performance. However, `--mmap' can cause undefined behavior (including core dumps) if an input file shrinks while grep  is operating, or if an I/O error occurs.

`-Z'
`--null'
Output a zero byte (the ASCII NUL  character) instead of the character that normally follows a file name. For example, `grep -lZ' outputs a zero byte after each file name instead of the usual newline. This option makes the output unambiguous, even in the presence of file names containing unusual characters like newlines. This option can be used with commands like `find -print0', `perl -0', `sort -z', and `xargs -0' to process arbitrary file names, even those that contain newline characters.

`-z'
`--null-data'
Treat the input as a set of lines, each terminated by a zero byte (the ASCII NUL  character) instead of a newline. Like the `-Z' or `--null' option, this option can be used with commands like `sort -z' to process arbitrary file names.

Several additional options control which variant of the grep  matching engine is used. See section 4. grep  programs.



FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available in our efforts to advance understanding of environmental, political, human rights, economic, democracy, scientific, and social justice issues, etc. We believe this constitutes a 'fair use' of any such copyrighted material as provided for in section 107 of the US Copyright Law. In accordance with Title 17 U.S.C. Section 107, the material on this site is distributed without profit exclusivly for research and educational purposes.   If you wish to use copyrighted material from this site for purposes of your own that go beyond 'fair use', you must obtain permission from the copyright owner. 


Copyright © 1996-2016 by Dr. Nikolai Bezroukov. www.softpanorama.org was created as a service to the UN Sustainable Development Networking Programme (SDNP) in the author's free time. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License.


Created: May 16, 1997; Last modified: July 13, 2016