Softpanorama

May the source be with you, but remember the KISS principle ;-)

Slightly Skeptical View on Enterprise Unix Administration

News
Webliography of problems with "pure" cloud environment
Recommended Books
Recommended Links
Shadow IT
Is DevOps a yet another "for profit" technocult
Bosos or Empty Suits (Aggressive Incompetent Managers)
Unix Configuration Management Tools
Job schedulers
Unix System Monitoring
Over 50 and unemployed
Corporate bullshit as a communication method
Diplomatic Communication
Using HP ILO virtual CDROM

The KISS rule can be expanded as: Keep It Simple, Sysadmin ;-)

Additional useful material on the topic can also be found in an older article Solaris vs Linux:

Abstract

Introduction

Nine factors framework for comparison of two flavors of Unix in a large enterprise environment

Four major areas of Linux and Solaris deployment

Comparison of internal architecture and key subsystems

Security

Hardware: SPARC vs. X86

Development environment

Solaris as a cultural phenomenon

Using Solaris-Linux enterprise mix as the least toxic Unix mix available

Conclusions

Acknowledgements

Webliography

Here are my notes/reflections on the sysadmin problem in the strange (and typically pretty toxic) IT departments of large corporations:




"I appreciate Woody Allen's humor because one of my safety valves is an appreciation for life's absurdities. His message is that life isn't a funeral march to the grave. It's a polka."

-- Dennis Kucinich

[Oct 17, 2017] Converting string to lower case in Bash - Stack Overflow

Feb 15, 2010 | stackoverflow.com

assassin , Feb 15, 2010 at 7:02

Is there a way in bash to convert a string into a lower case string?

For example, if I have:

a="Hi all"

I want to convert it to:

"hi all"

ghostdog74 , Feb 15, 2010 at 7:43

There are various ways: tr
$ echo "$a" | tr '[:upper:]' '[:lower:]'
hi all
AWK
$ echo "$a" | awk '{print tolower($0)}'
hi all
Bash 4.0
$ echo "${a,,}"
hi all
Perl
$ echo "$a" | perl -ne 'print lc'
hi all
Bash
lc(){
    case "$1" in
        [A-Z])
        n=$(printf "%d" "'$1")
        n=$((n+32))
        printf \\$(printf "%o" "$n")
        ;;
        *)
        printf "%s" "$1"
        ;;
    esac
}
word="I Love Bash"
for((i=0;i<${#word};i++))
do
    ch="${word:$i:1}"
    lc "$ch"
done

jangosteve , Jan 14, 2012 at 21:58

Am I missing something, or does your last example (in Bash) actually do something completely different? It works for "ABX", but if you instead make word="Hi All" like the other examples, it returns ha , not hi all . It only works for the capitalized letters and skips the already-lowercased letters. – jangosteve Jan 14 '12 at 21:58

Richard Hansen , Feb 3, 2012 at 18:55

Note that only the tr and awk examples are specified in the POSIX standard. – Richard Hansen Feb 3 '12 at 18:55

Richard Hansen , Feb 3, 2012 at 18:58

tr '[:upper:]' '[:lower:]' will use the current locale to determine uppercase/lowercase equivalents, so it'll work with locales that use letters with diacritical marks. – Richard Hansen Feb 3 '12 at 18:58

Adam Parkin , Sep 25, 2012 at 18:01

How does one get the output into a new variable? Ie say I want the lowercased string into a new variable? – Adam Parkin Sep 25 '12 at 18:01

Tino , Nov 14, 2012 at 15:39

@Adam: b="$(echo $a | tr '[A-Z]' '[a-z]')" – Tino Nov 14 '12 at 15:39

Dennis Williamson , Feb 15, 2010 at 10:31

In Bash 4:

To lowercase

$ string="A FEW WORDS"
$ echo "${string,}"
a FEW WORDS
$ echo "${string,,}"
a few words
$ echo "${string,,[AEIUO]}"
a FeW WoRDS

$ string="A Few Words"
$ declare -l string
$ string=$string; echo "$string"
a few words

To uppercase

$ string="a few words"
$ echo "${string^}"
A few words
$ echo "${string^^}"
A FEW WORDS
$ echo "${string^^[aeiou]}"
A fEw wOrds

$ string="A Few Words"
$ declare -u string
$ string=$string; echo "$string"
A FEW WORDS

Toggle (undocumented, but optionally configurable at compile time)

$ string="A Few Words"
$ echo "${string~~}"
a fEW wORDS
$ string="A FEW WORDS"
$ echo "${string~}"
a FEW WORDS
$ string="a few words"
$ echo "${string~}"
A few words

Capitalize (undocumented, but optionally configurable at compile time)

$ string="a few words"
$ declare -c string
$ string=$string
$ echo "$string"
A few words

Title case:

$ string="a few words"
$ string=($string)
$ string="${string[@]^}"
$ echo "$string"
A Few Words

$ declare -c string
$ string=(a few words)
$ echo "${string[@]}"
A Few Words

$ string="a FeW WOrdS"
$ string=${string,,}
$ string=${string~}
$ echo "$string"

To turn off a declare attribute, use +. For example, declare +c string. This affects subsequent assignments and not the current value.

The declare options change the attribute of the variable, but not the contents. The reassignments in my examples update the contents to show the changes.
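To illustrate (a minimal sketch, assuming Bash 4, in the same style as the examples above; the comments are mine):

$ string="A Few Words"
$ declare -l string      # set the lowercase attribute; the current value is untouched
$ echo "$string"
A Few Words
$ string=$string         # a reassignment applies the attribute
$ echo "$string"
a few words
$ declare +l string      # remove the attribute; the value stays as it is
$ string="A Few Words"   # new assignments are no longer converted
$ echo "$string"
A Few Words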

Edit:

Added "toggle first character by word" ( ${var~} ) as suggested by ghostdog74

Edit: Corrected tilde behavior to match Bash 4.3.

ghostdog74 , Feb 15, 2010 at 10:52

there's also ${string~} – ghostdog74 Feb 15 '10 at 10:52

Hubert Kario , Jul 12, 2012 at 16:48

Quite bizarre, "^^" and ",," operators don't work on non-ASCII characters but "~~" does... So string="łódź"; echo ${string~~} will return "ŁÓDŹ", but echo ${string^^} returns "łóDź". Even in LC_ALL=pl_PL.utf-8 . That's using bash 4.2.24. – Hubert Kario Jul 12 '12 at 16:48

Dennis Williamson , Jul 12, 2012 at 18:20

@HubertKario: That's weird. It's the same for me in Bash 4.0.33 with the same string in en_US.UTF-8 . It's a bug and I've reported it. – Dennis Williamson Jul 12 '12 at 18:20

Dennis Williamson , Jul 13, 2012 at 0:44

@HubertKario: Try echo "$string" | tr '[:lower:]' '[:upper:]' . It will probably exhibit the same failure. So the problem is at least partly not Bash's. – Dennis Williamson Jul 13 '12 at 0:44

Dennis Williamson , Jul 14, 2012 at 14:27

@HubertKario: The Bash maintainer has acknowledged the bug and stated that it will be fixed in the next release. – Dennis Williamson Jul 14 '12 at 14:27

shuvalov , Feb 15, 2010 at 7:13

echo "Hi All" | tr "[:upper:]" "[:lower:]"

Richard Hansen , Feb 3, 2012 at 19:00

+1 for not assuming english – Richard Hansen Feb 3 '12 at 19:00

Hubert Kario , Jul 12, 2012 at 16:56

@RichardHansen: tr doesn't work for me for non-ASCII characters. I do have the correct locale set and locale files generated. Any idea what I could be doing wrong? – Hubert Kario Jul 12 '12 at 16:56

wasatchwizard , Oct 23, 2014 at 16:42

FYI: This worked on Windows/Msys. Some of the other suggestions did not. – wasatchwizard Oct 23 '14 at 16:42

Ignacio Vazquez-Abrams , Feb 15, 2010 at 7:03

tr :
a="$(tr [A-Z] [a-z] <<< "$a")"
AWK :
{ print tolower($0) }
sed :
y/ABCDEFGHIJKLMNOPQRSTUVWXYZ/abcdefghijklmnopqrstuvwxyz/

Sandeepan Nath , Feb 2, 2011 at 11:12

+1 a="$(tr [A-Z] [a-z] <<< "$a")" looks easiest to me. I am still a beginner... – Sandeepan Nath Feb 2 '11 at 11:12

Haravikk , Oct 19, 2013 at 12:54

I strongly recommend the sed solution; I've been working in an environment that for some reason doesn't have tr but I've yet to find a system without sed , plus a lot of the time I want to do this I've just done something else in sed anyway so can chain the commands together into a single (long) statement. – Haravikk Oct 19 '13 at 12:54
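As an illustration of that kind of chaining (a minimal sketch using only POSIX sed features, reusing the y/// transliteration from the answer above):

$ echo "Hi All" | sed -e 's/All/World/' -e 'y/ABCDEFGHIJKLMNOPQRSTUVWXYZ/abcdefghijklmnopqrstuvwxyz/'
hi world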

Dennis , Nov 6, 2013 at 19:49

The bracket expressions should be quoted. In tr [A-Z] [a-z] A , the shell may perform filename expansion if there are filenames consisting of a single letter or nullglob is set. tr "[A-Z]" "[a-z]" A will behave properly. – Dennis Nov 6 '13 at 19:49

Haravikk , Jun 15, 2014 at 10:51

@CamiloMartin it's a BusyBox system where I'm having that problem, specifically Synology NASes, but I've encountered it on a few other systems too. I've been doing a lot of cross-platform shell scripting lately, and with the requirement that nothing extra be installed it makes things very tricky! However I've yet to encounter a system without sed – Haravikk Jun 15 '14 at 10:51

fuz , Jan 31, 2016 at 14:54

Note that tr [A-Z] [a-z] is incorrect in almost all locales. for example, in the en-US locale, A-Z is actually the interval AaBbCcDdEeFfGgHh...XxYyZ . – fuz Jan 31 '16 at 14:54

nettux443 , May 14, 2014 at 9:36

I know this is an oldish post but I made this answer for another site so I thought I'd post it up here:

UPPER -> lower : use python:

b=`echo "print '$a'.lower()" | python`

Or Ruby:

b=`echo "print '$a'.downcase" | ruby`

Or Perl (probably my favorite):

b=`perl -e "print lc('$a');"`

Or PHP:

b=`php -r "print strtolower('$a');"`

Or Awk:

b=`echo "$a" | awk '{ print tolower($1) }'`

Or Sed:

b=`echo "$a" | sed 's/./\L&/g'`

Or Bash 4:

b=${a,,}

Or NodeJS if you have it (and are a bit nuts...):

b=`echo "console.log('$a'.toLowerCase());" | node`

You could also use dd (but I wouldn't!):

b=`echo "$a" | dd  conv=lcase 2> /dev/null`

lower -> UPPER

use python:

b=`echo "print '$a'.upper()" | python`

Or Ruby:

b=`echo "print '$a'.upcase" | ruby`

Or Perl (probably my favorite):

b=`perl -e "print uc('$a');"`

Or PHP:

b=`php -r "print strtoupper('$a');"`

Or Awk:

b=`echo "$a" | awk '{ print toupper($1) }'`

Or Sed:

b=`echo "$a" | sed 's/./\U&/g'`

Or Bash 4:

b=${a^^}

Or NodeJS if you have it (and are a bit nuts...):

b=`echo "console.log('$a'.toUpperCase());" | node`

You could also use dd (but I wouldn't!):

b=`echo "$a" | dd  conv=ucase 2> /dev/null`

Also when you say 'shell' I'm assuming you mean bash but if you can use zsh it's as easy as

b=$a:l

for lower case and

b=$a:u

for upper case.

JESii , May 28, 2015 at 21:42

Neither the sed command nor the bash command worked for me. – JESii May 28 '15 at 21:42

nettux443 , Nov 20, 2015 at 14:33

@JESii both work for me upper -> lower and lower-> upper. I'm using sed 4.2.2 and Bash 4.3.42(1) on 64bit Debian Stretch. – nettux443 Nov 20 '15 at 14:33

JESii , Nov 21, 2015 at 17:34

Hi, @nettux443... I just tried the bash operation again and it still fails for me with the error message "bad substitution". I'm on OSX using homebrew's bash: GNU bash, version 4.3.42(1)-release (x86_64-apple-darwin14.5.0) – JESii Nov 21 '15 at 17:34

tripleee , Jan 16, 2016 at 11:45

Do not use! All of the examples which generate a script are extremely brittle; if the value of a contains a single quote, you have not only broken behavior, but a serious security problem. – tripleee Jan 16 '16 at 11:45

Scott Smedley , Jan 27, 2011 at 5:37

In zsh:
echo $a:u

Gotta love zsh!

Scott Smedley , Jan 27, 2011 at 5:39

or $a:l for lower case conversion – Scott Smedley Jan 27 '11 at 5:39

biocyberman , Jul 24, 2015 at 23:26

Add one more case: echo ${(C)a} #Upcase the first char only – biocyberman Jul 24 '15 at 23:26

devnull , Sep 26, 2013 at 15:45

Using GNU sed :
sed 's/.*/\L&/'

Example:

$ foo="Some STRIng";
$ foo=$(echo "$foo" | sed 's/.*/\L&/')
$ echo "$foo"
some string

technosaurus , Jan 21, 2012 at 10:27

For a standard shell (without bashisms) using only builtins:
uppers=ABCDEFGHIJKLMNOPQRSTUVWXYZ
lowers=abcdefghijklmnopqrstuvwxyz

lc(){ #usage: lc "SOME STRING" -> "some string"
    i=0
    while ([ $i -lt ${#1} ]) do
        CUR=${1:$i:1}
        case $uppers in
            *$CUR*)CUR=${uppers%$CUR*};OUTPUT="${OUTPUT}${lowers:${#CUR}:1}";;
            *)OUTPUT="${OUTPUT}$CUR";;
        esac
        i=$((i+1))
    done
    echo "${OUTPUT}"
}

And for upper case:

uc(){ #usage: uc "some string" -> "SOME STRING"
    i=0
    while ([ $i -lt ${#1} ]) do
        CUR=${1:$i:1}
        case $lowers in
            *$CUR*)CUR=${lowers%$CUR*};OUTPUT="${OUTPUT}${uppers:${#CUR}:1}";;
            *)OUTPUT="${OUTPUT}$CUR";;
        esac
        i=$((i+1))
    done
    echo "${OUTPUT}"
}

Dereckson , Nov 23, 2014 at 19:52

I wonder if you didn't let some bashism in this script, as it's not portable on FreeBSD sh: ${1:$...}: Bad substitution – Dereckson Nov 23 '14 at 19:52

tripleee , Apr 14, 2015 at 7:09

Indeed; substrings with ${var:1:1} are a Bashism. – tripleee Apr 14 '15 at 7:09

Derek Shaw , Jan 24, 2011 at 13:53

Regular expression

I would like to take credit for the command I wish to share, but the truth is I obtained it for my own use from http://commandlinefu.com. It has the advantage that if you cd to any directory within your own home folder, it will change all files and folders to lower case recursively; please use with caution. It is a brilliant command line fix and especially useful for those multitudes of albums you have stored on your drive.

find . -depth -exec rename 's/(.*)\/([^\/]*)/$1\/\L$2/' {} \;

You can specify a directory or a full path in place of the dot (.) after find, which denotes the current directory.
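For instance, pointing the same command at a specific directory is just a matter of replacing the dot (the path below is a hypothetical example):

find /path/to/albums -depth -exec rename 's/(.*)\/([^\/]*)/$1\/\L$2/' {} \;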

I hope this solution proves useful. The one thing this command does not do is replace spaces with underscores - oh well, another time perhaps.

Wadih M. , Nov 29, 2011 at 1:31

thanks for commandlinefu.com – Wadih M. Nov 29 '11 at 1:31

John Rix , Jun 26, 2013 at 15:58

This didn't work for me for whatever reason, though it looks fine. I did get this to work as an alternative though: find . -exec /bin/bash -c 'mv {} `tr [A-Z] [a-z] <<< {}`' \; – John Rix Jun 26 '13 at 15:58

Tino , Dec 11, 2015 at 16:27

This needs prename from perl : dpkg -S "$(readlink -e /usr/bin/rename)" gives perl: /usr/bin/prename – Tino Dec 11 '15 at 16:27

c4f4t0r , Aug 21, 2013 at 10:21

In bash 4 you can use typeset

Example:

A="HELLO WORLD"
typeset -l A=$A

community wiki, Jan 16, 2016 at 12:26

Pre Bash 4.0

Bash Lower the Case of a string and assign to variable

VARIABLE=$(echo "$VARIABLE" | tr '[:upper:]' '[:lower:]') 

echo "$VARIABLE"

Tino , Dec 11, 2015 at 16:23

No need for echo and pipes: use $(tr '[:upper:]' '[:lower:]' <<<"$VARIABLE") – Tino Dec 11 '15 at 16:23

tripleee , Jan 16, 2016 at 12:28

@Tino The here string is also not portable back to really old versions of Bash; I believe it was introduced in v3. – tripleee Jan 16 '16 at 12:28

Tino , Jan 17, 2016 at 14:28

@tripleee You are right, it was introduced in bash-2.05b - however that's the oldest bash I was able to find on my systems – Tino Jan 17 '16 at 14:28

Bikesh M Annur , Mar 23 at 6:48

You can try this
s="Hello World!" 

echo $s  # Hello World!

a=${s,,}
echo $a  # hello world!

b=${s^^}
echo $b  # HELLO WORLD!

ref : http://wiki.workassis.com/shell-script-convert-text-to-lowercase-and-uppercase/

Orwellophile , Mar 24, 2013 at 13:43

For Bash versions earlier than 4.0, this version should be fastest (as it doesn't fork/exec any commands):
function string.monolithic.tolower
{
   local __word=$1
   local __len=${#__word}
   local __char
   local __octal
   local __decimal
   local __result

   for (( i=0; i<__len; i++ ))
   do
      __char=${__word:$i:1}
      case "$__char" in
         [A-Z] )
            printf -v __decimal '%d' "'$__char"
            printf -v __octal '%03o' $(( $__decimal ^ 0x20 ))
            printf -v __char \\$__octal
            ;;
      esac
      __result+="$__char"
   done
   REPLY="$__result"
}

technosaurus's answer had potential too, although it didn't run properly for me.

Stephen M. Harris , Mar 22, 2013 at 22:42

If using v4, this is baked-in. If not, here is a simple, widely applicable solution. Other answers (and comments) on this thread were quite helpful in creating the code below.
# Like echo, but converts to lowercase
echolcase () {
    tr [:upper:] [:lower:] <<< "${*}"
}

# Takes one arg by reference (var name) and makes it lowercase
lcase () { 
    eval "${1}"=\'$(echo ${!1//\'/"'\''"} | tr [:upper:] [:lower:] )\'
}

Notes:

JaredTS486 , Dec 23, 2015 at 17:37

In spite of how old this question is, and similar to this answer by technosaurus, I had a hard time finding a solution that was portable across most platforms (that I use) as well as older versions of bash. I have also been frustrated with arrays, functions and use of prints, echos and temporary files to retrieve trivial variables. This works very well for me so far, so I thought I would share. My main testing environments are:
  1. GNU bash, version 4.1.2(1)-release (x86_64-redhat-linux-gnu)
  2. GNU bash, version 3.2.57(1)-release (sparc-sun-solaris2.10)
lcs="abcdefghijklmnopqrstuvwxyz"
ucs="ABCDEFGHIJKLMNOPQRSTUVWXYZ"
input="Change Me To All Capitals"
for (( i=0; i<"${#input}"; i++ )) ; do :
    for (( j=0; j<"${#lcs}"; j++ )) ; do :
        if [[ "${input:$i:1}" == "${lcs:$j:1}" ]] ; then
            input="${input/${input:$i:1}/${ucs:$j:1}}" 
        fi
    done
done

A simple C-style for loop iterates through the strings. For the line below, if you have not seen anything like this before, this is where I learned it. The line checks whether the char ${input:$i:1} matches a lowercase char ${lcs:$j:1} and, if so, replaces it with the corresponding uppercase char ${ucs:$j:1} and stores the result back into input.

input="${input/${input:$i:1}/${ucs:$j:1}}"

Gus Neves , May 16 at 10:04

Many answers use external programs, which is not really using Bash.

If you know you will have Bash 4 available you should really just use the ${VAR,,} notation (it is easy and cool). For Bash before 4 (my Mac still uses Bash 3.2, for example), I used the corrected version of @ghostdog74's answer to create a more portable version.

One you can call as lowercase 'my STRING' and get a lowercase version. I read comments about setting the result to a var, but that is not really portable in Bash, since we can't return strings. Printing it is the best solution. It is easy to capture with something like var="$(lowercase $str)".

How this works

The way this works is by getting the ASCII integer representation of each char with printf and then adding 32 (upper to lower) or subtracting 32 (lower to upper). Then printf is used again to convert the number back to a char. From 'A' to 'a' there is a difference of 32.

Using printf to explain:

$ printf "%d\n" "'a"
97
$ printf "%d\n" "'A"
65

97 - 65 = 32

And this is the working version with examples.
Please note the comments in the code, as they explain a lot of stuff:

#!/bin/bash

# lowerupper.sh

# Prints the lowercase version of a char
lowercaseChar(){
    case "$1" in
        [A-Z])
            n=$(printf "%d" "'$1")
            n=$((n+32))
            printf \\$(printf "%o" "$n")
            ;;
        *)
            printf "%s" "$1"
            ;;
    esac
}

# Prints the lowercase version of a sequence of strings
lowercase() {
    word="$@"
    for((i=0;i<${#word};i++)); do
        ch="${word:$i:1}"
        lowercaseChar "$ch"
    done
}

# Prints the uppercase version of a char
uppercaseChar(){
    case "$1" in
        [a-z])
            n=$(printf "%d" "'$1")
            n=$((n-32))
            printf \\$(printf "%o" "$n")
            ;;
        *)
            printf "%s" "$1"
            ;;
    esac
}

# Prints the uppercase version of a sequence of strings
uppercase() {
    word="$@"
    for((i=0;i<${#word};i++)); do
        ch="${word:$i:1}"
        uppercaseChar "$ch"
    done
}

# The functions will not add a new line, so use echo or
# append it if you want a new line after printing

# Printing stuff directly
lowercase "I AM the Walrus!"$'\n'
uppercase "I AM the Walrus!"$'\n'

echo "----------"

# Printing a var
str="A StRing WITH mixed sTUFF!"
lowercase "$str"$'\n'
uppercase "$str"$'\n'

echo "----------"

# Not quoting the var should also work, 
# since we use "$@" inside the functions
lowercase $str$'\n'
uppercase $str$'\n'

echo "----------"

# Assigning to a var
myLowerVar="$(lowercase $str)"
myUpperVar="$(uppercase $str)"
echo "myLowerVar: $myLowerVar"
echo "myUpperVar: $myUpperVar"

echo "----------"

# You can even do stuff like
if [[ 'option 2' = "$(lowercase 'OPTION 2')" ]]; then
    echo "Fine! All the same!"
else
    echo "Ops! Not the same!"
fi

exit 0

And the results after running this:

$ ./lowerupper.sh 
i am the walrus!
I AM THE WALRUS!
----------
a string with mixed stuff!
A STRING WITH MIXED STUFF!
----------
a string with mixed stuff!
A STRING WITH MIXED STUFF!
----------
myLowerVar: a string with mixed stuff!
myUpperVar: A STRING WITH MIXED STUFF!
----------
Fine! All the same!

This should only work for ASCII characters, though.

For me it is fine, since I know I will only pass ASCII chars to it.
I am using this for some case-insensitive CLI options, for example.

nitinr708 , Jul 8, 2016 at 9:20

To store the transformed string into a variable, the following worked for me (converting $SOURCE_NAME to $TARGET_NAME):
TARGET_NAME="`echo $SOURCE_NAME | tr '[:upper:]' '[:lower:]'`"

[Oct 17, 2017] Use Spare Older Workers to Overcome 'Labour Shortages' naked capitalism

Notable quotes:
"... By Leith van Onselen. Originally published at MacroBusiness ..."
"... "remains low by historical standards" ..."
"... There's a myth that innovation comes from the 20 something in their basement, but that's just not the case. ..."
Oct 17, 2017 | www.nakedcapitalism.com

Yves here. On the one hand, as someone who is getting to be pretty long in tooth, I'm not sure about calling un and under-employed older workers "spare". But when the alternative is being thrown on the trash heap, maybe that isn't so unflattering.

Even though this analysis is from Australia, most if not all of its findings would almost certainly prove out in the US. However, there is a whole 'nother set of issues here. Australia is 85% urban, with most of the population living in or near four large cities. So its labor mobility issues are less pronounced than here. Moreover, a lot of the whinging in the US about worker shortages, as even readers of the Wall Street Journal regularly point out in its comment section, is:

1. Not being willing to pay enough to skilled workers, which includes not being willing to pay them to relocate

2. Not being willing to train less skilled workers, as companies once did as a matter of course

By Leith van Onselen. Originally published at MacroBusiness

A few weeks back, the Benevolent Society released a report which found that age-related discrimination is particularly rife in the workplace, with over a quarter (29%) of survey respondents stating they had been turned down for a job because of their old age, whereas 14% claimed they had been denied a promotion because of their old age.

Today, the Regional Australia Institute (RAI) has warned that Australia is facing a pension crisis unless employers stop their "discrimination" against older workers. From The ABC :

[RAI] has warned the Federal Government's pension bill would rise from $45 billion to $51 billion within three years, unless efforts were made to help more mature workers gain employment, particularly in regional communities.

Chief executive Jack Archer said continued unemployment of people older than 55 would cut economic growth and put a greater strain on public resources.

"We hear that there is a lot of people who would like to work, who would love to stay in the workforce either part-time or full-time even though they're in their late 50s, 60s and even into their 70s," he said.

"But we're not doing a very good job of giving them the training, giving them the incentives around the pension, and working with employers to stop the discrimination around employing older workers"

"It basically means you've got a lot of talent on the bench, a lot of people who could be involved and contributing who are sitting around homes and wishing they were doing something else," he said

Mr Archer said as the population aged the workforce shrank, and that risked future economic growth.

But he said that could be reversed provided employers embraced an older workforce

"[When] those people are earning [an income], their pension bills will either disappear or be much lower and the government will get a benefit from that."

For years the growth lobby and the government has told us that Australia needs to run high levels of immigration in order to alleviate so-called 'skills shortages' and to mitigate an ageing population. This has come despite the Department of Employment showing that Australia's skills shortage "remains low by historical standards" and Australia's labour underutilisation rate tracking at high levels:

Economic models are often cited as proof that a strong immigration program is 'good' for the economy because they show that real GDP per capita is moderately increased via immigration, based on several dubious assumptions.

The most dubious of these assumptions is that population ageing will necessarily result in fewer people working, which will subtract from per capita GDP (due to the ratio of workers to dependents falling).

Leaving aside the fact that the assumed benefit to GDP per capita from immigration is only transitory, since migrants also age (thereby requiring an ever-bigger immigration intake to keep the population age profile from rising), it is just as likely that age-specific workforce participation will respond to labour demand, resulting in fewer people being unemployed. This is exactly what has transpired in Japan where an ageing population has driven the unemployment rate down to only 2.8% – the lowest level since the early-1990s:

The ABS last month revealed that more Australians are working past traditional retirement age, thereby mitigating concerns that population ageing will necessarily reduce the employment-to-population ratio:

Clearly, however, there is much further scope to boost workforce participation among older workers.

Rather than relying on mass immigration to fill phantom 'labour shortages' – in turn displacing both young and older workers alike – the more sensible policy option is to moderate immigration and instead better utilise the existing workforce as well as use automation to overcome any loss of workers as the population ages – as has been utilised in Japan.

It's worth once again highlighting that economists at MIT recently found that there is absolutely no relationship between population ageing and economic decline. To the contrary, population ageing seems to have been associated with improvements in GDP per capita, thanks to increased automation:

If anything, countries experiencing more rapid aging have grown more in recent decades. We show that since the early 1990s or 2000s, the periods commonly viewed as the beginning of the adverse effects of aging in much of the advanced world, there is no negative association between aging and lower GDP per capita; on the contrary, the relationship is significantly positive in many specifications.

The last thing that Australia should be doing is running a mass immigration program which, as noted many times by the Productivity Commission, cannot provide a long-term solution to ageing, and places increasing strains on infrastructure, housing and the natural environment.

The sustainable 'solution' to population ageing is to better utilise the existing workforce, where significant spare capacity exists.

Enquiring Mind , October 17, 2017 at 10:26 am

At what point might an impatient constituency demand greater accountability by its elected representatives? In the business world, the post-2000 accounting scandals like Enron resulted in legislation to make company execs sign off on financial statements under threat of harsh personal penalties for misrepresentation. If legislators were forced by constituents to enact similar legislation about their own actions, the transparency could be very enlightening and a type of risk reduction due to acknowledgement of material factors. Imagine seeing in print the real reasons for votes, the funding sources behind those votes and prospect of jail time for misrepresentation about what is just their damn job. Call it Truth-In-Legislating, similar to the prior Truth-In-Lending act.

Vatch , October 17, 2017 at 11:29 am

It's a nice idea, but I don't think that very many executives have been penalized under the Sarbanes Oxley Act. Jamie Dimon certainly wasn't penalized for the actions of the London Whale. I guess we'll see what happens in the near future to the executives of Wells Fargo. I suspect that a Truth-In-Legislating law would be filled with loopholes or would be hampered by enforcement failures, like current Congressional ethics rules and the Sarbanes Oxley Act.

sgt_doom , October 17, 2017 at 2:11 pm

At what point might an impatient constituency demand greater accountability by its elected representatives?

At that point when they start shooting them (as they did in Russia in the very early 1900s, or lop their heads off, as they once did in France).

Personally, I'll never work for any Ameritard corporation ever again, as real innovation is not allowed, and the vast majority are all about financialization in some form or other!

My work life the past thirty years became worse and worse and worse, in direct relation to the majority of others, and my last jobs were beyond commenting up.

My very last position, which was in no manner related to my experience, education, skill set and talents -- like too many other American workers -- ended with a most tortuous layoff: the private equity firm which was the owner in a failed "pump and dump" brought in a "toxic work environment specialist" whose job was to advise the sleazoid senior executives (and by that time I was probably one of only four actual employee workers there; they had hired a whole bunch of executives, though) on how to create a negative work environment to convince us to leave instead of merely laying us off (it worked for two, but not for the last lady there or myself).

The American workplace sucks big time as evidenced by their refusal to raise wages while forever complaining about their inability to find skilled employees -- they are all criminals today!

RUKidding , October 17, 2017 at 10:55 am

Interesting article and thanks.

I lived and worked in Australia in the late '70s and early '80s. Times were different. Back then, the government jobs came with mandatory retirement. I believe (but could be wrong) that it was at 63, but you could request staying until 65 (required approval). After that, one could continue working in the private sector, if you could find a job.

The population was much less than it is now. I believe the idea was to make room for the younger generation coming up. Back then, government workers, as well as many private sector workers, had defined benefit pension plans. So retiring younger typically worked out ok.

I had one friend who continued working until about 70 because she wanted to; liked her job; and wasn't interested in retiring. However, I knew far more people who were eager to stop at 63. But back then, it appeared to me that they had the financial means to do so without much worry.

Things have changed since then. More of my friends are putting off retirement bc they need the money now. Plus defined benefit pension plans have mostly been dispensed with and replaced by, I believe (I'm not totally clear on this), the Aussie version of a 401 (k) (someone can correct me if I'm wrong).

What the article proposes makes sense. Of course here in the USA, older workers/job seekers face a host of discriminatory practices, especially for the better paying jobs. Nowadays, though, US citizens in their golden years can sell their house, buy an RV, and become itinerant workers – sometimes at back breaking labor, such as harvesting crops or working at an Amazon gulag – for $10 an hour. Yippee kay-o kay-aaay!

So let us also talk about cutting Medicare for all of those lazy slacker Seniors out there. Woo hoo!

jrs , October 17, 2017 at 1:34 pm

There are really two issues:
1) for those whom age discrimination in employment is hitting in their 50s or even younger, before anyone much is retiring, it needs to be combatted
2) eventually (sometimes in their 60's and really should be at least by 65) people ought to be allowed to retire and with enough money to not be in poverty. This work full time until you drop garbage is just that (it's not as if 70 year olds can even say work 20 hours instead, no it's the same 50+ hours or whatever as everyone else is doing). And most people won't live that much longer, really they won't, U.S. average lifespans aren't that long and falling fast. So it really is work until you die that is being pushed if people aren't allowed to retire sometime in their 60s. Some people have good genes and good luck and so on (they may also have a healthy lifestyle but sheer luck plays a large role), and will live far beyond that, but averages

RUKIdding , October 17, 2017 at 4:44 pm

Agree with you about the 2 issues.

Working past 65 is one of those things where it just depends. I know people who are happily (and don't "really" need the money) working past 65 bc they love their jobs and they're not taking a toll on their health. They enjoy the socialization at work; are intellectually stimulated; and are quite happy. That's one issue.

But when people HAVE TO work past 65 – and I know quite a few in this category – when it starts taking a toll on their health, that is truly bad. And I can reel off several cases that I know of personally. It's just wrong.

Whether you live much longer or not is sort of up to fate, no matter what. But yes, if work is taking a toll on your heath, then you most likely won't live as long.

cocomaan , October 17, 2017 at 11:06 am

In January, economists from MIT published a paper, entitled Secular Stagnation? The Effect of Aging on Economic Growth in the Age of Automation, which showed that there is absolutely no relationship between population aging and economic decline. To the contrary, population aging seems to have been associated with improvements in GDP per capita, thanks to increased automation:

From the cited article.

I don't know why it never occurred to me before, but there's no reason to ditch your most knowledgeable, most skilled workers toward the eve of their careers except if you don't want to pay labor costs. Which we know that most firms do not, in their mission for profit for shareholders or the flashy new building or trying to Innuhvate .

There's a myth that innovation comes from the 20 something in their basement, but that's just not the case. Someone who has, for instance, overseen 100 construction projects building bridges needs to be retained, not let go. Maybe they can't lift the sledge anymore, but I'd keep them on as long as possible.

Good food for thought! I enjoyed this piece.

HotFlash , October 17, 2017 at 4:19 pm

There's a myth that innovation comes from the 20 something in their basement, but that's just not the case.

Widely held by 20 somethings. Maybe it's just one of those Oedipus things.

fresno dan , October 17, 2017 at 11:08 am

1. Not being willing to pay enough to skilled workers, which includes not being willing to pay them to relocate

2. Not being willing to train less skilled workers, as companies once did as a matter of course

3. older workers have seen all the crap and evil management has done, and are usually in a much better position than young, less established employees to take effective action against it

Disturbed Voter , October 17, 2017 at 1:26 pm

This. Don't expect rational actors, in management or labor. If everyone was paid the same, regardless of age or training or education or experience etc then the financial incentives for variant outcomes would decrease. Except for higher health costs for older workers. For them, we could simply ban employer provide health insurance then that takes that variable out of the equation too. So yes, the ideal is a rational Marxism or the uniformity of the hive-mind-feminism. While we would have "from each according to their ability, to each according to their need" we will have added it as an axiom that all have the same need. And a whip can encourage the hoi polloi to do their very best.

Jeremy Grimm , October 17, 2017 at 2:25 pm

Fully agree! To your list I would add a corollary to your item #3 -- older workers having seen all the crap and evil management has done are more likely to inspire other employees to feel and act with them. -- This corollary is obvious but I think it bears stating for emphasis of the point.

I believe your whole list might be viewed as symptoms resulting from the concept of workers as commodity -- fungible as cogs on a wheel. Young and old alike are dehumanized.

The boss of the branch office of the firm I last worked for before I retired constantly emphasized how each of us must remain "fungible" [he's who introduced me to this word] if we wanted to remain employed. The firm would win contracts using one set of workers in its bids and slowly replace them with new workers providing the firm a higher return per hour billed to the client. I feel very lucky I managed to remain employed -- to within a couple of years of the age when I could apply for Medicare. [Maybe it's because I was too cowed to make waves and avoided raises as best I could.]

[I started my comment considering the idea of "human capital" but ran into trouble with that concept. Shouldn't capital be assessed in terms of its replacement costs and its capacity for generating product or other gain? I had trouble working that calculus into the way firms treat their employees and decided "commodity" rather than "capital" better fit how workers were regarded and treated.]

BoycottAmazon , October 17, 2017 at 11:16 am

"skills vs. demand imbalance" not labor shortage. Capital wants to tip the scale the other way, but isn't willing to invest the money to train the people, per a comment I made last week. Plenty of unemployed or under-employed even in Japan, much less Oz.

Keeping the elderly, who already have the skills, in the work place longer is a way to put off making the investments. Getting government to tax the poor for their own training is another method. Exploiting poor nations education systems by importing skills yet another.

Some businesses hope to develop skills that only cost motive power (electric) and minimal maintenance, and are far less capital intensive and quicker to market than the current primary source's 18 years. Capitalism on an infinite resource base will eat itself, but even capitalism with finite resources will self-destruct in the end.

Jim Haygood , October 17, 2017 at 1:35 pm

Importantly, the chart labeled as Figure 2 uses GDP per capita on the y-axis.

Bearing in mind that GDP growth is composed of labor force growth times productivity, emerging economies that are growing faster than the rich world in both population and GDP look more anemic on a per capita basis, allowing us rich country denizens to feel better about our good selves. :-)

But in terms of absolute GDP growth, things ain't so bright here in the Homeland. Both population and productivity growth are slowing. Over the past two-thirds century, the trend in GDP groaf is relentlessly down, even as debt rises in an apparent attempt to maintain unsustainable living standards. Chart (viewer discretion advised):

https://gailtheactuary.files.wordpress.com/2016/02/us-annual-gdp-growth-rate-2015.png

Van Onselen doesn't address the rich world's busted pension systems. To the extent that they contain a Ponzi element premised on endless growth, immigration would modestly benefit them by adding new victims workers to support the greying masses of doddering Boomers.

Will you still need me
Will you still feed me
When I'm sixty-four?

-- The Beatles

Arthur Wilke , October 17, 2017 at 1:44 pm

There's been an increase in the employment of older people in the U.S. population. To provide a snapshot, below are three tables referring to the U.S. by age cohorts of 1) the total population, 2) employment and 3) employment-population ratios (percent), based on Bureau of Labor Statistics weightings for population estimates and compiled in the Merge Outgoing Rotation Groups (MORG) dataset by the National Bureau of Economic Research (NBER) from the monthly Current Population Survey (CPS).

The portion of the population 16 to 54 has declined while the portion over 54 has increased.
1. Percent Population in Age Cohorts: 1986 & 2016

1986 2016 AGE
18.9 15.2 16-24
53.7 49.6 25-54
12.2 16.3 55-64
9.4 11.2 65-74
5.8 7.7 75 & OVER
100.0 100.0 ALL

The portion of the population 16 to 54 that is employed has declined while the portion over 54 has increased.

2 Percent Employed in Age Cohorts: 1986 & 2016

1986 2016 AGE
18.5 12.5 16-24
68.4 64.7 25-54
10.4 16.9 55-64
2.3 4.8 65-74
0.4 1.0 75 & OVER
100.0 100.0 ALL

The employment-population ratios (percents) show significant declines for those under 25 while increases for those 55 and above.

3. Age-Specific Employment Population Ratios (Percents)

1986 2016 AGE
59.5 49.4 16-24
77.3 77.9 25-54
51.8 61.8 55-64
14.8 25.9 65-74
3.8 7.9 75 & OVER
60.7 59.7 ALL

None of the above data refute claims about age and experience inequities. Rather these provide a base from which to explore such concerns. Because MORG data are representative samples with population weightings, systematic contingency analyses are challenging.

In the 30 year interval of these data there have been changes in population and employment by education status, gender, race, citizenship status along with industry and occupation, all items of which are found in the publicly available MORG dataset.

AW

Yves Smith Post author , October 17, 2017 at 4:54 pm

I think you are missing the point. Life expectancy at birth has increased by nearly five years since 1986. That renders simple comparisons of labor force participation less meaningful. The implication is that many people are not just living longer but are in better shape in their later middle age. Look at the dramatic drop in labor force participation from the 25-54 age cohort v. 55 to 64. How can so few people in that age group be working given that even retiring at 65 is something most people cannot afford? And the increase over time in the current 55-64 age cohort is significantly due to the entry of women into the workplace. Mine was the first generation where that became widespread.

The increase in the over 65 cohort reflects desperation. Anyone who can work stays working.

Arthur Wilke , October 17, 2017 at 6:27 pm

Even if life-expectancy is increasing due to improved health, the percentage of those in older cohorts who are working is increasing at an even faster rate. If a ratio is 6/8 for a category and goes up to 10/12, the category has increased (8 to 12, or 50%), the subcategory has increased (6 to 10, or 67%), and the ratios go from 6/8 (75/100) to 10/12 (83.3/100).

I assume you are referencing the employment-population (E/P) ratio when noting "the dramatic drop in labor force participation from the 25-54 age cohort v. 55 to 64." However the change in the E/P ratio for 25-54 year olds was virtually unchanged (77.3/100 in 1986 to 77.9/100 in 2016) and for the 55-64 year olds the E/P ratio INCREASED significantly, from 51.8/100 in 1986 to 61.8/100 in 2016.

You query: "How can so few people in that age group be working given that even retiring at 65 is something most people cannot afford?" That's a set of concerns the data I've compiled cannot address. It would take more time to see if an empirical answer could be constructed, something that doesn't lend itself to making a timely, empirically based comment. The data I compiled was done after reading the original post.

You note: ". . . [T]he increase over time in the current 55-64 age cohort is significantly due to the entry of women into the workplace." Again, I didn't compute the age-gender specific E/P ratios. I can do that if there's interest. The OVERALL female E/P ratio (from FRED) did not significantly increase from December 1986 (51.7/100) to December 2016 (53.8/100).

You write: "The increase in the over 65 cohort reflects desperation. Anyone who can work stays working." Again, the data I was using provided me no basis for this interpretation. I suspect that the MORG data can provide some support for that interpretation. However, based on your comments about longer life expectancy, it's likely that a higher proportion of those in the professional-middle class or in the upper-middle class category Richard Reeves writes about (Dream Hoarders) were able and willing to continue working. For a time in higher education some institutions offered incentives for older faculty to continue working, whereby they could continue to receive a salary and, upon becoming eligible for Social Security, draw on that benefit. No doubt many, many vulnerable older people, including workers laid off in the wake of the Great Recession and otherwise burdened, lengthened their working lives or sought employment.

Again the MORG data can get somewhat closer to your concerns and interests, but whether this is the forum is a challenge given the reporting-comment cycle which guides this excellent site.

paul , October 17, 2017 at 1:54 pm

Institutional memory (perhaps, wisdom) is a positive threat to institutional change (for the pillage).

In my experience,those in possession of it are encouraged/discouraged/finally made to go.

The break up of British Rail is a salient,suppurating example.

The break up of National Health Service is another.

It would be easy to go on, I just see it as the long year zero the more clinical sociopaths desire.

Livius Drusus , October 17, 2017 at 3:20 pm

I don't understand how the media promotes the "society is aging, we need more immigrants to avoid a labor shortage" argument and the "there will be no jobs in the near future due to automation, there will be a jobs shortage" argument at the same time. Dean Baker has discussed this issue:

http://cepr.net/publications/op-eds-columns/badly-confused-economics-the-debate-on-automation

In any event, helping to keep older workers in the workforce can be a good thing. Some people become physically inactive after retirement and their social networks decline which can cause depression and loneliness. Work might benefit some people who would otherwise sink into inactivity and loneliness.

Of course, results might vary based on individual differences and those who engaged in hard physical labor will likely have to retire earlier due to wear and tear on their bodies.

flora , October 17, 2017 at 7:42 pm

Increase in life expectancy is greatly influenced by a decrease in childhood mortality. People are living longer because they aren't dying in large numbers in childhood anymore in the US. So many arguments that start out "we're living longer, so something" confuse a reduction in childhood mortality with how long one can expect to live to in old age, based on the actuarial charts. Pols who want to cut SS or increase the retirement age find this confusion very useful.

" Life expectancy at birth is very sensitive to reductions in the death rates of children, because each child that survives adds many years to the amount of life in the population. Thus, the dramatic declines in infant and child mortality in the twentieth century were accompanied by equally stunning increases in life expectancy. "

http://www.pbs.org/fmc/timeline/dmortality.htm

TarheelDem , October 17, 2017 at 5:24 pm

I've noticed ever since the 1990s that "labor shortage" is a signal for cost-cutting measures that trigger a recession. Which then becomes the excuse for shedding workers and really getting the recession on.

It is not just older workers who are spare. There are other forms of discrimination that could fall by the wayside if solving the "labor shortage" was the sincere objective.

JBird , October 17, 2017 at 5:57 pm

Often productivity, sales, and profits decrease with those cost cuttings, which justifies further cuts, which decrease productivity, sales, and profits, which justifies ...

It's a pattern I first noticed in the 1990s and looking back in the 80s too. It's like some malevolent MBAs went out and convinced the whole of American middle and senior business management that this was the Way to do it. It's like something out of the most hidebound, nonsensical ideas of Maoism and Stalinism as something that could not fail but only be failed. It is right out of the Chicago Boys' economics playbook. Thirty-five years later and the Way still hasn't succeeded, but they're still trying not to fail it.

SpringTexan , October 17, 2017 at 6:59 pm

Love your reflections. Yeah, it's like a religion that they can't pay more, can't train, must cut people till they are working to their max at ordinary times (so have no slack for crises), etc. etc., and that it doesn't work doesn't change the faith in it AT ALL.

JBird , October 17, 2017 at 5:42 pm

This is ranting, but most jobs can be done at most ages. If you want someone to be a SEAL or do 12 hours of farm labor, no, of course not, but for just about everything else, what's the problem?

All this "we have a skilled labor shortage" or "we have a labor surplus" or "the workers are all lazy/stupid" narratives" and "it's the unions' fault" and "the market solves everything" and the implicit "we are a true meritocracy and the losers are waste who deserve their pain" and my favorite of the "Job creators do make jobs" being said, and/or believed all at the same time is insanity made mainstream.

Sometimes I think whoever is running things are told they have to drink the Draught of UnWisdom before becoming the elites.

Dan , October 17, 2017 at 7:12 pm

So I'm a middle aged fella – early thirties – and have to admit that in my industry I find that most older workers are a disaster. I'm in tech and frankly find that most older workers are a detriment simply from being out of date. While I sympathize, in some cases experience can be a minus rather than a plus. The willingness to try new things and stay current with modern technologies/techniques just isn't there for the majority of tech workers that are over the hill.

flora , October 17, 2017 at 7:46 pm

Well, if you're lucky, your company won't replace you with a cheaper H-1B visa holder or outsource your job to the sub-continent before you're 40.

[Oct 16, 2017] Indenting Here-Documents - bash Cookbook

Oct 16, 2017 | www.safaribooksonline.com

Indenting Here-Documents

Problem

The here-document is great, but it's messing up your shell script's formatting. You want to be able to indent for readability.

Solution

Use <<- and then you can use tab characters (only!) at the beginning of lines to indent this portion of your shell script.

   $ cat myscript.sh
        ...
             grep $1 <<-'EOF'
                lots of data
                can go here
                it's indented with tabs
                to match the script's indenting
                but the leading tabs are
                discarded when read
                EOF
            ls
        ...
        $
Discussion

The hyphen just after the << is enough to tell bash to ignore the leading tab characters. This is for tab characters only and not arbitrary white space. This is especially important with the EOF or any other marker designation. If you have spaces there, it will not recognize the EOF as your ending marker, and the "here" data will continue through to the end of the file (swallowing the rest of your script). Therefore, you may want to always left-justify the EOF (or other marker) just to be safe, and let the formatting go on this one line.
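A minimal sketch of the tab-stripping behavior described above (the indented body lines and the closing EOF are meant to be indented with real tab characters, not spaces):

$ grep data <<-'EOF'
	some data here
	nothing on this line
	EOF
some data here
$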

[Oct 16, 2017] Indenting bourne shell here documents

Oct 16, 2017 | prefetch.net

The Bourne shell provides here documents to allow blocks of data to be passed to a process through STDIN. The typical format for a here document is something similar to this:

command <<ARBITRARY_TAG
data to pass 1
data to pass 2
ARBITRARY_TAG

This will send the data between the ARBITRARY_TAG statements to the standard input of the process. In order for this to work, you need to make sure that the data is not indented. If you indent it for readability, you will get a syntax error similar to the following:

./test: line 12: syntax error: unexpected end of file

To allow your here documents to be indented, you can append a "-" to the end of the redirection strings like so:

if [ "${STRING}" = "SOMETHING" ]
then
        somecommand <<-EOF
        this is a string1
        this is a string2
        this is a string3
        EOF
fi

You will need to use tabs to indent the data, but that is a small price to pay for added readability. Nice!
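The same form also works when capturing an indented here document into a variable, for example (a minimal sketch; the body and the closing EOF are again indented with tabs):

message=$(cat <<-EOF
	this is a string1
	this is a string2
	EOF
)
echo "$message"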

[Oct 15, 2017] Two Cheers For Trump's Immigration Proposal Especially "Interior Enforcement" - The Unz Review

Notable quotes:
"... In the 1970s a programming shop was legacy American, with only a thin scattering of foreigners like myself. Twenty years later programming had been considerably foreignized , thanks to the H-1B visa program. Now, twenty years further on, I believe legacy-American programmers are an endangered species. ..."
"... So a well-paid and mentally rewarding corner of the middle-class job market has been handed over to foreigners -- for the sole reason, of course, that they are cheaper than Americans. The desire for cheap labor explains 95 percent of U.S. immigration policy. The other five percent is sentimentality. ..."
"... Now they are brazen in their crime: you have heard, I'm sure, those stories about American workers being laid off, with severance packages conditional on their helping train their cheaper foreign replacements. That's our legal ..."
"... A "merit-based" points system won't fix that. It will quickly and easily be gamed by employers to lay waste yet more middle-class occupational zones for Americans. If it was restricted to the higher levels of "merit," we would just be importing a professional overclass of foreigners, most East and South Asians, to direct the labors of less-meritorious legacy Americans. How would that ..."
"... Measured by the number of workers per year, the largest guestworker program in the entire immigration system is now student visas through the Optional Practical Training program (OPT). Last year over 154,000 aliens were approved to work on student visas. By comparison, 114,000 aliens entered the workforce on H-1B guestworker visas. ..."
"... A History of the 'Optional Practical Training' Guestworker Program , ..."
"... on all sorts of subjects ..."
"... for all kinds of outlets. (This ..."
"... no longer includes ..."
"... National Review, whose editors had some kind of tantrum and ..."
"... and several other ..."
"... . He has had two books published by VDARE.com com: ..."
"... ( also available in Kindle ) and ..."
"... Has it ever occurred to anyone other than me that the cost associated with foreign workers using our schools and hospitals and pubic services for free, is more than off-set by the cheap price being paid for grocery store items like boneless chicken breast, grapes, apples, peaches, lettuce etc, which would otherwise be prohibitively expensive even for the wealthy? ..."
Oct 15, 2017 | www.unz.com

Headliner of the week for immigration patriots was President Trump's immigration reform proposal , which he sent to Congress for their perusal last Sunday. The proposal is a very detailed 70-point list under three main headings:

Border Security (27 items)
Interior Enforcement (39 items)
Merit-Based Immigration System (four items)

Item-wise, the biggest heading there is the second one, "Interior Enforcement." That's very welcome.

Of course we need improved border security so that people don't enter our country without permission. That comes under the first heading. An equally pressing problem, though, is the millions of foreigners who are living and working here, and using our schools and hospitals and public services, who should not be here.

The President's proposals on interior enforcement cover all bases: sanctuary cities, visa overstays, law-enforcement resources, compulsory E-Verify, more deportations, improved visa security.

This is a major, wonderful improvement in national policy, when you consider that less than a year ago the White House and Justice Department were run by committed open-borders fanatics. I thank the President and his staff for having put so much work into such a detailed proposal for restoring American sovereignty and the rights of American workers and taxpayers.

That said, here come the quibbles.

That third heading, "Merit-Based Immigration System," with just four items, needs work. Setting aside improvements on visa controls under the other headings, this is really the only part of the proposal that covers legal immigration. In my opinion, it does so imperfectly.

There's some good meat in there, mind. Three of the four items -- numbers one, three, and four -- got a fist-pump from me:

cutting down chain migration by limiting it to spouse and dependent children; eliminating the Diversity Visa Lottery; and limiting the number of refugees admitted, assuming this means severely cutting back on the numbers, preferably all the way to zero.

Good stuff. Item two, however, is a problem. Quote:

Establish a new, points-based system for the awarding of Green Cards (lawful permanent residents) based on factors that allow individuals to successfully assimilate and support themselves financially.

That sounds OK, bringing in talented, well-educated, well-socialized people, rather than what the late Lee Kuan Yew referred to as "fruit-pickers." Forgive me if I have a rather jaundiced view of this merit-based approach.

For most of my adult life I made a living as a computer programmer. I spent four years doing this in the U.S.A. through the mid-1970s. Then I came back in the late 1980s and worked at the same trade here through the 1990s. (Pictured right–my actual H-1B visa ) That gave me two clear snapshots twenty years apart, of this particular corner of skilled middle-class employment in America.

In the 1970s a programming shop was legacy American, with only a thin scattering of foreigners like myself. Twenty years later programming had been considerably foreignized , thanks to the H-1B visa program. Now, twenty years further on, I believe legacy-American programmers are an endangered species.

So a well-paid and mentally rewarding corner of the middle-class job market has been handed over to foreigners -- for the sole reason, of course, that they are cheaper than Americans. The desire for cheap labor explains 95 percent of U.S. immigration policy. The other five percent is sentimentality.

On so-called "merit-based immigration," therefore, you can count me a cynic. I have no doubt that American firms could recruit all the computer programmers they need from among our legacy population. They used to do so, forty years ago. Then they discovered how to game the immigration system for cheaper labor.

Now they are brazen in their crime: you have heard, I'm sure, those stories about American workers being laid off, with severance packages conditional on their helping train their cheaper foreign replacements. That's our legal immigration system in a nutshell. It's a cheap-labor racket.

A "merit-based" points system won't fix that. It will quickly and easily be gamed by employers to lay waste yet more middle-class occupational zones for Americans. If it was restricted to the higher levels of "merit," we would just be importing a professional overclass of foreigners, most East and South Asians, to direct the labors of less-meritorious legacy Americans. How would that contribute to social harmony?

With coming up to a third of a billion people, the U.S.A. has all the talent, all the merit , it needs. You might make a case for a handful of certified geniuses like Einstein or worthy dissidents like Solzhenitsyn, but those cases aside, there is no reason at all to have guest-worker programs. They should all be shut down.

Some of these cheap-labor rackets don't even need congressional action to shut them down; it can be done by regulatory change via executive order. The scandalous OPT-visa scam, for example, which brings in cheap workers under the guise of student visas.

Here is John Miano writing about the OPT program last month, quote:

Measured by the number of workers per year, the largest guestworker program in the entire immigration system is now student visas through the Optional Practical Training program (OPT). Last year over 154,000 aliens were approved to work on student visas. By comparison, 114,000 aliens entered the workforce on H-1B guestworker visas.

Because there is no reporting on how long guestworkers stay in the country, we do not know the total number of workers in each category. Nonetheless, the number of approvals for work on student visas has grown by 62 percent over the past four years so their numbers will soon dwarf those on H-1B visas.

The troubling fact is that the OPT program was created entirely through regulation with no authorization from Congress whatsoever. [ A History of the 'Optional Practical Training' Guestworker Program , CIS, September 18, 2017]

End quote. (And a cheery wave of acknowledgement to John Miano here from one of the other seventeen people in the U.S.A. that knows the correct placement of the hyphen in "H-1B.")

Our legal immigration system is addled with these scams. Don't even get me started on the EB-5 investor's visa . It all needs sweeping away.

So for preference I would rewrite that third heading to include, yes, items one, three, and four -- cutting down chain migration, ending the Diversity Visa Lottery, and ending refugee settlement for anyone of less stature than Solzhenitsyn; but then, I'd replace item two with the following:

End all guest-worker programs, with exceptions only for the highest levels of talent and accomplishment, limit one hundred visas per annum .

So much for my amendments to the President's October 8th proposals. There is, though, one glaring omission from that 70-item list. The proposal has no mention at all of birthright citizenship.

Most other countries have abandoned it. It leads to obstetric tourism: women well-advanced in pregnancy come to the U.S.A. to give birth, knowing that the child will be a U.S. citizen. It is deeply unpopular with Americans, once it's explained to them.

Yes, yes, I know: some constitutional authorities argue that birthright citizenship is implied in the Fourteenth Amendment , although it is certain that the framers of that Amendment did not have foreign tourists or illegal entrants in mind. Other scholars think Congress could legislate against it.

The only way to find out is to have Congress legislate. If the courts strike down the legislation as unconstitutional, let's then frame a constitutional amendment and put it to the people.

Getting rid of birthright citizenship might end up a long and difficult process. We might ultimately fail. The only way to find out is to get the process started . Failure to mention this in the President's proposal is a very glaring omission.

Setting aside that, and the aforementioned reservations about working visas, I give two cheers to the proposal.

The author [email him] writes an incredible amount on all sorts of subjects for all kinds of outlets. (This no longer includes National Review, whose editors had some kind of tantrum and fired him.) He is the author of We Are Doomed: Reclaiming Conservative Pessimism and several other books. He has had two books published by VDARE.com: FROM THE DISSIDENT RIGHT (also available in Kindle) and FROM THE DISSIDENT RIGHT II: ESSAYS 2013. (Republished from VDare.com by permission of author or representative)

SimpleHandle, October 14, 2017 at 2:56 am GMT

I agree with ending birthright citizenship. But Trump should wait until he can put at least one more strict constitutionalist in the supreme court. There will be a court challenge, and we need judges who can understand that if the 14th Amendment didn't give automatic citizenship to American Indians it doesn't give automatic citizenship to children of Mexican citizens who jumped our border.

Diversity Heretic, October 14, 2017 at 5:04 am GMT

@Carroll Price

Insofar as your personal situation is concerned, perhaps you would find yourself less "relatively poor" if you had a job with higher wages.

Diversity Heretic, October 14, 2017 at 5:16 am GMT

John's article, it seems to me, ignores the elephant in the room: the DACA colonists. Trump is offering this proposal, more or less, in return for some sort of semi-permanent regularization of their status. Bad trade, in my opinion. Ending DACA and sending those illegals back where they belong will have more real effect on illegal and legal immigration/colonization than all sorts of proposals to be implemented in the future, which can and will be changed by subsequent Administrations and Congresses.

Trump would also be able to drive a much harder bargain with Congress (like maybe a moratorium on any immigration) if he had kept his campaign promise, ended DACA the afternoon of January 20, 2017, and busloads of DACA colonists were being sent south of the Rio Grande.

The best hope for immigration patriots is that the Democrats are so wedded to Open Borders that the entire proposal dies and Trump, in disgust, reenacts Ike's Operation Wetback.

bartok, October 14, 2017 at 6:32 am GMT

@Carroll Price

Once all the undocumented workers who are doing all the dirty, nasty jobs Americans refuse to do are run out the country, then what?

White people couldn't possibly thrive without non-Whites! Why, without all of that ballast we'd ascend too near the sun.

Negrolphin Pool, October 14, 2017 at 7:53 am GMT

Well, in the real world, things just don't work that way. It's pay me now or pay me later. Once all the undocumented workers who are doing all the dirty, nasty jobs Americans refuse to do are run out the country, then what?

Right, prior to 1965, Americans didn't exist. They had all starved to death because, as everyone knows, no Americans will work to produce food and, even if they did, once Tyson chicken plants stop making 50 percent on capital they just shut down.

If there were no Somalis in Minnesota, even Warren Buffett couldn't afford grapes.

Joe Franklin, October 14, 2017 at 12:24 pm GMT

Illegal immigrants picking American produce is a false economy.

Illegal immigrants are subsidized by the taxpayer in terms of public health, education, housing, and welfare.

If businesses didn't have access to cheap and subsidized illegal alien labor, they would be compelled to resort to more farm automation to reduce cost.

Cheap illegal alien labor delays the inevitable use of newer farm automation technologies.

Many Americans would likely prefer a machine touch their food rather than an illegal alien with strange hygiene practices.

In addition, anti-American Democrats and neocons prefer certain kinds of illegal aliens because they bolster their diversity scheme.

Carroll Price, October 14, 2017 at 12:27 pm GMT

@Realist "Once all the undocumented workers who are doing all the dirty, nasty jobs Americans refuse to do are run out the country, then what?"

"Eliminate welfare...then you'll have plenty of workers."

Unfortunately, that train left the station long ago. With or without welfare, there's simply no way soft, spoiled, lazy, over-indulged Americans who have never hit a lick at anything in their life will ever perform manual labor for anyone, including themselves.

Jonathan Mason, October 14, 2017 at 2:57 pm GMT

@Randal Probably people other than you have worked out that once their wages are not being continually undercut by cheap and easy immigrant competition, the American working classes will actually be able to earn enough to pay the increased prices for grocery store items, especially as the Americans who, along with machines, will replace those immigrants doing the "jobs Americans won't do" will also be earning more and actually paying taxes on it.

The "jobs Americans/Brits/etc won't do" myth is a deliberate distortion of reality that ignores the laws of supply and demand. There are no jobs Americans etc won't do, only jobs for which the employers are not prepared to pay wages high enough to make them worthwhile for Americans etc to do.

Now of course it is more complicated than that. There are jobs that would not be economically viable if the required wages were to be paid, and there are marginal contributions to job creation by immigrant populations, but those aspects are in reality far less significant than the bosses seeking cheap labour want people to think they are.

As a broad summary, a situation in which labour is tight, jobs are easy to come by and staff hard to hold on to is infinitely better for the ordinary working people of any nation than one in which there is a huge pool of excess labour, and therefore wages are low and employees disposable.

You'd think anyone purporting to be on the "left", in the sense of supporting working class people would understand that basic reality, but far too many on the left have been indoctrinated in radical leftist anti-racist and internationalist dogmas that make them functional stooges for big business and its mass immigration program.

Probably people other than you have worked out that once their wages are not being continually undercut by cheap and easy immigrant competition, the American working classes will actually be able to earn enough to pay the increased prices for grocery store items, especially as the Americans who, along with machines, will replace those immigrants doing the "jobs Americans won't do" will also be earning more and actually paying taxes on it.

There might be some truth in this. When I was a student in England in the 60′s I spent every summer working on farms, picking hops, apples, pears, potatoes and made some money and had a lot of fun too and became an expert farm tractor operator.

No reason why US students and high school seniors should not pick up a lot of the slack. Young people like camping in the countryside and sleeping rough, plus lots of opportunity to meet others, have sex, smoke weed, drink beer, or whatever. If you get a free vacation plus a nice check at the end, that makes the relatively low wages worthwhile. It is not always a question of how much you are paid, but how much you can save.

George Weinbaum, October 14, 2017 at 3:35 pm GMT

We can fix the EB-5 visa scam. My suggestion: charge would-be "investors" $1 million to enter the US. This $1 million is not refundable under any circumstance. It is paid when the "investor's" visa is approved. If the "investor" is convicted of a felony, he is deported. He may bring no one with him. No wife, no child, no aunt, no uncle. Unless he pays $1 million for that person.

We will get a few thousand Russian oligarchs and Saudi princes a year under this program.

As to fixing the H-1B visa program, we charge employer users of the program say $25,000 per year per employee. We require the employers to inform all employees that if any is asked to train a replacement, he should inform the DOJ immediately. The DOJ investigates and if true, charges managerial employees who asked that a replacement be trained with fraud.

As to birthright citizenship: I say make it a five-year felony to have a child while in the US illegally. Make it a condition of getting a tourist visa that one not be pregnant. If the tourist visa lasts say 60 days and the woman has a child while in the US, she gets charged with fraud.

None of these suggestions requires a constitutional amendment.

Auntie Analogue, October 14, 2017 at 7:10 pm GMT

In the United States, middle-class prosperity reached its apogee in 1965 – before the disastrous (and eminently foreseeable) wage-lowering consequence of the Hart-Celler Open Immigration Act: its massive admission of foreigners increased the supply of labor, which began to lower middle-class prosperity and to shrink and eradicate the middle class.

It was in 1965 that ordinary Americans, enjoying maximum employment because employers were forced to compete for Americans' talents and labor, wielded their peak purchasing power . Since 1970 wages have remained stagnant, and since 1965 the purchasing power of ordinary Americans has gone into steep decline.

It is long past time to halt Perpetual Mass Immigration into the United States, to end birthright citizenship, and to deport all illegal aliens – if, that is, our leaders genuinely care about and represent us ordinary Americans instead of continuing their legislative, policy, and judicial enrichment of the 1-percenter campaign donor/rentier class of transnational Globali$t Open Border$ E$tabli$hment $ellout$.

Jim Sweeney, October 14, 2017 at 8:26 pm GMT

Re the birthright citizenship argument, that is not settled law in that SCOTUS has never ruled on the question of whether a child born in the US is thereby a citizen if the parents are illegally present. Way back in 1898, SCOTUS did resolve the issue of whether a child born to alien parents who were legally present was thereby a citizen. That case is U.S. vs Wong Kim Ark 169 US 649. SCOTUS ruled in favor of citizenship. If that was a justiciable issue how much more so is it when the parents are illegally present?

My thinking is that the result would be the same but, at least, the question would be settled. I cannot see justices returning a toddler to Beijing or worse. They would never have invitations to cocktail parties again for the shame heaped upon them for such uncaring conduct. Today, the title of citizen is conferred simply by bureaucratic rule, not by judicial order.

JP Straley, October 14, 2017 at 9:42 pm GMT

Arguments Against Fourteenth Amendment Anchor Baby Interpretation
J. Paige Straley

Part One. Anchor Baby Argument, Mexican Case.
The ruling part of the US Constitution is Amendment Fourteen: "All persons born or naturalized in the United States, and subject to the jurisdiction thereof, are citizens of the United States and of the State wherein they reside."

Here is the ruling part of the Mexican Constitution, Section II, Article Thirty:
Article 30
Mexican nationality is acquired by birth or by naturalization:
A. Mexicans by birth are:
I. Those born in the territory of the Republic, regardless of the nationality of their parents;
II. Those born in a foreign country of Mexican parents; of a Mexican father and a foreign mother; or of a Mexican mother and an unknown father;
III. Those born on Mexican vessels or airships, either war or merchant vessels."

A baby born to Mexican nationals within the United States is automatically a Mexican citizen. Under the anchor baby reasoning, this baby acquires US citizenship at the same time and so is a dual citizen. Mexican citizenship is primary because it stems from a primary source, the parents' citizenship and the law of Mexico. The Mexican Constitution states the child of Mexican parents is automatically a Mexican citizen at birth no matter where the birth occurs. Since the child would be a Mexican citizen in any country, and becomes an American citizen only if born in America, it is clear that Mexico has the primary claim of citizenry on the child. This alone should be enough to satisfy the Fourteenth Amendment jurisdiction thereof argument. Since Mexican citizenship is primary, it has primary jurisdiction; thus by the plain words of the Fourteenth such child is not an American citizen at birth.

There is a second argument for primary Mexican citizenship in the case of anchor babies. Citizenship, whether Mexican or American, establishes rights and duties. Citizenship is a reciprocal relationship, thus establishing jurisdiction. This case for primary Mexican citizenship is supported by the fact that Mexico allows and encourages Mexicans resident in the US, either illegal aliens or legal residents, to vote in Mexican elections. They are counted as Mexican citizens abroad, even if dual citizens, and their government provides widespread consular services as well as voting access to Mexicans residing in the US. As far as Mexico is concerned, these persons are not Mexican in name only, but have a civil relationship strong enough to allow a political voice; in essence, full citizenship. Clearly, all this is the expression of typical reciprocal civic relationships expressed in legal citizenship, further supporting the establishment of jurisdiction.

Part Two: Wong Kim Ark (1898) case. (Birthright Citizenship)

The Wong Kim Ark (WKA) case is often cited as the essential legal reasoning and precedent for application of the fourteenth amendment as applied to aliens. There has been plenty of commentary on WKA, but the truly narrow application of the case is emphasized reviewing a concise statement of the question the case was meant to decide, written by Hon. Horace Gray, Justice for the majority in this decision.

"[W]hether a child born in the United States, of parents of Chinese descent, who, at the time of his birth, are subjects of the Emperor of China, but have a permanent domicile and residence in the United States, and are there carrying on business, and are not employed in any diplomatic or official capacity under the Emperor of China, becomes at the time of his birth a citizen of the United States by virtue of the first clause of the Fourteenth Amendment of the Constitution." (Italics added.)

For WKA to justify birthright citizenship, the parents must have "permanent domicile and residence." But how can an illegal alien have permanent residence when the threat of deportation is constantly present? There is no statute of limitation for illegal presence in the US and the passage of time does not eliminate the legal remedy of deportation. This alone would seem to invalidate WKA as a support and precedent for illegal alien birthright citizenship.

If illegal (or legal) alien parents are unemployed, unemployable, illegally employed, or if they get their living by illegal means, then they are not ". . .carrying on business. . .", and so the children of indigent or criminal aliens may not be eligible for birthright citizenship.

If legal aliens meet the two tests provided in WKA, birthright citizenship applies. Clearly the WKA case addresses the specific situation of the children of legal aliens, and so is not an applicable precedent to justify birthright citizenship for the children of illegal aliens.

Part Three. Birth Tourism

Occasionally foreign couples take a trip to the US during the last phase of the wife's pregnancy so she can give birth in the US, thus conferring birthright citizenship on the child. This practice is called "birth tourism." WKA provides two tests for birthright citizenship: permanent domicile and residence and doing business, and a temporary visit answers neither condition. WKA is therefore disqualified as justification for a "birth tourism" child to be granted birthright citizenship.

Realist, October 14, 2017 at 10:05 pm GMT

@Carroll Price "Unfortunately, that train left the station long ago. With or without welfare, there's simply no way soft, spoiled, lazy, over-indulged Americans who have never hit a lick at anything in their life will ever perform manual labor for anyone, including themselves."

Then let them starve to death. The Pilgrims nipped that dumb ass idea (welfare) in the bud.

Alfa158, October 15, 2017 at 2:10 am GMT

@Carroll Price

An equally pressing problem, though, is the millions of foreigners who are living and working here, and using our schools and hospitals and public services, who should not be here.
Has it ever occurred to anyone other than me that the cost associated with foreign workers using our schools and hospitals and public services for free, is more than off-set by the cheap price being paid for grocery store items like boneless chicken breast, grapes, apples, peaches, lettuce etc, which would otherwise be prohibitively expensive even for the wealthy? Let alone relatively poor people (like myself) and those on fixed incomes? What un-thinking Americans want, is having their cake and eating it too. Well, in the real world, things just don't work that way. It's pay me now or pay me later. Once all the undocumented workers who are doing all the dirty, nasty jobs Americans refuse to do are run out the country, then what?

Please look up: History; United States; pre-mid-twentieth century. I'm pretty sure Americans were eating chicken, grapes, apples, peaches, lettuce, etc. prior to that period. I don't think their diet consisted of venison and tree bark. But since I wasn't there, maybe I'm wrong and that is actually what they were eating. I know some people born in the 1920′s; I'll check with them and let you know what they say.

[Oct 09, 2017] TMOUT - Auto Logout Linux Shell When There Isn't Any Activity by Aaron Kili

Oct 07, 2017 | www.tecmint.com
... ... ..

To enable automatic user logout, we will be using the TMOUT shell variable, which terminates a user's login shell when there has been no activity for a specified number of seconds.

To enable this globally (system-wide for all users), set the above variable in the /etc/profile shell initialization file.
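For illustration, here is a minimal sketch of such a system-wide setting. The 600-second value and the drop-in file name are example assumptions, not values taken from the article:

# /etc/profile.d/autologout.sh -- hypothetical drop-in file sourced by /etc/profile
# on most Linux distributions; the 600-second timeout is only an example value.
TMOUT=600        # terminate an idle interactive shell after 600 seconds (10 minutes)
readonly TMOUT   # prevent users from unsetting or overriding the timeout
export TMOUT

After the next login (or after sourcing the file), bash and ksh will close any interactive session that sits idle at the prompt for longer than TMOUT seconds.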

[Oct 03, 2017] Timeshift A System Restore Utility Tool Review - LinuxAndUbuntu - Linux News Apps Reviews Linux Tutorials HowTo

It looks like this is technologically a questionable approach, although the technical details are unclear. Rsync-based backups are better handled by other tools, and BTRFS is a niche filesystem.
www.linuxandubuntu.com

TimeShift is a system restore tool for Linux. It provides functionality that is quite similar to the System Restore feature in Windows or the Time Machine tool in MacOS. TimeShift protects your system by making incremental snapshots of the file system manually or at regular automated intervals.

These snapshots can then be restored at a later point to undo all changes to the system and restore it to the previous state. Snapshots are made using rsync and hard-links, and the tool shares common files amongst snapshots in order to save disk space. Now that we have an idea about what Timeshift is, let us take a detailed look at setting up and using this tool.
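To make the rsync-plus-hard-links idea concrete, here is a minimal sketch of the underlying technique. This is not Timeshift's actual code; the paths, excludes and naming scheme are assumptions for illustration only:

#!/bin/bash
# Sketch of incremental snapshots via rsync --link-dest, the technique
# Timeshift's rsync mode is built on. Paths and excludes are illustrative.
SRC=/
DEST=/mnt/backup/snapshots                    # assumed snapshot store
NEW="$DEST/$(date +%Y-%m-%d_%H-%M-%S)"        # directory for the new snapshot
PREV=$(ls -1d "$DEST"/*/ 2>/dev/null | sort | tail -n 1)   # latest snapshot, if any

# Files unchanged since the previous snapshot are hard-linked instead of copied,
# so every snapshot looks complete while only changed files consume new space.
rsync -aAXH --delete \
      --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/run \
      --exclude=/tmp --exclude=/mnt \
      ${PREV:+--link-dest="$PREV"} \
      "$SRC" "$NEW"

Restoring a snapshot then amounts to copying the chosen snapshot directory back over the target filesystem with rsync.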

... ... ...

Timeshift supports two snapshot formats. The first uses Rsync, and the second uses the in-built snapshot features of the BTRFS file system. So you can select the BTRFS format if you are using that particular filesystem; otherwise, you have to choose the Rsync format.
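For comparison, here is a rough sketch of what the BTRFS route looks like at the command level. The subvolume paths and names below are assumptions, and Timeshift's own layout and naming differ:

# Rough illustration of the BTRFS approach, assuming / is itself a BTRFS subvolume
# and /.snapshots lives on the same filesystem; names are examples only.
btrfs subvolume snapshot -r / /.snapshots/root-$(date +%F_%H-%M)

# List existing subvolumes and snapshots:
btrfs subvolume list /

# A restore is conceptually a writable snapshot taken from the saved state, e.g.:
# btrfs subvolume snapshot /.snapshots/root-2017-10-03_12-00 /root-restored

BTRFS snapshots are copy-on-write, so they are nearly instantaneous and consume additional space only as the live filesystem diverges from them.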

[Oct 03, 2017] Silicon Valley companies have placed lowering wages and flooding the labor market with cheaper labor near the top of their goals and as a business model.

Notable quotes:
"... That's Silicon Valley's dirty secret. Most tech workers in Palo Alto make about as much as the high school teachers who teach their kids. And these are the top coders in the country! ..."
"... I don't see why more Americans would want to be coders. These companies want to drive down wages for workers here and then also ship jobs offshore... ..."
"... Silicon Valley companies have placed lowering wages and flooding the labor market with cheaper labor near the top of their goals and as a business model. ..."
"... There are quite a few highly qualified American software engineers who lose their jobs to foreign engineers who will work for much lower salaries and benefits. This is a major ingredient of the libertarian virus that has engulfed and contaminating the Valley, going hand to hand with assembling products in China by slave labor ..."
"... If you want a high tech executive to suffer a stroke, mention the words "labor unions". ..."
"... India isn't being hired for the quality, they're being hired for cheap labor. ..."
"... Enough people have had their hands burnt by now with shit companies like TCS (Tata) that they are starting to look closer to home again... ..."
"... Globalisation is the reason, and trying to force wages up in one country simply moves the jobs elsewhere. The only way I can think of to limit this happening is to keep the company and coders working at the cutting edge of technology. ..."
"... I'd be much more impressed if I saw that the hordes of young male engineers here in SF expressing a semblance of basic common sense, basic self awareness and basic life skills. I'd say 91.3% are oblivious, idiotic children. ..."
"... Not maybe. Too late. American corporations objective is to low ball wages here in US. In India they spoon feed these pupils with affordable cutting edge IT training for next to nothing ruppees. These pupils then exaggerate their CVs and ship them out en mass to the western world to dominate the IT industry. I've seen it with my own eyes in action. Those in charge will anything/everything to maintain their grip on power. No brag. Just fact. ..."
Oct 02, 2017 | profile.theguardian.com
Terryl Dorian , 21 Sep 2017 13:26
That's Silicon Valley's dirty secret. Most tech workers in Palo Alto make about as much as the high school teachers who teach their kids. And these are the top coders in the country!
Ray D Wright -> RogTheDodge , , 21 Sep 2017 14:52
I don't see why more Americans would want to be coders. These companies want to drive down wages for workers here and then also ship jobs offshore...
Richard Livingstone -> KatieL , , 21 Sep 2017 14:50
+++1 to all of that.

Automated coding just pushes the level of coding further up the development food chain, rather than getting rid of it. It is the wrong approach for current tech. AI that is smart enough to model new problems and create its own descriptive and runnable language is hopefully after my lifetime, but coming sometime.

Arne Babenhauserheide -> Evelita , , 21 Sep 2017 14:48
What coding does not teach is how to improve our non-code infrastructure and how to keep it running (that's the stuff which actually moves things). Code can optimize stuff, but it needs actual actuators to affect reality.

Sometimes these actuators are actual people walking on top of a roof while fixing it.

WyntonK , 21 Sep 2017 14:47
Silicon Valley companies have placed lowering wages and flooding the labor market with cheaper labor near the top of their goals and as a business model.

There are quite a few highly qualified American software engineers who lose their jobs to foreign engineers who will work for much lower salaries and benefits. This is a major ingredient of the libertarian virus that has engulfed and contaminating the Valley, going hand to hand with assembling products in China by slave labor .

If you want a high tech executive to suffer a stroke, mention the words "labor unions".

TheEgg -> UncommonTruthiness , , 21 Sep 2017 14:43

The ship has sailed on this activity as a career.

Nope. Married to a highly-technical skillset, you can still make big bucks. I say this as someone involved in this kind of thing academically and our Masters grads have to beat the banks and fintech companies away with dog shits on sticks. You're right that you can teach anyone to potter around and throw up a webpage but at the prohibitively difficult maths-y end of the scale, someone suitably qualified will never want for a job.

Mike_Dexter -> Evelita , , 21 Sep 2017 14:43
In a similar vein, if you accept the argument that it does drive down wages, wouldn't the culprit actually be the multitudes of online and offline courses and tutorials available to an existing workforce?
Terryl Dorian -> CountDooku , , 21 Sep 2017 14:42
Funny you should pick medicine, law, engineering... 3 fields that are *not* taught in high school. The writer is simply adding "coding" to your list. So it seems you agree with his "garbage" argument after all.
anticapitalist -> RogTheDodge , , 21 Sep 2017 14:42
Key word is "good". Teaching everyone is just going to increase the pool of programmers code I need to fix. India isn't being hired for the quality, they're being hired for cheap labor. As for women sure I wouldn't mind more women around but why does no one say their needs to be more equality in garbage collection or plumbing? (And yes plumbers are a high paid professional).

In the end I don't care what the person is, I just want to hire and work with the best and not someone I have to correct their work because they were hired by quota. If women only graduate at 15% why should IT contain more than that? And let's be a bit honest with the facts, of those 15% how many spend their high school years staying up all night hacking? Very few. Now the few that did are some of the better developers I work with but that pool isn't going to increase by forcing every child to program... just like sports aren't better by making everyone take gym class.

WithoutPurpose , 21 Sep 2017 14:42
I ran a development team for 10 years and I never had any trouble hiring programmers - we just had to pay them enough. Every job would have at least 10 good applicants.

Two years ago I decided to scale back a bit and go into programming (I can code real-time low latency financial apps in 4 languages) and I had four interviews in six months with stupidly low salaries. I'm lucky in that I can bounce between tech and the business side so I got a decent job out of tech.

My entirely anecdotal conclusion is that there is no shortage of good programmers just a shortage of companies willing to pay them.

oddbubble -> Tori Turner , , 21 Sep 2017 14:41
I've worn many hats so far. I started out as a sysadmin, then I moved on to web development, then back end, and now I'm doing test automation because I am on almost the same money for half the effort.
peter nelson -> raffine , , 21 Sep 2017 14:38
But the concepts won't. Good programming requires the ability to break down a task, organise the steps in performing it, identify parts of the process that are common or repetitive so they can be bundled together, handed-off or delegated, etc.

These concepts can be applied to any programming language, and indeed to many non-software activities.

Oliver Jones -> Trumbledon , , 21 Sep 2017 14:37
In the city maybe with a financial background, the exception.
anticapitalist -> Ethan Hawkins , 21 Sep 2017 14:32
Well, to his point, sort of... either everything will go php or all those entry level php developers will be on the street. A good Java or C developer is hard to come by. And to the others, being a developer, especially a good one, is nothing like reading and writing. The industry is already saturated with poor coders just doing it for a paycheck.
peter nelson -> Tori Turner , 21 Sep 2017 14:31
I'm just going to say this once: not everyone with a computer science degree is a coder.

And vice versa. I'm retiring from a 40-year career as a software engineer. Some of the best software engineers I ever met did not have CS degrees.

KatieL -> Mishal Almohaimeed , 21 Sep 2017 14:30
"already developing automated coding scripts. "

Pretty much the entire history of the software industry since FORAST was developed for the ORDVAC has been about desperately trying to make software development in some way possible without driving everyone bonkers.

The gulf between FORAST and today's IDE-written, type-inferring high level languages, compilers, abstracted run-time environments, hypervisors, multi-computer architectures and general tech-world flavour-of-2017-ness is truly immense[1].

And yet software is still fucking hard to write. There's no sign it's getting easier despite all that work.

Automated coding was promised as the solution in the 1980s as well. In fact, somewhere in my archives, I've got paper journals which include adverts for automated systems that would make programmers completely redundant by writing all your database code for you. These days, we'd think of those tools as automated ORM generators and they don't fix the problem; they just make a new one -- ORM impedance mismatch -- which needs more engineering on top to fix...

The tools don't change the need for the humans, they just change what's possible for the humans to do.

[1] FORAST executed in about 20,000 bytes of memory without even an OS. The compile artifacts for the map-reduce system I built today are an astonishing hundred million bytes... and don't include the necessary mapreduce environment, management interface, node operating system and distributed filesystem...

raffine , 21 Sep 2017 14:29
Whatever they are taught today will be obsolete tomorrow.
yannick95 -> savingUK , , 21 Sep 2017 14:27
"There are already top quality coders in China and India"

AHAHAHAHAHAHAHAHAHAHAHA *rolls on the floor laughing* Yes........ 1%... and 99% of incredibly bad, incompetent, untalented ones that cost 50% of a good developer but produce only 5% in comparison. And I'm talking with a LOT of practical experience through more than a dozen corporations all over the world which have been outsourcing to India... all have been disasters for the companies (but good for the execs who pocketed big bonuses and left the company before the disaster blew up in their face).

Wiretrip -> mcharts , , 21 Sep 2017 14:25
Enough people have had their hands burnt by now with shit companies like TCS (Tata) that they are starting to look closer to home again...
TomRoche , 21 Sep 2017 14:11

Tech executives have pursued [the goal of suppressing workers' compensation] in a variety of ways. One is collusion – companies conspiring to prevent their employees from earning more by switching jobs. The prevalence of this practice in Silicon Valley triggered a justice department antitrust complaint in 2010, along with a class action suit that culminated in a $415m settlement.

Folks interested in the story of the Techtopus (less drily presented than in the links in this article) should check out Mark Ames' reporting, esp this overview article and this focus on the egregious Steve Jobs (whose canonization by the US corporate-funded media is just one more impeachment of their moral bankruptcy).

Another, more sophisticated method is importing large numbers of skilled guest workers from other countries through the H1-B visa program. These workers earn less than their American counterparts, and possess little bargaining power because they must remain employed to keep their status.

Folks interested in H-1B and US technical visas more generally should head to Norm Matloff 's summary page , and then to his blog on the subject .

Olympus68 , 21 Sep 2017 13:49

I have watched as schools run by trade unions have done the opposite for the past 5 decades. By limiting the number of graduates, they were able to help maintain living wages and benefits. This has been stopped in my area due to the pressure of owner-run "trade associations".

During that same time period I have witnessed trade associations controlled by company owners, while publicising their support of the average employee, invest enormous amounts of membership fees in creating alliances with public institutions. Their goal has been that of flooding the labor market and thus keeping wages low. A double hit for the average worker because membership fees were paid by employees as well as those in control.

And so it goes....

savingUK , 21 Sep 2017 13:38
Coding jobs are just as susceptible to being moved to lower cost areas of the world as hardware jobs already have. It's already happening. There are already top quality coders in China and India. There is a much larger pool to chose from and they are just as good as their western counterparts and work harder for much less money.

Globalisation is the reason, and trying to force wages up in one country simply moves the jobs elsewhere. The only way I can think of to limit this happening is to keep the company and coders working at the cutting edge of technology.

whitehawk66 , 21 Sep 2017 15:18

I'd be much more impressed if I saw that the hordes of young male engineers here in SF expressing a semblance of basic common sense, basic self awareness and basic life skills. I'd say 91.3% are oblivious, idiotic children.

They would definitely not survive the zombie apocalypse.

P.S. not every kid wants or needs to have their soul sucked out of them sitting in front of a screen full of code for some idiotic service that some other douchbro thinks is the next iteration of sliced bread.

UncommonTruthiness , 21 Sep 2017 14:10
The demonization of Silicon Valley is clearly the next place to put all blame. Look what "they" did to us: computers, smart phones, HD television, world-wide internet, on and on. Get a rope!

I moved there in 1978 and watched the orchards and trailer parks on North 1st St. of San Jose transform into a concrete jungle. There used to be quite a bit of semiconductor equipment and device manufacturing in SV during the 80s and 90s. Now quite a few buildings have the same name : AVAILABLE. Most equipment and device manufacturing has moved to Asia.

Programming started with binary, then machine code (hexadecimal or octal) and moved to assembler as a compiled and linked structure. More compiled languages like FORTRAN, BASIC, PL-1, COBOL, PASCAL, C (and all its "+'s") followed making programming easier for the less talented. Now the script based languages (HTML, JAVA, etc.) are even higher level and accessible to nearly all. Programming has become a commodity and will be priced like milk, wheat, corn, non-unionized workers and the like. The ship has sailed on this activity as a career.

William Fitch III , 21 Sep 2017 13:52
Hi: As I have said many times before, there is no shortage of people who fully understand the problem and can see all the connections.

However, they all fall on their faces when it comes to the solution. To cut to the chase, Concentrated Wealth needs to go, permanently. Of course the challenge is how to best accomplish this.....

.....Bill

MostlyHarmlessD , , 21 Sep 2017 13:16

Damn engineers and their black and white world view, if they weren't so inept they would've unionized instead of being trampled again and again in the name of capitalism.
mcharts -> Aldous0rwell , , 21 Sep 2017 13:07
Not maybe. Too late. American corporations' objective is to low ball wages here in US. In India they spoon feed these pupils with affordable cutting edge IT training for next-to-nothing rupees. These pupils then exaggerate their CVs and ship them out en masse to the western world to dominate the IT industry. I've seen it with my own eyes in action. Those in charge will do anything/everything to maintain their grip on power. No brag. Just fact.

Woe to our children and grandchildren.

Where's Bernie Sanders when we need him.

[Oct 03, 2017] The dream of coding automation remain illusive... Very illusive...

Oct 03, 2017 | discussion.theguardian.com

Richard Livingstone -> Mishal Almohaimeed , 21 Sep 2017 14:46

Wrong again, that approach has been tried since the 80s and will keep failing only because software development is still more akin to a technical craft than an engineering discipline. The number of elements required to assemble a working non trivial system is way beyond scriptable.
freeandfair -> Taylor Dotson , 21 Sep 2017 14:26
> That's some crystal ball you have there. English teachers will need to know how to code? Same with plumbers? Same with janitors, CEOs, and anyone working in the service industry?

You don't believe there will be robots to do plumbing and cleaning? The cleaner's job will be to program robots to do what they need.
CEOs? Absolutely.

English teachers? Both of my kids have school laptops and everything is being done on the computers. The teachers use software and create websites and what not. Yes, even English teachers.

Not knowing / understanding how to code will be the same as not knowing how to use Word/ Excel. I am assuming there are people who don't, but I don't know any above the age of 6.

Wiretrip -> Mishal Almohaimeed , 21 Sep 2017 14:20
We've had 'automated coding scripts' for years for small tasks. However, anyone who says they're going to obviate programmers, analysts and designers doesn't understand the software development process.
Ethan Hawkins -> David McCaul , 21 Sep 2017 13:22
Even if expert systems (an 80's concept, BTW) could code, we'd still have a huge need for managers. The hard part of software isn't even the coding. It's determining the requirements and working with clients. It will require general intelligence to do 90% of what we do right now. The 10% we could automate right now, mostly gets in the way. I agree it will change, but it's going to take another 20-30 years to really happen.
Mishal Almohaimeed -> PolydentateBrigand , , 21 Sep 2017 13:17
Wrong, software companies are already developing automated coding scripts. You'll get a bunch of door-to-door knife salespeople once the dust settles, that's what you'll get.
freeandfair -> rgilyead , , 21 Sep 2017 14:22
> In 20 years time AI will be doing the coding

Possible, but you still have to understand how AI operates and what it can and cannot do.

[Oct 03, 2017] Coding and carpentry are not so distant, are they ?

Thw user "imipak" views are pretty common misconceptions. They are all wrong.
Notable quotes:
"... I was about to take offence on behalf of programmers, but then I realized that would be snobbish and insulting to carpenters too. Many people can code, but only a few can code well, and fewer still become the masters of the profession. Many people can learn carpentry, but few become joiners, and fewer still become cabinetmakers. ..."
"... Many people can write, but few become journalists, and fewer still become real authors. ..."
Oct 03, 2017 | discussion.theguardian.com

imipak, 21 Sep 2017 15:13

Coding has little or nothing to do with Silicon Valley. They may or may not have ulterior motives, but ultimately they are nothing in the scheme of things.

I disagree with teaching coding as a discrete subject. I think it should be combined with home economics and woodworking because 90% of these subjects consist of transferable skills that exist in all of them. Only a tiny residual is actually topic-specific.

In the case of coding, the residual consists of drawing skills and typing skills. Programming language skills? Irrelevant. You should choose the tools to fit the problem. Neither of these needs a computer. You should only ever approach the computer at the very end, after you've designed and written the program.

Is cooking so very different? Do you decide on the ingredients before or after you start? Do you go shopping half-way through cooking an omelette?

With woodwork, do you measure first or cut first? Do you have a plan or do you randomly assemble bits until it does something useful?

Real coding, taught correctly, is barely taught at all. You teach the transferable skills. ONCE. You then apply those skills in each area in which they apply.

What other transferable skills apply? Top-down design, bottom-up implementation. The correct methodology in all forms of engineering. Proper testing strategies, also common across all forms of engineering. However, since these tests are against logic, they're a test of reasoning. A good thing to have in the sciences and philosophy.

Technical writing is the art of explaining things to idiots. Whether you're designing a board game, explaining what you like about a house, writing a travelogue or just seeing if your wild ideas hold water, you need to be able to put those ideas down on paper in a way that exposes all the inconsistencies and errors. It doesn't take much to clean it up to be readable by humans. But once it is cleaned up, it'll remain free of errors.

So I would teach a foundation course that teaches top-down reasoning, bottom-up design, flowcharts, critical path analysis and symbolic logic. Probably aimed at age 7. But I'd not do so wholly in the abstract. I'd have it thoroughly mixed in with one field, probably cooking as most kids do that and it lacks stigma at that age.

I'd then build courses on various crafts and engineering subjects on top of that, building further hierarchies where possible. Eliminate duplication and severely reduce the fictions we call disciplines.

oldzealand, 21 Sep 2017 14:58
I used to employ 200 computer scientists in my business and now teach children so I'm apparently as guilty as hell. To be compared with a carpenter is, however, a true compliment, if you mean those that create elegant, aesthetically-pleasing, functional, adaptable and long-lasting bespoke furniture, because our crafts of problem-solving using limited resources in confined environments to create working, life-improving artifacts both exemplify great human ingenuity in action. Capitalism or no.
peter nelson, 21 Sep 2017 14:29
"But coding is not magic. It is a technical skill, akin to carpentry."

But some people do it much better than others. Just like journalism. This article is complete nonsense, as I discuss in another comment. The author might want to consider a career in carpentry.

Fanastril, 21 Sep 2017 14:13
"But coding is not magic. It is a technical skill, akin to carpentry."

It is a way of thinking. Perhaps carpentry is too, but the arrogance of the above statement shows a soul who is done thinking.

NDReader, 21 Sep 2017 14:12
"But coding is not magic. It is a technical skill, akin to carpentry."

I was about to take offence on behalf of programmers, but then I realized that would be snobbish and insulting to carpenters too. Many people can code, but only a few can code well, and fewer still become the masters of the profession. Many people can learn carpentry, but few become joiners, and fewer still become cabinetmakers.

Many people can write, but few become journalists, and fewer still become real authors.

MostlyHarmlessD, 21 Sep 2017 13:08
A carpenter!? Good to know that engineers are still thought of as jumped up tradesmen.

[Oct 02, 2017] Programming vs coding

This idiotic US term "coder" is complete baloney.
Notable quotes:
"... You can learn to code, but that doesn't mean you'll be good at it. There will be a few who excel but most will not. This isn't a reflection on them but rather the reality of the situation. In any given area some will do poorly, more will do fairly, and a few will excel. The same applies in any field. ..."
"... Oh no, there's loads of people who say they're coders, who have on their CV that they're coders, that have been paid to be coders. Loads of them. Amazingly, about 9 out of 10 of them, experienced coders all, spent ages doing it, not a problem to do it, definitely a coder, not a problem being "hands on"... can't actually write working code when we actually ask them to. ..."
"... I feel for your brother, and I've experienced the exact same BS "test" that you're describing. However, when I said "rudimentary coding exam", I wasn't talking about classic fiz-buz questions, Fibonacci problems, whiteboard tests, or anything of the sort. We simply ask people to write a small amount of code that will solve a simple real world problem. Something that they would be asked to do if they got hired. We let them take a long time to do it. We let them use Google to look things up if they need. You would be shocked how many "qualified applicants" can't do it. ..."
"... "...coding is not magic. It is a technical skill, akin to carpentry. " I think that is a severe underestimation of the level of expertise required to conceptualise and deliver robust and maintainable code. The complexity of integrating software is more equivalent to constructing an entire building with components of different materials. If you think teaching coding is enough to enable software design and delivery then good luck. ..."
"... Being able to write code and being able to program are two very different skills. In language terms its the difference between being able to read and write (say) English and being able to write literature; obviously you need a grasp of the language to write literature but just knowing the language is not the same as being able to assemble and marshal thought into a coherent pattern prior to setting it down. ..."
"... What a dumpster argument. I am not a programmer or even close, but a basic understanding of coding has been important to my professional life. Coding isn't just about writing software. Understanding how algorithms work, even simple ones, is a general skill on par with algebra. ..."
"... Never mind that a good education is clearly one of the most important things you can do for a person to improve their quality of life wherever they live in the world. It's "neoliberal," so we better hate it. ..."
"... A lot of resumes come across my desk that look qualified on paper, but that's not the same thing as being able to do the job. Secondarily, while I agree that one day our field might be replaced by automation, there's a level of creativity involved with good software engineering that makes your carpenter comparison a bit flawed. ..."
Oct 02, 2017 | profile.theguardian.com
Wiretrip -> Mark Mauvais , 21 Sep 2017 14:23
Yes, 'engineers' (and particularly mathematicians) write appalling code.
Trumbledon , 21 Sep 2017 14:23
A good developer can easily earn £600-800 per day, which suggests to me that they are in high demand, and society needs more of them.
Wiretrip -> KatieL , 21 Sep 2017 14:22
Agreed, to many people 'coding' consists of copying other people's JavaScript snippets from StackOverflow... I tire of the many frauds in the business...
stratplaya , 21 Sep 2017 14:21
You can learn to code, but that doesn't mean you'll be good at it. There will be a few who excel but most will not. This isn't a reflection on them but rather the reality of the situation. In any given area some will do poorly, more will do fairly, and a few will excel. The same applies in any field.
peter nelson -> UncommonTruthiness , 21 Sep 2017 14:21

The ship has sailed on this activity as a career.

Oh, rubbish. I'm in the process of retiring from my job as an Android software designer so I'm tasked with hiring a replacement for my organisation. It pays extremely well, the work is interesting, and the company is successful and serves an important worldwide industry.

Still, finding highly-qualified people is hard and they get snatched up in mid-interview because the demand is high. Not only that but at these pay scales, we can pretty much expect the Guardian will do yet another article about the unconscionable gap between what rich, privileged techies like software engineers make and everyone else.

Really, we're damned if we do and damned if we don't. If tech workers are well-paid we're castigated for gentrifying neighbourhoods and living large, and yet anything that threatens to lower what we're paid produces conspiracy-theory articles like this one.

Fanastril -> Taylor Dotson , 21 Sep 2017 14:17
I learned to cook in school. Was there a shortage of cooks? No. Did I become a professional cook? No. but I sure as hell would not have missed the skills I learned for the world, and I use them every day.
KatieL -> Taylor Dotson , 21 Sep 2017 14:13
Oh no, there's loads of people who say they're coders, who have on their CV that they're coders, that have been paid to be coders. Loads of them. Amazingly, about 9 out of 10 of them, experienced coders all, spent ages doing it, not a problem to do it, definitely a coder, not a problem being "hands on"... can't actually write working code when we actually ask them to.
youngsteveo -> Taylor Dotson , 21 Sep 2017 14:12
I feel for your brother, and I've experienced the exact same BS "test" that you're describing. However, when I said "rudimentary coding exam", I wasn't talking about classic fiz-buz questions, Fibonacci problems, whiteboard tests, or anything of the sort. We simply ask people to write a small amount of code that will solve a simple real world problem. Something that they would be asked to do if they got hired. We let them take a long time to do it. We let them use Google to look things up if they need. You would be shocked how many "qualified applicants" can't do it.
Fanastril -> Taylor Dotson , 21 Sep 2017 14:11
It is not zero-sum: If you teach something empowering, like programming, motivating is a lot easier, and they will learn more.
UncommonTruthiness , 21 Sep 2017 14:10
The demonization of Silicon Valley is clearly the next place to put all blame. Look what "they" did to us: computers, smart phones, HD television, world-wide internet, on and on. Get a rope!

I moved there in 1978 and watched the orchards and trailer parks on North 1st St. of San Jose transform into a concrete jungle. There used to be quite a bit of semiconductor equipment and device manufacturing in SV during the 80s and 90s. Now quite a few buildings have the same name : AVAILABLE. Most equipment and device manufacturing has moved to Asia.

Programming started with binary, then machine code (hexadecimal or octal) and moved to assembler as a compiled and linked structure. More compiled languages like FORTRAN, BASIC, PL-1, COBOL, PASCAL, C (and all its "+'s") followed making programming easier for the less talented.

Now the script based languages (HTML, JAVA, etc.) are even higher level and accessible to nearly all. Programming has become a commodity and will be priced like milk, wheat, corn, non-unionized workers and the like. The ship has sailed on this activity as a career.

KatieL -> Taylor Dotson , 21 Sep 2017 14:10
"intelligence, creativity, diligence, communication ability, or anything else that a job"

None of those are any use if, when asked to turn your intelligent, creative, diligent, communicated idea into some software, you perform as well as most candidates do at simple coding assessments... and write stuff that doesn't work.

peter nelson , 21 Sep 2017 14:09

At its root, the campaign for code education isn't about giving the next generation a shot at earning the salary of a Facebook engineer. It's about ensuring those salaries no longer exist, by creating a source of cheap labor for the tech industry.

Of course the writer does not offer the slightest shred of evidence to support the idea that this is the actual goal of these programs. So it appears that the tinfoil-hat conspiracy brigade on the Guardian is operating not only below the line, but above it, too.

The fact is that few of these students will ever become software engineers (which, incidentally, is my profession) but programming skills are essential in many professions for writing little scripts to automate various tasks, or to just understand 21st century technology.

kcrane , 21 Sep 2017 14:07
Sadly this is another article by a partial journalist who knows nothing about the software industry, but hopes to subvert what he has read somewhere to support a position he had already assumed. As others have said, understanding coding has already become akin to being able to use a pencil. It is a basic requirement of many higher-level roles.

But knowing which end of a pencil to put on the paper (the equivalent of the level of coding taught in schools) isn't the same as being an artist. Moreover, anyone who knows the field recognises that top coders are gifted; they embody genius. There are coding Caravaggios out there, but few have the experience to know that. No amount of teaching will produce high-level coders from average humans; there is an intangible something needed, as there is in music and art, to elevate the merely good to genius.

All to say, however many are taught the basics, it won't push down the value of the most talented coders, and so won't reduce the costs of the technology industry in any meaningful way, as it is an industry, like art, that relies on the few, not the many.

DebuggingLife , 21 Sep 2017 14:06
Not all of those children will want to become programmers, but at least the barrier to entry (for more to at least experience it) will be lower.

Teaching music to only the children whose parents can afford music tuition means that society misses out on a greater potential for some incredibly gifted musicians to shine through.

Moreover, learning to code really means learning how to wrangle with the practical application of abstract concepts, algorithms, numerical skills, logic, reasoning, etc., which are all transferable skills, some of which are not in the scope of other classes, certainly not practically.
Like music, sport, literature etc., programming a computer, a website, a device, a smartphone is an endeavour that can be truly rewarding as merely a pastime, and similarly is limited only by one's imagination.

rgilyead , 21 Sep 2017 14:01
"...coding is not magic. It is a technical skill, akin to carpentry. " I think that is a severe underestimation of the level of expertise required to conceptualise and deliver robust and maintainable code. The complexity of integrating software is more equivalent to constructing an entire building with components of different materials. If you think teaching coding is enough to enable software design and delivery then good luck.
Taylor Dotson -> cwblackwell , 21 Sep 2017 14:00
Yeah, but mania over coding skills inevitably pushes other skills out of the curriculum (or de-emphasizes them). Education is zero-sum in that there's only so much time and energy to devote to it. Hence, you need more than vague appeals to "enhancement," especially given the risks pointed out by the author.
Taylor Dotson -> PolydentateBrigand , 21 Sep 2017 13:57
"Talented coders will start new tech businesses and create more jobs."

That could be argued for any skill set, including those found in the humanities and social sciences likely to be pushed out by the mania over coding ability. Education is zero-sum: time spent on one subject is time that invariably can't be spent learning something else.

Taylor Dotson -> WumpieJr , 21 Sep 2017 13:49
"If they can't literally fix everything let's just get rid of them, right?"

That's a strawman. His point is rooted in the recognition that we only have so much time, energy, and money to invest in solutions. Ones that feel good but may not do anything distract us from the deeper structural issues in our economy. The problem with thinking "education" will fix everything is that it leaves the status quo unquestioned.

martinusher , 21 Sep 2017 13:31
Being able to write code and being able to program are two very different skills. In language terms it's the difference between being able to read and write (say) English and being able to write literature; obviously you need a grasp of the language to write literature, but just knowing the language is not the same as being able to assemble and marshal thought into a coherent pattern prior to setting it down.

To confuse things further there's various levels of skill that all look the same to the untutored eye. Suppose you wished to bridge a waterway. If that waterway was a narrow ditch then you could just throw a plank across. As the distance to be spanned got larger and larger eventually you'd have to abandon intuition for engineering and experience. Exactly the same issues happen with software but they're less tangible; anyone can build a small program but a complex system requires a lot of other knowledge (in my field, that's engineering knowledge -- coding is almost an afterthought).

It's a good idea to teach young people to code, but I wouldn't raise their expectations of huge salaries too much. For children, educating them in wider, more general fields and abstract activities such as music will pay huge dividends, far more than just teaching them whatever the fashionable language du jour is. (...which should be Logo, but it's too subtle and abstract; it doesn't look "real world" enough!)

freeandfair , 21 Sep 2017 13:30
I don't see this as an issue. Sure, there could be ulterior motives there, but anyone who wants to still be employed in 20 years has to know how to code. It is not that everyone will be a coder, but their jobs will either include part-time coding or will require understanding of software and what it can and cannot do. AI is going to be everywhere.
WumpieJr , 21 Sep 2017 13:23
What a dumpster argument. I am not a programmer or even close, but a basic understanding of coding has been important to my professional life. Coding isn't just about writing software. Understanding how algorithms work, even simple ones, is a general skill on par with algebra.

But it isn't just about coding for Tarnoff. He seems to hold education in contempt generally. "The far-fetched premise of neoliberal school reform is that education can mend our disintegrating social fabric." If they can't literally fix everything let's just get rid of them, right?

Never mind that a good education is clearly one of the most important things you can do for a person to improve their quality of life wherever they live in the world. It's "neoliberal," so we better hate it.

youngsteveo , 21 Sep 2017 13:16
I'm not going to argue that the goal of mass education isn't to drive down wages, but the idea that the skills gap is a myth doesn't hold water in my experience. I'm a software engineer and manager at a company that pays well over the national average, with great benefits, and it is downright difficult to find a qualified applicant who can pass a rudimentary coding exam.

A lot of resumes come across my desk that look qualified on paper, but that's not the same thing as being able to do the job. Secondarily, while I agree that one day our field might be replaced by automation, there's a level of creativity involved with good software engineering that makes your carpenter comparison a bit flawed.

[Oct 02, 2017] Does programming provide a new path to the middle class? Probably no longer, unless you are really talented. In the latter case it is not that different from any other field, but the pressure from H1B makes it harder for programmers. The neoliberal USA has a real problem with social mobility

Notable quotes:
"... I do think it's peculiar that Silicon Valley requires so many H1B visas... 'we can't find the talent here' is the main excuse ..."
"... This is interesting. Indeed, I do think there is excess supply of software programmers. ..."
"... Well, it is either that or the kids themselves who have to pay for it and they are even less prepared to do so. Ideally, college education should be tax payer paid but this is not the case in the US. And the employer ideally should pay for the job related training, but again, it is not the case in the US. ..."
"... Plenty of people care about the arts but people can't survive on what the arts pay. That was pretty much the case all through human history. ..."
"... I was laid off at your age in the depths of the recent recession and I got a job. ..."
"... The great thing about software , as opposed to many other jobs, is that it can be done at home which you're laid off. Write mobile (IOS or Android) apps or work on open source projects and get stuff up on github. I've been to many job interviews with my apps loaded on mobile devices so I could show them what I've done. ..."
"... Schools really can't win. Don't teach coding, and you're raising a generation of button-pushers. Teach it, and you're pandering to employers looking for cheap labour. Unions in London objected to children being taught carpentry in the twenties and thirties, so it had to be renamed "manual instruction" to get round it. Denying children useful skills is indefensible. ..."
Oct 02, 2017 | discussion.theguardian.com
swelle , 21 Sep 2017 17:36
I do think it's peculiar that Silicon Valley requires so many H1B visas... 'we can't find the talent here' is the main excuse, though many 'older' (read: over 40) native-born tech workers will tell you there's plenty of talent here already, but even with the immigration hassles, H1B workers will be cheaper overall...

Julian Williams , 21 Sep 2017 18:06

This is interesting. Indeed, I do think there is excess supply of software programmers. There is only a modest number of decent jobs, say as an algorithms developer in finance, general architecture of complex systems or to some extent in systems security. However, these jobs are usually occupied and the incumbents are not likely to move on quickly. Road blocks are also put up by creating sub networks of engineers who ensure that some knowledge is not ubiquitous.

Most very high paying jobs in the technology sector are in the same standard upper management roles as in every other industry.

Still, the ability to write a computer program is an enabler; knowing how it works means you have an ability to imagine something and make it real. To me it is a bit like language: some people can use language to make more money than others, but it is still important to be able to have a basic level of understanding.

FabBlondie -> peter nelson , 21 Sep 2017 17:42
And yet I know a lot of people that has happened to. Better to replace a $125K a year programmer with one who will do the same, or even less, job for $50K.

JMColwill , 21 Sep 2017 18:17

This could backfire if the programmers don't find the work or pay to match their expectations... Programmers, after all tend to make very good hackers if their minds are turned to it.

freeandfair -> FabBlondie , 21 Sep 2017 18:23

> While I like your idea of what designing a computer program involves, in my nearly 40 years experience as a programmer I have rarely seen this done.

Well, I am a software architect and what he says sounds correct for a certain type of applications. Maybe you do a different type of programming.

peter nelson -> FabBlondie , 21 Sep 2017 18:23

While I like your idea of what designing a computer program involves, in my nearly 40 years experience as a programmer I have rarely seen this done.

How else can you do it?

Java is popular because it's a very versatile language - on this list it's the most popular general-purpose programming language. (Above it, JavaScript is just a scripting language and HTML/CSS aren't even programming languages) https://fossbytes.com/most-used-popular-programming-languages/ ... and below it you have to go down to C# at 20% to come to another general-purpose language, and even that's a Microsoft house language.

Also the "correct" choice of programming languages is also based on how many people in the shop know it so they maintain code that's written in it by someone else.

freeandfair -> FabBlondie , 21 Sep 2017 18:22
> job-specific training is completely different. What a joke to persuade public school districts to pick up the tab on job training.

Well, it is either that or the kids themselves who have to pay for it and they are even less prepared to do so. Ideally, college education should be tax payer paid but this is not the case in the US. And the employer ideally should pay for the job related training, but again, it is not the case in the US.

freeandfair -> mlzarathustra , 21 Sep 2017 18:20
> The bigger problem is that nobody cares about the arts, and as expensive as education is, nobody wants to carry around a debt on a skill that won't bring in the buck

Plenty of people care about the arts but people can't survive on what the arts pay. That was pretty much the case all through human history.

theindyisbetter -> Game Cabbage , 21 Sep 2017 18:18
No. The amount of work is not a fixed sum. That's the lump of labour fallacy. We are not tied to the land.
ConBrio , 21 Sep 2017 18:10
Since newspapers are consolidating and cutting jobs, we gotta clamp down on colleges offering BA degrees, particularly in English Literature and journalism.

And then... and...then...and...

LMichelle -> chillisauce , 21 Sep 2017 18:03
This article focuses on the US schools, but I can imagine it's the same in the UK. I don't think these courses are going to be about creating great programmers capable of new innovations as much as having a work force that can be their own IT Help Desk.

They'll learn just enough in these classes to do that.

Then most companies will be hiring for other jobs, but want to make sure you have the IT skills to serve as your own "help desk" (although they will get no salary for their IT work).

edmundberk -> FabBlondie , 21 Sep 2017 17:57
I find that quite remarkable - 40 years ago you must have been using assembler and with hardly any memory to work with. If you blitzed through that without applying the thought processes described, well...I'm surprised.
James Dey , 21 Sep 2017 17:55
Funny. Every day in the Brexit articles, I read that increasing the supply of workers has negligible effect on wages.
peter nelson -> peterainbow , 21 Sep 2017 17:54
I was laid off at your age in the depths of the recent recession and I got a job. As I said in another posting, it usually comes down to fresh skills and good personal references who will vouch for your work-habits and how well you get on with other members of your team.

The great thing about software, as opposed to many other jobs, is that it can be done at home while you're laid off. Write mobile (IOS or Android) apps or work on open source projects and get stuff up on github. I've been to many job interviews with my apps loaded on mobile devices so I could show them what I've done.

Game Cabbage -> theindyisbetter , 21 Sep 2017 17:52
The situation has a direct comparison to today. It has nothing to do with land. There was a certain amount of profit making work and not enough labour to satisfy demand. There is currently a certain amount of profit making work and in many situations (especially unskilled low paid work) too much labour.
edmundberk , 21 Sep 2017 17:52
So, is teaching people English or arithmetic all about reducing wages for the literate and numerate?

Or is this the most obtuse argument yet for avoiding what everyone in tech knows - even more blatantly than in many other industries, wages are curtailed by offshoring; and in the US, by having offshoring centres on US soil.

chillisauce , 21 Sep 2017 17:48
Well, speaking as someone who spends a lot of time trying to find really good programmers... frankly there aren't that many about. We take most of ours from Eastern Europe and SE Asia, which is quite expensive, given the relocation costs to the UK. But worth it.

So, yes, if more British kids learnt about coding, it might help a bit. But not much; the real problem is that few kids want to study IT in the first place, and that the tuition standards in most UK universities are quite low, even if they get there.

Baobab73 , 21 Sep 2017 17:48
True......
peter nelson -> rebel7 , 21 Sep 2017 17:47
There was recently a programme/podcast on ABC/RN about the HUGE shortage in Australia of techies with specialized security skills.
peter nelson -> jigen , 21 Sep 2017 17:46
Robots, or AI, are already making us more productive. I can write programs today in an afternoon that would have taken me a week a decade or two ago.

I can create a class and the IDE will take care of all the accessors, dependencies, enforce our style-guide compliance, stub in the documentation, even most test cases, etc., and all I have to write is the very specific stuff required by my application - the other 90% is generated for me. Same with UI/UX - it stubs in relevant event handlers, bindings, dependencies, etc.
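As a rough illustration of that split, here is a minimal, hypothetical Java class (the class and field names are invented): the one application-specific method is hand-written, while the constructor, accessors and toString are exactly the kind of boilerplate a modern IDE can generate.

public class Sensor {
    private String id;
    private double reading;

    // The application-specific part a programmer actually writes by hand.
    public boolean isOverThreshold(double threshold) {
        return reading > threshold;
    }

    // Everything below is typical IDE-generated boilerplate.
    public Sensor(String id, double reading) {
        this.id = id;
        this.reading = reading;
    }

    public String getId() { return id; }
    public void setId(String id) { this.id = id; }
    public double getReading() { return reading; }
    public void setReading(double reading) { this.reading = reading; }

    @Override
    public String toString() {
        return "Sensor{id='" + id + "', reading=" + reading + "}";
    }
}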

Programmers are a zillion times more productive than in the past, yet the demand keeps growing because so much more stuff in our lives has processors and code. Your car has dozens of processors running lots of software; your TV, your home appliances, your watch, etc.

Quaestor , 21 Sep 2017 17:43

Schools really can't win. Don't teach coding, and you're raising a generation of button-pushers. Teach it, and you're pandering to employers looking for cheap labour. Unions in London objected to children being taught carpentry in the twenties and thirties, so it had to be renamed "manual instruction" to get round it. Denying children useful skills is indefensible.

jamesupton , 21 Sep 2017 17:42
Getting children to learn how to write code, as part of core education, will be the first step to the long overdue revolution. The rest of us will still have to stick to burning buildings down and stringing up the aristocracy.
cjenk415 -> LMichelle , 21 Sep 2017 17:40
did you misread? it seemed like he was emphasizing that learning to code, like learning art (and sports and languages), will help them develop skills that benefit them in whatever profession they choose.
FabBlondie -> peter nelson , 21 Sep 2017 17:40
While I like your idea of what designing a computer program involves, in my nearly 40 years' experience as a programmer I have rarely seen this done. And, FWIW, IMHO choosing the tool (programming language) might reasonably be expected to follow designing a solution; in practice this rarely happens. No, these days it's Java all the way, from day one.
theindyisbetter -> Game Cabbage , 21 Sep 2017 17:40
There was a fixed supply of land and a reduced supply of labour to work the land.

Nothing like the situation in a modern economy.

LMichelle , 21 Sep 2017 17:39
I'd advise parents that the classes they need to make sure their kids excel in are acting/drama. There is no better way to get that promotion or increase your pay than being a skilled actor in the job market. It's a fake-it-till-you-make-it deal.
theindyisbetter , 21 Sep 2017 17:36
What a ludicrous argument.

Let's not teach maths or science or literacy either - then anyone with those skills will earn more.

SheriffFatman -> Game Cabbage , 21 Sep 2017 17:36

After the Black Death in the Middle Ages there was a huge undersupply of labour. It produced a consistent rise in wages and conditions

It also produced wage-control legislation (which admittedly failed to work).

peter nelson -> peterainbow , 21 Sep 2017 17:32
if there were truly a shortage i wouldn't be unemployed

I've heard that before but when I've dug deeper I've usually found someone who either let their skills go stale, or who had some work issues.

LMichelle -> loveyy , 21 Sep 2017 17:26
Really? You think they are going to emphasize things like the importance of privacy and consumer rights?
loveyy , 21 Sep 2017 17:25
This really has to be one of the silliest articles I read here in a very long time.
People, let your children learn to code. Even more, educate yourselves and start to code just for the fun of it - look at it like a game.
The more people know how to code, the more likely they are to understand how stuff works. If you were ever frustrated by how impossible it seems to shop on certain websites, learn to code and you will be frustrated no more. You will understand the intent behind the process.
Even more, you will understand the inherent limitations and what is the meaning of safety. You will be able to better protect yourself in a real time connected world.

Learning to code won't turn your kid into a programmer, just like ballet or piano classes won't mean they'll ever choose art as their livelihood. So let the children learn to code and learn along with them.

Game Cabbage , 21 Sep 2017 17:24
Tipping power to employers in any profession by oversupply of labour is not a good thing. Bit of a macabre example here, but... After the Black Death in the Middle Ages there was a huge undersupply of labour. It produced a consistent rise in wages and conditions and economic development for hundreds of years after this. Not suggesting a massive depopulation. But you can achieve the same effects by altering the power balance. With decades of Neoliberalism, the employers' side of the power see-saw is sitting firmly in the mud and is producing very undesired results for the vast majority of people.
Zuffle -> peterainbow , 21 Sep 2017 17:23
Perhaps you're just not very good. I've been a developer for 20 years and I've never had more than 1 week of unemployment.
Kevin P Brown -> peterainbow , 21 Sep 2017 17:20
" at 55 finding it impossible to get a job"

I am 59, and it is not just the age aspect, it is the money aspect. They know you have experience and expectations, and yet they believe hiring someone half the age and half the price, times two, will replace your knowledge. I have been contracting in IT for 30 years, and now it is obvious it is over. Experience at some point no longer mitigates age. I think I am at that point now.

TheLane82 , 21 Sep 2017 17:20
Completely true! What needs to happen instead is to teach the real valuable subjects.

Gender studies. Islamic studies. Black studies. All important issues that need to be addressed.

peter nelson -> mlzarathustra , 21 Sep 2017 17:06
Dear, dear, I know, I know, young people today . . . just not as good as we were. Everything is just going down the loo . . . Just have a nice cuppa camomile (or chamomile if you're a Yank) and try to relax ... " hey you kids, get offa my lawn !"
FabBlondie , 21 Sep 2017 17:06
There are good reasons to teach coding. Too many of today's computer users are amazingly unaware of the technology that allows them to send and receive emails, use their smart phones, and use websites. Few understand the basic issues involved in computer security, especially as it relates to their personal privacy. Hopefully some introductory computer classes could begin to remedy this, and the younger the students the better.

Security problems are not strictly a matter of coding.

Security issues persist in tech. Clearly that is not a function of the size of the workforce. I propose that it is a function of poor management and design skills. These are not taught in any programming class I ever took. I learned these on the job and in an MBA program, and because I was determined.

Don't confuse basic workforce training with an effective application of tech to authentic needs.

How can the "disruption" so prized in today's Big Tech do anything but aggravate our social problems? Tech's disruption begins with a blatant ignorance of and disregard for causes, and believes to its bones that a high tech app will truly solve a problem it cannot even describe.

Kool Aid anyone?

peterainbow -> brady , 21 Sep 2017 17:05
indeed that idea has been around as long as COBOL and in practice has just made things worse; the fact that many people outside of software engineering don't seem to realise is that the coding itself is a relatively small part of the job
FabBlondie -> imipak , 21 Sep 2017 17:04
Hurrah.
peterainbow -> rebel7 , 21 Sep 2017 17:04
so how many female and old software engineers are there who are unable to get a job, i'm one of them at 55 finding it impossible to get a job and unlike many 'developers' i know what i'm doing
peterainbow , 21 Sep 2017 17:02
meanwhile the age and sex discrimination in IT goes on, if there were truly a shortage i wouldn't be unemployed
Jared Hall -> peter nelson , 21 Sep 2017 17:01
Training more people for an occupation will result in more people becoming qualified to perform that occupation, regardless of the fact that many will perform poorly at it. A CS degree is no guarantee of competency, but it is one of the best indicators of general qualification we have at the moment. If you can provide a better metric for analyzing the underlying qualifications of the labor force, I'd love to hear it.

Regarding your anecdote, while interesting, it is poor evidence when compared to the aggregate statistical data analyzed in the EPI study.

peter nelson -> FabBlondie , 21 Sep 2017 17:00

Job-specific training is completely different.

Good grief. It's not job-specific training. You sound like someone who knows nothing about computer programming.

Designing a computer program requires analysing the task: breaking it down into its components, prioritising them and identifying interdependencies, and figuring out which parts of it can be broken out and done separately. Expressing all this in some programming language like Java, C, or C++ is quite secondary.

So once you learn to organise a task properly you can apply it to anything - remodeling a house, planning a vacation, repairing a car, starting a business, or administering a (non-software) project at work.
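A minimal Java sketch of that kind of decomposition (the task and all names are invented for illustration): the job "summarise a log file" is broken into independent steps before any single step is coded.

import java.nio.file.*;
import java.util.*;
import java.util.stream.*;

public class ReportTool {
    public static void main(String[] args) throws Exception {
        List<String> lines = readLines(Paths.get(args[0]));  // step 1: acquire input
        Map<String, Long> counts = countByFirstWord(lines);  // step 2: analyse
        writeReport(counts);                                  // step 3: present results
    }

    static List<String> readLines(Path file) throws Exception {
        return Files.readAllLines(file);
    }

    static Map<String, Long> countByFirstWord(List<String> lines) {
        // Count how often each leading word occurs across non-empty lines.
        return lines.stream()
                .filter(l -> !l.trim().isEmpty())
                .map(l -> l.trim().split("\\s+")[0])
                .collect(Collectors.groupingBy(w -> w, Collectors.counting()));
    }

    static void writeReport(Map<String, Long> counts) {
        counts.forEach((word, n) -> System.out.println(word + ": " + n));
    }
}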

[Oct 02, 2017] Evaluation of potential job candidates for a programming job should include evaluation of their previous projects and code written

Notable quotes:
"... Thank you. The kids that spend high school researching independently and spend their nights hacking just for the love of it and getting a job without college are some of the most competent I've ever worked with. Passionless college grads that just want a paycheck are some of the worst. ..."
"... how about how new labor tried to sign away IT access in England to India in exchange for banking access there, how about the huge loopholes in bringing in cheap IT workers from elsewhere in the world, not conspiracies, but facts ..."
"... And I've never recommended hiring anyone right out of school who could not point me to a project they did on their own, i.e., not just grades and test scores. I'd like to see an IOS or Android app, or a open-source component, or utility or program of theirs on GitHub, or something like that. ..."
"... most of what software designers do is not coding. It requires domain knowledge and that's where the "smart" IDEs and AI coding wizards fall down. It will be a long time before we get where you describe. ..."
Oct 02, 2017 | discussion.theguardian.com

peter nelson -> c mm , 21 Sep 2017 19:49

Instant feedback is one of the things I really like about programming, but it's also the thing that some people can't handle. As I'm developing a program, all day long the compiler is telling me about build errors or warnings, or when I go to execute it, it crashes or produces unexpected output, etc. Software engineers are bombarded all day with negative feedback and little failures. You have to be thick-skinned for this work.
peter nelson -> peterainbow , 21 Sep 2017 19:42
How is it shallow and lazy? I'm hiring for the real world so I want to see some real world accomplishments. If the candidate is fresh out of university they can't point to work projects in industry because they don't have any. But they CAN point to stuff they've done on their own. That shows both motivation and the ability to finish something. Why do you object to it?
anticapitalist -> peter nelson , 21 Sep 2017 14:47
Thank you. The kids that spend high school researching independently and spend their nights hacking just for the love of it and getting a job without college are some of the most competent I've ever worked with. Passionless college grads that just want a paycheck are some of the worst.
John Kendall , 21 Sep 2017 19:42
There is a big difference between "coding" and programming. Coding for a smart phone app is a matter of calling functions that are built into the device. For example, there are functions for the GPS or for creating buttons or for simulating motion in a game. These are what we used to call subroutines. The difference is that whereas we had to write our own subroutines, now they are just preprogrammed functions. How those functions are written is of little or no importance to today's coders.

Nor are they able to program on that level. Real programming requires not only a knowledge of programming languages, but also a knowledge of the underlying algorithms that make up actual programs. I suspect that "coding" classes operate on a quite superficial level.
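A small Java sketch of the distinction (hypothetical, purely illustrative): the first call simply uses a search function the platform already provides, while the second requires knowing the underlying algorithm well enough to write it yourself.

import java.util.Arrays;

public class SearchDemo {
    public static void main(String[] args) {
        int[] sorted = {2, 5, 8, 12, 16, 23, 38, 56, 72, 91};

        // "Coding": call the preprogrammed library function.
        int fromLibrary = Arrays.binarySearch(sorted, 23);

        // "Programming": implement the binary search algorithm yourself.
        int byHand = binarySearch(sorted, 23);

        System.out.println(fromLibrary + " " + byHand);  // both print 5
    }

    static int binarySearch(int[] a, int key) {
        int lo = 0, hi = a.length - 1;
        while (lo <= hi) {
            int mid = (lo + hi) >>> 1;   // unsigned shift avoids overflow
            if (a[mid] < key) lo = mid + 1;
            else if (a[mid] > key) hi = mid - 1;
            else return mid;
        }
        return -1;                        // key not present
    }
}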

Game Cabbage -> theindyisbetter , 21 Sep 2017 19:40
It's not about the amount of work or the amount of labor. It's about the comparative availability of both and how that affects the balance of power, and that in turn affects the overall quality of life for the 'majority' of people.
c mm -> Ed209 , 21 Sep 2017 19:39
Most of this is not true. Peter Nelson gets it right by talking about breaking steps down and thinking rationally. The reason you can't just teach the theory, however, is that humans learn much better with feedback. Think about trying to learn how to build a fast car, but you never get in and test its speed. That would be silly. Programming languages take the system of logic that has been developed for centuries and gives instant feedback on the results. It's a language of rationality.
peter nelson -> peterainbow , 21 Sep 2017 19:37
This article is about the US. The tech industry in the EU is entirely different, and basically moribund. Where is the EU's Microsoft, Apple, Google, Amazon, Oracle, Intel, Facebook, etc, etc? The opportunities for exciting interesting work, plus the time and schedule pressures that force companies to overlook stuff like age because they need a particular skill Right Now, don't exist in the EU. I've done very well as a software engineer in my 60's in the US; I cannot imagine that would be the case in the EU.
peterainbow -> peter nelson , 21 Sep 2017 19:37
sorry but that's just not true, i doubt you are really programming still, or quasi programmer but really a manager who like to keep their hand in, you certainly aren't busy as you've been posting all over this cif. also why would you try and hire someone with such disparate skillsets, makes no sense at all

oh and you'd be correct that i do have workplace issues, ie i have a disability and i also suffer from depression, but that shouldn't bar me from employment and again regarding my skills going stale, that again contradicts your statement that it's about planning/analysis/algorithms etc that you said above ( which to some extent i agree with )

c mm -> peterainbow , 21 Sep 2017 19:36
Not at all, it's really egalitarian. If I want to hire someone to paint my portrait, the best way to know if they're any good is to see their previous work. If they've never painted a portrait before then I may want to go with the girl who has.
c mm -> ragingbull , 21 Sep 2017 19:34
There is definitely not an excess. Just look at projected jobs for computer science at the Bureau of Labor Statistics.
c mm -> perble conk , 21 Sep 2017 19:32
Right? It's ridiculous. "Hey, there's this industry you can train for that is super valuable to society and pays really well!"
Then Ben Tarnoff, "Don't do it! If you do you'll drive down wages for everyone else in the industry. Build your fire starting and rock breaking skills instead."
peterainbow -> peter nelson , 21 Sep 2017 19:29
how about how New Labour tried to sign away IT access in England to India in exchange for banking access there, how about the huge loopholes in bringing in cheap IT workers from elsewhere in the world, not conspiracies, but facts
peter nelson -> eirsatz , 21 Sep 2017 19:25
I think the difference between gifted and not is motivation. But I agree it's not innate. The kid who stayed up all night in high school hacking into the school server to fake his coding class grade is probably more gifted than the one who spent 4 years in college getting a BS in CS because someone told him he could get a job when he got out.

I've done some hiring in my life and I always ask them to tell me about stuff they did on their own.

peter nelson -> TheBananaBender , 21 Sep 2017 19:20

Most coding jobs are bug fixing.

The only bugs I have to fix are the ones I make.

peter nelson -> Ed209 , 21 Sep 2017 19:19
As several people have pointed out, writing a computer program requires analyzing and breaking down a task into steps, identifying interdependencies, prioritizing the order, figuring out what parts can be organized into separate tasks that be done separately, etc.

These are completely independent of the language - I've been programming for 40 years in everything from FORTRAN to APL to C to C# to Java and it's all the same. Not only that but they transcend programming - they apply to planning a vacation, remodeling a house, or fixing a car.

peter nelson -> ragingbull , 21 Sep 2017 19:14
Neither coding nor having a bachelor's degree in computer science makes you a suitable job candidate. I've done a lot of recruiting and interviews in my life, and right now I'm trying to hire someone. And I've never recommended hiring anyone right out of school who could not point me to a project they did on their own, i.e., not just grades and test scores. I'd like to see an IOS or Android app, or an open-source component, or a utility or program of theirs on GitHub, or something like that.

That's the thing that distinguishes software from many other fields - you can do something real and significant on your own. If you haven't managed to do so in 4 years of college you're not a good candidate.

peter nelson -> nickGregor , 21 Sep 2017 19:07
Within the next year coding will be old news and you will simply be able to describe things in your native language in such a way that the machine will be able to execute any set of instructions you give it.

In a sense that's already true, as I noted elsewhere. 90% of the code in my projects (Java and C# in their respective IDEs) is machine generated. I do relatively little "coding". But the flaw in your idea is this: most of what software designers do is not coding. It requires domain knowledge and that's where the "smart" IDEs and AI coding wizards fall down. It will be a long time before we get where you describe.

Ricardo111 -> martinusher , 21 Sep 2017 19:03
Completely agree. At the highest levels there is more work that goes into managing complexity and making sure nothing is missed than in making the wheels turn and the beepers beep.
ragingbull , 21 Sep 2017 19:02
Hang on... if the current excess of computer science grads is not driving down wages, why would training more kids to code make any difference?
Ricardo111 -> youngsteveo , 21 Sep 2017 18:59
I've actually interviewed people for very senior technical positions in Investment Banks who had all the fancy talk in the world and yet failed at some very basic "write me a piece of code that does X" tests.

Next hurdle on is people who have learned how to deal with certain situations and yet don't really understand how it works so are unable to figure it out if you change the problem parameters.

That said, the average coder is only slightly beyond this point. The ones who can take into account maintainability and flexibility for future enhancements when developing are already a minority, and those who can understand the why of software development process steps, design software system architectures or do a proper Technical Analysis are very rare.

eirsatz -> Ricardo111 , 21 Sep 2017 18:57
Hubris. It's easy to mistake efficiency born of experience as innate talent. The difference between a 'gifted coder' and a 'non gifted junior coder' is much more likely to be 10 or 15 years sitting at a computer, less if there are good managers and mentors involved.
Ed209 , 21 Sep 2017 18:57
Politicians love the idea of teaching children to 'code', because it sounds so modern, and nobody could possibly object... could they? Unfortunately it simply shows up their utter ignorance of technical matters because there isn't a language called 'coding'. Computer programming languages have changed enormously over the years, and continue to evolve. If you learn the wrong language you'll be about as welcome in the IT industry as a lamp-lighter or a comptometer operator.

The pace of change in technology can render skills and qualifications obsolete in a matter of a few years, and only the very best IT employers will bother to retrain their staff - it's much cheaper to dump them. (Most IT posts are outsourced through agencies anyway - those that haven't been off-shored.)

peter nelson -> YEverKnot , 21 Sep 2017 18:54
And this isn't even a good conspiracy theory; it's a bad one. He offers no evidence that there's an actual plan or conspiracy to do this. I'm looking for an account of where the advocates of coding education met to plot this in some castle in Europe or maybe a secret document like "The Protocols of the Elders of Google", or some such.
TheBananaBender , 21 Sep 2017 18:52
Most jobs in IT are shit - desktop support, operations droids. Most coding jobs are bug fixing.
Ricardo111 -> Wiretrip , 21 Sep 2017 18:49
Tool Users Vs Tool Makers. The really good coders actually get why certain things work as they do and can adjust them for different conditions. The mass produced coders are basically code copiers and code gluing specialists.
peter nelson -> AmyInNH , 21 Sep 2017 18:49
People who get Masters and PhD's in computer science are not usually "coders" or software engineers - they're usually involved in obscure, esoteric research for which there really is very little demand. So it doesn't surprise me that they're unemployed. But if someone has a Bachelor's in CS and they're unemployed I would have to wonder what they spent their time at university doing.

The thing about software that distinguishes it from lots of other fields is that you can make something real and significant on your own. I would expect any recent CS major I hire to be able to show me an app or an open-source component or something similar that they made themselves, and not just test scores and grades. If they could not then I wouldn't even think about hiring them.

Ricardo111 , 21 Sep 2017 18:44
Fortunately for those of us who are actually good at coding, the difference in productivity between a gifted coder and a non-gifted junior developer is something like 100-fold. Knowing how to code and actually being efficient at creating software programs and systems are about as far apart as knowing how to write and actually being able to write a bestselling exciting Crime trilogy.
peter nelson -> jamesupton , 21 Sep 2017 18:36

The rest of us will still have to stick to burning buildings down and stringing up the aristocracy.

If you know how to write software you can get a robot to do those things.

peter nelson -> Julian Williams , 21 Sep 2017 18:34
I do think there is excess supply of software programmers. There is only a modest number of decent jobs, say as an algorithms developer in finance, general architecture of complex systems or to some extent in systems security.

This article is about coding; most of those jobs require very little of that.

Most very high paying jobs in the technology sector are in the same standard upper management roles as in every other industry.

How do you define "high paying"? Everyone I know (and I know a lot because I've been a sw engineer for 40 years) who is working full-time as a software engineer is making a high-middle-class salary, and can easily afford a home, travel on holiday, investments, etc.

YEverKnot , 21 Sep 2017 18:32

Tech's push to teach coding isn't about kids' success – it's about cutting wages

Nowt like a good conspiracy theory.
freeandfair -> WithoutPurpose , 21 Sep 2017 18:31
What is a stupidly low salary? 100K?
freeandfair -> AmyInNH , 21 Sep 2017 18:30
> Already there. I take it you skipped right past the employment prospects for US STEM grads - 50% chance of finding STEM work.

That just means 50% of them are no good and need to develop their skills further or try something else.
Not everyone with a STEM degree from some third-rate college is capable of doing complex IT or STEM work.

peter nelson -> edmundberk , 21 Sep 2017 18:30

So, is teaching people English or arithmetic all about reducing wages for the literate and numerate?

Yes. Haven't you noticed how wage growth has flattened? That's because some "do-gooders" thought it would be a fine idea to educate the peasants. There was a time when only the well-to-do knew how to read and write, and that's why the well-to-do were well-to-do. Education is evil. Stop educating people and then those of us who know how to read and write can charge them for reading and writing letters and email. Better yet, we can have Chinese and Indians do it for us and we just charge a transaction fee.

AmyInNH -> peter nelson , 21 Sep 2017 18:27
Massive numbers of the public use cars; it doesn't mean millions need schooling in auto mechanics. Same for software coding. We aren't even using those who have Bachelors, Masters and PhDs in CS.
carlospapafritas , 21 Sep 2017 18:27
"..importing large numbers of skilled guest workers from other countries through the H1-B visa program..."

"skilled" is good. H1B has long ( appx 17 years) been abused and turned into trafficking scheme. One can buy H1B in India. Powerful ethnic networks wheeling & dealing in US & EU selling IT jobs to essentially migrants.

Real IT wages haven't been stagnant but have been steadily falling since the 90s. It's easy to see why. An $82K/year IT wage was about average in the 90s. Comparing the prices of housing (and pretty much everything else) between then and now gives you the idea.

freeandfair -> whitehawk66 , 21 Sep 2017 18:27
> not every kid wants or needs to have their soul sucked out of them sitting in front of a screen full of code for some idiotic service that some other douchbro thinks is the next iteration of sliced bread

Taking a couple of years of programming is not enough to do this as a job, don't worry.
But learning to code is like learning maths: it helps to develop logical thinking, which will benefit you in every area of your life.

James Dey , 21 Sep 2017 18:25
We should stop teaching our kids to be journalists, then your wage might go up.
peter nelson -> AmyInNH , 21 Sep 2017 18:23
What does this even mean?

[Oct 02, 2017] Programming is a culturally important skill

Notable quotes:
"... A lot of basic entry level jobs require a good level of Excel skills. ..."
"... Programming is a cultural skill; master it, or even understand it on a simple level, and you understand how the 21st century works, on the machinery level. To bereave the children of this crucial insight is to close off a door to their future. ..."
"... What a dumpster argument. I am not a programmer or even close, but a basic understanding of coding has been important to my professional life. Coding isn't just about writing software. Understanding how algorithms work, even simple ones, is a general skill on par with algebra. ..."
"... Never mind that a good education is clearly one of the most important things you can do for a person to improve their quality of life wherever they live in the world. It's "neoliberal," so we better hate it. ..."
"... We've seen this kind of tactic for some time now. Silicon Valley is turning into a series of micromanaged sweatshops (that's what "agile" is truly all about) with little room for genuine creativity, or even understanding of what that actually means. I've seen how impossible it is to explain to upper level management how crappy cheap developers actually diminish productivity and value. All they see is that the requisition is filled for less money. ..."
"... Libertarianism posits that everyone should be free to sell their labour or negotiate their own arrangements without the state interfering. So if cheaper foreign labour really was undercutting American labout the Libertarians would be thrilled. ..."
"... Not producing enough to fill vacancies or not producing enough to keep wages at Google's preferred rate? Seeing as research shows there is no lack of qualified developers, the latter option seems more likely. ..."
"... We're already using Asia as a source of cheap labor for the tech industry. Why do we need to create cheap labor in the US? ..."
David McCaul -> IanMcLzzz , 21 Sep 2017 13:03
There are very few professional scribes nowadays; a good level of reading and writing is simply a default even for the lowest-paid jobs. A lot of basic entry-level jobs require a good level of Excel skills. Several years from now basic coding will be necessary to manipulate basic tools for entry-level jobs, especially as increasingly a lot of real code will be generated by expert systems supervised by a tiny number of supervisors. Coding jobs will go the same way that trucking jobs will go when driverless vehicles are perfected.

anticapitalist, 21 Sep 2017 14:25

Offer the class, but don't make it mandatory. Just like I could never succeed playing football, others will not succeed at coding. The last thing the industry needs is more bad developers showing up for a paycheck.

Fanastril , 21 Sep 2017 14:08

Programming is a cultural skill; master it, or even understand it on a simple level, and you understand how the 21st century works, on the machinery level. To bereave the children of this crucial insight is to close off a door to their future. What's next, keep them off Math, because, you know...
Taylor Dotson -> freeandfair , 21 Sep 2017 13:59
That's some crystal ball you have there. English teachers will need to know how to code? Same with plumbers? Same with janitors, CEOs, and anyone working in the service industry?
PolydentateBrigand , 21 Sep 2017 12:59
The economy isn't a zero-sum game. Developing a more skilled workforce that can create more value will lead to economic growth and improvement in the general standard of living. Talented coders will start new tech businesses and create more jobs.

WumpieJr , 21 Sep 2017 13:23

What a dumpster argument. I am not a programmer or even close, but a basic understanding of coding has been important to my professional life. Coding isn't just about writing software. Understanding how algorithms work, even simple ones, is a general skill on par with algebra.

But it isn't just about coding for Tarnoff. He seems to hold education in contempt generally. "The far-fetched premise of neoliberal school reform is that education can mend our disintegrating social fabric." If they can't literally fix everything let's just get rid of them, right?

Never mind that a good education is clearly one of the most important things you can do for a person to improve their quality of life wherever they live in the world. It's "neoliberal," so we better hate it.

mlzarathustra , 21 Sep 2017 16:52
I agree with the basic point. We've seen this kind of tactic for some time now. Silicon Valley is turning into a series of micromanaged sweatshops (that's what "agile" is truly all about) with little room for genuine creativity, or even understanding of what that actually means. I've seen how impossible it is to explain to upper level management how crappy cheap developers actually diminish productivity and value. All they see is that the requisition is filled for less money.

The bigger problem is that nobody cares about the arts, and as expensive as education is, nobody wants to carry around a debt on a skill that won't bring in the bucks. And smartphone-obsessed millennials have too short an attention span to fathom how empty their lives are, devoid of aesthetic depth as they are.

I can't draw a definite link, but I think algorithm fails, which are based on fanatical reliance on programmed routines as the solution to everything, are rooted in the shortage of education and cultivation in the arts.

Economics is a social science, and all this is merely a reflection of shared cultural values. The problem is, people think it's math (it's not) and therefore set in stone.

AmyInNH -> peter nelson , 21 Sep 2017 16:51
Geeze it'd be nice if you'd make an effort.
rucore.libraries.rutgers.edu/rutgers-lib/45960/PDF/1/
https://rucore.libraries.rutgers.edu/rutgers-lib/46156/
https://rucore.libraries.rutgers.edu/rutgers-lib/46207/
peter nelson -> WyntonK , 21 Sep 2017 16:45
Libertarianism posits that everyone should be free to sell their labour or negotiate their own arrangements without the state interfering. So if cheaper foreign labour really was undercutting American labour, the Libertarians would be thrilled.

But it's not. I'm in my 60's and retiring but I've been a software engineer all my life. I've worked for many different companies, and in different industries and I've never had any trouble competing with cheap imported workers. The people I've seen fall behind were ones who did not keep their skills fresh. When I was laid off in 2009 in my mid-50's I made sure my mobile-app skills were bleeding edge (in those days ANYTHING having to do with mobile was bleeding edge) and I used to go to job interviews with mobile devices to showcase what I could do. That way they could see for themselves and not have to rely on just a CV.

The older guys who fell behind did so because their skills and toolsets had become obsolete.

Now I'm trying to hire a replacement to write Android code for use in industrial production and struggling to find someone with enough experience. So where is this oversupply I keep hearing about?

Jared Hall -> RogTheDodge , 21 Sep 2017 16:42
Not producing enough to fill vacancies or not producing enough to keep wages at Google's preferred rate? Seeing as research shows there is no lack of qualified developers, the latter option seems more likely.
JayThomas , 21 Sep 2017 16:39

It's about ensuring those salaries no longer exist, by creating a source of cheap labor for the tech industry.

We're already using Asia as a source of cheap labor for the tech industry. Why do we need to create cheap labor in the US? That just seems inefficient.

FabBlondie -> RogTheDodge , 21 Sep 2017 16:39
There was never any need to give our jobs to foreigners. That is, if you are comparing the production of domestic vs. foreign workers. The sole need was, and is, to increase profits.
peter nelson -> AmyInNH , 21 Sep 2017 16:34
Link?
FabBlondie , 21 Sep 2017 16:34
Schools MAY be able to fix big social problems, but only if they teach a well-rounded curriculum that includes classical history and the humanities. Job-specific training is completely different. What a joke to persuade public school districts to pick up the tab on job training. The existing social problems were not caused by a lack of programmers, and cannot be solved by Big Tech.

I agree with the author that computer programming skills are not that limited in availability. Big Tech solved the problem of the well-paid professional some years ago by letting them go, these were mostly workers in their 50s, and replacing them with H1-B visa-holders from India -- who work for a fraction of their experienced American counterparts.

It is all about profits. Big Tech is no different than any other "industry."

peter nelson -> Jared Hall , 21 Sep 2017 16:31
Supply of apples does not affect the demand for oranges. Teaching coding in high school does not necessarily alter the supply of software engineers. I studied Chinese History and geology at University but my doing so has had no effect on the job prospects of people doing those things for a living.
johnontheleft -> Taylor Dotson , 21 Sep 2017 16:30
You would be surprised just how much a little coding knowledge has transformed my ability to do my job (a job that is not directly related to IT at all).
peter nelson -> Jared Hall , 21 Sep 2017 16:29
Because teaching coding does not affect the supply of actual engineers. I've been a professional software engineer for 40 years and coding is only a small fraction of what I do.
peter nelson -> Jared Hall , 21 Sep 2017 16:28
You and the linked article don't know what you're talking about. A CS degree does not equate to a productive engineer.

A few years ago I was on the recruiting and interviewing committee to try to hire some software engineers for a scientific instrument my company was making. The entire team had about 60 people (hw, sw, mech engineers) but we needed 2 or 3 sw engineers with math and signal-processing expertise. The project was held up for SIX months because we could not find the people we needed. It would have taken a lot longer than that to train someone up to our needs. Eventually we brought in some Chinese engineers which cost us MORE than what we would have paid for an American engineer when you factor in the agency and visa paperwork.

Modern software engineers are not just generic interchangeable parts - 21st century technology often requires specialised scientific, mathematical, production or business domain-specific knowledge, and those people are hard to find.

freeluna -> freeluna , 21 Sep 2017 16:18
...also, this article is alarmist and I disagree with it. Dear Author, Phphphphtttt! Sincerely, freeluna
AmyInNH , 21 Sep 2017 16:16
Regimentation of the many, for benefit of the few.
AmyInNH -> Whatitsaysonthetin , 21 Sep 2017 16:15
Visa jobs are part of trade agreements. To be very specific, US gov (and EU) trade Western jobs for market access in the East.
http://www.marketwatch.com/story/in-india-british-leader-theresa-may-preaches-free-trade-2016-11-07
There is no shortage. This is selling off the West's middle class.
Take a look at remittances on Wikipedia and you'll get a good idea just how much it costs the US and EU economies, for the sake of record profits to Western industry.
jigen , 21 Sep 2017 16:13
And thanks to the author for not using the adjective "elegant" in describing coding.
freeluna , 21 Sep 2017 16:13
I see advantages in teaching kids to code, and for kids to make arduino and other CPU powered things. I don't see a lot of interest in science and tech coming from kids in school. There are too many distractions from social media and game platforms, and not much interest in developing tools for future tech and science.
jigen , 21 Sep 2017 16:13
Let the robots do the coding. Sorted.
FluffyDog -> rgilyead , 21 Sep 2017 16:13
Although coding per se is a technical skill it isn't designing or integrating systems. It is only a small, although essential, part of the whole software engineering process. Learning to code just gets you up the first steps of a high ladder that you need to climb a fair way if you intend to use your skills to earn a decent living.
rebel7 , 21 Sep 2017 16:11
BS.

Friend of mine in the SV tech industry reports that they are about 100,000 programmers short in just the internet security field.

Y'all are trying to create a problem where there isn't one. Maybe we shouldn't teach them how to read either. They might want to work somewhere besides the grill at McDonalds.

AmyInNH -> WyntonK , 21 Sep 2017 16:11
To which they will respond, offshore.
AmyInNH -> MrFumoFumo , 21 Sep 2017 16:10
They're not looking for good, they're looking for cheap + visa indentured. Non-citizens.
nickGregor , 21 Sep 2017 16:09
Within the next year coding will be old news and you will simply be able to describe things in your native language in such a way that the machine will be able to execute any set of instructions you give it. Coding is going to change from its purely abstract form that is not utilized at peak, but if you can describe what you envision in an effective, concise manner you could become a very good coder very quickly -- and competence will be determined entirely by imagination and the barriers of entry will all but be extinct
AmyInNH -> unclestinky , 21 Sep 2017 16:09
Already there. I take it you skipped right past the employment prospects for US STEM grads - 50% chance of finding STEM work.
AmyInNH -> User10006 , 21 Sep 2017 16:06
Apparently a whole lot of people are just making it up, eh?
http://www.motherjones.com/politics/2017/09/inside-the-growing-guest-worker-program-trapping-indian-students-in-virtual-servitude /
From today,
http://www.computerworld.com/article/2915904/it-outsourcing/fury-rises-at-disney-over-use-of-foreign-workers.html
All the way back to 1995,
https://www.youtube.com/watch?v=vW8r3LoI8M4&feature=youtu.be
JCA1507 -> whitehawk66 , 21 Sep 2017 16:04
Bravo
JCA1507 -> DirDigIns , 21 Sep 2017 16:01
Total... utter... no other way... huge... will only get worse... everyone... (not a very nuanced commentary is it).

I'm glad pieces like this are mounting, it is relevant that we counter the mix of messianism and opportunism of Silicon Valley propaganda with convincing arguments.

RogTheDodge -> WithoutPurpose , 21 Sep 2017 16:01
That's not my experience.
AmyInNH -> TTauriStellarbody , 21 Sep 2017 16:01
It's a stall tactic by Silicon Valley, "See, we're trying to resolve the [non-existant] shortage."
AmyInNH -> WyntonK , 21 Sep 2017 16:00
They aren't immigrants. They're visa indentured foreign workers. Why does that matter? It's part of the cheap+indentured hiring criteria. If it were only cheap, they'd be lowballing offers to citizen and US new grads.
RogTheDodge -> Jared Hall , 21 Sep 2017 15:59
No. Because they're the ones wanting them and realizing the US education system is not producing enough
RogTheDodge -> Jared Hall , 21 Sep 2017 15:58
Except the demand is increasing massively.
RogTheDodge -> WyntonK , 21 Sep 2017 15:57
That's why we are trying to educate American coders - so we don't need to give our jobs to foreigners.
AmyInNH , 21 Sep 2017 15:56
Correct premises,
- proletarianize programmers
- many qualified graduates simply can't find jobs.
Invalid conclusion:
- The problem is there aren't enough good jobs to be trained for.

That conclusion only makes sense if you skip right past ...
" importing large numbers of skilled guest workers from other countries through the H1-B visa program. These workers earn less than their American counterparts, and possess little bargaining power because they must remain employed to keep their status"

Hiring Americans doesn't "hurt" their record profits. It's incessant greed and collusion with our corrupt congress.

Oldvinyl , 21 Sep 2017 15:51
This column was really annoying. I taught my students how to program when I was given a free hand to create the computer studies curriculum for a new school I joined. (Not in the UK thank Dog). 7th graders began with studying the history and uses of computers and communications tech. My 8th grade learned about computer logic (AND, OR, NOT, etc) and moved on with QuickBASIC in the second part of the year. My 9th graders learned about databases and SQL and how to use HTML to make their own Web sites. Last year I received a phone call from the father of one student thanking me for creating the course, his son had just received a job offer and now works in San Francisco for Google.
I am so glad I taught them "coding" (UGH) as the writer puts it, rather than arty-farty subjects not worth a damn in the jobs market.
WyntonK -> DirDigIns , 21 Sep 2017 15:47
I live and work in Silicon Valley and you have no idea what you are talking about. There's no shortage of coders at all. Terrific coders are let go because of their age and the availability of much cheaper foreign coders(no, I am not opposed to immigration).
Sean May , 21 Sep 2017 15:43
Looks like you pissed off a ton of people who can't write code and are none too happy with you pointing out the reason they're slinging insurance for geico.

I think you're quite right that coding skills will eventually enter the mainstream and slowly bring down the cost of hiring programmers.

The fact is that even if you don't get paid to be a programmer you can absolutely benefit from having some coding skills.

There may however be some kind of major coding revolution with the advent of quantum computing. The way code is written now could become obsolete.

Jared Hall -> User10006 , 21 Sep 2017 15:43
Why is it a fantasy? Does supply and demand not apply to IT labor pools?
Jared Hall -> ninianpark , 21 Sep 2017 15:42
Why is it a load of crap? If you increase the supply of something with no corresponding increase in demand, the price will decrease.
pictonic , 21 Sep 2017 15:40
A well-argued article that hits the nail on the head. Amongst any group of coders, very few are truly productive, and they are self starters; training is really needed to do the admin.
Jared Hall -> DirDigIns , 21 Sep 2017 15:39
There is not a huge skills shortage. That is why the author linked this EPI report analyzing the data to prove exactly that. This may not be what people want to believe, but it is certainly what the numbers indicate. There is no skills gap.

http://www.epi.org/files/2013/bp359-guestworkers-high-skill-labor-market-analysis.pdf

Axel Seaton -> Jaberwocky , 21 Sep 2017 15:34
Yeah, but the money is crap
DirDigIns -> IanMcLzzz , 21 Sep 2017 15:32
Perfect response for the absolute crap that the article is pushing.
DirDigIns , 21 Sep 2017 15:30
Total and utter crap, no other way to put it.

There is a huge skills shortage in key tech areas that will only get worse if we don't educate and train the young effectively.

Everyone wants youth to have good skills for the knowledge economy and the ability to earn a good salary and build up life chances for UK youth.

So we get this verbal diarrhoea of an article. Defies belief.

Whatitsaysonthetin -> Evelita , 21 Sep 2017 15:27
Yes. China and India are indeed training youth in coding skills. In order that they take jobs in the USA and UK! It's been going on for 20 years and has resulted in many experienced IT staff struggling to get work at all and, even if they can, to suffer stagnating wages.
WmBoot , 21 Sep 2017 15:23
Wow. Congratulations to the author for provoking such a torrent of vitriol! Job well done.
TTauriStellarbody , 21 Sep 2017 15:22
Has anyone's job been at risk from a 16 year old who can cobble together a couple of lines of javascript since the dot com bubble?

Good luck trying to teach a big enough pool of US school kids regular expressions let alone the kind of test driven continuous delivery that is the norm in the industry now.

freeandfair -> youngsteveo , 21 Sep 2017 13:27
> A lot of resumes come across my desk that look qualified on paper, but that's not the same thing as being able to do the job

I have exactly the same experience. There is undeniably a skill gap. It takes about a year for a skilled professional to adjust and learn enough to become productive; it takes about 3-5 years for a college grad.

It is nothing new. But the issue is, as the college grad gets trained, another company steals him/her. And also keep in mind, all this time you are doing your job and training the new employee as time permits. Many companies in the US cut the non-profit departments (such as IT) to the bone; we cannot afford to lose a person and then train another replacement for 3-5 years.

The solution? Hire a skilled person. But that means nobody is training college grads, and in 10-20 years we are looking at a skill shortage to the point where the only option is bringing in foreign labor.

American cut-throat companies that care only about the bottom line cannibalized themselves.

farabundovive -> Ethan Hawkins , 21 Sep 2017 15:10

Heh. You are not a coder, I take it. :) Going to be a few decades before even the easiest coding jobs vanish.

Given how shit most coders of my acquaintance have been - especially in matters of work ethic, logic, matching s/w to user requirements and willingness to test and correct their gormless output - most future coding work will probably be in the area of disaster recovery. Sorry, since the poor snowflakes can't face the sad facts, we have to call it "business continuation" these days, don't we?
UncommonTruthiness , 21 Sep 2017 14:10
The demonization of Silicon Valley is clearly the next place to put all blame. Look what "they" did to us: computers, smart phones, HD television, world-wide internet, on and on. Get a rope!

I moved there in 1978 and watched the orchards and trailer parks on North 1st St. of San Jose transform into a concrete jungle. There used to be quite a bit of semiconductor equipment and device manufacturing in SV during the 80s and 90s. Now quite a few buildings have the same name : AVAILABLE. Most equipment and device manufacturing has moved to Asia.

Programming started with binary, then machine code (hexadecimal or octal) and moved to assembler as a compiled and linked structure. More compiled languages like FORTRAN, BASIC, PL-1, COBOL, PASCAL, C (and all its "+'s") followed making programming easier for the less talented. Now the script based languages (HTML, JAVA, etc.) are even higher level and accessible to nearly all. Programming has become a commodity and will be priced like milk, wheat, corn, non-unionized workers and the like. The ship has sailed on this activity as a career.

[Oct 01, 2017] How to Use Script Command To Record Linux Terminal Session

Oct 01, 2017 | linoxide.com

How to Use "Script" Command To Record Linux Terminal Session May 30, 2014 By Pungki Arianto Updated June 14, 2017 Facebook Google+ Twitter Pinterest LinkedIn StumbleUpon Reddit Email This script command is very helpful for system admin. If any problem occurs to the system, it is very difficult to find what command was executed previously. Hence, system admin knows the importance of this script command. Sometimes you are on the server and you think to yourself that your team or somebody you know is actually missing a documentation on how to do a specific configuration. It is possible for you to do the configuration, record all actions of your shell session and show the record to the person who will see exactly what you had (the same output) on your shell at the moment of the configuration. How does script command work?

script command records a shell session for you so that you can look at the output that you saw at the time and you can even record with timing so that you can have a real-time playback. It is really useful and comes in handy in the strangest kind of times and places.

The script command keeps action log for various tasks. The script records everything in a session such as things you type, things you see. To do this you just type script command on the terminal and type exit when finished. Everything between the script and the exit command is logged to the file. This includes the confirmation messages from script itself.

1. Record your terminal session

script makes a typescript of everything printed on your terminal. If the argument file is given, script saves all dialogue in the indicated file in the current directory. If no file name is given, the typescript is saved in the default file typescript. To record your shell session (that is, what you are doing in the current shell), just use the command below

# script shell_record1
Script started, file is shell_record1

It indicates that a file shell_record1 is created. Let's check the file

# ls -l shell_*
-rw-r--r-- 1 root root 0 Jun 9 17:50 shell_record1

After completion of your task, you can enter exit or Ctrl-d to close down the script session and save the file.

# exit
exit
Script done, file is shell_record1

You can see that script indicates the filename.

2. Check the content of a recorded terminal session

When you use the script command, it records everything in a session: the things you type and all the output you see. As the output is saved into a file, it is possible to check its content after exiting a recorded session. You can simply use a text editor or a text file viewer command.

# cat shell_record1 
Script started on Fri 09 Jun 2017 06:23:41 PM UTC
[root@centos-01 ~]# date
Fri Jun 9 18:23:46 UTC 2017
[root@centos-01 ~]# uname -a
Linux centos-01 3.10.0-514.16.1.el7.x86_64 #1 SMP Wed Apr 12 15:04:24 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
[root@centos-01 ~]# whoami
root
[root@centos-01 ~]# pwd
/root
[root@centos-01 ~]# exit
exit

Script done on Fri 09 Jun 2017 06:25:11 PM UTC

While viewing the file you realize that script also stores line feeds and backspaces. It also indicates the time of the recording at the top and the end of the file.
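If the raw typescript is hard to read because of those stored backspaces and escape sequences, one common workaround (just a sketch, assuming the standard col and less utilities are installed) is to filter or page the file:

# col -bp < shell_record1 > shell_record1_clean
# less -R shell_record1

The first command strips backspaces and writes a cleaner plain-text copy; the second lets the pager interpret color escape sequences instead of printing them literally.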

3. Record several terminal sessions

You can record as many terminal sessions as you want. When you finish one recording, just begin another. This can be helpful if you want to record several configurations that you are doing, to show them to your team or students for example. You just need to name each recording file.

For example, let us assume that you have to do OpenLDAP, DNS and Machma configurations. You will need to record each configuration. To do this, just create a recording file corresponding to each configuration.

# script openldap_record
   ...............
    configuration step
   ..............
# exit

When you have finished with the first configuration, begin to record the next configuration

# script machma_record
    ............
     configuration steps
    .............
# exit

And so on for the others. Note that if you run the script command followed by an existing filename, the file will be replaced, so you will lose everything it previously contained.

Now, let us imagine that you have begun the Machma configuration but you have to abort it in order to finish the DNS configuration because of some emergency. Later you want to continue the Machma configuration where you left off. That means you want to record the next steps into the existing file machma_record without deleting its previous content; to do this you use the script -a command to append the new output to the file.

This is the content of our recorded file

Now if we want to continue our recording in this file without deleting the content already present, we will do

# script -a machma_record
Script started, file is machma_record

Now continue the configuration, then exit when finished and let's check the content of the recorded file.

Note the new timestamp of the new recording which appears. You can see that the file contains both the previous and the current recordings.

4. Replay a linux terminal session

We have seen that it is possible to view the content of the recorded file with commands that display text files. The script command also makes it possible to see the recorded session as a video: you can review exactly what you did, step by step, at the moment you were entering the commands, as if you were watching a video. In other words, you can play back (replay) the recorded terminal session.

To do this, you have to use the --timing option of the script command when you start the recording.

# script --timing=file_time shell_record1
Script started, file is shell_record1

Note that the file into which the session is recorded is shell_record1. When the recording is finished, exit normally

# exit
exit
Script done, file is shell_record1

Let's check the content of file_time

# cat file_time 
0.807440 49
0.030061 1
116.131648 1
0.226914 1
0.033997 1
0.116936 1
0.104201 1
0.392766 1
0.301079 1
0.112105 2
0.363375 152

The --timing option outputs timing data to the indicated file. This data contains two fields, separated by a space: how much time elapsed since the previous output, and how many characters were output this time. This information can be used to replay typescripts with realistic typing and output delays.

Now, to replay the terminal session, we use the scriptreplay command instead of the script command, with the same syntax used when recording the session. Look below

# scriptreplay --timing=file_time shell_record1

You will see that the recorded session is played back as if you were watching a video recording of everything you were doing. You can also just pass the timing file without spelling out --timing=file_time. Look below

# scriptreplay file_time shell_record1

In this form, the first parameter is the timing file and the second is the recorded file.
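If your version of scriptreplay supports it (this is an assumption; check scriptreplay --help), a third parameter acts as a speed divisor, so you can replay a long session faster or slow it down:

# scriptreplay file_time shell_record1 2      # twice as fast
# scriptreplay file_time shell_record1 0.5    # half speed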

Conclusion

The script command can be your go-to tool for documenting your work and showing others what you did in a session. It can be used as a way to log what you are doing in a shell session. When you run script, a new shell is forked; it reads standard input and output from your terminal tty and stores the data in a file.
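A related trick, assuming your build of script supports the -c option (most util-linux versions do), is to record the output of a single command rather than a whole interactive session; script exits as soon as the command finishes:

# script -c "df -h" disk_report
# cat disk_report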

[Sep 27, 2017] Chkservice - An Easy Way to Manage Systemd Units in Terminal

Sep 27, 2017 | linoxide.com

Systemd is a system and service manager for Linux operating systems which introduces the concept of systemd units and provides a number of features such as parallel startup of system services at boot time, on-demand activation of daemons, etc. It helps you manage services on your Linux OS: starting, stopping, reloading. But to operate on services with systemd, you need to know which services are present and the name that exactly matches each service. There is a tool that helps Linux users navigate through the different services available on the system, much as the top command lets you browse the processes currently running.

What is chkservice?

chkservice is a new and handy tool for managing systemd units in a terminal. It is a GitHub project developed by Svetlana Linuxenko. Its particularity is that it lists the different services present on your system: you get a view of each available service and can manage it as you want.

Debian:

sudo add-apt-repository ppa:linuxenko/chkservice
sudo apt-get update
sudo apt-get install chkservice

Arch

git clone https://aur.archlinux.org/chkservice.git
cd chkservice
makepkg -si

Fedora

dnf copr enable srakitnican/default
dnf install chkservice

chkservice requires super-user privileges to make changes to unit states or sysv scripts. For a regular user it works read-only.

Package dependencies:

Build dependencies:

Build and install debian package.

git clone https://github.com/linuxenko/chkservice.git
mkdir build
cd build
cmake -DCMAKE_INSTALL_PREFIX=/usr ../
cpack

dpkg -i chkservice-x.x.x.deb

Build release version.

git clone https://github.com/linuxenko/chkservice.git
mkdir build
cd build
cmake ../
make
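Once installed, usage is minimal. The sketch below assumes the chkservice binary ended up on your PATH; run it with sudo to be able to change unit states, or as a normal user for read-only browsing:

sudo chkservice     # interactive list of units, changes allowed
chkservice          # same interface, read-only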

[Sep 27, 2017] Arithmetic Evaluation

Sep 27, 2017 | mywiki.wooledge.org

Bash has several different ways to say we want to do arithmetic instead of string operations. Let's look at them one by one.

The first way is the let command:

$ unset a; a=4+5
$ echo $a
4+5
$ let a=4+5
$ echo $a
9

You may use spaces, parentheses and so forth, if you quote the expression:

$ let a='(5+2)*3'

For a full list of operators available, see help let or the manual.
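As a small illustration of a few of those operators (a minimal sketch; any recent Bash will do):

$ let a=2**10 b=a/3 c=a%3    # exponentiation, integer division, remainder
$ echo "$a $b $c"
1024 341 1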

Next, the actual arithmetic evaluation compound command syntax:

$ ((a=(5+2)*3))

This is equivalent to let , but we can also use it as a command , for example in an if statement:

$ if (($a == 21)); then echo 'Blackjack!'; fi

Operators such as == , < , > and so on cause a comparison to be performed, inside an arithmetic evaluation. If the comparison is "true" (for example, 10 > 2 is true in arithmetic -- but not in strings!) then the compound command exits with status 0. If the comparison is false, it exits with status 1. This makes it suitable for testing things in a script.
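A quick way to see that exit-status behavior (a minimal sketch):

$ ((10 > 2)); echo $?     # the comparison is true, so the status is 0
0
$ ((10 < 2)); echo $?     # the comparison is false, so the status is 1
1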

Although not a compound command, an arithmetic substitution (or arithmetic expression ) syntax is also available:

$ echo "There are $(($rows * $columns)) cells"

Inside $((...)) is an arithmetic context , just like with ((...)) , meaning we do arithmetic (multiplying things) instead of string manipulations (concatenating $rows , space, asterisk, space, $columns ). $((...)) is also portable to the POSIX shell, while ((...)) is not.

Readers who are familiar with the C programming language might wish to know that ((...)) has many C-like features. Among them are the ternary operator:

$ ((abs = (a >= 0) ? a : -a))

and the use of an integer value as a truth value:

$ if ((flag)); then echo "uh oh, our flag is up"; fi

Note that we used variables inside ((...)) without prefixing them with $ -signs. This is a special syntactic shortcut that Bash allows inside arithmetic evaluations and arithmetic expressions.

There is one final thing we must mention about ((flag)) . Because the inside of ((...)) is C-like, a variable (or expression) that evaluates to zero will be considered false for the purposes of the arithmetic evaluation. Then, because the evaluation is false, it will exit with a status of 1. Likewise, if the expression inside ((...)) is non-zero , it will be considered true ; and since the evaluation is true, it will exit with status 0. This is potentially very confusing, even to experts, so you should take some time to think about this. Nevertheless, when things are used the way they're intended, it makes sense in the end:

$ flag=0      # no error
$ while read line; do
>   if [[ $line = *err* ]]; then flag=1; fi
> done < inputfile
$ if ((flag)); then echo "oh no"; fi

[Sep 27, 2017] Integer ASCII value to character in BASH using printf

Sep 27, 2017 | stackoverflow.com

user14070 , asked May 20 '09 at 21:07

Character to value works:
$ printf "%d\n" \'A
65
$

I have two questions, the first one is most important:

broaden , answered Nov 18 '09 at 10:10

One line
printf "\x$(printf %x 65)"

Two lines

set $(printf %x 65)
printf "\x$1"

Here is one if you do not mind using awk

awk 'BEGIN{printf "%c", 65}'

mouviciel , answered May 20 '09 at 21:12

For this kind of conversion, I use perl:
perl -e 'printf "%c\n", 65;'

user2350426 , answered Sep 22 '15 at 23:16

This works (with the value in octal):
$ printf '%b' '\101'
A

even for (some: don't go over 7) sequences:

$ printf '%b' '\'{101..107}
ABCDEFG

A general construct that allows (decimal) values in any range is:

$ printf '%b' $(printf '\\%03o' {65..122})
ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz

Or you could use the hex values of the characters:

$ printf '%b' $(printf '\\x%x' {65..122})
ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz

You also could get the character back with xxd (use hexadecimal values):

$ echo "41" | xxd -p -r
A

That is, one action is the reverse of the other:

$ printf "%x" "'A" | xxd -p -r
A

And also works with several hex values at once:

$ echo "41 42 43 44 45 46 47 48 49 4a" | xxd -p -r
ABCDEFGHIJ

or sequences (printf is used here to get hex values):

$ printf '%x' {65..90} | xxd -r -p 
ABCDEFGHIJKLMNOPQRSTUVWXYZ

Or even use awk:

$ echo 65 | awk '{printf("%c",$1)}'
A

even for sequences:

$ seq 65 90 | awk '{printf("%c",$1)}'
ABCDEFGHIJKLMNOPQRSTUVWXYZ

David Hu , answered Dec 1 '11 at 9:43

For your second question, it seems the leading-quote syntax ( \'A ) is specific to printf :

If the leading character is a single-quote or double-quote, the value shall be the numeric value in the underlying codeset of the character following the single-quote or double-quote.

From http://pubs.opengroup.org/onlinepubs/009695399/utilities/printf.html
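Putting the two directions together, here is a small pair of helper functions (just a sketch; the names chr and ord are my own, not standard commands):

chr() { printf '%b' "$(printf '\\%03o' "$1")"; }   # decimal value -> character
ord() { printf '%d' "'$1"; }                       # character -> decimal value

$ chr 65; echo
A
$ ord A; echo
65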

Naaff , answered May 20 '09 at 21:21

One option is to directly input the character you're interested in using hex or octal notation:
printf "\x41\n"
printf "\101\n"

MagicMercury86 , answered Feb 21 '12 at 22:49

If you want to save the ASCII value of the character: (I did this in BASH and it worked)
char="A"

testing=$( printf "%d" "'${char}" )

echo $testing

output: 65

chand , answered Nov 20 '14 at 10:05

Here's yet another way to convert 65 into A (via octal):
help printf  # in Bash
man bash | less -Ip '^[[:blank:]]*printf'

printf "%d\n" '"A'
printf "%d\n" "'A"

printf '%b\n' "$(printf '\%03o' 65)"

To search in man bash for \' use (though futile in this case):

man bash | less -Ip "\\\'"  # press <n> to go through the matches

,

If you convert 65 to hexadecimal it's 0x41 :

$ echo -e "\x41"
A

[Sep 27, 2017] linux - How to convert DOS-Windows newline (CRLF) to Unix newline in a Bash script

Notable quotes:
"... Technically '1' is your program, b/c awk requires one when given option. ..."

Koran Molovik , asked Apr 10 '10 at 15:03

How can I programmatically (i.e., not using vi ) convert DOS/Windows newlines to Unix?

The dos2unix and unix2dos commands are not available on certain systems. How can I emulate these with commands like sed / awk / tr ?

Jonathan Leffler , answered Apr 10 '10 at 15:13

You can use tr to convert from DOS to Unix; however, you can only do this safely if CR appears in your file only as the first byte of a CRLF byte pair. This is usually the case. You then use:
tr -d '\015' <DOS-file >UNIX-file

Note that the name DOS-file is different from the name UNIX-file ; if you try to use the same name twice, you will end up with no data in the file.

You can't do it the other way round (with standard 'tr').

If you know how to enter carriage return into a script ( control-V , control-M to enter control-M), then:

sed 's/^M$//'     # DOS to Unix
sed 's/$/^M/'     # Unix to DOS

where the '^M' is the control-M character. You can also use the bash ANSI-C Quoting mechanism to specify the carriage return:

sed $'s/\r$//'     # DOS to Unix
sed $'s/$/\r/'     # Unix to DOS

However, if you're going to have to do this very often (more than once, roughly speaking), it is far more sensible to install the conversion programs (e.g. dos2unix and unix2dos , or perhaps dtou and utod ) and use them.

ghostdog74 , answered Apr 10 '10 at 15:21

tr -d "\r" < file

take a look here for examples using sed :

# IN UNIX ENVIRONMENT: convert DOS newlines (CR/LF) to Unix format.
sed 's/.$//'               # assumes that all lines end with CR/LF
sed 's/^M$//'              # in bash/tcsh, press Ctrl-V then Ctrl-M
sed 's/\x0D$//'            # works on ssed, gsed 3.02.80 or higher

# IN UNIX ENVIRONMENT: convert Unix newlines (LF) to DOS format.
sed "s/$/`echo -e \\\r`/"            # command line under ksh
sed 's/$'"/`echo \\\r`/"             # command line under bash
sed "s/$/`echo \\\r`/"               # command line under zsh
sed 's/$/\r/'                        # gsed 3.02.80 or higher

Use sed -i for in-place conversion e.g. sed -i 's/..../' file .

Steven Penny , answered Apr 30 '14 at 10:02

Doing this with POSIX is tricky:

To remove carriage returns:

ex -bsc '%!awk "{sub(/\r/,\"\")}1"' -cx file

To add carriage returns:

ex -bsc '%!awk "{sub(/$/,\"\r\")}1"' -cx file

Norman Ramsey , answered Apr 10 '10 at 22:32

This problem can be solved with standard tools, but there are sufficiently many traps for the unwary that I recommend you install the flip command, which was written over 20 years ago by Rahul Dhesi, the author of zoo. It does an excellent job converting file formats while, for example, avoiding the inadvertent destruction of binary files, which is a little too easy if you just race around altering every CRLF you see...

Gordon Davisson , answered Apr 10 '10 at 17:50

The solutions posted so far only deal with part of the problem, converting DOS/Windows' CRLF into Unix's LF; the part they're missing is that DOS use CRLF as a line separator , while Unix uses LF as a line terminator . The difference is that a DOS file (usually) won't have anything after the last line in the file, while Unix will. To do the conversion properly, you need to add that final LF (unless the file is zero-length, i.e. has no lines in it at all). My favorite incantation for this (with a little added logic to handle Mac-style CR-separated files, and not molest files that're already in unix format) is a bit of perl:
perl -pe 'if ( s/\r\n?/\n/g ) { $f=1 }; if ( $f || ! $m ) { s/([^\n])\z/$1\n/ }; $m=1' PCfile.txt

Note that this sends the Unixified version of the file to stdout. If you want to replace the file with a Unixified version, add perl's -i flag.

codaddict , answered Apr 10 '10 at 15:09

Using AWK you can do:
awk '{ sub("\r$", ""); print }' dos.txt > unix.txt

Using Perl you can do:

perl -pe 's/\r$//' < dos.txt > unix.txt

anatoly techtonik , answered Oct 31 '13 at 9:40

If you don't have access to dos2unix , but can read this page, then you can copy/paste dos2unix.py from here.
#!/usr/bin/env python
"""\
convert dos linefeeds (crlf) to unix (lf)
usage: dos2unix.py <input> <output>
"""
import sys

if len(sys.argv[1:]) != 2:
  sys.exit(__doc__)

content = ''
outsize = 0
with open(sys.argv[1], 'rb') as infile:
  content = infile.read()
with open(sys.argv[2], 'wb') as output:
  for line in content.splitlines():
    outsize += len(line) + 1
    output.write(line + '\n')

print("Done. Saved %s bytes." % (len(content)-outsize))

Cross-posted from superuser .

nawK , answered Sep 4 '14 at 0:16

An even simpler awk solution w/o a program:
awk -v ORS='\r\n' '1' unix.txt > dos.txt

Technically '1' is your program, b/c awk requires one when given option.

UPDATE : After revisiting this page for the first time in a long time I realized that no one has yet posted an internal solution, so here is one:

while IFS= read -r line;
do printf '%s\n' "${line%$'\r'}";
done < dos.txt > unix.txt

Santosh , answered Mar 12 '15 at 22:36

This worked for me
tr "\r" "\n" < sampledata.csv > sampledata2.csv

ThorSummoner , answered Jul 30 '15 at 17:38

Super duper easy with PCRE;

As a script, or replace $@ with your files.

#!/usr/bin/env bash
perl -pi -e 's/\r\n/\n/g' -- $@

This will overwrite your files in place!

I recommend only doing this with a backup (version control or otherwise)

Ashley Raiteri , answered May 19 '14 at 23:25

For Mac OS X, if you have Homebrew installed ( http://brew.sh/ ):
brew install dos2unix

for csv in *.csv; do dos2unix -c mac ${csv}; done;

Make sure you have made copies of the files, as this command will modify the files in place. The -c mac option makes the switch to be compatible with osx.

lzc , answered May 31 '16 at 17:15

TIMTOWTDI!
perl -pe 's/\r\n/\n/; s/([^\n])\z/$1\n/ if eof' PCfile.txt

Based on @GordonDavisson

One must consider the possibility of [noeol] ...

kazmer , answered Nov 6 '16 at 23:30

You can use awk. Set the record separator ( RS ) to a regexp that matches all possible newline character, or characters. And set the output record separator ( ORS ) to the unix-style newline character.
awk 'BEGIN{RS="\r|\n|\r\n|\n\r";ORS="\n"}{print}' windows_or_macos.txt > unix.txt

user829755 , answered Jul 21 at 9:21

Interestingly, in my Git Bash on Windows, sed "" did the trick already:
$ echo -e "abc\r" >tst.txt
$ file tst.txt
tst.txt: ASCII text, with CRLF line terminators
$ sed -i "" tst.txt
$ file tst.txt
tst.txt: ASCII text

My guess is that sed ignores them when reading lines from input and always writes unix line endings on output.

Gannet , answered Jan 24 at 8:38

As an extension to Jonathan Leffler's Unix to DOS solution, to safely convert to DOS when you're unsure of the file's current line endings:
sed '/^M$/! s/$/^M/'

This checks that the line does not already end in CRLF before converting to CRLF.

vmsnomad , answered Jun 23 at 18:37

I just had to ponder that same question (on the Windows side, but it is equally applicable to Linux). Surprisingly, nobody mentioned a very automated way of doing CRLF<->LF conversion for text files using the good old zip -ll option (Info-ZIP):
zip -ll textfiles-lf.zip files-with-crlf-eol.*
unzip textfiles-lf.zip

NOTE: this would create a zip file preserving the original file names but converting the line endings to LF. Then unzip would extract the files as zip'ed, that is with their original names (but with LF-endings), thus prompting to overwrite the local original files if any.

Relevant excerpt from the zip --help :

zip --help
...
-l   convert LF to CR LF (-ll CR LF to LF)
I tried sed 's/^M$//' file.txt on OSX as well as several other methods ( http://www.thingy-ma-jig.co.uk/blog/25-11-2010/fixing-dos-line-endings or http://hintsforums.macworld.com/archive/index.php/t-125.html ). None worked; the file remained unchanged (btw Ctrl-v Enter was needed to reproduce ^M). In the end I used TextWrangler. It's not strictly command line but it works and it doesn't complain.

[Sep 24, 2017] Inside Amazon's Warehouses: Thousands of Senior Citizens and the Occasional Robot Mishap

Sep 24, 2017 | hardware.slashdot.org

(wired.com) Posted by EditorDavid on Saturday September 23, 2017 @09:30PM from the looking-inside dept.

Amazon aggressively recruited thousands of retirees living in mobile homes to migrate to Amazon's warehouses for seasonal work, according to a story shared by nightcats. Wired reports: From a hiring perspective, the RVers were a dream labor force. They showed up on demand and dispersed just before Christmas in what the company cheerfully called a "taillight parade." They asked for little in the way of benefits or protections. And though warehouse jobs were physically taxing -- not an obvious fit for older bodies -- recruiters came to see CamperForce workers' maturity as an asset. These were diligent, responsible employees. Their attendance rates were excellent. "We've had folks in their eighties who do a phenomenal job for us," noted Kelly Calmes, a CamperForce representative, in one online recruiting seminar... In a company presentation, one slide read, "Jeff Bezos has predicted that, by the year 2020, one out of every four workampers in the United States will have worked for Amazon."

The article is adapted from a new book called "Nomadland," which also describes seniors in mobile homes being recruited for sugar beet harvesting and jobs at an Iowa amusement park, as well as work as campground hosts at various national parks. Many of them "could no longer afford traditional housing," especially after the financial downturn of 2008.

But at least they got to hear stories from their trainers at Amazon about the occasional "unruly" shelf-toting "Kiva" robot: They told us how one robot had tried to drag a worker's stepladder away. Occasionally, I was told, two Kivas -- each carrying a tower of merchandise -- collided like drunken European soccer fans bumping chests. And in April of that year, the Haslet fire department responded to an accident at the warehouse involving a can of "bear repellent" (basically industrial-grade pepper spray). According to fire department records, the can of repellent was run over by a Kiva and the warehouse had to be evacuated.

[Sep 19, 2017] Boston Startups Are Teaching Boats to Drive Themselves by Joshua Brustein

Notable quotes:
"... He's also a sort of maritime-technology historian. A tall, white-haired man in a baseball cap, shark t-shirt and boat shoes, Benjamin said he's spent the last 15 years "making vehicles wet." He has the U.S. armed forces to thank for making his autonomous work possible. The military sparked the field of marine autonomy decades ago, when it began demanding underwater robots for mine detection, ..."
"... In 2006, Benjamin launched his open-source software project. With it, a computer is able to take over a boat's navigation-and-control system. Anyone can write programs for it. The project is funded by the U.S. Office for Naval Research and Battelle Memorial Institute, a nonprofit. Benjamin said there are dozens of types of vehicles using the software, which is called MOOS-IvP. ..."
Sep 19, 2017 | www.msn.com

Originally from: Bloomberg via Associated Press

Frank Marino, an engineer with Sea Machines Robotics, uses a remote control belt pack to control a self-driving boat in Boston Harbor. (Bloomberg) -- Frank Marino sat in a repurposed U.S. Coast Guard boat bobbing in Boston Harbor one morning late last month. He pointed the boat straight at a buoy several hundred yards away, while his colleague Mohamed Saad Ibn Seddik used a laptop to set the vehicle on a course that would run right into it. Then Ibn Seddik flipped the boat into autonomous driving mode. They sat back as the vessel moved at a modest speed of six knots, smoothly veering right to avoid the buoy, and then returned to its course.

In a slightly apologetic tone, Marino acknowledged the experience wasn't as harrowing as barreling down a highway in an SUV that no one is steering. "It's not like a self-driving car, where the wheel turns on its own," he said. Ibn Seddik tapped in directions to get the boat moving back the other way at twice the speed. This time, the vessel kicked up a wake, and the turn felt sharper, even as it gave the buoy the same wide berth as it had before. As far as thrills go, it'd have to do. Ibn Seddik said going any faster would make everyone on board nauseous.

The two men work for Sea Machines Robotics Inc., a three-year old company developing computer systems for work boats that can make them either remote-controllable or completely autonomous. In May, the company spent $90,000 to buy the Coast Guard hand-me-down at a government auction. Employees ripped out one of the four seats in the cabin to make room for a metal-encased computer they call a "first-generation autonomy cabinet." They painted the hull bright yellow and added the words "Unmanned Vehicle" in big, red letters. Cameras are positioned at the stern and bow, and a dome-like radar system and a digital GPS unit relay additional information about the vehicle's surroundings. The company named its new vessel Steadfast.

Autonomous maritime vehicles haven't drawn as much the attention as self-driving cars, but they're hitting the waters with increased regularity. Huge shipping interests, such as Rolls-Royce Holdings Plc, Tokyo-based fertilizer producer Nippon Yusen K.K. and BHP Billiton Ltd., the world's largest mining company, have all recently announced plans to use driverless ships for large-scale ocean transport. Boston has become a hub for marine technology startups focused on smaller vehicles, with a handful of companies like Sea Machines building their own autonomous systems for boats, diving drones and other robots that operate on or under the water.

As Marino and Ibn Seddik were steering Steadfast back to dock, another robot boat trainer, Michael Benjamin, motored past them. Benjamin, a professor at Massachusetts Institute of Technology, is a regular presence on the local waters. His program in marine autonomy, a joint effort by the school's mechanical engineering and computer science departments, serves as something of a ballast for Boston's burgeoning self-driving boat scene. Benjamin helps engineers find jobs at startups and runs an open-source software project that's crucial to many autonomous marine vehicles.

He's also a sort of maritime-technology historian. A tall, white-haired man in a baseball cap, shark t-shirt and boat shoes, Benjamin said he's spent the last 15 years "making vehicles wet." He has the U.S. armed forces to thank for making his autonomous work possible. The military sparked the field of marine autonomy decades ago, when it began demanding underwater robots for mine detection, Benjamin explained from a chair on MIT's dock overlooking the Charles River. Eventually, self-driving software worked its way into all kinds of boats.

These systems tended to chart a course based on a specific script, rather than sensing and responding to their environments. But a major shift came about a decade ago, when manufacturers began allowing customers to plug in their own autonomy systems, according to Benjamin. "Imagine where the PC revolution would have gone if the only one who could write software on an IBM personal computer was IBM," he said.

In 2006, Benjamin launched his open-source software project. With it, a computer is able to take over a boat's navigation-and-control system. Anyone can write programs for it. The project is funded by the U.S. Office for Naval Research and Battelle Memorial Institute, a nonprofit. Benjamin said there are dozens of types of vehicles using the software, which is called MOOS-IvP.

Startups using MOOS-IvP said it has created a kind of common vocabulary. "If we had a proprietary system, we would have had to develop training and train new employees," said Ibn Seddik. "Fortunately for us, Mike developed a course that serves exactly that purpose."

Teaching a boat to drive itself is easier than conditioning a car in some ways. They typically don't have to deal with traffic, stoplights or roundabouts. But water is a unique challenge. "The structure of the road, with traffic lights, bounds your problem a little bit," said Benjamin. "The number of unique possible situations that you can bump into is enormous." At the moment, underwater robots represent a bigger chunk of the market than boats. Sales are expected to hit $4.6 billion in 2020, more than double the amount from 2015, according to ABI Research. The biggest customer is the military.

Several startups hope to change that. Michael Johnson, Sea Machines' chief executive officer, said the long-term potential for self-driving boats involves teams of autonomous vessels working in concert. In many harbors, multiple tugs bring in large container ships, communicating either through radio or by whistle. That could be replaced by software controlling all the boats as a single system, Johnson said.

Sea Machines' first customer is Marine Spill Response Corp., a nonprofit group funded by oil companies. The organization operates oil spill response teams that consist of a 210-foot ship paired with a 32-foot boat, which work together to drag a device collecting oil. Self-driving boats could help because staffing the 32-foot boat in choppy waters or at night can be dangerous, but the theory needs proper vetting, said Judith Roos, a vice president for MSRC. "It's too early to say, 'We're going to go out and buy 20 widgets.'"

Another local startup, Autonomous Marine Systems Inc., has been sending boats about 10 miles out to sea and leaving them there for weeks at a time. AMS's vehicles are designed to operate for long stretches, gathering data in wind farms and oil fields. One vessel is a catamaran dubbed the Datamaran, a name that first came from an employee's typo, said AMS CEO Ravi Paintal. The company also uses Benjamin's software platform. Paintal said AMS's longest missions so far have been 20 days, give or take. "They say when your boat can operate for 30 days out in the ocean environment, you'll be in the running for a commercial contract," he said.

... ... ...

[Sep 17, 2017] The last 25 years (or so) were years of tremendous progress in computers and networking that changed the human civilization

Notable quotes:
"... To emulate those capabilities on computers will probably require another 100 years or more. Selective functions can be imitated even now (manipulator that deals with blocks in a pyramid was created in 70th or early 80th I think, but capabilities of human "eye controlled arm" is still far, far beyond even wildest dreams of AI. ..."
"... Similarly human intellect is completely different from AI. At the current level the difference is probably 1000 times larger then the difference between a child with Down syndrome and a normal person. ..."
"... Human brain is actually a machine that creates languages for specific domain (or acquire them via learning) and then is able to operate in terms of those languages. Human child forced to grow up with animals, including wild animals, learns and is able to use "animal language." At least to a certain extent. Some of such children managed to survive in this environment. ..."
"... If you are bilingual, try Google translate on this post. You might be impressed by their recent progress in this field. It did improved considerably and now does not cause instant laugh. ..."
"... One interesting observation that I have is that automation is not always improve functioning of the organization. It can be quite opposite :-). Only the costs are cut, and even that is not always true. ..."
"... Of course the last 25 years (or so) were years of tremendous progress in computers and networking that changed the human civilization. And it is unclear whether we reached the limit of current capabilities or not in certain areas (in CPU speeds and die shrinking we probably did; I do not expect anything significant below 7 nanometers: https://en.wikipedia.org/wiki/7_nanometer ). ..."
May 28, 2017 | economistsview.typepad.com

libezkova , May 27, 2017 at 10:53 PM

"When combined with our brains, human fingers are amazingly fine manipulation devices."

Not only fingers. The whole human arm is an amazing device. Pure magic, if you ask me.

To emulate those capabilities on computers will probably require another 100 years or more. Selective functions can be imitated even now (a manipulator that deals with blocks in a pyramid was created in the '70s or early '80s, I think), but the capabilities of the human "eye-controlled arm" are still far, far beyond even the wildest dreams of AI.

Similarly, human intellect is completely different from AI. At the current level the difference is probably 1000 times larger than the difference between a child with Down syndrome and a normal person.

The human brain is actually a machine that creates languages for specific domains (or acquires them via learning) and is then able to operate in terms of those languages. A human child forced to grow up among animals, including wild animals, learns and is able to use an "animal language," at least to a certain extent. Some such children managed to survive in this environment.

Such cruel natural experiments have shown that the level of flexibility of human brain is something really incredible. And IMHO can not be achieved by computers (although never say never).

Here we are talking about tasks that are a million times more complex than playing Go or chess, or driving a car on the street.

My impression is that most of the recent AI successes (especially IBM's win in Jeopardy ( http://www.techrepublic.com/article/ibm-watson-the-inside-story-of-how-the-jeopardy-winning-supercomputer-was-born-and-what-it-wants-to-do-next/ ), which probably was partially staged) are by and large due to the growth of storage and the number of cores in computers, not so much to the sophistication of the algorithms used.

The limits of AI are clearly visible when we see the quality of translation from one language to another. For more or less complex technical text it remains medium to low. As in "requires human editing".

If you are bilingual, try Google translate on this post. You might be impressed by their recent progress in this field. It did improved considerably and now does not cause instant laugh.

The same goes for speech recognition. The progress is tremendous, especially in the last three to five years, but it is still far from perfect. Now, with some training, programs like Dragon are quite usable as a dictation device on, say, a PC with a 4-core 3GHz CPU and 16 GB of memory (especially if you are a native English speaker), but if you deal with specialized text or have a strong accent, they still leave much to be desired (although your knowledge of the program, experience and persistence can improve the results considerably).

One interesting observation of mine is that automation does not always improve the functioning of the organization. It can be quite the opposite :-). Only the costs are cut, and even that is not always true.

Of course the last 25 years (or so) were years of tremendous progress in computers and networking that changed the human civilization. And it is unclear whether we reached the limit of current capabilities or not in certain areas (in CPU speeds and die shrinking we probably did; I do not expect anything significant below 7 nanometers: https://en.wikipedia.org/wiki/7_nanometer ).

[Sep 17, 2017] Colleagues Addicted to Tech

Notable quotes:
"... dwelling on the negative can backfire. ..."
"... It's fine to acknowledge a misstep. But spin the answer to focus on why this new situation is such an ideal match of your abilities to the employer's needs. ..."
Apr 20, 2015 | NYTimes.com

Discussing Bad Work Situations

I have been in my present position for over 25 years. Five years ago, I was assigned a new boss, who has a reputation in my industry for harassing people in positions such as mine until they quit. I have managed to survive, but it's clear that it's time for me to move along. How should I answer the inevitable interview question: Why would I want to leave after so long? I've heard that speaking badly of a boss is an interview no-no, but it really is the only reason I'm looking to find something new. BROOKLYN

I am unemployed and interviewing for a new job. I have read that when answering interview questions, it's best to keep everything you say about previous work experiences or managers positive.

But what if you've made one or two bad choices in the past: taking jobs because you needed them, figuring you could make it work - then realizing the culture was a bad fit, or you had an arrogant, narcissistic boss?

Nearly everyone has had a bad work situation or boss. I find it refreshing when I read stories about successful people who mention that they were fired at some point, or didn't get along with a past manager. So why is it verboten to discuss this in an interview? How can the subject be addressed without sounding like a complainer, or a bad employee? CHICAGO

As these queries illustrate, the temptation to discuss a negative work situation can be strong among job applicants. But in both of these situations, and in general, criticizing a current or past employer is a risky move. You don't have to paint a fictitiously rosy picture of the past, but dwelling on the negative can backfire. Really, you don't want to get into a detailed explanation of why you have or might quit at all. Instead, you want to talk about why you're such a perfect fit for the gig you're applying for.

So, for instance, a question about leaving a long-held job could be answered by suggesting that the new position offers a chance to contribute more and learn new skills by working with a stronger team. This principle applies in responding to curiosity about jobs that you held for only a short time.

It's fine to acknowledge a misstep. But spin the answer to focus on why this new situation is such an ideal match of your abilities to the employer's needs.

The truth is, even if you're completely right about the past, a prospective employer doesn't really want to hear about the workplace injustices you've suffered, or the failings of your previous employer. A manager may even become concerned that you will one day add his or her name to the list of people who treated you badly. Save your cathartic outpourings for your spouse, your therapist, or, perhaps, the future adoring profile writer canonizing your indisputable success.

Send your workplace conundrums to workologist@nytimes.com, including your name and contact information (even if you want it withheld for publication). The Workologist is a guy with well-intentioned opinions, not a professional career adviser. Letters may be edited.

[Sep 16, 2017] Google Drive Faces Outage, Users Report

Sep 16, 2017 | tech.slashdot.org

(google.com) 75

Posted by msmash on Thursday September 07, 2017

Numerous Slashdot readers are reporting that they are facing issues accessing Google Drive, the productivity suite from the Mountain View-based company. Google's dashboard confirms that Drive is facing an outage.

Third-party web monitoring tool DownDetector also reports thousands of similar complaints from users. The company said, "Google Drive service has already been restored for some users, and we expect a resolution for all users in the near future.

Please note this time frame is an estimate and may change. Google Drive is not loading files and results in failures for a subset of users."

[Sep 16, 2017] Will Millennials Be Forced Out of Tech Jobs When They Turn 40?

Notable quotes:
"... Karen Panetta, the dean of graduate engineering education at Tufts University and the vice president of communications and public relations at the IEEE-USA, believes the outcome for tech will be Logan's Run -like, where age sets a career limit... ..."
"... It's great to get the new hot shot who just graduated from college, but it's also important to have somebody with 40 years of experience who has seen all of the changes in the industry and can offer a different perspective." ..."
Sep 16, 2017 | it.slashdot.org

(ieeeusa.org)

Posted by EditorDavid on Sunday September 03, 2017 @07:30AM

dcblogs shared an interesting article from IEEE-USA's "Insight" newsletter: Millennials, which date from the 1980s to mid-2000s, are the largest generation. But what will happen to this generation's tech workers as they settle into middle age ?

Will the median age of tech firms rise as the Millennial generation grows older...? The median age range at Google, Facebook, SpaceX, LinkedIn, Amazon, Salesforce, Apple and Adobe, is 29 to 31, according to a study last year by PayScale, which analyzes self-reported data...

Karen Panetta, the dean of graduate engineering education at Tufts University and the vice president of communications and public relations at the IEEE-USA, believes the outcome for tech will be Logan's Run -like, where age sets a career limit...

Tech firms want people with the current skills sets and those "without those skills will be pressured to leave or see minimal career progression," said Panetta... The idea that the tech industry may have an age bias is not scaring the new college grads away. "They see retirement so far off, so they are more interested in how to move up or onto new startup ventures or even business school," said Panetta.

"The reality sets in when they have families and companies downsize and it's not so easy to just pick up and go on to another company," she said. None of this may be a foregone conclusion.

Millennials may see the experience of today's older workers as a cautionary tale, and usher in cultural changes... David Kurtz, a labor relations partner at Constangy, Brooks, Smith & Prophete, suggests tech firms should be sharing age-related data about their workforce, adding "The more of a focus you place on an issue the more attention it gets and the more likely that change can happen.

It's great to get the new hot shot who just graduated from college, but it's also important to have somebody with 40 years of experience who has seen all of the changes in the industry and can offer a different perspective."

[Sep 01, 2017] linux - Looping through the content of a file in Bash - Stack Overflow

Notable quotes:
"... done <<< "$(...)" ..."
Sep 01, 2017 | stackoverflow.com

Peter Mortensen , asked Oct 5 '09 at 17:52

How do I iterate through each line of a text file with Bash ?

With this script

echo "Start!"
for p in (peptides.txt)
do
    echo "${p}"
done

I get this output on the screen:

Start!
./runPep.sh: line 3: syntax error near unexpected token `('
./runPep.sh: line 3: `for p in (peptides.txt)'

(Later I want to do something more complicated with $p than just output to the screen.)


The environment variable SHELL is (from env):

SHELL=/bin/bash

/bin/bash --version output:

GNU bash, version 3.1.17(1)-release (x86_64-suse-linux-gnu)
Copyright (C) 2005 Free Software Foundation, Inc.

cat /proc/version output:

Linux version 2.6.18.2-34-default (geeko@buildhost) (gcc version 4.1.2 20061115 (prerelease) (SUSE Linux)) #1 SMP Mon Nov 27 11:46:27 UTC 2006

The file peptides.txt contains:

RKEKNVQ
IPKKLLQK
QYFHQLEKMNVK
IPKKLLQK
GDLSTALEVAIDCYEK
QYFHQLEKMNVKIPENIYR
RKEKNVQ
VLAKHGKLQDAIN
ILGFMK
LEDVALQILL

Bruno De Fraine , answered Oct 5 '09 at 18:00

One way to do it is:
while read p; do
  echo $p
done <peptides.txt

Exceptionally, if the loop body may read from standard input , you can open the file using a different file descriptor:

while read -u 10 p; do
  ...
done 10<peptides.txt

Here, 10 is just an arbitrary number (different from 0, 1, 2).

Warren Young , answered Oct 5 '09 at 17:54

cat peptides.txt | while read line
do
   # do something with $line here
done

Stan Graves , answered Oct 5 '09 at 18:18

Option 1a: While loop: Single line at a time: Input redirection
#!/bin/bash
filename='peptides.txt'
echo Start
while read p; do 
    echo $p
done < $filename

Option 1b: While loop: Single line at a time:
Open the file, read from a file descriptor (in this case file descriptor #4).

#!/bin/bash
filename='peptides.txt'
exec 4<$filename
echo Start
while read -u4 p ; do
    echo $p
done

Option 2: For loop: Read file into single variable and parse.
This syntax will parse "lines" based on any white space between the tokens. This still works because the given input file lines are single word tokens. If there were more than one token per line, then this method would not work as well. Also, reading the full file into a single variable is not a good strategy for large files.

#!/bin/bash
filename='peptides.txt'
filelines=`cat $filename`
echo Start
for line in $filelines ; do
    echo $line
done

mightypile , answered Oct 4 '13 at 13:30

This is no better than other answers, but is one more way to get the job done in a file without spaces (see comments). I find that I often need one-liners to dig through lists in text files without the extra step of using separate script files.
for word in $(cat peptides.txt); do echo $word; done

This format allows me to put it all in one command-line. Change the "echo $word" portion to whatever you want and you can issue multiple commands separated by semicolons. The following example uses the file's contents as arguments into two other scripts you may have written.

for word in $(cat peptides.txt); do cmd_a.sh $word; cmd_b.py $word; done

Or if you intend to use this like a stream editor (learn sed) you can dump the output to another file as follows.

for word in $(cat peptides.txt); do cmd_a.sh $word; cmd_b.py $word; done > outfile.txt

I've used these as written above because I have used text files where I've created them with one word per line. (See comments) If you have spaces that you don't want splitting your words/lines, it gets a little uglier, but the same command still works as follows:

OLDIFS=$IFS; IFS=$'\n'; for line in $(cat peptides.txt); do cmd_a.sh $line; cmd_b.py $line; done > outfile.txt; IFS=$OLDIFS

This just tells the shell to split on newlines only, not spaces, then returns the environment back to what it was previously. At this point, you may want to consider putting it all into a shell script rather than squeezing it all into a single line, though.
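A small variation on the same idea, as a sketch: doing the IFS change inside a subshell means nothing has to be saved and restored by hand (cmd_a.sh and cmd_b.py are the hypothetical scripts from above, and the usual caveat about glob characters in the lines still applies).

# Limit the IFS change to a subshell so the parent shell's IFS is untouched
( IFS=$'\n'; for line in $(cat peptides.txt); do cmd_a.sh "$line"; cmd_b.py "$line"; done ) > outfile.txt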

Best of luck!

Jahid , answered Jun 9 '15 at 15:09

Use a while loop, like this:
while IFS= read -r line; do
   echo "$line"
done <file

Notes:

  1. If you don't set the IFS properly, you will lose indentation.
  2. You should almost always use the -r option with read.
  3. Don't read lines with for
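As a quick illustration of notes 1 and 2 (the sample file contents here are made up):

# Create a two-line sample file: an indented line and a line containing a backslash
printf '  indented line\nback\\slash\n' > sample.txt

while read line; do echo "[$line]"; done < sample.txt
# prints [indented line] and [backslash] -- indentation lost, backslash eaten

while IFS= read -r line; do echo "[$line]"; done < sample.txt
# prints [  indented line] and [back\slash] -- both preserved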

codeforester , answered Jan 14 at 3:30

A few more things not covered by other answers:

Reading from a delimited file
# ':' is the delimiter here, and there are three fields on each line in the file
# IFS set below is restricted to the context of `read`, it doesn't affect any other code
while IFS=: read -r field1 field2 field3; do
  # process the fields
  # if the line has less than three fields, the missing fields will be set to an empty string
  # if the line has more than three fields, `field3` will get all the values, including the third field plus the delimiter(s)
done < input.txt
Reading from more than one file at a time
while read -u 3 -r line1 && read -u 4 -r line2; do
  # process the lines
  # note that the loop will end when we reach EOF on either of the files, because of the `&&`
done 3< input1.txt 4< input2.txt
Reading a whole file into an array (Bash version 4+)
readarray -t my_array < my_file

or

mapfile -t my_array < my_file

And then

for line in "${my_array[@]}"; do
  # process the lines
done

Anjul Sharma , answered Mar 8 '16 at 16:10

If you don't want your read to drop the last line when the file lacks a trailing newline character, use -
#!/bin/bash
while IFS='' read -r line || [[ -n "$line" ]]; do
    echo "$line"
done < "$1"

Then run the script with file name as parameter.

Sine , answered Nov 14 '13 at 14:23

#!/bin/bash
#
# Change the file name from "test" to desired input file 
# (The comments in bash are prefixed with #'s)
for x in $(cat test.txt)
do
    echo $x
done

dawg , answered Feb 3 '16 at 19:15

Suppose you have this file:
$ cat /tmp/test.txt
Line 1
    Line 2 has leading space
Line 3 followed by blank line

Line 5 (follows a blank line) and has trailing space    
Line 6 has no ending CR

There are four elements that will alter the meaning of the file output read by many Bash solutions:

  1. The blank line 4;
  2. Leading or trailing spaces on two lines;
  3. Maintaining the meaning of individual lines (i.e., each line is a record);
  4. The line 6 not terminated with a CR.

If you want the text file line by line including blank lines and terminating lines without CR, you must use a while loop and you must have an alternate test for the final line.

Here are the methods that may change the file (in comparison to what cat returns):

1) Lose the last line and leading and trailing spaces:

$ while read -r p; do printf "%s\n" "'$p'"; done </tmp/test.txt
'Line 1'
'Line 2 has leading space'
'Line 3 followed by blank line'
''
'Line 5 (follows a blank line) and has trailing space'

(If you do while IFS= read -r p; do printf "%s\n" "'$p'"; done </tmp/test.txt instead, you preserve the leading and trailing spaces but still lose the last line if it is not terminated with CR)

2) Using command substitution with cat reads the entire file in one gulp and loses the meaning of individual lines:

$ for p in "$(cat /tmp/test.txt)"; do printf "%s\n" "'$p'"; done
'Line 1
    Line 2 has leading space
Line 3 followed by blank line

Line 5 (follows a blank line) and has trailing space    
Line 6 has no ending CR'

(If you remove the " from $(cat /tmp/test.txt) you read the file word by word rather than one gulp. Also probably not what is intended...)


The most robust and simplest way to read a file line-by-line and preserve all spacing is:

$ while IFS= read -r line || [[ -n $line ]]; do printf "'%s'\n" "$line"; done </tmp/test.txt
'Line 1'
'    Line 2 has leading space'
'Line 3 followed by blank line'
''
'Line 5 (follows a blank line) and has trailing space    '
'Line 6 has no ending CR'

If you want to strip leading and trailing spaces, remove the IFS= part:

$ while read -r line || [[ -n $line ]]; do printf "'%s'\n" "$line"; done </tmp/test.txt
'Line 1'
'Line 2 has leading space'
'Line 3 followed by blank line'
''
'Line 5 (follows a blank line) and has trailing space'
'Line 6 has no ending CR'

(A text file without a terminating \n , while fairly common, is considered broken under POSIX. If you can count on the trailing \n you do not need || [[ -n $line ]] in the while loop.)

More at the BASH FAQ


Here is my real-life example of how to loop over the lines of another program's output, check for substrings, drop double quotes from a variable, and use that variable outside of the loop. I guess quite many are asking these questions sooner or later.
##Parse FPS from first video stream, drop quotes from fps variable
## streams.stream.0.codec_type="video"
## streams.stream.0.r_frame_rate="24000/1001"
## streams.stream.0.avg_frame_rate="24000/1001"
FPS=unknown
while read -r line; do
  if [[ $FPS == "unknown" ]] && [[ $line == *".codec_type=\"video\""* ]]; then
    echo ParseFPS $line
    FPS=parse
  fi
  if [[ $FPS == "parse" ]] && [[ $line == *".r_frame_rate="* ]]; then
    echo ParseFPS $line
    FPS=${line##*=}
    FPS="${FPS%\"}"
    FPS="${FPS#\"}"
  fi
done <<< "$(ffprobe -v quiet -print_format flat -show_format -show_streams -i "$input")"
if [ "$FPS" == "unknown" ] || [ "$FPS" == "parse" ]; then 
  echo ParseFPS Unknown frame rate
fi
echo Found $FPS

Declaring the variable outside of the loop, setting its value inside it, and using it after the loop requires the done <<< "$(...)" syntax, which keeps the loop running in the current shell rather than a subshell. The application needs to be run within the context of the current console. The quotes around the command keep the newlines of the output stream.

The loop matches for substrings, then reads the name=value pair, splits off the part to the right of the last = character, drops the first quote, drops the last quote, and we have a clean value to be used elsewhere.
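As a stand-alone sketch of that quote-stripping, using one of the sample ffprobe lines commented above:

line='streams.stream.0.r_frame_rate="24000/1001"'
FPS=${line##*=}     # drop everything up to the last '=' : "24000/1001" (still quoted)
FPS="${FPS%\"}"     # drop a trailing double quote
FPS="${FPS#\"}"     # drop a leading double quote
echo "$FPS"         # -> 24000/1001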

[Aug 29, 2017] The booklet for common tasks on a Linux system.

Aug 29, 2017 | bumble.sourceforge.net

This booklet is designed to help with common tasks on a Linux system. It is designed to be presentable as a series of "recipes" for accomplishing common tasks. These recipes consist of a plain English one-line description, followed by the Linux command which carries out the task.

The document is focused on performing tasks in Linux using the 'command line' or 'console'.

The format of the booklet was largely inspired by the "Linux Cookbook" www.dsl.org/cookbook

[Aug 29, 2017] backup-etc.sh -- A script to backup the /etc directory

This is a simple script that generates a "dot" progress line. The backup name includes a timestamp. No rotation is implemented (see the sketch after the script).
Aug 29, 2017 | wpollock.com
   #!/bin/bash
# Script to backup the /etc hierarchy
#
# Written 4/2002 by Wayne Pollock, Tampa Florida USA
#
#  $Id: backup-etc,v 1.6 2004/08/25 01:42:26 wpollock Exp $
#
# $Log: backup-etc,v $
# Revision 1.6  2004/08/25 01:42:26  wpollock
# Changed backup name to include the hostname and 4 digit years.
#
# Revision 1.5  2004/01/07 18:07:33  wpollock
# Fixed dots routine to count files first, then calculate files per dot.
#
# Revision 1.4  2003/04/03 08:10:12  wpollock
# Changed how the version number is obtained, so the file
# can be checked out normally.
#
# Revision 1.3  2003/04/03 08:01:25  wpollock
# Added ultra-fancy dots function for verbose mode.
#
# Revision 1.2  2003/04/01 15:03:33  wpollock
# Eliminated the use of find, and discovered that tar was working
# as intended all along!  (Each directory that find found was
# recursively backed-up, so for example /etc, then /etc/mail,
# caused /etc/mail/sendmail.mc to be backuped three times.)
#
# Revision 1.1  2003/03/23 18:57:29  wpollock
# Modified by Wayne Pollock:
#
# Discovered not all files were being backed up, so
# added "-print0 --force-local" to find and "--null -T -"
# to tar (eliminating xargs), to fix the problem when filenames
# contain metacharacters such as whitespace.
# Although this now seems to work, the current version of tar
# seems to have a bug causing it to backup every file two or
# three times when using these options!  This is still better
# than not backing up some files at all.)
#
# Changed the logger level from "warning" to "error".
#
# Added '-v, --verbose' options to display dots every 60 files,
# just to give feedback to a user.
#
# Added '-V, --version' and '-h, --help' options.
#
# Removed the lock file mechanism and backup file renaming
# (from foo to foo.1), in favor of just including a time-stamp
# of the form "yymmdd-hhmm" to the filename.
#
#

PATH=/bin:/usr/bin

# The backups should probably be stored in /var someplace:
REPOSITORY=/root
TIMESTAMP=$(date '+%Y%m%d-%H%M')
HOSTNAME=$(hostname)
FILE="$REPOSITORY/$HOSTNAME-etc-full-backup-$TIMESTAMP.tgz"

ERRMSGS=/tmp/backup-etc.$$
PROG=${0##*/}
VERSION=$(echo $Revision: 1.6 $ |awk '{print$2}')
VERBOSE=off

usage()
{  echo "This script creates a full backup of /etc via tar in $REPOSITORY."
   echo "Usage: $PROG [OPTIONS]"
   echo '  Options:'
   echo '    -v, --verbose   displays some feedback (dots) during backup'
   echo '    -h, --help      displays this message'
   echo '    -V, --version   display program version and author info'
   echo
}

dots()
{  MAX_DOTS=50
   NUM_FILES=`find /etc|wc -l`
   let 'FILES_PER_DOT = NUM_FILES / MAX_DOTS'
   bold=`tput smso`
   norm=`tput rmso`
   tput sc
   tput civis
   echo -n "$bold(00%)$norm"
   while read; do
      let "cnt = (cnt + 1) % FILES_PER_DOT"
      if [ "$cnt" -eq 0 ]
      then
         let '++num_dots'
         let 'percent = (100 * num_dots) / MAX_DOTS'
         [ "$percent" -gt "100" ] && percent=100
         tput rc
         printf "$bold(%02d%%)$norm" "$percent"
         tput smir
         echo -n "."
         tput rmir
      fi
   done
   tput cnorm
   echo
}

# Command line argument processing:
while [ $# -gt 0 ]
do
   case "$1" in
      -v|--verbose)  VERBOSE=on; ;;
      -h|--help)     usage; exit 0; ;;
      -V|--version)  echo -n "$PROG version $VERSION "
                     echo 'Written by Wayne Pollock '
                     exit 0; ;;
      *)             usage; exit 1; ;;
   esac
   shift
done

trap "rm -f $ERRMSGS" EXIT

cd /etc

# create backup, saving any error messages:
if [ "$VERBOSE" != "on" ]
then
    tar -cz --force-local -f $FILE . 2> $ERRMSGS 
else
    tar -czv --force-local -f $FILE . 2> $ERRMSGS | dots
fi

# Log any error messages produced:
if [ -s "$ERRMSGS" ]
then logger -p user.error -t $PROG "$(cat $ERRMSGS)"
else logger -t $PROG "Completed full backup of /etc"
fi

exit 0
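Since the script keeps every timestamped archive, a minimal rotation sketch could be added (the 14-day retention period is an arbitrary assumption; the variables are the ones defined in the script):

# Rotation sketch: delete archives for this host older than 14 days
find "$REPOSITORY" -maxdepth 1 -name "$HOSTNAME-etc-full-backup-*.tgz" -mtime +14 -print -delete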

[Aug 29, 2017] How to view the `.bash_history` file via command line

Aug 29, 2017 | askubuntu.com

If you actually need the output of the .bash_history file, replace history with cat ~/.bash_history in all of the commands below.

If you actually want the commands without numbers in front, use this command instead of history :

history | cut -d' ' -f 4-
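The cut variant assumes the history numbers line up in a fixed column; a hedged alternative that strips the leading number whatever its width (it does not account for HISTTIMEFORMAT timestamps):

history | sed 's/^ *[0-9]* *//'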

[Aug 29, 2017] The quickie guide to continuous delivery in DevOps

This is pretty idiotic: "But wait -- Isn't speed the key to all software development? These days, companies routinely require their developers to update or add features once per day, week, or month. This was unheard of back in the day, even in the era of agile software development ."
And now an example of buzzword-infused nonsense: ""DevOps is a concept, an idea, a life philosophy," says Gottfried Sehringer, chief marketing officer at XebiaLabs , a software delivery automation company. "It's not really a process or a toolset, or a technology." And another one: ..." "In an ideal world, you would push a button to release every few seconds," Sehringer says. But this is not an ideal world, and so people plug up the process along the way."... "
I want to see a sizable software product with a release every few seconds. Even for a small and rapidly evolving web site, scripts should be released no more frequently than daily.
Notable quotes:
"... Even if you're a deity of software bug-squashing, how can you -- or any developer or operations specialist -- deliver high-quality, "don't break anything" code when you have to build and release that fast? Everyone has their own magic bullet. "Agile -- " cries one crowd. " Continuous build -- " yells another. " Continuous integration -- " cheers a third. ..."
"... Automation has obvious returns on investment. "You can make sure it's good in pre-production and push it immediately to production without breaking anything, and then just repeat, repeat, repeat, over and over again," says Sehringer. ..."
"... In other words, you move delivery through all the steps in a structured, repeatable, automated way to reduce risk and increase the speed of releases and updates. ..."
Aug 29, 2017 | insights.hpe.com
The quickie guide to continuous delivery in DevOps

In today's world, you have to develop and deliver almost in the same breath. Here's a quick guide to help you figure out which continuous delivery concepts will help you breathe easy, and which are only hot air. Developers are always under pressure to produce more and release software faster, which encourages the adoption of new concepts and tools. But confusing buzzwords obfuscate real technology and business benefits, particularly when a vendor has something to sell. That makes it hard to determine what works best -- for real, not just as a marketing phrase -- in the continuous flow of build and deliver processes. This article gives you the basics of continuous delivery to help you sort it all out.

To start with, the terms apply to different parts of the same production arc, each of which is automated to a different degree:

With continuous deployment, "a developer's job typically ends at reviewing a pull request from a teammate and merging it to the master branch," explains Marko Anastasov in a blog post . "A continuous integration/continuous deployment service takes over from there by running all tests and deploying the code to production, while keeping the team informed about [the] outcome of every important event."

However, knowing the terms and their definitions isn't enough to help you determine when and where it is best to use each. Because, of course, every shop is different.

It would be great if the market clearly distinguished between concepts and tools and their uses, as they do with terms like DevOps. Oh, wait.

"DevOps is a concept, an idea, a life philosophy," says Gottfried Sehringer, chief marketing officer at XebiaLabs , a software delivery automation company. "It's not really a process or a toolset, or a technology."

But, alas, industry terms are rarely spelled out that succinctly. Nor are they followed with hints and tips on how and when to use them. Hence this guide, which aims to help you learn when to use what.

Choose your accelerator according to your need for speed

But wait -- Isn't speed the key to all software development? These days, companies routinely require their developers to update or add features once per day, week, or month. This was unheard of back in the day, even in the era of agile software development .

That's not the end of it; some businesses push for software updates to be faster still. "If you work for Amazon, it might be every few seconds," says Sehringer.

Even if you're a deity of software bug-squashing, how can you -- or any developer or operations specialist -- deliver high-quality, "don't break anything" code when you have to build and release that fast? Everyone has their own magic bullet. "Agile -- " cries one crowd. " Continuous build -- " yells another. " Continuous integration -- " cheers a third.

Let's just cut to the chase on all that, shall we?

"Just think of continuous as 'automated,'" says Nate Berent-Spillson, senior delivery director at Nexient , a software services provider. "Automation is driving down cost and the time to develop and deploy."

Well, frack, why don't people just say automation?

Add to the idea of automation the concepts of continuous build, continuous delivery, continuous everything, which are central to DevOps, and we find ourselves talking in circles. So, let's get right to sorting all that out.

... ... ...

Rinse. Repeat, repeat, repeat, repeat (the point of automation in DevOps)

Automation has obvious returns on investment. "You can make sure it's good in pre-production and push it immediately to production without breaking anything, and then just repeat, repeat, repeat, over and over again," says Sehringer.

In other words, you move delivery through all the steps in a structured, repeatable, automated way to reduce risk and increase the speed of releases and updates.

"In an ideal world, you would push a button to release every few seconds," Sehringer says. But this is not an ideal world, and so people plug up the process along the way.

A company may need approval for an application change from its legal department. "Some companies are heavily regulated and may need additional gates to ensure compliance," notes Sehringer. "It's important to understand where these bottlenecks are." The ARA software should improve efficiencies and ensure the application is released or updated on schedule.

"Developers are more familiar with continuous integration," he says. "Application release automation is more recent and thus less understood."

... ... ...

Pam Baker has written hundreds of articles published in leading technology, business and finance publications including InformationWeek, Institutional Investor magazine, CIO.com, NetworkWorld, ComputerWorld, IT World, Linux World, and more. She has also authored several analytical studies on technology, eight books -- the latest of which is Data Divination: Big Data Strategies -- and an award-winning documentary on paper-making. She is a member of the National Press Club, Society of Professional Journalists and the Internet Press Guild.

[Aug 28, 2017] Rsync over ssh with root access on both sides

Aug 28, 2017 | serverfault.com

I have one older ubuntu server, and one newer debian server and I am migrating data from the old one to the new one. I want to use rsync to transfer data across to make final migration easier and quicker than the equivalent tar/scp/untar process.

As an example, I want to sync the home folders one at a time to the new server. This requires root access at both ends as not all files at the source side are world readable and the destination has to be written with correct permissions into /home. I can't figure out how to give rsync root access on both sides.

I've seen a few related questions, but none quite match what I'm trying to do.

I have sudo set up and working on both servers.

Tim Abell, asked Apr 28 '10 at 9:18
Caleb, answered Apr 28 '10 at 22:06 (accepted)

Actually you do NOT need to allow root authentication via SSH to run rsync as Antoine suggests. The transport and system authentication can be done entirely over user accounts as long as you can run rsync with sudo on both ends for reading and writing the files.

As a user on your destination server you can suck the data from your source server like this:

sudo rsync -aPe ssh --rsync-path='sudo rsync' boron:/home/fred /home/

The user you run as on both servers will need passwordless* sudo access to the rsync binary, but you do NOT need to enable ssh login as root anywhere. If the user you are using doesn't match on the other end, you can add user@boron: to specify a different remote user.

Good luck.

*or you will need to have entered the password manually inside the timeout window.
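A minimal sketch of what that passwordless sudo entry could look like (the user name backupuser and the drop-in file location are assumptions; always edit sudoers with visudo):

# /etc/sudoers.d/rsync-backup -- user name and path are examples
backupuser ALL=(root) NOPASSWD: /usr/bin/rsync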

Although this is an old question I'd like to add word of CAUTION to this accepted answer. From my understanding allowing passwordless "sudo rsync" is equivalent to open the root account to remote login. This is because with this it is very easy to gain full root access, e.g. because all system files can be downloaded, modified and replaced without a password. – Ascurion Jan 8 '16 at 16:30
Jeroen Moors, answered Apr 28 '10 at 21:20

If your data is not highly sensitive, you could use tar and socat. In my experience this is often faster than rsync over ssh.

You need socat or netcat on both sides.

On the target host, go to the directory where you would like to put your data, after that run: socat TCP-LISTEN:4444 - | tar xzf -

If the target host is listening, start it on the source like: tar czf - /home/fred /home/ | socat - TCP:ip-of-remote-server:4444

For this setup you'll need a reliably connection between the 2 servers.

Good point. In a trusted environment, you'll pick up a lot of speed by not encrypting. It might not matter on small files, but with GBs of data it will. – pboin May 18 '10 at 10:53
OK, I've pieced together all the clues to get something that works for me.

Lets call the servers "src" & "dst".

Set up a key pair for root on the destination server, and copy the public key to the source server:

dest $ sudo -i
dest # ssh-keygen
dest # scp /root/.ssh/id_rsa.pub tim@src:
dest # exit

Add the public key to root's authorized keys on the source server

src $ sudo -i
src # cat /home/tim/id_rsa.pub >> .ssh/authorized_keys

Back on the destination server, pull the data across with rsync:

dest $ sudo -i
dest # rsync -aP src:/home/fred /home/

[Aug 28, 2017] Unix Rsync Copy Hidden Dot Files and Directories Only by Vivek Gite

Feb 06, 2014 | www.cyberciti.biz
November 9, 2012 February 6, 2014 in Categories Commands , File system , Linux , UNIX last updated February 6, 2014

How do I use the rsync tool to copy only the hidden files and directory (such as ~/.ssh/, ~/.foo, and so on) from /home/jobs directory to the /mnt/usb directory under Unix like operating system?

The rsync program is used for synchronizing files over a network or local disks. To view or display only hidden files with ls command:

ls -ld ~/.??*

OR

ls -ld ~/.[^.]*

Sample outputs:

Fig. 01: ls command to view only hidden files

rsync not synchronizing all hidden .dot files?

In this example, you used the pattern .[^.]* or .??* to select and display only hidden files using ls command . You can use the same pattern with any Unix command including rsync command. The syntax is as follows to copy hidden files with rsync:

rsync -av /path/to/dir/.??* /path/to/dest
rsync -avzP /path/to/dir/.??* /mnt/usb
rsync -avzP $HOME/.??* user1@server1.cyberciti.biz:/path/to/backup/users/u/user1
rsync -avzP ~/.[^.]* user1@server1.cyberciti.biz:/path/to/backup/users/u/user1


In this example, copy all hidden files from my home directory to /mnt/test:

rsync -avzP ~/.[^.]* /mnt/test


Sample outputs:

Fig. 02: rsync example to copy only hidden files

Vivek Gite is the creator of nixCraft and a seasoned sysadmin and a trainer for the Linux operating system/Unix shell scripting. He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on Twitter , Facebook , Google+ .

[Aug 28, 2017] Could AI Transform Continuous Delivery Development

Notable quotes:
"... It's basically a bullshit bingo post where someone repeats a buzzword without any knowledge of the material behind it ..."
"... continuous delivery == constant change ..."
"... This might be good for developers, but it's a nightmare for the poor, bloody, customers. ..."
"... However, I come at it from the other side, the developers just push new development out and production support is responsible for addressing the mess, it is horrible, there is too much disconnect between developers and their resulting output creating consistent outages. The most successful teams follow the mantra "Eat your own dog food" , developers who support the crap they push ..."
"... But do you know who likes Continuous Delivery? Not the users. The users hate stuff changing for the sake of change, but trying to convince management seems an impossible task. ..."
"... some of us terrified what 'continious delivery' means in the context of software in the microcontroller of a health care device, ..."
"... It is a natural consequence of a continuous delivery, emphasis on always evolving and changing and that the developer is king and no one can question developer opinion. Developer decides it should move, it moves. No pesky human testers to stand up and say 'you confused the piss out of us' to make them rethink it. No automatic test is going to capture 'confuse the piss out of the poor non-developer users'. ..."
"... It's amazing how common this attitude has become. It's aggressively anti-customer, and a big part of the reason for the acceleration of the decline of software quality over the past several years. ..."
"... All I know is that, as a user, rapid-release or continuous delivery has been nothing but an enormous pain in the ass to me and I wish it would die the horrible death it deserves already. ..."
Aug 28, 2017 | developers.slashdot.org

Anonymous Coward writes:

Re: (Score: Insightful)

Yeah, this is an incredibly low quality article. It doesn't specify what it means by what AI should do, doesn't specify which type of AI, doesn't specify why AI should be used, etc. Junk article.

It's basically a bullshit bingo post where someone repeats a buzzword without any knowledge of the material behind it.

xbytor ( 215790 ) , Sunday August 27, 2017 @04:00PM ( #55093989 ) Homepage
buzzwords (Score: Funny)

> a new paradigm shift.

I stopped reading after this.

cyber-vandal ( 148830 ) writes:
Re: buzzwords

Not enough leveraging core competencies through blue sky thinking and synergistic best of breed cloud machine learning for you?

sycodon ( 149926 ) , Sunday August 27, 2017 @04:10PM ( #55094039 )
Same Old Thing (Score: Insightful)

Holy Fuck.

Continuous integration, Prototyping, Incremental development, Rapid application development, Agile development, Waterfall development, Spiral development.

Now, introducing, "Continuous Delivery"...or something.

Here is the actual model, a model that will exist for the next 1,000 years.

1. Someone (or something) gathers requirements.
2. They get it wrong.
3. They develop the wrong thing that doesn't even work the way they thought it should.
4. The project leader is canned.
5. The software is implemented by an outside vendor, with all the flaws.
6. The software finally operates after 5 years of modifications to both the software and the workflows (to match the flaws in the software).
7. As soon as it's all running properly and everyone is trained, a new project is launched to redo it, "the right way".
8. Goto 1

AmazingRuss ( 555076 ) writes:
Re:

If everyone is stupid, no one is.

ColdWetDog ( 752185 ) writes:
Re:

No no. We got rid of line numbers a long time ago.

Graydyn Young ( 2835695 ) writes:
Re:

+1 Depressing

Tablizer ( 95088 ) writes:
AI meets Hunger Games

It's a genetic algorithm where YOU are the population being flushed out each cycle.

TheStickBoy ( 246518 ) writes:
Re:
Here is the actual model, a model that will exist for the next 1,000 years.

1. Someone (or something) gathers requirements.
2. They get it wrong.
3. They develop the wrong thing that doesn't even work the way they thought it should.
4. The project leader is canned.
5. The software is implemented by an outside vendor, with all the flaws.
6. The software finally operates after 5 years of modifications to both the software and the workflows (to match the flaws in the software).
7. As soon as it's all running properly and everyone is trained, a new project is launched to redo it, "the right way".
8. Goto 1

You just accurately described a 6 year project within our organization....and it made me cry Does this model have a name? an urban dictionary name? if not it needs one.

alvinrod ( 889928 ) , Sunday August 27, 2017 @04:15PM ( #55094063 )
Re: buzzwords (Score: Insightful)

Yeah, maybe there's something useful in TFA, but I'm not really inclined to go looking based on what was in the summary. At no point, did the person being quoted actually say anything of substance.

It's just buzzword soup with a dash of new technologies thrown in.

Five years ago they would have said practically the same words, but just talked about utilizing the cloud instead of AI.

I'm also a little skeptical of any study published by a company looking to sell you what the study has just claimed to be great. That doesn't mean its a complete sham, but how hard did they look for other explanations why some companies are more successful than others?

phantomfive ( 622387 ) writes:
Re:

At first I was skeptical, but I read some online reviews of it, and it looks pretty good [slashdot.org]. All you need is some AI and everything is better.

Anonymous Coward writes:
I smell Bullshit Bingo...

that's all, folks...

93 Escort Wagon ( 326346 ) writes:
Meeting goals

I notice the targets are all set from the company's point of view... including customer satisfaction. However it's quite easy to meet any goal, as long as you set it low enough.

Companies like Comcast or Qwest objectively have abysmal customer satisfaction ratings; but they likely meet their internal goal for that metric. I notice, in their public communications, they always use phrasing along the lines of "giving you an even better customer service experience" - again, the trick is to set the target low and

petes_PoV ( 912422 ) , Sunday August 27, 2017 @05:56PM ( #55094339 )
continuous delivery == constant change (Score: Insightful)

This might be good for developers, but it's a nightmare for the poor, bloody, customers.

Any professional outfit will test a new release (in-house or commercial product) thoroughly before letting it get anywhere close to an environment where their business is at stake.

This process can take anywhere from a day or two to several months, depending on the complexity of the operation, the scope of the changes, HOW MANY (developers note: not if any ) bugs are found and whether any alterations to working practices have to be introduced.

So to have developers lob a new "release" over the wall at frequent intervals is not useful, it isn't clever, nor does it save (the users) any money or speed up their acceptance. It just costs more in integration testing, floods the change control process with "issues" and means that when you report (again, developers: not if ) problems, it is virtually impossible to describe exactly which release you are referring to and even more impossible for whoever fixes the bugs to produce the same version to fix and then incorporate those fixes into whatever happens to be the latest version - that hour. Even more so when dozens of major corporate customers are ALL reporting bugs with each new version they test.

SethJohnson ( 112166 ) writes:
Re:

Any professional outfit will test a new release (in-house or commercial product) thoroughly before letting it get anywhere close to an environment where their business is at stake. This process can take anywhere from a day or two to several months, depending on the complexity of the operation, the scope of the changes, HOW MANY (developers note: not if any) bugs are found and whether any alterations to working practices have to be introduced.

I wanted to chime in with a tangible anecdote to support your

Herkum01 ( 592704 ) writes:
Re:

I can sympathize with that view, of it appearing to have too many developers focused upon deployment/testing rather than actual development.

However, I come at it from the other side, the developers just push new development out and production support is responsible for addressing the mess, it is horrible, there is too much disconnect between developers and their resulting output creating consistent outages. The most successful teams follow the mantra "Eat your own dog food" , developers who support the crap they push

JohnFen ( 1641097 ) writes:
Re:
This might be good for developers

It's not even good for developers.

AmazingRuss ( 555076 ) writes:
"a new paradigm shift."

Another one?

sethstorm ( 512897 ) writes:
Let's hope not.

AI is enough of a problem, why make it worse?

bobm ( 53783 ) writes:
According to one study

One study, well then I'm sold.

But do you know who likes Continuous Delivery? Not the users. The users hate stuff changing for the sake of change, but trying to convince management seems an impossible task.

angel'o'sphere ( 80593 ) writes:
Re:

Why should users not like it? If you shop on amazon you don't know if a specific feature you notice today came there via continuous delivery or a more traditional process.

Junta ( 36770 ) writes:
Re:

The crux of the problem is that we (in these discussions and the analysts) describe *all* manner of 'software development' as the same thing. Whether it's a desktop application, an embedded microcontroller in industrial equipment, a web application for people to get work done, or a webapp to let people see the latest funny cat video.

Then we start talking past each other, some of us terrified what 'continious delivery' means in the context of software in the microcontroller of a health care device, others t

angel'o'sphere ( 80593 ) writes:
Re:

Well, 'continuous delievery' is a term with a defined meaning. And releasing phone apps with unwanted UI/functionality in rapid succession is not part of that definition. Continuous delievery basically only is the next logical step after continuous integration. You deploy the new functionallity automatically (or with a click of a button) when certain test criteria are met. Usually on a subset of your nodes so only a subset of your customers sees it. If you have crashes on those nodes or customer complaints you

JohnFen ( 1641097 ) writes:
Re:
You deploy the new functionallity automatically (or with a click of a button) when certain test criteria are met. Usually on a subset of your nodes so only a subset of your customers sees it. If you have crashes on those nodes or customer complaints you roll back.

Why do you consider this to be a good thing? It's certainly not for those poor customers who were chosen to be involuntary beta testers, and it's also not for the rest of the customers who have to deal with software that is constantly changing underneath them.

Junta ( 36770 ) writes:
Re:
'continuous delievery' is a term with a defined meaning. And releasing phone apps with unwanted UI/functionality in rapid succession is not part of that definition.

It is a natural consequence of a continuous delivery, emphasis on always evolving and changing and that the developer is king and no one can question developer opinion. Developer decides it should move, it moves. No pesky human testers to stand up and say 'you confused the piss out of us' to make them rethink it. No automatic test is going to capture 'confuse the piss out of the poor non-developer users'.

If you have crashes on those nodes or customer complaints you roll back.

Note that a customer with a choice is likely to just go somewhere else rather than use your software.

manu0601 ( 2221348 ) writes:
AI written paper

I suspect that article was actually written by an AI. That would explain why it makes so little sense to human mind.

4wdloop ( 1031398 ) writes:
IT what?

IT in my company does network, Windows, Office and Virus etc. type of work. Is this what they talk about? Anyway, it's been long outsourced to IT (as in "Indian" technology)...

Comrade Ogilvy ( 1719488 ) writes:
For some businesses maybe but...

I recently interviewed at a couple of the new fangled big data marketing startups that correlate piles of stuff to help target ads better, and they were continuously deploying up the wazoo. In fact, they had something like zero people doing traditional QA.

It was not totally insane at all. But they did have a blaze attitude about deployments -- if stuff don't work in production they just roll back, and not worry about customer input data being dropped on the floor. Heck, they did not worry much about da

JohnFen ( 1641097 ) writes:
Re:
But they did have a blaze attitude about deployments -- if stuff don't work in production they just roll back, and not worry about customer input data being dropped on the floor.

It's amazing how common this attitude has become. It's aggressively anti-customer, and a big part of the reason for the acceleration of the decline of software quality over the past several years.

Njovich ( 553857 ) writes:
No

You want your deployment system to be predictable, and as my old AI professor used to say, intelligent means hard to predict. You don't want AI for systems that just have to do the exact same thing reliably over and over again.

angel'o'sphere ( 80593 ) writes:
Summary sounds retarded

A continuous delivery pipeline has as much AI as a nematode has natural intelligence ... probably even less.

Junta ( 36770 ) writes:
In other words...

Analyst who understands neither software development nor AI proceeds to try to sound insightful about both.

JohnFen ( 1641097 ) writes:
All I know is

All I know is that, as a user, rapid-release or continuous delivery has been nothing but an enormous pain in the ass to me and I wish it would die the horrible death it deserves already.

jmcwork ( 564008 ) writes:
Every morning: git update; make install

As long as customers are comfortable with doing this, I do not see a problem. Now, that will require that developers keep making continuous,

[Aug 28, 2017] rsync doesn't copy files with restrictive permissions

Aug 28, 2017 | superuser.com
Trying to copy files with rsync, it complains:
rsync: send_files failed to open "VirtualBox/Machines/Lubuntu/Lubuntu.vdi" \
(in media): Permission denied (13)

That file is not copied. Indeed the file permissions of that file are very restrictive on the server side:

-rw-------    1 1000     1000     3133181952 Nov  1  2011 Lubuntu.vdi

I call rsync with

sudo rsync -av --fake-super root@sheldon::media /mnt/media

The rsync daemon runs as root on the server. root can copy that file (of course). rsyncd has "fake super = yes" set in /etc/rsyncd.conf.

What can I do so that the file is copied without changing the permissions of the file on the server?

Torsten Bronger, asked Dec 29 '12 at 10:15
If you use RSync as daemon on destination, please post grep rsync /var/log/daemon to improve your question – F. Hauri Dec 29 '12 at 13:23
arober11, answered Dec 29 '12 at 13:21

As you appear to have root access to both servers, have you tried --force ?

Alternatively you could bypass the rsync daemon and try a direct sync e.g.

rsync -optg --rsh=/usr/bin/ssh --rsync-path=/usr/bin/rsync --verbose --recursive --delete-after --force  root@sheldon::media /mnt/media
Using ssh means encryption, which makes things slower. --force does only affect directories, if I read the man page correctly. – Torsten Bronger Jan 1 '13 at 23:08
Unless you're using ancient kit, the CPU overhead of encrypting / decrypting the traffic shouldn't be noticeable, but you will lose 10-20% of your bandwidth through the encapsulation process. Then again 80% of a working link is better than 100% of a non working one :) – arober11 Jan 2 '13 at 10:52
I do have an "ancient kit". ;-) (Slow ARM CPU on a NAS.) But I now mount the NAS with NFS and use rsync (with "sudo") locally. This solves the problem (and is even faster). However, I still think that my original problem must be solvable using the rsync protocol (remote, no ssh). – Torsten Bronger Jan 4 '13 at 7:55

[Aug 28, 2017] Using rsync under target user to copy home directories

Aug 28, 2017 | unix.stackexchange.com


nixnotwin , asked Sep 21 '12 at 5:11

On my Ubuntu server there are about 150 shell accounts. All usernames begin with the prefix u12.. I have root access and I am trying to copy a directory named "somefiles" to all the home directories. After copying the directory the user and group ownership of the directory should be changed to user's. Username, group and home-dir name are same. How can this be done?

Gilles , answered Sep 21 '12 at 23:44

Do the copying as the target user. This will automatically make the target files owned by that user. Make sure that the original files are world-readable (or at least readable by all the target users). Run chmod afterwards if you don't want the copied files to be world-readable.
getent passwd |
awk -F : '$1 ~ /^u12/ {print $1}' |
while IFS= read -r user; do
  su "$user" -c 'cp -Rp /original/location/somefiles ~/'
done
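If, as mentioned, the copies should not stay world-readable, the same loop can run the chmod afterwards (a sketch; go-rwx is just one possible mode, and somefiles is the directory name from the question):

getent passwd |
awk -F : '$1 ~ /^u12/ {print $1}' |
while IFS= read -r user; do
  su "$user" -c 'chmod -R go-rwx ~/somefiles'
done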

[Aug 28, 2017] rsync over SSH preserve ownership only for www-data owned files

Aug 28, 2017 | stackoverflow.com

jeffery_the_wind , asked Mar 6 '12 at 15:36

I am using rsync to replicate a web folder structure from a local server to a remote server. Both servers are ubuntu linux. I use the following command, and it works well:
rsync -az /var/www/ user@10.1.1.1:/var/www/

The usernames for the local system and the remote system are different. From what I have read it may not be possible to preserve all file and folder owners and groups. That is OK, but I would like to preserve owners and groups just for the www-data user, which does exist on both servers.

Is this possible? If so, how would I go about doing that?

Thanks!

** EDIT **

There is some mention of rsync being able to preserve ownership and groups on remote file syncs here: http://lists.samba.org/archive/rsync/2005-August/013203.html

** EDIT 2 **

I ended up getting the desired affect thanks to many of the helpful comments and answers here. Assuming the IP of the source machine is 10.1.1.2 and the IP of the destination machine is 10.1.1.1. I can use this line from the destination machine:

sudo rsync -az user@10.1.1.2:/var/www/ /var/www/

This preserves the ownership and groups of the files that have a common user name, like www-data. Note that using rsync without sudo does not preserve these permissions.

ghoti , answered Mar 6 '12 at 19:01

You can also sudo the rsync on the target host by using the --rsync-path option:
# rsync -av --rsync-path="sudo rsync" /path/to/files user@targethost:/path

This lets you authenticate as user on targethost, but still get privileged write permission through sudo . You'll have to modify your sudoers file on the target host to avoid sudo's request for your password. man sudoers or run sudo visudo for instructions and samples.

You mention that you'd like to retain the ownership of files owned by www-data, but not other files. If this is really true, then you may be out of luck unless you implement chown or a second run of rsync to update permissions. There is no way to tell rsync to preserve ownership for just one user .

That said, you should read about rsync's --files-from option.

rsync -av /path/to/files user@targethost:/path
find /path/to/files -user www-data -print | \
  rsync -av --files-from=- --rsync-path="sudo rsync" /path/to/files user@targethost:/path

I haven't tested this, so I'm not sure exactly how piping find's output into --files-from=- will work. You'll undoubtedly need to experiment.

xato , answered Mar 6 '12 at 15:39

As far as I know, you cannot chown files to somebody other than yourself if you are not root. So you would have to rsync using the www-data account, as all files will be created with the specified user as owner. So you need to chown the files afterwards.

user2485267 , answered Jun 14 '13 at 8:22

I had a similar problem and cheated the rsync command,

rsync -avz --delete root@x.x.x.x:/home//domains/site/public_html/ /home/domains2/public_html && chown -R wwwusr:wwwgrp /home/domains2/public_html/

the && runs the chown against the folder only when the rsync completes successfully (a single '&' would background the rsync and run the chown immediately, regardless of the rsync completion status)

Graham , answered Mar 6 '12 at 15:51

The root users for the local system and the remote system are different.

What does this mean? The root user is uid 0. How are they different?

Any user with read permission to the directories you want to copy can determine what usernames own what files. Only root can change the ownership of files being written .

You're currently running the command on the source machine, which restricts your writes to the permissions associated with user@10.1.1.1. Instead, you can try to run the command as root on the target machine. Your read access on the source machine isn't an issue.

So on the target machine (10.1.1.1), assuming the source is 10.1.1.2:

# rsync -az user@10.1.1.2:/var/www/ /var/www/

Make sure your groups match on both machines.
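One quick, hedged way to spot-check that the groups match (the user and 10.1.1.2 address are the ones used in this thread):

# Compare the group databases of the two machines; no output means they match
diff <(getent group | sort) <(ssh user@10.1.1.2 getent group | sort)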

Also, set up access to user@10.1.1.2 using a DSA or RSA key, so that you can avoid having passwords floating around. For example, as root on your target machine, run:

# ssh-keygen -d

Then take the contents of the file /root/.ssh/id_dsa.pub and add it to ~user/.ssh/authorized_keys on the source machine. You can ssh user@10.1.1.2 as root from the target machine to see if it works. If you get a password prompt, check your error log to see why the key isn't working.

ghoti , answered Mar 6 '12 at 18:54

Well, you could skip the challenges of rsync altogether, and just do this through a tar tunnel.
sudo tar zcf - /path/to/files | \
  ssh user@remotehost "cd /some/path; sudo tar zxf -"

You'll need to set up your SSH keys as Graham described.

Note that this handles full directory copies, not incremental updates like rsync.

The idea here is that:

[Aug 28, 2017] rsync and file permissions

Aug 28, 2017 | superuser.com
I'm trying to use rsync to copy a set of files from one system to another. I'm running the command as a normal user (not root). On the remote system, the files are owned by apache and when copied they are obviously owned by the local account (fred).

My problem is that every time I run the rsync command, all files are re-synched even though they haven't changed. I think the issue is that rsync sees the file owners are different and my local user doesn't have the ability to change ownership to apache, but I'm not including the -a or -o options so I thought this would not be checked. If I run the command as root, the files come over owned by apache and do not come a second time if I run the command again. However I can't run this as root for other reasons. Here is the command:

/usr/bin/rsync --recursive --rsh=/usr/bin/ssh --rsync-path=/usr/bin/rsync --verbose root@server.example.com:/src/dir/ /local/dir
Fred Snertz, asked May 2 '11 at 23:43
Why can't you run rsync as root? On the remote system, does fred have read access to the apache-owned files? – chrishiestand May 3 '11 at 0:32
Ah, I left out the fact that there are ssh keys set up so that local fred can become remote root, so yes fred/root can read them. I know this is a bit convoluted but its real. – Fred Snertz May 3 '11 at 14:50
Always be careful when root can ssh into the machine. But if you have password and challenge response authentication disabled it's not as bad. – chrishiestand May 3 '11 at 17:32
chrishiestand, answered May 3 '11 at 17:48

Here's the answer to your problem:
-c, --checksum
      This changes the way rsync checks if the files have been changed and are in need of a  transfer.   Without  this  option,
      rsync  uses  a "quick check" that (by default) checks if each file's size and time of last modification match between the
      sender and receiver.  This option changes this to compare a 128-bit checksum for each file  that  has  a  matching  size.
      Generating  the  checksums  means  that both sides will expend a lot of disk I/O reading all the data in the files in the
      transfer (and this is prior to any reading that will be done to transfer changed files), so this  can  slow  things  down
      significantly.

      The  sending  side  generates  its checksums while it is doing the file-system scan that builds the list of the available
      files.  The receiver generates its checksums when it is scanning for changed files, and will checksum any file  that  has
      the  same  size  as the corresponding sender's file:  files with either a changed size or a changed checksum are selected
      for transfer.

      Note that rsync always verifies that each transferred file was correctly reconstructed on the receiving side by  checking
      a  whole-file  checksum  that is generated as the file is transferred, but that automatic after-the-transfer verification
      has nothing to do with this option's before-the-transfer "Does this file need to be updated?" check.

      For protocol 30 and beyond (first supported in 3.0.0), the checksum used is MD5.  For older protocols, the checksum  used
      is MD4.

So run:

/usr/bin/rsync -c --recursive --rsh=/usr/bin/ssh --rsync-path=/usr/bin/rsync --verbose root@server.example.com:/src/dir/ /local/dir

Note there may be a time+disk churn tradeoff by using this option. Personally, I'd probably just sync the file's mtimes too:

/usr/bin/rsync -t --recursive --rsh=/usr/bin/ssh --rsync-path=/usr/bin/rsync --verbose root@server.example.com:/src/dir/ /local/dir
Awesome. Thank you. Looks like the second option is going to work for me and I found the first very interesting. – Fred Snertz May 3 '11 at 18:40
psst, hit the green checkbox to give my answer credit ;-) Thx. – chrishiestand May 12 '11 at 1:56

[Aug 28, 2017] Why does rsync fail to copy files from /sys in Linux?

Notable quotes:
"... pseudo file system ..."
"... pseudo filesystems ..."
Aug 28, 2017 | unix.stackexchange.com


Eugene Yarmash , asked Apr 24 '13 at 16:35

I have a bash script which uses rsync to backup files in Archlinux. I noticed that rsync failed to copy a file from /sys , while cp worked just fine:
# rsync /sys/class/net/enp3s1/address /tmp    
rsync: read errors mapping "/sys/class/net/enp3s1/address": No data available (61)
rsync: read errors mapping "/sys/class/net/enp3s1/address": No data available (61)
ERROR: address failed verification -- update discarded.
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1052) [sender=3.0.9]

# cp  /sys/class/net/enp3s1/address /tmp   ## this works

I wonder why does rsync fail, and is it possible to copy the file with it?

mattdm , answered Apr 24 '13 at 18:20

Rsync has code which specifically checks if a file is truncated during read and gives this error: ENODATA . I don't know why the files in /sys have this behavior, but since they're not real files, I guess it's not too surprising. There doesn't seem to be a way to tell rsync to skip this particular check.

I think you're probably better off not rsyncing /sys and using specific scripts to cherry-pick out the particular information you want (like the network card address).
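A minimal sketch of that cherry-picking approach, using the interface name from the question (the attribute list and the destination directory are assumptions):

# Copy selected sysfs attributes instead of rsyncing /sys
mkdir -p /tmp/sysinfo
for attr in address mtu operstate; do
    cat "/sys/class/net/enp3s1/$attr" > "/tmp/sysinfo/enp3s1.$attr"
done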

Runium , answered Apr 25 '13 at 0:23

First off, /sys is a pseudo file system . If you look at /proc/filesystems you will find a list of registered file systems, where quite a few have nodev in front. This indicates they are pseudo filesystems . This means they exist on a running kernel as a RAM-based filesystem. Further, they do not require a block device.
$ cat /proc/filesystems
nodev   sysfs
nodev   rootfs
nodev   bdev
...

At boot the kernel mounts this system and updates entries when suited, e.g. when new hardware is found during boot or by udev .

In /etc/mtab you typically find the mount by:

sysfs /sys sysfs rw,noexec,nosuid,nodev 0 0

For a nice paper on the subject read Patrick Mochel's – The sysfs Filesystem .


stat of /sys files

If you go into a directory under /sys and do an ls -l you will notice that all files have one size, typically 4096 bytes. This is reported by sysfs .

:/sys/devices/pci0000:00/0000:00:19.0/net/eth2$ ls -l
-r--r--r-- 1 root root 4096 Apr 24 20:09 addr_assign_type
-r--r--r-- 1 root root 4096 Apr 24 20:09 address
-r--r--r-- 1 root root 4096 Apr 24 20:09 addr_len
...

Further, you can do a stat on a file and notice another distinct feature: it occupies 0 blocks. Also, the inode of the root (stat /sys) is 1, whereas a regular filesystem root typically has inode 2, etc.

rsync vs. cp

The easiest explanation for rsync's failure to synchronize pseudo files is perhaps by example.

Say we have a file named address that is 18 bytes. An ls or stat of the file reports 4096 bytes.


rsync
  1. Opens file descriptor, fd.
  2. Uses fstat(fd) to get information such as size.
  3. Set out to read size bytes, i.e. 4096. That would be line 253 of the code linked by @mattdm . read_size == 4096
    1. Ask; read: 4096 bytes.
    2. A short string is read i.e. 18 bytes. nread == 18
    3. read_size = read_size - nread (4096 - 18 = 4078)
    4. Ask; read: 4078 bytes
    5. 0 bytes read (as first read consumed all bytes in file).
    6. nread == 0 , line 255
    7. Unable to read 4096 bytes. Zero out buffer.
    8. Set error ENODATA .
    9. Return.
  4. Report error.
  5. Retry. (Above loop).
  6. Fail.
  7. Report error.
  8. FINE.

During this process it actually reads the entire file. But with no size available it cannot validate the result – thus failure is the only option.

cp
  1. Opens file descriptor, fd.
  2. Uses fstat(fd) to get information such as st_size (also uses lstat and stat).
  3. Check if file is likely to be sparse. That is the file has holes etc.
    copy.c:1010
    /* Use a heuristic to determine whether SRC_NAME contains any sparse
     * blocks.  If the file has fewer blocks than would normally be
     * needed for a file of its size, then at least one of the blocks in
     * the file is a hole.  */
    sparse_src = is_probably_sparse (&src_open_sb);
    

    As stat reports file to have zero blocks it is categorized as sparse.

  4. Tries to read file by extent-copy (a more efficient way to copy normal sparse files), and fails.
  5. Copy by sparse-copy.
    1. Starts out with max read size of MAXINT.
      Typically 18446744073709551615 bytes on a 32 bit system.
    2. Ask; read 4096 bytes. (Buffer size allocated in memory from stat information.)
    3. A short string is read i.e. 18 bytes.
    4. Check if a hole is needed, nope.
    5. Write buffer to target.
    6. Subtract 18 from max read size.
    7. Ask; read 4096 bytes.
    8. 0 bytes as all got consumed in first read.
    9. Return success.
  6. All OK. Update flags for file.
  7. FINE.

,

Might be related, but extended attribute calls will fail on sysfs:

[root@hypervisor eth0]# lsattr address

lsattr: Inappropriate ioctl for device While reading flags on address

[root@hypervisor eth0]#

Looking at my strace it looks like rsync tries to pull in extended attributes by default:

22964 <... getxattr resumed> , 0x7fff42845110, 132) = -1 ENODATA (No data available)

I tried finding a flag to give rsync to see if skipping extended attributes resolves the issue but wasn't able to find anything ( --xattrs turns them on at the destination).

[Aug 28, 2017] Rsync doesn't copy everything

Aug 28, 2017 | ubuntuforums.org

[ubuntu] Rsync doesn't copy everything



Scormen May 31st, 2009, 10:09 AM Hi all,

I'm having some trouble with rsync. I'm trying to sync my local /etc directory to a remote server, but this won't work.

The problem is that it doesn't seem to copy all the files.
The local /etc dir contains 15MB of data; after an rsync, the remote backup contains only 4.6MB of data.

Rsync is running by root. I'm using this command:

rsync --rsync-path="sudo rsync" -e "ssh -i /root/.ssh/backup" -avz --delete --delete-excluded -h --stats /etc kris@192.168.1.3:/home/kris/backup/laptopkris

I hope someone can help.
Thanks!

Kris


Scormen May 31st, 2009, 11:05 AM I found that if I do a local sync, everything goes fine.
But if I do a remote sync, it copies only 4.6MB.

Any idea?


LoneWolfJack May 31st, 2009, 05:14 PM never used rsync on a remote machine, but "sudo rsync" looks wrong. you probably can't call sudo like that so the ssh connection needs to have the proper privileges for executing rsync.

just an educated guess, though.


Scormen May 31st, 2009, 05:24 PM Thanks for your answer.

In /etc/sudoers I have added next line, so "sudo rsync" will work.

kris ALL=NOPASSWD: /usr/bin/rsync

I also tried without --rsync-path="sudo rsync", but without success.

I have also tried on the server to pull the files from the laptop, but that doesn't work either.


LoneWolfJack May 31st, 2009, 05:30 PM in the rsync help file it says that --rsync-path is for the path to rsync on the remote machine, so my guess is that you can't use sudo there as it will be interpreted as a path.

so you will have to do --rsync-path="/path/to/rsync" and make sure the ssh login has root privileges if you need them to access the files you want to sync.

--rsync-path="sudo rsync" probably fails because
a) sudo is interpreted as a path
b) the space isn't escaped
c) sudo probably won't allow itself to be called remotely

again, this is not more than an educated guess.


Scormen May 31st, 2009, 05:45 PM I understand what you mean, so I tried also:

rsync -Cavuhzb --rsync-path="/usr/bin/rsync" -e "ssh -i /root/.ssh/backup" /etc kris@192.168.1.3:/home/kris/backup/laptopkris

Then I get this error:

sending incremental file list
rsync: recv_generator: failed to stat "/home/kris/backup/laptopkris/etc/chatscripts/pap": Permission denied (13)
rsync: recv_generator: failed to stat "/home/kris/backup/laptopkris/etc/chatscripts/provider": Permission denied (13)
rsync: symlink "/home/kris/backup/laptopkris/etc/cups/ssl/server.crt" -> "/etc/ssl/certs/ssl-cert-snakeoil.pem" failed: Permission denied (13)
rsync: symlink "/home/kris/backup/laptopkris/etc/cups/ssl/server.key" -> "/etc/ssl/private/ssl-cert-snakeoil.key" failed: Permission denied (13)
rsync: recv_generator: failed to stat "/home/kris/backup/laptopkris/etc/ppp/peers/provider": Permission denied (13)
rsync: recv_generator: failed to stat "/home/kris/backup/laptopkris/etc/ssl/private/ssl-cert-snakeoil.key": Permission denied (13)

sent 86.85K bytes received 306 bytes 174.31K bytes/sec
total size is 8.71M speedup is 99.97
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1058) [sender=3.0.5]

And the same command with "root" instead of "kris".
Then, I get no errors, but I still don't have all the files synced.


Scormen June 1st, 2009, 09:00 AM Sorry for this bump.
I'm still having the same problem.

Any idea?

Thanks.


binary10 June 1st, 2009, 10:36 AM I understand what you mean, so I tried also:

rsync -Cavuhzb --rsync-path="/usr/bin/rsync" -e "ssh -i /root/.ssh/backup" /etc kris@192.168.1.3:/home/kris/backup/laptopkris

Then I get this error:

And the same command with "root" instead of "kris".
Then, I get no errors, but I still don't have all the files synced.

Maybe there's a nicer way, but you could place /usr/bin/rsync into a private protected area, set the owner to root, set the setuid bit on it, and change your rsync-path argument like this:

# on the remote side, aka kris@192.168.1.3
mkdir priv-area
# protect it from normal users running a priv version of rsync
chmod 700 priv-area
cd priv-area
cp -p /usr/local/bin/rsync ./rsync-priv
sudo chown 0:0 ./rsync-priv
sudo chmod +s ./rsync-priv
ls -ltra # rsync-priv should now be 'bold-red' in bash

Looking at your flags, you've specified a CVS-style exclude (-C), told rsync to skip files that are newer on the target (-u), and asked it to make backups of files it replaces or removes (-b).

rsync -Cavuhzb --rsync-path="/home/kris/priv-area/rsync-priv" -e "ssh -i /root/.ssh/backup" /etc kris@192.168.1.3:/home/kris/backup/laptopkris

From those qualifiers you're not going to be getting everything sync'd. It's doing what you're telling it to do.

If you really wanted to perform a like-for-like backup (not keeping stuff that's been changed/deleted from the source), I'd go for something like the following.

rsync --archive --delete --hard-links --one-file-system --acls --xattrs --dry-run -i --rsync-path="/home/kris/priv-area/rsync-priv" --rsh="ssh -i /root/.ssh/backup" /etc/ kris@192.168.1.3:/home/kris/backup/laptopkris/etc/

Remove the --dry-run and -i when you're happy with the output, and it should do what you want. A word of warning, I get a bit nervous when not seeing trailing (/) on directories as it could lead to all sorts of funnies if you end up using rsync on softlinks.
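To make the trailing-slash warning concrete, a hedged sketch using the paths from this thread:

# with a trailing slash the *contents* of /etc land in .../laptopkris/etc/
rsync -a /etc/ kris@192.168.1.3:/home/kris/backup/laptopkris/etc/
# without it rsync creates an extra level: .../laptopkris/etc/etc/
rsync -a /etc  kris@192.168.1.3:/home/kris/backup/laptopkris/etc/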


Scormen June 1st, 2009, 12:19 PM Thanks for your help, binary10.

I've tried what you have said, but still, I only receive 4.6MB on the remote server.
Thanks for the warning, I'll note that!

Did someone already tried to rsync their own /etc to a remote system? Just to know if this strange thing only happens to me...

Thanks.


binary10 June 1st, 2009, 01:22 PM Thanks for your help, binary10.

I've tried what you have said, but still, I only receive 4.6MB on the remote server.
Thanks for the warning, I'll note that!

Did someone already tried to rsync their own /etc to a remote system? Just to know if this strange thing only happens to me...

Thanks.

Ok so I've gone back and looked at your original post, how are you calculating 15MB of data under etc - via a du -hsx /etc/ ??

I do daily drive to drive backup copies via rsync and drive to network copies.. and have used them recently for restoring.

Sure my du -hsx /etc/ reports 17MB of data of which 10MB gets transferred via an rsync. My backup drives still operate.

rsync 3.0.6 has some fixes to do with ACLs and special devices rsyncing between solaris. but I think 3.0.5 is still ok with ubuntu to ubuntu systems.

Here is my test doing exactly what you're probably trying to do. I even check the remote end..

binary10@jsecx25:~/bin-priv$ ./rsync --archive --delete --hard-links --one-file-system --stats --acls --xattrs --human-readable --rsync-path="~/bin/rsync-priv-os-specific" --rsh="ssh" /etc/ rsyncbck@10.0.0.21:/home/kris/backup/laptopkris/etc/

Number of files: 3121
Number of files transferred: 1812
Total file size: 10.04M bytes
Total transferred file size: 10.00M bytes
Literal data: 10.00M bytes
Matched data: 0 bytes
File list size: 109.26K
File list generation time: 0.002 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 10.20M
Total bytes received: 38.70K

sent 10.20M bytes received 38.70K bytes 4.09M bytes/sec
total size is 10.04M speedup is 0.98

binary10@jsecx25:~/bin-priv$ sudo du -hsx /etc/
17M /etc/
binary10@jsecx25:~/bin-priv$

And then on the remote system I do the du -hsx

binary10@lenovo-n200:/home/kris/backup/laptopkris/etc$ cd ..
binary10@lenovo-n200:/home/kris/backup/laptopkris$ sudo du -hsx etc
17M etc
binary10@lenovo-n200:/home/kris/backup/laptopkris$


Scormen June 1st, 2009, 01:35 PM How are you calculating 15MB of data under etc - via a du -hsx /etc/ ??
Indeed, on my laptop I see:

root@laptopkris:/home/kris# du -sh /etc/
15M /etc/

If I do the same thing after a fresh sync to the server, I see:

root@server:/home/kris# du -sh /home/kris/backup/laptopkris/etc/
4.6M /home/kris/backup/laptopkris/etc/

On both sides, I have installed Ubuntu 9.04, with version 3.0.5 of rsync.
So strange...


binary10 June 1st, 2009, 01:45 PM it does seem a bit odd.

I'd start doing a few diffs from the outputs find etc/ -printf "%f %s %p %Y\n" | sort

And see what type of files are missing.

- edit - Added the %Y file type.


Scormen June 1st, 2009, 01:58 PM Hmm, it's going stranger.
Now I see that I have all my files on the server, but they don't have their full size (bytes).

I have uploaded the files, so you can look into them.

Laptop: http://www.linuxontdekt.be/files/laptop.files
Server: http://www.linuxontdekt.be/files/server.files


binary10 June 1st, 2009, 02:16 PM If you look at the files that are different, aka the ssl ones, they are links to local files elsewhere, aka linked to /usr and not within /etc/.

aka they are different on your laptop and the server


Scormen June 1st, 2009, 02:25 PM I understand that soft links are just copied, and not the "full file".

But, you have run the same command to test, a few posts ago.
How is it possible that you can see the full 15MB?


binary10 June 1st, 2009, 02:34 PM I was starting to think that this was a bug with du.

The de-referencing is a bit topsy-turvy.

If you rsync-copy the remote backup back to a new location on the laptop and do the du command, I wonder if you'll end up with 15MB again.


Scormen June 1st, 2009, 03:20 PM Good tip.

On the server side, the backup of the /etc was still 4.6MB.
I have rsynced it back to the laptop, to a new directory.

If I go on the laptop to that new directory and do a du, it says 15MB.


binary10 June 1st, 2009, 03:34 PM Good tip.

On the server side, the backup of the /etc was still 4.6MB.
I have rsynced it back to the laptop, to a new directory.

If I go on the laptop to that new directory and do a du, it says 15MB.

I think you've now confirmed that RSYNC DOES copy everything.. it's just that du confused what you had expected by counting the sizes of the link targets.

You might also think about what you're copying; maybe you need more than just /etc. Of course it depends on what you are trying to do with the backup :)

enjoy.


Scormen June 1st, 2009, 03:37 PM Yeah, it seems to work well.
So, the "problem" where just the soft links, that couldn't be counted on the server side?
binary10 June 1st, 2009, 04:23 PM Yeah, it seems to work well.
So, the "problem" where just the soft links, that couldn't be counted on the server side?

The links were copied as links as per the design of the --archive in rsync.

The contents the links point to were different between your two systems, these being files that reside outside of /etc/, in /usr. And so du reports them differently.


Scormen June 1st, 2009, 05:36 PM Okay, I got it.
Many thanks for the support, binary10!
Scormen June 1st, 2009, 05:59 PM Just to know, is it possible to copy the data from these links as real, hard data?
Thanks.
binary10 June 2nd, 2009, 09:54 AM Just to know, is it possible to copy the data from these links as real, hard data?
Thanks.

Yep absolutely

You should then look at other possibilities of:

-L, --copy-links transform symlink into referent file/dir
--copy-unsafe-links only "unsafe" symlinks are transformed
--safe-links ignore symlinks that point outside the source tree
-k, --copy-dirlinks transform symlink to a dir into referent dir
-K, --keep-dirlinks treat symlinked dir on receiver as dir

but then you'll have to start questioning why you are backing them up like that especially stuff under /etc/. If you ever wanted to restore it you'd be restoring full files and not symlinks the restore result could be a nightmare as well as create future issues (upgrades etc) let alone your backup will be significantly larger, could be 150MB instead of 4MB.
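If you did decide you want the referent files instead of the links, a minimal sketch on top of the options already used in this thread (keep --dry-run until you are sure):

rsync --archive --copy-links --dry-run -i --rsync-path="/home/kris/priv-area/rsync-priv" --rsh="ssh -i /root/.ssh/backup" /etc/ kris@192.168.1.3:/home/kris/backup/laptopkris/etc/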


Scormen June 2nd, 2009, 10:04 AM Okay, now I'm sure what its doing :)
Is it also possible to show on a system the "real disk usage" of e.g. that /etc directory? So, without the links, so that we get an output of 4.6MB.

Thank you very much for your help!


binary10 June 2nd, 2009, 10:22 AM What does the following respond with.

sudo du --apparent-size -hsx /etc

If you want the real answer then your result from a dry-run rsync will only be enough for you.

sudo rsync --dry-run --stats -h --archive /etc/ /tmp/etc/
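For reference, the three numbers answer different questions; a sketch (the sizes you get are of course system-specific):

sudo du -sh /etc                    # disk blocks used; a symlink counts as a tiny link file
sudo du --apparent-size -hsx /etc   # sum of the sizes reported by stat
sudo du -shL /etc                   # dereference symlinks and count the files they point to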

[Aug 21, 2017] As the crisis unfolds there will be talk about giving the UN some role in resolving international problems.

Aug 21, 2017 | www.lettinggobreath.com

psychohistorian | Aug 21, 2017 12:01:32 AM | 27

My understanding of the UN is that it is the High Court of the World where fealty is paid to empire that funds most of the political circus anyway...and speaking of funding or not, read the following link and let's see what PavewayIV adds to the potential sickness we are sleepwalking into.

As the UN delays talks, more industry leaders back ban on weaponized AI

[Jul 29, 2017] linux - Directory bookmarking for bash - Stack Overflow

Notable quotes:
"... May you wan't to change this alias to something which fits your needs ..."
Jul 29, 2017 | stackoverflow.com

getmizanur , asked Sep 10 '11 at 20:35

Is there any directory bookmarking utility for bash to allow move around faster on the command line?

UPDATE

Thanks guys for the feedback however I created my own simple shell script (feel free to modify/expand it)

function cdb() {
    USAGE="Usage: cdb [-c|-g|-d|-l] [bookmark]" ;
    if  [ ! -e ~/.cd_bookmarks ] ; then
        mkdir ~/.cd_bookmarks
    fi

    case $1 in
        # create bookmark
        -c) shift
            if [ ! -f ~/.cd_bookmarks/$1 ] ; then
                echo "cd `pwd`" > ~/.cd_bookmarks/"$1" ;
            else
                echo "Try again! Looks like there is already a bookmark '$1'"
            fi
            ;;
        # goto bookmark
        -g) shift
            if [ -f ~/.cd_bookmarks/$1 ] ; then 
                source ~/.cd_bookmarks/"$1"
            else
                echo "Mmm...looks like your bookmark has spontaneously combusted. What I mean to say is that your bookmark does not exist." ;
            fi
            ;;
        # delete bookmark
        -d) shift
            if [ -f ~/.cd_bookmarks/$1 ] ; then 
                rm ~/.cd_bookmarks/"$1" ;
            else
                echo "Oops, forgot to specify the bookmark" ;
            fi    
            ;;
        # list bookmarks
        -l) shift
            ls -l ~/.cd_bookmarks/ ;
            ;;
         *) echo "$USAGE" ;
            ;;
    esac
}

INSTALL

1./ create a file ~/.cdb and copy the above script into it.

2./ in your ~/.bashrc add the following

if [ -f ~/.cdb ]; then
    source ~/.cdb
fi

3./ restart your bash session

USAGE

1./ to create a bookmark

$cd my_project
$cdb -c project1

2./ to goto a bookmark

$cdb -g project1

3./ to list bookmarks

$cdb -l

4./ to delete a bookmark

$cdb -d project1

5./ where are all my bookmarks stored?

$cd ~/.cd_bookmarks

Fredrik Pihl , answered Sep 10 '11 at 20:47

Also, have a look at CDPATH

A colon-separated list of search paths available to the cd command, similar in function to the $PATH variable for binaries. The $CDPATH variable may be set in the local ~/.bashrc file.

ash$ cd bash-doc
bash: cd: bash-doc: No such file or directory

bash$ CDPATH=/usr/share/doc
bash$ cd bash-doc
/usr/share/doc/bash-doc

bash$ echo $PWD
/usr/share/doc/bash-doc

and

cd -

It's the command-line equivalent of the back button (takes you to the previous directory you were in).
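For example:

bash$ cd /usr/share/doc
bash$ cd /tmp
bash$ cd -      # back to the previous directory
/usr/share/doc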

ajreal , answered Sep 10 '11 at 20:41

In bash script/command,
you can use pushd and popd

pushd

Save and then change the current directory. With no arguments, pushd exchanges the top two directories.

Usage

cd /abc
pushd /xxx    <-- save /abc to environment variables and cd to /xxx
pushd /zzz
pushd +1      <-- cd /xxx

popd removes the top directory from the stack and changes to the new top (the reverse operation)
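A slightly fuller sketch of the stack in action (the directory names are arbitrary):

cd /abc
pushd /xxx      # stack is now: /xxx /abc
pushd /zzz      # stack is now: /zzz /xxx /abc
dirs -v         # print the stack with indices
popd            # drop /zzz and cd back to /xxx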

fgm , answered Sep 11 '11 at 8:28

bookmarks.sh provides a bookmark management system for the Bash version 4.0+. It can also use a Midnight Commander hotlist.

Dmitry Frank , answered Jun 16 '15 at 10:22

Thanks for sharing your solution, and I'd like to share mine as well, which I find more useful than anything else I've come across before.

The engine is a great, universal tool: fzf, the command-line fuzzy finder by Junegunn.

It primarily allows you to "fuzzy-find" files in a number of ways, but it also allows you to feed arbitrary text data to it and filter this data. So, the shortcuts idea is simple: all we need is to maintain a file with paths (which are shortcuts), and fuzzy-filter this file. Here's how it looks: we type the cdg command (from "cd global", if you like), get a list of our bookmarks, pick the needed one in just a few keystrokes, and press Enter. The working directory is changed to the picked item.

It is extremely fast and convenient: usually I just type 3-4 letters of the needed item, and all others are already filtered out. Additionally, of course we can move through list with arrow keys or with vim-like keybindings Ctrl+j / Ctrl+k .

Article with details: Fuzzy shortcuts for your shell .

It is possible to use it for GUI applications as well (via xterm): I use that for my GUI file manager Double Commander . I have plans to write an article about this use case, too.
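As a hedged, minimal sketch of the idea (the full version is in the linked article): assuming fzf is installed, and using a hypothetical ~/.cdg_paths file with one directory per line, a cdg function can be as small as this.

# pick a bookmarked directory with fzf and cd into it
cdg() {
    local dest
    dest=$(fzf < ~/.cdg_paths) && cd "$dest"
}

# append the current directory to the bookmark file
cdg_add() {
    pwd >> ~/.cdg_paths
}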

return42 , answered Feb 6 '15 at 11:56

Inspired by the question and answers here, I added the lines below to my ~/.bashrc file.

With this you have a favdir command (function) to manage your favorites and an autocompletion function to select an item from these favorites.

# ---------
# Favorites
# ---------

__favdirs_storage=~/.favdirs
__favdirs=( "$HOME" )

containsElement () {
    local e
    for e in "${@:2}"; do [[ "$e" == "$1" ]] && return 0; done
    return 1
}

function favdirs() {

    local cur
    local IFS
    local GLOBIGNORE

    case $1 in
        list)
            echo "favorite folders ..."
            printf -- ' - %s\n' "${__favdirs[@]}"
            ;;
        load)
            if [[ ! -e $__favdirs_storage ]] ; then
                favdirs save
            fi
            # mapfile requires bash 4 / my OS-X bash vers. is 3.2.53 (from 2007 !!?!).
            # mapfile -t __favdirs < $__favdirs_storage
            IFS=$'\r\n' GLOBIGNORE='*' __favdirs=($(< $__favdirs_storage))
            ;;
        save)
            printf -- '%s\n' "${__favdirs[@]}" > $__favdirs_storage
            ;;
        add)
            cur=${2-$(pwd)}
            favdirs load
            if containsElement "$cur" "${__favdirs[@]}" ; then
                echo "'$cur' allready exists in favorites"
            else
                __favdirs+=( "$cur" )
                favdirs save
                echo "'$cur' added to favorites"
            fi
            ;;
        del)
            cur=${2-$(pwd)}
            favdirs load
            local i=0
            for fav in ${__favdirs[@]}; do
                if [ "$fav" = "$cur" ]; then
                    echo "delete '$cur' from favorites"
                    unset __favdirs[$i]
                    favdirs save
                    break
                fi
                let i++
            done
            ;;
        *)
            echo "Manage favorite folders."
            echo ""
            echo "usage: favdirs [ list | load | save | add | del ]"
            echo ""
            echo "  list : list favorite folders"
            echo "  load : load favorite folders from $__favdirs_storage"
            echo "  save : save favorite directories to $__favdirs_storage"
            echo "  add  : add directory to favorites [default pwd $(pwd)]."
            echo "  del  : delete directory from favorites [default pwd $(pwd)]."
    esac
} && favdirs load

function __favdirs_compl_command() {
    COMPREPLY=( $( compgen -W "list load save add del" -- ${COMP_WORDS[COMP_CWORD]}))
} && complete -o default -F __favdirs_compl_command favdirs

function __favdirs_compl() {
    local IFS=$'\n'
    COMPREPLY=( $( compgen -W "${__favdirs[*]}" -- ${COMP_WORDS[COMP_CWORD]}))
}

alias _cd='cd'
complete -F __favdirs_compl _cd

Within the last two lines, an alias to change the current directory (with autocompletion) is created. With this alias ( _cd ) you are able to change to one of your favorite directories. You may want to change this alias to something which fits your needs .

With the function favdirs you can manage your favorites (see usage).

$ favdirs 
Manage favorite folders.

usage: favdirs [ list | load | save | add | del ]

  list : list favorite folders
  load : load favorite folders from ~/.favdirs
  save : save favorite directories to ~/.favdirs
  add  : add directory to favorites [default pwd /tmp ].
  del  : delete directory from favorites [default pwd /tmp ].

Zied , answered Mar 12 '14 at 9:53

Yes there is DirB: Directory Bookmarks for Bash well explained in this Linux Journal article

An example from the article:

% cd ~/Desktop
% s d       # save(bookmark) ~/Desktop as d
% cd /tmp   # go somewhere
% pwd
/tmp
% g d       # go to the desktop
% pwd
/home/Desktop

Al Conrad , answered Sep 4 '15 at 16:10

@getmizanur I used your cdb script. I enhanced it slightly by adding bookmarks tab completion. Here's my version of your cdb script.
_cdb()
{
    local _script_commands=$(ls -1 ~/.cd_bookmarks/)
    local cur=${COMP_WORDS[COMP_CWORD]}

    COMPREPLY=( $(compgen -W "${_script_commands}" -- $cur) )
}
complete -F _cdb cdb


function cdb() {

    local USAGE="Usage: cdb [-h|-c|-d|-g|-l|-s] [bookmark]\n
    \t[-h or no args] - prints usage help\n
    \t[-c bookmark] - create bookmark\n
    \t[-d bookmark] - delete bookmark\n
    \t[-g bookmark] - goto bookmark\n
    \t[-l] - list bookmarks\n
    \t[-s bookmark] - show bookmark location\n
    \t[bookmark] - same as [-g bookmark]\n
    Press tab for bookmark completion.\n"        

    if  [ ! -e ~/.cd_bookmarks ] ; then
        mkdir ~/.cd_bookmarks
    fi

    case $1 in
        # create bookmark
        -c) shift
            if [ ! -f ~/.cd_bookmarks/$1 ] ; then
                echo "cd `pwd`" > ~/.cd_bookmarks/"$1"
                complete -F _cdb cdb
            else
                echo "Try again! Looks like there is already a bookmark '$1'"
            fi
            ;;
        # goto bookmark
        -g) shift
            if [ -f ~/.cd_bookmarks/$1 ] ; then
                source ~/.cd_bookmarks/"$1"
            else
                echo "Mmm...looks like your bookmark has spontaneously combusted. What I mean to say is that your bookmark does not exist." ;
            fi
            ;;
        # show bookmark
        -s) shift
            if [ -f ~/.cd_bookmarks/$1 ] ; then
                cat ~/.cd_bookmarks/"$1"
            else
                echo "Mmm...looks like your bookmark has spontaneously combusted. What I mean to say is that your bookmark does not exist." ;
            fi
            ;;
        # delete bookmark
        -d) shift
            if [ -f ~/.cd_bookmarks/$1 ] ; then
                rm ~/.cd_bookmarks/"$1" ;
            else
                echo "Oops, forgot to specify the bookmark" ;
            fi
            ;;
        # list bookmarks
        -l) shift
            ls -1 ~/.cd_bookmarks/ ;
            ;;
        -h) echo -e $USAGE ;
            ;;
        # goto bookmark by default
        *)
            if [ -z "$1" ] ; then
                echo -e $USAGE
            elif [ -f ~/.cd_bookmarks/$1 ] ; then
                source ~/.cd_bookmarks/"$1"
            else
                echo "Mmm...looks like your bookmark has spontaneously combusted. What I mean to say is that your bookmark does not exist." ;
            fi
            ;;
    esac
}

tobimensch , answered Jun 5 '16 at 21:31

Yes, one that I have written, that is called anc.

https://github.com/tobimensch/anc

Anc stands for anchor, but anc's anchors are really just bookmarks.

It's designed for ease of use and there're multiple ways of navigating, either by giving a text pattern, using numbers, interactively, by going back, or using [TAB] completion.

I'm actively working on it and open to input on how to make it better.

Allow me to paste the examples from anc's github page here:

# make the current directory the default anchor:
$ anc s

# go to /etc, then /, then /usr/local and then back to the default anchor:
$ cd /etc; cd ..; cd usr/local; anc

# go back to /usr/local :
$ anc b

# add another anchor:
$ anc a $HOME/test

# view the list of anchors (the default one has the asterisk):
$ anc l
(0) /path/to/first/anchor *
(1) /home/usr/test

# jump to the anchor we just added:
# by using its anchor number
$ anc 1
# or by jumping to the last anchor in the list
$ anc -1

# add multiple anchors:
$ anc a $HOME/projects/first $HOME/projects/second $HOME/documents/first

# use text matching to jump to $HOME/projects/first
$ anc pro fir

# use text matching to jump to $HOME/documents/first
$ anc doc fir

# add anchor and jump to it using an absolute path
$ anc /etc
# is the same as
$ anc a /etc; anc -1

# add anchor and jump to it using a relative path
$ anc ./X11 #note that "./" is required for relative paths
# is the same as
$ anc a X11; anc -1

# using wildcards you can add many anchors at once
$ anc a $HOME/projects/*

# use shell completion to see a list of matching anchors
# and select the one you want to jump to directly
$ anc pro[TAB]

Cảnh Toàn Nguyễn , answered Feb 20 at 5:41

Bashmarks is an amazingly simple and intuitive utility. In short, after installation, the usage is:
s <bookmark_name> - Saves the current directory as "bookmark_name"
g <bookmark_name> - Goes (cd) to the directory associated with "bookmark_name"
p <bookmark_name> - Prints the directory associated with "bookmark_name"
d <bookmark_name> - Deletes the bookmark
l                 - Lists all available bookmarks


For short-term shortcuts, I have the following in my respective init script (sorry, I can't find the source right now and didn't bother then):
function b() {
    alias $1="cd `pwd -P`"
}

Usage:

In any directory that you want to bookmark type

b THEDIR # <THEDIR> being the name of your 'bookmark'

It will create an alias to cd (back) to here.

To return to a 'bookmarked' dir type

THEDIR

It will run the stored alias and cd back there.

Caution: Use only if you understand that this might override existing shell aliases and what that means.

[Jul 29, 2017] If processes inherit the parents environment, why do we need export?

Notable quotes:
"... "Processes inherit their environment from their parent (the process which started them)." ..."
"... in the environment ..."
Jul 29, 2017 | unix.stackexchange.com
Amelio Vazquez-Reina asked May 19 '14

I read here that the purpose of export in a shell is to make the variable available to sub-processes started from the shell.

However, I have also read here and here that "Processes inherit their environment from their parent (the process which started them)."

If this is the case, why do we need export ? What am I missing?

Are shell variables not part of the environment by default? What is the difference?

Your assumption is that all shell variables are in the environment . This is incorrect. The export command is what defines a name to be in the environment at all. Thus:

a=1
b=2
export b

results in the current shell knowing that $a expands to 1 and $b to 2, but subprocesses will not know anything about a because it is not part of the environment (even in the current shell).

Some useful tools:

Alternatives to export :

  1. name=val command # Assignment before command exports that name to the command.
  2. declare/local -x name # Exports name, particularly useful in shell functions when you want to avoid exposing the name to outside scope.
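A short illustration of both alternatives (the command and variable names are arbitrary examples):

# 1. assignment before the command: FOO is exported to that command only
FOO=bar ./some_command       # ./some_command sees FOO; the current shell does not keep it

# 2. inside a function: exported to children, but scoped to the function
myfunc() {
    local -x TOKEN=secret    # declare -x instead would export it globally
    ./child_process          # sees TOKEN in its environment
}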
====

There's a difference between shell variables and environment variables. If you define a shell variable without export ing it, it is not added to the process's environment and thus not inherited by its children.

Using export you tell the shell to add the shell variable to the environment. You can test this using printenv (which just prints its environment to stdout, since it's a child-process you see the effect of export ing variables):

#!/bin/sh
MYVAR="my cool variable"
echo "Without export:"
printenv | grep MYVAR
echo "With export:"
export MYVAR 
printenv | grep MYVAR
A variable, once exported, is part of the environment. PATH is exported in the shell itself, while custom variables can be exported as needed.

... ... ..

[Jul 29, 2017] Why does subshell not inherit exported variable (PS1)?

Jul 29, 2017 | superuser.com
I am using startx to start the graphical environment. I have a very simple .xinitrc which I will add things to as I set up the environment, but for now it is as follows:

catwm &    # Just a basic window manager, for testing.
xterm

The reason I background the WM and foreground the terminal, and not the other way around as is often done, is that I would like to be able to come back to the virtual text console after typing exit in xterm . This appears to work as described.

The problem is that the PS1 variable that currently is set to my preference in /etc/profile.d/user.sh (which is sourced from /etc/profile supplied by distro), does not appear to propagate to the environment of the xterm mentioned above. The relevant process tree is as follows:


\_ bash
    \_ xinit /home/user/.xinitrc -- /etc/X11/xinit/xserverrc -auth /tmp/serverauth.ggJna3I0vx
        \_ /usr/bin/X -nolisten tcp -auth /tmp/serverauth.ggJna3I0vx vt1
        \_ sh /home/user/.xinitrc
            \_ /home/user/catwm
            \_ xterm
                \_ bash

The shell started by xterm appears to be interactive, the shell executing .xinitrc however is not. I am ok with both, the assumptions about interactivity seem to be perfectly valid, but now I have a non-interactive shell that spawns an interactive shell indirectly, and the interactive shell has no chance to automatically inherit the prompt, because the prompt was unset or otherwise made unavailable higher up the process tree.

How do I go about getting my prompt back?

amn , asked Oct 21 '13 at 9:51
pabouk , answered Oct 21 '13 at 11:19

Commands env and export list only variables which are exported. $PS1 is usually not exported. Try echo $PS1 in your shell to see actual value of $PS1 .

Non-interactive shells usually do not have $PS1 . Non-interactive bash explicitly unsets $PS1 . You can check if bash is interactive by echo $- . If the output contains i then it is interactive. You can explicitly start an interactive shell by using the option on the command line: bash -i . A shell started with -c is not interactive.

The /etc/profile script is read for a login shell. You can start the shell as a login shell by: bash -l .

With bash shell the scripts /etc/bash.bashrc and ~/.bashrc are usually used to set $PS1 . Those scripts are sourced when interactive non-login shell is started. It is your case in the xterm .

See Setting the PS? Strings Permanently

Possible solutions
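The answer's list of solutions did not survive extraction here; as an editor's hedged sketch of one common fix (not necessarily the one the answer proposed), you can make the xterm shell a login shell, so that /etc/profile, and thus /etc/profile.d/user.sh and PS1, are read in the interactive shell itself:

# in ~/.xinitrc
xterm -ls          # -ls tells xterm to start the shell as a login shell
# or, equivalently
xterm -e bash -l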
I am specifically avoiding to set PS1 in .bashrc or /etc/bash.bashrc (which is executed as well), to retain POSIX shell compatibility. These do not set or unset PS1 . PS1 is set in /etc/profile.d/user.sh , which is sourced by /etc/profile . Indeed, this file is only executed for login shells, however I do export PS1 from /etc/profile.d/user.sh exactly because I want propagation of my preferred value down the process tree. So it shouldn't matter which subshells are login and/or interactive ones then, should it? – amn Oct 21 '13 at 11:32
It seems that bash removes the PS1 variable. What exactly do you want to achieve by "POSIX shell compatibility"? Do you want to be able to replace bash by a different POSIX-compliant shell and retain the same functionality? Based on my tests bash removes PS1 when it is started as non-interactive. I think of two simple solutions: 1. start the shell as a login shell with the -l option (attention for actions in the startup scripts which should be started only at login) 2. start the intermediate shells as interactive with the -i option. – pabouk Oct 21 '13 at 12:00
I try to follow interfaces and specifications, not implementations - hence POSIX compatibility. That's important (to me). I already have one login shell - the one started by /usr/bin/login . I understand that a non-interactive shell doesn't need prompt, but unsetting a variable is too much - I need the prompt in an interactive shell (spawned and used by xterm ) later on. What am I doing wrong? I guess most people set their prompt in .bashrc which is sourced by bash anyway, and so the prompt survives. I try to avoid .bashrc however. – amn Oct 22 '13 at 12:12
@amn: I have added various possible solutions to the reply. – pabouk Oct 22 '13 at 16:46

[Jul 29, 2017] Bash subshell mystery

Notable quotes:
"... The subshell created using parentheses does not ..."
Jul 29, 2017 | stackoverflow.com

user3718463 , asked Sep 27 '14 at 21:41

The Learning Bash book mentions that a subshell will inherit only environment variables and file descriptors, etc., and that it will not inherit variables that are not exported:
$ var=15
$ (echo $var)
15
$ ./file # this file include the same command echo $var

$

As I know, the shell will create two subshells, one for the () case and one for ./file; but why does the subshell identify the var variable in the () case, although it is not exported, while in the ./file case it does not?

...

I tried to use strace to figure out how this happens, and surprisingly I found that bash uses the same arguments for the clone system call, which should mean that both forked processes, for () and for ./file, start from the same process address space as the parent. So why is the variable visible to the subshell in the () case, while the same does not happen in the ./file case, although the same arguments are passed to the clone system call?

Alfe , answered Sep 27 '14 at 23:16

The subshell created using parentheses does not use an execve() call for the new process, the calling of the script does. At this point the variables from the parent shell are handled differently: The execve() passes a deliberate set of variables (the script-calling case) while not calling execve() (the parentheses case) leaves the complete set of variables intact.

Your probing using strace should have shown exactly that difference; if you did not see it, I can only assume that you made one of several possible mistakes. I will just strip down what I did to show the difference, then you can decide for yourself where your error was.

... ... ...
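The strace demonstration itself was elided above; a minimal hedged sketch of how to see the difference (./file is the script from the question):

# trace only execve calls, following children (-f), one log per case
strace -f -e trace=execve -o subshell.log bash -c 'var=15; (echo $var)'
strace -f -e trace=execve -o script.log   bash -c 'var=15; ./file'
# subshell.log shows no execve for the parenthesised subshell,
# while script.log contains an execve("./file", ...) for the child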

Nicolas Albert , answered Sep 27 '14 at 21:43

You have to export your var for the child process:

export var=15

Once exported, the variable is used by all child processes at their launch time (not at export time).

var=15
export var

is the same as

export var
var=15

is the same as

export var=15

Export can be cancelled using unset . Sample: unset var .

user3718463 , answered Sep 27 '14 at 23:11

The solution to this mystery is that subshells inherit everything from the parent shell, including all shell variables, because they are simply created with fork (or clone), so they start with a copy of the parent shell's memory image. That's why this works:
$ var=15
$ (echo $var)
15

But in the ./file case, the fork is later followed by an execve system call, which replaces the process image and clears all the parent's non-exported variables; the environment variables, however, survive. You can check this out with strace, using -f to follow the child, and you will find that there is a call to execve.

[Jul 29, 2017] How To Read and Set Environmental and Shell Variables on a Linux VPS

Mar 03, 2014 | www.digitalocean.com
Introduction

When interacting with your server through a shell session, there are many pieces of information that your shell compiles to determine its behavior and access to resources. Some of these settings are contained within configuration settings and others are determined by user input.

One way that the shell keeps track of all of these settings and details is through an area it maintains called the environment . The environment is an area that the shell builds every time that it starts a session that contains variables that define system properties.

In this guide, we will discuss how to interact with the environment and read or set environmental and shell variables interactively and through configuration files. We will be using an Ubuntu 12.04 VPS as an example, but these details should be relevant on any Linux system.

How the Environment and Environmental Variables Work

Every time a shell session spawns, a process takes place to gather and compile information that should be available to the shell process and its child processes. It obtains the data for these settings from a variety of different files and settings on the system.

Basically the environment provides a medium through which the shell process can get or set settings and, in turn, pass these on to its child processes.

The environment is implemented as strings that represent key-value pairs. If multiple values are passed, they are typically separated by colon (:) characters. Each pair will generally look something like this:

KEY=value1:value2:...

If the value contains significant white-space, quotations are used:

KEY="value with spaces"

The keys in these scenarios are variables. They can be one of two types, environmental variables or shell variables.

Environmental variables are variables that are defined for the current shell and are inherited by any child shells or processes. Environmental variables are used to pass information into processes that are spawned from the shell.

Shell variables are variables that are contained exclusively within the shell in which they were set or defined. They are often used to keep track of ephemeral data, like the current working directory.

By convention, these types of variables are usually defined using all capital letters. This helps users distinguish environmental variables within other contexts.

Printing Shell and Environmental Variables

Each shell session keeps track of its own shell and environmental variables. We can access these in a few different ways.

We can see a list of all of our environmental variables by using the env or printenv commands. In their default state, they should function exactly the same:

printenv


SHELL=/bin/bash
TERM=xterm
USER=demouser
LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:su=37;41:sg=30;43:ca:...
MAIL=/var/mail/demouser
PATH=/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
PWD=/home/demouser
LANG=en_US.UTF-8
SHLVL=1
HOME=/home/demouser
LOGNAME=demouser
LESSOPEN=| /usr/bin/lesspipe %s
LESSCLOSE=/usr/bin/lesspipe %s %s
_=/usr/bin/printenv

This is fairly typical of the output of both printenv and env . The difference between the two commands is only apparent in their more specific functionality. For instance, with printenv , you can request the values of individual variables:

printenv SHELL


/bin/bash

On the other hand, env lets you modify the environment that programs run in by passing a set of variable definitions into a command like this:

env VAR1="blahblah" command_to_run command_options

Since, as we learned above, child processes typically inherit the environmental variables of the parent process, this gives you the opportunity to override values or add additional variables for the child.

As you can see from the output of our printenv command, there are quite a few environmental variables set up through our system files and processes without our input.

These show the environmental variables, but how do we see shell variables?

The set command can be used for this. If we type set without any additional parameters, we will get a list of all shell variables, environmental variables, local variables, and shell functions:

set


BASH=/bin/bash
BASHOPTS=checkwinsize:cmdhist:expand_aliases:extglob:extquote:force_fignore:histappend:interactive_comments:login_shell:progcomp:promptvars:sourcepath
BASH_ALIASES=()
BASH_ARGC=()
BASH_ARGV=()
BASH_CMDS=()
. . .

This is usually a huge list. You probably want to pipe it into a pager program to deal with the amount of output easily:

set | less

The amount of additional information that we receive back is a bit overwhelming. We probably do not need to know all of the bash functions that are defined, for instance.

We can clean up the output by specifying that set should operate in POSIX mode, which won't print the shell functions. We can execute this in a sub-shell so that it does not change our current environment:

(set -o posix; set)

This will list all of the environmental and shell variables that are defined.

We can attempt to compare this output with the output of the env or printenv commands to try to get a list of only shell variables, but this will be imperfect due to the different ways that these commands output information:

comm -23 <(set -o posix; set | sort) <(env | sort)

This will likely still include a few environmental variables, due to the fact that the set command outputs quoted values, while the printenv and env commands do not quote the values of strings.

This should still give you a good idea of the environmental and shell variables that are set in your session.

These variables are used for all sorts of things. They provide an alternative way of setting persistent values for the session between processes, without writing changes to a file.

Common Environmental and Shell Variables

Some environmental and shell variables are very useful and are referenced fairly often.

Here are some common environmental variables that you will come across:

In addition to these environmental variables, some shell variables that you'll often see are:

Setting Shell and Environmental Variables

To better understand the difference between shell and environmental variables, and to introduce the syntax for setting these variables, we will do a small demonstration.

Creating Shell Variables

We will begin by defining a shell variable within our current session. This is easy to accomplish; we only need to specify a name and a value. We'll adhere to the convention of keeping all caps for the variable name, and set it to a simple string.

TEST_VAR='Hello World!'

Here, we've used quotations since the value of our variable contains a space. Furthermore, we've used single quotes because the exclamation point is a special character in the bash shell that normally expands to the bash history if it is not escaped or put into single quotes.

We now have a shell variable. This variable is available in our current session, but will not be passed down to child processes.

We can see this by grepping for our new variable within the set output:

set | grep TEST_VAR


TEST_VAR='Hello World!'

We can verify that this is not an environmental variable by trying the same thing with printenv :

printenv | grep TEST_VAR

No output should be returned.

Let's take this as an opportunity to demonstrate a way of accessing the value of any shell or environmental variable.

echo $TEST_VAR


Hello World!

As you can see, you reference the value of a variable by preceding its name with a $ sign. The shell takes this to mean that it should substitute the value of the variable when it comes across it.

So now we have a shell variable. It shouldn't be passed on to any child processes. We can spawn a new bash shell from within our current one to demonstrate:

bash
echo $TEST_VAR

If we type bash to spawn a child shell, and then try to access the contents of the variable, nothing will be returned. This is what we expected.

Get back to our original shell by typing exit :

exit

Creating Environmental Variables

Now, let's turn our shell variable into an environmental variable. We can do this by exporting the variable. The command to do so is appropriately named:

export TEST_VAR

This will change our variable into an environmental variable. We can check this by checking our environmental listing again:

printenv | grep TEST_VAR


TEST_VAR=Hello World!

This time, our variable shows up. Let's try our experiment with our child shell again:

bash
echo $TEST_VAR


Hello World!

Great! Our child shell has received the variable set by its parent. Before we exit this child shell, let's try to export another variable. We can set environmental variables in a single step like this:

export NEW_VAR="Testing export"

Test that it's exported as an environmental variable:

printenv | grep NEW_VAR


NEW_VAR=Testing export

Now, let's exit back into our original shell:

exit

Let's see if our new variable is available:

echo $NEW_VAR

Nothing is returned.

This is because environmental variables are only passed to child processes. There isn't a built-in way of setting environmental variables of the parent shell. This is good in most cases and prevents programs from affecting the operating environment from which they were called.

The NEW_VAR variable was set as an environmental variable in our child shell. This variable would be available to itself and any of its child shells and processes. When we exited back into our main shell, that environment was destroyed.

Demoting and Unsetting Variables

We still have our TEST_VAR variable defined as an environmental variable. We can change it back into a shell variable by typing:

export -n TEST_VAR

It is no longer an environmental variable:

printenv | grep TEST_VAR

However, it is still a shell variable:

set | grep TEST_VAR


TEST_VAR='Hello World!'

If we want to completely unset a variable, either shell or environmental, we can do so with the unset command:

unset TEST_VAR

We can verify that it is no longer set:

echo $TEST_VAR

Nothing is returned because the variable has been unset.

Setting Environmental Variables at Login

We've already mentioned that many programs use environmental variables to decide the specifics of how to operate. We do not want to have to set important variables up every time we start a new shell session, and we have already seen how many variables are already set upon login, so how do we make and define variables automatically?

This is actually a more complex problem than it initially seems, due to the numerous configuration files that the bash shell reads depending on how it is started.

The Difference between Login, Non-Login, Interactive, and Non-Interactive Shell Sessions

The bash shell reads different configuration files depending on how the session is started.

One distinction between different sessions is whether the shell is being spawned as a "login" or "non-login" session.

A login shell is a shell session that begins by authenticating the user. If you are signing into a terminal session or through SSH and authenticate, your shell session will be set as a "login" shell.

If you start a new shell session from within your authenticated session, like we did by calling the bash command from the terminal, a non-login shell session is started. You were not asked for your authentication details when you started your child shell.

Another distinction that can be made is whether a shell session is interactive, or non-interactive.

An interactive shell session is a shell session that is attached to a terminal. A non-interactive shell session is one that is not attached to a terminal session.

So each shell session is classified as either login or non-login and interactive or non-interactive.

A normal session that begins with SSH is usually an interactive login shell. A script run from the command line is usually run in a non-interactive, non-login shell. A terminal session can be any combination of these two properties.

Whether a shell session is classified as a login or non-login shell has implications on which files are read to initialize the shell session.

A session started as a login session will read configuration details from the /etc/profile file first. It will then look for the first login shell configuration file in the user's home directory to get user-specific configuration details.

It reads the first file that it can find out of ~/.bash_profile , ~/.bash_login , and ~/.profile and does not read any further files.

In contrast, a session defined as a non-login shell will read /etc/bash.bashrc and then the user-specific ~/.bashrc file to build its environment.

Non-interactive shells read the environmental variable called BASH_ENV and read the file specified to define the new environment.

Implementing Environmental Variables

As you can see, there are a variety of different files that we would usually need to look at for placing our settings.

This provides a lot of flexibility that can help in specific situations where we want certain settings in a login shell, and other settings in a non-login shell. However, most of the time we will want the same settings in both situations.

Fortunately, most Linux distributions configure the login configuration files to source the non-login configuration files. This means that you can define environmental variables that you want in both inside the non-login configuration files. They will then be read in both scenarios.

We will usually be setting user-specific environmental variables, and we usually will want our settings to be available in both login and non-login shells. This means that the place to define these variables is in the ~/.bashrc file.

Open this file now:

nano ~/.bashrc

This will most likely contain quite a bit of data already. Most of the definitions here are for setting bash options, which are unrelated to environmental variables. You can set environmental variables just like you would from the command line:

export VARNAME=value

We can then save and close the file. The next time you start a shell session, your environmental variable declaration will be read and passed on to the shell environment. You can force your current session to read the file now by typing:

source ~/.bashrc

If you need to set system-wide variables, you may want to think about adding them to /etc/profile , /etc/bash.bashrc , or /etc/environment .
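Note that /etc/environment is not a shell script: on most distributions it is read by pam_env, so it takes plain KEY=value lines with no export and no variable expansion. A sketch with example entries (the values are illustrative):

# /etc/environment
LANG="en_US.UTF-8"
EDITOR="vim"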

Conclusion

Environmental and shell variables are always present in your shell sessions and can be very useful. They are an interesting way for a parent process to set configuration details for its children, and are a way of setting options outside of files.

This has many advantages in specific situations. For instance, some deployment mechanisms rely on environmental variables to configure authentication information. This is useful because it does not require keeping these in files that may be seen by outside parties.

There are plenty of other, more mundane, but more common scenarios where you will need to read or alter the environment of your system. These tools and techniques should give you a good foundation for making these changes and using them correctly.

By Justin Ellingwood

[Jul 29, 2017] shell - Whats the difference between .bashrc, .bash_profile, and .environment - Stack Overflow

Notable quotes:
"... "The following paragraphs describe how bash executes its startup files." ..."
Jul 29, 2017 | stackoverflow.com


Adam Rosenfield , asked Jan 6 '09 at 3:58

I've used a number of different *nix-based systems over the years, and it seems like every flavor of Bash I use has a different algorithm for deciding which startup scripts to run. For the purposes of tasks like setting up environment variables and aliases and printing startup messages (e.g. MOTDs), which startup script is the appropriate place to do these?

What's the difference between putting things in .bashrc , .bash_profile , and .environment ? I've also seen other files such as .login , .bash_login , and .profile ; are these ever relevant? What are the differences in which ones get run when logging in physically, logging in remotely via ssh, and opening a new terminal window? Are there any significant differences across platforms (including Mac OS X (and its Terminal.app) and Cygwin Bash)?

Cos , answered Jan 6 '09 at 4:18

The main difference with shell config files is that some are only read by "login" shells (e.g. when you login from another host, or login at the text console of a local unix machine). These are the ones called, say, .login or .profile or .zlogin (depending on which shell you're using).

Then you have config files that are read by "interactive" shells (as in, ones connected to a terminal, or a pseudo-terminal in the case of, say, a terminal emulator running under a windowing system). These are the ones with names like .bashrc , .tcshrc , .zshrc , etc.

bash complicates this in that .bashrc is only read by a shell that's both interactive and non-login , so you'll find most people end up telling their .bash_profile to also read .bashrc with something like

[[ -r ~/.bashrc ]] && . ~/.bashrc

Other shells behave differently - eg with zsh , .zshrc is always read for an interactive shell, whether it's a login one or not.

The manual page for bash explains the circumstances under which each file is read. Yes, behaviour is generally consistent between machines.

.profile is simply the login script filename originally used by /bin/sh . bash , being generally backwards-compatible with /bin/sh , will read .profile if one exists.

Johannes Schaub - litb , answered Jan 6 '09 at 15:21

That's simple. It's explained in man bash :
... ... ... 

Login shells are the ones that are the one you login (so, they are not executed when merely starting up xterm, for example). There are other ways to login. For example using an X display manager. Those have other ways to read and export environment variables at login time.

Also read the INVOCATION chapter in the manual. It says "The following paragraphs describe how bash executes its startup files." , i think that's a spot-on :) It explains what an "interactive" shell is too.

Bash does not know about .environment . I suspect that's a file of your distribution, to set environment variables independent of the shell that you drive.

Jonathan Leffler , answered Jan 6 '09 at 4:13

Classically, ~/.profile is used by Bourne Shell, and is probably supported by Bash as a legacy measure. Again, ~/.login and ~/.cshrc were used by C Shell - I'm not sure that Bash uses them at all.

The ~/.bash_profile would be used once, at login. The ~/.bashrc script is read every time a shell is started. This is analogous to ~/.cshrc for C Shell.

One consequence is that stuff in ~/.bashrc should be as lightweight (minimal) as possible to reduce the overhead when starting a non-login shell.

I believe the ~/.environment file is a compatibility file for Korn Shell.

Filip Ekberg , answered Jan 6 '09 at 4:03

I found information about .bashrc and .bash_profile here to sum it up:

.bash_profile is executed when you login. Stuff you put in there might be your PATH and other important environment variables.

.bashrc is used for non-login shells. I'm not sure what that means. I know that RedHat executes it every time you start another shell (su to this user or simply calling bash again). You might want to put aliases in there, but again I am not sure what that means. I simply ignore it myself.

.profile is the equivalent of .bash_profile for the root. I think the name is changed to let other shells (csh, sh, tcsh) use it as well. (you don't need one as a user)

There is also .bash_logout which executes at, yeah good guess... logout. You might want to stop daemons or even do a little housekeeping. You can also add "clear" there if you want to clear the screen when you log out.

Also there is a complete follow up on each of the configurations files here

These are probably even distro-dependent; not all distros choose to have each configuration file, and some have even more. But when they have the same name, they usually include the same content.

Rose Perrone , answered Feb 27 '12 at 0:22

According to Josh Staiger , Mac OS X's Terminal.app actually runs a login shell rather than a non-login shell by default for each new terminal window, calling .bash_profile instead of .bashrc.

He recommends:

Most of the time you don't want to maintain two separate config files for login and non-login shells; when you set a PATH, you want it to apply to both. You can fix this by sourcing .bashrc from your .bash_profile file, then putting PATH and common settings in .bashrc.

To do this, add the following lines to .bash_profile:


if [ -f ~/.bashrc ]; then
    source ~/.bashrc
fi

Now when you login to your machine from a console .bashrc will be called.

PolyThinker , answered Jan 6 '09 at 4:06

A good place to look is the man page of bash. Here's an online version. Look for the "INVOCATION" section.

seismick , answered May 21 '12 at 10:42

I have used Debian-family distros which appear to execute .profile , but not .bash_profile , whereas RHEL derivatives execute .bash_profile before .profile .

It seems to be a mess when you have to set up environment variables to work in any Linux OS.
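One common workaround (a minimal sketch, and an assumption on my part rather than something from the answers above) is to keep all settings in ~/.bashrc and have both ~/.bash_profile and ~/.profile source it, so the same settings load whichever login file the distro happens to read:

# Put this in both ~/.bash_profile and ~/.profile:
if [ -n "$BASH_VERSION" ] && [ -f "$HOME/.bashrc" ]; then
    . "$HOME/.bashrc"
fi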

[Jul 29, 2017] Preserve bash history in multiple terminal windows - Unix Linux Stack Exchange

Jul 29, 2017 | unix.stackexchange.com

Oli , asked Aug 26 '10 at 13:04

I consistently have more than one terminal open. Anywhere from two to ten, doing various bits and bobs. Now let's say I restart and open up another set of terminals. Some remember certain things, some forget.

I want a history that:

Anything I can do to make bash work more like that?

Pablo R. , answered Aug 26 '10 at 14:37

# Avoid duplicates
export HISTCONTROL=ignoredups:erasedups  
# When the shell exits, append to the history file instead of overwriting it
shopt -s histappend

# After each command, append to the history file and reread it
export PROMPT_COMMAND="${PROMPT_COMMAND:+$PROMPT_COMMAND$'\n'}history -a; history -c; history -r"

kch , answered Sep 19 '08 at 17:49

So, this is all my history-related .bashrc thing:
export HISTCONTROL=ignoredups:erasedups  # no duplicate entries
export HISTSIZE=100000                   # big big history
export HISTFILESIZE=100000               # big big history
shopt -s histappend                      # append to history, don't overwrite it

# Save and reload the history after each command finishes
export PROMPT_COMMAND="history -a; history -c; history -r; $PROMPT_COMMAND"

Tested with bash 3.2.17 on Mac OS X 10.5, bash 4.1.7 on 10.6.

lesmana , answered Jun 16 '10 at 16:11

Here is my attempt at Bash session history sharing. This will enable history sharing between bash sessions in a way that the history counter does not get mixed up and history expansion like !number will work (with some constraints).

Using Bash version 4.1.5 under Ubuntu 10.04 LTS (Lucid Lynx).

HISTSIZE=9000
HISTFILESIZE=$HISTSIZE
HISTCONTROL=ignorespace:ignoredups

_bash_history_sync() {
  builtin history -a         #1
  HISTFILESIZE=$HISTSIZE     #2
  builtin history -c         #3
  builtin history -r         #4
}

history() {                  #5
  _bash_history_sync
  builtin history "$@"
}

PROMPT_COMMAND=_bash_history_sync
Explanation:
  1. Append the just entered line to the $HISTFILE (default is .bash_history ). This will cause $HISTFILE to grow by one line.
  2. Setting the special variable $HISTFILESIZE to some value will cause Bash to truncate $HISTFILE to be no longer than $HISTFILESIZE lines by removing the oldest entries.
  3. Clear the history of the running session. This will reduce the history counter by the amount of $HISTSIZE .
  4. Read the contents of $HISTFILE and insert them into the current running session history. This will raise the history counter by the number of lines in $HISTFILE . Note that the line count of $HISTFILE is not necessarily $HISTFILESIZE .
  5. The history() function overrides the builtin history to make sure that the history is synchronised before it is displayed. This is necessary for the history expansion by number (more about this later).
More explanation: About the constraints of the history expansion:

When using history expansion by number, you should always look up the number immediately before using it. That means no bash prompt display between looking up the number and using it. That usually means no enter and no ctrl+c.

Generally, once you have more than one Bash session, there is no guarantee whatsoever that a history expansion by number will retain its value between two Bash prompt displays, because when PROMPT_COMMAND is executed the history from all other Bash sessions is integrated into the history of the current session. If any other bash session has a new command then the history numbers of the current session will be different.

I find this constraint reasonable. I have to look the number up every time anyway because I can't remember arbitrary history numbers.

Usually I use the history expansion by number like this

$ history | grep something #note number
$ !number

I recommend using the following Bash options.

## reedit a history substitution line if it failed
shopt -s histreedit
## edit a recalled history line before executing
shopt -s histverify
Strange bugs:

Running the history command piped to anything will result in that command being listed in the history twice. For example:

$ history | head
$ history | tail
$ history | grep foo
$ history | true
$ history | false

All will be listed in the history twice. I have no idea why.

Ideas for improvements:

Maciej Piechotka , answered Aug 26 '10 at 13:20

I'm not aware of any way using bash . But it's one of the most popular features of zsh .
Personally I prefer zsh over bash so I recommend trying it.

Here's the part of my .zshrc that deals with history:

SAVEHIST=10000 # Number of entries
HISTSIZE=10000
HISTFILE=~/.zsh/history # File
setopt APPEND_HISTORY # Don't erase history
setopt EXTENDED_HISTORY # Add additional data to history like timestamp
setopt INC_APPEND_HISTORY # Add immediately
setopt HIST_FIND_NO_DUPS # Don't show duplicates in search
setopt HIST_IGNORE_SPACE # Don't preserve spaces. You may want to turn it off
setopt NO_HIST_BEEP # Don't beep
setopt SHARE_HISTORY # Share history between session/terminals

Chris Down , answered Nov 25 '11 at 15:46

To do this, you'll need to add two lines to your ~/.bashrc :
shopt -s histappend
PROMPT_COMMAND="history -a;history -c;history -r;"
$PROMPT_COMMAND

From man bash :

If the histappend shell option is enabled (see the description of shopt under SHELL BUILTIN COMMANDS below), the lines are appended to the history file, otherwise the history file is over-written.

Schof , answered Sep 19 '08 at 19:38

You can edit your BASH prompt to run the "history -a" and "history -r" that Muerr suggested:
savePS1=$PS1

(in case you mess something up, which is almost guaranteed)

PS1=$savePS1`history -a;history -r`

(note that these are back-ticks; they'll run history -a and history -r on every prompt. Since they don't output any text, your prompt will be unchanged.)

Once you've got your PS1 variable set up the way you want, set it permanently in your ~/.bashrc file.

If you want to go back to your original prompt while testing, do:

PS1=$savePS1

I've done basic testing on this to ensure that it sort of works, but can't speak to any side-effects from running history -a;history -r on every prompt.

pts , answered Mar 25 '11 at 17:40

If you need a bash or zsh history synchronizing solution which also solves the problem below, then see it at http://ptspts.blogspot.com/2011/03/how-to-automatically-synchronize-shell.html

The problem is the following: I have two shell windows A and B. In shell window A, I run sleep 9999 , and (without waiting for the sleep to finish) in shell window B, I want to be able to see sleep 9999 in the bash history.

The reason why most other solutions here won't solve this problem is that they are writing their history changes to the history file using PROMPT_COMMAND or PS1 , both of which are executed too late, only after the sleep 9999 command has finished.
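For what it's worth, one way to get the "write before the command runs" behaviour (a minimal sketch of an assumption, not the solution from the linked post) is a DEBUG trap, which fires after the line has been added to the in-memory history but before it executes:

shopt -s histappend
trap 'history -a' DEBUG   # append the just-entered line to $HISTFILE before running it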

jtimberman , answered Sep 19 '08 at 17:38

You can use history -a to append the current session's history to the histfile, then use history -r on the other terminals to read the histfile.

jmanning2k , answered Aug 26 '10 at 13:59

I can offer a fix for that last one: make sure the env variable HISTCONTROL does not specify "ignorespace" (or "ignoreboth").

But I feel your pain with multiple concurrent sessions. It simply isn't handled well in bash.

Toby , answered Nov 20 '14 at 14:53

Here's an alternative that I use. It's cumbersome but it addresses the issue that @axel_c mentioned where sometimes you may want to have a separate history instance in each terminal (one for make, one for monitoring, one for vim, etc).

I keep a separate appended history file that I constantly update. I have the following mapped to a hotkey:

history | grep -v history >> ~/master_history.txt

This appends all history from the current terminal to a file called master_history.txt in your home dir.

I also have a separate hotkey to search through the master history file:

cat /home/toby/master_history.txt | grep -i

I use cat | grep because it leaves the cursor at the end to enter my regex. A less ugly way to do this would be to add a couple of scripts to your path to accomplish these tasks, but hotkeys work for my purposes. I also periodically will pull history down from other hosts I've worked on and append that history to my master_history.txt file.

It's always nice to be able to quickly search and find that tricky regex you used or that weird perl one-liner you came up with 7 months ago.
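The answer doesn't show how the hotkey is defined; a hypothetical sketch using bash's readline bind -x (an assumption, not the author's actual setup) could look like:

# Ctrl-x h appends the current session's history to the master file
bind -x '"\C-xh": history | grep -v history >> ~/master_history.txt'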

Yarek T , answered Jul 23 '15 at 9:05

Right, so finally this annoyed me enough to find a decent solution:
# Write history after each command
_bash_history_append() {
    builtin history -a
}
PROMPT_COMMAND="_bash_history_append; $PROMPT_COMMAND"

What this does is a sort of amalgamation of what was said in this thread, except that I don't understand why you would reload the global history after every command. I very rarely care about what happens in other terminals, but I always run series of commands, say in one terminal:

make
ls -lh target/*.foo
scp target/artifact.foo vm:~/

(Simplified example)

And in another:

pv ~/test.data | nc vm:5000 >> output
less output
mv output output.backup1

No way I'd want the command to be shared

rouble , answered Apr 15 at 17:43

Here is my enhancement to @lesmana's answer . The main difference is that concurrent windows don't share history. This means you can keep working in your windows, without having context from other windows getting loaded into your current windows.

If you explicitly type 'history', OR if you open a new window then you get the history from all previous windows.

Also, I use this strategy to archive every command ever typed on my machine.

# Consistent and forever bash history
HISTSIZE=100000
HISTFILESIZE=$HISTSIZE
HISTCONTROL=ignorespace:ignoredups

_bash_history_sync() {
  builtin history -a         #1
  HISTFILESIZE=$HISTSIZE     #2
}

_bash_history_sync_and_reload() {
  builtin history -a         #1
  HISTFILESIZE=$HISTSIZE     #2
  builtin history -c         #3
  builtin history -r         #4
}

history() {                  #5
  _bash_history_sync_and_reload
  builtin history "$@"
}

export HISTTIMEFORMAT="%y/%m/%d %H:%M:%S   "
PROMPT_COMMAND='history 1 >> ${HOME}/.bash_eternal_history'
PROMPT_COMMAND="_bash_history_sync;$PROMPT_COMMAND"

simotek , answered Jun 1 '14 at 6:02

I have written a script for setting a history file per session or task; it's based off the following.
        # write existing history to the old file
        history -a

        # set new historyfile
        export HISTFILE="$1"
        export HISET=$1

        # touch the new file to make sure it exists
        touch $HISTFILE
        # load new history file
        history -r $HISTFILE

It doesn't necessarily save every history command, but it saves the ones that I care about, and it's easier to retrieve them than going through every command. My version also lists all history files and provides the ability to search through them all.

Full source: https://github.com/simotek/scripts-config/blob/master/hiset.sh

Litch , answered Aug 11 '15 at 0:15

I chose to put history in a file-per-tty, as multiple people can be working on the same server - separating each session's commands makes it easier to audit.
# Convert /dev/nnn/X or /dev/nnnX to "nnnX"
HISTSUFFIX=`tty | sed 's/\///g;s/^dev//g'`
# History file is now .bash_history_pts0
HISTFILE=".bash_history_$HISTSUFFIX"
HISTTIMEFORMAT="%y-%m-%d %H:%M:%S "
HISTCONTROL=ignoredups:ignorespace
shopt -s histappend
HISTSIZE=1000
HISTFILESIZE=5000

History now looks like:

user@host:~# test 123
user@host:~# test 5451
user@host:~# history
1  15-08-11 10:09:58 test 123
2  15-08-11 10:10:00 test 5451
3  15-08-11 10:10:02 history

With the files looking like:

user@host:~# ls -la .bash*
-rw------- 1 root root  4275 Aug 11 09:42 .bash_history_pts0
-rw------- 1 root root    75 Aug 11 09:49 .bash_history_pts1
-rw-r--r-- 1 root root  3120 Aug 11 10:09 .bashrc

fstang , answered Sep 10 '16 at 19:30

Here I will point out one problem with
export PROMPT_COMMAND="${PROMPT_COMMAND:+$PROMPT_COMMAND$'\n'}history -a; history -c; history -r"

and

PROMPT_COMMAND="$PROMPT_COMMAND;history -a; history -n"

If you run source ~/.bashrc, the $PROMPT_COMMAND will be like

"history -a; history -c; history -r history -a; history -c; history -r"

and

"history -a; history -n history -a; history -n"

This repetition occurs each time you run 'source ~/.bashrc'. You can check PROMPT_COMMAND after each time you run 'source ~/.bashrc' by running 'echo $PROMPT_COMMAND'.

You could see some commands are apparently broken: "history -n history -a". But the good news is that it still works, because other parts still form a valid command sequence (Just involving some extra cost due to executing some commands repetitively. And not so clean.)

Personally I use the following simple version:

shopt -s histappend
PROMPT_COMMAND="history -a; history -c; history -r"

which has most of the functionalities while no such issue as mentioned above.

Another point to make is: there is really nothing magic . PROMPT_COMMAND is just a plain bash environment variable. The commands in it get executed before you get bash prompt (the $ sign). For example, your PROMPT_COMMAND is "echo 123", and you run "ls" in your terminal. The effect is like running "ls; echo 123".

$ PROMPT_COMMAND="echo 123"

output (Just like running 'PROMPT_COMMAND="echo 123"; $PROMPT_COMMAND'):

123

Run the following:

$ echo 3

output:

3
123

"history -a" is used to write the history commands in memory to ~/.bash_history

"history -c" is used to clear the history commands in memory

"history -r" is used to read history commands from ~/.bash_history to memory

See history command explanation here: http://ss64.com/bash/history.html

PS: As other users have pointed out, export is unnecessary. See: using export in .bashrc

Hopping Bunny , answered May 13 '15 at 4:48

Here is the snippet from my .bashrc and short explanations wherever needed:
# The following line ensures that history logs screen commands as well
shopt -s histappend

# This line makes the history file be rewritten and reread at each bash prompt
PROMPT_COMMAND="$PROMPT_COMMAND;history -a; history -n"
# Have lots of history
HISTSIZE=100000         # remember the last 100000 commands
HISTFILESIZE=100000     # start truncating commands after 100000 lines
HISTCONTROL=ignoreboth  # ignoreboth is shorthand for ignorespace and     ignoredups

The HISTFILESIZE and HISTSIZE are personal preferences and you can change them as per your tastes.

Mulki , answered Jul 24 at 20:49

This works for ZSH
##############################################################################
# History Configuration for ZSH
##############################################################################
HISTSIZE=10000               #How many lines of history to keep in memory
HISTFILE=~/.zsh_history     #Where to save history to disk
SAVEHIST=10000               #Number of history entries to save to disk
#HISTDUP=erase               #Erase duplicates in the history file
setopt    appendhistory     #Append history to the history file (no overwriting)
setopt    sharehistory      #Share history across terminals
setopt    incappendhistory  #Immediately append to the history file, not just when a term is killed

[Jul 29, 2017] shell - How does this bash code detect an interactive session - Stack Overflow

Notable quotes:
"... ', the pattern removal operation is applied to each positional parameter in turn, and the expansion is the resultant list. If parameter is an array variable subscripted with '@' or ' ..."
Jul 29, 2017 | stackoverflow.com

user1284631 , asked Jun 5 '13 at 8:44

Following some issues with scp (it did not like the presence of the bash bind command in my .bashrc file, apparently), I followed the advice of a clever guy on the Internet (I just cannot find that post right now) who put this at the top of his .bashrc file:
[[ ${-#*i} != ${-} ]] || return

in order to make sure that the bash initialization is NOT executed unless in interactive session.

Now, that works. However, I am not able to figure out how it works. Could you enlighten me?

According to this answer , the $- is the current options set for the shell and I know that the ${} is the so-called "substring" syntax for expanding variables.

However, I do not understand the ${-#*i} part. And why $-#*i is not the same as ${-#*i} .

blue , answered Jun 5 '13 at 8:49

${parameter#word}
${parameter##word}

The word is expanded to produce a pattern just as in filename expansion. If the pattern matches the beginning of the expanded value of parameter, then the result of the expansion is the expanded value of parameter with the shortest matching pattern (the '#' case) or the longest matching pattern (the '##' case) deleted.

If parameter is '@' or '*', the pattern removal operation is applied to each positional parameter in turn, and the expansion is the resultant list. If parameter is an array variable subscripted with '@' or '*', the pattern removal operation is applied to each member of the array in turn, and the expansion is the resultant list.

Source: http://www.gnu.org/software/bash/manual/html_node/Shell-Parameter-Expansion.html

So basically what happens in ${-#*i} is that *i is expanded, and if it matches the beginning of the value of $- , then the result of the whole expansion is $- with the shortest match of the pattern *i deleted.

Example

VAR "baioasd" 
echo ${VAR#*i};

outputs oasd .

In your case

If the shell is interactive, $- will contain the letter 'i', so when you strip the variable $- of the pattern *i you will get a string that is different from the original $- ( [[ ${-#*i} != ${-} ]] yields true). If the shell is not interactive, $- does not contain the letter 'i', so the pattern *i does not match anything in $- , [[ ${-#*i} != $- ]] yields false, and the return statement is executed.

perreal , answered Jun 5 '13 at 8:53

See this :

To determine within a startup script whether or not Bash is running interactively, test the value of the '-' special parameter. It contains i when the shell is interactive

Your substitution removes the string up to, and including, the i and tests whether the substituted version is equal to the original string. They will be different if there is an i in ${-} .
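For comparison, a common and arguably more readable way to make the same interactivity check near the top of .bashrc (a sketch, not taken from either answer) is to match $- directly:

case $- in
    *i*) ;;            # interactive: carry on with the rest of .bashrc
    *)   return ;;     # non-interactive: stop reading .bashrc here
esac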

[Jul 28, 2017] bash - About .bash_profile, .bashrc, and where should alias be written in - Stack Overflow

Jul 28, 2017 | stackoverflow.com

Community May 23 at 12:17

Possible Duplicate: What's the difference between .bashrc, .bash_profile, and .environment?

It seems that if I use

alias ls='ls -F'

inside of .bashrc on Mac OS X, then the newly created shell will not have that alias. I need to type bash again and that alias will be in effect.

And if I log into Linux on the hosting company, the .bashrc file has a comment line that says:

For non-login shell

and the .bash_profile file has a comment that says

for login shell

So where should aliases be written in? How come we separate the login shell and non-login shell?

Some webpage say use .bash_aliases , but it doesn't work on Mac OS X, it seems.

Maggyero edited Apr 25 '16 at 16:24

The reason you separate the login and non-login shell is because the .bashrc file is reloaded every time you start a new copy of Bash.

The .profile file is loaded only when you either log in or use the appropriate flag to tell Bash to act as a login shell.

Personally,

Oh, and the reason you need to type bash again to get the new alias is that Bash loads your .bashrc file when it starts but it doesn't reload it unless you tell it to. You can reload the .bashrc file (and not need a second shell) by typing


source ~/.bashrc

which loads the .bashrc file as if you had typed the commands directly to Bash.
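If you do want a separate aliases file, note that the ~/.bash_aliases name mentioned in the question is only a convention used by some distros' default .bashrc, not something Bash knows about by itself. A minimal sketch is to source it from ~/.bashrc:

# In ~/.bashrc:
if [ -f ~/.bash_aliases ]; then
    . ~/.bash_aliases
fi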

lhunath answered May 24 '09 at 6:22

Check out http://mywiki.wooledge.org/DotFiles for an excellent resource on the topic aside from man bash .

Summary:

Adam Rosenfield May 24 '09 at 2:46
From the bash manpage:

When bash is invoked as an interactive login shell, or as a non-interactive shell with the --login option, it first reads and executes commands from the file /etc/profile , if that file exists. After reading that file, it looks for ~/.bash_profile , ~/.bash_login , and ~/.profile , in that order, and reads and executes commands from the first one that exists and is readable. The --noprofile option may be used when the shell is started to inhibit this behavior.

When a login shell exits, bash reads and executes commands from the file ~/.bash_logout , if it exists.

When an interactive shell that is not a login shell is started, bash reads and executes commands from ~/.bashrc , if that file exists. This may be inhibited by using the --norc option. The --rcfile file option will force bash to read and execute commands from file instead of ~/.bashrc .

Thus, if you want to get the same behavior for both login shells and interactive non-login shells, you should put all of your commands in either .bashrc or .bash_profile , and then have the other file source the first one.

Adam Rosenfield May 24 '09 at 2:46

.bash_profile is loaded for a "login shell". I am not sure what that would be on OS X, but on Linux that is either X11 or a virtual terminal.

.bashrc is loaded every time you run Bash. That is where you should put stuff you want loaded whenever you open a new Terminal.app window.

I personally put everything in .bashrc so that I don't have to restart the application for changes to take effect.

[Jul 26, 2017] I feel stupid declare not found in bash scripting

A single space can make a huge difference in bash :-)
www.linuxquestions.org

Mohtek

I feel stupid: declare not found in bash scripting? I was anxious to get my feet wet, and I'm only up to my toes before I'm stuck...this seems very very easy but I'm not sure what I've done wrong. Below is the script and its output. What the heck am I missing?

______________________________________________________
#!/bin/bash
declare -a PROD[0]="computers" PROD[1]="HomeAutomation"
printf "${ PROD[*]}"
_______________________________________________________

products.sh: 6: declare: not found
products.sh: 8: Syntax error: Bad substitution

wjevans_7d1@yahoo.co

I ran what you posted (but at the command line, not in a script, though that should make no significant difference), and got this:

Code:

-bash: ${ PROD[*]}: bad substitution

In other words, I couldn't reproduce your first problem, the "declare: not found" error. Try the declare command by itself, on the command line.

And I got rid of the "bad substitution" problem when I removed the space which is between the ${ and the PROD on the printf line.

Hope this helps.

blackhole54

The previous poster identified your second problem.

As far as your first problem goes ... I am not a bash guru although I have written a number of bash scripts. So far I have found no need for declare statements. I suspect that you might not need it either. But if you do want to use it, the following does work:

Code:
#!/bin/bash

declare -a PROD
PROD[0]="computers"
PROD[1]="HomeAutomation"
printf "${PROD[*]}\n"

EDIT: My original post was based on an older version of bash. When I tried the declare statement you posted I got an error message, but one that was different from yours. I just tried it on a newer version of bash, and your declare statement worked fine. So it might depend on the version of bash you are running. What I posted above runs fine on both versions.

[Jul 26, 2017] Associative array declaration gotcha

Jul 26, 2017 | unix.stackexchange.com

bash silently does function return on (re-)declare of global associative read-only array - Unix & Linux Stack Exchange

Ron Burk :

Obviously cut out of a much more complex script that was more meaningful:

#!/bin/bash

function InitializeConfig(){
    declare -r -g -A SHCFG_INIT=( [a]=b )
    declare -r -g -A SHCFG_INIT=( [c]=d )
    echo "This statement never gets executed"
}

set -o xtrace

InitializeConfig
echo "Back from function"
The output looks like this:
ronburk@ubuntu:~/ubucfg$ bash bug.sh
+ InitializeConfig
+ SHCFG_INIT=([a]=b)
+ declare -r -g -A SHCFG_INIT
+ SHCFG_INIT=([c]=d)
+ echo 'Back from function'
Back from function
Bash seems to silently execute a function return upon the second declare statement. Starting to think this really is a new bug, but happy to learn otherwise.

Other details:

Machine: x86_64
OS: linux-gnu
Compiler: gcc
Compilation CFLAGS:  -DPROGRAM='bash' -DCONF_HOSTTYPE='x86_64' -DCONF_OSTYPE='linux-gnu' -DCONF_MACHTYPE='x86_64-pc-linux-gn$
uname output: Linux ubuntu 3.16.0-38-generic #52~14.04.1-Ubuntu SMP Fri May 8 09:43:57 UTC 2015 x86_64 x86_64 x86_64 GNU/Lin$
Machine Type: x86_64-pc-linux-gnu

Bash Version: 4.3
Patch Level: 11
Release Status: release

By gum, you're right! Then I get readonly warning on second declare, which is reasonable, and the function completes. The xtrace output is also interesting; implies declare without single quotes is really treated as two steps. Ready to become superstitious about always single-quoting the argument to declare . Hard to see how popping the function stack can be anything but a bug, though. – Ron Burk Jun 14 '15 at 23:58

Weird. Doesn't happen in bash 4.2.53(1). – choroba Jun 14 '15 at 7:22
I can reproduce this problem with bash version 4.3.11 (Ubuntu 14.04.1 LTS). It works fine with bash 4.2.8 (Ubuntu 11.04). – Cyrus Jun 14 '15 at 7:34
Maybe related: unix.stackexchange.com/q/56815/116972 I can get expected result with declare -r -g -A 'SHCFG_INIT=( [a]=b )' . – yaegashi Jun 14 '15 at 23:22

I found this thread in bug-bash@gnu.org related to test -v on an assoc array. In short, bash implicitly did test -v SHCFG_INIT[0] in your script. I'm not sure this behavior got introduced in 4.3.

You might want to use declare -p to work around this...

if ! declare -p SHCFG_INIT >/dev/null 2>&1; then
    echo "looks like SHCFG_INIT not defined"
fi
====
Well, rats. I think your answer is correct, but also reveals I'm really asking two separate questions when I thought they were probably the same issue. Since the title better reflects what turns out to be the "other" question, I'll leave this up for a while and see if anybody knows what's up with the mysterious implicit function return... Thanks! – Ron Burk Jun 14 '15 at 17:01
Edited question to focus on the remaining issue. Thanks again for the answer on the "-v" issue with associative arrays. – Ron Burk Jun 14 '15 at 17:55
Accepting this answer. Complete answer is here plus your comments above plus (IMHO) there's a bug in this version of bash (can't see how there can be any excuse for popping the function stack without warning). Thanks for your excellent research on this! – Ron Burk Jun 21 '15 at 19:31
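For reference, a sketch of the single-quoting workaround mentioned in the comments above (per those comments, the second declare then fails with an ordinary readonly warning instead of silently returning from the function; behaviour may still vary with the bash version):

function InitializeConfig(){
    declare -r -g -A 'SHCFG_INIT=( [a]=b )'
    declare -r -g -A 'SHCFG_INIT=( [c]=d )'  # readonly warning, but execution continues
    echo "This statement now gets executed"
}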

[Jul 26, 2017] Typing variables: declare or typeset

Jul 26, 2017 | www.tldp.org

The declare or typeset builtins , which are exact synonyms, permit modifying the properties of variables. This is a very weak form of the typing [1] available in certain programming languages. The declare command is specific to version 2 or later of Bash. The typeset command also works in ksh scripts.

declare/typeset options
-r readonly
( declare -r var1 works the same as readonly var1 )

This is the rough equivalent of the C const type qualifier. An attempt to change the value of a readonly variable fails with an error message.

declare -r var1=1
echo "var1 = $var1"   # var1 = 1

(( var1++ ))          # x.sh: line 4: var1: readonly variable
-i integer
declare -i number
# The script will treat subsequent occurrences of "number" as an integer.             

number=3
echo "Number = $number"     # Number = 3

number=three
echo "Number = $number"     # Number = 0
# Tries to evaluate the string "three" as an integer.

Certain arithmetic operations are permitted for declared integer variables without the need for expr or let .

n=6/3
echo "n = $n"       # n = 6/3

declare -i n
n=6/3
echo "n = $n"       # n = 2
-a array
declare -a indices

The variable indices will be treated as an array .

-f function(s)
declare -f

A declare -f line with no arguments in a script causes a listing of all the functions previously defined in that script.

declare -f function_name

A declare -f function_name in a script lists just the function named.

-x export
declare -x var3

This declares a variable as available for exporting outside the environment of the script itself.

-x var=$value
declare -x var3=373

The declare command permits assigning a value to a variable in the same statement as setting its properties.

Example 9-10. Using declare to type variables
#!/bin/bash

func1 ()
{
  echo This is a function.
}

declare -f        # Lists the function above.

echo

declare -i var1   # var1 is an integer.
var1=2367
echo "var1 declared as $var1"
var1=var1+1       # Integer declaration eliminates the need for 'let'.
echo "var1 incremented by 1 is $var1."
# Attempt to change variable declared as integer.
echo "Attempting to change var1 to floating point value, 2367.1."
var1=2367.1       # Results in error message, with no change to variable.
echo "var1 is still $var1"

echo

declare -r var2=13.36         # 'declare' permits setting a variable property
                              #+ and simultaneously assigning it a value.
echo "var2 declared as $var2" # Attempt to change readonly variable.
var2=13.37                    # Generates error message, and exit from script.

echo "var2 is still $var2"    # This line will not execute.

exit 0                        # Script will not exit here.
Caution Using the declare builtin restricts the scope of a variable.
foo ()
{
FOO="bar"
}

bar ()
{
foo
echo $FOO
}

bar   # Prints bar.

However . . .

foo (){
declare FOO="bar"
}

bar ()
{
foo
echo $FOO
}

bar  # Prints nothing.


# Thank you, Michael Iatrou, for pointing this out.
9.2.1. Another use for declare

The declare command can be helpful in identifying variables, environmental or otherwise. This can be especially useful with arrays .

bash$ declare | grep HOME
HOME=/home/bozo

bash$ zzy=68

bash$ declare | grep zzy
zzy=68

bash$ Colors=([0]="purple" [1]="reddish-orange" [2]="light green")

bash$ echo ${Colors[@]}
purple reddish-orange light green

bash$ declare | grep Colors
Colors=([0]="purple" [1]="reddish-orange" [2]="light green")

Notes
[1] In this context, typing a variable means to classify it and restrict its properties. For example, a variable declared or typed as an integer is no longer available for string operations .
declare -i intvar

intvar=23
echo "$intvar"   # 23
intvar=stringval
echo "$intvar"   # 0

[Jul 25, 2017] Beginner Mistakes

Jul 25, 2017 | wiki.bash-hackers.org

Script execution Your perfect Bash script executes with syntax errors If you write Bash scripts with Bash specific syntax and features, run them with Bash , and run them with Bash in native mode .

Wrong

See also:

Your script named "test" doesn't execute Give it another name. The executable test already exists.

In Bash it's a builtin. With other shells, it might be an executable file. Either way, it's a bad name choice!

Workaround: You can call it using the pathname:

/home/user/bin/test

Globbing Brace expansion is not globbing The following command line is not related to globbing (filename expansion):

# YOU EXPECT
# -i1.vob -i2.vob -i3.vob ....

echo -i{*.vob,}

# YOU GET
# -i*.vob -i
Why? The brace expansion is simple text substitution. All possible text formed by the prefix, the postfix and the braces themselves are generated. In the example, these are only two: -i*.vob and -i . The filename expansion happens after that, so there is a chance that -i*.vob is expanded to a filename - if you have files like -ihello.vob . But it definitely doesn't do what you expected.

Please see:

Test-command

Please see:

Variables Setting variables The Dollar-Sign There is no $ (dollar-sign) when you reference the name of a variable! Bash is not PHP!
# THIS IS WRONG!
$myvar="Hello world!"

A variable name preceded with a dollar-sign always means that the variable gets expanded . In the example above, it might expand to nothing (because it wasn't set), effectively resulting in

="Hello world!"
which definitely is wrong !

When you need the name of a variable, you write only the name, for example in an assignment.

When you need the content of a variable, you prefix its name with a dollar-sign, as in the sketch below.
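A two-line sketch of the difference (the name myvar is just an example):

myvar="Hello world!"   # assigning: only the bare name on the left, no dollar-sign
echo "$myvar"          # expanding: the name prefixed with a dollar-sign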

Whitespace Putting spaces on either or both sides of the equal-sign ( = ) when assigning a value to a variable will fail.
# INCORRECT 1
example = Hello

# INCORRECT 2
example= Hello

# INCORRECT 3
example =Hello

The only valid form is no spaces between the variable name and assigned value

# CORRECT 1
example=Hello

# CORRECT 2
example=" Hello"

Expanding (using) variables A typical beginner's trap is quoting.

As noted above, when you want to expand a variable i.e. "get the content", the variable name needs to be prefixed with a dollar-sign. But, since Bash knows various ways to quote and does word-splitting, the result isn't always the same.

Let's define an example variable containing text with spaces:

example="Hello world"
Used form     result        number of words
$example      Hello world   2
"$example"    Hello world   1
\$example     $example      1
'$example'    $example      1

If you use parameter expansion, you must use the name ( PATH ) of the referenced variables/parameters. i.e. not ( $PATH ):

# WRONG!
echo "The first character of PATH is ${$PATH:0:1}"

# CORRECT
echo "The first character of PATH is ${PATH:0:1}"

Note that if you are using variables in arithmetic expressions , then the bare name is allowed:

((a=$a+7))         # Add 7 to a
((a = a + 7))      # Add 7 to a.  Identical to the previous command.
((a += 7))         # Add 7 to a.  Identical to the previous command.

a=$((a+7))         # POSIX-compatible version of previous code.

Please see:

Exporting Exporting a variable means to give newly created (child-)processes a copy of that variable. It does not copy a variable created in a child process back to the parent process. The following example does not work, since the variable hello is set in a child process (the process you execute to start that script ./script.sh ):
$ cat script.sh
export hello=world

$ ./script.sh
$ echo $hello
$

Exporting is one-way. The direction is parent process to child process, not the reverse. The above example will work, when you don't execute the script, but include ("source") it:

$ source ./script.sh
$ echo $hello
world
$
In this case, the export command is of no use.
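If you actually need a value computed by a child process back in the parent, the usual route (a sketch, not part of the wiki text) is to capture the child's output instead of trying to export upward:

hello=$(bash -c 'echo world')   # run the child and capture its stdout
echo "$hello"                   # world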

Please see:

Exit codes Reacting to exit codes If you just want to react to an exit code, regardless of its specific value, you don't need to use $? in a test command like this:
grep ^root: /etc/passwd >/dev/null 2>&1

if [ $? -ne 0 ]; then
  echo "root was not found - check the pub at the corner"
fi

This can be simplified to:

if ! grep ^root: /etc/passwd >/dev/null 2>&1; then
  echo "root was not found - check the pub at the corner"
fi

Or, simpler yet:

grep ^root: /etc/passwd >/dev/null 2>&1 || echo "root was not found - check the pub at the corner"

If you need the specific value of $? , there's no other choice. But if you need only a "true/false" exit indication, there's no need for $? .
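When the exact status does matter, a small sketch is to save $? immediately after the command, before anything else can overwrite it:

grep ^root: /etc/passwd >/dev/null 2>&1
status=$?                   # save it right away; the next command resets $?
if [ "$status" -eq 2 ]; then
    echo "grep itself failed with status $status (e.g. unreadable file)"
fi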

See also:

Output vs. Return Value It's important to remember the different ways to run a child command, and whether you want the output, the return value, or neither.

When you want to run a command (or a pipeline) and save (or print) the output , whether as a string or an array, you use Bash's $(command) syntax:

output=$(ls -l /tmp)
newvariable=$(printf "foo")

When you want to use the return value of a command, just use the command, or add ( ) to run a command or pipeline in a subshell:

if grep someuser /etc/passwd ; then
    # do something
fi

if ( w | grep someuser | grep sqlplus ) ; then
    # someuser is logged in and running sqlplus
fi

Make sure you're using the form you intended:

# WRONG!
if $(grep ERROR /var/log/messages) ; then
    # send alerts
fi

[Jul 25, 2017] Arrays in bash 4.x

Jul 25, 2017 | wiki.bash-hackers.org

Purpose An array is a parameter that holds mappings from keys to values. Arrays are used to store a collection of parameters into a parameter. Arrays (in any programming language) are a useful and common composite data structure, and one of the most important scripting features in Bash and other shells.

Here is an abstract representation of an array named NAMES . The indexes go from 0 to 3.

NAMES
 0: Peter
 1: Anna
 2: Greg
 3: Jan

Instead of using 4 separate variables, multiple related variables are grouped together into elements of the array, accessible by their key . If you want the second name, ask for index 1 of the array NAMES . Indexing Bash supports two different types of ksh-like one-dimensional arrays. Multidimensional arrays are not implemented .

Syntax Referencing To accommodate referring to array variables and their individual elements, Bash extends the parameter naming scheme with a subscript suffix. Any valid ordinary scalar parameter name is also a valid array name: [[:alpha:]_][[:alnum:]_]* . The parameter name may be followed by an optional subscript enclosed in square brackets to refer to a member of the array.

The overall syntax is arrname[subscript] - where for indexed arrays, subscript is any valid arithmetic expression, and for associative arrays, any nonempty string. Subscripts are first processed for parameter and arithmetic expansions, and command and process substitutions. When used within parameter expansions or as an argument to the unset builtin, the special subscripts * and @ are also accepted which act upon arrays analogously to the way the @ and * special parameters act upon the positional parameters. In parsing the subscript, bash ignores any text that follows the closing bracket up to the end of the parameter name.

With few exceptions, names of this form may be used anywhere ordinary parameter names are valid, such as within arithmetic expressions , parameter expansions , and as arguments to builtins that accept parameter names. An array is a Bash parameter that has been given the -a (for indexed) or -A (for associative) attributes . However, any regular (non-special or positional) parameter may be validly referenced using a subscript, because in most contexts, referring to the zeroth element of an array is synonymous with referring to the array name without a subscript.

# "x" is an ordinary non-array parameter.
$ x=hi; printf '%s ' "$x" "${x[0]}"; echo "${_[0]}"
hi hi hi

The only exceptions to this rule are in a few cases where the array variable's name refers to the array as a whole. This is the case for the unset builtin (see destruction ) and when declaring an array without assigning any values (see declaration ). Declaration The following explicitly give variables array attributes, making them arrays:

Syntax Description
ARRAY=() Declares an indexed array ARRAY and initializes it to be empty. This can also be used to empty an existing array.
ARRAY[0]= Generally sets the first element of an indexed array. If no array ARRAY existed before, it is created.
declare -a ARRAY Declares an indexed array ARRAY . An existing array is not initialized.
declare -A ARRAY Declares an associative array ARRAY . This is the one and only way to create associative arrays.
Storing values Storing values in arrays is quite as simple as storing values in normal variables.
Syntax Description
ARRAY[N]=VALUE Sets the element N of the indexed array ARRAY to VALUE . N can be any valid arithmetic expression
ARRAY[STRING]=VALUE Sets the element indexed by STRING of the associative array ARRAY .
ARRAY=VALUE As above. If no index is given, as a default the zeroth element is set to VALUE . Careful, this is even true of associative arrays - there is no error if no key is specified, and the value is assigned to string index "0".
ARRAY=(E1 E2 ) Compound array assignment - sets the whole array ARRAY to the given list of elements indexed sequentially starting at zero. The array is unset before assignment unless the += operator is used. When the list is empty ( ARRAY=() ), the array will be set to an empty array. This method obviously does not use explicit indexes. An associative array can not be set like that! Clearing an associative array using ARRAY=() works.
ARRAY=([X]=E1 [Y]=E2 ) Compound assignment for indexed arrays with index-value pairs declared individually (here for example X and Y ). X and Y are arithmetic expressions. This syntax can be combined with the above - elements declared without an explicitly specified index are assigned sequentially starting at either the last element with an explicit index, or zero.
ARRAY=([S1]=E1 [S2]=E2 ) Individual mass-setting for associative arrays . The named indexes (here: S1 and S2 ) are strings.
ARRAY+=(E1 E2 ) Append to ARRAY.
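A few of the assignment forms from the table above in action (a quick sketch; the names are made up):

files=(one two three)                   # compound assignment, indexes 0..2
files[3]="four"                         # set a single element by index
files+=("five")                         # append to the end
declare -A colors                       # associative arrays need declare -A
colors[apple]=red                       # set one key
colors+=([sky]=blue [grass]=green)      # add several key-value pairs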

As of now, arrays can't be exported. Getting values For the full details, see the article about parameter expansion and check the notes about arrays.

Syntax Description
${ARRAY[N]} Expands to the value of the index N in the indexed array ARRAY . If N is a negative number, it's treated as the offset from the maximum assigned index (can't be used for assignment) - 1
${ARRAY[S]} Expands to the value of the index S in the associative array ARRAY .
"${ARRAY[@]}"
${ARRAY[@]}
"${ARRAY[*]}"
${ARRAY[*]}
Similar to mass-expanding positional parameters , this expands to all elements. If unquoted, both subscripts * and @ expand to the same result, if quoted, @ expands to all elements individually quoted, * expands to all elements quoted as a whole.
"${ARRAY[@]:N:M}"
${ARRAY[@]:N:M}
"${ARRAY[*]:N:M}"
${ARRAY[*]:N:M}
Similar to what this syntax does for the characters of a single string when doing substring expansion , this expands to M elements starting with element N . This way you can mass-expand individual indexes. The rules for quoting and the subscripts * and @ are the same as above for the other mass-expansions.
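A quick sketch of the expansions from the table above (the array is made up):

names=(Peter Anna Greg Jan)
echo "${names[1]}"        # Anna                 (single element)
echo "${names[@]}"        # Peter Anna Greg Jan  (all elements)
echo "${names[@]:1:2}"    # Anna Greg            (2 elements starting at index 1)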

For clarification: When you use the subscripts @ or * for mass-expanding, then the behaviour is exactly what it is for $@ and $* when mass-expanding the positional parameters . You should read this article to understand what's going on. Metadata

Syntax Description
${#ARRAY[N]} Expands to the length of an individual array member at index N ( stringlength )
${#ARRAY[STRING]} Expands to the length of an individual associative array member at index STRING ( stringlength )
${#ARRAY[@]}
${#ARRAY[*]}
Expands to the number of elements in ARRAY
${!ARRAY[@]}
${!ARRAY[*]}
Expands to the indexes in ARRAY since BASH 3.0
Destruction The unset builtin command is used to destroy (unset) arrays or individual elements of arrays.
Syntax Description
unset -v ARRAY
unset -v ARRAY[@]
unset -v ARRAY[*]
Destroys a complete array
unset -v ARRAY[N] Destroys the array element at index N
unset -v ARRAY[STRING] Destroys the array element of the associative array at index STRING

It is best to explicitly specify -v when unsetting variables with unset.

Note that specifying unquoted array elements as arguments to any command, such as unset, can cause pathname expansion to occur due to the presence of glob characters.

Example: You are in a directory with a file named x1 , and you want to destroy an array element x[1] , with

unset x[1]
then pathname expansion will expand to the filename x1 and break your processing!

Even worse, if nullglob is set, your array/index will disappear.

To avoid this, always quote the array name and index:

unset -v 'x[1]'

This applies generally to all commands which take variable names as arguments. Single quotes preferred.

Usage Numerical Index Numerical indexed arrays are easy to understand and easy to use. The Purpose and Indexing chapters above more or less explain all the needed background theory.

Now, some examples and comments for you.

Let's say we have an array sentence which is initialized as follows:

sentence=(Be liberal in what you accept, and conservative in what you send)

Since no special code is there to prevent word splitting (no quotes), every word there will be assigned to an individual array element. When you count the words you see, you should get 12. Now let's see if Bash has the same opinion:

$ echo ${#sentence[@]}
12

Yes, 12. Fine. You can take this number to walk through the array. Just subtract 1 from the number of elements, and start your walk at 0 (zero)

((n_elements=${#sentence[@]}, max_index=n_elements - 1))

for ((i = 0; i <= max_index; i++)); do
  echo "Element $i: '${sentence[i]}'"
done

You always have to remember that; newbies seem to have problems with it sometimes. Please understand that numerical array indexing begins at 0 (zero).

The method above, walking through an array by just knowing its number of elements, only works for arrays where all elements are set, of course. If one element in the middle is removed, then the calculation is nonsense, because the number of elements doesn't correspond to the highest used index anymore (we call them " sparse arrays "). Associative (Bash 4) Associative arrays (or hash tables ) are not much more complicated than numerical indexed arrays. The numerical index value (in Bash a number starting at zero) just is replaced with an arbitrary string:

# declare -A, introduced with Bash 4 to declare an associative array
declare -A sentence

sentence[Begin]='Be liberal in what'
sentence[Middle]='you accept, and conservative'
sentence[End]='in what you send'
sentence['Very end']=...

Beware: don't rely on the fact that the elements are ordered in memory like they were declared, it could look like this:

# output from 'set' command
sentence=([End]="in what you send" [Middle]="you accept, and conservative " [Begin]="Be liberal in what " ["Very end"]="...")
This effectively means, you can get the data back with "${sentence[@]}" , of course (just like with numerical indexing), but you can't rely on a specific order. If you want to store ordered data, or re-order data, go with numerical indexes. For associative arrays, you usually query known index values:
for element in Begin Middle End "Very end"; do
    printf "%s" "${sentence[$element]}"
done
printf "\n"

A nice code example: Checking for duplicate files using an associative array indexed with the SHA sum of the files:

# Thanks to Tramp in #bash for the idea and the code

unset flist; declare -A flist;
while read -r sum fname; do 
    if [[ ${flist[$sum]} ]]; then
        printf 'rm -- "%s" # Same as >%s<\n' "$fname" "${flist[$sum]}" 
    else
        flist[$sum]="$fname"
    fi
done <  <(find . -type f -exec sha256sum {} +)  >rmdups

Integer arrays Any type attributes applied to an array apply to all elements of the array. If the integer attribute is set for either indexed or associative arrays, then values are considered as arithmetic for both compound and ordinary assignment, and the += operator is modified in the same way as for ordinary integer variables.

 ~ $ ( declare -ia 'a=(2+4 [2]=2+2 [a[2]]="a[2]")' 'a+=(42 [a[4]]+=3)'; declare -p a )
declare -ai a='([0]="6" [2]="4" [4]="7" [5]="42")'

a[0] is assigned the result of 2+4 . a[2] gets the result of 2+2 . The last index in the first assignment is the result of a[2] , which has already been assigned as 4 , and the value assigned at that index is also a[2] (again 4 , since the array is integer-typed).

This shows that even though any existing arrays named a in the current scope have already been unset by using = instead of += to the compound assignment, arithmetic variables within keys can self-reference any elements already assigned within the same compound-assignment. With integer arrays this also applies to expressions to the right of the = . (See evaluation order , the right side of an arithmetic assignment is typically evaluated first in Bash.)

The second compound assignment argument to declare uses += , so it appends after the last element of the existing array rather than deleting it and creating a new array, so a[5] gets 42 .

Lastly, the element whose index is the value of a[4] ( 4 ), gets 3 added to its existing value, making a[4] == 7 . Note that having the integer attribute set this time causes += to add, rather than append a string, as it would for a non-integer array.

The single quotes force the assignments to be evaluated in the environment of declare . This is important because attributes are only applied to the assignment after assignment arguments are processed. Without them the += compound assignment would have been invalid, and strings would have been inserted into the integer array without evaluating the arithmetic. A special-case of this is shown in the next section.

eval , but there are differences.) 'Todo: ' Discuss this in detail.

Indirection Arrays can be expanded indirectly using the indirect parameter expansion syntax. Parameters whose values are of the form: name[index] , name[@] , or name[*] when expanded indirectly produce the expected results. This is mainly useful for passing arrays (especially multiple arrays) by name to a function.

This example is an "isSubset"-like predicate which returns true if all key-value pairs of the array given as the first argument to isSubset correspond to a key-value of the array given as the second argument. It demonstrates both indirect array expansion and indirect key-passing without eval using the aforementioned special compound assignment expansion.

isSubset() {
    local -a 'xkeys=("${!'"$1"'[@]}")' 'ykeys=("${!'"$2"'[@]}")'
    set -- "${@/%/[key]}"

    (( ${#xkeys[@]} <= ${#ykeys[@]} )) || return 1

    local key
    for key in "${xkeys[@]}"; do
        [[ ${!2+_} && ${!1} == ${!2} ]] || return 1
    done
}

main() {
    # "a" is a subset of "b"
    local -a 'a=({0..5})' 'b=({0..10})'
    isSubset a b
    echo $? # true

    # "a" contains a key not in "b"
    local -a 'a=([5]=5 {6..11})' 'b=({0..10})'
    isSubset a b
    echo $? # false

    # "a" contains an element whose value != the corresponding member of "b"
    local -a 'a=([5]=5 6 8 9 10)' 'b=({0..10})'
    isSubset a b
    echo $? # false
}

main

This script is one way of implementing a crude multidimensional associative array by storing array definitions in an array and referencing them through indirection. The script takes two keys and dynamically calls a function whose name is resolved from the array.

callFuncs() {
    # Set up indirect references as positional parameters to minimize local name collisions.
    set -- "${@:1:3}" ${2+'a["$1"]' "$1"'["$2"]'}

    # The only way to test for set but null parameters is unfortunately to test each individually.
    local x
    for x; do
        [[ $x ]] || return 0
    done

    local -A a=(
        [foo]='([r]=f [s]=g [t]=h)'
        [bar]='([u]=i [v]=j [w]=k)'
        [baz]='([x]=l [y]=m [z]=n)'
        ) ${4+${a["$1"]+"${1}=${!3}"}} # For example, if "$1" is "bar" then define a new array: bar=([u]=i [v]=j [w]=k)

    ${4+${a["$1"]+"${!4-:}"}} # Now just lookup the new array. for inputs: "bar" "v", the function named "j" will be called, which prints "j" to stdout.
}

main() {
    # Define functions named {f..n} which just print their own names.
    local fun='() { echo "$FUNCNAME"; }' x

    for x in {f..n}; do
        eval "${x}${fun}"
    done

    callFuncs "$@"
}

main "$@"

Bugs and Portability Considerations

Bugs Evaluation order Here are some of the nasty details of array assignment evaluation order. You can use this testcase code to generate these results.
Each testcase prints evaluation order for indexed array assignment
contexts. Each context is tested for expansions (represented by digits) and
arithmetic (letters), ordered from left to right within the expression. The
output corresponds to the way evaluation is re-ordered for each shell:

a[ $1 a ]=${b[ $2 b ]:=${c[ $3 c ]}}               No attributes
a[ $1 a ]=${b[ $2 b ]:=c[ $3 c ]}                  typeset -ia a
a[ $1 a ]=${b[ $2 b ]:=c[ $3 c ]}                  typeset -ia b
a[ $1 a ]=${b[ $2 b ]:=c[ $3 c ]}                  typeset -ia a b
(( a[ $1 a ] = b[ $2 b ] ${c[ $3 c ]} ))           No attributes
(( a[ $1 a ] = ${b[ $2 b ]:=c[ $3 c ]} ))          typeset -ia b
a+=( [ $1 a ]=${b[ $2 b ]:=${c[ $3 c ]}} [ $4 d ]=$(( $5 e )) ) typeset -a a
a+=( [ $1 a ]=${b[ $2 b ]:=c[ $3 c ]} [ $4 d ]=${5}e ) typeset -ia a

bash: 4.2.42(1)-release
2 b 3 c 2 b 1 a
2 b 3 2 b 1 a c
2 b 3 2 b c 1 a
2 b 3 2 b c 1 a c
1 2 3 c b a
1 2 b 3 2 b c c a
1 2 b 3 c 2 b 4 5 e a d
1 2 b 3 2 b 4 5 a c d e

ksh93: Version AJM 93v- 2013-02-22
1 2 b b a
1 2 b b a
1 2 b b a
1 2 b b a
1 2 3 c b a
1 2 b b a
1 2 b b a 4 5 e d
1 2 b b a 4 5 d e

mksh: @(#)MIRBSD KSH R44 2013/02/24
2 b 3 c 1 a
2 b 3 1 a c
2 b 3 c 1 a
2 b 3 c 1 a
1 2 3 c a b
1 2 b 3 c a
1 2 b 3 c 4 5 e a d
1 2 b 3 4 5 a c d e

zsh: 5.0.2
2 b 3 c 2 b 1 a
2 b 3 2 b 1 a c
2 b 1 a
2 b 1 a
1 2 3 c b a
1 2 b a
1 2 b 3 c 2 b 4 5 e
1 2 b 3 2 b 4 5

See also

[Jul 25, 2017] Handling positional parameters

Notable quotes:
"... under construction ..."
"... under construction ..."
Jul 25, 2017 | wiki.bash-hackers.org

Intro The day will come when you want to give arguments to your scripts. These arguments are known as positional parameters . Some relevant special parameters are described below:

Parameter(s) Description
$0 the first positional parameter, equivalent to argv[0] in C, see the first argument
$FUNCNAME the function name ( attention : inside a function, $0 is still the $0 of the shell, not the function name)
$1 $9 the argument list elements from 1 to 9
${10} ${N} the argument list elements beyond 9 (note the parameter expansion syntax!)
$* all positional parameters except $0 , see mass usage
$@ all positional parameters except $0 , see mass usage
$# the number of arguments, not counting $0

These positional parameters reflect exactly what was given to the script when it was called.

Option-switch parsing (e.g. -h for displaying help) is not performed at this point.

See also the dictionary entry for "parameter" . The first argument The very first argument you can access is referenced as $0 . It is usually set to the script's name exactly as called, and it's set on shell initialization:

Testscript - it just echos $0 :


#!/bin/bash

echo "$0"

You see, $0 is always set to the name the script is called with ( $ is the prompt ):

> ./testscript 

./testscript


> /usr/bin/testscript

/usr/bin/testscript

However, this isn't true for login shells:


> echo "$0"

-bash

In other terms, $0 is not a positional parameter, it's a special parameter independent from the positional parameter list. It can be set to anything. In the ideal case it's the pathname of the script, but since this gets set on invocation, the invoking program can easily influence it (the login program does that for login shells, by prefixing a dash, for example).

Inside a function, $0 still behaves as described above. To get the function name, use $FUNCNAME . Shifting The builtin command shift is used to change the positional parameter values:

The command can take a number as argument: Number of positions to shift. e.g. shift 4 shifts $5 to $1 . Using them Enough theory, you want to access your script-arguments. Well, here we go. One by one One way is to access specific parameters:


#!/bin/bash

echo "Total number of arguments: $#"

echo "Argument 1: $1"

echo "Argument 2: $2"

echo "Argument 3: $3"

echo "Argument 4: $4"

echo "Argument 5: $5"

While useful in another situation, this way lacks flexibility. The maximum number of arguments is a fixed value - which is a bad idea if you write a script that takes many filenames as arguments.

⇒ forget that one Loops There are several ways to loop through the positional parameters.


You can code a C-style for-loop using $# as the end value. On every iteration, the shift -command is used to shift the argument list:


numargs=$#

for ((i=1 ; i <= numargs ; i++))

do

    echo "$1"

    shift

done

Not very stylish, but usable. The numargs variable is used to store the initial value of $# because the shift command will change it as the script runs.


Another way to iterate one argument at a time is the for loop without a given wordlist. The loop uses the positional parameters as a wordlist:


for arg

do

    echo "$arg"

done

Advantage: The positional parameters will be preserved

The next method is similar to the first example (the for loop), but it doesn't test for reaching $# . It shifts and checks if $1 still expands to something, using the test command :


while [ "$1" ]

do

    echo "$1"

    shift

done

Looks nice, but has the disadvantage of stopping when $1 is empty (null-string). Let's modify it to run as long as $1 is defined (but may be null), using parameter expansion for an alternate value :


while [ "${1+defined}" ]; do

  echo "$1"

  shift

done

Getopts

There is a small tutorial dedicated to ''getopts'' ( under construction ).

Mass usage

All Positional Parameters

Sometimes it's necessary to just "relay" or "pass" given arguments to another program. It's very inefficient to do that in one of these loops, as you will most likely destroy the integrity of the arguments (spaces!).

The shell developers created $* and $@ for this purpose.

As overview:

Syntax Effective result
$* $1 $2 $3 ... ${N}
$@ $1 $2 $3 ... ${N}
"$*" "$1c$2c$3c...c${N}"
"$@" "$1" "$2" "$3" ... "${N}"

Without being quoted (double quotes), both have the same effect: All positional parameters from $1 to the last one used are expanded without any special handling.

When the $* special parameter is double quoted, it expands to the equivalent of: "$1c$2c$3c$4c...c${N}" , where 'c' is the first character of IFS .

But when the $@ special parameter is used inside double quotes, it expands to the equivalent of

"$1" "$2" "$3" "$4" .. "$N"

which reflects all positional parameters as they were set initially and passed to the script or function. If you want to re-use your positional parameters to call another program (for example in a wrapper-script), then this is the choice for you, use double quoted "$@" .
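A minimal sketch of the difference (the args-demo name and the count helper are illustrations, not part of the original text):

#!/bin/bash
# args-demo - count how many arguments each form actually passes on
count() { echo "$#"; }

count $*      # word-split again: "two words" becomes two arguments
count $@      # same re-splitting behaviour as unquoted $*
count "$*"    # always exactly one argument, joined with the first character of IFS
count "$@"    # the original arguments, preserved exactly

Called as ./args-demo one "two words" , this prints 3, 3, 1 and 2.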

Well, let's just say: You almost always want a quoted "$@" !

Range Of Positional Parameters

Another way to mass expand the positional parameters is similar to what is possible for a range of characters using substring expansion on normal parameters and the mass expansion range of arrays .

${@:START:COUNT}

${*:START:COUNT}

"${@:START:COUNT}"

"${*:START:COUNT}"

The rules for using @ or * and quoting are the same as above. This will expand COUNT number of positional parameters beginning at START . COUNT can be omitted ( ${@:START} ), in which case, all positional parameters beginning at START are expanded.

If START is negative, the positional parameters are numbered in reverse starting with the last one.

COUNT may not be negative, i.e. the element count may not be decremented.

Example: START at the last positional parameter:


echo "${@: -1}"
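A slightly fuller sketch of the range syntax, assuming the script was called with the made-up arguments a b c d e :

echo "${@:2:2}"     # b c      - two parameters, starting at $2
echo "${@:3}"       # c d e    - everything from $3 on
echo "${@: -2:1}"   # d        - one parameter, starting at the second-to-last
                    #            (note the space before the minus sign)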

Attention : As of Bash 4, a START of 0 includes the special parameter $0 , i.e. the shell name or whatever $0 is set to, when the positional parameters are in use. A START of 1 begins at $1 . In Bash 3 and older, both 0 and 1 began at $1 .

Setting Positional Parameters

Setting positional parameters with command line arguments is not the only way to set them. The builtin command set may be used to "artificially" change the positional parameters from inside the script or function:


set "This is" my new "set of" positional parameters



# RESULTS IN

# $1: This is

# $2: my

# $3: new

# $4: set of

# $5: positional

# $6: parameters

It's wise to signal "end of options" when setting positional parameters this way. If not, the dashes might be interpreted as an option switch by set itself:


# both ways work, but behave differently. See the article about the set command!

set -- ...

set - ...

Alternately this will also preserve any verbose (-v) or tracing (-x) flags, which may otherwise be reset by set


set -$- ...

Production examples

Using a while loop

To make your program accept options following standard command syntax:

COMMAND [options] <params> # Like 'cat -A file.txt'

See simple option parsing code below. It's not that flexible. It doesn't auto-interpret combined options (-fu USER) but it works and is a good rudimentary way to parse your arguments.


#!/bin/sh

# Keeping options in alphabetical order makes it easy to add more.



while :

do

    case "$1" in

      -f | --file)

          file="$2"   # You may want to check validity of $2

          shift 2

          ;;

      -h | --help)

          display_help  # Call your function

          # no shifting needed here, we're done.

          exit 0

          ;;

      -u | --user)

          username="$2" # You may want to check validity of $2

          shift 2

          ;;

      -v | --verbose)

          #  It's better to assign a string, than a number like "verbose=1"

          #  because if you're debugging the script with "bash -x" code like this:

          #

          #    if [ "$verbose" ] ...

          #

          #  You will see:

          #

          #    if [ "verbose" ] ...

          #

          #  Instead of cryptic

          #

          #    if [ "1" ] ...

          #

          verbose="verbose"

          shift

          ;;

      --) # End of all options

          shift

          break
          ;;

      -*)

          echo "Error: Unknown option: $1" >&2

          exit 1

          ;;

      *)  # No more options

          break

          ;;

    esac

done



# End of file

Filter unwanted options with a wrapper script

This simple wrapper enables filtering unwanted options (here: -a and --all for ls ) out of the command line. It reads the positional parameters and builds a filtered array consisting of them, then calls ls with the new option set. It also respects -- as "end of options" for ls and doesn't change anything after it:


#!/bin/bash



# simple ls(1) wrapper that doesn't allow the -a option



options=()  # the buffer array for the parameters

eoo=0       # end of options reached



while [[ $1 ]]

do

    if ! ((eoo)); then

        case "$1" in

          -a)

              shift

              ;;

          --all)

              shift

              ;;

          -[^-]*a*|-a?*)

              options+=("${1//a}")

              shift

              ;;

          --)

              eoo=1

              options+=("$1")

              shift

              ;;

          *)

              options+=("$1")

              shift

              ;;

        esac

    else

        options+=("$1")



        # Another (worse) way of doing the same thing:

        # options=("${options[@]}" "$1")

        shift

    fi

done



/bin/ls "${options[@]}"

Using getopts

There is a small tutorial dedicated to ''getopts'' ( under construction ).
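In the meantime, here is a minimal sketch of getopts-based parsing; it is not part of the original tutorial, and the -f/-u/-v letters simply mirror the while-loop example above:

#!/bin/bash
# minimal getopts sketch - short options only, getopts does not handle --long options
verbose=""
while getopts ":f:u:v" opt
do
    case "$opt" in
      f)  file="$OPTARG" ;;
      u)  username="$OPTARG" ;;
      v)  verbose="verbose" ;;
      \?) echo "Error: Unknown option: -$OPTARG" >&2; exit 1 ;;
      :)  echo "Error: Option -$OPTARG requires an argument" >&2; exit 1 ;;
    esac
done
shift $((OPTIND - 1))   # $1 is now the first non-option argument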

Discussion

2010/04/14 14:20
The shell-developers invented $* and $@ for this purpose.
Without being quoted (double-quoted), both have the same effect: All positional parameters from $1 to the last used one are expanded, separated by the first character of IFS (represented by "c" here, but usually a space):
$1c$2c$3c$4c........$N

Without double quotes, $* and $@ are expanding the positional parameters separated by only space, not by IFS.


#!/bin/bash



export IFS='-'



echo -e $*

echo -e $@


$./test "This is" 2 3

This is 2 3

This is 2 3

2011/02/18 16:11

#!/bin/bash

OLDIFS="$IFS"
IFS='-'            # export IFS='-'

#echo -e $*
#echo -e $@
# should be:
echo -e "$*"
echo -e "$@"

IFS="$OLDIFS"

2011/02/18 16:14 # should be echo -e "$*"

2012/04/20 10:32 Here's yet another non-getopts way.

http://bsdpants.blogspot.de/2007/02/option-ize-your-shell-scripts.html

2012/07/16 14:48 Hi there!

What if I use "$@" in subsequent function calls, but arguments are strings?

I mean, having:


#!/bin/bash

echo "$@"

echo n: $#

If you use it


mypc$ script arg1 arg2 "asd asd" arg4

arg1 arg2 asd asd arg4

n: 4

But having


#!/bin/bash

myfunc()

{

  echo "$@"

  echo n: $#

}

echo "$@"

echo n: $#

myfunc "$@"

you get:


mypc$ myscrpt arg1 arg2 "asd asd" arg4

arg1 arg2 asd asd arg4

4

arg1 arg2 asd asd arg4

5

As you can see, there is no way to let the function know that a parameter is a single string and not a space-separated list of arguments.

Any idea of how to solve it? I've tested calling functions and doing expansion in almost all ways with no results.

2012/08/12 09:11 I don't know why it fails for you. It should work if you use "$@" , of course.

See the example I used your second script with:


$ ./args1 a b c "d e" f

a b c d e f

n: 5

a b c d e f

n: 5

[Jul 25, 2017] Bash function for 'cd' aliases

Jul 25, 2017 | artofsoftware.org

Sep 2, 2011

Posted by craig in Tools

Leave a comment

Tags

bash , CDPATH

Once upon a time I was playing with Windows Power Shell (WPSH) and discovered a very useful function for changing to commonly visited directories. The function, called "go", which was written by Peter Provost , grew on me as I used WPSH, so much so that I decided to implement it in bash after my WPSH experiments ended.

The problem is simple. Users of command line interfaces tend to visit the same directories repeatedly over the course of their work, and having a way to get to these oft-visited places without a lot of typing is nice.

The solution entails maintaining a map of key-value pairs, where each key is an alias to a value, which is itself a commonly visited directory. The "go" function will, when given a string input, look that string up in the map, and if the key is found, move to the directory indicated by the value.

The map itself is just a specially formatted text file with one key-value entry per line, while each entry is separated into key-value components by the first encountered colon, with the left side being interpreted as the entry's key and the right side as its value.

Keys are typically short easily typed strings, while values can be arbitrary path names, and even contain references to environment variables. The effect of this is that "go" can respond dynamically to the environment.

Finally, the "go" function finds the map file by referring to an environment variable called "GO_FILE", which should have as its value the full path to the map.

Before I ran into this idea I had maintained a number of shell aliases, (i.e. alias dwork='cd $WORK_DIR'), to achieve a similar end, but every time I wanted to add a new location I was forced to edit my .bashrc file. Then I would subsequently have to resource it or enter the alias again on the command line. Since I typically keep multiple shells open this is just a pain, and so I didn't add new aliases very often. With this method, a new entry in the "go file" is immediately available to all open shells without any extra finagling.

This functionality is related to CDPATH, but they are not replacements for one another. Indeed CDPATH is the more appropriate solution when you want to be able to "cd" to all or most of the sub-directories of some parent. On the other hand, "go" works very well for getting to a single directory easily. For example you might not want "/usr/local" in your CDPATH and still want an abbreviated way of getting to "/usr/local/share".

The code for the go function, as well as some brief documentation follows.

##############################################
# GO
#
# Inspired by some Windows Power Shell code
# from Peter Provost (peterprovost.org)
#
# Here are some examples entries:
# work:${WORK_DIR}
# source:${SOURCE_DIR}
# dev:/c/dev
# object:${USER_OBJECT_DIR}
# debug:${USER_OBJECT_DIR}/debug
###############################################
export GO_FILE=~/.go_locations
function go
{
   if [ -z "$GO_FILE" ]
   then
      echo "The variable GO_FILE is not set."
      return
   fi

   if [ ! -e "$GO_FILE" ]
   then
      echo "The 'go file': '$GO_FILE' does not exist."
      return
   fi

   dest=""
   oldIFS=${IFS}
   IFS=$'\n'
   for entry in `cat ${GO_FILE}`
   do
      if [ "$1" = ${entry%%:*} ]
      then
         #echo $entry
         dest=${entry##*:}
         break
      fi
   done

   if [ -n "$dest" ]
   then
      # Expand variables in the go file.
      #echo $dest
      cd `eval echo $dest`
   else
      echo "Invalid location, valid locations are:"
      cat $GO_FILE
   fi
   export IFS=${oldIFS}
}
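A hypothetical usage example (the entries and paths are made up): with a ~/.go_locations file containing

work:${WORK_DIR}
share:/usr/local/share
logs:/var/log

any shell that has sourced the function can simply run go share and end up in /usr/local/share, while go logs jumps to /var/log.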

[Jul 25, 2017] Local variables

Notable quotes:
"... completely local and separate ..."
Jul 25, 2017 | wiki.bash-hackers.org

local to a function:

myfunc() {
  local var=VALUE

  # alternative, only when used INSIDE a function
  declare var=VALUE

  ...
}

The local keyword (or declaring a variable using the declare command) tags a variable to be treated completely local and separate inside the function where it was declared:

foo=external

printvalue() {
  local foo=internal

  echo $foo
}

# this will print "external"
echo $foo

# this will print "internal"
printvalue

# this will print - again - "external"
echo $foo

[Jul 25, 2017] Environment variables

Notable quotes:
"... environment variables ..."
"... including the environment variables ..."
Jul 25, 2017 | wiki.bash-hackers.org

The environment space is not directly related to the topic about scope, but it's worth mentioning.

Every UNIX® process has a so-called environment . Other items, in addition to variables, are saved there, the so-called environment variables . When a child process is created (in Bash e.g. by simply executing another program, say ls to list files), the whole environment including the environment variables is copied to the new process. Reading that from the other side means: Only variables that are part of the environment are available in the child process.

A variable can be tagged to be part of the environment using the export command:

# create a new variable and set it:
# -> This is a normal shell variable, not an environment variable!
myvariable="Hello world."

# make the variable visible to all child processes:
# -> Make it an environment variable: "export" it
export myvariable
Remember that the exported variable is a copy . There is no provision to "copy it back to the parent." See the article about Bash in the process tree !
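A small sketch that demonstrates the copy semantics:

export myvariable="Hello world."

# the child process sees the copy and may change its own copy ...
bash -c 'echo "$myvariable"; myvariable="changed in the child"'

# ... but the parent's value is untouched:
echo "$myvariable"     # still prints "Hello world."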


1) under specific circumstances, also by the shell itself

[Jul 25, 2017] Block commenting

Jul 25, 2017 | wiki.bash-hackers.org

: (colon) and input redirection. The : does nothing, it's a pseudo command, so it does not care about standard input. In the following code example, you want to test mail and logging, but not dump the database, or execute a shutdown:

#!/bin/bash
# Write info mails, do some tasks and bring down the system in a safe way
echo "System halt requested"
mail -s "System halt" netadmin@example.com
logger -t SYSHALT "System halt requested"

##### The following "code block" is effectively ignored
: <<"SOMEWORD"
/etc/init.d/mydatabase clean_stop
mydatabase_dump /var/db/db1 /mnt/fsrv0/backups/db1
logger -t SYSHALT "System halt: pre-shutdown actions done, now shutting down the system"
shutdown -h NOW
SOMEWORD
##### The ignored codeblock ends here
What happened? The : pseudo command was given some input by redirection (a here-document) - the pseudo command didn't care about it, effectively, the entire block was ignored.

The here-document-tag was quoted here to avoid substitutions in the "commented" text! Check redirection with here-documents for more
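A small sketch of the difference the quoting makes (the variable is just an example):

user="root"

# quoted tag: the text is taken literally, $user is NOT expanded
: <<"COMMENT"
Mail will be sent to $user
COMMENT

# unquoted tag: $user (and any command substitutions) WOULD be expanded here,
# which is usually not what you want for a "commented out" block
: <<COMMENT
Mail will be sent to $user
COMMENT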

[Jul 25, 2017] Doing specific tasks: concepts, methods, ideas

Notable quotes:
"... under construction! ..."
Jul 25, 2017 | wiki.bash-hackers.org

[Jul 25, 2017] Bash 4 - a rough overview

Jul 25, 2017 | wiki.bash-hackers.org

Bash changes page for new stuff introduced.

Besides many bugfixes since Bash 3.2, Bash 4 will bring some interesting new features for shell users and scripters. See also Bash changes for a small general overview with more details.

Not all of the changes and news are included here, just the biggest or most interesting ones. The changes to completion, and the readline component are not covered. Though, if you're familiar with these parts of Bash (and Bash 4), feel free to write a chapter here.

The complete list of fixes and changes is in the CHANGES or NEWS file of your Bash 4 distribution.

The currently available stable version is the 4.2 release (February 13, 2011).

New or changed commands and keywords

The new "coproc" keyword

Bash 4 introduces the concept of coprocesses, a well known feature of other shells. The basic concept is simple: It will start any command in the background and set up an array that is populated with accessible files that represent the file descriptors of the started process.

In other words: It lets you start a process in background and communicate with its input and output data streams.
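A minimal sketch of the idea; the CALC name and the awk command are just an illustration:

#!/bin/bash
# start a coprocess; Bash populates the CALC array with its file descriptors
coproc CALC { awk '{ print $1 + $2; fflush() }'; }

# write a request to the coprocess' standard input ...
echo "2 3" >&"${CALC[1]}"

# ... and read the answer back from its standard output
read -r answer <&"${CALC[0]}"
echo "$answer"    # 5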

See The coproc keyword

The new "mapfile" builtin

The mapfile builtin is able to map the lines of a file directly into an array. This avoids having to fill an array yourself using a loop. It enables you to define the range of lines to read, and optionally call a callback, for example to display a progress bar.
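A small sketch (the input file is just an example):

# read /etc/passwd into the array "lines", one element per line,
# stripping the trailing newlines (-t)
mapfile -t lines < /etc/passwd

echo "${#lines[@]} lines read"
echo "first line: ${lines[0]}"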

See: The mapfile builtin command

Changes to the "case" keyword

The case construct understands two new action list terminators:

The ;& terminator causes execution to continue with the next action list (rather than terminate the case construct).

The ;;& terminator causes the case construct to test the next given pattern instead of terminating the whole execution.
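A small sketch of both terminators:

case "$1" in
    d*)  echo "starts with d"  ;;&   # keep testing the following patterns
    da*) echo "starts with da" ;&    # fall through into the next action list
    *)   echo "matched something" ;;
esac

# $1 = "date" -> starts with d / starts with da / matched something
# $1 = "dog"  -> starts with d / matched something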

See The case statement

Changes to the "declare" builtin

The -p option now prints all attributes and values of declared variables (or functions, when used with -f ). The output is fully re-usable as input.

The new option -l declares a variable in a way that the content is converted to lowercase on assignment. For uppercase, the same applies to -u . The option -c causes the content to be capitalized before assignment.
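For example (a small sketch):

declare -l lower="Hello World"   # stored as "hello world"
declare -u upper="Hello World"   # stored as "HELLO WORLD"
echo "$lower / $upper"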

declare -A declares associative arrays (see below).

Changes to the "read" builtin

The read builtin command has some interesting new features.

The -t option to specify a timeout value has been slightly tuned. It now accepts fractional values and the special value 0 (zero). When -t 0 is specified, read immediately returns with an exit status indicating if there's data waiting or not. However, when a timeout is given, and the read builtin times out, any partial data received up to the timeout is stored in the given variable, rather than lost. When a timeout is hit, read exits with a code greater than 128.

A new option, -i , was introduced to be able to preload the input buffer with some text (when Readline is used, with -e ). The user is able to change the text, or press return to accept it.
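Two small sketches (the file name and prompt text are made up):

# is there input waiting on stdin right now?
if read -t 0; then
    echo "data is available"
fi

# preload the answer; the user can edit it or just press Enter (requires -e)
read -e -i "/tmp/output.txt" -p "Write log to: " logfile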

See The read builtin command

Changes to the "help" builtin

The builtin itself didn't change much, but the data displayed is more structured now. The help texts are in a better format, much easier to read.

There are two new options: -d displays the summary of a help text, -m displays a manpage-like format.

Changes to the "ulimit" builtin

Besides the use of the 512 bytes blocksize everywhere in POSIX mode, ulimit supports two new limits: -b for max socket buffer size and -T for max number of threads.

Expansions

Brace Expansion

The brace expansion was tuned to provide expansion results with leading zeros when requesting a row of numbers.

See Brace expansion

Parameter Expansion

Methods to modify the case at expansion time have been added.

On expansion time you can modify the syntax by adding operators to the parameter name.

See Case modification on parameter expansion

Substring expansion

When using substring expansion on the positional parameters, a starting index of 0 now causes $0 to be prepended to the list (if the positional parameters are used). Before, this expansion started with $1:

# this should display $0 on Bash v4, $1 on Bash v3
echo ${@:0:1}

Globbing

There's a new shell option globstar . When enabled, Bash will perform recursive globbing on ** – this means it matches all directories and files from the current position in the filesystem, rather than only the current level.
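For example (a small sketch):

shopt -s globstar

# every *.sh file in the current directory and below, at any depth
for script in **/*.sh; do
    echo "$script"
done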

The new shell option dirspell enables spelling corrections on directory names during globbing.

See Pathname expansion (globbing)

Associative Arrays

Besides the classic method of integer indexed arrays, Bash 4 supports associative arrays.

An associative array is an array indexed by an arbitrary string, something like

declare -A ASSOC

ASSOC[First]="first element"
ASSOC[Hello]="second element"
ASSOC["Peter Pan"]="A weird guy"
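Continuing the example above, a small sketch of reading it back (the loop is illustrative, not from the original overview):

echo "${ASSOC[Hello]}"            # second element

# iterate over all keys and values
for key in "${!ASSOC[@]}"; do
    printf '%s -> %s\n' "$key" "${ASSOC[$key]}"
done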

See Arrays

Redirection

There is a new &>> redirection operator, which appends the standard output and standard error to the named file. This is the same as the good old >>FILE 2>&1 notation.

The parser now understands |& as a synonym for 2>&1 | , which redirects the standard error for a command through a pipe.

See Redirection

Interesting new shell variables

Variable Description
BASHPID contains the PID of the current shell (this is different than what $$ does!)
PROMPT_DIRTRIM specifies the max. level of unshortened pathname elements in the prompt
FUNCNEST control the maximum number of shell function recursions

See Special parameters and shell variables

Interesting new Shell Options

The mentioned shell options are off by default unless otherwise mentioned.

Option Description
checkjobs check for and report any running jobs at shell exit
compat* set compatibility modes for older shell versions (influences regular expression matching in [[ ... ]] )
dirspell enables spelling corrections on directory names during globbing
globstar enables recursive globbing with **
lastpipe (4.2) to execute the last command in a pipeline in the current environment

See List of shell options

Misc

[Jul 25, 2017] Keeping persistent history in bash

Jul 25, 2017 | eli.thegreenplace.net

June 11, 2013 at 19:27 Tags Linux , Software & Tools

Update (Jan 26, 2016): I posted a short update about my usage of persistent history.

For someone spending most of his time in front of a Linux terminal, history is very important. But traditional bash history has a number of limitations, especially when multiple terminals are involved (I sometimes have dozens open). Also it's not very good at preserving just the history you're interested in across reboots.

There are many approaches to improve the situation; here I want to discuss one I've been using very successfully in the past few months - a simple "persistent history" that keeps track of history across terminal instances, saving it into a dot-file in my home directory ( ~/.persistent_history ). All commands, from all terminal instances, are saved there, forever. I found this tremendously useful in my work - it saves me time almost every day.

Why does it go into a separate history and not the main one which is accessible by all the existing history manipulation tools? Because IMHO the latter is still worthwhile to be kept separate for the simple need of bringing up recent commands in a single terminal, without mixing up commands from other terminals. While the terminal is open, I want to press "Up" and get the previous command, even if I've executed 1000 other commands in other terminal instances in the meantime.

Persistent history is very easy to set up. Here's the relevant portion of my ~/.bashrc :

log_bash_persistent_history()
{
  [[
    $(history 1) =~ ^\ *[0-9]+\ +([^\ ]+\ [^\ ]+)\ +(.*)$
  ]]
  local date_part="${BASH_REMATCH[1]}"
  local command_part="${BASH_REMATCH[2]}"
  if [ "$command_part" != "$PERSISTENT_HISTORY_LAST" ]
  then
    echo $date_part "|" "$command_part" >> ~/.persistent_history
    export PERSISTENT_HISTORY_LAST="$command_part"
  fi
}

# Stuff to do on PROMPT_COMMAND
run_on_prompt_command()
{
    log_bash_persistent_history
}

PROMPT_COMMAND="run_on_prompt_command"

The format of the history file created by this is:

2013-06-09 17:48:11 | cat ~/.persistent_history
2013-06-09 17:49:17 | vi /home/eliben/.bashrc
2013-06-09 17:49:23 | ls

Note that an environment variable is used to avoid useless duplication (i.e. if I run ls twenty times in a row, it will only be recorded once).

OK, so we have ~/.persistent_history , how do we use it? First, I should say that it's not used very often, which kind of connects to the point I made earlier about separating it from the much higher-use regular command history. Sometimes I just look into the file with vi or tail , but mostly this alias does the trick for me:

alias phgrep='cat ~/.persistent_history|grep --color'

The alias name mirrors another alias I've been using for ages:

alias hgrep='history|grep --color'

Another tool for managing persistent history is a trimmer. I said earlier this file keeps the history "forever", which is a scary word - what if it grows too large? Well, first of all - worry not. At work my history file grew to about 2 MB after 3 months of heavy usage, and 2 MB is pretty small these days. Appending to the end of a file is very, very quick (I'm pretty sure it's a constant-time operation) so the size doesn't matter much. But trimming is easy:

tail -20000 ~/.persistent_history | tee ~/.persistent_history

Trims to the last 20000 lines. This should be sufficient for at least a couple of months of history, and your workflow should not really rely on more than that :-)

Finally, what's the use of having a tool like this without employing it to collect some useless statistics. Here's a histogram of the 15 most common commands I've used on my home machine's terminal over the past 3 months:

ls        : 865
vi        : 863
hg        : 741
cd        : 512
ll        : 289
pss       : 245
hst       : 200
python    : 168
make      : 167
git       : 148
time      : 94
python3   : 88
./python  : 88
hpu       : 82
cat       : 80

Some explanation: hst is an alias for hg st . hpu is an alias for hg pull -u . pss is my awesome pss tool , and is the reason why you don't see any calls to grep and find in the list. The proportion of Mercurial vs. git commands is likely to change in the very

[Jul 24, 2017] Bash history handling with multiple terminals

Add history -a to your PROMPT_COMMAND to preserve history from multiple terminals. This is a very neat trick !!!

Bash history handling with multiple terminals

The bash session that is saved is the one for the terminal that is closed the latest. If you want to save the commands for every session, you could use the trick explained here.

export PROMPT_COMMAND='history -a'

To quote the manpage: "If set, the value is executed as a command prior to issuing each primary prompt."

So every time my command has finished, it appends the unwritten history item to ~/.bash

ATTENTION: If you use multiple shell sessions and do not use this trick, you need to write the history manually to preserve it using the command history -a
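A minimal ~/.bashrc sketch that applies the trick while keeping any PROMPT_COMMAND you may already have set:

# append each finished command to the history file immediately,
# preserving whatever PROMPT_COMMAND was already defined
PROMPT_COMMAND="history -a${PROMPT_COMMAND:+; $PROMPT_COMMAND}"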

See also:

[Jul 20, 2017] These Guys Didnt Back Up Their Files, Now Look What Happened

Notable quotes:
"... Unfortunately, even today, people have not learned that lesson. Whether it's at work, at home, or talking with friends, I keep hearing stories of people losing hundreds to thousands of files, sometimes they lose data worth actual dollars in time and resources that were used to develop the information. ..."
"... "I lost all my files from my hard drive? help please? I did a project that took me 3 days and now i lost it, its powerpoint presentation, where can i look for it? its not there where i save it, thank you" ..."
"... Please someone help me I last week brought a Toshiba Satellite laptop running windows 7, to replace my blue screening Dell vista laptop. On plugged in my sumo external hard drive to copy over some much treasured photos and some of my (work – music/writing.) it said installing driver. it said completed I clicked on the hard drive and found a copy of my documents from the new laptop and nothing else. ..."
Jul 20, 2017 | www.makeuseof.com
Back in college, I used to work just about every day as a computer cluster consultant. I remember a month after getting promoted to a supervisor, I was in the process of training a new consultant in the library computer cluster. Suddenly, someone tapped me on the shoulder, and when I turned around I was confronted with a frantic graduate student – a 30-something year old man who I believe was Eastern European based on his accent – who was nearly in tears.

"Please need help – my document is all gone and disk stuck!" he said as he frantically pointed to his PC.

Now, right off the bat I could have told you three facts about the guy. One glance at the blue screen of the archaic DOS-based version of Wordperfect told me that – like most of the other graduate students at the time – he had not yet decided to upgrade to the newer, point-and-click style word processing software. For some reason, graduate students had become so accustomed to all of the keyboard hot-keys associated with typing in a DOS-like environment that they all refused to evolve into point-and-click users.

The second fact, gathered from a quick glance at his blank document screen and the sweat on his brow told me that he had not saved his document as he worked. The last fact, based on his thick accent, was that communicating the gravity of his situation wouldn't be easy. In fact, it was made even worse by his answer to my question when I asked him when he last saved.

"I wrote 30 pages."

Calculated out at about 600 words a page, that's 18000 words. Ouch.

Then he pointed at the disk drive. The floppy disk was stuck, and from the marks on the drive he had clearly tried to get it out with something like a paper clip. By the time I had carefully fished the torn and destroyed disk out of the drive, it was clear he'd never recover anything off of it. I asked him what was on it.

"My thesis."

I gulped. I asked him if he was serious. He was. I asked him if he'd made any backups. He hadn't.

Making Backups of Backups

If there is anything I learned during those early years of working with computers (and the people that use them), it was how critical it is to not only save important stuff, but also to save it in different places. I would back up floppy drives to those cool new zip drives as well as the local PC hard drive. Never, ever had a single copy of anything.

Unfortunately, even today, people have not learned that lesson. Whether it's at work, at home, or talking with friends, I keep hearing stories of people losing hundreds to thousands of files, sometimes they lose data worth actual dollars in time and resources that were used to develop the information.

To drive that lesson home, I wanted to share a collection of stories that I found around the Internet about some recent cases were people suffered that horrible fate – from thousands of files to entire drives worth of data completely lost. These are people where the only remaining option is to start running recovery software and praying, or in other cases paying thousands of dollars to a data recovery firm and hoping there's something to find.

Not Backing Up Projects

The first example comes from Yahoo Answers , where a user that only provided a "?" for a user name (out of embarrassment probably), posted:

"I lost all my files from my hard drive? help please? I did a project that took me 3 days and now i lost it, its powerpoint presentation, where can i look for it? its not there where i save it, thank you"

The folks answering immediately dove into suggesting that the person run recovery software, and one person suggested that the person run a search on the computer for *.ppt.

... ... ...

Doing Backups Wrong

Then, there's a scenario of actually trying to do a backup and doing it wrong, losing all of the files on the original drive. That was the case for the person who posted on Tech Support Forum , that after purchasing a brand new Toshiba Laptop and attempting to transfer old files from an external hard drive, inadvertently wiped the files on the hard drive.

Please someone help me I last week brought a Toshiba Satellite laptop running windows 7, to replace my blue screening Dell vista laptop. On plugged in my sumo external hard drive to copy over some much treasured photos and some of my (work – music/writing.) it said installing driver. it said completed I clicked on the hard drive and found a copy of my documents from the new laptop and nothing else.

While the description of the problem is a little broken, from the sound of it, the person thought they were backing up from one direction, while they were actually backing up in the other direction. At least in this case not all of the original files were deleted, but a majority were.

[Jul 20, 2017] Server Backup Procedures

Jul 20, 2017 | www.tldp.org
.1.1. Backing up with ``tar'':

If you decide to use ``tar'' as your backup solution, you should probably take the time to get to know the various command-line options that are available; type " man tar " for a comprehensive list. You will also need to know how to access the appropriate backup media; although all devices are treated like files in the Unix world, if you are writing to a character device such as a tape, the name of the "file" is the device name itself (eg. `` /dev/nst0 '' for a SCSI-based tape drive).

The following command will perform a backup of your entire Linux system onto the `` /archive/ '' file system, with the exception of the `` /proc/ '' pseudo-filesystem, any mounted file systems in `` /mnt/ '', the `` /archive/ '' file system (no sense backing up our backup sets!), as well as Squid's rather large cache files (which are, in my opinion, a waste of backup media and unnecessary to back up):


tar -zcvpf /archive/full-backup-`date '+%d-%B-%Y'`.tar.gz \
    --directory / --exclude=mnt --exclude=proc --exclude=var/spool/squid .


Don't be intimidated by the length of the command above! As we break it down into its components, you will see the beauty of this powerful utility.

The above command specifies the options `` z '' (compress; the backup data will be compressed with ``gzip''), `` c '' (create; an archive file is being created), `` v '' (verbose; display a list of files as they get backed up), `` p '' (preserve permissions; file protection information will be "remembered" so they can be restored). The `` f '' (file) option states that the very next argument will be the name of the archive file (or device) being written. Notice how a filename which contains the current date is derived, simply by enclosing the ``date'' command between two back-quote characters. A common naming convention is to add a `` tar '' suffix for non-compressed archives, and a `` tar.gz '' suffix for compressed ones.

The `` --directory '' option tells tar to first switch to the following directory path (the `` / '' directory in this example) prior to starting the backup. The `` --exclude '' options tell tar not to bother backing up the specified directories or files. Finally, the `` . '' character tells tar that it should back up everything in the current directory.

Note: Note: It is important to realize that the options to tar are cAsE-sEnSiTiVe! In addition, most of the options can be specified as either single mnemonic characters (eg. ``f''), or by their easier-to-memorize full option names (eg. ``file''). The mnemonic representations are identified by prefixing them with a ``-'' character, while the full names are prefixed with two such characters. Again, see the "man" pages for information on using tar.

Another example, this time writing only the specified file systems (as opposed to writing them all with exceptions as demonstrated in the example above) onto a SCSI tape drive follows:


tar -cvpf /dev/nst0 --label="Backup set created on `date '+%d-%B-%Y'`." \
    --directory / --exclude=var/spool/squid etc home usr/local var/spool


In the above command, notice that the `` z '' (compress) option is not used. I strongly recommend against writing compressed data to tape, because if data on a portion of the tape becomes corrupted, you will lose your entire backup set! However, archive files stored without compression have a very high recoverability for non-affected files, even if portions of the tape archive are corrupted.

Because the tape drive is a character device, it is not possible to specify an actual file name. Therefore, the file name used as an argument to tar is simply the name of the device, `` /dev/nst0 '', the first tape device on the SCSI bus.

Note: Note: The `` /dev/nst0 '' device does not rewind after the backup set is written; therefore it is possible to write multiple sets on one tape. (You may also refer to the device as `` /dev/st0 '', in which case the tape is automatically rewound after the backup set is written.)

Since we aren't able to specify a filename for the backup set, the `` --label '' option can be used to write some information about the backup set into the archive file itself.

Finally, only the files contained in the `` /etc/ '', `` /home/ '', `` /usr/local '', and `` /var/spool/ '' (with the exception of Squid's cache data files) are written to the tape.

When working with tapes, you can use the following commands to rewind, and then eject your tape:


mt -f /dev/nst0 rewind



mt -f /dev/nst0 offline


Tip: Tip: You will notice that leading `` / '' (slash) characters are stripped by tar when an archive file is created. This is tar's default mode of operation, and it is intended to protect you from overwriting critical files with older versions of those files, should you mistakenly recover the wrong file(s) in a restore operation. If you really dislike this behavior (remember, its a feature !) you can specify the `` --absolute-paths '' option to tar, which will preserve the leading slashes. However, I don't recommend doing so, as it is Dangerous !
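For completeness, restoring from such an archive is the reverse operation; a minimal sketch (the archive name is just an example following the naming convention above):

# list the contents of a backup set first
tar -ztvf /archive/full-backup-01-January-2000.tar.gz

# restore everything relative to /, preserving permissions
cd /
tar -zxvpf /archive/full-backup-01-January-2000.tar.gz

# or restore a single file into the current directory
tar -zxvpf /archive/full-backup-01-January-2000.tar.gz etc/profile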

[Jul 18, 2017] Can I copy my Ubuntu OS off my hard drive to a USB stick and boot from that stick with all my programs

user323419
Yes, this is completely possible. First and foremost, you will need at least 2 USB ports available, or 1 USB port and 1 CD-Drive.

You start by booting into a Live-CD version of Ubuntu with your hard-drive where it is and the target device plugged into USB. Mount your internal drive and target USB to any paths you like.

Open up a terminal and enter the following commands:

tar -cpf - --xattrs -C /path/to/internal . | tar -xpf - --xattrs -C /path/to/target/usb

You can also look into doing this through a live installation and a utility called CloneZilla, but I am unsure of exactly how to use CloneZilla. The above method is what I used to copy my 128GB hard-drive's installation of Ubuntu to a 64GB flash drive.

2) Clone again the internal or external drive in its entirety to another drive:

Use the "Clonezilla" utility, mentioned in the very last paragraph of my original answer, to clone the original internal drive to another external drive to make two such external bootable drives to keep track of.

[Jul 16, 2017] Bash prompt tips and tricks

Jul 07, 2017 | opensource.com

Anyone who has started a terminal in Linux is familiar with the default Bash prompt:


[user@$host ~]$

But did you know is that this is completely customizable and can contain some very useful information? Here are a few hidden treasures you can use to customize your Bash prompt.

How is the Bash prompt set?

The Bash prompt is set by the environment variable PS1 (Prompt String 1), which is used for interactive shell prompts. There is also a PS2 variable, which is used when more input is required to complete a Bash command.

[dneary@dhcp-41-137 ~]$ export PS1="[Linux Rulez]$ "
[Linux Rulez]$ export PS2="... "
[Linux Rulez]$ if true; then
... echo "Success!"
... fi
Success!

Where is the value of PS1 set?

PS1 is a regular environment variable.

The system default value is set in /etc/bashrc . On my system, the default prompt is set with this line:


[
"
$PS1
"
 = 
"\\s-\
\v
\\
\$
 "
]
&&
PS1
=
"[\u@\h \W]\
\$
 "

This tests whether the value of PS1 is \s-\v$ (the system default value), and if it is, it sets PS1 to the value [\u@\h \W]\\$ .

If you want to see a custom prompt, however, you should not be editing /etc/bashrc . You should instead add it to .bashrc in your Home directory.

What do \u, \h, \W, \s, and \v mean?

In the PROMPTING section of man bash , you can find a description of all the special characters in PS1 and PS2 . The following are the default options:

What other special strings can I use in the prompts?

There are a number of special strings that can be useful.

There are many other special characters; you can see the full list in the PROMPTING section of the Bash man page .

Multi-line prompts

If you use longer prompts (say if you include \H or \w or a full date-time ), you may want to break things over two lines. Here is an example of a multi-line prompt, with the date, time, and current working directory on one line, and username @hostname on the second line:


PS1="\D{%c} \w\n[\u@\H]$ "

Are there any other interesting things I can do?

One thing people occasionally do is create colorful prompts. While I find them annoying and distracting, you may like them. For example, to change the date-time above to display in red text, the directory in cyan, and your username on a yellow background, you could try this:

PS1="\[\e[31m\]\D{%c}\[\e[0m\] \[\e[36m\]\w\[\e[0m\] \n [\[\e[1;43m\]\u\[\e[0m\]@\H]$ "

To dissect this:

\[\e[31m\] switches the text colour to red for the date-time, and \[\e[0m\] resets it afterwards
\[\e[36m\] switches the text colour to cyan for the working directory
\[\e[1;43m\] sets a yellow background for the username, again reset with \[\e[0m\]
The \[ and \] markers tell Bash that the enclosed characters are non-printing, so line wrapping still works correctly

You can find more colors and tips in the Bash prompt HOWTO . You can even make text inverted or blinking! Why on earth anyone would want to do this, I don't know. But you can!

What are your favorite Bash prompt customizations? And which ones have you seen that drive you crazy? Let me know in the comments.

Ben Cotton on 07 Jul 2017: I really like the Bash-Beautify setup by Chris Albrecht:
https://github.com/KeyboardCowboy/Bash-Beautify/blob/master/.bash_beautify

When you're in a version-controlled directory, it includes the VCS information (e.g. the git branch and status), which is really handy if you do development.

Victorhck on 07 Jul 2017: An easy drag and drop interface to build your own .bashrc/PS1 configuration

http://bashrcgenerator.com/

've phun!

How Docker Is Growing Its Container Business (Apr 21, 2017, 07:00)
VIDEO: Ben Golub, CEO of Docker Inc., discusses the business of containers and where Docker is headed.

Understanding Shell Initialization Files and User Profiles in Linux (Apr 22, 2017, 10:00)
tecmint: Learn about shell initialization files in relation to user profiles for local user management in Linux.

Cockpit An Easy Way to Administer Multiple Remote Linux Servers via a Web Browser (Apr 23, 2017, 18:00)
Cockpit is a free and open source web-based system management tool where users can easily monitor and manage multiple remote Linux servers.

The Story of Getting SSH Port 22 (Apr 24, 2017, 13:00)
It's no coincidence that the SSH protocol got assigned to port 22.

How To Suspend A Process And Resume It Later In Linux (Apr 24, 2017, 11:00)
This brief tutorial describes how to suspend or pause a running process and resume it later in Unix-like operating systems.

ShellCheck -A Tool That Shows Warnings and Suggestions for Shell Scripts (Apr 25, 2017, 06:00)
tecmint: ShellCheck is a static analysis tool that shows warnings and suggestions concerning bad code in bash/sh shell scripts.

Quick guide for Linux check disk space (Apr 26, 2017, 14:00)
Do you know how much space is left on your Linux system?

[Jul 16, 2017] A Collection Of Useful BASH Scripts For Heavy Commandline Users - OSTechNix

Notable quotes:
"... Provides cheat-sheets for various Linux commands ..."
Jul 16, 2017 | www.ostechnix.com
Today, I have stumbled upon a collection of useful BASH scripts for heavy commandline users. These scripts, known as Bash-Snippets , might be quite helpful for those who live in Terminal all day. Want to check the weather of a place where you live? This script will do that for you. Wondering what the stock prices are? You can run the script that displays the current details of a stock. Feel bored? You can watch some YouTube videos. All from the command line. You don't need to install any memory-hungry GUI applications.

Bash-Snippets provides the following 12 useful tools:

  1. currency – Currency converter.
  2. stocks – Provides certain Stock details.
  3. weather – Displays weather details of your place.
  4. crypt – Encrypt and decrypt files.
  5. movies – Search and display a movie details.
  6. taste – Recommendation engine that provides three similar items like the supplied item (The items can be books, music, artists, movies, and games etc).
  7. short – URL Shortner
  8. geo – Provides the details of wan, lan, router, dns, mac, and ip.
  9. cheat – Provides cheat-sheets for various Linux commands .
  10. ytview – Watch YouTube from Terminal.
  11. cloudup – A tool to backup your GitHub repositories to bitbucket.
  12. qrify – Turns the given string into a qr code.
Bash-Snippets – A Collection Of Useful BASH Scripts For Heavy Commandline Users

Installation

You can install these scripts on any OS that supports BASH.

First, clone the GIT repository using command:

git clone https://github.com/alexanderepstein/Bash-Snippets

Sample output would be:

Cloning into 'Bash-Snippets'...
remote: Counting objects: 1103, done.
remote: Compressing objects: 100% (45/45), done.
remote: Total 1103 (delta 40), reused 55 (delta 23), pack-reused 1029
Receiving objects: 100% (1103/1103), 1.92 MiB | 564.00 KiB/s, done.
Resolving deltas: 100% (722/722), done.

Go to the cloned directory:

cd Bash-Snippets/

Git checkout to the latest stable release:

git checkout v1.11.0

Finally, install the Bash-Snippets using command:

sudo ./install.sh

This will ask you which scripts to install. Just type Y and press ENTER key to install the respective script. If you don't want to install a particular script, type N and hit ENTER.

[Jul 16, 2017] Classifier by classifying them into folders of Xls, Docs, .png, .jpeg, vidoe, music, pdfs, images, ISO, etc.

Jul 16, 2017 | github.com
If I'm not wrong, our download folder is usually pretty sloppy compared with other folders, because most of the downloaded files are sitting over there and we can't delete them blindly, which can lead to losing some important files. It's also not practical to create a bunch of folders based on the file types and move the appropriate files into them manually.

So, what can we do to avoid this? It's better to organize files with the help of classifier; later we can delete unnecessary files easily. The Classifier app was written in Python.

How to organize a directory? Simply navigate to the corresponding directory where you want to organize/classify your files and run the classifier command; it will take a few minutes or more depending on the number of files in the directory.

Make a note: there is no undo option if you want to go back, so finalize before running classifier in a directory. Also, it won't move folders.

Install Classifier in Linux through pip

pip is a recommended tool for installing Python packages in Linux. Use pip command instead of package manager to get latest build.

For Debian based systems.

$ sudo apt-get install python-pip

For RHEL/CentOS based systems.

$ sudo yum install python-pip

For Fedora

$ sudo dnf install python-pip

For openSUSE

$ sudo zypper install python-pip

For Arch Linux based systems

$ sudo pacman -S python-pip

Finally run the pip tool to install Classifier on Linux.

$ sudo pip install classifier
Organize pattern files into specific folders

First i will go with default option which will organize pattern files into specific folders. This will create bunch of directories based on the file types and move them into specific folders.

See my directory, how its looking now (Before run classifier command).

$ pwd
/home/magi/classifier

$ ls -lh
total 139M
-rw-r--r-- 1 magi magi 4.5M Mar 21 21:21 Aaluma_Doluma.mp3
-rw-r--r-- 1 magi magi  26K Mar 21 21:12 battery-monitor_0.4-xenial_all.deb
-rw-r--r-- 1 magi magi  24K Mar 21 21:12 buku-command-line-bookmark-manager-linux.png
-rw-r--r-- 1 magi magi    0 Mar 21 21:43 config.php
-rw-r--r-- 1 magi magi   25 Mar 21 21:13 core.py
-rw-r--r-- 1 magi magi 101K Mar 21 21:12 drawing.svg
-rw-r--r-- 1 magi magi  86M Mar 21 21:12 go1.8.linux-amd64.tar.gz
-rw-r--r-- 1 magi magi   28 Mar 21 21:13 index.html
-rw-r--r-- 1 magi magi   27 Mar 21 21:13 index.php
-rw-r--r-- 1 magi magi  48M Apr 30  2016 Kabali Tamil Movie _ Official Teaser _ Rajinikanth _ Radhika Apte _ Pa Ranjith-9mdJV5-eias.webm
-rw-r--r-- 1 magi magi   28 Mar 21 21:12 magi1.txt
-rw-r--r-- 1 magi magi   66 Mar 21 21:12 ppa.py
-rw-r--r-- 1 magi magi 1.1K Mar 21 21:12 Release.html
-rw-r--r-- 1 magi magi  45K Mar 21 21:12 v0.4.zip

Navigate to corresponding directory where you want to organize files, then run classifier command without any option to achieve it.

$ classifier
Scanning Files
Done!

See the Directory look, after run classifier command

$ ls -lh
total 44K
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:28 Archives
-rw-r--r-- 1 magi magi    0 Mar 21 21:43 config.php
-rw-r--r-- 1 magi magi   25 Mar 21 21:13 core.py
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:28 DEBPackages
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:28 Documents
-rw-r--r-- 1 magi magi   28 Mar 21 21:13 index.html
-rw-r--r-- 1 magi magi   27 Mar 21 21:13 index.php
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:28 Music
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:28 Pictures
-rw-r--r-- 1 magi magi   66 Mar 21 21:12 ppa.py
-rw-r--r-- 1 magi magi 1.1K Mar 21 21:12 Release.html
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:28 Videos

Make a note, this will organize only general category files such as docs, audio, video, pictures, archives, etc. and won't organize .py, .html, .php, etc.

Classify specific file types into specific folder

To Classify specific file types into specific folder, just add -st (mention the file type) & -sf (folder name) followed by classifier command.

For best understanding, i'm going to move .py , .html & .php files into Development folder. See the exact command to achieve it.

$ classifier -st .py .html .php -sf "Development" 
Scanning Files
Done!

If the folder doesn't exit, it will create the new one and organize the files into that. See the following output. It created Development directory and moved all the files inside the directory.

$ ls -lh
total 28K
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:28 Archives
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:28 DEBPackages
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:51 Development
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:28 Documents
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:28 Music
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:28 Pictures
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:28 Videos

For better clarification, i have listed Development folder files.

$ ls -lh Development/
total 12K
-rw-r--r-- 1 magi magi  0 Mar 21 21:43 config.php
-rw-r--r-- 1 magi magi 25 Mar 21 21:13 core.py
-rw-r--r-- 1 magi magi 28 Mar 21 21:13 index.html
-rw-r--r-- 1 magi magi 27 Mar 21 21:13 index.php
-rw-r--r-- 1 magi magi  0 Mar 21 21:43 ppa.py
-rw-r--r-- 1 magi magi  0 Mar 21 21:43 Release.html

To Organize files by Date. It will organize current directory files based on the date.

$ classifier -dt


To save organized files in different location, add -d (source directory) & -o (destination directory) followed by classifier command.

$  classifier -d /home/magi/organizer -o /home/magi/2g

[Jul 10, 2017] Crowdsourcing, Open Data and Precarious Labour by Allana Mayer Model View Culture

Notable quotes:
"... Photo CC-BY Mace Ojala. ..."
"... Photo CC-BY Samantha Marx. ..."
Jul 10, 2017 | modelviewculture.com
Crowdsourcing, Open Data and Precarious Labour Crowdsourcing and microtransactions are two halves of the same coin: they both mark new stages in the continuing devaluation of labour. by Allana Mayer on February 24th, 2016 The cultural heritage industries (libraries, archives, museums, and galleries, often collectively called GLAMs) like to consider themselves the tech industry's little siblings. We're working to develop things like Linked Open Data, a decentralized network of collaboratively-improved descriptive metadata; we're building our own open-source tech to make our catalogues and collections more useful; we're pushing scholarly publishing out from behind paywalls and into open-access platforms; we're driving innovations in accessible tech.

We're only different in a few ways. One, we're a distinctly feminized set of professions , which comes with a large set of internally- and externally-imposed assumptions. Two, we rely very heavily on volunteer labour, and not just in the internship-and-exposure vein : often retirees and non-primary wage-earners are the people we "couldn't do without." Three, the underlying narrative of a "helping" profession (essentially a social service) can push us to ignore the first two distinctions, while driving ourselves to perform more and expect less.

I suppose the major way we're different is that tech doesn't acknowledge us, treat us with respect, build things for us, or partner with us, unless they need a philanthropic opportunity. Although, when some ingenue autodidact bootstraps himself up to a billion-dollar IPO, there's a good chance he's been educating himself using our free resources. Regardless, I imagine a few of the issues true in GLAMs are also true in tech culture, especially in regards to labour and how it's compensated.

Crowdsourcing

Notecards in a filing drawer: old-fashioned means of recording metadata.

Photo CC-BY Mace Ojala.

Here's an example. One of the latest trends is crowdsourcing: admitting we don't have all the answers, and letting users suggest some metadata for our records. (Not to be confused with crowdfunding.) The biggest example of this is Flickr Commons: the Library of Congress partnered with Yahoo! to publish thousands of images that had somehow ended up in the LOC's collection without identifying information. Flickr users were invited to tag pictures with their own keywords or suggest descriptions using comments.

Many orphaned works (content whose copyright status is unclear) found their way conclusively out into the public domain (or back into copyright) this way. Other popular crowdsourcing models include gamification , transcription of handwritten documents (which can't be done with Optical Character Recognition), or proofreading OCR output on digitized texts. The most-discussed side benefits of such projects include the PR campaign that raises general awareness about the organization, and a "lifting of the curtain" on our descriptive mechanisms.

The problem with crowdsourcing is that it's been conclusively proven not to function in the way we imagine it does: a handful of users end up contributing massive amounts of labour, while the majority of those signed up might do a few tasks and then disappear. Seven users in the "Transcribe Bentham" project contributed to 70% of the manuscripts completed; 10 "power-taggers" did the lion's share of the Flickr Commons' image-identification work. The function of the distributed digital model of volunteerism is that those users won't be compensated, even though many came to regard their accomplishments as full-time jobs .

It's not what you're thinking: many of these contributors already had full-time jobs , likely ones that allowed them time to mess around on the Internet during working hours. Many were subject-matter experts, such as the vintage-machinery hobbyist who created entire datasets of machine-specific terminology in the form of image tags. (By the way, we have a cute name for this: "folksonomy," a user-built taxonomy. Nothing like reducing unpaid labour to a deeply colonial ascription of communalism.) In this way, we don't have precisely the free-labour-for-exposure/project-experience problem the tech industry has ; it's not our internships that are the problem. We've moved past that, treating even our volunteer labour as a series of microtransactions. Nobody's getting even the dubious benefit of job-shadowing, first-hand looks at business practices, or networking. We've completely obfuscated our own means of production. People who submit metadata or transcriptions don't even have a means of seeing how the institution reviews and ingests their work, and often, to see how their work ultimately benefits the public.

All this really says to me is: we could've hired subject experts to consult, and given them a living wage to do so, instead of building platforms to dehumanize labour. It also means our systems rely on privilege , and will undoubtedly contain and promote content with a privileged bias, as Wikipedia does. (And hey, even Wikipedia contributions can sometimes result in paid Wikipedian-in-Residence jobs.)

For example, the Library of Congress's classification and subject headings have long collected books about the genocide of First Nations peoples during the colonization of North America under terms such as "first contact," "discovery and exploration," "race relations," and "government relations." No "subjugation," "cultural genocide," "extermination," "abuse," or even "racism" in sight. Also, the term "homosexuality" redirected people to "sexual perversion" up until the 1970s. Our patrons are disrespected and marginalized in the very organization of our knowledge.

If libraries continue on with their veneer of passive and objective authorities that offer free access to all knowledge, this underlying bias will continue to propagate subconsciously. As in Mechanical Turk , being "slightly more diverse than we used to be" doesn't get us any points, nor does it assure anyone that our labour isn't coming from countries with long-exploited workers.

Labor and Compensation

Rows and rows of books in a library, on vast curving shelves.

Photo CC-BY Samantha Marx.

I also want to draw parallels between the free labour of crowdsourcing and the free labour offered in civic hackathons or open-data contests. Specifically, I'd argue that open-data projects are less ( but still definitely ) abusive to their volunteers, because at least those volunteers have a portfolio object or other deliverable to show for their work. They often work in groups and get to network, whereas heritage crowdsourcers work in isolation.

There's also the potential for converting open-data projects to something monetizable: for example, a Toronto-specific bike-route app can easily be reconfigured for other cities and sold; while the Toronto version stays free under the terms of the civic initiative, freemium options can be added. The volunteers who supply thousands of transcriptions or tags can't usually download their own datasets and convert them into something portfolio-worthy, let alone sellable. Those data are useless without their digital objects, and those digital objects still belong to the museum or library.

Crowdsourcing and microtransactions are two halves of the same coin: they both mark new stages in the continuing devaluation of labour, and they both enable misuse and abuse of people who increasingly find themselves with few alternatives. If we're not offering these people jobs, reference letters, training, performance reviews, a "foot in the door" (cronyist as that is), or even acknowledgement by name, what impetus do they have to contribute? As with Wikipedia, I think the intrinsic motivation for many people to supply us with free labour is one of two things: either they love being right, or they've been convinced by the feel-good rhetoric that they're adding to the net good of the world. Of course, trained librarians, archivists, and museum workers have fallen sway to the conflation of labour and identity , too, but we expect to be paid for it.

As in tech, stereotypes and PR obfuscate labour in cultural heritage. For tech, an entrepreneurial spirit and a tendency to buck traditional thinking; for GLAMs, a passion for public service and opening up access to treasures ancient and modern. Of course, tech celebrates the autodidactic dropout; in GLAMs, you need a masters. Period. Maybe two. And entry-level jobs in GLAMs require one or more years of experience, across the board.

When library and archives students go into massive student debt, they're rarely apprised of the constant shortfall of funding for government-agency positions, nor do they get told how much work is done by volunteers (and, consequently, how much of the job is monitoring and babysitting said volunteers). And they're not trained with enough technological competency to sysadmin anything , let alone build a platform that pulls crowdsourced data into an authoritative record. The costs of commissioning these platforms aren't yet being made public, but I bet paying subject experts for their hourly labour would be cheaper.

Solutions

I've tried my hand at many of the crowdsourcing and gamifying interfaces I'm here to critique. I've never been caught up in the "passion" ascribed to those super-volunteers who deliver huge amounts of work. But I can tally up other ways I contribute to this problem: I volunteer for scholarly tasks such as peer-reviewing, committee work, and travelling on my own dime to present. I did an unpaid internship without receiving class credit. I've put my research behind a paywall. I'm complicit in the established practices of the industry, which sits uneasily between academic and social work: neither of those spheres have ever been profit-generators, and have always used their codified altruism as ways to finagle more labour for less money.

It's easy to suggest that we outlaw crowdsourced volunteer work, and outlaw microtransactions on Fiverr and MTurk, just as the easy answer would be to outlaw Uber and Lyft for divorcing administration from labour standards. Ideally, we'd make it illegal for technology to wade between workers and fair compensation.

But that's not going to happen, so we need alternatives. Just as unpaid internships are being eliminated ad-hoc through corporate pledges, rather than being prohibited region-by-region, we need pledges from cultural-heritage institutions that they will pay for labour where possible, and offer concrete incentives to volunteer or intern otherwise. Budgets may be shrinking, but that's no reason not to compensate people at least through resume and portfolio entries. The best template we've got so far is the Society of American Archivists' volunteer best practices , which includes "adequate training and supervision" provisions, which I interpret to mean outlawing microtransactions entirely. The Citizen Science Alliance , similarly, insists on "concrete outcomes" for its crowdsourcing projects, to " never waste the time of volunteers ." It's vague, but it's something.

We can boycott and publicly shame those organizations that promote these projects as fun ways to volunteer, and lobby them to instead seek out subject experts for more significant collaboration. We've seen a few efforts to shame job-posters for unicorn requirements and pathetic salaries, but they've flagged without productive alternatives to blind rage.

There are plenty more band-aid solutions. Groups like Shatter The Ceiling offer cash to women of colour who take unpaid internships. GLAM-specific internship awards are relatively common , but could: be bigger, focus on diverse applicants who need extra support, and have eligibility requirements that don't exclude people who most need them (such as part-time students, who are often working full-time to put themselves through school). Better yet, we can build a tech platform that enables paid work, or at least meaningful volunteer projects. We need nationalized or non-profit recruiting systems (a digital "volunteer bureau") that matches subject experts with the institutions that need their help. One that doesn't take a cut from every transaction, or reinforce power imbalances, the way Uber does. GLAMs might even find ways to combine projects, so that one person's work can benefit multiple institutions.

GLAMs could use plenty of other help, too: feedback from UX designers on our catalogue interfaces, helpful tools , customization of our vendor platforms, even turning libraries into Tor relays or exits . The open-source community seems to be looking for ways to contribute meaningful volunteer labour to grateful non-profits; this would be a good start.

What's most important is that cultural heritage preserves the ostensible benefits of crowdsourcing – opening our collections and processes up for scrutiny, and admitting the limits of our knowledge – without the exploitative labour practices. Just like in tech, a few more glimpses behind the curtain wouldn't go astray. But it would require deeper cultural shifts, not least in the self-perceptions of GLAM workers: away from overprotective stewards of information, constantly threatened by dwindling budgets and unfamiliar technologies, and towards facilitators, participants in the communities whose histories we hold.

Tech Workers Please Stop Defending Tech Companies by Shanley Kane Model View Culture

[Jul 06, 2017] Linux tip: Bash test and comparison functions

Jul 06, 2017 | www.ibm.com

Demystify test, [, [[, ((, and if-then-else

Ian Shields
Published on February 20, 2007

[Jul 05, 2017] Linux tip: Bash parameters and parameter expansions by Ian Shields

Definitely a gifted author!
economistsview.typepad.com

Do you sometimes wonder how to use parameters with your scripts, and how to pass them to internal functions or other scripts? Do you need to do simple validity tests on parameters or options, or perform simple extraction and replacement operations on the parameter strings? This tip helps you with parameter use and the various parameter expansions available in the bash shell.

[Jul 02, 2017] The Details About the CIAs Deal With Amazon by Frank Konkel

Jul 17, 2014 | www.theatlantic.com

The intelligence community is about to get the equivalent of an adrenaline shot to the chest. This summer, a $600 million computing cloud developed by Amazon Web Services for the Central Intelligence Agency over the past year will begin servicing all 17 agencies that make up the intelligence community. If the technology plays out as officials envision, it will usher in a new era of cooperation and coordination, allowing agencies to share information and services much more easily and avoid the kind of intelligence gaps that preceded the Sept. 11, 2001, terrorist attacks.

For the first time, agencies within the intelligence community will be able to order a variety of on-demand computing and analytic services from the CIA and National Security Agency. What's more, they'll only pay for what they use.

The vision was first outlined in the Intelligence Community Information Technology Enterprise plan championed by Director of National Intelligence James Clapper and IC Chief Information Officer Al Tarasiuk almost three years ago. Cloud computing is one of the core components of the strategy to help the IC discover, access and share critical information in an era of seemingly infinite data.

For the risk-averse intelligence community, the decision to go with a commercial cloud vendor is a radical departure from business as usual.

In 2011, while private companies were consolidating data centers in favor of the cloud and some civilian agencies began flirting with cloud variants like email as a service, a sometimes contentious debate among the intelligence community's leadership took place.

... ... ...

The government was spending more money on information technology within the IC than ever before. IT spending reached $8 billion in 2013, according to budget documents leaked by former NSA contractor Edward Snowden. The CIA and other agencies feasibly could have spent billions of dollars standing up their own cloud infrastructure without raising many eyebrows in Congress, but the decision to purchase a single commercial solution came down primarily to two factors.

"What we were really looking at was time to mission and innovation," the former intelligence official said. "The goal was, 'Can we act like a large enterprise in the corporate world and buy the thing that we don't have, can we catch up to the commercial cycle? Anybody can build a data center, but could we purchase something more?

"We decided we needed to buy innovation," the former intelligence official said.

A Groundbreaking Deal

... ... ...

The Amazon-built cloud will operate behind the IC's firewall, or more simply: It's a public cloud built on private premises.

Intelligence agencies will be able to host applications or order a variety of on-demand services like storage, computing and analytics. True to the National Institute of Standards and Technology definition of cloud computing, the IC cloud scales up or down to meet the need.

In that regard, customers will pay only for services they actually use, which is expected to generate massive savings for the IC.

"We see this as a tremendous opportunity to sharpen our focus and to be very efficient," Wolfe told an audience at AWS' annual nonprofit and government symposium in Washington. "We hope to get speed and scale out of the cloud, and a tremendous amount of efficiency in terms of folks traditionally using IT now using it in a cost-recovery way."

... ... ...

For several years there hasn't been even a close challenger to AWS. Gartner's 2014 quadrant shows that AWS captures 83 percent of the cloud computing infrastructure market.

In the combined cloud markets for infrastructure and platform services, hybrid and private clouds-worth a collective $131 billion at the end of 2013-Amazon's revenue grew 67 percent in the first quarter of 2014, according to Gartner.

While the public sector hasn't been as quick to capitalize on cloud computing as the private sector, government spending on cloud technologies is beginning to jump.

Researchers at IDC estimate federal private cloud spending will reach $1.7 billion in 2014, and $7.7 billion by 2017. In other industries, software services are considered the leading cloud technology, but in the government that honor goes to infrastructure services, which IDC expects to reach $5.4 billion in 2017.

In addition to its $600 million deal with the CIA, Amazon Web Services also does business with NASA, the Food and Drug Administration and the Centers for Disease Control and Prevention. Most recently, the Obama Administration tapped AWS to host portions of HealthCare.gov.

[Jun 28, 2017] PBS Pro Tutorial by Krishna Arutwar

www.nakedcapitalism.com
What is PBS Pro?

Portable Batch System (PBS) is software used in cluster computing to schedule jobs on multiple nodes. PBS was started as a contract project by NASA and is available in three different versions:

  1. Torque: Terascale Open-source Resource and QUEue Manager (Torque) is derived from OpenPBS. It is developed and maintained by Adaptive Computing Enterprises. It is used as a distributed resource manager and performs well when integrated with the Maui cluster scheduler.
  2. PBS Professional (PBS Pro): the commercial version of PBS, offered by Altair Engineering.
  3. OpenPBS: the open-source version, released in 1998 and developed by NASA. It is not actively developed.

In this article we concentrate on a tutorial for PBS Pro, which is similar in many respects to Torque.

PBS contains three basic units: the server, MoM (execution host), and scheduler.

  1. Server: It is the heart of PBS, with an executable named "pbs_server". It uses the IP network to communicate with the MoMs. The PBS server creates batch jobs and modifies jobs requested from different MoMs. It keeps track of all resources available and assigned in the PBS complex across the MoMs. It also monitors the PBS license for jobs; if your license expires it will throw an error.
  2. Scheduler: The PBS scheduler uses various algorithms to decide when a job should be executed and on which node or vnode, using the resource details supplied by the server. Its executable is "pbs_sched".
  3. MoM: MoM is the mother of all executing jobs, with the executable "pbs_mom". When a MoM gets a job from the server it actually executes that job on the host. Each node must have a MoM running in order to participate in execution.

Installation and Setting up of environment (cluster with multiple nodes)

Extract the compressed PBS Pro software and go to the path of the extracted folder; it contains an "INSTALL" file. Make that file executable, for example with "chmod +x ./INSTALL", and then run it. It will ask for the "execution directory" where you want to store the executables (such as qsub, pbsnodes, qdel etc.) used for different PBS operations, and the "home directory" which contains the configuration files. Keep both as default for simplicity. There are three kinds of installation available:

1) Server node: PBS server, scheduler, MoM and the commands are installed on this node. The PBS server keeps track of all execution MoMs present in the cluster and schedules jobs on those execution nodes. As MoM and the commands are also installed on the server node, it can be used to submit and execute jobs. 2) Execution node: This type installs MoM and the commands. These nodes are added as available execution nodes in the cluster. They are also allowed to submit jobs to the server, with specific permission granted by the server as we will see below. They are not involved in scheduling. This kind of installation asks for the PBS server that is used to submit jobs, get the status of jobs, etc. 3) Client node: These are nodes that are only allowed to submit PBS jobs to the server (with specific permission granted by the server) and to see the status of jobs. They are not involved in execution or scheduling.

Creating vnode in PBS Pro:

We can create multiple vnodes on a single node, each containing some part of that node's resources, and execute jobs on these vnodes with the allocated resources. Vnodes are created with the qmgr command, the command-line interface to the PBS server. We can use the command given below to create vnodes with qmgr.

Qmgr:
create node Vnode1,Vnode2 resources_available.ncpus=8, resources_available.mem=10gb, 
resources_available.ngpus=1, sharing=default_excl 
The command above creates two vnodes named Vnode1 and Vnode2, each with 8 CPU cores, 10gb of memory and 1 GPU, with sharing mode default_excl, which means such a vnode executes exclusively one job at a time regardless of how many resources are free. The sharing mode can instead be default_shared, which means any number of jobs can run on that vnode until all resources are busy. All the attributes that can be used when creating vnodes are listed in the PBS Pro reference guide.

You can also create a file in the "/var/spool/PBS/mom_priv/config.d/" folder with any name you want (I prefer hostname-vnode), with a sample given below. PBS reads all files in this folder, even temporary files ending with (~), and replaces the configuration for the same vnode, so delete unnecessary files to get a proper vnode configuration.

e.g.

$configversion 2
hostname: resources_available.ncpus=0
hostname: resources_available.mem=0
hostname: resources_available.ngpus=0
hostname[0]: resources_available.ncpus=8
hostname[0]: resources_available.mem=16gb
hostname[0]: resources_available.ngpus=1
hostname[0]: sharing=default_excl
hostname[1]: resources_available.ncpus=8
hostname[1]: resources_available.mem=16gb
hostname[1]: resources_available.ngpus=1
hostname[1]: sharing=default_excl
hostname[2]: resources_available.ncpus=8
hostname[2]: resources_available.mem=16gb
hostname[2]: resources_available.ngpus=1
hostname[2]: sharing=default_excl
hostname[3]: resources_available.ncpus=8
hostname[3]: resources_available.mem=16gb
hostname[3]: resources_available.ngpus=1
hostname[3]: sharing=default_excl

Here "hostname" stands for the node's actual hostname. In this example we set the default (natural) vnode's available resources to 0, because by default PBS detects and allocates all available resources to the default vnode with the sharing attribute default_shared.

That causes a problem: all jobs would by default get scheduled on that default vnode, because its sharing type is default_shared. If you want jobs scheduled on your customized vnodes, you should set the available resources to 0 on the default vnode, as above. Restart the PBS services (for example with "service pbs restart") whenever you change this configuration so that the new vnode definitions take effect.

PBS get status:

get status of Jobs:

qstat gives details about jobs, their states, etc.

useful options:

To print details about all jobs which are running or in the hold state: qstat -a

To print details about subjobs in a JobArray which are running or in the hold state: qstat -ta

get status of PBS nodes and vnodes:

"pbsnode -a" command will provide list of all nodes present in PBS complex with there resources available, assigned, status etc.

To get details of all nodes and vnodes you created, use the "pbsnodes -av" command.

You can also specify node or vnode name to get detail information of that specific node or vnode.

e.g.

pbsnodes wolverine (here wolverine is the hostname of a node in the PBS complex, mapped to an IP address in the /etc/hosts file)

Job submission (qsub):

Jobs are submitted to the PBS server with the "qsub" command. The server maintains queues of jobs; by default all jobs go to the default queue named "workq". You may create multiple queues using the "qmgr" command, the administrator interface mainly used to create, delete and modify queues and vnodes. The PBS server decides which job is scheduled on which node or vnode based on the scheduling policy and the privileges set by the user. To schedule jobs, the server continuously polls all MoMs in the PBS complex for details of resources available and assigned. PBS assigns a unique job identifier, called a JobID, to every job. The basic syntax is:

qsub script

Here script may be a shell (sh, csh, tcsh, ksh, bash) script; PBS uses /bin/sh by default. A simple example script:

#!/bin/sh

echo "This is PBS job"

When PBS completes execution of a job, it stores the job's standard error in a file named JobName.e{JobID} (e.g. Job1.e1492) and its standard output in a file named JobName.o{JobID} (e.g. Job1.o1492).

By default these files are stored in the current working directory (shown by the pwd command). You can change this location by giving a path with the -o option.

You may specify the job name with the -N option while submitting the job:

qsub -N firstJob ./test.sh

If you don't specify a job name, the files are named after the script. E.g. qsub ./test.sh will store the results in files test.sh.e1493 and test.sh.o1493 in the current working directory.

OR

qsub -N firstJob -o /home/user1/ ./test.sh will store the output in the /home/user1/ directory; because the job is named firstJob, the files are named firstJob.e{JobID} and firstJob.o{JobID} rather than after the script.

If a submitted job terminates abnormally (errors in the job itself are not abnormal; those errors are stored in the JobName.e{JobID} file), its error and output files are stored in the "/var/spool/PBS/undelivered/" folder.

Useful Options:

Select resources:

qsub -l select="chunks":ncpus=3:ngpus=1:mem=2gb script 

Here "chunks" is the number of resource chunks requested. For example, with select=2 the job requests 2 chunks of 3 CPUs, 1 GPU and 2gb of memory each, which means it will get 6 CPUs, 2 GPUs and 4gb of RAM in total.

qsub -l nodes=megamind:ncpus=3 /home/titan/PBS/input/in.sh

This job will select one node, specified by hostname.

To select multiple nodes you may use the command given below:

qsub -l nodes=megamind+titan:ncpus=3 /home/titan/PBS/input/in.sh
Submit multiple jobs with same script (JobArray):

qsub -J 1-20 script
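
The range 1-20 creates 20 subjobs from the same script. As a minimal sketch (the processing command and input paths are hypothetical), each subjob can use the PBS_ARRAY_INDEX environment variable that PBS Pro sets for array subjobs to pick its own piece of work:

#!/bin/sh
# each subjob sees its own index in PBS_ARRAY_INDEX (1..20 for -J 1-20)
echo "Running subjob $PBS_ARRAY_INDEX"
./process_data ./input/file_${PBS_ARRAY_INDEX}.dat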

Submit dependent jobs:

In some cases you may require a job that should run only after the successful or unsuccessful completion of specified jobs; for that PBS provides options such as:

qsub -W depend=afterok:316.megamind /home/titan/PBS/input/in.sh

This job will start only after successful completion of the job with job ID "316.megamind". Besides afterok, PBS has other options such as beforeok, beforenotok and afternotok. You may find all the details in the man page of qsub.

Submit job with priority:

There are two ways in which we can set the priority of jobs to be executed.

1) Using a single queue with jobs of different priorities:

To change the sequence of jobs queued in an execution queue, open the "$PBS_HOME/sched_priv/sched_config" file; normally $PBS_HOME is "/var/spool/PBS/". Uncomment the line below if present, otherwise add it:

job_sort_key : "job_priority HIGH"

After saving this file you will need to restart the pbs_sched daemon on the head node, for example with the command below:

service pbs restart

After completing this task you have to submit the job with the -p option to specify the priority of the job within the queue. This value may range from -1024 to 1023, where -1024 is the lowest priority and 1023 is the highest priority in the queue.

e.g.

qsub -p 100 ./X.sh

qsub -p 101 ./Y.sh


qsub -p 102 ./Z.sh 
In this case PBS will execute the jobs as explained in the diagram below:

[Diagram: multiple jobs with different priorities in one queue (multipleJobsInoneQ)]

2) Using different queues with specified priorities: we discuss this in the PBS Queue section below.

[Diagram: jobs distributed across multiple queues with different priorities]

In this example all jobs in queue 2 will complete first, then queue 3, then queue 1, since the priority of queue 2 > queue 3 > queue 1. Because of this, the job execution flow is as shown below:

J4 => J5 => J6 => J7 => J8 => J9 => J1 => J2 => J3

PBS Queue:

PBS Pro can manage multiple queues as per the user's requirements. By default every job is queued in "workq" for execution. Two types of queue are available: execution and routing. Jobs in an execution queue are used by the PBS server for execution. Jobs in a routing queue cannot be executed; they can be redirected to an execution queue or to another routing queue using the qmove command. By default the queue "workq" is an execution queue. The sequence of jobs in a queue may be changed using the priority defined at job submission, as described above in the job submission section.
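
For instance (the job ID and queue name here are only illustrative), a job sitting in a routing queue can be moved to an execution queue with:

qmove workq 1501.megamind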

Useful qmgr commands:

First type qmgr, which is the manager interface of PBS Pro.

To create queue:


Qmgr:
 create queue test2

To set type of queue you created:


Qmgr:
 set queue test2 queue_type=execution

OR


Qmgr:
 set queue test2 queue_type=route

To enable queue:


Qmgr:
 set queue test2 enabled=True

To set priority of queue:


Qmgr:
 set queue test2 priority=50

Jobs in a queue with higher priority get preference. After completion of all jobs in the higher-priority queue, jobs in lower-priority queues are scheduled. There is a high probability of job starvation in queues with lower priority.

To start queue:


Qmgr:
 set queue test2 started = True

To activate all queues (present on a particular node):


Qmgr:
 active queue @default

To set a queue for specified users: you need to set the acl_user_enable attribute to true, which tells PBS to only allow users present in the acl_users list to submit jobs.


 Qmgr:
 set queue test2 acl_user_enable=True

To set the users permitted to submit jobs to a queue:


Qmgr:
 set queue test2 acl_users="user1@..,user2@..,user3@.."

(In place of ".." you have to specify the hostname of a compute node in the PBS complex. A user name given without a hostname allows users with that name to submit jobs from all nodes permitted to submit jobs in the PBS complex.)

To delete queues we created:


Qmgr:
 delete queue test2

To see the status of all queues:

qstat -Q


You may specify a specific queue name: qstat -Q test2

To see full details of all queues: qstat -Q -f

You may specify a specific queue name: qstat -Q -f test2

[Jun 18, 2017] An introduction to parameter expansion in Bash by James Pannacciulli

Notable quotes:
"... parameter expansion ..."
"... var="" ..."
"... var="gnu" ..."
"... parameter expansion ..."
"... offset of 5 length of 4 ..."
"... parameter expansion ..."
"... pattern of string of _ ..."
Jun 18, 2017 | opensource.com
About conditional, substring, and substitution parameter expansion operators

Conditional parameter expansion

Conditional parameter expansion allows branching on whether the parameter is unset, empty, or has content. Based on these conditions, the parameter can be expanded to its value, a default value, or an alternate value; throw a customizable error; or reassign the parameter to a default value. The following table shows the conditional parameter expansions-each row shows a parameter expansion using an operator to potentially modify the expansion, with the columns showing the result of that expansion given the parameter's status as indicated in the column headers. Operators with the ':' prefix treat parameters with empty values as if they were unset.

parameter expansion    unset var   var=""      var="gnu"
${var-default}         default     -           gnu
${var:-default}        default     default     gnu
${var+alternate}       -           alternate   alternate
${var:+alternate}      -           -           alternate
${var?error}           error       -           gnu
${var:?error}          error       error       gnu

The = and := operators in the table function identically to - and :- , respectively, except that the = variants rebind the variable to the result of the expansion.
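
As a small sketch (run in an interactive bash session with a throwaway variable), the rebinding behaviour of := compared with :- looks like this:

$ unset var
$ echo "${var:-default}"    # expands to default, but var stays unset
default
$ echo "${var:=default}"    # expands to default and also assigns it to var
default
$ echo "$var"
default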

As an example, let's try opening a user's editor on a file specified by the OUT_FILE variable. If either the EDITOR environment variable or our OUT_FILE variable is not specified, we will have a problem. Using a conditional expansion, we can ensure that when the EDITOR variable is expanded, we get the specified value or at least a sane default:

$ echo ${EDITOR}
/usr/bin/vi
$ echo ${EDITOR:-$(which nano)}
/usr/bin/vi
$ unset EDITOR
$ echo ${EDITOR:-$(which nano)}
/usr/bin/nano

Building on the above, we can run the editor command and abort with a helpful error at runtime if there's no filename specified:

$ ${EDITOR:-$(which nano)} ${OUT_FILE:?Missing filename}
bash: OUT_FILE: Missing filename

Substring parameter expansion

Parameters can be expanded to just part of their contents, either by offset or by removing content matching a pattern. When specifying a substring offset, a length may optionally be specified. If running Bash version 4.2 or greater, negative numbers may be used as offsets from the end of the string. Note the parentheses used around the negative offset, which ensure that Bash does not parse the expansion as having the conditional default expansion operator from above:

$ location="CA 90095"
$ echo "Zip Code: ${location:3}"
Zip Code: 90095
$ echo "Zip Code: ${location:(-5)}"
Zip Code: 90095
$ echo "State: ${location:0:2}"
State: CA

Another way to take a substring is to remove characters from the string matching a pattern, either from the left edge with the # and ## operators or from the right edge with the % and %% operators. A useful mnemonic is that # appears left of a comment and % appears right of a number. When the operator is doubled, it matches greedily, as opposed to the single version, which removes the most minimal set of characters matching the pattern.

var="open source"
parameter expansion offset of 5
length of 4
${var:offset} source
${var:offset:length} sour
pattern of *o?
${var#pattern} en source
${var##pattern} rce
pattern of ?e*
${var%pattern} open sour
${var%%pattern} o

The pattern-matching used is the same as with filename globbing: * matches zero or more of any character, ? matches exactly one of any character, and [...] brackets introduce a character class match against a single character, supporting negation ( ^ ) as well as the POSIX character classes. By excising characters from our string in this manner, we can take a substring without first knowing the offset of the data we need:

$ echo $PATH
/usr/local/bin:/usr/bin:/bin
$ echo "Lowest priority in PATH: ${PATH##*:}"
Lowest priority in PATH: /bin
$ echo "Everything except lowest priority: ${PATH%:*}"
Everything except lowest priority: /usr/local/bin:/usr/bin
$ echo "Highest priority in PATH: ${PATH%%:*}"
Highest priority in PATH: /usr/local/bin

Substitution in parameter expansion

The same types of patterns are used for substitution in parameter expansion. Substitution is introduced with the / or // operators, followed by two arguments separated by another / representing the pattern and the string to substitute. The pattern matching is always greedy, so the doubled version of the operator, in this case, causes all matches of the pattern to be replaced in the variable's expansion, while the singleton version replaces only the leftmost.

var="free and open"
parameter expansion pattern of
string of _
${var/pattern/string} free_and open
${var//pattern/string} free_and_open
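
For instance (a small sketch using a throwaway variable), the same two operators can replace spaces in a filename with underscores:

$ file="my report draft.txt"
$ echo "${file/ /_}"     # replace only the first space
my_report draft.txt
$ echo "${file// /_}"    # replace every space
my_report_draft.txt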

The wealth of parameter expansion modifiers transforms Bash variables and other parameters into powerful tools beyond simple value stores. At the very least, it is important to understand how parameter expansion works when reading Bash scripts, but I suspect that not unlike myself, many of you will enjoy the conciseness and expressiveness that these expansion modifiers bring to your scripts as well as your interactive sessions.

[Jun 17, 2017] How containers and DevOps transformed Duke Universitys IT department by Chris Collins

Jun 16, 2017 | opensource.com

...At Duke University's Office of Information Technology (OIT), we began looking at containers as a way to achieve higher density from the virtualized infrastructure used to host websites. Virtual machine (VM) sprawl had started to become a problem. We favored separating each client's website onto its own VM for both segregation and organization, but steady growth meant we were managing more servers than we could handle. As we looked for ways to lower management overhead and make better use of resources, Docker hit the news, and we began to experiment with containerization for our web applications.

For us, the initial investigation of containers mirrors a shift toward a DevOps culture.

Where we started

When we first looked into container technology, OIT was highly process driven and composed of monolithic applications and a monolithic organizational structure. Some early forays into automation were beginning to lead the shift toward a new cultural organization inside the department, but even so, the vast majority of our infrastructure consisted of "pet" servers (to use the pets vs. cattle analogy). Developers created their applications on staging servers designed to match production hosting environments and deployed by migrating code from the former to the latter. Operations still approached hosting as it always had: creating dedicated VMs for individual services and filing manual tickets for monitoring and backups. A service's lifecycle was marked by change requests, review boards, standard maintenance windows, and lots of personal attention.

A shift in culture

As we began to embrace containers, some of these longstanding attitudes toward development and hosting began to shift a bit. Two of the larger container success stories came from our investigation into cloud infrastructure. The first project was created to host hundreds of R-Studio containers for student classes on Microsoft Azure hosts, breaking from our existing model of individually managed servers and moving toward "cattle"-style infrastructure designed for hosting containerized applications.

The other was a rapid containerization and deployment of the Duke website to Amazon Web Services while in the midst of a denial-of-service attack, dynamically creating infrastructure and rapidly deploying services.

The success of these two wildly nonstandard projects helped to legitimize containers within the department, and more time and effort was put into looking further into their benefits and those of on-demand and disposable cloud infrastructure, both on-premises and through public cloud providers.

It became apparent early on that containers lived within a different timescale from traditional infrastructure. We started to notice cases where short-lived, single-purpose services were created, deployed, lived their entire lifecycle, and were decommissioned before we completed the tickets created to enter them into inventory, monitoring, or backups. Our policies and procedures were not able to keep up with the timescales that accompanied container development and deployment.

In addition, humans couldn't keep up with the automation that went into creating and managing the containers on our hosts. In response, we began to develop more automation to accomplish usually human-gated processes. For example, the dynamic migration of containers from one host to another required a change in our approach to monitoring. It is no longer enough to tie host and service monitoring together or to submit a ticket manually, as containers are automatically destroyed and recreated on other hosts in response to events.

Some of this was in the works for us already-automation and container adoption seem to parallel one another. At some point, they become inextricably intertwined.

As containers continued to grow in popularity and OIT began to develop tools for container orchestration, we tried to further reinforce the "cattle not pets" approach to infrastructure. We limited login of the hosts to operations staff only (breaking with tradition) and gave all hosts destined for container hosting a generic name. Similar to being coached to avoid naming a stray animal in an effort to prevent attachment, servers with generic names became literally forgettable. Management of the infrastructure itself became the responsibility of automation, not humans, and humans focused their efforts on the services inside the containers.

Containers also helped to usher continuous integration into our everyday workflows. OIT's Identity Management team members were early adopters and began to build Kerberos key distribution centers (KDCs) inside containers using Jenkins, building regularly to incorporate patches and test the resulting images. This allowed the team to catch breaking builds before they were pushed out onto production servers. Prior to that, the complexity of the environment and the widespread impact of an outage made patching the systems a difficult task.

Embracing continuous deployment

Since that initial use case, we've also embraced continuous deployment. There is a solid pattern for every project that gets involved with our continuous integration/continuous deployment (CI/CD) system. Many teams initially have a lot of hesitation about automatically deploying when tests pass, and they tend to build checkpoints requiring human intervention. However, as they become more comfortable with the system and learn how to write good tests, they almost always remove these checkpoints.

Within our container orchestration automation, we use Jenkins to patch base images on a regular basis and rebuild all the child images when the parent changes. We made the decision early that the images could be rebuilt and redeployed at any time by automated processes. This meant that any code included in the branch of the git repository used in the build job would be included in the image and potentially deployed without any humans involved. While some developers initially were uncomfortable with this, it ultimately led to better development practices: Developers merge into the production branch only code that is truly ready to be deployed.

This practice facilitated rebuilding container images immediately when code is merged into the production branch and allows us to automatically deploy the new image once it's built. At this point, almost every project using the automatic rebuild has also enabled automated deployment.

Looking ahead

Today the adoption of both containers and DevOps is still a work in progress for OIT.

Internally we still have to fight the entropy of history even as we adopt new tools and culture. Our biggest challenge will be convincing people to break away from the repetitive break-fix mentality that currently dominates their jobs and to focus more on automation. While time is always short, and the first step always daunting, in the long run adopting automation for day-to-day tasks will free them to work on more interesting and complex projects.

Thankfully, people within the organization are starting to embrace working in organized or ad hoc groups of cross-discipline members and developing automation together. This will definitely become necessary as we embrace automated orchestration and complex systems. A group of talented individuals who possess complementary skills will be required to fully manage the new environments.

[Jun 09, 2017] Amazon's S3 web-based storage service is experiencing widespread issues on Feb 28 2017

Jun 09, 2017 | techcrunch.com

Amazon's S3 web-based storage service is experiencing widespread issues, leading to service that's either partially or fully broken on websites, apps and devices upon which it relies. The AWS offering provides hosting for images for a lot of sites, and also hosts entire websites, and app backends including Nest.

The S3 outage is due to "high error rates with S3 in US-EAST-1," according to Amazon's AWS service health dashboard , which is where the company also says it's working on "remediating the issue," without initially revealing any further details.

Affected websites and services include Quora, newsletter provider Sailthru, Business Insider, Giphy, image hosting at a number of publisher websites, filesharing in Slack, and many more. Connected lightbulbs, thermostats and other IoT hardware is also being impacted, with many unable to control these devices as a result of the outage.

Amazon S3 is used by around 148,213 websites, and 121,761 unique domains, according to data tracked by SimilarTech , and its popularity as a content host concentrates specifically in the U.S. It's used by 0.8 percent of the top 1 million websites, which is actually quite a bit smaller than CloudFlare, which is used by 6.2 percent of the top 1 million websites globally – and yet it's still having this much of an effect.

Amazingly, even the status indicators on the AWS service status page rely on S3 for storage of its health marker graphics, hence why the site is still showing all services green despite obvious evidence to the contrary. Update (11:40 AM PT): AWS has fixed the issues with its own dashboard at least – it'll now accurately reflect service status as it continues to attempt to fix the problem .

[May 29, 2017] Release of Wine 2.8

May 29, 2017 | news.softpedia.com
What's new in this release (see below for details):
- - TCP and UDP connection support in WebServices.
- - Various shader improvements for Direct3D 11.
- - Improved support for high DPI settings.
- - Partial reimplementation of the GLU library.
- - Support for recent versions of OSMesa.
- - Window management improvements on macOS.
+ - Direct3D command stream runs asynchronously.
+ - Better serial and parallel ports autodetection.
+ - Still more fixes for high DPI settings.
+ - System tray notifications on macOS.
- Various bug fixes.

... improved support for Warhammer 40,000: Dawn of War III that'll be ported to Linux and SteamOS platforms by Feral Interactive on June 8, Wine 2.9 is here to introduce support for tesselation shaders in Direct3D, binary mode support in WebServices, RegEdit UI improvements, and clipboard changes detected through Xfixes.

...

The Wine 2.9 source tarball can be downloaded right now from our website if you fancy compiling it on your favorite GNU/Linux distribution, but please try to keep in mind that this is a pre-release version not suitable for production use. We recommend installing the stable Wine branch if you want to have a reliable and bug-free experience.

Wine 2.9 will also be installable from the software repos of your operating system in the coming days.

[May 27, 2017] An introduction to EXT4 filesystem

Notable quotes:
"... In EXT4, data allocation was changed from fixed blocks to extents. ..."
"... EXT4 reduces fragmentation by scattering newly created files across the disk so that they are not bunched up in one location at the beginning of the disk, ..."
"... Aside from the actual location of the data on the disk, EXT4 uses functional strategies, such as delayed allocation, to allow the filesystem to collect all the data being written to the disk before allocating space to it. This can improve the likelihood that the data space will be contiguous. ..."
May 27, 2017 | opensource.com
EXT4

The EXT4 filesystem primarily improves performance, reliability, and capacity. To improve reliability, metadata and journal checksums were added. To meet various mission-critical requirements, the filesystem timestamps were improved with the addition of intervals down to nanoseconds. The addition of two high-order bits in the timestamp field defers the Year 2038 problem until 2446-for EXT4 filesystems, at least.

In EXT4, data allocation was changed from fixed blocks to extents. An extent is described by its starting and ending place on the hard drive. This makes it possible to describe very long, physically contiguous files in a single inode pointer entry, which can significantly reduce the number of pointers required to describe the location of all the data in larger files. Other allocation strategies have been implemented in EXT4 to further reduce fragmentation.
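
To see extents in practice, the filefrag utility from e2fsprogs can be used; with the -v flag it lists each extent's logical and physical block ranges for a given file (the path below is only an example, and output varies by system):

$ filefrag -v /var/log/messages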

EXT4 reduces fragmentation by scattering newly created files across the disk so that they are not bunched up in one location at the beginning of the disk, as many early PC filesystems did. The file-allocation algorithms attempt to spread the files as evenly as possible among the cylinder groups and, when fragmentation is necessary, to keep the discontinuous file extents as close as possible to others in the same file to minimize head seek and rotational latency as much as possible. Additional strategies are used to pre-allocate extra disk space when a new file is created or when an existing file is extended. This helps to ensure that extending the file will not automatically result in its becoming fragmented. New files are never allocated immediately after existing files, which also prevents fragmentation of the existing files.

Aside from the actual location of the data on the disk, EXT4 uses functional strategies, such as delayed allocation, to allow the filesystem to collect all the data being written to the disk before allocating space to it. This can improve the likelihood that the data space will be contiguous.

Older EXT filesystems, such as EXT2 and EXT3, can be mounted as EXT4 to make some minor performance gains. Unfortunately, this requires turning off some of the important new features of EXT4, so I recommend against this.

EXT4 has been the default filesystem for Fedora since Fedora 14.

An EXT3 filesystem can be upgraded to EXT4 using the procedure described in the Fedora documentation; however, its performance will still suffer due to residual EXT3 metadata structures.

The best method for upgrading to EXT4 from EXT3 is to back up all the data on the target filesystem partition, use the mkfs command to write an empty EXT4 filesystem to the partition, and then restore all the data from the backup.
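
A rough sketch of that procedure might look like the following (the device name, mount point, and backup location are placeholders; verify and adapt before running anything destructive):

# back up everything from the EXT3 filesystem currently mounted at /data
tar -C /data -cpf /backup/data.tar .
umount /data

# write a fresh EXT4 filesystem to the partition and restore the backup
mkfs.ext4 /dev/sdb1
mount /dev/sdb1 /data
tar -C /data -xpf /backup/data.tar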

[May 20, 2017] Outsourcing higher wage work is more profitable than outsourcing lower wage work

Notable quotes:
"... Baker correctly diagnoses the impact of boomers aging, but there is another effect - "knowledge work" and "high skill manufacturing" is more easily outsourced/offshored than work requiring a physical presence. ..."
"... That's what happened with American IT. ..."
May 20, 2017 | economistsview.typepad.com
cm, May 20, 2017 at 04:51 PM
Baker correctly diagnoses the impact of boomers aging, but there is another effect - "knowledge work" and "high skill manufacturing" is more easily outsourced/offshored than work requiring a physical presence.

Also outsourcing "higher wage" work is more profitable than outsourcing "lower wage" work - with lower wages also labor cost as a proportion of total cost tends to be lower (not always).

And outsourcing and geographically relocating work creates other overhead costs that are not much related to the wages of the local work replaced - and those overheads are larger in relation to lower wages than in relation to higher wages.

libezkova -> cm... May 20, 2017 at 08:34 PM

"Also outsourcing "higher wage" work is more profitable than outsourcing "lower wage" work"

That's what happened with American IT.

[May 19, 2017] IT ops doesn't matter. Really by Dale Vile

Notable quotes:
"... All of the hype around software and developers, which tends to significantly skew even the DevOps discussion, runs the risk of creating the perception that IT ops is just a necessary evil. Indeed, some go so far as to make the case for a 'NoOps' world in which the public cloud magically takes care of everything downstream once developers have 'innovated' and 'created'. ..."
"... This kind of view comes about from people looking through the wrong end of the telescope. Turn the thing around and look up close at what goes on in the world of ops, and you get a much better sense of perspective. Teams operating in this space are not just there to deploy the next custom software release and make sure it runs quickly and robustly - in fact that's often a relatively small part of what they do. ..."
"... And coming back to operations, you are sadly mistaken if you think that the public cloud makes all challenges and requirements go away. If anything, the piecemeal adoption of cloud services has made things more complex and unpredictable from an integration and management perspective. ..."
"... There are all kinds of valid reasons to keep an application sitting on your own infrastructure anyway - regulation, performance, proximity to dependent solutions and data, etc. Then let's not forget the simple fact that running things in the cloud is often more expensive over the longer term. ..."
Dec 19, 2016 | theregister.co.uk

Get real – it's not all about developers and DevOps

Listen to some DevOps evangelists talk, and you would get the impression that IT operations teams exist only to serve the needs of developers. Don't get me wrong, software development is a good competence to have in-house if your organisation depends on custom applications and services to differentiate its business.

As an ex-developer, I appreciate the value of being able to deliver something tailored to a specific need, even if it does pain me to see the shortcuts too often taken nowadays due to ignorance of some of the old disciplines, or an obsession with time-to-market above all else.

But before this degenerates into an 'old guy' rant about 'youngsters today', let's get back to the point that I really want to make.

All of the hype around software and developers, which tends to significantly skew even the DevOps discussion, runs the risk of creating the perception that IT ops is just a necessary evil. Indeed, some go so far as to make the case for a 'NoOps' world in which the public cloud magically takes care of everything downstream once developers have 'innovated' and 'created'.

This kind of view comes about from people looking through the wrong end of the telescope. Turn the thing around and look up close at what goes on in the world of ops, and you get a much better sense of perspective. Teams operating in this space are not just there to deploy the next custom software release and make sure it runs quickly and robustly - in fact that's often a relatively small part of what they do.

This becomes obvious when you recognize how much stuff runs in an Enterprise IT landscape - software packages enabling core business processes, messaging, collaboration and workflow platforms keeping information flowing, analytics environments generating critical business insights, and desktop and mobile estates serving end user access needs - to name but a few.

Vital operations

There's then everything required to deal with security, data protection, compliance and other aspects of risk. Apart from the odd bit of integration and tailoring work - the need for which is diminishing with modern 'soft-coded', connector-driven solutions - very little of all this has anything to do with development and developers.

A big part of the rationale for modernising your application landscape and migrating to the latest flexible and open software packages and platforms is to eradicate the need for coding wherever you can. Code is expensive to build and maintain, and the same can often be achieved today through software switches, policy-driven workflow, drag-and-drop interface design, and so on. Sensible IT teams only code when they absolutely have to.

And coming back to operations, you are sadly mistaken if you think that the public cloud makes all challenges and requirements go away. If anything, the piecemeal adoption of cloud services has made things more complex and unpredictable from an integration and management perspective.

There are all kinds of valid reasons to keep an application sitting on your own infrastructure anyway - regulation, performance, proximity to dependent solutions and data, etc. Then let's not forget the simple fact that running things in the cloud is often more expensive over the longer term.

Against this background, an 'appropriate' level of custom development and the selective use of cloud services will be the way forward for most organisations, all underpinned by a well-run data centre environment acting as the hub for hybrid delivery. This is the approach that tends to be taken by the most successful enterprise IT teams, and the element that makes particularly high achievers stand out is agile and effective IT operations.

This isn't just to support any DevOps agenda you might have; it is demonstrably a key enabler across the board. Of course if you work in operations, you will know already intuitively know all this. But if you want some ammunition to spell it out to others who need enlightenment, take a look at our research report entitled IT Ops and a Digital Business Enabler; more than just keeping the lights on . This is based on input from 400 Senior European IT professionals. ®

Paul Smith
I think this is one fad that has run its course. If nothing else, the one thing that cloud has brought to the software world is the separation of software from the environment it runs in, and since the the Ops side of DevOps is all about the integration of the platform and software, what you end up with in a cloudy world is a lot of people looking for a new job.
Anonymous Coward

For decades developers have been ignored by infrastructure vendors because the decision makers buying infrastructure sit in the infrastructure teams. Now with the cloud etc vendors realize they will lose supporters within these teams.

So instead - infrastructure vendors target developers to become their next fanboys.

E.g. Dear developer, you won't need to speak to your infrastructure admins anymore to setup a development environment. Now you can automate, orchestrate the provisioning of your containerized development environment at the push of a button. Blah blah blah, but you have to buy our storage.

I remember the days when every DBA wanted RAID10 just because thats what the whitepaper recommended. By that time storage technology had long moved on, but the DBA still talked about Full Stripe Writes.

Now with DevOps you'll have Developers influencing infrastructure decisions, because they just learned about snapshots. And yes - it has to be all flash - and designed from the ground up by millenials that eat avocado.

John 104
Re: DevOps was never supposed to replace Operations

Yes, DevOps isn't about replacing Ops. But try telling that to the powers that be. It is sold and seen as a cost cutting measure.

As for devs learning Ops and vice versa, there are very few on both sides who really understand what it takes to do the others job. I have a very high regard for Devs, but when it comes to infra, they are, as a whole, very incompetent. Just like I'm incompetent in Dev. can't have one without the other. I feel that in time, the pendulum will swing away from cloud as execs and accountants realize how it isn't really saving any money.

The real question is: Will there be any qualified operations engineers available or will they all have retired out or have found work elsewhere. It isn't easy to be an ops engineer, takes a lot of experience to get there, and qualified candidates are hard to come by. Let's face it, in today's world, its a dying breed.

John 104
Very Nice

Nice of you to point out what us in Ops have known all along. I'm afraid it will fall on deaf ears, though. Until the executives who constantly fall for the new shiny are made to actually examine business needs and processes and make business decisions based on said.

Our laughable move to cloud here involved migrating off of on prem Exchange to O365. The idea was to free up our operations team to allow us to do more in house projects. Funny thing is, it takes more management of the service than we ever did on premises. True, we aren't maintaining the Exchange infra, but now we have SQL servers, DCs, ADFS, etc, to maintain in the MS cloud to allow authentication just to use the product. And because mail and messaging is business critical, we have to have geographically disparate instances of both. And the cost isn't pretty. Yay cloud.

[May 17, 2017] Talk of tech innovation is bullsht. Shut up and get the work done – says Linus Torvalds

May 17, 2017 | theregister.co.uk

Linus Torvalds believes the technology industry's celebration of innovation is smug, self-congratulatory, and self-serving. The term of art he used was more blunt: "The innovation the industry talks about so much is bullshit," he said. "Anybody can innovate. Don't do this big 'think different'... screw that. It's meaningless."

In a deferential interview at the Open Source Leadership Summit in California on Wednesday, conducted by Jim Zemlin, executive director of the Linux Foundation, Torvalds discussed how he has managed the development of the Linux kernel and his attitude toward work.

"All that hype is not where the real work is," said Torvalds. "The real work is in the details."

Torvalds said he subscribes to the view that successful projects are 99 per cent perspiration, and one per cent innovation.

As the creator and benevolent dictator of the open-source Linux kernel , not to mention the inventor of the Git distributed version control system, Torvalds has demonstrated that his approach produces results. It's difficult to overstate the impact that Linux has had on the technology industry. Linux is the dominant operating system for servers. Almost all high-performance computing runs on Linux. And the majority of mobile devices and embedded devices rely on Linux under the hood.

The Linux kernel is perhaps the most successful collaborative technology project of the PC era. Kernel contributors, totaling more than 13,500 since 2005, are adding about 10,000 lines of code, removing 8,000, and modifying between 1,500 and 1,800 daily, according to Zemlin. And this has been going on – though not at the current pace – for more than two and a half decades.

"We've been doing this for 25 years and one of the constant issues we've had is people stepping on each other's toes," said Torvalds. "So for all of that history what we've done is organize the code, organize the flow of code, [and] organize our maintainership so the pain point – which is people disagreeing about a piece of code – basically goes away."

The project is structured so people can work independently, Torvalds explained. "We've been able to really modularize the code and development model so we can do a lot in parallel," he said.

Technology plays an obvious role but process is at least as important, according to Torvalds.

"It's a social project," said Torvalds. "It's about technology and the technology is what makes people able to agree on issues, because ... there's usually a fairly clear right and wrong."

But now that Torvalds isn't personally reviewing every change as he did 20 years ago, he relies on a social network of contributors. "It's the social network and the trust," he said. "...and we have a very strong network. That's why we can have a thousand people involved in every release."

The emphasis on trust explains the difficulty of becoming involved in kernel development, because people can't sign on, submit code, and disappear. "You shoot off a lot of small patches until the point where the maintainers trust you, and at that point you become more than just a guy who sends patches, you become part of the network of trust," said Torvalds.

Ten years ago, Torvalds said he told other kernel contributors that he wanted to have an eight-week release schedule, instead of a release cycle that could drag on for years. The kernel developers managed to reduce their release cycle to around two and a half months. And since then, development has continued without much fuss.

"It's almost boring how well our process works," Torvalds said. "All the really stressful times for me have been about process. They haven't been about code. When code doesn't work, that can actually be exciting ... Process problems are a pain in the ass. You never, ever want to have process problems ... That's when people start getting really angry at each other." ®

[May 17, 2017] So your client's under-spent on IT for decades and lives in fear of an audit

Notable quotes:
"... Most of us use some form of desired state solution already. Desired state solutions basically involve an OS agent that gets a config from a centralized location and applies the relevant configuration to the operating system and/or applications. ..."
May 17, 2017 | theregister.co.uk
12 May 2017 at 14:56, Trevor Pott

Infrastructure as code is a buzzword frequently thrown out alongside DevOps and continuous integration as being the modern way of doing things. Proponents cite benefits ranging from an amorphous "agility" to reducing the time to deploy new workloads. I have an argument for infrastructure as code that boils down to "cover your ass", and have discovered it's not quite so difficult as we might think.

... ... ...

None of this is particularly surprising. When you have an environment where each workload is a pet, change is slow, difficult, and requires a lot of testing. Reverting changes is equally tedious, and so a lot of planning goes into making sure that any given change won't cascade and cause knock-on effects elsewhere.

In the real world this is really the result of two unfortunate aspects of human nature. First: everyone hates doing documentation, so it's highly unlikely that in an unstructured environment every change from the last refresh was documented. The second driver of chaos and problems is that there are few things more permanent than a temporary fix.

When you don't have the budget for the right hardware, software or services you make do. When something doesn't work you "innovate" a solution. When that breaks something, you patch it. You move from one problem to the next, and if you're not careful, you end up with something so fragile that if you breathe on it, it falls over. At this point, you burn it all down and restart from scratch.

This approach to IT is fine - if you have 5, 10 or even 50 workloads. A single techie can reasonably be expected to keep that all in their head, know their network and solve any problems they encounter. Unfortunately, 50 workloads is today restricted to only the smallest of shops. Everyone else is juggling too many workloads to be playing the pets game any more.

Most of us use some form of desired state solution already. Desired state solutions basically involve an OS agent that gets a config from a centralized location and applies the relevant configuration to the operating system and/or applications. Microsoft's group policy can be considered a really primitive version of this, with System Center being a more powerful but miserable to use example. The modern friendly tools being Puppet, Chef, Saltstack, Ansible and the like.

Once you have desired state configs in place we're no longer beating individual workloads into shape, or checking them manually for deviation from design. If it all does what it says on the tin, configurations are applied and errors are thrown if they can't be. Usually there is some form of analysis software to determine how many of what is out of compliance. This is a big step forward.
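
To make the idea concrete: stripped of any particular vendor's tooling, a desired state pass is just "compare actual state to declared state, converge only where they differ, and fail loudly if you can't". The sketch below is illustrative only; the package name, config path and central URL are hypothetical, and real agents such as Puppet, Chef, Ansible or Salt express this declaratively and far more robustly.

#!/usr/bin/env bash
# Illustrative desired-state pass (not any specific tool): declare what should
# be true, check the actual state, and converge only where it differs.
# The package name, config path and central URL are hypothetical examples.
set -euo pipefail

DESIRED_PKG="ntp"                                # package that must be present
DESIRED_CONF="/etc/ntp.conf"                     # file that must match the central copy
CENTRAL_CONF="http://cfg.example.com/ntp.conf"   # centrally managed desired state

# Ensure the package is installed (no-op if it already is)
if ! rpm -q "$DESIRED_PKG" >/dev/null 2>&1; then
    yum -y install "$DESIRED_PKG"
fi

# Ensure the config file matches the centrally managed version
tmp=$(mktemp)
curl -fsS "$CENTRAL_CONF" -o "$tmp"              # set -e turns a failed fetch into a loud error
if ! cmp -s "$tmp" "$DESIRED_CONF"; then
    cp "$tmp" "$DESIRED_CONF"
    systemctl restart ntpd                       # apply the change
fi
rm -f "$tmp"

Run from cron or an agent, a pass like this is idempotent: running it ten times leaves the host in the same state as running it once, which is exactly the property that lets compliance reporting say how many hosts deviate from design.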

... ... ...

This article is sponsored by HPE.

[May 16, 2017] The Technocult (Soleil Wiki)

May 16, 2017 | soleil.wikia.com
The Technocult, also known as the Machine cult, is the semi-official name given by The Church of the Crossed Heart to followers of the Mechanicum faith who supply and maintain virtually all of the church's technology, engineering and industry.

Although they serve with the Church of the Crossed Heart, they have their own version of worship that differs substantially in theology and ritualistic forms from that of The Twelve Angels. Instead the Technocult worships a deity they call the Machine god or Omnissiah. The Technocult believes that knowledge is divine and comes only from the Omnissiah, thus making any objects that demonstrate the application of knowledge (i.e. machinery) or contain it (books) holy in the eyes/optical implants of the Technocult. The Technocult regards organic flesh as weak and imperfect, with the Rot being viewed as a divine message from the Omnissiah demonstrating its weakness, thus making its removal and replacement by mechanical, bionic parts a sacred process that brings them closer to their god, with many of its older members having very little of their original bodies remaining.

The date of the cult's formation is unknown, or a closely guarded secret...

[May 16, 2017] 10 Things I Hate About Agile Development!

May 16, 2017 | www.allaboutagile.com

1. Saying you're doing Agile just cos you're doing daily stand-ups. You're not doing agile. There is so much more to agile practices than this! Yet I'm surprised how often I've heard that story. It really is remarkable.

... ... ....

3. Thinking that agile is a silver bullet and will solve all your problems. That's so naive, of course it won't! Humans and software are a complex mix with any methodology, let alone with an added dose of organisational complexity. Agile development will probably help with many things, but it still requires a great deal of skill and there is no magic button.

... ... ...

8. People who use agile as an excuse for having no process or producing no documentation. If documents are required or useful, there's no reason why an agile development team shouldn't produce them. Just not all up-front; do it as required to support each feature or iteration. JFDI (Just F'ing Do It) is not agile!

David, 23 February 2010 at 1:21 am

So agree on number 1. Following "Certified" Scrum Master training (prior to the exam requirement), a manager I know now calls every regular status meeting a "scrum", regardless of project or methodology. Somehow the team is more agile as a result.

Ironically he pulled up another staff member for "incorrectly" using the term retrospective.

Andy Till, 23 February 2010 at 9:28 am

I can think of far worse, how about pairing with the guy in the office who is incapable of compromise?

Steve Watson, 13 May 2010 at 10:06 am

Kelly

Good list!

I like number 9 as I find with testing people think that they no longer need to write proper test cases and scripts – a list of confirmations on a user story will do. Well, if it's a simple change I guess you can dispense with test scripts, but if it's something more complex then there is no reason NOT to write scripts. If you have a reasonably large team of people who could execute the tests, they can follow the test steps and validate against the expected results. It also means that you can sensibly lump together test cases and cover them with one test.

If you don't think about how you will execute them and just tackle them one by one off the confirmations list, you miss the opportunity to run one test and cover many separate cases, saving time.

I always find test scripts useful if someone different re-runs a test, as they then follow the same process as before. This is why we automate regression so the tests are executed the same each time.
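
As a rough sketch of what "the tests are executed the same each time" can look like once scripted: the base URL, endpoints and expected strings below are made-up placeholders, not anything from the original comment, and a real suite would normally live in a proper test framework or CI job.

#!/usr/bin/env bash
# Illustrative regression script: the same steps and the same expected results
# on every run, whoever (or whatever) executes it.
# BASE_URL, the paths and the expected strings are hypothetical placeholders.
set -euo pipefail

BASE_URL="http://app.example.com"
fail=0

check() {   # check <description> <path> <expected substring>
    local desc=$1 path=$2 expect=$3
    if curl -fsS "$BASE_URL$path" | grep -q "$expect"; then
        echo "PASS: $desc"
    else
        echo "FAIL: $desc"
        fail=1
    fi
}

# One line per test case; related confirmations can share a single request
check "login page renders"     "/login"       "Sign in"
check "health endpoint is OK"  "/healthz"     "ok"
check "search returns results" "/search?q=a"  "results"

exit "$fail"

The point is less the tooling than the fact that the steps and expected results are written down, so a different tester re-running the suite follows exactly the same process as the last run.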

John Quincy, 24 October 2011 at 12:02 am

I am not a fan of agile. Unless you have a small group of developers who are in perfect sync with each other at all times, this "one size fits all" methodology is destructive and downright dangerous. I have personally witnessed a very good company go out of business this year because they transformed their development shop from a home-grown iterative methodology to SCRUM. The team was required to abide by the SCRUM rules 100%. They could not keep up with customer requirements and produced bug-filled releases that were always late. These developers went from fun, friendly, happy people (pre-SCRUM) [who NEVER missed a date] to bitter, sarcastic, hard-to-be-around 'employees'. When the writing was on the wall a couple of months back, the good ones got the hell out of there, and the company could not recover.

Some day, I'm convinced that Beck through Thomas will proclaim that the Agile Manifesto was all a big practical joke that got out of control.

This video pretty much lays out the one and only reason why management wants to implement Agile:

http://www.youtube.com/watch?v=nvks70PD0Rs

grumpasaurus, 9 February 2014 at 4:30 pm

It's a cycle of violence when a project claims to be Agile just because of standups and iterations but doesn't think about resolving the core challenges it had to begin with. People are left still battling said challenges and then say that Agile sucks.

[May 15, 2017] Wall Street Journal: Enterprises Are Not Ready for DevOps, but May Not Survive Without It by Abel Avram

Notable quotes:
"... while DevOps is appealing to startups, there are important stumbling blocks in the path of DevOps adoption within the enterprise. ..."
"... The tools needed to implement a DevOps culture are lacking. While some of the tools can be provided by vendors and others can be created within the enterprise, a process which takes a long period of time, "there is a marathon of organizational change and restructuring that must occur before such tools could ever be bought or built." ..."
Jun 06, 2014 | www.infoq.com
Rachel Shannon-Solomon suggests that most enterprises are not ready for DevOps, while Gene Kim says that they must make themselves ready if they want to survive.

Rachel Shannon-Solomon, a venture associate at Work-Bench, has recently written a blog post for The Wall Street Journal entitled DevOps Is Great for Startups, but for Enterprises It Won't Work-Yet, arguing that while DevOps is appealing to startups, there are important stumbling blocks in the path of DevOps adoption within the enterprise.

While acknowledging that large companies such as Google and Facebook benefit from implementing DevOps, and that "there is no lack of appetite to experiment with DevOps practices" within "Fortune 500s and specifically financial services firms", Shannon-Solomon remarks that "there are few true change agents within enterprise IT willing to affect DevOps implementations."

She has come to this conclusion based on "conversations with startup founders, technology incumbents offering DevOps solutions, and technologists within large enterprises."

Shannon-Solomon brings four arguments to support her position:

Shannon-Solomon ends her post wondering "how long will it be until enterprises are forced to accept that they must accelerate their experiments with DevOps" and hoping that "more individual change agents within large organizations may emerge" in the future.