Linux Tips


There are several large collections of Linux tips on the Internet. They are a mixture of obsolete and useful tips, so some work needs to be done to separate the valuable information from the junk. Among them:

For YUM tips one can look at Yum - Linux@Duke Project Wiki

Linux Gazette regularly publishes a tips column. See, for example, More 2 Cent Tips! LG #106

Some references from Linux Today might also be useful.



Old News ;-)

[Oct 09, 2017] TMOUT - Auto Logout Linux Shell When There Isn't Any Activity by Aaron Kili

Oct 07, 2017 | www.tecmint.com
... ... ..

To enable automatic user logout, we will be using the TMOUT shell variable, which terminates a user's login shell when there has been no activity for the number of seconds you specify.

To enable this globally (system-wide for all users), set the above variable in the /etc/profile shell initialization file.
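A minimal sketch of what that looks like (600 seconds, i.e. a 10-minute timeout, is just an example value):

# In /etc/profile (system-wide) or ~/.bash_profile (per user):
# log out idle login shells after 600 seconds of inactivity
export TMOUT=600
readonly TMOUT   # optional: prevent users from unsetting or changing it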

[Jul 29, 2017] shell - How does this bash code detect an interactive session - Stack Overflow

Notable quotes:
"... ', the pattern removal operation is applied to each positional parameter in turn, and the expansion is the resultant list. If parameter is an array variable subscripted with '@' or ' ..."
Jul 29, 2017 | stackoverflow.com

user1284631 , asked Jun 5 '13 at 8:44

Following some issues with scp (it did not like the presence of the bash bind command in my .bashrc file, apparently), I followed the advice of a clever guy on the Internet (I just cannot find that post right now) that put at the top of its .bashrc file this:
[[ ${-#*i} != ${-} ]] || return

in order to make sure that the bash initialization is NOT executed unless in interactive session.

Now, that works. However, I am not able to figure how it works. Could you enlighten me?

According to this answer , the $- is the current options set for the shell and I know that the ${} is the so-called "substring" syntax for expanding variables.

However, I do not understand the ${-#*i} part. And why $-#*i is not the same as ${-#*i} .

blue , answered Jun 5 '13 at 8:49

${parameter#word}

${parameter##word}

The word is expanded to produce a pattern just as in filename expansion. If the pattern matches the beginning of the expanded value of parameter, then the result of the expansion is the expanded value of parameter with the shortest matching pattern (the '#' case) or the longest matching pattern (the '##' case) deleted.

If parameter is '@' or '*', the pattern removal operation is applied to each positional parameter in turn, and the expansion is the resultant list. If parameter is an array variable subscripted with '@' or '*', the pattern removal operation is applied to each member of the array in turn, and the expansion is the resultant list.

Source: http://www.gnu.org/software/bash/manual/html_node/Shell-Parameter-Expansion.html

So basically what happens in ${-#*i} is that *i is expanded, and if it matches the beginning of the value of $- , then the result of the whole expansion is $- with the shortest matching pattern between *i and $- deleted.

Example

VAR "baioasd" 
echo ${VAR#*i};

outputs oasd .

In your case

If the shell is interactive, $- will contain the letter 'i', so when you strip the pattern *i from $- you get a string that is different from the original $- ( [[ ${-#*i} != ${-} ]] yields true). If the shell is not interactive, $- does not contain the letter 'i', so the pattern *i does not match anything in $-, [[ ${-#*i} != $- ]] yields false, and the return statement is executed.

perreal , answered Jun 5 '13 at 8:53

See this :

To determine within a startup script whether or not Bash is running interactively, test the value of the '-' special parameter. It contains i when the shell is interactive

Your substitution removes the string up to, and including, the i and tests whether the substituted version is equal to the original string. They will be different if there is an i in ${-}.
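For comparison, a commonly used equivalent idiom tests $- directly with a case statement; a sketch for the top of ~/.bashrc:

# Stop reading ~/.bashrc when the shell is not interactive,
# e.g. when invoked by scp or a non-interactive ssh command
case $- in
    *i*) ;;         # interactive: continue with the rest of the file
    *)   return ;;  # non-interactive: bail out here
esac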

[Jul 25, 2017] Local variables

Notable quotes:
"... completely local and separate ..."
Jul 25, 2017 | wiki.bash-hackers.org

local to a function:

myfunc() {
    local var=VALUE

    # alternative, only when used INSIDE a function
    declare var=VALUE

    ...
}

The local keyword (or declaring a variable using the declare command) tags a variable to be treated as completely local and separate inside the function where it was declared:

foo=external

printvalue() {
    local foo=internal

    echo $foo
}

# this will print "external"
echo $foo

# this will print "internal"
printvalue

# this will print - again - "external"
echo $foo

[Jul 25, 2017] Environment variables

Notable quotes:
"... environment variables ..."
"... including the environment variables ..."
Jul 25, 2017 | wiki.bash-hackers.org

The environment space is not directly related to the topic about scope, but it's worth mentioning.

Every UNIX® process has a so-called environment . Other items, in addition to variables, are saved there, the so-called environment variables . When a child process is created (in Bash e.g. by simply executing another program, say ls to list files), the whole environment including the environment variables is copied to the new process. Reading that from the other side means: Only variables that are part of the environment are available in the child process.

A variable can be tagged to be part of the environment using the export command:

# create a new variable and set it:
# -> This is a normal shell variable, not an environment variable!
myvariable="Hello world."

# make the variable visible to all child processes:
# -> Make it an environment variable: "export" it
export myvariable
Remember that the exported variable is a copy . There is no provision to "copy it back to the parent." See the article about Bash in the process tree !


1) under specific circumstances, also by the shell itself

[Jul 25, 2017] Block commenting

Jul 25, 2017 | wiki.bash-hackers.org

: (colon) and input redirection. The : does nothing, it's a pseudo command, so it does not care about standard input. In the following code example, you want to test mail and logging, but not dump the database, or execute a shutdown:

#!/bin/bash
# Write info mails, do some tasks and bring down the system in a safe way
echo "System halt requested" | mail -s "System halt" netadmin@example.com
logger -t SYSHALT "System halt requested"

##### The following "code block" is effectively ignored

: <<"SOMEWORD"
/etc/init.d/mydatabase clean_stop
mydatabase_dump /var/db/db1 /mnt/fsrv0/backups/db1
logger -t SYSHALT "System halt: pre-shutdown actions done, now shutting down the system"
shutdown -h NOW
SOMEWORD
##### The ignored codeblock ends here
What happened? The : pseudo command was given some input by redirection (a here-document) - the pseudo command didn't care about it, effectively, the entire block was ignored.

The here-document-tag was quoted here to avoid substitutions in the "commented" text! Check redirection with here-documents for more details.

[Jul 25, 2017] Doing specific tasks: concepts, methods, ideas

Notable quotes:
"... under construction! ..."
Jul 25, 2017 | wiki.bash-hackers.org

[Feb 14, 2017] Ms Dos style aliases for linux

I think alias ipconfig='ifconfig' is really useful for people who work with Linux from a Windows PC desktop/laptop.
Feb 14, 2017 | bash.cyberciti.biz
# MS-DOS / XP cmd like stuff
   alias edit=$VISUAL
   alias copy='cp'
   alias cls='clear'
   alias del='rm'
   alias dir='ls'
   alias md='mkdir'
   alias move='mv'
   alias rd='rmdir'
   alias ren='mv'
   alias ipconfig='ifconfig'

[Feb 04, 2017] Quickly find differences between two directories

You may be surprised, but the GNU diff used in Linux understands the situation when its two arguments are directories and behaves accordingly.
Feb 04, 2017 | www.cyberciti.biz

The diff command compares files line by line. It can also compare two directories:

# Compare two folders using diff ##
diff /etc /tmp/etc_old  
Rafal Matczak September 29, 2015, 7:36 am
§ Quickly find differences between two directories
And quicker:
 diff -y <(ls -l ${DIR1}) <(ls -l ${DIR2})  
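diff can also recurse into subdirectories on its own; a quick sketch (the directory names are just examples):

## Report only which files differ, recursing into subdirectories
diff -qr /etc /tmp/etc_old

## Show the actual line-by-line differences for the whole tree
diff -ur /etc /tmp/etc_old | less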

[Feb 04, 2017] Restoring deleted /tmp folder

Jan 13, 2015 | cyberciti.biz

As my journey continues with the Linux and Unix shell, I made a few mistakes. I accidentally deleted the /tmp folder. To restore it, all you have to do is:

mkdir /tmp
chmod 1777 /tmp
chown root:root /tmp
ls -ld /tmp
 

[Feb 04, 2017] Use CDPATH to access frequent directories in bash - Mac OS X Hints

Feb 04, 2017 | hints.macworld.com
The variable CDPATH defines the search path for the cd command's destination directories, so it serves much like a "directories home". The danger is in creating too complex a CDPATH; often a single directory works best. For example: export CDPATH=/srv/www/public_html . Now, instead of typing cd /srv/www/public_html/CSS I can simply type: cd CSS
Use CDPATH to access frequent directories in bash UNIX
Mar 21, '05 10:01:00AM • Contributed by: jonbauman

I often find myself wanting to cd to the various directories beneath my home directory (i.e. ~/Library, ~/Music, etc.), but being lazy, I find it painful to have to type the ~/ if I'm not in my home directory already. Enter CDPATH, as described in man bash:

The search path for the cd command. This is a colon-separated list of directories in which the shell looks for destination directories specified by the cd command. A sample value is ".:~:/usr".
Personally, I use the following command (either on the command line for use in just that session, or in .bash_profile for permanent use):
CDPATH=".:~:~/Library"

This way, no matter where I am in the directory tree, I can just cd dirname , and it will take me to the directory that is a subdirectory of any of the ones in the list. For example:
$ cd
$ cd Documents 
/Users/baumanj/Documents
$ cd Pictures
/Users/username/Pictures
$ cd Preferences
/Users/username/Library/Preferences
etc...

[ robg adds: No, this isn't some deeply buried treasure of OS X, but I'd never heard of the CDPATH variable, so I'm assuming it will be of interest to some other readers as well.]

cdable_vars is also nice
Authored by: clh on Mar 21, '05 08:16:26PM

Check out the bash command shopt -s cdable_vars

From the man bash page:

cdable_vars

If set, an argument to the cd builtin command that is not a directory is assumed to be the name of a variable whose value is the directory to change to.

With this set, if I give the following bash command:

export d="/Users/chap/Desktop"

I can then simply type

cd d

to change to my Desktop directory.

I put the shopt command and the various export commands in my .bashrc file.
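Putting the two together, a possible ~/.bashrc fragment might look like this (the paths are examples only):

# Search the current directory, then $HOME, then ~/Library when resolving cd arguments
export CDPATH=".:$HOME:$HOME/Library"

# Let cd treat a non-directory argument as the name of a variable holding a path
shopt -s cdable_vars
export d="$HOME/Desktop"   # now "cd d" jumps straight to the Desktop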

[Feb 04, 2017] Copy file into multiple directories

Feb 04, 2017 | www.cyberciti.biz
Instead of running:
cp /path/to/file /usr/dir1
cp /path/to/file /var/dir2
cp /path/to/file /nas/dir3

Run the following command to copy file into multiple dirs:

echo /usr/dir1 /var/dir2 /nas/dir3 | xargs -n 1 cp -v /path/to/file

[Feb 04, 2017] 20 Unix Command Line Tricks – Part I

Feb 04, 2017 | www.cyberciti.biz
Locking a directory

For privacy of my data I wanted to lock down /downloads on my file server. So I ran:

chmod 0000 /downloads

The root user still has access, but the ls and cd commands will not work for other users. To go back:

chmod 0755 /downloads

Clear gibberish all over the screen

Just type:

reset

Becoming human

Pass the -h or -H (and other options) command line option to GNU or BSD utilities to get the output of commands like ls, df, and du in human-understandable formats:

ls -lh     # print sizes in human readable format (e.g., 1K 234M 2G)
df -h
df -k      # show output in bytes, KB, MB, or GB
free -b
free -k
free -m
free -g    # print sizes in human readable format (e.g., 1K 234M 2G)
du -h
# get file system perms in human readable format
stat -c %A /boot
# compare human readable numbers
sort -h -a file
# display the CPU information in human readable format on a Linux
lscpu
lscpu -e
lscpu -e=cpu,node
# Show the size of each file but in a more human readable way
tree -h
tree -h /boot

Show information about known users in the Linux based system

Just type:

## linux version ##
lslogins

## BSD version ##
logins

Sample outputs:

UID USER      PWD-LOCK PWD-DENY LAST-LOGIN GECOS
  0 root             0        0   22:37:59 root
  1 bin              0        1            bin
  2 daemon           0        1            daemon
  3 adm              0        1            adm
  4 lp               0        1            lp
  5 sync             0        1            sync
  6 shutdown         0        1 2014-Dec17 shutdown
  7 halt             0        1            halt
  8 mail             0        1            mail
 10 uucp             0        1            uucp
 11 operator         0        1            operator
 12 games            0        1            games
 13 gopher           0        1            gopher
 14 ftp              0        1            FTP User
 27 mysql            0        1            MySQL Server
 38 ntp              0        1            
 48 apache           0        1            Apache
 68 haldaemon        0        1            HAL daemon
 69 vcsa             0        1            virtual console memory owner
 72 tcpdump          0        1            
 74 sshd             0        1            Privilege-separated SSH
 81 dbus             0        1            System message bus
 89 postfix          0        1            
 99 nobody           0        1            Nobody
173 abrt             0        1            
497 vnstat           0        1            vnStat user
498 nginx            0        1            nginx user
499 saslauth         0        1            "Saslauthd user"
Confused on a top command output?

Seriously, you need to try out htop instead of top:

sudo htop

Want to run the same command again?

Just type !! . For example:

/myhome/dir/script/name arg1 arg2

# To run the same command again
!!

## To run the last command again as root user
sudo !!

The !! repeats the most recent command. To run the most recent command beginning with "foo":

!foo

# Run the most recent command beginning with "service" as root
sudo !service

Use !$ to run a command with the last argument of the most recent command:

# Edit nginx.conf
sudo vi /etc/nginx/nginx.conf

# Test nginx.conf for errors
/sbin/nginx -t -c /etc/nginx/nginx.conf

# After testing a file with "/sbin/nginx -t -c /etc/nginx/nginx.conf", you
# can edit file again with vi
sudo vi !$

Get a reminder when you have to leave

If you need a reminder to leave your terminal, type the following command:

leave +hhmm

Where +hhmm specifies the delay in hours (hh) and minutes (mm) from the current time.

Home sweet home

Want to go to the directory you were just in? Run:
cd -
Need to quickly return to your home directory? Enter:
cd
The variable CDPATH defines the search path for the directory containing directories:

export CDPATH=/var/www:/nas10

Now, instead of typing cd /var/www/html/ I can simply type the following to cd into /var/www/html path:

cd html

Editing a file being viewed with less pager

To edit a file being viewed with the less pager, press v . The file will open for editing in $EDITOR:

less *.c
less foo.html
## Press v to edit file ##
## Quit from editor and you would return to the less pager again ##

List all files or directories on your system

To see all of the directories on your system, run:

find / -type d | less

# List all directories in your $HOME
find $HOME -type d -ls | less

To see all of the files, run:

find / -type f | less

# List all files in your $HOME
find $HOME -type f -ls | less

Build directory trees in a single command

You can create an entire directory tree in a single command by passing the -p option to mkdir:

mkdir -p /jail/{dev,bin,sbin,etc,usr,lib,lib64}
ls -l /jail/

Copy file into multiple directories

Instead of running:

cp /path/to/file /usr/dir1
cp /path/to/file /var/dir2
cp /path/to/file /nas/dir3

Run the following command to copy file into multiple dirs:

echo /usr/dir1 /var/dir2 /nas/dir3 | xargs -n 1 cp -v /path/to/file

Creating a shell function is left as an exercise for the reader
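One possible sketch of such a function (the name cpm is arbitrary):

# Copy one file into every directory given as the remaining arguments
cpm() {
    local file=$1
    shift
    local dir
    for dir in "$@"; do
        cp -v "$file" "$dir"
    done
}
# Usage: cpm /path/to/file /usr/dir1 /var/dir2 /nas/dir3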

Quickly find differences between two directories

The diff command compares files line by line. It can also compare two directories:

ls -l /tmp/r
ls -l /tmp/s

## Compare two folders using diff ##
diff /tmp/r/ /tmp/s/

[Feb 04, 2017] List all files or directories on your system

Feb 04, 2017 | www.cyberciti.biz
List all files or directories on your system

To see all of the directories on your system, run:

find / -type d | less

# List all directories in your $HOME
find $HOME -type d -ls | less

To see all of the files, run:

find / -type f | less

# List all files in your $HOME
find $HOME -type f -ls | less

[Jan 26, 2012] A last-resort trick to recover your machine from the brink of death

Have you heard of the magic SysRq key?

No?

Well, it's magic. It's directly shunted to the Linux kernel. You press ALT, press the PrintScreen (SysRq) key, and while holding them both down, press one of the letters (each letter has a different function assigned to it).

It's not normally enabled, but you can enable it by putting

kernel.sysrq = 1

in your machine's /etc/sysctl.conf file. Oh, and then rebooting.
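If you would rather not reboot, the same setting can be flipped at runtime as root; a quick sketch:

# Apply immediately to the running kernel
sysctl -w kernel.sysrq=1

# Equivalent: write directly to /proc
echo 1 > /proc/sys/kernel/sysrq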

Here's why it's useful.

So, what does SysRq do, really?

Hit Alt+SysRq+K - the windowing system will restart. More effective than Ctrl+Alt+Backspace.

Suppose a GUI application you just opened is starting to swallow massive amounts of RAM. Like, one gigabyte, perhaps? Your machine is locking up, and you feel the mouse start to stutter at first, then freeze completely - while the hard disk light in your computer's front panel is lighting up frantically, gasping for air (aka memory) .

You now have three choices:

  1. Sit it out and let the Linux kernel detect this situation and kill the abusive application. This can take way more than 15 minutes.
  2. Press the computer's power off button for 5 seconds. This shuts your machine down uncleanly and leads to data loss.
  3. Hit the magic SysRq combo: Alt+SysRq+K.

Should you choose option 3, the graphical subsystem dies immediately. That's because Alt+SysRq+K kills any application that holds the keyboard open - and, you guessed it, the graphical subsystem is holding it open. This premature death of the GUI causes all GUI applications to die in a cascade, including the abusive application.

Two to ten seconds later, you will be presented with a login prompt.

Sure, you lost changes to all files you haven't saved, and all the tabs in your Web browser… but at least you didn't have to reboot uncleanly, did you?

But, Ctrl+Alt+Backspace?

Once the machine is in a critically heavy memory crunch, Ctrl+Alt+Backspace will take too much time to work, because the windowing system will be pressed for memory to even execute. The magic SysRq key has the luxury of not having that problem - if Ctrl+Alt+Backspace were an IV drip, SysRq would be like a central line.

Why this key combination exists

The reason this key combo exists is simple. Alt+SysRq+K is called SAK (System Attention Key). It was designed back in the days of, um, yore, to kill all applications snooping on the keyboard - so administrators wishing to log in could safely do so without anyone sniffing their passwords.

As a preventative security measure, it sure works against keyloggers and other malware that may be snooping on your keyboard, may I say. And it most definitely works against your run-of-the-mill temporary memory shortage ;-).

Advantages/disadvantages

Well, the major disadvantages are:

But, on a memory crunch, this beats rebooting hands-down. And that's the biggest advantage.

Controlling runaway processes on Linux • Rudd-O.com

Sometimes, buggy memory hogs can choke your machine. Here, two tricks: one to recover from a memory choke, another to prevent memory chokes forever.

Misbehaving application frozen?

Has an application stopped responding on your machine? Well, as long as your machine is still responsive, you can use these tricks to nuke it safely.

On KDE

Hit Ctrl+Alt+Esc. Your mouse cursor will change - from an arrow to a small skull/crossbones combo. Hit the stubborn application with the skull.

It'll die.

On GNOME

Add a new launcher to your panel and set it so it executes the xkill application. Now, when an application starts stupidifying itself, just hit the launcher you created (the cursor will change to a square-type "target"), then hit the application with your mouse cursor.

It'll die.

Application choking your machine?

Of course, if your machine is already too slow to use these tricks, they won't help you much. Here's why, sometimes, your machine ditches itself into a molasses pit, and how to rescue it from certain death.

Memory and pathologies

…almost all applications have this pathological idea (encouraged by the operating system) that memory is a limitless resource…

You see, almost all applications have this pathological idea (encouraged by the operating system) that memory is a limitless resource - and when they go overboard, the operating system just dips into the hard disk to simulate memory.

Sometimes, bugs in an application do cause them to go haywire, requesting memory like there's no end. Once your machine goes down that lane, there isn't a simple way to recover it, short of powering it off forcibly.

A fortress-like operating system like Linux isn't supposed to die, and yet it does. However, the fortress I'm so proud of can, and does, provide you with effective measures against premature deaths of these sort.

A last-resort trick to recover your machine from the brink of death

Have you heard of the magic SysRq key?

No?

Well, it's magic. It's directly shunted to the Linux kernel. You press ALT, press the PrintScreen (SysRq) key, and while holding them both down, press one of the letters (each letter has a different function assigned to it).

It's not normally enabled, but you can enable it by putting

kernel.sysrq = 1

in your machine's /etc/sysctl.conf file. Oh, and then rebooting.

Here's why it's useful.

So, what does SysRq do, really?

Hit Alt+SysRq+K - the windowing system will restart. More effective than Ctrl+Alt+Backspace.

Suppose a GUI application you just opened is starting to swallow massive amounts of RAM. Like, one gigabyte, perhaps? Your machine is locking up, and you feel the mouse start to stutter at first, then freeze completely - while the hard disk light in your computer's front panel is lighting up frantically, gasping for air (aka memory).

You now have three choices:

  1. Sit it out and let the Linux kernel detect this situation and kill the abusive application. This can take way more than 15 minutes.
  2. Press the computer's power off button for 5 seconds. This shuts your machine down uncleanly and leads to data loss.
  3. Hit the magic SysRq combo: Alt+SysRq+K.

Should you choose option 3, the graphical subsystem dies immediately. That's because Alt+SysRq+K kills any application that holds the keyboard open - and, you guessed it, the graphical subsystem is holding it open. This premature death of the GUI causes all GUI applications to die in a cascade, including the abusive application.

Two to ten seconds later, you will be presented with a login prompt.

Sure, you lost changes to all files you haven't saved, and all the tabs in your Web browser… but at least you didn't have to reboot uncleanly, did you?

But, Ctrl+Alt+Backspace?

Once the machine is in a critically heavy memory crunch, Ctrl+Alt+Backspace will take too much time to work, because the windowing system will be pressed for memory to even execute. The magic SysRq key has the luxury of not having that problem ;-) - if Ctrl+Alt+Backspace were an IV drip, SysRq would be like a central line.

Why this key combination exists

The reason this key combo exists is simple. Alt+SysRq+K is called SAK (System Attention Key). It was designed back in the days of, um, yore, to kill all applications snooping on the keyboard - so administrators wishing to log in could safely do so without anyone sniffing their passwords.

As a preventative security measure, it sure works against keyloggers and other malware that may be snooping on your keyboard, may I say. And it most definitely works against your run-of-the-mill temporary memory shortage ;-).

Advantages/disadvantages

Well, the major disadvantages are:

But, on a memory crunch, this beats rebooting hands-down. And that's the biggest advantage.

A definitive cure to runaway applications

Become an administrator (root) and use your favorite text editor to open the file /etc/security/limits.conf:

# /etc/security/limits.conf
#
#Each line describes a limit for a user in the form:
#
#<domain>        <type>  <item>  <value>
#
#Where:
#<domain> can be:
#        - an user name
#        - a group name, with @group syntax
#        - the wildcard *, for default entry
#        - the wildcard %, can be also used with %group syntax,
#                 for maxlogin limit
#
#<type> can have the two values:
#        - "soft" for enforcing the soft limits
#        - "hard" for enforcing hard limits
#
#<item> can be one of the following:
#        - core - limits the core file size (KB)
#        - data - max data size (KB)
#        - fsize - maximum filesize (KB)
#        - memlock - max locked-in-memory address space (KB)
#        - nofile - max number of open files
#        - rss - max resident set size (KB)
#        - stack - max stack size (KB)
#        - cpu - max CPU time (MIN)
#        - nproc - max number of processes
#        - as - address space limit

Add the following line anywhere in the file:

*      soft      as      512000
limits.conf is a little-known godsend, and a definite requirement to keep large computing farms or terminal servers under control.

You will need to restart any sessions (graphical/terminal) for this change to take effect. Additionally, you will need to restart the graphical session manager (GDM or KDM).

Of course, 512000 is just my favorite setting - but that's because I have the privilege of using multi-gigabyte memory sticks on my machine. If you have much less memory than me, you will want to tune this, while keeping in mind that modern applications can and do take more than 300 MB under exceptional circumstances.

Like, for example, Firefox with 50 tabs/windows open. Or Evolution managing 2 GB of e-mail. Hey, both circumstances happen to me, but I guess I'm an oddball.

What this "magic incantation" in limits.conf does

limits.conf is the file that lets you set per-user/process/system resource limits. There are several limits to choose from.

One of them is the address space (as). The address space refers to the maximum amount of RAM (in kilobytes) that a process may request from the operating system. Any requests above the configured limit are simply refused.

Advantages/disadvantages

The fortunate side effect of this recipe is that the majority of applications will disappear and die a horrible death if they request memory indiscriminately. Which beats having to turn the machine off.

The unfortunate side effect of refused requests for more memory is that the majority of applications will disappear and die a horrible death if they request memory indiscriminately. That's because they don't know how to cope with more memory. Until software developers actually implement error handling for lack of memory, this will be a small nuisance - hey, save often and you'll be safe ;-).

Temporarily disabling memory limits for picky applications

Another disadvantage: certain applications don't run when a limit is set. WINE is among those applications. However, you can use a terminal window to disable this limit temporarily (within any applications started from the terminal):

[rudd-o@andrea /media/windows/UnrealTournament]$ ulimit -v unlimited
[rudd-o@andrea /media/windows/UnrealTournament]$ wine System/UnrealTournament.exe
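To verify what limit a freshly started shell actually inherits, ulimit can report it; a quick sketch:

# Show the current virtual memory (address space) limit, in kilobytes
ulimit -v

# Show all resource limits for this shell
ulimit -a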

That's it for this tutorial

So, the infrequent dead application, SAKing your computer, or powering it off… what do you prefer? Me… well, once I upgraded my new "desktop" machine (1U Dell PowerEdge SC1425) to 2 GB, I haven't looked back.

But my old machine did thank me a lot for keeping it off memory crunches :-). It's now safe in PVR heaven, pirating (er, time-shifting) TV shows for my pleasure.

[Aug 09, 2011] Creating a Linux ramdisk

While performing some testing a few weeks ago, I needed to create a ramdisk on one of my redhat AS 4.0 servers. I knew Solaris supported tmpfs, and after a bit of googling was surprised to find that Linux supported the tmpfs pseudo-file system as well. To create a ramdisk on a Linux host, you first need to find a suitable place to mount the tmpfs file system. For my tests, I used mkdir to create a directory called /var/ramdisk:

$ mkdir /var/ramdisk

Once the mount point is identified, you can use the mount command to mount a tmpfs file system on top of that mount point:

$ mount -t tmpfs none /var/ramdisk -o size=28m

Now each time you access /var/ramdisk, your reads and writes will be coming directly from memory. Nice!
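To make such a mount survive reboots, an entry can be added to /etc/fstab; a sketch (the size and mount point are just the values used above):

# /etc/fstab
none    /var/ramdisk    tmpfs    size=28m    0 0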

[Dec 17, 2009] Top Ten Things I Miss in Windows

Thoughts on Technology

Klipper/Copy & Paste Manager

I use this one a lot when I am either coding or writing a research paper for school. More often than not I find I have copied something new only to discover I need to paste a link or block of code again from two copies back. Having a tray icon where I can recall the last ten copies or so is mighty useful.

Gnome-Do

Most anyone who uses the computer in their everyday work will tell you that fewer mouse clicks means faster speed and thus (typically) more productivity. Gnome-Do is a program that allows you to cut down on mouse clicks (so long as you know what program you are looking to load). The gist of what it does is this: you assign a series of hot keys to call up the search bar (personally I use control+alt+space) and then you start typing in the name of an application or folder you want to open and it will start searching for it - once the correct thing is displayed all you need to do is tap enter to load it up. The best part is that it remembers which programs you use most often. Meaning that most times you only need to type the first letter or two of a commonly used application for it to find the one you are looking for.

[Aug 7, 2009] atd daemon is not running on Suse 10 SP2 by default, so at commands fail.

It needs to be manually enabled via chkconfig and started with the service command to ensure consistent behavior with Solaris, AIX, and HP-UX.
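On such a system the fix looks roughly like this (a sketch using the SysV tools mentioned above):

# enable atd in the default runlevels and start it now
chkconfig atd on
service atd start
service atd status   # confirm it is running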

[Aug 5, 2009] 10 Essential UNIX-Linux Command Cheat Sheets TECH SOURCE FROM BOHOL

Linux has become so idiot proof nowadays that there is less and less need to use the command line. However, the commands and shell scripts have remained powerful for advanced users to utilize to help them do complicated tasks quickly and efficiently.

To those of you who are aspiring to become a UNIX/Linux guru, you have to know loads of commands and learn how to effectively use them. But there is really no need to memorize everything since there are plenty of cheat sheets available on the web and on books. To spare you from the hassles of searching, I have here a collection of 10 essential UNIX/Linux cheat sheets that can greatly help you on your quest for mastery...

[Aug 4, 2009] Tech Tip View Config Files Without Comments Linux Journal

I've been using this grep invocation for years to trim comments out of config files. Comments are great but can get in your way if you just want to see the currently running configuration. I've found files hundreds of lines long which had fewer than ten active configuration lines; it's really hard to get an overview of what's going on when you have to wade through hundreds of lines of comments.

$ grep ^[^#] /etc/ntp.conf

The regex ^[^#] matches the first character of any line, as long as that character is not a #. Because blank lines don't have a first character they're not matched either, resulting in a nice compact output of just the active configuration lines.
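If your config files indent their comments, a slightly broader pattern (an alternative sketch) also skips lines whose first non-blank character is a #:

# print only lines that start with something other than whitespace or #
grep -E '^[[:space:]]*[^#[:space:]]' /etc/ntp.conf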

[Aug 3, 2009] 10 super-cool Linux hacks you did not know about

1. Run top in batch mode

2. Write to more than one file at once with tee

3. Unleash the accounting power with pacct

4. Dump utmp and wtmp logs

5. Monitor CPU and disk usage with iostat

6. Monitor memory usage with vmstat

7. Combine the power of iostat and vmstat with dstat

8. Collect, report or save system activity information with sar

9. Create UDP server-client - version 1

10. Configure UDP server-client - version 2

Linux Commando Remap Caps Lock key for virtual console windows

Remap Caps Lock key for virtual console windows

My last blog entry explains how to use xmodmap to remap the Caps Lock key to the Escape key in X. That takes care of the keyboard mapping when you are in X. What about when you are in a virtual console window? You need to follow the steps below. Make sure that you sudo root before you execute the following commands.
  1. Find out the keycode of the key that you want remapped.

Execute the showkey command as root in a virtual console:

    $ showkey
    kb mode was UNICODE
    
    press any key (program terminates after 10s of last keypress)...
    0x9c
    Hit the Caps Lock key, wait 10 seconds (default timeout), and the showkey command will exit on its own.
    $ showkey
    kb mode was UNICODE
    
    press any key (program terminates after 10s of last keypress)...
    0x9c
    0x3a 
    0xba
    The keycode for the Caps Lock key is 0x3a in hex, or 58 in decimal.

  2. Find out the symbolic name (key symbol) of the key that you want to map to.
    You can list all the supported symbolic names by dumpkeys -l and grep for esc:
    $ dumpkeys -l |grep -i esc 
    0x001b Escape
    0x081b Meta_Escape
  3. Remap the keycode 58 to the Escape key symbol.
    $ (echo `dumpkeys |grep -i keymaps`;  \
       echo keycode 58 = Escape)          \
       | loadkeys -
    Thanks to cjwatson who pointed me to prepending the keymaps statement from dumpkeys. The keymaps statement is a shorthand notation defining what key modifiers you are defining with the key. See man keymaps(5) for more info.
To make the new key mapping permanent, you need to put the loadkeys command in a bootup script.

For my Debian Etch system, I put the
(echo `dumpkeys |grep -i keymaps`; echo keycode 58 = Escape) |loadkeys - command in /etc/rc.local.

Remapping Keys Under Linux

To swap caps lock and control:

# Make the Caps Lock key be a Control key:
xmodmap -e "remove lock = Caps_Lock"
xmodmap -e "add control = Caps_Lock"

# Make the Left Control key be a Caps Lock key:
xmodmap -e "remove control = Control_L"
xmodmap -e "add lock = Control_L"

Questions Answered Below

The instructions in this page apply only to Linux in an X environment (like KDE).

Terminology

How These Relate To One Another

Keycodes, keysyms, and modifiers relate in the following way:

keycode → keysym → modifier (optional)

So for example, on my keyboard:

keycode 38 (the 'a' key) → keysym 0x61 (the symbol 'a')

keycode 50 (the left 'shift' key) → keysym 0xffe1 (the action 'the left shift key is down') → the shift modifier

Note that technically, each keycode can be mapped to more than one keysym. The first mapping applies when no modifier is pressed; the second applies when the shift key is pressed. (I haven't figured out how to use the third and fourth yet.) So for example, the second mapping on my 'a' key is:

keycode 38 (the 'a' key) → keysym 0x41 (the symbol 'A')

In other words, when modifier 'shift' is active, my 'a' key generates an 'A' instead of an 'a'.

Viewing Your Settings

Changing Your Settings

Say you want to map the caps lock key to be the control modifier. You have two sensible choices for how to do this:

Caps Lock Key → Caps Lock action → Control Modifier

Caps Lock Key → Control_L action → Control Modifier

To do the first, you need to change the action → modifier mapping. Do this as follows:

xmodmap -e "remove lock = Caps_Lock"
xmodmap -e "add control = Caps_Lock"

To do the second, you need to change the keycode → action mapping, so you'll need to know the keycode of your caps lock key. To find the keycode for your caps lock key use xev, as described above. Mine is 66. So:

xmodmap -e "keycode 66 = Control_L"

Help!

If you mess things up, the simplest way to fix things is to log out of the window manager and log back in.

For More Information

Notes

By David Vespe, April 2006

Remap Caps Lock

The Caps Lock key on most PC keyboards is in the position where the Control key is on many other keyboards, and vice versa. This can make it difficult for programmers to use the "wrong" kind of keyboard.

This page describes how to RemapCapsLock on different keys in different OperatingSystems.

One really stupid thing about PeeCee keyboards is that manufacturers even realized that putting caps-lock on home row was a bad idea because people kept hitting it with the 'a' key. Did they move it? No, that would be too sensible. They carved a little piece of it off to leave a bigger gap. So now if I re-map a standard PC-10* keyboard so that left-control is in a sensible place, it is still harder to use than it should be. :(

Many people (the majority, clearly) feel that the placement of CTRL below the SHIFT key is a better location for it. However, the backspace key is way out of the way -- it would be better if the CAPSLOCK and backspace keys were swapped.


Unix, Console

If you have loadkeys (as you would under Linux), this should do the trick:
 loadkeys /usr/share/keymaps/i386/qwerty/emacs2.kmap.gz
To reset to the defaults (you may have to switch to another tty and back to undo ctrl-lock):
 loadkeys -d
Unix, X

Under Redhat 8.0, just enable the following line in /etc/X11/XF86Config

        Option      "XkbOptions"        "ctrl:swapcaps"
Replace "swapcaps" with "nocaps" to turn both keys into "Control."

With X, there are at least 2 different ways to remap the keys. One is using xmodmap. For example, man xmodmap shows how to swap the left control key and the CapsLock key:

 ! Swap Caps_Lock and Control_L
 !
 remove Lock = Caps_Lock
 remove Control = Control_L
 keysym Control_L = Caps_Lock
 keysym Caps_Lock = Control_L
 add Lock = Caps_Lock
 add Control = Control_L                 
Many people don't want a CapsLock key at all. They can change the CapsLock key to a Control key by using the following lines in xmodmap:
 clear Lock
 keycode 0x7e = Control_R
 add Control = Control_R 
Maybe you have to change the keycode 0x7e. You can find the keycodes with xev. Furthermore, this only works if you don't have a right control key. I hope somebody has a solution which does not have this restriction.

This solution might be the easiest one. If you do not mind having a dead key on your keyboard, you might disable CapsLock altogether:

 "remove lock = Caps_Lock"  (or just: "clear lock")
A better solution might be this sequence, which is keycode independent and does not remove existing control keys:
 remove Lock = Caps_Lock
 remove Control = Control_L
 keysym Caps_Lock = Control_L
 add Lock = Caps_Lock
 add Control = Control_L
Now, you can use another solution which uses xkb. For that, you will have to find the symbols directory on your unix system. There, you add a file which might be called 'ctrl' containing the following:
 // eliminate the caps lock key completely (replace with control)
 partial modifier_keys
 xkb_symbols "nocaps" {
     key <CAPS>  {  symbols[Group1]= [ Control_L ] };
     modifier_map  Control { <CAPS>, <LCTL> };
 };
This eliminates the caps lock key if included in a keymap. We can do this by changing the file en_US:
 xkb_symbols "pc101" {
     include "ctrl(nocaps)"
     key <RALT> { [ Mode_switch,  Multi_key ] };
augment "us(pc101)" include "iso9995-3(basic101)"

modifier_map Mod3 { Mode_switch }; };

You can then add the keyboard using a line like:
 /usr/X11R6/lib/X11/xkb/xkbcomp -w 1 -R/usr/X11R6/lib/X11/xkb -xkm -m en_US keymap/xfree86 0:0
Now, unfortunately there are probably errors in the text above. Please correct it and make it work for systems other than RedHat Linux.

Red Hat Knowledgebase How can I find information on the maximum amount of memory my system can handle

The dmidecode command can be used to display information from the system's BIOS that includes the maximum memory that the BIOS will support. This information is displayed by dmidecode as type 16 (Physical Memory Array), which can be filtered with the command dmidecode -t 16.

For instance, the following output shows a system that can support a maximum of 16GB of RAM.

Handle 0x0032, DMI type 16, 15 bytes
Physical Memory Array
	Location: System Board Or Motherboard
	Use: System Memory
	Error Correction Type: None
	Maximum Capacity: 16 GB
	Error Information Handle: Not Provided
	Number Of Devices: 4

[Jan 27, 2009] Linux Keyboard Shortcuts Safe Way to Exit During System Freezes

Jan 26, 2009 | Linux Today

"Alt + SysR + K
Kill all processes (including X), which are running on the currently active virtual console.

"Alt + SysRq + E
Send the TERM signal to all running processes except init, asking them to exit."

Magic Tricks With the Sysreq Key(Dec 10, 2008)
Sounds of Crashing Hard Drives(Nov 24, 2008)
Fix Unresponsive or Frozen Linux Computers Using Shortcuts(Nov 11, 2008)
Linux Kernel Magic SysRq Keys in openSUSE for Crash Recovery(Sep 29, 2008)
Rebooting the Magic Way(Aug 22, 2008)
Fix a Frozen System with the Magic SysRq Keys(Sep 17, 2007)

Tips & Tricks

partprobe - inform the OS of partition table changes

One of the major benefits to using Red Hat Enterprise Linux is that once the operating system is up and running, it tends to stay that way. This also holds true when it comes to reconfiguring a system; mostly. One Achilles heel for Linux, until the past couple of years, has been the fact that the Linux kernel only reads partition table information at system initialization, necessitating a reboot any time you wish to add new disk partitions to a running system.

The good news, however, is that disk re-partitioning can now also be handled 'on-the-fly' thanks to the 'partprobe' command, which is part of the 'parted' package.

Using 'partprobe' couldn't be more simple. Any time you use 'fdisk', 'parted', or any other favorite partitioning utility to modify the partition table for a drive, run 'partprobe' after you exit the partitioning utility, and 'partprobe' will let the kernel know about the modified partition table information. If you have several disk drives and want to specify a specific drive for 'partprobe' to scan, you can run 'partprobe <device_node>'

Of course, given a particular hardware configuration, shutting down your system to add hardware may be unavoidable, it's still nice to be given the option of not having to do so and 'partprobe' fills that niche quite nicely.

partprobe [-d] [-s] [devices...]

DESCRIPTION

This manual page documents briefly the partprobe command.

partprobe is a program that informs the operating system kernel of partition table changes, by requesting that the operating system re-read the partition table.

OPTIONS

This program uses short UNIX style options.
-d
Don't update the kernel.
-s
Show a summary of devices and their partitions.
-h
Show summary of options.
-v
Show version of program.

CENTOS/RHEL 5 CONFIGURATION TIPS

Yum & Repositories
I noticed this issue with both CentOS 4 and 5 - Yum will often choose bad mirrors from the mirrorlist file - for example, choosing overseas servers, when an official NZ server exists. And in some cases, the servers it has chosen are horribly slow.

You will probably find that you get better download speeds by editing /etc/yum.repos.d/CentOS-Base.repo and commenting out the mirrorlist lines and setting the baseurl line to point to your preferred local mirror.
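The edited stanza might end up looking roughly like this (the baseurl host is a placeholder for your preferred mirror):

[base]
name=CentOS-$releasever - Base
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
baseurl=http://your.local.mirror/centos/$releasever/os/$basearch/
gpgcheck=1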

Yum-updatesd
CentOS 5 has a new daemon called yum-updatesd, which replaces the old cron job yum update scripts. This script will check frequently for updates, and can be configured to download and/or install them.

However, this daemon is bad for a server, since it doesn't run at a fixed time - I really don't want my server downloading and updating software during the busiest time of day thank-you-very-much!

So, it's bad for a server. Let's disable it with:

service yum-updatesd stop
chkconfig --level 2345 yum-updatesd off

Plus I don't like the idea of having a full blown daemon where a simple cronjob will do the trick perfectly fine - seems like overkill. (although it appears yum-updatesd has some useful features like dbus integration for desktop users)

So, I replace it with my favorite cronjob script approach, by running the following (as root of course):

cat << "EOF" > /etc/cron.daily/yumupdate
 #!/bin/sh
 # install any yum updates
/usr/bin/yum -R 10 -e 0 -d 1 -y update yum > /var/log/yum.cron.log 2>&1
/usr/bin/yum -R 120 -e 0 -d 1 -y update  >> /var/log/yum.cron.log 2>&1
if [ -s /var/log/yum.cron.log ]; then
        /bin/cat /var/log/yum.cron.log | mail root -s "Yum update information" 2>&1
fi
EOF

and if you want to clear up the package cache every week:
cat << "EOF" > /etc/cron.weekly/yumclean
 #!/bin/sh
 # remove downloaded packages
/usr/bin/yum -e 0 -d 0 clean packages
EOF


This will install 2 scripts that get run around 4:00am (as set in /etc/crontab) which will check for updates and download and install any automatically. If there were any updates, it will send out an email, if there were none, it doesn't send anything.

(of course, you need sendmail/whatever_fucking_email_server_you_like configured correctly to get the alerts!)

You can change yum to just download and not install the updates (just RTFM), but I've never had an update break anything - update compatibility and quality is always very high - so I use automatic updates.

CentOS 4 had something very similar to this, with the addition of a bootscript to turn the cronjobs on and off.

* Please check out the update at the bottom of this page for further information on this.


Apache Quirks
If you are using indexing in apache (indexing is when you can browse folders/files), you may find that the browsing page looks small and nasty.

The fix is to edit /etc/httpd/conf/httpd.conf and change the following line:

IndexOptions FancyIndexing VersionSort NameWidth=* HTMLTable
to
IndexOptions FancyIndexing VersionSort NameWidth=*

This should make the index full screen again. I'm not sure if this is an apache bug, a distro bug or some other weird issue, because I'm sure HTMLTable isn't supposed to be all small like that.

(FYI: CentOS 4 did not have the HTMLTable option active)

SSL Certificates
Redhat have moved things around with SSL certificates a lot. What it seems like happened (I have only had a quick look into this), is that they were going to provide a new tool to generate SSL certificates called "genkey" but pulled it out before release.

To make things more fun, they also removed the good old Makefile that was in /etc/httpd/conf/ that allowed you to generate SSL certificates & keys.

However, I found the same Makefile again in /etc/pki/tls/certs/


Vi vs. Vim
If you use vi/vim, you should check this posting out.

That's all the issues that I've come across for now - if I find any more things to note, I'll update this page with the information and put a note on my blog.

/etc/sysconfig/network

Note the NETMASK should be defined in /etc/sysconfig/network-scripts/ifcfg-eth0

/etc/sysconfig/network

The /etc/sysconfig/network file is used to specify information about the desired network configuration. The following values may be used:

How to see the parameters of an ext3 filesystem

# tune2fs -l /dev/mapper/vg00-lv06
tune2fs 1.38 (30-Jun-2005)
Filesystem volume name: <none>
Last mounted on: <not available>
Filesystem UUID: c0615eba-5bb6-443d-81c7-7f3c1eb829b2
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal filetype needs_recovery sparse_super
Default mount options: (none)
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 1310720
Block count: 2621440
Reserved block count: 131072
Free blocks: 2365370
Free inodes: 1309130
First block: 0
Block size: 4096
Fragment size: 4096
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 16384
Inode blocks per group: 512
Filesystem created: Mon May 21 11:16:17 2007
Last mount time: Tue May 6 17:40:40 2008
Last write time: Tue May 6 17:40:40 2008
Mount count: 3
Maximum mount count: 500
Last checked: Thu Apr 3 11:51:39 2008
Check interval: 5184000 (2 months)
Next check after: Mon Jun 2 11:51:39 2008
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 128
Journal inode: 8
Default directory hash: tea
Directory Hash Seed: 36a54cdf-3f8e-482b-9e2c-a48b6ac1d27e
Journal backup: inode blocks

[Nov 30, 2007] freshmeat.net Project details for Expect-lite

About:
Expect-lite is a wrapper for expect, created to make expect programming even easier. The wrapper permits the creation of expect script command files by using special character(s) at the beginning of each line to indicate the expect-lite action. Basic expect-lite scripts can be created by simply cutting and pasting text from a terminal window into a script, and adding '>' '

Release focus: Major feature enhancements

Changes:
The entire command script read subsystem has changed. The previous system read directly from the script file. The new system reads the script file into a buffer, which can be randomly accessed. This permits looping (realistically only repeat loops). Infinite loop protection has been added. Variable increment and decrement have been added to support looping.

Author:
Craig Miller [contact developer]

[Nov 30, 2007] Got more than a gig of RAM and 32-bit Linux Here's how to use it By Bruce Byfield

September 21, 2007 | Linux.com
Nowadays, many machines are running with 2-4 gigabytes of RAM, and their owners are discovering a problem: When they run 32-bit GNU/Linux distributions, their extra RAM is not being used. Fortunately, correcting the problem is only a matter of installing or building a kernel with a few specific parameters enabled or disabled.

The problem exists because 32-bit Linux kernels are designed to access only 1GB of RAM by default. The workaround for this limitation is vaguely reminiscent of the virtual memory solution once used by DOS, with a high memory area of virtual memory being constantly mapped to physical addresses. This high memory can be enabled for up to 4GB by one kernel parameter, or up to 64GB on a Pentium Pro or higher processor with another parameter. However, since these parameters have not been needed on most machines until recently, the standard kernels in many distributions have not enabled them.

Increasingly, many distributions are enabling high memory for 4GB. Ubuntu default kernels have been enabling this process at least since version 6.10, and so have Fedora 7's. By contrast, Debian's default 486 kernels do not. Few distros, if any, enable 64GB by default.

To check whether your kernel is configured to use all your RAM, enter the command free -m. This command gives you the total amount of unused RAM on your system, as well as the size of your swap file, in megabytes. If the total memory is 885, then no high memory is enabled on your system (the rest of the first gigabyte is reserved by the kernel for its own purposes). Similarly, if the result shows over 1 gigabyte but less than 4GB when you know you have more, then the 4GB parameter is enabled, but not the 64GB one. In either case, you will need to add a new kernel to take full advantage of your RAM.
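A quick way to check both is something like this (assuming your distribution ships the kernel config in /boot):

# memory the running kernel can actually see, in megabytes
free -m

# how the running kernel was built with respect to high memory
grep HIGHMEM /boot/config-$(uname -r)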

[Nov 20, 2007] Games with dircolors

eval `dircolors ~/.dir_colors`
alias ls="ls --color=auto"

The command 'dircolors' takes its data from the file ~/.dir_colors and
creates an environment variable LS_COLORS. The command 'ls --color' takes
its colors from the environment variable LS_COLORS.

So, write a suitable ~/.dir_colors file, and execute the command
'dircolors'. To get a starting file for editing, do this:

dircolors -p > ~/.dir_colors

The ~/.dir_colors file so created includes directions on coding the colors
for different kinds of files.

See man dircolors.

[Nov 1, 2007] Changing Gnome behavior to standard UNIX CDE style (applications displayed on a particular desktop are visible only in that desktop's toolbar).

You need to uncheck:

/apps/panel/applets/windows_list_screen/pref/display_in_all_workspaces
