The file system is one of the most important parts of an operating system. The file system stores and manages user data on disk drives, and ensures that what’s read from storage is identical to what was originally written. In addition to storing user data in files, the file system also creates and manages information about files and about itself. Besides guaranteeing the integrity of all that data, file systems are also expected to be extremely reliable and have very good performance.
File systems update their structural information, called metadata, with synchronous writes. Each metadata update may require many separate writes, and if the system crashes partway through the write sequence, the metadata can be left in an inconsistent state. At the next boot the file system check utility (fsck) must walk through the metadata structures, examining and repairing them. This operation takes a very long time on large file systems, and the disk may not contain enough information to correct the structures, resulting in misplaced or removed files.

A journaling file system uses a separate area called a log or journal. Before metadata changes are actually performed, they are recorded in this separate area; only then is the operation carried out. If the system crashes during the operation, there is enough information in the log to "replay" the log record and complete the operation. This approach does not require a full scan of the file system, yielding very quick check times on large file systems, generally a few seconds even for a multi-gigabyte file system. In addition, because all information for the pending operation is saved, no removals or lost+found moves are required. The disadvantage of journaling file systems is the extra write traffic to the journal, which makes them somewhat slower than non-journaled file systems. Some journaling file systems: BeFS, HTFS, JFS, NSS, Ext3, VxFS and XFS.
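The "log first, then apply" idea is easy to sketch in a few lines of Python. This is a toy model of write-ahead logging, not how any real filesystem lays out its journal:

```python
# Toy write-ahead journal: every metadata update is recorded in the
# journal before it is applied, so a crash mid-operation can be replayed.

class ToyJournalFS:
    def __init__(self):
        self.metadata = {}   # the "on-disk" metadata structures
        self.journal = []    # the log area, always written first

    def update(self, key, value, crash_before_apply=False):
        self.journal.append((key, value))   # 1. log the intent
        if crash_before_apply:
            return                          # simulated crash
        self.metadata[key] = value          # 2. perform the operation
        self.journal.pop()                  # 3. retire the log record

    def replay(self):
        # On reboot, complete any operation still in the journal.
        # No full scan of the metadata is needed.
        while self.journal:
            key, value = self.journal.pop(0)
            self.metadata[key] = value

fs = ToyJournalFS()
fs.update("inode 7", "size=4096", crash_before_apply=True)
fs.replay()                      # recovery replays one log record, not a full fsck
print(fs.metadata["inode 7"])    # -> size=4096
```

The point of the sketch is the ordering: because the intent hits the journal before the metadata, recovery only ever has to finish (or discard) the operations listed there.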
Fortunately, a number of other Linux file systems take up where Ext2 leaves off. Indeed, Linux now offers four alternatives to Ext2:
In addition to meeting some or all of the requirements listed above, each of these alternative file systems also supports journaling, a feature certainly demanded by enterprises, but beneficial to anyone running Linux. A journaling file system can simplify restarts, reduce fragmentation, and accelerate I/O. Better yet, journaling file systems make fscks a thing of the past.
If you maintain a system of fair complexity or require high availability, you should seriously consider a journaling file system. Let’s find out how journaling file systems work, look at the four journaling file systems available for Linux, and walk through the steps of installing one of the newer systems, JFS. Switching to a journaling file system is easier than you might think, and once you switch, you’ll be glad you did.
Fun with File Systems
To better appreciate the benefits of journaling file systems, let’s start by looking at how files are saved in a non-journaled file system like Ext2. To do that, it’s helpful to speak the vernacular of file systems.
A file system consists of blocks of data. The number of bytes constituting a block varies depending on the OS. The internal physical structure of a hard disk consists of cylinders. The hard disk is divided into groups of cylinders known as cylinder groups, further divided into blocks.
The file system is made up of several main kinds of blocks: the boot block, the superblock, inode blocks, and data blocks.
A superblock plays an important role during the system boot-up and shutdown process. When the system boots, the details in the superblock are loaded into memory to improve processing speed, and the superblock is then updated at regular intervals from the data in memory. During system shutdown, a program called sync writes the updated data in memory back to the superblock. This process is crucial, because an inaccurate superblock can render the file system unusable, which is precisely why the proper shutdown of a Solaris system is essential.
Because of the critical nature of the superblock, it is replicated at the beginning of every cylinder group. These copies are known as surrogate superblocks. A damaged or corrupted superblock is recovered from one of the surrogate superblocks.
Each inode has a unique number associated with it, called the inode number. The -li option of the ls command displays the inode number of a file:
# ls -li
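A quick way to see inode numbers in action (a sketch for a GNU/Linux userland; stat -c is a GNU option, on stock Solaris you would rely on ls -i alone):

```shell
# Create a scratch file and confirm that ls -i and stat agree on its inode
f=$(mktemp)
ls -li "$f"                        # the inode number is the first column
ino_ls=$(ls -i "$f" | awk '{print $1}')
ino_stat=$(stat -c %i "$f")        # GNU stat; not available on stock Solaris
echo "ls: $ino_ls  stat: $ino_stat"
rm -f "$f"
```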
When a user creates a file in a directory, the kernel allocates a free inode and records the file's attributes in it, allocates data blocks as the file is written, and adds an entry with the file name and inode number to the directory's data block. When a user modifies the file, the inode (size, timestamps) and the affected data blocks are updated.
The data block is the storage unit of data in the Solaris file system. The default size of a data block in the Solaris file system is 8192 bytes. After a block is full, the file is allotted another block. The addresses of these blocks are stored as an array in the Inode.
The first 12 pointers in the array are direct addresses of the file; that is, they point to the first 12 data blocks where the file contents are stored. If the file grows larger than these 12 blocks, then a 13th block is added, which does not contain data. This block, called an indirect block, contains pointers to the addresses of the next set of direct blocks.
If the file grows still larger, then a 14th block is added, which contains pointers to the addresses of a set of indirect blocks. This block is called the double indirect block. If the file grows still larger, then a 15th block is added, which contains pointers to the addresses of a set of double indirect blocks. This block is called the triple indirect block.
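As a back-of-the-envelope check, assume 8192-byte blocks and 4-byte block addresses, so each indirect block holds 2048 pointers (the pointer size is an assumption for illustration; exact figures vary by implementation):

```python
BLOCK = 8192                   # default UFS data block size, in bytes
PTR = 4                        # assumed size of one block address, in bytes
PTRS_PER_BLOCK = BLOCK // PTR  # 2048 addresses fit in one indirect block

direct = 12                    # pointers 1-12: direct data blocks
single = PTRS_PER_BLOCK        # pointer 13: indirect block
double = PTRS_PER_BLOCK ** 2   # pointer 14: double indirect block
triple = PTRS_PER_BLOCK ** 3   # pointer 15: triple indirect block

max_blocks = direct + single + double + triple
print(max_blocks)                          # 8594130956 addressable blocks
print(round(max_blocks * BLOCK / 2**40))   # about 64 (TiB of file data)
```

The triple indirect level dominates: under these assumptions almost all of the addressable space comes from the 15th pointer.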
Hard and soft links are among the great features of Unix. A link is a reference in one directory to a file stored in another directory; a soft link can also refer to a directory. There may be multiple links to a file. Links eliminate redundancy because you do not need to store multiple copies of the same file.
Links are of two types: hard and soft (also known as symbolic).
To create a symbolic link, use the -s option of the ln command. A soft-linked file shows an l as the first character of the permission field displayed by the ls -l command, whereas a hard-linked file does not. A directory can be symbolically linked, but it cannot be hard linked.
Obviously, no file exists with a link count of less than one. The relative path names . and .. are simply links to the current directory and its parent directory. They are present in every directory: a directory stores these two links along with the names and inode numbers of its files, all of which can be listed with the ls -lia option. A directory therefore has a minimum link count of two, and the count increases as subdirectories are added. Whenever you issue a command to list a file's attributes, the system uses the inode number to locate the inode block and retrieve the corresponding data.
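The following shell sketch demonstrates both kinds of links and the link count (it assumes GNU coreutils for stat -c):

```shell
# Hard links share an inode; symbolic links get their own
dir=$(mktemp -d)
cd "$dir"
echo hello > file
ln file hard               # hard link: the inode's link count becomes 2
ln -s file soft            # symbolic link: ls -l shows an 'l' in column one
ls -li                     # 'file' and 'hard' show the same inode number
stat -c %h file            # prints 2: two directory entries reference the inode
cd / && rm -rf "$dir"
```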
Each file system used in Solaris is intended for a specific purpose.
The root file system is at the top of an inverted tree structure. It is the first file system that the kernel mounts during booting, and it contains the kernel and device drivers. The / directory is also called the mount point directory of the file system. All references in the file system are relative to this directory. The entire file system structure is attached to the main system tree at the root directory during the process of mounting, hence the name. During the creation of the file system, a lost+found directory is created within the mount point directory. This directory is used to hold any orphaned files found during the customary file system check, which you perform with the fsck command.
/ The root directory, located at the top of the Unix file system. It is represented by the "/" (forward slash) character.
/usr Contains commands and programs for system-level usage and administration.
/var Contains system log files and spooling files, which grow in size with system usage.
/home Contains user home directories.
/opt Contains optional third-party software and applications.
/tmp Contains temporary files, which are cleared each time the system is booted.
/proc Contains information about all active processes.
You create file systems with the newfs command. The newfs command accepts only logical raw device names. The syntax is as follows:
newfs [ -v ] [ mkfs-options ] raw-special-device
For example, to create a file system on the disk slice c0t3d0s4, the following command is used:
# newfs -v /dev/rdsk/c0t3d0s4
The -v option prints the actions in verbose mode. The newfs command calls the mkfs command to create a file system. You can invoke the mkfs command directly by specifying a -F option followed by the type of file system.
Mounting file systems is the next logical step after creating them. Mounting refers to naming the file system and attaching it to the inverted tree structure, which enables access from any point in the structure. A file system can be mounted during booting, manually from the command line, or automatically if you have enabled the automount feature.
With remote file systems, the server shares the file system over the network and the client mounts it.
The / and /usr file systems, as mentioned earlier, are mounted during booting. To mount a file system, attach it to a directory anywhere in the main inverted tree structure. This directory is known as the mount point. The syntax of the mount command is as follows:
# mount <logical block device name> <mount point>
The following steps mount a file system c0t2d0s7 on the /export/home directory:
# mkdir /export/home
# mount /dev/dsk/c0t2d0s7 /export/home
You can verify the mounting by using the mount command, which lists all the mounted file systems.
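Mount information can also be inspected programmatically. A small Python sketch (it assumes Linux, where /proc/mounts exposes the kernel's mount table; on Solaris the equivalent file is /etc/mnttab):

```python
import os

# The root directory is always a mount point
print(os.path.ismount("/"))            # True

# On Linux, /proc/mounts is the kernel's list of mounted file systems
with open("/proc/mounts") as mounts:
    for line in mounts:
        device, mount_point, fstype = line.split()[:3]
        if mount_point == "/":
            print(device, fstype)      # e.g. the root device and its fs type
            break
```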
Note: If the mount point directory has any content prior to the mounting operation, that content is hidden and remains inaccessible until the file system is unmounted. Data is stored on and retrieved from the physical disk where the file system is mounted.
Although there are no rigid rules for laying out file systems on the physical disk, slices are usually allocated as follows:
0. Root or /— Files and directories of the OS.
The slices shown above are all allocated on a single disk. However, there is no requirement that all file systems be located on one disk; they can span multiple disks. Slice 2 refers to the entire disk, so if you want to allocate an entire disk to one file system, create it on slice 2. The mount command supports a variety of useful options.
|-o largefiles||Files larger than 2GB are supported in the file system.|
|-o nolargefiles||Does not mount file systems with files larger than 2GB.|
|-o rw||File system is mounted with read and write permissions.|
|-o ro||File system is mounted with read-only permission.|
|-o bg||Repeats mount attempts in the background. Used with non-critical file systems.|
|-o fg||Repeats mount attempts in the foreground. Used with critical file systems.|
|-p||Prints the list of mounted file systems in /etc/vfstab format.|
|-m||Mounts without making an entry in the /etc/mnttab file.|
|-O||Performs an Overlay mount. Mounts over an existing mount point.|
A file system can be unmounted with the umount command. The following is the syntax for umount:
umount <mount point or logical block device name>

A file system cannot be unmounted while it is in use, or when the umount command is issued from a subdirectory within the file system's mount point. Note: A file system can be unmounted forcibly with the -f option of the umount command. Please refer to the man page to learn about the use of these options.
The umountall command is used to unmount a group of file systems. The umountall command unmounts all file systems in the /etc/mnttab file except the /, /usr, /var, and /proc file systems. If you want to unmount all the file systems from a specified host, use the -h option. If you want to unmount all the file systems mounted from remote hosts, use the -r option.
The /etc/vfstab (Virtual File System Table) file plays a very important role in system operations. This file contains one record for every device that has to be automatically mounted when the system enters run level 2.
|device to mount||The logical block name of the device to be mounted. It can also be a remote resource name for NFS.|
|device to fsck||The logical raw device name to be subjected to the fsck check during booting. It is not applicable for read-only file systems, such as High Sierra File System (HSFS) and network File systems such as NFS.|
|Mount point||The mount point directory.|
|FS type||The type of the file system.|
|fsck pass||The number used by fsck to decide whether the file system is to be checked.|
0— File system is not checked.
1— File system is checked sequentially.
2— File system is checked simultaneously along with other file systems where this field is set to 2.
|Mount at boot||Determines whether the file system is mounted by the mountall command at boot time. The options are either yes or no.|
|Mount options||The mount options to be supported by the mount command while the particular file system is mounted.|
Note the no values in this field for the root, /usr, and /var file systems; these are mounted by default. The fd entry refers to the floppy disk, and the swap entry refers to the tmpfs mounted on /tmp.
A sample vfstab file looks like:
#device             device              mount         FS     fsck   mount    mount
#to mount           to fsck             point         type   pass   at boot  options
#
fd                  -                   /dev/fd       fd     -      no       -
/proc               -                   /proc         proc   -      no       -
/dev/dsk/c0t0d0s4   -                   -             swap   -      no       -
/dev/dsk/c0t0d0s0   /dev/rdsk/c0t0d0s0  /             ufs    1      no       -
/dev/dsk/c0t0d0s6   /dev/rdsk/c0t0d0s6  /usr          ufs    1      no       -
/dev/dsk/c0t0d0s3   /dev/rdsk/c0t0d0s3  /var          ufs    1      no       -
/dev/dsk/c0t0d0s7   /dev/rdsk/c0t0d0s7  /export/home  ufs    2      yes      -
/dev/dsk/c0t0d0s5   /dev/rdsk/c0t0d0s5  /opt          ufs    2      yes      -
/dev/dsk/c0t0d0s1   /dev/rdsk/c0t0d0s1  /usr/openwin  ufs    2      yes      -
swap                -                   /tmp          tmpfs  -      yes      -
The /etc/mnttab file comprises a table that defines which partitions and/or disks are currently mounted by the system.
The /etc/mnttab file contains the following details about each mounted file system:
A sample mnttab file:
/dev/dsk/c0t0d0s0 / ufs rw,intr,largefiles,xattr,onerror=panic,suid,dev=2200000 1014366934
/dev/dsk/c0t0d0s6 /usr ufs rw,intr,largefiles,xattr,onerror=panic,suid,dev=2200006 1014366934
/proc /proc proc dev=4300000 1014366933
mnttab /etc/mnttab mntfs dev=43c0000 1014366933
fd /dev/fd fd rw,suid,dev=4400000 1014366935
/dev/dsk/c0t0d0s3 /var ufs rw,intr,largefiles,xattr,onerror=panic,suid,dev=2200003 1014366937
swap /var/run tmpfs xattr,dev=1 1014366937
swap /tmp tmpfs xattr,dev=2 1014366939
/dev/dsk/c0t0d0s5 /opt ufs rw,intr,largefiles,xattr,onerror=panic,suid,dev=2200005 1014366939
/dev/dsk/c0t0d0s7 /export/home ufs rw,intr,largefiles,xattr,onerror=panic,suid,dev=2200007 1014366939
/dev/dsk/c0t0d0s1 /usr/openwin ufs rw,intr,largefiles,xattr,onerror=panic,suid,dev=2200001 1014366939
-hosts /net autofs indirect,nosuid,ignore,nobrowse,dev=4580001 1014366944
auto_home /home autofs indirect,ignore,nobrowse,dev=4580002 1014366944
-xfn /xfn autofs indirect,ignore,dev=4580003 1014366944
sun:vold(pid295) /vol nfs ignore,dev=4540001 1014366950
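Because every line carries the same fields in the same order (special device, mount point, fstype, comma-separated options, mount time), the table is straightforward to parse. A minimal Python sketch, using a shortened sample in the same format:

```python
# A shortened sample in /etc/mnttab format: special device, mount point,
# fstype, comma-separated options, and mount time (seconds since the epoch)
SAMPLE = """\
/dev/dsk/c0t0d0s0 / ufs rw,intr,largefiles,xattr,onerror=panic,suid,dev=2200000 1014366934
/proc /proc proc dev=4300000 1014366933
swap /tmp tmpfs xattr,dev=2 1014366939
"""

def parse_mnttab(text):
    entries = []
    for line in text.splitlines():
        special, mount_point, fstype, options, mtime = line.split()
        entries.append({
            "special": special,
            "mount_point": mount_point,
            "fstype": fstype,
            "options": options.split(","),
            "time": int(mtime),
        })
    return entries

for entry in parse_mnttab(SAMPLE):
    print(entry["mount_point"], entry["fstype"])
```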
Some applications and processes create temporary files that occupy a lot of hard disk space. As a result, it is necessary to impose a restriction on the size of the files that are created.
Solaris provides tools to control the storage. They are:
The ulimit command is a built-in shell command, which displays the current file size limit. The default value for the maximum file size, set inside the kernel, is 1500 blocks. The following syntax displays the current limit:
$ ulimit -a
time(seconds)        unlimited
file(blocks)         unlimited
data(kbytes)         unlimited
stack(kbytes)        8192
coredump(blocks)     unlimited
nofiles(descriptors) 256
memory(kbytes)       unlimited
If the limit is not set, it reports as unlimited.
The system administrator and individual users can change this value to set the file size limit at the system level and at the user level, respectively.
For example, the following syntax sets the file size limit to 1600 blocks:
# ulimit 1600
# ulimit -a
time(seconds)        unlimited
file(blocks)         1600
data(kbytes)         unlimited
stack(kbytes)        8192
coredump(blocks)     unlimited
nofiles(descriptors) 256
memory(kbytes)       unlimited
#
The file size can be limited at the system level or the user level. To set it at the system level, change the value of the ulimit variable in the /etc/profile file. To set it at the user level, change the value in the .profile file present in the user's home directory. The user-level setting always takes precedence over the system-level setting. It is the user's profile file that sets the working environment.
Note: The ulimit values set at the user level and system level cannot exceed the default ulimit value set in the kernel.
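Because ulimit is a shell built-in, a limit lowered in a subshell does not leak back into the parent shell, which makes it safe to experiment:

```shell
# Lower the file size limit (in 512-byte blocks) inside a subshell only
( ulimit -f 1600; ulimit -f )   # prints 1600 inside the subshell
ulimit -f                        # the parent shell's limit is untouched
```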
May 27, 2017 | opensource.com

EXT4
The EXT4 filesystem primarily improves performance, reliability, and capacity. To improve reliability, metadata and journal checksums were added. To meet various mission-critical requirements, the filesystem timestamps were extended to nanosecond resolution. The addition of two high-order bits in the timestamp field defers the Year 2038 problem until the year 2446, at least for EXT4 filesystems.
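The arithmetic behind those dates can be checked directly: a signed 32-bit seconds counter tops out in January 2038, and extending the top of the range by two bits, to 2^34 - 2^31 seconds past the epoch, lands in the year 2446. A sketch (a simplification of EXT4's actual timestamp encoding):

```python
from datetime import datetime, timezone

# A signed 32-bit seconds-since-1970 counter overflows in 2038:
print(datetime.fromtimestamp(2**31 - 1, tz=timezone.utc))   # 2038-01-19 ...

# Two extra high-order bits extend the top of the range to
# 2**34 - 2**31 seconds past the epoch (the bottom of the signed
# range, 1901, is unchanged), which lands in the year 2446:
print(datetime.fromtimestamp(2**34 - 2**31 - 1, tz=timezone.utc).year)  # 2446
```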
In EXT4, data allocation was changed from fixed blocks to extents. An extent is described by its starting and ending place on the hard drive. This makes it possible to describe very long, physically contiguous files in a single inode pointer entry, which can significantly reduce the number of pointers required to describe the location of all the data in larger files. Other allocation strategies have been implemented in EXT4 to further reduce fragmentation.
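The saving is easy to quantify. Under block mapping, a file needs one pointer per data block; under extents, one descriptor per contiguous run, and a single EXT4 extent can cover up to 32,768 blocks. An illustrative calculation assuming 4 KB blocks:

```python
BLOCK = 4096                  # assumed EXT4 block size, in bytes
MAX_EXTENT_BLOCKS = 32768     # one extent can describe at most 2**15 blocks

size = 1 << 30                # a 1 GB file
blocks = (size + BLOCK - 1) // BLOCK      # data blocks the file occupies

pointers = blocks             # EXT2/EXT3-style mapping: one pointer per block
extents = (blocks + MAX_EXTENT_BLOCKS - 1) // MAX_EXTENT_BLOCKS

print(pointers)               # 262144 block pointers
print(extents)                # 8 extents if the file is physically contiguous
```

A quarter of a million pointers versus eight descriptors is why extents shrink the metadata for large contiguous files so dramatically.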
EXT4 reduces fragmentation by scattering newly created files across the disk so that they are not bunched up in one location at the beginning of the disk, as many early PC filesystems did. The file-allocation algorithms attempt to spread the files as evenly as possible among the cylinder groups and, when fragmentation is necessary, to keep the discontinuous file extents as close as possible to others in the same file to minimize head seek and rotational latency as much as possible. Additional strategies are used to pre-allocate extra disk space when a new file is created or when an existing file is extended. This helps to ensure that extending the file will not automatically result in its becoming fragmented. New files are never allocated immediately after existing files, which also prevents fragmentation of the existing files.
Aside from the actual location of the data on the disk, EXT4 uses functional strategies, such as delayed allocation, to allow the filesystem to collect all the data being written to the disk before allocating space to it. This can improve the likelihood that the data space will be contiguous.
Older EXT filesystems, such as EXT2 and EXT3, can be mounted as EXT4 to make some minor performance gains. Unfortunately, this requires turning off some of the important new features of EXT4, so I recommend against this.
EXT4 has been the default filesystem for Fedora since Fedora 14.
An EXT3 filesystem can be upgraded to EXT4 using the procedure described in the Fedora documentation; however, its performance will still suffer due to residual EXT3 metadata structures.
The best method for upgrading to EXT4 from EXT3 is to back up all the data on the target filesystem partition, use the mkfs command to write an empty EXT4 filesystem to the partition, and then restore all the data from the backup.
Jan 13, 2015 | cyberciti.biz
As my journey with Linux and the Unix shell continues, I have made a few mistakes. I accidentally deleted the /tmp folder. To restore it, all you have to do is:

mkdir /tmp
chmod 1777 /tmp
chown root:root /tmp
ls -ld /tmp
The storage industry continues to make the same mistakes over and over again, and enterprises continue to take vendors' bold statements as facts. Previously, we introduced our two-part series, "The Evolution of Stupidity," explaining how issues seemingly resolved more than 20 years ago are again rearing their heads. Clearly, the more things change, the more they stay the same.
This time I ask, why do we continue to believe that the current evolutionary file system path will meet our needs today and in the future and cost nothing? Let's go back and review a bit of history for free and non-free systems file systems.
Time Machine -- Back to the Early 1980s
My experiences go back only to the early 1980s, but we have repeated history a few times since then. Why can we not seem to remember history, learn from it, or even learn about it? It never ceases to amaze me. I talk to younger people, and more often than not, they say that they do not want to hear about history, just about the present and how they are going to make the future better. I coined a saying (at least I think I coined it) in the late 1990s: there are no new engineering problems, just new engineers solving old problems. I said this while helping someone develop a new file system using technology and ideas whose design I had helped optimize around 10 years earlier.
In the mid-1980s, most of the open-system file systems came as part of a standard Unix release from USL. A few vendors, such as Cray and Amdahl, wrote their own file systems, generally because the standard UNIX file system did not meet the requirements of the day. UFS on Solaris was derived from the Berkeley Fast File System, whose design in turn drew on ideas reaching back to Multics in the 1960s. That brings us to the late 1980s, by which time we had a number of high-performance file systems from companies such as Convex, MultiFlow and Thinking Machines. Everyone who had larger systems had their own file system, and everyone was trying to address many, if not all, of the same issues. In my opinion, those issues were the scalability of:
- Metadata performance
- Recovery performance
- Small block performance
- Large block performance
- Storage management
The keyword here is scalability. Remember, during this time disk drive density was growing very rapidly and performance was scaling far better than it is today. Some of the vendors began the process of looking at parallel systems and some began charging for file systems that were once free. Does any of this sound like what I said in a recent blog post, "It's like deja-vu, all over again" (Yogi Berra)? But since this article is about stupidity, let's also remember the quote from another Yogi, Yogi Bear the cartoon character, "I'm smarter than the average bear!" and ask the question: Is the industry any smarter?
Around 1990, Veritas released VxFS, the first commercial UNIX file system. This file system tried to address all of the bullet points above except storage management, which Veritas added later with VxVM. VxFS was revolutionary for commercial UNIX implementations at the time. Most of the major vendors used this product in some fashion, either supporting it or OEMing it. Soon Veritas added things like the DB edition, which removed some of the POSIX-required write-lock restrictions.
While Veritas was taking over the commercial world in the 1990s and making money on the file system, Silicon Graphics (SGI) decided to write its own file system, called XFS, which was released in the mid-1990s. It was later open sourced and had some characteristics similar to VxFS (imagine that), given that some of the developers were the same people. By the late 1990s and early 2000s, a number of vendors had shared file systems, but in the HPC community you had to pay for most of them. Most were implemented with a single metadata server and clients. Meanwhile, a smaller number of vendors were trying to solve large shared-data problems with a shared name space and distributed allocation of space.
Guess what? None of these file systems were free, and all of them were trying to address the five areas noted above. From about 2004 until Sun Microsystems purchased Cluster File Systems (CFS) in 2007, there was one exception, a free parallel file system: Lustre. But "free" is relative, because for much of that time significant funding was coming from the U.S. government. It was not long after the funding ran out that Sun Microsystems purchased the company that developed the Lustre file system, hoping to recoup the purchase cost by developing hardware around Lustre.
- The Evolution of Stupidity: Research (Don't Repeat) the Storage Past
- The Evolution of Stupidity: File Systems
At the same time, on the commercial front, the move to Linux was in full swing. Enter the XFS file system, which came with many standard Linux distributions and met many requirements. Appliance-based storage from the NAS vendors also met many of the performance requirements and was far easier to manage than provisioning file systems from the crop of vendors selling them.
Now you have everyone moving to free file systems, not from vendors like in the 1980s but from the Linux distribution or from the NAS appliances vendors. Storage is purchased with a built-in file system.
This is all well and good, but now I am seeing the beginnings of change back to the early 1990s. Remember the saying that railroad executives in the 1920s and 1930s did not realize they were in the transportation business? Rather, they saw themselves as being only in the railroad business and thus did not embrace the airline industry. Similarly, NAS vendors do not seem to realize they are in the scalable storage business, and large shared file system vendors are now building appliances to better address many of the five bullets above.
Why Are We Going Around in Circles?
It seems to me that we are going around in circles. The 1980s are much like the early 2000s in the file system world, and the early 1990s are like the mid-2000s. The mid-1990s are similar to what we are going into again. The same is likely true for other areas of computing, as I showed for storage in the previous article. If we all thought about it, the same could be said for computational design with scalar processors, vector processors, GPUs and FPGAs, today and yesteryear.
So everything is new again every 20 years or so, and the solutions are not really that different. Why? Is it because no one remembers the past? Is it because everyone thinks they are smarter than their manager was when he or she was doing the work 20 years ago? Or is it something else entirely: perhaps the market simply mimics other cycles in life, like fashion, food preparation and myriad other things.
Almost 20 years ago, some friends of mine at Cray Research had the idea of separating file system metadata and data onto different storage technologies, since data and metadata have different access patterns. Off and on over the past 20 years, file systems have done this, but the concept has never really caught on as a must-have. I am now hearing rumblings that lots of people are talking about doing this with xyz file system. Was the earlier neglect NIH (not invented here)? In some cases, I think yes. The more I think about it, the clearer it becomes that there is no single explanation, and if I ever did figure it out, I would be playing the futures market rather than doing what I am doing. We all need to learn from the past if we are going to break this cycle and make dramatic changes to technology.
It has been about 20 years since the last real changes were made to POSIX. I am now hearing from a number of circles that POSIX limitations are constraining the five factors above. If we change POSIX to support parallel I/O, I hope we look beyond today and think of the future.
Henry Newman is CEO and CTO of Instrumental Inc. and has worked in HPC and large storage environments for 29 years. The outspoken Mr. Newman initially went to school to become a diplomat, but was firmly told during his first year that he might be better suited for a career that didn't require diplomatic skills. Diplomacy's loss was HPC's gain.
Guide to Linux Filesystem Mastery
Journaling File Systems Linux Magazine
JFS for Linux: http://oss.software.ibm.com/jfs
Linux XFS: http://oss.sgi.com/projects/xfs
Extended Attributes & Access Controls Lists: http://acl.bestbits.at
FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available in our efforts to advance understanding of environmental, political, human rights, economic, democracy, scientific, and social justice issues, etc. We believe this constitutes a 'fair use' of any such copyrighted material as provided for in section 107 of the US Copyright Law. In accordance with Title 17 U.S.C. Section 107, the material on this site is distributed without profit exclusively for research and educational purposes. If you wish to use copyrighted material from this site for purposes of your own that go beyond 'fair use', you must obtain permission from the copyright owner.
Copyright © 1996-2016 by Dr. Nikolai Bezroukov. www.softpanorama.org was created as a service to the UN Sustainable Development Networking Programme (SDNP) in the author's free time. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License.
Copyright for original materials belongs to their respective owners. Quotes are made for educational purposes only, in compliance with the fair use doctrine.
Last modified: May 31, 2017