Softpanorama
May the source be with you, but remember the KISS principle ;-)

Contents : Bulletin : Scripting in shell and Perl : Network troubleshooting : History : Humor

Slightly Skeptical View on Enterprise Unix Administration

News Unix/Linux Internals Recommended Books Recommended Links Unix System Monitoring Job schedulers
Admin Horror Stories HP Operations Manager iLO Dell DRAC ALOM Working with serial console
Advanced Linux Administration Red Hat Enterprise Linux Administration SUSE Linux Enterprise Server (SLES) Solaris AIX Administration HP-UX Administration
Networking TCP Performance Tuning Redhat Network Configuration Classic Network Utilities Network Troubleshooting Tools  TCP/IP Network Troubleshooting
Linux Multipath The Linux Logical Volume Manager (LVM) Nagios in Large Enterprise Environment System Activity Reporter (sar) Authentication Unix config management
Helpdesk software Software Distribution Assets management Project Management Unix to Unix migration Working with Console
Backup Baseliners Classic Unix Tools Perl admintools Software Distribution Performance Tuning
Unix filesystem navigation Bash history and bang commands profile and RC-files Orthodox File Managers SSH for System Administrators Network File System (NFS)
Access Control in Operating Systems Enterprise System Management Resources Reverse Cloud as Alternative to Cloud Computing Logs Analysis SMTP Mail Tivoli
Social Problem in Enterprise Unix Administration IBM Humor Tips  History Humor Etc
The KISS rule can be expanded as: Keep It Simple, Sysadmin ;-)

Additional useful material on the topic can also be found in an older article Solaris vs Linux:

Abstract

Introduction

Nine factors framework for comparison of two flavors of Unix in a large enterprise environment

Four major areas of Linux and Solaris deployment

Comparison of internal architecture and key subsystems

Security

Hardware: SPARC vs. X86

Development environment

Solaris as a cultural phenomenon

Using Solaris-Linux enterprise mix as the least toxic Unix mix available

Conclusions

Acknowledgements

Webliography

Here are my notes/reflections on the sysadmin problem in the strange (and typically pretty toxic) IT departments of large corporations:




NEWS CONTENTS

Old News ;-)

2016 2015 2014 2013 2012 2011 2010 2009 2008
2007 2006 2005 2004 2003 2002 2001 2000 1999
  “I appreciate Woody Allen’s humor because one of my safety valves is an appreciation for life’s absurdities. His message is that life isn’t a funeral march to the grave. It’s a polka.”

-- Dennis Kucinich

[Apr 08, 2014] The Unix sysadmin's guide to getting along with co-workers By Sandra Henry-Stocker

 ITworld

One of the top strategies for managing relationships at work is to always maintain an appropriate level of humor. It doesn't pay to be goofy, but a few running jokes and inside humor can lighten those hard days when you'd otherwise be inclined to beat your head against the wall or cry out in frustration. Keep your sense of humor, even when the going gets rough. Spend a little time away from the office together if you can. Share some personal events. Don’t base your entire relationship on trouble tickets and backups.

Another, perhaps related, tactic is to remember that you are not your job. I've had to remind myself of this time and time again. To a large degree, I often let my career become too big a part of my self-definition. One way around this -- other than having a deeply satisfying personal life (which hasn't always worked for me) -- is to base some part of your professional identity well beyond the walls of the building in which you work. Join professional organizations. Meet people at conferences and stay in touch. Develop and share tutorials on those things you're really good at. Find ways to use your skills that provide you with an independent sense of your worth. And don’t lose track of the fact that your coworkers are not their jobs either.

Try to avoid becoming isolated, even when your work is primarily independent of the work of your coworkers. For several years, I worked for a guy who cut me off from everything else going on in our division. He'd drop by my office once a week to ask what I'd been working on and then disappear for a week while maintaining conspiracy theories about how his boss was intent on making everything we worked on fail. Having connections outside the company -- my writing and part-time teaching -- helped me deal with the isolation, but I don't ever want to work like that again. In retrospect, I should have found some way to better understand and deal with whatever politics were feeding this situation, but I survived and he didn't.

Another lesson -- behave professionally. Make peace with your big disappointments without allowing resentment to build up, leaving you bitter or impacting the quality of your work or your relationships with your coworkers.

[Apr 08, 2014] 7 habits of highly successful Unix admins By Sandra Henry-Stocker

April 05, 2014 | itworld.com

Unix admins generally work a lot of hours, juggle a large set of priorities, get little credit for their work, come across as arrogant to admins of other persuasions, tend to prefer elegant solutions to even the simplest of problems, and take great pride in their ability to apply regular expressions to any challenge that comes their way....

You can spend 50-60 hours a week managing your Unix servers and responding to your users' problems and still feel as if you're not getting much done, or you can adopt some good work habits that will both make you more successful and prepare you for the next round of problems.

  1. Habit 1: Don't wait for problems to find you. One of the best ways to avoid emergencies that can throw your whole day out of kilter is to be on the alert for problems in their infancy. I have found that installing scripts on the servers that report unusual log entries, check performance and disk space statistics, report application failures or missing processes, and email me reports when anything looks "off" can be of considerable value. The risks are getting so much of this kind of email that you don't actually read it, or failing to notice when these messages stop arriving or start landing in your spam folder. Noticing what messages *aren't* arriving is not unlike noticing who from your team of 12 or more people hasn't shown up for a meeting. (A minimal sketch of such a check script appears after this list.)

    Being proactive, you are likely to spot a number of problems long before they turn into outages and before your users notice the problems or find that they can no longer get their work done. It's also extremely beneficial if you have the resources needed to plan for disaster. Can you fail over a service if one of your primary servers goes down? Can you rely on your backups to rebuild a server environment quickly? Do you test your backups periodically to be sure they are complete and usable? ....
     
  2. Habit 2: Know your tools and your systems. Probably the best way to recognize that one of your servers is in trouble is to know how that server looks under normal conditions. If a server typically uses 50% of its memory and starts using 99%, you're going to want to know what is different. What process is running now that wasn't before? What application is using more resources than usual?

    ... ...  ...

  3. Habit 3: Prioritize, prioritize, prioritize. Putting first things first is something of a no-brainer when it comes to how you organize your work, but sometimes selecting which priority problem qualifies as "first" may be more difficult than it seems. ....
     
  4. Habit 4: Perform post mortems, but don't get lost in them ...If you do figure out why something broke, not just what happened, it's a good idea to keep some kind of record that you or someone else can find if the same thing happens months or years from now. As much as I'd like to learn from the problems I have run into over the years, I have too many times found myself facing a problem and saying "I've seen this before ..." and yet not remembered the cause or what I had done to resolve the problem. Keeping good notes and putting them in a reliable place can save you hours of time somewhere down the line.
  5. Habit 5: Document your work. In general, Unix admins don't like to document the things that they do, but some things really warrant the time and effort. I have built some complicated tools and enough of them that, without some good notes, I would have to retrace my steps just to remember how one of these processes works. ...In fact, I sometimes have to stop and ask myself "wait a minute; how does this one work?" Some of the best documentation that I have prepared for myself outlines the processes and where each piece is run, displays data samples at each stage in the process and includes details of how and when each process runs.
  6. Habit 6: Fix the problem AND explain. Good Unix admins will always be responsive to the people they are supporting, acknowledge the problems that have been reported and let their users know when they're working on them. If you take the time to acknowledge a problem when it's reported, inform the person reporting the problem when you're actually working on the problem, and let the user know when the problem has been fixed, your users are likely to feel a lot less frustrated and will be more appreciative of the time you are spending helping them....
     
  7. Habit 7: Make time for yourself. As I've said in other postings, you are not your job. Taking care of yourself is an important part of doing a good job. Don't chain yourself to your desk. Walk around now and then, take mental breaks, and keep learning -- especially things that interest you. If you look after your well being, renew your energy, and step away from your work load for brief periods, you're likely to be both happier and more successful in all aspects of your life.

    Read more of Sandra Henry-Stocker's Unix as a Second Language blog  and follow the latest IT news at ITworld, Twitter and Facebook.
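
Habits 1 and 2 come down to running routine checks against known baselines before anyone files a ticket. Below is a minimal sketch of such a watchdog in plain shell, assuming a Linux host with journalctl and a working mail command; the thresholds, the watched process list and the admin@example.com recipient are illustrative assumptions, not recommendations for any particular environment.

#!/bin/bash
# Minimal proactive-check sketch in the spirit of Habits 1 and 2: compare the
# current state against simple baselines and mail a report only when something
# looks "off". Thresholds, watched processes and the mail address are
# illustrative assumptions.

DISK_LIMIT=85                 # flag any filesystem above this % used
MEM_LIMIT=90                  # flag memory usage above this %
WATCHED_PROCS="sshd crond"    # processes that must be running
MAILTO="admin@example.com"    # hypothetical recipient

report=""

# 1. Disk space on local filesystems.
while read -r fs used mount; do
    used=${used%\%}
    [ "$used" -ge "$DISK_LIMIT" ] && report+="Disk: $fs at ${used}% ($mount)"$'\n'
done < <(df -P -l | awk 'NR>1 {print $1, $5, $6}')

# 2. Memory against the "normally 50%, suddenly 99%" baseline from Habit 2.
mem_used=$(free | awk '/^Mem:/ {printf "%d", $3/$2*100}')
[ "$mem_used" -ge "$MEM_LIMIT" ] && report+="Memory: ${mem_used}% in use"$'\n'

# 3. Missing processes.
for p in $WATCHED_PROCS; do
    pgrep -x "$p" >/dev/null || report+="Process not running: $p"$'\n'
done

# 4. Unusual log entries: anything at priority "err" or above in the last hour.
errors=$(journalctl --since "1 hour ago" -p err -q 2>/dev/null | tail -n 20)
[ -n "$errors" ] && report+="Recent errors:"$'\n'"$errors"$'\n'

# Mail only when something was flagged.
[ -n "$report" ] && printf '%s\n' "$report" | mail -s "$(hostname): health check" "$MAILTO"
exit 0

Run from cron every 15 minutes or so, a script like this mails only when something is flagged, which keeps the noise down; the flip side, as Habit 1 warns, is that you also have to notice when the reports stop arriving at all.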
     

NSA hacks system administrators, new leak reveals

In its quest to take down suspected terrorists and criminals abroad, the United States National Security Agency has adopted the practice of hacking the system administrators that oversee private computer networks, new documents reveal.


The Intercept has published a handful of leaked screenshots taken from an internal NSA message board where one spy agency specialist spoke extensively about compromising not the computers of specific targets, but rather the machines of the system administrators who control entire networks.

Journalist Ryan Gallagher reported that Edward Snowden, a former sys admin for NSA contractor Booz Allen Hamilton, provided The Intercept with the internal documents, including one from 2012 that’s bluntly titled “I hunt sys admins.”

According to the posts — some labeled “top secret” — NSA staffers should not shy away from hacking sys admins: a successful offensive mission waged against an IT professional with extensive access to a privileged network could provide the NSA with unfettered capabilities, the analyst acknowledged.

“Who better to target than the person that already has the ‘keys to the kingdom’?” one of the posts reads.

“They were written by an NSA official involved in the agency’s effort to break into foreign network routers, the devices that connect computer networks and transport data across the Internet,” Gallagher wrote for the article published late Thursday. “By infiltrating the computers of system administrators who work for foreign phone and Internet companies, the NSA can gain access to the calls and emails that flow over their networks.”

Since last June, classified NSA materials taken by Snowden and provided to certain journalists have exposed an increasing number of previously secret surveillance operations that range from purposely degrading international encryption standards and implanting malware in targeted machines, to tapping into fiber-optic cables that transfer internet traffic and even vacuuming up data as it's moved into servers in a decrypted state.

The latest leak suggests that some NSA analysts took a much different approach when tasked with trying to collect signals intelligence that otherwise might not be easily available. According to the posts, the author advocated a technique that involves identifying the IP address used by the network’s sys admin, then scouring other NSA tools to see what online accounts used those addresses to log in. Then, using a previously disclosed NSA tool that tricks targets into installing malware by misdirecting them to fake Facebook servers, the intelligence analyst can hope that the sys admin’s computer is sufficiently compromised and exploited.

Once the NSA has access to the same machine a sys admin does, American spies can mine for a trove of possibly invaluable information, including maps of entire networks, log-in credentials, lists of customers and other details about how systems are wired. In turn, the NSA has found yet another way to, in theory, watch over all traffic on a targeted network.

“Up front, sys admins generally are not my end target. My end target is the extremist/terrorist or government official that happens to be using the network some admin takes care of,” the NSA employee says in the documents.

When reached for comment by The Intercept, NSA spokesperson Vanee Vines said that, “A key part of the protections that apply to both US persons and citizens of other countries is the mandate that information be in support of a valid foreign intelligence requirement, and comply with US Attorney General-approved procedures to protect privacy rights.”

Coincidentally, outgoing NSA Director Keith Alexander said last year that he was working on drastically cutting the number of sys admins at that agency by upwards of 90 percent — but didn’t say it was because they could be exploited by similar tactics waged by adversarial intelligence groups. Gen. Alexander’s decision came just weeks after Snowden — previously one of around 1,000 sys admins working on the NSA’s networks, according to Reuters — walked away from his role managing those networks with a trove of classified information.

[Dec 27, 2013] Power-Loss-Protected SSDs Tested: Only Intel S3500 Passes

December 27, 2013 | Slashdot
lkcl writes
"After the reports on SSD reliability and after experiencing a costly 50% failure rate on over 200 remote-deployed OCZ Vertex SSDs, a degree of paranoia set in where I work. I was asked to carry out SSD analysis with some very specific criteria: budget below £100, size greater than 16Gbytes and Power-loss protection mandatory. This was almost an impossible task: after months of searching the shortlist was very short indeed. There was only one drive that survived the torturing: the Intel S3500. After more than 6,500 power-cycles over several days of heavy sustained random writes, not a single byte of data was lost. Crucial M4: failed. Toshiba THNSNH060GCS: failed. Innodisk 3MP SATA Slim: failed. OCZ: failed hard.
Only the end-of-lifed Intel 320 and its newer replacement, the S3500, survived unscathed. The conclusion: if you care about data even when power could be unreliable, only buy Intel SSDs."
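
The post does not include the actual test harness, but the write-then-power-cut-then-verify cycle it describes is easy to approximate. Here is a minimal sketch, assuming the drive under test is mounted at /mnt/ssdtest (a hypothetical path) and that power is cut externally between the two runs; it is not the author's harness, just the general idea.

#!/bin/bash
# Rough sketch of a power-loss data-integrity check: write files with known
# checksums, fsync them, have power cut externally, then verify after reboot.
# The mount point, file count and sizes are illustrative assumptions.

TESTDIR=/mnt/ssdtest                  # hypothetical mount point of the drive under test
MANIFEST=/var/tmp/ssd-manifest.sha256 # checksums stored off the drive under test
FILES=64
SIZE_MB=16

case "$1" in
  write)
      mkdir -p "$TESTDIR"
      : > "$MANIFEST"
      for i in $(seq 1 "$FILES"); do
          f="$TESTDIR/block_$i.dat"
          dd if=/dev/urandom of="$f" bs=1M count="$SIZE_MB" conv=fsync status=none
          sha256sum "$f" >> "$MANIFEST"
      done
      sync
      echo "Wrote $FILES files; cut power now, then run '$0 verify' after reboot."
      ;;
  verify)
      # Any file whose checksum changed, or which vanished, counts as data loss.
      if sha256sum -c --quiet "$MANIFEST"; then
          echo "PASS: no data lost"
      else
          echo "FAIL: checksum mismatches or missing files"
          exit 1
      fi
      ;;
  *)
      echo "usage: $0 {write|verify}" >&2
      exit 2
      ;;
esac

A fuller harness would also keep random writes in flight at the instant power drops, which is the harder case; this sketch only confirms that data already written and synced survives the power cycle, which is exactly what several of the drives above failed to do.
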
ssufficool (1836898)
So make the power reliable...

and get a UPS. Why blow more money on a slightly more reliable SSD when a UPS is so much cheaper?

nerdguy0 (101358)

Re: So make the power reliable... (Score:2)

Or get an M500, which is basically an M4 with capacitor backup and newer NAND.

Dunbal (464142) * on Friday December 27, 2013 @05:01PM (#45800499)

People who have "important data" and fail to make a backup copy - no matter which type of media they are using - deserve to lose their data. Seriously, what you said doesn't apply only to SSDs.

CajunArson (465943)

Consumer grade vs. Enterprise Grade (Score:5, Insightful)

Slightly more seriously than my last post, the S3500 was the only enterprise-grade SSD tested in that batch. Frankly, I have little sympathy for you if you expected consumer-grade SSDs to perform like Enterprise-grade SSDs in a mission-critical application.

Consumer grade drives, even/especially the "high performance" ones that will often benchmark better than the "overpriced" enterprise drives, ain't designed to have perfect data retention. Of course, consumer or enterprise, any drive can fail and appropriate measures including RAID and backup* should always be in place no matter what type of drive you have.

* Yes, RAID != backup, I know, don't bother making that post.

[Aug 10, 2013]  Hybrid Hard Drives Just Need 8GB of NAND

August 08, 2013

judgecorp writes

"Research from Seagate suggests that hybrid hard drives in general use are virtually as good as solid state drives if they have just 8GB of solid state memory.

The research found that normal office computers, not running data-centric applications, access just 9.58GB of unique data per day. 8GB is enough to store most of that, and results in a drive which is far cheaper than an all-Flash device.

Seagate is confident enough to ease off on efforts to get data off hard drives quickly, and rely on caching instead. It will cease production of 7200 RPM laptop drives at the end of 2013, and just make models running at 5400 RPM."

Brad DeLong: Illegitimate and Unfair Larry Summers Bashing: The Smart and Thoughtful Matthew Klein Gets One Wrong

GrueBleen

Hmmm. Kinda reminds me of a story I once heard about IBM: that it took a huge amount of data gathering and persuasion to get IBM to adopt a new, or changed, policy/strategy/tactic. But that once adopted, having been through an exhausting process of approval, it was all but impossible to stop it, even when circumstances had significantly changed.

Or, in short, "this was approved and supported by (ex)President, and economics guru Larry Summers, so it must be right for all time".

The price of fiscal success is eternal (and repeated, and systematic) vigilance?

Microsoft Has 1 Million Servers. So What?

 July 18, 2013 | Slashdot

dbIII

Since they do mail hosting (Score:2)

Since they do mail hosting that's probably half right and a large proportion of them are mail servers. It probably works well most of the time, but I've only ever been exposed to that side of their business due to an utterly stupid fuckup that took them a week to resolve because that's how long the trouble ticket queue is - that's how little respect they had for their client with more than twenty thousand email accounts.

I wasn't working for that client of theirs but instead trying to contact someone there while their Microsoft hosted email was down for a week.

Decker-Mage

Re:How do you calculate space and power... (Score:2)

While running VMs is more flexible, is there too much overhead in the tradeoff? Especially with a million servers and all.

Which does need some consideration. Supposedly, in a perfect virtualized environment you'd see about 2-3% knocked off, in a headless configuration (no preferred guest OS VM installed on top of the host) and with perfect loading. However it's an imperfect world and no matter how you automagically mix and match loads, assuming it's allowed for those guests (think HIPAA, etc.), you're going to see more inefficiency. How much? No one seems to be releasing real numbers that I know of. It's quite literally a billion dollar question for the host providers and perhaps a trillion dollar question for the world.

-- "The most deadly words for an engineer. 'I have an idea.'"

 cusco

Re:How do you calculate space and power... (Score:3)

Not really. Microsoft's Quincy data center started virtualizing servers and they saved so much electricity that they didn't hit the Bonneville Power Administration's target energy usage to qualify for the huge discount they normally get.

To make up the difference they opened all the vents in the middle of winter, turned the heaters on full blast, and burned $70,000 in electricity in a week. They renegotiated the next year's contract with the BPA, so they haven't had to repeat that particular bit of foolishness.

-- "Think about how stupid the average person is. Now, realise that half of them are dumber than that." - George Carlin

Is the Information Technology Revolution Over?

June 4, 2013 | Economist's View

Quick one, then I have to figure out how to get to Toulouse (missed connection, in Paris now ... but should be able to get there ... long day so far):

Is the Information Technology Revolution Over?, by David M. Byrne, Stephen D. Oliner, and Daniel E. Sichel, FRB: Abstract: Given the slowdown in labor productivity growth in the mid-2000s, some have argued that the boost to labor productivity from IT may have run its course. This paper contributes three types of evidence to this debate. First, we show that since 2004, IT has continued to make a significant contribution to labor productivity growth in the United States, though it is no longer providing the boost it did during the productivity resurgence from 1995 to 2004. Second, we present evidence that semiconductor technology, a key ingredient of the IT revolution, has continued to advance at a rapid pace and that the BLS price index for microprocessors may have substantially understated the rate of decline in prices in recent years. Finally, we develop projections of growth in trend labor productivity in the nonfarm business sector. The baseline projection of about 1¾ percent a year is better than recent history but is still below the long-run average of 2¼ percent. However, we see a reasonable prospect--particularly given the ongoing advance in semiconductors--that the pace of labor productivity growth could rise back up to or exceed the long-run average. While the evidence is far from conclusive, we judge that "No, the IT revolution is not over."

Comments

Darryl FKA Ron said...
The pickup reflects ongoing advances in IT and an assumption that those gains and innovations in other sectors spur some improvement in multifactor productivity (MFP) growth outside of the IT sector relative to its tepid pace from 2004 to 2012. These developments feed through the economy to provide a modest boost to labor productivity growth.

[Using technology to replace people or make them more productive, generally considered the same thing, is one form of productivity-increasing technology integration. Automating accounting functions, from producing bills to meter reading, or selling your goods on the WWW were examples of the revolution of picking low-hanging fruit. Using technology to manage systems in ways that people could not realistically accomplish is another way of increasing productivity. Running power production and distribution, traffic lights, and just-in-time manufacturing integrated with ERP accounting systems from order entry through to shipping and general ledger were other ways of increasing productivity. The green fields of labor replacement have largely been sown. The green fields of automated systems management are without end. Economists have a limited lens into operations with metrics that often confuse value and price.]

Darryl FKA Ron said in reply to Darryl FKA Ron...
IOW, the MFP is underpriced because its marginal benefits get absorbed by price competition.
squidward said in reply to Darryl FKA Ron...
Economists have a limited lens into operations with metrics that often confuse value and price.

That can be very true when looking at it qualitatively. Google or Amazon could be loading your browser with cookies and data mining your online habits to maximize sales. Their increases in revenue don't help the average consumer much other than consume more. It's not quite the same brave new world we had with the advent of online banking, bill paying and 24 hr shopping with home delivery.

I would have to agree that now marginal increases in productivity due to IT aren't giving the same marginal increases in value to the end consumer.

john personna said...
As I understand it, middle class incomes have fallen as productivity has risen. Doesn't that make a productivity centered view much less interesting to compassionate observers?
reason said in reply to john personna...
Not sure, but it makes redistribution more interesting.
Fred C. Dobbs said...

Yesterday I watched three techs work for two hours to get a laser printer going again that had been working fine Friday, but now was down due to 'network problems', so I would have to say, yeah, it could be over.

Julio said in reply to Fred C. Dobbs...
I watched the same scene twenty years ago, and it was a crappy dot-matrix printer that cost ten times as much.

What does it all mean?

john personna said in reply to Julio...
I suspect that techs stretched out a ticket, then and now.
Fred C. Dobbs said in reply to Fred C. Dobbs...

Such problems, which are infrequent, seem to be invariably network-related. Network support techs
are in short supply so 'everything else' is always tried first, even when that doesn't make much sense.

KJMClark said...

If economists at the FRB are still getting paid to ruminate on whether the IT revolution is over, then the IT revolution is not over. We'll know it's nearly over when we're replacing all but the top level of economists with intelligent software. (The top level will be helping the top-level software developers write the software.) We'll know it's completely over when the politicians decide we've had enough of the experiment of replacing people with machines. Or, rather, when we get the intelligent machines' responses to the politicians saying we're done.

ezra abrams said...

ya know, if u economist ever got outta yr offices, and did some real work...
You would find that the gains yet to be realized from the IT revolution are IMMENSE
People like myself, highly paid and educated PhDs, we can all be dispensed with
or, how about an earpiece that in real time tells a trial lawyer, *while s/he is in court*, what ruling he needs to cite to rebut a just made oral argument...

or CAD software that allows automotive design engineers to shave 10% off the weight of a car in an iterative fashion (lower the weight of one component; therefore the suspension can be less sturdy, and in turn you need less horsepower due to the lower weight...)

or software that can solve Navier-Stokes for non-laminar flow at high Reynolds number

i mean, seriously, the it revolution is over ?
I'm sure they said the same thing about railroads and the transportation revolution in, say, 1890

jt said...

Couldn't automation lead to declining gross productivity via underemployment of displaced workers in low-productivity jobs (services)? In fact, one could argue that the dual mandate of monetary policy will produce exactly this outcome. Labor income distributions seem to confirm this split in the labor market. The more interesting comparison: in industries that use a lot of IT, has their productivity improved?

Michael Gamble:

Computer technology without software is just a paperweight. The technology revolution is stalled, but not over. The problem as I see it, and it seems to be everywhere, is that not nearly enough people are paying for custom software development done in house.

Every company any bigger than a few employees needs someone on staff who manages software purchases, installation and the creation of custom software to make the bridge between a lot of chimerical software, but no one wants to pay for it.

I don't blame them, mind you; there has been a lot of over-hyping going on by the big players in the industry for years. And looking around, you would think your entire business could be run from your phone, with all the advertising, technology and hype put into mobile phones.

Observer said in reply to Michael Gamble...
I used to lead teams that built custom software for internal use. The ROI can be high under the right conditions, but Commercial Off The Shelf is often a better solution. It depends. My off the cuff guess is that custom is hard to justify for a small business, except in very special cases.

Custom software that actually supports business critical processes tends to be quite expensive, with reason.

Another reason to avoid custom software is risk mitigation. A business that relies on software built by one "someone on staff" is running a real risk if/when that someone leaves.

Part of the unwillingness to pay is that people's expectations are influenced by consumer software price points, $0 to a few hundred dollars.

Jerry said...
A secondary inhibitor might be the growing mess in the software patent world. There seems to be an increasing reluctance to make an investment because of the patent toes that might get tweaked.
squidward said in reply to Jerry...

This is very true; the whole tech industry is a mess with antiquated patent law. You have to wonder how many products aren't being brought to market because a smaller company can't afford to get into a heavyweight patent brawl a la Samsung v. Apple.

I'm no expert but I have always wondered why writing code isn't more analogous to copyrighting than inventing and patenting. If we could incentivize more open source we could have more innovation.

Second Best said...

For the US, much of the IT revolution was over after the major carriers killed the internet revolution in its tracks, and it ain't coming back anytime soon.

See 'Captive Audience' by Susan Crawford.

kievite said...

The situation in enterprise datacenters definitely corresponds to the definition of stagnation. We see a lot of cost cutting.

The percentage of custom software is small and getting smaller, as Observer already noted above. Programmers are disappearing from enterprise IT departments.

At the same time a new, dangerous trend is in place. Some "off-the-shelf" packages on which the enterprise depends are problematic, with some subsystems close to junk or even harmful (SAP R/3, some IBM products, etc.). Moreover, there is now a new type of enterprise software vendor specializing in selling completely useless or even harmful software on the pure strength of marketing (plus fashion), vendors who try to capitalize on the ignorance of the typical IT management layer.

As Kolmogorov once said, "You can't overestimate the level of ignorance of the audience." That was about a different audience, but it is fully applicable here. So the snake oil salesmen in IT are making good money, maybe better than the honest salesmen.

But the problems with "off-the-shelf" packages are increasing due to their often unwarranted complexity (which serves mainly as a barrier to entry for competitors), or just complexity for the sake of complexity.

This, and the fact that software is generally the most complex artifact invented by mankind, leads to a level of understanding of existing packages and operating systems that can only be called dismal. Even people who "should know" often look like they came from the pages of "The Good Soldier Švejk" or "Catch-22". One Unix group manager in a large company that I used to know, for example, did not understand that IBM Power servers and Intel servers are based on CPUs with two different architectures. When I realized that at the meeting, my jaw simply dropped.

People who saw software evolve from its humble beginnings, and who understand the internals and the nature of the compromises made in existing hardware and software, are now close to retirement, and in the new generation such people are exceedingly rare.

That is true for operating systems such as Linux or Solaris, and it is even more true for web-related software such as Apache, MediaWiki, Frontpage, etc. As a result a lot of things "barely run", and a lot of systems are bought just because people have no clue that systems they have already bought can perform the same functions.

This is also true of Office, especially Excel. One thing I have noticed is that the level of knowledge of Excel is really dismal across the enterprise. I would agree with "squidward" that for the office (but only for the office) "now marginal increases in productivity due to IT aren't giving the same marginal increases in value to the end consumer." But that's for the office only. Cars, homes, etc. are still "terra incognita".

But while internally everything looks rotten, externally the situation looks different: there is an unending assault of automation on existing jobs. So jt is onto something when he asks "in industries that use a lot of IT, has their productivity improved?" Yes, and to the extent that many workforce cuts are permanent; moreover, the cuts might continue.

As for the statement "For the US, much of the IT revolution is over", I doubt it. Computers will continue to eat jobs. The "cutting edge" simply moved elsewhere, and one hot area is various robotic systems. Here is one example:

"IBM is using robots based on iRobot Create, a customizable version of the Roomba vacuum cleaner, to measure temperature and humidity in data centers. The robot looks for cold zones (where cold air may be going to waste instead of being directed to the servers) and hotspots (where the air circulation may be breaking down). IBM is putting the robots to commercial use at partners — while EMC is at an early stage on a strikingly similar project."

Both the home and the datacenter are huge application areas. Even in consumer electronics, what we have is still very primitive in comparison with what is possible on current hardware. That is true for smartphones, tablets and other mass gadgets. And it is even more true for the home. For example, for older people cutting-edge computer technology can probably provide a level of service comparable with that of a nursing home. An automated cook that accepts a simple menu and delivers dishes is already feasible. Currently the cost would be high, but gradually it will drop and the quality of service will improve. One interesting area is saving energy. How many people here have a home network which integrates the thermostat, outdoor and indoor lights, and the security system? And probably nobody here has computer-automated shades on their windows.

Autonomous datacenters with robot service are in my opinion an interesting development which in many cases can serve as "distributed cloud".

Sunny Liu said in reply to kievite...

I agree. That was a wonderfully informative post, and thank you for that. I also don't think it's even close to over, not just because of robotics but also because of machine learning. The implications for data mining and big data are enormous, and the research being done in those fields is still to be fully utilized.

Right now, machine learning and big data are advancing research in biology, but what about the optimizations possible in pharmaceuticals or manufacturing?

geoff

not sure if this qualifies as IT. Not even sure if that is a meaningful distinction, but as a distributor I can't help thinking that 3D printers will have a profound impact on both manufacturing and distribution.

http://www.businessweek.com/articles/2013-05-16/bloomberg-view-why-3d-printing-can-make-the-world-a-better-place 

The issue now becomes whether the technology will transform manufacturing more broadly. At the moment, 3D printing is a small part of the economy. The printers are typically slow, and the material they use is expensive and inconsistent. As the industry advances, however, printing on demand could reduce assembly lines, shorten supply chains, and largely erase the need for warehouses for many companies. Cutting back on shipping and eliminating the waste and pollution of traditional manufacturing could be an environmental boon.

Kaleberg

The software revolution is over, just the same way the written word revolution ended in 1900.

Tom Shillock:

Does increased labor productivity increase living standards? What if the products are lower quality and either must be disposed of sooner or incur greater lifecycle service costs than if they were of higher quality? If increased productivity degrades the natural environment (air, water, soil, food), are living standards increased? If productivity numbers increase because more people are unemployed or underemployed, or suffer stagnant or lower wages from globalization, has the standard of living increased?

Looking at aggregate data glosses over losers and winners. For the last three decades the winners have been relatively fewer in number but grabbing a greater share of GDP, while the losers are vastly greater in number and have fewer opportunities to be winners other than through random luck (marriage, inheritance, connections).

Is IT innovation necessarily good or more productive? The increase in semiconductor performance (pick your metric) at a given price, or even lower price, will not increase productivity if you already have all the IT performance you can use. In some cases, it might reduce productivity. E.g., having larger hard drives means more data will be collected (data collects to fill the space available: Shillock’s Second Law of Storage). Unless one has efficient methods to keep track of it all then searching for it will reduce productivity. Also, access and seek times have not improved linearly with capacity. Another hit to productivity from IT. This is why the Fed’s incorporation of a “hedonic index” for microprocessors into its price index muddies the water (c.f. p. 8)

What if some IT innovation gives a company a competitive edge such as happened in finance? Others are compelled to adopt it ASAP, but that does not necessarily make the financial industry more productive. Indeed, it facilitated fraud on a massive scale that caused the Great Recession while leaving insiders vastly wealthier for it. It could be argued that IT innovations have facilitated the increasing and increasingly severe financial crises over the past three decades, if only because they create the delusion among users that they know more than they do, thereby feeding their hubris.

The Dictatorship of Data http://www.technologyreview.com/news/514591/the-dictatorship-of-data/

Most of the gains to productivity from IT are in two areas. The first is large financial and reservation systems, which run on IBM mainframes. The second is the application of smaller computers to manufacturing to run tools. The vast amount of PC-level IT probably reduces productivity because most people do not know how to use it. Word processors allow more people to take more time making their emails and interoffice memos grammatically better and with fewer spelling errors. Most people have little clue how to use spreadsheets; even Reinhart and Rogoff were challenged. PowerPoint and similar PC apps are great for enabling the incompetent and ignorant to appear otherwise, which accounts for their popularity. They are the lingua franca of IBM. So far Facebook, Twitter, LinkedIn, etc. are a great waste of time for the neurotically self-conscious and self-important.

Economics is a literary genre in which contestants focus only on the numbers and usually those they like then use their imaginations to spin stories about the numbers. Audiences then vote on which story they find most pleasing.

Can You Say “Bubble”?

April 30, 2013 | 18 Comments | By James Kwak

Yesterday’s Wall Street Journal had an article titled “Foosball over Finance” about how people in finance have been switching to technology startups, for all the predictable reasons: The long hours in finance. “Technology is collaborative. In finance, it’s the opposite.” “The prospect of ‘building something new.’” Jeans. Foosball tables. Or, in the most un-self-conscious, over-engineered, revealing turn of phrase: “The opportunity of my generation did not seem to be in finance.”

We have seen this before. Remember Startup.com? That film documented the travails of a banker who left Goldman to start an online company that would revolutionize the delivery of local government services. It failed, but not before burning through tens of millions of dollars of funding. There was a time, right around 1999, when every second-year associate wanted to bail out of Wall Street and work for an Internet company.

The things that differentiate technology from banking are always the same: the hours (they’re not quite as bad), the work environment, “building something new,” the dress code, and so on. They haven’t changed in the last few years. The only thing that changes are the relative prospects of working in the two industries—or, more importantly, perceptions of those relative prospects.

Wall Street has always attracted a particular kind of person: ambitious but unfocused, interested in success more than any achievements in particular, convinced (not entirely without reason) that they can do anything, and motivated by money largely as a signifier of personal distinction. If those people want to work for technology startups, that means two things.

[Mar 27, 2013] Most IT Admins Have Considered Quitting Due To Stress

 Mar 27, 2013 |  Slashdot

Posted by Soulskill

Orome1 writes

"The number of IT professionals considering leaving their job due to workplace stress has jumped from 69% last year to 73%. One-third of those surveyed cited dealing with managers as their most stressful job requirement, particularly for IT staff in larger organizations. Handling end user support requests, budget squeeze and tight deadlines were also listed as the main causes of workplace stress for IT managers. Although users are not causing IT staff as much stress as they used to, it isn't stopping them from creating moments that make IT admins want to tear their hair out in frustration. Of great concern is the impact that work stress is having on health and relationships. While a total of 80% of participants revealed that their job had negatively impacted their personal life in some way, the survey discovered some significant personal impact: 18% have suffered stress-related health issues due to their work, and 28% have lost sleep due to work."

Culture20:

Re:IT admins are special (Score:5, Insightful)

lots of jobs really suck, and lots of people are stressed to the point of health impacts and have considered quitting. Many of these jobs pay significantly less than IT wages.

Whenever I get stressed out, I remember the jobs I did before/while I was in college, and I'm happy to be where I am. I can't imagine what today's grads do without any work experience at low-wage McJobs. Consider quitting I guess?

datavirtue:

Re:IT admins are special (Score:4, Insightful)

Admin is just a step up from help desk, hang out too long and it will begin to suck badly. If you fail to increase your skills (most admins) and your ability to add value, then it will start to suck badly after a number of years--it's boring.

How many servers can you provision or user accounts can you setup before pulling your fucking hair out?

Learn to code, become a professional DBA, or acquire some more skills that make you valuable, like perhaps getting involved with business intelligence.

Admins are a commodity. Yes, it is easy to hang out and collect a paycheck, but don't whine when your value wanes and people direct you around like a monkey boy.

i kan reed

Re:IT admins are special (Score:5, Insightful)

As a software engineer (and thus not an IT admin), I'd say IT admins have it much worse than most middle class office workers. They get shit on over the smallest thing, and are the only IT employees who are expected to deliver within minutes of being asked. I don't think it's a stretch to say their stress levels might be higher than yours.

jedidiah

Re:IT admins are special (Score:5, Interesting)

In terms of certain job expectations they are. These include longer hours and working weekends and during the 3rd shift.

A lot of mundanes don't understand this. They hear that you've got some office job and they don't understand why you would be working those kinds of hours.

Clueless spouses can add to the stress level. Even spouses that are part of the workforce can be ignorant and unsympathetic.

jellomizer

Re:IT admins are special (Score:4, Funny)

No, your wife will not understand, no matter what your job is. She will undoubtedly have worked more than you did, no matter what.

h4rr4r:

 That is only $48k. That is terrible pay for sysadmin work.

Shadow99_1

Personally I was supporting Windows, Linux, and Apple... So no, not just windows. I also was not the only one, I worked with admins from a dozen companies from time to time and pay varied from $40k-55k. Those making $55k were in their 50's and had started (often at these companies) during the 70's or at most 80's...

ZaMoose:

Lying liars and the lies they lie about (Score:5, Informative)

Only 73% have considered quitting? The other 27% are lying to you, probably because they're worried that the survey is being snooped on by the corporate Barracuda firewall.

Spy Handler

Rapid change in IT is the problem (Score:3)

When IT and the computer/internet field in general settle down and mature, things will get better.

Right now there are just too many new technologies and buzzwords and platforms and architectures and paradigms popping up, and pointy-haired managers and VPs all want to implement this and that and oh by the way make it work with our legacy system and nothing better get lost or you're fired.

Yold

Re:Rapid change in IT is the problem (Score:5, Insightful)

It's not a matter of maturity. Many organizations hide behind the disclaimer "we are not an I.T. company", despite having sizable I.T. departments. And despite having this sizable department, which offers mission-critical applications and infrastructure, zero effort is made towards working smarter. Problems are fixed with mandatory overtime, cutting staffing/costs, and "quick-and-dirty" fixes to long standing problems.

I think some companies are starting to understand that their project management methodologies are flawed, but most cannot connect the concepts of "software debt" to decreasing marginal output in their I.T. efforts. An hour of work today is less effective than in the past because you are paying "interest" on your previous bad decisions.

I think that the 27% is reflective of companies that can connect the longevity and cost-effectiveness of I.T. systems to proper project planning, management, and I.T. expertise. Whether or not this is an upper-bound remains to be seen, because a lot of organizations simply don't understand that inventing your own project management ideas dooms you to repeating the same failures that have happened over the last 50 years.

meatspray

That's why your #1 priority in an interview is: (Score:5, Insightful)

Picking your boss. If you're not up a creek looking for work, that interview is your chance to meet your managers and talk to some workers about those managers.

When I started working it was "If I can just get in the door"

When I was in my 20's it was "What cool things will this job do for me"

Now that I'm in my 30's it's "Will I be able to work with these people"

Midnight_Falcon

It's about being "Always on" (Score:5, Insightful)

 I'm an IT professional and more than once I've thought about quitting, especially when I was doing high-stress consulting. Clients treat you like meat, like "the help." They have no problem waking you up at 5AM with nonsense problems. If you don't answer and do it politely, they call your boss and then your job/livelihood is in jeopardy.

This isn't just a 9-5 thing where, when you leave the office, you're no longer on the hook -- it's always happening. Sometimes, you're at a bar at 10PM and you get an urgent call -- pick it up, and you in your tipsy state are now on the hook to resolve an important issue.

The fear of getting these calls has made me stay home sometimes when I could have been being social, and not travel away on vacation when I knew some action was going on I'd be needed for. It creates a lot of stress to be depended on so much, and now with telecommuting, you're expected to be responsive at all times wherever you are.

It's a lot of stress even in the best setup/most-redundant environments, and the job is not for everyone. And when projects come up that are difficult and highly user-facing, it's hard to avoid this type of a situation.

mjr167

Re:It's about being "Always on" (Score:2)

How is that different from being... a doctor, a fireman, a nuclear plant operator, a plumber, or an electrical line repairman?

Welcome to the world of essential services. When your job is to keep things working, you don't get to pick your hours cause shit happens.

[Mar 26, 2013] Not your father's IBM - I, Cringely

Another Ex-IBMer says:

April 18, 2012 at 9:28 am

The current level of EPS was reached by a combination of two things: (1) offshoring jobs and (2) killing the pension fund (IBM stopped contributing to its pension fund in 2007 – all subsequent benefit dollars have gone into 401k plans). Lots of other shenanigans too of course, documented well in the book “Retirement Heist: How Companies Plunder and Profit from the Nest Eggs of American Workers” by Ellen Schultz.

Since (2) has already been done, to redouble EPS again to $20 per share will require an even greater rate of US job elimination than we have seen in the past.

They have also done other little things. Yes, retirees can purchase health insurance through IBM, but retirees are placed into their own insurance pool rather than being co-insured with all employees. Result: (much) higher rates. Result of higher rates: retirees cannot afford health insurance ==> retirees die sooner ==> lower pension costs for IBM ==> higher EPS!

[Dec 06, 2012] If tech is so important, why are IT wages flat?

IT salaries have not really kept pace with inflation
Computerworld

Despite the fact that technology plays an increasingly important role in the economy, IT wages remain persistently flat. This may be tech's inconvenient truth.

The still sluggish U.S. economy gets most of the blame for this wage stagnation, but factors such as outsourcing and automation also contribute to the problem, say analysts.

"IT salaries have not really kept pace with inflation," said Victor Janulaitis, the CEO of Janco Associates, which reports on IT wage compensation.

In 2000, the average hourly wage was $37.27 in computer and math occupations for workers with at least a bachelor's degree. In 2011, it was $39.24, adjusted for inflation, according to a new report by the Economic Policy Institute (EPI).

[Aug 10, 2012]  Business Has Killed IT With Overspecialization  by Charlie Schluting

April 7, 2010 | Enterprise Networking Planet

What happened to the old "sysadmin" of just a few years ago? We've split what used to be the sysadmin into application teams, server teams, storage teams, and network teams. There were often at least a few people, the holders of knowledge, who knew how everything worked, and I mean everything. Every application, every piece of network gear, and how every server was configured -- these people could save a business in times of disaster.

Now look at what we've done. Knowledge is so decentralized we must invent new roles to act as liaisons between all the IT groups. Architects now hold much of the high-level "how it works" knowledge, but without knowing how any one piece actually does work. In organizations with more than a few hundred IT staff and developers, it becomes nearly impossible for one person to do and know everything. This movement toward specializing in individual areas seems almost natural. That, however, does not provide a free ticket for people to turn a blind eye.

Specialization

You know the story: Company installs new application, nobody understands it yet, so an expert is hired. Often, the person with a certification in using the new application only really knows how to run that application. Perhaps they aren't interested in learning anything else, because their skill is in high demand right now. And besides, everything else in the infrastructure is run by people who specialize in those elements. Everything is taken care of.

Except, how do these teams communicate when changes need to take place? Are the storage administrators teaching the Windows administrators about storage multipathing, or, worse, logging in and setting it up because it's faster for the storage gurus to do it themselves? A fundamental level of knowledge is often lacking, which makes it very difficult for teams to brainstorm about new ways to evolve IT services. The business environment has made it OK for IT staffers to specialize and only learn one thing.

If you hire someone certified in the application, operating system, or network vendor you use, that is precisely what you get. Certifications may be a nice filter to quickly identify who has direct knowledge in the area you're hiring for, but often they indicate specialization or compensation for lack of experience.

Resource Competition

Does your IT department function as a unit? Even 20-person IT shops have turf wars, so the answer is very likely, "no." As teams are split into more and more distinct operating units, grouping occurs. One IT budget gets split between all these groups. Often each group will have a manager who pitches his needs to upper management in hopes they will realize how important the team is.

The "us vs. them" mentality manifests itself at all levels, and it's reinforced by management having to define each team's worth in the form of a budget. One strategy is to illustrate a doomsday scenario. If you paint a bleak enough picture, you may get more funding, but only if you are careful enough to illustrate that the failings are due to lack of capital resources, not management or people. A manager of another group may explain that they are not receiving the correct level of service, so they need to duplicate the efforts of another group and just implement something themselves. On and on, the arguments continue.

Most often, I've seen competition between server groups result in horribly inefficient uses of hardware. For example, what happens in your organization when one team needs more server hardware? Assume that another team has five unused servers sitting in a blade chassis. Does the answer change? No, it does not. Even in test environments, sharing doesn't often happen between IT groups.

With virtualization, some aspects of resource competition get better and some remain the same. When first implemented, most groups will be running their own type of virtualization for their platform. The next step, I've most often seen, is for test servers to get virtualized. If a new group is formed to manage the virtualization infrastructure, virtual machines can be allocated to various application and server teams from a central pool and everyone is now sharing. Or, they begin sharing and then demand their own physical hardware to be isolated from others' resource hungry utilization. This is nonetheless a step in the right direction. Auto migration and guaranteed resource policies can go a long way toward making shared infrastructure, even between competing groups, a viable option.

Blamestorming

The most damaging side effect of splitting into too many distinct IT groups is the reinforcement of an "us versus them" mentality. Aside from the notion that specialization creates a lack of knowledge, blamestorming is what this article is really about. When a project is delayed, it is all too easy to blame another group. The SAN people didn't allocate storage on time, so another team was delayed. That is the timeline of the project, so all work halted until that hiccup was resolved. Having someone else to blame when things get delayed makes it all too easy to simply stop working for a while.

More related to the initial points at the beginning of this article, perhaps, is the blamestorm that happens after a system outage.

Say an ERP system becomes unresponsive a few times throughout the day. The application team says it's just slowing down, and they don't know why. The network team says everything is fine. The server team says the application is "blocking on IO," which means it's a SAN issue. The SAN team says there is nothing wrong, and other applications on the same devices are fine. You've run through nearly every team, but still without an answer. The SAN people don't have access to the application servers to help diagnose the problem. The server team doesn't even know how the application runs.
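
One way out of that particular stalemate is to settle the "blocking on IO" claim with numbers anyone on any of the teams can read. A minimal first-pass check on the application server, using only standard tools (vmstat from procps, iostat from sysstat, ps); the intervals and column choices are illustrative:

#!/bin/bash
# First-pass check of a "blocking on IO" claim: look at CPU iowait, per-device
# latency, and processes stuck in uninterruptible sleep (state D).

# 1. Is the box spending much time waiting on IO at all? Watch the "wa" column.
vmstat 5 3

# 2. Which block device is slow? High await/%util points at storage; healthy
#    numbers push the investigation back toward the application.
iostat -dx 5 3

# 3. Which processes are actually stuck in uninterruptible (D) sleep, and where?
ps -eo state,pid,user,wchan:32,cmd | awk '$1 ~ /^D/'

If iowait is negligible and nothing sits in D state, the SAN team's "nothing wrong here" gains credibility; if await runs to hundreds of milliseconds, the discussion moves to the storage side with evidence attached instead of blame.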

See the problem? Specialized teams are distinct and by nature adversarial. Specialized staffers often relegate themselves into a niche knowing that as long as they continue working at large enough companies, "someone else" will take care of all the other pieces.

I unfortunately don't have an answer to this problem. Maybe rotating employees between departments will help. They gain knowledge and also get to know other people, which should lessen the propensity to view them as outsiders.

 

Continued


E-books, Courses, Tutorials

Online Libraries : Mark Burgess : USAIL : Digital Unix System Administration e-book : LDP e-books : Other e-books

Online Libraries


Mark Burgess

Principles of system administration - Table of Contents


USAIL

USAIL can be freely mirrored. A very useful resource...

Recommended Links

Softpanorama Top Visited

Softpanorama Recommended

Seven Sisters ;-)

Search engines:

Professional societies:

Portals and collections of links

Forums

Other E-books

LDP e-books

[Apr 21, 1999] Linux Administration Made Easy by Steve Frampton, <3srf@qlink.queensu.ca> v0.99u.01 (PRE-RELEASE), 21 April 1999. A new LDP book.

The Network Administrators' Guide by Olaf Kirch


Recommended Articles

Burnout, and Other Social Issues

Resources

The FreeBSD Diary -- System tools - toys I have found -- short discussion of last, swapinfo, systat, tops and z-tools.

Tom Limoncelli's Published Papers




Etc

Society

Groupthink : Understanding Micromanagers and Control Freaks : Toxic Managers : Bureaucracies : Harvard Mafia : Diplomatic Communication : Surviving a Bad Performance Review : Insufficient Retirement Funds as Immanent Problem of Neoliberal Regime : PseudoScience : Who Rules America : Two Party System as Polyarchy : Neoliberalism : The Iron Law of Oligarchy : Libertarian Philosophy

Quotes

Skeptical Finance : John Kenneth Galbraith : Keynes : George Carlin : Skeptics : Propaganda : SE quotes : Language Design and Programming Quotes : Random IT-related quotes : Oscar Wilde : Talleyrand : Somerset Maugham : War and Peace : Marcus Aurelius : Eric Hoffer : Kurt Vonnegut : Otto Von Bismarck : Winston Churchill : Napoleon Bonaparte : Ambrose Bierce : Bernard Shaw : Mark Twain Quotes

Bulletin:

Vol 26, No.1 (January, 2013) Object-Oriented Cult : Vol 25, No.12 (December, 2013) Rational Fools vs. Efficient Crooks: The efficient markets hypothesis : Vol 25, No.08 (August, 2013) Cloud providers as intelligence collection hubs : Vol 23, No.10 (October, 2011) An observation about corporate security departments : Vol 23, No.11 (November, 2011) Softpanorama classification of sysadmin horror stories : Vol 25, No.05 (May, 2013) Corporate bullshit as a communication method : Vol 25, No.10 (October, 2013) Cryptolocker Trojan (Win32/Crilock.A) : Vol 25, No.06 (June, 2013) A Note on the Relationship of Brooks Law and Conway Law

History:

Fifty glorious years (1950-2000): the triumph of the US computer engineering : Donald Knuth : TAoCP and its Influence of Computer Science : Richard Stallman : Linus Torvalds : Larry Wall : John K. Ousterhout : CTSS : Multix OS Unix History : Unix shell history : VI editor : History of pipes concept : Solaris : MS DOS : Programming Languages History : PL/1 : Simula 67 : C : History of GCC development : Scripting Languages : Perl history : OS History : Mail : DNS : SSH : CPU Instruction Sets : SPARC systems 1987-2006 : Norton Commander : Norton Utilities : Norton Ghost : Frontpage history : Malware Defense History : GNU Screen : OSS early history

Classic books:

The Peter Principle : Parkinson Law : 1984 : The Mythical Man-Month : How to Solve It by George Polya : The Art of Computer Programming : The Elements of Programming Style : The Unix Hater’s Handbook : The Jargon file : The True Believer : Programming Pearls : The Good Soldier Svejk : The Power Elite

Most popular humor pages:

Manifest of the Softpanorama IT Slacker Society : Ten Commandments of the IT Slackers Society : Computer Humor Collection : BSD Logo Story : The Cuckoo's Egg : IT Slang : C++ Humor : ARE YOU A BBS ADDICT? : The Perl Purity Test : Object oriented programmers of all nations : Financial Humor : Financial Humor Bulletin, 2008 : Financial Humor Bulletin, 2010 : The Most Comprehensive Collection of Editor-related Humor : Programming Language Humor : Goldman Sachs related humor : Greenspan humor : C Humor : Scripting Humor : Real Programmers Humor : Web Humor : GPL-related Humor : OFM Humor : Politically Incorrect Humor : IDS Humor : "Linux Sucks" Humor : Russian Musical Humor : Best Russian Programmer Humor : Microsoft plans to buy Catholic Church : Richard Stallman Related Humor : Admin Humor : Perl-related Humor : Linus Torvalds Related humor : PseudoScience Related Humor : Networking Humor : Shell Humor : Financial Humor Bulletin, 2011 : Financial Humor Bulletin, 2012 : Financial Humor Bulletin, 2013 : Java Humor : Software Engineering Humor : Sun Solaris Related Humor : Education Humor : IBM Humor : Assembler-related Humor : VIM Humor : Computer Viruses Humor : Bright tomorrow is rescheduled to a day after tomorrow : Classic Computer Humor

 

The Last but not Least


Copyright © 1996-2014 by Dr. Nikolai Bezroukov. www.softpanorama.org was created as a service to the UN Sustainable Development Networking Programme (SDNP) in the author's free time. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. The site uses AdSense, so you need to be aware of the Google privacy policy. Original materials copyright belongs to respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine. This is a Spartan WHYFF (We Help You For Free) site written by people for whom English is not a native language. Grammar and spelling errors should be expected. The site contains some broken links, as it develops like a living tree...

You can use PayPal to make a contribution, supporting hosting of this site with different providers to distribute and speed up access. Currently there are two functional mirrors: softpanorama.info (the fastest) and softpanorama.net.

Disclaimer:

The statements, views and opinions presented on this web page are those of the author and are not endorsed by, nor do they necessarily reflect, the opinions of the author's present and former employers, SDNP or any other organization the author may be associated with. We do not warrant the correctness of the information provided or its fitness for any purpose.

Last updated: July 18, 2014