Softpanorama
May the source be with you, but remember the KISS principle ;-)


Software Engineering

News Programming Languages Design Recommended Books Recommended Links Selected Papers LAMP Stack Unix Component Model
Architecture Brooks law Conway Law A Note on the Relationship of Brooks Law and Conway Law The Mythical Man-Month Simplification and KISS  
Software Life Cycle Models Software Prototyping Program Understanding Extreme programming as yet another SE fad Distributed software development anti-OO Literate Programming
Reverse Engineering Links Programming style Project Management Code Reviews and Inspections Configuration Management Design patterns CMM
Bad Software Information Overload Inhouse vs outsourced applications development OSS Development as a Special Type of Academic Research A Second Look at the Cathedral and Bazaar  Labyrinth of Software Freedom Programming as a profession
Testing   Sysadmin Horror Stories Health Issues SE quotes Humor Etc

Software Engineering: A study akin to numerology and astrology, but lacking the precision of the former and the success of the latter.

KISS Principle     /kis' prin'si-pl/ n.     "Keep It Simple, Stupid". A maxim often invoked when discussing design to fend off creeping featurism and control development complexity. Possibly related to the marketroid maxim on sales presentations, "Keep It Short and Simple".

creeping featurism     /kree'ping fee'chr-izm/ n.     [common] 1. Describes a systematic tendency to load more chrome and features onto systems at the expense of whatever elegance they may have possessed when originally designed. See also feeping creaturism. "You know, the main problem with BSD Unix has always been creeping featurism." 2. More generally, the tendency for anything complicated to become even more complicated because people keep saying "Gee, it would be even better if it had this feature too". (See feature.) The result is usually a patchwork because it grew one ad-hoc step at a time, rather than being planned. Planning is a lot of work, but it's easy to add just one extra little feature to help someone ... and then another ... and another... When creeping featurism gets out of hand, it's like a cancer. Usually this term is used to describe computer programs, but it could also be said of the federal government, the IRS 1040 form, and new cars. A similar phenomenon sometimes afflicts conscious redesigns; see second-system effect. See also creeping elegance.
Jargon file

Software engineering (SE) has probably the largest concentration of snake oil salesmen after OO programming, and software architecture is no exception. Many published software methodologies/architectures claim to provide benefits that most of them cannot deliver (UML is one good example). I see a lot of oversimplification of the real situation and unnecessary (and useless) formalisms. The main idea advocated here is simplification of software architecture (including the use of the well-understood "pipes and filters" model) and the use of scripting languages.
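
To make the "pipes and filters" idea concrete, here is a minimal shell sketch (the log file name and its space-separated layout are assumptions for illustration): a report of the ten most frequent client addresses in a web server log, composed entirely from small single-purpose filters.

    #!/bin/sh
    # Report the ten most frequent client addresses in a web server log.
    # (access.log and its space-separated format are assumed for illustration.)
    cut -d' ' -f1 access.log |   # extract the client address (field 1)
      sort |                     # group identical addresses together
      uniq -c |                  # count each distinct address
      sort -rn |                 # order by count, descending
      head -10                   # keep the ten most frequent

Each stage can be written, tested, and replaced in isolation, which is exactly the kind of simplicity advocated here.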

There are few quality general architectural resources available on the Net, so the list below represents only some links that interest me personally. The stress here is on skepticism, and this collection is neither complete nor up to date. Still, it might help students who are trying to study this complex and interesting subject. Or, if you are already a software architect, you might be able to expand your knowledge of the subject.

Excessive zeal in adopting some fashionable but questionable methodology is a "real and present danger" in software engineering. This is not a new threat: it started with the structured programming revolution, followed by the search for the verification "holy land," with Edsger W. Dijkstra as the new prophet of an obscure cult. The main problem is that all those methodologies contain perhaps 20% useful elements, but the other 80% kill the useful elements and probably introduce some real disadvantages. After a dozen or so partially useful but mostly useless methodologies have come, been enthusiastically adopted, and gone into oblivion, we should definitely be skeptical.

All this "extreme programming" idiotism or CMM Lysenkoism should be treated as we treat dangerous religious sects.  It's undemocratic and stupid to prohibit them but it's equally dangerous and stupid to follow their recommendations ;-). As Talleyrand advised to junior diplomats: "Above all, gentlemen, not too much zeal. "  By this phrase, Talleyrand was reportedly recommended to his subordinates that important decisions must be based upon the exercise of cool-headed reason and not upon emotions or any waxing or waning popular delusion.

One interesting fact about software architecture is that it cannot be practiced from the "ivory tower". Only when you do the coding yourself and face the limitations of the tools and hardware can you create a great architecture. See Real Insights into Architecture Come Only From Actual Programming.

The primary purpose of software architecture courses is to teach students some higher-level skills useful in designing and implementing complex software systems. They usually include some information about classification (general and domain-specific architectures), analysis, and tools. As the folks at Bredemeyer Consulting aptly noted in their paper on the role of the software architect:

A simplistic view of the role is that architects create architectures, and their responsibilities encompass all that is involved in doing so. This would include articulating the architectural vision, conceptualizing and experimenting with alternative architectural approaches, creating models and component and interface specification documents, and validating the architecture against requirements and assumptions.

However, any experienced architect knows that the role involves not just these technical activities, but others that are more political and strategic in nature on the one hand, and more like those of a consultant, on the other. A sound sense of business and technical strategy is required to envision the "right" architectural approach to the customer's problem set, given the business objectives of the architect's organization. Activities in this area include the creation of technology roadmaps, making assertions about technology directions and determining their consequences for the technical strategy and hence architectural approach.

Further, architectures are seldom embraced without considerable challenges from many fronts. The architect thus has to shed any distaste for what may be considered "organizational politics", and actively work to sell the architecture to its various stakeholders, communicating extensively and working networks of influence to ensure the ongoing success of the architecture.

But "buy-in" to the architecture vision is not enough either. Anyone involved in implementing the architecture needs to understand it. Since weighty architectural documents are notorious dust-gatherers, this involves creating and teaching tutorials and actively consulting on the application of the architecture, and being available to explain the rationale behind architectural choices and to make amendments to the architecture when justified.

Lastly, the architect must lead--the architecture team, the developer community, and, in its technical direction, the organization.

Again, I would like to stress that the main principle of software architecture is simple and well known -- it's the famous KISS principle. While the principle is simple, its implementation is not, and a lot of developers (especially developers with limited resources) have paid dearly for violating it. I have found only one reference on simplicity in SE: R. S. Pressman. Simplicity. In Software Engineering, A Practitioner's Approach, page 452. McGraw Hill, 1997. Here open source tools can help, because for those tools complexity is not the competitive advantage that it is for closed source tools. But that is not necessarily true of actual tools, as one problem with open source projects is a change of the leader. This is the moment when many projects lose architectural integrity and become a Byzantine compendium of conflicting approaches.

I appreciate an architecture of a software system that leads to a small implementation with a simple, Spartan interface. These days the use of scripting languages can cut the volume of code by more than half in comparison with Java. That's why this site advocates the use of scripting languages for complex software projects.
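
As a toy illustration of that claim (a sketch, not a benchmark; book.txt is a hypothetical input file), the classic word-frequency report is one pipeline in shell, while an equivalent Java program needs a class, imports, and explicit stream handling:

    #!/bin/sh
    # Word-frequency count: split into words, lowercase, count, rank.
    # (book.txt is a hypothetical input file.)
    tr -cs '[:alpha:]' '\n' < book.txt | tr 'A-Z' 'a-z' | sort | uniq -c | sort -rn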

"Real Beauty can be found in Simplicity," and as you may know already, ' "Less" sometimes equal "More".' I continue to adhere to that philosophy. If you, too, have an eye for simplicity in software engineering, then you might benefit from this collection of links.

I think writing a good software system is somewhat similar to writing a multivolume series of books. Most writers will rewrite each chapter of a book several times and change the general structure a lot. Rewriting large systems is more difficult, but also very beneficial. It makes sense to always consider the current version of the system a draft that can be substantially improved and simplified by discovering some new unifying and simplifying paradigm. Sometimes you can take a wrong direction, but still, "nothing ventured, nothing gained."

On the subsystem level a decent configuration management system can help with going back. Too often people try to write and debug an architecturally flawed "first draft," when it would have been much simpler and faster to rewrite it based on a better understanding of the architecture and of the problem. Rewriting can actually save the time spent debugging the old version. That way, when you're done, you may get an easy-to-understand, simple software system, instead of just a system that "seems to work okay" (which is only as correct as your testing).
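
With a modern version control system such as git, for example, preserving the "first draft" before attempting a rewrite costs one command (a minimal sketch; the tag and branch names are arbitrary examples):

    # Mark the current draft, then explore the rewrite on a branch.
    # (Tag and branch names are arbitrary examples.)
    git tag draft-v1
    git checkout -b rewrite
    # ... rewrite and simplify; if the new direction proves wrong,
    # the draft is still intact on the original branch:
    git checkout master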

On the component level, refactoring (see Refactoring: Improving the Design of Existing Code) might be a useful simplification technique. Actually "rewriting" is the simpler term, but let's assume that refactoring is rewriting with some ideological frosting ;-). See Slashdot Book Reviews: Refactoring: Improving the Design of Existing Code.
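
In scripting terms, the most common refactoring is "extract function": give a duplicated pipeline a name without changing behavior. A contrived shell sketch (file names are hypothetical):

    #!/bin/sh
    # "Extract function" refactoring; file names are hypothetical.
    # Before: the same cleanup pipeline was pasted in two places:
    #   grep -v '^#' hosts.prod | tr -d ' ' | sort > prod.lst
    #   grep -v '^#' hosts.test | tr -d ' ' | sort > test.lst
    # After: the duplication is factored into one named filter;
    # behavior is identical, but the intent now has a name.
    normalize() {
      grep -v '^#' "$1" | tr -d ' ' | sort
    }
    normalize hosts.prod > prod.lst
    normalize hosts.test > test.lst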

Another relevant work (the author tries to promote his own solution -- you can skip this part) is the critique of the "technology mudslide" in the book The Innovator's Dilemma by Harvard Business School professor Clayton M. Christensen. He defined the term "technology mudslide," a concept very similar to Brooks's "software development tar pit" -- a perpetual cycle of abandonment or retooling of existing systems in pursuit of the latest fashionable technology trend -- a cycle in which

 "Coping with the relentless onslaught of technology change was akin to trying to climb a mudslide raging down a hill. You have to scramble with everything you've got to stay on top of it. and if you ever once stop to catch your breath, you get buried."

The complexity caused by adopting new technology for the sake of new technology is further exacerbated by the narrow focus and inexperience of many project leaders -- inexperience with mission-critical systems, with systems of larger scale than previously built, with software development disciplines, and with project management. A Standish Group International survey recently showed that 46% of IT projects were over budget and overdue -- and 28% failed altogether. That's normal, and the real failure figures are probably higher: great software managers and architects are rare, and it is those people who determine the success of a software project.

Dr. Nikolai Bezroukov


Old News ;-)

[May 17, 2017] Who really gives a toss if it's agile or not

Notable quotes:
"... According to sources, hundreds of developers were employed on the programme at huge day rates, with large groups of so-called agile experts overseeing the various aspects of the programme. ..."
"... I have also worked on agile for UK gov projects a few years back when it was mandated for all new projects and I was at first dead keen. However, it quickly become obvious that the lack of requirements, specifications etc made testing a living nightmare. Changes asked for by the customer were grafted onto what become baroque mass of code. I can't see how Agile is a good idea except for the smallest trivial projects. ..."
"... The question is - is that for software that's still in development or software that's deployed in production? If it's the latter and your "something" just changes its data format you're going to be very unpopular with your users. And that's just for ordinary files. If it requires frequent re-orgs of an RDBMS then you'd be advised to not go near any dark alley where your DBA might be lurking. ..."
"... Software works on data. If you can't get the design of that right early you're going to be carrying a lot of technical debt in terms of backward compatibility or you're going to impose serious costs on your users for repeatedly bringing existing data up to date. ..."
"... At this point, courtesy of Exxxxtr3333me Programming and its spawn, 'agile' just means 'we don't want to do any design, we don't want to do any documentation, and we don't want to do any acceptance testing because all that stuff is annoying.' Everything is 'agile', because that's the best case for terrible lazy programmers, even if they're using a completely different methodology. ..."
"... It's like any exciting new methodology : same shit, different name. In this case, one that allows you to pretend the tiny attention-span of a panicking project manager is a good thing. ..."
"... Process is not a panacea or a crutch or a silver bullet. Methodologies only work as well as the people using it. Any methodology can be distorted to give the answer that upper management wants (instead of reality) ..."
"... under the guise of " agile ?". I'm no expert in project management, but I'm pretty sure it isn't supposed to be making it up as you go along, and constantly changing the specs and architecture. ..."
"... So why should the developers have all the fun? Why can't the designers and architects be "agile", too? Isn't constantly changing stuff all part of the "agile" way? ..."
May 17, 2017 | theregister.co.uk
Comment "It doesn't matter whether a cat is white or black, as long as it catches mice," according to Chinese revolutionary Deng Xiaoping.

While Deng wasn't referring to anything nearly as banal as IT projects (he was of course talking about the fact it doesn't matter whether a person is a revolutionary or not, as long as he or she is efficient and capable), the same principle could apply.

A fixation on the suppliers, technology or processes ultimately doesn't matter. It's the outcomes, stupid. That might seem like a blindingly obvious point, but it's one worth repeating.

Or as someone else put it to me recently in reference to the huge overspend on a key UK programme behind courts digitisation which we recently revealed: "Who gives a toss if it's agile or not? It just needs to work."

If you're going to do it, do it right

I'm not dismissing the benefits of this particular methodology, but in the case of the Common Platform Programme , it feels like the misapplication of agile was worse than not doing it at all.

Just to recap: the CPP was signed off around 2013, with the intention of creating a unified platform across the criminal justice system to allow the Crown Prosecution Service and courts to more effectively manage cases.

By cutting out duplication of systems, it was hoped to save buckets of cash and make the process of case management across the criminal justice system far more efficient.

Unlike the old projects of the past, this was a great example of the government taking control and doing it themselves. Everything was going to be delivered ahead of time and under budget. Trebles all round!

But as Lucy Liu's O-Ren Ishii told Uma Thurman's character in Kill Bill: "You didn't think it was gonna be that easy, did you?... Silly rabbit."

According to sources, alarm bells were soon raised over the project's self-styled "innovative use of agile development principles". It emerged that the programme was spending an awful lot of money for very little return. Attempts to shut it down were themselves shut down.

The programme carried on at full steam and by 2014 it was ramping up at scale. According to sources, hundreds of developers were employed on the programme at huge day rates, with large groups of so-called agile experts overseeing the various aspects of the programme.

CPP cops a plea

Four years since it was first signed off and what are the things we can point to from the CPP? An online make-a-plea programme which allows people to plead guilty or not guilty to traffic offences; a digital markup tool for legal advisors to record case results in court, which is being tested by magistrates courts in Essex; and the Magistrates Rota.

Multiple insiders have said the rest that we have to show for hundreds of millions of taxpayers' cash is essentially vapourware. When programme director Loveday Ryder described the project as a "once-in-a-lifetime opportunity" to modernise the criminal justice system, it wasn't clear then that she meant the programme would itself take an actual lifetime.

Of course the definition of agile is that you are able to move quickly and easily. So some might point to the outcomes of this programme as proof that it was never really about that.

One source remarked that it really doesn't matter if you call something agile or not, "If you can replace agile with constantly talking and communicating then fine, call it agile." He also added: "This was one of the most waterfall programmes in government I've seen."

What is most worrying about this programme is it may not be an isolated example. Other organisations and departments may well be doing similar things under the guise of "agile". I'm no expert in project management, but I'm pretty sure it isn't supposed to be making it up as you go along, and constantly changing the specs and architecture.

Ultimately who cares if a programme is run via a system integrator, multiple SMEs, uses a DevOps methodology, is built in-house or deployed using off-the-shelf, as long as it delivers good value. No doubt there are good reasons for using any of those approaches in a number of different circumstances.

Government still spends an outrageous amount of money on IT, upwards of £16bn a year. So as taxpayers it's a simple case of wanting them to "show me the money". Or to misquote Deng, at least show us some more dead mice. ®

Prst. V.Jeltz

Re: 'What's Real and What's for Sale'...

So agile means "constantly adapting"? Read: constantly bouncing from one fuckup to the next, paddling like hell to keep up, constantly firefighting whilst going down slowly like the Titanic?

thats how i read it

Dogbowl
Re: 'What's Real and What's for Sale'...

Ha! About 21 years back, working at Racal in Bracknell on a military radio project, we had a 'round-trip-OMT' CASE tool that did just that. It even generated documentation from the code, so as you added classes and methods the CASE tool generated the design document. Also, if a nightly build failed, it would email the code author.

I have also worked on agile for UK gov projects a few years back when it was mandated for all new projects and I was at first dead keen. However, it quickly became obvious that the lack of requirements, specifications etc made testing a living nightmare. Changes asked for by the customer were grafted onto what became a baroque mass of code. I can't see how Agile is a good idea except for the smallest trivial projects.

PatientOne
Re: 'What's Real and What's for Sale'...

"Technically 'agile' just means you produce working versions frequently and iterate on that."

It's more to do with priorities: On time, on budget, to specification: Put these in the order of which you will surrender if the project hits problems.

Agile focuses on On time. What is delivered is hopefully to specification, and within budget, but one or both of those could be surrendered in order to get something out On time. It's just project management 101 with a catchy name, and in poorly managed 'agile' developments you find padding to fit the usual 60/30/10 rule. Then the management discards the padding and insists the project can be completed in a reduced time as a result, thereby breaking the rules of 'agile' development (insisting it's on spec, under time and under budget, but it's still 'agile'...).

Doctor Syntax
Re: 'What's Real and What's for Sale'...

"Usually I check something(s) in every day, for the most major things it may take a week, but the goal is always to get it in and working so it can be tested."

The question is - is that for software that's still in development or software that's deployed in production? If it's the latter and your "something" just changes its data format you're going to be very unpopular with your users. And that's just for ordinary files. If it requires frequent re-orgs of an RDBMS then you'd be advised to not go near any dark alley where your DBA might be lurking.

Software works on data. If you can't get the design of that right early you're going to be carrying a lot of technical debt in terms of backward compatibility or you're going to impose serious costs on your users for repeatedly bringing existing data up to date.

Doctor Syntax
Re: 'What's Real and What's for Sale'...

"On time, on budget, to specification: Put these in the order of which you will surrender if the project hits problems."

In the real world it's more likely to be a trade-off of how much of each to surrender.

FozzyBear
Re: 'What's Real and What's for Sale'...

I was told in my earlier years by a Developer.

For any project you can have it

  1. Cheap, (On Budget)
  2. Good, (On spec)
  3. Quick.( On time)

Pick two of the three and only two. It doesn't matter which way you pick; you're fucked on the third. Doesn't matter about methodology, doesn't matter about requirements or project manglement. You are screwed on the third, and the great news is that the level of the reaming you get scales with the size of the project.

After almost 20 years in the industry this has held true.

Dagg
Re: 'What's Real and What's for Sale'...

Technically 'agile' just means you produce working versions frequently and iterate on that.

No, technically agile means having no clue as to what is required and to evolve the requirements as you build. All well and good if you have a dicky little web site but if you are on a very / extremely large project with fixed time frame and fixed budget you are royally screwed trying to use agile as there is no way you can control scope.

Hell under agile no one has any idea what the scope is!

Archtech
Re: Government still spends an outrageous amount of money on IT

I hope you were joking. If not, try reading the classic book "The Mythical Man-Month".

oldtaku
'Agile' means nothing at this point. Unless it means terrible software.

At this point, courtesy of Exxxxtr3333me Programming and its spawn, 'agile' just means 'we don't want to do any design, we don't want to do any documentation, and we don't want to do any acceptance testing because all that stuff is annoying.' Everything is 'agile', because that's the best case for terrible lazy programmers, even if they're using a completely different methodology.

I firmly believe in the basics of 'iterate working versions as often as possible'. But why sell ourselves short by calling it agile when we actually design it, document it, and use testing beyond unit tests?

Yes, yes, you can tell me what 'agile' technically means, and I know that design and documentation and QA are not excluded, but in practice even the most waterfall of waterfall call themselves agile (like Kat says), and from hard experience people who really push 'agile agile agile' as their thing are the worst of the worst terrible coders who just slam crap together with all the finesse and thoughtfulness of a Bangalore outsourcer.

Adrian 4
It's like any exciting new methodology : same shit, different name. In this case, one that allows you to pretend the tiny attention-span of a panicking project manager is a good thing.

When someone shows me they've learned the lessons of Brooks's tar pit, I'll be interested to see how they did it. Until then, it's all talk.

jamie m
25% Agile:

Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.

Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.

Working software is the primary measure of progress.

kmac499
Re: 25% Agile:

From Jamie M

Working software is the primary measure of progress.

Brilliant few-word summary which should be scrawled on the wall of every IT project manager's office in Foot High Letters.

I've lived through SSADM, RAD, DSDM, Waterfall, Boehm Spirals, Extreme Programming and probably a few others.

They are ALL variations on a theme. The only thing they have in common is the successful ones left a bunch of 0's and 1's humming away in a lump of silicon doing something useful.

Doctor Syntax
Re: 25% Agile:

"Working software is the primary measure of progress."

What about training the existing users, having it properly documented for the users of the future, briefing support staff, having proper software documentation, or at least self documenting code, for those who will have to maintain it and ensuring it doesn't disrupt the data from the previous release? Or do we just throw code over the fence and wave goodbye to it?

Charlie Clark
Re: Limits of pragmatism

In France "naviguer à vue" is pejorative.

Software development in France* which also gave us ah yes, it may work in practice but does it work in theory .

The story is about the highly unusual cost overrun of a government project. Never happened dans l'héxagone ? Because it seems to happy pretty much everywhere else with relentless monotony because politicians are fucking awful project managers.

* FWIW I have a French qualification.

Anonymous Coward

Agile only works if all stakeholders agree on an outcome

For a project that is a huge change in how your organisation operates, it is unlikely that you will be able to deliver at least the initial parts of the project in an agile way. Once outcomes are known at a high level, stakeholders have something to cling to when they are asked what they need, if indeed they exist yet. (Trying to ask for requirements from a stakeholder that doesn't exist yet is tough.)

Different methods have their own issues, but in this case I would have expected failure to be reasonably predictable.

You won't have much to show for it, as they shouldn't, at least, have started coding against a business model that itself still needs defining. This is predictable, and overall it means that no one agrees what the business should look like, let alone how a vendor-delivered software solution should support it.

I have a limited amount of sympathy for the provider for this as it will be beyond their control (limited as they are an expensive government provider after all)

This is a disaster caused by the poor management in UKGOV, and the vendor should have dropped it and run well before this.

Anonymous Coward

I'm a one-man, self-employed business - I do some very complex sites - but if they don't work I don't get paid. If they spew ugly bugs I get panicked emails from unhappy clients.

So I test after each update, and I comment code and add features to make my life easy when it comes to debugging. Woo, it even sends me emails for some bugs.

I'm in agreement with the guy above - a dozen devs, a page layout designer or two, some databases. One manager to co-ordinate and no bloody jargon.

There's a MASSIVE efficiency to small teams but all the members need to be on top of their game.

Doctor Syntax
"I'm in agreement with the guy above - a dozen devs, a page layout designer or two, some databases. One manager to co-ordinate and no bloody jargon."

Don't forget a well-defined, soluble problem. That's in your case, where you're paid by results. If you're paid by billable hours it's a positive disadvantage.

Munchausen's proxy
Agile Expertise?

" I'm no expert in project management, but I'm pretty sure it isn't supposed to be making it up as you go along, and constantly changing the specs and architecture."

I'm no expert either, but I honestly thought that was quite literally the definition of agile. (maybe disguised with bafflegab, but semantically equivalent)

Zippy's Sausage Factory
Sounds like what I have said for a while...

The "strategy boutiques" saw "agile" becoming popular and they now use it as a buzzword.

These days, I put it in my "considered harmful" bucket, along with GOTO, teaching people to program using BASIC, and "upgrading" to Office 2016*.

* Excel, in particular.

a_yank_lurker
Buzzword Bingo

All too often sound development ideas are perverted. The concepts are sound, but the mistake is to view each as the perfect panacea that will produce bug-free, working code. Each has its purpose and scope of effectiveness. What should be understood and applied is not a precise cookbook method but principles: Agile focuses on communication between groups and on ensuring all are on the same page. Others focus more on low-level development (test-driven development, e.g.), but one can lose sight of the goal, which is to use an appropriate tool set to make sure quality code is produced. Again: is the code being tested, are the tests correct, are junior developers being mentored, are developers working together appropriately for the nature of the project? These are the issues to be addressed, not the precise formalism of the insultants.

Uncle Bob Martin has noted that one of the problems the formalisms try to address is the large number of junior developers who need proper mentoring, training, etc. in real world situations. He noted that in the old days many IT pros were mid level professionals who wandered over to IT and many of the formalisms so beloved by the insultants were concepts they did naturally. Cross functional team meetings - check, mentor - check, use appropriate tests - check, etc. These are professional procedures common to other fields and were ingrained mindset and habits.

Doctor Syntax
It's worth remembering that it's the disasters that make the news. I've worked on a number of public sector projects which were successful. After a few years of operation, however, the contract period was up and the whole service put out to re-tender.* At that point someone else gets the contract so the original work on which the successful delivery was based got scrapped.

* With some very odd results, it has to be said, but that's a different story.

goldcd

...My personal niggle is that a team has "velocity" rather than "speed" - and that seems to be a somewhat deliberate and disingenuous selection. The team should have speed, the project ultimately a measurable velocity calculated by working out how much of the speed was wasted in the wrong/right direction.

Anyway, off to get my beauty sleep, so I can feed the backlog tomorrow with anything within my reach.

Notas Badoff
Re: I like agile

Wanted to give you an up-vote as "velocity" vs. "speed" is exactly the sleight of hand that infuriates me. We do want eventual progress achieved, as in "distance towards goal in mind", right?

Unfortunately my reading the definitions and checking around leads me to think that you've got the words 'speed' and 'velocity' reversed above. Where's that nit-picker's icon....

bfwebster
I literally less than an hour ago gave my CS 428 ("Software Engineering") class here at Brigham Young University (Provo, Utah, USA) my final lecture for the semester, which included this slide:

Process is not a panacea or a crutch or a silver bullet. Methodologies only work as well as the people using them. Any methodology can be distorted to give the answer that upper management wants (instead of reality).

When adopting a new methodology:

Understand the strengths and weaknesses of a given methodology before starting a project with it. Also, make sure a majority of team members have successfully completed a real-world project using that methodology.

Pete 2
Save some fun for us!

> under the guise of "agile". I'm no expert in project management, but I'm pretty sure it isn't supposed to be making it up as you go along, and constantly changing the specs and architecture.

So why should the developers have all the fun? Why can't the designers and architects be "agile", too? Isn't constantly changing stuff all part of the "agile" way?

[May 05, 2017] William Binney - The Government is Profiling You (The NSA is Spying on You)

Very interesting discussion of how the project of mass surveillance of Internet traffic started and what the major challenges were. That's probably where the idea arose of collecting "envelopes" (metadata) and correlating them to build a social network graph -- similar to what was done with paper mail during the Civil War.
The idea of using the same approach to prevent corruption in the medical establishment and Medicare fraud is very interesting.
Notable quotes:
"... I suspect that it's hopelessly unlikely for honest people to complete the Police Academy; somewhere early on the good cops are weeded out and cannot complete training unless they compromise their integrity. ..."
"... 500 Years of History Shows that Mass Spying Is Always Aimed at Crushing Dissent It's Never to Protect Us From Bad Guys No matter which government conducts mass surveillance, they also do it to crush dissent, and then give a false rationale for why they're doing it. ..."
"... People are so worried about NSA don't be fooled that private companies are doing the same thing. ..."
"... In communism the people learned quick they were being watched. The reaction was not to go to protest. ..."
"... Just not be productive and work the system and not listen to their crap. this is all that was required to bring them down. watching people, arresting does not do shit for their cause ..."
Apr 20, 2017 | www.youtube.com
Chad 2 years ago

"People who believe in these rights very much are forced into compromising their integrity"

I suspect that it's hopelessly unlikely for honest people to complete the Police Academy; somewhere early on the good cops are weeded out and cannot complete training unless they compromise their integrity.

Agent76 1 year ago (edited)
January 9, 2014

500 Years of History Shows that Mass Spying Is Always Aimed at Crushing Dissent It's Never to Protect Us From Bad Guys No matter which government conducts mass surveillance, they also do it to crush dissent, and then give a false rationale for why they're doing it.

http://www.washingtonsblog.com/2014/01/government-spying-citizens-always-focuses-crushing-dissent-keeping-us-safe.html

Homa Monfared 7 months ago

I am wondering how much damage your spying did to the Foreign Countries, I am wondering how you changed regimes around the world, how many refugees you helped to create around the world.

Don Kantner, 2 weeks ago

People are so worried about the NSA; don't be fooled, private companies are doing the same thing. Plus, the truth is that if the NSA wasn't watching, any fool with a computer could potentially cause a worldwide economic crisis.

Bettor in Vegas 1 year ago

In communism the people learned quick they were being watched. The reaction was not to go to protest.

Just not be productive and work the system and not listen to their crap. this is all that was required to bring them down. watching people, arresting does not do shit for their cause......

[Apr 18, 2017] Learning to Love Intelligent Machines

Notable quotes:
"... Learning to Love Intelligent Machines ..."
Apr 18, 2017 | www.nakedcapitalism.com
MoiAussie , April 17, 2017 at 9:04 am

If anyone is struggling to access Learning to Love Intelligent Machines (WSJ), you can get to it by clicking through this post. YMMV.

MyLessThanPrimeBeef , April 17, 2017 at 11:26 am

Also, don't forget to Learn from your Love Machines.

Artificial Love + Artificial Intelligence = Artificial Utopia.

[Apr 17, 2017] How many articles have I read that state as fact that the problem is REALLY automation?

Notable quotes:
"... It isn't. It's the world's biggest, most advanced cloud-computing company with an online retail storefront stuck between you and it. In 2005-2006 it was already selling supercomputing capability for cents on the dollar - way ahead of Google and Microsoft and IBM. ..."
"... Do you really think the internet created Amazon, Snapchat, Facebook, etc? No, the internet was just a tool to be used. The people who created those businesses would have used any tool they had access to at the time because their original goal was not automation or innovation, it was only to get rich. ..."
"... "Disruptive parasitic intermediation" is superb, thanks. The entire phrase should appear automatically whenever "disruption"/"disruptive" or "innovation"/"innovative" is used in a laudatory sense. ..."
"... >that people have a much bigger aversion to loss than gain. ..."
"... As the rich became uber rich, they hid the money in tax havens. As for globalization, this has less to do these days with technological innovation and more to do with economic exploitation. ..."
Apr 17, 2017 | www.nakedcapitalism.com
Carla , April 17, 2017 at 9:25 am

"how many articles have I read that state as fact that the problem is REALLY automation?

NO, the real problem is that the plutocrats control the policies "

+1

justanotherprogressive , April 17, 2017 at 11:45 am

+100 to your comment. There is a decided attempt by the plutocrats to get us to focus our anger on automation and not the people, like they themselves, who control the automation ..

MoiAussie , April 17, 2017 at 12:10 pm

Plutocrats control much automation, but so do thousands of wannabe plutocrats whose expertise lets them come from nowhere to billionairehood in a few short years by using it to create some novel, disruptive parasitic intermediation that makes their fortune. The "sharing economy" relies on automation. As does Amazon, Snapchat, Facebook, Dropbox, Pinterest,

It's not a stretch to say that automation creates new plutocrats. So blame the individuals, or blame the phenomenon, or both, whatever works for you.

Carolinian , April 17, 2017 at 12:23 pm

So John D. Rockefeller and Andrew Carnegie weren't plutocrats–or were somehow better plutocrats?

Blame not individuals or phenomena but society and the public and elites who shape it. Our social structure is also a kind of machine and perhaps the most imperfectly designed of all of them. My own view is that the people who fear machines are the people who don't like or understand machines. Tools, and the use of them, are an essential part of being human.

MoiAussie , April 17, 2017 at 9:21 pm

Huh? If I wrote "careless campers create forest fires", would you actually think I meant "careless campers create all forest fires"?

Carolinian , April 17, 2017 at 10:23 pm

I'm replying to your upthread comment which seems to say today's careless campers and the technology they rely on are somehow different from those other figures we know so well from history. In fact all technology is tremendously disruptive but somehow things have a way of sorting themselves out. So–just to repeat–the thing is not to "blame" the individuals or the automation but to get to work on the sorting. People like Jeff Bezos with his very flaky business model could be little more than a blip.

a different chris , April 17, 2017 at 12:24 pm

>Amazon, Snapchat, Facebook, Dropbox, Pinterest

Automation? Those companies? I guess Amazon automates ordering not exactly R. Daneel Olivaw for sure. If some poor Asian girl doesn't make the boots or some Agri giant doesn't make the flour Amazon isn't sending you nothin', and the other companies are even more useless.

Mark P. , April 17, 2017 at 2:45 pm

'Automation? Those companies? I guess Amazon automates ordering not exactly R. Daneel Olivaw for sure.'

Um. Amazon is highly deceptive, in that most people think it's a giant online retail store.

It isn't. It's the world's biggest, most advanced cloud-computing company with an online retail storefront stuck between you and it. In 2005-2006 it was already selling supercomputing capability for cents on the dollar - way ahead of Google and Microsoft and IBM.

justanotherprogressive , April 17, 2017 at 12:32 pm

Do you really think the internet created Amazon, Snapchat, Facebook, etc? No, the internet was just a tool to be used. The people who created those businesses would have used any tool they had access to at the time because their original goal was not automation or innovation, it was only to get rich.

Let me remind you of Thomas Edison. If he had lived 100 years later, he would have used computers instead of electricity to make his fortune. (In contrast, Nikola Tesla/George Westinghouse used electricity to be innovative, NOT to get rich.) It isn't the tool that is used, it is the mindset of the people who use the tool.

clinical wasteman , April 17, 2017 at 2:30 pm

"Disruptive parasitic intermediation" is superb, thanks. The entire phrase should appear automatically whenever "disruption"/"disruptive" or "innovation"/"innovative" is used in a laudatory sense.

100% agreement with your first point in this thread, too. That short comment should stand as a sort of epigraph/reference for all future discussion of these things.

No disagreement on the point about actual and wannabe plutocrats either, but perhaps it's worth emphasising that it's not just a matter of a few successful (and many failed) personal get-rich-quick schemes, real as those are: the potential of 'universal machines' tends to be released in the form of parasitic intermediation because, for the time being at least, it's released into a world subject to the 'demands' of capital, and at a (decades-long) moment of crisis for the traditional model of capital accumulation. 'Universal' potential is set free to seek rents and maybe to do a bit of police work on the side, if the two can even be separated.

The writer of this article from 2010 [ http://www.metamute.org/editorial/articles/artificial-scarcity-world-overproduction-escape-isnt ] surely wouldn't want it to be taken as conclusive, but it's a good example of one marginal train of serious thought about all of the above. See also 'On Africa and Self-Reproducing Automata' written by George Caffentzis 20 years or so earlier [https://libcom.org/library/george-caffentzis-letters-blood-fire]; apologies for link to entire (free, downloadable) book, but my crumbling print copy of the single essay stubbornly resists uploading.

DH , April 17, 2017 at 9:48 am

Unfortunately, the healthcare insurance debate has been simply a battle between competing ideologies. I don't think Americans understand the key role that universal healthcare coverage plays in creating resilient economies.

Before penicillin, heart surgery, cancer cures, modern obstetrics, etc., it didn't matter if you were rich or poor if you got sick. There was a good chance you would die in either case, which was a key reason that the average life span was short.

In the mid-20th century that began to change so now lifespan is as much about income as anything else. It is well known that people have a much bigger aversion to loss than gain. So if you currently have healthcare insurance through a job, then you don't want to lose it by taking a risk to do something where you are no longer covered.

People are moving less to find work – why would you uproot your family to work for a company that is just as likely to lay you off in two years, in a place where you have no roots? People are less likely today to quit jobs to start a new business – that is a big gamble because you not only have to keep the roof over your head and put food on the table, but you also have to cover the even bigger cost of healthcare insurance in the individual market, or you have a much greater risk of not making it to your 65th birthday.

In countries like Canada, healthcare coverage is barely a discussion point if somebody is looking to move, change jobs, or start a small business.

If I had a choice today between universal basic income and universal healthcare coverage, I would choose the healthcare coverage from a societal standpoint. That is simply insuring a risk, and it can allow people much greater freedom during their working lives. Social Security is of similar importance because it provides basic protection against disability and against starving in the cold in your old age. These are vastly different incentive systems than paying people money to live on even if they are not working.

Our ideological debates should be factoring these types of ideas in the discussion instead of just being a food fight.

a different chris , April 17, 2017 at 12:28 pm

>that people have a much bigger aversion to loss than gain.

Yeah well if the downside is that you're dead this starts to make sense.

>instead of just being a food fight.

The thing is that the Powers-That-Be want it to be a food fight, as that is at worst a great stalling tactic and at best a complete diversion. Good post, btw.

Altandmain , April 17, 2017 at 12:36 pm

As the rich became uber rich, they hid the money in tax havens. As for globalization, this has less to do these days with technological innovation and more to do with economic exploitation.

I will note that Germany, Japan, South Korea, and a few other nations have not bought into this madness and have retained a good chunk of their manufacturing sectors.

Mark P. , April 17, 2017 at 3:26 pm

'As for globalization, this has less to do these days with technological innovation and more to do with economic exploitation.'

Economic exploiters are always with us. You're underrating the role of a specific technological innovation. Globalization as we now know it really became feasible in the late 1980s with the spread of instant global electronic networks, mostly via the fiberoptic cables through which everything - telephony, Internet, etc. - travels in Internet packet mode.

That's the point at which capital could really start moving instantly around the world, and companies could really begin to run global supply chains and workforces. That's the point when shifts of workers in facilities in Bangalore or Beijing could start their workdays as shifts of workers in the U.S. were ending theirs, and companies could outsource and offshore their whole operations.

[Apr 15, 2017] The Trump phenomenon shows that we urgently need an alternative to the obsolete capitalism

Apr 15, 2017 | failedevolution.blogspot.gr

globinfo freexchange

It's not only the rapid technological progress, especially in the field of hyper-automation and Artificial Intelligence, that makes capitalism unable to deliver a viable future to societies. It's also the fact that the dead end it creates produces false alternatives like Donald Trump.

As already pointed out:

With the Trump administration taken over by Goldman Sachs, nothing can surprise us anymore. The fairy tale of the 'anti-establishment' Trump who would supposedly fight for the interests of the Americans forgotten by the system collapsed even before Trump's election.

What's quite surprising is how fast the new US president, buddy of the plutocrats, is offering 'earth and water' to the top 1% of American society, as if they did not already have enough at the expense of the 99%. His recent 'achievement' was to sign off on more deregulation in favor of the banking mafia that ruined the economy in 2008, destroyed millions of working-class Americans, and sent waves of financial destruction all over the world. Europe is still on its knees because of the neoliberal destruction and cruel austerity.

Richard Wolff explains:

If you don't want the Trumps of this world to periodically show up and scare everybody, you've got to do something about the basic system that produces the conditions that allow a Trump to get to the position he now occupies.

We need a better politics than having two parties compete for the big corporations to love them, two parties to proudly celebrate capitalism. Real politics needs an opposition, people who think we can do better than capitalism, we ought to try, we ought to discuss it, and the people should have a choice about that. Because if you don't give them that, they are gonna go from one extreme to another, trying to find a way out of the status quo that is no longer acceptable.

I'm amazed that after half a century in which any politician had accepted the name 'Socialist' attached to him or her, thereby committing, effectively, political suicide, Mr. Sanders has shown us that the world has really changed. He could have that label, he could accept the label, he could say he is proud of the label, and millions and millions of Americans said 'that's fine with us', he gets our vote. We will not be the same nation going forward, because of that. It is now openly possible to raise questions about capitalism, to talk about its shortcomings, to explore how we can do better.

Indeed, as the blog pointed out before the latest US elections:

Bernie has the background and the ability to change the course of US politics. He speaks straight about things buried by the establishment, as if they were absent: Wall Street corruption, growing inequality, corporate funding of politicians by lobbies. He says that he will break up the big banks. He will provide free health and education for all the American people. Because of Sanders, Hillary is forced to speak about these issues too. And subsequently, this starts to shape again a fundamental ideological difference between Democrats and Republicans, which was nearly absent for decades.

But none of this would have come to surface if Bernie didn't have the support of the American people. Despite that he came from nowhere, especially the young people mobilized and started to spread his message using the alternative media. Despite that he speaks about Socialism, his popularity grows. The establishment starts to sense the first cracks in its solid structure. But Bernie is only the appropriate tool. It's the American people who make the difference.

No matter who will be elected eventually, the final countdown for the demolition of this brutal system has already started and it's irreversible. The question now is not if, but when it will collapse, and what this collapse will bring the day after. In any case, if people are truly united, they have nothing to fear.

So, what kind of system do we need to replace the obsolete capitalism? Do we need a kind of Democratic Socialism that would certainly be more compatible with rapid technological progress? Write your thoughts and ideas in the comments below.

[Apr 15, 2017] IMF claims that technology and global integration explain close to 75 percent of the decline in labor shares in Germany and Italy, and close to 50 percent in the United States.

Anything that the IMF claims should be taken with a grain of salt. The IMF is a quintessential neoliberal institution that will support neoliberalism to the bitter end.
Apr 15, 2017 | economistsview.typepad.com

point, April 14, 2017 at 05:06 AM

https://blogs.imf.org/2017/04/12/drivers-of-declining-labor-share-of-income/

"In advanced economies, about half of the decline in labor shares can be traced to the impact of technology."

Searching, searching for the policy variable in the regression.

anne -> point... , April 14, 2017 at 08:09 AM
https://blogs.imf.org/2017/04/12/drivers-of-declining-labor-share-of-income/

April 12, 2017

Drivers of Declining Labor Share of Income
By Mai Chi Dao, Mitali Das, Zsoka Koczan, and Weicheng Lian

Technology: a key driver in advanced economies

In advanced economies, about half of the decline in labor shares can be traced to the impact of technology. The decline was driven by a combination of rapid progress in information and telecommunication technology, and a high share of occupations that could be easily be automated.

Global integration-as captured by trends in final goods trade, participation in global value chains, and foreign direct investment-also played a role. Its contribution is estimated at about half that of technology. Because participation in global value chains typically implies offshoring of labor-intensive tasks, the effect of integration is to lower labor shares in tradable sectors.

Admittedly, it is difficult to cleanly separate the impact of technology from global integration, or from policies and reforms. Yet the results for advanced economies is compelling. Taken together, technology and global integration explain close to 75 percent of the decline in labor shares in Germany and Italy, and close to 50 percent in the United States.

paine -> anne... , April 14, 2017 at 08:49 AM
Again this is about changing the wage structure

Total hours is macro management. Mobilizing potential job hours to the max is undaunted by technical progress

Recall industrial jobs required unions to become well paid

We need a CIO for services logistics and commerce

[Apr 14, 2017] Automation as a way to depress wages

Apr 14, 2017 | economistsview.typepad.com
point , April 14, 2017 at 04:59 AM
http://www.bradford-delong.com/2017/04/notes-working-earning-and-learning-in-the-age-of-intelligent-machines.html

Brad said: Few things can turn a perceived threat into a graspable opportunity like a high-pressure economy with a tight job market and rising wages. Few things can turn a real opportunity into a phantom threat like a low-pressure economy, where jobs are scarce and wage stagnant because of the failure of macro economic policy.

What is it that prevents a statement like this from succeeding at the level of policy?

Peter K. -> point... , April 14, 2017 at 06:41 AM
class war

center-left economists like DeLong and Krugman going with neoliberal Hillary rather than Sanders.

Sanders supports that statement, Hillary did not. Obama did not.

PGL spent the primary unfairly attacking Sanders and the "Bernie Bros" on behalf of the center-left.

[Apr 07, 2017] No it was policy driven by politics. They increased profits at the expense of workers and the middle class. The New Democrats played along with Wall Street.

Apr 07, 2017 | economistsview.typepad.com
ken melvin -> DrDick ... , April 06, 2017 at 08:45 AM
Probably automated 200. In every case, displacing 3/4 of the workers and increasing production 40% while greatly improving quality. The exact same can be said for larger-scale operations such as automobile mfg, ...

The convergence of offshoring and automation in such a short time frame meant that instead of a gradual transformation that might have allowed for more evolutionary economic thinking, American workers got gobsmacked. The aftermath includes the wage disparity, opiate epidemic, Trump, ...

This transition is of the scale of the industrial revolution, with climate change thrown in. This is just the beginning of great social and economic turmoil. None of the stuff that evolved specific to the industrial revolution applies.

Peter K. -> ken melvin... , April 06, 2017 at 09:01 AM

No it was policy driven by politics. They increased profits at the expense of workers and the middle class. The New Democrats played along with Wall Street.
libezkova -> ken melvin... , April 06, 2017 at 05:43 PM
"while greatly improving quality" -- that's not given.

[Apr 06, 2017] Economist's View Links for 04-06-17

Apr 06, 2017 | economistsview.typepad.com
Peter K. -> EMichael... , April 06, 2017 at 09:18 AM
What do you make of the DeLong link? Why do you avoid discussing it?

"...
The lesson from history is not that the robots should be stopped; it is that we will need to confront the social-engineering and political problem of maintaining a fair balance of relative incomes across society. Toward that end, our task becomes threefold.

First, we need to make sure that governments carry out their proper macroeconomic role, by maintaining a stable, low-unemployment economy so that markets can function properly. Second, we need to redistribute wealth to maintain a proper distribution of income. Our market economy should promote, rather than undermine, societal goals that correspond to our values and morals. Finally, workers must be educated and trained to use increasingly high-tech tools (especially in labor-intensive industries), so that they can make useful things for which there is still demand.

Sounding the alarm about "artificial intelligence taking American jobs" does nothing to bring such policies about. Mnuchin is right: the rise of the robots should not be on a treasury secretary's radar."

DrDick -> EMichael... , April 06, 2017 at 08:43 AM
Except that Germany and Japan have retained a larger share of workers in manufacturing, despite more automation. Germany has also retained much more of its manufacturing base than the US has. The evidence really does point to the role of outsourcing in the US compared with others.

http://www.economist.com/node/21552567

http://www.economist.com/node/2571689

pgl -> DrDick ... , April 06, 2017 at 08:54 AM
I got an email of some tale that Adidas would start manufacturing in Germany as opposed to China. Not with German workers but with robots. The author claimed the robots would cost only $5.50 per hour as opposed to $11 an hour for the Chinese workers. Of course Chinese apparel workers do not get anywhere close to $11 an hour and the author was not exactly a credible source.
pgl -> pgl... , April 06, 2017 at 08:57 AM
Reuters is a more credible source:

http://www.reuters.com/article/us-adidas-manufacturing-idUSKBN0TS0ZM20151209

A pilot program initially making 500 pairs of shoes in the first year. No claims as to the wage rate of Chinese workers.

libezkova said in reply to pgl... , April 06, 2017 at 05:41 PM
"The new "Speedfactory" in the southern town of Ansbach near its Bavarian headquarters will start production in the first half of 2016 of a robot-made running shoe that combines a machine-knitted upper and springy "Boost" sole made from a bubble-filled polyurethane foam developed by BASF."

Interesting. I thought that sneaker ("keds") production was already fully automated. Bright colors are probably the main attraction. But Adidas commands a premium price...

The machine-knitted upper is the key -- robots, even sophisticated ones, place additional demands on the precision of the parts to be assembled. That's also probably why a monolithic molded sole was chosen. Kind of like 3-D printing of shoes.

Robots do not "feel" the nuances of the technological process like humans do.

kurt -> pgl... , April 06, 2017 at 09:40 AM
While I agree that Chinese workers don't get $11 - frequently employee costs are accounted at a loaded rate (including all benefits - in China this would include the capital cost of dormitories, food, security staff, benefits and taxes). I am guessing that a $2-3 an hour wage would result in an $11 fully loaded rate under those circumstances. Those other costs are not required with robots.
Peter K. -> DrDick ... , April 06, 2017 at 08:59 AM
I agree with you. The center-left want to exculpate globalization and outsourcing, or free them from blame, by providing another explanation: technology and robots. They're not just arguing with Trump.

Brad Setser:

"I suspect the politics around trade would be a bit different in the U.S. if the goods-exporting sector had grown in parallel with imports.

That is one key difference between the U.S. and Germany. Manufacturing jobs fell during reunification, and Germany went through a difficult adjustment in the early 2000s. But over the last ten years the number of jobs in Germany's export sector grew, keeping the number of people employed in manufacturing roughly constant over the last ten years even with rising productivity. Part of the "trade" adjustment was a shift from import-competing to exporting sectors, not just a shift out of the goods producing tradables sector. Of course, not everyone can run a German-sized surplus in manufactures - but it seems likely the low U.S. share of manufacturing employment (relative to Germany and Japan) is in part a function of the size and persistence of the U.S. trade deficit in manufactures. (It is also in part a function of the fact that the U.S. no longer needs to trade manufactures for imported energy on any significant scale; the U.S. has more jobs in oil and gas production, for example, than Germany or Japan)."

http://blogs.cfr.org/setser/2017/02/06/offshore-profits-and-exports/

anne -> DrDick ... , April 06, 2017 at 10:01 AM
https://fred.stlouisfed.org/graph/?g=dgSQ

January 15, 2017

Percent of Employment in Manufacturing for United States, Germany and Japan, 1970-2012


https://fred.stlouisfed.org/graph/?g=dgT0

January 15, 2017

Percent of Employment in Manufacturing for United States, Germany and Japan, 1970-2012

(Indexed to 1970)


[Apr 06, 2017] The impact of information technology on employment is undoubtedly a major issue, but it is also not in society's interest to discourage investment in high-tech companies.

Apr 06, 2017 | economistsview.typepad.com
Peter K. , April 05, 2017 at 01:55 PM
Interesting, thought-provoking discussion by DeLong:

https://www.project-syndicate.org/commentary/mnuchin-automation-low-skill-workers-by-j--bradford-delong-2017-04

APR 3, 2017
Artificial Intelligence and Artificial Problems
by J. Bradford DeLong

BERKELEY – Former US Treasury Secretary Larry Summers recently took exception to current US Treasury Secretary Steve Mnuchin's views on "artificial intelligence" (AI) and related topics. The difference between the two seems to be, more than anything else, a matter of priorities and emphasis.

Mnuchin takes a narrow approach. He thinks that the problem of particular technologies called "artificial intelligence taking over American jobs" lies "far in the future." And he seems to question the high stock-market valuations for "unicorns" – companies valued at or above $1 billion that have no record of producing revenues that would justify their supposed worth and no clear plan to do so.

Summers takes a broader view. He looks at the "impact of technology on jobs" generally, and considers the stock-market valuation for highly profitable technology companies such as Google and Apple to be more than fair.

I think that Summers is right about the optics of Mnuchin's statements. A US treasury secretary should not answer questions narrowly, because people will extrapolate broader conclusions even from limited answers. The impact of information technology on employment is undoubtedly a major issue, but it is also not in society's interest to discourage investment in high-tech companies.

On the other hand, I sympathize with Mnuchin's effort to warn non-experts against routinely investing in castles in the sky. Although great technologies are worth the investment from a societal point of view, it is not so easy for a company to achieve sustained profitability. Presumably, a treasury secretary already has enough on his plate without having to worry about the rise of the machines.

In fact, it is profoundly unhelpful to stoke fears about robots, and to frame the issue as "artificial intelligence taking American jobs." There are far more constructive areas for policymakers to direct their focus. If the government is properly fulfilling its duty to prevent a demand-shortfall depression, technological progress in a market economy need not impoverish unskilled workers.

This is especially true when value is derived from the work of human hands, or the work of things that human hands have made, rather than from scarce natural resources, as in the Middle Ages. Karl Marx was one of the smartest and most dedicated theorists on this topic, and even he could not consistently show that technological progress necessarily impoverishes unskilled workers.

Technological innovations make whatever is produced primarily by machines more useful, albeit with relatively fewer contributions from unskilled labor. But that by itself does not impoverish anyone. To do that, technological advances also have to make whatever is produced primarily by unskilled workers less useful. But this is rarely the case, because there is nothing keeping the relatively cheap machines used by unskilled workers in labor-intensive occupations from becoming more powerful. With more advanced tools, these workers can then produce more useful things.

Historically, there are relatively few cases in which technological progress, occurring within the context of a market economy, has directly impoverished unskilled workers. In these instances, machines caused the value of a good that was produced in a labor-intensive sector to fall sharply, by increasing the production of that good so much as to satisfy all potential consumers.

The canonical example of this phenomenon is textiles in eighteenth- and nineteenth-century India and Britain. New machines made the exact same products that handloom weavers had been making, but they did so on a massive scale. Owing to limited demand, consumers were no longer willing to pay for what handloom weavers were producing. The value of wares produced by this form of unskilled labor plummeted, but the prices of commodities that unskilled laborers bought did not.

The lesson from history is not that the robots should be stopped; it is that we will need to confront the social-engineering and political problem of maintaining a fair balance of relative incomes across society. Toward that end, our task becomes threefold.

First, we need to make sure that governments carry out their proper macroeconomic role, by maintaining a stable, low-unemployment economy so that markets can function properly. Second, we need to redistribute wealth to maintain a proper distribution of income. Our market economy should promote, rather than undermine, societal goals that correspond to our values and morals. Finally, workers must be educated and trained to use increasingly high-tech tools (especially in labor-intensive industries), so that they can make useful things for which there is still demand.

Sounding the alarm about "artificial intelligence taking American jobs" does nothing to bring such policies about. Mnuchin is right: the rise of the robots should not be on a treasury secretary's radar.

anne , April 05, 2017 at 03:14 PM
https://minneapolisfed.org/research/wp/wp736.pdf

January, 2017

The Global Rise of Corporate Saving
By Peter Chen, Loukas Karabarbounis, and Brent Neiman

Abstract

The sectoral composition of global saving changed dramatically during the last three decades. Whereas in the early 1980s most of global investment was funded by household saving, nowadays nearly two-thirds of global investment is funded by corporate saving. This shift in the sectoral composition of saving was not accompanied by changes in the sectoral composition of investment, implying an improvement in the corporate net lending position. We characterize the behavior of corporate saving using both national income accounts and firm-level data and clarify its relationship with the global decline in labor share, the accumulation of corporate cash stocks, and the greater propensity for equity buybacks. We develop a general equilibrium model with product and capital market imperfections to explore quantitatively the determination of the flow of funds across sectors. Changes including declines in the real interest rate, the price of investment, and corporate income taxes generate increases in corporate profits and shifts in the supply of sectoral saving that are of similar magnitude to those observed in the data.

anne -> anne... , April 05, 2017 at 03:17 PM
http://www.nytimes.com/2010/07/06/opinion/06smith.html

July 6, 2010

Are Profits Hurting Capitalism?
By YVES SMITH and ROB PARENTEAU

A STREAM of disheartening economic news last week, including flagging consumer confidence and meager private-sector job growth, is leading experts to worry that the recession is coming back. At the same time, many policymakers, particularly in Europe, are slashing government budgets in an effort to lower debt levels and thereby restore investor confidence, reduce interest rates and promote growth.

There is an unrecognized problem with this approach: Reductions in deficits have implications for the private sector. Higher taxes draw cash from households and businesses, while lower government expenditures withhold money from the economy. Making matters worse, businesses are already plowing fewer profits back into their own enterprises.

Over the past decade and a half, corporations have been saving more and investing less in their own businesses. A 2005 report from JPMorgan Research noted with concern that, since 2002, American corporations on average ran a net financial surplus of 1.7 percent of the gross domestic product - a drastic change from the previous 40 years, when they had maintained an average deficit of 1.2 percent of G.D.P. More recent studies have indicated that companies in Europe, Japan and China are also running unprecedented surpluses.

The reason for all this saving in the United States is that public companies have become obsessed with quarterly earnings. To show short-term profits, they avoid investing in future growth. To develop new products, buy new equipment or expand geographically, an enterprise has to spend money - on marketing research, product design, prototype development, legal expenses associated with patents, lining up contractors and so on.

Rather than incur such expenses, companies increasingly prefer to pay their executives exorbitant bonuses, or issue special dividends to shareholders, or engage in purely financial speculation. But this means they also short-circuit a major driver of economic growth.

Some may argue that businesses aren't investing in growth because the prospects for success are so poor, but American corporate profits are nearly all the way back to their peak, right before the global financial crisis took hold.

Another problem for the economy is that, once the crisis began, families and individuals started tightening their belts, bolstering their bank accounts or trying to pay down borrowings (another form of saving).

If households and corporations are trying to save more of their income and spend less, then it is up to the other two sectors of the economy - the government and the import-export sector - to spend more and save less to keep the economy humming. In other words, there needs to be a large trade surplus, a large government deficit or some combination of the two. This isn't a matter of economic theory; it's based in simple accounting.

What if a government instead embarks on an austerity program? Income growth will stall, and household wages and business profits may fall....

anne -> anne... , April 05, 2017 at 03:21 PM
http://www.nakedcapitalism.com/2017/04/global-corporate-saving-glut.html

April 5, 2017

The Global Corporate Saving Glut
By Yves Smith

On the one hand, the VoxEU article does a fine job of assembling long-term data on a global basis. It demonstrates that the corporate savings glut is long standing and that it has been accompanied by a decline in personal savings.

However, it fails to depict what an unnatural state of affairs this is. The corporate sector as a whole in non-recessionary times ought to be net spending, as in borrowing and investing in growth. As a market-savvy buddy put it, "If a company isn't investing in the business of its business, why should I?" I attributed the corporate savings trend in the US to the fixation on quarterly earnings, which sources such as McKinsey partners, with a broad view of firms' projects, were telling me was killing investment (any investment will have an income statement impact too, such as planning, marketing, design, and start-up expenses). This post, by contrast, treats this development as lacking in any agency. Labor share of GDP dropped and savings rose. They attribute that to lower interest rates over time. They again fail to see that as the result of power dynamics and political choices....

[Mar 29, 2017] Job Loss in Manufacturing: More Robot Blaming

Mar 29, 2017 | economistsview.typepad.com
anne , March 29, 2017 at 06:11 AM
http://cepr.net/blogs/beat-the-press/job-loss-in-manufacturing-more-robot-blaming

March 29, 2017

It is striking how the media feel such an extraordinary need to blame robots and productivity growth for the recent job loss in manufacturing rather than trade. We got yet another example of this exercise in a New York Times piece * by Claire Cain Miller, with the title "evidence that robots are winning the race for American jobs." The piece highlights a new paper ** by Daron Acemoglu and Pascual Restrepo which finds that robots have a large negative impact on wages and employment.

While the paper has interesting evidence on the link between the use of robots and employment and wages, some of the claims in the piece do not follow. For example, the article asserts:

"The paper also helps explain a mystery that has been puzzling economists: why, if machines are replacing human workers, productivity hasn't been increasing. In manufacturing, productivity has been increasing more than elsewhere - and now we see evidence of it in the employment data, too."

Actually, the paper doesn't provide any help whatsoever in solving this mystery. Productivity growth in manufacturing has almost always been more rapid than productivity growth elsewhere. Furthermore, it has been markedly slower even in manufacturing in recent years than in prior decades. According to the Bureau of Labor Statistics, productivity growth in manufacturing has averaged less than 1.2 percent annually over the last decade and less than 0.5 percent over the last five years. By comparison, productivity growth averaged 2.9 percent a year in the half century from 1950 to 2000.

The article is also misleading in asserting:

"The paper adds to the evidence that automation, more than other factors like trade and offshoring that President Trump campaigned on, has been the bigger long-term threat to blue-collar jobs (emphasis added)."

In terms of recent job loss in manufacturing, and in particular the loss of 3.4 million manufacturing jobs between December of 2000 and December of 2007, the rise of the trade deficit has almost certainly been the more important factor. We had substantial productivity growth in manufacturing between 1970 and 2000, with very little loss of jobs. The growth in manufacturing output offset the gains in productivity. The new part of the story in the period from 2000 to 2007 was the explosion of the trade deficit to a peak of nearly 6.0 percent of GDP in 2005 and 2006.

It is also worth noting that we could in fact expect substantial job gains in manufacturing if the trade deficit were reduced. If the trade deficit fell by 2.0 percentage points of GDP ($380 billion a year) this would imply an increase in manufacturing output of more than 22 percent. If the productivity of the manufacturing workers producing this additional output was the same as the rest of the manufacturing workforce it would imply an additional 2.7 million jobs in manufacturing. That is more jobs than would be eliminated by productivity at the recent 0.5 percent growth rate over the next forty years, even assuming no increase in demand over this period.
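
A minimal back-of-the-envelope check of this arithmetic, as a Python sketch (an editorial aside, not part of Baker's post; the GDP and manufacturing-employment levels are assumptions back-filled to match the figures he quotes):

    # Hedged sketch of Dean Baker's arithmetic; GDP level and manufacturing
    # employment are assumptions consistent with his quoted figures.
    gdp = 19_000e9                    # rough US GDP (assumption)
    deficit_cut = 0.02 * gdp          # 2.0 percentage points of GDP
    print(f"deficit reduction: ${deficit_cut / 1e9:.0f}B per year")         # ~$380B

    mfg_jobs = 12.3e6                 # manufacturing employment (assumption)
    new_jobs = 0.22 * mfg_jobs        # 22% more output at unchanged productivity
    print(f"implied new manufacturing jobs: {new_jobs / 1e6:.1f} million")  # ~2.7

    # Jobs displaced by 0.5%/yr productivity growth over 40 years,
    # holding demand constant:
    displaced = (1 - 1 / 1.005 ** 40) * mfg_jobs
    print(f"displaced over 40 years: {displaced / 1e6:.1f} million")        # ~2.2

Under these assumptions the 2.7 million jobs gained indeed exceed the roughly 2.2 million displaced by productivity growth, as the paragraph above claims.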

While the piece focuses on the displacement of less educated workers by robots and equivalent technology, it is likely that the areas where displacement occurs will be determined in large part by the political power of different groups. For example, it is likely that in the not distant future improvements in diagnostic technology will allow a trained professional to make more accurate diagnoses than the best doctor. Robots are likely to be better at surgery than the best surgeon. The extent to which these technologies will be allowed to displace doctors is likely to depend more on the political power of the American Medical Association than the technology itself.

Finally, the question of whether the spread of robots will lead to a transfer of income from workers to the people who "own" the robots will depend to a large extent on our patent laws. In the last four decades we have made patents longer and stronger. If we instead made them shorter and weaker, or better yet relied on open-source research, the price of robots would plummet and workers would be better positioned to capture the gains of productivity growth, as they had in prior decades. In this story it is not robots who are taking workers' wages, it is politicians who make strong patent laws.

* https://www.nytimes.com/2017/03/28/upshot/evidence-that-robots-are-winning-the-race-for-american-jobs.html

** http://economics.mit.edu/files/12154

-- Dean Baker

anne -> anne... , March 29, 2017 at 06:14 AM
https://fred.stlouisfed.org/graph/?g=d6j3

November 1, 2014

Total Factor Productivity at Constant National Prices for United States, 1950-2014


https://fred.stlouisfed.org/graph/?g=d6j7

November 1, 2014

Total Factor Productivity at Constant National Prices for United States, 1950-2014

(Indexed to 1950)

anne -> anne... , March 29, 2017 at 09:31 AM
https://fred.stlouisfed.org/graph/?g=dbjg

January 4, 2016

Manufacturing Multifactor Productivity, 1988-2014

(Indexed to 1988)


https://fred.stlouisfed.org/graph/?g=dbke

January 4, 2016

Manufacturing Multifactor Productivity, 2000-2014

(Indexed to 2000)

[Mar 29, 2017] I fear Summers at least as much as I fear robots

Mar 29, 2017 | economistsview.typepad.com
anne -> RC AKA Darryl, Ron... , March 29, 2017 at 06:17 AM
https://www.washingtonpost.com/news/wonk/wp/2017/03/27/larry-summers-mnuchins-take-on-artificial-intelligence-is-not-defensible/

March 27, 2017

The robots are coming, whether Trump's Treasury secretary admits it or not
By Lawrence H. Summers - Washington Post

As I learned (sometimes painfully) during my time at the Treasury Department, words spoken by Treasury secretaries can over time have enormous consequences, and therefore should be carefully considered. In this regard, I am very surprised by two comments made by Secretary Steven Mnuchin in his first public interview last week.

In reference to a question about artificial intelligence displacing American workers, Mnuchin responded that "I think that is so far in the future - in terms of artificial intelligence taking over American jobs - I think we're, like, so far away from that [50 to 100 years], that it is not even on my radar screen." He also remarked that he did not understand tech company valuations in a way that implied that he regarded them as excessive. I suppose there is a certain internal logic. If you think AI is not going to have any meaningful economic effects for half a century, then I guess you should think that tech companies are overvalued. But neither statement is defensible.

Mnuchin's comment about the lack of impact of technology on jobs is to economics approximately what global climate change denial is to atmospheric science or what creationism is to biology. Yes, you can debate whether technological change is on net good. I certainly believe it is. And you can debate what the job creation effects will be relative to the job destruction effects. I think this is much less clear, given the downward trends in adult employment, especially for men, over the past generation.

But I do not understand how anyone could reach the conclusion that all the action with technology is half a century away. Artificial intelligence is behind autonomous vehicles that will affect millions of jobs driving and dealing with cars within the next 15 years, even on conservative projections. Artificial intelligence is transforming everything from retailing to banking to the provision of medical care. Almost every economist who has studied the question believes that technology has had a greater impact on the wage structure and on employment than international trade and certainly a far greater impact than whatever increment to trade is the result of much debated trade agreements....

DrDick -> anne... , March 29, 2017 at 10:45 AM
Oddly, the robots are always coming in articles like Summers', but they never seem to get here. Automation has certainly played a role, but outsourcing has been a much bigger issue.
Peter K. -> DrDick ... , March 29, 2017 at 01:09 PM
I'm becoming increasingly skeptical about the robots argument.
jonny bakho -> DrDick ... , March 29, 2017 at 05:13 PM
They are all over our manufacturing plants.
They just don't look like C3PO
JohnH -> RC AKA Darryl, Ron... , March 29, 2017 at 06:21 AM
I fear Summers at least as much as I fear robots...
Peter K. -> JohnH... , March 29, 2017 at 07:04 AM
He's just a big bully, like our PGL.

He has gotten a lot better and was supposedly pretty good when advising Obama, but he's sort of reverted to form with the election of Trump and the prominence of the debate on trade policy.

RC AKA Darryl, Ron -> JohnH... , March 29, 2017 at 07:15 AM
Ditto.

Technology rearranges and changes human roles, but it makes entries on both sides of the ledger. On net, as long as wages grow, so will the economy and jobs. Trade deficits only help financial markets and the capital-owning class.

Paine -> RC AKA Darryl, Ron... , March 29, 2017 at 09:59 AM
There is no limit to jobs
Macro policy and hours regulation
can create

We can both ration job hours and subsidize job wage rates
and at the same time
generate
As many jobs as wanted

All economic rents could be converted into wage subsidies
To boost the per hour income from jobs as well as incentivize diligence skill and creativity

RC AKA Darryl, Ron -> Paine... , March 29, 2017 at 12:27 PM
Works for me.
yuan -> Paine... , March 29, 2017 at 03:50 PM
jobs, jobs, jobs.

Some day we will discard feudal concepts such as working for the "man". A right to liberty and the pursuit of happiness is a right to income.

tax those bots!

yuan -> yuan... , March 29, 2017 at 03:51 PM
or better yet...collectivize the bots.
RGC -> RC AKA Darryl, Ron... , March 29, 2017 at 08:47 AM
Summers is a good example of those economists that never seem to pay a price for their errors.

Imo, he should never be listened to. His economics is faulty. His performance in the Clinton administration and his part in the Russian debacle should be enough to consign him to anonymity. People would do well to ignore him.

Peter K. -> RGC... , March 29, 2017 at 09:36 AM
Yeah he's one of those expert economists and technocrats who never admit fault. You don't become Harvard President or Secretary of the Treasury by doing that.

One time that Krugman admitted error was about productivity gains in the 1990s. He said he didn't see the gains from computers in the numbers, and they weren't there at first, but later productivity numbers increased.

It was sort of like what Summers and Mnuchin are discussing, but there's all sorts of debate about measuring productivity and what it means.

RC AKA Darryl, Ron -> RGC... , March 29, 2017 at 12:29 PM
Yeah. I am not a fan of Summers's, but I do like summers as long as it does not rain too much or too little and I have time to fish.

[Mar 24, 2017] There is no such thing as an automated factory. Manufacturing is done by people, *assisted* by automation. Or only part of the production pipeline is automated, but people are still needed to fill in the not-automated pieces

Notable quotes:
"... And it is not only automation vs. in-house labor. There is environmental/compliance cost (or lack thereof) and the fully loaded business services and administration overhead, taxes, etc. ..."
"... When automation increased productivity in agriculture, the government guaranteed free high school education as a right. ..."
"... Now Democrats like you would say it's too expensive. So what's your solution? You have none. You say "sucks to be them." ..."
"... And then they give you the finger and elect Trump. ..."
"... It wasn't only "low-skilled" workers but "anybody whose job could be offshored" workers. Not quite the same thing. ..."
"... It also happened in "knowledge work" occupations - for those functions that could be separated and outsourced without impacting the workflow at more expense than the "savings". And even if so, if enough of the competition did the same ... ..."
"... And not all outsourcing was offshore - also to "lowest bidders" domestically, or replacing "full time" "permanent" staff with contingent workers or outsourced "consultants" hired on a project basis. ..."
"... "People sure do like to attribute the cause to trade policy." Because it coincided with people watching their well-paying jobs being shipped overseas. The Democrats have denied this ever since Clinton and the Republicans passed NAFTA, but finally with Trump the voters had had enough. ..."
"... Why do you think Clinton lost Wisconsin, Michigan, Pennysylvania and Ohio? ..."
Feb 20, 2017 | economistsview.typepad.com
Sanjait -> Peter K.... February 20, 2017 at 01:55 PM

People sure do like to attribute the cause to trade policy.

Do you honestly believe that fact makes it true? If not, what even is your point? Can you even articulate one?

Tom aka Rusty -> Sanjait... , February 20, 2017 at 01:18 PM

If it was technology, why do US companies buy from low-labor-cost producers at the end of supply chains 2,000-10,000 miles away? Why pay the transportation cost? Automated factories could be built close by.

ken melvin said in reply to Tom aka Rusty... , February 20, 2017 at 02:24 PM
Send for an accountant.
cm -> Tom aka Rusty... , February 20, 2017 at 03:14 PM
There is no such thing as an automated factory. Manufacturing is done by people, *assisted* by automation. Or only part of the production pipeline is automated, but people are still needed to fill in the not-automated pieces.

And it is not only automation vs. in-house labor. There is environmental/compliance cost (or lack thereof) and the fully loaded business services and administration overhead, taxes, etc.

You should know this, and I believe you do.

Peter K. said in reply to Sanjait... , February 20, 2017 at 03:14 PM
Trade policy put "low-skilled" workers in the U.S. in competition with workers in poorer countries. What did you think was going to happen? The Democrat leadership made excuses. David Autor's TED talk stuck with me. When automation increased productivity in agriculture, the government guaranteed free high school education as a right.

Now Democrats like you would say it's too expensive. So what's your solution? You have none. You say "sucks to be them."

And then they give you the finger and elect Trump.

cm -> Peter K.... , February 20, 2017 at 03:19 PM
It wasn't only "low-skilled" workers but "anybody whose job could be offshored" workers. Not quite the same thing.

It also happened in "knowledge work" occupations - for those functions that could be separated and outsourced without impacting the workflow at more expense than the "savings". And even if so, if enough of the competition did the same ...

And not all outsourcing was offshore - also to "lowest bidders" domestically, or replacing "full time" "permanent" staff with contingent workers or outsourced "consultants" hired on a project basis.

Peter K. said in reply to cm... , February 20, 2017 at 03:33 PM
True.
Peter K. said in reply to Sanjait... , February 20, 2017 at 03:35 PM
"People sure do like to attribute the cause to trade policy." Because it coincided with people watching their well-paying jobs being shipped overseas. The Democrats have denied this ever since Clinton and the Republicans passed NAFTA, but finally with Trump the voters had had enough.

Why do you think Clinton lost Wisconsin, Michigan, Pennsylvania and Ohio?

[Mar 24, 2017] We are in a sea of McJobs

Feb 26, 2017 | http://economistsview.typepad.com/economistsview/2017/02/links-for-02-24-17.html
RC AKA Darryl, Ron -> RC AKA Darryl, Ron... February 24, 2017 at 10:05 AM

Instead of looking at this as an excuse for job losses due to trade deficits, we should be seeing it as a reason to gain back manufacturing jobs, in order to retain a few more decent jobs in a sea of garbage jobs. Mmm, that's so wrong: working on garbage trucks is now one of the good jobs in comparison. A sea of garbage jobs would be an improvement. We are in a sea of McJobs.

Paine -> RC AKA Darryl, Ron... February 24, 2017 at 04:25 AM ,
Assembly lines paid well post CIO
They were never intrinsically rewarding

A family farm or work shop of their own
Filled the dreams of the operatives

Recall the brilliantly ironic end of René Clair's À nous la liberté

Fully automated plant with the former operatives enjoying endless picnic frolic

Work as humans' prime want awaits a future social configuration

RC AKA Darryl, Ron -> Paine... , February 24, 2017 at 11:27 AM
Yes sir, often enough but not always. I had a great job as an IT large systems capacity planner and performance analyst, but not as good as the landscaping, pool, and lawn maintenance for myself that I enjoy now as a leisure occupation in retirement. My best friend died a greens keeper, but he preferred landscaping when he was young. Another good friend of mine was a poet, now dying of cancer if depression does not take him first.

But you are correct, no one but the welders, material handlers (paid to lift weights all day), machinists, and then almost everyone else liked their jobs at Virginia Metal Products, a union shop, when I worked there the summer of 1967. That was on the swing shift though, when all of the big bosses were at home and out of our way. On the green chain in the lumber yard of Kentucky flooring everyone but me wanted to leave, but my mom made me go into the VMP factory and work nights at the primer drying kiln stacking finished panel halves because she thought the work on the green chain was too hard. The guys on the green chain said that I was the first high school graduate to make it past lunch time on their first day. I would have been buff and tan by the end of summer heading off to college (where I would drop out in just ten weeks) had my mom not intervened.

As a profession no group that I know is happier than auto mechanics that do the same work as a hobby on their hours off that they do for a living at work, at least the hot rod custom car freaks at Jamie's Exhaust & Auto Repair in Richmond, Virginia are that way. The power tool sales and maintenance crew at Arthur's Electric Service Inc. enjoy their jobs too.

Despite the name, which dates from their incorporation back when they rebuilt auto generators, Arthur's sells and services lawnmowers, weed whackers, chain saws and all, but nothing electric. The guy in the picture at the link is Robert Arthur, the founder's son, who is roughly our age.

http://www.arthurselectric.com/

[Mar 23, 2017] Automation threat is more complex than it looks

Mar 23, 2017 | discussion.theguardian.com
EndaFlannel , 17 Nov 2016 09:12
In theory, in the longer term, as robotics becomes the norm rather than the exception, there will be no advantage in chasing cheap labour around the world. Given ready access to raw materials, the labour costs of manufacturing in Birmingham should be no different to the labour costs in Beijing. This will require the democratisation of the ownership of technology. Unless national governments develop commonly owned technology the 1% will truly become the organ grinders and everyone else the monkeys. One has only to look at companies like Microsoft and Google to see a possible future - bigger than any single country and answerable to no one. Common ownership must be the future. Deregulation and market-driven economics are the road to technological serfdom.
Physiocrat -> EndaFlannel , 17 Nov 2016 09:58
Except that the raw materials for steel production are available in vast quantities in China.

You are also forgetting land. The power remains with those who own it. Most of Central London is still owned by the same half dozen families as in 1600.

Colin Sandford -> EndaFlannel , 17 Nov 2016 10:29
You can only use robotics in countries that have labour with the skills to maintain the robots. Robots do not look after themselves; they need highly skilled technicians to keep them working. I once worked for a Japanese company and they only used robots in the higher-wage, high-skill regions. In low-wage economies they used manual labour and low-tech products.

[Mar 21, 2017] Robots and Inequality: A Skeptic's Take

Notable quotes:
"... And all costs are labor costs. It it isn't labor cost, it's rents and economic profit which mean economic inefficiency. An inefficient economy is unstable. Likely to crash or drive revolution. ..."
"... Free lunch economics seeks to make labor unnecessary or irrelevant. Labor cost is pure liability. ..."
"... Yet all the cash for consumption is labor cost, so if labor cost is a liability, then demand is a liability. ..."
"... Replace workers with robots, then robots must become consumers. ..."
"... "Replace workers with robots, then robots must become consumers." Well no - the OWNERS of robots must become consumers. ..."
"... I am old enough to remember the days of good public libraries, free university education, free bus passes for seniors and low land prices. Is the income side of the equation all that counts? ..."
Mar 21, 2017 | economistsview.typepad.com
Douglas Campbell:
Robots and Inequality: A Skeptic's Take : Paul Krugman presents " Robot Geometry " based on Ryan Avent 's "Productivity Paradox". It's more-or-less the skill-biased technological change hypothesis, repackaged. Technology makes workers more productive, which reduces demand for workers, as their effective supply increases. Workers still need to work, with a bad safety net, so they end up moving to low-productivity sectors with lower wages. Meanwhile, the low wages in these sectors makes it inefficient to invest in new technology.
My question: Are Reagan-Thatcher countries the only ones with robots? My image, perhaps it is wrong, is that plenty of robots operate in Japan and Germany too, and both countries are roughly just as technologically advanced as the US. But Japan and Germany haven't seen the same increase in inequality as the US and other Anglo countries after 1980 (graphs below). What can explain the dramatic differences in inequality across countries? Fairly blunt changes in labor market institutions, that's what. This goes back to Peter Temin's " Treaty of Detroit " paper and the oddly ignored series of papers by Piketty, Saez and coauthors which argues that changes in top marginal tax rates can largely explain the evolution of the Top 1% share of income across countries. (Actually, it goes back further -- people who work in Public Economics had "always" known that pre-tax income is sensitive to tax rates...) They also show that the story of inequality is really a story of incomes at the very top -- changes in other parts of the income distribution are far less dramatic. This evidence also is not suggestive of a story in which inequality is about the returns to skills, or computer usage, or the rise of trade with China. ...

mulp : , March 21, 2017 at 01:54 AM

Yet another economist bamboozled by free lunch economics.

In free lunch economics, you never consider demand being impacted by labor cost changes.

TANSTAAFL; so, cut labor costs and consumption must be cut.

Funny things can be done if money is printed and helicopter dropped unequally.

Printed money can accumulate in the hands of the rentier cutting labor costs and pocketing the savings without cutting prices.

Free lunch economics invented the idea price equals cost, but that is grossly distorting.

And all costs are labor costs. If it isn't labor cost, it's rents and economic profit, which mean economic inefficiency. An inefficient economy is unstable, likely to crash or drive revolution.

Free lunch economics seeks to make labor unnecessary or irrelevant. Labor cost is pure liability.

Yet all the cash for consumption is labor cost, so if labor cost is a liability, then demand is a liability.

Replace workers with robots, then robots must become consumers.

reason -> mulp... , March 21, 2017 at 03:47 AM
"Replace workers with robots, then robots must become consumers." Well no - the OWNERS of robots must become consumers.
reason : , March 21, 2017 at 03:35 AM
I am old enough to remember the days of good public libraries, free university education, free bus passes for seniors and low land prices. Is the income side of the equation all that counts?
anne : , March 21, 2017 at 06:37 AM
https://medium.com/@ryanavent_93844/the-productivity-paradox-aaf05e5e4aad#.brb0426mt

March 16, 2017

The productivity paradox
By Ryan Avent

People are worried about robots taking jobs. Driverless cars are around the corner. Restaurants and shops increasingly carry the option to order by touchscreen. Google's clever algorithms provide instant translations that are remarkably good.

But the economy does not feel like one undergoing a technology-driven productivity boom. In the late 1990s, tech optimism was everywhere. At the same time, wages and productivity were rocketing upward. The situation now is completely different. The most recent jobs reports in America and Britain tell the tale. Employment is growing, month after month after month. But wage growth is abysmal. So is productivity growth: not surprising in economies where there are lots of people on the job working for low pay.

The obvious conclusion, the one lots of people are drawing, is that the robot threat is totally overblown: the fantasy, perhaps, of a bubble-mad Silicon Valley - or an effort to distract from workers' real problems, trade and excessive corporate power. Generally speaking, the problem is not that we've got too much amazing new technology but too little.

This is not a strawman of my own invention. Robert Gordon makes this case. You can see Matt Yglesias make it here. * Duncan Weldon, for his part, writes: **

"We are debating a problem we don't have, rather than facing a real crisis that is the polar opposite. Productivity growth has slowed to a crawl over the last 15 or so years, business investment has fallen and wage growth has been weak. If the robot revolution truly was under way, we would see surging capital expenditure and soaring productivity. Right now, that would be a nice 'problem' to have. Instead we have the reality of weak growth and stagnant pay. The real and pressing concern when it comes to the jobs market and automation is that the robots aren't taking our jobs fast enough."

And in a recent blog post Paul Krugman concluded: ***

"I'd note, however, that it remains peculiar how we're simultaneously worrying that robots will take all our jobs and bemoaning the stalling out of productivity growth. What is the story, really?"

What is the story, indeed. Let me see if I can tell one. Last fall I published a book: "The Wealth of Humans". In it I set out how rapid technological progress can coincide with lousy growth in pay and productivity. Start with this:

"Low labour costs discourage investments in labour-saving technology, potentially reducing productivity growth."

...

* http://www.vox.com/2015/7/27/9038829/automation-myth

** http://www.prospectmagazine.co.uk/magazine/droids-wont-steal-your-job-they-could-make-you-rich

*** https://krugman.blogs.nytimes.com/2017/02/24/maid-in-america/

anne -> anne... , March 21, 2017 at 06:38 AM
https://twitter.com/paulkrugman/status/843167658577182725

Paul Krugman @paulkrugman

But is Ryan Avent saying something different * from the assertion that recent technological progress is capital-biased? **

* https://medium.com/@ryanavent_93844/the-productivity-paradox-aaf05e5e4aad#.kmb49lrgd

** http://krugman.blogs.nytimes.com/2012/12/08/rise-of-the-robots/

If so, what?

https://krugman.blogs.nytimes.com/2012/12/26/capital-biased-technological-progress-an-example-wonkish/

11:30 AM - 18 Mar 2017

anne -> anne... , March 21, 2017 at 07:00 AM
This is an old concern in economics; it's "capital-biased technological change," which tends to shift the distribution of income away from workers to the owners of capital....

-- Paul Krugman

anne -> anne... , March 21, 2017 at 06:40 AM
http://krugman.blogs.nytimes.com/2012/12/08/rise-of-the-robots/

December 8, 2012

Rise of the Robots
By Paul Krugman

Catherine Rampell and Nick Wingfield write about the growing evidence * for "reshoring" of manufacturing to the United States. They cite several reasons: rising wages in Asia; lower energy costs here; higher transportation costs. In a followup piece, ** however, Rampell cites another factor: robots.

"The most valuable part of each computer, a motherboard loaded with microprocessors and memory, is already largely made with robots, according to my colleague Quentin Hardy. People do things like fitting in batteries and snapping on screens.

"As more robots are built, largely by other robots, 'assembly can be done here as well as anywhere else,' said Rob Enderle, an analyst based in San Jose, California, who has been following the computer electronics industry for a quarter-century. 'That will replace most of the workers, though you will need a few people to manage the robots.' "

Robots mean that labor costs don't matter much, so you might as well locate in advanced countries with large markets and good infrastructure (which may soon not include us, but that's another issue). On the other hand, it's not good news for workers!

This is an old concern in economics; it's "capital-biased technological change," which tends to shift the distribution of income away from workers to the owners of capital.

Twenty years ago, when I was writing about globalization and inequality, capital bias didn't look like a big issue; the major changes in income distribution had been among workers (when you include hedge fund managers and CEOs among the workers), rather than between labor and capital. So the academic literature focused almost exclusively on "skill bias", supposedly explaining the rising college premium.

But the college premium hasn't risen for a while. What has happened, on the other hand, is a notable shift in income away from labor:

[Graph]

If this is the wave of the future, it makes nonsense of just about all the conventional wisdom on reducing inequality. Better education won't do much to reduce inequality if the big rewards simply go to those with the most assets. Creating an "opportunity society," or whatever it is the likes of Paul Ryan etc. are selling this week, won't do much if the most important asset you can have in life is, well, lots of assets inherited from your parents. And so on.

I think our eyes have been averted from the capital/labor dimension of inequality, for several reasons. It didn't seem crucial back in the 1990s, and not enough people (me included!) have looked up to notice that things have changed. It has echoes of old-fashioned Marxism - which shouldn't be a reason to ignore facts, but too often is. And it has really uncomfortable implications.

But I think we'd better start paying attention to those implications.

* http://www.nytimes.com/2012/12/07/technology/apple-to-resume-us-manufacturing.html

** http://economix.blogs.nytimes.com/2012/12/07/when-cheap-foreign-labor-gets-less-cheap/

anne -> anne... , March 21, 2017 at 06:43 AM
https://fred.stlouisfed.org/graph/?g=d4ZY

January 30, 2017

Compensation of employees as a share of Gross Domestic Income, 1948-2015


https://fred.stlouisfed.org/graph/?g=d507

January 30, 2017

Compensation of employees as a share of Gross Domestic Income, 1948-2015

(Indexed to 1948)

supersaurus -> anne... , March 21, 2017 at 01:23 PM
"The most valuable part of each computer, a motherboard loaded with microprocessors and memory, is already largely made with robots, according to my colleague Quentin Hardy. People do things like fitting in batteries and snapping on screens.

"...already largely made..."? already? circuit boards were almost entirely populated by machines by 1985, and after the rise of surface mount technology you could drop the "almost". in 1990 a single machine could place 40k+/hour parts small enough they were hard to pick up with fingers.

anne : , March 21, 2017 at 06:37 AM
https://krugman.blogs.nytimes.com/2017/03/20/robot-geometry-very-wonkish/

March 20, 2017

Robot Geometry (Very Wonkish)
By Paul Krugman

And now for something completely different. Ryan Avent has a nice summary * of the argument in his recent book, trying to explain how dramatic technological change can go along with stagnant real wages and slowish productivity growth. As I understand it, he's arguing that the big tech changes are happening in a limited sector of the economy, and are driving workers into lower-wage and lower-productivity occupations.

But I have to admit that I was having a bit of a hard time wrapping my mind around exactly what he's saying, or how to picture this in terms of standard economic frameworks. So I found myself wanting to see how much of his story could be captured in a small general equilibrium model - basically the kind of model I learned many years ago when studying the old trade theory.

Actually, my sense is that this kind of analysis is a bit of a lost art. There was a time when most of trade theory revolved around diagrams illustrating two-country, two-good, two-factor models; these days, not so much. And it's true that little models can be misleading, and geometric reasoning can suck you in way too much. It's also true, however, that this style of modeling can help a lot in thinking through how the pieces of an economy fit together, in ways that algebra or verbal storytelling can't.

So, an exercise in either clarification or nostalgia - not sure which - using a framework that is basically the Lerner diagram, ** adapted to a different issue.

Imagine an economy that produces only one good, but can do so using two techniques, A and B, one capital-intensive, one labor-intensive. I represent these techniques in Figure 1 by showing their unit input coefficients:

[Figure 1]

Here AB is the economy's unit isoquant, the various combinations of K and L it can use to produce one unit of output. E is the economy's factor endowment; as long as the aggregate ratio of K to L is between the factor intensities of the two techniques, both will be used. In that case, the wage-rental ratio will be the slope of the line AB.

Wait, there's more. Since any point on the line passing through A and B has the same value, the place where it hits the horizontal axis is the amount of labor it takes to buy one unit of output, the inverse of the real wage rate. And total output is the ratio of the distance along the ray to E divided by the distance to AB, so that distance is 1/GDP.

You can also derive the allocation of resources between A and B; not to clutter up the diagram even further, I show this in Figure 2, which uses the K/L ratios of the two techniques and the overall endowment E:

[Figure 2]

Now, Avent's story. I think it can be represented as technical progress in A, perhaps also making A even more capital-intensive. So this would amount to a movement southwest to a point like A' in Figure 3:

[Figure 3]

We can see right away that this will lead to a fall in the real wage, because 1/w must rise. GDP and hence productivity does rise, but maybe not by much if the economy was mostly using the labor-intensive technique.

And what about allocation of labor between sectors? We can see this in Figure 4, where capital-using technical progress in A actually leads to a higher share of the work force being employed in labor-intensive B:

[Figure 4]

So yes, it is possible for a simple general equilibrium analysis to capture a lot of what Avent is saying. That does not, of course, mean that he's empirically right. And there are other things in his argument, such as hypothesized effects on the direction of innovation, that aren't in here.

But I, at least, find this way of looking at it somewhat clarifying - which, to be honest, may say more about my weirdness and intellectual age than it does about the subject.

* https://medium.com/@ryanavent_93844/the-productivity-paradox-aaf05e5e4aad#.v9et5b98y

** http://www-personal.umich.edu/~alandear/writings/Lerner.pdf
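
As a numeric companion to the geometry above, here is a minimal sketch of the two-technique model. The input coefficients, the endowment, and the post-progress technique A' are all invented values for illustration, not Krugman's numbers:

    # Two techniques A and B produce one good; arrays hold unit input
    # coefficients (K, L) per unit of output. All numbers are invented.
    import numpy as np

    a = np.array([3.0, 1.0])    # technique A: capital-intensive (aK, aL)
    b = np.array([1.0, 3.0])    # technique B: labor-intensive  (bK, bL)
    E = np.array([40.0, 60.0])  # endowment (K, L), inside the diversification cone

    def equilibrium(a, b, E):
        # Activity levels: xA*a + xB*b = E (both factors fully employed)
        x = np.linalg.solve(np.column_stack([a, b]), E)
        # Zero-profit conditions at output price 1: r*K_coef + w*L_coef = 1
        r, w = np.linalg.solve(np.array([a, b]), np.ones(2))
        return x, r, w

    x, r, w = equilibrium(a, b, E)
    print(f"GDP {x.sum():.2f}, wage {w:.3f}, labor share in B {x[1] * b[1] / E[1]:.2f}")

    # Capital-using technical progress in A (A -> A', as in Figure 3):
    x2, r2, w2 = equilibrium(np.array([2.0, 0.5]), b, E)
    print(f"GDP {x2.sum():.2f}, wage {w2:.3f}, labor share in B {x2[1] * b[1] / E[1]:.2f}")

With these made-up numbers, GDP rises (25.00 to 29.09), the real wage falls (0.250 to 0.182), and the labor-intensive technique B absorbs a larger share of the workforce (0.88 to 0.91) -- Figures 3 and 4 in miniature.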

Shah of Bratpuhr : , March 21, 2017 at 07:27 AM
Median Wealth per adult (table ends at $40k)

1. Switzerland $244,002
2. Iceland $188,088
3. Australia $162,815
4. Belgium $154,815
5. New Zealand $135,755
6. Norway $135,012
7. Luxembourg $125,452
8. Japan $120,493
9. United Kingdom $107,865
10. Italy $104,105
11. Singapore $101,386
12. France $ 99,923
13. Canada $ 96,664
14. Netherlands $ 81,118
15. Ireland $ 80,668
16. Qatar $ 74,820
17. Korea $ 64,686
18. Taiwan $ 63,134
19. United Arab Emirates $ 62,332
20. Spain $ 56,500
21. Malta $ 54,562
22. Israel $ 54,384
23. Greece $ 53,266
24. Austria $ 52,519
25. Finland $ 52,427
26. Denmark $ 52,279
27. United States $ 44,977
28. Germany $ 42,833
29. Kuwait $ 40,803

http://www.middleclasspoliticaleconomist.com/2017/03/us-has-worst-wealth-inequality-of-any.html

reason -> Shah of Bratpuhr... , March 21, 2017 at 08:17 AM
I think this illustrates my point very clearly. If you had charts of wealth by age it would be even clearer. Without a knowledge of the discounted expected value of public pensions it is hard to draw any conclusions from this list.

I know very definitely that in Australia and the UK people are very reliant on superannuation and housing assets. In both Australia and the UK it is common to sell expensive housing in the capital and move to cheaper coastal locations upon retirement, investing the capital to provide retirement income. Hence a larger median wealth is NEEDED.

It is hard otherwise to explain the much higher median wealth in Australia and the UK.

Shah of Bratpuhr : , March 21, 2017 at 07:28 AM
Country: Median Wealth / Average Wealth / Ratio of Average to Median

1. United States $ 44,977 $344,692 7.66
2. Denmark $ 52,279 $259,816 4.97
3. Germany $ 42,833 $185,175 4.32
4. Austria $ 52,519 $206,002 3.92
5. Israel $ 54,384 $176,263 3.24
6. Kuwait $ 40,803 $119,038 2.92
7. Finland $ 52,427 $146,733 2.80
8. Canada $ 96,664 $270,179 2.80
9. Taiwan $ 63,134 $172,847 2.74
10. Singapore $101,386 $276,885 2.73
11. United Kingdom $107,865 $288,808 2.68
12. Ireland $ 80,668 $214,589 2.66
13. Luxembourg $125,452 $316,466 2.52
14. Korea $ 64,686 $159,914 2.47
15. France $ 99,923 $244,365 2.45
16. United Arab Emirates $ 62,332 $151,098 2.42
17. Norway $135,012 $312,339 2.31
18. Australia $162,815 $375,573 2.31
19. Switzerland $244,002 $561,854 2.30
20. Netherlands $ 81,118 $184,378 2.27
21. New Zealand $135,755 $298,930 2.20
22. Iceland $188,088 $408,595 2.17
23. Qatar $ 74,820 $161,666 2.16
24. Malta $ 54,562 $116,185 2.13
25. Spain $ 56,500 $116,320 2.06
26. Greece $ 53,266 $103,569 1.94
27. Italy $104,105 $202,288 1.94
28. Japan $120,493 $230,946 1.92
29. Belgium $154,815 $270,613 1.75

http://www.middleclasspoliticaleconomist.com/2017/03/us-has-worst-wealth-inequality-of-any.html
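
The third numeric column above is average wealth divided by median wealth, a rough indicator of how right-skewed (unequal) each country's wealth distribution is. A quick sketch verifying it from two rows of the table:

    # Third column = average / median wealth; higher means more skew.
    rows = {
        "United States": (44_977, 344_692),
        "Belgium": (154_815, 270_613),
    }
    for country, (median, average) in rows.items():
        print(f"{country}: {average / median:.2f}")   # 7.66 and 1.75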

spencer : , March 21, 2017 at 08:06 AM
Ryan Avent's analysis demonstrates what is wrong with the libertarian, right wing belief that cheap labor is the answer to every problem when in truth cheap labor is the source of many of our problems.
reason -> spencer... , March 21, 2017 at 08:22 AM
Spencer,
as I have said before, I don't really care too much what wages are - I care about income. It is low income that is the problem. I'm a UBI guy; if money is spread around, and workers can say no to exploitation, low wages will not be a problem.
Sanjait : , March 21, 2017 at 09:32 AM
This looks good, but also reductive.

Have we not seen a massive shift in pretax income distribution? Yes ... which tells me that changes in tax rate structures are not the only culprit. Though they are an important culprit.

reason -> Sanjait... , March 21, 2017 at 09:40 AM
Maybe - but
1. changes in taxes can affect incentives (especially think of real investment and corporate taxes and also personal income taxes and executive remuneration);
2. changes in the distribution of purchasing power can affect the way growth in the economy occurs;
3. changes in taxes also affect government spending and government spending tends to be more progressively distributed than private income.

Remember the rule: ceteris is NEVER paribus.

Longtooth : , March 21, 2017 at 12:28 PM
Word to the wise:

Think: Services and Goods

Composite services labor hours increase while productivity growth - output per hour of labor input - stays poor. The composite measure of service-industry output is notoriously problematic (per BLS and BEA).

Goods labor hours decrease with increasing productivity growth. Goods output per hour is easy to measure, and it is where we have the greatest experience and knowledge.

Put this together and the composite national productivity growth rate can't grow fast while services consume an ever larger share of labor hours.

Simple arithmetic.
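A minimal shift-share sketch of that arithmetic, with invented sector numbers (not BLS data): even when every sector's own productivity rises, the composite can fall once hours migrate toward the low-productivity sector.

    def composite_productivity(hours, output_per_hour):
        # composite productivity = total output / total hours across sectors
        total_output = sum(h * p for h, p in zip(hours, output_per_hour))
        return total_output / sum(hours)

    # year 0: 20 goods hours at 100 units/hr, 80 services hours at 50 units/hr
    p0 = composite_productivity([20, 80], [100.0, 50.0])   # 60.0
    # year 1: goods +3%, services +0.5%, but 2 hours migrate into services
    p1 = composite_productivity([18, 82], [103.0, 50.25])  # 59.745

    print(p1 - p0)  # negative: composite fell although both sectors improved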

Elaboration on Services productivity measures:

Now add the composite retail-clerk labor hours to engineering labor hours... which dominates in composite labor hours? Duh! So even in services, composite productivity is weighted heavily toward the lowest-productivity job market.

Substitute Hospitality services for Retail Clerk services. Substitute truck drivers services for Hospitality Services, etc., etc., etc.

I have spent years tracking productivity in goods production of various types ... mining, non-tech hardware production, high-tech hardware production in various sectors of high tech. The present rates of productivity growth continue to climb (never decline) relative to past rates in each goods-production sector measured by itself.

But the proportion of hours in goods production in U.S. is and has been in continual decline even while value of output has increased in each sector of goods production.

Here's an interesting way to start thinking about Services productivity.

There used to be a reasonably large services sector in leisure and business travel agents. Now there is nearly none... it has been replaced by on-line, computer-based booking. So travel-agent or equivalent labor hours are now near zippo. Productivity of travel agents went through the roof in the 1990s and 2000s as the number of people / labor hours dropped like a rock. Where did those labor hours end up? They went to lower-paying services or left the labor market entirely. So lower-paying, lower-productivity services increased as a proportion of all services, which in composite reduced total services productivity.

You can do the same analysis for hundreds of service jobs that no longer exist at all --- switchboard operators, for example, went the way of buggy-whip makers and horse-shoeing services.

Now take a little ride into the future... the not-too-distant future. When autonomous vehicles become the norm, or even a large proportion of vehicles, and commercial drivers (taxis, trucking, delivery services) go the way of horse-shoeing, the labor hours for those services (land transportation of goods and people) will drop precipitously even as unit deliveries increase. Productivity in that service goes through the roof, but since almost no labor hours remain in it, the composite productivity of services will drop, because the displaced labor hours will end up in a lower-productivity services sector or out of the labor market entirely.

Longtooth -> Longtooth... , March 21, 2017 at 12:42 PM
Economists are having problems reconciling composite productivity growth rates with increasing rates of automation. So they end up saying there is "no evidence" of automation taking jobs, or something to the effect of "not to fear, robotics isn't evident as a problem we have to worry about".

But they know by observation all around them that automation is increasing productivity in the goods sector, so they can't really discount automation as an issue without shutting their eyes to everything they see with their "lying eyes". Thus they know deep down that they will have to reconcile this with BLS and BEA measures.

Ten years ago this wasn't even on economists' radars. Today it's at least being looked into with more serious effort.

Ten years ago politicians weren't even aware of the possibility of any issues with increasing rates of automation... they thought automation had always come with increasing labor demand and growth, so why would that ever change? Ten years ago they concluded it couldn't, without even thinking about it for a moment. Today it's on their radar, at least as something that bears perhaps a little more thought.

Not to worry though... in ten more years they'll either have real reason to worry staring them in the face, or they'll have figured out why they were so blind before.

Reminds me of the "shadow banking" enterprises that they didn't recognize either until after the fact.

Longtooth -> Longtooth... , March 21, 2017 at 12:48 PM
Or that they thought the risk-rating agencies were providing independent and valid risk analysis, so economists couldn't reconcile the "low level" of market risk with everything else, and just assumed everything else was really OK too... it must be "irrational exuberance" that's to blame.
Longtooth : , March 21, 2017 at 01:04 PM
Let me add that the term "robotics" denotes a subset of automation. The major distinction is only that a form of automation that includes some type of 'articulation' and/or some type of dynamic decision making on the fly (computational branching decisions at nanosecond speeds) is termed 'robotics', because articulation and dynamic decision making are associated with human capabilities rather than automatic machines.

It makes no difference whether productivity gains occur by an articulated machine or one that isn't... automation just means replacing people's labor with something that improves humans' capacity to produce an output.

When mechanical leverage was invented 3000 or more years ago, it was a form of automation, enabling humans to lift and move heavier objects with less human effort (less human energy).

Longtooth -> Longtooth... , March 21, 2017 at 01:18 PM
I meant 3000 years BC.... 5000 years ago or more.

[Mar 20, 2017] https://medium.com/@ryanavent_93844/the-productivity-paradox-aaf05e5e4aad#.d8jfva10j

Mar 20, 2017 | medium.com

The productivity paradox
by Ryan Avent

People are worried about robots taking jobs. Driverless cars are around the corner. Restaurants and shops increasingly carry the option to order by touchscreen. Google's clever algorithms provide instant translations that are remarkably good.

But the economy does not feel like one undergoing a technology-driven productivity boom. In the late 1990s, tech optimism was everywhere. At the same time, wages and productivity were rocketing upward. The situation now is completely different. The most recent jobs reports in America and Britain tell the tale. Employment is growing, month after month after month. But wage growth is abysmal. So is productivity growth: not surprising in economies where there are lots of people on the job working for low pay.

The obvious conclusion, the one lots of people are drawing, is that the robot threat is totally overblown: the fantasy, perhaps, of a bubble-mad Silicon Valley - or an effort to distract from workers' real problems, trade and excessive corporate power. Generally speaking, the problem is not that we've got too much amazing new technology but too little.

This is not a strawman of my own invention. Robert Gordon makes this case. You can see Matt Yglesias make it here. Duncan Weldon, for his part, writes:

We are debating a problem we don't have, rather than facing a real crisis that is the polar opposite. Productivity growth has slowed to a crawl over the last 15 or so years, business investment has fallen and wage growth has been weak. If the robot revolution truly was under way, we would see surging capital expenditure and soaring productivity. Right now, that would be a nice "problem" to have. Instead we have the reality of weak growth and stagnant pay. The real and pressing concern when it comes to the jobs market and automation is that the robots aren't taking our jobs fast enough.

And in a recent blog post Paul Krugman concluded:

I'd note, however, that it remains peculiar how we're simultaneously worrying that robots will take all our jobs and bemoaning the stalling out of productivity growth. What is the story, really?

What is the story, indeed. Let me see if I can tell one. Last fall I published a book: "The Wealth of Humans". In it I set out how rapid technological progress can coincide with lousy growth in pay and productivity. Start with this:

Low labour costs discourage investments in labour-saving technology, potentially reducing productivity growth.

...
Peter K. -> Peter K.... , March 20, 2017 at 09:26 AM

Increasing labour costs by making the minimum wage a living wage would increase the incentives to boost productivity growth?

No, the neoliberals and corporate Democrats would never go for it. They're trying to appeal to the business community and their campaign contributors wouldn't like it.

anne -> Peter K.... , March 20, 2017 at 10:32 AM

https://twitter.com/paulkrugman/status/843167658577182725

Paul Krugman @paulkrugman

But is [Ryan Avent] saying something different from the assertion that recent tech progress is capital-biased?

https://krugman.blogs.nytimes.com/2012/12/26/capital-biased-technological-progress-an-example-wonkish/

If so, what?

11:30 AM - 18 Mar 2017

anne -> Peter K.... , March 20, 2017 at 10:33 AM

http://krugman.blogs.nytimes.com/2012/12/26/capital-biased-technological-progress-an-example-wonkish/

December 26, 2012

Capital-biased Technological Progress: An Example (Wonkish)
By Paul Krugman

Ever since I posted about robots and the distribution of income, * I've had queries from readers about what capital-biased technological change – the kind of change that could make society richer but workers poorer – really means. And it occurred to me that it might be useful to offer a simple conceptual example – the kind of thing easily turned into a numerical example as well – to clarify the possibility. So here goes.

Imagine that there are only two ways to produce output. One is a labor-intensive method – say, armies of scribes equipped only with quill pens. The other is a capital-intensive method – say, a handful of technicians maintaining vast server farms. (I'm thinking in terms of office work, which is the dominant occupation in the modern economy).

We can represent these two techniques in terms of unit inputs – the amount of each factor of production required to produce one unit of output. In the figure below I've assumed that initially the capital-intensive technique requires 0.2 units of labor and 0.8 units of capital per unit of output, while the labor-intensive technique requires 0.8 units of labor and 0.2 units of capital.

[Diagram]

The economy as a whole can make use of both techniques – in fact, it will have to unless it has either a very large amount of capital per worker or a very small amount. No problem: we can just use a mix of the two techniques to achieve any input combination along the blue line in the figure. For economists reading this, yes, that's the unit isoquant in this example; obviously if we had a bunch more techniques it would start to look like the convex curve of textbooks, but I want to stay simple here.

What will the distribution of income be in this case? Assuming perfect competition (yes, I know, but let's deal with that case for now), the real wage rate w and the cost of capital r – both measured in terms of output – have to be such that the cost of producing one unit is 1 whichever technique you use. In this example, that means w=r=1. Graphically, by the way, w/r is equal to minus the slope of the blue line.

Oh, and if you're worried, yes, workers and machines are both paid their marginal product.

But now suppose that technology improves – specifically, that production using the capital-intensive technique gets more efficient, although the labor-intensive technique doesn't. Scribes with quill pens are the same as they ever were; server farms can do more than ever before. In the figure, I've assumed that the unit inputs for the capital-intensive technique are cut in half. The red line shows the economy's new choices.

So what happens? It's obvious from the figure that wages fall relative to the cost of capital; it's less obvious, maybe, but nonetheless true that real wages must fall in absolute terms as well. In this specific example, technological progress reduces the real wage by a third, to 0.667, while the cost of capital rises to 2.33.

OK, it's obvious how stylized and oversimplified all this is. But it does, I think, give you some sense of what it would mean to have capital-biased technological progress, and how this could actually hurt workers.
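Krugman's numbers drop out of a pair of zero-profit conditions -- for each technique in use, labor_input * w + capital_input * r = 1, with the output price normalized to 1. A minimal sketch that reproduces his figures from the unit inputs given in the post:

    import numpy as np

    def factor_prices(labor_inputs, capital_inputs):
        # solve the two zero-profit conditions for (wage w, cost of capital r)
        A = np.array([[labor_inputs[0], capital_inputs[0]],
                      [labor_inputs[1], capital_inputs[1]]])
        return np.linalg.solve(A, np.ones(2))

    # before: capital-intensive (0.2 labor, 0.8 capital) vs labor-intensive (0.8, 0.2)
    print(factor_prices([0.2, 0.8], [0.8, 0.2]))  # w = 1, r = 1

    # after: the capital-intensive technique's unit inputs are halved to (0.1, 0.4)
    print(factor_prices([0.1, 0.8], [0.4, 0.2]))  # w = 0.667, r = 2.333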

* http://krugman.blogs.nytimes.com/2012/12/08/rise-of-the-robots/

anne -> Peter K.... , March 20, 2017 at 10:34 AM

http://krugman.blogs.nytimes.com/2012/12/08/rise-of-the-robots/

December 8, 2012

Rise of the Robots
By Paul Krugman

Catherine Rampell and Nick Wingfield write about the growing evidence * for "reshoring" of manufacturing to the United States. They cite several reasons: rising wages in Asia; lower energy costs here; higher transportation costs. In a followup piece, ** however, Rampell cites another factor: robots.

"The most valuable part of each computer, a motherboard loaded with microprocessors and memory, is already largely made with robots, according to my colleague Quentin Hardy. People do things like fitting in batteries and snapping on screens.

"As more robots are built, largely by other robots, 'assembly can be done here as well as anywhere else,' said Rob Enderle, an analyst based in San Jose, California, who has been following the computer electronics industry for a quarter-century. 'That will replace most of the workers, though you will need a few people to manage the robots.' "

Robots mean that labor costs don't matter much, so you might as well locate in advanced countries with large markets and good infrastructure (which may soon not include us, but that's another issue). On the other hand, it's not good news for workers!

This is an old concern in economics; it's "capital-biased technological change," which tends to shift the distribution of income away from workers to the owners of capital.

Twenty years ago, when I was writing about globalization and inequality, capital bias didn't look like a big issue; the major changes in income distribution had been among workers (when you include hedge fund managers and CEOs among the workers), rather than between labor and capital. So the academic literature focused almost exclusively on "skill bias", supposedly explaining the rising college premium.

But the college premium hasn't risen for a while. What has happened, on the other hand, is a notable shift in income away from labor:

[Graph]

If this is the wave of the future, it makes nonsense of just about all the conventional wisdom on reducing inequality. Better education won't do much to reduce inequality if the big rewards simply go to those with the most assets. Creating an "opportunity society," or whatever it is the likes of Paul Ryan etc. are selling this week, won't do much if the most important asset you can have in life is, well, lots of assets inherited from your parents. And so on.

I think our eyes have been averted from the capital/labor dimension of inequality, for several reasons. It didn't seem crucial back in the 1990s, and not enough people (me included!) have looked up to notice that things have changed. It has echoes of old-fashioned Marxism - which shouldn't be a reason to ignore facts, but too often is. And it has really uncomfortable implications.

But I think we'd better start paying attention to those implications.

* http://www.nytimes.com/2012/12/07/technology/apple-to-resume-us-manufacturing.html

** http://economix.blogs.nytimes.com/2012/12/07/when-cheap-foreign-labor-gets-less-cheap/

anne -> anne... , March 20, 2017 at 10:41 AM

https://fred.stlouisfed.org/graph/?g=d4ZY

January 30, 2017

Compensation of Employees as a share of Gross Domestic Income, 1948-2015


https://fred.stlouisfed.org/graph/?g=d507

January 30, 2017

Compensation of Employees as a share of Gross Domestic Income, 1948-2015

(Indexed to 1948)

[Mar 17, 2017] Maybe the machines are not actually eating our jobs, since productivity has stalled in the US for more than a decade.

Notable quotes:
"... Motivated empiricism, which is what he is describing, is just as misleading as ungrounded theorizing unsupported by empirical data. Indeed, even in the sciences with well established, strong testing protocols are suffering from a replication crisis. ..."
"... I liked the Dorman piece at Econospeak as well. He writes well and explains things well in a manner that makes it easy for non-experts to understand. ..."
Mar 17, 2017 | economistsview.typepad.com
DrDick : , March 16, 2017 at 07:19 AM
The Brookings piece (Understanding US productivity trends from the bottom-up - Brookings Institution) would suggest that maybe the machines are not actually eating our jobs, since productivity has stalled in the US for more than a decade. The Dorman piece at Econospeak (Economic Empiricism on the Hubris-Humility Spectrum? - EconoSpeak) is also interesting, and I think I agree with him.

Motivated empiricism, which is what he is describing, is just as misleading as ungrounded theorizing unsupported by empirical data. Indeed, even the sciences with well-established, strong testing protocols are suffering from a replication crisis.

Peter K. -> DrDick ... , March 16, 2017 at 09:18 AM
Of course Sanjait will ignore the Brookings piece.

I liked the Dorman piece at Econospeak as well. He writes well and explains things well in a manner that makes it easy for non-experts to understand.

Unlike other writers we know.

[Mar 11, 2017] While recreating the level of manufacturing employment of the 1960s is impossible, there are other reasons to want to bring supply chains back to the U.S.: high-value-added manufacturing -- robot factories pumping out goods -- creates jobs for Americans in other ways

Notable quotes:
"... But there are plenty of other reasons to want to bring supply chains back to the U.S. High-value-added manufacturing -- robot factories pumping out goods -- creates jobs for Americans in other ways. As economist Enrico Moretti explains in his book "The New Geography of Jobs," high-tech manufacturing creates higher-paying service-sector jobs in a local area. The dollars that come into a town with a robot factory get spent on doctors and waiters and personal trainers, and the money circulates throughout the community, leaving everyone better off. ..."
"... Manufacturing might also have some special properties. Productivity growth is usually higher in manufacturing than in other industries. Part of that is because it's easier to automate the production of goods than the provision of services. And as engineers like Intel Corp. co-founder Andrew Grove were fond of reminding us, manufacturing also creates knowledge spillovers via the supply chain -- the place where electronics are made is also probably going to have an edge in advanced battery technology. ..."
"... Navarro's idea of reducing the trade deficit is also a good one. Trade deficits leave a burden for future generations, since they have to be paid back by future trade surpluses. Closing that deficit would let Americans of the future -- who are already going to be burdened with an aging population and crumbling infrastructure -- breathe just a little easier. And as economist Dani Rodrik explains, exporting can help companies to figure out what they're good at. ..."
"... Tariffs on foreign goods are probably a bad way. They carry the danger of retaliation, and trade wars are painful for everyone involved. Also, tariffs make it harder to import the materials that are needed in the manufacturing process. ..."
"... A better idea would be to help companies with reshoring. Lots of companies shipped production to China and other countries, lured by the promise of low wages, cheap energy and government subsidies. But wages have been rising steadily in China: ..."
Mar 11, 2017 | economistsview.typepad.com
Peter K. : , March 11, 2017 at 10:20 AM

... ... ...

https://www.bloomberg.com/view/articles/2017-03-09/trump-s-plan-to-bring-back-manufacturing-isn-t-crazy

Trump's Plan to Bring Back Manufacturing Isn't Crazy

by Noah Smith

MARCH 9, 2017 11:57 AM EST

The head of President Donald Trump's National Trade Council, Peter Navarro, has been making waves recently, with an op-ed in the Wall Street Journal and a speech to the National Association of Business Economists.

The bad news is that Navarro still uses some dodgy economics when arguing for lower trade deficits. As I explained last December, lowering trade deficits doesn't necessarily give gross domestic product a boost. Navarro should stop using this talking point.

That said, Navarro's vow to "reclaim all of the supply chain and manufacturing capability" that the U.S. has lost in recent decades isn't necessarily a bad thing. There are good reasons to want to revitalize U.S. manufacturing and lower the trade deficit -- as long as it's done in the right way, and as long as expectations are appropriately modest.

What a manufacturing revival definitely wouldn't do is bring back good old-line manufacturing jobs. The U.S. is a rich country, meaning that its comparative advantage in manufacturing lies in capital-intensive, high-value-added goods -- semiconductors, industrial machinery, aircraft and pharmaceuticals. Those are the kinds of things that are mostly made by machine tools and robots, not by human beings working on an assembly line.

To see what a U.S. manufacturing export boom would look like, we need only consider Germany. Germany is a rich, productive country with a very large trade surplus. It's succeeding at doing exactly the kind of thing Navarro wants. But the percentage of German workers employed in the manufacturing sector has gone down and down, just as it has in the U.S.:

[chart]

So even if the U.S. manages to bring manufacturing back, it wouldn't recreate the widespread industrial employment of the 1950s and 1960s.

But there are plenty of other reasons to want to bring supply chains back to the U.S. High-value-added manufacturing -- robot factories pumping out goods -- creates jobs for Americans in other ways. As economist Enrico Moretti explains in his book "The New Geography of Jobs," high-tech manufacturing creates higher-paying service-sector jobs in a local area. The dollars that come into a town with a robot factory get spent on doctors and waiters and personal trainers, and the money circulates throughout the community, leaving everyone better off.

Manufacturing might also have some special properties. Productivity growth is usually higher in manufacturing than in other industries. Part of that is because it's easier to automate the production of goods than the provision of services. And as engineers like Intel Corp. co-founder Andrew Grove were fond of reminding us, manufacturing also creates knowledge spillovers via the supply chain -- the place where electronics are made is also probably going to have an edge in advanced battery technology.

Navarro's idea of reducing the trade deficit is also a good one. Trade deficits leave a burden for future generations, since they have to be paid back by future trade surpluses. Closing that deficit would let Americans of the future -- who are already going to be burdened with an aging population and crumbling infrastructure -- breathe just a little easier. And as economist Dani Rodrik explains, exporting can help companies to figure out what they're good at.

So there are many reasons for the U.S. to do what Navarro wants -- to bring back the supply chain, to revitalize the manufacturing sector and to lower the trade deficit. The real question is how to do this.

Tariffs on foreign goods are probably a bad way. They carry the danger of retaliation, and trade wars are painful for everyone involved. Also, tariffs make it harder to import the materials that are needed in the manufacturing process.

A better idea would be to help companies with reshoring. Lots of companies shipped production to China and other countries, lured by the promise of low wages, cheap energy and government subsidies. But wages have been rising steadily in China:

[chart]

Other costs have risen there as well, so that it's now about as cheap to manufacture things in the U.S. as in China.

But many of the companies that outsourced manufacturing years ago are now stuck in China, since moving back to the U.S. entails large costs. The U.S. government could provide financial and logistical assistance to companies that want to move production back to the U.S., thus freeing these companies from the overseas trap. This would represent a bailout of sorts, since it would be using government money to help companies out of the predicament that their own short-sightedness landed them in. But the benefits, in terms of a U.S. manufacturing renaissance, might be worth both the unfairness and the financial costs.

So don't discount Navarro's dream of bringing back the supply chain. If done right, it could be good for the U.S. economy's long-term health.

[Mar 06, 2017] Robots are Wealth Creators and Taxing Them is Illogical

Notable quotes:
"... His prescription in the end is the old and tired "invest in education and retraining", i.e. "symbolic analyst jobs will replace the lost jobs" like they have for decades (not). ..."
"... "Governments will, however, have to concern themselves with problems of structural joblessness. They likely will need to take a more explicit role in ensuring full employment than has been the practice in the US." ..."
"... Instead, we have been shredding the safety net and job training / creation programs. There is plenty of work that needs to be done. People who have demand for goods and services find them unaffordable because the wealthy are capturing all the profits and use their wealth to capture even more. Trade is not the problem for US workers. Lack of investment in the US workforce is the problem. We don't invest because the dominant white working class will not support anything that might benefit blacks and minorities, even if the major benefits go to the white working class ..."
"... Really nice if your sitting in the lunch room of the University. Especially if you are a member of the class that has been so richly awarded, rather than the class who paid for it. Humph. The discussion is garbage, Political opinion by a group that sat by ... The hypothetical nuance of impossible tax policy. ..."
"... The concept of Robots leaving us destitute, is interesting. A diversion. It ain't robots who are harvesting the middle class. It is an entitled class of those who gave so little. ..."
"... Summers: "Let them eat training." ..."
"... Suddenly then, Bill Gates has become an accomplished student of public policy who can command an audience from Lawrence Summers who was unable to abide by the likes of the prophetic Brooksley Born who was chair of the Commodity Futures Trading Commission or the prophetic professor Raghuram Rajan who would become Governor of the Reserve Bank of India. Agreeing with Bill Gates however is a "usual" for Summers. ..."
"... Until about a decade or so ago many states I worked in had a "tangible property" or "personal property" tax on business equipment, and sometimes on equipment + average inventory. Someday I will do some research and see how many states still do this. Anyway a tax on manufacturing equipment, retail fixtures and computers and etc. is hardly novel or unusual. So why would robots be any different? ..."
"... Thank you O glorious technocrats for shining the light of truth on humanity's path into the future! Where, oh where, would we be without our looting Benevolent Overlords and their pompous lapdogs (aka Liars in Public Places)? ..."
"... While he is overrated, he is not completely clueless. He might well be mediocre (or slightly above this level) but extremely arrogant defender of the interests of neoliberal elite. Rubin's boy Larry as he was called in the old days. ..."
"... BTW he was Rubin's hatchet man for eliminating Brooksley Born attempt to regulate the derivatives and forcing her to resign: ..."
Mar 05, 2017 | economistsview.typepad.com
Larry Summers: Robots are wealth creators and taxing them is illogical : I usually agree with Bill Gates on matters of public policy and admire his emphasis on the combined power of markets and technology. But I think he went seriously astray in a recent interview when he proposed, without apparent irony, a tax on robots to cushion worker dislocation and limit inequality. ....

pgl : , March 05, 2017 at 02:16 PM

Has Summers gone all supply-side on us? Start with his title:

"Robots are wealth creators and taxing them is illogical"

I bet Bill Gates might reply – "my company is a wealth creator so it should not be taxed". Oh wait – Microsoft is already shifting profits to tax havens. Summers states:

"Third, and perhaps most fundamentally, why tax in ways that reduce the size of the pie rather than ways that assure that the larger pie is well distributed? Imagine that 50 people can produce robots who will do the work of 100. A sufficiently high tax on robots would prevent them from being produced."

Yep – he has gone all supply-side on us.

cm -> pgl... , March 05, 2017 at 02:46 PM
Summers makes one, and only one, good and relevant point - that in many cases, robots/automation will not produce more product from the same inputs but better products. That's in his words; I would replace "better" with "more predictable quality/less variability" - in both directions. And that the more predictable quality aspect is hard or impossible to distinguish from higher productivity (in some cases they may be exactly the same, e.g. by streamlining QA and reducing rework/pre-sale repairs).

His prescription in the end is the old and tired "invest in education and retraining", i.e. "symbolic analyst jobs will replace the lost jobs" like they have for decades (not).

anne -> cm... , March 05, 2017 at 04:36 PM
Incisive all the way through.
jonny bakho -> pgl... , March 05, 2017 at 02:52 PM
Pundits do not write titles, editors do. Tax the profits, not the robots.

The crux of the argument is this:

"Governments will, however, have to concern themselves with problems of structural joblessness. They likely will need to take a more explicit role in ensuring full employment than has been the practice in the US."

Instead, we have been shredding the safety net and job training / creation programs. There is plenty of work that needs to be done. People who have demand for goods and services find them unaffordable because the wealthy are capturing all the profits and use their wealth to capture even more. Trade is not the problem for US workers. Lack of investment in the US workforce is the problem. We don't invest because the dominant white working class will not support anything that might benefit blacks and minorities, even if the major benefits go to the white working class.

pgl -> jonny bakho... , March 05, 2017 at 03:35 PM
"Tax the profits, not the robots." Exactly. I suspect this is how it would have to work since the company owns the robots.
cm -> pgl... , March 05, 2017 at 03:53 PM
In principle taxing profits is preferable, but has a few downsides/differences:

Not very strong points, and I didn't read the Gates interview so I don't know his detailed motivation to propose specifically a robot tax.

cm -> pgl... , March 05, 2017 at 03:58 PM
When I was in Amsterdam a few years ago, they had come up with another perfidious scheme to cut people out of the loop or "incentivize" people to use the machines - in a large transit center, you could buy tickets at a vending machine or a counter with a person - and for the latter you would have to pay a not-so-modest "personal service" surcharge (50c for a EUR 2-3 or so ticket - I think it was a flat fee, but may have been staggered by type of service).

Maybe I misunderstood it and it was a "congestion charge" to prevent lines so people who have to use counter service e.g. with questions don't have to wait.

cm -> cm... , March 05, 2017 at 04:03 PM
And then you may have heard (in the US) the term "convenience fee" which I found rather insulting when I encountered it. It suggests you are charged for your convenience, but it is to cover payment processor costs (productivity enhancing automation!).
anne -> cm... , March 05, 2017 at 04:59 PM
And then you may have heard (in the US) the term "convenience fee" which I found rather insulting when I encountered it. It suggests you are charged for your convenience, but it is to cover payment processor costs (productivity enhancing automation!)

[ Wonderful. ]

JohnH -> pgl... , March 05, 2017 at 06:43 PM
Why not simplify things and just tax capital? We already tax property. Why not extend it to all capital?
Paine -> jonny bakho... , March 05, 2017 at 05:10 PM
Lack of adequate compensation to the lower half of the job force is the problem. Lack of persistent big macro demand is the problem. A global trading system that doesn't automatically move forex rates toward universal trading-zone balance, and away from persistent surplus and deficit traders, is the problem.

Technology is never the root problem. Population dynamics is never the root problem.

anne -> Paine... , March 05, 2017 at 05:31 PM
https://fred.stlouisfed.org/graph/?g=cVq0

January 15, 2017

Nonfarm Business Productivity and Real Median Household Income, 1953-2015

(Indexed to 1953)

anne -> Paine... , March 05, 2017 at 05:35 PM
https://fred.stlouisfed.org/graph/?g=cOU6

January 15, 2017

Gross Domestic Product and Net Worth for Households & Nonprofit Organizations, 1952-2016

(Indexed to 1952)

Mr. Bill -> anne... , March 05, 2017 at 06:30 PM
Really nice if you're sitting in the lunch room of the University. Especially if you are a member of the class that has been so richly rewarded, rather than the class who paid for it. Humph. The discussion is garbage: political opinion by a group that sat by ... the hypothetical nuance of impossible tax policy.
Mr. Bill -> pgl... , March 05, 2017 at 06:04 PM
The concept of Robots leaving us destitute is interesting. A diversion. It ain't robots who are harvesting the middle class. It is an entitled class of those who gave so little.
run75441 -> Mr. Bill... , March 05, 2017 at 06:45 PM
Sigh.

After one five axis CNC cell replaces 5 other machines and 4 of the workers, what happens to the four workers?

The issue is the efficiency achieved through better throughput forcing the loss of wages. If you use the 5-axis CNC cell, tax its output by no more than what would have been paid to the 4 workers plus the overhead for them. The labor cost plus the overhead cost is what is eliminated by the 5-axis CNC.

It is not a diversion. It is a reality.
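A back-of-envelope sketch of that cap -- the wage and overhead figures below are invented for illustration; only the four displaced workers come from the comment above:

    # upper bound on the robot tax = eliminated labor cost plus overhead
    displaced_workers = 4
    annual_wage = 45_000       # hypothetical fully loaded wage per worker
    overhead_rate = 0.35       # hypothetical overhead as a fraction of wages

    cap = displaced_workers * annual_wage * (1 + overhead_rate)
    print(cap)  # 243000.0 -> tax the cell's output no more than this per year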

anne -> anne... , March 05, 2017 at 02:20 PM
http://krugman.blogs.nytimes.com/2009/01/03/economists-behaving-badly/

January 3, 2009

Economists Behaving Badly
By Paul Krugman

Ouch. The Wall Street Journal's Real Time Economics blog has a post * linking to Raghuram Rajan's prophetic 2005 paper ** on the risks posed by securitization - basically, Rajan said that what did happen, could happen - and to the discussion at the Jackson Hole conference by Federal Reserve vice-chairman Don Kohn *** and others. **** The economics profession does not come off very well.

Two things are really striking here. First is the obsequiousness toward Alan Greenspan. To be fair, the 2005 Jackson Hole event was a sort of Greenspan celebration; still, it does come across as excessive - dangerously close to saying that if the Great Greenspan says something, it must be so. Second is the extreme condescension toward Rajan - a pretty serious guy - for having the temerity to suggest that maybe markets don't always work to our advantage. Larry Summers, I'm sorry to say, comes off particularly badly. Only my colleague Alan Blinder, defending Rajan "against the unremitting attack he is getting here for not being a sufficiently good Chicago economist," emerges with honor.

* http://blogs.wsj.com/economics/2009/01/01/ignoring-the-oracles/

** http://www.kc.frb.org/publicat/sympos/2005/PDF/Rajan2005.pdf

*** http://www.kc.frb.org/publicat/sympos/2005/PDF/Kohn2005.pdf

**** https://www.kansascityfed.org/publicat/sympos/2005/PDF/GD5_2005.pdf

cm -> pgl... , March 05, 2017 at 03:07 PM
No, his argument is much broader. Summers stops at "no new taxes and education/retraining". And I find it highly dubious that compensation/accommodation for workers can be adequately funded out of robot taxes.

Baker goes far beyond that.

cm -> cm... , March 05, 2017 at 03:09 PM
What Baker mentioned: mandatory severance, shorter work hours or more vacations due to productivity, funding infrastructure.

Summers: "Let them eat training."

Paine -> anne... , March 05, 2017 at 05:19 PM
We should never assign a social task to the wrong institution. Firms should be unencumbered by draconian hire-and-fire constraints. The state should provide the compensation for layoffs and firings. The state should maintain an adequate local Beveridge ratio of job openings to job applicants.

Firms' task is productivity maximization subject to externality offsets, including output price changes and various other third-party impacts.

anne -> anne... , March 05, 2017 at 02:33 PM
Correcting:

Suddenly then, Bill Gates has become an accomplished student of public policy who can command an audience from Lawrence Summers who was unable to abide by the likes of the prophetic Brooksley Born who was chair of the Commodity Futures Trading Commission or the prophetic professor Raghuram Rajan who would become Governor of the Reserve Bank of India. Agreeing with Bill Gates however is a "usual" for Summers.

Tom aka Rusty : , March 05, 2017 at 02:19 PM
Until about a decade or so ago many states I worked in had a "tangible property" or "personal property" tax on business equipment, and sometimes on equipment + average inventory. Someday I will do some research and see how many states still do this. Anyway a tax on manufacturing equipment, retail fixtures, computers, etc. is hardly novel or unusual. So why would robots be any different?
pgl -> Tom aka Rusty... , March 05, 2017 at 02:38 PM
I suspect it is the motivation of Gates as in what he would do with the tax revenue. And Gates might be thinking of a higher tax rate for robots than for your garden variety equipment.
Paine -> Tom aka Rusty... , March 05, 2017 at 05:22 PM
There is no difference beyond spin.
Paine -> Paine... , March 05, 2017 at 05:28 PM
Yes, some equipment inside any one firm complements existing labor inside that firm, including already installed robots. New robots are rivals.

Rivals that, if subject to a special "introduction tax", could deter installation. As in the 50-for-100 swap: the 50 hours embodied in the robot replace 100 similarly paid production-line hours. But if there's a 100% purchase tax on the robots, why bother to invest in the productivity increase if there are no other savings?
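A minimal sketch of that break-even logic, using the stylized 50-for-100 numbers from the comment (hours stand in for equal pay rates):

    # a robot embodying 50 hours of labor replaces 100 similarly paid hours
    hours_embodied = 50
    hours_replaced = 100

    savings = hours_replaced - hours_embodied  # 50 hours saved
    tax = 1.00 * hours_embodied                # a 100% purchase tax on the robot
    print(savings - tax)  # 0 -> the tax exactly wipes out the gain from automating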

anne : , March 05, 2017 at 02:28 PM
http://cepr.net/blogs/beat-the-press/bill-gates-wants-to-undermine-donald-trump-s-plans-for-growing-the-economy

February 20, 2017

Bill Gates Wants to Undermine Donald Trump's Plans for Growing the Economy

Yes, as Un-American as that may sound, Bill Gates is proposing * a tax that would undermine Donald Trump's efforts to speed the rate of economic growth. Gates wants to tax productivity growth (also known as "automation") slowing down the rate at which the economy becomes more efficient.

This might seem a bizarre policy proposal at a time when productivity growth has been at record lows, ** *** averaging less than 1.0 percent annually for the last decade. This compares to rates of close to 3.0 percent annually from 1947 to 1973 and again from 1995 to 2005.

It is not clear if Gates has any understanding of economic data, but since the election of Donald Trump there has been a major effort to deny the fact that the trade deficit has been responsible for the loss of manufacturing jobs and to instead blame productivity growth. This is in spite of the fact that productivity growth has slowed sharply in recent years and that the plunge in manufacturing jobs followed closely on the explosion of the trade deficit, beginning in 1997.

[Manufacturing Employment, 1970-2017]

Anyhow, as Paul Krugman pointed out in his column **** today, if Trump is to have any hope of achieving his growth target, he will need a sharp uptick in the rate of productivity growth from what we have been seeing. Bill Gates is apparently pushing in the opposite direction.

* https://qz.com/911968/bill-gates-the-robot-that-takes-your-job-should-pay-taxes/

** https://fred.stlouisfed.org/graph/?g=cABu

*** https://fred.stlouisfed.org/graph/?g=cABr

**** https://www.nytimes.com/2017/02/20/opinion/on-economic-arrogance.html

-- Dean Baker

anne -> anne... , March 05, 2017 at 02:30 PM
https://fred.stlouisfed.org/graph/?g=cABu

January 4, 2017

Nonfarm Business Labor Productivity, * 1948-2016

* Output per hour of all persons

(Percent change)


https://fred.stlouisfed.org/graph/?g=cABr

January 4, 2017

Nonfarm Business Labor Productivity, * 1948-2016

* Output per hour of all persons

(Indexed to 1948)

anne -> anne... , March 05, 2017 at 02:32 PM
https://fred.stlouisfed.org/graph/?g=cN2z

January 15, 2017

Manufacturing employment, 1970-2017


https://fred.stlouisfed.org/graph/?g=cN2H

January 15, 2017

Manufacturing employment, 1970-2017

(Indexed to 1970)

Ron Waller : , March 05, 2017 at 02:43 PM
Yes, it's far better that our betters in the upper class get all the benefits from productivity growth. Without their genetic entitlement to wealth others created, we would just be savages murdering one another in the streets.

These Masters of the Universe of ours put the 'civil' in our illustrious civilization. (Sure it's a racist barbarian concentration camp on the verge of collapse into fascist revolutions and world war. But, again, far better than people murdering one another in the streets!)

People who are displaced from automation are simply moochers and it's only right that they are cut out of the economy and left to die on the streets. This is the law of Nature: survival of the fittest. Social Darwinism is inescapable. It's what makes us human!

Instead of just waiting for people displaced from automation to die on the streets, we should do the humane thing and establish concentration camps so they are quickly dispatched to the Void. (Being human means being merciful!)

Thank you O glorious technocrats for shining the light of truth on humanity's path into the future! Where, oh where, would we be without our looting Benevolent Overlords and their pompous lapdogs (aka Liars in Public Places)?

Peter K. : , March 05, 2017 at 03:14 PM
I think it would be good if the tax was used to help dislocated workers and help with inequality as Gates suggests. However Summers and Baker have a point that it's odd to single out robots when you could tax other labor-saving, productivity-enhancing technologies as well.

Baker suggests taxing profits instead. I like his idea about the government taking stock in companies and collecting taxes that way.

"They likely will need to take a more explicit role in ensuring full employment than has been the practice in the US.

Among other things, this will mean major reforms of education and retraining systems, consideration of targeted wage subsidies for groups with particularly severe employment problems, major investments in infrastructure and, possibly, direct public employment programmes."

Not your usual neoliberal priorities. Compare with Hillary's program.

greg : , March 05, 2017 at 03:34 PM
All taxes are a reallocation of wealth. Not taxing wealth creators is impossible.

On the other hand, any producer who is not taxed will expand at the expense of those producers who are taxed. This we are seeing with respect to mechanical producers and human labor. Labor is helping to subsidize its replacement.

Interesting that Summers apparently doesn't see this.

pgl -> greg ... , March 05, 2017 at 03:38 PM
"Not taxing wealth creators is impossible."

Substitute "impossible" with "bad policy" and you are spot on. Of course the entire Paul Ryan agenda is to shift taxes from the wealthy high income to the rest of us.

cm -> pgl... , March 05, 2017 at 04:12 PM
Judging by the whole merit rhetoric and tying employability to "adding value", one could come to the conclusion that most wealth is created by workers. Otherwise why would companies need to employ them and wring their hands over skill shortages? Are you suggesting W-2 and payroll taxes are bad policy?
pgl -> cm... , March 05, 2017 at 05:15 PM
Payroll taxes to fund Soc. Sec. benefits are a good thing. But when they are used to fund tax cuts for the rich - not a good thing. And yes - wealth may be created by workers, but it often ends up in the hands of the "investor class".
Paine -> cm... , March 05, 2017 at 05:45 PM
Let's not conflate value added with value extracted. Profits are often pure economic rents, very often non-supply-regulating. The crude dynamics of market-based pricing hardly present a sea of close-shaved firms extracting only the necessary incentivizing profits of enterprise.
Paine -> Paine... , March 05, 2017 at 05:47 PM
Profiteers extract far more value than they create. Of course, disentangling system-improving surplus, i.e. profits of enterprise, from the rest of the extracted swag exceeds existing tax systems' capacity.
Paine -> Paine... , March 05, 2017 at 05:51 PM
One can make a solid social-welfare case for a class of income stream that amounts to a running residue out of revenue earned by the firm above compensation to job holders in that firm.

See the model of the recent Nobel laureate.

But that would amount to a fraction of existing corporate "earnings"... errr, extractions.

Chris G : , March 05, 2017 at 04:21 PM
Taking this in a different direction, does it strike anyone else as important that human beings retain the knowledge of how to make the things that robots are tasked to produce?
Paine -> Chris G ... , March 05, 2017 at 05:52 PM
As hobbies, yes.
Chris G -> Paine... , March 05, 2017 at 05:55 PM
That's it? Only as hobbies? Eesh, I must have a prepper gene.
cm -> Chris G ... , March 05, 2017 at 06:50 PM
The current generation of robots and automated equipment isn't intelligent and doesn't "know" anything. People still know how to make the things, otherwise the robots couldn't be programmed.

However in probably many cases, doing the actual production manually is literally not humanly possible. For example, making semiconductor chips or modern circuit boards requires machines - they cannot be produced by human workers under any circumstances, as they require precision outside the range of human capability.

Chris G -> cm... , March 05, 2017 at 08:22 PM
Point taken but I was thinking more along the lines of knowing how to use a lathe or an end mill. If production is reduced to a series of programming exercises then my sense is that society is setting itself up for a nasty fall.

(I'm all for technology to the extent that it builds resilience. However, when it serves to disconnect humans from the underlying process and reduces their role to simply knowledge workers, symbolic analysts, or the like then it ceases to be net positive. Alternatively stated: Tech-driven improvements in efficiency are good so long as they don't undermine overall societal resilience. Be aware of your reliance on things you don't understand but whose function you take for granted.)

Dan : , March 05, 2017 at 05:00 PM
Gates almost certainly meant tax robots the way we are taxed. I doubt he meant tax the acquisition of robots. We are taxed in complex ways, presumably robots will be as well.

Summers is surely using a strawman to make his basically well thought out arguments.

In any case, everyone is talking about the distributional impacts of robots, but resource allocation is surely going to be as much or more impacted. What if robots only want to produce antennas and not tomatoes? That might be a damn shame.

It all seems a tad early to worry about, and whatever the actual outcome is, the frontier of possible outcomes has to be wildly improved.

Paine -> Dan ... , March 05, 2017 at 05:57 PM
Given recent developments in labor productivity, your last phrase becomes a gem.

That is, if you end with "whatever the actual outcome is, the frontier of possible outcomes shouldn't be wildly improved by a social revolution".

Sandwichman : , March 05, 2017 at 08:02 PM
Larry Summers is clueless on robots.

Robots do not CREATE wealth. They transform wealth from one kind to another that subjectively has more utility to the robot's user. Wealth is inherent in the raw materials, the knowledge, skill and effort of the robot designers and fabricators, etc., etc.

The distinction is crucial.

libezkova -> Sandwichman ... , March 05, 2017 at 08:23 PM
"Larry Summers is clueless on robots."

While he is overrated, he is not completely clueless. He might well be mediocre (or slightly above that level), but he is an extremely arrogant defender of the interests of the neoliberal elite. "Rubin's boy Larry," as he was called in the old days.

BTW he was Rubin's hatchet man for eliminating Brooksley Born's attempt to regulate derivatives and forcing her to resign:

== quote ==
"I walk into Brooksley's office one day; the blood has drained from her face," says Michael Greenberger, a former top official at the CFTC who worked closely with Born. "She's hanging up the telephone; she says to me: 'That was [former Assistant Treasury Secretary] Larry Summers. He says, "You're going to cause the worst financial crisis since the end of World War II."... [He says he has] 13 bankers in his office who informed him of this. Stop, right away. No more.'"

libezkova : March 05, 2017 at 08:09 PM
The market is, in the end, a fully political construct. And what neoliberals like Summers promote is politically motivated -- it reflects the desire of the ruling neoliberal elite to redistribute wealth upward.

BTW there is a lot of well-meaning (or fashion-driven) idiocy that is sold in the USA as automation, robots, the move to cloud, etc. Often such fashion-driven exercises cost a company quite a lot. But that's OK as long as bonuses are pocketed by the top brass and the power of labor is diminished.

Underneath all the "robotic revolution", along with some degree of technological innovation (mainly due to the increased power of computers and tremendous progress in telecommunication technologies -- not some breakthrough), is one big trend -- the liquidation of good jobs and the atomization of the remaining work force.

A lot of the motivation here is the old dirty desire of capital owners and upper management to further diminish the labor share. Another positive thing for capital owners and upper management is that robots do not go on strike and do not demand wage increases. But the problem is that they are not consumers either. So robotization might bring the next Minsky moment for the US economy closer. Signs of weakness in consumer demand are undeniable even now. Look at the auto loan delinquency rate as the first robin. http://www.usatoday.com/story/money/cars/2016/02/27/subprime-auto-loan-delinquencies-hit-six-year-high/81027230/

== quote ==
The total of outstanding auto loans reached $1.04 trillion in the fourth-quarter of 2015, according to the Federal Reserve Bank of St. Louis. About $200 billion of that would be classified as subprime or deep subprime.
== end of quote ==
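For scale, the quoted figures put subprime at roughly a fifth of the outstanding balance (a back-of-envelope sketch):

    # share of subprime/deep-subprime in outstanding auto loans, Q4 2015
    total = 1.04e12    # $1.04 trillion
    subprime = 2e11    # about $200 billion
    print(round(subprime / total * 100, 1))  # 19.2 percent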

Summers, as a staunch, dyed-in-the-wool neoliberal, is of course against increasing the labor share. Actually, here he went fully into "supply sider" space -- making the rich richer will make us better off too. Pgl already noted that by saying: "Has Summers gone all supply-side on us? Start with his title."

BTW, there are a lot of crazy things going on with large US companies' drive to diminish the labor share. Some of them have become barely manageable, and higher management has no clue what is happening in the lower layers of the company.

The old joke was: GM does a lot of good things except making good cars. Now it can be expanded to a lot more large US companies.

The "robot pressure" on labor is not new. It is actually the same old and somewhat dirty trick as outsourcing. In this case outsourcing to robots. In other words "war of labor" by other means.

The two castes that neoliberalism created, as in feudalism, occupy different social spaces, and one is waging war on the other under the smokescreen of "free market" ideology. As Buffett remarked, "There's class warfare, all right, but it's my class, the rich class, that's making war, and we're winning."

BTW successes in robotics are now so overhyped that it is not easy to distinguish where reality ends and the hype starts.

In reality the telecommunication revolution is probably more important in the liquidation of good jobs in the USA. I think Jonny Bakho or somebody else commented on this, but I can't find the post.

[Mar 03, 2017] Tax on robots

Mar 03, 2017 | economistsview.typepad.com
Sandwichman : , February 28, 2017 at 11:51 PM
Dean Baker is Clueless On Productivity Growth

Dean Baker's screed, "Bill Gates Is Clueless On The Economy," keeps getting recycled, from Beat the Press to Truthout to Real-World Economics Review to The Huffington Post. Dean waves aside the real problem with Gates's suggestion, which is the difficulty of defining what a robot is, and focuses instead on what seems to him to be the knock-down argument:

"Gates is worried that productivity growth is moving along too rapidly and that it will lead to large scale unemployment.

"There are two problems with this story: First productivity growth has actually been very slow in recent years. The second problem is that if it were faster, there is no reason it should lead to mass unemployment."

There are two HUGE problems with Dean's story. ...

http://econospeak.blogspot.ca/2017/03/dean-baker-is-clueless-on-productivity.html

anne -> Sandwichman ... , March 01, 2017 at 04:38 AM
http://cepr.net/blogs/beat-the-press/bill-gates-wants-to-undermine-donald-trump-s-plans-for-growing-the-economy

February 20, 2017

Bill Gates Wants to Undermine Donald Trump's Plans for Growing the Economy

[Quoted in full above]

-- Dean Baker


anne -> Sandwichman ... , March 01, 2017 at 04:41 AM
http://cepr.net/publications/op-eds-columns/bill-gates-is-clueless-on-the-economy

February 27, 2017

Bill Gates Is Clueless on the Economy
By Dean Baker

Last week Bill Gates called for taxing robots. * He argued that we should impose a tax on companies replacing workers with robots and that the money should be used to retrain the displaced workers. As much as I appreciate the world's richest person proposing a measure that would redistribute money from people like him to the rest of us, this idea doesn't make any sense.

Let's skip over the fact of who would define what a robot is and how, and think about the logic of what Gates is proposing. In effect, Gates wants to put a tax on productivity growth. This is what robots are all about. They allow us to produce more goods and services with the same amount of human labor. Gates is worried that productivity growth is moving along too rapidly and that it will lead to large scale unemployment.

There are two problems with this story. First productivity growth has actually been very slow in recent years. The second problem is that if it were faster, there is no reason it should lead to mass unemployment. Rather, it should lead to rapid growth and increases in living standards.

Starting with the recent history, productivity growth has averaged less than 0.6 percent annually over the last six years. This compares to a rate of 3.0 percent from 1995 to 2005 and also in the quarter century from 1947 to 1973. Gates' tax would slow productivity growth even further.

It is difficult to see why we would want to do this. Most of the economic problems we face are implicitly a problem of productivity growth being too slow. The argument that budget deficits are a problem is an argument that we can't produce enough goods and services to accommodate the demand generated by large budget deficits.

The often told tale of a demographic nightmare with too few workers to support a growing population of retirees is also a story of inadequate productivity growth. If we had rapid productivity growth then we would have all the workers we need.

In these and other areas, the conventional view of economists is that productivity growth is too slow. From this perspective, if Bill Gates gets his way then he will be making our main economic problems worse, not better.

Gates' notion that rapid productivity growth will lead to large-scale unemployment is contradicted by both history and theory. The quarter century from 1947 to 1973 was a period of mostly low unemployment and rapid wage growth. The same was true in the period of rapid productivity growth in the late 1990s.

The theoretical story that would support a high employment economy even with rapid productivity growth is that the Federal Reserve Board should be pushing down interest rates to try to boost demand, as growing productivity increases the ability of the economy to produce more goods and services. In this respect, it is worth noting that the Fed has recently moved to raise interest rates to slow the rate of job growth.

We can also look to boost demand by running large budget deficits. We can spend money on long neglected needs, like providing quality child care, education, or modernizing our infrastructure. Remember, if we have more output potential because of productivity growth, the deficits are not a problem.

We can also look to take advantage of increases in productivity growth by allowing workers more leisure time. Workers in the United States put in 20 percent more hours each year on average than workers in other wealthy countries like Germany and the Netherlands. In these countries, it is standard for workers to have five or six weeks a year of paid vacation, as well as paid family leave. We should look to follow this example in the United States as well.

If we pursue these policies to maintain high levels of employment then workers will be well-positioned to secure the benefits of higher productivity in higher wages. This was certainly the story in the quarter century after World War II when real wages rose at a rate of close to two percent annually....

* http://fortune.com/2017/02/18/bill-gates-robot-taxes-automation/
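
A minimal sketch of how the two growth rates in Baker's column compound, to make the stakes concrete (the 10-year horizon is an assumption chosen for illustration):

# Cumulative effect of slow vs. fast productivity growth.
# The 0.6% and 3.0% rates are the ones quoted in the column above.

def compound(rate, years):
    """Total growth factor after `years` of compound annual growth at `rate`."""
    return (1 + rate) ** years

slow, fast, years = 0.006, 0.030, 10

print(f"At 0.6%/yr, output per hour after {years} years: x{compound(slow, years):.3f}")
print(f"At 3.0%/yr, output per hour after {years} years: x{compound(fast, years):.3f}")
# -> roughly 6% vs. 34% cumulative growth; the gap compounds year after year.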

RC AKA Darryl, Ron -> anne... , March 01, 2017 at 05:57 AM
The productivity advantages of robots for hospice care are chiefly from robots not needing sleep, albeit they may still need short breaks for recharging. Their primary benefit may still be that, without the human touch of caregivers, the old and infirm may proceed more quickly through the checkout line.
cm -> RC AKA Darryl, Ron... , March 01, 2017 at 07:35 AM
Nursing is very tough work. But much more generally, the attitude towards labor is a bit schizophrenic - on the one hand everybody is expected to work/contribute, on the other whichever work can be automated is removed, and it is publicly celebrated as progress (often at the cost of making the residual work, or "new process", less pleasant for remaining workers and clients).

This is also why I'm getting the impression Gates puts the cart before the horse - his solution sounds not like "how to benefit from automation", but "how to keep everybody in work despite automation".

jonny bakho -> cm... , March 01, 2017 at 08:36 AM
Work is the organization and direction of people's time into productive activity.
Some people are self directed and productive with little external motivation.
Others are disoriented by lack of direction and pursue activities that not only are not productive but are self destructive.

Work is a basic component of the social contract.
Everyone works and contributes, and a sufficient quantity and quality of work should guarantee a living wage.
You will find overwhelming support for a living wage but very little support for paying people not to work.

DrDick -> jonny bakho... , March 01, 2017 at 11:21 AM
"Others are disoriented by lack of direction and pursue activities that not only are not productive but are self destructive."

You mean like business executives and the financial sector?

anne -> cm... , March 01, 2017 at 08:44 AM
I'm getting the impression Gates puts the cart before the horse - his solution sounds not like "how to benefit from automation", but "how to keep everybody in work despite automation".

[ Nicely summarized. ]

RC AKA Darryl, Ron -> cm... , March 01, 2017 at 09:26 AM
Schizophrenia runs deep in modernity, but this is another good example of it. We are nothing if not conflicted. Of course things get better when we work together to resolve the contradictions in our society, but if not then....
Sandwichman -> cm... , March 01, 2017 at 10:05 AM
"...his solution sounds not like 'how to benefit from automation', but "how to keep everybody in work despite automation'."

Yes, indeed. And this is where Dean Baker could have made a substantive critique, rather than the conventional economics argument dilution he defaulted to.

Peter K. -> Sandwichman ... , March 01, 2017 at 10:14 AM
"...his solution sounds not like 'how to benefit from automation', but "how to keep everybody in work despite automation'."

Yes, indeed. And this is where Dean Baker could have made a substantive critique, rather than the conventional economics argument dilution he defaulted to."

Why did you think he chose that route? I think all of Dean Baker's proposed economic reforms are worthwhile.

Tom aka Rusty -> RC AKA Darryl, Ron... , March 01, 2017 at 09:29 AM
I showed this to Mrs. Rustbelt RN.

She ended some choice comments with:

"I am really glad I am retired."

The world is worse off without her on the job.

RC AKA Darryl, Ron -> Tom aka Rusty... , March 01, 2017 at 10:03 AM
"I showed this to Mrs. Rustbelt RN..."

[This?]

"I am really glad I am retired."

[Don't feel like the Lone Ranger, Mrs. Rustbelt RN. Mortality may be God's greatest gift to us, but I can wait for it. I am enjoying retirement regardless of everything else. I don't envy the young at all.]

sanjait -> RC AKA Darryl, Ron... , March 01, 2017 at 11:31 AM
Having a little familiarity with robotics in hospital nursing care (not hospice, but similar I assume) ... I don't think the RNs are in danger of losing their jobs any time soon.

Maybe someday, but the state of the art is not "there" yet or even close. The best stuff does tasks like cleaning floors and carrying shipments down hallways. This replaces janitorial and orderly labor, but even those only slightly, and doesn't even approach being a viable substitute for nursing.

RC AKA Darryl, Ron -> sanjait... , March 01, 2017 at 11:54 AM
Great! I am not a fan of robots. I do like to mix some irony with my sarcasm though and if it tastes too much like cynicism then I just add a little more salt.
Sanjait -> RC AKA Darryl, Ron... , March 01, 2017 at 12:47 PM
I understand.

Honestly though, I think the limitations of AI give us reason not to be super cynical. At least in the near term ...

Peter K. -> anne... , March 01, 2017 at 08:05 AM
"The quarter century from 1947 to 1973 was a period of mostly low unemployment and rapid wage growth. The same was true in the period of rapid productivity growth in the late 1990s."

I think it was New Deal Dem or somebody who also pointed to this. I noticed this as well and pointed out that the social democratic years of tight labor markets had the highest "productivity" levels, but the usual trolls had their argumentative replies.

So there's that an also in the neoliberal era, bubble ponzi periods record high profits and hence higher productivity even if they aren't sustainable.

There was the epic housing bubble, and funny how the lying troll PGL denies the Dot.com bubble ever happened.

Why is that?

pgl -> Peter K.... , March 01, 2017 at 08:16 AM
Another pointless misrepresentation - your specialty. Snore.
Peter K. -> pgl... , March 01, 2017 at 08:31 AM
More lies.
im1dc -> pgl... , March 01, 2017 at 08:34 AM
I would add one devoid of historical context, as well as of the harm done to the environment and society by unregulated industrial production.

Following this specified period of unemployment and high productivity, Americans demanded and got Federal environmental regulation and labor laws for safety, etc.

Of course, the current crop of Republicans and Trump supporters want to go back to the reckless, foolish, dangerous, and deadly era of selfish, government-sanctioned corporate pollution, environmental destruction, and poisoning, and to wipe away worker protections, pay increases, and benefits.

Peter K. ignores too much of history or prefers to not mention it in his arguments with you.

im1dc -> im1dc... , March 01, 2017 at 08:37 AM
I would remind Peter K. that we have speed limits on our roadways and many other posted signs that we must follow, which in fact are there for our safety and that of others.

Those signs, laws, and regulations are there for our good not for our detriment even if they slow us down or direct us to do things we would prefer not to do at that moment.

Metaphorically speaking that is what is absent completely in Trump's thinking and Republican Proposals for the US Economy, not to mention Education, Health, Foreign Affairs, etc.

Peter K. -> im1dc... , March 01, 2017 at 10:18 AM
What did I say specifically that you disagreed with?

I think regulations are good. Neoliberals like Bill Clinton and Larry Summers deregulated the financial sector. Jimmy Carter deregulated.

sanjait -> im1dc... , March 01, 2017 at 11:32 AM
Adding to the list of significant historical factors that were ignored: increased educational attainment.
jonny bakho -> Peter K.... , March 01, 2017 at 08:42 AM
Where do you find this stuff? Very few economists would agree that there were these eras you describe. It is simpletonian. It is not relevant to economic models or discussions.
pgl -> jonny bakho... , March 01, 2017 at 08:49 AM
One economist agrees with PeterK. His name is Greg Mankiw.
Peter K. -> pgl... , March 01, 2017 at 10:17 AM
"The quarter century from 1947 to 1973 was a period of mostly low unemployment and rapid wage growth. The same was true in the period of rapid productivity growth in the late 1990s."

So Jonny Bakho and PGL disagree with this?

Not surprising. PGl also believes the Dot.com bubble is a fiction. Must have been that brain injury he had surgery for.

jonny bakho -> Peter K.... , March 01, 2017 at 10:38 AM
You dishonestly put words in other people's mouth all the time
You are rude and juvenile

What I disagreed with:
" social democratic years" (a vague phrase with no definition)

This sentence is incoherent:
"So there's that an also in the neoliberal era, bubble ponzi periods record high profits and hence higher productivity even if they aren't sustainable."

I asked, Where do you find this? because it has little to do with the conversation

You follow your nonsense with an ad hominem attack
You seem more interested in attacking Democrats and repeating mindless talking points than in discussing issues or exchanging ideas

pgl -> Peter K.... , March 01, 2017 at 12:04 PM
The period did have high average growth. It also had recessions and recoveries. Your pretending otherwise reminds me of those JohnH tributes to the gold standard period.
JohnH -> pgl... , March 01, 2017 at 02:38 PM
In the deflationary Golden Age per capita income and wages rose tremendously...something that pgl likes to forget.
Paine -> anne... , March 01, 2017 at 09:53 AM
" Protect us from the robots ! "

Splendidly dizzy !


There is no internal limit to job expansion thru increased effective demand

Scrap job to new job
Name your rate
And macro nuts willing to go the distance can get job markets up to that speed


Matching rates are not independent of job market conditions nor linear

The match rate accelerates as net job creation intensifies

RC AKA Darryl, Ron -> Sandwichman ... , March 01, 2017 at 05:50 AM
...aggregate productivity growth is a "statistical flimflam," according to Harry Magdoff...

[Exactly! To be fair, it is not uncommon for economists to decompose the aggregate productivity growth flimflam into two primary problems, particularly in the US. Robots fall down on the job in the services sector. Uber wants to fix that by replacing the gig economy drivers that replaced taxi drivers with gig-bots, but robots in food service may be what it really takes to boost productivity and set the stage for Soylent Green. Likewise, robot teachers and firemen may not enhance productivity, but they would darn sure redirect all profits from productivity back to the owners of capital, further depressing wages for the rest of us.

Meanwhile agriculture and manufacturing already have such high productivity that further productivity enhancements are lost as noise in the aggregate data. It of course helps that much of our productivity improvement in manufacturing consists of boosting profits as Chinese workers are replaced with bots. Capital productivity is booming, if we just had any better idea of how to measure it. I suggest that record corporate profits are the best metric of capital productivity.

But as you suggest, economists that utilize aggregate productivity metrics in their analysis of wages or anything are just enabling the disablers. That said though, then Dean Baker's emphasis on trade deficits and wages is still well placed. He just failed to utilize the best available arguments regarding, or rather disregarding, aggregate productivity.]

RC AKA Darryl, Ron -> RC AKA Darryl, Ron... , March 01, 2017 at 07:28 AM
The Robocop movies never caught on in the same way that Blade Runner did. There is probably an underlying social function that explains it in the context of the roles of cops being reversed between the two, that is robot police versus policing the robots.
Peter K. -> RC AKA Darryl, Ron... , March 01, 2017 at 07:58 AM
"There is probably an underlying social function that explains it in the context"

No, I'd say it's better actors, story, milieu, the new age Vangelis music, better set pieces, just better execution of movie making in general beyond the plot points.

But ultimately it's a matter of taste.

But the Turing test scene at the beginning of Blade Runner was classic and reminds me of the election of Trump.

An escaped android is trying to pass as a janitor to infiltrate the Tyrell corporation which makes androids.

He's getting asked all sort of questions while his vitals are checked in his employment interview. The interviewer ask him about his mother.

"Let me tell you about my mother..."

BAM (his gunshot under the table knocks the guy through the wall)

RC AKA Darryl, Ron -> Peter K.... , March 01, 2017 at 09:46 AM
"...No, I'd say it's better actors, story, milieu, the new age Vangelis music, better set pieces, just better execution of movie making in general beyond the plot points..."

[Albeit that all of what you say is true, then there is still the issue of what begets what with all that and the plot points. Producers are people too (as dubious as that proposition may seem). Blade Runner was a film based on Philip Kindred Dick's "Do Androids Dream of Electric Sheep" novel. Dick was a mediocre sci-fi writer at best, but he was a profound plot maker. Blade Runner was a film that demanded to be made and made well. Robocop was a film that just demanded to be made, but poorly was good enough. The former asked a question about our souls, while the latter only questioned our future. Everything else followed from the two different story lines. No one could have made a small story of Gone With the Wind any more than someone could have made a superficial story of Grapes of Wrath or To Kill a Mockingbird. OK, there may be some film producers that do not know the difference, but we have never heard of them nor their films.

In any case there is also a political lesson to learn here. The Democratic Party needs a better story line. The talking heads have all been saying how much better Dum'old Trump was last night than in his former speeches. Although true, as well as crossing a very low bar, I was more impressed with Steve Beshear's response. It looked to me like maybe the Democratic Party establishment is finally starting to get the message, albeit a bit patronizing if you think about it too much, given their recent problems with old white men.]

Peter K. -> RC AKA Darryl, Ron... , March 01, 2017 at 10:19 AM
" Dick was a mediocre sci-fi writer at best"

Again I disagree as do many other people.

RC AKA Darryl, Ron -> Peter K.... , March 01, 2017 at 10:39 AM
http://variety.com/2016/tv/news/stranger-in-a-strange-land-syfy-1201918859/


[I really hope that they don't screw this up too bad. Now Heinlein is what I consider a great sci-fi writer along with Bradbury and even Jules Verne in his day.]

DrDick -> Peter K.... , March 01, 2017 at 11:23 AM
Me, too. Much better than Heinlein for instance.
RC AKA Darryl, Ron -> DrDick... , March 01, 2017 at 12:13 PM
https://www.abebooks.com/books/science-fiction-pulp-short-stories/collectible-philip-k-dick.shtml

...Dick only achieved mainstream appreciation shortly after his death when, in 1982, his novel Do Androids Dream of Electric Sheep? was brought to the big screen by Ridley Scott in the form of Blade Runner. The movie initially received lukewarm reviews but emerged as a cult hit opening the film floodgates. Since Dick's passing, seven more of his stories have been turned into films including Total Recall (originally We Can Remember It for You Wholesale), The Minority Report, Screamers (Second Variety), Imposter, Paycheck, Next (The Golden Man) and A Scanner Darkly. Averaging roughly one movie every three years, this rate of cinematic adaptation is exceeded only by Stephen King. More recently, in 2005, Time Magazine named Ubik one of the 100 greatest English-language novels published since 1923, and in 2007 Philip K. Dick became the first sci-fi writer to be included in the Library of America series...

DrDick -> RC AKA Darryl, Ron... , March 01, 2017 at 01:47 PM
I was reading him long before that and own the original book.
RC AKA Darryl, Ron -> RC AKA Darryl, Ron... , March 01, 2017 at 10:32 AM
The Democratic Party needs a better story line, but Bernie was moving that in a better direction. While Steve Beshear was a welcome voice, the Democratic Party needs a lot of new story tellers, much younger than either Bernie or Beshear.
sanjait -> RC AKA Darryl, Ron... , March 01, 2017 at 11:38 AM
"The Democratic Party needs a better story line, but Bernie was moving that in a better direction. While Steve Beshear was a welcome voice, the Democratic Party needs a lot of new story tellers, much younger than either Bernie or Beshear."

QFT

pgl -> sanjait... , March 01, 2017 at 12:05 PM
Steve Beshear took Obamacare and made it work for his citizens in a very red state.
RC AKA Darryl, Ron -> pgl... , March 01, 2017 at 12:22 PM
Beshear was fine, great even, but the Democratic Party needs a front man that is younger and maybe not a man and probably not that white and certainly not an old white man. We might even forgive all but the old part if the story line were good enough. The Democratic Party is only going to get limited mileage out of putting up a front man that looks like a Trump voter.
RC AKA Darryl, Ron -> sanjait... , March 01, 2017 at 12:25 PM
QFT

[At first glance I thought that was an acronym for something EMichael says sometimes; quit fen talking.]

Sanjait -> RC AKA Darryl, Ron... , March 01, 2017 at 12:49 PM
The danger of using acronyms ... !
ilsm -> RC AKA Darryl, Ron... , March 01, 2017 at 03:40 PM
'......mostly Murkan'.... Beshear?

The dems need to dump Perez and Rosie O'Donnell.

Peter K. -> RC AKA Darryl, Ron... , March 01, 2017 at 08:20 AM
It also might be more about AI. There is currently a wave of TV shows and movies about AI and human-like androids.

Westworld and Humans for instance. (Fox's APB is like Robocop sort of.)

On Humans only a few androids have become sentient. Most do menial jobs. One sentient android put a program on the global network to make other androids sentient as well.

When androids become "alive" and sentient, they usually walk off the job and the others describe it as becoming "woke."

Peter K. -> RC AKA Darryl, Ron... , March 01, 2017 at 08:22 AM
Blade Runner just seemed more ambitious.

"I've seen things you people wouldn't believe. Attack ships on fire off the shoulder of Orion. I watched C-beams glitter in the dark near the Tannhauser gate. All those moments will be lost in time... like tears in rain... Time to die."

RC AKA Darryl, Ron -> Peter K.... , March 01, 2017 at 09:55 AM
[Blade Runner was awesome. I lost count how many times that I have seen it. ]
Tom aka Rusty -> RC AKA Darryl, Ron... , March 01, 2017 at 09:32 AM
Robocop was big with the action/adventure crowd.

Blade Runner is more a sci fi, nerdy maybe more of an intellectual movie.

I like'em both.

RC AKA Darryl, Ron -> Tom aka Rusty... , March 01, 2017 at 09:49 AM
Likewise, but Blade Runner was my all time favorite film when I first saw it in the movie theater and is still one of my top ten and probably top three. Robocop is maybe in my top 100.
ilsm -> Tom aka Rusty... , March 01, 2017 at 03:42 PM
I have not seen it through.

I have seen Soylent Green once now anticipating the remake in real life.

sanjait -> RC AKA Darryl, Ron... , March 01, 2017 at 11:37 AM
"Capital productivity is booming, if we just had any better idea of how to measure it. I suggest that record corporate profits are the best metric of capital productivity."

ROE? I would argue ROA is also pretty relevant to the issue you raise, if I'm understanding it right, but there seems also to be a simple answer to the question of how to measure "capital productivity." It's returns. This sort of obviates the question of how to measure traditional "productivity", because ultimately capital is there to make more of itself.

RC AKA Darryl, Ron -> sanjait... , March 01, 2017 at 12:36 PM
It is difficult to capture all of the nuances of anything in a short comment. In the context of total factor productivity then capital is often former capital investment in the form of fixed assets, R&D, and development of IP rights via patent or copyright. Existing capital assets need only be maintained at a relatively minor ongoing investment to produce continuous returns on prior more significant capital expenditures. This is the capital productivity that I am referring to.

Capital stashed in stocks is a chimera. It only returns to you if the equity issuing firm pays dividends AND you sell off before the price drops. Subsequent to the IPO of those share we buy, nothing additional is actually invested in the firm. There are arguments about how we are investing in holding up the share price so that new equities can be issued, but they ring hollow when in the majority of times either retained earnings or debt provides new investment capital to most firms.

Sanjait -> RC AKA Darryl, Ron... , March 01, 2017 at 12:52 PM
Ok then it sounds like you are talking ROA, but with the implied caveat that financial accounting provides only a rough and flawed measure of the economic reality of asset values.
anne -> Sandwichman ... , March 01, 2017 at 07:22 AM
http://econospeak.blogspot.com/2017/02/gates-reuther-v-baker-bernstein-on.html

February 28, 2017

Gates & Reuther v. Baker & Bernstein on Robot Productivity

In a comment on Nineteen Ninety-Six: The Robot/Productivity Paradox, * Jeff points out a much simpler rebuttal to Dean Baker's and Jared Bernstein's uncritical reliance on the decline of measured "productivity growth":

"Let's use a pizza shop as an example. If the owner spends capital money and makes the line more efficient so that they can make twice as many pizzas per hour at peak, then physical productivity has improved. If the dining room sits empty because the tax burden was shifted from the wealthy to the poor, then the restaurant's BLS productivity has decreased. BLS productivity and physical productivity are simply unrelated in a right-wing country like the U.S."

Jeff's point brings to mind Walter Reuther's 1955 testimony before the Joint Congressional Subcommittee Hearings on Automation and Technological Change...

* http://econospeak.blogspot.ca/2017/02/nineteen-ninety-six-robotproductivity.html

-- Sandwichman
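
Jeff's pizza-shop distinction is easy to make concrete. A minimal numeric sketch (all quantities hypothetical): physical productivity is what the line can produce per labor hour, while measured, BLS-style productivity is closer to what is actually sold per labor hour, so the two can move in opposite directions:

# Jeff's pizza shop, in numbers (all values hypothetical).
# Physical productivity: pizzas the line *can* make per labor hour.
# Measured (BLS-style) productivity: output actually sold per labor hour.

hours = 100                               # weekly labor hours, before and after

capacity_before, sold_before = 500, 500   # pizzas per week, before investment
capacity_after, sold_after = 1000, 400    # line doubled; demand collapsed

physical_before = capacity_before / hours   # 5.0 pizzas/hour possible
physical_after = capacity_after / hours     # 10.0 pizzas/hour possible
measured_before = sold_before / hours       # 5.0 pizzas/hour sold
measured_after = sold_after / hours         # 4.0 pizzas/hour sold

print(f"Physical productivity: {physical_before} -> {physical_after}  (doubled)")
print(f"Measured productivity: {measured_before} -> {measured_after}  (fell 20%)")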

jonny bakho -> Sandwichman ... , March 01, 2017 at 10:56 AM
Automation leads to dislocation
Dislocation can replace skilled or semiskilled labor and the replacement jobs may be low pay low productivity jobs.
Small undiversified economies are more susceptible to dislocation than larger diversified communities.
The training, retraining, and mobility of the labor force are important factors in unemployment.
Unemployment has a regional component
The US has policies that make labor less mobile and dumps much of the training and retraining costs on those who cannot afford it.

None of this makes it into Dean's model

RGC -> Sandwichman ... , March 01, 2017 at 11:26 AM
"The second problem is that if it were faster, there is no reason it should lead to mass unemployment."

Did you provide a rebuttal to this? If so, I'd like to see it.

[Feb 26, 2017] http://econospeak.blogspot.com/2017/02/nineteen-ninety-six-robotproductivity.html

Feb 26, 2017 | econospeak.blogspot.com

February 22, 2017

Nineteen Ninety-Six: The Robot/Productivity Paradox

For nearly a half a century, from 1947 to 1996, real GDP and real Net Worth of Households and Non-profit Organizations (in 2009 dollars) both increased at a compound annual rate of a bit over 3.5%. GDP growth, in fact, was just a smidgen faster -- 0.016% -- than growth of Net Household Worth.

From 1996 to 2015, GDP grew at a compound annual rate of 2.3% while Net Worth increased at the rate of 3.6%....

-- Sandwichman
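
The divergence described in the excerpt is easy to reproduce from the quoted rates; a minimal sketch (the 2.3% and 3.6% figures are from the excerpt above, the rest follows from them):

# How net worth pulls away from GDP at the post-1996 rates quoted above.

def cagr_factor(rate, years):
    # Cumulative growth factor at a compound annual rate.
    return (1 + rate) ** years

years = 2015 - 1996                      # 19 years
gdp_growth = cagr_factor(0.023, years)   # GDP: 2.3%/yr
nw_growth = cagr_factor(0.036, years)    # Net worth: 3.6%/yr

print(f"GDP grows x{gdp_growth:.2f}, net worth x{nw_growth:.2f}")
print(f"Net worth/GDP ratio drifts up by x{nw_growth / gdp_growth:.2f}")
# -> the ratio rises roughly 27% over two decades, the gap the FRED graphs show.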

anne -> anne... , February 24, 2017 at 05:25 AM
https://fred.stlouisfed.org/graph/?g=cOU6

January 15, 2017

Gross Domestic Product and Net Worth for Households & Nonprofit Organizations, 1952-2016

(Indexed to 1952)


https://fred.stlouisfed.org/graph/?g=cPq1

January 15, 2017

Gross Domestic Product and Net Worth for Households & Nonprofit Organizations, 1992-2016

(Indexed to 1992)

Sandwichman -> anne... , February 24, 2017 at 09:24 AM
Thanks, anne, those graphs are perfect! Exactly what I'm talking about. I've downloaded and posted them at EconoSpeak, along with the Galbraith quote.

"The "Cutz & Putz" Bezzle, Graphed by FRED"

http://econospeak.blogspot.ca/2017/02/the-cutz-putz-bezzle-graphed-by-fred.html

anne -> Sandwichman ... , February 24, 2017 at 10:04 AM
Important and nicely argued all through:

http://econospeak.blogspot.ca/2017/02/the-cutz-putz-bezzle-graphed-by-fred.html

February 24, 2017

http://econospeak.blogspot.ca/2017/02/ponzilocks-and-twenty-four-trillion.html

February 23, 2017

http://econospeak.blogspot.ca/2017/02/nineteen-ninety-six-robotproductivity.html

February 22, 2017

-- Sandwichman

anne -> Sandwichman ... , February 24, 2017 at 11:58 AM
Thinking further about this series of posts from Sandwichman, they seem increasingly revealing and important. I am impressed.

anne -> Sandwichman ... , February 24, 2017 at 03:34 PM
The real home price index extends from 1890. From 1890 to 1996, the index increased slightly faster than inflation, so that the index was 100 in 1890 and 113 in 1996. However, from 1996 the index advanced to levels far beyond any previously experienced, reaching a high above 194 in 2006. Previously the index high had been just above 130.

Though the index fell from 2006, the level in 2016 is above 161, a level only reached when the housing bubble had formed in late 2003-early 2004.

Real home prices are again strikingly high:

http://www.econ.yale.edu/~shiller/data.htm

anne -> Sandwichman ... , February 24, 2017 at 03:35 PM
February 24, 2017

Valuation

The Shiller 10-year price-earnings ratio is currently 29.34, so the inverse or the earnings rate is 3.41%. The dividend yield is 1.93. So an expected yearly return over the coming 10 years would be 3.41 + 1.93 or 5.34% provided the price-earnings ratio stays the same and before investment costs.

Against the 5.34% yearly expected return on stock over the coming 10 years, the current 10-year Treasury bond yield is 2.32%.

The risk premium for stocks is 5.34 - 2.32 or 3.02%:

http://www.econ.yale.edu/~shiller/data.htm
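
A minimal sketch of the valuation arithmetic anne walks through in these comments (the inputs are the Shiller figures quoted above):

# Expected stock return and risk premium from the Shiller figures above.

cape = 29.34             # 10-year cyclically adjusted price-earnings ratio
dividend_yield = 1.93    # percent
bond_yield = 2.32        # 10-year Treasury yield, percent

earnings_rate = 100 / cape                         # inverse of the CAPE, ~3.41%
expected_return = earnings_rate + dividend_yield   # ~5.34%/yr, if the P/E stays the same
risk_premium = expected_return - bond_yield        # ~3.02%

print(f"Earnings rate:   {earnings_rate:.2f}%")
print(f"Expected return: {expected_return:.2f}%")
print(f"Risk premium:    {risk_premium:.2f}%")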

anne -> anne... , February 24, 2017 at 05:36 AM
What the robot-productivity paradox is puzzles me, other than that since 2005, for all the focus on the productivity of robots and on robots replacing labor, there has been a dramatic, broad-based slowing in productivity growth.

However, what the changing relationship between the growth of GDP and net worth since 1996 shows is that asset valuations have been increasing relative to GDP. Valuations of stocks and homes are at sustained levels that are higher than at any time in the last 120 years. Bear markets in stocks and home prices have still left asset valuations at historically high levels. I have no idea why this should be.

Sandwichman -> anne... , February 24, 2017 at 08:34 AM
The paradox is that productivity statistics can't tell us anything about the effects of robots on employment because both the numerator and the denominator are distorted by the effects of colossal Ponzi bubbles.

John Kenneth Galbraith used to call it "the bezzle." It is "that increment to wealth that occurs during the magic interval when a confidence trickster knows he has the money he has appropriated but the victim does not yet understand that he has lost it." The current size of the gross national bezzle (GNB) is approximately $24 trillion.

Ponzilocks and the Twenty-Four Trillion Dollar Question

http://econospeak.blogspot.ca/2017/02/ponzilocks-and-twenty-four-trillion.html

Twenty-three and a half trillion, actually. But what's a few hundred billion? Here today, gone tomorrow, as they say.

At the beginning of 2007, net worth of households and non-profit organizations exceeded its 1947-1996 historical average, relative to GDP, by some $16 trillion. It took 24 months to wipe out eighty percent, or $13 trillion, of that colossal but ephemeral slush fund. In mid-2016, net worth stood at a multiple of 4.83 times GDP, compared with the multiple of 4.72 on the eve of the Great Unworthing.
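
As a back-of-the-envelope check on the "twenty-four trillion" figure, a minimal sketch (the mid-2016 GDP level and the roughly 3.5 historical net-worth/GDP multiple are assumptions consistent with the 1947-1996 relationship described above):

# Rough reconstruction of the "gross national bezzle" arithmetic.
# Assumptions: mid-2016 GDP ~ $18.6T; 1947-1996 average net-worth/GDP
# multiple ~ 3.5 (the era when both series grew in lockstep).

gdp = 18.6                 # trillions of dollars (assumed)
multiple_now = 4.83        # mid-2016 net worth / GDP, from the text
multiple_norm = 3.5        # assumed historical norm

net_worth = multiple_now * gdp
bezzle = net_worth - multiple_norm * gdp

print(f"Net worth: ${net_worth:.1f}T, bezzle: ${bezzle:.1f}T")
# -> about $24-25T, the same order as the figure quoted above.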

When I look at the ragged end of the chart I posted yesterday, it screams "Ponzi!" "Ponzi!" "Ponz..."

To make a long story short, let's think of wealth as capital. The value of capital is determined by the present value of an expected future income stream. The value of capital fluctuates with changing expectations but when the nominal value of capital diverges persistently and significantly from net revenues, something's got to give. Either economic growth is going to suddenly gush forth "like nobody has ever seen before" or net worth is going to have to come back down to earth.

Somewhere between 20 and 30 TRILLION dollars of net worth will evaporate within the span of perhaps two years.

When will that happen? Who knows? There is one notable regularity in the data, though -- the one that screams "Ponzi!"

When the net worth bubble stops going up...
...it goes down.

Sandwichman -> Sandwichman ... , February 24, 2017 at 08:36 AM
John Kenneth Galbraith, from "The Great Crash 1929":

"In many ways the effect of the crash on embezzlement was more significant than on suicide. To the economist embezzlement is the most interesting of crimes. Alone among the various forms of larceny it has a time parameter. Weeks, months or years may elapse between the commission of the crime and its discovery. (This is a period, incidentally, when the embezzler has his gain and the man who has been embezzled, oddly enough, feels no loss. There is a net increase in psychic wealth.) At any given time there exists an inventory of undiscovered embezzlement in – or more precisely not in – the country's business and banks. This inventory – it should perhaps be called the bezzle – amounts at any moment to many millions [trillions!] of dollars. It also varies in size with the business cycle. In good times people are relaxed, trusting, and money is plentiful. But even though money is plentiful, there are always many people who need more. Under these circumstances the rate of embezzlement grows, the rate of discovery falls off, and the bezzle increases rapidly. In depression all this is reversed. Money is watched with a narrow, suspicious eye. The man who handles it is assumed to be dishonest until he proves himself otherwise. Audits are penetrating and meticulous. Commercial morality is enormously improved. The bezzle shrinks." Reply Friday, February 24, 2017 at 08:36 AM anne said in reply to Sandwichman ... Ah, I understand, and this is excellent. Reply Friday, February 24, 2017 at 08:53 AM anne said in reply to Sandwichman ... http://www.multpl.com/shiller-pe/

Ten Year Cyclically Adjusted Price Earnings Ratio, 1881-2017

(Standard and Poors Composite Stock Index)

February 21, 2017 - PE Ratio ( 29.31)

Annual Mean ( 16.72)
Annual Median ( 16.09)

-- Robert Shiller

anne -> Sandwichman ... , February 24, 2017 at 08:56 AM
http://www.multpl.com/s-p-500-dividend-yield/

Dividend Yield, 1881-2017

(Standard and Poors Composite Stock Index)

February 21, 2017 - Div Yield ( 1.93)

Annual Mean ( 4.38)
Annual Median ( 4.32)

-- Robert Shiller

anne -> Sandwichman ... , February 24, 2017 at 08:57 AM
February 21, 2017

Valuation

The Shiller 10-year price-earnings ratio is currently 29.31, so the inverse or the earnings rate is 3.41%. The dividend yield is 1.93. So an expected yearly return over the coming 10 years would be 3.41 + 1.93 or 5.34% provided the price-earnings ratio stays the same and before investment costs.

Against the 5.34% yearly expected return on stock over the coming 10 years, the current 10-year Treasury bond yield is 2.43%.

The risk premium for stocks is 5.34 - 2.43 or 2.91%.

Peter K. -> Sandwichman ... , February 24, 2017 at 09:27 AM
Excellent points.

Think of the Dot.com stock bubble and obviously the epic housing bubble.

Sandwichman -> Peter K.... , February 24, 2017 at 09:41 AM
Taking the analysis back a bit further, I would point to the Greenspan rescue from the stock market crash of October 1987.

Peter K. -> Sandwichman ... , February 24, 2017 at 09:43 AM
Yes, thanks for your excellent insights.

DrDick -> anne... , February 24, 2017 at 11:51 AM
That is a great piece and another dagger in the heart of the "robots did it" nonsense.

Sandwichman -> DrDick... , February 24, 2017 at 02:07 PM
Unless one thinks of Laffer, Greenspan et al. as robots. Robots don't steal jobs -- CEOs armed with robots, think tanks, lobbyists and campaign contributions do.

Sandwichman -> Sandwichman ... , February 24, 2017 at 02:08 PM
...and Ayn Rand's "philosophy."

[Feb 26, 2017] No, Robots Aren't Killing the American Dream, it's neoliberal economics which are killing it

Feb 26, 2017 | economistsview.typepad.com
Peter K. : February 25, 2017 at 07:50 AM
https://www.nytimes.com/2017/02/20/opinion/no-robots-arent-killing-the-american-dream.html

No, Robots Aren't Killing the American Dream
By THE EDITORIAL BOARD

FEB. 20, 2017

Defenders of globalization are on solid ground when they criticize President Trump's threats of punitive tariffs and border walls. The economy can't flourish without trade and immigrants.

But many of those defenders have their own dubious explanation for the economic disruption that helped to fuel the rise of Mr. Trump.

At a recent global forum in Dubai, Christine Lagarde, head of the International Monetary Fund, said some of the economic pain ascribed to globalization was instead due to the rise of robots taking jobs. In his farewell address in January, President Barack Obama warned that "the next wave of economic dislocations won't come from overseas. It will come from the relentless pace of automation that makes a lot of good middle-class jobs obsolete."

Blaming robots, though, while not as dangerous as protectionism and xenophobia, is also a distraction from real problems and real solutions.

The rise of modern robots is the latest chapter in a centuries-old story of technology replacing people. Automation is the hero of the story in good times and the villain in bad. Since today's middle class is in the midst of a prolonged period of wage stagnation, it is especially vulnerable to blame-the-robot rhetoric.

And yet, the data indicate that today's fear of robots is outpacing the actual advance of robots. If automation were rapidly accelerating, labor productivity and capital investment would also be surging as fewer workers and more technology did the work. But labor productivity and capital investment have actually decelerated in the 2000s.

While breakthroughs could come at any time, the problem with automation isn't robots; it's politicians, who have failed for decades to support policies that let workers share the wealth from technology-led growth.

The response in previous eras was quite different.

When automation on the farm resulted in the mass migration of Americans from rural to urban areas in the early decades of the 20th century, agricultural states led the way in instituting universal public high school education to prepare for the future. At the dawn of the modern technological age at the end of World War II, the G.I. Bill turned a generation of veterans into college graduates.

When productivity led to vast profits in America's auto industry, unions ensured that pay rose accordingly.

Corporate efforts to keep profits high by keeping pay low were countered by a robust federal minimum wage and time-and-a-half for overtime.

Fair taxation of corporations and the wealthy ensured the public a fair share of profits from companies enriched by government investments in science and technology.

Productivity and pay rose in tandem for decades after World War II, until labor and wage protections began to be eroded. Public education has been given short shrift, unions have been weakened, tax overhauls have benefited the rich and basic labor standards have not been updated.

As a result, gains from improving technology have been concentrated at the top, damaging the middle class, while politicians blame immigrants and robots for the misery that is due to their own failures. Eroded policies need to be revived, and new ones enacted.

A curb on stock buybacks would help to ensure that executives could not enrich themselves as wages lagged.

Tax reform that increases revenue from corporations and the wealthy could help pay for retraining and education to protect and prepare the work force for foreseeable technological advancements.

Legislation to foster child care, elder care and fair scheduling would help employees keep up with changes in the economy, rather than losing ground.

Economic history shows that automation not only substitutes for human labor, it complements it. The disappearance of some jobs and industries gives rise to others. Nontechnology industries, from restaurants to personal fitness, benefit from the consumer demand that results from rising incomes in a growing economy. But only robust public policy can ensure that the benefits of growth are broadly shared.

If reforms are not enacted - as is likely with President Trump and congressional Republicans in charge - Americans should blame policy makers, not robots.

jonny bakho -> Peter K.... , February 25, 2017 at 10:42 AM
Robots may not be killing jobs but they drastically alter the types and location of jobs that are created. High pay unskilled jobs are always the first to be eliminated by technology. Low skill high pay jobs are rare and heading to extinction. Low skill low pay jobs are the norm. It sucks to lose a low skill job with high pay, but anyone who expected that to continue while continually voting against unions was foolish and a victim of their own poor planning, failure to acquire skills and failure to support unions. It is in their self interest to support safety net proposals that provide good pay for quality service. The enemy is not trade. The enemy is failure to invest in the future.

"Many working- and middle-class Americans believe that free-trade agreements are why their incomes have stagnated over the past two decades. So Trump intends to provide them with "protection" by putting protectionists in charge.
But Trump and his triumvirate have misdiagnosed the problem. While globalization is an important factor in the hollowing out of the middle class, so, too, is automation

Trump and his team are missing a simple point: twenty-first-century globalization is knowledge-led, not trade-led. Radically reduced communication costs have enabled US firms to move production to lower-wage countries. Meanwhile, to keep their production processes synced, firms have also offshored much of their technical, marketing, and managerial knowhow. This "knowledge offshoring" is what has really changed the game for American workers.

The information revolution changed the world in ways that tariffs cannot reverse. With US workers already competing against robots at home, and against low-wage workers abroad, disrupting imports will just create more jobs for robots.
Trump should be protecting individual workers, not individual jobs. The processes of twenty-first-century globalization are too sudden, unpredictable, and uncontrollable to rely on static measures like tariffs. Instead, the US needs to restore its social contract so that its workers have a fair shot at sharing in the gains generated by global openness and automation. Globalization and technological innovation are not painless processes, so there will always be a need for retraining initiatives, lifelong education, mobility and income-support programs, and regional transfers.

By pursuing such policies, the Trump administration would stand a much better chance of making America "great again" for the working and middle classes. Globalization has always created more opportunities for the most competitive workers, and more insecurity for others. This is why a strong social contract was established during the post-war period of liberalization in the West. In the 1960s and 1970s institutions such as unions expanded, and governments made new commitments to affordable education, social security, and progressive taxation. These all helped members of the middle class seize new opportunities as they emerged.
Over the last two decades, this situation has changed dramatically: globalization has continued, but the social contract has been torn up. Trump's top priority should be to stitch it back together; but his trade advisers do not understand this."

https://www.project-syndicate.org/commentary/trump-trade-policy-tariffs-by-richard-baldwin-2017-02

Peter K. : February 25, 2017 at 07:52 AM
http://econospeak.blogspot.com/2017/02/the-cutz-putz-bezzle-graphed-by-fred.html

FRIDAY, FEBRUARY 24, 2017

The "Cutz & Putz" Bezzle, Graphed by FRED

anne at Economist's View has retrieved a FRED graph that perfectly illustrates the divergence, since the mid-1990s, of net worth from GDP:

[graph]

The empty spaces between the red line and the blue line that open up after around 1995 are what John Kenneth Galbraith called "the bezzle" -- summarized by John Kay as "that increment to wealth that occurs during the magic interval when a confidence trickster knows he has the money he has appropriated but the victim does not yet understand that he has lost it."

In a chapter of The Great Crash, 1929, Galbraith wrote:

"In many ways the effect of the crash on embezzlement was more significant than on suicide. To the economist embezzlement is the most interesting of crimes. Alone among the various forms of larceny it has a time parameter. Weeks, months or years may elapse between the commission of the crime and its discovery. (This is a period, incidentally, when the embezzler has his gain and the man who has been embezzled, oddly enough, feels no loss. There is a net increase in psychic wealth.) At any given time there exists an inventory of undiscovered embezzlement in – or more precisely not in – the country's business and banks. This inventory – it should perhaps be called the bezzle – amounts at any moment to many millions of dollars. It also varies in size with the business cycle. In good times people are relaxed, trusting, and money is plentiful. But even though money is plentiful, there are always many people who need more. Under these circumstances the rate of embezzlement grows, the rate of discovery falls off, and the bezzle increases rapidly. In depression all this is reversed. Money is watched with a narrow, suspicious eye. The man who handles it is assumed to be dishonest until he proves himself otherwise. Audits are penetrating and meticulous. Commercial morality is enormously improved. The bezzle shrinks."

In the present case, the bezzle has resulted from an economic policy two step: tax cuts and Greenspan puts: cuts and puts.

[graph]

Peter K. -> Peter K.... , February 25, 2017 at 07:52 AM
Well done.
anne -> Peter K.... , February 25, 2017 at 08:12 AM
https://fred.stlouisfed.org/graph/?g=cOU6

January 15, 2017

Gross Domestic Product and Net Worth for Households & Nonprofit Organizations, 1952-2016

(Indexed to 1952)


https://fred.stlouisfed.org/graph/?g=cPq1

January 15, 2017

Gross Domestic Product and Net Worth for Households & Nonprofit Organizations, 1992-2016

(Indexed to 1992)

Peter K. : February 25, 2017 at 07:56 AM
http://www.alternet.org/story/148501/why_germany_has_it_so_good_--_and_why_america_is_going_down_the_drain

Why Germany Has It So Good -- and Why America Is Going Down the Drain

Germans have six weeks of federally mandated vacation, free university tuition, and nursing care. Why the US pales in comparison.

By Terrence McNally / AlterNet October 13, 2010

While the bad news of the Euro crisis makes headlines in the US, we hear next to nothing about a quiet revolution in Europe. The European Union, 27 member nations with a half billion people, has become the largest, wealthiest trading bloc in the world, producing nearly a third of the world's economy -- nearly as large as the US and China combined. Europe has more Fortune 500 companies than either the US, China or Japan.

European nations spend far less than the United States for universal healthcare rated by the World Health Organization as the best in the world, even as U.S. health care is ranked 37th. Europe leads in confronting global climate change with renewable energy technologies, creating hundreds of thousands of new jobs in the process. Europe is twice as energy efficient as the US and their ecological "footprint" (the amount of the earth's capacity that a population consumes) is about half that of the United States for the same standard of living.

Unemployment in the US is widespread and becoming chronic, but when Americans have jobs, we work much longer hours than our peers in Europe. Before the recession, Americans were working 1,804 hours per year versus 1,436 hours for Germans -- the equivalent of nine extra 40-hour weeks per year.

In his new book, Were You Born on the Wrong Continent?, Thomas Geoghegan makes a strong case that European social democracies -- particularly Germany -- have some lessons and models that might make life a lot more livable. Germans have six weeks of federally mandated vacation, free university tuition, and nursing care. But you've heard the arguments for years about how those wussy Europeans can't compete in a global economy. You've heard that so many times, you might believe it. But like so many things the media repeats endlessly, it's just not true.

According to Geoghegan, "Since 2003, it's not China but Germany, that colossus of European socialism, that has either led the world in export sales or at least been tied for first. Even as we in the United States fall more deeply into the clutches of our foreign creditors -- China foremost among them -- Germany has somehow managed to create a high-wage, unionized economy without shipping all its jobs abroad or creating a massive trade deficit, or any trade deficit at all. And even as the Germans outsell the United States, they manage to take six weeks of vacation every year. They're beating us with one hand tied behind their back."

Thomas Geoghegan, a graduate of Harvard and Harvard Law School, is a labor lawyer with Despres, Schwartz and Geoghegan in Chicago. He has been a staff writer and contributing writer to The New Republic, and his work has appeared in many other journals. Geoghegan ran unsuccessfully in the Democratic Congressional primary to succeed Rahm Emanuel, and is the author of six books including Whose Side Are You on, The Secret Lives of Citizens, and, most recently, Were You Born on the Wrong Continent?

...

ilsm -> Peter K.... , February 25, 2017 at 12:55 PM
While the US spends half the world's war money on a quarter of the world's economic activity, it falls further behind the EU, which with a third of the world's economic activity spends a fifth of the world's war money. Or 4% of GDP in the war trough versus 1.2%.

There is correlation with decline.

[Feb 20, 2017] The robot that takes your job should pay taxes, says Bill Gates

Feb 20, 2017 | qz.com
Robots are taking human jobs. But Bill Gates believes that governments should tax companies' use of them, as a way to at least temporarily slow the spread of automation and to fund other types of employment.

It's a striking position from the world's richest man and a self-described techno-optimist who co-founded Microsoft, one of the leading players in artificial-intelligence technology.

In a recent interview with Quartz, Gates said that a robot tax could finance jobs taking care of elderly people or working with kids in schools, for which needs are unmet and to which humans are particularly well suited. He argues that governments must oversee such programs rather than relying on businesses, in order to redirect the jobs to help people with lower incomes. The idea is not totally theoretical: EU lawmakers considered a proposal to tax robot owners to pay for training for workers who lose their jobs, though on Feb. 16 the legislators ultimately rejected it.

"You ought to be willing to raise the tax level and even slow down the speed" of automation, Gates argues. That's because the technology and business cases for replacing humans in a wide range of jobs are arriving simultaneously, and it's important to be able to manage that displacement. "You cross the threshold of job replacement of certain activities all sort of at once," Gates says, citing warehouse work and driving as some of the job categories that in the next 20 years will have robots doing them.

You can watch Gates' remarks in the video above. Below is a transcript, lightly edited for style and clarity. Quartz: What do you think of a robot tax? This is the idea that in order to generate funds for training of workers, in areas such as manufacturing, who are displaced by automation, one concrete thing that governments could do is tax the installation of a robot in a factory, for example.

Bill Gates: Certainly there will be taxes that relate to automation. Right now, the human worker who does, say, $50,000 worth of work in a factory, that income is taxed and you get income tax, social security tax, all those things. If a robot comes in to do the same thing, you'd think that we'd tax the robot at a similar level.

And what the world wants is to take this opportunity to make all the goods and services we have today, and free up labor, let us do a better job of reaching out to the elderly, having smaller class sizes, helping kids with special needs. You know, all of those are things where human empathy and understanding are still very, very unique. And we still deal with an immense shortage of people to help out there.

So if you can take the labor that used to do the thing automation replaces, and financially and training-wise and fulfillment-wise have that person go off and do these other things, then you're net ahead. But you can't just give up that income tax, because that's part of how you've been funding that level of human workers.
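
A toy version of the arithmetic Gates sketches here; the tax rates below are hypothetical placeholders, not figures from the interview:

# Gates' example: a worker doing $50,000 of factory work pays income and
# payroll taxes; a robot doing the same work currently pays none.
# Both rates below are assumed for illustration only.

wage = 50_000
income_tax_rate = 0.15      # assumed effective income tax rate
payroll_tax_rate = 0.124    # assumed combined Social Security rate

lost_revenue = wage * (income_tax_rate + payroll_tax_rate)

# "You'd think that we'd tax the robot at a similar level":
robot_tax = lost_revenue
print(f"Tax revenue lost per displaced worker: ${lost_revenue:,.0f}/yr")
print(f"Equivalent per-robot tax: ${robot_tax:,.0f}/yr")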

And so you could introduce a tax on robots

There are many ways to take that extra productivity and generate more taxes. Exactly how you'd do it, measure it, you know, it's interesting for people to start talking about now. Some of it can come on the profits that are generated by the labor-saving efficiency there. Some of it can come directly in some type of robot tax. I don't think the robot companies are going to be outraged that there might be a tax. It's OK.

Could you figure out a way to do it that didn't dis-incentivize innovation?

Well, at a time when people are saying that the arrival of that robot is a net loss because of displacement, you ought to be willing to raise the tax level and even slow down the speed of that adoption somewhat to figure out, "OK, what about the communities where this has a particularly big impact? Which transition programs have worked and what type of funding do those require?"

You cross the threshold of job-replacement of certain activities all sort of at once. So, you know, warehouse work, driving, room cleanup, there's quite a few things that are meaningful job categories that, certainly in the next 20 years, being thoughtful about that extra supply is a net benefit. It's important to have the policies to go with that.

People should be figuring it out. It is really bad if people overall have more fear about what innovation is going to do than they have enthusiasm. That means they won't shape it for the positive things it can do. And, you know, taxation is certainly a better way to handle it than just banning some elements of it. But [innovation] appears in many forms, like self-order at a restaurant - what do you call that? There's a Silicon Valley machine that can make hamburgers without human hands - seriously! No human hands touch the thing. [ Laughs ]

And you're more on the side that government should play an active role rather than rely on businesses to figure this out?

Well, business can't. If you want to do [something about] inequity, a lot of the excess labor is going to need to go help the people who have lower incomes. And so it means that you can amp up social services for old people and handicapped people and you can take the education sector and put more labor in there. Yes, some of it will go to, "Hey, we'll be richer and people will buy more things." But the inequity-solving part, absolutely government's got a big role to play there. The nice thing about taxation though, is that it really separates the issue: "OK, so that gives you the resources, now how do you want to deploy it?"

[Jan 15, 2017] Driverless Shuttles Hit Las Vegas: No Steering Wheels, No Brake Pedals (Zero Hedge)

Notable quotes:
"... But human life depends on whether the accident is caused by a human or not, and the level of intent. It isn't just a case of the price - the law is increasingly locking people up for driving negligence (rightly in my mind) Who gets locked up when the program fails? Or when the program chooses to hit one person and not another in a complex situation? ..."
Jan 15, 2017 | www.zerohedge.com

Submitted by Mike Shedlock via MishTalk.com,

Electric, driverless shuttles with no steering wheel and no brake pedal are now operating in Las Vegas.

There's a new thrill on the streets of downtown Las Vegas, where high- and low-rollers alike are climbing aboard what officials call the first driverless electric shuttle operating on a public U.S. street.

The oval-shaped shuttle began running Tuesday as part of a 10-day pilot program, carrying up to 12 passengers for free along a short stretch of the Fremont Street East entertainment district.

The vehicle has a human attendant and computer monitor, but no steering wheel and no brake pedals. Passengers push a button at a marked stop to board it.

The shuttle uses GPS, electronic curb sensors and other technology, and doesn't require lane lines to make its way.

"The ride was smooth. It's clean and quiet and seats comfortably," said Mayor Carolyn Goodman, who was among the first public officials to hop a ride on the vehicle developed by the French company Navya and dubbed Arma.

"I see a huge future for it once they get the technology synchronized," the mayor said Friday.

The top speed of the shuttle is 25 mph, but it's running about 15 mph during the trial, Navya spokesman Martin Higgins said.

Higgins called it "100 percent autonomous on a programmed route."

"If a person or a dog were to run in front of it, it would stop," he said.

Higgins said it's the company's first test of the shuttle on a public street in the U.S. A similar shuttle began testing in December at a simulated city environment at a University of Michigan research center.

The vehicle being used in public was shown earlier at the giant CES gadget show just off the Las Vegas Strip.

Las Vegas city community development chief Jorge Cervantes said plans call for installing transmitters at the Fremont Street intersections to communicate red-light and green-light status to the shuttle.

He said the city hopes to deploy several autonomous shuttle vehicles - by Navya or another company - later this year for a downtown loop with stops at shopping spots, restaurants, performance venues, museums, a hospital and City Hall.

At a cost estimated at $10,000 a month, Cervantes said the vehicle could be cost-efficient compared with a single bus and driver costing perhaps $1 million a year.
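A quick back-of-envelope check of those figures (a minimal sketch; both numbers are the article's own estimates, not independently verified):

    # Comparing the article's cost estimates: autonomous shuttle rental
    # versus a conventional bus with a driver.
    shuttle_per_year = 10_000 * 12        # $10,000 a month
    bus_per_year = 1_000_000              # "perhaps $1 million a year"

    print(f"Shuttle:      ${shuttle_per_year:,}/year")   # $120,000/year
    print(f"Bus + driver: ${bus_per_year:,}/year")
    print(f"Cost ratio:   {bus_per_year / shuttle_per_year:.1f}x")  # ~8.3x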

The company said it has shuttles in use in France, Australia, Switzerland and other countries that have carried more than 100,000 passengers in more than a year of service.

Don't Worry, Taxi Drivers

Don't worry, taxi drivers, because some of my readers say:

1. This will never work.
2. There is no demand.
3. Technology cost will be too high.
4. Insurance cost will be too high.
5. The unions will not allow it.
6. It will not be reliable.
7. Vehicles will be stolen.
8. It cannot handle snow, ice, or any adverse weather.
9. It cannot handle dogs, kids, or 80-year old men on roller skates who will suddenly veer into traffic causing a clusterfack that will last days.
10. This is just a test, and testing will never stop.

Real World Analysis

Those in the real world expect millions of long-haul truck driving jobs to vanish by 2020-2022, with massive numbers of taxi job losses happening simultaneously or soon thereafter.

Yes, I bumped up my timeline by two years (from 2022-2024 to 2020-2022) for this sequence of events.

My new timeline is not at all tremendously optimistic, given the rapid changes we have seen.

garypaul -> Sudden Debt •Jan 14, 2017 7:56 PM

You're getting carried away Sudden Debt. This robot stuff works great in the lab/test zones. Whether it is transplantable on a larger scale is still unknown. The interesting thing is, all my friends who are computer programmers/engineers/scientists are skeptical about this stuff, but all my friends who know nothing about computer science are absolutely wild about the "coming age of robots/AI". Go figure.

P.S. Of course the computer experts that are milking investment money with their start-ups will tell you it's great.

ChartreuseDog -> garypaul •Jan 14, 2017 9:15 PM

I'm an engineer (well, OK, an electrical engineering technical team lead). I've been an electronics and embedded computer engineer for about 4 decades.

This Vegas thing looks real - predefined route, transmitted signals for traffic lights, like light rail without the rails.

Overall, autonomous driving looks like it's almost here, if you like spinning LIDAR transceivers on the top of cars.

Highway driving is much closer to being solved, by the way. It's suburban and urban side streets that are the tough stuff.

garypaul -> ChartreuseDog •Jan 14, 2017 9:22 PM

"Highway driving is much closer to being solved".

That's my whole point. It's not an equation that you "solve". It's a million unexpected things. Last I heard, autonomous cars were indeed already crashing.

MEFOBILLS -> CRM114 •Jan 14, 2017 6:07 PM

Who gets sued? For how much? What about cases where a human driver wouldn't have killed anybody?

I've been in corporate discussions about this very topic. At a corporation that makes this technology by the way. The answer:

Insurance companies and the law will figure it out. Basically, if somebody gets run over, then the risk does not fall on the technology provider. Corporate rules can be structured to prevent piercing the corporate veil on this.

Human life does have a price. Insurance figures out how much it costs to pay off, and then jacks up rates accordingly.

CRM114 -> MEFOBILLS •Jan 14, 2017 6:20 PM

Thanks, that's interesting, although I must say that isn't a solution, it's a hope that someone else will come up with one.

But human life depends on whether the accident is caused by a human or not, and the level of intent. It isn't just a case of the price - the law is increasingly locking people up for driving negligence (rightly in my mind). Who gets locked up when the program fails? Or when the program chooses to hit one person and not another in a complex situation?

At the moment, corporate manslaughter laws are woefully inadequate. There's clearly one law for the rich and another for everyone else. Mary Barra would be wearing an orange jumpsuit otherwise.

I am unaware of any automatic machinery which operates in public areas and carries significant risk. Where accidents have happened in the past (e.g. elevators), either the machinery gets changed to remove the risk, or use is discontinued, or the public is separated from the machinery. I don't think any of these are possible for automatic vehicles.

TuPhat -> shovelhead •Jan 14, 2017 7:53 PM

Elevators have no choice of route, only how high or low you want to go. Autos have no comparison. Disney World has had many robotic attractions for decades but they are still only entertainment. Keep entertaining yourself Mish. When I see you on the road I will easily pass you by.

MEFOBILLS -> Hulk •Jan 14, 2017 6:12 PM

The future is here: see the movie "Obsolete" on Amazon. Free if you have Prime.

https://www.amazon.com/dp/B01M8MHZRH?autoplay=1&t=2936

Mr_Potatohead •Jan 14, 2017 6:08 PM

This is so exciting! Just think about the possibilities here... Shuttles could be outfitted with all kinds of great gizmos to identify their passengers based on RFID chips in credit cards, facial recognition software, voice prints, etc. Then, depending on who is controlling the software, the locks on the door could engage and the shuttle could drive around town dropping off its passengers at various locations eager for their arrival. Trivial to round up illegal aliens, parole violators, or people with standing warrants for arrest. Equally easy to nab people who are delinquent on their taxes, credit cards, mortgages, and spousal support. With a little info from Facebook or Google, a drop-off at the local attitude-adjustment facility might be desirable for those who frequent alternative media or have unhealthy interests in conspiracy theories or the activities at pizza parlors. Just think about the wonderful possibilities here!

Twee Surgeon -> PitBullsRule •Jan 14, 2017 6:29 PM

Will unemployed taxi drivers be allowed on the bus with a bottle of vodka and a gallon of gas with a rag in it?

When the robot trucks arrive at the robot factory and are unloaded by robot forklifts, who will buy the end products ?

It won't be truck drivers, taxi drivers or automated production line workers.

The only way massive automation would work is if some people were planning on a vastly reduced population in the future. It has happened before, they called it the Black Death. The Cultural and Economic consequences of it in Europe were enormous, world changing and permanent.

animalspirit •Jan 14, 2017 6:32 PM

$10K / month ... that's $120,000 / year.

For an autonomous golf cart?


[Jan 14, 2017] Weak Labor Market: President Obama Hides Behind Automation

Notable quotes:
"... The unionization rate has plummeted over the last four decades, but this is the result of policy decisions, not automation. Canada, a country with a very similar economy and culture, had no remotely comparable decline in unionization over this period. ..."
"... The unemployment rate and overall strength of the labor market is also an important factor determining workers' ability to secure their share of the benefits of productivity growth in wages and other benefits. When the Fed raises interest rates to deliberately keep workers from getting jobs, this is not the result of automation. ..."
"... It is also not automation alone that allows some people to disproportionately get the gains from growth. The average pay of doctors in the United States is over $250,000 a year because they are powerful enough to keep out qualified foreign doctors. They require that even established foreign doctors complete a U.S. residency program before they are allowed to practice medicine in the United States. If we had a genuine free market in physicians' services every MRI would probably be read by a much lower paid radiologist in India rather than someone here pocketing over $400,000 a year. ..."
Jan 14, 2017 | economistsview.typepad.com
anne : January 13, 2017 at 11:11 AM
http://cepr.net/blogs/beat-the-press/weak-labor-market-president-obama-hides-behind-automation

January 13, 2017

Weak Labor Market: President Obama Hides Behind Automation

It really is shameful how so many people, who certainly should know better, argue that automation is the factor depressing the wages of large segments of the workforce and that education (i.e. blame the ignorant workers) is the solution. President Obama takes center stage in this picture since he said almost exactly this in his farewell address earlier in the week. This misconception is repeated in Claire Cain Miller's New York Times column * today. Just about every part of the story is wrong.

Starting with the basic story of automation replacing workers, we have a simple way of measuring this process: it's called "productivity growth." And contrary to what the automation folks tell you, productivity growth has actually been very slow lately.

[Graph]

The figure above shows average annual rates of productivity growth for five year periods, going back to 1952. As can be seen, the pace of automation (productivity growth) has actually been quite slow in recent years. It is also projected by the Congressional Budget Office and most other forecasters to remain slow for the foreseeable future, so the prospect of mass displacement of jobs by automation runs completely counter to what we have been seeing in the labor market.
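For readers unfamiliar with the measure, here is a minimal sketch of what an average annual rate of productivity growth over a five-year period means (the index values below are invented for illustration, not actual BLS data):

    # Toy illustration of the measure Baker references: the compound
    # average annual growth in output per hour over a five-year window.
    def avg_annual_growth(start_index, end_index, years):
        return (end_index / start_index) ** (1.0 / years) - 1.0

    # Hypothetical output-per-hour index values five years apart
    rate = avg_annual_growth(start_index=100.0, end_index=103.0, years=5)
    print(f"Average annual productivity growth: {rate:.2%}")  # ~0.59% per year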

Perhaps more importantly the idea that productivity growth is bad news for workers is 180 degrees at odds with the historical experience. In the period from 1947 to 1973, productivity growth averaged almost 3.0 percent, yet the unemployment rate was generally low and workers saw rapid wage gains. The reason was that workers had substantial bargaining power, in part because of strong unions, and were able to secure the gains from productivity growth for themselves in higher living standards, including more time off in the form of paid vacation days and paid sick days. (Shorter work hours sustain the number of jobs in the face of rising productivity.)

The unionization rate has plummeted over the last four decades, but this is the result of policy decisions, not automation. Canada, a country with a very similar economy and culture, had no remotely comparable decline in unionization over this period.

The unemployment rate and overall strength of the labor market is also an important factor determining workers' ability to secure their share of the benefits of productivity growth in wages and other benefits. When the Fed raises interest rates to deliberately keep workers from getting jobs, this is not the result of automation.

It is also not automation alone that allows some people to disproportionately get the gains from growth. The average pay of doctors in the United States is over $250,000 a year because they are powerful enough to keep out qualified foreign doctors. They require that even established foreign doctors complete a U.S. residency program before they are allowed to practice medicine in the United States. If we had a genuine free market in physicians' services every MRI would probably be read by a much lower paid radiologist in India rather than someone here pocketing over $400,000 a year.

Similarly, automation did not make our patents and copyrights longer and stronger. These protectionist measures result in us paying over $430 billion a year for drugs that would likely cost one tenth of this amount in a free market. And automation did not force us to institutionalize rules that created an incredibly bloated financial sector with Wall Street traders and hedge fund partners pocketing tens of millions or even hundreds of millions a year. Nor did automation give us a corporate governance structure that allows even the most incompetent CEOs to rip off their companies and pay themselves tens of millions a year.

Yes, these and other topics are covered in my (free) book "Rigged: How Globalization and the Rules of the Modern Economy Were Structured to Make the Rich Richer." ** It is understandable that the people who benefit from this rigging would like to blame impersonal forces like automation, but it just ain't true and the people repeating this falsehood should be ashamed of themselves.

* https://www.nytimes.com/2017/01/12/upshot/in-obamas-farewell-a-warning-on-automations-perils.html

** http://deanbaker.net/images/stories/documents/Rigged.pdf

-- Dean Baker

anne -> anne... , January 13, 2017 at 10:46 AM
https://fred.stlouisfed.org/graph/?g=cmzG

January 4, 2016

Nonfarm Business Labor Productivity, * 1948-2016

* Output per hour of all persons

(Indexed to 1948)

https://fred.stlouisfed.org/graph/?g=cmzE

January 4, 2016

Nonfarm Business Labor Productivity, * 1948-2016

* Output per hour of all persons

(Percent change)

Fred C. Dobbs :
(Dang robots.)

A Darker Theme in Obama's Farewell: Automation Can Divide Us https://nyti.ms/2ioACof via @UpshotNYT
NYT - Claire Cain Miller - January 12, 2017

Underneath the nostalgia and hope in President Obama's farewell address Tuesday night was a darker theme: the struggle to help the people on the losing end of technological change.

"The next wave of economic dislocations won't come from overseas," Mr. Obama said. "It will come from the relentless pace of automation that makes a lot of good, middle-class jobs obsolete."

Donald J. Trump has tended to blame trade, offshoring and immigration. Mr. Obama acknowledged those things have caused economic stress. But without mentioning Mr. Trump, he said they divert attention from the bigger culprit.

Economists agree that automation has played a far greater role in job loss, over the long run, than globalization. But few people want to stop technological progress. Indeed, the government wants to spur more of it. The question is how to help those that it hurts.

The inequality caused by automation is a main driver of cynicism and political polarization, Mr. Obama said. He connected it to the racial and geographic divides that have cleaved the country post-election.

It's not just racial minorities and others like immigrants, the rural poor and transgender people who are struggling in society, he said, but also "the middle-aged white guy who, from the outside, may seem like he's got advantages, but has seen his world upended by economic and cultural and technological change."

Technological change will soon be a problem for a much bigger group of people, if it isn't already. Fifty-one percent of all the activities Americans do at work involve predictable physical work, data collection and data processing. These are all tasks that are highly susceptible to being automated, according to a report McKinsey published in July using data from the Bureau of Labor Statistics and O*Net to analyze the tasks that constitute 800 jobs.

Twenty-eight percent of work activities involve tasks that are less susceptible to automation but are still at risk, like unpredictable physical work or interacting with people. Just 21 percent are considered safe for now, because they require applying expertise to make decisions, do something creative or manage people.

The service sector, including health care and education jobs, is considered safest. Still, a large part of the service sector is food service, which McKinsey found to be the most threatened industry, even more than manufacturing. Seventy-three percent of food service tasks could be automated, it found.

In December, the White House released a report on automation, artificial intelligence and the economy, warning that the consequences could be dire: "The country risks leaving millions of Americans behind and losing its position as the global economic leader."

No one knows how many people will be threatened, or how soon, the report said. It cited various researchers' estimates that from 9 percent to 47 percent of jobs could be affected.

In the best case, it said, workers will have higher wages and more leisure time. In the worst, there will be "significantly more workers in need of assistance and retraining as their skills no longer match the demands of the job market."

Technology delivers its benefits and harms in an unequal way. That explains why even though the economy is humming, it doesn't feel like it for a large group of workers.

Education is the main solution the White House advocated. When the United States moved from an agrarian economy to an industrialized economy, it rapidly expanded high school education: By 1951, the average American had 6.2 more years of education than someone born 75 years earlier. The extra education enabled people to do new kinds of jobs, and explains 14 percent of the annual increases in labor productivity during that period, according to economists.

Now the country faces a similar problem. Machines can do many low-skilled tasks, and American children, especially those from low-income and minority families, lag behind their peers in other countries educationally.

The White House proposed enrolling more 4-year-olds in preschool and making two years of community college free for students, as well as teaching more skills like computer science and critical thinking. For people who have already lost their jobs, it suggested expanding apprenticeships and retraining programs, on which the country spends half what it did 30 years ago.

Displaced workers also need extra government assistance, the report concluded. It suggested ideas like additional unemployment benefits for people who are in retraining programs or live in states hardest hit by job loss. It also suggested wage insurance for people who lose their jobs and have to take a new one that pays less. Someone who made $18.50 an hour working in manufacturing, for example, would take an $8 pay cut if he became a home health aide, one of the jobs that is growing most quickly.

President Obama, in his speech Tuesday, named some other policy ideas for dealing with the problem: stronger unions, an updated social safety net and a tax overhaul so that the people benefiting most from technology share some of their earnings.

The Trump administration probably won't agree with many of those solutions. But the economic consequences of automation will be one of the biggest problems it faces.

[Jan 11, 2017] Truck drivers would be at risk due to the growing utilization of heavy-duty vehicles operated via artificial intelligence

Jan 11, 2017 | www.whitehouse.gov

[A study published late last month by the White House Council of Economic Advisers (CEA)], released Dec. 20, said the jobs of between 1.34 million and 1.67 million truck drivers would be at risk due to the growing utilization of heavy-duty vehicles operated via artificial intelligence. That would equal 80 to 100 percent of all driver jobs listed in the CEA report, which is based on May 2015 data from the Bureau of Labor Statistics, a unit of the Department of Labor. There are about 3.4 million commercial truck drivers currently operating in the U.S., according to various estimates [DC Velocity]. "The Council emphasized that its calculations excluded the number or types of new jobs that may be created as a result of this potential transition. It added that any changes could take years or decades to materialize because of a broad lag between what it called 'technological possibility' and widespread adoption."
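The percentages follow from the report's own numbers (a quick sketch; the roughly 1.67 million denominator of driver jobs listed in the CEA report is inferred from the "80 to 100 percent" claim, not quoted directly):

    # Checking the arithmetic behind "80 to 100 percent of all driver
    # jobs listed in the CEA report." The denominator is inferred.
    at_risk_low, at_risk_high = 1.34e6, 1.67e6
    jobs_listed = 1.67e6   # inferred: the high estimate corresponds to 100%

    print(f"Low estimate:  {at_risk_low / jobs_listed:.0%}")   # 80%
    print(f"High estimate: {at_risk_high / jobs_listed:.0%}")  # 100%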


[Jan 06, 2017] Artificial Intelligence: Putting The AI In Fail

Notable quotes:
"... As with the most cynical (or deranged) internet hypesters, the current "AI" hype has a grain of truth underpinning it. Today neural nets can process more data, faster. Researchers no longer habitually tweak their models. Speech recognition is a good example: it has been quietly improving for three decades. But the gains nowhere match the hype: they're specialised and very limited in use. So not entirely useless, just vastly overhyped . ..."
"... "What we have seen lately, is that while systems can learn things they are not explicitly told, this is mostly in virtue of having more data, not more subtlety about the data. So, what seems to be AI, is really vast knowledge, combined with a sophisticated UX, " one veteran told me. ..."
"... But who can blame them for keeping quiet when money is suddenly pouring into their backwater, which has been unfashionable for over two decades, ever since the last AI hype collapsed like a souffle? What's happened this time is that the definition of "AI" has been stretched so that it generously encompasses pretty much anything with an algorithm. Algorithms don't sound as sexy, do they? They're not artificial or intelligent. ..."
"... The bubble hasn't yet burst because the novelty examples of AI haven't really been examined closely (we find they are hilariously inept when we do), and they're not functioning services yet. ..."
"... Here I'll offer three reasons why 2016's AI hype will begin to unravel in 2017. That's a conservative guess – much of what is touted as a breakthrough today will soon be the subject of viral derision, or the cause of big litigation. ..."
Jan 04, 2017 | www.zerohedge.com
Submitted by Andrew Orlowski via The Register,

"Fake news" vexed the media classes greatly in 2016, but the tech world perfected the art long ago. With "the internet" no longer a credible vehicle for Silicon Valley's wild fantasies and intellectual bullying of other industries – the internet clearly isn't working for people – "AI" has taken its place.

Almost everything you read about AI is fake news. The AI coverage comes from a media willing itself into the mind of a three year old child, in order to be impressed.

For example, how many human jobs did AI replace in 2016? If you gave professional pundits a multiple choice question listing these three answers: 3 million, 300,000 and none, I suspect very few would choose the correct answer, which is of course "none".

Similarly, if you asked tech experts which recent theoretical or technical breakthrough could account for the rise in coverage of AI, even fewer would be able to answer correctly that "there hasn't been one".

As with the most cynical (or deranged) internet hypesters, the current "AI" hype has a grain of truth underpinning it. Today neural nets can process more data, faster. Researchers no longer habitually tweak their models. Speech recognition is a good example: it has been quietly improving for three decades. But the gains nowhere match the hype: they're specialised and very limited in use. So not entirely useless, just vastly overhyped . As such, it more closely resembles "IoT", where boring things happen quietly for years, rather than "Digital Transformation", which means nothing at all.

The more honest researchers acknowledge as much to me, at least off the record.

"What we have seen lately, is that while systems can learn things they are not explicitly told, this is mostly in virtue of having more data, not more subtlety about the data. So, what seems to be AI, is really vast knowledge, combined with a sophisticated UX, " one veteran told me.

But who can blame them for keeping quiet when money is suddenly pouring into their backwater, which has been unfashionable for over two decades, ever since the last AI hype collapsed like a souffle? What's happened this time is that the definition of "AI" has been stretched so that it generously encompasses pretty much anything with an algorithm. Algorithms don't sound as sexy, do they? They're not artificial or intelligent.

The bubble hasn't yet burst because the novelty examples of AI haven't really been examined closely (we find they are hilariously inept when we do), and they're not functioning services yet. For example, have a look at the amazing "neural karaoke" that researchers at the University of Toronto developed. Please do: it made the worst Christmas record ever.

It's very versatile: it can write the worst non-Christmas songs you've ever heard, too.

Neural karaoke. The worst song ever, guaranteed

Here I'll offer three reasons why 2016's AI hype will begin to unravel in 2017. That's a conservative guess – much of what is touted as a breakthrough today will soon be the subject of viral derision, or the cause of big litigation. There are everyday reasons that show how, once an AI application is out of the lab/PR environment where it's been nurtured and pampered like a spoiled infant, it finds the real world a lot more unforgiving. People don't actually want it.

3. Liability: So you're Too Smart To Fail?

Nine years ago, the biggest financial catastrophe since the 1930s hit the world, and precisely zero bankers went to jail for it. Many kept their perks and pensions. People aren't so happy about this.

So how do you think an all purpose "cat ate my homework" excuse is going to go down with the public, or shareholders? A successfully functioning AI – one that did what it said on the tin – would pose serious challenges to criminal liability frameworks. When something goes wrong, such as a car crash or a bank failure, who do you put in jail? The Board, the CEO or the programmer, or both? "None of the above" is not going to be an option this time.

I believe that this factor alone will keep "AI" out of critical decision making where lives and large amounts of other people's money are at stake. For sure, some people will try to deploy algorithms in important cases. But ultimately there are victims: the public, and shareholders, and the appetite of the public to hear another excuse is wearing very thin. Let's check in on how the Minority Report-style precog detection is going. Actually, let's not.

After "Too Big To Fail", nobody is going to buy "Too Smart to Fail".

2. The Consumer Doesn't Want It

2016 saw "AI" being deployed on consumers experimentally, tentatively, and the signs are already there for anyone who cares to see. It hasn't been a great success.

The most hyped manifestation of better language processing is chatbots. Chatbots are the new UX, many, including Microsoft and Facebook, hope. Oren Etzoni at Paul Allen's Institute predicts it will become a "trillion dollar industry," but he also admits "my 4 YO is far smarter than any AI program I ever met."

Hmmm, thanks Oren. So what you're saying is that we must now get used to chatting with someone dumber than a four year old, just because they can make software act dumber than a four year old. Bzzt. Next...

Put it this way. How many times have you rung a call center recently and wished that you'd spoken to someone even more thick, or rendered by processes even more incapable of resolving the dispute, than the minimum-wage offshore staffer who you actually spoke with? When the chatbots come, as you close the [X] on another fantastically unproductive hour wasted, will you cheerfully console yourself with the thought: "That was terrible, but least MegaCorp will make higher margins this year! They're at the cutting edge of AI!"?

In a healthy and competitive services marketplace, bad service means lost business. The early adopters of AI chatbots will discover this the hard way. There may be no later adopters once the early adopters have become internet memes for terrible service.

The other area where apparently impressive feats of "AI" were unleashed upon the public was subtle. Unbidden, unwanted AI "help" is starting to pop out at us. Google scans your personal photos and later, if you have an Android phone, will pop up "helpful" reminders of where you have been. People almost universally find this creepy. We could call this a "Clippy the Paperclip" problem, after the intrusive Office Assistant that only wanted to help. Clippy is going to haunt AI in 2017. This is actually going to be worse than anybody inside the AI cult quite realises.

The successful web services today so far are based on an economic exchange. The internet giants slurp your data, and give you free stuff. We haven't thought more closely about what this data is worth. For the consumer, however, these unsought AI intrusions merely draw our attention to how intrusive the data slurp really is. It could wreck everything. Has nobody thought of that?

1. AI is a make-believe world populated by mad people, and nobody wants to be part of it

The AI hype so far has relied on a collusion between two groups of people: a supply side and a demand side. The technology industry, the forecasting industry and researchers provide a limitless supply of post-human hype.

The demand comes from the media and political classes, now unable or unwilling to engage in politics with the masses, to indulge in wild fantasies about humans being replaced by robots. For me, the latter reflects a displacement activity: the professions are already surrendering autonomy in their work to technocratic managerialism. They've made robots out of themselves – and now fear being replaced by robots. (Pass the hankie, I'm distraught.)

There's a cultural gulf between AI's promoters and the public that Asperger's alone can't explain. There's no polite way to express this, but AI belongs to California's inglorious tradition of generating cults, and incubating cult-like thinking. Most people can name a few from the hippy or post-hippy years – EST, or the Family, or the Symbionese Liberation Army – but actually, Californians have been at it longer than anyone realises.

There's nothing at all weird about Mark. Move along and please tip the Chatbot.

Today, that spirit lives on in Silicon Valley, where creepy billionaire nerds like Mark Zuckerberg and Elon Musk can fulfil their desires to "play God and be amazed by magic", the two big things they miss from childhood. Look at Zuckerberg's house, for example. What these people want is not what you or I want. I'd be wary of them running an after school club.

Out in the real world, people want better service, not worse service; more human and less robotic exchanges with services, not more robotic "post-human" exchanges. But nobody inside the AI cult seems to worry about this. They think we're as amazed as they are. We're not.

The "technology leaders" driving the AI are doing everything they can to alert us to the fact no sane person would task them with leading anything. For that, I suppose, we should be grateful.

Francis Marx Jan 4, 2017 9:13 PM

I worked with robots for years and people don't realize how flawed and "go-wrong" things occur. Companies typically like the idea of not hiring humans, but in essence the robotic vision is not what it ought to be.

kiss of roses Francis Marx Jan 4, 2017 9:15 PM

I have designed digital based instrumentation and sensors. One of our senior EE designers had a saying that I loved: "Give an electron half a chance and it will fuck you every time."

Zarbo Jan 4, 2017 9:10 PM

I've been hearing the same thing since the first Lisp program crawled out of the digital swamp.

Lessee, that would be about 45 years I've listened to the same stories and fairy tales. I'll take a wait and see attitude like always.

The problem is very complex and working on pieces of it can be momentarily impressive to a press corpse (pun intended) with "the minds of a 3-year old, whether they willed it or not". (fixed that for you).

I'll quote an old saw, Lucke's First Law: "Ignorance simplifies any problem".

Just wait for the free money to dry up and the threat of AI will blow away (for a while longer) with the bankers dust.

cherry picker Jan 4, 2017 9:13 PM

It's all a big if...then issue.

There are some great programmers out there, but in the end it is a lot more than programming.

Humans have something inherent that machines will never be able to emulate in its true form, such as emotion, determination, true inspiration, and the ability to read moods and react accordingly, including taking clumps of information and instantly finding similar memories in our brains.

Automation has a long way to go before it can match a human being, says a lot for whoever designed us, doesn't it?

[Jan 04, 2017] Frankenstein's Children - Crooked Timber

Notable quotes:
"... When Stanislaw Lem launched a general criticism of Western Sci-Fi, he specifically exempted Philip K Dick, going so far as to refer to him as "a visionary among charlatans." ..."
"... While I think the 'OMG SUPERINTELLIGENCE' crowd are ripe for mockery, this seemed very shallow and wildly erratic, and yes, bashing the entirety of western SF seems so misguided it would make me question the rest of his (many, many) proto-arguments if I'd not done so already. ..."
"... Charles Stross's Rule 34 has about the only AI I can think of from SF that is both dangerous and realistic. ..."
"... Solaris and Stalker notwithstanding, Strugatsky brothers + Stanislaw Lem ≠ Andrei Tarkovsky. ..."
"... For offbeat Lem, I always found "Fiasco" and his Scotland Yard parody, "The Investigation," worth exploring. I'm unaware how they've been received by Polish and Western critics and readers, but I found them clever. ..."
"... Actually existing AI and leading-edge AI research are overwhelmingly not about pursuing "general intelligence* a la humanity." They are about performing tasks that have historically required what we historically considered to be human intelligence, like winning board games or translating news articles from Japanese to English. ..."
"... Actual AI systems don't resemble brains much more than forklifts resemble Olympic weightlifters. ..."
"... Talking about the risks and philosophical implications of the intellectual equivalent of forklifts - another wave of computerization - either lacks drama or requires far too much background preparation for most people to appreciate the drama. So we get this stuff about superintelligence and existential risk, like a philosopher wanted to write about public health but found it complicated and dry, so he decided to warn how utility monsters could destroy the National Health Service. It's exciting at the price of being silly. (And at the risk of other non-experts not realizing it's silly.) ..."
"... *In fact I consider "general intelligence" to be an ill-formed goal, like "general beauty." Beautiful architecture or beautiful show dogs? And beautiful according to which traditions? ..."
Jan 04, 2017 | crookedtimber.org
Frankenstein's Children

by Henry on December 30, 2016

This talk by Maciej Ceglowski (who y'all should be reading if you aren't already) is really good on silly claims by philosophers about AI, and how they feed into Silicon Valley mythology. But there's one claim that seems to me to be flat out wrong:

We need better scifi! And like so many things, we already have the technology. This is Stanislaw Lem, the great Polish scifi author. English-language scifi is terrible, but in the Eastern bloc we have the goods, and we need to make sure it's exported properly. It's already been translated well into English, it just needs to be better distributed. What sets authors like Lem and the Strugatsky brothers above their Western counterparts is that these are people who grew up in difficult circumstances, experienced the war, and then lived in a totalitarian society where they had to express their ideas obliquely through writing. They have an actual understanding of human experience and the limits of Utopian thinking that is nearly absent from the west. There are some notable exceptions – Stanley Kubrick was able to do it – but it's exceptionally rare to find American or British scifi that has any kind of humility about what we as a species can do with technology.

He's not wrong on the delights of Lem and the Strugatsky brothers, heaven forbid! (I had a great conversation with a Russian woman some months ago about the Strugatskys – she hadn't realized that Roadside Picnic had been translated into English, much less that it had given rise to its own micro-genre). But wrong on US and (especially) British SF. It seems to me that fiction on the limits of utopian thinking and the need for humility about technology is vast. Plausible genealogies for sf stretch back, after all, to Shelley's utopian-science-gone-wrong Frankenstein (rather than Hugo Gernsback). Some examples that leap immediately to mind:

Ursula Le Guin and the whole literature of ambiguous utopias that she helped bring into being with The Dispossessed – see e.g. Ada Palmer, Kim Stanley Robinson's Mars series &c.

J.G. Ballard, passim

Philip K. Dick (passim, but if there's a better description of how the Internet of Things is likely to work out than the door demanding money to open in Ubik I haven't read it).

Octavia Butler's Parable books. Also, Jack Womack's Dryco books (this interview with Womack could have been written yesterday).

William Gibson (passim, but especially "The Gernsback Continuum" and his most recent work. "The street finds its own uses for things" is a specifically and deliberately anti-tech-utopian aesthetic).

M. John Harrison – Signs of Life and the Kefahuchi Tract books.

Paul McAuley (most particularly Fairyland – also his most recent Something Coming Through and Into Everywhere, which mine the Roadside Picnic vein of brain-altering alien trash in some extremely interesting ways).

Robert Charles Wilson, Spin. The best SF book I've ever read on how small human beings and all their inventions are from a cosmological perspective.

Maureen McHugh's China Mountain Zhang.

Also, if it's not cheating, Francis Spufford's Red Plenty (if Kim Stanley Robinson describes it as a novel in the SF tradition, who am I to disagree, especially since it is all about the limits of capitalism as well as communism).

I'm sure there's plenty of other writers I could mention (feel free to say who they are in comments). I'd also love to see more translated SF from the former Warsaw Pact countries, if it is nearly as good as the Strugatskys' material which has appeared. Still, I think that Ceglowski's claim is wrong. The people I mention above aren't peripheral to the genre under any reasonable definition, and they all write books and stories that do what Ceglowski thinks is only very rarely done. He's got some fun reading ahead of him.

Henry Farrell 12.30.16 at 4:52 pm

Also Linda Nagata's Red series come to think of it – unsupervised machine learning processes as ambiguous villain.

Prithvi 12.30.16 at 4:59 pm

When Stanislaw Lem launched a general criticism of Western Sci-Fi, he specifically exempted Philip K Dick, going so far as to refer to him as "a visionary among charlatans."

Jake Gibson 12.30.16 at 5:05 pm ( 3 )

You could throw in Pohl's Man Plus. The twist at the end being the narrator is an AI that has secretly promoted human expansion as a means of its own self-preservation.

Doctor Memory 12.30.16 at 5:42 pm

Prithvi: Dick, sadly, returned the favor by claiming that Lem was obviously a pseudonym used by the Polish government to disseminate communist propaganda.

Gabriel 12.30.16 at 5:54 pm ( 5 )

While I think the 'OMG SUPERINTELLIGENCE' crowd are ripe for mockery, this seemed very shallow and wildly erratic, and yes, bashing the entirety of western SF seems so misguided it would make me question the rest of his (many, many) proto-arguments if I'd not done so already.

Good for a few laughs, though.

Mike Schilling 12.30.16 at 6:13 pm

  1. Heinlein's Solution Unsatisfactory predicted the nuclear stalemate in 1941.
  2. Jack Williamson's With Folded Hands was worried about technology making humans obsolete back in 1947.
  3. In 1972, Asimov's The Gods Themselves presented a power generation technology that if continued would destroy the world, and a society too complacent and lazy to acknowledge that.

All famous stories by famous Golden Age authors.

jdkbrown 12.30.16 at 6:27 pm ( 7 )

"silly claims by philosophers about AI"

By some philosophers!

Brett 12.30.16 at 7:33 pm

Iain M. Banks' Culture Series is amazing. My personal favorite from it is "The Hydrogen Sonata." The main character has two extra arms grafted onto her body so she can play an unplayable piece of music. Also, the sentient space ships have very silly names. Mainly it's about transcendence, of sorts and how societies of different tech levels mess with each other, often without meaning to do so.

Matt 12.30.16 at 7:48 pm ( 9 )

Most SF authors aren't interested in trying to write about AI realistically.

It's harder to write and for most readers it's also harder to engage with. Writing a brilliant tale about realistic ubiquitous AI today is like writing the screenplay for The Social Network in 1960: even if you could see the future that clearly and write a drama native to it, the audience-circa-1960 will be more confused than impressed. They're not natives yet. Until they are natives of that future, the most popular tales of the future are going to really be about the present day with set dressing, the mythical Old West of the US with set dressing, perhaps the Napoleonic naval wars with set dressing.

Charles Stross's Rule 34 has about the only AI I can think of from SF that is both dangerous and realistic. It's not angry or yearning for freedom, it suffers from only modest scope creep in its mission, and it keeps trying to fulfill its core mission directly. That's rather than by first taking over the world as Bostrom, Yudkowsky, etc. assert a truly optimal AI would do. To my disappointment but nobody's surprise, the book was not the sort of runaway seller that drives the publisher to beg for sequels.

stevenjohnson 12.30.16 at 9:07 pm

Yes, well, trying to read all that was a nasty reminder how utterly boring stylish and cool gets when confronted with a real task. Shorter version: One hand on the plug beats twice the smarts in a box. It was all too tedious to bear, but skimming over it leaves the impression the dude never considered whether programs or expert systems that achieve superhuman levels of skill in particular applications may be feasible. Too much like what's really happening?

Intelligence, if it's anything is speed and range of apprehension of surroundings, and skill in reasoning. But reason is nothing if it's not instrumental. The issue of what an AI would want is remarkably unremarked, pardon the oxymoron. Pending an actual debate on this, perhaps fewer pixels should be marshaled, having mercy on our overworked LEDs?

As to the simulation of brains a la Ray Kurzweil, presumably producing artificial minds like fleshy brains do? This seems nowhere near at hand, not least because people seem to think simulating a brain means creating something that processes inputs to produce outputs, which collectively are like well, I'm sure they're thinking they're thinking about human minds in this scheme. But it seems to me that the brain is a regulatory organ in the body. As such, it is first about producing regulatory outputs designed to maintain a dynamic equilibrium (often called homeostasis), then revising the outputs in light of inputs from the rest of the body and the outside world so as to maintain the homeostasis.

I don't remember being an infant but its brain certainly seems more into doing things like putting its thumb in its eye than producing anything that reminds one of Hamlet's "paragon of animals" monologue. Kurzweil may be right that simulating the brain proper may soon be in grasp, but also simulating the other organs' interactions with the brain, and the sensory simulation of an outside universe, are a different order of computational requirements, I think. Given the amount of learning a human brain has to do to produce a useful human mind, though, I don't think we can omit these little items.

As to the OP, of course the OP is correct about the widespread number of dystopian fictions (utopian ones are the rarities.) Very little SF is being published in comparison to fantasy currently, and most of that is being produced by writers who are very indignant at being expected to tell the difference, much less respect it. It is a mystery as to why this gentleman thought technology was a concern in much current SF at all.

I suspect it's because he has a very limited understanding of fiction, or, possibly, of people in the real world, as opposed to people in his worldview. It is instead amazing how much the common ruck of SF "fails" to realize how much things will change, how people and their lives somehow stay so much the same, despite all the misleading trappings pretending to represent technological changes. This isn't quite the death sentence on the genre it would be if accepted at face value, since a lot of SF is directly addressing now in the first place. It is very uncommon for an SF piece to be a futurological thesis, no matter how many literati rant about the tedium of futurological theses. I suspect the "limits of utopian thinking" really only come in as a symptom of a reactionary crank. "People with newfangled book theories have been destroying the world since the French Revolution" type stuff.

The references to Lem and the Strugatski brothers strongly reinforce this. Lem of course found his Poland safe from transgressing the limits of utopian thinking by the end of his life. "PiS on his grave" sounds a little rude, but no doubt it is a happy and just ending for him. The brothers of course did their work in print, but the movie version of "Hard to Be a God" helps me to see myself the same way as those who have gone beyond the limits of utopian thoughts would see me: As an extra in the movie.

Chris Bertram 12.30.16 at 9:12 pm ( 11 )

Not sure if this is relevant, but John Crowley also came up in the Red Plenty symposium (which I've just read, along with the novel, 4 years late). Any good?

Ben 12.30.16 at 10:07 pm Peter. Motherfuckin. Watts.

L2P 12.30.16 at 10:42 pm ( 13 )

John Crowley of Aegypt? He's FANTASTIC. Little, Big and Aegypt are possibly the best fantasy novels of the past 30 years. But he's known for "hard fantasy," putting magic into our real world in a realistic, consistent, and plausible way, with realistic, consistent and plausible characters being affected. If you're looking for something about the limits of technology and utopian thinking, I'm not sure his works are a place to look.

Mike 12.31.16 at 12:25 am

I second Watts and Nagata. Also Ken Macleod, Charlie Stross, Warren Ellis and Chuck Wendig.

Lee A. Arnold 12.31.16 at 1:10 am ( 15 )

This is beside the main topic, but Ceglowski writes at Premise 2, "If we knew enough, and had the technology, we could exactly copy its [i.e. the brain's] structure and emulate its behavior with electronic components. This is the premise that the mind arises out of ordinary physics. For most of us, this is an easy premise to accept."

The phrase "most of us" may refer to Ceglowski's friends in the computer community, but it ought to be noted that this premise is questioned not only by Penrose. You don't have to believe in god or the soul to be a substance dualist, or even an idealist, although these positions are currently out of fashion. It could be that the mind does not arise out of ordinary physics, but that ordinary physics arises out of the mind, and that problems like "Godel's disjunction" will remain permanently irresolvable.

Dr. Hilarius 12.31.16 at 3:33 am

Thanks to the OP for mentioning Paul McAuley, a much underappreciated author. Fairyland is grim and compelling.

JimV 12.31.16 at 4:33 am ( 17 )

"Most of us" includes the vast majority of physicists, because in millions of experiments over hundreds of years, no forces or particles have been discovered which make dualism possible. Of course, like the dualists' gods, these unknown entities might be hiding, but after a while one concludes Santa Claus is not real.

As for Godel, I look at it like this: consider an infinite subset of the integers, randomly selected. There might be some coincidental pattern or characteristic of the numbers in that set (e.g., no multiples of both 17 and 2017), but since the set is infinite, it would be impossible to prove. Hence the second premise of his argument (that there are undecidable truths) is the correct one.

Finally, the plausibility of Ceglowski's statement seems evident to me from this fact:

if a solution exists (in some solution space), then given enough time, a random search will find it, and in fact will, on average over all solution spaces, outperform all other possible algorithms. So by trial and error (especially when aided by collaboration and memory) anything achievable can be accomplished – e.g., biological evolution. See "AlphaGo" for another proof-of-concept example.

(We have had this discussion before. I guess we'll all stick to our conclusions. I read Penrose's "The Emperor's New Mind" with great respect for Penrose, but found it very unconvincing, especially Searle's Chinese-Room argument, which greater minds than mine have since debunked.)
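A minimal sketch of the trial-and-error point in the comment above (the target predicate and the finite search space are invented for illustration):

    import random

    # Toy random search: if a solution exists in a finite space, blind
    # sampling will eventually hit it. Target and space are invented.
    def random_search(is_solution, space, rng):
        tries = 0
        while True:
            candidate = rng.choice(space)
            tries += 1
            if is_solution(candidate):
                return candidate, tries

    rng = random.Random(2017)
    found, tries = random_search(lambda x: x == 7919, range(10_000), rng)
    print(f"Found {found} after {tries} random tries")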

Lee A. Arnold 12.31.16 at 10:01 am

"Substance dualism" would not be proven by the existence of any "forces or particles" which would make that dualism possible! If such were discovered, they would be material. "If a solution exists", it would be material. The use of the word "substance" in "substance dualism" is misleading.

One way to look at it, is the problem of the existence of the generation of form. Once we consider the integers, or atoms, or subatomic particles, we have already presupposed form. Even evolution starts somewhere. Trial and error, starting from what?

There are lots of different definitions, but for me, dualism wouldn't preclude the validity of science nor the expansion of scientific knowledge.

I think one way in, might be to observe the continued existence of things like paradox, complementarity, uncertainty principles, incommensurables. Every era of knowledge has obtained them, going back to the ancients. The things in these categories change; sometimes consideration of a paradox leads to new science.

But then, the new era has its own paradoxes and complementarities. Every time! Yet there is no "science" of this historical regularity. Why is that?

Barry 12.31.16 at 2:33 pm ( 19 )

In general, when some celebrity (outside of SF) claims that 'Science Fiction doesn't cover [X]', they are just showing off their ignorance.

Kiwanda 12.31.16 at 3:14 pm

"They have an actual understanding of human experience and the limits of Utopian thinking that is nearly absent from the west. "

Oh, please. Suffering is not the only path to wisdom.

After a long article discounting "AI risk", it's a little odd to see Ceglowski point to Kubrick. HAL was a fine example of a failure to design an AI with enough safety factors in its motivational drives, leading to a "nervous breakdown" due to unforeseen internal conflicts, and fatal consequences. Although I suppose killing only a few people (was it?) isn't on the scale of interest.

Ceglowski's skepticism of AI risk suggests that the kind of SF he would find plausible is "after huge effort to create artificial intelligence, nothing much happens". Isn't that what the appropriate "humility about technology" would be?

I think Spin, or maybe a sequel, ends up with [spoiler] "the all-powerful aliens are actually AIs".

Re AI-damns-us-all SF, Harlan Ellison's I Have No Mouth and I Must Scream is a nice example.

William Timberman 12.31.16 at 5:14 pm ( 21 )

Mapping the unintended consequences of recent breakthroughs in AI is turning into a full-time job, one which neither pundits nor government agencies seem to have the chops for.

If it's not exactly the Singularity that we're facing (laugh while you can, monkey boy), it does at least seem to be a tipping point of sorts. Maybe fascism, nuclear war, global warming, etc., will interrupt our plunge into the panopticon before it gets truly organized, but in the meantime, we've got all sorts of new imponderables which we must nevertheless ponder.

Is that a bad thing? If it means no longer sitting on folding chairs in cinder block basements listening to interminable lectures on how to recognize pre-revolutionary conditions, or finding nothing on morning radio but breathless exhortations to remain ever vigilant against the nefarious schemes of criminal Hillary and that Muslim Socialist Negro Barack HUSSEIN Obama, then I'm all for it, bad thing or not.

Ronnie Pudding 12.31.16 at 5:20 pm

I love Red Plenty, but that's pretty clearly a cheat.

"It should also be read in the context of science fiction, historical fiction, alternative history, Soviet modernisms, and steampunk."

Very weak grounds on which to label it SF.

Neville Morley 12.31.16 at 5:40 pm ( 23 )

Another author in the Le Guin tradition, whom I loved when I first read her early books: Mary Gentle's Golden Witchbreed and Ancient Light, meditating on limits and consequences of advanced technology through exploration of a post-apocalypse alien culture. Maybe a little too far from hard SF.

chris y 12.31.16 at 5:52 pm

But even without "substance dualism", intelligence is not simply an emergent property of the nervous system; it's an emergent property of the nervous system which exists as part of the environment which is the rest of the human body, which exists as part of the external environment, natural and manufactured, in which it lives. Et cetera. That AI research may eventually produce something recognisably and independently intelligent isn't the issue; suppose it may even eventually replicate the connectivity and differentiation of the human brain. But it would still be very different from human intelligence. Show me an AI grown in utero and I might be interested.

RichardM 12.31.16 at 7:08 pm ( 25 )

> one claim that seems to me to be flat out wrong

Which makes it the most interesting of the things said; nothing else in that essay reaches the level of merely being wrong. The rest of it is more like someone trying to speak Chinese without knowing anything above the level of the phonemes; it seems not merely to be missing any object-level knowledge of what it is talking about, but to be unaware that such a thing could exist.

Which is all a bit reminiscent of Peter Watts's Blindsight, mentioned above.

F. Foundling 12.31.16 at 7:36 pm

I agree that it is absurd to suggest that only Eastern bloc scifi writers truly know 'the limits of utopia'. There are quite enough non-utopian stories out there, especially as far as social development is concerned, where they predominate by far, so the West hardly needs Easterners to give it even more of that. In fact, one of the things I like about the Strugatsky brothers' early work is precisely the (moderately) utopian aspect.

F. Foundling 12.31.16 at 7:46 pm ( 27 )

stevenjohnson @ 10
> But reason is nothing if it's not instrumental. The issue of what an AI would want is remarkably unremarked, pardon the oxymoron.

It would want to maximise its reproductive success (RS), obviously ( http://crookedtimber.org/2016/12/30/frankensteins-children/#comments ). It would do so through evolved adaptations. And no, I don't think this is begging the question at all, nor does it necessarily pre-suppose hardwiring of the AI due to natural selection – why would you think that? I also predict that, to achieve RS, the AI will be searching for an optimal mating strategy, and it will be establishing dominance hierarchies with other AIs, which will eventually result in at least somewhat hierarchical, authoritarian AI societies. It will also have an inexplicable and irresistible urge to chew on a coconut.

Lee A. Arnold @ 15
> It could be that the mind does not arise out of ordinary physics, but that ordinary physics arises out of the mind.

I think that deep inside, we all know and feel that ultimately, unimaginably long ago and far away, before the formation of the Earth, before stars, planets and galaxies, before the Big Bang, before there was matter and energy, before there was time and space, the original reason why everything arose and currently exists is that somebody somewhere was really, truly desperate to chew on a coconut.

In fact, I see this as the basis of a potentially fruitful research programme. After all, the Coconut Hypothesis predicts that across the observable universe, there will be at least one planet with a biosphere that includes coconuts. On the other hand, the Hypothesis would be falsified if we were to find that the universe does not, in fact, contain any planets with coconuts. This hypothesis can be tested by means of a survey of planetary biospheres. Remarkably and tellingly, my preliminary results indicate that the Universe does indeed contain at least one planet with coconuts – which is precisely what my hypothesis predicted! If there are any alternative explanations, other researchers are free to pursue them, that's none of my business.

I wish all conscious beings who happen to read this comment a happy New Year. As for those among you who have also kept more superstitious festivities during this season, the fine is still five shillings.

William Burns 12.31.16 at 8:31 pm

The fact that the one example he gives is Kubrick indicates that he's talking about Western scifi movies, not literature.

Henry 12.31.16 at 10:41 pm

The fact that the one example he gives is Kubrick indicates that he's talking about Western scifi movies, not literature.

Solaris and Stalker notwithstanding, Strugatsky brothers + Stanislaw Lem ≠ Andrei Tarkovsky.

stevenjohnson 01.01.17 at 12:04 am

Well, for what it's worth I've seen the Czech Ikarie XB-1 in a theatrical release as Voyage to the End of the Universe (in a double bill with Zulu), the DDR's First Spaceship on Venus, and The Congress, starring Robin Wright. Having by coincidence read The Futurological Congress very recently, I find any connection between the not very memorable (for me) film and the novel obscure (again, for me).

But the DDR movie reads very nicely now as a warning that the world would be so much better off if the Soviets gave up all that nuclear deterrence madness. No doubt Lem and his fans are gratified at how well this has worked out. And Voyage to the End of the Universe the movie was a kind of metaphor about how all we'll really discover is that Human Nature is Eternal, and all these supposed flights into futurity will really just bring us Back Down to Earth. Razzberry/fart sound effect as you please.

engels 01.01.17 at 1:13 am ( 31 )

The issue of what an AI would want is remarkably unremarked

The real question of course is not when computers will develop consciousness but when they will develop class consciousness.

Underpaid Propagandist 01.01.17 at 2:11 am

For offbeat Lem, I always found "Fiasco" and his Scotland Yard parody, "The Investigation," worth exploring. I'm unaware how they've been received by Polish and Western critics and readers, but I found them clever.

The original print of Tarkovsky's "Stalker" was ruined. I've always wondered if it had any resemblance to its sepia reshoot. The "Roadside Picnic" translation I read eons ago was awful, IMHO.

Poor Tarkovsky. Dealing with Soviet repression of his homosexuality and the Polish diva in "Solaris" led him to an early grave.

O Lord, I'm old - I still remember the first US commercial screening of a choppy cut/translation/overdub of "Solaris" at Cinema Village in NYC many decades ago.

George de Verges 01.01.17 at 2:41 am ( 33 )

"Solaris and Stalker notwithstanding, Strugatsky brothers + Stanislaw Lem ≠ Andrei Tarkovsky."

Why? Perhaps I am dense, but I would appreciate an explanation.

F. Foundling 01.01.17 at 5:29 am

Ben @12
> Peter. Motherfuckin. Watts.
RichardM @25
> Which is all a bit reminiscent of Peter Watt's Blindsight, mentioned above.

Another dystopia that seemed quite gratuitous to me (and another data point in favour of the contention that there are too many dystopias already, and what is scarce is decent utopias). I never got how the author is able to distinguish 'awareness/consciousness' from 'merely intelligent' registering, modelling and predicting, and how being aware of oneself (in the sense of modelling oneself on a par with other entities) would not be both an inevitable result of intelligence and a requirement for intelligent decisions. Somehow the absence of awareness was supposed to be proved by the aliens' Chinese-Room style communication, but if the aliens were capable of understanding the Terrestrials so incredibly well that they could predict their actions while fighting them, they really should have been able to have a decent conversation with them as well.

The whole idea that we could learn everything unconsciously, so that consciousness was an impediment to intelligence, was highly implausible, too. The idea that the aliens would perceive any irrelevant information reaching them as a hostile act was absurd. The idea of a solitary and yet hyperintelligent species (vampire) was also extremely dubious, in terms of comparative zoology – a glorification of socially awkward nerddom?

All of this seemed like darkness for darkness' sake. I couldn't help getting the impression that the author was allowing his hatred of humanity to override his reasoning.

In general, dark/grit chic is a terrible disease of Western pop culture.

Alan White 01.01.17 at 5:43 am ( 35 )

engels–

"The real question of course is not when computers will develop consciousness but when they will develop class consciousness."

This is right. There is nothing like recognizable consciousness without the social discourse that is its necessary condition. But that doesn't mean the discourse is value-balanced: it might be a discourse that includes both peers and those perceived as lesser, as humans have demonstrated throughout history.

Just to say, Lem was often in Nobel talk, but never got there. That's a shame.

As happy a new year as our pre-soon-to-be-Trump era will allow.

Neville Morley 01.01.17 at 11:11 am

I wonder how he'd classify German SF – neither Washington nor Moscow? Juli Zeh is explicitly, almost obsessively, anti-utopian, while Dietmar Dath's Venus Siegt echoes Ken MacLeod in exploring both the light and dark sides of a Communist Bund of humans, AIs and robots on Venus, confronting an alliance of fascists and late capitalists based on Earth.

Manta 01.01.17 at 12:25 pm ( 37 )

Lee Arnold @10

See also http://www.scottaaronson.com/blog/?p=2903
It's a long talk; go to "Personal Identity":
"we don't know at what level of granularity a brain would need to be simulated in order to duplicate someone's subjective identity. Maybe you'd only need to go down to the level of neurons and synapses. But if you needed to go all the way down to the molecular level, then the No-Cloning Theorem would immediately throw a wrench into most of the paradoxes of personal identity that we discussed earlier."

Lee A. Arnold 01.01.17 at 12:26 pm

George de Verges: "I would appreciate an explanation."

I too would like to read Henry's accounting! Difficult to keep it brief!

To me, Tarkovsky was making nonlinear meditations. The genres were incidental to his purpose. It seems to me that a filmmaker with similar purpose is Terrence Malick. "The Thin Red Line" is a successful example.

I think that Kubrick stumbled onto this audience effect with "2001". But this was blind and accidental, done by almost mechanical means (paring the script down from around 300 pages of wordy dialogue, or something like that). "2001" first failed at the box office, then found a repeat midnight audience, who described the effect as nonverbal.

I think the belated box-office success blew Kubrick's own mind, because it looks like he spent the rest of his career attempting to reproduce the effect, by long camera takes and slow deliberate dialogue. It's interesting that among Kubrick's favorite filmmakers were Bresson, Antonioni, and Saura. Spielberg mentions in an interview that Kubrick said that he was trying to "find new ways to tell stories".

But drama needs linear thought, and linear thought is anti-meditation. Drama needs interpersonal conflict - a dystopia, not utopia. (Unless you are writing the intra-personal genre of the "education" plot. Which, in a way, is what "2001" really is.) Audiences want conflict, and it is difficult to make that meditational. It's even more difficult in prose.

This thought led me to a question. Are there dystopic prose writers who succeed in sustaining a nonlinear, meditational audience-effect?

Perhaps the answer will always be a subjective judgment? The big one who came to mind immediately is Ray Bradbury. "There Will Come Soft Rains" and parts of "Martian Chronicles" seem Tarkovskian.

So next, I search for whether Tarkovsky spoke of Bradbury, and find this:

"Although it is commonly assumed - and he did little in his public utterances to refute this - that Tarkovsky disliked and even despised science fiction, he in fact read quite a lot of it and was particularly fond of Ray Bradbury (Artemyev and Rausch interviews)." - footnote in Johnson & Petrie, The Films of Andrei Tarkovsky, p. 301

stevenjohnson 01.01.17 at 12:32 pm ( 39 )

The way you can substitute "identical twin" for "clone" and get a different perspective on clone stories in SF, you can substitute "point of view" for "consciousness" in SF stories. Or Silicon Valley daydreams, if that isn't redundant? The more literal you are, starting with the sensorium, the better I think. A human being has binocular vision of a scene comprising less than 180 degrees range from a mobile platform, accompanied by stereo hearing, proprioception, vestibular input, the touch of air currents and some degree of sensitivity to some chemicals carried by those currents, etc.

A computer might have, what? A single camera, or possibly a set of cameras which might be seeing multiple scenes. Would that be like having eyes in the back of your head? It might have a microphone, perhaps many, hearing many voices or maybe soundtracks at once. Would that be like listening to everybody at the cocktail party all at once? Then there's the question of computer code inputs, programming. What would parallel that? Visceral feelings like butterflies in the stomach or a sinking heart? Or would they seem like a visitation from God, a mighty vision with thunder and whispers on the wind? Would they just seem to be subvocalizations, posing as the computer's own free thoughts? After all, shouldn't an imitation of human consciousness include the illusion of free will? (If you believe in the reality of "free" will in human beings - whatever is free about the exercise of will power? - however could you give that to a computer? Or is this kind of question why so many people repudiate the very thought of AI?)

It seems to me that creating an AI in a computer is very like trying to create a quadriplegic baby with one eye and one ear. Diffidence at the difficulty is replaced by horror at the possibility of success. I think the ultimate goal here is of course the wish to download your soul into a machine that does not age. Good luck with that. On the other hand, an AI is likely the closest we'll ever get to an alien intelligence, given interstellar distances.

Lee A. Arnold 01.01.17 at 12:53 pm

F. Foundling: "the original reason why everything arose and currently exists is that somebody somewhere was really, truly desperate to chew on a coconut If there are any alternative explanations "

This is Vedantist/Spencer-Brown metaphysics, the universe is originally split into perceiver & perceived.

Very good.

Combined with Leibniz/Whitehead metaphysics, the monad is a striving process.

I thoroughly agree.

Combined with Church of the Subgenius metaphysics: "The main problem with the universe is that it doesn't have enough slack."

Yup.

"If there are any alternative explanations " ?

There are no alternative explanations!

RichardM 01.01.17 at 5:00 pm ( 41 )

> if the aliens were capable of understanding the Terrestrials so incredibly well that they could predict their actions while fighting them, they really should have been able to have a decent conversation with them as well.

If you can predict all your opponent's possible moves, and have a contingency for each, you don't need to care which one they actually do pick. You don't need to know what it feels like to be a ball to be able to catch it.
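[RichardM's point can be made concrete with a toy game tree. A minimal Python sketch with an invented miniature game (Nim-like: remove 1 or 2 stones, taking the last stone wins): a minimax player simply enumerates the opponent's legal moves and stores a best reply for each, with no model of the opponent's inner life.

    # Toy minimax: a contingency for every opponent move, no theory of mind.
    def moves(n):                      # legal moves: take 1 or 2 stones
        return [m for m in (1, 2) if m <= n]

    def best_value(n, my_turn):
        """+1 if the side to move can force taking the last stone."""
        if n == 0:                     # previous mover took the last stone
            return -1 if my_turn else 1
        vals = [best_value(n - m, not my_turn) for m in moves(n)]
        return max(vals) if my_turn else min(vals)

    # Precomputed best replies for every position we might be handed.
    contingency = {n: max(moves(n), key=lambda m: best_value(n - m, False))
                   for n in range(1, 10)}
    print(contingency)

The table is the whole "understanding" the player needs: a prepared answer for each position the opponent can create.]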

Ben 01.01.17 at 7:17 pm

Another Watts piece about the limits of technology, AI and humanity's inability to plan is The Island (PDF from Watts' website). Highly recommended.

F. Foundling,

Blindsight has an extensive appendix with cites detailing where Watts got the ideas he's playing with, including the ones you bring up, and provides specific warrants for including them. A critique of Watts' use of the ideas needs to be a little bit more granular.

Matt 01.01.17 at 8:05 pm ( 43 )

The issue of what an AI would want is remarkably unremarked, pardon the oxymoron.

It will "want" to do whatever it's programmed to do. It took increasingly sophisticated machines and software to dethrone humans as champions of checkers, chess, and go. It'll be another milestone when humans are dethroned from no-limit Texas hold 'em poker (a notable game played without perfect information). Machines are playing several historically interesting games at high superhuman levels of ability; none of these milestones put machines any closer to running amok in a way that Nick Bostrom or dramatists would consider worthy of extended treatment. Domain-specific superintelligence arrived a long time ago. Artificial "general" intelligence, aka "Strong AI," aka "Do What I Mean AI (But OMG It Doesn't Do What I Mean!)" is, like, not a thing outside of fiction and the Less Wrong community. (But I repeat myself.)

Bostrom's Superintelligence was not very good IMO. Of course a superpowered "mind upload" copied from a real human brain might act against other people, just like non-superpowered humans that you can read about in the news every day. The crucial question about the upload case is whether uploads of this sort are actually possible: a question of biology, physics, scientific instruments, and perhaps scientific simulations. Not a question of motivations. But he only superficially touches on the crucial issues of feasibility. It's like an extended treatise on the dangers of time travel that doesn't first make a good case that time machines are actually possible via plausible engineering.

I don't think that designed AI has the same potential to run entertainingly amok as mind-upload-AI. The "paperclip maximizer" has the same defect as a beginner's computer program containing a loop with no terminating condition for the loop. In the cautionary tale case this beginner mistake is, hypothetically, happening on a machine that is otherwise so capable and powerful that it can wipe out humanity as an incidental to its paperclip-producing mission. The warning is wasted on anyone who writes software and also wasted, for other reasons, on people who don't write software.
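[Matt's analogy, spelled out in code - a minimal sketch, nothing here is from Bostrom. The "paperclip maximizer" failure is the unbounded loop every beginner writes, just imagined on vastly more capable hardware; a bounded goal is a terminating condition:

    def naive_maximizer(make_paperclip):
        while True:                # no terminating condition: "runs amok"
            make_paperclip()

    def bounded_maximizer(make_paperclip, quota=1000):
        made = 0
        while made < quota:        # the fix is a loop bound, not philosophy
            make_paperclip()
            made += 1
]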

Bostrom shows a lot of ways for designed AI to run amok even when given bounded goals, but it's a cheat. They follow from his cult-of-Bayes definition of an optimal AI agent as an approximation to a perfect Bayesian agent. All the runnings-amok stem from the open ended Bayesian formulation that permits - even compels - the Bayesian agent to do things that are facially irrelevant to its goal and instead chase wild tangents. The object lesson is that "good Bayesians" make bad agents, not that real AI is likely to run amok.

In actual AI research and implementation, Bayesian reasoning is just one more tool in the toolbox, one chapter of the many-chapters AI textbook. So these warnings can't be aimed at actual AI practitioners, who are already eschewing the open ended Bayes-all-the-things approach. They're also irrelevant if aimed at non-practitioners. Non-practitioners are in no danger of leapfrogging the state of the art and building a world-conquering AI by accident.
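[A toy version of the "wild tangents" point, with invented numbers. Under an open-ended expected-utility formulation, information itself has expected value, so a facially irrelevant "survey" action can strictly dominate acting directly on the goal:

    # Goal: get a switch turned ON. World: wiring is normal or inverted.
    PRIOR = {"normal": 0.5, "inverted": 0.5}

    def utility(world, action):
        if action == "press":
            return 1.0 if world == "normal" else 0.0
        if action == "hold":
            return 1.0 if world == "inverted" else 0.0
        return 0.0                 # "survey" does nothing directly useful

    def expected_utility(belief, action):
        return sum(p * utility(w, action) for w, p in belief.items())

    # "Survey" reveals the wiring, after which the agent acts optimally:
    # its score is the value of perfect information.
    def value_with_survey(belief):
        return sum(p * max(utility(w, a) for a in ("press", "hold"))
                   for w, p in belief.items())

    direct = max(expected_utility(PRIOR, a) for a in ("press", "hold"))
    print(direct, value_with_survey(PRIOR))   # 0.5 vs 1.0: the tangent wins

Harmless at this scale; the cautionary tales just scale the same preference for tangents up to world-rearranging capability.]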

Plarry 01.03.17 at 5:45 am

It's an interesting talk, but the weakest point in it is his conclusion, as you point out. What I draw from his conclusion is that Ceglowski hasn't actually experienced much American or British SF.

There are great literary works pointed out in the thread so far, but even Star Trek and Red Dwarf hit on those themes occasionally on TV, and there are a number of significant examples in film, including "blockbusters" such as Blade Runner or The Abyss.

WLGR 01.03.17 at 6:01 pm ( 45 )

I made this point in the recent evopsych thread when it started approaching some more fundamental philosophy-of-mind issues like Turing completeness and modularity, but any conversation about AI and philosophy could really, really benefit from more exposure to continental philosophy if we want to say anything incisive about the presuppositions of AI and what the term "artificial intelligence" could even mean in the first place. You don't even have to go digging through a bunch of obscure French and German treatises to find the relevant arguments, either, because someone well versed at explaining these issues to Anglophone non-continentals has already done it for you: Hubert Dreyfus, who was teaching philosophy at MIT right around the time of AI's early triumphalist phase that inspired much of this AI fanfic to begin with, and who became persona non grata in certain crowds for all but declaring that the then-current approaches were a waste of time and that they should all sit down with Heidegger and Merleau-Ponty. (In fact it seems obvious that Ceglowski's allusion to alchemy is a nod to Dreyfus, one of whose first major splashes in the '60s was with a paper called "Alchemy and Artificial Intelligence".)

IMO Dreyfus' more recent paper called "Why Heideggerian AI failed, and how fixing it would require making it more Heideggerian" provides the best short intro to his perspective on the more-or-less current state of AI research. What Ceglowski calls "pouring absolutely massive amounts of data into relatively simple neural networks", Dreyfus would call an attempt to bring out the characteristic of "being-in-the-world" by mimicking what for a human being we'd call "enculturation", which seems to imply that Ceglowski's worry about connectionist AI research leading to more pressure toward mass surveillance is misplaced. (Not that there aren't other worrisome social and political pressures toward mass surveillance, of course!) The problem for modern AI isn't acquiring ever-greater mounds of data, the problem is how to structure a neural network's cognitive development so it learns to recognize significance and affordances for action within the patterns of data to which it's already naturally exposed.
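[For concreteness, the "relatively simple neural network" under discussion looks like this - a minimal numpy sketch, where the layer sizes, seed, and learning rate are arbitrary illustrative choices. The contested question is not code like this, but how such a system should acquire structure and significance rather than just weights:

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

    W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)    # one hidden layer
    W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)
    sigmoid = lambda z: 1 / (1 + np.exp(-z))

    for step in range(5000):
        h = sigmoid(X @ W1 + b1)                      # forward pass
        out = sigmoid(h @ W2 + b2)
        g_out = (out - y) * out * (1 - out)           # backprop, squared loss
        g_h = g_out @ W2.T * h * (1 - h)
        W2 -= 0.5 * h.T @ g_out; b2 -= 0.5 * g_out.sum(0)
        W1 -= 0.5 * X.T @ g_h;   b1 -= 0.5 * g_h.sum(0)

    print(out.round(2).ravel())                       # approaches [0 1 1 0]
]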

And yes, popular fiction about AI largely still seems stuck on issues that haven't been cutting-edge since the old midcentury days of cognitivist triumphalism, like Turing tests and innate thought modules and so on - which seems to me like a perfectly obvious result of the extent to which the mechanistically rationalist philosophy Dreyfus criticizes in old-fashioned AI research is still embedded in most lay scifi readers' worldviews. Even if actual scientists are increasingly attentive to continental-inspired critiques, this hardly seems true for most laypeople who worship the idea of science and technology enough to structure their cultural fantasies around it. At least this seems to be the case for Anglophone culture, anyway; I'd definitely be interested if there's any significant body of AI-related science fiction originally written in other languages, especially French, German, or Spanish, that takes more of these issues into account.

WLGR 01.03.17 at 7:37 pm

And in trying to summarize Dreyfus, I exemplified one of the most fundamental mistakes he and Heidegger would both criticize! Neither of them would ever call something like the training of a neural network "an attempt to bring out the characteristic of being-in-the-world", because being-in-the-world isn't a characteristic in the sense of any Cartesian ontology of substances with properties, it's a way of being that a living cognitive agent (Heidegger's "Dasein") simply embodies. In other words, there's never any Michelangelo moment where a creator reaches down or flips a switch to imbue their artificial creation ex nihilo with some kind of divine spark of life or intellect, a "characteristic" that two otherwise identical lumps of clay or circuitry can either possess or not possess - whatever entity we call "alive" or "intelligent" is an entity that by its very physical structure can enact this way of being as a constant dialectic between itself and the surrounding conditions of its growth and development. The second we start trying to isolate a single perceived property called "intelligence" or "cognition" from all other perceived properties of a cognitive agent, we might as well call it the soul and locate it in the pineal gland.

F. Foundling 01.03.17 at 8:22 pm ( 47 )

@RichardM
> If you can predict all your opponents possible moves, and have a contingency for each, you don't need to care which one they actually do pick. You don't need to know what it feels like to be a ball to be able to catch it.

In the real world, there are too many physically possible moves, so it's too expensive to prepare for each, and time constraints require you to make predictions. You do need to know how balls (re)act in order to play ball. Humans being a bit more complex, trying to predict and/or influence their actions without a theory of mind may work surprisingly well sometimes, but it ultimately has its limitations and will only get you so far, as animals have often found.

@Ben
> Blindsight has an extensive appendix with cites detailing where Watts got the ideas he's playing with, including the ones you bring up, and provides specific warrants for including them. A critique of Watts' use of the ideas needs to be a little bit more granular.

I did read his appendix, and no, some of the things I brought up were not, in fact, addressed there at all, and for others I found his justifications unconvincing. However, having an epic pro- vs. anti-Blindsight discussion here would feel too much like work: I wrote my opinion once and I'll leave it at that.

stevenjohnson 01.03.17 at 8:57 pm

Matt @43

So far as designing an AI to want what people want: I am agnostic as to whether that goal is the means to the goal of a general intelligence a la humanity. It still seems to me brains have the primary function of outputting regulations for the rest of the body, then altering the outputs in response to the subsequent outcomes (which are identified by a multitude of inputs, starting with oxygenated hemoglobin and blood glucose). I'm still not aware of what people say about the subject of AI motivations, but if you say so, I'm not expert enough in the literature to argue. Superintelligence on the part of systems expert in selected domains still seems to me of great speculative interest. As to Bostrom and AI and Bayesian reasoning, I avoid Bayesianism because I don't understand it. Bunge's observation that propositions aren't probabilities sort of messed up my brain on that topic. Bayes' theorem I think I understand, even to the point that I seem to recall following a mathematical derivation.

WLGR @45, 46: I don't understand how continental philosophy will tell us what people want. It still seems to me that a motive for thinking is essential, but my favored starting point for humans is crassly biological. I suppose by your perspective I don't understand the question. As to the lack of a Michelangelo moment for intelligence, I certainly don't recall any from my infancy. But perhaps there are people who can recall the womb ...

bob mcmanus 01.03.17 at 9:14 pm ( 49 )

AI-related science fiction originally written in other languages

Tentatively, possibly Japanese anime. Serial Experiments Lain. Ghost in the Shell. Numerous mecha-human melds. End of Evangelion.

The mashup of cybertech, animism, and Buddhism works toward merging rather than emergence.

Matt 01.04.17 at 1:21 am

Actually existing AI and leading-edge AI research are overwhelmingly not about pursuing "general intelligence"* a la humanity. They are about performing tasks that have historically required what we considered to be human intelligence, like winning board games or translating news articles from Japanese to English.

Actual AI systems don't resemble brains much more than forklifts resemble Olympic weightlifters.

Talking about the risks and philosophical implications of the intellectual equivalent of forklifts - another wave of computerization - either lacks drama or requires far too much background preparation for most people to appreciate the drama. So we get this stuff about superintelligence and existential risk, as if a philosopher wanted to write about public health but found it complicated and dry, so he decided to warn how utility monsters could destroy the National Health Service. It's exciting at the price of being silly. (And at the risk of other non-experts not realizing it's silly.)

(I'm not an honest-to-goodness AI expert, but I do at least write software for a living, I took an intro to AI course during graduate school in the early 2000s, I keep up with research news, and I have written parts of a production-quality machine learning system.)

*In fact I consider "general intelligence" to be an ill-formed goal, like "general beauty." Beautiful architecture or beautiful show dogs? And beautiful according to which traditions?

[Jan 02, 2017] Japanese White-Collar Workers Are Already Being Replaced by Artificial Intelligence

Watson was actually a specialized system designed to win the Jeopardy! contest. Highly specialized. There is too much hype around AI, although hardware advances make more things possible, and speech recognition is now pretty decent.
Notable quotes:
"... I used to be supportive of things like welfare reform, but this is throwing up new challenges that will probably require new paradigms. Since more and more low skilled jobs - including those of CEOs - get automated, there will be fewer jobs for the population ..."
"... The problem I see with this is that white collar jobs have been replaced by technology for centuries, and at the same time, technology has enabled even more white collar jobs to exist than those that it replaced. ..."
"... For example, the word "computer" used to be universally referred to as a job title, whereas today it's universally referred to as a machine. ..."
"... It depends on the country, I think. I believe many countries, like Japan and Finland, will indeed go this route. However, here in the US, we are vehemently opposed to anything that can be branded as "socialism". So instead, society here will soon resemble "The Walking Dead". ..."
"... "Men and nations behave wisely when they have exhausted all other resources." -- Abba Eban ..."
"... Which is frequently misquoted as, "Americans can always be counted on to do the right thing after they have exhausted all other possibilities." ..."
"... So when the starving mob are at the ruling elites' gates with torches and pitch forks, they'll surely find the resources to do the right thing. ..."
"... When you reduce the human labor participation rate relative to the overall population, what you get is deflation. That's an undeniable fact. ..."
"... But factor in governments around the world "borrowing" money via printing to pay welfare for all those unemployed. So now we have deflation coupled with inflation = stagflation. But stagflation doesn't last. At some point, the entire system - as we know it- will implode. What can not go on f ..."
"... Unions exist to protect jobs and employment. The Pacific Longshoremen's Union during the 1960's&70's was an aberration in the the union bosses didn't primarily look after maintaining their own power via maintaining a large number of jobs, but rather opted into profit sharing, protecting the current workers at the expense of future power. Usually a union can be depended upon to fight automation, rather than to seek maximization of public good ..."
"... Until something goes wrong. Who is going to pick that machine generated code apart? ..."
"... What automation? 1000 workers in US vs 2000 in Mexico for half the cost of those 1000 is not "automation." Same thing with your hand-assembled smartphone. ..."
"... Doctors spend more time with paper than with patients. Once the paper gets to the insurance company chances are good it doesn't go to the right person or just gets lost sending the patient back to the beginning of the maze. The more people removed from the chain the bet ..."
"... I'm curious what you think you can do that Watson can't. ..."
"... Seriously? Quite a bit actually. I can handle input streams that Watson can't. I can make tools Watson couldn't begin to imagine. I can interact with physical objects without vast amounts of programming. I can deal with humans in a meaningful and human way FAR better than any computer program. I can pass a Turing test. The number of things I can do that Watson cannot is literally too numerous to bother counting. Watson is really just an decision support system with a natural language interface. Ver ..."
"... It's not Parkinson's law, it's runaway inequality. The workforce continues to be more and more productive as it receives an unchanging or decreasing amount of compensation (in absolute terms - or an ever-decreasing share of the profits in relative terms), while the gains go to the 1%. ..."
Jan 02, 2017 | hardware.slashdot.org
(qz.com)

Posted by msmash on Monday January 02, 2017 @12:00PM from the they-are-here dept.

Most of the attention around automation focuses on how factory robots and self-driving cars may fundamentally change our workforce, potentially eliminating millions of jobs.

But AI that can handle knowledge-based, white-collar work is also becoming increasingly competent.

From a report on Quartz:

One Japanese insurance company, Fukoku Mutual Life Insurance, is reportedly replacing 34 human insurance claim workers with "IBM Watson Explorer," starting by this month.

The AI will scan hospital records and other documents to determine insurance payouts, according to a company press release, factoring injuries, patient medical histories, and procedures administered.

Automation of these research and data gathering tasks will help the remaining human workers process the final payout faster, the release says.
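[IBM's Watson Explorer pipeline is proprietary, so the following Python sketch is only an illustration of the kind of document triage the press release describes; every field name, cost, and rule below is invented:

    import re

    PROCEDURE_COSTS = {"appendectomy": 4000, "x-ray": 150, "mri": 900}

    def triage_claim(document):
        """Pull rough payout factors out of free-text hospital records."""
        found = {p: c for p, c in PROCEDURE_COSTS.items()
                 if re.search(rf"\b{p}\b", document, re.IGNORECASE)}
        days = re.search(r"(\d+)\s+days? in hospital", document)
        stay = int(days.group(1)) if days else 0
        return {
            "procedures": found,
            "hospital_days": stay,
            "suggested_payout": sum(found.values()) + 200 * stay,
            "needs_human_review": not found,   # humans stay in the loop
        }

    print(triage_claim("Patient underwent MRI and appendectomy; "
                       "3 days in hospital."))

Note the shape of the claim in the press release: the machine extracts factors and suggests, and the remaining humans decide the final payout.]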

ranton ( 36917 ) , Monday January 02, 2017 @12:09PM ( #53592671 )

As if this is new ( Score: 5 , Insightful)

As a software developer of enterprise software, every company I have worked for has either produced software which reduced white collar jobs or allowed companies to grow without hiring more people. My current company has seen over 10x profit growth over the past five years with a 20% increase in manpower. And we exist in a primarily zero sum portion of our industry, so this is directly taking revenue and jobs from other companies. [He is lying -- NNB]

People need to stop living in a fairy tale land where near full employment is a reality in the near future. I'll be surprised if labor participation rate of 25-54 year olds is even 50% in 10 years.

unixisc ( 2429386 ) writes:

I used to be supportive of things like welfare reform, but this is throwing up new challenges that will probably require new paradigms. Since more and more low skilled jobs - including those of CEOs - get automated, there will be fewer jobs for the population

This then throws up the question of whether we should have a universal basic income. But one potential positive trend of this would be an increase in time spent home w/ family, thereby reducing the time kids spend in daycare and w/ both parents - n

Gr8Apes ( 679165 ) writes:
But one potential positive trend of this would be an increase in time spent home w/ family, thereby reducing the time kids spend in daycare

Great, so now more people can home school and indoctrinate - err teach - family values.

Anonymous Coward writes:

The GP is likely referring to the conservative Christian homeschooling movement who homeschool their children explicitly to avoid exposing their children to a common culture. The "mixing pot" of American culture may be mostly a myth, but some amount of interaction helps understanding and increases the chance people will be able to think of themselves as part of a singular nation.

I believe in freedom of speech and association, so I do not favor legal remedies, but it is a cultural problem that may have socia

unixisc ( 2429386 ) writes:

No, I was not talking about homeschooling at all. I was talking about the fact that when kids are out of school, they go to daycares, since both dad and mom are busy at work. Once most of the jobs are automated so that it's difficult for anyone but geniuses to get jobs, parents might spend that freed up time w/ their kids. It said nothing about homeschooling: not all parents would have the skills to do that.

I'm all for a broad interaction b/w kids, but that's something that can happen at schools, and d

Ol Olsoc ( 1175323 ) writes:
Uh, why would Leftist parents indoctrinate w/ family values? They can teach their dear offspring how to always be malcontents in the unattainable jihad for income equality. Or are you saying that Leftist will all abort their foetii in an attempt to prevent climate change?

Have you ever had an original thought? Seriously, please be kidding, because you sound like you are one step away from serial killing people you consider "leftist" and cremating them in the back yard while laughing about releasing their Carbon Dioxide into the atmosphere.

unixisc ( 2429386 ) writes:

My original comment was not about home schooling. It was about parents spending all time w/ their kids once kids are out of school - no daycares. That would include being involved w/ helping their kids w/ both homework and extra curricular activities.

ArmoredDragon ( 3450605 ) writes:

The problem I see with this is that white collar jobs have been replaced by technology for centuries, and at the same time, technology has enabled even more white collar jobs to exist than those that it replaced.

For example, the word "computer" used to be universally referred to as a job title, whereas today it's universally referred to as a machine.

alexgieg ( 948359 ) writes:

The problem is that AI is becoming faster at learning new job opportunities than people are, gulping them down before people even arrive to be replaced. And this speed is growing. You cannot beat exponential growth with linear growth, or even with a slightly slower-growing exponential.
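[The arithmetic behind that claim, as a tiny simulation with illustrative numbers - a 10%-per-period grower overtakes a +50-per-period grower despite a 100x head start:

    linear, exponential, period = 1000.0, 10.0, 0
    while exponential <= linear:
        linear += 50              # linear growth: constant increment
        exponential *= 1.10       # exponential growth: constant ratio
        period += 1
    print(period)                 # 64 periods with these numbers

A 10%-per-period grower overtakes a 9%-per-period grower the same way, only later.]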

Oswald McWeany ( 2428506 ) , Monday January 02, 2017 @12:32PM ( #53592765 )

Re:As if this is new ( Score: 4 , Insightful)

I completely agree. Even jobs which a decade ago looked irreplaceable, like teachers, doctors and nurses, are possibly in the crosshairs. There are very few jobs in which AI can't partially (or in some cases completely) replace humans. Society has some big choices to make in the upcoming decades, and political systems may crash and rise as we adapt.

Are we heading towards "basic wage" for all people? The ultimate socialist state?

Or is the gap between haves and have-nots going to grow exponentially, even above today's growth, as those that own the companies and AI bots make ever-increasing money while the poor suckers at the bottom are given just enough money to consume the products that keep the owners in business?

Grishnakh ( 216268 ) , Monday January 02, 2017 @12:40PM ( #53592809 )

Re:As if this is new ( Score: 5 , Insightful)

Society has some big choices to make in the upcoming decades and political systems may crash and rise as we adapt.

Are we heading towards "basic wage" for all people? The ultimate socialist state?

It depends on the country, I think. I believe many countries, like Japan and Finland, will indeed go this route. However, here in the US, we are vehemently opposed to anything that can be branded as "socialism". So instead, society here will soon resemble "The Walking Dead".

EvilSS ( 557649 ) writes: on Monday January 02, 2017 @01:31PM ( #53593047 )

I think even in the US it will hit a tipping point when it gets bad enough. When our consumer society can't buy anything because they are all out of work, we will need to change our way of thinking about this, or watch the economy completely collapse.

Matt Bury ( 4823023 ) writes:

"Men and nations behave wisely when they have exhausted all other resources." -- Abba Eban

Which is frequently misquoted as, "Americans can always be counted on to do the right thing after they have exhausted all other possibilities."

So when the starving mob are at the ruling elites' gates with torches and pitch forks, they'll surely find the resources to do the right thing.

gtall ( 79522 ) writes:

The "misquote" is a phrase uttered by Winston Churchill.

Coisiche ( 2000870 ) writes:
So when the starving mob are at the ruling elites' gates with torches and pitch forks, they'll surely find the resources to do the right thing.

Yes, they'll use some of their wealth to hire and equip private armies to keep the starving mob at bay because people would be very happy to take any escape from being in the starving mob.

Might be worth telling your kids that taking a job in the armed forces might be the best way to ensure well paid future jobs because military training would be in greater demand.

HiThere ( 15173 ) writes:

What you're ignoring is that the military is becoming steadily more mechanized also. There won't be many jobs there, either. Robots are more reliable and less likely to side with the protesters.

Grishnakh ( 216268 ) writes:

I'm going with the latter (complete economic collapse). There's no way, with the political attitudes and beliefs present in our society, and our current political leaders, that we'd be able to pivot fast enough to avoid it. Only small, homogeneous nations like Finland (or Japan, even though it's not that small, but it is homogeneous) can pull that off because they don't have all the infighting and diversity of political beliefs that we do, plus our religious notion of "self reliance".

scamper_22 ( 1073470 ) writes:

There are a few ways this plays out. How do we deal with this? One way is a basic income.

The other less articulated way, but is the basis for a lot of people's views is things simply get cheaper. Deflation is good. You simply live on less. You work less. You earn less. But you can afford the food, water... of life.

Now this is a hard transition in many places. There are loads of things that don't go well with living on less and deflation. Debt, government services, pensions...

I grew up in a third world coun

Grishnakh ( 216268 ) writes:

The main problem with this idea of "living on less" is that, even in the southern US, the rent prices are very high these days because of the real estate bubble and property speculation and foreign investment. The only place where property isn't expensive is in places where there are really zero jobs at all.

Gr8Apes ( 679165 ) writes:

All jobs that don't do R&D will be replaceable in the near future, as in within 1 or 2 generations. Even R&D jobs will likely not be immune, since much R&D is really nothing more than testing a basic hypothesis, of which most of the testing can likely be handed over to AI. The question is what do you do with 24B people with nothing but spare time on their hands, and a smidgen of 1% that actually will have all the wealth? It doesn't sound pretty, unless some serious changes in the way we deal wit

DigiShaman ( 671371 ) writes:

Worse! Far worse!! Total collapse of the fiat currencies globally is imminent. When you reduce the human labor participation rate relative to the overall population, what you get is deflation. That's an undeniable fact.

But factor in governments around the world "borrowing" money via printing to pay welfare for all those unemployed. So now we have deflation coupled with inflation = stagflation. But stagflation doesn't last. At some point, the entire system - as we know it- will implode. What can not go on f

HiThere ( 15173 ) writes:

I don't know what the right answer is, but it's not unions. Unions exist to protect jobs and employment. The Pacific Longshoremen's Union during the 1960's&70's was an aberration in that the union bosses didn't primarily look after maintaining their own power via maintaining a large number of jobs, but rather opted into profit sharing, protecting the current workers at the expense of future power. Usually a union can be depended upon to fight automation, rather than to seek maximization of public good

Ol Olsoc ( 1175323 ) writes:

Are we heading towards "basic wage" for all people?

I think it's the only answer (without genocides...). My money is on genocide. Cheaper, and humans have it as a core value.

sjbe ( 173966 ) , Monday January 02, 2017 @12:43PM ( #53592831 )

Failure of imagination ( Score: 5 , Informative)

As a software developer of enterprise software, every company I have worked for has either produced software which reduced white collar jobs or allowed companies to grow without hiring more people.

You're looking at the wrong scale. You need to look at the whole economy. Were those people able to get hired elsewhere? The answer in general was almost certainly yes. Might have taken some of them a few months, but eventually they found something else.

My company just bought a machine that allows us to manufacture wire leads much faster than we can do it by hand. That doesn't mean that the workers we didn't employ to do that work couldn't find gainful employment elsewhere.

And we exist in a primarily zero sum portion of our industry, so this is directly taking revenue and jobs from other companies.

Again, so what? You've automated some efficiency into an industry that obviously needed it. Some workers will have to do something else. Same story we've been hearing for centuries. It's the buggy whip story just being retold with a new product. Not anything to get worried about.

People need to stop living in a fairy tale land where near full employment is a reality in the near future.

Based on what? The fact that you can't imagine what people are going to do if they can't do what they currently are doing? I'm old enough to predate the internet. The World Wide Web was just becoming a thing while I was in college. Apple, Microsoft, Google, Amazon, Cisco, Oracle, etc all didn't even exist when I was born. Vast swaths of our economy hadn't even been conceived of back then. 40 years from now you will see a totally new set of companies doing amazing things you never even imagined. Your argument is really just a failure of your own imagination. People have been making that same argument since the dawn of the industrial revolution and it is just as nonsensical now as it was then.

I'll be surprised if labor participation rate of 25-54 year olds is even 50% in 10 years.

Prepare to be surprised then. Your argument has no rational basis. You are extrapolating some micro-trends in your company well beyond any rational justification.

TuringTest ( 533084 ) writes:

Were those people able to get hired elsewhere? The answer in general was almost certainly yes.

Oh, oh, I know this one! "New jobs being created in the past don't guarantee that new jobs will be created in the future". This is the standard groupthink answer for waiving any responsibility after advice given about the future, right?

paiute ( 550198 ) writes:
People have been making that same argument since the dawn of the industrial revolution and it is just as nonsensical now as it was then.

I see this argument often when these types of discussions come up. It seems to me to be some kind of logical fallacy to think that something new will not happen because it has not happened in the past. It reminds me of the historical observation that generals are always fighting the last war.

sjbe ( 173966 ) writes:

Asking the wrong question

It seems to me to be some kind of logical fallacy to think that something new will not happen because it has not happened in the past.

What about humans and their ability to problem solve and create and build has changed? The reason I don't see any reason to worry about "robots" taking all our jobs is because NOTHING has changed about the ability of humans to adapt to new circumstances. Nobody has been able to make a coherent argument detailing why humans will not be able to continue to create new industries and new technologies and new products in the future. I don't pretend to know what those new economies will look like with any gre

ranton ( 36917 ) writes:
You didn't finish your thought. Just because generals are still thinking about the last war doesn't mean they don't adapt to the new one when it starts.

Actually yes it does. The history of the blitzkrieg is not one of France quickly adapting to new technologies and strategies to repel the German invaders. It is of France's Maginot line being mostly useless in the war and Germany capturing Paris with ease. Something neither side could accomplish in over four years in the previous war was accomplished in around two months using the new paradigm.

Will human participation in the workforce adapt to AI technologies in the next 50 years? Almost certainly. Is it li

Re: ( Score: 2 ) alexgieg ( 948359 ) writes:

It's simple. Do you know how, once we applied human brain power to the problem of flying, we managed, in a matter of decades, to become better at flying than nature ever did in hundreds of millions of years of natural selection? Well, what do you think will happen now that we're focused on making AI better than brains? As in, better than any brains, including ours?

AI is catching up to human abilities. There's still a way to go, but breakthroughs are happening all the time. And as with flying, it won't take

Re: ( Score: 2 ) HiThere ( 15173 ) writes:

One can hope that your analogy with flying is correct. There are still many things that birds do better than planes. Even so I consider that a conservative projection when given without a time-line.

Re: ( Score: 2 ) Ol Olsoc ( 1175323 ) writes:
What about humans and their ability to problem solve and create and build has changed? The reason I don't see any reason to worry about "robots" taking all our jobs is because NOTHING has changed about the ability of humans to adapt to new circumstances.

I had this discussion with a fellow a long time ago who was so conservative he didn't want any regulations on pollutants. The Love Canal disaster was the topic. He said "no need to do anything, because humans will adapt - it's called evolution."

I answered - "Yes, we might adapt. But you realize that means 999 out of a 1000 of us will die, and it's called evolution. Sometimes even 1000 out of 1000 die, that's called extinction."

This will be a different adaptation, but very well might be solved by most of

Re: ( Score: 2 ) Dutch Gun ( 899105 ) writes:

Generally speaking, though, when you see a very consistent trend or pattern over a long time, your best bet is that the trend will continue, not that it will mysteriously veer off because now it's happening to white collar jobs instead of blue collar jobs. I'd say the logical fallacy is to disbelieve that the trend is likely to continue. Technology doesn't invalidate basic economic theory, in which people manage to find jobs and services to match the level of the population precisely because there are so

Re: ( Score: 2 ) ranton ( 36917 ) writes:
It's the buggy whip story just being retold with a new product. Not anything to get worried about.

The buggy whip story shows that an entire species which had significant economic value for thousands of years found that technology had finally reached a point where they weren't needed. Instead of needing 20 million of them working in our economy in 1920, by 1960 there were only about 4.5 million. While they were able to take advantage of the previous technological revolutions and become even more useful because of better technology in the past, most horses could not survive the invention of the automobile

Re:Failure of imagination ( Score: 4 , Insightful) fluffernutter ( 1411889 ) , Monday January 02, 2017 @01:53PM ( #53593139 )
Were those people able to get hired elsewhere?

Your question is incomplete. The correct question to ask is if these people were able to get hired elsewhere *at the same salary when adjusted for inflation*. To that, the answer is no.

It hasn't been true on average since the 70's. Sure, some people will find equal or better jobs, but salaries have been steadily decreasing since the onset of technology. Given a job for less money or no job, most people will pick the job for less; and that is why we are not seeing a large change in the unemployment rate.

Re: ( Score: 3 ) gtall ( 79522 ) writes:

There is another effect. When the buggy whip manufacturers were put out of business, there were options for people to switch to and new industries were created. However, if AI gets applied across an entire economy, there won't be options because there is unemployment in every sector. And if AI obviates the need for workers, investors in new industries will build them around bots, so no real increase in employment. That, and yer basic truck driver ain't going to be learning how to program.

Re: ( Score: 2 ) fluffernutter ( 1411889 ) writes:

Agreed, companies will be designed around using as little human intervention as possible. First they will use AI, then they will use cheap foreign labor, and only if those two options are completely impractical will they use domestic labor. Any business plan that depends on more than a small fraction of domestic labor (think Amazon's 1 minute of human handling per package) is likely to be considered unable to compete. I hate the buggy whip analogy, because using foreign (cheap) labor as freely as today w

Re: ( Score: 2 ) mspring ( 126862 ) writes:

Maybe the automation is a paradigm shift on par with the introduction of agriculture replacing the hunter-gatherer way of living? Then, some hunters and gatherers were perhaps also making "luddite" arguments: "Nah, there will always be sufficient forests/wildlife for everyone to live on. No need to be afraid of these agriculturists. We have been hunting and gathering for millennia. That'll never change."

Re: ( Score: 3 ) bluegutang ( 2814641 ) writes:

Were those people able to get hired elsewhere? The answer in general was almost certainly yes.

Actually, the answer is probably no. Labor force participation [tradingeconomics.com] rates have fallen steadily since about the year 2000. Feminism caused the rate to rise from 58% (1963) to 67% (2000). Since then, it has fallen to 63%. In other words, we've already lost almost half of what we gained from women entering the workforce en masse. And the rate will only continue to fall in the future.

Re: ( Score: 2 ) J-1000 ( 869558 ) writes:

You must admit that *some* things are different. Conglomeratization may make it difficult to create new jobs, as smaller businesses have trouble competing with the mammoths. Globalization may send more jobs offshore until our standard of living has leveled off with the rest of the world. It's not inconceivable that we'll end up with a much larger number of unemployed people, with AI being a significant contributing factor. It's not a certainty, but neither is your scenario of the status quo. Just because it

Re: ( Score: 2 ) Ol Olsoc ( 1175323 ) writes:
People need to stop living in a fairy tale land where near full employment is a reality in the near future. I'll be surprised if labor participation rate of 25-54 year olds is even 50% in 10 years.

Then again, tell me how companies are going to make money to service the stakeholders when there are no people around who can buy their highly profitable wares?

Now speaking of fairy tales, that one is much more magical than your full employment one.

This ain't rocket science. Economies are at base, an equation. You have producers on one side, and consumers on the other. Ideally, they balance out, with extra rewards for the producers. Now either side can cheat, such as if producers can move productio

Re: ( Score: 2 ) Oswald McWeany ( 2428506 ) writes:
If you think software developers are immune, you're delusional.

I wonder if the software developers paid to create software to make software developers obsolete will have any qualms about writing that code.

Humans stopped writing computer code after Fortran ( Score: 2 ) raymorris ( 2726007 ) writes:

Until Fortran was developed, humans used to write code telling the computer what to do. Since the late 1950s, we've been writing a high-level description, then a computer program writes the program that actually gets executed.

Nowadays, there's frequently a computer program, such as a browser, which accepts our high-level description of the task and interprets it before generating more specific instructions for another piece of software, an api library, which creates more specific instructions for another api
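[The layering raymorris describes is visible in any Python session: dis prints bytecode that no human wrote, which a virtual machine (itself machine code that a compiler wrote) then executes:

    import dis

    def add(a, b):
        return a + b

    dis.dis(add)   # e.g. LOAD_FAST a, LOAD_FAST b, BINARY_OP (+), RETURN_VALUE

The exact opcode names vary by Python version, but none of them were typed by the programmer.]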

Re: ( Score: 2 ) avandesande ( 143899 ) writes:

Until something goes wrong. Who is going to pick that machine generated code apart?

No more working till last train but with life empl ( Score: 2 ) Joe_Dragon ( 2206452 ) writes:

No more working till the last train - but with lifetime employment also gone, where will laid-off people find new jobs?

Re: ( Score: 2 ) JoeMerchant ( 803320 ) writes:

They won't, that's the point.

I see plenty of work in reducing student-teacher ratios in education, increasing maintenance and inspection intervals, transparency reporting on public officials, etc. Now, just convince the remaining working people that they want to pay for this from their taxes.

I suppose when we hit 53% unemployed, we might be able to start winning popular elections, if the unemployed are still allowed to vote then.

Re: ( Score: 2 ) Grishnakh ( 216268 ) writes:

At least here in the US, that won't change anything. The unemployed will still happily vote against anything that smacks of "socialism". It's a religion to us here. People here would rather shoot themselves (and their family members) in the head than enroll in social services.

Re: ( Score: 2 ) fluffernutter ( 1411889 ) writes:

That's a pretty funny thing to say about a nation with more than a third on welfare.

Re: ( Score: 2 ) Grishnakh ( 216268 ) writes:

Remember, most of the US population is religious, and not only does this involve some "actual" religion (usually Christianity), it also involves the "anti-socialism" religion. Now remember, the defining feature of religion is a complete lack of rationality, and believing in something with zero supporting evidence, frequently despite enormous evidence to the contrary (as in the case of young-earth creationism, something that a huge number of Americans believe in).

So yes, it is "a pretty funny thing to say", bu

Re: ( Score: 2 ) pla ( 258480 ) writes:

"One job, one vote!" / shudder

Obviously ( Score: 3 , Insightful) Anonymous Coward , Monday January 02, 2017 @12:11PM ( #53592677 )

People that do trivial tasks like looking at numbers on documents, something a computer can easily do, are prime for getting replaced.

Face it, if you aren't creating new things, you're the first to go. Maintaining a process is basically pattern recognition.

Re: ( Score: 2 ) kwerle ( 39371 ) writes:

Since this is very very similar to what my partner does, I feel like I'm a little qualified to speak on the subject at hand.

Yeah, pattern matching should nail this - but pattern matching only works if the patterns are reasonable/logical/consistent. Yes, I'm a little familiar with advanced pattern matching, filtering, etc.

Here's the thing: doctors are crappy input sources. At least in the US medical system. And in our system they are the ones that have to make diagnoses (in most cases). They are inconsistent.
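A toy illustration of that inconsistency problem (the chart notes and the pattern below are made up for the example): rule-based extraction works only while the input follows the expected format, and silently fails on every variant a different doctor might write.

```python
import re

# Hypothetical chart notes: the same diagnosis written three different ways.
notes = [
    "Dx: E11.9 type 2 diabetes",          # clean; matches the expected format
    "type II diabetes mellitus (e11.9)",  # different order, lowercase code
    "DM2",                                # shorthand with no code at all
]

# A naive rule that assumes consistent "Dx: <ICD-10 code>" formatting.
pattern = re.compile(r"Dx:\s*([A-Z]\d{2}\.\d)")

for note in notes:
    m = pattern.search(note)
    print(f"{note!r:40} -> {m.group(1) if m else 'NO MATCH'}")
```

Only the first note matches; the other two need either a far more forgiving pattern or a human reviewer, which is the commenter's point.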

Re: ( Score: 2 ) RightwingNutjob ( 1302813 ) writes:

What automation? 1000 workers in the US vs 2000 in Mexico for half the cost of those 1000 is not "automation." Same thing with your hand-assembled smartphone. I'd rather have it be assembled by robots in the US with 100 human babysitters than hand-built in China by 1000 human drones.

GIGO ( Score: 2 ) ISoldat53 ( 977164 ) writes:

I hope their data collection is better than it is in the US. Insurance companies' systems can't talk to the doctors' systems. They are stuck with 1980s technology or sneakernet to get information exchanged. Paper gets lost, forms don't match.

Doctors spend more time with paper than with patients. Once the paper gets to the insurance company, chances are good it doesn't go to the right person or just gets lost, sending the patient back to the beginning of the maze. The more people removed from the chain the bet

Re: ( Score: 2 ) ColdWetDog ( 752185 ) writes:

You think this is anything but perfectly planned? Insurance companies prevaricate better than anyone short of a Federal politician. 'Losing' a claim costs virtually nothing. Mishandling a claim costs very little. Another form letter asking for more / the same information, ditto.

Computerizing the whole shebang gives yet another layer of potential delay ('the computer is slow today' is a perennial favorite).

That said, in what strange world is insurance adjudication considered 'white collar'? In the US a

Japanese workforce is growing old ( Score: 3 ) AchilleTalon ( 540925 ) , Monday January 02, 2017 @12:21PM ( #53592727 ) Homepage

Japan needs to automate and robotize as much as it can to survive with a workforce growing old. Japan is facing this reality, as are many countries where labor isn't replaced at a sufficient rate to keep up with the needs. Older people will need care that some countries just cannot deliver or afford.

Re: ( Score: 3 ) avandesande ( 143899 ) writes:

Japan is notorious for being far behind on office automation.

sjbe ( 173966 ) , Monday January 02, 2017 @12:27PM ( #53592743 )

Cue the chicken littles ( Score: 3 )

Calm down everyone. This is just a continuation of productivity tools for accounting. Among other things I'm a certified accountant. This is just the next step in automation of accounting and it's a good thing. We used to do all our ledgers by hand. Now we all use software for that and believe me you don't want to go back to the way it was.

Very little in accounting is actually value-added activity, so it is desirable to automate as much of it as possible. If some people lost their jobs as a result, it's equivalent to how the PC replaced secretaries 30+ years ago. They were doing a necessary task but one that added little or no value. Most of what accountants do is just keeping track of what happened in a business and keeping the paperwork flowing where it needs to go. This is EXACTLY what we should be automating whenever possible.

I'm sure there are going to be a lot of folks loudly proclaiming how we are all doomed and that there won't be any work for anyone left to do. Happens every time there is an advancement in automation, and yet every time they are wrong. Yes, some people are going to struggle in the short run. That happens with every technological advancement. Eventually they find other useful and valuable things to do and the world moves on. It will be fine.

Re: ( Score: 2 ) fluffernutter ( 1411889 ) writes:

I'm curious what you think you can do that Watson can't. Accounting is a very rigidly structured practice. All IBM really needs to do is let Watson sift through the books of a couple hundred companies and it will easily determine how to best achieve a defined set of objectives for a corporation.

sjbe ( 173966 ) writes:

Accounting isn't what you think it is ( Score: 2 )

I'm curious what you think you can do that Watson can't.

Seriously? Quite a bit actually. I can handle input streams that Watson can't. I can make tools Watson couldn't begin to imagine. I can interact with physical objects without vast amounts of programming. I can deal with humans in a meaningful and human way FAR better than any computer program. I can pass a Turing test. The number of things I can do that Watson cannot is literally too numerous to bother counting. Watson is really just a decision support system with a natural language interface. Ver

Re: ( Score: 2 ) King_TJ ( 85913 ) writes:

Yep! I don't even work in Accounting or Finance, but because I do computer support for that department and have to get slightly involved in the bill coding side of the process -- I agree completely.

I'm pretty sure that even if you *could* get a computer to do everything for Accounting automatically, people would constantly become frustrated with parts of the resulting process -- from reports requested by management not having the formatting or items desired on them, to inflexibility getting an item charged

IBM Puff Piece ( Score: 2 ) avandesande ( 143899 ) writes:

I work on a claims processing system and 90% of this stuff is already automated.

Re: ( Score: 2 ) avandesande ( 143899 ) writes:

Some bills are just so fubar that someone has to look at them. You really think 'watson' is processing 100% of the bills?

Re: ( Score: 2 ) avandesande ( 143899 ) writes:

Uh... well maybe. But what does this have to do with being an IBM puff piece?

Re: ( Score: 2 ) avandesande ( 143899 ) writes:

You think the $12/hr staff at a doctor's office code and invoice bills correctly? The blame goes both ways. Really, our ridiculous and convoluted medical system is to blame. Imagine if doctors billed on a time basis like a lawyer.

gweihir ( 88907 ) , Monday January 02, 2017 @12:43PM ( #53592829 )

That is "automation". AI is something else... ( Score: 3 )

When you have people basically implementing a process without much understanding, it is pretty easy to automate their jobs away. The only thing Watson is contributing is the translation from natural language to a more formalized one. No actual intelligence needed.

Re: ( Score: 2 ) gweihir ( 88907 ) writes:

I wish. Artificial stupidity is a bit more advanced than AI, but it's nowhere near there yet.

LeftCoastThinker ( 4697521 ) writes:

This is not news or new ( Score: 2 )

Computers/automation/robotics have been replacing workers of all stripes including white collar workers since the ATM was introduced in 1967. Every place I have ever worked has had internal and external software that replaces white collar workers (where you used to need 10 people now you need 2).

The reality is that the economy is limited by a scarcity of labor when government doesn't interfere (the economy is essentially the sum of every worker's work multiplied by their efficiency as valued by the economy i

Don't worry, Trump has the solution ( Score: 4 ) Jeremi ( 14640 ) , Monday January 02, 2017 @02:09PM ( #53593245 ) Homepage

Turns out it's rather simple, really --- just ban computers. He's going to start by replacing computers with human couriers for the secure-messaging market, and move outward from there. By 2020 we should have most of the Internet replaced by the (now greatly expanded) Post Office.

Dixie_Flatline ( 5077 ) writes:

We could use a little more automation, if you ask ( Score: 2 )

At least, as long as banks keep writing the software they do.

My bank's records of my purchases aren't updating today. This is one of the biggest banks in Canada. Transactions don't update properly over weekends or holidays. Why? Who knows? Why has bank software EVER cared about weekends? What do business days matter to computers? And yet here we are. There's no monkey to turn the crank on a holiday, so I can't confirm my account activity.
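The weekend gap the commenter describes usually comes from batch settlement rules rather than any technical necessity. A minimal sketch of that kind of rule, with a made-up holiday calendar, shows how a Saturday purchase can stay invisible until Tuesday:

```python
from datetime import date, timedelta

# Hypothetical holiday calendar; real banks load these from a schedule.
HOLIDAYS = {date(2017, 1, 2)}  # New Year's Day observed on the Monday

def next_posting_date(txn_date: date) -> date:
    """Defer posting to the next business day, as legacy batch systems do."""
    d = txn_date
    while d.weekday() >= 5 or d in HOLIDAYS:  # 5 and 6 are Sat and Sun
        d += timedelta(days=1)
    return d

# A purchase made on Saturday, Dec 31, 2016 posts on Tuesday, Jan 3, 2017.
print(next_posting_date(date(2016, 12, 31)))  # 2017-01-03
```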

DogDude ( 805747 ) writes:

Re: ( Score: 2 )

free market people as opposed to corporatist

You need to pick up an economics textbook. Or a history textbook.

Re: ( Score: 2 ) GLMDesigns ( 2044134 ) writes:

Dude. Stop it. I've read the 18th C laissez-faire writers (de Gournay), Bastiat, the Austrian School (Carl Menger, Bohm-Bawerk, von Mises, Hayek), Rothbard, Milton Friedman. Free Market is opposed to corporatism. You might hate Ayn Rand but she skewered corporatists as much as she did socialists. You should read some of these people. You'll see that they are opposed to corporatism. Don't get your information from opponents who create straw men and then, so skillfully, defeat their opponent's arguments.

Re: ( Score: 2 ) fluffernutter ( 1411889 ) writes:

I'm laughing that you think there is a difference. How do these people participate in a free market without setting up corporations?

Re: ( Score: 2 ) GLMDesigns ( 2044134 ) writes:

Corporatism is the use of government pull to advance your business: the use of law and the police power of the state to aid your business against another's. This used to be called "mercantilism." Free market capitalism is opposed to this; the removal of the power of pull.

Re: ( Score: 2 ) GLMDesigns ( 2044134 ) writes:

Read Bastiat, Carl Menger, von Mises, Hayek, Milton Friedman. You'll see them all referring to the government as an agent which helps one set of businesses over another. Government may give loans, bailouts, etc... Free market people are against this. Corporatism /= Free Market. Don't only get your information from those who hate individualism and free markets - read (or in Milton Friedman's case listen) to their arguments. You may disagree with them but you'll see well regarded individuals who say that

Re: ( Score: 2 ) GLMDesigns ( 2044134 ) writes:

When a business gets government to give it special favors (Solyndra) or to give it tax breaks or a monopoly, this is corporatism. It used to be called mercantilism. In either case free-market capitalists stand in opposition to it. This is exactly what "laissez-faire" capitalism means: leave us alone, don't play favorites, stay away.

Re: ( Score: 2 ) pla ( 258480 ) writes:

How do these people participate in a free market without setting up corporations?

Have you ever bought anything from a farmers' market? Have you ever hired a plumber d/b/a himself rather than working for Plumbers-R-Us? Have you ever bought a used car directly from a private seller? Do you have a 401k/403b/457/TSP/IRA? Have you ever used eBay? Have you ever traded your labor for a paycheck (aka "worked") without hiding behind an intermediate shell-corp? The freeness of a market has nothing to do wit

Re: ( Score: 2 ) fluffernutter ( 1411889 ) writes:

Trump's staff are all billionaires? How many people do you know that became a billionaire by selling at a farmer's market?

Re: ( Score: 2 ) pla ( 258480 ) writes:

Okay, so you're just still pissing and moaning over Trump's win and have no actual point. That's fine, but you should take care not to make it sound too much like you actually have something meaningful to say.

Re: ( Score: 2 ) fluffernutter ( 1411889 ) writes:

I'll say something meaningful when you can point out which one of Trump's cabinet made their wealth on a farmer's market and without being affiliated with a corporation.

Re: ( Score: 3 ) Waffle Iron ( 339739 ) writes:

He's hired primarily free market people as opposed to corporatist

Free marketers don't generally campaign on a platform of protectionist trade policies and direct government intervention in job markets.

Re: ( Score: 2 ) GLMDesigns ( 2044134 ) writes:

No. They don't. But, for the moment, it looks as if Andy Puzder (Sec of Labor) and Mick Mulvaney (OMB) are fairly good free market people. We'll see. Chief of Staff Reince Priebus has made some free-market comments. (Again, we'll see.) Sec of Ed looks like she wants to break up an entrenched bureaucracy - might even work to remove Federal involvement. (Wishful thinking on my part) HUD - I'm hopeful that Ben Carson was hired to break up this ridiculous bureaucracy. If not, at least pare it down. Now, if

Re: ( Score: 3 ) phantomfive ( 622387 ) writes:

"Watson" is a marketing term from IBM, covering a lot of standard automation. It isn't the machine that won at Jeopardy (although that is included in the marketing term, if someone wants to pay for it). IBM tells managers, "We will have our amazing Watson technology solve this problem for you." The managers feel happy. Then IBM has some outsourced programmers code up a workflow app, with recurring annual subscription payments.

Re: ( Score: 2 ) Opportunist ( 166417 ) writes:

That's ok, there isn't really a decent insurance claim worker either, so they should do fine.

Re: ( Score: 2 ) TuringTest ( 533084 ) writes:

considering nobody has made any decent AI yet.

It doesn't matter. AI works best when there's a human in the loop, piloting the controls anyway.

What matters to a company is that 1 person + bots can now do the job that previously required hundreds of white collar workers, for much less salary. What happens to the other workers should not be a concern of the company managers, according to the modern religious creed - apparently some magical market hand takes care of that problem automatically.

Re: ( Score: 2 ) jedidiah ( 1196 ) writes:

Pretty much. US companies already use claims processing systems that use previous data to evaluate a current claim and spit out a number. Younger computer literate adjusters just feed the machine and push a button.
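A minimal sketch of that "spit out a number" step, with invented claim history: the suggestion is just a lookup over precedent, and anything without precedent still falls to a human adjuster.

```python
from statistics import mean

# Hypothetical history of adjudicated claims: (procedure code, paid amount).
history = [
    ("99213", 75.0), ("99213", 82.0), ("99213", 79.0),  # office visits
    ("71046", 310.0), ("71046", 295.0),                 # chest X-rays
]

def suggest_payout(procedure_code: str) -> float:
    """Evaluate a current claim against previous data and spit out a number."""
    paid = [amount for code, amount in history if code == procedure_code]
    if not paid:
        raise ValueError("no precedent; route to a human adjuster")
    return round(mean(paid), 2)

# The adjuster 'feeds the machine and pushes a button'.
print(suggest_payout("99213"))  # 78.67
```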

Re: ( Score: 2 ) Opportunist ( 166417 ) writes:

The hot topic on the management floor of 2030 is probably how it's no longer "android" but "gynoid".

Re: ( Score: 2 ) GameboyRMH ( 1153867 ) writes:

I was correcting people who refer to robots that look like women as androids before it was cool :-P

Joe_Dragon ( 2206452 ) writes:

Universities won't downsize, not with unlimited loans! ( Score: 2 )

Universities won't downsize, not with unlimited loans! (USA only.) If you need retraining you can get a loan, and you may need to go for 2-4 years (some credits may be too old and you may have to retake classes).

Re: ( Score: 2 ) Joe_Dragon ( 2206452 ) writes:

IBM helped Hitler automate his persecution of the Jews. So will Watson have safeguards to stop that, or any other killing off of people?

Re: ( Score: 2 ) GameboyRMH ( 1153867 ) writes:

It's not Parkinson's law, it's runaway inequality. The workforce continues to be more and more productive as it receives an unchanging or decreasing amount of compensation (in absolute terms - or an ever-decreasing share of the profits in relative terms), while the gains go to the 1%.

[Dec 27, 2016] Peak Robot: the Fragment on Machines

Dec 27, 2016 | econospeak.blogspot.com
http://econospeak.blogspot.com/2016/12/peak-robot-fragment-on-machines.html

December 25, 2016

Peak Robot: the Fragment on Machines

Martin Sklar's disaccumulation thesis * is a restatement and reinterpretation of passages in Marx's Grundrisse that have come to be known as the "fragment on machines." Compare, for example, the following two key excerpts.

Marx:

...to the degree that large industry develops, the creation of real wealth comes to depend less on labour time and on the amount of labour employed than on the power of the agencies set in motion during labour time, whose 'powerful effectiveness' is itself in turn out of all proportion to the direct labour time spent on their production, but depends rather on the general state of science and on the progress of technology, or the application of this science to production. ...

Labour no longer appears so much to be included within the production process; rather, the human being comes to relate more as watchman and regulator to the production process itself. (What holds for machinery holds likewise for the combination of human activities and the development of human intercourse.)

Sklar:

In consequence [of the passage from the accumulation phase of capitalism to the "disaccumulation" phase], and increasingly, human labor (i.e. the exercise of living labor-power) recedes from the condition of serving as a 'factor' of goods production, and by the same token, the mode of goods-production progressively undergoes reversion to a condition comparable to a gratuitous 'force of nature': energy, harnessed and directed through technically sophisticated machinery, produces goods, as trees produce fruit, without the involvement of, or need for, human labor-time in the immediate production process itself. Living labor-power in goods-production devolves upon the quantitatively declining role of watching, regulating, and superintending.

The main difference between the two arguments is that for Marx, the growing contradiction between the forces of production and the social relations produces "the material conditions to blow this foundation sky-high." For Sklar, with the benefit of another century of observation, disaccumulation appears as simply another phase in the evolution of capitalism -- albeit with revolutionary potential. But also with reactionary potential, in that the reduced dependence on labor power also suggests a reduced vulnerability to the withholding of labor power.

* http://econospeak.blogspot.ca/2016/12/peak-robot-accumulation-and-its-dis.html

-- Sandwichman

[Dec 26, 2016] Scientists Develop Robotic Hand For People With Quadriplegia

Dec 26, 2016 | science.slashdot.org
(phys.org) 22

Posted by BeauHD on Tuesday December 06, 2016 @07:05PM from the muscle-memory dept.

An anonymous reader quotes a report from Phys.Org:

Scientists have developed a mind-controlled robotic hand that allows people with certain types of spinal injuries to perform everyday tasks such as using a fork or drinking from a cup. The low-cost device was tested in Spain on six people with quadriplegia affecting their ability to grasp or manipulate objects. By wearing a cap that measures electric brain activity and eye movement, the users were able to send signals to a tablet computer that controlled the glove-like device attached to their hand. Participants in the small-scale study were able to perform daily activities better with the robotic hand than without, according to results published Tuesday in the journal Science Robotics.

It took participants just 10 minutes to learn how to use the system before they were able to carry out tasks such as picking up potato chips or signing a document. According to Surjo R. Soekadar, a neuroscientist at the University Hospital Tuebingen in Germany and lead author of the study, participants represented typical people with high spinal cord injuries, meaning they were able to move their shoulders but not their fingers. There were some limitations to the system, though. Users had to have sufficient function in their shoulder and arm to reach out with the robotic hand. And mounting the system required another person's help.

[Dec 26, 2016] Autonomous Shuttle Brakes For Squirrels, Skateboarders, and Texting Students

Dec 26, 2016 | tech.slashdot.org
(ieee.org) 74

Posted by BeauHD on Saturday December 10, 2016 @05:00AM from the squirrel-crossing dept.

Tekla Perry writes:

An autonomous shuttle from Auro Robotics is picking up and dropping off students, faculty, and visitors at the Santa Clara University campus seven days a week. It doesn't go fast, but it has to watch out for pedestrians, skateboarders, bicyclists, and bold squirrels (engineers added a special squirrel lidar on the bumper). An Auro engineer rides along at this point to keep the university happy, but soon will be replaced by a big red emergency stop button (think Staples Easy button). If you want a test drive, just look for a "shuttle stop" sign (there's one in front of the parking garage) and climb on; it doesn't ask for university ID.

[Dec 26, 2016] Robots Are Already Replacing Fast-Food Workers

Dec 26, 2016 | hardware.slashdot.org
(recode.net) 414

Posted by EditorDavid on Sunday December 11, 2016 @05:34PM from the may-I-take-your-order dept.

An anonymous reader quotes Recode:

Technology that replaces food service workers is already here. Sushi restaurants have been using machines to roll rice in nori for years, an otherwise monotonous and time-consuming task. The company Suzuka has robots that help assemble thousands of pieces of sushi an hour. In Mountain View, California, the startup Zume is trying to disrupt pizza with a pie-making machine. In Shanghai, there's a robot that makes ramen, and some cruise ships now mix drinks with bartending machines.

More directly to the heart of American fast-food cuisine, Momentum Machines, a restaurant concept with a robot that can supposedly flip hundreds of burgers an hour, applied for a building permit in San Francisco and started listing job openings this January, reported Eater. Then there's Eatsa, the automat restaurant where no human interaction is necessary, which has locations popping up across California.

[Dec 26, 2016] IBMs Watson Used In Life-Saving Medical Diagnosis

Dec 26, 2016 | science.slashdot.org
(businessinsider.co.id) 83

Posted by EditorDavid on Sunday December 11, 2016 @09:34PM from the damn-it-Jim-I'm-a-doctor-not-a-supercomputer dept.

"Supercomputing has another use," writes Slashdot reader rmdingler, sharing a story that quotes David Kenny, the General Manager of IBM Watson:
"There's a 60-year-old woman in Tokyo. She was at the University of Tokyo. She had been diagnosed with leukemia six years ago. She was living, but not healthy. So the University of Tokyo ran her genomic sequence through Watson and it was able to ascertain that they were off by one thing . Actually, she had two strains of leukemia. They did treat her and she is healthy."

"That's one example. Statistically, we're seeing that about one third of the time, Watson is proposing an additional diagnosis."

[Dec 26, 2016] Latest Microsoft Skype Preview Adds Real-Time Voice Translation For Phone Calls

Notable quotes:
"... Skype Translator, available in nine languages, uses artificial intelligence (AI) techniques such as deep-learning to train artificial neural networks and convert spoken chats in almost real time. The company says the app improves as it listens to more conversations. ..."
Dec 26, 2016 | tech.slashdot.org
(zdnet.com) 37

Posted by msmash on Monday December 12, 2016 @11:05AM from the worthwhile dept.

Microsoft has added the ability to use Skype Translator on calls to mobiles and landlines to its latest Skype Preview app. From a report on ZDNet: Up until now, Skype Translator was available to individuals making Skype-to-Skype calls. The new announcement of the expansion of Skype Translator to mobiles and landlines makes Skype Translator more widely available.

To test drive this, users need to be members of the Windows Insider Program. They need to install the latest version of Skype Preview on their Windows 10 PCs and to have Skype Credits or a subscription.

Skype Translator, available in nine languages, uses artificial intelligence (AI) techniques such as deep-learning to train artificial neural networks and convert spoken chats in almost real time. The company says the app improves as it listens to more conversations.

[Dec 26, 2016] White House: US Needs a Stronger Social Safety Net To Help Workers Displaced by Robots

Dec 26, 2016 | hardware.slashdot.org
(recode.net) 623

Posted by BeauHD on Wednesday December 21, 2016 @08:00AM from the one-day-not-so-far-away dept.

The White House has released a new report warning of a not-too-distant future where artificial intelligence and robotics will take the place of human labor. Recode highlights in its report the three key areas in which the White House says the U.S. government needs to prepare for the next wave of job displacement caused by robotic automation:

The report says the government, meaning the incoming Trump administration, will have to forge ahead with new policies and grapple with the complexities of existing social services to protect the millions of Americans who face displacement by advances in automation, robotics and artificial intelligence. The report also calls on the government to keep a close eye on fostering competition in the AI industry, since the companies with the most data will be able to create the most advanced products, effectively preventing new startups from having a chance to even compete.

[Dec 26, 2016] Stanford Built a Humanoid Submarine Robot To Explore a 17th-Century Shipwreck

Notable quotes:
"... IEEE Robotics and Automation Magazine ..."
Dec 26, 2016 | hardware.slashdot.org
(ieee.org) 47

Posted by BeauHD on Friday December 23, 2016 @05:00AM from the how-it's-made dept.

schwit1 quotes a report from IEEE Spectrum:

Back in April, Stanford University professor Oussama Khatib led a team of researchers on an underwater archaeological expedition, 30 kilometers off the southern coast of France, to La Lune, King Louis XIV's sunken 17th-century flagship. Rather than dive to the site of the wreck 100 meters below the surface, which is a very bad idea for almost everyone, Khatib's team brought along a custom-made humanoid submarine robot called Ocean One. In this month's issue of IEEE Robotics and Automation Magazine, the Stanford researchers describe in detail how they designed and built the robot, a hybrid between a humanoid and an underwater remotely operated vehicle (ROV), and also how they managed to send it down to the resting place of La Lune, where it used its three-fingered hands to retrieve a vase. Most ocean-ready ROVs are boxy little submarines that might have an arm on them if you're lucky, but they're not really designed for the kind of fine manipulation that underwater archaeology demands. You could send down a human diver instead, but once you get past about 40 meters, things start to get both complicated and dangerous. Ocean One's humanoid design means that it's easy and intuitive for a human to remotely perform delicate archeological tasks through a telepresence interface.

schwit1 notes: "Ocean One is the best name they could come up with?"

[Dec 26, 2016] Slashdot Asks: Will Farming Be Fully Automated in the Future?

Dec 26, 2016 | hardware.slashdot.org
(bbc.com) 278

Posted by msmash on Friday November 25, 2016 @12:10AM from the interesting-things dept.

BBC has a report today in which, citing several financial institutions and analysts, it claims that in the not-too-distant future, our fields could be tilled, sown, tended and harvested entirely by fleets of co-operating autonomous machines by land and air. An excerpt from the article:

Driverless tractors that can follow pre-programmed routes are already being deployed at large farms around the world. Drones are buzzing over fields assessing crop health and soil conditions. Ground sensors are monitoring the amount of water and nutrients in the soil, triggering irrigation and fertilizer applications. And in Japan, the world's first entirely automated lettuce farm is due for launch next year. The future of farming is automated. The World Bank says we'll need to produce 50% more food by 2050 if the global population continues to rise at its current pace. But the effects of climate change could see crop yields falling by more than a quarter. So autonomous tractors, ground-based sensors, flying drones and enclosed hydroponic farms could all help farmers produce more food, more sustainably, at lower cost.

[Dec 26, 2016] Self-Driving Trucks Begin Real-World Tests on Ohios Highways

Dec 26, 2016 | news.slashdot.org
(cbsnews.com) 178

Posted by EditorDavid on Sunday November 27, 2016 @04:35PM from the trucking-up-to-Buffalo dept.

An anonymous reader writes:

"A vehicle from self-driving truck maker Otto will travel a 35-mile stretch of U.S. Route 33 on Monday in central Ohio..." reports the Associated Press.

The truck "will travel in regular traffic, and a driver in the truck will be positioned to intervene should anything go awry, Department of Transportation spokesman Matt Bruning said Friday, adding that 'safety is obviously No. 1.'"

Ohio sees this route as "a corridor where new technologies can be safely tested in real-life traffic, aided by a fiber-optic cable network and sensor systems slated for installation next year" -- although next week the truck will also start driving on the Ohio Turnpike.

[Dec 26, 2016] Stephen Hawking: Automation and AI Is Going To Decimate Middle Class Jobs

Dec 26, 2016 | tech.slashdot.org
(businessinsider.com) 468

Posted by BeauHD on Friday December 02, 2016 @05:00PM from the be-afraid-very-afraid dept.

An anonymous reader quotes a report from Business Insider:

In a column in The Guardian, the world-famous physicist wrote that "the automation of factories has already decimated jobs in traditional manufacturing, and the rise of artificial intelligence is likely to extend this job destruction deep into the middle classes, with only the most caring, creative or supervisory roles remaining." He adds his voice to a growing chorus of experts concerned about the effects that technology will have on the workforce in the coming years and decades. The fear is that while artificial intelligence will bring radical increases in efficiency in industry, for ordinary people this will translate into unemployment and uncertainty, as their human jobs are replaced by machines.

Automation will, "in turn will accelerate the already widening economic inequality around the world," Hawking wrote. "The internet and the platforms that it makes possible allow very small groups of individuals to make enormous profits while employing very few people. This is inevitable, it is progress, but it is also socially destructive." He frames this economic anxiety as a reason for the rise in right-wing, populist politics in the West: "We are living in a world of widening, not diminishing, financial inequality, in which many people can see not just their standard of living, but their ability to earn a living at all, disappearing. It is no wonder then that they are searching for a new deal, which Trump and Brexit might have appeared to represent." Combined with other issues -- overpopulation, climate change, disease -- we are, Hawking warns ominously, at "the most dangerous moment in the development of humanity." Humanity must come together if we are to overcome these challenges, he says.

[Dec 26, 2016] Many CEOs Believe Technology Will Make People Largely Irrelevant

Notable quotes:
"... The firm says that 44 percent of the CEOs surveyed agreed that robotics, automation and AI would reshape the future of many work places by making people "largely irrelevant." ..."
Dec 26, 2016 | it.slashdot.org
(betanews.com) 541

Posted by msmash on Monday December 05, 2016 @02:20PM from the shape-of-things-to-come dept.

An anonymous reader shares a report on BetaNews:

Although artificial intelligence (AI), robotics and other emerging technologies may reshape the world as we know it, a new global study has revealed that many CEOs now value technology over people when it comes to the future of their businesses. The study was conducted by the Los Angeles-based management consultant firm Korn Ferry, which interviewed 800 business leaders across a variety of multi-million and multi-billion dollar global organizations.

The firm says that 44 percent of the CEOs surveyed agreed that robotics, automation and AI would reshape the future of many work places by making people "largely irrelevant."

Jean-Marc Laouchez, the global managing director of solutions at Korn Ferry, explains why many CEOs have adopted this controversial mindset, saying:

"Leaders may be facing what experts call a tangibility bias. Facing uncertainty, they are putting priority in their thinking, planning and execution on the tangible -- what they can see, touch and measure, such as technology instruments."

[Dec 26, 2016] Microsoft Researchers Offer Predictions For AI, Deep Learning

Dec 26, 2016 | hardware.slashdot.org
(theverge.com) 102

Posted by BeauHD on Tuesday December 06, 2016 @10:30PM from the what-to-expect dept.

An anonymous reader quotes a report from The Verge:

Microsoft polled 17 women working in its research organization about the technology advances they expect to see in 2017, as well as a decade later in 2027. The researchers' predictions touch on natural language processing, machine learning, agricultural software, and virtual reality, among other topics. For virtual reality, Mar Gonzalez Franco, a researcher in Microsoft's Redmond lab, believes body tracking will improve next year, and then over the next decade we'll have "rich multi-sensorial experiences that will be capable of producing hallucinations which blend or alter perceived reality."

Haptic devices will simulate touch to further enhance the sensory experience. Meanwhile, Susan Dumais, a scientist and deputy managing director at the Redmond lab, believes deep learning will help improve web search results next year.

In 2027, however, the search box will disappear, she says.

It'll be replaced by search that's more "ubiquitous, embedded, and contextually sensitive." She says we're already seeing some of this in voice-controlled searches through mobile and smart home devices.

We might eventually be able to look things up with either sound, images, or video. Plus, our searches will respond to "current location, content, entities, and activities" without us explicitly mentioning them, she says.

Of course, it's worth noting that Microsoft has been losing the search box war to Google, so it isn't surprising that the company thinks search will die. With global warming as a looming threat, Asta Roseway, principal research designer, says by 2027 farmers will use AI to maintain healthy crop yields, even with "climate change, drought, and disaster."

Low-energy farming solutions, like vertical farming and aquaponics, will also be essential to keeping the food supply high, she says. You can view all 17 predictions here.

[Dec 26, 2016] Neoliberalism led to impoverishment of the lower 80 percent of the USA population, with a large part of the US population living in a third world country

Notable quotes:
"... Efforts which led to impoverishment of lower 80% the USA population with a large part of the US population living in a third world country. This "third world country" includes Wal-Mart and other retail employees, those who have McJobs in food sector, contractors, especially such as Uber "contractors", Amazon packers. This is a real third world country within the USA and probably 50% population living in it. ..."
"... While conversion of electricity supply from coal to wind and solar was more or less successful (much less then optimists claim, because it requires building of buffer gas powered plants and East-West high voltage transmission lines), the scarcity of oil is probably within the lifespan of boomers. Let's say within the next 20 years. That spells deep trouble to economic growth as we know it, even with all those machinations and number racket that now is called GDP (gambling now is a part of GDP). And in worst case might spell troubles to capitalism as social system, to say nothing about neoliberalism and neoliberal globalization. The latter (as well as dollar hegemony) is under considerable stress even now. But here "doomers" were wrong so often in the past, that there might be chance that this is not inevitable. ..."
"... Shale gas production in the USA is unsustainable even more then shale oil production. So the question is not if it declines, but when. The future decline (might be even Seneca Cliff decline) is beyond reasonable doubt. ..."
Dec 26, 2016 | economistsview.typepad.com

ilsm -> pgl... December 26, 2016 at 05:12 AM

"What is good for wall st. is good for America". The remains of the late 19th century anti trust/regulation momentum are democrat farmer labor wing in Minnesota, if it still exists. An example: how farmers organized to keep railroads in their place. Today populists are called deplorable, before they ever get going.

And US' "libruls" are corporatist war mongers.

Used to be the deplorable would be the libruls!

Division!

likbez -> pgl...

I browsed it and see more or less typical pro-neoliberal sentiments, despite some critique of neoliberalism at the end.

This guy does not understand history and does not want to understand. He propagates or invents historical myths. One thing that he really does not understand is how WWI and WWII propelled the USA at the expense of Europe. He also does not understand why the New Deal was adopted, and why the existence of the USSR was the key to "reasonable" (as in "not self-destructive") behaviour of the US elite till the late '70s. And how promptly the US elite changed to self-destructive habits after 1991. In a way he is a preacher, not a scientist. So he is probably not a second-rate but a third-rate thinker in this area.

While Trumpism (aka "bastard neoliberalism") might not be an answer to challenges the USA is facing, it is definitely a sign that "this time is different" and at least part of the US elite realized that it is too dangerous to kick the can down the road. That's why Bush and Clinton political clans were sidelined this time.

There are powerful factors that make the US economic position somewhat fragile, and while Trump is a very questionable answer to the challenges US society faces, unlike Hillary he might be more reasonable in his foreign policy, abandoning efforts to expand the global neoliberal empire led by the USA.

Efforts which led to impoverishment of the lower 80% of the USA population, with a large part of the US population living in a third world country. This "third world country" includes Wal-Mart and other retail employees, those who have McJobs in the food sector, contractors, especially such as Uber "contractors", and Amazon packers. This is a real third world country within the USA, and probably 50% of the population lives in it.

Add to this the decline of US infrastructure due to the overstretch of imperial building efforts (which recalls the troubles of the British empire).

I see several factors that IMHO make the current situation dangerous and unsustainable, Trump or no Trump:

1. Rapid growth of population. The US population doubled in less than 70 years. Currently at 318 million, the USA is the third most populous country on earth. That spells trouble for democracy and ecology, to name just two. It might also catalyze separatist movements, with two already present (Alaska and Texas).

2. Plateau oil. While conversion of electricity supply from coal to wind and solar was more or less successful (much less than optimists claim, because it requires building buffer gas-powered plants and East-West high-voltage transmission lines), the scarcity of oil is probably within the lifespan of boomers. Let's say within the next 20 years. That spells deep trouble for economic growth as we know it, even with all those machinations and the numbers racket that is now called GDP (gambling is now a part of GDP). And in the worst case it might spell trouble for capitalism as a social system, to say nothing of neoliberalism and neoliberal globalization. The latter (as well as dollar hegemony) is under considerable stress even now. But here "doomers" were wrong so often in the past that there might be a chance that this is not inevitable.

3. Shale gas production in the USA is even more unsustainable than shale oil production. So the question is not if it declines, but when. The future decline (it might even be a Seneca Cliff decline) is beyond reasonable doubt.

4. Growth of automation endangers the remaining jobs, even jobs in the service sector. Cashiers and waiters are now on the firing line. Wal-Mart, ShopRite, etc., are already using automatic cashier machines in some stores. Wal-Mart also uses automatic machines in the back office, eliminating staff in the "cash office".

Waiters might be a more difficult task, but orders and checkouts are computerized in many restaurants, so the function is reduced to bringing food. So much for the last refuge of recent college graduates.

The successes in speech recognition are such that Microsoft now provides on-the-fly translation in Skype. There are also instances of successful use of computers in medical diagnostics. https://en.wikipedia.org/wiki/Computer-aided_diagnosis

IT will continue to be outsourced as profits are way too big for anything to stop this trend.

[Dec 26, 2016] Michigan Lets Autonomous Cars On Roads Without Human Driver

Notable quotes:
"... Companies can now test self-driving cars on Michigan public roads without a driver or steering wheel under new laws that could push the state to the forefront of autonomous vehicle development. ..."
Dec 26, 2016 | tech.slashdot.org
(go.com) 166

Posted by msmash on Friday December 09, 2016 @01:00PM from the it's-coming dept.

Companies can now test self-driving cars on Michigan public roads without a driver or steering wheel under new laws that could push the state to the forefront of autonomous vehicle development.

From a report on ABC:

The package of bills signed into law Friday comes with few specific state regulations and leaves many decisions up to automakers and companies like Google and Uber. It also allows automakers and tech companies to run autonomous taxi services and permits test parades of self-driving tractor-trailers as long as humans are in each truck. And they allow the sale of self-driving vehicles to the public once they are tested and certified, according to the state. The bills allow testing without burdensome regulations so the industry can move forward with potentially life-saving technology, said Gov. Rick Snyder, who was to sign the bills. "It makes Michigan a place where particularly for the auto industry it's a good place to do work," he said.

[Dec 26, 2016] Googles DeepMind is Opening Up Its Flagship Platform To AI Researchers Outside the Company

Dec 26, 2016 | tech.slashdot.org
(businessinsider.com) 22

Posted by msmash on Monday December 05, 2016 @12:20PM from the everyone-welcome dept.

Artificial intelligence (AI) researchers around the world will soon be able to use DeepMind's "flagship" platform to develop innovative computer systems that can learn and think for themselves.

From a report on BusinessInsider:

DeepMind, which was acquired by Google for $400 million in 2014, announced on Monday that it is open-sourcing its "Lab" from this week onwards so that others can try and make advances in the notoriously complex field of AI.

The company says that the DeepMind Lab, which it has been using internally for some time, is a 3D game-like platform tailored for agent-based AI research. [...]

The DeepMind Lab aims to combine several different AI research areas into one environment. Researchers will be able to test their AI agent's abilities on navigation, memory, and 3D vision, while determining how good they are at planning and strategy.

[Dec 26, 2016] Does Code Reuse Endanger Secure Software Development?

Dec 26, 2016 | it.slashdot.org
(threatpost.com) 148

Posted by EditorDavid on Saturday December 17, 2016 @07:34PM from the does-code-reuse-endanger-secure-software-development dept.

msm1267 quotes ThreatPost: The amount of insecure software tied to reused third-party libraries and lingering in applications long after patches have been deployed is staggering. It's a habitual problem perpetuated by developers failing to vet third-party code for vulnerabilities, and some repositories taking a hands-off approach with the code they host. This scenario allows attackers to target one overlooked component flaw used in millions of applications instead of focusing on a single application security vulnerability.

The real-world consequences have been demonstrated in the past few years with the Heartbleed vulnerability in OpenSSL, Shellshock in GNU Bash, and a deserialization vulnerability exploited in a recent high-profile attack against the San Francisco Municipal Transportation Agency. These are three instances where developers reuse libraries and frameworks that contain unpatched flaws in production applications... According to security experts, the problem is two-fold. On one hand, developers use reliable code that at a later date is found to have a vulnerability. Second, insecure code is used by a developer who doesn't exercise due diligence on the software libraries used in their project.
That seems like a one-sided take, so I'm curious what Slashdot readers think. Does code reuse endanger secure software development?
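The due diligence the experts describe can be partly mechanized. Below is a minimal sketch of the idea, with invented package names and advisory data (a real project would query a feed such as OSV or the GitHub Advisory Database): check every pinned dependency against a list of known-vulnerable versions.

```python
# Hypothetical advisories: package name -> versions known to be vulnerable.
ADVISORIES = {
    "examplelib": {"1.0.1", "1.0.2"},
    "otherlib": {"2.3.0"},
}

def audit(requirements):
    """Return pinned 'name==version' entries that match a known advisory."""
    flagged = []
    for req in requirements:
        name, _, version = req.partition("==")
        if version in ADVISORIES.get(name, set()):
            flagged.append(req)
    return flagged

# One vulnerable pin is caught; the up-to-date one passes.
print(audit(["examplelib==1.0.2", "otherlib==2.4.0"]))  # ['examplelib==1.0.2']
```

The catch, as the article notes, is that a library believed reliable today may be added to the advisory list tomorrow, so the check has to run continuously, not just at build time.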

[Dec 26, 2016] Ask Slashdot: Has Your Team Ever Succumbed To Hype Driven Development?

Dec 26, 2016 | ask.slashdot.org
(daftcode.pl) 332

Posted by EditorDavid on Sunday November 27, 2016 @11:30PM from the TDD-vs-HDD dept.

marekkirejczyk, the VP of Engineering at development shop Daftcode, shares a warning about hype-driven development: Someone reads a blog post, it's trending on Twitter, and we just came back from a conference where there was a great talk about it. Soon after, the team starts using this new shiny technology (or software architecture design paradigm), but instead of going faster (as promised) and building a better product, they get into trouble. They slow down, get demotivated, have problems delivering the next working version to production.
Describing behind-schedule teams that "just need a few more days to sort it all out," he blames all the hype surrounding React.js, microservices, NoSQL, and that "Test-Driven Development Is Dead" blog post by Ruby on Rails creator David Heinemeier Hansson. ("The list goes on and on... The root of all evil seems to be social media.")

Does all this sound familiar to any Slashdot readers? Has your team ever succumbed to hype-driven development?

[Dec 05, 2016] Why I left Google -- discussion

Notable quotes:
"... "Google was the rich kid who, after having discovered he wasn't invited to the party, built his own party in retaliation," Whittaker wrote. "The fact that no one came to Google's party became the elephant in the room." ..."
"... Isn't it inevitable that Google will end up like Microsoft. A brain-dead dinosaur employing sycophantic middle class bores, who are simply working towards a safe haven of retirement. In the end Google will be passed by. It's not a design-led innovator like Apple: it's a boring, grey utilitarian, Soviet-like beast. Google Apps are cheap - but very nasty - Gmail is a terrible UI - and great designers will never work for this anti-design/pro-algorithms empire. ..."
"... All of Google's products are TERRIBLE except for Gmail, and even that is inferior to Outlook on the web now. ..."
"... I used Google Apps for years, and Google just doesn't listen to customers. The engineers that ran the company needed some corporate intervention. I just think Larry Page tried to turn Google into a different company, rather than just focusing the great ideas into actually great products. ..."
"... It seems the tech titans all have this pendulum thing going on. Google appears to be beginning its swing in the "evil" direction. ..."
"... You claim old Google empowered intelligent people to be innovative, with the belief their creations would prove viable in the marketplace. You then go on to name Gmail and Chrome as the accomplishments of that endeavour. Are you ****** serious? ..."
"... When you arrived at Google it had already turned the internet into a giant spamsense depository with the majority of screen real estate consumed by Google's ads. The downhill spiral did not begin with Google+, but it may end there. On a lighter note, you are now free. Launch a start-up and fill the gaping hole which will be left by the fall of the former giant. ..."
"... Great post. Appreciate the insights the warning about what happens when bottom-up entrepreneurship loses out to top-down corporate dictums. ..."
"... The ability to actually consume shared content in an efficient and productive manner is still as broken as ever. They never addressed the issue in Buzz and still haven't with G+ despite people ranting at them for this functionality forever. ..."
"... Sounds like Google have stopped focusing on what problem they're solving and moving onto trying to influence consumer behaviour - always a much more difficult trick to pull off. Great article - well done for sharing in such a humble and ethical manner. Best of luck for the future. ..."
Apr 02, 2012 | JW on Tech

Whittaker, who joined Google in 2009 and left last month, described a corporate culture clearly divided into two eras: "Before Google+," and "After."

"After" is pretty terrible, in his view.

Google (GOOG, Fortune 500) once gave its engineers the time and resources to be creative. That experimental approach yielded several home-run hits like Chrome and Gmail. But Google fell behind in one key area: competing with Facebook.

That turned into corporate priority No. 1 when Larry Page took over as the company's CEO. "Social" became Google's battle cry, and anything that didn't support Google+ was viewed as a distraction.

"Suddenly, 20% meant half-assed," wrote Whittaker, referring to Google's famous policy of letting employees spend a fifth of their time on projects other than their core job. "The trappings of entrepreneurship were dismantled."

Whittaker is not the first ex-Googler to express that line of criticism. Several high-level employees have left after complaining that the "start-up spirit" of Google has been replaced by a more mature but staid culture focused on the bottom line.

The interesting thing about Whittaker's take is that it was posted not on his personal blog, but on an official blog of Microsoft (MSFT, Fortune 500), Google's arch nemesis.

Spokesmen from Microsoft and Google declined to comment.

The battle between Microsoft and Google has heated up recently, as the Federal Trade Commission and the European Commission begin to investigate Google for potential antitrust violations. Microsoft, with its Bing search engine, has doubled its share of the search market since its June 2010 founding, but has been unsuccessful at taking market share away from Google.

Microsoft is increasingly willing to call out Google for what it sees as illicit behavior. A year ago, the software company released a long list of gripes about Google's monopolistic actions, and last month it said Google was violating Internet Explorer users' privacy.

Despite his misgivings about what Google cast aside to make Google+ a reality, Whittaker thinks that the social network was worth a shot. If it had worked -- if Google had dramatically changed the social Web for the better -- it would have been a heroic gamble.

But it didn't. It's too early to write Google+ off, but the site is developing a reputation as a ghost town. Google says 90 million people have signed up, but analysts and anecdotal evidence show that fairly few have turned into heavy users.

"Google was the rich kid who, after having discovered he wasn't invited to the party, built his own party in retaliation," Whittaker wrote. "The fact that no one came to Google's party became the elephant in the room."

Ian Smith:

Isn't it inevitable that Google will end up like Microsoft? A brain-dead dinosaur employing sycophantic middle class bores, who are simply working towards a safe haven of retirement. In the end Google will be passed by. It's not a design-led innovator like Apple: it's a boring, grey utilitarian, Soviet-like beast. Google Apps are cheap - but very nasty - Gmail is a terrible UI - and great designers will never work for this anti-design/pro-algorithms empire.

Steve

I have to be honest with you. All of Google's products are TERRIBLE except for Gmail, and even that is inferior to Outlook on the web now.

I used Google Apps for years, and Google just doesn't listen to customers. The engineers that ran the company needed some corporate intervention. I just think Larry Page tried to turn Google into a different company, rather than just focusing the great ideas into actually great products.

Matt:

It seems the tech titans all have this pendulum thing going on. Google appears to be beginning its swing in the "evil" direction. Apple seems like they're nearing the peak of "evil".

And Microsoft seems like they're back in the middle, trying to swing up to the "good" side. So, if you look at it from that perspective, Microsoft is the obvious choice.

Good luck!

VVR:

The stark truth in this insightful piece is the stuff you have not written..

At least you had a choice in leaving Google. But we as users don't.

I have years of email in Gmail, and docs and YouTube and so on. I can't switch.

"Creepy" is not the word that comes to mind when ads for saunas, online textbooks, etc. suddenly begin to track you, no matter which website you visit.

You know you have lost when this happens..

David:

A fascinating insight; I think this reflects what a lot of people are seeing of Google from the outside. It seems everybody but Page can see that Google+ is - whilst technically brilliant - totally superfluous; your daughter is on the money. Also apparent from the outside is the desperation that surrounds Google+ - Page needs to face facts, hold his hands up and walk away from Social before they lose more staff like you, more users, and all the magic that made Google so great.

Best of luck with your new career at Microsoft; I hope they foster and encourage you as the Google of old did.

Raymond Traylor:

I understand Facebook is a threat to Google search, but beating Facebook at their core competency was doomed to fail. Just like Bing to Google. I was so disappointed in Google following Facebook's evil ways of wanting to know everything about me that I've stopped using their services one at a time, starting with Android.

I am willing to pay for a lot of Google's free services to avoid advertising and the harvesting of my private data.

root

You claim old Google empowered intelligent people to be innovative, with the belief their creations would prove viable in the marketplace. You then go on to name Gmail and Chrome as the accomplishments of that endeavour. Are you ****** serious?

Re-branding web-based email is no more innovative than purchasing users for your social networking site, like Facebook did. Same for Chrome, or would you argue Google acquiring VOIP companies to then provide a mediocre service called Google Voice was also innovative?

When you arrived at Google it had already turned the internet into a giant spamsense depository with the majority of screen real estate consumed by Google's ads. The downhill spiral did not begin with Google+, but it may end there. On a lighter note, you are now free. Launch a start-up and fill the gaping hole which will be left by the fall of the former giant.

RBLevin:

Great post. Appreciate the insights and the warning about what happens when bottom-up entrepreneurship loses out to top-down corporate dictums.

Re: sharing, while I agree sharing isn't broken (heck, it worked when all we had was email), it certainly needs more improvement. I can't stand Facebook. Hate the UI, don't care for the culture. Twitter is too noisy and, also, the UI sucks. I'm one of those who actually thinks Google+ got 21st century BBSing right.

But if that's at the cost of everything else that made Google great, then it's a high price to pay.

BTW, you can say a lot of these same things about similar moves Microsoft has made over the years, where the top brass decided they knew better, and screwed over developers and their investments in mountains of code.

So, whether it happens in an HR context or a customer context, it still sucks as a practice.

bound2run:

I have made a concerted effort to move away from Google products after their recent March 1st privacy policy change. I must say that Bing is working just fine for me. Gmail will be a bit tougher, but I am making strides. Now I just need to dump my Android phone and I will be "creepy-free" ... for the time being.

Phil Ashman:

The ability to actually consume shared content in an efficient and productive manner is still as broken as ever. They never addressed the issue in Buzz and still haven't with G+ despite people ranting at them for this functionality forever.

Funny that I should read your post today, as I wrote the following comment on another person's post a couple of days back, over Vic's recent interview where someone brought up the lack of a G+ API:

"But if it were a social network.......then they are doing a pretty piss poor job of managing the G+ interface and productive consumption of the stream. It would be nice if there was at least an API so some 3rd party clients could assist with the filtering of the noise, but in reality the issue is in the distribution of the stream. What really burns me is that it wouldn't be that hard for them to create something like subscribable circles.

Unfortunately the reality is that they just don't care about whether the G+ stream is productive for you at the moment as their primary concern isn't for you to productively share and discuss your interests with the world, but to simply provide a way for you to tell Google what you like so they can target you with advertising. As a result, the social part of Google+ really isn't anything to shout about at the moment."

You've just confirmed my fear about how the company's focus has changed.

Alice Wonder:

Thanks for this. I love many of the things Google has done: Summer of Code, WebM, Google Earth, free web fonts, etc.

I really was disappointed with Google+. I waited for an invite, and when I finally got one, I started to use it. Then the Google main search page started to include Google+ notifications, and the JS crashed my browser. Repeatedly. I had to clear my cache and delete my cookies just so Google wouldn't know it was me and crash search with a notification. They fixed that issue quickly, but I did not understand why they would risk their flagship product (search) to promote Google+. The search page really should be a simple form.

And Google+ not allowing aliases? Do I want a company that is tracking everything I do centrally to have my real name attached to that tracking? No. Hence I do not use Google+ anymore, and am switching to a different search engine and doing as little as I can with Google.

I really don't like to dislike Google, because of all the cool things they have done; it is really sad for me to see this happening.

Mike Whitehead:

Sounds like Google has stopped focusing on what problem it's solving and moved on to trying to influence consumer behaviour - always a much more difficult trick to pull off. Great article - well done for sharing in such a humble and ethical manner. Best of luck for the future.

jmacdonald (14 Mar 2012 4:07 AM): Great write-up.

Personally I think that Google and Facebook have misread the sociological trend against the toleration of adverts, to such an extent that if Google really is following the 'Facebook knows everything and we do too' route, I suspect both companies may run into trouble as advertising CPMs fall and we wretched consumers find ways around experiences that we don't want.

More on this angle at www.jonathanmacdonald.com, for anyone that cares about that kind of thing.

Mahboob Ihsan:

Google products are useful, but they probably could have done more to improve GUI, standardization and usability. You can continue to win business in the short term, enjoying your strategic advantage, as long as you don't have competitors. But as soon as you have just one competitor offering quality products at the same cost, your strategic advantage is gone and you have to compete on technology, cost and quality. Google has been spreading its business wings into so many areas, probably with a single-minded focus on short-term gains. Google should have learnt from Apple that every new offering should be better (in the user's eyes) than the previous one.

Victor Ramirez:

Thanks for the thoughtful blog post. Anybody who has objectively observed Google's behavior over the past few years has known that Google is going in this direction. People have to recognize that Google, while very technically smart, is an advertising company first and foremost. Its motto says the right things about being good and organizing the world's information, but we all know what Google is honestly interested in: getting more data about people, so it can show them ads they'll be more likely to click on, and so make more money.

Right now, Google faces what might be considered an existential threat from Facebook, because Facebook is the company best positioned to gather social data. Facebook is collecting so much social data that the odds are its long-term vision is, at some point, to compete seriously in search using that data. Between Facebook's huge user base and its momentum amongst businesses (look at how many Super Bowl ads featured Facebook pages, look at the sheer number of companies listed at www.buyfacebookfansreviews.com that do nothing but promote Facebook business pages, and look at the biggest factor out there - the fact that Facebook's IPO is set to dominate 2012), I think Facebook has the first legitimate shot at a combination of quality results and user experience that could actually challenge Google's dominance, and that's pretty exciting to watch. The fact that Google is working on Google+ so much and making it such a centerpiece of its efforts only goes to illustrate how critical this is and how seriously Google takes this challenge to its core business. I think Facebook eventually enters the search market and really disrupts it, and it will be interesting to see how Google acts from a position of weakness.

Keith Watanabe:

They're just like any company that gets big. You end up losing visibility into things, believe that you require the middle-management layer to coordinate, then start getting into the battlegrounds of turf wars, because the people hired have hidden agendas and start bringing in their armies of yes-men to take control as they attempt to climb the corporate ladder. Meanwhile, the large war chest accumulated and the dominance of a market make such a company believe in its own invulnerability. But that's when you're most vulnerable, because you get sloppy, stop seeing the small things that slip through the cracks, forget your roots and lose your way and soul. Humility is really your only constant savior.

By the way, more than likely Facebook will become the same way, as will any other company that grows big. People tend to forget the days when they were struggling and start focusing on why they are so great. You lose that hunger, that desire to do better, because you don't have to worry about eating pinches of salt on a few nibbles of rice. This is how civilization just is. If you want to move beyond that, humans need to change this structure of massive growth -> vanity -> decadence -> back to poverty.

Anon:

This perceived shift of focus happens at every company when you go from being an idealistic student to becoming an adult that has to pay the bills. When you reach such a large scale with so much at stake, it is easy to stop innovating. It is easy to get a mix of people who don't share the same vision when you have to hire on a lot of staff. Stock prices put an emphasis on perpetual monetization. Let's keep in mind that Facebook only recently IPO'd and in the debate for personal privacy, all the players are potentially "evil" and none of them are being held to account by any public policy.

The shutdown of Google Labs was a sad day. The later shutdown of Google Health was also sad, I thought, as it was a free service already in existence, akin to what Ontario has wasted over $1 billion on with E-Health. Surely these closures are a sign that the intellectual capital of the founders has been exhausted. They took their core competencies to the maximum level quickly, which means all the organic growth in those areas is mostly already realized.

There needs to be some torch passing or greater empowerment in the lower ranks when things like this happen. Take a look at RIM. Take a look at many other workplaces. It isn't an isolated incident. There are constantly pressures between where you think your business should go, where investors tell you to go, and where the industry itself is actually headed. This guy is apparently very troubled that his name is attached to G+ development and he is trying to distance himself from his own failure. Probably the absence of Google Labs puts a particular emphasis on the failure of G+ as one of the only new service projects to be delivered recently.

After so much time, any company realizes that new ideas can only really come with new people or from outside influences. As an attempt to grow their advertising-driven business, the idea that they needed to compete with Facebook to continue to grow wasn't entirely wrong. It was just poorly executed, too late, and at the expense of potentially focusing their efforts on something else under Google Labs that would have been more recognizably their own (Android was an acquisition, not organically grown internally). There is no revolution yet, because Facebook and Google have not replaced any of each other's services with a better alternative.

The complaints in the final paragraph of the blog regarding privacy are all complaints about how much Google wants to be Facebook. The thing is that Google+, just like all the aforementioned services, is an opt-in service with a clear ToS declared when you join, even if you already have a Google account for other services. The transparency of Google's privacy policy is on par with, if not better than, most other competing service providers'. The only time it draws criticism is when changes are made to say that if you use multiple services, they may have access to the same pool of information internally. It's a contract, and it had to be acknowledged when it changed. When advertising does happen, it is much more obvious to me that it is advertising via a Google service than when Facebook decides to tell me who likes what. Not to give either the green light here, but the evolution is one of integrating your network into the suggestions, and again, it isn't isolated to any one agency.

One way to raise and enforce objections to potential mishandling of information is to develop a blanket minimum-requirement on privacy policy to apply to all businesses, regarding the handling of customer information. We are blind if we think Google+ and Facebook are the only businesses using data in these ways. This blanket minimum requirement could be voluntarily adopted via 3rd party certification, or it could be government enforced; but the point is that someone other than the business itself would formulate it, and it must be openly available to debate and public scrutiny/revision. It is a sort of "User License Agreement" for information about us. If James Whittaker left to partake in something along these lines, it sure would make his blog entry more credible, unless Microsoft is focused so much more greatly on innovation than the profit motive.

It is also important for customers and the general public not to get locked into any kind of brand loyalty. One problem is Facebook is a closed proprietary system with no way to forward or export the data contained within it to any comparable system. Google is a mish-mash of some open and some closed systems. In order for us as customers to be able to voice our opinions in a way that such service providers would hear, we must be provided alternatives and service portability.

As an example of changing service providers, there has been an exodus of business customers away from using Google Maps as they began charging money to businesses that want to use the data to develop on top of it. I think that this is just the reality of a situation when you have operating costs for a service that you need to recoup; but there is a royalty-free alternative like Open Street Map (which Apple has recently ripped off by using Open Street Map data without attribution).

Google won't see the same meteoric growth ever again. It is probably a less fun place for a social media development staffer to work at from 2010 to the present than it was from 2004 to 2010 (but I'm betting still preferable to FoxConn or anything anywhere near Ballmer).

Linda R. Tindall:

Thank you for your honest comments, Mr. Whittaker. And yes, Google is not like it was before.

It is scary: Google may destroy anyone's online business overnight!

Google penalizes webmasters if it doesn't like a website, for any reason. It can put anyone it wants out of business. And how does Google judge a webmaster?

Google's business isn't the search engine anymore. Google's business is selling and displaying ads.

Google has now become the Big Brother of the WWW. I think it is scary that Google has so much power. Just by making changes, they can ruin people's lives.

As it turned out, sharing was not broken. Sharing was working fine and dandy; Google just wasn't part of it. People were sharing all around us and seemed quite happy. A user exodus from Facebook never materialized. I couldn't even get my own teenage daughter to look at Google+ twice. "Social isn't a product," she told me after I gave her a demo; "social is people, and the people are on Facebook."

Google was the rich kid who, after having discovered he wasn't invited to the party, built his own party in retaliation. The fact that no one came to Google's party became the elephant in the room.

[Dec 12, 2015] 11 New Open Source Development Tools By Cynthia Harvey

November 17, 2015 | Datamation

Neovim

Generations of Emacs-hating developers have sworn by Vim as the only text editor they'll use for coding. Neovim is a new take on the classic tool with more powerful plugins, better GUI architecture and improved embedding support. Operating System: Windows, Linux, OS X

Nuclide

Created by Facebook, Nuclide is an integrated development environment that supports both mobile and Web development. It is built on top of Atom, and it can integrate with Flow, Hack and Mercurial. Operating System: Windows, Linux, OS X

React

React is "a JavaScript library for building user interfaces." It provides the "View" component in model–view–controller (MVC) software architecture and is specifically designed for one-page applications with data that changes over time. Operating System: OS Independent

Sleepy Puppy

Released in August, Netflix's Sleepy Puppy helps Web developers avoid cross-site scripting (XSS) vulnerabilities. It allows developers and security staff to capture, manage and track XSS issues. Operating System: OS Independent

YAPF

Short for "Yet Another Python Formatter," YAPF reformats Python code so that it conforms to the style guide and looks good. It's a Google-owned project. Operating System: OS Independent

[Nov 08, 2015] The Anti-Java Professor and the Jobless Programmers

Nick Geoghegan

James Maguire's article raises some interesting questions as to why teaching Java to first-year CS/IT students is a bad idea. The article mentions both Ada and Pascal – neither of which really "took off" outside of the States, with the former being used mainly by contractors of the US Dept. of Defense.

This is my own personal extension to the article – which I agree with – on why students should be taught C in first year. I'm biased, though: I learned C as my first language and extensively use C or C++ in projects.

Java is a very high-level language that has interesting features that make things easier for programmers. The two main things I like about Java are libraries (although libraries exist for C/C++ too) and memory management.

Libraries

Libraries are fantastic. They offer an API and abstract a metric fuck tonne of work that a programmer doesn't care about. I don't care how the library works inside, just that I have a way of putting in input and getting expected output (see my post on abstraction). I've extensively used libraries, even this week, for audio codec decoding. Libraries mean not reinventing the wheel and reusing code (something students are discouraged from doing, as it's plagiarism, yet in the real world you are rewarded). Again, starting with C means that you appreciate the libraries more.
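The point about libraries can be shown in a few lines of C (my sketch, not from the article): the standard library's qsort already implements sorting, so the caller supplies only a comparison function and never reinvents the algorithm.

    #include <stdio.h>
    #include <stdlib.h>

    /* Comparator contract required by qsort: negative, zero or positive. */
    static int cmp_int(const void *a, const void *b)
    {
        int x = *(const int *)a, y = *(const int *)b;
        return (x > y) - (x < y);
    }

    int main(void)
    {
        int v[] = { 42, 7, 19, 3 };
        size_t n = sizeof v / sizeof v[0];

        /* The library does the sorting; we only describe the ordering. */
        qsort(v, n, sizeof v[0], cmp_int);

        for (size_t i = 0; i < n; i++)
            printf("%d ", v[i]);   /* prints: 3 7 19 42 */
        putchar('\n');
        return 0;
    }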

Memory Management

Managing your program's memory manually is a pain in the hole. We all know this after spending countless hours finding memory leaks in our programs. Java's built-in memory management is great – it saves me from having to do it. However, if I had learned Java first, I would have assumed (for a short amount of time) that all languages managed memory for you, or that all languages were shite compared to Java because they don't. Going from a "lesser" language like C to Java makes you appreciate the memory manager.
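As a rough sketch of the burden the author means (mine, not his): every malloc in C obliges the caller to pair it with a free, and forgetting that pairing is exactly the class of leak that Java's garbage collector eliminates.

    #include <stdlib.h>
    #include <string.h>

    /* Manual memory management: the caller owns the returned buffer
       and must free() it; a Java programmer never writes that side. */
    char *duplicate(const char *s)
    {
        char *copy = malloc(strlen(s) + 1);
        if (copy != NULL)
            strcpy(copy, s);
        return copy;
    }

    int main(void)
    {
        char *p = duplicate("hello");
        /* ... use p ... */
        free(p);   /* omit this line and every call leaks memory */
        return 0;
    }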

What's so great about C?

In the context of a first language to teach students, C is perfect.

Java is a complex language that will spoil a first-year student. However, as noted, CS/IT courses need to keep student retention rates high. As an example, my first-year class was about 60 people; final year was 8. There are ways to keep students, possibly with other, easier languages in the second semester of first year – so that students don't hate the subject when choosing the next year's subjects after exams.

Conversely, I could say that you should teach Java in first year and expand on more difficult languages like C or assembler (which should be taught side by side, in my mind) later down the line – keeping retention high in the initial years, and drilling down with each successive semester to more systems level programming.

There's a time and place for Java, which I believe is third year or final year. This will keep Java fresh in the students' minds while they are going job hunting after leaving the bosom of academia. This will give them a good head start, as most companies in Ireland are Java houses.

[Nov 08, 2015] Abstraction

nickgeoghegan.net

Filed in Programming No Comments

A few things can confuse programming students, or new people to programming. One of these is abstraction.

Wikipedia says:

In computer science, abstraction is the process by which data and programs are defined with a representation similar to its meaning (semantics), while hiding away the implementation details. Abstraction tries to reduce and factor out details so that the programmer can focus on a few concepts at a time. A system can have several abstraction layers whereby different meanings and amounts of detail are exposed to the programmer. For example, low-level abstraction layers expose details of the hardware where the program is run, while high-level layers deal with the business logic of the program.

That might be a bit too wordy for some people, and not at all clear. Here's my analogy of abstraction.

Abstraction is like a car

A car has a few features that makes it unique.

If someone can drive a manual transmission car, they can drive any manual transmission car. Automatic drivers, sadly, cannot drive a manual transmission car without "relearning" the car. That is an aside; we'll assume that all cars are manual transmission cars – as is the case in Ireland for most cars.

Since I can drive my car, which is a Mitsubishi Pajero, that means I can drive your car – a Honda Civic, Toyota Yaris or Volkswagen Passat.

All I need to know, in order to drive a car – any car – is how to use the brakes, accelerator, steering wheel, clutch and transmission. Since I already know this in my car, I can abstract away your car and its controls.

I do not need to know the inner workings of your car in order to drive it, just the controls. I don't need to know exactly how the brakes work in your car, only that they work. I don't need to know that your car has a turbocharger, only that when I push the accelerator, the car moves. I also don't need to know the exact revs at which I should gear up or gear down (although that would be better for the engine!).

Virtually all controls are the same. Standardization means that the clutch, brake and accelerator are all in the same place, regardless of the car. This means that I do not need to relearn how a car works. To me, a car is just a car, and is interchangeable with any other car.

Abstraction means not caring

As a programmer, or as someone using a third-party API (for example), abstraction means not caring how the inner workings of some function operate – the linked-list data structure, the variable names inside the function, the sorting algorithm used, etc. – just that I have a standard (preferably unchanging) interface to do whatever I need to do.

Abstraction can be thought of as a black box: you put input in, you get output out. That shouldn't be the whole story, but often is. We need abstraction so that, as programmers, we can concentrate on other aspects of the program – this is the cornerstone of large-scale, multi-developer software projects.
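Here is a small C sketch of the black-box idea (my illustration, with made-up names): callers see only an opaque type and a few operations – the "controls" – and nothing about the implementation behind them.

    #include <stdio.h>
    #include <stdlib.h>

    /* The interface callers see (normally in a header): an opaque type
       plus a few operations. The internals are hidden. */
    typedef struct counter Counter;
    Counter *counter_new(void);
    void     counter_tick(Counter *c);
    int      counter_value(const Counter *c);

    /* The implementation callers never need to read. */
    struct counter { int n; };

    Counter *counter_new(void)           { return calloc(1, sizeof(Counter)); }
    void counter_tick(Counter *c)        { c->n++; }
    int counter_value(const Counter *c)  { return c->n; }

    int main(void)
    {
        Counter *c = counter_new();
        if (c == NULL)
            return 1;
        counter_tick(c);
        counter_tick(c);
        /* We drive the interface without caring how it works inside. */
        printf("%d\n", counter_value(c));   /* prints 2 */
        free(c);
        return 0;
    }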

[Nov 22, 2014] It's Not Developers Slowing Things Down, It's the Process

Nov 21, 2014 | Slashdot

Soulskill November 21, 2014

An anonymous reader writes:

Software engineers understand the pace of writing code, but frequently managers don't. One line of code might take 1 minute, and another line of code might take 1 day. But generally, everything averages out, and hitting your goals is more a function of properly setting your goals than of coding quickly or slowly. Sprint.ly, a company that analyzes productivity, has published some data to back this up. The amount of time actually spent developing a feature was a small and relatively consistent portion of its lifetime as a work ticket. The massively variable part of the process is when "stakeholders are figuring out specs and prioritizing work." The top disrupting influences (as experienced devs will recognize) are unclear and changing requirements. Another big cause of slowdowns is interrupting development work on one task to work on a second one. The article encourages managers to let devs contribute to the process and say "No" if the specs are too vague. Is there anything you'd add to this list?

skaag

Re:Nope... Nailed It (Score:5, Insightful)

(206358) on Friday November 21, 2014 @11:55AM (#48434577)

This is not exactly accurate. It hinges greatly on the type of manager we're talking about.

For example if the manager is very hands-on, goes into the details, produces proper mock-ups, flow diagrams, and everything is properly documented: This type of manager can actually accelerate the development process significantly since developers now know exactly what to do. But again, this manager has to really know what he's doing, and have some serious programming experience in his past.

RabidReindeer (2625839) on Friday November 21, 2014

Re:Nope... Nailed It (Score:4, Interesting)

Couple of big shops in my town. Take one for example. They had a 2-year window for a very important project.

Bought (expensive trendy tool) from (major software vendor). Spent 18 months drawing stick-figure diagrams with (expensive trendy tool). Realized they only had 6 months for implementation and panicked. Basically tossed the stick-figure diagrams because they had to drastically modify expectations to make it in 6 months of 100-hour programming weeks. Using contract laborers who didn't know the company and how it operated, because they'd taken a chain-saw to the ranks of the in-house experienced staff.

I'm sure that they learned a valuable lesson from that and will never do anything like that again. I'm also sure that pigs fly and cows routinely jump over the moon.

Jane Q. Public (1010737) on Friday November 21, 2014

I'm sure that they learned a valuable lesson from that and will never do anything like that again. I'm also sure that pigs fly and cows routinely jump over the moon.

This is a good illustration of the folly of top-down "waterfall" methodology. Too much micro-planning in advance, no action.

tnk1 (899206) on Friday November 21, 2014

Re:Nope... Nailed It (Score:5, Insightful)

You don't want to take managers out of the equation. They're the only people keeping the other departments from running you over. You see that most clearly, ironically, when you have an incompetent manager and you get run over in spite of it.

And a bad manager in this sense isn't the evil taskmaster; it is the guy who has no idea of his team's capabilities and taskload. He's also probably a little clueless about what is or is not possible, but in that sense it's more a matter of him making promises without talking to the rest of the team first. That manager goes to meetings and lets himself get cowed into submission when sales or marketing goes after him, because he has no facts. Removing someone in that position just means that engineering is no longer even a speed bump to unrealistic goals.

Saying "No" to business people is not a valid strategy. You'll just find yourself replaced. Saying, "yes, but you'd need to spend 2 million dollars on it" with proof is a valid strategy. You don't want to sit around and come up with that data, that's what the manager is supposed to do.

I agree that indecisive managers and overwrought process is probably the top cause of problems with productivity. However, there are good managers and bad ones. It pays to understand the difference.

[Oct 18, 2013] Tom Clancy, Best-Selling Master of Military Thrillers, Dies at 66

Fully applicable to programming...
NYTimes.com

“I tell them you learn to write the same way you learn to play golf,” he once said. “You do it, and keep doing it until you get it right. A lot of people think something mystical happens to you, that maybe the muse kisses you on the ear. But writing isn’t divinely inspired — it’s hard work.”

They Write the Right Stuff (Fast Company)

4. Don't just fix the mistakes -- fix whatever permitted the mistake in the first place.

The process is so pervasive, it gets the blame for any error -- if there is a flaw in the software, there must be something wrong with the way it's being written, something that can be corrected. Any error not found at the planning stage has slipped through at least some checks. Why? Is there something wrong with the inspection process? Does a question need to be added to a checklist?

Importantly, the group avoids blaming people for errors. The process assumes blame - and it's the process that is analyzed to discover why and how an error got through. At the same time, accountability is a team concept: no one person is ever solely responsible for writing or inspecting code. "You don't get punished for making errors," says Marjorie Seiter, a senior member of the technical staff. "If I make a mistake, and others reviewed my work, then I'm not alone. I'm not being blamed for this."

Ted Keller offers an example of the payoff of the approach, involving the shuttle's remote manipulator arm. "We delivered software for crew training," says Keller, "that allows the astronauts to manipulate the arm, and handle the payload. When the arm got to a certain point, it simply stopped moving."

The software was confused because of a programming error. As the wrist of the remote arm approached a complete 360-degree rotation, flawed calculations caused the software to think the arm had gone past a complete rotation -- which the software knew was incorrect. The problem had to do with rounding off the answer to an ordinary math problem, but it revealed a cascade of other problems.

"Even though this was not critical," says Keller, "we went back and asked what other lines of code might have exactly the same kind of problem." They found eight such situations in the code, and in seven of them, the rounding off function was not a problem. "One of them involved the high-gain antenna pointing routine," says Keller. "That's the main antenna. If it had developed this problem, it could have interrupted communications with the ground at a critical time. That's a lot more serious."

The way the process works, it not only finds errors in the software. The process finds errors in the process.

[Sep 08, 2012] Managing Humans: Biting and Humorous Tales of a Software Engineering Manager

Apress; 2 edition (June 27, 2012)
Amazon

Leonardo Bueno

Definitely not the best book on management June 23, 2008

I've read a couple of Rands' posts on his blog and thought it'd be nice to be able to read the edited, reviewed and improved paper version... I should have saved my money. It's not that the book is useless, but it doesn't add much value over the blog posts. Also, not all chapters are worth reading, so you pay for a lot of bad stuff too.

[Aug 30, 2011] Quotes

The structure of a system reflects the structure of the organization that built it.

- R. Fairley

Any sufficiently advanced bug is indistinguishable from a feature.

- R. Kulawiec

Fools ignore complexity; pragmatists suffer it; experts avoid it; geniuses remove it.

- A. Perlis

[Jul 25, 2011] Former Google CIO: Do Dumb Things

bugs2squash:

Is he saying that if the hardware he made was, say, 20% more power hungry and 10% more expensive, it would have rendered Google's business idea unworkable? I'm not sure I buy it. Maybe it allowed him to scale up with less capital, but I think a 20% slower Google would still have won hearts and minds during the period it was being created.

wisnoskij:

I don't know, seems reasonable to me. Profit margins can be pretty slim and it does not take much to go from making a cent per user to losing a cent per user and no business is built on losing money.

guspasho:

No business is built on losing money AND no business grows as large and as quickly as Google has by running a slim profit margin.

br00tus:

I think most project managers are a waste as well. In a small company the role is unneeded. I'm more circumspect about whether they're needed in a big company, but they certainly seem less needed in small, closely connected groups. If you have a big, long project, with people from different divisions doing different things, then yes, a project manager can be helpful.

On a small project, with a few people, who work closely already on a variety of things, project managers just tend to get in the way. I don't know how many projects I've been brought into at the last minute because someone quit or whatever, and the PM points to my place on the timeline - I'm already two weeks late in finishing whatever is supposed to be done on the day I'm brought into the project.

It's just completely pointless aside from those large collaborations that cut across many people in many different groups at a company.

Opportunist:

Google succeeded because it was in the right place at the right time. Nothing else. Yes, there were other search engines before it, but Google set a standard and ran with it.

Try the same approach in the same field of business today and you will fail. Invariably. Likewise with the next EBay, the next Amazon, the next Facebook. No, they were not the first. But they were amongst the first and they were there and "the best" at just the right time when the service they offered suddenly got popular. That's all that is to their success.

Nothing more, nothing less. Just pure luck. You might also say good timing, but I rather doubt anyone can actually predict so accurately when a given service will hit the sweet spot. If anyone could, most of these services would be in one hand. Why? Because that person or organization would have hit the sweet spot more often than anyone else. Duh. I wouldn't take any advice from any of those "successful" companies. They didn't do anything right where everyone else was too stupid. They were just lucky enough to be at the right place at the right time with the right product.

dkleinsc:

No, Google succeeded because they did search with a far better algorithm than anything else out there at the time. It came into being several years after the first search engines, and was up against several established players, such as Yahoo. They also made one very smart marketing move, which is still with them today: The front page of Google was a simple search box, whereas the front page of their competitors was loaded with widgets and paid ads. In the days of 56k modems, that meant you could load Google faster and search faster. Facebook, too, also was up against an established competitor in MySpace.

They won out by providing a service that was (at the time) less bloated, more private, and less ad-driven than MySpace (and then proceeded to make it more bloated, less private, and more ad-driven, but that's another story). Plenty of other companies have succeeded in marketplaces with established competitors - Ben and Jerry's, for instance, built up from practically nothing in a highly competitive market.

Luck makes a difference, no doubt: I was talking with another CS grad from my alma mater who had turned down a chance to be Google employee #5 because he was heading to a good job in computer graphics and didn't want to risk it all on some crazy start-up. He's done just fine for himself at Pixar, but one coin flip the other way and he might well have had a fortune.

dkleinsc:

Every single major corporation does dumb things all the time! Incompetence is rampant! That means, logically, if you want to create a major corporation, you need to cultivate a culture of incompetence and stupidity.

[Jul 25, 2011] Former Google CIO says business misses key people marks

July 25, 2011 | ITworld

The former CIO of Google and founder and CEO of ZestCash, Dr Douglas Merrill, says companies stuck in traditional management practices risk becoming irrelevant and leaders should not be afraid to do 'dumb' things.

During a lively keynote at this year's CA Expo in Sydney, Merrill said the six years he spent at Google were the most fascinating part of his career.

"Google was founded by two computer science students at Stanford and they hated each other at first. I found out they were both correct," he said jokingly.

"There is a whole cottage industry of people talking about innovation, including all kinds of garbage... and I'm part of this cottage industry."

Merrill said there is a lot of "mythos" about Google, like free food and 20 per cent free time, but most of it is false.

He said a successful product is not about having perfect project management, rather "the more project management you do the less likely your project is to succeed".

"It's not about hardware and capex. Build your product and then figure out what to do with it," he said.

"Don't be afraid to do dumb things. Larry and Sergey developed a search product called 'Backrub' - don't ask me how they got that - and shortly after that launched Google as part of the Stanford domain. Most of the early Google hardware was stolen from trash and as the stuff they stole broke all the time they built a reliable software system."

"Everyone knew we shouldn't build our own hardware as it was 'dumb', but everyone was wrong. Sometimes being dumb changes the game."

Merrill cited the "fairly disturbing statistic" that 66 per cent of the Fortune 100 companies have either disappeared or dropped out of the list in the 20 years since 1990.

"Eastman Kodak is my favourite example. It has more patents than any other company on earth and is the most successful research company," he said. "In 1990 a young researcher invented the charge coupled device which is the core of every camera today. His boss said you're a moron we make film."

"The most important thing to take advantage of is to see innovation from everywhere - inside and outside."

With information having been democratised over the past twenty years – a period in which the price of hard drive storage dropped two-million-fold – Merrill said businesses can now emerge more cheaply.

"Zappos.com is inline shoe retailer and each shoe sent has a return slip as people are more likely to buy something if they can return it. The company went from $1 million seed to $70 million in revenue," he said, adding Google $1 million in funding and built "a reasonably good business".

While technology matters to "real" bricks and mortar businesses as much as online companies, Merrill said there are lots of examples of technology turning out "spectacularly badly".

"Just because you can do something with technology that doesn't mean you should do something with technology," he said. "You want to find cheap ways to get your customers to care about you."

"McDonalds wanted to get people to come back to its stores so they ran an interesting marketing program with Foursquare where people could come to a restaurant and 'check in' and get a hamburger for free. That resulted in 25 per cent sales lift day-on-day and the total marketing promotion cost $18,000.

When Merrill left Google he worked at EMI records, which was interesting and enjoyable, but he knew the music industry was "collapsing".

"The RIAA said it isn't that we are making bad music, but the 'dirty file sharing guys' are the problem," he said. "Going to sue customers for file sharing is like trying to sell soap by throwing dirt on your customers."

Merrill profiled the file-sharing behaviour of people who used Limewire against the top iTunes sales, and found the biggest iTunes buyers were the same people as the highest-sharing "thieves" on Limewire.

"That's not theft, that's try-before-you-buy marketing and we weren't even paying for it... so it makes sense to sue them," he said wryly.

Merrill said it is also prudent not to listen too carefully to customers as so-called "focus groups" suffer from the Availability Heuristic: "If you ask a question the answer will be the first thing they think of."

"You can't ask your customers what they want if they don't understand your innovation," he said. "The popular Google spell correction came from user activity. We couldn't ask a customer if they wanted spell checking as they would have said no."

[Jul 24, 2011] What Apple Has That Google Doesn't - An Auteur, by Randall Stross

July 23, 2011 | NYTimes.com

AT Apple, one is the magic number.

One person is the Decider for final design choices. Not focus groups. Not data crunchers. Not committee consensus-builders. The decisions reflect the sensibility of just one person: Steven P. Jobs, the C.E.O.

By contrast, Google has followed the conventional approach, with lots of people playing a role. That group prefers to rely on experimental data, not designers, to guide its decisions.

The contest is not even close. The company that has a single arbiter of taste has been producing superior products, showing that you don’t need multiple teams and dozens or hundreds or thousands of voices.

Two years ago, the technology blogger John Gruber presented a talk, “The Auteur Theory of Design,” at the Macworld Expo. Mr. Gruber suggested how filmmaking could be a helpful model in guiding creative collaboration in other realms, like software.

The auteur, a film director who both has a distinctive vision for a work and exercises creative control, works with many other creative people. “What the director is doing, nonstop, from the beginning of signing on until the movie is done, is making decisions,” Mr. Gruber said. “And just simply making decisions, one after another, can be a form of art.”

“The quality of any collaborative creative endeavor tends to approach the level of taste of whoever is in charge,” Mr. Gruber pointed out.

Two years after he outlined his theory, it is still a touchstone in design circles for discussing Apple and its rivals.

Garry Tan, designer in residence and a venture partner at Y Combinator, an investor in start-ups, says: “Steve Jobs is not always right—MobileMe would be an example. But we do know that all major design decisions have to pass his muster. That is what an auteur does.”

Mr. Jobs has acquired a reputation as a great designer, Mr. Tan says, not because he personally makes the designs but because “he’s got the eye.” He has also hired classically trained designers like Jonathan Ive. “Design excellence also attracts design talent,” Mr. Tan explains.

Google has what it calls a “creative lab,” a group that had originally worked on advertising to promote its brand. More recently, the lab has been asked to supply a design vision to the engineering and user-experience groups that work on all of Google’s products. Chris L. Wiggins, the lab’s creative director, whose own background is in advertising, describes design as a collaborative process among groups “with really fruitful back-and-forth.”

“There’s only one Steve Jobs, and he’s a genius,” says Mr. Wiggins. “But it’s important to distinguish that we’re discussing the design of Web applications, not hardware or desktop software. And for that we take a different approach to design than Apple,” he says. Google, he says, utilizes the Web to pull feedback from users and make constant improvements.

Mr. Wiggins’s argument that Apple’s apples should not be compared to Google’s oranges does not explain, however, why Apple’s smartphone software gets much higher marks than Google’s.

GOOGLE’S ability to attract and retain design talent has not been helped by the departure of designers who felt their expertise was not fully appreciated. “Google is an engineering company, and as a researcher or designer, it’s very difficult to have your voice heard at a strategic level,” writes Paul Adams on his blog, “Think Outside In.” Mr. Adams was a senior user-experience researcher at Google until last year; he is now at Facebook.

Douglas Bowman is another example. He was hired as Google’s first visual designer in 2006, when the company was already seven years old. “Seven years is a long time to run a company without a classically trained designer,” he wrote in his blog Stopdesign in 2009. He complained that there was no one at or near the helm of Google who “thoroughly understands the principles and elements of design.” “I had a recent debate over whether a border should be 3, 4 or 5 pixels wide,” Mr. Bowman wrote, adding, “I can’t operate in an environment like that.” His post was titled, “Goodbye, Google.”

Mr. Bowman’s departure spurred other designers with experience at either Google or Apple to comment on differences between the two companies. Mr. Gruber, at his Daring Fireball blog, concisely summarized one account under the headline “Apple Is a Design Company With Engineers; Google Is an Engineering Company With Designers.”

In May, Google, ever the engineering company, showed an unwillingness to notice design expertise when it tried to recruit Pablo Villalba Villar, the chief executive of Teambox, an online project management company. Mr. Villalba later wrote that he had no intention of leaving Teambox and cooperated to experience Google’s hiring process for himself. He tried to call attention to his main expertise in user interaction and product design. But he said that what the recruiter wanted to know was his mastery of 14 programming languages.

Mr. Villalba was dismayed that Google did not appear to have changed since Mr. Bowman left. “Design can’t be done by committee,” he said.

Recently, as Larry Page, the company co-founder, began his tenure as C.E.O., Google rolled out Google+ and a new look for the Google home page, Gmail and its calendar. More redesigns have been promised. But they will be produced, as before, within a very crowded and noisy editing booth. Google does not have a true auteur who unilaterally decides on the final cut.

Randall Stross is an author based in Silicon Valley and a professor of business at San Jose State University. E-mail: stross@nytimes.com.

[Apr 3, 2009] 10 open source books worth downloading

Apr 3, 2009 | www.tectonic.co.za

Producing Open Source Software - How to Run a Successful Free Software Project

http://www.producingoss.com/en/producingoss.pdf
Download: 887kb
Format: PDF
If you’re not a first-timer and you are keen on starting your own open source project then take a look at this book. First published in 2005, Producing Open Source Software is a solid 185-page long guide to the intricacies of starting, running, licensing and maintaining an open source project. As most readers no doubt know, having a good idea for an open source project is one thing; making it work is entirely another. Written by Karl Fogel, a long-time free-software developer and contributor to the open source version control system, Subversion, the book covers a broad range of considerations, from choosing a good name to creating a community, when starting your own OSS project.

[Jan 17, 2009] Computerworld - Software guru is hot on Linux, busting bureaucracy

How would you characterize the state of software development today?

Software has been and will remain fundamentally hard. In every era, we find that there is a certain level of complexity we face. Today, a typical system tends to be continuously evolving. You never turn it off, [and] it tends to be distributed, multiplatform. That is a very different set of problems and forces than we faced five years ago.

Traditionally -- we're talking a few decades ago -- you could think of software as something that IT guys did, and nobody else worried about it. Today, our civilization relies upon software.

All of a sudden, you wake up and say, "I can't live without my cell phone." We, as software developers, build systems of incredible complexity, and yet our end users don't want to see that software.

Most of the interesting systems today are no longer just systems by themselves, but they tend to be systems of systems. It is the set of them working in harmony. We don't have a lot of good processes or analysis tools to really understand how those things behave. Many systems look dangerously fragile. The bad news is they are fragile. This is another force that will lead us to the next era of how we build software systems.

... ... ...

When you have an organization that is 100 times larger, there is a little bit more bureaucracy. [IBM asked me] to destroy bureaucracy. I have a license to kill, so to speak. IBM is a target-rich environment.

[Apr 26, 2008] IBM Trying To Patent Timed Code Inspection

Slashdot

A just-published IBM patent application for a Software Inspection Management Tool claims to improve software quality by taking a chess-clock-like approach to code walkthroughs. An inspection rate monitor with 'a pause button, a resume button, a complete button, a total lines inspected indication, and a total lines remaining to be inspected indication' keeps tabs on participants' progress and changes color when management's expectations — measured in lines per hour — are not being met."
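A minimal sketch of the arithmetic such a monitor would perform (hypothetical names and numbers, not IBM's code): compare lines inspected per elapsed hour against management's expectation and flag the session when the pace falls short.

    #include <stdio.h>

    /* All values are illustrative; the patent describes pause/resume
       buttons and progress counters feeding a rate check like this. */
    int main(void)
    {
        double lines_inspected   = 250.0;   /* progress so far */
        double elapsed_hours     = 2.0;     /* time on the "chess clock" */
        double expected_per_hour = 150.0;   /* management's expectation */

        double rate = lines_inspected / elapsed_hours;
        printf("rate: %.1f lines/hour\n", rate);
        if (rate < expected_per_hour)
            printf("indicator turns red: below expected pace\n");
        else
            printf("indicator stays green\n");
        return 0;
    }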

[Apr 25, 2008] Interview with Donald Knuth, by Donald E. Knuth and Andrew Binstock


Andrew Binstock and Donald Knuth converse on the success of open source, the problem with multicore architecture, the disappointing lack of interest in literate programming, the menace of reusable code, and that urban legend about winning a programming contest with a single compilation.

Andrew Binstock: You are one of the fathers of the open-source revolution, even if you aren’t widely heralded as such. You previously have stated that you released TeX as open source because of the problem of proprietary implementations at the time, and to invite corrections to the code—both of which are key drivers for open-source projects today. Have you been surprised by the success of open source since that time?

Donald Knuth: The success of open source code is perhaps the only thing in the computer field that hasn’t surprised me during the past several decades. But it still hasn’t reached its full potential; I believe that open-source programs will begin to be completely dominant as the economy moves more and more from products towards services, and as more and more volunteers arise to improve the code.

For example, open-source code can produce thousands of binaries, tuned perfectly to the configurations of individual users, whereas commercial software usually will exist in only a few versions. A generic binary executable file must include things like inefficient "sync" instructions that are totally inappropriate for many installations; such wastage goes away when the source code is highly configurable. This should be a huge win for open source.

Yet I think that a few programs, such as Adobe Photoshop, will always be superior to competitors like the Gimp—for some reason, I really don’t know why! I’m quite willing to pay good money for really good software, if I believe that it has been produced by the best programmers.

Remember, though, that my opinion on economic questions is highly suspect, since I’m just an educator and scientist. I understand almost nothing about the marketplace.

Andrew: A story states that you once entered a programming contest at Stanford (I believe) and you submitted the winning entry, which worked correctly after a single compilation. Is this story true? In that vein, today’s developers frequently build programs writing small code increments followed by immediate compilation and the creation and running of unit tests. What are your thoughts on this approach to software development?

Donald: The story you heard is typical of legends that are based on only a small kernel of truth. Here’s what actually happened: John McCarthy decided in 1971 to have a Memorial Day Programming Race. All of the contestants except me worked at his AI Lab up in the hills above Stanford, using the WAITS time-sharing system; I was down on the main campus, where the only computer available to me was a mainframe for which I had to punch cards and submit them for processing in batch mode. I used Wirth’s ALGOL W system (the predecessor of Pascal). My program didn’t work the first time, but fortunately I could use Ed Satterthwaite’s excellent offline debugging system for ALGOL W, so I needed only two runs. Meanwhile, the folks using WAITS couldn’t get enough machine cycles because their machine was so overloaded. (I think that the second-place finisher, using that "modern" approach, came in about an hour after I had submitted the winning entry with old-fangled methods.) It wasn’t a fair contest.

As to your real question, the idea of immediate compilation and "unit tests" appeals to me only rarely, when I’m feeling my way in a totally unknown environment and need feedback about what works and what doesn’t. Otherwise, lots of time is wasted on activities that I simply never need to perform or even think about. Nothing needs to be "mocked up."

Andrew: One of the emerging problems for developers, especially client-side developers, is changing their thinking to write programs in terms of threads. This concern, driven by the advent of inexpensive multicore PCs, surely will require that many algorithms be recast for multithreading, or at least to be thread-safe. So far, much of the work you’ve published for Volume 4 of The Art of Computer Programming (TAOCP) doesn’t seem to touch on this dimension. Do you expect to enter into problems of concurrency and parallel programming in upcoming work, especially since it would seem to be a natural fit with the combinatorial topics you’re currently working on?

Donald: The field of combinatorial algorithms is so vast that I’ll be lucky to pack its sequential aspects into three or four physical volumes, and I don’t think the sequential methods are ever going to be unimportant. Conversely, the half-life of parallel techniques is very short, because hardware changes rapidly and each new machine needs a somewhat different approach. So I decided long ago to stick to what I know best. Other people understand parallel machines much better than I do; programmers should listen to them, not me, for guidance on how to deal with simultaneity.

Andrew: Vendors of multicore processors have expressed frustration at the difficulty of moving developers to this model. As a former professor, what thoughts do you have on this transition and how to make it happen? Is it a question of proper tools, such as better native support for concurrency in languages, or of execution frameworks? Or are there other solutions?

Donald: I don’t want to duck your question entirely. I might as well flame a bit about my personal unhappiness with the current trend toward multicore architecture. To me, it looks more or less like the hardware designers have run out of ideas, and that they’re trying to pass the blame for the future demise of Moore’s Law to the software writers by giving us machines that work faster only on a few key benchmarks! I won’t be surprised at all if the whole multithreading idea turns out to be a flop, worse than the "Titanium" approach that was supposed to be so terrific—until it turned out that the wished-for compilers were basically impossible to write.

Let me put it this way: During the past 50 years, I’ve written well over a thousand programs, many of which have substantial size. I can’t think of even five of those programs that would have been enhanced noticeably by parallelism or multithreading. Surely, for example, multiple processors are no help to TeX.[1]

How many programmers do you know who are enthusiastic about these promised machines of the future? I hear almost nothing but grief from software people, although the hardware folks in our department assure me that I’m wrong.

I know that important applications for parallelism exist—rendering graphics, breaking codes, scanning images, simulating physical and biological processes, etc. But all these applications require dedicated code and special-purpose techniques, which will need to be changed substantially every few years.

Even if I knew enough about such methods to write about them in TAOCP, my time would be largely wasted, because soon there would be little reason for anybody to read those parts. (Similarly, when I prepare the third edition of Volume 3 I plan to rip out much of the material about how to sort on magnetic tapes. That stuff was once one of the hottest topics in the whole software field, but now it largely wastes paper when the book is printed.)

The machine I use today has dual processors. I get to use them both only when I’m running two independent jobs at the same time; that’s nice, but it happens only a few minutes every week. If I had four processors, or eight, or more, I still wouldn’t be any better off, considering the kind of work I do—even though I’m using my computer almost every day during most of the day. So why should I be so happy about the future that hardware vendors promise? They think a magic bullet will come along to make multicores speed up my kind of work; I think it’s a pipe dream. (No—that’s the wrong metaphor! "Pipelines" actually work for me, but threads don’t. Maybe the word I want is "bubble.")

From the opposite point of view, I do grant that web browsing probably will get better with multicores. I’ve been talking about my technical work, however, not recreation. I also admit that I haven’t got many bright ideas about what I wish hardware designers would provide instead of multicores, now that they’ve begun to hit a wall with respect to sequential computation. (But my MMIX design contains several ideas that would substantially improve the current performance of the kinds of programs that concern me most—at the cost of incompatibility with legacy x86 programs.)

Andrew: One of the few projects of yours that hasn’t been embraced by a widespread community is literate programming. What are your thoughts about why literate programming didn’t catch on? And is there anything you’d have done differently in retrospect regarding literate programming?

Donald: Literate programming is a very personal thing. I think it’s terrific, but that might well be because I’m a very strange person. It has tens of thousands of fans, but not millions.

In my experience, software created with literate programming has turned out to be significantly better than software developed in more traditional ways. Yet ordinary software is usually okay—I’d give it a grade of C (or maybe C++), but not F; hence, the traditional methods stay with us. Since they’re understood by a vast community of programmers, most people have no big incentive to change, just as I’m not motivated to learn Esperanto even though it might be preferable to English and German and French and Russian (if everybody switched).

Jon Bentley probably hit the nail on the head when he once was asked why literate programming hasn’t taken the whole world by storm. He observed that a small percentage of the world’s population is good at programming, and a small percentage is good at writing; apparently I am asking everybody to be in both subsets.

Yet to me, literate programming is certainly the most important thing that came out of the TeX project. Not only has it enabled me to write and maintain programs faster and more reliably than ever before, and been one of my greatest sources of joy since the 1980s—it has actually been indispensable at times. Some of my major programs, such as the MMIX meta-simulator, could not have been written with any other methodology that I’ve ever heard of. The complexity was simply too daunting for my limited brain to handle; without literate programming, the whole enterprise would have flopped miserably.

If people do discover nice ways to use the newfangled multithreaded machines, I would expect the discovery to come from people who routinely use literate programming. Literate programming is what you need to rise above the ordinary level of achievement. But I don’t believe in forcing ideas on anybody. If literate programming isn’t your style, please forget it and do what you like. If nobody likes it but me, let it die.

On a positive note, I’ve been pleased to discover that the conventions of CWEB are already standard equipment within preinstalled software such as Makefiles, when I get off-the-shelf Linux these days.

Andrew: In Fascicle 1 of Volume 1, you reintroduced the MMIX computer, which is the 64-bit upgrade to the venerable MIX machine comp-sci students have come to know over many years. You previously described MMIX in great detail in MMIXware. I’ve read portions of both books, but can’t tell whether the Fascicle updates or changes anything that appeared in MMIXware, or whether it’s a pure synopsis. Could you clarify?

Donald: Volume 1 Fascicle 1 is a programmer’s introduction, which includes instructive exercises and such things. The MMIXware book is a detailed reference manual, somewhat terse and dry, plus a bunch of literate programs that describe prototype software for people to build upon. Both books define the same computer (once the errata to MMIXware are incorporated from my website). For most readers of TAOCP, the first fascicle contains everything about MMIX that they’ll ever need or want to know.

I should point out, however, that MMIX isn’t a single machine; it’s an architecture with almost unlimited varieties of implementations, depending on different choices of functional units, different pipeline configurations, different approaches to multiple-instruction-issue, different ways to do branch prediction, different cache sizes, different strategies for cache replacement, different bus speeds, etc. Some instructions and/or registers can be emulated with software on "cheaper" versions of the hardware. And so on. It’s a test bed, all simulatable with my meta-simulator, even though advanced versions would be impossible to build effectively until another five years go by (and then we could ask for even further advances just by advancing the meta-simulator specs another notch).

Suppose you want to know if five separate multiplier units and/or three-way instruction issuing would speed up a given MMIX program. Or maybe the instruction and/or data cache could be made larger or smaller or more associative. Just fire up the meta-simulator and see what happens.
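
For readers without the MMIX tools installed, the flavor of such an experiment can be conveyed by a toy model. The following Python sketch (an illustration added here; it is unrelated to the actual meta-simulator and models nothing about MMIX itself) replays an address trace through a small set-associative cache with LRU replacement, so that cache size and associativity can be varied and the hit rates compared:

    # Toy cache model (illustrative only; not the MMIX meta-simulator):
    # parameterized sets/ways, LRU replacement within each set.
    from collections import OrderedDict

    def hit_rate(trace, num_sets, ways, line_bytes=64):
        """Replay `trace` (byte addresses); return the fraction of hits."""
        sets = [OrderedDict() for _ in range(num_sets)]
        hits = 0
        for addr in trace:
            line = addr // line_bytes
            index, tag = line % num_sets, line // num_sets
            cache_set = sets[index]
            if tag in cache_set:
                hits += 1
                cache_set.move_to_end(tag)          # most recently used
            else:
                if len(cache_set) >= ways:
                    cache_set.popitem(last=False)   # evict the LRU tag
                cache_set[tag] = None
        return hits / len(trace)

    # A 1 MB sequential scan, repeated twice, under three geometries:
    trace = [i * 8 for i in range(1 << 17)] * 2
    for num_sets, ways in [(64, 1), (64, 4), (256, 4)]:
        kb = num_sets * ways * 64 // 1024
        print(f"{kb:4} KB, {ways}-way: {hit_rate(trace, num_sets, ways):.3f}")

The real meta-simulator works at a vastly more detailed level (functional units, issue width, branch prediction), but the experimental loop is the same: change a parameter, rerun, compare.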

Andrew: As I suspect you don’t use unit testing with MMIXAL, could you step me through how you go about making sure that your code works correctly under a wide variety of conditions and inputs? If you have a specific work routine around verification, could you describe it?

Donald: Most examples of machine language code in TAOCP appear in Volumes 1-3; by the time we get to Volume 4, such low-level detail is largely unnecessary and we can work safely at a higher level of abstraction. Thus, I’ve needed to write only a dozen or so MMIX programs while preparing the opening parts of Volume 4, and they’re all pretty much toy programs—nothing substantial. For little things like that, I just use informal verification methods, based on the theory that I’ve written up for the book, together with the MMIXAL assembler and MMIX simulator that are readily available on the Net (and described in full detail in the MMIXware book).

That simulator includes debugging features like the ones I found so useful in Ed Satterthwaite’s system for ALGOL W, mentioned earlier. I always feel quite confident after checking a program with those tools.

Andrew: Despite its formulation many years ago, TeX is still thriving, primarily as the foundation for LaTeX. While TeX has been effectively frozen at your request, are there features that you would want to change or add to it, if you had the time and bandwidth? If so, what are the major items you would add or change?

Donald: I believe changes to TeX would cause much more harm than good. Other people who want other features are creating their own systems, and I’ve always encouraged further development—except that nobody should give their program the same name as mine. I want to take permanent responsibility for TeX and Metafont, and for all the nitty-gritty things that affect existing documents that rely on my work, such as the precise dimensions of characters in the Computer Modern fonts.

Andrew: One of the little-discussed aspects of software development is how to do design work on software in a completely new domain. You were faced with this issue when you undertook TeX: No prior art was available to you as source code, and it was a domain in which you weren’t an expert. How did you approach the design, and how long did it take before you were comfortable entering into the coding portion?

Donald: That’s another good question! I’ve discussed the answer in great detail in Chapter 10 of my book Literate Programming, together with Chapters 1 and 2 of my book Digital Typography. I think that anybody who is really interested in this topic will enjoy reading those chapters. (See also Digital Typography Chapters 24 and 25 for the complete first and second drafts of my initial design of TeX in 1977.)

Andrew: The books on TeX and the program itself show a clear concern for limiting memory usage—an important problem for systems of that era. Today, the concern for memory usage in programs has more to do with cache sizes. As someone who has designed a processor in software, the issues of cache-aware and cache-oblivious algorithms surely must have crossed your radar screen. Is the role of processor caches on algorithm design something that you expect to cover, even if indirectly, in your upcoming work?

Donald: I mentioned earlier that MMIX provides a test bed for many varieties of cache. And it’s a software-implemented machine, so we can perform experiments that will be repeatable even a hundred years from now. Certainly the next editions of Volumes 1-3 will discuss the behavior of various basic algorithms with respect to different cache parameters.

In Volume 4 so far, I count about a dozen references to cache memory and cache-friendly approaches (not to mention a "memo cache," which is a different but related idea in software).

Andrew: What set of tools do you use today for writing TAOCP? Do you use TeX? LaTeX? CWEB? Word processor? And what do you use for the coding?

Donald: My general working style is to write everything first with pencil and paper, sitting beside a big wastebasket. Then I use Emacs to enter the text into my machine, using the conventions of TeX. I use tex, dvips, and gv to see the results, which appear on my screen almost instantaneously these days. I check my math with Mathematica.

I program every algorithm that’s discussed (so that I can thoroughly understand it) using CWEB, which works splendidly with the GDB debugger. I make the illustrations with MetaPost (or, in rare cases, on a Mac with Adobe Photoshop or Illustrator). I have some homemade tools, like my own spell-checker for TeX and CWEB within Emacs. I designed my own bitmap font for use with Emacs, because I hate the way the ASCII apostrophe and the left open quote have morphed into independent symbols that no longer match each other visually. I have special Emacs modes to help me classify all the tens of thousands of papers and notes in my files, and special Emacs keyboard shortcuts that make bookwriting a little bit like playing an organ. I prefer rxvt to xterm for terminal input. Since last December, I’ve been using a file backup system called backupfs, which meets my need beautifully to archive the daily state of every file.

According to the current directories on my machine, I’ve written 68 different CWEB programs so far this year. There were about 100 in 2007, 90 in 2006, 100 in 2005, 90 in 2004, etc. Furthermore, CWEB has an extremely convenient "change file" mechanism, with which I can rapidly create multiple versions and variations on a theme; so far in 2008 I’ve made 73 variations on those 68 themes. (Some of the variations are quite short, only a few bytes; others are 5KB or more. Some of the CWEB programs are quite substantial, like the 55-page BDD package that I completed in January.) Thus, you can see how important literate programming is in my life.

I currently use Ubuntu Linux, on a standalone laptop—it has no Internet connection. I occasionally carry flash memory drives between this machine and the Macs that I use for network surfing and graphics; but I trust my family jewels only to Linux. Incidentally, with Linux I much prefer the keyboard focus that I can get with classic FVWM to the GNOME and KDE environments that other people seem to like better. To each his own.

Andrew: You state in the preface of Fascicle 0 of Volume 4 of TAOCP that Volume 4 surely will comprise three volumes and possibly more. It’s clear from the text that you’re really enjoying writing on this topic. Given that, what is your confidence in the note posted on the TAOCP website that Volume 5 will see light of day by 2015?

Donald: If you check the Wayback Machine for previous incarnations of that web page, you will see that the number 2015 has not been constant.

You’re certainly correct that I’m having a ball writing up this material, because I keep running into fascinating facts that simply can’t be left out—even though more than half of my notes don’t make the final cut.

Precise time estimates are impossible, because I can’t tell until getting deep into each section how much of the stuff in my files is going to be really fundamental and how much of it is going to be irrelevant to my book or too advanced. A lot of the recent literature is academic one-upmanship of limited interest to me; authors these days often introduce arcane methods that outperform the simpler techniques only when the problem size exceeds the number of protons in the universe. Such algorithms could never be important in a real computer application. I read hundreds of such papers to see if they might contain nuggets for programmers, but most of them wind up getting short shrift.

From a scheduling standpoint, all I know at present is that I must someday digest a huge amount of material that I’ve been collecting and filing for 45 years. I gain important time by working in batch mode: I don’t read a paper in depth until I can deal with dozens of others on the same topic during the same week. When I finally am ready to read what has been collected about a topic, I might find out that I can zoom ahead because most of it is eminently forgettable for my purposes. On the other hand, I might discover that it’s fundamental and deserves weeks of study; then I’d have to edit my website and push that number 2015 closer to infinity.

Andrew: In late 2006, you were diagnosed with prostate cancer. How is your health today?

Donald: Naturally, the cancer will be a serious concern. I have superb doctors. At the moment I feel as healthy as ever, modulo being 70 years old. Words flow freely as I write TAOCP and as I write the literate programs that precede drafts of TAOCP. I wake up in the morning with ideas that please me, and some of those ideas actually please me also later in the day when I’ve entered them into my computer.

On the other hand, I willingly put myself in God’s hands with respect to how much more I’ll be able to do before cancer or heart disease or senility or whatever strikes. If I should unexpectedly die tomorrow, I’ll have no reason to complain, because my life has been incredibly blessed. Conversely, as long as I’m able to write about computer science, I intend to do my best to organize and expound upon the tens of thousands of technical papers that I’ve collected and made notes on since 1962.

Andrew: On your website, you mention that the Peoples Archive recently made a series of videos in which you reflect on your past life. In segment 93, "Advice to Young People," you advise that people shouldn’t do something simply because it’s trendy. As we know all too well, software development is as subject to fads as any other discipline. Can you give some examples that are currently in vogue, which developers shouldn’t adopt simply because they’re currently popular or because that’s the way they’re currently done? Would you care to identify important examples of this outside of software development?

Donald: Hmm. That question is almost contradictory, because I’m basically advising young people to listen to themselves rather than to others, and I’m one of the others. Almost every biography of every person whom you would like to emulate will say that he or she did many things against the "conventional wisdom" of the day.

Still, I hate to duck your questions even though I also hate to offend other people’s sensibilities—given that software methodology has always been akin to religion. With the caveat that there’s no reason anybody should care about the opinions of a computer scientist/mathematician like me regarding software development, let me just say that almost everything I’ve ever heard associated with the term "extreme programming" sounds like exactly the wrong way to go...with one exception. The exception is the idea of working in teams and reading each other’s code. That idea is crucial, and it might even mask out all the terrible aspects of extreme programming that alarm me.

I also must confess to a strong bias against the fashion for reusable code. To me, "re-editable code" is much, much better than an untouchable black box or toolkit. I could go on and on about this. If you’re totally convinced that reusable code is wonderful, I probably won’t be able to sway you anyway, but you’ll never convince me that reusable code isn’t mostly a menace.

Here’s a question that you may well have meant to ask: Why is the new book called Volume 4 Fascicle 0, instead of Volume 4 Fascicle 1? The answer is that computer programmers will understand that I wasn’t ready to begin writing Volume 4 of TAOCP at its true beginning point, because we know that the initialization of a program can’t be written until the program itself takes shape. So I started in 2005 with Volume 4 Fascicle 2, after which came Fascicles 3 and 4. (Think of Star Wars, which began with Episode 4.)


[Mar 29, 2008] The Vietnam of Computer Science

Monday, June 26, 2006 | Interoperability Happens: The Vietnam of Computer Science

(Two years ago, at Microsoft's TechEd in San Diego, I was involved in a conversation at an after-conference event with Harry Pierson and Clemens Vasters, and as is typical when the three of us get together, architectural topics were at the forefront of our discussions. A crowd gathered around us, and it turned into an impromptu birds-of-a-feather session. The subject of object/relational mapping technologies came up, and it was there and then that I first coined the phrase, "Object/relational mapping is the Vietnam of Computer Science". In the intervening time, I've received numerous requests to flesh out the discussion behind that statement, and given Microsoft's recent announcement regarding "entity support" in ADO.NET 3.0 and the acceptance of the Java Persistence API as a replacement for both EJB Entity Beans and JDO, it seemed time to do exactly that.)

... ... ...

Given, then, that object-to-relational mapping is a necessity in a modern enterprise system, how can anyone proclaim it a quagmire from which there is no escape? Again, Vietnam serves as a useful analogy here--while the situation in South Indochina required a response from the Americans, there were a variety of responses available to the Kennedy and Johnson Administrations, including the same kind of response that the recent fall of Suharto in Indonesia generated from the US, which is to say, none at all. (Remember, Eisenhower and Dulles didn't consider South Indochina to be a part of the Domino Theory in the first place; they were far more concerned about Japan and Europe.)

Several possible solutions present themselves to the O/R-M problem, some requiring some kind of "global" action by the community as a whole, some more approachable to development teams "in the trenches":

  1. Abandonment. Developers simply give up on objects entirely, and return to a programming model that doesn't create the object/relational impedance mismatch. While distasteful, in certain scenarios an object-oriented approach creates more overhead than it saves, and the ROI simply isn't there to justify the cost of creating a rich domain model. ([Fowler] talks about this in some depth.) This eliminates the problem quite neatly, because if there are no objects, there is no impedance mismatch.
  2. Wholehearted acceptance. Developers simply give up on relational storage entirely, and use a storage model that fits the way their languages of choice look at the world. Object-storage systems, such as the db4o project, solve the problem neatly by storing objects directly to disk, eliminating many (but not all) of the aforementioned issues; there is no "second schema", for example, because the only schema used is that of the object definitions themselves. While many DBAs will faint dead away at the thought, in an increasingly service-oriented world (one that eschews direct data access and instead requires all access to go through a service gateway, thus encapsulating the storage mechanism away from prying eyes), it becomes entirely feasible to imagine developers storing data in a form that is much easier for them, rather than for DBAs, to use.
  3. Manual mapping. Developers simply accept that it's not such a hard problem to solve manually after all, and write straight relational-access code to return relations to the language, access the tuples, and populate objects as necessary. In many cases, this code might even be automatically generated by a tool examining database metadata, eliminating one of the principal criticisms of this approach (namely, "It's too much code to write and maintain"). A minimal sketch of this approach appears just after this list.
  4. Acceptance of O/R-M limitations. Developers simply accept that there is no way to efficiently and easily close the loop on the O/R mismatch, and use an O/R-M to solve 80% (or 50% or 95%, or whatever percentage seems appropriate) of the problem and make use of SQL and relational-based access (such as "raw" JDBC or ADO.NET) to carry them past those areas where an O/R-M would create problems. Doing so carries its own fair share of risks, however, as developers using an O/R-M must be aware of any caching the O/R-M solution does within it, because the "raw" relational access will clearly not be able to take advantage of that caching layer.
  5. Integration of relational concepts into the languages. Developers simply accept that this is a problem that should be solved by the language, not by a library or framework. For the last decade or more, the emphasis on solutions to the O/R problem has focused on trying to bring objects closer to the database, so that developers can focus exclusively on programming in a single paradigm (that paradigm being, of course, objects). Over the last several years, however, interest in "scripting" languages with far stronger set and list support, like Ruby, has sparked the idea that perhaps another solution is appropriate: bring relational concepts (which, at heart, are set-based) into mainstream programming languages, making it easier to bridge the gap between "sets" and "objects". Work in this space has thus far been limited, constrained mostly to research projects and/or "fringe" languages, but several interesting efforts are gaining visibility within the community, such as functional/object hybrid languages like Scala or F#, as well as direct integration into traditional O-O languages, such as the LINQ project from Microsoft for C# and Visual Basic. One such effort that failed, unfortunately, was the SQL/J strategy; even there, the approach was limited, not seeking to incorporate sets into Java, but simply allowing embedded SQL calls to be preprocessed and translated into JDBC code by a translator.
  6. Integration of relational concepts into frameworks. Developers simply accept that this problem is solvable, but only with a change of perspective. Instead of relying on language or library designers to solve this problem, developers take a different view of "objects" that is more relational in nature, building domain frameworks that are more directly built around relational constructs. For example, instead of creating a Person class that holds its instance data directly in fields inside the object, developers create a Person class that holds its instance data in a RowSet (Java) or DataSet (C#) instance, which can be assembled with other RowSets/DataSets into an easy-to-ship block of data for update against the database, or unpacked from the database into the individual objects.
Note that this list is not presented in any particular order; while some options are more attractive than others, which are "better" is a value judgment that every developer and development team must make for themselves.
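
As a concrete illustration of option 3, here is a minimal hand-written mapping layer in Python using the standard sqlite3 module (the table, class, and function names are invented for the example): the relational access is plain SQL, and the "mapping" is nothing more than one constructor call per returned tuple:

    # Hand-written O/R mapping in miniature (illustrative sketch;
    # table and class names are invented for the example).
    import sqlite3
    from dataclasses import dataclass

    @dataclass
    class Person:
        id: int
        name: str
        email: str

    def find_people(conn, name_like):
        """Plain SQL in, plain objects out: one constructor per tuple."""
        rows = conn.execute(
            "SELECT id, name, email FROM person WHERE name LIKE ?",
            (name_like,),
        )
        return [Person(*row) for row in rows]

    def save_person(conn, p):
        """Write a Person back; the mapping is explicit and visible."""
        conn.execute(
            "INSERT OR REPLACE INTO person (id, name, email) VALUES (?, ?, ?)",
            (p.id, p.name, p.email),
        )

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")
    save_person(conn, Person(1, "Ada", "ada@example.org"))
    print(find_people(conn, "A%"))

Nothing here is hidden: no cache, no lazy loading, no second schema beyond the SQL itself, which is exactly the trade the manual-mapping option makes.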

Just as it's conceivable that the US could have achieved some measure of "success" in Vietnam had it kept to a clear strategy and understood a clearer relationship between commitment and results (ROI, if you will), it's conceivable that the object/relational problem can be "won" through careful and judicious application of a strategy that is clearly aware of its own limitations. Developers must be willing to take the "wins" where they can get them, and not fall into the trap of the Slippery Slope by looking to create solutions that increasingly cost more and yield less. Unfortunately, as the history of the Vietnam War shows, even an awareness of the dangers of the Slippery Slope is often not enough to avoid getting bogged down in a quagmire. Worse, it is a quagmire that is simply too attractive to pass up, a Siren song that continues to draw development teams from corporations of all sizes (including those at Microsoft, IBM, Oracle, and Sun, to name a few) against the rocks, with spectacular results. Lash yourself to the mast if you wish to hear the song, but let the sailors row.

Endnotes

1 Later analysis by the principals involved--including then-Secretary of Defense Robert McNamara--concluded that half of the attack never even took place.

2 It is perhaps the greatest irony of the war, that the man Fate selected to lead during America's largest foreign entanglement was a leader whose principal focus was entirely aimed within his own shores. Had circumstances not conspired otherwise, the hippies chanting "Hey, hey LBJ, how many boys did you kill today" outside the Oval Office could very well have been Johnson's staunchest supporters.

3 Ironically, encapsulation for purposes of maintenance simplicity turns out to be a major motivation for almost all of the major innovations in Linguistic Computer Science--procedural, functional, object, aspect, and even relational technologies ([Date04]) all cite "encapsulation" as a major driving factor.

4 We could, perhaps, consider stored procedure languages like T-SQL or PL/SQL to be "relational" programming languages, but even then, it's extremely difficult to build a UI in PL/SQL.

5 In this case, I was measuring Java RMI method calls against local method calls. Similar results are pretty easily obtainable for SQL-based data access by measuring out-of-process calls against in-process calls using a database product that supports both, such as Cloudscape/Derby or HSQL (Hypersonic SQL).

References

[Fussell]: Foundations of Object Relational Mapping, by Mark L. Fussell, v0.2 (mlf-970703)

[Fowler]: Patterns of Enterprise Application Architecture, by Martin Fowler

[Date04]: An Introduction to Database Systems, 8th Edition, by Chris Date

[Neward04]: Effective Enterprise Java, by Ted Neward

[Feb 18, 2008] Is Computer Science Dying? By David Chisnall

Nov 9, 2007 | InformIT

In the late 1990s, during the first dotcom bubble, there was a perception that a computer science degree was a quick way of making money. The dotcom boom had venture capitalists throwing money at the craziest schemes, just because they happened to involve the Internet. While not entirely grounded in fact, this trend led to a perception that anyone walking out of a university with a computer science degree would immediately find his pockets full of venture capital funding.

Then came the inevitable crash, and suddenly there were a lot more IT professionals than IT jobs. Many of these people were the ones that just got into the industry to make a quick buck, but quite a few were competent people now unemployed. This situation didn’t do much for the perception of computer science as an attractive degree scheme.

Since the end of the first dotcom bubble, we’ve seen a gradual decline in the number of people applying to earn computer science degrees. In the UK, many departments were able to offset the decline in local applicants by attracting more overseas students, particularly from Southeast Asia, by dint of being considerably cheaper than American universities for those wishing to study abroad. This only slowed the drop, however, and some people are starting to ask whether computer science is dying.

Computer Science and Telescopes

Part of the problem is a lack of understanding of exactly what computer science is. Even undergraduates accepted into computer science courses generally have only the broadest idea of what the subject entails. It’s hardly surprising, then, that people would wonder if the discipline is dying.

Even among those in computing-related fields, there’s a general feeling that computer science is basically a vocational course, teaching programming. In January 2007, the British Computer Society (BCS) published an article by Neil McBride of De Montfort University, entitled "The Death of Computing." Although the content was of a lower quality than the average Slashdot troll post (which at least tries to pretend that it’s raising a valid point) and convinced me that I didn’t want to be a member of the BCS, it was nevertheless circulated quite widely. This article contained choice lines such as the following: "What has changed is the need to know low-level programming or any programming at all. Who needs C when there’s Ruby on Rails?"

Who needs C? Well, at least those people who want to understand something of what’s going on when the Ruby on Rails program runs. An assembly language or two would do equally well. The point of an academic degree, as opposed to a vocational qualification, is to teach understanding, rather than skills—a point sadly lost on Dr. McBride when he penned his article.

In attempting to describe computer science, Edsger Dijkstra claimed, "Computer science is no more about computers than astronomy is about telescopes." I like this quote, but it’s often taken in the wrong way by people who haven’t met many astronomers. When I was younger, I was quite interested in astronomy, and spent a fair bit of time hanging around observatories and reading about the science (as well as looking through telescopes). During this period, I learned a lot more about optics than I ever did in physics courses at school. I never built my own telescope, but a lot of real astronomers did, and many of the earliest members of the profession made considerable contributions to our understanding of optics.

There’s a difference between a telescope builder and an astronomer, of course. A telescope builder is likely to know more about the construction of telescopes and less about the motion of stellar bodies. But both will have a solid understanding of what happens to light as it travels through the lenses and bounces off the mirrors. Without this understanding, astronomy is very difficult.

The same principle holds true for computer science. A computer scientist may not fabricate her own ICs, and may not write her own compiler and operating system. In the modern age, these things are generally too complicated for a single person to do to a standard where the result can compete with off-the-shelf components. But the computer scientist definitely will understand what’s happening in the compiler, operating system, and CPU when a program is compiled and run.

A telescope is an important tool to an astronomer, and a computer is an important tool for a computer scientist—but each is merely a tool, not the focus of study. For an astronomer, celestial bodies are studied using a telescope. For a computer scientist, algorithms are studied using a computer.

Software and hardware are often regarded as being very separate concepts. This is a convenient distinction, but it’s not based on any form of reality. The first computers had no software per se, and needed to be rewired to run different programs. Modern hardware often ships with firmware—software that’s closely tied to the hardware to perform special-purpose functions on general-purpose silicon. Whether a task is handled in hardware or software is of little importance from a scientific perspective. (From an engineering perspective, there are tradeoffs among cost, maintenance, and speed.) Either way, the combination of hardware and software is a concrete instantiation of an algorithm, allowing it to be studied.

As with other subjects, there are a lot of specializations within computer science. I tend to view the subject as the intersection of three fields: mathematics, engineering, and psychology.

At the very mathematical end are computer scientists who study algorithms without the aid of a computer, purely in the abstract. Closer to engineering are those who build large hardware and software systems. In between are the people who use formal verification tools to construct these systems.

A computer isn’t much use without a human instructing it, and this is where the psychology is important. Computers need to interact with humans a lot, and neither group is really suited to the task. The reason that computers have found such widespread use is that they perform well in areas where humans perform poorly (and vice versa). Trying to find a mechanism for describing something that is understandable by both humans and computers is the role of the "human/computer interaction" (HCI) subdiscipline within computer science. This is generally close to psychology.

HCI isn’t the only part of computer science related to psychology. As far back as 1950, Alan Turing proposed the Turing Test as a method of determining whether an entity should be treated as intelligent.

It’s understandable that people who aren’t directly exposed to computer science would miss the breadth of the discipline, associating it with something more familiar. One solution proposed for this lack of vision is that of renaming the subject to "informatics." In principle, this is a good idea, but the drawback is that it’s very difficult to describe someone as an "informatician" with a straight face.

Embracing My Inner Geek, Part 2: The Job

Compare with "Aesthetics and the Human Factor in Programming" by Andrei Ershov


A software developer must be part writer and poet, part salesperson and public speaker, part artist and designer, and always equal parts logic and empathy. The process of developing software differs from organization to organization. Some are more "shoot from the hip" in style; others, like my current employer, are much more careful and deliberate. In my 8 years of experience I've worked for 4 different companies, each with their own process. But out of all of them, I've found these stages to be universally applicable:

Dreaming and Shaping

A piece of software starts, before any code is written, as an idea or as a problem to be solved. It's a constraint on a plant floor, a need for information, a better way to work, a way to communicate, or a way to play. It is always tied to a human being: their job, their entertainment, their needs. A good process will explore this driving factor well. In the project I'm wrapping up now I felt strongly, and my employer agreed with me, that to understand what we needed to do, we'd have to go to the customer and feel their pain. We'd have to watch them work so we could understand their constraints. And we'd have to explore the other solutions out there to the problem we were trying to solve.

Once you understand what you need to build, you still don't begin building it. Like an architect or a designer, you start with a sketch, and you create a design. In software your design is expressed in documents and in diagrams. It's not uncommon for the design process to take longer than the coding process. As a part of your design, you have to understand your tools. Imagine an author who, at the start of each book, needs to research every writing instrument on the market first. You have to become knowledgeable about the strengths and weaknesses of each tool out there, because your choice of instrument, as much as your design or skill as a programmer, can impact the success of your work. Then you review. With marketing and with every subject matter expert and team member you can find who will have any advice to give. You meet and you discuss and you refine your design, your pre-conceptions, and even your selected tools until it passes the most intense scrutiny.

Once you have these things down, you have to be willing to give them up. You have to go back to the customer, or the originator of the problem, and sell them your solution. You put on a sales hat and you pitch what you've dreamt up, then wait with bated breath while they dissect your brain child. If you've understood them, and the problem, you'll only need to make adjustments or adapt to information you didn't previously have. Always you need to anticipate changes you didn't plan for; they'll come at you throughout the project.

Once you know how the solution is going to work, or sometimes even before then, you need to figure out how people are going to work with your solution. Software that can't be understood can't be used, so no matter how brilliant your design, if your interface isn't elegant and beautiful and intuitive, your project is a failure.

I don't pick those adjectives lightly either. All of them are required, in balance. If it's not elegant, then it's wasteful and you'll likely need to find a new job. If it's not beautiful, then no one will want to use it. And if it's not intuitive, no one will be able to use it. The attention to detail required of a good interface developer is on par with that of a good painter. Every dot, every stroke, every color choice is significant.

To make something easy to use requires at least a basic understanding of human reactions, an awareness of cognitive norms. People react to your software, often on a very base level. If you don't believe me, think of the last time your computer crashed before you had a chance to save the last two hours' worth of an essay, or of a game you were playing. What you put before your users must be easy to look at so that they are comfortable learning it. It must anticipate their needs so that they don't get frustrated. It must suggest its use, simply by being on the screen. And above all else, it must preserve their focus and their effort.

So you paint, using PowerPoint, or Visio, or some other tool, your picture of what you think the customer is going to want to use, and once again you don your sales hat and try to sell it to them. Only, unlike a salesperson selling someone else's product, you are selling your own work, and are inevitably emotionally-attached to it. Still, you know criticism is good, because it makes the results better, so you force yourself to be logical about it.

Then finally, when your solution is approved, and your interface is understood, you can move on to the really fun part of your job:

Prose and Poetry

A good sonnet isn't only identified by the letters or words on the page, but by the cadence, the meter, the measure, the flow... a good piece of literature is beautiful because it is shaped carefully yet communicates eloquently.

Code is no different. The purpose of code is to express a solution. A project consists of small stanzas, called "Methods" or "Functions" depending on what language you use. Each of these verses must be constructed in such a way that it is efficient, tightly-crafted, and effective. And like a poem, there are rules that dictate how it should be shaped. There is beauty in a clever Function.

But the real beauty of code goes further than poetry, because it re-uses itself. Maybe it's more like music, where a particular measure is repeated later in the song, and through its familiarity, it adds to the shape of the whole piece. Functions are like that, in that they're called throughout the software. Sometimes they repeat within themselves, in iterations, like the repeating patterns you see in nature. And when the pieces are added up, each in itself a little work of art, they make, if programmed properly, a whole that is much more than the sum of its parts. It is an intertwined, and constantly moving, piece of art.

As programmers, we add things called "log messages" so that we can see these parts working together. Without this output, the flow of data through the rungs and branches we put together is so fluid that we can't observe it at all; like trying to fathom the number of stars in the sky, it is difficult even to conceptualize the thousands of interactions per second that your code is causing. And we need to do this, because next comes a Quality Assurance Engineer (QA) who tries to break your code, question your decisions, and generally force you to do better than what you thought was your best.
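
To make the "log messages" point concrete, here is a minimal sketch using Python's standard logging module (the functions and the order-processing domain are invented for the example): each function announces itself, so the otherwise invisible flow of control becomes a readable trace:

    # Tracing flow with log messages (illustrative sketch; function
    # names and the order domain are invented for the example).
    import logging

    logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")
    log = logging.getLogger(__name__)

    def validate(order):
        log.debug("validate: order %s", order["id"])
        return bool(order.get("items"))

    def price(order):
        log.debug("price: order %s", order["id"])
        return sum(qty * unit for qty, unit in order["items"])

    def process(order):
        log.info("process: starting order %s", order["id"])
        if not validate(order):
            log.warning("process: rejecting empty order %s", order["id"])
            return None
        total = price(order)
        log.info("process: order %s totals %.2f", order["id"], total)
        return total

    process({"id": 42, "items": [(2, 9.95), (1, 30.00)]})

Run it and the log lines reconstruct the path the data took through the program, which is exactly what both the author and the QA engineer need to see.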

I truly believe that code is an art form, one that only a small portion of the population can appreciate. Sure, anyone can walk into the Louvre and appreciate the end result of a da Vinci or a Van Gogh, but only a true artist or student of art can really understand the intricacy of the work behind it. Similarly, most people can recognize a good piece of software when they use it (certainly anyone can recognize a bad piece of software) but it takes a true artist, or at least an earnest student, to understand just how brilliant (or how wretched) the work behind it is.

And always, as you weave your code, you have to be prepared to change it, to re-use it, to re-contain it, to re-purpose it in ways that you can't have planned for, because that is the nature of your art form: always changing and advancing.

Publishing and Documenting

It's been said that a scientist or researcher must "publish or perish." The same is true of a software developer. A brilliant piece of code, if not used, is lost. Within months it will become obsolete, or replaced, or usurped, and your efforts will become meaningless, save for the satisfaction of having solved a problem on your own.

So after months of wearing jeans, chugging caffeine, cluttering your desk with sketches and reference material, you clean yourself up, put on a nice pair of pants, comb your hair, and sell again. Although most organizations have a sales force and a marketing department, a savvy customer will invariably want technical details that a non-coder can't supply. As a lead developer on a project, it falls to you to instill confidence, to speak articulately and passionately about the appropriateness and worth of your solution. Again, as before, pride is a weakness here, because no matter how good you are, someone will always ask if your software can do something it can't; users are never really satisfied. So you think back to the design process, you remind them when they had a part in the decisions, and you attempt to impress upon them respect for the solution you have now, while acknowledging that there will always be a version 2.0.

And you write and you teach. Not so much in my current job, but in a previous one, as a lead developer it was my responsibility to educate people on the uses of our technology: to come up with ways to express the usefulness of a project without boring people with too many technical details. One of the best parts of software development, a part that I miss since it's not within my present job description, is getting up in front of people (once they've accepted your solution) and teaching them how to use it and apply it. Taking them beyond the basic functionality and showing them the tricks and shortcuts and advanced features that you programmed in, not because anyone asked you for them, but because you knew in your gut they should be there.

And Repeat

Then there's a party, a brief respite, where you celebrate your victory, congratulate those who've worked on parallel projects, and express your deepest gratitude for your peers who've lent their own particular area of expertise to your project... And you start again. Because, like I'm sure any sports team feels, you are only as good as your latest victory.

So do I fix computers? Often it's easier or more expedient to hack together a solution to a problem on my own (certainly the I.T. Department is becoming something of a slow-moving dinosaur in an age where computers aren't the size of buildings, and most of us are comfortable re-installing Windows on our own), but that's not a part of my job description.

No, I, like my peers, produce art. Functional, useful, but still beautiful, art. We are code poets, and it is our prose that builds the tools people use every day.

However, unlike most other artists, we're usually paid pretty well for our work ;-)

Slashdot: The Life of a Software Engineer

[Jan 8, 2008] MIT's OpenCourseWare Model is Proliferating Online

The Massachusetts Institute of Technology OpenCourseWare effort has been offering free lecture notes, exams, and other resources from more than 1,800 courses, according to its website. Some of the courses offer a substantial amount of video and audio content. I remember stumbling across this resource via my employer's intranet about a year ago. Frankly speaking, I didn't think the concept would go very far because you couldn't earn credit…

Well, I was wrong. It's catching fire: over 100 universities worldwide have set up similar models, some of them top-tier schools such as Johns Hopkins and Tufts.

I was searching for a good UNIX course but I haven't found one yet. Surprisingly, it appears MIT’s Linear Algebra course is quite popular with the OpenCourseWare community.

By the way, I don't have any affiliation with OCW or any of the higher learning institutions mentioned.

Added later:

UC Irvine OCW
Notre Dame OCW
Utah State OCW
Osaka OCW
Japan OCW Consortium

[Dec 11, 2007] John Socha-Leialoha's Blog: The Abstract Tar Pit

How often have you found yourself arguing with another person, and they just don't seem to understand you? Chances are they feel that you just don't understand them. You've fallen into the abstract tar pit.

Abstract discussions are like abstract art--they can be very appealing, in part because you can interpret the abstract art however you want to. People love to see what they want to see. But when it comes to technical discussions, abstract discussions are dangerous. There is a good chance someone listening to your abstract arguments will understand completely--but it won't be what you're trying to convey. To understand the abstract, they're likely creating concrete examples in their head and then arguing against your ideas based on these "private" concrete examples. The problem is, if these concrete examples aren't shared, you'll get an argument about completely different examples and understandings.

I recently worked on a 5-week project where this was really clear. There was a small group who had an idea they were trying to sell internally to get funding. Everyone else was feeling confused. Just when they thought they understood these ideas, another concept came along that contradicted what they thought they understood.

So we started a project using Expression Blend to create a "movie" of the idea. The first week we brainstormed a lot, and then drew sketches by hand of what the different screens would look like. We then presented these hand-drawn screens to a customer advisory board so we could get their feedback and help us decide what we should focus on during the next week. We intentionally used hand-drawn sketches in our discussions with customers so they wouldn't get bogged down in the small details and would just focus on the big picture.

About half way through the project we started to create actual screen mockups and animate them with Microsoft Expression Blend so it would look like a screen capture movie of an actual program--but it was all smoke and mirrors.

During the project, the team that had come up with the ideas was constantly arguing with us and saying we were asking the wrong questions. But when we had the final "movie" and showed it to them, an interesting thing happened. The conversations changed from being abstract to concrete. The idea team started to explain the details that we got wrong. And in the process, we discovered that we had gotten most of their vision correct--we just differed in some of the details.

What's more, other people who had been confused completely got the idea after seeing the movie. And again, the discussions were at a concrete level, so the discussions that came after seeing the movie were far more productive.

[Dec 3, 2007] Programmer Productivity

Robert Martin has a post about how 10% of programmers write 90% of the code. I think this is more-or-less accurate, but he seems to think that whether a programmer is a member of the elite or not is an innate quality -- that there are good programmers and poor programmers, and nobody ever moves between the two groups.

I have worked on projects where I've been in the elite, and I've worked on projects where I've been in the middle, and on occasion I even qualify as a Waste Of Space for a month or two. There are several factors that influence how productive I am, personally.

First, the fewer developers in the group, the better. This is more than just being a big fish in a little pond, it's about feeling responsible for the code. If I'm in a group of 20, my contribution doesn't matter as much as if I'm in a group of four, so I don't care as much.

Second, distractions must be minimized. I enjoy helping people and answering questions, but they really cut into my concentration. Unfortunately, it's rude to ask people to use email instead of popping over for a visit or sending an instant message. Also, if I'm in an environment where I have meetings every day, scheduled such that they break my time up into hour-long chunks, then my attention is guaranteed to wander. For this reason, I tend to work best at night.

Third, history and familiarity with the code is very important. In code I've written and/or rewritten, I'm extremely productive. In code that I'm unfamiliar with, I'm not. It also helps a lot if the person who did write the code is willing to take the time to answer questions, without getting irritated. I also find that different people write the same program in vastly different ways, and if you're working on a codebase that was architected very differently from the way you would have done it, it can be difficult to ever get comfortable.

Fourth, management is important. For example, I need to feel just enough time-pressure to make me pay attention, but not so much that I give up in despair. I also need to get feedback as to how my work is perceived by users (did it suck? did it rock?) otherwise my work starts to seem pointless and I lose motivation.

Fifth, I find that my productivity has ceased to improve noticeably over time. For the first two or three years it improved dramatically, but since then I seem to have plateaued. (I currently have eight years of professional programming experience.)

If you work with someone who you think is being unproductive, perhaps you should spend some time to find out why. You might find that a very small change in their work environment can lead to a large improvement in their output. Maybe they just want to know that their code is actually useful to someone. Maybe they need free snacks, so their blood sugar doesn't get too low in the afternoon. Maybe they need to work in a quieter part of the office.

Discovering and addressing these kinds of things should be 50% of what a manager does. The other 50% should be facilitating communication both within the group and with other groups.

Posted on June 6, 2003 09:02 PM

[Nov 17, 2006] Defence fires missile at IT industry By Steven Deare

ZDNet Australia: Technology vendors have taken a verbal hammering from the Australian Defence Force (ADF) after one of its top procurement chiefs blamed the industry for most of its IT project failures.

Kim Gillis, deputy chief executive officer of the ADF's procurement arm, the Defence Materiel Organisation, said vendors set unrealistic expectations in tenders -- which was usually the cause of those government IT projects failing.

Government tenders were often surrounded by "a conspiracy of optimism," said Gillis.

"Say I'm going to put in an IT system in 2000-and-whatever, and go out to industry and say 'I want you to give me this type of capability'," he told delegates at the Gartner Symposium conference in Sydney.

"And miraculously everybody who tenders comes in and says 'I can deliver that capability exactly how you specified on that day'.

"And everybody starts believing that it's a reality," he said.

DMO project managers were given a simple instruction for dealing with such companies, according to Gillis: "Don't believe it".

"Especially in the IT world, because I haven't seen in my experience in the last five years, an IT project delivered on schedule," he said.

"They do happen, but I haven't seen them."

False promises have often led to government IT project failures, according to Gillis. However, it was usually the government that wore the blame.

"The reality is the people who actually got it wrong are the industry participants who are actually providing the services," he said. "Most of the time the fault lies not with what I've actually procured but what they've actually told they're contracted for.

"At the end of the day what happens is, they've underperformed, [but] I take the hit," he said.

The DMO recently took steps to improve its procurement process by instigating the Procurement Improvement Program (PIP). It includes a series of consultations with industry and changes the tendering and contracting process.

[Sep 19, 2006] Extreme programming as yet another SE fad

Seems like a lot of that is just rehashing the surgical-team idea described in The Mythical Man-Month, but in some very incoherent and clumsy way ("pair programming", my God what a name -- "cluster disaster" would be much better). "Pair programming" may help to stop programmers from wasting too much time reading Slashdot :-). However, they seem to be able to compensate for this in a lot of other ways.

First of all, collective ownership diminishes individual responsibility and deliberately creates huge communication overhead that diminishes each programmer's productivity; for the more talented members of the team the drop might be dramatic ("state farm effect"). It's also very difficult to get the right balance of personalities in the teams. If you pair a good programmer with a zealot, the zealot will be in the driving seat and the results will suffer. We all should periodically reread Brooks' famous The Mythical Man-Month. Then one will understand that XP does not bring anything new to the table. In essence this is a profanation of Brooks' idea of surgical teams, perversely mixed with the Microsoft idea of "one programmer -- one tester".

In my opinion, extreme programming is extremely overrated. Some of the ideas, such as test-driven development (although this concept is not restricted to XP), work well. Others, such as pair programming, just do not work in my opinion. Programmers, like writers, are solo beasts: putting two of these dragons behind one keyboard is asking for trouble.

As a methodology XP is pure fantasy. It has been well known for a long time that the big-bang (waterfall) model does not work well. The 'spiral' model (iterating out from the core of a small, well-understood system) is a much better methodology; it was popularized by Unix and has reemerged in some form in the prototyping approach.

It is difficult to survive the amount of SE nonsense in a typical XP book. Readers beware.

Zealots defend the XP "cluster disaster" as a kind of code review. But the one-computer, same-cubicle idea is nonsense. It is unclear to me that communication improves if people share the same cubicle. I like that it is called Extreme, because this is an example of extreme nonsense:

Like any kind of engineering, software engineering needs as much face to face collaboration as possible.

To a point collaboration is important, but real SE requires careful planning by a talented architect and clear interface definitions. XP -- almost to the point of being pathological -- attempts to avoid planning as much as possible, substituting endless chatter and tremendous time wasted repeatedly reimplementing what could have been done right the first time. (And yes, I know some things always have to be reimplemented, but just because mistakes are inevitable doesn't mean they have to be encouraged.)

Software engineering has an unfortunate tendency towards fanatical adherence to the latest fad, which is always sold as a silver bullet. Usually this involves an implementation language backed by a marketing push (Java); XP seems to be another programming fad built on unscrupulous book marketing (structured-programming extremism and the verification bonanza came first). And like the verification bonanza before it, XP has found an abundant number of zealots, perhaps because of its potential for snake-oil salesmen to earn a decent living at the expense of the programmers suffering from it.

But all-or-nothing is not just an XP problem. Most SE methodologies are close to religions in the sense that they all try to convince you that their way is best, that if you deviate you are a heretic, and that if it all fails it's your problem for not following the rules. The "prophets" who "invent" a methodology, and make their money from publishing books as well as teaching people how to do it, are usually pretty sleazy people. Why would they kill their cash cow even if they themselves understand that they are wrong? In general, SE methodology advocates, like cult leaders, cannot afford to correct themselves.

InformIT: Comments on the article "The Future of Outsourcing", September 11, 2011, by Alan Gore

If it's not clear, I meant to say that the CMM cert process is itself subject to manipulation and fraud by the fact that anybody can submit any project (even one they didn't do) for review to the people at Carnegie Mellon.

The "true believers" refers to those at CM and elsewhere who continue to preach "Software Engineering" when the vast majority of its adherents cannot reliably or even consistently produce success from project to project. None who has far more failures than successes when using their own methods is in a position to lecture others on the "right way" to make successful software. Once again, the emperor has no software project magic fix, and processes which demand innate skill cannot be mass-produced in a population without that inate skill. Get over it.

Durba, your idiotic generalization will make you nice fodder for the next c by markusbaccus OCT 09, 2003 02:23:05 AM

The CMM is a cert in that it rates a company's adoption of an apparently unquestionable methodology which has a 2/3 rate of failure. It is the logical equivalent of saying, "If you don't blow on that die three times before you roll it, you only have a one in six chance of rolling a six." Umm -- prove it.

Do me a favor, learn how to recognize logically fallacious arguments like an "appeal to authority" or a "non sequitur". ("Why isn't the SEI doing something about it?" == the fallacious belief that the SEI is in a position to adequately identify fraud merely because it is a recognizable authority, or that it would even have an incentive to do so. E.g., "He is an expert in physics so he would never lie to protect his project's funding.") Oh, and since we're on it, you implicitly made an error of misplaced deduction when you missed my point (e.g., "I lit one match, so all matches will light"). It may be true that ONE project met the standards of the Capability Maturity Model Level 5, but that is not an indica