Category Archives: Data

2011 Management Blog Roundup: Stats Made Easy

The 4th Annual Management Blog Roundup is coming to a close soon. This is my 3rd and final review post looking back at 2011; the previous two posts looked at Gemba Panta Rei and the Lean Six Sigma Blog.

I have a special affinity for the use of statistics to understand and improve. I imagine it is both genetic and psychological. My father was a statistician and I have fond memories of applying statistical thinking to understand a result or system. I am also comfortable with numbers, and like most people I enjoy working with things I have an affinity for.


Mark Anderson’s Stats Made Easy blog brings statistical thinking to managers. That is not an easy thing to do; as one of his posts, Wrong more often than right but never in doubt, shows, we have an ability to ignore data we don’t want to know: “Kahneman examined the illusion of skill in a group of investment advisors who competed for annual performance bonuses. He found zero correlation on year-to-year rankings, thus the firm was simply rewarding luck. What I find most interesting is his observation that even when confronted with irrefutable evidence of misplaced confidence in one’s own ability to prognosticate, most people just carry on with the same level of self-assurance.”

The actual practice of experimentation (PDSA…) needs improvement. Too often the iteration component is entirely missing (only one experiment is done). That is likely partially a result of another big problem: the experiments are not nearly short enough. Mark offered very wise advice on the strategy of experimentation: break it into a series of smaller stages. “The rule-of-thumb I worked from as a process development engineer is not to put more than 25% of your budget into the first experiment, thus allowing the chance to adapt as you work through the project (or abandon it altogether).” And note that abandon-it-altogether option. Don’t just proceed with a plan if what you learn makes proceeding unwise: too often we act based on expectations rather than evidence.

In Why coaches regress to be mean, Mark explained the problem with reacting to common cause variation and “learning” that it helped to do so. “A case in point is the flight instructor who lavishes praise on a training-pilot who makes a lucky landing. Naturally the next result is not so good. Later the pilot bounces in very badly — again purely by chance (a gust of wind). The instructor roars disapproval. That seems to do the trick — the next landing is much smoother.” When you ascribe special causes to common cause variation you often confirm your own biases.
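The flight-instructor story is easy to reproduce in simulation. A minimal sketch (the skill level and noise scale are made-up numbers): model each landing as a fixed skill plus common cause variation, then look at what follows the best landings.

```python
import random

random.seed(1)

# Each landing = constant skill + common cause variation (gusts of wind).
# The skill level and noise scale are invented for illustration.
skill = 5.0
landings = [skill + random.gauss(0, 1.0) for _ in range(100_000)]

# After an unusually good landing (top decile), what happens next time?
threshold = sorted(landings)[int(0.9 * len(landings))]
good = [landings[i] for i in range(len(landings) - 1) if landings[i] > threshold]
next_after_good = [landings[i + 1] for i in range(len(landings) - 1)
                   if landings[i] > threshold]

avg_good = sum(good) / len(good)
avg_next = sum(next_after_good) / len(next_after_good)

print(f"average of top-decile landings:  {avg_good:.2f}")
print(f"average of the landing after it: {avg_next:.2f}")
```

No praise or criticism appears anywhere in the model, yet the landing that follows a top-decile landing is, on average, just an ordinary one near the skill level: the apparent effect of lavish praise or roared disapproval is regression to the mean, not instruction.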

Mark’s blog doesn’t mention six sigma by name in his 2011 posts, but the statistical thinking expressed throughout the year makes it a must-read for those working in six sigma programs.

Related: 2009 Curious Cat Management Blog Carnival - 2010 Management Blog Review: Software, Manufacturing and Leadership

Eliminate the Waste of Waiting in Line with Queuing Theory

One thing that frustrates me is how managers fail to adopt proven strategies for decades. One very obvious example is using queuing theory to set up lines.

Yes, it may be even better to adopt strategies that eliminate as much waiting in line as possible. But if waiting in line still occurs and you are not having one queue served by multiple representatives, shame on you and your company.
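For readers who want to see the effect rather than take it on faith, here is a rough discrete-event sketch comparing one shared queue against separate per-server lines; the arrival and service rates are made-up numbers chosen to keep the servers busy.

```python
import random

random.seed(7)

def avg_wait(shared, n=20_000, servers=2, arrival_rate=1.8, service_rate=1.0):
    """Average wait with Poisson arrivals and exponential service times."""
    t, arrivals = 0.0, []
    for _ in range(n):
        t += random.expovariate(arrival_rate)
        arrivals.append(t)

    free_at = [0.0] * servers  # time at which each server next becomes free
    waits = []
    for a in arrivals:
        if shared:
            # One line feeds all servers: take whichever server frees up first.
            i = min(range(servers), key=lambda j: free_at[j])
        else:
            # Separate lines: pick a line at random and stay in it.
            i = random.randrange(servers)
        start = max(a, free_at[i])
        waits.append(start - a)
        free_at[i] = start + random.expovariate(service_rate)
    return sum(waits) / len(waits)

pooled = avg_wait(shared=True)
separate = avg_wait(shared=False)
print(f"one shared queue: {pooled:.2f}")
print(f"separate queues:  {separate:.2f}")
```

With identical traffic, the pooled queue’s average wait comes out far lower, because a single line never leaves one server idle while a customer waits in the wrong line.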

Related: Customer Focus and Internet Travel Search - YouTube Uses Multivariate Experiment To Improve Sign-ups 15% - Making Life Difficult for Customers

Steve Jobs Discussing Customer Focus at NeXT

Video from 1991 when Steve Jobs was at NeXT. Even with this customer focus, NeXT failed. But the video does show the difficulty of truly applying customer focus. You have to be creative. You have to examine data. You have to really understand how your customers use your products or services (go to the gemba). You have to speculate about the future. The video is also a great example of giving all employees insight into the current thinking of executives.

Related: Sometimes Micro-managing Works (Jobs) - Delighting Customers - What Job Does Your Product Do?

One factor at a time (OFAT) Versus Factorial Designs

Guest post by Bradley Jones

Almost a hundred years ago R. A. Fisher‘s boss published an article espousing OFAT (one factor at a time). Fisher responded with an article of his own laying out his justification for factorial design. I admire the courage it took to contradict his boss in print!

Fisher’s argument was mainly about efficiency – that you could learn as much about many factors as you learned about one in the same number of trials. Saving money and effort is a powerful and positive motivator.

The most common argument I read against OFAT these days has to do with inability to detect interactions and the possibility of finding suboptimal factor settings at the end of the investigation. I admit to using these arguments myself in print.

I don’t think these arguments are as effective as Fisher’s original argument.

To play the devil’s advocate for a moment, consider this thought experiment. You have to climb a hill that runs along a line going from southwest to northeast, but you are only allowed to make steps that are due north or south, or due east or west. Though you will have to make many zigzags, you will eventually make it to the top. If you noted your altitude at each step, you would have enough data to fit a response surface.

Obviously this approach is very inefficient but it is not impossible. Don’t mistake my intent here. I am definitely not an advocate of OFAT. Rather I would like to find more convincing arguments to persuade experimenters to move to multi-factor design.
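Fisher’s efficiency argument and the interaction argument can both be seen in a tiny numerical sketch. The response function below is invented for illustration: two coded factors that each help a little on their own but pay off mainly in combination.

```python
from itertools import product

# Invented response surface: factors coded 0 (low) / 1 (high). The coefficients,
# especially the 20-point interaction, are made up for illustration.
def yield_pct(temp, time):
    return 60 + 5 * temp + 5 * time + 20 * temp * time

# One factor at a time from the (0, 0) baseline: each factor looks like a +5 effect.
ofat = {
    "temp alone": yield_pct(1, 0) - yield_pct(0, 0),
    "time alone": yield_pct(0, 1) - yield_pct(0, 0),
}

# A 2x2 factorial covers all four corners in four runs.
runs = {(t, m): yield_pct(t, m) for t, m in product([0, 1], repeat=2)}
interaction = runs[(1, 1)] - runs[(1, 0)] - runs[(0, 1)] + runs[(0, 0)]

print(ofat)          # each factor alone looks like a small effect
print(runs[(1, 1)])  # the corner OFAT never visits
print(interaction)   # invisible to one-factor-at-a-time
```

Four factorial runs estimate both main effects and the interaction; OFAT from the baseline sees two effects of 5 and has no run from which to learn that the (1, 1) corner yields 90.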

Related: The Purpose of Factorial Designed Experiments - Using Design of Experiments - articles by R.A. Fisher - articles on using factorial design of experiments - Does good experimental design require changing only one factor at a time (OFAT)? - Statistics for Experimenters

Warren Buffett’s 2010 Letter to Shareholders

Warren Buffett has published his always excellent annual shareholder letter. His letters provide excellent investing insight and good management ideas.

Yearly figures, it should be noted, are neither to be ignored nor viewed as all-important. The pace of the earth’s movement around the sun is not synchronized with the time required for either investment ideas or operating decisions to bear fruit. At GEICO, for example, we enthusiastically spent $900 million last year on advertising to obtain policyholders who deliver us no immediate profits. If we could spend twice that amount productively, we would happily do so though short-term results would be further penalized. Many large investments at our railroad and utility operations are also made with an eye to payoffs well down the road.

At Berkshire, managers can focus on running their businesses: They are not subjected to meetings at headquarters nor financing worries nor Wall Street harassment. They simply get a letter from me every two years and call me when they wish. And their wishes do differ: There are managers to whom I have not talked in the last year, while there is one with whom I talk almost daily. Our trust is in people rather than process. A “hire well, manage little” code suits both them and me.

Cultures self-propagate. Winston Churchill once said, “You shape your houses and then they shape you.” That wisdom applies to businesses as well. Bureaucratic procedures beget more bureaucracy, and imperial corporate palaces induce imperious behavior. (As one wag put it, “You know you’re no longer CEO when you get in the back seat of your car and it doesn’t move.”) At Berkshire’s “World Headquarters” our annual rent is $270,212. Moreover, the home-office investment in furniture, art, Coke dispenser, lunch room, high-tech equipment – you name it – totals $301,363. As long as Charlie and I treat your money as if it were our own, Berkshire’s managers are likely to be careful with it as well.

At bottom, a sound insurance operation requires four disciplines… (4) The willingness to walk away if the appropriate premium can’t be obtained. Many insurers pass the first three tests and flunk the fourth. The urgings of Wall Street, pressures from the agency force and brokers, or simply a refusal by a testosterone-driven CEO to accept shrinking volumes has led too many insurers to write business at inadequate prices. “The other guy is doing it so we must as well” spells trouble in any business, but none more so than insurance.

I don’t agree with everything he says. And what works at one company obviously won’t work everywhere. Copying doesn’t work. Learning from others, understanding what makes something work, and then determining how to incorporate some of the ideas into your organization can be valuable. I don’t believe in “Our trust is in people rather than process.” I do believe in “hire well, manage little.” Exactly what those phrases mean is not necessarily straightforward. I believe you need to focus on creating a Deming-based management system, and that will require educating and coaching managers on how to manage such a system. But the management decisions about day-to-day operations should be left to those who are working on the processes in question (often workers who are not managers, sometimes supervisors and managers, and sometimes senior executives).

Related: Too often, executive compensation in the U.S. is ridiculously out of line with performance - Management Advice from Warren Buffett - Great Advice from Warren Buffett to University of Texas – Austin business school students - 2004 Warren Buffett Report

Actionable Metrics

Metrics are valuable when they are actionable. Think about what will be done if certain results are shown by the data. If you can’t think of actions you would take, that metric may not be worth tracking.

Metrics should be operationally defined so that the data is collected properly. Without operational definitions, data collected by more than one person will often include measurement error (in this case, different people measuring different things but calling the results the same thing).

And without operational definitions, those using the resulting data may well misinterpret what it is saying. Often data is presented without an operational definition and people think the data says something it does not. I find that most often when people say statistics lie, they really made an incorrect assumption about what the data said, usually because they didn’t understand the operational definition of the data. Data can’t lie. People can. And people can intentionally mislead with data. But far more often people unintentionally mislead with data that is misunderstood (often due to a failure to operationally define the data).
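A small invented example of why the operational definition matters: the same order records give completely different “on-time delivery” figures depending on which definition of on time was agreed.

```python
from datetime import date

# Hypothetical order records: (promised, shipped, delivered) dates.
orders = [
    (date(2011, 3, 1), date(2011, 3, 1), date(2011, 3, 4)),
    (date(2011, 3, 1), date(2011, 2, 28), date(2011, 3, 2)),
    (date(2011, 3, 5), date(2011, 3, 5), date(2011, 3, 9)),
]

# Two plausible operational definitions of "on-time delivery":
shipped_on_time = sum(s <= p for p, s, d in orders) / len(orders)
delivered_on_time = sum(d <= p for p, s, d in orders) / len(orders)

print(f"on time, defined as shipped by the promise date:   {shipped_on_time:.0%}")
print(f"on time, defined as delivered by the promise date: {delivered_on_time:.0%}")
```

Both figures measure something real about the same three orders, yet without the definition attached, a 100% and a 0% on-time rate look like contradictory facts.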

In response to: Metrics Manifesto: Raising the Standard for Metrics

Related: Outcome Measures - Evidence-based Management - Metrics and Software Development - Distorting the System (due to misunderstanding metrics) - Manage what you can’t measure

Incentivizing Behavior Doesn’t Improve Results

In the webcast, Dan Pink shares research results exploring human motivation and ideas on how to manage organizations given the scientific research on motivation.

  • “once a task called for even rudimentary cognitive skill a larger reward led to poorer performance”
  • “Pay people enough to take the issue of money off the table. Pay people enough so they are not thinking about money they are thinking about the work.”
  • “3 factors lead to better performance: autonomy, mastery and purpose” [not additional cash rewards]
  • Open source software is created by highly skilled people contributing their time to collaborative projects that are then given away (such as Linux, Ruby, Apache). For large efforts there are often people paid by companies to contribute to the open source software, but many people contribute 20-30 and more hours a week for free to such efforts. Why? “Challenge, mastery and making a contribution”
  • “When the profit motive becomes unmoored from the purpose motive, bad thing happen. Bad things ethically sometimes, but also bad things like not good stuff, like crappy products, like lame services, like uninspiring places to work… People don’t do great things”
  • “If we start treating people like people… get past this ideology of idea of carrots and sticks and look at the science we can actually build organization and work life that make us better off, but I also think they have the promise to make our world a just a little bit better.”

The ideas presented emphasize respect for people, an understanding of psychology and validating beliefs with data. All of it fits very well with Deming’s ideas on management and the idea I try to explore in this blog. It isn’t easy to adjust your ideas. But the evidence continues to pile up against some outdated management practices. And good managers have to learn and adapt their practices to what is actually effective.

Related: Extrinsic Incentives Kill Creativity - The Trouble with Incentives: They Work - Righter Incentivization - Individual Bonuses Are Bad Management

Taxes per Person by Country

I think the idea that data lies is false, and that such a notion being commonly held is a sign of lazy intellect. You can present data in different ways to focus on different aspects of a system. And you can make faulty assumptions based on the data you look at.

It is true someone can just provide false data; that is an issue you have to consider when drawing conclusions from data. But often people just don’t think about what the data is really saying. Most often when people say data lies, they were misled because they didn’t think about what the data actually showed. When you examine data provided by someone else, make sure you understand what it is actually saying; and if they are trying to support their position, be careful that they are not misleading you with their presentation of the data.

Here is some data from Greg Mankiw’s Blog. He wants to make the point that the USA is taxed more on par with Europe than some believe, because he wants to reduce current taxes. So he shows that while taxes as a percent of economic activity are low in the USA, taxes per person are comparable to Europe.

Taxes/GDP x GDP/Person = Taxes/Person

France .461 x 33,744 = $15,556

Germany .406 x 34,219 = $13,893

UK .390 x 35,165 = $13,714

US .282 x 46,443 = $13,097

Canada .334 x 38,290 = $12,789

Italy .426 x 29,290 = $12,478

Spain .373 x 29,527 = $11,014

Japan .274 x 32,817 = $8,992

The USA is the 2nd lowest in taxes as a percent of GDP: 28.2% v 27.4% for Japan. But in taxes per person it is toward the middle of the pack. France, at 46% taxes/GDP, totals $15,556 in tax per person compared to $13,097 for the USA. Both measures of taxes are useful to know, in my opinion. Neither lies. Both have merit in providing an understanding of the system (the economies of countries).
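The table’s arithmetic, and the two different rankings it supports, can be checked directly (figures as quoted above):

```python
# Figures from Mankiw's table: taxes as a share of GDP, and GDP per person (USD).
countries = {
    "France":  (0.461, 33_744),
    "Germany": (0.406, 34_219),
    "UK":      (0.390, 35_165),
    "US":      (0.282, 46_443),
    "Canada":  (0.334, 38_290),
    "Italy":   (0.426, 29_290),
    "Spain":   (0.373, 29_527),
    "Japan":   (0.274, 32_817),
}

# Taxes/GDP x GDP/person = taxes/person; the two measures rank countries differently.
taxes_per_person = {c: round(share * gdp) for c, (share, gdp) in countries.items()}
by_share = sorted(countries, key=lambda c: countries[c][0])
by_taxes = sorted(taxes_per_person, key=taxes_per_person.get)

print(taxes_per_person["US"])  # 13097
print(by_share.index("US"))    # 1 -- 2nd lowest share of GDP
print(by_taxes.index("US"))    # 4 -- middle of the pack per person
```

Neither ranking is the “true” one; they answer different questions about the same economies.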

Related: Fooled by Randomness - Simpson’s Paradox - Mistakes in Experimental Design and Interpretation - Government Debt as Percentage of GDP 1990-2008 by Country - Communicating with the Visual Display of Data - Illusion of Explanatory Depth

Prophet Unheard: Dr. W. Edwards Deming – 1992

[embedded webcast links removed because they have been removed from YouTube. To see video with W. Edwards Deming see the Deming Institute YouTube channel.]

This is an interesting video on Deming and American management (by the BBC in 1992): Prophet Unheard. It includes some nice old footage of Deming in Japan. The importance of respect for people is clear, and the video also touches on the danger of relying on data when you do not understand variation (and the fact that many important matters are unmeasurable). The video features many snippets of Dr. Deming speaking and includes Don Peterson, Ford CEO; Clare Crawford-Mason, If Japan Can, Why Can’t We producer; and Myron Tribus.

Related: Dr. Deming Webcast on the 5 Deadly Diseases - Red Bead Experiment Webcast - Performance without Appraisal - management webcasts

Part two of the documentary explores the Deming Prize, understanding data and the PDSA cycle: [removed]

Part 3 explores the efforts at Florida Power and Light, the first USA Deming Prize winner: [removed]

The Biggest Manufacturing Countries in 2008 with Historical Data

Once again the USA was the leading country in manufacturing for 2008. And once again China grew their manufacturing output amazingly. In a change from recent trends, Japan grew output significantly. Of course, the 2009 data is going to show the impact of a very severe worldwide recession.

Chart showing the percentage of output by the top manufacturing countries from 1990 to 2008, by Curious Cat Management Blog, Creative Commons Attribution.

The first chart shows the USA’s share of the manufacturing output, of the countries that manufactured over $185 billion in 2008, at 28.1% in 1990, 27.7% in 1995, 32% in 2000, 28% in 2005, 28% in 2006, 26% in 2007 and 24% in 2008. China’s share has grown from 4% in 1990, 6% in 1995, 10% in 2000, 13% in 2005, 14% in 2006 and 16% in 2007 to 18% in 2008. Japan’s share has fallen from 22% in 1990 to 14% in 2008. The USA has about 4.5% of the world population, China about 20%. See the Curious Cat Investment blog post: Data on the Largest Manufacturing Countries in 2008.

Even with just this data, it is obvious that the belief in a decades-long steep decline in USA manufacturing is not supported by the evidence. In fact, the USA’s output has grown substantially over this period. It has just grown more slowly than China’s (as has every other country’s), and so while output in the USA has grown, its percentage relative to China has shrunk. The percentage of manufacturing output by the USA (excluding China’s output from the total) was 29.3% in 1990 and 29.6% in 2008. The second chart shows manufacturing output over time.

Chart showing the output of the top manufacturing countries from 1990 to 2008, by Curious Cat Management Blog, Creative Commons Attribution.
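The ex-China share mentioned above can be roughly reproduced from the rounded percentages quoted in this post; because the inputs here are rounded, the 2008 result lands near 29.4% rather than the 29.6% computed from the underlying data.

```python
# Rounded shares of output among the large manufacturing countries, as quoted above.
usa = {1990: 0.281, 2008: 0.24}
china = {1990: 0.04, 2008: 0.185}

# USA share of the non-China total: remove China from the denominator.
ex_china = {yr: usa[yr] / (1 - china[yr]) for yr in usa}
print({yr: f"{share:.1%}" for yr, share in ex_china.items()})
```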

The 2008 China data is not provided for manufacturing alone in the latest UN data (for global manufacturing, in billions of current USA dollars). The percentage of manufacturing (relative to manufacturing, mining and utilities combined) was 78% for 2005-2007, so I used 78% of the manufacturing, mining and utilities figure provided in the 2008 data. There is a good chance this overstates China’s manufacturing output in 2008 (due to very high commodity prices in 2008).

Hopefully these charts provide some evidence of what is really going on with global manufacturing and counteract the hype, to some extent. Global economic data is not perfect. These figures are an attempt to capture the economic reality in the world, but they are not a perfect proxy. This data is shown in 2008 USA dollars, which is good in the sense that it shows all countries in the same light and we can compare the 1995 USA figure to the 2005 figure without worrying about inflation. However, foreign exchange fluctuations over time can show a country having a decline in manufacturing output in some year when in fact the output increased (it is just that the decline against the USA dollar that year results in the data showing a decrease, which is accurate when measured in terms of USA dollars).

If the dollar declines substantially between when the 2008 data was calculated and when the 2009 data is calculated, that will result in the data showing a substantial increase for those countries whose currencies strengthened against the USA dollar. At this time the Chinese renminbi has not strengthened while most other currencies have; the Chinese government is retaining a peg to a specific exchange rate.

Korea (1.8% in 1990, 3% in 2008), Mexico (1.7% to 2.6%) and India (1.4% to 2.5%) were the only countries to increase their percentage of manufacturing output (other than China, of course, which grew from 3.9% to 18.5%).

Related: posts on manufacturing - Global Manufacturing Data 2007 - Global Manufacturing Employment Data, 1979 to 2007 - Top 10 Manufacturing Countries 2006 - Top 10 Manufacturing Countries 2005 - lean manufacturing resources

Highlights from Recent George Box Speech

The JMP blog has posted some highlights from George Box’s presentation at Discovery 2009:

Infusing his entire presentation with humor and fascinating tales of his memories, Box focused on sequential design of experiments. He attributed much of what he knows about DOE [design of experiments] to Ronald A. Fisher. Box explained that Fisher couldn’t find the things he was looking for in his data, “and he was right. Even if he had had the fastest available computer, he’d still be right,” said Box. Therefore, Fisher figured out how to study a number of factors at one time. And so, the beginnings of DOE.

Having worked and studied with many other famous statisticians and analytic thinkers, Box did not hesitate to share his characterizations of them. He told a story about Dr. Bill Hunter and how he required his students to run an experiment. Apparently a variety of subjects was studied [see 101 Ways to Design an Experiment, or Some Ideas About Teaching Design of Experiments]

According to Box, the difficulty of getting DOE to take root lies in the fact that these mathematicians “can’t really get the fact that it’s not about proving a theorem, it’s about being curious about things. There aren’t enough people who will apply [DOE] as a way of finding things out. But maybe with JMP, things will change that way.”

George Box is a great mind and great person who I have had the privilege of knowing my whole life. My father took his class at Princeton, then followed George to the University of Wisconsin-Madison (where Dr. Box founded the statistics department and Dad received the first PhD). They worked together building the UW statistics department, writing Statistics for Experimenters and founding the Center for Quality and Productivity Improvement among many other things.

Statistics for Experimenters: Design, Innovation, and Discovery shows that the goal of design of experiments is to learn and refine your experiment based on the knowledge you gain and experiment again. It is a process of discovery. If done properly it is very similar to the PDSA cycle with the application of statistical tools to aid in determining the impact of various factors under study.

Related: Box on Quality - George Box Quotations - posts on design of experiments - Using Design of Experiments

Deming: There is No True Value

There is no true value of anything: data has meaning based on the operational definition used to calculate the data.

Walter Shewhart’s Statistical Method from the Viewpoint of Quality Control, foreword by W. Edwards Deming:

There is no true value of anything. There is instead a figure that is produced by application of a master or ideal method of counting or measurement… no true value of the number of inhabitants within the boundaries of (e.g.) Detroit. A count of the number of inhabitants of Detroit is dependent upon the application of arbitrary rules for carrying out the count. Repetition of an experiment or of a count will exhibit variation.

Dr. Deming’s ideas on the theory of knowledge are the least understood and least seen in other management systems. The importance of understanding what data does, and does not, tell you is at least somewhat acknowledged in other management systems but is often not found in the actual practice of management. The execution often glosses over the importance of actually understanding statistics versus merely using formulas. Just using formulas is dangerous. It may be inconvenient, but learning about the traps we can fall into in using data is important.

How often do you see the operational definition used to calculate the data you see with the data you are provided?

via: Shewhart, Deming and Data by Malcolm Chisholm

Related: How We Know What We Know - Pragmatism and Management Knowledge - Measuring and Managing Performance in Organizations - Dangers of Forgetting the Proxy Nature of Data

Statistical Learning as the Ultimate Agile Development Tool by Peter Norvig

Interesting lecture on Statistical Learning as the Ultimate Agile Development Tool by Peter Norvig. The webcast is likely to be of interest to a fairly small segment of readers of this blog. But for geeks it may be interesting. He looks at the advantages of machine learning versus hand programming every case (for example spelling correction).

Google translate does a very good job (for computer based translation) based on machine learning. You can translate any of the pages on this blog into over 30 languages using Google translate (using the widget in the right column).

Via: @seanstickle

Related: Mistakes in Experimental Design and Interpretation - Does the Data Deluge Make the Scientific Method Obsolete? - Website Data - An Introduction to Deming’s Management Ideas by Peter Scholtes (webcast)

Communicating with the Visual Display of Data

graphs showing data sets with different looks even though some statistical characteristics are the same
Anscombe’s quartet: all four sets are identical when examined statistically, but vary considerably when graphed. Image via Wikipedia.

Anscombe’s quartet comprises four datasets that have identical simple statistical properties, yet are revealed to be very different when inspected graphically. Each dataset consists of eleven (x,y) points. They were constructed in 1973 by the statistician F.J. Anscombe to demonstrate the importance of graphing data before analyzing it, and of the effect of outliers on the statistical properties of a dataset.

Of course we also have to be careful of drawing incorrect conclusions from visual displays.

For all four datasets:

Property                                     Value
Mean of each x variable                      9.0
Variance of each x variable                  10.0
Mean of each y variable                      7.5
Variance of each y variable                  3.75
Correlation between each x and y variable    0.816
Linear regression line                       y = 3 + 0.5x

Edward Tufte uses the quartet to emphasize the importance of looking at one’s data before analyzing it in the first page of the first chapter of his book, The Visual Display of Quantitative Information.
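The quartet’s shared summary statistics are easy to verify; this sketch recomputes them from the published data (the variances are population variances, matching the 10.0 and 3.75 in the table above):

```python
import numpy as np

# Anscombe's quartet: the first three datasets share x, the fourth has its own.
x123 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
x4 = [8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8]
xs = [x123, x123, x123, x4]
ys = [
    [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68],
    [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74],
    [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73],
    [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89],
]

for x, y in zip(xs, ys):
    x, y = np.array(x, dtype=float), np.array(y)
    slope, intercept = np.polyfit(x, y, 1)  # least-squares line y = intercept + slope*x
    print(f"mean x {x.mean():.1f}, var x {x.var():.1f}, "
          f"mean y {y.mean():.2f}, var y {y.var():.2f}, "
          f"r {np.corrcoef(x, y)[0, 1]:.2f}, fit y = {intercept:.1f} + {slope:.2f}x")
```

All four lines print the same summary numbers, yet a scatter plot of each dataset looks completely different, which is exactly Anscombe’s point.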

Related: Great Charts - Simpson’s Paradox - Seeing Patterns Where None Exists - Visible Data - Control Charts - Edward Tufte’s Beautiful Evidence

Blame the Road – Not the Person

The system is responsible for 90, 92, 94, 97% of problems – W. Edwards Deming. Fix the system, don’t blame the people. When you seek system fixes you approach situations differently than if you search for people to blame.

By the way, I am often asked about the data supporting Deming’s contention that the system was responsible for 97% of the problems. This statement was not based on a set of data but on Dr. Deming’s decades of experience. And he increased the percentage over time – as he learned more.

Roads that are designed to kill

Half blamed the runner, saying she should not have been running in the street at that hour. Half blamed the driver, for not paying close enough attention. Not a single writer blamed the road.

Your streets are designed to kill people.

Vision Zero started about 30 years ago, when traffic safety researcher Claes Tingvall got the idea that we didn’t have to accept road traffic deaths as a fact of life. Tingvall and his colleagues said that these deaths were not “accidents’’ but were predictable and preventable. And they set out to prove it.

One of the ways they began to protect people was to put barriers down the center of two-lane roads. They showed that this could be done cheaply. When Mylar – a strong polyester film – is supported by closely spaced plastic poles, it can keep cars from crossing the median. When the Swedes used this type of center barrier to separate the traffic going in opposite directions, they effectively prevented head-on collisions and the death rate on these roads fell by 70 percent to 80 percent.

Global health research shows more improvements can save lives. For example, Ghana put in rumble strips – small bumps spaced closely together – across all the roads leading into the capital city of Accra, reducing fatalities by 35 percent. Research has shown that speed bumps on roads are one of the “best buys” in all of global health.

Most people think we are doing all that can be done to keep our roads safe. They are wrong. Road traffic injuries kill more than a million people a year worldwide, including 40,000 a year in the United States.

Is a situation that kills 40,000 people a year in the USA a health care issue? It sure seems to me it would be. It probably isn’t a disease management issue though (some might try to say bad roads are a disease, but I wouldn’t say that). I think this is one of many examples showing that we have a disease and injury management system, not a health care system (in addition to illustrating systems thinking, effective root cause analysis, PDSA, innovation, respect for people…).

Related: Find the Root Cause Instead of the Person to Blame - Traffic Congestion and a Non-Solution - Checklists Save Lives - Saving Lives: US Health Care Improvement - The Economic Benefits of Walkable Communities - SWAT Raid Signs of Systemic Failures - System Improvement to Respond to the Dynamics of Crowd Disasters - The Leading Causes of Death

YouTube Uses Multivariate Experiment To Improve Sign-ups 15%

Google does a great job of using statistical and engineering principles to improve. It is amazing how slow we are to adopt new ideas, and because we are, companies like Google that use concepts such as design of experiments and experimenting quickly and often gain big advantages over those that don’t. See: Look Inside a 1,024 Recipe Multivariate Experiment

A few weeks ago, we ran one of the largest multivariate experiments ever: a 1,024 recipe experiment on 100% of our US-English homepage. Utilizing Google Website Optimizer, we made small changes to three sections on our homepage (see below), with the goal of increasing the number of people who signed up for an account. The results were impressive: the new page performed 15.7% better than the original, resulting in thousands more sign-ups and personalized views to the homepage every day.

While we could have hypothesized which elements result in greater conversions (for example, the color red is more eye-catching), multivariate testing reveals and proves the combinatorial impact of different configurations. Running tests like this also help guide our design process: instead of relying on our own ideas and intuition, you have a big part in steering us in the right direction. In fact, we plan on incorporating many of these elements in future evolutions of our homepage.
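Google didn’t publish the per-section breakdown, so the variation counts below are purely hypothetical (8 x 8 x 16 happens to multiply out to 1,024); the point is just that a full factorial over a few page sections multiplies into many recipes:

```python
from itertools import product

# Hypothetical variation counts for three homepage sections (the actual sections
# and counts in Google's experiment were not broken out this way).
sections = {
    "promo_text": [f"text_{i}" for i in range(8)],
    "thumbnail": [f"thumb_{i}" for i in range(8)],
    "signup_cta": [f"cta_{i}" for i in range(16)],
}

# The full factorial: every combination of one variant per section.
recipes = list(product(*sections.values()))
print(len(recipes))  # 1024 distinct page variants to split traffic across
```

Running all 1,024 recipes against live traffic is what lets the analysis separate main effects from the combinatorial interactions the post describes.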

via: @hexawise. My brother has created a software application to provide much better test coverage with far fewer tests, using the same factorial designed experiment ideas my father worked with decades ago (and which still far too few people use).

Related: Combinatorial Testing for Software - Statistics for Experimenters - Google’s Website Optimizer allows for multivariate testing of your website - Using Design of Experiments

Don’t Hide Problems in Computers

Making things visible is a key to effective management. And data in computers can be easy to ignore. Don’t forget to make data visible. Paul Levy, CEO of Beth Israel Deaconess Medical Center in Boston, recently hosted Hideshi Yokoi, president of the Toyota Production System Support Center, and wrote this blog post:

Together, we visited gemba and observed several hospital processes in action, looking for ways to reduce waste and reorganize work. It was fascinating to have such experts here and see things through their eyes. Mr. Yokoi’s thoughts and observations are very, very clear, notwithstanding a command of English that is still a work in progress.

The highlight? At one point, we pointed out a new information system that we were thinking of putting into place to monitor and control the flow of certain inventory. Mr. Yokoi’s wise response, suggesting otherwise, was:

“When you put problem in computer, box hide answer. Problem must be visible!”

The mission of the Toyota Production System Support Center is to share Toyota Production System know-how with North American organizations that have a true desire to learn and adopt TPS.

Related: The Importance of Making Problems Visible - Great Visual Instruction Example - Health Care the Toyota Way

CEO’s Castles and Company Performance

Where are the Shareholders’ Mansions? CEOs’ Home Purchases, Stock Sales, and Subsequent Company Performance by Crocker H. Liu, Arizona State University and David Yermack, New York University – Stern School of Business

We study real estate purchases by major company CEOs, compiling a database of the principal residences of nearly every top executive in the Standard & Poor’s 500 index. When a CEO buys real estate, future company performance is inversely related to the CEO’s liquidation of company shares and options for financing the transaction. We also find that, regardless of the source of finance, future company performance deteriorates when CEOs acquire extremely large or costly mansions and estates. We therefore interpret large home acquisitions as signals of CEO entrenchment. Our research also provides useful insights for calibrating utility based models of executive compensation and for understanding patterns of Veblenian conspicuous consumption.

To understand better the reasons behind the underperformance of companies whose CEOs acquire very large homesteads, we read news stories about major events affecting the firms in our sample in which a CEO acquires a property with at least 10 acres or a 10,000 square foot house. These news stories suggest parallels between the CEOs’ oversight of their personal assets and management of their companies. No less than nine of the 25 CEOs attempted major corporate acquisitions in the two years following their personal acquisitions of very large real estate, and seven of the 25 announced significant capital investment initiatives involving the construction or expansion of corporate facilities. An additional two firms became mired in accounting scandals shortly after their CEOs purchased mansions, and one firm saw a previously agreed merger collapse.

Using a database of principal residences of company CEOs, we study whether these executives’ decisions about home ownership contain information useful for predicting the future path of their companies’ stock prices. We find that CEOs who acquire extremely large properties exhibit inferior ex post stock performance, a result consistent with large mansions and estates being proxies for CEO entrenchment. We also find that the method of financing a home’s acquisition is informative about future stock returns. A general pattern of CEO sales of their firms’ shares and options exists over the twelve months leading up to the date of home acquisition. However, when the CEO does not sell any shares, his stock performs significantly better ex post than the stocks of firms whose CEOs do liquidate equity to finance their houses. The retention of company shares simultaneous with a new home purchase, despite the presence of an evident personal liquidity need, appears to send a signal of commitment by a CEO to his company.

That we put in power CEOs who see themselves as nobility, entitled to build castles with wealth produced by others and taken from corporate coffers (many of these CEO castles dwarf all but the most conspicuous castles built by actual nobility), is a sign of our failure to select acceptable leaders for companies.

Related: Another Year of CEO’s Taking Hugely Excessive Pay - Excessive Executive Pay - Exposing CEO Pay Excesses - Narcissistic Cadre of Senior Executives - 9 Deadly Diseases

When Performance-related Pay Backfires

When Economic Incentives Backfire by Samuel Bowles, Sante Fe Institute

Dozens of recent experiments show that rewarding self-interest with economic incentives can backfire when they undermine what Adam Smith called “the moral sentiments.”

Punished by Rewards, by Alfie Kohn, is a great book on this topic. The area of “motivating” employees is one managers often find hard to learn. Even managers who have been studying Deming, Ackoff, Ohno… for years still have trouble with the idea that trying to find the right incentive scheme to motivate the right behavior is the wrong approach. Read The Human Side of Enterprise by Douglas McGregor (published in 1960) to reinforce the understanding of human motivation provided by Toyota’s respect for people principles.

Managers need to eliminate de-motivation in the work systems rather than trying to find bonus schemes to motivate behavior. Eliminating de-motivation is often much more work: you can’t just get some money from the bonus pool and start giving it away. You have to manage. But if you are a manager, you shouldn’t be afraid to actually manage the system and make it better.

Related: “Pay for Performance” is a Bad Idea - Reward and Incentive Programs are Ineffective — Even Harmful, by Peter Scholtes - The Defect Black Market - What’s the Value of a Big Bonus? - Problems with Bonuses - Losses Covered Up to Protect Bonuses - Stop Demotivating Employees


Google’s Innovative Use of Economics

Secret of Googlenomics: Data-Fueled Recipe Brews Profitability

Google depends on economic principles to hone what has become the search engine of choice for more than 60 percent of all Internet surfers, and the company uses auction theory to grease the skids of its own operations. All these calculations require an army of math geeks, algorithms of Ramanujanian complexity, and a sales force more comfortable with whiteboard markers than fairway irons.

Varian tried to understand the process better by applying game theory. “I think I was the first person to do that,” he says. After just a few weeks at Google, he went back to Schmidt. “It’s amazing!” Varian said. “You’ve managed to design an auction perfectly.” To Schmidt, who had been at Google barely a year, this was an incredible relief. “Remember, this was when the company had 200 employees and no cash,” he says. “All of a sudden we realized we were in the auction business.”

Google even uses auctions for internal operations, like allocating servers among its various business units. Since moving a product’s storage and computation to a new data center is disruptive, engineers often put it off. “I suggested we run an auction similar to what the airlines do when they oversell a flight. They keep offering bigger vouchers until enough customers give up their seats,” Varian says. “In our case, we offer more machines in exchange for moving to new servers. One group might do it for 50 new ones, another for 100, and another won’t move unless we give them 300. So we give them to the lowest bidder—they get their extra capacity, and we get computation shifted to the new data center.”
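The mechanism Varian describes is a reverse auction: each group states the number of machines it would need in compensation to migrate, and the work goes to whoever asks for the least. A toy sketch, with made-up group names and the bid numbers taken from the quote above (this is an illustration of the idea, not Google's actual system):

```python
# Each product group "bids" the number of extra machines it demands
# in exchange for moving its workload to the new data center.
bids = {
    "group_a": 50,    # will move for 50 new machines
    "group_b": 100,   # will move for 100
    "group_c": 300,   # won't move for less than 300
}

def run_reverse_auction(bids):
    """Pick the cheapest migration: the group demanding the fewest machines."""
    winner = min(bids, key=bids.get)
    return winner, bids[winner]

group, machines = run_reverse_auction(bids)
print(group, machines)  # group_a 50: capacity shifted at the lowest price
```

The design choice is the interesting part: rather than ordering a group to move (and arguing about who is most disrupted), the auction lets each group price its own disruption, and the company buys the migration from whoever values staying put the least.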

Google continues to make bold moves, putting faith in their ability to find innovative solutions that others reject as impossible. It is a challenging but interesting path to success, for them at least.

Related: Google Should Stay True to Their Management Practices - Google’s Answer to Filling Jobs Is an Algorithm - The Google Way: Give Engineers Room - Google Website Optimizer - Google: Experiment Quickly and Often - posts on innovation in management