This “Lean IT – avant la lettre” article first appeared in January 2004 in the Dutch edition of Chief Financial Officer.
For years the IT industry reacted to any question from customers and users with the introduction of yet another generation of new technology. But investing in ever more and ever faster technology is not the answer. The solution lies in better and more effective management of the existing technology investments.
Driven by the endless, vendor-driven technology push of the last 20 years, many of today's managers seem clueless about what to do with (the problem called) IT. They try to postpone IT investments as long as possible (perhaps hoping that if they ignore IT's requests long enough, they might actually go away) or are seriously considering outsourcing their IT altogether. The IT suppliers respond as usual by rolling out yet another array of hype with catchy names like Utility and On-Demand computing.
In industrial manufacturing, substantial year-on-year cost reductions are quite normal and expected. A plasma or LCD screen that cost 400 euros each to produce at the time of introduction has, two years later, a fully loaded manufacturing cost of 100-150 euros. And rightfully so, because in the third year these TVs are likely to sell for less than 200 euros at retail. We see the same pattern with food, air travel and thousands of other products and services.
There is only one sector where this kind of productivity increase seems to be largely elusive. An industry segment that costs more money every year, despite enormous progress in technology. We are talking about the steam engine of today's business processes: Information Technology, or IT for short. Although you get ever more computing power for less money, the cost of an average company desktop is still the same as (or higher than) in previous years, and the total cost of ownership (TCO) of ERP and other business applications keeps increasing.
DIVISION OF LABOR
A set of very clear principles underpins the constant cost reductions found in industrial production. It makes sense to examine the history of these industrial developments and to try and draw some conclusions that can help to better manage IT going forward.
In 1911 the American engineer Frederick Winslow Taylor published the ground rules of industrial division of labor in a book called The Principles of Scientific Management. The wide acceptance and implementation of Taylor's theories led to drastic increases in productivity and pushed the world economy into an unprecedented spiral of increasing wealth and prosperity.
Until that time products were made by individual craftsmen. A gunsmith, for example, made one gun per day. To do so, a gunsmith required an education (from apprentice to master) that often took up to five years. Through the division of labor proposed by Taylor's theory, the job was done by multiple people. Someone made the barrel, someone else the trigger, and a third person specialized in making powder chambers. Tasks were divided and simplified further and further, ideally until they were so simple they could be automated away. This approach also turned out to be very beneficial in leveraging the biggest invention of those days, the steam engine, something that had been impossible for the individual craftsmen. And instead of 10 rifles a day, 10 people now produced a hundred or more rifles per day. The average training time of five years for a gunsmith went down to five weeks for a barrel maker. As a result, the average pay also decreased significantly, often to the level of so-called unskilled labor. The productivity increase from Taylor's ideas proved enormous.
Early factories were laid out and managed solely to optimize the utilization of the (often expensive) machines. In front of every machine there was a queue of products waiting to be processed. This enabled the machine to carry on processing continuously, resulting in utilization rates of up to 99%. Unfinished products were transported from machine to machine and put in a long queue every time. For the owner of the machines this was perfect; for the customer waiting for the product it was less so. Products often had a lead time of up to six weeks, while the actual processing time was only one hour. The customer also had very little choice, as the machines could only plough ahead productively if the number of variations was kept to the absolute minimum and everything was produced in vast quantities.
ASSEMBLY LINE
Shortly afterwards a new way of organizing production was introduced: the assembly line. The whole layout of the factory was now optimized to get the product through the factory as fast as possible. The main advantage was that a car could be assembled in half a day, a laptop in one hour and a complete phone in ten minutes. The drawback was that it seemed slightly more costly (as the machines were slightly less utilized: a press that could press four times per minute now only pressed once per five minutes, whenever a car happened to come by) and it was relatively inflexible, because it was difficult or even impossible to produce many different products on one line. Just remember Henry Ford, who said: "You can have any color you like, as long as it is black."
But pretty soon (from a historic perspective) consumers no longer wanted only black cars; today they even want to pick their preferred TV model and an individualized telephone. All this for prices lower than the going rate for last year's standard black model. In part influenced by the ideas and work of W. Edwards Deming, a new way of managing industrial production was pioneered in Japan: Just in Time. Nowadays Toyota manufactures a multitude of different models and variants on one assembly line. Even small trucks and regular family cars can be produced on the same line, one after another. Something made possible not by new technology, but merely by refining the management of the existing technology.
The most important difference between this Just in Time management and the traditional approach was that products were no longer PUSHed through the factory; they were PULLed. In other words, production only starts when there is specific demand (PULL), not when the machine happens to be available (PUSH). Producing only when there is demand has two prerequisites, however: specific logistical and infrastructural processes. In the Japanese car factories a refined combination of Kanban cards, standard bins and MRP-type systems was used for this. In addition, factories nowadays need real-time insight into the effects a certain action or decision on the work floor has on the end product, and therefore into the impact on the actual customer (who created the PULL). Modern production environments use supply chain optimization software to do so. With this software one can directly see the impact of a certain delay, problem or change in planning on the end customer and (more importantly) corrective action can be taken.
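The difference is easy to caricature in a few lines of code. The sketch below (purely illustrative, with made-up capacities and order counts) contrasts a PUSH schedule, which keeps the machine busy regardless of demand, with a PULL schedule, where each unit of work is triggered by a real order, the way a Kanban card would:

    from collections import deque

    def push_schedule(machine_capacity, hours):
        """PUSH: the machine produces whenever it is free, demand or not."""
        produced = 0
        for _ in range(hours):
            produced += machine_capacity          # keep utilization near 100%
        return produced                           # surplus piles up as inventory

    def pull_schedule(orders, machine_capacity, hours):
        """PULL: production starts only when a real order arrives."""
        produced = 0
        for _ in range(hours):
            batch = min(machine_capacity, len(orders))
            for _ in range(batch):
                orders.popleft()                  # each unit answers a specific demand
            produced += batch
        return produced                           # no unsold work in progress

    orders = deque(range(30))                     # 30 real customer orders
    print(push_schedule(machine_capacity=10, hours=8))           # 80 units, 50 unsold
    print(pull_schedule(orders, machine_capacity=10, hours=8))   # 30 units, all sold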
PROFESSIONAL CONSPIRACY
So much for our (relatively) short excursion into industrial history. What have we learned? That the computer and information technology industry seems stuck in a pre-Taylorian era, with IT staff resembling pre-industrial gunsmiths. In IT the subject matter is still too complex and specialist knowledge too important for any sensible division of labor, to some extent also due to the persistent use of IT's own language, a language called acronyms. As a result, only members of the guild (IT people) can participate in meaningful discussions about the trade and the profession. Thanks to this "professional conspiracy", division of labor is, unlike in every other industry, still far from standard in IT. This resulted in the well-known silos of automation, where gunsmiths watched over their own individual, well-guarded areas. Consequently, many of Taylor's productivity benefits have passed the IT industry by. Or did you recently hear someone complain about the now very low salaries in IT, or claim that three months of experience is more than enough for a senior Java developer?
The degree of standardization and interoperability in IT also leaves much to be desired. There is much ado about XML standards for information exchange, but there is a plethora of emerging standards. Just like the French gunsmiths back in the 18th century, the IT industry (IT people) seems to understand all too well that the lucrative integration industry does not benefit from widely accepted common standards.
Meanwhile, with a bit of goodwill, the factories from before the industrial revolution can be recognized in batch mainframe computing. Here the user was secondary to the utilization and cost of the mainframe: whether he liked it or not, he had to wait till the next day for his output, as this was the only way the use of the machine could be optimized. Similarly, the successor of the job shop organization, the assembly line as first introduced by Henry Ford, has its equivalent in IT. The process orientation of ITIL, reinforced by the speed obsession of the internet bubble, caused suppliers to push "one app per server". As Sun's CEO admitted during a 2004 visit to Holland, the common industry advice was: "buy CRM, get a CRM server", "buy a web shop, get a web shop server". As a result, dedicated servers were assigned to specific single tasks, regardless of utilization. Many customers still have hundreds of servers (a.k.a. space heaters), all used only a fraction of the time and each with its own proprietary storage, print and security subsystems. A far from optimal situation.
ON DEMAND
So how do the ideas of PULL-driven Just in Time production translate to IT? Some of the principles can be seen in so-called "Computing on Demand" (or now Cloud Computing). This assumes that only IT services for which true demand exists are delivered (and paid for). And just like in other industries, these principles seem to resonate well. Computing on Demand (or Utility Computing) borrows to a large extent from Taylor's principles. Today's complex, integrated IT application environment is divided into smaller, simpler, specialized tasks that can be processed cheaper and more efficiently. Completely automating the running of an enterprise-wide SAP system may be a step too far today (2004), but completely automating supporting tasks like backup and printing is perfectly possible (and not just for the SAP servers, but for all servers) and likely a lot more cost-efficient than having the SAP gunsmiths do this at their daily rates.
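As a purely illustrative sketch of carving out such a supporting task, consider one standardized backup routine applied uniformly to every server, whatever application it runs. The server names and paths below are invented, and a real environment would use proper backup tooling rather than ssh and tar:

    import datetime
    import subprocess

    SERVERS = ["sap-prod-01", "exchange-01", "webshop-01"]   # hypothetical names

    def backup(server):
        """Run the same standardized backup job on any server."""
        stamp = datetime.date.today().isoformat()
        target = "/backups/" + server + "-" + stamp + ".tar.gz"
        # one specialized, simplified task: the 'barrel maker' of IT
        subprocess.run(["ssh", server, "tar", "czf", target, "/data"], check=True)

    for server in SERVERS:
        backup(server)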
The industry agrees that this "on demand" or Utility Computing is going to be delivered over the network in the form of services (most likely web services). This network has, of course, been there for years, but the popular term for this infrastructure among IT people is now "Service Oriented Architecture" (SOA). The next step is the automatic allocation and set-up of new elements in this environment. For example, every time a new employee starts working for the company, he requires a user ID, a (preferably unique) email address, access to SAP, some disk space on the server and maybe one or two applications specific to his job function on his laptop. The popular term for this is provisioning. User provisioning makes sure that new employees automatically get all the access and applications described earlier. A typical routine task that lends itself very well to being automated away.
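A minimal sketch of such a user provisioning routine follows. None of this is a real product API; the helper functions are stubs standing in for the directory, mail, ERP and software distribution systems they would call in practice:

    APPS_BY_FUNCTION = {"finance": ["spreadsheet", "reporting-client"]}

    # Stubs for the underlying systems; a real implementation would call
    # the directory, mail server, SAP and software distribution here.
    def create_account(user_id): print("directory: account", user_id)
    def create_mailbox(address): print("mail: mailbox", address)
    def grant_sap_access(user_id, role): print("sap:", user_id, "as", role)
    def allocate_disk(user_id, gb): print("storage:", gb, "GB for", user_id)
    def install_app(user_id, app): print("laptop: install", app, "for", user_id)

    def provision(name, job_function):
        """Give a new employee everything described above, automatically."""
        user_id = name.lower().replace(" ", ".")
        create_account(user_id)                      # user ID
        create_mailbox(user_id + "@example.com")     # unique email address
        grant_sap_access(user_id, job_function)      # access to SAP
        allocate_disk(user_id, gb=5)                 # disk space on the server
        for app in APPS_BY_FUNCTION.get(job_function, []):
            install_app(user_id, app)                # job-specific applications

    provision("Jan Jansen", "finance")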
By now we are also "provisioning applications". If at the end of the month the month-end closing application is brought to life, it automatically gets CPU capacity allocated on one or more servers, along with the required storage space, bandwidth, and access to the appropriate databases and financial systems. If no server is available, it could even decide automatically to reduce the number of servers allocated to Exchange or SAP, in order to make optimal use of the current investments "on demand".
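Again purely as an illustration (the workloads and numbers are invented), the reallocation logic could look something like this: when the month-end close asks for capacity and none is free, servers are temporarily borrowed from a lower-priority workload:

    ALLOCATIONS = {"exchange": 4, "sap": 6, "month-end-close": 0}
    free_servers = 0

    def provision_app(app, servers_needed, borrow_from="exchange"):
        """Grant servers to an application, shrinking another on demand."""
        global free_servers
        while free_servers < servers_needed and ALLOCATIONS[borrow_from] > 1:
            ALLOCATIONS[borrow_from] -= 1    # temporarily shrink Exchange...
            free_servers += 1                # ...to free up capacity
        granted = min(servers_needed, free_servers)
        free_servers -= granted
        ALLOCATIONS[app] += granted
        return granted

    provision_app("month-end-close", servers_needed=2)
    print(ALLOCATIONS)   # {'exchange': 2, 'sap': 6, 'month-end-close': 2}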
MANAGEMENT MATTERS
"On Demand" computing is as dependent on real-time insight into the consequences of decisions, issues and changes as production in the real, physical world is. We need to understand which business processes are impacted if the router on the first floor gives up, or worse, if the printer at the shipping dock runs out of ink. And we of course need to see which alternatives we have to continue critical processes like invoicing or the month-end close despite the fact that a specific router or printer is temporarily down. Most modern management systems do indeed allow specifying which components are involved in which business process (like invoicing or the month-end close). But often this has to be entered and maintained manually, which is hard when it changes frequently. It will be clear that defining and maintaining this manually is no longer an option once the systems, through automatic provisioning, start to dynamically (on demand) allocate resources to certain business processes. What is needed is some kind of monitoring function that analyzes running processes, determines their correlations and interdependencies, and presents these to the administrator (in terms of service levels) so he can take action or approve the suggested remedial actions.
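The core of such a function is a dependency map from components to business processes, so that a failing component can be translated into affected services. A minimal sketch, with an invented, hard-coded map where a real monitoring system would discover the dependencies automatically:

    DEPENDS_ON = {
        "invoicing":       ["router-floor1", "printer-dock", "erp-db"],
        "month-end-close": ["erp-db", "finance-app-server"],
    }

    def impacted_processes(failed_component):
        """Which business processes suffer if this component goes down?"""
        return [process for process, components in DEPENDS_ON.items()
                if failed_component in components]

    print(impacted_processes("router-floor1"))  # ['invoicing']
    print(impacted_processes("erp-db"))         # ['invoicing', 'month-end-close']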
When all this is "in place" (and for many organizations this may still take a fair bit of work), one can start to manage the IT processes in an "industrial way": cost-effective and on demand. This is not a question of technology or nicer, shinier boxes. Today's technology is sufficient for supporting the current business processes. What is needed is better management of those processes. If we now start to truly focus on managing and integrating what we have, instead of thinking about replacing everything we have with something nicer and shinier, then maybe the contribution of IT to increased labor productivity no longer has to be elusive or unmeasurable.