Depending on which dictionary you choose, you can find anywhere between two and seven meanings for “fabric.” Etymology-wise, it comes from the French fabrique and the Latin fabricare, and the Dutch fabriek actually means factory. But in an IT context, fabric has little to do with our often-used manufacturing or supply chain analogies; it relates much more closely to fabric in its meaning of cloth, a material produced (fabricated) by weaving fibers.
Fabric computing or unified computing involves the creation of a computing fabric consisting of interconnected nodes that look like a ‘weave’ or a ‘fabric’ when viewed collectively from a distance.[1]
Usually this refers to a consolidated high-performance computing system consisting of loosely coupled storage, networking and parallel processing functions linked by high bandwidth interconnects …
In the context of data centers it means a move from having distinct boxes for handling storage, network and processing towards a fabric where these functions are much more intertwined or even integrated. Most people started to notice the move to fabric or unified computing when Cisco began including servers inside their switches, partly in response to HP including more and more switches in their server deals. Cisco’s UCS (Unified Computing System), and its bigger sibling, VCE, are the first hardware examples of this trend (although inside the box you can still distinguish the original components).
One reason to move to such a fabric design is that by moving data, network and compute closer together (integrating them) you can improve performance. Juniper’s recent QFabric architecture announcement is another example. But the idea of closer integration of data, processing, and communication is actually much older. In some respects, we may even conclude IT is coming full circle with this trend.
Let me explain.
Many years ago I spoke to Professor Scheer, founder of IDS Scheer and a pioneer in the field of Business Process Management (BPM). (Disclosure: years later IDS Scheer became part of my former employer: Software AG.) He spoke about how – in the old days of IT – data and logic were seen as one. Literally! If – while walking with your stack of punch cards to the computer room (back then it was a computer the size of a room, not a room with a computer in it) – you dropped your stack of punch cards, both data and logic would be in one pile on the floor. You would spend the rest of your afternoon sorting them again. There was just one stack: first the processing/algorithm logic, and then the data. Scheer’s point was that just like we figured out after a while that data did not belong there and we moved it to its own place (typically a relational database), we should now separate the process flow instructions from the algorithms and move these to a workflow process engine (preferably of course his BPM engine). All valid and true – at that time.
But not long after, object-oriented programming became the norm, and we started to move data back with the logic that understood how to handle that data, and treat them as objects. This of course created a new issue: getting these objects to perform in even a remotely acceptable way, as we used relational databases to store or persist the data inside these objects. You could compare this to disassembling your car every night into its original pieces in order to put it in your garage. Over the years the industry figured out how to do this better, in part by creating new databases which design-wise looked remarkably similar to the (hierarchical) databases we used back in the day of punch cards.
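To make the car analogy concrete, here is a minimal Python sketch of that nightly disassembly and reassembly. The Car and Wheel classes and the table layout are invented for illustration; real object-relational mappers are of course far more elaborate:

```python
# Illustrative sketch of the nightly "disassembly": a coherent in-memory
# object graph is flattened into relational rows to be persisted, then
# reassembled from those rows on the way back out.
import sqlite3
from dataclasses import dataclass, field

@dataclass
class Wheel:
    position: str

@dataclass
class Car:
    model: str
    wheels: list = field(default_factory=list)

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE cars (id INTEGER PRIMARY KEY, model TEXT)")
db.execute("CREATE TABLE wheels (car_id INTEGER, position TEXT)")

def park(car: Car) -> int:
    """Persist: take the object apart into flat rows (the "garage")."""
    car_id = db.execute("INSERT INTO cars (model) VALUES (?)",
                        (car.model,)).lastrowid
    for w in car.wheels:
        db.execute("INSERT INTO wheels VALUES (?, ?)", (car_id, w.position))
    return car_id

def fetch(car_id: int) -> Car:
    """Load: put the object back together again from the stored rows."""
    (model,) = db.execute("SELECT model FROM cars WHERE id = ?",
                          (car_id,)).fetchone()
    wheels = [Wheel(p) for (p,) in db.execute(
        "SELECT position FROM wheels WHERE car_id = ?", (car_id,))]
    return Car(model, wheels)

car_id = park(Car("roadster", [Wheel(p) for p in ("FL", "FR", "RL", "RR")]))
car = fetch(car_id)
print(car.model, len(car.wheels))  # roadster 4
```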
And now, under the shiny new name of fabric computing, we are moving all these functions back into the same physical box.
But this is not the whole story: there is another revolution happening. As an industry we are moving from using dedicated hardware for specialized tasks to generic hardware running specialized software instead. For example, you might use a software virtualization layer to emulate a specific piece of hardware.
Or, look at a firewall: traditionally it was a piece of dedicated hardware built to do one thing (keeping disallowed traffic out). Today, most firewalls are software-based; we use a generic processor to take care of that task. And we’re seeing this trend unfold with more equipment in the data center. Even switches, load balancers and network-attached storage are becoming software-based (“virtual appliance” seems to be the preferred marketing buzzword for this trend).
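As a toy illustration of that shift, here is a minimal, hypothetical packet filter in Python. Nothing here requires special-purpose silicon, just a generic processor running ordinary code; the rule format is invented for illustration, and real firewalls are vastly more sophisticated:

```python
# Minimal sketch of a firewall as software: a generic CPU running
# ordinary code does what dedicated hardware used to do.
from dataclasses import dataclass

@dataclass
class Packet:
    src_ip: str
    dst_port: int
    protocol: str  # "tcp" or "udp"

# Rules are just data: easy to copy, change and distribute remotely.
RULES = [
    {"dst_port": 22,  "protocol": "tcp", "action": "allow"},  # SSH
    {"dst_port": 443, "protocol": "tcp", "action": "allow"},  # HTTPS
]
DEFAULT_ACTION = "deny"

def filter_packet(packet: Packet) -> str:
    """Return the action of the first matching rule, else the default."""
    for rule in RULES:
        if (packet.dst_port == rule["dst_port"]
                and packet.protocol == rule["protocol"]):
            return rule["action"]
    return DEFAULT_ACTION

print(filter_packet(Packet("10.0.0.5", 443, "tcp")))  # allow
print(filter_packet(Packet("10.0.0.5", 23,  "tcp")))  # deny
```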
Using software is more efficient than having loads of dedicated hardware, and we can’t ignore the fact that software, because of its completely different economic and management characteristics, has numerous inherent advantages over hardware. For example, you can copy, change, delete and distribute software, all remotely, without having to leave your seat, and even do so automatically. You’d need some pretty advanced robots to do that with hardware (if it could be done at all today).
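A hypothetical sketch of that advantage: cloning a virtual appliance is just copying files and tweaking a configuration, something a short script can do automatically. The appliances/base directory and config.json layout are invented for illustration:

```python
# Hypothetical sketch: "manufacturing" a new virtual appliance is a file
# copy plus a config tweak -- no screwdriver, forklift or robot required.
import json
import pathlib
import shutil

def clone_appliance(src_dir: str, dst_dir: str, new_hostname: str) -> None:
    """Clone an appliance image directory and give the copy its own identity."""
    shutil.copytree(src_dir, dst_dir)          # copy the whole "box"
    cfg_path = pathlib.Path(dst_dir) / "config.json"
    cfg = json.loads(cfg_path.read_text())
    cfg["hostname"] = new_hostname             # re-identify the clone
    cfg_path.write_text(json.dumps(cfg, indent=2))

# Stamping out ten appliances is a loop, not a purchase order:
for i in range(10):
    clone_appliance("appliances/base", f"appliances/node{i}", f"node{i}")
```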
So how do these two trends relate to cloud computing?
By combining the idea of moving things that need to work together closer together (the fabric) with the idea of doing so in software instead of hardware (which gives us the economics and manageability of software), we can create clouds that perform better, cost less and are easier to manage.
Virtualization has been on a similar path. First we virtualized servers, then storage and networking, but all remained in their separate silos. Now we are virtualizing all of it in the same “fabric.” This means that managing the entire stack gets simpler, with one tool to define it, make it work and monitor it. And that’s something that should make any IT pro smile.
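To get a feel for what “one tool to define it” might look like, here is a hypothetical sketch. The FabricSpec structure and its field names are invented for illustration and deliberately vendor-neutral:

```python
# Hypothetical sketch: compute, storage and network defined together, as
# one fabric, instead of in three separate silo-specific tools.
from dataclasses import dataclass, field

@dataclass
class FabricSpec:
    name: str
    compute: dict = field(default_factory=dict)  # VM count, vCPUs, memory
    storage: dict = field(default_factory=dict)  # volumes, replication
    network: dict = field(default_factory=dict)  # subnets, load balancing

web_tier = FabricSpec(
    name="web-tier",
    compute={"vms": 4, "vcpus": 2, "memory_gb": 8},
    storage={"volume_gb": 100, "replicas": 2},
    network={"subnet": "10.0.1.0/24", "load_balancer": True},
)

def deploy(spec: FabricSpec) -> None:
    """Placeholder: a real fabric controller would provision and monitor
    all three layers from this single definition."""
    print(f"Provisioning fabric '{spec.name}':")
    for layer in ("compute", "storage", "network"):
        print(f"  {layer}: {getattr(spec, layer)}")

deploy(web_tier)
```

The point is not the specific fields, but that compute, storage and network live in one definition, so one tool can provision, connect and monitor all of them together.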
In my next post, I’ll share my thoughts on why I think this approach has the power to change IT as we know it, based on some of my own epiphanies.
2 people have left comments at http://ca.com/blogs:
Hey,
Basically what you mean is that with fabric computing, you will have all processes stacked on top of each other which makes it easier and faster to get the full benefits of the services?
Have I understood you correctly?
Posted by: Gautam Ghambir | March 15, 2011 2:44 PM
Gautam, thanks for reading and commenting. Here’s another example to think about: a modern espresso machine. It does all of the work for you – in an integrated fashion. No need to worry about the temperature of the water, grinding the beans, or any other steps. Now think of all the equipment you need and the steps it takes to make a traditional cup of coffee: a kettle to boil the water, a grinder to grind the beans, a filter to put the ground coffee in, and then you need to pour the water in several small steps so it does not overflow. And lastly you pour the coffee from the pot into a cup. The idea of fabric computing is that the fabric is integrated “out of the box,” and doesn’t require provisioning, managing, integrating and monitoring lots of VMs and appliances individually. Much more on this in my upcoming blog (next week).
Posted by: Gregor Petri | March 15, 2011 5:17 PM