A Guide to Implementing the Theory of
A Motor For Production
Drum-buffer-rope is the Theory of Constraints production application. It is named after the 3 essential elements of the solution: the drum, or constraint, or weakest link; the buffer, or material release duration; and the rope, or release timing. The aim of the solution is to protect the weakest link in the system, and therefore the system as a whole, against process dependency and variation, and thus maximize the system’s overall effectiveness. The outcome is a robust and dependable process that will allow us to produce more, with less inventory, less rework/defects, and better on-time delivery – always.
Drum-buffer-rope, however, is really just one part of a two-part act. We need both parts to make a really good show. If drum-buffer-rope is the motor for production, then buffer management is the monitor. Buffer management is the second part of this two-part act. We use buffer management to guide the way in which we tune the motor for peak performance.
In the older notion of planning and control, the first part, drum-buffer-rope, is the planning stage of the approach – essentially the overall agreement on how we operate the system. The second part, buffer management, is the control system that allows us to keep a running check on the system’s effectiveness. However, I want to reserve the word “planning” and the word “control” for quite specific and established functions within the solution, functions that we will investigate further on this page.
I want to propose that we step out a level and instead use the terms “configuration” and “monitoring.” Using this terminology the configuration is drum-buffer-rope and the monitoring is buffer management. Let’s draw this;
The way that we configure the solution – the way that we configure the drum, the buffer, and the rope – will determine the characteristics and the behavior of the system as a whole. Buffer management allows us to monitor that behavior. The use of the terms configuration and monitoring will allow a more critical distinction to be developed once we introduce the concepts of planning and control. This, I hope, will also clarify some of the confusion that may exist over the dual role of buffer management.
Keep this model in mind as we will return to it. Now, however, we must return to our plan of attack and work through the development of the solution.
Interested? Then let’s go.
On the measurements page we introduced the concept of our “rules of engagement,” which is to define: the system, the goal, the necessary conditions, the fundamental measurements, and the role of the constraints. Then on the process of change page we introduced the concept of our “plan of attack” – the 5 focusing steps that allow us to define the role of the constraints. Let’s remind ourselves once again of the 5 focusing steps for determining the process of change;
(1) Identify the system’s constraints.
(2) Decide how to Exploit the system’s constraints.
(3) Subordinate everything else to the above decisions.
(4) Elevate the system’s constraints.
(5) If in the previous steps a constraint has been broken Go back to step 1, but do not allow inertia to cause a system constraint. In other words; Don’t Stop.
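The 5 focusing steps are a management process rather than an algorithm, but their looping nature can be sketched in code. The work centers, capacities, demand figure, and the 10% “exploitation” recovery below are all invented for illustration; they are not from the text.

```python
# Toy illustration of the 5 focusing steps applied to a chain of work
# centers.  Capacities are in units/day; the constraint is simply the
# slowest center.  All numbers here are made up for the sketch.

def focusing_steps(capacities, demand):
    """Iterate the five steps until the chain can meet demand.
    Returns the sequence of constraints identified along the way."""
    log = []
    while min(capacities.values()) < demand:
        # Step 1: identify the constraint - the slowest work center.
        constraint = min(capacities, key=capacities.get)
        # Steps 2-3: exploit and subordinate - assume exploitation
        # recovers 10% of capacity previously lost to idle time.
        capacities[constraint] *= 1.10
        # Step 4: elevate - if exploitation is not enough, buy capacity.
        if capacities[constraint] < demand:
            capacities[constraint] = demand
        log.append(constraint)
        # Step 5: go back to step 1 - the constraint may have moved.
    return log

centers = {"beginning": 12, "middle": 10, "near_the_end": 8, "end": 11}
print(focusing_steps(centers, demand=11))  # the constraint moves as each is broken
```

Note how step 5 is simply the `while` loop: once one constraint is broken, the next slowest center becomes the new constraint, and the process repeats without inertia.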
Let’s also return to our simple system model, which we have so far used in much more general terms, and apply it to drum-buffer-rope. As you will recall it has 4 sections, or departments, or whatever you would like to call them: a beginning, a middle, a near-the-end, and an end.
Our system has to interact with the outside world, so let’s draw in an input and an output. Raw material flows in and finished product flows out. In a for-profit situation sales flow in and expenses flow out; the difference – profit – is captured by the system. We showed these flows previously in the section on measurements.
Now we are ready for the next stage, the first step in the 5 step focusing method – identify the constraint.
In fact we know where the constraint is in our simple system presented here based upon the discussion in the earlier section on measurements. It’s located near the end of the process. This isn’t at all an unusual place to find a constraint. Think about it for a moment. If the constraint was located near the beginning, then all the downstream steps would always be waiting for work. In that situation management would most probably go about purchasing further capacity until they move the constraint further down the process and then bury it in work-in-process so that it is no longer visible.
Let’s draw the constraint in.
As we know from the previous section on production, the constraint, the slowest step, beats out the rate at which the whole process can work. Therefore it becomes the “drum” of drum-buffer-rope.
Of course we forgot something – work-in-process. If our model system is to be anything like our own reality, then it is probably full to the gills with work-in-process. We had better add this to our model as well.
Work-in-process of course serves a useful purpose in such a system; it decouples each stage from the stages before and after. If you don’t know what to protect, then you might as well protect everything. However chances are that, even with all that protection, the work that was required at the time wasn’t the work that was waiting in the pile of work-in-process. And of course it means that the time required for any job to traverse the system is much longer than necessary. In any case we don’t need all of that work-in-process anymore if we are going to use drum-buffer-rope.
So we have completed Step 1 – identify the constraint. The next step, step two, is to decide how to exploit the constraint.
To make sure that the constraint works as well as possible on the task of producing or creating throughput for the system we must ensure that we exploit it fully – essentially we are leveraging the system against the full capacity of the constraint. This means not only making sure that it is fully utilized, but also making sure that the utilization is fully profitable. If you remember back to the P & Q problem or the airline analogy, it is quite possible to have everything utilized but not make as much profit as is possible.
If we increase the output of the constraint, then the output of the system as a whole will increase also. One of the most effective tactics for exploiting the constraint, once identified, and improving its output is to write a detailed schedule for that particular resource and that particular resource alone – and then to adhere to that schedule. This is the “plan” in this context. Our day-to-day planning “falls out” as a consequence of the decisions that we make while configuring the implementation. Let’s add this to our model.
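As a sketch of what a detailed schedule “for that particular resource and that particular resource alone” might look like, here is a minimal drum scheduler of my own construction; the order names, constraint times, and due dates are invented. Orders are sequenced by due date and stacked back-to-back at the constraint, which is the only resource that gets a schedule at all.

```python
# Minimal sketch of a drum schedule: sequence orders by due date and
# stack them back-to-back on the constraint.  Only the drum is scheduled;
# every other resource will later subordinate to this plan.

def drum_schedule(orders, start=0.0):
    """orders: list of (name, hours_on_constraint, due_hour).
    Returns [(name, start_hour, finish_hour)] for the drum only."""
    schedule, clock = [], start
    for name, hours, _due in sorted(orders, key=lambda o: o[2]):
        schedule.append((name, clock, clock + hours))
        clock += hours  # the drum never waits between scheduled jobs
    return schedule

orders = [("red", 4, 24), ("blue", 6, 16), ("green", 2, 40)]
for job in drum_schedule(orders):
    print(job)
# blue runs first (earliest due date), then red, then green
```

A real drum schedule must also respect material availability and setups; this sketch shows only the core idea of a single, adhered-to sequence at one point in the system.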
We now have a local plan for just one point, the most important point, the drum. If, at the same time, we hold the input constant then the additional output from continued exploitation must come from work-in-process already in the system. As a consequence work-in-process and hence lead times must go down. In effect we begin to drain the system. Let’s show that.
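The link between work-in-process and lead time that this paragraph relies on is Little’s Law (lead time = work-in-process ÷ throughput). The law is not named in the text, but it makes the draining effect concrete; the figures below are invented for illustration.

```python
# Little's Law: with throughput held constant, lead time falls in direct
# proportion to work-in-process.  Figures are invented for illustration.

def lead_time_days(wip_jobs, throughput_jobs_per_day):
    """Average time a job spends in the system, in days."""
    return wip_jobs / throughput_jobs_per_day

print(lead_time_days(90, 5))   # 18.0 days before draining the system
print(lead_time_days(45, 5))   # 9.0 days after halving work-in-process
```

Holding the input constant while the constraint keeps producing is what drains the numerator: output continues, releases do not, so work-in-process and hence lead time must fall.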
Let’s be clear however, work-in-process does not have to decrease under drum-buffer-rope, but usually there are sound reasons for doing so – reduced lead times, increased quality, and increased throughput. We will investigate all of these sometime later under the heading of the role of inventory. The primary objective of Theory of Constraints however is always to move the system towards the goal, and usually that means increasing throughput first. Inventory reduction is secondary and often a consequence of increasing throughput.
If we continue to operate in this fashion we can reduce work-in-process considerably. Let’s show this before introducing some further drum-buffer-rope concepts.
In fact we have completed the second step; we have decided how to exploit the constraint. We used a simple example of writing a schedule; there are many more ways to exploit a constraint, and some of these are mentioned in the next page on implementation details. However, we need to move on to the third step, subordination of the non-constraint resources.
Sometimes using the word “protect” makes it easier to understand this step than using the correct term, which is “subordinate.” In fact, we subordinate the non-constraint resources in order to protect the constraint and the system as a whole. Let’s examine this in a little more detail.
In the process of change page we described subordination as avoiding deviation from our plan, and the plan in this case is our constraint exploitation schedule in the previous step. We described deviation from plan as (2);
(1) Not doing what is supposed to be done.
(2) Doing what is not supposed to be done.
We can therefore describe subordination as;
(1) Doing what is supposed to be done.
(2) Not doing what is not supposed to be done.
By doing what is supposed to be done in accordance with our plan we protect the constraint and the system as a whole. Moreover, by not doing what is not supposed to be done in accordance with our plan we also protect the constraint and the system as a whole. Let’s examine this with our simple model.
As we use up our supply of excess work-in-process, it is likely that the constraint will begin to “starve” from time to time. Work will not arrive in sufficient time for it to enter the constraint on schedule. We need to replace our local safety everywhere (our excess work-in-process) with some global safety right where it is needed, in front of the constraint. We need to buffer the constraint. We need to do what is supposed to be done in order to protect the constraint from shortages.
In fact we would normally have made our buffering decisions before we even began and therefore reduced our work-in-process and lead time in line with these pre-determined targets.
Let’s assume for a moment then that the lead time allowed for work to travel from the start of the process to the start of the constraint was 18 days prior to the implementation. Well, in fact, it could be 18 hours for electronics or the paper work in an insurance claim, or it might be 18 weeks for heavy engineering. But let’s use days in this example. The rule of thumb to apply is to halve the existing lead time (3). Therefore the new lead time becomes 9 days. If halving the lead time sounds horrendously short, it is not. Most of the time the current work-in-process is sitting in queues doing nothing. You can easily check this for yourself – go out and tag some work with a flag or a balloon or a bright color and then watch it. It will sit. This 9 day period becomes our buffer length.
To this 9 day buffer we apply a second rule of thumb and divide the buffer into zones of one third each (4). We expect most work to be completed in the first two-thirds and be waiting in front of the constraint for the last third of the buffer time. Thus we expect our work to take about 6 days of processing (and waiting-in-process) and 3 days of sitting in front of the drum.
If 3 days sitting in front of the constraint sounds terrible, then remember that prior to the implementation the system allowed work to sit for at least another 9 days. Nine plus 3 is 12 days sitting. Which would you rather have, 12 days or 3 days? More importantly, which would your customer prefer?
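The two rules of thumb above reduce to simple arithmetic. A minimal sketch, using the 18 day example from the text:

```python
# The two buffer-sizing rules of thumb from the text, as arithmetic:
# halve the current lead time to size the buffer, then split the buffer
# into three equal zones.

def size_buffer(current_lead_time_days):
    buffer = current_lead_time_days / 2          # rule of thumb: halve it
    zone = buffer / 3                            # three equal zones
    return {"buffer": buffer,
            "expected_processing": 2 * zone,     # zones 3 and 2
            "expected_wait_at_drum": zone}       # zone 1, in front of the drum

print(size_buffer(18))
# {'buffer': 9.0, 'expected_processing': 6.0, 'expected_wait_at_drum': 3.0}
```

These are starting values only; buffer management, discussed later, is what tunes them once the system is running.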
We now can protect our system constraint by ensuring that there is always work for it to do. Thus we ensure its effective exploitation – and with much less total material or lead time than before.
Let’s add the buffer to our diagram.
Let’s make sure we are clear about the definition of the buffer. “For all practical purposes the TIME BUFFER is the time interval by which we predate the release of work, relative to the date at which the corresponding constraint’s consumption is scheduled (5).”
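That definition translates directly into a release-date calculation: predate the release of each job by the buffer length relative to its scheduled date on the constraint. A small sketch (the calendar dates are invented for illustration):

```python
# The time buffer definition in code: the release date is the
# constraint's scheduled consumption date predated by the buffer length.

from datetime import date, timedelta

def release_date(drum_date, buffer_days):
    """Predate the release of work by the buffer length."""
    return drum_date - timedelta(days=buffer_days)

# A job scheduled on the drum for 15 March, with a 9 day buffer,
# is released on 6 March.
print(release_date(date(2024, 3, 15), 9))  # 2024-03-06
```

This calculation is, in effect, the “rope”: release timing is tied back from the drum schedule by the buffer.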
Please be careful, on the diagram above we have drawn units of time – the zones and the buffer – as space on our diagram. Don’t let this confuse you. The zones equate to time allocated in the plant to protecting an operation whose position and function is critical to the timeliness and output of the whole process. The zones do not equate to the position of work in the plant. In fact we will return to this shortly and try and draw the diagram more realistically to represent time.
Why is this whole period from material release to the constraint considered as the buffer? Schragenheim and Dettmer consider that this is one of two unique aspects of buffering in Theory of Constraints. “The reason buffers are defined as the whole lead time and not just the safety portion is that in most manufacturing environments there is a huge difference between the sum of the net processing times and the total lead time. When we review the net processing time of most products, we find it takes between several minutes and an hour per unit. But the lead time may be several weeks, and even in the best environments several days. Consequently, each unit of product waits for attention somewhere on the shop floor for a much longer time than it actually takes to work on it.” “So it makes sense not to isolate the net processing time, but to treat the whole lead time as a buffer – the time the shop floor needs to handle all the orders it must process (6).”
The other unique point is that buffers are, as we have mentioned, measured in time. Firms in non-drum-buffer-rope settings consider a buffer to be a measure of physical stock: 6 jobs, or 6 orders, or 10 batches, or 4000 pieces, or whatever. In drum-buffer-rope a constraint buffer is a measure of time: hours or days of work at the constraint rate located between the gating operation (material release) and the constraint. In fact, there are two ways to look at a buffer: either from the perspective of a single job, or from the perspective of the system as a whole. Let’s consider this for a moment.
Let’s assume for the sake of simplicity that all of our jobs are of equal length. Let’s assume then that each one takes 1 day of constraint time. In this case each job has a 9 day buffer to the constraint. That is, it is released 9 days prior to its scheduled date on the constraint. This is the perspective of a single job. The constraint, looking back, will see 9 one-day jobs at various stages in the process; this is the perspective of the system as a whole.
What then, all else being equal, if all of our jobs now take half a day on the constraint? Each job still sees a 9 day buffer; the constraint, looking back, will see 18 half-day jobs at various stages in the process, but the aggregate load is still 9 days. This is the perspective of the system as a whole.
Let’s do this one more time. Each job now takes a quarter of a day on the constraint. Each job still sees a 9 day buffer; the constraint, looking back, will see 36 quarter-day jobs at various stages in the process, but the aggregate load is still 9 days from the perspective of the system as a whole. It is time that is the measure of the buffer.
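The three cases just described can be checked with a few lines of code: however finely the work is divided, the aggregate load the constraint sees is the same 9 days of time.

```python
# The buffer is measured in time, not units: however the work is divided,
# the aggregate load seen by the constraint is the same.

def constraint_load(jobs):
    """jobs: list of days-on-constraint per job.
    Returns (number_of_jobs, total_days_of_load)."""
    return len(jobs), sum(jobs)

print(constraint_load([1.0] * 9))     # (9, 9.0)   nine one-day jobs
print(constraint_load([0.5] * 18))    # (18, 9.0)  eighteen half-day jobs
print(constraint_load([0.25] * 36))   # (36, 9.0)  thirty-six quarter-day jobs
```

The unit count changes in each case; the time, which is what the buffer measures, does not.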
Let’s labor this point for a moment because it is so important. Measuring a constraint buffer in units of time is unique to drum-buffer-rope because acknowledgement of the existence of a singular constraint within a process is unique to drum-buffer-rope. We can apply this to both the constraint buffer size and the constraint buffer activity.
Let’s look at constraint buffer activity first.
By considering only one station, or step, or procedure, we need only to know one set of average times for that place or action for all of the different types of material units that pass through it. We could look at this as follows;
At a manufacturing constraint an hour is an hour but the number of units may differ
The number of physical units may differ because different types of material using the same constraint may use different amounts of constraint time. In fact, even the same type of material will display some variability unless the constraint is a totally automated procedure – but these will largely average out.
How about constraint buffer size then?
The unique perspective brought about by the designation of a singular constraint allows us to define the length of the buffer in time also. Essentially the buffer is sized and “sees” the duration from the gating operation to the constraint due date. Moreover the buffer “sees” committed demand – work that has already been released to the system. Constraint buffers, divergent/convergent control point buffers, assembly buffers, and shipping buffers are all of the same basic nature.
Maybe it is much simpler to say that;
We protect time (due date) with a time buffer
There is, however, one other buffer type that we are likely to come across in manufacturing – a stock buffer. These occur in two places in manufacturing: at raw material/inwards goods in all process environments, and at finished goods in a make-to-stock environment. These are actually supply chain buffers; they represent the two places where the supply chain must interact with processing – before the beginning of the process and after the completion of the process. We need to ensure that we always have an adequate supply of raw material prior to the process to meet consumption, and we need to ensure that we always have an adequate supply of finished goods post-production to meet demand. We will examine these types of buffers later on this page. They are also examined in more detail on the supply chain pages – especially the replenishment page. However, let’s confine ourselves at the moment to constraint buffers. We need to labor the issue that the constraint buffer is a measure of time. Let’s do that.
Many, many people say that they do understand the definition of a drum buffer or of a constraint buffer when the evidence is that they do not. Too often our prior experience causes us to think of buffers in terms of physical stock, and too often we consider zone 1 as “the buffer.” Let’s see.
The buffer is the whole of the duration of the part of the system that the buffer protects.
Did I overstress the point? I don’t think so. Check here for more discussion on the continual misunderstanding of buffers in drum-buffer-rope.
In part, this is due to our prior manufacturing experience with MRP II systems and push production which tends to blind-side our interpretation (see the sections on Buffer The Constraint and Local Safety Argument in the next page – Implementation Details – for further development of this aspect.) In part, the problem also lies in the way we try to draw time as space on our simple diagrammatic representations. The only way to draw time is to draw a sequence of diagrams. Let’s do that.
We will follow a slice of work – one day’s worth – through the process to the drum. We will use our 9 day buffer as we derived above, so this slice of work is the drum’s work for one day, 10 days out from the scheduled processing date. There are 5 products (units, jobs, batches, whatever) in our slice. The products are “lilac,” “red,” “green,” “blue,” and “orange.” The time interval, for the sake of clarity in this example, is coarse – days – rather than the finer divisions of hours or less that we might expect to find in reality.
Imagine that within the departments (“beginning” and “middle”) of our generic process we have the tools of our particular trade; be they desks in a paper trail, admissions or beds or clinical units in a hospital, or work centers in a manufacturing system. The 5 products could at any time be waiting, moving between jobs, or being worked upon. The resolution of this detail isn’t important to us here.
Probably, on the day before the release date, the planner knows what will be released. The planner might even have the orders “cut” and waiting but unreleased (and hopefully unknown to the floor – to avoid people working ahead of time). Let’s draw this.
The orders may exist on a plan but they are not yet released. We draw the units outside of the system even if they currently have no physical presence other than paper work or an entry on a scheduling system.
We have also drawn in a timeline. It is colored according to the buffer zones. Zone 3 is the “green zone,” zone 2 is the “orange or yellow zone,” and zone 1 is the “red zone.”
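The zone coloring can be sketched as a simple classification of elapsed time since release. This is my own sketch, assuming the zone numbering described above (zone 3 at release, zone 1 nearest the drum) and the 9 day buffer from the running example.

```python
# Classify work into buffer zones by elapsed time since release.
# Zone 3 (green) is the first third of the buffer, zone 2 (yellow)
# the middle third, and zone 1 (red) the final third before the drum.

def buffer_zone(days_since_release, buffer_days=9):
    third = buffer_days / 3
    if days_since_release < third:
        return "zone 3 (green)"
    if days_since_release < 2 * third:
        return "zone 2 (yellow)"
    return "zone 1 (red)"

for day in (1, 4, 8):
    print(day, buffer_zone(day))
# 1 zone 3 (green)
# 4 zone 2 (yellow)
# 8 zone 1 (red)
```

In buffer management, work still missing from the red zone is what triggers expediting; the zones are a time-based early-warning scale, not physical locations.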
On the first day of the schedule all the products are released (as scheduled) and are in zone 3 of our time buffer. Their physical location at the end of the day is as follows.
Lilac might be a small batch or a simple process that is completed quickly; it moves forward further (and maybe faster) than the rest.
After another day we are at day 2, still within buffer zone 3, and the process looks like this.
We can see that red has moved quite quickly relative to the others and blue hasn’t moved at all. How does this happen? Different jobs travel through different routings, and have different wait times (because of other jobs in front of them) and different processing times (either because of different batch sizes or different work). And of course things don’t always go as planned; we have break-downs, people are absent, and “stuff happens.”
By day 4, one day into buffer zone 2, we see the work has evolved as follows.