A Guide to Implementing the Theory of Constraints (TOC)

Batch Issues

Why Do We Batch?

Batching issues have a profound influence on the characteristics of any process, and substantial gains can be made by properly understanding the dynamics involved.  Although we often don’t think about it, we can batch by quantity of material or by quantity of time.  They seem interchangeable, but most often one is treated as the variable and the other as invariable.  “We batch once a week” means time is invariable and material is variable.  “We batch a full load” means material is invariable and time is variable.

Let’s confine ourselves here to the batching of material.

And why do we batch?  Well, to be efficient of course!

Increasing batch size increases work-in-process inventory levels, manufacturing lead time, local and global safety times, and finished goods stock levels; it decreases quality and throughput.  However, well thought-out changes to critical batch sizes can hugely improve these parameters within a process.  They do so not by speeding up machine or process time, but by reducing the idle time when work sits on the workshop floor (or office desk, or computer hard disk) between process points.

So far we have only considered drum-buffer-rope in terms of removing the “excess” work-in-process from the system.  You might like to think of this as “drying out” the system.  Often this alone will substantially reduce manufacturing lead time, but after doing that, it is time to look for our 5th gear – overdrive – smaller batches move much faster.

 
A Brief History Of Batch Size Issues

Batch size issues at a formal level have tended to be treated as a trade-off analysis or optimization between set-up or ordering costs, storage and holding costs, and stock-out costs (1).  The resulting optimal batch size is known as the Economic Order Quantity or EOQ.  The formalization dates back to about 1915 (2).  However, while almost everyone knows about the Economic Order Quantity, very few people ever bother to calculate it.  There is a far more fundamental driver of batch sizing.
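As an aside, before moving on to that more fundamental driver, the classical EOQ calculation itself is only a couple of lines.  Here is a minimal sketch in Python using the standard textbook symbols – annual demand, cost per set-up or order, and annual holding cost per unit – none of which come from the text above, and the numbers in the example are made up.

    import math

    def economic_order_quantity(annual_demand, setup_cost, holding_cost_per_unit):
        # Classical EOQ: the batch size that balances set-up/ordering cost
        # against storage/holding cost; stock-out cost is handled separately
        # in most formulations.
        return math.sqrt(2.0 * annual_demand * setup_cost / holding_cost_per_unit)

    # Example: 10,000 units/year, $50 per set-up, $2 per unit per year to hold.
    print(economic_order_quantity(10_000, 50.0, 2.0))  # about 707 units per batch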

The fundamental driver is reducing “non-productive” set-up time and maximizing “productive” processing or machine time.  There are usually very strong measurement incentives – timesheets – which cause workshop floor personnel to minimize set-up time and maximize process time.  The easiest way to do this is to decrease the frequency of set-ups by increasing the batch size.  There is, however, another more subtle and less often expressed driver at work as well.

A set-up should be a matter of pride for set-up personnel; however, in addition to the pressure to complete the set-up as soon as possible, there is also the added responsibility of getting the set-up exactly right.  In fact, the larger the batch to be processed, the more pressure there is not to cause an error that will be replicated exactly on every item in the batch that is subsequently processed.  Contrast this with the relief the personnel feel once the machine – especially an automated or semi-automated one – starts its production run.  They can then relax and monitor a run which might take hours or even days before the same or another machine requires its next set-up.  Thus we have a more personal driver for reducing the frequency of set-ups and consequently increasing the batch size.

There is an important follow-on effect to reducing the frequency of set-ups.  Just as introducing computer-integrated manufacturing removes operator involvement and allows operator skills to atrophy (3), decreasing set-up frequency drives set-up proficiency down.  The danger is an ever-downward spiral of reducing set-up frequency and increasing batch size.  Even when the operator element is reduced or replaced in “flexible” manufacturing systems, performance is in some cases worse than in the manual system.  Rather than producing more variety at lower volume, these systems often produce less variety at higher volume (4).  That is to say, set-up time is reduced and batch size is increased.  This is clearly a management decision, not an operator decision.

Of course there are many places where the manufacturing concept of set-up time does not apply.  Consider a paperwork flow in an office.  What usually happens?  People accumulate a pile of similar jobs and process them together at one time – because it’s efficient.  OK, so there is no set-up time argument to account for this carry-over into the service industry, but it does exist, and it probably arises from a need to locally optimize.

 
Manufacturing Lead Time And Batch Size

It is sometimes hard to imagine how reducing the batch size can reduce manufacturing lead time.  Clearly each part takes the same time to go through each process stage regardless of whether there is one piece or one thousand pieces in the batch.  The answer, however, lies not in the process time but in the waiting time.

Let’s break manufacturing lead time down into its component parts: set-up time, run time, move time, and queue time (5).  Queue time is also known as non-instant availability (6).  “Queue time is usually much larger than the sum of the other numbers.  The only number that considers the size of the order is the run time” (5).  We can break run time down further into process time and wait time for each individual piece in the batch.  Let’s draw a table to show this.

Manufacturing Lead Time

    Division        Subdivision
    -----------     ------------
    Set-up Time
    Run Time        Process Time
                    Wait Time
    Move Time
    Queue Time

Generally, even when the queue time has been reduced, the wait time for each piece will be large.  The first piece spends most of the run time waiting after being processed while all the other pieces are being processed.  The last piece spends most of the run time waiting before being processed while all the other pieces are being processed.  Pieces in the middle of the batch wait equal amounts of time before and after being processed.  OK, there is a lot of waiting during run time.

We can reduce the amount of waiting during run time.  Let’s see how we can do that.

The first mechanism is transfer batching.  In the first case a job goes through four operations with no queue time.  Each operation takes 4 hours for a total of 16 hours.  In the second case, as soon as half of the first operation is completed it is passed to the second operation.  As soon as that half is completed on the second operation it is forwarded to the third operation, and so forth.  The total duration is decreased from 16 to 10 hours.  In the third case we pass on one quarter of the job as soon as it is completed at each stage.  Total duration is compressed from 16 hours to just 7.  We do this at no cost in additional set-ups.
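The arithmetic in this example generalises.  Here is a minimal sketch of it in Python – the four-operation, four-hour job comes from the paragraph above, but the function itself is purely illustrative and assumes equal operation times and no queue time.

    def transfer_batch_lead_time(num_operations, hours_per_operation, num_transfer_batches):
        # Each operation still spends hours_per_operation on the whole job,
        # but work is passed downstream in num_transfer_batches equal pieces.
        sub_batch_hours = hours_per_operation / num_transfer_batches
        # The first sub-batch walks through every operation, then the remaining
        # sub-batches drain out of the last operation one after another.
        return num_operations * sub_batch_hours + (num_transfer_batches - 1) * sub_batch_hours

    print(transfer_batch_lead_time(4, 4, 1))  # 16 hours - the job moves as one batch
    print(transfer_batch_lead_time(4, 4, 2))  # 10 hours - transfer in halves
    print(transfer_batch_lead_time(4, 4, 4))  # 7 hours - transfer in quarters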

The second mechanism is process batching.  In the first case, again, a job goes through four operations with no queue time.  Each operation takes 4 hours for a total of 16 hours.  In the second case we double the number of set-ups and halve the duration from 16 hours to 8 per batch.  In the third case we quadruple the number of set-ups and halve the duration once again from 8 to 4 hours per batch.  Again we can reduce the duration of the wait time considerably but at the cost of additional set-ups.
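The same sort of sketch covers process batching – again illustrative only.  Splitting the original batch into equal smaller batches divides the run time per batch accordingly, at the price of extra set-ups (whose duration is ignored in the timing here).

    def process_batch_lead_time(num_operations, hours_per_operation, num_splits):
        # Duration for one of the smaller batches routed through all operations.
        # The number of set-ups per operation rises from 1 to num_splits.
        return num_operations * (hours_per_operation / num_splits)

    print(process_batch_lead_time(4, 4, 1))  # 16 hours, 1 set-up per operation
    print(process_batch_lead_time(4, 4, 2))  # 8 hours per batch, 2 set-ups per operation
    print(process_batch_lead_time(4, 4, 4))  # 4 hours per batch, 4 set-ups per operation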

Transfer batching seems to be the better of the two because it requires no additional set-ups.  However, for either method to be effective there must not be large amounts of other idle work sitting in front of the batches.  In actuality, reducing batch size reduces queue time and the wait time within run time simultaneously, rather than in the step-wise progression suggested here of first drying the excess work-in-process out of the system – and hence reducing queue time – and then reducing batch size – and hence the waiting within run time.  It is easier to understand as a step-wise presentation; in reality it is all mixed together.

 
Transfer Batching Is Natural

If you attempt to formalize transfer batching as an explicit operating procedure, be prepared for howls of dissent from workshop personnel.  Also be aware that at the end of each measurement period – be it a week, a month, or a quarter – the very same workers, foremen, and supervisors will quietly and automatically go about making discreet arrangements with each other to expedite work to completion using implicit transfer batching.  Transfer batching is natural.

 
YES, But You Don’t Understand - We Have 3000 Standard Items!

Whereas reduced transfer batch size might be natural when it is aligned with end of period performance measures, reducing process batch size isn’t at all natural due to the reluctance to increase set-ups as outlined previously.  However, reducing process batch size is the major driver in reducing finished goods inventory, and reducing finished goods inventory is the major driver in reducing forecast dependency and stock-outs.  Reducing process batch size, however, is also a major driver for increasing set-up frequency and thus decreasing productivity on the constraint.  This seems like an impossible dilemma.

Moreover, the companies that are most likely to benefit from reduced finished goods stock are those with the largest inventories of make-to-stock items.  Because they make so many stock items, and their lead times are so long, they need significant amounts of finished goods.  How can we reduce the process batch size on the constraint and so reduce the lead time and the finished goods inventory?

Well, there are two solutions:

(1)  Reduce the set-up time on the constraint (only).

(2)  Hunt for the biggest process batches and split them up.

Normally, proper exploitation or protection of the constraint will produce significant increases in productive time without attempting to reduce set-up time.  Set-up reduction tends to be the last thing that people want to do, rather than the first.  Even with set-up reduction, if we split all process batch sizes in half – and so doubled the total set-up time on the constraint – we might just render the constraint unable to process the required throughput.  Therefore, hunt for the biggest process batches first.

 
The Truck And Trailer Analogy

Here is one of those occasions where you can get something significant for very little effort.  Large process batches are like a truck and trailer unit full of gravel driving down a highway.  Usually it can do a good speed on the flat sections, but the rig slows down considerably going up (and down) hills.  Smaller, lighter, and faster cars also have to slow down unduly behind the truck on the hills.  In terms of process batches, a heavy truck and trailer is like a large process batch, and a car is like a small process batch.  The flat road is a section that has a very short process time, and a hill is a section with a very long process time.  Large batches entering a long process time section will hold up small batches behind them that have just left a short process time section.  The small batch now has to queue in proportion to the run time of the large batch ahead of it.

So what is the solution?

Well, returning to the truck and trailer analogy, if we split the load between two trucks, then both trucks would be able to travel at greater speed up (and down) the hills, and the smaller, lighter, and faster cars could do likewise.  In the factory, if we split the largest process batches into two – and separate them in time (otherwise people will just join them up again – trust me) – then everything will be able to move faster as a result.  Essentially we have better process flow.

So what proportion of my existing batches am I looking at?

Well, product volume will follow some sort of Pareto distribution.  Construct histograms of volume percentage of product versus process batch size, and also of lot percentage (percentage of batches) versus process batch size.  You will find somewhere between 70 and 80% of your production volume in your largest batch size classes.  You will also find that those same batch size classes account for between 20 and 30% of the total set-ups.
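If you have the lot sizes from your works orders, those two histograms take only a few lines to tabulate.  A minimal sketch, assuming a simple list of lot sizes and a set of batch-size class boundaries – both inputs, and the example numbers, are hypothetical.

    from collections import defaultdict

    def batch_size_profile(lot_sizes, size_classes):
        # lot_sizes: one entry per works order - the number of units in that lot.
        # size_classes: list of (lower, upper) bounds defining each batch-size class.
        volume = defaultdict(float)   # units produced per class
        count = defaultdict(int)      # lots (and hence set-ups) per class
        for size in lot_sizes:
            for lower, upper in size_classes:
                if lower <= size < upper:
                    volume[(lower, upper)] += size
                    count[(lower, upper)] += 1
                    break
        total_volume = sum(volume.values())
        total_lots = sum(count.values())
        for cls in size_classes:
            print(cls, round(100 * volume[cls] / total_volume, 1), "% of volume,",
                       round(100 * count[cls] / total_lots, 1), "% of lots")

    # e.g. four batch-size classes and a handful of made-up lot sizes
    batch_size_profile([5000, 4000, 3000, 200, 150, 100, 80, 50],
                       [(0, 100), (100, 1000), (1000, 4000), (4000, 10000)])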

Does this square with reality? Yes.  We tend to batch up the highest-demand products into the largest (and most efficient) process batches – giving high volume and a small number of set-ups.  On the other hand, we can’t batch up all the “rats and mice” that we sell as well – giving quite a small volume and a very large number of set-ups.

Let’s illustrate this with some real data from a large-scale small-device batch manufacturer.  Firstly let’s look at the volume percentage versus batch size.

We can see that the two largest batch classes account for 68% of the total volume (these are real numbers).  The two smallest batch classes account for 13% of the total volume.

Let’s then look at the number of lots of each of these batch classes.

Now we see that the two largest batch classes account for only 29% of the lots (and hence set-ups), whereas the two smallest batch classes account for 49% of the lots (and hence set-ups).  Be careful.  It is not the large number of small set-ups that is the cause of large work-in-process values; it is the large batch sizes.  In this example 70% of the total production is accounted for by 30% of the lots.

The large process batches cause everything else to slow down.  If, for example, the total set-up time for a large batch is of the order of 10% of the total process time for these large batches, then halving the largest batch size and doubling the number of set-ups will increase the set-up time for these batches from 2-3% of total time to 4-6%.  Not a big price to pay!  And if you are looking for spare time, take a good hard look at the absolute downtime recorded, not at a pie-chart of set-up time, process time, maintenance time, and so on.

So, do the arithmetic on splitting the largest process batch classes into two smaller process batches.  See if you can accommodate the additional set-up time on the constraint.  Process flow, lead time, and finished goods inventory should all improve as a consequence of this simple application.
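That arithmetic can be as simple as the following rough feasibility check – a sketch only, assuming you can estimate how many large batches you intend to halve, the time each extra set-up will add on the constraint, and the spare time currently available there (all of the numbers below are made up).

    def extra_setup_load(batches_to_split, setup_hours_per_batch,
                         spare_hours_on_constraint):
        # Halving each listed batch adds one extra set-up per batch.
        # Returns the additional set-up hours and whether the constraint can
        # absorb them within its currently available spare time.
        extra_hours = batches_to_split * setup_hours_per_batch
        return extra_hours, extra_hours <= spare_hours_on_constraint

    # e.g. 40 large batches per month, 0.5 h per set-up, 30 h of recorded
    # downtime per month on the constraint
    print(extra_setup_load(40, 0.5, 30))  # (20.0, True) - 20 extra hours fits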

 
Batching Discipline

Factories are full of helpful people before the constraint who will surely recognize two process batches of the same product that are separated by a day or thereabouts on the constraint schedule.  They will helpfully “mate” them up again to be efficient – you are not going to remove 100 years of tradition, let alone automatic local optimization, that quickly.  Extremely helpful people will actually hold up a process batch until a matching batch comes along.  You will recognize that this is happening when batch-pairs begin to arrive at the constraint – one generally late and the other generally early.  Despite protestations of complete surprise, you will find the cause somewhere before the constraint.

Like all old habits, it will disappear as people gain a greater appreciation of the importance of the improvement in flow.

 
Summary