Friday, November 23, 2012

Analytic FIFO multiplied - part 3

This is part 3 of a three-part series on analytic FIFO picking of multiple orders. Part 3 shows how to combine the FIFO developed in part 1 with the analytics used for the better route calculation in an earlier blog post.

We use the same tables and same data as part 1, so read part 1 for the setup.

When combining the FIFO for multiple orders with the route calculation, we get this nice piece of SQL:

with orderlines as (
   select o.ordno
        , o.item
        , o.qty
        , nvl(sum(o.qty) over (
             partition by o.item
             order by o.ordno
             rows between unbounded preceding and 1 preceding
          ),0) + 1 from_qty
        , nvl(sum(o.qty) over (
             partition by o.item
             order by o.ordno
             rows between unbounded preceding and current row
          ),0) to_qty
     from orderline o
    where ordno in (51, 62, 73)
), orderbatch as (
   select o.item
        , sum(o.qty) qty
     from orderlines o
    group by o.item
), fifo as (
   select s.loc
        , s.item
        , s.loc_qty
        , least(s.loc_qty, s.ord_qty - s.sum_prv_qty) pick_qty
        , sum_prv_qty + 1 from_qty
        , least(sum_qty, ord_qty) to_qty
     from (
      select o.item
           , o.qty ord_qty
           , i.loc
           , i.qty loc_qty
           , nvl(sum(i.qty) over (
                partition by i.item
                order by i.purch, i.loc
                rows between unbounded preceding and 1 preceding
             ),0) sum_prv_qty
           , nvl(sum(i.qty) over (
                partition by i.item
                order by i.purch, i.loc
                rows between unbounded preceding and current row
             ),0) sum_qty
        from orderbatch o
        join inventory i
          on i.item = o.item
   ) s
    where s.sum_prv_qty < s.ord_qty
), pick as (
   select to_number(substr(f.loc,1,1)) warehouse
        , substr(f.loc,3,1) aisle
        , dense_rank() over (
             order by
             to_number(substr(f.loc,1,1)),    -- warehouse
             substr(f.loc,3,1)                -- aisle
          ) aisle_no
        , to_number(substr(f.loc,5,2)) position
        , f.loc
        , f.item
        , f.pick_qty pick_at_loc
        , o.ordno
        , least(o.to_qty, f.to_qty)
          - greatest(o.from_qty, f.from_qty) + 1 qty_for_ord
     from fifo f
     join orderlines o
          on o.item = f.item
         and o.to_qty >= f.from_qty
         and o.from_qty <= f.to_qty
)
select p.loc
     , p.item
     , p.pick_at_loc
     , p.ordno
     , p.qty_for_ord
  from pick p
 order by p.warehouse
        , p.aisle_no
        , case
             when mod(p.aisle_no,2) = 1 then p.position
             else -p.position
          end

That gives us this output:

LOC        ITEM       PICK_AT_LOC      ORDNO QTY_FOR_ORD
---------- ---------- ----------- ---------- -----------
1-A-02     B1                   5         73           5
1-A-20     A1                  18         51          18
1-B-15     B1                   2         51           2
1-B-11     B1                   4         51           3
1-B-11     B1                   4         73           1
1-C-04     B1                  12         51          12
1-C-05     A1                   6         73           6
2-A-02     A1                  24         51           6
2-A-02     A1                  24         73          10
2-A-02     A1                  24         62           8
2-D-23     B1                   1         51           1

11 rows selected.
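The heart of the query is the interval arithmetic: every order line and every inventory location is turned into a 1-based running-sum interval, and a location supplies an order wherever the two intervals overlap. As a minimal sketch, here is that same logic in Python for a single item (the function name fifo_pick and the data shapes are mine, purely for illustration - the real work is of course done in the SQL):

```python
from itertools import accumulate

def fifo_pick(orderlines, inventory):
    """orderlines: [(ordno, qty)] in order number sequence, for ONE item.
       inventory:  [(loc, qty)] in FIFO (purchase date) sequence for that item.
       Returns [(loc, ordno, qty_for_ord)]."""

    def intervals(rows):
        # running sums mirror the analytic SUM ... ROWS BETWEEN windows:
        # to_qty is the running total, from_qty is the previous total + 1
        tos = list(accumulate(q for _, q in rows))
        froms = [t - q + 1 for (_, q), t in zip(rows, tos)]
        return [(key, f, t) for (key, _), f, t in zip(rows, froms, tos)]

    total_ordered = sum(q for _, q in orderlines)
    picks = []
    for loc, loc_from, loc_to in intervals(inventory):
        if loc_from > total_ordered:
            break  # sum_prv_qty >= ord_qty: this location is not needed
        for ordno, ord_from, ord_to in intervals(orderlines):
            # the join condition plus least/greatest = interval intersection
            qty = min(ord_to, loc_to) - max(ord_from, loc_from) + 1
            if qty > 0:
                picks.append((loc, ordno, qty))
    return picks

# Hypothetical mini-example: orders 51 (5 pcs) and 73 (3 pcs) of one item,
# two locations holding 4 and 10 pieces in FIFO order.
print(fifo_pick([(51, 5), (73, 3)], [("1-A-02", 4), ("1-B-11", 10)]))
# location 1-A-02 gives 4 pcs to order 51; 1-B-11 gives the last piece
# of order 51 plus all 3 pieces of order 73
```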

So now the operator goes "up" in aisle 1-A, "down" in aisle 1-B and "up" again in aisle 1-C, and so on.
The ultimate picking query - picking multiple orders by FIFO in an efficient route through the warehouse. What more could you possibly want? :-D
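The alternating walking direction can also be sketched as a sort key in Python, mirroring the dense_rank aisle numbering and the alternating-direction ORDER BY at the end of the query (route_order is a hypothetical helper, for illustration only):

```python
def route_order(locs):
    """Sort locations of the form 'W-A-PP' (warehouse-aisle-position)
       into a serpentine route through the warehouse."""
    # dense_rank of (warehouse, aisle) across the whole pick list
    aisles = sorted({(int(l[0]), l[2]) for l in locs})
    rank = {wa: i + 1 for i, wa in enumerate(aisles)}

    def key(l):
        warehouse, aisle, pos = int(l[0]), l[2], int(l[4:6])
        aisle_no = rank[(warehouse, aisle)]
        # odd-numbered aisles are walked "up" (ascending position),
        # even-numbered aisles "down" (descending position)
        return (warehouse, aisle_no, pos if aisle_no % 2 == 1 else -pos)

    return sorted(locs, key=key)

print(route_order(["2-D-23", "1-B-11", "1-A-20", "1-C-05",
                   "1-A-02", "2-A-02", "1-B-15", "1-C-04"]))
# reproduces the location order of the query output above
```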

You may feel that this is a complex piece of SQL, and that it would be a lot easier to understand if it were written procedurally in PL/SQL, C# or Java, with a nice flow of statements and IFs and THENs and LOOPs you can single-step debug in your head.

But let me tell you: if you take the time to understand set-based processing and how to really use the power of SQL to create queries like these, then your application stands a much better chance of outperforming anything else. If you can gain a manyfold performance increase by doing SQL properly, why bother nit-picking over whether an integer datatype shaves fractions of CPU time compared to a decimal datatype in your C# program?

A recent AskTom thread discusses a similar theme. I gave an example of how an application written as one SQL wrapped in a PL/SQL procedure survives whenever there is a new fancy GUI.

Use the power of SQL - you won't regret it :-D

And let me for the third and final time give you Monty's great quote:
"Don’t you just love these kind of challenges? It’s why we do what we do!"

If you wish, you can download the script for this demo.


  1. Hi Kim, I like your three FIFO posts. Some good ideas. You use a cursor-based approach, which I believe is the best-practice way. Others have used a data set approach to get value of stock on hand (see ). This is indeed a quick method to get value of stock on hand, but I can't find a data set solution based on this technique to get cost of goods sold. Have you attempted a data set approach to get cost of goods sold?

    1. Hi, John

      Thanks for your comment :-)

      If we look at Table 1 at the simple-talk link you gave, I presume what you mean by "getting cost of goods sold" is to calculate, that the 165 in the OUT transaction at line 4 consists of 138 pieces at 245.94 each plus 27 pieces at 199.95 each for a total of 39338.37. That way you can get a precise profit calculation for that OUT transaction.

      No, I do not have such a piece of SQL, but I probably will have to create it some day. We have exactly this requirement at my work and the present slow-by-slow approach is terrible... The problem for me is mostly that I have to do it location-wise for each article and there are a lot of "move" transactions simply moving stock from one location to another, where those transactions also need to find out by FIFO how much value was moved around, and then when that is moved once again we need to follow the trail to the next location and so on.

      When I get the time to get this done at work, I'll be sure to add a blog post as well :-)

    2. Thanks Kim

      Best wishes. I'll keep an eye on your blogs. They're very interesting.