Sunday, November 29, 2015

Supply Chain: Adapting Process to Create Customer Value

Foreword: All too often organizations treat processes, policies, and procedures as though they are laws written in stone. While some processes are fixed due to regulations, most processes, procedures, and policies may be adapted. Old school leadership and management paradigms carried the adage, "You are not trying if you are not breaking rules." Old school thinking treated policies, procedures, and processes as guidelines, because leaders understood the background, effects, and outcomes of varying the parameters that those policies and procedures baselined.

This post discusses one approach to adapting process in order to create a competitive edge and build customer loyalty as operators in the supply chain.

Adapting Process to Create Customer Value

The bottom line up front is that customers make buy decisions based on the real or perceived value they receive from the product or service. The seller then must direct all efforts toward delivering that value. Any activity not contributing toward delivery of customer value is wasted time, effort, and money. The classic responses to this challenge have been leaning out operational processes, performing continuous improvement projects, and acquiring certifications such as ISO 9000. Leaning out processes and continuous improvement focus on cost and quality, which affect value but are rarely directly linked to customer value. After all, the customer does not care about the error rate on a process or how many times a subcomponent is handled. ISO 9000 enthusiasts often fly banners proclaiming to 'meet or exceed customer expectations', but this is a nebulous statement because it is neither qualified nor quantified. After all, what are the customer expectations? Value is complex to grasp because it can be many things to many customers. However, customer value can be reeled in and met.

Figure 1: Have It Your Way
 

Let us reflect on an old 1974 Burger King commercial themed "have it your way", Figure 1. Pardon the pun, but let us peel back the onion behind having it your way. Burger King had set up a process that collected customer orders and then adapted assembly to customer specifications, reflecting direct value to the customer. While assembly of a Whopper is linear and the bill of materials is simple, the key point is that the process is adaptable. Extra ketchup was measured as an extra squirt, which was quantified in ounces with a specified cost associated. Normally two pickles were used, but the pickle station would be bypassed on "hold the pickles", resulting in a savings. At the time, orders were tracked on paper systems which were summarized and filed as a report from each geographically dispersed operation. Thus, Burger King could track and manage costs and even adapt the standard process to common or popular demands.
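The per-order cost tracking described above can be sketched in a few lines. The component names and costs below are illustrative assumptions, not Burger King's actual figures; the point is that each customization maps to a quantified cost delta against a standard bill of materials.

```python
# Hypothetical per-unit costs for a standard order's bill of materials.
STANDARD_BOM = {"bun": 0.30, "patty": 0.90, "ketchup_oz": 0.02, "pickle": 0.03}
STANDARD_QTY = {"bun": 1, "patty": 1, "ketchup_oz": 1, "pickle": 2}

def order_cost(adjustments=None):
    """Cost of one order; adjustments maps component -> quantity delta."""
    qty = dict(STANDARD_QTY)
    for component, delta in (adjustments or {}).items():
        qty[component] = max(0, qty[component] + delta)
    return round(sum(STANDARD_BOM[c] * n for c, n in qty.items()), 2)

standard = order_cost()                          # 1.28
extra_ketchup = order_cost({"ketchup_oz": +1})   # 1.30, one extra squirt
hold_pickles = order_cost({"pickle": -2})        # 1.22, bypass the pickle station
```

Summing these order records over a reporting period recovers exactly the kind of cost-per-customization visibility the paper reports provided.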

Cutting edge technology and information systems offer higher, more consistent quality and greater responsiveness to market demands but can require a large capital investment. A labor intensive system and process offers enormous flexibility but is prone to errors, resulting in higher scrap rates and rework, and is limited by the human talent and skills on hand. The trick is not to begin with the process but with the customer or, as Stephen Covey's second habit states, to begin with the end in mind (Covey, 1989, pp. 102-153). Understand the customer and their needs first.

This begins with building a relationship in which the customer feels comfortable expressing not only the hard requirements but those little things that really sweeten the deal for them. Those little things could be a different cap with a pour spout or a little tighter quality than required. For example, the product may be within requirements but have lumps that make pouring difficult. Adjusting the solution within the hard requirements to make the product smoother may increase its usability to the customer. This is similar to the Burger King customer who wanted extra ketchup, perhaps to make the grilled burger with its charred spots slide down easier.

Once these little things are understood, common desirable qualities should be observable in the research collected. Now the process can be designed to adapt those desirable qualities to customer requests. In designing the process and its adaptability, designers should look at the core process and identify break points where adjustments can be made in order to tweak the product for the customer. Administratively, the primary model number may stay the same with an extra identifier for the standard tweaks in order to avoid shipping adapted finished goods to the wrong customer.
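One lightweight way to administer the extra identifier is a suffix code per standard tweak. A minimal sketch, with assumed tweak names and codes:

```python
# Hypothetical mapping of standard tweaks to short suffix codes.
TWEAK_CODES = {"pour_spout_cap": "PS", "fine_filter": "FF"}

def finished_goods_id(base_model, tweaks=()):
    """Primary model number plus a deterministic suffix for its tweaks."""
    suffix = "".join(sorted(TWEAK_CODES[t] for t in tweaks))
    return base_model if not suffix else f"{base_model}-{suffix}"

finished_goods_id("M100")                      # "M100", the standard product
finished_goods_id("M100", ["pour_spout_cap"])  # "M100-PS", one standard tweak
```

Sorting the codes keeps the identifier deterministic regardless of the order in which tweaks are recorded, so pick, pack, and ship all see the same label.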

Adapting process is not rocket science, but it does take some rigor and a willingness to make the necessary adjustments where possible. Not all customer desires can be accommodated without compromising the hard requirements or fundamentally changing the process altogether. The goal is to seek that happy medium where the customer enjoys an edge that differentiates the offering from the competition in a positive and meaningful way. When this happens, the customer achieves optimized value.

References:

Covey, S. (1989). The 7 Habits of Highly Effective People. Simon & Schuster: USA.

Monday, September 28, 2015

The Rise of a New Age

Foreword: Throughout human history entrenched socio-technical, socio-political, and other cultural periods have been labeled as an age. For example, there was the Stone Age, Bronze Age, Middle Ages, etc. Typically these ages lasted, in some cases, for centuries. Today ages move more swiftly and can last only a decade or two, as is the case with the Information Age. This opinion post discusses the transition away from the Information Age and the coming of a new age.

The Rise of a New Age


The Information Age began with the advent of widespread digital processing and mass communications, which resulted in explosive growth punctuated by several pivotal events. Each pivotal event was a disruptive technology that propelled the age into further wild growth. The first major pivotal event was the introduction of the desktop computer, which propelled widespread computational capability into common use. The next pivotal event was achieving real-time computation, during which processor speeds were increasing and processing technologies were evolving. The advent of the webpage on December 24, 1991 created a common platform for human centric communications. The miniaturization of circuits for portability and wearability extended the transition to common activities. The anticipated final major event of the Information Age is what futurist Ray Kurzweil calls the 'singularity', when computer systems will become more capable than the human brain. Even though many people consider the singularity on the event horizon, the Information Age is drawing to a close, if not already over, since processing speeds, storage capacity, and computational capability already exceed human capacity, as discussed in the post Organizational Computational Architectures. As of 2015, information systems lack self-awareness (the ability to reason independent of human supervision), sentience (the ability to feel emotions), and wisdom (the ability to make quantum leaps in judgment in the absence of information). Inroads are being made in each of these areas, and the achievement of reasonable capability in each will have profound impacts on humanity.

The Information Age is characterized by the transition from the older paper processes and analogue systems of the industrial revolution to digital computational systems. That transition is nearly complete, as digital computational systems are now nearly everywhere and the few remaining legacy systems will soon become so technologically obsolete that they will be removed from use. The result of the Information Age is the processing, accumulation, and organization of data into not only information but also knowledge. Now that we have these systems and accumulated knowledge, what purpose do they serve?

To gain a greater understanding and appreciation of the purpose, let us go back in time to the early 1960s. Generalized information theory, fathered by Claude E. Shannon, PhD, had already emerged circa 1948 as a science, and research was ongoing in this domain. Generalized Chaos Theory, fathered by meteorologist Edward Lorenz, emerged circa 1962, when the world held disconnected, polarized views. There were pillars of knowledge that rarely, if ever, communicated. Thus, there was a redundant process of discovery in each knowledge domain. More than 10 years after Chaos Theory was discovered, the world began to observe its value. Scientists began to network pillars of knowledge, resulting in widespread increases in scientific findings where knowledge had stagnated.

One of the great discoveries of Chaos Theory was that networked systems are found throughout nature. The greatest networked system is gravity, as every particle in space-time interacts with every other particle throughout space-time. In a similar fashion, the Information Age has produced pillars of information and knowledge that are now being networked. These networked pillars are now combining in interesting ways. Hence, the world has entered the Age of the Hybrid.

During the Age of the Hybrid there will be a blurring between domains such as virtual and physical realities, biology and machinery, the genome and computers, etc. The blurring has been ongoing and is increasing as the transition to digital systems completes, giving rise to the Age of the Hybrid. Knowledge domains will network and multiply information and knowledge exponentially.

The tempo of change is anticipated to be overwhelming. The purchase decision is expected to shift away from a discrete purchase to subscribing to a brand as hardware products commoditize. The purchase decision will be to go with a company based on its reputation for stability and its capability to service needs. In return for the brand subscription, updated and newer technology will be exchanged for outdated, obsolete, or older hardware at nominal cost, if not covered fully by the subscription fee. For example, you may purchase a cell phone service in which, on a regular cycle, the phone itself is replaced by the provider as part of the subscription.

Figure 1: Transhumanism Symbol
A spooky or creepy aspect of the Hybrid Age involves Transhumanism, or H+. As the genome becomes better understood, currently unimaginable combinations will emerge, such as living computers that are fed rather than powered. Even more concerning is the advent of creatures that result from tampering with the firing of genes, as the genome is digital, built on a four-symbol alphabet: G, C, A, T. Information is also applied to the genome in terms of the timing, sequencing, and duration of the firing gene. Hence, the genome is not a blueprint that is read but a patternmaker's template in which timing, sequencing, and duration are adjusted to form creatures.
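The "digital genome" idea can be made concrete: because the alphabet has four symbols, each base can be represented in two bits. A minimal sketch (the particular bit assignment is arbitrary):

```python
# Four symbols -> two bits per base (2**2 == 4); the assignment is arbitrary.
BASE_TO_BITS = {"G": "00", "C": "01", "A": "10", "T": "11"}

def encode(sequence):
    """Encode a DNA string as a bit string, two bits per base."""
    return "".join(BASE_TO_BITS[b] for b in sequence)

def decode(bits):
    """Invert encode(): read the bit string back two bits at a time."""
    inverse = {v: k for k, v in BASE_TO_BITS.items()}
    return "".join(inverse[bits[i:i + 2]] for i in range(0, len(bits), 2))

encode("GATTACA")  # 14 bits for 7 bases
```

The encoding round-trips losslessly, which is the sense in which the genome's sequence is "digital"; the timing, sequencing, and duration of gene firing discussed above are additional layers of information on top of this raw sequence.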

In conclusion, the purpose of the information age has been fulfilled and the world is entering a new age during which combinations of knowledge domains will cause hybrid realities, systems, and creatures.

Thursday, June 18, 2015

Tentacles of Conflicting Worldviews

Foreword: As the 4th of July approaches, every US citizen should reflect on the democracy and free market capitalism they enjoy, as both come at a cost and are under continual threat of loss. The bottom line up front is that the world left the Information Age some time ago and entered a newer age within which virtual and physical realities are blurring. The challenges facing information security and system vulnerabilities are greater now than ever, as subversive actors often leverage cyberspace in the clash between worldviews. This epic clash has very real and dire implications for democracy and free market capitalism, which every proponent of free market capitalism should defend with fierce resolve, as the fight is at our doorsteps.

Tentacles of Conflicting Worldviews

Worldviews clash with increasing intensity in a world compressed by the internet. Worldviews that were once geographically dispersed and isolated now have tentacles that reach deep into territories that were not accessible before. Proponents of the various worldviews leverage the strategic and tactical advantages of access to the internet in order to propagate as well as patch shortfalls in their ideologies. One such clash involves companies operating under free market capitalism competing with companies under socialistic systems that reject the tenets of the free market and operate under other mechanisms that seek to disrupt, diminish, and deny capitalistic companies their trade secrets.

Free market capitalism is creativity in service to humanity. Free market mechanisms such as creative destruction, innovation, and invention foster value in terms of effectiveness, efficiency, need, safety, etc. This created value is the basis for a redistribution of wealth in which money is put into the pockets of labor and business owners. Companies operating under a free market paradigm seek to leverage value through tactics such as durable competitive advantage, disruptive technologies, and planned obsolescence in order to capture market share and earn just rewards for the fruits of their labor.

Socialism operates on the concepts of welfarism and egalitarianism. Egalitarianism fundamentally holds that all humans are equal in every way, such that what is mine is yours and what is yours is mine. The mechanism of social justice, also known as institutional theft, sees a dialectic struggle between the proletariat (labor) and bourgeoisie (business owners) and redistributes wealth, eliminating just rewards for the fruits of one's labor. It thus penalizes creativity in order to drive equal pay for equal work. In the socialist mindset, work is thought of in a manufacturing model of repetitive tasks and does not account for differences resulting from human skill, talent, creativity, and the development of intellectual property. Companies operating under socialistic paradigms are State Owned Enterprises (SOEs), which spiral into mediocrity and then seek creative solutions to resolve their stagnation. Unfortunately, instead of realizing the shortfalls of the socialist ideology and seeking virtuous reforms, socialists instead seek to steal from free market capitalists, consistent with their practice of social justice, extending the clash between worldviews into the realm of cyberspace. In effect, socialistic nation-states conduct belligerent activities against free market capitalistic states via the SOE.

Companies operating under free market capitalism are exciting places, as they cultivate capitalistic mechanisms, expanding existing markets and developing new ones that result in new jobs and opportunities. This also makes them places to be envied, and they must therefore defend their intellectual property and trade secrets not only legally but through information technologies.

Defending information is militaristic in nature. The defenders of the networks must think in classic war fighting terms. Defense of the network or Computer Network Defense (CND) seeks to diminish, deter, deflect, disrupt, and/or deny the enemy’s actions and will to penetrate the network in order to steal information. This begins with a defensive posture that exposes the enemy’s activities in order to have a deliberate response that diminishes, deflects, deters, disrupts, and/or denies enemy action against the networks.

At the core of monitoring is a network of virtual agents that are watch keepers of the network having continuous vigilance for indications of an attack. Hackers of the networks attempt to penetrate the systems through a deliberate process outlined in the post: Computer Hacking. Once an incident is identified then actions are taken against the threat in one or more of the following manners:
  • Diminish: Threats are reduced simply through identification and knowledge that the systems are well defended.
  • Deter: Enemy action is discouraged knowing that vulnerabilities are being identified, monitored, and closed preventing penetration.
  • Deflect: Given an incident event, a response is planned and taken to reject the enemy action, redirect it to a managed monitoring environment, or simply redirect it into oblivion.
  • Disrupt: This is a more provocative approach given an ongoing event. The enemy action is interrupted, blocked, or obscured at the point of contact preventing the enemy from achieving their objective.
  • Deny: This is a proactive approach of closing down vulnerabilities when discovered in order to refuse the enemy use of those vulnerabilities.
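The five response categories above can be sketched as a simple dispatcher. The event fields below are hypothetical, chosen only to illustrate one plausible escalation order; a real CND playbook would be far richer.

```python
# Illustrative sketch: map a detected incident (a dict of hypothetical flags)
# to one of the five response categories described above.

def respond(event):
    """Return a response action string for a network incident event."""
    if event.get("vulnerability_known"):
        # Proactive: close the hole so the enemy cannot use it.
        return "deny: patch and close the vulnerability"
    if event.get("in_progress"):
        # Active contact: interrupt or block at the point of contact.
        return "disrupt: block the session at the point of contact"
    if event.get("honeypot_available"):
        # Redirect the action into a managed monitoring environment.
        return "deflect: redirect traffic to a monitored environment"
    # Default posture: visible vigilance discourages further attempts.
    return "deter: log, monitor, and signal an active defense"

respond({"in_progress": True})
```

Note that "diminish" is not a branch here: in the post's framing it is the passive effect of the whole posture being visible, not a discrete action the dispatcher takes.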
In conclusion, defense of the networks is a challenge that reaches far beyond a simple breach. In reality, the breach is an outgrowth of conflicting worldviews in the cyber realm, affecting not only the systems but leaving scars as battle damage on democracy and free market capitalism. In the end, creators of intellectual capital have a right to just rewards, and CND is on the front line of defending creativity.


Tuesday, May 12, 2015

Gasoline: Nemesis or Necessity

Foreword: Fuel economy receives a lot of focus today with the higher cost of energy. There is a lot of information out there, and much of it is not accurate. I tried to get to the bottom of what is going on and understand the processes. If possible, I wanted to find a way to improve the fuel or blend for real fuel economy. In this blog post I explore the science, blending, and politics of gasoline.

Gasoline: Nemesis or Necessity

Every gas tank is tracked for fuel economy, as fuel economy can mean major improvements in cost. Over the years, I noted that the gas mileage was erratic. At times the miles per gallon were very high, and a short time later they would be the worst ever, only to return to a very high value sometime later. There was little discernible pattern at first glance. I used the same gas station for years and only bought premium unleaded. The vehicle was in excellent running condition, and the highest grade of oil and oil filtration was used in the engine. So for all intents and purposes there should have been some stability in the fuel economy, but there was not. Something was going on, but what?

I read innumerable articles and began to observe circular reporting as well as severe groupthink on the topic of fuel economy. Everyone was copying everyone else to the point that they were recycling their own works while citing other people who had copied something of theirs. Every article read that engine knock was an outcome of a low Research Octane Number (RON), or simply the fuel's Octane rating: step up the Octane number and rid yourself of engine knock. Otherwise, there was no other purpose to Octane. This did not sound accurate to me, as the purpose of Octane is not simply to rid engines of knocks and pings, which are actually symptoms of something else. Octane ratings are designed as a measure of fuel performance in order to optimize the thrust delivered by an engine. Engine knock is a symptom of poor fuel performance.

The Science and Engineering

The reciprocating engine usually involves four strokes: a draw stroke, compression stroke, power stroke, and expunge stroke. See Figure 1. On the draw stroke, air and fuel are drawn into the cylinder past the open valve. On the compression stroke, the piston moves upward, increasing pressure on the fuel-air mixture. The spark plug then ignites the fuel-air mixture, and the hot combustion gases expand, driving the piston downward in the power stroke. The piston then travels upward as the valves open to expunge spent fuel, gases, and smoke.
Figure 1: The 4-Stroke Engine Cycle
The concept behind the Octane rating affects the power stroke. The Octane numbers relate to the rate of burn of the fuel. At higher Octane numbers the fuel burns more slowly, applying constant pressure to the piston head for the entire power stroke, as opposed to a sudden flash at the top of the stroke that could come at the incorrect time, due to temperature and pressure in the cylinder, causing the knock sound. Ideally, the burn rate of the fuel should perfectly match the entire power stroke. This results in a smooth transfer of the fuel's undirected explosive energy into directed mechanical energy that moves the vehicle. Thus, the Octane rating is a measure of an efficient process in which the fuel is completely consumed over the power stroke. At lower Octane ratings a portion of the fuel is not spent. This residual fuel cokes (deposits a hard black soot on) the valves, springs, and rocker arms in the head. Technically, at too high an Octane rating the fuel-air mix could still be burning during the expunge stroke, which is not good either. The design challenge is to match the power stroke travel to the burn rate of the fuel in order to optimize energy transfer and minimize waste. In doing so, the engine runs very smoothly. Thus, the octane rating for a vehicle is a fixed design feature of the engine. There is a little flexibility, and with the age of the engine, higher octanes improve performance.
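The matching argument above can be expressed as a toy calculation, following this post's simplified burn-rate model rather than any real combustion simulation: compare the burn duration against the power stroke duration.

```python
# Toy model of the post's argument: fuel burning over the full power stroke
# transfers energy smoothly; faster-burning fuel leaves part of the stroke
# unpowered, and slower-burning fuel is still burning on the expunge stroke.
# Durations are in milliseconds and purely illustrative.

def burn_match(stroke_ms, burn_ms):
    """Fraction of the power stroke covered by the burn, and leftover burn time."""
    powered = min(stroke_ms, burn_ms) / stroke_ms
    residual = max(0.0, burn_ms - stroke_ms)
    return powered, residual

burn_match(10.0, 10.0)  # (1.0, 0.0)  ideal match
burn_match(10.0, 6.0)   # (0.6, 0.0)  fast flash; part of the stroke is unpowered
burn_match(10.0, 12.0)  # (1.0, 2.0)  still burning into the expunge stroke
```

The design target in this framing is the first case: powered fraction 1.0 with zero residual burn.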

Now that we understand the process and the Octane rating's measure of fuel efficiency, we need to turn our attention to fuel blending and composition. Once again I was awestruck by the degree of misinformation in the articles I read. Numerous articles circulated on the internet discussing how to improve the grade or blend of gasoline using off-the-shelf products such as Methyl Ethyl Ketone, Benzene, Toluene, or Xylene. Wild claims were made that the Octane numbers could be boosted to over 100 at lower cost. Many articles claimed that up to 50% of gasoline was Xylene. Therefore, I looked up the chemical formulas for gasoline as well as the other chemicals involved, and the US standards for the various blends.

First, let us look at the refining process. The older method is called fractional distillation. Crude oil is a mixture of several products, each having identifiable characteristics such as boiling point and weight. The crude oil is heated to the boiling point, causing vapors to form which rise in the distillation tower and then settle based on their weight, per Archimedes' principle of buoyancy. Collection plates are placed in the tower at specific locations that cause the vapors of a specific compound to condense on the plate before being drawn away for further processing. Methyl Ethyl Ketone, Benzene, Toluene, and Xylene are called feed stock, commanding higher prices than if sold in the blend of gasoline. For example, Xylene costs about $18.00 per gallon. Gasoline is roughly $3.00 per gallon and has less than 1% Xylene. Other chemical separation methods exist, such as cracking, unification, and alteration. The main point is that many compounds are removed and not included in the final fuel mixture due to their economic value as components.

In the end, the gasoline that results from the continuous manufacturing process is further treated to remove impurities, contamination, and water. Pure gasoline is a blend of hydrocarbon chains. Specifically, the chains from C7H16 through C11H24 are blended together to form gasoline. The combustion process for a hydrocarbon chain that is 2 parts C8H18 combined with 25 parts oxygen results in 16 parts CO2 and 18 parts water, Figure 2. The 25 O2 in this formulation is drawn in via the air intake on a carbureted engine.
Figure 2: The Chemical Formula for Combustion of Gasoline
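As a quick sanity check, the combustion equation in Figure 2 (2 C8H18 + 25 O2 -> 16 CO2 + 18 H2O) balances atom-for-atom, which a short sketch can verify:

```python
# Verify that 2 C8H18 + 25 O2 -> 16 CO2 + 18 H2O conserves every element.

def total(side):
    """Total atom counts for a side: a list of (coefficient, formula dict)."""
    out = {}
    for coeff, formula in side:
        for element, n in formula.items():
            out[element] = out.get(element, 0) + coeff * n
    return out

reactants = [(2, {"C": 8, "H": 18}), (25, {"O": 2})]
products = [(16, {"C": 1, "O": 2}), (18, {"H": 2, "O": 1})]

total(reactants)  # {'C': 16, 'H': 36, 'O': 50}, matching the product side
```

Both sides come to 16 carbon, 36 hydrogen, and 50 oxygen atoms, so the equation is balanced.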
As the fuel makes its way to the pump, additives are included, such as upper cylinder lubricants, dyes, and ethanol. Some of the additives improve the fuel's performance, and others, such as ethanol, degrade it. In brief, ethanol burns hotter and quicker than gasoline, which degrades the fuel's performance. At 10% ethanol the fuel efficiency is usually degraded by 4 miles per gallon. This is a good segue into the politics of fuel.

Politics

The politics of energy covers a huge spectrum of issues, mostly in the environmental realm. However, energy is also a huge revenue boon for nation-states through taxation. Essentially, the government, through the mechanisms of taxation and process obfuscation, takes just rewards (consumers' hard-earned money) and redistributes them based on the whims of the aristocracy or the seat of power. This is also known as institutional theft, or more commonly called social justice. A good example of process obfuscation is the formulation and blending of gasoline. Governments, through their environmental agencies, enforce blends of gasoline in different seasons and additives such as ethanol. Changing the formulation of the gasoline blend and ethanol mix can reduce fuel economy, resulting in more fuel to go the same distance. The taxes on gasoline are flat, based on a gallon of gasoline. If traveling a fixed distance takes more fuel than before due to lower fuel economy, then tax revenue increases in relation to the fuel economy reduction. This can actually cause a reduction of disposable income if the loss from the reformulated fuel economy is greater than the lower prices. See Figure 3.
Figure 3: Relationships Between Fuel Economy and Tax Revenues. 
If the fuel economy is 20 miles per gallon when pure and 16 miles per gallon with 10% ethanol, then the differential on a 15-gallon tank is 3.75 gallons more to go the same distance when using 10% ethanol. That is a considerable increase in cost to the consumer and a huge increase in tax revenue for the government. For the consumer, there is little that can be done from the pump to improve fuel efficiency cost effectively. Setting up a mini fractional distillation plant to remove the ethanol does not justify the cost, as at 10% there is 1 gallon of ethanol in every 10 gallons and the consumer still pays for the ethanol. Simply purchase fuel without ethanol added.
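The arithmetic above can be reproduced in a few lines: a 15-gallon tank at 20 mpg covers 300 miles, and covering those 300 miles at 16 mpg takes 18.75 gallons, or 3.75 more. The $0.40-per-gallon tax rate in the example is an assumed figure for illustration.

```python
# Extra fuel (and flat per-gallon tax) needed to cover the same distance
# when a blend lowers the fuel economy.

def extra_fuel(tank_gal, mpg_pure, mpg_blend):
    """Additional gallons needed to match the pure-fuel tank's range."""
    distance = tank_gal * mpg_pure          # miles the pure tank covers
    return distance / mpg_blend - tank_gal  # gallons beyond the original tank

def extra_tax(tank_gal, mpg_pure, mpg_blend, tax_per_gal):
    """Additional flat tax collected on the extra fuel."""
    return extra_fuel(tank_gal, mpg_pure, mpg_blend) * tax_per_gal

extra_fuel(15, 20, 16)        # 3.75 gallons more for the same 300 miles
extra_tax(15, 20, 16, 0.40)   # extra tax at an assumed $0.40/gallon rate
```

The same functions show the general pattern in Figure 3: the extra tax scales linearly with the fuel economy loss.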

Returning to the original issue of erratic fuel efficiency, the issue is mostly political, as the government seeks to optimize tax revenues by managing fuel economy. The erratic fuel economy is due to seasonal fuel adjustments and the political climate seated at the time.

Achieving Lower Fuel Costs

Of course, electing public officials who do the right thing is ideal. However, the current political climate, in which corruption can be obfuscated in complex or layered processes, is too tempting for self-serving public servants. Demanding transparency is an option, but even that is difficult to achieve, as the seated power base simply stalls and debates the issue rather than taking action. Consumers are relegated to purchasing more fuel efficient vehicles and seeking alternative fuels or transportation solutions in order to save costs. A time may be coming when single passenger vehicles for the work commute become more popular. One example is the ELIO.


Monday, March 23, 2015

EDI Overview

Commentary: This post was originally published 10Mar2011. I have made a few updates and published it again. EDI is in pervasive use in manufacturing, logistics, banking, and general day-to-day website purchases, yet many people have little understanding of it. I want to highlight what EDI is, general implementations, challenges, and benefits.

EDI Overview

Electronic Data Interchange (EDI) is the pervasive method of conducting business transactions electronically. It is not technology dependent nor driven by technology implementations. Instead, EDI is driven by business needs and is a standard means of communicating business transactions. The process centers on the notion of a document. This document contains a host of information relating to purchase orders, logistics, finance, design, and/or personally identifiable information. These documents are exchanged between business partners and/or customers conducting business transactions. Traditional methodologies used paper, which carries a lot of latency and is error prone. Replacing the paper based and call center systems with electronic systems does not change the processes at all, since the standard processes remain independent of the implementation or medium. When the processes are conducted via electronic media, the latency is compressed out of the system and errors are reduced, making electronic processing far more desirable given the critical need for speed and accuracy in external processes.

Implementing EDI is a strategy-to-task effort that must be managed well due to some of the complexities of the implementation. A seven step phased process highlights an EDI implementation.

Step 1 - Conduct a strategic review of the business.
Step 2 - Study the internal and external business processes.
Step 3 - Design an EDI solution that supports the strategic structure and serves the business needs.
Step 4 - Establish a pilot project with selected business partners.
Step 5 - Flex and test the system.
Step 6 - Implement the designed solution across the company.
Step 7 - Deploy the EDI system to all business partners.

The technology used in EDI varies based on the business strategies, Figure 1. In general, EDI services can operate through four general methods: 1) the Value Added Network (VAN), 2) the Virtual Private Network (VPN), 3) Point-to-Point (P2P), or 4) Web EDI. The first three are for small to medium sized EDI installations where a direct association is known, established, and more secure. Web EDI is conducted through a web browser over the World Wide Web and is the simplest form of EDI for very broad based and low value purchases. The technology in use varies slightly and comes down to three forms of secure communications at a central gateway service: File Transfer Protocol Secure (FTPS), Hypertext Transfer Protocol Secure (HTTPS), or the AS2 protocol, which is notably used by Walmart. Web EDI requires no specialized software to be installed and works well across international boundaries. Web EDI is a hub-and-spoke approach and routes messages to multiple EDI systems through its gateway service. In addition, industry security organizations provide standards and oversight for data in motion and at rest. The PCI Security Standards Council, with its PCI Data Security Standard, is one such organization.
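The "document" at the heart of EDI is just structured text. The sketch below is loosely modeled on X12 conventions (element separator '*', segment terminator '~'); the segment values are illustrative and not a compliant 850 purchase order.

```python
# Minimal sketch of an EDI-style document: segments of '*'-separated
# elements, each terminated by '~'. Values here are illustrative only.

def segment(seg_id, *elements):
    """Build one segment string from its ID and element values."""
    return seg_id + "".join("*" + str(e) for e in elements) + "~"

po = (
    segment("ST", "850", "0001")              # transaction set header
    + segment("BEG", "00", "NE", "PO-12345")  # purchase order header
    + segment("PO1", "1", "10", "EA", "4.25") # line item: qty, unit, price
    + segment("SE", "4", "0001")              # trailer with segment count
)
po.count("~")  # 4 segments in the document
```

Because the layout is a plain standard rather than a technology, the same document can travel over a VAN, a VPN, P2P, or a Web EDI gateway unchanged, which is exactly the point made above.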

Figure 1: EDI Systems Architecture
Overall, EDI can reduce latency in business transactions and adhere tightly to organizational strategies without significant adaptation of the organization. Organizations may try to outsource the people, processes, and even the technology as part of their strategy and objectives, but the processes remain consistent. In conclusion, EDI is a standard that is applied in various forms, offering numerous advantages to an organization and its business transactions.

Saturday, February 28, 2015

Supply Chain: Managing Butterflies, Not Bullwhip

A butterfly fluttering in Japan causes atmospheric perturbations that can result in a hurricane in Florida. Hence, the Butterfly Effect states that small changes in a system may compound into large differences at a later time and a different station in the same system. The concept was originally introduced circa 1962 by the American meteorologist Edward Norton Lorenz, the Father of Chaos Theory. Science rediscovered Lorenz's work in the mid-1970s, and engineers began to employ it in the early 1980s. Business did not begin to consider the merits of Chaos Theory until the late 1990s, when adaptability became of interest to organizations facing rapidly emergent conditions. As supply chain complexity increased, chaos principles began to apply to the chain.

The Bullwhip effect results from a lack of visibility or transparency in the supply chain when estimating demand. Actors successively farther from the end demand compound errant demand estimates upon errant demand estimates. This creates excess supply that costs money, increases waste, and reduces profits, and the problem grows at each step away from the end demand. The Bullwhip effect is a single problem in the supply chain, albeit a significant one, hence the high degree of focus. Butterflies, on the other hand, seem minor and can emerge anywhere in the chain, creating upstream and downstream problems from wherever the disturbance appears.
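The amplification mechanism can be sketched in a few lines. In the toy model below, each tier sees only the orders from the tier below and over-orders in proportion to the change it observes; the four-tier structure and the 0.5 overreaction factor are assumptions for illustration, not a calibrated model. A modest one-time jump in retail demand grows into larger and larger order swings upstream.

```python
# Toy bullwhip model: each tier chases the trend in the orders it receives,
# so a small step in end demand amplifies at every tier upstream.

def tier_orders(incoming, overreact=0.5):
    """Orders a tier places, given the order stream it receives from below."""
    orders, prev = [], incoming[0]
    for d in incoming:
        orders.append(d + overreact * (d - prev))  # demand plus a share of the change
        prev = d
    return orders

retail_demand = [100] * 5 + [120] * 10   # a single step up of 20 units
streams = [retail_demand]
for _ in range(4):                       # retailer -> wholesaler -> distributor -> factory
    streams.append(tier_orders(streams[-1]))

peaks = [max(s) for s in streams]        # peak order size grows at every tier
```

Running this, the peak order at each successive tier is strictly larger than the one below it, even though end demand moved only once; that monotone growth in order variance is the bullwhip.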

Supply Chain Project Failures, Successes, and Case Studies

Motorola was experiencing high scrap rates on an ultra-high vacuum semiconductor process overseas, causing supply chain disruptions that demanded an improvement. A project to find a solution revealed, in lab tests, that an alternative process resolved the problem. The new solution was approved for a switch but failed in real-world applications, causing offshore plant shutdowns and failures to meet normal demand. The lesson learned for supply chain managers when dealing with projects that affect key processes is to start small and phase in. Motorola issued a policy that, regardless of lab results, critical manufacturing process transitions begin with less than 10% of the production line followed by review and analysis. Once approved, a scheduled increase begins over a minimum of four months (Hoffman, 2003). Project managers are accustomed to phases, milestones, and reviews. Supply chains are no different.

Shea Homes wanted to build more homes in less time without spending more money. The challenge was a resource constraint: a self-imposed requirement to do more with less and improve efficiencies. There were more than 54 subcontractors in the supply chain operating on multiple sites and multiple jobs. Often tasks were done out of sequence as subcontractors worked at a job site until there was a problem or roadblock, then moved to another job site. The hopscotch approach wasted time and money and was riddled with delays. The project manager established a critical chain that eliminated the multi-tasking, replacing it with a continuous workflow and synchronous schedule. The critical chain was a cultural shift that required training. After implementation, Shea Homes observed a 42% reduction in time to build while increasing their return on investment. The change also produced guaranteed move-in dates, a competitive advantage, and a marketing tool. The critical chain strategy focused all efforts on the critical path to completion (Hunsberger, 2012). Project managers are very familiar with the critical path. Thus, supply chains should be organized around a critical path such that all decision-making focuses on the integrity of that path.
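The critical path itself is just the longest duration chain of dependent tasks, and computing it is mechanical. The sketch below does so for a small, hypothetical set of homebuilding tasks; the task names, durations, and dependencies are invented for illustration, not taken from the Shea Homes case.

```python
# Critical path of a task DAG: the longest chain of dependent durations,
# which sets the earliest possible project finish. Tasks and durations here
# are hypothetical examples.

def critical_path(tasks):
    """tasks: {name: (duration, [predecessors])} -> (project_duration, path)."""
    finish, best_pred = {}, {}

    def earliest_finish(name):
        if name not in finish:
            dur, preds = tasks[name]
            latest, via = 0, None
            for p in preds:
                t = earliest_finish(p)        # memoized recursion over the DAG
                if t > latest:
                    latest, via = t, p
            finish[name] = latest + dur
            best_pred[name] = via
        return finish[name]

    end = max(tasks, key=earliest_finish)     # the task that finishes last
    path, node = [], end
    while node is not None:                   # walk the longest chain backwards
        path.append(node)
        node = best_pred[node]
    return finish[end], path[::-1]

tasks = {
    "foundation": (5, []),
    "framing":    (10, ["foundation"]),
    "plumbing":   (4, ["framing"]),
    "electrical": (3, ["framing"]),
    "drywall":    (6, ["plumbing", "electrical"]),
}
duration, path = critical_path(tasks)
```

Here electrical runs in parallel with plumbing but has slack, so it never appears on the critical path; only delays to foundation, framing, plumbing, or drywall push out the move-in date, which is exactly why decision-making can be focused on that chain.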

Redtag, Inc. faced high supply chain costs with an objective to reduce them by a minimum of 10%. The supply chain stretched from Asia to supermarkets across the United States. Deliveries were driven by events and holidays, often with short lead times. To avoid delays, premium shipping and freight costs absorbed the short lead times. The solution was to form a strategic alliance with an inland freight distributor. Imports were taken as far as possible over water before arranging ground transportation. The lower inland freight miles traveled resulted in a more than 30% reduction in costs (Markarian, 2003). Strategic alliances may improve bottom-line costs, but they also place new constraints on the company. Project managers must understand the supply chain strategy; in this case, while the goods are on the water they move slowly and cannot easily change modes. Other projects may be impacted by these kinds of strategic constraints that cannot be easily changed.

Legal troubles are common in supply chain disruptions, and a supply chain project manager has a duty to mitigate them. Proactive reviews of contracts and procedures are essential (Markarian, 2003).

Establishing a Project Ready Supply Chain Culture

Supply chains are dynamic enterprises in constant flux. A single organization may have a considerable number of supply chains and projects that affect one or more of the chains. Projects and change can arise anywhere in the supply chain, affecting quality, cost, schedules, and scope within the entire chain. There has to be a common operating environment for projects to occur within. We saw policies being written at Motorola to mitigate issues and a critical path being established to create efficiencies among the subcontractors at Shea Homes. Redtag sought a strategic alliance. These strategies, policies, and processes need to be communicated. Any time someone in the chain has changes or projects, these have to be vetted against the strategies, policies, and processes in use up and down the supply chain. Finding who in the chain will manage this can be difficult. Some people think it is the project manager's duty, but that can be a problem because project managers do not always have operations management backgrounds.

Often the dominant supply chain actor manages the chain by fiat, but this is not always good for the supply chain. The dominant actor may be remiss or selfish in managing the chain. On occasion, the supply chain is managed by an external actor such as an appointed fiduciary, a working group, or a council. This happens when there are legal issues surrounding procurement and/or unbalanced competitive actors influencing outcomes. In many supply chains, there is an absence of overarching management. Project managers operating at any station in the supply chain may find themselves in a management vacuum with multiple projects that could impact the entire supply chain. Holding the dominant role makes establishing management of supply chain projects easier, but that is not always the case, nor always good. In short, a supply chain project manager must wear other hats, including an OM hat. If unable to rally support for overarching management, the best a project manager can do is manage from their own station using the project plan.

Project management hovers around the project plan, Figure 1, which has numerous subset plans such as change management, communications, and quality plans. Sometimes these subset plans are just a paragraph in the larger project plan, or may not appear at all, depending on the project. Nonetheless, the project plan is the primary artifact in any project.

Figure 1: The Project Management Documents

Plans are limited because they are focused and narrow in scope. In the case of a project plan, the subset plans are unique to the project and can differ for each project even though common elements exist in a supply chain. As projects come and go, sometimes with a high degree of frequency, tracking policies can become confusing. Additionally, supply chains are coalitions of many players. Some are dominant and wield control over the supply chain. Other players are minor contributors, and a single supply chain can become costly for them while offering little influence. Standards need to be set with stable performance expectations for all players. Plans are too short-lived and can vary with different leaders. The solution to this challenge is the establishment of programmed management, or management programs.

A management program in the Operations Management (OM) view is different from program management in the Project Management Institute's (PMI's) view. PMI considers program management to be a project-of-projects, or simply the mega-project architecture. Operations Management considers a management program to be a loosely written document for specific management of an activity, detailing the principles, processes, responsibilities, authority, and decision-making guidelines. The concept of loosely written refers to the degree of cohesiveness in the document. Strict cohesiveness binds activity from operating outside the policies and rules, using terms like shall and must and setting discrete limits. A loosely cohesive program is flexible and allows room to operate, using terms like may and should and setting ranges rather than limits. In practice, the degree of cohesiveness is a combination of strict and loose. Management programs are useful in highly complex operations involving rapid change, regular turnover of personnel, quality and safety concerns, and financial impacts, among other issues. These program documents are the baseline management policies, provide stability, and are available to everyone.

Mapping typical project management plans to the supply chain in order to organize management programs, Figure 2, should use the APICS supply chain model. These project management plan elements are expanded and become written management programs for the supply chain.

Figure 2:  Mapping Project Management to Manufacturing Supply Chains


When developing a management program, a supply chain working group should be formed in order to obtain buy-in from everyone upstream and downstream. The program should detail policies and responsibilities not only for the dominant actor in the chain but even more so for the others whose projects may affect the entire supply chain. The program should detail processes for vetting impacts and communicating, as well as timelines, before acting. There should be an annual review of each program and a process for interim changes. Once agreed upon, the program should be published for all actors in the supply chain.

When a project develops, the elemental pieces of the project plan are already established and known. The project should incorporate these standing policies and practices, with authorities and responsibilities already understood. The program management concept was established in the post Project Complexity Perplexes Procurements in order to gain control over complex procurement processes during large-scale projects.

In conclusion, establishing the policies, procedures, authorities, and responsibilities as a stable program up and down the supply chain helps avoid the injection of unanticipated changes during projects. For example, a material quality change that reduces cost for one actor with the intention of improving supply chain performance may reduce quality for another downstream, causing increases in warranty issues. Having a supply chain quality program in place helps project managers identify and vet quality change impacts to the supply chain during process improvement projects.

References:

APICS. (2011). APICS Certified Supply Chain Professional Learning System (2011 ed., Version 2.2).

Hoffman, W. (2003). Missing links. PM Network. Retrieved from http://www.pmi.org on June 5, 2014.

Hunsberger, K. (2012). Chain reactions. PM Network. Retrieved from http://www.pmi.org on June 5, 2014.

Markarian, M. (2003). Tightening the supply chain. PM Network. Retrieved from http://www.pmi.org on June 5, 2014.

Originally posted on January 15, 2014.