
Productivity

This blog reflects my learning process, experiments, and personal experience helping software teams. I have worked with a bunch of exceptional professionals who have endured many of my mistakes and responded by delivering working software, putting more effort into software quality, and investing even more energy in trying new things. Several years ago I committed myself to understanding what software development actually is and how to help those professionals do their best. I believe this blog moves in the right direction: tirelessly, step by step, I pay back to them and to the agile community what I owe them.

If you are wondering how my personal purpose and my unpaid debt are related to productivity, keep reading. I am going to start by describing a team as a

“network of interconnected work”

Team members, who are the nodes of the network, transform, exchange and convert raw information into value for customers. An important characteristic of these networks is that work has dependencies between nodes. An event or sequence of events must take place before another can start; however, the sequence is not predictable. This means that my productivity depends on many other nodes of the network.

In my opinion, many organisations have ignored this remarkable characteristic and assess employees’ productivity individually, without considering the environment. These organisations tend to avoid measuring the productivity of the network as a whole.

            “You cannot improve what you cannot see.”

The situation gets even more unfair when employees have neither the control nor the authority to change how they interact with the network. W. Edwards Deming listed 14 principles in his brilliant book Out of the Crisis; one of them reads:

Remove barriers that rob people in management and in engineering of their right to pride of workmanship. This means, inter alia, abolishment of the annual or merit rating and of management by objective (see Ch. 3).

Another side effect of chasing productivity is busyness. When all nodes of the network are busy, the whole system loses the responsiveness and effectiveness needed to react to the continuous changes happening around it. Busy organisations want to reach high levels of capacity utilisation and avoid idle nodes. An exaggeration of this would be to create a buffer of work just before every node in order to avoid starvation. My untested hypothesis is that the expected behaviour in this case is a network with poorer global performance and longer time to market. Work has to wait in a queue until a node has free capacity to work on it and dispatch it to the next node of the network, which is also terribly busy, so the work must wait in a queue again.

Not long ago a friend of mine told me that his manager wanted their teams to achieve maximum capacity utilisation and velocity. Then POs began to take features from here and there to prepare an iteration backlog, considering the number of people, their skills, expertise and calendar days… It was an exaggeration. Wasn’t it?

“Watch the baton, not the runner”

Productivity and Variability

Software development systems are high variability systems affected by external and internal sources of variability. External sources of variability are mostly rules, policies and events at the organisation level:

  1. Technology: using immature technology exposes us to bugs or breaking changes. Lean companies, for example, try to use only reliable, proven technology.
  2. Team organisation: changing team members continuously implies that teams must reorganise, which negatively affects their performance. People are neither replaceable nor interchangeable. Furthermore, space configuration and distance between nodes are barriers to communication; the likelihood of communicating falls dramatically when the distance is greater than 30 metres.
  3. Knowledge or business complexity: lack of domain knowledge to solve the customer's problems, or constant changes in their preferences, are also common sources of variability.
  4. Customer: lack of involvement or weak support from the customer. When feedback loops are too long, or feedback comes from a proxy persona instead of the real customer, we can end up building the wrong system.
  5. Competitors: competitors' decisions affect our plans when they bring new products into the market. We have to react, which injects variability into our project plans.
  6. Waiting for availability: the time that work sits idle waiting for other parts or nodes.
  7. Dependencies or specialisation: a significant self-inflicted wound promoted by organisations that encourage high levels of specialisation. This "culture" lengthens our development time and our time to market, leaving us more exposed to changes in market preferences or competitors' moves.

In contrast, internal sources of variability are mostly centred on individuals. Intrinsic factors such as motivation, health or safety depend heavily on how we see the world and affect our individual performance. Variability has an important effect on productivity, so we should put strong, direct effort into reducing the bad economic consequences of these variability factors in order to increase the productivity of the whole system.

Now I am going to take a different approach and look at productivity through the eyes of the Theory of Constraints (TOC). In TOC, the goal of an organisation is to make money. Eli Goldratt, who created TOC, considered throughput a powerful metric to measure an organisation's performance: throughput is the rate at which the organisation converts its inventory of products into sales.

From the TOC perspective, the performance of the development process is limited by bottlenecks, which impede the organisation from achieving its goal. The performance of the whole system is determined by the capacity of those bottlenecks: the system performs at the speed of the slowest link in the chain. If you increase capacity utilisation on non-bottleneck nodes of your network, you are not improving the system at all.

“For any resource that is not a bottleneck, the level of activity from which the system is able to profit is not determined by its individual performance but by some other constraint within the system.” – Eli Goldratt

TOC encourages you to increase the bottleneck's capacity by improving its process, removing unnecessary work or offloading work to other areas of the system. Only if those actions don't achieve the expected results should we add more people or other resources. In any case, we aim to improve the whole system in order to increase throughput.

This is the sequence of steps required to apply TOC:

  1. Identify the constraint
  2. Exploit the constraint
  3. Subordinate everything else to the constraint
  4. Elevate the constraint
  5. Avoid inertia; go back to step 1

In theory, TOC is a tool to strengthen the view of the system as a whole by implementing a global metric (throughput) and improving flow, while avoiding local optimisations on resources that are not bottlenecks.
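As a minimal illustration of why local optimisation doesn't help, here is a small sketch (the stage names and capacities are made up) showing that system throughput is capped by the slowest stage, so adding capacity anywhere else changes nothing:

```python
# Hypothetical per-stage capacities (items per week) for a simple sequential workflow.
capacities = {"analysis": 12, "development": 5, "testing": 8, "deployment": 20}

bottleneck = min(capacities, key=capacities.get)
print(f"Bottleneck: {bottleneck}, system throughput: {capacities[bottleneck]} items/week")

# Raising capacity at a non-bottleneck stage leaves throughput unchanged.
capacities["testing"] = 16
print(f"Throughput after improving testing: {min(capacities.values())} items/week")
```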

Conclusions

I hope these arguments help you discuss productivity in your organisation.

  • Chasing individual productivity has very harmful side effects for the organisation: longer time to market and waste created by busyness.
  • A culture of busyness reduces the responsiveness and effectiveness required to adapt continuously to change. The network and the world around us are not static but dynamic.
  • Identifying bottlenecks is the first step of TOC to improve throughput.
  • Many opportunities to improve the whole system are management's responsibility.
  • Removing unnecessary work from the bottleneck, or offloading it to other parts of the system, is a good way to try to improve throughput.
  • Adding more capacity should be our last choice.
  • When you prioritise an iteration backlog based on ROI and some team members are idle, take advantage of these visible signals to discuss specialisation, T-shaped skills, availability, team organisation and organisational culture.

This is what I have learnt so far, and I hope to write in the future contradicting some of the arguments written here. That would be a sign that I have learnt something new.


Cost of Delay – Decision Making Framework

“Cost of delay is the language to translate value and impact to our customers into money. “

Cost of delay is the cornerstone of an economic decision-making framework that helps businesses assess the impact of time on their products and prioritise their scarce resources. Cost of delay puts a price tag on our features and assesses how their value decays over time.

Using cost of delay, our discussions shift from the typical labour-cost-oriented mindset, in which the important question is what the feature costs, to a radically different approach in which we assess the value of the piece of work in terms of impact on the business and customers. We model an economic scenario and treat it as real when prioritising features or products in our portfolio. Notice that we are replacing gut feeling with a more scientific model. This model is better suited to the complex adaptive system we have to deal with: we arrange experiments and hypotheses using "probe > sense > respond" to learn how the system responds to stimuli. Cost of delay is a powerful vehicle to harmonise a single vision of the future and to align a common business strategy.

As we have just mentioned, cost of delay is strongly dependent on time, so we should describe how time affects product development. @JoshuaJames reflects on three different life-cycle profiles that describe product development markets.

Short life cycle and sales peak is affected by delay.


This urgency pattern has a very short life cycle and sales are profoundly affected by delay. Consider, for example, the challenge of releasing a mobile game. As soon as the product is released, sales ramp up very fast until they reach a peak; then sales progressively begin to decay. The life cycle is very short and the peak is affected by delay. If we release our product too late, the peak is reduced because the market is already almost covered by other titles. At a certain point, when sales begin to decay, we must invest in discovering which features can help stabilise or increase the revenue. An important characteristic of this profile is that exciting features (Kano model) are quickly copied by competitors and become basic needs in future products.

Long life cycle and sales peak is affected by delay.


This life-cycle profile also shows quick growth, but sales are sustained over time. In this case, the first company to introduce the product into the market wins a competitive advantage over latecomers. The car market or the competition between airplane manufacturers are good examples of this kind of profile.

Long life cycle and sales are unaffected by delay.


This profile is the easiest one to compute because profits are sustained over a long period of time and sales are not affected by when the product is released.

Once we have identified the urgency pattern, it is time to estimate value and duration, the two parameters required to compute cost of delay.

The value of the product features was previously introduced here and has to be estimated considering 4 different perspectives:

  • Increase revenue: revenue provided by new delighter features (Kano model), which attract either new users or extra spending from current users.
  • Protect revenue: small improvements that current users will not pay extra money for, but which help retain the revenue we already have.
  • Reduce cost: improvements in our process that let us deliver value faster or more cheaply.
  • Avoid cost: costs that we are not incurring right now but that will occur in the future unless some action is taken.

Notice that these perspectives can be complementary, and the total value is obtained by summing these four areas.

Let’s take a hypothetical example. A small company that released a successful instant communication tool is researching the profitability of adding new features.

           Feature: As a User, I want to use voice commands to request the application to dictate messages to the receiver.

Our network of daily active users is 5 million. The current licence price is $10. The marketing strategy is to offer a $5 upgrade to current users, and we expect 10% of daily users to purchase it. We expect our immediate competitors to release their own service in 3 months, so for every month of delay we expect to lose 8% of the upgrade revenue from current active users and 5% of the value of our network of users.

Increased Revenue:

We expect a 2% rise in new revenue from users who will pay $5 for the new service.

= 2% × 5M daily active users × $5 = $500K

Avoid Cost:

Releasing this feature late would reduce revenue from current users by 8% and devalue the network by 5% every month. The network is worth $50M today.

= revenue lost from current users + depreciation of the network of users

= 8% × 5M daily active users × $5 = $2M

= 5% × $50M = $2.5M

COST OF DELAY = $500K + $2M + $2.5M = $5M

So the cost of delay is the amount of money we will not make if the feature is not released on time; in this example, about $5M per month of delay.
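To make the arithmetic explicit, here is the same calculation as a small Python sketch; the figures are the hypothetical ones from the example above, not a general formula:

```python
# Hypothetical figures from the example; every term is money forgone per month of delay.
daily_active_users = 5_000_000
upgrade_price = 5.0            # $ price of the voice-dictation upgrade
network_value = 50_000_000     # $ value of the user network today

missed_upgrade_sales = 0.02 * daily_active_users * upgrade_price   # new revenue we fail to capture
lost_current_revenue = 0.08 * daily_active_users * upgrade_price   # revenue lost to competitors
network_depreciation = 0.05 * network_value                        # devaluation of the network

cost_of_delay_per_month = missed_upgrade_sales + lost_current_revenue + network_depreciation
print(f"Cost of delay: ${cost_of_delay_per_month:,.0f} per month")  # -> $5,000,000 per month
```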

Duration

The amount of time required to release the feature or product to the customer is the second factor required to compute CD3 (cost of delay divided by duration). Notice that I prefer a statistical analysis of the system's past performance (historical data) to estimating the duration.

CD3

So far we have assessed the list of features in terms of value and duration; however, that is not enough to prioritise and maximise the economics. Product development contains features with different value, urgency and duration, so standard approaches like FIFO or LIFO are far from optimal. Instead we use CD3: cost of delay divided by duration.

As you can see, the cost of developing a product or a feature is not considered when prioritising. Why? First of all, time is the most critical factor because it is irreplaceable: it cannot be replaced or reversed. Funds, by contrast, can be obtained from external sources such as financing. Also, cost alone is not a good variable for decisions because of the asymmetric payoff function of product development: cost is not proportional to the value obtained. Some research points out that 30% to 40% of our features can provide up to 90% of the value, yet we usually consider only cost when making economic decisions. When we don't deal with variability properly, we end up trying to maximise economics by eliminating all choices with uncertain outcomes.

Finally, I have deliberately left out the option of adding more capacity because of how difficult it is to scale in certain situations, especially in later stages of development. Most of the time, adding more capacity leads to communication overhead and more delays.

            “If you bring new people to a product that is late, it’s likely to delay the project even more because of the increased complexity and the need for the team to adapt to its new composition”

Inspect and adapt

As our customer preferences change and competitors adapt their strategy, cost of delay is constantly affected. Our value model needs to be revisited and refined often. Hence, Cost of delay is not a static figure and urgency pattern is a way to create awareness and shared understanding about the economic impact of delays.

How to prioritize

To answer this question, suppose we have a list of features with different value, duration and CD3.

Feature     Value   Duration   CD3
Feature A   $10K    6m         $1.67K/m
Feature B   $8K     4m         $2.00K/m
Feature C   $27K    14m        $1.93K/m

The optimal scheduling decision TODAY is to deliver the feature with highest CD3. So, first feature to release would be B, then C and then A.
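A minimal sketch of this prioritisation, using the hypothetical figures from the table above (value in $K, duration in months):

```python
# Hypothetical features: (name, value in $K, duration in months).
features = [("A", 10, 6), ("B", 8, 4), ("C", 27, 14)]

def cd3(value, duration):
    """Cost of Delay Divided by Duration: $K at stake per month, per month of work."""
    return value / duration

# Schedule the highest CD3 first.
schedule = sorted(features, key=lambda f: cd3(f[1], f[2]), reverse=True)
for name, value, duration in schedule:
    print(f"Feature {name}: CD3 = {cd3(value, duration):.2f} $K/month")
# -> B (2.00), C (1.93), A (1.67)
```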

The next two examples are atypical in software development, but they are worth mentioning.

When all features have the same value but different durations, we might use shortest job first (SJF).

Feature     Value   Duration   CD3
Feature A   $10K    6m         $1.67K/m
Feature B   $10K    4m         $2.50K/m
Feature C   $10K    14m        $0.71K/m

So, optimal selection would be: B, A, C.

When all features have the same duration but different value, we might sequence the work with highest cost of delay first (HCDF).

Feature     Value   Duration   CD3
Feature A   $30K    5m         $6K/m
Feature B   $20K    5m         $4K/m
Feature C   $10K    5m         $2K/m

Optimal scheduling would be: A, B, C.

Finally, in flow-based product development we measure throughput as the rate at which we convert inventory into sales and value delivered to the customer. Thus cost of delay can be considered a health signal of the system: all partially completed features (inventory) are keeping us from the goal of making money.

“How much money and time do we spend on features that have not been converted into throughput?”

CONCLUSIONS

  • Cost of delay puts a price tag on our features in order to help you maximise economics and prioritise.
  • Cost of delay shifts our mindset from cost and efficiency to speed and value.
  • Cost of delay assesses our value models against urgency, value and risk.
  • Not only cost but also probability is needed to make optimal economic decisions; cost is not always proportional to the value obtained. The asymmetric payoff function of product development reminds us that we need variability to create value, and short feedback loops to cut off wrong paths as soon as possible.
  • We consider 4 different perspectives to assess value:
    • Increase Revenue
    • Protect Revenue
    • Reduce Cost
    • Avoid Cost
  • Cost of Delay is an alternative way to assess the economic impact of the inventory of design in progress.
  • CD3 is a prioritisation algorithm for work to do with different urgency, value and duration.
  • CD3 is obtained by dividing the cost of delay (the value at stake per unit of time) by duration.

This blog post is in some way an extract of the ideas developed by @JoshuaJames and Donald Reinertsen.

For further information, see the following sources:

http://costofdelay.com

http://playbookhq.co/blog/profit-based-new-product-development-decisions-part-3-estimate-cost-delay/

http://leanagilechange.com/leanagilewiki/index.php?title=Cost_of_Delay

http://toolsforagile.com/blog/archives/647

http://agileconsulting.blogspot.com.au/2011/03/using-cost-of-delay-functions-to.html


Decision making framework

Is it possible to improve how we make decisions? That’s one of the questions I ask myself almost every day. I’ve attended thousands of meetings without a clear purpose and I’ve witnessed lots of important decisions being made without a clear understanding of the current situation or a clear vision of the problem to solve. In late 2014 I was reading a book called “Commitment” [Olav Maassen, Chris Matts 2013] when I first came across Real Options theory. It is a decision-making framework that draws on financial options theory and cognitive behavioural theory and allows people to make optimal decisions within their current context. Oh, uh! Easy, isn’t it? An option is defined as the right, but not the obligation, to take an action in the future.

From financial options theory we take the idea that our options have value and cost; we will see how to quantify the value of our options. Likewise, NLP and cognitive behavioural theory shed light on our aversion to uncertainty and on why people don’t follow optimal decision processes and thus make irrational decisions.

Real options theory is a strategic way of thinking [Amram 1999], especially valuable when there is uncertainty due to a lack of information [Ozkaya, Kazman, Klein 2007], that advocates deferring your commitments as long as possible. You gain flexibility to change later if you don’t commit to any option. You wait as long as you can before transforming an option (the right to do something) into an obligation. In the meantime, you actively look for more information about your options, or create alternative options, to reach an optimal decision.


In agile, we recognize high uncertainty and we provide options almost everywhere. When the customer evaluates the latest release of the product, we are giving them options to steer the product based on new learning: we can continue improving something already delivered (refine), develop new stuff (explore), or even cancel the product (abandon) if it isn’t what the customer is expecting. In the same way, we can see retrospectives as ceremonies in which team members assess new options for working better. Most agile software development practices also provide options: TDD or BDD offer fast feedback loops to detect coding errors while writing or changing the code.

“ Without test driven development, there would be no refactoring.”

Mocks or stubs also create options by isolating external dependencies: we defer the decision on how to implement certain parts of the system. The YAGNI principle ("you ain't gonna need it") has its roots in options theory too. Lean conveys a similar idea through the principle of the last responsible moment.

Now it’s time to dig into the special characteristics of our options.


Options have value.

We should assess the value of our options to identify which option is most valuable. In fact, having numbers is better than having no numbers at all. So we are going to consider our options from four different perspectives. This framework considers that each option can contribute to one or more perspectives, called buckets.

Increased Revenue

This is the revenue associated with either increasing sales to existing customers or gaining new customers. The source of this revenue is either adding value to existing products with "delighting" features [Kano model] that current customers are willing to pay for, or creating new products or services. Some questions that will help you identify this bucket:

  • What’s the benefit of this option?
  • Will this option enable future opportunities? How?

Protected Revenue

This is the revenue received from our existing customers. It aims to maintain the existing market share through continuous improvements. Those changes aren't valuable enough for existing customers to pay for; sustaining this revenue requires a more defensive strategy.

Reduce Cost

We contribute to this bucket with ideas for being more efficient. These aim at improving our processes and delivering value faster. We assess the value of the option against the cost of the alternatives. An example is automating a process so that we can save the cost of two full-time employees (FTEs).

Avoid Cost

Taking action to prevent costs that we are not incurring right now but that might occur in the future unless something is done. We should focus our attention on technical risks, but we should also consider market, strategic and operational risks [Reinertsen 1991]. We might try to answer these questions:

  • Does it reduce a future risk?
  • How is this option going to affect me?
  • Is there any potential negative impact if I don’t choose this option? How can I avoid it?

The total value of your option is obtained by summing up the four areas.


Example:

A games company released a new title called “Killing Br0kers in Destiny” 8 months ago and, since then, the game has been very profitable. It has more than 1 million daily active users; however, the trend over the last few months shows a slight but continued decline of one percent every week, and the company is now considering what to do about it. On average, 25% of daily active users spend $5 on new characters, special weapons, ammunition and equipment every month.


Notice that our mental models and assumptions are described in our assessment. Cause-and-effect diagrams are valuable tools to shape our assumptions and to describe how we expect the system to behave if certain conditions hold. There are two important side effects of using these tools: visualizing our assumptions lets us challenge them and brings up healthy discussions, and money becomes the base unit used to measure all our options.

“In the absence of information about value, of course, the system optimizes for other things. Why should it surprise anyone?” @joshuajames

This model is far from perfect, but it is an order of magnitude better than gut feeling.

“All models are wrong but some are useful” George E. P. Box

           “One answer is better than not having one at all”

Options expire.

Options are very time-sensitive, and we must consider when our options expire and are no longer available. We need to find the last responsible moment, either by estimating how long it would take us to carry out the option or by using historical data. I like calling this point the PNR: the point of no return, the point beyond which one must continue on one’s current route because turning back is physically impossible.

So the point of no return is obtained by subtracting the duration of an option from its due date.
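A tiny sketch of that calculation, with a hypothetical due date and a made-up option duration:

```python
from datetime import date, timedelta

# Hypothetical figures: the option must be delivered by this date,
# and exercising it is expected to take this long.
due_date = date(2015, 9, 1)
option_duration = timedelta(days=45)

point_of_no_return = due_date - option_duration  # last responsible moment to commit
print(f"Commit no later than {point_of_no_return}")
```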


Never commit early unless you know why; and once you know, act as quickly as possible.

Conclusions

  1. Don’t treat everything as an option; we should only use real options theory for important decisions. We already suffer from an overabundance of choice, and it might overwhelm you and produce analysis paralysis.
  2. Agile methodologies and many Agile software development tools and practices provide options to deliver high quality products faster and better.
  3. Options have value: Use four different perspectives to estimate the value of your options.
  4. Options expire: use historical data to determine the last responsible moment.
  5. Those who make decisions ought to learn to be lazy. Wait, wait, and wait until you have more information before making a decision. Then, do it as quickly as possible.
  6. A scientific approach like A3 thinking should dramatically help your decision-making and will help you visualise your options.
  7. Use cause-and-effect diagrams to forecast how you expect the system to behave before releasing your experiments.
  8.  Variability will jeopardize your plans, no matter how well you value and forecast them.
  9. Cost optimization is not the same as revenue optimization. Sometimes it is worth investing in more than one option even though this might cost slightly more.

References

    1. Black Swan Farming Case study from Maersk Line
    2. Commitment

Cost of change

You are likely familiar with the cost of making changes to a product, but let me share with you a deeper view of the elements affected by the cost of change. The most common are code and the tests that verify the expected behaviour, but there are also non-code artifacts: user manuals, analysis and design documents, software requirement specifications, test documents and so on. The cost of change can be computed from the following components:

Cost of Change = coordination costs + transaction costs + Failure load cost

Coordination costs: the cost of getting people together to coordinate the change. For example, the cost of coordinating people to release a new patch for the current version.

Transaction costs: the costs associated with the activity performed, for example the cost of regression testing before releasing the product. Finally, failure load is the cost of addressing changes demanded by the customer, for example the cost of fixing a critical bug. David Anderson writes in his book about Kanban: “Demand generated by the customer that might have been avoided through higher quality delivered earlier. Those activities don’t create any value.”
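As a rough illustration of how these components might be tallied for a single change (all figures are hypothetical):

```python
# Hypothetical cost breakdown, in engineer-hours, for one change to a released product.
coordination_cost = 6    # meetings to plan and schedule the patch release
transaction_cost = 16    # regression testing, packaging and deployment
failure_load_cost = 24   # fixing the critical bug the customer reported

cost_of_change = coordination_cost + transaction_cost + failure_load_cost
print(f"Cost of change: {cost_of_change} engineer-hours")
```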

The first person to write about the cost of change was Barry Boehm in the 1980s. In his view, the cost of making a change in software development increases significantly over time. Waterfall, the predominant methodology for developing software at the time, could not handle this cost properly because of its sequential process; as a result, the cost of change grew up to 4 times for small projects and rocketed up to 100 times for big projects. Extreme changes called “architecture breakers” were often discovered only once the system was in production, through scalability and performance issues. Waterfall is still broadly used, and its weak points are: no customer involvement after the requirements elicitation phase, no feedback to previous stages, and delayed validation of assumptions (the testing phase comes too late in the product life cycle).

Of course, software engineering has evolved and there are new frameworks, tools and programming languages. However, we have been unable to eliminate its devastating effects so far. Back in the 1980s, Boehm tried to minimize the cost of development by delivering software in small increments. Agile and other lightweight methodologies evolved those ideas, and nowadays they encourage building software in small iterative and incremental steps.

But that’s not enough. For example, the Lean principle of the last responsible moment suggests that we should commit to features only when we have enough information to develop them. Thus we defer decisions as long as possible to minimize the risk of change.

In principle, Agile radically reduces the cost of change; however, in many cases this effect isn’t realised because of superficial transitions to Agile. Even imperfect “Agile” companies tend to have a lower cost of change than waterfall-driven companies, because Agile promotes iterative development and fast feedback by delivering value often. Yet they fail to complete the transformation: managers and developers aren’t willing to adopt non-conventional software engineering practices to build high-quality products.

The next picture depicts how the cost of change evolves over time for different approaches.

cost of change

Notice that fast feedback dramatically reduces the cost of change at first, but the cost increases again over time if short feedback mechanisms aren’t in place to provide information about the health of the system. Mechanisms such as test automation, unit testing or customer feedback provide invaluable data for reacting quickly to the unintended consequences of change.

Now it’s time to present a new topic aimed at helping our teams visualize, identify, anticipate, react to and mitigate the impact of this problem. Systems thinking aims to describe how the elements of a given system interact with each other and how the system as a whole is expected to perform. It’s worth mentioning that the next part of the article only briefly covers systems thinking, but I promise to keep writing about it in the future.

What follows is my personal view of the system, and it can and should be argued and discussed with the other actors who interact within the same system. The real value of this technique is creating shared understanding and alignment among different actors.

Describing “my view” of the problem.

In order to support the explanation, I want to show you how to use a simple archetype. An archetype is a template for describing patterns of behavior repeatedly found in different kinds of systems.

Fixes that backfire


“Fixes that backfire” describes a problem symptom. The first feedback loop, the balancing loop, is intended to fix the problem symptom in the short term. This “quick fix” tries to eliminate the symptom, but it quickly emerges again. More “medicine” is injected into the system, yet the symptoms are alleviated only for a short time, and the same “solution” is applied again and again. Meanwhile, a second feedback loop describes a slow and silent degradation of the system. This harmful loss of performance becomes more and more catastrophic because of the delay between the “quick fix” and its unintended consequences.

Learning by Example

Let’s describe a hypothetical situation in which an important project for a company was running very late. At first glance the project seemed very easy: the goal was to rewrite an existing application using a new technology. Besides, no new requirements were needed, which significantly reduced external sources of variation and complexity.

Problem Symptom:

The deadline was approaching and the team wasn’t delivering as expected. Product and code metrics, bug trends and data collected from static code analysis tools seemed to indicate critical quality problems. Furthermore, feedback collected during the product review meetings clearly indicated a lack of trust due to continuous crashes and instability of the system. The product owner, who was under continuous pressure from stakeholders, tried to reduce scope or negotiate a new deadline, but his efforts to convince managers were fruitless. The company needed that product to guarantee its position in the market. Hopeless, the product owner shared the critical situation with the team in a meeting and shifted the main responsibility for delivering the product onto the team.

Problem Symptom


Short-term reaction (quick fix):

Team members, who didn’t have a firm code of ethics, naturally reacted to the pressure by cutting corners in order to meet the deadline. They planned to develop features without any kind of automated testing, decided to postpone exploratory testing until a later phase, and reserved a buffer of time to stabilize and test the product before releasing.

Quick Fix


After many hours of overtime and tireless effort, the team was able to release a “stable” version.

Hey grandpa!!! Stop and keep talking about “Fixes that backfire”!

As I mentioned before, the second loop usually takes more time to become noticeable and provokes unintended effects on the system. At the moment the team was making those decisions, they were unaware of the psychological impact on morale and motivation, and of the economic impact on the cost of change, technical debt and maintainability.

Unintended consequences


OK, what should they have done?

The second loop usually needs traction in the opposite direction from the balancing feedback loop. Performing a root cause analysis of the problem, in order to understand how the system is actually behaving, is a good starting point. Notice that automated testing and continuous integration provide invaluable information about the health of the system and fast feedback, which dramatically reduces the cost of change.

Likewise, a bundle of actions can extend your toolbox:

  • Providing training on systems thinking and archetypes like “Fixes that backfire” might help the team avoid linear thinking* and superficial analysis of problems.
  • Bringing more visibility and transparency to technical debt and the cost of change through tools like SonarQube, Jira or TFS.
  • Putting more effort into the agile practices mentioned in this article helps reduce the cost of change.

Finally, I want to share with you some conclusions I have drawn from writing this.

Conclusions

  • The cost of change is much higher than you might expect at first glance. In this book, the authors explain how Microsoft Word came to market years late for the same reasons described here.
  • Agile methodologies allow us to reduce the cost of change, but superficial transitions achieve only a slight reduction.
  • Systems thinking is a branch of knowledge which allows us to share our mental models with others and to forecast how a system is expected to perform.
  • Archetypes are templates used for determining patterns of behavior repeatedly found in different companies and different situations.
  • Start modeling the system by searching for feedback loops instead of forcing your view of the system into an existing archetype.
  • Our behavior and code of ethics have dramatic effects on the people around us and the products we work on. We shouldn’t underestimate the team’s health, morale and motivation, or the impact of our actions and inactions on our families.
  • Root cause analysis of the problem (5xWhy?) and retrospectives are basic tools for learning.

*I promise to write about it in the future.

*For unknown reasons the zip file you can download here (Cost of change.zip) is not allowed by the hosting, so rename it from .doc to .zip.

Impact Mapping

THIS BLOG STARTS WITH A NEW EXPERIMENT:

In a few weeks I am going to attend a hangout to discuss two tools: impact mapping and user story mapping. It will likely cover what these tools are, their benefits and problems, and how and when to use them. I have had some experience with impact mapping because, in the last few projects I was involved in, stakeholders had no direct communication with the real user and thus an imprecise idea of what the system should do. We used impact mapping to help stakeholders and team members align their vision, see the product as a whole focused on its business value, and keep different assumptions and choices alive as we built the product. The map was an indispensable tool for centring many discussions.

But first of all, let’s briefly describe what impact mapping is:

Impact Mapping:

Impact Mapping Template

Impact mapping is a visual technique that allows you to map, display and organize your ideas hierarchically. The first node is the central one and describes the goal, reason or purpose for the impact: “why” you want to achieve something. From this blog’s point of view, I want to visualize and organize my ideas in a procedural way to reduce the amount of time I invest in setting up the content of this blog.

The second level is a set of child nodes describing “who” may help us or prevent us from achieving our goals. For this article, that means writers and readers: agile coaches, product owners, software developers, testers and scrum masters. Such roles will benefit in some way from the impact. The third level describes “how” each role’s behavior contributes to facilitating the effect or prevents it from happening; this branch is focused on how to create these impacts. The last level, called “what” or scope, details a list of actions or capabilities to support the required goals. Again, I intend to write these maps in advance and to provide a downloadable impact map for each blog entry.
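As a rough sketch of the why/who/how/what hierarchy described above, here is the structure expressed as a nested Python dictionary; the goal, actors and deliverables are just the hypothetical ones for this blog, not a canonical format:

```python
# Hypothetical impact map for this blog: why -> who -> how -> what.
impact_map = {
    "why": "Reduce the time spent organising content for each blog entry",
    "who": {
        "readers (coaches, POs, developers, testers, scrum masters)": {
            "how": "Give feedback that sharpens future posts",
            "what": ["comment on posts", "share the impact map files"],
        },
        "writer": {
            "how": "Organise ideas before writing",
            "what": ["prepare an impact map per entry", "measure set-up time and lead time"],
        },
    },
}

def print_map(node, indent=0):
    """Walk the nested map and print it as an indented outline."""
    if isinstance(node, dict):
        for key, value in node.items():
            print("  " * indent + str(key))
            print_map(value, indent + 1)
    elif isinstance(node, list):
        for item in node:
            print("  " * indent + "- " + item)
    else:
        print("  " * indent + str(node))

print_map(impact_map)
```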


An interesting side effect of following this tool is that, just by asking these questions, my mindset shifted from free writing (writing content without any script) to looking for a more profound purpose or business goal for each blog entry. Focusing on who will be affected by these ideas and how to impact them has directed my chain of thought and helped me approach writing content in a more structured and procedural way.

MEASURE

As a newbie writer, organizing my ideas is the activity that takes me longest, so the experiment should help me shorten this set-up time and deliver content faster and more often. Yes, I know that’s not a SMART goal, so here goes my redefinition:

S: Specific

  • I want to deliver content faster (less than 1 month) and to reduce set up costs needed to organize my ideas.

M: Measurable

  • Measuring set-up cost
  • Measuring lead time for writing the content of the blog

A: Actionable

  • Preparing impact mapping files for each blog entry
  • Measuring set up
  • Measuring lead time

R: Realistic

  • Despite the fact that my baby demands a lot of time and energy I think I will be able to do it.

T: Testable

  • 4 more blog posts in next 4 months

LEARN

The purpose of this section is to capture the learning obtained from the metrics collected and from your feedback.

BIBLIOGRAPHY

http://impactmapping.org/index.php

A web tool to create impact maps http://effectcup.com/

For unknown reasons the file you can download here (Impact Mapping) is not allowed as a zip, so rename it from .doc to .zip and open it with:

http://freemind.sourceforge.net/

http://mindjet.com/ 

Why pair programming works

Pair programming is an agile software development technique in which two programmers work together at one workstation. One, the driver, writes code while the other, the observer, pointer or navigator, reviews each line of code as it is typed in.[i] Pair programming is often considered a wasteful technique because people tend to believe that navigators aren’t writing code. This simplistic view rests on the assumption that the more people are writing code, the faster your product will be released. Yet things usually turn out even worse, because products suffer from maintainability and complexity problems in the short and long run as a result of poor software development practices, inexperienced teams and only occasional quality reviews. Some studies have shown that paired programmers are only 15% slower than two independent individual programmers, but produce 15% fewer bugs.[ii]

I decided to give it a try when I came across queuing theory, a statistical and probabilistic approach to evaluating how systems (teams) perform when jobs are non-repetitive and non-homogeneous and arrive at an unpredictable rate. Sounds familiar, doesn’t it? Some additional concepts are needed before continuing: servers are the resources that perform the job; the arrival process describes the pattern in which work arrives at the queue; the service process is the amount of time it takes to accomplish the work; and the queuing discipline describes how we manage the arriving work (FIFO first in, first out; LIFO last in, first out; etc.).

The next chart depicts the relation between capacity utilization and cycle time for a system with an unpredictable arrival rate and unpredictable service time, also known as M/M/1/∞.

capacity utilization

[1] Copied from http://hbr.org/2012/05/six-myths-of-product-development/ar/1

Notice that cycle time grows non-linearly as you increase capacity utilization, so overloading teams blocks the system’s workflow more and more. In an M/M/1 queue the average cycle time is proportional to 1/(1 - utilization), so loading a system from 70% to 80% capacity utilization already increases cycle time by about half, and pushing utilization towards 100% makes it explode.
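Under M/M/1 assumptions (a single server, unpredictable arrivals and service times) this relationship is easy to sketch; the average service time below is just a made-up figure:

```python
# M/M/1 queue: average time in system W = S / (1 - rho),
# where S is the average service time and rho the capacity utilization.
service_time_days = 1.0  # hypothetical average effort per work item

def cycle_time(utilization, service_time=service_time_days):
    return service_time / (1.0 - utilization)

for rho in (0.5, 0.7, 0.8, 0.9, 0.95, 0.99):
    print(f"utilization {rho:>4.0%} -> average cycle time {cycle_time(rho):6.1f} days")
# Cycle time climbs from 2 days at 50% utilization to 100 days at 99%.
```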

What should we do to keep capacity utilization under certain limits? My favorite choice is limiting WIP. Another valuable countermeasure is pair programming, because it quietly keeps the team’s capacity utilization at acceptable levels, which also reduces cycle time and improves flow. The technique also helps reduce silo mentality, promoting collective code ownership through constant rotation of pairs. Again, queuing theory confirms that systems mathematically perform better when there is a single queue (the product backlog) and several servers (team members) who can all deal with the work.

Single queue, multiple servers

Thus, if your goal is to increase the system’s performance, you should try to seed and promote T-shaped servers: people who are experts in a single area but also develop the abilities to collaborate in other areas. Finally, a few more benefits of the pair programming practice: developers learn new development skills from each other, and continuous rotation creates a strong sense of “team”.

 


[i] http://en.wikipedia.org/wiki/Pair_programming

[ii] http://c2.com/cgi/wiki?PairProgramming

Mix of Things

An interesting opportunity showed up a few weeks ago, just after I returned from my paternity leave. My brain was still stuck and asleep when my co-workers presented me with the latest “Death or Death” project. They told me we had only 4 months to deliver new software before the Nothing.

*Thanks to Michael Ende for describing the life of most software developers in this world.

After some workshops held to create a common understanding of the problem, we discussed our critical situation and decided to promote our agile principles and values even more. We reinforced the following ideas:

  • Delivering the highest quality software iteratively
  • Building and designing it incrementally
  • Communicating constantly with customer
  • Focusing on customer needs

Thus, the following techniques or tools were planned to be used in our last *cough project:

  • #impact mapping
  • #MobProgramming
  • #BDD
  • #Lean Software Development
  • #Visual Management + Metrics (CycleTime + CFD + Release Burndown chart)

The intent of this blog isn’t to create a prescriptive recipe for your software development process, but to provide insights into the reasons for using each specific technique.

#impact mapping
Despite the fact that this tool is designed for strategic planning and is most useful for managing projects over the long run, we decided to use it to help the team visualize the product backlog as an information radiator in the short term. This simplification of the diagram seeks to answer the following questions at first sight:

  • What the customer wants in plain text
  • How to provide value, in user story format: As a <role>, I want <capability> so that <benefit>.

Sometimes the “how” item needs to be broken down into smaller pieces, also in user story format, to give even more detail about the content of the “how” node or epic.

#MobProgramming
This technique, which I discovered thanks to Woody Zuill’s promotion of it, consists of one team, one active keyboard and one projector.
As they put it: it’s just like doing full-team pair programming.
There are two roles involved: the navigators, who discuss, think, design and guide, and the driver, who writes the code the navigators are dictating. Every 15 minutes the driver role rotates.
The team’s feedback after two weeks was terrific. They highlighted the following emergent behaviors:

  • Alignment: the whole team took part in the architecture. In fact, all team members coded and designed the emergent architecture.
  • The whole team defined a solid foundation for coding standards and code quality rules.
  • A more meaningful definition of done.
  • They learnt a lot from each other, especially the junior programmers from the senior developers.
  • Oops, I almost forgot to mention: ReSharper is cool.

Please, see more information here: http://codebetter.com/marcushammarberg/2013/08/06/mob-programming/

#BDD
I told someone some time ago that the book Specification by Example by Gojko Adzic had radically changed the way I understood software development. Although I don’t have much experience with it (only 2 projects), it has become an irreplaceable tool for any project I work on.

As an Agile coach I try to encourage the team to practice BDD and follow these rules:

  1. Technology changes but the domain remains. We avoid testing the presentation layer and make the effort to test only our business services. The presentation layer is either left to exploratory testing or automated with a UI automation tool when it’s worth investing in it.
  2. Using a ubiquitous domain language. The formal language must be shared by all members of the software development team, both software developers and non-technical team members (see the sketch after this list).
  3. Testing the real system. The continuous integration machine provides feedback on a daily basis about the health of the system. We are especially interested in performance issues or integration problems with external components. We push to integrate external components as soon as possible, or mock them until a stable version is ready to integrate.
  4. Embrace BDD refactoring. Re-read and re-write your tests many times to minimize misunderstanding or ambiguity, and look for inadequate feature or scenario definitions. The purpose of a scenario is to describe what the system has to do; likewise, the feature describes the acceptance test. We use the standard agile user story format for its definition.
  5. Defining specifications collaboratively: Specification by Example mentions several ways to create specifications: formal workshops, informal conversations, 3 amigos. Inspect them and adapt to your context.
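As a rough illustration of rule 2, here is a hypothetical specification-style test written with plain pytest; the domain names (MessageDictation, the premium upgrade) are invented for the example and are not from any real project:

```python
# Hypothetical executable specification: business language only, no presentation layer.
import pytest

class MessageDictation:
    """Toy domain service standing in for the real business service under test."""
    def __init__(self, subscriber_is_premium: bool):
        self.subscriber_is_premium = subscriber_is_premium

    def dictate(self, text: str) -> str:
        if not self.subscriber_is_premium:
            raise PermissionError("voice dictation requires the premium upgrade")
        return f"dictated: {text}"

def test_premium_subscriber_can_dictate_a_message():
    # Given a subscriber who purchased the voice upgrade
    service = MessageDictation(subscriber_is_premium=True)
    # When they dictate a message
    result = service.dictate("hello")
    # Then the message is delivered as dictated speech
    assert result == "dictated: hello"

def test_basic_subscriber_cannot_dictate_a_message():
    service = MessageDictation(subscriber_is_premium=False)
    with pytest.raises(PermissionError):
        service.dictate("hello")
```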

As I write this post, I wonder about the possibility of creating a specification quality control policy: a checklist for early identification of common issues in specifications that would help reduce rework.

#New Product Development
Although I have had excellent results following the Scrum framework over the years, I have become progressively more interested in Lean software development, Kanban and new product development, and less interested in estimations, how middle and upper management use them, the conflicts they provoke and their relationship with team commitment. Thus, our current software development process is a mix of ideas from different sources and frameworks.

Once every two weeks we release a new version and facilitate a review with our stakeholders. This cadence reduces the team’s coordination cost and creates a sense of urgency to deliver value as soon as possible and receive feedback. We have also limited work in progress (WIP) to give us enough flexibility and adaptability to variability. I took the chance to facilitate a systems thinking analysis meeting to create awareness of the potential effects of modifying the WIP limit and of when to increase or decrease it. The team has managed this limit internally and responsibly so far, adapting it to the continuously changing context.

Although our definition of done includes a statement that says “0 bugs”, some bugs have shown up, and we decided to create a queue for them. The queue is very small (only 6 bugs) and the team usually reacts very quickly to keep it under control. One of the interesting effects of limiting WIP is that developers can deal with variability (bugs) faster than other teams I have led in the past.

#Visual Management
The first blog entry described how we are going to use our cycle time and lead time metrics, but it only mentioned cumulative flow diagrams. Now it’s time to explain the usage of this diagram in more detail.
It aims at helping us track and monitor how user stories move through the various stages of the process on their way to “done”.

From a cumulative flow diagram we can see:

  • Where the bottlenecks are in our flow. Based on the Theory of Constraints introduced by Eli Goldratt, we must exploit the bottleneck, optimizing the throughput of the system by adding more capacity or changing how the system works. If we then apply continuous improvement loops like PDCA or Build -> Measure -> Learn, we can easily evaluate whether our policies are improving the flow.
  • Whether demand is seasonal, so we can take steps to adjust capacity in that case.
  • Whether we are delivering value at the end of the process, and how to improve the flow (a global view of the process).

Finally, we use a release burndown chart to show the team’s progress against the product backlog. The diagram is updated every week.

Sample burndown chart (picture provided by Wikipedia).


New experiment

Training BDD

Update on 13th of March: I received some unexpected news from Skype Education, but I have decided to go ahead with the plan using Google Hangouts instead.

Build phase: in order to help me improve my English communication skills, I have decided to create a classroom (now on Google Hangouts) for people to talk about Agile, Lean, Kanban, Scrum or XP and exchange experiences.

This experiment will take place on Saturdays and/or Sundays, once a month. Please have a look at the web page to learn the dates.

Measure phase: how to measure the project’s success: the amount of demand and the feedback from those who join the classroom.

Learn phase:  Keep doing it or reject the idea.

Stop the Line

LEAN

Jidoka is the Japanese term used by Lean practitioners for stopping the production line when workers on the factory floor discover a defect. A retrospective is a delayed form of jidoka, which aims at evaluating the situation and bringing up continuous improvement actions. Such actions must be focused on the components that interact to deliver a software product valuable to your customers: the product (software quality), people and processes.

Sometimes, poor variability-reduction policies from team and management layers (including Scrum Masters and Agile Coaches) cause problems such as poor technical skills, requirements ambiguity, poor requirement specifications or unclear scope. Such problems make teams suffer stressful situations and excess pressure that must be addressed even before the retrospective takes place.

Explicit rules

One of the Kanban foundations for leading continuous improvement is to design explicit process rules which aim to improve flow and reduce risks. As an Agile Coach I have worked together with the team to design a process for the early detection of bottlenecks that lets us “stop the line” when required.

User Stories

Despite the fact that it’s hard to split epics into small, similarly sized user stories, we try to break user stories into vertical slices of the product. In order to provide value as quickly as possible, we ask ourselves the following question:

                “What is the team going to undertake in only one day?”

Bottlenecks, Cycle Time and Traffic Lights

We currently have a visual management metric to measure cycle time and to help us detect bottlenecks.

Cycle time

*Notice that the team manually updates the diagram, recording the amount of time it took them to complete each user story. Meanwhile, the agile coach also updates the digital version, which automatically collects valuable metrics like the average cycle time, its trend and the standard deviation.

**The horizontal axis depicts the timeline and the vertical axis the time consumed by each user story.

At the bottom of the diagram, there’s a green area (up to 2 days of cycle time) that indicates that our process is healthy and it’s flowing. No additional action is required.

The yellow area (third day of cycle time) triggers an optional step: we set up a new topic just after the stand-up meeting, and the team works together to identify early actions to remove the bottleneck.

At the top of the diagram (4 or more days of cycle time) we are forced to evaluate the situation of the bottleneck daily, after the stand-up meeting. We focus our attention on the whole process, resource availability and people’s capacity, work in progress and pending actions. This topic usually takes 10 to 15 minutes.
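A tiny sketch of this traffic-light policy as code; the thresholds are the ones above, and the escalation messages are just illustrative:

```python
def traffic_light(cycle_time_days: int) -> str:
    """Classify a user story's cycle time according to the team's explicit policy."""
    if cycle_time_days <= 2:
        return "green: flow is healthy, no action required"
    if cycle_time_days == 3:
        return "yellow: raise the topic after stand-up and look for early actions"
    return "red: review the bottleneck daily after stand-up (process, capacity, WIP)"

for days in (1, 3, 5):
    print(days, "->", traffic_light(days))
```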

*Because cycle time is a lagging indicator, we are looking into replacing the cycle time report with a cumulative flow diagram: a tool from queuing theory that depicts the quantity of work in each state over time, showing work in progress, time in queue and departures.
