This free book is about software development without a word on how to develop software; it’s about everything around the development of software except programming. More particularly, it’s about software development as a knowledge-based discipline, and about what might make complex software development more productive. Complex problems have multiple solutions; there is no one best way to run a software project. It’s like solving a Rubik’s Cube: there are multiple solutions, some better and some less effective, and some strategies don’t solve the puzzle at all. This book has a specific audience – people who organize software development.
It’s not finished! If we write software in an iterative and incremental way, books would probably gain from being written this way too. So here is the unfinished first public version of my book, “Agile Leadership – A book about the human factor of software development”. Please enjoy it, spread it to people who might like it, and give me feedback. It is free for non-commercial use and licensed under Creative Commons – CC BY-NC-SA 3.0.
The more value a system has for a company, the more integration with that system will be performed. When companies grow and processes change due to internal or external forces, more integration is carried out. Integration is a field that has developed significantly over the last 10 years: tasks are more transparent and communication is easier. Technology and design principles like SOA (Service Oriented Architecture) have been major accelerators in establishing a coherent view across the industry. Here are some recommendations for handling integration projects.
What is Integration?
Integration is making two or more systems “talk” to each other. The interesting thing about integration is that it isn’t two different systems that understand each other, but two different teams that try to agree about how they see the world. Both teams already have a model of the world in their information model.
The Psychology of Who is Integrating Against Whom
When integrating two systems, it’s common for team A to feel that they are doing team B a favor because team B is integrating with team A’s system and therefore has more to gain from the integration. Team A feels that their information model is the one team B should adapt to.
Sometimes both teams get the same feeling that they are doing the other team a favor. This causes irritations as each team tries to present evidence that their view of the world is the correct one.
Is it a problem to adapt to the other system’s view of “the world”, i.e. its information model?
If the other team has more integration experience (either from building integrations or from integrating with several systems), it is probably a good idea to let their view of the world teach you. While your system might be easier to adapt and your team more agile than theirs, the other team has to make sure their information model works for many different systems.
How to avoid this?
If this becomes a problem for both systems, meet with the other system’s owner and do an effect analysis together. What benefit will each system gain through integration? Are there any cost savings or new possibilities? If there are new possibilities for either system, include the people who will gain the most from them.
A business developer might see how and if these possibilities might be “harvested”. If there are no new benefits by integrating, no efficiency gain, no cost reduction, or any new possibilities that can be harvested, integration is discouraged. Another alternative is to get upper management / executive support, either enterprise-wide or between the organizations. Present the new possibilities that integration will bring. It’s always good to have “executive blessing”!
How to use this to benefit your team!
If you fear that your project will be the one that has to adapt to the other team, you can manipulate the situation to your advantage. But do it with caution, because these aren’t exactly “kosher” recommendations.
Conduct the meeting with the other team in your office or conference room. The other team is in your territory and you’re hosting the meeting. The meeting host can set the agenda for discussions.
Be proactive and send out the agenda before the meeting. That way, it’s easier to chair the meeting. If you are conducting the meeting, the other team will feel that they report to you.
Take down minutes of the meeting and send them to all meeting attendees.
If the meeting is held via phone or video conference rather than in person, use your account and pay for the facility.
Integration projects tend to have hidden stakeholders. Since two organizations are being integrated, neither has full knowledge of the other’s stakeholders. For the project as a whole to succeed, every stakeholder should be identified and involved at the beginning of the project. Unfortunately, business people often show up only during the testing phase, when they should have been present much earlier. In large organizations it is also common to have a dedicated security department, and it becomes very strange when they are not involved early. I was once responsible for delivering a system as a supplier, and a week before product delivery the purchaser called and asked if we could hold a workshop with the security department to discuss the design and solutions. The purchaser had accepted all acceptance tests the day before, so we felt this was rather late in the process to start discussing security design principles; it would have been better to do this earlier in the project.
The most important success factor in integration projects
The largest factor is relationships, accounting for 53% of success according to the Standish report. For integration projects this becomes even more important, especially when the integration is carried out between different organizations, each with its own goals and view of the world. Communication and relationship building must therefore happen at every level, along with obtaining upper management’s support. Hold regular meetings – for example every Friday after lunch. Things tend to happen on Friday morning when people have promised deliveries or need to prepare.
Continuous integration is a way of applying quality control by integrating the project piece by piece. When embarking on a change, the developer takes a copy of the current code base to work on. As other developers submit changed code to the repository, this copy gradually ceases to reflect the repository code. Before developers submit their code to the repository, they must first update their copy to reflect the changes made to the repository since they took it. The more changes have happened in the repository, the more work the developer must do before submitting their own changes. If the repository becomes so different from the developer’s copy that the time it takes to integrate exceeds the time it took to make the original changes, you are in what is called “integration hell”. In a worst-case scenario, the developer may have to discard their changes and redo the work. The practice of continuous integration is used to avoid integration hell: each integration is verified by an automated build (including tests) to detect integration errors as quickly as possible.
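At its core, the automated verification step described above boils down to running the build and then the test suite for every submitted change, and rejecting the change if either fails. A minimal sketch (the command strings are placeholders, not from any particular build system):

```python
import subprocess

def verify_integration(build_cmd: str, test_cmd: str) -> bool:
    """Run the build, then the automated tests.

    The integration is accepted only if both commands exit with
    status 0; any failure rejects the change immediately.
    """
    for cmd in (build_cmd, test_cmd):
        if subprocess.run(cmd, shell=True).returncode != 0:
            return False
    return True

# A CI server would invoke something like:
#   verify_integration("make build", "make test")
```

Real CI servers add repository polling, notifications and build history on top, but the accept/reject decision is essentially this check.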
One warning when adopting CI!
The focus on integration and automated builds can lead to a false sense of security: as long as the build succeeds and all tests pass, people tend to be satisfied. But to gain all the benefits you strive for, like getting feedback on work just done, these tests aren’t enough. To be as productive as possible, you need to test as close to development as possible. You will save a lot of time if you find a bug or mistake you made today or this week, rather than something someone may have done to the code base during the last six months.
Continuous integration is like an apple a day: it keeps the debug doctor away. But it doesn’t gain its full power unless you also complement it with frequent testing and verification of the system.
Separating the different aspects of a project whose modules are tightly bound together can be a real challenge. If many people are involved in solving a complex problem, it can be hard to decide how closely together everyone should work. Should we have collective ownership or a hard, pre-defined contract between modules and different aspects?
Mel Conway wrote in his paper “How Do Committees Invent?” that “organizations which design systems are constrained to produce designs which are copies of the communication structure of these organizations”. When people depend on people in other parts of the organization, a conflict of interest can easily appear. People might share the same view today, but in the future someone’s boss might set different priorities. When that happens, one team will follow its own objectives and might not prioritize another group’s needs as highly as its own. The further away people are from each other, the more disjointed and incongruous their objectives are likely to be.
People working in the same room can easily discuss things spontaneously, or attend a design meeting in the room on short notice. People on different floors start booking meetings with each other, either officially or in more informal settings such as over a cup of coffee. People in different offices write memos and specifications to each other, hold phone meetings, and occasionally meet each other in real life. When people are geographically separated, have different underlying motivations, or sit in different parts of an organization, a clearer contract of responsibility and modularization is needed.
Another important concept in agile software development methodologies is technical debt. When you write code, you don’t fully understand a problem until you have finished solving it. This means your application isn’t designed and implemented the way you would do it if you were to start again. If you continue to add functionality and features without aligning the code with what you have learned and understood about the domain, a mismatch grows. This mismatch between your understanding of the domain and the code as it stands leads to a decrease in the rate of development. Ward Cunningham called this technical debt; he used an economic analogy to explain code quality in software development.
When you write code, you make a monetary or time investment to understand and solve a problem, and time is money. You have limited resources at your disposal, so you prioritize their use in solving the problem. To meet a deadline, shortcuts can be taken. The job gets completed more quickly, but in doing so, time had to be borrowed. Time, as stated, is money.
When you borrow money, you pay interest until the loan is paid in full. It’s the same in software development. If you don’t refactor your application to align it with newly discovered insights about the problem, or fix previous shortcuts, you have to keep adjusting other parts of the system. These adjustments are partly in vain, because the other parts don’t align with how the system should have been built given your current knowledge.
Every minute spent on the wrong code is time spent in vain; it lowers the speed of solving the real business issues. If you keep borrowing money and not paying it back, eventually all your income goes to interest payments, bringing your purchasing power down to zero. It’s the same as developing a program for a long period of time, only adding features without reorganizing the code to reflect a better understanding of how it works. This translates into very low productivity.
A friend told me that his latest consultant assignment involved a project with a very large technical debt. He was added to the project very late in the development cycle, and he felt like an archaeologist trying to clean up an artifact that would crumble unless he worked very slowly and cautiously. This project had been running for three years and it was now one year late.
Several programmers pointed out that the problem arose because the code was not aligned with the team’s new knowledge and understanding of the domain. Management’s comment was that there was no time for refactoring, as there were only 10 months left according to their earned value analysis, so the decision was to just add the last features. My friend left after a year, and the project was still only 90% complete.
What happens when knowledge is not synchronized with code?
Management indirectly states that it doesn’t appreciate quality in the long run. And if quality isn’t appreciated in the long run, it might not seem so important in the short run either. If you are not allowed to take pride in your craftsmanship, you will not do your best, because it doesn’t matter. As author and blogger Michael Feathers says, “We need to see clean flexible code as an asset and count it as an asset.” Whether it’s due to deadlines, dispersed teams, or unfamiliarity with a new technology, it’s easier to let code rot than to keep it up to date.
I found a great video in which Ward Cunningham explains technical debt that I have to share with you:
Milestones are like progress bar indicators for internal stakeholders; they give everyone involved feedback on the progress. In different software programs I have come across at least three kinds of progress bars. The first quickly reaches 90% and then stalls, leaving you wondering if it has stopped. The second just shows that something is working and never stalls, but you don’t know how much work is left. The third gives you the feeling that it is accurate and adapts in order to give a realistic estimate. Even if that progress bar says there is a long time left, you prefer it, because then you can do something else. Sincere (Swedish: uppriktigt) estimation is more valuable than working in the dark and hoping that the first 50% of the work is evidence about the remaining 50%. But this is not as easy as it sounds. Let’s look at some aspects of tracking progress.
Your project is getting closer to a release deadline and you ask the lead developer, “How is it going? Are we going to ship on time?”
“Hmm, something has come up!”, she tells you. “I have done really great work for the last 4 weeks, but today I found something no one had thought of, and the timetable slipped 3 months.” What are the chances of that? Quite small, I would say, because timetable slips don’t happen at the end of a milestone or project. Slips just show up at the end; they happen every day and every hour. As soon as someone answers an unexpected e-mail, stays home with sick kids, or tracks down an intermittent but catastrophic bug, slips happen. These things are also important and need to be done, but the sooner you can identify how much work is left and whether the timetable risks slipping, the better your time estimates will be. Team members and stakeholders will be better able to trust the timetable.
Processes and methodologies are great for some kinds of work. If you have a repeatable process that consistently yields a high-quality product, you can emulate McDonald’s approach and hire teens to be chefs. But at McDonald’s you will not find the next Gordon Ramsay. People who really want to understand and master their craft will not achieve that in a repeatable industrial process.
Well-documented processes are a great foundation for any part of a company when you are working with a repeatable process, or when you don’t have a dream team of smart, disciplined and attentive people whose work is more of an exploratory journey. A process helps people align their collaboration and get better at repeatable tasks. Processes are also well suited to being run from a checklist so that people don’t forget things, especially when they rarely or never do a specific task.
ITIL is the McDonald’s process concept for governance of infrastructure and solutions. ITIL isn’t the process used by a creative advertising agency looking for the best and maximal business value. Software development is more complex; it isn’t a software factory with a universal solution.
What a software development organization needs is a reflective improvement framework with a wide range of tools to choose from. Some tools and practices are designed to help a larger group collaborate, whereas others focus on quality aspects. There are no universal solutions for complex work. The McDonald’s approach might work for handling infrastructure and teens in a kitchen, but it isn’t a solution for handling complex change in high-tech work, where nothing stays still long enough to become routine.
Development is mentally difficult because it requires us to reconsider ourselves. Many of us have learned through years of socialization and schooling that being right is good. If you are right you are rewarded, and if you are wrong you are punished.
As a result, many people become obsessed with being “right”. But what does this lead to? It convinces us that one way is more “right” than others and that whatever is different is wrong. To develop, you have to realize that the way things were done before was not necessarily optimal, and therefore that you were wrong. If prestige was invested in the “right” way, it is difficult to re-evaluate the situation and develop. So the feeling of being right can be dangerous, because it doesn’t help us develop.
When you are working on a complex project, it can be hard to make decisions because you have to consider so many different variables at the same time, and when you later find out whether you were right or wrong, you wonder what you were thinking.
One tool that helps you make complex decisions, and at the same time document your decision making so you can reflect on the decision later, is the decision matrix. A decision matrix is a grid of values in rows and columns that helps you identify and analyze the performance of and relationships between a set of options and criteria. On the rows you put the different alternatives and on the columns the different criteria. You then rate how well each alternative satisfies each criterion, and finally weight each criterion according to how heavily it should count.
For example, suppose three alternatives are scored 0–3 against four criteria, one of which is “Supported by 3rd-party products”. Each criterion is then given a value/weight (for instance 1 = good to have); here the weights are 3, 3, 1 and 3:

Alternative 1: 0×3 + 3×3 + 3×1 + 2×3 = 18
Alternative 2: 3×3 + 1×3 + 1×1 + 3×3 = 22
Alternative 3: 2×3 + 1×3 + 1×1 + 1×3 = 13
Given what we know and value right now, Alternative 2 is the best choice. If you later learn that you overestimated the value of some criterion, you can go back and see how that changes the outcome.
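The arithmetic of a decision matrix is easy to put in code, which also makes it simple to re-run when you revise a weight. A minimal sketch (the alternative labels and the example weights and scores are illustrative):

```python
def weighted_scores(alternatives, weights):
    """alternatives maps a name to its raw scores per criterion (0-3);
    weights holds one weight per criterion (e.g. 3 = important, 1 = good to have).
    Returns each alternative's weighted total."""
    return {name: sum(score * weight for score, weight in zip(scores, weights))
            for name, scores in alternatives.items()}

weights = [3, 3, 1, 3]
alternatives = {
    "Alternative 1": [0, 3, 3, 2],   # 0*3 + 3*3 + 3*1 + 2*3 = 18
    "Alternative 2": [3, 1, 1, 3],   # 3*3 + 1*3 + 1*1 + 3*3 = 22
    "Alternative 3": [2, 1, 1, 1],   # 2*3 + 1*3 + 1*1 + 1*3 = 13
}
totals = weighted_scores(alternatives, weights)
best = max(totals, key=totals.get)   # the highest-scoring alternative
```

Changing one weight and recomputing `totals` shows immediately how sensitive the decision was to that assumption.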
Collective ownership encourages everyone to contribute new ideas to all parts of the project. Any developer can change any line of code to add functionality, fix bugs, improve the design or refactor. No single person becomes a bottleneck for changes. People who run their own race become dangerous for the rest of the team. If you hear someone say, “I dare not touch that module, it’s Lotta’s code and it’s too complex. Only Lotta can do that!”, you have a problem. Your “bus factor” (the number of people who would have to be run over by a bus before some knowledge has to be rebuilt) is 1: if Lotta is run over by the bus, all knowledge of that module is gone. Collective ownership is closely linked to the practices of unit testing and continuous integration. For people to dare to make changes without a complete overview of how other modules might be accidentally affected, a large suite of automated unit tests helps developers handle the parameters they don’t know by heart. And for everyone to be able to work on and keep an overview of the project, check-ins should be frequent. If huge, infrequent check-ins are made, people don’t learn as much from each other’s code, and the project tends to drift toward a single person being responsible for a specific class.
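The bus factor can be made concrete with a toy metric: if you record which people understand each module, the bus factor is the size of the smallest such group. A sketch (the module names, people and the simplified definition are illustrative, not from the book):

```python
def bus_factor(knowledge):
    """knowledge maps each module to the set of people who understand it.

    Here the bus factor is the size of the smallest such set: how few
    people have to be 'run over by the bus' before some module is left
    with nobody who understands it."""
    return min(len(people) for people in knowledge.values())

team_knowledge = {
    "billing": {"Lotta"},                     # only Lotta dares touch this
    "ui":      {"Anna", "Erik"},
    "search":  {"Anna", "Lotta", "Erik"},
}
# bus_factor(team_knowledge) is 1: lose Lotta and "billing" is orphaned.
```

Collective ownership, frequent check-ins and shared code review are all ways of pushing this number above 1.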
Within any group there are things to be done. The outcome of some challenging activities is more important than others for your unique project/system/organization. For your project to be successful, identify the key factors/positions for success. I recommend starting with identifying the different roles and responsibilities.
When you have identified key positions to be filled, investigate what abilities and experiences will make success possible. How can you minimize risk by recruiting people who have worked on similar projects or systems? Who has the ability to fill the position either now or later because they have the potential to become great at that role and position?
Many projects start in the reverse order: I have the following people and resources, what can I do with them and who fits which role? Person X is best suited as IT quality manager, looking after the process, documents and so on. You don’t start with the roles you have to fill; Person X might not be an asset to this project at all. It might work out, but when filling key positions this way, you might as well cross your fingers and wish for luck.
What Might the Key Positions Be?
It is difficult to give a general answer. I mainly look for two kinds of roles – managers and very specialized roles. There will probably be fewer of them than the rest of the team, so it is more important that you get the right person. It may also vary over time: when a project is small, the key positions are mainly developers, while later in the technology life cycle it might be the product owner or the project portfolio manager/assistant.
Hiring for Key Positions
There are a few general recommendations for hiring for key positions. If you can match people, roles and projects perfectly, I’m confident you will have a much better chance for success.
Values - Does the person share the core values of the organization/project/company? You cannot teach key values to people just to make them fit in. As Harry Truman once said, if people don’t know right from wrong when they are aged 30 they probably never will. You cannot teach people the right attitude, you can only teach them the right skills.
High standards - Look for people who you don’t need to manage. If they held themselves to high standards in previous work, they tend to exhibit the same high standards again. People with low standards look for the easiest way out; they’re into shortcuts, and the quality of their work shows it. With them, you have to define exactly what’s expected and provide constant supervision and control. Your job is easier if you work with team members who maintain their own high standards.
Ability - Does the person have the ability to become the most suitable person for this key position? He doesn’t have to be the most suitable right now, but does he have the potential to become the most suitable person in the future?
Neurotically responsible – don’t just hire someone for a key position, hire someone who will take responsibility for it. It’s important that the person doesn’t think of the work as just another job; he must think of it as a sacred responsibility. You want a person who wants to fix a hole – who is even neurotic about it – as soon as he spots one, and who won’t rest until it is fixed. This is an ability you can’t detect immediately, but over time you will be able to identify these neurotics who take their tasks very seriously. They’re the ones you want on your team, because you know they won’t let problems fester and go unsolved.
How to Manage Key Positions
When you recruit people for key positions, invest the time to know them right from the beginning. Getting to know people and their abilities is a challenge so make the effort to know them, their work habits, their values and their strengths and weaknesses.
People are different, so it’s good practice to be thoroughly familiar with these differences and how to handle them. A key position person is like a captain or helmsman on a boat: they give clear orders and steer the ship away from dangerous rocks. Each person on that ship may be following orders and meeting targets, standards and plans. But what happens if those targets, standards and plans are wrong?
Ask yourself: do I have the right person in the right seat, or the right person in the wrong seat? Maybe another position would suit them better? If you had the good fortune of hiring the right person for the right task, do whatever it takes to motivate, coach and develop them.
One management practice is to estimate everything you do in ideal time. Ideal time is how long a task would take if there were no interruptions. When you plan, you don’t assume that all work is 100% focused and uninterrupted; instead you calculate with a velocity. Velocity is the number of user stories (or user story points) you can complete in one iteration (or sprint); in Scrum this is known as the focus factor. If your team of 4 people, during a 3-week sprint working 40 hours a week, completes user stories that were estimated at 360 hours of ideal time, your focus factor/velocity would be 75% (360 out of 480 available hours).
The advantage of the velocity concept is that it is much easier to estimate in ideal time; that is what comes naturally when you think about how long something will take. You get a focus on prioritizing only planned work tasks, and people avoid interruptions. You don’t need to plan more work than the available time multiplied by the focus factor per iteration, so you don’t create false expectations. You can also trace velocity over time: velocity tends to peak in the middle of a project and be lower at the beginning and the end. In the beginning this can be because people are still figuring out how they will work, and the infrastructure and roles may not all be in place. At the end of a project, new unplanned work has a tendency to appear. It often takes 3 to 6 iterations for velocity to stop fluctuating and become stable.
This doesn’t mean that the remaining 25% of the time was wasted; it just wasn’t spent on things that were planned and prioritized from the start.
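The calculation is simple enough to sketch: completed ideal-time hours divided by the calendar hours the team had available (4 people × 3 weeks × 40 hours = 480 hours; 360/480 = 75%):

```python
def focus_factor(ideal_hours_done: float, people: int, weeks: int,
                 hours_per_week: float) -> float:
    """Completed ideal-time hours divided by the calendar hours available."""
    available = people * weeks * hours_per_week
    return ideal_hours_done / available

# 4 people, 3-week sprint, 40 h/week, 360 ideal hours completed -> 0.75
ratio = focus_factor(360, 4, 3, 40)
```

In the next sprint you would then plan no more than 480 × 0.75 = 360 ideal hours of work, rather than the full 480.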
Stakeholders show different levels of engagement. Some are really involved because their stakes are high and their risks greater, while others are mainly interested in being informed. Scrum has a little story to show the difference between a stakeholder you really need to focus on and one who is less involved.
A pig and a chicken walk down the road. The chicken says to the pig, “should we start a restaurant together?”
The pig answers, “What a great idea! What should we call it?”
“We can call it Egg and Bacon”, says the chicken.
“Hmm, I will be really involved putting my ass in the pan, but you will not be as committed when you are just laying eggs!”, answers the pig.
This is the difference between stakeholders with stakes and stakeholders who don’t.
When starting a new project, inventory everyone who might have expectations of the project. Then you can organize them by their interest in and power over the project. This gives you a clue about how you will need to manage each stakeholder, so that you put your energy into the right ones. It helps you identify the key people, not just the ones who make the most noise.
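Organizing stakeholders by interest and power is commonly done with a two-by-two grid. A sketch of that classification (the quadrant names and the 0–10 scales are common conventions, not from this book):

```python
def classify_stakeholder(interest: int, power: int, threshold: int = 5) -> str:
    """Place a stakeholder on a simple power/interest grid.

    Both interest and power are judged on a 0-10 scale; the threshold
    splitting 'high' from 'low' is an arbitrary choice."""
    if power >= threshold:
        return "manage closely" if interest >= threshold else "keep satisfied"
    return "keep informed" if interest >= threshold else "monitor"

# e.g. a purchaser with high power and high interest:
#   classify_stakeholder(interest=9, power=8) -> "manage closely"
```

Stakeholders in the "manage closely" quadrant are the pigs of the story above; those under "monitor" are chickens you only need to keep an occasional eye on.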
Everyone who is affected by the project will have an opinion on how the project will affect them personally and their organization. Users want the best software and customers want it for free. Sales people want a unique product that creates a great business value for as many potential customers as possible. Suppliers want a larger share and your development team wants to use only state of the art tools and technologies.
Your responsibility as a software development manager is to make sure that the right thing gets done, at the right time and in the right way. To make this happen, understand the needs and strategies of the different stakeholders. The software development manager is the middle man negotiating with all of them: your aim is to give everyone something, so that everyone feels like a winner. Making everyone happy to a certain extent is a key part of the role.
The Process of Managing Stakeholders
If a project manager’s main concern is to keep all stakeholders satisfied, let’s analyze the flow of the work. Stakeholders have interests in the project and communicate their expectations; when all expectations are recorded, a prioritization can be made based on strategy and the stakeholders’ power. Because you have limited resources, you will not be able to do everything that everyone wants, so you will need to make a lot of trade-offs to get everyone committed and satisfied. The worst thing you can do is fail to make clear that not all expectations will be fulfilled.
Many companies just record expectations and write down requirements, but never sit down and really prioritize what is important and what actually drives business value. As a result, it is common for developers to work on features that are more decorative, good-to-have stuff than real business value. This is not the developers’ mistake; it is the business and project managers’ mistake, since the programmers might implement a function perfectly. It’s quite common to do the wrong thing in the right way.
Leadership is about understanding direction while management is administering the journey as efficiently as possible. A leader shows the way, develops long-term strategies and plans, and inspires others so they have a clear understanding on the project’s future success.
By informing others, the organization will have the ability to adapt in a self-directing way. Leadership is about pulling the organization towards the future. Management is more about short-term planning.
Complex organizations need managers to coordinate the work, so that the right priorities are established and reached. Managers push the organization towards goals on a daily basis. Leadership is about helping people cope with change, while management is about helping people cope with complexity. Leaders set direction; managers plan and budget. Leaders align people; managers organize and supervise staff. Leaders motivate; managers control.
You can quickly see the important differences between leadership and management in this classic story. A group of workers is cutting their way through a jungle. The workers in the front will be cutting the undergrowth and clearing it out. The potential managers will be behind them, sharpening their machetes, writing policy and procedure manuals, holding development programs and setting up work schedules. The potential leader is the one who climbs the tallest tree, surveys the entire situation and yells “wrong jungle” (Covey, 1989).
To understand how knowledge is spread throughout an organization, we need to understand the SECI model by Prof. Ikujiro Nonaka (Hitotsubashi University). When Prof. Ikujiro Nonaka introduced the SECI model (Nonaka & Takeuchi, 1996), it became the cornerstone of knowledge creation and knowledge transfer theories. He proposed four ways to combine and convert knowledge types, showing how knowledge is shared and created in organizations. The model is based on two types of knowledge – explicit knowledge and tacit knowledge. Explicit knowledge is visible knowledge; it is easily explained, quantified and documented. Tacit knowledge is unseen and grows with habits and hands-on work, but is not easy to share or document.
The model also consists of four different process situations: Socialization, Externalization, Combination and Internalization.
Socialization This process focuses on tacit to tacit knowledge transfer. It’s done when knowledge is passed on through practice, guidance, imitation and observation. This is when someone who is learning a new skill can interact with a more experienced person, ask questions and observe. This occurs in traditional environments where a son learns the technique of wood craft from his father by working with him (rather than by reading books or manuals on woodworking).
Externalization This process focuses on tacit to explicit knowledge transfer. Externalization is about making an internal understanding more quantifiable like writing documents and manuals, so that the knowledge can be spread more easily through the organization. The processes of externalization are good at distributing knowledge for repetitive work or processes. An expert describes different parts so that readers can understand “if this happens do the following in order to succeed”.
Combination The process of combination is about transforming explicit knowledge to another person’s explicit knowledge. A typical case is when a financial department collects all financial information from departments and consolidates this information to provide an overall profile of the company.
Internalization The process of internalization is about transforming explicit knowledge to tacit knowledge. Through reading books, manuals or searching on the web, explicit knowledge can be learned.
There is a spiral of knowledge in their model, where explicit and tacit knowledge interact in a continuous process. This process leads to the creation of new knowledge. The central thought of the model is that knowledge held by individuals is shared with other individuals and combines into new knowledge. The spiral of knowledge, and the amount of knowledge, grows all the time as more rounds are made through the model.
The basis of all change is that the need for change is known and communicated. If it’s not on the agenda it will probably not be valued. If spreading of knowledge is important for your organization, talk about it with the people involved.
Between the 1940s and the 1970s, the computer was more of a scientific instrument than a well-established business technology. It wasn’t until IBM and Apple introduced the PC and the Macintosh that computers began to spread outside the scientific institutions, making their way into the largest companies. It was at this time that software development started to take off and we saw the birth of large software companies.
Up until the ’70s, programs were often quite simple and were operated only by those who created them. But as systems became larger, it also became more difficult to develop them and to organize software development.
In 1970, a director at the Lockheed Software Technology Center, Dr. Winston W. Royce, published a paper entitled “Managing the Development of Large Software Systems: Concepts and Techniques”. Dr. Royce presented a more structured method for organizing software development. This technique was inspired by the manner in which fields like civil engineering and manufacturing organized their development.
The basic idea is that everything is done in sequential phases. This means that you need to understand everything in a specific phase before you can start the next phase. If you change your mind in a later phase it will cost you, and it will be hard to finish the project on time. First you need to understand all the requirements, then you need to do all the design (big design up front), and so on.
Each phase was handled by specialized groups like business analysts (for defining the requirements), system analysts (for designing the programs), programmers (for developing applications), testers (for testing applications) and deployment personnel (for overseeing operations). These groups communicated mostly in writing, and handed over work from group to group.
Managing software development with the Waterfall Model (I discuss this model later in this section) means investigating what the system is supposed to do, making plans so that it does what it is supposed to do, and sticking to that plan. This model, however, had its setbacks. First, people learned a lot between gathering the first system requirements and the point where the system went into production and was used by real users, and it was difficult to take advantage of what was learned along the way.
Second, a long time often passed between the requirements phase and the user feedback phase. If you didn’t figure out what the users wanted, or the users themselves didn’t know what they wanted, more time and money had to be spent to change or adapt the system to the users’ needs.
In defense of Royce, it is fair to say that he actually did warn that these things could happen, and he therefore proposed an iterative way of working. But no one adopted that part of his model. That’s how the model came to be called the Waterfall.
When the US Department of Defense needed a software development process, they looked at Royce’s paper and they adopted a part of it (unfortunately they adopted the worst part) and named it DOD-STD-2167 (Department of Defense Standard 2167).
When NATO later needed a model, they reasoned that if this was the best model the US military could find, then it ought to be adopted. And from there, more and more people adopted the theories of the Waterfall. Even though the US Department of Defense replaced the standard in 1995, it remains the basis of what the academic world teaches to this day.
The rise of plan-driven methodologies
In the ’80s and early ’90s a myriad of new methodologies were invented that focused on design. These gained popularity at the same speed that object-oriented programming languages like C++, Ada and Smalltalk gained practitioners.
Naturally there were design methods before this time, even object-oriented ones, but the popularity of C++ created the need for a new approach. Most design methodologies before this were data-driven and/or functional in nature, and when programming in an object-oriented language they were found to be inadequate.
Methodologies that became popular were Rumbaugh’s OMT, Booch, Coad-Yourdon, OOSE from Jacobson and Shlaer-Mellor, to name a few. All were quite good in certain areas, but none seemed to cover the whole design process. Each methodology had its own type of notation and often concentrated only on one aspect, such as the sequence of events in a system.
Because of this it was hard to use only one tool; developers adapted their favorite method and added other tools into their own hybrid design methodology, splintering the industry even more. The so-called “Method Wars” arose, and people argued endlessly about the pros and cons of their adapted methodologies.
But in the mid-’90s three design creators, Jim Rumbaugh, Grady Booch and Ivar Jacobson, joined forces at Rational, a company specializing in design tools. They became known as the famous “three amigos”. They declared the “Method Wars” over and soon came out with the first release of the Unified Modeling Language.
The RUP process and other methodologies from this time were based on a plan-driven but iterative assumption. The criticism was mostly that these methodologies were too document-focused. You would still need to understand the whole problem before you started the next step, everything had to be documented, and this created a very big overhead that didn’t create business value. A large, complex problem was documented and explained rather well, but a smaller, simpler problem immediately became just as big to administrate.
The rise of agile methodologies
During the late ’90s people started to react against the plan-driven models. Many people were frustrated and demanded that developers become more agile to business demands, adapting better to knowledge gained during the project. Plan-driven methodologies were considered bureaucratic, slow, demanding, and inconsistent with the way software developers actually perform effective work.
New ways of working like XP, Scrum, DSDM, ASD, Crystal, FDD and Pragmatic Programming were developed as alternatives to documentation-driven, heavyweight software development processes. These new methodologies had something in common: instead of fixing a construction and planning scheme in the beginning phase and staying with it, they adapt the plan as the project progresses. Many of the agile methodologies were inspired by “The New New Product Development Game”, an article written by Hirotaka Takeuchi and Ikujiro Nonaka and published in the Harvard Business Review in 1986. This article is often used as a reference and could be considered the birth of agile methodologies.
In February 2001 a group of different methodology developers met at a ski resort in Utah to talk, ski, relax and try to find common ground on what they were trying to accomplish. The output of that weekend became known as “The Agile Software Development Manifesto”.
Another movement that has gained substantial support within software development organizations in the 21st century is the Lean Software Development theories, with Mary and Tom Poppendieck as the main figures. Lean is based on systems thinking theory, which sees an organization as a system. The system shall fulfill a clear customer-focused purpose in as productive a way as possible. They argue that your purpose is probably not to develop software. Your organization’s customers probably want their demand satisfied or a problem fixed. If the customers could solve their problems without software, they would be delighted. The way Lean works is that it analyzes the system in which the software will be used, and how to produce it as productively as possible, with a focus on adaptive behavior, knowledge acquisition and knowledge workers.
It might seem like programming is about writing code for computers to understand, that what a programmer does is translate requirements into a language that computers can understand. But no programmer sees himself as a translator; we see ourselves as creative writers. If two programmers are given the same requirement to implement, their source code will differ, so it’s not about translating from one language to another. The language used to produce the program isn’t for the computers either; it’s for humans. Most programmers write in a language that needs to be compiled and translated into an executable that computers understand, so the purpose of a language is for people to be able to collaborate and understand each other’s intentions.
In a complex solution the time and effort to deliver, maintain and extend software are directly related to the clarity of the code. The lack of clarity creates technical debt that will eventually have to be paid off with interest, debt that can overwhelm your ability to develop new features. Failure to pay off technical debt often results in bankruptcy: the system must be abandoned because it is no longer worth maintaining. It is far better not to go into debt in the first place. Keep it simple, keep it clear and keep it clean. Grady Booch, author of Object-Oriented Analysis and Design with Applications, writes that you recognize clean code because it is simple and direct: “Clean code reads like well-written prose. Clean code never obscures the designer’s intent but rather is full of crisp abstractions and straightforward lines of control”.
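As a small, hypothetical sketch of what “reads like well-written prose” can mean in practice (the pricing rule and all names here are invented for illustration), compare a compressed function with one written to state its intent:

```python
# Unclear: the intent is buried in one-letter names and magic numbers.
def calc(p, q):
    return p * q * 0.9 if q > 100 else p * q

# Clean: the same rule, but the names and structure state the intent.
BULK_ORDER_THRESHOLD = 100
BULK_DISCOUNT_FACTOR = 0.9  # bulk orders get a 10% discount

def order_total(unit_price, quantity):
    """Total price for an order, applying the bulk discount when it qualifies."""
    total = unit_price * quantity
    if quantity > BULK_ORDER_THRESHOLD:
        return total * BULK_DISCOUNT_FACTOR
    return total
```

Both functions compute the same result, but only the second one lets a reader see the business rule without reverse-engineering it, which is exactly the kind of clarity that keeps technical debt from accumulating.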
To be agile and stay agile, you need to be able to change things with confidence, without risking a change that results in unpredictable behavior or a bug. To do this, verify that previously built features are not affected by newly introduced code. If you don’t create automated tests, you are forced to do a lot of manual work and you incur unnecessary risks when introducing new features. As a result, your system will become very fragile.
Another argument in favor of automated tests is that unit testing and test-driven development reduce the cost of handling bugs. The cost of fixing a bug that has already reached the production environment can be very high. Ideally, a bug should never get that far.
Practices like unit testing, automated tests, continuous integration, refactoring, TDD, iterative and incremental development, and simple design where separation of concerns is applied are great insurance for staying agile without becoming fragile. As soon as you start tampering with these practices, your risks increase and your flexibility decreases.
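A minimal sketch of this kind of insurance, using Python’s built-in unittest module (the shopping-cart class itself is invented for the example): the tests pin down existing behavior, and the whole suite is re-run every time new code is introduced, so a change that breaks an old feature is caught immediately rather than in production.

```python
import unittest


class Cart:
    """A tiny shopping cart used only to illustrate regression testing."""

    def __init__(self):
        self._items = []

    def add(self, name, price):
        self._items.append((name, price))

    def total(self):
        return sum(price for _, price in self._items)


class CartRegressionTests(unittest.TestCase):
    # These tests record the behavior we rely on today. If a future
    # change to Cart breaks totals, the suite fails before shipping.
    def test_empty_cart_totals_zero(self):
        self.assertEqual(Cart().total(), 0)

    def test_total_sums_item_prices(self):
        cart = Cart()
        cart.add("book", 12)
        cart.add("pen", 3)
        self.assertEqual(cart.total(), 15)
```

Run with `python -m unittest` as part of a continuous integration build; the value comes not from any single test but from running all of them on every change.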