Development Process - arstechnica
Development Methodologies: Waterfall vs. Agile, and all in between
http://arstechnica.com/information-technology/2014/08/how-microsoft-dragged-its-development-practices-into-the-21st-century/

In short, sometimes I think agile is a bit of hype. Agile is over a decade old at this point and was created by pioneers like Bob Martin (Uncle Bob), Martin Fowler, and others who are still pioneering today.
Visual Studio's scrum teams take members from each of the three traditional Microsoft roles: a program manager (who becomes the product owner), a mix of developers with a dev lead, and QA with a QA lead. Each individual team manages its own backlog. At the start of a sprint, a few items from the backlog are chosen according to their priority; at the end of the sprint, those items should be complete, with a working, tested implementation.
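To make the backlog mechanics concrete, here is a toy sketch in Python (illustrative names and numbers only, not Microsoft's actual tooling or data model) of a team-owned backlog from which the highest-priority items are pulled when a sprint starts and are expected to be working and tested by its end:

    # Toy sketch of a team-owned backlog; names are made up for illustration.
    from dataclasses import dataclass, field

    @dataclass
    class BacklogItem:
        title: str
        priority: int        # lower number = more important
        done: bool = False   # "done" means implemented *and* tested

    @dataclass
    class Team:
        name: str
        backlog: list = field(default_factory=list)

        def plan_sprint(self, capacity: int) -> list:
            """Pick the highest-priority items the team can commit to this sprint."""
            self.backlog.sort(key=lambda item: item.priority)
            return self.backlog[:capacity]

    editor_team = Team("Editor", [
        BacklogItem("Fix IntelliSense regression", priority=1),
        BacklogItem("New refactoring command", priority=2),
        BacklogItem("Telemetry opt-out setting", priority=5),
    ])
    sprint_items = editor_team.plan_sprint(capacity=2)

The detail the sketch tries to capture is that prioritization and the definition of "done" live with the individual team rather than in a central, up-front plan.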
Agile accepts change; it understands that change happens in EVERY project. Change is not a bad word to be avoided.
Agile is a mindset change. It's an acceptance of the idea that we're not perfect developers, that we work with imperfect information that is sometimes flat-out wrong, and that we're terrible at estimating how long any of this will take.
We work to deliver software whose end impact even our users might not fully know. So why not wrap the process in something that helps us cope with these realities? Agile does this. Waterfall tries to control it and fails every time. The demand for flawless, high-performing, intelligent software has increased exponentially, yet our ability to satisfy this demand has only increased linearly, limited by our software development tools. It's an exciting time to be a software developer, and at the same time a bit frightening, considering the expectations and the perpetual threat of obsolescence.
There are very narrow fields in software development that allow a closer approximation to engineering; tightly restricted software such as flight control systems, for example, can be rigorously specified. But they're the exception. I defy anyone to write a "meaningful, real world specification" for a general purpose operating system, for example.
For the longest time, Microsoft had something of a poor reputation as a software developer. The issue wasn't so much the quality of the company's software but the way it was developed and delivered. The company's traditional model involved cranking out a new major version of Office, Windows, SQL Server, Exchange, and so on every three or so years.
The releases may have been infrequent, but delays, or at least perceived delays, were not. Microsoft's reputation in this regard never quite matched the reality—the company tended to shy away from making any official announcements of when something would ship until such a point as the company knew it would hit the date—but leaks, assumptions, and speculation were routine. Windows 95 was late. Windows 2000 was late. Windows Vista was very late and only came out after the original software was scrapped.
In spite of this, Microsoft became tremendously successful. After all, many of its competitors worked in more or less the same way, releasing paid software upgrades every few years. Microsoft didn't do anything particularly different. Even the delays weren't that unusual, with both Microsoft's competitors and all manner of custom software development projects suffering the same.
There's no singular cause for these periodic releases and the delays that they suffered. Software development is a complex and surprisingly poorly understood business; there's no one "right way" to develop and manage a project. That is, there's no reliable process or methodology that will ensure a competent team can actually produce working, correct software on time or on budget. A system that works well with one team or on one project can easily fail when used on a different team or project.
Nonetheless, computer scientists, software engineers, and developers have tried to formalize and describe different processes for building software. The process historically associated with Microsoft—and the process most known for these long development cycles and their delays—is known as the waterfall process.
The basic premise is that progress goes one way. The requirements for a piece of software are gathered, then the software is designed, then the design is implemented, then the implementation is tested and verified, and then, once it has shipped, it goes into maintenance mode.
[Image caption: The Microsoft Way? Top-down, waterfall-ish.]

=============================== The wretched waterfall ===============================
The waterfall process has always been regarded with suspicion. Even when first named and described in the 1970s, it was not regarded as an ideal process that organizations should aspire to. Rather, it was a description of a process that organizations used but which had a number of flaws that made it unsuitable to most development tasks.
It has, however, persisted. It's still being commonly used today because it has a kind of intuitive appeal. In industries such as manufacturing and construction, design must be done up front because things like cars and buildings are extremely hard to change once they've been built. In these fields, it's imperative to get the design as correct as possible right from the start. It's the only way to avoid the costs of recalling vehicles or tearing down buildings.
Software is cheaper and easier to change than buildings are, but it's still much more effective to write the right software first than it is to build something and then change it later. In spite of this, the waterfall process is widely criticized. Perhaps the biggest problem is that, unlike cars and buildings, we generally have a very poor understanding of software. While some programs—flight control software, say—have very tight requirements and strict parameters, most are more fluid.
For example, lots of companies develop in-house applications to automate various business processes. In the course of developing these applications, it's often discovered that the old process just isn't that great. Developers will discover that there are redundant steps, or that two processes should be merged into one, or that one should be split into two. Electronic forms that mirror paper forms in their layout and sequence can provide familiarity, but it's often the case that rearranging the forms can be more logical. Processes that were thought to be understood and performed by the book can be found to work a little differently in practice.
Often, these things are only discovered after the development process has begun, either during development or even after deployment to end users.
This presents a serious problem when attempting to do all the design work up front. The design can be perfectly well-intentioned, but if it is wrong, needs to be changed in response to user feedback, or turns out not to be solving the problem that people were hoping it would solve (and this is extremely common), the project is doomed to fail. Waiting until the end of the waterfall to discover these problems means pouring a lot of time and money into something that isn't right.

=============================== Waterfalls in action: Developing Visual Studio ===============================
Microsoft didn't practice waterfall in the purest sense; its software development process was slightly iterative. But it was very waterfall-like.
A good example of how this worked comes from the Visual Studio team. For the last few years, Visual Studio has been on a somewhat quicker release cycle than Windows and Office. Major releases come every two or so years rather than every three.
This two-year cycle was broken into a number of stages. At the start there would be four to six months of planning and design work. The goal was to figure out what features the team wanted to add to the product and how to add them. Next came six to eight weeks of actual coding, after which the project would be "code complete," followed by a four-month "stabilization" cycle of testing and debugging.
During this stage, the test team would file hundreds upon hundreds of bugs, and the developers would have to go through and fix as many as they could. No new development occurred during stabilization, only debugging and bug fixing. At the conclusion of this stabilization phase, a public beta would be produced. There would then be a second six- to eight-week cycle of development, followed by another four months of stabilization. From this, the finished product would emerge.
With a few more weeks for managing the transitions between the phases of development, some extra time for last-minute fixes to both the beta and the final build, and a few weeks to recover between versions, the result was a two-year development process in which only about four months would be spent writing new code. Twice as long would be spent fixing that code.
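A quick back-of-the-envelope check of those figures, sketched in Python; the numbers are the ranges quoted above taken at their upper ends, so the exact split is an assumption rather than an official schedule:

    # Rough month budget for one ~two-year Visual Studio release.
    planning      = 6          # four to six months of planning and design
    coding        = 2 * 2      # two coding milestones of six to eight weeks (~2 months) apiece
    stabilization = 2 * 4      # two stabilization phases of about four months each
    total         = 24         # the overall two-year cycle, in months
    overhead      = total - (planning + coding + stabilization)

    print(coding)         # 4 months actually writing new code
    print(stabilization)  # 8 months testing and bug fixing: twice the coding time
    print(overhead)       # 6 months left for transitions, last-minute fixes, recovery, and slack

However the slack is actually distributed, the striking ratio survives: roughly four months of new code against roughly eight months of stabilization in a 24-month cycle.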
Microsoft's organizational structure tends to reflect this development approach. The company has three relevant roles: the program manager (PM), responsible for specifying and designing features; the developer, responsible for building them; and QA, responsible for making sure the features do what they're supposed to. The three roles have parallel management structures (PMs reporting to PMs, and so on).
=======================================
kalzekdor Ars Scholae Palatinae Aug 6, 2014 4:24 AM Popular
https://arstechnica.com/information-technology/2014/08/how-microsoft-dragged-its-development-practices-into-the-21st-century/?comments=1&post=27346235

One of the key things when developing a web application of any complexity is to make sure your underlying architecture is solid. It's usually trivial to roll out new bugfixes, features, or interface tweaks after launch, but if you screw up your backend... time to tear it all down and start from scratch.

My company develops custom web applications for clients, and requirements gathering is king. Our project timelines usually look like:

Design Stage:

Requirements Gathering (~1 week) - We talk to the customer, their users, do research, throw questions at the customer, do more research, and eventually determine a few things: what problems they have, how they think they should be solved, a priority assigned to each problem, and the general timeline they're looking for.

Problem Assessment (3-5 days) - The entire dev team sits down and goes over the data from the requirements gathering. We assess potential difficulties, analyze problem sets for commonalities, and sometimes throw out requirements altogether in favor of a better solution to address their needs.

Solution Architecture (1-2 weeks) - Most of the major design decisions are made here. Database schemas are designed and normalized, data flow paradigms are established, tasks are defined and distributed. By the end of this phase, the project outline is finished. There are a lot of grey areas still, but the broad strokes and important bits are there.

Development Stage: After about a month for larger projects, when the design aspects are settling down, we begin developing in earnest. Here we take an iterative approach: at the end of the day, the project should run. If it doesn't, well, that's what branches are for. We follow the basic pattern Design -> Implement -> Test for each required feature, starting with core backbone code and working our way towards user-facing code as the project goes on.

I'm responsible for defining, assigning, and reviewing tasks (in addition to being lead developer). Every Saturday I go over everything committed to the repository during the week (not exactly with a fine-tooth comb, just the general idea), and I use that, plus feedback from my team (and my own experiences), to tweak the task assignments for the next week. Sometimes I see two separate development threads attempting to solve the same underlying problem, so I'll design a component that solves both and converge development efforts.

Deployment Stage: When the team feels the project is ready for user eyeballs, we'll deploy it to a server, schedule a meeting with the client, and sit down with them to go over it. We ask them to have people start using it, and we gather feedback from them. We then use that feedback to identify any problems or shortcomings in the project and address them. This may take a few iterations, but eventually the client signs off on the software and we enter Maintenance.

Maintenance Stage: For the first 30 days after sign-off (longer if they purchased extended support) we provide free bugfixes, feature changes, tweaks, etc. If we did our jobs right during Design, any change they need is usually trivial to implement. After their complimentary support window ends, we offer free major bugfixes (breaking errors, security issues, etc.) for 6 months. During this time we also offer low-cost change requests.

Most of the waterfall horror stories come from teams led by a manager who has no firsthand experience with software development. The view from the trenches is usually enlightening.

I guess you could call our overall approach a waterfall of sprints. The key is to be able to anticipate customer requirements and have the architecture in place to easily implement their requests, or other unforeseen changes.

=====================================
Sonnybill Aug 6, 2014 4:33 AM New Poster Popular

I don't think it's fair to say that Waterfall is dead and Agile is the new way to go. Like Waterfall, Agile has existed since the 1970s (or arguably earlier), and I don't think it's even accurate to say that Microsoft has switched from Waterfall to Agile based on the information provided by this article.

"Consider, for example, what happened when a Visual Studio user installed the beta and a month later found and reported a bug. It was probably too late for anything to be done about it for that release."

Why? Beta = feature complete whether you're using Agile or Waterfall. Only bugs will be fixed, and like any piece of software, it will never be bug-free even after extensive testing.

The closest model to 'Agile' that I can think of is Google's old mantra that EVERYTHING is beta, there's no release date, and you can't expect anything to work (or call anybody for support) because it's beta. Once Google started making money and trying to enter the corporate world, they had to go back on that, because it's simply not good enough having an unsupported, beta office suite in an office environment. Same with phones... hence why they started making their 'Jelly Bean' and 'Ice Cream Sandwich' releases: corporate environments demand distinct releases.

---

Waterfall is used by Microsoft, and you can see this with their Windows/Office/Phone/etc. releases. Each piece of software has clear releases, features for each release, and bugs are fixed in order of priority.

Startups (e.g. Google in the old days) use this 'Agile' process where you basically slap things together, throw new features out there whenever, and fix bugs whenever. When you have a small team, a small code base, and mostly home users who aren't paying anything, this makes sense. When you're a multi-billion dollar company that companies rely on (and are paying a lot of money to use), then you need to have definite goals for each release. Yes, this sometimes leads to inflexibility, and yes, many 'bugs' raised during the public beta will probably be ignored or put in the 'next release' basket... but that's the difference between large companies and small '2 geeks in a garage' startups.

*******************************************************

turtlechurch Seniorius Lurkius Aug 6, 2014 6:28 AM

My perspective is that of an old dog (my first programs were FORTRAN on an S/370 in 1973). In the twilight of my career I was director of the IT research group at a multi-national company and led the transition from waterfall to OpenUP, and we finally adopted modified Scrum in 2010.

It is true that Agile borrows much from rolling-wave waterfall or the RAD movement of the 90s, but I do see it as the most important software development methodology change of the past 20 years. I believe that, if adopted in a way that is respectful of the culture and capabilities of your organization, it can be very successful. It has clearly improved our product quality and is generally well supported by our developers.

We are very disciplined about applying realistic velocities to our iteration plans and rely heavily on our retrospectives to drive continuous process improvement by taking seriously the concerns of our staff who are actually getting the 'real' work done. We have a number of experienced business product owners who are strongly invested in the process and are enthusiastic about the amount of control they have over the end product.
The collaborative aspect is important, but we provide a combination of war rooms and personal space to keep the work environment flexible.
My advice is to focus on little 'a' agile and leave the ivory-tower dogmatism of big 'A' Agile to the purists. Accept that requirements change and that we are in the business of performing largely non-repeatable tasks that have never been done quite the way we are doing them, nor will they ever be done that way again. Two of my favourite Agile maxims are "fail fast" and "maximize work not done". You know your senior management 'gets' being agile when they understand why those are good things.

======================================================
normally butters Ars Tribunus Militum Aug 6, 2014 6:52 AM

Like any good myth, agile software development is based around an incontrovertible moral (in this case: always be responsive to the needs of your customer) but mostly consists of diverse interpretative theories of what that means in practice -- and a whole lot of proprietary tools and consulting services.

In my view, the distinguishing aspect of managing software projects is that the requirements have to be generated in consultation with the developers, because product owners aren't necessarily suited to the task of conducting detailed analysis of abstract concepts. Waterfall formalizes this reality by having dedicated phases of the development cycle for negotiating requirements both before and after the main development phase. Agile attempts to side-step this reality by making the cycles short enough to allow product owners to evaluate the results of their requirements based on a concrete implementation rather than in the abstract.

In waterfall, good developers proactively anticipate problems with requirements and negotiate better solutions prior to implementation. In agile, developers implement the requirements and passive-aggressively use the implementation to demonstrate the problems with what they were told to do. It's one of those management theories which is designed to work even if the managers and workers involved aren't particularly skillful. Nobody needs to be proactive. Nobody needs to have a coherent vision. The process lets you pay the cost of mediocrity on an installment plan.

In my current shop, we actually added a management role to each team when we "went agile". The technical manager is basically a vessel into which the developers vent their frustration that they can no longer speak any sense into the product owners until the work is already done. If I were interviewing a candidate for this position, I would judge them entirely on their ability to recite with authentic compassion the following curative: "I hear you, man, and I agree. I'm on your side."

*****************************************************************
Agile is not a silver bullet, and it is not right for every project. The larger the project, the less it makes sense. It's useful for small projects where the technology or requirements are not well understood (like most web projects), when you have a lot of unknowns. http://programming-motherfucker.com/

=================================================

mfaraon Wise, Aged Ars Veteran et Subscriptor Aug 6, 2014 12:36 PM

Another one mistaking agile for a silver bullet. Agile is just another software development method, period. It has advantages and disadvantages like any other method. There are projects where an agile approach is more suitable, and projects where waterfall is the only sane option and an agile approach would completely screw them up. You decide whether you need agile or waterfall based on project/client requirements, not the other way round!

=================================================
|
|