The Essentials of Modern Software Engineering. Ivar Jacobson

today would call practices. There are solution-related “practices,” such as working with requirements, working with code, and conducting testing. There are endeavor-related practices, such as setting up a collaborative team and an efficient endeavor, as well as improving the capability of the people and collecting metrics. There are, of course, customer-related practices, such as making sure that what is built is what the customers really want.

      The interesting discovery we made more than a decade ago was that even though the number of methods in the world was huge, all these methods seemed to be just compositions of a much smaller collection of practices, maybe a few hundred such practices in total. Practices are what we call reusable because they can be used over and over again to build different methods.

      To understand how we as a software engineering community have improved our knowledge over the years, we provide a description of the historical developments. Our purpose with this brief history is to make it easier for you to understand why Essence was developed.

       2.2.1 There Are Lifecycles

      From the ad hoc approach used in the early years of computing came the waterfall method around the 1960s; actually, it was not just one single method—it was a whole class of methods. The waterfall methods describe a software engineering project as going through a number of phases such as Requirements, Design, Implementation (Coding), and Verification (i.e., testing and bug-fixing) (see Figure 2.1).

      While the waterfall methods helped to bring some discipline to software engineering, many people tried to follow the model literally, which caused serious problems, especially on large, complex efforts. This was because software engineering is not as simple as this linear representation indicates.

      A way to describe the waterfall methods is this: What do you have once you think you have completed the requirements? Something written on “paper.” (You may have used a tool and created an electronic version of the “paper,” but the point is that it is just text and pictures.) But since the requirements have not been used, do you know for sure at this point that they are the right ones? No, you don’t. As soon as people start to use the product developed based on your requirements, they almost always want to change it.

[Figure 2.1: The waterfall lifecycle, with phases such as Requirements, Design, Implementation (Coding), and Verification.]

      Similarly, what do you have after you have completed your design? More “paper” describing what you think needs to be programmed. But are you certain that it is what your customer really intended? No, you are not. However, you can easily claim you are on schedule, because you can simply write less, and with less quality.

      Even after you have programmed according to the design, you still don’t know for sure. None of the activities you have conducted provide proof that what you did is correct.

      Now you may feel you have done 80%. The only thing you have left is to test. At this point the endeavor almost always falls apart, because what you have to test is just too big to deal with as one piece of work: it is the code coming from all the requirements. You thought you had 20% left, but now you feel you may have 80% left. This is a common, well-known problem with waterfall methods.

      There are some lessons learned. Believing you can specify all requirements up front is just a myth in the vast majority of situations today. This lesson has led to the popularity of more iterative lifecycle methods. Iterating means you specify some requirements and build something meeting these requirements, but as soon as you start to use what you have built, you will know how to make it a bit better. Then you can specify some more requirements, build them, and test them, until you have something that you feel can be released. But to gain confidence you need to involve your users in each iteration to make sure that what you have provides value. These lessons gave rise, at the end of the 1980s, to a new lifecycle approach called iterative development, a lifecycle adopted by the agile paradigm now in fashion (see Figure 2.2).

[Figure 2.2: The iterative development lifecycle.]

      New practices came into fashion. The old project management practices fell out of favor, and practices relying on the iterative metaphor took their place. The most prominent of these was Scrum, which started to gain popularity at the end of the 1990s and is still very popular today. We will discuss this more deeply in Part III of the book.

       2.2.2 There Are Technical Practices

      Since the early days of software development, we have struggled with how to do the right things in our projects. Originally, we struggled with programming, because writing code was what we obviously had to do. The other things we needed to do were handled ad hoc. We had no real guidelines for how to do requirements, testing, configuration management, project management, and many of the other important things.

      Later, new trends became popular.

       2.2.2.1 The Structured Methods Era

      From the late 1960s to the mid-1980s, the most popular methods separated the software to be developed into the functions to be executed and the data that the functions would operate upon: the functions living in a program store and the data living in a data store. These methods were not farfetched, because computers at that time had a program store, for the functions translated into code, and a data store. We will just mention two of the most popular methods of that time: SADT (Structured Analysis and Design Technique) and SA/SD (Structured Analysis/Structured Design). As a student, you really don’t need to learn anything more about these methods. They were used for all kinds of software engineering. They were not the only methods in existence; there was a large number of published methods available, and around each method there were people strongly defending it. It was at this time in the history of software engineering that the methods war started. And, unfortunately, it has not yet finished!

[Figure 2.3: An example of SADT notation.]

      Every method brought with it a large number of practices, such as requirements, design, testing, defect management, and so on.

      Each had its own blueprint notation or diagrams to describe the software from different viewpoints and at different levels of abstraction (for example, see Figure 2.3 on SADT). Tools were built to help people use the notation and to keep track of what they were doing. Some of these practices and tools were quite sophisticated. The value of these approaches was, of course, that what was designed was close to the realization, that is, to the machine: you wrote the program separately from the way you designed your data. The problem was that programs and data are very interconnected, and many programs could access and change the same data. Although many successful systems were developed applying this approach, there were far more failures. The systems were hard to develop and even harder to change safely, and that became the Achilles’ heel for this generation of methods.
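      To make the separation concrete, here is a minimal sketch, not taken from the book, of the structured style expressed in TypeScript. The record type, the global table, and the function names are illustrative assumptions; the point is only that the data lives in one shared place while free-standing functions reach into it, so a change to the data layout ripples through every function that touches it.

// Shared "data store": the record layout is defined separately from the logic.
interface CustomerRecord {
  id: number;
  name: string;
  balance: number;
}

// A global table that any function may read or modify.
const customers: CustomerRecord[] = [];

// "Program store": free-standing functions that operate on the shared data.
function addCustomer(id: number, name: string): void {
  customers.push({ id, name, balance: 0 });
}

function applyPayment(id: number, amount: number): void {
  const customer = customers.find((c) => c.id === id);
  if (customer) {
    customer.balance -= amount;
  }
}

// A second, unrelated function also reaches into the same records.
// Changing the shape of CustomerRecord forces changes in every such function,
// which is the interconnection problem described above.
function printStatement(id: number): void {
  const customer = customers.find((c) => c.id === id);
  if (customer) {
    console.log(`${customer.name}: ${customer.balance}`);
  }
}

addCustomer(1, "Ada");
applyPayment(1, 25);
printStatement(1); // prints "Ada: -25"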

       2.2.2.2 The Component Methods Era

      The next method paradigm shift came in the early 1980s and had its high season until the beginning of the 2000s.

      In simple terms, a software system was no longer seen as having two major parts: functions and data. Instead, a system was a set of interacting elements called components (see also Sidebar 2.1). Each component had an interface connecting it with other components, and over this interface messages were communicated. Systems were developed by breaking them down into components, which collaborated with one another to implement the requirements of the system. What was inside a component was less important than the interface it presented to the rest of the system.
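      As a rough illustration, again not taken from the book, the following TypeScript sketch shows two components that know each other only through a small interface and collaborate by exchanging messages. The interface, the message shape, and the class names are assumptions made for the example; what matters is that each component hides its internals behind the interface it implements.

// A message passed between components; the structure is illustrative only.
interface Message {
  kind: string;
  payload: unknown;
}

// The interface is the only thing other components know about.
interface Component {
  receive(message: Message): void;
}

// One component: its internal state is hidden behind the interface.
class OrderComponent implements Component {
  private orders: string[] = [];

  constructor(private billing: Component) {}

  receive(message: Message): void {
    if (message.kind === "placeOrder") {
      this.orders.push(String(message.payload));
      // Collaborate with another component purely by sending it a message.
      this.billing.receive({ kind: "invoice", payload: message.payload });
    }
  }
}

// Another component with a completely different internal representation.
class BillingComponent implements Component {
  receive(message: Message): void {
    if (message.kind === "invoice") {
      console.log(`Invoicing for: ${String(message.payload)}`);
    }
  }
}

// Wiring the system together from components.
const billing = new BillingComponent();
const orders = new OrderComponent(billing);
orders.receive({ kind: "placeOrder", payload: "book-2183" });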

