- Fachkompetenz: Students have application-oriented knowledge of tools for requirements elicitation and modeling, process modeling and tailoring, effort estimation, software testing, product-line development, and software maintenance.
- Methodenkompetenz: Students know the methodological background of the presented tools and techniques and are therefore able to solve new problems as well. They can select the appropriate one from the presented methods.
- Systemkompetenz: Students can apply the presented methods and tools in projects across different domains.
- Sozialkompetenz: Students know the significance and influence of the learned methods and tools within a company. They can therefore align their approach and its results with the needs of a project in an organization.
- "... three years of project management experience, with 4500 hours leading and directing projects and 35 hours of project management education." (Successful and/or paid?)
### PMP Background
- Examination for Project Managers
- 200 multiple-choice questions in 4hrs -> 1min 12sec per question
- 2-3 weeks of preparation
- Paid renewal every three years
#### PMP Example Question 1 (taken from the PMP website)
An accepted deadline for a project approaches. However, the project manager realizes only 75 percent of the work has been completed. The project manager then issues a change request. What should the change request authorize?
1. Additional resources using the contingency fund
2. Escalation approval to use contingency funding
3. Team overtime to meet schedule
4. Corrective action based on causes
#### PMP Example Question 2 (taken from the PMP website)
The project manager develops a process improvement plan to encourage continuous process improvement during the life of the project. Which of the following is a valid tool or technique to assist the project manager to assure the success of the process improvement plan?
1. Change control system
2. Process analysis
3. Benchmarking
4. Configuration management system
#### PMP Example Question 3 (taken from the PMP website)
The project manager meets with the project team to review lessons learned from previous projects. In what activity is the team involved?
1. Performance management
2. Scope identification
3. Risk identification
4. Project team status meeting
## OpenUP
### Core Principles
(Made available under EPL v1.0)
OpenUP is based on a set of mutually supporting core principles:
- Collaborate to align interests and share understanding
- Evolve to continuously obtain feedback and improve
- Balance competing priorities to maximize stakeholder value
- Focus on articulating the architecture
### Collaboration: Some key practices
- Maintain a common understanding
- Key artifacts: Vision, requirements, architecture notebook, iteration plan
- Foster a high-trust environment
- Manage by intent, tear down walls, understand the perspectives of others
- Share responsibility
- Everybody owns the product, help each other
- Learn continuously
- Develop technical and interpersonal skills, be a student and a teacher
- Organize around the architecture
- The architecture provides a shared understanding of the solution and forms the
basis for partitioning work.
### Evolve: Some key practices
- Develop your project in iterations
- Use time-boxed iterations that deliver incremental value and provide frequent feedback.
- Focus iterations on meeting the next management milestone
- Divide the project into phases with clear goals and focus iterations on meeting those goals.
- Manage risks
- Identify and eliminate risk early.
- Embrace and manage change
- Adapt to changes.
- Measure progress objectively
- Deliver working software, get daily status, and use metrics.
- Continuously re-evaluate what you do
- Assess each iteration and perform process retrospectives.
### Balance: Some key practices
- Know your audience & create a shared understanding of the domain.
- Identify stakeholders early and establish a common language
- Separate the problem from the solution
- Understand the problem before rushing into a solution.
- Use scenarios and use cases to capture requirements
- Capture requirements in a form that stakeholders understand
- Establish and maintain agreement on priorities
- Prioritize work to maximize value and minimize risk early
- Make trade-offs to maximize value
- Investigate alternative designs and re-factor to maximize value
- Manage scope
- Assess the impact of changes and set expectations accordingly.
### Focus: Some key practices
- Create the architecture for what you know today
- Keep it as simple as possible and anticipate change
- Leverage the architecture as a collaborative tool
- A good architecture facilitates collaboration by communicating the "big-picture" and enabling parallelism in development.
- Cope with complexity by raising the level of abstraction
- Use models to raise the level of abstraction to focus on important high-level decisions.
- Organize the architecture into loosely coupled, highly cohesive components
- Design the system to maximize cohesion and minimize coupling to improve comprehension and increase flexibility.
- Reuse existing assets
- Don't re-invent the wheel.
### OpenUP is Agile and Unified
- OpenUP incorporates a number of agile practices...
- OpenUP incorporates a three-tiered governance model to plan, execute, and monitor progress.
- These tiers correspond to personal, team and stakeholder concerns and each operates at a different time scale and level of detail.
### OpenUP Project Lifecycle
- OpenUP uses an iterative, incremental lifecycle.
- Proper application of this lifecycle directly addresses the first core principle (Evolve).
- The lifecycle is divided into 4 phases, each with a particular purpose and milestone criteria to exit the phase:
- Inception: To understand the problem.
- Elaboration: To validate the solution architecture.
- Construction: To build and verify the solution in increments.
- Transition: To transition the solution to the operational environment and validate the solution.
### OpenUP Iteration Lifecycle
- Phases are further decomposed into a number of iterations.
- At the end of each iteration a verified build of the system increment is available.
- Each iteration has its own lifecycle: it begins with planning and ends with a stable system increment, an Iteration Review (did we achieve the iteration objectives?), and a Retrospective (is there a better process?).
- Progress on completion of micro-increments is monitored daily via "Scrums" and the iteration burndown chart to provide timely feedback.
![](Assets/Softwaretechnik2-openUp-lifecycle.png)
### Micro-Increments
- Micro-increments are small steps towards the goals of the iteration.
- Should be small enough to be completed in a day or two
- "Identify Stakeholders" is a micro-increment (one step of a task).
- "Determine Technical Approach for Persistency" is a micro-increment (a task with a specific focus).
- "Develop Solution Increment for UC 1 Main Flow" is a micro-increment (a task with a specific focus).
- Micro-increments are defined and tracked via the work items list.
- Work items reference requirements and process tasks as needed to provide
### Inception Phase
- The primary purpose of the Inception Phase is to understand the scope of the problem and feasibility of a solution.
- At the Lifecycle Objectives Milestone, progress towards meeting these objectives is assessed, and a decision is made to proceed with the same scope, change the scope, or terminate the project.
- More specifically, the objectives and associated process activities are:
### Elaboration Phase
- The primary purpose of the Elaboration Phase is to validate the solution architecture (feasibility and trade-offs).
- At the Lifecycle Architecture Milestone, progress towards meeting these objectives is assessed, and a decision is made to proceed with the same scope, change the scope, or terminate the project.
- More specifically, the objectives and associated process activities are:
### Construction Phase
- The primary purpose of the Construction Phase is to develop and verify the solution incrementally.
- At the Initial Operational Capability Milestone, progress towards meeting these objectives is assessed, and a decision is made to deploy the solution to the operational environment.
- More specifically, the objectives and associated process activities are:
| Objective | Process Activities |
| --- | --- |
| Iteratively develop a complete product that is ready to transition to the user community | Identify and Refine Requirements; Develop Solution Increment; Test Solution |
| Minimize development costs and achieve some degree of parallelism | Plan and Manage Iteration; Ongoing Tasks |
### Transition Phase
- The primary purpose of the Transition Phase is to deploy the solution to the operational environment and validate it.
- At the Product Release Milestone, progress towards meeting these objectives is assessed, and a decision is made to make the product generally available.
- More specifically, the objectives and associated process activities are:
### Impact Analysis Checklist for a Proposed Change
- Do any existing requirements in the baseline conflict with the proposed change?
- Do any other pending requirements changes conflict with the proposed change?
- What are the business or technical consequences of not making the change?
- What are possible adverse side effects or other risks of making the proposed change?
- Will the proposed change adversely affect performance requirements or other quality attributes?
- Is the proposed change feasible within known technical constraints and current staff skills?
- Will the proposed change place unacceptable demand on any computer resources required for the development, test, or operating environments?
- Must any tools be acquired to implement and test the change?
- How will the proposed change affect the sequence, dependencies, effort, or duration of any tasks currently in the project plan?
- Will prototyping or other user input be required to verify the proposed change?
- How much effort that has already been invested in the project will be lost if this change is accepted?
- Will the proposed change cause an increase in product unit cost, such as by increasing third-party product licensing fees?
- Will the change affect any marketing, manufacturing, training, or customer support plans?
- Identify any user interface changes, additions, or deletions required.
- Identify any changes, additions, or deletions required in reports, databases, or files.
- Identify the design components that must be created, modified, or deleted.
- Identify the source code files that must be created, modified, or deleted.
- Identify any changes required in build files or procedures.
- Identify existing unit, integration, system, and acceptance test cases that must be modified or deleted.
- Estimate the number of new unit, integration, system, and acceptance test cases that will be required.
- Identify any help screens, training materials, or other user documentation that must be created or modified.
- Identify any other applications, libraries, or hardware components affected by the change.
- Identify any third-party software that must be purchased or licensed.
- Identify any impact the proposed change will have on the project's software project management plan, quality assurance plan, configuration management plan, or other plans. [Wieg 1999], p346
## Metamodels
### What is Model Driven Development?
MDD proposes the usage of "models at different levels of abstraction and performs transformations between them in order to derive a concrete application implementation" [1].
Model
- Everything can be a representation of a model
- Source Code
- Word, Excel
- ...
- Conforms to a meta-model
- -> Manage complexity with a higher/smarter abstraction
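The conformance relation between a model and its meta-model can be sketched in a few lines. The element types and the model below are invented for illustration; real MDD tooling (e.g. EMF/Ecore) is far richer.

```python
# Minimal sketch (hypothetical names): a meta-model declares which element
# types and attributes a model may contain; a model instance is then
# checked for conformance against it.

class MetaModel:
    def __init__(self, element_types):
        # e.g. {"Class": {"name"}, "Association": {"source", "target"}}
        self.element_types = element_types

    def conforms(self, model):
        """A model conforms if every element's type and attributes
        are declared in the meta-model."""
        return all(
            el["type"] in self.element_types
            and set(el).difference({"type"}) <= self.element_types[el["type"]]
            for el in model
        )

# A tiny class-diagram-like meta-model ...
uml_like = MetaModel({"Class": {"name"}, "Association": {"source", "target"}})

# ... and a model expressed in terms of it
model = [
    {"type": "Class", "name": "Customer"},
    {"type": "Class", "name": "Order"},
    {"type": "Association", "source": "Customer", "target": "Order"},
]

print(uml_like.conforms(model))  # True: the model conforms to its meta-model
```

The same check rejects a model using an element type the meta-model does not declare, which is exactly how the meta-model "manages complexity": it constrains what a valid model can say.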
Definition ([Bere 2009] → [DoD 1991]): "Requirements engineering involves all lifecycle activities devoted to (1) identification of user requirements, (2) analysis of the requirements to derive additional requirements, (3) documentation of the requirements as a specification, and (4) validation of the documented requirements against user needs, as well as (5) processes that support these activities."
1. A condition or capability needed by a user to solve a problem or achieve an objective.
2. A condition or capability that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed documents.
3. A documented representation of a condition or capability as in (1) or (2).
- We need to know how to organize the requirements: where does a new requirement fit in, and where do we find the requirements on a given topic?
- Serves as a "checklist" for missing requirements / redundancies
- Wide / complete scope of the requirements document
- Relations
- ... to one/more use-case(s)
- ... to other requirements
- Use Interaction (one requirement refers to the implemented functionality of another requirement, e.g. Scrolling through a list of videos might make use of the immediate play-video functionality)
- Share Interaction (two requirements share the same resource, e.g. memory)
- ... to a component (off-the-shelf) of the system. (This is a relation to the system design)
- ... to a specific system variant (of a system family).
- Conflicts between requirements (should be resolved before the design phase)
- Conflicting requirements, side effects
- Analysis of the requirements, coverage metrics, automated traceability matrices
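The relations above can be recorded in an automated traceability matrix. A minimal sketch, with invented requirement and use-case identifiers:

```python
# Hypothetical sketch: a requirements-to-use-case traceability matrix.
# The identifiers (REQ-1, UC-1, ...) are invented for illustration.

links = [  # (requirement, use case) pairs captured during analysis
    ("REQ-1", "UC-1"), ("REQ-2", "UC-1"), ("REQ-2", "UC-2"),
]
requirements = ["REQ-1", "REQ-2", "REQ-3"]
use_cases = ["UC-1", "UC-2"]

# matrix[r][u] is True iff requirement r is traced to use case u
matrix = {r: {u: (r, u) in links for u in use_cases} for r in requirements}

# Coverage check: a requirement traced to no use case may be missing
# a relation, may be obsolete, or may be "gold-plating".
untraced = [r for r in requirements if not any(matrix[r].values())]
print(untraced)  # ['REQ-3']
```

Coverage metrics (e.g. the fraction of requirements with at least one trace link) fall out of the same structure.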
2. Danilo Assmann, Ralf Kalmar, Teade Punter, "Handbuch, Messen und Bewerten von WebApplikationen mit der Goal/Question/Metric Methode", IESE-Report Nr. 087.02/D, Version 1.2, 2002
Please read and observe the following directions carefully:
"For each question, fill in the upper and lower bounds that, in your opinion, give you a 90% chance of including the correct value. Be careful not to make your ranges either too wide or too narrow. Make them wide enough so that, in your best judgment, the ranges give you a 90% chance of including the correct answer. Please do not research any of the answers; this quiz is intended to assess your estimation skills, not your research skills. You must fill in an answer for each item; an omitted item will be scored as an incorrect item. Please limit your time on this exercise to 10 minutes."
-> Most people's ranges contain the correct value only about 30% of the time, even though they believe they are 90% confident!
#### Ten Key Characteristics of Software Executives
1. Executives will always ask for what they want.
2. Executives will always probe to get what they want if they don't get it initially.
3. Executives will tend to probe until they discover your point of discomfort.
4. Executives won't always know what's possible, but they will know what would be good for the business if it were possible.
5. Executives will be assertive. That's how they got to be executives in the first place.
6. Executives will respect you when you are being assertive. In fact, they assume you will be assertive if you need to be.
7. Executives want you to operate with the organization's best interests at heart.
8. Executives will want to explore lots of variations to maximize business value.
9. Executives know things about the business, the market, and the company that you don't know, and they may prioritize your project's goals differently than you would.
10. Executives will always want visibility and commitment early (which would indeed have great business value, if it were possible). [McCo 2006], p260
### Estimation Improvement with the Capability Maturity Model
Estimation improved at the Boeing Company: as with the U.S. Air Force projects, the predictability of the projects improved dramatically at higher CMM levels. [McCo 2006, p10]
| What to count | Historical data to collect |
| --- | --- |
| Marketing requirements | Average effort hours per requirement for development |
| | Average effort hours per requirement for independent testing |
| | Average effort hours per requirement for documentation |
| | Average effort hours per requirement to create engineering requirements from marketing requirements |
| Features | Average effort hours per feature for development and/or testing |
| Use cases | Average total effort hours per use case |
| | Average number of use cases that can be delivered in a particular amount of calendar time |
| Stories | Average total effort hours per story |
| | Average number of stories that can be delivered in a particular amount of calendar time |
| Engineering Requirements | Average number of engineering requirements that can be formally inspected per hour |
| | Average effort hours per requirement for development/test/documentation |
| Function Points | Average development/test/documentation effort per Function Point |
| | Average lines of code in the target language per Function Point |
| Change requests | Average development/test/documentation effort per change request (depending on variability of the change requests, the data might be decomposed into average effort per small, medium, and large change request) |
| Web pages | Average effort per Web page for user interface work |
| | Average whole-project effort per Web page (less reliable, but can be an interesting data point) |
| Reports | Average effort per report for report work |
| Dialog Boxes | Average effort per dialog for user interface work |
| Database Tables | Average effort per table for database work |
| | Average whole-project effort per table (less reliable, but can be an interesting data point) |
| Classes | Average effort hours per class for development |
| | Average effort hours to formally inspect a class |
| | Average effort hours per class for testing |
| Defects found | Average effort hours per defect to fix |
| | Average effort hours per defect to regression test |
| | Average number of defects that can be corrected in a particular amount of calendar time |
| Configuration settings | Average effort per configuration setting |
| Lines of code already written | Average number of defects per line of code |
| | Average lines of code that can be formally inspected per hour |
| | Average new lines of code from one release to the next |
### Wideband Delphi
1. The Delphi coordinator presents each estimator with the specification and an estimation form.
2. Estimators prepare initial estimates individually. (Optionally, this step can be performed after step 3.)
3. The coordinator calls a group meeting in which the estimators discuss estimation issues related to the project at hand. If the group agrees on a single estimate without much discussion, the coordinator assigns someone to play devil's advocate.
4. Estimators give their individual estimates to the coordinator anonymously.
5. The coordinator prepares a summary of the estimates on an iteration form and presents the iteration form to the estimators so that they can see how their estimates compare with other estimators' estimates.
6. The coordinator has estimators meet to discuss variations in their estimates.
7. Estimators vote anonymously on whether they want to accept the average estimate. If any of the estimators votes "no," they return to step 3.
8. The final estimate is the single-point estimate stemming from the Delphi exercise. Or, the final estimate is the range created through the Delphi discussion and the single-point Delphi estimate is the expected case.
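Steps 5 and 8 above can be sketched as a small computation: the coordinator summarizes one round of anonymous estimates into an expected case and a range. The numbers are invented for illustration.

```python
# Sketch of Delphi steps 5 and 8: summarizing one round of anonymous
# estimates (in staff-days; the values are invented for illustration).
estimates = [12, 18, 15, 30, 16]

expected = sum(estimates) / len(estimates)   # single-point estimate (step 8)
low, high = min(estimates), max(estimates)   # range created through discussion

print(f"expected case: {expected:.1f} staff-days, range {low}-{high}")
```

A wide range (here 12 to 30) is exactly the signal that triggers step 6: the estimators meet to discuss why their assumptions differ.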
### Proxy-Based Estimates
Classify existing components/features by size (small, medium, large) and estimate new features by analogy to these classes.
and Management Rules" (NASA, 1991), Microsoft Secrets (Cusumano and Selby 1995).
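A minimal sketch of proxy-based estimation, with invented historical effort data: average the effort per size class, then classify each new feature and sum the class averages.

```python
# Proxy-based estimation sketch: classify completed features as
# small/medium/large, then estimate new features by class averages.
# All effort numbers (hours) are invented for illustration.

historical = {  # effort hours of completed features, by size class
    "small": [8, 10, 12],
    "medium": [30, 36],
    "large": [90, 110, 100],
}
avg = {size: sum(h) / len(h) for size, h in historical.items()}

# New features are classified by analogy to existing ones ...
new_features = ["small", "small", "medium", "large"]

# ... and the estimate is the sum of the class averages.
estimate = sum(avg[size] for size in new_features)
print(round(estimate))  # 153 hours
```

The accuracy of the estimate depends on how homogeneous the size classes really are, which is why the historical data should come from the same team and domain.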
### Checklist for individual estimates
1. Is what's being estimated clearly defined?
2. Does the estimate include all the kinds of work needed to complete the task?
3. Does the estimate include all the functionality areas needed to complete the task?
4. Is the estimate broken down into enough detail to expose hidden work?
5. Did you look at documented facts (written notes) from past work rather than estimating purely from memory?
6. Is the estimate approved by the person who will actually do the work?
7. Is the productivity assumed in the estimate similar to what has been achieved on similar assignments?
8. Does the estimate include a Best Case, Worst Case, and Most Likely Case?
9. Is the Worst Case really the worst case? Does it need to be made even worse?
10. Is the Expected Case computed appropriately from the other cases?
11. Have the assumptions in the estimate been documented?
12. Has the situation changed since the estimate was prepared?
[McCo 2006, p110]
# Testing
## Motivation
Software Testing
- Operate/use a system with a set of known inputs and/or a set of (environmental) conditions
- Observe the reaction of the system and compare against the expected reaction
- -> Test against the requirements
- Measure the quality of a System
- Keep the quality of a system
- While changing the system (-> maintenance)
- Regression Testing
Reference: ISTQB® Glossary of Testing Terms v2.2 (ANSI/IEEE 610.12-1990)
## Definition: Error
- Difference between _actual_ and _desired_ behavior (Istverhalten <-> Sollverhalten)
- Failure: Deviation of the component or system from its expected delivery, service, or result. (German ISTQB term: Fehlerwirkung; also: äußerer Fehler, Ausfall)
- Fault -> Defect: A flaw in a component or system that can cause the component or system to fail to perform its required function, e.g. an incorrect statement or data definition. A defect, if encountered during execution, may cause a failure of the component or system. (German ISTQB term: Fehlerzustand; also: innerer Fehler)
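The fault/failure distinction can be made concrete with a tiny invented example: the defect is present in the code at all times, but only certain inputs turn it into an observable failure.

```python
# Sketch: a fault (defect) in the code causes a failure only when the
# defective statement is executed with a triggering input.

def absolute(x):
    if x < -10:       # fault: the condition should be `x < 0`
        return -x
    return x

print(absolute(5))    # 5  -> correct; the faulty condition does not matter here
print(absolute(-20))  # 20 -> correct; the fault is executed but masked
print(absolute(-5))   # -5 -> failure: actual behavior deviates from desired (5)
```

This is why testing against the requirements needs inputs from every relevant input region: the fault stays invisible until an input from the deviating region (here -10 <= x < 0) is used.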
![Matthias Grochtmann, "Test Case Design Using Classification Trees", 1994; http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.83.9731](Assets/Softwaretechnik2-Classification-Trees.png)
### Plan System Test
- Task
- Refine Requirements / Test Cases
- Develop Test Model(s)
- Check
- Requirements refined?
- Test Cases refined?
- Review
- Refinement ok / feasible?
- Completeness?
- Test Case Execution
- Possible?
- Manual / automated execution?
- Estimation of test case execution time
Test Plan (IEEE829) [https://cabig.nci.nih.gov/archive/CTMS/Templates]
1. INTRODUCTION
1. SCOPE
2. QUALITY OBJECTIVE
3. ROLES AND RESPONSIBILITIES
4. ASSUMPTIONS FOR TEST EXECUTION
5. CONSTRAINTS FOR TEST EXECUTION
6. DEFINITIONS
2. TEST METHODOLOGY
1. PURPOSE
2. TEST LEVELS
3. BUG REGRESSION
4. BUG TRIAGE
5. SUSPENSION CRITERIA AND RESUMPTION REQUIREMENTS
##### Static Code Analysis
- Missing destructors from classes using dynamic allocation
- Creation of temporaries
- Operator delete not checking argument for NULL
- Conflicting function specifiers
- ...
- -> Could also check MISRA rules
##### MISRA
[Motor Industry Software Reliability Association](http://www.misra.org.uk)
- Conform to ISO 9899 standard (C-Language)
- Multibyte characters and wide string literals shall not be used
- Sections of code should not be commented out
- In an enumerator list the = construct shall not be used to explicitly initialise members other than the first unless it is used to initialise all items
- Bitwise operations shall not be performed on signed integer types
- The _goto_ statement shall not be used
- The _continue_ statement shall not be used
- The _break_ statement shall not be used, except to terminate the cases of a _switch_ statement
- ....
### Unit Testing
- Single functions / methods
- ~30 min per complexity point (McCabe)
- Prioritization
  - By McCabe complexity
  - By frequency of use
  - By criticality
- Equivalence class testing
- Pre- / Post-conditions, Invariants
- "White-Box" testing
- Timing on a fine granularity level (-> functions)
Equivalence Class Testing
- A function F has a number of variables
-`void setdate(int day, int month, int year)`
- The variables have the following boundaries and intervals
- ->Decision to finalize the project (Payment ... $$)
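The equivalence-class idea can be sketched for the `setdate` example above. The class boundaries are assumptions for illustration (day 1..31, month 1..12, year 1900..2100); one representative value is tested per class instead of every possible input.

```python
# Equivalence classes for setdate(day, month, year), with assumed ranges:
# day 1..31, month 1..12, year 1900..2100. One representative value is
# chosen per class; the invalid classes lie outside the valid intervals.

classes = {
    "day":   {"invalid_low": 0,    "valid": 15,   "invalid_high": 32},
    "month": {"invalid_low": 0,    "valid": 6,    "invalid_high": 13},
    "year":  {"invalid_low": 1899, "valid": 2000, "invalid_high": 2101},
}

def setdate_valid(day, month, year):
    """Oracle for the expected reaction: True iff all inputs are in range."""
    return 1 <= day <= 31 and 1 <= month <= 12 and 1900 <= year <= 2100

# One test per class: vary one parameter, keep the others at valid values.
for param, cases in classes.items():
    for name, value in cases.items():
        args = {"day": 15, "month": 6, "year": 2000, param: value}
        expected = name == "valid"
        assert setdate_valid(**args) == expected
print("all equivalence-class tests passed")
```

Boundary-value testing refines this further by picking the representatives at the edges of each interval (e.g. day = 1 and day = 31) rather than in the middle.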
## Example: Testing a Soft Starter
Source: Florian Kantz, Thomas Ruschival, Philipp Nenninger, Detlef Streitferdt, "Testing with Large Parameter Sets for the Development of Embedded Systems in the Automation Domain", 2nd IEEE International Workshop on Component-Based Design of Resource-Constrained Systems (CORCS) at the 33rd IEEE International Computer Software and
- Domain Engineering: Domain engineering is the process of software product line engineering to define and realize the commonality and the variability of the product line.
- Application Engineering: Application engineering is the process of software product line engineering to build the applications of the product line by reusing domain artifacts and exploiting the product line variability.
## The Concept of Variability
- Variation Point (of development/design elements): A variation point is a representation of a variability subject within domain artifacts enriched by contextual information.
- Variant (core + set of elements with variation points): A variant is a representation of a variability object within domain artifacts.
- Variability in Time: Variability in time is the existence of different versions of an artifact valid at different (application lifecycle) times. (-> roadmap )
- Variability in Space: Variability in space is the existence of an artifact in different shapes at the same time. (-> binding time )
- External Variability: External variability is the variability of domain artifacts that is visible to customers.
- Internal Variability: Internal variability is the variability of domain artifacts that is hidden from customers.
[Pohl 2005]
## Modeling of Product Lines using Features
### Features
A feature is a user-visible property of a product -> the user is willing to pay for such a property
- Define an interface for creating an object, but let subclasses decide which class to instantiate. Factory Method lets a class defer instantiation to subclasses.
- Defining a "virtual" constructor
- The new operator considered harmful
- Problem
- A framework needs to standardize the architectural model for a range of applications, but allow for individual applications to define their own domain objects and provide for their instantiation
##### Factory Method
![](Assets/Softwaretechnik2-factory-method.png)
Example: Pocket Coffee Machine
- One-Button Operation, -> prepare coffee
- ... one machine per pad type ...
- The machine type shall be "jumpered" at production time
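The pocket coffee machine example can be sketched as a Factory Method: the framework side calls an abstract factory method, and each machine subclass decides which coffee object to instantiate. The class names are invented for illustration.

```python
# Factory Method sketch for the coffee-machine example above.
# Class names are invented; one subclass per pad/machine type.
from abc import ABC, abstractmethod

class Coffee:
    def __init__(self, kind):
        self.kind = kind

class CoffeeMachine(ABC):            # the framework side
    def brew(self):                  # one-button operation
        return self.create_coffee()  # defers instantiation to subclasses

    @abstractmethod
    def create_coffee(self):         # the factory method
        ...

class EspressoMachine(CoffeeMachine):   # one machine per pad type
    def create_coffee(self):
        return Coffee("espresso")

class LungoMachine(CoffeeMachine):
    def create_coffee(self):
        return Coffee("lungo")

print(EspressoMachine().brew().kind)  # espresso
```

The framework code (`brew`) never contains a `new`/constructor call for a concrete product, which is exactly the "new operator considered harmful" point: the variation is bound by choosing the subclass, e.g. "jumpered" at production time.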
### Compile / Link Time
- Use different libraries (libraries with the same interface)
- Update (-> replace libraries)
- Static linking
- Linker directives, #ifdef
### Startup Time
Deliver different systems
### Runtime
Variability at Runtime
- Use different libraries
- Update (-> replace libraries at system startup)
- Configuration files
- switch/if , based on config-files
- Virtual Machines / Scripts, Interpreter
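The config-file mechanism above can be sketched as a switch/if over a configuration read at startup. The configuration keys and variant names are invented for illustration.

```python
# Runtime variability sketch: a configuration (normally read from a
# config file at startup) selects the variant via switch/if.
# Keys and variant names are invented for illustration.
import json

config_text = '{"persistence": "sqlite"}'   # would normally live on disk

def create_store(config):
    # switch/if based on the configuration -> variability bound at runtime
    if config["persistence"] == "sqlite":
        return "SQLiteStore"
    elif config["persistence"] == "file":
        return "FileStore"
    raise ValueError("unknown variant")

config = json.loads(config_text)
print(create_store(config))  # SQLiteStore
```

Changing the variant then requires no rebuild, only a different configuration file, which is the key difference from compile-time mechanisms like `#ifdef` or static linking.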
## Aspect-Oriented Programming
AOP was developed in the mid-1990s by Gregor Kiczales at the Xerox Palo Alto Research Center (PARC); he is now a professor at the University of British Columbia, Canada.
- Handle cross-cutting concerns in software systems to increase the code maintainability and reusability
- Within the AuthPayment pointcut, ep refers to the object of the advised method
- call : method call
- execution: execution of the method content
- get/set : reading / writing of attributes
- initialization : ... of the object
- handler : exception handler
- Signature of the method being part of the aspect.
- Wildcards:
  - `*` : an arbitrary number of characters
  - `..` : an arbitrary number of characters, including the dot '.'
  - `+` : includes the subtypes (here: subtypes of Routing)
- When to inject code?
  - before: before the base method is executed
  - after: after the base method is executed
  - around: new behavior around the base method; the original method is invoked via proceed()
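This is not AspectJ, but the before/after/around idea can be illustrated in plain Python with a decorator: the wrapper plays the role of around advice, and calling the wrapped function corresponds to proceed(). The function names are invented.

```python
# Decorator analogy (not AspectJ): a wrapper injects behavior before,
# after, and around a base method, similar to advice on a pointcut.
import functools

def logging_aspect(func):
    @functools.wraps(func)
    def around(*args, **kwargs):           # "around" advice
        print(f"before {func.__name__}")   # "before" advice
        result = func(*args, **kwargs)     # corresponds to proceed()
        print(f"after {func.__name__}")    # "after" advice
        return result
    return around

@logging_aspect
def pay(amount):
    return f"paid {amount}"

print(pay(42))  # prints before pay / after pay, then: paid 42
```

Unlike real AOP, the decorator must be attached explicitly per function; a pointcut in AspectJ selects all matching join points declaratively, which is what makes cross-cutting concerns like logging so compact there.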
### Summary: Aspect-Oriented Programming
- Very good _separation of concerns_
- Design of aspects is vital for the success of the development
- Only part of the system's behavior is visible in the base code; the rest is hidden in aspects.
- Hard to (statically) analyze the system.
- Testing of such systems ...
## Domain Specific Languages
Definition: A Domain Specific Language (DSL) is a computer programming language (-> grammar and syntax) to formulate / implement solutions for problems in a specified (limited) domain.
- A common LCOM variant (Henderson-Sellers), consistent with the symbols below: $LCOM = \frac{\frac{1}{n}\sum_{j=1}^{n} m(A_j) - m}{1 - m}$
- $m(A_j)$ is the number of methods accessing an attribute $A_j$
- $n$ is the number of attributes
- $m$ is the number of methods
- $LCOM \gg 1$ is alarming; small values ($< 1$) are better.
- Hint: such a class could be split into a number of (sub)classes.
- ![](Assets/Softwaretechnik2-highly-coupled.png)
- Changes in A cause the need to check B, C, D, E, F
- The interface of A might be hard to reuse in future/other projects
- Coupling : A component which highly depends (by method calls) on another component is strongly coupled.
- Afferent Coupling = #Classes outside a package that depend on classes inside the package.
- Efferent Coupling = #Classes inside a package that depend on classes outside the package.
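Using the symbols defined above (m(A) = number of methods accessing attribute A, n = number of attributes, m = number of methods), the Henderson-Sellers LCOM variant ((1/n)·Σ m(A_j) − m) / (1 − m) can be computed directly. The class layout is invented for illustration.

```python
# LCOM sketch using the definitions above; the attribute/method layout
# of the examined class is invented for illustration.

accesses = {          # attribute -> set of methods that access it
    "balance":  {"deposit", "withdraw"},
    "owner":    {"rename"},
    "log_file": {"rename"},
}
methods = {"deposit", "withdraw", "rename"}

n = len(accesses)                                   # number of attributes
m = len(methods)                                    # number of methods
avg = sum(len(ms) for ms in accesses.values()) / n  # (1/n) * sum of m(A_j)

lcom = (avg - m) / (1 - m)
print(round(lcom, 2))  # 0.83 -> close to 1: low cohesion
```

The value close to 1 reflects that the methods fall into two disjoint groups ({deposit, withdraw} on balance, {rename} on owner/log_file), so splitting the class along those groups is the suggested remedy.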
### Design Level
- "Tell, don't ask!"
  - Bad: `car.getSteeringWheel().getAngle()`
  - Better: `car.getDirectionOfTravel()`
- Start with reference architectures and refine ...
- Layers, pipes and filters, plug-in, client / server, MVC
- Use design patterns, (or at least) their concepts
- A class / component interface should hide most of the complexity underneath (-> Facade Pattern)
- 30-Rule, [Rooc 2004], p35
  - Methods <= 30 LLOC
  - #Methods per class < 30
  - #Classes per package < 30
  - #Packages per subsystem < 30
  - System < 30 subsystems
  - #Layers 3 ... 10
- Usage / Inheritance Relations
- Inheritance hierarchy <10
- In and between Packages
- Keep a small hierarchy (<5)
- In and between Subsystems
- Keep APIs small
- In and between Layers
- Use layers at all!
- Calls should follow the layer structure
- Don't use/allow cycles
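The "Tell, don't ask!" rule from the list above can be sketched in a few lines: instead of pulling internal state out of the object (steering wheel, angle) and deciding outside, the caller tells the object what it wants to know. The class details are invented for illustration.

```python
# "Tell, don't ask" sketch; follows the car example above, details invented.

class SteeringWheel:
    def __init__(self, angle):
        self._angle = angle   # negative = turned left, positive = right

class Car:
    def __init__(self, wheel_angle):
        self._wheel = SteeringWheel(wheel_angle)

    def direction_of_travel(self):
        # The car answers the question itself; callers stay decoupled
        # from the steering-wheel internals (no getSteeringWheel chain).
        return "left" if self._wheel._angle < 0 else "right"

print(Car(-15).direction_of_travel())  # left
```

The payoff is lower coupling: if the steering model changes (e.g. rear-axle steering is added), only `Car.direction_of_travel` changes, not every caller.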
### What is the simplest design?
By Kent Beck, [Beck 2000], page 109
1. The system (code and tests together) must communicate everything you want to communicate.
2. The system must contain no duplicate code.
3. The system should have the fewest possible classes.
4. The system should have the fewest possible methods.
### Requirements Level
- Sometimes hard to find, but easy to change at very low cost!
- Inconsistencies
- Redundancy
- Contradictions
- Misspellings
- Wording (domain specific)
- Constraints (missing ~)
- Missing requirements vs. "goldplating"
- ...
## Refactoring
### Refactoring Overview
- Changes to the software ("beautifying" its structure) without changing the behavior!
- "Refactoring (noun): a change made to the internal structure of software to make it easier to understand and cheaper to modify without changing its observable behavior." [Fowl 1999, Martin Fowler, "Refactoring - Improving the Design of Existing Code", Addison Wesley, 1999, page 53.]
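A minimal before/after sketch of the definition above: the observable behavior (the computed total) is unchanged, while the internal structure improves through an extracted, intention-revealing helper. The example code is invented.

```python
# Refactoring sketch (invented example): behavior preserved, structure improved.

# Before: one function mixes iteration, filtering, and price logic.
def total_before(order_lines):
    t = 0
    for q, p, d in order_lines:
        if q > 0:
            t += q * p * (1 - d)
    return t

# After: an extracted helper with an intention-revealing name.
def line_price(quantity, unit_price, discount):
    return quantity * unit_price * (1 - discount)

def total_after(order_lines):
    return sum(line_price(q, p, d) for q, p, d in order_lines if q > 0)

lines = [(2, 10.0, 0.1), (0, 99.0, 0.0), (1, 5.0, 0.0)]
assert total_before(lines) == total_after(lines)  # observable behavior unchanged
print(total_after(lines))  # 23.0
```

The assert plays the role of the regression test suite: refactoring without tests that pin down the observable behavior is just changing the code and hoping.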