The waterfall model, also referred to as the linear-sequential life cycle model, is very simple to understand and use. It is a plan-driven model with separate, distinct phases of specification and development: problem definition, feasibility study, requirements analysis, design, programming & module testing, integration & system testing, delivery, and maintenance. The waterfall model is just one of the four software process models that will be discussed this week. But before we get to that, here is a recap of what you learned last week.
Recap: last week's material
Last week, we covered the main gist of software engineering: what it is about and how we should utilize it to produce both generic products and custom ones. The evolution of design techniques was covered as well, along with the core principles of software practice.
SOFTWARE DEVELOPMENT LIFE CYCLE (SDLC)
The SDLC is the process followed for a software project: a detailed plan describing how to develop, maintain, replace, and alter or enhance specific software. Simply put, it is a methodology for improving software quality and the overall development process. It involves six stages: Continue reading “Software Development Life Cycle”
Asymptotic analysis is the backbone of algorithm analysis: it estimates the resource consumption of an algorithm by comparing the relative costs of two or more algorithms for solving the same problem. After reading this post, you should be familiar with three concepts: growth rate, upper and lower bounds of a growth rate, and how to calculate the running time of an algorithm. But before we get to that, here is a recap of what you learned last week.
Recap: last week's material
Last week, we covered the main gist of data structures: what they are about and how we should utilize them to solve problems. Design patterns were briefly touched upon, and the good attributes any algorithm should have were covered as well.
Asymptotic analysis is an estimation technique that measures the efficiency of an algorithm, or of its implementation, as the input size of a program increases. Generally, you will need to analyze the time required by an algorithm and the space required by a data structure. Here is [The Big Five], a set of functions I will be elaborating on shortly: Continue reading “Algorithm Analysis”
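To make "growth rate" concrete, here is a small sketch (not from the post itself) that counts worst-case basic operations for two hypothetical search strategies as the input size grows. The function names are mine, chosen for illustration; the point is only that an O(n) cost grows with n while an O(log n) cost barely moves:

```python
def linear_search_ops(n):
    # Worst case for a linear scan: the target is compared
    # against every one of the n items -> O(n) operations.
    return n

def binary_search_ops(n):
    # Worst case for binary search on sorted data: the search
    # space halves on each step -> O(log n) operations.
    ops = 0
    while n > 1:
        n //= 2
        ops += 1
    return ops

for n in (8, 64, 1024):
    print(n, linear_search_ops(n), binary_search_ops(n))
# At n = 1024 the linear count is 1024 while the halving count is only 10.
```

Comparing the two columns as n grows is exactly the kind of relative-cost comparison asymptotic analysis formalizes.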
A legacy system is an old computer system that continues to be used because it still functions according to the users’ needs, even though newer technology or more efficient methods of performing a task are now available. Convenience is mostly why clients follow the “old is gold” trend. What does this have to do with software engineering, though? For starters, software engineering has everything to do with theories, methods, and tools for developing professional software. However, around 60% of software costs are development costs, while the rest are testing costs. That expenditure is largely why the lifetime of software systems is expected to exceed 10 years, turning them into legacies.
Introduction to Software Engineering
Software costs dominate computer system costs, which is why [cost-effective] software development with good attributes like maintainability, dependability & security, efficiency, and acceptability is important. Hence, it is good to remember that reusing old software, when appropriate, is preferable to writing new software from scratch. Software products come in two forms: generic products, which are stand-alone systems sold to the public with rights fully owned by the developer; and customized products, which are software commissioned by a specific client, who is granted sole rights to the software. Generally, software engineering involves 4 activities: Continue reading “Introduction to Software Engineering”
For starters, data structures are essential building blocks in obtaining efficient algorithms. In other words, a data structure is an organization of, or structure for, a collection of data items. A data structure is chosen to solve a problem within the specified resource constraints: cost, time, technical capabilities, etc. It is important to remember that each data structure requires a certain amount of space for each data item it stores, a certain amount of time to perform a single basic operation, and a certain amount of programming effort. Hence, each problem is basically centered on two constraints: space and time.
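The space-versus-time trade-off above can be sketched with Python's built-ins (an illustration of mine, not from the post): the same items stored as a list are compact but need a linear scan for membership tests, while a set spends extra memory on a hash table to answer the same question in roughly constant time.

```python
import sys

data = list(range(10_000))
as_list = data        # compact storage; `x in as_list` scans items -> O(n)
as_set = set(data)    # hash table; `x in as_set` is O(1) on average,
                      # but the structure itself occupies more memory

# The set trades space for lookup time: its container overhead is larger.
print(sys.getsizeof(as_set) > sys.getsizeof(as_list))

# Both answer membership queries correctly; the set just does it faster.
print(9_999 in as_list, 9_999 in as_set)
```

Which structure is "better" depends entirely on the problem's constraints, which is precisely the point of the paragraph above.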
Introduction to Data Structures
The fundamental role of most (if not all) computer programs is to store and retrieve information as quickly as possible. As beginner coders, we usually focus on our programs performing calculations correctly while neglecting the speed of information retrieval and storage. In that regard, this unit will teach us how to structure information to support efficient processing. But before we get to that, we need to mentally prepare ourselves for three things (that we’ll get the hang of in the future): Continue reading “Data Structures – The Beginning”