We discuss heuristic principles of modern software engineering and what works to build good software faster. We talk about the concepts of CI/CD, agile and modularity, among others.
Software engineering applies an empirical, scientific approach to finding economical solutions to software problems. We need to manage the complexity of the systems we create to maintain our ability to learn and adapt to them.
Software Engineering Practices
To be experts at learning, we work iteratively: complex systems are the product of many small steps, and we try out ideas and improve along the way.
To be experts at managing complexity, we need:
- Separation of concerns
- Loose coupling
We need to combine the ideas with tooling to drive software development:
- Controlling the variables
- Continuous delivery
Software engineering as a concept emerged at the end of the 1960s. The term was first used by Margaret Hamilton, the director of the Software Engineering Division of the MIT Instrumentation Lab, when she led the effort to develop the flight control software for the Apollo space program.
Learning is a kind of accretion; we build up layers of understanding, with each layer foundationally supported by the previous one. Sometimes, we change our perspective to learn new things and discard what we have learned.
Engineering applies an empirical, scientific approach to finding efficient, economical solutions to practical problems. Engineering is not equivalent to code; it is the process, tools and techniques used to derive solutions.
The effectiveness of software development teams is measured by stability and throughput; teams that score well on both are considered high performers. Stability is determined by change failure rate (how often a change introduces a defect) and recovery time (how long it takes to recover from a failure). We use lead time (how long it takes for a single-line change to go from idea to working software) and deployment frequency (how often changes are deployed into production) to track throughput.
How can you improve and learn fast as a Software Engineer?
Iteration allows us to learn, react and adapt what we have learnt. It is a procedure in which the repetition of a sequence of operations yields results successively closer to the desired result. Iteration is the heart of all exploratory learning and is fundamental to accurate knowledge acquisition.
Agile planning depends on decomposing work into pieces small enough to complete our features within a single sprint or iteration. This was promoted as a way to measure progress, but it also profoundly improved teams' ability to gather definitive feedback on the quality and appropriateness of their work regularly. This change increases the rate at which the team can learn.
Working iteratively in small, definite, production-ready steps provides excellent feedback in software engineering.
Iteration as a defensive design strategy
Waterfall thinking builds in the assumption that change becomes more expensive as time goes on. The only sensible response is then to make the most critical decisions early in the life of the project. However, software development never begins with every piece of work completely understood, no matter how hard we try to analyse things before starting work.
Surprises, misunderstandings and mistakes are typical in software development because it is an exercise in exploration and discovery. Working iteratively allows us to reduce and flatten the cost of change and is an idea at the heart of continuous delivery.
To work iteratively, we need to work in smaller batches to reduce the scope of each change and make changes in smaller steps to try out techniques, ideas and technology more frequently. We limit the time horizon over which assumptions need to hold.
In agile disciplines, we work towards completed, production-ready code in short, fixed periods. In continuous integration, we commit our changes frequently, multiple times per day, even if the feature they contribute to is not yet complete. Each change needs to be atomic. This gives us opportunities to learn whether our code works alongside everyone else's. In test-driven development, we write a test, run it and see it fail, write just enough code to make it pass, then refactor the code to make it clear, elegant and more general.
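The red-green-refactor cycle above can be sketched with a small, hypothetical example (the `slugify` helper and its test are illustrations, not part of the original text):

```python
# Hypothetical example of one TDD cycle for a small slugify helper.

# 1. Red: write the test first and watch it fail (slugify does not exist yet).
def test_slugify():
    assert slugify("Modern Software Engineering") == "modern-software-engineering"
    assert slugify("  Hello World  ") == "hello-world"

# 2. Green: write just enough code to make the test pass.
# 3. Refactor: tidy the passing code; here the chained form is already clear.
def slugify(title):
    return title.strip().lower().replace(" ", "-")

test_slugify()  # re-run after every small change
```

Each pass through the cycle is one small, atomic step, and rerunning the test after every change keeps the feedback loop short.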
If we want to work iteratively, we must change how we work to facilitate it.
Feedback allows us to establish a source of evidence for our decisions. It transmits corrective information about an action, event or process to the source.
How do you create feedback in coding? One way is to use a test-driven approach. Before adding new behaviour to a system, write a test, see it fail, and make changes in tiny steps. Every time a change is made, rerun the previous tests. These feedback cycles are short and valuable. Feature branching, by contrast, is about isolating feature changes.
Continuous integration gives programmers regular, frequent drips of feedback and powerful insights into the state of the code and the behaviour of the system throughout the working day. Commits need to be pushed frequently to get that feedback. We get frequent, fine-grained feedback on the quality and applicability of our work.
TDD is useful for giving feedback on the quality of the design. If the tests are hard to write, that often reflects poorly on the quality of the code. TDD applies pressure to create code of high quality.
Continuous delivery is a high-performance feedback approach to development. We should produce software that is always ready to be released into production. We need to consider the architectural qualities of the systems, including testability and deployability.
We should aim to create releasable software at least once per hour, which means we need to be able to run our tests multiple times every hour.
Creating early feedback loops
Development tools can highlight errors in the IDE, the fastest, cheapest feedback loop. Tests can be run modularly in the development environment, and a full test suite is run after code is committed.
Feedback on product design
Software developers are paid to create value for the organisation, not nicely designed, testable code. This is the tension between business-focused people and technically-focused people in organisations. The goal is the continuous delivery of valuable ideas into production. To achieve this, we need to close the feedback loop from the consumers of the software.
Historically, software development has been measured in lines of code, developer days, or test coverage. These are easy to measure. However, these are not correlated with success.
In agile development, the people doing the development are brought into the feedback loop, so they can observe the results of their work and refine their choices over time.
Feedback needs to be fast and effective. Continuous delivery and continuous integration are development processes optimised to maximise the quality and speed of the feedback they collect.
Incrementalism is about building value progressively: building a system and releasing it piece by piece. Modularity has clear advantages: each component can be built to focus on one part of the problem, and different groups can work largely independently of the others. Teams can make incremental steps forward without needing to coordinate too much between them, giving organisations the freedom to move forward and innovate at an unprecedented pace.
The ports and adapters pattern allows pieces within the system to be more independent. We have more freedom to change the code behind an adapter without forcing changes on other components that interact with it through the port.
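A minimal sketch of the idea, with hypothetical names (`OrderStore`, `place_order`, `InMemoryOrderStore` are illustrations): the core logic depends only on the port, and any adapter that satisfies it can be swapped in without touching the core.

```python
from typing import Protocol

# The "port": the interface the core logic depends on.
class OrderStore(Protocol):
    def save(self, order_id: str, total: float) -> None: ...

# Core application logic knows only the port, not any concrete adapter.
def place_order(store: OrderStore, order_id: str, total: float) -> None:
    if total <= 0:
        raise ValueError("order total must be positive")
    store.save(order_id, total)

# An "adapter": one concrete implementation behind the port.
# A database-backed or HTTP-backed adapter could replace it unchanged.
class InMemoryOrderStore:
    def __init__(self):
        self.orders = {}

    def save(self, order_id, total):
        self.orders[order_id] = total

store = InMemoryOrderStore()
place_order(store, "A-1", 42.0)
```

Because `place_order` only sees the port, the in-memory adapter used here could later be replaced by a persistent one without changing the core logic at all.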
How should you manage software complexity within software engineering?
Modularity is essential to managing the complexity of systems. Systems should be built from small, more easily understood pieces. The code in a module should be short enough to be understood as a standalone thing, outside the context of other parts of the system. A module controls the scope of its variables and limits access to its internals, so there is an outside (the interface) and an inside to each module. TDD gives us an immediate signal on the quality of our design as we define it for the next increment in behaviour.
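As an illustration of an outside and an inside (the `pricing` module and its names are hypothetical): the underscore-prefixed names are internal details, and callers touch only the one public entry point.

```python
# pricing.py - a small module with a deliberately narrow public interface.
# Names starting with "_" are internal; callers only use quote().

_TAX_RATE = 0.2  # internal detail, free to change without affecting callers

def _apply_tax(amount):
    # Internal helper: hidden behind the module's public surface.
    return amount * (1 + _TAX_RATE)

def quote(unit_price, quantity):
    """The module's single public entry point."""
    return round(_apply_tax(unit_price * quantity), 2)
```

Everything behind `quote()` can be rewritten freely, because the rest of the system only depends on the small interface at the boundary.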
Modularity is key to our ability to make progress when we do not have a clear view of how our software will work in the future. We can change code and systems in one place without worrying about the impact of those changes elsewhere.
Cohesion is the degree to which the elements in a module belong and work together. Related things should be kept together and easy to see. The goal of code is to communicate ideas to humans, and ideas should be expressed clearly.
High-performance systems demand simple, well-designed code that takes the most straightforward possible route through the problem. When cohesion is poor, our systems are less flexible and more difficult to test and work on.
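A small, hypothetical illustration of the difference (the function names are invented for this sketch): in the cohesive version, everything about one idea lives in one place.

```python
# Low cohesion: unrelated responsibilities lumped into one function.
def process(user, order):
    ...  # parsing, pricing and presentation all mixed together

# Higher cohesion: each function owns one closely related cluster of logic.
def order_total(items):
    """Everything about pricing lives here, nowhere else."""
    return sum(price * qty for price, qty in items)

def format_receipt(name, total):
    """Everything about presentation lives here."""
    return f"Receipt for {name}: ${total:.2f}"

total = order_total([(9.99, 2), (5.00, 1)])
receipt = format_receipt("Ada", total)
```

Because related things sit together, a pricing change touches only `order_total`, and a formatting change touches only `format_receipt`.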
Separation of concerns
Separation of concerns is a design principle for separating a computer program into distinct sections so that each section addresses a different concern. It provides clarity and focus in code and systems. Code and architecture can be kept clean, focused, composable and scalable.
Dependency injection is where the dependencies of a piece of code are supplied to it as parameters rather than created by it. It is a tool to help minimise coupling and form a line of demarcation between concerns.
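A minimal sketch, with hypothetical names (`ReportGenerator`, the clock and sink parameters are illustrations): because the class does not construct its own dependencies, tests can inject a fixed clock and an in-memory sink.

```python
from datetime import datetime, timezone

class ReportGenerator:
    def __init__(self, clock, sink):
        # Both dependencies are injected, not created here.
        self._clock = clock  # a callable returning the current time
        self._sink = sink    # anywhere output lines can be appended

    def run(self, body):
        stamp = self._clock().isoformat()
        self._sink.append(f"[{stamp}] {body}")

# In production we might inject a real clock and a file-backed sink.
# In tests we inject deterministic substitutes, which is only possible
# because neither dependency is hard-wired inside the class.
fixed_clock = lambda: datetime(2024, 1, 1, tzinfo=timezone.utc)
lines = []
ReportGenerator(fixed_clock, lines).run("nightly totals")
```

The line of demarcation is the constructor: everything the class needs crosses it explicitly, so the coupling to any particular clock or output mechanism is gone.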
Separating essential and accidental complexity
Essential complexity is directly linked to the problem you are trying to solve; it is the system's tangible business value. Accidental complexity is everything else: the problems not directly related to solving the problem at hand. We should work to minimise accidental complexity and focus very clearly on separating the essential and accidental complexities of our systems. Designing code and systems for testability helps to separate these concerns.
Managing coupling
Coupling is the degree of interdependence between software modules: how connected two routines or modules are. If our code is too tightly coupled, we are forced to worry about complex problems like concurrency. Generally, looser coupling is preferred over tighter coupling. Coupling also affects our ability to scale up development, because tightly coupled teams pay a high overhead in coordination.
Microservices are a way to reduce the level of coupling. They draw a line around modules and define the abstractions between them. Microservices are small, focused on a task, aligned with a bounded context, autonomous and independently deployable. They are a way to scale up development in an organisation; if you do not need to scale up development, you do not need microservices.
Complex software systems can be broken down into simple modules using the fundamental guardrails presented in this article. Optimise your work and maximise your ability to learn to do a better job. Manage the complexity of all the work you do to sustain that ability. We at Latent help build bespoke, modular enterprise software in accordance with best practices.
Latent Workers provides AI workers through chat interfaces that handle generalised tasks. Our trained AI workers help you streamline your workflows, reduce errors and enhance your services. With real-time answers to your industry-related questions, our AI workers make your job easier and more efficient.