A well-structured development process is the recipe for creating successful products that people love.
Keeping up to date with the ever-changing landscape of development frameworks and technologies takes passion and determination. Following the newest trends and continuously improving demands serious effort, strong motivation, and a good dose of humility.
Software Development isn't peanuts, especially on complex products with multiple developers involved. Relying on a strict, company-wide process has proven to be the key to success, ensuring on-time deliveries and high-quality, well-crafted interfaces.
Understand the Product
From a Developer's standpoint, there are a few things that must be understood from the get-go, as every piece of information will be crucial for making informed core product decisions.
To better understand the scope of the product, Developers must learn its functional and non-functional requirements: through an insightful reading of the Design and its macro and micro interactions, and by knowing the expected number of users and their demographic and geographic characteristics.
For such an understanding, a Team meeting is held with the Product Owner and the remaining team members, all of whom are responsible for briefing the Dev Team. Quite often, a Developer also takes part in the Workshops: a very intense journey of meetings, preferably in person, next to a whiteboard with a marker in hand and a wall full of post-its. More about it at How We Work.
Define Technology Stack
A technology stack is a crucial part of developing any piece of software. It is the combination of programming languages, frameworks, and tools developers use to build an interface.
The undeniable importance of choosing the right Tech Stack follows from the findings of Understand the Product, and the choice happens very early in the Software Development process for two reasons:
- It defines the skeleton of the product early on by influencing its scalability, performance, and by defining both its strengths and weaknesses.
Analogy: When building a car, you don’t start with the painting or the windshields. You start with the chassis and the body to hang everything else on.
- Because making significant Stack changes down the line is possible but painful. In most cases, it’ll involve a huge investment of time and money.
As such, we don't force a technology just because we're familiar with it. When a technology we master isn't right for a product, either we adapt or we don't do it at all.
To build consistent, high-quality products more efficiently, we break the interface down into self-contained, reusable components. This keeps the product's visual and interactive layers consistent, makes modifications easier, reduces friction in code dependencies, and eases maintenance.
In summary: a much more manageable product, split into independent components, allowing for a smoother and faster Development process.
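The component idea above can be sketched in a few lines. This is a minimal, framework-agnostic illustration (the `renderButton` function and its props are hypothetical, not part of any specific library); in practice the same principle applies to React, Vue, or any component model.

```typescript
// A self-contained, reusable component: one source of truth for its markup.
type ButtonProps = {
  label: string;
  variant?: "primary" | "secondary"; // visual consistency lives in one place
};

// Hypothetical render helper; every screen reuses it, so a change here
// propagates everywhere, which is what keeps the interface consistent.
function renderButton({ label, variant = "primary" }: ButtonProps): string {
  return `<button class="btn btn--${variant}">${label}</button>`;
}

// Two different screens, one component definition.
const checkoutCta = renderButton({ label: "Pay now" });
const cancelCta = renderButton({ label: "Cancel", variant: "secondary" });
```

Because the component owns both its structure and its styling hooks, a visual change is made once and picked up by every screen that uses it.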
We follow a Git Flow process which makes collaborative, parallel, and scalable coding more manageable and quality-driven. Essentially it relies on isolating new code from finished live code. With this, new code will only be eligible for Production once it gets past a series of automatic and manual tests.
At a very high level, the production-ready code is contained in one locked branch, and no one is allowed to make changes directly on it. We can also have another locked branch for a staging environment.
The actual work, so to speak, is done on other dedicated branches that can only be merged to staging or production via properly tested and reviewed Pull Requests.
Even though the biggest advantage is making sure only thoroughly reviewed, high-quality code is Deployed to a Live server, it also allows for easier parallel work and collaborative coding.
Software testing attempts to execute an interface, or part of it, with the purpose of finding defects. Testing is an iterative process: when one bug is fixed, it can reveal other hidden bugs, or even create new ones.
It is key in obtaining insights about the code's quality.
Unit Testing is when we test the smallest parts of the codebase. A unit test usually takes a few inputs and checks them against an expected output. Its purpose is to guarantee each unit/component performs the way it was designed to.
By testing more granular parts of the code more often, the risk of having bad code spreading like a disease through the entire product is reduced drastically. Code becomes more reliable and the confidence in changing and maintaining code sky-rockets.
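A unit test can be as small as one pure function, known inputs, and one expected output. The sketch below uses a hypothetical cart helper and a bare assertion; in a real codebase this would live in a Jest or Vitest spec file.

```typescript
// Unit under test: a tiny pure function (hypothetical cart helper).
function cartTotal(prices: number[], quantities: number[]): number {
  return prices.reduce((sum, price, i) => sum + price * (quantities[i] ?? 0), 0);
}

// A minimal unit test: fixed inputs, one expected output.
function testCartTotal(): void {
  const total = cartTotal([2.5, 4.0], [2, 1]); // 2 × 2.5 + 1 × 4.0
  if (total !== 9.0) throw new Error(`expected 9, got ${total}`);
}
testCartTotal();
```

Because the function has no side effects, the test is fast and deterministic, which is what makes running it on every change cheap.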
Integration Testing is when components are combined and tested as a group. Its purpose is to expose faults in the way units behave when combined.
During the process of manufacturing a water bottle, the cap, the body and the label are produced and tested separately. When Units are ready, they are assembled and tested together. For example, whether the cap fits into the body or not.
Integration Tests are conducted to evaluate the compliance of those Components with their specified functional requirements.
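The water-bottle analogy translates directly into code. In this sketch (all names are hypothetical), the cap and body are separately built units, and the integration test checks only their combination: whether the cap fits the body.

```typescript
// Two independently built and unit-tested "components".
type Cap = { diameterMm: number };
type Body = { neckDiameterMm: number };

function makeCap(): Cap {
  return { diameterMm: 28 };
}
function makeBody(): Body {
  return { neckDiameterMm: 28 };
}

// The integration check exercises the interface *between* the units:
// each may pass its own unit tests and still fail to fit the other.
function capFitsBody(cap: Cap, body: Body): boolean {
  return cap.diameterMm === body.neckDiameterMm;
}

if (!capFitsBody(makeCap(), makeBody())) {
  throw new Error("integration failure: cap does not fit body");
}
```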
End-to-end testing is meant to test an interface flow from start to finish. Its purpose is to simulate a real-user scenario and validate component integration and data integrity.
It is run under real-world conditions: communication of the application with the hardware, network, database, back-end, and other applications. For a food-ordering interface, a typical flow might be:
- Log into account by using valid credentials;
- Access homepage;
- Search a Restaurant;
- Open a Restaurant;
- Select an item, change quantity and add it to cart;
- Proceed to checkout;
- Pay using Credit Card.
The main reason for carrying out these tests is to exercise an application's various dependencies and to ensure that accurate information is communicated between system components. It is usually performed after Unit Tests and Integration Tests are complete.
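The flow above can be sketched as a single script that walks the whole journey and asserts the end state. Real end-to-end tests would drive a browser with a tool such as Playwright or Cypress; this simplified version runs against a fake in-memory app, and every name in it is hypothetical.

```typescript
// A fake app standing in for the real interface + back-end.
const app = {
  user: null as string | null,
  cart: [] as { item: string; qty: number }[],
  paid: false,
  login(user: string, pass: string): void {
    if (pass === "secret") this.user = user; // valid credentials only
  },
  addToCart(item: string, qty: number): void {
    this.cart.push({ item, qty });
  },
  payWithCard(): void {
    if (this.user && this.cart.length > 0) this.paid = true;
  },
};

// Walk the journey end to end, then assert on the final state
// (data integrity across every step, not just one component).
app.login("alice", "secret");   // log in with valid credentials
app.addToCart("Margherita", 2); // search/open a Restaurant, add an item
app.payWithCard();              // proceed to checkout, pay by Credit Card

if (!app.paid) throw new Error("end-to-end flow failed before payment");
```

The value is that the assertion at the end can only pass if every earlier step did its part, which is exactly what a real-user scenario is meant to verify.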
Code quality is indispensable for us. One of the strictest rules we follow is Continuous Integration (CI).
Having automatic tests adds an extra layer of confidence and enables the team to promptly detect any problems introduced by recent changes, such as regressions or malfunctioning components.
On every git commit, automatic tests on changed files are run locally with git hooks. On each Pull Request, tests are run for the entire codebase. These tests help to identify if anything has been broken by the newly committed changes.
When no problem is found, a Reviewer is responsible for manually verifying the code and getting it Merged.
There's a lot to be said about CI's advantages, but reducing risks for each build and clearing the way to get features out faster are only the tip of the iceberg.
After the code has passed all the automatic tests and has been double-checked by another developer, an automatic build containing the new feature is triggered and delivered. When necessary, we also have continuous deployment of pull requests (deploy previews) to help analyse the proposed changes.
Continuous delivery is all about the ability to continuously deliver integrated code, be it bug fixes or new features, to Staging or Production. It means the successful CI builds are ready to go, should we wish to release them.
Continuous deployment goes one step further, automatically deploying live every Master Branch change that passes CI.
However, for various reasons we might choose not to do it automatically as it might need further testing.
On top of all the Quality Assurance (QA) procedures described above, there's still an extra layer of verification. We never let any piece of our code be considered Done and sent to Live before being manually tested by humans.
Other Developers, the Design Team, or hand-picked users are responsible for meticulously testing our Interfaces. Nothing beats good old manual tests on real devices to be 100% sure things work and look the way they were envisioned in the first place.