software development

Modern software engineering practices to achieve high quality

Modern software engineering

We are passionate about software engineering and the complete process that leads to creating the applications we deliver.

Having built software for many years, we have a good understanding of what it means to practice the craft of software engineering. When building the final solutions, we take aspects such as maintainability, security, performance, and fault tolerance into account.

Software engineering, in our field, also means taking maximum advantage of managed solutions provided by cloud providers and SaaS vendors. Knowing those solutions and the principles behind them in-depth is a must.

As a tech agency, we are often responsible for the end-to-end delivery of our software, from front-end and back-end development and cloud-native infrastructure to site reliability engineering. This means that it is essential for us to be able to continuously ship high-quality software into high-traffic production environments without any interruptions. Our teams must be equipped to deliver all of it in a future-proof way.

Development workflow

Our development workflow roughly follows the Pull Request Flow / Trunk-based development process. With that, it is critical to also implement mature CI/CD pipelines. This includes sufficient automated test coverage on every branch and commit, so that code can be integrated and deployed to production with confidence.

Testing

Testing is essential to any software project, but finding the right balance in the testing approach can be challenging. We apply many different techniques to testing and follow the testing pyramid concept to decide which method to use when.

  • Unit/integration testing at the code level
  • Cypress/Playwright for UI and acceptance testing
  • API testing through tools such as Postman
  • Stress and load-testing using Locust.io
  • Automated smoke testing after production deployments
  • Manual QA testing with a focus on usability for end-users
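At the base of the testing pyramid sit fast, isolated unit tests. As a minimal sketch (the `calculate_discount` function is a hypothetical example, not code from one of our projects), a pytest-style unit test looks like this:

```python
def calculate_discount(price: float, percentage: float) -> float:
    """Apply a percentage discount, never accepting an invalid percentage."""
    if not 0 <= percentage <= 100:
        raise ValueError("percentage must be between 0 and 100")
    return round(price * (1 - percentage / 100), 2)


def test_calculate_discount():
    # Happy path: a 25% discount on 100.00 yields 75.00.
    assert calculate_discount(100.0, 25) == 75.0
    # Edge case: a 0% discount leaves the price unchanged.
    assert calculate_discount(10.0, 0) == 10.0


def test_calculate_discount_rejects_invalid_percentage():
    # Invalid input must fail loudly rather than silently mis-price.
    try:
        calculate_discount(100.0, 150)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for percentage > 100")
```

Tests like these run on every branch and commit, so the higher, slower layers of the pyramid (UI, API, and load tests) only have to catch what the cheap tests cannot.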

Documentation

Documentation is one of the most underappreciated topics within the software development process. What might sometimes seem like tedious work is, in our opinion, one of the most important. It improves the longevity of a platform by making it easier to maintain, forces you to think about what you want to achieve, and reduces the onboarding time of new colleagues on a project.

We don’t shy away from creating extensive documentation websites for our projects, which include everything from the history and goals of a project to detailed sequence diagrams of interactions between systems.

Architecture patterns

Architecture patterns continuously evolve. Deciding on an approach has a significant impact on the long-term success of your project. Over the years, we have applied many patterns and know intimately when and how to apply these, and we continuously stay up to date with the latest methods.

12 Factor methodology

The 12 Factor methodology, introduced by Heroku in 2011, is an ‘old but gold’ approach to deploying applications in the cloud. We use it in almost every software application we build and apply its patterns even when not deploying to the cloud.
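One of its factors, "store config in the environment", can be sketched in a few lines. The variable names below (`DATABASE_URL`, `DEBUG`, `PORT`) are illustrative defaults, not a prescribed convention:

```python
import os

# 12 Factor, factor III: configuration lives in the environment, not in
# the codebase, so the same build artifact runs unchanged in every
# environment (local, staging, production).
DATABASE_URL = os.environ.get("DATABASE_URL", "sqlite:///local.db")
DEBUG = os.environ.get("DEBUG", "false").lower() == "true"
PORT = int(os.environ.get("PORT", "8000"))
```

Because the defaults are only fallbacks, deploying to a new environment is a matter of setting variables, never of editing or rebuilding code.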

Immutable deployments

Immutability of deployments gives you a lot of flexibility and guarantees when deploying applications in the cloud. When properly implemented, immutability makes it easy to scale, maintain, and troubleshoot applications in the cloud.

Federated architecture

A new approach to building distributed applications and exposing them through a unified graph.

Read more about using GraphQL federation in composable architecture.

Serverless architecture

A relatively new approach, pioneered by AWS Lambda in 2014, to hosting and building applications in the cloud without the need to host servers yourself. We leverage serverless for a wide range of workloads, including high-traffic web services.
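The unit of deployment in a serverless setup is a small handler function. A minimal sketch of an AWS Lambda handler behind an API Gateway proxy integration (the greeting logic is purely illustrative) looks like this:

```python
import json


def handler(event, context):
    """AWS Lambda entry point, invoked per request by the platform.

    `event` follows the API Gateway proxy integration shape; no server
    process is managed by us, the cloud provider scales invocations.
    """
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello {name}"}),
    }
```

The provider handles provisioning, scaling, and patching of the underlying servers; we only ship and monitor the function itself.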

Event-driven architecture

Event-driven architecture makes it easier to decouple services by using events as a communication method. This often results in a more resilient and scalable architecture since events can be queued and asynchronously processed by multiple consumers when needed. In modern cloud-native architectures, event-driven architecture goes hand-in-hand with serverless architectures.
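The core mechanic, producers emitting events onto a queue that multiple consumers drain asynchronously, can be sketched in-process with the standard library. In production the queue would be a managed broker (for example SQS or EventBridge); this toy version only illustrates the decoupling:

```python
import queue
import threading

events: "queue.Queue" = queue.Queue()  # stands in for a managed broker
processed = []
lock = threading.Lock()


def consumer():
    # Each consumer drains events independently; producers never wait
    # for consumers, which is what makes the coupling loose.
    while True:
        event = events.get()
        if event is None:  # sentinel: shut this worker down
            events.task_done()
            break
        with lock:
            processed.append(f"handled {event['type']}")
        events.task_done()


workers = [threading.Thread(target=consumer) for _ in range(2)]
for w in workers:
    w.start()

# Producers simply publish and move on.
events.put({"type": "order.created"})
events.put({"type": "order.paid"})

for _ in workers:  # one shutdown sentinel per worker
    events.put(None)
events.join()
for w in workers:
    w.join()
```

Because events are queued, consumers can be scaled up, paused, or replaced without the producers noticing, which is where the resilience of the pattern comes from.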

Microservices and monolithic architectures

While we implement microservice architectures regularly, we still think that having the smallest possible number of systems in your ecosystem is almost always a design goal. That may still mean that you should implement a monolithic architecture for your application. We are experienced in both, as well as in transitioning an existing monolithic architecture towards a microservice architecture.

Security by design

Security is not an afterthought but needs to be part of the culture of the complete team working on a product. From a technical standpoint, this means reducing the attack surface as much as possible. We work from the assumption that source code gets leaked, that rogue parties will try to do things we did not foresee, and that new security issues will be found every day in the software we depend on.

Site Reliability Engineering

Developing software is only part of the complete SDLC (software development lifecycle).

Running the software reliably and performantly in high-traffic production environments requires a thorough understanding of the complete software stack and underlying infrastructure. Because of this, we prefer to take complete ownership of both the software and the infrastructure.

Continuously monitoring the performance and availability of the software and related services in an automated way is critical. In addition to the tools provided by cloud providers, such as AWS CloudWatch Metrics/Logs and Azure Insights, we use Sentry and Pingdom alongside our internally developed tool, Folge, which notifies us when services report errors.

When problems do occur, we must ensure that we know what caused the issue as soon as possible. We achieve this by making the platform 'observable', so we can identify the exact state of the system and the reason it is in that state through detailed metrics of all aspects of the system. A proper logging setup and APM tooling like AWS X-Ray and OpenTelemetry allow us to see in-depth traces of how data flows between all the systems, from client to external systems.
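A small part of that observability is emitting logs as structured data, so an aggregator can query fields instead of parsing free text. A stdlib-only sketch (real setups typically attach trace IDs via an APM SDK rather than by hand, and the `checkout` logger name is illustrative):

```python
import json
import logging


class JsonFormatter(logging.Formatter):
    """Render each log record as one JSON object per line."""

    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
            # Correlates this log line with a distributed trace, if any.
            "trace_id": getattr(record, "trace_id", None),
        })


handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("checkout")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# `extra` fields become queryable attributes in the aggregator.
logger.info("payment processed", extra={"trace_id": "abc-123"})
```

With every service logging the same JSON shape and a shared trace ID, a single query in the log platform reconstructs one request's path across all systems.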

In the end, we have one goal: whenever an issue occurs that cannot be solved automatically, we want to be the first to know about it and solve it before it becomes a problem.

we're looking for a

senior backend engineer

Contact us to hear more about our career framework for software engineers. We offer a modern foundation for your ambitions and growth.