MACH made open source
Are you composing a modern MACH architecture, but don't want to start from scratch? Try MACH composer
Modern software engineering practices to achieve high quality
Testing is essential to any software project, but finding the right balance in the testing approach can be tricky. We apply many different testing techniques and follow the testing pyramid concept to decide which method to use when.
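To make the pyramid concrete, here is a minimal sketch using a hypothetical cart module (the function names are illustrative, not from an actual project): many cheap unit tests cover the pure functions at the base, while fewer, broader integration-style tests exercise the components together.

```python
# Hypothetical cart module illustrating the testing pyramid:
# many fast unit tests at the base, fewer integration tests above them.

def add_item(cart: dict, sku: str, qty: int) -> dict:
    """Pure function: easy to cover with many cheap unit tests."""
    updated = dict(cart)
    updated[sku] = updated.get(sku, 0) + qty
    return updated

def checkout_total(cart: dict, prices: dict) -> int:
    """Combines components: a target for fewer, broader integration tests."""
    return sum(prices[sku] * qty for sku, qty in cart.items())

# Base of the pyramid: unit tests on the pure function.
assert add_item({}, "shirt", 2) == {"shirt": 2}
assert add_item({"shirt": 2}, "shirt", 1) == {"shirt": 3}

# Higher up: one integration-style test across both functions.
cart = add_item(add_item({}, "shirt", 2), "mug", 1)
assert checkout_total(cart, {"shirt": 1500, "mug": 800}) == 3800
```

End-to-end tests at the top of the pyramid would drive the real user interface against a running system; they are the most valuable per test but also the slowest and most brittle, so they should be the fewest in number.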
Documentation is one of the most underappreciated topics within the software development process. What might sometimes seem like tedious work is, in our opinion, one of the most important parts. It improves the longevity of a platform by making it easier to maintain, forces you to think about what you want to achieve, and reduces the onboarding time of new colleagues on a project.
We don’t shy away from creating extensive documentation websites for our projects, which include everything from the history and goals of a project to detailed sequence diagrams of interactions between systems.
The 12 Factor methodology, introduced by Heroku in 2011, is an ‘old but gold’ approach to deploying applications in the cloud. We use it in almost every software application we build and apply its patterns even when not deploying to the cloud.
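One of the most widely applied factors is "Config" (factor III): deploy-specific settings live in the environment, not in the code, so the same build runs unchanged in every environment. A minimal sketch (the variable names below are illustrative, not prescribed by the methodology):

```python
import os

# Factor III ("Config"): read deploy-specific settings from the
# environment instead of hardcoding them. The same artifact can then be
# promoted from staging to production by changing only the environment.
DATABASE_URL = os.environ.get("DATABASE_URL", "sqlite:///dev.db")
DEBUG = os.environ.get("DEBUG", "false").lower() == "true"

print(f"database={DATABASE_URL} debug={DEBUG}")
```

The safe defaults make local development work out of the box, while production supplies real values through its environment.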
Immutability of deployments gives you a lot of flexibility and guarantees when deploying applications in the cloud. When properly implemented, immutability makes it easy to scale, maintain, and troubleshoot those applications.
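The core idea can be sketched as a toy release model (the registry URL and version names are made up for illustration): every deploy produces a new, frozen artifact, and a rollback simply points traffic back at an earlier one instead of mutating anything in place.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Release:
    """An immutable deployment artifact: identified by version, never edited."""
    version: str
    image: str

releases: dict[str, Release] = {}

def deploy(version: str, image: str) -> Release:
    # Each deploy creates a new release; existing releases are never changed.
    release = Release(version=version, image=image)
    releases[version] = release
    return release

def rollback(version: str) -> Release:
    # Rollback just re-points traffic at an earlier immutable release.
    return releases[version]

deploy("v1", "registry.example.com/app@sha256:aaa...")
current = deploy("v2", "registry.example.com/app@sha256:bbb...")
current = rollback("v1")
assert current.image.endswith("aaa...")
```

Because releases are never edited, scaling out is just starting more copies of the same artifact, and troubleshooting always starts from a known, reproducible state.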
A new approach to building distributed applications and exposing them through a unified graph.
Read more about using GraphQL federation in composable architecture.
A relatively new approach, pioneered by AWS Lambda in 2014, to building and hosting applications in the cloud without having to manage servers yourself. We leverage serverless for a wide range of workloads, including high-traffic web services.
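In the serverless model, an application is reduced to functions the platform invokes on demand. The sketch below follows the AWS Lambda handler signature and the API Gateway proxy event shape; the greeting logic itself is just a placeholder:

```python
import json

def handler(event, context):
    """A minimal AWS Lambda-style handler. The event shape follows the
    API Gateway proxy convention; the logic here is illustrative only."""
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello {name}"}),
    }

# Locally you can invoke it the way the platform would:
response = handler({"queryStringParameters": {"name": "MACH"}}, None)
assert response["statusCode"] == 200
```

Because there is no server process to manage, scaling, patching, and per-request billing are handled by the platform, which is what makes the model attractive even for high-traffic services.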
Event-driven architecture makes it easier to decouple services by using events as a communication method. This often results in a more resilient and scalable architecture since events can be queued and asynchronously processed by multiple consumers when needed. In modern cloud-native architectures, event-driven architecture goes hand-in-hand with serverless architectures.
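The decoupling works because producers only publish events and never call consumers directly. A minimal in-process sketch (the event names and handlers are hypothetical; in production the queue would be a managed broker such as SQS or EventBridge):

```python
import queue

events: "queue.Queue[dict]" = queue.Queue()

def publish(event_type: str, payload: dict) -> None:
    # The producer knows nothing about who consumes the event.
    events.put({"type": event_type, "payload": payload})

# Multiple independent consumers react to the same kind of event.
def send_confirmation(event: dict) -> str:
    return f"email for order {event['payload']['id']}"

def update_inventory(event: dict) -> str:
    return f"reserve stock for order {event['payload']['id']}"

handlers = {"order.placed": [send_confirmation, update_inventory]}

publish("order.placed", {"id": 42})

# Events sit in the queue until consumers process them asynchronously.
results = []
while not events.empty():
    event = events.get()
    for handle in handlers.get(event["type"], []):
        results.append(handle(event))
```

Adding a third consumer (say, analytics) requires no change to the producer, and a slow consumer only delays its own queue rather than the whole request.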
While we implement microservice architectures regularly, we still think that keeping the number of systems in your ecosystem to a minimum is almost always a design goal. That may well mean that a monolithic architecture is the right choice for your application. We are experienced in both, as well as in transitioning an existing monolithic architecture towards a microservice architecture.
Security is not an afterthought but needs to be part of the culture of the entire team working on a product. From a technical standpoint, this means reducing the attack surface as much as possible. We work under the assumption that source code gets leaked, that rogue parties will try to do things we did not foresee, and that new security issues will be found every day in the software we depend on.
Developing software is only part of the complete SDLC (software development lifecycle).
Running the software reliably and performantly in production, especially under high traffic, requires a thorough understanding of the complete software stack and the underlying infrastructure. That is why we prefer to take complete ownership of both the software and the infrastructure.
Continuously monitoring the performance and availability of the software and related services in an automated way is critical. Alongside the tools provided by cloud providers, such as AWS CloudWatch Metrics/Logs and Azure Insights, we use Sentry and Pingdom, as well as our internally developed tool, Folge, which notifies us when services are reporting errors.
When problems do occur, we must find out what caused the issue as soon as possible. We achieve this by making the platform ‘observable’: detailed metrics of all aspects of the system let us identify its exact state and the reason it is in that state. A proper logging setup and APM tooling such as AWS X-Ray and OpenTelemetry allow us to see in-depth traces of how data flows between all the systems, from client to external systems.
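The core mechanism behind such tracing can be sketched with the standard library alone. This is a simplified illustration of the idea behind tools like OpenTelemetry and X-Ray, not their actual APIs: every log line emitted during a request carries the same trace id, so all activity for one request can be correlated across log output.

```python
import contextvars
import logging
import uuid

# Each in-flight request carries its own trace id via a context variable.
trace_id = contextvars.ContextVar("trace_id", default="-")

class TraceFilter(logging.Filter):
    """Stamps every log record with the current trace id."""
    def filter(self, record):
        record.trace_id = trace_id.get()
        return True

logger = logging.getLogger("app")
stream = logging.StreamHandler()
stream.setFormatter(logging.Formatter("%(trace_id)s %(name)s %(message)s"))
stream.addFilter(TraceFilter())
logger.addHandler(stream)
logger.setLevel(logging.INFO)

def handle_request(payload: dict) -> None:
    # Hypothetical request handler: a fresh trace id per request means all
    # of its log lines, across helpers, can be grouped together later.
    token = trace_id.set(uuid.uuid4().hex)
    try:
        logger.info("request received")
        logger.info("calling external system")
    finally:
        trace_id.reset(token)

handle_request({"sku": "shirt"})
```

Real APM tooling extends this idea by propagating the trace id across process and network boundaries and by recording timed spans, which is what turns correlated logs into the end-to-end traces described above.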
In the end, we have one goal: whenever an issue occurs that cannot be solved automatically, we want to be the first to know about it and solve it before it becomes a problem.