By 2025, 85% of enterprises will operate on a cloud-first principle – a more efficient way to host data than on-premises. The acceleration of cloud adoption driven by Covid-19 and remote work has brought companies many benefits: lower IT costs, greater efficiency and reliable security.
The continued boom in this trend has also increased the risk of service disruptions and outages. Cloud providers are extremely reliable, but they are not “immune to failure.” In December 2021, Amazon reported that multiple Amazon Web Services (AWS) APIs were affected, and within minutes, many widely used websites went down.
So, how can companies reduce cloud risk, prepare for the next AWS outage and absorb sudden spikes in demand?
The answer is scalability and elasticity – two essential aspects of cloud computing that greatly benefit businesses. Let’s talk about the differences between scalability and elasticity and see how both can be built in at the cloud infrastructure, application and database levels.
Understand the difference between scalability and elasticity
Both scalability and elasticity relate to the number of requests a cloud system can handle concurrently – they are not mutually exclusive; a system may need to support both, separately.
Scalability is the system’s ability to stay responsive as the number of users and the volume of traffic gradually increase over time. It is therefore long-term growth that is strategically planned. Most B2B and B2C applications that gain usage will need this to ensure reliability, high performance and uptime.
With a few minor configuration changes and button clicks, a company can scale its cloud system up or down in a matter of minutes. In many cases this can be automated by cloud platforms, with scaling rules applied at the server, cluster and network levels, reducing engineering labor costs.
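At its core, an automated scaling rule compares a load metric against thresholds and adjusts capacity within configured bounds. Here is a minimal sketch of that decision logic; the function name and threshold values are hypothetical, not any specific cloud provider’s API:

```python
# Minimal sketch of an autoscaling decision rule. The thresholds and
# bounds are illustrative assumptions; real platforms apply comparable
# rules at the server, cluster and network levels.

def desired_instances(current: int, avg_cpu: float,
                      scale_up_at: float = 0.75,
                      scale_down_at: float = 0.25,
                      min_instances: int = 1,
                      max_instances: int = 10) -> int:
    """Return the instance count the scaler should converge toward."""
    if avg_cpu > scale_up_at:
        return min(current + 1, max_instances)   # scale out under load
    if avg_cpu < scale_down_at:
        return max(current - 1, min_instances)   # scale in when idle
    return current                               # steady state

print(desired_instances(current=3, avg_cpu=0.90))  # high load -> 4
print(desired_instances(current=3, avg_cpu=0.10))  # low load  -> 2
```

Running this rule periodically against measured CPU load is, in essence, what “automated scaling” means in practice.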
Elasticity is the ability of a system to stay responsive during short-term bursts or sudden spikes in load. Examples of systems that regularly face elasticity issues include NFL ticketing applications, auction systems and insurance companies during natural disasters. In 2020, the NFL leaned on AWS to livestream its virtual draft, when it needed far more cloud capacity.
A business that experiences unpredictable workloads but does not want a pre-planned scaling strategy can find an elastic solution in the public cloud, with lower maintenance costs. It is managed by a third-party provider and shared across multiple organizations over the public internet.
So, does your business have a predictable workload, a highly variable one, or both?
Work on scaling options with cloud infrastructure
When it comes to scalability, businesses must watch out for over-provisioning or under-provisioning. These happen when tech teams don’t provide quantitative metrics around the application’s resource requirements, or when the back-end scaling strategy doesn’t align with business goals. To determine a right-sized solution, ongoing performance testing is essential.
Business leaders reading this should talk to their tech teams about how they determine their cloud provisioning scheme. IT teams should continually measure response time, number of requests, CPU load and memory usage to track the cost of goods sold (COGS) associated with cloud spend.
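Those measurements can be turned into a simple over-/under-provisioning signal. The following sketch classifies provisioning from utilization and latency samples; the thresholds are illustrative assumptions, and a real check would use percentiles aggregated from monitoring data:

```python
# Hypothetical right-sizing check: flag over- or under-provisioning from
# ongoing performance measurements (response time, CPU, memory).
# Threshold values are assumptions for illustration only.

def provisioning_status(cpu_util: float, mem_util: float,
                        p95_response_ms: float,
                        target_response_ms: float = 300.0) -> str:
    """Classify provisioning from utilization and latency samples."""
    if p95_response_ms > target_response_ms or max(cpu_util, mem_util) > 0.85:
        return "under-provisioned"   # users feel it: slow responses
    if max(cpu_util, mem_util) < 0.30:
        return "over-provisioned"    # paying for idle capacity
    return "right-sized"

print(provisioning_status(cpu_util=0.92, mem_util=0.60, p95_response_ms=450))
```

Feeding such a check with real monitoring data is one way to keep provisioning decisions tied to measurements rather than guesswork.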
Different scaling techniques are available for organizations depending on business needs and technical constraints. So, will you scale up or out?
Vertical scaling involves scaling up or down and is used for applications that are monolithic, often built before 2017, and may be difficult to refactor. It means adding more resources, such as RAM or processing power (CPU), to your existing server when you have an increased workload, but it implies a scaling ceiling set by the capacity of the server. It requires no application architecture changes, as you are moving the same application, files and database to a larger machine.
Horizontal scaling involves scaling in or out by adding more servers to the original cloud infrastructure to work as a single system. Each server must be independent so that servers can be added or removed separately. It entails many architectural and design considerations around load-balancing, session management, caching and communication. Migrated legacy (or outdated) applications that were not designed for distributed computing must be refactored carefully. Horizontal scaling is especially important for businesses running high-availability services that require minimal downtime and high performance, storage and memory.
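The load-balancing consideration above can be sketched in a few lines: a round-robin balancer spreads requests across whatever servers are currently registered, which is exactly why each server must be independent – capacity can then be added or removed without touching the application. The class and server names are illustrative:

```python
# Sketch of horizontal scaling behind a load balancer: requests are
# spread round-robin across independent servers, so servers can be
# added (scale out) or removed (scale in) at any time.

from itertools import count

class RoundRobinBalancer:
    def __init__(self, servers):
        self.servers = list(servers)
        self._ticker = count()          # monotonically increasing counter

    def add(self, server):              # scale out
        self.servers.append(server)

    def remove(self, server):           # scale in
        self.servers.remove(server)

    def route(self, request):
        server = self.servers[next(self._ticker) % len(self.servers)]
        return f"{server} handles {request}"

lb = RoundRobinBalancer(["web-1", "web-2"])
print(lb.route("GET /patients"))   # web-1 handles GET /patients
lb.add("web-3")                    # capacity added with no downtime
print(lb.route("GET /patients"))   # web-2 handles GET /patients
```

Real balancers add health checks, session affinity and weighting, but the core idea – independent, interchangeable servers behind one entry point – is the same.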
If you’re not sure which scaling technique suits your company better, you may want to consider a third-party cloud engineering automation platform to help manage your scaling needs, goals and implementation.
Weigh how application architectures affect scalability and elasticity
Let’s take a simple healthcare application – the example applies to many other industries, too – and see how it can be developed across different architectures, and how that affects scalability and elasticity. Healthcare services came under heavy, sudden load during the Covid-19 pandemic and could have benefited from cloud-based solutions.
At a high level, there are two types of architecture: monolithic and distributed. Monolithic (or layered, modular monolith, pipeline and microkernel) architectures are not natively built for efficient scalability and elasticity – all modules are contained within the main body of the application and, as a result, the entire application is deployed as a single whole. There are three types of distributed architectures: event-driven, microservices and space-based.
Here’s a simple healthcare application:
- Patient portal – to register patients and book appointments.
- Physician portal – for medical staff to view health records, conduct medical exams and prescribe medication.
- Office portal – for the accounting department and support staff to collect payments and answer questions.
Hospital services are in high demand, and to support growth the application needs to scale the patient registration and appointment scheduling modules. This means it only needs to scale the patient portal, not the physician or office portals. Let’s look at how this application can be built on each architecture.
Monolithic architecture
Tech-enabled startups, including in healthcare, often go with this traditional, unified model for software design because of its speed to market. But it is not an ideal solution for businesses that require scalability and elasticity, because the application runs as a single integrated instance against a single centralized database.
To scale the application, adding more instances of it with load-balancing means scaling the other two portals along with the patient portal, even though the business doesn’t need that.
Most monolithic applications use a monolithic database – one of the most expensive cloud resources. Cloud costs increase rapidly with scale, and these arrangements are expensive, especially when it comes to maintenance time for development and operations engineers.
Another aspect that makes monolithic architectures unsuitable for supporting elasticity and scalability is mean-time-to-startup (MTTS) – the time it takes to start a new instance of the application. Because of the application’s large scope and its database, startup usually takes several minutes: engineers must create supporting functions, dependencies, objects and connection pools, and ensure security and connectivity to other services.
Event-driven architecture
Event-driven architecture is better suited than monolithic architecture to scaling and elasticity. It publishes an event when something noteworthy happens. Think of shopping on an ecommerce site during a busy period: you order an item, but then receive an email saying it is out of stock. Asynchronous messaging and queues provide back-pressure when the front end is scaled without scaling the back end, by queuing requests.
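The queuing-and-back-pressure idea can be sketched with a bounded queue: the front end enqueues requests and returns immediately, the back end drains the queue at its own pace, and requests that exceed capacity are pushed back instead of forcing the back end to scale. Names and the queue size are illustrative:

```python
# Sketch of asynchronous queuing between a scaled front end and a
# smaller back end: a bounded queue absorbs bursts, and requests that
# exceed it are rejected (back-pressure) rather than overloading the
# back end. Queue size of 3 is an arbitrary illustration.

import queue

orders = queue.Queue(maxsize=3)      # back end drains this asynchronously

def submit_order(order) -> str:
    try:
        orders.put_nowait(order)     # front end returns immediately
        return "accepted"
    except queue.Full:
        return "try again later"     # burst exceeded capacity

results = [submit_order(f"order-{i}") for i in range(5)]
print(results)  # first 3 accepted, the remaining 2 pushed back
```

The out-of-stock email in the shopping example is this pattern seen from the customer’s side: the order was accepted asynchronously, and the outcome arrived later as an event.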
In our healthcare application case study, this distributed architecture means each module has its own event processor, with the flexibility to share a database or keep a separate one. There is some flexibility at the application and database levels in terms of scale, as the services are no longer coupled.
Microservices architecture
This architecture views each service as a single-purpose service, giving businesses the ability to scale each service independently and avoid consuming valuable resources unnecessarily. For database scaling, the persistence layer can be designed and set up uniquely for each service, for individual scaling.
As with event-driven architecture, microservices cost more in cloud resources than monolithic architectures at low levels of usage. However, with increasing load, multitenant implementations and cases of bursts in traffic, they are more economical. MTTS is also very efficient and can be measured in seconds, thanks to the fine-grained services.
However, given the sheer number and distributed nature of the services, debugging can be harder, and maintenance costs can climb if services are not fully automated.
Space-based architecture
This architecture is built on a principle called tuple-space processing – multiple parallel processors with shared memory. It maximizes both scalability and elasticity at the application and database levels.
All application interactions take place with the in-memory data grid. Calls to the grid are asynchronous, and event processors can scale independently. For database scaling, there is a background data writer that reads and updates the database. All insert, update or delete operations are sent to the data writer by the corresponding service and queued to be picked up.
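A minimal sketch of that pattern: services read and write an in-memory grid, while a background data writer drains a queue and persists changes to the database. The names (`grid`, `update_patient`, `data_writer`) are illustrative, not any specific product’s API:

```python
# Sketch of the space-based pattern: fast in-memory interactions, with
# persistence handled asynchronously by a background data writer that
# drains a queue of operations. All names are illustrative.

import queue
import threading

grid = {}                       # in-memory data grid (shared memory)
write_queue = queue.Queue()     # operations queued for the data writer
database = {}                   # stand-in for the real database

def update_patient(pid, record):
    grid[pid] = record                         # fast in-memory interaction
    write_queue.put(("upsert", pid, record))   # persistence is asynchronous

def data_writer():
    while True:
        op, pid, record = write_queue.get()
        if op == "upsert":
            database[pid] = record             # eventual persistence
        write_queue.task_done()

threading.Thread(target=data_writer, daemon=True).start()

update_patient("p-100", {"name": "Ada", "status": "registered"})
write_queue.join()              # wait until the writer drains the queue
print(database["p-100"])        # {'name': 'Ada', 'status': 'registered'}
```

Because the service returns as soon as the grid is updated, the in-memory path stays in the millisecond range; the database write happens behind it, which is what keeps MTTS and request latency so low in this style.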
MTTS is extremely fast, usually taking a few milliseconds, since all data interactions happen in memory. However, all services must connect to the broker, and the initial cache load must be performed with a data reader.
In this digital age, companies want to increase or decrease IT resources as needed to meet changing demands. The first step is moving from monolithic systems to distributed architectures to gain a competitive edge – this is what Netflix, Lyft, Uber and Google have done. However, the choice of architecture is subjective, and decisions must be based on developers’ capability, average load, peak load, budget constraints and business-growth goals.
Shashank is a serial entrepreneur with a keen interest in innovation.
Welcome to the VentureBeat community!
DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation.