
Architecture and operation of web applications in a cloud environment (Part 2)

September 25, 2022 in Digital Marketing Blogs Posts

In the first part of this article we covered the conventional models of cloud services, the principles of developing PHP applications intended for deployment in this environment, and finally the economics of cloud operation.

In this second part we share practical experience with deploying a PHP system that was not originally designed for a cloud environment.

Also, you should not miss the previous part of this article series about web integration – Architecture and operation of web applications in a cloud environment (Part 1). It is a must-read!

CMS system

Our CMS system uses a local file system or a relational database as a repository for CMS data. It accelerates access to CMS objects with multi-level caching (on the 1st level in memory within a single request, on the 2nd level in the APC cache, and on the 3rd level in a reserved section of the file system), and for full-text indexing and subsequent searching it uses the Apache Solr service.
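
A minimal sketch of how such a multi-level lookup might be structured in PHP is shown below; the class and helper names are hypothetical, not the actual CMS API, and the APCu extension is assumed:

<?php
// Hypothetical sketch of a multi-level cache lookup: request memory first,
// then the shared APC(u) cache, then a reserved section of the file system.
class MultiLevelCache
{
    private array $requestCache = [];   // level 1: per-request, in memory
    private string $fileCacheDir;       // level 3: reserved file system section

    public function __construct(string $fileCacheDir)
    {
        $this->fileCacheDir = $fileCacheDir;
    }

    public function get(string $key): mixed
    {
        // Level 1: memory within the current request
        if (array_key_exists($key, $this->requestCache)) {
            return $this->requestCache[$key];
        }

        // Level 2: shared APC(u) cache on the web server
        $value = apcu_fetch($key, $hit);
        if ($hit) {
            return $this->requestCache[$key] = $value;
        }

        // Level 3: reserved section of the local file system
        $file = $this->fileCacheDir . '/' . md5($key) . '.cache';
        if (is_file($file)) {
            $value = unserialize(file_get_contents($file));
            apcu_store($key, $value);
            return $this->requestCache[$key] = $value;
        }

        return null; // miss on all levels; the caller loads the object from the CMS repository
    }

    public function set(string $key, mixed $value): void
    {
        $this->requestCache[$key] = $value;
        apcu_store($key, $value);
        file_put_contents($this->fileCacheDir . '/' . md5($key) . '.cache', serialize($value));
    }
}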

Using a local file system in combination with full-text search significantly affects the options for deployment to the cloud. Interaction between the CMS system and the CMS model is very intense: in practice, a single web request performs dozens to hundreds of small accesses (mtime checks, fread operations) to objects of the CMS model.

Access latency, combined with how effective the caching is, therefore fundamentally affects the speed at which pages are assembled. While preparing the system for deployment to the Microsoft Azure cloud, we gradually implemented and tested several storage implementations, starting with the existing support for a relational database and ending with a storage implementation combining the Azure Table and Blob Storage services.

In both cases, the relatively high latency of storage access provided in Azure as a SaaS service turned out to be a problem (in the order of tens of milliseconds for the relational database and up to hundreds of milliseconds for Table/Blob storage).

The solution was to treat data already loaded into the cache as valid for a certain period of time (in the order of seconds or minutes) and not verify them against the persistent CMS repository. This, however, introduces a delay in content updates when deploying in a cluster.
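
A minimal sketch of this TTL-based approach, assuming APCu and a hypothetical loadFromRepository() callback (neither is the actual CMS API):

<?php
// Hypothetical sketch: treat a cached CMS object as valid for a short TTL
// instead of verifying it against the persistent repository on every request.
const CMS_CACHE_TTL = 60; // seconds; tune according to the acceptable update delay

function getCmsObject(string $key, callable $loadFromRepository): mixed
{
    $value = apcu_fetch($key, $hit);
    if ($hit) {
        // Within the TTL the cached copy is trusted and the repository round trip
        // is skipped, accepting that cluster nodes may briefly serve stale content.
        return $value;
    }

    // TTL expired (or the object was never cached): load it from the persistent
    // repository and cache it again with the agreed validity window.
    $value = $loadFromRepository($key);
    apcu_store($key, $value, CMS_CACHE_TTL);
    return $value;
}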

PaaS or IaaS (Infrastructure as a Service)?

Performance tuning in PaaS, in the Azure Websites environment, was a chapter of its own. Unlike a Unix environment, this application container is based on Windows Server technology, so for in-memory caching the WinCache module is available instead of APC or Zend OpCache.

Although in principle it works the same way as the modules mentioned above, in practice its use brought noticeably worse results in the cache layer, both for caching PHP bytecode and for data. For these reasons we preferred IaaS for executing the PHP code of web solutions based on our own CMS: virtual machines with a Linux-based operating system (Ubuntu Server), on which the proven APC caching or, alternatively, the Zend OpCache extension can be used.

By switching from the PaaS to the IaaS model, however, we lost a crucial advantage of Azure Websites, namely auto-scaling and automatic distribution of the application into a cluster administered by the platform, so we had to provide these functions in other ways.

Docker, or our own PaaS solution

The target solution combines IaaS with Docker technology, which allows a complete installation of the CMS system to be prepared in the form of an image that is then deployed and run on virtual machines with the Docker Engine runtime pre-installed.

In the case of an on-premise solution, arbitrary servers with any Linux distribution can be used. Distribution of the application into the cluster is then managed with Docker Swarm, which can be connected to a continuous integration tool responsible for building and deploying web projects (in our case Jenkins CI).

The server Docker image contains the NGINX web server in the role of reverse cache and proxy server, connected through the FastCGI interface to the Zend PHP engine with all commonly required extensions, and optionally also to the monolithic HHVM PHP engine, which officially supports the CMS from version 2.2.16 (the engine is selected by the port on which NGINX is addressed).

An installation of the Apache Solr full-text engine is also prepared in the image. This image is then used as a base for building a complete image of the web solution containing all the application code, which the continuous integration tool uploads into it and may then distribute into the runtime environment from the command line.

Distributed CMS storage

As the CMS repository intended for deployment to a dynamically scalable runtime environment (Azure Cloud Service), our CMS uses a specific implementation based on the local file system that records all changes in parallel to an external relational database server (in the case of Azure, provided as a SaaS service), where the CMS system database located on that server is at the same time extended with an application change log.
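
A minimal sketch of what the write path might look like under these assumptions is shown below; the table names cms_objects and cms_change_log, and the local storage path, are hypothetical placeholders, not the actual schema:

<?php
// Hypothetical sketch of the write path: store the changed object in the local
// file system and, in parallel, record the change in the external relational
// database, whose CMS database is extended with an application change log.
const LOCAL_STORAGE_DIR = '/var/cms/storage'; // hypothetical local storage path

function saveCmsObject(PDO $db, string $key, string $payload): void
{
    // 1) Persist the object to the local file system of this instance.
    file_put_contents(LOCAL_STORAGE_DIR . '/' . md5($key) . '.dat', $payload);

    // 2) Record the same change in the external database within one transaction,
    //    so that other instances can detect it and replicate it.
    $db->beginTransaction();
    try {
        // MySQL-style upsert of the object itself
        $db->prepare('REPLACE INTO cms_objects (object_key, payload) VALUES (:k, :p)')
           ->execute(['k' => $key, 'p' => $payload]);
        // Append the change to the application change log
        $db->prepare('INSERT INTO cms_change_log (object_key, payload, changed_at) VALUES (:k, :p, NOW())')
           ->execute(['k' => $key, 'p' => $payload]);
        $db->commit();
    } catch (Throwable $e) {
        $db->rollBack();
        throw $e;
    }
}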

All changes to CMS objects are recorded into this log within the change transaction, detected by other instances of the same web solution, retrieved from the database, replicated to the local file storage and cache, and indexed by the full-text engine. For the cache folder it is possible to use the temporary disk of an Azure Virtual Machine, which is mounted to the above-mentioned directory inside the Docker server container.
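
A corresponding sketch of the replication side, run periodically on each instance (again using the hypothetical cms_change_log table; the submission to the local Apache Solr index is left as a comment):

<?php
// Hypothetical sketch: each instance periodically pulls new entries from the
// shared change log and replicates them to its local file storage and cache.
const LOCAL_STORAGE_DIR = '/var/cms/storage'; // hypothetical local storage path

function replicateChanges(PDO $db, int $lastAppliedId): int
{
    $stmt = $db->prepare(
        'SELECT id, object_key, payload FROM cms_change_log WHERE id > :last ORDER BY id'
    );
    $stmt->execute(['last' => $lastAppliedId]);

    foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $change) {
        // Replicate the changed object to the local file storage of this node...
        file_put_contents(LOCAL_STORAGE_DIR . '/' . md5($change['object_key']) . '.dat', $change['payload']);
        // ...refresh the shared cache entry...
        apcu_store($change['object_key'], $change['payload']);
        // ...and submit the object to the local Apache Solr index here.

        $lastAppliedId = (int) $change['id'];
    }

    return $lastAppliedId; // remember the last applied change for the next polling run
}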

Although this solution is not economical in terms of data storage (all the data in the CMS storage are copied to the local file system and full-text index of each running container of the given web solution), it delivers speed corresponding to a local deployment without a cluster.

In the case of deployment to an on-premise, statically scaled environment, it is possible to mount a selected directory to the data directory of the web solution in the Docker server, and thus keep all the data that our CMS stores in the local file system (more precisely, the complete web folder) outside the actual container of our CMS server.

In the case of a cluster, this directory may then be synchronized to the other nodes of the cluster using the GlusterFS service, which ensures replication of data changes across the application cluster. Such a solution is generally applicable to arbitrary web applications based on PHP technology.

Summary

Depending on how specific the application environment is, PHP applications can be deployed in Azure either to the existing PaaS model of Azure Websites, or to IaaS (Azure Virtual Machines), preferably in combination with Docker technology, which makes it easy to package and distribute the application together with a complete, Linux-based application environment configured to specific needs onto an IaaS cluster built with the Azure Virtual Machines and Azure Cloud Service technologies. In either case, it is at least possible to use a relational database in the cloud environment as a SaaS service.

Useful links

You can now continue to the next part of the series – the article Continuous Integration – Cure for Human Error in Deployment.

