

Architecture and operation of web applications in the cloud environment (Part 1)

Cloud-based platforms are increasingly catching on as a runtime environment for web applications. What advantages do they offer compared to traditional “on-premise” infrastructure or classic web hosting? And what new possibilities does the cloud bring, from the web integrator’s point of view, to the architecture of web systems built on the LAMP stack? The first part of this new series answers these and many other questions.

Also, you should not miss the previous part of the article series about web integration – Putting Continuous Integration (CI) into Practice – Part 2. It is a must-read!

Virtualisation as a foundation of the cloud

Over the last decade we have witnessed the gradual but increasingly significant advance of computer virtualisation, first in the server environment and later on the desktop. The first server hypervisors¹, based on full virtualisation of the hardware including the CPU, were not very powerful; they were followed by a revolution driven by virtualisation support implemented directly at the CPU level.

This significantly improved the performance of virtualised systems: their overhead dropped from tens of percent to single-digit percentages of the host processor’s power. Together with improved virtualisation of peripherals, in the form of drivers cooperating closely with the host system, this allowed the technology to spread on a massive scale, both in on-premise² environments, where companies run their own server infrastructure, and among infrastructure providers, who offer it on a rental basis.

These originally on-demand³ virtual systems gradually developed into complex cloud platforms with a wide portfolio of services for running applications. Among the most renowned providers in this field are Amazon Web Services (AWS) and Microsoft Azure.

¹ A hypervisor is specialised software, or a combination of hardware and software, that allows virtualised computers to be created and operated.

² On-premise refers to running a solution fully under the organisation’s own control – typically on its own infrastructure.

³ On-demand describes a solution operated on the provider’s infrastructure, which the organisation accesses via the Internet and usually does not fully control.

Models of operation in the cloud environment

The basic models of service provided through the cloud environment are the following:

IaaS – Infrastructure as a Service is the basic service derived directly from virtualisation. It enables you to create and manage infrastructure consisting of:

  • virtualised servers,
  • private networks (VLAN – Virtual Local Area Network),
  • specialised network components, typically:

— load balancers – application switches that ensure high availability of a given application service by distributing requests across a cluster of two or more virtual servers,

— network firewalls, which explicitly allow or deny access from the public network to selected network services provided by the virtualised servers in the cluster,

— and application firewalls capable of actively detecting/preventing security attacks on a given application protocol (Intrusion Detection/Prevention Systems).

With IaaS, the responsibility for maintaining the operating environment of the virtual servers, and for creating and maintaining the application environment within them, remains with the service user.

The virtual servers, networks and network services are configured via a web-based administration console; an API may also be available so that these administration operations can be automated from another system. The IaaS provider usually offers installation from pre-built images of a number of operating system types and versions optimised for running in its IaaS cloud (i.e. containing drivers for cooperation with the hypervisor the provider uses for server virtualisation – for example VMware ESX/ESXi, Xen/Citrix XenSource or Microsoft Hyper-V).
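
As an illustration of such automation, a single virtual server can be launched directly from the command line with the provider’s CLI tool. The following is only a sketch using the AWS CLI; it assumes the CLI is installed and configured, the image ID is a placeholder and the instance type is just an example:

aws ec2 run-instances --image-id ami-12345678 --instance-type t2.micro --count 1
aws ec2 describe-instances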

With smaller cloud service providers, the offer often ends at IaaS. Compared to the global players mentioned above, they can – provided the given cloud is adequate for the proposed solution – compete with more attractive pricing and individual local support.

IaaS services are charged on a pay-as-you-go basis: the price is set per compute unit per unit of time (typically a combination of CPU and RAM defined by the so-called instance type of the virtual server, billed per hour) or per data unit (a gigabyte transferred over the network or stored in the data storage, respectively).

The base price can also be influenced by the installed operating system (for example, Windows Server includes the licence fee), by a more powerful CPU architecture (as standard, cloud providers’ data centres use processors with the best ratio of computing power to power consumption, which are not the most powerful models) or by a faster storage type (SSDs, sometimes installed locally instead of a shared disk array).

PaaS (Platform as a Service) offers an environment for operating web-based applications built on a particular application platform (e.g. PHP, .NET, Node.js or Ruby on Rails). In this model, the service provider takes full responsibility for the required infrastructure, consisting of the IaaS resources, the operating system and the application runtime software necessary for running the application.

The service user primarily obtains one or more interfaces for uploading the application into the environment (typically FTPS/SFTP or Git, possibly an API and/or a CLI tool, including support for deployment management).
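
With the Git interface, a deployment then typically boils down to adding the PaaS endpoint as a Git remote and pushing to it. This is only a generic sketch; the remote name and URL are placeholders supplied by the particular PaaS:

git remote add paas https://git.example-paas.com/myapp.git
git push paas master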

PaaS gives the user limited options for configuring the application environment (e.g. language or framework versions, adding extension modules and adjusting the global configuration of the application environment); these can usually be set through the configuration of the application itself when it is uploaded into the PaaS.

PaaS can also include manual or automated (threshold-controlled) scaling, where, in combination with a load balancer, virtual machines processing the application code are created and disposed of according to the current load.

As with IaaS, the price is derived from the resources actually consumed, including the underlying IaaS resources (type and number of virtual machine instances, storage and data transfers) and other PaaS-specific services (e.g. automated scaling support or readiness to scale to a greater number of instances). PaaS platforms include Azure Websites, Google App Engine, AWS Elastic Beanstalk and the Heroku cloud platform.

SaaS – Software as a Service is the cloud layer with the highest level of abstraction: it means the delivery of working server software with clearly defined functions and interfaces.

The SaaS model typically covers data stores (SQL/relational and NoSQL database systems), directory services (LDAP, Azure Active Directory) and CRM or e-commerce systems. The currently very popular cloud services Google Apps and Office 365 also fall into this model, as do, for example, the on-demand versions of Atlassian products (JIRA, Confluence) and content distribution systems (CDN – Content Delivery Network).

In the SaaS model, the user primarily works with the web interface of the service, which offers secure access to individual functions and data based on user roles and groups, together with management of that access; optionally, other software or cloud services can also access the service via an API, or special CLI tools can be used for maintenance and integration.

A SaaS platform offers very restricted options (or none at all) for inserting your own application code. In the case of databases this may mean stored procedures or functions executed by the database engine; in other cases, code can be uploaded as a package (extension) that the given software is able to load within its modular architecture – typically only when the SaaS instance is operationally completely separated from the instances of other customers. Otherwise, integration is possible only on other servers via the SSI (Server-Side Include) model, or exclusively via code running in the user’s web browser (UI components written in JavaScript that use the available server API).


SaaS services are usually charged according to the number of user accounts, the size of the data storage, the number of parallel client connections or their total number per period of time, or a combination of these parameters. The parameters can usually be changed at any time according to changing needs, without interrupting the availability of the service.

In all of these models, the cloud platform offers capabilities that are either completely missing in the traditional on-premise model or require great effort to implement:

  • a high level of security of access to network services and active protection against various forms of attack (network firewall, intrusion detection/prevention system),
  • a high level of data security (redundant multilayer storage, optionally also geographically distributed),
  • the option of grouping cloud resources in the IaaS/PaaS model into virtual private networks and connecting them to the organisation’s network environment (site-to-site VPN), creating a hybrid cloud environment, plus support for connecting users to the cloud environment via a client-to-site VPN,
  • load balancing (incl. support for automated scaling of cluster resources according to given load parameters),
  • and SSL offloading.

PHP application in the cloud environment

PHP-based web applications and portals (or portals still operated in the traditional LAMP model) can be operated in the cloud environment using all of the models mentioned above. The easiest option, operational requirements permitting, is to use a PaaS that supports PHP applications, in combination with a database in the SaaS model. In this case, the whole environment is created and maintained via the web administration, and changes are deployed with familiar tools (application code via the Git push mechanism or upload via SSH/SFTP, database changes manually or by script via the database console).
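
For instance, a batch of database changes can be applied by piping a script into the console of the SaaS database. A minimal sketch, assuming a MySQL-compatible service; the host, credentials and file name are placeholders:

mysql -h db.example-cloud.com -u appuser -p appdb < migration.sql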

Problems can arise if the PHP application has special requirements on the application environment (e.g. it uses specific PHP extensions that are not available and cannot be uploaded into the PaaS) or if its framework is not suitable for deployment in a cloud with automated scaling and failover mechanisms (typically because it uses the local file system as persistent storage). In these cases, it is necessary either to base part or all of the solution on the IaaS model (and thus reduce the use of the cloud platform to virtualised server infrastructure), or to adjust the architecture of the web solution so that it is compatible with the PaaS cloud.

It is important to take into account that the IaaS model, which is the closest to traditional on-premise infrastructure, also suffers from the weaknesses of that approach: it requires resources for managing the operating system and the application runtime environment, and where high guaranteed availability is required it calls for a cluster design, which places demands on the application framework equivalent to the scaling mechanisms included in PaaS. Several recommendations follow for the architecture of PHP applications intended for deployment in the cloud:

  • Do not mix code and data – do not use the file system for storing persistent data; abstract all operations with the data storage behind a general interface (this will allow you to plug in one of the available SaaS services), or work directly with a database abstraction such as PDO.
  • Actively use all the caching mechanisms that the chosen application platform/framework offers in combination with the chosen cloud environment (the compute instances will never have an excess of computing power or memory); for caching you can use an in-memory key-value database such as Redis, if the platform offers it in the SaaS model.
  • Expect static assets (stylesheets, images, videos) to be served from other absolute URLs – within the platform you will be able to use the available CDN (Content Delivery Network) solutions.
  • If you want to use relational databases, do not use SQL commands dependent on a particular database engine (except where you know up front what type of database will be used in the cloud); build queries programmatically in the application code using a library that allows it, or use an ORM tool.
  • Do not use PHP extensions that are not available out of the box in the PaaS model of the given cloud and cannot reliably be added later (a quick way to audit this is sketched after this list).
  • Do not use libraries and frameworks that conflict with the rules above.
  • If possible, avoid server services that are not available in the selected cloud platform in the SaaS form (e.g. exotic key-value databases or full-text indexers).
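
To audit the extension footprint mentioned above, the PHP CLI can list the modules loaded in the current environment; comparing this list with the PaaS documentation shows what would be missing. A simple sketch (the second line only checks for one specific extension):

php -m
php -m | grep -i redis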

Costs calculation

A significant change introduced by cloud platforms in all of the models mentioned is the pay-as-you-go accounting system, i.e. ongoing payments according to the resources actually consumed. Planning the operating costs no longer requires precise sizing for the expected peak traffic in a given period (usually a number of years), followed by the purchase or lease of the corresponding computing resources (servers and other network elements), regardless of whether they will ever be fully utilised (not to mention the case when traffic exceeds expectations and unplanned new infrastructure is required…).

With the cloud, sizing is only indicative: it determines the estimated operating costs for a given time frame (typically one month) depending on the number of users of the web solution who are to be served within a given response time. We can therefore ultimately calculate the cost of serving a single user and include it in the business case for the web-based solution.
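
A purely illustrative calculation (all prices and volumes below are made up for the example, not real rates):

2 small instances × 730 hours × $0.10/hour ≈ $146
$146 + $30 (storage and data transfer) ≈ $176 per month
$176 / 50,000 visits served ≈ $0.0035 per visit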

This, together with the guaranteed availability of cloud services (standard in the SaaS and PaaS models, or in IaaS provided a cluster is created) and the high level of security, is the main advantage of the cloud over traditional on-premise infrastructure.

In the second part of this article, you will learn about practical experience with deploying and scaling a PHP-based CMS (which has many specific characteristics) into the Microsoft Azure cloud environment.

You can now continue to the next part – the article called: Architecture and operation of web applications in the cloud environment (Part 2).


Putting Continuous Integration (CI) into Practice – Part 2

There are a wide variety of task recording, documentation creation, team communication and deployment systems out there. In the second instalment of this article, we shall see how these tools can be turned into an efficient system. By connecting these otherwise independently functioning tools together, you can reap substantial benefits.

Also, you should not miss the previous part of the article series about web integration – Putting Continuous Integration (CI) into Practice – Part 1. It is a must-read!

A project management and issue tracking system

Whether you have a completely new project or a post-sale application, an issue logging and tracking system is a must. Before your developers start cutting code, you need to define the tasks to which you will assign particular programmers, that is, tasks the progress and cost of which can be monitored against targets.

At Krcmic.com and Onlineandweb.com, we used Mantis Bug Tracker in the past, which, however, does not meet the requirements of today’s software project management. It is a good, albeit a little cumbersome issue logging system, but it lacks certain core features that are absolutely necessary for us today.

Modern application development is unthinkable without a flexible bug management system that actively supports the implementation of agile development management methods. We currently use the JIRA issue tracking system which has the following advantages over Mantis:

  • customisable project workflows;
  • combines the features of an issue tracker and time tracker;
  • supports team work (a user access and role management system);
  • can be easily configured to interact with other tools (Bitbucket.org, HipChat IM, Confluence);
  • is user-friendly.

Like other Atlassian products, JIRA is not cheap, but we currently use it as a replacement for two systems (issue tracking and time reporting). JIRA is easier and faster to work with and as such, directly saves us time. Moreover, we have custom-developed one of the systems, so we will be able to save considerable development, servicing and bugfixing costs in the future.

We recommend connecting the JIRA system to a central source code repository (see below), making it possible to drill down for details on the work that has been done and its authors. In addition to the verbal comments left by team members, we have access to the related technical details of the implementations.
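
For illustration, with such an integration a developer can reference the JIRA issue directly in the commit message, and Atlassian’s smart-commit syntax can even log time or add a comment from there. A sketch; the issue key is a placeholder:

git commit -m "PROJ-123 #time 1h 30m #comment reworked the checkout form validation"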

Workflow

The choice and use of a suitable workflow are very important considerations. JIRA offers some out-of-the-box workflows that are available as part of the installation, but these do not exactly fit the processes within software development.

The workflows in JIRA are fully customisable, which can be both a blessing and a curse. This freedom may be a certain advantage, but you are faced with the necessity of capturing recurrent situations and processes common to most projects. Creating workflows that are usable for more projects necessarily involves some compromises. The deeper you dig into the configuration options, the more likely you are to accommodate specific requirements and create special-purpose workflows. This has more disadvantages than it has advantages. A smaller number of workflows aid in orientation; the statuses (To do, Done, Accepted…) and other properties ought to be unified across projects, which helps avoid misunderstandings in communication and in managing the processes themselves. Both managers and developers have an easier time of navigating a limited number of universal workflows, rather than having to choose from dozens of specialised workflows. For this reason, we have adopted the use of workflow schema definitions, which can be carefully prepared for the company and which users can choose from when setting up new projects.

User-friendliness may not be the primary consideration, nevertheless, a good GUI saves time and motivates employees to use the system with respect, if not with pleasure. On-site editing, search, autocomplete and keyboard shortcuts are just fantastic features!

Time reporting

We used to have a time reporting system that was set up independently of the issue tracking system. It was a custom-built system that we created ourselves. However, we could not continue developing it forever and to upgrade it to meet our evolving needs would have meant virtually creating a new application.

What are the benefits of merging the two systems into one?

  • directly related information on bug logging and the progress of bug resolution is stored in a single system;
  • less time is spent on logging work, reporting and reassignment of tasks within teams;
  • information on persons, methods and the complexity of solutions is consolidated at one location;
  • the evaluation of the status and success rate of projects is much simpler and more accurate thanks to the aforementioned features.

For more sophisticated development work, we use the JIRA Tempo Timesheets Plugin integration. Already in the basic version, JIRA supports time reporting (worklog). The built-in feature will be sufficient for the needs of a small team, while larger teams will probably consider purchasing an extension that allows sorting by teams, projects, etc. Pricing is based on team size, as is usual for the Atlassian ecosystem.

Versioning system

The tasks have been defined and assigned to individual team members, and the project picks up steam. The progress of the work must, however, be tracked, and changes made by several people must be integrated into a whole. To this end, we have for years been using the Git versioning system. If you wanted to purchase a version control system in the past, you didn’t have many choices; certainly, many of you are familiar with Subversion. Today, we cannot but recommend distributed version control systems (DVCS). The most popular such systems worldwide are Git and Mercurial, and the choice is a matter of personal preference. If you don’t have any experience with either, the article about versioning systems looks at the pros and cons of some of these systems and discusses in detail the reasons why DVCS is better than the client-server model.

Your developers will deal with the versioning system on a daily basis, so if you have not decided on a particular one yet, you can organise a vote among your developers.

When it comes to price, versioning systems as such are free, unlike data storage.

A central source code repository

A more complicated topic, which builds on the choice of a versioning system, is the central source code repository. A web integrator, being a supplier of client solutions, works on a much greater number of repositories than there are developers on the team. This disproportion significantly affects the costs under the various providers’ pricing models.

Repositories can be hosted on an internal server or outsourced to a hosting provider. Should you opt to run your own server, you have the choice of using an open-source solution, which requires specific know-how and can be time-consuming to set up, or you can use an off-the-shelf solution with paid support. Last but not least, there is the option of renting server space, in other words, using a cloud service, in which case you pay by the month.

On-premise solution – “what’s in the house?”

When we started with CI, we had our own git server. When it comes to applications, one option is to use free, open-source technologies, which, however, need to be configured. It goes without saying that time is money. You must also factor in the overhead costs associated with the management of IT resources.

An alternative to open-source solutions are paid solutions, which require a relatively large lump-sum investment. Their advantage is that they work out of the box. We can recommend Atlassian Stash, which offers easy and effective management tools and numerous options for managing user accounts and groups, as well as Active Directory integration. Alternatives include GitLab, which comes in both community and enterprise versions, and GitHub Enterprise. Apart from price, you should be looking at user comfort, the availability of documentation and the liveliness of the user community. Do not go for tools that you have reservations about: choose tools that are intuitive to use.

Rental – on-demand cloud elasticity

A software cloud can expand and contract just like a real cloud. Typical examples of such elastic clouds include Bitbucket, GitHub and GitLab. The advantages of these are:

  • non-stop availability from anywhere regardless of internal infrastructure;
  • it takes away the worries and hassles of running your own hardware and infrastructure;
  • backup, disk space and security are taken care of by a team of experts;
  • automatic system and add-on updates;
  • you can easily change your provider.

When it comes to price, GitHub is more suited to larger teams that work on a smaller number of projects (products). Such teams appreciate its pricing model based on the number of private repositories. This model would be costly for us.

Bitbucket has a different pricing model based on the number of team users; the number of repositories is not taken into account. For example, to develop an online shop on the Magento e-commerce platform, we would need tens of repositories for the various extensions. We have therefore decided to switch from GitHub to Bitbucket.

A cloud service is more flexible and more comfortable to operate and we can quite easily switch to a different provider of a similar service if such a need arises. However, a cloud solution may turn out to be more costly in the long run.

Deployment of modifications in various environments

The developer has committed his work, i.e., a piece of code versioned in the central repository. Now comes the time to deploy the code in a runtime environment. Application deployment should not be handled manually by the developer; rather, this part of the process should be entrusted to experts on continuous delivery and handled as an automated deployment task. When a developer commits his code, deployment is executed by a tool that puts the code into the relevant environment. For more information on deployment, please see the article The advantages of continuous integration and automated deployment.
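
To give a generic idea of what such an automated step can look like (this is only a sketch of one common pattern, not a description of our setup; the paths and branch are placeholders), a simple deployment can be triggered by a post-receive hook in a bare Git repository on the target server:

#!/bin/sh
# hooks/post-receive in the bare repository on the target server:
# check out the pushed master branch into the web root
GIT_WORK_TREE=/var/www/app git checkout -f master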

Project documentation

All projects, however big or small, must be documented. After creating a solution, developers often hand their project over to another team for maintenance. Perhaps we can all agree that without documentation to refer to, work with code that we have not written ourselves can be a headache.

Documentation should always be supplied with any piece of code, although it may not always be sufficient when new developers need to be brought on board quickly. It is advisable that developers create and maintain compact documentation describing the architecture and implementation of specific functional components of particular solutions, ideally using an online system that permits quick viewing and editing by stakeholder groups. We had trialled MediaWiki before switching to Confluence.

One of the reasons for the switch was the option that the latter offered of integrating with other Atlassian systems, not to mention the nice user interface and the flexible security model allowing detailed management of permissions. Unlike Confluence, MediaWiki requires no licence fees, however, the basic version offers only a very simple, open UI with next to no formatting options. As it is a Wiki platform, there is no granular access management. Tens of hours in configuration, extension set-up, etc., are needed and the system may, in the end, come at a steep price.

Which features are important and what makes our life easier?

  • import from MS Word documents;
  • export to MS Word/PDF formats;
  • formatting;
  • inserting links and images;
  • code documentation;
  • support for macros.

Macros are useful tools for the insertion of specific content (e.g. software code) or dynamic content such as child page summaries, etc. Frequently employed and recommendable tools include JIRA integration and filters for viewing task lists.

The filters are dynamic, so the lists do not depend on authors keeping them accurate by hand. In the last step, information on activities in Confluence is sent in the form of notifications to topic rooms in HipChat (for more information on chat, see the next chapter).

Real-time communication – chat

You send a group e-mail and the next day you find out that you forgot to include the most important addressee. A client who is unsure whom to address sends an e-mail request to ten people. The message reaches the right addressee, but the other people have received a message that is irrelevant to them.

Not only can this be a bother to some people, but it may be really hard for the employees to stay on top of things if their inboxes are inundated by hundreds of e-mails every day.

Now imagine a chat room to which you can invite a group of people and where everyone can talk without worrying that he or she might have left someone out of the loop. If the ongoing discussion is not relevant to a particular employee, he or she can temporarily log off. If needed, he or she can be invited to re-join later.

As a result, our inboxes are not cluttered with too many internal messages and e-mail, in the end, becomes a tool for external rather than internal communication. A part of our communication with clients also takes place via chat. Chat is useful especially in situations where we need to clarify some details, ideally in real time and involving more people at one time.

We draw information from communication among actual users as well as from other systems we have in place, i.e., JIRA, Bitbucket, Jenkins and Confluence. Unlike the aggregation of information in JIRA, where information is linked to particular tasks, we use HipChat to aggregate information in chat rooms based on teams and projects.

To give you the full picture, here is the cost information, which can be calculated quite simply: an organisation using the cloud version with unlimited history and message search pays 2 dollars a month per user.

Integration and standardisation

All Atlassian systems are mutually integrable. This may not be anything special since other systems, including open-source tools, offer this functionality to some extent, however, it is a lot easier to integrate products from one family. It is usually a matter of minutes or hours, at the worst.

Integration is a foundation for an effective mode of work. If you don’t have such integration at your company, we recommend that you take steps in this direction, although the price you will have to pay now and in the future may be a deterrent.

You will spare yourself a headache, save money and improve quality. The transmission and aggregation of information saves a lot of e-mail and personal communication as well as time spent in team meetings. Moreover, you will have traceable history in addition to traditional documentation.

If you have already set about implementing new systems or are preparing to do so, be sure to enlist the help of someone with sufficient experience, e.g. a similar company in the same vertical or an expert consultant. You will spare yourself the headache of learning and implementing a new system and you will also save money.

Last but not least, you will lead by example: the employees of the organisation will be more disposed to accept changes when they see someone who has been through the teething troubles and now reaps the benefits.

You can now continue to the next part – the article called: Architecture and operation of web applications in the cloud environment (Part 1).


Putting Continuous Integration (CI) into Practice – Part 1

Practical advantages and pitfalls of introducing continuous integration (CI) into a (not only) web integrator environment: what is easy and what is harder, and what requires attention, whether we are speaking about tools or people.

Also, you should not miss the previous part of the article series about web integration – Front-End Task Automation – Part 2. It is a must-read!

In the current era of strong competition and high pressure on maintaining output quality, continuous integration (CI) is one of the essential parts of the effective development of web applications. Implementing a correct and compact CI process is not a matter of one day and may involve obstacles. Last but not least, there is the actual speed of distributing changes into the various environments, where continuous integration greatly helps to reduce errors while maintaining sufficient speed.

Required tools

The subject of this article is not a detailed analysis from a technical perspective. To give you a better idea, I recommend the previous articles, Advantages of continuous integration and automated deployment from the perspective of web integration and Version control system and web integration projects.

For a general idea, let us at least provide a basic list of the tools and information systems that need to be integrated into the production process of a company: a project management and issue tracking system, a version control system with a central source code repository, an automated deployment tool, a documentation platform and a team communication (chat) tool. Each of these is covered in detail in the second part of this article.

Pros and cons

Implementing new systems means changes at both the infrastructure and the process level and affects the operating principles of the whole company. Individual workers’ reactions to those changes will vary. Generally, significant changes do not cause waves of enthusiasm throughout the whole organization. A part of the staff will believe the existing system is sufficient and will prefer to use the established procedures.

Another part will be indecisive and will wait until those changes are proven and have acceptable consequences. Those people will neither vote against changes nor actively support them. Finally, the third part of the workforce, dissatisfied with the current state, will more or less support the changes. The last-mentioned group may also contain future members of a team responsible for the correct setting for the new systems and maintenance.

Apart from the technical support itself, the last group also has a significant psychological impact on their co-workers, whom they can naturally convince of the benefits of changes in work procedures. Enthusiastic supporters will gladly learn the new system and will naturally pass their enthusiasm on to their co-workers.

Therefore, the changes will not seem as “dictated from above” and will be regarded as actually beneficial.

Do not get me wrong, however. The group of critics is not trying to prevent progress; it only requires us to clearly describe what benefits the change would bring them at the price of their temporary discomfort.

The most common objections against changes consisting of implementing a continuous integration infrastructure and process are:

  1. Learning to use new tools requires extra time.
  2. New tools are lacking functions of the old tools.
  3. New changes cause confusion in the established procedures and lead to errors.
  4. It comes with additional standards and limitations.
  5. Automation reduces the developer’s control over the deployment process and the control of a code deployed into a runtime environment.

Counter-arguments:

  1. Workers increase their qualification and broaden their knowledge.
  2. Modern systems supporting continuous integration feature most of the previously used systems’ functions, and also:
    • Include additions/extensions that can partially or entirely replace functions of the old system.
    • Allow integration into other systems within the continuous integration process.
    • Allow integration into bug & issue tracking systems.
    • Feature new functions that can be used to improve work effectiveness.
  3. Short-term decrease of effectiveness is compensated by a long-term increased level of stability and a better-quality (more easily estimable) output towards the clients.
  4. In a clearly specified system, standards also serve the workers as manuals that always define a repeatable process; bigger changes allow more substantial reworks of the established security models, improving them qualitatively.
  5. Replacing manual work with an automated tool reduces the probability of human error and limits responsibility to a smaller group of people. This also means the developers, after submitting their work, do not have to worry about if and where their work has been deployed.

Where to look for good advice and a helping hand?

Implementing new systems and tailoring them to the company’s specifics requires considerable time. Some attempts may end in failure, or a certain direction may prove to be a dead end. In any case, it will provide us with valuable experience, even though it costs us valuable resources. Implementing a compact system may take hundreds of hours of work. Good information can save a significant portion of the time workers need to acquire new skills and, last but not least, spare our nerves.

Available source of information:

  • Internet – on the internet, you can find documentation, manuals, personal experience of individuals and discussion on a given topic.
  • Customer support and community – you can consult the customer support (in case of commercial products) or the developer community (in case of open source).
  • Seminars, training – as in any other field, you can attend seminars or training.
  • Consultant – an experienced party (individual or company).

The aforementioned options are more or less sorted according to their cost and also reflect the quality of received support. Both open source and commercial systems usually have out-of-the-box functionality preset to the most common user requirements. In most cases, you can thus start working almost immediately. A lot of information is free of charge as long as you are able to find it. Time, however, is not free, and hours of experimenting on your own can be avoided by an individual consultation.

To make the transition as painless as possible, it may be worth involving an external consultant specialised in continuous integration, or cooperating with a company in your field that has already implemented such a system and uses it successfully. A party that has been through the transition from an uncontrolled system to continuous integration can provide you with:

  • Personal experience that can point you in the right direction.
  • Warning against a number of errors resulting from experimenting with new tools and processes.
  • Help with a basic setting, which you would otherwise have to discover on your own.
  • Recommendations on how to adjust the basic setting to the needs of your organization.

Besides that, such a party will act as a mentor and role model for your employees; the presence of such a person usually makes the team more inclined towards the changes.

Implement changes gradually

It is not wise, technically or organizationally, to turn all systems upside down at once. For instance, the maintenance of ongoing projects requires the impact to be as small as possible. In this case, we recommend fine-tuning the process on a chosen project and then applying it to other projects.

A suitable candidate project should not be too small and should generate a sufficient number of change requests on which we can test and learn the process. Besides, this way the participants will be shocked to a lesser degree than they would be otherwise. Despite all efforts for smooth progress and maximum efficiency, not everything will be perfect, since not all risks can be prevented. Unexpected events throughout the development process require a certain level of improvisation and inspiration.

But if you anticipate these risks – especially if you involve an expert consultant – you will overcome the period of implementing changes smoothly and to the satisfaction of developers, managers and clients. In the second part of this article, we will focus on practical experience with implementing of some of the aforementioned tools and information systems in the building process of continuous integration in a web integrator environment.

You can now continue to the next part – the article called: Putting Continuous Integration (CI) into Practice – Part 2.


Front-End Task Automation – Part 2

Work on a web integration project includes a lot of repetitive and not-so-interesting tasks. The following lines describe several very useful tools that can significantly facilitate and accelerate our work. These tools include Bower, Grunt, Gulp and Yeoman.

In the previous part, we explained what Bower, Grunt, Gulp and Yeoman are based on and now we can move on to introducing them further.

Bower

Bower is a package manager from Twitter and, as opposed to npm, which deals with the “server side”, Bower deals with client components, such as jQuery, Angular, Bootstrap, etc. Modules can be browsed at bower.io/search/.

Bower is available as an npm module and, since we will use it in all projects, we install it globally in the following way:

npm install bower -g

The commands are similar to npm and the installation of Bootstrap, for example, will look like this:

bower install bootstrap

The bootstrap package also depends on jQuery, so both are downloaded into the bower_components folder. From this folder, we can start using jQuery or Bootstrap in the project.

During installation, we can again enter either the package names directly or refer to a URL, a local file, a git repository, GitHub, etc.
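
For example, a specific version or a git endpoint can be requested like this (the repository address is a placeholder):

bower install jquery#1.11.1
bower install git://github.com/user/package.git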

New project initialization is then executed by the following command:

bower init

After a brief questionnaire, it will create the file bower.json in the project (with a structure similar to package.json from npm), which describes the given project.
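
An illustrative bower.json might look roughly like this (the name and version numbers are only examples):

{
  "name": "my-project",
  "version": "0.0.0",
  "dependencies": {
    "bootstrap": "~3.2.0"
  }
}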

Therefore, if we receive a new project from someone, it should contain both the package.json and bower.json files, on the basis of which we then perform the given project installation:

npm install
bower install

and everything important will be downloaded into the project. It should be noted that although the command bower install bootstrap by itself, in an already existing project, physically adds Bootstrap to that project, it does not update the bower.json file (its dependencies section). If we want the given project to include the module, we must add the --save parameter.
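
So the command that both installs Bootstrap and records it among the dependencies in bower.json is:

bower install bootstrap --save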

Grunt

Grunt, “The JavaScript Task Runner”, is an automation tool for front-end tasks. It enables, for example, automatic minification of CSS and JavaScript, conversion of preprocessor code (LESS, SASS…) to CSS, compilation of CoffeeScript into JavaScript, and so on. There are thousands of modules for Grunt, enabling a variety of interesting tasks.

The installation of Grunt itself is executed by npm in the following way:

npm install -g grunt-cli

When working with Grunt, there are two key files – package.json and a so-called Gruntfile (Gruntfile.js, alternatively Gruntfile.coffee). In an already existing project, both files are already present; only the dependencies remain to be installed with the npm install command. In a new project, we can create the package.json file either manually, by the npm init command, or from a grunt-init template.

Then, using npm, we install Grunt itself among the so-called devDependencies, together with its modules, such as the uglify module used for JavaScript minification:

npm install grunt-contrib-uglify --save-dev
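
After this command, the devDependencies section of package.json would contain something like the following (the version numbers shown are only illustrative):

"devDependencies": {
  "grunt": "~0.4.5",
  "grunt-contrib-uglify": "~0.5.0"
}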

Gruntfile

Gruntfile (Gruntfile.js or Gruntfile.coffee) is a file containing the configurations of individual Grunt tasks.

module.exports = function(grunt) {

  grunt.initConfig({
    //loading package.json
    pkg: grunt.file.readJSON('package.json'),

    //uglify module definition - for JavaScript minification
    uglify: {
      options: {
        banner: '/*! <%= pkg.name %> <%= grunt.template.today("dd-mm-yyyy") %> */\n'
      },
      dist: {
        files: {
          'js/input.min.js': ['js/input.js']
        }
      }
    },

    //watch module definition - this monitors changes of js files and minifies them
    watch: {
      files: ['js/*.js'],
      tasks: ['uglify']
    }
  });

  //loading the required Grunt modules
  grunt.loadNpmTasks('grunt-contrib-uglify');
  grunt.loadNpmTasks('grunt-contrib-watch');

  //registering the default task, executed by entering the command grunt into the command line
  grunt.registerTask('default', ['watch']);

};

The example above shows a simple Gruntfile with two modules (uglify and watch), where one module (watch) is added to the default Grunt task and is executed by entering the command grunt into the command line. This task monitors changes in JavaScript files and, when such a change happens, it runs the uglify module, which takes the input.js file and minifies it into input.min.js. We can also run the uglify task separately with the grunt uglify command.

Gulp

Gulp serves the same purpose as Grunt; it only works in a slightly different way. Grunt focuses on configurability, so it is more suitable for users who prefer configuring to writing their own code. If we prefer to write our own code, we use Gulp instead. There is a variety of modules, although fewer than for Grunt; Gulp does not need as many anyway, because what would require a module in Grunt can often be written by hand in Gulp. Gulp also focuses on speed, although that can be improved in Grunt too with the jit-grunt module, which ensures that other modules are loaded only when needed.

Another difference between Grunt and Gulp is in their approach to processing files. While in Grunt you can see individual intermediate steps of, for example, JavaScript merging and minification, Gulp works with so called Node.js streams. In this case, there are no intermediate results generated on the drive, but data are transferred between individual tasks via the pipeline instead.

The installation of Gulp itself is executed by npm in the following way:

npm install -g gulp

When working with Gulp, there are two key files – package.json and a so-called Gulpfile (Gulpfile.js); it is therefore very similar to Grunt, including the installation of Gulp modules.
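
For the example below, Gulp and the gulp-watch module it uses would be installed locally into the project, for instance like this:

npm install gulp gulp-watch --save-dev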

Gulpfile

//loading Gulp itself into an object named gulp
var gulp = require('gulp');

//loading watch module
var watch = require('gulp-watch');

//creating a new task "test", which only puts text into the command line
gulp.task('test', function() {
  console.log('Hello, this is test ...');
});

//creating a new "default" task, which monitors the contents of input folder and copies every file created or modified in this folder into the output folder
gulp.task('default', function() {
  gulp.src('input/*')
    .pipe(watch())
    .pipe(gulp.dest('output/'))
});

This simple example shows two tasks, one named test and the other default. The default task is executed by entering gulp into the command line and uses the gulp-watch module. The test task is executed by entering the command gulp test. This code makes it obvious what the difference between a Gruntfile and a Gulpfile is, and why Gulp users need fewer modules.

Yeoman

Yeoman is used to generate code according to previously set templates (scaffolding). The templates in this case are not text files, but a pattern, according to which a base for an application or a part of it can be generated. This way we can generate, for instance, the base of a WordPress website, AngularJS application, etc.

Installation is performed in the following way:

npm install yo -g

We also need a so called generator, which decides what is created, where and how. We can either use a prearranged generator or create it ourselves.
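
For example, installing and running one of the publicly available generators (here the webapp generator; other generators are used the same way) looks roughly like this:

npm install -g generator-webapp
yo webapp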

Useful Tools – Alternative to Command Line

If you are not fans of the command line, I have good news for you. Bower, Grunt and Gulp are all natively supported in the WebStorm (PhpStorm) IDE, and projects using these tools can thus be managed directly in the IDE. The modules can be easily managed, but the Gruntfile or Gulpfile still has to be edited manually.

Conclusion

Using these tools greatly facilitates and accelerates the work of a web developer. It also, more importantly, unifies work within a team, since the configuration files are versioned (as opposed to the modules themselves). The unified configuration is thus shared by the whole team, and you do not have to worry that each developer generates, for example, different CSS from SASS, or that there will be problems in JavaScript if the whole team uses JSHint.

You can now continue to the next part – the article called: Putting Continuous Integration (CI) into Practice – Part 1.


Front-End Task Automation – Part 1

Work on a web integration project includes a lot of repetitive and not-so-interesting tasks. The following articles describe several very useful tools that can significantly facilitate and accelerate our work. Before we start with individual tools such as Bower, Grunt, Gulp and Yeoman, let’s talk about what these tools are based on.

Also, you should not miss the previous part of the article series about web integration – Version control system and web integration projects. It is a must-read!

First, it should be noted that Bower, Grunt, Gulp and Yeoman are modules of Node.js and will be further explained in the following article. Node.js is a powerful event-driven framework for JavaScript, based on the V8 JavaScript engine from Google, which is used in Google Chrome. Therefore, we install these tools using npm (node package manager), which is the default package manager for Node.js.

Node.js and npm

Node.js is an event-driven I/O framework for the V8 JavaScript engine. In other words, it is a V8 engine enriched with functions allowing scripts to access files or network functions. This means we can create a server listening to a certain port almost the same way as we create, for instance, event handlers in a browser.

Npm (node package manager) is a package manager for Node.js and is installed together with Node.js by default. In Node.js, modules (packages) are installed the same way as, for example, software in Linux via APT. Modules can be browsed at npmjs.org.

It is important to say that operating most of these tools requires us to be at least partially familiar with the command line, because this is the usual way these tools are installed and operated. For some of them there are also alternative ways, which will be discussed in the conclusion.

Npm is operated via the classic command line and all modules are installed by a simple npm install command. For example, if we want to install the coffee-script module, we use the following command:

npm install coffee-script

A directory named node_modules is created in the project directory and contains the locally installed modules used by the given project. If we want to install a module globally, we use the -g parameter. The module will then be installed globally and can be used in multiple projects.

npm install coffee-script -g

During the installation, the mkdirp module is installed together with coffee-script, because coffee-script depends on it.

During installation, we can refer directly to a module in the npm registry, or install modules from GitHub, local modules, modules available at a URL, etc. Npm offers much more, as can be seen in the project documentation.
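
A few illustrative variants (the repository, URL and folder names are placeholders):

npm install git+https://github.com/user/repo.git
npm install https://example.com/module.tgz
npm install ./local-module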

Package.json

If you look into a project or module, you will probably find the file package.json. This file contains basic information about the project, such as its name, version and a description of its dependencies on other modules (including versions). This very file allows the installation of the given module and its dependencies: if we run the command npm install in a project, npm downloads all the dependencies into the node_modules folder, including the dependencies of those modules, and so on.

To create a new project (package.json), we use the command:

npm init

This, after a brief questionnaire, creates a package.json file in the project, which provides a description of the project. The installation of modules in a project is then performed by the command npm install <pkg> --save, as we are advised by a hint during project creation.

The installation of coffee-script module directly into our project is then executed in the following way:

npm install coffee-script --save

And the corresponding package.json:

{
  "name": "my-project",
  "version": "0.0.0",
  "description": "my description",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "repository": {
    "type": "git",
    "url": "https://bitbucket.org/my-project"
  },
  "author": "",
  "license": "BSD-2-Clause",
  "dependencies": {
    "coffee-script": "~1.8.0"
  }
}

One can see that the coffee-script module was added to the dependencies section. The project is now ready to be shared, for instance within a team, where the other members simply install the modules used by the project via the npm install command.

Conclusion

We have briefly described what the tools, which we will get to in the next part of this article, are based on and shown the basic principles used by all modules based on Node.js and npm. In the next part, we will finally get to the particular tools that can significantly facilitate work on a web integration project. That includes Bower – a simple package manager for front-end components, Grunt and Gulp – automation tools for front-end tasks and Yeoman – a scaffolding tool for generating complete projects.

You can now continue to the next part – the article called: Front-End Task Automation – Part 2.


Version control system and web integration projects

Why does a version control system increase quality levels in a web integration project? What are the advantages of using such a system? And what is its overall influence on development?

Also, you should not miss the previous part of the article series about web integration – Advantages of continuous integration and automated deployment from the perspective of web integration. It is a must-read!

The Version Control System (VCS) plays an important role in the development of a web integration project. The main goals of using a VCS are to record changes during project development, to support team collaboration and to deploy the application to various environments. Modern versioning systems resolve not only the issues of cooperation and change tracking, but also bring a range of other benefits.

A VCS supports tools for continuous integration, such as Jenkins, Hudson or Bamboo. They can help us automate application testing and deployment. Ultimately, tests and deployments are followed by reporting in the form of email or other notifications.

Features of the most popular version control systems

Traditionally, Subversion (SVN) is the client-server application that was for a long period the dominant revision management system. SVN saves space by storing the differences between revisions. The project is versioned in a central repository on the server, and a working connection to the server service is essential for versioning.

A similar philosophy can be seen in the Git and Mercurial distributed systems. The main difference between these systems and SVN is that all developers can work locally and do not require server connections to do their work. Additionally, they may submit only part of their work on a remote server and retain other parts locally for later adjustments.

Git is very flexible, consists of a whole set of single-purpose tools and can easily be extended; among other extensions, we use Gitflow. In contrast, Mercurial is a single, compact tool and has shown itself to be rather inflexible. For a detailed comparison of Git and Mercurial, please refer to the Git vs. Mercurial article.

Decentralised nature of development, with a central repository

Project evolution follows an essentially decentralised scheme. All developers work on their own local copies of the project, in specific teams and in separate locations. Different teams use their local environments for developing and testing before a specific batch of changes is shared with other teams. As the number of teams and developers grows, so does the risk that two developers work on the same part of the application and make incompatible adjustments. A distributed VCS is a better solution to such problems.

There are many approaches, either centralised or decentralised, which can be handled by various workflow models.

A unified system and standardisation of development

Establishing a unified form of versioning changes and a system for submitting work creates rules for the developer’s work and a firm foundation for standardising development and distribution. For developers, this means clearly defined processes and, as a result, a set of instructions for dealing with possible situations that could arise.

Updating the application in various environments poses problems in the absence of VCS. In practice, we make use of continuous integration, using tools connected with VCS.

Launching the project, and updating it in the testing and production environments, is further simplified, reducing the costs of handing over and inspecting work.

To hire services or use your own

A versioning system can be operated on your own server, a solution which offers the maximum configurability. In projects where sensitive client data is processed, it is also imperative that the source codes are not made public, with an arrangement that under no circumstances could anybody other than the responsible developer be allowed access to such data.

If we do not want to deal with operating a versioning system ourselves, we can use specialised hosting services built on a selected versioning system. GitHub supports the use of Git and SVN in parallel, giving the developer the freedom to choose the versioning system.

Migration between services

Everything progresses. In the past, here at Krcmic.com and at Onlineandweb.com, we operated our own SVN server and Git server. But despite the benefits, we eventually moved on and handed over to hosting services.

The first migration involved the transfer from our server to GitHub. Later on, we moved our projects from GitHub to Bitbucket.

Moving repositories to GitHub

Migration to GitHub involves just a few simple steps.

  1. Create the repository on GitHub.
  2. In the original location on our server we configure as remote the repository that we set up in point 1.
  3. The entire history is sent to the remote repository.
  4. Security is configured for the developer team, and we inform its members about the migration carried out.

That is it. Then, work on the project can continue on without interruption.
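On the command line, the steps above might look roughly as follows; this is only a sketch, and the remote name, organisation and repository URL are hypothetical:

    # 1. create an empty repository on GitHub (in the web interface), then:

    # 2. in the original repository on our server, add GitHub as a remote
    git remote add github git@github.com:example-org/example-project.git

    # 3. push the entire history (all branches and tags) to the new remote
    git push --mirror github

    # 4. access rights for the team are then configured in the GitHub web interface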

Transfer from GitHub to Bitbucket

Thanks to Bitbucket's import feature, the project does not have to be migrated between GitHub and Bitbucket by hand. It is a convenient method: with the help of a single form, a repository is created on Bitbucket, and you are asked only for the information needed to access the original repository on GitHub.

As can be seen, this form of migration is even easier than a switch to GitHub.

Blending versioned and non-versioned parts of the application

In virtually all web integration projects there are parts that we do not want to version. Developers need local settings for a variety of environments (for example, settings for a database connection), and at the user level there is user-managed data maintained through a Content Management System (texts, images, video…). Such parts should be kept out of the versioning system, for both security and purely practical reasons. In addition, user data tends to be very large in volume; versioning it is problematic at best and in most cases simply not possible.

In our projects we resolve this by placing configuration files and data folders on a list of ignored parts of the project. In Git, this list lives in a file called .gitignore, where specific files and whole folders can be listed and rules for ignoring them can be defined parametrically (for details, see the GitHub help and other resources).
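A minimal sketch of such a list for a typical PHP web project, written here from the shell; the paths are purely illustrative and will differ from project to project:

    cat > .gitignore <<'EOF'
    # local settings, e.g. the database connection
    /app/config/parameters.local.php
    # user-managed data maintained through the CMS
    /web/uploads/
    # third-party dependencies installed by the package manager
    /vendor/
    # per-developer IDE settings
    .idea/
    EOF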

Positive impacts of the version control system on the development of a web integration system

A version control system has an immediate impact on several areas of the life cycle of a web integration project:

  • collaboration of developers, who tend to be in separate locations and focus on different professional areas;
  • responsibility, linked with the traceability of changes to the application code;
  • secure deployment – the option to go through the change history allows us to return to a previous, stable version if an update turns out to be unsuitable for the operating environment;
  • documentation – records from a versioning system complement the traditional developer documentation and add a time dimension, thanks to which it is possible to retrospectively monitor the speed and robustness of development. Exported data can be turned into graphs, which non-technical people in the team will especially appreciate;
  • audit – after completing the project, the versioning system serves as a basis for a project audit;
  • change log – a summary of all changes for a particular period can be produced from the history.

Advantages and disadvantages – a summary

Versioning brings certain disadvantages, including an undeniable increase in labour-intensiveness: developers have to learn the new tools they will need for their work. On the other hand, compared with the effort of developing web applications themselves, versioning requires minimal extra effort, and practice shows that the overall labour-intensiveness of project development does not increase. Versioning therefore cuts deployment costs, and development in turn is made more effective.

The increase in quality levels and effectiveness in development as a consequence outweighs the disadvantages. As well as the technical aspects outlined above, mention should be made of the audit option, which is also appreciated by managers.

The above clearly shows that using a versioning system should be included in the development of every web integration project.

You can now follow to the next part – article called: Front-End Task Automation – Part 1.

Advantages of continuous integration and automated deployment from the perspective of web integration

Advantages of continuous integration and automated deployment from the perspective of web integration

Although continuous integration as such is not tied to any specific tools that support its introduction into a development organization, in the rest of this article we will stick to the terminology of the Git version control system, the Gitflow versioning workflow and the principles of semantic versioning.

Concept of continuous integration

In an organization where larger projects or products are developed by a team, developers typically work on particular parts of the assignment (“features” or “hotfixes”), which must be systematically combined into one whole while preserving the integrity (functionality) of the work that already exists. Continuous integration is based on continuously performing the acts of source code integration, testing, building and deployment in response to each change to the project's source code submitted by a developer, using tools that support development and testing and following an established, automated procedure.

Source code version control system

The cornerstone of deploying continuous integration in an organization is the consistent use of a version control system for ALL project code, ideally including the runtime configuration (with the exception of configurations specific to a particular target runtime environment) and the database configuration in the form of data structure definitions – for relational databases DDL* commands, for other types of storage other descriptive files, usually in XML or JSON format.

* Data Definition Language – a language for describing data structures (typically database tables).

It is also appropriate to version not only the program and configuration code of the projects themselves in the repository, but also meta files, which help to uniformly manage the process of development and integration, e.g. single configuration of Integrated development environment (IDE) of the team, files controlling building of the project (see below) and files configuring this process (project build configuration).

Version control systems are generally divided into centralized (CVCS), based on a client-server architecture, and distributed (DVCS), which do not require a server service because any network-accessible file repository is generally sufficient. Centralized VCS include CVS, SVN (Subversion) and SourceSafe; distributed ones include e.g. Git and Mercurial, of which especially Git has seen a vast expansion in recent years, and not only in the open source world.

Central source code repository

A central source code repository is a network location, or better a service, that provides:

  • hosting – the physical location of the code repositories of the version control system used,
  • centralization of the developed code for transfer and review,
  • security settings (restricting access to specific repositories for specific users or, better, user groups),
  • code review,
  • connection of tools for collaboration on the development of the project.

Utilizing a cloud service eliminates the overhead associated with maintaining your own central repository solution. In contrast, when using your own hardware resources the code remains within the organization's infrastructure and can thus be considered to be stored more safely (let us ignore the fact that, statistically, most information leaks occur right inside the corporate network infrastructure). A technical aspect of choosing the type of operation may also be connection speed, where access to a service located in the company's internal network can be orders of magnitude faster than to external services.

Centralization brings advantages to both the developers and the managerial roles involved in the project. Developers have a single destination for picking up the current project code and subsequently submitting their work, which gives them space to focus on their core specialization, that is, producing functional, quality code. Managerially focused staff, on the other hand, have a clearly designated place where they can check the progress of the project's development at any time.

After the project code is submitted, its review typically follows. Submitting work may take the form of a request for acceptance of a set of changes (commits) in the project code (a so-called pull request), where the developer cannot directly affect the version of the project in the remote repository; instead, the change he or she proposes must pass quality control (code review). The review of the code has both the dimension of a check by a responsible person and that of automated tests run against the submitted code. For more details, see the chapter Automated testing and code control below.
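In Git terms, submitting work for such a review typically looks something like the following sketch; the branch name is illustrative, and the pull request itself is opened in the web interface of the hosting service:

    # create an isolated branch for the change
    git checkout -b feature/contact-form

    # ... commit the work ...

    # publish the branch to the central repository
    git push -u origin feature/contact-form

    # a pull request from feature/contact-form into develop is then opened
    # in the web interface and goes through code review before being merged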

It is advantageous, if not outright necessary, to cover the above-mentioned parts of development with specialized existing services or tools – either as a hosted (on-demand) service or as an application that you install on your own infrastructure and operate on-premise. Hosted services are represented by GitHub and Bitbucket, where the organization gets a web-based user interface for:

  • comfortable security management,
  • control of pull requests,
  • code review,
  • history display,
  • integration with other services (request tracking, automated deployment etc.).

Tools for a solution on your own hardware resources include Stash for Linux (an on-premise counterpart of the Bitbucket cloud service), GitWeb or Gitolite, and Bonobo Git Server or GitStack for Windows.

Packaging and dependency management (PDM) tool

Within the development infrastructure, in addition to a central repository of source code, a need arises to store and organize packages of already compiled code – either within the organization or independently outside of it – or packages of source code that does not change (the case of scripting application environments), and/or to distribute code for the code-assist feature of the development environment (IDE), and to clearly define the project code's dependence on these packages. It is therefore necessary to use a tool that allows you to declaratively define the dependencies of the source code (i.e. the various frameworks and libraries needed to run the application) and to put the resulting compiled application into the form of a distributable package. While on some application platforms the use of such a tool is inherent (e.g. Apache Maven not only for Java-based projects, RubyGems for Ruby or NPM for Node.js/JavaScript-based projects), in the world of web development on the PHP platform its existence, in the form of the Composer tool, is still a relative novelty and a number of open source projects still do not consider it; nevertheless, its use is increasingly promoted.

Each PDM tool commonly brings the following features into development (a brief command sketch follows the list):

  • Defines the project configuration file for the definition of dependencies, including the repositories from which the dependent packages can be obtained, and other conditions for building the application, usually in XML or JSON format.
  • Defines the format/form of the package of reusable source or compiled code (according to the application platform), incl. its meta definition (an analogy of the project configuration).
  • Usually has one or more public repositories of its own, in which packages with popular open source code are made available (in the case of the Composer tool it is the Packagist service).
  • Provides a command-line (CLI) tool for controlling the life cycle of the project following the chosen archetype, typically:
    • creation of the directory structure of the project following the chosen application framework,
    • download of all dependencies necessary for development (source code for code assistance in the IDE) or for running the project,
    • update of the downloaded dependencies when the configuration changes,
    • build (packaging) with the option of loading into the runtime environment (an application) or into a package repository (libraries); the build then performs a series of mutually dependent operations.
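For a PHP project this life cycle maps onto a handful of Composer commands, sketched here with arbitrary project and package names:

    # create a new project from an archetype (here a Symfony skeleton as an example)
    composer create-project symfony/skeleton example-project

    # declare a new dependency in composer.json and download it
    composer require monolog/monolog

    # install all dependencies listed in composer.json / composer.lock
    composer install

    # update the downloaded dependencies after the configuration changes
    composer update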

Continuous Integration tool – CI

A continuous integration tool takes care of all the automatable tasks within the implementation of the CI process, i.e. using, or in cooperation with, the above-mentioned tools for source code management and dependency control, it handles building and packaging. There are many tools available, both open source and commercial; among the best known are, for example, the Jenkins, Hudson and Bamboo tools already mentioned above.

Some are aimed at a specific application platform, others are platform-neutral. A CI tool can be operated either in the traditional on-premise model or used as a cloud service. On-premise deployment may be a necessity in some cases, typically when the source code repositories and/or the target runtime environments for deploying the application are placed in a private network of the organization and cannot be made accessible to a cloud service due to security policies.

The basic property of a CI tool is the definition of parameterized tasks, which can be broken down into individual partial steps. A task may then be executed either manually through the web interface of the CI tool or through its API, typically triggered by an event in another system. Tasks may also be combined into higher logical units. CI tools thus, to a considerable extent, reiterate the principles of standalone command-line build tools of the Ant, Maven or Phing type, with which they may also be combined: a step in a build task may call a build tool controlled by a build configuration file, which may be part of the versioned code of the project or stored on the CI server independently of it.

Automation of building

Building is an activity that, by downloading the source code and configuration files (including the definition of dependencies and the code this definition requires for compiling and/or running the project), creates a form of the project executable in the runtime environment – usually a package, although it may be only a workspace directory created on the CI server for further use.

Depending on the chosen application technology, compilation may be part of the build; in the case of PHP it is usually the installation or update of the built form of the project by the Composer tool. An automated build is usually run by the CI tool in response to a change (a new commit) in one of the main branches of the project's source code repository.

A successful automated build of the project may then be the trigger of a subsequent automated deployment. In the case of a product (an application or a library), the CI process usually ends with the automated build (the finished package is delivered to the CI server for further use).
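As an illustration, a build step for a PHP project on a CI server might boil down to a few commands of this kind; a hedged sketch only, with a hypothetical repository URL and archive name:

    # get a clean checkout of the branch that triggered the build
    git clone --branch develop https://example.com/git/example-project.git build
    cd build

    # install dependencies exactly as pinned for the project (production build)
    composer install --no-dev --optimize-autoloader

    # package the workspace into a distributable artefact for further use
    cd .. && tar -czf example-project-build.tar.gz --exclude='.git' build/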

Automation of deployment

After a successful automated build and testing of the project, the CI tool may – immediately, or independently in a separate task – proceed to automated deployment into the target environment.

The set of environments available in the organization for deployment of the project should be clearly defined and uniformly understood in the context of development. These environments usually stem from the life cycle of the project.

  • Internal test – for deployment of the current (HEAD) form of the project under development. If a shared server environment is used for the development of web projects (which, in the case of PHP development, is widespread practice in bigger organizations), the internal test is usually run in the same server environment in which the developers work, which ensures running conditions equivalent to the developers' local instances of the project. The development team (incl. testers and the internal product owner, delivery manager or project manager continually supervising the development process) works with the internal test.
  • Preview, sometimes also called UAT or simply the testing environment. Deployment into the preview is usually done from a release or hotfix branch of the project's source code; alternatively, a separate preview branch may be defined, into which developers merge the changes they want to have built into the preview environment by the automated deployment. It is primarily the client (product owner) who works with the preview environment and checks (accepts) the changes incorporated in it.
  • Production or live environment.

Automated deployment reduces, if not entirely eliminates, manual interventions by developers in the runtime environments during deployment, which contributes positively to the stable behaviour of the applications.
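One of many possible shapes of such a deployment step, sketched for a simple setup in which the CI server copies the built project to the target web server over SSH; all host names and paths are hypothetical:

    # upload the built project to the target environment, excluding local-only data
    rsync -az --delete \
        --exclude 'web/uploads/' \
        --exclude 'app/config/parameters.local.php' \
        build/ deploy@preview.example.com:/var/www/example-project/

    # run post-deploy housekeeping on the target server (here just a cache clear)
    ssh deploy@preview.example.com 'rm -rf /var/www/example-project/app/cache/*'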

Automated testing and code control

According to the moment at which they are launched, tests can be divided into three groups: pre-commit (before submission), pre-deploy (before deployment) and post-deploy (after deployment into the target environment).

Pre-commit tests

Control of syntactic validity and coding conventions (formatting, documentation of the source code)
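In a PHP project these checks typically amount to commands of the following kind, runnable from a pre-commit hook or by the CI tool; a sketch that assumes PHP_CodeSniffer is installed via Composer, with illustrative paths:

    # syntax check of a changed file
    php -l src/Example/Service.php

    # coding-convention check (e.g. the PSR-2 standard) with PHP_CodeSniffer
    vendor/bin/phpcs --standard=PSR2 src/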

Pre-deploy tests

Pre-deploy tests relate to the source code and can be implemented before building or within it (thus typically after all the necessary dependencies are prepared by a package tool).

These tests may be launched in response to the integration of code changes into one of the main branches (develop, master) of the project's main repository in the source code management system. Depending on the nature of the test or the integration strategy, they can be launched even before the change is integrated into the branch (pre-commit) or after it (post-commit).

  • Launching of unit tests implemented within the project for testing individual features (typically using the PHPUnit framework; see the sketch below)
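Running the unit tests in the pre-deploy phase is then typically a single command, for example as follows, assuming PHPUnit is installed as a Composer dev dependency and a phpunit.xml configuration exists in the project root:

    # run the project's unit test suite; a non-zero exit code fails the CI task
    vendor/bin/phpunit --configuration phpunit.xml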

Post-deploy tests

Post-deploy tests may be implemented only in response to a successful automated deployment as they require an application running in the target environment. These are typically:

  • Automated user tests created in Selenium IDE tool and launched by CI tool in Selenium server environment
  • Integration tests verifying functionality of the system

Automation makes testing an obvious and ever-present part of the development process. Compared to traditional testing performed ad hoc (or neglected altogether), the CI process can require the presence of tests and force the team to respond to their failures (be it by fixing the source code or by updating the test in response to required functional changes).

Main benefits of deploying continuous integration in the process of building web solutions

The automation of individual repetitive tasks, from building to deployment of the application, eliminates the errors associated with performing them manually. In addition, an automated process is always described, at least in the form of configuration files or configuration settings of the tools that perform it. These configurations are managed by a limited group of people (the CI/deployment manager role only), and it is usually possible to put them under version control as well, so the whole team has an overview of when, and by whom, such a meta-definition of the process was changed.

Automated execution of tests ensures continuous testing of the project in response to ongoing changes at the source code level (typically the integration of a commit into one of the project's main branches) or even independently of them (post-deploy tests may continuously verify the integrity of the running application in the chosen target runtime environment).

A continuously running build and deployment process ensures that, in response to the work submitted by developers and to the project workflow mapping the individual development phases to target runtime environments, current, tested versions of the project are always present in those environments.

At the same time, automated deployment facilitates extraordinary remedies: for example, it allows the previous version to be redeployed easily if, despite the automated or manual tests, the project shows problems in the given environment.

The concept of continuous integration is naturally complemented by agile, iterative software development methodologies (Scrum).

The continuous integration tool gathers information about the results of automated tasks and, according to the needs and rules of the organization, distributes it further via the available information channels (e-mail, instant messaging, possibly SMS messages). It thus contributes to the awareness of every member of the development team, as well as of interested parties outside the team (development manager, project management, product owner), about the course of the controlled change process in the web project.

Useful links

You can now follow to the next part – article called: Version control system and web integration projects.

Stay safe online - 5 cybersecurity tips you should know

5 Cybersecurity Tips You Should Know

Do you know what cybercrime is?

Did you know that 71% of all cyberattacks are financially motivated? The cost of cybercrime is estimated to reach around 10.5 trillion dollars by 2025. Most of us cannot even imagine this amount of money. Cybersecurity is something every user of a phone or any other electronic device should know about. Many people who are not well informed about cybersecurity rely on passwords consisting of their date of birth or nickname. That is a terrible mistake to make.

Cyber security is becoming increasingly important as more and more people rely on internet-connected devices, from mobile phones to computers. Cyber security protects your devices from hackers and viruses and ensures confidentiality.

Although it sounds like a fantasy, you don't have to have any technical background to prevent your devices from being misused for criminal activity. Here are 5 simple cybersecurity tips that everyone should follow.

The importance of strong passwords

One of the most common ways hackers break into computers and other devices is by guessing their owners’ passwords. Creating a strong password can protect you from unauthorized access by cybercriminals. Hackers can get through your password within a minute if it is not strong enough.

  • Use 12 or more characters, try not to use your name, surname, or your date of birth
  • Use upper and lower case letters
  • Include some digits
  • Include a symbol
  • Another common password creation mistake is using the same password for every device or account login. Make sure you create different ones for all accounts and electronics.
  • Although it can be difficult to remember them all yourself, you can use a password management tool or an app to avoid constantly mistaking one password for another.

    Also consider using multi-step authentication, which many companies offer. This method adds an extra layer of security because, in addition to your password, you have to enter a code that is sent to your e-mail or phone.

    Keep your software up to date

    You may be impatient to finish updating the software on your phone or laptop, but it’s worth taking the time. Software updates often include new features and are designed to improve the stability and security of the software. Since viruses are constantly changing, your device’s software must also adapt to improve cybersecurity.

    When companies discover security holes, updating is an ideal way for them to prevent hackers from stealing data and other information from your devices. So the next time you’re hesitant to endure a lengthy phone or personal computer update, consider the risks of having outdated software.

    Backing up your data

    Have you ever lost a file or a photo you were working on for a few hours just because you forgot to click the save button? The same thing can happen to your data without proper backup. If your device gets infected by a virus, your hard drive crashes or gets damaged and you can suddenly lose all your data, including important documents and photos. Backing up your device can prevent such a disaster and should be done regularly.

    How can you back up your data?

  • Cloud backup
  • External hard drives
  • Flash drives
  • Backup services

    Install and use antivirus software

    Viruses cause problems such as slowing down your device, corrupting and deleting files, and crashing your hard drive. Antivirus software protects your device from viruses by detecting and removing them before they cause damage.

    In addition to protecting you from viruses, antivirus software can also block spam and ads which is very useful for every internet user. It can defend your device from hackers, protect your data and files, and act as protection from other cyber threats online.

    When choosing antivirus software, don’t just decide on price. You should consider the software’s email scanning capabilities, download protection, speed, malware scanning ability, device compatibility, and privacy policy to get the most for your money.

    Some of the best antivirus programs

  • Norton
  • McAfee
  • Intego
  • Avira
  • TotalAV
  • Avast
  • ESET
  • AVG

    Norton is our number one and we highly recommend it to you. Norton antivirus products offer password managers, unlimited VPN data, identity theft protection, parental controls, and even online storage. If you're willing to pay full price, you get almost every kind of digital security you could ever need.

    Be careful when using public Wi-Fi

    Although the use of public Wi-Fi has increased in recent years, the cyber security threats associated with it are still a concern. Hackers can create their networks or use other public networks to steal information without the user knowing.

    When using public Wi-Fi, it is best to avoid sensitive activities such as online shopping or banking. You should also look for a network that requires a password. In most public places, such as coffee shops or stores, the password is available after making a purchase or upon request. Public Wi-Fi usually has a very weak password, so it is easier to hack and to steal data through it.

    Another way to protect your device when using a public Wi-Fi network is to ensure the websites you visit are secure, use a firewall or set up a virtual private network (VPN).

    Using VPN (what is it and why to use it)

    VPN stands for Virtual Private Network; it creates a secure connection between you and the internet and gives you an extra layer of anonymity and privacy. Using a VPN hides your real IP address and encrypts your internet connection. The best thing about it is that you do not have to be a technical expert to make it work.

    VPN has a lot of uses while browsing the internet, for example, it can change your online location or make your browsing history private and your activities anonymous. The only thing you need to ensure your VPN is working well is to have a well-working device.

    If you are using a VPN for the first time, here are our tips for the best VPNs:

  • CyberGhost (2.29 USD/month)
  • Express VPN (6.67 USD/month)
  • Private Internet Access (PIA) (2.19 USD/month)
  • Surfshark (2.49 USD/month)

    CyberGhost is the VPN we recommend the most. It is very easy to work with, it is supported on all platforms, and the price is acceptable too. You can also connect up to 7 devices with just one subscription.

    Facebook ads - why use them and how to avoid common mistakes

    Facebook ads – why use them and how to avoid common mistakes

    Nowadays, Facebook is one of the best platforms for publishing your ads. However, if you make too many mistakes in your ads, then Facebook itself benefits more than your company.

    In the following article, we’ll go over a few ways to maximize the effect of Facebook ads and how to avoid mistakes your ads may have.

    Why should you use Facebook for marketing?

    Facebook is an excellent platform for your marketing campaigns. But why? There are several reasons. Nearly 3 billion users currently use Facebook, and you’ll rarely meet someone who says they don’t have Facebook or rarely use it. Let’s take a look at the main benefits of using Facebook for marketing.

    Reach wide audience

    In July 2022, Facebook had 2.934 billion users. That’s significantly more than any other social media platform (YouTube, for example, has “only” 2.1 billion users). The audience you can reach on Facebook is not only large in number but also diverse in demographics.

    Regardless of who your business caters to, you should be able to find the profile of your desired audience on Facebook. Although Facebook skews toward younger users – 62% of users fall into the 18 to 34 age category – it attracts users of all generations, with 38% of users falling into the 35 to 65+ age category. Older demographics are the fastest-growing segment of Facebook users.

    Great for both B2B and B2C

    Have you heard that Facebook advertising is only for B2C businesses? Prepare to be surprised that B2B businesses can run successful campaigns on Facebook too.

    Business decision makers spend 74% more time on Facebook than other people. The B2B space is competitive, which means B2B marketers need to be nimble when using Facebook. But with the right targeting, ad format, messaging and user experience outside of Facebook on your site, there is an opportunity for success.

    Remarketing on Facebook is the least that B2B marketers should consider. We often forget that someone who belongs to a B2B target audience doesn't stop being part of it after leaving the office or when online between work commitments. They're still the same people, and Facebook remarketing is a surefire way to keep them in your crosshairs.

    Transparency of the audience

    While some programmatic networks offer similar audience targeting options, Facebook’s audience reach is very transparent. With the audience targeting that you choose, your business has a high level of control and transparency over your target audience.

    While other platforms automatically optimize your placements, segmenting your Facebook campaign based on these known audience clusters allows you to gain valuable insights.

    On Facebook, however, you’ll be able to see which segment(s) performed best, leading you to create hypotheses with the ability to further test and refine strategies.

    Targeting based on interests

    Facebook’s targeting capabilities go way beyond demographics. Increasingly, demographics alone are poor predictors of lifestyle or shopping needs. For example, not all millennials have high college debt or lead lifestyles that could be associated with low disposable income.

    Facebook’s targeting capabilities allow targeting by a wide range of lifestyle characteristics such as interests, life events, behaviours, or hobbies. This not only allows for more precise targeting but also aligns digital strategy with offline tactics, ensuring the same behavioural criteria are used across the marketing channel mix.

    Targeting your competitors

    Facebook doesn’t exactly let you target fans of other brands, but there are still ways to see which people like a brand. And those are the people you can include in your target audience.

    This information is based on self-reported data and may not be up to date, as it depends on when the user last updated their settings. Still, especially when used extensively, it can be an effective strategy for finding well-qualified users.

    By creating custom audiences of users interested in more than 20 well-known brands, thousands of users can be quickly acquired, all without paying the fees for these audience profiles that may be necessary with other channels.

    Different types of ads

    Facebook offers a wealth of options when it comes to advertising and promoting your posts. Each of these can attract new fans to your page and customers to your brand.

    In addition to the large number of ad formats (for example, in the timeline, in videos or in Stories), you can also promote your regular posts. These feel casual, garner a natural response, and can help build your brand's reputation.

    Sponsored posts are one very useful ad format. It is especially useful if you want your fans and others to interact with you. I recommend using sponsored posts if you want to alert your audience to, for example, ongoing competition for valuable prizes or services.

    Boosting a user-generated post on your feed is fantastic to attract more audience to a post that has already been successful with your fans. The chances are if your fans like it, the audience will too.

    Strong user-generated content often outperforms purpose-built ads, because purpose-built messages are more easily identified as advertising. User-generated content, in contrast, feels organic, and people are less likely to resist seeing it.

    What are the most common mistakes when making Facebook ads?

    If you don’t like to throw money out of the window, so to speak, then you’ll want your ads to make as much money as possible using as little money as possible. Poorly designed Facebook ad campaigns can take an unnecessary bite out of your allotted budget. Together, let’s go over the 8 most common mistakes that can cost you dearly.

    Testing multiple interests in one set of Facebook ads

    Has it ever happened to you that a Facebook ad produced amazing results for a few days, but stopped working soon after? Or have you created an ad that worked well without having any idea how to replicate that success? These are the types of problems you usually encounter when you’re in the process of putting information together.

    To put it in perspective, in the early stages of ad testing, many marketers who want to run a Facebook ad for a new audience spend time researching relevant interests they can use to target. They then run a set of ads with all of those interests in one set of ads.

    Using interests is a great way to find new audiences, but this approach makes it impossible to know which specific interest was most effective or to find other interests similar to the interest that brought the sale. And it makes scaling the ad impossible.

    Instead, create a list of all the interests you want to target and group the interests into several categories. Then create several ad sets and target each one to one group of interests. That way you’ll know which audiences are best, how big each audience is, and how to find other interests to test.

    Too many Facebook ads with too little budget

    Most often I encounter overly complex Facebook ad accounts: too many campaigns, too many ad sets, and too many ads. This leads to confusion, lack of efficiency, high costs and ultimately poor results.

    Merge audiences into ad sets with bigger budgets. This allows you to provide Facebook with more data, reach your desired CPA faster, and then scale faster. Combine the 1% lookalike of your 365-day buyer audience and the lookalike of people who landed on your sales page in the last 30 days into one ad set with one combined audience – a sort of super lookalike. In the same way, place all your digital marketers, small business owners, and Facebook page admin audiences into one combined ad set.

    Once you’ve combined your best audiences into just 2-3 ad sets, it’s time to drop your top 3-6 ads into each ad set. Test new ad copy and creatives (both images and videos) with true split testing in separate campaigns.

    Focus on cost per lead over earnings per lead

    There are two main mistakes people make in Facebook advertising. The first is that they don’t test campaign elements organically before running an ad campaign. This applies to everything from the micro element of the ad to the macro elements of the sales process.

    If an organic post on your Facebook page isn’t generating any clicks, shares, or sales, amplifying the post with an ad probably won’t solve the problem. Advertising will only amplify what is already broken in your message and sales process.

    To get the data you need to make decisions – data like CTR, cost per link click, landing page conversion, sales conversion, earnings per lead, cost per lead, lifetime customer value, customer acquisition cost, etc. – you pay in one of two ways. You either pay to get this data quickly through ads (to see where things are broken), or you pay with your time and organic posting to make your ads work from scratch.

    The mistake people make is focusing on the wrong number when evaluating Facebook ads. They focus on cost per lead rather than earnings per lead.

    Cost per lead is a finite number that can only be reduced to a certain point and is relative to your earnings per lead. Earnings per lead is how much you make on each person who comes through your sales process.

    Setting the wrong campaign objective

    The big mistake I see is that people use Facebook ads to sell too fast. They create an ad that pushes cold audiences directly to a sales page to sell something right away.

    It’s too fast and completely inappropriate, and it violates one of the golden rules of social media advertising: you have to give before you ask.

    To generate leads, you need an intermediate step in which you provide something of value upfront. It’s the beginning of a conversation that you can use to build a relationship that you can nurture. Then, when people are ready to buy, they’re more likely to do so with you.

    This leads to the second mistake, which is choosing the wrong goal in your campaign structure. Many people looking for leads choose lead ads, engagement or click-throughs when they should choose conversions and optimize for leads.

    Running ads on Facebook with zero follow-up management

    The biggest mistake I see with Facebook advertisers is that they don’t manage campaigns after they are active. If you set up a campaign and let it run on its own, its effectiveness will decrease over time due to Facebook ad fatigue. Ad fatigue is something to avoid before you antagonize your audience. Fortunately, it is a well-mapped phenomenon that you can easily solve.

    The key to developing sustainable Facebook advertising results is to analyze your campaigns on an ongoing basis. Look at the return on ad spend and the metrics of cost, relevance, frequency and CPM, and then make adjustments to the creative and ad copy, as well as goals and targeting.

    Creating new ads instead of maintaining successful ads

    Facebook’s optimization algorithm needs at least 50 conversions per ad set per week to work. So if you have a dozen ad sets or campaigns that only have a few conversions each, you’re wasting your audience and not allowing the system to optimize.

    Related to creating too many campaigns is not continuing to put more money into your “greatest ad hits.” Many Facebook advertisers have a publisher mentality where they feel the need to create a steady stream of content – X posts per day and Y ads per day. However, you are not a newspaper that needs to have fresh news every day.

    Instead of spawning more campaigns, put more money into your proven winners. We have campaigns where winning ads have been running for years. Instead of putting tens of dollars into each new campaign, go back to the analytics and tweak the winners before they die.

    Creating ads without understanding the entire campaign setup

    A really common mistake business owners make is running ads without understanding the implications of the settings they choose in Facebook Ads Manager, or even when boosting posts. Wrong settings or choices can cause them to waste money on poor ad placement, poor optimization, or poor targeting.

    Many business owners don’t understand that the auto-placement option places their ads not only on Facebook but also on Instagram and the entire Audience Network (non-Facebook pages).

    Others optimize for Lead Generation (which happens through a Facebook form) when they want Conversions (which happen on their website), or choose Brand Awareness when they're really after Traffic.

    Facebook ads work for a wide range of businesses. Successful results are usually a matter of proper testing and the right ad setup. If you’re not familiar with setting up Facebook ads, investing in training for you or someone on your team will help you avoid wasting money or concluding that Facebook ads don’t work.

    Excessive reliance on automatic ad placement on Facebook

    Selecting automatic placement for Facebook ads is easy, can speed up ad creation, and can prevent you from selecting the wrong ad placement, such as selecting a video placement even if there is no video in the ad. Yet ad placement directly affects the cost and performance of ad sets and campaigns.

    On the other hand, Instagram placements are usually more expensive; but if your audience is on Instagram and you can connect with them through these ads, your spending on this platform can pay off.

    The key to making sure your placements aren’t wasted resources is to think carefully about what you want from your ads and which placements will deliver that result. Think about the strategy behind your ads before you accept the default setting.

    Good strategies to ensure the reach of Facebook ads

    I mentioned above that Facebook offers a large number of post types that can be used to promote your page or company. In my opinion, great advertising vehicles are those that don’t feel forced at first glance. Let’s take a look at them a bit more closely.

    • Polls. Facebook allows you to respond to a post in a few simple ways. The days when there was only Like are long gone. Polls are great in that they are simple and don’t require too much time on the user’s part. Apple or pear? Red dress or blue? Which football club will win the Champions League? Another great thing about polls is that people like to be heard and like to “argue” with each other in the comments. Both the responses and the comments then help your posts to be seen.
    • Viral videos. We live in a time that is characterised by its speed. The social network TikTok, for example, has played no small part in popularising this trend. Have you ever watched a funny video, only to find out after a while that you were actually watching an ad? Try advertising your product unobtrusively, in a way that gets the audience's attention first, and only then their interest in the product.
    • Spare the text. For image ads and posts, it’s better to let the image do the talking. Try to make the text that accompanies the image as concise as possible. But also don’t write too much text in the image, this can also put off the audience. Try to think of the most concise ways to say what you have to say.
    • Carousels. In addition to a post with one photo, Facebook also allows you to publish a post with multiple photos arranged in a carousel. If the photo is interesting, the person will probably want to see what else you have “up your sleeve”. While a post with one photo may quickly get overlooked, a carousel can get their attention for longer.
    • Ads in Stories. Stories ads are a bit specific in that they are portrait – as opposed to the standard square or widescreen format. A person watching Stories wants to know what their friends are doing right now, but your ad might just pop up as they’re clicking through. An interesting strategy I’ve noticed is doing ads in Stories unobtrusively. That brief moment of “who is that?” can be enough to make me think “wow, that’s interesting”.

    WHAT’S YOUR FRONT-END STRATEGY

    What’s Your Front-end Strategy?

    Is front-end strategy something you need? What is it actually? Or do you simply think you will somehow sort out your front-ends on your own? What is front-end for you?

    Also, you should not miss the previous part of the article series about web integration – Web integrator and digital transformation. It is a must-read!

    My everyday experience from large corporations is that there are twenty, thirty … a hundred internal and external systems. Every single one looks different, is controlled differently and is maintained differently – there is a bit of chaos in it, and the corporation says: „We want to simplify it; we would like to have a common, unified, user-friendly, light … front-end“.

    Front-end (FE) can be understood as the presentation layer of the three-tier architecture paradigm, with the web integrator contributing as someone who assists with this part of the “SW ecosystem”.

    This article aims to summarize the principles of a successful FE, or to be precise the “Common Front-end” and describe the basic stages of a new FE delivery process.

    Principles of a Quality and Successful Front-end

    I believe that the right FE strategy is based on the following principles, which help ensure that we have a “good and successful” FE from the perspective of users and business goals, as well as it is implemented “correctly”, i.e. effectively, transparently and sustainably.

    1. Focus on Users
      FE is defined by its emphasis on interaction with users; how they will use the offered FE is priority number one, while maintaining the business benefits and the given technological conditions. The user must be involved in all steps of the FE development preparation process.
    2. Business Value
      FE is a system (application) that has to deliver the desired business value, and we must be able to justify the investment in it. Involving business people is key throughout development, as is also emphasized later on. The Business Analyst role, subsequently represented by a Product Owner role, is key to fulfilling such expectations.
    3. Flexibility
      FE is part of the application utilized by the user, and it is where all the systems materialize – while actually being „below“, i.e. in the lower layers of the architecture. User requirements change over time as well as business requirements. These changes need to be addressed quickly and flexibly. Often without having to make changes in other parts of the system. FE must be designed to make these changes possible according to the requirements – and ideally prevent them.
    4. Time to Market
      Time to Market is closely related to flexibility. We see it as the ability to bring the executed change to the user in the shortest time possible. This requirement is reflected in the need for both quick development and a short “release cycle”. Its condition is to maximize the use of the already existing parts in all stages of FE development.
    5. Measurability
      FE technologies allow detailed measuring. The aim is to carry out all changes based on the results of measuring, and then assess the changes by measuring them again. Choosing the right metrics of ‚before‘ and ‚after‘ the change is essential to properly evaluate the change. We can use a variety of metrics – users‘ behaviour, system behaviour (performance, e.g. security), but also business performance metrics (various conversion rates, execution time, etc.).
    6. Iterative Development & Agile Principles
      Iterative development with continuous involvement of UX / CX and business capacities is the way to deliver changes effectively. Thanks to its commonly lower complexity, this approach is more beneficial than typical “waterfall” management methods, yet the two do not exclude each other.
    7. Joint Engagement of Business, IT and UX
      Continuous engagement of UX / CX and business value roles is important throughout the whole development. Separating these roles into stages does not meet the principles of iterative development and jeopardizes the future ‘product’.
    8. Keeping Key Know-how
      Know-how is acquired (or already exists) throughout the process. Know-how that is key for the FE owner must always be properly documented internally. A prerequisite for this is separating the business logic from the presentation layer (FE) and, ideally, also defining working roles whose appointment is primarily handled internally.
    9. Performance
      So that we can work with our FE system nicely and easily, we need the right response rate based on the number of users and target devices expected to be involved. Furthermore, performance also must be viewed from business perspective. Either way, continuous increase of performance (Performance Tuning) is closely associated with the ability to measure the results.
    10. Security
      FE is a “gateway” to the company, thus commitment to safety is crucial. For this reason, security requirements are essential, and must meet the requirements defined by the organization where FE is implemented.

    Business goals, needs, technologies

    Principles Across FE – Common Front-end (CFE)

    Front-ends of individual systems don’t function alone, but interdependently. Therefore, it is important that the following set of rules is observed across all FE systems.

    1. Consistency
      Consistency of behaviour, appearance not only within one FE system but also across systems. User environment must be built on the same principles and provide a consistent and quality user experience.
    2. Re-use
      Reusing existing parts both on the level of UI / UX and on the level of technology and development – the ability to reuse components – significantly affects the flexibility and time to market delivery of individual FE systems.
    3. MVP (Multi Vendor Platform) Principle
      Regardless of the technology and tools, the possibility to deliver by more suppliers must be preserved and therefore an Anti-Vendor lock must be ensured (independence from one supplier).
    4. Quality
      Clear sets of standards to ensure uniform FE quality across systems.

    FE Preparation Process

    FE preparation process can be divided into three main stages, which are based on software engineering, and which are named after the questions we seek answers to.

    1. WHO and WHAT?
    2. HOW?
    3. Is it O.K. and is it running (correctly)?

    These stages can interlink and the relationships between the individual stages are indicated in the diagram. It is also important to consider how demanding individual stages are. We have illustrated this by the size of the circle.

    We can identify three basic groups of roles within the FE development process:

    • Business – responsible for the commercial benefit of the system
    • UX/CX –FE design and functionality
    • DevOps – development, testing and operation

    Engagement of these roles is illustrated in colours according to the expected engagement range.

    FE preparation process can be divided into three main stages

    WHO and WHAT?

    Research

    Any significant FE change starts with research. The aim of such research is to collect data (qualitative / quantitative) as the basis for the analytical stage. The data serves to better understand the present and future state, the strengths and the weaknesses. We focus on all three key aspects – business, technical and user. We also set comparative metrics and apply them to the current state.

    Analysis

    Analysis answers the question – WHAT will be the subject of development, thus how to meet the business, functional, technical and user goals of the project. This is possible only on the basis of an analysis of the data obtained in the previous stage, which will provide us with insights and information about WHO is or will be the user, what their needs, motivations, constraints are, and what kind of tasks they (will) address; and, how competition tackles similar situations; what works and what doesn’t.

    As part of this step, processes using design techniques are recommended – especially “mockup” prototype of the future state. These processes are intended to greatly reduce the workload of this step and bring its outputs closer to the contracting authority.

    In the absence of functional changes within the FE, this part may be significantly shortened.

    HOW

    Design

    Design primarily answers the question of HOW the final FE will be built. In a closed environment, the technological part in particular is subject to the existing “technology stack” and to the patterns in use (the same applies to the UX part of the design).

    Compared to the classical concept of IT design, we can achieve maximum work efficiency here, and developers can understand the design more easily. From the perspective of business and user requirements, we use design visualization and rapid prototyping methods (at different levels of detail – sketches, wireframes, interactive HTML prototypes), which greatly minimize the textual description of the design.

    In addition to streamlining, the aim of prototyping design is a quick client validation of the design and user usability testing. This approach helps us prepare several different variants of the future solution (concepts / ideas) already at the beginning of this stage; we can validate them and select the most appropriate ones for further elaboration.

    Increasing the quality of the final design is achieved by an iterative approach, where we test the prototype several times and improve it even further. Naturally, all this happens much before the actual development starts.

    In an agile development environment, it is neither necessary nor desirable to design the complete solution prior to the development stage. The development phase can start once we have prepared a compact Minimum Viable Product and have sufficient background material for the first few development sprints. Finishing the other parts of the design then takes place in parallel with the development itself. This gives us significant room to save time, as well as greater design flexibility, ready to respond to changes in goals as they come up during implementation.

    Development/project management

    The development stage is iterative as it is interlinked with the design stage. A suitable methodology for iterative development is, for example, SCRUM.

    The roles of Product Owner and Scrum Master are key within development. The overall setup of the team must always follow the principle of jointly engaging business and UX.

    UX engagement in the development phase has two forms:

    • UX designer is part of the DEV team – clarifies the assignment, completes design details, provides support.
    • In parallel, prepares/ completes the materials for further sprints.

    Every “definition of done” of each solved task must be followed up by quality control abiding by pre-defined concepts – in particular, this regards functional concept and UX concept.

    Is it O.K. and is it running?

    QA

    Despite QA (Quality assurance) being part of the “Definition of Done” for each task, there are still areas that need to be tested separately as per the entire product, taking into account linkage to the surrounding systems. These tests are often performed by specialists outside the primary agile team. It regards, in particular, the following areas:

    • Security testing
    • Performance testing
    • Integration tests
    • Standard „check list“ test defined by the client
    • User testing
    • Expert UX assessment

    Deployment and Operation

    The processes of deployment and the operation itself are fundamentally influenced by the chosen system architecture and technology stack. The very issue of continuous integration and deployment automation is described, for example, in separate parts of this series and is beyond the scope of this article.

    Measuring/analytics

    An essential step following deployment is continuous measuring of the solution “performance”. This includes activities associated with measuring and evaluation of the set metrics, technical parameters and quantitative / qualitative parameters from users’ perspective.

    Based on measuring outputs, it is possible to evaluate fulfilment of the objectives set in the project and to prioritize minor and conceptual changes of the solution.

    Another benefit of measuring is being able to further increase the commercial and technical performance.