
GA Filters and Segments: What Is the Difference?

Do Google Analytics filters apply retroactively? And what is the difference between filters and segments?

One question I’m often asked is, “Can Google Analytics filters be applied retroactively? Do they affect historical data in any way?”

In other words, people want to know if a filter they activate also applies to historical data or only to data after the filter has been applied.

The answer is simple: NO. Google Analytics filters aren’t applied retroactively. They only apply to data collected after the filter has been applied. 

Like many people, you may be confused about what a filter does in Google Analytics. The simplest definition is:

Filters allow you to limit and change the data contained in a view. For example, you can use filters to exclude traffic from certain IP addresses, focus on a specific subdomain or directory, or convert dynamic page URLs into readable text strings. 

You can set up filters at the view or account level. Under All Filters you can see and manage existing filters, and there is also an option to create a new one.


Filters permanently change your data based on the criteria you specify. For example, I use a filter to convert all page paths to lowercase, and I do the same for campaign source and content.

There is no way to undo the changes made by a filter, so it’s important that you know exactly what the filter will do to your data before you apply it.

To test filters, it’s advisable to set up a test analytics account. Since they permanently change your data, it’s important to make sure they work correctly before applying them to your live data.

A safer option for temporarily modifying your data is to create a segment.

What is a segment?

Segments allow you to view a portion of your historical data. They can be applied retrospectively or removed at any time without destroying data.

Think of a segment as an analysis of a subset of your data, such as looking only at traffic to a website when users came from an email. Segments can be created on the fly and don’t permanently change anything. Deactivate the segment, and the data returns to normal.

Google Analytics Segments vs. Filters

Click on the Add Segment button to see the list of pre-configured segments. As you can see, there are many options to play with, and with the ability to import new segments from the Google Analytics gallery and create your own, there is plenty of flexibility to explore your data from a variety of perspectives.

Segments are great and an essential part of your Google Analytics arsenal, but they aren’t without their weaknesses.

Weaknesses of segments

As handy as it is to be able to change your data on the fly, some functionality is lost compared to filters. First, segments are less flexible than filters. For example, you cannot exclude a specific IP address or a range of IP addresses with a segment.

They also have a habit of triggering sampling within Google Analytics, making the data displayed in a report less than 100% accurate. If your data set is small, you should be OK, but segmented reports hit the sampling threshold much sooner than unsegmented ones.

Weaknesses of filters

With the power of filters comes a certain responsibility. They permanently alter the data in a view from the moment they are applied until the moment they’re removed. There is no undo. Nor can they be applied retroactively, as is possible with segments. This permanence and the additional Google Analytics knowledge required to set up a filter are the biggest weaknesses of filters.

In line with best practice, you should always have a fully unfiltered “All Website Data” view to ensure data continuity and to check that your data is being collected correctly. Depending on the requirements of your website, you should then have other filtered views.

We recommend at least the All Website Data view and a view that filters out your own IP address and the IP addresses of partner agencies/other branches, etc., although we would usually go much deeper with a Google Analytics setup.

When should you use segments, and when filters?

A segment is the best way to isolate a specific metric, channel or device in your report view and apply it to your historical data. If you want to see how many people have come to your website from Facebook via their tablets over the last three years, a segment is the way to go.

If you want to permanently change the way your data is collected, such as excluding your IP address, removing bots or rewriting your URLs to make them more readable in reports, you should use a filter.

The most important thing to remember about filters and segments is that there really is no “versus” between them. They are different tools for different tasks, used together in a good setup. For most reports you will rely on segments to isolate and highlight different metrics, but to make sure your data is as clean as possible you need filters.

Unsure if your Google Analytics setup is following best practices? Get in touch via the contact form and we’ll see how we can help.

Google Analytics 4: How to Exclude Internal Traffic via IP Filter

How does Google Analytics 4 define internal traffic?

When it comes to website analytics, the term ‘internal traffic’ refers to the web traffic generated by individuals who are directly associated with a business, such as its employees, suppliers, or service providers, including developers. This type of traffic is considered internal because it originates from within the organization and is not the result of organic or external sources. Essentially, any visits to a website that come from those within the business or its network are considered internal traffic. Such visits should be excluded from tracking because these people are not your target audience.

Excluding internal traffic from GA4 reports is crucial, as it has the potential to significantly impact website usage metrics. Internal traffic, stemming from visits by employees, suppliers, and other service providers, can often distort analytical data and lead to inaccurate insights. Therefore, to ensure that website metrics are as accurate and reliable as possible, filtering out internal traffic from GA4 reports is a necessary step for businesses. By doing so, they can gain a clearer understanding of how their website is performing and make informed decisions based on factual data.

How can you remove internal traffic from GA4 analytics?

Follow these steps to remove internal traffic from being reported in your Google Analytics 4 property:

Step #1: Go to your GA4 property.

Step #2: Click on the ‘Admin’ link located at the bottom left-hand side of the page.

Step #3: Under the ‘Property’ column, select ‘Data Streams’.


Step #4: Select the data stream that you wish to exclude internal traffic from by clicking on its name.


Step #5: Scroll down the page and locate the ‘Google tag’ section. Then, click on ‘Configure tag settings’.


Step #6: To access more options, scroll down the page and click on the drop-down menu labelled ‘Show all’.


Step #7: Locate and click on ‘Define internal traffic’ by scrolling down the page.


Step #8: Click the ‘Create’ button.



Step #9: Under ‘Rule Name’, assign a name to your filter by typing in the text box. For instance, let’s name our filter ‘Internal Traffic’.


Step #10: Since we are creating a filter to exclude internal traffic from the website, keep the ‘traffic_type value’ as ‘internal’.


It is possible to modify the ‘traffic_type value’ to a different parameter if you want to use another identifier for internal traffic.

Please note that the value you set for ‘traffic_type value’ in the data filter settings will be utilized in the following step.
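For context, this rule works by tagging matching events with a traffic_type parameter set to the value above. As a rough, optional sketch (not part of these steps, and assuming your site loads GA4 via gtag.js directly), the same parameter can also be set in code so that every event from a given browser is marked as internal:

  <script>
    // Hypothetical sketch: mark all subsequent events sent from this
    // browser as internal traffic, so the internal-traffic data filter
    // can match them without an IP rule.
    gtag('set', { 'traffic_type': 'internal' });
  </script>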

Step #11: Under ‘Match Type’, select the drop-down menu.


The drop-down menu offers the following match types:

IP address equals: This option will only match a single IP address, such as ‘125.204.156.26’.

IP address begins with: This option will match all IP addresses that begin with the entered input.

For instance, if you enter ‘125’ as the input, it will match ‘125.204.156.26’ but not ‘129.204.156.26’.

IP address contains: This option will match all IP addresses that include the provided input.

For instance, if you enter ‘125’, it will match ‘125.204.156.26’ and ‘190.125.156.28’, but not ‘167.204.156.26’.

IP address is in range (CIDR notation): This default option will match a range of IP addresses.

For example, ‘128.208.156.28/32’ would match only the IP address ‘128.208.156.28’. However, ‘128.208.156.28/24’ would match any IP address between ‘128.208.156.0’ and ‘128.208.156.255’.
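If CIDR notation is unfamiliar, the number after the slash says how many leading bits of the address must match. As a minimal illustrative sketch (IPv4 only, plain JavaScript; the helper names ipToInt and inCidr are hypothetical, not anything GA4 requires), this is roughly the comparison the match type performs:

  function ipToInt(ip) {
    // Convert dotted-quad notation into a 32-bit unsigned integer.
    return ip.split('.').reduce(function (acc, octet) {
      return (acc << 8) + parseInt(octet, 10);
    }, 0) >>> 0;
  }

  function inCidr(ip, cidr) {
    var parts = cidr.split('/');
    var bits = parseInt(parts[1], 10);
    // Build a mask with the first `bits` bits set to 1.
    var mask = bits === 0 ? 0 : (~0 << (32 - bits)) >>> 0;
    return (ipToInt(ip) & mask) === (ipToInt(parts[0]) & mask);
  }

  console.log(inCidr('128.208.156.200', '128.208.156.28/24')); // true
  console.log(inCidr('128.208.157.1', '128.208.156.28/24'));   // false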

Note 1: In GA4, you can enter IPv4 or IPv6 addresses in the IP address field.

Here is an example of an IPv4 address: 192.168.1.20

Here is an example of an IPv6 address: 2a02:c7f:a82d:8d00:4351:b3c9:7387:8802

Note 2: The IP address field in GA4 does not allow the use of regular expressions.


Note 3: If you want to set multiple conditions to identify internal IP addresses, click on the ‘Add Condition’ button.


When you set multiple conditions to identify internal IP addresses, they are connected by logical OR.

For instance, IP addresses that match ‘2a02:c7f:a82d:8d00:4351:b3c9:7387:8802’ or ‘192.168.1.20’ will be flagged as internal traffic.

Step #12: Now input the IP address for which you want to exclude internal traffic.

If you want to exclude all traffic originating from your device but do not know your IP address, click on the link ‘What’s my IP address’.


You will be automatically redirected to a new tab in your browser window, where you can view your IP address.

Step #13: Copy your IP address and paste it into the text box below ‘IP address’.


Step #14: To create your internal traffic data filter, click on the ‘Create’ button.


Your new internal traffic rule will now appear in the list of rules.


Step #15: Click on the cross button (at the top left of your screen) three times to go back to the admin section of your GA4 property.


How can the ‘Exclude Internal Traffic’ data filter in GA4 be tested?

To test the filter, follow the steps below:

Step #1: If you are blocking traffic from your own device, simply access your website using a different device.

If you are blocking traffic from an external IP address that doesn’t belong to your device, you can ask the person who owns that IP address to visit your website, provided you have added their IP address to the exclude internal filter.

Step #2: Access the ‘Realtime’ report in your GA4 property.


Step #3: Find the ‘Add Comparison’ button and click it.


Step #4: Now, click on the drop-down menu located under the ‘Dimension’ label.


Step #5: Enter the term ‘Test’ and you will be able to view the dimension labelled ‘Test data filter name’ in the drop-down list.


Note: if the ‘Test data filter name’ dimension appears disabled, you might need to wait for 12-24 hours.

Step #6: Select the ‘Test data filter name’ dimension by clicking on its name.


Step #7: First, click on the drop-down located under ‘Dimension Values’, and then, select the checkbox next to ‘Internal Traffic’.


Step #8: Click on the ‘OK’ button to confirm the selection.

Step #9: Click on the ‘Apply’ button to save the changes.


After completing the previous steps, you should be able to see the new comparison applied to your report.


Step #10: Reload your browser window.

You should now be able to view the filtered data under ‘Test data filter name’.


The real-time view now shows the traffic originating from the internal IP address we defined earlier.

This indicates that our filter is working properly, and we can now activate it to exclude internal traffic from our reports.

What are the steps to activate the ‘Exclude Internal Traffic’ data filter in Google Analytics 4?

Step #1: First, navigate to your property, and then click on the ‘Admin’ link.


Step #2: To access the ‘Data Settings’ menu, simply click on the corresponding drop-down menu.


Step #3: Now click on the ‘Data Filters’ button.


Step #4: In this step, you need to click on the ‘Internal Traffic’ data filter.


Step #5: Scroll down to the ‘Filter state’ section and click on ‘Active’. Then, click on the ‘Save’ button located at the top right-hand corner of your screen to apply the changes.


Step #6: Activate the filter by clicking on the corresponding button.


The “exclude internal traffic” filter should now be displayed as “Active.”


Great job! You’ve successfully configured your filter in GA4 to exclude internal traffic.

What is the process for disabling the ‘Exclude Internal Traffic’ filter for data in GA4?

To turn off the exclude internal traffic filter for your GA4 data, follow these steps:

Step #1: Go to the “admin” section of your GA4 property.

Step #2: Select the ‘Data Settings’ drop-down menu located under the Property column.

Step #3: Click on ‘Data Filters’ to access the relevant section.

Step #4: To deactivate a data filter in GA4, locate the filter you want to disable and click on the three dots menu next to it.


Step #5: Once you have located the data filter you wish to disable, click on the ‘Deactivate filter’ option.


After deactivating the filter, you should see that the ‘Current Status’ of the filter has been updated to ‘Inactive.’


What are the steps for reactivating the ‘Exclude Internal Traffic’ data filter in GA4?

Step #1: Go to the ‘admin’ section of your GA4 property.

Step #2: Once you’ve accessed the admin area, select the ‘Data Settings’ dropdown menu located under the ‘Property’ column.

Step #3: From there, click on ‘Data Filters’ to access the filters section.

Step #4: Locate the data filter you wish to reactivate and click on the three dots menu next to it.


Step #5: After selecting the desired data filter, click on ‘Activate filter’ to reactivate it.


Step #6: Finally, to confirm the reactivation of the filter, click on the ‘Activate filter’ button.


How can I edit the parameters of an ‘Exclude Internal Traffic’ filter in GA4?

Step #1: Go to the ‘admin’ section of your GA4 property.

Step #2: Next, select the ‘Data Settings’ dropdown menu located under the ‘Property’ column.

Step #3: From the dropdown options, choose ‘Data Filters’ to navigate to the filters section.

Step #4: Locate and select the data filter that you wish to edit.


Step #5: Once you have selected the filter you want to edit, make the necessary changes and then click on the ‘Save’ button to confirm the modifications.

What is the process for eliminating an ‘Exclude Internal Traffic’ data filter in GA4?

Step #1: Go to the ‘admin’ section of your GA4 property.

Step #2: Now, select the ‘Data Settings’ dropdown menu located under the ‘Property’ column.

Step #3: From the dropdown options, choose ‘Data Filters’ to navigate to the filters section.

Step #4: Select the specific data filter that you wish to delete.


Step #5: Click on the three dots menu located at the top right-hand corner of your screen.


Step #6: From the dropdown menu, choose the option to ‘Delete’ the selected data filter.



How To Resolve the “Couldn’t Fetch Sitemap” Error on Search Console?

Sitemaps notify search engines about the crucial pages that need to be crawled.

For smaller websites with fewer than 100 URLs, a sitemap is not strictly required, so you generally do not need to create one for a small website.

If all of your most crucial pages are linked from your home page, the search engines will locate all of your pages more rapidly.

But did you happen to encounter the “Couldn’t Fetch Sitemap” error when submitting your sitemap in Search Console? This issue could be due to a Search Console bug or a sitemap problem. Let’s discuss both possibilities.

Essential principles for creating an XML sitemap

There are a few fundamental rules you must stick to while creating an XML sitemap. Here are some of them:

  • The sitemap should include no more than 50,000 URLs and stay within the 50 MB (uncompressed) size limit.
  • As a best practice, place a sitemap at the website’s root.
  • Non-canonical URLs, redirected URLs, and 404 URLs should be avoided.
  • Absolute URLs should be used instead of relative URLs.
  • Make sure to create and submit a sitemap to the preferred URL of the website.
  • Ensure that the sitemap is UTF-8 encoded.
  • The sitemap and any of the URLs listed in it should not be blocked by robots.txt.
  • Sitemap submission to Google does not ensure that the Google bot will crawl all of its URLs.
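For reference, a minimal valid XML sitemap that follows these rules looks roughly like this (the URLs and date are placeholders):

  <?xml version="1.0" encoding="UTF-8"?>
  <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
    <url>
      <loc>https://www.example.com/</loc>
      <lastmod>2024-01-01</lastmod>
    </url>
    <url>
      <loc>https://www.example.com/about/</loc>
    </url>
  </urlset>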

Sending an XML sitemap to Google helps the search engine crawl your website’s URLs.

On the other hand, there is no assurance that Google will crawl all of the URLs included in the sitemap or that it will do so more regularly.

However, Google is more likely to crawl the URLs in your sitemap effectively if you provide worthwhile content and update your sitemap regularly.

Verification of the sitemap

Before attempting to fix the “Couldn’t Fetch Sitemap” error on Search Console, you must ensure that your sitemap is valid.

You might want to use the XML Sitemap Validator to complete this task. With this excellent Google sitemap checker, you can check the validity of any sitemap. Additionally, it will include details on how to format your sitemap properly.

To do this you should follow these simple steps:

  1. Access the XML Sitemap Validator
  2. Enter the address of your sitemap

By following these two easy steps, you can check the validity and accessibility of your sitemap. Moreover, this tool makes sure that the sitemap is formatted properly and can inform Google about its location.

You may easily spot any mistakes in your sitemaps and fix them before submitting them to Google by using this sitemap validation tool.

Resolving the “Couldn’t Fetch” Error in Google Search Console

There are various methods to fix the frustrating Google search console error that stops your sitemaps from being fetched. Below we list 7 solutions to this problem.

Way 1: Resubmit the sitemap with an extra forward slash

This can be your potential solution if the Google search console is not fetching your sitemaps.

Although it is ineffective for certain users, it works in many cases. To employ this technique, follow these instructions.

  1. Log in to your account on Google Search Console.
  2. Choose “Sitemaps” from the left menu/panel.
  3. In “Add A Sitemap” enter the URL of the sitemap you want to index.
  4. Next, add an extra forward slash to the end of the URL and click the “Submit” button.
  5. If the problem still occurs, try again without the extra forward slash.

Do not worry, Google Search Console will index the correct domain name even with the added forward slash.

Way 2: Rename the sitemap

If the sitemaps are valid but still don’t function or can’t be read, renaming the sitemap file could be the key to fixing the Couldn’t Fetch error in Google Search Console.

To “rename” the file, submit https://domain.com/?sitemap=1 in place of sitemap_index.xml. It serves the same purpose as renaming the sitemap file.

Way 3: Check the size of the sitemap.xml file

An uncompressed sitemap may be at most 50 MB in size and contain up to 50,000 URLs. For bigger websites, a sitemap index can help break the sitemap down into several smaller files.

Where possible, stay close to these maximum limits rather than splitting sitemaps into many unnecessarily small files.

When the maximum file size limit is reached, Google Search Console displays an error notice informing the user that the sitemap has exceeded the maximum file size restriction.

To prevent the “Couldn’t Fetch” Google Search Console error, you should check the size of the sitemap file.

Way 4: Make sure that the sitemap is not blocked by Robots.txt

Google must be able to view the sitemap and all of the URLs listed in it. Google will display an error stating “Sitemap contains URLs which are forbidden by robots.txt” if access is restricted by the Robots.txt.

For example, your robots.txt file might look like this:

User-agent: *
Disallow: /sitemap.xml
Disallow: /folder/

As you can see, the sitemap is blocked in this instance, and Robots.txt has restricted all URLs in the /folder/. Each website has a Robots.txt file in the root directory.
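By contrast, a configuration like the following keeps the sitemap and its URLs crawlable (the paths are illustrative); the optional Sitemap directive also tells crawlers explicitly where to find the file:

User-agent: *
Disallow: /private/

Sitemap: https://www.example.com/sitemap.xml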

Way 5: Check if UTF-8 is supported by the sitemap file

All automatically created sitemaps must support UTF-8 as a standard feature. If you manually build the sitemap file, you should make sure it is UTF-8 compliant.

Special characters like * or {} are not supported in URLs. Make sure you use the proper escape codes for such characters.

For example, the URL below has been UTF-8 encoded and entity-escaped:

http://www.instance.com/%C3%BCmlat.html&q=name
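As a quick illustration, JavaScript’s built-in encodeURI function produces exactly this kind of UTF-8 percent-encoding (the URL is a placeholder):

  // Percent-encode non-ASCII characters before listing a URL in a sitemap.
  var encoded = encodeURI('http://www.example.com/ümlat.html');
  console.log(encoded); // "http://www.example.com/%C3%BCmlat.html"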

Way 6: Place the sitemap at your website’s root

You should put the sitemap in the root folder of your website if you want to make sure that Google crawls and indexes all of the URLs on it.

For instance, positioning the sitemap as follows is not possible:

https://www.betterstudio.com/folder/sitemap.xml

It will display a “URL not allowed” message: a sitemap placed in /folder/ may only list URLs within /folder/; URLs at a higher level are not allowed.

Way 7: Remove the checkmark next to “Search Engine Visibility”

A WordPress user should be familiar with the fundamental settings. You should untick the following important setting in the settings section: Discourage Search Engines From Indexing.

To do so, follow these steps:

  1. Open the dashboard of WordPress.
  2. In “Settings” click on “Reading”.
  3. Lastly, untick the option called “Discourage search engines from indexing this site”.

After unticking search engine visibility, you will have to submit your sitemap to Google Search Console again.

The problem is still not solved

Let’s say that despite your efforts, you continue to see the “Couldn’t Fetch” sitemap error in Google Search Console. In that situation, the sitemap must be handled manually.

We advise you to start with the aforementioned techniques. Consider using the manual way if none of those solutions resolves your issue.

With this approach, you’ll need to manually build your website’s XML sitemaps and upload them to the domain’s root directory. After that, the sitemap has to be submitted to Search Console.


The Ways You Can Claim and Verify Your Website Using Google Merchant Center: Tutorial

To launch Shopping campaigns, you need to validate and claim your e-commerce website with Google Merchant Center.

Please continue reading if you are logging in with your personal Merchant Center account. If not, the verification code will be sent to you through email, allowing you to move straight on to Step 2.

Step 1: Obtaining the verification code

After signing in to your Merchant Center account, copy the HTML tag code.

1. Navigate to “Business details,” “Website,” and “HTML tag” in Google Merchant Center.

2. Copy the tag (for instance: <meta name="google-site-verification" content="ffae76-…-98addq9" />).


Step 2: Verification code placement

The environment of Shopify

Follow the steps below to paste the Google Merchant Center verification tag into your theme’s <head> tag. Note that some themes override the <head> tag, so be careful. If that is the case, check your theme’s options or get in touch with its creator.

1. Visit Shopify’s theme editor by selecting “Online Store > Themes”.


2. Click “Actions” in the top right corner. A menu will open; select “Edit code”.


3. In the layout directory, click theme.liquid. Insert the <meta> tag that you acquired from Google Merchant Center into the <head> section (just below the opening <head> tag, alongside the other meta tags), then click “Save”.


The environment of Prestashop

The Azameo Prestashop plugin makes it simple to set up the verification code.

1. Navigate to Module Manager > Azameo > Configure.

2. Enter the variable content (verification number) in the Google Verification Code box. Example: Ns3l8wY-…-h4nz0cZ

3. To save, choose Change.


The environment of WordPress/WooCommerce

The HTML tag can be added in one of two ways to the <head> section of your WordPress website:

Technique 1: Insert the tag in your child theme’s header.php file. This option requires you either to go through your webmaster or to have some basic WordPress knowledge.

Technique 2: Use an extension.

  • For instance, install the free Insert Headers and Footers plugin on your website.
  • Paste the meta tag in the Scripts section of the header by going to Settings > Insert Headers and Footers.
  • Press Save.

Step 3: Verifying the site

You can complete the check once you’ve added the tag to your website. Follow these steps if you’re using your own Merchant Center account. Otherwise, Azameo will automatically check your site and you will be set to go.

1. Return to Google Merchant Center and choose “Business information,” “Website,” and “HTML tag” afterward.

2. To finish the verification, click the “Verify URL” button.

Good work, your website has now been validated.


Google Tag Manager: Should You Implement Fire Once per Page, per Event, or Unlimited?

The Tag Firing Options in Google Tag Manager provide a range of choices to configure how often your tag will fire. By clicking on Advanced Settings for any tag, you can access the Tag Firing Options drop-down. The available options are Once per Event, Once per Page, and Unlimited. Each option serves a specific purpose and can be useful depending on your tagging needs.

Although the Once per Event and Once per Page options may be straightforward, the Unlimited option may seem unclear. Therefore, this blog post will clarify the Unlimited option and compare it with the other two options. We will delve into Fire Once per Page vs Once per Event vs Unlimited in Tag Firing Options to help you make informed decisions about your tagging strategy.

Option #1: Fire Once per Page

It is pretty evident that if a tag’s trigger gets activated three times on a page, but the tag is configured to fire Once per page, the tag will fire only once on that particular page. This feature is beneficial for standard websites that are not single-page applications. In such cases, tracking a visitor’s completion of a specific action only once per page is sufficient, and there is no need to record every single trigger activation or tag sequencing initiation.


For instance, suppose you have integrated Facebook Pixel the conventional way, i.e., by utilizing the Custom HTML tag. In that case, the recommended approach is to keep the FB Pixel base code isolated as a distinct tag.


And after that, it is necessary to set it as a setup tag in Tag Sequencing ahead of all other Facebook Pixel tags, such as Pageview.


It is crucial to ensure that the Facebook Pixel base code, which serves as the main code, is set to Fire once per page. You can achieve this at the Custom HTML tag level.

Regarding Single-Page Applications, where pageviews do not reload the entire browser tab like regular websites, the Once per page option will apply to ALL pageviews until the visitor reloads the browser tab entirely.

Therefore, if you configure a tag X to fire Once per page, it will only fire once until the visitor performs a complete page refresh. Even if the visitor navigates through ten pages on your single-page app, the tag X will only fire once.

Option #2: Fire Once per Event

This option is the most prevalent one as it is the default setting.

Before delving into the details of this option, let’s briefly discuss events in Google Tag Manager. These events are distinct from the events in Google Analytics and refer to the items visible on the left side of Preview mode.


Each item displayed on the left side of Preview mode represents an event, except for the Message. If you encounter a Message, it does not qualify as an event.

To elaborate in more technical jargon, a Google Tag Manager event refers to a dataLayer.push that incorporates the ‘event’ key. Here’s an example:

  <script>
    window.dataLayer = window.dataLayer || [];
    window.dataLayer.push({
      'event': 'new_subscriber',
      'formLocation': 'footer'
    });
  </script>

So, if you have, say, implemented outbound link click tracking and you want to fire a Google Analytics tag every time an outbound link is clicked, keep the Fire once per event option selected. If a visitor clicks outbound links three times on a page, the tag will fire three times.

Sounds clear? A tag fires based on the dataLayer event, and if that event occurs multiple times on the same page, the tag will fire multiple times.

Let’s take a look at the Unlimited option now.

Option #3: Unlimited

To be honest, I haven’t utilized this Tag Firing Option in my setups, so I cannot provide a suitable practical example. However, I can offer a hypothetical scenario (albeit an unusual one) where the Unlimited option might be effective.

Allow me to explain the setup.

I have two tags – one named “Setup tag,” and the other dubbed “Just a tag.” In this case, their specific functions are irrelevant.


The tag called “Just a tag” is triggered by a Custom Event Trigger named “sampleEvent.”


The “Setup tag” is set up to use the same “sampleEvent” Custom Event Trigger as the “Just a tag”. Additionally, the Tag Sequencing in the Advanced Settings of “Just a tag” is configured so that the “Setup tag” fires before “Just a tag” does.


In conclusion:

  • When the sampleEvent dataLayer.push occurs on a page, it will be visible in Preview and Debug mode.
  • As a result, “Just a tag” will fire once since it has a “sampleEvent” trigger assigned to it.
  • Additionally, the “Setup tag” should fire twice because the “sampleEvent” trigger will activate it, and the Tag Sequencing from “Just a tag” will also activate it. Therefore, the tag will fire twice on the same dataLayer.push event.

In the Preview mode, after the sampleEvent occurs, you’ll notice that the “Setup tag” has only fired once instead of twice. But what could be the reason for this?


The reason for this is that the Setup tag was set to fire Once per Event. Despite the trigger and the tag sequencing, which together should have activated the tag twice, it only fired once, because all of this was happening on the same event, sampleEvent, and the tag was only allowed to fire once per event.

However, if the “Setup Tag’s” tag firing option were changed to Unlimited, the tag would have fired twice on the same sampleEvent.


It is important to note that the Unlimited option is only applicable to Tag Sequencing. If a tag has two triggers of the same type, such as Just Links, and both are activated by the same click, the tag will only fire once.

What exactly is High-Performance Computing?

High Performance Computing (HPC) is the method of pooling computing resources in such a way that it provides significantly more horsepower than standard PCs and servers. HPC, or supercomputing, is similar to regular computing but much more powerful. It is a method of processing massive amounts of data at extremely fast speeds by using several computers and storage devices as a cohesive fabric. HPC enables researchers to investigate and solve some of the world’s most difficult issues in science, engineering, and business. HPC is being employed to handle complicated, high-performance challenges, and enterprises are progressively transferring HPC workloads to the cloud.

How does high-performance computing work?

Some workloads, like DNA sequencing, are just too large for a single computer to handle. In HPC or supercomputing environments, these large and complicated problems are tackled by individual nodes (computers) working together in a cluster (a connected group) to execute vast quantities of computation in a short period of time.

A corporation, for example, may send 100 million credit card records to individual processor cores in a cluster of nodes. Processing one credit card record is a modest operation, but when 100 million records are dispersed over the cluster, those little activities may be executed at remarkable rates at the same time (in parallel). Risk simulations, chemical modeling, contextual search, and logistics simulations are all common use cases.
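As a toy sketch of this fan-out pattern (Node.js worker_threads, with a small array standing in for the 100 million records; everything here is illustrative, not a real HPC framework):

  const { Worker, isMainThread, parentPort, workerData } = require('worker_threads');

  if (isMainThread) {
    const records = Array.from({ length: 1000 }, (_, i) => i); // stand-in data
    const numWorkers = 4;
    const chunkSize = Math.ceil(records.length / numWorkers);
    let processed = 0;
    for (let i = 0; i < numWorkers; i++) {
      // Each worker gets its own chunk and runs in parallel.
      const chunk = records.slice(i * chunkSize, (i + 1) * chunkSize);
      const worker = new Worker(__filename, { workerData: chunk });
      worker.on('message', (count) => {
        processed += count;
        if (processed === records.length) console.log('All records processed');
      });
    }
  } else {
    // Worker: process one small task per record, then report back.
    workerData.forEach((record) => { /* e.g. score one credit card record */ });
    parentPort.postMessage(workerData.length);
  }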

What is the significance of HPC?

For decades, high-performance computing has been an essential component of academic research and industrial innovation. Engineers, data scientists, designers, and other researchers may use HPC to solve massive, complicated problems in a fraction of the time and expense of traditional computing.

The key advantages of HPC are as follows:

  • Reduced physical testing: HPC can be utilized to construct simulations, which eliminates the requirement for physical tests. For example, when testing vehicle crashes, creating a simulation is significantly easier and less expensive than conducting a physical crash test.
  • Cost: Quicker responses imply less wasted time and money. Furthermore, cloud-based HPC allows even small firms and startups to run HPC workloads, paying just for what they use and scaling up and down as needed.
  • Innovation: HPC promotes innovation in practically every industry; it is the driving force behind important scientific discoveries that improve people’s quality of life all around the world.

HPC is applied across many industries, for example:
  • Aerospace: Creating complex simulations, such as airflow over the wings of planes
  • Manufacturing: Executing simulations, such as those for autonomous driving, to support the design, manufacture, and testing of new products, resulting in safer cars, lighter parts, more-efficient processes, and innovations
  • Financial technology (fintech): Performing complex risk analyses, high-frequency trading, financial modeling, and fraud detection
  • Genomics: Sequencing DNA, analyzing drug interactions, and running protein analyses to support ancestry studies
  • Healthcare: Researching drugs, creating vaccines, and developing innovative treatments for rare and common diseases
  • Media and entertainment: Creating animations, rendering special effects for movies, transcoding huge media files, and creating immersive entertainment
  • Oil and gas: Performing spatial analyses and testing reservoir models to predict where oil and gas resources are located, and conducting simulations such as fluid flow and seismic processing
  • Retail: Analyzing massive amounts of customer data to provide more-targeted product recommendations and better customer service

Where does HPC take place?

HPC can be done on-premise, in the cloud, or in a hybrid approach that combines the two.

In an on-premise HPC deployment, a company or research institution constructs an HPC cluster comprised of servers, storage systems, and other equipment that it manages and upgrades over time. A cloud service provider administers and controls the infrastructure in a cloud HPC deployment, and enterprises use it on a pay-as-you-go basis.

Some businesses employ hybrid deployments, particularly those that have invested in on-premise infrastructure yet wish to benefit from the cloud’s speed, flexibility, and cost benefits. They can use the cloud on a continuous basis to execute some HPC tasks, and resort to cloud services on an ad hoc basis when queue time becomes a concern on premise.

What are the important factors when selecting a cloud environment for HPC?

Not all cloud service providers are made equal. Some clouds are not built for high-performance computing and cannot guarantee optimal performance at peak periods of demanding workloads. Here are the key characteristics to look for when choosing a cloud service:

  1. Performance at the cutting edge: Your cloud provider should have and maintain the most recent generation of processors, storage, and network technology. Make certain that they have substantial capacity and top-tier performance that meets or exceeds typical on-premise deployments.
  2. HPC expertise: The cloud provider you choose should have extensive experience executing HPC workloads for a wide range of clients. Furthermore, their cloud service should be designed to work well even during peak moments, such as while running several simulations or models. In many circumstances, bare metal computer instances outperform virtual machines in terms of consistency and power.
  3. No hidden costs: Cloud services are often provided on a pay-as-you-go basis, so ensure that you understand exactly what you’ll be paying for each time you use the service.

What is the future of high-performance computing?

Businesses and organizations in a variety of industries are turning to HPC, fueling development that is anticipated to last for many years. The worldwide high-performance computing industry is predicted to grow from US$31 billion in 2017 to US$50 billion in 2023. As cloud performance continues to improve and become more dependable and powerful, much of the predicted increase will be in cloud-based HPC installations, which will relieve enterprises of the need to spend millions in data center hardware and related expenditures.

Expect big data and HPC to converge in the near future, with the same massive cluster of computers utilized to analyze big data and execute simulations and other HPC tasks. As these two trends converge, more processing power and capacity will be available for each, resulting in even more revolutionary research and innovation.

 

 


 


Exploring the World of Data: Key Facts Everyone Must Know

Data plays a vital role in the success of businesses that operate online, as it serves as a foundation for customer service and provides insights into customer preferences, feedback, and internal operations. Understanding the significance of data in the business world can provide a deeper understanding of how modern companies leverage it to achieve success. Therefore, this article presents 10 facts that will give you a better overview of data.

Fact No. 1: Data is subjective

When you examine an analysis, a graph, or the rows in a table of raw data, you create your interpretation of what you see. The evidence in front of you is not based on any objective facts.

This may easily turn into an ontological debate, which is OK. The truth is that both data quality and analysis are not static.

A single set of data can switch from being worthless to being very valuable without a single piece of information changing in any way.

Fact No. 2: Data is a continuous process

Keep in mind that handling data is a process that requires more than one project. Your business has to be aware of the upstream and downstream effects of all the data wrangling going on within (and outside) its walls from a regulatory perspective.

But it goes beyond that. Every second, your business generates ludicrous volumes of data. You need a mechanism in place to effectively maintain the data pipelines inside your firm, and you need to be able to respond to variations thereof (as things are continually changing).

Fact No. 3: Data is not active

People frequently say things like “The data demonstrate that…” or “The data clearly states that…” while presenting data. Even though I understand what they are trying to say, it is still a semantic justification.

Data is not capable of performing any actions or tasks. It is a passive medium that may be exploited, wrangled, managed, sculpted, and shaped to give proof or justification for, or even a diversion from, whatever the presenter is attempting to say.

Fact No. 4: Data is limitless

The significance of this fact only grows as technology advances year after year.

You cannot possibly know all the information. It is philosophically impossible, in addition to being technically infeasible. Therefore, a boundary needs to be established, and it is crucial to understand where this boundary is drawn. When presenting your data set as evidence with any kind of representational capability, you must be aware of its limitations. To maintain the results’ objectivity and reproducibility, you must be able to communicate these limits when asked to do so.

Fact No. 5: Data cannot stand silos

For some strange reason, many businesses still view data as something that can be left to a single job title (analyst, data engineer, data scientist), while the rest of the organisation ignores (and neglects) the data pipeline’s broad reach.

The organisation depends on its data to function. It is indifferent to job titles. It does not care if you have a matrix organisation, flat hierarchies, or limitless PTO.

You must be aware of every area of your business where data is being gathered and processed, and you must regularly examine and audit these processes.

Fact No. 6: Tools cannot determine how your organisation operates

Many data platforms have strict guidelines. They force the organisation to adopt schemas that might not be advantageous to the company’s business cases but instead serve solely to ensure that the analytics platform predictably processes the data.

In general, monolithic, generalised schemas are disadvantageous. The company is compelled to adapt to the analytics platform instead of the platform being tailored to the company’s needs.

I can still recall thinking for many hours about how I might “trick” Google Analytics into analysing an Add To Cart event on a website without a shopping cart so that I could use the e-commerce report suite. No one should be forced to complete this task.

Fact No. 7: Data can be overlooked or neglected

It is a mistake to say you are “data-driven.” Avoid falling for it! Based on my many years of experience, the vast majority of businesses operate with data that is entirely misinterpreted and whose baseline quality is just ridiculously low (but keep in mind Fact No. 1).

If the analysis suggests A, and this is supported by experimentation, thorough testing, and the most reliable data set you will ever see, but your intuition suggests B, feel free to follow it. You can ignore the data.

Although Fact No. 3 should be kept in mind, there is no categorical imperative for you to act on the data.

However, for the business case to make sense, you must be able to support your decision with evidence that is at least as effective as following the advice provided by the data analysis.

You cannot just throw a tantrum and ignore the facts because you believe it is your divine right to jump over the planet’s edge to make a pointless argument. You have to be able to create a business case for your decision and persuade your colleagues that the risk is worthwhile.

Fact No. 8: Data is a secondary result of other processes

Okay, so this is not always the case (surprise!), but it is still important in the context of analytics and digital marketing.

Applications, websites, and services with a primary focus on data generation have an extremely limited number of actual features.

Instead, most of the time as analysts, we make use of already-existing features and, as a secondary outcome, add data collection.

A checkout form’s central objective is not to fire a conversion ping; its primary purpose is to generate sales. The conversion ping is only a secondary result of this process.

As analysts, we frequently forget that most of the time, our businesses, clients, developers, or even marketers do not care that much about data collection since we are too focused on the significance of our job. They only desire that the feature fulfil its intended function.

Data engineering jobs are frequently given lower priority as a result. It is unfortunate, but it is also true.

Clarifying the significance of these side effects is also something that the data scientist must do. The data engineer’s (or analyst’s) position frequently involves consulting since they have to show others how these side effects might truly be worth the time and resource commitment rather than just being development overhead.

Fact No. 9: Data can be complex and hard to manage

All of my presentations for years and years concluded with a slide that read:

Data is challenging. Data quality is not purchased; it is earned.

This, in my opinion, is still vital. Particularly during the COVID-19 pandemic, more and more people were exposed to charts, analyses, and data interpretations that were incorrect.

I hope people realise how challenging it is to not only gather data but to comprehend how that data will be processed, how that data will affect downstream processes, how that data will be regulated, and how to display that data in a meaningful way.

I want people to realise that “ML” and “AI” are more than simply fancy abbreviations. Machine learning and artificial intelligence algorithms require fine-tuning and a human component with the knowledge (and guts) to initiate the processes.

Working with data is more challenging than ever. There are still no shortcuts; data quality must be achieved via effort, passion, and a strong character.

Fact No. 10: Valuable insights are not always easy to obtain, and it’s alright

I believe that many analysts think and act like John Nash in A Beautiful Mind, looking at a data set with the expectation that patterns would suddenly emerge and spark a brilliant discovery that will radically transform their business.

Well, either you will have to wait a while, or you are not doing your job properly.

In evolutionary biology, there is a wonderful concept known as punctuated equilibrium. Most of the evolution, according to this, actually occurs at a slow, constant pace. But once in a while, significant changes bring about a chaotic, quick transformation.

Many analysts, in my opinion, fail to see this and instead try to force these changes with new tools, collection methods, and schemas to “get results.”

However, a lot of what we do in analytics is focused on careful observation and delivering consistent data for other processes to use.


Assessing Googlebot’s JavaScript Crawl Capabilities: Our Findings

Do not assume that Google cannot manage JavaScript – the results of a series of tests conducted by Merkle | RKG prove otherwise. The tests aimed to investigate the extent to which different JavaScript functions are crawled and indexed by Google.

Google’s ability to execute JavaScript and read the document object model (DOM)

The capability of Google to crawl JavaScript dates back to 2008, although it may have been limited at that time.

Today, it is evident that Google has advanced not just in terms of the types of JavaScript they crawl and index, but also in terms of producing fully functional websites.

The technical SEO team at Merkle was interested in learning more about the kinds of JavaScript events that Googlebot might crawl and index. We discovered some unexpected findings and established that Google not only processes various JavaScript event types but also indexes dynamically created content. How? Google is analysing the DOM.

The DOM: what is it?

Not many SEOs are familiar with the DOM or the Document Object Model.

When a browser requests a web page, the server returns HTML. The browser parses that markup and constructs the DOM, which scripts can then read and modify.

The DOM, as it is used in web browsers, can be defined as an API (application programming interface) for markup and structured data such as XML or HTML.

The DOM also specifies how that structure may be accessed and modified. The Document Object Model is an API that is independent of any particular programming language, but it is mostly used in web applications with JavaScript and dynamic content.

The DOM is the interface, or the “bridge,” between programming languages and web pages. The DOM is the content of a web page, not (only) source code. This makes it crucial.

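As a minimal, hypothetical illustration of how JavaScript reads and modifies a page through the DOM interface:

  <script>
    // Read from the DOM...
    var heading = document.querySelector('h1');
    console.log(heading.textContent);

    // ...and modify it. The change exists only in the DOM,
    // not in the original HTML source code.
    heading.textContent = 'Updated via the DOM';
  </script>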

We were excited to learn that Google could read the DOM and understand signals and dynamically added content, including title tags, page text, header tags, and meta annotations like rel=canonical. Keep reading for more details.

The tests and their outcomes

To investigate how various JavaScript functions would be indexed and crawled by Googlebot, we developed several tests. Controls were established to guarantee that activity to the URLs would be recognised independently. Let us review a couple of the test findings that caught our attention in greater detail below. They are separated into five groups:

  1. JavaScript redirects
  2. JavaScript links
  3. Dynamically inserted content
  4. Dynamically inserted metadata and page elements
  5. An essential example with rel=“nofollow”

JavaScript redirects

To assess common JavaScript redirects, we changed the way the URL was displayed, settling on the window.location function. Two tests were run: in Test A, window.location was assigned an absolute URL; in Test B, a relative URL.
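In sketch form (the destination URLs are placeholders), the two variants looked like this:

  // Test A: absolute URL
  window.location = 'https://www.example.com/new-page';

  // Test B: relative URL
  window.location = '/new-page';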

The result: Google immediately followed the redirects. From an indexing perspective, these were read as 301s — the end-state URLs replaced the redirected URLs in Google’s index.

In a second test, we used an authoritative page and established a JavaScript redirect to a new page on the website with the exact same information. For popular key phrases, the original URL ranked on Google’s front page.

The result: As predicted, Google followed the redirect and removed the old page from its index. The updated URL was promptly indexed and given the same search engine ranking for identical queries. This caught us off guard and appears to show that JavaScript redirects can function just like permanent 301 redirects in terms of rankings.

JavaScript redirects for site moves may no longer be a cause for concern, as our findings suggest the transfer of ranking signals in this scenario. This is supported by Google’s guidelines, which state:

The usage of JavaScript to redirect users is a legitimate practice. For example, you may use JavaScript to redirect users to an internal page once they have logged in. Consider the intent while reviewing JavaScript or other redirect mechanisms to verify your site complies with our guidelines. When relocating your site, 301 redirects are preferred, however, if you do not have access to your website’s server, you might use a JavaScript redirect.

JavaScript links

We evaluated numerous different JavaScript links that were coded in various ways.

Dropdown menu links were examined. These links have always been difficult for search engines to follow consistently. We conducted a test to see if the onchange event handler would be executed. Notably, this is a particular kind of execution point: unlike the JavaScript redirects above, user interaction is required to change something.

The result: The links were fully indexed and followed.

We also tried typical JavaScript links. These are the most frequent forms of JavaScript links that SEOs have typically advised should be converted to plain text. These tests involved JavaScript URLs created with:

  • Functions outside of the a tag but called within the href attribute-value pair (AVP): “javascript:openlink()”
  • Functions outside of the href AVP but within the a tag: “onClick”
  • Functions inside the href AVP: “javascript:window.location”
  • And so on.

The result: The links were thoroughly crawled and followed.
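
For illustration, the link formats above might be coded like this (openlink() and the target paths are invented placeholders):

```html
<!-- Function outside the a tag, called within the href AVP -->
<a href="javascript:openlink()">Link A</a>

<!-- Function outside the href AVP but within the a tag -->
<a href="#" onclick="window.location = '/page-b/'; return false;">Link B</a>

<!-- Function inside the href AVP -->
<a href="javascript:window.location = '/page-c/'">Link C</a>
```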

The following test, similar to the onchange test before, looked at other event handlers. In particular, we considered using mouse movements as the event handler and then disguising the URL with variables that are only activated when the event handler (in this example, onmousedown and onmouseout) fires.
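
A hedged sketch of the idea (the element ID and path are invented):

```javascript
// The real URL lives in a variable and is only used when the
// mouse event handlers fire
var target = "/hidden-page/";
var link = document.getElementById("test-link"); // hypothetical anchor

link.onmousedown = function () {
  window.location.href = target;
};
link.onmouseout = function () {
  this.href = target; // swap the href in when the pointer leaves
};
```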

The result: The links were crawled and followed.

Concatenated links: even though we knew Google could execute JavaScript, we needed to make sure it was also reading the variables in the code. In this test, we combined a series of characters that, when put together, formed a URL.
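
For instance (the fragments and element ID below are placeholders), the idea looks like this:

```javascript
// Characters concatenated at runtime into a working URL
var url = "/" + "con" + "cat" + "enated" + "-page/";
document.getElementById("concat-link").setAttribute("href", url);
```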

The result: The link was crawled and followed.

Dynamically inserted content

Dynamically added text, graphics, links, and navigation are undoubtedly significant. To fully comprehend the theme and content of a website, a search engine needs high-quality text content. The importance of SEOs staying on top of this has increased in the age of dynamic websites.

These tests were made to look for dynamically added text in two distinct scenarios:

  • Check the search engine’s capacity to take into account text that has been dynamically added and is included in the page’s HTML code.
  • Examine the search engine’s capability to take into account text that is dynamically introduced but not contained inside the page’s HTML code.

The result: The text was crawled, indexed, and the page was ranked for the content in both instances.
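
To sketch the two scenarios (element IDs, text, and file name are invented for illustration):

```javascript
// Scenario 1: the injection code ships inside the page's own HTML
document.getElementById("inline-target").textContent =
  "Text added by a script contained in the page's HTML code.";

// Scenario 2: the same kind of injection, performed by a script
// loaded from an external file (e.g. <script src="inject.js">)
var div = document.createElement("div");
div.textContent = "Text added by an external script file.";
document.body.appendChild(div);
```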

Going further, we examined a client’s JavaScript-coded global navigation, with all links inserted with the document.writeln method, and confirmed they were completely crawled and followed. It should be highlighted that Google’s documentation describes how webpages developed with the AngularJS framework and the HTML5 History API (pushState) may be rendered and indexed by Google, ranking alongside traditional static HTML pages. This is why external files and JavaScript assets must not be blocked from Googlebot access. Google is also probably moving away from supporting its Ajax crawling guidelines for SEO. Who needs HTML snapshots when you can just render the whole page?
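
As a simplified sketch (the paths are invented, not the client’s), navigation written this way might look like:

```javascript
// Each navigation link is emitted while the page is being parsed
document.writeln('<a href="/products/">Products</a>');
document.writeln('<a href="/services/">Services</a>');
document.writeln('<a href="/contact/">Contact</a>');
```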

Regardless of the content type, our testing showed the same outcome. For instance, when images were loaded in the DOM, they were crawled and indexed. We even ran a test where we dynamically produced breadcrumb markup for data-vocabulary.org and placed it into the DOM. The result? Successful rich snippets with breadcrumbs in Google’s SERP.

It should be noted that Google now advises JSON-LD markup for some structured data.
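
For reference, here is a hedged sketch of how such markup can be injected into the DOM today using JSON-LD with the schema.org vocabulary; the breadcrumb names and URLs are invented:

```javascript
// Inject a schema.org BreadcrumbList as JSON-LD into the <head>;
// our original test used the older data-vocabulary.org vocabulary
var script = document.createElement("script");
script.type = "application/ld+json";
script.textContent = JSON.stringify({
  "@context": "https://schema.org",
  "@type": "BreadcrumbList",
  "itemListElement": [
    { "@type": "ListItem", "position": 1, "name": "Home",
      "item": "https://www.example.com/" },
    { "@type": "ListItem", "position": 2, "name": "Blog",
      "item": "https://www.example.com/blog/" }
  ]
});
document.head.appendChild(script);
```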

Dynamically inserted metadata and page elements

Several tags that are essential for SEO were dynamically inserted into the DOM:

  1. Meta descriptions
  2. Canonical tags
  3. Meta robots
  4. Title elements

The result: In every instance, the tags were crawled, respected, and behaved as HTML source code components should.
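
As an illustrative sketch (the values below are placeholders, not the ones we tested), each of these tags can be inserted or changed in the DOM like this:

```javascript
// Title element
document.title = "Title set in the DOM";

// Meta description
var desc = document.createElement("meta");
desc.name = "description";
desc.content = "Description inserted via JavaScript.";
document.head.appendChild(desc);

// Meta robots
var robots = document.createElement("meta");
robots.name = "robots";
robots.content = "noindex, follow";
document.head.appendChild(robots);

// Canonical tag
var canonical = document.createElement("link");
canonical.rel = "canonical";
canonical.href = "https://www.example.com/preferred-url/";
document.head.appendChild(canonical);
```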

An intriguing follow-up test will teach us more about precedence. Which signal prevails when the two contradict each other? What happens if there is a noindex,follow in the DOM and a noindex,nofollow in the source code? This will be covered in our next round of thorough testing. Our studies so far, however, revealed that Google can ignore the tag in the source code in favour of the DOM.

An essential example with rel=“nofollow”

One particular instance was extremely instructive. We wanted to see how Google would respond when a link-level nofollow attribute was placed in the source code versus inserted into the DOM. We also built a control where nofollow was omitted entirely.

The nofollow directive in the source code was respected, and the link was ignored. But nofollow added in the DOM did not function (the link was followed, and the page was indexed). Why? Because Google had already crawled the link and queued the URL before it executed the JavaScript function that inserts the rel=”nofollow” attribute: the a href element in the DOM was modified too late. However, if the full a href element with nofollow is added to the DOM, the nofollow is recognised together with the link (and its URL) and is therefore respected.
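
A hedged sketch of the two cases (the element ID and path are invented):

```javascript
// Case 1: rel="nofollow" added to a link already in the HTML source.
// Too late: Googlebot has already queued the URL by the time this runs
document.getElementById("existing-link").rel = "nofollow";

// Case 2: the entire a href element, nofollow included, is inserted
// into the DOM, so the nofollow is seen together with the link
var a = document.createElement("a");
a.href = "/do-not-crawl/";
a.rel = "nofollow";
a.textContent = "Link inserted with nofollow";
document.body.appendChild(a);
```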

Outcomes

In the past, ‘plain text’ content has been the main focus of SEO recommendations. AJAX, JavaScript links, and dynamically generated content all used to hurt SEO with the major search engines. Clearly, that is no longer the case for Google. Although we do not know what is going on in the algorithms behind the scenes, JavaScript links function similarly to ordinary HTML links.

  • JavaScript redirects are treated similarly to 301 redirects.
  • The processing of dynamically added material, including meta signals like rel=canonical annotations, is the same whether it is fired in the HTML source code or after the initial HTML has been parsed with JavaScript in the DOM.
  • Google now seems to render the page completely and recognises the DOM rather than simply the raw code. Absolutely amazing! (Remember to provide Googlebot access to those JavaScript resources and external files.)

Google has grown at an incredible rate, leaving the competition in the dust. If other engines want to remain competitive and relevant in the future web development environment, which only means more HTML5, more JavaScript, and more dynamic websites, we expect to see the same sort of innovation from them.

It would be wise for SEOs who have not kept up with Google’s underlying ideas and capabilities to research them and update their consulting to take modern technology into account. If you do not take the DOM into consideration, you may be overlooking half of the picture.

How can an IT person avoid losing money due to currency fluctuations?

With the onset of the war, IT professionals working as sole proprietors for overseas customers found themselves in a difficult situation. Payments for services sent to an entrepreneur’s account in dollars or euros cannot be withdrawn without incurring considerable losses. In such cases, the question of how to avoid losing money on the exchange rate difference emerges.

Why should IT professionals register an account in another country?

If an entrepreneur is involved in the export or import of commodities, payments must be received in a Ukrainian bank account, and the law establishes time limits for this. Furthermore, by Decree of the National Bank No. 18 of February 24, 2022, these terms were shortened to 90 days for the duration of martial law (in the pre-war period the limit was 365 days). This regulation does not apply if the transaction amount is less than UAH 400,000.

However, Resolution of the Board of Directors of the National Bank No. 67, dated May 14, 2019, specifies a list of services and items that the settlement deadlines do not cover. In particular, the deadlines do not apply to the export of computer programming services. Here is an extract from paragraph 5 of the Resolution: “the deadlines do not apply to the export of services, works (excluding transportation and insurance), intellectual property rights, and/or other non-property rights.”

As a general rule, IT professionals need not be concerned with currency regulation, since their activity is the export of services and/or non-property rights. However, in specific instances the tax office may decide that these services fall under the definition of “goods.”

In that case, payment for these services must arrive in a Ukrainian bank account. For infractions in foreign exchange transactions, the sole proprietor may be held liable under Article 162-1 of the Code of Administrative Offenses and fined between UAH 17,000 and 51,000. It is therefore advisable to speak with a lawyer if there is even a remote chance of such unfavorable consequences. For example, a lawyer will review the service agreement and, if necessary, reformulate its clauses.

 

Keeping your Ukrainian tax residency

If a sole proprietor IT professional continues to be a tax resident of Ukraine, it makes sense to open a foreign account. If not, taxes must be paid at the rates in effect in the country where the bank account was opened.

Consider an example. Due to the war, a Ukrainian IT expert had to move overseas while keeping his Ukrainian sole proprietorship. He opened a bank account abroad, established residency there, and works for foreign contractors. In such cases, profits may be subject to taxes that are higher than the Ukrainian rate (2%, for example, during the war), and the difference can be significant. Even so, there are exceptions. For instance, the Baltic states do not regard Ukrainian sole proprietors under their temporary protection as tax residents.

However, if the businessman remains in Ukraine, the situation is different. Many nations tax income where it is received, but the center of the taxpayer’s vital interests takes center stage when determining the state in which a Ukrainian sole proprietor must keep records and pay taxes. Obviously, if an IT professional lives and works in Ukraine, the center of vital interests is in Ukraine.

Characteristics of creating a foreign bank account for a sole proprietor

When opening an account as an entrepreneur overseas, you should consider the choice of jurisdiction, the regulations of the particular bank, and some other details. A bank may permit opening a personal account rather than an entrepreneurial one, but doing so may result in claims from Ukraine’s tax service, because the income will be viewed as non-entrepreneurial. As mentioned earlier, each jurisdiction has its own methods for determining tax residency and the center of vital interests. Some banks also impose restrictions, such as a maximum amount per transaction.

Before opening an account, you must first determine how much it will cost to maintain. The cost of keeping an account frequently includes:

  • a fixed monthly fee
  • fees for incoming and outgoing payments, currency conversion, ATM cash withdrawals, and other services
  • SMS alerts and Internet banking
  • acquiring services for accepting client payments

At the same time, an IT expert can open an account outside of a traditional bank to receive payments from a foreign company. Another option is to open a currency business account with one of the global payment processors, such as Payoneer or Wise.

Withdrawing money directly from the sole proprietor account is expressly forbidden. Regulation of the Board of the National Bank No. 5, dated 02.01.2019, imposed this ban. “Transfer of money in foreign currency from the current account of a sole proprietor resident to the current account of this individual in foreign currency, opened for his own use, is banned,” reads paragraph 113 of this normative act.

A businessperson must do two things before using their earnings for personal expenses: exchange them into hryvnia and deposit them to their card. Only after that may the money be freely disposed of.

So-called “convertible deposits” were first made available by Ukrainian banks at the end of July of this year. The scheme works as follows: the business owner transfers currency from his sole proprietor account to the bank, sells it, and immediately places the proceeds on deposit for a limited term, such as three months, at the same rate. After these three months, and upon a preliminary request, the currency can be cashed out at the bank’s cash desk. The initial threshold for opening such a deposit was UAH 50,000; starting on October 1, it was raised to UAH 100,000 per month. Such a scheme may make it possible to avoid losses on selling currency to the bank and then buying cash back at the going rate. However, there is a chance of problems when withdrawing foreign currency through the bank’s cash desk after the three months: for instance, the requested funds might not be released for several weeks.

Other Techniques

1. Establishing a foreign business

A sole proprietor may use their own overseas firm to collect payments from their customers (or, in reality, employers). Both sole proprietors who run small businesses and business owners who make more than 80,000 euros in profit per year should consider this alternative.

2. Deferred Pay

The so-called deferred salary is one of the methods used to preserve the money received from the employer (customer). This refers to the situation where an IT worker, employed by a foreign employer under a contract and registered as a Ukrainian sole proprietor, asks to leave a portion of his fees (compensation) in the accounts of the foreign employer. Deferred salaries have been used since the beginning of the war, though not particularly frequently.

3. A travel expense card

Opening a foreign currency card in a Ukrainian bank for “travel costs” is another fairly common option. Several banks first made such cards available in June 2022. Cardholders may withdraw money straight from the sole proprietor account at a better rate than when selling to a bank, and may pay for products and services using Apple Pay and Google Pay. However, there are certain limitations, and this scheme only applies to those who are traveling. In particular, a withdrawal cap of 100 euros may be set.

 

Resources: photobank

Who is an IT lawyer?

The Internet has proven to be a conducive environment for crimes such as theft of intellectual property, money laundering, and fraud. Do not do business online without the professional legal assistance of an IT lawyer. There are two ways to get this support: employing an in-house attorney or signing an outsourcing contract with a law firm.

What abilities should an IT lawyer possess?

To understand who an IT lawyer is, note that the degree, or even the number of courses completed in the subject, matters little. A key competency in the work of an IT lawyer is the capacity to apply existing laws to novel business procedures and technological advances. Legislation cannot keep up with technology, but technology must always make a point of upholding the law. To work in IT law, you must at least be familiar with the fundamentals of programming, as well as the process of software development and execution. Today, knowledge of smart contracts and blockchain technology is also required.

Benefits of hiring an IT lawyer

Finding a trustworthy IT attorney is not simple for a business. Once one is found, however, the following issues can be resolved:

  • construction of legal frameworks for new business models
  • risk management and agreement structuring: identifying risks and creating strategies for avoiding these “pitfalls”
  • IT contract negotiations
  • reducing taxes as much as possible, especially by forming a corporation in the appropriate jurisdiction

An IT lawyer is also effective at handling claims and mediation in disputes involving information technology.

Supporting the process of raising funding for a business, particularly from venture capitalists, is one of an IT lawyer’s strongest suits. This involves drafting the investment agreement as well as creating, on its basis, a business plan for the company’s future development.

The drawbacks of hiring an IT lawyer

When determining who qualifies as an IT lawyer, it is important to remember that such a lawyer is actively involved in the client’s business. Such a specialist is familiar with the particulars of the business, including information that constitutes a trade secret. If the attorney lacks the necessary authority and reputation, they may use this information against the client. The risk of a confidentiality breach must be considered at all times.

A respected attorney or law firm that values its standing in the industry, however, gives confidentiality first priority. No information obtained from the client is ever leaked. Therefore, this drawback (risk) is absent when dealing with dependable professionals.

How frequently should an IT lawyer sharpen their skills?

As the above suggests, IT attorneys must keep training and retraining virtually every day, and many university graduates are unprepared for this. Despite the excellent employment prospects in the field of IT law and the high fees these professionals command, the profession is understaffed in our country.

A lack of skilled lawyers is an ongoing issue for IT organizations. Even major firms prepared to make a competitive offer may struggle to fill such a position. Small businesses might take on newcomers at their own risk.

Alternatively, you might sign an outsourcing contract with a reputable law firm, which can provide knowledgeable legal assistance in the area of information technology for a fair price. An outsourced team of attorneys is interested in a long-term, and ideally permanent, partnership with the client, so these services come with a promise of quality, confidentiality, and results-oriented work.

 

An IT lawyer is a specialist who works at the nexus of several legal and other domains. Since information technology, and the legal requirements in this field, change daily, it is hard to receive a complete education as an IT lawyer in a university setting. You must continue to advance professionally in order to succeed in this career. Along with the necessary qualifications, an excellent IT lawyer must also have the commitment to deal with clients in the strictest confidence.

Resources: photobank