

DKIM

DKIM, short for DomainKeys Identified Mail, is an email authentication method that adds a digital signature to outgoing email. That signature allows the receiving server to check whether the message is really associated with the sending domain and whether its important parts were changed on the way. In practical terms, DKIM works like a technical seal placed on an email. If the signature verifies correctly, the recipient’s system gets a stronger signal that the message is legitimate and that its content has not been altered in transit.

At first glance, DKIM can look like just another obscure DNS setting. In reality, it is one of the key building blocks of modern email trust. If a company sends newsletters, transactional messages, automated emails or normal business mail from its own domain, DKIM quickly becomes important. It helps receiving systems decide whether a message is technically trustworthy, and it also plays a major role in broader email policies such as DMARC.

DKIM adds a digital signature to outgoing email. The receiving server then looks up the public key in DNS and uses it to verify the signature. If the check succeeds, the message gets a stronger technical signal of authenticity and integrity.

What DKIM actually does in practice

When a mail server sends a DKIM-signed message, it creates a cryptographic signature based on selected parts of the email, such as specific headers and the body hash. That signature is then inserted into the message header as DKIM-Signature.

When the email reaches the receiving side, the receiving server uses information from that signature to find the correct public key in DNS. It then checks whether the signature matches the message it received. If it does, that means the message is technically tied to the signing domain and that the signed parts were not changed in transit.

This is important because DKIM does not only say “this mail came from somewhere allowed”. It also helps prove that the message has kept its expected signed form on the way to the recipient.

Why DKIM matters

In modern email infrastructure, it is not enough to say “this message is from us”. That claim needs technical support. DKIM provides one of the main ways to do that.

Its practical value is easiest to see in four areas:

  • trust – a valid signature makes the message more credible to receiving systems,
  • integrity – if important parts of the message are changed in transit, the signature can fail,
  • deliverability support – mailbox providers often use DKIM as one of several trust signals,
  • DMARC readiness – DKIM is one of the core authentication methods DMARC builds on.

That does not mean DKIM alone guarantees inbox placement. It does not. But in practice, it is one of the core signals that serious senders are expected to have in place.

Practical point: DKIM is not just about “better deliverability”. Its deeper value is that it helps prove the message is associated with the domain that signed it and that the signed parts were not silently altered on the way.

How DKIM works at a technical level

DKIM uses a key pair:

  • a private key kept on the sending side, used to create the signature,
  • a public key published in DNS, used by recipients to verify that signature.

The sending server signs the message with the private key. The recipient never sees that private key. Instead, the recipient queries DNS for the public key and uses it to verify the signature.

This is why DKIM is tightly linked to DNS. Without the correct DNS record, the recipient cannot retrieve the key needed for verification.

What a DKIM DNS record usually looks like

In many setups, DKIM is published as a TXT record in DNS. A typical example might look like this:

google._domainkey.example.com TXT "v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3DQEBAQUAA4G..."

Even though it may look confusing at first, the structure is fairly logical:

  • google – this is the selector, the identifier of the key,
  • ._domainkey – the standard DKIM namespace in DNS,
  • example.com – the domain associated with the signature,
  • TXT – the DNS record type,
  • v=DKIM1 – the DKIM version,
  • k=rsa – the type of key,
  • p=… – the public key used for verification.
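Because the record value is just a list of tag=value pairs separated by semicolons, it can be parsed with a few lines of code. A minimal Python sketch using the example value above (the p= value is truncated for illustration, exactly as shown):

```python
def parse_dkim_tags(value: str) -> dict:
    """Split a DKIM tag=value string (tags separated by ';') into a dict."""
    tags = {}
    for part in value.split(";"):
        part = part.strip()
        if not part:
            continue
        # Split only on the first '=' so base64 padding in p= survives.
        key, _, val = part.partition("=")
        tags[key.strip()] = val.strip()
    return tags

record = "v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3DQEBAQUAA4G..."
tags = parse_dkim_tags(record)
print(tags["v"], tags["k"])  # DKIM1 rsa
```

Real validators do more (whitespace folding, character-set checks), but the tag structure itself is this simple.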

This is one of the reasons DKIM can look more intimidating than SPF at first. It is not just a list of authorised servers. It is a cryptographic verification record.

What a selector is and why it matters

The selector is one of the most important DKIM concepts because it identifies which exact key should be used to verify the message.

A single domain can use more than one selector at the same time. That is useful for key rotation, migration between providers or separating different sending systems.

Examples might look like this:

s1._domainkey.example.com
newsletter._domainkey.example.com
google._domainkey.example.com

In simple terms, the selector is the name of the specific key. If the message header says s=google, the receiving system knows it should look for the public key under google._domainkey.yourdomain.com.

One domain can have multiple DKIM keys, and the selector tells the receiving side which one to use for this specific message. That makes migrations and key rotation much easier to manage.
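Mapping a selector and domain to the published DNS name follows directly from this structure. A small Python sketch:

```python
def dkim_lookup_name(selector: str, domain: str) -> str:
    """Build the DNS name under which a DKIM public key is published."""
    return f"{selector}._domainkey.{domain}"

print(dkim_lookup_name("google", "example.com"))
# google._domainkey.example.com
```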

What the DKIM-Signature header means

When DKIM is working, the delivered email usually contains a header similar to this:

DKIM-Signature: v=1; a=rsa-sha256; d=example.com; s=google; h=from:date:subject:message-id:content-type; bh=abc123...; b=xyz456...

This is not the DNS record itself. It is the actual signature added to the email message. Some of the most useful parts are:

  • v=1 – version of the signature format,
  • a=rsa-sha256 – signing algorithm,
  • d=example.com – the domain that signed the message,
  • s=google – the selector used to find the key in DNS,
  • h= – the headers included in the signature,
  • bh= – the hash of the message body,
  • b= – the signature value itself.

For practical debugging, three values are especially useful:

  • d= tells you which domain signed the message,
  • s= tells you which selector was used,
  • p= in DNS contains the public key used to verify the signature.
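For debugging, it can help to pull these tags out programmatically. A minimal Python sketch using the example header above (real verifiers also handle header folding and quoting, which this sketch ignores):

```python
def parse_signature_header(header: str) -> dict:
    """Parse the tag=value pairs of a DKIM-Signature header value."""
    tags = {}
    for part in header.split(";"):
        part = part.strip()
        if not part:
            continue
        key, _, val = part.partition("=")
        tags[key.strip()] = val.strip()
    return tags

sig = ("v=1; a=rsa-sha256; d=example.com; s=google; "
       "h=from:date:subject:message-id:content-type; "
       "bh=abc123...; b=xyz456...")
tags = parse_signature_header(sig)
signed_headers = tags["h"].split(":")   # h= is a colon-separated list
print(tags["d"], tags["s"])  # example.com google
```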

Why some providers use TXT and others use CNAME

This is a common source of confusion. Many DKIM setups publish the public key directly in a TXT record. But some providers use a CNAME instead, so your domain points to a DKIM record managed on the provider’s side.

In practical terms, this means one provider may ask you to add a TXT record containing the public key directly, while another may ask you to add a CNAME that points to their managed DKIM endpoint.

Both approaches can be technically valid. The difference is mainly operational. With TXT, the key is published directly in your DNS. With CNAME, your DNS delegates that lookup to the provider’s DKIM infrastructure.

This is why DKIM setups can look different across services even when they are solving the same authentication problem.

Why 2048-bit keys and key rotation matter

In current mainstream practice, 2048-bit DKIM keys are generally preferred where the DNS provider supports them, because they provide stronger cryptographic protection than shorter keys. Some providers still allow 1024-bit keys in environments where DNS limitations make larger keys impractical, but the broader direction is clearly towards stronger key sizes.

Key rotation matters as well. Over time, organisations may need to replace DKIM keys for security, migration or operational reasons. This is another reason selectors are so useful. They allow one key to be introduced while another is still present, which makes the transition easier and safer.

DKIM is not a one-time checkbox. It is part of an email authentication setup that should be maintained properly over time, including sensible selectors, working DNS publication and occasional key rotation.

How DKIM relates to SPF and DMARC

DKIM is often mentioned together with SPF and DMARC because the three belong to the same wider email authentication picture, but they do different jobs.

  • SPF says which servers are allowed to send mail for a domain.
  • DKIM adds a digital signature so the recipient can verify the message against a public key in DNS.
  • DMARC builds on SPF and DKIM and adds policy and alignment rules around the visible sender domain.

In simple terms, SPF helps answer “was this server allowed to send?”, DKIM helps answer “was this message signed correctly and kept intact?”, and DMARC helps answer “does this all match the sender identity shown to the user?”

What can cause DKIM to fail

DKIM can fail for several reasons. A message may be modified in transit in a way that breaks the signature. The DNS record may be missing or published incorrectly. The wrong selector may be used. The sending system may sign with one domain while the published key is expected somewhere else.

In more complex email environments, failures can also appear during migrations, after DNS changes or when multiple sending platforms are involved and not all of them are configured consistently.

That is why DKIM issues are not always obvious from the sender’s side. A message may appear to have been sent normally, while the receiving system sees a signature problem that affects trust or DMARC alignment.

What DKIM does not guarantee

DKIM is valuable, but it does not prove everything. It does not confirm that a human sender is “honest” in a broader sense. It does not guarantee inbox placement. It does not solve list quality problems. And it does not replace SPF or DMARC.

It is best understood as a technical authenticity and integrity layer for email. That is already very important, but it is still only one part of the wider email setup.

What are the limits of DKIM? DKIM helps prove that a message was signed by a domain-related system and that signed parts of the message were not changed in transit. It does not by itself guarantee inbox placement, sender quality or full domain protection. Its real value appears when it works together with SPF, DMARC and a healthy email infrastructure.

Why this term is worth understanding even outside technical roles

DKIM is a good example of the fact that modern email is not only about writing a message and clicking send. Behind the scenes, there is also authentication, identity and technical trust.

If you understand what DKIM does, it becomes much easier to see why a message can look normal in the inbox and still fail technical checks, why email migrations sometimes create authentication issues, or why large providers put so much emphasis on properly signed mail.

That is why DKIM matters not only to mail administrators, but also to marketers, founders, e-commerce operators and anyone sending important email from their own domain. It is one of the main technical signals that separates a professionally authenticated sending setup from a weak one.

Related terms

  • DNS – DKIM depends on DNS because the public key used for verification is published there.
  • TXT record – the most common DNS record type used to publish a DKIM public key.
  • CNAME – some email providers use a CNAME-based DKIM setup instead of publishing the key directly in TXT.
  • Selector – the identifier that tells the receiving system which DKIM key it should use.
  • SPF – another email authentication method that checks which servers are allowed to send mail for the domain.
  • DMARC – the policy layer that builds on SPF and DKIM and checks alignment with the visible sender domain.
  • Hostname – important in DKIM because the selector and _domainkey namespace create the DNS name under which the key is published.
  • Public key – the key stored in DNS and used by recipients to verify the signature.
  • Private key – the signing key kept on the sending side and never published in DNS.

RNG

RNG (random number generator) is the system behind critical hits, loot drops, shuffles, spawns, and procedural maps. The mechanics are simple – the design work is not: good RNG creates uncertainty that feels fair, stays readable, and does not erase skill.

Most games do not want pure randomness. They want controlled randomness – enough unpredictability to stay interesting, with guardrails so the experience stays trustworthy.

What RNG Means In A Game Context

Most RNG decisions follow the same core pattern: the game generates a value, compares it to a probability, and applies an outcome based on whether the roll passes.

  • Step 1 – Generate a number. The game produces a random-looking value, commonly normalized to 0.00-1.00 (or an integer range). The exact range does not matter – what matters is that the value is sampled consistently and cannot be predicted or manipulated by the player.
  • Step 2 – Compare to a chance. The system checks whether the roll is below a probability threshold. If crit chance is 20%, the roll succeeds when it is below 0.20. If a rare item has a 2% drop rate, the roll succeeds when it is below 0.02.
  • Step 3 – Apply the outcome. If the check passes, the event triggers (crit, drop, proc, spawn). If it fails, nothing happens. That simplicity is exactly why RNG can be dangerous: a single roll can decide something emotionally important unless you design the impact and the frequency of rolls on purpose.
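The three steps above can be sketched in a few lines of Python (the 20% crit chance and the seed are illustrative values):

```python
import random

def roll(rng: random.Random, chance: float) -> bool:
    """Steps 1-2: sample a value in [0, 1) and compare it to the threshold."""
    return rng.random() < chance

# Step 3 is game-specific; here we just count how often the roll passes.
rng = random.Random(12345)          # fixed seed so the demo is reproducible
hits = sum(roll(rng, 0.20) for _ in range(100_000))
print(hits / 100_000)               # close to 0.20
```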

True RNG Vs Pseudo-RNG

There are two broad ways games get randomness.

  • True RNG (TRNG) samples unpredictable physical signals, typically hardware noise. It is closer to “real randomness”, but it is harder to control, less portable, and often unnecessary for game feel.
  • Pseudo-RNG (PRNG) uses an algorithm to generate a sequence of random-looking numbers from a starting value called a seed. PRNG is what most games use because it is fast, reproducible (when you want it to be), and consistent across platforms.

The Seed – Why Randomness Can Be Repeatable

A PRNG is deterministic: once you choose a seed, it produces a specific sequence of values. If you reuse the same seed and consume RNG calls in the same order, you can reproduce outcomes exactly. That is not a bug – it is often a feature.

  • Replays and debugging. If a rare bug happens only once every 10 000 runs, you can capture the seed and reproduce the exact same sequence. That makes problems fixable instead of “ghost stories” you can never recreate.
  • Daily runs and shared challenges. Roguelikes and challenge modes often use a daily seed so everyone plays the same generated content that day. The randomness is still there, but it is consistent across players, which makes comparisons meaningful.
  • Deterministic multiplayer (in some designs). Some games keep clients in sync by sharing a seed and then syncing player inputs. When done well it is efficient. When done poorly it is exploitable, because predictable randomness can become an advantage.

Random in games often means unpredictable to the player, not non-repeatable to the developer.
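This determinism is easy to demonstrate with Python's standard PRNG (the seed value is arbitrary):

```python
import random

seed = 777
a = random.Random(seed)
b = random.Random(seed)

# Same seed + same call order = the same sequence of outcomes.
run_a = [a.randint(1, 100) for _ in range(5)]
run_b = [b.randint(1, 100) for _ in range(5)]

print(run_a == run_b)  # True
```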

RNG Streams – Why Games Separate RNG Sources

Serious implementations often split randomness into multiple RNG streams so unrelated systems cannot influence each other. This is not theoretical – it prevents real production issues where a harmless feature changes outcomes somewhere else.

  • Combat RNG should be consumed only by combat (hit resolution, crits, damage spread, status effects). If combat shares a stream with UI or animation timing, you can create situations where “opening a menu” changes whether the next hit crits, which feels like rigging even if the math is unbiased.
  • Loot RNG should be isolated so reward outcomes do not depend on unrelated calls. Loot is emotionally high-stakes: players will form beliefs about fairness quickly, so you want your reward system to be stable and explainable.
  • Generation RNG (maps, rooms, encounter placement, spawns) is usually consumed heavily and early. If it shares a stream with loot or combat, small changes in generation can cascade into large differences elsewhere, making balancing and debugging much harder.
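Stream separation can be sketched with independent PRNG instances (seeds and call counts here are arbitrary); consuming the UI stream leaves the combat stream's sequence untouched:

```python
import random

combat = random.Random(1)   # stream consumed only by combat rolls
ui = random.Random(2)       # separate stream for UI/visual effects

first_three = [combat.random() for _ in range(3)]

# Player opens a menu: the UI stream is consumed, combat is untouched.
for _ in range(50):
    ui.random()

# A reference combat stream with the same seed, fast-forwarded past the
# three rolls already consumed, continues with identical values.
reference = random.Random(1)
for _ in range(3):
    reference.random()
print(combat.random() == reference.random())  # True
```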

Where RNG Shows Up In Games

RNG is not just a list of things that roll dice. It shapes how a game progresses, how combat feels, how solvable the meta becomes, and how pacing is controlled over long sessions.

  • Loot and rewards – RNG controls rarity so valuable items stay valuable, but it also controls progression speed and long-term motivation. If rewards are too random, players feel effort does not translate into progress. If rewards are too predictable, the chase collapses and the economy inflates because everyone completes sets at the same time. Good reward RNG usually includes guardrails (pity, duplicate protection, token systems) so the worst-case stories are prevented while the excitement of uncertainty remains.
  • Combat resolution – RNG shapes moment-to-moment variance through crit spikes, damage ranges, and proc timing. This can make fights feel dynamic instead of scripted. The failure mode is volatility: if one roll swings too much, players feel robbed. Good combat RNG is typically bounded (tight ranges), readable (clear feedback), and positioned as a modifier, not a replacement for skill.
  • Cards and dice – RNG prevents matches from becoming fully solved and repetitive by limiting perfect planning. In card systems, randomness is what forces adaptation: you can plan a line, but you cannot guarantee the next draw. The educational point is that good designs add variance management tools (mulligans, filtering, tutoring limits, deckbuilding constraints) so outcomes feel like decisions under uncertainty, not coin flips.
  • Procedural content – RNG increases replayability by remixing layouts and encounters, but strong procedural design is never “pure random”. It is rules plus randomness: constraints ensure a room is playable, pacing rules prevent difficulty spikes, and curated pools keep variety meaningful. The goal is controlled novelty – runs feel different, but they do not feel broken.
  • AI variety – RNG reduces predictable enemy patterns by selecting from valid actions rather than always picking the same optimal move. This makes enemies feel less robotic and stops players from solving the AI instantly. The key is constraint: randomness should operate inside a safe decision set (cooldowns, positioning rules, threat evaluation) so variety does not look like incompetence.
  • Spawning – RNG controls pressure and pacing in waves, open worlds, and arenas. Spawn randomness changes what the player must respond to and when, which is a pacing lever: too many threats at once creates frustration, too few creates boredom. Good spawn systems often include safety rules (no unavoidable spawns behind the player, minimum reaction distance, intensity budgets) so randomness shapes pacing without creating unfair traps.

Why RNG Feels Unfair Even When The Math Is Correct

Random sequences naturally produce streaks, and streaks feel personal even when they are statistically normal. Players also remember extreme bad runs more than average outcomes, so their mental stats are biased from the start.

Example – if success chance is 20%, failure chance is 80% (0.8). Failing five times in a row is:

0.8 x 0.8 x 0.8 x 0.8 x 0.8 = 0.32768 (about 33%).

So five failures in a row is not rare, it is expected. The educational point is that “fair odds” do not guarantee a “fair-feeling experience” because real randomness produces ugly tails eventually – and players live in the tails when they happen to them.
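The same calculation in code:

```python
fail_chance = 1 - 0.20          # 20% success means 80% failure per roll
p_five_fails = fail_chance ** 5 # five independent failures in a row
print(round(p_five_fails, 5))   # 0.32768, roughly a 1-in-3 chance
```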

  • Loss of control hurts most when the player did the right thing and still loses. This is why “random miss” is often perceived as an insult while “random crit” is perceived as a bonus – the miss removes agency, the crit adds upside.
  • Streak memory is a cognitive filter: players compress normal outcomes and vividly remember the worst ones. If your design allows an extreme streak, it will be turned into a story – and that story will define your system’s reputation.
  • Hidden rules trigger distrust. If odds are unclear, modifiers are invisible, or protection systems exist but feel secretive, players assume manipulation. Even a well-balanced system can fail if it is not readable.

Good RNG Design – Uncertainty With Guardrails

Good RNG is not more random. It is randomness shaped to support pacing, fairness, and skill. The tools below exist because pure RNG will eventually produce experience-breaking sequences.

Tool 1 – Bounded Randomness (Tight Ranges)

Bounded randomness limits how far a roll can swing results. A damage range of 10-100 creates huge emotional volatility: low rolls feel like punishment and high rolls feel like the game, not the player, decided the moment. A range like 45-55 still creates variation, but it protects the player’s expectation of consistency.

Design lesson: tight ranges preserve the feeling that outcomes follow from decisions, not from chaos.
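A quick Python comparison of the two ranges mentioned above (the seed and sample size are arbitrary):

```python
import random

rng = random.Random(9)

wild = [rng.randint(10, 100) for _ in range(1_000)]   # huge swing
tight = [rng.randint(45, 55) for _ in range(1_000)]   # bounded swing

print(max(wild) - min(wild))    # spread close to 90
print(max(tight) - min(tight))  # spread at most 10
```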

Tool 2 – Weighted Randomness (Controlled Probabilities)

Most loot systems are weighted because progression and economy cannot survive equal odds. Weighting lets designers control expected value over time: commons stay common, rares stay rare, and difficulty can increase expected reward value without guaranteeing jackpots.

Design lesson: weighting is not about deception – it is about making reward rates compatible with your game’s pacing and long-term motivation.
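A minimal weighted-drop sketch in Python (the rarity table and weights are illustrative):

```python
import random

rng = random.Random(2024)
table = ["common", "uncommon", "rare"]
weights = [80, 18, 2]   # expected long-run rates: 80%, 18%, 2%

# Sample many drops and check the observed rare rate.
drops = rng.choices(table, weights=weights, k=10_000)
rate_rare = drops.count("rare") / len(drops)
print(rate_rare)  # close to 0.02
```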

Tool 3 – Bad Luck Protection (Pity Systems)

Pure RNG allows infinite failure. That means someone, eventually, will have a horror streak that feels impossible and breaks trust. Bad luck protection caps the misery while keeping early attempts exciting.

A pity timer guarantees success after N failures, escalating odds increase the chance after each miss, and duplicate protection reduces repeats until a set is completed. These mechanisms change the tail of the distribution – they do not necessarily change the average reward rate, but they dramatically improve perceived fairness.

If a reward system allows unlimited failure, then the worst-case experience grows with playtime and player count. Protection mechanisms cap the maximum expected drought and stabilize progression, without removing the moment-to-moment uncertainty that makes drops exciting.
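A minimal pity-timer sketch in Python (the 0% base chance is an extreme value chosen purely to show the guarantee):

```python
import random

class PityTimer:
    """Rolls at base_chance, but guarantees success after max_fails misses."""

    def __init__(self, base_chance: float, max_fails: int):
        self.base_chance = base_chance
        self.max_fails = max_fails
        self.fails = 0

    def roll(self, rng: random.Random) -> bool:
        if self.fails >= self.max_fails or rng.random() < self.base_chance:
            self.fails = 0
            return True
        self.fails += 1
        return False

# Worst case is now bounded: even with a 0% base chance,
# the 11th attempt succeeds (10 failures, then pity kicks in).
pity = PityTimer(base_chance=0.0, max_fails=10)
rng = random.Random(0)
attempts = 0
while not pity.roll(rng):
    attempts += 1
print(attempts)  # 10 failures before the guaranteed success
```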

Tool 4 – Shuffle Bags (Reduce Extreme Streaks)

A shuffle bag builds a controlled pool of outcomes and draws without replacement, then refills. This preserves unpredictability while greatly reducing extreme streaks. It is especially useful for proc systems, spawns, and controlled reward drops where long droughts are unacceptable.

Design lesson: shuffle bags turn independent rolls into managed variance, which often feels fairer while still being unpredictable moment to moment.
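A minimal shuffle-bag sketch in Python (the 1-hit-in-4 pool is illustrative):

```python
import random

class ShuffleBag:
    """Draw outcomes without replacement; refill and reshuffle when empty."""

    def __init__(self, outcomes, rng):
        self.outcomes = list(outcomes)
        self.rng = rng
        self.bag = []

    def draw(self):
        if not self.bag:
            self.bag = self.outcomes[:]
            self.rng.shuffle(self.bag)
        return self.bag.pop()

# One hit in every 4 draws, guaranteed per cycle, order still unpredictable.
bag = ShuffleBag(["hit", "miss", "miss", "miss"], random.Random(7))
draws = [bag.draw() for _ in range(40)]     # ten full bag cycles
print(draws.count("hit"))                   # exactly 10
```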

Tool 5 – Streak Breakers (Hard Caps)

Streak breakers are explicit caps when pacing cannot tolerate long failure runs. A soft breaker sharply increases chance after multiple failures; a hard breaker guarantees success after a defined streak length.

Try to use streak breakers when failure streaks create boredom, grind fatigue, or perceived brokenness. Avoid them when your game’s identity relies on raw variance and risk.
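A minimal soft-breaker sketch in Python (the base chance and escalation step are illustrative; they are chosen so the chance reaches exactly 1.0 on the fourth roll):

```python
import random

class SoftBreaker:
    """Success chance escalates after each failure and resets on success."""

    def __init__(self, base_chance: float, step: float):
        self.base_chance = base_chance
        self.step = step
        self.bonus = 0.0

    def roll(self, rng: random.Random) -> bool:
        if rng.random() < self.base_chance + self.bonus:
            self.bonus = 0.0
            return True
        self.bonus += self.step
        return False

# Chance climbs 0.25 -> 0.50 -> 0.75 -> 1.0, so a failure streak
# can never exceed 3 rolls.
breaker = SoftBreaker(base_chance=0.25, step=0.25)
rng = random.Random(5)
longest, current = 0, 0
for _ in range(10_000):
    if breaker.roll(rng):
        current = 0
    else:
        current += 1
        longest = max(longest, current)
print(longest)  # at most 3
```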

Tool 6 – RNG Sets The Situation, Skill Decides The Outcome

This is one of the cleanest patterns for “fair-feeling randomness”. RNG provides variety in what appears (upgrades offered, loot available, encounter composition), but the player’s decisions and execution determine whether they win.

Design lesson: players accept RNG better when it changes the puzzle, not when it overrides the solution.

RNG In Competitive And Online Games

In multiplayer, RNG is not only a design decision – it is a trust and security problem. If players believe outcomes are manipulable, the game’s credibility collapses even if the distribution is mathematically fine.

  • Client-side RNG is fast and responsive, but riskier: if a client can predict or influence the random stream, cheating becomes possible.
  • Server-side RNG is more trustworthy because the authoritative roll happens on the server, but it must handle latency, reconciliation, and synchronization cleanly so outcomes do not feel delayed or inconsistent.

Transparency matters because players will reverse-engineer meaning if you do not provide it. You do not need to publish every formula, but you do need consistent rules – what affects the roll, how modifiers stack, and whether protection systems exist. If you change odds dynamically, it should feel like a designed system, not a hidden trick.

Common RNG Myths

  • Myth: The game avoids what I want.
  • Reality: Low probability plus small sample sizes naturally create streaks that feel targeted. Players experience probability as narrative, so a few bad rolls can feel like intention.
  • Myth: I am due for a win.
  • Reality: Independent rolls do not owe success unless your system explicitly includes protection rules that change odds after failures.
  • Myth: Random means evenly spread.
  • Reality: Real randomness clusters, and clusters feel unfair. Evenly spread is a human expectation, not a property of independent random sequences at small sample sizes. If you want outcomes to feel evenly distributed in play, you often need controlled randomness (shuffle bags, protection, bounded variance).

How Teams Test RNG So It Does Not Break The Game

RNG can be tested like any other system. The key is not only validating averages, but validating streaks and worst-case tails – because that is where player trust is won or lost.

  • Large simulations validate that rates converge correctly over huge roll counts and across different states. This is how you detect subtle biases, stacking bugs, and unintended interactions.
  • Distribution checks measure streak frequency and variance. A system can have a correct average drop rate and still produce unacceptable droughts. Testing the distribution tells you whether the experience will generate rage stories.
  • Edge-case tests stress unusual combinations – stacked modifiers, extreme build synergies, and rare states that occur only sometimes. These are the scenarios where RNG systems most often break.
  • Seed logging makes rare issues reproducible. If a player reports something impossible, you can inspect the seed and the sequence of RNG consumption instead of guessing.
  • Economy modeling connects RNG to progression and retention. Reward RNG is not isolated: it changes session length, player motivation, and content lifespan.
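A simulation along these lines fits in a few lines of Python (the drop chance, roll count, and seed are illustrative):

```python
import random

def simulate(chance: float, rolls: int, seed: int):
    """Simulate independent drops; return observed rate and longest drought."""
    rng = random.Random(seed)
    hits, drought, longest = 0, 0, 0
    for _ in range(rolls):
        if rng.random() < chance:
            hits += 1
            drought = 0
        else:
            drought += 1
            longest = max(longest, drought)
    return hits / rolls, longest

rate, worst = simulate(chance=0.02, rolls=100_000, seed=99)
print(rate)   # converges near the configured 0.02
print(worst)  # the worst drought is what players will remember
```

The average rate looks fine, while the longest drought is typically several hundred rolls; that tail is the part worth designing around.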

Pure randomness can create ugly streaks, and those streaks are exactly what makes players feel the game is unfair even when the math is correct. Good RNG design keeps uncertainty playable. It stays exciting and readable, and its impact is limited so skill remains the main driver. Use RNG to add variety and tension, not to decide the whole outcome. When randomness supports the experience instead of overriding it, players stay engaged and trust the system.

Churn Rate

Churn Rate (customer departure rate) is a comparative metric used in analysis to express how quickly a company is losing customers or repeat purchases in a specific time period – typically monthly or annually. It measures the percentage of customers who ended their relationship with the service, stopped buying, or canceled their subscription.

The goal is to identify weak points in customer retention, reveal structural problems in the business model, and optimize growth strategy by reducing losses in the customer base.

What is Churn Rate used for?

To evaluate customer departure and its impacts – for example, when:

  • tracking growth or decline in the number of active customers,
  • measuring loss of repeat purchases or subscription cancellations,
  • analyzing MRR churn (loss of recurring monthly revenue due to customer departure),
  • evaluating the effectiveness of retention campaigns and loyalty programs,
  • monitoring the impact of customer departure on growth strategy and long-term customer value.

Customer churn rate and MRR churn rate

Two basic indicators are tracked for churn:


  • Customer Churn Rate – applies to individual customers and measures the speed at which a business is losing specific customers or customer accounts.
  • MRR Churn Rate (Monthly Recurring Revenue) – an indicator expressing as a percentage the total revenue loss resulting from customer departure in a given period. From a business perspective, it has greater informational value because it also considers the economic weight of individual customers.

Example: You have ten customers, but one of them is responsible for a quarter of your monthly revenue.

If they leave, Customer Churn Rate = 10%, but MRR Churn Rate will reach 25%.

How is it expressed?

Churn is typically expressed as a percentage: the ratio of the number of customers who left during the period to the total number of customers at the beginning of that period.

For example: 5% Churn Rate means that the company lost 5% of its customer base in the given period. This figure shows what portion of the customer base was lost – key information for managing growth and business sustainability.

Example

A company providing a SaaS service had 1,000 active customers on January 1, 2025. By February 1, 2025, it had lost 50 customers who canceled their subscription. The churn rate for January is therefore 5%.

This means that for every 100 customers, it loses 5 monthly – and if this is not compensated by new customers or customers with higher revenues, the company’s growth will be at risk.
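Both rates from the examples above can be computed with the same simple formula. A Python sketch (the $10,000 starting MRR for the ten-customer example is an assumed figure):

```python
def customer_churn_rate(lost: int, customers_at_start: int) -> float:
    """Share of customers lost during the period, as a percentage."""
    return lost / customers_at_start * 100

def mrr_churn_rate(lost_mrr: float, mrr_at_start: float) -> float:
    """Share of monthly recurring revenue lost during the period."""
    return lost_mrr / mrr_at_start * 100

# SaaS example from above: 50 of 1,000 customers cancel in January.
print(customer_churn_rate(50, 1_000))   # 5.0

# Ten customers; the one carrying 25% of revenue ($2,500 of $10,000) leaves.
print(customer_churn_rate(1, 10))       # 10.0
print(mrr_churn_rate(2_500, 10_000))    # 25.0
```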

Voluntary vs. involuntary customer departure

  • Voluntary (active) churn – customers voluntarily stop buying or cancel their subscription.
  • Involuntary (passive) churn – the customer leaves unintentionally, for example due to a failed payment or a technical error with the payment method.

Tip: Passive churn should be addressed immediately – for example, with a reactivation campaign or notification about unpaid payment – before it spreads and gets out of control.

Negative Churn

Negative churn is considered the “holy grail” of growth and a symptom of a strong product and business model. It occurs when new revenue from existing customers (expansion, upsell, or reactivation) exceeds revenue lost due to departures.

In other words – a smaller but more active group of customers can compensate for revenue loss caused by the departure of some clients through their spending.
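Net revenue churn makes this concrete: subtract expansion revenue from churned revenue before dividing by starting MRR. A Python sketch (the dollar figures are illustrative):

```python
def net_mrr_churn_rate(churned_mrr: float, expansion_mrr: float,
                       mrr_at_start: float) -> float:
    """Net revenue churn; a negative value means expansion outgrew losses."""
    return (churned_mrr - expansion_mrr) / mrr_at_start * 100

# Lost $400 MRR to cancellations, gained $600 from upsells to existing users.
print(net_mrr_churn_rate(400, 600, 10_000))  # -2.0, i.e. negative churn
```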

What is a good churn rate?

Generally, it’s stated that an acceptable customer departure rate ranges between 5-7% annually.


In reality, however, it depends on the industry, business model, and customer characteristics.

Tip: Start from the LTV/CAC ratio (Lifetime Value / Customer Acquisition Cost) and look for a balance that ensures healthy growth and profitability.

Cohort analysis – Churn Rate

Cohort analysis allows tracking at what point in the lifecycle departure is highest and how customer behavior evolves over time.


For example, it can reveal that churn is highest during the first or second month – which indicates insufficient communication of product value or weak onboarding.

Analysis of cohorts (groups of customers who converted in the same period) allows identifying critical phases and verifying whether new measures lead to lower churn in subsequent cohorts.
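Month-over-month churn within one cohort can be computed directly from its active-customer counts. A Python sketch (the counts are illustrative):

```python
def cohort_churn_by_month(active_per_month: list) -> list:
    """Percentage churn between consecutive lifecycle months of one cohort."""
    return [
        (prev - curr) / prev * 100
        for prev, curr in zip(active_per_month, active_per_month[1:])
    ]

# A January cohort of 100 customers, observed over four months.
actives = [100, 70, 60, 55]
print(cohort_churn_by_month(actives))
# The first month stands out (~30%), pointing at onboarding as the weak phase.
```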

Why is this metric important?

Churn Rate is a key indicator of company health because it directly affects growth, revenue, and return on marketing investment. While acquisition metrics (e.g., CAC) show how much it costs to acquire a new customer, churn reveals how well the company retains its customers.

It helps to better assess:

  • the effectiveness of retention measures and customer care,
  • customer lifetime value (LTV) in relation to their acquisition cost (CAC),
  • structural weaknesses in the business model – if churn is high, growth will be unsustainable,
  • the speed at which new products, services, or price changes affect customer response.

This makes Churn Rate a fundamental tool for analysts, marketing, and management when evaluating company health and business models with recurring revenue.

What to watch out for with the Churn Rate metric

When interpreting, it’s important to:

  • distinguish between Customer Churn and MRR Churn – losing one large customer can have a greater impact than ten smaller ones,
  • not neglect passive churn and address technical causes of failed payments in time,
  • track cohorts and discover at which phase of the lifecycle departure is highest,
  • combine churn with LTV, CAC, and retention indicators for a complete view of customer base health.

Only then does this metric have real informational value and can be used as a reliable basis for planning growth, retention strategies, and budgeting.

Quarter on Quarter - QoQ

QoQ

Quarter on Quarter (abbreviated as QoQ) is a comparative metric used in analysis to compare economic, financial, or operational indicators between two consecutive quarters – that is, between the current and previous quarter.

The goal is to assess the development of a company’s, industry’s, or market’s performance over a shorter time horizon and quickly identify trends that may signal growth, slowdown, or stagnation.

What is QoQ used for?

To evaluate quarterly performance – for example, in:

  • tracking revenue growth, profit, and operating margin between two quarters,
  • analyzing productivity, inventory turnover, or cash flow,
  • reporting results of publicly traded companies,
  • monitoring macroeconomic indicators such as GDP, industrial production, or inflation,
  • evaluating the impact of seasonal factors and economic cycles.

How is QoQ calculated – Quarter-on-Quarter (QoQ) formula

The Quarter-on-Quarter (QoQ) metric shows by what percentage a given indicator has changed between two consecutive quarters (current vs. previous quarter).

And how do you calculate Quarter-on-Quarter (QoQ)?

QoQ (%) = ((Current quarter value – Previous quarter value) / Previous quarter value) × 100

Notes:

positive QoQ (%) = growth compared to the previous quarter

negative QoQ (%) = decline compared to the previous quarter

0% = no change
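The formula and the sign convention above can be expressed directly (a minimal sketch; the numbers echo the revenue example discussed later in this article):

```python
def qoq_change(current, previous):
    """Percentage change between two consecutive quarters."""
    return (current - previous) / previous * 100


# Revenue growing from $90B to $92.43B quarter over quarter
print(f"{qoq_change(92.43, 90):+.1f}% QoQ")  # prints +2.7% QoQ
```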

How is QoQ expressed, and how do you correctly interpret the Quarter on Quarter metric?

Quarter-over-quarter changes are typically expressed as percentages.

The notation looks like:

+2.7% QoQ or -0.9% q/q

This notation shows by what percentage the indicator’s value increased or decreased compared to the previous quarter.

Example

Apple announced revenue growth of +2.7% QoQ in the second quarter of 2025.

This means that the company’s revenue was 2.7% higher than in the first quarter of 2025 – for example, if revenue reached $90 billion in the first quarter, it increased to approximately $92.43 billion in the second quarter.

Quarter-over-quarter comparison helps reveal the current trend in revenue development and provides a quick overview of the company’s short-term performance between individual periods of the fiscal year.

Why is quarter-over-quarter comparison important?

QoQ is among the fundamental tools of financial analysis and reporting because it enables tracking performance development within a single year and evaluating results without waiting for annual data.

Unlike the year-over-year metric (YoY), which shows long-term trends, QoQ provides a view of the current pace of growth or decline and helps identify changes that may precede broader economic shifts.

It helps to better assess:

  • short-term growth or performance slowdown,
  • the influence of seasonal trends between individual quarters,
  • the effectiveness of new strategies or marketing measures,
  • the speed of a company’s response to market fluctuations and demand.

This makes QoQ a metric frequently used by analysts, investors, and management in quarterly earnings presentations and strategic decision-making.

Difference of QoQ from other metrics

  • MoM (Month on Month) – month-over-month comparison that tracks rapid and short-term changes.
  • QoQ (Quarter on Quarter) – quarter-over-quarter comparison that provides an overview of performance development within one year (the term QoQ and its explanation and description are the focus of our entire article above).
  • YoY (Year on Year) – year-over-year comparison that displays long-term trends without the influence of seasonality.
  • CAGR (Compound Annual Growth Rate) – accounts for an entire multi-year period and compound interest, providing the most accurate view of long-term trends.

What to watch out for with the QoQ metric

When interpreting QoQ results, it’s important to consider seasonal influences, the length of quarters, and extraordinary events (such as new product launches or one-time expenses).

Quarter-over-quarter growth may appear positive but does not necessarily indicate a long-term trend.

QoQ should therefore always be supplemented with year-over-year comparison (YoY), which allows identification of whether the change is sustainable over a longer period.

CAGR – What is CAGR and how is it calculated – Compound Annual Growth Rate formula

CAGR

Compound Annual Growth Rate (abbreviated as CAGR) is a financial metric that expresses the average annual growth rate of a value – such as revenue, profit, investment, or number of customers – over a specific time period.

The goal is to determine how quickly the value of the tracked indicator grew (or declined) on average each year, taking into account compound interest—that is, the fact that growth in each year is based on a higher base than in the previous year.

What it’s used for

To measure long-term growth rates—for example, in:

  • evaluating the average annual growth of a company’s revenue, profit, or turnover,
  • analyzing the development of investments, funds, or portfolios,
  • comparing growth dynamics between different companies or industries,
  • assessing the development of market share or customer numbers over a longer time horizon,
  • setting realistic targets for strategy and growth planning.

How is CAGR calculated – Compound Annual Growth Rate formula

CAGR is calculated using the formula:


((Final value / Initial value) ^ (1 / number of years)) – 1

The result represents the average annual growth rate in percentages, which would lead to the same final value if growth were constant each year.

Example

A company invested 10 million CZK in 2020 and in 2025 the investment value was 18 million CZK.

CAGR = ((18 / 10)^(1 / 5)) – 1 ≈ 0.125 = 12.5% annually.

This means that the average annual growth rate of the investment was 12.5%—even though growth in individual years could have varied, this value expresses uniform returns over a longer time horizon.
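The calculation from the example can be reproduced in a couple of lines (a minimal sketch; the function name is my own):

```python
def cagr(final_value, initial_value, years):
    """Compound Annual Growth Rate as a fraction (0.125 == 12.5%)."""
    return (final_value / initial_value) ** (1 / years) - 1


# The example from the text: 10M CZK growing to 18M CZK over 5 years
print(f"{cagr(18, 10, 5):.1%}")  # prints 12.5%
```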

Why the CAGR metric is important

CAGR is among the most reliable indicators of long-term development because it eliminates the influence of short-term fluctuations and enables objective comparison of growth across time. Unlike simple year-over-year comparison (YoY), which works with a single difference, CAGR considers the entire period, thereby providing a more realistic picture of actual growth rate.

It helps to better assess:

  • long-term growth of revenues, profit, or investments,
  • stability and sustainability of growth trends,
  • effectiveness of strategy over a multi-year period,
  • actual returns on projects or investment funds over time.

This makes CAGR a common component of investment analyses, corporate reports, and strategic presentations for shareholders and management.

Difference from other metrics

  • MoM (Month on Month) – month-over-month comparison that tracks rapid and short-term changes.
  • QoQ (Quarter on Quarter) – measures quarter-over-quarter growth rate within a year.
  • YoY (Year on Year) – shows annual change between two periods, suitable for short-term tracking.
  • CAGR (Compound Annual Growth Rate) – accounts for an entire multi-year period and compound interest, providing the most accurate view of long-term trends (the term CAGR is the one explained and described above).

What to watch out for with the CAGR metric

When interpreting, it’s important to remember that CAGR does not show actual fluctuations in individual years—it only calculates the uniform rate that leads to the same result. Therefore, it’s advisable to combine it with year-over-year data (YoY) or a chart of actual development.

Distortion can also occur if the initial value is unusually low or includes a one-time anomaly. Proper interpretation of CAGR requires knowledge of the context and the entire time development of the tracked indicator.


How to stop AI from creating false information, disinformation, and bullshitting – how to get practical, accurate answers and minimize AI hallucinations

Artificial intelligence is a great tool. It can speed up work, supplement knowledge, reveal new connections, and sometimes surprise you with a result you wouldn’t have thought of yourself. But it’s important to acknowledge reality – AI is not a miraculous brain and certainly not a truthful expert. It’s a statistical model that generates the most probable answer based on learned patterns – sometimes it hits the mark precisely, other times it confidently spouts nonsense. How do you prevent this? We’ll get to that in a moment, but first let’s go over the basics once more.

Yes, AI can speed up work, help with research, draft materials, suggest text structure, and explain technical problems. But it doesn’t make anyone an expert. And it’s definitely not a replacement for independent thinking. Those who have worked with these tools for a longer time know well that despite all the procedures, guides, and prompts, the model will sometimes simply respond with nonsense.

Artificial intelligence still loves to hallucinate – and no, the problem isn’t that you’re using the free version of ChatGPT; paid versions hallucinate just as happily. (This varies quite a bit from tool to tool – in Claude, for instance, you’ll run into fabricated sources less often, and in my experience it generally makes things up less and handles facts somewhat better at baseline.)

This means that information needs to be read, compared, and sanity-checked in your own head to confirm it isn’t complete nonsense.

Not because the user “doesn’t know how to write a prompt.”

Not because they’re too lazy to study it.

But simply because artificial intelligence still doesn’t think, it only predicts the most probable answer.

And sometimes it hits amazingly precisely, other times it misses completely. When it misses, you’ll often hear the same mantra from various wannabe experts who became AI gurus overnight:

“Just write a better prompt.”

Yes, that’s true. You can always write a better and more detailed prompt. But that’s only half the truth.

The other half goes: “How much time does it really make sense to invest in tuning an AI response… and when is it faster to do it the old way?”

If it’s a task that would normally take you tens of minutes to hours of work, or an activity you’ll repeatedly perform, or you don’t know how to approach it at all, it makes sense to use AI as an assistant that will speed up the work and help you structure the process.

A typical situation might be, for example:

  • Drafting arguments for client communication – AI helps assemble logical arguments, lists advantages, objections and counter-arguments, adds recommended tone and communication style.
  • Writing procedures, checklists or methodologies – AI creates a clear step-by-step process, adds control points and recommendations so the process is clear and replicable.
  • Creating an outline for a marketing campaign or strategy – AI proposes campaign structure, target segments, key messages and recommended communication channels.
  • Proposing logic for a decision-making process or project task – AI helps break down the problem into steps, define decision criteria, possible scenarios and recommended procedure.
  • Transcribing and editing text – transcribing voice notes to text, adding structure, language correction.
  • Summarizing professional text – for example, turning 5 pages of internal study into one understandable page for management.
  • Expanding brief notes – you have 5 bullet points – AI generates quality continuous text from them.
  • Reformulating text for different audiences – technical version, lay version, business version.
  • Creating an outline – for an article, presentation, SOP, video, newsletter, email.
  • Creating short message variants – 1 minute, 10 seconds, social post, headline.
  • Creating schedules or checklists – customer onboarding, project timeline, proposal preparation.
  • Meeting or document summary – extracting key points, tasks, deadlines.
  • Solution variant proposals – for example, three different versions of arguments or business email.
  • Translation and tone adjustment – not just translation, but conversion to Czech style and context.
  • Ideas and brainstorming – slogans, claims, product names, messaging, content pillars.
  • Explaining complex concepts – simple version with concrete examples.
  • Supplementing decision-making materials – overview of pros and cons, risks, alternatives.
  • Generating follow-up emails – different tones and communication variants.
  • Converting informal notes – from chaotic text to professional output.
  • Creating step checklists – proposal preparation, supplier selection, project implementation.
  • Proposing information structure – sorting documents, CRM fields, project tasks.
  • Simulating a client or investor – AI plays the role of counterparty and tests arguments.
  • Highlighting blind spots – points you overlooked, adding context.
  • etc.

Moreover, it’s also necessary to know not only what different AI systems exist, but when they’re suitable or unsuitable for completing the task you need – because they have different strengths and weaknesses and their suitability therefore differs according to the type of task.

Some operations are more efficient to perform in ChatGPT, others in Claude AI, Gemini, or in tools integrated into office applications, and sometimes it simply doesn’t pay to use AI tools at all and it’s better and faster to do the task manually/the old way.

And then there are certain operations that some tools can’t handle at all; try, for example, getting Claude to correctly use lower-opening and upper-closing quotation marks (i.e., the Czech characters „ and “) for writing direct speech.

As a rule, despite all efforts, instead of the correct variant: „Hello, how are you?“

I get: “Hello, how are you?”

This is most likely because Claude’s model has its primary data core in English. Czech „“ quotation marks are probably only marginally represented in the dataset – so for the model, this pattern has a low probability of occurrence. And because AI doesn’t apply Czech language rules, only occurrence statistics, it will keep giving you a different pattern as a result, even if you forbid that result, show it correct examples, save it to memory, or add the command to your settings (preferences) as custom instructions.

Even thorough instructions or prompt engineering may not help if we want an output from the model in an unusual format or style that contradicts its statistical training.

If we require from the model a style or format that directly conflicts with its training, then even with repeated reminders, examples of the correct format, and saved permanent instructions, the model may drift back to its accustomed patterns after a while. The reason is technical – current language models are guided by probabilistic patterns from training and, in practice, have no reliable mechanism for “hard prohibitions.” The model can partially respect the instruction, especially for shorter responses or if we actively monitor it – but with longer texts, or when there is a stylistic conflict, it often slides back to what it “knows best.” Permanently retraining it requires intervention in the model itself or special controlled-generation mechanisms, which are not tools for ordinary users.

The simple reality therefore still holds – AI can significantly speed up work, but human oversight and correction are essential. For some tasks you simply can’t do without manual checks and adjustments, and sometimes you’ll never get the correct answer at all. To be clear, I’m not saying AI is completely incapable – I don’t want anyone to read this as some idiot proving that AI is worthless because it can’t write quotation marks for him. It isn’t.

But this is enough for understanding why you can’t rely purely on AI. 🙂

AI naturally handles plenty of very useful automations – reports, exports, extracts, and structures for SEO/PPC campaigns that would otherwise take you tens of hours can now be done in hours, once you manage to iron out all the bugs. Sometimes that is more laborious than expected; other times it saves tens or even hundreds of hours per year when it works out.

It’s important to realize that AI is a statistical model, albeit a very sophisticated one. It’s not about real understanding of problems, but about probabilistic estimation of what answer is most likely correct based on training data. AI doesn’t think – it guesses the most probable continuation of text according to patterns it saw during learning. That’s why answers can be wrong, even if you use the best input method. It depends on what the model learned from, how the data was processed and how the answer calculation proceeds. Always check important information and don’t rely blindly on AI output.

Brief cheat sheet for tools see below:

  • ChatGPT – universal large language model suitable for writing texts, creative content, marketing proposals, explaining complex topics, logical tasks, code design and structuring information. It can quickly create drafts of articles, corporate documents, presentations, email communication, argumentative outlines and communication strategy for different audiences. Occasionally it adds a probable estimate instead of a fact if no source is provided – therefore it’s necessary to check specific data. If the user doesn’t supply data, the model relies on training information and may not reflect the latest changes.
  • Claude – focused on professional and structured texts, working with extensive documents, legal materials and technical materials. Strong in logical arrangement of information, argumentation, precise work with terminology and consistency of tone and structure. Suitable for analyses and legal or process documents. Thanks to a stricter approach to uncertain information, it adds assumptions less – but for creative tasks it may seem reserved and sometimes refuses vague assignments. Excellent for programming and coding, but not exactly a great tool for creative designs.
  • Gemini (Google) – strong in searching and working with information from the web, visual inputs and tasks in the Google ecosystem. Suitable for research, tabular outputs and orientation data overviews. Style is predominantly factual and informative, less suitable for emotional marketing content and creative copywriting. It allows working directly with Google documents and spreadsheets without manual data copying, can supplement context from the web and automates office workflow within Google Workspace. If you live in the Google ecosystem, it’s a significant time saver.
  • Microsoft Copilot – ideal for Word, Excel, PowerPoint, Outlook and corporate workflow. Excellent for summarizing meetings, spreadsheets, corporate documentation and emails. Maintains professional tone and is strong in office agenda – not primarily intended for creative writing or creating distinctive communication identity. It connects directly with corporate documents and data in the Microsoft ecosystem, so it saves time when preparing presentations, reports, contract materials or email communication. Ideal for corporate environment where you need to quickly process real documents, spreadsheets and meeting notes.
  • Notion AI – tool for organizing information, notes, SOPs, checklists, internal manuals and documentation. Converts informal notes into clear structure and helps create systematic materials for corporate processes, projects and knowledge bases. Strong where order, logic and content clarity are needed – less suitable for creative or emotionally tuned texts because it naturally generates factual, procedural tone.
  • Midjourney – suitable for stylized visuals, branding, moodboards, product scenes and concepts. Excels in aesthetics and originality. It’s a great tool for new visuals and possible ideas, or for creating new visuals that should emphasize some mood and overall visual style. It can make beautiful images and helps imagine how things could look. But it’s not entirely suitable where technical precision is needed – such as correct proportions, construction details or faithful representation of specific people and products. It’s more of a creative tool than technical, so the result looks nice but may not be entirely according to the reality you need.
  • Stable Diffusion – flexible tool for realistic images, retouching, product visualizations and precise control over output. Can be run locally, modify styles, use control tools (e.g., ControlNet) and train custom models so that images correspond to specific requirements – for example, realistic representation of a specific product, face, brand or architecture. Unlike Midjourney, it’s not just a “tool for ideas,” but allows teaching the model your own visual style or specific object so that the result looks according to reality. Thanks to this, it’s suitable for situations where precise match with reality is important, not just creative appearance. However, it requires certain technical proficiency, working with parameters and patience when tuning – only then do you achieve professional results with this tool.
  • Runway – suitable for creating short video scenes, visual effects and creative clips. Great for prototypes and visual inspiration. For longer videos, the process is significantly more demanding and requires a series of intermediate steps, manual adjustments and post-production – production of longer videos can take tens of hours and is not necessarily cheaper than the classic approach.
  • Pika Labs – focused on short animations, stylish video clips and dynamic effects for social content. Ideal for visual ideas and short motion design. Not intended for long film materials or technically demanding video projects.
  • Sora – model for generating videos based on text, images or clips; allows creating visually very high-quality short sequences, converting scripts to video and connecting different shots in one piece. Excels in rapid prototyping of visual ideas and scenic designs and provides easy interface for video content creation. Not ideal for long or complex productions with high degree of post-production and technical stability, because generating longer videos still requires large amount of time, manual adjustments and editing experience.
  • ElevenLabs – realistic voice synthesis for voice-over, dubbing and corporate communication. Captures intonation and natural expression. For languages with less support, pronunciation tuning may be needed.
  • Descript – great tool for video and audio editing by text. Suitable for podcasts, online courses, corporate interviews and educational content. Efficient for spoken content and scripted recordings – not specialized in film editing or dynamic advertising.
  • HeyGen – avatar video presentations, corporate onboarding, customer messages and lip-sync videos. Enables rapid production of talking avatars without filming. Best for formal and informational content – avatar tone is not intended for dramatic or emotional storytelling. For longer videos, processing time and price increase significantly, often with worse results than classic filming.
  • Lovable.dev – focused on rapid application development and prototyping using AI (you can create an MVP in it relatively easily and cheaply). It can convert text description into functional application with backend, frontend and database. Strong in generating UI, components, logic and basic project architecture – including automatic code creation, tests and version commits. Ideal for founding projects, MVPs, internal tools, dashboards or idea validation. Significantly speeds up work thanks to integrated editor and AI assistance directly in code. Doesn’t blindly rewrite code, but tries to reconstruct and optimize it – for complex or non-standard projects, however, it may require technical oversight and manual adjustments. Not intended as full replacement for senior developer, but as work accelerator that serves excellently for prototypes, proof-of-concept solutions and rapid launch of ideas that can then be tuned manually.

At the same time, you always need to judge for yourself whether it’s worth continuing to refine the prompt to get a perfect result, or whether there is a simpler path – AI tools really aren’t cure-alls, and the more you rely on them, the more future updates and changes to those tools’ models will affect you.

Examples:

  • Logo or visual identity – AI can quickly propose style and idea – but you’ll fine-tune the final form manually in a graphic editor, because you want full control over the result, or you count on doing more edits with that visual, etc.
  • Copywriting and marketing texts – AI kicks off the idea great, gives variants, helps with brainstorming – but you write the final version yourself, so the tone and message are personal and precise, there are no factual errors, typos, non-existent words, correct tonality and language expression.
  • Complex decisions (finance, strategy, technical solution) – AI gives quick overview and summary of options – but the final decision must come from combination of AI, experience and common sense (for example, customer cycle or how your company works).
  • Bulk editing/filtering sensitive data – you get advice from AI, for example, formula/procedure for how to do it (you mainly avoid errors that it could put in there for you with larger data sets when you don’t give it absolutely perfect instructions, over whose creation you would spend long hours).
  • Contracts and legal texts – AI helps with structure and points out risks – but the final wording must go through lawyer’s review, because incorrect wording can mean real risk for you/your company. But at least for quick outline of chapters and what you should cover, it can be a good helper.
  • Technical solutions and architecture – AI proposes possible procedures and technologies – but the final decision comes from your knowledge of environment, security and system limitations, budget, features you need and thousands of other parameters.
  • Project management – AI prepares schedule, tasks and communication points – but prioritization, human capacities, risks and changes over time must be managed by a person.
  • Data analysis and reports – AI summarizes data and proposes conclusions – but a person must verify whether the model correctly understood the context and didn’t draw an incorrect conclusion.
  • Customer communication – AI prepares response texts, summaries, and reaction variants – but the empathetic tone and the final choice remain with a person, because AI can’t fully capture nuance and emotion, not to mention that you probably don’t want customers receiving nonsense as responses. It should also be added that while AI can formulate basic answers for you, it really isn’t well suited for deeper answers to technical questions, because it can’t grasp them deeply enough. For example, your customer center can take a customer’s initial inquiry and have AI suggest what a suitable solution proposal for that client might look like.
  • Supplier/employee selection – AI helps define criteria and comparison – but you’ll assess the real value of a person or company only by combination of references, behavior and context. AI can be used to prepare the process and selection structure, not to replace human judgment. At the same time, it’s not appropriate (and in many cases not even legal) to use AI for automated “evaluation of people” or decision-making without human review – especially for resumes, personality conclusions or applicant profiling.
  • Presentations and materials for management – AI generates outline, visual and summary – but you tune the precise message, facts and communication tone.

What does all this lead to?

That AI is not a calculator. It’s a tool that requires:

  • experimentation,
  • critical thinking,
  • ability to verify facts,
  • also having your own knowledge and the ability to keep learning in the given area/topic (otherwise you can’t ask correctly or recognize what is a hallucination/total nonsense).

Likewise, there’s no universal prompt that will make you an expert without work.

Why?

Because AI doesn’t create new knowledge – it works with what it already has from us (people). And therefore – if you yourself don’t understand the principle, problem or context, you can’t assess whether AI is giving you the correct answer or just nicely formatted nonsense.

So here it holds – to be able to formulate/ask AI the correct question to get a relevant result, you must understand the topic/issue.

Without that:

  • you have no way to recognize that AI is confidently and very convincingly lying,
  • you can’t select correct information and filter out nonsense,
  • you can’t follow up with another question in the right direction,
  • you don’t know when to use AI output and when to ignore it.

It’s like having a scalpel – the tool itself doesn’t make you a surgeon either. A prompt is just an “arrow,” but the trajectory and target are determined by the person who gives AI the commands (prompts). You can of course work around this with the so-called onion-peeling process, where you gradually submit your initially naive questions to AI and let it explain the topic step by step until you roughly grasp it and can ask better questions. But that still doesn’t make you an expert in the area (it isn’t even technically possible – you can’t cram into your brain in a few minutes all the knowledge that someone else absorbed gradually over years).

Expertise isn’t just a set of information. It’s experience, memory, intuition and ability to put things in context. When AI just serves something to you, your brain often doesn’t even really store it – you capture the result, but not the path to it. But when you learn yourself, try, make mistakes, tune and think about it, memory and skill are stored much deeper. It’s the principle like with programming – you can have AI generate code for you, but if you don’t understand it, you’re no better programmer. You won’t remember procedures, you won’t create mental models and next time you’re back at the beginning. Quick information is not the same as acquired knowledge. And acquisition – not copying – is what makes an expert.

AI can be an excellent partner for you. But only if you control it, not it you.

And now let’s talk about how to correctly control AI so that it sends back at least somewhat usable results.

How to get better and more accurate answers from AI?

Step 1: Choose the right tool for the right task and also determine whether I really need AI for it

Different tasks need different tools. ChatGPT is not a universal solution, even if it seems that way. If you use it for the wrong type of task, you’ll get bad results. Simple.

See the notes on tools above – you gain this knowledge only by using those tools daily and pushing them to their limits. Only then will you learn when they’re suitable, when it’s better to phrase a task differently, and when it makes no sense to solve something through AI at all, because writing a sufficiently good prompt would cost you far more time than completing the task yourself.

Another way to make your work with AI models more efficient is NotebookLM, which is designed for working with your own materials – contracts, PDFs, presentations, corporate documents or study materials.

Unlike regular chatbots, it isn’t dependent only on “model memory.” NotebookLM is grounded in the specific sources you upload – it reads them, analyzes them and answers according to them rather than guessing. It uses only content you give it, so you keep control over where the AI draws its information from. This is essential for confidential documents or internal materials. And above all, it significantly reduces the risk of hallucinations. In addition, NotebookLM can create summaries, study materials, presentation materials, briefings or question-and-answer sets directly from the sources you upload (PDFs, Docs, texts, notes, research), which again makes your desk work that much more efficient.

If you need to minimize the risk of hallucinations and have your own sources available, the best choice is NotebookLM – it works directly with uploaded content, so answers are built on actual data, not guesswork.

When you don’t have sources and need to find them first, Perplexity works excellently. It’s fast, transparently provides links, and its information can be easily verified. Although it can also hallucinate, the cited sources make checking significantly simpler. Its Deep Research mode typically takes only 2-3 minutes and, instead of unnecessary length, emphasizes a large number of relevant sources and the connections between them.

On the other hand, even with established models like ChatGPT or Gemini, you can get a perfectly written long text that ultimately doesn’t answer the question precisely. The quality of sources and the verifiability of information therefore matter more than the poetry or length of the output.

Step 2: The simplest and at the same time longest path – just ask

You open ChatGPT, write a question and wait for an answer. This is what most people do. And that’s precisely why they get bad results. When you just write a question without further instructions, ChatGPT automatically uses a fast model, the so-called Instant version. It’s swift but very imprecise and has a huge tendency to make things up.

So watch out for that.

On the other hand, for most simple queries this may be enough. You simply don’t have time to write detailed prompts for every little thing, especially when you roughly know what the correct result should be. Again, it comes down to your own judgment: when I know I’m about to tackle a more complex or technical query, I spend more time preparing the prompt, and vice versa – for simple queries I can throw in a quick question, but then I have to expect a correspondingly weak answer. Let’s be honest, you can get one even with a more detailed prompt, because frankly no AI model has great memory yet, so many prompts will simply cost you time anyway.

But such a query – without context, without a role, without rules – is an excellent recipe for hallucinations (meaning you’ll get made-up, untrue answers).

Better is to give the AI model instructions plus a role (roleplay). This step alone improves answer quality dramatically, as data from several studies confirm. It’s enough to assign the AI a specific expert role with a detailed description.

Some current studies show that simulating multiple expert roles significantly improves the reliability, safety and usefulness of AI answers (in one study, the probability of truthful information from the AI increased by 8.69%). You can also find the basics in the article: Effective Prompts for AI: The Essentials.

This is, incidentally, what most users do half-subconsciously when they get a poor answer. Simple but effective – you just write a command like:

You are an expert/specialist in… and at the same time you’re a professor from Harvard, and on top of that you write the output as a journalist, so the text is understandable even for a layperson, etc.

For a deeper understanding of how LLM models work, the article Unleashing the potential of prompt engineering for large language models can help (in English).

You’ll get even more accurate and reliable results if you activate the option to use the internet.

The model then doesn’t rely only on training data with a fixed cutoff date, but can verify and supplement information with the current state as of today. This is crucial especially for topics that change quickly – for example, legislation, grants, the energy market, technology or economic data (or really almost always, because you ideally want the most current data).

Step 3: Activate “Thinking” mode for deeper and more accurate answers, or Deep Research

You’ll get much more accurate results when you activate “Thinking” mode (in ChatGPT marked as the “Thinking” option).

This mode belongs to the newest versions of the GPT-5 model, which have built-in “thinking” – deeper logical steps and longer analysis. The result is that answers can be higher quality and more professional.

However, you need to count on the answer taking significantly longer than in the fast “Instant” mode. Use Thinking mode when quality matters more to you than speed – for example, for demanding professional queries, research, technical topics, legislation or financial decisions.

And where is Thinking mode turned on in ChatGPT?

Thinking mode - ChatGPT

A level higher still is the agentic “Deep Research” mode.

It’s not just about a “smarter answer,” but about a controlled, multi-step procedure.

The AI plans the work itself, systematically goes through relevant web sources and your materials, continuously evaluates the quality of its findings, compares claims across sources and compiles them into a coherent report with a clear structure and citations.

The result is typically an extensive report – easily around 15 pages, with dozens of links and tables or graphs – that’s ready for export to PDF and immediate handover to colleagues or clients. It makes sense to turn it on when you need maximum accuracy and verifiability – for example, for legislative research, technical comparisons, investment materials, due diligence, market analyses or complex strategies.

The price for such depth is longer processing time and resource intensity – but when quality is what matters, “Deep Research” today represents the peak of what AI can offer.

If you want to get the most accurate output, write the task as specifically as possible (purpose, scope, audience, required format, comparison metrics, excluded sources) and add quality criteria – for example:

Compare at least 8 sources, state selection methodology, separate “findings” from “interpretation” and attach list of risks and unknowns.

This way you’ll get a report that cuts closer to the bone of the problem and isn’t just a compilation of links.

The drawback is that this method is quite impractical time-wise: you wait a long time, and you don’t always get the answer you need. (Personally, I’ve found it useful to read up on the topic a bit while the model is crunching, and then compare what I learned with what ChatGPT produces.)

And where is the agentic “Deep Research” mode turned on in ChatGPT?

Deep Research in ChatGPT

Instead of repeating the same instructions in every chat, use a smarter approach – ask the AI: “What information do you need to best answer <your query>?”

Even better is to invest a bit of time in setting up custom instructions that will apply to all conversations. And the most efficient solution for recurring tasks is to create your own specialized GPT. Deep Research is really the best current AI function for complex searching and analysis.

But even so, the iron rule applies – always verify everything. Not even the most advanced AI is one hundred percent reliable. We’ve already explained why several times above: it’s still just a model working on the basis of probability.

Customer Acquisition Cost (abbreviated as CAC)

CAC

Customer Acquisition Cost (abbreviated as CAC) is a key financial metric that expresses the average cost it takes a company to acquire one new customer – that is, all expenses incurred on marketing, advertising and sales, divided by the number of newly acquired customers in a given period.

The goal of the CAC metric is to measure the efficiency of acquisition activities and determine whether the costs of acquiring customers are proportionate to their long-term value (LTV – Lifetime Value).

What is CAC used for?

CAC helps companies determine how efficiently they use their marketing and sales budget.

It’s most commonly used when:

  • evaluating return on investment in marketing and advertising,
  • comparing the performance of individual campaigns or channels,
  • setting acquisition goals and budgets,
  • determining optimal product or service pricing,
  • analyzing business profitability and scalability.

How is CAC calculated?

The basic formula for calculating CAC is simple:

CAC - Customer Acquisition Cost - formula

CAC = Total marketing and sales costs / Number of new customers

For example, if a company spends 500,000 CZK on marketing and sales activities during a quarter and acquires 1,000 new customers, its CAC is 500 CZK per customer.
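As a sketch, the formula and the example above translate directly into a few lines of Python (the function name and the guard against zero customers are illustrative, not part of the article):

```python
def cac(total_costs: float, new_customers: int) -> float:
    """Customer Acquisition Cost = total marketing and sales costs / new customers."""
    if new_customers <= 0:
        raise ValueError("CAC is undefined without at least one new customer")
    return total_costs / new_customers

# The example from the text: 500,000 CZK spent, 1,000 new customers acquired
print(cac(500_000, 1_000))  # -> 500.0 (CZK per customer)
```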

What all is included in CAC?

The calculation includes all direct and indirect costs associated with customer acquisition:

  • advertising costs (online campaigns, TV, outdoor, print, etc.),
  • salaries of salespeople, marketing team and external agencies,
  • commissions and bonuses for closing deals,
  • technology and tools for CRM, emailing, analytics,
  • production and operational costs for campaigns and lead generation.

Why is CAC important?

CAC is a crucial metric for managing growth and profitability.

It shows how expensive it is to acquire a new customer and whether this process pays off. It helps to better assess:

  • the efficiency of marketing and sales channels,
  • sustainability of the growth model,
  • when it’s appropriate to scale investments in acquisition or conversely increase emphasis on retention,
  • the optimal ratio between acquisition costs and customer value.

Relationship between CAC and LTV (Customer Lifetime Value)

The CAC value alone is not meaningful unless it’s assessed in relation to how much the company actually earns from the customer over time.

Therefore, in practice it’s always compared with the LTV (Customer Lifetime Value) metric – that is, with the total value that a customer brings to the company during their entire “lifetime” (for example, during the subscription period or average cooperation length).

If CAC is high but customers simultaneously have high LTV, the acquisition strategy can still be healthy. Conversely, low CAC may not be a success if customers leave quickly and their LTV is low.

The point is for investments in customer acquisition to pay off in the long term. For this reason, the LTV / CAC ratio is monitored, which helps determine the efficiency of acquisition strategy.

  • LTV / CAC > 3 – healthy ratio: the customer brings the company at least triple the value compared to what their acquisition cost,
  • LTV / CAC ≈ 1 – acquisition model is on the edge of profitability,
  • LTV / CAC < 1 – the company spends more on acquiring a customer than it earns from them.
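The bands above can be expressed as a small Python sketch (the helper names and the use of >= at the boundaries are my assumptions, since the article only gives approximate thresholds):

```python
def ltv_cac_ratio(ltv: float, cac: float) -> float:
    """How many times the customer's lifetime value covers the acquisition cost."""
    return ltv / cac

def assess(ratio: float) -> str:
    # Bands taken from the list above; exact boundary handling is illustrative
    if ratio >= 3:
        return "healthy"
    if ratio >= 1:
        return "on the edge of profitability"
    return "acquisition costs more than the customer brings in"

print(assess(ltv_cac_ratio(1_500, 500)))  # ratio 3.0 -> "healthy"
print(assess(ltv_cac_ratio(400, 500)))    # ratio 0.8 -> below 1
```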

The relationship between CAC and LTV is thus one of the most important indicators for growth sustainability, profitability and efficient budget allocation between customer acquisition and retention.

What to watch out for with the CAC metric

When working with CAC, it’s important to consider context and time perspective:

  • CAC can differ by channel – performance marketing has different costs than direct sales,
  • short-term higher CAC may be fine if LTV or customer retention increases long-term,
  • it’s advisable to calculate CAC separately for new and returning customers,
  • during rapid scaling, it’s necessary to monitor whether CAC is not increasing faster than revenues and margins.

Related metrics

  • LTV (Lifetime Value) – customer lifetime value,
  • LTV/CAC Ratio – ratio between customer value and their acquisition cost,
  • Churn Rate – customer departure rate, which directly affects LTV and the derived LTV/CAC ratio.

Year on Year (abbreviated as YoY)

YoY

Year on Year (abbreviated as YoY) is a comparative metric used in analytics to evaluate economic, financial, or operational indicators between two identical periods in different years – typically between full calendar years or matching months.

The goal is to capture long-term trends without being distorted by short-term seasonality and to understand the real performance trajectory of a company, market, or sector.

What It’s Used For

YoY analysis is used to identify annual changes – for example, when:

  • assessing growth or decline in revenue, profit, or margins,
  • tracking website traffic, customer demand, or sales performance,
  • analyzing macroeconomic indicators – such as inflation, GDP, or average wages,
  • monitoring industrial and energy performance – including production, consumption, and capacity utilization,
  • reporting corporate results and evaluating long-term strategic outcomes.

How It’s Expressed

Year-on-year changes are typically expressed as percentages.

The notation is usually written as:

+3.1% YoY or -2.4% y/y

This indicates how much a specific metric increased or decreased compared with the same period in the previous year.

Example

Coca-Cola reported net profit growth of +3.1% YoY in 2025.

This means the company’s profit was 3.1% higher than during the same period in 2024 – if it earned USD 10 billion in 2024, it reached roughly USD 10.31 billion in 2025.
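The calculation behind this example can be sketched in Python (the function name is illustrative):

```python
def yoy_change(current: float, previous: float) -> float:
    """Year-on-year percentage change between the same periods of two years."""
    return (current - previous) / previous * 100

# The example from the text: USD 10 billion in 2024 vs. roughly 10.31 billion in 2025
print(round(yoy_change(10.31e9, 10e9), 1))  # -> 3.1
```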

Year-on-year comparison therefore reflects the company’s real growth, not a temporary seasonal fluctuation driven by, for example, stronger summer or holiday sales.

Why YoY Comparison Matters

YoY is one of the core metrics in business reporting and performance evaluation because it reveals the real underlying trend.

While month-on-month (MoM) or quarter-on-quarter (QoQ) changes can be affected by temporary market conditions, promotions, or weather patterns, YoY results show whether a company is genuinely growing, stagnating, or declining over time.

By using YoY data, companies understand:

  • the effectiveness of strategic and investment decisions,
  • the stability and sustainability of growth,
  • the evolution of profitability and key business indicators,
  • the performance of individual divisions, products, or markets across years.

As such, YoY analysis is an essential component of any financial report, investor presentation, or management dashboard.

Difference Compared with Other Metrics

  • MoM (Month on Month) – compares performance between consecutive months; useful for short-term trend tracking but heavily influenced by seasonality.
  • QoQ (Quarter on Quarter) – compares data between quarters; often used in corporate reporting to measure quarterly progress.
  • YoY (Year on Year) – compares the same period across years; provides a broader and more stable view of long-term performance.

Common Pitfalls When Using YoY

To ensure meaningful results, YoY comparisons must always be based on identical and closed periods (e.g., January–December 2025 vs. January–December 2024). It’s also critical to consider any changes in accounting standards, reporting structures, or business models that might distort the comparison.

Only consistent data and like-for-like periods provide a reliable foundation for strategic decisions, budgeting, and forecasting.

Advanced Uses and Interpretation

Beyond basic financial metrics, YoY analysis is widely applied across multiple domains:

  • Marketing & e-commerce: tracking YoY growth in organic traffic, conversion rate, or customer retention helps identify sustainable acquisition trends.
  • Energy & industry: measuring YoY production or consumption reveals the impact of efficiency measures or demand fluctuations.
  • Finance & investment: YoY return comparisons allow investors to evaluate performance stability and risk exposure over time.
  • Public policy & macroeconomics: YoY inflation or wage growth data reflect economic health and purchasing power changes in real terms.

Year-on-year analysis is more than a numerical comparison – it’s a diagnostic tool that filters out short-term volatility to expose long-term direction. Used correctly, YoY metrics help companies and analysts make informed, evidence-based decisions about investment, growth strategy, and operational efficiency.

Month on Month (abbreviated as MoM)

MoM

Month on Month (abbreviated as MoM) is a comparative metric used in analysis to compare economic, financial, or operational indicators between two consecutive months—that is, between the current month and the previous month.

The goal is to capture short-term changes and trends that signal the immediate development of a company’s performance, sales, demand, or the effectiveness of marketing activities.

What it’s used for

To evaluate short-term movements—for example, in:

  • tracking monthly growth or decline in revenue, profit, or margins,
  • analyzing website traffic or conversion development,
  • monitoring the development of costs, productivity, or inventory turnover,
  • monitoring the performance of advertising campaigns and changes in demand,
  • operational financial reporting and cash flow management.

How is MoM calculated (Month on Month formula)

The calculation of month-over-month change (MoM) is straightforward and based on a simple comparison of values from two consecutive months.

Formula for calculating MoM:

Formula for calculating MoM

MoM (%) = ((Current month value - Previous month value) / Previous month value) × 100

Calculation example:

A company had revenue of $500,000 in February and $540,000 in March.

MoM = ((540,000 - 500,000) / 500,000) × 100 = (40,000 / 500,000) × 100 = 0.08 × 100 = 8%

Revenue thus increased month-over-month by +8% MoM.

If, on the other hand, March revenue dropped to $475,000:

MoM = ((475,000 - 500,000) / 500,000) × 100 = (-25,000 / 500,000) × 100 = -0.05 × 100 = -5%

Revenue would thus decrease by -5% MoM.
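Both worked examples map directly onto a small Python helper (the function name is illustrative):

```python
def mom_change(current: float, previous: float) -> float:
    """Month-on-month percentage change between two consecutive months."""
    return (current - previous) / previous * 100

print(mom_change(540_000, 500_000))  # February 500k -> March 540k: +8.0% MoM
print(mom_change(475_000, 500_000))  # February 500k -> March 475k: -5.0% MoM
```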

How is Month on Month interpreted

Month-over-month changes are typically expressed as percentages.

The notation looks like:

+5.4% MoM or -1.8% m/m

This notation shows by what percentage the indicator’s value increased or decreased compared to the previous month.

Example

Netflix recorded an increase in new subscribers of +5.4% MoM in March 2025.

This means that in March it gained 5.4% more customers than in February 2025 – so if 1 million new users were added in February, there were approximately 1.054 million in March.

Such a comparison helps quickly assess whether the company is growing or facing a short-term decline in demand, and allows for timely response to market changes.

Why month-over-month comparison is important

MoM is crucial for operational management and reporting because it enables tracking of development dynamics in a short period.

While year-over-year comparison (YoY) shows long-term trends, MoM provides a current view of performance and reveals rapid shifts in data.

It helps to better assess:

  • the effectiveness of short-term marketing and sales activities,
  • the immediate impact of price changes, discounts, or promotions,
  • the actual dynamics of cash flow and sales channels,
  • short-term fluctuations caused by seasonality or external factors.

This makes MoM an indispensable tool in every monthly financial or marketing report.

Difference from other metrics

  • MoM (Month on Month) – month-over-month comparison that tracks rapid changes and serves as an early indicator of development (the metric described throughout this article).
  • QoQ (Quarter on Quarter) – quarter-over-quarter comparison, suitable for evaluating quarterly results.
  • YoY (Year on Year) – year-over-year comparison that shows long-term trends and eliminates seasonality.
  • CAGR (Compound Annual Growth Rate) – accounts for an entire multi-year period and compound interest, providing the most accurate view of long-term trends.

What to watch out for with the MoM metric

When interpreting month-over-month results, it’s important to consider the influence of seasonality, holidays, vacations, or extraordinary events that may temporarily affect the outcome. The MoM metric should therefore always be supplemented with year-over-year (YoY) comparison to distinguish whether it’s a permanent trend or just a temporary deviation.


The Skype Effect: A Revolution That Changed Communication Forever

When historians look back on the early twenty-first century, they may well argue that one of the most disruptive inventions was not a rocket, not a microchip, nor a dazzling piece of artificial intelligence, but a humble piece of software created in Tallinn, Estonia, in 2003. That software, called Skype, made free voice and video calls possible across continents — and in doing so, it fundamentally altered the way humans communicate.

This transformation is often described as “the Skype Effect” – a phrase that captures both the company’s rise and the seismic impact it had on global society, business, and technology. Companies like Skype are a huge deal for a small country: Skype changed Estonia’s entire infrastructure and had an enormous impact on its ecosystem.

A New Voice for the Internet Age

Before Skype, international calls were the preserve of the wealthy or the desperate. Throughout the 1990s and early 2000s, long-distance phone rates were brutally expensive. A single call from Paris to New York could cost several dollars per minute. For migrant workers, students studying abroad, and globally dispersed families, the simple act of talking to loved ones was rationed. Companies, too, faced staggering communication costs, with international telephony bills devouring budgets. Skype tore that system down. Using peer-to-peer (P2P) architecture, it allowed calls to bypass the centralised and expensive switching systems of telecom companies.

Suddenly, anyone with a computer and an internet connection could talk to anyone else in the world — for free.

The cultural shock was immediate. What email had done to letters, Skype now did to spoken communication. It normalised the idea that long-distance talk should not come with a meter ticking in the background. Families separated by continents spoke daily instead of monthly. Entrepreneurs could negotiate contracts across borders without stepping on a plane.

Soldiers deployed abroad could hear their children’s voices at bedtime.

“Skype me” entered the vocabulary as a shorthand for closeness at a distance.

Skype logo

Birth in a Baltic Nation – Estonia

The story began not in California, but in Northern Europe.

The Swedish entrepreneur Niklas Zennström and his Danish partner Janus Friis had already challenged established industries once with their file-sharing service Kazaa, which disrupted the global music business and drew the wrath of record labels. In Tallinn they found a team of gifted Estonian engineers — Ahti Heinla, Priit Kasesalu, Jaan Tallinn and Toivo Annus — who had honed their skills in the austere conditions of the post-Soviet 1990s, when hardware was scarce and improvisation a necessity. This combination of entrepreneurial boldness and engineering ingenuity proved catalytic. They saw an opportunity: if peer-to-peer networks could upend the music industry, why not apply them to voice communication? International telephony was still one of the most profitable businesses on earth, tightly controlled by national carriers.

The timing was perfect.

Broadband penetration was accelerating, webcams and microphones were becoming standard, and the world was hungry for cheaper connectivity. Estonia itself was undergoing a metamorphosis. After regaining independence in 1991, the small Baltic republic made a strategic decision to bet on digital transformation. Lacking natural resources and with a population of just 1.3 million, it invested heavily in connectivity and IT education.

By the late 1990s, Estonia was one of the first nations in the world to introduce online tax filing, digital identity cards and even internet voting. A generation of young engineers grew up with both necessity and ambition: necessity, because Soviet-era infrastructure was outdated; ambition, because independence demanded new paths to prosperity.

Skype became the crown jewel of this experiment.

It was not only Estonia’s first global brand, but also a vindication of the country’s belief that it could leapfrog its past by embracing technology. Internationally, Skype’s success inspired the phrase e-Estonia – shorthand for a state that had made digital governance, internet access and start-up culture part of its national DNA.

Within Estonia, it became a source of pride, the proof that even a small post-Soviet nation could give the world a product used by hundreds of millions.

At the same time, the company’s cross-border nature was crucial. Zennström and Friis brought the entrepreneurial daring, the Estonians delivered the technical brilliance, and investors from Europe and the United States soon followed. Skype was therefore never a purely local success: it was a symbol of what could happen when global capital met Baltic ingenuity at just the right historical moment.

Viral Growth and Everyday Miracles

Skype was released in August 2003. By the end of that year, it had a million users; by 2006, over 100 million. Adoption was viral, spreading through migrant communities, university dormitories, and small businesses that found themselves liberated from the tyranny of phone bills.

The stories were personal. A Filipino nurse in Riyadh could talk to her family in Manila every evening without worrying about cost. An Indian start-up could pitch to a London venture capitalist without buying a plane ticket. Aid workers in Africa could co-ordinate with colleagues in Geneva in real time. Skype was more than software: it was infrastructure for human connection.

And then came video. In 2005, Skype added free video calling, a function that fundamentally changed expectations.

Now long-distance communication wasn’t just a voice; it was a face.

Parents saw their children’s expressions, couples in long-distance relationships could dine “together” over webcams, and global offices experimented with early forms of virtual meetings. What had once been the preserve of television studios was suddenly free on a home computer.

Scaling Up: From Start-up to Global Player

Behind the scenes, the company was growing at breakneck speed. What had begun as a small team in Tallinn and Luxembourg suddenly became a global operation. By 2005, Skype employed hundreds of engineers, marketers and support staff. The infrastructure that underpinned the service had to expand almost weekly to handle the surge in traffic. The peer-to-peer model was efficient, but the rapid uptake required constant refinement, bug fixes, and a scaling strategy that few start-ups had ever attempted before.

Unlike many dot-com ventures of the early 2000s, Skype had an obvious business model from the outset. While calls between users were free, the company introduced “SkypeOut” — a paid service that allowed users to dial regular landlines and mobile phones at far cheaper rates than traditional carriers.

Revenue climbed quickly, proving that free communication could coexist with a sustainable profit engine.

Estonia, often overlooked on the global stage, suddenly had a unicorn – one of Europe’s earliest billion-dollar tech companies. The “Skype mafia”, as the original engineers and employees came to be known, later reinvested their wealth and expertise into new ventures. Companies such as TransferWise (now Wise), Bolt, and Pipedrive trace their origins to alumni of Skype.

This growth did not go unnoticed.

Telecom operators, threatened by the collapse of their lucrative long-distance business, began lobbying governments to regulate or even restrict Skype’s services. In some countries, carriers tried to block the software on their networks. But the genie was out of the bottle. Consumers had tasted free communication, and there was no turning back.

The eBay Years

In September 2005, just two years after its launch, eBay announced it would acquire Skype for $2.6 billion. The deal stunned the business world. At the time, it was one of the largest acquisitions of a European tech company. eBay’s logic was straightforward: its marketplace relied on trust between buyers and sellers, and executives believed that real-time communication could reinforce that trust.

In practice, however, the fit was awkward. Shoppers did not want to phone one another; they wanted secure transactions. While Skype continued to grow in popularity, it never became the connective tissue of eBay’s ecosystem as envisioned. Within a few years, the mismatch became apparent, and eBay began looking for a way out.

Yet the eBay years were not wasted. They gave Skype access to global resources, expanded its brand presence, and strengthened its infrastructure. By the late 2000s, Skype had hundreds of millions of registered users and had become synonymous with internet telephony.

Investor Takeover and a New Skype Chapter

In 2009, a group led by Silver Lake Partners acquired a majority stake in Skype, valuing the company at $2.75 billion. This marked a turning point. The new owners were focused on sharpening Skype’s profitability and preparing it for a potential public offering. Under their stewardship, Skype improved its mobile apps, expanded into emerging markets, and explored integration with television sets and handheld devices.

By this stage, Skype was not just a consumer tool but a platform with strategic importance. It was being used by multinationals for internal communication, by journalists to conduct remote interviews, and by NGOs in crisis zones. Few technologies had embedded themselves so deeply, so quickly, into both everyday life and professional practice.

Microsoft’s Bold Bet

The next chapter came in 2011, when Microsoft purchased Skype for $8.5 billion, its largest acquisition to date.

For Microsoft, the deal was strategic: the company was eager to modernize its communications portfolio and compete with Apple’s FaceTime and Google’s growing voice and video services. Skype was integrated into a wide range of Microsoft products — Outlook, Office, Windows, Xbox – and positioned as both a consumer and enterprise tool.

“Skype for Business” was launched, aiming squarely at the corporate communications market dominated by Cisco and other conferencing providers. For several years, this strategy appeared to work. Skype became the de facto tool for online interviews, for remote business calls, and even for televised events.

Heads of state used it to appear virtually at conferences. Universities embedded it into their distance learning programmes. The Skype ringtone — that simple, cascading melody — became one of the most recognisable sounds of the digital age.

The Smartphone Challenge

By the mid-2010s, Skype faced a new reality. The world was no longer defined by desktop computers and broadband modems, but by smartphones and mobile data.

WhatsApp, Facebook Messenger, WeChat and Apple’s FaceTime were native to the mobile environment, offering seamless integration with phone contacts, address books and operating systems. Skype, by contrast, had been built for an earlier age. Its peer-to-peer architecture, revolutionary in 2003, became a liability on handheld devices.

Maintaining constant connections consumed battery life, drained processing power and struggled with patchy mobile data networks. Users began to notice lag, call drops and clunky performance, especially compared to lightweight competitors. While Microsoft attempted to shift Skype towards a cloud-based model, the transition was slow and technically complex.

At the same time, the very expectation Skype had created — that calls and video should be free — was now industry standard. Competitors could copy the core function without needing to replicate its entire infrastructure. For the first time since its birth, Skype was no longer synonymous with internet calling.

Competition on All Fronts

The 2010s were an era of intense competition. WhatsApp, acquired by Facebook in 2014 for $19 billion, began rolling out voice and video calling to its vast user base. Apple integrated FaceTime deeply into iOS, making video calls frictionless for iPhone users.

In China, WeChat evolved into a super-app, with communication just one part of its ecosystem.

Skype, once the pioneer, now appeared dated.

Its user interface struggled to adapt to the minimalism of modern app design. Attempts to reinvent the product — with chatbots, new layouts, even Snapchat-like features — alienated long-time users without attracting a younger generation. Despite its immense brand recognition, Skype was beginning to feel like a legacy product: respected, widely known, but no longer central to the cutting edge of digital communication.

The Rise of Microsoft Teams

Within Microsoft itself, strategic winds were shifting. In 2017, the company launched Microsoft Teams as part of its Office 365 suite. Designed for enterprise collaboration, Teams integrated chat, file sharing, scheduling and, crucially, video conferencing. It was a direct competitor to Slack, but also to Skype for Business — Microsoft’s own product.

Gradually, Microsoft began positioning Teams as the future and Skype for Business as a product to be phased out. By 2021, Skype for Business was officially retired, its features folded into Teams. For corporate users, the transition was clear: Teams was the platform of choice. Skype, once at the forefront of professional communication, was sidelined.

A Missed Moment: The Pandemic

Then came 2020. When the COVID-19 pandemic forced billions into lockdown, video communication became a lifeline. Schools went online, offices migrated to home setups, families and friends turned to screens for contact. It was, in effect, the moment Skype had been built for.

Yet it was Zoom – a relative newcomer – that captured the zeitgeist.

With its intuitive interface, easy meeting links and reliable performance, Zoom became the verb of the pandemic age: people did not Skype into class or FaceTime the office. They simply Zoomed.

For Skype, it was a bitter irony.

The pioneer of internet voice and video calling, the platform that had normalised digital presence, was largely absent from the headlines at the very moment its founding vision had become a global necessity.

The Phasing Out of Skype

The pandemic was not merely a missed opportunity for Skype; it was the turning point that revealed how far the platform had fallen behind. Once celebrated for its simplicity, Skype had become cumbersome.

The interface was cluttered, the login process unreliable, and its performance lagged behind competitors built natively for smartphones and the cloud. For users juggling work, school and family life under lockdown, the choice was obvious: they turned to Zoom, WhatsApp or FaceTime, leaving Skype on the sidelines.

Inside Microsoft, executives had already reached the conclusion that Skype was no longer worth defending as a frontline product. Since 2017, the company had poured its energy into Teams, a platform designed to be more than just a communication tool. Teams promised integrated chat, calendars, file sharing and video conferencing in one package — and crucially, it fit seamlessly into Microsoft’s Office 365 ecosystem.

The more users adopted Teams, the less justification remained for Skype. The pandemic only accelerated this transition: while Zoom captured the public imagination, Teams became the default tool for companies and institutions, leaving Skype squeezed between irrelevance and obsolescence.

Why Microsoft Let Skype Fade

The end of Skype was the result of both technical realities and strategic choices. The technical side was clear: Skype’s original peer-to-peer architecture, so brilliant in 2003, had become a burden in the smartphone age. Although Microsoft had tried to rebuild it on cloud infrastructure, the app never shed its reputation for instability and heavy resource use. Video calls drained battery life, notifications failed to sync smoothly across devices, and the experience felt clunky next to lighter, mobile-first alternatives.

But the deeper issue was cultural. Skype had lost its place in the digital zeitgeist. In the mid-2000s, it was a verb: to Skype was to collapse distance, to bring people together across borders.

By the late 2010s, that linguistic crown had slipped to others.

Teenagers video-chatted on FaceTime, families called on WhatsApp, offices scheduled Zoom meetings.

Skype was still present, but it no longer defined the moment.

To younger generations, it felt like an app their parents once used, not the future of communication.

On the strategic side, Microsoft’s pivot was decisive. The company understood that its greatest strength lay in the enterprise market, where integrated platforms could lock in entire organisations. Every resource poured into Skype risked duplicating what Teams was already doing better. By prioritising Teams, Microsoft could focus its branding, development and marketing on a single platform. Skype, once a flagship acquisition, became an internal redundancy.

The End of an Era

On 5 May 2025, the story finally closed. Microsoft formally retired Skype after 22 years of service. Users were invited to migrate their accounts, contacts and chat history to Teams. The official Skype website redirected visitors to Teams, the mobile apps disappeared from app stores, and the iconic ringtone slipped into memory.

For those who had once relied on it, the shutdown was a poignant moment. It marked the passing of a cultural touchstone — a tool that had carried families across borders, enabled long-distance love stories, powered NGOs in crisis zones and disrupted an entire industry. Skype had forced telecoms to abandon the economics of distance, proved that video communication could be free and universal, and turned Estonia into a symbol of digital innovation. Yet in the end, the very forces it unleashed — mobile-first design, cloud-based collaboration, the expectation of constant connectivity — left it behind.

The Skype Effect remains, even without Skype. Every free international call, every remote lecture, every board meeting conducted online is a living fragment of its legacy. The platform itself may be gone, but its revolution is permanent.

And perhaps that is the most sobering lesson: even the most groundbreaking projects can fade. Innovation alone does not guarantee survival. Market shifts, strategic decisions, and cultural momentum can overtake even the pioneers.

Skype’s story is both an inspiration and a warning – proof that changing the world does not always mean you will remain at its centre.