Save scumming – what is it?

Save scumming is one of the most common habits in single-player games. You save before a risky moment, the outcome goes badly, you reload, and you try again. Some players call it cheating. Others call it time management. In reality, it is mostly about control: control over randomness, consequences, and wasted time.

What save scumming means

Save scumming is the practice of saving the game before an uncertain situation and reloading if the result is not what you wanted. It is not tied to one genre – it shows up anywhere a single outcome can change your run, your story, or your mood.

  • RNG reroll – saving before a chance-based event and reloading until you get a better roll.
  • Perfect outcome chasing – repeating scenes to avoid losses, deaths, or negative story consequences.
  • Undoing mistakes – reloading after a misclick, wrong button, or misunderstanding.
  • Damage control – reloading to fix bugs, crashes, or broken quest logic.

Why games invite it

Save scumming becomes tempting when three ingredients come together: uncertainty, high stakes, and long recovery time. The more time the game asks you to replay after a failure, the more likely players are to reload instead of accepting the result.

  • Uncertainty – hidden information, unpredictable AI, or heavy randomness.
  • High stakes – permadeath, reputation loss, rare loot, failed quests, or irreversible choices.
  • Long recovery time – big checkpoints, long fights, or lengthy story sections with no quick restart.

Why players do it

Most people are not save scumming because they want an easy win. They do it because they want a fair experience, or simply do not want to lose an evening to bad luck.

  • Time is limited – for many players, gaming is squeezed between work, commuting, family, and other responsibilities. Reloading can feel like the sensible option when one unlucky outcome costs 30 to 60 minutes.
  • Random failure can feel unfair – players tend to accept failure when it is clearly their fault. They get frustrated when a good plan fails because a number generator says no. Reloading becomes a way to push back against randomness.
  • People want a specific story – in narrative games, one choice can lock or unlock entire arcs. Some players reload to protect the story they want: a relationship path, a companion outcome, or a particular ending.
  • It can be a learning tool – reloading is also practice. You test an approach, see the result, and try again with a different tactic. That is not always avoidance – sometimes it is training.
  • Sometimes it is just fixing the game – misclicks happen. Bugs happen. Crashes happen. Reloading is often the only solution, and few players feel guilty about it.

Is it cheating?

It depends on the context and the expectations.

  • Single-player – usually it is a personal choice. If you are not competing and you are enjoying yourself, you are not harming anyone.
  • Challenge modes and competitive rules – in ironman modes, no-reload runs, speedruns, or leaderboard play, reloading breaks the point of the challenge.

A better question than whether it is cheating is whether it improves your experience, or traps you in perfectionism.

The hidden downside – it can flatten the game

Save scumming works, but it can quietly remove tension. When you know you can always reload, choices can lose weight. Risk becomes optional. Surprises become problems to correct instead of moments to live through.

  • Choices can feel less meaningful because nothing is final.
  • Risk and tension drop, especially in story-heavy games.
  • You may start chasing perfect outcomes instead of memorable ones.

Many great gaming stories come from messy results: a plan goes wrong, you adapt, and the run becomes yours. Save scumming can erase those moments if it becomes automatic.

When save scumming is reasonable

Reloading makes practical sense in plenty of situations. Here are common cases where most players would call it fair.

  • The game gave unclear information and you could not make an informed choice.
  • A misclick or UI mistake caused a major loss.
  • The outcome is mostly RNG and the punishment feels disproportionate.
  • You are correcting a bug, crash, or broken quest state.
  • You are roleplaying and want your character to act consistently.

When it might be worth resisting

If you want stronger tension and a more natural story, consider accepting outcomes in these situations:

  • The failure is small and leads to interesting consequences.
  • The game is designed around failure and adaptation.
  • You are reloading purely out of anxiety, not because the outcome is truly unfair.

Practical compromises – rules you can actually follow

You do not have to choose between reloading everything and reloading nothing. Many players find a middle path that keeps the stakes without turning the game into a chore.

  • Bug-only rule – reload only for crashes, bugs, and misclicks.
  • One reload per mission – allow a single redo on major encounters.
  • No dialogue reloads – accept story choices, reload only for gameplay mistakes.
  • RNG exception – reload only when a major loss happens due to pure luck.
  • Checkpoint save only – keep a chapter save, avoid constant quick reload.

So is save scumming bad? 🙂

Save scumming is not a sin. It is a tool. Sometimes it protects your time, sometimes it fixes bad luck, and sometimes it helps you shape the story you want. But if it turns every moment into a reroll until perfect, it can drain the surprise and tension that makes games special.

  • Use it when it protects your time or fixes unfair situations.
  • Resist it when failure is interesting and part of the game’s design.
  • If you feel stuck in reloading, set simple limits and move on.

Season pass – what is it?

A season pass does not mean you only own the game for one year. A season pass is normally a DLC bundle – you pay once and you receive a set of extra content drops as they are released. What confuses people is the word season. In gaming, it often refers to a release plan (for example, Year 1 content) – not a time limit on your access to the base game.

Your questions – answered clearly

  • Does a season pass mean I own the game for one year only? No – it is not a rental. It is typically a one-time purchase for DLC.
  • Do I keep the base game forever? Usually yes in practical terms – but on digital platforms you are buying a license to use the game, not legal ownership of a copy.
  • Do I keep the DLC forever? If the DLC is part of the season pass, you normally keep access to that DLC permanently as well – it does not expire just because Year 1 ends.
  • Why do people say you do not own games on Steam? Because Steam’s terms state the content and services are licensed, not sold – meaning you get usage rights, not title/ownership.

What a season pass actually gives you

A season pass is usually sold as a discounted package for current and future DLC tied to a specific game.

  • You get access to DLC covered by that pass – often story expansions, missions, characters, or cosmetic packs.
  • The DLC arrives over time – you do not necessarily get everything on day one.
  • The pass covers only what is listed for that pass – it is not automatically “all future content”.

Why some passes mention Year 1

Many publishers use labels like Year 1 Pass or Season Pass to mean the same thing – a bundle that covers the first wave of post-launch releases. The key point:

  • Year 1 describes the release window – which DLC drops are included.
  • It does not describe how long you can play – you keep the content you received.

Season pass vs battle pass – this is where the one-year confusion comes from

A season pass is usually permanent DLC access. A battle pass is often time-limited progression.

  • Season pass – you buy DLC access, and it stays in your library.
  • Battle pass – you buy a seasonal reward track, you earn rewards during a limited season, and some rewards may be missable if you do not play in time.

What to check before buying a season pass

  • What exactly is included – named DLC packs, a clear list, or a roadmap.
  • Whether your edition already includes it – deluxe/gold/ultimate editions often bundle the pass.
  • Whether it is a season pass or battle pass – they are different products.
  • Platform restrictions – passes are typically tied to the platform account where you bought them.

About Steam – why it feels like you do not own it

On Steam (and most digital storefronts), you are not buying ownership in the old physical sense. You are receiving a license to access and use the game under the platform’s rules. Steam’s Subscriber Agreement states that the content and services are licensed, not sold, and that your license confers no title or ownership.

What that means in plain terms:

  • You can usually play indefinitely as long as your account remains in good standing and the service continues to provide access.
  • Access is not the same as ownership – the platform can theoretically remove access in certain cases covered by its terms.
  • Online-dependent games have an extra risk – if servers shut down, parts of the game can become unplayable even if you paid.

RNG – what is it?

RNG (random number generator) is the system behind critical hits, loot drops, shuffles, spawns, and procedural maps. The mechanics are simple – the design work is not: good RNG creates uncertainty that feels fair, stays readable, and does not erase skill.

Most games do not want pure randomness. They want controlled randomness – enough unpredictability to stay interesting, with guardrails so the experience stays trustworthy.

What RNG Means In A Game Context

Most RNG decisions follow the same core pattern: the game generates a value, compares it to a probability, and applies an outcome based on whether the roll passes.

  • Step 1 – Generate a number. The game produces a random-looking value, commonly normalized to 0.00-1.00 (or an integer range). The exact range does not matter – what matters is that the value is sampled consistently and cannot be predicted or manipulated by the player.
  • Step 2 – Compare to a chance. The system checks whether the roll is below a probability threshold. If crit chance is 20%, the roll succeeds when it is below 0.20. If a rare item has a 2% drop rate, the roll succeeds when it is below 0.02.
  • Step 3 – Apply the outcome. If the check passes, the event triggers (crit, drop, proc, spawn). If it fails, nothing happens. That simplicity is exactly why RNG can be dangerous: a single roll can decide something emotionally important unless you design the impact and the frequency of rolls on purpose.
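
The three steps can be sketched in a few lines of Python (the `roll_succeeds` helper and the 20% crit chance are illustrative, not taken from any specific engine):

```python
import random

def roll_succeeds(chance: float, rng: random.Random) -> bool:
    """Step 1: generate a value in [0, 1); step 2: compare it to the
    probability threshold. Step 3 (applying the outcome) is the caller's job."""
    return rng.random() < chance

rng = random.Random()
if roll_succeeds(0.20, rng):  # 20% crit chance
    print("Critical hit!")
```

Passing the `rng` instance explicitly, instead of relying on one global generator, is what later makes separate RNG streams and seeded replays possible.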

True RNG Vs Pseudo-RNG

There are two broad ways games get randomness.

  • True RNG (TRNG) samples unpredictable physical signals, typically hardware noise. It is closer to “real randomness”, but it is harder to control, less portable, and often unnecessary for game feel.
  • Pseudo-RNG (PRNG) uses an algorithm to generate a sequence of random-looking numbers from a starting value called a seed. PRNG is what most games use because it is fast, reproducible (when you want it to be), and consistent across platforms.

The Seed – Why Randomness Can Be Repeatable

A PRNG is deterministic: once you choose a seed, it produces a specific sequence of values. If you reuse the same seed and consume RNG calls in the same order, you can reproduce outcomes exactly. That is not a bug – it is often a feature.

  • Replays and debugging. If a rare bug happens only once every 10 000 runs, you can capture the seed and reproduce the exact same sequence. That makes problems fixable instead of “ghost stories” you can never recreate.
  • Daily runs and shared challenges. Roguelikes and challenge modes often use a daily seed so everyone plays the same generated content that day. The randomness is still there, but it is consistent across players, which makes comparisons meaningful.
  • Deterministic multiplayer (in some designs). Some games keep clients in sync by sharing a seed and then syncing player inputs. When done well it is efficient. When done poorly it is exploitable, because predictable randomness can become an advantage.

Random in games often means unpredictable to the player, not non-repeatable to the developer.
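
A minimal demonstration of that determinism using the standard library (the seed value is arbitrary):

```python
import random

# Two PRNGs started from the same seed produce identical sequences -
# the property behind replays, daily runs, and bug reproduction.
a = random.Random(12345)
b = random.Random(12345)

run_a = [a.randint(1, 6) for _ in range(5)]
run_b = [b.randint(1, 6) for _ in range(5)]

# Same seed + same call order = same outcomes, every time.
assert run_a == run_b
```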

RNG Streams – Why Games Separate RNG Sources

Serious implementations often split randomness into multiple RNG streams so unrelated systems cannot influence each other. This is not theoretical – it prevents real production issues where a harmless feature changes outcomes somewhere else.

  • Combat RNG should be consumed only by combat (hit resolution, crits, damage spread, status effects). If combat shares a stream with UI or animation timing, you can create situations where “opening a menu” changes whether the next hit crits, which feels like rigging even if the math is unbiased.
  • Loot RNG should be isolated so reward outcomes do not depend on unrelated calls. Loot is emotionally high-stakes: players will form beliefs about fairness quickly, so you want your reward system to be stable and explainable.
  • Generation RNG (maps, rooms, encounter placement, spawns) is usually consumed heavily and early. If it shares a stream with loot or combat, small changes in generation can cascade into large differences elsewhere, making balancing and debugging much harder.
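
Stream separation can be sketched with independent `random.Random` instances (the seed values are placeholders):

```python
import random

# One independent stream per system, each with its own seed, so
# consuming rolls in one system never shifts outcomes in another.
combat_rng = random.Random(101)
loot_rng = random.Random(202)
generation_rng = random.Random(303)

# Record what the loot stream would produce with no combat at all.
baseline = random.Random(202)
expected_loot = [baseline.random() for _ in range(3)]

# Combat burns through a thousand rolls...
for _ in range(1000):
    combat_rng.random()

# ...and the loot stream is completely unaffected.
actual_loot = [loot_rng.random() for _ in range(3)]
assert actual_loot == expected_loot
```

With a single shared stream, the same test would fail: the combat loop would advance the sequence and change every loot roll after it.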

Where RNG Shows Up In Games

RNG is not just a list of things that roll dice. It shapes how a game progresses, how combat feels, how solvable the meta becomes, and how pacing is controlled over long sessions.

  • Loot and rewards – RNG controls rarity so valuable items stay valuable, but it also controls progression speed and long-term motivation. If rewards are too random, players feel effort does not translate into progress. If rewards are too predictable, the chase collapses and the economy inflates because everyone completes sets at the same time. Good reward RNG usually includes guardrails (pity, duplicate protection, token systems) so the worst-case stories are prevented while the excitement of uncertainty remains.
  • Combat resolution – RNG shapes moment-to-moment variance through crit spikes, damage ranges, and proc timing. This can make fights feel dynamic instead of scripted. The failure mode is volatility: if one roll swings too much, players feel robbed. Good combat RNG is typically bounded (tight ranges), readable (clear feedback), and positioned as a modifier, not a replacement for skill.
  • Cards and dice – RNG prevents matches from becoming fully solved and repetitive by limiting perfect planning. In card systems, randomness is what forces adaptation: you can plan a line, but you cannot guarantee the next draw. The educational point is that good designs add variance management tools (mulligans, filtering, tutoring limits, deckbuilding constraints) so outcomes feel like decisions under uncertainty, not coin flips.
  • Procedural content – RNG increases replayability by remixing layouts and encounters, but strong procedural design is never “pure random”. It is rules plus randomness: constraints ensure a room is playable, pacing rules prevent difficulty spikes, and curated pools keep variety meaningful. The goal is controlled novelty – runs feel different, but they do not feel broken.
  • AI variety – RNG reduces predictable enemy patterns by selecting from valid actions rather than always picking the same optimal move. This makes enemies feel less robotic and stops players from solving the AI instantly. The key is constraint: randomness should operate inside a safe decision set (cooldowns, positioning rules, threat evaluation) so variety does not look like incompetence.
  • Spawning – RNG controls pressure and pacing in waves, open worlds, and arenas. Spawn randomness changes what the player must respond to and when, which is a pacing lever: too many threats at once creates frustration, too few creates boredom. Good spawn systems often include safety rules (no unavoidable spawns behind the player, minimum reaction distance, intensity budgets) so randomness shapes pacing without creating unfair traps.

Why RNG Feels Unfair Even When The Math Is Correct

Random sequences naturally produce streaks, and streaks feel personal even when they are statistically normal. Players also remember extreme bad runs more than average outcomes, so their mental stats are biased from the start.

Example – if success chance is 20%, failure chance is 80% (0.8). Failing five times in a row is:

0.8 x 0.8 x 0.8 x 0.8 x 0.8 = 0.32768 (about 33%).

So five failures in a row is not rare, it is expected. The educational point is that “fair odds” do not guarantee a “fair-feeling experience” because real randomness produces ugly tails eventually – and players live in the tails when they happen to them.
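
The arithmetic above is easy to verify in plain Python:

```python
# 20% success chance means 80% failure chance on each independent roll.
p_five_fails = 0.8 ** 5
print(round(p_five_fails, 5))  # 0.32768 - roughly one sequence in three

# For contrast: the chance of at least one success in five rolls.
p_at_least_one = 1 - p_five_fails
```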

  • Loss of control hurts most when the player did the right thing and still loses. This is why “random miss” is often perceived as an insult while “random crit” is perceived as a bonus – the miss removes agency, the crit adds upside.
  • Streak memory is a cognitive filter: players compress normal outcomes and vividly remember the worst ones. If your design allows an extreme streak, it will be turned into a story – and that story will define your system’s reputation.
  • Hidden rules trigger distrust. If odds are unclear, modifiers are invisible, or protection systems exist but feel secretive, players assume manipulation. Even a well-balanced system can fail if it is not readable.

Good RNG Design – Uncertainty With Guardrails

Good RNG is not more random. It is randomness shaped to support pacing, fairness, and skill. The tools below exist because pure RNG will eventually produce experience-breaking sequences.

Tool 1 – Bounded Randomness (Tight Ranges)

Bounded randomness limits how far a roll can swing results. A damage range of 10-100 creates huge emotional volatility: low rolls feel like punishment and high rolls feel like the game, not the player, decided the moment. A range like 45-55 still creates variation, but it protects the player’s expectation of consistency.

Design lesson: tight ranges preserve the feeling that outcomes follow from decisions, not from chaos.
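
A quick comparison of the two ranges mentioned above (seeded for repeatability; the damage numbers are illustrative):

```python
import random

rng = random.Random(7)

# Wide range: a single roll can swing damage by a factor of 10.
wild_hits = [rng.randint(10, 100) for _ in range(5)]

# Tight range: still varied, but outcomes stay near expectations.
tight_hits = [rng.randint(45, 55) for _ in range(5)]

assert all(45 <= d <= 55 for d in tight_hits)
```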

Tool 2 – Weighted Randomness (Controlled Probabilities)

Most loot systems are weighted because progression and economy cannot survive equal odds. Weighting lets designers control expected value over time: commons stay common, rares stay rare, and difficulty can increase expected reward value without guaranteeing jackpots.

Design lesson: weighting is not about deception – it is about making reward rates compatible with your game’s pacing and long-term motivation.
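
A weighted loot roll using the standard library's `random.choices`; the rarity names and weights here are hypothetical:

```python
import random

rarities = ["common", "rare", "epic", "legendary"]
weights = [80, 15, 4, 1]  # expected long-run rates: 80%, 15%, 4%, 1%

rng = random.Random()
drop = rng.choices(rarities, weights=weights, k=1)[0]
```

Raising the legendary weight at higher difficulty tiers is one way "difficulty increases expected reward value" can be implemented without guaranteeing jackpots.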

Tool 3 – Bad Luck Protection (Pity Systems)

Pure RNG allows infinite failure. That means someone, eventually, will have a horror streak that feels impossible and breaks trust. Bad luck protection caps the misery while keeping early attempts exciting.

A pity timer guarantees success after N failures, escalating odds increase the chance after each miss, and duplicate protection reduces repeats until a set is completed. These mechanisms change the tail of the distribution – they do not necessarily change the average reward rate, but they dramatically improve perceived fairness.

If a reward system allows unlimited failure, then the worst-case experience grows with playtime and player count. Protection mechanisms cap the maximum expected drought and stabilize progression, without removing the moment-to-moment uncertainty that makes drops exciting.
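
One way to combine escalating odds with a hard pity cap, sketched as a small class (all numbers are illustrative, not taken from any particular game):

```python
import random

class PityDrop:
    """Bad luck protection sketch: escalating odds plus a hard pity cap."""

    def __init__(self, base=0.02, step=0.02, pity_at=50, seed=None):
        self.base = base        # chance on the first attempt
        self.step = step        # added chance per consecutive miss
        self.pity_at = pity_at  # guaranteed success on this attempt
        self.misses = 0
        self.rng = random.Random(seed)

    def roll(self) -> bool:
        # Hard pity: after pity_at - 1 misses, the next roll always succeeds.
        if self.misses + 1 >= self.pity_at:
            self.misses = 0
            return True
        # Escalating odds: each miss raises the next attempt's chance.
        chance = self.base + self.step * self.misses
        if self.rng.random() < chance:
            self.misses = 0
            return True
        self.misses += 1
        return False
```

The early attempts still feel like a long shot, but the tail of the distribution is capped: no player can ever sit at 200 misses.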

Tool 4 – Shuffle Bags (Reduce Extreme Streaks)

A shuffle bag builds a controlled pool of outcomes and draws without replacement, then refills. This preserves unpredictability while greatly reducing extreme streaks. It is especially useful for proc systems, spawns, and controlled reward drops where long droughts are unacceptable.

Design lesson: shuffle bags turn independent rolls into managed variance, which often feels fairer while still being unpredictable moment to moment.
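
A minimal shuffle bag, with a hypothetical 1-proc-in-4 mix:

```python
import random

class ShuffleBag:
    """Draw outcomes without replacement, refilling (and reshuffling)
    when the bag empties. Every cycle contains exactly the configured mix."""

    def __init__(self, items, rng=None):
        self.items = list(items)
        self.rng = rng or random.Random()
        self.bag = []

    def draw(self):
        if not self.bag:
            self.bag = self.items[:]
            self.rng.shuffle(self.bag)
        return self.bag.pop()

# Example mix: exactly 1 proc in every window of 4 draws.
bag = ShuffleBag(["proc", "miss", "miss", "miss"])
```

Unlike independent 25% rolls, this bag can never produce a 20-draw proc drought: the worst case is 6 misses in a row (end of one cycle plus the start of the next).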

Tool 5 – Streak Breakers (Hard Caps)

Streak breakers are explicit caps when pacing cannot tolerate long failure runs. A soft breaker sharply increases chance after multiple failures; a hard breaker guarantees success after a defined streak length.

Try to use streak breakers when failure streaks create boredom, grind fatigue, or perceived brokenness. Avoid them when your game’s identity relies on raw variance and risk.

Tool 6 – RNG Sets The Situation, Skill Decides The Outcome

This is one of the cleanest patterns for “fair-feeling randomness”. RNG provides variety in what appears (upgrades offered, loot available, encounter composition), but the player’s decisions and execution determine whether they win.

Design lesson: players accept RNG better when it changes the puzzle, not when it overrides the solution.

RNG In Competitive And Online Games

In multiplayer, RNG is not only a design decision – it is a trust and security problem. If players believe outcomes are manipulable, the game’s credibility collapses even if the distribution is mathematically fine.

  • Client-side RNG is fast and responsive, but riskier: if a client can predict or influence the random stream, cheating becomes possible.
  • Server-side RNG is more trustworthy because the authoritative roll happens on the server, but it must handle latency, reconciliation, and synchronization cleanly so outcomes do not feel delayed or inconsistent.

Transparency matters because players will reverse-engineer meaning if you do not provide it. You do not need to publish every formula, but you do need consistent rules – what affects the roll, how modifiers stack, and whether protection systems exist. If you change odds dynamically, it should feel like a designed system, not a hidden trick.

Common RNG Myths

  • Myth: The game avoids what I want.
  • Reality: Low probability plus small sample sizes naturally create streaks that feel targeted. Players experience probability as narrative, so a few bad rolls can feel like intention.
  • Myth: I am due for a win.
  • Reality: Independent rolls do not owe success unless your system explicitly includes protection rules that change odds after failures.
  • Myth: Random means evenly spread.
  • Reality: Evenly spread is a human expectation, not a property of independent random sequences at small sample sizes – real randomness clusters, and clusters feel unfair. If you want outcomes to feel evenly distributed in play, you often need controlled randomness (shuffle bags, protection, bounded variance).

How Teams Test RNG So It Does Not Break The Game

RNG can be tested like any other system. The key is not only validating averages, but validating streaks and worst-case tails – because that is where player trust is won or lost.

  • Large simulations validate that rates converge correctly over huge roll counts and across different states. This is how you detect subtle biases, stacking bugs, and unintended interactions.
  • Distribution checks measure streak frequency and variance. A system can have a correct average drop rate and still produce unacceptable droughts. Testing the distribution tells you whether the experience will generate rage stories.
  • Edge-case tests stress unusual combinations – stacked modifiers, extreme build synergies, and rare states that occur only sometimes. These are the scenarios where RNG systems most often break.
  • Seed logging makes rare issues reproducible. If a player reports something impossible, you can inspect the seed and the sequence of RNG consumption instead of guessing.
  • Economy modeling connects RNG to progression and retention. Reward RNG is not isolated: it changes session length, player motivation, and content lifespan.

Pure randomness can create ugly streaks, and those streaks are exactly what makes players feel the game is unfair even when the math is correct. Good RNG design keeps uncertainty playable. It stays exciting and readable, and its impact is limited so skill remains the main driver. Use RNG to add variety and tension, not to decide the whole outcome. When randomness supports the experience instead of overriding it, players stay engaged and trust the system.

Churn Rate

Churn Rate (customer departure rate) is a comparative metric used in analysis to express how quickly a company is losing customers or repeat purchases in a specific time period – typically monthly or annually. It measures the percentage of customers who ended their relationship with the service, stopped buying, or canceled their subscription.

The goal is to identify weak points in customer retention, reveal structural problems in the business model, and optimize growth strategy by reducing losses in the customer base.

What is Churn Rate used for?

To evaluate customer departure and its impacts – for example, when:

  • tracking growth or decline in the number of active customers,
  • measuring loss of repeat purchases or subscription cancellations,
  • analyzing MRR churn (loss of recurring monthly revenue due to customer departure),
  • evaluating the effectiveness of retention campaigns and loyalty programs,
  • monitoring the impact of customer departure on growth strategy and long-term customer value.

Customer churn rate and MRR churn rate

Two basic indicators are tracked for churn:

Notes:

  • Customer Churn Rate – counts customers themselves: the rate at which a business is losing individual customers or customer accounts.
  • MRR Churn Rate (Monthly Recurring Revenue) – an indicator expressing as a percentage the total revenue loss resulting from customer departure in a given period. From a business perspective, it has greater informational value because it also considers the economic weight of individual customers.

Example: You have ten customers, but one of them is responsible for a quarter of your monthly revenue.

If they leave, Customer Churn Rate = 10%, but MRR Churn Rate will reach 25%.

How is it expressed?

Churn is typically expressed as a percentage: the ratio of the number of customers who left during the period to the total number of customers at the beginning of the period.

For example: 5% Churn Rate means that the company lost 5% of its customer base in the given period. This figure shows what portion of the customer base was lost – key information for managing growth and business sustainability.

Example

A company providing a SaaS service had 1,000 active customers on January 1, 2025. By February 1, 2025, it had lost 50 customers who canceled their subscription. The churn rate for January is thus 5%.

This means that for every 100 customers, it loses 5 monthly – and if this is not compensated by new customers or customers with higher revenues, the company’s growth will be at risk.
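
Both formulas from the sections above, applied to the two worked examples. The MRR figures for the ten-customer case are assumed for illustration ($10,000 total monthly revenue, $2,500 of it from the departing customer):

```python
def customer_churn_rate(customers_lost: int, customers_at_start: int) -> float:
    """Share of customers lost during the period, as a percentage."""
    return customers_lost / customers_at_start * 100

def mrr_churn_rate(mrr_lost: float, mrr_at_start: float) -> float:
    """Share of monthly recurring revenue lost, as a percentage."""
    return mrr_lost / mrr_at_start * 100

# SaaS example: 1,000 customers on January 1, 50 cancellations.
print(customer_churn_rate(50, 1000))  # 5.0

# Ten customers, one of them worth a quarter of monthly revenue.
print(customer_churn_rate(1, 10))     # 10.0
print(mrr_churn_rate(2500, 10000))    # 25.0
```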

Voluntary vs. involuntary customer departure

  • Voluntary (active) churn – customers voluntarily stop buying or cancel their subscription.
  • Involuntary (passive) churn – the customer leaves unintentionally, for example, due to failed payment or technical error with payment method.

Tip: Passive churn should be addressed immediately – for example, with a reactivation campaign or notification about unpaid payment – before it spreads and gets out of control.

Negative Churn

Negative churn is considered the “holy grail” of growth and a symptom of a strong product and business model. It occurs when new revenue from existing customers (expansion, upsell, or reactivation) exceeds revenue lost due to departures.

In other words – a smaller but more active group of customers can compensate for revenue loss caused by the departure of some clients through their spending.
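
Negative churn in formula form, with illustrative numbers:

```python
def net_mrr_churn_rate(churned_mrr: float, expansion_mrr: float,
                       mrr_at_start: float) -> float:
    """Net MRR churn: a negative result means expansion revenue
    (upsells, reactivations) outgrew the revenue lost to departures."""
    return (churned_mrr - expansion_mrr) / mrr_at_start * 100

# $1,000 lost to departures, $1,500 gained from upsells, on a $10,000 base.
print(net_mrr_churn_rate(1000, 1500, 10000))  # -5.0
```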

What churn rate is good?

A commonly cited benchmark is an acceptable annual churn rate of 5–7%.

In reality, however, it depends on the industry, business model, and customer characteristics.

Tip: Start from the LTV/CAC ratio (Lifetime Value / Customer Acquisition Cost) and look for a balance that ensures healthy growth and profitability.

Cohort analysis – Churn Rate

Cohort analysis allows tracking at what point in the lifecycle departure is highest and how customer behavior evolves over time.

For example, it can reveal that churn is highest during the first or second month – which indicates insufficient communication of product value or weak onboarding.

Analysis of cohorts (groups of customers who converted in the same period) allows identifying critical phases and verifying whether new measures lead to lower churn in subsequent cohorts.

Why is this metric important?

Churn Rate is a key indicator of company health because it directly affects growth, revenue, and return on marketing investment. While acquisition metrics (e.g., CAC) show how much it costs to acquire a new customer, churn reveals how well the company retains its customers.

It helps to better assess:

  • the effectiveness of retention measures and customer care,
  • customer lifetime value (LTV) in relation to their acquisition cost (CAC),
  • structural weaknesses in the business model – if churn is high, growth will be unsustainable,
  • the speed at which new products, services, or price changes affect customer response.

This makes Churn Rate a fundamental tool for analysts, marketing, and management when evaluating company health and business models with recurring revenue.

What to watch out for with the Churn Rate metric

When interpreting, it’s important to:

  • distinguish between Customer Churn and MRR Churn – losing one large customer can have a greater impact than ten smaller ones,
  • not neglect passive churn and address technical causes of failed payments in time,
  • track cohorts and discover at which phase of the lifecycle departure is highest,
  • combine churn with LTV, CAC, and retention indicators for a complete view of customer base health.

Only then does this metric have real informational value and can be used as a reliable basis for planning growth, retention strategies, and budgeting.

Quarter on Quarter - QoQ

Quarter on Quarter (abbreviated as QoQ) is a comparative metric used in analysis to compare economic, financial, or operational indicators between two consecutive quarters – that is, between the current and previous quarter.

The goal is to assess the development of a company’s, industry’s, or market’s performance over a shorter time horizon and quickly identify trends that may signal growth, slowdown, or stagnation.

What is QoQ used for?

To evaluate quarterly performance – for example, in:

  • tracking revenue growth, profit, and operating margin between two quarters,
  • analyzing productivity, inventory turnover, or cash flow,
  • reporting results of publicly traded companies,
  • monitoring macroeconomic indicators such as GDP, industrial production, or inflation,
  • evaluating the impact of seasonal factors and economic cycles.

Formula for calculating Quarter-on-Quarter (QoQ) percentage change

The Quarter-on-Quarter (QoQ) metric shows by what percentage a given indicator has changed between two consecutive quarters (current vs. previous quarter).

And how do you calculate Quarter-on-Quarter (QoQ)?

QoQ (%) = ((current quarter value − previous quarter value) / previous quarter value) × 100

Notes:

  • positive QoQ (%) – growth compared to the previous quarter
  • negative QoQ (%) – decline compared to the previous quarter
  • 0% – no change

How is QoQ expressed and how do you interpret it correctly?

Quarter-over-quarter changes are typically expressed as percentages.

The notation looks like:

+2.7% QoQ or -0.9% q/q

This notation shows by what percentage the indicator’s value increased or decreased compared to the previous quarter.

Example

Apple announced revenue growth of +2.7% QoQ in the second quarter of 2025.

This means that the company’s revenue was 2.7% higher than in the first quarter of 2025 – for example, if revenue reached $90 billion in the first quarter, it increased to approximately $92.43 billion in the second quarter.

Quarter-over-quarter comparison helps reveal the current trend in revenue development and provides a quick overview of the company’s short-term performance between individual periods of the fiscal year.

Why is quarter-over-quarter comparison important?

QoQ is among the fundamental tools of financial analysis and reporting because it enables tracking performance development within a single year and evaluating results without waiting for annual data.

Unlike the year-over-year metric (YoY), which shows long-term trends, QoQ provides a view of the current pace of growth or decline and helps identify changes that may precede broader economic shifts.

It helps to better assess:

  • short-term growth or performance slowdown,
  • the influence of seasonal trends between individual quarters,
  • the effectiveness of new strategies or marketing measures,
  • the speed of a company’s response to market fluctuations and demand.

This makes QoQ a metric frequently used by analysts, investors, and management in quarterly earnings presentations and strategic decision-making.

How QoQ differs from other metrics

  • MoM (Month on Month) – month-over-month comparison that tracks rapid and short-term changes.
  • QoQ (Quarter on Quarter) – quarter-over-quarter comparison that provides an overview of performance development within one year (the metric this article focuses on).
  • YoY (Year on Year) – year-over-year comparison that displays long-term trends without the influence of seasonality.
  • CAGR (Compound Annual Growth Rate) – accounts for an entire multi-year period and compound interest, providing the most accurate view of long-term trends.

What to watch out for with the QoQ metric

When interpreting QoQ results, it’s important to consider seasonal influences, the length of quarters, and extraordinary events (such as new product launches or one-time expenses).

Quarter-over-quarter growth may appear positive but does not necessarily indicate a long-term trend.

QoQ should therefore always be supplemented with year-over-year comparison (YoY), which allows identification of whether the change is sustainable over a longer period.
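A quick sketch of why the two views differ (the quarterly revenue figures are invented): a seasonal jump can look like strong QoQ growth while the YoY picture stays modest.

```python
# Hypothetical quarterly revenue with a strong Q4 season.
revenue = {"Q4 2024": 120, "Q1 2025": 95, "Q2 2025": 100,
           "Q3 2025": 105, "Q4 2025": 126}

def pct_change(current: float, previous: float) -> float:
    return (current - previous) / previous * 100

qoq = pct_change(revenue["Q4 2025"], revenue["Q3 2025"])  # vs previous quarter
yoy = pct_change(revenue["Q4 2025"], revenue["Q4 2024"])  # vs same quarter last year
print(f"Q4 2025: {qoq:+.1f}% QoQ, but only {yoy:+.1f}% YoY")
```

Here the +20% QoQ is mostly seasonality; the +5% YoY is the more honest trend signal.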

What is the abbreviation CAGR and how is it calculated – Compound Annual Growth Rate formula


Compound Annual Growth Rate (abbreviated as CAGR) is a financial metric that expresses the average annual growth rate of a value – such as revenue, profit, investment, or number of customers – over a specific time period.

The goal is to determine how quickly the value of the tracked indicator grew (or declined) on average each year, taking into account compound interest – that is, the fact that growth in each year is based on a higher base than in the previous year.

What it’s used for

To measure long-term growth rates—for example, in:

  • evaluating the average annual growth of a company’s revenue, profit, or turnover,
  • analyzing the development of investments, funds, or portfolios,
  • comparing growth dynamics between different companies or industries,
  • assessing the development of market share or customer numbers over a longer time horizon,
  • setting realistic targets for strategy and growth planning.

How is CAGR calculated – Compound Annual Growth Rate formula

CAGR is calculated using the formula:

CAGR = ((Final value / Initial value) ^ (1 / number of years)) – 1

The result represents the average annual growth rate in percentages, which would lead to the same final value if growth were constant each year.

Example

A company invested 10 million CZK in 2020 and in 2025 the investment value was 18 million CZK.

CAGR = ((18 / 10)^(1 / 5)) – 1 ≈ 0.125 = 12.5% annually.

This means that the average annual growth rate of the investment was 12.5%—even though growth in individual years could have varied, this value expresses uniform returns over a longer time horizon.
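The calculation above can be sketched as a one-line function (names are illustrative):

```python
def cagr(initial: float, final: float, years: float) -> float:
    """Average annual growth rate implied by compound growth."""
    return (final / initial) ** (1 / years) - 1

# The example above: 10M CZK in 2020 grows to 18M CZK by 2025 (5 years).
print(f"{cagr(10, 18, 5):.1%}")  # prints 12.5%
```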

Why the CAGR metric is important

CAGR is among the most reliable indicators of long-term development because it eliminates the influence of short-term fluctuations and enables objective comparison of growth across time. Unlike simple year-over-year comparison (YoY), which works with a single difference, CAGR considers the entire period, thereby providing a more realistic picture of actual growth rate.

It helps to better assess:

  • long-term growth of revenues, profit, or investments,
  • stability and sustainability of growth trends,
  • effectiveness of strategy over a multi-year period,
  • actual returns on projects or investment funds over time.

This makes CAGR a common component of investment analyses, corporate reports, and strategic presentations for shareholders and management.

How CAGR differs from other metrics

  • MoM (Month on Month) – month-over-month comparison that tracks rapid and short-term changes.
  • QoQ (Quarter on Quarter) – measures quarter-over-quarter growth rate within a year.
  • YoY (Year on Year) – year-over-year comparison that shows the annual change between two periods without the influence of seasonality.
  • CAGR (Compound Annual Growth Rate) – accounts for an entire multi-year period and compound interest, providing the most accurate view of long-term trends (the metric described above in this article).

What to watch out for with the CAGR metric

When interpreting, it’s important to remember that CAGR does not show actual fluctuations in individual years—it only calculates the uniform rate that leads to the same result. Therefore, it’s advisable to combine it with year-over-year data (YoY) or a chart of actual development.

Distortion can also occur if the initial value is unusually low or includes a one-time anomaly. Proper interpretation of CAGR requires knowledge of the context and the entire time development of the tracked indicator.
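A tiny sketch of this caveat (the series are invented): two very different yearly paths can produce exactly the same CAGR.

```python
def cagr(initial: float, final: float, years: int) -> float:
    return (final / initial) ** (1 / years) - 1

steady   = [100, 110, 121, 133.1]  # +10% every year
volatile = [100, 160, 90, 133.1]   # wild swings, same start and end

for name, series in [("steady", steady), ("volatile", volatile)]:
    rate = cagr(series[0], series[-1], len(series) - 1)
    print(f"{name}: {rate:.1%} CAGR")  # both print 10.0% CAGR
```

Same 10% CAGR, very different risk profile – which is why a chart of the actual development belongs next to the number.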

How to stop AI from creating false information, disinformation, and bullshit – how to get practical, accurate answers and minimize AI hallucinations

Artificial intelligence is a great tool. It can speed up work, supplement knowledge, reveal new connections, and sometimes surprise you with a result you wouldn’t have thought of yourself. But it’s important to acknowledge reality – AI is not a miraculous brain and certainly not a truthful expert. It’s a statistical model that generates the most probable answer based on learned patterns – and sometimes it hits the mark precisely, other times it confidently spouts nonsense. How do you prevent this? We’ll discuss that in a moment, but first, let’s go over the basics once more.

Yes, AI can speed up work, help with research, draft materials, suggest text structure, and explain technical problems. But it doesn’t make anyone an expert. And it’s definitely not a replacement for independent thinking. Those who have worked with these tools for a longer time know well that despite all the procedures, guides, and prompts, the model will sometimes simply respond with nonsense.

Artificial intelligence still loves to hallucinate – and no, the problem isn’t that you’re using the free version of ChatGPT, for example; paid versions hallucinate happily too. (This varies quite a bit from tool to tool – in Claude, for instance, you’ll run into fake sources less often, and in my experience it generally works somewhat better with facts even at baseline.)

This means that information needs to be read, compared, and sanity-checked in your own head to make sure it isn’t complete nonsense.

Not because the user “doesn’t know how to write a prompt.”

Not because they’re too lazy to study it.

But simply because artificial intelligence still doesn’t think; it only predicts the most probable answer.

And sometimes it hits amazingly precisely, other times it misses completely. When it misses, you’ll often hear the same mantra from various wannabe gurus who became AI experts overnight:

“Just write a better prompt.”

Yes, that’s true. You can always write a better and more detailed prompt. But that’s only half the truth.

The other half goes: “How much time does it really make sense to invest in tuning an AI response… and when is it faster to do it the old way?”

If it’s a task that would normally take you tens of minutes to hours of work, or an activity you’ll repeatedly perform, or you don’t know how to approach it at all, it makes sense to use AI as an assistant that will speed up the work and help you structure the process.

A typical situation might be, for example:

  • Drafting arguments for client communication – AI helps assemble logical arguments, lists advantages, objections and counter-arguments, adds recommended tone and communication style.
  • Writing procedures, checklists or methodologies – AI creates a clear step-by-step process, adds control points and recommendations so the process is clear and replicable.
  • Creating an outline for a marketing campaign or strategy – AI proposes campaign structure, target segments, key messages and recommended communication channels.
  • Proposing logic for a decision-making process or project task – AI helps break down the problem into steps, define decision criteria, possible scenarios and recommended procedure.
  • Transcribing and editing text – transcribing voice notes to text, adding structure, language correction.
  • Summarizing professional text – for example, turning 5 pages of internal study into one understandable page for management.
  • Expanding brief notes – you have 5 bullet points – AI generates quality continuous text from them.
  • Reformulating text for different audiences – technical version, lay version, business version.
  • Creating an outline – for an article, presentation, SOP, video, newsletter, email.
  • Creating short message variants – 1 minute, 10 seconds, social post, headline.
  • Creating schedules or checklists – customer onboarding, project timeline, proposal preparation.
  • Meeting or document summary – extracting key points, tasks, deadlines.
  • Solution variant proposals – for example, three different versions of arguments or business email.
  • Translation and tone adjustment – not just translation, but conversion to Czech style and context.
  • Ideas and brainstorming – slogans, claims, product names, messaging, content pillars.
  • Explaining complex concepts – simple version with concrete examples.
  • Supplementing decision-making materials – overview of pros and cons, risks, alternatives.
  • Generating follow-up emails – different tones and communication variants.
  • Converting informal notes – from chaotic text to professional output.
  • Creating step checklists – proposal preparation, supplier selection, project implementation.
  • Proposing information structure – sorting documents, CRM fields, project tasks.
  • Simulating a client or investor – AI plays the role of counterparty and tests arguments.
  • Highlighting blind spots – points you overlooked, adding context.
  • etc.

Moreover, it’s also necessary to know not only what different AI systems exist, but when they’re suitable or unsuitable for completing the task you need – because they have different strengths and weaknesses and their suitability therefore differs according to the type of task.

Some operations are more efficient to perform in ChatGPT, others in Claude AI, Gemini, or in tools integrated into office applications, and sometimes it simply doesn’t pay to use AI tools at all and it’s better and faster to do the task manually/the old way.

And then there are certain operations that some tools can’t process at all; try, for example, getting Claude to correctly use lower-opening and upper-closing quotation marks (i.e., the characters „ and “) for writing direct speech.

Standardly, despite all efforts, instead of the correct variant: „Hello, how are you?“

I get: “Hello, how are you?”

This is most likely because Claude’s model has its primary data core in English. The Czech „“ pair is probably only marginally represented in the dataset, so for the model it’s a low-probability pattern. And because AI doesn’t apply Czech language rules, only occurrence statistics, it will keep giving you a different pattern as a result – even if you forbid that result, show it correct examples, save it to memory, or add the command to its settings (preferences) as custom instructions.

Even thorough instructions or prompt engineering may not help if we want an output from the model in an unusual format or style that contradicts its statistical training.

If we require from the model a style or format that directly conflicts with its training, then even with repeated reminders, examples of the correct format, and saved permanent instructions, the model may drift back to its accustomed patterns after a while. The reason is technical – current language models follow probabilistic patterns from training and in practice have no reliable mechanism for “hard prohibitions.” The model can partially respect the instruction, especially for shorter responses or when you actively monitor it, but for longer texts, or when there is a stylistic conflict, it often slides back to what it “knows best.”

Permanently retraining this behavior requires intervention in the model itself or special controlled-generation mechanisms, which are not tools for ordinary users. So the simple reality still holds – AI can significantly speed up work, but human oversight and correction are essential. For some tasks you simply can’t do without manual checks and adjustments, and sometimes you’ll never get the correct answer at all. And to be clear, I’m not saying AI is completely incapable – don’t read this as one idiot proving that AI is shit because it can’t write quotation marks for him. It’s not.
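In cases like the quotation marks above, a deterministic post-processing step is often more reliable than fighting the model. A minimal sketch (the function is my own illustration, not part of any tool) that converts English-style quotes in a model’s output to the Czech „…“ pair:

```python
def to_czech_quotes(text: str) -> str:
    """Convert English curly/straight quotes to Czech lower-opening quotes."""
    # Curly English quotes map directly: opening "\u201c" -> "\u201e", closing "\u201d" -> "\u201c".
    text = text.replace("\u201c", "\u201e").replace("\u201d", "\u201c")
    # Straight quotes alternate between opening and closing.
    out, opening = [], True
    for ch in text:
        if ch == '"':
            out.append("\u201e" if opening else "\u201c")
            opening = not opening
        else:
            out.append(ch)
    return "".join(out)

print(to_czech_quotes("\u201cHello, how are you?\u201d"))  # prints „Hello, how are you?“
```

A fixed rule like this never drifts back to what the model “knows best,” which is exactly the failure mode described above.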

But this is enough for understanding why you can’t rely purely on AI. 🙂

AI naturally handles plenty of very useful automations – reports, exports, extracts, and structures for SEO/PPC campaigns that would otherwise take you tens of hours can now be done in hours, once you manage to iron out all the bugs. Sometimes it’s more laborious than doing it by hand, other times it saves tens or even hundreds of hours per year when it works out.

It’s important to realize that AI is a statistical model, albeit a very sophisticated one. It’s not about real understanding of problems, but about probabilistic estimation of what answer is most likely correct based on training data. AI doesn’t think – it guesses the most probable continuation of text according to patterns it saw during learning. That’s why answers can be wrong, even if you use the best input method. It depends on what the model learned from, how the data was processed and how the answer calculation proceeds. Always check important information and don’t rely blindly on AI output.

A brief cheat sheet for the tools follows:

  • ChatGPT – universal large language model suitable for writing texts, creative content, marketing proposals, explaining complex topics, logical tasks, code design and structuring information. It can quickly create drafts of articles, corporate documents, presentations, email communication, argumentative outlines and communication strategy for different audiences. Occasionally it adds a probable estimate instead of a fact if no source is provided – therefore it’s necessary to check specific data. If the user doesn’t supply data, the model relies on training information and may not reflect the latest changes.
  • Claude – focused on professional and structured texts, working with extensive documents, legal materials and technical materials. Strong in logical arrangement of information, argumentation, precise work with terminology and consistency of tone and structure. Suitable for analyses and legal or process documents. Thanks to a stricter approach to uncertain information, it adds assumptions less – but for creative tasks it may seem reserved and sometimes refuses vague assignments. Excellent for programming and coding, but not exactly a great tool for creative designs.
  • Gemini (Google) – strong in searching and working with information from the web, visual inputs and tasks in the Google ecosystem. Suitable for research, tabular outputs and orientation data overviews. Style is predominantly factual and informative, less suitable for emotional marketing content and creative copywriting. It allows working directly with Google documents and spreadsheets without manual data copying, can supplement context from the web and automates office workflow within Google Workspace. If you live in the Google ecosystem, it’s a significant time saver.
  • Microsoft Copilot – ideal for Word, Excel, PowerPoint, Outlook and corporate workflow. Excellent for summarizing meetings, spreadsheets, corporate documentation and emails. Maintains professional tone and is strong in office agenda – not primarily intended for creative writing or creating distinctive communication identity. It connects directly with corporate documents and data in the Microsoft ecosystem, so it saves time when preparing presentations, reports, contract materials or email communication. Ideal for corporate environment where you need to quickly process real documents, spreadsheets and meeting notes.
  • Notion AI – tool for organizing information, notes, SOPs, checklists, internal manuals and documentation. Converts informal notes into clear structure and helps create systematic materials for corporate processes, projects and knowledge bases. Strong where order, logic and content clarity are needed – less suitable for creative or emotionally tuned texts because it naturally generates factual, procedural tone.
  • Midjourney – suitable for stylized visuals, branding, moodboards, product scenes and concepts. Excels in aesthetics and originality. It’s a great tool for new visuals and possible ideas, or for creating new visuals that should emphasize some mood and overall visual style. It can make beautiful images and helps imagine how things could look. But it’s not entirely suitable where technical precision is needed – such as correct proportions, construction details or faithful representation of specific people and products. It’s more of a creative tool than technical, so the result looks nice but may not be entirely according to the reality you need.
  • Stable Diffusion – flexible tool for realistic images, retouching, product visualizations and precise control over output. Can be run locally, modify styles, use control tools (e.g., ControlNet) and train custom models so that images correspond to specific requirements – for example, realistic representation of a specific product, face, brand or architecture. Unlike Midjourney, it’s not just a “tool for ideas,” but allows teaching the model your own visual style or specific object so that the result looks according to reality. Thanks to this, it’s suitable for situations where precise match with reality is important, not just creative appearance. However, it requires certain technical proficiency, working with parameters and patience when tuning – only then do you achieve professional results with this tool.
  • Runway – suitable for creating short video scenes, visual effects and creative clips. Great for prototypes and visual inspiration. For longer videos, the process is significantly more demanding and requires a series of intermediate steps, manual adjustments and post-production – production of longer videos can take tens of hours and is not necessarily cheaper than the classic approach.
  • Pika Labs – focused on short animations, stylish video clips and dynamic effects for social content. Ideal for visual ideas and short motion design. Not intended for long film materials or technically demanding video projects.
  • Sora – a model for generating videos from text, images, or clips; it can create visually very high-quality short sequences, convert scripts to video, and connect different shots into one piece. It excels at rapid prototyping of visual ideas and scenic designs and provides an easy interface for video content creation. It’s not ideal for long or complex productions with a high degree of post-production and technical stability, because generating longer videos still requires a large amount of time, manual adjustments, and editing experience.
  • ElevenLabs – realistic voice synthesis for voice-over, dubbing and corporate communication. Captures intonation and natural expression. For languages with less support, pronunciation tuning may be needed.
  • Descript – great tool for video and audio editing by text. Suitable for podcasts, online courses, corporate interviews and educational content. Efficient for spoken content and scripted recordings – not specialized in film editing or dynamic advertising.
  • HeyGen – avatar video presentations, corporate onboarding, customer messages and lip-sync videos. Enables rapid production of talking avatars without filming. Best for formal and informational content – avatar tone is not intended for dramatic or emotional storytelling. For longer videos, processing time and price increase significantly, often with worse results than classic filming.
  • Lovable.dev – focused on rapid application development and prototyping using AI (you can create an MVP in it relatively easily and cheaply). It can convert a text description into a functional application with a backend, frontend, and database. Strong in generating UI, components, logic, and basic project architecture – including automatic code creation, tests, and version commits. Ideal for founding projects, MVPs, internal tools, dashboards, or idea validation. It significantly speeds up work thanks to an integrated editor and AI assistance directly in code. It doesn’t blindly rewrite code, but tries to reconstruct and optimize it – for complex or non-standard projects, however, it may require technical oversight and manual adjustments. Not intended as a full replacement for a senior developer, but as a work accelerator that serves excellently for prototypes, proof-of-concept solutions, and the rapid launch of ideas that can then be tuned manually.

At the same time, you always need to be able to judge for yourself when it’s worth continuing to refine the prompt to get a perfect result and when there’s a simpler path, because AI tools really aren’t cure-alls – and the more you rely on them, the greater the impact future updates and model changes will have on you.

Examples:

  • Logo or visual identity – AI can quickly propose style and idea – but you’ll fine-tune the final form manually in a graphic editor, because you want full control over the result, or you count on doing more edits with that visual, etc.
  • Copywriting and marketing texts – AI kicks off the idea great, gives variants, helps with brainstorming – but you write the final version yourself, so the tone and message are personal and precise, there are no factual errors, typos, non-existent words, correct tonality and language expression.
  • Complex decisions (finance, strategy, technical solution) – AI gives quick overview and summary of options – but the final decision must come from combination of AI, experience and common sense (for example, customer cycle or how your company works).
  • Bulk editing/filtering of sensitive data – you get advice from AI, for example a formula or procedure for how to do it yourself; with larger data sets you mainly avoid the errors it could introduce when your instructions aren’t absolutely perfect – and creating perfect instructions would cost you long hours anyway.
  • Contracts and legal texts – AI helps with structure and points out risks – but the final wording must go through lawyer’s review, because incorrect wording can mean real risk for you/your company. But at least for quick outline of chapters and what you should cover, it can be a good helper.
  • Technical solutions and architecture – AI proposes possible procedures and technologies – but the final decision comes from your knowledge of environment, security and system limitations, budget, features you need and thousands of other parameters.
  • Project management – AI prepares schedule, tasks and communication points – but prioritization, human capacities, risks and changes over time must be managed by a person.
  • Data analysis and reports – AI summarizes data and proposes conclusions – but a person must verify whether the model correctly understood the context and didn’t draw an incorrect conclusion.
  • Customer communication – AI prepares response texts, summaries, and reaction variants – but the empathetic tone and the final choice remain with a person, because AI can’t fully capture nuance and emotion, not to mention that you probably don’t want customers receiving nonsense as responses. AI can draft some basic answers for you, but it’s not well suited to deeper technical questions because it can’t grasp them in depth. For example, your customer center can feed it the customer’s initial inquiry and have it suggest what a suitable solution proposal for such a client might look like.
  • Supplier/employee selection – AI helps define criteria and comparison – but you’ll assess the real value of a person or company only by combination of references, behavior and context. AI can be used to prepare the process and selection structure, not to replace human judgment. At the same time, it’s not appropriate (and in many cases not even legal) to use AI for automated “evaluation of people” or decision-making without human review – especially for resumes, personality conclusions or applicant profiling.
  • Presentations and materials for management – AI generates outline, visual and summary – but you tune the precise message, facts and communication tone.

What does all this lead to?

That AI is not a calculator. It’s a tool that requires:

  • experimentation,
  • critical thinking,
  • ability to verify facts,
  • also having your own knowledge and the ability to keep learning in the given area/topic (because otherwise you don’t know how to ask correctly, or which answers are hallucinations or total nonsense).

Likewise, there’s no universal prompt that will make you an expert without work.

Why?

Because AI doesn’t create new knowledge – it works with what it already has from us (people). And therefore – if you yourself don’t understand the principle, problem or context, you can’t assess whether AI is giving you the correct answer or just nicely formatted nonsense.

So here it holds – to be able to formulate/ask AI the correct question to get a relevant result, you must understand the topic/issue.

Without that:

  • you have no way to recognize that AI is confidently and very convincingly lying,
  • you can’t select correct information and filter out nonsense,
  • you can’t follow up with another question in the right direction,
  • you don’t know when to use AI output and when to ignore it.

It’s like having a scalpel – the tool itself doesn’t make you a surgeon either. A prompt is just an “arrow,” but the trajectory and target are determined by the person who gives AI commands (prompts). It can of course be bypassed by the process of so-called onion peeling, where you gradually submit your individual initially stupid questions to AI, let it gradually explain the topic to you until you roughly perceive it and can ask better. But that still doesn’t make you an expert in that area (it’s not even technically possible – it’s hard to cram into your brain in a few minutes all the knowledge that someone gradually absorbed over years).

Expertise isn’t just a set of information. It’s experience, memory, intuition and ability to put things in context. When AI just serves something to you, your brain often doesn’t even really store it – you capture the result, but not the path to it. But when you learn yourself, try, make mistakes, tune and think about it, memory and skill are stored much deeper. It’s the principle like with programming – you can have AI generate code for you, but if you don’t understand it, you’re no better programmer. You won’t remember procedures, you won’t create mental models and next time you’re back at the beginning. Quick information is not the same as acquired knowledge. And acquisition – not copying – is what makes an expert.

AI can be an excellent partner for you. But only if you control it, not it you.

And now let’s talk about how to correctly control AI so that it sends back at least somewhat usable results.

How to get better and more accurate answers from AI?

Step 1: Choose the right tool for the right task – and determine whether you really need AI for it at all

Different tasks need different tools. ChatGPT is not a universal solution, even if it seems that way. If you use it for the wrong type of task, you’ll get bad results. Simple.

See the notes on tools above – you gain this knowledge only by using those tools daily and pushing their limits. Only then will you learn when they’re suitable, when it’s better to phrase the input differently, and when it makes no sense to try to solve the task through AI at all, because writing a sufficiently perfect prompt would cost you many times more time than doing the task yourself.

Another level for making work with AI models more efficient is having NotebookLM, which is designed for working with your own materials – contracts, PDFs, presentations, corporate documents or study materials.

Unlike regular chatbots, it doesn’t depend only on “model memory.” NotebookLM grounds its answers directly in the specific sources you upload – it reads them, analyzes them, and responds according to them, rather than estimating. It uses only content you give it, so you have control over the sources and where the AI draws information from. This is essential for confidential documents or internal materials. And – mainly – it significantly reduces the risk of hallucinations. NotebookLM also allows creating summaries, study materials, presentation materials, briefings, or questions and answers directly from the sources you upload (PDF, Docs, texts, notes, research), which again makes desk work somewhat more efficient.

If you need to minimize the risk of hallucinations and have your own sources available, the best choice is NotebookLM – it works directly with uploaded content, so answers are built on actual data, not estimation.

When you don’t have sources and need to find them first, Perplexity works excellently. It’s fast, transparently provides links and its information can be easily verified. Although it can also hallucinate, thanks to cited sources, checking is significantly simpler. Its Deep Research mode typically takes only 2-3 minutes and instead of unnecessary length, it emphasizes quantity of relevant sources and their connection.

On the other hand, even with established models like ChatGPT or Gemini, it can happen that you get a perfectly written long text – which ultimately doesn’t answer the question precisely. Therefore, quality of sources and verifiability of information are more important than poetics or output scope.

Step 2: The simplest and at the same time longest path – just ask

You open ChatGPT, write a question and wait for an answer. This is what most people do. And that’s precisely why they get bad results. When you just write a question without further instructions, ChatGPT automatically uses a fast model, the so-called Instant version. It’s swift but very imprecise and has a huge tendency to make things up.

So watch out for that.

On the other hand, for most simple queries it may be enough. You simply don’t have time to write detailed prompts for every little thing, especially when you roughly know what the correct result should look like. It comes down to your own judgment: when I know I’m tackling a more complex or technical query, I spend more time preparing the prompt, and vice versa – for simple queries I can throw in a plain question, but then I have to expect a weaker answer. Let’s be honest, you can get one even with a more detailed prompt, because frankly no AI model has great memory yet, so many prompts will simply cost you some time.

But a query without context, without a role, and without rules is an excellent recipe for hallucinations (meaning you’ll get made-up, untrue answers).

A better approach is to give the AI model instructions plus a role (roleplay). This step alone improves answer quality dramatically, which is also supported by data from several studies. It’s enough to assign the AI a specific expert role with a detailed description.

Some current studies show that simulating multiple expert roles significantly improves the reliability, safety, and usefulness of AI answers (in one study, the probability of truthful information from the AI increased by 8.69%). You can also find the basics in the article: Effective Prompts for AI: The Essentials.

This is, incidentally, what most users do half-consciously when they get a poor answer. Simple but effective – you simply write a command:

You are an expert/specialist on… You are also a professor from Harvard, and on top of that you write the output as a journalist, so the text is understandable even for a layperson, etc.

If you read English, the article Unleashing the potential of prompt engineering for large language models can help you understand the principles behind how LLMs work.

You’ll get even more accurate and reliable results if you activate the option to use the internet.

The model then doesn’t rely only on training data available up to its training cutoff, but can verify and supplement information with the current state as of today. This is crucial especially for topics that change quickly – for example, legislation, grants, the energy market, technology, or economic data (or really always, because ideally you want the most current data).

Step 3 – Activate “Thinking” mode for deeper and more accurate answers, or Deep research

You’ll get even more accurate results when you activate “Thinking” mode (in ChatGPT labeled as the “Thinking” option).

This mode belongs to the newest versions of the GPT-5 model, which have built-in “thinking” – deeper logical steps and longer analysis. As a result, answers can be higher quality and more professional.

The trade-off is that a response takes significantly longer than with the fast “Instant” mode. So use Thinking mode when quality matters more to you than speed – for example, for demanding professional queries, research, technical topics, legislation, or financial decisions.

And where is Thinking mode turned on in ChatGPT?

Thinking mode - ChatGPT

A level higher still is the agentic “Deep Research” mode.

It’s not just about a “smarter answer,” but about a controlled, multi-step procedure.

The AI plans the work itself, systematically goes through relevant web sources and your materials, continuously evaluates the quality of its findings, compares claims across sources, and compiles the findings into a coherent report with a clear structure and citations.

The result is typically an extensive report – easily around 15 pages, with dozens of links and tables or graphs – ready for export to PDF and immediate handover to colleagues or clients. It makes sense to turn it on when you need maximum accuracy and verifiability – for example, for legislative research, technical comparisons, investment materials, due diligence, market analyses, or complex strategies.

The price for such depth is longer processing time and resource intensity – but when it comes to quality, “Deep Research” today represents the peak of what AI can offer.

If you want the most accurate output, write the task as specifically as possible (purpose, scope, audience, required format, comparison metrics, excluded sources) and add quality criteria – for example:

Compare at least 8 sources, state your selection methodology, separate “findings” from “interpretation,” and attach a list of risks and unknowns.

This way you’ll get a report that cuts to the bone of the problem instead of being just a compilation of links.

The catch is that this method is quite impractical time-wise – you wait a long time and don’t always get the answer you need. (Personally, I’ve found it useful to read up on the topic a bit while it’s crunching and then compare my findings with what ChatGPT produces.)

And where is the agentic “Deep Research” mode turned on in ChatGPT?

Deep Research in ChatGPT

Instead of repeating the same instructions in every chat, use a smarter approach – ask the AI: “What information do you need to best answer <your query>?”

Even better is to invest a bit of time in setting up custom instructions that will apply to all conversations. And the most efficient solution for recurring tasks is to create your own specialized GPT. Deep Research is really the best current AI function for complex searching and analysis.

But even so, the iron rule applies – always verify everything. Not even the most advanced AI is one hundred percent reliable. We’ve already explained why several times above: it’s still just a model working on the basis of probability.

Customer Acquisition Cost (abbreviated as CAC)

CAC

Customer Acquisition Cost (abbreviated as CAC) is a key financial metric that expresses the average cost it takes a company to acquire one new customer – that is, all expenses incurred on marketing, advertising and sales, divided by the number of newly acquired customers in a given period.

The goal of the CAC metric is to measure the efficiency of acquisition activities and determine whether the costs of acquiring customers are proportionate to their long-term value (LTV – Lifetime Value).

What is CAC used for?

CAC helps companies determine how efficiently they use their marketing and sales budget.

It’s most commonly used when:

  • evaluating return on investment in marketing and advertising,
  • comparing the performance of individual campaigns or channels,
  • setting acquisition goals and budgets,
  • determining optimal product or service pricing,
  • analyzing business profitability and scalability.

How is CAC calculated?

The basic formula for calculating CAC is simple:

CAC – Customer Acquisition Cost – formula

CAC = Total marketing and sales costs / Number of new customers

For example, if a company spends 500,000 CZK on marketing and sales activities during a quarter and acquires 1,000 new customers, its CAC is 500 CZK per customer.
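The formula and example above can be sketched in a few lines of Python (the function name is illustrative; the figures are taken from the article’s example):

```python
def cac(total_costs: float, new_customers: int) -> float:
    """Customer Acquisition Cost = total marketing and sales costs / new customers."""
    if new_customers <= 0:
        raise ValueError("CAC needs at least one new customer")
    return total_costs / new_customers

# The article's example: 500,000 CZK spent in a quarter, 1,000 new customers.
print(cac(500_000, 1_000))  # → 500.0 CZK per customer
```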

What is included in CAC?

The calculation includes all direct and indirect costs associated with customer acquisition:

  • advertising costs (online campaigns, TV, outdoor, print, etc.),
  • salaries of salespeople, marketing team and external agencies,
  • commissions and bonuses for closing deals,
  • technology and tools for CRM, emailing, analytics,
  • production and operational costs for campaigns and lead generation.

Why is CAC important?

CAC is a crucial metric for managing growth and profitability.

It shows how expensive it is to acquire a new customer and whether this process pays off. It helps to better assess:

  • the efficiency of marketing and sales channels,
  • sustainability of the growth model,
  • when it’s appropriate to scale investments in acquisition or conversely increase emphasis on retention,
  • the optimal ratio between acquisition costs and customer value.

Relationship between CAC and LTV (Customer Lifetime Value)

The CAC value alone says little unless it’s assessed in relation to how much the company actually earns from the customer over time.

Therefore, in practice it’s always compared with the LTV (Customer Lifetime Value) metric – that is, with the total value that a customer brings to the company during their entire “lifetime” (for example, during the subscription period or average cooperation length).

If CAC is high but customers simultaneously have high LTV, the acquisition strategy can still be healthy. Conversely, low CAC may not be a success if customers leave quickly and their LTV is low.

The point is for investments in customer acquisition to pay off in the long term. For this reason, the LTV / CAC ratio is monitored, which helps determine the efficiency of acquisition strategy.

  • LTV / CAC > 3 – healthy ratio: the customer brings the company at least triple the value compared to what their acquisition cost,
  • LTV / CAC ≈ 1 – acquisition model is on the edge of profitability,
  • LTV / CAC < 1 – the company spends more on acquiring a customer than it earns from them.
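The thresholds above can be captured in a small illustrative helper (a sketch: the cutoffs mirror the rules of thumb listed here, and the band between 1 and 3 is treated as borderline – treat them as guidance, not hard standards):

```python
def ltv_cac_health(ltv: float, cac: float) -> str:
    """Classify the LTV/CAC ratio using the rules of thumb above."""
    ratio = ltv / cac
    if ratio > 3:
        return "healthy"       # customer worth more than 3x their acquisition cost
    if ratio >= 1:
        return "borderline"    # at or near the edge of profitability
    return "unprofitable"      # acquisition costs more than the customer brings in

print(ltv_cac_health(2_000, 500))  # ratio 4.0 → "healthy"
print(ltv_cac_health(400, 500))    # ratio 0.8 → "unprofitable"
```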

The relationship between CAC and LTV is thus one of the most important indicators for growth sustainability, profitability and efficient budget allocation between customer acquisition and retention.

What to watch out for with the CAC metric

When working with CAC, it’s important to consider context and time perspective:

  • CAC can differ by channel – performance marketing has different costs than direct sales,
  • short-term higher CAC may be fine if LTV or customer retention increases long-term,
  • it’s advisable to calculate CAC separately for new and returning customers,
  • during rapid scaling, it’s necessary to monitor whether CAC is not increasing faster than revenues and margins.

Related metrics

  • LTV (Lifetime Value) – customer lifetime value,
  • LTV/CAC Ratio – ratio between customer value and their acquisition cost,
  • Churn Rate – customer departure rate, which directly affects LTV and the derived LTV/CAC ratio.
Year on Year (abbreviated as YoY)

YoY

Year on Year (abbreviated as YoY) is a comparative metric used in analytics to evaluate economic, financial, or operational indicators between two identical periods in different years – typically between full calendar years or matching months.

The goal is to capture long-term trends without being distorted by short-term seasonality and to understand the real performance trajectory of a company, market, or sector.

What It’s Used For

YoY analysis is used to identify annual changes – for example, when:

  • assessing growth or decline in revenue, profit, or margins,
  • tracking website traffic, customer demand, or sales performance,
  • analyzing macroeconomic indicators – such as inflation, GDP, or average wages,
  • monitoring industrial and energy performance – including production, consumption, and capacity utilization,
  • reporting corporate results and evaluating long-term strategic outcomes.

How It’s Expressed

Year-on-year changes are typically expressed as percentages.

The notation is usually written as:

+3.1% YoY or -2.4% y/y

This indicates how much a specific metric increased or decreased compared with the same period in the previous year.

Example

Coca-Cola reported a net profit of +3.1% YoY in 2025.

This means the company’s profit was 3.1% higher than during the same period in 2024 – if it earned USD 10 billion in 2024, it reached roughly USD 10.31 billion in 2025.

Year-on-year comparison therefore reflects the company’s real growth, not a temporary seasonal fluctuation driven by, for example, stronger summer or holiday sales.
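The section doesn’t spell out the formula, but YoY is the standard percentage change applied across years. A minimal Python sketch (the function name is illustrative), checked against the example above:

```python
def yoy_change(current: float, previous: float) -> float:
    """Year-on-year percentage change vs. the same period last year."""
    return (current - previous) / previous * 100

# The example above: USD 10.31 billion in 2025 vs USD 10 billion in 2024.
print(round(yoy_change(10.31, 10.0), 1))  # → 3.1 (% YoY)
```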

Why YoY Comparison Matters

YoY is one of the core metrics in business reporting and performance evaluation because it reveals the real underlying trend.

While month-on-month (MoM) or quarter-on-quarter (QoQ) changes can be affected by temporary market conditions, promotions, or weather patterns, YoY results show whether a company is genuinely growing, stagnating, or declining over time.

By using YoY data, companies understand:

  • the effectiveness of strategic and investment decisions,
  • the stability and sustainability of growth,
  • the evolution of profitability and key business indicators,
  • the performance of individual divisions, products, or markets across years.

As such, YoY analysis is an essential component of any financial report, investor presentation, or management dashboard.

Difference Compared with Other Metrics

  • MoM (Month on Month) – compares performance between consecutive months; useful for short-term trend tracking but heavily influenced by seasonality.
  • QoQ (Quarter on Quarter) – compares data between quarters; often used in corporate reporting to measure quarterly progress.
  • YoY (Year on Year) – compares the same period across years; provides a broader and more stable view of long-term performance.

Common Pitfalls When Using YoY

To ensure meaningful results, YoY comparisons must always be based on identical and closed periods (e.g., January–December 2025 vs. January–December 2024). It’s also critical to consider any changes in accounting standards, reporting structures, or business models that might distort the comparison.

Only consistent data and like-for-like periods provide a reliable foundation for strategic decisions, budgeting, and forecasting.

Advanced Uses and Interpretation

Beyond basic financial metrics, YoY analysis is widely applied across multiple domains:

  • Marketing & e-commerce: tracking YoY growth in organic traffic, conversion rate, or customer retention helps identify sustainable acquisition trends.
  • Energy & industry: measuring YoY production or consumption reveals the impact of efficiency measures or demand fluctuations.
  • Finance & investment: YoY return comparisons allow investors to evaluate performance stability and risk exposure over time.
  • Public policy & macroeconomics: YoY inflation or wage growth data reflect economic health and purchasing power changes in real terms.

Year-on-year analysis is more than a numerical comparison – it’s a diagnostic tool that filters out short-term volatility to expose long-term direction. Used correctly, YoY metrics help companies and analysts make informed, evidence-based decisions about investment, growth strategy, and operational efficiency.

Month on Month (abbreviated as MoM)

MoM

Month on Month (abbreviated as MoM) is a comparative metric used in analysis to compare economic, financial, or operational indicators between two consecutive months—that is, between the current month and the previous month.

The goal is to capture short-term changes and trends that signal the immediate development of a company’s performance, sales, demand, or the effectiveness of marketing activities.

What it’s used for

To evaluate short-term movements—for example, in:

  • tracking monthly growth or decline in revenue, profit, or margins,
  • analyzing website traffic or conversion development,
  • monitoring the development of costs, productivity, or inventory turnover,
  • monitoring the performance of advertising campaigns and changes in demand,
  • operational financial reporting and cash flow management.

How is MoM calculated (Month on Month formula)

The calculation of month-over-month change (MoM) is straightforward and based on a simple comparison of values from two consecutive months.

Formula for calculating MoM:

Formula for calculating MoM

MoM (%) = ((Current month value - Previous month value) / Previous month value) × 100

Calculation example:

A company had revenue of $500,000 in February and $540,000 in March.

MoM = ((540,000 - 500,000) / 500,000) × 100 = (40,000 / 500,000) × 100 = 0.08 × 100 = 8%

Revenue thus increased month-over-month by +8% MoM.

If, on the other hand, March revenue dropped to $475,000:

MoM = ((475,000 - 500,000) / 500,000) × 100 = (-25,000 / 500,000) × 100 = -0.05 × 100 = -5%

Revenue would thus decrease by -5% MoM.
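The same calculation as a short Python sketch, checked against both worked examples above (the function name is illustrative):

```python
def mom_change(current: float, previous: float) -> float:
    """Month-on-month percentage change: ((current - previous) / previous) * 100."""
    return (current - previous) / previous * 100

print(round(mom_change(540_000, 500_000), 2))  # → 8.0  (% MoM, the growth example)
print(round(mom_change(475_000, 500_000), 2))  # → -5.0 (% MoM, the decline example)
```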

How is Month on Month interpreted

Month-over-month changes are typically expressed as percentages.

The notation looks like:

+5.4% MoM or -1.8% m/m

This notation shows by what percentage the indicator’s value increased or decreased compared to the previous month.

Example

Netflix recorded an increase in new subscribers of +5.4% MoM in March 2025.

This means that in March it gained 5.4% more customers than in February 2025 – so if 1 million new users were added in February, there were approximately 1.054 million in March.

Such a comparison helps quickly assess whether the company is growing or facing a short-term decline in demand, and allows for timely response to market changes.

Why month-over-month comparison is important

MoM is crucial for operational management and reporting because it enables tracking of development dynamics in a short period.

While year-over-year comparison (YoY) shows long-term trends, MoM provides a current view of performance and reveals rapid shifts in data.

It helps to better assess:

  • the effectiveness of short-term marketing and sales activities,
  • the immediate impact of price changes, discounts, or promotions,
  • the actual dynamics of cash flow and sales channels,
  • short-term fluctuations caused by seasonality or external factors.

This makes MoM an indispensable tool in every monthly financial or marketing report.

Difference from other metrics

  • MoM (Month on Month) – month-over-month comparison that tracks rapid changes and serves as an early indicator of development (the term MoM is described and explained in this article above).
  • QoQ (Quarter on Quarter) – quarter-over-quarter comparison, suitable for evaluating quarterly results.
  • YoY (Year on Year) – year-over-year comparison that shows long-term trends and eliminates seasonality.
  • CAGR (Compound Annual Growth Rate) – covers an entire multi-year period and accounts for compounding, providing the most accurate view of long-term growth.
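Since CAGR appears in the comparison above, here is a brief illustration of the standard CAGR formula with made-up numbers (the figures are not from the source):

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound Annual Growth Rate over a multi-year period, in percent."""
    return ((end_value / start_value) ** (1 / years) - 1) * 100

# Illustrative: revenue grows from 100 to 121 over 2 years (100 → 110 → 121).
print(round(cagr(100, 121, 2), 1))  # → 10.0 % per year
```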

What to watch out for with the MoM metric

When interpreting month-over-month results, it’s important to consider the influence of seasonality, holidays, vacations, or extraordinary events that may temporarily affect the outcome. The MoM metric should therefore always be supplemented with year-over-year (YoY) comparison to distinguish whether it’s a permanent trend or just a temporary deviation.