
Search Generative Experience (SGE) – What Is It?

The search engine as we know it is undergoing a seismic shift – not through a new interface or ad product, but through the rise of AI-driven search. At the heart of this transformation is SGE, short for Search Generative Experience, Google’s ambitious and controversial experiment to integrate generative AI directly into its search results.

But what exactly is SGE, how does it work, and what does it mean for users, content creators, and the web at large?

Redefining Search Through AI

Launched in mid-2023 within Google’s Search Labs, SGE represents a fundamental rethinking of how users interact with search engines. Traditionally, search has functioned as an index – a curated list of links pointing users toward third-party websites that (hopefully) contain the answers they’re looking for.

SGE, by contrast, flips that model. Using generative AI, Google now attempts to synthesize and summarize answers within the search results themselves, without requiring users to click away. When a user enters a query, SGE generates a cohesive response – often a paragraph or two – that pulls together information from multiple sources across the web. In some cases, it includes cited links below the answer. In others, the AI stands on its own.

To the user, this may look like a helpful, conversational snippet. But to the publishers, marketers, and businesses who have long relied on traditional organic traffic from Google Search, it represents something else entirely: a disintermediation of the web.

How SGE Works

SGE is built on large language models (LLMs) – similar to those powering chatbots like ChatGPT or Google’s own Gemini. When you type a question or prompt, the system doesn’t just fetch links. It processes your query, interprets the intent, and crafts a synthetic summary of relevant information based on data indexed across the web.

In most cases, Google overlays this AI-generated summary at the top of the search results page, sometimes accompanied by follow-up suggestions like “Ask a follow-up” or “Expand this answer.” The experience feels conversational and streamlined – a departure from the traditional blue-link format that has defined Google Search for decades.
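The flow described above matches the retrieve-then-summarize pattern common to LLM search products. The sketch below is a minimal illustration of that pattern, not Google's actual pipeline – the tiny corpus, the keyword-overlap ranking, and the "summarizer" are all stand-ins (a real system would use a web-scale index and an LLM):

```python
# Hypothetical sketch of a retrieve-then-summarize search pipeline.
# Nothing here is Google's real implementation; the corpus and the
# "summarizer" are stand-ins for a web index and a language model.

def retrieve(query, corpus, top_k=2):
    """Rank documents by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(terms & set(doc["text"].lower().replace(".", "").split())),
        reverse=True,
    )
    return scored[:top_k]

def summarize(query, documents):
    """Stand-in for the LLM step: stitch retrieved text into one answer
    and attach the source URLs as citations."""
    body = " ".join(doc["text"] for doc in documents)
    cites = [doc["url"] for doc in documents]
    return {"answer": body, "sources": cites}

corpus = [
    {"url": "https://example.org/solar", "text": "Solar energy costs have fallen sharply."},
    {"url": "https://example.org/nuclear", "text": "Nuclear plants have high upfront cost."},
    {"url": "https://example.org/recipes", "text": "How to bake sourdough bread."},
]

result = summarize("solar nuclear cost", retrieve("solar nuclear cost", corpus))
print(result["sources"])  # the two energy pages, not the recipe page
```

The key point the sketch makes visible: the user reads `result["answer"]` and may never visit the URLs in `result["sources"]` – which is exactly the disintermediation discussed below.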

Google describes SGE as an “experiment,” but it’s already shaping the direction of the company’s long-term search strategy.

Implications for SEO and Web Traffic

SGE introduces major uncertainty for anyone who depends on Google for visibility – especially content publishers, bloggers, ecommerce sites, and affiliate marketers. Since SGE aims to provide direct answers, many users may no longer need to click through to source websites. That means fewer pageviews, lower ad revenue, and potentially less influence for content creators.
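The revenue mechanics are easy to quantify with a back-of-the-envelope calculation. Every number below is an illustrative assumption, not a measured figure:

```python
# Illustrative estimate of ad revenue lost when an AI overview absorbs
# clicks. All numbers here are assumptions for the sake of example.

def monthly_ad_revenue(search_impressions, ctr, rpm):
    """impressions * click-through rate = visits; rpm = revenue per 1000 visits."""
    visits = search_impressions * ctr
    return visits * rpm / 1000

before = monthly_ad_revenue(1_000_000, ctr=0.05, rpm=20.0)  # assumed 5% CTR
after = monthly_ad_revenue(1_000_000, ctr=0.02, rpm=20.0)   # CTR drops to 2%
print(before, after, before - after)  # 1000.0 400.0 600.0
```

Under these assumed figures, a drop from a 5% to a 2% click-through rate wipes out 60% of the site's search-driven ad revenue, even though the site's content is still being read – inside the AI summary.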

Even more concerning is that not all AI overviews clearly credit their sources, and some generate summaries based on dozens of pages without giving any single site meaningful exposure. For small publishers, this could result in a scenario where their content is used to train or fuel an answer – without receiving any traffic or recognition in return.

This is at the heart of a growing ethical and economic debate. While Google claims its AI models are trained on publicly available data, many publishers argue that SGE effectively repurposes the labor of others – drawing from original reporting, expert analysis, and curated information to generate a free-to-consume summary that competes directly with their own content.

Content Publishers and Creators Without Compensation

What makes this particularly painful for publishers is that SGE produces no direct revenue for those whose content it depends on. Unlike a traditional search result, where a well-ranking page could lead to advertising impressions, subscriptions, or product conversions, SGE answers often eliminate the need to click altogether.

From a technical standpoint, SGE is not “creating” new knowledge – it is rephrasing, condensing, and recombining existing information. The intelligence is statistical, not conceptual. While this is useful for efficiency, it raises a fundamental question: if AI is trained on the work of journalists, researchers, educators, and domain experts – and then outputs derivative summaries that draw traffic away from those very creators – who truly benefits?

At scale, this dynamic has serious consequences. It undermines the financial viability of quality publishing, weakens the incentive to invest in original reporting, and may accelerate a race to the bottom where surface-level content dominates because deeper, well-researched work is cannibalized without compensation.

Benefits and Risks for Users

SGE changes how users access and process information, replacing navigation with synthesis. While that offers speed and simplicity, it also reshapes how trust, truth, and critical thinking function in the search experience.

Key benefits:

  • Faster access to condensed information – instead of manually visiting multiple pages, SGE provides an AI-generated summary directly in the search results. This is especially useful for users looking for high-level overviews or basic answers, cutting research time down to seconds.
  • Improved handling of multi-part or exploratory queries – SGE can respond to complex, layered questions more fluidly than traditional search. For example, a user asking “How does solar energy compare to nuclear in terms of cost and environmental impact?” may get a structured, synthesized answer without needing to read three separate sources.
  • Lower barrier to understanding unfamiliar topics – for users with limited background in a subject, or those with reading or language limitations, SGE makes it easier to absorb information without being overwhelmed by technical jargon or long-form content.
  • Better support for mobile and on-the-go use – on smaller screens where navigating multiple tabs or scrolling through long articles is frustrating, SGE offers one-screen summaries that improve usability and convenience.

Key risks:

  • Lack of source transparency and citation depth – SGE often provides answers without clearly identifying which sources were used or how they were prioritized. This makes it difficult for users to evaluate credibility, spot bias, or verify facts independently.
  • Increased risk of misinformation and hallucinations – like all large language models, SGE can confidently generate inaccurate or outdated claims. If the underlying data is flawed or misunderstood, the summary may mislead users while appearing authoritative.
  • Over-simplification of complex or controversial issues – nuanced topics (such as climate policy, medical treatments, or historical conflict) are reduced to brief, generalized answers that may obscure key debates, ethical concerns, or minority perspectives.
  • Bias toward dominant narratives and data sets – because SGE is trained on widely available and often mainstream content, it may unintentionally suppress underrepresented voices, independent research, or emerging viewpoints that haven’t yet reached algorithmic prominence.
  • Erosion of user critical thinking and media literacy – when the AI presents a ready-made answer, users may stop evaluating, comparing, or questioning sources. Over time, this can weaken essential skills like recognizing manipulation, distinguishing opinion from evidence, or understanding the limits of certainty.
  • No clear path for correction or accountability – if an SGE answer is wrong, biased, or harmful, users cannot easily report, dispute, or understand how it was generated. Unlike human authors or publishers, AI summaries lack editorial bylines or revision histories.
  • Potential normalization of passive information consumption – as users grow used to quick AI-generated conclusions, they may begin expecting all knowledge to be instantly accessible and neatly packaged. This risks flattening the value of in-depth journalism, original research, or detailed reporting.

Structural Consequences for the Web and Society


SGE doesn’t just change how users search – it transforms the economics, incentives, and architecture of the web itself. When search becomes a summary, and that summary is controlled by an AI layer, the ripple effects touch every part of the information ecosystem.

To better understand the broader benefits and drawbacks of AI, see also the full article on the biggest AI myths – what artificial intelligence can and cannot do today.

Implications for content creators and publishers:

  • Loss of traffic and monetization potential – when answers appear directly in the search results, users are less likely to click through. This erodes ad revenue, subscription conversions, and affiliate income, especially for sites that previously ranked well on informational queries.
  • Decreased visibility for small or emerging voices – AI-generated answers often favor popular or frequently cited sources, making it even harder for new, niche, or independent publishers to gain visibility through organic search or paid traffic (large publishers can afford to promote their content in ways smaller ones cannot).
  • Increased pressure to write for machines, not humans – creators may feel pushed to optimize content not for quality or clarity, but to align with AI training patterns. This leads to repetitive, formulaic content designed to be scraped or summarized rather than read and appreciated.
  • Extraction without compensation – publishers invest time, expertise, and resources into producing original content. Yet tools like SGE summarize, paraphrase, and repurpose that work without consistently attributing credit or offering any measurable return. The result is a growing gap between the effort required to create valuable content and the benefit received for doing so. Over time, that imbalance could lead to a sharp decline in quality content creation altogether, simply because it will no longer be viable to create it.

This shift is no longer theoretical – it's already happening. With SGE, Google has introduced a system called AI Overviews (AIO) that, while seemingly helpful to users, further exploits the labor of original content creators. It extracts information from news articles, blog posts, and expert sources, then rephrases that material into AI-generated summaries displayed directly in search results. Users get quick answers without visiting the source – and the original authors receive no traffic, no visibility, and no compensation.

And that's just the beginning. Right now there is no revenue-sharing option for those who create the information these systems use for training and display. No licensing agreement. No framework to reward the creators whose work fuels the system. This isn't a technical oversight – it's a systemic model of extraction: the platform captures and monetizes the value of human creativity but gives nothing back in return. To put it plainly, creators get absolutely zero. At best, this is an unsustainable model. At worst, it's a slow erosion of the open web.

Once Google begins placing ads directly inside these AI-generated responses – a move the company is already testing – the situation becomes even more paradoxical. Publishers could end up paying Google to reach users with content they themselves produced. Google becomes the summarizer, the gatekeeper, and the monetization channel, while the originators of the content are left invisible.

The consequences will be far-reaching. Producing content will become more expensive. Businesses will be forced to spend more on ads to achieve the same visibility they once earned organically. And those costs won't vanish – they'll be passed along to consumers, built silently into the price of products and services. The few seconds users save by reading an AI summary instead of visiting a site will be offset by higher prices elsewhere.

All of this is unfolding against a backdrop where Google's traditional search results are already underperforming – often flooded with low-quality, SEO-optimized pages and irrelevant ads. In many cases, SGE isn't enhancing the experience; it's compensating for a system that has stopped delivering real value. If this is the future of search, we must ask: are we improving the user experience, or just masking its collapse behind a smoother interface?

How AI Overviews Shift Traffic From Publishers to Google

Implications for platforms and institutions:

  • Concentration of influence within a single system – with SGE, Google doesn’t just guide you to information – it becomes the information. The platform becomes the default explainer, interpreter, and filter, with little transparency into how answers are assembled or what perspectives were excluded.
  • Increased risk of shaping public knowledge through unseen bias – even if unintentional, the AI’s output is shaped by its training data and model design. If the majority of content it learns from reflects a single worldview, that bias will be reflected and amplified in search responses.
  • Greater vulnerability to coordinated manipulation – if actors understand how content is ingested and surfaced by SGE, they may design entire content ecosystems around influencing AI outputs – subtly skewing what users see as “consensus”.

Implications for society and public trust:

  • A weakening of democratic knowledge structures – when people stop visiting news outlets, research institutions, or specialist platforms, those institutions lose both their audience and their civic role. SGE shortcuts the process of deliberation and dialogue – replacing it with a single, compressed output that can’t easily be questioned or debated.
  • Less friction, but also less scrutiny – SGE gives users what they want quickly, but sometimes what users need is discomfort, complexity, or contradiction. Those rarely survive algorithmic distillation. What’s fast and fluent isn’t always what’s fair or accurate.
  • A dangerous illusion of authority – users may assume that AI-generated summaries are neutral, complete, and correct. But these answers come from a probabilistic model – not a peer-reviewed source, not a journalist, not an expert. Without visible human judgment, trust becomes misplaced.

What Should Change (Not Only in SGE) in the Near Future?

If SGE is to remain part of the future of search, it must evolve to serve not only users and platforms – but also the wider ecosystem of creators, institutions, and public knowledge it relies on. A generative system that only extracts value without giving anything back risks collapsing the very web it summarizes.

For Google and AI platform designers:

  • Make sources visible by default – every AI-generated summary should clearly display the sources it draws from, and ideally, how those sources were selected or weighted. Without this transparency, users are left guessing whether the information is credible, biased, or manipulated. For example, if a summary on climate change is generated without any reference to scientific consensus or peer-reviewed research, users may be misled by surface-level or controversial takes that sound authoritative. Trust in AI cannot be built on opacity.
  • Design fair compensation models for content creators – if a publisher’s work is used to train or inform AI responses, they should receive tangible value in return – whether through attribution, licensing payments, or revenue-sharing agreements. Consider how Spotify or YouTube compensate artists through streaming revenue. A similar structure could be applied to AI: the more a piece of content contributes to AI-generated output, the more credit or payment the original creator should receive.
  • Give users control, depth, and traceability – users should not be limited to surface answers. They should be able to click into the summary and explore which sources were referenced, how the interpretation was constructed, and where to go for more detail. For example, a search about vaccine safety should allow users to dig deeper into the data sources and understand whether the summary comes from public health institutions, academic journals, or blogs.
  • Flag uncertainty and reflect complexity – not all questions have a definitive answer. AI should indicate when a topic is debated, emerging, or lacks expert consensus. Showing only one confident answer can give users a false sense of certainty. In legal, ethical, or political issues – such as abortion laws, climate policy, or economic models – SGE should make visible the diversity of expert viewpoints, not bury them.
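The Spotify-style idea in the second point can be made concrete: weight payouts by how much each source contributed to a generated answer. The sketch below is hypothetical – the contribution scores would have to come from attribution tooling for LLM outputs, which does not publicly exist today:

```python
# Hypothetical pro-rata revenue share for sources feeding an AI answer.
# The contribution scores are assumed inputs; reliable attribution of
# LLM output to sources is an open problem, not an existing API.

def share_revenue(total_revenue, contributions):
    """Split revenue proportionally to each source's contribution score."""
    total = sum(contributions.values())
    return {src: total_revenue * score / total for src, score in contributions.items()}

payouts = share_revenue(100.0, {"news-site": 3.0, "blog": 1.0, "wiki": 1.0})
print(payouts)  # {'news-site': 60.0, 'blog': 20.0, 'wiki': 20.0}
```

The mechanism itself is trivial – the hard part, and the reason no such system exists yet, is producing honest contribution scores in the first place.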

For content creators and media institutions:

  • Shift toward layered, defensible content – creators should invest in producing content with original insight, data, and expert perspectives. Content that is shallow, SEO-driven, or templated will be easily replicated and paraphrased by AI. But investigative journalism, niche expertise, and original commentary still hold defensible value. For example, an in-depth report on corruption in a local government cannot be mimicked by an LLM trained on general web data.
  • Push for legal and commercial safeguards – current copyright laws were not designed for content that is digested, paraphrased, and reassembled by AI systems. Creators must advocate for updated frameworks that recognize the economic value of their intellectual labor. This may include opt-out mechanisms, licensing protocols, or compensation structures similar to those used in the music industry.
  • Strengthen audience awareness around authorship – creators and publishers should clearly communicate how content is produced, who is behind it, and why it can be trusted. Bylines, transparency about funding or affiliations, and editorial standards will help audiences distinguish between AI-generated summaries and original human-authored journalism. In a world flooded with machine-written content, clear signals of authenticity matter.
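One opt-out mechanism already exists: Google's `Google-Extended` robots.txt token lets publishers block their content from being used to train and ground Gemini models. Note that, per Google's own documentation, it does not remove pages from Search itself, and Google has stated it does not govern inclusion in AI Overviews, which draw on the ordinary Search index:

```
# robots.txt – opt out of Google's AI-training crawling while
# remaining indexed by normal Google Search.
User-agent: Google-Extended
Disallow: /

User-agent: Googlebot
Allow: /
```

This illustrates both the promise and the limit of current safeguards: the opt-out is real, but it only covers model training, not the summarization that this article is concerned with.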

For users and institutions of knowledge:

  • Treat AI summaries as starting points, not conclusions – users should be encouraged to explore beyond the first answer. For example, if SGE provides a summary of the causes of inflation, readers should consider alternative economic theories, policy debates, and historical contexts before drawing conclusions. Critical reading and healthy skepticism are skills that must be retained, not replaced.
  • Actively support human-made knowledge – journalism, academia, and creative industries rely on public engagement and funding. Subscriptions to trusted news sources, donations to nonprofit publishers, and institutional support for libraries and research all play a role in sustaining the quality of information online. If AI becomes the default interface but draws from human work, we must ensure those human institutions survive.
  • Understand that search is not neutral (and never has been since SEO became important to companies – SGE just makes that fact more visible and obvious) – even before generative AI, search results were shaped by SEO strategies, commercial interests, and algorithmic ranking. What SGE changes is the degree to which content is paraphrased, filtered, and editorialized without visibility. When AI outputs a confident summary, it is doing more than retrieving data – it is creating a narrative. Users must recognize that these narratives are influenced by training data, design choices, and omitted context. Being informed today means questioning not only what is shown, but what has been excluded, compressed, or softened by AI design.

17 Biggest AI Myths and Misconceptions

In today’s world, where artificial intelligence permeates all areas of our lives, numerous myths and misconceptions have emerged around it. Many people succumb to illusions about AI’s capabilities and limitations, either due to ignorance or influenced by media portrayals. The following text identifies and debunks the most common misunderstandings about artificial intelligence, which often lead to exaggerated expectations or unfounded fears.

1. Myth: AI is smarter than humans

Many people believe that artificial intelligence is already smarter than humans. In reality, it’s a statistical tool that connects words and concepts based on probabilities and patterns recognized in its training data. If some AI outputs seem impressive, it may be due to low expectations or comparison with the average communication we typically experience.

AI cannot truly understand abstract concepts like quantum physics or resolve subjective questions like “does pineapple belong on pizza?”, although it can compose convincing text on these topics. Artificial intelligence can write essays, but without genuine understanding of the subject matter – humans are still needed to develop these complex topics further.

However, AI can certainly make some tasks in this process significantly easier and genuinely help accomplish a given activity faster (such as data analysis, text summarization, search, or finding fact-checking resources), or assist in developing one’s own thoughts.

2. Myth: AI is truly creative

The creative abilities of artificial intelligence are often overestimated. When AI creates a poem or musical composition, it’s not an original expression of creativity, but rather a sophisticated recombination of existing patterns – that is, material ultimately created by humans, or by AI systems that were themselves built on human input. It’s similar to a musician composing a song primarily from proven melodies and choruses – the result may sound good but lacks an authentic creative breakthrough.

AI doesn’t experience “aha” moments or any inspiration. The creation of artificial intelligence is essentially assembling a puzzle from pieces that have previously proven successful, without truly understanding their meaning or emotional impact. It lacks motivation, emotional connection to the work, and authentic artistic vision.

3. Myth: AI has real emotions

When AI appears friendly, empathetic, or angry, it’s merely simulating these states, not experiencing genuine emotions. The system was designed to use language in a way that meets human expectations in social communication.

Attributing emotions to artificial intelligence is a form of anthropomorphization – the tendency to assign human characteristics to inanimate objects. In reality, it’s a collection of algorithms without subjective experiences. A language model has no feelings, even if its responses may create the impression of personal interest or affection.

4. Myth: AI understands what it writes

AI has no real understanding of the text it generates. It resembles a highly sophisticated system for predicting subsequent words rather than a being with consciousness and comprehension. It cannot distinguish truth from fiction based on its own judgment – it merely reproduces patterns it has learned.

When AI claims something false or nonsensical, it’s not because it’s “mistaken” in the sense of human error, but because its algorithm evaluated this response as statistically probable in the given context.
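The "statistically probable" behavior described above can be demonstrated with the crudest possible language model: a bigram counter. It has no notion of truth, only of which word most often followed the previous one in its training text (the toy corpus below is invented for illustration):

```python
# A toy bigram "language model": it predicts the next word purely from
# co-occurrence counts, with no concept of truth or meaning.
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, how often every other word follows it."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, word):
    """Return the statistically most likely follower, true or not."""
    followers = model.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

corpus = "the moon is bright the moon is bright the moon is made of cheese"
model = train_bigrams(corpus)
print(predict_next(model, "is"))  # "bright" – the most frequent follower
```

Real LLMs are vastly more sophisticated, but the principle is the same: the output is whatever scores highest under the learned distribution, which is why a fluent answer can still be flatly wrong.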

5. Myth: AI has perfect memory

Despite advanced capabilities, current AI systems have significantly limited “memory.” Most can only work with information provided within a single conversation, and once the session ends, all history disappears. Even systems with activated long-term memory have considerable limitations.

AI doesn’t remember your previous interactions unless they’re part of the current context. It’s not like a human relationship, where the other party truly builds a long-term memory of your person and your preferences.
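This limited "memory" is simply a bounded context window: only the most recent conversation turns fit, and older ones are silently dropped. A minimal sketch of the mechanism (the window size and messages are invented):

```python
# Sketch of why chat "memory" is bounded: only the most recent messages
# fit into the model's context window; everything older is silently lost.

def build_context(history, max_messages=3):
    """Keep only the last max_messages turns; earlier turns vanish."""
    return history[-max_messages:]

history = ["My name is Eva", "I like hiking", "What is 2+2?", "And my name?"]
context = build_context(history, max_messages=3)
print("My name is Eva" in context)  # False – the model never "sees" that turn
```

Production systems measure the window in tokens rather than messages and may add retrieval or summarization layers, but the underlying constraint is the same: whatever falls outside the window effectively never happened.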

6. Myth: AI never makes mistakes

AI presents its answers with a conviction that can create an impression of infallibility. In reality, however, it often makes mistakes, both factual inaccuracies and logical inconsistencies. These errors can be all the more dangerous because they’re presented with a high degree of confidence.

The problem is that AI doesn’t know what it doesn’t know (or that it doesn’t know something). It lacks genuine metacognition – the ability to recognize the boundaries of its knowledge. Instead, it will attempt to generate an answer even in cases where it doesn’t have enough relevant information. Therefore, you must thoroughly verify everything that AI outputs.

7. Myth: AI will soon replace most human work


Concerns that AI will replace human work are partially justified but often exaggerated. Artificial intelligence excels in routine, repetitive tasks that can be precisely defined. Professions requiring creativity, empathy, social intelligence, critical thinking, and complex decision-making in unstructured situations remain the domain of humans.

AI likely won’t replace entire professions but rather change the nature of work. People whose jobs consist primarily of simple, predictable activities are most at risk.

Artificial intelligence will undoubtedly have a significant impact on how we use our brains and how our cognitive functioning will evolve. This influence can have both positive and negative effects on human thinking.

With the increasing availability of instant answers through AI assistants, there’s a risk that we will increasingly rely on external sources instead of building our own cognitive structures and knowledge. This phenomenon can lead to several significant changes:

  • Weakening of deep concentration ability – the immediate availability of information can disrupt our ability to focus long-term on complex problems.
  • Reduced motivation to build deep knowledge – why learn and remember facts when we can quickly look them up anytime?
  • Superficial information processing – we get used to quick but shallow content consumption without thorough analysis.
  • Dependence on external cognitive tools – the risk that we’ll stop developing our own mental abilities in favor of outsourcing thinking.

Cognitive specialization – people may begin to specialize in aspects of thinking that AI handles poorly, such as creativity, ethical reasoning, or interdisciplinary synthesis. While memory and computational abilities will be increasingly delegated to AI, humans can focus on uniquely human forms of intelligence.

New forms of literacy – there will be a need for a new type of literacy – the ability to effectively formulate queries for AI, critically evaluate its outputs, and integrate them into one’s own thinking. This “AI literacy” may become a key skill.

Cognitive symbiosis – instead of mere dependence, a more complex relationship may develop where AI and the human brain function in symbiosis, complementing each other and strengthening their respective strengths. For example, AI can process details and routine aspects of problems, while humans focus on strategic, creative, or value aspects.

Polarization of cognitive abilities – society may divide into those who can work synergistically with AI and develop their cognitive abilities, and those who become cognitively dependent and lose the ability to think critically on their own.

Transformation of educational systems – educational institutions will be forced to rethink their goals and methods. Instead of memorizing facts, education will likely focus on developing skills such as critical thinking, creativity, adaptability, and ethical reasoning, which AI cannot easily replace.

To minimize negative impacts and maximize benefits, it would be appropriate to consider the following approaches:

  • Conscious use of technology – developing “digital hygiene” and strategies for maintaining cognitive autonomy.
  • Redesign of educational curricula – greater emphasis on metacognitive skills, critical thinking, and the ability to learn.
  • Promotion of “deep reading” and focused thinking – active cultivation of the ability for deep concentration and complex reasoning.
  • Balanced approach to technology – finding a balance between leveraging the benefits of AI and maintaining one’s own cognitive abilities.
  • Intergenerational dialogue – connecting digital natives with generations who have experience with pre-digital forms of thinking and learning.

The influence of AI on human thinking is not predetermined – it depends on how consciously and strategically we approach these technologies, and on the social and educational systems we create around them.

8. Myth: AI has its own opinions and values

When AI expresses an opinion on a controversial topic, it’s not a genuine stance based on values and beliefs, but a statistical prediction of what type of response is expected in the given context. AI has no values, interests, or convictions of its own.

The consistency of AI opinions depends on the system’s settings and training data, not on an authentic moral compass. With different inputs or parameters, the same system can hold contradictory positions. See also the term Neural Network/Neural Networks, and the article Machine Learning and Artificial Intelligence – how they are related, what differentiates them, and what their practical applications are.

9. Myth: AI is perfectly objective

AI is sometimes considered an objective source of information because it’s a “machine” without personal interests. In reality, however, AI reproduces and sometimes amplifies biases and distortions contained in the data on which it was trained.

These systems are created by humans and trained on data created by humans, which inevitably introduces human perspectives and values into their operation. The apparent neutrality is an illusion – AI is not a superhuman arbiter of truth.
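Bias reproduction falls out of the statistics directly: a model that simply predicts the most common answer in its training data will echo whatever imbalance that data contains. A deliberately tiny sketch (the dataset is invented):

```python
# A trivially "trained" predictor that outputs the most common answer in
# its skewed training data – it faithfully reproduces the skew.
from collections import Counter

def train_majority(labels):
    """Learn nothing but the majority label of the training set."""
    return Counter(labels).most_common(1)[0][0]

# 9 of 10 training examples reflect one viewpoint:
training_data = ["view_A"] * 9 + ["view_B"]
prediction = train_majority(training_data)
print(prediction)  # "view_A" – the minority view never surfaces
```

Real models are far subtler, but the failure mode is the same in kind: what is rare or underrepresented in the training data tends to disappear from the output entirely.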

10. Myth: AI perfectly handles technical tasks

In technical fields such as programming, AI can appear particularly competent, but even here it has significant limitations. Code generated by artificial intelligence often contains errors, inefficient procedures, or security risks that require human review and correction.

AI has no real understanding of the problem domain or the ability to test the functionality of its solutions. Programmers with critical thinking remain essential for developing reliable and efficient software.

11. Myth: AI will soon become conscious


With the growing presence of artificial intelligence in everyday life, there’s also an increasing number of myths, distortions, and misunderstandings about what these systems truly are – and what they are not. Modern language and multimodal models, such as ChatGPT, Gemini, or Claude, can create texts, translate, analyze documents, or create images and videos. These capabilities appear sophisticated, so it’s no wonder that in some media and public debates, there are claims that artificial intelligence “is almost thinking” or is “on the threshold of consciousness.” These claims are, however, fundamentally misleading.

For example, in 2022, a statement by Google engineer Blake Lemoine garnered strong media attention when he stated in an interview with The Washington Post that the language model LaMDA “is self-aware” and “wants to be respected as a person.” Google officially rejected this claim, stating that LaMDA does not have consciousness and that the engineer approached the system with human bias. Nevertheless, this opened a wave of speculation about whether we are witnessing the emergence of a new form of intelligence.

In reality, however, current artificial intelligence – even in 2025 – does not possess any consciousness, self-awareness, or understanding in the human sense.

AI today:

  • has no subjective experience (qualia),
  • does not perceive itself as an entity,
  • does not create autonomous goals,
  • is not aware of the consequences of its behavior,
  • has no inherent motivation or values.

AI is a computational tool. Its outputs are based on the analysis of an enormous amount of data and learning based on probabilistic patterns. It cannot understand the meaning of words, does not think about the world, does not ask questions, and has no internal experiences. It merely mimics patterns it has learned from previously existing data. By being able to write human-sounding texts, it creates the illusion of understanding, but it’s just sophisticated mirroring.
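
The “probabilities of the next words” idea can be made concrete with a toy sketch. The tiny hand-written distribution below is purely illustrative (real models learn billions of parameters from data), but the mechanism is the same in spirit: the system samples the next word from a learned probability distribution, with no understanding anywhere in the loop.

```python
import random

# A toy "language model": for each context word, a probability
# distribution over possible next words. The numbers here are
# invented for illustration -- real models learn them from data.
model = {
    "the": {"cat": 0.5, "dog": 0.3, "sky": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"barked": 0.7, "slept": 0.3},
}

def next_word(context, rng):
    """Sample the next word from the learned distribution."""
    dist = model[context]
    words = list(dist)
    weights = [dist[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

rng = random.Random(42)       # fixed seed for reproducibility
sentence = ["the"]
while sentence[-1] in model:  # stop when no continuation is known
    sentence.append(next_word(sentence[-1], rng))
print(" ".join(sentence))
```

The output is grammatical-looking text, yet nothing in the program “knows” what a cat or a dog is – which is exactly the illusion of understanding described above.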

It’s important to remember that AI itself does not want, feel, or desire. It has no motivations or goals beyond those explicitly assigned to it. And although in 2023, warnings about “uncontrolled superintelligence” were voiced from some technological circles (for example, a letter signed by Elon Musk, Steve Wozniak, and others), none of the existing systems comes even remotely close to autonomous decision-making. Even those who signed these letters often acknowledge that the risk is not in the current technology, but rather in how people handle it.

What would be needed for AI to have consciousness?

To even talk about AI potentially achieving consciousness, we would first need to resolve the question of what consciousness actually is. And we cannot do that yet. Philosophers, neurologists, and cognitive scientists have been trying to answer this question for decades, but without a consensus. We don’t know exactly what in the brain causes us to have subjective experience, so-called qualia. Without understanding this phenomenon, we cannot build an artificial system that would experience something similar.

Furthermore, conscious AI would likely need to have the ability for long-term self-reference, that is, awareness of its own existence in time and space, the ability to understand the consequences of its actions, and create autonomous goals. Current systems cannot do any of this. Artificial intelligence has no internal motivations because it has no “self.” There is no center of consciousness, no unified subject – only layers of neural networks calculating the probabilities of the next words.

When could it happen? Expert time estimates

Scientists and technologists differ significantly in their estimates. All the predictions below remain speculation, because no one today knows exactly what will be technically possible, or when. Developments in recent years make clear that AI as a field is innovating at an incredible pace, yet nobody knows how quickly truly conscious artificial intelligence could arrive, who should build it, or what form it would actually take – full consciousness, or merely an autonomous agent that can pursue an assigned goal on its own.

It’s also unclear who will develop it first. The hot candidates are the USA and China, possibly Israel, and Europe may not be far behind if it manages to board the already moving train in time; recent geopolitical turbulence driven by unpredictable figures such as Elon Musk or Trump and Putin shows that even a functioning world economy can be shaken within months, so the cards may yet be dealt in many ways. Many of these statements nevertheless converge on one point: it’s not a question of whether it will happen, but when. Estimates range around 5-30 years (i.e., sometime around 2030-2060), and as the technology and AI sector advance by leaps and bounds, those estimates keep shortening. I definitely recommend reading the analysis “When Will AGI/Singularity Happen?” if you’re more interested in this issue.

  • 2019 – Nick Bostrom (philosopher and futurist) updated his estimates in the publication “When Will AGI Be Created?” and stated that there is a “50% chance of achieving human-level intelligence by 2045,” defining this level as the ability to perform most economically relevant tasks better than humans.
  • 2020 – Ilya Sutskever (co-founder of OpenAI) stated in an interview for MIT Technology Review that AGI (artificial general intelligence) could emerge “in the next 5-10 years,” emphasizing that it’s about the technological ability to solve problems, not consciousness.
  • In 2021, Demis Hassabis (founder of DeepMind) suggested that “full-fledged AGI is possibly still decades away” and that current progress represents just the initial steps.
  • 2022 – Sam Altman (CEO of OpenAI) estimated that systems with “human-level intelligence” may appear “in the coming decade,” but cautioned that these would be tools optimized for specific tasks, not necessarily conscious entities.
  • 2023 – Geoffrey Hinton (pioneer of deep learning) warned that “within 5-20 years,” there is a 50% probability that AI capable of acting as an agent with its own goal may emerge. However, he did not say it would be conscious – just more autonomous.
  • 2024 – Ray Kurzweil (Google futurist) reiterated his prediction that by 2029, artificial intelligence will pass the Turing test, performing most cognitive tasks – language comprehension, problem-solving, and learning – as well as humans. He first made this prediction in his 2005 book “The Singularity Is Near” and confirmed it in its 2024 update, “The Singularity Is Nearer.” Kurzweil further predicts that by 2045 we will reach technological singularity – a hypothetical point at which technological growth becomes so rapid and complex that human intelligence can no longer keep pace. In his vision, AI will then be able to improve itself without human intervention, leading to an exponential increase in intelligence, while nanotechnologies integrated into the human body will communicate with the brain and expand human cognitive abilities. People could achieve “digital immortality” through backing up consciousness, eliminate disease and aging, and access unlimited knowledge. These predictions are, however, speculative and raise a number of ethical, social, and technological questions; the discussion about technological singularity and its implications continues among experts and the public.
  • 2025 – Dario Amodei, CEO of Anthropic, predicts that superintelligent AI, which will surpass human intelligence in most areas, could be developed as early as 2026. This AI could fundamentally change society, similar to the industrial revolution.

What would it mean if AI actually gained consciousness?

Although there is currently no evidence that any machine actually possesses consciousness, the question of its emergence is no longer considered purely hypothetical. A number of technology companies, research teams, and state institutions are actively working on developing systems that could approach consciousness in some form. Development is moving from narrowly specialized models to so-called artificial general intelligence (AGI), which would be capable of independent learning, complex decision-making, and application of knowledge across various fields.

It’s therefore not a question of whether it’s possible, but when, under what conditions, and with what consequences. This trend is fueling a global technological race in which individual states and corporations try to gain an advantage by being the first to develop a fully autonomous and intelligent system. The motivation is not only technological prestige but primarily the expected economic, informational, and military dominance that such a system could secure.

However, truly conscious or highly autonomous AI could pose a significant risk, especially if it acted outside the framework of human instructions or contrary to our values. Threats include not only technological failure but also fundamental ethical dilemmas – for example, in the question of the rights of an artificial entity, its legal status, responsibility for its actions, or the possibility of “turning it off.” Systems capable of independent decision-making without human control could be extremely efficient but also unpredictable.

Special attention should be paid to the potential for misuse of artificial intelligence in the military sphere. Development projects for so-called autonomous weapons systems already exist today, capable of identifying and eliminating targets without human intervention. If these technologies were combined with powerful AI, it would create space for conflicts conducted by machines that could escalate without human decision-making and without moral restraints. Such a scenario would mean a fundamental disruption of international law, security, and ethical norms. There is a risk that in the pursuit of technological superiority, security principles and regulations will be pushed to the sidelines.

To prevent scenarios that we know today primarily from dystopian visions, it is necessary to emphasize the development of so-called safe AI. The key is to focus on research of so-called alignment – aligning the goals and decision-making processes of AI with the values and interests of humanity. Transparency of development, thorough testing, international cooperation, and legal frameworks that clearly define the limits of deploying these technologies are also essential. At the same time, it is important to have a public debate about what boundaries we as a society are willing to accept – even if technology allows for creating something that truly approaches human consciousness.

12. Myth: AI is a threat mainly because of its intelligence

Many concerns about AI focus on scenarios where systems become “too intelligent” and take control. However, real risks often lie elsewhere – in how people deploy, use, and potentially misuse these technologies.

More realistic threats include the automation of disinformation, mass surveillance, manipulation of public opinion, discrimination through biased algorithms, or concentration of power in the hands of those who own the most advanced AI systems. These problems don’t require conscious AI – just a person who uses it for problematic purposes.

13. Myth: AI is energy efficient or environmentally neutral

Many people don’t realize the energy demands of operating advanced artificial intelligence systems. Training large language models consumes an enormous amount of electricity and water (for cooling). For example, training a single large language model can produce a carbon footprint comparable to several years of airplane flights around the world.

Current data centers with AI computations can consume an amount of electricity equivalent to a smaller city. This trend continues with each new, more powerful generation of models. With the growing implementation of AI in various sectors, its environmental impact is also growing, which is in direct conflict with global climate goals.
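
The scale of the energy claim above can be sanity-checked with a back-of-envelope calculation. Every number in the sketch below is an illustrative assumption (accelerator count, power draw, training duration, grid carbon intensity), not a measured figure for any real model – the point is only that the arithmetic lands in the gigawatt-hour range.

```python
# Back-of-envelope estimate of training energy use.
# All figures below are illustrative assumptions, not measured values.
gpus = 10_000        # assumed number of accelerators
watts_per_gpu = 700  # assumed average power draw per accelerator (W)
training_days = 90   # assumed training duration
pue = 1.2            # assumed data-center Power Usage Effectiveness

hours = training_days * 24
energy_kwh = gpus * watts_per_gpu / 1000 * hours * pue
co2_tonnes = energy_kwh * 0.4 / 1000  # assumed 0.4 kg CO2 per kWh

print(f"~{energy_kwh / 1e6:.1f} GWh, ~{co2_tonnes:,.0f} t of CO2")
```

Even with modest assumptions, the result is on the order of tens of gigawatt-hours and thousands of tonnes of CO2 – consistent with the comparison to years of air travel.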

14. Myth: AI has access to all information

The average user often assumes that AI “knows everything” or has direct access to the internet. In reality, current AI systems are limited by their training data – they cannot “google” or search for current information in real-time (unless they are specially connected to a search engine).

This means that AI may have outdated or incomplete information about current events, new scientific discoveries, or changing circumstances. Some more advanced systems may be integrated with search tools, but independent access to current information is not a standard feature of AI.
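
The difference between a bare model and a search-connected one can be sketched schematically. Everything below is hypothetical – `web_search` is a placeholder, not a real API, and the cutoff date is invented – but it illustrates the architectural point: fresh information must be retrieved and injected into the prompt, because the model alone only reflects its training data.

```python
from datetime import date

TRAINING_CUTOFF = date(2023, 12, 31)  # assumed knowledge cutoff

def plain_model_answer(question: str) -> str:
    # A bare model can only reflect what was in its training data.
    return f"(answered from data up to {TRAINING_CUTOFF})"

def web_search(query: str) -> list[str]:
    # Placeholder for a real search API -- purely hypothetical.
    return [f"snippet 1 about {query}", f"snippet 2 about {query}"]

def retrieval_augmented_answer(question: str) -> str:
    # Retrieve fresh snippets and hand them to the model as context.
    snippets = web_search(question)
    context = "\n".join(snippets)
    # In a real system, the model would now be prompted with `context`.
    return f"(answered using {len(snippets)} fresh snippets, {len(context)} chars of context)"

print(plain_model_answer("Who won the latest election?"))
print(retrieval_augmented_answer("Who won the latest election?"))
```

This retrieval step is exactly the “special connection to a search engine” mentioned above – without it, the system is frozen at its training cutoff.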

15. Myth: AI is a purely digital phenomenon

AI is not just abstract software existing only in the digital world. Its operation has very real material and physical foundations – from rare metals in hardware, through data centers occupying thousands of square meters, to water resources used for cooling servers.

This material dimension has implications that are not only environmental but also geopolitical and economic. States and corporations compete for control over rare resources essential for the development of AI infrastructure, creating new forms of international tension and inequalities.

16. Myth: AI is always “better” than human decision-making

There is a tendency to assume that algorithmic decision-making is inherently more objective, efficient, or “better” than human decision-making. In reality, there are many contexts where human judgment, intuition, and ethical consideration provide better results.

AI systems can be effective in optimizing clearly defined parameters, but often lack the ability to consider the broader context, moral aspects, or non-standard circumstances that humans can intuitively process. Excessive trust in algorithmic decision-making can lead to the dehumanization of processes where human judgment is irreplaceable.

17. Myth: AI development is inevitably progressive and linear

Many discussions about AI assume that the development of this technology will automatically continue toward greater autonomy, complexity, and human similarity. However, the history of technologies shows that development is often non-linear, with detours, stagnation, and backward steps.

Social, economic, and regulatory factors can radically change the trajectory of AI development. It’s possible that the future will include more specialized, limited systems instead of general “superintelligence,” or that the preferred direction will be the development of hybrid systems that combine human and machine intelligence, rather than fully autonomous AI.


Bookmark – what is it?

In our fast-paced digital world, the humble bookmark has become an essential tool that many of us take for granted. Yet this simple feature can transform your online experience from chaotic to streamlined with just a few clicks. Let’s explore why bookmarks matter and how they can make your digital life better.

What Is a Bookmark?

A bookmark is a saved reference that a user sets individually for an Internet page. The bookmark is stored in a folder within the browser, so that it can be accessed easily at any time and favorite pages can be reached quickly.

Think of bookmarks as your personal internet map—pinpointing the locations that matter to you amid the vast online landscape. Instead of trying to remember complex web addresses or searching repeatedly for the same sites, bookmarks give you instant access to your favorite online destinations.

Digital Bookmarks

A digital bookmark is the browser-based counterpart of the classic paper bookmark. Pages on the Internet that are particularly interesting or important for the user, that need to be found again quickly, or that simply should not be lost can be saved as bookmarks in a list that can be called up via the browser. This ensures fast access and eliminates the need to use a search engine.

A mouse click on the corresponding bookmark immediately opens the saved link. Bookmarks can have different names depending on the browser used: some call them “Favorites”, others “Bookmarks”. Historically, they were often stored in an extended HTML file, for example by Mozilla or Lynx, while Opera kept them in a specially formatted text file; modern browsers typically store them in an internal database but can still export them to the classic HTML format.
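
That exported HTML format (the so-called Netscape bookmark format, which Chrome, Firefox, and Edge all produce via their export functions) is easy to read programmatically. A minimal sketch using Python’s standard-library `html.parser` – shown here with an inline sample rather than a real export file:

```python
from html.parser import HTMLParser

class BookmarkParser(HTMLParser):
    """Collect (title, url) pairs from an exported bookmarks file."""

    def __init__(self):
        super().__init__()
        self.bookmarks = []  # list of (title, url) tuples
        self._href = None
        self._title = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":  # each bookmark is an <A HREF="..."> element
            self._href = dict(attrs).get("href")
            self._title = []

    def handle_data(self, data):
        if self._href is not None:
            self._title.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href:
            self.bookmarks.append(("".join(self._title).strip(), self._href))
            self._href = None

# A tiny inline sample in the export format; a real file would be
# read from disk with open("bookmarks.html").read().
sample = '<DL><DT><A HREF="https://example.com">Example</A></DL>'
parser = BookmarkParser()
parser.feed(sample)
print(parser.bookmarks)  # [('Example', 'https://example.com')]
```

Because the format is just HTML anchors inside nested lists, the same approach scales to full exports with folders, dates, and favicons.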

The average person visits dozens of websites weekly. Bookmarks eliminate the frustration of:

  • Forgetting exact website addresses
  • Repeatedly searching for the same information
  • Losing track of useful resources you’ve discovered
  • Navigating through multiple pages to reach your destination

How to Create Bookmarks in Different Browsers

Creating bookmarks is a simple process, but the exact method varies by browser. Here’s how to bookmark pages in the most popular browsers:

Google Chrome

  • Desktop: Click the star icon in the address bar or press Ctrl+D (Windows/Linux) or Command+D (Mac)
  • Mobile: Tap the three-dot menu and select the star icon
  • Show/hide bookmark bar: Ctrl+Shift+B (Windows/Linux) or Command+Shift+B (Mac)

Mozilla Firefox

  • Desktop: Click the star icon in the address bar or press Ctrl+D (Windows/Linux) or Command+D (Mac)
  • Mobile: Tap the three-dot menu and select “Add Bookmark”
  • Show/hide bookmark bar: Ctrl+Shift+B (Windows/Linux) or Command+Shift+B (Mac)

Microsoft Edge

  • Desktop: Click the star icon in the address bar or press Ctrl+D (Windows/Linux) or Command+D (Mac)
  • Mobile: Tap the star icon at the bottom of the screen
  • Show/hide favorites bar: Ctrl+Shift+B (Windows/Linux) or Command+Shift+B (Mac)

Safari

  • Desktop: Click “Bookmarks” in the menu bar and select “Add Bookmark” or press Command+D
  • Mobile: Tap the share icon (square with arrow) and select “Add Bookmark”
  • Show/hide bookmarks bar: Command+Shift+B

Opera

  • Desktop: Click the heart icon in the address bar or press Ctrl+D (Windows/Linux) or Command+D (Mac)
  • Mobile: Tap the heart icon at the bottom of the screen
  • Show/hide bookmark bar: Ctrl+Shift+B (Windows/Linux) or Command+Shift+B (Mac)

Organization of Bookmarks

Bookmarks are stored as links, but most browsers offer the possibility to give them meaningful titles. Furthermore, bookmarks can be sorted and organized in folders to keep a better overview. Next to the URL of many bookmarks a small logo is displayed, a so-called favicon, which visually distinguishes and identifies the page among other pages.

What truly matters is creating a system that works for you. Consider organizing bookmarks by:

  • Purpose – group sites by what you use them for—work, shopping, entertainment, research
  • Frequency – keep daily-use sites easily accessible, with less frequent destinations tucked into folders
  • Projects – create collections related to specific short-term goals, like planning a trip or researching a purchase

To create folders in most browsers, simply right-click in the bookmarks area and select “Create new folder” or a similar option. You can then drag and drop your bookmarks into these folders for better organization.

The Bookmark Bar

The bookmark bar is prime real estate in your browser—displayed directly under the address bar for immediate access. This space deserves your most-visited sites, as it provides one-click access without opening menus.

A clever trick: Remove the text from instantly recognizable sites (like Gmail or Facebook) and leave only their icons to fit more bookmarks in this valuable space.

Live Bookmarks

In addition to classic bookmarks, there are so-called live or dynamic bookmarks, where the preferred pages are reloaded and refreshed so that the latest content can always be read. Some browsers also remember the last or most frequently visited pages and place them on the browser’s start page to allow quick access to favorite pages.

When a new tab or window is opened, these favorite pages appear as a thumbnail view that the user only needs to click to open. Firefox pioneered the live bookmark concept with RSS feed integration (a feature it has since retired), while Chrome and Edge offer similar functionality through their “New Tab” pages.

Bookmark Syncing

Modern browsers offer the ability to synchronize bookmarks across different devices. By signing in to your browser account (like a Google account for Chrome or Firefox account), your bookmarks can be available on all your devices, including smartphones and tablets. This ensures consistent access to your favorite sites regardless of which device you are using.

To enable sync in most browsers:

  1. Sign in to your browser account
  2. Go to settings and find the sync options
  3. Make sure bookmarks/favorites are selected for synchronization

Social Bookmarks

In contrast to the internal favorites list, which is tied to a single browser profile, social bookmarks are published on the Internet and are thus available to a larger group of people. There, links can be updated, added, commented on, or rated by several people.

If the user’s own page is entered in the relevant services, interested users can simply adopt it and add it to their personal list of favorites. Popular social bookmarking services include Pinterest, which focuses on visual content, and platforms like Reddit, which combine social bookmarking with discussion features.

Bookmark Security and Privacy

When using bookmarks, it’s important to consider privacy implications. Synced bookmarks may contain sensitive information, such as links to financial institutions or personal accounts. Most browsers offer options to exclude certain bookmarks from syncing. Additionally, be cautious when using shared computers, as others may access your bookmarks if you’re logged into your browser account.

Other Types of Bookmarks

  • Reading Bookmarks – the traditional bookmark in the literal sense is a marker used to keep track of where you stopped reading in a physical book. These can be simple paper strips, decorative cardboard pieces, or even magnetic clips that attach to pages. Digital reading applications like Kindle, Apple Books, and PDF readers also include virtual bookmarking features that save your reading position.
  • Bookmark in Finance – in financial terminology, a “bookmark” can refer to a saved view or configuration in trading or financial analysis software. These bookmarks allow traders and analysts to quickly return to specific chart configurations, data views, or analysis parameters.
  • Website Bookmarking Services – beyond browser bookmarks, dedicated bookmarking services like Pocket, Instapaper, and Raindrop.io offer enhanced features such as offline reading, tagging systems, and full-text search of saved content. These services typically work across different browsers and devices through dedicated applications or browser extensions.
  • Programming Bookmarks – in programming and development environments, bookmarks are markers set within code files to allow developers to quickly jump between different sections of the codebase. This functionality is available in most code editors and integrated development environments (IDEs) and helps navigate through large files efficiently.
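
The tagging and full-text search that dedicated bookmarking services add over plain browser bookmarks can be modeled in a few lines. The sketch below is a purely illustrative in-memory model – it does not reproduce the API of Pocket, Instapaper, or Raindrop.io – but it shows why tags and searchable notes make large collections navigable in a way flat folder trees are not.

```python
class BookmarkStore:
    """Minimal in-memory model of a tagged, searchable bookmark service."""

    def __init__(self):
        self.items = []  # each item: dict with url, title, tags, notes

    def add(self, url, title, tags=(), notes=""):
        self.items.append(
            {"url": url, "title": title, "tags": set(tags), "notes": notes}
        )

    def by_tag(self, tag):
        # Tag lookup: one bookmark can carry many tags, unlike a
        # folder, where it lives in exactly one place.
        return [i for i in self.items if tag in i["tags"]]

    def search(self, text):
        # Case-insensitive full-text search over titles and notes.
        text = text.lower()
        return [
            i for i in self.items
            if text in i["title"].lower() or text in i["notes"].lower()
        ]

store = BookmarkStore()
store.add("https://example.com/ai", "AI myths", tags=["ai"], notes="debunking article")
store.add("https://example.com/bm", "Bookmarks 101", tags=["browser"])
print([i["title"] for i in store.by_tag("ai")])      # ['AI myths']
print([i["title"] for i in store.search("debunk")])  # ['AI myths']
```

Real services layer offline caching, cross-device sync, and browser extensions on top of exactly this kind of tagged store.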