Chatbots as new sources of truth?
AI chatbots based on large language models (LLMs) cannot be regarded as new sources of truth. If used incorrectly, they can easily mislead.
For professional communication, structured context from within your own organisation is essential. Language models cannot guarantee up-to-date information and factual accuracy. They are tools for processing and generating language, not infallible knowledge stores.
Language model versus chatbot
A language model (large language model, LLM) is the underlying artificial intelligence that understands and generates human language, whereas a chatbot is the specific app or user interface that uses such a model for conversations.
Language model (LLM)
A language model is a complex deep-learning algorithm pre-trained on enormous amounts of human language. It serves as the “brain” for language processing: it recognises patterns in language, can understand, summarise and translate text, and generates human-like, context-relevant responses. Examples: OpenAI GPT-5, Google Gemini, Claude, Llama.
AI chatbot
An AI chatbot is tailored to specific use cases and interactions, often in customer service, marketing or for answering frequently asked questions. Unlike its rule-based predecessors, a modern AI chatbot does not work with fixed scripts, but relies on a large language model to enable more flexible conversations. Examples: ChatGPT, Google Assistant, Alexa, Siri, customer-service bots on websites.
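The division of labour between model and chatbot can be sketched in a few lines: the chatbot is the thin layer that keeps conversation history and hands it to a model on every turn. The `generate_reply` stub below stands in for a real model API and is purely hypothetical.

```python
# Minimal sketch: a chatbot is a conversation loop around an LLM call.
# `generate_reply` is a hypothetical stand-in for a real model API.

def generate_reply(messages):
    """Placeholder for an LLM call; a real app would query a model API here."""
    last = messages[-1]["content"]
    return f"You said: {last}"

def chat_turn(history, user_input):
    """The chatbot layer: track history, call the model, store the reply."""
    history.append({"role": "user", "content": user_input})
    reply = generate_reply(history)
    history.append({"role": "assistant", "content": reply})
    return reply

history = []
print(chat_turn(history, "Hello"))   # context accumulates in `history`
```

Swapping `generate_reply` for a different model changes the "brain" without touching the chatbot layer, which is exactly the separation the paragraph above describes.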
AI as an energy consumer
A single AI request emits roughly 1.14 to 4.32 grams of CO2, depending on the model and the complexity of the request. A typical combustion-engine car would travel only around 10 to 30 metres for the same CO2 emissions.
For responsible AI use at company scale, different language models should be chosen depending on energy consumption and purpose. Both the training and the use of models have to be considered when assessing the CO2 footprint.
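The car comparison above is simple arithmetic. As a back-of-envelope check, assuming an illustrative average of about 130 g CO2 per km for a combustion car (a figure assumed here, not taken from the text):

```python
# Back-of-envelope check: how far a car drives on one AI request's CO2.
# CAR_G_PER_KM is an assumed illustrative average, not a sourced figure.

CAR_G_PER_KM = 130.0
CAR_G_PER_M = CAR_G_PER_KM / 1000        # ≈ 0.13 g CO2 per metre

def equivalent_metres(request_g_co2):
    """Distance a car could drive for the same CO2 as one AI request."""
    return request_g_co2 / CAR_G_PER_M

print(round(equivalent_metres(1.14)))    # lightweight request → 9 m
print(round(equivalent_metres(4.32)))    # complex request → 33 m
```

The results land in the 10-to-30-metre ballpark the text cites; the exact figures shift with the per-kilometre value assumed for the car.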
Don’t use end-user apps for professional purposes
End-user apps like ChatGPT, Copilot or Perplexity do not replace a professional enterprise context. Tailored solutions ensure governance, data protection and compliance.
Preparing data stores for AI
RAG stores (retrieval-augmented generation) structure large data volumes so that AI can access them in a targeted way without altering core messages. Documents are split into chunks and indexed; at query time, the most relevant chunks are retrieved and handed to the model as context. They effectively set bookmarks in the data and make it directly usable for applications.
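The retrieval step can be sketched in miniature. Production RAG systems score relevance with vector embeddings; the toy scorer below uses plain word overlap so the example stays self-contained, and the documents and query are invented.

```python
# Minimal RAG retrieval sketch: chunk documents, index them, and fetch the
# most relevant chunks for a query. Real systems use vector embeddings;
# this toy scorer uses word overlap to stay dependency-free.

def chunk(text, size=50):
    """Split a document into fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def score(query, passage):
    """Toy relevance score: number of shared lowercase words."""
    return len(set(query.lower().split()) & set(passage.lower().split()))

def retrieve(query, chunks, k=2):
    """Return the k best-matching chunks to prepend to the model prompt."""
    return sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]

docs = chunk("Our support hotline is open Monday to Friday. "
             "Invoices are sent by email at the end of each month.", size=8)
context = retrieve("Invoices are sent when?", docs, k=1)
prompt = f"Answer using only this context:\n{context[0]}\n\nQuestion: ..."
```

The model then answers from the retrieved context rather than from its training data, which is what keeps organisational knowledge current without retraining the model.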
Nothing but probabilities?
AI language models store what they have learned as weights in a neural network, loosely analogous to human memory. They do not look facts up; instead, they assign probabilities to possible next tokens, which lets them make quick statements and predictions and generate text flexibly.
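The probabilistic core can be shown in a few lines: the model produces a raw score (logit) for every candidate next token, and a softmax turns those scores into probabilities to sample from or maximise. The vocabulary and logits below are made up for illustration.

```python
# Sketch of next-token prediction: logits for each candidate token are
# converted to probabilities with a softmax, then one token is chosen.
# Vocabulary and scores here are invented for illustration.
import math

def softmax(logits):
    """Convert raw scores into probabilities that sum to 1."""
    m = max(logits)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

vocab = ["Paris", "London", "banana"]
logits = [4.0, 2.5, -1.0]                 # scores after "The capital of France is"
probs = softmax(logits)
next_token = vocab[probs.index(max(probs))]   # greedy decoding picks "Paris"
```

Sampling from `probs` instead of taking the maximum is what makes two runs of the same prompt come out differently.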
Manual prompts vs. function calls
Free-text prompts produce different results on each run. Function calls constrain the model to generate structured JSON output that is machine-readable and can be used directly in complex applications.
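On the application side, a function-call reply is just JSON that must match an agreed schema before it is dispatched. The schema, function name and model reply below are invented examples, not any vendor's actual format.

```python
# Sketch of consuming a function-call style output: the model emits JSON
# matching a schema; the application validates it before dispatching.
# Schema, function name and reply are invented examples.
import json

REQUIRED_FIELDS = {"name", "arguments"}

def parse_function_call(model_output):
    """Validate that the model's reply is a well-formed function call."""
    call = json.loads(model_output)
    missing = REQUIRED_FIELDS - call.keys()
    if missing:
        raise ValueError(f"missing fields: {missing}")
    return call["name"], call["arguments"]

# A hypothetical model reply in function-call format:
reply = '{"name": "get_weather", "arguments": {"city": "Zurich"}}'
func, args = parse_function_call(reply)
```

Because the output is validated JSON rather than prose, the same downstream code runs identically no matter how the model phrases things internally.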