The oracle

Importance of context

The growing importance of context reflects a methodological shift from early, heuristic-based prompting to a rigorous framework for interacting with large language models. While initial techniques relied on persona adoption and emotional stimuli, modern models respond best to structured, unambiguous input. We analyse the “3 Cs” (Clarity, Concreteness, and Context) as essential pillars for mitigating hallucinations.

One might think the ancient Greeks were lucky, being able to consult oracles about what the future held. But if we think twice about it, things couldn’t have been that simple.

If you wanted to consult the Oracle of Delphi, for instance, it wasn’t exactly a walk in the park. You had to travel for weeks along paths crawling with bandits, sleep in inns of questionable hygiene, sacrifice a goat (which was perfectly innocent, poor thing), and wait patiently for the Pythia to finish inhaling the ethylene vapours wafting from a geological chasm.

Only then, with eyes rolled back and a cavernous voice, would she utter an unintelligible phrase that required a team of linguists to decipher. If you asked, “Will I win the war?”, she would reply, “A great empire shall fall.” If you lost, it was your empire; if you won, it was your neighbour’s. The success of the prediction didn’t depend on the priestess’s clarity, but on the client’s ability to interpret a riddle specifically designed so that the house never loses.

Thus, the oracle employed a fascinating technique: ambiguity as a tool for infallibility, something we humans simply love. We are fascinated by the idea of a superior entity that, with just a glance at our tribulations, is capable of deciphering destiny without us having to make the cognitive effort of explaining what is actually happening to us. From Roman augurs reading the flight of birds to the newspaper horoscope promising “changes at work,” history proves that we prefer a mystical and comforting lie over a truth that requires providing concrete data and working through it.

And here we are, in the heart of the 21st century, swapping sulphur vapours for silicon chips, yet behaving exactly like a superstitious farmer from ancient Greece. We stand before ChatGPT or Gemini, throw out a vague phrase like “help me with my business,” and expect the Binary Oracle to magically divine our deepest desires, our balance sheets, and our existential fears.

But I have bad news: unlike the Pythia, large language models (LLMs) do not enter a mystical trance or roll their eyes while channelling Apollo. If your strategy for obtaining answers is based on divination rather than context, you aren’t doing science; you are playing semantic lottery with a machine that hallucinates more than the priestess of Apollo after a double shift without a gas mask.

So, get ready to understand how we should address our binary oracle to obtain predictions. In this post, we are going to talk about a very current concept: the importance of context.

A brief history of a failed invocation

To understand why 90% of people use artificial intelligence (AI) wrong, we have to look back… even if “back” in this field means a mere 18 months ago, a span that in internet years is equivalent to several centuries.

At the dawn of this revolution, when models were more dim-witted (we’re talking about the AI Pleistocene, back in late 2022), prompt engineering became a bizarre blend of bureaucracy and superstition.

We invented pompous acronyms like RISEN (Role, Instructions, Steps, Expected Context, Narrowing Style) to feel like we had some semblance of control over the black box. It wasn’t enough to just ask nicely; you had to fill out a mental form worthy of a government office: defining the interlocutor’s role (“you are a world expert in quantum physics and a part-time cattle rancher”), giving the order and objective with surgical precision, providing the context (the famous 5 Ws of journalism: Who, What, Where, When, and Why, forcing us to tie up any loose ends the AI might use to strangle us), and defining the desired style of the response.
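To make the bureaucracy tangible, here is a minimal sketch of the kind of form-filling prompt these frameworks encouraged. Every field is invented purely for illustration (physicist-rancher included):

```python
# A sketch of a RISEN-style prompt template; all field contents are made up.
risen_prompt = "\n".join([
    "Role: you are a world expert in quantum physics and a part-time cattle rancher.",
    "Instructions: explain quantum entanglement to a curious teenager.",
    "Steps: define the concept, give an everyday analogy, close with one open question.",
    "Expected context: the reader knows secondary-school physics, nothing more.",
    "Narrowing/style: under 200 words, informal tone, no equations.",
])

print(risen_prompt)
```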

But the most delusional part was the secret ingredient, that magical touch we added in a whisper: emotional bribery. Yes, friends, desperation led us to include instructions in technical guides like: “Try hard, this is very important to me, my career depends on it,” or my absolute favourite, “I’ll tip you $1,000 if you do a great job.”

 We treated the algorithm with a mix of engineering (iterating, modularizing complex questions, checking for coherence) and emotional blackmail, convinced that if we made the GPU feel sorry for us or promised it Monopoly money, the neural network would try harder out of pure digital greed. And the worst part, the part that fuels our paranoia, is that back then… sometimes it actually worked.

But technology moves fast, and what was law yesterday is a relic today. Current models, trained on obscene amounts of data and refined with RLHF (Reinforcement Learning from Human Feedback), no longer need you to tell them to put on an Einstein costume just to add two plus two.

Nowadays, the “Role” has been dethroned by context. Telling an AI to “act like a marketing expert” is irrelevant and redundant if you don’t give it the data for your product, your audience, and your goals. It’s like hiring the best architect in the world, sitting them at an empty table, and saying, “act like an architect and build me something pretty,” without telling them if you want a suspension bridge, a brutalist skyscraper, or a doghouse.

The death of divination: the importance of context

When it comes to useless answers or machine hallucinations, experts of the stature of Andrew Ng and research teams from the likes of Google and IBM have reached an empirical conclusion after thousands of hours of testing, a conclusion that bruises the average user’s ego: the problem isn’t the machine; it’s you (and your absolute lack of specificity).

The new “holy trinity” of prompting isn’t magical; it’s methodological and unbearably rational: clarity, concreteness, and context.

But what do these terms actually mean outside of a boring corporate PowerPoint? You might ask, intrigued.

Clarity is the total absence of ambiguity. If you want a numbered list, ask for a “numbered list”; don’t say “give me some loose ideas.” AI doesn’t understand subtext, nor does it have the ability to read your mind through your webcam (yet). Despite what some might think, the “A” in “AI” stands for Artificial, not Augury.

On the other hand, concreteness is the vast difference between asking for something “brief” (which, for a Victorian novelist, could mean 50 pages of landscape descriptions) and asking for “a text of exactly 150 words.” Numbers are your friends; vague and subjective adjectives are the mortal enemies of precision.

And thirdly, context is the new king of the party, the treasure map. It’s the “who,” “where,” “what for,” and “on what budget.” Without context, a request is just static noise in the digital vacuum.

We can understand the utility of these 3 Cs with an everyday example. If you tell a taxi driver “take me to somewhere fun,” you could end up at a clown convention, an abandoned amusement park, or an underground fight club, depending on the driver’s criteria (and criminal record). If you say “take me to the cheapest Italian restaurant downtown that has gluten-free options and is open now,” the margin for error almost entirely disappears.
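In prompt form, the contrast looks something like this minimal sketch, where the bookshop, its figures, and the requested output are all invented for illustration:

```python
# The same request twice: once as a vague plea, once with the 3 Cs applied.
vague_prompt = "Help me with my business."

specific_prompt = (
    # Context: who we are and what the numbers look like.
    "My business is an online bookshop with 2,000 monthly orders, an average "
    "basket of 28 euros and a 40% cart-abandonment rate.\n"
    # Clarity: one unambiguous task.
    "Propose ways to reduce cart abandonment.\n"
    # Concreteness: measurable constraints instead of vague adjectives.
    "Give me a numbered list of exactly 5 actions, each under 25 words, "
    "ordered by expected impact."
)

print(specific_prompt)
```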

The current golden rule is simple and brutal: the more information we give the model, the less it has to invent. The famous AI hallucinations are often not system failures or the machine’s delusions of grandeur, but rather desperate attempts by the model to fill the gaping holes of a vague request. If you don’t provide the context, the AI makes it up to please you, just as the Pythia made up that an empire would fall, simply to cover her back and make sure she got her goat.

From magic to engineering

If we manage to overcome the childish temptation to use AI as a crystal ball, we will discover that there are real cognitive tools to sharpen our aim, proven techniques that turn divination into precision engineering.

The first and most effective is to stop abstractly describing what we want, something difficult even between humans who speak the same language, and start teaching. Through few-shot prompting, instead of asking for “an ironic and casual tone,” we give the model three examples of our best previous sarcastic texts and tell it: “imitate this exact pattern.” One well-placed example is worth more than a thousand qualifying adjectives that the machine might misinterpret.
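A minimal sketch of what that looks like in practice, assuming the sarcastic samples and the task are our own (here they are invented for illustration):

```python
# Few-shot prompting: show the model the pattern instead of describing it.
examples = [
    "Oh great, another Monday. Truly the universe's finest invention.",
    "Sure, let's schedule one more meeting that could have been an email.",
    "Nothing says 'innovation' like a slide deck with 80 bullet points.",
]

task = "Write a two-sentence update announcing that our product launch is delayed."

prompt = (
    "Here are three samples of the tone I want:\n\n"
    + "\n".join(f"Example {i + 1}: {text}" for i, text in enumerate(examples))
    + "\n\nImitate this exact pattern and tone for the following task:\n"
    + task
)

print(prompt)  # paste into your chat model of choice, or send it via its API
```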

But sometimes the problem isn’t the style, but pure deductive logic. This is where chain of thought comes into play. Models, much like a nervous student in an oral exam who just wants to get it over with, tend to blurt out the first probabilistic answer they calculate.

If we force them to think “step by step” or to break down their reasoning point by point before concluding, accuracy skyrockets (even though newer models already do this in their digital subconscious, processing internally before spitting out the final token). This is the foundation of reasoning LLMs and, in practice, as simple as asking it to “think step by step” through the answer we’ve requested.
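As a minimal sketch, with an invented business question, the difference is literally one extra instruction:

```python
# Chain of thought: the same question, with and without explicit reasoning steps.
question = (
    "A client pays 1,200 euros per year, annual churn is 20%, and acquiring "
    "a client costs 800 euros. Is the acquisition profitable?"
)

direct_prompt = question

cot_prompt = (
    question
    + "\n\nThink step by step: lay out each intermediate calculation "
      "(expected customer lifetime, lifetime value, comparison with the "
      "acquisition cost) before giving your final answer."
)

print(cot_prompt)
```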

And for those times when even we don’t know which path to take, we can resort to the tree of thoughts. This is an advanced technique that asks the AI to explore multiple branches of possibilities, simulate scenarios, and compare results before deciding, marking the difference between a mediocre fortune cookie response and a brilliant strategy.
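In code, the skeleton of that branch-evaluate-select loop might look like this minimal sketch, where ask_model() is a hypothetical placeholder for whatever LLM client you actually use and the bakery problem is invented. The real technique adds search, pruning and backtracking over the branches; this only shows the shape:

```python
# Tree of thoughts, reduced to its skeleton: branch, evaluate, select.
def ask_model(prompt: str) -> str:
    # Placeholder: swap in a real call to your LLM provider of choice.
    return f"[model reply to: {prompt[:60]}...]"

problem = "How should a small bakery respond to a new competitor opening next door?"

# 1. Branch: generate several distinct candidate strategies.
branches = [
    ask_model(f"{problem}\nPropose one concrete strategy (option #{i + 1}).")
    for i in range(3)
]

# 2. Evaluate: score each branch against explicit criteria.
evaluations = [
    ask_model(
        f"Rate this strategy from 1 to 10 for cost, risk and speed, and justify it:\n{branch}"
    )
    for branch in branches
]

# 3. Select: compare the evaluated branches and commit to one.
decision = ask_model(
    "Here are three strategies with their evaluations:\n\n"
    + "\n\n".join(f"{b}\n{e}" for b, e in zip(branches, evaluations))
    + "\n\nPick the strongest one and explain why before giving the final plan."
)

print(decision)
```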

The ultimate co-pilot

And if all of this is too much for you, if you feel overwhelmed by so much technique and keep getting generic garbage, you can always resort to metaprompting, which is basically surrendering with style. Swallow your pride (and that lingering sense of biological superiority) and ask the AI to write the prompt for you. Say: “I want to achieve X, but I don’t know how to ask you for it. Ask me the questions you need answered to give me the best response possible.”
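As a minimal sketch, with an invented goal, the whole trick fits in one string:

```python
# Metaprompting: ask the model to interview you before it answers.
goal = "a launch email for an online statistics course aimed at clinicians"

metaprompt = (
    f"I want to achieve the following: {goal}, but I don't know how to ask "
    "you for it properly. Before answering, ask me every question you need "
    "(audience, tone, length, key selling points, deadline) so you can give "
    "me the best possible response."
)

print(metaprompt)
```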

Suddenly, the dynamic changes radically. You are no longer facing a cryptic oracle that must be deciphered with fear, but a helpful co-pilot who interviews you to extract the information that, let’s face it, you were too lazy to structure on your own. And that, dear friends, is much more useful, cheaper, and far less bloody than gutting goats on Mount Parnassus.

We’re leaving…

So there you have it: magic doesn’t exist; there is only well-fed statistics and applied probability.

We have seen how prompt engineering has mutated: from the simplicity of predefined roles to a complex data architecture where response quality depends directly on the richness of the environment. The result is that the old role-based template has surrendered its spotlight to context, which stands today as the decisive factor for success. We have also seen that, to obtain intelligent answers, one must first know how to ask intelligent questions, moving past the ambiguity of oracles to embrace the surgical precision of engineers.

This need for rigor reaches its peak when facing the complexity of today’s applications. We no longer interact solely with isolated models, but with agentic systems and intelligent multi-agent architectures. In these autonomous environments, where various agents collaborate, reason, and execute chains of tasks, the growing importance of context has transformed it from a mere frame of reference into the operating system of synthetic thought.

It is no longer about shouting commands into the void, but about designing information ecosystems that allow AI applications to navigate with autonomy and coherence. We are witnessing the birth of a new grammar of interaction that transcends the traditional prompt: we have fully entered the era of context engineering. But that’s another story…
