Trending Now! Context Engineering Replaces Prompt Engineering as the New Hot Topic in AI

The buzz around 'Context Engineering' has surged, with experts like Andrej Karpathy endorsing it. It’s now the key focus for optimizing AI performance beyond traditional prompt design.

Recently, “Context Engineering” has become incredibly popular. Andrej Karpathy has given it a shout-out, and Phil Schmid’s article introducing Context Engineering topped Hacker News and even hit the Zhihu trending list.


Previously, we introduced the basic concept of Context Engineering. Today, let’s discuss practical implementation.

Why Focus on “Context Engineering”?

It’s easy to anthropomorphize LLMs—treating them as super assistants capable of “thinking,” “understanding,” or “confusion.” But from an engineering perspective, this is a fundamental mistake. LLMs do not possess beliefs or intentions; they are sophisticated text generators.

More accurately, LLMs are general, uncertain functions: given a text (the context), they produce a new text (the output).

  • Universal: Capable of handling various tasks (like translation, coding) without task-specific programming.
  • Uncertain: The same input can produce slightly different outputs each time—this is a feature, not a bug.
  • Stateless: It has no memory; all relevant background information must be supplied in the context on every call.
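The stateless point is worth making concrete. A minimal sketch, in which `call_llm` is a hypothetical stand-in for any chat-completion API: the system must re-send the entire conversation on every turn, because the model itself retains nothing between calls.

```python
def call_llm(context: str) -> str:
    # Placeholder: a real call would send `context` to a model endpoint.
    return f"[model output for {len(context)} chars of context]"

history: list[str] = []

def chat(user_message: str) -> str:
    history.append(f"User: {user_message}")
    # The full history is re-sent on every turn -- the model remembers nothing.
    context = "\n".join(history)
    reply = call_llm(context)
    history.append(f"Assistant: {reply}")
    return reply
```

Everything the model should "remember" lives in `history`, i.e. in the application, not in the model.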

This perspective clarifies our focus: we cannot change the model itself but can fully control the input. The key to optimization is how to craft the most effective input (context) to guide the model’s output.

“Prompt Engineering” was once popular, but it overly emphasizes finding a perfect “magic phrase.” This approach is unreliable in real-world applications because “magic phrases” may fail after model updates, and actual inputs are often more complex than a single command.

A more precise and systematic concept is “Context Engineering”.


The core difference is:

  • Prompt Engineering: Focuses on manual crafting of a small, magical command, like reciting a spell.
  • Context Engineering: Focuses on building an automated system, like designing an “information pipeline” that automatically fetches, integrates, and packages information from databases, documents, etc., into a complete context for the model.
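The "information pipeline" idea can be sketched in a few lines. This is an illustrative skeleton, not a real implementation: `fetch_docs` and `load_history` are hypothetical stand-ins for a vector-store lookup and a database read.

```python
def fetch_docs(query: str) -> list[str]:
    # Stand-in for retrieval from a knowledge base.
    return ["(relevant document snippet)"]

def load_history(user_id: str) -> str:
    # Stand-in for loading a conversation summary.
    return "(summary of earlier conversation)"

def build_context(user_id: str, query: str) -> str:
    # The pipeline: fetch, integrate, and package everything into one context.
    parts = [
        "Instructions: answer using only the material below.",
        "History:\n" + load_history(user_id),
        "Documents:\n" + "\n".join(fetch_docs(query)),
        "Question: " + query,
    ]
    return "\n\n".join(parts)
```

The point is that no human writes this context by hand; the system assembles it automatically for every request.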

As Andrej Karpathy said, LLMs are a new kind of operating system. Our task isn’t just giving scattered commands but preparing all the data and environment they need to run.

Core Elements of Context Engineering

Simply put, “Context Engineering” is about creating a “super input” toolkit. All the trendy techniques (like RAG, agents) are just tools within this toolkit.

The goal is: feed the most effective information, in the most suitable format, at the right time, to the model.


Here are some core elements in this toolkit:

  • Instructions: Giving commands—simply telling the model what to do, like “act as an expert” or providing examples for it to mimic.
  • Knowledge: Giving the model “memory.” Since it has no memory, we include chat history or summaries to help it remember context.
  • Tools:
    • Retrieval-Augmented Generation (RAG): Providing reference materials from knowledge bases (like company documents) to prevent hallucinations and ensure factual answers.
    • Agents: Letting the model autonomously search for information.
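As a hedged sketch of the RAG idea: retrieve the most relevant snippets, then prepend them to the question. Real systems rank documents with embeddings; simple keyword overlap stands in here to keep the example self-contained, and the knowledge base is invented for illustration.

```python
KNOWLEDGE_BASE = [
    "Refunds are processed within 14 days of a return request.",
    "The API rate limit is 100 requests per minute.",
]

def retrieve(question: str, k: int = 1) -> list[str]:
    # Score documents by word overlap with the question (toy ranking).
    words = set(question.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda doc: len(words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def rag_prompt(question: str) -> str:
    # Package the retrieved material and the question into one context.
    context = "\n".join(retrieve(question))
    return (
        "Answer from the context only.\n"
        f"Context:\n{context}\n"
        f"Question: {question}"
    )
```

Because the answer is grounded in retrieved text rather than the model's parametric memory, this is what keeps responses factual.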

Agents are the more advanced approach. Instead of preloading all data, we let a "smart agent" decide what information to seek, actively using tools like web search or database queries, then aggregating the results to solve problems.
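The agent loop described above can be sketched as follows. The model's decision step is simulated by `decide`, and `web_search` is a hypothetical tool; in a real agent, both the decision and the tool calls would go through an LLM and real APIs.

```python
def web_search(query: str) -> str:
    # Stand-in for a real search tool.
    return f"(search results for '{query}')"

TOOLS = {"web_search": web_search}

def decide(question: str, observations: list[str]) -> tuple[str, str]:
    # Stand-in for the LLM choosing the next action based on what it has seen.
    if not observations:
        return ("web_search", question)  # first, gather information
    return ("answer", f"Answer based on: {observations[-1]}")

def run_agent(question: str) -> str:
    # The loop: decide -> act -> observe, until the agent chooses to answer.
    observations: list[str] = []
    while True:
        action, arg = decide(question, observations)
        if action == "answer":
            return arg
        observations.append(TOOLS[action](arg))
```

The context here is built dynamically: each tool result becomes part of the input for the next decision.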

In summary, all these techniques—simple or complex—are about answering one question: “How to craft the most perfect input for the model?”

Practical Methodology of Context Engineering

Using LLMs is more like conducting scientific experiments than creating art. Guesswork won’t cut it; testing and verification are essential.

Engineers’ core skill isn’t just writing fancy prompts but following a scientific process to continuously improve the system. This process involves two steps:

Step 1: Backward Planning (Set goals & break down tasks)

Start from the desired final output and work backwards to define the system:

  • Define the end goal: Clearly specify what the perfect answer should look like (content, format, etc.).
  • Identify required inputs: What information must be included in the input (context) to achieve this? This defines your “raw materials.”
  • Design the pipeline: Create an automated system to produce this “raw material.”
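Backward planning can be made concrete by writing the target down in code before building anything. A minimal sketch, assuming an invented answer schema and input list purely for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class TargetAnswer:
    # Step 1: pin down what the perfect answer looks like.
    summary: str
    sources: list[str] = field(default_factory=list)

# Step 2: list the "raw materials" the context must contain to produce it.
REQUIRED_INPUTS = [
    "user question",
    "retrieved documents",
    "output format instructions",
]
```

Only after both are fixed does it make sense to design the pipeline (step 3) that supplies `REQUIRED_INPUTS` and yields a `TargetAnswer`.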

Step 2: Forward Construction (Build step-by-step)

Once planned, start building. Test each part thoroughly before integration:

  • Test data interfaces: Ensure stable data acquisition.
  • Test retrieval functions: Check if the search modules find accurate, complete info.
  • Test packaging programs: Verify if the system correctly assembles instructions, data, and context.
  • End-to-end testing: After all parts are confirmed, connect everything and evaluate the final output quality, knowing the input is correct.
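The "test each part before integration" idea reduces to plain assertions against each component. A sketch with hypothetical `retrieve` and `assemble` functions standing in for the retrieval and packaging modules:

```python
def retrieve(query: str) -> list[str]:
    # Stand-in retrieval module under test.
    return ["policy: refunds within 14 days"]

def assemble(instructions: str, docs: list[str], question: str) -> str:
    # Stand-in packaging program under test.
    return f"{instructions}\n\n" + "\n".join(docs) + f"\n\nQ: {question}"

# Test the retrieval function in isolation.
docs = retrieve("refund policy")
assert docs, "retrieval returned nothing"
assert any("refund" in d for d in docs), "retrieval missed the topic"

# Test the packaging program in isolation.
prompt = assemble("Answer from the docs.", docs, "How long do refunds take?")
assert prompt.startswith("Answer from the docs.")
assert "refunds within 14 days" in prompt
```

Only once these pass is end-to-end evaluation meaningful, because a bad final answer can then be attributed to the model rather than to a broken input.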

The core idea is: through this “plan, build, test” rigorous process, we turn the art of using LLMs into a systematic engineering science.

For more practical methods, refer to LangChain’s latest blog and videos, which detail four mainstream approaches to Context Engineering and demonstrate how LangChain’s LangGraph and LangSmith support efficient implementation.

