Microsoft’s new AI training method eliminates bloated system prompts without sacrificing model performance

When building LLM applications, enterprises often create very long system prompts to tailor a model's behavior to their use case. These prompts pack in company knowledge, preferences, and application-specific instructions. At enterprise scale, such long contexts can push inference latency past acceptable thresholds and drive per-query costs up significantly. On-Policy Context Distillation (OPCD), a new training…
