Advanced Configuration Guide

This guide covers the two advanced settings that give you fine-grained control over when and how Coreply fetches and presents AI suggestions:

  1. Advanced Request Body
  2. Suggestion Content Template

1. Advanced Request Body

Switch to Advanced mode in the Custom API Settings section. Instead of filling in individual fields (model, system prompt, temperature…), you write the raw JSON body that Coreply posts directly to your API endpoint. This gives you full control: any model parameter, any message structure, any API schema.

The body is a Mustache template. Coreply renders it with jmustache, so jmustache's special variables (such as {{-first}}, {{-last}}, and {{-index}}) are supported.

Before the request is sent, Coreply renders it with the current conversation context.
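
Because jmustache is used, its special iteration variables can help when assembling JSON arrays. As a sketch (the sent → assistant role mapping here is an assumption — adapt it to your API's schema), {{^-last}}…{{/-last}} emits a separator after every item except the last:

```json
"messages": [
  {{#pastMessages}}
  {
    "role": "{{#sent}}assistant{{/sent}}{{#received}}user{{/received}}",
    "content": "{{#messages}}{{content.jsonEscaped}}{{^-last}}\n{{/-last}}{{/messages}}"
  }{{^-last}},{{/-last}}
  {{/pastMessages}}
]
```

The inner {{^-last}} joins the messages of a turn with newlines; the outer one suppresses the trailing comma after the final turn, keeping the rendered JSON valid.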

String values are maps

All string fields are exposed as a map with four variants so you can embed them safely in JSON. This is referred to as a “string map” throughout this guide. The available variants are:

| Variant | Description |
| --- | --- |
| {{field.raw}} | Original, unescaped string |
| {{field.jsonEscaped}} | JSON-escaped (backslashes, quotes, newlines escaped) |
| {{field.regexLiteral}} | Regex literal (surrounded by \Q…\E) |
| {{field.regexLiteralEscaped}} | Regex literal first, then JSON-escaped |

In JSON templates, use .jsonEscaped so that special characters in the conversation don’t break the JSON structure.
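
As a rough illustration, assuming the field holds the text He said "hi", the four variants would render as:

```
{{content.raw}}                  → He said "hi"
{{content.jsonEscaped}}          → He said \"hi\"
{{content.regexLiteral}}         → \QHe said "hi"\E
{{content.regexLiteralEscaped}}  → \\QHe said \"hi\"\\E
```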

Top-level fields

| Field | Type | Description |
| --- | --- | --- |
| {{currentTyping}} | string map | What the user is currently typing |
| {{currentTypingTrimmed}} | string map | currentTyping without the last incomplete token |
| {{currentTypingLastToken}} | string map | The last incomplete token the user is typing |
| {{currentTypingEndsWithSeparator}} | boolean | true if the typing ends with a space or punctuation |
| {{pkgName}} | string map | Package name of the active app |
| {{<package>_<name>_<of>_<app>}} | true | Field name is the package name of the active app with . replaced by _; value is always true |
| pastMessages | list | The conversation history as a list of turns (oldest first) |

String map fields are null / falsy when empty, so {{#currentTyping}}…{{/currentTyping}} can be used to conditionally render content only when the user is typing something.
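
For example, a sketch of a user-turn fragment that only mentions the typing when there is any:

```
{{#currentTyping}}I have started typing: {{currentTyping.jsonEscaped}}{{/currentTyping}}{{^currentTyping}}Suggest a reply to the conversation.{{/currentTyping}}
```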

pastMessages list

{{#pastMessages}}…{{/pastMessages}} iterates over the conversation history as a list of turns (oldest first). Each turn represents one or more consecutive messages sent by the same role (either all sent by the user, or all received from the other party).

List items of pastMessages:

| Field | Type | Description |
| --- | --- | --- |
| {{sent}} | boolean | true if this turn contains messages sent by the user (“Me”) |
| {{received}} | boolean | true if this turn contains messages received from the other party |
| {{sender}} | string map | The sender name/role (e.g. "Me") |
| {{messages}} | list | The list of individual messages in this turn (see below) |
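
As a sketch, the turn-level fields can render one labelled block per turn — because consecutive messages from the same role are grouped, each sender label appears only once per turn:

```
{{#pastMessages}}{{sender.jsonEscaped}}:\n{{#messages}}- {{content.jsonEscaped}}\n{{/messages}}{{/pastMessages}}
```

Note that \n here is a literal backslash-n in the template; it becomes a real newline once the surrounding JSON string is parsed.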

messages within a turn:

Use {{#messages}}…{{/messages}} to iterate over individual messages in the current turn:

| Field | Type | Description |
| --- | --- | --- |
| {{content}} | string map | The message text |
| {{sent}} | boolean | true if sent by the user |
| {{received}} | boolean | true if received from the other party |
| {{sender}} | string map | The sender name/role (e.g. "Me") |

Example

```json
{
  "model": "gpt-4o-mini",
  "temperature": 0.7,
  "top_p": 1.0,
  "messages": [
    {
      "role": "system",
      "content": "You are an AI texting assistant. Generate a suggested reply based on the conversation history and current typing. Output only the suggested text without quotation marks or extra formatting."
    },
    {
      "role": "user",
      "content": "Chat history:\n{{#pastMessages}}{{#messages}}{{#sent}}Me: {{/sent}}{{#received}}Them: {{/received}}{{content.jsonEscaped}}\n{{/messages}}{{/pastMessages}}{{#currentTyping}}Current typing: {{currentTyping.jsonEscaped}}{{/currentTyping}}{{^currentTyping}}Suggest a reply.{{/currentTyping}}"
    }
  ],
  "max_tokens": 50,
  "stream": false
}
```

Per-app system prompt using package name booleans

```json
{
  "model": "gpt-4o-mini",
  "messages": [
    {
      "role": "system",
      "content": "{{#com_whatsapp}}You are texting on WhatsApp. Keep replies short and casual.{{/com_whatsapp}}{{#com_slack}}You are messaging on Slack. Replies can be more professional.{{/com_slack}}{{^com_whatsapp}}{{^com_slack}}You are an AI texting assistant.{{/com_slack}}{{/com_whatsapp}}"
    },
    {
      "role": "user",
      "content": "{{#pastMessages}}{{#messages}}{{#sent}}Me: {{/sent}}{{#received}}Them: {{/received}}{{content.jsonEscaped}}\n{{/messages}}{{/pastMessages}}{{#currentTyping}}I started typing: {{currentTyping.jsonEscaped}}{{/currentTyping}}"
    }
  ],
  "max_tokens": 60,
  "stream": false
}
```

Note: package name dots are replaced with underscores, so com.whatsapp becomes com_whatsapp.

Prefill the user’s current typing as the start of the assistant turn

```json
{
  "model": "gpt-4o-mini",
  "messages": [
    {
      "role": "system",
      "content": "You are a texting assistant. Output only the suggested message text."
    },
    {
      "role": "user",
      "content": "{{#pastMessages}}{{#messages}}{{#sent}}Me: {{/sent}}{{#received}}Them: {{/received}}{{content.jsonEscaped}}\n{{/messages}}{{/pastMessages}}"
    },
    {
      "role": "assistant",
      "content": "{{#currentTyping}}{{currentTyping.jsonEscaped}}{{/currentTyping}}"
    }
  ],
  "max_tokens": 60,
  "stream": false
}
```

Here the user’s current typing is prefilled as the beginning of the assistant turn — the model then continues from it. Pair this with the Suggestion Content Template below to assemble the full suggestion.

Note: Some models are bad at continuing from partial words, so currentTypingTrimmed comes in handy here to prefill only the complete tokens, letting the model generate the full last word. You can hint at the last incomplete token (currentTypingLastToken) somewhere else in the prompt.
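
A sketch combining both: prefill only the complete tokens in the assistant turn, and hint the last incomplete token in the user turn (the hint wording is an assumption — phrase it however works best for your model):

```json
{
  "model": "gpt-4o-mini",
  "messages": [
    {
      "role": "system",
      "content": "You are a texting assistant. Output only the suggested message text."
    },
    {
      "role": "user",
      "content": "{{#pastMessages}}{{#messages}}{{#sent}}Me: {{/sent}}{{#received}}Them: {{/received}}{{content.jsonEscaped}}\n{{/messages}}{{/pastMessages}}{{#currentTypingLastToken}}The next word of my reply starts with: {{currentTypingLastToken.jsonEscaped}}{{/currentTypingLastToken}}"
    },
    {
      "role": "assistant",
      "content": "{{currentTypingTrimmed.jsonEscaped}}"
    }
  ],
  "max_tokens": 60,
  "stream": false
}
```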


2. Suggestion Content Template

Found in the Advanced config section below the Request Body field.

After the model responds, this Mustache template is rendered to form the final suggestion text shown to the user. This exists because some API setups use assistant prefill — you send the user’s current typing as the start of the assistant’s response, and the model returns only the continuation. The template lets you join them back together.

The same context fields from the request body are available, plus two extras:

| Field | Type | Description |
| --- | --- | --- |
| {{assistantMessage}} | string | The raw text returned by the model |
| {{assistantMessageAutoTrimCurrentTyping}} | string | The model response; if it starts with currentTyping, that prefix is trimmed automatically |
| {{assistantMessageAutoTrimCurrentTypingTrimmed}} | string | Same as above, but currentTypingTrimmed is used for the trimming instead of currentTyping (only the complete-token prefix is trimmed) |
| (all other fields listed above) | string map | Same as in the request body template, with .raw / .jsonEscaped / .regexLiteral / .regexLiteralEscaped variants |

Unlike currentTyping and the other string map fields, the assistantMessage fields are already plain strings, so no variant suffix is needed.

After rendering, any two consecutive spaces are collapsed to one, handling the case where currentTyping ends with a space and the model response also starts with one (or vice versa).

Show response as-is

```
{{assistantMessage}}
```

The model response is shown as-is.

Prepend the user’s current typing (usually with assistant prefill)

If your request body sends currentTyping as the start of the assistant turn (the model only returns the continuation):

```
{{currentTyping.raw}}{{assistantMessage}}
```

For example, the user has typed "sounds good, I'll be" and the model returns " there at 7". The suggestion becomes "sounds good, I'll be there at 7".
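
If the model ignores the prefill and repeats the typed text itself, this template would duplicate it. The auto-trim field guards against that — when the response does not start with the typed text, it is identical to assistantMessage:

```
{{currentTyping.raw}}{{assistantMessageAutoTrimCurrentTyping}}
```

For a setup that prefills currentTypingTrimmed instead, the analogous combination is {{currentTypingTrimmed.raw}}{{assistantMessageAutoTrimCurrentTypingTrimmed}}.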

Add a suffix

Not sure who would need this, but you can also add a suffix like this:

```
{{assistantMessage}} 😊
```