This guide covers the three advanced settings that give you fine-grained control over when and how Coreply fetches and presents AI suggestions:
Switch to Advanced mode in the Custom API Settings section. Instead of filling in individual fields (model, system prompt, temperature…), you write the raw JSON body that Coreply posts directly to your API endpoint. This gives you full control: any model parameter, any message structure, any API schema.
The body is a Mustache template. Coreply uses jmustache, so jmustache's template syntax and special variables are supported.
Before the request is sent, Coreply renders it with the current conversation context.
All string fields are exposed as a map with four variants so you can embed them safely in JSON. This guide refers to it as a “string map”. The available variants are:
| Variant | Description |
|---|---|
| `{{field.raw}}` | Original unescaped string |
| `{{field.jsonEscaped}}` | JSON-escaped (backslashes, quotes, newlines escaped) |
| `{{field.regexLiteral}}` | Regex literal, wrapped in `\Q…\E` |
| `{{field.regexLiteralEscaped}}` | Regex literal first, then JSON-escaped |
In JSON templates, use `.jsonEscaped` so that special characters in the conversation don’t break the JSON structure.
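For instance, with a hypothetical incoming message of `He said "ok"`, the two variants render differently inside a JSON string:

```
Template:             "content": "{{content.jsonEscaped}}"
Renders to:           "content": "He said \"ok\""

Template:             "content": "{{content.raw}}"
Renders to (broken):  "content": "He said "ok""
```

The `.raw` variant leaves the inner quotes unescaped, which terminates the JSON string early and makes the request body invalid.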
| Field | Type | Description |
|---|---|---|
| `{{currentTyping}}` | string map | What the user is currently typing |
| `{{currentTypingTrimmed}}` | string map | `currentTyping` without the last incomplete token |
| `{{currentTypingLastToken}}` | string map | The last incomplete token the user is typing |
| `{{currentTypingEndsWithSeparator}}` | boolean | `true` if the typing ends with a space or punctuation |
| `{{pkgName}}` | string map | Package name of the active app |
| `{{<package>_<name>_<of>_<app>}}` | boolean | Field whose name is the package name of the active app with `.` replaced by `_`; its value is always `true` |
| `pastMessages` | list | The conversation history as a list of turns (oldest first) |
String map fields are null / falsy when empty, so {{#currentTyping}}…{{/currentTyping}} can be used to conditionally render content only when the user is typing something.
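For example, a fragment (hypothetical wording) that emits an instruction only when the input field is empty:

```
{{#currentTyping}}Continue my draft: {{currentTyping.jsonEscaped}}{{/currentTyping}}{{^currentTyping}}Suggest a reply.{{/currentTyping}}
```

With an empty input field this renders `Suggest a reply.`; otherwise it renders the draft text after `Continue my draft:`.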
`{{#pastMessages}}…{{/pastMessages}}` iterates over the conversation history as a list of turns (oldest first). Each turn groups one or more consecutive messages from the same role (either all sent by the user, or all received from the other party).
Each item of `pastMessages` has the following fields:
| Field | Type | Description |
|---|---|---|
| `{{sent}}` | boolean | `true` if this turn contains messages sent by the user (“Me”) |
| `{{received}}` | boolean | `true` if this turn contains messages received from the other party |
| `{{sender}}` | string map | The sender name/role (e.g. `"Me"`) |
| `{{messages}}` | list | The list of individual messages in this turn (see below) |
Within a turn, use `{{#messages}}…{{/messages}}` to iterate over its individual messages:
| Field | Type | Description |
|---|---|---|
| `{{content}}` | string map | The message text |
| `{{sent}}` | boolean | `true` if sent by the user |
| `{{received}}` | boolean | `true` if received from the other party |
| `{{sender}}` | string map | The sender name/role (e.g. `"Me"`) |
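Putting the two loops together, a minimal sketch that renders the whole history as a plain transcript (the exact sender names depend on the conversation):

```
{{#pastMessages}}{{#messages}}{{sender.raw}}: {{content.raw}}
{{/messages}}{{/pastMessages}}
```

For a received message followed by a sent one, this would render something like `Alice: hi` and `Me: hey!`, one message per line. Inside a JSON body, use `.jsonEscaped` instead of `.raw`.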
A complete request body for an OpenAI-style chat completions API:

```json
{
  "model": "gpt-4o-mini",
  "temperature": 0.7,
  "top_p": 1.0,
  "messages": [
    {
      "role": "system",
      "content": "You are an AI texting assistant. Generate a suggested reply based on the conversation history and current typing. Output only the suggested text without quotation marks or extra formatting."
    },
    {
      "role": "user",
      "content": "Chat history:\n{{#pastMessages}}{{#messages}}{{#sent}}Me: {{/sent}}{{#received}}Them: {{/received}}{{content.jsonEscaped}}\n{{/messages}}{{/pastMessages}}{{#currentTyping}}Current typing: {{currentTyping.jsonEscaped}}{{/currentTyping}}{{^currentTyping}}Suggest a reply.{{/currentTyping}}"
    }
  ],
  "max_tokens": 50,
  "stream": false
}
```
You can also vary the system prompt by app, using the package-name boolean fields:

```json
{
  "model": "gpt-4o-mini",
  "messages": [
    {
      "role": "system",
      "content": "{{#com_whatsapp}}You are texting on WhatsApp. Keep replies short and casual.{{/com_whatsapp}}{{#com_slack}}You are messaging on Slack. Replies can be more professional.{{/com_slack}}{{^com_whatsapp}}{{^com_slack}}You are an AI texting assistant.{{/com_slack}}{{/com_whatsapp}}"
    },
    {
      "role": "user",
      "content": "{{#pastMessages}}{{#messages}}{{#sent}}Me: {{/sent}}{{#received}}Them: {{/received}}{{content.jsonEscaped}}\n{{/messages}}{{/pastMessages}}{{#currentTyping}}I started typing: {{currentTyping.jsonEscaped}}{{/currentTyping}}"
    }
  ],
  "max_tokens": 60,
  "stream": false
}
```
Note: package name dots are replaced with underscores, so `com.whatsapp` becomes `com_whatsapp`.
A request body using assistant prefill:

```json
{
  "model": "gpt-4o-mini",
  "messages": [
    {
      "role": "system",
      "content": "You are a texting assistant. Output only the suggested message text."
    },
    {
      "role": "user",
      "content": "{{#pastMessages}}{{#messages}}{{#sent}}Me: {{/sent}}{{#received}}Them: {{/received}}{{content.jsonEscaped}}\n{{/messages}}{{/pastMessages}}"
    },
    {
      "role": "assistant",
      "content": "{{#currentTyping}}{{currentTyping.jsonEscaped}}{{/currentTyping}}"
    }
  ],
  "max_tokens": 60,
  "stream": false
}
```
Here the user’s current typing is prefilled as the beginning of the assistant turn — the model then continues from it. Pair this with the Suggestion Content Template below to assemble the full suggestion.
Note: Some models are bad at continuing from partial words, so `currentTypingTrimmed` comes in handy here to prefill only the complete tokens, allowing the model to generate the full last word. You can hint the last word elsewhere in the prompt.
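A sketch of that variation (hypothetical prompt wording), prefilling only the complete tokens and hinting the partial word in the user turn:

```json
{
  "role": "user",
  "content": "…{{#currentTypingLastToken}}My reply should continue the partial word \"{{currentTypingLastToken.jsonEscaped}}\".{{/currentTypingLastToken}}"
},
{
  "role": "assistant",
  "content": "{{currentTypingTrimmed.jsonEscaped}}"
}
```

The matching Suggestion Content Template would then join with `{{currentTypingTrimmed.raw}}{{assistantMessage}}`, so the model's completed last word replaces the partial one.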
The Suggestion Content Template is found in the Advanced config section, below the Request Body field.
After the model responds, this Mustache template is rendered to form the final suggestion text shown to the user. This exists because some API setups use assistant prefill — you send the user’s current typing as the start of the assistant’s response, and the model returns only the continuation. The template lets you join them back together.
The same context fields from the request body are available, plus a few extras:
| Field | Type | Description |
|---|---|---|
| `{{assistantMessage}}` | string | The raw text returned by the model |
| `{{assistantMessageAutoTrimCurrentTyping}}` | string | The model response; if it starts with `currentTyping`, that prefix is trimmed automatically |
| `{{assistantMessageAutoTrimCurrentTypingTrimmed}}` | string | Same as above, but `currentTypingTrimmed` is used for the trimming instead of `currentTyping` (only complete tokens are matched) |
| (all other fields previously stated) | string map | Same as in the request body template, with `.raw` / `.jsonEscaped` / `.regexLiteral` / `.regexLiteralEscaped` variants |
Unlike `currentTyping` and the other string map fields, the `assistantMessage*` fields are already plain strings, so no variant suffix is needed.
After rendering, any two consecutive spaces are collapsed to one, handling the case where `currentTyping` ends with a space and the model response also starts with one (or vice versa).
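For example, with the join template `{{currentTyping.raw}}{{assistantMessage}}` and hypothetical values:

```
currentTyping.raw:  "see you "
assistantMessage:   " tomorrow"
Rendered:           "see you  tomorrow"
After collapsing:   "see you tomorrow"
```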
```
{{assistantMessage}}
```
The model response is shown as-is.
If your request body sends currentTyping as the start of the assistant turn (the model only returns the continuation):
```
{{currentTyping.raw}}{{assistantMessage}}
```
For example, the user has typed "sounds good, I'll be" and the model returns " there at 7". The suggestion becomes "sounds good, I'll be there at 7".
Not sure who would need this, but you can also add a suffix like this:
```
{{assistantMessage}} 😊
```