Best practices for using the ConversationalAnalytics endpoints in Looker's API

Looker's ConversationalAnalytics API endpoints let you build custom Conversational Analytics experiences within your embedded applications. These APIs mirror the endpoints that power Looker's Conversational Analytics feature and provide the same functionality through the Looker API. They include create, read, update, and delete (CRUD) operations for agents, conversations, and messages, as well as a chat API for interacting with the conversational agent. To ensure a smooth development process and optimal performance, it's important to understand certain limitations and follow recommended best practices when using these APIs.

Typical workflow

A typical workflow for a multi-turn conversation involves using agents, conversations, messages, and chat APIs together.

  1. Create an agent: If you don't have one, create an agent by using POST /agents. The agent is configured to use specific Looker models and Explores.
  2. Create a conversation: Start a new conversation that's associated with an agent by using POST /conversations. This will return a conversation ID.
  3. Send a message: For each turn in the conversation, call POST /conversational_analytics/chat with the conversation_id and the user's message. This endpoint returns one or more system messages from the agent.
  4. Persist messages: The /conversational_analytics/chat endpoint does not persist the user message or the returned system messages. To maintain conversation history for subsequent turns, you must persist both the user message and the system message(s) by calling POST /conversations/:conversation_id/messages after calling the chat API.
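The four steps above can be sketched with direct HTTP calls. This is a minimal sketch, not a definitive implementation: the endpoint paths come from this document, but the instance URL, bearer-token auth scheme, request-body field names, and response shapes are all assumptions that may differ on your instance.

```python
import json
import urllib.request

BASE_URL = "https://example.looker.com/api/4.0"  # hypothetical instance URL
TOKEN = "<access-token>"                         # assumes a valid API token

def build_request(path: str, body: dict) -> urllib.request.Request:
    """Build an authenticated JSON POST request for one of the endpoints."""
    return urllib.request.Request(
        BASE_URL + path,
        data=json.dumps(body).encode("utf-8"),
        headers={"Authorization": f"Bearer {TOKEN}",
                 "Content-Type": "application/json"},
        method="POST",
    )

def post(path: str, body: dict):
    with urllib.request.urlopen(build_request(path, body)) as resp:
        return json.load(resp)

def run_conversation(first_question: str):
    # 1. Create an agent (body field names here are assumptions).
    agent = post("/agents", {"name": "sales-agent",
                             "model": "thelook", "explore": "orders"})
    # 2. Start a conversation associated with the agent.
    conversation = post("/conversations", {"agent_id": agent["id"]})
    conversation_id = conversation["id"]
    # 3. Send the user's message for this turn.
    system_messages = post("/conversational_analytics/chat",
                           {"conversation_id": conversation_id,
                            "message": first_question})
    # 4. The chat endpoint does not persist messages, so store the user
    #    message and every returned system message explicitly.
    for message in [{"type": "user", "message": first_question},
                    *system_messages]:
        post(f"/conversations/{conversation_id}/messages", message)
    return system_messages
```

For subsequent turns, only steps 3 and 4 repeat with the same `conversation_id`.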

Recommendations

Follow these recommendations for best results:

  • Persist all messages: After each call to /conversational_analytics/chat, make sure to call POST /conversations/:conversation_id/messages to save both the user's message from that turn and all system messages that are returned by the chat API. This is essential for multi-turn conversations.
  • Handle streaming: When possible, use the streaming capability of the chat API to provide feedback to the user while the agent is processing. Messages received during streaming can be used to indicate that the agent is "thinking".
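The streaming recommendation above can be sketched as a small consumer. This assumes the stream arrives as newline-delimited JSON with a `status` field on each message; both the framing and the field names are assumptions about the wire format, not the documented schema.

```python
import json
from typing import Iterable, Iterator

def iter_stream_messages(lines: Iterable[bytes]) -> Iterator[dict]:
    """Yield one message per non-empty line of a streamed chat response
    (assumes newline-delimited JSON; the real wire format may differ)."""
    for raw in lines:
        line = raw.strip()
        if line:
            yield json.loads(line)

def show_progress(lines: Iterable[bytes]) -> list:
    """Surface interim messages so the UI can show the agent 'thinking'."""
    collected = []
    for message in iter_stream_messages(lines):
        # Interim messages drive a progress indicator; the final answer
        # arrives as the last message in the stream.
        print(message.get("status", "update"), flush=True)
        collected.append(message)
    return collected
```

In a real UI, the `print` call would be replaced by whatever progress indicator your application renders.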

Limitations and considerations

When using the ConversationalAnalytics API endpoints, consider the following limitations:

  • Message persistence: It is your responsibility to persist messages using the POST /conversations/:conversation_id/messages endpoint. Failing to persist messages after each call to /conversational_analytics/chat will prevent the conversation history from being maintained, and the agent won't have context for follow-up questions in a multi-turn conversation.
  • Streaming support: The chat API is a streaming API, which lets you receive messages as the agent generates them and can improve the user experience for long-running queries. However, not all Looker SDK languages support streaming. If you are using an SDK that doesn't support streaming, the API returns the complete response synchronously after all messages are generated. If streaming is essential but your SDK language doesn't support it, you may need to call the endpoint over HTTP directly.
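When the SDK fallback above isn't acceptable, the direct-HTTP option can be sketched as follows. The endpoint path comes from this document; the newline-delimited JSON framing, payload field names, and bearer-token auth are assumptions you should verify against your instance.

```python
import json
import urllib.request
from typing import Iterable, Iterator

def chat_request(base_url: str, token: str, conversation_id: str,
                 text: str) -> urllib.request.Request:
    """Build the raw HTTP request for the chat endpoint."""
    body = {"conversation_id": conversation_id, "message": text}  # assumed fields
    return urllib.request.Request(
        f"{base_url}/conversational_analytics/chat",
        data=json.dumps(body).encode("utf-8"),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )

def decode_stream(chunks: Iterable[bytes]) -> Iterator[dict]:
    """Yield one message per non-empty line, without buffering the full body."""
    for chunk in chunks:
        line = chunk.strip()
        if line:
            yield json.loads(line)

def stream_chat(base_url: str, token: str, conversation_id: str, text: str):
    # urlopen's response object iterates line by line, so messages can be
    # consumed as they arrive rather than after the response completes.
    req = chat_request(base_url, token, conversation_id, text)
    with urllib.request.urlopen(req) as resp:
        yield from decode_stream(resp)
```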