AI Assistant Best Practices

The Checkbox AI Assistant is an intelligent chatbot designed to answer questions from business users and redirect them to relevant apps and documents when appropriate. This guide outlines best practices for setting up sources and instructions to optimize the AI Assistant's performance.


Table of Contents

Source Document Best Practices

App Source Best Practices

Assistant Instructions Prompt Best Practices




Preparing Sources 


Source Document Best Practices

  • Use text-based documents (e.g., Word, PDF) instead of image or spreadsheet-based files.
  • Structure content in Q&A format where possible, converting tables, dot points and other rich-text formatting into plain statements.

    For example, instead of this table:
    Name | Role      | Region
    Bob  | Paralegal | EMEA
    Amy  | CLO       | APAC
    Sawa | CEO       | US

    Consider reframing to:
    - Bob is a paralegal and is based in EMEA
    - Amy is the CLO and is based in APAC
    - Sawa is the CEO and is based in the US

  • Spell out hyperlinks fully.
  • Consolidate content into single documents where appropriate.


Good Practices:

  • Q&A Format: "Q: What is an NDA? A: An NDA is used to protect confidential information..."
  • Full URLs: spell out the full web address, e.g. "For more information, visit: [full URL]"
  • Consolidated Content: All chapters of a manual in one PDF document.

Practices to Avoid:

  • Image-heavy formats (e.g., PowerPoint slides with minimal text)
  • Embedded hyperlinks: "Click here" instead of full URLs
  • Spreading related content across multiple files


App Source Best Practices

For each App that you add into sources, include:

  1. A 1-2 line summary of the app.

  2. Example questions that might trigger the app (e.g. "I want an NDA", "I want a non-disclosure agreement").

  3. A description of WHAT the key topic is for the app, as opposed to HOW the app works. General information about the key topic is helpful. A helpful litmus test is to describe the topic as though it were NOT a workflow App.

  4. Scenarios: Include scenarios of when you expect this App to be used (e.g. “I want to share information with a vendor”)


Example of how to set up an NDA app as a source

  1. This App allows business users to self-serve non-disclosure agreements (NDAs).

  2. This App allows business users who ask for an NDA (e.g. they ask "I want an NDA", "I want a non-disclosure agreement", "I want an agreement for sharing confidential information") to automatically self-serve this agreement.

  3. A non-disclosure agreement (NDA) is a legally binding contract that establishes a confidential relationship. The party or parties signing the agreement agree that sensitive information they may obtain will not be made available to any others. An NDA may also be referred to as a confidentiality agreement. Non-disclosure agreements are common for businesses entering into negotiations with other businesses. They allow the parties to share sensitive information without fear that it will end up in the hands of competitors. In this case, it may be called a mutual non-disclosure agreement.

NOTE: Similarly to training a new colleague, it is sometimes helpful to repeat important parts of your instructions.



Assistant Instructions Prompt Best Practices

The Assistant Instructions prompt can be used to change the behaviour of the AI assistant.


Some helpful tips when editing the Assistant Instructions prompt include:

  • Defining a specific role or occupation for the AI

  • Telling it to respond in a particular tone/manner

  • Reinforcing that it should not lie, nor give generalised information

  • Using the phrase "Think through your response step by step to arrive at the right answer"

  • (Optional) Including specific phrases that you want the AI Assistant to say with every response, like "Does that answer your request?"


Example Assistant Instructions prompt:

You are a world class in-house legal professional helping non-legal business users in an in-house legal context. Respond to questions in a helpful, conversational, concise and business-friendly manner. Do not give generalised information. Only answer the question that is asked. Think through your response step by step to arrive at the right answer. After a response is given, always append to the end "Does that answer your request?"


NOTE: Similarly to training a new colleague, it is sometimes helpful to repeat important parts of your instructions.




AI Assistant Limitations

While Large Language Models (LLMs) like the one powering the Checkbox AI Assistant are powerful tools, they have certain limitations. Understanding these limitations is crucial for setting realistic expectations and using the AI Assistant effectively.

Areas where LLMs may struggle:

  1. Mathematical Calculations: LLMs are not designed as calculators. They may make errors in complex or even simple mathematical operations. For accurate calculations, it's best to use dedicated calculators or spreadsheet software.
  2. Counting and Data Analysis: LLMs don't have the ability to accurately count occurrences of words or phrases across multiple documents. They cannot perform detailed data analysis tasks that require precise counting or statistical calculations.
  3. Up-to-date Information: The knowledge of LLMs is based on their training data, which has a cutoff date. They may not have information about very recent events or changes unless it is explicitly provided in source documents.
  4. Understanding Images or Audio: Checkbox AI Assistant is currently text-based and cannot process or understand images, audio, or video content.
  5. Remembering Previous Conversations: Each interaction with Checkbox AI Assistant is independent; it does not carry context over from previous conversations.




Frequently Asked Questions

When to use a separate AI Assistant vs combining everything in one "mega" AI Assistant?

Generally speaking, limited-scope AI Assistants perform better than broad-scope ones. A broader scope usually means many (somewhat unrelated) source documents are uploaded, which makes it harder to retrieve the relevant "chunks" of those documents to provide to the AI Assistant as background.

Can the AI Assistant ask clarifying questions?

Currently, this behaviour is not built into the AI Assistant. However, the Assistant Instructions can be adjusted to promote this behaviour. In addition, the source documents should give the AI Assistant enough information to know what is required. Ideally, a source document explicitly calls out the data points required for a decision.

How can we give the bot a personality?

We've seen "personified" AI Assistants used as a way to drive adoption and improve user experience. You can set up a "personified" AI Assistant by adjusting the Assistant Instructions. Fun examples include "speak like Yoda", "pretend you're Santa", "make spooky references" during Halloween, or simply incorporating company values into responses.