Settings
Manage your account and application preferences
The intake assistant uses a large language model (LLM) to have a natural conversation with users about their legal requests. When a user describes their needs, the AI:
- Analyzes the request to understand the type of legal help needed
- Asks clarifying questions to gather complete information
- Extracts structured data (request type, priority, deadlines) from the conversation
- Pre-fills the ticket details form based on what it learned
The settings below control how the AI behaves during this process. Changes affect all new intake conversations but not existing tickets.
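For teams integrating with the intake assistant, the extraction step can be pictured as a single structured-output call to the model. The sketch below is a minimal illustration assuming the Anthropic Python SDK; the prompt wording and the field names (request_type, priority, deadline, summary) are illustrative, not the application's actual schema.

```python
# Minimal sketch of the extraction step. Assumes the Anthropic Python SDK;
# prompt text and field names are illustrative only.
import json
import anthropic

client = anthropic.Anthropic()  # API key read from ANTHROPIC_API_KEY

EXTRACTION_PROMPT = (
    "Summarize the user's legal request as JSON with exactly these keys: "
    '"request_type", "priority", "deadline", "summary". '
    "Use null for anything the user has not stated. Return only JSON."
)

def extract_ticket_fields(conversation: list[dict]) -> dict:
    """Turn an intake conversation into structured fields for the ticket form."""
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # whichever model is selected in Settings
        max_tokens=2048,                   # the response-length setting described below
        temperature=0.2,                   # low randomness for consistent extraction
        system=EXTRACTION_PROMPT,
        messages=conversation,
    )
    # The prompt asks for JSON only; a production integration would validate this.
    return json.loads(response.content[0].text)

fields = extract_ticket_fields([
    {"role": "user", "content": "I need help reviewing an NDA before Friday."}
])
print(fields)  # e.g. {"request_type": "contract review", "priority": ..., ...}
```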
Claude Opus is best for complex reasoning. Sonnet balances speed and quality. Haiku is fastest but less nuanced. GPT models are available as an alternative model family.
Temperature controls randomness in responses. 0.0-0.3: Precise, consistent answers (best for legal intake). 0.4-0.7: Balanced creativity. 0.8-1.0: More varied, creative responses (not recommended for legal work).
Limits the length of each AI response. A "token" is roughly 3/4 of a word, so 2,048 tokens allows about 1,500 words of output. This limit applies to the output only, not to how much the AI can read or understand. For intake conversations, 2,048 tokens is usually sufficient; increase it if the AI is cutting off mid-sentence or if you need longer explanations.
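Taken together, the model, temperature, and response-length settings correspond to standard chat-completion parameters. The sketch below shows one plausible mapping using the Anthropic Python SDK; the settings dictionary, model identifier, and prompt text are example values, not the application's real configuration.

```python
# Rough sketch of how the three settings above could map onto a request.
# Parameter names follow the Anthropic Messages API; values are examples.
import anthropic

settings = {
    "model": "claude-3-5-sonnet-latest",  # Model: example identifier
    "temperature": 0.2,                   # Temperature: precise, consistent answers
    "max_tokens": 2048,                   # Response length: roughly 1,500 words
}

client = anthropic.Anthropic()
reply = client.messages.create(
    system="You are a legal intake assistant. Ask clarifying questions.",
    messages=[{"role": "user", "content": "My landlord is threatening eviction."}],
    **settings,
)
print(reply.content[0].text)
```

Lower temperature values make the extraction and the pre-filled form more repeatable across conversations, which is why the 0.0-0.3 range is recommended for legal intake.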
Analyze uploaded documents for key terms and clauses
Automatically assess request priority based on content
Create and submit tickets on behalf of users
Find similar past cases for reference
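One way to picture these toggles is as flags that gate which tools the assistant is allowed to use. The sketch below is illustrative only; the flag names and tool names are assumptions, not the application's actual configuration.

```python
# Illustrative sketch of capability toggles gating the assistant's tools.
# Flag names and tool names are assumptions, not the real configuration.
from dataclasses import dataclass

@dataclass
class IntakeCapabilities:
    analyze_documents: bool = True      # Analyze uploaded documents for key terms and clauses
    auto_assess_priority: bool = True   # Automatically assess request priority based on content
    auto_submit_tickets: bool = False   # Create and submit tickets on behalf of users
    find_similar_cases: bool = True     # Find similar past cases for reference

def allowed_tools(caps: IntakeCapabilities) -> list[str]:
    """Translate the toggles into the tool names exposed to the model."""
    tools = []
    if caps.analyze_documents:
        tools.append("document_analysis")
    if caps.auto_assess_priority:
        tools.append("priority_assessment")
    if caps.auto_submit_tickets:
        tools.append("ticket_submission")
    if caps.find_similar_cases:
        tools.append("case_search")
    return tools

print(allowed_tools(IntakeCapabilities()))
# ['document_analysis', 'priority_assessment', 'case_search']
```

In this sketch, turning a toggle off simply removes the corresponding tool from what the model can call during intake.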