Purpose of This Disclosure
This AI Transparency Disclosure explains how Adjudica.AI uses artificial intelligence, what our AI can and cannot do, and your responsibilities as an attorney when using AI-assisted tools. It is designed to help you comply with your professional obligations under the California Rules of Professional Conduct, including Rule 1.1 (Competence) and Rule 1.6 (Confidentiality).
We believe transparency builds trust. As a "Glass Box" company, we are committed to helping you understand exactly how our AI works so you can use it effectively and ethically.
1. How Adjudica.AI Uses Artificial Intelligence
1.1 AI Technology Overview
Adjudica.AI uses large language models (LLMs) and other AI technologies to assist California Workers' Compensation attorneys with document analysis, legal research, and case management.
AI Providers We Use:
| Provider | Technology | Primary Use |
|---|---|---|
| Google | Gemini | Document analysis, legal reasoning |
| Google | Document AI | Document processing, OCR |
1.2 What Our AI Does
Adjudica.AI's AI assists with:
- Document Analysis: Reading and extracting information from medical records, QME reports, and legal documents
- Case Summarization: Creating summaries of case files and medical histories
- Legal Research: Identifying relevant statutes, regulations, and case law
- PDRS Calculations: Calculating permanent disability ratings using AMA Guides 5th Edition and California regulations
- Source Citation: Linking AI outputs to underlying source materials
1.3 The "Hover to Source" System
Our signature "Hover to Source" feature provides transparency by linking AI-generated statements to their underlying sources:
How It Works:
- When AI generates analysis, it identifies relevant sources
- Underlined text indicates source-backed statements
- Hovering reveals the source citation and excerpt
- Click to view the full source document
Source Types Include:
- California Labor Code
- California Code of Regulations (Title 8)
- WCAB Decisions
- AMA Guides to the Evaluation of Permanent Impairment (5th Edition)
- Medical literature and references
Limitations:
- Not all AI statements have linked sources
- Source links may occasionally be broken or outdated
- AI interpretation of sources should be verified
2. AI Limitations and Risks
2.1 What Our AI Cannot Do
Our AI is a tool to assist you—it does not replace your professional judgment.
| AI Cannot | Explanation |
|---|---|
| Provide legal advice | AI outputs are informational; you must apply professional judgment |
| Guarantee accuracy | AI may make errors; all outputs require verification |
| Know your full case | AI only sees what you provide; it lacks complete context |
| Replace attorney judgment | Strategic and ethical decisions remain yours |
| Create attorney-client relationships | AI is a tool; you are the attorney |
2.2 Potential for Errors
Despite our best efforts to minimize errors, AI may:
- Misinterpret documents: Complex or unclear documents may be misread
- Miss context: AI lacks the full context of your case and client
- Contain outdated information: Training data has cutoff dates; recent changes may not be reflected
- Oversimplify: Complex legal or medical issues may be presented too simply
- Generate incomplete analysis: Some relevant factors may not be identified
2.3 About "Hallucination"
Our Commitment: "AI hallucinates. Adjudica doesn't." — This is our goal, not a guarantee.
AI "hallucination" refers to AI generating plausible-sounding but incorrect or fabricated information. Our "Hover to Source" system significantly reduces this risk by grounding AI outputs in verifiable sources.
However:
- Source attribution reduces but does not eliminate all errors
- Some AI outputs may not have verifiable sources
- The AI's interpretation of sources may be incorrect
- You must verify all AI outputs before relying on them
2.4 Knowledge Limitations
| Limitation | Impact |
|---|---|
| Training data cutoff | May not reflect very recent legal changes |
| Unpublished decisions | Some relevant WCAB decisions may not be in our database |
| Local practices | Specific judge preferences or local customs may not be reflected |
| Evolving medicine | Latest medical research may not be incorporated |
3. Your Responsibilities as an Attorney
3.1 California State Bar Requirements
The California State Bar's Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law (November 2023) establishes that existing ethical rules apply to AI use. Key requirements include:
Rule 1.1 — Competence:
- You must understand the AI tools you use
- You must understand AI capabilities and limitations
- You must stay current on AI developments affecting your practice
Rule 1.6 — Confidentiality:
- You must ensure client data is protected when using AI
- You must understand how AI tools handle confidential information
- Adjudica.AI protects your data (see Privacy Notice)
Rule 5.3 — Supervision:
- AI is treated like a nonlawyer assistant
- You must supervise AI outputs
- You remain responsible for AI-assisted work product
Rule 1.4 — Communication:
- You may need to inform clients about AI use
- Disclosure requirements depend on circumstances
3.2 Your Verification Obligations
Before relying on any AI output, you must:
- Verify citations: Check that cited statutes, regulations, and cases exist and say what the AI claims
- Confirm accuracy: Verify factual statements against source documents
- Apply judgment: Evaluate whether the AI's analysis is appropriate for your case
- Check currency: Confirm legal authorities are current and haven't been superseded
- Review for completeness: Consider whether relevant factors were missed
3.3 Supervision Requirements
As the attorney, you:
- Retain full responsibility for all work product, whether AI-assisted or not
- Cannot delegate professional judgment to AI
- Must review AI outputs before use in legal proceedings
- Are liable for errors in AI-assisted work product
3.4 Client Communication Considerations
Consider disclosing AI use to clients when:
- AI significantly affects the representation
- Client would reasonably want to know
- Billing implications exist
- AI processes sensitive client information
The decision to disclose is a matter of professional judgment based on your specific circumstances.
4. How We Protect Your Data
4.1 Security Measures
| Protection | Implementation |
|---|---|
| Encryption at rest | AES-256 encryption |
| Encryption in transit | TLS 1.3 |
| Access controls | Role-based access, MFA available |
| Audit logging | Comprehensive access logs |
| US-only processing | All data stays in the United States |
4.2 Your Data and AI Model Training
We do not use PHI, document content, or case-specific information to train AI models.
Your medical records, legal documents, and case queries are used only to provide you with analysis; they are never used to train AI models.
We do collect de-identified behavioral signals — such as document classification corrections (e.g., "user changed classification from Medical Report to Panel QME Report"), quality feedback (thumbs up/down), and feature usage patterns — to improve platform accuracy and reliability. These signals contain no PHI, no document content, and no case-specific information.
For complete details, see our Platform Improvement Data Policy.
Our commitments are:
- Contractually binding in our Terms of Service
- Required of our AI providers through Business Associate Agreements
- Technically enforced through PHI stripping before any improvement data leaves the secure perimeter
4.3 Third-Party AI Providers
Our AI provider (Google) is bound by a Business Associate Agreement that requires:
- HIPAA-compliant data handling
- Prohibition on using your data for model training
- Appropriate security measures
- Breach notification obligations
5. Best Practices for Using Adjudica.AI
5.1 Effective Use
Do:
- Use AI as a starting point, not a final answer
- Verify all citations and factual claims
- Apply your professional judgment to AI outputs
- Use "Hover to Source" to understand AI reasoning
- Upload complete, relevant documents for best results
Don't:
- Submit AI outputs to courts without verification
- Rely on AI for novel legal theories without research
- Assume AI has considered all relevant factors
- Use AI outputs verbatim without review
5.2 When to Be Extra Careful
Exercise heightened scrutiny when:
- Stakes are high (significant exposure, important deadlines)
- Facts are complex or unusual
- Legal issues are novel or unsettled
- AI output contradicts your initial analysis
- Sources are not available for verification
6. Professional Responsibility Resources
6.1 California State Bar
- Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law (November 2023)
- California Rules of Professional Conduct, including Rules 1.1, 1.4, 1.6, and 5.3
6.2 American Bar Association
- Formal Opinion 512: Generative Artificial Intelligence Tools (July 2024)
6.3 Continuing Education
We recommend attorneys using AI tools:
- Complete CLE courses on legal technology ethics
- Stay current on State Bar AI guidance updates
- Review malpractice carrier guidance on AI use
7. Updates to This Disclosure
We will update this disclosure as:
- Our AI capabilities evolve
- New professional responsibility guidance is issued
- Regulatory requirements change
Material changes will be communicated via email and in-app notification. We encourage you to review this disclosure periodically.
8. Questions and Support
Technical Questions
Email: support@adjudica.ai
Ethics and Compliance Questions
Email: compliance@adjudica.ai
General Inquiries
Email: info@adjudica.ai
9. Acknowledgment
By using Adjudica.AI, you acknowledge that you have read and understood this AI Transparency Disclosure, including:
- The capabilities and limitations of our AI
- Your professional responsibilities when using AI-assisted tools
- The requirement to verify AI outputs before relying on them
- That you retain full responsibility for your work product
This AI Transparency Disclosure reflects our commitment to helping you use AI responsibly and ethically. We are committed to continuous improvement and welcome your feedback.
Glass Box Solutions, Inc.
From Black Box to Glass Box