Documentation Index
Fetch the complete documentation index at: https://docs.gc.ai/llms.txt
Use this file to discover all available pages before exploring further.
Demo 1: Contract Risk Heatmap
Spot risky clauses in any contract, in seconds.
What it shows
Click the Analyze risks button in the top-right corner of the sample agreement below. The API returns a structured response listing each issue (severity, clause reference, suggested redline). Use this pattern when you want in-product clause review for legal, sales, or procurement teams. In this demo, we’ve pre-loaded GC AI’s review of a sample Series A Purchase Agreement so you see the same results every time; in real use, every contract you send is reviewed live.
Why it matters
This integration is ideal for financings, vendor agreements, or any high-stakes paper where reviewers need machine-readable issue lists (categories, references, and rationale) that downstream tools or humans can triage. Structured output makes review repeatable and comparable across deals.
APIs used
| Method | Endpoint | Role in the demo |
|---|---|---|
| POST | /v1/chat/completions | Powers the run. The request carries the agreement text plus instructions (including JSON-schema guidance) so the model returns structured risk objects; the UI maps them into the Identified risks cards. |
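A minimal sketch of the request behind the Analyze risks button. The endpoint path comes from the table above; the schema field names, prompt wording, and message layout are illustrative assumptions, not confirmed API details:

```python
import json

# JSON-schema guidance for the structured risk objects.
# Field names (severity, clause_reference, suggested_redline) mirror the
# issue fields named above but are assumptions about the exact wire format.
RISK_SCHEMA = {
    "type": "object",
    "properties": {
        "risks": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "severity": {"type": "string"},
                    "clause_reference": {"type": "string"},
                    "suggested_redline": {"type": "string"},
                },
                "required": ["severity", "clause_reference", "suggested_redline"],
            },
        }
    },
}

def build_risk_request(agreement_text: str) -> dict:
    """Assemble a /v1/chat/completions body carrying the agreement text plus
    JSON-schema guidance so the model returns structured risk objects."""
    return {
        "messages": [
            {
                "role": "system",
                "content": (
                    "Review the agreement and list each risk as JSON "
                    "matching this schema: " + json.dumps(RISK_SCHEMA)
                ),
            },
            {"role": "user", "content": agreement_text},
        ],
    }

body = build_risk_request("THIS SERIES A PREFERRED STOCK PURCHASE AGREEMENT ...")
```

The UI would then map each object in the model's `risks` array into an Identified risks card.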
Demo 2: Inbox Auto-Review
Auto-review incoming contracts against your playbook.
What it shows
Three apps wired together: (1) your account, where your team manages the playbook (an example SaaS MSA playbook covering 8 common issues is shown below); (2) your email inbox where contracts arrive; and (3) a review tool you host that fires one live call per incoming contract.
Why it matters
Legal intake stops being a black hole: the playbook is the single source of truth in GC AI, mail is the trigger, and your app turns each attachment into structured findings reviewers can action.
APIs used in this demo (1 live · 3 simulated)
| Method | Endpoint | Role |
|---|---|---|
| POST | /v1/chat/completions | Live. Fires once per incoming contract on load. The prompt contains the playbook checks and the contract body; the model returns { flagged: [{ clause, comment, redline }] } (shape may vary by integration). Failures show in-row with retry, not silent defaults. |
| POST | /v1/files | Simulated. Production: upload each email attachment to GC AI when it arrives so the file lives in your workspace, not only in the inbox tool’s memory. |
| GET | /v1/folders | Simulated. Production: list folder/playbook structure from GC AI so the integration always pulls the latest checks instead of hard-coding the playbook into the prompt (as in the demo shortcut). |
| PATCH | /v1/files/{id} | Simulated. Production: after the model flags issues, move or label the file (for example into Needs redlines vs Approved) so the queue reflects what legal should do next. |
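One way to stitch the four calls above into a per-contract pipeline. The paths come from the table; the payload field names, the label values, and the `file_123` id are illustrative assumptions:

```python
def upload_attachment(filename: str, content: bytes) -> tuple:
    # POST /v1/files: store the email attachment in the GC AI workspace,
    # not only in the inbox tool's memory.
    return ("POST", "/v1/files", {"filename": filename, "content": content})

def fetch_playbook_folders() -> tuple:
    # GET /v1/folders: pull the latest playbook structure
    # instead of hard-coding checks into the prompt.
    return ("GET", "/v1/folders", None)

def review_request(playbook_checks: list, contract_text: str) -> tuple:
    # POST /v1/chat/completions: playbook checks plus contract body in one prompt.
    prompt = ("Checks:\n" + "\n".join(playbook_checks)
              + "\n\nContract:\n" + contract_text)
    return ("POST", "/v1/chat/completions",
            {"messages": [{"role": "user", "content": prompt}]})

def label_file(file_id: str, label: str) -> tuple:
    # PATCH /v1/files/{id}: move or label the file, e.g. Needs redlines vs Approved.
    return ("PATCH", f"/v1/files/{file_id}", {"label": label})

# The per-contract flow, in the order the table describes
# ("file_123" is a hypothetical id a real upload would return):
steps = [
    upload_attachment("msa.docx", b"..."),
    fetch_playbook_folders(),
    review_request(["Liability cap", "Auto-renewal"], "MASTER SERVICES AGREEMENT ..."),
    label_file("file_123", "Needs redlines"),
]
```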
Demo 3: Q&A (sample docs)
Query your company’s policies, get responses with citations.
What it shows
An example of how you can wire a folder of internal policies (signing authority, AI usage, logo and brand guidelines, and more) into your app or chatbot. Employees get quick, policy-grounded answers to everyday questions like “Do we have logo rights for this customer?” or “Who can sign this vendor agreement?”, with inline citations to the source paragraph.
Why it matters
Self-serve policy Q&A shrinks repeat intake: employees get auditable, source-linked answers because every claim ties back to a file in GC AI.
APIs used in this demo (1 live · 2 simulated)
| Method | Endpoint | Role |
|---|---|---|
| POST | /v1/chat/completions | Live. Your app sends the question plus the policy documents (inlined in the demo); the model returns { answer, sources[] }, which the UI renders as the answer and citation chips. |
| GET | /v1/folders | Simulated. Production: list the user’s folders so they can choose which policy library to query (HR vs. marketing vs. engineering playbooks), instead of the demo’s fixed Acme Corp: Company Policies folder. Same pattern as a bring-your-own-documents Q&A flow: your app picks the library folder, then passes GC AI file ids into chat instead of inlining full text. |
| GET | /v1/folders/{id}/files | Simulated. Production: list files in the selected folder (paginated files[] plus pagination), then pass returned file ids as file_ids: [...] on the chat call, hard-scoping the model and keeping prompts manageable for large libraries instead of inlining full text as the demo does. |
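A sketch of the production folder-scoped flow described above. The paginated `files[]` shape and the `file_ids` parameter follow the table; the exact field names, ids, and pagination keys are assumptions drawn from that description:

```python
# Stand-in for a GET /v1/folders/{id}/files response (hypothetical ids/fields):
files_page = {
    "files": [
        {"id": "file_signing", "name": "Signing Authority Policy.pdf"},
        {"id": "file_brand", "name": "Logo and Brand Guidelines.pdf"},
    ],
    "pagination": {"next_cursor": None},
}

def build_qa_request(question: str, files_page: dict) -> dict:
    """Hard-scope the chat call to the selected folder's files by passing
    file ids instead of inlining full policy text."""
    return {
        "file_ids": [f["id"] for f in files_page["files"]],
        "messages": [{"role": "user", "content": question}],
    }

body = build_qa_request("Who can sign this vendor agreement?", files_page)
```

For large libraries you would keep requesting pages while `pagination` reports a next cursor, accumulating file ids before the chat call.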
Demo 4: Triage & Route
Triage every legal request, route to the responsible team.
What it shows
A shared inbox where every incoming request is classified, risk-scored, and routed in a single API call. Routine items are handled quickly, standard matters go to junior counsel, high-risk or high-value ones escalate to senior counsel, and out-of-scope items are forwarded out. Everything follows your rules, so you only see what you need to action.
Why it matters
Shared inboxes can become bottlenecks: people search and sort through items assigned to others, or wait for direction. A triage-and-routing integration sends each request straight to the responsible team.
APIs used in this demo (1 live · 3 simulated)
| Method | Endpoint | Role |
|---|---|---|
| POST | /v1/chat/completions | Live. Your app fires one structured-extraction call per incoming ticket. The model returns { category, risk, route, rationale, suggestedReply } as JSON, which drives the routing lane and the auto-reply draft. |
| POST | /v1/files | Simulated. Production: upload any contract or document attached to the ticket so reviewers (and the model) can reference it by file_id throughout the lifecycle, not only from the email body. |
| POST | /v1/folders | Simulated. Production: ensure a folder exists for each lane (Auto-handled, Junior Counsel, Senior Counsel, Routed Out) so triaged tickets land in a navigable workspace structure, not just an in-memory column. |
| PATCH | /v1/files/{id} | Simulated. Production: move each attachment into the routing-lane folder the model picked, so counsel opening that lane in GC AI sees only the tickets routed there. |
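Routing off the model's structured reply can be sketched like this. The JSON keys come from the table above; the route values are assumptions that mirror the four lane folders the demo describes:

```python
import json

# Map of assumed model route values to the lane folders named above.
LANE_FOLDERS = {
    "auto": "Auto-handled",
    "junior_counsel": "Junior Counsel",
    "senior_counsel": "Senior Counsel",
    "routed_out": "Routed Out",
}

def route_ticket(model_reply: str) -> tuple:
    """Parse the model's JSON and pick the lane folder; a production app
    would then PATCH /v1/files/{id} to move the attachment into it."""
    result = json.loads(model_reply)
    folder = LANE_FOLDERS[result["route"]]
    return folder, result["suggestedReply"]

# Example reply in the shape the table describes (values are illustrative):
reply = json.dumps({
    "category": "vendor_contract",
    "risk": "high",
    "route": "senior_counsel",
    "rationale": "Contract value above approval threshold.",
    "suggestedReply": "Thanks, routing this to senior counsel for review.",
})
folder, draft = route_ticket(reply)
```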