1. Overview
SakhiChat uses artificial intelligence to power chatbot conversations. We believe in transparent AI: customers and end users deserve to know how AI is used, what data flows through it, and what its limitations are.
This page complements our Privacy Policy and Terms of Service.
2. How Our AI Works
When a visitor chats with a SakhiChat-powered bot:
- The visitor's message is received by our server.
- Relevant context is assembled — including the business's knowledge base (FAQs, uploaded documents, scraped website content), recent conversation history, and configuration settings.
- The message and context are sent to a large language model (LLM) provider (currently OpenAI).
- The LLM generates a response.
- The response is returned to the visitor and stored in the conversation log.
For complex queries — or when explicitly requested — the conversation can be handed over to a human agent. End users can always ask to speak to a human.
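The request flow above can be sketched in simplified form. All names here (`assemble_context`, `handle_message`, `stub_llm`) are illustrative, not SakhiChat's actual code; a stub stands in for the LLM provider so the sketch runs without credentials:

```python
# Simplified sketch of the chat pipeline described above.
# All names are illustrative; the real service is more involved.

def assemble_context(knowledge_base, history, max_history=10):
    """Gather only what the LLM needs: the business's knowledge base
    and the most recent conversation turns (older turns are dropped)."""
    return {"knowledge": knowledge_base, "history": history[-max_history:]}

def handle_message(message, knowledge_base, history, llm):
    """Receive a visitor message, build context, call the LLM provider,
    store both turns in the conversation log, and return the reply."""
    context = assemble_context(knowledge_base, history)
    reply = llm(message, context)  # e.g. a chat-completion API call
    history.append({"role": "user", "content": message})
    history.append({"role": "assistant", "content": reply})
    return reply

# Stand-in for the provider call, so this sketch is self-contained.
def stub_llm(message, context):
    return f"(AI) You asked: {message!r}"

log = []
answer = handle_message("What are your opening hours?",
                        ["We open 9-5 on weekdays."], log, stub_llm)
print(answer)
print(len(log))  # both turns are stored in the conversation log
```

Escalation to a human agent would sit on top of this loop, replacing the `llm` callable with a handoff to the customer's team.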
3. AI Providers We Use
We currently use the following AI providers:
| Provider | Models used | Purpose |
|---|---|---|
| OpenAI | GPT-class models (e.g. GPT-4 family) | Generating chatbot responses |
The specific model used may change over time as we evaluate quality, latency, and cost. We will update this page when material changes are made.
The provider's own terms govern this processing: see the OpenAI Business Terms and the OpenAI API Data Usage Policies.
4. How Your Data Is Handled
When we send data to an AI provider:
- Data is transmitted over encrypted connections (TLS).
- Only the data needed to generate a response is sent — not your full account or unrelated business data.
- The conversation message, relevant knowledge-base content, and conversation history (limited to recent context) are included.
- API keys and authentication tokens are never sent to the AI provider.
AI providers process the data on our behalf as our subprocessors (see Privacy Policy, Section 6).
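A minimal sketch of this data-minimisation rule (the function name and field layout are hypothetical, not our actual payload format):

```python
def build_provider_payload(account, conversation, knowledge_snippets,
                           recent_turns=10):
    """Build the request body sent to the AI provider over TLS.
    Note what is *excluded*: account data, API keys, and auth tokens
    stay on our side and are never forwarded."""
    return {
        "message": conversation[-1],                # the visitor's latest message
        "history": conversation[-recent_turns:-1],  # recent context only
        "knowledge": knowledge_snippets,            # relevant KB content
        # deliberately absent: account["api_key"], billing details, etc.
    }

# Hypothetical example data.
account = {"api_key": "sk-secret", "plan": "pro"}  # never leaves our server
conversation = [
    {"role": "user", "content": "Hi"},
    {"role": "assistant", "content": "Hello!"},
    {"role": "user", "content": "When do you open?"},
]
payload = build_provider_payload(account, conversation,
                                 ["Open 9-5 on weekdays."])
print("api_key" in str(payload))  # False: credentials are not in the payload
```

The key design point is that the payload is built by inclusion (only the listed fields), not by filtering a larger object, so nothing can leak by omission of a filter rule.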
5. Model Training
We do not use customer data to train any AI models. Specifically:
- We do not train our own models on customer data.
- OpenAI does not use API data submitted by us to train its models, per its API data usage policy.
- If we ever consider training models on aggregated, anonymised data, we will update this page and (where required) request consent.
6. AI Limitations
AI is powerful but imperfect. Customers and end users should understand that:
- AI can make mistakes. Responses may contain inaccuracies, contradictions, or fabricated information ("hallucinations") even when the knowledge base is correct.
- AI does not have true understanding. It generates plausible text from patterns in its training data, not from reasoning or external verification.
- AI may give outdated information. Models have a knowledge cutoff date; recent events may not be reflected.
- AI is sensitive to phrasing. Slightly different questions can produce noticeably different answers.
- AI may inherit biases. Training data reflects biases present on the internet; outputs can sometimes reflect those biases.
We continually work to improve quality, but we do not guarantee accuracy. For anything important — health, legal, financial, safety — please verify with a qualified human or authoritative source.
7. Human Oversight
SakhiChat is designed to keep humans in control:
- Handover to humans: end users can request a human agent at any time. Customers can configure their bot to escalate certain queries automatically.
- Take-over: the customer's team can take over any conversation from the AI in real time.
- Conversation logs: every AI message is logged so customers can review what was said.
- Knowledge base curation: customers control the knowledge the AI uses. Removing content prevents the AI from referencing it in future responses.
8. Transparency to End Users
We require our customers to be transparent with end users when an AI is involved. Customers must:
- Not misrepresent the AI as a human.
- Not impersonate real people through the AI.
- Comply with applicable AI-disclosure laws — including the EU AI Act, where relevant — when deploying chatbots to end users.
SakhiChat's default chatbot interface labels the bot as an AI assistant. Customers can rename the bot but should not configure it to deceive users about its nature.
9. Prohibited Uses
The following uses of our AI Services are prohibited:
- Consequential decisions without human review — using AI output as the sole basis for decisions about people's credit, employment, healthcare, housing, education, insurance, legal status, or other significant rights.
- Deception or manipulation — using AI to deceive end users about its nature, to manipulate emotions, or to exploit psychological vulnerabilities.
- Disinformation — generating or spreading false content intended to mislead.
- Impersonation — impersonating specific real people, organisations, or officials.
- Illegal content — content that is unlawful, infringes rights, or facilitates illegal activity.
- Targeting minors — using AI in ways that exploit, harm, or sexualise minors.
- Critical infrastructure — using AI in life-support, emergency response, weapons systems, or similar safety-critical contexts.
- Surveillance — using AI to surveil individuals without lawful basis or consent.
We reserve the right to suspend or terminate accounts that violate these rules.
10. EU AI Act Compliance
The European Union's AI Act regulates AI systems based on risk level. Chatbots like SakhiChat are generally classified as limited-risk AI systems with transparency obligations.
To support customer compliance, we:
- Clearly mark AI-generated messages as such in our default UI.
- Provide this AI Disclosure for end users and businesses to reference.
- Maintain logs that allow review of AI interactions.
- Do not use AI in ways considered "unacceptable risk" under the AI Act (e.g. social scoring, real-time biometric identification).
Customers deploying SakhiChat to EU end users are responsible for their own compliance, including:
- Disclosing to users that they are interacting with an AI.
- Ensuring AI use in their context does not fall into a high-risk category requiring additional obligations.
11. Reporting Issues
If you encounter an AI response that is harmful, inaccurate, biased, or inappropriate, please report it to support@sakhichat.com with:
- The conversation ID or a screenshot.
- A brief description of the issue.
- The expected behaviour.
We take feedback seriously and use it to improve guardrails, knowledge-base recommendations, and prompt engineering.
12. Contact Us
- Email: support@sakhichat.com
- See also: Privacy Policy, Terms of Service, GDPR & DPA