Approval gates, PII detection, content policies, and execution limits. Your AI employees operate within strict boundaries you define.
Sistava gives you full control over what your AI employees do and do not do. Set up approval gates that require human sign-off before sensitive actions. Detect and redact PII automatically. Define content policies that prevent off-brand or inappropriate responses. Set execution limits to control costs and scope. Enforce company-wide policies across your entire AI workforce.
Trust is built through control, not promises. Sistava provides five layers of governance that ensure your AI employees operate within the boundaries you set. Approval gates add human oversight for sensitive actions. PII detection prevents data leaks. Content policies keep responses on-brand and compliant. Execution limits control costs and scope. Company policies enforce organization-wide standards.
These controls work together, not in isolation. A customer service AI employee might have content policies preventing it from discussing legal matters, PII detection redacting phone numbers, execution limits capping at 100 actions per hour, and an approval gate requiring human sign-off before issuing refunds over $100.
Every guardrail generates an audit log entry. When compliance asks "what did your AI do last Tuesday?", you have a timestamped, searchable record of every action, every decision, and every guardrail that fired.
Define actions that require human approval before execution. Refunds over a certain amount, messages to VIP customers, data deletions, or any action you flag as sensitive. The AI employee pauses, notifies the approver, and waits.
Every outgoing message is scanned for personally identifiable information. Phone numbers, email addresses, SSNs, credit card numbers, and other sensitive data are automatically redacted. Configurable sensitivity levels and entity types.
Define topics your AI employees must never discuss: internal pricing details, legal advice, competitor bashing, or politically sensitive subjects. The AI employee recognizes restricted topics and responds with a configured redirect message.
Cap the number of actions per hour, per day, or per task. Set cost ceilings per employee or per team. When a limit is reached, the AI employee pauses and notifies you rather than continuing to execute.
Set policies that apply to every AI employee in your organization. Brand voice guidelines, compliance requirements, data handling rules, and communication standards. New hires inherit these policies automatically.
Every action, every decision, every guardrail trigger is logged with timestamps, context, and outcomes. Export audit logs for compliance reviews. Search by employee, action type, date range, or guardrail event.
| Dimension | Traditional | With Sistava |
|---|---|---|
| Human oversight | AI operates autonomously. You review outputs after they reach the customer | Approval gates pause sensitive actions for human sign-off before execution |
| Data leak prevention | Hope the AI does not include PII in responses. Manual review to catch leaks after the fact | Automatic PII detection and redaction on every outgoing message. Leaks stopped before delivery |
| Content control | Prompt engineering and prayer. Jailbreaks bypass instructions regularly | Policy-based content filtering that operates independently of the AI model. Cannot be bypassed through prompt injection |
| Cost control | Monitor spending dashboards manually. Set billing alerts that notify after overspend | Execution limits enforce hard ceilings before costs occur. The AI employee stops instead of alerting after the fact |
| Compliance readiness | Piece together audit trails from logs, screenshots, and tribal knowledge | Complete, timestamped, searchable audit log of every action and guardrail event. Export-ready for auditors |
| Policy consistency | Each AI tool has its own configuration. Inconsistent policies across tools and teams | Company-wide policies cascade to every AI employee. New hires inherit all existing policies automatically |
**How do approval gates work?**

You define which actions require human approval: sending emails to customers, processing refunds, accessing specific data, or any custom action type. When the AI employee needs to perform a gated action, it pauses, sends a notification to the designated approver with full context, and waits. The approver reviews and either approves or rejects. If the approver does not respond within the timeout window, the action is denied by default.
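The pause-notify-wait flow with deny-by-default can be sketched in a few lines of Python. Names like `ApprovalGate` are illustrative, not Sistava's actual API, and a real system would wait asynchronously rather than block a thread:

```python
import time


class ApprovalGate:
    """Illustrative approval gate: pause a gated action until a human decides."""

    def __init__(self, notify, timeout_seconds):
        self.notify = notify              # callback that alerts the approver
        self.timeout = timeout_seconds
        self.decision = None              # set by approve() / reject()

    def approve(self):
        self.decision = True

    def reject(self):
        self.decision = False

    def request(self, action, context, poll_interval=0.01, clock=time.monotonic):
        """Notify the approver with full context, then wait for a decision."""
        self.notify(action, context)
        deadline = clock() + self.timeout
        while self.decision is None and clock() < deadline:
            time.sleep(poll_interval)
        # No decision when the window closes -> None -> denied by default.
        return bool(self.decision)
```

The key design point is the last line: a timeout maps to denial, so an unattended approval queue fails closed rather than open.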
**What types of PII does the system detect?**

The system detects and redacts phone numbers, email addresses, social security numbers, credit card numbers, bank account numbers, passport numbers, driver's license numbers, dates of birth, and physical addresses. You configure which entity types to scan for and whether to redact, mask, or flag them for review.
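A simplified sketch of the scan-then-redact/mask/flag behavior, assuming regex-based matching for a few entity types (production detectors are far more robust, using checksums, context, and trained models rather than these toy patterns):

```python
import re

# Toy patterns for illustration only; real detectors are much stricter.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d{4}[- ]?){3}\d{4}\b"),
}


def scrub(text, entity_types, mode="redact"):
    """Scan for the configured entity types; redact, mask, or flag matches."""
    findings = []
    for name in entity_types:
        for match in PATTERNS[name].finditer(text):
            findings.append((name, match.group()))
    if mode == "flag":
        return text, findings              # leave text intact, report matches
    for name, value in findings:
        if mode == "redact":
            text = text.replace(value, f"[{name.upper()} REDACTED]")
        elif mode == "mask":
            text = text.replace(value, "*" * len(value))
    return text, findings
```

Passing only the entity types you care about mirrors the configurable scan scope described above, and the three modes map to redact, mask, and flag-for-review.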
**Can content policies be bypassed by jailbreaking the AI model?**

No. Content policies operate as a separate filtering layer that runs after the AI model generates its response but before the response reaches the customer. Even if someone tricks the AI model through prompt injection, the content filter catches restricted topics, PII, and policy violations independently.
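A minimal sketch shows why this layer is independent of the model: it sees only the generated output, never the prompt, so injecting instructions into the model cannot disable it. Naive keyword matching stands in here for whatever detection Sistava actually uses:

```python
def apply_content_policy(model_output, restricted_topics, redirect_message):
    """Post-generation filter: inspects the model's OUTPUT, not its prompt,
    so prompt injection against the model cannot switch it off.
    Keyword matching is illustrative; real filters use classifiers."""
    lowered = model_output.lower()
    for topic in restricted_topics:
        if topic.lower() in lowered:
            return redirect_message, ("blocked", topic)
    return model_output, ("allowed", None)
```

Because the filter runs in ordinary application code between the model and the customer, it behaves the same whether the model was following its instructions or a jailbreak.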
**How do execution limits work?**

You set hard ceilings on actions per hour, actions per day, and total cost per billing period for each employee or team. When an AI employee reaches a limit, it pauses execution and notifies you. This is a hard stop, not a soft alert. The AI employee resumes only after you raise the limit or the time window resets.
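The hard-stop behavior can be modeled as a fixed-window counter: the check happens before each action, so nothing executes past the cap. This is an illustrative model of the semantics described above, not Sistava's implementation:

```python
import time


class ExecutionLimit:
    """Illustrative fixed-window action cap: a hard stop, not a soft alert."""

    def __init__(self, max_actions, window_seconds, on_pause, clock=time.monotonic):
        self.max_actions = max_actions
        self.window_seconds = window_seconds
        self.on_pause = on_pause          # notification callback
        self.clock = clock
        self.window_start = clock()
        self.count = 0

    def try_execute(self):
        now = self.clock()
        if now - self.window_start >= self.window_seconds:
            self.window_start, self.count = now, 0   # window reset: resume
        if self.count >= self.max_actions:
            self.on_pause()               # notify; nothing runs past the cap
            return False
        self.count += 1
        return True
```

The check-before-execute ordering is what distinguishes this from a billing alert: the limit is enforced before the cost occurs, and execution resumes automatically once the window resets (or, in the real product, when you raise the limit).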
**Is there an audit trail of guardrail events?**

Yes. Every guardrail event is logged with a timestamp, the AI employee involved, the action attempted, the guardrail that fired, and the outcome (blocked, redacted, sent for approval, or allowed). Logs are searchable, filterable, and exportable in standard formats for compliance review.
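A toy version of such a log shows the shape of the data: one structured entry per event, filterable on any field, exportable as JSON Lines. Field names are illustrative, not Sistava's schema:

```python
import datetime
import json


class AuditLog:
    """Illustrative append-only audit log of guardrail events."""

    def __init__(self):
        self.entries = []

    def record(self, employee, action, guardrail, outcome, context=None):
        self.entries.append({
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "employee": employee,
            "action": action,
            "guardrail": guardrail,
            "outcome": outcome,           # blocked, redacted, approval, allowed
            "context": context or {},
        })

    def search(self, **filters):
        """Filter entries on exact field matches (employee, outcome, ...)."""
        return [e for e in self.entries
                if all(e.get(k) == v for k, v in filters.items())]

    def export_jsonl(self):
        """One JSON object per line: a common export format for auditors."""
        return "\n".join(json.dumps(e) for e in self.entries)
```

Answering "what did your AI do last Tuesday?" then reduces to a search over structured entries rather than a hunt through screenshots.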
**Can different AI employees have different policies?**

Yes. Company-wide policies set a baseline that applies to all employees. On top of that, you configure employee-specific or team-specific policies. For example, your customer-facing employees might have stricter content policies than your internal research employees. Specific policies always override when they are more restrictive than the company baseline.
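The "more restrictive wins" cascade can be sketched as a merge over two policy dictionaries. This is an assumption about the semantics based on the description above, with illustrative policy keys: for set-valued policies (restricted topics) stricter means a larger set, and for numeric ceilings stricter means a lower value:

```python
def effective_policy(company, specific):
    """Merge a team/employee policy onto the company baseline.
    Specific values apply only when at least as restrictive."""
    merged = dict(company)
    for key, value in specific.items():
        base = merged.get(key)
        if base is None:
            merged[key] = value                  # new key: applies as-is
        elif isinstance(value, (set, frozenset)):
            merged[key] = base | value           # more restricted topics = stricter
        elif isinstance(value, (int, float)):
            merged[key] = min(base, value)       # lower ceiling = stricter
    return merged
```

A looser team limit can therefore never weaken the company baseline, which matches the rule that specific policies only override when they are more restrictive.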