
AI Contextual Governance

DuraGraph provides infrastructure-level AI contextual governance that adapts to intent, sensitivity, environment, and trust boundaries.

Traditional rule-based governance fails modern AI systems:

Challenge                                          Impact
45% of enterprises experienced GenAI data leakage  Static rules can’t prevent dynamic threats
Only 25% have implemented AI governance            Governance is seen as blocker, not enabler
Multi-modal AI combines diverse data               Single rule sets can’t cover all scenarios

The missing dimension: CONTEXT determines acceptable risk, privacy requirements, reasoning boundaries, and interaction patterns.

┌──────────────────────────────────────────────────────────────────────┐
│                    AI CONTEXTUAL GOVERNANCE LAYER                    │
├──────────────────────────────────────────────────────────────────────┤
│                                                                      │
│  ┌────────────────┐    ┌────────────────┐    ┌────────────────┐      │
│  │                │    │     Policy     │    │   Automated    │      │
│  │ Context Engine │    │ Orchestration  │    │      Risk      │      │
│  │                │    │     Layer      │    │   Evaluation   │      │
│  └───────┬────────┘    └───────┬────────┘    └───────┬────────┘      │
│          │                     │                     │               │
│          └─────────────────────┼─────────────────────┘               │
│                                │                                     │
│  ┌────────────────┐     ┌──────┴───────┐     ┌────────────────┐      │
│  │       AI       │     │  Behavioral  │     │ Multi-Environ  │      │
│  │ Observability  │◄────┤  Guardrails  ├─────┤  Enforcement   │      │
│  │                │     │              │     │                │      │
│  └────────────────┘     └──────────────┘     └────────────────┘      │
│                                                                      │
└──────────────────────────────────────────────────────────────────────┘

Context Engine

Interprets metadata, domain rules, user intent, and operational signals to determine the appropriate governance response.

Policy Orchestration

Programmable, composable governance building blocks with hot-reload and inheritance.
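
Composition and inheritance let teams layer narrow policies on top of shared baselines instead of duplicating rules. A minimal sketch of what that could look like, reusing the Policy fields from the example further below; the extends parameter is an assumption for illustration, not a confirmed part of the API:

from duragraph.governance import Policy

# A broad baseline shared across all assistants
baseline = Policy(
    name="org_baseline",
    guardrails=[
        {"type": "output_filter", "config": {"block_pii": True}},
    ],
    audit_level="full",
)

# A narrower policy layered on the baseline; `extends` is a
# hypothetical parameter shown here to illustrate inheritance
billing = Policy(
    name="billing_support",
    extends="org_baseline",
    guardrails=[
        {"type": "topic_restriction", "config": {"blocked": ["refund_overrides"]}},
    ],
)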

Behavioral Guardrails

Adaptive guardrails that prevent misuse, hallucinations, and drift from expected behavior.

Risk Evaluation

Real-time scoring based on sensitivity, impact, and probability of misalignment.

from duragraph import Graph, State, llm_node
from duragraph.governance import GovernanceEngine, Policy

# Create governance engine
governance = GovernanceEngine()

# Define a policy
policy = Policy(
    name="customer_support",
    guardrails=[
        {"type": "output_filter", "config": {"block_pii": True}},
        {"type": "topic_restriction", "config": {"blocked": ["competitor_info"]}},
    ],
    audit_level="full",
)
governance.register_policy(policy)

@Graph
class GovernedAssistant:
    @llm_node(governance=governance)
    async def respond(self, state: State) -> State:
        # Governance is applied automatically around the LLM call
        response = await self.llm.complete(state.messages)
        state.response = response
        return state
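
Invoking the governed node then looks like any other graph call. A short usage sketch; constructing State from keyword arguments is an assumption for illustration:

# Usage sketch: State construction from keyword arguments is assumed
assistant = GovernedAssistant()
state = State(messages=[{"role": "user", "content": "Why was I charged twice?"}])
result = await assistant.respond(state)  # guardrails and audit applied here
print(result.response)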
from duragraph.governance import ContextEngine, GovernanceProfile

# Context engine evaluates situational factors
context_engine = ContextEngine()

# Evaluate context
profile = await context_engine.evaluate(
    data_metadata={"classification": "confidential", "contains_pii": True},
    user_context={"role": "support_agent", "department": "customer_success"},
    intent="answer_billing_question",
    environment="production",
)

# Profile contains:
# - risk_score: 0.65
# - allowed_actions: ["read_account", "view_invoice"]
# - required_controls: ["audit_log", "pii_redaction"]
# - applicable_policies: ["customer_support", "pii_protection"]
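
Downstream code can then gate actions on the returned profile. A sketch using the attribute names from the comment above:

# Gate an action on the evaluated profile (attribute names follow
# the comment above)
def authorize(profile, action: str) -> None:
    if action not in profile.allowed_actions:
        raise PermissionError(f"{action!r} blocked by governance profile")

authorize(profile, "view_invoice")    # allowed
authorize(profile, "delete_account")  # raises PermissionError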

DuraGraph evaluates risk across multiple dimensions:

sensitivity_factors = {
    "classification_level": "confidential",  # public, internal, confidential, restricted
    "pii_presence": True,
    "regulatory_scope": ["GDPR", "HIPAA"],
}

impact_factors = {
    "reversibility": "low",       # Can the decision be undone?
    "affected_parties": 50,       # Number of people impacted
    "financial_exposure": 10000,  # Potential monetary impact
}

operational_factors = {
    "time_pressure": "normal",    # urgent, normal, relaxed
    "human_oversight": True,      # Is human review available?
    "system_confidence": 0.85,    # Model certainty
}
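
One plausible way to blend these dimensions into the single risk_score seen earlier is a weighted sum over normalized factors. The weights and mappings below are assumptions for illustration, not DuraGraph's actual formula:

# Illustrative scoring only; the weights and mappings are assumptions,
# not DuraGraph's actual formula
CLASSIFICATION_SCORES = {"public": 0.1, "internal": 0.3, "confidential": 0.7, "restricted": 1.0}

def score_risk(sensitivity: dict, impact: dict, operational: dict) -> float:
    s = CLASSIFICATION_SCORES[sensitivity["classification_level"]]
    if sensitivity["pii_presence"]:
        s = min(1.0, s + 0.2)  # PII raises sensitivity
    i = min(1.0, impact["affected_parties"] / 1000 + impact["financial_exposure"] / 100_000)
    o = 1.0 - operational["system_confidence"]  # low model confidence = higher risk
    return 0.5 * s + 0.3 * i + 0.2 * o

print(score_risk(sensitivity_factors, impact_factors, operational_factors))  # ≈ 0.5 for the factors above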

DuraGraph supports progressive governance maturity:

Level  Name         Characteristics
1      Reactive     Static policies, manual updates, post-hoc audit
2      Proactive    Dynamic policies, real-time evaluation, continuous monitoring
3      Adaptive     Self-adjusting policies, predictive risk, autonomous guardrails
4      Intelligent  Goal-aware governance, self-healing controls, strategic trust

DuraGraph governance aligns with:

  • EU AI Act - Risk classification and transparency requirements
  • NIST AI RMF - Risk management framework controls
  • ISO 42001 - AI management system certification
  • SOC 2 - Trust service criteria
  • HIPAA - Healthcare data protection
Context evaluation is also exposed over the REST API:
POST /api/v1/governance/context/evaluate
{
  "data_metadata": {
    "classification": "confidential",
    "source": "customer_database"
  },
  "user_context": {
    "user_id": "user_123",
    "role": "analyst"
  },
  "intent": "generate_report",
  "environment": "production"
}

Response:

{
  "governance_profile": {
    "policies": ["data_analyst", "pii_protection"],
    "risk_level": "medium"
  },
  "risk_score": 0.45,
  "allowed_actions": ["read", "aggregate", "export_anonymized"],
  "required_controls": ["audit_log", "data_minimization"]
}
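
From application code, the same evaluation can be requested with any HTTP client. A sketch using requests; the host and auth header are placeholder assumptions:

import requests

# Host and auth header are placeholders, not real DuraGraph defaults
resp = requests.post(
    "https://duragraph.example.com/api/v1/governance/context/evaluate",
    headers={"Authorization": "Bearer <token>"},
    json={
        "data_metadata": {"classification": "confidential", "source": "customer_database"},
        "user_context": {"user_id": "user_123", "role": "analyst"},
        "intent": "generate_report",
        "environment": "production",
    },
    timeout=10,
)
resp.raise_for_status()
profile = resp.json()
if "read" in profile["allowed_actions"]:
    ...  # proceed with the governed read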
Policies themselves are managed through dedicated endpoints:
# List policies
GET /api/v1/governance/policies
# Create policy
POST /api/v1/governance/policies
# Simulate policy
POST /api/v1/governance/policies/simulate
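
The simulate endpoint is useful for testing a policy against a sample context before activating it. A plausible request body, reusing the Policy fields from the Python example earlier; the exact schema is an assumption:

POST /api/v1/governance/policies/simulate
{
  "policy": {
    "name": "customer_support_v2",
    "guardrails": [
      {"type": "output_filter", "config": {"block_pii": true}}
    ],
    "audit_level": "full"
  },
  "sample_context": {
    "intent": "answer_billing_question",
    "environment": "staging"
  }
}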