
- 25 May, 2025
Azure-Powered InfoSec Copilot
This project implemented an internal assistant for security policy questions inside Microsoft Teams. The objective was practical: reduce repetitive policy-routing work while keeping responses grounded in approved content.
Problem
Policy answers were slow and inconsistent because source material was distributed across documents and tribal knowledge. Common low-risk questions repeatedly hit security engineers instead of self-service pathways.
Approach
- Use Azure Cognitive Search over curated policy documents.
- Use Azure OpenAI for response generation.
- Serve responses in Teams to avoid workflow switching.
- Constrain output to grounded, policy-linked responses.
Retrieval pattern
Response quality depended more on retrieval quality than prompt style. Chunking, source freshness, and document hygiene had the largest impact.
User question
-> retrieve top policy chunks
-> build grounded prompt with source context
-> generate answer constrained to retrieved material
-> return answer + source-aware rationale
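The flow above can be sketched in a few lines. This is a minimal stand-in, not the project's code: the in-memory chunk list and the term-overlap scorer substitute for Azure Cognitive Search, and the function names (`retrieve_top_chunks`, `build_grounded_prompt`) are illustrative.

```python
# Stand-in corpus: in production these chunks come from the search index.
POLICY_CHUNKS = [
    {"id": "pwd-001", "source": "password-policy.md",
     "text": "Passwords must be rotated every 90 days."},
    {"id": "mfa-002", "source": "mfa-policy.md",
     "text": "MFA is required for all remote access."},
]

def retrieve_top_chunks(question: str, k: int = 3) -> list[dict]:
    """Score chunks by term overlap with the question (search stand-in)."""
    q_terms = set(question.lower().split())
    scored = [(len(q_terms & set(c["text"].lower().split())), c)
              for c in POLICY_CHUNKS]
    scored.sort(key=lambda sc: sc[0], reverse=True)
    return [c for score, c in scored[:k] if score > 0]

def build_grounded_prompt(question: str, chunks: list[dict]) -> str:
    """Assemble a prompt constrained to the retrieved material."""
    context = "\n".join(f"[{c['source']}] {c['text']}" for c in chunks)
    return (
        "Answer ONLY from the policy excerpts below. "
        "Cite the source file for each claim. "
        "If the excerpts do not cover the question, say so.\n\n"
        f"Policy excerpts:\n{context}\n\nQuestion: {question}"
    )

question = "How often must passwords be rotated?"
chunks = retrieve_top_chunks(question)
prompt = build_grounded_prompt(question, chunks)
```

The key point the sketch preserves is that the model never sees the question without the retrieved context, so answer quality tracks retrieval quality.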
Operational gotchas
- Stale policy index causes confident but outdated answers.
- Over-broad retrieval increases answer drift.
- Adoption drops if users must leave Teams to use the tool.
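The first two gotchas can be mitigated with simple filters at retrieval time. A hedged sketch follows; the 30-day freshness window and the 0.5 relevance cutoff are illustrative values, not the project's tuned settings.

```python
from datetime import datetime, timedelta, timezone

MAX_INDEX_AGE = timedelta(days=30)  # assumption: policies re-indexed monthly
MIN_RELEVANCE = 0.5                 # assumption: search-score cutoff

def filter_chunks(chunks: list[dict], now: datetime) -> list[dict]:
    """Drop chunks that are stale (outdated index) or weakly relevant (drift)."""
    return [
        c for c in chunks
        if now - c["indexed_at"] <= MAX_INDEX_AGE
        and c["score"] >= MIN_RELEVANCE
    ]

now = datetime(2025, 5, 25, tzinfo=timezone.utc)
chunks = [
    {"id": "fresh-high", "score": 0.9,
     "indexed_at": datetime(2025, 5, 20, tzinfo=timezone.utc)},
    {"id": "stale", "score": 0.9,
     "indexed_at": datetime(2025, 1, 1, tzinfo=timezone.utc)},
    {"id": "weak", "score": 0.2,
     "indexed_at": datetime(2025, 5, 20, tzinfo=timezone.utc)},
]
kept = filter_chunks(chunks, now)
```

Dropping the weakly relevant chunk narrows the context the model sees, which is the direct lever against answer drift; dropping the stale chunk trades coverage for correctness until the index refreshes.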
Implementation notes
Prompt guardrails were intentionally conservative. Where source confidence was weak, the assistant favored bounded answers over speculation.
If confidence < threshold:
- provide limited answer
- recommend escalation path
- avoid definitive policy interpretation
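The low-confidence branch above can be expressed as a small response gate. A sketch under stated assumptions: the 0.6 threshold and the `#security-policy-help` escalation channel are hypothetical, not project values.

```python
CONFIDENCE_THRESHOLD = 0.6  # assumption: tuned per deployment

def gate_response(answer: str, confidence: float) -> dict:
    """Return a bounded response with an escalation path when confidence is low."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return {"answer": answer, "escalate": False}
    return {
        # Bounded answer: no definitive policy interpretation.
        "answer": ("I found partially relevant policy material but cannot "
                   "give a definitive interpretation. Please escalate."),
        "escalate": True,
        "escalation_path": "#security-policy-help",  # hypothetical channel
    }

high = gate_response("Rotate passwords every 90 days.", 0.85)
low = gate_response("Rotate passwords every 90 days.", 0.30)
```

Keeping the gate outside the prompt means the escalation behavior is deterministic rather than dependent on the model following instructions.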
Results
- Faster response for recurring policy questions.
- More consistent guidance across teams.
- Reduced manual ticket load for repetitive lookup tasks.
Next steps
- Automate index refresh from policy update events.
- Add telemetry on unanswered/low-confidence question clusters.
- Refine escalation routing for high-impact policy decisions.