Oslo GenAI Hackathon 2025
Business Innovation & Social Impact Track
Sentinel is an intelligent multi-agent system that automates ethical compliance monitoring of suppliers in global supply chains. Built with AWS Bedrock Agents, it helps procurement teams identify labor rights violations, environmental risks, and governance issues in real time, moving beyond manual research to AI-powered, evidence-based auditing.
Procurement teams struggle to manually vet thousands of suppliers for ethical violations. This leads to:
- Slow, error-prone research - Hours spent per supplier
- Missed violations - Critical news buried in local/obscure reports
- Reputational risk - Non-compliance with regulations like the EU Supply Chain Act
- Reactive responses - Issues discovered only after damage is done
A multi-agent system that combines AI reasoning with structured policy enforcement:
- Investigator Agent - Gathers intelligence from external news, NGO reports, and public data sources
- Auditor Agent - Cross-references findings against your internal "Code of Conduct" using RAG (Retrieval-Augmented Generation)
- Supervisor Agent - Orchestrates the workflow and generates structured audit reports
- 30-Second Audits: Automated supplier vetting with sub-minute turnaround
- Traffic Light Dashboard: Red/Yellow/Green risk scores across Labor, Environment, and Governance
- Evidence-Based Reports: Every flag cites specific sources with dates and severity scores
- Policy-Driven: RAG-powered compliance checking against your internal Code of Conduct
- Nuanced Analysis: Distinguishes between "Allegations" and "Proven Violations"
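The traffic-light scoring described above could be implemented along these lines. This is an illustrative sketch only: the 0-10 severity scale, the thresholds, and the category names are assumptions, not the project's actual logic.

```python
# Illustrative traffic-light scoring (assumed thresholds and scale, not the project's real logic).
RED, YELLOW, GREEN = "RED", "YELLOW", "GREEN"

def risk_light(severity: float) -> str:
    """Map a 0-10 severity score to a traffic-light label."""
    if severity >= 7.0:
        return RED
    if severity >= 4.0:
        return YELLOW
    return GREEN

def overall_risk(scores: dict) -> str:
    """Overall risk is the worst light across the category scores (e.g. Labor, Environment, Governance)."""
    order = {GREEN: 0, YELLOW: 1, RED: 2}
    lights = [risk_light(s) for s in scores.values()]
    return max(lights, key=order.get)
```

For example, `overall_risk({"labor": 8.2, "environment": 3.0, "governance": 2.1})` yields `"RED"`, since a single severe finding should dominate the dashboard indicator.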
```
User Query → Supervisor Agent
  ├── Investigator Agent (External Intelligence)
  │     └── Search News/Reports/Public Data
  ├── Auditor Agent (Policy Enforcement)
  │     └── Query Knowledge Base (RAG)
  └── Generate Final JSON Report
```
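The flow above can be sketched in plain Python with stubbed agents. The function names, report fields, and return shapes here are hypothetical; in the real system each call would be a Bedrock Agent invocation rather than a local function.

```python
import json

# Stubbed agents standing in for Bedrock Agent invocations (hypothetical interfaces).
def investigator_agent(supplier: str) -> list:
    """Gather external findings; the real agent searches news, NGO reports, and public data."""
    return [{"source": "example-news", "claim": f"allegation about {supplier}", "severity": 6}]

def auditor_agent(findings: list) -> list:
    """Cross-reference findings against policy; the real agent queries the RAG knowledge base."""
    return [{"policy_clause": "CoC-4.2", "finding": f, "status": "Allegation"} for f in findings]

def supervisor_agent(supplier: str) -> str:
    """Orchestrate the sub-agents and emit a structured JSON report."""
    findings = investigator_agent(supplier)
    audit = auditor_agent(findings)
    return json.dumps({"supplier": supplier, "findings": findings, "audit": audit})
```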
- AI Core: AWS Bedrock Agents (Claude 3.5 Sonnet)
- Knowledge Base: AWS Bedrock Knowledge Base (RAG for policy documents)
- Frontend: Streamlit (Python)
- Infrastructure: AWS Lambda (Action Groups for custom tools)
- Development Approach: Spec-Driven Development (see `specs/SPECIFICATION.md`)
This project follows spec-driven development. All implementation is based on formal specifications:
- PRD.md - Product Requirements (functional/non-functional requirements, KPIs)
- SPECIFICATION.md - System Specification (architecture, agent definitions, API contracts)
- use_case_diagram.mermaid - Use Case Diagram
Development Rule: All code changes must reference and comply with the specifications above.
- Time Saved: 90% reduction in manual research time per supplier
- Accuracy: 80%+ correct identification of violations vs. human audit
- Latency: <30 seconds per audit
- Reliability: 95%+ schema-compliant JSON outputs
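The schema-compliance KPI could be measured with a lightweight validator like the one below. The required field names are an assumption for illustration; the authoritative schema lives in the project's specifications.

```python
import json

# Assumed top-level report fields, for illustration only (the real schema is in the spec).
REQUIRED_FIELDS = {"supplier", "overall_risk", "categories", "evidence"}

def is_schema_compliant(raw: str) -> bool:
    """Return True if the agent output parses as JSON and has the required top-level fields."""
    try:
        report = json.loads(raw)
    except json.JSONDecodeError:
        return False
    return isinstance(report, dict) and REQUIRED_FIELDS <= report.keys()
```

Running this check over a batch of agent outputs gives the "schema-compliant JSON" percentage directly.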
- AWS Account with Bedrock access enabled
- Python 3.9+
- AWS CLI installed and configured
- Appropriate IAM permissions for CloudFormation, Lambda, S3, and Bedrock
- Clone the repository

  ```bash
  git clone https://github.com/shailwx/ethos-chain.git
  cd ethos-chain
  ```

- Run the setup script

  ```bash
  chmod +x scripts/setup.sh
  ./scripts/setup.sh
  ```
This will:
- Create virtual environment
- Install all dependencies
- Create .env configuration file
- Run initial tests
- Configure AWS (if not already done)

  ```bash
  aws configure
  # Enter your AWS Access Key ID, Secret Key, and preferred region
  ```

- Run the dashboard locally (with mock data)

  ```bash
  source venv/bin/activate
  streamlit run src/dashboard/app.py
  ```
For complete AWS deployment with Bedrock Agents:
- Deploy the infrastructure

  ```bash
  chmod +x scripts/deploy.sh
  ./scripts/deploy.sh
  ```
This automated script will:
- Deploy CloudFormation stack (S3, Lambda, IAM roles)
- Upload policy documents to S3
- Deploy Lambda function code
- Create .env configuration
- Create the Bedrock Knowledge Base
  - Go to AWS Bedrock Console → Knowledge Bases
  - Create a new Knowledge Base
  - Use the S3 bucket created by CloudFormation
  - Select the embedding model: `amazon.titan-embed-text-v1`
  - Note the Knowledge Base ID
- Create the Bedrock Agents, following the detailed instructions in `infrastructure/bedrock/README.md`:
  - Create the Investigator Agent (with Lambda Action Group)
  - Create the Auditor Agent (with Knowledge Base)
  - Create the Supervisor Agent (orchestrates the sub-agents)
  - Note all Agent IDs and Alias IDs
- Update the configuration: edit your `.env` file with the Agent IDs and Knowledge Base ID:

  ```bash
  SUPERVISOR_AGENT_ID=your-agent-id
  SUPERVISOR_ALIAS_ID=your-alias-id
  KNOWLEDGE_BASE_ID=your-kb-id
  # ... etc
  ```
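A minimal sketch of how the dashboard might read these values at startup. The variable names come from the `.env` example above; the loader itself (and its fail-fast behavior) is illustrative, not the project's actual code.

```python
import os

# Variable names taken from the .env example; the loader is an illustrative sketch.
REQUIRED_VARS = ["SUPERVISOR_AGENT_ID", "SUPERVISOR_ALIAS_ID", "KNOWLEDGE_BASE_ID"]

def load_config() -> dict:
    """Read the required IDs from the environment, failing fast if anything is missing."""
    missing = [v for v in REQUIRED_VARS if not os.getenv(v)]
    if missing:
        raise RuntimeError(f"Missing configuration: {', '.join(missing)}")
    return {v: os.environ[v] for v in REQUIRED_VARS}
```

Failing fast at startup surfaces a misconfigured `.env` immediately, instead of as a confusing connection error deep inside an agent call.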
- Test the complete system

  ```bash
  streamlit run src/dashboard/app.py
  ```
To test without AWS deployment:

```bash
# Ensure USE_MOCK_DATA=true in .env
streamlit run src/dashboard/app.py
```

Try these demo suppliers:
- GreenTech Manufacturing (Expected: GREEN risk)
- Global Textiles Inc (Expected: YELLOW risk)
- QuickProd Factories (Expected: RED risk)
- Run the dashboard

  ```bash
  streamlit run src/dashboard/app.py
  ```
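In mock mode, the expected results for the demo suppliers above could come from a simple lookup like this. It is an illustrative stand-in with an assumed structure; the project's actual mock data lives in the repository.

```python
# Illustrative mock results for the three demo suppliers (assumed structure, not the repo's mock data).
MOCK_RESULTS = {
    "GreenTech Manufacturing": "GREEN",
    "Global Textiles Inc": "YELLOW",
    "QuickProd Factories": "RED",
}

def mock_audit(supplier: str) -> dict:
    """Return a canned risk rating, defaulting to YELLOW for unknown suppliers."""
    return {"supplier": supplier, "overall_risk": MOCK_RESULTS.get(supplier, "YELLOW")}
```

Defaulting unknown suppliers to YELLOW keeps the demo resilient to typos without silently reporting them as low risk.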
Current Phase: Ready for Deployment
Target: Oslo GenAI Hackathon 2025 Demo
- Multi-agent system architecture (Supervisor, Investigator, Auditor)
- Streamlit dashboard implementation with traffic light indicators
- CloudFormation infrastructure templates
- Lambda Action Group functions with error handling
- Bedrock Agent configuration files
- Knowledge Base integration setup
- Unit and integration tests (90%+ coverage)
- Mock data for development and demo
- Automated deployment scripts
- Comprehensive documentation
- Enable Bedrock model access in the AWS Console
- Run the deployment script:

  ```bash
  ./scripts/deploy.sh
  ```

- Create the Knowledge Base in the AWS Bedrock Console
- Create the three Bedrock Agents (see `infrastructure/bedrock/README.md`)
- Update `.env` with your Agent IDs and test end-to-end
The system is fully functional with mock data and ready for hackathon demonstration. AWS deployment is optional but recommended for live data.
- Infrastructure Setup - CloudFormation deployment guide
- Bedrock Agents - Agent creation and configuration
- Lambda Functions - Action Group implementation
- Testing Guide - Running and writing tests
- Deployment Scripts - Automation tools
- Demo Data - Sample scenarios for presentation
Run tests with different configurations:

```bash
# Run all tests
./scripts/test.sh all

# Unit tests only
./scripts/test.sh unit

# Integration tests
./scripts/test.sh integration

# With coverage report
./scripts/test.sh coverage

# Code quality checks
./scripts/test.sh lint
```

"Model access not enabled"
- Go to AWS Bedrock Console → Model access
- Request access to Claude 3.5 Sonnet and Titan Embeddings
- Wait for approval (usually instant)
Lambda timeout errors
- Increase the timeout in `infrastructure/iac/template.yaml`
- Deploy the updated template
Knowledge Base sync issues
- Verify S3 bucket has policy documents
- Manually trigger sync in Bedrock Console
- Check CloudWatch logs for errors
Dashboard connection errors
- Verify Agent IDs in .env are correct
- Check AWS credentials are configured
- Ensure you're in the correct AWS region
See individual component READMEs for specific troubleshooting.
This is a hackathon project. Contributions are welcome! Please:
- Read the specifications (`specs/SPECIFICATION.md`)
- Ensure changes align with the PRD
- Run tests before submitting: `./scripts/test.sh all`
- Submit PRs with clear references to spec sections
This project is licensed under the MIT License - see the LICENSE file for details.
- Atif Usman - Lead Architect / Product Owner
- Naresh Gaddam Reddy - Tech Lead
- Shailendra Singh Chauhan - Chief Engineer
Event: Oslo GenAI Hackathon 2025
Track: Business Innovation & Social Impact
Built with ❤️ for ethical supply chains