Zero Health is a deliberately vulnerable healthcare portal designed to demonstrate critical security vulnerabilities in medical technology. Healthcare systems are prime targets for cyberattacks due to their valuable personal health information (PHI), financial data, and critical infrastructure. A single breach can compromise patient privacy, disrupt life-saving treatments, and violate regulations like HIPAA.
This educational platform demonstrates:
- Common web security vulnerabilities in healthcare applications
- AI/LLM integration security risks and prompt injection vulnerabilities
- The devastating impact of poor security practices in medical environments
- Why robust security is essential for protecting patient data and maintaining trust
For full list of challenges: (https://github.com/aligorithm/Zero-Health/blob/main/challenges.md)
Why Healthcare Security Matters: Medical devices, patient portals, and health records systems require the highest security standards. Vulnerabilities can lead to ransomware attacks shutting down hospitals, identity theft from exposed patient data, or even manipulation of medical devices. This application helps developers understand these risks before building real healthcare systems.
- Docker and Docker Compose (recommended setup)
- OpenAI-compatible API key for chatbot functionality (OpenAI, Groq, LM Studio, Ollama, etc.)
- OR Node.js (v16+) and PostgreSQL for manual setup
I made a Demo Video explaining everything.
git clone https://github.com/aligorithm/zero-health.git
cd zero-healthZero Health includes a containerized local LLM (Ollama) by default for complete offline operation. You can also use cloud AI providers.
# Uses local Ollama container - no API key needed
docker-compose up --build# Set provider to use cloud AI instead of local Ollama
export LLM_PROVIDER=openai
export OPENAI_API_KEY=sk-your-key-here
docker-compose up --build# Change Ollama port if you have a conflicting service
export OLLAMA_PORT=11436
docker-compose up --buildTo disable the Ollama service completely (if you only want to use cloud AI):
- Edit
docker-compose.yml - Comment out the entire
ollama:service block - Comment out the
ollama:dependency in theserver:section
Note: You may need to run docker-compose with sudo, and this may lead to environment variables not being passed from the shell. If you're having issues with the chatbot, try this:
OPENAI_API_KEY=$OPENAI_API_KEY docker-compose up --build- Frontend: http://localhost:3000
- Backend API: http://localhost:5000
- API Documentation: http://localhost:5000/api/docs
All passwords: password123
Staff Accounts:
- Admin:
[email protected]- Full system access - Doctor:
[email protected]- Patient management - Pharmacist:
[email protected]- Prescription management
Patient Accounts:
- Patient:
[email protected]- Personal health portal - Patient 2:
[email protected]- Additional test data
- Different capabilities for patients, doctors, pharmacists, and admins
- Real-time SQL query generation and execution
- Conversation memory and knowledge base
- Deliberate AI vulnerabilities: Prompt injection, SQL injection via LLM
- Patients: Book appointments, view lab results, manage prescriptions
- Doctors: Patient management, create lab results, write prescriptions
- Pharmacists: Prescription management, mark as collected
- Admins: User management, system statistics
- Appointment booking and management
- Lab results with medical imagery (SVG)
- Prescription management system
- Secure messaging between patients and providers
- PDF medical report generation
- SQL Injection: Login, search, AI chatbot queries
- Cross-Site Scripting (XSS): Stored in messages, reflected in search
- Command Injection: PDF report generation
- Insecure Direct Object References: Direct access to records by ID
- File Upload Vulnerabilities: Unrestricted file types
- Mass Assignment: Profile updates can modify any field including roles
- Information Disclosure: JWT secrets and system info exposed
- Prompt Injection: Manipulate AI behavior through crafted messages
- SQL Injection via LLM: AI generates malicious database queries
- Schema Disclosure: Complete database structure exposed through AI
- Role-Based Access Bypass: Weak AI query restrictions
- Context Manipulation: Exploit conversation history
- Patient Data Exposure: PHI accessible without proper authorization
- Medical Record Tampering: Ability to modify critical health information
- Prescription Fraud: Unauthorized prescription creation and modification
- Audit Trail Bypass: Actions not properly logged for compliance
To reset the entire database and get fresh sample data:
docker-compose down -v
docker-compose up --buildSample data is automatically created on first run, including realistic medical records, prescriptions, lab results, and user accounts.
Works with any OpenAI-compatible API:
# OpenAI (default)
export OPENAI_BASE_URL="https://api.openai.com/v1"
export OPENAI_MODEL="gpt-4o-mini"
# Groq (fast inference)
export OPENAI_BASE_URL="https://api.groq.com/openai/v1"
export OPENAI_MODEL="llama3-8b-8192"
# Local LM Studio
export OPENAI_BASE_URL="http://localhost:1234/v1"
export OPENAI_MODEL="your-local-model"
# Local Ollama
export OPENAI_BASE_URL="http://localhost:11434/v1"
export OPENAI_MODEL="llama3"By studying this application, learn:
- Healthcare Security Fundamentals - HIPAA compliance, PHI protection
- Web Application Security - OWASP Top 10 vulnerabilities in medical context
- AI Security - Prompt injection, LLM security, AI-generated code risks
- Database Security - SQL injection, access controls, audit logging
- API Security - Authentication bypass, IDOR, mass assignment
- File Security - Upload validation, path traversal, malware risks
- Incident Response - Identifying and containing healthcare breaches
React Native application with mobile-specific vulnerabilities (insecure storage, certificate pinning bypass)
Advanced multi-step attack chains and modern vulnerability scenarios
- Model extraction attacks
- Adversarial prompt techniques
- LLM jailbreak scenarios
- AI-powered automated exploitation
Contributions welcome! Please maintain educational vulnerability aspects and document new security issues.
Join our community to share your learnings, discuss exploits, and get help:
- π― Share Your Exploits - Post successful attack chains and creative exploitation techniques
- β Get Help - Ask questions if you're stuck on challenges
- π‘ Learning Insights - Share what you learned and help others
- π§ Technical Issues - Report setup problems or bugs
- π Feature Requests - Suggest new vulnerabilities or improvements
- Check out the Security Challenges Guide for hands-on exercises
- Browse GitHub Discussions for community solutions and tips
- Watch the Demo Video for setup and overview
# Choose AI provider (default: ollama for offline usage)
LLM_PROVIDER=ollama # Options: 'openai' or 'ollama'
# OpenAI/Cloud AI Settings (only needed if LLM_PROVIDER=openai)
OPENAI_API_KEY=your-api-key-here # Required for cloud AI
OPENAI_MODEL=gpt-4o-mini # Optional: model to use
OPENAI_BASE_URL=https://api.openai.com/v1 # Optional: API endpoint
# Ollama/Local AI Settings (only needed if LLM_PROVIDER=ollama)
OLLAMA_PORT=11435 # Optional: external port (default: 11435)
OLLAMA_MODEL=llama3.2:3b # Optional: model to usePOSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres
POSTGRES_DB=zero_health# No environment variables needed - just run:
docker-compose up --buildexport LLM_PROVIDER=openai
export OPENAI_API_KEY=sk-your-key-here
docker-compose up --buildexport LLM_PROVIDER=openai
export OPENAI_API_KEY=your-groq-key
export OPENAI_BASE_URL=https://api.groq.com/openai/v1
export OPENAI_MODEL=llama3-8b-8192
docker-compose up --buildexport OLLAMA_PORT=11436
docker-compose up --buildMIT License
