Data Warehouse, ETL, and Analytics for OpenStreetMap Notes
This repository contains the analytics and data warehouse components for the OSM Notes profiling system. It provides ETL (Extract, Transform, Load) processes, a star schema data warehouse, and datamarts for analyzing OSM notes data.
New to the project? Follow this reading path to understand the system (~1.5-2 hours):
- Start Here (20 min)
  - Read this README.md (you're here!)
  - Understand the project purpose and main features
  - Review the Quick Start guide below
- Project Context (30 min)
  - Read docs/Rationale.md - Why this project exists
  - Understand the problem it solves and design decisions
- System Architecture (30 min)
  - Read docs/DWH_Star_Schema_ERD.md - Data warehouse structure
  - Understand star schema design and table relationships
- Getting Started (30 min)
  - Read bin/dwh/ENTRY_POINTS.md - Which scripts to use
  - Review bin/dwh/ENVIRONMENT_VARIABLES.md - Configuration
  - Follow the Quick Start guide in this README
- Troubleshooting (10 min)
  - Bookmark docs/Troubleshooting_Guide.md - Common issues and solutions
Total time: ~2 hours for a complete overview.
- Foundation (1 hour)
  - docs/Rationale.md - Project context (30 min)
  - docs/DWH_Star_Schema_ERD.md - Data model (30 min)
- Implementation (1 hour)
  - docs/ETL_Enhanced_Features.md - ETL capabilities (30 min)
  - bin/dwh/ENTRY_POINTS.md - Script entry points (15 min)
  - bin/dwh/ENVIRONMENT_VARIABLES.md - Configuration (15 min)
- Development Workflow (45 min)
  - tests/README.md - Testing guide (20 min)
  - docs/CI_CD_Guide.md - CI/CD workflows (25 min)
- Deep Dive (as needed)
  - docs/DWH_Star_Schema_Data_Dictionary.md - Complete schema reference
  - CONTRIBUTING.md - Contribution guidelines
  - docs/README.md - Complete documentation index
- Data Model (1 hour)
  - docs/DWH_Star_Schema_ERD.md - Schema overview (30 min)
  - docs/DWH_Star_Schema_Data_Dictionary.md - Column definitions (30 min)
- Data Access (30 min)
  - bin/dwh/profile.sh - Profile generation
  - docs/Dashboard_Analysis.md - Available metrics
- Deployment (45 min)
  - This README - Setup and deployment (20 min)
  - bin/dwh/ENTRY_POINTS.md - Script entry points (15 min)
  - bin/dwh/ENVIRONMENT_VARIABLES.md - Configuration (10 min)
- Operations (1 hour)
  - docs/ETL_Enhanced_Features.md - ETL operations (30 min)
  - docs/DWH_Maintenance_Guide.md - Maintenance procedures (30 min)
- Troubleshooting (30 min)
  - docs/Troubleshooting_Guide.md - Problem resolution
For complete navigation by role, see docs/README.md.
- Star Schema Data Warehouse: Comprehensive dimensional model for notes analysis (see DWH Star Schema ERD)
- Enhanced ETL Process: Robust ETL with recovery, validation, and monitoring
- Partitioned Facts Table: Automatic partitioning by year (2013-2025+)
- Country Datamart: Pre-computed analytics by country (77+ metrics)
- User Datamart: Pre-computed analytics by user (78+ metrics)
- Profile Generator: Generate country and user profiles
- Advanced Dimensions:
- Timezones for local time analysis
- Seasons based on date and latitude
- Continents for geographical grouping
- Application versions tracking
- SCD2 for username changes
- Automation level detection
- User experience levels
- Hashtags (unlimited via bridge table)
- Enhanced Metrics (October 2025):
- Resolution metrics: avg/median days to resolution, resolution rate
- Application statistics: mobile/desktop apps count, most used app
- Content quality: comment length, URL/mention detection, engagement metrics
- Community health: active notes, backlog, age distribution, recent activity
- Comprehensive Testing: 197 automated tests (90%+ function coverage)
- PostgreSQL 12 or higher
- PostGIS 3.0 or higher
- Bash 4.0 or higher
- OSM Notes Ingestion Database: This analytics system reads from the base notes tables populated by the OSM-Notes-Ingestion system.
This system uses a shared database approach with separate schemas:
Database: osm_notes
├── Schema: public # Base tables (managed by Ingestion repo)
│ ├── notes
│ ├── note_comments
│ ├── note_comments_text
│ ├── users
│ └── countries
└── Schema: dwh # data warehouse (managed by this repo)
├── facts # Fact table (see Data Dictionary)
├── dimension_* # Dimension tables (see ERD)
└── datamart_* # Datamart tables
For complete schema documentation, see DWH Star Schema ERD and Data Dictionary.
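If you want to confirm this layout from psql, a minimal check such as the following can be run (assuming etc/properties.sh already defines DBNAME; the dwh tables only appear after the ETL has run at least once):
# Sanity check: both schemas should be visible from the same database
source etc/properties.sh
psql -d "${DBNAME:-osm_notes}" -c \
  "SELECT table_schema, table_name
     FROM information_schema.tables
    WHERE (table_schema = 'public' AND table_name IN ('notes', 'note_comments', 'note_comments_text'))
       OR (table_schema = 'dwh' AND table_name = 'facts')
    ORDER BY table_schema, table_name;"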
This guide walks you through the complete process from scratch to having exportable JSON datamarts.
- Process Overview
- Prerequisites Check
- Data Warehouse Setup
- Analytics and Export
- Quick Troubleshooting
- Incremental Updates
1. Base Data → 2. ETL/DWH → 3. Datamarts → 4. JSON Export
(notes) (facts, dims) (aggregations) (web viewer)
git clone https://github.com/OSMLatam/OSM-Notes-Analytics.git
cd OSM-Notes-Analytics
What this does: Downloads the analytics repository to your local machine.
Verify: Check that the directory was created:
ls -la OSM-Notes-Analytics/
# Should show: bin/, docs/, sql/, etc/, lib/, etc.
Note: If you need the shared libraries (OSM-Notes-Common submodule), initialize it:
git submodule update --init --recursive
What this does: Sets up database connection settings for all scripts.
Create the configuration file from the example:
cp etc/properties.sh.example etc/properties.sh
nano etc/properties.sh  # or use your preferred editor
Edit with your database credentials:
# Database configuration
DBNAME="osm_notes" # Same database as Ingestion
DB_USER="myuser"     # Your PostgreSQL user
Verify configuration:
# Test database connection
psql -d "${DBNAME:-osm_notes}" -U "${DB_USER:-myuser}" -c "SELECT version();"
# Should show PostgreSQL version information
Expected output:
PostgreSQL 12.x or higher
Troubleshooting:
- If connection fails, check PostgreSQL is running: sudo systemctl status postgresql
- Verify the database exists: psql -l | grep osm_notes
- Check user permissions: psql -d osm_notes -c "SELECT current_user;"
What this does: Ensures the base data from OSM-Notes-Ingestion exists before running ETL.
Why this matters: The analytics system reads from base tables populated by the ingestion system. Without this data, ETL cannot run.
Check that base tables exist and have data:
# Check notes table
psql -d "${DBNAME:-osm_notes}" -c "SELECT COUNT(*) FROM notes;"
# Check note_comments table
psql -d "${DBNAME:-osm_notes}" -c "SELECT COUNT(*) FROM note_comments;"
# Check note_comments_text table
psql -d "${DBNAME:-osm_notes}" -c "SELECT COUNT(*) FROM note_comments_text;"
Expected output:
count
-------
1234567
(1 row)
Each query should return a number > 0. If any table is empty or doesn't exist, you need to run the OSM-Notes-Ingestion system first.
Troubleshooting:
- "relation does not exist": Base tables not created. Run OSM-Notes-Ingestion first.
- Count is 0: Tables exist but empty. Run OSM-Notes-Ingestion to populate data.
- Connection error: Check Step 2 configuration and PostgreSQL service.
What this does: Transforms base data into a star schema data warehouse with pre-computed analytics.
Why this matters: The ETL process creates the dimensional model that enables fast analytical queries and generates the datamarts used for profiles and dashboards.
The ETL creates the data warehouse (schema dwh) with:
- Fact table (partitioned by year for optimal performance) - see Data Dictionary
- Dimension tables (users, countries, dates, etc.) - see ERD
- All necessary transformations - see ETL Enhanced Features
- Automatically updates datamarts
ETL Process (auto-detects first execution vs incremental):
cd bin/dwh
./ETL.sh
First execution output (creates DWH and loads all historical data):
[INFO] Preparing environment.
[INFO] Entering auto-detect mode
[WARN] Starting process.
[INFO] AUTO-DETECTED FIRST EXECUTION - Performing initial load
[INFO] Creating base tables...
[INFO] Processing years in parallel...
[INFO] Using 4 threads for parallel processing
[INFO] Processing year 2013...
[INFO] Processing year 2014...
...
[INFO] Consolidating partitions...
[INFO] Updating datamarts...
[WARN] Ending process.
Subsequent executions output (incremental updates only):
[INFO] Preparing environment.
[INFO] Entering auto-detect mode
[WARN] Starting process.
[INFO] AUTO-DETECTED INCREMENTAL EXECUTION - Processing only new data
[INFO] Processing new data since last run...
[INFO] Updated 1234 facts
[INFO] Updating datamarts...
[WARN] Ending process.
Note: The same command works for both initial setup and regular updates.
The script automatically detects if it's the first execution by checking if
the dwh.facts table exists and has data.
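If you want to predict which mode the next run will take, the detection can be reproduced by hand (a simplified sketch of the logic, not the script's exact code):
# Simplified sketch of the auto-detection: incremental only if dwh.facts exists and has rows
FACTS_COUNT=$(psql -d "${DBNAME:-osm_notes}" -Atc "SELECT COUNT(*) FROM dwh.facts;" 2> /dev/null || echo 0)
if [ "${FACTS_COUNT:-0}" -gt 0 ]; then
  echo "dwh.facts already populated -> ETL will run incrementally"
else
  echo "dwh.facts missing or empty -> ETL will perform the initial load"
fi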
Time estimates:
- Initial load: ~30 hours (processes all years from 2013 to present)
- Incremental update: 5-15 minutes (depends on new data volume)
Monitor progress:
# Follow ETL logs in real-time
tail -40f $(ls -1rtd /tmp/ETL_* | tail -1)/ETL.log
What the ETL does automatically:
- Creates schema dwh with all tables (see ERD)
- Creates automatic partitions for facts table (2013-2025+) - see Partitioning Strategy
- Populates dimension tables
- Loads facts from note_comments - see ETL Process Flow
- Creates indexes and constraints
- Updates datamarts (countries and users)
- Creates specialized views for hashtag analytics
- Calculates automation levels for users
- Updates experience levels for users
- Creates note activity metrics (comment counts, reopenings)
For detailed ETL process flow, see ETL Enhanced Features and DWH Star Schema ERD.
Troubleshooting:
- ETL fails immediately: Check base tables exist (Step 3)
- "Out of memory": Reduce
ETL_MAX_PARALLEL_JOBSinetc/etl.properties - Takes too long: Increase parallelism or check system resources
- See Troubleshooting Guide for more solutions
What this does: Confirms the data warehouse was created successfully with data.
Why this matters: Verifies ETL completed successfully before proceeding to datamarts.
Check that the data warehouse was created:
# Check schema exists
psql -d "${DBNAME:-osm_notes}" -c "SELECT schema_name FROM information_schema.schemata WHERE schema_name = 'dwh';"
Expected output:
schema_name
-------------
dwh
(1 row)
# Check tables exist (should show many tables)
psql -d "${DBNAME:-osm_notes}" -c "SELECT tablename FROM pg_tables WHERE schemaname = 'dwh' ORDER BY tablename;"
Expected output:
tablename
------------------------
dimension_applications
dimension_countries
dimension_days
dimension_users
facts
facts_2013
facts_2014
...
(20+ rows)
# Check fact counts (should be > 0)
psql -d "${DBNAME:-osm_notes}" -c "SELECT COUNT(*) FROM dwh.facts;"
psql -d "${DBNAME:-osm_notes}" -c "SELECT COUNT(*) FROM dwh.dimension_users;"
psql -d "${DBNAME:-osm_notes}" -c "SELECT COUNT(*) FROM dwh.dimension_countries;"
Expected output:
count
---------
1234567
(1 row)
Troubleshooting:
- Schema doesn't exist: ETL didn't complete. Check logs and re-run ETL.
- Tables missing: ETL failed partway through. Check error logs.
- Count is 0: ETL ran but no data loaded. Check base tables have data (Step 3).
What this does: Creates pre-computed analytics tables for fast querying.
Why this matters: Datamarts contain pre-aggregated metrics (77+ per user/country) that enable instant profile generation without expensive queries.
✅ Datamarts are automatically updated during ETL execution. No manual step needed!
The datamarts aggregate data for quick access and are automatically populated after ETL completes.
Verify datamarts were created automatically:
# Check datamart tables exist
psql -d "${DBNAME:-osm_notes}" -c "SELECT tablename FROM pg_tables WHERE schemaname = 'dwh' AND tablename LIKE 'datamart%';"
Expected output:
tablename
------------------
datamartcountries
datamartglobal
datamartusers
(3 rows)
Manual Update (only if needed, or for incremental updates):
# Update users datamart
cd bin/dwh/datamartUsers
./datamartUsers.sh
Expected output:
[INFO] Starting process.
[INFO] Processing 500 users...
[INFO] Updated 500 users
[WARN] Ending process.
# Update countries datamart
cd bin/dwh/datamartCountries
./datamartCountries.sh
Expected output:
[INFO] Starting process.
[INFO] Processing countries...
[INFO] Updated 195 countries
[WARN] Ending process.
Note: Datamarts process incrementally (only modified entities) for optimal performance.
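Between runs you can check how many users are still flagged for a datamart refresh; this is the same modified flag used in the troubleshooting section further below:
# Users still pending a datamart refresh; 0 means the users datamart is fully up to date
psql -d "${DBNAME:-osm_notes}" -Atc "SELECT COUNT(*) FROM dwh.dimension_users WHERE modified = TRUE;"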
Troubleshooting:
- Datamart tables don't exist: ETL didn't complete datamart update. Run manually.
- Count is 0: Datamarts empty. Run datamart scripts manually.
- See Troubleshooting Guide for more solutions
What this does: Confirms datamarts are populated with data.
Why this matters: Ensures datamarts have data before generating profiles or exporting to JSON.
# Check datamart tables exist
psql -d "${DBNAME:-osm_notes}" -c "SELECT tablename FROM pg_tables WHERE schemaname = 'dwh' AND tablename LIKE 'datamart%';"
Expected output:
tablename
------------------
datamartcountries
datamartglobal
datamartusers
(3 rows)
# Check counts (should be > 0)
psql -d "${DBNAME:-osm_notes}" -c "SELECT COUNT(*) FROM dwh.datamartusers;"
psql -d "${DBNAME:-osm_notes}" -c "SELECT COUNT(*) FROM dwh.datamartcountries;"
Expected output:
count
-------
5000
(1 row)
# View sample user data
psql -d "${DBNAME:-osm_notes}" -c "SELECT user_id, username, history_whole_open, history_whole_closed FROM dwh.datamartusers LIMIT 5;"
Expected output:
user_id | username | history_whole_open | history_whole_closed
---------+----------+--------------------+---------------------
1234 | AngocA | 150 | 120
5678 | User2 | 50 | 45
...
(5 rows)
# View sample country data
psql -d "${DBNAME:-osm_notes}" -c "SELECT country_id, country_name_en, history_whole_open, history_whole_closed FROM dwh.datamartcountries LIMIT 5;"
Expected output:
country_id | country_name_en | history_whole_open | history_whole_closed
------------+------------------+--------------------+---------------------
1 | Colombia | 5000 | 4500
2 | Germany | 8000 | 7500
...
(5 rows)
Troubleshooting:
- Count is 0: Datamarts not populated. Run datamart scripts (Step 6).
- Tables don't exist: ETL didn't create datamarts. Check ETL logs.
- See Troubleshooting Guide for more solutions
What this does: Exports datamart data to JSON files for OSM-Notes-Viewer (sister project) consumption.
Why this matters: The web viewer reads pre-computed JSON files instead of querying the database directly, enabling fast static hosting.
Once datamarts are populated, export to JSON:
cd bin/dwh
./exportDatamartsToJSON.sh
Expected output:
[INFO] Starting export process...
[INFO] Exporting users datamart...
[INFO] Exported 5000 users
[INFO] Exporting countries datamart...
[INFO] Exported 195 countries
[INFO] Validating JSON files...
[INFO] All files validated successfully
[INFO] Moving files to output directory...
[INFO] Export completed successfully
Verify export:
# Check output directory
ls -lh ./output/json/
# Check user files
ls -lh ./output/json/users/ | head -10
# Check country files
ls -lh ./output/json/countries/ | head -10
# Check index files
ls -lh ./output/json/indexes/
Expected structure:
output/json/
├── users/
│ ├── 1234.json
│ ├── 5678.json
│ └── ...
├── countries/
│ ├── 1.json
│ ├── 2.json
│ └── ...
├── indexes/
│ ├── users.json
│ └── countries.json
└── metadata.json
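To preview what the viewer would fetch, the exported directory can be served with any static file server; python3's built-in server is used here only as a convenient illustration:
# Serve the exported files locally for a quick inspection
cd output/json
python3 -m http.server 8080
# then open e.g. http://localhost:8080/indexes/users.json in a browser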
Export Features:
- Atomic writes: Files generated in temporary directory, validated, then moved atomically
- Schema validation: Each JSON file validated against schemas before export
- Fail-safe: On validation failure, keeps existing files and logs error
- No partial updates: Either all files are valid and moved, or none
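The pattern behind these guarantees can be sketched as follows (an illustration only, not the actual exportDatamartsToJSON.sh implementation; schema validation is reduced here to a plain JSON syntax check):
#!/bin/bash
# Illustrative sketch of the atomic-export pattern: generate into a temporary
# directory, validate everything, and only then swap it into place.
set -euo pipefail
TMP_DIR=$(mktemp -d)
OUT_DIR="./output/json"

# ... generate the JSON files into "${TMP_DIR}" here ...

# Validate every file before touching the existing output (syntax check only in this sketch)
while IFS= read -r -d '' FILE; do
  python3 -m json.tool "${FILE}" > /dev/null
done < <(find "${TMP_DIR}" -name '*.json' -print0)

# Swap the validated set into place; the previous output survives until validation has passed
mkdir -p "$(dirname "${OUT_DIR}")"
rm -rf "${OUT_DIR}.old"
[ -d "${OUT_DIR}" ] && mv "${OUT_DIR}" "${OUT_DIR}.old"
mv "${TMP_DIR}" "${OUT_DIR}"
rm -rf "${OUT_DIR}.old"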
Troubleshooting:
- Export is empty: Check datamarts have data (Step 7)
- Validation fails: Check schema files in lib/osm-common/schemas/
- Permission errors: Check the ./output/json/ directory is writable
- See Troubleshooting Guide for more solutions
See JSON Export Documentation and Atomic Validation Export for complete details.
If you encounter issues during setup, here are quick solutions:
Problem: "Schema 'dwh' does not exist"
- Solution: Run ./bin/dwh/ETL.sh first
Problem: "Base tables do not exist"
- Solution: Run OSM-Notes-Ingestion first to populate base tables
Problem: "Cannot connect to database"
- Solution: Check etc/properties.sh configuration and PostgreSQL service status
Problem: "ETL takes too long"
- Solution: Increase ETL_MAX_PARALLEL_JOBS in etc/etl.properties or reduce ETL_BATCH_SIZE
Problem: "Datamart tables empty"
- Solution: Run the datamart scripts manually: ./bin/dwh/datamartUsers/datamartUsers.sh and ./bin/dwh/datamartCountries/datamartCountries.sh
Problem: "JSON export fails validation"
- Solution: Check datamarts have data and schema files exist in lib/osm-common/schemas/
For comprehensive troubleshooting, see Troubleshooting Guide.
For ongoing updates, run these in sequence:
# 1. Update base data (your OSM notes import process)
# 2. Update DWH
cd bin/dwh
./ETL.sh
# 3. Update datamarts
cd datamartUsers
./datamartUsers.sh
cd ../datamartCountries
./datamartCountries.sh
# 4. Export JSON (optional)
cd ..
./exportDatamartsToJSON.sh
# Note: The export script validates all JSON files before moving them to the final destination.
# If validation fails, it keeps existing files and exits with an error, ensuring data integrity.
For automated analytics updates:
# Update ETL every hour (after ingestion completes)
0 * * * * ~/OSM-Notes-Analytics/bin/dwh/ETL.sh
# Update country datamart daily
0 2 * * * ~/OSM-Notes-Analytics/bin/dwh/datamartCountries/datamartCountries.sh
# Update user datamart daily (processes 500 users per run)
30 2 * * * ~/OSM-Notes-Analytics/bin/dwh/datamartUsers/datamartUsers.sh
# Export to JSON and push to GitHub Pages (daily at 02:45, after the datamarts update)
# This script exports JSON files and automatically deploys them to GitHub Pages
45 2 * * * ~/OSM-Notes-Analytics/bin/dwh/exportAndPushToGitHub.sh
For a complete automated pipeline that includes JSON export with validation:
# Create wrapper script: /opt/osm-analytics/update-and-export.sh
#!/bin/bash
cd /opt/osm-analytics/OSM-Notes-Analytics
# ETL incremental update
./bin/dwh/ETL.sh || exit 1
# Update datamarts
./bin/dwh/datamartUsers/datamartUsers.sh || exit 1
./bin/dwh/datamartCountries/datamartCountries.sh || exit 1
# Export to JSON and push to GitHub Pages
# The script exports JSON files and automatically deploys them to GitHub Pages
./bin/dwh/exportAndPushToGitHub.sh || exit 1
# If we get here, all files are valid and exported
echo "SUCCESS: All exports validated and moved to destination"Then schedule this wrapper:
# Run complete pipeline every 15 minutes
*/15 * * * * /opt/osm-analytics/update-and-export.sh >> /var/log/osm-analytics.log 2>&1
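Because an incremental run can take 5-15 minutes, a long pipeline execution could overlap with the next cron trigger. One optional safeguard (not part of the provided scripts, just a common cron pattern) is to serialize runs with flock:
# Skip a run if the previous one is still executing (flock is part of util-linux)
*/15 * * * * flock -n /tmp/osm-analytics.lock /opt/osm-analytics/update-and-export.sh >> /var/log/osm-analytics.log 2>&1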
Key features of JSON export:
- ✅ Atomic writes: Files are generated in temporary directory first
- ✅ Schema validation: Each JSON file is validated before final export
- ✅ Fail-safe: On validation failure, keeps existing files and exits with error
- ✅ No partial updates: Either all files are valid and moved, or none
OSM-Notes-Analytics/
├── bin/ # Executable scripts
│ ├── dwh/ # ETL and datamart scripts
│ │ ├── ETL.sh # Main ETL process
│ │ ├── profile.sh # Profile generator
│ │ ├── cleanupDWH.sh # Data warehouse cleanup script
│ │ ├── README.md # DWH scripts documentation
│ │ ├── datamartCountries/
│ │ │ └── datamartCountries.sh
│ │ └── datamartUsers/
│ │ └── datamartUsers.sh
│ └── README.md # Scripts documentation
├── etc/ # Configuration files
│ ├── properties.sh # Database configuration
│ ├── etl.properties # ETL configuration
│ └── README.md # Configuration documentation
├── sql/ # SQL scripts
│ ├── dwh/ # DWH DDL and procedures
│ │ ├── ETL_*.sql # ETL scripts
│ │ ├── Staging_*.sql # Staging procedures
│ │ ├── datamartCountries/ # Country datamart SQL
│ │ └── datamartUsers/ # User datamart SQL
│ └── README.md # SQL documentation
├── scripts/ # Utility scripts
│ └── README.md # Scripts documentation
├── tests/ # Test suites
│ ├── unit/ # Unit tests
│ │ ├── bash/ # Bash script tests
│ │ └── sql/ # SQL tests
│ ├── integration/ # Integration tests
│ ├── run_all_tests.sh # Run all tests
│ ├── run_dwh_tests.sh # Run DWH tests
│ ├── run_quality_tests.sh # Run quality tests
│ └── README.md # Testing documentation
├── docs/ # Documentation
│ ├── DWH_Star_Schema_ERD.md # Star schema diagrams
│ ├── DWH_Star_Schema_Data_Dictionary.md # Data dictionary
│ ├── ETL_Enhanced_Features.md # ETL features
│ ├── CI_CD_Guide.md # CI/CD workflows
│ └── README.md # Documentation index
└── lib/ # Shared libraries
├── osm-common/ # OSM-Notes-Common submodule (Git submodule)
│ ├── bash_logger.sh
│ ├── commonFunctions.sh
│ ├── validationFunctions.sh
│ ├── errorHandlingFunctions.sh
│ ├── consolidatedValidationFunctions.sh
│ └── schemas/ # JSON schemas for validation
└── README.md # Library documentation
./bin/dwh/ETL.sh (first execution): Creates the complete data warehouse from scratch, including all dimensions and facts.
./bin/dwh/ETL.sh (subsequent executions): Processes only new data since the last ETL run. Use this for scheduled updates.
dwh.facts: Central fact table containing note actions and metrics (see Data Dictionary for complete column definitions)
- Partitioned by year (action_at) for optimal performance
- Automatic partition creation for current and future years
- Each year stored in a separate partition (e.g., facts_2024, facts_2025)
- 10-50x faster queries when filtering by date
- dimension_users: User information with SCD2 support
- dimension_countries: Countries with ISO codes and regions (see the example join query below)
- dimension_regions: Geographic regions
- dimension_continents: Continental grouping
- dimension_days: Date dimension with enhanced attributes
- dimension_time_of_week: Hour of week with period of day
- dimension_applications: Applications used to create notes
- dimension_application_versions: Application version tracking
- dimension_hashtags: Hashtags found in notes
- dimension_timezones: Timezone information
- dimension_seasons: Seasons based on date and latitude
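As a concrete illustration of how the fact table and dimensions are queried together, the following counts note actions per country and year. The join column names (action_at, dimension_id_country, dimension_country_id) are taken from other examples in this README; confirm them against the Data Dictionary if your schema differs:
# Example star-schema query: note actions per country per year
psql -d "${DBNAME:-osm_notes}" -c \
  "SELECT date_part('year', f.action_at) AS year,
          c.dimension_country_id AS country,
          COUNT(*) AS actions
     FROM dwh.facts f
     JOIN dwh.dimension_countries c
       ON f.dimension_id_country = c.dimension_country_id
    GROUP BY 1, 2
    ORDER BY year, actions DESC
    LIMIT 20;"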
datamart_countries: Pre-computed country analytics (77+ metrics)
- Historical metrics: notes opened/closed by country
- Resolution metrics: avg/median days to resolution, resolution rate
- Application statistics: mobile/desktop app usage, most used app
- Content quality: comment length, URLs, mentions, engagement
- Community health: active notes, backlog size, age distribution
- Hashtag analysis: top hashtags, usage patterns
datamart_users: Pre-computed user analytics (78+ metrics; see the sample query after this list)
- Historical metrics: notes opened/closed by user
- Resolution metrics: avg/median days to resolution, resolution rate
- Application statistics: mobile/desktop app usage
- Content quality: comment length, URLs, mentions, engagement
- Community health: active notes, recent activity
- Automation level: human/automated detection
- Experience level: beginner to legendary contributor
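For a quick look at the user datamart, a query over the columns already shown in the verification step (history_whole_open, history_whole_closed) lists the most active note closers; the remaining metric columns are documented in the Data Dictionary:
# Top 10 users by closed notes, using columns shown in the verification step above
psql -d "${DBNAME:-osm_notes}" -c \
  "SELECT username, history_whole_open, history_whole_closed
     FROM dwh.datamartusers
    ORDER BY history_whole_closed DESC
    LIMIT 10;"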
The dwh.facts table is partitioned by year using the action_at column:
- Automatic partition management: The ETL automatically creates partitions for:
- Current year (always verified)
- Next year (to prevent failures on year transition)
- One additional year ahead (buffer)
- Zero maintenance: No manual intervention needed when the year changes
- Performance benefits:
- 10-50x faster queries when filtering by date
- Only scans relevant year partitions (PostgreSQL partition pruning)
- Faster VACUUM and maintenance operations per partition
- Easy archival: Old year partitions can be detached/archived independently
See docs/partitioning_strategy.md for complete details.
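Both properties are easy to observe directly: the yearly partitions can be listed from the catalog, and an EXPLAIN on a date-filtered query should show only the matching facts_YYYY partition being scanned (a sketch assuming the DWH has already been built):
# List the yearly partitions attached to the facts table
psql -d "${DBNAME:-osm_notes}" -c \
  "SELECT child.relname AS partition
     FROM pg_inherits
     JOIN pg_class parent ON pg_inherits.inhparent = parent.oid
     JOIN pg_class child  ON pg_inherits.inhrelid = child.oid
    WHERE parent.relname = 'facts'
    ORDER BY child.relname;"

# Partition pruning: only facts_2024 should appear in this plan
psql -d "${DBNAME:-osm_notes}" -c \
  "EXPLAIN SELECT COUNT(*) FROM dwh.facts
    WHERE action_at >= DATE '2024-01-01' AND action_at < DATE '2025-01-01';"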
- ETL Initial Load: ~30 hours (parallel by year since 2013)
- Country Datamart: ~20 minutes
- User Datamart: ~5 days (500 users per run, asynchronous)
- Memory: 4GB+ recommended for ETL
- Disk Space: Depends on notes volume (GB scale)
- CPU: Multi-core recommended for parallel processing
# Quality tests (fast, no database required)
./tests/run_quality_tests.sh
# DWH tests (requires database 'dwh')
./tests/run_dwh_tests.sh
# All tests
./tests/run_all_tests.sh
This project includes comprehensive CI/CD with:
- ✅ GitHub Actions workflows for automated testing
- ✅ Automated dependency checking
Git hooks are available in .git-hooks/ and can be installed manually if needed.
See CI/CD Guide for complete documentation.
The ETL process creates detailed logs:
# Follow ETL progress
tail -40f $(ls -1rtd /tmp/ETL_* | tail -1)/ETL.log
# Set log level
export LOG_LEVEL=DEBUG
./bin/dwh/ETL.sh
Available log levels: TRACE, DEBUG, INFO, WARN, ERROR, FATAL
The project includes a cleanup script for maintenance and troubleshooting:
# Safe operations (no data loss):
./bin/dwh/cleanupDWH.sh --remove-temp-files # Clean temporary files
./bin/dwh/cleanupDWH.sh --dry-run # Preview operations
# Destructive operations (require confirmation):
./bin/dwh/cleanupDWH.sh # Full cleanup
./bin/dwh/cleanupDWH.sh --remove-all-data # DWH objects only
When to use:
- Development: Clean temporary files regularly
- Testing: Reset environment between test runs
- Troubleshooting: Remove corrupted objects
- Maintenance: Free disk space
Always preview destructive operations with --dry-run first.
For detailed maintenance procedures, see DWH Maintenance Guide.
Solution: Run ./bin/dwh/ETL.sh first to create the data warehouse.
Solution: Run the datamart scripts:
bin/dwh/datamartUsers/datamartUsers.sh
bin/dwh/datamartCountries/datamartCountries.sh
The ETL processes data by year in parallel. Adjust parallelism in etc/properties.sh:
MAX_THREADS=8 # Increase for more cores
Datamarts process entities incrementally (500 at a time). Run the script multiple times:
# Keep running until it says "0 users processed"
while true; do
./datamartUsers.sh
sleep 5
done
Or check the modified flag:
SELECT COUNT(*) FROM dwh.dimension_users WHERE modified = TRUE;
When it returns 0, all users are processed.
Ensure datamarts have data:
SELECT COUNT(*) FROM dwh.datamartusers;
SELECT COUNT(*) FROM dwh.datamartcountries;
If counts are 0, re-run the datamart population scripts.
Check that:
- Database connection is configured correctly
- Base tables exist (populated by Ingestion system)
- PostgreSQL is running and accessible
- Increase ETL_MAX_PARALLEL_JOBS in etc/etl.properties
- Adjust ETL_BATCH_SIZE for better throughput
- Run VACUUM ANALYZE on base tables (see the example below)
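For example, refreshing planner statistics on the base tables before a large ETL run:
# Refresh statistics on the base tables read by the ETL
psql -d "${DBNAME:-osm_notes}" -c "VACUUM ANALYZE notes;"
psql -d "${DBNAME:-osm_notes}" -c "VACUUM ANALYZE note_comments;"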
Check for orphaned facts (example query):
psql -d osm_notes -c "SELECT COUNT(*) FROM dwh.facts f
LEFT JOIN dwh.dimension_countries c ON f.dimension_id_country = c.dimension_country_id
WHERE c.dimension_country_id IS NULL;"This analytics system depends on the OSM-Notes-Ingestion ingestion system:
- Ingestion (OSM-Notes-Ingestion)
- Downloads notes from OSM Planet and API
- Populates base tables: notes, note_comments, note_comments_text
- Manages WMS layer publication
- Analytics (this repository)
- Reads from base tables
- Transforms data into star schema
- Generates datamarts and profiles
Deployment Order:
- Deploy and run Ingestion system first
- Wait for base tables to be populated
- Deploy and run Analytics system
- DWH Star Schema ERD: Entity-relationship diagram
- Data Dictionary: Complete schema documentation
- ETL Enhanced Features: Advanced ETL capabilities
- CI/CD Guide: CI/CD workflows and git hooks
- Testing Guide: Complete testing documentation
# Database configuration
DBNAME="osm_notes"
DB_USER="myuser"
# Processing configuration
LOOP_SIZE="10000"
MAX_THREADS="4"# Performance
ETL_BATCH_SIZE=1000
ETL_PARALLEL_ENABLED=true
ETL_MAX_PARALLEL_JOBS=4
# Resource Control
MAX_MEMORY_USAGE=80
MAX_DISK_USAGE=90
ETL_TIMEOUT=7200
# Recovery
ETL_RECOVERY_ENABLED=true
# Validation
ETL_VALIDATE_INTEGRITY=true
See CONTRIBUTING.md for guidelines on contributing to this project.
See LICENSE for license information.
Important: This repository contains only code and configuration files. All data processed by this system comes from OpenStreetMap (OSM) and is licensed under the Open Database License (ODbL). The processed data (notes, analytics, datamarts, etc.) stored in the database is derived from OSM and must comply with OSM's licensing requirements.
- OSM Data License: Open Database License (ODbL)
- OSM Copyright: OpenStreetMap contributors
- OSM Attribution: Required when using or distributing OSM data
For more information about OSM licensing, see: https://www.openstreetmap.org/copyright
- Andres Gomez (@AngocA): Main developer
- Jose Luis Ceron Sarria: Architecture design and infrastructure
- OpenStreetMap contributors: For the valuable notes data and their contributions to the OSM project
This analytics project is part of a larger ecosystem for OSM Notes analysis:
The OSM Notes ecosystem consists of three sister projects at the same organizational level, plus a shared library:
OSMLatam/
├── OSM-Notes-Ingestion/ # Data ingestion from OSM API/Planet
├── OSM-Notes-Analytics/ # data warehouse & ETL (this repository)
├── OSM-Notes-Viewer/ # Web frontend visualization (web application)
└── OSM-Notes-Common/ # Shared Bash libraries (Git submodule)
All three projects are independent repositories at the same level, working together to provide a complete OSM Notes analysis solution, with OSM-Notes-Common shared across them as a Git submodule.
┌─────────────────────────────────────┐
│ OSM-Notes-Ingestion │
│ - Downloads OSM notes │
│ - Populates base tables │
│ - Publishes WMS layer │
└──────────────┬────────────────────────┘
│ feeds data
▼
┌─────────────────────────────────────┐
│ OSM-Notes-Analytics │
│ - ETL process (this repo) │
│ - Star schema data warehouse │
│ - Datamarts for analytics │
│ - Export to JSON │
└──────────────┬────────────────────────┘
│ exports JSON data
▼
┌─────────────────────────────────────┐
│ OSM-Notes-Viewer │
│ - Web dashboard │
│ - Interactive visualizations │
│ - User & country profiles │
└──────────────────────────────────────┘
┌─────────────────────┐
│ OSM-Notes-Common/ │
│ - Shared libraries │
│ - bash_logger.sh │
│ - validation.sh │
│ - commonFunctions │
└─────────────────────┘
↑ used by all (via submodule)
- OSM-Notes-Ingestion (sister project):
- Downloads notes from OSM Planet and API
- Populates base tables:
notes, note_comments, users, countries
- Publishes WMS layer for mapping applications
- Deploy this FIRST - analytics needs base data
- OSM-Notes-Viewer (sister project):
- Web application (web page) for visualizing analytics
- Interactive dashboards and visualizations
- User and country profiles
- Reads JSON exports from this analytics system
- Sister project at the same organizational level as Ingestion and Analytics
- OSM-Notes-Common (shared library):
- Common Bash libraries and utilities (Git submodule)
- Used by Ingestion and Analytics projects
- Located at lib/osm-common/ in each project
- Provides: logging, validation, error handling, common functions, schemas
- Shared via Git submodule across repositories
- OpenStreetMap: https://www.openstreetmap.org
- Create an issue in this repository
- Check the documentation in the docs/ directory
- Review logs for error messages
The following major enhancements have been implemented:
- 21 new metrics added to both datamartCountries and datamartUsers
- Resolution tracking: Days to resolution, resolution rates
- Application analytics: Mobile vs desktop usage, most popular apps
- Content quality: Comment analysis, URL/mention detection
- Community health: Active notes, backlog, temporal patterns
- 88+ new automated tests added to validate all new metrics
- Automation detection: Identifies bot/automated notes vs human
- Experience levels: Classifies users from newcomer to legendary
- Note activity metrics: Tracks accumulated comments and reopenings
- Hashtag bridge table: Supports unlimited hashtags per note
- Partitioned facts table: 10-50x faster date-based queries
- Specialized indexes: Optimized for common query patterns
- Automated maintenance: VACUUM and ANALYZE on partitions
See docs/Dashboard_Analysis.md for complete details on available metrics.
Current Version: 2025-10-26