26 commits
229859b
Add CBS metadata purge interval configuration helper
pimpin Oct 31, 2025
2f5ebdb
Add tombstone purge tests with configurable intervals
pimpin Oct 31, 2025
cc2a8fe
Fix get_doc_rev to retrieve tombstones with deleted parameter
pimpin Oct 31, 2025
3e05160
Fix check_doc_in_cbs URL builder error and improve output
pimpin Oct 31, 2025
67e9863
Add explanatory comment for SGW tombstone retrieval failure
pimpin Oct 31, 2025
c073f30
Fix check_doc_in_cbs to query tombstones via XATTRs
pimpin Oct 31, 2025
f6db007
Configure metadata purge interval to 1 hour in CBS setup
pimpin Oct 31, 2025
2736cc1
Fix _sync xattr field name: use _deleted instead of deleted
pimpin Oct 31, 2025
3935e48
Add tombstone_quick_check example for rapid validation
pimpin Oct 31, 2025
a1803f0
Fix tombstone detection using _sync.flags field
pimpin Oct 31, 2025
10f0565
Suppress dead_code and deprecated warnings in test utilities
pimpin Oct 31, 2025
9fb1f99
Suppress deprecated warnings in helper functions for purge tests
pimpin Oct 31, 2025
676eeaf
Fix metadata purge interval configuration to use REST API
pimpin Oct 31, 2025
8db78d6
Fix set_metadata_purge_interval to use correct REST API parameters
pimpin Oct 31, 2025
5916129
Add automated test infrastructure with reporting and Docker management
pimpin Nov 1, 2025
a427ac3
Integrate automated test infrastructure in tombstone purge test
pimpin Nov 1, 2025
192be0b
Rename db to db_cblite for clarity in test examples
pimpin Nov 3, 2025
a4880fe
Add soft_delete logic to sync function for tombstone resurrection test
pimpin Nov 3, 2025
b760dd2
Add helpers to delete and check documents in central (SGW)
pimpin Nov 3, 2025
70a76f5
Add tombstone_resurrection_test for BC-994 scenario validation
pimpin Nov 3, 2025
7ad96fb
Document tombstone_resurrection_test in README
pimpin Nov 3, 2025
b676a0b
Add local database cleanup in resurrection test setup
pimpin Nov 3, 2025
9f0cdb6
Add timezone synchronization for Docker containers
pimpin Nov 3, 2025
316876f
Fix STEP numbering, logging, and replication in resurrection test
pimpin Nov 3, 2025
1d0b885
Remove document touch step - test natural resurrection with reset che…
pimpin Nov 3, 2025
1d72288
Clean up redundant examples and document final findings
pimpin Nov 5, 2025
4 changes: 4 additions & 0 deletions .gitignore
Original file line number Diff line number Diff line change
@@ -8,3 +8,7 @@ Cargo.lock
.DS_Store

*.cblite2/

# Test results
test_results/
response_to_thomas.md
2 changes: 2 additions & 0 deletions Cargo.toml
@@ -18,6 +18,8 @@ enum_primitive = "0.1.1"
lazy_static = "1.5.0"
regex = "1.11.1"
serde_json = "1"
serde = { version = "1", features = ["derive"] }
chrono = "0.4"
tempdir = "0.3.7"

[dev-dependencies.reqwest]
102 changes: 99 additions & 3 deletions examples/README.md
@@ -42,13 +42,109 @@ Update the file `docker-conf/db-config.json` and run
$ curl -XPUT -v "http://localhost:4985/my-db/" -H 'Content-Type: application/json' --data-binary @docker-conf/db-config.json
```

## Automated Test Infrastructure

The `tombstone_purge_test` includes comprehensive automation:

- **Automatic Docker environment management**: Stops, rebuilds, and starts containers with the correct configuration
- **Git validation**: Ensures there are no uncommitted changes before running
- **Timezone synchronization**: Verifies the containers use the same timezone as the host
- **Structured reporting**: Generates comprehensive test reports in `test_results/` directory

### Test Reports

Each test run generates a timestamped report directory containing:
- `README.md`: Executive summary with test checkpoints and findings
- `metadata.json`: Test metadata, commit SHA, GitHub link
- `tombstone_states.json`: Full `_sync` xattr content at each checkpoint
- `test_output.log`: Complete console output
- `cbs_logs.log`: Couchbase Server container logs
- `sgw_logs.log`: Sync Gateway container logs

**Example report path**: `test_results/test_run_2025-11-01_08-00-00_8db78d6/`

### Important Findings

**Tombstone Purge Behavior:**
- ✅ Tombstones are purged after 1 hour when purge interval is configured **at bucket creation**
- ❌ Configuring purge interval after tombstones are created does NOT purge existing tombstones
- ✅ Re-created documents are always treated as new (`flags=0`) even if tombstone persists

**Reset Checkpoint Limitation:**
- ❌ Reset checkpoint alone does NOT re-push unmodified documents
- CBLite only pushes documents that changed since last successful sync
- For BC-994 scenario, documents must be modified locally before reset to trigger push
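The 1-hour figure above comes from CBS's minimum metadata purge interval of 0.04 days. A quick sanity check of the unit conversion (plain shell arithmetic, not tied to any server):

```shell
# CBS expresses purgeInterval in days; 0.04 is the documented minimum.
purge_interval_days=0.04
minutes=$(awk -v d="$purge_interval_days" 'BEGIN { printf "%.0f", d * 24 * 60 }')
echo "purgeInterval=${purge_interval_days} days = ~${minutes} minutes"
# prints: purgeInterval=0.04 days = ~58 minutes
```

This matches the "~58 minutes" shown by `check_cbs_config` below.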

## Running the examples

### Available examples

#### `check_cbs_config`
Utility to verify Couchbase Server bucket configuration, especially metadata purge interval.

**Runtime: Instant**

```shell
$ cargo run --features=enterprise --example check_cbs_config
```

Expected output:
```
✓ CBS metadata purge interval (at purgeInterval): 0.04
= 0.04 days (~1.0 hours, ~58 minutes)
```
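If you prefer to bypass the Rust helper, the same value can be read from the bucket's REST payload (`GET /pools/default/buckets/<bucket>` on the CBS admin port). A sketch that extracts it from a trimmed sample of that JSON; the `autoCompactionSettings.purgeInterval` path, bucket name, and sample values are assumptions, so check them against your CBS version:

```shell
# Trimmed sample of what CBS returns for a bucket with a per-bucket override.
# In a live setup this would come from something like:
#   curl -s -u "$CB_USER:$CB_PASS" http://localhost:8091/pools/default/buckets/my-bucket
sample='{"name":"my-bucket","autoCompactionSettings":{"purgeInterval":0.04}}'
interval=$(printf '%s' "$sample" | sed -n 's/.*"purgeInterval":\([0-9.]*\).*/\1/p')
echo "purgeInterval: $interval days"
```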

#### `tombstone_quick_check`
Rapid validation test for tombstone detection via XATTRs. Verifies that tombstones are correctly identified in CBS without waiting for purge intervals.

**Runtime: ~30 seconds**
**Output**: Clean, no warnings

```shell
$ cargo run --features=enterprise --example tombstone_quick_check
```

#### `ticket_70596`
Demonstrates auto-purge behavior when documents are moved to inaccessible channels.

It can be run with the following command:
```shell
$ cargo run --features=enterprise --example ticket_70596
```

#### `tombstone_purge_test`
Complete tombstone purge test following Couchbase support recommendations (Thomas). Tests whether tombstones can be completely purged from CBS and SGW after the minimum 1-hour interval, such that re-creating a document with the same ID is treated as a new document.

**Runtime: ~65-70 minutes** (+ ~5 minutes for Docker rebuild)
**Features**: Automatic Docker management, structured reporting

```shell
$ cargo run --features=enterprise --example tombstone_purge_test
```

**What it does automatically:**
- ✅ Checks git status (fails if uncommitted changes)
- ✅ Rebuilds Docker environment (docker compose down -v && up)
- ✅ Verifies CBS purge interval configuration
- ✅ Runs complete test with checkpoints
- ✅ Generates structured report in `test_results/`
- ✅ Captures CBS and SGW logs
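The Docker rebuild step above amounts to roughly the following (a sketch; the exact flags live in the test's automation code, and this cannot run without Docker installed):

```shell
# Tear down and rebuild the CBS + SGW environment so the bucket is created
# fresh, with the purge interval set at bucket creation time (see the
# Important Findings section: configuring it afterwards is not enough).
cd examples/docker-conf
docker compose down -v        # -v drops volumes, forcing bucket re-creation
docker compose up -d --build  # rebuild images, re-running configure-server.sh
```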

**Test scenario:**
1. Create document in accessible channel and replicate
2. Delete document (creating tombstone)
3. Purge tombstone from Sync Gateway
4. Verify CBS purge interval (configured at bucket creation)
5. Wait 65 minutes
6. Compact CBS and SGW
7. Verify tombstone state (purged or persisting)
8. Re-create document with same ID and verify it's treated as new (flags=0, not flags=1)
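Step 3 above uses Sync Gateway's admin `_purge` endpoint. A hedged sketch, with the database name and admin port taken from the docker-conf defaults (adjust for your setup; requires a running SGW):

```shell
# Purge every revision of doc1, including its tombstone, from the my-db
# database via the SGW admin API. "*" means all revisions of the document.
curl -s -X POST "http://localhost:4985/my-db/_purge" \
  -H 'Content-Type: application/json' \
  -d '{"doc1": ["*"]}'
```

Purging via this endpoint removes the document from SGW's own metadata, but the CBS-side tombstone still follows the bucket's `purgeInterval`, which is why the test then waits 65 minutes and compacts.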

**Report location**: `test_results/test_run_<timestamp>_<commit_sha>/`

### Utility functions

There are utility functions available in `examples/utils/` to interact with the Sync Gateway and Couchbase Server:
- **SGW admin operations**: user management, sessions, document operations, database lifecycle
- **CBS admin operations**: bucket compaction, document queries, tombstone management, metadata purge interval configuration

Feel free to add more if needed.
12 changes: 12 additions & 0 deletions examples/check_cbs_config.rs
@@ -0,0 +1,12 @@
mod utils;

use utils::*;

fn main() {
    println!("=== CBS Configuration Check ===\n");

    println!("Checking current metadata purge interval configuration:");
    get_metadata_purge_interval();

    println!("\n=== Check complete ===");
}
33 changes: 33 additions & 0 deletions examples/docker-conf/couchbase-server-dev/configure-server.sh
@@ -49,6 +49,30 @@ function bucketCreate() {
fi
}

function configureBucketCompaction() {
  # Configure the metadata purge interval to 1 hour (0.04 days) - the CBS minimum
  # This is important for tombstone purge testing with Sync Gateway
  # The default is 3 days, which is too long for testing
  #
  # IMPORTANT: Must use the REST API to configure per-bucket auto-compaction
  # The couchbase-cli setting-compaction command only sets cluster-wide defaults
  #
  # Required parameters:
  #   - autoCompactionDefined=true: enable the per-bucket auto-compaction override
  #   - purgeInterval=0.04: metadata purge interval (1-hour minimum)
  #   - parallelDBAndViewCompaction: required parameter for auto-compaction
  #
  # -sf makes curl exit non-zero on HTTP errors so the retry wrapper can react
  curl -sf -X POST \
    -u "$COUCHBASE_ADMINISTRATOR_USERNAME:$COUCHBASE_ADMINISTRATOR_PASSWORD" \
    "http://127.0.0.1:8091/pools/default/buckets/$COUCHBASE_BUCKET" \
    -d "autoCompactionDefined=true" \
    -d "purgeInterval=0.04" \
    -d "parallelDBAndViewCompaction=false"

  if [[ $? != 0 ]]; then
    return 1
  fi
}

function userSgCreate() {
couchbase-cli user-manage \
-c 127.0.0.1:8091 \
@@ -101,6 +125,15 @@ function main() {
echo "Creating the bucket [OK]"
echo

  echo "Configuring bucket compaction settings...."
  retry configureBucketCompaction
  if [[ $? != 0 ]]; then
    echo "Bucket compaction config failed. Exiting." >&2
    exit 1
  fi
  echo "Configuring bucket compaction settings [OK]"
  echo

  echo "Creating Sync Gateway user...."
  retry userSgCreate
  if [[ $? != 0 ]]; then
4 changes: 4 additions & 0 deletions examples/docker-conf/docker-compose.yml
@@ -7,6 +7,8 @@ services:
      - "11210:11210" # memcached port
    build:
      context: ${PWD}/couchbase-server-dev
    environment:
      - TZ=${TZ:-UTC}
    deploy:
      resources:
        limits:
@@ -17,6 +19,8 @@
    ports:
      - "4984:4984"
      - "4985:4985"
    environment:
      - TZ=${TZ:-UTC}
    deploy:
      resources:
        limits:
19 changes: 19 additions & 0 deletions examples/docker-conf/sync-function.js
@@ -7,6 +7,25 @@ function sync(doc, oldDoc, meta) {
console.log("Metadata:");
console.log(meta);

  // Test logic for BC-994: Handle resurrection after tombstone purge
  // Detect documents resurrecting without oldDoc after tombstone expiry
  if (!oldDoc && doc.updatedAt) {
    var ONE_HOUR_MS = 60 * 60 * 1000;
    var updatedAtTimestamp = new Date(doc.updatedAt).getTime();
    var cutoffTimestamp = Date.now() - ONE_HOUR_MS;

    if (updatedAtTimestamp < cutoffTimestamp) {
      // Document is resurrecting after tombstone expired
      // Route to soft_deleted channel so auto-purge will remove it from cblite
      console.log(">>> Soft deleting document: updatedAt is older than 1 hour");
      channel("soft_deleted");
      // Set TTL to 5 minutes for testing (production would use 6 months)
      expiry(5 * 60); // 5 minutes in seconds
      console.log(">>> Document routed to soft_deleted channel with 5-minute TTL");
      return;
    }
  }

  if (doc.channels) {
    channel(doc.channels);
  }
36 changes: 18 additions & 18 deletions examples/ticket_70596.rs
@@ -5,7 +5,7 @@ use couchbase_lite::*;
 use utils::*;
 
 fn main() {
-    let mut db = Database::open(
+    let mut db_cblite = Database::open(
         "test1",
         Some(DatabaseConfiguration {
             directory: Path::new("./"),
@@ -19,31 +19,31 @@ fn main() {
     let session_token = get_session("great_name");
     println!("Sync gateway session token: {session_token}");
 
-    let mut repl =
-        setup_replicator(db.clone(), session_token).add_document_listener(Box::new(doc_listener));
+    let mut repl = setup_replicator(db_cblite.clone(), session_token)
+        .add_document_listener(Box::new(doc_listener));
 
     repl.start(false);
 
     std::thread::sleep(std::time::Duration::from_secs(3));
 
     // Auto-purge test scenario from support ticket https://support.couchbase.com/hc/en-us/requests/70596?page=1
     // Testing if documents pushed to inaccessible channels get auto-purged
-    create_doc(&mut db, "doc1", "channel1");
-    create_doc(&mut db, "doc2", "channel2");
+    create_doc(&mut db_cblite, "doc1", "channel1");
+    create_doc(&mut db_cblite, "doc2", "channel2");
 
     std::thread::sleep(std::time::Duration::from_secs(10));
-    assert!(get_doc(&db, "doc1").is_ok());
-    assert!(get_doc(&db, "doc2").is_ok()); // This looks buggy
+    assert!(get_doc(&db_cblite, "doc1").is_ok());
+    assert!(get_doc(&db_cblite, "doc2").is_ok()); // This looks buggy
 
-    change_channel(&mut db, "doc1", "channel2");
+    change_channel(&mut db_cblite, "doc1", "channel2");
 
     std::thread::sleep(std::time::Duration::from_secs(10));
-    assert!(get_doc(&db, "doc1").is_err());
+    assert!(get_doc(&db_cblite, "doc1").is_err());
 
     repl.stop(None);
 }
 
-fn create_doc(db: &mut Database, id: &str, channel: &str) {
+fn create_doc(db_cblite: &mut Database, id: &str, channel: &str) {
     let mut doc = Document::new_with_id(id);
     doc.set_properties_as_json(
         &serde_json::json!({
@@ -52,32 +52,32 @@ fn create_doc(db: &mut Database, id: &str, channel: &str) {
         .to_string(),
     )
     .unwrap();
-    db.save_document(&mut doc).unwrap();
+    db_cblite.save_document(&mut doc).unwrap();
 
     println!(
         "Created doc {id} with content: {}",
         doc.properties_as_json()
     );
 }
 
-fn get_doc(db: &Database, id: &str) -> Result<Document> {
-    db.get_document(id)
+fn get_doc(db_cblite: &Database, id: &str) -> Result<Document> {
+    db_cblite.get_document(id)
 }
 
-fn change_channel(db: &mut Database, id: &str, channel: &str) {
-    let mut doc = get_doc(db, id).unwrap();
+fn change_channel(db_cblite: &mut Database, id: &str, channel: &str) {
+    let mut doc = get_doc(db_cblite, id).unwrap();
     let mut prop = doc.mutable_properties();
     prop.at("channels").put_string(channel);
-    let _ = db.save_document(&mut doc);
+    let _ = db_cblite.save_document(&mut doc);
     println!(
         "Changed doc {id} with content: {}",
         doc.properties_as_json()
     );
 }
 
-fn setup_replicator(db: Database, session_token: String) -> Replicator {
+fn setup_replicator(db_cblite: Database, session_token: String) -> Replicator {
     let repl_conf = ReplicatorConfiguration {
-        database: Some(db.clone()),
+        database: Some(db_cblite.clone()),
         endpoint: Endpoint::new_with_url(SYNC_GW_URL).unwrap(),
         replicator_type: ReplicatorType::PushAndPull,
         continuous: true,