Peers sharing the same sync_passphrase can now find each other
automatically over the internet without manual ticket exchange or
port forwarding. Uses n0's public pkarr relay servers as a
rendezvous point.
How it works:
- Derive 8 deterministic Ed25519 keypair "slots" from the passphrase
- Each peer claims a slot by publishing its EndpointId as a TXT record
- All peers scan all 8 slots every 15s to discover new peers
- Re-publish every 60s with 5min TTL to stay visible
- Discovered EndpointIds feed into the same peer channel as gossip
This runs alongside the existing gossip discovery (which still needs
bootstrap peers) and direct ticket-file connections (used by tests).
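The slot-derivation step above can be sketched as follows. This is a hypothetical illustration: `derive_slot_seeds` and `SLOTS` are assumed names, and std's `DefaultHasher` stands in for a real KDF (a production version would use something like HKDF-SHA256 and feed each seed into an actual Ed25519 library such as ed25519-dalek).

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

const SLOTS: usize = 8; // number of deterministic rendezvous slots

/// Derive one 32-byte seed per slot from the shared passphrase.
/// NOTE: DefaultHasher is a stand-in for a real KDF; a real
/// implementation would turn each seed into an Ed25519 keypair
/// and publish/scan TXT records under the derived public keys.
fn derive_slot_seeds(passphrase: &str) -> Vec<[u8; 32]> {
    (0..SLOTS)
        .map(|slot| {
            let mut seed = [0u8; 32];
            // Expand (passphrase, slot, chunk) into 4 x 8 bytes.
            for chunk in 0..4 {
                let mut h = DefaultHasher::new();
                passphrase.hash(&mut h);
                (slot as u64).hash(&mut h);
                (chunk as u64).hash(&mut h);
                seed[chunk * 8..chunk * 8 + 8]
                    .copy_from_slice(&h.finish().to_le_bytes());
            }
            seed
        })
        .collect()
}

fn main() {
    let a = derive_slot_seeds("correct horse battery staple");
    let b = derive_slot_seeds("correct horse battery staple");
    assert_eq!(a, b, "derivation is deterministic across peers");
    assert_ne!(a[0], a[1], "each slot gets a distinct keypair seed");
    println!("derived {} slot seeds", a.len());
}
```

Determinism is the point: every peer holding the passphrase derives the same 8 keypairs, so they all know where to publish and where to look.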
All 6 stress tests pass (102 assets, 63+ MB/s bidirectional).
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Replace polling-based sync detection with SSE (Server-Sent Events) from
the CAN service for instant push notifications on new asset ingests. Add
incremental hash queries via ?since=timestamp parameter to avoid
transferring full hash lists on every sync cycle.
CAN service changes:
- Add broadcast channel (SyncEventSender) in AppState for SSE events
- Add GET /sync/events SSE endpoint with auth via header or query param
- Fire broadcast events on both ingest and sync push
- Add db::get_assets_since() for incremental queries
- Support ?since= parameter on POST /sync/hashes
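The incremental query behind db::get_assets_since() can be sketched in memory like this; the real version runs against SQLite, and the Asset shape here (hash + ingest timestamp) is an assumption.

```rust
/// Minimal in-memory sketch of db::get_assets_since(); the real
/// implementation queries SQLite. Asset's fields are assumed.
#[derive(Clone, Debug, PartialEq)]
struct Asset {
    hash: String,
    ingested_at: u64, // unix seconds
}

/// Return only assets ingested strictly after `since`, so repeated
/// sync cycles transfer deltas instead of the full hash list.
fn get_assets_since(assets: &[Asset], since: u64) -> Vec<Asset> {
    assets
        .iter()
        .filter(|a| a.ingested_at > since)
        .cloned()
        .collect()
}

fn main() {
    let assets = vec![
        Asset { hash: "aa".into(), ingested_at: 100 },
        Asset { hash: "bb".into(), ingested_at: 200 },
    ];
    let delta = get_assets_since(&assets, 150);
    assert_eq!(delta.len(), 1);
    assert_eq!(delta[0].hash, "bb");
}
```

The ?since= parameter on POST /sync/hashes would map straight onto this filter, with the caller passing the timestamp of its last successful sync.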
can-sync agent changes:
- Add SSE subscription with auto-reconnect in can_client
- Add get_hashes_since() for incremental catch-up
- Rewrite live push loop: SSE-driven with 30s fallback poll
- Remove poll_interval parameter from live sync functions
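The SSE-driven loop with a fallback poll can be sketched as below. A std mpsc channel stands in for the real SSE stream, the 30s fallback is shortened, and all names are illustrative; the real loop is an async tokio task.

```rust
use std::sync::mpsc::{self, RecvTimeoutError};
use std::time::Duration;

/// Decide why the push loop woke up: an SSE event arrived (sync now),
/// the fallback timeout fired (poll anyway), or the stream died
/// (reconnect). A channel stands in for the real SSE subscription.
fn next_wake(events: &mpsc::Receiver<String>, fallback: Duration) -> &'static str {
    match events.recv_timeout(fallback) {
        Ok(_event) => "sse",                                // pushed: sync immediately
        Err(RecvTimeoutError::Timeout) => "poll",           // quiet: fallback poll
        Err(RecvTimeoutError::Disconnected) => "reconnect", // stream died
    }
}

fn main() {
    let (tx, rx) = mpsc::channel();
    tx.send("ingest:abc123".to_string()).unwrap();
    assert_eq!(next_wake(&rx, Duration::from_millis(50)), "sse");
    assert_eq!(next_wake(&rx, Duration::from_millis(50)), "poll");
    drop(tx); // simulate the SSE connection dropping
    assert_eq!(next_wake(&rx, Duration::from_millis(50)), "reconnect");
}
```

The fallback poll keeps sync converging even if an SSE event is missed during a reconnect window.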
All 6 stress tests pass (102 assets, 63 MB/s bidirectional).
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add mpsc channel between live receive and push loops so hashes
received from a peer aren't pushed right back (echo prevention).
Change initial reconciliation to use tokio::join! for concurrent
send/receive, avoiding QUIC flow-control deadlock when both sides
have large transfers. Update known_hashes to union-insert so
peer-received hashes persist across poll cycles.
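The echo-prevention handshake between the two loops can be sketched with a std mpsc channel (the real loops are async tokio tasks, and the names here are illustrative):

```rust
use std::collections::HashSet;
use std::sync::mpsc::{self, Receiver};

/// Drain hashes reported by the receive loop into the push loop's
/// known set (union-insert), then drop any local hash the peer just
/// sent us so it isn't echoed straight back.
fn filter_echoes(
    local: Vec<String>,
    from_peer: &Receiver<String>,
    known_hashes: &mut HashSet<String>,
) -> Vec<String> {
    while let Ok(h) = from_peer.try_recv() {
        known_hashes.insert(h); // union-insert: persists across poll cycles
    }
    local
        .into_iter()
        .filter(|h| !known_hashes.contains(h))
        .collect()
}

fn main() {
    let (tx, rx) = mpsc::channel();
    // Receive loop: a hash just arrived from the peer.
    tx.send("abc123".to_string()).unwrap();

    // Push loop: skip the echo, send only locally-new hashes.
    let mut known = HashSet::new();
    let local = vec!["abc123".to_string(), "def456".to_string()];
    let to_send = filter_echoes(local, &rx, &mut known);
    assert_eq!(to_send, vec!["def456".to_string()]);
}
```

Because known_hashes only ever grows, a hash learned from the peer stays suppressed on every later poll cycle, not just the one in which it arrived.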
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Fix bidirectional stream handling: responder uses accept_bi() instead
of open_bi() so both sides communicate on the same stream
- Add live_receive_loop to accept incoming bi-streams during ongoing
sync (peer's push loop opens new streams per batch)
- Split live_sync_loop into live_push_loop + live_receive_loop running
  concurrently via tokio::select! in the new run_live_sync()
- Update handle_incoming to run live sync after initial reconciliation
- Add direct peer connection via ticket files (EndpointAddr JSON
exchange) for local testing without gossip bootstrap
- Add CAN_PORT env var override for running multiple CAN instances
- Add integration test binary (sync_test.rs): starts 2 CAN services +
2 sync agents, ingests files on each side, verifies bidirectional
sync with 4 test cases (A→B, B→A, batch, count match)
- Add PowerShell script (run-integration-test.ps1) for one-command test
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Replace the over-engineered iroh-docs/libraries/filters architecture
with a simple peer-to-peer sync using:
- iroh 0.96 Endpoint for QUIC transport + NAT traversal
- iroh-gossip for peer discovery via shared passphrase
- Protobuf messages over QUIC streams for asset transfer
- CAN service's private /sync/* API for local data access
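The message framing over QUIC streams can be sketched as length-prefixed frames; the protobuf encoding is elided (the payload is opaque bytes), and the u32 big-endian prefix layout is an assumption rather than the confirmed wire format.

```rust
use std::io::{self, Cursor, Read, Write};

/// Write one length-prefixed frame: u32 big-endian length + payload.
/// In the real protocol the payload would be a protobuf-encoded sync
/// message; the exact prefix layout here is an assumption.
fn write_frame<W: Write>(w: &mut W, payload: &[u8]) -> io::Result<()> {
    w.write_all(&(payload.len() as u32).to_be_bytes())?;
    w.write_all(payload)
}

/// Read one frame back off the stream.
fn read_frame<R: Read>(r: &mut R) -> io::Result<Vec<u8>> {
    let mut len = [0u8; 4];
    r.read_exact(&mut len)?;
    let mut payload = vec![0u8; u32::from_be_bytes(len) as usize];
    r.read_exact(&mut payload)?;
    Ok(payload)
}

fn main() -> io::Result<()> {
    // A Cursor stands in for a QUIC stream's byte pipe.
    let mut buf = Vec::new();
    write_frame(&mut buf, b"asset-hash-batch")?;
    let mut stream = Cursor::new(buf);
    assert_eq!(read_frame(&mut stream)?, b"asset-hash-batch".to_vec());
    Ok(())
}
```

Length prefixes let a receiver pull complete protobuf messages off a stream without relying on stream boundaries, which matters when a push loop opens a fresh bi-stream per batch.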
Deleted: announcer, fetcher, library, manifest, node, routes (2860 lines)
Added: discovery, peer, protocol (simplified ~600 lines)
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
CAN Service: content-addressable storage with HTTP API, SQLite metadata,
file-based blob storage, thumbnail generation, and integrity verification.
can-sync v1: P2P sync sidecar using iroh-docs for encrypted peer-to-peer
replication with library/filter-based selective sync. It builds fully but
is being superseded by v2 (a simplified full-mirror approach).
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>