Initial commit: CAN Service + examples (can-sync v1, canfs, filemanager, paste)
CAN Service: content-addressable storage with HTTP API, SQLite metadata, file-based blob storage, thumbnail generation, and integrity verification.

can-sync v1: P2P sync sidecar using iroh-docs for encrypted peer-to-peer replication with library/filter-based selective sync. Fully builds but is being superseded by v2 (a simplified full-mirror approach).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
commit 360ecbdad0
## .gitignore (vendored, new file, 24 lines)
@@ -0,0 +1,24 @@

```gitignore
# Build artifacts
/target/
examples/*/target/

# Data files (runtime-generated)
can_data/
*.db
*.db-shm
*.db-wal
can_sync_data/
examples/can-sync/can_sync_data/

# IDE / Editor
.vscode/
.idea/
*.swp
*.swo

# OS files
.DS_Store
Thumbs.db

# Claude local settings
.claude/
```
## API.md (new file, 483 lines)
@@ -0,0 +1,483 @@

# CAN Service API Reference

**Base URL:** `http://localhost:3210`
**API Prefix:** `/api/v1/can/0`
**Content-Type:** `application/json` (all responses are JSON)

> The `0` in the path is the `can_id`. MVP supports only container `0`.

---

## Quick Start

```bash
# Start the server
cargo run

# Store a file
curl -X POST http://localhost:3210/api/v1/can/0/ingest \
  -F "file=@photo.jpg" \
  -F "tags=vacation,beach" \
  -F "description=Summer trip"

# Store data (no file needed)
curl -X POST http://localhost:3210/api/v1/can/0/ingest/data \
  -H "Content-Type: application/json" \
  -d '{"data": {"sensor": "temp", "value": 22.5}, "tags": "iot,sensor"}'

# List everything
curl http://localhost:3210/api/v1/can/0/list

# Search by tag
curl "http://localhost:3210/api/v1/can/0/search?tags=vacation"
```

---

## Response Envelope

All responses use a standard wrapper:

```json
// Success
{
  "status": "success",
  "data": { ... }
}

// Error
{
  "status": "error",
  "error": "Human-readable error message"
}
```

**HTTP Status Codes:**

| Code | Meaning |
|------|---------|
| 200 | Success |
| 400 | Bad request (missing/invalid parameters) |
| 404 | Asset not found |
| 500 | Internal error / corrupted asset |
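Because every endpoint uses the same envelope, a script can branch on the `status` field. A minimal shell sketch (the response string below is a hard-coded sample; a real script would capture `curl -s ...` output):

```shell
# sample envelope; in practice: resp=$(curl -s http://localhost:3210/api/v1/can/0/list)
resp='{"status": "success", "data": {"items": []}}'

# tolerate both compact and pretty-printed JSON (optional space after the colon)
if echo "$resp" | grep -q '"status": *"success"'; then
  echo "request ok"
else
  echo "request failed: $resp"
fi
```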

---

## Endpoints

### 1. Ingest File (Multipart)

Upload a binary file with optional metadata.

```
POST /api/v1/can/0/ingest
Content-Type: multipart/form-data
```

**Form Fields:**

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `file` | Binary | **Yes** | The file to store |
| `mime_type` | String | No | Override MIME type (auto-detected from filename if omitted) |
| `human_file_name` | String | No | Logical filename (e.g. `report.pdf`) |
| `human_readable_path` | String | No | Logical folder path (e.g. `/docs/reports/`) |
| `application` | String | No | Application that produced this file |
| `user` | String | No | User or agent identity |
| `tags` | String | No | Comma-separated tags (e.g. `finance,Q4,report`) |
| `description` | String | No | Human-readable description |

**Response:**

```json
{
  "status": "success",
  "data": {
    "timestamp": 1773014400123,
    "hash": "a3b2c4d5e6f7...",
    "filename": "1773014400123_a3b2c4d5e6f7_finance_Q4.pdf"
  }
}
```

**Example:**

```bash
curl -X POST http://localhost:3210/api/v1/can/0/ingest \
  -F "file=@quarterly_report.pdf" \
  -F "application=WebUI" \
  -F "user=jason" \
  -F "tags=finance,Q4,report" \
  -F "description=Q4 2025 financial report" \
  -F "human_file_name=quarterly_report.pdf" \
  -F "human_readable_path=/finance/reports/"
```

---

### 2. Ingest Data (JSON)

Store any JSON value directly -- no multipart needed. Designed for agents and programmatic use.

```
POST /api/v1/can/0/ingest/data
Content-Type: application/json
```

**JSON Body:**

| Field | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `data` | Any JSON | **Yes** | -- | The payload to store. Object, array, string, number, boolean, or null. |
| `mime_type` | String | No | `application/json` | Override MIME type (e.g. `text/plain` to store as `.txt`) |
| `human_file_name` | String | No | | Logical filename |
| `human_readable_path` | String | No | | Logical folder path |
| `application` | String | No | | Application/agent that produced this data |
| `user` | String | No | | User or agent identity |
| `tags` | String | No | | Comma-separated tags |
| `description` | String | No | | Human-readable description |

The `data` field is serialized to pretty-printed JSON and stored as a `.json` file (or `.txt` etc. if you override `mime_type`).

**Response:** Same as file ingest.

**Examples:**

```bash
# Minimal -- just dump an object
curl -X POST http://localhost:3210/api/v1/can/0/ingest/data \
  -H "Content-Type: application/json" \
  -d '{"data": {"key": "value"}}'

# Agent saving structured output
curl -X POST http://localhost:3210/api/v1/can/0/ingest/data \
  -H "Content-Type: application/json" \
  -d '{
    "data": {
      "agent_id": "planner-v2",
      "session": "abc-123",
      "steps": ["research", "outline", "draft"]
    },
    "application": "AgentOrchestrator",
    "user": "planner_agent",
    "tags": "agent,plan,session",
    "description": "Planning output for session abc-123",
    "human_file_name": "plan_output.json",
    "human_readable_path": "/agents/planner/"
  }'

# Store a plain string
curl -X POST http://localhost:3210/api/v1/can/0/ingest/data \
  -H "Content-Type: application/json" \
  -d '{"data": "Log: task completed at 14:30", "tags": "log"}'

# Store an array
curl -X POST http://localhost:3210/api/v1/can/0/ingest/data \
  -H "Content-Type: application/json" \
  -d '{"data": [1, 2, 3, "four"], "tags": "test"}'

# Store as plain text instead of JSON
curl -X POST http://localhost:3210/api/v1/can/0/ingest/data \
  -H "Content-Type: application/json" \
  -d '{"data": "plain text content", "mime_type": "text/plain"}'
```

---

### 3. Retrieve Asset

Download the physical file by its hash.

```
GET /api/v1/can/0/asset/{hash}
```

**Path Parameters:**

| Param | Type | Description |
|-------|------|-------------|
| `hash` | String | The SHA-256 hash returned from ingest |

**Response:** Raw file bytes with headers:
- `Content-Type` set to the asset's MIME type
- `Content-Disposition: attachment; filename="<human_filename>"`

Returns `500` if the asset is flagged as corrupted.

**Example:**

```bash
curl -o output.pdf http://localhost:3210/api/v1/can/0/asset/a3b2c4d5e6f7...
```

---

### 4. Get Asset Metadata

Retrieve all metadata for an asset without downloading the file.

```
GET /api/v1/can/0/asset/{hash}/meta
```

**Response:**

```json
{
  "status": "success",
  "data": {
    "hash": "a3b2c4d5e6f7...",
    "mime_type": "application/pdf",
    "application": "WebUI",
    "user": "jason",
    "tags": ["finance", "Q4", "report"],
    "description": "Q4 2025 financial report",
    "human_filename": "quarterly_report.pdf",
    "human_path": "/finance/reports/",
    "timestamp": 1773014400123,
    "is_trashed": false,
    "is_corrupted": false
  }
}
```

---

### 5. Update Metadata

Modify tags and/or description for an existing asset. The physical file is unchanged.

```
PATCH /api/v1/can/0/asset/{hash}
Content-Type: application/json
```

**JSON Body:**

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `tags` | String[] | No | New tag list (replaces all existing tags) |
| `description` | String | No | New description |

Both fields are optional. Only provided fields are updated.

**Example:**

```bash
curl -X PATCH http://localhost:3210/api/v1/can/0/asset/a3b2c4d5e6f7... \
  -H "Content-Type: application/json" \
  -d '{
    "tags": ["finance", "Q4", "report", "reviewed"],
    "description": "Q4 2025 report - reviewed by CFO"
  }'
```

**Response:**

```json
{
  "status": "success",
  "data": "updated"
}
```

---

### 6. List Assets

Paginated listing of all assets with optional filtering.

```
GET /api/v1/can/0/list
```

**Query Parameters:**

| Param | Type | Default | Description |
|-------|------|---------|-------------|
| `limit` | Integer | `50` | Page size |
| `offset` | Integer | `0` | Starting position |
| `offset_time` | Integer | -- | Epoch ms cursor. Lists items strictly before/after this timestamp (based on `order`). Faster than offset for large datasets. |
| `order` | String | `desc` | Sort direction: `asc` or `desc` (by timestamp) |
| `application` | String | -- | Filter to assets from this application |
| `include_trashed` | Boolean | `false` | Include soft-deleted assets |
| `include_corrupted` | Boolean | `false` | Include corrupted assets |

**Response:**

```json
{
  "status": "success",
  "data": {
    "items": [
      {
        "hash": "a3b2...",
        "mime_type": "application/pdf",
        "application": "WebUI",
        "user": "jason",
        "tags": ["finance"],
        "description": "...",
        "human_filename": "report.pdf",
        "human_path": "/docs/",
        "timestamp": 1773014400123,
        "is_trashed": false,
        "is_corrupted": false
      }
    ],
    "pagination": {
      "limit": 50,
      "offset": 0,
      "total": 142
    }
  }
}
```

**Examples:**

```bash
# First page, 10 items
curl "http://localhost:3210/api/v1/can/0/list?limit=10"

# Second page
curl "http://localhost:3210/api/v1/can/0/list?limit=10&offset=10"

# Only assets from a specific app, oldest first
curl "http://localhost:3210/api/v1/can/0/list?application=IoTAgent&order=asc"

# Include trashed assets
curl "http://localhost:3210/api/v1/can/0/list?include_trashed=true"
```
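The `offset_time` cursor avoids deep-offset scans: take the `timestamp` of the last item on the current page and pass it as the cursor for the next request. A sketch using only shell tools (the page below is a hard-coded sample response; a real script would capture it with `curl -s`):

```shell
# sample /list page (truncated); in practice: page=$(curl -s "http://localhost:3210/api/v1/can/0/list?limit=50")
page='{"status": "success", "data": {"items": [{"hash": "a3b2...", "timestamp": 1773014400123}]}}'

# cursor = timestamp of the last item on this page
cursor=$(echo "$page" | grep -o '"timestamp": *[0-9]*' | tail -1 | grep -o '[0-9]*$')
echo "next page: /api/v1/can/0/list?limit=50&order=desc&offset_time=$cursor"
```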

---

### 7. Search Assets

Search with multiple filters. All filters are AND-combined.

```
GET /api/v1/can/0/search
```

**Query Parameters:**

| Param | Type | Default | Description |
|-------|------|---------|-------------|
| `hash` | String | -- | Exact hash or prefix match (e.g. `a3b2` matches `a3b2c4d5...`) |
| `start_time` | Integer | -- | Epoch ms lower bound (inclusive) |
| `end_time` | Integer | -- | Epoch ms upper bound (inclusive) |
| `tags` | String | -- | Comma-separated. AND logic: asset must have **all** specified tags. |
| `mime_type` | String | -- | Exact MIME type match (e.g. `image/jpeg`) |
| `user` | String | -- | Exact user/agent identity match |
| `application` | String | -- | Exact application match |
| `limit` | Integer | `50` | Page size |
| `offset` | Integer | `0` | Starting position |
| `order` | String | `desc` | Sort direction: `asc` or `desc` |
| `include_trashed` | Boolean | `false` | Include soft-deleted assets |
| `include_corrupted` | Boolean | `false` | Include corrupted assets |

**Response:** Same structure as List (items + pagination).

**Examples:**

```bash
# Find by hash prefix
curl "http://localhost:3210/api/v1/can/0/search?hash=a3b2"

# All JPEG images since a given timestamp (epoch ms)
curl "http://localhost:3210/api/v1/can/0/search?mime_type=image/jpeg&start_time=1773014400000"

# Assets tagged with BOTH "sensor" AND "temperature"
curl "http://localhost:3210/api/v1/can/0/search?tags=sensor,temperature"

# Everything from a specific agent
curl "http://localhost:3210/api/v1/can/0/search?application=PlannerAgent&user=agent_v2"

# Combine filters
curl "http://localhost:3210/api/v1/can/0/search?tags=report&application=WebUI&order=asc&limit=5"
```

---

### 8. Get Thumbnail

Generate a resized thumbnail for image assets. Non-image assets return a placeholder SVG.

```
GET /api/v1/can/0/asset/{hash}/thumb/{max_width}/{max_height}
```

**Path Parameters:**

| Param | Type | Description |
|-------|------|-------------|
| `hash` | String | Asset hash |
| `max_width` | Integer | Maximum width in pixels |
| `max_height` | Integer | Maximum height in pixels |

Aspect ratio is always preserved. The image fits within the `max_width x max_height` box.

**Response:**
- **Image assets:** JPEG bytes (`Content-Type: image/jpeg`). Cached in `.thumbs/` if enabled.
- **Non-image assets:** SVG placeholder icon (`Content-Type: image/svg+xml`).

**Example:**

```bash
# 200x200 thumbnail
curl -o thumb.jpg http://localhost:3210/api/v1/can/0/asset/a3b2.../thumb/200/200
```

---

## Configuration

`config.yaml` at the project root (or pass a custom path as the first CLI argument):

```yaml
storage_root: "/var/lib/can_data"     # Where files are stored
admin_token: "super_secret_rebuild"   # Bearer token for admin operations
enable_thumbnail_cache: true          # Cache thumbnails in .thumbs/
rebuild_error_threshold: 50           # Error tolerance before hard rebuild
verify_interval_hours: 12             # Hours between integrity scrubs
```

---

## Concepts

### Hash

Every asset gets a unique SHA-256 hash computed as `SHA256(timestamp_be_bytes + file_content)`. This hash is the primary identifier used in all API calls. Because the timestamp is mixed in, even identical file content produces different hashes at different times.

### Physical Filename

Files are stored as: `{timestamp}_{hash}_{tags}.{extension}`

For example: `1773014400123_a3b2c4d5e6f7_finance_Q4.pdf`
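The components can be recovered from a physical filename with plain shell string splitting (a sketch; the filename is the doc's example, and anything after the second underscore is treated as tags plus extension):

```shell
fname="1773014400123_a3b2c4d5e6f7_finance_Q4.pdf"
ts=${fname%%_*}      # text before the first underscore  -> timestamp
rest=${fname#*_}
hash=${rest%%_*}     # text between first and second underscores -> hash
ext=${fname##*.}     # text after the last dot -> extension
echo "$ts $hash $ext"   # 1773014400123 a3b2c4d5e6f7 pdf
```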

This naming allows offline integrity verification -- you can recompute the hash from the timestamp and file contents and compare it to the filename.
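A sketch of that offline check. Two assumptions not stated in the doc: the timestamp is packed as 8 big-endian bytes, and `xxd`/`sha256sum` (Linux; use `shasum -a 256` on macOS) are available:

```shell
ts=1773014400123
printf 'blob contents' > demo.bin   # stand-in for the stored file
# SHA256(timestamp_be_bytes + file_content): 8 big-endian timestamp bytes, then the blob
hash=$({ printf '%016x' "$ts" | xxd -r -p; cat demo.bin; } | sha256sum | cut -d' ' -f1)
echo "$hash"                        # compare against the hash embedded in the filename
rm demo.bin
```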

### Tags

Tags are comma-separated strings. On ingest and in search, pass them as a single string: `"finance,Q4,report"`. In metadata responses, they come back as an array: `["finance", "Q4", "report"]`.

When searching, tag matching uses AND logic: `?tags=finance,Q4` finds only assets that have **both** tags.

When patching, tags are **replaced** entirely (not merged).

### Integrity Verification

A background verifier runs automatically:
1. **On startup:** Scrubs all assets against their hashes.
2. **On file change:** Watches the storage directory for modifications.
3. **Periodically:** Every `verify_interval_hours`.

Corrupted assets are flagged (`is_corrupted: true`) and excluded from standard list/search results unless `include_corrupted=true` is passed. Retrieving a corrupted asset via GET returns a 500 error.

### OS File Attributes

Critical metadata is written to the host OS as file attributes for disaster recovery:
- **Linux/macOS:** Extended attributes (`xattr`) under the `user.can.*` namespace
- **Windows:** NTFS Alternate Data Streams (`:can.*`)

This means the SQLite database can be rebuilt from scratch by scanning the storage directory.
## Cargo.lock (generated, new file, 2630 lines)

Diff suppressed because it is too large.
## Cargo.toml (new file, 52 lines)
@@ -0,0 +1,52 @@

```toml
[package]
name = "can-service"
version = "0.1.0"
edition = "2021"
description = "Containerized Asset Network - a self-healing local storage daemon"

[dependencies]
# Web framework
axum = { version = "0.8", features = ["multipart"] }
tokio = { version = "1", features = ["full"] }
tower-http = { version = "0.6", features = ["cors", "trace"] }
tokio-util = { version = "0.7", features = ["io"] }

# Database
rusqlite = { version = "0.32", features = ["bundled"] }

# Serialization
serde = { version = "1", features = ["derive"] }
serde_json = "1"
serde_yaml = "0.9"

# Hashing
sha2 = "0.10"
hex = "0.4"

# Image processing
image = { version = "0.25", default-features = false, features = ["jpeg", "png", "gif", "webp"] }

# File system watching
notify = "7"

# MIME type detection
mime_guess = "2"
mime = "0.3"

# Logging
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["env-filter"] }

# Utilities
chrono = { version = "0.4", features = ["serde"] }
anyhow = "1"
thiserror = "2"

# OS attributes (unix only, windows uses custom ADS)
[target.'cfg(unix)'.dependencies]
xattr = "1"

[dev-dependencies]
tempfile = "3"
reqwest = { version = "0.12", features = ["multipart", "json"] }
tokio-test = "0.4"
```
## config.yaml (new file, 5 lines)
@@ -0,0 +1,5 @@

```yaml
storage_root: "./can_data"
admin_token: "super_secret_rebuild"
enable_thumbnail_cache: true
rebuild_error_threshold: 50
verify_interval_hours: 12
```
## examples/can-sync/Cargo.lock (generated, new file, 5575 lines)

Diff suppressed because it is too large.
## examples/can-sync/Cargo.toml (new file, 44 lines)
@@ -0,0 +1,44 @@

```toml
[package]
name = "can-sync"
version = "0.1.0"
edition = "2021"
description = "P2P sync service for CAN content-addressable storage"

[[bin]]
name = "can-sync"
path = "src/main.rs"

[dependencies]
# P2P networking
iroh = "0.96"
iroh-blobs = "0.98"
iroh-docs = "0.96"
iroh-gossip = "0.96"

# HTTP server + client
axum = "0.8"
tokio = { version = "1", features = ["full"] }
reqwest = { version = "0.12", features = ["json", "multipart"] }
tower-http = { version = "0.6", features = ["cors"] }

# Serialization
serde = { version = "1", features = ["derive"] }
serde_json = "1"
serde_yaml = "0.9"
postcard = { version = "1", features = ["alloc"] }

# Storage
rusqlite = { version = "0.32", features = ["bundled"] }

# Utilities
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["env-filter"] }
anyhow = "1"
open = "5"
sha2 = "0.10"
hex = "0.4"
uuid = { version = "1", features = ["v4"] }
chrono = { version = "0.4", features = ["serde"] }
bytes = "1"
futures-lite = "2"
tokio-util = { version = "0.7", features = ["io"] }
```
## examples/can-sync/README.md (new file, 263 lines)
@@ -0,0 +1,263 @@

# CAN Sync

P2P file synchronization service that runs on top of [CAN Service](../../). Uses [iroh](https://iroh.computer/) for encrypted peer-to-peer networking with NAT traversal.

```
┌─────────────┐  HTTP API   ┌─────────────┐ iroh (QUIC)  ┌─────────────┐
│ CAN Service │◄───────────►│  CAN Sync   │◄────────────►│  CAN Sync   │
│ (port 3210) │             │ (port 3213) │              │  (remote)   │
│  storage +  │             │ P2P node +  │              │             │
│   SQLite    │             │  libraries  │              │             │
└─────────────┘             └─────────────┘              └─────────────┘
```

CAN Sync communicates with CAN Service **only** via its public HTTP API — zero changes to CAN Service required.

## Quick Start

1. **Start CAN Service** (default port 3210):

   ```bash
   cd ../..
   cargo run
   ```

2. **Edit config** (optional — defaults work out of the box):

   ```bash
   cp config.yaml my-config.yaml
   # edit my-config.yaml if needed
   ```

3. **Start CAN Sync**:

   ```bash
   cargo run
   # or with a custom config:
   cargo run -- my-config.yaml
   ```

CAN Sync starts on `http://127.0.0.1:3213` and connects to CAN Service at `http://127.0.0.1:3210/api/v1/can/0`.

## Configuration

`config.yaml`:

```yaml
# URL of the local CAN Service API
can_service_url: "http://127.0.0.1:3210/api/v1/can/0"

# Address for the CAN Sync HTTP API
listen_addr: "127.0.0.1:3213"

# Directory for persistent data (peer key, sync state DB)
data_dir: "./can_sync_data"

# Custom relay server URL (null = iroh's public relay)
relay_url: null

# Seconds between fast polls for new assets
poll_interval_secs: 5

# Seconds between full scans of all assets
full_scan_interval_secs: 300
```

## Concepts

### Libraries

A **library** is a shared collection of CAN assets that syncs between peers. Each library has a **filter** that determines which assets belong to it.

Filter options (combined with AND logic):
- `application` — match assets with this application tag (e.g. `"paste"`)
- `tags` — match assets with any of these tags (e.g. `["photos", "backup"]`)
- `user` — match assets from this user identity
- `mime_prefix` — match assets whose MIME type starts with this (e.g. `"image/"`)
- `hashes` — manual list of specific asset hashes to include
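As an illustration, a filter combining two of these fields (values are hypothetical) would be posted as part of the library body:

```json
{
  "name": "my-images",
  "filter": {
    "user": "jason",
    "mime_prefix": "image/"
  }
}
```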
|
||||||
|
|
||||||
|
### Sync Flow
|
||||||
|
|
||||||
|
**Outbound** (local → remote):
|
||||||
|
1. Announcer polls CAN Service for new/changed assets
|
||||||
|
2. Assets matching a library's filter get announced to the library's iroh document
|
||||||
|
3. iroh replicates the entry to all subscribed peers
|
||||||
|
4. Remote peer's fetcher downloads the blob and ingests it into their local CAN Service
|
||||||
|
|
||||||
|
**Inbound** (remote → local):
|
||||||
|
1. iroh document receives new entry from remote peer
|
||||||
|
2. Fetcher downloads the blob via iroh's encrypted QUIC transport
|
||||||
|
3. Fetcher verifies the CAN hash (SHA-256) independently
|
||||||
|
4. Fetcher ingests the file into local CAN Service with all metadata preserved
|
||||||
|
|
||||||
|
## API
|
||||||
|
|
||||||
|
All endpoints return JSON with `{ "status": "success", "data": ... }` or `{ "status": "error", "error": "..." }`.
|
||||||
|
|
||||||
|
### Status & Peers
|
||||||
|
|
||||||
|
| Method | Endpoint | Description |
|
||||||
|
|--------|----------|-------------|
|
||||||
|
| GET | `/status` | Node status, CAN service health, library count |
|
||||||
|
| GET | `/peers` | Connected peers list |
|
||||||
|
|
||||||
|
### Libraries
|
||||||
|
|
||||||
|
| Method | Endpoint | Description |
|
||||||
|
|--------|----------|-------------|
|
||||||
|
| POST | `/libraries` | Create a library |
|
||||||
|
| GET | `/libraries` | List all libraries |
|
||||||
|
| GET | `/libraries/{id}` | Get library details |
|
||||||
|
| DELETE | `/libraries/{id}` | Remove a library |
|
||||||
|
|
||||||
|
### Sharing
|
||||||
|
|
||||||
|
| Method | Endpoint | Description |
|
||||||
|
|--------|----------|-------------|
|
||||||
|
| POST | `/libraries/{id}/invite` | Generate a share ticket |
|
||||||
|
| POST | `/join` | Join a library from a ticket |
|
||||||
|
|
||||||
|
### Examples
|
||||||
|
|
||||||
|
**Create a library** that syncs all assets with `application=paste`:
|
||||||
|
```bash
|
||||||
|
curl -X POST http://127.0.0.1:3213/libraries \
|
||||||
|
-H "Content-Type: application/json" \
|
||||||
|
-d '{"name": "my-pastes", "filter": {"application": "paste"}}'
|
||||||
|
```
|
||||||
|
|
||||||
|
**Create a library** that syncs all images:
|
||||||
|
```bash
|
||||||
|
curl -X POST http://127.0.0.1:3213/libraries \
|
||||||
|
-H "Content-Type: application/json" \
|
||||||
|
-d '{"name": "images", "filter": {"mime_prefix": "image/"}}'
|
||||||
|
```
|
||||||
|
|
||||||
|
**Generate an invite ticket** to share with another machine:
|
||||||
|
```bash
|
||||||
|
curl -X POST http://127.0.0.1:3213/libraries/{id}/invite
|
||||||
|
```
|
||||||
|
|
||||||
|
**Join a library** on another machine using the ticket:
|
||||||
|
```bash
|
||||||
|
curl -X POST http://127.0.0.1:3213/join \
|
||||||
|
-H "Content-Type: application/json" \
|
||||||
|
-d '{"ticket": "eyJsaWJyYXJ5X25hbWUiOi..."}'
|
||||||
|
```
|
||||||
|
|
||||||
|
**List all libraries**:
|
||||||
|
```bash
|
||||||
|
curl http://127.0.0.1:3213/libraries
|
||||||
|
```
|
||||||
|
|
||||||
|
**Check status**:
|
||||||
|
```bash
|
||||||
|
curl http://127.0.0.1:3213/status
|
||||||
|
```
|
||||||
|
|
||||||
|
## Two-Machine Setup
|
||||||
|
|
||||||
|
### Machine A (the host)
|
||||||
|
|
||||||
|
**1. Start CAN Service** (default port 3210):
|
||||||
|
```bash
|
||||||
|
cd /path/to/CanService
|
||||||
|
cargo run
|
||||||
|
```
|
||||||
|
|
||||||
|
**2. Start CAN Sync** with default config (port 3213):
|
||||||
|
```bash
|
||||||
|
cd examples/can-sync
|
||||||
|
cargo run
|
||||||
|
```
|
||||||
|
|
||||||
|
**3. Create a library** (e.g. sync all images):
|
||||||
|
```bash
|
||||||
|
curl -X POST http://127.0.0.1:3213/libraries \
|
||||||
|
-H "Content-Type: application/json" \
|
||||||
|
-d '{"name": "shared-images", "filter": {"mime_prefix": "image/"}}'
|
||||||
|
```
|
||||||
|
Save the `id` from the response (e.g. `"id": "a1b2c3d4-..."`).
|
||||||
|
|
||||||
|
**4. Generate an invite ticket:**
|
||||||
|
```bash
|
||||||
|
curl -X POST http://127.0.0.1:3213/libraries/a1b2c3d4-.../invite
|
||||||
|
```
|
||||||
|
Copy the `ticket` string from the response — this is what Machine B needs.
|
||||||
|
|
||||||
|
### Machine B (the joiner)
|
||||||
|
|
||||||
|
**1. Start CAN Service** on a different port:
|
||||||
|
```bash
|
||||||
|
cd /path/to/CanService
|
||||||
|
CAN_PORT=3220 cargo run
|
||||||
|
```
|
||||||
|
|
||||||
|
**2. Create a config file** for CAN Sync pointing at Machine B's CAN Service:
|
||||||
|
```yaml
|
||||||
|
# machine-b-config.yaml
|
||||||
|
can_service_url: "http://127.0.0.1:3220/api/v1/can/0"
|
||||||
|
listen_addr: "127.0.0.1:3223"
|
||||||
|
data_dir: "./can_sync_data_b"
|
||||||
|
```
|
||||||
|
|
||||||
|
**3. Start CAN Sync** with that config:
|
||||||
|
```bash
|
||||||
|
cd examples/can-sync
|
||||||
|
cargo run -- machine-b-config.yaml
|
||||||
|
```
|
||||||
|
|
||||||
|
**4. Join the library** using Machine A's ticket:
|
||||||
|
```bash
|
||||||
|
curl -X POST http://127.0.0.1:3223/join \
|
||||||
|
-H "Content-Type: application/json" \
|
||||||
|
-d '{"ticket": "eyJsaWJyYXJ5X25hbWUiOi..."}'
|
||||||
|
```
|
||||||
|
|
||||||
|
### Verify it works

**Ingest a file on Machine A:**

```bash
curl -X POST http://127.0.0.1:3210/api/v1/can/0/ingest \
  -F "file=@photo.jpg" \
  -F "mime_type=image/jpeg"
```

**Check Machine B** — the file should appear within a few seconds:

```bash
curl "http://127.0.0.1:3220/api/v1/can/0/list?limit=5"
```

The same image (with matching hash and metadata) will be in Machine B's CAN Service, synced over iroh's encrypted P2P connection.

## Architecture

```
src/
├── main.rs        — entry point: config, iroh node, announcer, fetcher, HTTP server
├── config.rs      — YAML config loading
├── can_client.rs  — HTTP client for CAN Service API (list, search, ingest, meta, etc.)
├── node.rs        — iroh endpoint + blobs + docs + gossip + router
├── library.rs     — library/filter definitions + SQLite state tracking
├── manifest.rs    — AssetSyncEntry serialized into iroh document entries
├── announcer.rs   — polls CAN Service, announces matching assets to libraries
├── fetcher.rs     — receives remote entries, downloads blobs, ingests into CAN Service
└── routes.rs      — Axum HTTP API handlers
```

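The announcer/fetcher split above is essentially a produce/consume pipeline over a shared replicated document. In miniature, with a plain `mpsc` channel standing in for the iroh document (this is an illustration of the data flow, not the actual implementation):

```rust
use std::sync::mpsc;
use std::thread;

// Miniature of the announcer → document → fetcher flow, with an mpsc channel
// standing in for the replicated iroh document (illustrative only).
fn run_pipeline(hashes: Vec<&'static str>) -> Vec<String> {
    let (doc_tx, doc_rx) = mpsc::channel::<String>();

    // "Announcer": publishes matching asset hashes into the shared document.
    let announcer = thread::spawn(move || {
        for h in hashes {
            doc_tx.send(h.to_string()).unwrap();
        }
        // Dropping doc_tx closes the channel so the receiver loop ends.
    });

    // "Fetcher": drains entries; the real fetcher would ingest each into CAN.
    let received: Vec<String> = doc_rx.iter().collect();
    announcer.join().unwrap();
    received
}

fn main() {
    let got = run_pipeline(vec!["aaa", "bbb"]);
    assert_eq!(got, vec!["aaa".to_string(), "bbb".to_string()]);
}
```

The real system differs in that the "channel" is a CRDT document replicated between peers, so both sides can also write concurrently.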
## Security

- **Transport**: All peer-to-peer traffic is encrypted with QUIC + TLS 1.3 (mandatory in iroh)
- **Identity**: Each node has an Ed25519 keypair generated on first run
- **Access control**: Library access via cryptographic capability tickets — only peers with a valid ticket can read/write
- **NAT traversal**: iroh's built-in relay servers and hole-punching
- **Hash verification**: Downloaded files are independently verified against CAN's SHA-256 hash before ingestion

## Current Status

The service compiles and runs with the following fully implemented:

- iroh P2P node startup with all protocol handlers (blobs, docs, gossip)
- CAN Service HTTP client with full API coverage
- Library management with SQLite persistence
- Announcer polling loop (fast + full scan) with real iroh-docs writes
- Fetcher with iroh document event subscription for real-time sync
- Fetcher blob download via iroh and CAN hash verification before ingestion
- Real DocTicket-based invite/join with cryptographic capability tokens
- HTTP API for library CRUD, invite, and join
7
examples/can-sync/config.yaml
Normal file
@@ -0,0 +1,7 @@
# CAN Sync configuration
can_service_url: "http://127.0.0.1:3210/api/v1/can/0"
listen_addr: "127.0.0.1:3213"
data_dir: "./can_sync_data"
relay_url: null
poll_interval_secs: 5
full_scan_interval_secs: 300
234
examples/can-sync/src/announcer.rs
Normal file
@@ -0,0 +1,234 @@
use std::sync::Arc;
use std::time::Duration;

use anyhow::Result;
use tracing::{debug, error, info, warn};

use crate::can_client::CanClient;
use crate::library::SyncState;
use crate::manifest::AssetSyncEntry;
use crate::node::SyncNode;

/// The announcer periodically polls CAN service for new or changed assets
/// and writes matching entries into iroh library documents.
pub struct Announcer {
    can: CanClient,
    state: Arc<SyncState>,
    node: Arc<SyncNode>,
    poll_interval: Duration,
    full_scan_interval: Duration,
}

impl Announcer {
    pub fn new(
        can: CanClient,
        state: Arc<SyncState>,
        node: Arc<SyncNode>,
        poll_interval_secs: u64,
        full_scan_interval_secs: u64,
    ) -> Self {
        Self {
            can,
            state,
            node,
            poll_interval: Duration::from_secs(poll_interval_secs),
            full_scan_interval: Duration::from_secs(full_scan_interval_secs),
        }
    }

    /// Run the announcer loop — fast polls + periodic full scans
    pub async fn run(self) {
        let mut fast_tick = tokio::time::interval(self.poll_interval);
        let mut full_tick = tokio::time::interval(self.full_scan_interval);
        // Skip the first immediate tick for full scan (let fast poll get first data)
        full_tick.tick().await;

        info!(
            "Announcer started (fast poll: {}s, full scan: {}s)",
            self.poll_interval.as_secs(),
            self.full_scan_interval.as_secs(),
        );

        loop {
            tokio::select! {
                _ = fast_tick.tick() => {
                    if let Err(e) = self.fast_poll().await {
                        warn!("Fast poll error: {:#}", e);
                    }
                }
                _ = full_tick.tick() => {
                    if let Err(e) = self.full_scan().await {
                        warn!("Full scan error: {:#}", e);
                    }
                }
            }
        }
    }

    /// Fast poll: check for recently ingested assets
    async fn fast_poll(&self) -> Result<()> {
        let last_ts = self
            .state
            .get_state("last_seen_timestamp")?
            .and_then(|s| s.parse::<i64>().ok())
            .unwrap_or(0);

        // Get recent assets ordered newest first
        let resp = self.can.list(50, 0, "desc", Some(last_ts)).await?;

        if resp.items.is_empty() {
            return Ok(());
        }

        debug!("Fast poll found {} new assets since ts={}", resp.items.len(), last_ts);

        // Track the newest timestamp we see
        let mut max_ts = last_ts;
        for asset in &resp.items {
            if asset.timestamp > max_ts {
                max_ts = asset.timestamp;
            }
        }

        // Process assets against libraries
        let libraries = self.state.list_libraries()?;

        for asset in &resp.items {
            for lib in &libraries {
                if lib.filter.matches(asset) {
                    self.announce_asset(lib, asset).await?;
                }
            }
        }

        // Update last seen timestamp
        self.state.set_state("last_seen_timestamp", &max_ts.to_string())?;

        Ok(())
    }

    /// Full scan: paginate through all assets, checking for metadata changes
    async fn full_scan(&self) -> Result<()> {
        info!("Starting full scan...");

        let libraries = self.state.list_libraries()?;

        if libraries.is_empty() {
            debug!("No libraries configured, skipping full scan");
            return Ok(());
        }

        let page_size = 100;
        let mut offset = 0;
        let mut total_scanned = 0;
        let mut total_announced = 0;

        loop {
            let resp = self.can.list_all(page_size, offset, true).await?;
            let count = resp.items.len();
            total_scanned += count;

            for asset in &resp.items {
                for lib in &libraries {
                    if lib.filter.matches(asset) {
                        let was_new = self.announce_asset(lib, asset).await?;
                        if was_new {
                            total_announced += 1;
                        }
                    }
                }
            }

            if (count as i64) < page_size {
                break;
            }
            offset += page_size;
        }

        info!(
            "Full scan complete: scanned {}, announced {} new/updated",
            total_scanned, total_announced
        );
        Ok(())
    }

    /// Announce a single asset to a library's iroh document.
    /// Returns true if the asset was newly announced or updated.
    async fn announce_asset(
        &self,
        lib: &crate::library::Library,
        asset: &crate::can_client::AssetMeta,
    ) -> Result<bool> {
        let doc_id = match &lib.doc_id {
            Some(id) => id.clone(),
            None => {
                debug!("Library '{}' has no doc_id yet, skipping", lib.name);
                return Ok(false);
            }
        };

        // Check if already announced at current version
        if self.state.is_announced(&lib.id, &asset.hash)? {
            // Already announced — skip unless metadata changed
            // (full scan handles re-announcement on metadata change)
            return Ok(false);
        }

        // Download file content from CAN service and add as iroh blob
        let iroh_blob_hash = match self.can.get_asset(&asset.hash).await {
            Ok(content) => {
                // Add to iroh blob store so remote peers can download it
                match self.node.blobs.add_bytes(content).await {
                    Ok(tag_info) => Some(tag_info.hash.to_string()),
                    Err(e) => {
                        warn!(
                            "Failed to add blob for asset {}: {:#}",
                            &asset.hash[..12],
                            e
                        );
                        None
                    }
                }
            }
            Err(e) => {
                warn!(
                    "Failed to download asset {} from CAN service: {:#}",
                    &asset.hash[..12],
                    e
                );
                None
            }
        };

        // Create sync entry with the iroh blob hash
        let mut entry = AssetSyncEntry::from_asset_meta(asset, &self.node.peer_id());
        entry.iroh_blob_hash = iroh_blob_hash;
        let entry_bytes = entry.to_bytes();

        // Write to iroh document (CRDT — concurrent writes merge automatically)
        if let Err(e) = self
            .node
            .write_to_doc(&doc_id, asset.hash.as_bytes(), &entry_bytes)
            .await
        {
            error!(
                "Failed to write asset {} to doc {}: {:#}",
                &asset.hash[..12],
                &doc_id[..12],
                e
            );
            return Ok(false);
        }

        // Mark as announced in local state
        self.state.mark_announced(&lib.id, &asset.hash, entry.version)?;

        debug!(
            "Announced asset {} to library '{}' (doc {})",
            &asset.hash[..12],
            lib.name,
            &doc_id[..12]
        );
        Ok(true)
    }
}
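The full-scan loop in announcer.rs stops when a page comes back shorter than `page_size`. Stripped of the HTTP calls and library filtering, that termination logic looks like this (a standalone sketch with mock in-memory data standing in for `CanClient::list_all`):

```rust
// Pagination termination: keep fetching pages until one comes back short.
fn fetch_page(all: &[u32], limit: usize, offset: usize) -> Vec<u32> {
    let start = offset.min(all.len());
    let end = (offset + limit).min(all.len());
    all[start..end].to_vec()
}

fn scan_all(all: &[u32], limit: usize) -> usize {
    let mut offset = 0;
    let mut total_scanned = 0;
    loop {
        let page = fetch_page(all, limit, offset);
        total_scanned += page.len();
        if page.len() < limit {
            break; // short (or empty) page ⇒ no more data
        }
        offset += limit;
    }
    total_scanned
}

fn main() {
    // 100 + 100 + 50: the short 50-item page ends the loop.
    let items: Vec<u32> = (0..250).collect();
    assert_eq!(scan_all(&items, 100), 250);
    // Exact multiple: one extra empty page is fetched, then the loop ends.
    let exact: Vec<u32> = (0..200).collect();
    assert_eq!(scan_all(&exact, 100), 200);
}
```

Note the exact-multiple case costs one extra round trip; that is the usual trade-off for not tracking a total count.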
291
examples/can-sync/src/can_client.rs
Normal file
@@ -0,0 +1,291 @@
use anyhow::{Context, Result};
use bytes::Bytes;
use reqwest::multipart;
use serde::{Deserialize, Serialize};

/// HTTP client for CAN service API
#[derive(Debug, Clone)]
pub struct CanClient {
    client: reqwest::Client,
    base_url: String,
}

// ── API response types (mirror CAN service) ──

#[derive(Debug, Deserialize)]
pub struct ApiResponse<T> {
    pub status: String,
    pub data: T,
}

#[derive(Debug, Deserialize)]
pub struct ErrorResponse {
    pub status: String,
    pub error: String,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct AssetMeta {
    pub hash: String,
    pub mime_type: String,
    pub application: Option<String>,
    pub user: Option<String>,
    pub tags: Vec<String>,
    pub description: Option<String>,
    pub human_filename: Option<String>,
    pub human_path: Option<String>,
    pub timestamp: i64,
    pub is_trashed: bool,
    #[serde(default)]
    pub is_corrupted: bool,
    pub size: i64,
}

#[derive(Debug, Deserialize)]
pub struct ListResponse {
    pub items: Vec<AssetMeta>,
    pub pagination: Pagination,
}

#[derive(Debug, Deserialize)]
pub struct Pagination {
    pub limit: i64,
    pub offset: i64,
    pub total: i64,
}

#[derive(Debug, Deserialize)]
pub struct IngestResult {
    pub timestamp: i64,
    pub hash: String,
    pub filename: String,
}

// ── Search parameters ──

#[derive(Debug, Default, Serialize)]
pub struct SearchParams {
    #[serde(skip_serializing_if = "Option::is_none")]
    pub hash: Option<String>,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub start_time: Option<i64>,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub end_time: Option<i64>,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub tags: Option<String>,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub mime_type: Option<String>,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub user: Option<String>,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub application: Option<String>,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub limit: Option<i64>,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub offset: Option<i64>,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub order: Option<String>,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub include_trashed: Option<bool>,
}

// ── Ingest metadata ──

#[derive(Debug, Default)]
pub struct IngestMeta {
    pub mime_type: Option<String>,
    pub human_file_name: Option<String>,
    pub human_readable_path: Option<String>,
    pub application: Option<String>,
    pub user: Option<String>,
    pub tags: Option<String>,
    pub description: Option<String>,
}

// ── Client implementation ──

impl CanClient {
    pub fn new(base_url: &str) -> Self {
        Self {
            client: reqwest::Client::new(),
            base_url: base_url.trim_end_matches('/').to_string(),
        }
    }

    /// List assets with pagination and ordering
    pub async fn list(
        &self,
        limit: i64,
        offset: i64,
        order: &str,
        offset_time: Option<i64>,
    ) -> Result<ListResponse> {
        let mut url = format!("{}/list?limit={}&offset={}&order={}", self.base_url, limit, offset, order);
        if let Some(ts) = offset_time {
            url.push_str(&format!("&offset_time={}", ts));
        }
        let resp = self.client.get(&url).send().await.context("list request failed")?;
        let status = resp.status();
        if !status.is_success() {
            let text = resp.text().await.unwrap_or_default();
            anyhow::bail!("CAN list failed ({}): {}", status, text);
        }
        let api: ApiResponse<ListResponse> = resp.json().await.context("parse list response")?;
        Ok(api.data)
    }

    /// List all assets (paginated, including trashed for full sync)
    pub async fn list_all(
        &self,
        limit: i64,
        offset: i64,
        include_trashed: bool,
    ) -> Result<ListResponse> {
        let mut url = format!("{}/list?limit={}&offset={}&order=asc", self.base_url, limit, offset);
        if include_trashed {
            url.push_str("&include_trashed=true");
        }
        let resp = self.client.get(&url).send().await.context("list_all request failed")?;
        let status = resp.status();
        if !status.is_success() {
            let text = resp.text().await.unwrap_or_default();
            anyhow::bail!("CAN list_all failed ({}): {}", status, text);
        }
        let api: ApiResponse<ListResponse> = resp.json().await.context("parse list_all response")?;
        Ok(api.data)
    }

    /// Search assets by filters
    pub async fn search(&self, params: &SearchParams) -> Result<ListResponse> {
        let resp = self
            .client
            .get(&format!("{}/search", self.base_url))
            .query(params)
            .send()
            .await
            .context("search request failed")?;
        let status = resp.status();
        if !status.is_success() {
            let text = resp.text().await.unwrap_or_default();
            anyhow::bail!("CAN search failed ({}): {}", status, text);
        }
        let api: ApiResponse<ListResponse> = resp.json().await.context("parse search response")?;
        Ok(api.data)
    }

    /// Download asset content by hash
    pub async fn get_asset(&self, hash: &str) -> Result<Bytes> {
        let resp = self
            .client
            .get(&format!("{}/asset/{}", self.base_url, hash))
            .send()
            .await
            .context("get_asset request failed")?;
        let status = resp.status();
        if !status.is_success() {
            let text = resp.text().await.unwrap_or_default();
            anyhow::bail!("CAN get_asset failed ({}): {}", status, text);
        }
        resp.bytes().await.context("read asset bytes")
    }

    /// Get asset metadata by hash
    pub async fn get_meta(&self, hash: &str) -> Result<AssetMeta> {
        let resp = self
            .client
            .get(&format!("{}/asset/{}/meta", self.base_url, hash))
            .send()
            .await
            .context("get_meta request failed")?;
        let status = resp.status();
        if !status.is_success() {
            let text = resp.text().await.unwrap_or_default();
            anyhow::bail!("CAN get_meta failed ({}): {}", status, text);
        }
        let api: ApiResponse<AssetMeta> = resp.json().await.context("parse meta response")?;
        Ok(api.data)
    }

    /// Ingest a file into CAN service via multipart upload
    pub async fn ingest(&self, content: Bytes, meta: IngestMeta) -> Result<IngestResult> {
        let file_part = multipart::Part::bytes(content.to_vec())
            .file_name(meta.human_file_name.clone().unwrap_or_else(|| "file".to_string()))
            .mime_str(meta.mime_type.as_deref().unwrap_or("application/octet-stream"))?;

        let mut form = multipart::Form::new().part("file", file_part);

        if let Some(ref v) = meta.mime_type {
            form = form.text("mime_type", v.clone());
        }
        if let Some(ref v) = meta.human_file_name {
            form = form.text("human_file_name", v.clone());
        }
        if let Some(ref v) = meta.human_readable_path {
            form = form.text("human_readable_path", v.clone());
        }
        if let Some(ref v) = meta.application {
            form = form.text("application", v.clone());
        }
        if let Some(ref v) = meta.user {
            form = form.text("user", v.clone());
        }
        if let Some(ref v) = meta.tags {
            form = form.text("tags", v.clone());
        }
        if let Some(ref v) = meta.description {
            form = form.text("description", v.clone());
        }

        let resp = self
            .client
            .post(&format!("{}/ingest", self.base_url))
            .multipart(form)
            .send()
            .await
            .context("ingest request failed")?;
        let status = resp.status();
        if !status.is_success() {
            let text = resp.text().await.unwrap_or_default();
            anyhow::bail!("CAN ingest failed ({}): {}", status, text);
        }
        let api: ApiResponse<IngestResult> = resp.json().await.context("parse ingest response")?;
        Ok(api.data)
    }

    /// Update asset metadata (tags, description)
    pub async fn update_meta(
        &self,
        hash: &str,
        tags: Option<Vec<String>>,
        description: Option<String>,
    ) -> Result<()> {
        #[derive(Serialize)]
        struct MetadataUpdate {
            #[serde(skip_serializing_if = "Option::is_none")]
            tags: Option<Vec<String>>,
            #[serde(skip_serializing_if = "Option::is_none")]
            description: Option<String>,
        }
        let resp = self
            .client
            .patch(&format!("{}/asset/{}", self.base_url, hash))
            .json(&MetadataUpdate { tags, description })
            .send()
            .await
            .context("update_meta request failed")?;
        let status = resp.status();
        if !status.is_success() {
            let text = resp.text().await.unwrap_or_default();
            anyhow::bail!("CAN update_meta failed ({}): {}", status, text);
        }
        Ok(())
    }

    /// Check if CAN service is reachable
    pub async fn health_check(&self) -> Result<bool> {
        match self.list(1, 0, "desc", None).await {
            Ok(_) => Ok(true),
            Err(_) => Ok(false),
        }
    }
}
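Every method in can_client.rs repeats the same response handling: a non-2xx status becomes an error carrying the body text. In isolation the pattern reduces to a small helper (sketched with a stand-in `Response` type rather than reqwest's, so it runs without any HTTP dependency):

```rust
// Stand-in for an HTTP response; not reqwest's type.
struct Response {
    status: u16,
    body: String,
}

// Every CanClient call repeats this check: non-2xx ⇒ error with body text.
fn require_success(op: &str, resp: Response) -> Result<String, String> {
    if (200..300).contains(&resp.status) {
        Ok(resp.body)
    } else {
        Err(format!("CAN {} failed ({}): {}", op, resp.status, resp.body))
    }
}

fn main() {
    assert!(require_success("list", Response { status: 200, body: "{}".into() }).is_ok());
    let err = require_success("list", Response { status: 404, body: "not found".into() });
    assert_eq!(err.unwrap_err(), "CAN list failed (404): not found");
}
```

With reqwest this would be an async helper taking `reqwest::Response`; factoring it out would shrink each client method to its URL construction and JSON decode.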
78
examples/can-sync/src/config.rs
Normal file
@@ -0,0 +1,78 @@
use anyhow::{Context, Result};
use serde::{Deserialize, Serialize};
use std::path::{Path, PathBuf};

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct SyncConfig {
    /// Base URL for the CAN service API (e.g. "http://127.0.0.1:3210/api/v1/can/0")
    pub can_service_url: String,

    /// Address for the CAN Sync HTTP API (e.g. "127.0.0.1:3213")
    pub listen_addr: String,

    /// Directory for persistent data (peer key, sync state DB)
    pub data_dir: String,

    /// Optional custom relay URL; null uses iroh's public relay
    pub relay_url: Option<String>,

    /// Seconds between fast polls for new assets
    #[serde(default = "default_poll_interval")]
    pub poll_interval_secs: u64,

    /// Seconds between full scans of all assets
    #[serde(default = "default_full_scan_interval")]
    pub full_scan_interval_secs: u64,
}

fn default_poll_interval() -> u64 {
    5
}

fn default_full_scan_interval() -> u64 {
    300
}

impl SyncConfig {
    /// Load config from a YAML file, falling back to defaults if not found
    pub fn load(path: &Path) -> Result<Self> {
        if path.exists() {
            let contents =
                std::fs::read_to_string(path).context("Failed to read config file")?;
            let config: SyncConfig =
                serde_yaml::from_str(&contents).context("Failed to parse config YAML")?;
            Ok(config)
        } else {
            tracing::warn!("Config file not found at {}, using defaults", path.display());
            Ok(Self::default())
        }
    }

    /// Resolved data directory path
    pub fn data_path(&self) -> PathBuf {
        PathBuf::from(&self.data_dir)
    }

    /// Path to the peer keypair file
    pub fn peer_key_path(&self) -> PathBuf {
        self.data_path().join("peer_key")
    }

    /// Path to the sync state SQLite database
    pub fn db_path(&self) -> PathBuf {
        self.data_path().join("can_sync.db")
    }
}

impl Default for SyncConfig {
    fn default() -> Self {
        Self {
            can_service_url: "http://127.0.0.1:3210/api/v1/can/0".to_string(),
            listen_addr: "127.0.0.1:3213".to_string(),
            data_dir: "./can_sync_data".to_string(),
            relay_url: None,
            poll_interval_secs: default_poll_interval(),
            full_scan_interval_secs: default_full_scan_interval(),
        }
    }
}
352
examples/can-sync/src/fetcher.rs
Normal file
@@ -0,0 +1,352 @@
use std::sync::Arc;

use anyhow::Result;
use futures_lite::StreamExt;
use sha2::{Digest, Sha256};
use tokio::io::AsyncReadExt;
use tracing::{debug, error, info, warn};

use crate::can_client::{CanClient, IngestMeta};
use crate::library::SyncState;
use crate::manifest::AssetSyncEntry;
use crate::node::SyncNode;

/// The fetcher receives remote asset entries from iroh documents
/// and ingests them into the local CAN service.
pub struct Fetcher {
    can: CanClient,
    state: Arc<SyncState>,
    node: Arc<SyncNode>,
}

impl Fetcher {
    pub fn new(can: CanClient, state: Arc<SyncState>, node: Arc<SyncNode>) -> Self {
        Self { can, state, node }
    }

    /// Run the fetcher — subscribes to library document events for real-time sync,
    /// falls back to periodic polling for documents without active subscriptions
    pub async fn run(self) {
        info!("Fetcher started — watching for remote asset entries");

        // Run two loops concurrently:
        // 1. Subscription watcher — subscribes to active library docs
        // 2. Periodic checker — catches anything missed
        let poll_interval = tokio::time::interval(std::time::Duration::from_secs(10));
        let sub_interval = tokio::time::interval(std::time::Duration::from_secs(5));

        tokio::pin!(poll_interval);
        tokio::pin!(sub_interval);

        loop {
            tokio::select! {
                _ = poll_interval.tick() => {
                    if let Err(e) = self.check_for_new_entries().await {
                        warn!("Fetcher poll error: {:#}", e);
                    }
                }
                _ = sub_interval.tick() => {
                    // Try to subscribe to any library docs that we haven't subscribed to yet
                    if let Err(e) = self.subscribe_to_libraries().await {
                        debug!("Fetcher subscription check: {:#}", e);
                    }
                }
            }
        }
    }

    /// Subscribe to document events for all libraries that have doc_ids
    async fn subscribe_to_libraries(&self) -> Result<()> {
        let libraries = self.state.list_libraries()?;

        for lib in &libraries {
            if let Some(ref doc_id_hex) = lib.doc_id {
                // Open the doc and subscribe to events
                let doc = match self.node.open_doc(doc_id_hex).await {
                    Ok(d) => d,
                    Err(_) => continue,
                };

                let mut events = match doc.subscribe().await {
                    Ok(e) => e,
                    Err(_) => continue,
                };

                // Spawn a task to process events from this doc
                let can = self.can.clone();
                let node_peer_id = self.node.peer_id();
                let node = self.node.clone();
                let lib_name = lib.name.clone();

                tokio::spawn(async move {
                    while let Some(event) = events.next().await {
                        match event {
                            Ok(iroh_docs::engine::LiveEvent::InsertRemote {
                                entry,
                                content_status,
                                ..
                            }) => {
                                let key = entry.key().to_vec();
                                let can_hash = String::from_utf8_lossy(&key).to_string();

                                if content_status == iroh_docs::ContentStatus::Complete {
                                    // The entry value (our AssetSyncEntry) is available
                                    // Read the entry content from the blob store
                                    let content_hash = entry.content_hash();
                                    let mut reader = node.blobs.reader(content_hash);
                                    let mut buf = Vec::new();
                                    if reader.read_to_end(&mut buf).await.is_ok() {
                                        if let Ok(sync_entry) = AssetSyncEntry::from_bytes(&buf) {
                                            if sync_entry.last_modified_by == node_peer_id {
                                                continue; // Skip our own entries
                                            }
                                            info!(
                                                "Received remote entry for {} in library '{}'",
                                                &can_hash[..can_hash.len().min(12)],
                                                lib_name
                                            );
                                            if let Err(e) = process_remote_entry(
                                                &can,
                                                &node,
                                                &node_peer_id,
                                                &can_hash,
                                                sync_entry,
                                            )
                                            .await
                                            {
                                                error!(
                                                    "Error processing remote entry {}: {:#}",
                                                    &can_hash[..can_hash.len().min(12)],
                                                    e
                                                );
                                            }
                                        }
                                    }
                                }
                            }
                            Ok(iroh_docs::engine::LiveEvent::NeighborUp(peer)) => {
                                info!("Peer connected: {}", peer.fmt_short());
                            }
                            Ok(iroh_docs::engine::LiveEvent::NeighborDown(peer)) => {
                                info!("Peer disconnected: {}", peer.fmt_short());
                            }
                            Ok(_) => {} // Ignore other events
                            Err(e) => {
                                warn!("Document event error: {:#}", e);
                                break;
                            }
                        }
                    }
                });

                // Only subscribe to one doc per tick to avoid overwhelming
                return Ok(());
            }
        }
        Ok(())
    }

    /// Check all library documents for entries we don't have locally (polling fallback)
    async fn check_for_new_entries(&self) -> Result<()> {
        let libraries = self.state.list_libraries()?;

        for lib in &libraries {
            if let Some(ref doc_id_hex) = lib.doc_id {
                let doc = match self.node.open_doc(doc_id_hex).await {
                    Ok(d) => d,
                    Err(_) => continue,
                };

                // Query all entries (latest per key)
                let query = iroh_docs::store::Query::single_latest_per_key().build();
                let entries = match doc.get_many(query).await {
                    Ok(e) => e,
                    Err(_) => continue,
                };
                tokio::pin!(entries);

                while let Some(Ok(entry)) = entries.next().await {
                    let key = entry.key().to_vec();
                    let can_hash = String::from_utf8_lossy(&key).to_string();

                    // Read the entry value (AssetSyncEntry)
                    let content_hash = entry.content_hash();
                    let mut reader = self.node.blobs.reader(content_hash);
                    let mut buf = Vec::new();
                    if reader.read_to_end(&mut buf).await.is_err() {
                        continue;
                    }

                    let sync_entry = match AssetSyncEntry::from_bytes(&buf) {
                        Ok(e) => e,
                        Err(_) => continue,
                    };

                    // Skip our own entries
                    if sync_entry.last_modified_by == self.node.peer_id() {
                        continue;
                    }

                    // Check if already processed
                    if self.state.is_announced(&lib.id, &can_hash).unwrap_or(false) {
                        continue;
                    }

                    info!(
                        "Polling found remote entry for {} in library '{}'",
                        &can_hash[..can_hash.len().min(12)],
                        lib.name
                    );

                    if let Err(e) = process_remote_entry(
                        &self.can,
                        &self.node,
                        &self.node.peer_id(),
                        &can_hash,
                        sync_entry,
                    )
                    .await
                    {
                        error!(
                            "Error processing remote entry {}: {:#}",
                            &can_hash[..can_hash.len().min(12)],
                            e
                        );
                    }

                    // Mark as processed
                    let _ = self.state.mark_announced(&lib.id, &can_hash, 1);
                }
            }
        }
        Ok(())
    }
}

/// Process a remote asset entry — download blob and ingest into CAN service
async fn process_remote_entry(
    can: &CanClient,
    node: &SyncNode,
    local_peer_id: &str,
    can_hash: &str,
    entry: AssetSyncEntry,
) -> Result<()> {
    // Skip if this is our own entry
    if entry.last_modified_by == local_peer_id {
        return Ok(());
    }

    // Check if already in local CAN service
    match can.get_meta(can_hash).await {
        Ok(existing) => {
            // Asset exists — check if metadata needs updating
            if entry.tags != existing.tags
                || entry.description != existing.description
                || entry.is_trashed != existing.is_trashed
            {
                info!("Updating metadata for {} from remote peer", &can_hash[..12]);
                can.update_meta(
                    can_hash,
                    Some(entry.tags.clone()),
                    entry.description.clone(),
                )
                .await?;
            }
            return Ok(());
        }
        Err(_) => {
            // Asset not found locally — need to fetch and ingest
        }
    }

    info!(
        "Fetching remote asset {} ({}B) from peer {}",
        &can_hash[..12],
        entry.size,
        &entry.last_modified_by[..entry.last_modified_by.len().min(12)]
    );

    // Download blob via iroh
|
||||||
|
let content = download_blob(node, &entry).await?;
|
||||||
|
|
||||||
|
if content.is_empty() {
|
||||||
|
warn!("Downloaded empty blob for {} — skipping", &can_hash[..12]);
|
||||||
|
return Ok(());
|
||||||
|
}
|
||||||
|
|
||||||
|
// Verify CAN hash: SHA256(timestamp_bytes + content)
|
||||||
|
if !verify_can_hash(can_hash, entry.timestamp, &content) {
|
||||||
|
error!(
|
||||||
|
"CAN hash verification failed for {} — rejecting",
|
||||||
|
&can_hash[..12]
|
||||||
|
);
|
||||||
|
return Ok(());
|
||||||
|
}
|
||||||
|
|
||||||
|
// Ingest into local CAN service
|
||||||
|
let meta = IngestMeta {
|
||||||
|
mime_type: Some(entry.mime_type.clone()),
|
||||||
|
human_file_name: entry.human_filename.clone(),
|
||||||
|
human_readable_path: entry.human_path.clone(),
|
||||||
|
application: entry.application.clone(),
|
||||||
|
user: entry.user.clone(),
|
||||||
|
tags: if entry.tags.is_empty() {
|
||||||
|
None
|
||||||
|
} else {
|
||||||
|
Some(entry.tags.join(","))
|
||||||
|
},
|
||||||
|
description: entry.description.clone(),
|
||||||
|
};
|
||||||
|
|
||||||
|
match can.ingest(content.into(), meta).await {
|
||||||
|
Ok(result) => {
|
||||||
|
info!(
|
||||||
|
"Ingested remote asset: hash={}, filename={}",
|
||||||
|
&result.hash[..12],
|
||||||
|
result.filename
|
||||||
|
);
|
||||||
|
}
|
||||||
|
Err(e) => {
|
||||||
|
error!("Failed to ingest remote asset {}: {:#}", &can_hash[..12], e);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
Ok(())
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Download a blob via iroh using the blob hash from the sync entry
|
||||||
|
async fn download_blob(node: &SyncNode, entry: &AssetSyncEntry) -> Result<Vec<u8>> {
|
||||||
|
let blob_hash_str = match &entry.iroh_blob_hash {
|
||||||
|
Some(h) => h,
|
||||||
|
None => {
|
||||||
|
warn!("No iroh blob hash in sync entry — cannot download");
|
||||||
|
return Ok(Vec::new());
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
// Parse the BLAKE3 hash
|
||||||
|
let blob_hash: iroh_blobs::Hash = blob_hash_str
|
||||||
|
.parse()
|
||||||
|
.map_err(|_| anyhow::anyhow!("Invalid iroh blob hash: {}", &blob_hash_str[..12]))?;
|
||||||
|
|
||||||
|
// Read from the local blob store (iroh-docs should have synced it)
|
||||||
|
let mut reader = node.blobs.reader(blob_hash);
|
||||||
|
let mut buf = Vec::with_capacity(entry.size as usize);
|
||||||
|
reader.read_to_end(&mut buf).await?;
|
||||||
|
|
||||||
|
debug!(
|
||||||
|
"Downloaded blob {} ({} bytes)",
|
||||||
|
&blob_hash_str[..12],
|
||||||
|
buf.len()
|
||||||
|
);
|
||||||
|
Ok(buf)
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Verify CAN hash: SHA256(timestamp_string + content) matches expected hash
|
||||||
|
fn verify_can_hash(expected_hash: &str, timestamp: i64, content: &[u8]) -> bool {
|
||||||
|
let mut hasher = Sha256::new();
|
||||||
|
hasher.update(timestamp.to_string().as_bytes());
|
||||||
|
hasher.update(content);
|
||||||
|
let computed = hex::encode(hasher.finalize());
|
||||||
|
computed == expected_hash
|
||||||
|
}
|
||||||
288
examples/can-sync/src/library.rs
Normal file
@ -0,0 +1,288 @@
use anyhow::{Context, Result};
use rusqlite::Connection;
use serde::{Deserialize, Serialize};

use crate::can_client::AssetMeta;

/// Filter criteria that determines which CAN assets belong to a library
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct LibraryFilter {
    /// Match assets with this application tag
    #[serde(skip_serializing_if = "Option::is_none")]
    pub application: Option<String>,
    /// Match assets with any of these tags
    #[serde(skip_serializing_if = "Option::is_none")]
    pub tags: Option<Vec<String>>,
    /// Match assets from this user
    #[serde(skip_serializing_if = "Option::is_none")]
    pub user: Option<String>,
    /// Match assets with MIME type prefix (e.g. "image/")
    #[serde(skip_serializing_if = "Option::is_none")]
    pub mime_prefix: Option<String>,
    /// Manual list of specific hashes to include
    #[serde(skip_serializing_if = "Option::is_none")]
    pub hashes: Option<Vec<String>>,
}

impl LibraryFilter {
    /// Check if an asset matches this filter
    pub fn matches(&self, asset: &AssetMeta) -> bool {
        // If hashes list is set, only match those exact hashes
        if let Some(ref hashes) = self.hashes {
            return hashes.contains(&asset.hash);
        }

        // All set criteria must match (AND logic)
        if let Some(ref app) = self.application {
            if asset.application.as_deref() != Some(app.as_str()) {
                return false;
            }
        }

        if let Some(ref required_tags) = self.tags {
            // Asset must have at least one of the required tags
            if !required_tags.iter().any(|t| asset.tags.contains(t)) {
                return false;
            }
        }

        if let Some(ref user) = self.user {
            if asset.user.as_deref() != Some(user.as_str()) {
                return false;
            }
        }

        if let Some(ref prefix) = self.mime_prefix {
            if !asset.mime_type.starts_with(prefix.as_str()) {
                return false;
            }
        }

        true
    }
}

/// A library definition stored locally
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Library {
    /// Unique library ID (UUID)
    pub id: String,
    /// Human-readable name
    pub name: String,
    /// Filter criteria
    pub filter: LibraryFilter,
    /// iroh document ID (namespace) — set after creation
    pub doc_id: Option<String>,
    /// Whether this library was created locally or joined from remote
    pub is_local: bool,
    /// Creation timestamp
    pub created_at: i64,
}

/// Tracks which assets have been announced to which libraries.
/// Uses std::sync::Mutex because rusqlite::Connection is !Send,
/// so tokio::sync::RwLock won't work across .await points.
pub struct SyncState {
    db: std::sync::Mutex<Connection>,
}

impl SyncState {
    /// Open or create the sync state database
    pub fn open(path: &std::path::Path) -> Result<Self> {
        let db = Connection::open(path).context("open sync state DB")?;
        db.execute_batch(
            "
            CREATE TABLE IF NOT EXISTS libraries (
                id TEXT PRIMARY KEY,
                name TEXT NOT NULL,
                filter_json TEXT NOT NULL,
                doc_id TEXT,
                is_local INTEGER NOT NULL DEFAULT 1,
                created_at INTEGER NOT NULL
            );

            CREATE TABLE IF NOT EXISTS announced_assets (
                library_id TEXT NOT NULL,
                hash TEXT NOT NULL,
                version INTEGER NOT NULL DEFAULT 1,
                announced_at INTEGER NOT NULL,
                PRIMARY KEY (library_id, hash),
                FOREIGN KEY (library_id) REFERENCES libraries(id) ON DELETE CASCADE
            );

            CREATE TABLE IF NOT EXISTS sync_state (
                key TEXT PRIMARY KEY,
                value TEXT NOT NULL
            );
            ",
        )
        .context("init sync state tables")?;
        Ok(Self {
            db: std::sync::Mutex::new(db),
        })
    }

    fn lock_db(&self) -> std::sync::MutexGuard<'_, Connection> {
        self.db.lock().expect("sync state DB lock poisoned")
    }

    // ── Library CRUD ──

    pub fn save_library(&self, lib: &Library) -> Result<()> {
        let db = self.lock_db();
        let filter_json = serde_json::to_string(&lib.filter)?;
        db.execute(
            "INSERT OR REPLACE INTO libraries (id, name, filter_json, doc_id, is_local, created_at)
             VALUES (?1, ?2, ?3, ?4, ?5, ?6)",
            rusqlite::params![
                lib.id,
                lib.name,
                filter_json,
                lib.doc_id,
                lib.is_local as i32,
                lib.created_at,
            ],
        )?;
        Ok(())
    }

    pub fn list_libraries(&self) -> Result<Vec<Library>> {
        let db = self.lock_db();
        let mut stmt =
            db.prepare("SELECT id, name, filter_json, doc_id, is_local, created_at FROM libraries")?;
        let libs = stmt
            .query_map([], |row| {
                let filter_json: String = row.get(2)?;
                Ok(Library {
                    id: row.get(0)?,
                    name: row.get(1)?,
                    filter: serde_json::from_str(&filter_json).unwrap_or(LibraryFilter {
                        application: None,
                        tags: None,
                        user: None,
                        mime_prefix: None,
                        hashes: None,
                    }),
                    doc_id: row.get(3)?,
                    is_local: row.get::<_, i32>(4)? != 0,
                    created_at: row.get(5)?,
                })
            })?
            .collect::<Result<Vec<_>, _>>()?;
        Ok(libs)
    }

    pub fn get_library(&self, id: &str) -> Result<Option<Library>> {
        let db = self.lock_db();
        let mut stmt = db.prepare(
            "SELECT id, name, filter_json, doc_id, is_local, created_at FROM libraries WHERE id = ?1",
        )?;
        let mut rows = stmt.query_map([id], |row| {
            let filter_json: String = row.get(2)?;
            Ok(Library {
                id: row.get(0)?,
                name: row.get(1)?,
                filter: serde_json::from_str(&filter_json).unwrap_or(LibraryFilter {
                    application: None,
                    tags: None,
                    user: None,
                    mime_prefix: None,
                    hashes: None,
                }),
                doc_id: row.get(3)?,
                is_local: row.get::<_, i32>(4)? != 0,
                created_at: row.get(5)?,
            })
        })?;
        match rows.next() {
            Some(Ok(lib)) => Ok(Some(lib)),
            Some(Err(e)) => Err(e.into()),
            None => Ok(None),
        }
    }

    pub fn delete_library(&self, id: &str) -> Result<()> {
        let db = self.lock_db();
        db.execute("DELETE FROM announced_assets WHERE library_id = ?1", [id])?;
        db.execute("DELETE FROM libraries WHERE id = ?1", [id])?;
        Ok(())
    }

    pub fn update_library_doc_id(&self, id: &str, doc_id: &str) -> Result<()> {
        let db = self.lock_db();
        db.execute(
            "UPDATE libraries SET doc_id = ?1 WHERE id = ?2",
            [doc_id, id],
        )?;
        Ok(())
    }

    // ── Asset announcement tracking ──

    pub fn is_announced(&self, library_id: &str, hash: &str) -> Result<bool> {
        let db = self.lock_db();
        let count: i64 = db.query_row(
            "SELECT COUNT(*) FROM announced_assets WHERE library_id = ?1 AND hash = ?2",
            [library_id, hash],
            |row| row.get(0),
        )?;
        Ok(count > 0)
    }

    pub fn get_announced_version(&self, library_id: &str, hash: &str) -> Result<Option<u64>> {
        let db = self.lock_db();
        let mut stmt = db.prepare(
            "SELECT version FROM announced_assets WHERE library_id = ?1 AND hash = ?2",
        )?;
        let mut rows = stmt.query_map(rusqlite::params![library_id, hash], |row| {
            row.get::<_, i64>(0)
        })?;
        match rows.next() {
            Some(Ok(v)) => Ok(Some(v as u64)),
            Some(Err(e)) => Err(e.into()),
            None => Ok(None),
        }
    }

    pub fn mark_announced(&self, library_id: &str, hash: &str, version: u64) -> Result<()> {
        let db = self.lock_db();
        let now = chrono::Utc::now().timestamp_millis();
        db.execute(
            "INSERT OR REPLACE INTO announced_assets (library_id, hash, version, announced_at)
             VALUES (?1, ?2, ?3, ?4)",
            rusqlite::params![library_id, hash, version as i64, now],
        )?;
        Ok(())
    }

    pub fn remove_announced(&self, library_id: &str, hash: &str) -> Result<()> {
        let db = self.lock_db();
        db.execute(
            "DELETE FROM announced_assets WHERE library_id = ?1 AND hash = ?2",
            [library_id, hash],
        )?;
        Ok(())
    }

    // ── General state ──

    pub fn get_state(&self, key: &str) -> Result<Option<String>> {
        let db = self.lock_db();
        let mut stmt = db.prepare("SELECT value FROM sync_state WHERE key = ?1")?;
        let mut rows = stmt.query_map([key], |row| row.get::<_, String>(0))?;
        match rows.next() {
            Some(Ok(v)) => Ok(Some(v)),
            Some(Err(e)) => Err(e.into()),
            None => Ok(None),
        }
    }

    pub fn set_state(&self, key: &str, value: &str) -> Result<()> {
        let db = self.lock_db();
        db.execute(
            "INSERT OR REPLACE INTO sync_state (key, value) VALUES (?1, ?2)",
            [key, value],
        )?;
        Ok(())
    }
}
121
examples/can-sync/src/main.rs
Normal file
@ -0,0 +1,121 @@
#![allow(dead_code)]

mod announcer;
mod can_client;
mod config;
mod fetcher;
mod library;
mod manifest;
mod node;
mod routes;

use std::path::PathBuf;
use std::sync::Arc;

use anyhow::{Context, Result};
use tracing::info;

use crate::announcer::Announcer;
use crate::can_client::CanClient;
use crate::config::SyncConfig;
use crate::fetcher::Fetcher;
use crate::library::SyncState;
use crate::node::SyncNode;
use crate::routes::AppState;

#[tokio::main]
async fn main() -> Result<()> {
    // Initialize tracing
    tracing_subscriber::fmt()
        .with_env_filter(
            tracing_subscriber::EnvFilter::try_from_default_env()
                .unwrap_or_else(|_| "can_sync=info,iroh=warn".parse().unwrap()),
        )
        .init();

    // Load config
    let config_path = std::env::args()
        .nth(1)
        .map(PathBuf::from)
        .unwrap_or_else(|| PathBuf::from("config.yaml"));

    let config = SyncConfig::load(&config_path)?;
    info!("CAN Sync starting...");
    info!("  CAN service: {}", config.can_service_url);
    info!("  Listen addr: {}", config.listen_addr);
    info!("  Data dir: {}", config.data_dir);

    // Ensure data directory exists
    std::fs::create_dir_all(config.data_path())
        .context("Failed to create data directory")?;

    // Initialize CAN service client
    let can = CanClient::new(&config.can_service_url);

    // Check CAN service health
    match can.health_check().await {
        Ok(true) => info!("CAN service is reachable"),
        Ok(false) | Err(_) => {
            tracing::warn!(
                "CAN service at {} is not reachable — will retry on each poll",
                config.can_service_url
            );
        }
    }

    // Open sync state database
    let state = SyncState::open(&config.db_path())?;
    let state = Arc::new(state);
    info!("Sync state DB opened at {}", config.db_path().display());

    // Start iroh P2P node
    let node = SyncNode::spawn(&config).await?;
    let node = Arc::new(node);
    info!("iroh node ID: {}", node.peer_id());

    // Build shared app state
    let app_state = Arc::new(AppState {
        node: node.clone(),
        state: state.clone(),
        can: can.clone(),
    });

    // Start the announcer (polls CAN service for new assets)
    let announcer = Announcer::new(
        can.clone(),
        state.clone(),
        node.clone(),
        config.poll_interval_secs,
        config.full_scan_interval_secs,
    );
    tokio::spawn(async move {
        announcer.run().await;
    });

    // Start the fetcher (receives remote assets and ingests them)
    let fetcher = Fetcher::new(can.clone(), state.clone(), node.clone());
    tokio::spawn(async move {
        fetcher.run().await;
    });

    // Build HTTP router
    let router = routes::build_router(app_state);

    // Start HTTP server
    let listener = tokio::net::TcpListener::bind(&config.listen_addr)
        .await
        .context("Failed to bind HTTP listener")?;
    info!("CAN Sync API listening on http://{}", config.listen_addr);

    // Open browser to status page
    let status_url = format!("http://{}/status", config.listen_addr);
    if open::that(&status_url).is_err() {
        info!("Open {} in your browser to check status", status_url);
    }

    axum::serve(listener, router)
        .await
        .context("HTTP server error")?;

    Ok(())
}
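`main()` reads five fields from `SyncConfig` (`can_service_url`, `listen_addr`, `data_dir`, `poll_interval_secs`, `full_scan_interval_secs`). A hypothetical minimal `config.yaml` inferred from those accesses; the actual key names and defaults live in `config.rs`, which is not part of this chunk, and all values below are placeholders:

```yaml
# Hypothetical config sketch — field names inferred from main.rs,
# values are illustrative only.
can_service_url: "http://localhost:3210"
listen_addr: "127.0.0.1:3211"
data_dir: "./can_sync_data"
poll_interval_secs: 30
full_scan_interval_secs: 300
```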
75
examples/can-sync/src/manifest.rs
Normal file
@ -0,0 +1,75 @@
use serde::{Deserialize, Serialize};

use crate::can_client::AssetMeta;

/// Entry stored in iroh documents for each synced asset.
/// Key = CAN hash, Value = serialized AssetSyncEntry
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct AssetSyncEntry {
    /// CAN timestamp (milliseconds since epoch)
    pub timestamp: i64,
    /// MIME type
    pub mime_type: String,
    /// Application tag
    pub application: Option<String>,
    /// User identity
    pub user: Option<String>,
    /// Tags list
    pub tags: Vec<String>,
    /// Description
    pub description: Option<String>,
    /// Original human-readable filename
    pub human_filename: Option<String>,
    /// Original human-readable path
    pub human_path: Option<String>,
    /// File size in bytes
    pub size: i64,
    /// Whether the asset is trashed
    pub is_trashed: bool,
    /// iroh blob hash (BLAKE3) for downloading via iroh
    pub iroh_blob_hash: Option<String>,
    /// Version counter for conflict resolution (higher wins)
    pub version: u64,
    /// Peer ID that last modified this entry
    pub last_modified_by: String,
}

impl AssetSyncEntry {
    /// Create from CAN service asset metadata
    pub fn from_asset_meta(meta: &AssetMeta, peer_id: &str) -> Self {
        Self {
            timestamp: meta.timestamp,
            mime_type: meta.mime_type.clone(),
            application: meta.application.clone(),
            user: meta.user.clone(),
            tags: meta.tags.clone(),
            description: meta.description.clone(),
            human_filename: meta.human_filename.clone(),
            human_path: meta.human_path.clone(),
            size: meta.size,
            is_trashed: meta.is_trashed,
            iroh_blob_hash: None,
            version: 1,
            last_modified_by: peer_id.to_string(),
        }
    }

    /// Serialize to bytes for storage in iroh document
    pub fn to_bytes(&self) -> Vec<u8> {
        postcard::to_allocvec(self).expect("serialize AssetSyncEntry")
    }

    /// Deserialize from bytes
    pub fn from_bytes(bytes: &[u8]) -> anyhow::Result<Self> {
        Ok(postcard::from_bytes(bytes)?)
    }

    /// Check if metadata differs from a CAN asset (indicates update needed)
    pub fn metadata_differs(&self, meta: &AssetMeta) -> bool {
        self.tags != meta.tags
            || self.description != meta.description
            || self.is_trashed != meta.is_trashed
            || self.human_filename != meta.human_filename
            || self.human_path != meta.human_path
    }
}
150
examples/can-sync/src/node.rs
Normal file
@ -0,0 +1,150 @@
use anyhow::{Context, Result};
use iroh::protocol::Router;
use iroh::Endpoint;
use iroh_blobs::store::mem::MemStore;
use iroh_blobs::{BlobsProtocol, ALPN as BLOBS_ALPN};
use iroh_docs::api::protocol::{AddrInfoOptions, ShareMode};
use iroh_docs::protocol::Docs;
use iroh_docs::{AuthorId, DocTicket, NamespaceId, ALPN as DOCS_ALPN};
use iroh_gossip::net::Gossip;
use iroh_gossip::ALPN as GOSSIP_ALPN;
use tokio::sync::OnceCell;

use crate::config::SyncConfig;

/// Holds all iroh subsystems for the P2P node
pub struct SyncNode {
    pub endpoint: Endpoint,
    pub blobs: BlobsProtocol,
    pub docs: Docs,
    pub gossip: Gossip,
    pub router: Router,
    /// Cached default author ID (created once on startup)
    author_id: OnceCell<AuthorId>,
}

impl SyncNode {
    /// Start the iroh node with all protocol handlers
    pub async fn spawn(_config: &SyncConfig) -> Result<Self> {
        // Build endpoint (Ed25519 keypair auto-generated and cached)
        let endpoint = Endpoint::bind()
            .await
            .map_err(|e| anyhow::anyhow!("Failed to bind iroh endpoint: {}", e))?;

        tracing::info!(
            "iroh node started — EndpointID: {}",
            endpoint.id()
        );

        // Gossip for peer communication
        let gossip = Gossip::builder().spawn(endpoint.clone());

        // Blob store (in-memory — blobs are transient, CAN service is authoritative)
        let mem_store = MemStore::default();
        let blobs_store: &iroh_blobs::api::Store = &mem_store;
        let blobs = BlobsProtocol::new(blobs_store, None);

        // Document sync (CRDT-replicated key-value store)
        let docs = Docs::memory()
            .spawn(endpoint.clone(), blobs_store.clone(), gossip.clone())
            .await
            .context("Failed to spawn iroh-docs")?;

        // Router accepts incoming connections and dispatches to handlers
        let router = Router::builder(endpoint.clone())
            .accept(BLOBS_ALPN, blobs.clone())
            .accept(GOSSIP_ALPN, gossip.clone())
            .accept(DOCS_ALPN, docs.clone())
            .spawn();

        Ok(Self {
            endpoint,
            blobs,
            docs,
            gossip,
            router,
            author_id: OnceCell::new(),
        })
    }

    /// Get this node's peer ID as a string
    pub fn peer_id(&self) -> String {
        self.endpoint.id().to_string()
    }

    /// Get the node's endpoint address info for sharing
    pub fn endpoint_addr(&self) -> iroh::EndpointAddr {
        self.endpoint.addr()
    }

    /// Get or create the default author for writing to documents
    pub async fn author(&self) -> Result<AuthorId> {
        self.author_id
            .get_or_try_init(|| async {
                self.docs.author_default().await
            })
            .await
            .copied()
    }

    /// Create a new iroh document and return its NamespaceId as a hex string
    pub async fn create_doc(&self) -> Result<String> {
        let doc = self.docs.create().await?;
        let ns_id = doc.id();
        Ok(hex::encode(ns_id.to_bytes()))
    }

    /// Open an existing document by its hex-encoded namespace ID
    pub async fn open_doc(&self, doc_id_hex: &str) -> Result<iroh_docs::api::Doc> {
        let ns_id = parse_namespace_id(doc_id_hex)?;
        self.docs
            .open(ns_id)
            .await?
            .ok_or_else(|| anyhow::anyhow!("Document {} not found", &doc_id_hex[..12]))
    }

    /// Write a key-value entry to a document
    pub async fn write_to_doc(
        &self,
        doc_id_hex: &str,
        key: &[u8],
        value: &[u8],
    ) -> Result<()> {
        let doc = self.open_doc(doc_id_hex).await?;
        let author = self.author().await?;
        doc.set_bytes(author, key.to_vec(), value.to_vec()).await?;
        Ok(())
    }

    /// Generate a share ticket (DocTicket) for a document
    pub async fn share_doc(&self, doc_id_hex: &str) -> Result<DocTicket> {
        let doc = self.open_doc(doc_id_hex).await?;
        let ticket = doc
            .share(ShareMode::Write, AddrInfoOptions::RelayAndAddresses)
            .await?;
        Ok(ticket)
    }

    /// Import a document from a DocTicket, returns the namespace ID as hex
    pub async fn import_doc(&self, ticket: DocTicket) -> Result<String> {
        let doc = self.docs.import(ticket).await?;
        let ns_id = doc.id();
        Ok(hex::encode(ns_id.to_bytes()))
    }

    /// Graceful shutdown
    pub async fn shutdown(self) -> Result<()> {
        tracing::info!("Shutting down iroh node...");
        self.router.shutdown().await?;
        Ok(())
    }
}

/// Parse a hex-encoded NamespaceId
pub fn parse_namespace_id(hex_str: &str) -> Result<NamespaceId> {
    let bytes: [u8; 32] = hex::decode(hex_str)
        .context("Invalid hex in doc_id")?
        .try_into()
        .map_err(|_| anyhow::anyhow!("doc_id must be 32 bytes (64 hex chars)"))?;
    Ok(NamespaceId::from(bytes))
}
430
examples/can-sync/src/routes.rs
Normal file
@ -0,0 +1,430 @@
use std::sync::Arc;

use axum::{
    extract::{Path, State},
    http::StatusCode,
    response::IntoResponse,
    routing::{get, post},
    Json, Router,
};
use serde::{Deserialize, Serialize};

use crate::can_client::CanClient;
use crate::library::{Library, LibraryFilter, SyncState};
use crate::node::SyncNode;

/// Shared application state for route handlers
pub struct AppState {
    pub node: Arc<SyncNode>,
    pub state: Arc<SyncState>,
    pub can: CanClient,
}

// ── Request/Response types ──

#[derive(Serialize)]
struct StatusResponse {
    peer_id: String,
    can_service_healthy: bool,
    library_count: usize,
}

#[derive(Serialize)]
struct PeerInfo {
    peer_id: String,
}

#[derive(Deserialize)]
pub struct CreateLibraryRequest {
    pub name: String,
    pub filter: LibraryFilter,
}

#[derive(Serialize)]
struct LibraryResponse {
    id: String,
    name: String,
    filter: LibraryFilter,
    doc_id: Option<String>,
    is_local: bool,
    created_at: i64,
}

#[derive(Serialize)]
struct InviteResponse {
    ticket: String,
}

#[derive(Deserialize)]
pub struct JoinRequest {
    pub ticket: String,
}

#[derive(Serialize)]
struct JoinResponse {
    library_id: String,
    message: String,
}

#[derive(Serialize)]
struct ApiResp<T: Serialize> {
    status: String,
    data: T,
}

#[derive(Serialize)]
struct ApiErr {
    status: String,
    error: String,
}

fn ok_json<T: Serialize>(data: T) -> Json<ApiResp<T>> {
    Json(ApiResp {
        status: "success".to_string(),
        data,
    })
}

fn err_resp(status: StatusCode, msg: &str) -> (StatusCode, Json<ApiErr>) {
    (
        status,
        Json(ApiErr {
            status: "error".to_string(),
            error: msg.to_string(),
        }),
    )
}

// ── Routes ──

pub fn build_router(app_state: Arc<AppState>) -> Router {
    Router::new()
        .route("/status", get(get_status))
        .route("/peers", get(get_peers))
        .route("/libraries", post(create_library).get(list_libraries))
        .route(
            "/libraries/{id}",
            get(get_library).delete(delete_library),
        )
        .route("/libraries/{id}/invite", post(create_invite))
        .route("/join", post(join_library))
        .with_state(app_state)
}

// ── Handlers ──

async fn get_status(State(app): State<Arc<AppState>>) -> impl IntoResponse {
    let can_healthy = app.can.health_check().await.unwrap_or(false);
    let lib_count = app.state.list_libraries().unwrap_or_default().len();

    ok_json(StatusResponse {
        peer_id: app.node.peer_id(),
        can_service_healthy: can_healthy,
        library_count: lib_count,
    })
    .into_response()
}

async fn get_peers(State(app): State<Arc<AppState>>) -> impl IntoResponse {
    let peers: Vec<PeerInfo> = vec![PeerInfo {
        peer_id: app.node.peer_id(),
    }];
    ok_json(peers).into_response()
}

async fn create_library(
    State(app): State<Arc<AppState>>,
    Json(req): Json<CreateLibraryRequest>,
) -> impl IntoResponse {
    // Create an iroh document for this library
    let doc_id = match app.node.create_doc().await {
        Ok(id) => Some(id),
        Err(e) => {
            tracing::warn!("Failed to create iroh document for library: {:#}", e);
            None
        }
    };

    let lib = Library {
        id: uuid::Uuid::new_v4().to_string(),
        name: req.name,
        filter: req.filter,
        doc_id,
        is_local: true,
        created_at: chrono::Utc::now().timestamp_millis(),
    };

    if let Err(e) = app.state.save_library(&lib) {
        return err_resp(
            StatusCode::INTERNAL_SERVER_ERROR,
            &format!("save failed: {}", e),
        )
        .into_response();
    }
|
||||||
|
tracing::info!(
|
||||||
|
"Created library '{}' (id={}, doc_id={:?})",
|
||||||
|
lib.name,
|
||||||
|
&lib.id[..8],
|
||||||
|
lib.doc_id.as_deref().map(|d| &d[..12.min(d.len())])
|
||||||
|
);
|
||||||
|
|
||||||
|
ok_json(LibraryResponse {
|
||||||
|
id: lib.id,
|
||||||
|
name: lib.name,
|
||||||
|
filter: lib.filter,
|
||||||
|
doc_id: lib.doc_id,
|
||||||
|
is_local: lib.is_local,
|
||||||
|
created_at: lib.created_at,
|
||||||
|
})
|
||||||
|
.into_response()
|
||||||
|
}
|
||||||
|
|
||||||
|
async fn list_libraries(State(app): State<Arc<AppState>>) -> impl IntoResponse {
|
||||||
|
match app.state.list_libraries() {
|
||||||
|
Ok(libs) => {
|
||||||
|
let responses: Vec<LibraryResponse> = libs
|
||||||
|
.into_iter()
|
||||||
|
.map(|lib| LibraryResponse {
|
||||||
|
id: lib.id,
|
||||||
|
name: lib.name,
|
||||||
|
filter: lib.filter,
|
||||||
|
doc_id: lib.doc_id,
|
||||||
|
is_local: lib.is_local,
|
||||||
|
created_at: lib.created_at,
|
||||||
|
})
|
||||||
|
.collect();
|
||||||
|
ok_json(responses).into_response()
|
||||||
|
}
|
||||||
|
Err(e) => {
|
||||||
|
err_resp(StatusCode::INTERNAL_SERVER_ERROR, &format!("{}", e)).into_response()
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
async fn get_library(
|
||||||
|
State(app): State<Arc<AppState>>,
|
||||||
|
Path(id): Path<String>,
|
||||||
|
) -> impl IntoResponse {
|
||||||
|
match app.state.get_library(&id) {
|
||||||
|
Ok(Some(lib)) => ok_json(LibraryResponse {
|
||||||
|
id: lib.id,
|
||||||
|
name: lib.name,
|
||||||
|
filter: lib.filter,
|
||||||
|
doc_id: lib.doc_id,
|
||||||
|
is_local: lib.is_local,
|
||||||
|
created_at: lib.created_at,
|
||||||
|
})
|
||||||
|
.into_response(),
|
||||||
|
Ok(None) => err_resp(StatusCode::NOT_FOUND, "Library not found").into_response(),
|
||||||
|
Err(e) => {
|
||||||
|
err_resp(StatusCode::INTERNAL_SERVER_ERROR, &format!("{}", e)).into_response()
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
async fn delete_library(
|
||||||
|
State(app): State<Arc<AppState>>,
|
||||||
|
Path(id): Path<String>,
|
||||||
|
) -> impl IntoResponse {
|
||||||
|
match app.state.delete_library(&id) {
|
||||||
|
Ok(()) => ok_json("deleted").into_response(),
|
||||||
|
Err(e) => {
|
||||||
|
err_resp(StatusCode::INTERNAL_SERVER_ERROR, &format!("{}", e)).into_response()
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
async fn create_invite(
|
||||||
|
State(app): State<Arc<AppState>>,
|
||||||
|
Path(id): Path<String>,
|
||||||
|
) -> impl IntoResponse {
|
||||||
|
match app.state.get_library(&id) {
|
||||||
|
Ok(Some(lib)) => {
|
||||||
|
let doc_id = match &lib.doc_id {
|
||||||
|
Some(d) => d,
|
||||||
|
None => {
|
||||||
|
return err_resp(
|
||||||
|
StatusCode::BAD_REQUEST,
|
||||||
|
"Library has no iroh document — cannot create invite",
|
||||||
|
)
|
||||||
|
.into_response()
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
// Generate a real DocTicket via iroh
|
||||||
|
match app.node.share_doc(doc_id).await {
|
||||||
|
Ok(ticket) => {
|
||||||
|
// DocTicket implements Display via iroh's Ticket trait (base32 serialization)
|
||||||
|
let ticket_str = ticket.to_string();
|
||||||
|
|
||||||
|
// Wrap with library metadata so the joiner knows the name and filter
|
||||||
|
let invite_data = serde_json::json!({
|
||||||
|
"ticket": ticket_str,
|
||||||
|
"library_name": lib.name,
|
||||||
|
"filter": lib.filter,
|
||||||
|
});
|
||||||
|
let invite_b64 = base64_encode(
|
||||||
|
&serde_json::to_vec(&invite_data).unwrap(),
|
||||||
|
);
|
||||||
|
|
||||||
|
ok_json(InviteResponse { ticket: invite_b64 }).into_response()
|
||||||
|
}
|
||||||
|
Err(e) => err_resp(
|
||||||
|
StatusCode::INTERNAL_SERVER_ERROR,
|
||||||
|
&format!("Failed to create invite: {}", e),
|
||||||
|
)
|
||||||
|
.into_response(),
|
||||||
|
}
|
||||||
|
}
|
||||||
|
Ok(None) => err_resp(StatusCode::NOT_FOUND, "Library not found").into_response(),
|
||||||
|
Err(e) => {
|
||||||
|
err_resp(StatusCode::INTERNAL_SERVER_ERROR, &format!("{}", e)).into_response()
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
async fn join_library(
|
||||||
|
State(app): State<Arc<AppState>>,
|
||||||
|
Json(req): Json<JoinRequest>,
|
||||||
|
) -> impl IntoResponse {
|
||||||
|
// Decode our envelope
|
||||||
|
let ticket_bytes = match base64_decode(&req.ticket) {
|
||||||
|
Ok(b) => b,
|
||||||
|
Err(_) => {
|
||||||
|
return err_resp(StatusCode::BAD_REQUEST, "Invalid ticket encoding").into_response()
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
let ticket_data: serde_json::Value = match serde_json::from_slice(&ticket_bytes) {
|
||||||
|
Ok(v) => v,
|
||||||
|
Err(_) => {
|
||||||
|
return err_resp(StatusCode::BAD_REQUEST, "Invalid ticket data").into_response()
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
// Extract the real DocTicket string
|
||||||
|
let ticket_str = match ticket_data["ticket"].as_str() {
|
||||||
|
Some(s) => s,
|
||||||
|
None => {
|
||||||
|
return err_resp(StatusCode::BAD_REQUEST, "Missing 'ticket' field in invite")
|
||||||
|
.into_response()
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
// Parse DocTicket from the serialized string
|
||||||
|
let doc_ticket: iroh_docs::DocTicket = match ticket_str.parse() {
|
||||||
|
Ok(t) => t,
|
||||||
|
Err(e) => {
|
||||||
|
return err_resp(
|
||||||
|
StatusCode::BAD_REQUEST,
|
||||||
|
&format!("Invalid DocTicket: {}", e),
|
||||||
|
)
|
||||||
|
.into_response()
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
// Import the document via iroh (starts sync with remote peers)
|
||||||
|
let doc_id_hex = match app.node.import_doc(doc_ticket).await {
|
||||||
|
Ok(id) => id,
|
||||||
|
Err(e) => {
|
||||||
|
return err_resp(
|
||||||
|
StatusCode::INTERNAL_SERVER_ERROR,
|
||||||
|
&format!("Failed to join document: {}", e),
|
||||||
|
)
|
||||||
|
.into_response()
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
let name = ticket_data["library_name"]
|
||||||
|
.as_str()
|
||||||
|
.unwrap_or("remote library")
|
||||||
|
.to_string();
|
||||||
|
|
||||||
|
let filter: LibraryFilter = serde_json::from_value(ticket_data["filter"].clone())
|
||||||
|
.unwrap_or(LibraryFilter {
|
||||||
|
application: None,
|
||||||
|
tags: None,
|
||||||
|
user: None,
|
||||||
|
mime_prefix: None,
|
||||||
|
hashes: None,
|
||||||
|
});
|
||||||
|
|
||||||
|
let lib = Library {
|
||||||
|
id: uuid::Uuid::new_v4().to_string(),
|
||||||
|
name: name.clone(),
|
||||||
|
filter,
|
||||||
|
doc_id: Some(doc_id_hex),
|
||||||
|
is_local: false,
|
||||||
|
created_at: chrono::Utc::now().timestamp_millis(),
|
||||||
|
};
|
||||||
|
|
||||||
|
if let Err(e) = app.state.save_library(&lib) {
|
||||||
|
return err_resp(
|
||||||
|
StatusCode::INTERNAL_SERVER_ERROR,
|
||||||
|
&format!("save failed: {}", e),
|
||||||
|
)
|
||||||
|
.into_response();
|
||||||
|
}
|
||||||
|
|
||||||
|
tracing::info!(
|
||||||
|
"Joined library '{}' (id={}, doc_id={:?})",
|
||||||
|
name,
|
||||||
|
&lib.id[..8],
|
||||||
|
lib.doc_id.as_deref().map(|d| &d[..12.min(d.len())])
|
||||||
|
);
|
||||||
|
|
||||||
|
ok_json(JoinResponse {
|
||||||
|
library_id: lib.id,
|
||||||
|
message: "Joined library successfully".to_string(),
|
||||||
|
})
|
||||||
|
.into_response()
|
||||||
|
}
|
||||||
|
|
||||||
|
// ── Base64 helpers ──
|
||||||
|
|
||||||
|
fn base64_encode(data: &[u8]) -> String {
|
||||||
|
const CHARS: &[u8] = b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
|
||||||
|
let mut result = Vec::new();
|
||||||
|
for chunk in data.chunks(3) {
|
||||||
|
let b0 = chunk[0] as u32;
|
||||||
|
let b1 = if chunk.len() > 1 { chunk[1] as u32 } else { 0 };
|
||||||
|
let b2 = if chunk.len() > 2 { chunk[2] as u32 } else { 0 };
|
||||||
|
let triple = (b0 << 16) | (b1 << 8) | b2;
|
||||||
|
result.push(CHARS[((triple >> 18) & 0x3F) as usize]);
|
||||||
|
result.push(CHARS[((triple >> 12) & 0x3F) as usize]);
|
||||||
|
if chunk.len() > 1 {
|
||||||
|
result.push(CHARS[((triple >> 6) & 0x3F) as usize]);
|
||||||
|
} else {
|
||||||
|
result.push(b'=');
|
||||||
|
}
|
||||||
|
if chunk.len() > 2 {
|
||||||
|
result.push(CHARS[(triple & 0x3F) as usize]);
|
||||||
|
} else {
|
||||||
|
result.push(b'=');
|
||||||
|
}
|
||||||
|
}
|
||||||
|
String::from_utf8(result).unwrap()
|
||||||
|
}
|
||||||
|
|
||||||
|
fn base64_decode(input: &str) -> Result<Vec<u8>, &'static str> {
|
||||||
|
const CHARS: &[u8] = b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
|
||||||
|
let input = input.trim_end_matches('=');
|
||||||
|
let bytes: Vec<u8> = input
|
||||||
|
.bytes()
|
||||||
|
.filter_map(|b| CHARS.iter().position(|&c| c == b).map(|p| p as u8))
|
||||||
|
.collect();
|
||||||
|
let mut buf = Vec::new();
|
||||||
|
for chunk in bytes.chunks(4) {
|
||||||
|
if chunk.len() >= 2 {
|
||||||
|
buf.push((chunk[0] << 2) | (chunk[1] >> 4));
|
||||||
|
}
|
||||||
|
if chunk.len() >= 3 {
|
||||||
|
buf.push((chunk[1] << 4) | (chunk[2] >> 2));
|
||||||
|
}
|
||||||
|
if chunk.len() >= 4 {
|
||||||
|
buf.push((chunk[2] << 6) | chunk[3]);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
Ok(buf)
|
||||||
|
}
|
||||||
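The invite flow above serializes a JSON envelope (`ticket`, `library_name`, `filter`) and encodes it with the handrolled base64 helpers. A minimal standalone round-trip sanity check; the helper bodies are copied from the module above so the sketch compiles on its own, and the `<doc-ticket>` value is a placeholder, not a real iroh ticket:

```rust
const CHARS: &[u8] = b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

// Copy of the module's encoder: 3 input bytes -> 4 output chars, '=' padding.
fn base64_encode(data: &[u8]) -> String {
    let mut result = Vec::new();
    for chunk in data.chunks(3) {
        let b0 = chunk[0] as u32;
        let b1 = if chunk.len() > 1 { chunk[1] as u32 } else { 0 };
        let b2 = if chunk.len() > 2 { chunk[2] as u32 } else { 0 };
        let triple = (b0 << 16) | (b1 << 8) | b2;
        result.push(CHARS[((triple >> 18) & 0x3F) as usize]);
        result.push(CHARS[((triple >> 12) & 0x3F) as usize]);
        if chunk.len() > 1 { result.push(CHARS[((triple >> 6) & 0x3F) as usize]); } else { result.push(b'='); }
        if chunk.len() > 2 { result.push(CHARS[(triple & 0x3F) as usize]); } else { result.push(b'='); }
    }
    String::from_utf8(result).unwrap()
}

// Copy of the module's decoder: strips padding, maps chars back to 6-bit values.
fn base64_decode(input: &str) -> Result<Vec<u8>, &'static str> {
    let input = input.trim_end_matches('=');
    let bytes: Vec<u8> = input
        .bytes()
        .filter_map(|b| CHARS.iter().position(|&c| c == b).map(|p| p as u8))
        .collect();
    let mut buf = Vec::new();
    for chunk in bytes.chunks(4) {
        if chunk.len() >= 2 { buf.push((chunk[0] << 2) | (chunk[1] >> 4)); }
        if chunk.len() >= 3 { buf.push((chunk[1] << 4) | (chunk[2] >> 2)); }
        if chunk.len() >= 4 { buf.push((chunk[2] << 6) | chunk[3]); }
    }
    Ok(buf)
}

fn main() {
    // Round-trip an invite-style JSON envelope through the helpers.
    let envelope = br#"{"ticket":"<doc-ticket>","library_name":"photos"}"#;
    let encoded = base64_encode(envelope);
    let decoded = base64_decode(&encoded).unwrap();
    assert_eq!(decoded, envelope.to_vec());
    println!("{}", encoded);
}
```

The alphabet matches the standard RFC 4648 table, so output is interchangeable with library encoders for valid input; the decoder silently skips unknown characters rather than erroring.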
2377 examples/canfs/Cargo.lock (generated, new file)
File diff suppressed because it is too large.

27 examples/canfs/Cargo.toml (new file)
@@ -0,0 +1,27 @@
[package]
name = "canfs"
version = "0.1.0"
edition = "2021"
publish = false
description = "Mount CAN service assets as a virtual Windows filesystem via WinFSP"

[[bin]]
name = "canfs"
path = "src/main.rs"

[dependencies]
winfsp = "0.12"
widestring = "1"
reqwest = { version = "0.12", features = ["json", "blocking"] }
serde = { version = "1", features = ["derive"] }
serde_json = "1"
chrono = "0.4"
parking_lot = "0.12"
clap = { version = "4", features = ["derive"] }
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["env-filter"] }
anyhow = "1"
ctrlc = "3"

[build-dependencies]
winfsp = { version = "0.12", features = ["delayload"] }
3 examples/canfs/build.rs (new file)
@@ -0,0 +1,3 @@
fn main() {
    winfsp::build::winfsp_link_delayload();
}
4 examples/canfs/run.bat (new file)
@@ -0,0 +1,4 @@
@echo off
set PATH=C:\Program Files (x86)\WinFsp\bin;%PATH%
cd /d "%~dp0"
cargo run -- --mount J:
104 examples/canfs/src/api.rs (new file)
@@ -0,0 +1,104 @@
use anyhow::{Context, Result};
use serde::Deserialize;

/// Mirrors the server's AssetMeta response type.
#[derive(Debug, Clone, Deserialize)]
#[allow(dead_code)]
pub struct AssetMeta {
    pub hash: String,
    pub mime_type: String,
    pub application: Option<String>,
    pub user: Option<String>,
    pub tags: Vec<String>,
    pub description: Option<String>,
    pub human_filename: Option<String>,
    pub human_path: Option<String>,
    pub timestamp: i64,
    pub is_trashed: bool,
    pub is_corrupted: bool,
    #[serde(default)]
    pub size: i64,
}

#[derive(Debug, Deserialize)]
struct ApiResponse<T> {
    #[allow(dead_code)]
    status: String,
    data: T,
}

#[derive(Debug, Deserialize)]
struct ListData {
    items: Vec<AssetMeta>,
    pagination: Pagination,
}

#[derive(Debug, Deserialize)]
struct Pagination {
    #[allow(dead_code)]
    limit: i64,
    #[allow(dead_code)]
    offset: i64,
    total: i64,
}

/// Blocking HTTP client for the CAN service API.
pub struct CanClient {
    client: reqwest::blocking::Client,
    base_url: String,
}

impl CanClient {
    pub fn new(base_url: &str) -> Self {
        Self {
            client: reqwest::blocking::Client::new(),
            base_url: base_url.trim_end_matches('/').to_string(),
        }
    }

    /// Fetch all non-trashed assets by paginating through the list endpoint.
    pub fn list_all(&self) -> Result<Vec<AssetMeta>> {
        let mut all = Vec::new();
        let page_size = 500;
        let mut offset = 0i64;

        loop {
            let url = format!(
                "{}/list?limit={}&offset={}&order=desc",
                self.base_url, page_size, offset
            );
            let resp: ApiResponse<ListData> = self
                .client
                .get(&url)
                .send()
                .context("failed to reach CAN service")?
                .json()
                .context("failed to parse list response")?;

            let count = resp.data.items.len() as i64;
            all.extend(resp.data.items);

            if all.len() as i64 >= resp.data.pagination.total || count < page_size {
                break;
            }
            offset += count;
        }

        // Filter out trashed and corrupted
        all.retain(|a| !a.is_trashed && !a.is_corrupted);
        Ok(all)
    }

    /// Download the raw bytes of an asset by hash.
    pub fn fetch_bytes(&self, hash: &str) -> Result<Vec<u8>> {
        let url = format!("{}/asset/{}", self.base_url, hash);
        let bytes = self
            .client
            .get(&url)
            .send()
            .context("failed to fetch asset")?
            .bytes()
            .context("failed to read asset body")?;
        Ok(bytes.to_vec())
    }
}
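The exit conditions of the pagination loop in `CanClient::list_all` (stop once `total` items have been seen, or on a short page) are the part worth checking. A standalone sketch with the HTTP call swapped for an in-memory page source, so it runs without the CAN service; the function names here are illustrative, not part of the crate:

```rust
// Simulated server endpoint: returns one page plus the total count,
// mirroring the list endpoint's `items` + `pagination.total` shape.
fn fetch_page(all: &[i64], limit: usize, offset: usize) -> (Vec<i64>, usize) {
    let end = (offset + limit).min(all.len());
    (all[offset..end].to_vec(), all.len())
}

// Same loop structure and exit conditions as the reqwest-based list_all.
fn list_all(items: &[i64], page_size: usize) -> Vec<i64> {
    let mut out = Vec::new();
    let mut offset = 0;
    loop {
        let (page, total) = fetch_page(items, page_size, offset);
        let count = page.len();
        out.extend(page);
        // Stop when we've collected `total` items or received a short page;
        // the short-page check also terminates cleanly on an empty store.
        if out.len() >= total || count < page_size {
            break;
        }
        offset += count;
    }
    out
}

fn main() {
    let data: Vec<i64> = (0..1234).collect();
    assert_eq!(list_all(&data, 500), data); // 500 + 500 + 234
    assert_eq!(list_all(&[], 500).len(), 0);
    println!("pagination ok");
}
```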
316 examples/canfs/src/fs.rs (new file)
@@ -0,0 +1,316 @@
use std::os::raw::c_void;
use std::sync::Arc;

use parking_lot::{Mutex, RwLock};
use tracing::{debug, warn};
use widestring::U16CStr;

use winfsp::filesystem::{
    DirBuffer, DirInfo, DirMarker, FileInfo, FileSecurity, FileSystemContext, OpenFileInfo,
    VolumeInfo, WideNameInfo,
};
use winfsp::FspError;

use crate::api::{AssetMeta, CanClient};
use crate::tree::{NodeId, NodeKind, VirtualTree};
use crate::util;

// NTSTATUS constants (raw i32 values to avoid windows crate version conflicts)
const STATUS_OBJECT_NAME_NOT_FOUND: i32 = 0xC0000034_u32 as i32;
const STATUS_NOT_A_DIRECTORY: i32 = 0xC0000103_u32 as i32;
const STATUS_UNEXPECTED_NETWORK_ERROR: i32 = 0xC00000C4_u32 as i32;
const STATUS_INVALID_DEVICE_REQUEST: i32 = 0xC0000010_u32 as i32;

// File attribute constants
const FILE_ATTRIBUTE_DIRECTORY: u32 = 0x10;
const FILE_ATTRIBUTE_READONLY: u32 = 0x01;
const FILE_ATTRIBUTE_ARCHIVE: u32 = 0x20;

fn ntstatus(code: i32) -> FspError {
    FspError::NTSTATUS(code)
}

/// Shared cache state: asset list + virtual tree.
pub struct CacheState {
    pub assets: Vec<AssetMeta>,
    pub tree: VirtualTree,
}

/// The WinFSP filesystem context for CAN service.
pub struct CanFs {
    pub cache: Arc<RwLock<CacheState>>,
    pub client: Arc<CanClient>,
}

/// Per-open-handle context.
pub struct CanFileContext {
    node_id: NodeId,
    /// Lazily fetched file bytes.
    content: Mutex<Option<Vec<u8>>>,
    /// Directory enumeration buffer.
    dir_buffer: DirBuffer,
}

impl FileSystemContext for CanFs {
    type FileContext = CanFileContext;

    fn get_security_by_name(
        &self,
        file_name: &U16CStr,
        _security_descriptor: Option<&mut [c_void]>,
        _resolve_reparse_points: impl FnOnce(&U16CStr) -> Option<FileSecurity>,
    ) -> winfsp::Result<FileSecurity> {
        let path = util::normalize_path(file_name);
        debug!("get_security_by_name: {}", path);

        let cache = self.cache.read();
        let node_id = cache
            .tree
            .lookup(&path)
            .ok_or(ntstatus(STATUS_OBJECT_NAME_NOT_FOUND))?;
        let node = cache.tree.get(node_id);

        let attributes = if node.is_directory() {
            FILE_ATTRIBUTE_DIRECTORY
        } else {
            FILE_ATTRIBUTE_READONLY | FILE_ATTRIBUTE_ARCHIVE
        };

        Ok(FileSecurity {
            reparse: false,
            sz_security_descriptor: 0,
            attributes,
        })
    }

    fn open(
        &self,
        file_name: &U16CStr,
        _create_options: u32,
        _granted_access: u32,
        file_info: &mut OpenFileInfo,
    ) -> winfsp::Result<Self::FileContext> {
        let path = util::normalize_path(file_name);
        debug!("open: {}", path);

        let cache = self.cache.read();
        let node_id = cache
            .tree
            .lookup(&path)
            .ok_or(ntstatus(STATUS_OBJECT_NAME_NOT_FOUND))?;
        let node = cache.tree.get(node_id);

        let fi = file_info.as_mut();
        if node.is_directory() {
            fi.file_attributes = FILE_ATTRIBUTE_DIRECTORY;
            fi.file_size = 0;
            fi.allocation_size = 0;
        } else {
            fi.file_attributes = FILE_ATTRIBUTE_READONLY | FILE_ATTRIBUTE_ARCHIVE;
            if let NodeKind::File { asset_index } = &node.kind {
                let sz = cache.assets[*asset_index].size as u64;
                fi.file_size = sz;
                fi.allocation_size = sz;
            } else {
                fi.file_size = 0;
                fi.allocation_size = 0;
            }
        }

        if let NodeKind::File { asset_index } = &node.kind {
            let ts = util::epoch_ms_to_filetime(cache.assets[*asset_index].timestamp);
            fi.creation_time = ts;
            fi.last_access_time = ts;
            fi.last_write_time = ts;
            fi.change_time = ts;
        } else {
            fi.creation_time = 0;
            fi.last_access_time = 0;
            fi.last_write_time = 0;
            fi.change_time = 0;
        }

        fi.index_number = 0;
        fi.hard_links = 0;
        fi.ea_size = 0;
        fi.reparse_tag = 0;

        Ok(CanFileContext {
            node_id,
            content: Mutex::new(None),
            dir_buffer: DirBuffer::new(),
        })
    }

    fn close(&self, _context: Self::FileContext) {}

    fn get_file_info(
        &self,
        context: &Self::FileContext,
        file_info: &mut FileInfo,
    ) -> winfsp::Result<()> {
        let cache = self.cache.read();
        let node = cache.tree.get(context.node_id);

        if node.is_directory() {
            file_info.file_attributes = FILE_ATTRIBUTE_DIRECTORY;
            file_info.file_size = 0;
            file_info.allocation_size = 0;
            file_info.creation_time = 0;
            file_info.last_access_time = 0;
            file_info.last_write_time = 0;
            file_info.change_time = 0;
        } else {
            file_info.file_attributes = FILE_ATTRIBUTE_READONLY | FILE_ATTRIBUTE_ARCHIVE;

            // Use actual downloaded size if available, otherwise metadata size
            let content = context.content.lock();
            if let Some(ref bytes) = *content {
                let sz = bytes.len() as u64;
                file_info.file_size = sz;
                file_info.allocation_size = sz;
            } else if let NodeKind::File { asset_index } = &node.kind {
                let sz = cache.assets[*asset_index].size as u64;
                file_info.file_size = sz;
                file_info.allocation_size = sz;
            } else {
                file_info.file_size = 0;
                file_info.allocation_size = 0;
            }

            if let NodeKind::File { asset_index } = &node.kind {
                let ts = util::epoch_ms_to_filetime(cache.assets[*asset_index].timestamp);
                file_info.creation_time = ts;
                file_info.last_access_time = ts;
                file_info.last_write_time = ts;
                file_info.change_time = ts;
            }
        }

        file_info.index_number = 0;
        file_info.hard_links = 0;
        file_info.ea_size = 0;
        file_info.reparse_tag = 0;
        Ok(())
    }

    fn read(
        &self,
        context: &Self::FileContext,
        buffer: &mut [u8],
        offset: u64,
    ) -> winfsp::Result<u32> {
        let mut content = context.content.lock();

        if content.is_none() {
            let cache = self.cache.read();
            let node = cache.tree.get(context.node_id);
            if let NodeKind::File { asset_index } = &node.kind {
                let hash = &cache.assets[*asset_index].hash;
                debug!("fetching bytes for {}", hash);
                match self.client.fetch_bytes(hash) {
                    Ok(bytes) => {
                        *content = Some(bytes);
                    }
                    Err(e) => {
                        warn!("failed to fetch asset: {}", e);
                        return Err(ntstatus(STATUS_UNEXPECTED_NETWORK_ERROR));
                    }
                }
            } else {
                return Err(ntstatus(STATUS_INVALID_DEVICE_REQUEST));
            }
        }

        let bytes = content.as_ref().unwrap();
        let offset = offset as usize;
        if offset >= bytes.len() {
            return Ok(0);
        }
        let end = (offset + buffer.len()).min(bytes.len());
        let count = end - offset;
        buffer[..count].copy_from_slice(&bytes[offset..end]);
        Ok(count as u32)
    }

    fn read_directory(
        &self,
        context: &Self::FileContext,
        _pattern: Option<&U16CStr>,
        marker: DirMarker,
        buffer: &mut [u8],
    ) -> winfsp::Result<u32> {
        let cache = self.cache.read();
        let node = cache.tree.get(context.node_id);

        if !node.is_directory() {
            return Err(ntstatus(STATUS_NOT_A_DIRECTORY));
        }

        if let Ok(dir_buffer_lock) = context.dir_buffer.acquire(marker.is_none(), None) {
            // "." entry
            {
                let mut di: DirInfo = DirInfo::new();
                let _ = di.set_name(std::ffi::OsStr::new("."));
                di.file_info_mut().file_attributes = FILE_ATTRIBUTE_DIRECTORY;
                let _ = dir_buffer_lock.write(&mut di);
            }
            // ".." entry
            {
                let mut di: DirInfo = DirInfo::new();
                let _ = di.set_name(std::ffi::OsStr::new(".."));
                di.file_info_mut().file_attributes = FILE_ATTRIBUTE_DIRECTORY;
                let _ = dir_buffer_lock.write(&mut di);
            }

            for &child_id in &node.children {
                let child = cache.tree.get(child_id);
                let mut di: DirInfo = DirInfo::new();

                if di.set_name(std::ffi::OsStr::new(&child.name)).is_err() {
                    continue;
                }

                let fi = di.file_info_mut();
                if child.is_directory() {
                    fi.file_attributes = FILE_ATTRIBUTE_DIRECTORY;
                    fi.file_size = 0;
                    fi.allocation_size = 0;
                } else {
                    fi.file_attributes = FILE_ATTRIBUTE_READONLY | FILE_ATTRIBUTE_ARCHIVE;
                    if let NodeKind::File { asset_index } = &child.kind {
                        let sz = cache.assets[*asset_index].size as u64;
                        fi.file_size = sz;
                        fi.allocation_size = sz;
                    } else {
                        fi.file_size = 0;
                        fi.allocation_size = 0;
                    }
                }

                if let NodeKind::File { asset_index } = &child.kind {
                    let ts = util::epoch_ms_to_filetime(cache.assets[*asset_index].timestamp);
                    fi.creation_time = ts;
                    fi.last_access_time = ts;
                    fi.last_write_time = ts;
                    fi.change_time = ts;
                }

                fi.index_number = 0;
                fi.hard_links = 0;
                fi.ea_size = 0;
                fi.reparse_tag = 0;

                let _ = dir_buffer_lock.write(&mut di);
            }
        }

        Ok(context.dir_buffer.read(marker, buffer))
    }

    fn get_volume_info(&self, out_volume_info: &mut VolumeInfo) -> winfsp::Result<()> {
        out_volume_info.total_size = 1024 * 1024 * 1024; // 1 GB
        out_volume_info.free_size = 0;
        Ok(())
    }
}
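The offset/length clamping at the end of `CanFs::read` reduces to a small pure function, which makes the EOF and partial-read edge cases easy to verify in isolation. A sketch extracted from the handler above (the free-function form is illustrative, not part of the crate):

```rust
// Copy a window of `bytes` starting at `offset` into `buffer`, clamping to
// the end of the data; returns the number of bytes copied. Mirrors the
// logic after the lazy fetch in CanFs::read.
fn read_at(bytes: &[u8], buffer: &mut [u8], offset: u64) -> u32 {
    let offset = offset as usize;
    if offset >= bytes.len() {
        return 0; // read at or past EOF yields zero bytes
    }
    let end = (offset + buffer.len()).min(bytes.len());
    let count = end - offset;
    buffer[..count].copy_from_slice(&bytes[offset..end]);
    count as u32
}

fn main() {
    let data = b"hello world";
    let mut buf = [0u8; 4];
    assert_eq!(read_at(data, &mut buf, 0), 4);   // full buffer
    assert_eq!(&buf, b"hell");
    assert_eq!(read_at(data, &mut buf, 8), 3);   // partial read at tail
    assert_eq!(&buf[..3], b"rld");
    assert_eq!(read_at(data, &mut buf, 100), 0); // past EOF
    println!("read clamping ok");
}
```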
139
examples/canfs/src/main.rs
Normal file
139
examples/canfs/src/main.rs
Normal file
@ -0,0 +1,139 @@
|
|||||||
|
mod api;
|
||||||
|
mod fs;
|
||||||
|
mod tree;
|
||||||
|
mod util;
|
||||||
|
|
||||||
|
use std::sync::atomic::{AtomicBool, Ordering};
|
||||||
|
use std::sync::Arc;
|
||||||
|
use std::time::Duration;
|
||||||
|
|
||||||
|
use clap::Parser;
|
||||||
|
use parking_lot::RwLock;
|
||||||
|
use tracing::{error, info};
|
||||||
|
use winfsp::host::{FileSystemHost, FileSystemParams, VolumeParams};
|
||||||
|
use winfsp::winfsp_init_or_die;
|
||||||
|
|
||||||
|
use crate::api::CanClient;
|
||||||
|
use crate::fs::{CacheState, CanFs};
|
||||||
|
use crate::tree::VirtualTree;
|
||||||
|
|
||||||
|
#[derive(Parser)]
|
||||||
|
#[command(name = "canfs", about = "Mount CAN service assets as a virtual drive")]
|
||||||
|
struct Args {
|
||||||
|
/// Mount point: a drive letter like "X:" or a directory path.
|
||||||
|
#[arg(short, long, default_value = "X:")]
|
||||||
|
mount: String,
|
||||||
|
|
||||||
|
/// CAN service base URL.
|
||||||
|
#[arg(long, default_value = "http://127.0.0.1:3210/api/v1/can/0")]
|
||||||
|
can_url: String,
|
||||||
|
|
||||||
|
/// Cache refresh interval in seconds.
|
||||||
|
#[arg(long, default_value = "60")]
|
||||||
|
refresh_secs: u64,
|
||||||
|
}
|
||||||
|
|
||||||
|
fn main() {
|
||||||
|
tracing_subscriber::fmt()
|
||||||
|
.with_env_filter(
|
||||||
|
tracing_subscriber::EnvFilter::try_from_default_env()
|
||||||
|
.unwrap_or_else(|_| tracing_subscriber::EnvFilter::new("info")),
|
||||||
|
)
|
||||||
|
.init();
|
||||||
|
|
||||||
|
let _init = winfsp_init_or_die();
|
||||||
|
let args = Args::parse();
|
||||||
|
|
||||||
|
info!("connecting to CAN service at {}", args.can_url);
|
||||||
|
let client = Arc::new(CanClient::new(&args.can_url));
|
||||||
|
|
||||||
|
let assets = match client.list_all() {
|
||||||
|
Ok(a) => a,
|
||||||
|
Err(e) => {
|
||||||
|
error!("initial fetch failed — is the CAN service running? {}", e);
|
||||||
|
std::process::exit(1);
|
||||||
|
}
|
||||||
|
};
|
||||||
|
info!("loaded {} assets", assets.len());
|
||||||
|
|
||||||
|
let tree = VirtualTree::build(&assets);
|
||||||
|
let cache = Arc::new(RwLock::new(CacheState { assets, tree }));
|
||||||
|
|
||||||
|
// Background refresh thread
|
||||||
|
{
|
||||||
|
let cache = Arc::clone(&cache);
|
||||||
|
let client = Arc::clone(&client);
|
||||||
|
let interval = Duration::from_secs(args.refresh_secs);
|
||||||
|
        std::thread::spawn(move || loop {
            std::thread::sleep(interval);
            match client.list_all() {
                Ok(assets) => {
                    let tree = VirtualTree::build(&assets);
                    let count = assets.len();
                    *cache.write() = CacheState { assets, tree };
                    info!("cache refreshed: {} assets", count);
                }
                Err(e) => {
                    error!("cache refresh failed: {}", e);
                }
            }
        });
    }

    let canfs = CanFs {
        cache: Arc::clone(&cache),
        client,
    };

    let mut volume_params = VolumeParams::new();
    volume_params
        .filesystem_name("CanFS")
        .sector_size(512)
        .sectors_per_allocation_unit(1)
        .file_info_timeout(1000)
        .case_sensitive_search(false)
        .case_preserved_names(true)
        .read_only_volume(true)
        .unicode_on_disk(true)
        .persistent_acls(false);

    let params = FileSystemParams::default_params(volume_params);

    let mut host = match FileSystemHost::new_with_options(params, canfs) {
        Ok(h) => h,
        Err(e) => {
            error!("failed to create filesystem host: {:?}", e);
            std::process::exit(1);
        }
    };

    info!("mounting on {}", args.mount);
    if let Err(e) = host.mount(std::ffi::OsStr::new(&args.mount)) {
        error!("failed to mount: {:?}", e);
        std::process::exit(1);
    }
    if let Err(e) = host.start() {
        error!("failed to start: {:?}", e);
        std::process::exit(1);
    }

    info!("CanFS mounted on {} — press Ctrl+C to unmount", args.mount);

    // Wait for Ctrl+C
    let running = Arc::new(AtomicBool::new(true));
    {
        let running = Arc::clone(&running);
        ctrlc::set_handler(move || {
            info!("shutting down...");
            running.store(false, Ordering::SeqCst);
        })
        .expect("failed to set Ctrl+C handler");
    }

    while running.load(Ordering::SeqCst) {
        std::thread::sleep(Duration::from_millis(100));
    }

    host.stop();
    info!("unmounted");
}
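The shutdown tail of `main` above polls an `AtomicBool` that the Ctrl+C handler flips. A minimal standalone sketch of that pattern (a spawned thread stands in for the `ctrlc` handler and the WinFSP host, so it runs without either dependency):

```rust
use std::sync::{
    atomic::{AtomicBool, Ordering},
    Arc,
};
use std::thread;
use std::time::Duration;

/// Poll the shared flag until it goes false, then report shutdown.
/// Same loop shape as in main.rs: cheap, and never blocks forever.
fn run_until_stopped(running: Arc<AtomicBool>) -> &'static str {
    while running.load(Ordering::SeqCst) {
        thread::sleep(Duration::from_millis(10));
    }
    "unmounted"
}

fn main() {
    let running = Arc::new(AtomicBool::new(true));

    // Stand-in for the Ctrl+C handler: flips the flag after a short delay.
    let stopper = {
        let running = Arc::clone(&running);
        thread::spawn(move || {
            thread::sleep(Duration::from_millis(50));
            running.store(false, Ordering::SeqCst);
        })
    };

    let msg = run_until_stopped(Arc::clone(&running));
    stopper.join().unwrap();
    println!("{}", msg);
}
```

The polling loop trades a few wakeups per second for not having to coordinate a blocking wait with the signal handler, which keeps the handler itself trivial.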
259
examples/canfs/src/tree.rs
Normal file
@ -0,0 +1,259 @@
use std::collections::{HashMap, HashSet};

use crate::api::AssetMeta;
use crate::util;
use chrono::DateTime;

/// Unique identifier for a node in the virtual tree.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub struct NodeId(pub usize);

/// A node is either a directory or a file reference.
#[derive(Debug, Clone)]
pub enum NodeKind {
    Directory,
    /// Points to an index in the flat asset list.
    File { asset_index: usize },
}

/// A node in the virtual directory tree.
#[derive(Debug, Clone)]
pub struct VNode {
    pub name: String,
    pub kind: NodeKind,
    pub children: Vec<NodeId>,
    #[allow(dead_code)]
    pub parent: Option<NodeId>,
}

impl VNode {
    pub fn is_directory(&self) -> bool {
        matches!(self.kind, NodeKind::Directory)
    }
}

/// The complete virtual directory tree built from a list of assets.
pub struct VirtualTree {
    nodes: Vec<VNode>,
    /// Normalized path -> NodeId lookup.
    path_index: HashMap<String, NodeId>,
}

impl VirtualTree {
    /// Build the virtual tree from a flat list of assets.
    pub fn build(assets: &[AssetMeta]) -> Self {
        let mut tree = TreeBuilder::new();

        // Create top-level directories
        let root = tree.root();
        let can_dir = tree.add_dir("CAN", root);
        let app_dir = tree.add_dir("APPLICATION", root);
        let dates_dir = tree.add_dir("DATES", root);
        let tags_dir = tree.add_dir("TAGS", root);

        for (i, asset) in assets.iter().enumerate() {
            let ext = util::mime_to_ext(&asset.mime_type);
            let hash8 = &asset.hash[..asset.hash.len().min(8)];

            // 1) CAN/ — always: {timestamp}_{hash8}.ext
            let can_name = format!("{}_{}.{}", asset.timestamp, hash8, ext);
            let can_name = util::sanitize_filename(&can_name);
            tree.add_file(&can_name, can_dir, i);

            // Display name for other folders: human_filename if available, else hash8.ext
            let display_name = if let Some(ref hf) = asset.human_filename {
                util::sanitize_filename(hf)
            } else {
                format!("{}.{}", hash8, ext)
            };

            // 2) APPLICATION/{app}/ — if application is set
            if let Some(ref app) = asset.application {
                let app_name = util::sanitize_filename(app);
                let app_sub = tree.ensure_dir(&app_name, app_dir);
                tree.add_file(&display_name, app_sub, i);
            }

            // 3) DATES/{year}/{month:02}/
            if let Some(dt) = DateTime::from_timestamp_millis(asset.timestamp) {
                let year_str = dt.format("%Y").to_string();
                let month_str = dt.format("%m").to_string();
                let year_dir = tree.ensure_dir(&year_str, dates_dir);
                let month_dir = tree.ensure_dir(&month_str, year_dir);
                tree.add_file(&display_name, month_dir, i);
            }

            // 4) TAGS/{tag}/
            for tag in &asset.tags {
                let tag_name = util::sanitize_filename(tag);
                if tag_name.is_empty() {
                    continue;
                }
                let tag_sub = tree.ensure_dir(&tag_name, tags_dir);
                tree.add_file(&display_name, tag_sub, i);
            }
        }

        // Sort children and build path index
        tree.finalize()
    }

    /// Look up a node by its normalized path (e.g., `\can\file.txt`).
    pub fn lookup(&self, path: &str) -> Option<NodeId> {
        let normalized = path.to_lowercase().replace('/', "\\");
        let normalized = if normalized.len() > 1 && normalized.ends_with('\\') {
            &normalized[..normalized.len() - 1]
        } else {
            &normalized
        };
        self.path_index.get(normalized).copied()
    }

    /// Get a node by ID.
    pub fn get(&self, id: NodeId) -> &VNode {
        &self.nodes[id.0]
    }

    /// Get the root node ID.
    #[allow(dead_code)]
    pub fn root(&self) -> NodeId {
        NodeId(0)
    }
}

/// Builder helper for constructing the virtual tree.
struct TreeBuilder {
    nodes: Vec<VNode>,
    /// Track names used per directory to resolve collisions.
    dir_names: HashMap<usize, HashSet<String>>,
    /// Cache for ensure_dir: (parent_id, name) -> NodeId
    dir_cache: HashMap<(usize, String), NodeId>,
}

impl TreeBuilder {
    fn new() -> Self {
        let root = VNode {
            name: String::new(),
            kind: NodeKind::Directory,
            children: Vec::new(),
            parent: None,
        };
        let mut dir_names = HashMap::new();
        dir_names.insert(0, HashSet::new());
        Self {
            nodes: vec![root],
            dir_names,
            dir_cache: HashMap::new(),
        }
    }

    fn root(&self) -> NodeId {
        NodeId(0)
    }

    /// Add a directory as a child of `parent`. Returns its NodeId.
    fn add_dir(&mut self, name: &str, parent: NodeId) -> NodeId {
        let id = NodeId(self.nodes.len());
        self.nodes.push(VNode {
            name: name.to_string(),
            kind: NodeKind::Directory,
            children: Vec::new(),
            parent: Some(parent),
        });
        self.nodes[parent.0].children.push(id);
        self.dir_names.insert(id.0, HashSet::new());
        self.dir_names
            .entry(parent.0)
            .or_default()
            .insert(name.to_lowercase());
        self.dir_cache
            .insert((parent.0, name.to_lowercase()), id);
        id
    }

    /// Get or create a subdirectory by name under `parent`.
    fn ensure_dir(&mut self, name: &str, parent: NodeId) -> NodeId {
        let key = (parent.0, name.to_lowercase());
        if let Some(&id) = self.dir_cache.get(&key) {
            return id;
        }
        self.add_dir(name, parent)
    }

    /// Add a file node as a child of `parent`, deduplicating names.
    fn add_file(&mut self, name: &str, parent: NodeId, asset_index: usize) {
        let used = self.dir_names.entry(parent.0).or_default();
        let lower = name.to_lowercase();

        let final_name = if !used.contains(&lower) {
            used.insert(lower);
            name.to_string()
        } else {
            // Deduplicate: try _2, _3, etc.
            let (stem, ext) = if let Some(dot_pos) = name.rfind('.') {
                (&name[..dot_pos], &name[dot_pos..])
            } else {
                (name, "")
            };
            let mut n = 2;
            loop {
                let candidate = format!("{}_{}{}", stem, n, ext);
                let cand_lower = candidate.to_lowercase();
                if !used.contains(&cand_lower) {
                    used.insert(cand_lower);
                    break candidate;
                }
                n += 1;
            }
        };

        let id = NodeId(self.nodes.len());
        self.nodes.push(VNode {
            name: final_name,
            kind: NodeKind::File { asset_index },
            children: Vec::new(),
            parent: Some(parent),
        });
        self.nodes[parent.0].children.push(id);
    }

    /// Sort children and build the path index.
    fn finalize(mut self) -> VirtualTree {
        // Sort children by name (case-insensitive)
        let names: Vec<String> = self.nodes.iter().map(|n| n.name.to_lowercase()).collect();
        for node in &mut self.nodes {
            node.children.sort_by(|a, b| names[a.0].cmp(&names[b.0]));
        }

        // Build path index by walking the tree
        let mut path_index = HashMap::new();
        path_index.insert("\\".to_string(), NodeId(0));

        fn walk(
            nodes: &[VNode],
            id: NodeId,
            prefix: &str,
            index: &mut HashMap<String, NodeId>,
        ) {
            for &child_id in &nodes[id.0].children {
                let child = &nodes[child_id.0];
                let path = if prefix == "\\" {
                    format!("\\{}", child.name.to_lowercase())
                } else {
                    format!("{}\\{}", prefix, child.name.to_lowercase())
                };
                index.insert(path.clone(), child_id);
                if child.is_directory() {
                    walk(nodes, child_id, &path, index);
                }
            }
        }

        walk(&self.nodes, NodeId(0), "\\", &mut path_index);

        VirtualTree {
            nodes: self.nodes,
            path_index,
        }
    }
}
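The collision rule in `TreeBuilder::add_file` (first occupant keeps the name; later ones get `_2`, `_3`, … inserted before the final extension, compared case-insensitively) can be exercised in isolation. A standalone sketch of just that rule, with the per-directory `used` set passed in explicitly; `dedup_name` is a hypothetical helper written for this example, not a function in the crate:

```rust
use std::collections::HashSet;

/// Mirror of the dedup rule in TreeBuilder::add_file: case-insensitive
/// uniqueness, with the numeric suffix placed before the extension.
fn dedup_name(name: &str, used: &mut HashSet<String>) -> String {
    // HashSet::insert returns true when the key was not already present.
    if used.insert(name.to_lowercase()) {
        return name.to_string();
    }
    let (stem, ext) = match name.rfind('.') {
        Some(dot) => (&name[..dot], &name[dot..]),
        None => (name, ""),
    };
    let mut n = 2;
    loop {
        let candidate = format!("{}_{}{}", stem, n, ext);
        if used.insert(candidate.to_lowercase()) {
            return candidate;
        }
        n += 1;
    }
}

fn main() {
    let mut used = HashSet::new();
    assert_eq!(dedup_name("photo.jpg", &mut used), "photo.jpg");
    assert_eq!(dedup_name("photo.jpg", &mut used), "photo_2.jpg");
    // Case-insensitive: "PHOTO.jpg" collides with both entries above.
    assert_eq!(dedup_name("PHOTO.jpg", &mut used), "PHOTO_3.jpg");
    assert_eq!(dedup_name("notes", &mut used), "notes");
}
```

Because suffixed candidates are also recorded in `used`, a later literal `photo_2.jpg` cannot silently collide with a generated one.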
76
examples/canfs/src/util.rs
Normal file
@ -0,0 +1,76 @@
use widestring::U16CStr;

/// Convert a MIME type string to a file extension (without dot).
pub fn mime_to_ext(mime: &str) -> &'static str {
    // Common overrides for types where mime_guess gives odd results
    match mime {
        "text/plain" => "txt",
        "text/html" => "html",
        "text/css" => "css",
        "text/javascript" | "application/javascript" => "js",
        "application/json" => "json",
        "application/pdf" => "pdf",
        "application/zip" => "zip",
        "application/gzip" => "gz",
        "image/jpeg" => "jpg",
        "image/png" => "png",
        "image/gif" => "gif",
        "image/webp" => "webp",
        "image/svg+xml" => "svg",
        "audio/mpeg" => "mp3",
        "audio/ogg" => "ogg",
        "video/mp4" => "mp4",
        "video/webm" => "webm",
        "application/octet-stream" => "bin",
        other => {
            // Try to extract from subtype: "image/tiff" -> "tiff"
            if let Some((_main, sub)) = other.split_once('/') {
                let sub = sub.split('+').next().unwrap_or(sub);
                let sub = sub.split('.').next_back().unwrap_or(sub);
                // Leak a static string for the extension - acceptable for a small set
                // In practice we'll hit the match arms above for common types
                Box::leak(sub.to_string().into_boxed_str())
            } else {
                "bin"
            }
        }
    }
}

/// Convert Unix epoch milliseconds to Windows FILETIME (100ns intervals since 1601-01-01).
pub fn epoch_ms_to_filetime(ts_ms: i64) -> u64 {
    // Windows epoch is 1601-01-01, Unix epoch is 1970-01-01
    // Difference: 11644473600 seconds
    const EPOCH_DIFF_SECS: i64 = 11_644_473_600;
    const TICKS_PER_SEC: i64 = 10_000_000;
    const TICKS_PER_MS: i64 = 10_000;

    let secs = ts_ms / 1000;
    let ms_remainder = ts_ms % 1000;

    ((secs + EPOCH_DIFF_SECS) * TICKS_PER_SEC + ms_remainder * TICKS_PER_MS) as u64
}

/// Normalize a WinFSP U16CStr path to a lowercase String with backslash separators.
/// Strips trailing backslash (except for root "\").
pub fn normalize_path(path: &U16CStr) -> String {
    let s = path.to_string_lossy();
    let s = s.to_lowercase();
    let s = s.replace('/', "\\");
    if s.len() > 1 && s.ends_with('\\') {
        s[..s.len() - 1].to_string()
    } else {
        s
    }
}

/// Sanitize a string for use as a filename: replace invalid chars with underscore.
pub fn sanitize_filename(s: &str) -> String {
    s.chars()
        .map(|c| match c {
            '<' | '>' | ':' | '"' | '/' | '\\' | '|' | '?' | '*' => '_',
            c if c.is_control() => '_',
            _ => c,
        })
        .collect()
}
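As a sanity check on the two pure helpers above: the FILETIME value for the Unix epoch itself is the well-known constant 116444736000000000 (11,644,473,600 s × 10,000,000 ticks/s). A minimal sketch that reproduces both functions so it runs standalone, with worked values:

```rust
/// Copy of util::epoch_ms_to_filetime for a standalone check.
pub fn epoch_ms_to_filetime(ts_ms: i64) -> u64 {
    const EPOCH_DIFF_SECS: i64 = 11_644_473_600;
    const TICKS_PER_SEC: i64 = 10_000_000;
    const TICKS_PER_MS: i64 = 10_000;
    let secs = ts_ms / 1000;
    let ms_remainder = ts_ms % 1000;
    ((secs + EPOCH_DIFF_SECS) * TICKS_PER_SEC + ms_remainder * TICKS_PER_MS) as u64
}

/// Copy of util::sanitize_filename for a standalone check.
pub fn sanitize_filename(s: &str) -> String {
    s.chars()
        .map(|c| match c {
            '<' | '>' | ':' | '"' | '/' | '\\' | '|' | '?' | '*' => '_',
            c if c.is_control() => '_',
            _ => c,
        })
        .collect()
}

fn main() {
    // 1970-01-01T00:00:00Z expressed in FILETIME ticks.
    assert_eq!(epoch_ms_to_filetime(0), 116_444_736_000_000_000);
    // 1.5 s past the epoch: one extra second plus 500 ms worth of ticks.
    assert_eq!(epoch_ms_to_filetime(1_500), 116_444_736_015_000_000);
    // Reserved characters become underscores, one per character.
    assert_eq!(sanitize_filename("a/b\\c:d"), "a_b_c_d");
}
```

Note the millisecond remainder is handled separately so sub-second precision survives the conversion instead of being truncated by the integer division.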
1955
examples/filemanager/Cargo.lock
generated
Normal file
File diff suppressed because it is too large
20
examples/filemanager/Cargo.toml
Normal file
@ -0,0 +1,20 @@
[package]
name = "filemanager"
version = "0.1.0"
edition = "2021"
publish = false
description = "Web-based file manager for CAN service assets"

[[bin]]
name = "filemanager"
path = "src/main.rs"

[dependencies]
axum = "0.8"
tokio = { version = "1", features = ["full"] }
reqwest = { version = "0.12", features = ["json"] }
serde = { version = "1", features = ["derive"] }
serde_json = "1"
open = "5"
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["env-filter"] }
992
examples/filemanager/src/html.rs
Normal file
@ -0,0 +1,992 @@
pub const INDEX_HTML: &str = r##"<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>CAN File Manager</title>
<style>
:root {
  --bg: #1a1a1e;
  --bg2: #222228;
  --bg3: #2a2a32;
  --bg-hover: #32323c;
  --border: #3a3a44;
  --text: #e0e0e6;
  --text2: #9898a4;
  --accent: #6c8cff;
  --accent-dim: #4a6ad0;
  --folder: #f0c040;
  --mono: 'SF Mono', 'Cascadia Code', 'Consolas', monospace;
  --sans: -apple-system, 'Segoe UI', system-ui, sans-serif;
  --radius: 6px;
  --transition: 0.15s ease;
}
* { margin: 0; padding: 0; box-sizing: border-box; }
body {
  font-family: var(--sans);
  background: var(--bg);
  color: var(--text);
  height: 100vh;
  display: flex;
  flex-direction: column;
  overflow: hidden;
}

/* ── Toolbar ── */
.toolbar {
  display: flex;
  align-items: center;
  gap: 8px;
  padding: 8px 12px;
  background: var(--bg2);
  border-bottom: 1px solid var(--border);
  flex-shrink: 0;
}
.toolbar .logo {
  font-weight: 700;
  font-size: 14px;
  color: var(--accent);
  white-space: nowrap;
  margin-right: 8px;
}
.search-box {
  flex: 1;
  max-width: 500px;
  position: relative;
}
.search-box input {
  width: 100%;
  padding: 7px 12px 7px 32px;
  background: var(--bg3);
  border: 1px solid var(--border);
  border-radius: var(--radius);
  color: var(--text);
  font-size: 13px;
  outline: none;
  transition: border var(--transition);
}
.search-box input:focus { border-color: var(--accent); }
.search-box .icon {
  position: absolute;
  left: 10px;
  top: 50%;
  transform: translateY(-50%);
  color: var(--text2);
  font-size: 13px;
  pointer-events: none;
}
.toolbar-actions {
  display: flex;
  gap: 4px;
  margin-left: auto;
}
.tb-btn {
  background: var(--bg3);
  border: 1px solid var(--border);
  color: var(--text2);
  padding: 6px 10px;
  border-radius: var(--radius);
  cursor: pointer;
  font-size: 13px;
  transition: all var(--transition);
}
.tb-btn:hover { background: var(--bg-hover); color: var(--text); }
.tb-btn.active { color: var(--accent); border-color: var(--accent-dim); }

/* ── Filter bar ── */
.filter-bar {
  display: none;
  gap: 8px;
  padding: 8px 12px;
  background: var(--bg2);
  border-bottom: 1px solid var(--border);
  flex-wrap: wrap;
  align-items: center;
  flex-shrink: 0;
}
.filter-bar.show { display: flex; }
.filter-bar label { font-size: 12px; color: var(--text2); }
.filter-bar select, .filter-bar input[type="date"] {
  padding: 4px 8px;
  background: var(--bg3);
  border: 1px solid var(--border);
  border-radius: var(--radius);
  color: var(--text);
  font-size: 12px;
  outline: none;
}
.filter-bar select:focus, .filter-bar input[type="date"]:focus {
  border-color: var(--accent);
}
.filter-clear {
  background: none;
  border: none;
  color: var(--accent);
  cursor: pointer;
  font-size: 12px;
  padding: 4px 8px;
}

/* ── Main layout ── */
.main {
  display: flex;
  flex: 1;
  overflow: hidden;
}

/* ── Sidebar ── */
.sidebar {
  width: 220px;
  min-width: 220px;
  background: var(--bg2);
  border-right: 1px solid var(--border);
  overflow-y: auto;
  padding: 8px 0;
  flex-shrink: 0;
}
.tree-item {
  display: flex;
  align-items: center;
  padding: 4px 8px 4px calc(8px + var(--depth, 0) * 16px);
  cursor: pointer;
  font-size: 13px;
  color: var(--text2);
  transition: all var(--transition);
  user-select: none;
  white-space: nowrap;
  overflow: hidden;
  text-overflow: ellipsis;
}
.tree-item:hover { background: var(--bg-hover); color: var(--text); }
.tree-item.active { background: var(--bg3); color: var(--accent); }
.tree-item .arrow {
  width: 16px;
  text-align: center;
  font-size: 10px;
  flex-shrink: 0;
  transition: transform var(--transition);
}
.tree-item .arrow.open { transform: rotate(90deg); }
.tree-item .folder-icon { margin-right: 6px; flex-shrink: 0; }
.tree-children { display: none; }
.tree-children.open { display: block; }

/* ── Content area ── */
.content {
  flex: 1;
  display: flex;
  flex-direction: column;
  overflow: hidden;
}
.breadcrumb {
  display: flex;
  align-items: center;
  gap: 4px;
  padding: 8px 16px;
  font-size: 13px;
  color: var(--text2);
  border-bottom: 1px solid var(--border);
  flex-shrink: 0;
  flex-wrap: wrap;
}
.breadcrumb span { cursor: pointer; transition: color var(--transition); }
.breadcrumb span:hover { color: var(--accent); }
.breadcrumb .sep { color: var(--border); cursor: default; }
.breadcrumb .sep:hover { color: var(--border); }
.breadcrumb .current { color: var(--text); cursor: default; }
.breadcrumb .current:hover { color: var(--text); }

.file-area {
  flex: 1;
  overflow-y: auto;
  padding: 12px 16px;
}

/* ── Status ── */
.status-bar {
  padding: 4px 16px;
  font-size: 12px;
  color: var(--text2);
  border-top: 1px solid var(--border);
  background: var(--bg2);
  flex-shrink: 0;
}

/* ── Grid view ── */
.file-grid {
  display: grid;
  grid-template-columns: repeat(auto-fill, minmax(160px, 1fr));
  gap: 12px;
}
.file-card {
  background: var(--bg2);
  border: 1px solid var(--border);
  border-radius: var(--radius);
  padding: 8px;
  cursor: pointer;
  transition: all var(--transition);
  overflow: hidden;
}
.file-card:hover { border-color: var(--accent-dim); transform: translateY(-1px); }
.file-card .thumb {
  width: 100%;
  aspect-ratio: 1;
  background: var(--bg3);
  border-radius: 4px;
  display: flex;
  align-items: center;
  justify-content: center;
  margin-bottom: 8px;
  overflow: hidden;
}
.file-card .thumb img {
  width: 100%;
  height: 100%;
  object-fit: cover;
  border-radius: 4px;
}
.file-card .thumb .file-icon {
  font-size: 36px;
  opacity: 0.4;
}
.file-card .name {
  font-size: 12px;
  font-weight: 500;
  white-space: nowrap;
  overflow: hidden;
  text-overflow: ellipsis;
  margin-bottom: 2px;
}
.file-card .meta {
  font-size: 11px;
  color: var(--text2);
  white-space: nowrap;
  overflow: hidden;
  text-overflow: ellipsis;
}

/* folder cards */
.folder-card {
  background: var(--bg2);
  border: 1px solid var(--border);
  border-radius: var(--radius);
  padding: 12px;
  cursor: pointer;
  transition: all var(--transition);
  display: flex;
  align-items: center;
  gap: 10px;
}
.folder-card:hover { border-color: var(--folder); }
.folder-card .folder-icon { font-size: 28px; }
.folder-card .folder-name { font-size: 13px; font-weight: 500; }
.folder-card .folder-count { font-size: 11px; color: var(--text2); }

/* ── List view ── */
.file-list { width: 100%; }
.file-list-header, .file-list-row {
  display: grid;
  grid-template-columns: 32px 1fr 90px 120px 140px;
  gap: 8px;
  align-items: center;
  padding: 6px 8px;
  font-size: 12px;
}
.file-list-header {
  color: var(--text2);
  font-weight: 600;
  border-bottom: 1px solid var(--border);
  position: sticky;
  top: 0;
  background: var(--bg);
  z-index: 1;
}
.file-list-row {
  cursor: pointer;
  border-radius: var(--radius);
  transition: background var(--transition);
}
.file-list-row:hover { background: var(--bg-hover); }
.file-list-row .list-icon { text-align: center; font-size: 16px; }
.file-list-row .list-name {
  white-space: nowrap;
  overflow: hidden;
  text-overflow: ellipsis;
}
.file-list-row .list-size { color: var(--text2); text-align: right; }
.file-list-row .list-type {
  color: var(--text2);
  white-space: nowrap;
  overflow: hidden;
  text-overflow: ellipsis;
}
.file-list-row .list-date { color: var(--text2); }

/* ── Detail modal ── */
.modal-overlay {
  display: none;
  position: fixed;
  inset: 0;
  background: rgba(0,0,0,0.7);
  z-index: 100;
  align-items: center;
  justify-content: center;
}
.modal-overlay.show { display: flex; }
.modal {
  background: var(--bg2);
  border: 1px solid var(--border);
  border-radius: 8px;
  max-width: 600px;
  width: 90%;
  max-height: 85vh;
  overflow-y: auto;
  padding: 20px;
}
.modal .close-btn {
  float: right;
  background: none;
  border: none;
  color: var(--text2);
  font-size: 20px;
  cursor: pointer;
}
.modal .close-btn:hover { color: var(--text); }
.modal .preview {
  width: 100%;
  max-height: 300px;
  object-fit: contain;
  border-radius: var(--radius);
  background: var(--bg3);
  margin: 12px 0;
}
.modal h3 { font-size: 16px; margin-bottom: 12px; }
.modal .detail-row {
  display: flex;
  padding: 6px 0;
  font-size: 13px;
  border-bottom: 1px solid var(--border);
}
.modal .detail-label {
  width: 100px;
  color: var(--text2);
  flex-shrink: 0;
}
.modal .detail-value {
  word-break: break-all;
  flex: 1;
}
.modal .tag-pill {
  display: inline-block;
  background: var(--accent-dim);
  color: #fff;
  padding: 2px 8px;
  border-radius: 10px;
  font-size: 11px;
  margin: 2px;
}
.modal .actions {
  display: flex;
  gap: 8px;
  margin-top: 16px;
}
.modal .actions a, .modal .actions button {
  padding: 8px 16px;
  border-radius: var(--radius);
  font-size: 13px;
  text-decoration: none;
  cursor: pointer;
  border: 1px solid var(--border);
  transition: all var(--transition);
}
.modal .actions .primary {
  background: var(--accent);
  color: #fff;
  border-color: var(--accent);
}
.modal .actions .primary:hover { background: var(--accent-dim); }
.modal .actions .secondary {
  background: var(--bg3);
  color: var(--text);
}
.modal .actions .secondary:hover { background: var(--bg-hover); }

/* ── Empty state ── */
.empty-state {
  text-align: center;
  padding: 60px 20px;
  color: var(--text2);
}
.empty-state .icon { font-size: 48px; margin-bottom: 12px; opacity: 0.3; }
.empty-state p { font-size: 14px; }

/* ── Scrollbar ── */
::-webkit-scrollbar { width: 8px; }
::-webkit-scrollbar-track { background: transparent; }
::-webkit-scrollbar-thumb { background: var(--border); border-radius: 4px; }
::-webkit-scrollbar-thumb:hover { background: var(--text2); }
</style>
</head>
<body>

<!-- Toolbar -->
<div class="toolbar">
  <div class="logo">CAN Files</div>
  <div class="search-box">
    <span class="icon">🔍</span>
    <input type="text" id="searchInput" placeholder="Search files...">
  </div>
  <div class="toolbar-actions">
    <button class="tb-btn" id="filterToggle" title="Toggle filters">☰ Filters</button>
    <button class="tb-btn active" id="gridBtn" title="Grid view">▦</button>
    <button class="tb-btn" id="listBtn" title="List view">☰</button>
  </div>
</div>

<!-- Filter bar -->
<div class="filter-bar" id="filterBar">
  <label>App:</label>
  <select id="filterApp"><option value="">All</option></select>
  <label>Type:</label>
  <select id="filterMime"><option value="">All</option></select>
  <label>Tag:</label>
  <select id="filterTag"><option value="">All</option></select>
  <label>From:</label>
  <input type="date" id="filterFrom">
  <label>To:</label>
  <input type="date" id="filterTo">
  <button class="filter-clear" id="clearFilters">Clear</button>
</div>

<!-- Main -->
<div class="main">
  <div class="sidebar" id="sidebar"></div>
  <div class="content">
    <div class="breadcrumb" id="breadcrumb"></div>
    <div class="file-area" id="fileArea"></div>
  </div>
</div>

<!-- Status -->
<div class="status-bar" id="statusBar">Loading...</div>

<!-- Detail modal -->
<div class="modal-overlay" id="modalOverlay">
  <div class="modal" id="modal"></div>
</div>

<script>
// ── State ──
let allAssets = [];
let vtree = null;        // virtual tree root {children: {name: node}}
let currentPath = [];    // e.g. ['CAN'] or ['DATES','2025','01']
let viewMode = 'grid';
let searchQuery = '';
let activeFilters = {};

// ── Init ──
document.addEventListener('DOMContentLoaded', () => {
  loadAssets();
  document.getElementById('searchInput').addEventListener('input', onSearch);
  document.getElementById('filterToggle').addEventListener('click', toggleFilters);
  document.getElementById('gridBtn').addEventListener('click', () => setView('grid'));
  document.getElementById('listBtn').addEventListener('click', () => setView('list'));
  document.getElementById('clearFilters').addEventListener('click', clearFilters);
  document.getElementById('modalOverlay').addEventListener('click', e => {
    if (e.target === e.currentTarget) closeModal();
  });
  ['filterApp','filterMime','filterTag','filterFrom','filterTo'].forEach(id => {
    document.getElementById(id).addEventListener('change', onFilterChange);
  });
  document.addEventListener('keydown', e => {
    if (e.key === 'Escape') closeModal();
  });
});

async function loadAssets() {
  try {
    const r = await fetch('/fm/list?limit=10000&order=desc');
    const json = await r.json();
    if (json.status !== 'success') throw new Error(json.error || 'load failed');
    allAssets = json.data.items.filter(a => !a.is_trashed && !a.is_corrupted);
    vtree = buildVirtualTree(allAssets);
    populateFilterOptions();
    renderSidebar();
    navigateTo([]);
    updateStatus();
  } catch (e) {
    document.getElementById('statusBar').textContent = 'Error: ' + e.message;
  }
}

// ── Virtual Tree ──
function mimeToExt(mime) {
  const map = {
    'application/pdf':'pdf','application/json':'json','text/plain':'txt',
    'text/html':'html','text/css':'css','text/csv':'csv',
    'image/jpeg':'jpg','image/png':'png','image/gif':'gif','image/webp':'webp',
    'image/svg+xml':'svg','audio/mpeg':'mp3','video/mp4':'mp4',
    'application/zip':'zip','application/xml':'xml',
  };
  return map[mime] || mime.split('/').pop() || 'bin';
}

function buildVirtualTree(assets) {
  const root = { name: '', type: 'dir', children: {}, items: [] };

  function ensureDir(parent, name) {
    if (!parent.children[name]) {
      parent.children[name] = { name, type: 'dir', children: {}, items: [] };
    }
    return parent.children[name];
  }

  function addFile(parent, fileName, asset) {
    // deduplicate names
    let name = fileName;
    let i = 2;
    while (parent.children[name]) {
      const dot = fileName.lastIndexOf('.');
      if (dot > 0) {
        name = fileName.slice(0, dot) + '_' + i + fileName.slice(dot);
      } else {
        name = fileName + '_' + i;
      }
      i++;
    }
    parent.children[name] = { name, type: 'file', asset };
  }

  for (const asset of assets) {
    const ext = mimeToExt(asset.mime_type);
    const hash8 = asset.hash.slice(0, 8);
    const canName = asset.timestamp + '_' + hash8 + '.' + ext;
    const friendlyName = asset.human_filename || (hash8 + '.' + ext);

    // CAN/
    const canDir = ensureDir(root, 'CAN');
    addFile(canDir, canName, asset);

    // APPLICATION/
    if (asset.application) {
      const appRoot = ensureDir(root, 'APPLICATION');
      const appDir = ensureDir(appRoot, asset.application);
|
||||||
|
addFile(appDir, friendlyName, asset);
|
||||||
|
}
|
||||||
|
|
||||||
|
// DATES/
|
||||||
|
const d = new Date(asset.timestamp);
|
||||||
|
if (!isNaN(d.getTime())) {
|
||||||
|
const datesRoot = ensureDir(root, 'DATES');
|
||||||
|
const yearDir = ensureDir(datesRoot, String(d.getFullYear()));
|
||||||
|
const monthDir = ensureDir(yearDir, String(d.getMonth() + 1).padStart(2, '0'));
|
||||||
|
addFile(monthDir, friendlyName, asset);
|
||||||
|
}
|
||||||
|
|
||||||
|
// TAGS/
|
||||||
|
if (asset.tags && asset.tags.length > 0) {
|
||||||
|
const tagsRoot = ensureDir(root, 'TAGS');
|
||||||
|
for (const tag of asset.tags) {
|
||||||
|
const tagDir = ensureDir(tagsRoot, tag);
|
||||||
|
addFile(tagDir, friendlyName, asset);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
return root;
|
||||||
|
}
|
||||||
|
|
||||||
|
// ── Navigation ──
|
||||||
|
function getNode(path) {
|
||||||
|
let node = vtree;
|
||||||
|
for (const seg of path) {
|
||||||
|
if (!node || !node.children || !node.children[seg]) return null;
|
||||||
|
node = node.children[seg];
|
||||||
|
}
|
||||||
|
return node;
|
||||||
|
}
|
||||||
|
|
||||||
|
function navigateTo(path) {
|
||||||
|
currentPath = path;
|
||||||
|
searchQuery = '';
|
||||||
|
document.getElementById('searchInput').value = '';
|
||||||
|
renderBreadcrumb();
|
||||||
|
renderContent();
|
||||||
|
highlightSidebar();
|
||||||
|
updateStatus();
|
||||||
|
}
|
||||||
|
|
||||||
|
function renderBreadcrumb() {
|
||||||
|
const bc = document.getElementById('breadcrumb');
|
||||||
|
let html = '<span onclick="navigateTo([])">Root</span>';
|
||||||
|
for (let i = 0; i < currentPath.length; i++) {
|
||||||
|
html += '<span class="sep">▸</span>';
|
||||||
|
if (i === currentPath.length - 1) {
|
||||||
|
html += '<span class="current">' + esc(currentPath[i]) + '</span>';
|
||||||
|
} else {
|
||||||
|
const p = currentPath.slice(0, i + 1);
|
||||||
|
html += '<span onclick="navigateTo(' + esc(JSON.stringify(p)) + ')">' + esc(currentPath[i]) + '</span>';
|
||||||
|
}
|
||||||
|
}
|
||||||
|
bc.innerHTML = html;
|
||||||
|
}
|
||||||
|
|
||||||
|
function renderContent() {
|
||||||
|
const area = document.getElementById('fileArea');
|
||||||
|
const node = getNode(currentPath);
|
||||||
|
|
||||||
|
if (!node || !node.children) {
|
||||||
|
area.innerHTML = '<div class="empty-state"><div class="icon">📂</div><p>Empty folder</p></div>';
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
// Separate dirs and files
|
||||||
|
const entries = Object.values(node.children);
|
||||||
|
let dirs = entries.filter(e => e.type === 'dir');
|
||||||
|
let files = entries.filter(e => e.type === 'file');
|
||||||
|
|
||||||
|
// Apply search filter
|
||||||
|
if (searchQuery) {
|
||||||
|
const q = searchQuery.toLowerCase();
|
||||||
|
files = files.filter(f => {
|
||||||
|
const a = f.asset;
|
||||||
|
return f.name.toLowerCase().includes(q)
|
||||||
|
|| (a.description && a.description.toLowerCase().includes(q))
|
||||||
|
|| (a.human_filename && a.human_filename.toLowerCase().includes(q))
|
||||||
|
|| a.hash.toLowerCase().startsWith(q);
|
||||||
|
});
|
||||||
|
dirs = dirs.filter(d => d.name.toLowerCase().includes(q));
|
||||||
|
}
|
||||||
|
|
||||||
|
// Apply active filters — filter files that match
|
||||||
|
if (hasActiveFilters()) {
|
||||||
|
files = files.filter(f => matchesFilters(f.asset));
|
||||||
|
}
|
||||||
|
|
||||||
|
dirs.sort((a, b) => a.name.localeCompare(b.name));
|
||||||
|
files.sort((a, b) => b.asset.timestamp - a.asset.timestamp);
|
||||||
|
|
||||||
|
if (dirs.length === 0 && files.length === 0) {
|
||||||
|
area.innerHTML = '<div class="empty-state"><div class="icon">🔍</div><p>No items found</p></div>';
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
if (viewMode === 'grid') {
|
||||||
|
renderGrid(area, dirs, files);
|
||||||
|
} else {
|
||||||
|
renderList(area, dirs, files);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// ── Grid view ──
|
||||||
|
function renderGrid(area, dirs, files) {
|
||||||
|
let html = '<div class="file-grid">';
|
||||||
|
|
||||||
|
for (const d of dirs) {
|
||||||
|
const childCount = Object.keys(d.children).length;
|
||||||
|
const p = esc(JSON.stringify([...currentPath, d.name]));
|
||||||
|
html += '<div class="folder-card" onclick="navigateTo(' + p + ')">'
|
||||||
|
+ '<div class="folder-icon">📁</div>'
|
||||||
|
+ '<div><div class="folder-name">' + esc(d.name) + '</div>'
|
||||||
|
+ '<div class="folder-count">' + childCount + ' items</div></div>'
|
||||||
|
+ '</div>';
|
||||||
|
}
|
||||||
|
|
||||||
|
for (const f of files) {
|
||||||
|
const a = f.asset;
|
||||||
|
const isImage = a.mime_type.startsWith('image/');
|
||||||
|
const thumbHtml = isImage
|
||||||
|
? '<img src="/fm/thumb/' + a.hash + '" loading="lazy" alt="">'
|
||||||
|
: '<div class="file-icon">' + fileIcon(a.mime_type) + '</div>';
|
||||||
|
|
||||||
|
html += '<div class="file-card" onclick="showDetail(\'' + a.hash + '\')">'
|
||||||
|
+ '<div class="thumb">' + thumbHtml + '</div>'
|
||||||
|
+ '<div class="name" title="' + esc(f.name) + '">' + esc(f.name) + '</div>'
|
||||||
|
+ '<div class="meta">' + formatSize(a.size) + ' · ' + shortDate(a.timestamp) + '</div>'
|
||||||
|
+ '</div>';
|
||||||
|
}
|
||||||
|
|
||||||
|
html += '</div>';
|
||||||
|
area.innerHTML = html;
|
||||||
|
}
|
||||||
|
|
||||||
|
// ── List view ──
|
||||||
|
function renderList(area, dirs, files) {
|
||||||
|
let html = '<div class="file-list">'
|
||||||
|
+ '<div class="file-list-header">'
|
||||||
|
+ '<div></div><div>Name</div><div style="text-align:right">Size</div><div>Type</div><div>Date</div>'
|
||||||
|
+ '</div>';
|
||||||
|
|
||||||
|
for (const d of dirs) {
|
||||||
|
const p = esc(JSON.stringify([...currentPath, d.name]));
|
||||||
|
html += '<div class="file-list-row" onclick="navigateTo(' + p + ')">'
|
||||||
|
+ '<div class="list-icon">📁</div>'
|
||||||
|
+ '<div class="list-name">' + esc(d.name) + '</div>'
|
||||||
|
+ '<div class="list-size">—</div>'
|
||||||
|
+ '<div class="list-type">Folder</div>'
|
||||||
|
+ '<div class="list-date">—</div>'
|
||||||
|
+ '</div>';
|
||||||
|
}
|
||||||
|
|
||||||
|
for (const f of files) {
|
||||||
|
const a = f.asset;
|
||||||
|
html += '<div class="file-list-row" onclick="showDetail(\'' + a.hash + '\')">'
|
||||||
|
+ '<div class="list-icon">' + fileIcon(a.mime_type) + '</div>'
|
||||||
|
+ '<div class="list-name" title="' + esc(f.name) + '">' + esc(f.name) + '</div>'
|
||||||
|
+ '<div class="list-size">' + formatSize(a.size) + '</div>'
|
||||||
|
+ '<div class="list-type">' + esc(shortMime(a.mime_type)) + '</div>'
|
||||||
|
+ '<div class="list-date">' + shortDate(a.timestamp) + '</div>'
|
||||||
|
+ '</div>';
|
||||||
|
}
|
||||||
|
|
||||||
|
html += '</div>';
|
||||||
|
area.innerHTML = html;
|
||||||
|
}
|
||||||
|
|
||||||
|
// ── Sidebar tree ──
|
||||||
|
function renderSidebar() {
|
||||||
|
if (!vtree) return;
|
||||||
|
const sb = document.getElementById('sidebar');
|
||||||
|
sb.innerHTML = buildTreeHtml(vtree, [], 0);
|
||||||
|
}
|
||||||
|
|
||||||
|
function buildTreeHtml(node, path, depth) {
|
||||||
|
if (!node.children) return '';
|
||||||
|
const dirs = Object.values(node.children).filter(c => c.type === 'dir');
|
||||||
|
dirs.sort((a, b) => a.name.localeCompare(b.name));
|
||||||
|
let html = '';
|
||||||
|
|
||||||
|
for (const d of dirs) {
|
||||||
|
const cPath = [...path, d.name];
|
||||||
|
const key = cPath.join('/');
|
||||||
|
const hasSubs = Object.values(d.children).some(c => c.type === 'dir');
|
||||||
|
const arrowCls = hasSubs ? 'arrow' : 'arrow';
|
||||||
|
const arrowChar = hasSubs ? '▸' : '';
|
||||||
|
|
||||||
|
html += '<div class="tree-item" data-path="' + esc(key) + '" '
|
||||||
|
+ 'style="--depth:' + depth + '" '
|
||||||
|
+ 'onclick="onTreeClick(event, ' + esc(JSON.stringify(cPath)) + ')">'
|
||||||
|
+ '<span class="' + arrowCls + '">' + arrowChar + '</span>'
|
||||||
|
+ '<span class="folder-icon">📁</span>'
|
||||||
|
+ esc(d.name)
|
||||||
|
+ '</div>';
|
||||||
|
|
||||||
|
if (hasSubs) {
|
||||||
|
html += '<div class="tree-children" data-path="' + esc(key) + '">';
|
||||||
|
html += buildTreeHtml(d, cPath, depth + 1);
|
||||||
|
html += '</div>';
|
||||||
|
}
|
||||||
|
}
|
||||||
|
return html;
|
||||||
|
}
|
||||||
|
|
||||||
|
function onTreeClick(e, path) {
|
||||||
|
e.stopPropagation();
|
||||||
|
const key = path.join('/');
|
||||||
|
// toggle expand
|
||||||
|
const children = document.querySelector('.tree-children[data-path="' + key + '"]');
|
||||||
|
const arrow = e.currentTarget.querySelector('.arrow');
|
||||||
|
if (children) {
|
||||||
|
children.classList.toggle('open');
|
||||||
|
if (arrow) arrow.classList.toggle('open');
|
||||||
|
}
|
||||||
|
navigateTo(path);
|
||||||
|
}
|
||||||
|
|
||||||
|
function highlightSidebar() {
|
||||||
|
const key = currentPath.join('/');
|
||||||
|
document.querySelectorAll('.tree-item').forEach(el => {
|
||||||
|
el.classList.toggle('active', el.dataset.path === key);
|
||||||
|
});
|
||||||
|
// auto-expand parents
|
||||||
|
for (let i = 1; i <= currentPath.length; i++) {
|
||||||
|
const parentKey = currentPath.slice(0, i).join('/');
|
||||||
|
const ch = document.querySelector('.tree-children[data-path="' + parentKey + '"]');
|
||||||
|
if (ch) ch.classList.add('open');
|
||||||
|
const arrow = document.querySelector('.tree-item[data-path="' + parentKey + '"] .arrow');
|
||||||
|
if (arrow) arrow.classList.add('open');
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// ── Search ──
|
||||||
|
function onSearch(e) {
|
||||||
|
searchQuery = e.target.value.trim();
|
||||||
|
renderContent();
|
||||||
|
updateStatus();
|
||||||
|
}
|
||||||
|
|
||||||
|
// ── Filters ──
|
||||||
|
function toggleFilters() {
|
||||||
|
document.getElementById('filterBar').classList.toggle('show');
|
||||||
|
document.getElementById('filterToggle').classList.toggle('active');
|
||||||
|
}
|
||||||
|
|
||||||
|
function populateFilterOptions() {
|
||||||
|
const apps = new Set(), mimes = new Set(), tags = new Set();
|
||||||
|
for (const a of allAssets) {
|
||||||
|
if (a.application) apps.add(a.application);
|
||||||
|
mimes.add(a.mime_type);
|
||||||
|
if (a.tags) a.tags.forEach(t => tags.add(t));
|
||||||
|
}
|
||||||
|
fillSelect('filterApp', [...apps].sort());
|
||||||
|
fillSelect('filterMime', [...mimes].sort());
|
||||||
|
fillSelect('filterTag', [...tags].sort());
|
||||||
|
}
|
||||||
|
|
||||||
|
function fillSelect(id, values) {
|
||||||
|
const sel = document.getElementById(id);
|
||||||
|
const current = sel.value;
|
||||||
|
sel.innerHTML = '<option value="">All</option>';
|
||||||
|
for (const v of values) {
|
||||||
|
sel.innerHTML += '<option value="' + esc(v) + '">' + esc(v) + '</option>';
|
||||||
|
}
|
||||||
|
sel.value = current;
|
||||||
|
}
|
||||||
|
|
||||||
|
function onFilterChange() {
|
||||||
|
activeFilters = {
|
||||||
|
application: document.getElementById('filterApp').value,
|
||||||
|
mime_type: document.getElementById('filterMime').value,
|
||||||
|
tag: document.getElementById('filterTag').value,
|
||||||
|
from: document.getElementById('filterFrom').value,
|
||||||
|
to: document.getElementById('filterTo').value,
|
||||||
|
};
|
||||||
|
renderContent();
|
||||||
|
updateStatus();
|
||||||
|
}
|
||||||
|
|
||||||
|
function clearFilters() {
|
||||||
|
['filterApp','filterMime','filterTag','filterFrom','filterTo'].forEach(id => {
|
||||||
|
document.getElementById(id).value = '';
|
||||||
|
});
|
||||||
|
activeFilters = {};
|
||||||
|
renderContent();
|
||||||
|
updateStatus();
|
||||||
|
}
|
||||||
|
|
||||||
|
function hasActiveFilters() {
|
||||||
|
return Object.values(activeFilters).some(v => v);
|
||||||
|
}
|
||||||
|
|
||||||
|
function matchesFilters(asset) {
|
||||||
|
const f = activeFilters;
|
||||||
|
if (f.application && asset.application !== f.application) return false;
|
||||||
|
if (f.mime_type && asset.mime_type !== f.mime_type) return false;
|
||||||
|
if (f.tag && (!asset.tags || !asset.tags.includes(f.tag))) return false;
|
||||||
|
if (f.from) {
|
||||||
|
const from = new Date(f.from).getTime();
|
||||||
|
if (asset.timestamp < from) return false;
|
||||||
|
}
|
||||||
|
if (f.to) {
|
||||||
|
const to = new Date(f.to).getTime() + 86400000; // end of day
|
||||||
|
if (asset.timestamp >= to) return false;
|
||||||
|
}
|
||||||
|
return true;
|
||||||
|
}
|
||||||
|
|
||||||
|
// ── Detail modal ──
|
||||||
|
function showDetail(hash) {
|
||||||
|
const asset = allAssets.find(a => a.hash === hash);
|
||||||
|
if (!asset) return;
|
||||||
|
|
||||||
|
const isImage = asset.mime_type.startsWith('image/');
|
||||||
|
const previewHtml = isImage
|
||||||
|
? '<img class="preview" src="/fm/asset/' + hash + '" alt="preview">'
|
||||||
|
: '';
|
||||||
|
|
||||||
|
let tagsHtml = '';
|
||||||
|
if (asset.tags && asset.tags.length) {
|
||||||
|
tagsHtml = asset.tags.map(t => '<span class="tag-pill">' + esc(t) + '</span>').join(' ');
|
||||||
|
}
|
||||||
|
|
||||||
|
const modal = document.getElementById('modal');
|
||||||
|
modal.innerHTML = '<button class="close-btn" onclick="closeModal()">×</button>'
|
||||||
|
+ '<h3>' + esc(asset.human_filename || asset.hash.slice(0, 12)) + '</h3>'
|
||||||
|
+ previewHtml
|
||||||
|
+ '<div class="detail-row"><div class="detail-label">Hash</div><div class="detail-value" style="font-family:var(--mono);font-size:12px">' + esc(asset.hash) + '</div></div>'
|
||||||
|
+ '<div class="detail-row"><div class="detail-label">Type</div><div class="detail-value">' + esc(asset.mime_type) + '</div></div>'
|
||||||
|
+ '<div class="detail-row"><div class="detail-label">Size</div><div class="detail-value">' + formatSize(asset.size) + '</div></div>'
|
||||||
|
+ '<div class="detail-row"><div class="detail-label">Date</div><div class="detail-value">' + fullDate(asset.timestamp) + '</div></div>'
|
||||||
|
+ (asset.application ? '<div class="detail-row"><div class="detail-label">Application</div><div class="detail-value">' + esc(asset.application) + '</div></div>' : '')
|
||||||
|
+ (asset.user ? '<div class="detail-row"><div class="detail-label">User</div><div class="detail-value">' + esc(asset.user) + '</div></div>' : '')
|
||||||
|
+ (asset.description ? '<div class="detail-row"><div class="detail-label">Description</div><div class="detail-value">' + esc(asset.description) + '</div></div>' : '')
|
||||||
|
+ (tagsHtml ? '<div class="detail-row"><div class="detail-label">Tags</div><div class="detail-value">' + tagsHtml + '</div></div>' : '')
|
||||||
|
+ '<div class="actions">'
|
||||||
|
+ '<a class="primary" href="/fm/asset/' + hash + '" target="_blank">Open</a>'
|
||||||
|
+ '<a class="secondary" href="/fm/asset/' + hash + '" download>Download</a>'
|
||||||
|
+ '</div>';
|
||||||
|
|
||||||
|
document.getElementById('modalOverlay').classList.add('show');
|
||||||
|
}
|
||||||
|
|
||||||
|
function closeModal() {
|
||||||
|
document.getElementById('modalOverlay').classList.remove('show');
|
||||||
|
}
|
||||||
|
|
||||||
|
// ── View toggle ──
|
||||||
|
function setView(mode) {
|
||||||
|
viewMode = mode;
|
||||||
|
document.getElementById('gridBtn').classList.toggle('active', mode === 'grid');
|
||||||
|
document.getElementById('listBtn').classList.toggle('active', mode === 'list');
|
||||||
|
renderContent();
|
||||||
|
}
|
||||||
|
|
||||||
|
// ── Status bar ──
|
||||||
|
function updateStatus() {
|
||||||
|
const node = getNode(currentPath);
|
||||||
|
if (!node) {
|
||||||
|
document.getElementById('statusBar').textContent = allAssets.length + ' total assets';
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
const entries = Object.values(node.children || {});
|
||||||
|
const dirs = entries.filter(e => e.type === 'dir').length;
|
||||||
|
const files = entries.filter(e => e.type === 'file').length;
|
||||||
|
let text = '';
|
||||||
|
if (dirs > 0) text += dirs + ' folder' + (dirs !== 1 ? 's' : '');
|
||||||
|
if (files > 0) text += (text ? ', ' : '') + files + ' file' + (files !== 1 ? 's' : '');
|
||||||
|
if (!text) text = 'Empty';
|
||||||
|
text += ' | ' + allAssets.length + ' total assets';
|
||||||
|
document.getElementById('statusBar').textContent = text;
|
||||||
|
}
|
||||||
|
|
||||||
|
// ── Helpers ──
|
||||||
|
function esc(s) {
  if (!s) return '';
  return s.replace(/&/g,'&amp;').replace(/</g,'&lt;').replace(/>/g,'&gt;').replace(/"/g,'&quot;');
}

function formatSize(bytes) {
  if (!bytes || bytes === 0) return '0 B';
  const units = ['B', 'KB', 'MB', 'GB'];
  let i = 0;
  let v = bytes;
  while (v >= 1024 && i < units.length - 1) { v /= 1024; i++; }
  return (i === 0 ? v : v.toFixed(1)) + ' ' + units[i];
}

function shortDate(ts) {
  const d = new Date(ts);
  return d.toLocaleDateString(undefined, { year:'numeric', month:'short', day:'numeric' });
}

function fullDate(ts) {
  return new Date(ts).toLocaleString();
}

function shortMime(mime) {
  const parts = mime.split('/');
  return parts.length > 1 ? parts[1] : mime;
}

function fileIcon(mime) {
  if (mime.startsWith('image/')) return '📷';
  if (mime.startsWith('video/')) return '🎥';
  if (mime.startsWith('audio/')) return '🎵';
  if (mime === 'application/pdf') return '📄';
  if (mime.startsWith('text/')) return '📄';
  if (mime === 'application/json') return '📄';
  return '📄';
}
</script>
</body>
</html>"##;
161
examples/filemanager/src/main.rs
Normal file
@ -0,0 +1,161 @@
mod html;

use axum::extract::{Path, Query, State};
use axum::http::{HeaderMap, HeaderValue, StatusCode};
use axum::response::{Html, IntoResponse, Response};
use axum::routing::get;
use axum::Router;
use std::collections::HashMap;

const CAN_API: &str = "http://127.0.0.1:3210/api/v1/can/0";

#[derive(Clone)]
struct AppState {
    client: reqwest::Client,
}

#[tokio::main]
async fn main() {
    tracing_subscriber::fmt()
        .with_env_filter(
            tracing_subscriber::EnvFilter::try_from_default_env()
                .unwrap_or_else(|_| "filemanager=info".into()),
        )
        .init();

    let state = AppState {
        client: reqwest::Client::new(),
    };

    let app = Router::new()
        .route("/", get(serve_index))
        .route("/fm/list", get(proxy_list))
        .route("/fm/search", get(proxy_search))
        .route("/fm/asset/{hash}", get(proxy_asset))
        .route("/fm/asset/{hash}/meta", get(proxy_meta))
        .route("/fm/thumb/{hash}", get(proxy_thumb))
        .with_state(state);

    let addr = "127.0.0.1:3212";
    let url = format!("http://{}", addr);
    tracing::info!("File Manager listening on {}", url);

    // Best-effort browser open
    let _ = open::that(&url);

    let listener = tokio::net::TcpListener::bind(addr).await.unwrap();
    axum::serve(listener, app).await.unwrap();
}

async fn serve_index() -> Html<&'static str> {
    Html(html::INDEX_HTML)
}

/// Forward a reqwest response to the axum caller, preserving status + content-type.
async fn forward(resp: Result<reqwest::Response, reqwest::Error>) -> Response {
    match resp {
        Ok(r) => {
            let status = StatusCode::from_u16(r.status().as_u16()).unwrap_or(StatusCode::OK);
            let ct = r
                .headers()
                .get("content-type")
                .and_then(|v| v.to_str().ok())
                .unwrap_or("application/octet-stream")
                .to_string();
            let body = r.bytes().await.unwrap_or_default();
            let mut headers = HeaderMap::new();
            headers.insert("content-type", HeaderValue::from_str(&ct).unwrap());
            (status, headers, body).into_response()
        }
        Err(e) => {
            tracing::warn!("CAN service error: {}", e);
            (StatusCode::BAD_GATEWAY, "CAN service unavailable").into_response()
        }
    }
}

/// Build a query string from a HashMap of params.
fn build_qs(params: &HashMap<String, String>) -> String {
    if params.is_empty() {
        return String::new();
    }
    let qs: Vec<String> = params
        .iter()
        .map(|(k, v)| format!("{}={}", urlencoding(k), urlencoding(v)))
        .collect();
    format!("?{}", qs.join("&"))
}

fn urlencoding(s: &str) -> String {
    s.chars()
        .map(|c| match c {
            'A'..='Z' | 'a'..='z' | '0'..='9' | '-' | '_' | '.' | '~' => c.to_string(),
            _ => format!("%{:02X}", c as u8),
        })
        .collect()
}

/// Proxy list: pass all query params through to CAN /list.
async fn proxy_list(
    State(state): State<AppState>,
    Query(params): Query<HashMap<String, String>>,
) -> Response {
    let url = format!("{}/list{}", CAN_API, build_qs(&params));
    forward(state.client.get(&url).send().await).await
}

/// Proxy search: pass all query params through to CAN /search.
async fn proxy_search(
    State(state): State<AppState>,
    Query(params): Query<HashMap<String, String>>,
) -> Response {
    let url = format!("{}/search{}", CAN_API, build_qs(&params));
    forward(state.client.get(&url).send().await).await
}

/// Proxy asset download.
async fn proxy_asset(
    State(state): State<AppState>,
    Path(hash): Path<String>,
) -> Response {
    let url = format!("{}/asset/{}", CAN_API, hash);
    let resp = state.client.get(&url).send().await;
    // Drop Content-Disposition so images render inline
    match resp {
        Ok(r) => {
            let status = StatusCode::from_u16(r.status().as_u16()).unwrap_or(StatusCode::OK);
            let ct = r
                .headers()
                .get("content-type")
                .and_then(|v| v.to_str().ok())
                .unwrap_or("application/octet-stream")
                .to_string();
            let body = r.bytes().await.unwrap_or_default();
            let mut headers = HeaderMap::new();
            headers.insert("content-type", HeaderValue::from_str(&ct).unwrap());
            (status, headers, body).into_response()
        }
        Err(e) => {
            tracing::warn!("CAN service error: {}", e);
            (StatusCode::BAD_GATEWAY, "CAN service unavailable").into_response()
        }
    }
}

/// Proxy asset metadata.
async fn proxy_meta(
    State(state): State<AppState>,
    Path(hash): Path<String>,
) -> Response {
    let url = format!("{}/asset/{}/meta", CAN_API, hash);
    forward(state.client.get(&url).send().await).await
}

/// Proxy thumbnail (200x200).
async fn proxy_thumb(
    State(state): State<AppState>,
    Path(hash): Path<String>,
) -> Response {
    let url = format!("{}/asset/{}/thumb/200/200", CAN_API, hash);
    forward(state.client.get(&url).send().await).await
}
2003
examples/paste/Cargo.lock
generated
Normal file
File diff suppressed because it is too large
Load Diff
20
examples/paste/Cargo.toml
Normal file
@ -0,0 +1,20 @@
[package]
name = "paste"
version = "0.1.0"
edition = "2021"
publish = false
description = "Clipboard log UI — example app for CanService"

[[bin]]
name = "paste"
path = "src/main.rs"

[dependencies]
axum = { version = "0.8", features = ["multipart"] }
tokio = { version = "1", features = ["full"] }
reqwest = { version = "0.12", features = ["multipart", "json"] }
serde = { version = "1", features = ["derive"] }
serde_json = "1"
open = "5"
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["env-filter"] }
382
examples/paste/src/html.rs
Normal file
@ -0,0 +1,382 @@
pub const INDEX_HTML: &str = r##"<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>paste</title>
<style>
:root {
  --bg: #1a1a1e;
  --bg-card: #24242a;
  --bg-input: #2c2c34;
  --border: #38383f;
  --text: #e4e4e8;
  --text-muted: #8888a0;
  --accent: #6c8cff;
  --accent-dim: #4a6ad0;
  --success: #4caf80;
  --error: #e05555;
  --radius: 6px;
  --mono: "SF Mono", "Cascadia Code", "Consolas", monospace;
  --sans: -apple-system, "Segoe UI", system-ui, sans-serif;
}
*, *::before, *::after { margin: 0; padding: 0; box-sizing: border-box; }
body {
  background: var(--bg);
  color: var(--text);
  font-family: var(--sans);
  line-height: 1.5;
  max-width: 720px;
  margin: 0 auto;
  padding: 2rem 1rem;
}
header { margin-bottom: 1.5rem; }
header h1 {
  font-size: 1.4rem;
  font-weight: 600;
  color: var(--accent);
  font-family: var(--mono);
  letter-spacing: -0.02em;
}
header .sub {
  color: var(--text-muted);
  font-size: 0.82rem;
  margin-top: 0.2rem;
}
.input-area { margin-bottom: 1.5rem; }
.input-row {
  display: flex;
  gap: 0.5rem;
  align-items: center;
}
#paste-input {
  flex: 1;
  padding: 0.7rem 1rem;
  background: var(--bg-input);
  border: 1px solid var(--border);
  border-radius: var(--radius);
  color: var(--text);
  font-size: 0.95rem;
  font-family: var(--sans);
  outline: none;
  transition: border-color 0.15s;
}
#paste-input:focus { border-color: var(--accent); }
#paste-input::placeholder { color: var(--text-muted); }
.clip-btn {
  flex-shrink: 0;
  width: 38px;
  height: 38px;
  background: var(--bg-input);
  border: 1px solid var(--border);
  border-radius: var(--radius);
  color: var(--text-muted);
  cursor: pointer;
  display: flex;
  align-items: center;
  justify-content: center;
  transition: border-color 0.15s, color 0.15s;
}
.clip-btn:hover { border-color: var(--accent); color: var(--accent); }
.clip-btn svg { width: 18px; height: 18px; }
.status {
  font-size: 0.78rem;
  margin-top: 0.35rem;
  min-height: 1.2em;
  color: var(--text-muted);
}
.status.ok { color: var(--success); }
.status.err { color: var(--error); }
.items { display: flex; flex-direction: column; gap: 0.5rem; }
.item {
  background: var(--bg-card);
  border: 1px solid var(--border);
  border-radius: var(--radius);
  padding: 0.65rem 0.85rem;
  display: flex;
  gap: 0.7rem;
  align-items: flex-start;
}
.item-thumb {
  flex-shrink: 0;
  width: 48px;
  height: 48px;
  border-radius: 4px;
  object-fit: cover;
  background: var(--bg);
  cursor: pointer;
}
.item-icon {
  flex-shrink: 0;
  width: 48px;
  height: 48px;
  border-radius: 4px;
  background: var(--bg);
  display: flex;
  align-items: center;
  justify-content: center;
  cursor: pointer;
}
.item-icon svg { width: 26px; height: 26px; }
.item-icon.pdf { color: #e05555; }
.item-icon.file { color: var(--text-muted); }
.item-filename {
  font-size: 0.78rem;
  margin-top: 0.2rem;
}
.item-filename a {
  color: var(--accent);
  text-decoration: none;
}
.item-filename a:hover { text-decoration: underline; }
.item-body { flex: 1; min-width: 0; }
.item-meta {
  display: flex;
  flex-wrap: wrap;
  gap: 0.6rem;
  font-size: 0.73rem;
  color: var(--text-muted);
  margin-bottom: 0.25rem;
}
.item-meta .hash {
  font-family: var(--mono);
  color: var(--accent-dim);
}
.item-content {
  font-size: 0.88rem;
  white-space: pre-wrap;
  word-break: break-word;
  max-height: 6em;
  overflow: hidden;
}
.item-image { margin-top: 0.4rem; }
.item-image img {
  max-width: 100%;
  max-height: 200px;
  border-radius: 4px;
}
.item-tags {
  display: flex;
  flex-wrap: wrap;
  gap: 0.3rem;
  margin-top: 0.3rem;
}
.tag {
  font-size: 0.7rem;
  font-family: var(--mono);
  background: #2a2f45;
  color: var(--accent);
  padding: 0.1rem 0.45rem;
  border-radius: 3px;
}
.empty {
  text-align: center;
  color: var(--text-muted);
  padding: 3rem 1rem;
  font-size: 0.9rem;
}
</style>
</head>
<body>

<header>
  <h1>paste</h1>
  <p class="sub">type text + enter, or paste an image — use #hashtags to add tags</p>
</header>

<main>
  <div class="input-area">
    <div class="input-row">
      <input type="text" id="paste-input"
             placeholder="type something and press Enter, or Ctrl+V an image"
             autocomplete="off" spellcheck="false">
      <button class="clip-btn" id="clip-btn" title="Attach a file">
        <svg viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
          <path d="M21.44 11.05l-9.19 9.19a6 6 0 01-8.49-8.49l9.19-9.19a4 4 0 015.66 5.66l-9.2 9.19a2 2 0 01-2.83-2.83l8.49-8.48"/>
        </svg>
      </button>
      <input type="file" id="file-input" hidden>
    </div>
    <div id="status" class="status"></div>
  </div>
  <div id="items" class="items"></div>
</main>

<script>
const $ = s => document.querySelector(s);
const input = $('#paste-input');
|
||||||
|
const statusEl = $('#status');
|
||||||
|
const itemsEl = $('#items');
|
||||||
|
|
||||||
|
function setStatus(msg, type) {
|
||||||
|
statusEl.textContent = msg;
|
||||||
|
statusEl.className = 'status' + (type ? ' ' + type : '');
|
||||||
|
if (type === 'ok') setTimeout(() => { statusEl.textContent = ''; statusEl.className = 'status'; }, 2500);
|
||||||
|
}
|
||||||
|
|
||||||
|
function escapeHtml(s) {
|
||||||
|
const d = document.createElement('div');
|
||||||
|
d.textContent = s;
|
||||||
|
return d.innerHTML;
|
||||||
|
}
|
||||||
|
|
||||||
|
function fmtTime(ts) {
|
||||||
|
const d = new Date(ts);
|
||||||
|
const pad = n => String(n).padStart(2, '0');
|
||||||
|
return d.getFullYear() + '-' + pad(d.getMonth()+1) + '-' + pad(d.getDate())
|
||||||
|
+ ' ' + pad(d.getHours()) + ':' + pad(d.getMinutes()) + ':' + pad(d.getSeconds());
|
||||||
|
}
|
||||||
|
|
||||||
|
async function pasteText(text) {
|
||||||
|
setStatus('Sending...');
|
||||||
|
try {
|
||||||
|
const r = await fetch('/paste/text', {
|
||||||
|
method: 'POST',
|
||||||
|
headers: { 'Content-Type': 'application/json' },
|
||||||
|
body: JSON.stringify({ text })
|
||||||
|
});
|
||||||
|
if (!r.ok) throw new Error(await r.text());
|
||||||
|
setStatus('Saved', 'ok');
|
||||||
|
loadItems();
|
||||||
|
} catch (e) {
|
||||||
|
setStatus('Error: ' + e.message, 'err');
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
async function pasteFile(file, description, fileName) {
|
||||||
|
setStatus('Uploading...');
|
||||||
|
try {
|
||||||
|
const form = new FormData();
|
||||||
|
form.append('file', file, fileName || file.name || 'clipboard.bin');
|
||||||
|
if (description) form.append('description', description);
|
||||||
|
const r = await fetch('/paste/file', { method: 'POST', body: form });
|
||||||
|
if (!r.ok) throw new Error(await r.text());
|
||||||
|
setStatus('Saved', 'ok');
|
||||||
|
loadItems();
|
||||||
|
} catch (e) {
|
||||||
|
setStatus('Error: ' + e.message, 'err');
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
async function loadItems() {
|
||||||
|
try {
|
||||||
|
const r = await fetch('/paste/list');
|
||||||
|
const json = await r.json();
|
||||||
|
if (json.status !== 'success') throw new Error(json.error || 'load failed');
|
||||||
|
renderItems(json.data.items);
|
||||||
|
} catch (e) {
|
||||||
|
setStatus('Load error: ' + e.message, 'err');
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
function renderItems(items) {
|
||||||
|
if (!items || items.length === 0) {
|
||||||
|
itemsEl.innerHTML = '<div class="empty">Nothing here yet. Type something or paste an image.</div>';
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
const pdfSvg = '<svg viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round"><path d="M14 2H6a2 2 0 00-2 2v16a2 2 0 002 2h12a2 2 0 002-2V8z"/><polyline points="14 2 14 8 20 8"/><text x="12" y="17" text-anchor="middle" fill="currentColor" stroke="none" font-size="6" font-weight="bold" font-family="sans-serif">PDF</text></svg>';
|
||||||
|
const fileSvg = '<svg viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round"><path d="M14 2H6a2 2 0 00-2 2v16a2 2 0 002 2h12a2 2 0 002-2V8z"/><polyline points="14 2 14 8 20 8"/><line x1="16" y1="13" x2="8" y2="13"/><line x1="16" y1="17" x2="8" y2="17"/></svg>';
|
||||||
|
|
||||||
|
itemsEl.innerHTML = items.map(it => {
|
||||||
|
const time = fmtTime(it.timestamp);
|
||||||
|
const shortHash = it.hash.substring(0, 12);
|
||||||
|
const isImage = it.mime_type && it.mime_type.startsWith('image/');
|
||||||
|
const isPdf = it.mime_type === 'application/pdf';
|
||||||
|
const isText = it.mime_type && it.mime_type.startsWith('text/');
|
||||||
|
const assetUrl = '/paste/asset/' + it.hash;
|
||||||
|
const thumbUrl = '/paste/thumb/' + it.hash;
|
||||||
|
|
||||||
|
let thumb = '';
|
||||||
|
if (isImage) {
|
||||||
|
thumb = '<a href="' + assetUrl + '" target="_blank"><img class="item-thumb" src="' + thumbUrl + '" alt=""></a>';
|
||||||
|
} else if (isPdf) {
|
||||||
|
thumb = '<a href="' + assetUrl + '" target="_blank" class="item-icon pdf">' + pdfSvg + '</a>';
|
||||||
|
} else if (!isText) {
|
||||||
|
thumb = '<a href="' + assetUrl + '" target="_blank" class="item-icon file">' + fileSvg + '</a>';
|
||||||
|
}
|
||||||
|
|
||||||
|
let content = '';
|
||||||
|
if (isImage) {
|
||||||
|
content = '<div class="item-image"><a href="' + assetUrl + '" target="_blank"><img src="' + assetUrl + '" alt="pasted image" loading="lazy"></a></div>';
|
||||||
|
}
|
||||||
|
if (it.description) {
|
||||||
|
content += '<div class="item-content">' + escapeHtml(it.description) + '</div>';
|
||||||
|
}
|
||||||
|
|
||||||
|
let fileLink = '';
|
||||||
|
const fname = it.human_filename || it.human_file_name;
|
||||||
|
if (fname && !isText) {
|
||||||
|
fileLink = '<div class="item-filename"><a href="' + assetUrl + '" target="_blank">' + escapeHtml(fname) + '</a></div>';
|
||||||
|
}
|
||||||
|
|
||||||
|
let tagsHtml = '';
|
||||||
|
if (it.tags && it.tags.length > 0) {
|
||||||
|
tagsHtml = '<div class="item-tags">'
|
||||||
|
+ it.tags.map(t => '<span class="tag">#' + escapeHtml(t) + '</span>').join('')
|
||||||
|
+ '</div>';
|
||||||
|
}
|
||||||
|
|
||||||
|
return '<div class="item">'
|
||||||
|
+ thumb
|
||||||
|
+ '<div class="item-body">'
|
||||||
|
+ '<div class="item-meta">'
|
||||||
|
+ '<span class="time">' + time + '</span>'
|
||||||
|
+ '<span class="hash">' + shortHash + '</span>'
|
||||||
|
+ '<span>' + escapeHtml(it.mime_type || '') + '</span>'
|
||||||
|
+ '</div>'
|
||||||
|
+ fileLink
|
||||||
|
+ content
|
||||||
|
+ tagsHtml
|
||||||
|
+ '</div>'
|
||||||
|
+ '</div>';
|
||||||
|
}).join('');
|
||||||
|
}
|
||||||
|
|
||||||
|
// Enter key sends text
|
||||||
|
input.addEventListener('keydown', e => {
|
||||||
|
if (e.key === 'Enter' && input.value.trim()) {
|
||||||
|
pasteText(input.value.trim());
|
||||||
|
input.value = '';
|
||||||
|
}
|
||||||
|
});
|
||||||
|
|
||||||
|
// Clipboard paste: intercept images
|
||||||
|
document.addEventListener('paste', e => {
|
||||||
|
const items = e.clipboardData && e.clipboardData.items;
|
||||||
|
if (!items) return;
|
||||||
|
for (const item of items) {
|
||||||
|
if (item.type.startsWith('image/')) {
|
||||||
|
e.preventDefault();
|
||||||
|
const blob = item.getAsFile();
|
||||||
|
if (blob) {
|
||||||
|
const ext = blob.type === 'image/png' ? '.png'
|
||||||
|
: blob.type === 'image/jpeg' ? '.jpg'
|
||||||
|
: blob.type === 'image/gif' ? '.gif'
|
||||||
|
: blob.type === 'image/webp' ? '.webp' : '.bin';
|
||||||
|
pasteFile(blob, input.value.trim(), 'clipboard' + ext);
|
||||||
|
input.value = '';
|
||||||
|
}
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
// text paste: let browser fill the input naturally
|
||||||
|
});
|
||||||
|
|
||||||
|
// Paperclip file picker
|
||||||
|
const fileInput = $('#file-input');
|
||||||
|
$('#clip-btn').addEventListener('click', () => fileInput.click());
|
||||||
|
fileInput.addEventListener('change', () => {
|
||||||
|
const file = fileInput.files[0];
|
||||||
|
if (file) {
|
||||||
|
pasteFile(file, input.value.trim());
|
||||||
|
input.value = '';
|
||||||
|
fileInput.value = '';
|
||||||
|
}
|
||||||
|
});
|
||||||
|
|
||||||
|
// Initial load
|
||||||
|
loadItems();
|
||||||
|
</script>
|
||||||
|
|
||||||
|
</body>
|
||||||
|
</html>"##;
|
||||||
263
examples/paste/src/main.rs
Normal file
@ -0,0 +1,263 @@
mod html;

use axum::extract::{DefaultBodyLimit, Multipart, Path, State};
use axum::http::{header, StatusCode};
use axum::response::{Html, IntoResponse, Response};
use axum::routing::{get, post};
use axum::{Json, Router};
use serde::Deserialize;
use std::net::SocketAddr;

const CAN_API: &str = "http://127.0.0.1:3210/api/v1/can/0";

#[derive(Clone)]
struct AppState {
    client: reqwest::Client,
}

#[derive(Deserialize)]
struct PasteTextRequest {
    text: String,
}

// ── Helpers ──────────────────────────────────────────────────────────────

/// Extract #hashtags from text, returning the comma-separated tag string.
/// e.g. "some #chicken and #food" -> "chicken,food"
fn extract_tags(text: &str) -> String {
    text.split_whitespace()
        .filter(|w| w.starts_with('#') && w.len() > 1)
        .map(|w| w[1..].trim_end_matches(|c: char| !c.is_alphanumeric() && c != '_' && c != '-'))
        .filter(|t| !t.is_empty())
        .collect::<Vec<_>>()
        .join(",")
}

/// Convert a reqwest response into an axum response, copying status +
/// content-type + body. Intentionally drops Content-Disposition so that
/// images render inline rather than triggering a download.
async fn forward(resp: Result<reqwest::Response, reqwest::Error>) -> Response {
    match resp {
        Ok(r) => {
            let status = StatusCode::from_u16(r.status().as_u16())
                .unwrap_or(StatusCode::BAD_GATEWAY);
            let ct = r
                .headers()
                .get("content-type")
                .and_then(|v| v.to_str().ok())
                .unwrap_or("application/octet-stream")
                .to_string();
            let bytes = r.bytes().await.unwrap_or_default();
            (status, [(header::CONTENT_TYPE, ct)], bytes).into_response()
        }
        Err(e) => (
            StatusCode::BAD_GATEWAY,
            format!("CanService unreachable: {e}"),
        )
            .into_response(),
    }
}

// ── Handlers ─────────────────────────────────────────────────────────────

async fn serve_index() -> Html<&'static str> {
    Html(html::INDEX_HTML)
}

/// Accept `{ "text": "..." }` from the frontend, forward to CanService as
/// a multipart text/plain file so the stored content is raw text (not
/// JSON-wrapped).
async fn paste_text(
    State(state): State<AppState>,
    Json(body): Json<PasteTextRequest>,
) -> Response {
    let desc = if body.text.len() > 200 {
        format!("{}...", &body.text[..body.text.char_indices().nth(200).map(|(i, _)| i).unwrap_or(body.text.len())])
    } else {
        body.text.clone()
    };

    let tags = extract_tags(&desc);

    let part = match reqwest::multipart::Part::bytes(body.text.into_bytes())
        .file_name("paste.txt")
        .mime_str("text/plain")
    {
        Ok(p) => p,
        Err(e) => return (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()).into_response(),
    };

    let mut form = reqwest::multipart::Form::new()
        .part("file", part)
        .text("application", "paste")
        .text("description", desc);

    if !tags.is_empty() {
        form = form.text("tags", tags);
    }

    let resp = state
        .client
        .post(format!("{CAN_API}/ingest"))
        .multipart(form)
        .send()
        .await;

    forward(resp).await
}

/// Accept a multipart upload from the frontend (clipboard image).
/// Re-packages it into a new multipart request for CanService.
async fn paste_file(
    State(state): State<AppState>,
    mut multipart: Multipart,
) -> Response {
    let mut file_bytes: Option<Vec<u8>> = None;
    let mut file_name = "clipboard.png".to_string();
    let mut content_type = "image/png".to_string();
    let mut description = String::new();

    loop {
        match multipart.next_field().await {
            Ok(Some(field)) => {
                let name = field.name().unwrap_or("").to_string();
                match name.as_str() {
                    "file" => {
                        if let Some(fname) = field.file_name() {
                            file_name = fname.to_string();
                        }
                        if let Some(ct) = field.content_type() {
                            content_type = ct.to_string();
                        }
                        match field.bytes().await {
                            Ok(b) => file_bytes = Some(b.to_vec()),
                            Err(e) => {
                                return (StatusCode::BAD_REQUEST, format!("Failed to read file: {e}")).into_response();
                            }
                        }
                    }
                    "description" => {
                        description = field.text().await.unwrap_or_default();
                    }
                    _ => {}
                }
            }
            Ok(None) => break,
            Err(e) => {
                return (StatusCode::BAD_REQUEST, format!("Multipart error: {e}")).into_response();
            }
        }
    }

    let Some(bytes) = file_bytes else {
        return (StatusCode::BAD_REQUEST, "Missing file field").into_response();
    };

    let part = match reqwest::multipart::Part::bytes(bytes)
        .file_name(file_name)
        .mime_str(&content_type)
    {
        Ok(p) => p,
        Err(e) => return (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()).into_response(),
    };

    let tags = extract_tags(&description);

    let mut form = reqwest::multipart::Form::new()
        .part("file", part)
        .text("application", "paste");

    if !description.is_empty() {
        form = form.text("description", description);
    }
    if !tags.is_empty() {
        form = form.text("tags", tags);
    }

    let resp = state
        .client
        .post(format!("{CAN_API}/ingest"))
        .multipart(form)
        .send()
        .await;

    forward(resp).await
}

/// List items with application=paste, newest first.
async fn paste_list(State(state): State<AppState>) -> Response {
    let resp = state
        .client
        .get(format!(
            "{CAN_API}/list?application=paste&order=desc&limit=100"
        ))
        .send()
        .await;

    forward(resp).await
}

/// Proxy asset download by hash.
async fn proxy_asset(
    State(state): State<AppState>,
    Path(hash): Path<String>,
) -> Response {
    let resp = state
        .client
        .get(format!("{CAN_API}/asset/{hash}"))
        .send()
        .await;

    forward(resp).await
}

/// Proxy thumbnail (200x200) by hash.
async fn proxy_thumb(
    State(state): State<AppState>,
    Path(hash): Path<String>,
) -> Response {
    let resp = state
        .client
        .get(format!("{CAN_API}/asset/{hash}/thumb/200/200"))
        .send()
        .await;

    forward(resp).await
}

// ── Main ─────────────────────────────────────────────────────────────────

#[tokio::main]
async fn main() {
    tracing_subscriber::fmt()
        .with_env_filter(
            tracing_subscriber::EnvFilter::try_from_default_env()
                .unwrap_or_else(|_| "paste=info".into()),
        )
        .init();

    let state = AppState {
        client: reqwest::Client::new(),
    };

    let app = Router::new()
        .route("/", get(serve_index))
        .route("/paste/text", post(paste_text))
        .route("/paste/file", post(paste_file))
        .route("/paste/list", get(paste_list))
        .route("/paste/asset/{hash}", get(proxy_asset))
        .route("/paste/thumb/{hash}", get(proxy_thumb))
        .layer(DefaultBodyLimit::max(100 * 1024 * 1024)) // 100 MB
        .with_state(state);

    let addr = SocketAddr::from(([127, 0, 0, 1], 3211));
    tracing::info!("paste running at http://{addr}");
    tracing::info!("requires CanService at http://127.0.0.1:3210");

    // Open browser (best-effort, won't crash on headless)
    let url = format!("http://{addr}");
    let _ = open::that(&url);

    let listener = tokio::net::TcpListener::bind(addr).await.unwrap();
    axum::serve(listener, app).await.unwrap();
}
69
go_example_1.ps1
Normal file
@ -0,0 +1,69 @@
# go_example_1.ps1 — Start CanService + Paste example, open browser

$ErrorActionPreference = "Stop"
$root = $PSScriptRoot

# Kill any leftover processes on our ports
Write-Host "Cleaning up stale processes..." -ForegroundColor Yellow
Get-NetTCPConnection -LocalPort 3210 -ErrorAction SilentlyContinue |
    ForEach-Object { Stop-Process -Id $_.OwningProcess -Force -ErrorAction SilentlyContinue }
Get-NetTCPConnection -LocalPort 3211 -ErrorAction SilentlyContinue |
    ForEach-Object { Stop-Process -Id $_.OwningProcess -Force -ErrorAction SilentlyContinue }
Start-Sleep -Milliseconds 500

Write-Host "Building CanService..." -ForegroundColor Cyan
cargo build --manifest-path "$root\Cargo.toml"

Write-Host "Building Paste example..." -ForegroundColor Cyan
cargo build --manifest-path "$root\examples\paste\Cargo.toml"

# Start CanService in background
Write-Host "Starting CanService on port 3210..." -ForegroundColor Green
$canService = Start-Process -FilePath "cargo" `
    -ArgumentList "run --manifest-path `"$root\Cargo.toml`"" `
    -WorkingDirectory $root `
    -PassThru -NoNewWindow

# Wait for CanService to be ready
Write-Host "Waiting for CanService..." -ForegroundColor Yellow
$ready = $false
for ($i = 0; $i -lt 30; $i++) {
    try {
        $null = Invoke-WebRequest -Uri "http://127.0.0.1:3210/api/v1/can/0/list" -TimeoutSec 1 -ErrorAction Stop
        $ready = $true
        break
    } catch {
        Start-Sleep -Milliseconds 500
    }
}
if (-not $ready) {
    Write-Host "CanService failed to start within 15s" -ForegroundColor Red
    Stop-Process -Id $canService.Id -Force -ErrorAction SilentlyContinue
    exit 1
}
Write-Host "CanService ready." -ForegroundColor Green

# Start Paste example (it opens the browser itself)
Write-Host "Starting Paste on port 3211..." -ForegroundColor Green
$paste = Start-Process -FilePath "cargo" `
    -ArgumentList "run --manifest-path `"$root\examples\paste\Cargo.toml`"" `
    -WorkingDirectory $root `
    -PassThru -NoNewWindow

Write-Host ""
Write-Host "Running:" -ForegroundColor Cyan
Write-Host "  CanService -> http://127.0.0.1:3210"
Write-Host "  Paste UI   -> http://127.0.0.1:3211"
Write-Host ""
Write-Host "Press Ctrl+C to stop both." -ForegroundColor Yellow

# Wait for either process to exit, then clean up both
try {
    while (-not $canService.HasExited -and -not $paste.HasExited) {
        Start-Sleep -Seconds 1
    }
} finally {
    Write-Host "Shutting down..." -ForegroundColor Yellow
    Stop-Process -Id $canService.Id -Force -ErrorAction SilentlyContinue
    Stop-Process -Id $paste.Id -Force -ErrorAction SilentlyContinue
}
185
spec.md
Normal file
@ -0,0 +1,185 @@
# CAN (Containerized Asset Network) Service Specification
**Version:** 1.0 (Final MVP)
**Target Language:** Rust

## 1. System Overview
The CAN service is a robust, self-healing local network daemon designed to simulate a high-speed, append-oriented file system. It provides an HTTP REST and Protobuf API to ingest, manage, and retrieve assets (files and data).

To bypass the slow nature of traditional OS file searches, it uses an embedded SQLite database for millisecond querying. To ensure 100% disaster-recovery readiness, critical metadata is redundantly written to the host's native OS file attributes.

**MVP Scope:** This version supports a single, default container. To future-proof the API, all routes require a `{can_id}` parameter, which **must always be `0`**. Physically, all data is mapped flatly to the configured storage root.

---

## 2. Directory Structure & Configuration
The system uses a flat directory structure within the configured root folder.

**Physical Structure:**
```text
/var/lib/can_data/           # Defined by storage_root
├── .can.db                  # Master SQLite Index (Hidden)
├── .trash/                  # Soft-deleted physical assets (Hidden)
├── .thumbs/                 # Cached thumbnail images (Hidden, if enabled)
├── 1773014400123_a3b2...    # Physical Asset
└── 1773014405999_f8c9...    # Physical Asset
```

**Configuration (`config.yaml`):**
```yaml
storage_root: "/var/lib/can_data"    # Absolute path to the storage folder
admin_token: "super_secret_rebuild"  # Bearer token for admin operations
enable_thumbnail_cache: true         # Toggle caching in .thumbs/
rebuild_error_threshold: 50          # Tolerance before triggering a hard rebuild
verify_interval_hours: 12            # Frequency of full background hash verification
```

---

## 3. Storage Mechanics & Disaster Recovery

### 3.1 Cryptographic Naming Convention
Files are written with a strict physical naming format to allow offline, mathematical verification of integrity.
**Format:** `{timestamp}_{sha256}_{truncated_tags}.{extension}`

* `timestamp`: Epoch Unix timestamp in milliseconds (e.g., `1773014400123`).
* `sha256`: A SHA-256 hash calculated exactly as: `SHA256([timestamp_bytes] + [raw_file_content_bytes])`.
* `truncated_tags`: Tags joined by underscores (`_`). Non-alphanumeric characters stripped. Safely truncated to ensure the total filename stays safely under OS path limits (~255 chars). Omitted if no tags provided.
* `extension`: Derived from the `mime_type` or magic bytes (e.g., `.pdf`, `.json`).
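The filename assembly described above can be sketched in Rust. This is a hypothetical helper, not the service's actual code: the hash is taken as an input string (the real service computes `SHA256(timestamp_bytes + content)` first), and the `200`-character budget is an assumed safety margin under the ~255-char limit.

```rust
/// Hypothetical sketch of the physical-filename rules from section 3.1.
fn physical_filename(timestamp_ms: u64, sha256_hex: &str, tags: &[&str], extension: &str) -> String {
    const MAX_LEN: usize = 200; // assumed margin, safely under the ~255-char OS limit

    // Strip non-alphanumeric characters from each tag, then join with '_'.
    let joined: String = tags
        .iter()
        .map(|t| t.chars().filter(|c| c.is_ascii_alphanumeric()).collect::<String>())
        .filter(|t| !t.is_empty())
        .collect::<Vec<_>>()
        .join("_");

    // Tags are omitted entirely when none survive sanitization.
    let base = if joined.is_empty() {
        format!("{timestamp_ms}_{sha256_hex}")
    } else {
        format!("{timestamp_ms}_{sha256_hex}_{joined}")
    };

    // Truncate from the end (so the tag portion shrinks first) if too long.
    let budget = MAX_LEN.saturating_sub(extension.len() + 1);
    let base: String = base.chars().take(budget).collect();
    format!("{base}.{extension}")
}

fn main() {
    let name = physical_filename(1773014400123, "a3b2c1", &["beach trip!", "2024"], "jpg");
    println!("{name}"); // 1773014400123_a3b2c1_beachtrip_2024.jpg
}
```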
### 3.2 Native OS File Attributes
To guarantee the SQLite database can be rebuilt from scratch, critical metadata is bound directly to the file using OS-level attributes (Extended Attributes / `xattr` on Linux/macOS; NTFS Alternate Data Streams on Windows).

**Required Attributes:**
* `can.application`: Software that ingested the file.
* `can.user`: User identity.
* `can.tags`: **The complete, unbounded, comma-separated list of tags.**
* `can.description`: Human-readable description.
* `can.human_filename`: The logical filename provided during ingestion.
* `can.human_path`: The logical folder path provided during ingestion.
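A minimal sketch of building this attribute set, assuming a hypothetical helper (the actual write would go through platform xattr/ADS APIs). Empty fields are skipped so a rebuild only sees attributes that were actually provided at ingest time:

```rust
/// Hypothetical helper: assemble the `can.*` key/value pairs from 3.2
/// that would then be written as xattrs (Linux/macOS) or ADS (Windows).
fn os_attributes(
    application: &str,
    user: &str,
    tags: &[&str],
    description: &str,
    human_filename: &str,
    human_path: &str,
) -> Vec<(String, String)> {
    let mut attrs = Vec::new();
    let mut push = |key: &str, value: String| {
        // Skip attributes that were not provided at ingest time.
        if !value.is_empty() {
            attrs.push((key.to_string(), value));
        }
    };
    push("can.application", application.to_string());
    push("can.user", user.to_string());
    push("can.tags", tags.join(",")); // complete, unbounded tag list
    push("can.description", description.to_string());
    push("can.human_filename", human_filename.to_string());
    push("can.human_path", human_path.to_string());
    attrs
}

fn main() {
    for (k, v) in os_attributes("paste", "jason", &["vacation", "beach"], "Summer trip", "photo.jpg", "/img/") {
        println!("{k} = {v}");
    }
}
```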
---
|
||||||
|
|
||||||
|
## 4. Metadata Indexing (`.can.db`)
|
||||||
|
A fully normalized SQLite database located at `{storage_root}/.can.db`.
|
||||||
|
|
||||||
|
**Schema:**
|
||||||
|
```sql
|
||||||
|
CREATE TABLE assets (
|
||||||
|
id INTEGER PRIMARY KEY AUTOINCREMENT,
|
||||||
|
timestamp INTEGER NOT NULL,
|
||||||
|
hash TEXT NOT NULL UNIQUE,
|
||||||
|
mime_type TEXT NOT NULL,
|
||||||
|
application TEXT,
|
||||||
|
user_identity TEXT,
|
||||||
|
description TEXT,
|
||||||
|
actual_filename TEXT NOT NULL,
|
||||||
|
human_filename TEXT,
|
||||||
|
human_path TEXT,
|
||||||
|
is_trashed BOOLEAN NOT NULL DEFAULT 0,
|
||||||
|
is_corrupted BOOLEAN NOT NULL DEFAULT 0
|
||||||
|
);
|
||||||
|
|
||||||
|
CREATE TABLE tags (
|
||||||
|
id INTEGER PRIMARY KEY AUTOINCREMENT,
|
||||||
|
name TEXT NOT NULL UNIQUE
|
||||||
|
);
|
||||||
|
|
||||||
|
CREATE TABLE asset_tags (
|
||||||
|
asset_id INTEGER NOT NULL,
|
||||||
|
tag_id INTEGER NOT NULL,
|
||||||
|
PRIMARY KEY (asset_id, tag_id),
|
||||||
|
FOREIGN KEY (asset_id) REFERENCES assets(id) ON DELETE CASCADE,
|
||||||
|
FOREIGN KEY (tag_id) REFERENCES tags(id) ON DELETE CASCADE
|
||||||
|
);
|
||||||
|
|
||||||
|
-- Optimization Indexes
|
||||||
|
CREATE INDEX idx_hash ON assets(hash);
|
||||||
|
CREATE INDEX idx_timestamp ON assets(timestamp);
|
||||||
|
CREATE INDEX idx_application ON assets(application);
|
||||||
|
CREATE INDEX idx_user ON assets(user_identity);
|
||||||
|
CREATE INDEX idx_trashed ON assets(is_trashed);
|
||||||
|
CREATE INDEX idx_tag_name ON tags(name);
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 5. Background Verifier Subsystem
|
||||||
|
A low-priority background thread dedicated to data integrity.
|
||||||
|
|
||||||
|
1. **Initial Scrub:** Runs on startup. Verifies `SHA256(timestamp + content)` for all files against their filenames.
|
||||||
|
2. **Continuous Monitoring:** Hooks into OS file system events (e.g., `inotify`). If a file is touched or altered by an external program, the verifier immediately rescans it.
|
||||||
|
3. **Periodic Scrub:** Runs every `verify_interval_hours` to catch silent bit rot.
|
||||||
|
4. **Corruption Handling:** If a hash mismatch is found, it flags `is_corrupted = 1` in `.can.db`. Corrupted files are explicitly marked in API responses and excluded from standard operations.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 6. API Endpoints
|
||||||
|
|
||||||
|
**Protocol Negotation:**
|
||||||
|
All endpoints communicate in JSON by default. Clients can request/send **Protocol Buffers** by providing the HTTP headers:
|
||||||
|
* `Accept: application/x-protobuf`
|
||||||
|
* `Content-Type: application/x-protobuf`
|
||||||
|
|
||||||
|
*(Note: Endpoint paths below use `{can_id}` which must be passed as `0`)*
|
||||||
|
|
||||||
|
### 6.1 Ingest Data
|
||||||
|
* **Method:** `POST`
|
||||||
|
* **Path:** `/api/v1/can/0/ingest`
|
||||||
|
* **Content-Type:** `multipart/form-data`
|
||||||
|
* **Form Payload:**
|
||||||
|
* `file` (Binary File) - **Required**
|
||||||
|
* `mime_type` (String) - *Optional*
|
||||||
|
* `human_file_name` (String) - *Optional*
|
||||||
|
* `human_readable_path` (String) - *Optional*
|
||||||
|
* `application` (String) - *Optional*
|
||||||
|
* `user` (String) - *Optional*
|
||||||
|
* `tags` (String) - *Optional* (comma-separated)
|
||||||
|
* `description` (String) - *Optional*
|
||||||
|
* **Action:** Hashes file, writes to `{storage_root}`, attaches OS attributes, logs to DB.
|
||||||
|
* **Response (JSON):**
|
||||||
|
`{ "status": "success", "data": { "timestamp": 1773014400123, "hash": "abc...", "filename": "1773014400123_abc_tag.pdf" } }`
|
||||||
|
|
||||||
|
### 6.2 Retrieve Physical Asset
|
||||||
|
* **Method:** `GET`
|
||||||
|
* **Path:** `/api/v1/can/0/asset/{hash}`
|
||||||
|
* **Action:** Streams the physical file. Sets `Content-Type` via DB mapping and `Content-Disposition` using `human_filename`. Returns 500/Warning if `is_corrupted = 1`.
|
||||||
|
|
||||||
|
### 6.3 Retrieve Asset Metadata
|
||||||
|
* **Method:** `GET`
|
||||||
|
* **Path:** `/api/v1/can/0/asset/{hash}/meta`
|
||||||
|
* **Action:** Returns DB record.
|
||||||
|
* **Response (JSON):**
|
||||||
|
`{ "status": "success", "data": { "hash": "abc...", "mime_type": "image/jpeg", "application": "WebUI", "user": "Jason", "tags": ["tag1", "tag2"], "description": "...", "human_filename": "photo.jpg", "human_path": "/img/", "timestamp": 1773014400123, "is_trashed": false, "is_corrupted": false } }`
|
||||||
|
|
||||||
|
### 6.4 Retrieve Thumbnail
|
||||||
|
* **Method:** `GET`
|
||||||
|
* **Path:** `/api/v1/can/0/asset/{hash}/thumb/{max_width}/{max_height}`
|
||||||
|
* **Action:** Resizes image strictly preserving aspect ratio. Falls back to static icon (SVG/PNG) for non-images. If `enable_thumbnail_cache=true`, reads/writes to `{storage_root}/.thumbs/{hash}_{max_width}x{max_height}.jpg`. Streams byte payload.
|
||||||
|
|
||||||
|
### 6.5 Modify Metadata
|
||||||
|
* **Method:** `PATCH`
|
||||||
|
* **Path:** `/api/v1/can/0/asset/{hash}`
|
||||||
|
* **Body (JSON/Protobuf):**
|
||||||
|
`{ "tags": ["new_tag1", "new_tag2"], "description": "New description" }`
|
||||||
|
* **Action:** Updates `can.tags` and `can.description` OS Attributes. Updates SQLite `assets`, `tags`, and `asset_tags` tables inside a transaction. Physical filename remains unchanged.
|
||||||
|
|
||||||
|
### 6.6 List Assets

* **Method:** `GET`
* **Path:** `/api/v1/can/0/list`
* **Query Parameters:**
  * `limit` (Integer) - Default `50`
  * `offset` (Integer) - Default `0`
  * `offset_time` (Integer) - *Optional*. Epoch ms. High-speed cursor. Lists items strictly after/before this timestamp based on `order`.
  * `order` (String) - `asc` or `desc`. Default `desc`.
  * `application` (String) - *Optional*. Scopes list exclusively to files ingested by this Application ID.
  * `include_trashed` (Boolean) - Default `false`.
  * `include_corrupted` (Boolean) - Default `false`.
* **Response:** Paginated array of metadata objects (matching 6.3 output) + `pagination` block.
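The `offset_time` parameter is a keyset cursor: rather than skipping rows with `OFFSET`, the client passes the timestamp of the last item it saw and gets items strictly before it (for `desc`). A minimal Python `sqlite3` illustration, assuming only the `assets(hash, timestamp)` shape from this commit:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE assets (hash TEXT, timestamp INTEGER)")
conn.executemany(
    "INSERT INTO assets VALUES (?, ?)",
    [(f"h{i}", 1000 + i) for i in range(5)],
)

def page_desc(offset_time=None, limit=2):
    """One keyset page, newest first: items strictly before the cursor."""
    if offset_time is None:
        sql = "SELECT hash, timestamp FROM assets ORDER BY timestamp DESC LIMIT ?"
        return conn.execute(sql, (limit,)).fetchall()
    sql = ("SELECT hash, timestamp FROM assets WHERE timestamp < ? "
           "ORDER BY timestamp DESC LIMIT ?")
    return conn.execute(sql, (offset_time, limit)).fetchall()

first = page_desc()                            # newest two rows
second = page_desc(offset_time=first[-1][1])   # next two, before the cursor
print(first, second)
```

Unlike `OFFSET`, the cursor stays stable while new assets are being ingested and stays fast on the `idx_timestamp` index regardless of page depth.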
### 6.7 Search Assets

* **Method:** `GET`
* **Path:** `/api/v1/can/0/search`
* **Query Parameters:**
  * `hash` (String) - Exact or partial prefix.
  * `start_time` (Integer) - Epoch ms.
  * `end_time` (Integer) - Epoch ms.
## `src/config.rs` (new file, 112 lines)
```rust
use serde::Deserialize;
use std::path::{Path, PathBuf};

#[derive(Debug, Clone, Deserialize)]
pub struct Config {
    pub storage_root: PathBuf,
    #[serde(default = "default_admin_token")]
    pub admin_token: String,
    #[serde(default = "default_true")]
    pub enable_thumbnail_cache: bool,
    #[serde(default = "default_rebuild_threshold")]
    pub rebuild_error_threshold: u32,
    #[serde(default = "default_verify_interval")]
    pub verify_interval_hours: u64,
}

fn default_admin_token() -> String {
    "changeme".to_string()
}
fn default_true() -> bool {
    true
}
fn default_rebuild_threshold() -> u32 {
    50
}
fn default_verify_interval() -> u64 {
    12
}

impl Config {
    pub fn load(path: &Path) -> anyhow::Result<Self> {
        let contents = std::fs::read_to_string(path)?;
        let config: Config = serde_yaml::from_str(&contents)?;
        Ok(config)
    }

    pub fn db_path(&self) -> PathBuf {
        self.storage_root.join(".can.db")
    }

    pub fn trash_dir(&self) -> PathBuf {
        self.storage_root.join(".trash")
    }

    pub fn thumbs_dir(&self) -> PathBuf {
        self.storage_root.join(".thumbs")
    }

    pub fn ensure_dirs(&self) -> anyhow::Result<()> {
        std::fs::create_dir_all(&self.storage_root)?;
        std::fs::create_dir_all(self.trash_dir())?;
        if self.enable_thumbnail_cache {
            std::fs::create_dir_all(self.thumbs_dir())?;
        }
        Ok(())
    }
}

#[cfg(test)]
mod tests {
    use super::*;
    use tempfile::TempDir;

    #[test]
    fn test_load_config() {
        let dir = TempDir::new().unwrap();
        let config_path = dir.path().join("config.yaml");
        std::fs::write(
            &config_path,
            r#"
storage_root: "/tmp/can_test"
admin_token: "test_token"
enable_thumbnail_cache: false
rebuild_error_threshold: 10
verify_interval_hours: 6
"#,
        )
        .unwrap();

        let config = Config::load(&config_path).unwrap();
        assert_eq!(config.storage_root, PathBuf::from("/tmp/can_test"));
        assert_eq!(config.admin_token, "test_token");
        assert!(!config.enable_thumbnail_cache);
        assert_eq!(config.rebuild_error_threshold, 10);
        assert_eq!(config.verify_interval_hours, 6);
    }

    #[test]
    fn test_config_defaults() {
        let dir = TempDir::new().unwrap();
        let config_path = dir.path().join("config.yaml");
        std::fs::write(&config_path, "storage_root: /tmp/test\n").unwrap();

        let config = Config::load(&config_path).unwrap();
        assert_eq!(config.admin_token, "changeme");
        assert!(config.enable_thumbnail_cache);
        assert_eq!(config.rebuild_error_threshold, 50);
        assert_eq!(config.verify_interval_hours, 12);
    }

    #[test]
    fn test_derived_paths() {
        let dir = TempDir::new().unwrap();
        let config_path = dir.path().join("config.yaml");
        std::fs::write(&config_path, "storage_root: /data/can\n").unwrap();
        let config = Config::load(&config_path).unwrap();

        assert_eq!(config.db_path(), PathBuf::from("/data/can/.can.db"));
        assert_eq!(config.trash_dir(), PathBuf::from("/data/can/.trash"));
        assert_eq!(config.thumbs_dir(), PathBuf::from("/data/can/.thumbs"));
    }
}
```
## `src/db.rs` (new file, 663 lines)
```rust
use rusqlite::{params, Connection, OptionalExtension};
use std::path::Path;
use std::sync::{Arc, Mutex};

use crate::models::{Asset, AssetMeta, ListParams, SearchParams};

pub type Db = Arc<Mutex<Connection>>;

pub fn open(path: &Path) -> anyhow::Result<Db> {
    let conn = Connection::open(path)?;
    conn.execute_batch("PRAGMA journal_mode=WAL; PRAGMA foreign_keys=ON;")?;
    init_schema(&conn)?;
    Ok(Arc::new(Mutex::new(conn)))
}

pub fn open_in_memory() -> anyhow::Result<Db> {
    let conn = Connection::open_in_memory()?;
    conn.execute_batch("PRAGMA foreign_keys=ON;")?;
    init_schema(&conn)?;
    Ok(Arc::new(Mutex::new(conn)))
}

fn init_schema(conn: &Connection) -> rusqlite::Result<()> {
    conn.execute_batch(
        "
        CREATE TABLE IF NOT EXISTS assets (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            timestamp INTEGER NOT NULL,
            hash TEXT NOT NULL UNIQUE,
            mime_type TEXT NOT NULL,
            application TEXT,
            user_identity TEXT,
            description TEXT,
            actual_filename TEXT NOT NULL,
            human_filename TEXT,
            human_path TEXT,
            is_trashed BOOLEAN NOT NULL DEFAULT 0,
            is_corrupted BOOLEAN NOT NULL DEFAULT 0
        );

        CREATE TABLE IF NOT EXISTS tags (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            name TEXT NOT NULL UNIQUE
        );

        CREATE TABLE IF NOT EXISTS asset_tags (
            asset_id INTEGER NOT NULL,
            tag_id INTEGER NOT NULL,
            PRIMARY KEY (asset_id, tag_id),
            FOREIGN KEY (asset_id) REFERENCES assets(id) ON DELETE CASCADE,
            FOREIGN KEY (tag_id) REFERENCES tags(id) ON DELETE CASCADE
        );

        CREATE INDEX IF NOT EXISTS idx_hash ON assets(hash);
        CREATE INDEX IF NOT EXISTS idx_timestamp ON assets(timestamp);
        CREATE INDEX IF NOT EXISTS idx_application ON assets(application);
        CREATE INDEX IF NOT EXISTS idx_user ON assets(user_identity);
        CREATE INDEX IF NOT EXISTS idx_trashed ON assets(is_trashed);
        CREATE INDEX IF NOT EXISTS idx_tag_name ON tags(name);
        ",
    )?;

    // Migration: add size column (ignore error if column already exists)
    let _ = conn.execute("ALTER TABLE assets ADD COLUMN size INTEGER NOT NULL DEFAULT 0", []);

    Ok(())
}

/// Insert a new asset. Returns the row id.
pub fn insert_asset(conn: &Connection, asset: &Asset) -> rusqlite::Result<i64> {
    conn.execute(
        "INSERT INTO assets (timestamp, hash, mime_type, application, user_identity, description, actual_filename, human_filename, human_path, size)
         VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7, ?8, ?9, ?10)",
        params![
            asset.timestamp,
            asset.hash,
            asset.mime_type,
            asset.application,
            asset.user_identity,
            asset.description,
            asset.actual_filename,
            asset.human_filename,
            asset.human_path,
            asset.size,
        ],
    )?;
    Ok(conn.last_insert_rowid())
}

/// Look up an asset by its hash.
pub fn get_asset_by_hash(conn: &Connection, hash: &str) -> rusqlite::Result<Option<Asset>> {
    conn.query_row(
        "SELECT id, timestamp, hash, mime_type, application, user_identity, description,
                actual_filename, human_filename, human_path, is_trashed, is_corrupted, size
         FROM assets WHERE hash = ?1",
        params![hash],
        |row| {
            Ok(Asset {
                id: row.get(0)?,
                timestamp: row.get(1)?,
                hash: row.get(2)?,
                mime_type: row.get(3)?,
                application: row.get(4)?,
                user_identity: row.get(5)?,
                description: row.get(6)?,
                actual_filename: row.get(7)?,
                human_filename: row.get(8)?,
                human_path: row.get(9)?,
                is_trashed: row.get(10)?,
                is_corrupted: row.get(11)?,
                size: row.get(12)?,
            })
        },
    )
    .optional()
}

/// Get tags for an asset.
pub fn get_asset_tags(conn: &Connection, asset_id: i64) -> rusqlite::Result<Vec<String>> {
    let mut stmt = conn.prepare(
        "SELECT t.name FROM tags t
         JOIN asset_tags at ON at.tag_id = t.id
         WHERE at.asset_id = ?1
         ORDER BY t.name",
    )?;
    let tags = stmt.query_map(params![asset_id], |row| row.get(0))?;
    tags.collect()
}

/// Upsert a tag and return its id.
pub fn upsert_tag(conn: &Connection, name: &str) -> rusqlite::Result<i64> {
    conn.execute("INSERT OR IGNORE INTO tags (name) VALUES (?1)", params![name])?;
    conn.query_row("SELECT id FROM tags WHERE name = ?1", params![name], |row| {
        row.get(0)
    })
}

/// Replace all tags for an asset within a transaction.
pub fn set_asset_tags(conn: &Connection, asset_id: i64, tags: &[String]) -> rusqlite::Result<()> {
    conn.execute("DELETE FROM asset_tags WHERE asset_id = ?1", params![asset_id])?;
    for tag in tags {
        let tag_id = upsert_tag(conn, tag)?;
        conn.execute(
            "INSERT OR IGNORE INTO asset_tags (asset_id, tag_id) VALUES (?1, ?2)",
            params![asset_id, tag_id],
        )?;
    }
    Ok(())
}

/// Build an AssetMeta from an Asset row + tags.
pub fn asset_to_meta(conn: &Connection, asset: &Asset) -> rusqlite::Result<AssetMeta> {
    let tags = get_asset_tags(conn, asset.id)?;
    Ok(AssetMeta {
        hash: asset.hash.clone(),
        mime_type: asset.mime_type.clone(),
        application: asset.application.clone(),
        user: asset.user_identity.clone(),
        tags,
        description: asset.description.clone(),
        human_filename: asset.human_filename.clone(),
        human_path: asset.human_path.clone(),
        timestamp: asset.timestamp,
        is_trashed: asset.is_trashed,
        is_corrupted: asset.is_corrupted,
        size: asset.size,
    })
}

/// Update description and/or tags for an asset.
pub fn update_asset_metadata(
    conn: &Connection,
    hash: &str,
    description: Option<&str>,
    tags: Option<&[String]>,
) -> rusqlite::Result<()> {
    let asset = get_asset_by_hash(conn, hash)?.ok_or(rusqlite::Error::QueryReturnedNoRows)?;

    if let Some(desc) = description {
        conn.execute(
            "UPDATE assets SET description = ?1 WHERE id = ?2",
            params![desc, asset.id],
        )?;
    }
    if let Some(tags) = tags {
        set_asset_tags(conn, asset.id, tags)?;
    }
    Ok(())
}

/// Flag an asset as corrupted.
pub fn flag_corrupted(conn: &Connection, hash: &str, corrupted: bool) -> rusqlite::Result<()> {
    conn.execute(
        "UPDATE assets SET is_corrupted = ?1 WHERE hash = ?2",
        params![corrupted, hash],
    )?;
    Ok(())
}

/// Update file size for an asset (used by verifier to backfill).
pub fn update_asset_size(conn: &Connection, hash: &str, size: i64) -> rusqlite::Result<()> {
    conn.execute(
        "UPDATE assets SET size = ?1 WHERE hash = ?2",
        params![size, hash],
    )?;
    Ok(())
}

/// Soft-delete: mark as trashed.
pub fn trash_asset(conn: &Connection, hash: &str) -> rusqlite::Result<()> {
    conn.execute("UPDATE assets SET is_trashed = 1 WHERE hash = ?1", params![hash])?;
    Ok(())
}

/// List assets with pagination and filtering.
pub fn list_assets(conn: &Connection, params: &ListParams) -> rusqlite::Result<(Vec<Asset>, i64)> {
    let limit = params.limit.unwrap_or(50);
    let offset = params.offset.unwrap_or(0);
    let order = match params.order.as_deref() {
        Some("asc") => "ASC",
        _ => "DESC",
    };
    let include_trashed = params.include_trashed.unwrap_or(false);
    let include_corrupted = params.include_corrupted.unwrap_or(false);

    let mut conditions = Vec::new();
    let mut bind_values: Vec<Box<dyn rusqlite::types::ToSql>> = Vec::new();

    if !include_trashed {
        conditions.push("is_trashed = 0");
    }
    if !include_corrupted {
        conditions.push("is_corrupted = 0");
    }
    if let Some(ref app) = params.application {
        conditions.push("application = ?");
        bind_values.push(Box::new(app.clone()));
    }
    if let Some(offset_time) = params.offset_time {
        if order == "DESC" {
            conditions.push("timestamp < ?");
        } else {
            conditions.push("timestamp > ?");
        }
        bind_values.push(Box::new(offset_time));
    }

    let where_clause = if conditions.is_empty() {
        String::new()
    } else {
        format!("WHERE {}", conditions.join(" AND "))
    };

    let count_sql = format!("SELECT COUNT(*) FROM assets {}", where_clause);
    let refs: Vec<&dyn rusqlite::types::ToSql> = bind_values.iter().map(|b| b.as_ref()).collect();
    let total: i64 = conn.query_row(&count_sql, refs.as_slice(), |row| row.get(0))?;

    let query_sql = format!(
        "SELECT id, timestamp, hash, mime_type, application, user_identity, description,
                actual_filename, human_filename, human_path, is_trashed, is_corrupted, size
         FROM assets {} ORDER BY timestamp {} LIMIT ? OFFSET ?",
        where_clause, order
    );

    let mut all_binds: Vec<Box<dyn rusqlite::types::ToSql>> = bind_values;
    all_binds.push(Box::new(limit));
    all_binds.push(Box::new(offset));
    let refs2: Vec<&dyn rusqlite::types::ToSql> = all_binds.iter().map(|b| b.as_ref()).collect();

    let mut stmt = conn.prepare(&query_sql)?;
    let assets = stmt
        .query_map(refs2.as_slice(), |row| {
            Ok(Asset {
                id: row.get(0)?,
                timestamp: row.get(1)?,
                hash: row.get(2)?,
                mime_type: row.get(3)?,
                application: row.get(4)?,
                user_identity: row.get(5)?,
                description: row.get(6)?,
                actual_filename: row.get(7)?,
                human_filename: row.get(8)?,
                human_path: row.get(9)?,
                is_trashed: row.get(10)?,
                is_corrupted: row.get(11)?,
                size: row.get(12)?,
            })
        })?
        .collect::<rusqlite::Result<Vec<_>>>()?;

    Ok((assets, total))
}

/// Search assets with various filters.
pub fn search_assets(
    conn: &Connection,
    params: &SearchParams,
) -> rusqlite::Result<(Vec<Asset>, i64)> {
    let limit = params.limit.unwrap_or(50);
    let offset = params.offset.unwrap_or(0);
    let order = match params.order.as_deref() {
        Some("asc") => "ASC",
        _ => "DESC",
    };
    let include_trashed = params.include_trashed.unwrap_or(false);
    let include_corrupted = params.include_corrupted.unwrap_or(false);

    let mut conditions = Vec::new();
    let mut bind_values: Vec<Box<dyn rusqlite::types::ToSql>> = Vec::new();

    if !include_trashed {
        conditions.push("a.is_trashed = 0".to_string());
    }
    if !include_corrupted {
        conditions.push("a.is_corrupted = 0".to_string());
    }
    if let Some(ref hash) = params.hash {
        conditions.push("a.hash LIKE ?".to_string());
        bind_values.push(Box::new(format!("{}%", hash)));
    }
    if let Some(start) = params.start_time {
        conditions.push("a.timestamp >= ?".to_string());
        bind_values.push(Box::new(start));
    }
    if let Some(end) = params.end_time {
        conditions.push("a.timestamp <= ?".to_string());
        bind_values.push(Box::new(end));
    }
    if let Some(ref mime) = params.mime_type {
        conditions.push("a.mime_type = ?".to_string());
        bind_values.push(Box::new(mime.clone()));
    }
    if let Some(ref user) = params.user {
        conditions.push("a.user_identity = ?".to_string());
        bind_values.push(Box::new(user.clone()));
    }
    if let Some(ref app) = params.application {
        conditions.push("a.application = ?".to_string());
        bind_values.push(Box::new(app.clone()));
    }

    // Tag filtering: AND logic - asset must have ALL specified tags.
    // Handled entirely by the IN-subquery below; no extra join needed.
    let tag_names: Vec<String> = params
        .tags
        .as_deref()
        .unwrap_or("")
        .split(',')
        .map(|s| s.trim().to_string())
        .filter(|s| !s.is_empty())
        .collect();

    if !tag_names.is_empty() {
        let placeholders: Vec<String> = tag_names.iter().map(|_| "?".to_string()).collect();
        conditions.push(format!(
            "a.id IN (
                SELECT at.asset_id FROM asset_tags at
                JOIN tags t ON t.id = at.tag_id
                WHERE t.name IN ({})
                GROUP BY at.asset_id
                HAVING COUNT(DISTINCT t.id) = ?
            )",
            placeholders.join(", ")
        ));
        for tag in &tag_names {
            bind_values.push(Box::new(tag.clone()));
        }
        bind_values.push(Box::new(tag_names.len() as i64));
    }

    let where_clause = if conditions.is_empty() {
        String::new()
    } else {
        format!("WHERE {}", conditions.join(" AND "))
    };

    let count_sql = format!("SELECT COUNT(*) FROM assets a {}", where_clause);
    let refs: Vec<&dyn rusqlite::types::ToSql> = bind_values.iter().map(|b| b.as_ref()).collect();
    let total: i64 = conn.query_row(&count_sql, refs.as_slice(), |row| row.get(0))?;

    let query_sql = format!(
        "SELECT a.id, a.timestamp, a.hash, a.mime_type, a.application, a.user_identity,
                a.description, a.actual_filename, a.human_filename, a.human_path,
                a.is_trashed, a.is_corrupted, a.size
         FROM assets a {} ORDER BY a.timestamp {} LIMIT ? OFFSET ?",
        where_clause, order
    );

    let mut all_binds = bind_values;
    all_binds.push(Box::new(limit));
    all_binds.push(Box::new(offset));
    let refs2: Vec<&dyn rusqlite::types::ToSql> = all_binds.iter().map(|b| b.as_ref()).collect();

    let mut stmt = conn.prepare(&query_sql)?;
    let assets = stmt
        .query_map(refs2.as_slice(), |row| {
            Ok(Asset {
                id: row.get(0)?,
                timestamp: row.get(1)?,
                hash: row.get(2)?,
                mime_type: row.get(3)?,
                application: row.get(4)?,
                user_identity: row.get(5)?,
                description: row.get(6)?,
                actual_filename: row.get(7)?,
                human_filename: row.get(8)?,
                human_path: row.get(9)?,
                is_trashed: row.get(10)?,
                is_corrupted: row.get(11)?,
                size: row.get(12)?,
            })
        })?
        .collect::<rusqlite::Result<Vec<_>>>()?;

    Ok((assets, total))
}

/// Get all non-trashed asset records (for verifier startup scan).
pub fn get_all_active_assets(conn: &Connection) -> rusqlite::Result<Vec<Asset>> {
    let mut stmt = conn.prepare(
        "SELECT id, timestamp, hash, mime_type, application, user_identity, description,
                actual_filename, human_filename, human_path, is_trashed, is_corrupted, size
         FROM assets WHERE is_trashed = 0",
    )?;
    let assets = stmt
        .query_map([], |row| {
            Ok(Asset {
                id: row.get(0)?,
                timestamp: row.get(1)?,
                hash: row.get(2)?,
                mime_type: row.get(3)?,
                application: row.get(4)?,
                user_identity: row.get(5)?,
                description: row.get(6)?,
                actual_filename: row.get(7)?,
                human_filename: row.get(8)?,
                human_path: row.get(9)?,
                is_trashed: row.get(10)?,
                is_corrupted: row.get(11)?,
                size: row.get(12)?,
            })
        })?
        .collect::<rusqlite::Result<Vec<_>>>()?;
    Ok(assets)
}

#[cfg(test)]
mod tests {
    use super::*;

    fn make_test_asset(ts: i64, hash: &str) -> Asset {
        Asset {
            id: 0,
            timestamp: ts,
            hash: hash.to_string(),
            mime_type: "text/plain".to_string(),
            application: Some("test_app".to_string()),
            user_identity: Some("test_user".to_string()),
            description: Some("test desc".to_string()),
            actual_filename: format!("{}_{}.txt", ts, hash),
            human_filename: Some("readme.txt".to_string()),
            human_path: Some("/docs/".to_string()),
            is_trashed: false,
            is_corrupted: false,
            size: 0,
        }
    }

    #[test]
    fn test_insert_and_get_asset() {
        let db = open_in_memory().unwrap();
        let conn = db.lock().unwrap();
        let asset = make_test_asset(1000, "abc123");
        let id = insert_asset(&conn, &asset).unwrap();
        assert!(id > 0);

        let found = get_asset_by_hash(&conn, "abc123").unwrap().unwrap();
        assert_eq!(found.hash, "abc123");
        assert_eq!(found.timestamp, 1000);
        assert_eq!(found.mime_type, "text/plain");
    }

    #[test]
    fn test_get_nonexistent_asset() {
        let db = open_in_memory().unwrap();
        let conn = db.lock().unwrap();
        let found = get_asset_by_hash(&conn, "nonexistent").unwrap();
        assert!(found.is_none());
    }

    #[test]
    fn test_tags() {
        let db = open_in_memory().unwrap();
        let conn = db.lock().unwrap();
        let asset = make_test_asset(2000, "def456");
        let id = insert_asset(&conn, &asset).unwrap();

        let tags = vec!["photo".to_string(), "vacation".to_string()];
        set_asset_tags(&conn, id, &tags).unwrap();

        let fetched = get_asset_tags(&conn, id).unwrap();
        assert_eq!(fetched, vec!["photo", "vacation"]);

        // Replace tags
        let new_tags = vec!["work".to_string()];
        set_asset_tags(&conn, id, &new_tags).unwrap();
        let fetched2 = get_asset_tags(&conn, id).unwrap();
        assert_eq!(fetched2, vec!["work"]);
    }

    #[test]
    fn test_update_metadata() {
        let db = open_in_memory().unwrap();
        let conn = db.lock().unwrap();
        let asset = make_test_asset(3000, "ghi789");
        insert_asset(&conn, &asset).unwrap();

        let new_tags = vec!["updated".to_string()];
        update_asset_metadata(&conn, "ghi789", Some("new desc"), Some(&new_tags)).unwrap();

        let found = get_asset_by_hash(&conn, "ghi789").unwrap().unwrap();
        assert_eq!(found.description, Some("new desc".to_string()));
        let tags = get_asset_tags(&conn, found.id).unwrap();
        assert_eq!(tags, vec!["updated"]);
    }

    #[test]
    fn test_flag_corrupted() {
        let db = open_in_memory().unwrap();
        let conn = db.lock().unwrap();
        let asset = make_test_asset(4000, "corrupt1");
        insert_asset(&conn, &asset).unwrap();

        flag_corrupted(&conn, "corrupt1", true).unwrap();
        let found = get_asset_by_hash(&conn, "corrupt1").unwrap().unwrap();
        assert!(found.is_corrupted);

        flag_corrupted(&conn, "corrupt1", false).unwrap();
        let found2 = get_asset_by_hash(&conn, "corrupt1").unwrap().unwrap();
        assert!(!found2.is_corrupted);
    }

    #[test]
    fn test_trash_asset() {
        let db = open_in_memory().unwrap();
        let conn = db.lock().unwrap();
        let asset = make_test_asset(5000, "trash1");
        insert_asset(&conn, &asset).unwrap();

        trash_asset(&conn, "trash1").unwrap();
        let found = get_asset_by_hash(&conn, "trash1").unwrap().unwrap();
        assert!(found.is_trashed);
    }

    #[test]
    fn test_list_assets_basic() {
        let db = open_in_memory().unwrap();
        let conn = db.lock().unwrap();

        for i in 0..5 {
            let asset = make_test_asset(1000 + i, &format!("hash_{}", i));
            insert_asset(&conn, &asset).unwrap();
        }

        let params = ListParams {
            limit: Some(3),
            offset: Some(0),
            offset_time: None,
            order: Some("desc".to_string()),
            application: None,
            include_trashed: None,
            include_corrupted: None,
        };
        let (assets, total) = list_assets(&conn, &params).unwrap();
        assert_eq!(total, 5);
        assert_eq!(assets.len(), 3);
        // DESC order: highest timestamp first
        assert!(assets[0].timestamp > assets[1].timestamp);
    }

    #[test]
    fn test_list_excludes_trashed_by_default() {
        let db = open_in_memory().unwrap();
        let conn = db.lock().unwrap();

        let a1 = make_test_asset(100, "visible1");
        insert_asset(&conn, &a1).unwrap();
        let a2 = make_test_asset(200, "trashed1");
        insert_asset(&conn, &a2).unwrap();
        trash_asset(&conn, "trashed1").unwrap();

        let params = ListParams {
            limit: None, offset: None, offset_time: None,
            order: None, application: None,
            include_trashed: None, include_corrupted: None,
        };
        let (assets, total) = list_assets(&conn, &params).unwrap();
        assert_eq!(total, 1);
        assert_eq!(assets[0].hash, "visible1");
    }

    #[test]
    fn test_search_by_hash_prefix() {
        let db = open_in_memory().unwrap();
        let conn = db.lock().unwrap();
        let a1 = make_test_asset(100, "abcdef123");
        let a2 = make_test_asset(200, "abcxyz789");
        let a3 = make_test_asset(300, "zzz000111");
        insert_asset(&conn, &a1).unwrap();
        insert_asset(&conn, &a2).unwrap();
        insert_asset(&conn, &a3).unwrap();

        let params = SearchParams {
            hash: Some("abc".to_string()),
            start_time: None, end_time: None, tags: None,
            mime_type: None, user: None, application: None,
            limit: None, offset: None, order: None,
            include_trashed: None, include_corrupted: None,
        };
        let (assets, total) = search_assets(&conn, &params).unwrap();
        assert_eq!(total, 2);
        assert!(assets.iter().all(|a| a.hash.starts_with("abc")));
    }

    #[test]
    fn test_search_by_tags() {
        let db = open_in_memory().unwrap();
        let conn = db.lock().unwrap();

        let a1 = make_test_asset(100, "tagged1");
        let id1 = insert_asset(&conn, &a1).unwrap();
        set_asset_tags(&conn, id1, &["red".to_string(), "blue".to_string()]).unwrap();

        let a2 = make_test_asset(200, "tagged2");
        let id2 = insert_asset(&conn, &a2).unwrap();
        set_asset_tags(&conn, id2, &["red".to_string()]).unwrap();

        // Search for both red AND blue -> only tagged1
        let params = SearchParams {
            hash: None, start_time: None, end_time: None,
            tags: Some("red,blue".to_string()),
            mime_type: None, user: None, application: None,
            limit: None, offset: None, order: None,
            include_trashed: None, include_corrupted: None,
        };
        let (assets, total) = search_assets(&conn, &params).unwrap();
        assert_eq!(total, 1);
        assert_eq!(assets[0].hash, "tagged1");
    }
}
```
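The tag filter in `search_assets` uses a `GROUP BY` / `HAVING COUNT(DISTINCT ...)` subquery so an asset matches only if it carries *all* requested tags. A standalone Python `sqlite3` sketch of that AND-semantics pattern, on a simplified version of the same schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE assets (id INTEGER PRIMARY KEY, hash TEXT);
CREATE TABLE tags (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
CREATE TABLE asset_tags (asset_id INTEGER, tag_id INTEGER,
                         PRIMARY KEY (asset_id, tag_id));
INSERT INTO assets VALUES (1, 'tagged1'), (2, 'tagged2');
INSERT INTO tags (name) VALUES ('red'), ('blue');
INSERT INTO asset_tags VALUES (1, 1), (1, 2), (2, 1);
""")  # tagged1 carries red+blue, tagged2 only red

def search_by_tags(tag_names: list[str]) -> list[str]:
    """Hashes of assets carrying ALL of tag_names (AND semantics)."""
    placeholders = ", ".join("?" for _ in tag_names)
    sql = f"""
        SELECT a.hash FROM assets a
        WHERE a.id IN (
            SELECT at.asset_id FROM asset_tags at
            JOIN tags t ON t.id = at.tag_id
            WHERE t.name IN ({placeholders})
            GROUP BY at.asset_id
            HAVING COUNT(DISTINCT t.id) = ?
        )
    """
    rows = conn.execute(sql, (*tag_names, len(tag_names))).fetchall()
    return sorted(r[0] for r in rows)

print(search_by_tags(["red", "blue"]))  # ['tagged1']
print(search_by_tags(["red"]))          # ['tagged1', 'tagged2']
```

The `HAVING COUNT(DISTINCT t.id) = n` clause is what turns the `IN (...)` match (OR semantics) into "has every one of the n tags."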
## `src/error.rs` (new file, 42 lines)
use axum::http::StatusCode;
use axum::response::{IntoResponse, Response};

use crate::models::ErrorResponse;

#[derive(Debug, thiserror::Error)]
pub enum AppError {
    #[error("Not found: {0}")]
    NotFound(String),

    #[error("Bad request: {0}")]
    BadRequest(String),

    #[error("Asset is corrupted: {0}")]
    Corrupted(String),

    #[error("Database error: {0}")]
    Database(#[from] rusqlite::Error),

    #[error("IO error: {0}")]
    Io(#[from] std::io::Error),

    #[error("Internal error: {0}")]
    Internal(String),
}

impl IntoResponse for AppError {
    fn into_response(self) -> Response {
        let (status, message) = match &self {
            AppError::NotFound(msg) => (StatusCode::NOT_FOUND, msg.clone()),
            AppError::BadRequest(msg) => (StatusCode::BAD_REQUEST, msg.clone()),
            AppError::Corrupted(msg) => (StatusCode::INTERNAL_SERVER_ERROR, msg.clone()),
            AppError::Database(e) => (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()),
            AppError::Io(e) => (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()),
            AppError::Internal(msg) => (StatusCode::INTERNAL_SERVER_ERROR, msg.clone()),
        };

        tracing::error!(%status, error = %message, "request error");

        let body = serde_json::to_string(&ErrorResponse::new(message)).unwrap_or_default();
        (status, [(axum::http::header::CONTENT_TYPE, "application/json")], body).into_response()
    }
}
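Note that `into_response` serializes only the inner message; the `thiserror` display prefix (e.g. `Not found:`) is not part of the body. Every variant therefore reaches the client in the same JSON envelope. A sketch of the body for a missing asset, with an illustrative hash:

```shell
# Wire format produced for AppError::NotFound("Asset not found: abc123"):
printf '{"status":"error","error":"%s"}\n' 'Asset not found: abc123'
```

The `status` and `error` keys come from `ErrorResponse` in `src/models.rs`.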
48 src/hash.rs Normal file
@@ -0,0 +1,48 @@
use sha2::{Digest, Sha256};

/// Compute the asset hash: `SHA256(timestamp_bytes + content_bytes)`.
/// The timestamp is serialized as its 8 big-endian bytes (`i64::to_be_bytes`),
/// matching the spec: `SHA256([timestamp_bytes] + [raw_file_content_bytes])`.
pub fn compute_hash(timestamp_ms: i64, content: &[u8]) -> String {
    let mut hasher = Sha256::new();
    hasher.update(timestamp_ms.to_be_bytes());
    hasher.update(content);
    hex::encode(hasher.finalize())
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_deterministic_hash() {
        let ts = 1773014400123i64;
        let content = b"hello world";
        let h1 = compute_hash(ts, content);
        let h2 = compute_hash(ts, content);
        assert_eq!(h1, h2);
        assert_eq!(h1.len(), 64); // SHA-256 hex = 64 chars
    }

    #[test]
    fn test_different_timestamp_different_hash() {
        let content = b"same content";
        let h1 = compute_hash(1000, content);
        let h2 = compute_hash(2000, content);
        assert_ne!(h1, h2);
    }

    #[test]
    fn test_different_content_different_hash() {
        let ts = 1234567890i64;
        let h1 = compute_hash(ts, b"content A");
        let h2 = compute_hash(ts, b"content B");
        assert_ne!(h1, h2);
    }

    #[test]
    fn test_empty_content() {
        let h = compute_hash(0, b"");
        assert_eq!(h.len(), 64);
    }
}
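Because the digest covers the 8 big-endian timestamp bytes as well as the content, a client can recompute it out-of-band to verify a downloaded asset. A minimal sketch, assuming bash and GNU coreutils (`sha256sum`); the timestamp and content are illustrative:

```shell
# Recompute SHA256(8 big-endian timestamp bytes || raw content bytes).
ts=1773014400123                          # illustrative ingest timestamp (ms)
printf 'hello world' > /tmp/asset_content # illustrative asset content
# %016x yields the i64 as 16 hex digits; sed rewrites them as \xNN escapes.
{ printf "$(printf '%016x' "$ts" | sed 's/../\\x&/g')"; cat /tmp/asset_content; } \
  | sha256sum | cut -d' ' -f1
```

The 64-character hex digest should equal the `hash` field returned at ingest time for the same timestamp and bytes.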
20 src/lib.rs Normal file
@@ -0,0 +1,20 @@
pub mod config;
pub mod db;
pub mod error;
pub mod hash;
pub mod models;
pub mod routes;
pub mod storage;
pub mod verifier;
pub mod xattr;

use std::sync::Arc;

use crate::config::Config;
use crate::db::Db;

#[derive(Clone)]
pub struct AppState {
    pub config: Arc<Config>,
    pub db: Db,
}
64 src/main.rs Normal file
@@ -0,0 +1,64 @@
use std::net::SocketAddr;
use std::path::PathBuf;
use std::sync::Arc;

use axum::extract::DefaultBodyLimit;
use axum::Router;
use tower_http::cors::CorsLayer;
use tower_http::trace::TraceLayer;

use can_service::config::Config;
use can_service::{db, routes, verifier, AppState};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Initialize tracing
    tracing_subscriber::fmt()
        .with_env_filter(
            tracing_subscriber::EnvFilter::try_from_default_env()
                .unwrap_or_else(|_| "can_service=info,tower_http=info".into()),
        )
        .init();

    // Load config
    let config_path = std::env::args()
        .nth(1)
        .map(PathBuf::from)
        .unwrap_or_else(|| PathBuf::from("config.yaml"));

    let config = Config::load(&config_path)?;
    tracing::info!("Loaded config, storage_root: {:?}", config.storage_root);

    // Ensure directories exist
    config.ensure_dirs()?;

    // Open database
    let db = db::open(&config.db_path())?;
    tracing::info!("Database initialized at {:?}", config.db_path());

    let config = Arc::new(config);

    // Start background verifier
    verifier::start((*config).clone(), db.clone());

    let state = AppState {
        config: config.clone(),
        db,
    };

    // Build router
    let app = Router::new()
        .merge(routes::router())
        .layer(DefaultBodyLimit::max(100 * 1024 * 1024)) // 100 MB
        .layer(TraceLayer::new_for_http())
        .layer(CorsLayer::permissive())
        .with_state(state);

    let addr = SocketAddr::from(([0, 0, 0, 0], 3210));
    tracing::info!("CAN service listening on {}", addr);

    let listener = tokio::net::TcpListener::bind(addr).await?;
    axum::serve(listener, app).await?;

    Ok(())
}
164 src/models.rs Normal file
@@ -0,0 +1,164 @@
use serde::{Deserialize, Serialize};

/// Database representation of an asset.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Asset {
    pub id: i64,
    pub timestamp: i64,
    pub hash: String,
    pub mime_type: String,
    pub application: Option<String>,
    pub user_identity: Option<String>,
    pub description: Option<String>,
    pub actual_filename: String,
    pub human_filename: Option<String>,
    pub human_path: Option<String>,
    pub is_trashed: bool,
    pub is_corrupted: bool,
    pub size: i64,
}

/// API-facing asset metadata response.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct AssetMeta {
    pub hash: String,
    pub mime_type: String,
    pub application: Option<String>,
    pub user: Option<String>,
    pub tags: Vec<String>,
    pub description: Option<String>,
    pub human_filename: Option<String>,
    pub human_path: Option<String>,
    pub timestamp: i64,
    pub is_trashed: bool,
    pub is_corrupted: bool,
    pub size: i64,
}

/// Standard API response wrapper.
#[derive(Debug, Serialize, Deserialize)]
pub struct ApiResponse<T: Serialize> {
    pub status: String,
    pub data: T,
}

impl<T: Serialize> ApiResponse<T> {
    pub fn success(data: T) -> Self {
        Self {
            status: "success".to_string(),
            data,
        }
    }
}

/// Error response body.
#[derive(Debug, Serialize, Deserialize)]
pub struct ErrorResponse {
    pub status: String,
    pub error: String,
}

impl ErrorResponse {
    pub fn new(msg: impl Into<String>) -> Self {
        Self {
            status: "error".to_string(),
            error: msg.into(),
        }
    }
}

/// Ingest success response data.
#[derive(Debug, Serialize, Deserialize)]
pub struct IngestResult {
    pub timestamp: i64,
    pub hash: String,
    pub filename: String,
}

/// Pagination metadata in list responses.
#[derive(Debug, Serialize, Deserialize)]
pub struct Pagination {
    pub limit: i64,
    pub offset: i64,
    pub total: i64,
}

/// Paginated list response.
#[derive(Debug, Serialize, Deserialize)]
pub struct ListResponse {
    pub items: Vec<AssetMeta>,
    pub pagination: Pagination,
}

/// PATCH request body for metadata updates.
#[derive(Debug, Deserialize)]
pub struct MetadataUpdate {
    pub tags: Option<Vec<String>>,
    pub description: Option<String>,
}

/// OS-level file attribute metadata (for xattr / NTFS ADS).
#[derive(Debug, Clone, Default, PartialEq)]
pub struct FileAttributes {
    pub mime_type: Option<String>,
    pub application: Option<String>,
    pub user: Option<String>,
    pub tags: Option<String>,
    pub description: Option<String>,
    pub human_filename: Option<String>,
    pub human_path: Option<String>,
}

/// JSON-based data ingest request (agent-friendly, no multipart needed).
///
/// `data` accepts any JSON value — object, array, string, number, etc.
/// It gets serialized to pretty JSON and stored as a `.json` file.
/// Minimal call: `{ "data": {"key": "value"} }`
#[derive(Debug, Deserialize)]
pub struct DataIngestRequest {
    /// The payload to store. Any valid JSON value.
    pub data: serde_json::Value,
    /// Override MIME type. Defaults to `application/json`.
    pub mime_type: Option<String>,
    /// Logical filename (e.g. "agent_config.json").
    pub human_file_name: Option<String>,
    /// Logical folder path (e.g. "/configs/").
    pub human_readable_path: Option<String>,
    /// Application that produced this data.
    pub application: Option<String>,
    /// User / agent identity.
    pub user: Option<String>,
    /// Comma-separated tags.
    pub tags: Option<String>,
    /// Human-readable description.
    pub description: Option<String>,
}

/// Query parameters for the list endpoint.
#[derive(Debug, Deserialize)]
pub struct ListParams {
    pub limit: Option<i64>,
    pub offset: Option<i64>,
    pub offset_time: Option<i64>,
    pub order: Option<String>,
    pub application: Option<String>,
    pub include_trashed: Option<bool>,
    pub include_corrupted: Option<bool>,
}

/// Query parameters for the search endpoint.
#[derive(Debug, Deserialize)]
pub struct SearchParams {
    pub hash: Option<String>,
    pub start_time: Option<i64>,
    pub end_time: Option<i64>,
    pub tags: Option<String>,
    pub mime_type: Option<String>,
    pub user: Option<String>,
    pub application: Option<String>,
    pub limit: Option<i64>,
    pub offset: Option<i64>,
    pub order: Option<String>,
    pub include_trashed: Option<bool>,
    pub include_corrupted: Option<bool>,
}
101 src/routes/asset.rs Normal file
@@ -0,0 +1,101 @@
use axum::body::Body;
use axum::extract::{Path, State};
use axum::http::header;
use axum::response::{IntoResponse, Response};
use axum::routing::{get, patch};
use axum::{Json, Router};
use tokio::fs::File;
use tokio_util::io::ReaderStream;

use crate::error::AppError;
use crate::models::{ApiResponse, FileAttributes, MetadataUpdate};
use crate::{db, xattr, AppState};

pub fn router() -> Router<AppState> {
    Router::new()
        .route("/api/v1/can/0/asset/{hash}", get(get_asset))
        .route("/api/v1/can/0/asset/{hash}", patch(patch_asset))
}

/// GET /api/v1/can/0/asset/{hash} - Stream the physical file.
async fn get_asset(
    State(state): State<AppState>,
    Path(hash): Path<String>,
) -> Result<Response, AppError> {
    let asset = {
        let conn = state.db.lock().unwrap();
        db::get_asset_by_hash(&conn, &hash)?
            .ok_or_else(|| AppError::NotFound(format!("Asset not found: {}", hash)))?
    };

    if asset.is_corrupted {
        return Err(AppError::Corrupted(format!(
            "Asset {} is flagged as corrupted",
            hash
        )));
    }

    let file_path = state.config.storage_root.join(&asset.actual_filename);
    let file = File::open(&file_path).await.map_err(|e| {
        AppError::Internal(format!("Failed to open file {}: {}", asset.actual_filename, e))
    })?;

    let stream = ReaderStream::new(file);
    let body = Body::from_stream(stream);

    let content_type = asset.mime_type.clone();
    let disposition = match &asset.human_filename {
        Some(name) => format!("attachment; filename=\"{}\"", name),
        None => format!("attachment; filename=\"{}\"", asset.actual_filename),
    };

    Ok((
        [
            (header::CONTENT_TYPE, content_type),
            (header::CONTENT_DISPOSITION, disposition),
        ],
        body,
    )
        .into_response())
}

/// PATCH /api/v1/can/0/asset/{hash} - Update metadata (tags, description).
async fn patch_asset(
    State(state): State<AppState>,
    Path(hash): Path<String>,
    Json(update): Json<MetadataUpdate>,
) -> Result<Json<ApiResponse<String>>, AppError> {
    let asset = {
        let conn = state.db.lock().unwrap();
        db::get_asset_by_hash(&conn, &hash)?
            .ok_or_else(|| AppError::NotFound(format!("Asset not found: {}", hash)))?
    };

    // Update DB
    {
        let conn = state.db.lock().unwrap();
        db::update_asset_metadata(
            &conn,
            &hash,
            update.description.as_deref(),
            update.tags.as_deref(),
        )?;
    }

    // Update OS attributes
    let file_path = state.config.storage_root.join(&asset.actual_filename);
    if file_path.exists() {
        let mut attrs = FileAttributes::default();
        if let Some(ref desc) = update.description {
            attrs.description = Some(desc.clone());
        }
        if let Some(ref tags) = update.tags {
            attrs.tags = Some(tags.join(","));
        }
        if let Err(e) = xattr::write_attributes(&file_path, &attrs) {
            tracing::warn!("Failed to update OS attributes: {}", e);
        }
    }

    Ok(Json(ApiResponse::success("updated".to_string())))
}
273 src/routes/ingest.rs Normal file
@@ -0,0 +1,273 @@
use axum::extract::{Multipart, State};
use axum::routing::post;
use axum::{Json, Router};
use std::time::{SystemTime, UNIX_EPOCH};

use crate::error::AppError;
use crate::models::{ApiResponse, Asset, DataIngestRequest, FileAttributes, IngestResult};
use crate::{db, hash, storage, xattr, AppState};

pub fn router() -> Router<AppState> {
    Router::new()
        .route("/api/v1/can/0/ingest", post(ingest_multipart))
        .route("/api/v1/can/0/ingest/data", post(ingest_data))
}

// ── Shared ingest pipeline ──────────────────────────────────────────────

/// All the parsed fields needed to ingest an asset, regardless of source.
struct IngestInput {
    content: Vec<u8>,
    mime_type: String,
    human_file_name: Option<String>,
    human_readable_path: Option<String>,
    application: Option<String>,
    user: Option<String>,
    tags: Vec<String>,
    description: Option<String>,
}

/// Common pipeline: timestamp → hash → write file → xattr → DB insert.
fn do_ingest(state: &AppState, input: IngestInput) -> Result<IngestResult, AppError> {
    let timestamp = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .unwrap()
        .as_millis() as i64;

    let file_hash = hash::compute_hash(timestamp, &input.content);

    let actual_filename =
        storage::build_filename(timestamp, &file_hash, &input.tags, &input.mime_type);

    let file_path =
        storage::write_asset(&state.config.storage_root, &actual_filename, &input.content)?;

    // OS-level attributes (best-effort)
    let attrs = FileAttributes {
        mime_type: Some(input.mime_type.clone()),
        application: input.application.clone(),
        user: input.user.clone(),
        tags: if input.tags.is_empty() {
            None
        } else {
            Some(input.tags.join(","))
        },
        description: input.description.clone(),
        human_filename: input.human_file_name.clone(),
        human_path: input.human_readable_path.clone(),
    };
    if let Err(e) = xattr::write_attributes(&file_path, &attrs) {
        tracing::warn!("Failed to write OS attributes: {}", e);
    }

    // Database insert
    let asset = Asset {
        id: 0,
        timestamp,
        hash: file_hash.clone(),
        mime_type: input.mime_type,
        application: input.application,
        user_identity: input.user,
        description: input.description,
        actual_filename: actual_filename.clone(),
        human_filename: input.human_file_name,
        human_path: input.human_readable_path,
        is_trashed: false,
        is_corrupted: false,
        size: input.content.len() as i64,
    };

    {
        let conn = state.db.lock().unwrap();
        let asset_id = db::insert_asset(&conn, &asset)?;
        if !input.tags.is_empty() {
            db::set_asset_tags(&conn, asset_id, &input.tags)?;
        }
    }

    Ok(IngestResult {
        timestamp,
        hash: file_hash,
        filename: actual_filename,
    })
}

/// Parse a comma-separated tag string into a clean Vec.
fn parse_tags(raw: Option<&str>) -> Vec<String> {
    raw.unwrap_or("")
        .split(',')
        .map(|s| s.trim().to_string())
        .filter(|s| !s.is_empty())
        .collect()
}
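For a quick sanity check of what a raw `tags` string will normalize to, the same split / trim / drop-empties rule can be mimicked at the shell (purely illustrative; the service itself uses `parse_tags` above):

```shell
# " red, ,blue ,," normalizes to two tags: red and blue, one per line.
printf '%s' ' red, ,blue ,,' | tr ',' '\n' | sed 's/^ *//; s/ *$//' | grep -v '^$'
```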
// ── POST /api/v1/can/0/ingest (multipart — file uploads) ──────────────

async fn ingest_multipart(
    State(state): State<AppState>,
    mut multipart: Multipart,
) -> Result<Json<ApiResponse<IngestResult>>, AppError> {
    let mut file_data: Option<Vec<u8>> = None;
    let mut mime_type: Option<String> = None;
    let mut human_file_name: Option<String> = None;
    let mut human_readable_path: Option<String> = None;
    let mut application: Option<String> = None;
    let mut user: Option<String> = None;
    let mut tags_str: Option<String> = None;
    let mut description: Option<String> = None;
    let mut original_filename: Option<String> = None;

    while let Some(field) = multipart
        .next_field()
        .await
        .map_err(|e| AppError::BadRequest(e.to_string()))?
    {
        let name = field.name().unwrap_or("").to_string();
        match name.as_str() {
            "file" => {
                if let Some(fname) = field.file_name() {
                    original_filename = Some(fname.to_string());
                }
                if let Some(ct) = field.content_type() {
                    if mime_type.is_none() {
                        mime_type = Some(ct.to_string());
                    }
                }
                file_data = Some(
                    field
                        .bytes()
                        .await
                        .map_err(|e| AppError::BadRequest(e.to_string()))?
                        .to_vec(),
                );
            }
            "mime_type" => {
                let val = field
                    .text()
                    .await
                    .map_err(|e| AppError::BadRequest(e.to_string()))?;
                if !val.is_empty() {
                    mime_type = Some(val);
                }
            }
            "human_file_name" => {
                human_file_name = Some(
                    field
                        .text()
                        .await
                        .map_err(|e| AppError::BadRequest(e.to_string()))?,
                );
            }
            "human_readable_path" => {
                human_readable_path = Some(
                    field
                        .text()
                        .await
                        .map_err(|e| AppError::BadRequest(e.to_string()))?,
                );
            }
            "application" => {
                application = Some(
                    field
                        .text()
                        .await
                        .map_err(|e| AppError::BadRequest(e.to_string()))?,
                );
            }
            "user" => {
                user = Some(
                    field
                        .text()
                        .await
                        .map_err(|e| AppError::BadRequest(e.to_string()))?,
                );
            }
            "tags" => {
                tags_str = Some(
                    field
                        .text()
                        .await
                        .map_err(|e| AppError::BadRequest(e.to_string()))?,
                );
            }
            "description" => {
                description = Some(
                    field
                        .text()
                        .await
                        .map_err(|e| AppError::BadRequest(e.to_string()))?,
                );
            }
            _ => {}
        }
    }

    let content = file_data.ok_or_else(|| AppError::BadRequest("Missing 'file' field".into()))?;

    let resolved_mime = mime_type.unwrap_or_else(|| {
        original_filename
            .as_deref()
            .and_then(|name| mime_guess::from_path(name).first_raw().map(|s| s.to_string()))
            .unwrap_or_else(|| "application/octet-stream".to_string())
    });

    let result = do_ingest(
        &state,
        IngestInput {
            content,
            mime_type: resolved_mime,
            human_file_name,
            human_readable_path,
            application,
            user,
            tags: parse_tags(tags_str.as_deref()),
            description,
        },
    )?;

    Ok(Json(ApiResponse::success(result)))
}

// ── POST /api/v1/can/0/ingest/data (JSON — agent-friendly) ────────────

/// JSON data ingest. Accepts any JSON value in `data`, serializes it to
/// pretty-printed JSON bytes, and stores it as a `.json` asset.
///
/// Minimal agent call:
/// ```json
/// POST /api/v1/can/0/ingest/data
/// Content-Type: application/json
///
/// { "data": { "key": "value" } }
/// ```
///
/// All metadata fields (tags, application, user, etc.) are optional —
/// same semantics as the multipart endpoint.
async fn ingest_data(
    State(state): State<AppState>,
    Json(req): Json<DataIngestRequest>,
) -> Result<Json<ApiResponse<IngestResult>>, AppError> {
    // Serialize the data payload to pretty JSON bytes
    let content = serde_json::to_vec_pretty(&req.data)
        .map_err(|e| AppError::Internal(format!("Failed to serialize data: {}", e)))?;

    let mime = req
        .mime_type
        .unwrap_or_else(|| "application/json".to_string());

    let result = do_ingest(
        &state,
        IngestInput {
            content,
            mime_type: mime,
            human_file_name: req.human_file_name,
            human_readable_path: req.human_readable_path,
            application: req.application,
            user: req.user,
            tags: parse_tags(req.tags.as_deref()),
            description: req.description,
        },
    )?;

    Ok(Json(ApiResponse::success(result)))
}
35 src/routes/list.rs Normal file
@@ -0,0 +1,35 @@
use axum::extract::{Query, State};
use axum::routing::get;
use axum::{Json, Router};

use crate::error::AppError;
use crate::models::{ApiResponse, AssetMeta, ListParams, ListResponse, Pagination};
use crate::{db, AppState};

pub fn router() -> Router<AppState> {
    Router::new().route("/api/v1/can/0/list", get(list_assets))
}

async fn list_assets(
    State(state): State<AppState>,
    Query(params): Query<ListParams>,
) -> Result<Json<ApiResponse<ListResponse>>, AppError> {
    let conn = state.db.lock().unwrap();
    let (assets, total) = db::list_assets(&conn, &params)?;

    let items: Vec<AssetMeta> = assets
        .iter()
        .map(|a| db::asset_to_meta(&conn, a))
        .collect::<Result<Vec<_>, _>>()?;

    let response = ListResponse {
        items,
        pagination: Pagination {
            limit: params.limit.unwrap_or(50),
            offset: params.offset.unwrap_or(0),
            total,
        },
    };

    Ok(Json(ApiResponse::success(response)))
}
22 src/routes/meta.rs Normal file
@@ -0,0 +1,22 @@
use axum::extract::{Path, State};
use axum::routing::get;
use axum::{Json, Router};

use crate::error::AppError;
use crate::models::{ApiResponse, AssetMeta};
use crate::{db, AppState};

pub fn router() -> Router<AppState> {
    Router::new().route("/api/v1/can/0/asset/{hash}/meta", get(get_meta))
}

async fn get_meta(
    State(state): State<AppState>,
    Path(hash): Path<String>,
) -> Result<Json<ApiResponse<AssetMeta>>, AppError> {
    let conn = state.db.lock().unwrap();
    let asset = db::get_asset_by_hash(&conn, &hash)?
        .ok_or_else(|| AppError::NotFound(format!("Asset not found: {}", hash)))?;
    let meta = db::asset_to_meta(&conn, &asset)?;
    Ok(Json(ApiResponse::success(meta)))
}
19 src/routes/mod.rs Normal file
@@ -0,0 +1,19 @@
pub mod ingest;
pub mod asset;
pub mod meta;
pub mod list;
pub mod search;
pub mod thumb;

use axum::Router;
use crate::AppState;

pub fn router() -> Router<AppState> {
    Router::new()
        .merge(ingest::router())
        .merge(asset::router())
        .merge(meta::router())
        .merge(list::router())
        .merge(search::router())
        .merge(thumb::router())
}
38 src/routes/search.rs Normal file
@@ -0,0 +1,38 @@
use axum::extract::{Query, State};
use axum::routing::get;
use axum::{Json, Router};

use crate::error::AppError;
use crate::models::{ApiResponse, AssetMeta, ListResponse, Pagination, SearchParams};
use crate::{db, AppState};

pub fn router() -> Router<AppState> {
    Router::new().route("/api/v1/can/0/search", get(search_assets))
}

async fn search_assets(
    State(state): State<AppState>,
    Query(params): Query<SearchParams>,
) -> Result<Json<ApiResponse<ListResponse>>, AppError> {
    let conn = state.db.lock().unwrap();
    let limit = params.limit.unwrap_or(50);
    let offset = params.offset.unwrap_or(0);

    let (assets, total) = db::search_assets(&conn, &params)?;

    let items: Vec<AssetMeta> = assets
        .iter()
        .map(|a| db::asset_to_meta(&conn, a))
        .collect::<Result<Vec<_>, _>>()?;

    let response = ListResponse {
        items,
        pagination: Pagination {
            limit,
            offset,
            total,
        },
    };

    Ok(Json(ApiResponse::success(response)))
}
97 src/routes/thumb.rs Normal file
@@ -0,0 +1,97 @@
|
|||||||
|
use axum::body::Body;
|
||||||
|
use axum::extract::{Path, State};
|
||||||
|
use axum::http::header;
|
||||||
|
use axum::response::{IntoResponse, Response};
|
||||||
|
use axum::routing::get;
|
||||||
|
use axum::Router;
|
||||||
|
use image::imageops::FilterType;
|
||||||
|
use image::ImageFormat;
|
||||||
|
use std::io::Cursor;
|
||||||
|
|
||||||
|
use crate::error::AppError;
|
||||||
|
use crate::{db, AppState};
|
||||||
|
|
||||||
|
pub fn router() -> Router<AppState> {
|
||||||
|
Router::new().route(
|
||||||
|
"/api/v1/can/0/asset/{hash}/thumb/{max_width}/{max_height}",
|
||||||
|
get(get_thumb),
|
||||||
|
)
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Static fallback SVG icon for non-image assets.
|
||||||
|
const FALLBACK_SVG: &str = r##"<svg xmlns="http://www.w3.org/2000/svg" width="128" height="128" viewBox="0 0 128 128">
|
||||||
|
<rect width="128" height="128" rx="8" fill="#e0e0e0"/>
|
||||||
|
<text x="64" y="72" text-anchor="middle" font-family="sans-serif" font-size="40" fill="#888">?</text>
|
||||||
|
</svg>"##;
|
||||||
|
|
||||||
|
async fn get_thumb(
|
    State(state): State<AppState>,
    Path((hash, max_width, max_height)): Path<(String, u32, u32)>,
) -> Result<Response, AppError> {
    let asset = {
        let conn = state.db.lock().unwrap();
        db::get_asset_by_hash(&conn, &hash)?
            .ok_or_else(|| AppError::NotFound(format!("Asset not found: {}", hash)))?
    };

    // Check if MIME type is an image we can resize
    let is_image = asset.mime_type.starts_with("image/")
        && !asset.mime_type.contains("svg");

    if !is_image {
        // Return fallback SVG
        return Ok((
            [(header::CONTENT_TYPE, "image/svg+xml".to_string())],
            FALLBACK_SVG.to_string(),
        )
            .into_response());
    }

    // Check thumbnail cache
    if state.config.enable_thumbnail_cache {
        let cache_name = format!("{}_{}x{}.jpg", hash, max_width, max_height);
        let cache_path = state.config.thumbs_dir().join(&cache_name);
        if cache_path.exists() {
            let data = tokio::fs::read(&cache_path).await?;
            return Ok((
                [(header::CONTENT_TYPE, "image/jpeg".to_string())],
                Body::from(data),
            )
                .into_response());
        }
    }

    // Read original file
    let file_path = state.config.storage_root.join(&asset.actual_filename);
    let data = tokio::fs::read(&file_path).await.map_err(|e| {
        AppError::Internal(format!("Failed to read file: {}", e))
    })?;

    // Decode and resize
    let img = image::load_from_memory(&data)
        .map_err(|e| AppError::Internal(format!("Failed to decode image: {}", e)))?;

    let thumb = img.resize(max_width, max_height, FilterType::Lanczos3);

    // Encode as JPEG
    let mut buf = Cursor::new(Vec::new());
    thumb
        .write_to(&mut buf, ImageFormat::Jpeg)
        .map_err(|e| AppError::Internal(format!("Failed to encode thumbnail: {}", e)))?;
    let jpeg_bytes = buf.into_inner();

    // Cache the thumbnail
    if state.config.enable_thumbnail_cache {
        let cache_name = format!("{}_{}x{}.jpg", hash, max_width, max_height);
        let cache_path = state.config.thumbs_dir().join(&cache_name);
        if let Err(e) = tokio::fs::write(&cache_path, &jpeg_bytes).await {
            tracing::warn!("Failed to cache thumbnail: {}", e);
        }
    }

    Ok((
        [(header::CONTENT_TYPE, "image/jpeg".to_string())],
        Body::from(jpeg_bytes),
    )
        .into_response())
}
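The cache key above is just the request triple baked into a filename, so every distinct `(hash, max_width, max_height)` combination gets its own cached JPEG. A minimal standalone sketch of that scheme; the helper name `cache_name` is illustrative, not part of the crate:

```rust
// Illustrative sketch of the thumbnail cache key used by the handler above:
// one cached JPEG per (hash, max_width, max_height) request triple.
fn cache_name(hash: &str, max_w: u32, max_h: u32) -> String {
    format!("{}_{}x{}.jpg", hash, max_w, max_h)
}

fn main() {
    // Different requested bounds cache independently, even for the same asset.
    assert_eq!(cache_name("abc123", 320, 240), "abc123_320x240.jpg");
    assert_ne!(cache_name("abc123", 320, 240), cache_name("abc123", 64, 64));
}
```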
192
src/storage.rs
Normal file
@ -0,0 +1,192 @@
use std::path::{Path, PathBuf};

/// Build the physical filename per the spec:
/// `{timestamp}_{sha256}_{truncated_tags}.{extension}`
pub fn build_filename(
    timestamp: i64,
    hash: &str,
    tags: &[String],
    mime_type: &str,
) -> String {
    let extension = mime_to_extension(mime_type);

    let base = format!("{}_{}", timestamp, hash);

    if tags.is_empty() {
        return format!("{}.{}", base, extension);
    }

    // Sanitize tags: strip non-alphanumeric, join with underscore
    let sanitized_tags: Vec<String> = tags
        .iter()
        .map(|t| t.chars().filter(|c| c.is_alphanumeric()).collect::<String>())
        .filter(|t| !t.is_empty())
        .collect();

    if sanitized_tags.is_empty() {
        return format!("{}.{}", base, extension);
    }

    let tag_part = sanitized_tags.join("_");

    // Truncate to keep total filename under ~200 chars (safely under 255)
    let max_tag_len = 200usize.saturating_sub(base.len() + extension.len() + 2); // 2 for _ and .
    let truncated = if tag_part.len() > max_tag_len {
        // Back off to a char boundary so multi-byte (non-ASCII) tags can't panic
        let mut end = max_tag_len;
        while !tag_part.is_char_boundary(end) {
            end -= 1;
        }
        &tag_part[..end]
    } else {
        &tag_part
    };

    format!("{}_{}.{}", base, truncated, extension)
}

/// Derive file extension from MIME type.
pub fn mime_to_extension(mime: &str) -> &str {
    match mime {
        "application/pdf" => "pdf",
        "application/json" => "json",
        "application/xml" | "text/xml" => "xml",
        "application/zip" => "zip",
        "application/gzip" => "gz",
        "application/octet-stream" => "bin",
        "text/plain" => "txt",
        "text/html" => "html",
        "text/css" => "css",
        "text/csv" => "csv",
        "text/javascript" | "application/javascript" => "js",
        "image/jpeg" => "jpg",
        "image/png" => "png",
        "image/gif" => "gif",
        "image/webp" => "webp",
        "image/svg+xml" => "svg",
        "image/bmp" => "bmp",
        "audio/mpeg" => "mp3",
        "audio/wav" => "wav",
        "audio/ogg" => "ogg",
        "video/mp4" => "mp4",
        "video/webm" => "webm",
        _ => {
            // Fall back to mime_guess for anything not in the table
            mime_guess::get_mime_extensions_str(mime)
                .and_then(|exts| exts.first().copied())
                .unwrap_or("bin")
        }
    }
}

/// Write asset bytes to the storage root. Returns the full path.
pub fn write_asset(root: &Path, filename: &str, data: &[u8]) -> std::io::Result<PathBuf> {
    let path = root.join(filename);
    std::fs::write(&path, data)?;
    Ok(path)
}

/// Read asset bytes from the storage root.
pub fn read_asset(root: &Path, filename: &str) -> std::io::Result<Vec<u8>> {
    let path = root.join(filename);
    std::fs::read(path)
}

/// Move an asset file to the .trash directory.
pub fn trash_asset_file(root: &Path, filename: &str) -> std::io::Result<()> {
    let src = root.join(filename);
    let trash_dir = root.join(".trash");
    std::fs::create_dir_all(&trash_dir)?;
    let dst = trash_dir.join(filename);
    std::fs::rename(src, dst)?;
    Ok(())
}

/// Parse a physical filename to extract the hash component.
/// Format: `{timestamp}_{sha256}_{tags}.{ext}` or `{timestamp}_{sha256}.{ext}`
pub fn parse_hash_from_filename(filename: &str) -> Option<String> {
    // Remove extension
    let stem = filename.rsplit_once('.')?.0;
    // Split by underscore: first part is timestamp, second is hash (64 hex chars)
    let parts: Vec<&str> = stem.splitn(3, '_').collect();
    if parts.len() >= 2 && parts[1].len() == 64 {
        Some(parts[1].to_string())
    } else {
        None
    }
}

/// Parse a physical filename to extract the timestamp component.
pub fn parse_timestamp_from_filename(filename: &str) -> Option<i64> {
    let stem = filename.rsplit_once('.')?.0;
    let ts_str = stem.split('_').next()?;
    ts_str.parse().ok()
}

#[cfg(test)]
mod tests {
    use super::*;
    use tempfile::TempDir;

    #[test]
    fn test_build_filename_no_tags() {
        let name = build_filename(1773014400123, "a3b2c4d5e6f7", &[], "application/pdf");
        assert_eq!(name, "1773014400123_a3b2c4d5e6f7.pdf");
    }

    #[test]
    fn test_build_filename_with_tags() {
        let tags = vec!["photo".to_string(), "vacation".to_string()];
        let name = build_filename(1773014400123, "a3b2c4d5e6f7", &tags, "image/jpeg");
        assert_eq!(name, "1773014400123_a3b2c4d5e6f7_photo_vacation.jpg");
    }

    #[test]
    fn test_build_filename_strips_special_chars_from_tags() {
        let tags = vec!["hello world!".to_string(), "test@123".to_string()];
        let name = build_filename(100, "abc", &tags, "text/plain");
        assert_eq!(name, "100_abc_helloworld_test123.txt");
    }

    #[test]
    fn test_mime_to_extension() {
        assert_eq!(mime_to_extension("image/png"), "png");
        assert_eq!(mime_to_extension("application/pdf"), "pdf");
        assert_eq!(mime_to_extension("text/plain"), "txt");
        assert_eq!(mime_to_extension("unknown/thing"), "bin");
    }

    #[test]
    fn test_write_and_read_asset() {
        let dir = TempDir::new().unwrap();
        let data = b"hello world";
        let path = write_asset(dir.path(), "test_file.txt", data).unwrap();
        assert!(path.exists());

        let read_back = read_asset(dir.path(), "test_file.txt").unwrap();
        assert_eq!(read_back, data);
    }

    #[test]
    fn test_trash_asset_file() {
        let dir = TempDir::new().unwrap();
        write_asset(dir.path(), "to_trash.txt", b"bye").unwrap();

        trash_asset_file(dir.path(), "to_trash.txt").unwrap();
        assert!(!dir.path().join("to_trash.txt").exists());
        assert!(dir.path().join(".trash").join("to_trash.txt").exists());
    }

    #[test]
    fn test_parse_hash_from_filename() {
        let hash_64 = "a".repeat(64);
        let filename = format!("1773014400123_{}.pdf", hash_64);
        assert_eq!(parse_hash_from_filename(&filename), Some(hash_64.clone()));

        let filename_tags = format!("1773014400123_{}_photo_vacation.jpg", hash_64);
        assert_eq!(parse_hash_from_filename(&filename_tags), Some(hash_64));
    }

    #[test]
    fn test_parse_timestamp_from_filename() {
        let hash_64 = "b".repeat(64);
        let filename = format!("1773014400123_{}.pdf", hash_64);
        assert_eq!(parse_timestamp_from_filename(&filename), Some(1773014400123));
    }
}
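The filename convention above doubles as the verifier's recovery path: the hash and timestamp can be re-derived from a physical name with no database lookup. This sketch re-implements the parse step outside the crate for illustration; the helper names `parse_hash` and `parse_timestamp` are hypothetical stand-ins for the `storage.rs` functions:

```rust
// Standalone sketch of parsing the `{timestamp}_{sha256}[_{tags}].{ext}`
// convention, mirroring parse_hash_from_filename / parse_timestamp_from_filename.
fn parse_hash(filename: &str) -> Option<String> {
    let stem = filename.rsplit_once('.')?.0;
    // splitn(3, ..) keeps any tag underscores fused into the third part
    let parts: Vec<&str> = stem.splitn(3, '_').collect();
    if parts.len() >= 2 && parts[1].len() == 64 {
        Some(parts[1].to_string())
    } else {
        None
    }
}

fn parse_timestamp(filename: &str) -> Option<i64> {
    filename.rsplit_once('.')?.0.split('_').next()?.parse().ok()
}

fn main() {
    let hash = "a".repeat(64);
    let name = format!("1773014400123_{}_photo_vacation.jpg", hash);
    assert_eq!(parse_hash(&name), Some(hash));
    assert_eq!(parse_timestamp(&name), Some(1773014400123));
    // Files whose second component isn't 64 hex chars are treated as non-CAN files.
    assert_eq!(parse_hash("notes_v2.txt"), None);
}
```

The 64-character check is what lets the watcher safely skip unmanaged files dropped into the storage root.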
308
src/verifier.rs
Normal file
@ -0,0 +1,308 @@
use notify::{Event, EventKind, RecursiveMode, Watcher};
use std::path::PathBuf;
// Mutex used via Db type alias
use std::time::Duration;
use tokio::sync::mpsc;

use crate::config::Config;
use crate::db::{self, Db};
use crate::hash::compute_hash;
use crate::models::FileAttributes;
use crate::storage::{parse_hash_from_filename, parse_timestamp_from_filename};
use crate::xattr;

/// Start the background verifier subsystem.
/// - Runs an initial full scrub
/// - Watches for filesystem changes
/// - Runs periodic scrubs
pub fn start(config: Config, db: Db) {
    let config2 = config.clone();
    let db2 = db.clone();

    // Initial scrub
    let config3 = config.clone();
    let db3 = db.clone();
    tokio::spawn(async move {
        tracing::info!("Verifier: starting initial scrub...");
        if let Err(e) = run_scrub(&config3, &db3).await {
            tracing::error!("Verifier: initial scrub failed: {}", e);
        }
        tracing::info!("Verifier: initial scrub complete");
    });

    // Periodic scrub
    let interval_hours = config.verify_interval_hours;
    tokio::spawn(async move {
        let mut interval =
            tokio::time::interval(Duration::from_secs(interval_hours * 3600));
        interval.tick().await; // Skip first immediate tick
        loop {
            interval.tick().await;
            tracing::info!("Verifier: starting periodic scrub...");
            if let Err(e) = run_scrub(&config2, &db2).await {
                tracing::error!("Verifier: periodic scrub failed: {}", e);
            }
            tracing::info!("Verifier: periodic scrub complete");
        }
    });

    // Filesystem watcher
    tokio::spawn(async move {
        if let Err(e) = run_watcher(config, db).await {
            tracing::error!("Verifier: filesystem watcher failed: {}", e);
        }
    });
}

async fn run_watcher(config: Config, db: Db) -> anyhow::Result<()> {
    let (tx, mut rx) = mpsc::channel::<PathBuf>(100);
    let storage_root = config.storage_root.clone();

    // Spawn blocking watcher in a separate thread
    let watcher_root = storage_root.clone();
    std::thread::spawn(move || {
        let tx_clone = tx.clone();
        let mut watcher = notify::recommended_watcher(move |res: Result<Event, _>| {
            if let Ok(event) = res {
                match event.kind {
                    EventKind::Modify(_) | EventKind::Create(_) => {
                        for path in event.paths {
                            // Ignore hidden dirs (.trash, .thumbs, .can.db)
                            let filename = path
                                .file_name()
                                .and_then(|f| f.to_str())
                                .unwrap_or("");
                            if filename.starts_with('.') {
                                continue;
                            }
                            let _ = tx_clone.blocking_send(path);
                        }
                    }
                    _ => {}
                }
            }
        })
        .expect("Failed to create filesystem watcher");

        watcher
            .watch(&watcher_root, RecursiveMode::NonRecursive)
            .expect("Failed to watch storage root");

        // Keep watcher alive
        loop {
            std::thread::sleep(Duration::from_secs(3600));
        }
    });

    // Process file change events
    while let Some(path) = rx.recv().await {
        let filename = match path.file_name().and_then(|f| f.to_str()) {
            Some(f) => f.to_string(),
            None => continue,
        };

        tracing::debug!("Verifier: checking modified file: {}", filename);
        if let Err(e) = verify_single_file(&config, &db, &filename).await {
            tracing::warn!("Verifier: error checking {}: {}", filename, e);
        }
    }

    Ok(())
}

/// Run a full scrub: verify every active asset's hash.
async fn run_scrub(config: &Config, db: &Db) -> anyhow::Result<()> {
    let assets = {
        let conn = db.lock().unwrap();
        db::get_all_active_assets(&conn)?
    };

    let mut corrupted_count = 0u32;

    for asset in &assets {
        let file_path = config.storage_root.join(&asset.actual_filename);
        if !file_path.exists() {
            tracing::warn!(
                "Verifier: file missing for asset {}: {}",
                asset.hash,
                asset.actual_filename
            );
            continue;
        }

        match tokio::fs::read(&file_path).await {
            Ok(content) => {
                let expected_hash = compute_hash(asset.timestamp, &content);
                if expected_hash != asset.hash {
                    tracing::warn!(
                        "Verifier: CORRUPTION detected for {} (expected {}, got {})",
                        asset.actual_filename,
                        asset.hash,
                        expected_hash
                    );
                    let conn = db.lock().unwrap();
                    db::flag_corrupted(&conn, &asset.hash, true)?;
                    corrupted_count += 1;
                } else if asset.is_corrupted {
                    // File was previously marked corrupted but now passes - clear flag
                    let conn = db.lock().unwrap();
                    db::flag_corrupted(&conn, &asset.hash, false)?;
                    tracing::info!(
                        "Verifier: asset {} is no longer corrupted",
                        asset.hash
                    );
                }
            }
            Err(e) => {
                tracing::warn!(
                    "Verifier: cannot read {}: {}",
                    asset.actual_filename,
                    e
                );
            }
        }
    }

    if corrupted_count > 0 {
        tracing::warn!(
            "Verifier: scrub found {} corrupted assets out of {}",
            corrupted_count,
            assets.len()
        );
    } else {
        tracing::info!(
            "Verifier: scrub passed for all {} assets",
            assets.len()
        );
    }

    // Sync DB metadata → OS-level file attributes
    let mut attrs_synced = 0u32;
    for asset in &assets {
        let file_path = config.storage_root.join(&asset.actual_filename);
        if !file_path.exists() {
            continue;
        }

        // Build expected attributes from DB
        let tags = {
            let conn = db.lock().unwrap();
            db::get_asset_tags(&conn, asset.id).unwrap_or_default()
        };

        let expected = FileAttributes {
            mime_type: Some(asset.mime_type.clone()),
            application: asset.application.clone(),
            user: asset.user_identity.clone(),
            tags: if tags.is_empty() {
                None
            } else {
                Some(tags.join(","))
            },
            description: asset.description.clone(),
            human_filename: asset.human_filename.clone(),
            human_path: asset.human_path.clone(),
        };

        // Read current file attributes
        let current = xattr::read_attributes(&file_path).unwrap_or_default();

        if current != expected {
            if let Err(e) = xattr::write_attributes(&file_path, &expected) {
                tracing::warn!(
                    "Verifier: failed to sync attributes for {}: {}",
                    asset.actual_filename,
                    e
                );
            } else {
                attrs_synced += 1;
            }
        }
    }

    if attrs_synced > 0 {
        tracing::info!(
            "Verifier: synced file attributes for {} assets",
            attrs_synced
        );
    }

    // Backfill missing file sizes
    let mut sizes_backfilled = 0u32;
    for asset in &assets {
        if asset.size > 0 {
            continue;
        }
        let file_path = config.storage_root.join(&asset.actual_filename);
        if !file_path.exists() {
            continue;
        }
        match file_path.metadata() {
            Ok(meta) => {
                let len = meta.len() as i64;
                if len > 0 {
                    let conn = db.lock().unwrap();
                    if let Err(e) = db::update_asset_size(&conn, &asset.hash, len) {
                        tracing::warn!(
                            "Verifier: failed to backfill size for {}: {}",
                            asset.hash,
                            e
                        );
                    } else {
                        sizes_backfilled += 1;
                    }
                }
            }
            Err(e) => {
                tracing::warn!(
                    "Verifier: cannot stat {}: {}",
                    asset.actual_filename,
                    e
                );
            }
        }
    }
    if sizes_backfilled > 0 {
        tracing::info!(
            "Verifier: backfilled file sizes for {} assets",
            sizes_backfilled
        );
    }

    Ok(())
}

/// Verify a single file by its physical filename.
async fn verify_single_file(
    config: &Config,
    db: &Db,
    filename: &str,
) -> anyhow::Result<()> {
    let hash = match parse_hash_from_filename(filename) {
        Some(h) => h,
        None => return Ok(()), // Not a CAN-managed file
    };
    let timestamp = match parse_timestamp_from_filename(filename) {
        Some(t) => t,
        None => return Ok(()),
    };

    let file_path = config.storage_root.join(filename);
    let content = tokio::fs::read(&file_path).await?;
    let computed = compute_hash(timestamp, &content);

    if computed != hash {
        tracing::warn!(
            "Verifier: corruption detected on change for {}",
            filename
        );
        let conn = db.lock().unwrap();
        db::flag_corrupted(&conn, &hash, true)?;
    }

    Ok(())
}
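The scrub's per-asset decision reduces to three outcomes: flag corruption, clear a stale corruption flag, or touch nothing. A standalone sketch of that transition; `flag_transition` is a hypothetical helper (the crate inlines this logic, and `compute_hash` itself is not reproduced here):

```rust
// Sketch of the scrub decision: compare the recomputed digest with the stored
// one and derive the corruption-flag update, if any.
fn flag_transition(stored: &str, recomputed: &str, was_corrupted: bool) -> Option<bool> {
    if recomputed != stored {
        Some(true)   // flag as corrupted
    } else if was_corrupted {
        Some(false)  // hash passes again: clear the stale flag
    } else {
        None         // healthy and unflagged: no DB write needed
    }
}

fn main() {
    assert_eq!(flag_transition("aa", "bb", false), Some(true));
    assert_eq!(flag_transition("aa", "aa", true), Some(false));
    assert_eq!(flag_transition("aa", "aa", false), None);
}
```

Returning `None` for the common healthy case keeps the scrub read-mostly, so a full pass over a large store does few database writes.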
187
src/xattr.rs
Normal file
@ -0,0 +1,187 @@
use crate::models::FileAttributes;
use std::path::Path;

/// Write CAN metadata as OS-level file attributes.
/// - Unix/macOS: Extended Attributes (xattr)
/// - Windows: NTFS Alternate Data Streams
pub fn write_attributes(path: &Path, attrs: &FileAttributes) -> std::io::Result<()> {
    #[cfg(unix)]
    {
        write_xattr(path, attrs)
    }
    #[cfg(windows)]
    {
        write_ntfs_ads(path, attrs)
    }
}

/// Read CAN metadata from OS-level file attributes.
pub fn read_attributes(path: &Path) -> std::io::Result<FileAttributes> {
    #[cfg(unix)]
    {
        read_xattr(path)
    }
    #[cfg(windows)]
    {
        read_ntfs_ads(path)
    }
}

// ── Unix implementation using xattr crate ──

#[cfg(unix)]
fn write_xattr(path: &Path, attrs: &FileAttributes) -> std::io::Result<()> {
    use xattr::FileExt;
    let file = std::fs::File::open(path)?;

    if let Some(ref v) = attrs.mime_type {
        file.set_xattr("user.can.mime_type", v.as_bytes())?;
    }
    if let Some(ref v) = attrs.application {
        file.set_xattr("user.can.application", v.as_bytes())?;
    }
    if let Some(ref v) = attrs.user {
        file.set_xattr("user.can.user", v.as_bytes())?;
    }
    if let Some(ref v) = attrs.tags {
        file.set_xattr("user.can.tags", v.as_bytes())?;
    }
    if let Some(ref v) = attrs.description {
        file.set_xattr("user.can.description", v.as_bytes())?;
    }
    if let Some(ref v) = attrs.human_filename {
        file.set_xattr("user.can.human_filename", v.as_bytes())?;
    }
    if let Some(ref v) = attrs.human_path {
        file.set_xattr("user.can.human_path", v.as_bytes())?;
    }
    Ok(())
}

#[cfg(unix)]
fn read_xattr(path: &Path) -> std::io::Result<FileAttributes> {
    use xattr::FileExt;
    let file = std::fs::File::open(path)?;

    let read_attr = |name: &str| -> Option<String> {
        file.get_xattr(name)
            .ok()
            .flatten()
            .and_then(|bytes| String::from_utf8(bytes).ok())
    };

    Ok(FileAttributes {
        mime_type: read_attr("user.can.mime_type"),
        application: read_attr("user.can.application"),
        user: read_attr("user.can.user"),
        tags: read_attr("user.can.tags"),
        description: read_attr("user.can.description"),
        human_filename: read_attr("user.can.human_filename"),
        human_path: read_attr("user.can.human_path"),
    })
}

// ── Windows implementation using NTFS Alternate Data Streams ──

#[cfg(windows)]
fn write_ntfs_ads(path: &Path, attrs: &FileAttributes) -> std::io::Result<()> {
    let base = path.to_string_lossy();

    if let Some(ref v) = attrs.mime_type {
        std::fs::write(format!("{}:can.mime_type", base), v)?;
    }
    if let Some(ref v) = attrs.application {
        std::fs::write(format!("{}:can.application", base), v)?;
    }
    if let Some(ref v) = attrs.user {
        std::fs::write(format!("{}:can.user", base), v)?;
    }
    if let Some(ref v) = attrs.tags {
        std::fs::write(format!("{}:can.tags", base), v)?;
    }
    if let Some(ref v) = attrs.description {
        std::fs::write(format!("{}:can.description", base), v)?;
    }
    if let Some(ref v) = attrs.human_filename {
        std::fs::write(format!("{}:can.human_filename", base), v)?;
    }
    if let Some(ref v) = attrs.human_path {
        std::fs::write(format!("{}:can.human_path", base), v)?;
    }
    Ok(())
}

#[cfg(windows)]
fn read_ntfs_ads(path: &Path) -> std::io::Result<FileAttributes> {
    let base = path.to_string_lossy();

    let read_stream = |name: &str| -> Option<String> {
        std::fs::read_to_string(format!("{}:{}", base, name)).ok()
    };

    Ok(FileAttributes {
        mime_type: read_stream("can.mime_type"),
        application: read_stream("can.application"),
        user: read_stream("can.user"),
        tags: read_stream("can.tags"),
        description: read_stream("can.description"),
        human_filename: read_stream("can.human_filename"),
        human_path: read_stream("can.human_path"),
    })
}

#[cfg(test)]
mod tests {
    use super::*;
    use tempfile::NamedTempFile;

    #[test]
    fn test_write_and_read_attributes() {
        let file = NamedTempFile::new().unwrap();
        std::fs::write(file.path(), b"test content").unwrap();

        let attrs = FileAttributes {
            mime_type: Some("image/jpeg".to_string()),
            application: Some("TestApp".to_string()),
            user: Some("jason".to_string()),
            tags: Some("photo,vacation,2024".to_string()),
            description: Some("A test file".to_string()),
            human_filename: Some("my_photo.jpg".to_string()),
            human_path: Some("/photos/trip/".to_string()),
        };

        write_attributes(file.path(), &attrs).unwrap();
        let read_back = read_attributes(file.path()).unwrap();

        assert_eq!(read_back.mime_type, Some("image/jpeg".to_string()));
        assert_eq!(read_back.application, Some("TestApp".to_string()));
        assert_eq!(read_back.user, Some("jason".to_string()));
        assert_eq!(read_back.tags, Some("photo,vacation,2024".to_string()));
        assert_eq!(read_back.description, Some("A test file".to_string()));
        assert_eq!(read_back.human_filename, Some("my_photo.jpg".to_string()));
        assert_eq!(read_back.human_path, Some("/photos/trip/".to_string()));
    }

    #[test]
    fn test_partial_attributes() {
        let file = NamedTempFile::new().unwrap();
        std::fs::write(file.path(), b"data").unwrap();

        let attrs = FileAttributes {
            mime_type: None,
            application: Some("App".to_string()),
            user: None,
            tags: None,
            description: None,
            human_filename: None,
            human_path: None,
        };

        write_attributes(file.path(), &attrs).unwrap();
        let read_back = read_attributes(file.path()).unwrap();

        assert_eq!(read_back.application, Some("App".to_string()));
        assert_eq!(read_back.user, None);
        assert_eq!(read_back.tags, None);
    }
}
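Both platforms store the same seven fields; only the attribute naming differs. Unix xattrs live under the `user.can.` namespace, while on Windows each field becomes an Alternate Data Stream named `can.<field>` appended to the path after a colon. A standalone sketch of just that naming mapping (`unix_key` and `windows_stream` are illustrative helper names, not crate functions):

```rust
// Sketch: how a CAN metadata field maps to an OS attribute/stream name.
fn unix_key(field: &str) -> String {
    format!("user.can.{}", field)
}

fn windows_stream(path: &str, field: &str) -> String {
    format!("{}:can.{}", path, field)
}

fn main() {
    assert_eq!(unix_key("tags"), "user.can.tags");
    assert_eq!(
        windows_stream(r"C:\data\f.jpg", "tags"),
        r"C:\data\f.jpg:can.tags"
    );
}
```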
741
tests/integration.rs
Normal file
@ -0,0 +1,741 @@
use reqwest::multipart;
use std::sync::Arc;
use tempfile::TempDir;
use tokio::net::TcpListener;

// Integration tests can't access the binary crate's `mod` items directly,
// so we spin up a real server from the crate's setup logic and exercise it
// through HTTP.

/// Helper: spin up a test server and return its base URL + temp dir handle.
async fn spawn_test_server() -> (String, TempDir) {
    let tmp = TempDir::new().unwrap();
    let storage_root = tmp.path().to_path_buf();

    // Create config.yaml in tempdir
    let config_content = format!(
        r#"storage_root: "{}"
admin_token: "test_token"
enable_thumbnail_cache: true
rebuild_error_threshold: 50
verify_interval_hours: 999
"#,
        storage_root.to_string_lossy().replace('\\', "/")
    );

    let config_path = tmp.path().join("config.yaml");
    std::fs::write(&config_path, &config_content).unwrap();

    // Load config
    let config: can_service::config::Config =
        serde_yaml::from_str(&config_content).unwrap();
    config.ensure_dirs().unwrap();

    // Open DB
    let db = can_service::db::open(&config.db_path()).unwrap();
    let config = Arc::new(config);

    let state = can_service::AppState {
        config: config.clone(),
        db,
    };

    // Build router
    let app = axum::Router::new()
        .merge(can_service::routes::router())
        .with_state(state);

    // Bind to random port
    let listener = TcpListener::bind("127.0.0.1:0").await.unwrap();
    let addr = listener.local_addr().unwrap();
    let base_url = format!("http://{}", addr);

    tokio::spawn(async move {
        axum::serve(listener, app).await.unwrap();
    });

    // Give the server a moment to start
    tokio::time::sleep(std::time::Duration::from_millis(50)).await;

    (base_url, tmp)
}

#[tokio::test]
async fn test_ingest_and_retrieve_metadata() {
    let (base_url, _tmp) = spawn_test_server().await;
    let client = reqwest::Client::new();

    // Ingest a file
    let file_part = multipart::Part::bytes(b"hello world".to_vec())
        .file_name("hello.txt")
        .mime_str("text/plain")
        .unwrap();

    let form = multipart::Form::new()
        .part("file", file_part)
        .text("application", "TestApp")
        .text("user", "jason")
        .text("tags", "greeting,test")
        .text("description", "A test file")
        .text("human_file_name", "hello.txt")
        .text("human_readable_path", "/docs/");

    let resp = client
        .post(format!("{}/api/v1/can/0/ingest", base_url))
        .multipart(form)
        .send()
        .await
        .unwrap();

    assert_eq!(resp.status(), 200);
    let body: serde_json::Value = resp.json().await.unwrap();
    assert_eq!(body["status"], "success");

    let hash = body["data"]["hash"].as_str().unwrap().to_string();
    let timestamp = body["data"]["timestamp"].as_i64().unwrap();
    assert!(!hash.is_empty());
    assert!(timestamp > 0);

    // Retrieve metadata
    let resp = client
        .get(format!("{}/api/v1/can/0/asset/{}/meta", base_url, hash))
        .send()
        .await
        .unwrap();

    assert_eq!(resp.status(), 200);
    let body: serde_json::Value = resp.json().await.unwrap();
    assert_eq!(body["status"], "success");
    assert_eq!(body["data"]["hash"], hash);
    assert_eq!(body["data"]["mime_type"], "text/plain");
    assert_eq!(body["data"]["application"], "TestApp");
    assert_eq!(body["data"]["user"], "jason");
    assert_eq!(body["data"]["description"], "A test file");
    assert_eq!(body["data"]["human_filename"], "hello.txt");
    assert_eq!(body["data"]["human_path"], "/docs/");

    let tags = body["data"]["tags"].as_array().unwrap();
    assert!(tags.contains(&serde_json::json!("greeting")));
    assert!(tags.contains(&serde_json::json!("test")));
}

#[tokio::test]
async fn test_retrieve_physical_asset() {
    let (base_url, _tmp) = spawn_test_server().await;
    let client = reqwest::Client::new();

    let file_content = b"binary content here";
    let file_part = multipart::Part::bytes(file_content.to_vec())
        .file_name("data.bin")
        .mime_str("application/octet-stream")
        .unwrap();

    let form = multipart::Form::new().part("file", file_part);

    let resp = client
        .post(format!("{}/api/v1/can/0/ingest", base_url))
        .multipart(form)
        .send()
        .await
        .unwrap();

    let body: serde_json::Value = resp.json().await.unwrap();
    let hash = body["data"]["hash"].as_str().unwrap();

    // Download the asset
    let resp = client
        .get(format!("{}/api/v1/can/0/asset/{}", base_url, hash))
        .send()
        .await
        .unwrap();

    assert_eq!(resp.status(), 200);
    let downloaded = resp.bytes().await.unwrap();
    assert_eq!(downloaded.as_ref(), file_content);
}

#[tokio::test]
async fn test_patch_metadata() {
    let (base_url, _tmp) = spawn_test_server().await;
    let client = reqwest::Client::new();

    // Ingest
    let file_part = multipart::Part::bytes(b"patch me".to_vec())
        .file_name("patch.txt")
        .mime_str("text/plain")
        .unwrap();

    let form = multipart::Form::new()
        .part("file", file_part)
        .text("tags", "original")
        .text("description", "original desc");

    let resp = client
        .post(format!("{}/api/v1/can/0/ingest", base_url))
        .multipart(form)
        .send()
        .await
        .unwrap();

    let body: serde_json::Value = resp.json().await.unwrap();
    let hash = body["data"]["hash"].as_str().unwrap().to_string();

    // Patch
    let patch_body = serde_json::json!({
        "tags": ["updated", "new_tag"],
        "description": "updated description"
    });

    let resp = client
        .patch(format!("{}/api/v1/can/0/asset/{}", base_url, hash))
        .json(&patch_body)
        .send()
        .await
        .unwrap();

    assert_eq!(resp.status(), 200);

    // Verify
    let resp = client
        .get(format!("{}/api/v1/can/0/asset/{}/meta", base_url, hash))
        .send()
        .await
        .unwrap();

    let body: serde_json::Value = resp.json().await.unwrap();
    assert_eq!(body["data"]["description"], "updated description");
|
||||||
|
let tags = body["data"]["tags"].as_array().unwrap();
|
||||||
|
assert!(tags.contains(&serde_json::json!("updated")));
|
||||||
|
assert!(tags.contains(&serde_json::json!("new_tag")));
|
||||||
|
assert!(!tags.contains(&serde_json::json!("original")));
|
||||||
|
}
|
||||||
|
|
||||||
|
#[tokio::test]
|
||||||
|
async fn test_list_assets_pagination() {
|
||||||
|
let (base_url, _tmp) = spawn_test_server().await;
|
||||||
|
let client = reqwest::Client::new();
|
||||||
|
|
||||||
|
// Ingest 5 files
|
||||||
|
for i in 0..5 {
|
||||||
|
let content = format!("file content {}", i);
|
||||||
|
let file_part = multipart::Part::bytes(content.into_bytes())
|
||||||
|
.file_name(format!("file_{}.txt", i))
|
||||||
|
.mime_str("text/plain")
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
let form = multipart::Form::new()
|
||||||
|
.part("file", file_part)
|
||||||
|
.text("application", "ListTest");
|
||||||
|
|
||||||
|
client
|
||||||
|
.post(format!("{}/api/v1/can/0/ingest", base_url))
|
||||||
|
.multipart(form)
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
// Small delay so timestamps differ
|
||||||
|
tokio::time::sleep(std::time::Duration::from_millis(10)).await;
|
||||||
|
}
|
||||||
|
|
||||||
|
// List with limit=2
|
||||||
|
let resp = client
|
||||||
|
.get(format!(
|
||||||
|
"{}/api/v1/can/0/list?limit=2&offset=0",
|
||||||
|
base_url
|
||||||
|
))
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
assert_eq!(resp.status(), 200);
|
||||||
|
let body: serde_json::Value = resp.json().await.unwrap();
|
||||||
|
let items = body["data"]["items"].as_array().unwrap();
|
||||||
|
assert_eq!(items.len(), 2);
|
||||||
|
assert_eq!(body["data"]["pagination"]["total"], 5);
|
||||||
|
assert_eq!(body["data"]["pagination"]["limit"], 2);
|
||||||
|
|
||||||
|
// List with offset=3
|
||||||
|
let resp = client
|
||||||
|
.get(format!(
|
||||||
|
"{}/api/v1/can/0/list?limit=10&offset=3",
|
||||||
|
base_url
|
||||||
|
))
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
let body: serde_json::Value = resp.json().await.unwrap();
|
||||||
|
let items = body["data"]["items"].as_array().unwrap();
|
||||||
|
assert_eq!(items.len(), 2); // 5 total, offset 3 = 2 remaining
|
||||||
|
|
||||||
|
// List with application filter
|
||||||
|
let resp = client
|
||||||
|
.get(format!(
|
||||||
|
"{}/api/v1/can/0/list?application=ListTest",
|
||||||
|
base_url
|
||||||
|
))
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
let body: serde_json::Value = resp.json().await.unwrap();
|
||||||
|
assert_eq!(body["data"]["pagination"]["total"], 5);
|
||||||
|
}
|
#[tokio::test]
async fn test_search_assets() {
    let (base_url, _tmp) = spawn_test_server().await;
    let client = reqwest::Client::new();

    // Ingest files with different tags and metadata
    let ingest = |name: &str, tags: &str, mime: &str| {
        let base = base_url.clone();
        let client = client.clone();
        let name = name.to_string();
        let tags = tags.to_string();
        let mime = mime.to_string();
        async move {
            let file_part = multipart::Part::bytes(format!("content of {}", name).into_bytes())
                .file_name(name.clone())
                .mime_str(&mime)
                .unwrap();

            let form = multipart::Form::new()
                .part("file", file_part)
                .text("tags", tags)
                .text("user", "tester");

            let resp = client
                .post(format!("{}/api/v1/can/0/ingest", base))
                .multipart(form)
                .send()
                .await
                .unwrap();

            let body: serde_json::Value = resp.json().await.unwrap();
            body["data"]["hash"].as_str().unwrap().to_string()
        }
    };

    let _h1 = ingest("photo.jpg", "nature,landscape", "image/jpeg").await;
    tokio::time::sleep(std::time::Duration::from_millis(10)).await;
    let _h2 = ingest("doc.pdf", "work,report", "application/pdf").await;
    tokio::time::sleep(std::time::Duration::from_millis(10)).await;
    let _h3 = ingest("nature.png", "nature,macro", "image/png").await;

    // Search by tags
    let resp = client
        .get(format!("{}/api/v1/can/0/search?tags=nature", base_url))
        .send()
        .await
        .unwrap();

    let body: serde_json::Value = resp.json().await.unwrap();
    assert_eq!(body["data"]["pagination"]["total"], 2);

    // Search by mime_type
    let resp = client
        .get(format!(
            "{}/api/v1/can/0/search?mime_type=application/pdf",
            base_url
        ))
        .send()
        .await
        .unwrap();

    let body: serde_json::Value = resp.json().await.unwrap();
    assert_eq!(body["data"]["pagination"]["total"], 1);

    // Search by user
    let resp = client
        .get(format!("{}/api/v1/can/0/search?user=tester", base_url))
        .send()
        .await
        .unwrap();

    let body: serde_json::Value = resp.json().await.unwrap();
    assert_eq!(body["data"]["pagination"]["total"], 3);
}
#[tokio::test]
async fn test_asset_not_found() {
    let (base_url, _tmp) = spawn_test_server().await;
    let client = reqwest::Client::new();

    let resp = client
        .get(format!("{}/api/v1/can/0/asset/nonexistent_hash", base_url))
        .send()
        .await
        .unwrap();

    assert_eq!(resp.status(), 404);

    let resp = client
        .get(format!("{}/api/v1/can/0/asset/nonexistent_hash/meta", base_url))
        .send()
        .await
        .unwrap();

    assert_eq!(resp.status(), 404);
}

#[tokio::test]
async fn test_thumbnail_fallback_svg() {
    let (base_url, _tmp) = spawn_test_server().await;
    let client = reqwest::Client::new();

    // Ingest a non-image file
    let file_part = multipart::Part::bytes(b"not an image".to_vec())
        .file_name("doc.txt")
        .mime_str("text/plain")
        .unwrap();

    let form = multipart::Form::new().part("file", file_part);

    let resp = client
        .post(format!("{}/api/v1/can/0/ingest", base_url))
        .multipart(form)
        .send()
        .await
        .unwrap();

    let body: serde_json::Value = resp.json().await.unwrap();
    let hash = body["data"]["hash"].as_str().unwrap();

    // Request thumbnail - should get SVG fallback
    let resp = client
        .get(format!(
            "{}/api/v1/can/0/asset/{}/thumb/128/128",
            base_url, hash
        ))
        .send()
        .await
        .unwrap();

    assert_eq!(resp.status(), 200);
    let content_type = resp.headers().get("content-type").unwrap().to_str().unwrap();
    assert!(content_type.contains("svg"));
}

#[tokio::test]
async fn test_list_order() {
    let (base_url, _tmp) = spawn_test_server().await;
    let client = reqwest::Client::new();

    // Ingest 3 files with delays
    for i in 0..3 {
        let file_part = multipart::Part::bytes(format!("order test {}", i).into_bytes())
            .file_name(format!("order_{}.txt", i))
            .mime_str("text/plain")
            .unwrap();

        let form = multipart::Form::new().part("file", file_part);

        client
            .post(format!("{}/api/v1/can/0/ingest", base_url))
            .multipart(form)
            .send()
            .await
            .unwrap();

        tokio::time::sleep(std::time::Duration::from_millis(15)).await;
    }

    // List descending (default)
    let resp = client
        .get(format!("{}/api/v1/can/0/list?order=desc", base_url))
        .send()
        .await
        .unwrap();

    let body: serde_json::Value = resp.json().await.unwrap();
    let items = body["data"]["items"].as_array().unwrap();
    assert_eq!(items.len(), 3);
    let ts0 = items[0]["timestamp"].as_i64().unwrap();
    let ts1 = items[1]["timestamp"].as_i64().unwrap();
    let ts2 = items[2]["timestamp"].as_i64().unwrap();
    assert!(ts0 > ts1);
    assert!(ts1 > ts2);

    // List ascending
    let resp = client
        .get(format!("{}/api/v1/can/0/list?order=asc", base_url))
        .send()
        .await
        .unwrap();

    let body: serde_json::Value = resp.json().await.unwrap();
    let items = body["data"]["items"].as_array().unwrap();
    let ts0 = items[0]["timestamp"].as_i64().unwrap();
    let ts1 = items[1]["timestamp"].as_i64().unwrap();
    assert!(ts0 < ts1);
}
// ── JSON data ingest tests ──────────────────────────────────────────────

#[tokio::test]
async fn test_data_ingest_minimal() {
    let (base_url, _tmp) = spawn_test_server().await;
    let client = reqwest::Client::new();

    // Minimal call: just data
    let resp = client
        .post(format!("{}/api/v1/can/0/ingest/data", base_url))
        .json(&serde_json::json!({
            "data": { "key": "value", "count": 42 }
        }))
        .send()
        .await
        .unwrap();

    assert_eq!(resp.status(), 200);
    let body: serde_json::Value = resp.json().await.unwrap();
    assert_eq!(body["status"], "success");

    let hash = body["data"]["hash"].as_str().unwrap();
    let filename = body["data"]["filename"].as_str().unwrap();
    assert!(!hash.is_empty());
    assert!(filename.ends_with(".json"));

    // Retrieve and verify it's stored as pretty JSON
    let resp = client
        .get(format!("{}/api/v1/can/0/asset/{}", base_url, hash))
        .send()
        .await
        .unwrap();

    assert_eq!(resp.status(), 200);
    let stored: serde_json::Value = resp.json().await.unwrap();
    assert_eq!(stored["key"], "value");
    assert_eq!(stored["count"], 42);

    // Verify metadata defaults to application/json
    let resp = client
        .get(format!("{}/api/v1/can/0/asset/{}/meta", base_url, hash))
        .send()
        .await
        .unwrap();

    let meta: serde_json::Value = resp.json().await.unwrap();
    assert_eq!(meta["data"]["mime_type"], "application/json");
}

#[tokio::test]
async fn test_data_ingest_with_all_metadata() {
    let (base_url, _tmp) = spawn_test_server().await;
    let client = reqwest::Client::new();

    let resp = client
        .post(format!("{}/api/v1/can/0/ingest/data", base_url))
        .json(&serde_json::json!({
            "data": {
                "agent_id": "planner-v2",
                "session": "abc-123",
                "output": ["step1", "step2", "step3"]
            },
            "application": "AgentOrchestrator",
            "user": "agent_planner",
            "tags": "agent,plan,session",
            "description": "Planning agent output for session abc-123",
            "human_file_name": "plan_output.json",
            "human_readable_path": "/agents/planner/"
        }))
        .send()
        .await
        .unwrap();

    assert_eq!(resp.status(), 200);
    let body: serde_json::Value = resp.json().await.unwrap();
    let hash = body["data"]["hash"].as_str().unwrap();

    // Verify all metadata persisted
    let resp = client
        .get(format!("{}/api/v1/can/0/asset/{}/meta", base_url, hash))
        .send()
        .await
        .unwrap();

    let meta: serde_json::Value = resp.json().await.unwrap();
    assert_eq!(meta["data"]["application"], "AgentOrchestrator");
    assert_eq!(meta["data"]["user"], "agent_planner");
    assert_eq!(meta["data"]["description"], "Planning agent output for session abc-123");
    assert_eq!(meta["data"]["human_filename"], "plan_output.json");
    assert_eq!(meta["data"]["human_path"], "/agents/planner/");
    assert_eq!(meta["data"]["mime_type"], "application/json");

    let tags = meta["data"]["tags"].as_array().unwrap();
    assert!(tags.contains(&serde_json::json!("agent")));
    assert!(tags.contains(&serde_json::json!("plan")));
    assert!(tags.contains(&serde_json::json!("session")));
}
#[tokio::test]
async fn test_data_ingest_various_json_types() {
    let (base_url, _tmp) = spawn_test_server().await;
    let client = reqwest::Client::new();

    // Store a plain string
    let resp = client
        .post(format!("{}/api/v1/can/0/ingest/data", base_url))
        .json(&serde_json::json!({
            "data": "just a plain string log entry",
            "tags": "log"
        }))
        .send()
        .await
        .unwrap();

    assert_eq!(resp.status(), 200);
    let body: serde_json::Value = resp.json().await.unwrap();
    let hash_str = body["data"]["hash"].as_str().unwrap();

    // Retrieve and verify the string was stored
    let resp = client
        .get(format!("{}/api/v1/can/0/asset/{}", base_url, hash_str))
        .send()
        .await
        .unwrap();

    let stored: serde_json::Value = resp.json().await.unwrap();
    assert_eq!(stored, "just a plain string log entry");

    // Store an array
    let resp = client
        .post(format!("{}/api/v1/can/0/ingest/data", base_url))
        .json(&serde_json::json!({
            "data": [1, 2, 3, "four", null, true]
        }))
        .send()
        .await
        .unwrap();

    assert_eq!(resp.status(), 200);
    let body: serde_json::Value = resp.json().await.unwrap();
    let hash_arr = body["data"]["hash"].as_str().unwrap();

    let resp = client
        .get(format!("{}/api/v1/can/0/asset/{}", base_url, hash_arr))
        .send()
        .await
        .unwrap();

    let stored: serde_json::Value = resp.json().await.unwrap();
    assert_eq!(stored, serde_json::json!([1, 2, 3, "four", null, true]));

    // Store a number
    let resp = client
        .post(format!("{}/api/v1/can/0/ingest/data", base_url))
        .json(&serde_json::json!({
            "data": 99.5
        }))
        .send()
        .await
        .unwrap();

    assert_eq!(resp.status(), 200);
}

#[tokio::test]
async fn test_data_ingest_shows_up_in_list_and_search() {
    let (base_url, _tmp) = spawn_test_server().await;
    let client = reqwest::Client::new();

    // Ingest via JSON data endpoint
    client
        .post(format!("{}/api/v1/can/0/ingest/data", base_url))
        .json(&serde_json::json!({
            "data": { "sensor": "temperature", "value": 22.5 },
            "application": "IoTAgent",
            "tags": "sensor,temperature"
        }))
        .send()
        .await
        .unwrap();

    // Also ingest via multipart
    let file_part = multipart::Part::bytes(b"binary sensor log".to_vec())
        .file_name("sensor.bin")
        .mime_str("application/octet-stream")
        .unwrap();
    let form = multipart::Form::new()
        .part("file", file_part)
        .text("application", "IoTAgent")
        .text("tags", "sensor,binary");
    client
        .post(format!("{}/api/v1/can/0/ingest", base_url))
        .multipart(form)
        .send()
        .await
        .unwrap();

    tokio::time::sleep(std::time::Duration::from_millis(20)).await;

    // Both should show up in list
    let resp = client
        .get(format!("{}/api/v1/can/0/list?application=IoTAgent", base_url))
        .send()
        .await
        .unwrap();

    let body: serde_json::Value = resp.json().await.unwrap();
    assert_eq!(body["data"]["pagination"]["total"], 2);

    // Search by tag should find the JSON one
    let resp = client
        .get(format!("{}/api/v1/can/0/search?tags=temperature", base_url))
        .send()
        .await
        .unwrap();

    let body: serde_json::Value = resp.json().await.unwrap();
    assert_eq!(body["data"]["pagination"]["total"], 1);
    assert_eq!(body["data"]["items"][0]["mime_type"], "application/json");
}

#[tokio::test]
async fn test_data_ingest_custom_mime_type() {
    let (base_url, _tmp) = spawn_test_server().await;
    let client = reqwest::Client::new();

    // Agent stores data but overrides mime_type to text/plain
    let resp = client
        .post(format!("{}/api/v1/can/0/ingest/data", base_url))
        .json(&serde_json::json!({
            "data": "This is a plain text log line from the agent",
            "mime_type": "text/plain",
            "human_file_name": "agent.log"
        }))
        .send()
        .await
        .unwrap();

    assert_eq!(resp.status(), 200);
    let body: serde_json::Value = resp.json().await.unwrap();
    let filename = body["data"]["filename"].as_str().unwrap();
    assert!(filename.ends_with(".txt"), "Expected .txt extension, got {}", filename);

    let hash = body["data"]["hash"].as_str().unwrap();
    let resp = client
        .get(format!("{}/api/v1/can/0/asset/{}/meta", base_url, hash))
        .send()
        .await
        .unwrap();

    let meta: serde_json::Value = resp.json().await.unwrap();
    assert_eq!(meta["data"]["mime_type"], "text/plain");
    assert_eq!(meta["data"]["human_filename"], "agent.log");
}