Compare commits


6 Commits

9e6ee7b1ea feat: POST /api/v1/strategies/validate — structured DSL validation endpoint
Accepts a strategy config JSON, runs the full deserialization pipeline,
and returns all errors as a structured list. Always returns HTTP 200
(`valid: false` is a validation result, not an HTTP error).

Two-stage validation:
1. Structural — serde_path_to_error deserialization; returns one error
   with dotted field path (e.g. "rules[0].then.quantity") on failure.
2. Semantic — walks the full condition/expression/action tree and
   collects all errors simultaneously:
   - candle_interval and timeframe values checked against allowed set
   - ema_crossover: fast_period < slow_period enforced
   - apply_func: ATR/ADX/Supertrend/RSI blocked (require OHLC internals)
   - Sizing method parameters validated (positive amounts/percents,
     percent_of_balance ≤ 100)
   - rules array must be non-empty

Also documents the endpoint in docs/api.md.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-10 09:05:15 +02:00
84989e2d9c feat: declarative position sizing methods in strategy DSL
Adds a new SizingMethod discriminated union to QuantitySpec so that
scout (and other LLM clients) can specify sizing intent declaratively
instead of constructing expression trees:

  { "method": "fixed_sum",          "amount": "500" }
  { "method": "percent_of_balance", "percent": "2", "asset": "usdc" }
  { "method": "fixed_units",        "units": "0.01" }

Changes:
- swym-dal: SizingMethod enum (tagged by "method"), Sizing variant added
  to QuantitySpec between Fixed and Expr so untagged serde tries it first
- paper-executor: resolve_sizing() computes base-asset quantity from
  live price + balances map at candle close; integrated into the existing
  quantity match in RuleStrategy::generate_algo_orders
- schema.json: SizingFixedSum / SizingPercentOfBalance / SizingFixedUnits
  definitions, Action.quantity.oneOf updated
- docs/strategy-dsl.md: "Position sizing methods" section with examples

Sell orders continue to override quantity to close the full position
regardless of the sizing method specified.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-10 08:53:06 +02:00
494ce68e92 fix: dashboard balance toFixed crash + backfill range/style tweaks
PaperRunDetailPage: ConfigBalance.balance fields typed as `unknown`
instead of `number` — Rust Decimal serialises to a JSON string, so
`.toFixed()` was throwing. All access sites now go through
`extractNumber()`.

backfill.sh: expand array declarations to `+=()` style for easier
editing, extend range_to to 2026-03-08.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-10 08:52:55 +02:00
e6d464948f docs+script: document backfill strategy and chunk by quarter
- docs/api.md: add "Backfill strategy" subsection explaining nginx timeout
  risk by interval, how to detect a truncated backfill (non-JSON response),
  the quarterly-chunking approach with a self-contained shell example, and
  a coverage_pct interpretation table for post-backfill verification
- script/backfill.sh: rewrite to iterate in quarterly chunks per
  (instrument, interval), accumulate inserted counts, and print ERR on
  non-JSON responses so truncated requests are immediately visible

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-09 12:58:05 +02:00
216729eb25 feat: database backup and restore scripts
database-backup.sh dumps the full remote swym DB to a local custom-format
pg_dump file. database-restore.sh restores a local backup to the remote DB,
stopping all swym services beforehand and restarting them afterwards (via
trap so services come back up even on failure). Both scripts derive the DB
connection URL from config/dev/api.json following the same pattern as
seed-dev.sh.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-09 10:45:51 +02:00
3d41574fab feat: gate candle backtests on 95% coverage with actionable diagnostics
Rejects backtest submissions where the requested date range has fewer
than 95% of the expected candles, rather than silently queuing a run
against sparse data. The 400 error includes actual vs expected counts,
coverage percentage, and an ingestion status hint derived from the
per-interval candle cursor (caught up / lagging / never ingested).

Also enriches GET /api/v1/market-candles/coverage with expected_count
and coverage_pct fields so callers can pre-check readiness before
submitting a backtest. Documents the full incomplete-data workflow in
docs/api.md.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-09 09:30:00 +02:00
16 changed files with 936 additions and 32 deletions

Cargo.lock generated

@@ -2863,6 +2863,7 @@ dependencies = [
"rust_decimal",
"serde",
"serde_json",
"serde_path_to_error",
"sqlx",
"swym-dal",
"thiserror 2.0.18",


@@ -28,6 +28,7 @@ rust_decimal = { version = "1.40.0", features = ["maths"] }
rust_decimal_macros = "1.40.0"
serde = { version = "1.0.228", features = ["derive"] }
serde_json = "1.0.149"
serde_path_to_error = "0.1"
sha2 = "0.10"
sqlx = { version = "0.8", features = ["runtime-tokio", "tls-rustls", "postgres", "chrono", "rust_decimal", "uuid", "migrate"] }
swym-dal = { path = "crates/swym-dal" }


@@ -66,9 +66,12 @@
"properties": {
"side": { "type": "string", "enum": ["buy", "sell"] },
"quantity": {
"description": "Per-order size in base asset units. Either a fixed decimal string (e.g. \"0.001\") or a dynamic Expr evaluated at candle close. When an Expr returns None the order is skipped; negative values are clamped to zero.",
"description": "Per-order size in base asset units. Fixed decimal string (e.g. \"0.001\"), a declarative SizingMethod object, or a dynamic Expr object. When a method or Expr returns None the order is skipped; negative values are clamped to zero.",
"oneOf": [
{ "$ref": "#/definitions/DecimalString" },
{ "$ref": "#/definitions/SizingFixedSum" },
{ "$ref": "#/definitions/SizingPercentOfBalance" },
{ "$ref": "#/definitions/SizingFixedUnits" },
{ "$ref": "#/definitions/Expr" }
]
}
@@ -274,6 +277,37 @@
"right": { "$ref": "#/definitions/Expr" }
}
},
"SizingFixedSum": {
"description": "Buy `amount` worth of quote currency at the current price. qty = amount / current_price.",
"type": "object",
"required": ["method", "amount"],
"additionalProperties": false,
"properties": {
"method": { "const": "fixed_sum" },
"amount": { "$ref": "#/definitions/DecimalString", "description": "Quote-currency amount, e.g. \"500\" means buy $500 worth." }
}
},
"SizingPercentOfBalance": {
"description": "Buy percent% of the named asset's free balance worth of base asset. qty = balance(asset) * percent/100 / current_price.",
"type": "object",
"required": ["method", "percent", "asset"],
"additionalProperties": false,
"properties": {
"method": { "const": "percent_of_balance" },
"percent": { "$ref": "#/definitions/DecimalString", "description": "Percentage, e.g. \"2\" means 2% of the free balance." },
"asset": { "type": "string", "description": "Asset name to look up, e.g. \"usdc\". Matched case-insensitively." }
}
},
"SizingFixedUnits": {
"description": "Buy exactly `units` of base asset. Semantic alias for a fixed decimal quantity.",
"type": "object",
"required": ["method", "units"],
"additionalProperties": false,
"properties": {
"method": { "const": "fixed_units" },
"units": { "$ref": "#/definitions/DecimalString", "description": "Base asset quantity, e.g. \"0.01\" means 0.01 BTC." }
}
},
"Expr": {
"description": "A numeric expression evaluating to Option<Decimal>. Returns None (condition → false) when history is insufficient.",
"oneOf": [


@@ -62,19 +62,55 @@ pub struct Rule {
pub then: Action,
}
/// Per-order quantity: either a fixed decimal or a dynamic [`Expr`] evaluated at candle close.
/// Declarative position sizing method. Resolved to a base-asset quantity at candle close
/// using the live price and free balance available at that moment.
///
/// Fixed quantities serialise as plain decimal strings (`"0.001"`), so all existing strategy
/// configs are backward-compatible. Dynamic quantities serialise as an `Expr` JSON object.
/// All methods return `None` (order skipped) if required inputs are missing or the price is
/// zero. Negative results are clamped to zero.
///
/// When a dynamic expression returns `None` (insufficient data or invalid result) the order
/// for that rule is skipped. Negative results are clamped to zero before use.
/// Use `"method"` to identify the variant (analogous to `"kind"` in [`Expr`]):
///
/// ```json
/// // Buy $500 worth at current price
/// { "method": "fixed_sum", "amount": "500" }
///
/// // Risk 2% of free USDC balance
/// { "method": "percent_of_balance", "percent": "2", "asset": "usdc" }
///
/// // Exactly 0.01 BTC (semantic alias for a fixed decimal)
/// { "method": "fixed_units", "units": "0.01" }
/// ```
#[derive(Debug, Clone, Deserialize, Serialize)]
#[serde(tag = "method", rename_all = "snake_case")]
pub enum SizingMethod {
/// Buy `amount` worth of quote currency. `qty = amount / current_price`.
FixedSum { amount: Decimal },
/// Buy `percent`% of the named asset's free balance worth of base asset.
/// `qty = balance(asset) * percent / 100 / current_price`.
/// `asset` is matched case-insensitively (e.g. `"usdc"` or `"USDC"`).
PercentOfBalance { percent: Decimal, asset: String },
/// Buy exactly `units` of base asset (explicit alias for a fixed decimal).
FixedUnits { units: Decimal },
}
/// Per-order quantity: a fixed decimal, a declarative [`SizingMethod`], or a dynamic
/// [`Expr`] evaluated at candle close.
///
/// - **Fixed** (`"0.001"`) — plain decimal string; all legacy configs are backward-compatible.
/// - **Sizing** (`{ "method": "fixed_sum", ... }`) — named method resolved from live price/balance.
/// - **Expr** (`{ "kind": "bin_op", ... }`) — arbitrary expression tree.
///
/// Sizing and expression variants that return `None` (insufficient data) cause the order to be
/// skipped. Negative results are clamped to zero before use.
///
/// ```json
/// // Fixed
/// "quantity": "0.001"
///
/// // 1% of USDT balance ÷ (2 × ATR14) — ATR-based sizing
/// // Buy $500 worth — highest-value shorthand
/// "quantity": { "method": "fixed_sum", "amount": "500" }
///
/// // 1% of USDT balance ÷ (2 × ATR14) — ATR-based sizing via expression tree
/// "quantity": {
/// "kind": "bin_op", "op": "div",
/// "left": { "kind": "bin_op", "op": "mul",
@@ -89,6 +125,10 @@ pub struct Rule {
pub enum QuantitySpec {
/// A fixed per-order size in base asset units.
Fixed(Decimal),
/// A declarative sizing method resolved at execution time from live price and balance.
/// Tried before `Expr` because both serialise as JSON objects; disambiguated by tag key
/// (`"method"` here vs `"kind"` in `Expr`).
Sizing(Box<SizingMethod>),
/// A dynamic expression evaluated at candle close.
Expr(Box<Expr>),
}


@@ -132,7 +132,7 @@ interface AssetEntry {
interface ConfigBalance {
asset: string;
balance: { total: number; free: number };
balance: { total: unknown; free: unknown };
}
interface TearSheetData {
@@ -373,7 +373,7 @@ export default function PaperRunDetailPage() {
const openCount = (summary.assets ?? []).filter((entry) => {
const assetName = entry.asset ?? '';
const endFree = extractNumber(entry.tear_sheet?.balance_end?.free);
const startFree = rawBalances.find((b) => b.asset === assetName)?.balance?.free ?? 0;
const startFree = extractNumber(rawBalances.find((b) => b.asset === assetName)?.balance?.free) ?? 0;
// An asset has an open position if it ended with more free balance than it started with
// (i.e. a buy was placed but not closed).
return endFree != null && endFree > startFree + 1e-10;
@@ -570,7 +570,7 @@ export default function PaperRunDetailPage() {
const initialState = (cfg?.execution as Record<string, unknown>)?.initial_state as Record<string, unknown>;
const rawBalances = (initialState?.balances ?? []) as ConfigBalance[];
const configBalanceMap = new Map<string, number>(
rawBalances.map((b) => [b.asset, b.balance?.total ?? 0])
rawBalances.map((b) => [b.asset, extractNumber(b.balance?.total) ?? 0])
);
return (


@@ -47,10 +47,12 @@ The standard iteration loop for developing a profitable strategy:
```
1. Create an ingest config → historical trade data flows in via ingest-binance
2. Backfill candles → aggregate trades into OHLCV bars at desired intervals
3. Check data coverage → confirm the date range you want to backtest is available
2. Backfill candles → POST /api/v1/market-candles/backfill per interval
3. Check data coverage → GET /api/v1/market-candles/coverage/{exchange}/{symbol}
Verify coverage_pct ≥ 95% for your target date range
4. Author a strategy → POST /api/v1/strategies (optional, but enables grouping)
5. Submit a backtest → POST /api/v1/paper-runs (mode: "backtest")
400 with coverage details if data is incomplete
6. Poll until complete → GET /api/v1/paper-runs/{id}
7. Analyse result_summary → trade stats, Sharpe ratio, win rate, etc.
8. Download positions → GET /api/v1/paper-runs/{id}/positions (equity curve)
@@ -58,6 +60,30 @@ The standard iteration loop for developing a profitable strategy:
10. Revise the strategy, repeat
```
### Handling incomplete data
The backtest submission endpoint enforces a **95% candle coverage** requirement. If fewer than 95%
of the expected candles are present for the requested date range, the request is rejected with a
`400 Bad Request` response explaining the shortfall and what to do:
```json
{
"error": "insufficient 1h candle data for BTCUSDT on binance_spot: 4380 of 8760 expected candles available (50.0% coverage, minimum 95%). Candle ingestion last reached 2025-07-01 — it may still be catching up. Retry later or trigger a backfill via POST /api/v1/market-candles/backfill."
}
```
The error includes an ingestion status hint derived from the per-interval cursor:
| Hint | Meaning | Action |
|---|---|---|
| "Candle ingestion appears up to date" | Cursor is current; data is genuinely sparse | Run `POST /api/v1/market-candles/backfill` for the gap period |
| "Candle ingestion last reached {date}" | Cursor lags behind; worker is catching up | Wait and retry, or run a targeted backfill |
| "No candle ingestion cursor found" | Interval has never been ingested by the worker | Run `POST /api/v1/market-candles/backfill` to populate via Binance REST API |
Pre-check coverage before submitting a backtest using `GET /api/v1/market-candles/coverage/{exchange}/{symbol}`.
The response now includes `expected_count` and `coverage_pct` fields so you can verify readiness
without incurring a failed backtest submission.
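The pre-check can be scripted. A minimal sketch, assuming `jq` is available, using the sample coverage payload from this section — in practice the array would come from the coverage endpoint:

```shell
#!/usr/bin/env bash
# Pre-check coverage before submitting a backtest. The JSON below is the sample
# payload from this doc; in practice fetch it with:
#   curl -s https://<host>/api/v1/market-candles/coverage/binance_spot/BTCUSDT
coverage_json='[{"interval":"1h","coverage_pct":99.94},{"interval":"4h","coverage_pct":82.19}]'
target_interval="4h"
pct=$(echo "$coverage_json" | jq -r --arg i "$target_interval" \
  '.[] | select(.interval == $i) | .coverage_pct')
# The backtest endpoint enforces >= 95% coverage; mirror that threshold here.
if awk -v p="$pct" 'BEGIN { exit !(p >= 95) }'; then
  echo "ready to backtest (${pct}% coverage)"
else
  echo "backfill needed first (${pct}% coverage)"
fi
```

With the sample payload this reports that the `4h` interval (82.19%) needs a backfill before submission.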
---
## Data Preparation
@@ -216,19 +242,140 @@ Check which candle intervals are available and their date ranges.
"interval": "1h",
"first_open": "2025-01-01T00:00:00Z",
"last_close": "2026-01-01T00:00:00Z",
"count": 8760
"count": 8755,
"expected_count": 8760,
"coverage_pct": 99.94
},
{
"interval": "4h",
"first_open": "2025-01-01T00:00:00Z",
"last_close": "2026-01-01T00:00:00Z",
"count": 2190
"count": 1800,
"expected_count": 2190,
"coverage_pct": 82.19
}
]
```
Use this before submitting a backtest to confirm data is available for your chosen interval and
date range.
| Field | Description |
|---|---|
| `count` | Actual candle rows stored in the database |
| `expected_count` | Expected rows based on interval duration across the available range |
| `coverage_pct` | `count / expected_count × 100`, capped at 100. Values below 95 indicate gaps. |
Use this before submitting a backtest to confirm data is complete for your chosen interval and
date range. The backtest endpoint requires `coverage_pct ≥ 95` for the specific `[starts_at,
finishes_at]` window; `coverage_pct` here is computed over the full available range, so a
sub-range may be complete even if the overall coverage is lower.
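The arithmetic behind these fields can be sketched as follows — an assumed formula matching the field table above, not the server implementation:

```shell
#!/usr/bin/env bash
# coverage_pct for the 1h row above: expected_count derives from the interval
# duration across the range; coverage_pct = count / expected_count * 100, capped at 100.
interval_secs=3600                    # 1h interval
range_secs=$(( 365 * 24 * 3600 ))     # first_open..last_close spans one year
count=8755                            # actual rows stored
expected_count=$(( range_secs / interval_secs ))
coverage_pct=$(awk -v c="$count" -v e="$expected_count" \
  'BEGIN { p = c / e * 100; if (p > 100) p = 100; printf "%.2f", p }')
echo "expected_count=${expected_count} coverage_pct=${coverage_pct}"
```

With the sample numbers this reproduces the `8760` / `99.94` values shown in the 1h response row.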
---
#### Backfill strategy
**The nginx timeout problem.** The backfill endpoint fetches candles from Binance and inserts them
in chunks of 500, committing each chunk before moving to the next. An nginx reverse proxy in front
of the API applies a proxy read timeout (typically 60–120 s). For fine-grained intervals over a
long date range, a single request can exceed this timeout:
| Interval | Candles per year | Risk |
|---|---|---|
| `1d` | ~365 | No issue |
| `4h` | ~2 190 | No issue |
| `1h` | ~8 760 | Marginal |
| `15m` | ~35 040 | High |
| `5m` | ~105 120 | Very high |
| `1m` | ~525 600 | Certain timeout for multi-month ranges |
When nginx times out, it closes the connection and returns an HTML error page. The API handler is
cancelled mid-run: chunks already committed remain in the database, but the remainder of the range
is not inserted. The client never receives the JSON response with its `inserted` count — it gets
the nginx HTML error page instead.
**Detecting a truncated backfill.** A successful response is always JSON with an `inserted` field:
```json
{ "inserted": 4032, "interval": "5m", "from": "...", "to": "..." }
```
If the response body cannot be parsed as JSON, or the HTTP response is not `200`/`201`, the
backfill was cut short. Check the raw response body — a timeout returns an HTML `504 Gateway
Timeout` or similar.
After any backfill, always verify completeness:
```bash
curl -s https://<host>/api/v1/market-candles/coverage/binance_spot/BTCUSDT | \
jq '.[] | select(.coverage_pct < 95)'
```
An empty result means all intervals are ≥ 95% covered. Any rows returned indicate gaps that need
a targeted re-backfill for that interval and sub-range.
**Recommended approach: quarterly chunks.** Break large date ranges into ≤ 3-month windows, one
request per chunk. Each chunk completes well within the proxy timeout even for `1m` data. Because
backfill is idempotent (`ON CONFLICT DO NOTHING`), re-running a chunk that already has data is
safe and inserts 0 rows.
Example shell loop (no external date utilities required):
```bash
#!/usr/bin/env bash
# Backfill in quarterly chunks; idempotent — safe to re-run.
set -euo pipefail
instrument="ETHUSDC"
interval="1m"
range_from="2025-01-01"
range_to="2026-03-01"
quarter_chunks() {
local from="$1" to="$2" cursor="$from"
while [[ "$cursor" < "$to" ]]; do
local year month next_month next_year chunk_to
year="${cursor%%-*}"; month="${cursor#*-}"; month="${month%%-*}"
next_month=$(( 10#$month + 3 )); next_year=$year
if (( next_month > 12 )); then next_month=$(( next_month - 12 )); next_year=$(( next_year + 1 )); fi
chunk_to=$(printf "%04d-%02d-01" "$next_year" "$next_month")
[[ "$chunk_to" > "$to" ]] && chunk_to="$to"
echo "${cursor}T00:00:00Z"; echo "${chunk_to}T00:00:00Z"
cursor="$chunk_to"
done
}
mapfile -t chunks < <(quarter_chunks "$range_from" "$range_to")
i=0
while (( i < ${#chunks[@]} )); do
chunk_from="${chunks[$i]}"; chunk_to="${chunks[$((i+1))]}"; i=$(( i + 2 ))
response=$(curl -s --max-time 300 -X POST https://<host>/api/v1/market-candles/backfill \
-H 'Content-Type: application/json' \
-d "{\"exchange\":\"binance_spot\",\"symbol\":\"${instrument}\",\"interval\":\"${interval}\",\"from\":\"${chunk_from}\",\"to\":\"${chunk_to}\"}")
inserted=$(echo "${response}" | jq -r '.inserted // empty' 2>/dev/null)
if [[ -n "$inserted" ]]; then
echo "ok ${chunk_from} → ${chunk_to}: ${inserted} inserted"
else
echo "ERR ${chunk_from} → ${chunk_to}: ${response}" # likely HTML — backfill incomplete
fi
done
```
**Verifying the result.** After all chunks complete, check coverage:
```bash
curl -s https://<host>/api/v1/market-candles/coverage/binance_spot/ETHUSDC | jq .
```
Interpret the response:
| `coverage_pct` | Meaning | Action |
|---|---|---|
| ≥ 95 | Sufficient for backtesting | Proceed |
| 80–95 | Borderline — gaps exist | Re-run backfill for affected sub-range; check Binance availability for that period |
| < 80 | Significant gaps | Binance may not have data for that period, or a request failed silently; re-run and inspect per-chunk output |
| 0 or missing interval | No data at all | No backfill was run for this interval; run from scratch |
If `coverage_pct` is stuck below 95% after repeated backfills, the data may genuinely not exist
on Binance for that period (instrument was not listed, or trading was suspended). Narrow the
backtest `starts_at`/`finishes_at` to a range with full coverage.
---
@@ -351,6 +498,69 @@ Strategies can be created independently and then referenced when submitting runs
— strategy records are also created automatically when a paper run is submitted. The main benefit
of pre-creating a strategy is to get a stable UUID and group runs by `strategy_id`.
### `POST /api/v1/strategies/validate`
Validates a strategy config JSON without persisting anything. Runs the full deserialization
pipeline and semantic checks, returning every error as a structured list. Use this before
submitting a run to confirm the config is correct.
Always returns **HTTP 200**. `valid: false` is a validation result, not an HTTP error.
**Request body:** the strategy config object directly (same shape as the `config` field in
`POST /api/v1/strategies`):
```json
{
"type": "rule_based",
"candle_interval": "5m",
"rules": [
{
"when": { "kind": "ema_crossover", "fast_period": 21, "slow_period": 9, "direction": "above" },
"then": { "side": "buy", "quantity": { "method": "fixed_sum", "amount": "500" } }
}
]
}
```
**Response — valid strategy (200):**
```json
{
"valid": true,
"errors": []
}
```
**Response — invalid strategy (200):**
```json
{
"valid": false,
"errors": [
{
"path": "rules[0].when",
"message": "fast_period (21) must be less than slow_period (9)"
},
{
"path": "candle_interval",
"message": "\"2m\" is not a valid interval; must be one of: 1m, 5m, 15m, 1h, 4h, 1d"
}
]
}
```
Each error object has:
| Field | Type | Description |
|---|---|---|
| `message` | string | Human-readable description of the problem |
| `path` | string\|null | Dotted JSON path to the offending field (e.g. `rules[0].when.fast_period`), absent for top-level structural errors |
Serde deserialization errors short-circuit on the first structural problem (one error returned);
semantic errors are all collected and returned together.
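For a quick local smoke test of the same semantic rules, a hypothetical client-side check (assuming `jq`; the validate endpoint remains authoritative) can be sketched as:

```shell
#!/usr/bin/env bash
# Mirror two of the semantic checks locally: rules must be non-empty, and
# ema_crossover requires fast_period < slow_period. The config is a condensed
# version of the invalid sample request above (fast 21 >= slow 9).
config='{"type":"rule_based","candle_interval":"5m","rules":[{"when":{"kind":"ema_crossover","fast_period":21,"slow_period":9,"direction":"above"},"then":{"side":"buy","quantity":"0.001"}}]}'
errors=()
rule_count=$(echo "$config" | jq '.rules | length')
(( rule_count > 0 )) || errors+=("rules: array must be non-empty")
bad=$(echo "$config" | jq '[.rules[].when
  | select(.kind == "ema_crossover" and .fast_period >= .slow_period)] | length')
(( bad == 0 )) || errors+=("ema_crossover: fast_period must be less than slow_period")
printf '%s\n' "${errors[@]}"
```

On the sample config this prints the single crossover error, matching the server's `rules[0].when` diagnostic.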
---
### `POST /api/v1/strategies`
**Request body:**
@@ -520,10 +730,21 @@ Submit a new paper run (backtest or live).
**Validation rules:**
- `finishes_at` must be after `starts_at`
- For `"backtest"`: `starts_at` must be in the past; data must exist for the instrument and interval; the range must fall within available data
- For `"backtest"` with candles: the requested range must have **≥ 95% candle coverage** (actual count vs expected count derived from interval duration). Returns 400 with a diagnostic message and ingestion status hint if below threshold.
- For `"live"`: `finishes_at` must be in the future; `candle_interval` must not be set
- For `RuleBased` strategies: all timeframes referenced by expressions must have available candle data
- For `RuleBased` strategies: all timeframes referenced by expressions must have available candle data with ≥ 95% coverage
- Raw-tick backtests are rejected if the date range contains more than 500,000,000 trades
**Insufficient coverage response (400):**
```json
{
"error": "insufficient 1h candle data for BTCUSDT on binance_spot: 4380 of 8760 expected candles available (50.0% coverage, minimum 95%). Candle ingestion last reached 2025-07-01 — it may still be catching up. Retry later or trigger a backfill via POST /api/v1/market-candles/backfill."
}
```
See [Handling incomplete data](#handling-incomplete-data) for the interpretation guide.
**Response (201):** `PaperRunResponse`
---


@@ -22,12 +22,13 @@ candle_interval must be one of: "1m" | "5m" | "15m" | "1h" | "4h" | "1d"
"then": { "side": "buy" | "sell", "quantity": <QuantitySpec> }
}
QuantitySpec is either a fixed decimal string or a dynamic Expr:
"quantity": "0.001" // fixed: 0.001 BTC per order
"quantity": { "kind": "bin_op", ... } // dynamic: evaluated at candle close
QuantitySpec is a fixed decimal string, a named SizingMethod, or a dynamic Expr:
"quantity": "0.001" // fixed: 0.001 BTC per order
"quantity": { "method": "fixed_sum", "amount": "500" } // sizing method: buy $500 worth
"quantity": { "kind": "bin_op", ... } // dynamic Expr: evaluated at close
For fixed quantities, use "0.001" for BTC, "0.01" for ETH, etc.
For dynamic quantities, see "Position state and dynamic sizing" below.
For sizing methods, see "Position sizing methods" below.
For dynamic expression trees, see "Position state and dynamic sizing" below.
All rules that fire on the same candle close will execute. Typically you have one
buy rule (gated on position=flat) and one sell rule (gated on position=long).
@@ -430,6 +431,34 @@ Use a candle_interval and backtesting window long enough to cover warm-up.
"then": { "side": "sell", "quantity": "0.001" }
}
## Position sizing methods
Sizing methods are declarative shortcuts that resolve to a base-asset quantity at execution time
using the live price and free balances. They use a `"method"` tag instead of `"kind"`.
Sell orders always close the full open position regardless of the quantity specified — the
resolved quantity only matters for buy (entry) orders.
### fixed_sum — buy a fixed quote-currency amount
// Buy $500 worth of base asset at current price (qty = 500 / price)
"quantity": { "method": "fixed_sum", "amount": "500" }
### percent_of_balance — fraction of a named asset's free balance
// Risk 2% of free USDC balance (qty = balance("usdc") * 0.02 / price)
"quantity": { "method": "percent_of_balance", "percent": "2", "asset": "usdc" }
// asset is matched case-insensitively — "usdc" and "USDC" resolve to the same balance
### fixed_units — explicit base-asset quantity (alias for a fixed decimal)
// Buy exactly 0.01 BTC
"quantity": { "method": "fixed_units", "units": "0.01" }
If the price is zero or the required balance is absent, the order is skipped (same semantics as
an Expr that returns None).
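The resolution arithmetic for the three methods can be sketched numerically — assumed formulas taken from the descriptions above; the real resolver in paper-executor works in `Decimal`, not floats, and the price/balance snapshot here is made up for illustration:

```shell
#!/usr/bin/env bash
# Resolve each sizing method to a base-asset quantity at an illustrative
# candle-close snapshot (price and balance values are invented).
price=65000          # current close, quote per base
usdc_balance=10000   # free balance of the named asset
# fixed_sum: qty = amount / price
qty_fixed_sum=$(awk -v a=500 -v p="$price" 'BEGIN { printf "%.8f", a / p }')
# percent_of_balance: qty = balance(asset) * percent / 100 / price
qty_pct=$(awk -v b="$usdc_balance" -v pc=2 -v p="$price" \
  'BEGIN { printf "%.8f", b * pc / 100 / p }')
# fixed_units: qty = units, used verbatim
qty_units="0.01"
echo "fixed_sum=${qty_fixed_sum} percent_of_balance=${qty_pct} fixed_units=${qty_units}"
```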
### ATR-based position sizing (risk 1% of USDT balance per trade)
// Risk = balance × 1%. Stop distance = 2 × ATR(14). Size = risk / stop_distance.
@@ -459,7 +488,8 @@ Use a candle_interval and backtesting window long enough to cover warm-up.
## Constraints to respect
1. All numeric values (threshold, multiplier, value in literals) must be JSON strings, not numbers.
The exception is `quantity`: it may be a decimal string ("0.001") OR a JSON object Expr.
The exception is `quantity`: it may be a decimal string ("0.001"), a SizingMethod object
(with "method" tag), or an Expr object (with "kind" tag).
2. "offset" and "multiplier" may be omitted when their value is 0 / None respectively.
3. "field" in "func" defaults to "close" when omitted — include it explicitly to be safe.
4. Decimal values in "field": use snake_case: "open", "high", "low", "close", "volume".

script/backfill.sh Executable file

@@ -0,0 +1,79 @@
#!/usr/bin/env bash
# Backfill market candles in quarterly chunks to stay within nginx proxy timeouts.
# Each (instrument, interval, quarter) is a separate request; idempotent via ON CONFLICT DO NOTHING.
set -euo pipefail
declare -a instruments=()
instruments+=( BTCUSDC )
instruments+=( ETHUSDC )
instruments+=( SOLUSDC )
declare -a intervals=()
intervals+=( 1m )
intervals+=( 5m )
intervals+=( 15m )
intervals+=( 1h )
intervals+=( 4h )
intervals+=( 1d )
range_from="2025-01-01"
range_to="2026-03-08"
# Generate quarter boundaries between two dates (YYYY-MM-DD).
# Prints pairs of lines: chunk_from chunk_to (ISO-8601 with Z suffix).
quarter_chunks() {
local from="$1" to="$2"
local cursor="$from"
while [[ "$cursor" < "$to" ]]; do
local year month next_month next_year chunk_to
year="${cursor%%-*}"
month="${cursor#*-}"; month="${month%%-*}"
# Advance by 3 months
next_month=$(( 10#$month + 3 ))
next_year=$year
if (( next_month > 12 )); then
next_month=$(( next_month - 12 ))
next_year=$(( next_year + 1 ))
fi
chunk_to=$(printf "%04d-%02d-01" "$next_year" "$next_month")
# Clamp to overall range_to
if [[ "$chunk_to" > "$to" ]]; then
chunk_to="$to"
fi
echo "${cursor}T00:00:00Z"
echo "${chunk_to}T00:00:00Z"
cursor="$chunk_to"
done
}
# Read quarter pairs into an array.
mapfile -t chunks < <(quarter_chunks "$range_from" "$range_to")
total_inserted=0
for instrument in "${instruments[@]}"; do
for interval in "${intervals[@]}"; do
interval_inserted=0
i=0
while (( i < ${#chunks[@]} )); do
chunk_from="${chunks[$i]}"
chunk_to="${chunks[$((i+1))]}"
i=$(( i + 2 ))
response=$(curl -k -s --max-time 300 -X POST \
https://dev.swym.hanzalova.internal/api/v1/market-candles/backfill \
-H 'Content-Type: application/json' \
-d "{\"exchange\":\"binance_spot\",\"symbol\":\"${instrument}\",\"interval\":\"${interval}\",\"from\":\"${chunk_from}\",\"to\":\"${chunk_to}\"}")
inserted=$(echo "${response}" | jq -r '.inserted // empty' 2>/dev/null)
if [[ -n "$inserted" ]]; then
interval_inserted=$(( interval_inserted + inserted ))
echo " ok ${instrument} ${interval} ${chunk_from} → ${chunk_to}: ${inserted} inserted"
else
echo " err ${instrument} ${interval} ${chunk_from} → ${chunk_to}: ${response}"
fi
done
echo "${instrument} ${interval}: ${interval_inserted} total inserted"
total_inserted=$(( total_inserted + interval_inserted ))
done
done
echo "==> Done. ${total_inserted} candles inserted across all instruments and intervals."

script/database-backup.sh Executable file

@@ -0,0 +1,41 @@
#!/usr/bin/env bash
# Create a full backup of the remote swym database to a local file.
#
# Usage: ./script/database-backup.sh <target_path>
#
# target_path Local file path to write the backup to (pg_dump custom format).
# Example: /tmp/swym-backup-$(date +%Y%m%d-%H%M%S).dump
set -euo pipefail
REPO_ROOT="$(cd "$(dirname "$0")/.." && pwd)"
if [[ $# -lt 1 ]]; then
echo "Usage: $0 <target_path>" >&2
exit 1
fi
target_path="$1"
# Build connection string from dev api config (same logic as seed-dev.sh).
api_cfg="$REPO_ROOT/config/dev/api.json"
config_db_url=$(jq -r '.database.url' "$api_cfg")
db_user="$(echo "${config_db_url}" | cut -d '/' -f 3 | cut -d '@' -f 1)"
db_host="$(echo "${config_db_url}" | cut -d '@' -f 2 | cut -d ':' -f 1)"
db_port="$(echo "${config_db_url}" | cut -d ':' -f 3 | cut -d '/' -f 1)"
db_name="$(echo "${config_db_url}" | cut -d '/' -f 4 | cut -d '?' -f 1)"
db_ssl_mode=verify-full
db_ssl_root_cert=/etc/pki/ca-trust/source/anchors/ca.internal-rsa.pem
db_ssl_cert=/etc/pki/tls/misc/$(hostnamectl hostname)-rsa.pem
db_ssl_key=/etc/pki/tls/private/$(hostnamectl hostname)-rsa.pem
db_url="postgres://${db_user}@${db_host}:${db_port}/${db_name}?sslmode=${db_ssl_mode}&sslrootcert=${db_ssl_root_cert}&sslcert=${db_ssl_cert}&sslkey=${db_ssl_key}"
echo "==> Backing up ${db_user}@${db_host}:${db_port}/${db_name} → ${target_path}"
pg_dump \
--format=custom \
--no-password \
"$db_url" \
--file="$target_path"
size=$(du -sh "$target_path" | cut -f1)
echo "==> Backup complete: ${target_path} (${size}) at $(date -u '+%Y-%m-%dT%H:%M:%SZ')"
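The `cut`-based URL parsing the two scripts share can be checked in isolation. The URL below is an illustrative placeholder, not the real config value; the extraction lines are copied verbatim from the scripts:

```shell
#!/usr/bin/env bash
# Same field extraction as database-backup.sh / database-restore.sh, run
# against a sample URL with the shape the scripts assume: no password
# component, explicit port, optional query string.
config_db_url="postgres://swym@db.example.internal:5432/swym?sslmode=require"
db_user="$(echo "${config_db_url}" | cut -d '/' -f 3 | cut -d '@' -f 1)"
db_host="$(echo "${config_db_url}" | cut -d '@' -f 2 | cut -d ':' -f 1)"
db_port="$(echo "${config_db_url}" | cut -d ':' -f 3 | cut -d '/' -f 1)"
db_name="$(echo "${config_db_url}" | cut -d '/' -f 4 | cut -d '?' -f 1)"
echo "user=${db_user} host=${db_host} port=${db_port} name=${db_name}"
```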

script/database-restore.sh Executable file

@@ -0,0 +1,89 @@
#!/usr/bin/env bash
# Restore a local pg_dump backup to the remote swym database.
#
# Usage: ./script/database-restore.sh <backup_path>
#
# backup_path Local pg_dump custom-format file produced by database-backup.sh.
#
# WARNING: This will DROP and recreate all objects in the target database.
# All swym services on the app host will be stopped during the restore
# and restarted afterwards.
set -euo pipefail
REPO_ROOT="$(cd "$(dirname "$0")/.." && pwd)"
SSH_USER="grenade"
APP_HOST="quadbrat.hanzalova.internal"
SERVICES=(swym-api swym-ingest-binance swym-market-worker swym-paper-executor)
if [[ $# -lt 1 ]]; then
echo "Usage: $0 <backup_path>" >&2
exit 1
fi
backup_path="$1"
if [[ ! -f "$backup_path" ]]; then
echo "Error: backup file not found: $backup_path" >&2
exit 1
fi
# Build connection string from dev api config (same logic as seed-dev.sh).
api_cfg="$REPO_ROOT/config/dev/api.json"
config_db_url=$(jq -r '.database.url' "$api_cfg")
db_user="$(echo "${config_db_url}" | cut -d '/' -f 3 | cut -d '@' -f 1)"
db_host="$(echo "${config_db_url}" | cut -d '@' -f 2 | cut -d ':' -f 1)"
db_port="$(echo "${config_db_url}" | cut -d ':' -f 3 | cut -d '/' -f 1)"
db_name="$(echo "${config_db_url}" | cut -d '/' -f 4 | cut -d '?' -f 1)"
db_ssl_mode="verify-full"
db_ssl_root_cert="/etc/pki/ca-trust/source/anchors/ca.internal-rsa.pem"
db_ssl_cert="/etc/pki/tls/misc/$(hostnamectl hostname)-rsa.pem"
db_ssl_key="/etc/pki/tls/private/$(hostnamectl hostname)-rsa.pem"
db_url="postgres://${db_user}@${db_host}:${db_port}/${db_name}?sslmode=${db_ssl_mode}&sslrootcert=${db_ssl_root_cert}&sslcert=${db_ssl_cert}&sslkey=${db_ssl_key}"
backup_size=$(du -sh "$backup_path" | cut -f1)
echo "╔══════════════════════════════════════════════════════════════╗"
echo "║ DATABASE RESTORE WARNING ║"
echo "╠══════════════════════════════════════════════════════════════╣"
echo " Backup file : $backup_path ($backup_size)"
echo " Target DB : ${db_user}@${db_host}:${db_port}/${db_name}"
echo " App host : ${APP_HOST}"
echo ""
echo " This will DROP and recreate ALL objects in the target database."
echo " All swym services will be stopped and restarted."
echo "╚══════════════════════════════════════════════════════════════╝"
echo ""
read -r -p "Type 'yes' to proceed: " confirm
if [[ "$confirm" != "yes" ]]; then
echo "Aborted."
exit 1
fi
# Ensure services are restarted even if the restore fails.
restart_services() {
echo "==> Restarting swym services on ${APP_HOST}..."
for svc in "${SERVICES[@]}"; do
ssh "${SSH_USER}@${APP_HOST}" sudo systemctl start "$svc" || \
echo " Warning: failed to start ${svc} (may already be running or not installed)"
done
echo "==> Services restarted."
}
trap restart_services EXIT
echo "==> Stopping swym services on ${APP_HOST}..."
for svc in "${SERVICES[@]}"; do
ssh "${SSH_USER}@${APP_HOST}" sudo systemctl stop "$svc" || true
done
echo "==> Services stopped."
echo "==> Restoring ${backup_path} to ${db_user}@${db_host}:${db_port}/${db_name}..."
pg_restore \
--clean \
--if-exists \
--no-password \
--dbname="$db_url" \
"$backup_path"
echo "==> Restore complete at $(date -u '+%Y-%m-%dT%H:%M:%SZ')"
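The restart-on-failure guarantee relies on bash's EXIT trap, which fires even when a guarded command fails under `set -e`. A minimal standalone sketch of the pattern (hypothetical messages, no ssh involved):

```shell
# The EXIT trap runs the cleanup function even when a command fails,
# so cleanup (here, restarting services) is never skipped.
out="$(
bash -c '
set -euo pipefail
cleanup() { echo "services restarted"; }
trap cleanup EXIT
echo "restore step"
exit 1
' 2>/dev/null || true
)"
echo "${out}"
```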


@@ -12,6 +12,7 @@ rust_decimal = { workspace = true }
reqwest = { workspace = true }
serde = { workspace = true }
serde_json = { workspace = true }
serde_path_to_error = { workspace = true }
sqlx = { workspace = true }
swym-dal = { workspace = true }
thiserror = { workspace = true }


@@ -127,7 +127,12 @@ pub struct CoverageEntry {
pub interval: String,
pub first_open: DateTime<Utc>,
pub last_close: DateTime<Utc>,
/// Actual number of candles stored in the database for this interval.
pub count: i64,
/// Expected number of candles for the available range (derived from interval duration).
pub expected_count: i64,
/// Coverage as a percentage (0-100). Values below 95 indicate gaps in the data.
pub coverage_pct: f64,
}
pub async fn get_candle_coverage(
@@ -144,11 +149,24 @@ pub async fn get_candle_coverage(
let entries = rows
.into_iter()
.map(|(interval, first_open, last_close, count)| CoverageEntry {
interval,
first_open,
last_close,
count,
.map(|(interval, first_open, last_close, count)| {
let range_secs = (last_close - first_open).num_seconds().max(0) as u64;
let interval_secs =
swym_dal::models::strategy_config::parse_interval_secs(&interval).unwrap_or(1);
let expected_count = (range_secs / interval_secs) as i64;
let coverage_pct = if expected_count > 0 {
(count as f64 / expected_count as f64 * 100.0).min(100.0)
} else {
100.0
};
CoverageEntry {
interval,
first_open,
last_close,
count,
expected_count,
coverage_pct,
}
})
.collect();
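The expected-count and percentage arithmetic in the closure above can be sketched as a standalone function (plain `u64`/`f64`; in the real code the interval length comes from `parse_interval_secs` and falls back to 1 second):

```rust
// Sketch of the coverage arithmetic: expected candle count from the
// range and interval, then actual/expected capped at 100%.
fn coverage(range_secs: u64, interval_secs: u64, count: i64) -> (i64, f64) {
    let expected = (range_secs / interval_secs) as i64;
    let pct = if expected > 0 {
        (count as f64 / expected as f64 * 100.0).min(100.0)
    } else {
        100.0 // range shorter than one interval: treat as fully covered
    };
    (expected, pct)
}

fn main() {
    // 24h of 5m candles: 86_400 / 300 = 288 expected; 275 stored.
    let (expected, pct) = coverage(86_400, 300, 275);
    println!("{expected} {pct:.1}");
}
```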


@@ -14,7 +14,7 @@ use swym_dal::models::paper_run::{PaperRunRow, PaperRunStatus};
use swym_dal::models::paper_run_position::PaperRunPositionRow;
use swym_dal::models::strategy_config::{StrategyConfig, collect_timeframes};
use swym_dal::models::condition_audit::ConditionAuditRow;
use swym_dal::repo::{condition_audit, instrument, market_event, paper_run, paper_run_position, strategy};
use swym_dal::repo::{condition_audit, ingest_config, instrument, market_event, paper_run, paper_run_position, strategy};
use swym_dal::strategy_hash::{compute_strategy_hash, normalize_strategy};
// -- Request / Response types --
@@ -224,6 +224,16 @@ pub async fn create_paper_run(
)));
}
validate_candle_completeness(
&state.pool,
instrument.id,
&format!("{name_exchange} on {exchange_name}"),
interval,
req.starts_at,
req.finishes_at,
)
.await?;
// For rule-based strategies, also validate every additional timeframe
// referenced by expressions in the rule tree.
if let StrategyConfig::RuleBased(ref params) = run_config.strategy {
@@ -267,6 +277,16 @@ pub async fn create_paper_run(
data_end = tf_range.1,
)));
}
validate_candle_completeness(
&state.pool,
instrument.id,
&format!("{name_exchange} on {exchange_name}"),
tf,
req.starts_at,
req.finishes_at,
)
.await?;
}
}
} else {
@@ -589,3 +609,73 @@ pub async fn list_paper_run_candles(
candles,
}))
}
// ---------------------------------------------------------------------------
// Candle completeness validation
// ---------------------------------------------------------------------------
const MIN_CANDLE_COVERAGE: f64 = 0.95;
/// Validate that candle coverage for `[from, to)` meets the minimum threshold.
///
/// Computes the expected candle count from the interval duration and compares
/// it to the actual count in the database. Returns `Err(BadRequest)` with a
/// diagnostic message (including an ingestion status hint) when coverage is
/// below [`MIN_CANDLE_COVERAGE`].
async fn validate_candle_completeness(
pool: &sqlx::PgPool,
instrument_id: i32,
instrument_label: &str,
interval: &str,
from: DateTime<Utc>,
to: DateTime<Utc>,
) -> Result<(), ApiError> {
use swym_dal::models::strategy_config::parse_interval_secs;
let interval_secs = parse_interval_secs(interval)
.expect("interval already validated before this call");
let range_secs = (to - from).num_seconds().max(0) as u64;
let expected = (range_secs / interval_secs) as i64;
if expected == 0 {
return Ok(());
}
let actual = market_event::count_candles(pool, instrument_id, interval, from, to).await?;
let coverage = actual as f64 / expected as f64;
if coverage >= MIN_CANDLE_COVERAGE {
return Ok(());
}
// Build an ingestion status hint from the candle cursor.
let cursor = ingest_config::get_candle_cursor(pool, instrument_id, interval).await?;
let ingestion_hint = match cursor {
Some(date) => {
let yesterday = (Utc::now() - chrono::Duration::days(1)).date_naive();
if date >= yesterday {
"Candle ingestion appears up to date; the data may be genuinely sparse \
for this period."
.to_string()
} else {
format!(
"Candle ingestion last reached {date}; it may still be catching up. \
Retry later or trigger a backfill via POST /api/v1/market-candles/backfill."
)
}
}
None => {
"No candle ingestion cursor found for this interval. \
Trigger a backfill via POST /api/v1/market-candles/backfill."
.to_string()
}
};
Err(ApiError::BadRequest(format!(
"insufficient {interval} candle data for {instrument_label}: \
{actual} of {expected} expected candles available \
({pct:.1}% coverage, minimum {min:.0}%). {ingestion_hint}",
pct = coverage * 100.0,
min = MIN_CANDLE_COVERAGE * 100.0,
)))
}
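With `MIN_CANDLE_COVERAGE` at 0.95, a one-week run on hourly candles expects 168 candles, so the break-even is 159.6: 160 stored candles pass and 159 fail. A quick check of that arithmetic (plain `f64`, hypothetical counts):

```rust
const MIN_CANDLE_COVERAGE: f64 = 0.95;

fn main() {
    let interval_secs = 3_600u64;          // "1h"
    let range_secs = 7 * 24 * 3_600u64;    // one-week run
    let expected = (range_secs / interval_secs) as i64;
    // 168 * 0.95 = 159.6, so 160 passes and 159 fails.
    for actual in [159i64, 160] {
        let ok = actual as f64 / expected as f64 >= MIN_CANDLE_COVERAGE;
        println!("{actual}: {ok}");
    }
}
```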


@@ -4,13 +4,16 @@ use axum::{
http::StatusCode,
};
use chrono::{DateTime, Utc};
use rust_decimal::Decimal;
use serde::{Deserialize, Serialize};
use uuid::Uuid;
use crate::error::ApiError;
use crate::state::AppState;
use swym_dal::models::strategy::StrategyRow;
use swym_dal::models::strategy_config::StrategyConfig;
use swym_dal::models::strategy_config::{
Condition, Expr, QuantitySpec, RuleBasedParams, SizingMethod, StrategyConfig,
};
use swym_dal::repo::strategy;
use swym_dal::strategy_hash::{compute_strategy_hash, normalize_strategy};
@@ -117,3 +120,231 @@ pub async fn create_strategy(
Ok((StatusCode::CREATED, Json(row.into())))
}
// ---------------------------------------------------------------------------
// POST /api/v1/strategies/validate
// ---------------------------------------------------------------------------
const VALID_INTERVALS: &[&str] = &["1m", "5m", "15m", "1h", "4h", "1d"];
/// A single validation error with an optional dotted JSON path.
#[derive(Debug, Serialize)]
pub struct ValidationError {
pub message: String,
#[serde(skip_serializing_if = "Option::is_none")]
pub path: Option<String>,
}
impl ValidationError {
fn at(path: impl Into<String>, message: impl Into<String>) -> Self {
Self { message: message.into(), path: Some(path.into()) }
}
fn global(message: impl Into<String>) -> Self {
Self { message: message.into(), path: None }
}
}
#[derive(Debug, Serialize)]
pub struct ValidateStrategyResponse {
pub valid: bool,
pub errors: Vec<ValidationError>,
}
/// POST /api/v1/strategies/validate
///
/// Accepts the same strategy JSON that would go into a run config (`strategy` field),
/// runs the full deserialization pipeline, and returns every error as a structured list.
/// Always returns HTTP 200; `valid: false` means the strategy has errors.
///
/// Example request body:
/// ```json
/// { "type": "rule_based", "candle_interval": "5m", "rules": [...] }
/// ```
pub async fn validate_strategy(
Json(body): Json<serde_json::Value>,
) -> Json<ValidateStrategyResponse> {
// Stage 1: structural deserialization using serde_path_to_error for field paths.
// Re-serialize to a string so we can hand a serde_json::Deserializer to
// serde_path_to_error (serde_json::Value is not itself a Deserializer).
let config: StrategyConfig = {
let json_str = serde_json::to_string(&body)
.expect("re-serializing a parsed Value should not fail");
let mut json_de = serde_json::Deserializer::from_str(&json_str);
match serde_path_to_error::deserialize(&mut json_de) {
Ok(c) => c,
Err(e) => {
let path = e.path().to_string();
let err = if path.is_empty() || path == "." {
ValidationError::global(e.inner().to_string())
} else {
ValidationError::at(path, e.inner().to_string())
};
return Json(ValidateStrategyResponse { valid: false, errors: vec![err] });
}
}
};
// Stage 2: semantic validation — collects *all* errors.
let mut errors: Vec<ValidationError> = Vec::new();
match &config {
StrategyConfig::Default => {
// Default strategy has no further constraints.
}
StrategyConfig::RuleBased(params) => {
validate_rule_based(params, &mut errors);
}
}
let valid = errors.is_empty();
Json(ValidateStrategyResponse { valid, errors })
}
fn validate_rule_based(params: &RuleBasedParams, errors: &mut Vec<ValidationError>) {
// candle_interval must be a recognised value.
if !VALID_INTERVALS.contains(&params.candle_interval.as_str()) {
errors.push(ValidationError::at(
"candle_interval",
format!(
"\"{}\" is not a valid interval; must be one of: {}",
params.candle_interval,
VALID_INTERVALS.join(", ")
),
));
}
if params.rules.is_empty() {
errors.push(ValidationError::at("rules", "must contain at least one rule"));
}
for (i, rule) in params.rules.iter().enumerate() {
let prefix = format!("rules[{i}]");
validate_condition(&rule.when, &format!("{prefix}.when"), errors);
validate_action_quantity(&rule.then.quantity, &format!("{prefix}.then.quantity"), errors);
}
}
fn validate_condition(cond: &Condition, path: &str, errors: &mut Vec<ValidationError>) {
match cond {
Condition::EmaCrossover { fast_period, slow_period, timeframe, .. } => {
if fast_period >= slow_period {
errors.push(ValidationError::at(
path,
format!("fast_period ({fast_period}) must be less than slow_period ({slow_period})"),
));
}
validate_optional_timeframe(timeframe.as_deref(), path, errors);
}
Condition::EmaTrend { timeframe, .. }
| Condition::Rsi { timeframe, .. }
| Condition::Bollinger { timeframe, .. }
| Condition::PriceLevel { timeframe, .. } => {
validate_optional_timeframe(timeframe.as_deref(), path, errors);
}
Condition::AllOf { conditions } => {
for (i, c) in conditions.iter().enumerate() {
validate_condition(c, &format!("{path}.conditions[{i}]"), errors);
}
}
Condition::AnyOf { conditions } => {
for (i, c) in conditions.iter().enumerate() {
validate_condition(c, &format!("{path}.conditions[{i}]"), errors);
}
}
Condition::Not { condition } => {
validate_condition(condition, &format!("{path}.condition"), errors);
}
Condition::EventCount { condition, .. } => {
validate_condition(condition, &format!("{path}.condition"), errors);
}
Condition::Compare { left, right, .. } => {
validate_expr(left, &format!("{path}.left"), errors);
validate_expr(right, &format!("{path}.right"), errors);
}
Condition::CrossOver { left, right } | Condition::CrossUnder { left, right } => {
validate_expr(left, &format!("{path}.left"), errors);
validate_expr(right, &format!("{path}.right"), errors);
}
Condition::Position { .. } => {}
}
}
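Each recursive call extends the dotted path, so an error deep inside a nested `all_of` lands at an addressable location. Assuming a rule whose `when` is an `all_of` wrapping a `compare`, the path builds up like this (illustrative only; no real `Condition` values involved):

```rust
fn main() {
    // Path accumulation as in validate_rule_based -> validate_condition
    // -> validate_expr.
    let prefix = format!("rules[{}]", 0);
    let when = format!("{prefix}.when");
    let nested = format!("{when}.conditions[{}]", 1);
    let leaf = format!("{nested}.left");
    println!("{leaf}");
}
```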
fn validate_expr(expr: &Expr, path: &str, errors: &mut Vec<ValidationError>) {
match expr {
Expr::Func { name, .. } => {
// Plain `func` nodes accept any indicator; the OHLC restriction below
// applies only to `apply_func`, which feeds a derived input series.
let _ = name;
}
Expr::ApplyFunc { name, input, .. } => {
use swym_dal::models::strategy_config::FuncName;
if matches!(name, FuncName::Atr | FuncName::Adx | FuncName::Supertrend | FuncName::Rsi) {
errors.push(ValidationError::at(
path,
format!("{name:?} cannot be used inside apply_func; it requires OHLC data"),
));
}
validate_expr(input, &format!("{path}.input"), errors);
}
Expr::BinOp { left, right, .. } => {
validate_expr(left, &format!("{path}.left"), errors);
validate_expr(right, &format!("{path}.right"), errors);
}
Expr::UnaryOp { operand, .. } => {
validate_expr(operand, &format!("{path}.operand"), errors);
}
Expr::BarsSince { condition, .. } => {
validate_condition(condition, &format!("{path}.condition"), errors);
}
// Leaf nodes (Literal, Field, Balance, EntryPrice, PositionQuantity,
// UnrealisedPnl, BarsSinceEntry) need no semantic validation.
_ => {}
}
}
fn validate_action_quantity(qty: &QuantitySpec, path: &str, errors: &mut Vec<ValidationError>) {
match qty {
QuantitySpec::Fixed(d) => {
if *d <= Decimal::ZERO {
errors.push(ValidationError::at(path, "fixed quantity must be greater than zero"));
}
}
QuantitySpec::Sizing(method) => match method.as_ref() {
SizingMethod::FixedSum { amount } => {
if *amount <= Decimal::ZERO {
errors.push(ValidationError::at(path, "fixed_sum amount must be greater than zero"));
}
}
SizingMethod::PercentOfBalance { percent, .. } => {
if *percent <= Decimal::ZERO {
errors.push(ValidationError::at(path, "percent_of_balance percent must be greater than zero"));
} else if *percent > Decimal::ONE_HUNDRED {
errors.push(ValidationError::at(path, "percent_of_balance percent exceeds 100"));
}
}
SizingMethod::FixedUnits { units } => {
if *units <= Decimal::ZERO {
errors.push(ValidationError::at(path, "fixed_units must be greater than zero"));
}
}
},
QuantitySpec::Expr(e) => {
validate_expr(e, path, errors);
}
}
}
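The three sizing variants checked here correspond to the declarative JSON shapes accepted by the DSL (amounts are decimal strings):

```json
{ "method": "fixed_sum",          "amount": "500" }
{ "method": "percent_of_balance", "percent": "2", "asset": "usdc" }
{ "method": "fixed_units",        "units": "0.01" }
```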
fn validate_optional_timeframe(tf: Option<&str>, path: &str, errors: &mut Vec<ValidationError>) {
if let Some(t) = tf {
if !VALID_INTERVALS.contains(&t) {
errors.push(ValidationError::at(
format!("{path}.timeframe"),
format!(
"\"{}\" is not a valid timeframe; must be one of: {}",
t,
VALID_INTERVALS.join(", ")
),
));
}
}
}


@@ -74,6 +74,7 @@ async fn main() -> Result<(), Box<dyn std::error::Error>> {
.route("/api/v1/exchanges/{name}/sub-kinds", get(handlers::exchanges::get_exchange_sub_kinds))
.route("/api/v1/exchanges/{name}/instruments", get(handlers::exchanges::get_exchange_instruments))
.route("/api/v1/instruments/{exchange}/{name}/data-range", get(handlers::paper_runs::get_instrument_data_range))
.route("/api/v1/strategies/validate", post(handlers::strategies::validate_strategy))
.route("/api/v1/strategies", get(handlers::strategies::list_strategies))
.route("/api/v1/strategies", post(handlers::strategies::create_strategy))
.route("/api/v1/strategies/{id}", get(handlers::strategies::get_strategy))


@@ -33,7 +33,7 @@ use barter_instrument::{
instrument::InstrumentIndex,
};
use rust_decimal::Decimal;
use swym_dal::models::strategy_config::{ActionSide, QuantitySpec, Rule};
use swym_dal::models::strategy_config::{ActionSide, QuantitySpec, Rule, SizingMethod};
use crate::strategy::{
SwymState,
@@ -93,6 +93,32 @@ impl RuleStrategy {
}
/// Resolve a [`SizingMethod`] to a base-asset quantity using the current price and balance map.
///
/// Returns `None` if `price` is zero or if a required balance is absent.
fn resolve_sizing(
method: &SizingMethod,
price: Decimal,
balances: &std::collections::HashMap<String, Decimal>,
) -> Option<Decimal> {
if price.is_zero() {
return None;
}
match method {
SizingMethod::FixedSum { amount } => {
Some((*amount / price).max(Decimal::ZERO))
}
SizingMethod::PercentOfBalance { percent, asset } => {
let balance = balances.get(asset.to_lowercase().as_str()).copied()?;
let notional = balance * percent / Decimal::ONE_HUNDRED;
Some((notional / price).max(Decimal::ZERO))
}
SizingMethod::FixedUnits { units } => {
Some((*units).max(Decimal::ZERO))
}
}
}
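The `percent_of_balance` arm above divides a notional (balance × percent / 100) by the live price. The same arithmetic with plain `f64` standing in for `rust_decimal` (hypothetical balances; the real code also lowercases the asset key before lookup):

```rust
use std::collections::HashMap;

// f64 stand-in for resolve_sizing's percent_of_balance arm.
fn percent_of_balance(
    balances: &HashMap<String, f64>,
    asset: &str,
    percent: f64,
    price: f64,
) -> Option<f64> {
    if price == 0.0 {
        return None; // mirrors the price.is_zero() guard
    }
    let balance = balances.get(&asset.to_lowercase()).copied()?;
    let notional = balance * percent / 100.0;
    Some((notional / price).max(0.0))
}

fn main() {
    let balances = HashMap::from([("usdc".to_string(), 10_000.0)]);
    // 2% of a 10_000 USDC balance at a price of 50_000: 200 / 50_000 base units.
    let qty = percent_of_balance(&balances, "USDC", 2.0, 50_000.0).unwrap();
    println!("{qty}");
}
```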
impl AlgoStrategy for RuleStrategy {
type State = SwymState;
@@ -185,9 +211,10 @@ impl AlgoStrategy for RuleStrategy {
ActionSide::Sell => Side::Sell,
};
// Resolve quantity: fixed or dynamic expression.
// Resolve quantity: fixed, declarative sizing method, or dynamic expression.
let base_quantity = match &rule.then.quantity {
QuantitySpec::Fixed(d) => Some(*d),
QuantitySpec::Sizing(s) => resolve_sizing(s, price, &balances),
QuantitySpec::Expr(e) => eval_expr(e, &ctx).map(|v| v.max(Decimal::ZERO)),
};
let Some(base_quantity) = base_quantity else { continue };