Add ModelFamily enum (config.rs) detected from the model name:
- DeepSeekR1: matched on "deepseek-r1", "r1-distill". R1 thinking blocks
consume thousands of output tokens before the JSON, so max_output_tokens is
raised to 32768 and the HTTP timeout to 300s; the prompt tells the model its
<think> output is stripped and that only the bare JSON is used
- Generic: previous behaviour (8192 tokens, 120s timeout)
ClaudeClient stores the detected family and uses it for max_tokens and
the request timeout. family() accessor lets the caller (agent.rs) pass
it into system_prompt().
prompts::system_prompt() now accepts &ModelFamily and injects a
family-specific "output format" section in place of the hardcoded
"How to respond" block. New families can be added by extending the
enum and the match arms without touching prompt logic elsewhere.
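A minimal sketch of the detection and per-family limits described above; the names (`detect`, `limits`) and exact match patterns are assumptions, not necessarily what config.rs uses:

```rust
// Hypothetical sketch of ModelFamily detection from the model name.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum ModelFamily {
    DeepSeekR1,
    Generic,
}

impl ModelFamily {
    /// Detect the family from a (case-insensitive) model name.
    fn detect(model_name: &str) -> Self {
        let name = model_name.to_lowercase();
        if name.contains("deepseek-r1") || name.contains("r1-distill") {
            ModelFamily::DeepSeekR1
        } else {
            ModelFamily::Generic
        }
    }

    /// (max_output_tokens, HTTP timeout in seconds) per family.
    fn limits(&self) -> (u32, u64) {
        match self {
            ModelFamily::DeepSeekR1 => (32768, 300), // headroom for <think> tokens
            ModelFamily::Generic => (8192, 120),     // previous behaviour
        }
    }
}

fn main() {
    assert_eq!(ModelFamily::detect("DeepSeek-R1-Distill-Qwen-32B"), ModelFamily::DeepSeekR1);
    assert_eq!(ModelFamily::detect("claude-sonnet-4"), ModelFamily::Generic);
}
```

Adding a new family stays local: one enum variant plus new arms in `detect` and `limits`.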
Also: log the full anyhow cause chain ({:#}) on JSON extraction failure and
show the response length alongside the truncated preview, to make future
diagnosis easier.
Root cause of the 2026-03-09T18:29:22 run failure: R1's thinking tokens
counted against max_tokens:8192, leaving only ~500 chars for the actual
JSON, which was always truncated mid-object.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
R1 models use 500-2000 tokens for <think> blocks before the final
response. 4096 was too tight: the model would exhaust the budget
mid-thought and never emit the JSON.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
DeepSeek-R1 models emit <think>...</think> before their actual response.
The brace-counting extractor would grab the first { inside the thinking
block (which contains partial JSON fragments) rather than the final
strategy JSON.
strip_think_blocks() removes all <think>...</think> sections including
unterminated blocks (truncated responses), leaving only the final output
for extract_json to process.
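A sketch of the stripping logic described above, with hypothetical signature; note the handling of an unterminated <think> from a truncated response:

```rust
// Remove every <think>...</think> span from a model response. If a <think>
// is never closed (truncated response), discard everything from that tag on,
// leaving only the text before it for extract_json to process.
fn strip_think_blocks(input: &str) -> String {
    let mut out = String::new();
    let mut rest = input;
    while let Some(start) = rest.find("<think>") {
        out.push_str(&rest[..start]);
        match rest[start..].find("</think>") {
            Some(end) => rest = &rest[start + end + "</think>".len()..],
            None => return out, // unterminated block: drop the tail
        }
    }
    out.push_str(rest);
    out
}

fn main() {
    // the thinking block contains a partial JSON fragment that used to
    // confuse the brace-counting extractor
    let resp = "<think>partial {\"a\": 1</think>{\"strategy\": true}";
    assert_eq!(strip_think_blocks(resp), "{\"strategy\": true}");
    // truncated mid-thought: nothing after the tag survives
    assert_eq!(strip_think_blocks("ok <think>never closed"), "ok ");
}
```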
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- validate_strategy(): hard error if quantity is not a parseable decimal
(catches "ATR_SIZED" etc. before sending to swym API); soft warning if
a sell rule has no entry_price stop-loss or no bars_since_entry time exit
- Hard validation errors skip the backtest and feed errors back to the LLM
via IterationRecord.validation_notes included in summary()
- json_contains_kind(): recursive helper to search strategy JSON tree
- diagnose_history(): add cycling detection: is_converged now also triggers
when any avg_sharpe value appears 3+ times anywhere in history (not only as
a streak in the last 3), catching the alternating RSI<30 / RSI<25 pattern
seen in the latest run
- prompts: clarify that quantity must parse as a float; list invalid
placeholder strings ("ATR_SIZED", "FULL_BALANCE", "dynamic", etc.)
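The hard quantity check could look roughly like this; `f64::parse` stands in for whatever decimal parser the project actually uses, and the function name is an assumption:

```rust
// Hard validation: quantity must be a parseable positive decimal string.
// Placeholder strings like "ATR_SIZED" or "FULL_BALANCE" are rejected here,
// before the strategy is ever sent to the swym API.
fn validate_quantity(quantity: &str) -> Result<(), String> {
    match quantity.parse::<f64>() {
        Ok(q) if q > 0.0 => Ok(()),
        _ => Err(format!(
            "quantity {:?} is not a parseable positive decimal \
             (placeholders like \"ATR_SIZED\" are invalid)",
            quantity
        )),
    }
}

fn main() {
    assert!(validate_quantity("0.001").is_ok());
    assert!(validate_quantity("ATR_SIZED").is_err());
    assert!(validate_quantity("FULL_BALANCE").is_err());
    assert!(validate_quantity("dynamic").is_err());
}
```

On failure, the error string would be the kind of message fed back to the LLM via IterationRecord.validation_notes.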
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Three related improvements to help the model learn and explore effectively:
Strategy JSON in history: include the compact strategy JSON in each
IterationRecord::summary() so the LLM knows exactly what was tested in
every past iteration, not just the outcome metrics. Without this the model
had no record of what it tried once conversation history was trimmed.
Rule comment in audit: include rule_comment from the condition audit in
the formatted audit string so the LLM can correlate hit-rate data with
the rule's stated purpose.
Convergence detection and anti-anchoring: diagnose_history() now returns
(String, bool) where the bool signals that the last 3 iterations had
avg_sharpe spread < 0.03 (model stuck in local optimum). When converged:
- Emit a ⚠ CONVERGENCE DETECTED note listing untried candle intervals
- Suppress best_so_far JSON to break the anchoring effect that was
causing the model to produce near-identical strategies for 13+ iterations
- Targeted "try a different approach" instruction
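The spread check above is simple enough to sketch; this assumes a plain slice of per-iteration avg_sharpe values and is not the project's actual signature:

```rust
// Convergence heuristic: the last 3 iterations' avg_sharpe values sit within
// a spread of 0.03, suggesting the model is stuck in a local optimum.
fn is_converged(avg_sharpes: &[f64]) -> bool {
    if avg_sharpes.len() < 3 {
        return false;
    }
    let last3 = &avg_sharpes[avg_sharpes.len() - 3..];
    let max = last3.iter().cloned().fold(f64::MIN, f64::max);
    let min = last3.iter().cloned().fold(f64::MAX, f64::min);
    max - min < 0.03
}

fn main() {
    // spread of the last 3 is 0.02: converged, trigger anti-anchoring
    assert!(is_converged(&[0.90, 1.21, 1.22, 1.23]));
    // still exploring: spread well above 0.03
    assert!(!is_converged(&[0.5, 0.9, 1.4]));
}
```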
Also add volume-as-field clarification to the DSL mistakes section in
the system prompt, fixing the "unknown variant `volume`" submit error.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
The swym API response structure differs from what the code previously
assumed. Fix all field extraction to match the real shape:
- total_positions: backtest_metadata.position_count (not top-level)
- sharpe_ratio, win_rate, profit_factor: instruments.{key}.{field}.value
wrapped decimal strings (not plain floats); treat Decimal::MAX sentinel
(~7.9e28) as None
- net_pnl: instruments.{key}.pnl (plain decimal string)
- instrument key derived as "{exchange_no_underscores}-{base}_{quote}"
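Two of the fixes above can be sketched briefly; the sentinel constant, function names, and f64-in-place-of-Decimal are all assumptions for illustration:

```rust
// The rust_decimal Decimal::MAX (~7.9e28) is the API's "not available"
// sentinel; map it (and anything near it) to None instead of a real metric.
const DECIMAL_MAX_SENTINEL: f64 = 7.9e28;

fn parse_metric(value: &str) -> Option<f64> {
    let v: f64 = value.parse().ok()?;
    if v >= DECIMAL_MAX_SENTINEL { None } else { Some(v) }
}

/// Instrument key as described: "{exchange_no_underscores}-{base}_{quote}".
fn instrument_key(exchange: &str, base: &str, quote: &str) -> String {
    format!("{}-{}_{}", exchange.replace('_', ""), base, quote)
}

fn main() {
    assert_eq!(parse_metric("1.83"), Some(1.83));
    // Decimal::MAX as a string: treated as missing data
    assert_eq!(parse_metric("79228162514264337593543950335"), None);
    assert_eq!(instrument_key("binance_spot", "BTC", "USDT"), "binancespot-BTC_USDT");
}
```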
Also fix coverage-based backtest_from clamping: after the coverage
check, compute the effective backtest start as the max first_open across
all instruments × common intervals, so strategies never fail with
"requested range outside available data". Log per-interval date ranges
for each instrument at startup.
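The clamping rule reduces to a max over first_open timestamps; this sketch uses Unix-second i64s where the real code may use a proper date type:

```rust
// Effective backtest start: the latest first_open across every
// instrument x interval pair, so no series is asked for data before it
// exists ("requested range outside available data").
fn effective_backtest_from(requested_from: i64, first_opens: &[i64]) -> i64 {
    let latest_first_open = first_opens.iter().copied().max().unwrap_or(requested_from);
    requested_from.max(latest_first_open)
}

fn main() {
    // one instrument's candles only start later: clamp the start forward
    assert_eq!(
        effective_backtest_from(1_600_000_000, &[1_590_000_000, 1_650_000_000]),
        1_650_000_000
    );
    // all series predate the request: keep the requested start
    assert_eq!(effective_backtest_from(1_700_000_000, &[1_590_000_000]), 1_700_000_000);
}
```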
Additionally:
- Compact format_audit_summary to handle {"rules":[...],"total_bars":N}
structure with per-condition true_count/evaluated breakdown
- Drop avg_bars from summary_line (field absent from API)
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
The model was generating Expr objects for quantity (e.g. ATR-based sizing),
causing consistent QuantitySpec deserialization failures. Replace the
"prefer dynamic sizing" hint with an explicit rule: quantity must always
be a fixed decimal string like "0.001".
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Shows correct usage of rsi/bollinger/ema_trend condition shortcuts, entry_price
and bars_since_entry ExprKind values, and func/cross_over/bin_op expressions.
Also calls out common model mistakes (rsi as ExprKind, bars_since_entry as
FuncName, expr_field) and adds a note that spot strategies are long-only.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Log full strategy JSON at debug level, show full anyhow cause chain on
submit failures, surface condition_audit_summary for 0-trade results in
both logs and the summary fed back to the AI each iteration.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>