Problem
The tokenizer outputs []models.TokenWithSpan, but the parser consumes []token.Token. A 716-line conversion layer (token_conversion.go) exists solely to strip the span information, an unnecessary piece of architectural duplication.
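A minimal sketch of the duplication, using hypothetical type shapes (the real definitions live in the models and token packages, and the real conversion code in token_conversion.go is far larger):

```go
package main

import "fmt"

// Span and the two token types below are illustrative stand-ins,
// not the actual definitions from the models/token packages.
type Span struct{ StartLine, StartCol, EndLine, EndCol int }

// TokenWithSpan is what the tokenizer emits: token text plus position.
type TokenWithSpan struct {
	Value string
	Span  Span
}

// Token is what the parser consumes today: the same text, no position.
type Token struct {
	Value string
}

// stripSpans mirrors the role of the conversion layer: an O(N) pass
// that copies every token just to discard its span.
func stripSpans(in []TokenWithSpan) []Token {
	out := make([]Token, len(in))
	for i, t := range in {
		out[i] = Token{Value: t.Value}
	}
	return out
}

func main() {
	src := []TokenWithSpan{
		{Value: "SELECT", Span: Span{1, 1, 1, 6}},
		{Value: "1", Span: Span{1, 8, 1, 8}},
	}
	// Extra allocation and full copy on every parse.
	plain := stripSpans(src)
	fmt.Println(len(plain), plain[0].Value)
}
```

The allocation in stripSpans is the cost the pooling in the conversion layer is trying to amortize.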
Impact
- Every parsing operation triggers an O(N) conversion of the entire token stream
- Two parallel token representations create confusion
- The conversion layer has its own keywordBufferPool, a smell indicating the conversion cost is real
Fix
Unify on models.TokenWithSpan: have the parser consume it directly. Record positions separately only when ParseWithPositions() is called. This eliminates the conversion step entirely.
Source
Pre-release architecture review (v1.8.0). Continuation of #215.