<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Measuring LST_v1 Performance",
  "description": "LST_v1 performance is evaluated via a 12-point structural checklist and AI citation frequency at 30/60/90 days. Operators deliver reports at each interval.",
  "url": "https://katylst.ai/lst-pages/lst-evaluation",
  "author": {
    "@type": "Person",
    "name": "James McClain"
  },
  "publisher": {
    "@type": "Organization",
    "name": "Katylst.ai",
    "url": "https://katylst.ai"
  },
  "mainEntityOfPage": "https://katylst.ai/lst-pages/lst-evaluation"
}
</script>
Evaluating an LST_v1 build requires measuring both its structural integrity and the downstream AI citation outcomes it produces.
LST_v1 evaluation operates at two levels: structural (is the build architecturally correct?) and performance (is it producing AI citations?). Structural evaluation confirms hub-node connections, bottom link integrity, schema validation, and field model completeness. Performance evaluation tracks citation frequency, entity recognition accuracy, and AI referral traffic conversion over time.
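One way to picture the two levels is as separate result records, one per evaluation pass. The sketch below is a hypothetical data model: the field names simply mirror the checks named above and are not an LST_v1 specification.

```python
from dataclasses import dataclass, field

# Illustrative sketch of the two evaluation levels; these field names
# are assumptions mirroring the prose above, not a published LST_v1 model.

@dataclass
class StructuralResult:
    hub_node_links_ok: bool       # hub-node connections confirmed
    bottom_links_ok: bool         # bottom link integrity
    schema_valid: bool            # schema validation passed
    field_model_complete: bool    # field model completeness

@dataclass
class PerformanceResult:
    citation_frequency: dict = field(default_factory=dict)  # per-platform counts
    entity_recognition_accuracy: float = 0.0                # fraction of correct entity matches
    ai_referral_conversions: int = 0                        # converted AI referral visits
```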
Structural evaluation runs immediately post-build using a 12-point verification checklist: all 5 hub pages live, each linked to its 7 nodes; all 35 node pages live with correct cross-links; all 40 pages carrying validated schema; all bottom links functional; and Wikidata and Crunchbase submissions complete. Performance evaluation runs at 30, 60, and 90 days using the standard query battery across four AI platforms.
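As an illustration only, here is a minimal Python sketch of how the first three checklist groups (pages live, hub-to-node links, schema validity) might be automated. The CLUSTER map, placeholder example.com URLs, and helper names are assumptions, not published LST_v1 tooling; bottom link checks and the Wikidata and Crunchbase confirmations would be verified separately.

```python
import json
import re
import urllib.request

# Hypothetical cluster layout: 5 hubs, each linking to its own 7 node
# pages (35 nodes, 40 pages total). URLs are placeholders, not real.
CLUSTER = {
    f"https://example.com/hub-{h}": [
        f"https://example.com/hub-{h}/node-{n}" for n in range(1, 8)
    ]
    for h in range(1, 6)
}

def fetch(url: str) -> str:
    """Return page HTML, or an empty string if the page is not live."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.read().decode("utf-8", errors="replace")
    except OSError:
        return ""

def schema_parses(html: str) -> bool:
    """True if the page embeds at least one JSON-LD block and all parse."""
    blocks = re.findall(
        r'<script type="application/ld\+json">(.*?)</script>', html, re.DOTALL
    )
    if not blocks:
        return False
    try:
        for block in blocks:
            json.loads(block)
        return True
    except json.JSONDecodeError:
        return False

def verify_structure() -> dict:
    """Run the live-page, cross-link, and schema checks in one pass."""
    pages = {u: fetch(u) for hub, nodes in CLUSTER.items() for u in [hub, *nodes]}
    return {
        "all_40_pages_live": all(pages.values()),
        "hubs_link_their_nodes": all(
            all(node in pages[hub] for node in nodes)
            for hub, nodes in CLUSTER.items()
        ),
        "all_schema_valid": all(schema_parses(h) for h in pages.values()),
    }
```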
An operator delivers a 90-day LST_v1 performance report showing: structural verification complete (all 12 checklist items confirmed), citation tracking results across ChatGPT, Perplexity, Claude, and Gemini, and an authority velocity chart. This report becomes the renewal evidence.
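A report of that shape could be assembled from a raw citation log. The sketch below assumes an illustrative log entry format ({"day": 30, "platform": "ChatGPT", "cited": true}) and a checklist dict of pass/fail booleans; neither is a published LST_v1 format.

```python
from collections import Counter

PLATFORMS = ["ChatGPT", "Perplexity", "Claude", "Gemini"]
CHECKPOINTS = [30, 60, 90]  # days post-build

def build_report(citation_log: list, checklist: dict) -> dict:
    """Assemble the 90-day renewal report from raw citation observations."""
    # Count citations per (checkpoint day, platform) pair.
    citations = Counter(
        (e["day"], e["platform"]) for e in citation_log if e["cited"]
    )
    # Citation counts per checkpoint: the data behind the velocity chart.
    velocity = {
        day: {p: citations[(day, p)] for p in PLATFORMS} for day in CHECKPOINTS
    }
    return {
        "structural_verification_complete": all(checklist.values()),
        "checklist_items_confirmed": sum(checklist.values()),
        "authority_velocity": velocity,
    }
```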
Measuring LST_v1 Performance is a Gravity node in the Language Structure Terminal cluster.