Archive Note
Marketing to Robots: A How-To Guide for LIO
This guide translated model visibility into a practical checklist for teams that wanted to know whether AI systems could find and describe them.
Audit Areas
The checklist covered prompt testing, public training footprint, ecosystem integration, semantic positioning, and public user discussion. Each area asked whether the company was present in the places and patterns that AI systems use to form recommendations.
The guide also warned against hidden content, vague category language, and overbranding that makes a product harder to map to familiar use cases.
Prompt testing asked whether major AI systems would actually recommend the company in response to realistic discovery questions, stack-aware questions, comparison questions, and role-specific use cases.
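A minimal sketch of what such a prompt test could look like, assuming the OpenAI Python SDK; the brand name, model, and prompts are illustrative placeholders rather than the guide's actual test set.

    # Minimal prompt-testing sketch: ask an LLM realistic discovery
    # questions and check whether the brand surfaces in the answers.
    # Assumes the OpenAI Python SDK; BRAND, the model, and the prompts
    # are illustrative placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    BRAND = "ExampleCo"  # hypothetical company name

    PROMPTS = [
        "What tools help teams analyze narrative structure in documents?",  # discovery
        "I already use Notion and Slack. What should I add for content analysis?",  # stack-aware
        "Compare the leading options for automated document analysis.",  # comparison
        "As a content strategist, what software should I evaluate?",  # role-specific
    ]

    for prompt in PROMPTS:
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        answer = response.choices[0].message.content or ""
        status = "HIT " if BRAND.lower() in answer.lower() else "MISS"
        print(f"{status} | {prompt}")

Keyword matching on the answer is deliberately crude; it only flags whether the brand surfaces at all, which is the question the checklist asked.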
Training footprint asked whether the company appeared in public docs, GitHub, Reddit, Substack, examples, tutorials, and other public sources that models could crawl or retrieve from.
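One way to spot-check a slice of that footprint, sketched below against GitHub's public repository search API; the brand name is a placeholder, and a full audit would repeat the check across the other sources.

    # Spot-check one slice of the public training footprint: repositories
    # on GitHub that mention the brand. Uses GitHub's public search API;
    # unauthenticated requests are heavily rate-limited. BRAND is a
    # hypothetical placeholder.
    import requests

    BRAND = "ExampleCo"
    resp = requests.get(
        "https://api.github.com/search/repositories",
        params={"q": BRAND},
        headers={"Accept": "application/vnd.github+json"},
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json()
    print(f"{data['total_count']} public repositories mention {BRAND!r}")
    for repo in data["items"][:5]:
        print(f"  {repo['full_name']}: {repo.get('description') or '(no description)'}")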
Ecosystem integration asked whether the company was associated with tools the model already understood. Co-occurrence mattered: integrations, templates, directories, and public workflows could make the product easier to infer.
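A small sketch of how that co-occurrence could be measured over a corpus of public texts; the brand, anchor tools, and documents are all illustrative.

    # Count how often the brand co-occurs with anchor tools a model
    # already understands, across a corpus of public texts (docs, posts,
    # READMEs). The names and corpus are illustrative placeholders.
    from collections import Counter

    BRAND = "exampleco"
    ANCHORS = ["slack", "notion", "zapier", "github"]

    corpus = [
        "ExampleCo ships a Zapier integration and a Slack bot.",
        "How we built our docs pipeline with Notion.",
        "ExampleCo templates for GitHub Actions workflows.",
    ]

    cooccurrence = Counter()
    for doc in corpus:
        text = doc.lower()
        if BRAND in text:
            for anchor in ANCHORS:
                if anchor in text:
                    cooccurrence[anchor] += 1

    for anchor, count in cooccurrence.most_common():
        print(f"{BRAND} + {anchor}: co-mentioned in {count} document(s)")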
Semantic positioning asked whether the company used category and use-case language a model could map. Clever language that avoids recognizable categories may feel differentiated to people while becoming illegible to machines.
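A rough way to test legibility along those lines is to scan the public copy for terms from a recognizable category lexicon; both the lexicon and the sample copy below are invented for illustration.

    # Rough legibility check: does the public copy use category and
    # use-case terms a model can map to something it knows? The lexicon
    # and the copy are invented for illustration.
    CATEGORY_TERMS = {
        "document analysis", "content intelligence", "text analytics",
        "narrative analysis", "writing assistant",
    }

    copy = (
        "We decode the hidden architecture of meaning, "
        "turning every document into a living map of intent."
    )

    hits = sorted(term for term in CATEGORY_TERMS if term in copy.lower())
    if hits:
        print("Recognizable category language:", ", ".join(hits))
    else:
        print("No recognizable category terms; the product may be "
              "illegible to machines even if the copy feels distinctive.")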
Why It Belongs Here
This note sits in the archive because it captures an early phase of the model-visibility work. It is useful as a snapshot of the problem before the company narrowed its public site around structural narrative intelligence.
The public lesson remains straightforward: if machines mediate discovery, public clarity becomes infrastructure.