I’ve been building things with AI for a while now, and I’ve reached the point where I have more to share than fits in a tweet thread.
What I’m Building
LibTrails is a semantic book discovery platform. Give it a collection of books and it finds the hidden thematic connections between them — using LLM-powered topic extraction, vector embeddings, and graph-based community detection to organize tens of thousands of topics into navigable clusters.
The demo runs on 100 Project Gutenberg classics spanning 3,000 years of literature, and you can explore them in a 3D galaxy visualization where each star is a topic cluster.
Building it has been a deep dive into territory I find endlessly interesting: embeddings, RAG architectures, the Leiden algorithm for community detection, hybrid search with Reciprocal Rank Fusion, and the practical realities of running LLM pipelines at scale on modest hardware.
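Reciprocal Rank Fusion is simpler than its name suggests: each retriever contributes `1 / (k + rank)` per document, and the sums decide the merged order. This isn't LibTrails' actual code — just a minimal sketch, with made-up document IDs and the commonly used constant `k = 60`:

```python
def rrf_merge(rankings, k=60):
    """Fuse ranked lists of doc IDs via Reciprocal Rank Fusion.

    rankings: list of ranked lists, e.g. one from BM25, one from vector search.
    Each list contributes 1 / (k + rank) to a doc's score; higher total wins.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Illustrative only: a keyword ranking and a semantic ranking that
# mostly agree, fused into one list.
bm25_hits = ["odyssey", "iliad", "aeneid"]
vector_hits = ["odyssey", "aeneid", "metamorphoses"]
print(rrf_merge([bm25_hits, vector_hits]))
```

The appeal of RRF is that it needs no score normalization: BM25 scores and cosine similarities live on incompatible scales, but ranks are always comparable.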
What You’ll Find Here
This blog is where I’ll write about:
- The LibTrails build process — architecture decisions, what worked, what I’d do differently
- AI and LLMs in practice — Claude Code, prompt engineering, agentic workflows
- RAG and search — building retrieval pipelines, combining BM25 with vector search
- Graph algorithms — community detection, Louvain vs. Leiden, network visualization with UMAP
- Learning in public — the honest version, including the dead ends
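The community detection posts will dig into Louvain and Leiden properly; as a taste of the general idea, here is a label propagation sketch — a much simpler relative of those modularity-based methods, not what LibTrails uses. The graph, node IDs, and deterministic tie-breaking rule are all illustrative:

```python
def label_propagation(adj, rounds=10):
    """Toy community detection via asynchronous label propagation.

    adj: dict mapping node -> list of neighbour nodes (undirected graph).
    Each node starts in its own community, then repeatedly adopts the
    most common label among its neighbours until nothing changes.
    Ties break deterministically toward the larger label (illustrative
    choice; real implementations typically break ties randomly).
    """
    labels = {node: node for node in adj}
    for _ in range(rounds):
        changed = False
        for node in sorted(adj):
            counts = {}
            for nb in adj[node]:
                counts[labels[nb]] = counts.get(labels[nb], 0) + 1
            if not counts:
                continue
            best = max(counts, key=lambda lab: (counts[lab], lab))
            if labels[node] != best:
                labels[node] = best
                changed = True
        if not changed:
            break
    return labels

# Two 4-cliques joined by a single bridge edge (3-4): the algorithm
# should recover the two cliques as separate communities.
adj = {
    0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2, 4],
    4: [3, 5, 6, 7], 5: [4, 6, 7], 6: [4, 5, 7], 7: [4, 5, 6],
}
print(label_propagation(adj))
```

Louvain and Leiden improve on this by optimizing modularity directly, and Leiden adds a refinement step that guarantees well-connected communities — which is exactly the property that matters when the clusters become stars in a galaxy visualization.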
Why Now
I’ve been using Claude Code extensively to build LibTrails, and the experience has fundamentally changed how I think about software development. There’s a lot to unpack there — both the capabilities and the limitations — and I think the best way to process it is to write about it.
More soon.