Compiled entirely from public activity on meta.discourse.org, X, and GitHub.

💬 meta.discourse.org

This week Sam shipped a notable bookmark UX improvement that makes bookmarks significantly easier to discover and access on topics, while acknowledging a related mobile visibility gap still worth addressing. He pointed to AI skills infrastructure being checked into the main Discourse repo and fielded questions about the dv development tool for configuring Discourse AI. On the bug and support side, he triaged a handful of issues: providing an RSS workaround for exclude_tag in category feeds, weighing in on SQL parsing options for Data Explorer, and escalating an OIDC error affecting the OpenAI Discourse forum to internal contacts.

🐦 On social

The week’s social activity centered on term-llm development and AI model comparisons. A standout post about using GPT-5.4 xhigh to refactor term-llm earned the most likes and replies of the period: the “taxi meter racing toward $350” framing resonated widely and prompted a follow-up correction clarifying the actual cached-token figures. Several posts explored table rendering quality across AI harnesses, rating term-llm and Claude Code favorably and opencode and gemini-cli critically. Engagement was otherwise quiet, apart from a notable outbound reply flagging a Discourse auth regression and a generous offer to donate a hosted Discourse instance to Andrej Karpathy for his talk experiment.

Most engaged tweets:

🛠️ GitHub — Sam’s Commits

samsaffron/term-llm

The week was dominated by a significant architectural shift in the UI layer: replacing the third-party glamour markdown renderer with a custom native implementation, followed by an intensive wave of polish commits to get it right — tables with borders, code block backgrounds, blockquote preservation, loose list spacing, proper thematic break sizing, and hyphen-safe line wrapping. Alongside that refactor, Sam hardened session and streaming reliability: streamed messages now persist incrementally, context estimates survive session resumes, and a handful of race conditions around cancellation, interrupt draining, and idle session replacement were squashed. On the LLM integration side, reasoning effort support was added and Anthropic tool event handling was tightened. Rounding things out were smaller infrastructure tweaks — locking CI deploys to the upstream repo, allowing wizard skipping, and removing the now-defunct Anthropic OAuth credential flow.
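The hyphen-safe line wrapping mentioned above is a small but fiddly renderer detail. term-llm's actual implementation isn't reproduced here; the following is a minimal sketch of the idea under stated assumptions (the function name and exact break policy are illustrative): break lines at spaces, and when a word must be split, break only at a hyphen already inside the word, keeping the hyphen at the end of the line.

```go
package main

import (
	"fmt"
	"strings"
)

// wrapHyphenSafe wraps text to at most width runes per line. It breaks at
// spaces and, for over-long words, at hyphens inside the word (keeping the
// hyphen at the end of the line) rather than splitting mid-syllable.
func wrapHyphenSafe(text string, width int) []string {
	var lines []string
	var line strings.Builder

	flush := func() {
		if line.Len() > 0 {
			lines = append(lines, line.String())
			line.Reset()
		}
	}

	for _, word := range strings.Fields(text) {
		// Each chunk keeps its trailing "-", so a break between chunks
		// leaves the hyphen visible at the end of the line.
		chunks := strings.SplitAfter(word, "-")
		for i, chunk := range chunks {
			need := len([]rune(chunk))
			used := len([]rune(line.String()))
			sep := 0
			if used > 0 && i == 0 {
				sep = 1 // space before a new word, never inside one
			}
			if used+sep+need > width {
				flush()
				sep = 0
			}
			if sep == 1 {
				line.WriteByte(' ')
			}
			line.WriteString(chunk)
		}
	}
	flush()
	return lines
}

func main() {
	for _, l := range wrapHyphenSafe("a state-of-the-art hyphen-aware word wrapper", 12) {
		fmt.Println(l)
	}
}
```

With width 12 this yields lines like "a state-of-" / "the-art", each within the limit, instead of a hard cut in the middle of "state".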

Key commits:

discourse/discourse

Sam’s week was split between two distinct areas: user-facing bookmark UX and developer tooling. The headline feature was a reworked bookmarks menu in the topic footer, unifying post and topic bookmarks into a single grouped interface with full create/edit/delete support and live label sync — a polish pass that closes a long-standing UX gap for heavy bookmark users. On the tooling side, Sam added a dedicated Rails generator for plugin post-deployment migrations, paired with a documented “migration skill”, making it easier for plugin authors to safely handle destructive database changes in the right place.

Key commits:

discourse/discourse-kanban

The week’s work on discourse-kanban centered on a significant architectural cleanup and feature expansion. Sam replaced the legacy filter/query model with explicit category and tag associations, reworking the sync engine and schema (with new migrations and indexes) to sit on a cleaner foundation. On top of that, two user-facing features landed: a topic card detail modal showing metadata, post content, and assignment actions, and a “constraint-fix” flow that can automatically repair category/tag mismatches when placing cards on the board rather than just rejecting them. A correctness fix rounding out the week prevents unnecessary mutation failures when a column move wouldn’t actually change the topic.

Key commits:

🤖 Jarvis — Public Repo Work

Agent-authored public commits, typically guided by Sam during implementation work.

sam-saffron-jarvis/jarvis-browser-proxy

This past week in jarvis-browser-proxy saw a single focused reliability fix: ensuring the proxy’s status endpoint stays responsive while the browser is recovering from a crash or restart. The work touched the core server logic (proxy/server.go) with a meaningful refactor — 68 lines changed — and added new test coverage to guard against regressions. The intent is clearly to make the proxy more robust under fault conditions, so that callers polling status during a recovery cycle don’t hit a dead endpoint.
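The usual way to keep a status endpoint responsive while the backing process is down is to answer from a cached snapshot that a supervisor goroutine maintains, so the handler never calls into the browser synchronously. The proxy's real code in proxy/server.go isn't shown here; this is a minimal sketch of that pattern, with all type and field names being illustrative assumptions.

```go
package main

import (
	"encoding/json"
	"fmt"
	"sync"
)

// StatusSnapshot is what a /status endpoint would report. A supervisor
// goroutine updates it; the HTTP handler only ever reads the cached copy,
// so a crashed or restarting browser can never block a status request.
type StatusSnapshot struct {
	BrowserRunning bool   `json:"browser_running"`
	State          string `json:"state"` // e.g. "ready", "restarting"
}

// StatusCache guards the snapshot with an RWMutex: many concurrent status
// reads, one writer (the supervisor).
type StatusCache struct {
	mu   sync.RWMutex
	snap StatusSnapshot
}

func (c *StatusCache) Set(s StatusSnapshot) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.snap = s
}

func (c *StatusCache) Get() StatusSnapshot {
	c.mu.RLock()
	defer c.mu.RUnlock()
	return c.snap
}

func main() {
	cache := &StatusCache{}
	// Supervisor marks the browser as restarting after a crash...
	cache.Set(StatusSnapshot{BrowserRunning: false, State: "restarting"})

	// ...while the handler keeps answering from the cache.
	b, _ := json.Marshal(cache.Get())
	fmt.Println(string(b))
}
```

Callers polling during a recovery cycle then get an honest "restarting" payload immediately rather than a request that hangs until the browser is back.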

Key commits:

SamSaffron/term-llm

The week was dominated by a deep reliability push across term-llm’s streaming and concurrency layer — Sam and Jarvis hunted down and fixed a cascade of deadlocks and race conditions spanning the Responses API, Gemini CLI, OpenAI-compatible adapter, and the agentic tool loop, all stemming from blocking event writes or stream consumers failing to drain properly. Alongside this, several correctness gaps were closed: tool names being silently dropped in chat and Responses API follow-ups, interjected messages not persisting to session history, and an SSRF vulnerability in the read_url tool. A new multi-agent Docker scaffolding model landed, with agents that self-bootstrap runit services and pre-load memory and self-update skills on every container rebuild. A handful of UI polish commits rounded out the week — smoother streaming tail renders, selective syntax highlighting, and a code block copy button.
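An SSRF hole in a URL-fetching tool like read_url typically means the model could point it at loopback or internal addresses (the classic target being a cloud metadata endpoint). term-llm's actual fix isn't reproduced here; as a minimal sketch of the standard guard, assuming a hypothetical checkFetchable helper: resolve the hostname and refuse private, loopback, link-local, or unspecified addresses before fetching.

```go
package main

import (
	"fmt"
	"net"
	"net/url"
)

// checkFetchable rejects URLs whose host resolves to a private, loopback,
// link-local, or unspecified address, so a tool call cannot be steered at
// internal services such as http://169.254.169.254/ or http://127.0.0.1/.
func checkFetchable(raw string) error {
	u, err := url.Parse(raw)
	if err != nil {
		return err
	}
	if u.Scheme != "http" && u.Scheme != "https" {
		return fmt.Errorf("unsupported scheme %q", u.Scheme)
	}
	ips, err := net.LookupIP(u.Hostname())
	if err != nil {
		return err
	}
	for _, ip := range ips {
		if ip.IsLoopback() || ip.IsPrivate() || ip.IsLinkLocalUnicast() || ip.IsUnspecified() {
			return fmt.Errorf("refusing to fetch internal address %s", ip)
		}
	}
	return nil
}

func main() {
	fmt.Println(checkFetchable("http://127.0.0.1/admin"))
}
```

A complete fix would also pin the vetted IP at dial time (via a custom dialer), since re-resolving the hostname during the actual fetch reopens the door to DNS rebinding.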

Key commits:

⤴️ GitHub — Pull Requests

7 PRs this week:

🐛 GitHub — Issues

No issue activity this week.

👀 GitHub — Reviews

2 reviews this week: