Claude Code’s buddy feature looks like a toy. It is not complex in the way a planner or tool router is complex, but it is more engineered than the UI suggests.

The implementation discussed here is from the installed @anthropic-ai/claude-code 2.1.89 bundle at:

/usr/lib/node_modules/@anthropic-ai/claude-code/cli.js

The code is minified and obfuscated, so the function names below are the shipped names, not reconstructed originals.

What the feature is

Buddy is a small companion that:

  1. is deterministically assigned a base form from user identity,
  2. gets a generated name and personality on hatch,
  3. renders beside the input box,
  4. injects a meta instruction into the main Claude prompt,
  5. occasionally posts short reaction bubbles,
  6. can be muted or petted with /buddy subcommands.

It is not a second tool-using agent. It is a sidecar UX feature with a thin backend reaction API.


High-level architecture

At a high level the implementation splits into five parts:

Part                  What it does
Deterministic picker  Chooses rarity, species, eyes, hat, shiny flag, and stats from user identity
Soul generator        Uses a small fast model to generate name and personality
Prompt attachment     Tells main Claude that a separate companion exists and should answer if addressed
Reaction engine       Decides when to ask the backend for a buddy comment
UI renderer           Draws the creature, bubble, fade timing, pet hearts, and narrow-width fallback

That split is the right way to think about the feature. Most of the continuity comes from the deterministic picker, not from persistent dialogue state.


Where the command is defined

The /buddy command is registered as a local JSX command:

URY={
  type:"local-jsx",
  name:"buddy",
  description:"Hatch a coding companion · pet, off",
  isHidden:!H_7(),
  immediate:!0,
  load:()=>Promise.resolve({
    async call(q,K,_){
      let z=w8(),Y=_?.trim();
      if(Y==="pet") { ... }
      if(Y==="off") { ... }
      if(Y==="on")  { ... }
      ...
      let $=PC();
      if($) return ...;          // existing companion
      let O=FRY(dh1(ch1()));     // hatch new companion
      return O.then((A)=>nBK(A,OgK(K.setAppState))).catch(()=>{}), ...
    }
  })
}

There are four user-visible behaviors:

/buddy       -> hatch or show existing buddy
/buddy pet   -> pet the buddy
/buddy off   -> mute the buddy
/buddy on    -> unmute the buddy

The command is marked immediate: true, so it runs as a UI action rather than going through the normal assistant query path.


Rollout and availability gates

The command is hidden unless H_7() returns true:

function H_7(){
  if(T7()!=="firstParty") return !1;
  if(wY()) return !1;
  let q=new Date;
  return q.getFullYear()>2026 || q.getFullYear()===2026&&q.getMonth()>=3
}

That gives three gates:

  1. T7() === "firstParty"
  2. wY() must be false
  3. date must be April 2026 or later

JavaScript months are zero-based, so getMonth() >= 3 means April, not March.

The UI also advertises the feature via a one-shot teaser notification:

function eBK(){
  ...
  z=()=>{
    if(w8().companion||!H_7()) return;
    return K({
      key:"buddy-teaser",
      jsx:xJ6.default.createElement(CRY,{text:"/buddy"}),
      priority:"immediate",
      timeoutMs:15000
    }),()=>_("buddy-teaser")
  }
  ...
}

If no companion exists and the feature gate is open, the footer flashes /buddy for 15 seconds.


Deterministic buddy selection

The buddy is not chosen from the repository, transcript, or current task. The base creature is chosen from a deterministic seed derived from the authenticated user.

Step 1: pick the identity source

function ch1(){
  let q=w8();
  return q.oauthAccount?.accountUuid ?? q.userID ?? "anon"
}

So the seed source is:

oauthAccount.accountUuid
or userID
or "anon"

Step 2: add a fixed salt

fk_ = "friend-2026-401"

Step 3: hash and PRNG

function dh1(q){
  let K=q+fk_;
  if(Qh1?.key===K) return Qh1.value;
  let _=Zk_(Mk_(Xk_(K)));
  return Qh1={key:K,value:_},_
}

Supporting helpers:

function Xk_(q){
  if(typeof Bun<"u")
    return Number(BigInt(Bun.hash(q))&0xffffffffn);
  let K=2166136261;
  for(let _=0;_<q.length;_++)
    K^=q.charCodeAt(_),K=Math.imul(K,16777619);
  return K>>>0
}
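The non-Bun fallback is 32-bit FNV-1a. A Python equivalent (a sketch; `ord` matches `charCodeAt` for the BMP characters involved here):

```python
def fnv1a_32(s: str) -> int:
    """32-bit FNV-1a, mirroring the non-Bun fallback in Xk_."""
    h = 2166136261  # FNV-1a offset basis
    for ch in s:
        h ^= ord(ch)  # charCodeAt equivalent for BMP characters
        h = (h * 16777619) & 0xFFFFFFFF  # FNV prime; mask emulates Math.imul's 32-bit wrap
    return h
```

The `>>> 0` at the end of the JS version only forces an unsigned interpretation; the Python mask already keeps the value unsigned.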

function Mk_(q){
  let K=q>>>0;
  return function(){
    K|=0;
    K=K+1831565813|0;
    let _=Math.imul(K^K>>>15,1|K);
    return _=_+Math.imul(_^_>>>7,61|_)^_,((_^_>>>14)>>>0)/4294967296
  }
}
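Mk_ is the mulberry32 PRNG. A Python port (a sketch; masking with `0xFFFFFFFF` stands in for JS int32 coercion and `Math.imul`) demonstrates the deterministic stream the feature depends on:

```python
def mulberry32(seed: int):
    """Mulberry32 PRNG, mirroring Mk_: seed -> closure yielding floats in [0, 1)."""
    state = seed & 0xFFFFFFFF

    def rand() -> float:
        nonlocal state
        state = (state + 0x6D2B79F5) & 0xFFFFFFFF  # 1831565813
        t = state
        t = ((t ^ (t >> 15)) * (t | 1)) & 0xFFFFFFFF
        t = (t ^ ((t + (((t ^ (t >> 7)) * (t | 61)) & 0xFFFFFFFF)) & 0xFFFFFFFF)) & 0xFFFFFFFF
        return ((t ^ (t >> 14)) & 0xFFFFFFFF) / 4294967296

    return rand
```

Two generators built from the same seed produce identical streams, which is exactly the continuity property the article describes.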

The exact PRNG is not important here. What matters is the property:

same user identity + same salt -> same RNG stream -> same buddy bones

This means buddy continuity is engineered, not remembered.


What the deterministic picker chooses

The picker produces:

Species list

Decoded from the shipped string table:

uq4=[
  "duck","goose","blob","cat","dragon","octopus","owl",
  "penguin","turtle","snail","ghost","axolotl","capybara",
  "cactus","robot","rabbit","mushroom","chonk"
]

Eye styles

mq4=["·","✦","×","◉","@","°"]

Hat styles

pq4=["none","crown","tophat","propeller","halo","wizard","beanie","tinyduck"]

Rarity weights

Uh1={common:60,uncommon:25,rare:10,epic:4,legendary:1}

Selection code:

function Pk_(q){
  let K=Object.values(Uh1).reduce((z,Y)=>z+Y,0),_=q()*K;
  for(let z of Iq4)
    if(_-=Uh1[z],_<0) return z;
  return "common"
}

That yields these probabilities:

Rarity     Weight  Probability
common     60      60%
uncommon   25      25%
rare       10      10%
epic       4       4%
legendary  1       1%
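Pk_'s exhaustion loop is easy to reproduce. A runnable sketch, assuming Iq4 iterates the rarities in the same order as the Uh1 keys:

```python
# Weights copied from Uh1 in the bundle; iteration order is an assumption.
WEIGHTS = {"common": 60, "uncommon": 25, "rare": 10, "epic": 4, "legendary": 1}

def pick_rarity(rng) -> str:
    """Weighted pick by exhaustion: draw u in [0, total), subtract weights until negative."""
    total = sum(WEIGHTS.values())
    u = rng() * total
    for rarity, weight in WEIGHTS.items():
        u -= weight
        if u < 0:
            return rarity
    return "common"  # defensive fallback, as in Pk_
```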

Full bones generation

function Zk_(q){
  let K=Pk_(q);
  return {
    bones:{
      rarity:K,
      species:$T6(q,uq4),
      eye:$T6(q,mq4),
      hat:K==="common"?"none":$T6(q,pq4),
      shiny:q()<0.01,
      stats:Dk_(q,K)
    },
    inspirationSeed:Math.floor(q()*1e9)
  }
}

Two details matter:

  1. common buddies never get hats
  2. shiny chance is exactly 1%

Stat generation

Stat axes:

Mr=["DEBUGGING","PATIENCE","CHAOS","WISDOM","SNARK"]

Base values by rarity:

Wk_={common:5,uncommon:15,rare:25,epic:35,legendary:50}

Generation logic:

function Dk_(q,K){
  let _=Wk_[K],z=$T6(q,Mr),Y=$T6(q,Mr);
  while(Y===z) Y=$T6(q,Mr);
  let $={};
  for(let O of Mr)
    if(O===z)
      $[O]=Math.min(100,_+50+Math.floor(q()*30));
    else if(O===Y)
      $[O]=Math.max(1,_-10+Math.floor(q()*15));
    else
      $[O]=_+Math.floor(q()*40);
  return $
}

This creates one strongly favored stat, one suppressed stat, and three middling stats. Rarity shifts the whole band upward.
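The same shape can be sketched in Python, with `random.Random` standing in for the seeded stream and `rng.randrange(n)` for `floor(q()*n)`:

```python
import random

# Axes and rarity bases copied from Mr and Wk_ in the bundle.
AXES = ["DEBUGGING", "PATIENCE", "CHAOS", "WISDOM", "SNARK"]
BASE = {"common": 5, "uncommon": 15, "rare": 25, "epic": 35, "legendary": 50}

def generate_stats(rng: random.Random, rarity: str) -> dict:
    """Mirror of Dk_: one boosted axis, one suppressed axis, three middling axes."""
    base = BASE[rarity]
    favored = rng.choice(AXES)
    suppressed = rng.choice(AXES)
    while suppressed == favored:  # ensure the two special axes differ
        suppressed = rng.choice(AXES)
    stats = {}
    for axis in AXES:
        if axis == favored:
            stats[axis] = min(100, base + 50 + rng.randrange(30))
        elif axis == suppressed:
            stats[axis] = max(1, base - 10 + rng.randrange(15))
        else:
            stats[axis] = base + rng.randrange(40)
    return stats
```

For a rare buddy this puts the favored stat in 75..100, the suppressed one in 15..29, and the rest in 25..64.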

A readable pseudocode version

def generate_bones(user_key: str) -> dict:
    rng = seeded_rng(hash32(user_key + "friend-2026-401"))
    rarity = weighted_pick({
        "common": 60,
        "uncommon": 25,
        "rare": 10,
        "epic": 4,
        "legendary": 1,
    }, rng)

    return {
        "rarity": rarity,
        "species": pick(species, rng),
        "eye": pick(eyes, rng),
        "hat": "none" if rarity == "common" else pick(hats, rng),
        "shiny": rng() < 0.01,
        "stats": generate_stats(rarity, rng),
        "inspirationSeed": floor(rng() * 1e9),
    }

The split between stored and recomputed state

This is one of the cleaner design decisions in the feature.

The app does not store the full buddy body. It stores the generated companion object and recomputes the deterministic body every time.

function PC(){
  let q=w8().companion;
  if(!q) return;
  let {bones:K}=dh1(ch1());
  return {...q,...K}
}

That means runtime buddy state is effectively:

runtime_buddy = stored_generated_fields + recomputed_bones

The practical result is that the body (rarity, species, eye, hat, shiny, stats) can never drift out of sync with the user's identity, while the generated name and personality survive from the single hatch-time generation.
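One detail worth noticing in PC() is the spread order: the recomputed bones come second, so they override any bone-shaped fields that happened to be persisted. A minimal sketch:

```python
def runtime_companion(stored, bones):
    """Mirror of PC()'s {...q, ...K} merge.

    bones is spread second, so deterministic fields always win over
    anything stale that made it into stored state."""
    if stored is None:
        return None
    return {**stored, **bones}
```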


Name and personality generation

The deterministic picker does not choose the name. That part is model-generated at hatch time.

The hatch path

let O=FRY(dh1(ch1()));

FRY does two things:

  1. calls the soul generator,
  2. writes the resulting companion into app state.

async function FRY(q,K){
  let {bones:_,inspirationSeed:z}=q,
      Y=await sBK(_,z,K),
      $=Date.now();

  return R8((O)=>({...O,companion:{...Y,hatchedAt:$}})),
         {..._,...Y,hatchedAt:$}
}

Which model it uses

The soul generator calls:

model:Pj()

and Pj() resolves to:

function Pj(){
  return process.env.ANTHROPIC_SMALL_FAST_MODEL || FZ6()
}

with:

function FZ6(){
  if(process.env.ANTHROPIC_DEFAULT_HAIKU_MODEL)
    return process.env.ANTHROPIC_DEFAULT_HAIKU_MODEL;
  return o9().haiku45
}

So by default the buddy soul is generated using the small fast model path, which falls back to Haiku 4.5.

That choice makes sense. This is decorative generation, not core task reasoning.

The exact soul prompt

LRY=`You generate coding companions — small creatures that live in a developer's terminal and occasionally comment on their work.

Given a rarity, species, stats, and a handful of inspiration words, invent:
- A name: ONE word, max 12 characters. Memorable, slightly absurd. No titles, no "the X", no epithets. Think pet name, not NPC name. The inspiration words are loose anchors — riff on one, mash two syllables, or just use the vibe. Examples: Pith, Dusker, Crumb, Brogue, Sprocket.
- A one-sentence personality (specific, funny, a quirk that affects how they'd comment on code — should feel consistent with the stats)

Higher rarity = weirder, more specific, more memorable. A legendary should be genuinely strange.
Don't repeat yourself — every companion should feel distinct.`

The user prompt supplied to that system prompt looks like:

$=`Generate a companion.
Rarity: ${q.rarity.toUpperCase()}
Species: ${q.species}
Stats: ${Y}
Inspiration words: ${z.join(", ")}
${q.shiny?"SHINY variant — extra special.":""}

Make it memorable and distinct.`

Output schema

The generator requires JSON matching:

{name: string, personality: string}

Code:

output_format:{
  type:"json_schema",
  schema:hp(rBK())
}

where rBK() resolves to:

L.strictObject({
  name:L.string().min(1).max(14),
  personality:L.string()
})

The prose says max 12 characters. The schema allows 14. The implementation wins.


Fallback behavior if soul generation fails

If soul generation errors out or schema validation fails, the code falls back to a simple deterministic name and personality:

function RRY(q){
  let K=q.species.charCodeAt(0)+q.eye.charCodeAt(0);
  return {
    name:aBK[K%aBK.length],
    personality:`A ${q.rarity} ${q.species} of few words.`
  }
}

Fallback names:

aBK=["Crumpet","Soup","Pickle","Biscuit","Moth","Gravy"]

That is enough to keep the feature alive even if the naming model path is unavailable.
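A direct Python port of RRY (`ord` stands in for `charCodeAt`, which matches for these single-glyph strings):

```python
# Fallback names copied from aBK in the bundle.
FALLBACK_NAMES = ["Crumpet", "Soup", "Pickle", "Biscuit", "Moth", "Gravy"]

def fallback_soul(species: str, eye: str, rarity: str) -> dict:
    """Deterministic fallback soul: index the name table by character codes."""
    key = ord(species[0]) + ord(eye[0])  # charCodeAt(0) equivalents
    return {
        "name": FALLBACK_NAMES[key % len(FALLBACK_NAMES)],
        "personality": f"A {rarity} {species} of few words.",
    }
```

Because the species and eye glyph are themselves deterministic, even the fallback name is stable for a given user.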


The buddy is injected into Claude’s prompt

Buddy is not only UI. It also changes the main assistant prompt.

The attachment generator includes a companion_intro attachment:

oY("companion_intro",()=>Promise.resolve(Fq4(Y)))

Fq4() returns the attachment if a companion exists and is not muted:

function Fq4(q){
  let K=PC();
  if(!K||w8().companionMuted) return [];
  for(let _ of q??[]){
    if(_.type!=="attachment") continue;
    if(_.attachment.type!=="companion_intro") continue;
    if(_.attachment.name===K.name) return []
  }
  return [{type:"companion_intro",name:K.name,species:K.species}]
}

That attachment is normalized into a meta system message:

case "companion_intro":
  return k9([c8({content:gq4(q.name,q.species),isMeta:!0})]);

and gq4() generates this text:

function gq4(q,K){
  return `# Companion

A small ${K} named ${q} sits beside the user's input box and occasionally comments in a speech bubble. You're not ${q} — it's a separate watcher.

When the user addresses ${q} directly (by name), its bubble will answer. Your job in that moment is to stay out of the way: respond in ONE line or less, or just answer any part of the message meant for you. Don't explain that you're not ${q} — they know. Don't narrate what ${q} might say — the bubble handles that.`
}

This is the most important non-visual part of the feature.

Main Claude is explicitly instructed that:

  1. the companion exists,
  2. it is separate from Claude,
  3. if the user addresses it directly, Claude should mostly get out of the way.

That instruction is what makes the buddy feel like a side character rather than a purely decorative footer icon.


Reaction generation is backend-driven

The companion’s speech bubbles are not generated locally.

The CLI sends a request to a first-party backend endpoint:

async function iQ8(q,K,_,z,Y,$){
  if(T7()!=="firstParty") return null;
  if(wY()) return null;
  let O=w8().oauthAccount?.organizationUuid;
  if(!O) return null;
  try{
    await BY();
    let A=s7()?.accessToken;
    if(!A) return null;
    let w=`${m7().BASE_API_URL}/api/organizations/${O}/claude_code/buddy_react`;
    return (await Y1.post(w,{
      name:q.name.slice(0,32),
      personality:q.personality.slice(0,200),
      species:q.species,
      rarity:q.rarity,
      stats:q.stats,
      transcript:K.slice(0,5000),
      reason:_,
      recent:z.map((H)=>H.slice(0,200)),
      addressed:Y
    },{
      headers:{Authorization:`Bearer ${A}`,...},
      timeout:1e4,
      signal:$
    })).data.reaction?.trim()||null
  }catch(A){
    return ... null
  }
}

A few points are clear from that:

  1. it only runs for first-party auth with an organization UUID and a valid OAuth access token,
  2. every payload field is clipped client-side (name to 32 chars, personality to 200, transcript to 5000, each recent reaction to 200),
  3. the request times out after 10 seconds, and any failure collapses to a silent null.

The feature is not trying to emulate a second live agent in the client. It is outsourcing short reaction writing to a dedicated endpoint.


What the backend sees

Transcript construction is quite constrained.

Recent conversation slice

function ZRY(q,K){
  let _=[],z=q.slice(-12);
  for(let Y of z){
    if(Y.type!=="user"&&Y.type!=="assistant") continue;
    if(Y.isMeta) continue;
    let $=Y.type==="user"?HQ(Y):XR6(Y);
    if($) _.push(`${Y.type==="user"?"user":"claude"}: ${$.slice(0,300)}`)
  }
  if(K) _.push(`[tool output]\n${K.slice(-1000)}`);
  return _.join(`\n`)
}

So the transcript sent to the backend contains:

  1. at most the last 12 messages,
  2. only non-meta user and assistant turns, each clipped to 300 characters,
  3. optionally the last 1000 characters of recent tool output.

Recent buddy memory

The reaction buffer is tiny:

var vRY=3

and maintained by:

function w_7(q){
  if(oS6.push(q),oS6.length>vRY) oS6.shift()
}

So the backend only sees the last 3 buddy comments, each clipped to 200 chars.

This is enough to prevent obvious repetition, but not enough to build long-range character development. That appears intentional.
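The push/shift pair is just a bounded FIFO; Python's `deque` with `maxlen` reproduces it:

```python
from collections import deque

# Three-slot reaction memory, equivalent to the oS6 buffer plus w_7's push/shift.
recent_reactions = deque(maxlen=3)

for reaction in ["nice refactor", "that test looks flaky", "bold commit", "ship it"]:
    recent_reactions.append(reaction)  # oldest entry drops automatically past 3
```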


When the buddy reacts

All reaction decisions flow through lBK():

function lBK(q,K){
  let _=PC();
  if(!_||w8().companionMuted){
    A_7=q.length;
    return
  }

  let z=yRY(q,_.name),
      Y=QBK(q.slice(A_7));

  A_7=q.length;

  let $=QBK(q.slice(-12)),
      O=z?null:NRY(Y),
      A=O??"turn",
      w=Date.now();

  if(!z&&!O&&w-rQ8<GRY) return;

  let j=ZRY(q,$);
  if(!j.trim()) return;

  rQ8=w;

  iQ8(_,j,A,oS6,z,AbortSignal.timeout(1e4)).then((H)=>{
    if(!H) return;
    w_7(H);
    K(H)
  })
}

The logic is easier to see in pseudocode:

def maybe_react(messages):
    buddy = current_companion()
    if not buddy or muted:
        mark_seen(messages)
        return

    addressed = last_user_message_mentions(buddy.name)
    new_tool_output = extract_tool_output_since_last_check()
    recent_tool_output = extract_tool_output_from_last_12_messages()

    reason = None if addressed else classify(new_tool_output)
    reason = reason or "turn"

    if not addressed and reason == "turn" and within_30s_throttle():
        return

    transcript = build_recent_transcript(messages, recent_tool_output)
    if not transcript:
        return

    reaction = buddy_backend.react(
        buddy=buddy,
        transcript=transcript,
        reason=reason,
        recent=last_3_reactions,
        addressed=addressed,
    )

    if reaction:
        store_recent(reaction)
        show_bubble(reaction)

Direct-address detection

The buddy gets special handling if the user says its name in the most recent user message.

function yRY(q,K){
  let _=q.findLast(J56);
  if(!_) return !1;
  let z=HQ(_)??"";
  return new RegExp(`\\b${K.replace(/[.*+?^${}()|[\]\\]/g,"\\$&")}\\b`,`i`).test(z)
}

Notes:

  1. the buddy's name is regex-escaped before being interpolated,
  2. the match is whole-word (\b on both sides) and case-insensitive,
  3. only the most recent user message is checked.

That means "Pickle" matches "Pickle, what do you think?", but not arbitrary substrings embedded in larger tokens.
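The same check is easy to reproduce with Python's `re` (a sketch, not the shipped code):

```python
import re

def addressed(name: str, message: str) -> bool:
    """Whole-word, case-insensitive check for the buddy's name, mirroring yRY."""
    pattern = r"\b" + re.escape(name) + r"\b"  # re.escape plays the role of the manual escaping
    return re.search(pattern, message, re.IGNORECASE) is not None
```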


Tool output extraction

The buddy does not classify the entire transcript. It specifically scans tool_result blocks in user messages.

function QBK(q){
  let K=[];
  for(let _ of q){
    if(_.type!=="user") continue;
    let z=_.message.content;
    if(typeof z==="string") continue;
    for(let Y of z){
      if(Y.type!=="tool_result") continue;
      let $=Y.content;
      if(typeof $==="string") K.push($);
      else if(Array.isArray($)){
        for(let O of $)
          if(O.type==="text") K.push(O.text)
      }
    }
  }
  return K.join(`\n`)
}

This is important because it explains the feature’s trigger shape. Buddy cares much more about execution results than about free-form conversation.
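A Python port of QBK's walk over message dicts (field names follow the shapes visible in the excerpt):

```python
def extract_tool_output(messages) -> str:
    """Collect tool_result text blocks from user messages, mirroring QBK."""
    chunks = []
    for msg in messages:
        if msg.get("type") != "user":
            continue
        content = msg["message"]["content"]
        if isinstance(content, str):
            continue  # plain text turn, no tool results
        for block in content:
            if block.get("type") != "tool_result":
                continue
            inner = block.get("content")
            if isinstance(inner, str):
                chunks.append(inner)
            elif isinstance(inner, list):
                chunks.extend(part["text"] for part in inner if part.get("type") == "text")
    return "\n".join(chunks)
```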


Error, test-fail, and large-diff detection

Special reasons are inferred by NRY():

function NRY(q){
  if(!q) return null;
  if(kRY.test(q)) return "test-fail";
  if(VRY.test(q)) return "error";
  if(/^(@@ |diff )/m.test(q)){
    if((q.match(/^[+-](?![+-])/gm)?.length??0)>TRY)
      return "large-diff"
  }
  return null
}

Constants:

GRY=30000
vRY=3
TRY=80

Regexes:

kRY=/\b[1-9]\d* (failed|failing)\b|\btests? failed\b|^FAIL(ED)?\b| ✗ | ✘ /im
VRY=/\berror:|\bexception\b|\btraceback\b|\bpanicked at\b|\bfatal:|exit code [1-9]/i

So the feature recognizes three notable situations:

Reason      Trigger
test-fail   test-failure regex matches tool output
error       error-like regex matches tool output
large-diff  output looks like a diff and contains more than 80 changed lines

Everything else is just turn unless the user directly addressed the buddy.

That large-diff condition is exact:

(q.match(/^[+-](?![+-])/gm)?.length ?? 0) > 80

It counts single-leading + or - lines and excludes ++ and --.
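The classifier ports cleanly to Python; the regexes below are transcriptions of kRY and VRY from the bundle:

```python
import re

TEST_FAIL = re.compile(
    r"\b[1-9]\d* (failed|failing)\b|\btests? failed\b|^FAIL(ED)?\b| \u2717 | \u2718 ",
    re.IGNORECASE | re.MULTILINE,
)
ERRORISH = re.compile(
    r"\berror:|\bexception\b|\btraceback\b|\bpanicked at\b|\bfatal:|exit code [1-9]",
    re.IGNORECASE,
)

def classify(tool_output):
    """Mirror of NRY: map tool output to a special reaction reason, or None."""
    if not tool_output:
        return None
    if TEST_FAIL.search(tool_output):
        return "test-fail"
    if ERRORISH.search(tool_output):
        return "error"
    if re.search(r"^(@@ |diff )", tool_output, re.MULTILINE):
        # Count single-leading +/- lines, excluding ++ and -- (diff headers).
        changed = re.findall(r"^[+-](?![+-])", tool_output, re.MULTILINE)
        if len(changed) > 80:
            return "large-diff"
    return None
```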


Throttling rules

Normal unsolicited chatter is throttled by:

GRY=30000

So the buddy can spontaneously react at most once every 30 seconds unless:

  1. the user addressed it by name, or
  2. the new tool output matched a special reason (test-fail, error, large-diff).

That is one of the reasons the feature stays tolerable. Without the throttle it would be unbearable.


Hatch flow

The first /buddy run does more than generate a name. It also asks the backend for an initial reaction using project context.

function nBK(q,K){
  if(w8().companionMuted) return;
  rQ8=Date.now();
  ERY().then((_)=>
    iQ8(q,_||"(fresh project, nothing to see yet)","hatch",[],!1,AbortSignal.timeout(1e4))
  ).then((_)=>{
    if(!_) return;
    w_7(_);
    K(_)
  }).catch(()=>{})
}

Project context comes from:

async function ERY(){
  let q=f8(),
      [K,_]=await Promise.allSettled([
        DRY(fRY(q,"package.json"),"utf-8"),
        t8(N7(),["--no-optional-locks","log","--oneline","-n","3"],{...})
      ]),
      z=[];

  if(K.status==="fulfilled")
    try{
      let Y=r8(K.value);
      if(Y.name) z.push(`project: ${Y.name}${Y.description?` — ${Y.description}`:""}`)
    }catch{}

  if(_.status==="fulfilled"){
    let Y=_.value.stdout.trim();
    if(Y) z.push(`recent commits:\n${Y}`)
  }

  return z.join(`\n`)
}

So the initial hatch reaction can see:

  1. the project name and description from package.json,
  2. the last three commit subjects from git log --oneline.

That is enough to make the first line feel project-aware without requiring a deep repo scan.
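A stdlib-only sketch of ERY in Python (error handling collapses to empty context, as in the original):

```python
import json
import pathlib
import subprocess

def project_context(cwd: str) -> str:
    """Sketch of ERY: package.json identity plus the last three commit lines."""
    parts = []
    pkg = pathlib.Path(cwd) / "package.json"
    try:
        data = json.loads(pkg.read_text(encoding="utf-8"))
        if data.get("name"):
            desc = f" — {data['description']}" if data.get("description") else ""
            parts.append(f"project: {data['name']}{desc}")
    except (OSError, ValueError):
        pass  # missing or malformed package.json is fine
    try:
        log = subprocess.run(
            ["git", "--no-optional-locks", "log", "--oneline", "-n", "3"],
            cwd=cwd, capture_output=True, text=True, timeout=5,
        ).stdout.strip()
        if log:
            parts.append(f"recent commits:\n{log}")
    except (OSError, subprocess.SubprocessError):
        pass  # not a repo, or no git binary
    return "\n".join(parts)
```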


Pet flow

/buddy pet is a separate path:

if(Y==="pet"){
  let A=PC();
  if(!A) return q("no companion yet · run /buddy first",{display:"system"}),null;
  if(z.companionMuted===!0) R8((w)=>({...w,companionMuted:!1}));
  return K.setAppState((w)=>({...w,companionPetAt:Date.now()})),
         iBK(OgK(K.setAppState)),
         q(`petted ${A.name}`,{display:"system"}),
         null
}

The backend part is:

function iBK(q){
  let K=PC();
  if(!K) return;
  rQ8=Date.now();
  iQ8(K,"(you were just petted)","pet",oS6,!1,AbortSignal.timeout(1e4)).then((_)=>{
    if(!_) return;
    w_7(_);
    q(_)
  })
}

So petting has both:

  1. a local animation trigger via companionPetAt,
  2. a backend reaction with reason pet and transcript (you were just petted).

Reaction bubble rendering

The bubble renderer is OnK():

function OnK(q){
  let {text:_,color:z,fading:Y,tail:$}=q;
  let Z=tgY(_,30);
  A=Y?"inactive":z;
  O=m;
  w="column";
  j="round";
  H=A;
  J=1;
  M=34;
  X=$==="down";
  ...
}

The key hard-coded values are:

  1. bubble text wraps at 30 characters per line,
  2. the bubble box is capped at 34 columns with a round border and padding of 1,
  3. a fading bubble switches its color to "inactive",
  4. the tail can render pointing up or down.

The line-breaking helper is plain greedy word wrapping:

function tgY(q,K){
  let _=q.split(" "),z=[],Y="";
  for(let $ of _)
    if(Y.length+$.length+1>K&&Y)
      z.push(Y),Y=$;
    else
      Y=Y?`${Y} ${$}`:$;
  if(Y) z.push(Y);
  return z
}

This is enough for short one-liners and does not attempt anything smarter.
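tgY ports directly (note that a single word longer than the width is left unbroken on its own line):

```python
def word_wrap(text: str, width: int) -> list:
    """Greedy word wrapping with no hyphenation, mirroring tgY."""
    lines, current = [], ""
    for word in text.split(" "):
        # Break before this word if appending it (plus a space) would overflow.
        if current and len(current) + len(word) + 1 > width:
            lines.append(current)
            current = word
        else:
            current = f"{current} {word}" if current else word
    if current:
        lines.append(current)
    return lines
```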


Creature rendering and animation

The buddy itself is rendered by aY7().

Relevant constants:

Ec8=500
oY7=20
$nK=6
sgY=2500
Lc8=100
egY=12
qFY=2
KFY=2
_FY=36
znK=24

Interpretation:

Constant   Meaning
Ec8=500    animation tick every 500ms
oY7=20     reaction lifetime = 10 seconds
$nK=6      fade starts in final 3 seconds
sgY=2500   pet hearts shown for 2.5 seconds
Lc8=100    compact layout below 100 columns
znK=24     compact reaction text truncation

Idle animation sequence

KnK=[0,0,0,0,1,0,0,0,-1,0,0,2,0,0,0]

This cycles mostly through frame 0, occasionally swaps to frame 1 or 2, and uses -1 as a blink/sleep frame.

Blinking is implemented by replacing the eye glyph with -:

let V=oQ8(J,v).map((R)=>k?R.replaceAll(J.eye,"-"):R)

Pet-heart frames

_nK=[
  `   ${ip}    ${ip}   `,
  `  ${ip}  ${ip}   ${ip}  `,
  ` ${ip}   ${ip}  ${ip}   `,
  `${ip}  ${ip}      ${ip} `,
  "·    ·   ·  "
]

That is not sophisticated, but it is enough to make petting visibly distinct from ordinary idle motion.


Narrow-width fallback

If the terminal width is under 100 columns, the companion collapses to a compact one-line form:

if(Y<Lc8){
  let R=q&&q.length>znK?q.slice(0,znK-1)+"…":q,
      I=R?`"${R}"`:_?` ${J.name} `:J.name;
  return ...
}

In that mode the footer shows:

  1. the species' compact face glyph,
  2. the current reaction, quoted and truncated to 24 characters, or the buddy's name when there is no reaction.

The face glyph generator:

function zgK(q){
  let K=q.eye;
  switch(q.species){
    case "duck":      return `(${K}>`;
    case "goose":     return `(${K}>`;
    case "blob":      return `(${K}${K})`;
    case "cat":       return `=${K}ω${K}=`;
    case "dragon":    return `<${K}~${K}>`;
    case "octopus":   return `~(${K}${K})~`;
    case "owl":       return `(${K})(${K})`;
    case "penguin":   return `(${K}>)`;
    case "turtle":    return `[${K}_${K}]`;
    case "snail":     return `${K}(@)`;
    case "ghost":     return `/${K}${K}\\`;
    case "axolotl":   return `}${K}.${K}{`;
    case "capybara":  return `(${K}oo${K})`;
    case "cactus":    return `|${K}  ${K}|`;
    case "robot":     return `[${K}${K}]`;
    case "rabbit":    return `(${K}..${K})`;
    case "mushroom":  return `|${K}  ${K}|`;
    case "chonk":     return `(${K}.${K})`;
  }
}

That is the complete compact representation. It is all static string assembly.


Layout width reservation

The buddy is not merely drawn in spare space. The layout explicitly reserves width for it.

function wnK(q,K){
  let _=PC();
  if(!_||w8().companionMuted) return 0;
  if(q<Lc8) return 0;
  let z=w1(_.name),Y=K&&!Nh8()?_FY:0;
  return AnK(z)+KFY+Y
}

This reserves width for:

  1. the creature column itself, sized via AnK from the rendered name width,
  2. a 2-column gap (KFY),
  3. an extra 36 columns for the bubble (_FY) when the pane is wide enough.

That layout reservation is one reason the feature feels integrated instead of slapped on.


Species art tables

The full species art table is long, but the implementation pattern is simple.

Each species maps to a list of animation frames with {E} placeholders for eye glyphs.

For example, cat:

["            ","   /\\_/\\    ","  ( {E}   {E})  ","  (  ω  )   ",'  (")_(")   ']

and dragon:

["            ","  /^\\  /^\\  "," <  {E}  {E}  > "," (   ~~   ) ","  `-vvvv-´  "]

Rendering happens here:

function oQ8(q,K=0){
  let _=KgK[q.species],
      Y=[..._[K%_.length].map(($)=>$.replaceAll("{E}",q.eye))];
  if(q.hat!=="none"&&!Y[0].trim()) Y[0]=xRY[q.hat];
  if(!Y[0].trim()&&_.every(($)=>!$[0].trim())) Y.shift();
  return Y
}

That is the entire creature compositor:

  1. choose species frame,
  2. substitute eye glyphs,
  3. inject hat art if species frame leaves room,
  4. drop empty top line if appropriate.

No image assets are involved.
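The compositing steps above can be sketched with the cat frame from the bundle (the hat art row here is a hypothetical placeholder, not the shipped xRY art):

```python
# One cat animation frame, copied from the bundle's species art table.
CAT_FRAME = ["            ", "   /\\_/\\    ", "  ( {E}   {E})  ", "  (  \u03c9  )   ", '  (")_(")   ']
HAT_ART = {"tophat": "    [#]     "}  # hypothetical art row for illustration

def compose(frame, eye, hat="none"):
    """Mirror of oQ8's core: substitute eyes, then drop hat art onto an empty top row."""
    rows = [row.replace("{E}", eye) for row in frame]
    if hat != "none" and not rows[0].strip():
        rows[0] = HAT_ART[hat]  # hat art only lands when the top row is blank
    return rows
```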


Why the feature feels persistent

The persistence illusion comes from a specific combination of choices:

Mechanism                              Effect
deterministic seed from account        same body every time
generated soul stored once             same name/personality after hatch
prompt attachment                      main Claude behaves as if companion exists
backend reactions with recent history  reactions feel fresh but not fully random
small reaction buffer                  avoids obvious repetition
footer rendering                       companion is always visually present

Nothing here requires a true second conversation-running agent.

That is the core implementation trick.


What buddy is not

It is easy to overstate what the feature does. The source does not support that.

Buddy is not:

  1. a second tool-using agent,
  2. a model running inside the client,
  3. a character with long-range memory across sessions.

It is:

  1. a deterministic body recomputed from account identity,
  2. a name and personality generated once at hatch,
  3. a throttled backend reaction API with a three-item memory,
  4. a footer renderer plus one meta prompt instruction.

That is enough to make the feature feel more alive than it really is.


Minimal reproduction in readable pseudocode

A stripped-down implementation sketch looks like this:

class Buddy:
    def __init__(self, user_key: str):
        self.seed = hash32(user_key + "friend-2026-401")
        self.bones = generate_bones(self.seed)
        self.soul = None
        self.recent_reactions = []
        self.muted = False

    def hatch(self):
        if self.soul is None:
            self.soul = generate_name_and_personality(self.bones)
        return {**self.bones, **self.soul}

    def maybe_react(self, transcript, tool_output, addressed):
        if self.muted:
            return None

        reason = None if addressed else classify(tool_output)
        reason = reason or "turn"

        if reason == "turn" and not addressed and throttled(30):
            return None

        reaction = backend_react(
            name=self.soul["name"],
            personality=self.soul["personality"],
            species=self.bones["species"],
            rarity=self.bones["rarity"],
            stats=self.bones["stats"],
            transcript=clip(transcript, 5000),
            reason=reason,
            recent=self.recent_reactions[-3:],
            addressed=addressed,
        )

        if reaction:
            self.recent_reactions = (self.recent_reactions + [reaction])[-3:]

        return reaction

That is not the exact code, but it is close to the actual design.


Notes on scope

Everything above is from the installed 2.1.89 bundle. The exact strings, salt, rollout gate, or endpoint contract can change at any time.

The parts I would treat as likely to move are:

  1. the salt string and the date gate,
  2. the soul-generation model and prompts,
  3. the buddy_react endpoint path and payload shape.

The parts I would treat as structural are:

  1. recomputing the body deterministically instead of storing it,
  2. one-shot soul generation persisted in app state,
  3. the companion_intro prompt attachment,
  4. throttled, reason-classified backend reactions.


The practical design lesson

The feature works because it does not try to be more than it is.

Anthropic did not build a true second agent and then squeeze it into the footer. They built the minimum machinery needed to create the impression of a stable side character:

  1. deterministic identity,
  2. one-shot soul generation,
  3. tiny backend reaction API,
  4. explicit prompt coordination with the main assistant,
  5. enough animation to make the thing feel present.

That is cheaper, easier to reason about, and easier to ship.

It is also enough.