Harn

Harn is a pipeline-oriented programming language for orchestrating AI coding agents. It has native LLM calls, tool use, structured output, and async concurrency built into the language.

pipeline default(task) {
  let tools = tool_registry()
    |> tool_add("search", "Search the web", search_fn, {query: "string"})

  let result = llm_call(task, "You are a research assistant", {
    tools: tools,
    response_format: "json",
  })

  log(result.data)
}

Getting started

Prerequisites

Harn is built with Rust. You’ll need:

  • Rust (1.70 or later) — install with curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
  • Git

Install from source

git clone https://github.com/burin-labs/harn
cd harn && cargo build --release
cp target/release/harn ~/.local/bin/

Create a project and run it:

harn init my-agent
cd my-agent
export ANTHROPIC_API_KEY=sk-...
harn run main.harn

Why Harn?

The problem

Building AI agents is complex. A typical agent needs to call LLMs, execute tools, handle errors and retries, run tasks concurrently, maintain conversation state, and coordinate multiple sub-agents. In most languages, this means assembling a tower of libraries:

  • An LLM SDK (LangChain, OpenAI SDK, Anthropic SDK)
  • An async runtime (asyncio, Tokio, goroutines)
  • Retry and timeout logic (tenacity, custom decorators)
  • Tool registration and dispatch (custom JSON Schema plumbing)
  • Structured logging and tracing (separate packages)
  • A test framework (pytest, Jest)

Each layer adds configuration, boilerplate, and failure modes. The orchestration logic – the part that actually matters – gets buried under infrastructure code.

What Harn does differently

Harn is a programming language where agent orchestration primitives are built into the syntax, not bolted on as libraries.

Pipelines are the unit of composition

Every Harn program is a set of named pipelines. Pipelines can extend each other, override steps, and be imported across files. This gives you a natural way to structure multi-stage agent workflows:

pipeline analyze(task) {
  let context = read_file("README.md")
  let plan = llm_call(task + "\n\nContext:\n" + context, "Break this into steps.")
  let steps = json_parse(plan)

  let results = parallel_map(steps) { step ->
    agent_loop(step, "You are a coding assistant.", {persistent: true})
  }

  write_file("results.json", json_stringify(results))
}

LLM calls are builtins

llm_call and agent_loop are language primitives. No SDK imports, no client initialization, no response parsing. Set an environment variable and call a model:

let answer = llm_call("Summarize this code", "You are a code reviewer.")

Harn supports Anthropic, OpenAI, Ollama, and OpenRouter. Switching providers is a one-field change in the options dict.
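
For example, switching one call from Anthropic to a local Ollama model might look like this. The option keys `provider` and `model` below are assumptions for illustration; check the configuration reference for the exact names:

```
// Hypothetical option keys -- `provider` and `model` are assumed names
let cloud = llm_call(text, "Summarize.", {provider: "anthropic", model: "claude-sonnet-4-20250514"})

// Same call, pointed at a local Ollama model
let local = llm_call(text, "Summarize.", {provider: "ollama", model: "llama3"})
```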

Native concurrency without async/await

parallel_map, parallel, spawn/await, and channels are keywords, not library functions. No callback chains, no promise combinators, no async def annotations:

let results = parallel_map(files) { file ->
  llm_call(read_file(file), "Review this file for security issues")
}
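
The spawn/await form is not shown elsewhere in this guide, so the expression syntax below is an assumption; a sketch of running one call in the background while doing other work:

```
// Assumed syntax: spawn starts work concurrently and returns a handle;
// await blocks until that work finishes
let handle = spawn llm_call(read_file("a.txt"), "Summarize this file.")

// This call runs while the spawned one is in flight
let b_summary = llm_call(read_file("b.txt"), "Summarize this file.")

let a_summary = await handle
```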

Retry and error recovery are syntax

retry and try/catch are control flow constructs. Wrapping an unreliable LLM call in retries is a one-liner:

retry 3 {
  let result = llm_call(prompt, system)
  json_parse(result)
}

MCP for external tools

Harn has built-in support for the Model Context Protocol. Connect to any MCP-compatible tool server, list its tools, and call them – all from within a pipeline:

let client = mcp_connect("npx", ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"])
let tools = mcp_list_tools(client)
let content = mcp_call(client, "read_file", {path: "/tmp/data.txt"})
mcp_disconnect(client)

Tail call optimization for agent loops

Recursive agent patterns – where an agent processes one item, then calls itself with the next – are idiomatic in Harn. The VM performs tail call optimization so recursive loops do not overflow the stack, even across thousands of iterations:

fn process_items(items, results) {
  if items.count == 0 {
    return results
  }
  let item = items.first
  let rest = items.slice(1)
  let result = llm_call(item, "Process this item")
  return process_items(rest, results + [result])
}

Gradual typing

Type annotations are optional. Add them where they help, leave them off where they don’t:

fn score(text: string) -> int {
  let result = llm_call(text, "Rate 1-10. Respond with just the number.")
  return to_int(result)
}

Embeddable

Harn compiles to a WASM target for browser embedding and ships with LSP and DAP servers for IDE integration. Agent pipelines can run inside editors, CI systems, or web applications.

Who Harn is for

  • Developers building AI agents who want orchestration logic to be readable and concise, not buried under framework boilerplate.
  • IDE authors who want a scriptable, embeddable language for agent pipelines with built-in LSP support.
  • Researchers prototyping agent architectures who need fast iteration without setting up infrastructure.

Comparison

Here is what a “fetch three URLs in parallel, summarize each with an LLM, and retry failures” pattern looks like across approaches:

Python (LangChain + asyncio):

import asyncio
from langchain_anthropic import ChatAnthropic
from tenacity import retry, stop_after_attempt
import aiohttp

llm = ChatAnthropic(model="claude-sonnet-4-20250514")

@retry(stop=stop_after_attempt(3))
async def summarize(url):
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as resp:
            text = await resp.text()
    result = await llm.ainvoke(f"Summarize:\n{text}")
    return result.content

async def main():
    urls = ["https://a.com", "https://b.com", "https://c.com"]
    results = await asyncio.gather(*[summarize(u) for u in urls])
    for r in results:
        print(r)

asyncio.run(main())

Harn:

pipeline default(task) {
  let urls = ["https://a.com", "https://b.com", "https://c.com"]

  let results = parallel_map(urls) { url ->
    retry 3 {
      let page = http_get(url)
      llm_call("Summarize:\n" + page, "Be concise.")
    }
  }

  for r in results {
    log(r)
  }
}

The Harn version has no imports, no decorators, no client initialization, no async annotations, and no runtime setup. The orchestration logic is all that remains.

Getting started

Install Harn and create a project:

curl -fsSL https://raw.githubusercontent.com/burin-labs/harn/main/install.sh | sh
harn init my-agent
cd my-agent
harn run main.harn

See the cookbook for practical patterns, or language basics for a full syntax guide.

Language basics

This guide covers the core syntax and semantics of Harn.

Pipelines

Pipelines are the top-level organizational unit. A Harn program is one or more pipelines. The runtime executes the pipeline named default, or the first one declared.

pipeline default(task) {
  log("Hello from the default pipeline")
}

pipeline other(task) {
  log("This only runs if called or if there's no default")
}

Pipeline parameters task and project are injected by the host runtime. A context dict with keys task, project_root, and task_type is always available.
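
For example, assuming the dict is bound to the name `context` inside a pipeline (the binding name is an assumption; the keys are as documented above):

```
pipeline default(task) {
  // `context` is assumed to be the injected context dict
  log("task: ${context.task}")
  log("root: ${context.project_root}")
  log("type: ${context.task_type}")
}
```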

Variables

let creates immutable bindings. var creates mutable ones.

let name = "Alice"
var counter = 0

counter = counter + 1  // ok
name = "Bob"           // error: immutable assignment

Types and values

Harn is dynamically typed with optional type annotations.

Type      Example            Notes
int       42                 Platform-width integer
float     3.14               Double-precision
string    "hello"            UTF-8, supports interpolation
bool      true, false
nil       nil                Null value
list      [1, 2, 3]          Heterogeneous, ordered
dict      {name: "Alice"}    String-keyed map
closure   { x -> x + 1 }     First-class function
duration  5s, 100ms          Time duration

Type annotations

Annotations are optional and checked at compile time:

let x: int = 42
let name: string = "hello"
let nums: list<int> = [1, 2, 3]

fn add(a: int, b: int) -> int {
  return a + b
}

Supported type expressions: int, float, string, bool, nil, list, list<T>, dict, dict<K, V>, union types (string | nil), and structural shape types ({name: string, age: int}).
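
Union types pair naturally with the nil-coalescing operator. A minimal sketch, combining a union annotation with a shape type:

```
// `email` may be a string or nil, so the caller falls back to a placeholder
fn display_email(user: {name: string, email: string | nil}) -> string {
  return user.email ?? "no email on file"
}

log(display_email({name: "Alice", email: nil}))  // no email on file
```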

Parameter type annotations for primitive types (int, float, string, bool, list, dict, set, nil, closure) are enforced at runtime. Calling a function with the wrong type produces a TypeError:

fn add(a: int, b: int) -> int {
  return a + b
}

add("hello", "world")
// TypeError: parameter 'a' expected int, got string (hello)

Structural types (shapes)

Shape types describe the expected fields of a dict. The type checker verifies that required fields are present with compatible types. Extra fields are allowed (width subtyping).

let user: {name: string, age: int} = {name: "Alice", age: 30}
let config: {host: string, port?: int} = {host: "localhost"}

fn greet(u: {name: string}) -> string {
  return "hi " + u["name"]
}
greet({name: "Bob", age: 25})

Use type aliases for reusable shape definitions:

type Config = {model: string, max_tokens: int}
let cfg: Config = {model: "gpt-4", max_tokens: 100}

Truthiness

These values are falsy: false, nil, 0, 0.0, "", [], {}. Everything else is truthy.

Strings

Interpolation

let name = "world"
log("Hello, ${name}!")
log("2 + 2 = ${2 + 2}")

Any expression works inside ${}.

Multi-line strings

let doc = """
  This is a multi-line string.
  Common leading whitespace is stripped.
  Interpolation is NOT supported here.
"""

Escape sequences

\n (newline), \t (tab), \\ (backslash), \" (quote), \$ (dollar sign).
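
Each escape in context:

```
log("line one\nline two")    // two output lines
log("a\tb")                  // tab-separated
log("she said \"hi\"")       // embedded quotes
log("price is \$5")          // literal dollar sign, no interpolation
```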

String methods

"hello".count                    // 5
"hello".empty                    // false
"hello".contains("ell")          // true
"hello".replace("l", "r")       // "herro"
"a,b,c".split(",")              // ["a", "b", "c"]
"  hello  ".trim()              // "hello"
"hello".starts_with("he")       // true
"hello".ends_with("lo")         // true
"hello".uppercase()             // "HELLO"
"hello".lowercase()             // "hello"
"hello world".substring(0, 5)   // "hello"

Operators

Ordered by precedence (lowest to highest):

Precedence  Operators          Description
1           |>                 Pipe
2           ? :                Ternary conditional
3           ??                 Nil coalescing
4           ||                 Logical OR (short-circuit)
5           &&                 Logical AND (short-circuit)
6           == !=              Equality
7           < > <= >=          Comparison
8           + -                Add, subtract, string/list concat
9           * /                Multiply, divide
10          ! -                Unary not, negate
11          . ?. [] [:] () ?   Member access, optional chaining, subscript, slice, call, try

Integer division truncates. Division by zero throws a runtime error, which can be caught with try/catch or captured as a Result with a try-expression.

Optional chaining (?.)

Access properties or call methods on values that might be nil. Returns nil instead of erroring when the receiver is nil:

let user = nil
println(user?.name)           // nil (no error)
println(user?.greet("hi"))    // nil (method not called)

let d = {name: "Alice"}
println(d?.name)              // Alice

Chains propagate nil: a?.b?.c returns nil if any step is nil.

List and string slicing ([start:end])

Extract sublists or substrings using slice syntax:

let items = [10, 20, 30, 40, 50]
println(items[1:3])   // [20, 30]
println(items[:2])    // [10, 20]
println(items[3:])    // [40, 50]
println(items[-2:])   // [40, 50]

let s = "hello world"
println(s[0:5])       // hello
println(s[-5:])       // world

Negative indices count from the end. Omit start for 0, omit end for length.

Try operator (?)

The postfix ? operator works with Result values (Ok / Err). It unwraps Ok values and propagates Err values by returning early from the enclosing function:

fn divide(a, b) {
  if b == 0 {
    return Err("division by zero")
  }
  return Ok(a / b)
}

fn compute(x) {
  let result = divide(x, 2)?   // unwraps Ok, or returns Err early
  return Ok(result + 10)
}

fn compute_zero(x) {
  let result = divide(x, 0)?   // divide returns Err, ? propagates it
  return Ok(result + 10)
}

log(compute(20))       // Result.Ok(20)
log(compute_zero(20))  // Result.Err(division by zero)

Multiple ? calls can be chained in a single function to build pipelines that short-circuit on the first error.
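
With divide from above, chaining two ? calls short-circuits on whichever step first returns an Err:

```
fn quarter(x) {
  let half = divide(x, 2)?   // an Err here returns immediately
  let q = divide(half, 2)?   // ...or here
  return Ok(q)
}

log(quarter(40))   // Result.Ok(10)
```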

Control flow

if/else

if score > 90 {
  log("A")
} else if score > 80 {
  log("B")
} else {
  log("C")
}

Can be used as an expression: let grade = if score > 90 { "A" } else { "B" }

for/in

for item in [1, 2, 3] {
  log(item)
}

// Dict iteration yields {key, value} entries sorted by key
for entry in {a: 1, b: 2} {
  log("${entry.key}: ${entry.value}")
}

while

var i = 0
while i < 10 {
  log(i)
  i = i + 1
}

Safety limit of 10,000 iterations.

match

match status {
  "active" -> { log("Running") }
  "stopped" -> { log("Halted") }
}

Patterns are expressions compared by equality. First match wins. No match returns nil.
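
Because patterns are ordinary expressions, variables and computed values can serve as patterns:

```
let threshold = 100
let value = 200

match value {
  threshold -> { log("at the threshold") }          // variable as a pattern
  threshold * 2 -> { log("double the threshold") }  // computed pattern; matches here
}
```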

guard

Early exit if a condition isn’t met:

guard x > 0 else {
  return "invalid"
}
// x is guaranteed > 0 here

Ranges

for i in 1 thru 5 {   // inclusive: 1, 2, 3, 4, 5
  log(i)
}

for i in 0 upto 3 {   // exclusive: 0, 1, 2
  log(i)
}

Functions and closures

Named functions

fn double(x) {
  return x * 2
}

fn greet(name: string) -> string {
  return "Hello, ${name}!"
}

Functions can be declared at the top level (for library files) or inside pipelines.

Closures

let square = { x -> x * x }
let add = { a, b -> a + b }

log(square(4))     // 16
log(add(2, 3))     // 5

Closures capture their lexical environment at definition time. Parameters are immutable.
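
A small example of capture: the closure refers to a binding from the surrounding scope, captured when the closure is defined:

```
let greeting = "Hello"
let greet = { name -> "${greeting}, ${name}!" }  // `greeting` is captured here

log(greet("Alice"))  // Hello, Alice!
```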

Higher-order functions

let nums = [1, 2, 3, 4, 5]

nums.map({ x -> x * 2 })           // [2, 4, 6, 8, 10]
nums.filter({ x -> x > 3 })        // [4, 5]
nums.reduce(0, { acc, x -> acc + x }) // 15
nums.find({ x -> x == 3 })         // 3
nums.any({ x -> x > 4 })           // true
nums.all({ x -> x > 0 })           // true
nums.flat_map({ x -> [x, x] })     // [1, 1, 2, 2, 3, 3, 4, 4, 5, 5]

Pipe operator

The pipe operator |> passes the left side as the argument to the right side:

let result = data
  |> { list -> list.filter({ x -> x > 0 }) }
  |> { list -> list.map({ x -> x * 2 }) }
  |> json_stringify

Pipe placeholder (_)

Use _ to control where the piped value is placed in the call:

"hello world" |> split(_, " ")       // ["hello", "world"]
[3, 1, 2] |> _.sort()               // [1, 2, 3]
items |> len(_)                      // length of items
"world" |> replace("hello _", "_", _) // "hello world"

Without _, the value is passed as the sole argument to a closure or function name.

Multiline expressions

Binary operators, method chains, and pipes can span multiple lines:

let message = "hello"
  + " "
  + "world"

let result = items
  .filter({ x -> x > 0 })
  .map({ x -> x * 2 })

let valid = check_a()
  && check_b()
  || fallback()

Note: - does not continue across lines because it doubles as unary negation.

A backslash at the end of a line forces the next line to continue the current expression, even when no operator is present:

let long_value = some_function( \
  arg1, arg2, arg3 \
)

Destructuring

Destructuring extracts values from dicts and lists into local variables.

Dict destructuring

let person = {name: "Alice", age: 30}
let {name, age} = person
log(name)  // "Alice"
log(age)   // 30

List destructuring

let items = [1, 2, 3, 4, 5]
let [first, ...rest] = items
log(first)  // 1
log(rest)   // [2, 3, 4, 5]

Renaming

Use : to bind a dict field to a different variable name:

let data = {name: "Alice"}
let {name: user_name} = data
log(user_name)  // "Alice"

Destructuring in for-in loops

let entries = [{key: "a", value: 1}, {key: "b", value: 2}]
for {key, value} in entries {
  log("${key}: ${value}")
}

Missing keys and empty rest

Missing keys destructure to nil. A rest pattern with no remaining items gives an empty collection:

let {name, email} = {name: "Alice"}
log(email)  // nil

let [only, ...rest] = [42]
log(rest)   // []

Collections

Lists

let nums = [1, 2, 3]
nums.count          // 3
nums.first          // 1
nums.last           // 3
nums.empty          // false
nums[0]             // 1 (subscript access)

Lists support + for concatenation: [1, 2] + [3, 4] yields [1, 2, 3, 4]. Assigning to an out-of-bounds index throws an error.

Dicts

let user = {name: "Alice", age: 30}
user.name           // "Alice" (property access)
user["age"]         // 30 (subscript access)
user.missing        // nil (missing keys return nil)
user.has("email")   // false

user.keys()         // ["age", "name"] (sorted)
user.values()       // [30, "Alice"]
user.entries()      // [{key: "age", value: 30}, ...]
user.merge({role: "admin"})  // new dict with merged keys
user.map_values({ v -> to_string(v) })
user.filter({ v -> type_of(v) == "int" })

Computed keys use bracket syntax: {[dynamic_key]: value}.

Quoted string keys are also supported for JSON compatibility: {"content-type": "json"}. The formatter normalizes simple quoted keys to unquoted form and non-identifier keys to computed key syntax.

Keywords can be used as dict keys and property names: {type: "read"}, op.type.

Dicts iterate in sorted key order (alphabetical). This means for k in dict is deterministic and reproducible, but does not preserve insertion order.
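
Putting computed keys and sorted iteration together (mixing computed and plain keys in one literal is assumed to be allowed):

```
let field = "email"
let user = {[field]: "a@example.com", name: "Alice"}

log(user.email)    // a@example.com

for entry in user {
  log(entry.key)   // "email", then "name" -- alphabetical, not insertion order
}
```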

Sets

Sets are unordered collections of unique values. Duplicates are automatically removed.

let s = set(1, 2, 3)          // create from individual values
let s2 = set([4, 5, 5, 6])   // create from a list (deduplicates)
let tags = set("a", "b", "c") // works with any value type

Set operations are provided as builtin functions:

let a = set(1, 2, 3)
let b = set(3, 4, 5)

set_contains(a, 2)       // true
set_contains(a, 99)      // false

set_union(a, b)          // set(1, 2, 3, 4, 5)
set_intersect(a, b)      // set(3)
set_difference(a, b)     // set(1, 2) -- items in a but not in b

set_add(a, 4)            // set(1, 2, 3, 4)
set_remove(a, 2)         // set(1, 3)

Sets support iteration with for..in:

var sum = 0
for item in set(10, 20, 30) {
  sum = sum + item
}
log(sum)  // 60

Convert a set to a list with to_list():

let items = to_list(set(10, 20))
type_of(items)  // "list"

Enums and structs

Enums

enum Status {
  Active
  Inactive
  Pending(reason)
  Failed(code, message)
}

let s = Status.Pending("waiting")
match s.variant {
  "Pending" -> { log(s.fields[0]) }
  "Active" -> { log("ok") }
}

Structs

struct Point {
  x: int
  y: int
}

let p = {x: 10, y: 20}
log(p.x)

Structs can also be constructed with the struct name as a constructor, which tags the value with the struct type:

let p = Point({x: 10, y: 20})
log(p.x)  // 10

Impl blocks

Add methods to a struct with impl:

struct Point {
  x: int
  y: int
}

impl Point {
  fn distance(self) {
    return sqrt(self.x * self.x + self.y * self.y)
  }
  fn translate(self, dx, dy) {
    return Point({x: self.x + dx, y: self.y + dy})
  }
}

let p = Point({x: 3, y: 4})
log(p.distance())       // 5.0
log(p.translate(10, 20)) // Point({x: 13, y: 24})

The first parameter must be self, which receives the struct instance. Methods are called with dot syntax on values constructed with the struct constructor.

Interfaces

Interfaces let you define a contract: a set of methods that a type must have. Harn uses implicit satisfaction, just like Go. A struct satisfies an interface automatically if its impl block has all the required methods. You never write implements or impl Interface for Type.

Step 1: Define an interface

An interface lists method signatures without bodies:

interface Displayable {
  fn display(self) -> string
}

This says: any type that has a display(self) -> string method counts as Displayable.

Step 2: Create structs with matching methods

struct Dog {
  name: string
  breed: string
}

impl Dog {
  fn display(self) -> string {
    return "${self.name} the ${self.breed}"
  }
}

struct Cat {
  name: string
  indoor: bool
}

impl Cat {
  fn display(self) -> string {
    let status = if self.indoor { "indoor" } else { "outdoor" }
    return "${self.name} (${status} cat)"
  }
}

Both Dog and Cat have a display(self) -> string method, so they both satisfy Displayable. No extra annotation is needed.

Step 3: Use the interface as a type

Now you can write a function that accepts any Displayable:

fn introduce(animal: Displayable) {
  println("Meet: " + animal.display())
}

let d = Dog({name: "Rex", breed: "Labrador"})
let c = Cat({name: "Whiskers", indoor: true})

introduce(d)  // Meet: Rex the Labrador
introduce(c)  // Meet: Whiskers (indoor cat)

The type checker verifies at compile time that Dog and Cat satisfy Displayable. If a struct is missing a required method, you get a clear error at the call site.

Interfaces with multiple methods

Interfaces can require more than one method:

interface Serializable {
  fn serialize(self) -> string
  fn byte_size(self) -> int
}

A struct must implement all listed methods to satisfy the interface.
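
A hypothetical Note struct satisfying Serializable; both methods must be present in the impl block:

```
struct Note {
  text: string
}

impl Note {
  fn serialize(self) -> string {
    return json_stringify({text: self.text})
  }
  fn byte_size(self) -> int {
    return self.text.count   // character count as a stand-in for bytes
  }
}

// Accepts any type satisfying Serializable
fn store(item: Serializable) {
  log("${item.byte_size()} chars: ${item.serialize()}")
}

store(Note({text: "hello"}))
```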

Generic constraints

You can also use interfaces as constraints on generic type parameters:

fn log_item<T>(item: T) where T: Displayable {
  println("[LOG] " + item.display())
}

The where T: Displayable clause tells the type checker to verify that whatever concrete type is passed for T satisfies Displayable. If it does not, a compile-time warning is produced.

Spread in function calls

The spread operator ... expands a list into individual function arguments:

fn add(a, b, c) {
  return a + b + c
}

let nums = [1, 2, 3]
println(add(...nums))  // 6

You can mix regular arguments and spread arguments:

let rest = [2, 3]
println(add(1, ...rest))  // 6

Spread works in method calls too:

let point = Point({x: 0, y: 0})
let deltas = [10, 20]
let moved = point.translate(...deltas)

Try-expression

The try keyword without a catch block is a try-expression. It evaluates its body and wraps the outcome in a Result:

let result = try { json_parse(raw_input) }
// Result.Ok(parsed_data)  -- if parsing succeeds
// Result.Err("invalid JSON: ...") -- if parsing throws

This is the complement of the ? operator. Use try to enter Result-land (catching errors into Result.Err), and ? to exit Result-land (propagating errors upward):

fn safe_divide(a, b) {
  return try { a / b }
}

fn compute(x) {
  let half = safe_divide(x, 2)?  // unwrap Ok or propagate Err
  return Ok(half + 10)
}

No catch or finally is needed. If a catch follows try, it is parsed as the traditional try/catch statement instead.

Duration literals

let d1 = 500ms   // 500 milliseconds
let d2 = 5s      // 5 seconds
let d3 = 2m      // 2 minutes
let d4 = 1h      // 1 hour

Durations can be passed to sleep() and used in deadline blocks.
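
One common use is pacing retries; a sketch that pauses before each re-attempt (the 500ms backoff value is illustrative):

```
retry 3 {
  try {
    http_get(url)
  } catch (e) {
    sleep(500ms)   // pause before the next attempt
    throw e        // re-throw so retry runs again
  }
}
```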

Comments

// Line comment

/* Block comment
   /* Nested block comments are supported */
   Still inside the outer comment */

Error handling

Harn provides try/catch/throw for error handling and retry for automatic recovery.

throw

Any value can be thrown as an error:

throw "something went wrong"
throw {code: 404, message: "not found"}
throw 42

try/catch

Catch errors with an optional error binding:

try {
  let data = json_parse(raw_input)
} catch (e) {
  log("Parse failed: ${e}")
}

The error variable is optional:

try {
  risky_operation()
} catch {
  log("Something failed, moving on")
}

What gets bound to the error variable

  • If the error was created with throw: e is the thrown value directly (string, dict, etc.)
  • If the error is an internal runtime error: e is the error’s description as a string

return inside try

A return statement inside a try block is not caught. It propagates out of the enclosing pipeline or function as expected.

fn find_user(id) {
  try {
    let user = lookup(id)
    return user  // this returns from find_user, not caught
  } catch (e) {
    return nil
  }
}

Typed catch

Catch specific error types using enum-based error hierarchies:

enum AppError {
  NotFound(resource)
  Unauthorized(reason)
  Internal(message)
}

try {
  throw AppError.NotFound("user:123")
} catch (e: AppError) {
  match e.variant {
    "NotFound" -> { log("Missing: ${e.fields[0]}") }
    "Unauthorized" -> { log("Access denied") }
  }
}

Errors that don’t match the typed catch propagate up the call stack.

retry

Automatically retry a block up to N times:

retry 3 {
  let response = http_post(url, payload)
  let parsed = json_parse(response)
  parsed
}
  • If the body succeeds on any attempt, returns that result immediately
  • If all attempts fail, returns nil
  • return inside a retry block propagates out (not retried)

Try-expression

The try keyword without a catch block acts as a try-expression. It evaluates the body and returns a Result:

  • On success: Result.Ok(value)
  • On error: Result.Err(error)

let result = try { json_parse(raw_input) }

This is useful when you want to capture an error as a value rather than crashing or needing a full try/catch:

let parsed = try { json_parse(input) }
if is_err(parsed) {
  println("Bad input, using defaults")
  parsed = Ok({})
}
let data = unwrap(parsed)

The try-expression pairs naturally with the ? operator. Use try to enter Result-land and ? to propagate within it:

fn fetch_json(url) {
  let body = try { http_get(url) }
  let text = body?             // unwrap Ok, or propagate the Err from http_get
  let data = try { json_parse(text) }
  return data
}

If a catch block follows try, it is parsed as the traditional try/catch statement – not a try-expression.

Runtime shape validation errors

When a function parameter has a structural type annotation (a shape like {name: string, age: int}), Harn validates the argument at runtime. If the argument is missing a required field or a field has the wrong type, a clear error is produced:

fn process(user: {name: string, age: int}) {
  println("${user.name} is ${user.age}")
}

process({name: "Alice"})
// Error: parameter 'user': missing field 'age' (int)

process({name: "Alice", age: "old"})
// Error: parameter 'user': field 'age' expected int, got string

Shape validation works with both plain dicts and struct instances. Extra fields beyond those listed in the shape are allowed (width subtyping).

This catches a common class of bugs where a dict is passed with missing or mistyped fields, giving you precise feedback about exactly which field is wrong.

Result type

The built-in Result enum provides an alternative to try/catch for representing success and failure as values. A Result is either Ok(value) or Err(error).

let ok = Ok(42)
let err = Err("something failed")

log(ok)   // Result.Ok(42)
log(err)  // Result.Err(something failed)

The shorthand constructors Ok(value) and Err(value) are equivalent to Result.Ok(value) and Result.Err(value).

Result helper functions

Function               Description
is_ok(r)               Returns true if r is Result.Ok
is_err(r)              Returns true if r is Result.Err
unwrap(r)              Returns the Ok value, throws if r is Err
unwrap_or(r, default)  Returns the Ok value, or default if r is Err
unwrap_err(r)          Returns the Err value, throws if r is Ok

let r = Ok(42)
log(is_ok(r))           // true
log(is_err(r))          // false
log(unwrap(r))          // 42
log(unwrap_or(Err("x"), "default"))  // default

Pattern matching on Result

Result values can be destructured with match:

fn fetch_data(url) {
  // ... returns Ok(data) or Err(message)
}

match fetch_data("/api/users") {
  Result.Ok(data) -> { log("Got ${len(data)} users") }
  Result.Err(err) -> { log("Failed: ${err}") }
}

The ? operator

The postfix ? operator provides concise error propagation. Applied to a Result value, it unwraps Ok and returns the value, or immediately returns the Err from the enclosing function.

fn divide(a, b) {
  if b == 0 {
    return Err("division by zero")
  }
  return Ok(a / b)
}

fn compute(x) {
  let result = divide(x, 2)?   // unwraps Ok, or returns Err early
  return Ok(result + 10)
}

fn compute_zero(x) {
  let result = divide(x, 0)?   // divide returns Err, ? propagates it
  return Ok(result + 10)
}

let r1 = compute(20)       // Result.Ok(20)
let r2 = compute_zero(20)  // Result.Err(division by zero)

The ? operator has the same precedence as ., [], and (), so it chains naturally:

fn fetch_and_parse(url) {
  let response = http_get(url)?
  let data = json_parse(response)?
  return Ok(data)
}

Applying ? to a non-Result value produces a runtime type error.

Result vs. try/catch

Use Result and ? when errors are expected outcomes that callers should handle (validation failures, missing data, parse errors). Use try/catch for unexpected errors or when you want to recover from failures in-place without propagating them through return values.

The two patterns can be combined:

fn safe_parse(input) {
  try {
    let data = json_parse(input)
    return Ok(data)
  } catch (e) {
    return Err("parse error: ${e}")
  }
}

fn process(raw) {
  let data = safe_parse(raw)?   // propagate Err if parse fails
  return Ok(transform(data))
}

Stack traces

When a runtime error occurs, Harn displays a stack trace showing the call chain that led to the error. The trace includes file location, source context, and the sequence of function calls.

error: division by zero
  --> example.harn:3:14
  |
3 |   let x = a / b
  |              ^
  = note: called from compute at example.harn:8
  = note: called from pipeline at example.harn:12

The error format shows:

  • Error message: what went wrong
  • Source location: file, line, and column where the error occurred
  • Source context: the relevant source line with a caret (^) pointing to the exact position
  • Call chain: each function in the call stack, from innermost to outermost, with file and line numbers

Stack traces are captured at the point of the error, before try/catch unwinding, so the full call chain is preserved even when errors are caught at a higher level.

Combining patterns

retry 3 {
  try {
    let result = llm_call(prompt, system)
    let parsed = json_parse(result)
    return parsed
  } catch (e) {
    log("Attempt failed: ${e}")
    throw e  // re-throw to trigger retry
  }
}

Modules and imports

Harn supports splitting code across files using import and top-level fn declarations.

Importing files

import "lib/helpers.harn"

The extension is optional — these are equivalent:

import "lib/helpers.harn"
import "lib/helpers"

Import paths are resolved relative to the current file’s directory. If main.harn imports "lib/helpers", it looks for lib/helpers.harn next to main.harn.

Writing a library file

Library files contain top-level fn declarations:

// lib/math.harn

fn double(x) {
  return x * 2
}

fn clamp(value, low, high) {
  if value < low { return low }
  if value > high { return high }
  return value
}

When imported, these functions become available in the importing file’s scope.

Using imported functions

import "lib/math"

pipeline default(task) {
  log(double(21))        // 42
  log(clamp(150, 0, 100)) // 100
}

Importing pipelines

Imported files can also contain pipelines, which are registered globally by name:

// lib/analysis.harn
pipeline analyze(task) {
  log("Analyzing: ${task}")
}

// main.harn
import "lib/analysis"

pipeline default(task) {
  // the "analyze" pipeline is now registered and available
}

Standard library modules

Harn includes built-in modules that are compiled into the interpreter. Import them with the std/ prefix:

import "std/text"
import "std/collections"
import "std/math"
import "std/path"
import "std/json"

std/text

Text processing utilities for LLM output and code analysis:

Function                                Description
extract_paths(text)                     Extract file paths from text, filtering comments and validating extensions
parse_cells(response)                   Parse fenced code blocks from LLM output. Returns [{type, lang, code}]
filter_test_cells(cells, target_file?)  Filter cells to keep code blocks and write_file calls
truncate_head_tail(text, n)             Keep first/last n lines with omission marker
detect_compile_error(output)            Check for compile error patterns (SyntaxError, etc.)
has_got_want(output)                    Check for got/want test failure patterns
format_test_errors(output)              Extract error-relevant lines (max 20)

std/collections

Collection utilities and store helpers:

Function                           Description
filter_nil(dict)                   Remove entries where value is nil, empty string, or "null"
store_stale(key, max_age_seconds)  Check if a store key’s timestamp is stale
store_refresh(key)                 Update a store key’s timestamp to now

std/math

Extended math utilities:

Function                                        Description
clamp(value, lo, hi)                            Clamp a value between min and max
lerp(a, b, t)                                   Linear interpolation between a and b by t (0..1)
map_range(value, in_lo, in_hi, out_lo, out_hi)  Map a value from one range to another
deg_to_rad(degrees)                             Convert degrees to radians
rad_to_deg(radians)                             Convert radians to degrees
sum(items)                                      Sum a list of numbers
avg(items)                                      Average of a list of numbers (returns 0 for empty lists)

import "std/math"

log(clamp(150, 0, 100))         // 100
log(lerp(0, 10, 0.5))           // 5
log(map_range(50, 0, 100, 0, 1)) // 0.5
log(sum([1, 2, 3, 4]))          // 10
log(avg([10, 20, 30]))          // 20

std/path

Path manipulation utilities:

| Function | Description |
| --- | --- |
| ext(path) | Get the file extension without the dot |
| stem(path) | Get the filename without extension |
| normalize(path) | Normalize path separators (backslash to forward slash) |
| is_absolute(path) | Check if a path is absolute |
| list_files(dir) | List files in a directory (one level) |
| list_dirs(dir) | List subdirectories in a directory |

import "std/path"

log(ext("main.harn"))          // "harn"
log(stem("/src/main.harn"))    // "main"
log(is_absolute("/usr/bin"))   // true

let files = list_files("src")
let dirs = list_dirs(".")

std/json

JSON utility patterns:

| Function | Description |
| --- | --- |
| pretty(value) | Pretty-print a value as indented JSON |
| safe_parse(text) | Safely parse JSON, returning nil on failure instead of throwing |
| merge(a, b) | Shallow-merge two dicts (keys in b override keys in a) |
| pick(data, keys) | Pick specific keys from a dict |
| omit(data, keys) | Omit specific keys from a dict |

import "std/json"

let data = safe_parse("{\"x\": 1}")   // {x: 1}, or nil on bad input
let merged = merge({a: 1}, {b: 2})    // {a: 1, b: 2}
let subset = pick({a: 1, b: 2, c: 3}, ["a", "c"])  // {a: 1, c: 3}
let rest = omit({a: 1, b: 2, c: 3}, ["b"])          // {a: 1, c: 3}

Selective imports

Import specific functions from any module:

import { extract_paths, parse_cells } from "std/text"

Import behavior

  1. The imported file is parsed and executed
  2. Pipelines in the imported file are registered by name
  3. Non-pipeline top-level statements (fn declarations, let bindings) are executed, making their values available
  4. Circular imports are detected and skipped (each file is imported at most once)
  5. The working directory is temporarily changed to the imported file’s directory, so nested imports resolve correctly

Pipeline inheritance

Pipelines can extend other pipelines:

pipeline base(task) {
  log("Step 1: setup")
  log("Step 2: execute")
  log("Step 3: cleanup")
}

pipeline custom(task) extends base {
  override fn setup() {
    log("Custom setup")
  }
}

If the child pipeline has override declarations, the parent’s body runs with the overrides applied. If the child has no overrides, the child’s body replaces the parent’s entirely.
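For instance, a child without any override declarations simply supplies a new body. A minimal sketch reusing the base pipeline above (the pipeline name is made up for illustration):

```
pipeline replacement(task) extends base {
  // no override fns, so this body replaces base's three steps entirely
  log("Runs instead of base's setup/execute/cleanup")
}
```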

Organizing a project

A typical project structure:

my-project/
  main.harn
  lib/
    context.harn      # shared context-gathering functions
    agent.harn        # shared agent utility functions
    helpers.harn      # general-purpose utilities

// main.harn
import "lib/context"
import "lib/agent"
import "lib/helpers"

pipeline default(task, project) {
  let ctx = gather_context(task, project)
  let result = run_agent(ctx)
  finalize(result)
}

Concurrency

Harn has built-in concurrency primitives that don’t require callbacks, promises, or async/await boilerplate.

spawn and await

Launch background tasks and collect results:

let handle = spawn {
  sleep(1s)
  "done"
}

let result = await(handle)  // blocks until complete
log(result)                 // "done"

Cancel a task before it finishes:

let handle = spawn { sleep(10s) }
cancel(handle)

Each spawned task runs in an isolated interpreter instance.

parallel

Run N tasks concurrently and collect results in order:

let results = parallel(5) { i ->
  i * 10
}
// [0, 10, 20, 30, 40]

The variable i is the zero-based task index. Results are always returned in index order regardless of completion order.

parallel_map

Map over a collection concurrently:

let files = ["a.txt", "b.txt", "c.txt"]

let contents = parallel_map(files) { file ->
  read_file(file)
}

Results preserve the original list order.

retry

Automatically retry a block that might fail:

retry 3 {
  http_get("https://flaky-api.example.com/data")
}

Executes the body up to N times. If the body succeeds, returns immediately. If all attempts fail, returns nil. Note that return statements inside retry propagate out (they are not retried).

Channels

Message-passing between concurrent tasks:

let ch = channel("events")
send(ch, {event: "start", timestamp: timestamp()})
let msg = receive(ch)

Channel iteration

You can iterate over a channel with a for loop. The loop receives messages one at a time and exits when the channel is closed and fully drained:

let ch = channel("stream")

spawn {
  send(ch, "chunk 1")
  send(ch, "chunk 2")
  close_channel(ch)
}

for chunk in ch {
  log(chunk)
}
// prints "chunk 1" then "chunk 2", then the loop ends

This is especially useful with llm_stream, which returns a channel of response chunks:

let stream = llm_stream("Tell me a story", "You are a storyteller")
for chunk in stream {
  print(chunk)
}

Use try_receive(ch) for non-blocking reads – it returns nil immediately if no message is available. Use close_channel(ch) to signal that no more messages will be sent.
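A non-blocking read can be sketched with try_receive, assuming the usual nil comparison (the channel name and message are made up):

```
let ch = channel("work")
send(ch, "job 1")

let msg = try_receive(ch)
if msg != nil {
  log(msg)              // got a message without blocking
} else {
  log("no messages yet")
}
```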

Atomics

Thread-safe counters:

let counter = atomic(0)
log(atomic_get(counter))         // 0

let c2 = atomic_add(counter, 5)
log(atomic_get(c2))              // 5

let c3 = atomic_set(c2, 100)
log(atomic_get(c3))              // 100

Atomic operations return new atomic values (they don’t mutate in place).

Mutex

Mutual exclusion for critical sections:

mutex {
  // only one task executes this block at a time
  var count = count + 1
}

Deadline

Set a timeout on a block of work:

deadline 30s {
  // must complete within 30 seconds
  agent_loop(task, system, {persistent: true})
}

LLM calls and agent loops

Harn has built-in support for calling language models and running persistent agent loops. No libraries or SDKs needed.

Providers

Harn supports four LLM providers. Set the appropriate environment variable to authenticate:

| Provider | Environment variable | Default model |
| --- | --- | --- |
| Anthropic (default) | ANTHROPIC_API_KEY | claude-sonnet-4-20250514 |
| OpenAI | OPENAI_API_KEY | gpt-4o |
| OpenRouter | OPENROUTER_API_KEY | anthropic/claude-sonnet-4-20250514 |
| Ollama | OLLAMA_HOST (optional) | llama3.2 |

Ollama runs locally and doesn’t require an API key. The default host is http://localhost:11434.

llm_call

Make a single LLM request:

let response = llm_call("What is 2 + 2?")

With a system message:

let response = llm_call(
  "Explain quicksort",
  "You are a computer science teacher. Be concise."
)

With options:

let response = llm_call(
  "Translate to French: Hello, world",
  "You are a translator.",
  {
    provider: "openai",
    model: "gpt-4o",
    max_tokens: 1024
  }
)

Parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| prompt | string | yes | The user message |
| system | string | no | System message for the model |
| options | dict | no | Provider, model, and generation settings |

Options dict

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| provider | string | "anthropic" | "anthropic", "openai", "ollama", or "openrouter" |
| model | string | varies by provider | Model identifier |
| max_tokens | int | 4096 | Maximum tokens in the response |

agent_loop

Run an agent that keeps working until it’s done. The agent maintains conversation history across turns and loops until it outputs the ##DONE## sentinel. Returns a dict with {status, text, iterations, duration_ms, tools_used}.

let result = agent_loop(
  "Write a function that sorts a list, then write tests for it.",
  "You are a senior engineer.",
  {persistent: true}
)
log(result.text)       // the accumulated output
log(result.status)     // "done" or "stuck"
log(result.iterations) // number of LLM round-trips

How it works

  1. Sends the prompt to the model
  2. Reads the response
  3. If persistent: true:
    • Checks if the response contains ##DONE##
    • If yes, stops and returns the accumulated output
    • If no, sends a nudge message asking the agent to continue
    • Repeats until done or limits are hit
  4. If persistent: false (default): returns after the first response

agent_loop options

Same as llm_call, plus additional options:

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| persistent | bool | false | Keep looping until ##DONE## |
| max_iterations | int | 50 | Maximum number of LLM round-trips |
| max_nudges | int | 3 | Max consecutive text-only responses before stopping |
| nudge | string | see below | Custom message to send when nudging the agent |

Default nudge message:

You have not output ##DONE## yet — the task is not complete. Use your tools to continue working. Only output ##DONE## when the task is fully complete and verified.

When persistent: true, the system prompt is automatically extended with:

IMPORTANT: You MUST keep working until the task is complete. Do NOT stop to explain or summarize — take action. Output ##DONE## only when the task is fully complete and verified.

Example with retry

retry 3 {
  let result = agent_loop(
    task,
    "You are a coding assistant.",
    {
      persistent: true,
      max_iterations: 30,
      max_nudges: 5,
      provider: "anthropic",
      model: "claude-sonnet-4-20250514"
    }
  )
  log(result.text)
}

Provider API details

Anthropic

  • Endpoint: https://api.anthropic.com/v1/messages
  • Auth: x-api-key header
  • API version: 2023-06-01
  • System message sent as a top-level system field

OpenAI

  • Endpoint: https://api.openai.com/v1/chat/completions
  • Auth: Authorization: Bearer <key>
  • System message sent as a message with role: "system"

OpenRouter

  • Endpoint: https://openrouter.ai/api/v1/chat/completions
  • Auth: Authorization: Bearer <key>
  • Same message format as OpenAI

Ollama

  • Endpoint: <OLLAMA_HOST>/v1/chat/completions
  • Default host: http://localhost:11434
  • No authentication required
  • Same message format as OpenAI

Builtin functions

Complete reference for all built-in functions available in Harn.

Output

| Function | Parameters | Returns | Description |
| --- | --- | --- | --- |
| log(msg) | msg: any | nil | Print with [harn] prefix and newline |
| print(msg) | msg: any | nil | Print without prefix or newline |
| println(msg) | msg: any | nil | Print with newline, no prefix |

Type conversion

| Function | Parameters | Returns | Description |
| --- | --- | --- | --- |
| type_of(value) | value: any | string | Returns type name: "int", "float", "string", "bool", "nil", "list", "dict", "closure", "taskHandle", "duration", "enum", "struct" |
| to_string(value) | value: any | string | Convert to string representation |
| to_int(value) | value: any | int or nil | Parse/convert to integer. Floats truncate, bools become 0/1 |
| to_float(value) | value: any | float or nil | Parse/convert to float |
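A quick sketch of the conversion rules from the table:

```
log(type_of(42))        // "int"
log(to_int("7"))        // 7
log(to_int(3.9))        // 3 (floats truncate)
log(to_int(true))       // 1
log(to_string(3.14))    // "3.14"
```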

Runtime shape validation

Function parameters with structural type annotations (shapes) are validated at runtime. If a dict or struct argument is missing a required field or has the wrong field type, a descriptive error is thrown before the function body executes.

fn greet(u: {name: string, age: int}) {
  println("${u.name} is ${u.age}")
}

greet({name: "Alice", age: 30})   // OK
greet({name: "Alice"})            // Error: parameter 'u': missing field 'age' (int)

See Error handling – Runtime shape validation errors for more details.

Result

Harn has a built-in Result type for representing success/failure values without exceptions. Ok and Err create Result.Ok and Result.Err enum variants respectively. When called on a non-Result value, unwrap and unwrap_or pass the value through unchanged.

| Function | Parameters | Returns | Description |
| --- | --- | --- | --- |
| Ok(value) | value: any | Result.Ok | Create a Result.Ok value |
| Err(value) | value: any | Result.Err | Create a Result.Err value |
| is_ok(result) | result: any | bool | Returns true if value is Result.Ok |
| is_err(result) | result: any | bool | Returns true if value is Result.Err |
| unwrap(result) | result: any | any | Extract Ok value. Throws on Err. Non-Result values pass through |
| unwrap_or(result, default) | result: any, default: any | any | Extract Ok value. Returns default on Err. Non-Result values pass through |
| unwrap_err(result) | result: any | any | Extract Err value. Throws on non-Err |

Example:

let good = Ok(42)
let bad = Err("something went wrong")

println(is_ok(good))             // true
println(is_err(bad))             // true

println(unwrap(good))            // 42
println(unwrap_or(bad, 0))       // 0
println(unwrap_err(bad))         // something went wrong

JSON

| Function | Parameters | Returns | Description |
| --- | --- | --- | --- |
| json_parse(str) | str: string | value | Parse JSON string into Harn values. Throws on invalid JSON |
| json_stringify(value) | value: any | string | Serialize Harn value to JSON. Closures and handles become null |
| json_validate(data, schema) | data: any, schema: dict | bool | Validate data against a schema. Returns true if valid, throws with details if not |
| json_extract(text, key?) | text: string, key: string (optional) | value | Extract JSON from text (strips markdown code fences). If key given, returns that key's value |

Type mapping:

| JSON | Harn |
| --- | --- |
| string | string |
| integer | int |
| decimal/exponent | float |
| true/false | bool |
| null | nil |
| array | list |
| object | dict |
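The mapping can be seen in a simple round-trip:

```
let data = json_parse("{\"name\": \"Ada\", \"scores\": [1, 2.5]}")
log(type_of(data.scores))      // "list"
log(type_of(data.scores[1]))   // "float"
log(json_stringify(data))      // back to a JSON string
```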

json_validate schema format

The schema is a plain Harn dict (not JSON Schema). Supported keys:

| Key | Type | Description |
| --- | --- | --- |
| type | string | Expected type: "string", "int", "float", "bool", "list", "dict", "any" |
| required | list | List of required key names (for dicts) |
| properties | dict | Dict mapping property names to sub-schemas (for dicts) |
| items | dict | Schema to validate each item against (for lists) |

Example:

let schema = {
  type: "dict",
  required: ["name", "age"],
  properties: {
    name: {type: "string"},
    age: {type: "int"},
    tags: {type: "list", items: {type: "string"}}
  }
}
json_validate(data, schema)  // throws if invalid

json_extract

Extracts JSON from LLM responses that may contain markdown code fences or surrounding prose. Handles ```json ... ```, ``` ... ```, and bare JSON with surrounding text.

let response = llm_call("Return JSON with name and age")
let data = json_extract(response)         // parse, stripping fences
let name = json_extract(response, "name") // extract just one key

Math

| Function | Parameters | Returns | Description |
| --- | --- | --- | --- |
| abs(n) | n: int or float | int or float | Absolute value |
| ceil(n) | n: float | int | Ceiling (rounds up). Ints pass through unchanged |
| floor(n) | n: float | int | Floor (rounds down). Ints pass through unchanged |
| round(n) | n: float | int | Round to nearest integer. Ints pass through unchanged |
| sqrt(n) | n: int or float | float | Square root |
| pow(base, exp) | base: number, exp: number | int or float | Exponentiation. Returns int when both args are int and exp is non-negative |
| min(a, b) | a: number, b: number | int or float | Minimum of two values. Returns float if either argument is float |
| max(a, b) | a: number, b: number | int or float | Maximum of two values. Returns float if either argument is float |
| random() | none | float | Random float in [0, 1) |
| random_int(min, max) | min: int, max: int | int | Random integer in [min, max] inclusive |
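For example, the rounding and integer-exponent rules from the table:

```
log(floor(3.7))        // 3
log(ceil(3.2))         // 4
log(round(3.5))        // 4
log(pow(2, 10))        // 1024 (int: both args int, non-negative exp)
log(random_int(1, 6))  // a die roll, inclusive on both ends
```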

Trigonometry

| Function | Parameters | Returns | Description |
| --- | --- | --- | --- |
| sin(n) | n: float | float | Sine (radians) |
| cos(n) | n: float | float | Cosine (radians) |
| tan(n) | n: float | float | Tangent (radians) |
| asin(n) | n: float | float | Inverse sine |
| acos(n) | n: float | float | Inverse cosine |
| atan(n) | n: float | float | Inverse tangent |
| atan2(y, x) | y: float, x: float | float | Two-argument inverse tangent |

Logarithms and exponentials

| Function | Parameters | Returns | Description |
| --- | --- | --- | --- |
| log2(n) | n: float | float | Base-2 logarithm |
| log10(n) | n: float | float | Base-10 logarithm |
| ln(n) | n: float | float | Natural logarithm |
| exp(n) | n: float | float | Euler's number raised to the power n |

Constants and utilities

| Function | Parameters | Returns | Description |
| --- | --- | --- | --- |
| pi() | none | float | The constant pi (3.14159…) |
| e() | none | float | Euler's number (2.71828…) |
| sign(n) | n: int or float | int | Sign of a number: -1, 0, or 1 |
| is_nan(n) | n: float | bool | Check if value is NaN |
| is_infinite(n) | n: float | bool | Check if value is infinite |

Sets

| Function | Parameters | Returns | Description |
| --- | --- | --- | --- |
| set(items?) | items: list (optional) | set | Create a new set, optionally from a list |
| set_add(s, value) | s: set, value: any | set | Add a value to a set, returns new set |
| set_remove(s, value) | s: set, value: any | set | Remove a value from a set, returns new set |
| set_contains(s, value) | s: set, value: any | bool | Check if set contains a value |
| set_union(a, b) | a: set, b: set | set | Union of two sets |
| set_intersect(a, b) | a: set, b: set | set | Intersection of two sets |
| set_intersection(a, b) | a: set, b: set | set | Alias for set_intersect |
| set_difference(a, b) | a: set, b: set | set | Difference (elements in a but not b) |
| set_symmetric_difference(a, b) | a: set, b: set | set | Elements in either but not both |
| set_is_subset(a, b) | a: set, b: set | bool | True if all elements of a are in b |
| set_is_superset(a, b) | a: set, b: set | bool | True if a contains all elements of b |
| set_is_disjoint(a, b) | a: set, b: set | bool | True if a and b share no elements |
| to_list(s) | s: set | list | Convert a set to a list |
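A short sketch of the function-style set API (element order of to_list may vary):

```
let a = set([1, 2, 3])
let b = set([3, 4])

log(set_contains(a, 2))                  // true
log(to_list(set_intersect(a, b)))        // [3]
log(set_is_disjoint(a, set([9])))        // true
```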

Set methods (dot syntax)

Sets also support method syntax: my_set.union(other).

| Method | Parameters | Returns | Description |
| --- | --- | --- | --- |
| .count() / .len() | none | int | Number of elements |
| .empty() | none | bool | True if set is empty |
| .contains(val) | val: any | bool | Check membership |
| .add(val) | val: any | set | New set with val added |
| .remove(val) | val: any | set | New set with val removed |
| .union(other) | other: set | set | Union |
| .intersect(other) | other: set | set | Intersection |
| .difference(other) | other: set | set | Elements in self but not other |
| .symmetric_difference(other) | other: set | set | Elements in either but not both |
| .is_subset(other) | other: set | bool | True if self is a subset of other |
| .is_superset(other) | other: set | bool | True if self is a superset of other |
| .is_disjoint(other) | other: set | bool | True if no shared elements |
| .to_list() | none | list | Convert to list |
| .map(fn) | fn: closure | set | Transform elements (deduplicates) |
| .filter(fn) | fn: closure | set | Keep elements matching predicate |
| .any(fn) | fn: closure | bool | True if any element matches |
| .all(fn) / .every(fn) | fn: closure | bool | True if all elements match |

String functions

| Function | Parameters | Returns | Description |
| --- | --- | --- | --- |
| len(value) | value: string, list, or dict | int | Length of string (chars), list (items), or dict (keys) |
| trim(str) | str: string | string | Remove leading and trailing whitespace |
| lowercase(str) | str: string | string | Convert to lowercase |
| uppercase(str) | str: string | string | Convert to uppercase |
| split(str, sep) | str: string, sep: string | list | Split string by separator |
| starts_with(str, prefix) | str: string, prefix: string | bool | Check if string starts with prefix |
| ends_with(str, suffix) | str: string, suffix: string | bool | Check if string ends with suffix |
| contains(str, substr) | str: string, substr: string | bool | Check if string contains substring. Also works on lists |
| replace(str, old, new) | str: string, old: string, new: string | string | Replace all occurrences |
| join(list, sep) | list: list, sep: string | string | Join list elements with separator |
| substring(str, start, len?) | str: string, start: int, len: int | string | Extract substring from start position |
| format(template, ...) | template: string, args: any | string | Format string with {} placeholders |

String methods (dot syntax)

These are called on string values with dot notation: "hello".uppercase().

| Method | Parameters | Returns | Description |
| --- | --- | --- | --- |
| .trim() | none | string | Remove leading/trailing whitespace |
| .trim_start() | none | string | Remove leading whitespace only |
| .trim_end() | none | string | Remove trailing whitespace only |
| .lines() | none | list | Split string by newlines |
| .char_at(index) | index: int | string or nil | Character at index (nil if out of bounds) |
| .index_of(substr) | substr: string | int | First character offset of substring (-1 if not found) |
| .last_index_of(substr) | substr: string | int | Last character offset of substring (-1 if not found) |
| .len() | none | int | Character count |
| .chars() | none | list | List of single-character strings |
| .reverse() | none | string | Reversed string |
| .repeat(n) | n: int | string | Repeat n times |
| .pad_left(width, char?) | width: int, char: string | string | Pad to width with char (default space) |
| .pad_right(width, char?) | width: int, char: string | string | Pad to width with char (default space) |
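Mixing the function and method forms from the two tables above:

```
log(split("a,b,c", ","))       // ["a", "b", "c"]
log("  hi  ".trim())           // "hi"
log("abc".reverse())           // "cba"
log("7".pad_left(3, "0"))      // "007"
```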

List methods (dot syntax)

| Method | Parameters | Returns | Description |
| --- | --- | --- | --- |
| .map(fn) | fn: closure | list | Transform each element |
| .filter(fn) | fn: closure | list | Keep elements where fn returns truthy |
| .reduce(init, fn) | init: any, fn: closure | any | Fold with accumulator |
| .find(fn) | fn: closure | any or nil | First element matching predicate |
| .find_index(fn) | fn: closure | int | Index of first match (-1 if not found) |
| .any(fn) | fn: closure | bool | True if any element matches |
| .all(fn) / .every(fn) | fn: closure | bool | True if all elements match |
| .none(fn?) | fn: closure | bool | True if no elements match (no arg: checks emptiness) |
| .first(n?) | n: int (optional) | any or list | First element, or first n elements |
| .last(n?) | n: int (optional) | any or list | Last element, or last n elements |
| .partition(fn) | fn: closure | list | Split into [[truthy], [falsy]] |
| .group_by(fn) | fn: closure | dict | Group into dict keyed by fn result |
| .sort() / .sort_by(fn) | fn: closure (optional) | list | Sort (natural or by key function) |
| .min() / .max() | none | any | Minimum/maximum value |
| .min_by(fn) / .max_by(fn) | fn: closure | any | Min/max by key function |
| .chunk(size) | size: int | list | Split into chunks of size |
| .each_cons(size) | size: int | list | Sliding windows of size |
| .compact() | none | list | Remove nil values |
| .unique() | none | list | Remove duplicates |
| .flatten() | none | list | Flatten one level of nesting |
| .flat_map(fn) | fn: closure | list | Map then flatten |
| .tally() | none | dict | Frequency count: {value: count} |
| .zip(other) | other: list | list | Pair elements from two lists |
| .enumerate() | none | list | List of {index, value} dicts |
| .take(n) / .skip(n) | n: int | list | First/remaining n elements |
| .sum() | none | int or float | Sum of numeric values |
| .join(sep?) | sep: string | string | Join to string |
| .reverse() | none | list | Reversed list |
| .push(item) / .pop() | item: any | list | New list with item added/removed (immutable) |
| .contains(item) | item: any | bool | Check if list contains item |
| .index_of(item) | item: any | int | Index of item (-1 if not found) |
| .slice(start, end?) | start: int, end: int | list | Slice with negative index support |
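A chaining sketch, assuming the same { x -> ... } closure syntax used by parallel_map:

```
let nums = [3, 1, 4, 1, 5]

log(nums.unique().sort())          // [1, 3, 4, 5]
log(nums.map({ n -> n * 2 }))      // [6, 2, 8, 2, 10]
log(nums.sum())                    // 14
```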

Path functions

| Function | Parameters | Returns | Description |
| --- | --- | --- | --- |
| dirname(path) | path: string | string | Directory component of path |
| basename(path) | path: string | string | File name component of path |
| extname(path) | path: string | string | File extension including dot (e.g., .harn) |
| path_join(parts...) | parts: strings | string | Join path components |

File I/O

| Function | Parameters | Returns | Description |
| --- | --- | --- | --- |
| read_file(path) | path: string | string | Read entire file as UTF-8 string. Throws on failure |
| write_file(path, content) | path: string, content: string | nil | Write string to file. Throws on failure |
| append_file(path, content) | path: string, content: string | nil | Append string to file, creating it if it doesn't exist. Throws on failure |
| copy_file(src, dst) | src: string, dst: string | nil | Copy a file. Throws on failure |
| delete_file(path) | path: string | nil | Delete a file or directory (recursive). Throws on failure |
| file_exists(path) | path: string | bool | Check if a file or directory exists |
| list_dir(path?) | path: string (default ".") | list | List directory contents as sorted list of file names. Throws on failure |
| mkdir(path) | path: string | nil | Create directory and all parent directories. Throws on failure |
| stat(path) | path: string | dict | File metadata: {size, is_file, is_dir, readonly, modified}. Throws on failure |
| temp_dir() | none | string | System temporary directory path |
| render(path, bindings?) | path: string, bindings: dict | string | Read a template file and replace {{key}} placeholders with values from bindings dict. Without bindings, just reads the file |
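A small write/read round-trip using the functions above (the file name is made up):

```
let path = path_join(temp_dir(), "notes.txt")

write_file(path, "hello")
append_file(path, " world")

log(read_file(path))        // "hello world"
log(stat(path).is_file)     // true
delete_file(path)
log(file_exists(path))      // false
```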

Environment and system

| Function | Parameters | Returns | Description |
| --- | --- | --- | --- |
| env(name) | name: string | string or nil | Read environment variable |
| timestamp() | none | float | Unix timestamp in seconds with sub-second precision |
| elapsed() | none | int | Milliseconds since VM startup |
| exec(cmd, args...) | cmd: string, args: strings | dict | Execute external command. Returns {stdout, stderr, status, success} |
| shell(cmd) | cmd: string | dict | Execute command via shell. Returns {stdout, stderr, status, success} |
| exit(code) | code: int (default 0) | never | Terminate the process |
| username() | none | string | Current OS username |
| hostname() | none | string | Machine hostname |
| platform() | none | string | OS name: "darwin", "linux", or "windows" |
| arch() | none | string | CPU architecture (e.g., "aarch64", "x86_64") |
| home_dir() | none | string | User's home directory path |
| pid() | none | int | Current process ID |
| cwd() | none | string | Current working directory |
| source_dir() | none | string | Directory of the currently-executing .harn file (falls back to cwd) |
| project_root() | none | string or nil | Nearest ancestor directory containing harn.toml |
| date_iso() | none | string | Current UTC time in ISO 8601 format (e.g., "2026-03-29T14:30:00.123Z") |

Regular expressions

| Function | Parameters | Returns | Description |
| --- | --- | --- | --- |
| regex_match(pattern, text) | pattern: string, text: string | list or nil | Find all non-overlapping matches. Returns nil if no matches |
| regex_replace(pattern, replacement, text) | pattern: string, replacement: string, text: string | string | Replace all matches. Throws on invalid regex |
| regex_captures(pattern, text) | pattern: string, text: string | list | Find all matches with capture group details |

regex_captures

Returns a list of dicts, one per match. Each dict contains:

  • match – the full matched string
  • groups – a list of positional capture group values (from (...))
  • Named capture groups (from (?P<name>...)) appear as additional keys

let results = regex_captures("(\\w+)@(\\w+)", "alice@example bob@test")
// [
//   {match: "alice@example", groups: ["alice", "example"]},
//   {match: "bob@test", groups: ["bob", "test"]}
// ]

Named capture groups are added as top-level keys on each result dict:

let named = regex_captures("(?P<user>\\w+):(?P<role>\\w+)", "alice:admin")
// [{match: "alice:admin", groups: ["alice", "admin"], user: "alice", role: "admin"}]

Returns an empty list if there are no matches. Throws on invalid regex.

Encoding

| Function | Parameters | Returns | Description |
| --- | --- | --- | --- |
| base64_encode(string) | string: string | string | Base64 encode a string (standard alphabet with padding) |
| base64_decode(string) | string: string | string | Base64 decode a string. Throws on invalid input |

Example:

let encoded = base64_encode("Hello, World!")
println(encoded)                  // SGVsbG8sIFdvcmxkIQ==
println(base64_decode(encoded))   // Hello, World!

Hashing

| Function | Parameters | Returns | Description |
| --- | --- | --- | --- |
| sha256(string) | string: string | string | SHA-256 hash, returned as a lowercase hex-encoded string |
| md5(string) | string: string | string | MD5 hash, returned as a lowercase hex-encoded string |

Example:

println(sha256("hello"))  // 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
println(md5("hello"))     // 5d41402abc4b2a76b9719d911017c592

Date/Time

| Function | Parameters | Returns | Description |
| --- | --- | --- | --- |
| date_now() | none | dict | Current UTC datetime as dict with year, month, day, hour, minute, second, weekday, and timestamp fields |
| date_parse(str) | str: string | float | Parse a datetime string (e.g., "2024-01-15 10:30:00") into a Unix timestamp. Extracts numeric components from the string. Throws if fewer than 3 parts (year, month, day). Validates month (1-12), day (1-31), hour (0-23), minute (0-59), second (0-59) |
| date_format(dt, format?) | dt: float, int, or dict; format: string (default "%Y-%m-%d %H:%M:%S") | string | Format a timestamp or date dict as a string. Supports %Y, %m, %d, %H, %M, %S placeholders. Throws for negative timestamps |
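Parsing and re-formatting round-trip cleanly:

```
let ts = date_parse("2024-01-15 10:30:00")

log(date_format(ts))                // "2024-01-15 10:30:00"
log(date_format(ts, "%Y-%m-%d"))    // "2024-01-15"
log(date_now().year)                // current year
```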

Testing

| Function | Parameters | Returns | Description |
| --- | --- | --- | --- |
| assert(condition, msg?) | condition: any, msg: string (optional) | nil | Assert value is truthy. Throws with message on failure |
| assert_eq(a, b, msg?) | a: any, b: any, msg: string (optional) | nil | Assert two values are equal. Throws with message on failure |
| assert_ne(a, b, msg?) | a: any, b: any, msg: string (optional) | nil | Assert two values are not equal. Throws with message on failure |
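Typical assertions in a test pipeline:

```
assert_eq(to_int("42"), 42, "string should parse to 42")
assert(len([1, 2]) == 2)
assert_ne("a", "b", "values should differ")
```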

HTTP

| Function | Parameters | Returns | Description |
| --- | --- | --- | --- |
| http_get(url, options?) | url: string, options: dict | dict | GET request |
| http_post(url, body, options?) | url: string, body: string, options: dict | dict | POST request |
| http_put(url, body, options?) | url: string, body: string, options: dict | dict | PUT request |
| http_patch(url, body, options?) | url: string, body: string, options: dict | dict | PATCH request |
| http_delete(url, options?) | url: string, options: dict | dict | DELETE request |
| http_request(method, url, options?) | method: string, url: string, options: dict | dict | Generic HTTP request |

All HTTP functions return {status: int, headers: dict, body: string, ok: bool}. Options: timeout (ms), retries, backoff (ms), headers (dict), auth (string or {bearer: "token"} or {basic: {user, password}}), follow_redirects (bool), max_redirects (int), body (string). Throws on network errors.
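A sketch combining several of the options above (the URL and env var name are hypothetical):

```
let resp = http_get("https://api.example.com/health", {
  timeout: 5000,                        // ms
  retries: 2,
  backoff: 500,                         // ms between retries
  headers: {accept: "application/json"},
  auth: {bearer: env("API_TOKEN")}
})

if resp.ok {
  log(resp.status)
  log(json_extract(resp.body))
}
```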

Mock HTTP

For testing pipelines that make HTTP calls without hitting real servers.

| Function | Parameters | Returns | Description |
| --- | --- | --- | --- |
| http_mock(method, url_pattern, response) | method: string, url_pattern: string, response: dict | nil | Register a mock. Use * in url_pattern for glob matching (supports multiple * wildcards, e.g., https://api.example.com/*/items/*) |
| http_mock_clear() | none | nil | Clear all mocks and recorded calls |
| http_mock_calls() | none | list | Return list of {method, url, body} for all intercepted calls |

http_mock("GET", "https://api.example.com/users", {
  status: 200,
  body: "{\"users\": [\"alice\"]}",
  headers: {}
})
let resp = http_get("https://api.example.com/users")
assert_eq(resp.status, 200)

Interactive input

| Function | Parameters | Returns | Description |
| --- | --- | --- | --- |
| prompt_user(msg) | msg: string (optional) | string | Display message, read line from stdin |

Async and timing

| Function | Parameters | Returns | Description |
| --- | --- | --- | --- |
| sleep(duration) | duration: int (ms) or duration literal | nil | Pause execution |

Concurrency primitives

Channels

| Function | Parameters | Returns | Description |
| --- | --- | --- | --- |
| channel(name?) | name: string (default "default") | dict | Create a channel with name, type, and messages fields |
| send(ch, value) | ch: dict, value: any | nil | Send a value to a channel |
| receive(ch) | ch: dict | any | Receive a value from a channel (blocks until data available) |
| close_channel(ch) | ch: channel | nil | Close a channel, preventing further sends |
| try_receive(ch) | ch: channel | any or nil | Non-blocking receive. Returns nil if no data available |
| select(ch1, ch2, ...) | channels: channel | dict or nil | Wait for data on any channel. Returns {index, value, channel} for the first ready channel, or nil if all closed |
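A sketch of select waiting on whichever channel produces data first (channel names are made up):

```
let a = channel("a")
let b = channel("b")

spawn { send(b, "from b") }

let ready = select(a, b)   // blocks until one channel has data
log(ready.index)           // 1 (b was ready first)
log(ready.value)           // "from b"
```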

Atomics

| Function | Parameters | Returns | Description |
| --- | --- | --- | --- |
| atomic(initial?) | initial: any (default 0) | dict | Create an atomic value |
| atomic_get(a) | a: dict | any | Read the current value |
| atomic_set(a, value) | a: dict, value: any | dict | Returns new atomic with updated value |
| atomic_add(a, delta) | a: dict, delta: int | dict | Returns new atomic with incremented value |

Persistent store

| Function | Parameters | Returns | Description |
| --- | --- | --- | --- |
| store_get(key) | key: string | any | Retrieve value from store, nil if missing |
| store_set(key, value) | key: string, value: any | nil | Store value, auto-saves to .harn/store.json |
| store_delete(key) | key: string | nil | Remove key from store |
| store_list() | none | list | List all keys (sorted) |
| store_save() | none | nil | Explicitly flush store to disk |
| store_clear() | none | nil | Remove all keys from store |

The store is backed by .harn/store.json relative to the script’s directory. The file is created lazily on first store_set. In bridge mode, the host can override these builtins.
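A typical read/write cycle (the key name is made up):

```
store_set("last_run", timestamp())   // persisted to .harn/store.json

log(store_list())                    // includes "last_run"
log(store_get("last_run"))           // the saved timestamp
log(store_get("missing"))            // nil

store_delete("last_run")
```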

LLM

See LLM calls and agent loops for full documentation.

| Function | Parameters | Returns | Description |
| --- | --- | --- | --- |
| llm_call(prompt, system?, options?) | prompt: string, system: string, options: dict | string | Single LLM request |
| agent_loop(prompt, system?, options?) | prompt: string, system: string, options: dict | dict | Multi-turn agent loop with ##DONE## sentinel. Returns {status, text, iterations, duration_ms, tools_used} |
| llm_info() | none | dict | Current LLM config: {provider, model, api_key_set} |
| llm_usage() | none | dict | Cumulative usage: {input_tokens, output_tokens, total_duration_ms, call_count} |
| llm_resolve_model(alias) | alias: string | dict | Resolve model alias to {id, provider} via providers.toml |
| llm_infer_provider(model_id) | model_id: string | string | Infer provider from model ID (e.g. "claude-*" → "anthropic") |
| llm_model_tier(model_id) | model_id: string | string | Get capability tier: "small", "mid", or "frontier" |
| llm_healthcheck(provider?) | provider: string | dict | Validate API key. Returns {valid, message, metadata} |
| llm_providers() | none | list | List all configured provider names |
| llm_config(provider?) | provider: string | dict | Get provider config (base_url, auth_style, etc.) |
| llm_cost(model, input_tokens, output_tokens) | model: string, input_tokens: int, output_tokens: int | float | Estimate USD cost from embedded pricing table |
| llm_session_cost() | none | dict | Session totals: {total_cost, input_tokens, output_tokens, call_count} |
| llm_budget(max_cost) | max_cost: float | nil | Set session budget in USD. LLM calls throw if exceeded |
| llm_budget_remaining() | none | float or nil | Remaining budget (nil if no budget set) |
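A sketch of cost tracking with a session budget:

```
llm_budget(0.50)   // cap this session at $0.50; further calls throw if exceeded

let answer = llm_call("Summarize the project README")
log(answer)

log(llm_session_cost().total_cost)   // USD spent so far
log(llm_budget_remaining())          // budget left, or nil if none set
```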

Provider configuration

LLM provider endpoints, model aliases, inference rules, and default parameters are configured via a TOML file. The VM searches for config in this order:

  1. HARN_PROVIDERS_CONFIG env var (explicit path)
  2. ~/.config/harn/providers.toml
  3. Built-in defaults (Anthropic, OpenAI, OpenRouter, HuggingFace, Ollama)

See harn init to generate a default config file, or create one manually:

[providers.anthropic]
base_url = "https://api.anthropic.com/v1"
auth_style = "header"
auth_header = "x-api-key"
auth_env = "ANTHROPIC_API_KEY"
chat_endpoint = "/messages"

[aliases]
sonnet = { id = "claude-sonnet-4-20250514", provider = "anthropic" }

[[inference_rules]]
pattern = "claude-*"
provider = "anthropic"

[[tier_rules]]
pattern = "claude-*"
tier = "frontier"

[model_defaults."qwen/*"]
temperature = 0.3
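Given the config above, alias resolution and provider inference can be exercised directly (a sketch; the values returned depend on your providers.toml):

```harn
pipeline default() {
  // "sonnet" comes from the [aliases] section above
  let model = llm_resolve_model("sonnet")
  log("${model.id} (${model.provider})")

  // Pattern matches from [[inference_rules]] and [[tier_rules]]
  log(llm_infer_provider("claude-sonnet-4-20250514"))
  log(llm_model_tier("claude-sonnet-4-20250514"))
}
```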

Timers

| Function | Parameters | Returns | Description |
|---|---|---|---|
| `timer_start(name?)` | name: string | dict | Start a named timer |
| `timer_end(timer)` | timer: dict | int | Stop timer, prints elapsed, returns milliseconds |
| `elapsed()` | none | int | Milliseconds since process start |
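A minimal timing sketch:

```harn
pipeline default(task) {
  let t = timer_start("llm")
  let result = llm_call(task, "Be brief.")
  let ms = timer_end(t)  // prints elapsed and returns milliseconds

  log("LLM call: ${ms} ms (${elapsed()} ms since process start)")
  log(result)
}
```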

Structured logging

| Function | Parameters | Returns | Description |
|---|---|---|---|
| `log_json(key, value)` | key: string, value: any | nil | Emit a JSON log line with timestamp |
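For example:

```harn
pipeline default() {
  // Each call emits one timestamped JSON line
  log_json("deploy", {env: "prod", ok: true})
  log_json("retry_count", 3)
}
```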

Metadata

Project metadata store backed by .burin/metadata/ sharded JSON files. Supports hierarchical namespace resolution (child directories inherit from parents).

| Function | Parameters | Returns | Description |
|---|---|---|---|
| `metadata_get(dir, namespace?)` | dir: string, namespace: string | dict or nil | Read metadata with inheritance |
| `metadata_set(dir, namespace, data)` | dir: string, namespace: string, data: dict | nil | Write metadata for directory/namespace |
| `metadata_save()` | none | nil | Flush metadata to disk |
| `metadata_stale(project)` | project: string | dict | Check staleness: `{any_stale, tier1, tier2}` |
| `metadata_refresh_hashes()` | none | nil | Recompute content hashes |
| `compute_content_hash(dir)` | dir: string | string | Hash of directory contents |
| `invalidate_facts(dir)` | dir: string | nil | Mark cached facts as stale |
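A sketch of the write/read cycle. The shape of the data dict is free-form, and whether inheritance merges field by field or falls back to the parent's whole dict is not specified above:

```harn
pipeline default() {
  metadata_set("src", "facts", {language: "rust", reviewed: true})
  metadata_set("src/parser", "facts", {reviewed: false})

  // Reads "src/parser" first, inheriting anything missing from "src"
  let facts = metadata_get("src/parser", "facts")
  log(facts)

  metadata_save()
}
```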

MCP (Model Context Protocol)

Connect to external tool servers using the Model Context Protocol. Supports stdio transport (spawns a child process).

| Function | Parameters | Returns | Description |
|---|---|---|---|
| `mcp_connect(command, args?)` | command: string, args: list | mcp_client | Spawn an MCP server and perform the initialize handshake |
| `mcp_list_tools(client)` | client: mcp_client | list | List available tools from the server |
| `mcp_call(client, name, arguments?)` | client: mcp_client, name: string, arguments: dict | string or list | Call a tool and return the result |
| `mcp_list_resources(client)` | client: mcp_client | list | List available resources from the server |
| `mcp_list_resource_templates(client)` | client: mcp_client | list | List resource templates (URI templates) from the server |
| `mcp_read_resource(client, uri)` | client: mcp_client, uri: string | string or list | Read a resource by URI |
| `mcp_list_prompts(client)` | client: mcp_client | list | List available prompts from the server |
| `mcp_get_prompt(client, name, arguments?)` | client: mcp_client, name: string, arguments: dict | dict | Get a prompt with optional arguments |
| `mcp_server_info(client)` | client: mcp_client | dict | Get connection info (name, connected) |
| `mcp_disconnect(client)` | client: mcp_client | nil | Kill the server process and release resources |

Example:

let client = mcp_connect("npx", ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"])
let tools = mcp_list_tools(client)
println(tools)

let result = mcp_call(client, "read_file", {"path": "/tmp/hello.txt"})
println(result)

mcp_disconnect(client)

Notes:

  • mcp_call returns a string when the tool produces a single text block, a list of content dicts for multi-block results, or nil when empty.
  • If the tool reports isError: true, mcp_call throws the error text.
  • mcp_connect throws if the command cannot be spawned or the initialize handshake fails.

Auto-connecting MCP servers via harn.toml

Instead of calling mcp_connect manually, you can declare MCP servers in harn.toml. They will be connected automatically before the pipeline executes and made available through the global mcp dict.

Add a [[mcp]] entry for each server:

[[mcp]]
name = "filesystem"
command = "npx"
args = ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]

[[mcp]]
name = "github"
command = "npx"
args = ["-y", "@modelcontextprotocol/server-github"]

Each entry requires:

| Field | Type | Description |
|---|---|---|
| `name` | string | Identifier used to access the client (e.g., `mcp.filesystem`) |
| `command` | string | Executable to spawn |
| `args` | list of strings | Command-line arguments (default: empty) |

The connected clients are available as properties on the mcp global dict:

pipeline default() {
  let tools = mcp_list_tools(mcp.filesystem)
  println(tools)

  let result = mcp_call(mcp.github, "list_issues", {repo: "harn"})
  println(result)
}

If a server fails to connect, a warning is printed to stderr and that server is omitted from the mcp dict. Other servers still connect normally. The mcp global is only defined when at least one server connects successfully.

MCP Server Mode

Harn pipelines can expose tools, resources, resource templates, and prompts as an MCP server using harn mcp-serve. The CLI serves them over stdio using the MCP protocol, making them callable by Claude Desktop, Cursor, or any MCP client.

| Function | Parameters | Returns | Description |
|---|---|---|---|
| `tool_registry()` | none | dict | Create an empty tool registry |
| `tool_define(registry, name, desc, config)` | registry, name, desc: string, config: dict | dict | Add a tool (config: `{params, handler, annotations?}`) |
| `mcp_tools(registry)` | registry: dict | nil | Register tools for MCP serving |
| `mcp_resource(config)` | config: dict | nil | Register a static resource (`{uri, name, text, description?, mime_type?}`) |
| `mcp_resource_template(config)` | config: dict | nil | Register a resource template (`{uri_template, name, handler, description?, mime_type?}`) |
| `mcp_prompt(config)` | config: dict | nil | Register a prompt (`{name, handler, description?, arguments?}`) |

Tool annotations (MCP spec annotations field) can be passed in the tool_define config to describe tool behavior:

tools = tool_define(tools, "search", "Search files", {
  params: { query: "string" },
  handler: { args -> "results for " + args.query },
  annotations: {
    title: "File Search",
    readOnlyHint: true,
    destructiveHint: false
  }
})

Example (agent.harn):

pipeline main(task) {
  var tools = tool_registry()
  tools = tool_define(tools, "greet", "Greet someone", {
    params: { name: "string" },
    handler: { args -> "Hello, " + args.name + "!" }
  })
  mcp_tools(tools)

  mcp_resource({
    uri: "docs://readme",
    name: "README",
    text: "# My Agent\nA demo MCP server."
  })

  mcp_resource_template({
    uri_template: "config://{key}",
    name: "Config Values",
    handler: { args -> "value for " + args.key }
  })

  mcp_prompt({
    name: "review",
    description: "Code review prompt",
    arguments: [{ name: "code", required: true }],
    handler: { args -> "Please review:\n" + args.code }
  })
}

Run as an MCP server:

harn mcp-serve agent.harn

Configure in Claude Desktop (claude_desktop_config.json):

{
  "mcpServers": {
    "my-agent": {
      "command": "harn",
      "args": ["mcp-serve", "agent.harn"]
    }
  }
}

Notes:

  • mcp_tools(registry) (or the alias mcp_serve) must be called to register tools.
  • Resources, resource templates, and prompts are registered individually.
  • All print/println output goes to stderr (stdout is the MCP transport).
  • The server supports the 2024-11-05 MCP protocol version over stdio.
  • Tool handlers receive arguments as a dict and should return a string result.
  • Prompt handlers receive arguments as a dict and return a string (single user message) or a list of {role, content} dicts.
  • Resource template handlers receive URI template variables as a dict and return the resource text.

Harn Cookbook

Practical patterns for building AI agents and pipelines in Harn. Each recipe is self-contained with a short explanation and working code.

1. Basic LLM call

Single-shot prompt with a system message. Set ANTHROPIC_API_KEY (or the appropriate key for your provider) before running.

pipeline default(task) {
  let response = llm_call(
    "Explain the builder pattern in three sentences.",
    "You are a software engineering tutor. Be concise."
  )
  log(response)
}

To use a different provider or model, pass an options dict:

pipeline default(task) {
  let response = llm_call(
    "Explain the builder pattern in three sentences.",
    "You are a software engineering tutor. Be concise.",
    {provider: "openai", model: "gpt-4o", max_tokens: 512}
  )
  log(response)
}

2. Agent loop with tools

Register tools with JSON Schema-compatible definitions, generate a system prompt that describes them, then let the LLM call tools in a loop.

pipeline default(task) {
  var tools = tool_registry()

  tools = tool_add(tools, "read", "Read a file from disk", { path ->
    return read_file(path)
  }, {path: "string"})

  tools = tool_add(tools, "search", "Search code for a pattern", { query ->
    let result = shell("grep -r '" + query + "' src/ || true")
    return result.stdout
  }, {query: "string"})

  let system = tool_prompt(tools)

  var messages = task
  var done = false
  var iterations = 0

  while !done && iterations < 10 {
    let response = llm_call(messages, system)
    let calls = tool_parse_call(response)

    if calls.count == 0 {
      log(response)
      done = true
    } else {
      var tool_output = ""
      for call in calls {
        let tool = tool_find(tools, call.name)
        let handler = tool.handler
        // These tools each take one parameter, so pass the first argument value
        let result = handler(call.arguments[call.arguments.keys()[0]])
        tool_output = tool_output + tool_format_result(call.name, result)
      }
      // Feed the combined tool results back to the model as the next prompt
      messages = tool_output
    }
    iterations = iterations + 1
  }
}

3. Parallel tool execution

Run multiple independent operations concurrently with parallel_map. Results preserve the original list order.

pipeline default(task) {
  let files = ["src/main.rs", "src/lib.rs", "src/utils.rs"]

  let reviews = parallel_map(files) { file ->
    let content = read_file(file)
    llm_call(
      "Review this code for bugs and suggest fixes:\n\n" + content,
      "You are a senior code reviewer. Be specific."
    )
  }

  for i in 0 upto files.count {
    log("=== ${files[i]} ===")
    log(reviews[i])
  }
}

Use parallel when you need to run N indexed tasks rather than mapping over a list:

pipeline default(task) {
  let prompts = [
    "Write a haiku about Rust",
    "Write a haiku about concurrency",
    "Write a haiku about debugging"
  ]

  let results = parallel(prompts.count) { i ->
    llm_call(prompts[i], "You are a poet.")
  }

  for r in results {
    log(r)
  }
}

4. MCP server integration

Connect to an MCP-compatible tool server, list available tools, and call them. This example uses the filesystem MCP server.

pipeline default(task) {
  let client = mcp_connect("npx", ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"])

  // Check connection
  let info = mcp_server_info(client)
  log("Connected to: ${info.name}")

  // List available tools
  let tools = mcp_list_tools(client)
  for tool in tools {
    log("Tool: ${tool.name} - ${tool.description}")
  }

  // Write a file, then read it back
  mcp_call(client, "write_file", {path: "/tmp/hello.txt", content: "Hello from Harn!"})
  let content = mcp_call(client, "read_file", {path: "/tmp/hello.txt"})
  log("File content: ${content}")

  // List directory
  let entries = mcp_call(client, "list_directory", {path: "/tmp"})
  log(entries)

  mcp_disconnect(client)
}

5. Recursive agent with TCO

Tail-recursive functions are optimized by the VM, so they do not overflow the stack even across thousands of iterations. This pattern is useful for processing a queue of work items one at a time.

pipeline default(task) {
  let items = ["Refactor auth module", "Add input validation", "Write unit tests"]

  fn process(remaining, results) {
    if remaining.count == 0 {
      return results
    }
    let item = remaining.first
    let rest = remaining.slice(1)

    let result = retry 3 {
      llm_call(
        "Plan how to: " + item,
        "You are a senior engineer. Output a numbered list of steps."
      )
    }

    return process(rest, results + [{task: item, plan: result}])
  }

  let plans = process(items, [])

  for p in plans {
    log("=== ${p.task} ===")
    log(p.plan)
  }
}

For non-LLM workloads, TCO handles deep recursion without issues:

pipeline default(task) {
  fn sum_to(n, acc) {
    if n <= 0 {
      return acc
    }
    return sum_to(n - 1, acc + n)
  }

  log(sum_to(10000, 0))
}

6. Pipeline composition

Split agent logic across files and compose pipelines using imports and inheritance.

lib/context.harn – shared context-gathering logic:

fn gather_context(task) {
  let readme = read_file("README.md")
  return {
    task: task,
    readme: readme,
    timestamp: timestamp()
  }
}

lib/review.harn – a reusable review pipeline:

import "lib/context"

pipeline review(task) {
  let ctx = gather_context(task)
  let prompt = "Review this project.\n\nREADME:\n" + ctx.readme + "\n\nTask: " + ctx.task
  let result = llm_call(prompt, "You are a code reviewer.")
  log(result)
}

main.harn – extend and customize:

import "lib/review"

pipeline default(task) extends review {
  override fn setup() {
    log("Starting custom review pipeline")
  }
}

7. Error handling in agent loops

Wrap LLM calls in try/catch with retry to handle transient failures. Use typed catch for structured error handling.

pipeline default(task) {
  enum AgentError {
    LlmFailure(message)
    ParseFailure(raw)
    Timeout(seconds)
  }

  fn safe_llm_call(prompt, system) {
    retry 3 {
      try {
        let raw = llm_call(prompt, system)
        let parsed = json_parse(raw)
        return parsed
      } catch (e) {
        log("LLM call failed: ${e}")
        throw AgentError.LlmFailure(to_string(e))
      }
    }
  }

  try {
    let result = safe_llm_call(
      "Return a JSON object with keys 'summary' and 'score'.",
      "You are an evaluator. Always respond with valid JSON only."
    )
    log("Summary: ${result.summary}")
    log("Score: ${result.score}")
  } catch (e: AgentError) {
    match e.variant {
      "LlmFailure" -> { log("LLM failed after retries: ${e.fields[0]}") }
      "ParseFailure" -> { log("Could not parse LLM output: ${e.fields[0]}") }
    }
  } catch (e) {
    log("Unexpected error: ${e}")
  }
}

8. Channel-based coordination

Use channels to coordinate between spawned tasks. One task produces work, another consumes it.

pipeline default(task) {
  let ch = channel("work", 10)
  let results_ch = channel("results", 10)

  // Producer: send work items
  let producer = spawn {
    let items = ["item_a", "item_b", "item_c"]
    for item in items {
      send(ch, item)
    }
    send(ch, "DONE")
  }

  // Consumer: process work items
  let consumer = spawn {
    var processed = 0
    var running = true
    while running {
      let item = receive(ch)
      if item == "DONE" {
        running = false
      } else {
        let result = "processed: " + item
        send(results_ch, result)
        processed = processed + 1
      }
    }
    send(results_ch, "COMPLETE:" + to_string(processed))
  }

  await(producer)
  await(consumer)

  // Collect results
  var collecting = true
  while collecting {
    let msg = receive(results_ch)
    if msg.starts_with("COMPLETE:") {
      log(msg)
      collecting = false
    } else {
      log(msg)
    }
  }
}

9. Context building pattern

Gather context from multiple sources, merge it into a single dict, and pass it to an LLM.

pipeline default(task) {
  fn read_or_empty(path) {
    try {
      return read_file(path)
    } catch (e) {
      return ""
    }
  }

  // Gather context from multiple sources in parallel
  let sources = ["README.md", "CHANGELOG.md", "docs/architecture.md"]

  let contents = parallel_map(sources) { path ->
    {path: path, content: read_or_empty(path)}
  }

  // Build a merged context dict
  var context = {task: task, files: {}}
  for item in contents {
    if item.content != "" {
      context = context.merge({files: context.files.merge({[item.path]: item.content})})
    }
  }

  // Format context for the LLM
  var prompt = "Task: " + task + "\n\n"
  for entry in context.files {
    prompt = prompt + "=== " + entry.key + " ===\n" + entry.value + "\n\n"
  }

  let result = llm_call(prompt, "You are a helpful assistant. Use the provided files as context.")
  log(result)
}

10. Structured output parsing

Ask the LLM for JSON output, parse it with json_parse, and validate the structure before using it.

pipeline default(task) {
  let system = """
You are a task planner. Given a task description, break it into steps.
Respond with ONLY a JSON array of objects, each with "step" (string) and
"priority" (int 1-5). No other text.
"""

  fn get_plan(task_desc) {
    retry 3 {
      let raw = llm_call(task_desc, system)
      let parsed = json_parse(raw)

      // Validate structure
      guard type_of(parsed) == "list" else {
        throw "Expected a JSON array, got: " + type_of(parsed)
      }

      for item in parsed {
        guard item.has("step") && item.has("priority") else {
          throw "Missing required fields in: " + json_stringify(item)
        }
      }

      return parsed
    }
  }

  let plan = get_plan("Build a REST API for a todo app")

  if plan != nil {
    // Keep only high-priority steps (priority 1-3)
    let urgent = plan.filter({ s -> s.priority <= 3 })
    for step in urgent {
      log("[P${step.priority}] ${step.step}")
    }
  } else {
    log("Failed to get a valid plan after retries")
  }
}

11. Sets for deduplication and membership testing

Use sets to track processed items and avoid duplicates. Sets provide O(1)-style membership testing via set_contains and are immutable – operations like set_add return a new set.

pipeline default(task) {
  let urls = [
    "https://example.com/a",
    "https://example.com/b",
    "https://example.com/a",
    "https://example.com/c",
    "https://example.com/b"
  ]

  // Deduplicate with set(), then convert back to a list
  let unique_urls = to_list(set(urls))
  log("${len(unique_urls)} unique URLs out of ${len(urls)} total")

  // Track which URLs have been processed
  var visited = set()

  for url in unique_urls {
    if !set_contains(visited, url) {
      log("Processing: ${url}")
      visited = set_add(visited, url)
    }
  }

  // Set operations: find overlap between two batches
  let batch_a = set("task-1", "task-2", "task-3")
  let batch_b = set("task-2", "task-3", "task-4")

  let already_done = set_intersect(batch_a, batch_b)
  let new_work = set_difference(batch_b, batch_a)

  log("Overlap: ${len(already_done)}, New: ${len(new_work)}")
}

12. Typed functions with runtime enforcement

Add type annotations to function parameters for automatic runtime validation. When a caller passes a value of the wrong type, the VM throws a TypeError before the function body executes.

pipeline default(task) {
  fn summarize(text: string, max_words: int) -> string {
    let words = text.split(" ")
    if words.count <= max_words {
      return text
    }
    let truncated = words.slice(0, max_words)
    return join(truncated, " ") + "..."
  }

  log(summarize("The quick brown fox jumps over the lazy dog", 5))

  // Catch type errors gracefully
  try {
    summarize(42, "not a number")
  } catch (e) {
    log("Caught: ${e}")
    // -> TypeError: parameter 'text' expected string, got int (42)
  }

  // Works with all primitive types: string, int, float, bool, list, dict, set
  fn process_batch(items: list, verbose: bool) {
    for item in items {
      if verbose {
        log("Processing: ${item}")
      }
    }
    log("Done: ${len(items)} items")
  }

  process_batch(["a", "b", "c"], true)
}