Scripting and Automation with the Sidekick API

The sidekick.api package is the Python interface for scripting and automating Sidekick from Binary Ninja. Use it to:

  • Execute LLM agents and prompts locally, with your own API keys or local models
  • Run BNQL queries and semantic searches against loaded binaries
  • Read and write workspace resources (Indexes, Code Maps, Notebook) programmatically
  • Drive headless analysis pipelines outside the Binary Ninja GUI

Prerequisites

  • Sidekick must be installed and active in Binary Ninja
  • Default model endpoints and completion pools (smart, fast) are installed automatically on first launch. They use the Sidekick service as a zero-config backend. To use your own API keys or local models, see Configuring completion pools.

How to import

import sidekick.api as api

Use this import in Binary Ninja scripts and Library Python entry points when you need Sidekick-managed resources such as indexes, code maps, notebooks, prompts, or agents. Do not import plugin implementation modules such as Vector35_Sidekick; those are internal packaging details rather than the supported scripting surface.

All public symbols are also importable directly from sidekick.api:

from sidekick.api import execute_bnql, render_object, concept, load_prompt, init_runtime

Two execution contexts

The API works in two distinct contexts. Understanding which one you are in determines how you write your scripts.

Inside Binary Ninja (interactive or script execution)

When Sidekick is active and a file is open, the BinaryView already has an associated session and kernel. You pass bv directly to API functions.

# Works in the BN scripting console or a script run from the Library panel
from sidekick.api import execute_bnql

funcs = execute_bnql(bv, "/view/function")
print([f.name for f in funcs])

No setup or teardown is needed. The session lifecycle is managed by Sidekick. Library Python scripts run in this same context: bv is pre-bound, and sidekick.api is the supported way to read or mutate Sidekick workspace resources from those scripts.

Library Python automation recipe

import sidekick.api as api
from binaryninja import MediumLevelILOperation

TAILCALL_OPS = {
    MediumLevelILOperation.MLIL_TAILCALL,
    MediumLevelILOperation.MLIL_TAILCALL_UNTYPED,
    MediumLevelILOperation.MLIL_TAILCALL_SSA,
    MediumLevelILOperation.MLIL_TAILCALL_UNTYPED_SSA,
}

# Reuse the current workspace index or create it on first run.
api.get_index(bv, "tailcalls", create_if_missing=True)

# Collect Binary Ninja objects of any indexable kind. add_index_objects
# accepts the objects directly and derives canonical object IDs for them.
tailcalls = [
    insn
    for func in bv.functions
    if func.mlil is not None
    for insn in func.mlil.instructions
    if insn.operation in TAILCALL_OPS
]

entries = api.add_index_objects(
    bv,
    "tailcalls",
    objects=tailcalls,
    prevent_duplicates=True,
)

print({"index": "tailcalls", "added": len(entries)})

This is the recommended pattern for Library automations: keep the logic in Python, use the pre-bound bv, and access Sidekick resources through sidekick.api rather than plugin internals. prevent_duplicates=True makes the script safe to re-run as the binary's analysis evolves.

Outside Binary Ninja (headless scripts)

Use the sidekick() context manager to initialize a headless application instance, then open a session for a file or project.

import sidekick.api as api
from pathlib import Path

app_dir = Path("~/.sidekick-headless").expanduser()

with api.sidekick(app_dir) as sk:
    with sk.session("/path/to/target.bndb") as sess:
        funcs = api.execute_bnql(sess.bv, "/view/function", limit=10)
        print([f.name for f in funcs])

api.sidekick(app_dir) creates a SidekickContext. The app_dir stores workspace databases and index data. By default, that directory is preserved on exit so a caller can reuse cached state across runs. Pass cleanup_app_dir=True to remove temporary state on exit when using a scratch directory. sk.session(target) opens a SessionContext for the target file and exposes a bv property for the active BinaryView.

from tempfile import TemporaryDirectory

with TemporaryDirectory(prefix="sidekick-headless-") as tmp:
    with api.sidekick(tmp, cleanup_app_dir=True) as sk:
        with sk.session("/path/to/target.bndb") as sess:
            funcs = api.execute_bnql(sess.bv, "/view/function", limit=10)

Project sessions

Project-backed sessions start without any open files. Open members individually with open_project_file().

with api.sidekick(app_dir) as sk:
    with sk.session(project) as sess:  # project: an open Binary Ninja project object
        bv = sess.open_project_file("libfoo.so.bndb")
        results = api.execute_bnql(sess.binary_view_set, "/view/function")

open_project_file() accepts an exact path_on_disk, a project member name, or an existing BinaryView or FileMetadata from the same project. If a filename matches more than one project member, pass a fuller on-disk path.
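The disambiguation rule can be captured in a small helper. This is illustrative only, not part of sidekick.api; you supply the list of member on-disk paths gathered from the project yourself:

```python
def choose_member_selector(name, member_paths):
    """Return a selector for open_project_file(): the bare member name when it
    is unique within the project; otherwise raise with the candidate on-disk
    paths so the caller can pass a fuller path instead."""
    matches = [p for p in member_paths if p == name or p.endswith("/" + name)]
    if not matches:
        raise KeyError(f"no project member named {name!r}")
    if len(matches) > 1:
        raise ValueError(f"{name!r} is ambiguous; pass one of: {matches}")
    return name
```

Pass the result straight through, e.g. sess.open_project_file(choose_member_selector("libfoo.so.bndb", paths)).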

Enabling semantic search in headless sessions

Semantic search via concept() requires an initialized semantic index. Opt in explicitly.

with api.sidekick(app_dir, semantic_index=True) as sk:
    with sk.session("/path/to/target.bndb") as sess:
        sess.wait_for_semantic_index_ready(timeout_secs=120.0)
        matches = api.concept(sess.bv, "credential handling")

Note

Structural BNQL queries work without semantic indexing. Only concept() and the concept() BNQL function require it. In headless scripts, concept() stays non-blocking and can return [] until indexing is ready, so call sess.wait_for_semantic_index_ready(...) before relying on semantic results.


Querying with BNQL

execute_bnql runs a BNQL query against a BinaryView or BinaryViewSet and returns a list of objects.

from sidekick.api import execute_bnql

# All functions
funcs = execute_bnql(bv, "/view/function")

# Functions that call malloc
callers = execute_bnql(bv, '/view/function[calls::function[@name == "malloc"]]')

# First 20 functions sorted by name
funcs = execute_bnql(bv, "/view/function", limit=20)

To get rendered text instead of objects, use render_object on each result:

from sidekick.api import execute_bnql, render_object

funcs = execute_bnql(bv, "/view/function", limit=5)
summaries = [render_object(f, verbosity="summary") for f in funcs]

See Building and Using Indexes and the BNQL reference for full query syntax.


Semantic search with concept()

concept() performs semantic vector search against the binary's index and returns ranked results.

from sidekick.api import concept

results = concept(bv, "file encryption")
for obj, score, similarity in results:
    print(f"{obj.name}  score={score:.3f}  similarity={similarity:.3f}")

Parameters:

Parameter Type Default Description
target BinaryView or BinaryViewSet required The analysis target
subject any required A text string, Binary Ninja object, or rendered text to search for
object_type str or None None Restrict to a specific kind, e.g. "function"
similarity float 0.45 Minimum cosine similarity threshold (0.0–1.0)
limit int 10 Maximum number of results

Each result is a tuple of (object, score, similarity_score).
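Because each result is a plain tuple, post-filtering is ordinary Python. A small sketch (the helper is illustrative, not part of the API):

```python
def best_match(results, min_similarity=0.6):
    """Return the highest-scoring (object, score, similarity) tuple at or
    above min_similarity, or None if nothing clears the bar."""
    eligible = [r for r in results if r[2] >= min_similarity]
    return max(eligible, key=lambda r: r[1]) if eligible else None
```

For example, top = best_match(concept(bv, "file encryption")) picks the single strongest hit above your own similarity floor.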

Note

concept() requires an active semantic index. Semantic indexing is disabled by default; enable it in Edit > Preferences > Settings (sidekick.semantic_index.enabled). When the index is unavailable, concept() returns an empty list. In headless sessions, pass semantic_index=True to api.sidekick().

You can also embed concept() directly inside a BNQL query:

# Functions related to "network communication" that also call send
results = execute_bnql(
    bv,
    '/view/function[concept("network communication") in . and calls::function[@name == "send"]]',
)

Rendering objects with render_object

LLMs work with text. render_object converts Binary Ninja objects into text at a specified level of detail.

from sidekick.api import render_object

func = bv.get_function_at(0x401000)

name_only = render_object(func, verbosity="identifier")   # "sub_401000"
signature = render_object(func, verbosity="summary")      # "int sub_401000(int arg1)"
full_code = render_object(func, verbosity="contents")     # decompiled body

Verbosity What you get
"identifier" Name or address only
"summary" Signature or prototype
"contents" Full decompiled code or object body
"contextual" Contents with surrounding context

Auto-rendering of agent and prompt variables

When a non-string Binary Ninja object is passed as a variable to an agent or prompt, the API renders it automatically at "contents" verbosity. To use a different verbosity, either:

  • pass variable_render_verbosity= to the agent or prompt call (or to runtime.load_agent(...) to set the default for that loaded agent), or
  • call render_object(obj, verbosity=...) first and pass the resulting string.
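For example, to keep prompts short by sending only signatures (a sketch; func and the summarize-function prompt are assumed from the earlier examples):

```python
from sidekick.api import load_prompt, render_object

summarize = load_prompt("summarize-function")

# Option 1: override the render verbosity at call time
result = summarize(CODE=func, variable_render_verbosity="summary")

# Option 2: pre-render to a string yourself, then pass the text
result = summarize(CODE=render_object(func, verbosity="summary"))
```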

Executing prompts

Prompts are single-shot LLM completions. They accept template variables, call the model once, and return a response.

When to use: Summarization, naming, classification, extracting structured information from a piece of code.

Define a prompt in YAML

Prompt files are named prompt.<name>.yaml and must live in a directory the API can resolve (see Scripts root resolution).

# prompt.summarize-function.yaml
version: 1
name: summarize-function
completion_pool_id: fast
prompt_template:
  messages:
    - role: user
      content: "Summarize this function in one sentence:\n\n{{CODE}}"
  variables:
    CODE:
      name: CODE
      required: true

Call the prompt from Python

from sidekick.api import load_prompt, render_object

summarize = load_prompt("summarize-function")
code_text = render_object(bv.get_function_at(0x401000), verbosity="contents")
result = summarize(CODE=code_text)
print(result)

Keyword arguments map directly to the variable names defined in the YAML.

Structured output

Add output_schema to receive a parsed dict instead of plain text:

# prompt.extract-summary.yaml
version: 1
name: extract-summary
completion_pool_id: fast
output_schema:
  type: object
  properties:
    summary:
      type: string
    confidence:
      type: number
  required: [summary, confidence]
prompt_template:
  messages:
    - role: user
      content: "Summarize this code and rate your confidence:\n\n{{CODE}}"
  variables:
    CODE:
      name: CODE
      required: true

extract = load_prompt("extract-summary")
result = extract(CODE=render_object(func, verbosity="contents"))
print(result["summary"], result["confidence"])

Executing agents

Agents are autonomous, multi-step actors. They are given a goal, can use tools to query and modify the binary, and run until they decide the task is complete.

When to use: Complex analysis, automated renaming, tasks that require exploring the binary across multiple steps.

Define an agent in YAML

Agent files are named agent.<name>.yaml.

# agent.rename-vars.yaml
version: 1
protocol_id: rename-vars
completion_pool_id: smart
uses_tags:
  - database  # grants query and update access to the Binary Ninja database
prompt_template:
  messages:
    - role: system
      content: |
        You are a reverse engineering assistant. Analyze the provided function
        and use the update_database tool to rename any default-named variables
        (like var_18) to something meaningful based on their usage.
    - role: user
      content: "Please rename variables in this function:\n\n{{TARGET_FUNCTION}}"
  variables:
    TARGET_FUNCTION:
      name: TARGET_FUNCTION
      required: true

The uses_tags field controls which local runtime tools the agent can use:

  • database — policy-level alias that allows both database.query and database.update tools. An agent with uses_tags: [database] gains access to both registered tools.
  • database.query — the query tool only (read access to the Binary Ninja database)
  • database.update — the update tool only (write access to the Binary Ninja database)
  • index.query — inspect existing indexes
  • index.update — update indexes
  • codemap.query — inspect existing code maps
  • codemap.update — update code maps

Note

database is a tag policy alias set in init_runtime, not a tool tag registered directly on a tool. The actual tools are registered under database.query and database.update. Listing database in uses_tags picks up the policy allowance, which grants access to both. If you want read-only access, list database.query instead.

When init_runtime is called with allow_updates=False, UpdateTargetTool is not registered, so only database.query is available — even if the agent YAML lists database in uses_tags.

Initialize a runtime and run the agent

from sidekick.api import init_runtime, execute_bnql

with init_runtime(bv=bv) as runtime:
    renamer = runtime.load_agent("rename-vars")

    large_funcs = execute_bnql(bv, "/view/function[count(basic_blocks) > 100]")
    for func in large_funcs:
        print(f"Analyzing {func.name}...")
        renamer(TARGET_FUNCTION=func)

init_runtime(bv=bv) creates an AgentRuntime with an ephemeral kernel bound to the target view. Pass allow_updates=False to create a read-only runtime.

Structured agent output

Add output_schema to an agent definition to receive a parsed dict. Make the system prompt explicitly require JSON matching the schema.

# agent.triage.yaml
version: 1
protocol_id: triage
completion_pool_id: smart
output_schema:
  type: object
  properties:
    verdict:
      type: string
    rationale:
      type: string
  required: [verdict, rationale]
uses_tags:
  - database
prompt_template:
  messages:
    - role: system
      content: "Analyze the target and respond with JSON matching the output schema."
    - role: user
      content: "Triage this function:\n\n{{TARGET_FUNCTION}}"
  variables:
    TARGET_FUNCTION:
      name: TARGET_FUNCTION
      required: true

with init_runtime(bv=bv) as runtime:
    triage = runtime.load_agent("triage")
    result = triage(TARGET_FUNCTION=func)

print(result["verdict"], result["rationale"])

Scripts root resolution

When you call load_prompt("name") or runtime.load_agent("name"), the API resolves the YAML file from a scripts root directory. Resolution order:

  1. Explicit root= parameter: load_prompt("name", root="/path/to/scripts")
  2. __sidekick_script_root variable in the calling frame's globals or locals (set automatically when running scripts from the Sidekick Library panel)
  3. SIDEKICK_SCRIPTS_ROOT environment variable
  4. Value set by api.set_default_root("/path/to/scripts")

When running scripts from the Library panel, __sidekick_script_root is injected automatically and points to the script's working copy directory. You do not need to configure a root in that case.

In headless mode (api.sidekick()), the scripts root defaults to the current working directory. You can override it:

with api.sidekick("./app", scripts_root="/path/to/specs") as sk:
    ...

Pass scripts_root=None to disable automatic root configuration.

For standalone scripts or the BN scripting console, set the root explicitly:

import sidekick.api as api

api.set_default_root("/path/to/my/scripts")
prompt = api.load_prompt("summarize-function")

You can also pass an absolute path directly to bypass root resolution:

prompt = api.load_prompt("/absolute/path/to/prompt.summarize-function.yaml")

Managing workspace resources

The resource APIs take a BinaryView as their first argument and resolve the active session and kernel automatically.

Conventions

The Indexes, Code Maps, and Notebook APIs share a small set of conventions:

  • Session resolution. Every resource function takes bv as its first positional argument and looks up the active Sidekick session and kernel from it. A bv from a different or inactive session raises RuntimeError. Direct session and kernel access via get_session_for_view() / get_kernel_for_view() is only needed for advanced operations.
  • Object IDs are derived, not constructed. The Index APIs accept Binary Ninja objects directly — Functions, IL instructions, basic blocks, data variables, and other indexable types. Canonical object IDs of the form bv:function/0x401000 or bv:function/0x401000/mlil/instruction/0x401024 are produced internally. The Code Maps API is the exception: its roots parameter takes canonical ID strings; see that section for how to convert objects.
  • Binary attribution is automatic. Index entries carry a binary_view_id field, populated from bv when the entry is created.
  • prevent_duplicates=True makes add operations re-runnable. With it set, repeat calls to add_index_objects skip entries already present (matched on (binary_view_id, object_id)). Use remove_index_entries to drop entries that no longer apply.
  • Identifiers and timestamps are generated on creation. Resource IDs are created automatically when you omit them, and notebook entries populate created, updated, last_touched, and last_viewed fields as needed.

Indexes

from sidekick.api import (
    create_index, get_index, list_indexes,
    update_index, delete_index,
    add_index_objects, update_index_entries, remove_index_entries,
)

# Create
idx = create_index(bv, "interesting-functions", metadata={"owner": "user"})

# List and retrieve
print([r.resource_id for r in list_indexes(bv)])
same_idx = get_index(bv, "interesting-functions")

# Add Binary Ninja objects as entries. The API derives a canonical object_id
# from each object and attributes the entry to bv.
entries = add_index_objects(bv, "interesting-functions", objects=[func_a, func_b])

# Update entry metadata by entry_id
update_index_entries(bv, "interesting-functions", [entries[0].entry_id], {"priority": 10})

# Remove entries
remove_index_entries(bv, "interesting-functions", [entries[1].entry_id])

# Rename or update the index itself
update_index(bv, "interesting-functions", name="priority-functions", index_type="manual")

# Delete
delete_index(bv, "priority-functions")

Indexable object types

add_index_objects accepts the following kinds of Binary Ninja objects:

  • Functions and IL functions (LLIL, MLIL, HLIL)
  • Assembly instructions and IL instructions (LLIL, MLIL, HLIL)
  • Basic blocks and IL basic blocks (LLIL, MLIL, HLIL)
  • Data variables and local variables
  • Strings, types, symbols
  • Segments, sections, components
  • Comments, tags, constants

BinaryView itself, SSA variables, type libraries, platform and architecture objects, external library references, and synthetic query nodes are not indexable.

By default, unsupported objects are silently skipped. Pass on_unsupported="raise" to surface them as ValueError instead.
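A minimal sketch of the strict mode (objects is assumed to be a list you collected, which may contain unindexable items):

```python
from sidekick.api import add_index_objects

try:
    add_index_objects(bv, "interesting-functions", objects=objects, on_unsupported="raise")
except ValueError as err:
    print(f"batch contained an unindexable object: {err}")
```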

Re-running indexing scripts

Pass prevent_duplicates=True to add_index_objects so repeat calls skip entries already in the index. Deduplication uses the natural key (binary_view_id, object_id).

add_index_objects(bv, "interesting-functions", objects=funcs, prevent_duplicates=True)

To remove entries that no longer apply on a re-run, compute the set of stale entry_ids and call remove_index_entries(bv, name, entry_ids). There is no need to track the binary view ID yourself; entries already carry it.

Code Maps

from sidekick.api import create_code_map, update_code_map, get_code_map, delete_code_map

code_map = create_code_map(
    bv,
    "parser-map",
    roots=["bv:function/0x401000"],
    predecessor_hops=1,
    successor_hops=2,
)

update_code_map(bv, "parser-map", node_width=96)

resolved = get_code_map(bv, code_map.resource_id)
delete_code_map(bv, resolved.resource_id)

roots is a list of canonical object ID strings, not Binary Ninja objects. To convert a BN object to its canonical ID, use make_id:

from sidekick.binja.entities.object_ids import make_id

main = bv.get_functions_by_name("main")[0]
code_map = create_code_map(bv, "main-map", roots=[make_id(main)])

Passing a Binary Ninja object directly to roots raises TypeError. Use make_id (or hand-build the ID string) to convert first.

Note

Manual graph payload updates (manual_nodes / manual_edges) are not supported in the v1 API. Only auto-config fields can be set through create_code_map and update_code_map.

Notebook

Notebook helpers operate directly on notebook entry resources.

from sidekick.api import (
    list_notebook_entries, get_notebook_entry,
    get_notebook_entry_context,
    create_notebook_entry, update_notebook_entry, delete_notebook_entry,
    update_notebook_entry_context,
    add_notebook_task, update_notebook_task, complete_notebook_task, delete_notebook_task,
    add_notebook_outcome, list_notebook_outcomes,
    get_notebook_outcome, update_notebook_outcome, delete_notebook_outcome,
)

# Create an entry
entry = create_notebook_entry(
    bv,
    "Investigate custom crypto routine",
    entry_type="research",
    relevant_domains=["crypto"],
)

# Inspect entries
entries = list_notebook_entries(bv)
same_entry = get_notebook_entry(bv, entry.id)
print(same_entry.analysis_context)

# Record structured intermediate analysis state
update_notebook_entry_context(
    bv,
    entry.id,
    set={"scope": {"strategy": "small-exhaustive", "functions": ["sub_4011a0"]}},
)
print(get_notebook_entry_context(bv, entry.id))

# Add and complete a task
task = add_notebook_task(bv, entry.id, "Trace key schedule")
update_notebook_task(bv, entry.id, task.id, status="in_progress")
complete_notebook_task(bv, entry.id, task.id)

# Record a finding
outcome = add_notebook_outcome(
    bv,
    entry.id,
    kind="finding",
    title="Round key update located at sub_4011a0",
    confidence=0.85,
)

print(get_notebook_outcome(bv, entry.id, outcome.id))
print(list_notebook_outcomes(bv, entry.id))
delete_notebook_entry(bv, entry.id)

Prefer omitting entry_id. Sidekick generates an opaque stable ID automatically, and that generated entry.id should be the value you pass to follow-on helpers such as get_notebook_entry, update_notebook_entry, add_notebook_task, and add_notebook_outcome. Supplying your own entry_id is only useful when you need to preserve an external identifier across systems or imports; it should not be treated as the user-facing title.

Notebook entries expose created, updated, last_touched, and last_viewed timestamps. relevant_domains is free-form text; using consistent domain names like "crypto", "networking", or "parser" across entries makes it easier to filter and group them later.

Entries may also carry analysis_context: structured intermediate state for tooling and scripts. Use it for durable machine-readable blocks such as scope, candidate lists, partition rules, or coverage manifests. This is not a fourth outcome kind, and it is separate from user-facing notebook outcomes. get_notebook_entry(bv, entry_id) exposes the field directly on the returned resource, and get_notebook_entry_context(...) returns a defensive copy.

Valid entry_type values: "operational", "research", "learning".

Valid entry status values: "active", "completed", "abandoned".

Task status values: "pending", "in_progress", "completed", "blocked".

Valid outcome kind values: "finding", "artifact", "blocker".

Valid outcome status values: "draft", "verified", "rejected". Outcomes also carry an orthogonal important: bool flag that is independent of status.
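For example, promoting a finding after verification might look like this (a sketch; it assumes status and important are accepted as keyword arguments by update_notebook_outcome, and reuses entry and outcome from the example above):

```python
from sidekick.api import update_notebook_outcome

update_notebook_outcome(bv, entry.id, outcome.id, status="verified", important=True)
```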


Session and kernel access

Most automation scripts do not need these helpers — every resource API resolves the active session and kernel from bv automatically. Reach for these only when you need to call something on the session or kernel directly.

from sidekick.api import get_session_for_view, get_kernel_for_view

session = get_session_for_view(bv)   # BinjaSession
kernel  = get_kernel_for_view(bv)    # Kernel

Both raise RuntimeError if no active Sidekick session exists for the view.


Configuring completion pools

Agents and prompts reference a completion_pool_id in their YAML. A completion pool maps a logical name (like fast or smart) to one or more model endpoints with parameters. If the first endpoint fails, the pool automatically tries the next (failover).

Shipped defaults

Sidekick ships default endpoints and pools that are installed to your user directory on first launch:

Endpoint Service Description
sidekick sidekick Zero-config. Proxies through the Sidekick cloud service using your existing API key.
ollama openai-compatible Template for local Ollama models. Edit service_model to match your installed model.
vllm openai-compatible Template for self-hosted vLLM deployments. Edit service_model and base_url.
anthropic-claude-sonnet anthropic Direct Anthropic API. Requires ANTHROPIC_API_KEY environment variable.
openai-gpt-4.1 openai Direct OpenAI API. Requires OPENAI_API_KEY environment variable.
Pool Default endpoint Use
smart sidekick Primary analysis, code generation, type suggestion, research
fast sidekick Thread naming, summaries, notebook entries

The sidekick endpoint works immediately if you have a Sidekick API key configured. For better quality or fully offline use, add your own endpoints as described below.

Customizing

Configure endpoints and pools through Plugins > Sidekick > Configure Automation Models... in Binary Ninja. The configuration is stored as YAML files in:

BN_USER_DIRECTORY/sidekick/config/completion_routing/

Endpoint types:

  • sidekick — Proxied through the Sidekick cloud service. No API key or base URL needed.
  • anthropic — Direct Anthropic API. Set api_key_env to the environment variable holding your key.
  • openai — Direct OpenAI API. Set api_key_env to the environment variable holding your key.
  • openai-compatible — Any OpenAI-compatible API (Ollama, vLLM, LM Studio, etc.). Set base_url and service_model. No API key required for most local servers (api_key_env: null).

Example: adding a local model to your smart pool

First, edit the Ollama endpoint (endpoint.ollama.yaml) with your model name:

name: ollama
service: openai-compatible
service_model: qwen2.5-coder:14b
base_url: http://localhost:11434/v1
api_key_env: null
tokenizer: cl100k_base
max_tokens: 128000
max_completion_tokens: 4096
pricing:
  prompt: 0.0
  cached: 0.0
  completion: 0.0
  reasoning: 0.0
  cache_writing: 0.0

Then add it to the smart pool as the primary endpoint, with sidekick as failover:

group_id: smart
completers:
  - endpoint_id: ollama
    parameters:
      temperature: 0.2
  - endpoint_id: sidekick
    parameters:
      temperature: 0.2

In your agent or prompt YAML:

completion_pool_id: smart

Swapping the underlying model only requires updating the endpoint definition. Agent and prompt YAML files do not need to change.

Tip

Use a fast pool backed by a smaller model for single-shot prompts and classification tasks. Reserve a smart pool backed by a capable model for agents that need multi-step reasoning.


API reference summary

Context management (headless)

Symbol Description
sidekick(app_dir, *, semantic_index=False, cleanup_app_dir=False, scripts_root="", ...) Create a SidekickContext for headless use; preserve app_dir by default or clean it up on exit for scratch runs; scripts_root sets the default agent/prompt lookup directory (defaults to cwd; pass None to disable)
SidekickContext.session(target, ...) Open a SessionContext for a file or project
SessionContext.bv The active BinaryView (raises if none)
SessionContext.views All BinaryView objects in the session
SessionContext.open_project_file(selector) Open a project member and add it to the session

Analysis

Symbol Description
execute_bnql(target, query, *, limit=None, offset=None, time_limit_secs=30) Run a BNQL query; returns a list of objects
concept(target, subject, object_type=None, similarity=0.45, limit=10) Semantic search; returns list[(obj, score, similarity)]
render_object(obj, verbosity="summary", options=None) Render a BN object to text
DetailLevel Enum: IDENTIFIER, SUMMARY, CONTENTS, CONTEXTUAL

Agents and prompts

Symbol Description
init_runtime(*, bv, allow_updates=True) Create an AgentRuntime bound to bv
load_agent(name, *, kernel, path=None, root=None, ...) Load an agent without a context manager; returns a LoadedAgent callable bound to the supplied kernel
AgentRuntime.load_agent(name, *, root=None, ...) Load an agent bound to this runtime's kernel; returns a LoadedAgent callable
load_prompt(name, *, root=None, completion_pool=None) Load a prompt and return a LoadedPrompt callable
list_agents(root=None) List agent.*.yaml names in the scripts root
list_prompts(root=None) List prompt.*.yaml names in the scripts root
set_default_root(root) Set the default scripts root directory
get_default_root() Get the current default scripts root
get_scripts_dir(root=None) Resolve and return the scripts root as a Path

Session and kernel

Symbol Description
get_session_for_view(bv) Return the BinjaSession for bv
get_kernel_for_view(bv) Return the Kernel for bv

Index resource

Symbol Description
list_indexes(bv, *, process_id=None) List all index resources
get_index(bv, name_or_id, *, create_if_missing=False) Get an index by name or resource ID
create_index(bv, name, *, metadata=None, index_type=None) Create a new index
update_index(bv, name_or_id, *, name=None, metadata=None, index_type=None) Update index fields
delete_index(bv, name_or_id) Delete an index; returns bool
add_index_objects(bv, name_or_id, objects, *, on_unsupported="skip", ...) Add BN objects as index entries
update_index_entries(bv, name_or_id, entry_ids, attribute_updates) Update entries by ID
remove_index_entries(bv, name_or_id, entry_ids) Remove entries by ID; returns removed count

Code Map resource

Symbol Description
list_code_maps(bv, *, process_id=None) List all code map resources
get_code_map(bv, name_or_id, *, create_if_missing=False) Get a code map
create_code_map(bv, name, *, roots=None, predecessor_hops=None, ...) Create a code map
update_code_map(bv, name_or_id, ...) Update code map fields
delete_code_map(bv, name_or_id) Delete a code map; returns bool

Notebook entries

Symbol Description
list_notebook_entries(bv) List all notebook entries, sorted by last_touched descending
get_notebook_entry(bv, entry_id) Get an entry by ID
get_notebook_entry_context(bv, entry_id) Get a deep copy of the entry's analysis_context
create_notebook_entry(bv, description, *, entry_type="operational", relevant_domains=None, entry_id=None) Create an entry; omit entry_id in normal usage so Sidekick generates one
update_notebook_entry(bv, entry_id, ...) Update entry fields; returns bool
update_notebook_entry_context(bv, entry_id, *, set=None, remove=None, clear=False) Update top-level analysis_context blocks; returns bool
delete_notebook_entry(bv, entry_id) Delete an entry; returns bool
add_notebook_task(bv, entry_id, description) Add a task
update_notebook_task(bv, entry_id, task_id, ...) Update a task; returns bool
complete_notebook_task(bv, entry_id, task_id) Mark a task complete; returns bool
delete_notebook_task(bv, entry_id, task_id) Delete a task; returns bool
add_notebook_outcome(bv, entry_id, kind, title, ...) Add an outcome
update_notebook_outcome(bv, entry_id, outcome_id, ...) Update an outcome; returns bool
delete_notebook_outcome(bv, entry_id, outcome_id) Delete an outcome; returns bool
get_notebook_outcome(bv, entry_id, outcome_id) Get an outcome by ID
list_notebook_outcomes(bv, entry_id, *, kind=None, ...) List outcomes with optional filters