1 change: 1 addition & 0 deletions .gitignore
@@ -13,3 +13,4 @@ htmlcov/
.coverage
*.egg
.vscode/
spike/
42 changes: 37 additions & 5 deletions CLAUDE.md
@@ -17,13 +17,23 @@ jambonz is an open-source Communications Platform as a Service for building voic
```
src/jambonz_sdk/
├── __init__.py # Public API re-exports
├── verbs/ # Public re-exports of generated verb models (Gather, Say, …)
├── components/ # Public re-exports of generated component models (Recognizer, …)
├── _models/ # Pydantic v2 models
│ ├── base.py # JambonzModel base class (alias/serialization config)
│ ├── _registry.py # JSON verb name → generated model class lookup
│ ├── _generated/ # Auto-generated from JSON Schemas — do not edit
│ │ ├── verbs/
│ │ ├── components/
│ │ └── callbacks/
│ └── _patches/ # Hand-written supplements (currently empty stub)
├── types/
│ ├── __init__.py
│ ├── components.py # Shared types: Synthesizer, Recognizer, Target, ActionHook, etc.
│ ├── verbs.py # All 26+ verb TypedDicts
│ ├── components.py # Legacy TypedDicts (still used internally)
│ ├── verbs.py # Legacy verb TypedDicts
│ ├── rest.py # REST API request/response types
│ └── session.py # Call session & WebSocket message types
├── verb_builder.py # VerbBuilder — methods auto-generated from JSON Schema
├── verb_builder.py # VerbBuilder — verb methods route through the generated models
├── verb_registry.py # Verb definitions: maps spec entries → Python methods
├── webhook/
│ ├── __init__.py
@@ -47,8 +57,8 @@ src/jambonz_sdk/

- **Transport-agnostic verb building**: Same verb methods on both `WebhookResponse` and `Session`
- **Fluent/chainable API**: All verb methods return `self` for method chaining
- **TypedDict for verb schemas**: Type-safe verb construction matching JSON schemas exactly
- **Auto-generated verb methods**: VerbBuilder methods are generated at import time from JSON Schema files (`@jambonz/schema`) + `verb_registry.py` — when the schema changes, the SDK automatically picks up new parameters
- **Typed pydantic models**: Every verb and component has a generated pydantic v2 model. Verb methods accept a typed model, a raw dict, or kwargs interchangeably — all three are validated through the model and serialized via `model_dump(mode="json", by_alias=True, exclude_none=True)`. Typos and missing required fields fail at construction time rather than at the jambonz server. Dict and kwargs styles remain for backwards compatibility.
- **Auto-generated verb methods**: VerbBuilder methods are generated at import time from JSON Schema files (`@jambonz/schema`) + `verb_registry.py`. When the schema changes, the SDK picks up new parameters automatically.
- **aiohttp for both HTTP and WebSocket**: Single dependency for REST client and WS transport

## Verb System
@@ -95,10 +105,32 @@

# Copy from a local directory
python scripts/sync_schema.py --local /path/to/schema

# After any schema sync, regenerate the pydantic models and the .pyi stubs:
python scripts/regen_models.py
python scripts/generate_stubs.py
```

Source: https://github.com/jambonz/schema

### Pydantic model regeneration

`scripts/regen_models.py` mirrors the bundled schemas, runs
`datamodel-code-generator`, applies post-gen patches (declared as tables
at the top of the script), then writes:

- `src/jambonz_sdk/_models/_generated/` — generated classes, committed
as a build artifact so end users don't need codegen tools installed
- `src/jambonz_sdk/verbs/__init__.py` and
`src/jambonz_sdk/components/__init__.py` — user-facing re-exports

Post-processing performs four transforms, each driven by a table in the script:

1. `BaseModel` → `JambonzModel`
2. `AnyUrl` → `str` (the schemas use `format: uri` even for relative paths like `/menu`)
3. nested `dict[str, Any]` → real model classes on fields like `Gather.say`,
   `Gather.play`, and `Agent.llm`
4. cross-field validators appended to specific classes (the `Gather`
   digit-bounds rules)

Extend those tables to add new transforms; never edit generated files directly.
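
A table-driven pass of this kind might look like the following minimal sketch (the names `RENAMES` and `apply_renames` are illustrative, not the script's actual identifiers):

```python
# Hypothetical sketch of a table-driven rename pass; the real script's
# tables and helpers are named differently.
RENAMES: dict[str, str] = {
    "BaseModel": "JambonzModel",
    "AnyUrl": "str",
}


def apply_renames(source: str, renames: dict[str, str]) -> str:
    """Apply simple textual renames to a generated module's source."""
    for old, new in renames.items():
        source = source.replace(old, new)
    return source


patched = apply_renames("class Gather(BaseModel):", RENAMES)
# patched == "class Gather(JambonzModel):"
```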

## AI Agent Support

### AGENTS.md
64 changes: 58 additions & 6 deletions README.md
@@ -31,6 +31,43 @@ app.router.add_post("/incoming", handle_incoming)
web.run_app(app, port=3000)
```

### Typed input (recommended)

Every verb method also accepts a pydantic model for full IDE autocomplete and
typo-proof nested fields:

```python
from jambonz_sdk.webhook import WebhookResponse
from jambonz_sdk.verbs import Gather, Say
from jambonz_sdk.components import Recognizer

jambonz = WebhookResponse()
jambonz.gather(Gather(
input=["speech", "digits"],
action_hook="/menu",
timeout=15,
num_digits=1,
say=Say(text="Press 1 for sales, 2 for support"),
recognizer=Recognizer(vendor="deepgram", language="en-US"),
)).hangup()
```

Dict and kwargs styles both still work and are coerced through the same models,
so existing apps keep running unchanged. Mix and match as you like:

```python
jambonz.gather(
input=["speech"],
say=Say(text="Hello"), # model
recognizer={"vendor": "google"}, # dict
)
```

Unknown fields, missing required fields, wrong types, and violated cross-field
rules (e.g. `numDigits` combined with `minDigits`/`maxDigits`) raise
`pydantic.ValidationError` at construction time — no more hunting silent
failures after a round-trip to the jambonz server.

### WebSocket

```python
@@ -80,14 +117,27 @@ The SDK does **not** hardcode verb method signatures. Instead, verb methods (`.s
- Every method has **real typed parameters** (not `**kwargs: Any`) so IDEs show autocomplete and type hints
- Verb synonyms (`stream` ↔ `listen`, `openai_s2s` → `llm` with `vendor: "openai"`) are handled by the registry
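
The synonym handling can be sketched as a small lookup table (hypothetical names; the real `verb_registry.py` structures its entries differently):

```python
# Hypothetical sketch; illustrative names only.
SYNONYMS: dict[str, tuple[str, dict[str, str]]] = {
    "stream": ("listen", {}),
    "openai_s2s": ("llm", {"vendor": "openai"}),
}


def resolve(method_name: str, payload: dict) -> tuple[str, dict]:
    """Map an SDK method name to its canonical verb, injecting any fixed fields."""
    verb, defaults = SYNONYMS.get(method_name, (method_name, {}))
    return verb, {**defaults, **payload}


verb, payload = resolve("openai_s2s", {"prompt": "hi"})
# verb == "llm"; payload carries the injected vendor plus the caller's fields
```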

### Typed pydantic models

Alongside the schema-driven method signatures, the SDK ships pydantic v2 models
generated from the same JSON Schemas. They live under `jambonz_sdk.verbs` and
`jambonz_sdk.components` and can be passed directly to any verb method.

These models are a build artifact — produced by `scripts/regen_models.py`
from the bundled schemas — and get checked into the repo so end users don't
need codegen tools to install the SDK.

### Updating the schema

```bash
# Download the pinned version from @jambonz/schema:
python scripts/sync_schema.py

# Or copy from a local clone:
python scripts/sync_schema.py --local /path/to/schema
# Regenerate the typed pydantic models so they match:
python scripts/regen_models.py

# Regenerate the .pyi stubs so IDE autocomplete stays in sync:
python scripts/generate_stubs.py
```

If a **new verb** was added (not just new properties), add one line to `verb_registry.py`:
@@ -132,12 +182,14 @@ source .venv/bin/activate
pip install -e ".[dev]"

# Run tests
pytest tests/unit/ # Fast unit tests (253)
pytest tests/integration/ # Real server tests (26)
pytest # All 279 tests
pytest tests/unit/ # Fast unit tests
pytest tests/integration/ # Real server tests
pytest # All tests

# Sync schema from upstream
# Sync schema from upstream and regenerate pydantic models + stubs
python scripts/sync_schema.py
python scripts/regen_models.py
python scripts/generate_stubs.py
```

## License
2 changes: 2 additions & 0 deletions pyproject.toml
@@ -25,6 +25,7 @@ classifiers = [
dependencies = [
"aiohttp>=3.9",
"jsonschema>=4.20",
"pydantic>=2.10",
"referencing>=0.31",
"typing_extensions>=4.0; python_version < '3.11'",
]
@@ -43,6 +44,7 @@ dev = [
"aioresponses>=0.7",
"ruff>=0.4",
"mypy>=1.10",
"datamodel-code-generator[http]>=0.25",
]

[tool.hatch.build.targets.wheel]
14 changes: 9 additions & 5 deletions scripts/generate_stubs.py
@@ -97,6 +97,11 @@ def generate() -> str:
'"""Auto-generated type stubs for VerbBuilder.',
"",
"DO NOT EDIT — regenerate with: python scripts/generate_stubs.py",
"",
"Each verb method accepts three interchangeable input forms:",
" 1. a positional generated model instance",
" 2. a positional dict payload",
" 3. keyword arguments matching the verb's JSON Schema",
'"""',
"",
"from typing import Any, Self",
@@ -118,25 +123,24 @@
properties = spec.get("properties", {})
required = set(spec.get("required", []))

# Build parameter list
params = ["self"]
# First positional-only arg: model or dict. Then kwargs mirroring
# the schema properties for kwargs-style autocomplete.
params = ["self, arg: Any = ..., /"]
for prop_name, prop_spec in properties.items():
py_name = "from_" if prop_name == "from" else prop_name
py_type = resolve_type(prop_spec)
params.append(f"{py_name}: {py_type} = ...")

# Add **kwargs for forward compatibility
params.append("**kwargs: Any")

param_str = ",\n ".join(params)

# Build docstring
doc_lines = [f' """{verb_def.doc}']
if required:
doc_lines.append("")
doc_lines.append(f" Required: {', '.join(sorted(required))}")
doc_lines.append("")
doc_lines.append(" Args:")
doc_lines.append(" arg: a typed model instance or a dict payload (alternative to kwargs).")
for prop_name, prop_spec in properties.items():
py_name = "from_" if prop_name == "from" else prop_name
py_type = resolve_type(prop_spec)