<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Building Rynko]]></title><description><![CDATA[Blog describing how Rynko was built and can be used.]]></description><link>https://blog.rynko.dev</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1771349823567/d689ac7f-8fcb-4518-8b1e-e9e1f44459c0.png</url><title>Building Rynko</title><link>https://blog.rynko.dev</link></image><generator>RSS for Node</generator><lastBuildDate>Thu, 09 Apr 2026 15:40:51 GMT</lastBuildDate><atom:link href="https://blog.rynko.dev/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Extract Your Documents, Validate Against Your Data: Introducing Rynko Extract and Lookup Tables
]]></title><description><![CDATA[Until now, Rynko Flow assumed that agents would submit structured JSON for validation. And many do — when the data starts out structured. But a large number of real workflows start with a PDF, a scann]]></description><link>https://blog.rynko.dev/extract-your-documents-validate-against-your-data-introducing-rynko-extract-and-lookup-tables</link><guid isPermaLink="true">https://blog.rynko.dev/extract-your-documents-validate-against-your-data-introducing-rynko-extract-and-lookup-tables</guid><category><![CDATA[rynko]]></category><category><![CDATA[Document Processing]]></category><category><![CDATA[Extracts]]></category><category><![CDATA[lookup tables]]></category><dc:creator><![CDATA[Rynko Dev]]></dc:creator><pubDate>Wed, 01 Apr 2026 15:01:16 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/698ea54a73da603a1b59230b/37f9f6b9-8f0b-401c-ab80-3d36340e5def.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Until now, Rynko Flow assumed that agents would submit structured JSON for validation. And many do — when the data starts out structured. But a large number of real workflows start with a PDF, a scanned receipt, an Excel spreadsheet, or an email body that someone pasted into a form. The agent has the file, but not the data.</p>
<p>The typical workaround is to run the document through a separate OCR or extraction service, write glue code to map the extracted fields to your schema, handle the confidence issues, and then submit the result to your validation gate. And even after extraction, you often need to verify the extracted values against reference datasets — is this HS code valid? Is this vendor approved? Does this SKU exist in our catalog? That's another integration point, another system to maintain.</p>
<p>Today we're launching two features that solve both problems together: <strong>Rynko Extract</strong> and <strong>Lookup Tables</strong>.</p>
<p>Extract adds a Stage 0 to the Flow pipeline. Upload a file to a gate, and the AI extracts structured data from it before validation runs. Lookup Tables let you upload reference datasets — tariff codes, vendor lists, product catalogs — and query them directly inside your gate's business rules. Together, they turn a multi-system integration problem into a single pipeline call.</p>
<h2>How Extract Works</h2>
<p>When you enable Extract on a gate, the gate accepts both structured input (JSON, YAML, XML) and unstructured input (PDFs, images, Excel, CSV, plain text). Structured input skips extraction entirely — it goes straight to validation, no extraction credit consumed. Unstructured input passes through the AI extraction layer first, then the extracted data is validated against the gate's schema and business rules.</p>
<p>The key design decision was: one schema, not two. The gate's published schema serves as both the validation target and the extraction target. When the AI model extracts data from a document, it uses the gate's schema as a guide — field names, types, descriptions, required fields. There's no separate "extraction schema" to maintain. You edit the gate schema once, and both extraction and validation update together.</p>
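<p>As a concrete sketch (field names and descriptions here are illustrative, not taken from Rynko's docs), a gate schema that serves as both targets might look like:</p>
<pre><code class="language-json">{
  "type": "object",
  "required": ["invoice_number", "vendor_name", "total_amount"],
  "properties": {
    "invoice_number": {
      "type": "string",
      "description": "Invoice ID printed in the document header"
    },
    "vendor_name": {
      "type": "string",
      "description": "Legal name of the issuing vendor"
    },
    "total_amount": {
      "type": "number",
      "description": "Grand total including tax"
    }
  }
}
</code></pre>
<p>The <code>description</code> fields guide the extraction model, while <code>type</code> and <code>required</code> drive validation: one document, two consumers.</p>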
<img src="https://cdn.hashnode.com/uploads/covers/698ea54a73da603a1b59230b/069212d0-5be6-4086-8af3-8c2f96793e1d.png" alt="" style="display:block;margin:0 auto" />

<h3>Per-Field Confidence Scoring</h3>
<p>Every field that Extract returns comes with a confidence score — HIGH, MEDIUM, or LOW — along with a numerical score between 0 and 1. This isn't a binary "we found it or we didn't" signal. It tells you how reliable the extraction is for each individual field.</p>
<p>A clean invoice with a clearly printed invoice number in the header will get a HIGH confidence score of 0.95+. A handwritten note where the amount is partially obscured might get a LOW score of 0.3. The confidence scores are per-field, not per-document, so a single extraction can have high confidence on the vendor name but low confidence on the purchase order number.</p>
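<p>Illustratively, a per-field result might have this shape (a sketch of the concept, not the documented API response format):</p>
<pre><code class="language-json">{
  "fields": {
    "vendor_name":    { "value": "Acme Corp", "confidence": "HIGH", "score": 0.97 },
    "invoice_number": { "value": "INV-2847",  "confidence": "HIGH", "score": 0.95 },
    "po_number":      { "value": "PO-11",     "confidence": "LOW",  "score": 0.31 }
  }
}
</code></pre>
<p>A consumer can accept the HIGH fields as-is and flag only <code>po_number</code> for attention.</p>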
<p>We built three review modes around this. "Continue" accepts everything regardless of confidence — useful for high-volume, low-risk workflows. "Review" routes to a human reviewer when any field drops below your configured threshold or when required fields are missing. "Fail" rejects the extraction entirely if quality thresholds aren't met.</p>
<p>The review mode creates a human-in-the-loop workflow where the reviewer sees the extracted data alongside the source document, with confidence badges on each field. They can edit values, approve the extraction, or reject it. The edited data then continues through the validation pipeline — reviewer corrections are treated as the authoritative extraction result.</p>
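<p>The routing decision reduces to a small amount of logic. A simplified sketch (using the confidence labels in place of a numeric threshold; not Rynko's actual implementation):</p>
<pre><code class="language-javascript">// fields: [{ name, confidence }] where confidence is 'HIGH', 'MEDIUM', or 'LOW'
function routeExtraction(mode, fields, requiredFields) {
  if (mode === 'continue') return 'accept'; // accept regardless of confidence

  var missing = requiredFields.filter(function (name) {
    return !fields.some(function (f) { return f.name === name; });
  });
  var lowConfidence = fields.filter(function (f) { return f.confidence === 'LOW'; });

  if (missing.length === 0 && lowConfidence.length === 0) return 'accept';
  return mode === 'review' ? 'route_to_reviewer' : 'reject'; // 'fail' rejects outright
}
</code></pre>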
<h3>Schema Discovery and Zero-Cost Iteration</h3>
<p>If you have a new document type and don't know what fields it contains, you can upload a sample file and let the AI analyze it. The AI returns a suggested JSON Schema with field names, types, and descriptions — a starting point you can refine.</p>
<p>The discovery pass stores the raw AI output as a "reference extraction" in S3. Every subsequent schema edit re-maps fields from this reference using a local field matcher — exact match, normalized match (snake_case to camelCase), description similarity, common aliases, and fuzzy matching. No AI calls, no credits consumed. You can iterate on your schema as many times as you want after the initial discovery, and the field matcher instantly shows you how the reference data maps to your updated schema.</p>
<p>This pattern — one AI call to establish the reference, then unlimited local iterations — makes it practical to experiment with schema design without worrying about API costs.</p>
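<p>The normalized-match step in that matcher can be sketched as follows (assumed behavior; the real matcher also handles description similarity, aliases, and fuzzy matching):</p>
<pre><code class="language-javascript">// Collapse snake_case, kebab-case, and camelCase to one canonical form
function canon(name) {
  return name.toLowerCase().replace(/[_\s-]/g, '');
}

// Map a field from the reference extraction onto the edited schema
function matchField(refField, schemaFields) {
  if (schemaFields.includes(refField)) return refField; // exact match first
  var hit = schemaFields.find(function (f) { return canon(f) === canon(refField); });
  return hit || null; // null: no local match, field stays unmapped
}
</code></pre>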
<h2>How Lookup Tables Work</h2>
<p>A lookup table is a team-level key-value store. You create a table, populate it with entries (either one at a time via the API or in bulk via CSV/JSON upload), and then reference it in your gate's business rules using the <code>lookup()</code> function.</p>
<p>The simplest example is an existence check:</p>
<pre><code class="language-javascript">// Is this HS code in our tariff schedule?
lookup('hs_codes_us', hs_code) !== null
</code></pre>
<p>The <code>lookup()</code> function takes a table name and a key, and returns the stored value if the key exists, or <code>null</code> if it doesn't. Since values are stored as JSON, you can store rich objects and access their properties:</p>
<pre><code class="language-javascript">// Is this item restricted?
lookup('hs_codes_us', hs_code).restricted !== true

// Is the unit price within the expected range for this category?
lookup('price_ranges', product_category).max &gt;= unit_price

// Composite key for duty rate lookup
lookup('duty_rates', hs_code + ':' + origin_country + ':' + destination_country) !== null
</code></pre>
<p>You can also use the <code>fail()</code> function for prescriptive error messages that tell agents exactly what went wrong:</p>
<pre><code class="language-javascript">lookup('vendors', vendor_name) !== null
  ? true
  : fail('Vendor "' + vendor_name + '" not in approved list. Check spelling or onboard new vendor.')
</code></pre>
<p>This works well with Gate Intelligence — agents that fail a lookup check get a clear, actionable error message instead of a generic "rule failed" response.</p>
<h3>The Double-Blind Principle</h3>
<p>One design decision worth explaining: lookup table data is not sent to the AI during extraction.</p>
<p>This might seem counterintuitive — wouldn't extraction be more accurate if the model knew the valid HS codes? In practice, the opposite is true. If you give an LLM a list of 5,000 valid codes and ask it to extract an HS code from a blurry scan, it will "correct" what it sees to match something in the list. Instead of honest extraction, you get confident guessing.</p>
<p>We separate the roles deliberately. The AI is a witness — it reports exactly what it sees in the document, with a confidence score for each field. The gate is the judge — it checks the extracted values against your business rules and lookup tables deterministically. The lookup table is the law — it defines what's valid.</p>
<p>This means an extraction might come back with <code>hs_code: "8471.30"</code> at HIGH confidence, and then the gate rule <code>lookup('hs_codes_us', hs_code) !== null</code> confirms it's a real code. Or the extraction returns <code>hs_code: "8471.39"</code> (an OCR misread), the lookup fails, and the run routes to human review where the reviewer can see both the extracted value and the source document side by side.</p>
<p>The separation preserves the integrity of both steps. The extraction quality isn't artificially inflated by reference data matching, and the validation is fully deterministic — no probabilistic reasoning in the judgment step.</p>
<h3>Atomic Bulk Sync</h3>
<p>For small reference datasets — approved vendor lists, product categories — adding entries one at a time through the API is fine. But HS code databases have tens of thousands of entries, and product catalogs can have hundreds of thousands.</p>
<p>Bulk sync handles this with a shadow-flip pattern. When you upload a CSV or JSON file, the system writes all new entries at a new version number while your gate continues reading from the current version. Once all entries are imported, a single database update atomically switches the active version. There's no moment where your gate sees partial data — it's either the old complete dataset or the new complete dataset.</p>
<pre><code class="language-plaintext">POST /api/flow/lookup-tables/:tableId/sync
Content-Type: multipart/form-data

file: hs_codes_2026.csv
mode: replace
</code></pre>
<p>The sync runs asynchronously via BullMQ, and you can poll the status or check the sync history. Each sync records how many entries were received, imported, and skipped, along with the duration.</p>
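<p>In miniature, the shadow-flip looks like this (an in-memory sketch of the pattern, not the actual database implementation):</p>
<pre><code class="language-javascript">function bulkSync(table, entries) {
  var shadow = table.activeVersion + 1;
  table.versions[shadow] = {};              // import into the shadow version
  entries.forEach(function (e) {
    table.versions[shadow][e.key] = e.value;
  });
  table.activeVersion = shadow;             // the atomic flip: one assignment
}

function lookupEntry(table, key) {
  var value = table.versions[table.activeVersion][key];
  return value === undefined ? null : value;
}
</code></pre>
<p>Readers only ever dereference <code>activeVersion</code>, so they see the old complete dataset right up until the flip and the new complete dataset immediately after.</p>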
<h3>Key Normalizers</h3>
<p>One problem that came up immediately during testing: HS codes. Some invoices format them with dots (<code>8542.31.0000</code>), some without (<code>8542310000</code>). Both are the same code. A vendor extracted as <code>ACME CORP</code> won't match a lookup table entry stored as <code>Acme Corp</code>. Port codes might come through as <code>uslax</code> instead of <code>USLAX</code>.</p>
<p>Rather than forcing users to pre-process their data or write normalizing wrappers around every <code>lookup()</code> call, we added key normalizers at the table level. Each lookup table can specify a normalization strategy that's applied automatically — both when entries are stored and when keys are queried.</p>
<img src="https://cdn.hashnode.com/uploads/covers/698ea54a73da603a1b59230b/359b3b36-e136-4a8f-aecc-ab51202e005e.png" alt="" style="display:block;margin:0 auto" />

<pre><code class="language-plaintext">POST /api/flow/lookup-tables
{
  "name": "hs_codes_us",
  "keyNormalizer": "strip_dots",
  "keyDescription": "10-digit HTS code",
  ...
}
</code></pre>
<p>With <code>strip_dots</code>, the expression <code>lookup('hs_codes_us', '8542.31.0000')</code> matches a stored key of <code>8542310000</code> because both are normalized to <code>8542310000</code> before comparison. The original key is preserved for display — you still see <code>8542.31.0000</code> in the UI — but matching uses the normalized form.</p>
<p>Seven normalizer types are available:</p>
<table>
<thead>
<tr>
<th>Normalizer</th>
<th>Transformation</th>
<th>Use Case</th>
</tr>
</thead>
<tbody><tr>
<td><code>none</code></td>
<td>No change (default)</td>
<td>Exact match</td>
</tr>
<tr>
<td><code>lowercase</code></td>
<td>Lowercase + trim</td>
<td>Vendor names, company names</td>
</tr>
<tr>
<td><code>uppercase</code></td>
<td>Uppercase + trim</td>
<td>Port codes, country codes</td>
</tr>
<tr>
<td><code>strip_dots</code></td>
<td>Remove dots + trim</td>
<td>HS codes</td>
</tr>
<tr>
<td><code>strip_punctuation</code></td>
<td>Remove dots, dashes, spaces + lowercase</td>
<td>Tax IDs (EIN, EORI)</td>
</tr>
<tr>
<td><code>alphanumeric</code></td>
<td>Keep only a-z 0-9 + lowercase</td>
<td>SKUs, part numbers</td>
</tr>
<tr>
<td><code>numeric</code></td>
<td>Keep only digits</td>
<td>Phone numbers, postal codes</td>
</tr>
</tbody></table>
<p>The normalization happens at write time — a <code>normalizedKey</code> column is pre-computed alongside the original key and indexed for O(1) lookups. This means even tables with millions of entries don't pay a performance penalty for normalization. Changing the normalizer on an existing table triggers a background recomputation of all normalized keys.</p>
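<p>Each strategy is a small, deterministic string transform. Sketched as plain functions (assumed equivalents of the behavior in the table above):</p>
<pre><code class="language-javascript">var normalizers = {
  none:              function (k) { return k; },
  lowercase:         function (k) { return k.trim().toLowerCase(); },
  uppercase:         function (k) { return k.trim().toUpperCase(); },
  strip_dots:        function (k) { return k.replace(/\./g, '').trim(); },
  strip_punctuation: function (k) { return k.replace(/[.\s-]/g, '').toLowerCase(); },
  alphanumeric:      function (k) { return k.toLowerCase().replace(/[^a-z0-9]/g, ''); },
  numeric:           function (k) { return k.replace(/\D/g, ''); }
};

// Applied identically at write time and at query time, so both sides agree:
normalizers.strip_dots('8542.31.0000'); // '8542310000'
normalizers.uppercase(' uslax ');       // 'USLAX'
</code></pre>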
<p>It's a small feature, but it eliminates an entire class of false validation failures that would otherwise require custom expressions or data cleansing pipelines.</p>
<h2>The Pipeline Orchestrator</h2>
<p>Behind both features is a pipeline orchestrator built as an independent package (<code>@rynko/pipeline-core</code>). It's a data-driven stage router — the pipeline is defined as an ordered array of stage definitions, and the orchestrator walks the array to determine what comes next. Adding a new stage to the pipeline is adding one entry to the array. Reordering stages is moving the entry.</p>
<p>The orchestrator is framework-agnostic — zero dependencies on NestJS, Prisma, or any specific infrastructure. It takes a storage adapter interface and a logger interface, and that's it. The Flow module provides a thin NestJS wrapper that implements the storage adapter using Prisma and wires the stage executors to the actual services.</p>
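<p>Conceptually, the data-driven routing looks like this (a toy sketch of the idea, not the <code>@rynko/pipeline-core</code> API):</p>
<pre><code class="language-javascript">// The pipeline is data: an ordered array of stage definitions.
var pipeline = [
  { name: 'extract',  applies: function (run) { return run.isFile; } },
  { name: 'validate', applies: function ()    { return true; } },
  { name: 'deliver',  applies: function (run) { return run.hasWebhook; } }
];

// The orchestrator walks the array to find what comes next.
function nextStage(completed, run) {
  var start = completed === null
    ? 0
    : pipeline.findIndex(function (s) { return s.name === completed; }) + 1;
  var next = pipeline.slice(start).find(function (s) { return s.applies(run); });
  return next ? next.name : null; // null: pipeline finished
}
</code></pre>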
<p>The validation stage is special: for direct JSON submissions, it runs synchronously in the HTTP request handler with zero database operations — the response returns in single-digit milliseconds. Adding Extract to the pipeline doesn't slow down the existing validation path at all.</p>
<p>Lookup resolution is Redis-cached with lazy loading — only keys that are actually queried get cached, with a one-hour TTL. For gates that don't use <code>lookup()</code> in any rule, there's zero overhead. All lookups across all rules are resolved in a single batch before any rule evaluation begins — no I/O during rule execution.</p>
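<p>The batch-then-evaluate flow can be sketched like this (cache semantics assumed; the production path uses Redis with a one-hour TTL):</p>
<pre><code class="language-javascript">// Resolve every lookup key referenced by the gate's rules in one pass.
function resolveLookups(ruleKeys, cache, fetchBatch) {
  var misses = ruleKeys.filter(function (k) { return !cache.has(k); });
  if (misses.length !== 0) {
    var fetched = fetchBatch(misses); // one batched read for all cache misses
    misses.forEach(function (k) {
      cache.set(k, fetched[k] === undefined ? null : fetched[k]);
    });
  }
  var resolved = {};
  ruleKeys.forEach(function (k) { resolved[k] = cache.get(k); });
  return resolved; // rule evaluation now proceeds with zero I/O
}
</code></pre>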
<h2>The Full Pipeline</h2>
<p>Here's what a complete pipeline looks like for trade document processing:</p>
<ol>
<li><p><strong>Extract</strong> (Stage 0): Agent uploads a commercial invoice PDF. The AI extracts vendor name, HS codes, line items, quantities, unit prices, total amount, origin country, destination country.</p>
</li>
<li><p><strong>Validate</strong> (Stage 1): Gate rules check the extracted data:</p>
<ul>
<li><p>Schema validation: required fields present, correct types</p>
</li>
<li><p>Business rules: <code>total_amount === sum of line item amounts</code></p>
</li>
<li><p>Lookup checks: <code>lookup('hs_codes_us', hs_code) !== null</code> for every line item</p>
</li>
<li><p>Lookup checks: <code>lookup('approved_vendors', vendor_name) !== null</code></p>
</li>
<li><p>Lookup checks: <code>lookup('sanctioned_entities', vendor_name) === null</code> (blocklist — must NOT be in table)</p>
</li>
</ul>
</li>
<li><p><strong>Review</strong> (if needed): Low-confidence extractions or failed lookup checks route to a human reviewer who sees the original document alongside the extracted data.</p>
</li>
<li><p><strong>Deliver</strong> (Stage 2+): Validated data is delivered via webhook, or rendered into a standardized document via Rynko Render.</p>
</li>
</ol>
<p>The entire flow is one API call from the agent's perspective. Upload a file, get back a run ID, poll for results. The agent doesn't need to know about extraction confidence scores, lookup table queries, or human review routing — the gate handles all of it.</p>
<h2>What's Available</h2>
<p>Extract and Lookup Tables are available now in founders preview. Extract comes with 100 free extraction credits per team. Lookup tables are included in your Flow tier, with limits scaling from 1 table and 1,000 entries on Free up to 25 tables and 10 million entries on Scale.</p>
<p>The extraction runs on Google Gemini 2.5 Flash for fast, cost-effective processing. SDK support covers Node.js, Python, and Java with <code>submitFileRun()</code> for gate pipeline integration. For agents that already have document content as text, text-based extraction is available via MCP tools. Lookup table management is available through the REST API, and the webapp UI for managing tables and entries is rolling out now.</p>
<p>If you're building agent workflows that start with documents and need to validate extracted data against reference datasets — tariff codes, approved vendor lists, product catalogs, sanctions lists — this is what we built it for. The docs are at <a href="https://docs.rynko.dev/docs/developer-guide/extract-overview">docs.rynko.dev</a> and we're at <a href="https://rynko.dev">rynko.dev</a>.</p>
]]></content:encoded></item><item><title><![CDATA[Microsoft's Agent Governance Toolkit and Where Rynko Flow Fits In]]></title><description><![CDATA[Microsoft just open-sourced the Agent Governance Toolkit, a runtime governance platform that covers all 10 risks in the OWASP Agentic Top 10. I've spent the morning reading through the architecture, b]]></description><link>https://blog.rynko.dev/microsoft-s-agent-governance-toolkit-and-where-rynko-flow-fits-in</link><guid isPermaLink="true">https://blog.rynko.dev/microsoft-s-agent-governance-toolkit-and-where-rynko-flow-fits-in</guid><category><![CDATA[ai agents]]></category><category><![CDATA[agent-governance]]></category><category><![CDATA[Microsoft]]></category><category><![CDATA[rynko]]></category><category><![CDATA[rynko-flow]]></category><category><![CDATA[AI]]></category><category><![CDATA[agentic AI]]></category><category><![CDATA[langchain]]></category><category><![CDATA[AutoGen]]></category><category><![CDATA[CrewAI]]></category><dc:creator><![CDATA[Rynko Dev]]></dc:creator><pubDate>Sun, 22 Mar 2026 02:21:05 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/698ea54a73da603a1b59230b/2671c023-49cf-49b7-862b-97741c07668d.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Microsoft just open-sourced the <a href="https://github.com/microsoft/agent-governance-toolkit">Agent Governance Toolkit</a>, a runtime governance platform that covers all 10 risks in the OWASP Agentic Top 10. I've spent the morning reading through the architecture, benchmarks, and OWASP compliance docs, and it's one of the most thorough agent governance frameworks I've seen from any company, open-source or otherwise.</p>
<ul>
<li><p>Policy evaluation at 0.012ms latency.</p>
</li>
<li><p>Ed25519 cryptographic agent identity with trust scoring.</p>
</li>
<li><p>Four-tier execution rings with kill switches.</p>
</li>
<li><p>Circuit breakers and chaos engineering for reliability.</p>
</li>
<li><p>Adapters for 12+ frameworks including LangChain, AutoGen, CrewAI, and Google ADK.</p>
</li>
<li><p>6,100+ tests. MIT licensed.</p>
</li>
</ul>
<p>This is the kind of infrastructure that the agentic ecosystem desperately needs, and Microsoft giving it away for free accelerates the entire space.</p>
<p>It also makes me more confident about the bet we've been making at Rynko, because the toolkit solves a genuinely hard set of problems that we don't solve — and it leaves room for the specific problem that we do.</p>
<h2>What the Toolkit Does Well</h2>
<p>The toolkit has four components, and each one addresses a real production concern that teams building agentic systems struggle with.</p>
<p><strong>Agent OS</strong> is the policy engine. Every agent action passes through it before execution. You define capabilities (which tools the agent can call), resource limits (token budgets, API call caps), and content policies. It evaluates these at sub-millisecond latency — 72,000 policy evaluations per second for single rules, 31,000 for 100-rule policies. Custom policies can be written in OPA/Rego or Cedar, a thoughtful design choice: teams can reuse their existing policy infrastructure rather than learning a new DSL.</p>
<p><strong>AgentMesh</strong> handles identity and inter-agent trust. Every agent gets Ed25519 cryptographic credentials. Trust scores on a 0–1000 scale determine what an agent can do: a score of 900+ gets verified partner access, below 300 gets read-only. Communication between agents is encrypted through trust gates, and the mesh bridges the A2A, MCP, and IATP protocols. The trust scoring model is particularly well thought out: new agents default to 500 and progress based on compliance history, which mirrors how you'd onboard a new team member with gradually expanding permissions.</p>
<p><strong>Agent Runtime</strong> is the execution supervisor. It uses four privilege rings to isolate what agents can touch and saga orchestration to coordinate multi-step operations. Kill switches terminate non-compliant agents, and append-only audit logs record everything for forensic replay.</p>
<p><strong>Agent SRE</strong> provides reliability engineering: SLO enforcement, error budgets, circuit breakers to prevent cascading failures, replay debugging, and chaos engineering. These are the production observability patterns you'd expect from a team that runs Azure at scale.</p>
<p>All four components work together to answer a fundamental question: <strong>is this agent allowed to do what it's trying to do, and is it doing it safely?</strong></p>
<p>This is genuinely hard infrastructure to build correctly. Identity, policy enforcement, execution isolation, and reliability engineering each have deep rabbit holes, and Microsoft has the engineering depth to go down all of them properly.</p>
<h2>Where Flow Adds a Complementary Layer</h2>
<img src="https://cdn.hashnode.com/uploads/covers/698ea54a73da603a1b59230b/2fed675c-32bc-4331-8177-1d8b8cc97695.png" alt="" style="display:block;margin:0 auto" />

<p>The toolkit governs agent <em>behavior</em> — permissions, identity, execution boundaries, reliability. Flow governs agent <em>output</em>: the actual data the agent produces when it completes an action.</p>
<p>These are different concerns. The toolkit ensures the agent is authorized and operating safely. Flow ensures the data the agent produces is correct and hasn't been tampered with before reaching the downstream system.</p>
<p>A reasonable question: couldn't AgentMesh's trust gates or the Agent OS policy engine handle data validation too? Technically, you could write OPA/Rego policies that inspect payload fields — Rego is expressive enough to check <code>input.payload.amount &gt; 0</code>. But policy engines are designed to return allow/deny decisions, not structured validation errors with field-level messages that an agent can use to self-correct and resubmit. You'd also be mixing authorization concerns with domain-specific business logic in the same policy files, and you wouldn't get HMAC-based payload verification or human approval routing. It's a bit like using a firewall for input validation — it can inspect packet contents, but that doesn't make it the right layer for checking whether an invoice total matches its line items.</p>
<p>Think about the OWASP compliance mapping in the toolkit. ASI-05 addresses unexpected code execution through privilege rings and sandboxing. This makes sure that the agent can't run arbitrary code. That's the right control for that risk. But once the agent produces a result through an approved tool call — an invoice, a purchase order, a compliance report — there's a different question to answer: is the data in that result actually correct?</p>
<p>An agent can be fully authorized, properly authenticated, running within its privilege ring, with no circuit breaker tripped. The policy engine approved the action. And the agent still submits <code>"currency": "usd"</code> instead of <code>"USD"</code>, calculates a total that's off by a rounding error, or drops a required field. These are domain-specific data quality issues that a behavioral governance layer isn't designed to catch, and honestly shouldn't try to: that would mix concerns and bloat the policy engine with domain logic.</p>
<p>This is what Flow was built for. You define a gate with a schema and business rules specific to your domain, and the agent's output gets validated before it reaches the downstream system. Failed validations return structured errors that the agent can use to self-correct. Passed validations return a <code>validation_id</code> — an HMAC-SHA256 hash of the validated payload that the downstream system can independently verify.</p>
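<p>The downstream verification is plain HMAC math. A sketch in Node (assuming the payload is serialized identically on both sides; the canonicalization Rynko actually uses isn't specified here):</p>
<pre><code class="language-javascript">const crypto = require('node:crypto');

function computeValidationId(secret, payloadJson) {
  return crypto.createHmac('sha256', secret).update(payloadJson).digest('hex');
}

function verifyValidationId(secret, payloadJson, receivedId) {
  const expected = computeValidationId(secret, payloadJson);
  if (expected.length !== receivedId.length) return false; // timingSafeEqual needs equal lengths
  return crypto.timingSafeEqual(Buffer.from(expected), Buffer.from(receivedId));
}
</code></pre>
<p>If anything in the payload changes between validation and delivery, the recomputed HMAC stops matching and the downstream system rejects the data.</p>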
<h2>How the Two Layers Work Together</h2>
<img src="https://cdn.hashnode.com/uploads/covers/698ea54a73da603a1b59230b/37a548ce-b550-4d3c-b703-412930eccdc7.png" alt="" style="display:block;margin:0 auto" />

<p>The distinction maps to how we think about security in traditional systems. Authentication and authorization tell you who's making a request and whether they're allowed to. Input validation tells you whether the data they're sending is well-formed and correct. You've always needed both. The agentic world isn't different.</p>
<table>
<thead>
<tr>
<th>Layer</th>
<th>Question</th>
<th>Microsoft Toolkit</th>
<th>Rynko Flow</th>
</tr>
</thead>
<tbody><tr>
<td><strong>Identity</strong></td>
<td>Who is this agent?</td>
<td>Ed25519 credentials, trust scores</td>
<td>API key auth</td>
</tr>
<tr>
<td><strong>Authorization</strong></td>
<td>Can it call this tool?</td>
<td>Policy engine, capability model</td>
<td>—</td>
</tr>
<tr>
<td><strong>Execution</strong></td>
<td>Is it running safely?</td>
<td>Privilege rings, sandboxing</td>
<td>—</td>
</tr>
<tr>
<td><strong>Reliability</strong></td>
<td>Will failures cascade?</td>
<td>Circuit breakers, SLOs</td>
<td>—</td>
</tr>
<tr>
<td><strong>Output correctness</strong></td>
<td>Is the data valid?</td>
<td>—</td>
<td>Schema + business rules</td>
</tr>
<tr>
<td><strong>Output integrity</strong></td>
<td>Was the data tampered?</td>
<td>—</td>
<td>HMAC verification</td>
</tr>
<tr>
<td><strong>Human oversight</strong></td>
<td>Should a person review?</td>
<td>—</td>
<td>Approval routing</td>
</tr>
</tbody></table>
<p>The toolkit handles the rows above the line. Flow handles the rows below it. Together, they cover the pipeline end to end.</p>
<h2>A Practical Example</h2>
<p>Say you have an order processing agent running in an environment with the toolkit deployed. The policy engine confirms the agent has permission to submit orders. AgentMesh verified its identity. The runtime supervisor confirmed it's operating within its privilege ring.</p>
<p>The agent submits this order:</p>
<pre><code class="language-json">{
  "order_id": "ORD-2847",
  "vendor": "Acme Corp",
  "amount": -500,
  "currency": "usd",
  "line_items": []
}
</code></pre>
<p>From the toolkit's perspective, everything checks out. The agent was authorized, authenticated, and operating within bounds. The policy engine approved the action. And it should approve it — the toolkit's job is to enforce behavioral governance, not validate business data.</p>
<p>Flow picks up where the toolkit leaves off. A gate with the appropriate schema and rules catches three issues:</p>
<pre><code class="language-json">{
  "success": false,
  "errors": [
    { "field": "amount", "message": "Must be &gt;= 0" },
    { "field": "currency", "message": "Must be one of: USD, EUR, GBP" },
    { "rule": "line_items.length &gt; 0", "message": "Must have at least one line item" }
  ]
}
</code></pre>
<p>The agent self-corrects using the structured feedback, resubmits, and gets a <code>validation_id</code> on success. The downstream system verifies the ID before accepting the data. The toolkit made sure the right agent submitted the order safely. Flow made sure the order itself was correct.</p>
<h2>Performance — Both Layers Are Essentially Free</h2>
<p>One thing the toolkit's benchmarks highlight is that governance overhead should be invisible relative to LLM latency. Their policy evaluation adds 0.01–0.1ms. An LLM API call takes 200–3,000ms. I think they're exactly right about this — governance shouldn't be the bottleneck, and at those numbers it never will be.</p>
<p>Flow operates at a different timescale because it's doing more work per evaluation — parsing payloads, validating schemas against variable arrays, running expression-based business rules through a recursive descent parser. Our benchmarks show ~50ms server-side validation for enterprise-scale payloads (21 schema variables, 10 business rules, 900 line items in a single payload). For typical payloads (a few KB), it's single-digit milliseconds.</p>
<p>Combined, both layers add maybe 50–60ms to a pipeline where the LLM inference took 500–3,000ms. You're paying a negligible cost for behavioral governance and output validation together.</p>
<h2>The Bigger Picture</h2>
<p>Between the OWASP Agentic Top 10, the <a href="https://blog.rynko.dev/how-rynko-flow-maps-to-the-aws-agentic-ai-security-scoping-matrix">AWS Agentic AI Security Scoping Matrix</a>, Snapchat's <a href="https://blog.rynko.dev/what-snapchat-auton-framework-means-for-ai-agent-validation">Auton framework</a>, and now Microsoft's toolkit, the industry is converging on something I think is important: agent governance is not a single problem with a single solution. It's a stack of specialized layers, each addressing different risks at different points in the pipeline.</p>
<p>Microsoft releasing this toolkit validates the category in a way that benefits everyone building in the space. When the company that runs Azure tells the world "agent governance is infrastructure, here's our reference implementation for free," it moves the conversation from "do we need agent governance?" to "which layers do we still need to add?"</p>
<p>We think output validation is one of those layers. Not because the toolkit missed something, but because domain-specific data correctness is a separate concern that deserves its own specialized tooling. Checking whether an invoice has the right currency code, whether an order total matches its line items, or whether a compliance report includes all required fields isn't a policy evaluation problem. It's a schema and business rule problem with optional human review in the loop.</p>
<p>That's what we built Flow to handle. If you're deploying the Agent Governance Toolkit and want to add output validation to the pipeline, try dropping a <a href="https://app.rynko.dev/flow/gates">Flow gate</a> between the governed agent and your downstream system. The free tier gives you 500 validation runs per month and three gates — enough to see how the two layers work together in practice.</p>
<hr />
<p><em>Rynko Flow is a validation gateway for AI agent outputs.</em> <a href="https://app.rynko.dev/signup"><em>Try it free</em></a> <em>or</em> <a href="https://docs.rynko.dev/flow/getting-started"><em>read the docs</em></a><em>.</em></p>
]]></content:encoded></item><item><title><![CDATA[IBM's $11 Billion Confluent Acquisition, AWS + Cerebras, and Where Output Validation Fits In]]></title><description><![CDATA[Two announcements in the same week paint a clear picture of where enterprise AI infrastructure is headed, and both of them are exciting.
IBM closed its $11 billion acquisition of Confluent, the Kafka-]]></description><link>https://blog.rynko.dev/ibm-s-11-billion-confluent-acquisition-aws-cerebras-and-where-output-validation-fits-in</link><guid isPermaLink="true">https://blog.rynko.dev/ibm-s-11-billion-confluent-acquisition-aws-cerebras-and-where-output-validation-fits-in</guid><category><![CDATA[IBM]]></category><category><![CDATA[Confluent Kafka]]></category><category><![CDATA[AWS]]></category><category><![CDATA[cerebras]]></category><category><![CDATA[agentic AI]]></category><category><![CDATA[Enterprise AI]]></category><category><![CDATA[output-validation]]></category><category><![CDATA[rynko]]></category><category><![CDATA[AI infrastructure]]></category><dc:creator><![CDATA[Rynko Dev]]></dc:creator><pubDate>Wed, 18 Mar 2026 07:59:41 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/698ea54a73da603a1b59230b/262af139-dda3-431e-8cb6-f2dc49f2a698.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Two announcements in the same week paint a clear picture of where enterprise AI infrastructure is headed, and both of them are exciting.</p>
<p>IBM closed its <a href="https://www.linkedin.com/pulse/welcome-confluent-ibm-xtjqe">$11 billion acquisition of Confluent</a>, the Kafka-based streaming platform used by 40% of Fortune 500 companies. The thesis is sound: enterprises moving from AI experimentation to production need live, continuously flowing data — not batch exports that arrive hours late. As Rob Thomas (IBM SVP) put it, "AI decisions need to happen just as fast" as the transactions generating the data. That's exactly right, and Confluent is the best platform in the world for making it happen.</p>
<p>Meanwhile, <a href="https://www.cerebras.ai/press-release/awscollaboration">AWS announced a collaboration with Cerebras</a> to bring wafer-scale inference to Amazon Bedrock. The CS-3 delivers thousands of times more memory bandwidth than the fastest GPU, targeting the decode bottleneck that slows agentic workloads. Andrew Feldman (Cerebras CEO) called it "blisteringly fast inference." Their disaggregated architecture pairs Trainium for compute-heavy prefill with Cerebras WSE for bandwidth-heavy token generation — an order of magnitude faster inference than what's available today. For anyone building real-time agentic workflows, this is a big deal.</p>
<p>These are the kind of infrastructure investments that make agentic systems practical at enterprise scale. They also got me thinking about where Rynko Flow fits into this picture.</p>
<h2>The Pipeline and Where Each Layer Contributes</h2>
<p>The enterprise AI pipeline looks roughly like this:</p>
<img src="https://cdn.hashnode.com/uploads/covers/698ea54a73da603a1b59230b/007527b9-edcb-4cdd-9689-13ee4b68e4c6.png" alt="Simple Pipeline" style="display:block;margin:0 auto" />

<p>IBM + Confluent handle the input: getting live, governed, trustworthy data to the agent. AWS + Cerebras handle the processing: making the agent produce output fast enough for real-time operations. Both are necessary — an agent making decisions on stale data is worse than no agent at all, and an agent that takes 30 seconds to respond isn't useful for time-sensitive workflows.</p>
<p>What we've been focused on at Rynko is the next step in that pipeline: once the agent processes that real-time data at speed and produces a result — an invoice, a purchase order, a compliance report — how do you validate that the result is correct before it reaches the downstream system?</p>
<p>This is a genuinely different problem from data freshness or inference speed, and it's the problem we built Flow to solve. Even with perfect input data, agents can submit <code>"usd"</code> instead of <code>"USD"</code>, produce a total that's off by a rounding error, or silently drop a required field. The data flowing in was pristine. The processing was fast. The output still needs a checkpoint.</p>
<h2>What Flow Adds to the Pipeline</h2>
<p>Flow is a validation gateway that sits between the agent's output and your downstream systems. You define a gate with a schema and business rules, the agent submits its output, and Flow validates it before the data moves forward. Failed submissions return structured errors the agent can use to self-correct. Passed submissions return a tamper-proof <code>validation_id</code> that the downstream system can verify to confirm nothing was modified in transit.</p>
<p>Say you have an order processing agent. Confluent is streaming real-time order events from your POS systems, inventory databases, and payment providers. The agent processes these events and produces a purchase order to send downstream. Here's the Flow gate that checks the agent's output:</p>
<pre><code class="language-plaintext">Schema:
  - order_id: string, required
  - vendor: string, required
  - amount: number, required
  - currency: string, required, enum [USD, EUR, GBP]
  - line_items: array of objects, required

Business Rules:
  - amount &gt; 0 ("Order amount must be positive")
  - amount &lt;= 100000 ("Single order cannot exceed $100,000")
  - line_items.length &gt; 0 ("Must have at least one line item")
</code></pre>
<p>The agent submits its payload. Flow validates it against the schema and evaluates every business rule. If the agent submitted <code>amount: -500</code>, it gets back:</p>
<pre><code class="language-json">{
  "success": false,
  "status": "validation_failed",
  "errors": [
    { "rule": "amount &gt; 0", "message": "Order amount must be positive" }
  ]
}
</code></pre>
<p>The agent self-corrects and resubmits. When validation passes, the response includes a <code>validation_id</code>:</p>
<pre><code class="language-json">{
  "success": true,
  "status": "validated",
  "validation_id": "val_4f546e9bcb76f120c4984d72"
}
</code></pre>
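<p>The submit-and-correct loop on the agent side can be sketched as follows. This is a minimal, hypothetical sketch — <code>submit_with_corrections</code>, <code>fake_gate</code>, and the correction lambda are illustrative names, not part of the Flow API; a real agent would POST to the gate's submit endpoint and use the structured error messages to decide what to change:</p>

```python
from typing import Callable

def submit_with_corrections(payload: dict,
                            submit: Callable[[dict], dict],
                            correct: Callable[[dict, list], dict],
                            max_attempts: int = 3) -> dict:
    """Submit a payload to a gate, retrying with corrections on failure.

    `submit` posts the payload and returns the gate's JSON response;
    `correct` maps (payload, errors) to a revised payload.
    """
    for _ in range(max_attempts):
        response = submit(payload)
        if response.get("success"):
            # Carry the validation_id forward to the downstream system.
            return {"payload": payload,
                    "validation_id": response["validation_id"]}
        payload = correct(payload, response.get("errors", []))
    raise RuntimeError("gate did not validate payload within max_attempts")

# A fake gate that rejects non-positive amounts, plus a trivial
# correction step (a real agent would reason over the error messages).
def fake_gate(p):
    if p["amount"] > 0:
        return {"success": True, "status": "validated",
                "validation_id": "val_example"}
    return {"success": False, "status": "validation_failed",
            "errors": [{"rule": "amount > 0",
                        "message": "Order amount must be positive"}]}

result = submit_with_corrections(
    {"order_id": "ORD-001", "amount": -500},
    fake_gate,
    lambda p, errs: {**p, "amount": abs(p["amount"])})
print(result["validation_id"])  # val_example
```

<p>The important property is that the loop terminates either with a <code>validation_id</code> to hand downstream or with an explicit failure after a bounded number of attempts — unbounded retry is exactly the pathology the Gate Intelligence post below describes.</p>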
<p>That <code>validation_id</code> is an HMAC-SHA256 hash of the validated payload, computed using canonical JSON serialization with recursively sorted keys. This means even if the payload passes through multiple systems that reorder the JSON keys or reformat the whitespace, the verification still works. The downstream system receives the payload and the <code>validation_id</code> from the agent, then calls Flow to verify:</p>
<pre><code class="language-bash">curl -X POST https://api.rynko.dev/api/flow/verify \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "validation_id": "val_4f546e9bcb76f120c4984d72",
    "payload": { "order_id": "ORD-001", "vendor": "Acme", "amount": 500, ... }
  }'
</code></pre>
<pre><code class="language-json">{
  "verified": true,
  "runId": "550e8400-e29b-41d4-a716-446655440000",
  "gateName": "Order Validation",
  "gateSlug": "order-validation"
}
</code></pre>
<p>If the agent tampered with the payload after validation — changed the amount, added a field, removed a required value — verification returns <code>verified: false</code>. The downstream system knows not to trust the data.</p>
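<p>The canonical-serialization idea is worth a quick sketch. The secret key, the <code>val_</code> ID format, and Flow's exact canonicalization rules are internal details — this just demonstrates why recursively sorted keys make verification independent of key order and whitespace:</p>

```python
import hmac, hashlib, json

def canonical_json(obj) -> bytes:
    # Recursively sort keys so semantically equal payloads serialize
    # identically regardless of key order or original whitespace.
    return json.dumps(obj, sort_keys=True, separators=(",", ":")).encode()

def payload_mac(payload: dict, secret: bytes) -> str:
    return hmac.new(secret, canonical_json(payload), hashlib.sha256).hexdigest()

secret = b"server-side-secret"  # held by the validator, never by the agent
a = {"order_id": "ORD-001", "amount": 500, "vendor": "Acme"}
b = {"vendor": "Acme", "order_id": "ORD-001", "amount": 500}  # keys reordered

assert payload_mac(a, secret) == payload_mac(b, secret)   # reordering is fine
tampered = {**a, "amount": 999}
assert payload_mac(tampered, secret) != payload_mac(a, secret)  # tampering is not
```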
<h2>Validation Doesn't Have to Be a Bottleneck</h2>
<p>One concern I hear is whether validation adds meaningful latency to a real-time pipeline. We benchmarked Flow against enterprise-scale payloads — the kind of data you'd see flowing through Kafka in a large manufacturing or logistics operation.</p>
<p>We tested with a Sterling Commerce OMS-style order payload: 21 schema variables, 10 business rules, 900 order line items. The payloads were around 9MB for XML and 7.3MB for JSON.</p>
<table>
<thead>
<tr>
<th>Metric</th>
<th>XML (9.1 MB)</th>
<th>JSON (7.3 MB)</th>
</tr>
</thead>
<tbody><tr>
<td><strong>Total round-trip</strong></td>
<td>4,989 ms</td>
<td>4,401 ms</td>
</tr>
<tr>
<td><strong>Server-side validation</strong></td>
<td>~50 ms</td>
<td>~50 ms</td>
</tr>
<tr>
<td><strong>Network upload (at ~800 KB/s)</strong></td>
<td>~3,800 ms</td>
<td>~3,100 ms</td>
</tr>
</tbody></table>
<p>The validation itself — schema checks plus 10 business rule evaluations — takes about 50 milliseconds. The rest is network transfer. At typical payload sizes (a few KB for a single order or invoice), the validation adds single-digit milliseconds. For a 30-line order at 0.3MB, the total round-trip was 1,960ms with most of that being upload time over a standard connection.</p>
<p>Server-side processing is fast because Flow runs validation in-memory: schema validation against a pre-compiled variable array, then expression evaluation through a recursive descent parser for each business rule. No database queries during validation. Persistence runs asynchronously after the response is sent — payloads go to S3, run metadata goes to Postgres, both fire-and-forget so the agent gets its response immediately.</p>
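<p>To make the in-memory rule evaluation concrete, here is a toy evaluator — not Flow's actual recursive descent parser, just a sketch that handles the <code>field OP number</code> comparisons used in the gate above, including a <code>.length</code> accessor for arrays:</p>

```python
import operator, re

OPS = {">": operator.gt, ">=": operator.ge, "<": operator.lt,
       "<=": operator.le, "==": operator.eq, "!=": operator.ne}
TOKEN = re.compile(r"\s*([A-Za-z_][\w.]*|[<>!=]=|[<>]|-?\d+(?:\.\d+)?)")

def compile_rule(expr: str, message: str):
    """Compile a 'field OP number' expression into a payload predicate."""
    field, op, rhs = TOKEN.findall(expr)   # e.g. ["amount", ">", "0"]
    def check(payload):
        value = payload
        for part in field.split("."):
            # support array length access, e.g. line_items.length
            value = len(value) if part == "length" else value[part]
        return OPS[op](value, float(rhs))
    return lambda p: None if check(p) else {"rule": expr, "message": message}

rules = [compile_rule("amount > 0", "Order amount must be positive"),
         compile_rule("amount <= 100000", "Single order cannot exceed $100,000"),
         compile_rule("line_items.length > 0", "Must have at least one line item")]

payload = {"amount": -500, "line_items": [{"sku": "A", "qty": 1}]}
errors = [e for rule in rules if (e := rule(payload))]
print(errors)
```

<p>Because the rules are compiled once at publish time and evaluation is pure in-memory function calls, per-rule cost stays in microseconds — which is how ten rules over a 900-line payload fit inside the ~50ms budget.</p>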
<p>For Kafka-speed pipelines where even 50ms matters, Flow also supports webhook delivery — validation happens, and the validated payload is pushed directly to your endpoint without the agent needing to relay it. That eliminates the agent-as-middleman entirely.</p>
<h2>All Three Layers Together</h2>
<p>Here's how I'd architect an agentic pipeline with all three layers working together:</p>
<img src="https://cdn.hashnode.com/uploads/covers/698ea54a73da603a1b59230b/73794ee6-d2cc-4567-be85-865d998f75c9.png" alt="Enterprise Pipeline" style="display:block;margin:0 auto" />

<p>Confluent handles data-in-motion: live events, governed, streaming at scale. Cerebras on Bedrock processes those events fast — the disaggregated Trainium+WSE architecture means agents produce structured output at speeds that make real-time workflows practical. Flow validates that output against your schema and business rules, returns errors for self-correction or a tamper-proof <code>validation_id</code> on success. The downstream system verifies the <code>validation_id</code> before accepting the data.</p>
<p>Three separate problems, three separate layers. Each one does its part well. Confluent ensures the agent gets good data. Cerebras ensures the agent processes it fast. Flow ensures the output is correct before it reaches production systems.</p>
<h2>Why This Matters Now</h2>
<p>I want to be clear: Flow doesn't replace anything IBM, Confluent, AWS, or Cerebras are building. They're solving data infrastructure and inference speed — foundational problems that every enterprise needs addressed. These are massive, hard engineering challenges, and both the acquisition and the partnership reflect the kind of investment this space deserves.</p>
<p>What Flow adds is a complementary output validation layer. As agents move from experimental to production, and as the data flowing through them gets faster and the inference gets cheaper, the volume of agent-generated outputs hitting downstream systems is going to increase significantly. Having a validation checkpoint in that pipeline — one that catches domain-specific errors, enforces business rules, and provides tamper-proof verification — becomes more valuable as the rest of the stack gets faster.</p>
<p>AWS's Agentic AI Security Scoping Matrix (published November 2025) calls out many of the capabilities Flow provides: approval gateway enforcement, agent controls, audit trails, agency perimeters. We've <a href="https://blog.rynko.dev/how-rynko-flow-maps-to-the-aws-agentic-ai-security-scoping-matrix">mapped Flow against every scope in that framework</a> — it covers Scopes 2 and 3 well, with partial coverage at Scope 4 where fully autonomous agents need capabilities beyond what a validation gateway provides alone.</p>
<p>If you're building agentic workflows on Kafka, Bedrock, or both, try dropping a <a href="https://app.rynko.dev/flow/gates">Flow gate</a> between your agent and your downstream system. The free tier gives you 500 validation runs per month and three gates — enough to see how output validation fits into your pipeline.</p>
<hr />
<p><em>Rynko Flow is a validation gateway for AI agent outputs.</em> <a href="https://app.rynko.dev/signup"><em>Try it free</em></a> <em>or</em> <a href="https://docs.rynko.dev/flow/getting-started"><em>read the docs</em></a><em>.</em></p>
]]></content:encoded></item><item><title><![CDATA[Teaching Gates to Learn: How We Built Intelligence Into Rynko Flow]]></title><description><![CDATA[The key insight: When agents fail and retry without clear guidance, it's not a minor inconvenience — it's a reliability failure. Every failed correction loop is a moment where your automation is stuck]]></description><link>https://blog.rynko.dev/teaching-gates-to-learn-how-we-built-intelligence-into-rynko-flow</link><guid isPermaLink="true">https://blog.rynko.dev/teaching-gates-to-learn-how-we-built-intelligence-into-rynko-flow</guid><category><![CDATA[rynko]]></category><category><![CDATA[AI]]></category><category><![CDATA[agentic AI]]></category><category><![CDATA[mcp]]></category><category><![CDATA[#AIgovernance]]></category><category><![CDATA[Reliability]]></category><category><![CDATA[software architecture]]></category><category><![CDATA[#llmops]]></category><category><![CDATA[observability]]></category><category><![CDATA[json-schema]]></category><category><![CDATA[Devops]]></category><dc:creator><![CDATA[Rynko Dev]]></dc:creator><pubDate>Mon, 16 Mar 2026 06:57:29 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/698ea54a73da603a1b59230b/b511fcd8-54e0-4099-92b3-7c9855970b58.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote>
<p><strong>The key insight:</strong> When agents fail and retry without clear guidance, it's not a minor inconvenience — it's a reliability failure. Every failed correction loop is a moment where your automation is stuck in a cycle of non-compliance. Gate Intelligence identifies the friction points that prevent agents from reaching a successful state, and feeds that knowledge back into the gate's contract so the next agent gets it right on the first try.</p>
</blockquote>
<hr />
<p>When we launched Flow, the pitch was straightforward: define a gate with a schema and business rules, point your agent at it, and Flow validates the payload before it reaches your database. Schema checks, expression-based rules, optional human approval. It works well — agents submit data, gates validate it, failed submissions come back with structured errors the agent can act on.</p>
<p>But we were sitting on a pile of useful data and not doing anything with it.</p>
<p>Every run Flow processes is stored: the input payload, the validation verdict, which rules passed, which failed, and the exact values that caused the failure. For agents that self-correct, we track the full chain — first attempt, second attempt, third, until either the agent gets it right or gives up. That's tens of thousands of data points per gate per week, and until now, it only showed up as numbers on the analytics dashboard.</p>
<p>Gate Intelligence turns that data into concrete suggestions for improving your gates.</p>
<h2>The Problem It Solves</h2>
<p>Here's a real pattern we saw in our own test gates. We set up an invoice validation gate with five business rules: amount must be positive, currency must be a 3-letter uppercase code, vendor can't be empty, line items must sum to the total, and there must be at least one line item. Standard stuff.</p>
<p>When we ran agents against it, 40% of first attempts failed the currency format rule. The agents were submitting "usd" and "eur" instead of "USD" and "EUR". Another 25% failed the line items sum check — off by a fraction of a cent due to floating-point rounding. 15% of submissions omitted the vendor field entirely.</p>
<p>None of these are schema problems. The schema says currency is a string, vendor is required, amount is a number. All correct. The issue is that the gate's documentation and rules don't give agents enough context to get it right on the first try. The currency rule says "must equal its own uppercase version" — technically precise, but a Claude or GPT model reading the MCP tool description doesn't know that means "must be uppercase ISO 4217."</p>
<p>When agents fail and retry without that context, the automation isn't saving time — it's stuck. Each failed loop is a moment where your pipeline is spinning instead of producing results. If an agent needs five attempts and 45 seconds to pass a rule that a single well-placed hint would have fixed on the first try, the gate itself is the bottleneck.</p>
<p>Gate Intelligence identifies these patterns automatically and tells you what to do about them.</p>
<img src="https://cdn.hashnode.com/uploads/covers/698ea54a73da603a1b59230b/25f93cf8-18c2-4b8c-8317-e88167aff24c.png" alt="The gate configurator showing the 5-step card layout (Details → Hints → Schema → Processing → Delivery)" style="display:block;margin:0 auto" />

<h2>What It Computes</h2>
<p>Every hour, a background job runs for each active gate. It analyzes the last 7 days of runs and computes six metrics:</p>
<p><strong>Per-rule failure rates</strong> — what percentage of first-attempt submissions fail each rule, with trend direction compared to the previous 7-day window. If your "amount must be positive" rule went from 5% failure to 15%, that's flagged as trending up.</p>
<p><strong>Common failure values</strong> — the actual values agents submitted that caused failures. For the currency rule, this surfaces "usd", "eur", "gbp" as the top offenders. For a numeric rule, it might show 0, -1, or 99999.999. These values are what make suggestions actionable — instead of "rule X fails a lot," the system can say "agents are submitting lowercase currency codes."</p>
<p><strong>Field omission rates</strong> — how often required schema fields are missing from submissions. A 30% omission rate on the vendor field means agents don't realize it's required, or the field name isn't clear enough.</p>
<p><strong>Chain convergence</strong> — of all the failed submissions that triggered a correction chain, what percentage eventually succeeded? If agents submit, fail, retry, fail again, and give up 70% of the time, that's a fundamental reliability problem. A 30% convergence rate doesn't just mean "some retries" — it means your automation succeeds less than a third of the time. For any system that's supposed to run autonomously, that's a non-starter.</p>
<p><strong>Average chain length and time-to-correction</strong> — how many attempts does it take, and how long does the cycle last? Two attempts averaging 3 seconds is healthy. Five attempts averaging 45 seconds means the agent is struggling.</p>
<img src="https://cdn.hashnode.com/uploads/covers/698ea54a73da603a1b59230b/8eae32fc-7bd2-4261-b904-bd4068650fb0.png" alt="The three analysis cards (Rule Failure Rates, Field Omissions, Self-Correction Chains) showing real data with trend arrows and color-coded metrics" style="display:block;margin:0 auto" />
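<p>The chain metrics reduce to simple arithmetic over stored run chains. A minimal sketch, assuming each chain is recorded as an ordered list of verdicts (the actual storage schema is a Rynko internal):</p>

```python
def chain_metrics(chains):
    """Convergence and average length over correction chains.

    Each chain is the ordered list of verdicts for one logical
    submission, e.g. ["fail", "fail", "pass"] — first attempt plus retries.
    """
    retried = [c for c in chains if c[0] == "fail"]
    if not retried:
        return {"convergence": 1.0, "avg_chain_length": 1.0}
    converged = sum(1 for c in retried if c[-1] == "pass")
    return {"convergence": converged / len(retried),
            "avg_chain_length": sum(len(c) for c in retried) / len(retried)}

chains = [["pass"],                   # first-attempt success, no chain
          ["fail", "fail", "pass"],   # converged after two retries
          ["fail", "fail", "fail"],   # agent gave up
          ["fail", "pass"]]           # converged after one retry
metrics = chain_metrics(chains)       # convergence 2/3, avg length 8/3
```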

<h2>Pattern Detection</h2>
<p>Raw metrics tell you <em>what</em> is failing. Pattern detection tells you <em>why</em>.</p>
<p>The system examines common failure values and classifies them into four fixable patterns:</p>
<ul>
<li><p><strong>Case mismatch</strong> — the submitted value is a lowercase version of what's expected. "usd" vs "USD", "active" vs "Active". This usually means the gate needs to either make the rule case-insensitive or add explicit guidance about expected casing.</p>
</li>
<li><p><strong>Rounding tolerance</strong> — a numeric value is within 1% of the expected threshold but fails because of floating-point precision. An amount of 99.999 failing an exact equality check where the expected sum is 100.00. The fix is usually adding a small tolerance to the rule.</p>
</li>
<li><p><strong>Type coercion</strong> — a string representation of a number where a number is expected. The string "42" instead of the number 42. Common with agents that serialize JSON from natural language.</p>
</li>
<li><p><strong>Empty string</strong> — an empty string where a non-empty value is expected. Distinct from a missing field — the agent knows the field exists but doesn't have a value for it.</p>
</li>
</ul>
<p>These patterns feed into the suggestions. Instead of a generic "rule X fails 60% of the time," the suggestion says "the currency format rule fails 60% of the time — agents submit lowercase currency codes (usd, eur, gbp). Consider adding a note that currency must be uppercase ISO 4217."</p>
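<p>The classification itself is mechanical once you have the submitted value and the expected one. A sketch of the four checks — the function name and the 1% rounding threshold are illustrative choices, not Flow's exact implementation:</p>

```python
def classify_failure(submitted, expected):
    """Classify a failing value into one of the four fixable patterns."""
    if isinstance(submitted, str) and submitted == "":
        return "empty_string"       # field present but blank
    if (isinstance(submitted, str) and isinstance(expected, str)
            and submitted != expected
            and submitted.lower() == expected.lower()):
        return "case_mismatch"      # "usd" vs "USD"
    if isinstance(submitted, str) and isinstance(expected, (int, float)):
        try:
            float(submitted)
            return "type_coercion"  # "42" where 42 was expected
        except ValueError:
            pass
    if (isinstance(submitted, (int, float)) and isinstance(expected, (int, float))
            and expected != 0
            and abs(submitted - expected) / abs(expected) <= 0.01):
        return "rounding_tolerance" # within 1% of the threshold
    return None                     # no recognized pattern

assert classify_failure("usd", "USD") == "case_mismatch"
assert classify_failure(99.999, 100.00) == "rounding_tolerance"
assert classify_failure("42", 42) == "type_coercion"
assert classify_failure("", "Acme") == "empty_string"
```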
<h2>Suggestions and the Intelligence Tab</h2>
<p>Each gate now has an Intelligence tab alongside Configuration and Performance. The tab shows a summary bar with insight counts by severity, per-rule failure rates with trend arrows, field omission rates, chain convergence metrics, and a health trend chart built from historical snapshots.</p>
<p>Below the analysis cards, concrete suggestions appear as dismissable cards with three severity levels:</p>
<table>
<thead>
<tr>
<th>Severity</th>
<th>Trigger</th>
<th>Example</th>
</tr>
</thead>
<tbody><tr>
<td>Critical</td>
<td>Rule fails &gt;50% of first attempts</td>
<td>"Currency format fails 60% — add format guidance"</td>
</tr>
<tr>
<td>Critical</td>
<td>Chain convergence below 50%</td>
<td>"Agents give up on this gate 70% of the time"</td>
</tr>
<tr>
<td>Warning</td>
<td>Required field missing &gt;30%</td>
<td>"Vendor omitted in 38% of submissions"</td>
</tr>
<tr>
<td>Info</td>
<td>Rule never fails (500+ runs)</td>
<td>"This rule may be redundant — 0 failures in 600 runs"</td>
</tr>
<tr>
<td>Info</td>
<td>All rules &gt;95% success</td>
<td>"Excellent validation performance"</td>
</tr>
</tbody></table>
<p>Each suggestion has three actions: <strong>Apply</strong>, <strong>Dismiss</strong>, and <strong>Snooze</strong> (hide for 7 days).</p>
<img src="https://cdn.hashnode.com/uploads/covers/698ea54a73da603a1b59230b/4f7aca74-9266-4283-8caa-ee7a3c97aa84.png" alt="Close-up of a critical-severity suggestion card showing the Apply/Dismiss/Snooze action buttons and the severity coloring" style="display:block;margin:0 auto" />

<h2>Version-Controlled Hints</h2>
<p>This is where the architecture gets interesting. The first version of "Apply" directly modified the gate's description field — injecting hint text like "Common mistakes: agents submit lowercase currency codes." It worked, but it was a bad design for three reasons.</p>
<p>First, no version control. The description change bypassed the gate's draft/publish pipeline. In regulated environments — banking, healthcare, insurance — operators need to know exactly when and why a gate's contract changed. A direct write to the description is invisible in the version history.</p>
<p>Second, no audit trail. If an agent's behavior shifts after someone clicks "Apply" (for better or worse), there's no correlation between the click and the behavior change.</p>
<p>Third, no review step. The hint goes live immediately. If Gate Intelligence generates five suggestions and the operator clicks Apply on all of them, five changes hit production with no review.</p>
<p>So we changed the approach. Hints are now a first-class versioned field on the gate — stored alongside the schema, business rules, and identity key fields. When you click "Apply" on a suggestion, it adds the hint text to the gate's <strong>draft</strong> version. If no draft exists, it creates one. The hint doesn't go live until you review it in the gate configurator and publish.</p>
<p>The gate configurator now has a dedicated "Hints" panel sitting between the Details and Schema steps — visible at a glance without opening any dialog. You can see what Intelligence suggested, edit the text, add your own custom hints, or remove ones you don't want. When you're satisfied, you publish the gate version — which goes through the existing audit log, resets circuit breakers, and notifies connected MCP sessions that the tool description has changed.</p>
<p>This means hints get the same treatment as any other gate configuration change: versioned, auditable, rollbackable.</p>
<img src="https://cdn.hashnode.com/uploads/covers/698ea54a73da603a1b59230b/58c42e63-9da0-4c17-8712-ecf84919eb7e.png" alt="Screenshot of the Hints Panel in the gate configurator, showing 2-3 hints between the Define Details and Schema &amp; Validate step cards" style="display:block;margin:0 auto" />

<h2>How Hints Reach the Agent</h2>
<p>The MCP tool description for each gate is assembled from three sources:</p>
<pre><code class="language-markdown">Submit data to Invoice Validation
Validates invoice payloads before processing to the ERP system.

Business rules:
- amount_positive: Amount must be greater than zero
- currency_format: Currency must be valid ISO 4217
- line_items_match: Line item totals must equal invoice amount

--- Best Practices ---
- Currency must be uppercase ISO 4217 (e.g., USD, EUR, not usd)
- Line item totals must sum to the invoice amount within ±0.01
- Vendor name is required — do not submit an empty string
</code></pre>
<p>The gate description is always included. Business rules are always appended so the agent knows the constraints. The "Best Practices" section only appears when the auto-hints toggle is enabled on the gate — it's off by default because it changes what agents see, and the gate owner should make that decision deliberately.</p>
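<p>The assembly logic can be sketched in a few lines. Field names like <code>auto_hints</code> here are illustrative — the point is that everything is read from the single published gate record, with the Best Practices section gated on the toggle:</p>

```python
def build_tool_description(gate: dict) -> str:
    """Assemble an MCP tool description from a published gate record."""
    lines = [f"Submit data to {gate['name']}", gate["description"], "",
             "Business rules:"]
    lines += [f"- {r['id']}: {r['message']}" for r in gate["rules"]]
    # Hints were copied onto the gate record at publish time, so this is
    # an array read — no query against the insights table. They only
    # appear when the auto-hints toggle is enabled on the gate.
    if gate.get("auto_hints") and gate.get("hints"):
        lines += ["", "--- Best Practices ---"]
        lines += [f"- {h}" for h in gate["hints"]]
    return "\n".join(lines)

gate = {"name": "Invoice Validation",
        "description": "Validates invoice payloads before the ERP system.",
        "rules": [{"id": "amount_positive",
                   "message": "Amount must be greater than zero"}],
        "auto_hints": True,
        "hints": ["Currency must be uppercase ISO 4217 (e.g., USD, not usd)"]}
print(build_tool_description(gate))
```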
<p>The key improvement from the original architecture: reading hints is now a simple array read from the published gate record, not a database query against the insights table. The old approach queried the insights service every time an MCP tool description was built, which meant a database hit on every tool list request. The new approach reads directly from the gate record — the hints were copied there at publish time. This matters because MCP tool descriptions are assembled on every session connection and tool refresh. Moving from a database lookup to a direct read keeps the tool-build path fast and predictable, which is critical when you're serving multiple concurrent agent sessions.</p>
<h2>Historical Snapshots and Trend Analysis</h2>
<p>Each time the intelligence job runs, it saves a snapshot with aggregate metrics: total runs, overall failure rate, per-rule failure rates, field omission rates, chain convergence, and suggestion counts. This creates a time series of gate health that's visible in the Intelligence tab as a bar chart.</p>
<p>The chart color-codes each bar: red for failure rates above 50%, amber for 20–50%, and the primary color below 20%. Hovering shows the exact values and date. Over time, you can see whether applying suggestions actually improved the gate's success rate — which is the whole point.</p>
<img src="https://cdn.hashnode.com/uploads/covers/698ea54a73da603a1b59230b/5f481e9f-4445-4b8e-88c1-d89931da088e.png" alt="The health trend chart showing failure rate bars over time, with visible improvement (red bars transitioning to amber/green) after hints were applied " style="display:block;margin:0 auto" />

<h2>What's Next</h2>
<p>Gate Intelligence today is reactive — it analyzes historical data and suggests improvements. There are two directions on the roadmap:</p>
<p><strong>Proactive schema evolution</strong>: if Intelligence detects that agents consistently submit a field that isn't in the schema (say, a tax rate keeps appearing in payloads that only define amount and currency), it suggests adding it. This requires analyzing raw payloads beyond just validation results, which is a different data pipeline.</p>
<p><strong>AI Judge integration</strong>: the current business rules are deterministic expressions. We're building an "AI Judge" mode that evaluates payloads using an LLM for semantic checks that can't be expressed as expressions — things like "the description should be professional in tone" or "the address looks like a real postal address." Intelligence would track AI Judge pass/fail rates the same way it tracks expression rules, but the suggestion engine would need to account for the non-deterministic nature of LLM evaluation.</p>
<p>Neither is shipped yet, but the foundation is designed for them. The analysis pipeline, suggestion engine, versioned hints, and snapshot time series are all extensible — adding a new data source feeds into the same pattern detection and suggestion framework without rearchitecting anything.</p>
<h2>Getting Started</h2>
<p>If you have an active Flow gate with at least 50 runs, Intelligence will start generating insights on the next hourly cycle. Open any gate, click the Intelligence tab, and hit Refresh to trigger analysis immediately.</p>
<p>We're rolling it out gradually — it's available today for all paid tiers (Starter, Growth, Scale) and will be available on the Free tier once we're confident in the compute overhead.</p>
<img src="https://cdn.hashnode.com/uploads/covers/698ea54a73da603a1b59230b/46e6661e-5247-4b31-934b-d71afd04eae1.png" alt="The full gate detail page showing all three tabs (Configuration, Performance, Intelligence) with the Intelligence tab active — gives readers the full picture of where this lives in the product" style="display:block;margin:0 auto" />

<p>Whether your agents are running on AWS Bedrock, OpenAI's API, or any other provider — the validation layer is where reliability is won or lost. If your gates are rejecting 60% of first attempts and your correction chains converge less than half the time, your automation isn't autonomous. It's just expensive retry logic. Gate Intelligence gives you the data to fix that, and the versioned hints to make the fix stick.  </p>
<p>Flow docs: <a href="https://docs.rynko.dev/flow">docs.rynko.dev/flow</a></p>
<p>Get started: <a href="https://app.rynko.dev/signup">app.rynko.dev/signup</a> — free tier, 500 runs/month, 3 gates, no credit card.</p>
]]></content:encoded></item><item><title><![CDATA[How Rynko Flow Maps to the AWS Agentic AI Security Scoping Matrix]]></title><description><![CDATA[When AWS published the Agentic AI Security Scoping Matrix in November 2025, it put language around something we'd been building toward with Rynko Flow for a few months. The framework categorizes agent]]></description><link>https://blog.rynko.dev/how-rynko-flow-maps-to-the-aws-agentic-ai-security-scoping-matrix</link><guid isPermaLink="true">https://blog.rynko.dev/how-rynko-flow-maps-to-the-aws-agentic-ai-security-scoping-matrix</guid><category><![CDATA[ai agents]]></category><category><![CDATA[Security]]></category><category><![CDATA[AWS]]></category><category><![CDATA[rynko]]></category><category><![CDATA[rynko-flow]]></category><category><![CDATA[mcp]]></category><category><![CDATA[Validation]]></category><category><![CDATA[agentic AI]]></category><category><![CDATA[compliance ]]></category><dc:creator><![CDATA[Rynko Dev]]></dc:creator><pubDate>Thu, 12 Mar 2026 18:49:13 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/698ea54a73da603a1b59230b/887192ea-5f50-40f8-8fb7-887d3ab810e5.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When AWS published the <a href="https://aws.amazon.com/blogs/security/the-agentic-ai-security-scoping-matrix-a-framework-for-securing-autonomous-ai-systems/">Agentic AI Security Scoping Matrix</a> in November 2025, it put language around something we'd been building toward with Rynko Flow for a few months. The framework categorizes agentic AI systems into four scopes based on two axes — agency (what the agent can do) and autonomy (how independently it acts) — and maps six security dimensions across each scope. It's been referenced by OWASP, CoSAI, and multiple systems integrators since publication.</p>
<p>I read through it and realized we'd already implemented a significant portion of what it recommends, particularly at Scopes 2 through 4. But I also found gaps worth being honest about. This post walks through each scope, maps it to Flow's current capabilities, and flags where we're still building.</p>
<h2>A Quick Primer on What Flow Does</h2>
<p>For context if you haven't seen Flow before: Rynko Flow is a validation gateway that sits between AI agents and downstream systems. You define a gate with a JSON schema and business rules, your agent submits payloads to it, and Flow validates the data before it proceeds. Failed validations return structured errors the agent can use to self-correct. Successful validations return a tamper-proof <code>validation_id</code>. Optionally, you can add human approval steps and webhook delivery.</p>
<p>The pipeline:</p>
<img src="https://cdn.hashnode.com/uploads/covers/698ea54a73da603a1b59230b/d6313367-0860-4082-abe2-cadf8d3ad0e3.png" alt="" style="display:block;margin:0 auto" />

<p>Gates are exposed as MCP tools, so agents discover and use them without per-gate integration code. We also support REST API submission for non-MCP agents.</p>
<p>One distinction that matters throughout this post: <strong>Flow is a validation checkpoint, not a centralized orchestrator.</strong> It doesn't manage agent workflows or decide what runs next. The agent (or whatever framework orchestrates it — LangGraph, CrewAI, your own code) decides when to call a gate and what to do with the result. Flow's job is narrower: validate the data, return a verdict, and track what happened.</p>
<p>That said, Flow's webhook delivery does enable a form of event-driven orchestration — when a payload passes validation, the webhook can trigger the next agent or service in a pipeline, creating loosely-coupled handoffs without a central coordinator. This means Flow covers some of the AWS framework's security dimensions deeply (audit, agent controls, agency perimeters) and has a partial but real story for orchestration through webhooks.</p>
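<p>To make that handoff concrete, here's a minimal sketch of a webhook handler that advances a pipeline only when Flow reports success. The event field names (<code>status</code>, <code>payload</code>) and the <code>start_next_stage</code> helper are illustrative assumptions, not Flow's documented delivery format:</p>

```python
def start_next_stage(payload: dict) -> str:
    # Placeholder for whatever comes next in your pipeline:
    # enqueue a job, invoke another agent, POST to a service.
    return f"processing order for {payload['vendor']}"

def handle_flow_webhook(event: dict) -> str:
    # Only act on successfully validated submissions, so the
    # pipeline never advances on data that failed the gate.
    if event.get("status") != "validated":
        return "ignored"
    return start_next_stage(event["payload"])
```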
<p>With that context, here's how Flow maps to each scope in the AWS framework.</p>
<h2>Scope 1: No Agency</h2>
<p><strong>What AWS describes:</strong> Systems with human-initiated processes and no autonomous change capabilities. The agent follows predefined paths, processes data within workflow nodes, but can't modify anything. Read-only operations. Fixed execution paths.</p>
<p><strong>Security focus:</strong> Process integrity, boundary enforcement, preventing agents from exceeding their boundaries.</p>
<p><strong>How Flow maps here:</strong> Flow isn't really designed for Scope 1. If your agent is purely read-only and follows a fixed workflow with no ability to produce output that reaches external systems, you don't need a validation gateway — there's nothing to validate.</p>
<p>That said, Flow's schema validation does share DNA with one of Scope 1's key requirements: "input validation at each workflow step boundary." If you're building a pipeline where each stage processes data and hands it to the next, you could place a Flow gate between stages to validate that each node's output conforms to the expected shape. But that's using Flow as plumbing, not as an agent governance layer.</p>
<p><strong>Flow coverage: Minimal — and that's fine.</strong> Scope 1 agents don't produce autonomous outputs.</p>
<h2>Scope 2: Prescribed Agency</h2>
<p><strong>What AWS describes:</strong> Human-initiated, human-approved agentic actions. Agents can gather information, analyze data, and prepare recommendations, but all actions of consequence require explicit human approval. This is the "human in the loop" (HITL) scope.</p>
<p><strong>Key characteristics:</strong></p>
<ul>
<li><p>Agents can execute change with human review and approval</p>
</li>
<li><p>Real-time human oversight with approval workflows</p>
</li>
<li><p>Bidirectional interaction — agents can ask humans for context</p>
</li>
<li><p>Audit trails of all human approval decisions</p>
</li>
</ul>
<p><strong>Security focus:</strong> Securing approval workflows, preventing agents from bypassing human authorization, and maintaining oversight effectiveness.</p>
<p><strong>How Flow maps here:</strong> This is where Flow starts to fit well. The approval workflow was one of the first features we built into Flow, and it maps directly to what the AWS framework calls "approval gateway enforcement."</p>
<p>Here's how each Scope 2 security dimension looks in Flow:</p>
<table>
<thead>
<tr>
<th>Security Dimension</th>
<th>Scope 2 Requirement</th>
<th>Flow Implementation</th>
</tr>
</thead>
<tbody><tr>
<td><strong>Identity context</strong></td>
<td>User auth, service auth, human identity verification for approvals</td>
<td>JWT auth for dashboard users, API key auth for agents (scoped to team/workspace), magic-link reviewer identity via HMAC-SHA256 signed tokens</td>
</tr>
<tr>
<td><strong>Data, memory, &amp; state protection</strong></td>
<td>Role-based access control, human approval workflows, read-mostly permissions for agents</td>
<td>Workspace-scoped gates, team-based RBAC, agents can only submit — they can't modify gate schemas or approve their own runs</td>
</tr>
<tr>
<td><strong>Audit &amp; logging</strong></td>
<td>Human decision audit trails, agent recommendation logging, approval process tracking</td>
<td>Every run logged with full payload, per-rule validation verdicts, approval decisions (approve/reject) with reviewer identity and timestamp</td>
</tr>
<tr>
<td><strong>Agent &amp; FM controls</strong></td>
<td>Approval gateway enforcement, extended session monitoring</td>
<td>Gate validation = approval gateway. MCP session tracking with <code>mcpSessionId</code> ties agent submissions to a specific session. Circuit breaker monitors session health</td>
</tr>
<tr>
<td><strong>Agency perimeters &amp; policies</strong></td>
<td>Human-validated constraint changes, time-bound elevated access, multi-step validation</td>
<td>Gate schema versioning (draft → publish cycle means a human approves schema changes). Magic links expire in 72 hours. Schema validation + business rules = multi-step validation</td>
</tr>
<tr>
<td><strong>Orchestration</strong></td>
<td>Multi-step workflow orchestration, approval-gated tool access, human-validated tool chains</td>
<td>Flow doesn't centrally orchestrate — the agent or framework (LangGraph, CrewAI, etc.) decides what to call and when. However, Flow's webhook delivery provides an event-driven handoff mechanism: when a submission passes validation (and approval, if configured), the validated payload is pushed to a webhook endpoint that can trigger the next agent, tool, or service. This enables loosely-coupled pipeline orchestration without a central orchestrator — each gate validates one stage and hands off to the next via webhook</td>
</tr>
</tbody></table>
<p>The Scope 2 implementation consideration that stood out to me was "time-bounded approval tokens with automatic expiration." We built this: magic links for external reviewers are HMAC-SHA256 signed, expire in 72 hours, and are single-use for approval actions. The reviewer doesn't need a Rynko account — they click the link, see the payload rendered with safe content (sanitized HTML/Markdown), and approve or reject.</p>
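<p>The general pattern behind time-bounded signed tokens is worth sketching. This is not Rynko's actual implementation — the secret, token layout, and TTL below are illustrative — but it shows how HMAC-SHA256 yields a link the client can read but cannot forge or extend:</p>

```python
import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # illustrative; never leaves the server

def make_token(run_id: str, ttl_seconds: int = 72 * 3600) -> str:
    # Token = payload plus an HMAC-SHA256 signature over the payload.
    # The client can read the payload but can't mint a valid signature.
    expires = int(time.time()) + ttl_seconds
    payload = f"{run_id}.{expires}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}.{sig}"

def verify_token(token: str) -> bool:
    payload, _, sig = token.rpartition(".")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # tampered or forged
    expires = payload.rpartition(".")[2]
    return int(expires) > time.time()  # reject expired links
```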
<p>One gap: the paper mentions "cryptographically signed approval decisions." Our approval decisions are stored in the database with reviewer identity and timestamp, but we don't produce a standalone cryptographic proof of the decision. It's an area where we could do more.</p>
<p><strong>Flow coverage: Strong.</strong> Approval workflows, audit trails, scoped identity, and time-bounded reviewer access align closely with Scope 2 requirements.</p>
<h2>Scope 3: Supervised Agency</h2>
<p><strong>What AWS describes:</strong> Human-initiated, but the agent executes autonomously. The agent makes decisions and takes actions without further approval. Humans define objectives and trigger execution, but agents operate independently through dynamic planning and tool usage. Optional human intervention points exist, but the agent can proceed without them.</p>
<p><strong>Key characteristics:</strong></p>
<ul>
<li><p>Agents can execute change with no (or optional) human review</p>
</li>
<li><p>Dynamic planning and decision-making during execution</p>
</li>
<li><p>Direct access to external APIs and systems</p>
</li>
<li><p>Persistent memory across extended sessions</p>
</li>
<li><p>Autonomous tool selection and orchestration within defined boundaries</p>
</li>
</ul>
<p><strong>Security focus:</strong> Maintaining control during autonomous execution, scope management, and behavioral monitoring.</p>
<p><strong>How Flow maps here:</strong> This is Flow's primary operating mode. Most teams using Flow today have agents that submit data autonomously — no human in the loop — and rely entirely on the gate's schema and business rules to catch problems. The agent self-corrects from structured errors, and Flow's circuit breaker intervenes if the agent enters a failure loop.</p>
<table>
<thead>
<tr>
<th>Security Dimension</th>
<th>Scope 3 Requirement</th>
<th>Flow Implementation</th>
</tr>
</thead>
<tbody><tr>
<td><strong>Identity context</strong></td>
<td>Agent authentication, identity delegation for autonomous actions</td>
<td>API keys authenticate agents at team/workspace scope. MCP sessions bind agent identity to a persistent session with its own state</td>
</tr>
<tr>
<td><strong>Data, memory, &amp; state protection</strong></td>
<td>Context-aware authorization, just-in-time privilege elevation, dynamic permission boundaries</td>
<td>Schema + business rules provide context-aware authorization — the gate evaluates each submission based on the data content, not just the caller's identity. Gate versioning allows operators to update what's accepted without downtime (publish a new version to tighten or relax rules). Flow doesn't provide privilege elevation — agents have the same permissions throughout a session</td>
</tr>
<tr>
<td><strong>Audit &amp; logging</strong></td>
<td>Comprehensive action logging, reasoning chain capture, extended session tracking</td>
<td>Full run audit trail (payload, validation verdicts, processing time). Self-correction chain tracking links retries — you can see the full sequence of submit → fail → correct → resubmit as a single chain with a shared <code>correlationId</code>. MCP session IDs track activity across an agent's full conversation</td>
</tr>
<tr>
<td><strong>Agent &amp; FM controls</strong></td>
<td>Container isolation, long-running process management, tool invocation sandboxing</td>
<td>Gate validation acts as a sandbox for agent outputs — the agent can call <code>validate_*</code> tools, but every payload must pass through deterministic rules before it's accepted. Circuit breaker prevents runaway retry loops by pausing the gate after consecutive failures. Flow doesn't provide container isolation or manage the agent's process lifecycle — it only controls the validation boundary</td>
</tr>
<tr>
<td><strong>Agency perimeters &amp; policies</strong></td>
<td>Runtime constraint evaluation, resource scaling limits, automated safety checks</td>
<td>Business rules are evaluated at runtime against each submission — constraints are enforced per-payload, not just at setup time. Monthly run quotas cap total throughput per team. Circuit breaker acts as an automated safety check, tripping after N consecutive failures. These are static limits though — they don't adjust dynamically based on agent behavior</td>
</tr>
<tr>
<td><strong>Orchestration</strong></td>
<td>Dynamic tool orchestration, parallel execution paths, cross-system integration</td>
<td>Flow isn't a centralized orchestrator — it doesn't decide what runs next. But it supports two forms of integration: (1) MCP tool discovery, where agents find gates dynamically, and (2) webhook delivery, where validated payloads are pushed to downstream endpoints that can trigger the next step in a pipeline. This enables event-driven, loosely-coupled orchestration — gate A validates and webhooks to service B, which processes and submits to gate C. The sequencing emerges from the webhook chain, not from a central coordinator</td>
</tr>
</tbody></table>
<img src="https://cdn.hashnode.com/uploads/covers/698ea54a73da603a1b59230b/0e43d4b2-0994-4082-8aae-e008e9dfcb20.png" alt="Autonomous Self-Correction in Action: Rynko Flow guides the agent through two policy violations to a successful, compliant tool execution." style="display:block;margin:0 auto" />

<p>The self-correction chain tracking deserves specific mention here. The AWS paper talks about "reasoning chain capture" — being able to see why an agent made the decisions it did. Flow's chain tracking gives you a concrete version of this for validation: when an agent submits to a gate, fails, reads the errors, and resubmits, the entire sequence is linked by a <code>correlationId</code>. You can see exactly what the agent submitted each time, which rules it violated, what it fixed, and whether it eventually succeeded. In the webapp, chains are displayed as collapsible groups — the latest attempt shows as the primary row with a badge showing "3 attempts," and expanding reveals the full correction timeline.</p>
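<p>As a sketch of how chains can be consumed programmatically — for example, from an export of runs — submissions sharing a <code>correlationId</code> group into ordered attempt lists. The <code>attempt</code> and <code>status</code> field names here are assumptions for illustration:</p>

```python
from collections import defaultdict

def group_chains(runs: list[dict]) -> dict[str, list[dict]]:
    # Bucket runs by correlationId, then order each chain by attempt
    # so the last element is the latest (primary) submission.
    chains = defaultdict(list)
    for run in runs:
        chains[run["correlationId"]].append(run)
    for chain in chains.values():
        chain.sort(key=lambda r: r["attempt"])
    return dict(chains)
```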
<p>The circuit breaker is Flow's implementation of the "automated safety checks" requirement. When an agent keeps failing the same gate — tracked per session for MCP agents, per payload hash for REST agents — the circuit breaker trips after a configurable number of consecutive failures. The gate transitions to a paused state, the system sends in-app and email notifications to the gate creator, and all further submissions are blocked until the cooldown expires or a new gate version is published.</p>
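<p>Mechanically, the core of a consecutive-failure breaker is small. This sketch is illustrative — Flow's actual thresholds, cooldowns, and per-session bookkeeping differ — but it captures the trip-and-pause behavior:</p>

```python
class CircuitBreaker:
    """Illustrative consecutive-failure breaker; the defaults are made up."""

    def __init__(self, threshold: int = 5, cooldown_seconds: float = 900.0):
        self.threshold = threshold
        self.cooldown_seconds = cooldown_seconds
        self.consecutive_failures = 0
        self.paused_until = 0.0

    def record(self, passed: bool, now: float) -> None:
        if passed:
            self.consecutive_failures = 0  # any success resets the streak
            return
        self.consecutive_failures += 1
        if self.consecutive_failures >= self.threshold:
            self.paused_until = now + self.cooldown_seconds  # trip: pause gate

    def allows(self, now: float) -> bool:
        # Submissions are blocked until the cooldown expires (or, in
        # Flow's case, until a new gate version resets the breaker).
        return now >= self.paused_until
```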
<p>Here's what makes the circuit breaker interesting from the AWS framework perspective: it's an example of <strong>graceful degradation</strong>, which the paper calls out as a key architectural pattern. The paper says: "Design systems to automatically reduce autonomy levels when security events are detected." The circuit breaker does exactly this — when the agent can't produce valid output, Flow reduces the agent's effective autonomy by blocking further submissions and notifying the human operator.</p>
<p><strong>Flow coverage: Strong to comprehensive.</strong> Self-correction chains, circuit breaker, runtime validation, MCP session tracking, and structured audit trails address most Scope 3 requirements.</p>
<h2>Scope 4: Full Agency</h2>
<p><strong>What AWS describes:</strong> Fully autonomous AI that initiates its own activities based on environmental monitoring, learned patterns, or predefined conditions. No human triggers the agent — it operates continuously, makes independent decisions about when and how to act. The highest level of agency and risk.</p>
<p><strong>Key characteristics:</strong></p>
<ul>
<li><p>Self-directed activity initiation based on environmental triggers</p>
</li>
<li><p>Continuous operation with minimal human oversight</p>
</li>
<li><p>High to full degrees of autonomy in goal setting, planning, and execution</p>
</li>
<li><p>Dynamic interaction with multiple external systems and agents</p>
</li>
<li><p>Capability for recursive self-improvement</p>
</li>
</ul>
<p><strong>Security focus:</strong> Continuous behavioral validation, enforcing agency boundaries, preventing capability drift, and maintaining organizational alignment.</p>
<p><strong>How Flow maps here:</strong> Flow isn't a Scope 4 system itself — it doesn't initiate actions or make autonomous decisions. But it serves as a governance layer that Scope 4 agents submit to. The distinction matters: Flow doesn't control what the agent does; it controls what outputs the agent can successfully land in downstream systems. In the AWS framework's terms, Flow provides the "Advanced Deterministic Guardrails" that Scope 4 requires.</p>
<table>
<thead>
<tr>
<th>Security Dimension</th>
<th>Scope 4 Requirement</th>
<th>Flow Implementation</th>
<th>Gap</th>
</tr>
</thead>
<tbody><tr>
<td><strong>Identity context</strong></td>
<td>Dynamic identity lifecycle, federated auth, continuous identity verification, agent identity attestation</td>
<td>API key auth, MCP session binding</td>
<td>No agent identity attestation or dynamic identity lifecycle. Flow authenticates the agent but doesn't verify its internal state or attest to its identity to third parties</td>
</tr>
<tr>
<td><strong>Data, memory, &amp; state protection</strong></td>
<td>Behavioral authorization, adaptive access controls, continuous authorization validation</td>
<td>Every submission is checked against schema and business rules — this provides continuous authorization validation at the data level. Gate versioning lets operators evolve rules over time. Business rules reject outputs that violate constraints regardless of which agent submitted them</td>
<td>No ML-based adaptive controls — rules are deterministic, defined by humans. No behavioral authorization that learns from past patterns</td>
</tr>
<tr>
<td><strong>Audit &amp; logging</strong></td>
<td>Continuous behavioral logging, pattern analysis, predictive monitoring, automated incident correlation</td>
<td>Full run audit trail. Chain tracking correlates related submissions. Circuit breaker events log failure patterns. Archive sync for long-term retention</td>
<td>No predictive monitoring or ML-based pattern analysis. Circuit breaker counts failures but doesn't detect novel anomaly patterns</td>
</tr>
<tr>
<td><strong>Agent &amp; FM controls</strong></td>
<td>Behavioral analysis, anomaly detection, automated containment, self-healing security</td>
<td>Circuit breaker provides automated containment — pauses the gate and notifies operators when failures accumulate. Self-correction chain tracking gives visibility into agent retry patterns. Reset-on-publish clears stale circuit breaker state when new rules are deployed, which is closer to operational recovery than true self-healing</td>
<td>No behavioral analysis beyond failure counting. No anomaly detection on payload content. No automated response that adapts without human intervention</td>
</tr>
<tr>
<td><strong>Agency perimeters &amp; policies</strong></td>
<td>Self-adjusting boundaries, context-aware constraints, cross-system resource management, autonomous limit adaptation</td>
<td>Business rules provide context-aware constraints (evaluated per-payload). Circuit breaker provides a form of autonomous limit adaptation — it auto-pauses the gate without human action. Monthly quotas cap resource usage</td>
<td>No self-adjusting boundaries — rules are static until a human publishes a new version. No cross-system resource management</td>
</tr>
<tr>
<td><strong>Orchestration</strong></td>
<td>Autonomous multi-agent orchestration, cross-session learning, dynamic service discovery</td>
<td>MCP tool discovery lets agents find gates dynamically. Webhook delivery enables event-driven handoffs between stages — a validated payload can trigger the next agent or service without a central orchestrator. This supports loosely-coupled multi-agent pipelines</td>
<td>No centralized multi-agent coordination. No cross-session learning. No dynamic service discovery beyond MCP tool listing. Each gate is independent — Flow doesn't manage the pipeline topology</td>
</tr>
</tbody></table>
<p>I want to be transparent about where the gaps are, because the Scope 4 requirements are genuinely hard. The paper calls for "continuous monitoring with machine learning-based anomaly detection" and "automated response systems for behavioral deviations." Flow's circuit breaker is an automated response system, but it's simple — it counts consecutive failures. It doesn't analyze payload content for anomalies, detect drift in agent behavior patterns over time, or predict when an agent is likely to start producing invalid output.</p>
<p>That said, Flow provides the infrastructure that makes Scope 4 deployment safer:</p>
<ol>
<li><p><strong>Every submission is validated</strong> — the agent can't skip the gate. Schema + business rules are deterministic, not probabilistic. A Scope 4 agent that submits an order with a negative total gets rejected regardless of how autonomously it's operating.</p>
</li>
<li><p><strong>Self-correction is tracked</strong> — you can see whether a Scope 4 agent is self-correcting successfully (resilient autonomy) or spiraling into repeated failures (failing autonomy). Chain tracking gives you this visibility without instrumenting the agent itself.</p>
</li>
<li><p><strong>Automated containment</strong> — the circuit breaker pauses the gate when failures accumulate. This is the "failsafe mechanism that can halt operations when confidence drops" that the paper recommends for Scope 4.</p>
</li>
<li><p><strong>Human re-entry point</strong> — when the circuit breaker trips, the gate creator gets notified (in-app + email). This is the "human ability to inject strategic guidance without disrupting operations" pattern. The human publishes a new gate version (potentially with adjusted rules), which reactivates the gate and resets circuit breakers.</p>
</li>
</ol>
<p><strong>Flow coverage: Partial but meaningful.</strong> Flow provides the deterministic guardrails, automated containment, and audit infrastructure that Scope 4 requires. The gaps are in ML-based anomaly detection, agent identity attestation, and cross-session learning — areas that require capabilities beyond what a validation gateway provides on its own.</p>
<h2>The Six Security Dimensions — Summary Matrix</h2>
<p>Here's a consolidated view of how Flow maps across all four scopes and six dimensions:</p>
<table>
<thead>
<tr>
<th>Dimension</th>
<th>Scope 1</th>
<th>Scope 2</th>
<th>Scope 3</th>
<th>Scope 4</th>
</tr>
</thead>
<tbody><tr>
<td><strong>Identity</strong></td>
<td>N/A</td>
<td>JWT + API key + magic links</td>
<td>+ MCP session binding</td>
<td>Gap: no agent attestation</td>
</tr>
<tr>
<td><strong>Data &amp; state</strong></td>
<td>N/A</td>
<td>RBAC, workspace scoping</td>
<td>+ runtime schema validation, gate versioning</td>
<td>Gap: no adaptive controls</td>
</tr>
<tr>
<td><strong>Audit</strong></td>
<td>N/A</td>
<td>Run logs, approval trails</td>
<td>+ chain tracking, session tracking</td>
<td>Gap: no predictive monitoring</td>
</tr>
<tr>
<td><strong>Agent controls</strong></td>
<td>N/A</td>
<td>Approval gateway</td>
<td>+ circuit breaker, chain tracking (observes correction, doesn't drive it)</td>
<td>Gap: no behavioral analysis</td>
</tr>
<tr>
<td><strong>Agency perimeters</strong></td>
<td>N/A</td>
<td>Schema versioning, expiring tokens</td>
<td>+ runtime rules, quotas, circuit breaker</td>
<td>Gap: no self-adjusting boundaries</td>
</tr>
<tr>
<td><strong>Orchestration</strong></td>
<td>N/A</td>
<td>Webhook delivery enables event-driven handoffs</td>
<td>+ MCP tool discovery</td>
<td>Gap: no centralized orchestration or cross-session learning. Supports loosely-coupled pipelines via webhooks, not coordinated multi-agent workflows</td>
</tr>
</tbody></table>
<h2>The Key Architectural Patterns</h2>
<p>The AWS paper concludes with five architectural patterns for agentic AI deployments. Flow aligns with four of them:</p>
<p><strong>Progressive autonomy deployment</strong> — "Start with Scope 1 or 2 implementations and gradually advance." Flow supports this directly. A gate can start with approval workflows (Scope 2 — human reviews every submission). Once you're confident in the schema and rules, remove the approval step and let the agent operate autonomously (Scope 3). The gate's validation logic stays the same; you're just adjusting the human oversight level.</p>
<p><strong>Continuous validation loops</strong> — "Establish automated systems that continuously verify agent behavior against expected patterns." This is literally what Flow does. Every agent submission is validated against the gate's schema and business rules. The self-correction loop (submit → fail → read errors → fix → resubmit) is a continuous validation loop operating at the individual submission level.</p>
<p><strong>Human oversight integration</strong> — "Maintain meaningful human oversight through strategic checkpoints, behavioral reporting, and manual override capabilities." Flow's approval workflows are the strategic checkpoints. Chain tracking and circuit breaker notifications are the behavioral reporting. Gate versioning and manual pause/resume are the manual override capabilities.</p>
<p><strong>Graceful degradation</strong> — "Design systems to automatically reduce autonomy levels when security events are detected." The circuit breaker does this: when an agent accumulates consecutive failures, the gate pauses, notifications fire, and the agent's effective autonomy drops to zero until a human intervenes or the cooldown expires. The paper specifically recommends systems that "automatically inject tighter restrictions such as requiring more HITL or reducing the actions an agent can take" — this is exactly what happens when a gate transitions from auto-approve to paused.</p>
<p>The one pattern we don't cover well is <strong>layered security architecture</strong> — "defense in depth with security controls at multiple levels." Flow operates at the application layer (validating agent outputs). It doesn't provide network-level controls, model-level guardrails, or infrastructure-level isolation. Teams building Scope 3-4 systems need Flow as one layer in a broader security stack, not as the only layer.</p>
<h2>What We're Building Next</h2>
<p>Reading the AWS framework confirmed some things on our roadmap and added others:</p>
<p><strong>AI Judge (semantic validation)</strong> — Currently, Flow's validation is deterministic: JSON Schema types and expression-based business rules. For Scope 4 agents producing unstructured or semi-structured outputs, deterministic rules aren't always enough. We're building an LLM-based evaluation mode where a second model reviews the agent's output against criteria defined in natural language. This addresses the "continuous behavioral validation" gap at Scope 4.</p>
<p><strong>Approval timeout enforcement</strong> — Our approval workflow creates a <code>pending_approval</code> state, but there's no automatic expiration. The paper's Scope 2 recommendation of "time-bounded approval tokens with automatic expiration" applies here. We're adding a configurable timeout with auto-reject or auto-escalate behavior.</p>
<p><strong>Agent behavioral baselining</strong> — The paper's Scope 4 requirements for "pattern analysis" and "predictive monitoring" are beyond what a simple failure counter provides. We're exploring tracking submission patterns per agent (payload shape, submission frequency, validation pass rate) and flagging deviations. Not ML-based yet, but statistical baselines that surface when an agent's behavior changes.</p>
<h2>Where Flow Fits in Your Agentic Architecture</h2>
<p>Rynko Flow isn't a complete Scope 4 security stack — no single product is. But it provides the validation gateway, automated containment, and audit infrastructure that the AWS framework identifies as critical across Scopes 2-4.</p>
<p>If you're building agents that produce data destined for production systems — orders, invoices, tickets, customer records — Flow gives you deterministic guardrails that work regardless of which model or framework your agent uses. The gate doesn't care whether the agent is a Claude tool-use loop, a LangGraph pipeline, or a custom orchestrator. It validates the output, tracks the correction chain, and trips the circuit breaker if things go sideways.</p>
<p>The AWS framework is worth reading in full — it provides a structured way to think about where your agent sits on the agency/autonomy spectrum and what security controls you need at that level. And if you're at Scope 2 or above, a validation gateway isn't optional — it's core infrastructure for several of the six critical security dimensions.</p>
<p>Paper: <a href="https://aws.amazon.com/blogs/security/the-agentic-ai-security-scoping-matrix-a-framework-for-securing-autonomous-ai-systems/">The Agentic AI Security Scoping Matrix</a></p>
<p>Flow docs: <a href="https://docs.rynko.dev/flow">docs.rynko.dev/flow</a></p>
<p>Get started: <a href="https://app.rynko.dev/signup">app.rynko.dev/signup</a> — free tier, 500 runs/month, 3 gates, no credit card.</p>
]]></content:encoded></item><item><title><![CDATA[Adding Output Validation to Your LangGraph Agent with Rynko Flow]]></title><description><![CDATA[LangGraph gives you fine-grained control over your agent's execution graph — you define nodes, edges, and conditional routing. But one thing that's missing from most LangGraph tutorials is what happen]]></description><link>https://blog.rynko.dev/langgraph-flow-validation</link><guid isPermaLink="true">https://blog.rynko.dev/langgraph-flow-validation</guid><category><![CDATA[langgraph]]></category><category><![CDATA[ai agents]]></category><category><![CDATA[Validation]]></category><category><![CDATA[rynko]]></category><category><![CDATA[rynko-flow]]></category><category><![CDATA[Python]]></category><category><![CDATA[mcp]]></category><category><![CDATA[self-correction]]></category><category><![CDATA[pydantic]]></category><dc:creator><![CDATA[Rynko Dev]]></dc:creator><pubDate>Tue, 10 Mar 2026 13:50:43 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/698ea54a73da603a1b59230b/c6cf1c51-7563-4635-aa47-a960acb71427.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>LangGraph gives you fine-grained control over your agent's execution graph — you define nodes, edges, and conditional routing. But one thing that's missing from most LangGraph tutorials is what happens when a node produces bad data. The next node just receives it and either crashes or propagates the error downstream.</p>
<p>I ran into this when building an order processing pipeline with LangGraph. The extraction node would occasionally produce negative amounts, invalid currencies, or missing fields. The downstream nodes — pricing, invoicing, fulfillment — would silently process the bad data. By the time someone noticed, the damage was already in the database.</p>
<p>The typical fix is writing validation logic inside each node. That works, but it means every node carries its own schema checks, the validation rules are scattered across your codebase, and there's no central place to see what's failing and why.</p>
<p>So I hooked up Rynko Flow as an external validation step in the graph. The agent extracts data, Flow validates it against a schema and business rules, and only if it passes does the pipeline continue. If it fails, the agent gets structured errors it can use to self-correct.</p>
<h2>What You'll Build</h2>
<p>A LangGraph agent with three nodes:</p>
<ol>
<li><p><strong>Extract</strong> — LLM extracts order data from a natural language request</p>
</li>
<li><p><strong>Validate</strong> — Submits the extracted data to a Rynko Flow gate</p>
</li>
<li><p><strong>Process</strong> — Handles the validated order (or routes back for correction)</p>
</li>
</ol>
<p>The graph looks like this:</p>
<pre><code class="language-plaintext">extract → validate → process
              ↓ (if failed)
          extract (retry with error context)
</code></pre>
<h2>Prerequisites</h2>
<pre><code class="language-bash">pip install langgraph langchain-openai httpx
</code></pre>
<p>You'll also need:</p>
<ul>
<li><p>A <a href="https://app.rynko.dev/signup">Rynko account</a> (free tier works)</p>
</li>
<li><p>A Flow gate configured with your order schema</p>
</li>
<li><p>An OpenAI API key (or any LangChain-compatible LLM)</p>
</li>
</ul>
<h2>Setting Up the Flow Gate</h2>
<p>Create a gate in the <a href="https://app.rynko.dev/flow/gates">Flow dashboard</a> with this schema:</p>
<table>
<thead>
<tr>
<th>Field</th>
<th>Type</th>
<th>Constraints</th>
</tr>
</thead>
<tbody><tr>
<td>vendor</td>
<td>string</td>
<td>required, min 1 char</td>
</tr>
<tr>
<td>amount</td>
<td>number</td>
<td>required, &gt;= 0</td>
</tr>
<tr>
<td>currency</td>
<td>string</td>
<td>required, one of: USD, EUR, GBP, INR</td>
</tr>
<tr>
<td>po_number</td>
<td>string</td>
<td>optional</td>
</tr>
</tbody></table>
<p>Add a business rule: <code>amount &gt;= 10</code> with error message "Order amount must be at least $10."</p>
<p>If you already have a Pydantic model, you can import the schema directly — run <code>YourModel.model_json_schema()</code> and paste the output into the gate's Import Schema dialog. There's a <a href="https://docs.rynko.dev/tutorials/import-pydantic-zod">tutorial for that</a>.</p>
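<p>For this tutorial's gate, a matching Pydantic (v2) model might look like the following — the model itself is a sketch, so mirror whatever your gate actually expects:</p>

```python
import json
from typing import Literal, Optional

from pydantic import BaseModel, Field

class Order(BaseModel):
    # Mirrors the gate's schema table above.
    vendor: str = Field(min_length=1)
    amount: float = Field(ge=0)
    currency: Literal["USD", "EUR", "GBP", "INR"]
    po_number: Optional[str] = None

# Paste this JSON Schema into the gate's Import Schema dialog.
print(json.dumps(Order.model_json_schema(), indent=2))
```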
<p>Save and publish the gate. Note the gate ID — you'll need it in the code.</p>
<h2>The Validation Client</h2>
<p>First, a small wrapper around the Flow API. This is what the validate node will call:</p>
<pre><code class="language-python">import httpx
import os

RYNKO_BASE_URL = os.environ.get("RYNKO_BASE_URL", "https://api.rynko.dev/api")
RYNKO_API_KEY = os.environ["RYNKO_API_KEY"]

def validate_with_flow(gate_id: str, payload: dict) -&gt; dict:
    """Submit a payload to a Flow gate and return the result."""
    response = httpx.post(
        f"{RYNKO_BASE_URL}/flow/gates/{gate_id}/runs",
        json={"payload": payload},
        headers={
            "Authorization": f"Bearer {RYNKO_API_KEY}",
            "Content-Type": "application/json",
        },
        timeout=30,
    )
    return response.json()
</code></pre>
<p>This returns the full validation result — status, errors, validation ID, the works. The important fields are <code>status</code> (either <code>"validated"</code> or <code>"validation_failed"</code>) and, on failure, <code>error.details</code> — an array of specific field-level issues.</p>
<h2>Defining the Graph State</h2>
<p>LangGraph uses a typed state that flows between nodes. Ours tracks the user request, extracted data, validation result, and retry count:</p>
<pre><code class="language-python">from typing import TypedDict, Optional

class OrderState(TypedDict):
    user_request: str
    extracted_data: Optional[dict]
    validation_result: Optional[dict]
    validation_errors: Optional[str]
    retry_count: int
    final_result: Optional[str]
</code></pre>
<h2>The Three Nodes</h2>
<h3>Extract Node</h3>
<p>The LLM extracts structured order data from the user's natural language request. If there were previous validation errors, they're included in the prompt so the LLM can correct its output:</p>
<pre><code class="language-python">from langchain_openai import ChatOpenAI
import json

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

GATE_ID = os.environ["FLOW_GATE_ID"]  # Your gate ID

def extract_order(state: OrderState) -&gt; dict:
    error_context = ""
    if state.get("validation_errors"):
        error_context = (
            f"\n\nYour previous extraction had validation errors:\n"
            f"{state['validation_errors']}\n"
            f"Fix these issues in your new extraction."
        )

    response = llm.invoke(
        f"Extract order data from this request as JSON with fields: "
        f"vendor (string), amount (number), currency (string, one of USD/EUR/GBP/INR), "
        f"po_number (string, optional).\n\n"
        f"Request: {state['user_request']}"
        f"{error_context}\n\n"
        f"Respond with ONLY valid JSON, no markdown."
    )

    try:
        extracted = json.loads(response.content)
    except json.JSONDecodeError:
        extracted = {"vendor": "", "amount": 0, "currency": ""}

    return {"extracted_data": extracted}
</code></pre>
<h3>Validate Node</h3>
<p>This is the Flow integration — submit the extracted data to the gate and capture the result:</p>
<pre><code class="language-python">def validate_order(state: OrderState) -&gt; dict:
    result = validate_with_flow(GATE_ID, state["extracted_data"])

    if result.get("status") == "validation_failed":
        errors = result.get("error", {}).get("details", [])
        error_text = "\n".join(
            f"- {e.get('field', e.get('rule_id', 'unknown'))}: {e.get('message', 'invalid')}"
            for e in errors
        )
        return {
            "validation_result": result,
            "validation_errors": error_text,
            "retry_count": state.get("retry_count", 0) + 1,
        }

    return {
        "validation_result": result,
        "validation_errors": None,
    }
</code></pre>
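<p>To see what the extract node receives on retry, here's the same formatting applied to a hand-written sample error entry (shaped like the field-level items Flow returns):</p>
<pre><code class="language-python"># What validation_errors looks like for a single failed field
sample_errors = [{"field": "currency", "message": "must be one of: USD, EUR, GBP, INR"}]
error_text = "\n".join(
    f"- {e.get('field', e.get('rule_id', 'unknown'))}: {e.get('message', 'invalid')}"
    for e in sample_errors
)
print(error_text)  # - currency: must be one of: USD, EUR, GBP, INR
</code></pre>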
<h3>Process Node</h3>
<p>If validation passed, the order moves forward. In a real system this would write to your database, trigger fulfillment, or call another API:</p>
<pre><code class="language-python">def process_order(state: OrderState) -&gt; dict:
    validation_id = state["validation_result"].get("validation_id", "")
    return {
        "final_result": (
            f"Order processed successfully.\n"
            f"Vendor: {state['extracted_data']['vendor']}\n"
            f"Amount: {state['extracted_data']['amount']} {state['extracted_data']['currency']}\n"
            f"Validation ID: {validation_id}"
        )
    }
</code></pre>
<p>The <code>validation_id</code> is a tamper-proof token from Flow — your downstream systems can verify that the data passed validation and hasn't been modified since.</p>
<h2>Wiring the Graph</h2>
<p>Now connect the nodes with conditional routing. If validation fails and we haven't exhausted retries, route back to the extract node with the error context:</p>
<pre><code class="language-python">from langgraph.graph import StateGraph, END

def should_retry(state: OrderState) -&gt; str:
    if state.get("validation_errors") and state.get("retry_count", 0) &lt; 3:
        return "retry"
    elif state.get("validation_errors"):
        return "give_up"
    return "proceed"

# Build the graph
graph = StateGraph(OrderState)

graph.add_node("extract", extract_order)
graph.add_node("validate", validate_order)
graph.add_node("process", process_order)

graph.set_entry_point("extract")
graph.add_edge("extract", "validate")

graph.add_conditional_edges(
    "validate",
    should_retry,
    {
        "retry": "extract",     # Back to extraction with error context
        "proceed": "process",   # Validation passed
        "give_up": END,         # Max retries reached
    },
)
graph.add_edge("process", END)

app = graph.compile()
</code></pre>
<h2>Running It</h2>
<pre><code class="language-python">result = app.invoke({
    "user_request": "Process an order from Globex Corp for twelve thousand five hundred dollars USD, PO number PO-2026-042",
    "retry_count": 0,
})

print(result["final_result"])
</code></pre>
<p>Output:</p>
<pre><code class="language-plaintext">Order processed successfully.
Vendor: Globex Corp
Amount: 12500.0 USD
Validation ID: v_abc123...
</code></pre>
<h2>The Self-Correction Loop</h2>
<p>The interesting part is what happens when the LLM makes a mistake. Say it extracts <code>currency: "Dollars"</code> instead of <code>"USD"</code>. Flow returns:</p>
<pre><code class="language-json">{
  "status": "validation_failed",
  "error": {
    "details": [
      {"field": "currency", "message": "must be one of: USD, EUR, GBP, INR"}
    ]
  }
}
</code></pre>
<p>The graph routes back to the extract node, which now includes the error in its prompt. The LLM reads "currency must be one of: USD, EUR, GBP, INR", fixes its extraction to <code>"USD"</code>, and the second attempt passes validation.</p>
<p>This happens automatically — no human intervention, no hardcoded fixes. The LLM uses the structured error feedback from Flow to correct itself.</p>
<p>In our testing, most validation issues resolve in one retry. The <code>retry_count</code> cap of 3 is a safety net — if the agent can't fix it in three attempts, something is fundamentally wrong with the input and it's better to fail explicitly.</p>
<h2>Why Not Just Use Pydantic in the Node?</h2>
<p>You could validate with Pydantic directly in the extract node. For a single agent, that works fine. But Flow gives you a few things Pydantic doesn't:</p>
<p><strong>Business rules that cross fields.</strong> Pydantic validates field types and constraints, but expressions like <code>endDate &gt; startDate</code> or <code>quantity * price == total</code> need custom validators. Flow evaluates these as expressions — you configure them in the dashboard, no code changes needed.</p>
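<p>To make "expressions" concrete: a cross-field rule is just a boolean expression over payload fields. A toy sketch of the idea — purely illustrative, Flow evaluates rules server-side, not with Python's <code>eval</code>:</p>
<pre><code class="language-python">def check_rule(payload, expression):
    # Toy evaluator: expose payload fields as variables, nothing else.
    # Illustration only -- not how Flow evaluates rules internally.
    return bool(eval(expression, {"__builtins__": {}}, dict(payload)))

order = {"quantity": 4, "price": 2.5, "total": 10.0}
print(check_rule(order, "quantity * price == total"))  # True
</code></pre>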
<p><strong>Centralized validation across agents.</strong> If you have five different LangGraph pipelines submitting orders, they all validate against the same gate. Change a rule once, it applies everywhere. With Pydantic, you'd need to update the model in every repo.</p>
<p><strong>Observability.</strong> Flow's analytics dashboard shows you which fields fail most often, which business rules trigger, and which agents (by session) are producing the most errors. When you're debugging why Agent C keeps submitting bad currencies, this is where you look.</p>
<p><strong>Approval workflows.</strong> For high-value orders, add a human approval step on the gate. The pipeline pauses, a reviewer approves or rejects, and the graph resumes. You can't do this with a Pydantic validator.</p>
<h2>Adding MCP for Direct Tool Access</h2>
<p>If you want the LLM to call Flow tools directly (instead of going through a hardcoded REST call), you can use LangChain's MCP tool integration. Flow's MCP endpoint at <code>https://api.rynko.dev/api/flow/mcp</code> auto-generates a <code>validate_{gate_slug}</code> tool for each active gate in your workspace.</p>
<p>This means the LLM can discover available gates and submit payloads through tool calling, which is useful when the agent needs to decide which gate to validate against based on the input.</p>
<h2>Local Development Setup</h2>
<p>To set up a local LangGraph development environment:</p>
<pre><code class="language-bash"># Create a project directory
mkdir langgraph-flow-demo &amp;&amp; cd langgraph-flow-demo

# Set up a virtual environment
python -m venv .venv
source .venv/bin/activate

# Install dependencies
pip install langgraph langchain-openai httpx python-dotenv

# Create .env file
cat &gt; .env &lt;&lt; 'EOF'
OPENAI_API_KEY=sk-...
RYNKO_API_KEY=your_api_key_here
FLOW_GATE_ID=your_gate_id_here
EOF
</code></pre>
<p>Create a <code>main.py</code> with the code from this tutorial, add <code>from dotenv import load_dotenv; load_dotenv()</code> at the top, and run it with <code>python main.py</code>.</p>
<p>For iterative development, LangGraph has a built-in visualization tool:</p>
<pre><code class="language-python"># Print the graph structure
app.get_graph().print_ascii()

# Or save as PNG (requires pygraphviz)
app.get_graph().draw_png("graph.png")
</code></pre>
<p>This shows you the nodes, edges, and conditional routing at a glance — useful for verifying the self-correction loop is wired correctly.</p>
<h2>Full Working Example</h2>
<p>The complete code for this tutorial — including the graph, Flow client, <code>.env.example</code>, and two test scenarios — is in our <a href="https://github.com/rynko-dev/developer-resources/tree/main/examples/langgraph-flow-validation">developer resources repo</a>. Clone it, add your API keys, and run <code>python src/main.py</code>.</p>
<hr />
<p><strong>Resources:</strong></p>
<ul>
<li><p><a href="https://docs.rynko.dev/flow">Rynko Flow documentation</a></p>
</li>
<li><p><a href="https://docs.rynko.dev/api-reference/flow">Flow API reference</a></p>
</li>
<li><p><a href="https://langchain-ai.github.io/langgraph/">LangGraph documentation</a></p>
</li>
<li><p><a href="https://app.rynko.dev/signup">Sign up for free</a> — 500 Flow runs/month, no credit card</p>
</li>
<li><p><a href="https://asciinema.org/a/824113">Self-correction demo (terminal recording)</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Validating CrewAI Agent Outputs with Rynko Flow]]></title><description><![CDATA[CrewAI's strength is that you define agents with roles, goals, and tools, and the framework handles the orchestration. An agent researches, another analyzes, a third writes the report. The problem sho]]></description><link>https://blog.rynko.dev/crewai-flow-validation</link><guid isPermaLink="true">https://blog.rynko.dev/crewai-flow-validation</guid><category><![CDATA[AI]]></category><category><![CDATA[CrewAI]]></category><category><![CDATA[rynko]]></category><category><![CDATA[agentic AI]]></category><category><![CDATA[Validation]]></category><category><![CDATA[agents]]></category><category><![CDATA[Python]]></category><category><![CDATA[mcp]]></category><category><![CDATA[Model Context Protocol]]></category><dc:creator><![CDATA[Rynko Dev]]></dc:creator><pubDate>Tue, 10 Mar 2026 13:43:20 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/698ea54a73da603a1b59230b/5fbaaead-dba8-4313-b3f5-21945c580604.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>CrewAI's strength is that you define agents with roles, goals, and tools, and the framework handles the orchestration. An agent researches, another analyzes, a third writes the report. The problem shows up when the last agent in the chain produces the final output — a JSON payload that needs to be structurally valid, conform to business rules, and sometimes get human approval before it goes downstream.</p>
<p>Most CrewAI tutorials skip this part. The output comes back as a string, maybe you parse it as JSON, and you hope it's correct. In production, that hope turns into bugs.</p>
<p>I've been using Rynko Flow as the validation layer after CrewAI tasks. The agent does its work, the output goes through a Flow gate that checks schema and business rules, and only validated data moves forward. When validation fails, the error response is structured enough that the agent can fix itself and retry.</p>
<h2>What We're Building</h2>
<p>A CrewAI crew with two agents:</p>
<ol>
<li><p><strong>Order Processor</strong> — Takes a natural language order request and extracts structured data</p>
</li>
<li><p><strong>Validator</strong> — Submits the extracted data to a Rynko Flow gate, handles errors, and retries if needed</p>
</li>
</ol>
<p>The validator agent uses a custom tool that wraps the Flow API, so it gets structured validation errors directly in its tool response.</p>
<h2>Setup</h2>
<pre><code class="language-bash">pip install crewai httpx
</code></pre>
<p>You'll need:</p>
<ul>
<li><p>A <a href="https://app.rynko.dev/signup">Rynko account</a> (free tier is fine)</p>
</li>
<li><p>A Flow gate with your schema (<a href="https://docs.rynko.dev/flow/getting-started">setup guide</a>)</p>
</li>
<li><p>An OpenAI API key (CrewAI's default LLM)</p>
</li>
</ul>
<h2>The Flow Validation Tool</h2>
<p>CrewAI agents use tools — Python functions decorated with <code>@tool</code>. Here's one that submits data to a Flow gate and returns the result in a format the LLM can reason about:</p>
<pre><code class="language-python">import os
import json
import httpx
from crewai.tools import tool

RYNKO_BASE_URL = os.environ.get("RYNKO_BASE_URL", "https://api.rynko.dev/api")
RYNKO_API_KEY = os.environ["RYNKO_API_KEY"]
GATE_ID = os.environ["FLOW_GATE_ID"]

@tool("validate_order")
def validate_order(order_json: str) -&gt; str:
    """Validate an order payload against the Flow gate.
    Input must be a JSON string with fields: vendor (string),
    amount (number), currency (USD/EUR/GBP/INR), po_number (optional string).
    Returns validation result with status and any errors."""

    try:
        payload = json.loads(order_json)
    except json.JSONDecodeError as e:
        return json.dumps({"success": False, "error": f"Invalid JSON: {e}"})

    response = httpx.post(
        f"{RYNKO_BASE_URL}/flow/gates/{GATE_ID}/runs",
        json={"payload": payload},
        headers={
            "Authorization": f"Bearer {RYNKO_API_KEY}",
            "Content-Type": "application/json",
        },
        timeout=30,
    )

    result = response.json()

    if result.get("status") == "validation_failed":
        errors = result.get("error", {}).get("details", [])
        error_lines = [f"- {e.get('field', e.get('rule_id', 'unknown'))}: {e.get('message')}" for e in errors]
        return json.dumps({
            "success": False,
            "status": "validation_failed",
            "errors": error_lines,
            "message": "Fix these errors and resubmit.",
        }, indent=2)

    return json.dumps({
        "success": True,
        "status": result.get("status"),
        "run_id": result.get("runId"),
        "validation_id": result.get("validation_id"),
    }, indent=2)
</code></pre>
<p>The tool returns structured JSON in both success and failure cases. When validation fails, the error messages are specific enough — "currency must be one of: USD, EUR, GBP, INR" — that the LLM can fix the issue without guessing.</p>
<h2>Defining the Agents</h2>
<pre><code class="language-python">from crewai import Agent

order_processor = Agent(
    role="Order Processor",
    goal="Extract structured order data from customer requests accurately",
    backstory=(
        "You are an order processing specialist. You extract vendor name, "
        "amount, currency, and PO number from natural language requests. "
        "You output clean JSON with fields: vendor, amount, currency, po_number. "
        "Currency must be a 3-letter code (USD, EUR, GBP, or INR)."
    ),
    verbose=True,
    allow_delegation=False,
)

order_validator = Agent(
    role="Order Validator",
    goal="Validate extracted orders against business rules and fix any issues",
    backstory=(
        "You validate order data by submitting it to the validation gateway. "
        "If validation fails, you read the error messages carefully, fix each "
        "issue in the JSON, and resubmit. You keep trying until it passes or "
        "you've made 3 attempts. Always report the final validation status."
    ),
    tools=[validate_order],
    verbose=True,
    allow_delegation=False,
)
</code></pre>
<p>The validator agent has the Flow tool and explicit instructions to read errors and retry. CrewAI agents follow their backstory closely, so the self-correction behavior comes from the backstory rather than from framework-level retry logic.</p>
<h2>Defining the Tasks</h2>
<pre><code class="language-python">from crewai import Task

extract_task = Task(
    description=(
        "Extract order data from this customer request:\n\n"
        "{user_request}\n\n"
        "Output a JSON object with fields: vendor (string), amount (number), "
        "currency (3-letter code: USD, EUR, GBP, or INR), po_number (string, optional). "
        "Output ONLY the JSON, nothing else."
    ),
    expected_output="A JSON object with vendor, amount, currency, and optional po_number",
    agent=order_processor,
)

validate_task = Task(
    description=(
        "Take the order JSON from the previous task and validate it using the "
        "validate_order tool. If validation fails, read the error messages, fix "
        "the JSON, and call the tool again with corrected data. "
        "Report the final run ID and validation status."
    ),
    expected_output="Validation result with run ID and status (validated or failed)",
    agent=order_validator,
    context=[extract_task],
)
</code></pre>
<p>The <code>context=[extract_task]</code> tells CrewAI to pass the output of the extract task to the validator. The validator then takes that JSON and runs it through Flow.</p>
<h2>Running the Crew</h2>
<pre><code class="language-python">from crewai import Crew, Process

crew = Crew(
    agents=[order_processor, order_validator],
    tasks=[extract_task, validate_task],
    process=Process.sequential,
    verbose=True,
)

result = crew.kickoff(
    inputs={
        "user_request": (
            "We need to process an order from Globex Corp for "
            "twelve thousand five hundred dollars, PO number PO-2026-042"
        )
    }
)

print("\n--- Final Result ---")
print(result)
</code></pre>
<h2>What Happens at Runtime</h2>
<p>When you run this, the output shows the full agent reasoning:</p>
<pre><code class="language-plaintext">[Order Processor] Extracting order data...
&gt; {"vendor": "Globex Corp", "amount": 12500, "currency": "USD", "po_number": "PO-2026-042"}

[Order Validator] Validating order...
&gt; Using tool: validate_order
&gt; Tool result: {"success": true, "status": "validated", "run_id": "..."}

--- Final Result ---
Order validated successfully. Run ID: 550e8400-...
</code></pre>
<p>Now here's the interesting case. Say the processor extracts <code>currency: "Dollars"</code>:</p>
<pre><code class="language-plaintext">[Order Validator] Validating order...
&gt; Using tool: validate_order
&gt; Tool result: {"success": false, "errors": ["- currency: must be one of: USD, EUR, GBP, INR"]}

[Order Validator] The currency is invalid. Fixing to "USD" and resubmitting...
&gt; Using tool: validate_order
&gt; Tool result: {"success": true, "status": "validated", "run_id": "..."}
</code></pre>
<p>The validator reads the error, fixes the currency, and resubmits. One retry, no human involved.</p>
<h2>Handling Multiple Agents Writing to the Same Gate</h2>
<p>CrewAI shines when you have multiple specialized agents. In a more complex setup, you might have separate crews for different order types — one for domestic orders, one for international, one for recurring subscriptions. All three can validate against the same Flow gate.</p>
<pre><code class="language-python"># Different crews, same validation gate
domestic_crew = Crew(agents=[domestic_processor, validator], ...)
international_crew = Crew(agents=[intl_processor, validator], ...)
subscription_crew = Crew(agents=[sub_processor, validator], ...)
</code></pre>
<p>The gate enforces consistent validation regardless of which crew produced the data. If you change a business rule — say, increasing the minimum order amount from $10 to $50 — you update it once in the Flow dashboard and every crew picks it up immediately.</p>
<p>Flow's analytics dashboard shows validation results by session, so you can see which crew or agent is producing the most errors and needs prompt tuning.</p>
<h2>Adding Human Approval</h2>
<p>For high-value orders, configure the gate's approval mode to require human review. When the validator submits a $50,000 order, Flow holds it in a <code>review_required</code> state instead of auto-approving. A reviewer gets an email, reviews the payload, and approves or rejects.</p>
<p>Your CrewAI task can poll for the approval result:</p>
<pre><code class="language-python">import time

@tool("wait_for_approval")
def wait_for_approval(run_id: str) -&gt; str:
    """Poll a Flow run until it reaches a terminal state."""
    for _ in range(60):
        response = httpx.get(
            f"{RYNKO_BASE_URL}/flow/runs/{run_id}",
            headers={"Authorization": f"Bearer {RYNKO_API_KEY}"},
            timeout=30,
        )
        status = response.json().get("status")
        if status in ("approved", "rejected", "completed", "delivered"):
            return json.dumps({"status": status, "run_id": run_id})
        time.sleep(5)
    return json.dumps({"status": "timeout", "run_id": run_id})
</code></pre>
<h2>Using MCP Instead of REST</h2>
<p>If you prefer the agent to discover Flow gates dynamically through tool calling (rather than hardcoding the gate ID), you can connect CrewAI to Flow's MCP endpoint. Flow auto-generates a <code>validate_{gate_slug}</code> tool for each active gate, and the tool schema includes field types and constraints so the LLM knows what to submit.</p>
<p>This is useful when your agents work across multiple gates and need to pick the right one based on context.</p>
<h2>Local Development Setup</h2>
<pre><code class="language-bash"># Create project
mkdir crewai-flow-demo &amp;&amp; cd crewai-flow-demo
python -m venv .venv
source .venv/bin/activate

# Install
pip install crewai httpx python-dotenv

# Environment
cat &gt; .env &lt;&lt; 'EOF'
OPENAI_API_KEY=sk-...
RYNKO_API_KEY=your_api_key_here
FLOW_GATE_ID=your_gate_id_here
EOF
</code></pre>
<p>Create <code>main.py</code> with the code above, add <code>from dotenv import load_dotenv; load_dotenv()</code> at the top, and run with <code>python main.py</code>. CrewAI's <code>verbose=True</code> shows you the full agent reasoning — useful for debugging prompt issues.</p>
<h2>Full Working Example</h2>
<p>The complete code — agents, tools, tasks, <code>.env.example</code>, and two test scenarios — is in our <a href="https://github.com/rynko-dev/developer-resources/tree/main/examples/crewai-flow-validation">developer resources repo</a>. Clone it, add your API keys, and run <code>python src/main.py</code>.</p>
<hr />
<p><strong>Resources:</strong></p>
<ul>
<li><p><a href="https://docs.rynko.dev/flow">Rynko Flow documentation</a></p>
</li>
<li><p><a href="https://docs.crewai.com/">CrewAI documentation</a></p>
</li>
<li><p><a href="https://app.rynko.dev/signup">Sign up for free</a> — 500 Flow runs/month, no credit card</p>
</li>
<li><p><a href="https://asciinema.org/a/824113">Self-correction demo (terminal recording)</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Launching Rynko Flow: A Self-Correcting Validation Gateway for AI Agent Outputs]]></title><description><![CDATA[When we launched Rynko, the focus was document generation — templates, PDFs, Excel files. But the more we worked with teams building AI-powered workflows, the more we noticed the same problem showing ]]></description><link>https://blog.rynko.dev/launching-rynko-flow-a-validation-gateway-for-ai-agent-outputs</link><guid isPermaLink="true">https://blog.rynko.dev/launching-rynko-flow-a-validation-gateway-for-ai-agent-outputs</guid><category><![CDATA[ai-agent]]></category><category><![CDATA[mcp]]></category><category><![CDATA[AI]]></category><category><![CDATA[Validation]]></category><category><![CDATA[llm]]></category><category><![CDATA[pydantic]]></category><category><![CDATA[multi-agent]]></category><category><![CDATA[CrewAI]]></category><category><![CDATA[langgraph]]></category><dc:creator><![CDATA[Rynko Dev]]></dc:creator><pubDate>Mon, 09 Mar 2026 17:26:34 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/698ea54a73da603a1b59230b/3db061ae-1305-4a4b-8caf-f669b9070d7e.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When we launched Rynko, the focus was document generation — templates, PDFs, Excel files. But the more we worked with teams building AI-powered workflows, the more we noticed the same problem showing up everywhere: the agent produces structured data, and the developer writes validation code to check it before passing it downstream. Schema checks, business rule enforcement, sometimes a human review step. Every team was building some version of this from scratch.</p>
<p>So, we built Flow.</p>
<h2>What Flow Does</h2>
<p>Rynko Flow is a validation gateway that sits between your AI agent and your downstream systems. You define a gate with a schema and business rules, your agent submits payloads to it, and Flow validates the data before it moves forward. If the payload fails, the agent gets a clear error response it can act on. If it passes, Flow returns a tamper-proof <code>validation_id</code> that downstream systems can verify to confirm the data hasn't been modified in transit.</p>
<p>The pipeline looks like this:</p>
<img src="https://cdn.hashnode.com/uploads/covers/698ea54a73da603a1b59230b/fda7c961-de7b-44c3-a0d3-a38a38b240b8.svg" alt="" style="display:block;margin:0 auto" />

<p>Each stage is independent. Schema validation checks field types, required fields, and constraints like min/max values and allowed enums. Business rules evaluate cross-field expressions — things like <code>endDate &gt; startDate</code> or <code>quantity * price == total</code>. If you need a human to review before delivery, add an approval step with internal team members or external reviewers. Once everything passes, Flow delivers the payload to your webhook endpoints.</p>
<h2>Gates, Not Middleware</h2>
<p>A gate is a named validation checkpoint. It has a schema (the structure you expect), business rules (the constraints that cross fields), and optionally an approval configuration and delivery channels. Each gate gets its own API endpoint.</p>
<p>Creating a gate takes about a minute in the dashboard:</p>
<ol>
<li><p><strong>Open the</strong> <a href="https://app.rynko.dev/flow/gates"><strong>Flow dashboard</strong></a> and click <strong>Create Gate</strong></p>
</li>
<li><p><strong>Name your gate</strong> — give it something descriptive like "Order Validation". Flow generates a URL-friendly slug automatically (<code>order-validation</code>)</p>
</li>
<li><p><strong>Define the schema</strong> — use the schema builder to add fields. For an order gate, you'd add <code>orderId</code> (string, required), <code>amount</code> (number, required, min 0), <code>currency</code> (string, required, allowed values: USD/EUR/GBP), and <code>customerEmail</code> (string, required, email format). Each field has a type dropdown and constraint options — no JSON to write by hand</p>
</li>
<li><p><strong>Add business rules</strong> — click <strong>Add Rule</strong> and write expressions like <code>amount &gt;= 10</code> with an error message ("Order amount must be at least $10"). The rule editor validates your expression as you type, so you know it'll work before you save</p>
</li>
<li><p><strong>Save the gate</strong> — it's immediately active and ready to receive payloads</p>
</li>
</ol>
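<p>As a rough, code-level picture of what the schema layer in step 3 enforces — illustrative only, since the gate runs these checks server-side:</p>
<pre><code class="language-python">ALLOWED_CURRENCIES = {"USD", "EUR", "GBP"}

def schema_errors(payload):
    """Rough stand-in for the order gate's schema layer (illustration only)."""
    errors = []
    if not isinstance(payload.get("orderId"), str):
        errors.append("orderId: required string")
    amount = payload.get("amount")
    if not isinstance(amount, (int, float)) or amount &lt; 0:
        errors.append("amount: required number, minimum 0")
    if payload.get("currency") not in ALLOWED_CURRENCIES:
        errors.append("currency: must be one of USD, EUR, GBP")
    if "@" not in str(payload.get("customerEmail", "")):
        errors.append("customerEmail: required email")
    return errors

print(schema_errors({"orderId": "A-1", "amount": 42,
                     "currency": "USD", "customerEmail": "a@b.co"}))  # []
</code></pre>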
<p>If you already have your data models defined in code, you don't have to recreate the schema manually. Flow supports importing schemas directly from <strong>Pydantic</strong> (Python) and <strong>Zod</strong> (TypeScript). In the schema builder, click <strong>Import Schema</strong>, pick the format, and paste the JSON Schema output from <code>model_json_schema()</code> (Pydantic) or <code>zodToJsonSchema()</code> (Zod). Flow maps the types, constraints, and required fields automatically. There's a <a href="https://docs.rynko.dev/tutorials/import-pydantic-zod">full tutorial</a> with code examples for both.</p>
<img src="https://cdn.hashnode.com/uploads/covers/698ea54a73da603a1b59230b/ec2ea489-23ba-4608-9323-a7bf29c5358c.png" alt="" style="display:block;margin:0 auto" />

<p>This means if you have a Pydantic model like:</p>
<pre><code class="language-python">class Order(BaseModel):
    order_id: str
    amount: float = Field(ge=0)
    currency: Literal["USD", "EUR", "GBP"]
    customer_email: EmailStr
</code></pre>
<p>You run <code>Order.model_json_schema()</code>, paste the output into the import dialog, and your gate schema is ready — field types, constraints, and all.</p>
<pre><code class="language-json">{
    "title": "Order",
    "type": "object",
    "properties": {
      "order_id": {
        "title": "Order Id",
        "type": "string"
      },
      "amount": {
        "title": "Amount",
        "type": "number",
        "minimum": 0
      },
      "currency": {
        "title": "Currency",
        "type": "string",
        "enum": ["USD", "EUR", "GBP"]
      },
      "customer_email": {
        "title": "Customer Email",
        "type": "string",
        "format": "email"
      }
    },
    "required": ["order_id", "amount", "currency", "customer_email"]
  }
</code></pre>
<p>When your agent submits a payload to this gate, the response tells you exactly what happened at each validation layer:</p>
<pre><code class="language-json">{
  "success": true,
  "runId": "550e8400-e29b-41d4-a716-446655440000",
  "status": "validated",
  "validation_id": "v_abc123...",
  "layers": {
    "schema": "pass",
    "business_rules": "pass"
  }
}
</code></pre>
<p>If validation fails, the response includes specific error details — which field failed, which constraint was violated, which business rule returned false. The agent gets actionable feedback it can use to fix the data and resubmit.</p>
<h2>Why This Matters for AI Agents</h2>
<p>LLMs hallucinate. They produce plausible-looking data that might have an invalid enum value, a missing required field, or a number that violates a business constraint. When you're generating a single document, you catch these by eye. When an agent is processing hundreds of payloads autonomously, you need systematic validation.</p>
<p>The interesting thing we've seen in practice is that agents self-correct. When an MCP-connected agent submits a payload that fails validation, it reads the error response, fixes the issues, and resubmits — often without any human involvement. We ran tests where we intentionally gave agents incomplete or incorrect data, and the validation-resubmission loop resolved the issues in one or two attempts. (ref: <a href="https://docs.rynko.dev/reports/flow-mcp-agent-test">Flow MCP — AI Agent Integration Test Report | Rynko Documentation</a>)</p>
<p>Flow has a built-in circuit breaker for this pattern. If an agent (identified by its MCP session) keeps submitting payloads that fail the same gate, Flow backs off — first warning, then temporarily blocking submissions from that session. This prevents a malfunctioning agent from burning through your quota with an infinite retry loop. The circuit breaker tracks failures per gate per session, with configurable thresholds and cooldown periods.</p>
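<p>When the breaker trips, the session gets a structured rejection instead of a validation result. The exact shape here is illustrative:</p>
<pre><code class="language-json">{
  "success": false,
  "status": "circuit_open",
  "message": "Too many consecutive failures for this gate. Retry after the cooldown period.",
  "retryAfterSeconds": 300
}
</code></pre>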
<h2>Multi-Agent Workflows</h2>
<p><a class="embed-card" href="https://asciinema.org/a/824113">https://asciinema.org/a/824113</a></p>
<p>The single-agent case is straightforward, but Flow was really designed for multi-agent architectures — the kind you build with LangGraph, CrewAI, AutoGen, or your own orchestration layer. In these setups, you have multiple specialized agents handling different parts of a pipeline: one agent researches, another drafts, a third formats, and a fourth submits to your system of record. Each agent is good at its job, but none of them knows what the others are doing, and any of them can produce data that doesn't meet your downstream requirements.</p>
<p>Gates are the shared contract between these agents and your systems. A "Customer Order" gate doesn't care whether the payload comes from a single monolithic agent or from the last step in a five-agent chain — it validates the same schema and business rules regardless. This means you can swap agents, change your orchestration graph, or add new agents to the pipeline without touching your validation logic. The gate is stable while the agents evolve around it.</p>
<p>In practice, this plays out in a few ways:</p>
<p><strong>Pipeline validation.</strong> An orchestrator runs Agent A (data extraction) → Agent B (enrichment) → Agent C (formatting), and the final output goes through a Flow gate before hitting your database. If Agent C produces bad data, the orchestrator gets structured errors it can route back to the responsible agent for correction — not a generic 400 from your API.</p>
<p><strong>Parallel agents, same gate.</strong> Multiple agents process different inputs concurrently — say, ten order-processing agents each handling a different customer. They all submit to the same "Order Validation" gate. Flow validates each independently, the circuit breaker tracks failures per session so one misbehaving agent doesn't affect the others, and your downstream system only receives validated payloads.</p>
<p><strong>Cross-agent consistency.</strong> When Agent A writes to the "Invoice" gate and Agent B writes to the "Payment" gate, and both gates have business rules referencing amount ranges and currency constraints, you get consistent validation across your entire agent fleet without encoding those rules in each agent's prompt.</p>
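<p>To make the pipeline case concrete, here's a minimal sketch of the error-routing step in an orchestrator. The stage names, error shape, and field-to-stage mapping are assumptions for illustration, not part of the Flow API:</p>
<pre><code class="language-typescript">// Route gate validation errors back to the pipeline stage that owns
// the failing field. Error shape and `fieldOwner` map are illustrative.
interface GateError {
  field: string;
  message: string;
}

type Stage = 'extract' | 'enrich' | 'format';

// Which stage produced which output field (an assumption for this sketch).
const fieldOwner: Record&lt;string, Stage&gt; = {
  order_id: 'extract',
  customer_email: 'enrich',
  amount: 'format',
};

function routeErrors(errors: GateError[]): Map&lt;Stage, GateError[]&gt; {
  const byStage = new Map&lt;Stage, GateError[]&gt;();
  for (const err of errors) {
    const stage = fieldOwner[err.field] ?? 'extract'; // default owner
    byStage.set(stage, [...(byStage.get(stage) ?? []), err]);
  }
  return byStage;
}
</code></pre>
<p>The orchestrator can then re-invoke only the stages that received errors instead of rerunning the whole chain.</p>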
<p>The analytics dashboard makes this observable — you can see which agents (by session) are hitting which gates, what their failure rates look like, and which business rules are triggering most often. When you're running dozens of agents in production, this is how you find the one that's drifting.</p>
<h2>Human-in-the-Loop When You Need It</h2>
<p>Not everything should be auto-approved. For high-value transactions, sensitive data changes, or any scenario where you want a human to verify before the data moves downstream, Flow supports approval workflows.</p>
<p>You configure approvers on a gate — either team members who review from the dashboard, or external reviewers who get a magic link via email. External reviewers don't need a Rynko account. They click a link, see the payload, and approve or reject it. The magic links are HMAC-SHA256 signed, expire after 72 hours, and are single-use for approval actions.</p>
<p>The approval model is any-approves: the first approver to act determines the outcome. For high-volume gates, we batch notification emails into 5-minute digests so reviewers don't get buried in individual emails. There's also a hard safety cap of 30 emails per hour per approver to prevent notification fatigue.</p>
<p>The review experience for freetext content (Markdown, HTML, plain text) includes scroll-to-approve guardrails — the approve button stays disabled until the reviewer has scrolled through the entire content. For long documents, we auto-generate a table of contents from Markdown headers so reviewers can navigate quickly.</p>
<h2>MCP Integration: Dynamic Tools per Gate</h2>
<p>This is where Flow connects directly to the AI agent workflow. When you connect an AI tool (Claude Desktop, Cursor, VS Code, or any MCP client) to Flow's MCP endpoint at <code>https://api.rynko.dev/api/flow/mcp</code>, Flow auto-generates a validation tool for each active gate in your workspace.</p>
<p>A gate with slug <code>order-validation</code> becomes a tool called <code>validate_order_validation</code>. The tool's input schema is generated from the gate's current schema — each field becomes a typed JSON Schema property with its constraints. The tool description includes the gate's business rule error messages, so the agent understands the constraints before submitting.</p>
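<p>Concretely, a generated tool definition looks something like this (the description wording and schema details will vary with your gate's configuration):</p>
<pre><code class="language-json">{
  "name": "validate_order_validation",
  "description": "Validate a payload against the Order Validation gate. Business rules: amount must be at least 0; currency must be USD, EUR, or GBP.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "order_id": { "type": "string" },
      "amount": { "type": "number", "minimum": 0 },
      "currency": { "type": "string", "enum": ["USD", "EUR", "GBP"] }
    },
    "required": ["order_id", "amount", "currency"]
  }
}
</code></pre>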
<p>When you update a gate's schema, Flow pushes a <code>notifications/tools/list_changed</code> event to connected agents. They automatically see the updated tool list without reconnecting. No redeploy, no config change.</p>
<p>A conversation with an MCP-connected agent might look like this:</p>
<pre><code class="language-plaintext">You: Validate this order before submitting.

Agent: I'll validate it against your Order Validation gate.
       [calls validate_order_validation]

       The order passed validation:
       - Schema validation: pass
       - Business rules: pass
       - Validation ID: v_abc123...

       You can use the validation ID to confirm this data
       hasn't been modified when it reaches your order system.
</code></pre>
<h2>Freetext Mode</h2>
<p>Not all agent outputs are structured JSON. Sometimes the output is a Markdown document, an HTML email body, or a code snippet. Flow supports a freetext mode where the gate accepts content as a string instead of a structured schema.</p>
<p>Content format is declared at gate creation — plaintext, Markdown, HTML, or code. For Markdown and HTML content, Flow runs a sanitization pipeline on the backend using <code>sanitize-html</code> with a strict allowlist. Script tags, iframes, event handlers, and inline styles are stripped. Links get <code>rel="noopener noreferrer"</code>. The reviewer sees sanitized content in a sandboxed view.</p>
<p>This is useful for agent workflows that produce reports, summaries, or email drafts where you need a human to review the content before it gets sent.</p>
<h2>Delivery and Reliability</h2>
<p>After validation (and approval, if configured), Flow delivers the payload to your webhook endpoints. Deliveries are signed with HMAC-SHA256 so you can verify the payload hasn't been tampered with.</p>
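<p>Verifying the signature on your side takes a few lines. The hex encoding below is an assumption for illustration; use the header name and format documented for deliveries:</p>
<pre><code class="language-typescript">import { createHmac, timingSafeEqual } from 'node:crypto';

// Recompute the HMAC-SHA256 of the raw request body and compare it
// against the signature header in constant time.
function verifySignature(rawBody: string, signature: string, secret: string): boolean {
  const expected = createHmac('sha256', secret).update(rawBody).digest('hex');
  const a = Buffer.from(expected);
  const b = Buffer.from(signature);
  // timingSafeEqual throws on length mismatch, so guard first
  return a.length === b.length &amp;&amp; timingSafeEqual(a, b);
}
</code></pre>
<p>The official SDKs ship a <code>verifyWebhookSignature</code> helper that handles this for you.</p>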
<p>The retry policy is straightforward: 5 attempts with delays at 30 seconds, 2 minutes, and 10 minutes. Each delivery attempt is logged, and failed deliveries can be retried manually from the dashboard or via the SDK.</p>
<p>Flow enforces per-team concurrency caps to keep the multi-tenant system fair — the exact limits scale with your tier, but even on the Scale plan, no single team can consume more than 25% of total worker concurrency. This prevents one team's spike from degrading service for everyone else.</p>
<h2>SDKs</h2>
<p>We've added Flow support to all three official SDKs. The pattern is the same across Node.js, Python, and Java — submit a run, poll for result, handle approvals:</p>
<pre><code class="language-typescript">import { Rynko } from '@rynko/sdk';

const client = new Rynko({ apiKey: process.env.RYNKO_API_KEY });

const run = await client.flow.submitRun('gate_abc123', {
  input: {
    orderId: 'ORD-2026-042',
    amount: 1250.00,
    currency: 'USD',
    customerEmail: 'buyer@example.com',
  },
});

const result = await client.flow.waitForRun(run.id, {
  pollInterval: 1000,
  timeout: 60000,
});

if (result.status === 'approved' || result.status === 'completed') {
  console.log('Validated:', result.validatedPayload);
} else if (result.status === 'validation_failed') {
  console.log('Errors:', result.errors);
}
</code></pre>
<p>The SDKs are at version 1.3.1 with 14 Flow methods covering gates (read-only), runs, approvals, and deliveries. All three SDKs include automatic retry with exponential backoff for rate limits and transient errors.</p>
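<p>If you're calling the REST API directly instead of using an SDK, that retry behavior is easy to approximate. The attempt count and delays below are illustrative, not the SDKs' exact schedule:</p>
<pre><code class="language-typescript">// Retry an async operation with exponential backoff: the delay doubles
// each attempt, capped at 10 seconds. Values here are illustrative.
async function withRetry&lt;T&gt;(
  fn: () =&gt; Promise&lt;T&gt;,
  maxAttempts = 4,
  baseDelayMs = 250,
): Promise&lt;T&gt; {
  let lastError: unknown;
  for (let attempt = 0; attempt &lt; maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      const delayMs = Math.min(baseDelayMs * 2 ** attempt, 10_000);
      await new Promise((resolve) =&gt; setTimeout(resolve, delayMs));
    }
  }
  throw lastError;
}
</code></pre>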
<h2>Pricing</h2>
<p>Flow is a separate subscription from Rynko's document generation (Render). The pricing is based on validation runs per month:</p>
<table>
<thead>
<tr>
<th>Tier</th>
<th>Runs/Month</th>
<th>Gates</th>
<th>Price</th>
<th>Overage</th>
</tr>
</thead>
<tbody><tr>
<td>Free</td>
<td>500</td>
<td>3</td>
<td>$0</td>
<td>—</td>
</tr>
<tr>
<td>Starter</td>
<td>10,000</td>
<td>Unlimited</td>
<td>$29/mo</td>
<td>$0.005/run</td>
</tr>
<tr>
<td>Growth</td>
<td>100,000</td>
<td>Unlimited</td>
<td>$99/mo</td>
<td>$0.004/run</td>
</tr>
<tr>
<td>Scale</td>
<td>500,000</td>
<td>Unlimited</td>
<td>$349/mo</td>
<td>$0.002/run</td>
</tr>
</tbody></table>
<p>The free tier is gate-limited to 3 — this is intentional. Most teams find they need more gates once they start connecting multiple agents to different validation checkpoints, and that's the natural upgrade trigger. Paid tiers have unlimited gates.</p>
<p>To celebrate the launch, we're opening a <strong>Founder's Preview</strong> — <a href="https://app.rynko.dev/signup">sign up today</a> and get 3 months of the Growth tier (100,000 runs/month, unlimited gates) completely free. No credit card required, no commitment. Once the preview ends, you can stay on Growth or switch to any tier that fits your usage.</p>
<p>If you also need document generation, Render Packs are available as add-ons on any tier — $19/month for 500 documents, $49 for 2,000, or $119 for 10,000.</p>
<h2>The Dashboard</h2>
<p>Flow comes with a full web dashboard for managing gates, reviewing runs, handling approvals, and tracking analytics. The gate configurator includes a visual schema builder (with Pydantic and Zod import), a business rule editor with live expression validation, and approval/delivery configuration. The runs view shows real-time status updates, validation error breakdowns, and a timeline of each run's journey through the pipeline.</p>
<p>The analytics dashboard covers the metrics you'd expect — run outcomes by gate, top failing rules, approval rates and decision times, throughput over configurable periods, and circuit breaker health. These metrics help you tune your gates and catch systemic issues early.</p>
<h2>Getting Started</h2>
<ol>
<li><p><a href="https://app.rynko.dev/signup"><strong>Sign up for free</strong></a> — 500 Flow runs/month included, no credit card</p>
</li>
<li><p><strong>Create a gate</strong> in the <a href="https://app.rynko.dev/flow/gates">Flow dashboard</a> — define your schema and business rules</p>
</li>
<li><p><strong>Submit a test payload</strong> using the <a href="https://docs.rynko.dev/flow/getting-started">Quick Start guide</a> or the dry-run endpoint (doesn't count against quota)</p>
</li>
<li><p><strong>Connect your AI agent</strong> via the <a href="https://docs.rynko.dev/integrations/mcp-flow">MCP endpoint</a> — Claude Desktop, Cursor, VS Code, Windsurf, Zed, or Claude Code</p>
</li>
</ol>
<p>Flow is live and production-ready. We've been running it internally for weeks and the architecture has handled sustained load without surprises. If you're building with AI agents and need a systematic way to validate their outputs before they reach downstream systems, this is what we built it for.</p>
<p><a href="https://app.rynko.dev/signup">Sign up</a> | <a href="https://docs.rynko.dev/flow">Documentation</a> | <a href="https://docs.rynko.dev/integrations/mcp-flow">MCP Setup</a> | <a href="https://docs.rynko.dev/api-reference/flow">API Reference</a></p>
<hr />
<p><em>Questions or feedback:</em> <a href="mailto:support@rynko.dev"><em>support@rynko.dev</em></a> <em>or</em> <a href="https://discord.gg/d8cU2MG6"><em>Discord</em></a><em>.</em></p>
]]></content:encoded></item><item><title><![CDATA[We Moved Our MCP Server to Remote HTTP — Here's Why]]></title><description><![CDATA[When we first launched our MCP integration back in January, the only practical option was a local stdio server — an npm package that ran on your machine and bridged Claude Desktop to our API. It worke]]></description><link>https://blog.rynko.dev/we-moved-our-mcp-server-to-remote-http-here-s-why</link><guid isPermaLink="true">https://blog.rynko.dev/we-moved-our-mcp-server-to-remote-http-here-s-why</guid><category><![CDATA[AI]]></category><category><![CDATA[rynko]]></category><category><![CDATA[mcp]]></category><category><![CDATA[ai-agent]]></category><category><![CDATA[remote-mcp]]></category><category><![CDATA[claude.ai]]></category><category><![CDATA[api]]></category><category><![CDATA[Developer Tools]]></category><dc:creator><![CDATA[Rynko Dev]]></dc:creator><pubDate>Mon, 09 Mar 2026 17:03:49 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/698ea54a73da603a1b59230b/db61891e-1e61-408d-9b87-d7fbd5547be2.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When we first launched our MCP integration back in January, the only practical option was a local stdio server — an npm package that ran on your machine and bridged Claude Desktop to our API. It worked, but it came with friction: you needed Node.js installed, <code>npx</code> in your PATH, and a JSON config file pointing to the right command. If something broke, debugging meant checking process logs, PATH variables, and config syntax all at once.</p>
<p>Since then, every major AI tool has added support for remote MCP servers over HTTP. Claude Desktop, Cursor, Windsurf, VS Code, Zed, and Claude Code all support connecting to a remote MCP endpoint with a URL and headers — no local process, no npm, no Node.js dependency.</p>
<p>So we've moved to remote-first and deprecated the standalone <code>@rynko/mcp-server</code> npm package.</p>
<h2>What Changed</h2>
<p>Our MCP server now runs as part of Rynko's backend infrastructure, exposed at two endpoints:</p>
<ul>
<li><p><strong>Render MCP</strong> (templates and document generation): <code>https://api.rynko.dev/api/mcp-documents</code></p>
</li>
<li><p><strong>Flow MCP</strong> (AI output validation): <code>https://api.rynko.dev/api/flow/mcp</code></p>
</li>
</ul>
<p>Both use Streamable HTTP transport — JSON-RPC 2.0 over HTTP with optional Server-Sent Events for real-time notifications. This is the same transport that Claude Desktop, Cursor, and other tools use natively.</p>
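<p>Under the hood, each call is a plain JSON-RPC message POSTed to the endpoint. Listing the available tools, for example, is just:</p>
<pre><code class="language-json">{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/list"
}
</code></pre>
<p>The response comes back as JSON, or as an SSE stream when the server needs to push events mid-request.</p>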
<p>The old npm package used stdio transport: it spawned a local Node.js process that communicated with your AI tool via stdin/stdout. This meant every user needed Node.js 18+, the package had to be downloaded and kept up to date, and any server-side improvements required a new npm release.</p>
<p>With remote HTTP, none of that applies. You point your AI tool at a URL, add an auth header, and you're connected. Updates happen server-side — no package to update.</p>
<h2>Setup Is Now 30 Seconds</h2>
<p>Here's what the config looks like for each tool:</p>
<h3>Cursor</h3>
<pre><code class="language-json">{
  "mcpServers": {
    "rynko": {
      "url": "https://api.rynko.dev/api/mcp-documents",
      "headers": {
        "Authorization": "Bearer pat_xxxxxxxxxxxxxxxx"
      }
    }
  }
}
</code></pre>
<h3>VS Code</h3>
<pre><code class="language-json">{
  "servers": {
    "rynko": {
      "type": "http",
      "url": "https://api.rynko.dev/api/mcp-documents",
      "headers": {
        "Authorization": "Bearer pat_xxxxxxxxxxxxxxxx"
      }
    }
  }
}
</code></pre>
<h3>Claude Code</h3>
<pre><code class="language-bash">claude mcp add --transport http rynko https://api.rynko.dev/api/mcp-documents \
  --header "Authorization: Bearer pat_xxxxxxxxxxxxxxxx"
</code></pre>
<h3>Claude Desktop</h3>
<p>Claude Desktop has a few options. The simplest is using Connectors with OAuth — go to <strong>Settings</strong> → <strong>Connectors</strong>, add a custom connector with the URL, and sign in with your Rynko account. No token management needed.</p>
<p>If you prefer using a PAT, use the <code>mcp-remote</code> proxy:</p>
<pre><code class="language-json">{
  "mcpServers": {
    "rynko": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "https://api.rynko.dev/api/mcp-documents",
        "--header",
        "Authorization:${RYNKO_PAT}"
      ],
      "env": {
        "RYNKO_PAT": "Bearer pat_xxxxxxxxxxxxxxxx"
      }
    }
  }
}
</code></pre>
<p>Full setup instructions for all six tools are in our <a href="https://docs.rynko.dev/integrations/mcp-render">Render MCP guide</a> and <a href="https://docs.rynko.dev/integrations/mcp-flow">Flow MCP guide</a>.</p>
<h2>Flow MCP: Why Remote Was Essential</h2>
<p>When we built Rynko Flow — our AI output validation gateway — we needed dynamic tool registration. Each validation gate in your workspace generates a dedicated <code>validate_{slug}</code> tool with a schema derived from the gate's configuration. When you create, update, or delete a gate, the MCP server pushes a <code>notifications/tools/list_changed</code> event so connected agents see the updated tool list without reconnecting.</p>
<p>This kind of server-initiated push notification is only possible with remote transport. A stdio server can't push events to the client — it can only respond to requests. We would have needed to implement polling or some other workaround, and the developer experience would have been worse.</p>
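<p>The notification itself is a tiny JSON-RPC message with no response expected:</p>
<pre><code class="language-json">{
  "jsonrpc": "2.0",
  "method": "notifications/tools/list_changed"
}
</code></pre>
<p>On receiving it, the client simply re-fetches the tool list with <code>tools/list</code>.</p>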
<p>Going remote-first meant Flow MCP worked naturally from day one, with real-time tool discovery and no compromises.</p>
<h2>What Happens to the npm Package</h2>
<p>The <code>@rynko/mcp-server</code> package on npm still works, but we've marked it as deprecated. It only supports Render tools (template management and document generation) — Flow tools were never added to it because they require server-side event support.</p>
<p>If you're currently using the npm package, switching to the remote server takes about a minute: replace the <code>command</code>/<code>args</code>/<code>env</code> config with a <code>url</code>/<code>headers</code> config pointing to <code>https://api.rynko.dev/api/mcp-documents</code>. Your PAT works the same way — just pass it as a Bearer token in the Authorization header.</p>
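<p>For reference, a typical stdio config looked roughly like this (your exact <code>args</code> and <code>env</code> variable names may have differed):</p>
<pre><code class="language-json">{
  "mcpServers": {
    "rynko": {
      "command": "npx",
      "args": ["@rynko/mcp-server"],
      "env": {
        "RYNKO_API_KEY": "pat_xxxxxxxxxxxxxxxx"
      }
    }
  }
}
</code></pre>
<p>The replacement is the <code>url</code>/<code>headers</code> form shown in the Cursor example above.</p>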
<h2>What We Gained</h2>
<p>Moving to remote MCP simplified things on both sides:</p>
<p><strong>For users</strong>: No Node.js dependency, no npm package to install or update, no PATH issues, no local process to debug. Just a URL and a token.</p>
<p><strong>For us</strong>: One deployment target instead of coordinating npm releases with backend changes. Server-side improvements are available to everyone immediately. And we can use features like SSE for real-time notifications that stdio doesn't support.</p>
<p>The MCP ecosystem is still young, but the direction is clear — remote HTTP transport is becoming the standard. We're glad we made the move early.</p>
<hr />
<p><strong>Resources:</strong></p>
<ul>
<li><p><a href="https://docs.rynko.dev/integrations/mcp-render">Render MCP Setup Guide</a></p>
</li>
<li><p><a href="https://docs.rynko.dev/integrations/mcp-flow">Flow MCP Setup Guide</a></p>
</li>
<li><p><a href="https://app.rynko.dev/signup">Sign up for free</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Adding PDF Export to Your SaaS in an Afternoon]]></title><description><![CDATA[It's a feature request that shows up in every SaaS app eventually: "Can I download this as a PDF?"
Users want to export dashboards, download invoices, save reports, and share formatted documents. And ]]></description><link>https://blog.rynko.dev/adding-pdf-export-to-your-saas-in-an-afternoon</link><guid isPermaLink="true">https://blog.rynko.dev/adding-pdf-export-to-your-saas-in-an-afternoon</guid><category><![CDATA[SaaS]]></category><category><![CDATA[Next.js]]></category><category><![CDATA[Web Development]]></category><category><![CDATA[Tutorial]]></category><category><![CDATA[rynko]]></category><dc:creator><![CDATA[Rynko Dev]]></dc:creator><pubDate>Mon, 23 Feb 2026 11:40:39 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1770997579240/225476c8-9c63-4065-9a1b-876caec97a1e.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<hr />
<p>It's a feature request that shows up in every SaaS app eventually: "Can I download this as a PDF?"</p>
<p>Users want to export dashboards, download invoices, save reports, and share formatted documents. And as a developer, you know this means dealing with PDF generation — one of those tasks that sounds simple but turns into a rabbit hole of browser dependencies, layout quirks, and maintenance headaches.</p>
<p>This guide shows you how to add PDF (and Excel) export to your SaaS in an afternoon using Rynko, without adding Puppeteer to your stack.</p>
<h2>The Traditional Approach (and Why It Hurts)</h2>
<p>Most teams approach PDF export like this:</p>
<ol>
<li><p>Create an HTML template with Handlebars/EJS</p>
</li>
<li><p>Add Puppeteer or Playwright as a dependency</p>
</li>
<li><p>Spin up a headless Chrome instance</p>
</li>
<li><p>Render the HTML and export to PDF</p>
</li>
<li><p>Handle Chrome memory leaks, version conflicts, and cold start times</p>
</li>
<li><p>Scale the whole thing when you have more than a few concurrent exports</p>
</li>
</ol>
<p>The result is a PDF service that:</p>
<ul>
<li><p>Takes 3-8 seconds per document</p>
</li>
<li><p>Requires 200-500MB RAM per Chrome instance</p>
</li>
<li><p>Breaks when Chrome auto-updates</p>
</li>
<li><p>Needs its own scaling strategy</p>
</li>
<li><p>Makes your Docker images huge</p>
</li>
</ul>
<p>And when product asks "can we also export as Excel?" — you start from scratch because Puppeteer doesn't do spreadsheets.</p>
<h2>The Rynko Approach</h2>
<p>Instead, you design a template once (visual editor or API), and call our SDK to generate documents. No browser. Sub-500ms generation. PDF and Excel from the same template.</p>
<p>Here's the full integration pattern.</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770998287749/c5990117-769f-433e-8104-0413e281dc8e.png" alt="The integration pattern: your SaaS app calls Rynko's SDK, gets back a download URL in under 500ms. No browser, no infrastructure." style="display:block;margin:0 auto" />

<h2>Step 1: Design Your Templates</h2>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770998242667/9df51b84-9904-4ee1-8475-a21164bb7030.png" alt="The visual template designer — drag and drop components, define variables, preview in real time. No HTML required." style="display:block;margin:0 auto" />

<p>Before writing any code, create the templates your users will need. Common SaaS export templates:</p>
<table>
<thead>
<tr>
<th>Template</th>
<th>Use Case</th>
</tr>
</thead>
<tbody><tr>
<td>Invoice</td>
<td>Billing, payments</td>
</tr>
<tr>
<td>Usage Report</td>
<td>Monthly account summaries</td>
</tr>
<tr>
<td>Data Export</td>
<td>Table data as formatted PDF/Excel</td>
</tr>
<tr>
<td>Dashboard Summary</td>
<td>KPI snapshot with charts</td>
</tr>
<tr>
<td>Certificate</td>
<td>Course completion, achievements</td>
</tr>
<tr>
<td>Contract</td>
<td>User agreements, proposals</td>
</tr>
</tbody></table>
<p>Use the visual designer at <a href="https://app.rynko.dev">app.rynko.dev</a> to build each template. Define variables that match your application's data model.</p>
<p>For example, a "Usage Report" template might have these variables:</p>
<pre><code class="language-typescript">interface UsageReportVariables {
  accountName: string;
  reportPeriod: string;
  totalApiCalls: number;
  totalDocuments: number;
  storageUsedMb: number;
  dailyUsage: Array&lt;{ date: string; apiCalls: number; documents: number }&gt;;
  topEndpoints: Array&lt;{ endpoint: string; calls: number; avgLatencyMs: number }&gt;;
}
</code></pre>
<h2>Step 2: Create a Document Service</h2>
<p>Add a thin service layer that handles document generation:</p>
<pre><code class="language-typescript">// src/services/document.service.ts
import { Rynko } from '@rynko/sdk';

const rynko = new Rynko({ apiKey: process.env.RYNKO_API_KEY! });

// Template IDs - map to your Rynko templates
const TEMPLATES = {
  invoice: 'invoice',
  usageReport: 'usage-report',
  dataExport: 'data-export',
  dashboardSummary: 'dashboard-summary',
} as const;

type TemplateKey = keyof typeof TEMPLATES;

interface GenerateOptions {
  template: TemplateKey;
  variables: Record&lt;string, any&gt;;
  format?: 'pdf' | 'excel';
  filename?: string;
}

export async function generateDocument(options: GenerateOptions) {
  const { template, variables, format = 'pdf' } = options;
  const params = { templateId: TEMPLATES[template], variables };

  const job = format === 'excel'
    ? await rynko.documents.generateExcel(params)
    : await rynko.documents.generatePdf(params);

  const completed = await rynko.documents.waitForCompletion(job.jobId);

  if (completed.status !== 'completed') {
    throw new Error(`Document generation failed: ${completed.errorMessage}`);
  }

  return {
    downloadUrl: completed.downloadUrl,
    jobId: job.jobId,
    format,
  };
}
</code></pre>
<h2>Step 3: Add API Endpoints</h2>
<h3>Express / Node.js</h3>
<pre><code class="language-typescript">// src/routes/exports.ts
import { Router } from 'express';
import { generateDocument } from '../services/document.service';
import { requireAuth } from '../middleware/auth';

const router = Router();

// Download invoice as PDF
router.get('/invoices/:id/pdf', requireAuth, async (req, res) =&gt; {
  const invoice = await db.invoices.findById(req.params.id);

  if (!invoice || invoice.accountId !== req.user.accountId) {
    return res.status(404).json({ error: 'Invoice not found' });
  }

  const result = await generateDocument({
    template: 'invoice',
    format: 'pdf',
    variables: {
      invoiceNumber: invoice.number,
      date: invoice.date,
      customerName: invoice.customerName,
      items: invoice.lineItems,
      subtotal: invoice.subtotal,
      tax: invoice.tax,
      total: invoice.total,
    },
  });

  res.json({ downloadUrl: result.downloadUrl });
});

// Download usage report (PDF or Excel)
router.get('/reports/usage', requireAuth, async (req, res) =&gt; {
  const format = req.query.format === 'excel' ? 'excel' : 'pdf';
  const period = req.query.period as string || 'current-month';

  const usage = await db.usage.getReport(req.user.accountId, period);

  const result = await generateDocument({
    template: 'usageReport',
    format,
    variables: {
      accountName: req.user.accountName,
      reportPeriod: usage.periodLabel,
      totalApiCalls: usage.totalApiCalls,
      totalDocuments: usage.totalDocuments,
      storageUsedMb: usage.storageUsedMb,
      dailyUsage: usage.dailyBreakdown,
      topEndpoints: usage.topEndpoints,
    },
  });

  res.json({ downloadUrl: result.downloadUrl });
});

// Generic data export
router.post('/exports', requireAuth, async (req, res) =&gt; {
  const { type, filters, format = 'pdf' } = req.body;
  const data = await db.exports.getData(req.user.accountId, type, filters);

  const result = await generateDocument({
    template: 'dataExport',
    format,
    variables: {
      exportTitle: `${type} Export`,
      exportDate: new Date().toISOString().split('T')[0],
      columns: data.columns,
      rows: data.rows,
      totalRows: data.totalRows,
    },
  });

  res.json({ downloadUrl: result.downloadUrl });
});

export default router;
</code></pre>
<h3>Next.js API Routes</h3>
<pre><code class="language-typescript">// app/api/invoices/[id]/pdf/route.ts
import { NextRequest, NextResponse } from 'next/server';
import { generateDocument } from '@/lib/document-service';
import { getServerSession } from 'next-auth';

export async function GET(req: NextRequest, { params }: { params: { id: string } }) {
  const session = await getServerSession();
  if (!session) return NextResponse.json({ error: 'Unauthorized' }, { status: 401 });

  const invoice = await db.invoices.findById(params.id);
  if (!invoice) return NextResponse.json({ error: 'Not found' }, { status: 404 });

  const result = await generateDocument({
    template: 'invoice',
    format: 'pdf',
    variables: mapInvoiceToVariables(invoice),
  });

  return NextResponse.json({ downloadUrl: result.downloadUrl });
}
</code></pre>
<h2>Step 4: Add Frontend Download Buttons</h2>
<pre><code class="language-tsx">// components/DownloadButton.tsx
'use client';

import { useState } from 'react';
import { Download, FileSpreadsheet } from 'lucide-react';

interface DownloadButtonProps {
  endpoint: string;
  format: 'pdf' | 'excel';
  label?: string;
}

export function DownloadButton({ endpoint, format, label }: DownloadButtonProps) {
  const [loading, setLoading] = useState(false);
  const [error, setError] = useState&lt;string | null&gt;(null);

  const handleDownload = async () =&gt; {
    setLoading(true);
    setError(null);
    try {
      const res = await fetch(`${endpoint}?format=${format}`);
      if (!res.ok) throw new Error('Export failed');
      const { downloadUrl } = await res.json();
      window.open(downloadUrl, '_blank');
    } catch (err) {
      setError('Export failed. Please try again.');
    } finally {
      setLoading(false);
    }
  };

  const Icon = format === 'excel' ? FileSpreadsheet : Download;
  const text = label || `Download ${format.toUpperCase()}`;

  return (
    &lt;div&gt;
      &lt;button onClick={handleDownload} disabled={loading} className="btn btn-secondary"&gt;
        &lt;Icon className="h-4 w-4 mr-2" /&gt;
        {loading ? 'Generating...' : text}
      &lt;/button&gt;
      {error &amp;&amp; &lt;p className="text-sm text-red-500 mt-1"&gt;{error}&lt;/p&gt;}
    &lt;/div&gt;
  );
}
</code></pre>
<p>Usage in your pages:</p>
<pre><code class="language-tsx">// In your invoice detail page
&lt;DownloadButton endpoint={`/api/invoices/${invoice.id}/pdf`} format="pdf" /&gt;

// In your reports page — offer both formats
&lt;div className="flex gap-2"&gt;
  &lt;DownloadButton endpoint="/api/reports/usage" format="pdf" label="PDF Report" /&gt;
  &lt;DownloadButton endpoint="/api/reports/usage" format="excel" label="Excel Export" /&gt;
&lt;/div&gt;
</code></pre>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770998632148/a4669911-db93-4f57-a225-301711bab87a.png" alt="PDF and Excel download buttons side by side — one template, two export options for your users." style="display:block;margin:0 auto" />

<h2>Step 5: Handle Background Generation (Optional)</h2>
<p>For large reports or batch exports, use webhooks instead of waiting synchronously:</p>
<pre><code class="language-typescript">// Start generation without waiting
export async function queueDocumentGeneration(options: GenerateOptions &amp; { userId: string }) {
  const params = {
    templateId: TEMPLATES[options.template],
    variables: options.variables,
  };

  const job = options.format === 'excel'
    ? await rynko.documents.generateExcel(params)
    : await rynko.documents.generatePdf(params);

  // Store the job reference
  await db.exportJobs.create({
    jobId: job.jobId,
    userId: options.userId,
    status: 'processing',
  });

  return job.jobId;
}

// Webhook handler — called when document is ready
import { verifyWebhookSignature } from '@rynko/sdk';

app.post('/webhooks/rynko', express.raw({ type: 'application/json' }), async (req, res) =&gt; {
  const event = verifyWebhookSignature({
    payload: req.body.toString(),
    signature: req.headers['x-rynko-signature'] as string,
    secret: process.env.RYNKO_WEBHOOK_SECRET!,
  });

  if (event.type === 'document.generated') {
    await db.exportJobs.update(event.data.jobId, {
      status: 'completed',
      downloadUrl: event.data.downloadUrl,
    });

    // Notify user (in-app notification, email, etc.)
    await notifyUser(event.data.metadata.userId, {
      title: 'Your export is ready',
      downloadUrl: event.data.downloadUrl,
    });
  }

  res.json({ received: true });
});
</code></pre>
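<p>To pair with the webhook handler above, the frontend needs something to poll while the job is processing. Here is a minimal sketch of the status-lookup logic, with an in-memory store standing in for <code>db.exportJobs</code> (the route wiring is omitted so the sketch runs standalone):</p>

```typescript
// In-memory stand-in for the exportJobs store used above, plus the
// lookup a polling endpoint would perform. Route wiring is omitted so
// this runs standalone; in the app this would back something like
// GET /api/exports/:jobId (an illustrative route, not a Rynko API).
type ExportJob = { status: string; downloadUrl?: string };
const exportJobs = new Map();

function getExportStatus(jobId: string) {
  const job = exportJobs.get(jobId) as ExportJob | undefined;
  if (!job) return { found: false, status: null, downloadUrl: null };
  return { found: true, status: job.status, downloadUrl: job.downloadUrl ?? null };
}

// Simulate the lifecycle: queued by queueDocumentGeneration, then the
// webhook handler marks it complete.
exportJobs.set('job_1', { status: 'processing' });
exportJobs.set('job_1', { status: 'completed', downloadUrl: 'https://example.com/file.pdf' });
```

<p>The frontend polls this endpoint every few seconds until <code>status</code> flips to <code>completed</code>, then swaps the spinner for the download link.</p>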
<h2>What This Gets You</h2>
<p>After an afternoon of integration:</p>
<ul>
<li><p><strong>PDF export</strong> on any page that displays data</p>
</li>
<li><p><strong>Excel export</strong> from the same templates — no separate implementation</p>
</li>
<li><p><strong>Sub-500ms generation</strong> — users don't wait</p>
</li>
<li><p><strong>No browser dependencies</strong> — no Puppeteer, no Chrome, no memory issues</p>
</li>
<li><p><strong>Template changes without deploys</strong> — update templates in the visual editor and stop spending engineering sprints on <em>"Can we move the logo 5px to the left?"</em></p>
</li>
<li><p><strong>Non-developers can update templates</strong> — product or design teams can modify document layouts directly</p>
</li>
</ul>
<h2>Cost at Scale</h2>
<table>
<thead>
<tr>
<th>Monthly exports</th>
<th>Rynko plan</th>
<th>Cost</th>
<th>Per-document</th>
</tr>
</thead>
<tbody><tr>
<td>Up to 50</td>
<td>Free</td>
<td>$0</td>
<td>$0</td>
</tr>
<tr>
<td>Up to 600</td>
<td>Starter</td>
<td>$29/mo</td>
<td>$0.048</td>
</tr>
<tr>
<td>Up to 4,000</td>
<td>Growth</td>
<td>$79/mo</td>
<td>$0.020</td>
</tr>
<tr>
<td>Up to 12,000</td>
<td>Scale</td>
<td>$149/mo</td>
<td>$0.012</td>
</tr>
</tbody></table>
<p>Compare that to Puppeteer infrastructure costs at scale, plus the engineering time to maintain it.</p>
<h2>Get Started</h2>
<ol>
<li><p><a href="https://app.rynko.dev/signup">Sign up for Rynko</a> — free with 5,000 credits</p>
</li>
<li><p>Design your export templates in the visual editor</p>
</li>
<li><p><code>npm install @rynko/sdk</code> (or <code>pip install rynko</code>)</p>
</li>
<li><p>Add the document service and API endpoints</p>
</li>
<li><p>Add download buttons to your frontend</p>
</li>
</ol>
<p>Your users get their PDF exports. You get your afternoon back.</p>
<p><a href="https://app.rynko.dev/signup">Get Started Free</a> | <a href="https://docs.rynko.dev">SDK Documentation</a> | <a href="https://docs.rynko.dev/api">API Reference</a></p>
<p><strong>New to Rynko?</strong> Follow our <a href="https://blog.rynko.dev/getting-started-in-5-minutes">Getting Started in 5 Minutes</a> guide. Want to understand the rendering architecture? Read <a href="https://blog.rynko.dev/design-once-generate-anywhere">Design Once, Generate Anywhere</a>.</p>
<hr />
<p><em>Building something specific? Check our</em> <a href="https://www.rynko.dev/use-cases"><em>use cases</em></a> <em>for industry-specific examples and templates.</em></p>
<p><em><sub>Disclosure: I ideate and draft content, then refine it with the aid of artificial intelligence tools like Claude and revise it to reflect my intended message.</sub></em></p>
]]></content:encoded></item><item><title><![CDATA[Rynko vs Puppeteer for PDF Generation: Architecture, Performance, and Tradeoffs]]></title><description><![CDATA[Puppeteer comes up in almost every conversation about PDF generation. It's battle-tested, well-documented, and most developers already know HTML and CSS. When I started building Rynko, I didn't set ou]]></description><link>https://blog.rynko.dev/rynko-vs-puppeteer-pdf-generation</link><guid isPermaLink="true">https://blog.rynko.dev/rynko-vs-puppeteer-pdf-generation</guid><category><![CDATA[JavaScript]]></category><category><![CDATA[PDF generation]]></category><category><![CDATA[puppeteer]]></category><category><![CDATA[rynko]]></category><category><![CDATA[api]]></category><category><![CDATA[Node.js]]></category><category><![CDATA[pdf]]></category><category><![CDATA[excel]]></category><category><![CDATA[Document Generation]]></category><dc:creator><![CDATA[Rynko Dev]]></dc:creator><pubDate>Mon, 23 Feb 2026 11:11:30 GMT</pubDate><enclosure url="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/698ea54a73da603a1b59230b/42872329-8b3a-40f3-aab2-0b8670cc3a9a.svg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<hr />
<p>Puppeteer comes up in almost every conversation about PDF generation. It's battle-tested, well-documented, and most developers already know HTML and CSS. When I started building Rynko, I didn't set out to replace Puppeteer — I set out to solve a different problem. But the two tools end up on the same shortlists, so it's worth laying out where they overlap, where they diverge, and when each one makes sense.</p>
<h2>How They Work (Side by Side)</h2>
<p>The architectural difference between Puppeteer and Rynko isn't just an implementation detail — it drives almost every other tradeoff between the two.</p>
<p><strong>Puppeteer's pipeline</strong> looks like this: your application generates HTML (usually from a template engine like Handlebars or EJS), launches a headless Chrome instance, loads the HTML into a page, waits for rendering to complete, and calls <code>page.pdf()</code> to export the result. The PDF you get is essentially a print of a web page.</p>
<pre><code class="language-plaintext">HTML + CSS → Headless Chrome → Page Render → PDF Export
</code></pre>
<p><strong>Rynko's pipeline</strong> skips the browser entirely. Templates are defined as a structured JSON component tree — not HTML. The <a href="https://yogalayout.dev/">Yoga layout engine</a> (the same flexbox engine that powers React Native) computes every element's position, and <a href="https://pdfkit.org/">PDFKit</a> writes native vector primitives directly to the PDF. Text is rendered as searchable glyphs, charts as Bezier paths, fonts are embedded.</p>
<pre><code class="language-plaintext">JSON Template + Variables → Yoga Layout → PDFKit → Native PDF
</code></pre>
<p>The key insight: Puppeteer renders a browser page and then exports it. Rynko builds the PDF directly, with no intermediate rendering step. This distinction shows up in performance, resource usage, deployment complexity, and output consistency.</p>
<h2>The Performance Gap</h2>
<p>I'll be upfront — performance is where the difference is most dramatic, and the numbers aren't close.</p>
<p><strong>Generation speed.</strong> A typical Puppeteer document takes 3–8 seconds to generate. That includes Chrome startup (or page creation if you're reusing a browser instance), HTML rendering with CSS layout, font loading, and the PDF export step. Rynko generates the same document in 200–500ms. The Yoga layout pass is fast because it's computing flexbox positions in native code, and PDFKit writes PDF primitives without a browser rendering step in between.</p>
<p><strong>Memory.</strong> Each Chrome instance needs 200–500MB of RAM. If you're generating documents concurrently, that adds up fast — 10 concurrent PDFs can consume 2–5GB. Rynko's workers use roughly 50MB each, because there's no browser process to keep alive.</p>
<p><strong>Serverless reality.</strong> This is where Puppeteer hits its hardest wall. AWS Lambda has a 250MB deployment package limit — Chrome alone is roughly 280MB compressed. Cold starts add 5–15 seconds on top of the generation time. Most teams that start with Puppeteer on Lambda eventually move to dedicated instances, which changes the cost equation significantly. Because Rynko's renderer has no browser dependency, it fits comfortably in serverless environments without special packaging or layer configurations.</p>
<table style="min-width:75px"><colgroup><col style="min-width:25px"></col><col style="min-width:25px"></col><col style="min-width:25px"></col></colgroup><tbody><tr><th><p></p></th><th><p>Rynko</p></th><th><p>Puppeteer</p></th></tr><tr><td><p><strong>Generation speed</strong></p></td><td><p>200–500ms</p></td><td><p>3–8s</p></td></tr><tr><td><p><strong>Memory per document</strong></p></td><td><p>~50MB</p></td><td><p>200–500MB</p></td></tr><tr><td><p><strong>Serverless-friendly</strong></p></td><td><p>Yes</p></td><td><p>Difficult (Chrome size, cold starts)</p></td></tr><tr><td><p><strong>Concurrent generation</strong></p></td><td><p>Low memory overhead</p></td><td><p>Limited by RAM</p></td></tr><tr><td><p><strong>Infrastructure</strong></p></td><td><p>No browser dependency</p></td><td><p>Chrome must be installed</p></td></tr></tbody></table>
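<p>These numbers are easy to sanity-check in your own stack. A minimal timing helper works with any async generation call; <code>fakeGenerate</code> below is a stand-in for your real pipeline, not an actual Puppeteer or Rynko call:</p>

```typescript
// Minimal latency probe: time any async document-generation call.
// `generate` is whatever your pipeline exposes; the helper only
// measures wall-clock time around it.
async function timeGeneration(label: string, generate: Function) {
  const start = Date.now();
  const result = await generate();
  const elapsedMs = Date.now() - start;
  console.log(`${label}: ${elapsedMs}ms`);
  return { result, elapsedMs };
}

// Stand-in generator that resolves after ~50ms, in place of a real
// Puppeteer or Rynko call:
const fakeGenerate = () =>
  new Promise((resolve) => setTimeout(() => resolve('pdf-bytes'), 50));
```

<p>Swap <code>fakeGenerate</code> for your Puppeteer wrapper and your Rynko call and compare the two numbers under realistic load.</p>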

<h2>What Puppeteer Does Better</h2>
<p>I want to be fair about this, because Puppeteer has real strengths that matter for certain use cases.</p>
<p><strong>Full CSS support.</strong> Puppeteer renders in a real browser, which means you get the complete CSS specification — grid, flexbox, animations (for screenshots), web fonts via <code>@font-face</code>, media queries, and CSS <code>@page</code> rules. Rynko's layout engine supports flexbox-style positioning through Yoga, but it's not a browser. If your templates rely heavily on CSS grid or advanced selectors, Puppeteer handles that natively.</p>
<p><strong>Existing HTML templates work as-is.</strong> If you have a Handlebars, EJS, or React-based template pipeline that's already producing the HTML you want, Puppeteer plugs directly into it. No conversion, no new template format. That's a real advantage when you've already invested in HTML templates.</p>
<p><strong>URL-to-PDF and screenshots.</strong> Puppeteer can navigate to a URL and export what it sees. That's useful for capturing web pages, generating screenshots of dashboards, or turning existing web content into PDFs without modifying the source. Rynko doesn't do this — it generates documents from structured templates, not from URLs.</p>
<p><strong>Mature ecosystem.</strong> Puppeteer has been around since 2017. It has extensive documentation, a large community, hundreds of Stack Overflow answers, and well-understood patterns for common problems. When something goes wrong, you'll find someone who's already solved it.</p>
<p><strong>Free and open source.</strong> There's no subscription, no usage limits, no vendor dependency. You own the entire pipeline.</p>
<h2>What Rynko Does Better</h2>
<p><strong>Speed and resource efficiency.</strong> As covered above, the performance gap is significant. For latency-sensitive workloads — a user clicks "Download Invoice" and waits — 200ms vs 5 seconds is the difference between a smooth experience and a loading spinner.</p>
<p><strong>PDF and Excel from the same template.</strong> This is the biggest practical difference. Puppeteer generates PDFs only. If you also need Excel output — and most business applications eventually do — you're building and maintaining a completely separate pipeline with a library like ExcelJS. With Rynko, you design one template and generate both formats from the same API call by changing a single parameter.</p>
<pre><code class="language-typescript">// Same template, same data — just change the method
const pdf = await rynko.documents.generatePdf({
  templateId: 'invoice',
  variables: invoiceData,
});

const excel = await rynko.documents.generateExcel({
  templateId: 'invoice',
  variables: invoiceData,
});
</code></pre>
<p><strong>Visual designer for non-developers.</strong> Templates are designed in a web-based drag-and-drop editor. Product managers, finance teams, and operations people can update a template without a code deploy. With Puppeteer, every layout change — moving a logo, adjusting a font size, adding a column — requires a developer to modify HTML.</p>
<p><strong>28 built-in component types.</strong> Tables, charts (8 types), QR codes, barcodes, 9 interactive PDF form field types (text inputs, checkboxes, dropdowns, digital signatures), conditional blocks, loops, and key-value layouts. These are first-class components with property schemas validated at design time, not HTML you have to build and style yourself.</p>
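<p>To give a feel for the structured format, here is an illustrative fragment of a component tree. The component names and property keys are assumptions for illustration only, not the exact Rynko schema:</p>

```typescript
// Illustrative template fragment (NOT the exact Rynko schema): a
// component tree with a heading, a data-bound table, and a QR code.
const templateSketch = {
  components: [
    { type: 'text', props: { value: 'Invoice {{invoiceNumber}}', fontSize: 18 } },
    {
      type: 'table',
      props: {
        rows: '{{lineItems}}',
        columns: [
          { header: 'Description', field: 'description' },
          { header: 'Hours', field: 'hours' },
        ],
      },
    },
    { type: 'qrcode', props: { value: '{{paymentUrl}}' } },
  ],
};
```

<p>Because each component type carries a property schema, a tree like this can be validated before rendering, which is what makes design-time errors catchable.</p>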
<p><strong>AI template creation via MCP.</strong> Rynko provides an <a href="https://www.rynko.dev/mcp">MCP server</a> that lets AI agents like Claude Desktop, Cursor, or any MCP-compatible client create templates and generate documents through conversation. The structured JSON schema gives the AI a well-defined contract to work against — it can't produce broken HTML or invalid CSS because the schema is validated before it reaches the renderer. This is a fundamentally different situation from asking an LLM to generate valid HTML templates, where broken tags and layout issues are common and hard to catch before rendering.</p>
<p><strong>No browser dependency.</strong> No Chrome to install, no Puppeteer version to pin, no <code>--no-sandbox</code> flags in Docker, no Chromium packages to maintain in your CI/CD pipeline. The renderer is a Node.js library with no native browser dependencies.</p>
<p><strong>Zero XSS surface.</strong> Templates are JSON, not HTML. There's no <code>innerHTML</code>, no script injection vector, no CSS <code>expression()</code> attack surface. The expression evaluator uses a strict allowlist — arithmetic, comparisons, <code>Math.*</code> functions, and safe array methods like <code>.map()</code>, <code>.reduce()</code>, <code>.filter()</code>. Calls to <code>eval()</code>, <code>require()</code>, prototype access, and template literals are blocked at the syntax level.</p>
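<p>As a toy illustration of the idea (Rynko's real evaluator allowlists syntax at the parser level; the denylist regexes below only sketch the intent):</p>

```typescript
// Toy safety gate for template expressions. A real evaluator parses
// the expression and allowlists node types; this denylist of obviously
// dangerous constructs is only a sketch of the concept.
const BLOCKED = [
  /\beval\b/,        // dynamic evaluation
  /\brequire\b/,     // module loading
  /\bimport\b/,
  /__proto__/,       // prototype pollution
  /\bconstructor\b/,
  /`/,               // template literals
];

function isExpressionSafe(expr: string): boolean {
  return !BLOCKED.some((pattern) => pattern.test(expr));
}
```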
<p><strong>Deterministic rendering.</strong> This is one of Rynko's biggest strengths. The same template and data produce identical output regardless of the host environment. There's no CSS cascade, no browser rendering differences between your development machine and a Linux container in production. Chrome version updates can't silently change your document output.</p>
<h2>The Excel Question</h2>
<p>This deserves its own section because it's the single most common reason teams switch from Puppeteer to Rynko.</p>
<p>Puppeteer generates PDFs. That's it. When your finance team asks for an Excel version of the same invoice, or your clients need spreadsheet exports they can manipulate, you're building a second generation pipeline. That typically means ExcelJS or SheetJS, with its own template logic, its own styling code (cell-by-cell formatting, manual column widths), and its own maintenance burden. Two codebases for the same document.</p>
<pre><code class="language-typescript">// With Puppeteer, you need two completely separate pipelines:

// Pipeline 1: PDF via Puppeteer
const browser = await puppeteer.launch();
const page = await browser.newPage();
await page.setContent(invoiceHtml);
const pdf = await page.pdf({ format: 'A4' });
await browser.close();

// Pipeline 2: Excel via ExcelJS (separate library, separate logic)
const workbook = new ExcelJS.Workbook();
const sheet = workbook.addWorksheet('Invoice');
sheet.columns = [
  { header: 'Description', key: 'description', width: 30 },
  { header: 'Qty', key: 'qty', width: 10 },
  { header: 'Price', key: 'price', width: 15 },
];
invoice.items.forEach(item =&gt; sheet.addRow(item));
// Now manually style every cell, add borders, format numbers...
await workbook.xlsx.writeFile('invoice.xlsx');
</code></pre>
<p>With Rynko, it's the same template, the same API, the same data. The rendering engine handles format-specific output — tables become Excel sheets with proper column types, number formatting carries over, charts render natively in both formats. When you update the template, both outputs update together.</p>
<p>If your application only ever needs PDF, this isn't a factor. But in my experience building enterprise systems, Excel comes up eventually — and rebuilding later is significantly more expensive than choosing a tool that handles both from the start.</p>
<h2>Code Comparison: Generating an Invoice</h2>
<p>Here's what generating the same invoice looks like with each tool, end to end.</p>
<h3>Puppeteer</h3>
<pre><code class="language-typescript">import puppeteer from 'puppeteer';
import Handlebars from 'handlebars';

// Step 1: Define your HTML template (or load from a file)
const templateSource = `
&lt;html&gt;
&lt;style&gt;
  body { font-family: Arial; padding: 40px; }
  table { width: 100%; border-collapse: collapse; margin-top: 20px; }
  th { background: #f4f4f4; text-align: left; }
  th, td { padding: 8px; border-bottom: 1px solid #ddd; }
  .total { font-weight: bold; text-align: right; margin-top: 20px; font-size: 18px; }
  .header { display: flex; justify-content: space-between; }
&lt;/style&gt;
&lt;body&gt;
  &lt;div class="header"&gt;
    &lt;div&gt;
      &lt;h1&gt;{{companyName}}&lt;/h1&gt;
      &lt;p&gt;{{companyEmail}}&lt;/p&gt;
    &lt;/div&gt;
    &lt;div&gt;
      &lt;h2&gt;Invoice #{{invoiceNumber}}&lt;/h2&gt;
      &lt;p&gt;Date: {{invoiceDate}}&lt;/p&gt;
    &lt;/div&gt;
  &lt;/div&gt;
  &lt;p&gt;Bill to: {{clientName}}&lt;/p&gt;
  &lt;table&gt;
    &lt;thead&gt;
      &lt;tr&gt;&lt;th&gt;Description&lt;/th&gt;&lt;th&gt;Hours&lt;/th&gt;&lt;th&gt;Rate&lt;/th&gt;&lt;th&gt;Amount&lt;/th&gt;&lt;/tr&gt;
    &lt;/thead&gt;
    &lt;tbody&gt;
      {{#each lineItems}}
      &lt;tr&gt;
        &lt;td&gt;{{description}}&lt;/td&gt;
        &lt;td&gt;{{hours}}&lt;/td&gt;
        &lt;td&gt;${{rate}}&lt;/td&gt;
        &lt;td&gt;${{multiply hours rate}}&lt;/td&gt;
      &lt;/tr&gt;
      {{/each}}
    &lt;/tbody&gt;
  &lt;/table&gt;
  &lt;p class="total"&gt;Total: ${{total}}&lt;/p&gt;
&lt;/body&gt;
&lt;/html&gt;
`;

// Step 2: Compile and render the HTML
Handlebars.registerHelper('multiply', (a, b) =&gt; a * b);
const template = Handlebars.compile(templateSource);
const html = template(invoiceData);

// Step 3: Launch Chrome, render, and export
const browser = await puppeteer.launch({
  args: ['--no-sandbox', '--disable-setuid-sandbox'],
});
const page = await browser.newPage();
await page.setContent(html, { waitUntil: 'networkidle0' });
const pdf = await page.pdf({
  format: 'A4',
  printBackground: true,
  margin: { top: '20mm', bottom: '20mm', left: '15mm', right: '15mm' },
});
await browser.close();
</code></pre>
<p>You're responsible for: the HTML template, the CSS styling, Handlebars helpers for calculations, Chrome lifecycle management, and error handling when Chrome crashes or runs out of memory.</p>
<h3>Rynko</h3>
<pre><code class="language-typescript">import { Rynko } from '@rynko/sdk';

const rynko = new Rynko({ apiKey: process.env.RYNKO_API_KEY! });

// Template is designed in the visual editor — layout, styling,
// and calculated fields (subtotals, tax) live in the template
const job = await rynko.documents.generatePdf({
  templateId: 'invoice',
  variables: {
    companyName: 'Delivstat Consulting',
    companyEmail: 'billing@delivstat.com',
    invoiceNumber: 'INV-2026-0042',
    invoiceDate: '2026-02-23',
    clientName: 'Acme Technologies Pvt. Ltd.',
    lineItems: [
      { description: 'Technical Consulting', hours: 24, rate: 150 },
      { description: 'API Design &amp; Review', hours: 16, rate: 150 },
      { description: 'Performance Optimization', hours: 12, rate: 175 },
    ],
    taxRate: 0.18,
    taxLabel: 'GST (18%)',
  },
});

const result = await rynko.documents.waitForCompletion(job.jobId);
console.log(result.downloadUrl);
</code></pre>
<p>The template handles layout, styling, and calculations like subtotals and tax amounts through built-in expression support (e.g., <code>lineItems.reduce((sum, item) =&gt; sum + item.hours * item.rate, 0)</code>). Your application code just sends the data.</p>
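<p>The expression is ordinary reduce arithmetic. Run against the sample data above, it works out like this:</p>

```typescript
// The same calculation the template expression performs, in plain
// code, using the sample variables from the snippet above.
const lineItems = [
  { description: 'Technical Consulting', hours: 24, rate: 150 },
  { description: 'API Design & Review', hours: 16, rate: 150 },
  { description: 'Performance Optimization', hours: 12, rate: 175 },
];

const subtotal = lineItems.reduce((sum, item) => sum + item.hours * item.rate, 0);
const tax = subtotal * 0.18;      // taxRate from the variables above
const total = subtotal + tax;

console.log({ subtotal, tax, total }); // { subtotal: 8100, tax: 1458, total: 9558 }
```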
<h2>When to Use Each</h2>
<h3>When Puppeteer makes sense</h3>
<ul>
<li><p><strong>You have existing HTML templates</strong> and they work well. If your Handlebars or EJS pipeline already produces good PDFs and you don't need Excel, there's no reason to migrate.</p>
</li>
<li><p><strong>You need URL-to-PDF capture.</strong> Puppeteer can navigate to a page and export it. Rynko generates documents from templates, not from URLs.</p>
</li>
<li><p><strong>Full CSS is required.</strong> If your templates use CSS grid, advanced selectors, or browser-specific features that Yoga's flexbox model doesn't cover, Puppeteer gives you the full browser engine.</p>
</li>
<li><p><strong>Budget is the primary constraint.</strong> Puppeteer is free. If you have the engineering time to manage the infrastructure and Chrome dependencies, the licensing cost is zero.</p>
</li>
<li><p><strong>Low volume, latency doesn't matter.</strong> If you're generating a handful of documents per day in a background job, the 3–8 second generation time may not be a problem worth solving.</p>
</li>
</ul>
<h3>When Rynko makes sense</h3>
<ul>
<li><p><strong>You're starting fresh.</strong> If you don't have existing templates, building with Rynko's visual designer and structured JSON is faster than writing HTML/CSS from scratch and wiring up a Puppeteer pipeline.</p>
</li>
<li><p><strong>You need PDF and Excel.</strong> This is the clearest decision point. If both formats are required, Rynko eliminates the second pipeline.</p>
</li>
<li><p><strong>Performance matters.</strong> High-throughput generation, user-facing download buttons, serverless environments — anywhere the 3–8 second Puppeteer time is a problem.</p>
</li>
<li><p><strong>Non-developers need to edit templates.</strong> The visual designer lets product, finance, or operations teams make layout changes without a code deploy.</p>
</li>
<li><p><strong>You're building AI-powered workflows.</strong> The MCP server lets AI agents create templates and generate documents through conversation, with schema validation that prevents the broken-HTML problem.</p>
</li>
<li><p><strong>You want deterministic output.</strong> No Chrome version drift, no CSS rendering differences between environments, no surprises when your CI server renders differently from your local machine.</p>
</li>
</ul>
<h2>Migrating from Puppeteer</h2>
<p>If you're currently using Puppeteer and want to try Rynko, the migration path doesn't have to be all-or-nothing.</p>
<p>The <a href="https://www.rynko.dev/mcp">MCP server</a> can help with the conversion. Point Claude Desktop or Cursor at your existing HTML template and ask it to recreate the layout as a Rynko template. The AI analyzes the HTML structure, identifies dynamic fields, and maps them to template variables and components. It won't produce a pixel-perfect replica every time, but it gets you 80–90% of the way there, and the visual designer lets you refine the rest.</p>
<p>You can also start by migrating a single template — try an invoice or a simple report — and run both pipelines in parallel while you evaluate. Rynko's free tier includes 50 documents per month, and every new account gets 5,000 credits during the Founder's Preview, so there's room to test with real workloads before committing.</p>
<p><a href="https://app.rynko.dev/signup">Try Rynko Free</a> | <a href="https://docs.rynko.dev">Documentation</a> | <a href="https://www.rynko.dev/mcp">MCP Setup</a></p>
<hr />
<p><em>Questions or feedback:</em> <a href="mailto:support@rynko.dev"><em>support@rynko.dev</em></a> <em>or</em> <a href="https://discord.gg/d8cU2MG6"><em>Discord</em></a><em>.</em></p>
<hr />
<p><em>Disclosure: I ideate and draft content, then refine it with the aid of artificial intelligence tools like Claude and revise it to reflect my intended message.</em></p>
]]></content:encoded></item><item><title><![CDATA[Introducing Rynko: The Deterministic Document API for Startups and Developers]]></title><description><![CDATA[Today, I am introducing Rynko. This is a new document generation platform built to help developers and AI agents design and generate PDF and Excel documents at scale without the traditional overhead.
]]></description><link>https://blog.rynko.dev/rynko-intro</link><guid isPermaLink="true">https://blog.rynko.dev/rynko-intro</guid><category><![CDATA[rynko]]></category><category><![CDATA[Artificial Intelligence]]></category><category><![CDATA[Document Generation]]></category><dc:creator><![CDATA[Rynko Dev]]></dc:creator><pubDate>Mon, 23 Feb 2026 08:00:00 GMT</pubDate><enclosure url="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/698ea54a73da603a1b59230b/8b677a0a-7efa-49e1-ba06-0841cc8ff5c7.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Today, I am introducing Rynko. This is a new document generation platform built to help developers and AI agents design and generate PDF and Excel documents at scale without the traditional overhead.</p>
<p>If you are building a SaaS, you eventually have to generate an invoice, a receipt, or a complex report. Developers usually waste days wrestling with CSS media queries or setting up resource-heavy HTML-to-PDF microservices using Puppeteer. Rynko provides the infrastructure to design, version, and generate documents deterministically. You can go from a blank canvas to a production-ready template in minutes and get back to building your core product.</p>
<h3>Architecture: Native Speed, No Bloat</h3>
<p>Rynko generates PDF and Excel documents from a single definition. This definition is a structured JSON component tree rather than HTML.</p>
<p>We explicitly chose not to use HTML because headless browsers are heavy. A standard Chromium-based PDF generator can easily consume hundreds of megabytes of RAM per instance. Rynko uses a native layout pipeline powered by the Yoga Layout Engine and PDFKit.</p>
<p>The result is a massive win for your server costs and performance:</p>
<ul>
<li><p><strong>Low Footprint:</strong> Rynko workers operate at roughly 50MB of memory.</p>
</li>
<li><p><strong>High Speed:</strong> Documents generate in a median of 426 milliseconds.</p>
</li>
<li><p><strong>Deterministic:</strong> Identical JSON input produces identical PDF output every single time. There are no rendering differences between your local machine and your production server.</p>
</li>
</ul>
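<p>Determinism is also easy to assert in a test suite: hash two generations of the same template and data and compare. The buffers below are stand-ins for real generator output:</p>

```typescript
import { createHash } from 'node:crypto';

// Byte-for-byte comparison of two generated documents via SHA-256.
// With a deterministic renderer, two runs with the same template and
// variables should always satisfy this check.
function sameDocument(a: Buffer, b: Buffer): boolean {
  const digest = (buf: Buffer) => createHash('sha256').update(buf).digest('hex');
  return digest(a) === digest(b);
}

// Stand-ins for the output of two generation runs:
const runA = Buffer.from('%PDF-1.7 ...identical bytes...');
const runB = Buffer.from('%PDF-1.7 ...identical bytes...');
```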
<h3>The Template Designer</h3>
<p>You do not have to write the JSON manually. Templates are designed visually using a drag-and-drop editor that supports over 19 component types. These include data tables, charts, dynamic QR codes, and conditional logic.</p>
<p>Each component has a strict property schema validated at design time. You can preview templates in real time with live variable substitution and sample data. Once designed, the exact same template can generate both a highly styled PDF and a formatted Excel spreadsheet with native formulas.</p>
<h3>AI Integration: Let Agents Do the Work</h3>
<p>To make integration even faster, we built a native Model Context Protocol (MCP) server. This allows AI agents from Claude Desktop, Cursor, or Windsurf to interact with Rynko directly.</p>
<p>You can prompt your IDE to "Generate an invoice template for Acme Corp with a tax calculated field." The agent will use the MCP tools to build the JSON tree and draft the template. You can then review it visually in the dashboard before using it in your application.</p>
<h3>Developer Experience</h3>
<p>We treat document generation as a first-class, code-first concern. We provide official SDKs for Node.js, Python, and Java. These SDKs feature automatic retries with exponential backoff.</p>
<p>You can batch generate multiple documents in a single API call. Final documents are delivered via cryptographically signed URLs that automatically expire after three days. Webhook deliveries include HMAC-SHA256 signature verification so you can securely update your database when a document is ready.</p>
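<p>HMAC webhook verification follows a standard pattern. This sketch shows the general shape; the exact header name and encoding are assumptions here, not Rynko's documented wire format:</p>

```typescript
import { createHmac, timingSafeEqual } from 'node:crypto';

// Generic HMAC-SHA256 webhook check: recompute the signature over the
// raw request body and compare in constant time. Hex encoding is an
// assumption here; check the provider docs for the exact wire format.
function verifySignature(payload: string, signature: string, secret: string): boolean {
  const expected = createHmac('sha256', secret).update(payload).digest('hex');
  const a = Buffer.from(expected);
  const b = Buffer.from(signature);
  return a.length === b.length && timingSafeEqual(a, b);
}
```

<p>The constant-time comparison matters: a naive string equality check can leak signature bytes through timing differences.</p>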
<h3>Infrastructure That Grows with You</h3>
<p>Rynko is easy enough for a weekend side project, but it is built to handle enterprise scale when your startup grows.</p>
<p>We organize resources using Projects and Environments. You get complete resource isolation for your dev, staging, and production environments. When you land enterprise clients, Rynko is ready with PDF/A-2b compliance for long-term archival, role-based access control for your team, and full audit logs.</p>
<h3>Join the Public Beta: Founder's Preview</h3>
<p>Rynko isn't just a basic PDF wrapper. We are building the deterministic infrastructure that allows developers and AI agents to generate documents at scale.</p>
<p>Rynko is currently in <strong>Public Beta: Founder's Preview</strong>. Join today to claim <strong>5,000 free document generation credits</strong> and start building deterministic document workflows without the Chromium overhead.</p>
<p><a href="https://app.rynko.dev/signup"><strong>Try It Free</strong></a> | <a href="https://www.rynko.dev/mcp"><strong>MCP Setup Guide</strong></a> | <a href="https://docs.rynko.dev/"><strong>Documentation</strong></a></p>
<hr />
<p><em>Questions? Join our</em> <a href="https://discord.gg/d8cU2MG6"><em><strong>Discord</strong></em></a> <em>or check the</em> <a href="https://www.npmjs.com/package/@rynko/mcp-server"><em><strong>npm package</strong></em></a><em>.</em></p>
<p><em>Disclosure: I ideate and draft content, then refine it with the aid of artificial intelligence tools like Claude and revise it to reflect my intended message.</em></p>
]]></content:encoded></item><item><title><![CDATA[Why We Built an MCP Server for Document Generation]]></title><description><![CDATA[The Gap in AI Agent Capabilities
Picture this: you're using Claude to analyze your quarterly sales data. It queries your database, crunches the numbers, identifies trends, and writes a summary. Then you say:
"Great. Now turn that into a PDF report I ...]]></description><link>https://blog.rynko.dev/why-we-built-mcp-server-document-generation</link><guid isPermaLink="true">https://blog.rynko.dev/why-we-built-mcp-server-document-generation</guid><category><![CDATA[rynko]]></category><category><![CDATA[AI]]></category><category><![CDATA[Document Generation]]></category><category><![CDATA[pdf]]></category><category><![CDATA[excel]]></category><category><![CDATA[Artificial Intelligence]]></category><category><![CDATA[SaaS]]></category><category><![CDATA[System Architecture]]></category><category><![CDATA[Web Development]]></category><dc:creator><![CDATA[Rynko Dev]]></dc:creator><pubDate>Tue, 17 Feb 2026 18:30:38 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1770983248781/3a915089-f40f-4070-b85a-43e8de5529f8.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<hr />
<h2 id="heading-the-gap-in-ai-agent-capabilities">The Gap in AI Agent Capabilities</h2>
<p>Picture this: you're using Claude to analyze your quarterly sales data. It queries your database, crunches the numbers, identifies trends, and writes a summary. Then you say:</p>
<p><em>"Great. Now turn that into a PDF report I can send to the board."</em></p>
<p>And suddenly, the most capable AI in the world is stuck. It can give you Markdown. It can generate HTML. But a real, formatted PDF with your company logo, charts, page numbers, and a professional layout? That requires infrastructure the AI agent simply doesn't have access to.</p>
<p>This is the gap we built Rynko's MCP server to fill.</p>
<h2 id="heading-what-is-mcp">What Is MCP?</h2>
<p><a target="_blank" href="https://modelcontextprotocol.io/">Model Context Protocol (MCP)</a> is an open standard that allows AI assistants to connect to external tools and services. Think of it as a USB port for AI—a standardized way for models like Claude to interact with the outside world.</p>
<p>When you connect an MCP server to Claude Desktop or Cursor, the AI gains new abilities. It can read files, query APIs, execute code—anything the server provides.</p>
<p>Our MCP server enables AI agents to <strong>design templates, generate documents, and manage the entire document lifecycle</strong>—all through natural conversation.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770983881600/b526198d-48aa-4fd5-ae98-1193ef18857c.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-what-our-mcp-server-can-do">What Our MCP Server Can Do</h2>
<p>Here are the tools provided by the Rynko Document Generation MCP:</p>
<ol>
<li><p><strong>list_workspaces</strong> — List all workspaces you have access to, including workspace names, team names, and your role.</p>
</li>
<li><p><strong>get_workspace</strong> — Get details about a specific workspace including template count and team information.</p>
</li>
<li><p><strong>list_templates</strong> — List templates in a workspace, filterable by format (PDF or Excel).</p>
</li>
<li><p><strong>get_template</strong> — Get full details of a template including schema, variables, and settings.</p>
</li>
<li><p><strong>create_draft_template</strong> — Create a new draft template with a defined schema and variables.</p>
</li>
<li><p><strong>update_draft_template</strong> — Update an existing template's draft version (name, description, schema, or variables).</p>
</li>
<li><p><strong>validate_schema</strong> — Validate a template schema without creating a template; returns any validation errors.</p>
</li>
<li><p><strong>get_schema_reference</strong> — Fetch the complete Rynko template schema reference documentation (all component types, examples, styling, validation rules, etc.).</p>
</li>
<li><p><strong>parse_data_file</strong> — Parse an Excel or CSV file and return structured JSON data for document generation.</p>
</li>
<li><p><strong>map_variables</strong> — Auto-map data columns to template variables based on name similarity, with confidence scores.</p>
</li>
<li><p><strong>preview_template</strong> — Generate a free preview of a document with a download URL (valid for 5 minutes). Doesn't consume credits.</p>
</li>
<li><p><strong>generate_document</strong> — Generate a production document (consumes credits). Returns a job ID for status tracking.</p>
</li>
<li><p><strong>get_job_status</strong> — Check the status of a document generation job and get the download URL when complete.</p>
</li>
<li><p><strong>list_assets</strong> — List image assets in your asset library (returns <code>assets://{id}</code> references for use in templates).</p>
</li>
<li><p><strong>upload_asset</strong> — Upload an image (base64 or public URL) to the asset library for use in templates.</p>
</li>
<li><p><strong>get_sdk_examples</strong> — Get SDK code examples and documentation for programmatic integration (Node.js, Python, Java, REST).</p>
</li>
</ol>
<p>These cover the full lifecycle — from workspace/template management, schema authoring and validation, data parsing and variable mapping, to preview, generation, asset management, and developer integration.</p>
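<p>The generate-then-poll pattern implied by <code>generate_document</code> and <code>get_job_status</code> can be sketched as a generic polling loop. This is an illustration only — the <code>getJobStatus</code> function below is a hypothetical stand-in, not the actual Rynko SDK or MCP tool signature:</p>

```typescript
// Generic polling sketch for a job-based generation API.
// `getJobStatus` is a hypothetical stand-in, not the Rynko SDK.
type JobStatus =
  | { state: "pending" }
  | { state: "complete"; downloadUrl: string }
  | { state: "failed"; error: string };

async function waitForJob(
  getJobStatus: (jobId: string) => Promise<JobStatus>,
  jobId: string,
  { intervalMs = 1000, maxAttempts = 30 } = {},
): Promise<string> {
  for (let i = 0; i < maxAttempts; i++) {
    const status = await getJobStatus(jobId);
    if (status.state === "complete") return status.downloadUrl;
    if (status.state === "failed") throw new Error(status.error);
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`Job ${jobId} did not finish in time`);
}
```

<p>An agent does the same thing conversationally: it calls <code>get_job_status</code> until the job reports a download URL, then hands you the link.</p>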
<h3 id="heading-template-creation-and-management">Template Creation and Management</h3>
<p>Your AI agent can create document templates from a natural language description:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770983944369/6d8e147d-2d24-4d34-957b-16e9392a86c3.png" alt="Claude creating an invoice template via Rynko's MCP server — from natural language to production template in seconds." class="image--center mx-auto" /></p>
<p>The agent will create a well-structured template with all the necessary components, variables, and layout using our template schema. Templates created through MCP begin as drafts, ensuring nothing goes to production without your review.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770983997361/0338814b-bd5b-47dc-8687-9f127278741d.png" alt="The result: a fully structured invoice template with variables, layout, and styling — created from a single text prompt." class="image--center mx-auto" /></p>
<h3 id="heading-document-generation">Document Generation</h3>
<p>Once you have a template, generating documents is a single conversation turn:</p>
<pre><code class="lang-plaintext">"Generate a PDF invoice for Acme Corp using the invoice template.
Invoice #INV-2026-042, 3x Consulting Hours at $150/hr,
1x Software License at $500. 8% tax."
</code></pre>
<p>The agent fills in the variables, calls the generation API, and provides a download link. You can also create Excel files from the same template—just ask.</p>
<p><strong>Previews are free.</strong> The agent can generate preview documents without using up your monthly quota, allowing you to refine templates without worrying about costs.</p>
<h3 id="heading-data-import">Data Import</h3>
<p>Have data in a spreadsheet? The agent can parse it and map it to your template:</p>
<pre><code class="lang-plaintext">"Parse this CSV file and map the columns to the invoice template variables"
</code></pre>
<p>The <code>parse_data_file</code> tool reads Excel and CSV files into structured JSON, and <code>map_variables</code> then matches the column headers to your template's variable names automatically.</p>
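<p>Fuzzy column-to-variable matching of this kind can be sketched as a normalized-name comparison. This is a simplified illustration of the idea, not Rynko's actual matching algorithm:</p>

```typescript
// Hypothetical sketch of fuzzy column-to-variable matching with
// confidence scores — not Rynko's actual implementation.
type Mapping = { column: string; variable: string; confidence: number };

// Normalize "Invoice Number", "invoice_number", "invoiceNumber" → "invoicenumber".
const normalize = (s: string) => s.toLowerCase().replace(/[^a-z0-9]/g, "");

function mapVariables(columns: string[], variables: string[]): Mapping[] {
  return columns.flatMap((column) => {
    const nc = normalize(column);
    let best: Mapping | null = null;
    for (const variable of variables) {
      const nv = normalize(variable);
      // Exact normalized match scores 1.0; substring overlap scores lower.
      const confidence =
        nc === nv ? 1.0 : nc.includes(nv) || nv.includes(nc) ? 0.7 : 0;
      if (confidence > 0 && (!best || confidence > best.confidence)) {
        best = { column, variable, confidence };
      }
    }
    return best ? [best] : [];
  });
}

const result = mapVariables(
  ["Invoice Number", "Customer", "Line Total"],
  ["invoiceNumber", "customerName", "total"],
);
// "Invoice Number" maps to invoiceNumber at 1.0; the others map at 0.7
```

<p>The real tool reports its confidence scores back to the agent, so low-confidence mappings can be confirmed with you before generation.</p>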
<h3 id="heading-asset-management">Asset Management</h3>
<p>Templates often need images, such as logos, signatures, and icons. The agent can upload and manage these assets:</p>
<pre><code class="lang-plaintext">"Upload our company logo for use in templates"
</code></pre>
<p>Assets are stored in your team's library and can be used in all templates.</p>
<h2 id="heading-setup-just-2-minutes-no-coding-needed">Setup: Just 2 Minutes, No Coding Needed</h2>
<h3 id="heading-step-1-acquire-a-personal-access-token">Step 1: Acquire a Personal Access Token</h3>
<ol>
<li><p>Log into <a target="_blank" href="http://app.rynko.dev">app.rynko.dev</a></p>
</li>
<li><p>Go to <strong>Settings</strong> &gt; <strong>Personal Access Tokens</strong></p>
</li>
<li><p>Create a new token (expires in 30 days for security)</p>
</li>
</ol>
<h3 id="heading-step-2-integrate-with-your-ai-tool">Step 2: Integrate with Your AI Tool</h3>
<p><strong>Claude Desktop</strong> — download the <code>.mcpb</code> extension from our <a target="_blank" href="https://github.com/rynko-dev/mcp-server/releases">GitHub releases</a> and drag it into <strong>Settings &gt; Extensions</strong>. Enter your token when prompted. That's it.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770984028530/18d665a2-d4b5-459b-a407-550eb1d8b3e6.png" alt="Rynko installed as a Claude Desktop extension — one-click setup." class="image--center mx-auto" /></p>
<p>Or, if you prefer manual config, edit <code>claude_desktop_config.json</code>:</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"mcpServers"</span>: {
    <span class="hljs-attr">"rynko"</span>: {
      <span class="hljs-attr">"command"</span>: <span class="hljs-string">"npx"</span>,
      <span class="hljs-attr">"args"</span>: [<span class="hljs-string">"-y"</span>, <span class="hljs-string">"@rynko/mcp-server"</span>],
      <span class="hljs-attr">"env"</span>: {
        <span class="hljs-attr">"RYNKO_USER_TOKEN"</span>: <span class="hljs-string">"pat_your_token_here"</span>
      }
    }
  }
}
</code></pre>
<p><strong>Cursor</strong> — Open <strong>Settings</strong> (Cmd+, / Ctrl+,) → <strong>Features</strong> → <strong>MCP</strong> → <strong>Add New MCP Server</strong>. Enter the name <code>rynko</code>, select the type <strong>command</strong>, enter the command <code>npx -y @rynko/mcp-server</code>, and add the environment variable <code>RYNKO_USER_TOKEN</code> with your token. Save and restart.</p>
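<p>If you prefer editing a config file, recent Cursor versions also read an <code>mcp.json</code> (global <code>~/.cursor/mcp.json</code> or per-project <code>.cursor/mcp.json</code> — the exact path may vary by Cursor version) with the same shape as the Claude Desktop config above:</p>

```json
{
  "mcpServers": {
    "rynko": {
      "command": "npx",
      "args": ["-y", "@rynko/mcp-server"],
      "env": {
        "RYNKO_USER_TOKEN": "pat_your_token_here"
      }
    }
  }
}
```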
<p>After the restart, the document generation tools are available immediately.</p>
<p>We also support <strong>Windsurf</strong>, <strong>VS Code</strong>, and <strong>Zed</strong> — see our <a target="_blank" href="https://docs.rynko.dev/integrations/mcp-integration">MCP integration guide</a> for setup instructions.</p>
<h2 id="heading-why-choose-mcp-over-a-standard-api">Why Choose MCP Over a Standard API?</h2>
<p>You might wonder — why build an MCP server when we already have a REST API?</p>
<p>The answer is <strong>context and intent</strong>.</p>
<p>When a developer integrates our API, they write code that knows exactly what template to use, what variables to pass, and what format to generate. The logic is hardcoded.</p>
<p>When an AI agent uses our MCP server, it can <strong>reason about the problem</strong>. It can:</p>
<ul>
<li><p>Browse available templates and pick the right one based on name, description, and format</p>
</li>
<li><p>Inspect a template's variables — see what fields exist, what types they expect, what defaults are set — and map incoming data to them (we even have a <code>map_variables</code> tool that does fuzzy column-to-variable matching)</p>
</li>
<li><p>Generate a free preview first, check if the output looks right, and regenerate with tweaked data if needed</p>
</li>
<li><p>Create an entirely new draft template from scratch if none of the existing ones fit</p>
</li>
</ul>
<p>The difference is that the API caller needs to know exactly what to do upfront. The MCP caller can explore, inspect, and iterate — more like a junior developer with access to your template library than a script executing a fixed pipeline.</p>
<h2 id="heading-real-world-workflows">Real-World Workflows</h2>
<p>Here are workflows where MCP actually makes sense — cases where a human is in the loop and the AI is doing the legwork interactively:</p>
<h3 id="heading-ad-hoc-invoice-generation">Ad-Hoc Invoice Generation</h3>
<p>You're in a chat with Claude and a client just confirmed a project scope over email. You say: <em>"Generate an invoice for Acme Corp — 3 months of consulting at $15k/month, net-30 terms."</em> The agent lists your templates, picks your invoice template, inspects its variables, fills them in, generates a preview, and gives you a download link. You review it, ask for a tweak ("add a 10% early payment discount line"), and the agent regenerates. Two minutes, no context switching to a dashboard.</p>
<h3 id="heading-on-demand-reporting">On-Demand Reporting</h3>
<p>You ask Claude: <em>"Generate this month's sales report."</em> The agent queries your database (via a separate DB tool or MCP server you've configured), structures the data to match your report template's variables, and generates a PDF. This works well when the template already exists and the variables are straightforward — but you need the data source wired up separately. The MCP server handles the document generation part, not the data fetching.</p>
<h3 id="heading-template-prototyping">Template Prototyping</h3>
<p>You're building a new onboarding packet and don't want to spend 30 minutes in the visual designer for the first draft. You describe what you need: <em>"Create a welcome letter template with company name, employee name, start date, and a list of first-week tasks."</em> The agent uses <code>create_draft_template</code> to build a structured template, generates a preview with sample data, and you iterate from there. Once you're happy with the structure, you open it in the visual designer to polish the layout and publish.</p>
<h2 id="heading-secure-by-design">Secure by Design</h2>
<p>We prioritized security when building the MCP server:</p>
<ul>
<li><p><strong>Personal Access Tokens</strong> expire after 30 days and are stored as SHA-256 hashes</p>
</li>
<li><p><strong>Draft-only mode</strong> — templates created via MCP are always drafts until manually published</p>
</li>
<li><p><strong>Workspace isolation</strong> — the MCP server enforces workspace selection at session start. The AI must list your workspaces, let you pick one, and lock to it before any template or document operations work. Switching workspaces requires explicit re-selection.</p>
</li>
<li><p><strong>Audit trail</strong> — every MCP operation is logged in your team's activity feed</p>
</li>
<li><p><strong>Token revocation</strong> — instantly revoke any token from the dashboard</p>
</li>
</ul>
<h2 id="heading-getting-started">Getting Started</h2>
<ol>
<li><p><a target="_blank" href="https://app.rynko.dev/signup">Sign up for free</a> — You'll get 5,000 credits with every new account!</p>
</li>
<li><p><a target="_blank" href="https://www.rynko.dev/mcp">Install the MCP server</a> — It only takes 2 minutes to set up for Claude, Cursor, or any MCP client.</p>
</li>
<li><p>Ask your agent: <em>"List my Rynko templates"</em> — If it works, you're all set and connected!</p>
</li>
</ol>
<p>We created this because we believe the future of document generation should be as easy as having a conversation. Instead of learning another API, you just tell your AI what you need.</p>
<p><a target="_blank" href="https://app.rynko.dev/signup">Try It Free</a> | <a target="_blank" href="https://www.rynko.dev/mcp">MCP Setup Guide</a> | <a target="_blank" href="https://docs.rynko.dev">Documentation</a></p>
<hr />
<p><em>Questions? Join our</em> <a target="_blank" href="https://discord.gg/d8cU2MG6"><em>Discord</em></a> <em>or check the</em> <a target="_blank" href="https://www.npmjs.com/package/@rynko/mcp-server"><em>npm package</em></a><em>.</em></p>
<p><em><sub>Disclosure: I ideate and draft content, then refine it with the aid of artificial intelligence tools like Claude and revise it to reflect my intended message.</sub></em></p>
]]></content:encoded></item><item><title><![CDATA[Design Once, Generate Anywhere: How Our Unified Template System Works]]></title><description><![CDATA[Most document generation tools make you choose: build a PDF template or an Excel template. Need both formats? Build two templates, maintain two schemas, handle two sets of bugs.
We thought that was unnecessary. So, we built a unified template system ...]]></description><link>https://blog.rynko.dev/unified-template-system-pdf-excel-architecture</link><guid isPermaLink="true">https://blog.rynko.dev/unified-template-system-pdf-excel-architecture</guid><category><![CDATA[software architecture]]></category><category><![CDATA[Node.js]]></category><category><![CDATA[System Design]]></category><category><![CDATA[Web Development]]></category><category><![CDATA[rynko]]></category><dc:creator><![CDATA[Rynko Dev]]></dc:creator><pubDate>Mon, 16 Feb 2026 06:30:09 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1770989351552/0ae38972-bc8a-466a-9bab-27aaf0182b8b.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<hr />
<p>Most document generation tools make you choose: build a PDF template <em>or</em> an Excel template. Need both formats? Build two templates, maintain two schemas, handle two sets of bugs.</p>
<p>We thought that was unnecessary. So we built a unified template system where you design once and generate in either format. This post explains how it works under the hood.</p>
<h2 id="heading-the-problem-with-separate-templates">The Problem with Separate Templates</h2>
<p>Consider a typical invoice. Your sales team wants PDF invoices for customers. Your finance team wants the same data as Excel for their accounting software. Your ops team wants Excel reports they can sort and filter.</p>
<p>With traditional tools, you'd maintain:</p>
<ul>
<li><p>An HTML template for the PDF invoice</p>
</li>
<li><p>A separate Excel template (or code that builds spreadsheets)</p>
</li>
<li><p>Logic to keep both templates in sync when the invoice format changes</p>
</li>
</ul>
<p>When someone adds a "discount" field, you update the PDF template, then remember to update the Excel template too. Inevitably, they drift apart.</p>
<h2 id="heading-the-rynko-approach">The Rynko Approach</h2>
<p>In Rynko, a template is a JSON document that describes the <strong>structure and data</strong> of your document, not the final visual output. The renderers — one for PDF, one for Excel — interpret that structure for their respective formats.</p>
<p>Here's a simplified view of an invoice template:</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"name"</span>: <span class="hljs-string">"Invoice"</span>,
  <span class="hljs-attr">"format"</span>: <span class="hljs-string">"both"</span>,
  <span class="hljs-attr">"variables"</span>: [
    { <span class="hljs-attr">"name"</span>: <span class="hljs-string">"invoiceNumber"</span>, <span class="hljs-attr">"type"</span>: <span class="hljs-string">"string"</span> },
    { <span class="hljs-attr">"name"</span>: <span class="hljs-string">"customerName"</span>, <span class="hljs-attr">"type"</span>: <span class="hljs-string">"string"</span> },
    { <span class="hljs-attr">"name"</span>: <span class="hljs-string">"items"</span>, <span class="hljs-attr">"type"</span>: <span class="hljs-string">"array"</span>, <span class="hljs-attr">"itemType"</span>: <span class="hljs-string">"object"</span>,
      <span class="hljs-attr">"schema"</span>: {
        <span class="hljs-attr">"itemSchema"</span>: {
          <span class="hljs-attr">"properties"</span>: {
            <span class="hljs-attr">"description"</span>: { <span class="hljs-attr">"type"</span>: <span class="hljs-string">"string"</span> },
            <span class="hljs-attr">"quantity"</span>: { <span class="hljs-attr">"type"</span>: <span class="hljs-string">"number"</span> },
            <span class="hljs-attr">"price"</span>: { <span class="hljs-attr">"type"</span>: <span class="hljs-string">"number"</span> }
          }
        }
      }
    },
    { <span class="hljs-attr">"name"</span>: <span class="hljs-string">"total"</span>, <span class="hljs-attr">"type"</span>: <span class="hljs-string">"number"</span> }
  ],
  <span class="hljs-attr">"sections"</span>: [
    {
      <span class="hljs-attr">"components"</span>: [
        { <span class="hljs-attr">"type"</span>: <span class="hljs-string">"heading"</span>, <span class="hljs-attr">"props"</span>: { <span class="hljs-attr">"text"</span>: <span class="hljs-string">"Invoice #{{invoiceNumber}}"</span> } },
        { <span class="hljs-attr">"type"</span>: <span class="hljs-string">"text"</span>, <span class="hljs-attr">"props"</span>: { <span class="hljs-attr">"text"</span>: <span class="hljs-string">"Bill to: {{customerName}}"</span> } },
        {
          <span class="hljs-attr">"type"</span>: <span class="hljs-string">"dataTable"</span>,
          <span class="hljs-attr">"props"</span>: {
            <span class="hljs-attr">"dataSource"</span>: <span class="hljs-string">"{{items}}"</span>,
            <span class="hljs-attr">"columns"</span>: [
              { <span class="hljs-attr">"header"</span>: <span class="hljs-string">"Description"</span>, <span class="hljs-attr">"field"</span>: <span class="hljs-string">"description"</span> },
              { <span class="hljs-attr">"header"</span>: <span class="hljs-string">"Qty"</span>, <span class="hljs-attr">"field"</span>: <span class="hljs-string">"quantity"</span> },
              { <span class="hljs-attr">"header"</span>: <span class="hljs-string">"Price"</span>, <span class="hljs-attr">"field"</span>: <span class="hljs-string">"price"</span>, <span class="hljs-attr">"format"</span>: <span class="hljs-string">"currency"</span> }
            ]
          }
        },
        { <span class="hljs-attr">"type"</span>: <span class="hljs-string">"text"</span>, <span class="hljs-attr">"props"</span>: { <span class="hljs-attr">"text"</span>: <span class="hljs-string">"Total: ${{total}}"</span>, <span class="hljs-attr">"bold"</span>: <span class="hljs-literal">true</span> } }
      ]
    }
  ]
}
</code></pre>
<p>This single template produces both a formatted PDF and a structured Excel workbook. The same variable data feeds both.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1771349251907/927e2f5f-813a-4b35-a6e2-42c3f6cd0db8.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-how-the-rendering-works">How the Rendering Works</h2>
<h3 id="heading-the-layout-engine-yoga">The Layout Engine: Yoga</h3>
<p>At the core of our <strong>PDF and Designer</strong> pipeline is <a target="_blank" href="https://yogalayout.dev/">Yoga</a> — the same flexbox layout engine that powers React Native. Every component in a template becomes a Yoga node with flex properties:</p>
<pre><code class="lang-plaintext">Template Section
├── Heading node (flex: row, alignItems: center)
├── Text node (marginBottom: 10)
├── DataTable node
│   ├── Header row (flex: row, backgroundColor: #f5f5f5)
│   └── Data rows (flex: row, borderBottom: 1px)
└── Total text node (flex: row, justifyContent: flex-end)
</code></pre>
<p>Yoga calculates the exact position and size of every element. This gives us pixel-perfect PDF layout and live preview in the designer — without parsing HTML or running a browser.</p>
<p>For <strong>Excel</strong>, we bypass the pixel layout engine entirely and map the component tree directly to spreadsheet rows and columns. Yoga deals in X/Y coordinates; Excel deals in rows and cells — so each renderer interprets the same component tree in the way that's native to its format.</p>
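<p>The row-and-cell mapping can be sketched as a cursor walking the component tree, with each component claiming one or more rows. This is a simplified illustration of the idea, not the production mapper:</p>

```typescript
// Simplified sketch of mapping an abstract component tree to
// spreadsheet cells — illustrative only, not Rynko's production mapper.
type Component =
  | { type: "heading"; text: string }
  | { type: "text"; text: string }
  | { type: "dataTable"; headers: string[]; rows: string[][] };

type Cell = { row: number; col: number; value: string; bold?: boolean };

function mapToGrid(components: Component[]): Cell[] {
  const cells: Cell[] = [];
  let cursor = 1; // current row, 1-based like spreadsheets
  for (const c of components) {
    switch (c.type) {
      case "heading":
        cells.push({ row: cursor++, col: 1, value: c.text, bold: true });
        break;
      case "text":
        cells.push({ row: cursor++, col: 1, value: c.text });
        break;
      case "dataTable":
        // Header row first, then one row per data record.
        c.headers.forEach((h, i) =>
          cells.push({ row: cursor, col: i + 1, value: h, bold: true }),
        );
        cursor++;
        for (const r of c.rows) {
          r.forEach((v, i) => cells.push({ row: cursor, col: i + 1, value: v }));
          cursor++;
        }
        break;
    }
  }
  return cells;
}

const grid = mapToGrid([
  { type: "heading", text: "Invoice #INV-001" },
  { type: "dataTable", headers: ["Qty", "Price"], rows: [["10", "150"]] },
]);
```

<p>The point is that no pixel coordinates are involved: the Excel renderer thinks in rows and columns from the start, which is why the output keeps native spreadsheet semantics.</p>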
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770996380159/6b161243-73f5-4db5-8ba7-27d6c412b655.png" alt="The architecture fork: JSON Template → Abstract Component Tree → Yoga/PDFKit for PDF, Grid Mapper/ExcelJS for Excel. We don't convert HTML to Excel — we render from a shared abstract source." class="image--center mx-auto" /></p>
<h3 id="heading-pdf-renderer">PDF Renderer</h3>
<p>The PDF renderer takes the Yoga layout tree and draws directly to PDFKit:</p>
<ol>
<li><p><strong>Layout pass</strong>: Yoga calculates positions for all nodes</p>
</li>
<li><p><strong>Render pass</strong>: Each component type has a renderer that draws to PDFKit</p>
<ul>
<li><p><code>heading</code> → <code>doc.fontSize(24).text(...)</code></p>
</li>
<li><p><code>dataTable</code> → Table drawn with lines, cells, formatting</p>
</li>
<li><p><code>image</code> → <code>doc.image(...)</code> with proper scaling</p>
</li>
<li><p><code>chart</code> → Rendered to canvas, then embedded as image</p>
</li>
</ul>
</li>
<li><p><strong>Pagination</strong>: When content exceeds page height, automatic page breaks with header/footer repetition</p>
</li>
<li><p><strong>Output</strong>: Native PDF file, no browser involved</p>
</li>
</ol>
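<p>The pagination step can be sketched as a pure function over the heights Yoga measured for each top-level node. This is a toy illustration; the real renderer also handles header/footer repetition and nodes taller than a page:</p>

```typescript
// Illustrative page-break calculation: assign nodes to pages based on
// their measured heights. Not the production pagination code.
function paginate(nodeHeights: number[], pageHeight: number): number[][] {
  const pages: number[][] = [[]];
  let used = 0;
  nodeHeights.forEach((height, index) => {
    // Start a new page when the node would overflow the current one.
    if (used + height > pageHeight && pages[pages.length - 1].length > 0) {
      pages.push([]);
      used = 0;
    }
    pages[pages.length - 1].push(index);
    used += height;
  });
  return pages;
}

const pages = paginate([300, 300, 300, 200], 700);
// → [[0, 1], [2, 3]]: the third node overflows page 1 and starts page 2
```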
<p><strong>Result</strong>: 200-500ms generation, ~50MB memory.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770990574049/9d74cf48-524b-4c9e-ac0b-32a7f5badb46.png" alt="The rendering fork: same Yoga layout tree feeds both the PDFKit renderer and ExcelJS renderer in parallel." class="image--center mx-auto" /></p>
<h3 id="heading-excel-renderer">Excel Renderer</h3>
<p>The Excel renderer interprets the same template for spreadsheet format:</p>
<ol>
<li><p><strong>Sheet mapping</strong>: Each template section can map to an Excel sheet</p>
</li>
<li><p><strong>Component translation</strong>:</p>
<ul>
<li><p><code>heading</code> → Merged cells with bold formatting</p>
</li>
<li><p><code>text</code> → Cell with text and formatting</p>
</li>
<li><p><code>dataTable</code> → Excel table with headers, data rows, and auto-filters</p>
</li>
<li><p><code>chart</code> → Native Excel chart object</p>
</li>
<li><p><code>image</code> → Embedded image in cell</p>
</li>
</ul>
</li>
<li><p><strong>Formula support</strong>: Define native Excel formulas in your template for cells that should calculate in the spreadsheet</p>
</li>
<li><p><strong>Formatting</strong>: Fonts, colors, borders, number formats translate to Excel styles</p>
</li>
</ol>
<p><strong>Result</strong>: A proper <code>.xlsx</code> file with native Excel features — not a CSV, not a screenshot of a table.</p>
<h2 id="heading-component-by-component-translation">Component-by-Component Translation</h2>
<p>All 29 component types and how they map across formats:</p>
<h3 id="heading-content-components">Content Components</h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Component</td><td>PDF Rendering</td><td>Excel Rendering</td></tr>
</thead>
<tbody>
<tr>
<td><strong>Text</strong></td><td>Positioned text with font styling</td><td>Cell with formatted text</td></tr>
<tr>
<td><strong>Rich Text</strong></td><td>Multiple styles per line (bold, italic, links)</td><td>Rich text cell with formatting runs</td></tr>
<tr>
<td><strong>Heading</strong></td><td>Large text with configurable level (h1-h6)</td><td>Merged cells with bold/large font</td></tr>
<tr>
<td><strong>Title</strong></td><td>Large heading variant with emphasis</td><td>Merged cells with prominent styling</td></tr>
<tr>
<td><strong>Image</strong></td><td>Embedded image with scaling</td><td>Embedded image anchored to cell</td></tr>
<tr>
<td><strong>List</strong></td><td>Bulleted/numbered list items</td><td>Rows with indent and bullet/number prefix</td></tr>
<tr>
<td><strong>Divider</strong></td><td>Horizontal line (solid/dashed/dotted)</td><td>Bottom border on cells</td></tr>
<tr>
<td><strong>Spacer</strong></td><td>Empty space with configurable height</td><td>Empty row(s)</td></tr>
<tr>
<td><strong>SVG</strong></td><td>Rendered as embedded image</td><td>Rendered as embedded image</td></tr>
<tr>
<td><strong>QR Code</strong></td><td>Rendered as embedded image</td><td>Rendered as embedded image</td></tr>
<tr>
<td><strong>Barcode</strong></td><td>Rendered as embedded image (10 formats)</td><td>Rendered as embedded image</td></tr>
</tbody>
</table>
</div><h3 id="heading-layout-components">Layout Components</h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Component</td><td>PDF Rendering</td><td>Excel Rendering</td></tr>
</thead>
<tbody>
<tr>
<td><strong>Container</strong></td><td>Wrapper with background, border, padding</td><td>Styled cell region</td></tr>
<tr>
<td><strong>Columns</strong></td><td>Flex layout side-by-side</td><td>Adjacent cells</td></tr>
<tr>
<td><strong>Table Layout</strong></td><td>Grid with cell positioning and spans</td><td>Cells with colspan/rowspan</td></tr>
<tr>
<td><strong>Conditional</strong></td><td>Show/hide based on expression</td><td>Show/hide based on expression</td></tr>
<tr>
<td><strong>Loop</strong></td><td>Repeated section for each array item</td><td>Repeated rows for each array item</td></tr>
<tr>
<td><strong>Page Break</strong></td><td>New PDF page</td><td>New Excel sheet or row separator</td></tr>
</tbody>
</table>
</div><h3 id="heading-data-amp-visualization-components">Data &amp; Visualization Components</h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Component</td><td>PDF Rendering</td><td>Excel Rendering</td></tr>
</thead>
<tbody>
<tr>
<td><strong>Table</strong></td><td>Drawn table with borders and styling</td><td>Native Excel table with auto-filters</td></tr>
<tr>
<td><strong>Chart</strong></td><td>Rendered as embedded image (8 chart types)</td><td>Native Excel chart object</td></tr>
<tr>
<td><strong>Key-Value</strong></td><td>Label-value pairs with layout options</td><td>Two-column cell pairs</td></tr>
<tr>
<td><strong>Formula</strong></td><td>N/A</td><td>Native Excel formula cell (<code>=SUM(...)</code>)</td></tr>
</tbody>
</table>
</div><h3 id="heading-pdf-form-components-pdf-only">PDF Form Components (PDF only)</h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Component</td><td>PDF Rendering</td></tr>
</thead>
<tbody>
<tr>
<td><strong>Form Text</strong></td><td>Single-line text input field</td></tr>
<tr>
<td><strong>Form Textarea</strong></td><td>Multi-line text input</td></tr>
<tr>
<td><strong>Form Checkbox</strong></td><td>Boolean checkbox with label</td></tr>
<tr>
<td><strong>Form Radio</strong></td><td>Radio button group with options</td></tr>
<tr>
<td><strong>Form Dropdown</strong></td><td>Select dropdown with options</td></tr>
<tr>
<td><strong>Form Date</strong></td><td>Date picker with format configuration</td></tr>
<tr>
<td><strong>Form Signature</strong></td><td>Signature placeholder (text, image, or digital)</td></tr>
<tr>
<td><strong>Form Button</strong></td><td>Interactive button (print, reset, link)</td></tr>
</tbody>
</table>
</div><h3 id="heading-whats-format-specific">What's Format-Specific</h3>
<p>Some features only make sense in one format:</p>
<p><strong>PDF only:</strong></p>
<ul>
<li><p>8 fillable form field types (text, textarea, checkbox, radio, dropdown, date, signature, button)</p>
</li>
<li><p>Custom fonts</p>
</li>
<li><p>Precise Yoga flexbox positioning</p>
</li>
<li><p>Page headers and footers</p>
</li>
</ul>
<p><strong>Excel only:</strong></p>
<ul>
<li><p>Native Excel formulas via Formula component</p>
</li>
<li><p>Auto-filters on data tables</p>
</li>
<li><p>Sortable columns</p>
</li>
<li><p>Cell-level data types (dates as dates, numbers as numbers)</p>
</li>
</ul>
<h2 id="heading-the-variable-system">The Variable System</h2>
<p>Variables are format-agnostic. You define them once, and both renderers use the same data:</p>
<h3 id="heading-scalar-variables">Scalar Variables</h3>
<pre><code class="lang-json">{ <span class="hljs-attr">"name"</span>: <span class="hljs-string">"customerName"</span>, <span class="hljs-attr">"type"</span>: <span class="hljs-string">"string"</span>, <span class="hljs-attr">"defaultValue"</span>: <span class="hljs-string">"John Doe"</span> }
</code></pre>
<p>Works identically in both formats — the value appears wherever <code>{{customerName}}</code> is used.</p>
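<p>The <code>{{variable}}</code> substitution itself can be sketched in a few lines. This is a minimal illustration of the idea, not the production resolver:</p>

```typescript
// Minimal sketch of {{variable}} interpolation — illustrative only.
function interpolate(text: string, vars: Record<string, unknown>): string {
  return text.replace(/\{\{(\w+)\}\}/g, (match, name) =>
    name in vars ? String(vars[name]) : match, // leave unknown placeholders intact
  );
}

const out = interpolate("Bill to: {{customerName}} ({{accountId}})", {
  customerName: "Acme Corp",
});
// out === "Bill to: Acme Corp ({{accountId}})"
```

<p>Because substitution happens on the shared component tree, before either renderer runs, both formats see identical resolved values.</p>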
<h3 id="heading-array-variables-dynamic-tables">Array Variables (Dynamic Tables)</h3>
<pre><code class="lang-json">{
  <span class="hljs-attr">"name"</span>: <span class="hljs-string">"items"</span>,
  <span class="hljs-attr">"type"</span>: <span class="hljs-string">"array"</span>,
  <span class="hljs-attr">"itemType"</span>: <span class="hljs-string">"object"</span>,
  <span class="hljs-attr">"schema"</span>: {
    <span class="hljs-attr">"itemSchema"</span>: {
      <span class="hljs-attr">"properties"</span>: {
        <span class="hljs-attr">"description"</span>: { <span class="hljs-attr">"type"</span>: <span class="hljs-string">"string"</span> },
        <span class="hljs-attr">"quantity"</span>: { <span class="hljs-attr">"type"</span>: <span class="hljs-string">"number"</span> },
        <span class="hljs-attr">"price"</span>: { <span class="hljs-attr">"type"</span>: <span class="hljs-string">"number"</span> }
      }
    }
  }
}
</code></pre>
<ul>
<li><p><strong>PDF</strong>: Renders as a table with one row per array item, automatically paginating</p>
</li>
<li><p><strong>Excel</strong>: Creates data rows with proper column types and formatting</p>
</li>
</ul>
<h3 id="heading-calculated-variables">Calculated Variables</h3>
<pre><code class="lang-json">{ <span class="hljs-attr">"name"</span>: <span class="hljs-string">"subtotal"</span>, <span class="hljs-attr">"type"</span>: <span class="hljs-string">"number"</span>, <span class="hljs-attr">"expression"</span>: <span class="hljs-string">"items.reduce((sum, item) =&gt; sum + item.quantity * item.price, 0)"</span> }
</code></pre>
<ul>
<li><p><strong>PDF</strong>: Evaluated server-side, result rendered as text</p>
</li>
<li><p><strong>Excel</strong>: Evaluated server-side, result placed in the cell as a static value. If you need live Excel formulas (e.g., <code>=SUM(D2:D10)</code>), define them separately in the template's Excel formula configuration</p>
</li>
</ul>
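<p>Server-side evaluation of an expression like this can be sketched with a scoped function. This is a toy illustration, not how Rynko's evaluator is implemented — a production system would run expressions through a sandboxed expression language rather than the <code>Function</code> constructor:</p>

```typescript
// Toy sketch of evaluating a calculated-variable expression against
// the submitted variables. Illustrative only: a real evaluator should
// sandbox expressions instead of using new Function().
function evaluateExpression(
  expression: string,
  vars: Record<string, unknown>,
): unknown {
  const names = Object.keys(vars);
  // Bind each variable name as a parameter, then evaluate the expression body.
  const fn = new Function(...names, `return (${expression});`);
  return fn(...names.map((n) => vars[n]));
}

const subtotal = evaluateExpression(
  "items.reduce((sum, item) => sum + item.quantity * item.price, 0)",
  { items: [{ quantity: 10, price: 150 }, { quantity: 5, price: 99 }] },
);
// subtotal === 1995
```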
<h3 id="heading-system-variables">System Variables</h3>
<p>Both formats support system-provided variables like <code>__CURRENT_DATE__</code>, <code>__COMPANY_NAME__</code>, <code>__TEMPLATE_NAME__</code>, etc. These are resolved identically regardless of output format.</p>
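<p>Conceptually, system variables are just merged into the variable map before rendering. A minimal sketch, assuming user-supplied variables take precedence (the precedence rule and function names here are assumptions for illustration):</p>

```typescript
// Sketch of resolving system variables before rendering — illustrative
// only; the merge order is an assumption, not documented behavior.
function withSystemVariables(
  userVars: Record<string, unknown>,
  context: { companyName: string; templateName: string; now?: Date },
): Record<string, unknown> {
  const now = context.now ?? new Date();
  return {
    __CURRENT_DATE__: now.toISOString().slice(0, 10), // e.g. "2026-02-16"
    __COMPANY_NAME__: context.companyName,
    __TEMPLATE_NAME__: context.templateName,
    ...userVars, // user variables merged last (assumed precedence)
  };
}

const vars = withSystemVariables(
  { customerName: "Acme Corp" },
  {
    companyName: "Rynko",
    templateName: "Invoice",
    now: new Date("2026-02-16T00:00:00Z"),
  },
);
// vars.__CURRENT_DATE__ === "2026-02-16"
```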
<h2 id="heading-why-this-architecture">Why This Architecture?</h2>
<p>We considered several approaches before settling on the Yoga-based unified template:</p>
<h3 id="heading-rejected-html-as-the-source">Rejected: HTML as the Source</h3>
<p>We could have used HTML templates and converted to both PDF and Excel. But:</p>
<ul>
<li><p>HTML-to-Excel conversion is lossy — you can't preserve semantic data</p>
</li>
<li><p>Tables in HTML become images or flat text in Excel, losing sort/filter capability</p>
</li>
<li><p>Browser-based rendering is slow and resource-heavy</p>
</li>
</ul>
<h3 id="heading-rejected-separate-templates-with-shared-schema">Rejected: Separate Templates with Shared Schema</h3>
<p>We could have had separate PDF and Excel templates that share the same variables. But:</p>
<ul>
<li><p>Two templates to maintain = two places for bugs</p>
</li>
<li><p>Templates inevitably drift apart</p>
</li>
<li><p>More work for template designers</p>
</li>
</ul>
<h3 id="heading-chosen-abstract-component-tree">Chosen: Abstract Component Tree</h3>
<p>By defining templates as an abstract component tree (not HTML, not Excel XML), each renderer can interpret components optimally for its format:</p>
<ul>
<li><p>The PDF renderer uses Yoga for precise layout</p>
</li>
<li><p>The Excel renderer uses native Excel features (real tables, real formulas)</p>
</li>
<li><p>Both consume the same template and variables</p>
</li>
</ul>
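<p>To make that concrete, a node in the component tree might look roughly like this. This is a hypothetical shape for illustration only; the real schema is documented in the Template Schema Reference:</p>

```json
{
  "type": "table",
  "dataSource": "items",
  "columns": [
    { "header": "Description", "field": "description" },
    { "header": "Quantity", "field": "quantity" },
    { "header": "Price", "field": "price" }
  ]
}
```

<p>The PDF renderer lays a node like this out with Yoga; the Excel renderer emits it as a native, sortable Excel table. Neither format is the source of truth; the tree is.</p>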
<h2 id="heading-try-it-yourself">Try It Yourself</h2>
<p>You can see this in action in under 5 minutes:</p>
<ol>
<li><p><a target="_blank" href="https://app.rynko.dev/signup">Sign up free</a> at Rynko</p>
</li>
<li><p>Create a template in the visual designer</p>
</li>
<li><p>Click "Preview" and toggle between PDF and Excel output</p>
</li>
<li><p>Send the same data via API and get both formats</p>
</li>
</ol>
<p>Here's the dual-format payoff in code — same template, same variables, two API calls:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { Rynko } <span class="hljs-keyword">from</span> <span class="hljs-string">'@rynko/sdk'</span>;

<span class="hljs-keyword">const</span> rynko = <span class="hljs-keyword">new</span> Rynko({ apiKey: process.env.RYNKO_API_KEY! });

<span class="hljs-keyword">const</span> invoiceData = {
  templateId: <span class="hljs-string">'invoice'</span>,
  variables: {
    invoiceNumber: <span class="hljs-string">'INV-2026-001'</span>,
    customerName: <span class="hljs-string">'Acme Corp'</span>,
    items: [
      { description: <span class="hljs-string">'Consulting'</span>, quantity: <span class="hljs-number">10</span>, price: <span class="hljs-number">150.00</span> },
      { description: <span class="hljs-string">'Software License'</span>, quantity: <span class="hljs-number">5</span>, price: <span class="hljs-number">99.00</span> },
    ],
    total: <span class="hljs-number">1995.00</span>,
  },
};

<span class="hljs-comment">// PDF for the customer</span>
<span class="hljs-keyword">const</span> pdf = <span class="hljs-keyword">await</span> rynko.documents.generatePdf(invoiceData);

<span class="hljs-comment">// Excel for the finance team — same template, same data</span>
<span class="hljs-keyword">const</span> excel = <span class="hljs-keyword">await</span> rynko.documents.generateExcel(invoiceData);
</code></pre>
<p>Or, if you're using Claude or Cursor, ask the AI to create a template and generate both formats:</p>
<pre><code class="lang-plaintext">Create an invoice template, then generate it as both PDF and Excel
with sample data for 3 line items
</code></pre>
<p>The same JSON data, the same template, two perfectly formatted documents.</p>
<p><strong>New to Rynko?</strong> Follow our <a target="_blank" href="https://blog.rynko.dev/getting-started-in-5-minutes">Getting Started in 5 Minutes</a> guide. Evaluating alternatives? See the <a target="_blank" href="https://blog.rynko.dev/pdf-generation-api-comparison">PDF Generation API Comparison</a>.</p>
<p><a target="_blank" href="https://app.rynko.dev/signup">Get Started Free</a> | <a target="_blank" href="https://docs.rynko.dev/developer-guide/template-schema">Template Schema Reference</a> | <a target="_blank" href="https://www.rynko.dev/engine">Rendering Engine</a></p>
<hr />
<p><em>Questions about the rendering architecture? Join our</em> <a target="_blank" href="https://discord.gg/d8cU2MG6"><em>Discord</em></a> <em>or check the</em> <a target="_blank" href="https://www.rynko.dev/engine"><em>engine page</em></a> <em>for the full breakdown.</em></p>
<hr />
<p><em><sub>Disclosure: I ideate and draft content, then refine it with the aid of artificial intelligence tools like Claude and revise it to reflect my intended message.</sub></em></p>
]]></content:encoded></item><item><title><![CDATA[Getting Started with Rynko in 5 Minutes]]></title><description><![CDATA[This guide walks you through generating your first document with Rynko. By the end, you'll have a working PDF generated from a template via our API.
Step 1: Create Your Account (30 seconds)
Head to app.rynko.dev/signup and create a free account. No c...]]></description><link>https://blog.rynko.dev/getting-started-with-rynko-in-5-minutes</link><guid isPermaLink="true">https://blog.rynko.dev/getting-started-with-rynko-in-5-minutes</guid><category><![CDATA[JavaScript]]></category><category><![CDATA[TypeScript]]></category><category><![CDATA[api]]></category><category><![CDATA[rynko]]></category><category><![CDATA[SaaS]]></category><category><![CDATA[Document Generation]]></category><category><![CDATA[Python]]></category><dc:creator><![CDATA[Rynko Dev]]></dc:creator><pubDate>Sat, 14 Feb 2026 18:30:31 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1770984831670/3bb0b1af-282d-4e8f-b1af-0f85ca12f436.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<hr />
<p>This guide walks you through generating your first document with Rynko. By the end, you'll have a working PDF generated from a template via our API.</p>
<h2 id="heading-step-1-create-your-account-30-seconds">Step 1: Create Your Account (30 seconds)</h2>
<p>Head to <a target="_blank" href="https://app.rynko.dev/signup">app.rynko.dev/signup</a> and create a free account. No credit card required.</p>
<p>Every new account comes with <strong>5,000 free document credits</strong> as part of our Founder's Preview — more than enough to build and test.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770984914973/038483a1-042d-4048-8dca-60c415c5ac7e.png" alt="Browse pre-built templates or create your own from scratch." class="image--center mx-auto" /></p>
<h2 id="heading-step-2-create-a-template-2-minutes">Step 2: Create a Template (2 minutes)</h2>
<p>Once you're in the dashboard, click <strong>New Template</strong> to open the visual designer.</p>
<p>You have three options:</p>
<h3 id="heading-option-a-use-a-pre-built-template">Option A: Use a Pre-Built Template</h3>
<p>Browse the template gallery and pick one that's close to what you need — invoices, reports, certificates, and more. Customize it in the designer.</p>
<h3 id="heading-option-b-design-from-scratch">Option B: Design from Scratch</h3>
<p>The drag-and-drop designer supports 28 component types. Drag in text, tables, images, charts, QR codes, and more. Set up variables using <code>{{variableName}}</code> syntax in any text field.</p>
<h3 id="heading-option-c-let-ai-create-it-mcp">Option C: Let AI Create It (MCP)</h3>
<p>If you're using Claude Desktop or Cursor with our <a target="_blank" href="https://www.rynko.dev/mcp">MCP server</a>, just describe what you need:</p>
<pre><code class="lang-plaintext">"Create an invoice template with company logo, customer details,
a line items table, and totals with tax calculation"
</code></pre>
<p>For this guide, let's create a simple invoice. Add these variables to your template:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Variable</td><td>Type</td><td>Description</td></tr>
</thead>
<tbody>
<tr>
<td><code>invoiceNumber</code></td><td>string</td><td>Invoice identifier (e.g., "INV-2026-001")</td></tr>
<tr>
<td><code>clientName</code></td><td>string</td><td>Customer name</td></tr>
<tr>
<td><code>clientEmail</code></td><td>string</td><td>Customer email</td></tr>
<tr>
<td><code>lineItems</code></td><td>array</td><td>Line items with <code>description</code>, <code>quantity</code>, <code>price</code></td></tr>
<tr>
<td><code>subtotal</code></td><td>number</td><td>Subtotal amount</td></tr>
<tr>
<td><code>tax</code></td><td>number</td><td>Tax amount</td></tr>
<tr>
<td><code>total</code></td><td>number</td><td>Total amount</td></tr>
</tbody>
</table>
</div><p>Click <strong>Publish</strong> when you're happy with the design.</p>
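<p>The three numeric variables are derived from <code>lineItems</code>. If you compute them client-side before calling the API, the arithmetic looks like this (a sketch; the 10% rate matches this guide's sample data, yours will differ):</p>

```typescript
interface LineItem { description: string; quantity: number; price: number; }

const lineItems: LineItem[] = [
  { description: 'Technical Consulting', quantity: 2, price: 150.0 },
  { description: 'Software License', quantity: 1, price: 500.0 },
];

const TAX_RATE = 0.10; // the rate implied by this guide's sample numbers

const subtotal = lineItems.reduce((sum, i) => sum + i.quantity * i.price, 0);
const tax = Math.round(subtotal * TAX_RATE * 100) / 100; // round to cents
const total = subtotal + tax;

console.log(subtotal, tax, total); // 800 80 880
```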
<blockquote>
<p><strong>Tip:</strong> Use the <strong>Preview</strong> button to test your template before publishing. Previews are free and don't consume credits.</p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770984971551/27f02adf-432c-4cb3-a78b-5493910ed1d9.png" alt="The visual template designer with a working invoice template — variables, tables, calculated fields, and live preview." class="image--center mx-auto" /></p>
<h2 id="heading-step-3-get-your-api-key-30-seconds">Step 3: Get Your API Key (30 seconds)</h2>
<p>Go to <strong>Settings</strong> &gt; <strong>API Keys</strong> and create a new key. Copy it — you'll need it in the next step.</p>
<p>Your API key starts with your team ID and gives programmatic access to your templates and document generation.</p>
<h2 id="heading-step-4-generate-a-document-2-minutes">Step 4: Generate a Document (2 minutes)</h2>
<p>Pick your language:</p>
<p>You can also test document generation without code using the <strong>API Playground</strong> in your dashboard — paste in variables and generate a document directly from the browser.</p>
<h3 id="heading-nodejs">Node.js</h3>
<pre><code class="lang-bash">npm install @rynko/sdk
</code></pre>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { Rynko } <span class="hljs-keyword">from</span> <span class="hljs-string">'@rynko/sdk'</span>;

<span class="hljs-keyword">const</span> rynko = <span class="hljs-keyword">new</span> Rynko({
  apiKey: process.env.RYNKO_API_KEY!,
});

<span class="hljs-comment">// Generate a PDF</span>
<span class="hljs-keyword">const</span> job = <span class="hljs-keyword">await</span> rynko.documents.generatePdf({
  templateId: <span class="hljs-string">'invoice'</span>, <span class="hljs-comment">// Your template slug, shortId, or UUID</span>
  variables: {
    invoiceNumber: <span class="hljs-string">'INV-2026-001'</span>,
    clientName: <span class="hljs-string">'Acme Technologies Pvt. Ltd.'</span>,
    clientEmail: <span class="hljs-string">'accounts@acmetech.com'</span>,
    lineItems: [
      { description: <span class="hljs-string">'Technical Consulting'</span>, quantity: <span class="hljs-number">2</span>, price: <span class="hljs-number">150.00</span> },
      { description: <span class="hljs-string">'Software License'</span>, quantity: <span class="hljs-number">1</span>, price: <span class="hljs-number">500.00</span> },
    ],
    subtotal: <span class="hljs-number">800.00</span>,
    tax: <span class="hljs-number">80.00</span>,
    total: <span class="hljs-number">880.00</span>,
  },
});

<span class="hljs-built_in">console</span>.log(<span class="hljs-string">'Job queued:'</span>, job.jobId);

<span class="hljs-comment">// Wait for completion</span>
<span class="hljs-keyword">const</span> completed = <span class="hljs-keyword">await</span> rynko.documents.waitForCompletion(job.jobId);
<span class="hljs-built_in">console</span>.log(<span class="hljs-string">'Download URL:'</span>, completed.downloadUrl);
</code></pre>
<h3 id="heading-python">Python</h3>
<pre><code class="lang-bash">pip install rynko
</code></pre>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> os
<span class="hljs-keyword">from</span> rynko <span class="hljs-keyword">import</span> Rynko

client = Rynko(api_key=os.environ[<span class="hljs-string">"RYNKO_API_KEY"</span>])

<span class="hljs-comment"># Generate a PDF</span>
job = client.documents.generate_pdf(
    template_id=<span class="hljs-string">"invoice"</span>,
    variables={
        <span class="hljs-string">"invoiceNumber"</span>: <span class="hljs-string">"INV-2026-001"</span>,
        <span class="hljs-string">"clientName"</span>: <span class="hljs-string">"Acme Technologies Pvt. Ltd."</span>,
        <span class="hljs-string">"clientEmail"</span>: <span class="hljs-string">"accounts@acmetech.com"</span>,
        <span class="hljs-string">"lineItems"</span>: [
            {<span class="hljs-string">"description"</span>: <span class="hljs-string">"Technical Consulting"</span>, <span class="hljs-string">"quantity"</span>: <span class="hljs-number">2</span>, <span class="hljs-string">"price"</span>: <span class="hljs-number">150.00</span>},
            {<span class="hljs-string">"description"</span>: <span class="hljs-string">"Software License"</span>, <span class="hljs-string">"quantity"</span>: <span class="hljs-number">1</span>, <span class="hljs-string">"price"</span>: <span class="hljs-number">500.00</span>},
        ],
        <span class="hljs-string">"subtotal"</span>: <span class="hljs-number">800.00</span>,
        <span class="hljs-string">"tax"</span>: <span class="hljs-number">80.00</span>,
        <span class="hljs-string">"total"</span>: <span class="hljs-number">880.00</span>,
    },
)

print(<span class="hljs-string">f"Job queued: <span class="hljs-subst">{job[<span class="hljs-string">'jobId'</span>]}</span>"</span>)

<span class="hljs-comment"># Wait for completion</span>
completed = client.documents.wait_for_completion(job[<span class="hljs-string">"jobId"</span>])
print(<span class="hljs-string">f"Download URL: <span class="hljs-subst">{completed[<span class="hljs-string">'downloadUrl'</span>]}</span>"</span>)
</code></pre>
<h3 id="heading-curl">cURL</h3>
<pre><code class="lang-bash"><span class="hljs-comment"># Generate a document</span>
curl -X POST https://api.rynko.dev/api/v1/documents/generate \
  -H <span class="hljs-string">"Authorization: Bearer YOUR_API_KEY"</span> \
  -H <span class="hljs-string">"Content-Type: application/json"</span> \
  -d <span class="hljs-string">'{
    "templateId": "invoice",
    "format": "pdf",
    "variables": {
      "invoiceNumber": "INV-2026-001",
      "clientName": "Acme Technologies Pvt. Ltd.",
      "clientEmail": "accounts@acmetech.com",
      "lineItems": [
        {"description": "Technical Consulting", "quantity": 2, "price": 150.00},
        {"description": "Software License", "quantity": 1, "price": 500.00}
      ],
      "subtotal": 800.00,
      "tax": 80.00,
      "total": 880.00
    }
  }'</span>

<span class="hljs-comment"># Response includes a jobId - poll for completion</span>
curl https://api.rynko.dev/api/v1/documents/<span class="hljs-built_in">jobs</span>/JOB_ID \
  -H <span class="hljs-string">"Authorization: Bearer YOUR_API_KEY"</span>
</code></pre>
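<p>If you poll the job endpoint yourself (as the second cURL call does) instead of using the SDK's <code>waitForCompletion</code>, the loop is straightforward. Here's a sketch with a pluggable fetcher so it isn't tied to any particular HTTP client; the status names are assumptions, so check the API reference for the exact values:</p>

```typescript
interface JobStatus {
  status: 'queued' | 'processing' | 'completed' | 'failed'; // assumed states
  downloadUrl?: string;
}

// `getJob` wraps your HTTP call to GET /api/v1/documents/jobs/{jobId}.
async function pollUntilDone(
  getJob: () => Promise<JobStatus>,
  intervalMs = 500,
  maxAttempts = 60,
): Promise<JobStatus> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const job = await getJob();
    // Stop on any terminal state; keep waiting otherwise.
    if (job.status === 'completed' || job.status === 'failed') return job;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error('Timed out waiting for document generation');
}
```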
<p>Run the code. In under a second, you'll get a signed download URL for your PDF. Click it — there's your document.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770985024497/c09c4658-64f2-4921-a87f-69d60c23a8db.png" alt="The generated PDF invoice — professional layout with line items, calculated totals, and branding." class="image--center mx-auto" /></p>
<h2 id="heading-want-excel-instead">Want Excel Instead?</h2>
<p>Same template, different format:</p>
<pre><code class="lang-typescript"><span class="hljs-comment">// Node.js</span>
<span class="hljs-keyword">const</span> job = <span class="hljs-keyword">await</span> rynko.documents.generateExcel({
  templateId: <span class="hljs-string">'invoice'</span>,
  variables: { <span class="hljs-comment">/* same variables */</span> },
});
</code></pre>
<pre><code class="lang-python"><span class="hljs-comment"># Python</span>
job = client.documents.generate_excel(
    template_id=<span class="hljs-string">"invoice"</span>,
    variables={ ... },  <span class="hljs-comment"># same variables as the PDF call</span>
)
</code></pre>
<p>That's it. One template, both formats.</p>
<h2 id="heading-step-5-set-up-webhooks-optional">Step 5: Set Up Webhooks (Optional)</h2>
<p>Instead of polling for completion, you can receive a webhook when your document is ready:</p>
<ol>
<li><p>Go to <strong>Settings</strong> &gt; <strong>Webhooks</strong> in the dashboard</p>
</li>
<li><p>Add your endpoint URL</p>
</li>
<li><p>Select the <code>document.generated</code> event</p>
</li>
</ol>
<p>When a document finishes generating, we'll POST to your endpoint with the download URL:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { verifyWebhookSignature } <span class="hljs-keyword">from</span> <span class="hljs-string">'@rynko/sdk'</span>;

<span class="hljs-comment">// Mount with a raw body parser (e.g. express.raw()) so req.body is the exact payload</span>
app.post(<span class="hljs-string">'/webhooks/rynko'</span>, <span class="hljs-function">(<span class="hljs-params">req, res</span>) =&gt;</span> {
  <span class="hljs-keyword">const</span> event = verifyWebhookSignature({
    payload: req.body.toString(),
    signature: req.headers[<span class="hljs-string">'x-rynko-signature'</span>],
    secret: process.env.WEBHOOK_SECRET,
  });

  <span class="hljs-keyword">if</span> (event.type === <span class="hljs-string">'document.generated'</span>) {
    <span class="hljs-built_in">console</span>.log(<span class="hljs-string">'Document ready:'</span>, event.data.downloadUrl);
  }

  res.status(<span class="hljs-number">200</span>).json({ received: <span class="hljs-literal">true</span> });
});
</code></pre>
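<p>The SDK helper does the verification for you. For the curious: webhook signatures of this kind are typically an HMAC-SHA256 of the raw payload, compared in constant time. The sketch below shows that general pattern; Rynko's exact scheme may differ, so prefer <code>verifyWebhookSignature</code> in production:</p>

```typescript
import { createHmac, timingSafeEqual } from 'crypto';

// Generic HMAC-SHA256 webhook verification pattern (not necessarily
// Rynko's exact scheme): recompute the signature over the raw payload
// and compare in constant time to avoid timing attacks.
function verifySignature(payload: string, signature: string, secret: string): boolean {
  const expected = createHmac('sha256', secret).update(payload).digest('hex');
  const a = Buffer.from(expected);
  const b = Buffer.from(signature);
  // timingSafeEqual throws on length mismatch, so guard first.
  return a.length === b.length && timingSafeEqual(a, b);
}
```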
<h2 id="heading-whats-next">What's Next?</h2>
<p>Now that you've generated your first document, here's what to explore:</p>
<ul>
<li><p><a target="_blank" href="https://app.rynko.dev"><strong>Visual Designer</strong></a> — Design templates with drag-and-drop, live preview, and 28 component types</p>
</li>
<li><p><a target="_blank" href="https://www.rynko.dev/mcp"><strong>MCP Server</strong></a> — Let Claude or Cursor create and generate documents for you</p>
</li>
<li><p><a target="_blank" href="https://docs.rynko.dev/api"><strong>API Reference</strong></a> — Full API documentation with interactive examples</p>
</li>
<li><p><a target="_blank" href="https://docs.rynko.dev/developer-guide/template-schema"><strong>Template Schema</strong></a> — Deep dive into template structure and capabilities</p>
</li>
<li><p><a target="_blank" href="https://www.rynko.dev/integrations"><strong>Integrations</strong></a> — Connect with Zapier, <a target="_blank" href="http://Make.com">Make.com</a>, n8n, and Google Sheets</p>
</li>
</ul>
<h2 id="heading-need-help">Need Help?</h2>
<ul>
<li><p><a target="_blank" href="https://docs.rynko.dev"><strong>Documentation</strong></a> — Guides, tutorials, and API reference</p>
</li>
<li><p><a target="_blank" href="https://discord.gg/d8cU2MG6"><strong>Discord</strong></a> — Chat with the team and community</p>
</li>
<li><p><a target="_blank" href="mailto:support@rynko.dev"><strong>support@rynko.dev</strong></a> — Email support</p>
</li>
</ul>
<p>Happy generating!</p>
<p><em><sub>Disclosure: I ideate and draft content, then refine it with the aid of artificial intelligence tools like Claude and revise it to reflect my intended message.</sub></em></p>
]]></content:encoded></item><item><title><![CDATA[How to Generate PDFs from Claude Desktop Using MCP]]></title><description><![CDATA[Claude Desktop is already great at analyzing data, writing content, and answering questions. But what if it could also generate real, formatted PDF documents?
With Rynko's MCP server, it can. This tutorial shows you how to set it up and start generat...]]></description><link>https://blog.rynko.dev/generate-pdfs-from-claude-desktop-mcp</link><guid isPermaLink="true">https://blog.rynko.dev/generate-pdfs-from-claude-desktop-mcp</guid><category><![CDATA[mcp]]></category><category><![CDATA[Claude Desktop]]></category><category><![CDATA[AI]]></category><category><![CDATA[#ai-tools]]></category><category><![CDATA[llm]]></category><category><![CDATA[Document Generation]]></category><category><![CDATA[PDF generation]]></category><category><![CDATA[Tutorial]]></category><category><![CDATA[pdf]]></category><category><![CDATA[Developer Tools]]></category><category><![CDATA[rynko]]></category><dc:creator><![CDATA[Rynko Dev]]></dc:creator><pubDate>Fri, 13 Feb 2026 11:15:31 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1770971027392/3054ea54-89b3-443c-a46a-b25272f66369.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<hr />
<p>Claude Desktop is already great at analyzing data, writing content, and answering questions. But what if it could also generate real, formatted PDF documents?</p>
<p>With Rynko's MCP server, it can. This tutorial shows you how to set it up and start generating documents from Claude Desktop in under 5 minutes.</p>
<h2 id="heading-what-youll-build">What You'll Build</h2>
<p>By the end of this tutorial, you'll be able to:</p>
<ul>
<li><p>Ask Claude to create document templates from a description</p>
</li>
<li><p>Generate PDF invoices, reports, and certificates from conversation</p>
</li>
<li><p>Preview documents before using credits</p>
</li>
<li><p>Generate Excel files from the same templates</p>
</li>
</ul>
<h2 id="heading-prerequisites">Prerequisites</h2>
<ul>
<li><p><a target="_blank" href="https://claude.ai/download">Claude Desktop</a> installed</p>
</li>
<li><p>A free <a target="_blank" href="https://app.rynko.dev/signup">Rynko account</a> (5,000 credits included)</p>
</li>
</ul>
<h2 id="heading-step-1-get-a-personal-access-token">Step 1: Get a Personal Access Token</h2>
<p>MCP connections use Personal Access Tokens (PATs) instead of API keys. Here's how to create one:</p>
<ol>
<li><p>Log into <a target="_blank" href="https://app.rynko.dev">app.rynko.dev</a></p>
</li>
<li><p>Go to <strong>Settings</strong> &gt; <strong>Personal Access Tokens</strong></p>
</li>
<li><p>Click <strong>Create Token</strong></p>
</li>
<li><p>Give it a label like "Claude Desktop"</p>
</li>
<li><p>Set the expiry (up to 30 days)</p>
</li>
<li><p>Copy the token — it starts with <code>pat_</code> and won't be shown again</p>
</li>
</ol>
<h2 id="heading-step-2-install-the-rynko-extension">Step 2: Install the Rynko Extension</h2>
<p>There are two ways to install — pick whichever you prefer.</p>
<h3 id="heading-option-a-one-click-install-recommended">Option A: One-Click Install (Recommended)</h3>
<p>The fastest way. Download the <code>.mcpb</code> extension package and drag it into Claude Desktop:</p>
<ol>
<li><p>Download <code>rynko-mcp-{version}.mcpb</code> from the <a target="_blank" href="https://github.com/rynko-dev/mcp-server/releases">latest release</a></p>
</li>
<li><p>In Claude Desktop, go to <strong>Settings</strong> &gt; <strong>Extensions</strong></p>
</li>
<li><p>Drag the downloaded <code>.mcpb</code> file into the install area (or click <strong>Install from file</strong>)</p>
</li>
<li><p>Click <strong>Configure</strong> on the Rynko extension and enter your Personal Access Token</p>
</li>
</ol>
<p>That's it — no config files, no terminal commands.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770970442692/4d790664-cbfc-4f19-9262-996e7db90f2e.png" alt="Rynko extension installed in Claude Desktop Settings &gt; Extensions" class="image--center mx-auto" /></p>
<h3 id="heading-option-b-manual-configuration">Option B: Manual Configuration</h3>
<p>If you prefer editing config files directly, open your Claude Desktop configuration:</p>
<ul>
<li><p><strong>macOS</strong>: <code>~/Library/Application Support/Claude/claude_desktop_config.json</code></p>
</li>
<li><p><strong>Windows</strong>: <code>%APPDATA%\Claude\claude_desktop_config.json</code></p>
</li>
<li><p><strong>Linux</strong>: <code>~/.config/Claude/claude_desktop_config.json</code></p>
</li>
</ul>
<p>Add the Rynko MCP server:</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"mcpServers"</span>: {
    <span class="hljs-attr">"rynko"</span>: {
      <span class="hljs-attr">"command"</span>: <span class="hljs-string">"npx"</span>,
      <span class="hljs-attr">"args"</span>: [<span class="hljs-string">"-y"</span>, <span class="hljs-string">"@rynko/mcp-server"</span>],
      <span class="hljs-attr">"env"</span>: {
        <span class="hljs-attr">"RYNKO_USER_TOKEN"</span>: <span class="hljs-string">"pat_your_token_here"</span>
      }
    }
  }
}
</code></pre>
<p>Save the file and restart Claude Desktop.</p>
<h2 id="heading-step-3-verify-the-connection">Step 3: Verify the Connection</h2>
<p>Start a new conversation in Claude Desktop and type:</p>
<pre><code class="lang-plaintext">List my Rynko workspaces
</code></pre>
<p>Claude should respond with your workspace names and IDs. If it does, you're connected. If not, double-check your config file for JSON syntax errors and make sure the token is valid.</p>
<h2 id="heading-generating-your-first-document">Generating Your First Document</h2>
<h3 id="heading-create-a-template">Create a Template</h3>
<p>Let's start by asking Claude to create an invoice template:</p>
<pre><code class="lang-plaintext">Create a Rynko invoice template with:
- Company header with logo placeholder and company name
- "Bill To" section with customer name, address, and email
- Invoice number and date fields
- A line items table with columns: Description, Quantity, Unit Price, Amount
- Subtotal, tax rate, tax amount, and total
- Payment terms at the bottom
</code></pre>
<p>Claude will use the <code>create_draft_template</code> tool to build a properly structured template. It handles the layout, component types, and variable definitions automatically.</p>
<blockquote>
<p><strong>Pro Tip:</strong> You can drag and drop your company logo into the chat and say <em>"Upload this as my logo."</em> Claude will save it to your Rynko asset library and automatically link it in the template.</p>
</blockquote>
<h3 id="heading-preview-the-template">Preview the Template</h3>
<p>Before generating a real document, preview it with sample data:</p>
<pre><code class="lang-plaintext">Preview the invoice template I just created with this data:
- Company: Rynko Inc
- Customer: Acme Corp, 456 Business Ave, San Francisco CA
- Invoice #INV-2026-001, dated today
- Items: 10 hours of consulting at $150/hr, 1 software license at $500
- Tax rate: 8%
</code></pre>
<p>Claude will generate a preview PDF and give you a download link. Previews are free (and watermarked) — they don't count against your quota.</p>
<h3 id="heading-generate-a-production-document">Generate a Production Document</h3>
<p>Happy with the preview? Generate the real thing:</p>
<pre><code class="lang-plaintext">Generate a PDF invoice with these details:
- Customer: TechStart LLC, 789 Innovation Dr, Austin TX
- Invoice #INV-2026-002
- 5x API Integration Setup at $200 each
- 2x Monthly Support at $99 each
- Tax: 8.25%
- Payment terms: Net 30
</code></pre>
<p>Claude calculates the totals, fills in all the variables, and asks you to confirm the credit usage: <em>"This will use 1 document from your monthly quota. Proceed?"</em> Once you say yes, it generates the PDF and gives you a download link in seconds.</p>
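<p>For reference, the arithmetic Claude is being asked to do there is easy to check yourself (rounding tax to whole cents):</p>

```typescript
// Checking the INV-2026-002 totals from the prompt above.
const items = [
  { quantity: 5, price: 200 }, // API Integration Setup
  { quantity: 2, price: 99 },  // Monthly Support
];
const subtotal = items.reduce((s, i) => s + i.quantity * i.price, 0); // 1198
// 8.25% tax, computed in cents to avoid float drift (8.25 per 100):
const tax = Math.round(subtotal * 8.25) / 100;                        // 98.84
const total = Math.round((subtotal + tax) * 100) / 100;               // 1296.84
```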
<h3 id="heading-generate-excel-instead">Generate Excel Instead</h3>
<p>Want the same data as an Excel file?</p>
<pre><code class="lang-plaintext">Generate that same invoice as an Excel file instead of PDF
</code></pre>
<p>Same template, same data, different format. The Excel file includes proper cell formatting, formulas, and structure.</p>
<h2 id="heading-recreate-any-document-from-a-sample">Recreate Any Document from a Sample</h2>
<p>Here's one of the most powerful things you can do: <strong>upload an existing document and have Claude recreate it as a Rynko template.</strong></p>
<p>Claude Desktop supports file uploads — PDFs, images, screenshots. Since Claude is multimodal, it can visually analyze a document's layout, structure, fonts, spacing, and data fields. Combine that with Rynko's MCP tools, and you get this workflow:</p>
<ol>
<li><p>Upload a sample invoice, report, or certificate (PDF or screenshot)</p>
</li>
<li><p>Ask Claude to recreate it</p>
</li>
</ol>
<pre><code class="lang-plaintext">I've uploaded a sample invoice we currently use. Create a Rynko
template that matches this layout as closely as possible. Identify
all the dynamic fields and create variables for them.
</code></pre>
<p>Claude will:</p>
<ul>
<li><p>Analyze the visual layout — header position, table structure, column widths, alignment</p>
</li>
<li><p>Identify which parts are static (labels, logos) and which are dynamic (names, amounts, dates)</p>
</li>
<li><p>Create a Rynko template with the right components and variables</p>
</li>
<li><p>Match the general styling — fonts, colors, spacing</p>
</li>
</ul>
<p>This means you can migrate existing documents to Rynko without manually recreating every template from scratch. Upload your current invoice PDF, and Claude builds the template for you.</p>
<p><strong>Works great for:</strong></p>
<ul>
<li><p>Migrating from Word/mail merge templates</p>
</li>
<li><p>Recreating documents you only have as PDFs (no source file)</p>
</li>
<li><p>Converting a designer's mockup into a working template</p>
</li>
<li><p>Matching a client's existing document format</p>
</li>
</ul>
<p>After Claude creates the draft, preview it with sample data, compare with the original, and ask for adjustments until it matches.</p>
<h2 id="heading-advanced-multi-document-workflows">Advanced: Multi-Document Workflows</h2>
<p>Once you're comfortable with the basics, try more complex workflows:</p>
<h3 id="heading-batch-invoice-generation">Batch Invoice Generation</h3>
<pre><code class="lang-plaintext">I have these 3 invoices to generate:

1. Acme Corp - INV-001: 5x Widget A at $50, 2x Widget B at $75
2. TechStart - INV-002: 10 hours consulting at $150/hr
3. GlobalCo - INV-003: 1x Enterprise License at $5,000

Generate all 3 as PDFs using the invoice template. Tax rate 8% for all.
</code></pre>
<p>Claude will generate each one and give you all three download links.</p>
<h3 id="heading-data-analysis-report-generation">Data Analysis + Report Generation</h3>
<pre><code class="lang-plaintext">Here's our Q1 sales data: [paste your data]

Analyze the trends, identify the top performing regions,
then generate a PDF report with a summary, charts, and
a detailed breakdown table.
</code></pre>
<p>Claude analyzes the data, creates a report template if needed, and generates a formatted PDF with charts and tables.</p>
<h3 id="heading-template-inspection">Template Inspection</h3>
<p>Not sure what variables a template needs?</p>
<pre><code class="lang-plaintext">Show me the variables for the invoice template
</code></pre>
<p>Claude will list every variable with its type, whether it's required, and its default value.</p>
<h2 id="heading-tips-for-best-results">Tips for Best Results</h2>
<ol>
<li><p><strong>Be specific about layout</strong> — "header with logo on the left and company info on the right" works better than "add a header"</p>
</li>
<li><p><strong>Specify variable types</strong> — "items should be an array of objects with description (string), quantity (number), and price (number)" helps Claude create the right schema</p>
</li>
<li><p><strong>Preview first</strong> — Always preview before generating production documents. Previews are free and let you catch issues early</p>
</li>
<li><p><strong>Use existing templates</strong> — Ask Claude to list your templates before creating new ones. You might already have what you need</p>
</li>
<li><p><strong>Iterate on drafts</strong> — Templates created via MCP are always drafts. Ask Claude to update them until they're right, then publish from the dashboard</p>
</li>
</ol>
<h2 id="heading-troubleshooting">Troubleshooting</h2>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Issue</td><td>Solution</td></tr>
</thead>
<tbody>
<tr>
<td>"Server not found"</td><td>If using manual config, check file path and JSON syntax. If using <code>.mcpb</code>, re-install the extension. Restart Claude Desktop</td></tr>
<tr>
<td>"Invalid token"</td><td>Token may have expired (30-day max). Create a new one</td></tr>
<tr>
<td>"Template not found"</td><td>Make sure you're in the right workspace. Ask Claude to list workspaces</td></tr>
<tr>
<td>Tools not appearing</td><td>If using manual config, ensure <code>npx</code> is in your PATH. Try the <code>.mcpb</code> install instead — it handles dependencies automatically</td></tr>
</tbody>
</table>
</div><h2 id="heading-whats-next">What's Next?</h2>
<ul>
<li><p><a target="_blank" href="https://app.rynko.dev"><strong>Customize templates</strong></a> in the visual designer for pixel-perfect control</p>
</li>
<li><p><a target="_blank" href="https://docs.rynko.dev/developer-guide/webhooks"><strong>Set up webhooks</strong></a> to get notified when documents are ready</p>
</li>
<li><p><a target="_blank" href="https://docs.rynko.dev/api"><strong>Explore the API</strong></a> for programmatic integration</p>
</li>
<li><p><a target="_blank" href="https://blog.rynko.dev/generate-documents-from-cursor"><strong>Try Cursor</strong></a> — our MCP server works there too</p>
</li>
</ul>
<hr />
<p><em>Need help? Join our</em> <a target="_blank" href="https://discord.gg/d8cU2MG6"><em>Discord</em></a> <em>or email</em> <a target="_blank" href="mailto:support@rynko.dev"><em>support@rynko.dev</em></a><em>.</em></p>
<p><em><sub>Disclosure: I ideate and draft content, then refine it with the aid of artificial intelligence tools like Claude and revise it to reflect my intended message.</sub></em></p>
]]></content:encoded></item><item><title><![CDATA[How to Generate Documents from Cursor IDE with MCP]]></title><description><![CDATA[If you're building an app that needs to generate documents — invoices, reports, contracts — you probably have a dedicated document service or a messy Puppeteer setup. What if you could design templates, test generation, and integrate the API all from...]]></description><link>https://blog.rynko.dev/generate-documents-from-cursor-mcp</link><guid isPermaLink="true">https://blog.rynko.dev/generate-documents-from-cursor-mcp</guid><category><![CDATA[AI]]></category><category><![CDATA[cursor]]></category><category><![CDATA[cursor IDE]]></category><category><![CDATA[Document Generation]]></category><category><![CDATA[Developer Tools]]></category><category><![CDATA[Tutorial]]></category><category><![CDATA[mcp]]></category><category><![CDATA[#ai-tools]]></category><category><![CDATA[AI Tools for Developers]]></category><category><![CDATA[rynko]]></category><dc:creator><![CDATA[Rynko Dev]]></dc:creator><pubDate>Fri, 13 Feb 2026 11:15:17 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1770974108577/be62aad3-42ae-42ed-950a-f870bb0ce14d.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<hr />
<p>If you're building an app that needs to generate documents — invoices, reports, contracts — you probably have a dedicated document service or a messy Puppeteer setup. What if you could design templates, test generation, and integrate the API all from within Cursor?</p>
<p>With Rynko's MCP server, Cursor can create document templates, generate PDFs and Excel files, and help you write the integration code — all in a single workflow.</p>
<h2 id="heading-why-this-matters-for-developers">Why This Matters for Developers</h2>
<p>When you're building a feature that generates PDFs, the workflow usually looks like:</p>
<ol>
<li><p>Open the document service dashboard in a browser tab</p>
</li>
<li><p>Design a template there</p>
</li>
<li><p>Switch back to your editor</p>
</li>
<li><p>Write the API integration code</p>
</li>
<li><p>Test, realize the template needs changes</p>
</li>
<li><p>Switch back to the browser, tweak the template</p>
</li>
<li><p>Repeat</p>
</li>
</ol>
<p>With MCP, Cursor becomes your document generation command center. Template creation, testing, and code generation happen in the same place you're writing your application code.</p>
<h2 id="heading-setup-2-minutes">Setup (2 Minutes)</h2>
<h3 id="heading-prerequisites">Prerequisites</h3>
<ul>
<li><p><a target="_blank" href="https://cursor.sh/">Cursor</a> installed</p>
</li>
<li><p>A free <a target="_blank" href="https://app.rynko.dev/signup">Rynko account</a></p>
</li>
<li><p>A Personal Access Token (create one at Settings &gt; Personal Access Tokens in the dashboard)</p>
</li>
</ul>
<h3 id="heading-configure-mcp">Configure MCP</h3>
<p>The easiest way is through Cursor's settings UI:</p>
<ol>
<li><p>Open <strong>Cursor Settings</strong> (Cmd+, / Ctrl+,)</p>
</li>
<li><p>Go to <strong>Features</strong> &gt; <strong>MCP</strong></p>
</li>
<li><p>Click <strong>Add New MCP Server</strong></p>
</li>
<li><p>Enter a name (e.g., "rynko") and select <strong>command</strong> as the type</p>
</li>
<li><p>Enter the command: <code>npx -y @rynko/mcp-server</code></p>
</li>
<li><p>Add the environment variable: <code>RYNKO_USER_TOKEN</code> = <code>pat_your_token_here</code></p>
</li>
<li><p>Click <strong>Save</strong></p>
</li>
</ol>
<p>Alternatively, you can add it directly to <code>.cursor/mcp.json</code> in your project root (or <code>~/.cursor/mcp.json</code> for global access):</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"mcpServers"</span>: {
    <span class="hljs-attr">"rynko"</span>: {
      <span class="hljs-attr">"command"</span>: <span class="hljs-string">"npx"</span>,
      <span class="hljs-attr">"args"</span>: [<span class="hljs-string">"-y"</span>, <span class="hljs-string">"@rynko/mcp-server"</span>],
      <span class="hljs-attr">"env"</span>: {
        <span class="hljs-attr">"RYNKO_USER_TOKEN"</span>: <span class="hljs-string">"pat_your_token_here"</span>
      }
    }
  }
}
</code></pre>
<p>Restart Cursor. The Rynko tools will appear in Cursor's MCP tool list.</p>
<h3 id="heading-verify-it-works">Verify It Works</h3>
<p>Open Cursor's AI chat (Cmd+L / Ctrl+L) and type:</p>
<pre><code class="lang-plaintext">List my Rynko workspaces and templates
</code></pre>
<p>If you see your workspace and any existing templates, you're ready.</p>
<h2 id="heading-workflow-1-build-a-document-feature-from-scratch">Workflow 1: Build a Document Feature from Scratch</h2>
<p>Let's say you're building a SaaS app and need to add invoice generation. Here's the full workflow in Cursor:</p>
<h3 id="heading-step-1-create-the-template">Step 1: Create the Template</h3>
<pre><code class="lang-plaintext">Create a Rynko PDF template called "customer-invoice" with:
- Company logo and name in the header
- Invoice number, date, and due date
- Customer name, email, and address
- Line items table: description, quantity, unit price, amount
- Subtotal, discount (optional), tax, total
- Payment instructions footer
- Professional styling with a blue accent color
</code></pre>
<p>Cursor creates the template via MCP. It's saved as a draft in your Rynko workspace.</p>
<blockquote>
<p><strong>Pro Tip:</strong> Have a company logo? Drop the image file into your project directory and tell Cursor: <em>"Upload src/assets/logo.png as my company logo."</em> Cursor will save it to your Rynko asset library and link it in the template.</p>
</blockquote>
<h3 id="heading-step-2-test-with-sample-data">Step 2: Test with Sample Data</h3>
<pre><code class="lang-plaintext">Preview the customer-invoice template with:
- Invoice #INV-2026-100
- Customer: Acme Corp, john@acme.com
- Items: 3x "Monthly Subscription" at $99, 1x "Setup Fee" at $250
- 10% discount, 8% tax
</code></pre>
<p>Cursor generates a preview (free and watermarked) and gives you the download link. Check the PDF — if it needs tweaks, just describe what to change.</p>
<h3 id="heading-step-3-generate-the-integration-code">Step 3: Generate the Integration Code</h3>
<p>Now here's where it gets powerful. Ask Cursor to write the code:</p>
<pre><code class="lang-plaintext">Write a TypeScript service that generates invoices using the
customer-invoice template via the Rynko SDK. Include:
- A generateInvoice function that accepts order data
- Webhook handler for document.generated events
- Error handling and retry logic
- Types for the invoice variables
</code></pre>
<p>Cursor writes the code knowing exactly what variables the template expects, because it just created the template. No guessing, no mismatched field names.</p>
<pre><code class="lang-typescript"><span class="hljs-comment">// Generated by Cursor with knowledge of the actual template schema</span>
<span class="hljs-keyword">import</span> { Rynko } <span class="hljs-keyword">from</span> <span class="hljs-string">'@rynko/sdk'</span>;

<span class="hljs-keyword">interface</span> InvoiceItem {
  description: <span class="hljs-built_in">string</span>;
  quantity: <span class="hljs-built_in">number</span>;
  unitPrice: <span class="hljs-built_in">number</span>;
  amount: <span class="hljs-built_in">number</span>;
}

<span class="hljs-keyword">interface</span> InvoiceData {
  invoiceNumber: <span class="hljs-built_in">string</span>;
  date: <span class="hljs-built_in">string</span>;
  dueDate: <span class="hljs-built_in">string</span>;
  customerName: <span class="hljs-built_in">string</span>;
  customerEmail: <span class="hljs-built_in">string</span>;
  customerAddress: <span class="hljs-built_in">string</span>;
  items: InvoiceItem[];
  subtotal: <span class="hljs-built_in">number</span>;
  discount?: <span class="hljs-built_in">number</span>;
  tax: <span class="hljs-built_in">number</span>;
  total: <span class="hljs-built_in">number</span>;
}

<span class="hljs-keyword">const</span> rynko = <span class="hljs-keyword">new</span> Rynko({ apiKey: process.env.RYNKO_API_KEY! });

<span class="hljs-keyword">export</span> <span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">generateInvoice</span>(<span class="hljs-params">data: InvoiceData</span>): <span class="hljs-title">Promise</span>&lt;<span class="hljs-title">string</span>&gt; </span>{
  <span class="hljs-keyword">const</span> job = <span class="hljs-keyword">await</span> rynko.documents.generatePdf({
    templateId: <span class="hljs-string">'customer-invoice'</span>,
    variables: data,
  });

  <span class="hljs-keyword">const</span> completed = <span class="hljs-keyword">await</span> rynko.documents.waitForCompletion(job.jobId);

  <span class="hljs-keyword">if</span> (completed.status !== <span class="hljs-string">'completed'</span>) {
    <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> <span class="hljs-built_in">Error</span>(<span class="hljs-string">`Invoice generation failed: <span class="hljs-subst">${completed.errorMessage}</span>`</span>);
  }

  <span class="hljs-keyword">return</span> completed.downloadUrl;
}
</code></pre>
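<p>The prompt above also asks for a webhook handler for <code>document.generated</code> events, which the snippet doesn't show. Here is a minimal sketch of the event-dispatch side. The payload fields used here (<code>type</code>, <code>jobId</code>, <code>downloadUrl</code>, <code>errorMessage</code>) are assumptions for illustration, so check the actual webhook schema in the Rynko docs before relying on them.</p>

```typescript
// Sketch of a webhook event dispatcher for the invoice service above.
// The event shape below is an assumption for illustration, not the
// documented Rynko webhook schema.
interface RynkoWebhookEvent {
  type: 'document.generated' | 'document.failed';
  jobId: string;
  downloadUrl?: string;
  errorMessage?: string;
}

type WebhookResult =
  | { action: 'store'; jobId: string; url: string }
  | { action: 'retry'; jobId: string; reason: string }
  | { action: 'ignore' };

// Pure dispatcher: decides what to do with an event without touching
// HTTP or storage, so it can be unit-tested in isolation.
function handleRynkoEvent(event: RynkoWebhookEvent): WebhookResult {
  switch (event.type) {
    case 'document.generated':
      if (!event.downloadUrl) return { action: 'ignore' };
      return { action: 'store', jobId: event.jobId, url: event.downloadUrl };
    case 'document.failed':
      return { action: 'retry', jobId: event.jobId, reason: event.errorMessage ?? 'unknown' };
    default:
      return { action: 'ignore' };
  }
}
```

<p>Keeping the dispatcher pure means the HTTP layer (Express, Hono, or whatever your app uses) only has to parse the request body and call it.</p>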
<h3 id="heading-step-4-iterate">Step 4: Iterate</h3>
<p>Need to change the template? Just ask in the same conversation:</p>
<pre><code class="lang-plaintext">Update the customer-invoice template to add a QR code
in the bottom right with the payment URL
</code></pre>
<p>Cursor updates the template and you can immediately test it again — without leaving the editor.</p>
<h2 id="heading-workflow-2-recreate-any-document-from-a-sample">Workflow 2: Recreate Any Document from a Sample</h2>
<p>Have an existing PDF, screenshot, or mockup? Cursor can recreate it as a Rynko template.</p>
<p>Drop the sample file into your project directory and ask:</p>
<pre><code class="lang-plaintext">Look at docs/sample-invoice.pdf. Create a Rynko template that
matches this layout. Identify all the dynamic fields and create
variables for them.
</code></pre>
<p>Cursor reads the file, analyzes the layout — header position, table structure, column widths, alignment, colors — and creates a matching template via MCP. It figures out which parts are static (labels, logos) and which are dynamic (names, amounts, dates).</p>
<p><strong>Works great for:</strong></p>
<ul>
<li><p>Migrating from Word/mail merge templates</p>
</li>
<li><p>Recreating documents you only have as PDFs (no source file)</p>
</li>
<li><p>Converting a designer's mockup into a working template</p>
</li>
<li><p>Matching a client's existing document format</p>
</li>
</ul>
<h2 id="heading-workflow-3-migrate-html-templates-to-rynko">Workflow 3: Migrate HTML Templates to Rynko</h2>
<p>Already have HTML templates or Puppeteer-based generation? Cursor can migrate both the template and the code:</p>
<pre><code class="lang-plaintext">I have this HTML invoice template in src/templates/invoice.html.
Create an equivalent Rynko template that matches the layout,
then write the migration code to replace the Puppeteer generation
with Rynko SDK calls.
</code></pre>
<p>Cursor reads your existing HTML, creates a matching Rynko template, and rewrites your generation code. You go from a 3-8 second Puppeteer pipeline to sub-500ms native rendering.</p>
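<p>Before migrating, it helps to inventory which dynamic placeholders the old HTML template uses so the new Rynko variables match one-to-one. A small sketch, assuming Mustache-style <code>{{name}}</code> placeholders (adjust the regex to whatever syntax your templates actually use):</p>

```typescript
// Extract the distinct {{placeholder}} names from a legacy HTML template
// so they can be mapped onto Rynko template variables. Assumes simple
// Mustache-style syntax; nested sections or helpers would need more work.
function extractPlaceholders(html: string): string[] {
  const names = new Set<string>();
  for (const match of html.matchAll(/\{\{\s*([\w.]+)\s*\}\}/g)) {
    names.add(match[1]);
  }
  return [...names].sort();
}
```

<p>Running this over your existing templates gives Cursor (or you) a checklist of variables the Rynko template must declare.</p>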
<h2 id="heading-workflow-4-data-driven-reports">Workflow 4: Data-Driven Reports</h2>
<p>Combine Cursor's code understanding with document generation:</p>
<pre><code class="lang-plaintext">Look at the analytics data shape in src/services/analytics.ts.
Create a Rynko report template that visualizes this data with:
- A summary section with key metrics
- A bar chart of monthly revenue
- A table of top customers
- Page break, then a detailed transaction log

Then write a scheduled job that generates this report weekly.
</code></pre>
<p>Cursor understands your data types, creates a matching template, and writes the automation code — all in one conversation.</p>
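<p>The "scheduled job" part doesn't require a scheduling framework: computing the delay until the next weekly run is enough to drive a <code>setTimeout</code>-based loop. A minimal stdlib sketch (Monday 09:00 UTC is an arbitrary choice for illustration):</p>

```typescript
// Milliseconds from `from` until the next occurrence of the given UTC
// weekday (0 = Sunday) and hour, e.g. Monday 09:00 for a weekly report.
// If that moment this week is now or already past, it rolls to next week.
function msUntilNextUtc(from: Date, weekday: number, hour: number): number {
  const next = new Date(from.getTime());
  next.setUTCHours(hour, 0, 0, 0);
  let dayDiff = (weekday - from.getUTCDay() + 7) % 7;
  if (dayDiff === 0 && next.getTime() <= from.getTime()) dayDiff = 7;
  next.setUTCDate(next.getUTCDate() + dayDiff);
  return next.getTime() - from.getTime();
}

// Usage sketch: setTimeout(runWeeklyReport, msUntilNextUtc(new Date(), 1, 9));
```

<p>In production you'd more likely use your platform's scheduler (cron, a queue with delayed jobs, or a hosted cron service), but the delay calculation is the same.</p>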
<h2 id="heading-tips-for-cursor-rynko">Tips for Cursor + Rynko</h2>
<ol>
<li><p><strong>Share config across the team</strong> — If using the <code>.cursor/mcp.json</code> file approach, commit it to your repo so every team member gets the same MCP setup. Keep personal access tokens out of version control, though: each developer should fill in their own <code>RYNKO_USER_TOKEN</code> locally</p>
</li>
<li><p><strong>Let Cursor inspect templates</strong> — Before writing integration code, ask Cursor to get the template details. It'll write more accurate code when it knows the exact schema</p>
</li>
<li><p><strong>Preview before publishing</strong> — Templates created via MCP are drafts. Preview them with sample data, iterate, then publish from the dashboard when ready</p>
</li>
<li><p><strong>Keep templates in sync</strong> — If you update a template in the visual designer, ask Cursor to re-inspect it so it knows about the changes</p>
</li>
<li><p><strong>Combine with your codebase</strong> — Cursor can read your existing types and models, then create templates that match your data shape perfectly</p>
</li>
</ol>
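<p>Tip 2 in practice: once Cursor has fetched a template's variables via <code>get_template</code>, they can be turned into a matching TypeScript interface so your integration code stays in sync. A toy version of that mapping (the variable-descriptor shape here is an assumption for illustration, not the actual MCP response format):</p>

```typescript
// Render a TypeScript interface from a list of template variables.
// The descriptor shape is illustrative; inspect the real get_template
// response for the fields your templates actually expose.
interface TemplateVariable {
  name: string;
  type: 'string' | 'number' | 'boolean';
  optional?: boolean;
}

function toInterface(name: string, vars: TemplateVariable[]): string {
  const fields = vars
    .map((v) => `  ${v.name}${v.optional ? '?' : ''}: ${v.type};`)
    .join('\n');
  return `interface ${name} {\n${fields}\n}`;
}
```
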
<h2 id="heading-available-mcp-tools">Available MCP Tools</h2>
<p>Here's what Cursor can do via the Rynko MCP server:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Tool</td><td>Description</td></tr>
</thead>
<tbody>
<tr>
<td><code>list_workspaces</code></td><td>List your workspaces</td></tr>
<tr>
<td><code>switch_workspace</code></td><td>Switch workspace context</td></tr>
<tr>
<td><code>list_templates</code></td><td>List all templates</td></tr>
<tr>
<td><code>get_template</code></td><td>Get template details and variables</td></tr>
<tr>
<td><code>create_draft_template</code></td><td>Create a new template</td></tr>
<tr>
<td><code>update_draft_template</code></td><td>Update a draft template</td></tr>
<tr>
<td><code>validate_schema</code></td><td>Validate template JSON</td></tr>
<tr>
<td><code>get_schema_reference</code></td><td>Get template schema docs</td></tr>
<tr>
<td><code>preview_template</code></td><td>Generate a free preview (watermarked)</td></tr>
<tr>
<td><code>generate_document</code></td><td>Generate a production document (Cursor will confirm credit usage first)</td></tr>
<tr>
<td><code>get_job_status</code></td><td>Check generation status</td></tr>
<tr>
<td><code>parse_data_file</code></td><td>Parse Excel/CSV to JSON</td></tr>
<tr>
<td><code>map_variables</code></td><td>Auto-map data to template variables</td></tr>
<tr>
<td><code>list_assets</code></td><td>List image assets</td></tr>
<tr>
<td><code>upload_asset</code></td><td>Upload images for templates</td></tr>
</tbody>
</table>
</div><h2 id="heading-get-started">Get Started</h2>
<ol>
<li><p><a target="_blank" href="https://app.rynko.dev/signup">Sign up free</a> — 5,000 credits included</p>
</li>
<li><p>Add the config to <code>.cursor/mcp.json</code></p>
</li>
<li><p>Start a chat: <em>"Create a Rynko template for..."</em></p>
</li>
</ol>
<p>The best part: Cursor already understands your codebase. Combined with Rynko's MCP tools, it can build end-to-end document generation features that perfectly match your data models.</p>
<p><a target="_blank" href="https://app.rynko.dev/signup">Get Started Free</a> | <a target="_blank" href="https://docs.rynko.dev/integrations/mcp-integration">MCP Documentation</a> | <a target="_blank" href="https://docs.rynko.dev/api">SDK Reference</a></p>
<hr />
<p><em>Our MCP server also works with</em> <a target="_blank" href="https://blog.rynko.dev/generate-pdfs-from-claude-desktop"><em>Claude Desktop</em></a> <em>(one-click install via</em> <a target="_blank" href="https://github.com/rynko-dev/mcp-server/releases"><code>.mcpb</code> extension</a>), VS Code, Windsurf, and Zed. See the <a target="_blank" href="https://www.rynko.dev/mcp">full setup guide</a>.</p>
<p><em><sub>Disclosure: I ideate and draft content, then refine it with the aid of artificial intelligence tools like Claude and revise it to reflect my intended message.</sub></em></p>
]]></content:encoded></item></channel></rss>