How to Build an AI Agent Workflow With n8n (Step-by-Step)

megaone_admin · Mar 31, 2026 · 5 min read
Engine Score 7/10 — Important

n8n, the open-source workflow automation platform, has emerged as the go-to orchestration layer for AI agent workflows in 2026. The self-hosted community edition is completely free with unlimited executions — the only cost is infrastructure at $5-10 per month for small workloads. With 500+ integrations and native AI nodes for both Claude and ChatGPT, n8n lets you build autonomous agent workflows without writing a custom application from scratch.

This tutorial walks through building a complete automated content research pipeline: RSS feed triggers a workflow that summarizes new articles, drafts an outline, saves results to Google Sheets, and sends a Slack notification. Every step is reproducible.

Why n8n Instead of ChatGPT

Most people use ChatGPT or Claude for one-off prompts — paste text in, get a response, copy it somewhere. The productivity unlock is setting up autonomous workflows that run on triggers without manual intervention. The difference between a chatbot and an agent is agency: an agent acts on its own based on conditions you define.

n8n provides the visual canvas, the trigger system, the integrations to external services, and the AI nodes that connect to Claude or GPT as the reasoning engine. Every workflow has four parts: Trigger (what starts it), AI Node (what thinks), Tools (what the AI can use), and Memory (what persists between runs).
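In an exported n8n workflow, those four parts map onto a list of nodes plus a connections graph. A minimal sketch of the shape (node names and type strings are illustrative, not a verbatim export):

```json
{
  "nodes": [
    { "name": "RSS Trigger", "type": "n8n-nodes-base.rssFeedReadTrigger" },
    { "name": "Summarize",   "type": "@n8n/n8n-nodes-langchain.lmChatAnthropic" },
    { "name": "Save Row",    "type": "n8n-nodes-base.googleSheets" },
    { "name": "Notify",      "type": "n8n-nodes-base.slack" }
  ],
  "connections": {
    "RSS Trigger": { "main": [[{ "node": "Summarize" }]] },
    "Summarize":   { "main": [[{ "node": "Save Row" }]] },
    "Save Row":    { "main": [[{ "node": "Notify" }]] }
  }
}
```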

In early 2026, n8n added MCP support — Claude can now build, modify, and debug n8n workflows via natural language, further lowering the barrier to entry.

What You Need Before Starting

  • n8n instance: Self-hosted (free, via Docker: docker run -it --rm -p 5678:5678 n8nio/n8n) or cloud (from EUR 24/mo)
  • API key: Anthropic (Claude) or OpenAI (GPT) — either works. Claude 3.5 Sonnet is recommended for instruction-following accuracy
  • Google account: For Google Sheets output (free)
  • Slack workspace: For notifications (free tier works)
  • RSS feed URL: Any feed you want to monitor (e.g., TechCrunch AI: https://techcrunch.com/category/artificial-intelligence/feed/)
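Note that the quick-start docker run command above uses --rm, so the container (and your workflows) disappear when it exits. For anything beyond a first test, mount a persistent volume; a docker-compose sketch (pin an image tag in production):

```yaml
# docker-compose.yml — illustrative setup, not an official reference config
services:
  n8n:
    image: n8nio/n8n
    ports:
      - "5678:5678"
    volumes:
      - n8n_data:/home/node/.n8n   # persists workflows and credentials
volumes:
  n8n_data:
```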

Total cost for this setup on self-hosted n8n with Claude API: roughly $5-15 per month depending on volume.

Step 1: Set Up the RSS Trigger

Open your n8n canvas and add an RSS Feed Trigger node. This node polls an RSS feed URL at a configurable interval and fires the workflow when new items appear.

  • Set Feed URL to your target RSS feed
  • Set Poll Times to every 20 minutes (or whatever frequency fits your use case)
  • The trigger outputs: title, link, content snippet, publication date

The RSS trigger handles deduplication automatically — it tracks which items it has already seen and only fires on genuinely new entries.
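In exported-workflow form, the trigger configuration looks roughly like this (parameter names are approximate and may differ by n8n version):

```json
{
  "name": "RSS Trigger",
  "type": "n8n-nodes-base.rssFeedReadTrigger",
  "parameters": {
    "feedUrl": "https://techcrunch.com/category/artificial-intelligence/feed/",
    "pollTimes": { "item": [{ "mode": "everyX", "value": 20, "unit": "minutes" }] }
  }
}
```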

Step 2: Summarize With AI

Add a Claude (or OpenAI) node after the trigger. Configure the system prompt to define the AI’s role:
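The original prompt block appears to have been dropped from the page; one plausible version of a summarization system prompt:

```text
You are a research assistant for a content team. Summarize the article
below in 3-4 sentences, then list the two most newsworthy claims.
Be factual; do not speculate beyond what the article states.
```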

Set the model to Claude 3.5 Sonnet or GPT-4o. Pass the RSS item’s title and content snippet as the user message using n8n’s expression syntax: {{ $json.title }} and {{ $json.contentSnippet }}.

The AI node returns the summary as its output, which flows to the next step in the workflow.

Step 3: Draft an Outline

Add a second AI node. This one takes the summary from Step 2 and generates a structured content outline. The prompt should specify the output format you want — bullet points, section headings, or a full brief.

Chaining AI nodes is where n8n’s agent workflow model becomes powerful. Each node can use a different model, a different system prompt, and different temperature settings. The first node might use a fast, cheap model for summarization; the second might use a more capable model for creative outline generation.
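As an illustration (not the article's own prompt), the second node's system prompt might look like:

```text
Using the summary below, draft a blog-post outline with a working title,
4-6 section headings, and one bullet per section describing its angle.
Return the outline as a markdown list only, with no preamble.
```

Asking for "the list only, no preamble" matters when chaining: the raw output flows into Google Sheets in the next step, so any conversational filler ends up in your spreadsheet.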

Step 4: Save to Google Sheets

Add a Google Sheets node configured to append a row. Map columns to your workflow data:

  • Column A: Article title ({{ $('RSS Trigger').item.json.title }})
  • Column B: Source URL ({{ $('RSS Trigger').item.json.link }})
  • Column C: AI Summary (from Step 2 output)
  • Column D: Content Outline (from Step 3 output)
  • Column E: Timestamp ({{ $now.toISO() }})

Authenticate the Google Sheets node with OAuth2 — n8n walks through this on first connection. The spreadsheet becomes your running database of researched content opportunities.

Step 5: Send Slack Notification

Add a Slack node at the end. Configure it to post to a channel of your choice with a message containing the article title, the AI summary, and a link to the Google Sheet row. This closes the loop — your team is notified immediately when the workflow processes a new article worth reviewing.
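A sketch of the Slack message body using n8n expressions. The node names ("RSS Trigger", "Summarize") and the output field (.text) are assumptions that must match how you named and configured your own nodes:

```text
:newspaper: New article processed
*{{ $('RSS Trigger').item.json.title }}*
{{ $('Summarize').item.json.text }}
Source: {{ $('RSS Trigger').item.json.link }}
```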

Error Handling and Human-in-the-Loop

Production workflows need failure handling. n8n provides an Error Trigger node that catches any step failure and routes it to a separate notification flow. Add one connected to your Slack to receive alerts when the AI node times out, the API returns an error, or Google Sheets authentication expires.

For human-in-the-loop checkpoints, add a Wait node with a webhook resume trigger. The workflow pauses, sends you a Slack message with approve/reject buttons, and only continues when you click approve. This is critical for workflows where the AI output needs human review before taking action — posting to social media, sending emails, or publishing content.
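One hedged way to wire the approve/reject buttons: n8n exposes the Wait node's resume URL as $execution.resumeUrl, so the Slack message can carry buttons whose links hit that webhook. A sketch of the Slack blocks payload (the decision query parameter is a hypothetical convention for your workflow to branch on):

```json
{
  "text": "Outline ready for review: {{ $json.title }}",
  "blocks": [
    { "type": "actions", "elements": [
      { "type": "button", "text": { "type": "plain_text", "text": "Approve" },
        "url": "{{ $execution.resumeUrl }}?decision=approve" },
      { "type": "button", "text": { "type": "plain_text", "text": "Reject" },
        "url": "{{ $execution.resumeUrl }}?decision=reject" }
    ] }
  ]
}
```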

Cost Monitoring

The most common mistake with AI agent workflows is unmonitored API spend. Each Claude or GPT call has a token cost. A workflow processing 50 RSS items per day with two AI nodes per item generates roughly 100 API calls daily. At Claude 3.5 Sonnet pricing, that is approximately $2-5 per day — manageable but worth tracking.

n8n’s execution log shows every workflow run with timing and node-level detail. For cost tracking, add a Google Sheets append node that logs the estimated token count per run to a separate “costs” sheet. Review it weekly.
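The back-of-envelope math above is easy to script. A minimal sketch, assuming Claude 3.5 Sonnet's published list prices of $3 per million input tokens and $15 per million output tokens, and illustrative per-call token counts:

```python
def daily_cost(items_per_day: int, calls_per_item: int,
               in_tokens: int, out_tokens: int,
               in_price: float = 3.0, out_price: float = 15.0) -> float:
    """Estimate daily API spend in USD.

    in_price / out_price are USD per million tokens; adjust for your model.
    in_tokens / out_tokens are average tokens per API call.
    """
    calls = items_per_day * calls_per_item
    per_call = (in_tokens * in_price + out_tokens * out_price) / 1_000_000
    return calls * per_call

# 50 items/day, 2 AI nodes each, ~1500 tokens in / 500 out per call
print(round(daily_cost(50, 2, 1500, 500), 2))  # → 1.2
```

Actual spend depends heavily on how much article content you pass in; longer feeds with full-text content push input tokens (and cost) up several-fold.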

The Emerging Stack: n8n + Dify + Ollama

n8n, Dify, and Ollama have coalesced into a popular open-source AI automation stack in 2026. n8n handles workflow orchestration and system integration. Dify provides the LLM application layer — RAG, prompt management, chatbot interfaces. Ollama runs models locally with an OpenAI-compatible API, eliminating per-token API costs for routine workloads.

The combination removes the network round trip from every inference call, keeps data entirely on your own hardware, and lets you swap in any model (Llama 3, Mistral, DeepSeek) without changing workflow logic. For teams processing high volumes of content where per-token API costs would be prohibitive, the local stack reduces the marginal cost of each AI operation to little more than electricity once the hardware is in place.
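Because Ollama serves an OpenAI-compatible API (by default at http://localhost:11434/v1), pointing an existing n8n workflow at a local model can be little more than a credential change. A setup sketch; exact credential field names depend on your n8n version:

```text
# On the host:
ollama pull llama3      # fetch a model
ollama serve            # OpenAI-compatible API on localhost:11434

# In n8n: create an OpenAI credential with
#   Base URL: http://localhost:11434/v1
#   API Key:  any non-empty string (Ollama ignores it)
# then select "llama3" as the model in your AI node.
```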

MegaOne AI Editorial Team

MegaOne AI's editorial team monitors and scores 200+ sources daily to surface the most important AI developments. Every story is fact-checked, linked to primary sources, and rated using our six-factor Engine Score methodology.
