Agentic AI Marketing Workflows: Enterprise Implementation Guide
Build an enterprise AI content pipeline with AI agents: brand governance hooks, martech integrations for HubSpot and Salesforce, multi-brand skills, and parallel publishing at 50+ posts per week.
yfxmarketer
February 18, 2026
Your content team produces 50+ posts per week across LinkedIn, Twitter, email, and paid channels. You have HubSpot, Salesforce Marketing Cloud, and Marketo running in parallel. Brand governance requires legal review for regulated claims and regional compliance rules. Three brands share the same team.
Claude Code with the right architecture handles all of this. Not theoretically. Today. This guide gives enterprise Marketing Ops and Content teams the exact setup: hooks for automated governance, skills for each brand, martech stack integrations, quality scoring gates, and parallel sub-agents for multi-platform publishing at scale.
TL;DR
Enterprise marketing teams at 100+ headcount use Claude Code as a content engine connected to their existing martech stack via a three-layer architecture: Claude Code generates and governs content, an orchestration layer (n8n or Make.com) routes it through approval workflows, and platform APIs (HubSpot, SFMC, Marketo, LinkedIn) receive the output. Hooks enforce brand standards automatically. Skills isolate brand configurations. Parallel sub-agents cut multi-platform scheduling time by half. This guide covers every layer, with copy-paste prompts and hook configurations throughout.
Key Takeaways
- Claude Code hooks fire automatically at specific lifecycle events (SessionStart, PreToolUse, PostToolUse, Stop, and others), giving Marketing Ops automated control over every content write before it reaches any publishing endpoint
- Multi-brand pipelines require brand-isolated SKILL.md files with separate voice guides, terminology dictionaries, and compliance overlays sharing common infrastructure
- HubSpot connects via official MCP server (OAuth 2.1); Salesforce Marketing Cloud connects via Cequence AI Gateway; Marketo connects via Inflection.io MCP (note: SOAP API deprecation is March 31, 2026)
- n8n self-hosted at 50 posts/week costs $20-50/month in VPS fees versus $104/month on Zapier for the same volume
- A weighted quality scoring gate (brand voice 25%, compliance 15%, SEO 20%, readability 15%, engagement prediction 10%, grammar 15%) with a 75/100 pass threshold is the difference between a scalable AI content operation and a liability
- The credential proxy pattern keeps API keys for HubSpot, SFMC, Marketo, and LinkedIn out of Claude Code’s security boundary entirely
- Marketing teams using AI content systems with governance infrastructure in place achieve 30% higher campaign ROI than those running AI without governance (McKinsey, 2025)
What This Guide Is For and Who Should Read It
This guide is written for Marketing Ops leads, MarTech architects, Content Ops managers, and CMOs at B2B technology companies with 50 to 500 person marketing teams. It assumes you have an existing martech stack, brand guidelines, content approval workflows, and a real volume problem: more content to produce than your team can deliver at current headcount.
The system described here replaces manual content drafting, visual generation, platform formatting, and scheduling coordination with an automated pipeline. Human judgment stays in the loop for final approval. The AI handles the volume.
The Enterprise Architecture: Three Layers
An enterprise AI content pipeline built on Claude Code has three layers. Understanding the boundary between them is the first decision Marketing Ops needs to make before building anything.
Layer one is Claude Code as the content engine. It generates copy, scores quality, enforces brand standards via hooks, and triggers the next layer when content is approved. Layer two is the orchestration layer, either n8n (recommended for data sovereignty and cost at scale) or Make.com. It routes approved content through approval workflows, calls platform APIs, handles retries, and manages scheduling logic. Layer three is your platform APIs: HubSpot for CRM-connected content, Salesforce Marketing Cloud or Marketo for email campaigns, LinkedIn for organic and paid content, and a unified social publishing API (such as Ayrshare or similar) for cross-platform scheduling.
Claude Code and your orchestration layer talk to each other through webhooks. Claude Code generates content and calls an n8n webhook endpoint. n8n applies business logic (routing to the right brand approval chain, calling the right platform API) and responds with confirmation. The AI handles content intelligence. The orchestration layer handles operational logic.
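The handoff payload itself can stay small. A minimal shape is sketched below; the field names are illustrative, not a fixed n8n schema, and your own webhook can expect whatever structure you define:

```json
{
  "brand": "brand-a",
  "platform": "linkedin",
  "draft_path": "drafts/brand-a-2026-02-18-linkedin-draft.md",
  "quality_score": 82,
  "requested_action": "route_approval"
}
```

In this sketch, n8n would read brand and platform to pick the right approval chain, then respond with a confirmation and tracking ID that Claude Code logs.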
Action item: Before building, map your existing approval workflow on paper. Who approves what for each brand? What compliance rules apply per channel? This map becomes your hook and orchestration configuration.
Project Folder Structure for Enterprise Multi-Brand Teams
Enterprise content pipelines require a specific folder structure before you write a single line of configuration. This structure isolates brand configurations, enforces channel rules, and gives Claude Code a predictable place to load context at the start of every session.
Set up your project root with this structure:
/ai-content-pipeline
├── /.claude
│   ├── hooks.json               # All hook configurations live here
│   ├── settings.json            # Permission rules and MCP server allowlist
│   └── /hooks
│       ├── brand-voice-check.sh # Compliance regex scanner
│       ├── load-context.sh      # Loads brand config at session start
│       ├── quality-score.py     # Weighted scoring gate
│       └── route-approval.sh    # Triggers n8n approval webhook
├── /brand-configs
│   ├── brand-a
│   │   ├── voice-guide.md       # Tone, vocabulary, sentence structure, examples
│   │   ├── terminology.json     # Approved terms, prohibited terms, product names
│   │   ├── compliance.md        # Industry-specific regulatory rules
│   │   └── golden-set/          # 15-20 reference posts per platform
│   ├── brand-b
│   │   └── (same structure)
│   └── brand-c
│       └── (same structure)
├── /channel-templates
│   ├── linkedin.md              # Format rules, character limits, CTA patterns
│   ├── twitter.md
│   ├── email-nurture.md
│   └── paid-ad.md
├── /skills
│   ├── post.md                  # SKILL.md for single-post generation
│   └── plan-week.md             # SKILL.md for weekly content planning
├── /drafts                      # Generated content staging area
├── /approved                    # Content cleared for publishing
├── /published                   # Post-publish log with URLs and metadata
└── settings.local.json          # API keys (never commit to version control)
This structure gives every hook a known path to load. The SessionStart hook loads /brand-configs/{ACTIVE_BRAND}/voice-guide.md on initialization. The PreToolUse hook reads /brand-configs/{ACTIVE_BRAND}/compliance.md before every write. The quality scoring hook references /brand-configs/{ACTIVE_BRAND}/golden-set/ for comparison. Structure drives consistency at scale.
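The skeleton can be scaffolded in one shell pass. Brand names below are placeholders; hook scripts live under .claude/hooks so the paths match the commands in hooks.json:

```shell
# Create the pipeline skeleton; run from the parent directory of the project
for b in brand-a brand-b brand-c; do
  mkdir -p "ai-content-pipeline/brand-configs/$b/golden-set"
done
mkdir -p ai-content-pipeline/.claude/hooks \
         ai-content-pipeline/channel-templates \
         ai-content-pipeline/skills \
         ai-content-pipeline/drafts \
         ai-content-pipeline/approved \
         ai-content-pipeline/published
```
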
Action item: Create this folder structure before running any Claude Code prompts. Populate each brand’s voice-guide.md and terminology.json with your actual brand standards. Empty configuration files produce generic output regardless of model quality.
Step 1: Install Claude Code and Configure Enterprise Permissions
Claude Code installs as an extension inside Visual Studio Code. Open VS Code, click the Extensions icon in the left Activity Bar, search "Claude," and install the official Anthropic extension (the listing with 4+ million downloads). An orange Claude Code icon appears top-right. Click it, type /login, and complete authentication.
For enterprise teams, the $100/month Claude Max plan is the practical minimum for agentic workflows. Tool-heavy sessions involving multiple API calls, web fetches, and file writes consume quota fast. The $200/month plan is appropriate for Marketing Ops leads running daily batch content sessions. Teams on the lower tier should batch their sessions: one two-hour session produces a full week of content at significantly lower quota consumption than running small requests throughout the week.
Model selection matters for quality. Use claude-opus-4-6 for all content generation, brand voice matching, and quality scoring. Reserve claude-haiku-4-5 for fast, deterministic checks like character limit validation and regex scanning inside hooks. This split optimizes for both quality and cost per hook execution.
Configure Permissions Without Constant Prompts
Claude Code requests explicit approval for almost every bash command and file write by default. At enterprise scale with hundreds of operations per session, this creates unacceptable friction. Run this permissions configuration prompt once per project to authorize non-destructive commands automatically:
SYSTEM: You are configuring Claude Code project settings for an enterprise marketing content pipeline.
Update .claude/settings.json for this project with the following permission rules:
ALLOW automatically (no confirmation needed):
- Read operations on all files in /brand-configs, /channel-templates, /skills, /.claude/hooks
- Write operations to /drafts directory only
- Bash commands: cat, ls, grep, python scripts in the /.claude/hooks directory
- Web search for topic research
- Webhook calls to our n8n instance at {{N8N_WEBHOOK_BASE_URL}}
ASK before executing:
- Write operations to /approved or /published directories
- Any external API calls not on the allowlist
- File deletion operations
DENY always:
- Access to settings.local.json (contains API keys)
- Operations on ~/.ssh or any credential directory
- Git commits or pushes
Scope these rules to this project only, not globally.
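For reference, the generated file typically follows Claude Code's permissions schema, with allow, ask, and deny arrays of tool-matcher rules. The rule syntax below is abbreviated and should be verified against the current Claude Code settings documentation before you rely on it:

```json
{
  "permissions": {
    "allow": [
      "Read(brand-configs/**)",
      "Read(channel-templates/**)",
      "Write(drafts/**)",
      "Bash(cat:*)",
      "Bash(grep:*)"
    ],
    "ask": [
      "Write(approved/**)",
      "Write(published/**)"
    ],
    "deny": [
      "Read(settings.local.json)"
    ]
  }
}
```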
Set your mode to Plan (Ask Before Edits) as the default. Switch to Edit Automatically only after reviewing a specific plan and approving it. Marketing Ops leads who run in Edit mode by default report far more unintended overwrites of approved content. The extra step of confirming edits is worth it when the content going into your publishing pipeline is what gets scheduled.
Action item: Run the permissions configuration prompt and verify settings.json was created correctly. Open it and confirm the /drafts write rule and the settings.local.json deny rule are both present.
Step 2: Build the Brand Voice Hook System
Claude Code hooks are scripts that run automatically at specific lifecycle events without any human intervention. They do not wait for someone to remember to run a quality check. They fire before every write, every publish attempt, and every session start. For enterprise marketing, hooks are the primary mechanism for brand governance at scale.
The hooks system fires at several lifecycle events; four are most relevant to content pipelines:
- SessionStart fires when Claude Code initializes, loading brand context before any work begins
- PreToolUse fires before any file write, enabling real-time brand voice validation and compliance scanning
- PostToolUse fires after a write completes, triggering quality scoring and approval routing
- Stop fires when the generation agent finishes a task, triggering the publishing pipeline
Here is the complete hooks.json configuration for a multi-brand content pipeline. This file lives in .claude/hooks.json:
{
  "hooks": {
    "SessionStart": [
      {
        "hooks": [
          {
            "type": "command",
            "command": ".claude/hooks/load-context.sh"
          }
        ]
      }
    ],
    "PreToolUse": [
      {
        "matcher": "Write|Edit",
        "hooks": [
          {
            "type": "command",
            "command": ".claude/hooks/brand-voice-check.sh"
          }
        ]
      }
    ],
    "PostToolUse": [
      {
        "matcher": "Write",
        "hooks": [
          {
            "type": "command",
            "command": "python .claude/hooks/quality-score.py"
          },
          {
            "type": "command",
            "command": ".claude/hooks/route-approval.sh"
          }
        ]
      }
    ]
  }
}
The SessionStart Context Loader
This script runs at the start of every session. It loads the active brand's voice guide, compliance rules, and terminology into temp files that other hooks reference. Set the ACTIVE_BRAND environment variable to switch between brands:
#!/bin/bash
# .claude/hooks/load-context.sh
ACTIVE_BRAND="${ACTIVE_BRAND:-brand-a}"
BRAND_DIR="./brand-configs/${ACTIVE_BRAND}"
echo "Loading brand context for: ${ACTIVE_BRAND}"
cat "${BRAND_DIR}/voice-guide.md" > /tmp/active-brand-voice.md
cat "${BRAND_DIR}/compliance.md" > /tmp/active-brand-compliance.md
cat "${BRAND_DIR}/terminology.json" > /tmp/active-brand-terms.json
echo "Context loaded successfully"
The Pre-Write Compliance Scanner
This hook runs before every file write. It uses regex to catch regulated claims, prohibited terms, and competitor mentions, returning exit code 2 to block the write and report the specific violation. For regulated industries (fintech, healthtech, legal tech), this is the first line of defense that keeps compliance from reviewing 50 posts manually:
#!/bin/bash
# .claude/hooks/brand-voice-check.sh
INPUT=$(cat)
CONTENT=$(echo "$INPUT" | jq -r '.tool_input.content // empty')
if [ -z "$CONTENT" ]; then
  exit 0
fi
# Check content against the prohibited-terms list from the active brand config
while IFS= read -r term; do
  [ -z "$term" ] && continue
  if echo "$CONTENT" | grep -qiF "$term"; then
    echo "{\"decision\":\"block\",\"reason\":\"Prohibited term detected: ${term}. See terminology.json.\"}"
    exit 2
  fi
done <<< "$(jq -r '.prohibited[]?' /tmp/active-brand-terms.json 2>/dev/null)"
# Check for non-compliant performance claims
if echo "$CONTENT" | grep -qiE "guaranteed results|100% effective|risk-free|outperforms all"; then
  echo '{"decision":"block","reason":"Non-compliant performance claim detected. Remove or qualify the claim before writing."}'
  exit 2
fi
# Check for regulated financial/health language (customize per industry)
if echo "$CONTENT" | grep -qiE "guaranteed return|cure|diagnose|prevent disease"; then
  echo '{"decision":"block","reason":"Regulated language detected. Route to Legal before including this claim."}'
  exit 2
fi
# Check product name casing: block when the name appears only in the wrong casing
PRODUCT_NAME=$(jq -r '.product_name // empty' /tmp/active-brand-terms.json)
if [ -n "$PRODUCT_NAME" ]; then
  if echo "$CONTENT" | grep -qi "$PRODUCT_NAME" && ! echo "$CONTENT" | grep -q "$PRODUCT_NAME"; then
    echo '{"decision":"block","reason":"Incorrect product name casing. Use exact capitalization from brand guidelines."}'
    exit 2
  fi
fi
exit 0
Action item: Populate your terminology.json with your actual prohibited terms, competitor names to avoid mentioning, and exact product name strings. The hook does nothing useful with empty configuration files.
Step 3: Build the /post Skill for Single-Post Generation
The /post skill is the single-post content engine. It accepts a topic, a campaign brief, a research summary, or an existing draft. It applies brand voice, generates a visual asset if requested, runs through the quality gate, and routes to your approval workflow.
Paste this prompt into Claude Code to create the skill. It is long because the clarifying question instruction at the end does most of the work of filling gaps you did not anticipate:
SYSTEM: You are building a Claude Code skill for enterprise content marketing.
Create a new skill at ./skills/post.md with the following specifications:
Name: post
Description: Enterprise social media post generator for B2B technology marketing teams. Accepts topics, campaign briefs, research briefs, or existing drafts. Generates platform-optimized copy with brand voice, optionally generates visual assets via your configured image generation API, runs quality scoring, and routes to approval workflow. Supports LinkedIn, Twitter, email preview, and paid ad copy.
How it works:
1. Parse user intent from whatever input is provided
2. If a research brief or topic is provided, use web search to gather supporting data and angles
3. Load active brand voice from /tmp/active-brand-voice.md
4. Generate 3 post variants per platform requested, applying brand voice guidelines
5. If visual requested, call the configured image generation API using brand's approved asset templates
6. Run quality scoring gate - block if score below 75, surface feedback if 60-74
7. Output drafts to ./drafts/[BRAND]-[DATE]-[PLATFORM]-draft.md
8. If score passes, call approval webhook at {{N8N_WEBHOOK_URL}}/content-approval
Scheduling options:
- publish_now: immediate publish
- schedule_date: ISO 8601 datetime
- next_open_slot: publishes at the next configured slot in your scheduling platform
API credentials are in settings.local.json.
Social publishing API key: settings.local.json > social_api_key
n8n webhook base URL: settings.local.json > n8n_webhook_url
Maintain a published post log at ./published/post-log.md including: post URL, platform, brand, publish date, platform post ID, and approval chain used.
End behavior: Ask clarifying questions one at a time until you are 95% confident you can complete the task successfully.
What Goes Inside Each Brand’s Voice Guide
The voice guide is what separates generic AI output from content your audience recognizes as yours. A minimal enterprise voice guide has five sections. Each section should be populated by your brand or content team, not guessed by Claude:
# Brand Voice Guide: [BRAND NAME]
## Tone Attributes
Primary: [e.g., Direct, Data-driven, Confident]
Secondary: [e.g., Conversational, Slightly contrarian]
Avoid: [e.g., Hyperbolic, Jargon-heavy, Passive]
## Sentence Structure Rules
- Max sentence length: 20 words
- Lead with conclusion, then explanation
- Active voice required
- Short paragraphs: 2-3 sentences max
- LinkedIn hook: first line must be a standalone statement under 12 words
## Banned Phrases (Examples - Replace With Yours)
- "game-changer", "revolutionary", "best-in-class"
- "seamless", "robust", "synergy"
- "we are excited to announce"
- em dashes anywhere
## Platform-Specific Rules
LinkedIn: Professional tone, data points in first sentence, 3-5 hashtags, 1200 chars max for algorithm reach
Twitter: Punchy, 240 chars max, 1-2 hashtags, thread format for anything over 280 chars
Email subject lines: 6-10 words, no clickbait, front-load the benefit
## Reference Posts (Golden Set)
[Paste 15-20 of your best-performing posts per platform here]
Action item: Have your brand or content team complete voice guides for every brand before running the skill. Test by running /post [topic] LinkedIn for each brand, comparing output to your golden set examples.
Step 4: Build the Quality Scoring Gate
The quality scoring gate is the most important technical component in the pipeline. It is a PostToolUse hook that evaluates every draft before routing it to approval. Posts scoring below 60 are blocked. Posts scoring 60-74 route to human review with specific improvement notes. Posts scoring 75 and above route to the fast-track approval chain.
This Python script implements a weighted six-dimension scoring system. Save it as .claude/hooks/quality-score.py:
#!/usr/bin/env python3
# .claude/hooks/quality-score.py
import json
import sys
import re

def flesch_kincaid_grade(text):
    """Calculate readability grade level. Target: Grade 8 or below for B2B general content."""
    sentences = len(re.findall(r'[.!?]+', text)) or 1
    words = len(text.split()) or 1
    syllables = sum(count_syllables(w) for w in text.split()) or 1
    score = 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    return max(0, min(20, score))

def count_syllables(word):
    word = word.lower().strip(".,!?;:")
    if len(word) <= 3:
        return 1
    vowels = re.findall(r'[aeiouy]+', word)
    return max(1, len(vowels))

def score_readability(text, target_grade=8):
    """Map the grade level onto a 0-100 score."""
    grade = flesch_kincaid_grade(text)
    if grade <= target_grade:
        return 100
    elif grade <= target_grade + 2:
        return 75
    elif grade <= target_grade + 4:
        return 50
    return 25

def score_compliance(text):
    """Check for non-compliant claims. Returns 100 if clean, 0 if violations found."""
    violations = [
        r'guaranteed\s+results',
        r'100%\s+effective',
        r'risk.free',
        r'outperforms\s+all',
        r'best\s+in\s+class',
        r'revolutionary',
    ]
    for pattern in violations:
        if re.search(pattern, text, re.IGNORECASE):
            return 0
    return 100

def score_brand_voice(text, voice_guide_path="/tmp/active-brand-voice.md"):
    """LLM-powered brand voice scoring via Claude Haiku for speed."""
    try:
        with open(voice_guide_path, 'r') as f:
            voice_guide = f.read()
        # In production: call Claude Haiku API here, passing voice_guide as context
        # For now: basic heuristic checks
        score = 100
        if any(len(s.split()) > 25 for s in text.split('.')):
            score -= 20  # Sentences too long
        if re.search(r'—', text):
            score -= 15  # Em dashes banned
        if re.search(r'we are excited|we are pleased|thrilled to', text, re.IGNORECASE):
            score -= 25  # Generic corporate language
        return max(0, score)
    except OSError:
        return 70  # Default if voice guide not loaded

def score_seo(text):
    """Basic SEO structure check."""
    score = 50  # Base
    if len(text.split()) >= 150:
        score += 20  # Sufficient length
    if re.search(r'https?://', text):
        score += 15  # Contains links
    if len(re.findall(r'#\w+', text)) >= 2:
        score += 15  # Has hashtags (social)
    return min(100, score)

def calculate_weighted_score(text):
    weights = {
        'brand_voice': 0.25,
        'compliance': 0.15,
        'seo': 0.20,
        'readability': 0.15,
        'grammar': 0.15,    # Simplified: proxy with sentence structure
        'engagement': 0.10  # Simplified: proxy with hook strength
    }
    first_sentence = text.split('.')[0]
    scores = {
        'brand_voice': score_brand_voice(text),
        'compliance': score_compliance(text),
        'seo': score_seo(text),
        'readability': score_readability(text),  # already a 0-100 score
        'grammar': 80,  # Replace with LanguageTool API call in production
        'engagement': 70 if first_sentence and len(first_sentence.split()) <= 12 else 50
    }
    weighted = sum(scores[k] * weights[k] for k in scores)
    return round(weighted), scores

def main():
    input_data = json.loads(sys.stdin.read())
    content = input_data.get('tool_input', {}).get('content', '')
    if not content or len(content) < 50:
        sys.exit(0)
    total_score, dimension_scores = calculate_weighted_score(content)
    result = {
        "total_score": total_score,
        "dimensions": dimension_scores,
        "decision": "approve" if total_score >= 75 else ("review" if total_score >= 60 else "block"),
        "feedback": []
    }
    if dimension_scores['compliance'] == 0:
        result['feedback'].append("COMPLIANCE VIOLATION: Remove or qualify flagged claims before proceeding.")
    if dimension_scores['brand_voice'] < 70:
        result['feedback'].append("BRAND VOICE: Rewrite to match active brand voice guide. Check sentence length and banned phrases.")
    if dimension_scores['readability'] < 60:
        result['feedback'].append("READABILITY: Simplify sentence structure. Target Grade 8 or below.")
    print(json.dumps(result))
    sys.exit(2 if result['decision'] == 'block' else 0)

if __name__ == "__main__":
    main()
Three outcomes exit this gate. Exit code 2 blocks the write and returns detailed feedback. Exit code 0 with decision: review writes the draft to /drafts with a review flag and triggers the standard approval chain in n8n. Exit code 0 with decision: approve writes to /drafts and triggers the fast-track approval chain. In practice, posts from a well-calibrated brand voice guide score above 75 roughly 60-70% of the time on first generation, climbing to 85%+ after your team has tuned the voice guide with real examples.
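The route-approval.sh hook referenced in hooks.json only needs to map that decision onto an approval chain and hand it to n8n. A minimal routing helper is sketched below; the chain names and the webhook path are this guide's example values, and the curl hand-off is shown commented because your endpoint will differ:

```shell
# Map a 0-100 quality score to the approval chain the n8n webhook expects
choose_chain() {
  local score="$1"
  if [ "$score" -ge 75 ]; then
    echo "fast-track"
  elif [ "$score" -ge 60 ]; then
    echo "standard-review"
  else
    echo "blocked"
  fi
}

CHAIN=$(choose_chain 82)
echo "$CHAIN"
# Hand off to n8n (placeholder URL):
# curl -s -X POST "${N8N_WEBHOOK_URL}/content-approval" \
#   -H "Content-Type: application/json" \
#   -d "{\"chain\": \"$CHAIN\"}"
```
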
Action item: Run the quality scoring hook against 10 posts you consider on-brand and 10 you would reject. Adjust the scoring weights and thresholds until the gate correctly separates them.
Step 5: Connect to Your Martech Stack
Claude Code connects to enterprise martech via MCP servers. Each platform has a distinct integration path. Here is the setup for the four platforms most common in B2B tech marketing stacks.
HubSpot Integration
HubSpot provides an official MCP server for Claude integration. Access it through Claude’s Settings, then Integrations. The native connector requires OAuth authentication tied to your HubSpot portal. For programmatic control from Claude Code, add the MCP server to your .claude/settings.json:
{
"mcpServers": {
"hubspot": {
"command": "npx",
"args": ["@scopiousdigital/server-hubspot"],
"env": {
"HUBSPOT_ACCESS_TOKEN": "${HUBSPOT_ACCESS_TOKEN}"
}
}
}
}
With HubSpot connected, Claude Code creates blog posts, updates contact properties, triggers workflows, and reads CRM data to personalize content. The most useful Marketing Ops pattern: pull contact segment data from HubSpot CRM, pass it to Claude Code as context, and generate personalized content variants for each segment in a single session.
Salesforce Marketing Cloud Integration
SFMC integration uses the Cequence AI Gateway MCP server, which translates Claude’s natural language requests into SFMC REST API calls. Setup requires four values from your SFMC Installed Package (create under Setup > Apps > Installed Packages, Server-to-Server integration type):
# Store in settings.local.json or your secrets manager
SFMC_CLIENT_ID=your_client_id
SFMC_CLIENT_SECRET=your_client_secret
SFMC_AUTH_BASE_URI=https://your_subdomain.auth.marketingcloudapis.com
SFMC_MID=your_business_unit_mid
The Cequence MCP server handles token refresh, so you authenticate once and the server maintains the session. Claude Code calls the server using natural language: “Create an email in Journey Builder for the Q3 nurture campaign, audience: enterprise accounts in the pipeline.” The server translates this to the correct SFMC API sequence.
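Under the hood, that authentication is a single OAuth2 client-credentials call to your subdomain's v2/token endpoint, using the four Installed Package values above. Here is a sketch of the request the gateway makes on your behalf; treat the exact payload fields as an assumption to verify against SFMC's API documentation:

```python
import json
from urllib import request

def sfmc_token_request(auth_base_uri, client_id, client_secret, mid):
    """Build (not send) the OAuth2 client-credentials request for SFMC's v2/token endpoint."""
    payload = {
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "account_id": mid,  # scopes the token to one business unit (MID)
    }
    return request.Request(
        url=auth_base_uri.rstrip("/") + "/v2/token",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Placeholder values; the real ones come from your Installed Package
req = sfmc_token_request(
    "https://your_subdomain.auth.marketingcloudapis.com",
    "your_client_id", "your_client_secret", "your_business_unit_mid",
)
```
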
Marketo Integration
Critical note for all teams using Marketo: the SOAP API is deprecated and shuts off on March 31, 2026. Any existing Claude Code automation that calls SOAP endpoints breaks on that date. Rebuild those integrations against REST API v1 before the deadline.
For new Marketo integrations, use the Inflection.io MCP Server. It is free, open-source, and connects in under 10 minutes:
npm install -g @inflection-ai/marketo-mcp
marketo-mcp configure --base-url https://your-instance.mktorest.com --client-id YOUR_ID --client-secret YOUR_SECRET
Once connected, Claude Code creates and approves Marketo assets, updates email content, and triggers smart campaign membership changes. Marketo’s asset workflow requires three steps for emails: Create, then Update Content, then Approve. The Inflection.io server handles this sequence automatically when Claude Code issues a single create instruction.
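That three-step sequence maps onto three asset-API calls. The endpoint paths below follow Marketo's REST asset API as commonly documented; confirm them against your instance's docs before wiring anything up:

```python
def marketo_email_sequence(email_id):
    """Return the create -> update content -> approve call sequence for one email draft."""
    return [
        ("POST", "/rest/asset/v1/emails.json"),                          # 1. create the draft
        ("POST", f"/rest/asset/v1/email/{email_id}/content.json"),       # 2. update content
        ("POST", f"/rest/asset/v1/email/{email_id}/approveDraft.json"),  # 3. approve for sending
    ]

steps = marketo_email_sequence(1042)  # 1042 is a hypothetical asset ID
```
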
LinkedIn Marketing API
No mature MCP server exists for LinkedIn Campaign Manager. Claude Code calls the LinkedIn Marketing API directly via HTTP requests inside skills and hooks. All Marketing API calls require three-legged OAuth and the current API version header. As of February 2026, use version li-lms-2025-11. LinkedIn releases new API versions monthly and sunsets older ones on a rolling schedule, so keep the version string in a variable inside your skills rather than hardcoding it:
# Inside a skill or hook that posts to a LinkedIn Company Page
LINKEDIN_API_VERSION="li-lms-2025-11"
curl -X POST "https://api.linkedin.com/rest/posts" \
  -H "LinkedIn-Version: ${LINKEDIN_API_VERSION}" \
  -H "Authorization: Bearer ${LINKEDIN_ACCESS_TOKEN}" \
  -H "Content-Type: application/json" \
  -d "{
    \"author\": \"urn:li:organization:${ORG_ID}\",
    \"commentary\": \"${POST_CONTENT}\",
    \"visibility\": \"PUBLIC\",
    \"distribution\": {\"feedDistribution\": \"MAIN_FEED\"},
    \"lifecycleState\": \"PUBLISHED\"
  }"
Action item: Store all API credentials in your secrets manager (HashiCorp Vault, AWS Secrets Manager, or Azure Key Vault) and reference them as environment variables. Never put API keys directly in settings.local.json if your project directory syncs to version control.
Step 6: Choose Your Orchestration Layer
Claude Code generates content and enforces brand governance. The orchestration layer handles everything else: routing drafts to the right approver, calling platform APIs in the right sequence, managing retries when an API times out, scheduling posts into calendar slots, and logging every action for audit purposes.
At 50 posts per week across two to three platforms, cost and operational complexity drive the platform decision.
n8n self-hosted on a $20-50/month VPS handles 200+ post executions per month with unlimited workflow complexity. It connects natively to HubSpot, Salesforce, Marketo, LinkedIn, and social publishing APIs via pre-built nodes. The Claude node accepts API calls directly. For enterprise data sovereignty requirements, self-hosted n8n keeps all marketing data within your network perimeter. This is the right choice for most enterprise teams at this scale.
Make.com offers a better visual interface and more sophisticated error handling with visual error paths and alternative pathway logic. The Core plan at $11/month covers 10,000 operations. For mixed technical and non-technical teams where Marketing Ops without engineering backgrounds need to maintain workflows, Make.com’s visual builder is easier to hand off.
Zapier costs significantly more at scale ($104/month for 2,000 tasks) but offers an MCP server that lets Claude Code directly trigger any of Zapier’s 8,000+ integrations. For accessing niche platform integrations not available in n8n or Make, the Zapier MCP server is worth running as a secondary layer alongside your primary orchestration tool.
The recommended enterprise setup: n8n self-hosted as the primary orchestration layer for all high-volume, well-defined workflows, plus Zapier MCP for one-off integrations with platforms not supported by n8n.
Action item: Deploy n8n on a VPS (DigitalOcean or Hetzner are cost-effective). Create three webhook endpoints: one for content approval routing, one for scheduled publishing confirmation, and one for error and rejection notifications back to the content team.
Step 7: Build the /plan-week Skill for Batch Content
The /plan-week skill is how Marketing Ops batches an entire week of content in one session. A content lead sits down for 90 minutes on Monday morning, runs one command, reviews and edits the full week’s draft content, approves it, and walks away with everything scheduled.
This prompt creates the /plan-week skill. The clarifying questions at the end are essential because enterprise multi-brand teams have more edge cases than a single prompt covers:
SYSTEM: You are building a Claude Code skill for enterprise weekly content planning.
Create a new skill at ./skills/plan-week.md with the following specifications:
Name: plan-week
Description: Enterprise weekly content calendar generator for B2B technology marketing teams. Accepts a topic, a set of topics, a campaign brief, a research summary, or a combination of these. Generates a full week of content drafts for all connected platforms, schedules visual assets, outputs a reviewable plan document, and awaits approval before scheduling.
How it works:
1. Parse input to identify content themes, topics, and angles
2. If a research brief or campaign brief is provided, extract distinct angles for each day
3. Load active brand voice and channel templates
4. Generate platform-specific draft posts:
- LinkedIn: one post per weekday (Mon-Fri), professional tone, data-driven hooks
- Twitter: one thread or standalone tweet per weekday
- Email preview copy: one subject line + preview text per campaign email planned this week
5. For posts requiring visual assets, call your configured image generation API and queue the jobs
6. Output the complete content plan to ./drafts/week-plan-[DATE].md with:
- Day-by-day schedule
- Draft copy for each post per platform
- Visual asset job IDs and generation status
- Quality scores per post
- Approval status per post
7. Present plan and wait for explicit approval before scheduling anything
8. On approval, use parallel sub-agents (one per platform) to schedule all posts at configured publishing times
9. Log all scheduled post URLs to ./published/post-log.md
Schedule to next available publishing slot by default unless a specific date is requested.
Brand context loads from /tmp/active-brand-voice.md (set by SessionStart hook).
Social publishing API key and n8n webhook URL are in settings.local.json.
End behavior: Ask clarifying questions one at a time until you are 95% confident you can complete the task successfully.
How to Run a Weekly Content Session
A typical enterprise content session using /plan-week follows this pattern:
- Open Claude Code and set ACTIVE_BRAND=brand-a in your environment
- Run /plan-week [campaign brief or topic list]
- Claude Code processes your input, generates 10 posts across platforms, queues visual asset jobs, scores each post, and writes the full plan to ./drafts/week-plan-[DATE].md
- Open the week plan file in VS Code. Select any draft you want to change and prompt Claude Code to rewrite only that section. VS Code shows the selected line count at the bottom so Claude Code knows exactly which content to touch.
- Review quality scores in the plan document. Posts flagged for review show specific feedback from the scoring gate.
- When satisfied, type approve in Claude Code. Parallel sub-agents spin up, one per platform, and schedule all posts at your configured publishing times.
- Confirm all posts appear in your scheduling platform's calendar view.
The full session from input to approved schedule takes 60 to 90 minutes. Time saved versus manual drafting, formatting, visual creation, and scheduling: 8 to 15 hours per week depending on content volume and platform count.
Configuring Your Publishing Schedule
Your social publishing API stores default posting times per platform. Configure these once and the /plan-week skill fills them automatically on every approval. For enterprise teams managing multiple brands, set separate schedules per brand and per platform based on audience overlap and platform-specific frequency norms.
LinkedIn reach falls when you post more than once per day per company page. One post per weekday per brand is the right cadence. Twitter tolerates much higher frequency. Start with two posts per weekday and adjust based on engagement data. For paid channels, coordinate publishing times with your media buy schedule to maximize organic amplification during campaign windows.
Store your preferred posting times in your scheduling platform’s configuration and reference the next_open_slot parameter in your skill to fill the next available time automatically.
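The next_open_slot behavior can be sketched in plain Python. Everything below is illustrative: the SCHEDULE dictionary and the function itself are hypothetical stand-ins for whatever your scheduling platform actually stores, not a real API.

```python
from datetime import datetime, timedelta

# Hypothetical per-platform posting schedule (local times, weekdays only),
# matching the cadences discussed above: one LinkedIn post per weekday,
# two Twitter posts per weekday.
SCHEDULE = {
    "linkedin": ["09:00"],
    "twitter": ["08:30", "16:00"],
}

def next_open_slot(platform, now, booked):
    """Return the next unbooked posting slot for a platform, skipping weekends."""
    day = now
    for _ in range(14):  # look at most two weeks ahead
        if day.weekday() < 5:  # Monday-Friday only
            for hhmm in SCHEDULE[platform]:
                h, m = map(int, hhmm.split(":"))
                slot = day.replace(hour=h, minute=m, second=0, microsecond=0)
                if slot > now and slot not in booked:
                    return slot
        day += timedelta(days=1)
    return None  # no open slot in the lookahead window
```

A request made Friday evening, for example, resolves to Monday morning's slot rather than queuing over the weekend.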
Action item: Run /plan-week with your next campaign brief or a list of five content angles for the week. Compare the generated drafts against posts your content team wrote manually on similar topics. This comparison reveals exactly where to strengthen your voice guide.
Step 8: Parallel Sub-Agents for Multi-Platform Publishing
Sequential publishing means each platform waits for the previous one to finish. At 10 posts per week across two platforms, that is 20 sequential API calls. At 50 posts per week across four platforms, sequential publishing creates enough latency to miss preferred publishing windows when your scheduling platform’s queue fills faster than your publishing loop runs.
Parallel sub-agents solve this by running one agent per platform simultaneously. Each agent adapts the post copy for its platform, runs the platform-specific quality gate, and calls the publishing API independently. All four platforms finish in roughly the time it takes one platform to finish sequentially.
Update the /post skill to support the all keyword as a platform input:
Update ./skills/post.md to add parallel sub-agent publishing:
When the user specifies "all" as the platform, or when /plan-week approves a full week of content:
1. Read the list of connected platforms from settings.local.json > connected_platforms
2. For each platform in the list, spawn a parallel sub-agent with:
- The platform-specific draft from the content plan
- The platform's channel template rules
- The platform's API credentials
- Instruction to call quality gate before publishing
3. Each sub-agent publishes or schedules independently and does not wait for other agents
4. Each sub-agent logs its result (URL, post ID, timestamp, platform, score) to ./published/post-log.md
5. Report all results when every sub-agent completes
If any sub-agent fails, it logs the error and retries once before reporting the failure without blocking other agents from completing.
The retry behavior is important for enterprise reliability. LinkedIn’s Marketing API returns rate limit errors during peak hours. The single retry with failure logging (rather than blocking the entire batch) keeps your schedule intact while flagging the specific post for manual republishing.
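Under the hood, this fan-out is ordinary concurrent execution with per-worker retry. A minimal sketch, assuming a hypothetical publish_all helper and a caller-supplied publish_fn that wraps the real platform API call:

```python
import concurrent.futures

def publish_all(drafts, publish_fn, retries=1):
    """Run one publishing worker per platform in parallel.
    Each worker retries once on failure and never blocks the others."""
    def worker(platform, post):
        last_error = "unknown error"
        for _ in range(retries + 1):
            try:
                return {"platform": platform, "status": "ok",
                        "result": publish_fn(platform, post)}
            except Exception as exc:
                last_error = str(exc)
        # Retries exhausted: report the failure without raising, so the
        # batch result still includes every platform.
        return {"platform": platform, "status": "failed", "error": last_error}

    with concurrent.futures.ThreadPoolExecutor(max_workers=len(drafts)) as pool:
        futures = [pool.submit(worker, p, d) for p, d in drafts.items()]
        return [f.result() for f in futures]
```

A transient rate-limit error on one platform succeeds on the retry; a persistently failing platform shows up as a single failed entry in the results while every other platform completes normally.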
Action item: Test parallel publishing with two platforms and three posts each before scaling to full volume. Confirm all six posts appear in your scheduling platform’s calendar and that post-log.md captured all URLs and platform post IDs.
Step 9: Enterprise Security Architecture
Enterprise marketing teams using Claude Code to access HubSpot, SFMC, Marketo, and LinkedIn face a specific security risk: credential exposure. Claude Code needs to call APIs. Those API calls require credentials. Credentials in the agent’s file system or environment are one misconfigured git push away from exposure.
The credential proxy pattern eliminates this risk. Claude Code sends plaintext requests to a proxy server running inside your network perimeter. The proxy injects the actual API credentials before forwarding to destination APIs. Claude Code never sees the credentials. Anthropic’s documentation explicitly recommends using ANTHROPIC_BASE_URL to route through this type of proxy.
Implement this in five steps:
- Deploy a lightweight proxy service (Node.js or Python) inside your corporate network or VPC
- Give the proxy read-only access to your secrets manager (HashiCorp Vault, AWS Secrets Manager)
- Add an allowlist: the proxy only forwards requests to approved API endpoints (HubSpot, SFMC, Marketo, LinkedIn, your social publishing API)
- Set ANTHROPIC_BASE_URL in Claude Code’s settings to point to your proxy
- Log every API call through the proxy with timestamp, endpoint, brand, and user identity
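The proxy’s core logic, allowlist check plus credential injection, fits in a few lines. A sketch with hypothetical ALLOWED_HOSTS and SECRETS lookups standing in for your real configuration (in production the secrets come from Vault or AWS Secrets Manager, never from a local dictionary):

```python
from urllib.parse import urlparse

# Hypothetical allowlist and secrets lookup for illustration only.
ALLOWED_HOSTS = {"api.hubspot.com", "api.linkedin.com"}
SECRETS = {"api.hubspot.com": "hubspot-token", "api.linkedin.com": "li-token"}

def prepare_forward(url, headers):
    """Validate the destination host and inject the real credential.
    Raises PermissionError for any host not on the allowlist, so the
    agent can never reach an unapproved endpoint through the proxy."""
    host = urlparse(url).hostname
    if host not in ALLOWED_HOSTS:
        raise PermissionError(f"blocked host: {host}")
    out = dict(headers)  # never mutate the incoming request headers
    out["Authorization"] = f"Bearer {SECRETS[host]}"
    return out
```

The key property: the credential appears only in the outbound request the proxy builds, never in anything Claude Code reads or writes.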
Additional controls for production deployments:
Configure .claude/settings.json to deny access to credential files:
{
"permissions": {
"deny": [
"Read(settings.local.json)",
"Read(~/.ssh/**)",
"Read(**/.env)",
"Bash(curl * github.com *)"
]
}
}
Set 90-day automated rotation for all marketing API keys. HubSpot and LinkedIn both support key rotation without downtime. SFMC requires a brief re-authentication window during rotation. Build a rotation script that updates your secrets manager, tests connectivity to each API, and confirms the new key works before deactivating the old one.
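The rotation ordering matters: create, then test, then store, and only then revoke. A minimal sketch with the four operations passed in as callables, since the real create/test/store/revoke calls differ per API (all function names here are hypothetical):

```python
def rotate_key(api_name, create_key, test_key, store_key, revoke_key, old_key):
    """Create a new key, verify it works, store it, then revoke the old one.
    Never revokes the old key before the new key passes a connectivity test."""
    new_key = create_key(api_name)
    if not test_key(api_name, new_key):
        # Old key stays active; nothing has been revoked or overwritten.
        raise RuntimeError(f"new {api_name} key failed connectivity test")
    store_key(api_name, new_key)
    revoke_key(api_name, old_key)
    return new_key
```

If the connectivity test fails, the function raises before touching the secrets manager or the old key, which is exactly the safety property a rotation script needs.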
For audit logging, every content generation, quality scoring decision, approval event, and publish action should write to a structured log:
{
"timestamp": "2026-02-17T09:15:00Z",
"user": "jsmith@company.com",
"brand": "brand-a",
"action": "content_published",
"platform": "linkedin",
"post_id": "platform_post_12345",
"quality_score": 82,
"approval_chain": "peer_review",
"model_version": "claude-opus-4-6",
"hook_version": "1.4.2"
}
This log structure satisfies SOC 2 audit requirements and gives Marketing Ops a full chain of custody for every post. Retain logs for a minimum of 90 days. For regulated industries, check whether your compliance team requires longer retention.
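A small helper can enforce this shape by appending one JSON line per event. The audit() function below is a hypothetical name for illustration; the field set mirrors the example above:

```python
import json
import datetime

def audit(path, **fields):
    """Append one structured audit event as a JSON line (JSONL)."""
    # Timestamp is filled automatically unless the caller supplies one.
    fields.setdefault(
        "timestamp",
        datetime.datetime.now(datetime.timezone.utc).isoformat(),
    )
    with open(path, "a") as f:
        f.write(json.dumps(fields, sort_keys=True) + "\n")
```

Append-only JSONL is easy to ship to whatever log aggregator your SOC 2 tooling already ingests, and every event stays independently parseable.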
Action item: Before connecting Claude Code to any production martech credentials, deploy the credential proxy. Run one test session through the proxy and confirm the audit log captured the expected entries.
Step 10: The CLAUDE.md File as Enterprise Memory
Every Claude Code session starts cold. Without a CLAUDE.md file, the agent knows nothing about your project, your brands, your martech integrations, or the conventions your team established. You re-explain the same context every session. At enterprise scale with multiple team members using the same pipeline, this creates inconsistency in how different operators interact with the system.
The CLAUDE.md file stores project memory. It loads automatically at every session start, giving Claude Code the context it needs without any prompting. Run this prompt once to generate a baseline CLAUDE.md, then have your team update it at the end of every working session:
Create a CLAUDE.md file for this enterprise marketing content pipeline project. Include:
1. Project purpose: AI content pipeline for [COMPANY] marketing team. Generates and publishes content for [LIST BRANDS] across LinkedIn, Twitter, and email channels.
2. Skills available:
- /post: single-post generator, accepts topic/brief/draft, specify platform or "all"
- /plan-week: weekly content calendar generator, accepts topic list or campaign brief
3. Active brand switching: Set ACTIVE_BRAND environment variable before sessions
Available brands: [LIST YOUR BRANDS]
Brand configs location: ./brand-configs/{brand-name}/
4. API credentials location: settings.local.json (never read directly, proxy handles injection)
5. Orchestration webhook: n8n at {{N8N_WEBHOOK_BASE_URL}}
6. Social publishing API: [your configured social API endpoint]
7. Quality gate thresholds:
- 75+ auto-routes to fast-track approval
- 60-74 routes to human review with feedback
- Below 60 blocks with specific correction guidance
8. Hooks summary:
- SessionStart: loads active brand context
- PreToolUse (Write): runs compliance scanner
- PostToolUse (Write): runs quality scorer and routes to approval webhook
9. File conventions:
- Drafts: ./drafts/[BRAND]-[DATE]-[PLATFORM]-draft.md
- Approved: ./approved/
- Published log: ./published/post-log.md
10. Visual asset generation is asynchronous. Poll job status every 10 seconds. Average generation time depends on your configured image API and template complexity.
11. LinkedIn Marketing API requires version header: li-lms-2025-11 (update monthly as LinkedIn sunsets versions)
12. Add any conventions or lessons learned since last update at the bottom of this file.
Update CLAUDE.md after every session where something unexpected happened, a new convention was established, or a platform behavior changed. A CLAUDE.md file maintained over 10 sessions eliminates 90% of the re-prompting required to orient Claude Code to your specific environment.
Action item: After completing the full pipeline build, run the CLAUDE.md generation prompt. Open the file and verify it correctly describes your brands, your approval thresholds, and your API endpoints. Share it with every team member who will run content sessions.
Orchestration vs. Claude Code: Where Each Tool Belongs
A question Marketing Ops leads ask repeatedly: which tasks belong in Claude Code versus in n8n or Make?
Claude Code handles tasks requiring language understanding, content judgment, brand voice matching, quality scoring, and multi-step reasoning. Use Claude Code for: generating post variants, evaluating brand voice adherence, scoring content against historical performance patterns, extracting angles from long-form source content, and adapting copy between platforms.
n8n and Make handle tasks requiring deterministic logic, scheduling, retries, state management, and integration with enterprise systems that require specific authentication patterns. Use your orchestration layer for: routing approved content to the right approver based on content type and brand, calling HubSpot and SFMC APIs with the correct payload structure, managing social publishing API scheduling logic, sending Slack notifications to content reviewers, maintaining the published content database, and running scheduled triggers that initiate weekly content sessions automatically.
The boundary is: Claude Code is the content brain, n8n is the operational backbone. Claude Code should never be responsible for retry logic, state persistence between sessions, or complex conditional routing. Those tasks belong in the orchestration layer where they are visible, testable, and maintainable without requiring Claude Code knowledge.
Claude Code is also not the right tool for orchestration tasks that require long-running triggers (e.g., “when a new HubSpot contact is created, generate a personalized welcome email”). That trigger logic belongs in n8n. The Claude API call for content generation happens inside the n8n workflow, not as a standalone Claude Code command.
Final Takeaways
The hooks system is the governance layer. Without PreToolUse compliance scanning and PostToolUse quality scoring, an enterprise content pipeline produces volume but not brand safety. Build the hooks before building the skills.
Multi-brand pipelines require brand-isolated configuration, not brand-specific prompts. The difference: configuration files loaded at session start versus instructions typed into each prompt. Typed instructions get forgotten. Loaded configuration files apply every time, across every team member who runs the session.
Your martech integrations are only as reliable as your API version discipline. LinkedIn sunsets versions monthly. Marketo’s SOAP API disappears March 31, 2026. Build version references into your CLAUDE.md and schedule a quarterly review of all API version dependencies.
n8n self-hosted at $20-50/month handles the orchestration volume that Zapier charges $299+/month for at the same scale. The credential proxy adds one deployment step and removes your largest security risk. Both decisions pay back within the first month of production use.
The quality scoring gate compounds over time. The more your team tunes the brand voice scoring dimension against your actual golden set posts, the higher the first-pass score becomes, and the less time content reviewers spend correcting AI output before approval.
yfxmarketer
AI Marketing Ops Specialist
Writing about AI marketing, growth, and the systems behind successful campaigns.