Substack Has No API. I Built One Anyway. Here's How I Automated Every New Subscriber Into Kit.
Published: April 20, 2026 • 10 min read
I have a team of 17 AI agents, each with distinct names and personalities, as you can see on this page. Special shoutout to Allen Kendrick, who will help refine this very blog post I am writing. However, when you visit that page, there is one AI agent I have that you will not find there.
His name is Dan Koe, and I built him using the LifeQuest AI prompt that the real-life Dan Koe shared in one of his YouTube videos. The prompt essentially gamifies your life journey and forces you to reflect on what you actually want. I turned it into an agent that now acts as my AI Life Coach. I have been working with him for more than five months, and he probably knows more about me than I know about myself.
On Sunday, I was having a conversation with the Dan Koe who lives in my computer, telling him I felt behind and overwhelmed because I still had not migrated my Substack email subscribers to my Kit account (Kit is email marketing software, for anyone who doesn't know). I told Dan there was no way I was doing this manually and that we had to build a system so this would be the first and last time I ever think about it.
Dan did his thing, went off and researched, and came back with a verdict: Substack has no public API. There's no way to automate this. His literal words were closer to, "Honest recommendation: keep doing it manually. It takes ninety seconds."
Ninety seconds, every single time a new person subscribes, forever. That is not a plan. That is a leak I would spend the rest of my life patching.
So I didn't take no for an answer. I got creative. A little too creative, probably. In this post, I will walk through how I automated the Substack to Kit sync end to end. If you are non-technical, keep reading. I will make sure it makes sense.
The Problem No One Online Will Solve for You
Every guide you find online says the same thing: log into Substack, go to your settings, download the subscriber CSV, upload it to Kit, done. This is the official advice. It is also the advice Dan Koe gave me.
The issue is not the ninety seconds. The issue is the shape of the task. A recurring manual step that has no trigger, no reminder, and no deadline is not a ninety-second task. It is a slow tax on attention. Every time I published a Substack post and someone new subscribed, I would either remember to sync them that week or I wouldn't. Over a month, that is a handful of people who never got my Kit welcome sequence, never got my Thursday newsletter, never existed in the system where my actual marketing lives.
So I reframed the problem. I didn't need an API. I needed the outcome an API would have given me. A new subscriber shows up on Substack, and a few minutes later, they are in Kit. That's it. That's the whole requirement.
Once I reframed it that way, the path forward was obvious.
The System, in Plain English
The easiest way to picture this is a factory line with three stations, all sitting inside a single shared workspace. Every Sunday at 7 AM Eastern Time, the line turns on. A CSV of new subscribers enters on one end. A clean set of tagged Kit subscribers comes out the other end. Then the line shuts itself off until the next scheduled run.
Before I describe the stations themselves, I need to give you a quick tour of the workspace they all share. It looks boring on the surface, but it is actually the backbone of the whole system.
The Shared Workspace: The Folder Lifecycle
This is not a station. It is the workspace every station shares. The entire system lives inside a single folder with five numbered drawers that move files through stages of life:
- 00-Context/ holds the playbook, the API rules, and the reference docs Claude-in-Chrome reads before every run
- 01-Queue/ holds CSVs waiting to be processed
- 02-Completed/ holds CSVs that have already synced successfully, named with the completion date (this is also what the upstream delta-filter reads from, more on that in a second)
- 03-Logs/ holds per-run records, split into two subfolders: 03-Logs/sync-logs/ for successful runs and 03-Logs/error-logs/ for failures. Two drawers, not one, so I can glance at the failure drawer and immediately know whether anything needs my attention
- 04-Archive/ holds cold storage for CSVs old enough that 02-Completed/ would otherwise get noisy
Two sibling folders sit alongside these: migration-script/ (where migrate.py and its config live) and watcher/ (where the PowerShell and LaunchAgent scripts live).
The numbers are chronological. Work flows from 00 to 04. If I ever need to debug something, I can open any of these folders and see exactly where in the pipeline something is. A CSV stuck in 01-Queue/? The migration script failed. A CSV in 02-Completed/ but no new Kit subscribers? The Kit API is having a bad day. A missing CSV entirely? Chrome never exported. An entry in 03-Logs/error-logs/? The run itself blew up and told me exactly where.
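Those debugging rules are mechanical enough to script. Here is a minimal sketch of a health check that reads the drawers in priority order, assuming the folder names above and a hypothetical root path passed in by the caller:

```python
from pathlib import Path

def diagnose(root: Path) -> str:
    """One-glance health check over the folder lifecycle, mirroring the
    debugging rules above. `root` is the pipeline's top-level folder."""
    # A failed run leaves an entry in the error drawer: check that first.
    if any((root / "03-Logs" / "error-logs").glob("*")):
        return "run blew up: read 03-Logs/error-logs/"
    # A CSV sitting in the queue means the migration script never finished.
    if any((root / "01-Queue").glob("*.csv")):
        return "CSV stuck in queue: the migration script failed"
    # No completed CSV at all means Chrome never exported one.
    if not any((root / "02-Completed").glob("*.csv")):
        return "no completed CSV: Chrome never exported"
    return "last run completed: if Kit is missing people, check the Kit API"
```

Run it against the pipeline root on a weekday morning and the string tells you which drawer to open.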
That is enough orientation to follow the three stations below.
Station 1: The Scheduled Browser Task
This is the part that replaces "me, logging into Substack." It uses two Anthropic products that naturally work together: Claude Cowork and Claude-in-Chrome.
Claude Cowork is a feature inside the Claude desktop app that lets you create scheduled agent tasks. You give it a task description and a cadence ("every Sunday at 7 AM, do this"), and Cowork is the thing that actually remembers to fire the task on time. It is the scheduler. It holds the instructions and the calendar entry. It does not click buttons or navigate web pages. Its only job is to decide when the work should happen and to launch the thing that actually does the work.
Claude-in-Chrome is a Chrome browser extension that puts Claude inside Chrome itself. Once installed, Claude can read the page you are on and drive the browser the same way a human would: click buttons, fill forms, navigate pages, download files. It is the executor. It sits inside the browser like a patient assistant, doing what a human would do, except it does not get distracted and it does not forget.
When the scheduled time hits every Sunday at 7 AM ET, Cowork fires the task and hands the instructions to a Claude-in-Chrome session. Claude-in-Chrome then logs into Substack, navigates to the subscriber dashboard, applies a date filter, clicks through the export flow, and downloads the CSV into a specific folder on my computer.
Cowork is the alarm clock with the instructions taped to it. Claude-in-Chrome is the hands.
The pairing is natural, not something I engineered. These are two separate products from Anthropic that happen to be designed to hand off to each other. Cowork knows when. Chrome knows how. I just had to plug them together and write the task description.
The Upstream Delta-Filter Trick
This is the part I am most proud of, and it is subtle. Before Chrome exports the CSV, it checks one thing first. It opens my 02-Completed/ folder, finds the most recent completed migration filename, and extracts the date from that filename. Then it applies that date as a "Start date is on or after [date]" filter inside Substack's own UI before hitting export.
What this means in practice: I never download the full subscriber list. I only ever download the people who signed up since the last successful sync. If the last sync ran two Sundays ago, I download the two weeks of new people. Clean. Small. Nothing duplicated.
Most people would write a deduplication step later in the pipeline to handle this. I pushed the filter all the way upstream so deduplication is almost never needed. The work is prevented, not cleaned up.
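The filename-to-filter step is tiny. Here is a sketch, assuming a hypothetical substack-export-YYYY-MM-DD.csv naming convention in 02-Completed/ (the real pipeline's naming may differ; only the idea matters):

```python
import re
from datetime import date
from pathlib import Path
from typing import Optional

# Assumed convention: completed files embed their completion date, e.g.
# "substack-export-2026-04-12.csv". Adjust the pattern to your naming.
DATE_RE = re.compile(r"(\d{4})-(\d{2})-(\d{2})")

def last_sync_date(completed_dir: Path) -> Optional[date]:
    """Extract the most recent completion date from 02-Completed/ filenames.
    This date becomes Substack's "Start date is on or after" filter, so only
    subscribers newer than the last successful sync are ever exported."""
    dates = []
    for csv_file in completed_dir.glob("*.csv"):
        m = DATE_RE.search(csv_file.name)
        if m:
            dates.append(date(*map(int, m.groups())))
    return max(dates, default=None)
```

Returning `None` on an empty folder is the bootstrap case: the very first run has no delta to filter by, so it exports everything.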
Station 2: The Watcher
Once Chrome drops the CSV into my Downloads folder, a second program is watching. On my Windows machine, it is a PowerShell script running in the background. If you are on a Mac, it would be a LaunchAgent doing the same thing. Both do one job: check the Downloads folder every five seconds and notice when a Substack CSV lands.
When the watcher sees the file, it moves it out of Downloads and into a folder called 01-Queue/. That is the launch pad for the next station.
You might ask why I poll every five seconds instead of using an event-driven approach. Fair question. The honest answer: OneDrive sync is unreliable and computers go to sleep. An event-driven listener sounds elegant until your machine is asleep when the file lands, or until OneDrive decides to sync the file five minutes late, and now your "event" fired before the file was actually ready. Polling is dumber and more reliable. I will take reliable.
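The real watcher is a PowerShell script (or a LaunchAgent on a Mac), but the polling logic is small enough to sketch in Python. The `*subscriber*.csv` filename pattern is an assumption; match it to whatever Substack actually names the export:

```python
import shutil
import time
from pathlib import Path

def tick(downloads: Path, queue: Path) -> list:
    """One polling pass: move any Substack CSVs from Downloads into 01-Queue/."""
    moved = []
    for csv_file in sorted(downloads.glob("*subscriber*.csv")):
        shutil.move(str(csv_file), str(queue / csv_file.name))
        moved.append(csv_file.name)
    return moved

def watch(downloads: Path, queue: Path, poll_seconds: int = 5) -> None:
    """Dumb-but-reliable loop: look, move, sleep. Survives machine sleep and
    late OneDrive syncs, because the next awake tick simply finds the file."""
    while True:
        tick(downloads, queue)
        time.sleep(poll_seconds)
```

Splitting the single pass out of the loop also makes the watcher trivially testable, which an event-driven listener rarely is.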
Station 3: The Python Migration Script
The migrate.py script lives inside a migration-script/ subfolder and watches the 01-Queue/ folder. When a CSV shows up, it wakes up, reads the file, and does the actual work.
For each email in the CSV, it calls the Kit v4 API (an API is just a way for one program to politely ask another program to do something, in this case, "please add this subscriber"). It passes the email, the name, and a substack-migrated tag so I can see at a glance which subscribers came in through this pipeline.
Here is the clever dedup piece. If Kit responds with a 422 error, which means "this email already exists in my system," the script doesn't panic and it doesn't skip. It re-applies the substack-migrated tag anyway and moves on. This is the one case where I want the script to act twice on the same data, because tagging is idempotent and I would rather over-tag than miss a tag. There is also a small last_sync.json file the script updates after every successful run, which is my safety net in case a filename ever gets mangled.
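Here is a sketch of that 422-as-idempotent-tag logic using the `requests` library. The endpoint paths, the `X-Kit-Api-Key` header, and the tag id are my reading of Kit's v4 API and are placeholders; check them against the current Kit docs before using this:

```python
import requests

KIT_BASE = "https://api.kit.com/v4"   # assumed v4 base URL
API_KEY = "YOUR_KIT_API_KEY"          # placeholder
TAG_ID = 123456                       # hypothetical id of the substack-migrated tag
HEADERS = {"X-Kit-Api-Key": API_KEY}

def migrate_row(email: str, first_name: str) -> str:
    """Create the subscriber; on 422 ("this email already exists"), don't
    panic and don't skip: re-apply the tag anyway. Tagging is idempotent,
    so acting twice on the same email is safe."""
    resp = requests.post(f"{KIT_BASE}/subscribers", headers=HEADERS,
                         json={"email_address": email, "first_name": first_name},
                         timeout=10)
    if resp.status_code == 422:
        status = "already-existed"
    elif resp.ok:
        status = "created"
    else:
        resp.raise_for_status()  # anything exotic stops the run (retry-safe)
    # Both paths converge on tagging: over-tagging beats missing a tag.
    requests.post(f"{KIT_BASE}/tags/{TAG_ID}/subscribers", headers=HEADERS,
                  json={"email_address": email}, timeout=10)
    return status
```

Note the shape: the only branch that raises is the exotic-error one, which is exactly the "stop rather than half-process" behavior described below.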
Once every row is processed, the CSV moves from 01-Queue/ to 02-Completed/. If anything fails partway through, the CSV stays in 01-Queue/ and tries again on the next scheduled run. No silent data loss. Worst case, a migration takes an extra two weeks.
The Four Things That Can Actually Go Wrong
I am not going to pretend this is bulletproof. Every pipeline has failure modes. The question is never "will this break someday?" The question is: when it breaks, does it break loudly and safely, or quietly and destructively? Mine breaks loudly and safely. Here is what that actually looks like.
- Substack redesigns their login flow or export page. Claude-in-Chrome handles small layout tweaks, but a major redesign will break it. And that is fine. The failure is clean. No CSV lands in Downloads, the Watcher has nothing to notice, the migration script never runs, and no subscribers get added to Kit incorrectly. I catch it because the next expected entry in 02-Completed/ just doesn't show up. Fix: update Chrome's navigation instructions for the new UI and re-run.
- My computer is asleep at 7 AM on Sunday. Cowork tasks only fire when the Claude desktop app is running and the machine is awake. If the computer is off, that week's run just doesn't happen, and the gap shows up in the archive. When that happens, I open Claude Cowork and hit the "Run now" button on the task by hand. The pipeline shifts from fully automatic to one-click. The Watcher, the migration script, the logs, the archive, all of it still runs exactly the same way the moment the task fires.
- The Kit API returns an unexpected error. The 422 case ("this email already exists") is already handled. For anything more exotic, the script stops processing rather than pushing through and creating half-correct data. The error gets written to 03-Logs/error-logs/. Crucially, the CSV stays in 01-Queue/ instead of moving to 02-Completed/, so the next run finds it again and retries. This is called retry-safe design. A failed run doesn't poison the next one.
- A malformed CSV lands in the queue. Substack occasionally ships a row with a missing field, or the file itself is structurally broken. The script validates the shape of the file before it touches Kit at all. If something is wrong, it logs the row to 03-Logs/error-logs/ and exits without making a single API call. My Kit list never gets corrupted by a bad upstream file.
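That validate-before-touching-Kit step can be sketched in a few lines. The required column name and the email check are assumptions about Substack's export format; adjust them to the real header:

```python
import csv
from pathlib import Path

# Columns the export is assumed to include; match this to the real CSV header.
REQUIRED = {"email"}

def validate_csv(path: Path) -> list:
    """Check the file's shape before making any API call. A structural
    problem raises immediately, so a bad upstream file results in zero
    writes to Kit rather than a half-corrupted list."""
    with path.open(newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f)
        if reader.fieldnames is None or not REQUIRED.issubset(reader.fieldnames):
            raise ValueError(f"{path.name}: missing required columns {REQUIRED}")
        rows = []
        for i, row in enumerate(reader, start=2):  # row 1 is the header
            if not row["email"] or "@" not in row["email"]:
                raise ValueError(f"{path.name} line {i}: bad email {row['email']!r}")
            rows.append(row)
    return rows
```

Because the function raises instead of skipping bad rows, the CSV stays in 01-Queue/ and the failure lands in the error drawer with the exact line number.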
The pattern is the same across all four: when something goes wrong, the system stops, writes down exactly what happened, and refuses to corrupt the Kit list in the meantime. Confidence doesn't come from believing nothing will ever break. It comes from knowing that when things break, they break safely, in a drawer I can find, with enough detail to fix them on a weekday morning with a cup of tea.
That is the whole architecture. Three stations, one shared folder lifecycle, one clever upstream filter, and a lot of "just retry next Sunday" safety nets.
Why Dan Koe Was Technically Right and Still Wrong
Dan Koe's research was accurate. Substack does not have a public API. There is no documented endpoint, no OAuth flow, no official integration partner for what I wanted. If I had stopped at his conclusion, I would still be downloading CSVs every Sunday for the rest of my life.
Substack has no API, but Substack has a dashboard. Dashboards are just web pages. Web pages can be read by browsers. Browsers can be automated. Suddenly, I have an API.
"Substack has no API" is true. "Every new Substack subscriber must land in Kit automatically, and I will never touch this again in my life" is also true. Both things are allowed to be true at the same time.
What This Unlocks Now That the Foundation Exists
This is the part that excited me most while I was building it because I always think about future possibilities with everything I build. A Substack to Kit sync is not a feature. It is a foundation. Once you have a scheduled process that can read a list, compare it against another list, and act on the difference, you own an enormous amount of power. Here are the extensions that now become possible because the foundation already exists.
1. Sync Notifications in Whatever Channel I Actually Check
The logs in 03-Logs/ are already written. The only thing stopping me from getting a Slack ping (or Discord, or Telegram, or an email digest, or a Twilio SMS at 7:05 AM Sunday) is one extra call at the end of migrate.py. "Seven new subscribers migrated this morning. Zero errors." A sentence that tells me the factory line is still running, delivered to wherever I already look.
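For the Slack flavor, that one extra call is a POST to an incoming webhook. The webhook URL is a placeholder; Discord and Telegram have equivalent endpoints:

```python
import requests

SLACK_WEBHOOK = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder URL

def notify(migrated: int, errors: int) -> None:
    """One extra call at the end of migrate.py: post the run summary to a
    Slack incoming webhook so the 'factory line is still running' sentence
    lands wherever I already look."""
    text = f"{migrated} new subscribers migrated this morning. {errors} errors."
    requests.post(SLACK_WEBHOOK, json={"text": text}, timeout=10)
```

Called as the last line of a successful run, e.g. `notify(7, 0)` after a seven-subscriber Sunday.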
2. Welcome Sequence Trigger for Migrants
The people who come in through this pipeline already know who I am. They followed me on Substack. They do not need my "Hi, I am Prisca, here is my backstory" intro. The substack-migrated tag is already being applied, which means I can wire a Kit automation that triggers a different welcome sequence for these subscribers, the kind that says "thanks for coming along from Substack" instead of "welcome, stranger."
3. Google Sheets Live Mirror
Every row the script processes can also append itself to a Google Sheet. Now I have a living document of every Substack-to-Kit subscriber with their email, name, tags, migration date, and any errors, all sortable and searchable. My mom could read it. More importantly, I can read it at a glance without digging through JSON logs.
4. Subscriber Intelligence Dashboard
The logs are already a data set. Pipe them into a small dashboard that shows gain rate per biweekly run, tag distribution, error rate, and which weeks were the quietest. Suddenly I am not just migrating subscribers. I am watching the health of my list in real time, and I get ideas for what to write next based on when people are actually signing up.
5. Auto-Tag by Email Domain
Every CSV row has an email. Every email has a domain. A tiny addition to migrate.py can check the domain and tag subscribers accordingly. @gmail.com gets one tag. @company.com domains get a "possible B2B" tag. This is free segmentation that costs me four lines of Python.
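Those four lines of Python look roughly like this; the domain lists and tag names are illustrative, not the pipeline's actual values:

```python
def domain_tags(email: str) -> list:
    """Tag by email domain: common consumer domains get one tag; anything
    else is flagged as possible B2B. Domain set here is illustrative."""
    consumer = {"gmail.com", "yahoo.com", "outlook.com", "hotmail.com", "icloud.com"}
    domain = email.rsplit("@", 1)[-1].lower()
    return ["consumer-email"] if domain in consumer else ["possible-b2b", f"domain:{domain}"]
```

Feed the result into the same tag-application call the migration already makes, and the segmentation is free.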
6. One-Way Cancellation Sync (Substack to Kit)
If someone unsubscribes from my Substack, that should probably reflect in Kit too. The same upstream delta-filter logic can run on cancellations instead of new signups. Chrome exports a CSV of people who unsubscribed since the last run, the script removes them from Kit, done. This only works in one direction, by the way, because Substack has no API for the reverse, but one direction is still better than zero.
7. One System, Every Future Platform
The most honest extension of all. I may not be on Kit forever. Someday I may move to Beehiiv, or something that has not been invented yet. The pipeline I built is not really a "Substack to Kit sync." It is a "read list, filter list, call API, log result" machine. The day I migrate platforms, I swap one function inside migrate.py and everything around it keeps working. The foundation outlasts the vendor.
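That "read list, filter list, call API, log result" machine reduces to a loop with a pluggable push function. A sketch, with an illustrative row shape and status strings:

```python
from typing import Callable

def run_pipeline(rows: list, push: Callable) -> dict:
    """The vendor-agnostic core: iterate rows, call push, tally results.
    Migrating from Kit to Beehiiv (or anything else) means swapping the
    push function; the queue, logs, and retry logic around it never change."""
    results = {}
    for row in rows:
        status = push(row)  # e.g. "created" or "already-existed"
        results[status] = results.get(status, 0) + 1
    return results
```

Today `push` wraps the Kit v4 call; the day the platform changes, it wraps something else and the rest of the factory line keeps running.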
The Payoff
Dan Koe in my computer said ninety seconds. I said forever. He was right about the ninety seconds. I was right about the forever.
Every Sunday at 7 AM, the factory line turns on without me. I find out about it from the logs.
Mission accomplished.
If You're a Substack Writer Managing a Kit Email List, You Do Not Have to Build This Yourself
If the word PowerShell made you want to close the tab halfway through this post, I get it. You don't want to learn PowerShell. You want the outcome: every new Substack subscriber lands in Kit, and you never think about it again.
This is exactly the kind of thing I build for people.
I'm opening this up to a small group of writers who want the pipeline set up for their own list. If you want to be first in line when I open it up, the waitlist is here. No commitment, no pressure. Just a signal that you'd like to know when it's ready, and pilot pricing if I run a pilot cohort.
As always, thanks for reading!