Automate Your Release Notes with GitHub, Claude, and Slack
Writing release notes is the absolute worst part of the development cycle. You’ve spent two weeks fighting race conditions, wrestling with a deprecated dependency that decided to break on a Tuesday, and pushing hotfixes at 2 AM. The code is shipped. The feature is live. Now, some product manager or marketing lead asks for a “customer-facing summary” of what actually changed.
The usual process is a nightmare: you scroll through a messy git log, trying to remember what fix: typo in auth logic actually meant in the context of the user experience, and you manually type out a list that sounds vaguely professional but lacks any real soul. It’s tedious, it’s prone to error, and honestly, it’s a waste of a developer’s brainpower. If you’re an indie hacker or a small team, you probably just skip it or write “Bug fixes and performance improvements,” which is basically developer shorthand for “I forgot what I did.”
The solution isn’t to hire a technical writer—that’s overkill. The solution is to pipe your git history through an LLM that actually understands context and push it straight to where your users (or your team) live: Slack. By combining GitHub Actions, Claude, and Slack webhooks, you can turn a chaotic commit history into a polished changelog without lifting a finger after the initial setup.
The Architecture of Automated Release Notes
You don't need a complex backend or a dedicated microservice for this. You just need a trigger, a processor, and a destination. The trigger is a GitHub Release or a specific tag push. The processor is Claude (Anthropic’s API), which is significantly better at nuance and following strict formatting constraints than GPT-4, which tends to add too much “AI cheerfulness” (e.g., “We are thrilled to announce!”). The destination is a Slack channel via an Incoming Webhook.
The flow looks like this: GitHub Tag Push → GitHub Action → Fetch Commits since last tag → Send to Claude API → Format for Slack → Post to Channel.
The real friction here isn’t the API calls—it’s the data quality. If your team writes commits like update file.js or fixed bug, Claude is going to struggle. You’re essentially asking an AI to translate “developer speak” into “human speak.” If the input is garbage, the output will be a hallucinated version of garbage. To make this work, you need a baseline of decent commit messages, or you need to feed the AI the actual diffs (which increases token costs and slows down the process).
For most of us, feeding the commit messages is enough if you follow a basic convention like Conventional Commits. If you haven’t started doing that, you should probably read up on optimizing GitHub Actions to automate your linting and commit checks before you even get to the release note stage.
Setting Up the GitHub Action Glue
GitHub Actions is the obvious choice here because it already has access to your repository’s metadata. You don't want to be managing a separate Jenkins server or a CircleCI pipeline just to send a Slack message. You can trigger the workflow on a release event or a push to a tag.
The hardest part of the script is actually getting the list of commits between the current tag and the previous one. GitHub’s API can be a bit finicky with pagination if you have a massive release, but for most indie projects, a simple git log command inside the runner does the trick.
# Example snippet to get commits between tags
# Fall back to the root commit if this is the first tag in the repo
PREVIOUS_TAG=$(git describe --tags --abbrev=0 HEAD^ 2>/dev/null || git rev-list --max-parents=0 HEAD)
CURRENT_TAG=$(git describe --tags --abbrev=0)
COMMITS=$(git log "${PREVIOUS_TAG}..${CURRENT_TAG}" --oneline)
echo "Commits to process: $COMMITS"
Once you have that string of commits, you need to pass it to a script (Node.js or Python) that handles the API communication. I recommend a small Node script because it handles JSON payloads for Slack more naturally. You’ll need to store your ANTHROPIC_API_KEY and SLACK_WEBHOOK_URL in GitHub Secrets. Don't ever hardcode these; it’s a rookie mistake that leads to your API credits being drained by bots within minutes.
One quirk with GitHub runners is the shallow clone. By default, actions/checkout only grabs the latest commit. If you try to run git describe on a shallow clone, it’ll fail because the runner doesn’t know about your previous tags. You have to set fetch-depth: 0 to get the full history. It slows down the checkout slightly, but it’s the only way to calculate the delta between releases.
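Putting the trigger, the full-history checkout, and the secrets together, a minimal workflow might look like the sketch below. The tag pattern, Node version, and script path are assumptions; adjust them to your repo:

```yaml
name: release-notes
on:
  push:
    tags: ["v*"]  # or trigger on `release: types: [published]`

jobs:
  notes:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0  # full history so `git describe` can find the previous tag
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - name: Generate and post release notes
        run: node scripts/release-notes.js  # hypothetical script path
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
```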
Prompting Claude to Not Sound Like a Bot
This is where most people fail. They use a prompt like “Summarize these commits for a release note.” The result is usually a bulleted list that starts with “I have analyzed the commits and found the following updates…” which is an instant signal to the reader that a robot wrote it. It’s sterile and boring.
To get a human-sounding changelog, you have to give Claude a persona and strict negative constraints. You need to tell it to stop using adjectives like “exciting,” “seamless,” or “powerful.” You want it to sound like a developer who is tired but proud of the work. You want the focus on the value, not the activity.
Here is a prompt that actually works:
“You are a senior engineer writing a release note for a technical audience. I will provide a list of git commits. Your job is to group them into ‘Features’, ‘Fixes’, and ‘Internal’. Ignore commits that are just ‘merge branch’ or ‘typo’. Use a blunt, concise tone. No marketing fluff. No ‘We are happy to announce’. Just the facts. If a commit is vague (e.g., ‘fix bug’), omit it or group it into a general ‘Stability improvements’ category. Format the output as a JSON object with keys for each category.”
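With that prompt, the response you parse should look something like this. The exact keys depend on how you phrase the categories in your prompt; these are illustrative:

```json
{
  "features": ["Add OAuth login via Google"],
  "fixes": ["Handle expired JWT tokens without a full page reload"],
  "internal": ["Move JWT verification into shared middleware"]
}
```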
By requesting JSON, you make it easier to map the output to Slack’s Block Kit. If you just ask for text, you’ll spend half your time regexing the response to remove the AI’s conversational filler. For more advanced prompting techniques, check out our guide on AI prompt engineering for devs.
The real pain point here is token limits and rate limits. Claude is generally generous, but if you’re pushing 50 tags a day (which, why are you doing that?), you might hit a wall. Also, be careful with the max_tokens setting. If you set it too low, Claude will cut off mid-sentence, and your Slack message will end with a random comma. Set it high enough to cover the worst-case scenario of a massive release.
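A sketch of the Node side of this call, using the Anthropic Messages API shape (Node 18+ for global `fetch`). The model name and `max_tokens` value are assumptions; swap in whatever model your account uses:

```javascript
// Build the Messages API request body separately so it's easy to test.
const SYSTEM_PROMPT =
  "You are a senior engineer writing a release note. Group commits into " +
  "'Features', 'Fixes', and 'Internal'. Blunt tone, no marketing fluff. " +
  "Output a JSON object with a key per category.";

function buildClaudeRequest(commits, maxTokens = 1024) {
  return {
    model: "claude-sonnet-4-5", // assumption: use whatever model you have access to
    max_tokens: maxTokens,      // set high enough that the JSON isn't cut off mid-sentence
    system: SYSTEM_PROMPT,
    messages: [{ role: "user", content: `Commits since last release:\n${commits}` }],
  };
}

async function generateNotes(commits) {
  const res = await fetch("https://api.anthropic.com/v1/messages", {
    method: "POST",
    headers: {
      "x-api-key": process.env.ANTHROPIC_API_KEY,
      "anthropic-version": "2023-06-01",
      "content-type": "application/json",
    },
    body: JSON.stringify(buildClaudeRequest(commits)),
  });
  if (!res.ok) throw new Error(`Claude API error: ${res.status}`);
  const data = await res.json();
  // The JSON changelog lives in the first text block of the response.
  return JSON.parse(data.content[0].text);
}
```

Keeping `buildClaudeRequest` pure makes it trivial to unit-test the prompt plumbing without burning API credits.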
Integrating with Slack’s Block Kit
Slack’s simple webhooks are fine for “Hello World,” but if you want your release notes to actually look professional, you need to use Block Kit. Block Kit is a nightmare to write by hand—the JSON is deeply nested and unintuitive. Honestly, the Slack Block Kit Builder UI is a clunky mess, but it’s the only way to get columns, bold headers, and dividers.
You want to structure your message so the most important stuff is at the top. Use a header block for the version number, a divider, and then section blocks for the categorized changes. If you have a link to the full GitHub release page, put it in an actions block at the bottom.
const slackPayload = {
  blocks: [
    {
      type: "header",
      text: { type: "plain_text", text: `🚀 Release ${version}` }
    },
    { type: "divider" },
    {
      type: "section",
      text: { type: "mrkdwn", text: `*New Features:*\n${features}` }
    },
    {
      type: "section",
      text: { type: "mrkdwn", text: `*Bug Fixes:*\n${fixes}` }
    },
    {
      type: "context",
      elements: [{ type: "mrkdwn", text: "Generated by Claude AI via GitHub Actions" }]
    }
  ]
};
One annoying thing about Slack is the character limit per block. If Claude generates a massive list of fixes, the API call will return a 400 error. You need to implement a simple truncation helper in your script to ensure no single block exceeds 3,000 characters. It’s a small detail, but it’s the kind of thing that will break your pipeline at 5 PM on a Friday when you have a particularly large release.
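A minimal truncation helper for that, assuming the 3,000-character cap on section block text (the suffix wording here is just an example):

```javascript
// Slack rejects the whole payload with a 400 if any section block's
// text exceeds 3,000 characters, so clamp each category before building blocks.
function truncateForSlack(text, limit = 3000) {
  if (text.length <= limit) return text;
  const suffix = "\n_…truncated, see the full release on GitHub_";
  return text.slice(0, limit - suffix.length) + suffix;
}
```

Run every category string through this before it goes into a section block, and the pipeline stops caring how chatty the release was.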
If you’re struggling with the Slack API’s weirdness, we’ve documented some common pitfalls in Slack API best practices.
Comparing Automation Strategies
Not every project needs the full Claude-powered pipeline. Depending on your team size and the quality of your commits, different approaches make more sense. Some people try to do this with simple bash scripts, while others go for full-blown SaaS tools that charge $50/month just to format a list.
| Method | Effort to Setup | Accuracy | DX/Maintenance | Cost |
|---|---|---|---|---|
| Manual Copy-Paste | Zero | High (Human) | Terrible | Free (Time cost) |
| Basic Script (git log) | Low | Low (Too raw) | Medium | Free |
| AI-Automated (Claude) | Medium | High (Nuanced) | Low | Low (API costs) |
| Enterprise Changelog SaaS | Low | Medium | Low | High |
The “Basic Script” approach is essentially just dumping the git log into Slack. It’s fast, but your non-technical stakeholders will hate it. They don't want to see refactor(auth): move jwt logic to middleware; they want to see “Improved login security.” That translation layer is where Claude earns its keep.
The Real-World Trade-offs and Gotchas
Let’s be real: no automation is perfect. The biggest issue you’ll face is “Garbage In, Garbage Out.” If your team is lazy with commits, Claude will try to guess what happened. This can lead to “hallucinated features” where the AI sees a commit like fix: layout issues and describes it as “A complete overhaul of the user interface for better accessibility.” This is dangerous because it sets expectations for the user that aren’t met by the code.
Then there’s the pricing. While Claude’s API is relatively cheap, if you’re processing huge diffs or running the action on every single push to a development branch (don't do this), the costs can creep up. Stick to triggering this on actual releases or tags. Also, keep an eye on your token usage. Sending the entire git history of a project since the beginning of time will not only cost a fortune but will likely exceed the context window, causing the AI to forget the most recent (and important) changes.
Another hidden pain point is the auth flow. Managing GitHub Secrets is fine, but if you’re working in an organization with strict security policies, getting the right permissions for a GitHub Action to read the repo and hit an external API can be a bureaucratic nightmare. You might have to deal with OIDC or specific environment secrets, which adds another layer of setup friction.
Lastly, there’s the “AI smell.” Even with a great prompt, sometimes Claude will slip into that helpful assistant tone. You’ll see a release note that says, “I hope these updates help your users!” which is cringe-inducing in a professional Slack channel. You have to iterate on your prompt constantly. I’ve spent more time tweaking the “don’t be a robot” part of the prompt than I did writing the actual code for the GitHub Action.
The Verdict: Is it Worth the Over-Engineering?
Some would argue that spending a few hours setting up a pipeline to save 15 minutes of writing a changelog every two weeks is the definition of “developer over-engineering.” They’re wrong. The value isn’t just in the time saved; it’s in the consistency. When you automate the process, you actually do it. When it’s manual, you skip it. A project with a consistent, readable changelog looks more professional and trustworthy to users and investors than one that hasn’t updated its “What’s New” section since 2022.
Is it a perfect system? No. You still have to occasionally jump in and edit the Slack message if Claude completely misses the point of a complex architectural change. But moving from 100% manual effort to 5% manual review is a massive win for your DX.
If you’re still doing this manually, stop. The setup friction is a one-time cost. The mental tax of remembering to write release notes is a recurring subscription you should cancel immediately. Use the tools available. Let the AI handle the boring translation work so you can get back to actually writing code—or, more likely, spending three hours debugging a CSS grid that only breaks in Safari.