I recently implemented a solution to automate my Bind9 zone file updates in my private infrastructure using GitHub Actions with free runners, all secured through a Tailscale overlay network. This setup has significantly improved my workflow and provided me with both flexibility and security. In this post, I’ll share my approach and compare it with cloud DNS solutions like AWS Route53 and Google Cloud DNS.
The Setup: GitHub Actions + Tailscale + Bind9
What I’ve built
My solution uses GitHub Actions to automatically deploy DNS zone changes to my private Bind9 server whenever I push updates to my repository. Here’s how it works:
- Zone File Repository: I maintain a Git repository with all my Bind9 zone files
- GitHub Actions Workflow: When I push changes, a workflow runs that:
  - Validates the zone files (using `named-checkzone`)
  - Connects to my private network via Tailscale
  - Deploys the updated zone files to my Bind9 server
  - Reloads the DNS server configuration
GitHub Actions Configuration
Here’s a simplified version of my workflow file:
```yaml
name: Deploy DNS Updates

on:
  push:
    branches: [ main ]
    paths:
      - 'zones/**'

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Install dependencies
        run: sudo apt-get update && sudo apt-get install -y bind9utils

      - name: Validate zone files
        run: |
          for zone in zones/*.db; do
            named-checkzone "$(basename "${zone%.db}")" "$zone"
          done

      - name: Setup Tailscale
        uses: tailscale/github-action@v2
        with:
          oauth-client-id: ${{ secrets.TS_OAUTH_CLIENT_ID }}
          oauth-secret: ${{ secrets.TS_OAUTH_SECRET }}
          tags: tag:ci

      - name: Deploy zone files
        run: |
          scp -o StrictHostKeyChecking=no -r zones/* dns-admin@bind-server:/etc/bind/zones/
          ssh -o StrictHostKeyChecking=no dns-admin@bind-server 'sudo rndc reload'
```
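One gotcha with zone files in Git: BIND secondaries only notice a change when the SOA serial increases, and it's easy to forget the bump in a commit. A hypothetical helper (assuming a date-based YYYYMMDDnn serial sitting on a line tagged with a `; serial` comment — a convention, not something my workflow above enforces) that could run as an extra step before deploying:

```shell
# Hypothetical helper: bump a date-based SOA serial (YYYYMMDDnn) in a zone file.
# Assumes the serial is on a line ending in a "; serial" comment.
bump_serial() {
  zone_file=$1
  today=$(date +%Y%m%d)
  current=$(awk '/; serial/ { print $1 }' "$zone_file")
  if [ "${current%??}" = "$today" ]; then
    next=$((current + 1))    # same day: increment the two-digit revision
  else
    next="${today}00"        # new day: restart the revision at 00
  fi
  sed -i "s/${current}\([[:space:]]*; serial\)/${next}\1/" "$zone_file"
}
```

Running this per changed file in the workflow, followed by a commit of the bumped serials, keeps the repository and the server's notion of the zone version in sync.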
Tailscale Integration
Tailscale provides a secure overlay network that connects my GitHub Actions runners to my private infrastructure without exposing my DNS server to the public internet. This is particularly important for DNS, since my zone files describe sensitive details of my internal network.
Key benefits of using Tailscale in this setup:
- Zero Trust Security: Only authenticated GitHub Action runners can access my DNS server
- No Public IP Required: My Bind9 server stays completely private
- Simplified Networking: No need for complex firewall rules or VPN configurations
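On the Tailscale side, the `tag:ci` runners can be locked down further in the tailnet policy file. A minimal sketch, assuming the Bind host is tagged `tag:dns-server` (a name I'm inventing here), that lets the runners reach only SSH on that host:

```jsonc
// Hypothetical tailnet policy excerpt -- tag names besides tag:ci are assumptions.
{
  "tagOwners": {
    "tag:ci":         ["autogroup:admin"],
    "tag:dns-server": ["autogroup:admin"],
  },
  "acls": [
    // CI runners may reach only SSH on the DNS server, nothing else.
    { "action": "accept", "src": ["tag:ci"], "dst": ["tag:dns-server:22"] },
  ],
}
```

With this in place, even a compromised runner can't see the rest of the tailnet.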
Bind9 vs. Cloud DNS Solutions: Why I chose self-hosting
While AWS Route53 and Google Cloud DNS offer robust, managed DNS solutions, there are several advantages to managing my own DNS infrastructure with Bind9:
Cost Benefits
- No Query Charges: Cloud DNS services typically charge per DNS query (AWS Route53: $0.40 per million queries, GCP: $0.20-0.40 per million queries)
- No Zone Charges: Both Route53 and Cloud DNS charge monthly fees per hosted zone
- Free GitHub Minutes: GitHub Actions provides 2,000 free minutes per month for private repositories, which is more than enough for DNS updates
For my setup with dozens of zones and millions of monthly queries, self-hosting with Bind9 saves hundreds of dollars annually.
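To make that concrete, a back-of-envelope sketch (the zone and query counts are hypothetical, and this ignores health checks, data transfer, and API calls, which would add more):

```shell
# Hypothetical: 24 hosted zones and 5M queries/month on Route53
# ($0.50/zone/month + $0.40 per million standard queries).
zones=24; query_millions=5
annual=$(awk -v z="$zones" -v q="$query_millions" \
  'BEGIN { printf "%.2f", (z * 0.50 + q * 0.40) * 12 }')
echo "Route53 estimate: \$${annual}/year"
# prints: Route53 estimate: $168.00/year
```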
Technical Advantages
- Complete Control: I can implement custom DNS features not available in cloud offerings
- DNS Views: Bind9 allows different answers based on the requester’s IP address
- Local Resolution: Queries for internal resources stay within my network, reducing latency
- Integration with Local Services: Easier integration with internal DHCP and other network services
- Advanced Record Types: Support for specialized DNS record types and custom configurations
- Offline Operation: DNS continues to function even during internet outages
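The DNS Views point is worth a concrete illustration. A split-horizon sketch (the network ranges and file paths are placeholders), where internal clients get the full zone and everyone else sees a public subset:

```
// Internal clients match the first view; all others fall through to "external".
acl internal_nets { 10.0.0.0/8; 192.168.0.0/16; };

view "internal" {
    match-clients { internal_nets; };
    zone "example.com" {
        type master;
        file "/etc/bind/zones/internal/example.com.db";
    };
};

view "external" {
    match-clients { any; };
    zone "example.com" {
        type master;
        file "/etc/bind/zones/external/example.com.db";
    };
};
```

Note that once views are in use, every zone must live inside a view.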
Privacy and Sovereignty
- Data Privacy: My zone data never leaves my infrastructure
- No Vendor Lock-in: I can easily migrate my DNS configuration to any standard DNS server
- Regulatory Compliance: For certain industries, keeping DNS data on-premises can help with compliance requirements
Hybrid Approaches: Integrating Bind9 with Cloud DNS
You can actually combine the best of both worlds by setting up a master-slave architecture between Bind9 and cloud providers. Here’s how this might work:
Bind9 as Master with Route53/Cloud DNS as Slaves
This architecture is possible and provides several benefits:
- Primary Control: Maintain Bind9 as your authoritative master DNS server
- Global Distribution: Use Route53 or Cloud DNS as slave servers for global presence
- Scalability: Cloud DNS handles high query volumes while you maintain control
Implementation Approach:
One caveat: AWS Route53 does not actually support AXFR zone transfers (its "Traffic Flow" feature is a routing-policy editor, not a transfer mechanism), and Google Cloud DNS cannot act as an AXFR secondary either. To use those two as secondaries, you sync zone data through their APIs instead, for example with octoDNS or cli53 driven from the same repository. Managed providers that do accept AXFR secondaries (Cloudflare, NS1, and DNS Made Easy, among others) can pull straight from Bind9:
- Configure your Bind9 server to allow zone transfers to the provider's transfer IPs
- Create the secondary zone at the provider, pointing it at your Bind9 server
- Secure the transfers with TSIG keys
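If you prefer syncing through provider APIs rather than zone transfers, octoDNS can push the same Bind-style zone files to Route53. A hypothetical config sketch (the class names and options should be verified against the octoDNS docs; the AWS credentials are read from environment variables):

```yaml
providers:
  bind:
    class: octodns_bind.ZoneFileSource
    directory: ./zones
    file_extension: .db
  route53:
    class: octodns_route53.Route53Provider
    access_key_id: env/AWS_ACCESS_KEY_ID
    secret_access_key: env/AWS_SECRET_ACCESS_KEY

zones:
  example.com.:
    sources:
      - bind
    targets:
      - route53
```

An `octodns-sync` invocation could slot into the same GitHub Actions workflow, right after zone validation.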
Example Bind9 Configuration:

```
// Allow zone transfers to AXFR-capable secondaries
acl cloud_dns_slaves {
    // Provider transfer IPs (RFC 5737 example addresses)
    192.0.2.1;
    192.0.2.2;
    198.51.100.1;
    198.51.100.2;
};

options {
    ...
    allow-transfer { cloud_dns_slaves; };
    ...
};

// Use TSIG for secure zone transfers
key "cloud_dns_key" {
    algorithm hmac-sha256;
    secret "your-secret-key-here";
};

server 192.0.2.1 {
    keys { cloud_dns_key; };
};
```
Scalability Considerations
A hybrid approach offers excellent scalability:
- Query Distribution: Cloud providers handle the bulk of global DNS queries
- Management Scalability: Continue to use your GitHub Actions workflow to update your master Bind9 server, which then propagates to cloud slaves
- Disaster Recovery: If either your Bind9 server or the cloud provider has issues, the other continues to serve DNS
- Flexibility: Easily adjust which zones are public vs. private
Conclusion
While cloud DNS services offer convenience and global scale, my automated Bind9 setup with GitHub Actions and Tailscale provides the perfect balance of control, cost-efficiency, and security for my needs. The automation through GitHub Actions means I don’t sacrifice ease of management, while Tailscale ensures everything remains secure.
For those considering a similar setup, I highly recommend evaluating your specific requirements around cost, control, and scalability. A hybrid approach with Bind9 as master and cloud DNS as slaves could offer the best of both worlds for organizations with more complex needs.
What’s your DNS setup? I’d love to hear about your experiences in the comments below!