I have a confession to make. Somewhere on my hard drive, buried in a folder labeled "archive-old-projects-DO-NOT-DELETE," I have approximately 47 different versions of files named deploy.sh. Some are barely 10 lines. Others are sprawling 500-line monsters with configuration parsers, rollback logic, and ASCII art progress bars. Each one represents a moment in time when I was absolutely certain I had finally figured out the "right" way to deploy software.
Spoiler alert: I had not.
If you have been writing software for more than a few years, you probably have your own collection. Maybe yours are called push.sh, or ship-it.sh, or—if you were feeling particularly optimistic—never-fails.sh. Whatever you call them, these scripts tell a story. A story of evolution, hard-won lessons, and that one 3 AM production incident that made you question all your life choices.
Let me take you on a tour through my deployment script museum. Some of these artifacts are cringe-worthy. Some are overengineered to the point of absurdity. But all of them taught me something valuable about what actually matters when you are trying to ship code reliably.
The First Script: Blissful Ignorance
Here is the earliest deployment script I can still find:
#!/bin/bash
scp -r * user@prodserver:/var/www/myapp
ssh user@prodserver "cd /var/www/myapp && npm install && pm2 restart myapp"
echo "Deployed!"
Look at that beautiful simplicity. Look at that breathtaking naivety. This script assumes:
- The SSH connection will work
- The wildcard * will catch everything important and nothing dangerous
- npm install will succeed
- The application will start
- The universe fundamentally wants good things to happen to me
What could possibly go wrong?
Everything. Everything went wrong.
The first time I ran this in production, I copied my entire node_modules folder (which was not in .gitignore yet), overwrote the production environment variables with my local ones, and brought down the site for 20 minutes while I frantically tried to figure out why the database connection was trying to hit localhost.
But you know what? That script taught me the single most important lesson about deployment: The script should fail fast and fail loudly. If something goes wrong, the absolute worst thing you can do is keep going and deploy a partially broken application.
The "It Works on My Machine" Era
After that humbling experience, I entered what I now call the "defensive scripting" phase. I added checks. So many checks.
#!/bin/bash
if [ ! -f "package.json" ]; then
echo "ERROR: No package.json found!"
exit 1
fi
if [ -z "$PROD_SERVER" ]; then
echo "ERROR: PROD_SERVER not set!"
exit 1
fi
rsync -avz --exclude 'node_modules' --exclude '.git' ./ user@$PROD_SERVER:/var/www/myapp
ssh user@$PROD_SERVER << 'EOF'
cd /var/www/myapp
npm install --production
pm2 restart myapp
EOF
echo "Deployment complete!"
This was better. At least now the script would bail out if something obviously wrong was happening. But it still made a fatal assumption: that the production server was set up the same way as my development machine.
It was not.
The production server was running a different version of Node. It did not have pm2 installed. The user I was SSH-ing as did not have write permissions to /var/www/myapp. And, in what I consider the universe's way of mocking me, the server's hostname had changed and nobody told me.
This phase taught me that scripts should verify the environment before doing anything destructive. Do not just check that your local setup is correct. Check that the remote environment is ready to receive your deployment.
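These days, verifying the remote side is the first thing my scripts do. Here is a minimal sketch, assuming the same PROD_SERVER variable and pm2 setup as the scripts in this post (if pm2 lives in a user-level PATH, the non-interactive SSH session needs to see it too):
#!/bin/bash
# Preflight: verify the remote environment before deploying anything.
# Assumes PROD_SERVER is set and SSH key authentication is configured.
set -e
ssh -o ConnectTimeout=5 "user@$PROD_SERVER" bash << 'EOF'
set -e
# Is the runtime we expect actually installed?
command -v node >/dev/null || { echo "ERROR: node not found on server"; exit 1; }
command -v pm2  >/dev/null || { echo "ERROR: pm2 not found on server"; exit 1; }
# Can we actually write to the deploy target?
[ -w /var/www/myapp ] || { echo "ERROR: no write access to /var/www/myapp"; exit 1; }
echo "Remote environment ready: node $(node --version)"
EOF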
The Over-Engineering Phase: When Scripts Become Frameworks
After enough production incidents, I swung hard in the opposite direction. I was going to build a deployment system so robust, so thoroughly engineered, that it could survive anything. The result was... not good.
I created a deployment script with:
- A plugin system for "hooks" (pre-deploy, post-deploy, rollback)
- YAML configuration files
- Environment-specific profiles
- Automatic backup and rollback
- Slack notifications
- Health checks with retry logic
- A custom logging framework with different verbosity levels
The script was over 800 lines. It had its own documentation site. It took longer to configure than the application it was deploying.
And here is the kicker: it was so complex that when something went wrong, nobody—including me—could figure out what was happening. The abstraction layers I had carefully built to make things "flexible" made debugging nearly impossible. I had created a deployment script that required a deployment script to deploy safely.
A senior architect I was working with at the time looked at my masterpiece and said something that stuck with me: "You built a framework to avoid thinking about each deployment individually. But every deployment is different. You cannot abstract away judgment."
That hurt. But he was right.
This phase taught me that simplicity is a feature, not a bug. The best deployment script is often the most boring one. If you cannot read it and understand what it does in 30 seconds, it is probably too clever for its own good.
Finding Balance: Scripts That Have Survived
After years of swinging between extremes, I have finally settled on a set of patterns that work consistently across different projects. These are not revolutionary. They are not going to win any awards for innovation. But they work, and they have survived years of production use without major rewrites.
Here is what they have in common:
1. They are idempotent
Running the script twice should have the same effect as running it once. This seems obvious, but you would be surprised how many scripts break if you run them after a partial failure.
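In practice that means building each deploy out of commands that converge to the same state no matter how many times they run. A sketch, using a releases/current layout (one common convention, not the only one):
# Every command here is safe to re-run after a partial failure.
RELEASE="/var/www/myapp/releases/$(git rev-parse --short HEAD)"
mkdir -p "$RELEASE"                        # no-op if the directory already exists
rsync -a --delete build/ "$RELEASE/"       # converges to the same contents every run
ln -sfn "$RELEASE" /var/www/myapp/current  # pointer swap; same result the second time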
2. They verify before they act
Check that required files exist. Check that environment variables are set. Check that the remote server is reachable. Check that the application actually started after deployment. Only print "Success!" if things actually succeeded.
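The "did it actually start" check deserves more than a single request, because applications often take a few seconds to come up. A small retry helper, assuming your app exposes something like a /health endpoint:
# Poll a health endpoint until it answers, or give up and fail loudly.
wait_for_health() {
  local url="$1" attempts="${2:-10}"
  for i in $(seq 1 "$attempts"); do
    if curl -fsS --max-time 5 "$url" >/dev/null; then
      echo "Health check passed on attempt $i"
      return 0
    fi
    echo "Attempt $i/$attempts failed, retrying..."
    sleep 3
  done
  echo "ERROR: $url never became healthy"
  return 1
}
wait_for_health "https://myapp.example.com/health" || exit 1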
3. They are readable by junior developers
If a new team member cannot understand what your deployment script does, you have failed. Use clear variable names. Add comments. Avoid clever bash tricks that save two lines but require a Stack Overflow search to understand.
4. They log everything
Not just the happy path. Log every decision point, every check, every command. When something goes wrong at 2 AM, you will want that audit trail.
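In bash, a couple of lines at the top of the script get you most of the way there. A sketch, assuming you are happy writing a log file next to the script:
# Mirror everything the script prints (stdout and stderr) into a log file.
LOG_FILE="deploy-$(date +%Y%m%d-%H%M%S).log"
exec > >(tee -a "$LOG_FILE") 2>&1
# A tiny helper so every decision point gets a timestamp.
log() { echo "[$(date '+%H:%M:%S')] $*"; }
log "Deploying commit $(git rev-parse --short HEAD)"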
5. They fail fast
Use set -e in bash so the script exits immediately on any error. Do not try to recover automatically. Stop, alert, and let a human decide what to do.
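The slightly stricter version I actually reach for also catches unset variables and failed pipelines, plus a trap that tells you where the script died:
#!/bin/bash
set -euo pipefail   # exit on errors, unset variables, and failed pipelines
# Point at the failing line before exiting, then let a human take over.
trap 'echo "ERROR: deploy failed at line $LINENO" >&2' ERR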
The Current Collection: Practical Patterns for Real Deployments
I still have a collection of deployment scripts, but now they are organized by deployment pattern rather than being different attempts at the same thing. Here are the patterns I reach for most often:
Static Site Deployment
For sites that are just HTML/CSS/JS with no server-side logic, the pattern is simple:
- Build the assets locally
- Verify the build succeeded and generated expected files
- Sync to S3 or similar static hosting
- Invalidate CDN cache
- Run a quick smoke test on the deployed URL
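A sketch of those five steps for an S3-plus-CloudFront setup; the bucket name, distribution ID, and URL are placeholders for whatever your project uses:
#!/bin/bash
set -euo pipefail
BUCKET="s3://my-site-bucket"          # placeholder
DISTRIBUTION_ID="E1234EXAMPLE"        # placeholder
SITE_URL="https://www.example.com"    # placeholder
# 1. Build locally and make sure the build actually produced output.
npm run build
[ -f dist/index.html ] || { echo "ERROR: build produced no index.html"; exit 1; }
# 2. Sync to static hosting.
aws s3 sync dist/ "$BUCKET" --delete
# 3. Invalidate the CDN cache so users see the new version.
aws cloudfront create-invalidation --distribution-id "$DISTRIBUTION_ID" --paths "/*"
# 4. Smoke test the live URL.
curl -fsS --max-time 10 "$SITE_URL" >/dev/null && echo "Deployed and responding"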
API Deployment with Zero Downtime
For API servers that cannot afford downtime:
- Deploy new code to a staging slot or new container
- Run health checks against the new version
- Gradually shift traffic from old to new
- Monitor error rates
- Keep old version running for quick rollback
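The simplest honest version of this is a blue/green cutover behind a reverse proxy. A sketch, assuming pm2, a server that reads a --port flag, and an nginx config pointing at a single upstream port; truly gradual traffic shifting needs a load balancer that supports weights:
#!/bin/bash
set -euo pipefail
NEW_PORT=3001   # "green" slot; the old version keeps serving on 3000
# 1. Start the new version alongside the old one.
pm2 start server.js --name myapp-green -- --port "$NEW_PORT"
# 2. Health-check the new version before it sees real traffic.
sleep 3
curl -fsS "http://localhost:$NEW_PORT/health" >/dev/null \
  || { echo "ERROR: new version unhealthy; old version untouched"; exit 1; }
# 3. Point nginx at the new port and reload without dropping connections.
sed -i 's/127\.0\.0\.1:3000/127.0.0.1:3001/' /etc/nginx/conf.d/myapp.conf
nginx -t && nginx -s reload
# 4. Leave the old process running for a fast rollback; stop it later.
echo "Traffic shifted to :$NEW_PORT. Old version still up for rollback."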
Background Worker Deployment
For queue processors and scheduled jobs:
- Signal workers to finish current tasks
- Wait for graceful shutdown with timeout
- Deploy new code
- Start workers with new code
- Verify they are processing tasks
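A sketch using pm2, which signals the process on stop and waits up to its configured kill_timeout, so a worker that traps the signal can finish its current job first:
#!/bin/bash
set -euo pipefail
# 1. Stop workers gracefully; pm2 signals the process and waits up to the
#    configured kill_timeout so a well-behaved worker finishes its task.
pm2 stop myapp-worker
# 2. Deploy the new code.
rsync -a --exclude node_modules ./ /var/www/myapp-worker/
# 3. Start workers on the new code.
pm2 start myapp-worker
# 4. Verify they are actually picking up jobs before walking away.
sleep 5
pm2 logs myapp-worker --lines 20 --nostream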
Database-Backed Application Deployment
For applications with schema changes:
- Run database migrations in a transaction
- Verify migrations succeeded
- Deploy application code
- Restart application
- Run post-deployment health checks
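A sketch of that ordering with a staged release directory, so a failed migration aborts the deploy while the old code is still serving. Here npm run migrate stands in for whatever your migration tool provides, and note that not every database supports transactional DDL:
#!/bin/bash
set -euo pipefail
# Stage into a timestamped release directory; nothing goes live until step 2.
RELEASE="/var/www/myapp/releases/$(date +%Y%m%d%H%M%S)"
ssh "user@$PROD_SERVER" "mkdir -p $RELEASE"
rsync -a --exclude node_modules --exclude .git ./ "user@$PROD_SERVER:$RELEASE/"
ssh "user@$PROD_SERVER" bash -s "$RELEASE" << 'EOF'
set -euo pipefail
cd "$1"
npm install --production
# 1. Migrate first: with set -e, a failed migration stops the deploy here,
#    with the old code still serving traffic.
npm run migrate
# 2. Make the new release live, then restart.
ln -sfn "$1" /var/www/myapp/current
pm2 restart myapp
EOF
# 3. Post-deployment health check from the outside.
curl -fsS --max-time 10 "https://myapp.example.com/health" >/dev/null \
  && echo "Migrations applied and app healthy"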
Each of these patterns is backed by a script that is usually 50-150 lines: not so short that it skips the safety checks, not so long that it becomes unmaintainable.
What Four Decades of Deployment Has Taught Me
I have been deploying software in one form or another for over 40 years—from copying floppy disks to configuring serverless functions. I helped architect deployment systems for everything from early e-commerce sites to AWS GovCloud environments for the Department of Homeland Security. And here is what I have learned:
The technology changes, but the principles stay the same. Whether you are deploying to a physical server, a virtual machine, a container, or a serverless function, you still need to verify your environment, test your deployment, and have a rollback plan.
Automation is not about eliminating humans. It is about eliminating repetitive tasks so humans can focus on judgment calls. A good deployment script handles the boring parts flawlessly, so you can spend your mental energy on the interesting problems.
The best script is the one you will actually use. I have seen incredibly sophisticated CI/CD pipelines that developers bypass because they are too slow or too complicated. I have also seen simple bash scripts that run reliably hundreds of times a day because they are fast and trustworthy.
These days, when I work with teams transitioning from manual deployments to automated ones, I tell them the same thing: start simple. Do not try to build the perfect deployment system on day one. Build something that works, use it, let it break, learn from it, and iterate.
Your first deployment script should be embarrassingly simple. If it is not, you are probably over-engineering it. The goal is not to build a deployment framework. The goal is to deploy your application reliably.
The Script You Should Write Today
If you are just starting your deployment script collection, here is what I recommend:
Start with a script that does exactly three things:
- Verifies that prerequisites are met (environment variables set, required files exist, remote server is reachable)
- Deploys your application (using whatever method makes sense for your stack)
- Verifies that the deployment worked (health check, smoke test, or similar)
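In bash, those three things can be as small as this sketch; the server, paths, and health URL are placeholders for your own stack:
#!/bin/bash
set -euo pipefail
# 1. Verify prerequisites.
: "${PROD_SERVER:?ERROR: PROD_SERVER is not set}"
[ -f package.json ] || { echo "ERROR: run this from the project root"; exit 1; }
ssh -o ConnectTimeout=5 "user@$PROD_SERVER" true \
  || { echo "ERROR: cannot reach $PROD_SERVER"; exit 1; }
# 2. Deploy.
rsync -az --exclude node_modules --exclude .git ./ "user@$PROD_SERVER:/var/www/myapp"
ssh "user@$PROD_SERVER" "cd /var/www/myapp && npm install --production && pm2 restart myapp"
# 3. Verify the deployment worked.
sleep 3
curl -fsS --max-time 10 "https://myapp.example.com/health" >/dev/null
echo "Deployed and healthy"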
That is it. Do not add rollback logic yet. Do not add configuration files yet. Do not add Slack notifications yet. Just get those three things working reliably.
Once you have deployed successfully a dozen times, you will start to notice patterns. You will see which checks you wish you had added. You will find the pain points that automation could solve. Then—and only then—should you add more sophistication.
The Call to Action
Go dig up your oldest deployment script. Seriously, stop reading and go find it. Check your old project folders, your backup drives, that GitHub repo from 2012 that you thought nobody would ever look at again.
When you find it, do not be too harsh on yourself. That script represents where you were in your journey. It probably taught you something important, even if the lesson was "never do this again."
And if you are feeling brave, share it. Some of the best conversations I have had with other developers started with "Oh man, let me show you this terrible deploy script I wrote..."
Because here is the thing: we have all been there. We have all written the script that assumed everything would work perfectly. We have all over-engineered a solution to yesterday's problem. We have all deployed on a Friday afternoon and immediately regretted it.
Your deployment script collection is not a source of shame. It is a chronicle of your growth as a developer. Each script is a snapshot of what you knew at the time, the problems you were solving, and the lessons you had not learned yet.
So embrace your inner deploy.sh hoarder. Keep that folder of old scripts. Learn from them, laugh at them, and maybe—just maybe—copy that one clever function you wrote that actually still holds up.
And the next time you write a deployment script, remember: it does not have to be perfect. It just has to work. Everything else is just future you building your next museum piece.