Practice and reinforce the concepts from Lesson 7.

This professional workshop takes you from CI/CD novice to automation expert. You'll build the kind of real-world pipelines teams run in production, applying industry-standard practices for continuous integration and deployment. By the end of this workshop, you will have built CI pipelines, cross-platform test automation, layered security scanning, production deployment workflows, performance monitoring, and reusable composite actions.
GitHub Actions provides enterprise-grade automation capabilities through a component-based architecture:
| Component | Purpose | Best Practice |
|---|---|---|
| Workflows | Define automated processes | One workflow per concern (CI, CD, security) |
| Events | Trigger workflow execution | Use specific events to minimize runs |
| Jobs | Logical grouping of steps | Parallelize independent jobs |
| Steps | Individual tasks in a job | Keep atomic and idempotent |
| Actions | Reusable workflow components | Version-lock for stability |
| Runners | Execution environments | Use matrix strategy for cross-platform |
Organize workflows by purpose: `ci.yml` for continuous integration, `cd.yml` for deployment, `security.yml` for security scans. This separation improves maintainability and allows teams to own specific processes.
Create `.github/workflows/ci-pipeline.yml`:
name: Continuous Integration Pipeline
# Workflow triggers with branch protection
on:
push:
branches: [ main, develop ]
pull_request:
branches: [ main ]
types: [ opened, synchronize, reopened, ready_for_review ]
workflow_dispatch:
inputs:
debug_enabled:
type: boolean
description: 'Enable debug logging'
required: false
default: false
# Environment variables for consistency
env:
NODE_VERSION: '20.x'
ARTIFACTS_RETENTION_DAYS: 7
# Limit concurrent runs
concurrency:
group: ci-${{ github.ref }}
cancel-in-progress: true
jobs:
# Job 1: Static code analysis
code-quality:
name: Code Quality Checks
runs-on: ubuntu-latest
# push events carry no pull_request payload, so guard the draft check
if: github.event_name != 'pull_request' || github.event.pull_request.draft == false
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0 # Full history for better analysis
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: ${{ env.NODE_VERSION }}
cache: 'npm'
- name: Cache dependencies
uses: actions/cache@v4
id: cache
with:
path: node_modules
key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
restore-keys: |
${{ runner.os }}-node-
- name: Install dependencies
if: steps.cache.outputs.cache-hit != 'true'
run: npm ci --prefer-offline --no-audit
- name: Run linting
run: |
npm run lint -- --format json --output-file eslint-report.json || true
npm run lint -- --format stylish
- name: Upload lint results
uses: actions/upload-artifact@v4
if: always()
with:
name: lint-results
path: eslint-report.json
retention-days: ${{ env.ARTIFACTS_RETENTION_DAYS }}
**Scenario:** A fintech startup needs to ensure code quality before deployment
**Solution:** This pipeline automatically validates code on every commit
**Impact:** 70% reduction in production bugs, 50% faster review cycles
Professional CI/CD requires robust testing at multiple levels. Let's implement a production-grade testing strategy.
First, set up a realistic project structure:
mkdir -p src/{services,utils,models} tests/{unit,integration,e2e}
npm init -y
npm install --save-dev jest @types/jest supertest eslint prettier
npm install express mongoose dotenv
Create `src/services/calculator.service.js`:
class CalculatorService {
constructor(logger) {
this.logger = logger;
this.operations = 0;
}
add(a, b) {
this.validateNumbers(a, b);
this.operations++;
const result = a + b;
this.logger.info(`Addition: ${a} + ${b} = ${result}`);
return result;
}
subtract(a, b) {
this.validateNumbers(a, b);
this.operations++;
const result = a - b;
this.logger.info(`Subtraction: ${a} - ${b} = ${result}`);
return result;
}
multiply(a, b) {
this.validateNumbers(a, b);
this.operations++;
const result = a * b;
this.logger.info(`Multiplication: ${a} * ${b} = ${result}`);
return result;
}
divide(a, b) {
this.validateNumbers(a, b);
if (b === 0) {
this.logger.error('Attempted division by zero');
throw new Error('Division by zero is not allowed');
}
this.operations++;
const result = a / b;
this.logger.info(`Division: ${a} / ${b} = ${result}`);
return result;
}
validateNumbers(...args) {
args.forEach(arg => {
if (typeof arg !== 'number' || isNaN(arg)) {
throw new TypeError(`Invalid input: ${arg} is not a valid number`);
}
});
}
getStats() {
return {
totalOperations: this.operations,
timestamp: new Date().toISOString()
};
}
}
module.exports = CalculatorService;
Create comprehensive tests in `tests/unit/calculator.service.test.js`:
const CalculatorService = require('../../src/services/calculator.service');
describe('CalculatorService', () => {
let calculator;
let mockLogger;
beforeEach(() => {
mockLogger = {
info: jest.fn(),
error: jest.fn(),
warn: jest.fn()
};
calculator = new CalculatorService(mockLogger);
});
describe('Basic Operations', () => {
test.each([
[1, 2, 3],
[0, 0, 0],
[-1, -1, -2],
[0.1, 0.2, 0.3],
[1000000, 2000000, 3000000]
])('add(%d, %d) = %d', (a, b, expected) => {
expect(calculator.add(a, b)).toBeCloseTo(expected);
expect(mockLogger.info).toHaveBeenCalled();
});
test('handles division by zero gracefully', () => {
expect(() => calculator.divide(10, 0)).toThrow('Division by zero');
expect(mockLogger.error).toHaveBeenCalledWith('Attempted division by zero');
});
test('validates input types', () => {
expect(() => calculator.add('1', 2)).toThrow(TypeError);
expect(() => calculator.add(NaN, 2)).toThrow(TypeError);
expect(() => calculator.add(null, 2)).toThrow(TypeError);
});
});
describe('Statistics Tracking', () => {
test('tracks operation count', () => {
calculator.add(1, 2);
calculator.multiply(3, 4);
calculator.divide(10, 2);
const stats = calculator.getStats();
expect(stats.totalOperations).toBe(3);
expect(stats.timestamp).toBeTruthy();
});
});
});
Create `.github/workflows/test-automation.yml`:
name: Automated Testing Pipeline
on:
push:
branches: [ main, develop, 'release/**' ]
pull_request:
branches: [ main, develop ]
env:
COVERAGE_THRESHOLD: 80
TEST_TIMEOUT: 300000 # 5 minutes
jobs:
test-matrix:
name: Test Suite (${{ matrix.os }} / Node ${{ matrix.node-version }})
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [ubuntu-latest, windows-latest, macos-latest]
node-version: ['18.x', '20.x', '21.x']
include:
- os: ubuntu-latest
node-version: '22.x'
experimental: true
fail-fast: false
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Setup Node.js ${{ matrix.node-version }}
uses: actions/setup-node@v4
with:
node-version: ${{ matrix.node-version }}
cache: 'npm'
- name: Install dependencies
run: npm ci --prefer-offline
- name: Run unit tests
run: |
  # json-summary is not a default reporter, but the coverage gate below reads coverage-summary.json
  npm test -- --coverage \
    --coverageReporters=json-summary --coverageReporters=lcov \
    --coverageDirectory=coverage/unit \
    --testPathPattern=tests/unit \
    --maxWorkers=2
continue-on-error: ${{ matrix.experimental == true }}
- name: Run integration tests
run: |
npm test -- --coverage \
--coverageDirectory=coverage/integration \
--testPathPattern=tests/integration \
--runInBand
env:
NODE_ENV: test
DATABASE_URL: ${{ secrets.TEST_DATABASE_URL }}
- name: Validate coverage threshold
run: |
coverage=$(cat coverage/unit/coverage-summary.json | jq '.total.lines.pct')
echo "Coverage: ${coverage}%"
if (( $(echo "$coverage < $COVERAGE_THRESHOLD" | bc -l) )); then
echo "::error::Coverage ${coverage}% is below threshold ${COVERAGE_THRESHOLD}%"
exit 1
fi
- name: Generate test report
if: always()
run: |
npm test -- --json --outputFile=test-results.json
echo "## Test Results Summary" >> $GITHUB_STEP_SUMMARY
echo "- Total Tests: $(jq '.numTotalTests' test-results.json)" >> $GITHUB_STEP_SUMMARY
echo "- Passed: $(jq '.numPassedTests' test-results.json)" >> $GITHUB_STEP_SUMMARY
echo "- Failed: $(jq '.numFailedTests' test-results.json)" >> $GITHUB_STEP_SUMMARY
- name: Upload coverage reports
uses: actions/upload-artifact@v4
if: always()
with:
name: coverage-${{ matrix.os }}-${{ matrix.node-version }}
path: |
coverage/
test-results.json
retention-days: 7
- name: Comment PR with coverage
uses: actions/github-script@v7
if: github.event_name == 'pull_request'
with:
script: |
const coverage = require('./coverage/unit/coverage-summary.json');
const comment = `### Test Coverage Report
| Metric | Coverage |
|--------|----------|
| Statements | ${coverage.total.statements.pct}% |
| Branches | ${coverage.total.branches.pct}% |
| Functions | ${coverage.total.functions.pct}% |
| Lines | ${coverage.total.lines.pct}% |
Minimum threshold: ${process.env.COVERAGE_THRESHOLD}%`;
github.rest.issues.createComment({
issue_number: context.issue.number,
owner: context.repo.owner,
repo: context.repo.repo,
body: comment
});
**Issue:** Tests fail in CI but pass locally
**Solution:** Ensure environment parity by using Docker containers or checking timezone/locale differences
**Prevention:** Use the `--runInBand` flag for integration tests to avoid race conditions
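One practical way to catch environment drift is to print a fingerprint of the settings that commonly differ and diff the CI output against a local run. A minimal bash sketch (which fields matter depends on your stack; the ones below are illustrative):

```shell
#!/usr/bin/env bash
# Print a sorted fingerprint of settings that commonly differ between CI and local runs.
set -euo pipefail

fingerprint() {
  echo "tz=${TZ:-unset}"
  echo "lang=${LANG:-unset}"
  echo "node=$(node --version 2>/dev/null || echo not-installed)"
  echo "os=$(uname -s)"
}

fingerprint | sort
```

Run it as a CI step and locally, then `diff` the two outputs to see exactly which setting diverges.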
Modern deployment requires security, reliability, and observability. Let's build a production-grade deployment pipeline.
Create `.github/workflows/deployment-pipeline.yml`:
name: Production Deployment Pipeline
on:
push:
branches: [ main ]
tags: [ 'v*.*.*' ]
workflow_dispatch:
inputs:
environment:
type: choice
description: 'Target environment'
options:
- staging
- production
required: true
# Prevent concurrent deployments
concurrency:
group: deploy-${{ github.ref }}-${{ inputs.environment || 'auto' }}
cancel-in-progress: false
env:
DOCKER_REGISTRY: ghcr.io
IMAGE_NAME: ${{ github.repository }}
jobs:
# Build and push Docker image
build-image:
name: Build Container Image
runs-on: ubuntu-latest
permissions:
contents: read
packages: write
outputs:
image-tag: ${{ steps.meta.outputs.tags }}
image-digest: ${{ steps.build.outputs.digest }}
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Log in to GitHub Container Registry
uses: docker/login-action@v3
with:
registry: ${{ env.DOCKER_REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Extract metadata
id: meta
uses: docker/metadata-action@v5
with:
images: ${{ env.DOCKER_REGISTRY }}/${{ env.IMAGE_NAME }}
tags: |
type=ref,event=branch
type=ref,event=pr
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=sha,prefix={{branch}}-
- name: Build and push Docker image
id: build
uses: docker/build-push-action@v5
with:
context: .
platforms: linux/amd64,linux/arm64
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
cache-from: type=gha
cache-to: type=gha,mode=max
build-args: |
BUILD_VERSION=${{ github.sha }}
BUILD_TIME=${{ github.event.head_commit.timestamp }}
# Security scanning
security-scan:
name: Security Vulnerability Scan
runs-on: ubuntu-latest
needs: build-image
permissions:
contents: read
security-events: write
steps:
- name: Run Trivy vulnerability scanner
uses: aquasecurity/trivy-action@master
with:
image-ref: ${{ needs.build-image.outputs.image-tag }}
format: 'sarif'
output: 'trivy-results.sarif'
severity: 'CRITICAL,HIGH'
- name: Upload Trivy scan results to GitHub Security
uses: github/codeql-action/upload-sarif@v3
with:
sarif_file: 'trivy-results.sarif'
- name: Check for critical vulnerabilities
run: |
if grep -q "CRITICAL" trivy-results.sarif; then
echo "::error::Critical vulnerabilities found in image"
exit 1
fi
# Deploy to staging
deploy-staging:
name: Deploy to Staging
runs-on: ubuntu-latest
needs: [build-image, security-scan]
if: github.ref == 'refs/heads/main' || github.event.inputs.environment == 'staging'
environment:
name: staging
url: https://staging.example.com
steps:
- name: Deploy to Kubernetes
run: |
echo "Deploying ${{ needs.build-image.outputs.image-tag }} to staging"
# kubectl apply -f k8s/staging/
# kubectl set image deployment/app app=${{ needs.build-image.outputs.image-tag }}
- name: Run smoke tests
run: |
echo "Running smoke tests against staging environment"
# npm run test:e2e -- --env=staging
- name: Notify deployment status
uses: actions/github-script@v7
with:
script: |
  // push events carry no deployment payload, so create a deployment record first
  const { data: deployment } = await github.rest.repos.createDeployment({
    owner: context.repo.owner,
    repo: context.repo.repo,
    ref: context.sha,
    environment: 'staging',
    auto_merge: false,
    required_contexts: []
  });
  await github.rest.repos.createDeploymentStatus({
    owner: context.repo.owner,
    repo: context.repo.repo,
    deployment_id: deployment.id,
    state: 'success',
    environment_url: 'https://staging.example.com',
    description: 'Deployment to staging completed'
  });
# Production deployment with approval
deploy-production:
name: Deploy to Production
runs-on: ubuntu-latest
needs: [build-image, security-scan, deploy-staging]
if: startsWith(github.ref, 'refs/tags/v') || github.event.inputs.environment == 'production'
environment:
name: production
url: https://app.example.com
steps:
- name: Verify image signature
run: |
echo "Verifying container image signature"
# cosign verify ${{ needs.build-image.outputs.image-tag }}
- name: Deploy to production
run: |
echo "Deploying ${{ needs.build-image.outputs.image-tag }} to production"
# kubectl apply -f k8s/production/
# kubectl set image deployment/app app=${{ needs.build-image.outputs.image-tag }}
- name: Monitor deployment health
run: |
echo "Monitoring application health"
# for i in {1..10}; do
# if curl -f https://app.example.com/health; then
# echo "Application is healthy"
# break
# fi
# sleep 30
# done
- name: Create release notes
uses: actions/github-script@v7
with:
script: |
const tag = context.ref.replace('refs/tags/', '');
const release = await github.rest.repos.createRelease({
owner: context.repo.owner,
repo: context.repo.repo,
tag_name: tag,
name: `Release ${tag}`,
body: `## What's Changed
- Image: ${{ needs.build-image.outputs.image-tag }}
- Digest: ${{ needs.build-image.outputs.image-digest }}
Full changelog: https://github.com/${{ github.repository }}/compare/previous...${tag}`,
draft: false,
prerelease: false
});
Implement zero-downtime deployments by maintaining two identical production environments. Deploy to the inactive environment, run tests, then switch traffic. This allows instant rollback if issues arise.
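The traffic switch at the heart of blue-green deployment reduces to flipping a service selector between two colors; everything else is health checks around that flip. A minimal sketch of the color-selection logic (names are illustrative; in practice the result would feed a `kubectl patch` on the service selector):

```shell
#!/usr/bin/env bash
# Given the currently live color, return the idle one that receives the next deploy.
set -euo pipefail

inactive_color() {
  case "$1" in
    blue)  echo green ;;
    green) echo blue ;;
    *)     echo "unknown color: $1" >&2; return 1 ;;
  esac
}

target=$(inactive_color blue)   # if blue is live, deploy to green
echo "deploying to: $target"
```

Keeping this logic in a tiny function makes the rollback path symmetric: rolling back is just flipping the selector again.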
Security must be integrated throughout the CI/CD pipeline, not added as an afterthought. Let's build a comprehensive security scanning workflow.
Create `.github/workflows/security-pipeline.yml`:
name: Security Scanning Pipeline
on:
push:
branches: [ main, develop ]
pull_request:
branches: [ main ]
schedule:
- cron: '0 0 * * 1' # Weekly security scan
env:
SECURITY_THRESHOLD: 'HIGH'
jobs:
# Static Application Security Testing (SAST)
code-security:
name: Static Code Analysis
runs-on: ubuntu-latest
permissions:
security-events: write
contents: read
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Initialize CodeQL
uses: github/codeql-action/init@v3
with:
languages: javascript, typescript
queries: security-and-quality
- name: Autobuild
uses: github/codeql-action/autobuild@v3
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v3
with:
category: "/language:javascript"
- name: Run Semgrep
uses: returntocorp/semgrep-action@v1
with:
config: >-
p/security-audit
p/secrets
p/owasp-top-ten
generateSarif: true
- name: Upload Semgrep results
uses: github/codeql-action/upload-sarif@v3
with:
sarif_file: semgrep.sarif
# Dependency vulnerability scanning
dependency-scan:
name: Dependency Security Audit
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Run npm audit
run: |
npm audit --json > npm-audit.json || true
critical_count=$(jq '.metadata.vulnerabilities.critical' npm-audit.json)
high_count=$(jq '.metadata.vulnerabilities.high' npm-audit.json)
echo "## Security Audit Summary" >> $GITHUB_STEP_SUMMARY
echo "- Critical vulnerabilities: $critical_count" >> $GITHUB_STEP_SUMMARY
echo "- High vulnerabilities: $high_count" >> $GITHUB_STEP_SUMMARY
if [ "$critical_count" -gt 0 ]; then
echo "::error::Critical vulnerabilities found"
exit 1
fi
- name: Run Snyk security scan
uses: snyk/actions/node@master
env:
SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
with:
args: --severity-threshold=${{ env.SECURITY_THRESHOLD }}
- name: OWASP Dependency Check
uses: dependency-check/Dependency-Check_Action@main
with:
project: ${{ github.repository }}
path: '.'
format: 'ALL'
args: >
--enableRetired
--enableExperimental
- name: Upload dependency check results
uses: actions/upload-artifact@v4
with:
name: dependency-check-report
path: reports/
# Infrastructure as Code scanning
iac-security:
name: Infrastructure Security Scan
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Run Checkov
uses: bridgecrewio/checkov-action@master
with:
directory: .
framework: dockerfile,kubernetes,github_actions
output_format: sarif
output_file_path: checkov.sarif
- name: Upload Checkov results
uses: github/codeql-action/upload-sarif@v3
with:
sarif_file: checkov.sarif
- name: Terraform security scan
uses: aquasecurity/tfsec-action@v1.0.0
with:
soft_fail: false
format: sarif
output: tfsec.sarif
- name: Upload Terraform scan results
uses: github/codeql-action/upload-sarif@v3
with:
sarif_file: tfsec.sarif
# Secret scanning
secret-scan:
name: Secret Detection
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0 # Full history for secret scanning
- name: Run Gitleaks
uses: gitleaks/gitleaks-action@v2
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: TruffleHog scan
uses: trufflesecurity/trufflehog@main
with:
  path: ./
  # scanning the full clone avoids base==HEAD failures on pushes to the default branch
  extra_args: --only-verified
# Consolidated security report
security-report:
name: Generate Security Report
runs-on: ubuntu-latest
needs: [code-security, dependency-scan, iac-security, secret-scan]
if: always()
steps:
- name: Create security summary
uses: actions/github-script@v7
with:
script: |
  // workflow event payloads don't include a job list, so query this run's jobs via the API
  const { data } = await github.rest.actions.listJobsForWorkflowRun({
    owner: context.repo.owner,
    repo: context.repo.repo,
    run_id: context.runId
  });
  const jobs = data.jobs.filter(job => job.name !== 'Generate Security Report');
  let report = '# Security Scan Summary\n\n';
  for (const job of jobs) {
    const status = job.conclusion === 'success' ? '✅' : '❌';
    report += `- ${status} ${job.name}\n`;
  }
  // Create issue if any security job failed
  if (jobs.some(job => job.conclusion === 'failure')) {
    await github.rest.issues.create({
      owner: context.repo.owner,
      repo: context.repo.repo,
      title: `Security Alert: Vulnerabilities Detected - ${new Date().toISOString().split('T')[0]}`,
      body: report,
      labels: ['security', 'high-priority']
    });
  }
**Scenario:** A healthcare SaaS platform must comply with HIPAA security requirements
**Solution:** Automated security scanning catches vulnerabilities before they reach production
**Impact:** 100% compliance maintained, zero security incidents in production
Monitoring CI/CD performance is crucial for maintaining fast feedback loops and optimizing resource usage.
Create `.github/workflows/performance-monitoring.yml`:
name: Performance Monitoring Pipeline
on:
push:
branches: [ main, develop ]
pull_request:
branches: [ main ]
workflow_run:
workflows: ["*"]
types: [completed]
jobs:
# Application performance testing
performance-test:
name: Performance Benchmarking
runs-on: ubuntu-latest
if: github.event_name != 'workflow_run'
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: '20.x'
- name: Install dependencies
run: npm ci
- name: Run performance benchmarks
run: |
npm install -D autocannon clinic
# Start application
npm start &
APP_PID=$!
sleep 5
# Run load test
npx autocannon -c 100 -d 30 -p 10 \
--json \
--renderStatusCodes \
http://localhost:3000 > benchmark-results.json
# Stop application
kill $APP_PID
- name: Analyze performance metrics
run: |
# Extract key metrics
requests=$(jq '.requests.total' benchmark-results.json)
latency_avg=$(jq '.latency.average' benchmark-results.json)
throughput=$(jq '.throughput.average' benchmark-results.json)
errors=$(jq '.errors' benchmark-results.json)
echo "## Performance Test Results" >> $GITHUB_STEP_SUMMARY
echo "| Metric | Value |" >> $GITHUB_STEP_SUMMARY
echo "|--------|-------|" >> $GITHUB_STEP_SUMMARY
echo "| Total Requests | $requests |" >> $GITHUB_STEP_SUMMARY
echo "| Avg Latency | ${latency_avg}ms |" >> $GITHUB_STEP_SUMMARY
echo "| Throughput | ${throughput} req/sec |" >> $GITHUB_STEP_SUMMARY
echo "| Errors | $errors |" >> $GITHUB_STEP_SUMMARY
# Fail if performance degrades
if [ "$errors" -gt 0 ] || (( $(echo "$latency_avg > 500" | bc -l) )); then
echo "::error::Performance degradation detected"
exit 1
fi
- name: Upload performance results
uses: actions/upload-artifact@v4
with:
name: performance-results
path: benchmark-results.json
# Bundle size monitoring
bundle-analysis:
name: Bundle Size Analysis
runs-on: ubuntu-latest
if: github.event_name != 'workflow_run'
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: '20.x'
- name: Analyze bundle size
run: |
npm ci
npm run build
# Analyze bundle
npx webpack-bundle-analyzer build/stats.json -m json -r bundle-report.json
# Check size limits
main_bundle_size=$(find build -name "main.*.js" -exec du -b {} \; | awk '{print $1}')
size_limit=500000 # 500KB
echo "Main bundle size: $((main_bundle_size / 1024))KB"
if [ "$main_bundle_size" -gt "$size_limit" ]; then
echo "::warning::Bundle size exceeds limit: $((main_bundle_size / 1024))KB > $((size_limit / 1024))KB"
fi
- name: Comment bundle size on PR
uses: actions/github-script@v7
if: github.event_name == 'pull_request'
with:
script: |
const fs = require('fs');
const report = JSON.parse(fs.readFileSync('bundle-report.json', 'utf8'));
const comment = `### 📦 Bundle Size Report
| Bundle | Size | Gzipped |
|--------|------|---------|
| Main | ${report.main.size} | ${report.main.gzipped} |
| Vendor | ${report.vendor.size} | ${report.vendor.gzipped} |
| Total | ${report.total.size} | ${report.total.gzipped} |
<details>
<summary>View detailed breakdown</summary>
\`\`\`json
${JSON.stringify(report.modules.slice(0, 10), null, 2)}
\`\`\`
</details>`;
github.rest.issues.createComment({
issue_number: context.issue.number,
owner: context.repo.owner,
repo: context.repo.repo,
body: comment
});
# Workflow performance analytics
workflow-analytics:
name: CI/CD Performance Analytics
runs-on: ubuntu-latest
if: github.event_name == 'workflow_run'
steps:
- name: Analyze workflow performance
uses: actions/github-script@v7
with:
script: |
const run = context.payload.workflow_run;
const duration = new Date(run.updated_at) - new Date(run.created_at);
// Get job details
const jobs = await github.rest.actions.listJobsForWorkflowRun({
owner: context.repo.owner,
repo: context.repo.repo,
run_id: run.id
});
// Calculate metrics
const jobMetrics = jobs.data.jobs.map(job => ({
name: job.name,
duration: new Date(job.completed_at) - new Date(job.started_at),
status: job.conclusion
}));
// Store metrics
const metrics = {
workflow: run.name,
total_duration: duration,
status: run.conclusion,
jobs: jobMetrics,
timestamp: new Date().toISOString()
};
// Create performance report
const report = `## Workflow Performance Report
**Workflow:** ${run.name}
**Status:** ${run.conclusion}
**Total Duration:** ${Math.round(duration / 1000)}s
### Job Breakdown
| Job | Duration | Status |
|-----|----------|--------|
${jobMetrics.map(job =>
`| ${job.name} | ${Math.round(job.duration / 1000)}s | ${job.status} |`
).join('\n')}
### Optimization Opportunities
${jobMetrics
.filter(job => job.duration > 300000) // Jobs taking more than 5 minutes
.map(job => `- Consider optimizing "${job.name}" (${Math.round(job.duration / 1000)}s)`)
.join('\n')}`;
// Add to workflow summary
await core.summary
.addRaw(report)
.write();
| Optimization | Impact | Implementation |
|---|---|---|
| Parallel jobs | 50-70% faster | Use job dependencies wisely |
| Caching | 30-50% faster | Cache dependencies and build outputs |
| Matrix strategy | Better coverage | Test multiple environments efficiently |
| Conditional steps | Resource savings | Skip unnecessary steps |
| Self-hosted runners | Cost reduction | Use for resource-intensive tasks |
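The caching optimization hinges on a key that changes only when dependencies change. GitHub computes this with `hashFiles('**/package-lock.json')`; the same idea can be sketched locally in bash to reason about when a cache will hit or miss (the file path below is illustrative):

```shell
#!/usr/bin/env bash
# Derive a cache key from the OS name plus a hash of the lockfile,
# mirroring the ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }} pattern.
set -euo pipefail

cache_key() {
  local lockfile=$1 hash
  hash=$(sha256sum "$lockfile" | cut -c1-16)  # a short prefix is enough to key a cache
  echo "$(uname -s)-node-$hash"
}

printf '{"name":"demo"}\n' > /tmp/demo-package-lock.json
cache_key /tmp/demo-package-lock.json
```

The key is deterministic for an unchanged lockfile (cache hit) and changes as soon as any dependency changes (cache miss, fresh install); the `restore-keys` prefix `${{ runner.os }}-node-` then falls back to the newest stale cache.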
Creating reusable actions and workflows is essential for maintaining consistency across projects and teams.
Create `.github/actions/deployment-suite/action.yml`:
name: 'Production Deployment Suite'
description: 'Complete deployment suite with health checks, rollback, and monitoring'
inputs:
environment:
description: 'Target deployment environment'
required: true
version:
description: 'Application version to deploy'
required: true
health-check-url:
description: 'URL to check application health'
required: true
rollback-on-failure:
description: 'Automatically rollback on deployment failure'
required: false
default: 'true'
slack-webhook:
description: 'Slack webhook for notifications'
required: false
datadog-api-key:
description: 'Datadog API key for monitoring'
required: false
outputs:
deployment-id:
description: 'Unique deployment identifier'
value: ${{ steps.deploy.outputs.deployment-id }}
deployment-url:
description: 'URL of deployed application'
value: ${{ steps.deploy.outputs.url }}
runs:
using: "composite"
steps:
- name: Pre-deployment validation
shell: bash
run: |
echo "::group::Pre-deployment checks"
# Validate inputs
if [[ -z "${{ inputs.environment }}" ]]; then
echo "::error::Environment is required"
exit 1
fi
# Check if version exists
if ! git rev-parse "${{ inputs.version }}" >/dev/null 2>&1; then
echo "::error::Version ${{ inputs.version }} not found"
exit 1
fi
echo "::endgroup::"
- name: Create deployment record
id: deploy
shell: bash
run: |
deployment_id="${{ inputs.environment }}-$(date +%s)-${{ github.run_id }}"
echo "deployment-id=$deployment_id" >> $GITHUB_OUTPUT
echo "url=https://${{ inputs.environment }}.example.com" >> $GITHUB_OUTPUT
# Store current version for rollback
echo "${{ inputs.version }}" > .last-deployed-version
- name: Deploy application
shell: bash
run: |
echo "::group::Deploying to ${{ inputs.environment }}"
# Deployment logic here
echo "Deploying version ${{ inputs.version }} to ${{ inputs.environment }}"
# Simulate deployment
sleep 5
echo "::endgroup::"
- name: Health check
id: health
shell: bash
run: |
echo "::group::Health checks"
max_attempts=10
attempt=0
while [ $attempt -lt $max_attempts ]; do
if curl -f -s "${{ inputs.health-check-url }}" > /dev/null; then
echo "Health check passed"
echo "healthy=true" >> $GITHUB_OUTPUT
break
fi
attempt=$((attempt + 1))
echo "Health check attempt $attempt failed, retrying..."
sleep 10
done
if [ $attempt -eq $max_attempts ]; then
echo "::error::Health check failed after $max_attempts attempts"
echo "healthy=false" >> $GITHUB_OUTPUT
exit 1
fi
echo "::endgroup::"
- name: Rollback on failure
if: failure() && inputs.rollback-on-failure == 'true'
shell: bash
run: |
echo "::warning::Deployment failed, initiating rollback"
if [ -f .last-deployed-version ]; then
last_version=$(cat .last-deployed-version)
echo "Rolling back to version: $last_version"
# Rollback logic here
fi
- name: Send notifications
if: always()
shell: bash
run: |
status="${{ steps.health.outputs.healthy == 'true' && '✅ Success' || '❌ Failed' }}"
# Slack notification
if [[ -n "${{ inputs.slack-webhook }}" ]]; then
curl -X POST "${{ inputs.slack-webhook }}" \
-H 'Content-Type: application/json' \
-d "{
\"text\": \"Deployment $status\",
\"attachments\": [{
\"color\": \"${{ steps.health.outputs.healthy == 'true' && 'good' || 'danger' }}\",
\"fields\": [
{\"title\": \"Environment\", \"value\": \"${{ inputs.environment }}\", \"short\": true},
{\"title\": \"Version\", \"value\": \"${{ inputs.version }}\", \"short\": true},
{\"title\": \"Deployment ID\", \"value\": \"${{ steps.deploy.outputs.deployment-id }}\"},
{\"title\": \"URL\", \"value\": \"${{ steps.deploy.outputs.url }}\"}
]
}]
}"
fi
# Datadog event
if [[ -n "${{ inputs.datadog-api-key }}" ]]; then
curl -X POST "https://api.datadoghq.com/api/v1/events" \
-H "DD-API-KEY: ${{ inputs.datadog-api-key }}" \
-H "Content-Type: application/json" \
-d "{
\"title\": \"Deployment $status\",
\"text\": \"Deployed ${{ inputs.version }} to ${{ inputs.environment }}\",
\"priority\": \"normal\",
\"tags\": [\"env:${{ inputs.environment }}\", \"version:${{ inputs.version }}\"],
\"alert_type\": \"${{ steps.health.outputs.healthy == 'true' && 'info' || 'error' }}\"
}"
fi
Create `.github/workflows/deploy-with-suite.yml`:
name: Production Deployment
on:
push:
tags:
- 'v*'
jobs:
deploy:
name: Deploy to Production
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Deploy application
  id: deploy  # the post-deployment step reads steps.deploy.outputs below
  uses: ./.github/actions/deployment-suite
with:
environment: production
version: ${{ github.ref_name }}
health-check-url: https://api.example.com/health
rollback-on-failure: 'true'
slack-webhook: ${{ secrets.SLACK_WEBHOOK }}
datadog-api-key: ${{ secrets.DATADOG_API_KEY }}
- name: Post-deployment tasks
run: |
echo "Deployment completed!"
echo "ID: ${{ steps.deploy.outputs.deployment-id }}"
echo "URL: ${{ steps.deploy.outputs.deployment-url }}"
Let's build a complete production pipeline that combines everything we've learned.
Create `.github/workflows/enterprise-pipeline.yml`:
name: Enterprise Production Pipeline
on:
push:
branches: [ main, develop, 'release/**' ]
pull_request:
branches: [ main, develop ]
release:
types: [ published ]
env:
# Global configuration
NODE_VERSION: '20.x'
DOCKER_REGISTRY: ghcr.io
AWS_REGION: us-east-1
TERRAFORM_VERSION: '1.5.0'
jobs:
# 1. Code Quality & Security Gates
quality-gates:
name: Quality & Security Checks
runs-on: ubuntu-latest
outputs:
quality-score: ${{ steps.sonar.outputs.score }}
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
- name: SonarCloud scan
id: sonar
uses: SonarSource/sonarcloud-github-action@master
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
- name: Check quality gate
run: |
score=$(curl -s -u ${{ secrets.SONAR_TOKEN }}: \
"https://sonarcloud.io/api/measures/component?component=myproject&metricKeys=alert_status" \
| jq -r '.component.measures[0].value')
if [ "$score" != "OK" ]; then
echo "::error::Quality gate failed"
exit 1
fi
# 2. Build & Test Matrix
build-test:
name: Build & Test (${{ matrix.service }})
needs: quality-gates
runs-on: ubuntu-latest
strategy:
matrix:
service: [api, frontend, worker]
include:
- service: api
path: ./services/api
test-command: npm run test:api
- service: frontend
path: ./services/frontend
test-command: npm run test:frontend
- service: worker
path: ./services/worker
test-command: npm run test:worker
steps:
- uses: actions/checkout@v4
- name: Setup build environment
uses: ./.github/actions/setup-environment
with:
service: ${{ matrix.service }}
node-version: ${{ env.NODE_VERSION }}
- name: Build service
working-directory: ${{ matrix.path }}
run: |
npm ci
npm run build
npm run test:unit
- name: Integration tests
working-directory: ${{ matrix.path }}
run: |
docker-compose -f docker-compose.test.yml up -d
npm run test:integration
docker-compose -f docker-compose.test.yml down
- name: Build Docker image
run: |
docker build -t ${{ env.DOCKER_REGISTRY }}/${{ github.repository }}/${{ matrix.service }}:${{ github.sha }} \
--build-arg VERSION=${{ github.sha }} \
--build-arg BUILD_TIME=$(date -u +'%Y-%m-%dT%H:%M:%SZ') \
${{ matrix.path }}
- name: Security scan
uses: aquasecurity/trivy-action@master
with:
image-ref: ${{ env.DOCKER_REGISTRY }}/${{ github.repository }}/${{ matrix.service }}:${{ github.sha }}
exit-code: '1'
severity: 'CRITICAL,HIGH'
# 3. Infrastructure validation
infrastructure:
name: Infrastructure as Code
needs: quality-gates
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Setup Terraform
uses: hashicorp/setup-terraform@v2
with:
terraform_version: ${{ env.TERRAFORM_VERSION }}
- name: Terraform init
working-directory: ./infrastructure
run: terraform init
- name: Terraform validate
working-directory: ./infrastructure
run: terraform validate
- name: Terraform plan
working-directory: ./infrastructure
run: terraform plan -out=tfplan
env:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
- name: Terraform security scan
uses: triat/terraform-security-scan@v3
with:
tfsec_actions_working_dir: ./infrastructure
# 4. Deploy to staging
deploy-staging:
name: Deploy to Staging
needs: [build-test, infrastructure]
if: github.ref == 'refs/heads/develop'
runs-on: ubuntu-latest
environment:
name: staging
url: https://staging.example.com
steps:
- uses: actions/checkout@v4
- name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: ${{ env.AWS_REGION }}
- name: Deploy to EKS
run: |
aws eks update-kubeconfig --name staging-cluster
# Deploy each service
for service in api frontend worker; do
kubectl set image deployment/$service \
$service=${{ env.DOCKER_REGISTRY }}/${{ github.repository }}/$service:${{ github.sha }} \
-n staging
done
# Wait for rollout
kubectl rollout status deployment/api -n staging
kubectl rollout status deployment/frontend -n staging
kubectl rollout status deployment/worker -n staging
- name: Run smoke tests
run: |
npm run test:e2e -- --env=staging
# 5. Deploy to production
deploy-production:
name: Deploy to Production
needs: [build-test, infrastructure]
if: github.event_name == 'release'
runs-on: ubuntu-latest
environment:
name: production
url: https://app.example.com
steps:
- uses: actions/checkout@v4
- name: Blue-Green deployment
run: |
# Deploy to blue environment
kubectl apply -f k8s/production/blue/ -n production
# Health checks
./scripts/health-check.sh https://blue.example.com
# Switch traffic
kubectl patch service app-service -n production \
-p '{"spec":{"selector":{"version":"blue"}}}'
# Monitor for 5 minutes
./scripts/monitor-deployment.sh 300
# Clean up green environment
kubectl delete -f k8s/production/green/ -n production
- name: Update monitoring
run: |
# Update Datadog
          curl -X POST "https://api.datadoghq.com/api/v1/events" \
            -H "DD-API-KEY: ${{ secrets.DATADOG_API_KEY }}" \
            -H "Content-Type: application/json" \
            -d '{
              "title": "Production deployment completed",
              "text": "Version ${{ github.event.release.tag_name }} deployed",
              "priority": "normal",
              "tags": ["env:production", "version:${{ github.event.release.tag_name }}"]
            }'
# 6. Post-deployment validation
validate-deployment:
name: Post-Deployment Validation
    needs: [deploy-staging, deploy-production]
    # Run whenever at least one deploy path actually succeeded
    if: always() && (needs.deploy-staging.result == 'success' || needs.deploy-production.result == 'success')
runs-on: ubuntu-latest
steps:
- name: Run synthetic monitoring
run: |
# Datadog synthetics
          # The synthetics trigger endpoint also requires an application key
          curl -X POST "https://api.datadoghq.com/api/v1/synthetics/tests/trigger" \
            -H "DD-API-KEY: ${{ secrets.DATADOG_API_KEY }}" \
            -H "DD-APPLICATION-KEY: ${{ secrets.DATADOG_APP_KEY }}" \
            -H "Content-Type: application/json" \
            -d '{"public_ids": ["abc-123-def"]}'
- name: Performance validation
run: |
# Run lighthouse CI
npm install -g @lhci/cli
lhci autorun --config=.lighthouserc.json
- name: Security validation
run: |
# OWASP ZAP scan
          docker run -t ghcr.io/zaproxy/zaproxy:stable zap-baseline.py \
-t https://app.example.com \
-r security-report.html
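The production deploy above leans on `./scripts/health-check.sh`, which the workshop does not show. A minimal POSIX-sh sketch might look like the following; the retry count, delay defaults, and file name are assumptions, not part of the pipeline spec:

```shell
#!/usr/bin/env sh
# Hypothetical sketch of scripts/health-check.sh: poll an endpoint until
# it answers successfully or the retry budget runs out.
check_health() {
  url="$1"; retries="${2:-10}"; delay="${3:-5}"
  i=0
  while [ "$i" -lt "$retries" ]; do
    # curl -f exits non-zero on HTTP errors; -sS keeps errors visible
    if curl -fsS -o /dev/null "$url"; then
      echo "healthy: $url"
      return 0
    fi
    i=$((i + 1))
    sleep "$delay"
  done
  echo "unhealthy after $retries attempts: $url" >&2
  return 1
}

# Only run when invoked as a script, so the function can be sourced
if [ "${0##*/}" = "health-check.sh" ]; then
  check_health "${1:?usage: health-check.sh <url> [retries] [delay]}" "$2" "$3"
fi
```

Returning non-zero on failure is what lets the workflow step (and the later health-check loop in the troubleshooting section) gate the rollout on it.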
Build a production-ready CI/CD pipeline that includes:

- Continuous Integration Pipeline
- Continuous Deployment Pipeline
- Advanced Features
- Documentation Package
Aspect | Weight | Requirements |
---|---|---|
Pipeline Design | 30% | Efficient job organization, proper dependencies, parallel execution |
Security Implementation | 25% | Comprehensive scanning, secret management, vulnerability handling |
Testing Strategy | 20% | Multiple test levels, coverage thresholds, quality gates |
Deployment Automation | 15% | Safe deployments, rollback capability, monitoring |
Documentation | 10% | Clear documentation, runbooks, troubleshooting guides |
Issue: Workflows taking too long to complete

Solution:
# Implement caching strategy
- uses: actions/cache@v4
  with:
    # Cache the npm download cache, not node_modules (npm ci deletes it)
    path: ~/.npm
    key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
    restore-keys: |
      ${{ runner.os }}-node-
# Use parallel jobs
jobs:
  test-unit:
    runs-on: ubuntu-latest
    # steps for the unit suite...
  test-integration:
    runs-on: ubuntu-latest
    # steps for the integration suite...
  test-e2e:
    runs-on: ubuntu-latest
    # steps for the e2e suite...
Prevention: Design workflows with parallelization in mind from the start
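A matrix strategy is another way to fan work out without duplicating job definitions: one job, several concurrent runs. A sketch (the suite names are placeholders assumed to match `package.json` scripts):

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false        # let all suites finish even if one fails
      matrix:
        suite: [unit, integration, e2e]
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm run test:${{ matrix.suite }}
```

Adding a dimension (Node version, OS) later is a one-line change instead of a new copy of the job.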
Issue: Secrets exposed in logs

Solution: Use masked outputs and secure handling
- name: Secure secret usage
  env:
    # Pass the secret through env rather than interpolating it into the script
    API_KEY: ${{ secrets.API_KEY }}
  run: |
    # Values from the secrets context are masked in logs automatically;
    # use "$API_KEY" here without echoing it
    test -n "$API_KEY"
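Automatic masking covers values from the secrets context only; anything minted at runtime (a short-lived session token, for example) must be masked explicitly before any command can log it. A sketch, with the token derivation as a hypothetical stand-in:

```shell
#!/usr/bin/env sh
# ::add-mask:: is a GitHub Actions workflow command the runner reads from
# stdout; every later log line containing the value is redacted.
mask_value() {
  printf '::add-mask::%s\n' "$1"
}

# Stand-in for a token minted during the job (not in the secrets context)
DERIVED_TOKEN="session-$(date +%s)"
mask_value "$DERIVED_TOKEN"
```

Mask the value as soon as it is derived; output emitted before the `::add-mask::` command is not redacted retroactively.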
Issue: Production deployments failing intermittently

Solution: Implement robust health checks and rollback
- name: Deploy with validation
run: |
# Deploy new version
./deploy.sh $VERSION
# Health check loop
for i in {1..10}; do
if curl -f https://app.example.com/health; then
echo "Deployment successful"
exit 0
fi
sleep 30
done
# Rollback if health check fails
./rollback.sh $PREVIOUS_VERSION
exit 1
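The loop above calls `./rollback.sh`; one possible sketch uses `kubectl rollout undo`, which returns each deployment to its previous revision rather than pinning `$PREVIOUS_VERSION` explicitly. The service names and namespace are assumptions carried over from the deploy steps:

```shell
#!/usr/bin/env sh
# Hypothetical sketch of rollback.sh: revert each deployment to its
# previous ReplicaSet and wait for the rollout to settle.
rollback_services() {
  ns="$1"; shift
  for svc in "$@"; do
    kubectl rollout undo "deployment/$svc" -n "$ns" || return 1
    kubectl rollout status "deployment/$svc" -n "$ns" --timeout=120s || return 1
  done
}

# Only run when invoked as a script, so the function can be sourced
if [ "${0##*/}" = "rollback.sh" ]; then
  rollback_services production api frontend worker
fi
```

Waiting on `rollout status` matters: without it the script reports success while pods may still be crash-looping on the reverted image.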
After completing this workshop, you're ready for:
Remember: Great CI/CD pipelines are invisible when working correctly but invaluable when preventing disasters. Focus on reliability, security, and developer experience.
Workshop Completion Certificate: Available upon successful implementation of all requirements and passing the assessment.