Practice and reinforce the concepts from Lesson 1. By completing this hands-on activity, you will set up an AI-assisted development environment, compare AI coding assistants, and evaluate the code they produce.
1. **IDE Setup**: Install VS Code and the extensions for the assistants you plan to evaluate (for example, the GitHub Copilot extension).
2. **API Access**: Register for at least two of the AI services compared in the evaluation matrix below (for example, ChatGPT or Claude).
3. **Version Control**: Initialize a Git repository for tracking experiments:
```bash
mkdir ai-coding-experiments
cd ai-coding-experiments
git init
echo "# AI Coding Experiments" > README.md
git add README.md
git commit -m "Initial commit"
```
Create a comprehensive evaluation matrix for AI coding assistants:
## AI Coding Assistant Evaluation Matrix
| Criteria | GitHub Copilot | ChatGPT-4 | Claude | Tabnine | Amazon CodeWhisperer |
|----------|----------------|-----------|--------|---------|---------------------|
| **Integration** | | | | | |
| IDE Support | VS Code, JetBrains, Vim | Web/API | Web/API | Multi-IDE | VS Code, JetBrains |
| Real-time Suggestions | ✓ | ✗ | ✗ | ✓ | ✓ |
| **Language Support** | | | | | |
| Popular Languages | Excellent | Excellent | Excellent | Good | Good |
| Framework Knowledge | Very Good | Excellent | Excellent | Good | AWS-focused |
| **Code Quality** | | | | | |
| Accuracy Rate | 70-80% | 85-90% | 85-90% | 60-70% | 70-75% |
| Security Practices | Good | Very Good | Excellent | Good | Very Good |
| **Pricing** | | | | | |
| Free Tier | Student/OSS | Limited | Limited | Yes | Yes |
| Professional Cost | $10/month | $20/month | $20/month | $12/month | Free with AWS |
| **Performance Metrics** | | | | | |
| Response Time | <100ms | 1-3s | 1-3s | <200ms | <150ms |
| Context Window | 2048 tokens | 128k tokens | 200k tokens | 1024 tokens | 2048 tokens |
Create a production-ready REST API client with AI assistance:
**Professional Prompt Engineering:**

```
Create a Python REST API client with the following requirements:
- Async/await support for concurrent requests
- Proper error handling and retry logic
- Request rate limiting (10 requests/second)
- Comprehensive logging
- Type hints and docstrings
- Unit test examples

Include best practices for production deployment.
```

**Expected Output Structure:**
```python
# api_client.py
import asyncio
import aiohttp
from typing import Optional, Dict, Any
from dataclasses import dataclass
import logging
from tenacity import retry, stop_after_attempt, wait_exponential


@dataclass
class APIConfig:
    base_url: str
    api_key: str
    timeout: int = 30
    max_retries: int = 3
    rate_limit: int = 10


class ProductionAPIClient:
    """Production-ready API client with enterprise features."""

    def __init__(self, config: APIConfig):
        self.config = config
        self.logger = logging.getLogger(__name__)
        self.rate_limiter = asyncio.Semaphore(config.rate_limit)

    # ... implementation continues
```
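As one sketch of how that implementation might continue (it relies on the aiohttp and tenacity imports above; the method name `get_json` is illustrative, not part of the lesson's skeleton), a rate-limited, retrying GET could look like this:

```python
    # Hypothetical continuation of ProductionAPIClient (a sketch, not the required solution)
    @retry(stop=stop_after_attempt(3),
           wait=wait_exponential(multiplier=1, min=1, max=10))
    async def get_json(self, endpoint: str,
                       params: Optional[Dict[str, Any]] = None) -> Dict[str, Any]:
        """GET an endpoint as JSON, limiting concurrency and retrying on failure."""
        async with self.rate_limiter:  # Semaphore caps in-flight requests at config.rate_limit
            timeout = aiohttp.ClientTimeout(total=self.config.timeout)
            headers = {"Authorization": f"Bearer {self.config.api_key}"}
            async with aiohttp.ClientSession(timeout=timeout) as session:
                async with session.get(f"{self.config.base_url}/{endpoint}",
                                       params=params, headers=headers) as response:
                    response.raise_for_status()
                    self.logger.info("GET %s -> %s", endpoint, response.status)
                    return await response.json()
```

Note that a plain `Semaphore` bounds concurrent requests rather than requests per second; enforcing the 10 requests/second target exactly would require something like a token-bucket limiter.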
Implement AI-assisted code review:
**Security Audit Prompt:**

```
Review this code for security vulnerabilities:
- SQL injection risks
- XSS vulnerabilities
- Authentication flaws
- Data validation issues

Provide OWASP-compliant recommendations.
```

**Performance Optimization:**

```
Analyze this code for performance bottlenecks:
- Time complexity analysis
- Memory usage optimization
- Database query efficiency
- Caching opportunities

Suggest specific improvements with benchmarks.
```
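To exercise these prompts you need some code to review. A hypothetical snippet like the one below, with a deliberate SQL injection flaw and the parameterized fix a good review should suggest, works as a small practice target:

```python
import sqlite3

def get_user_unsafe(conn: sqlite3.Connection, username: str):
    # Deliberately vulnerable: user input is interpolated straight into the SQL string
    return conn.execute(
        f"SELECT * FROM users WHERE name = '{username}'"
    ).fetchone()

def get_user_safe(conn: sqlite3.Connection, username: str):
    # The fix an OWASP-aligned review should recommend: a parameterized query
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (username,)
    ).fetchone()
```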
Generate comprehensive test suites:
```python
# test_api_client.py
import pytest
import asyncio
from unittest.mock import patch, AsyncMock
from api_client import ProductionAPIClient, APIConfig


class TestProductionAPIClient:
    @pytest.fixture
    def client(self):
        config = APIConfig(
            base_url="https://api.example.com",
            api_key="test_key"
        )
        return ProductionAPIClient(config)

    @pytest.mark.asyncio
    async def test_rate_limiting(self, client):
        """Test that rate limiting prevents exceeding the request limit."""
        # Implementation with AI assistance
        pass

    @pytest.mark.asyncio
    async def test_retry_logic(self, client):
        """Test exponential backoff retry mechanism."""
        # Implementation with AI assistance
        pass
```
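If you want a starting point before asking the AI, one way to fill in `test_rate_limiting` (a sketch that relies only on the `rate_limiter` semaphore and `config` shown in the skeleton) is to count how many fake requests run at the same time:

```python
    # Hypothetical sketch of the rate-limiting test (drop into TestProductionAPIClient)
    @pytest.mark.asyncio
    async def test_rate_limiting(self, client):
        in_flight = 0
        peak = 0

        async def fake_request():
            nonlocal in_flight, peak
            async with client.rate_limiter:   # same semaphore the client uses
                in_flight += 1
                peak = max(peak, in_flight)
                await asyncio.sleep(0.01)     # simulate network I/O
                in_flight -= 1

        # 50 concurrent "requests" should never exceed the configured limit at once
        await asyncio.gather(*(fake_request() for _ in range(50)))
        assert peak <= client.config.rate_limit
```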
Create a performance testing framework for AI-generated code:
```python
# benchmark_framework.py
import time
import memory_profiler
import cProfile
from typing import Callable, Dict, Any, Tuple
import pandas as pd


class AICodeBenchmark:
    """Framework for benchmarking AI-generated code performance."""

    def __init__(self):
        self.results = []

    def time_execution(self, func: Callable, *args, **kwargs) -> Tuple[float, Any]:
        """Measure execution time of a function, returning (seconds, result)."""
        start = time.perf_counter()
        result = func(*args, **kwargs)
        end = time.perf_counter()
        return end - start, result

    def profile_memory(self, func: Callable) -> Dict[str, Any]:
        """Profile memory usage of a function."""
        # Implementation details
        pass

    def compare_implementations(self, implementations: Dict[str, Callable]):
        """Compare multiple implementations of the same algorithm."""
        # Implementation details
        pass
```
**Algorithm Optimization Challenge**

```python
# Original AI-generated code (inefficient)
def fibonacci_recursive(n):
    if n <= 1:
        return n
    return fibonacci_recursive(n - 1) + fibonacci_recursive(n - 2)

# Request optimized versions from AI
# Compare: memoization, dynamic programming, matrix multiplication
```
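For reference, two of the optimizations the AI is likely to propose look roughly like this (memoization and a bottom-up loop; a matrix-exponentiation version is left for your own experiment):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fibonacci_memoized(n: int) -> int:
    """Top-down memoization: each value is computed only once."""
    if n <= 1:
        return n
    return fibonacci_memoized(n - 1) + fibonacci_memoized(n - 2)

def fibonacci_iterative(n: int) -> int:
    """Bottom-up dynamic programming in O(n) time and O(1) space."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```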
**Data Processing Benchmark**

```python
# Test case: process 1 million records
# Metrics: execution time, memory usage, CPU utilization
test_data = generate_test_dataset(1_000_000)

implementations = {
    "naive": ai_generated_naive_solution,
    "optimized": ai_generated_optimized_solution,
    "vectorized": ai_generated_vectorized_solution,
}

benchmark = AICodeBenchmark()  # framework defined above
benchmark.compare_implementations(implementations)
```
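Here `generate_test_dataset` and the `ai_generated_*` functions are placeholders for code you and the AI will write. If you want something to run immediately, a minimal hypothetical stand-in for the data generator might be:

```python
import random

def generate_test_dataset(n: int) -> list:
    """Hypothetical stand-in: n synthetic records with an id and a random value."""
    return [{"id": i, "value": random.random()} for i in range(n)]
```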
| Metric | Target | Measurement Method |
|---|---|---|
| Execution Time | < 100 ms for 10k records | time.perf_counter() |
| Memory Usage | < 100 MB peak | memory_profiler |
| CPU Efficiency | > 80% core utilization | psutil |
| Scalability | O(n log n) or better | Big-O analysis |
| Concurrency | 1000+ concurrent ops | asyncio benchmarks |
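Most of these measurements map directly onto the framework above; CPU efficiency is the least obvious, so here is one way to estimate it with psutil (a sketch assuming psutil is installed; the helper name is illustrative):

```python
import time
import psutil

def cpu_utilization(func, *args, **kwargs) -> float:
    """Rough CPU-time / wall-time ratio for one call (1.0 ~ one fully busy core)."""
    proc = psutil.Process()             # current process
    cpu_before = proc.cpu_times()
    wall_start = time.perf_counter()
    func(*args, **kwargs)
    wall = time.perf_counter() - wall_start
    cpu_after = proc.cpu_times()
    cpu_used = (cpu_after.user - cpu_before.user) + (cpu_after.system - cpu_before.system)
    return cpu_used / wall if wall > 0 else 0.0
```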
Answer these questions in your notebook:
1. **First Impressions**: What surprised you most about using AI for coding?
2. **Strengths and Limitations**: Based on your experiments, what do you think AI assistants are good at? What are their limitations?
3. **Future Use**: How do you think AI coding assistants could help you in your programming journey?
4. **Ethical Considerations**: What concerns might you have about using AI-generated code?
Test your chosen AI assistant on the tasks above and document the results. Create a short report that includes your evaluation matrix, your benchmark measurements, and your reflection answers, and review it for completeness before submitting.
In the next lesson, you'll learn about prompt engineering - the art of crafting effective prompts to get the best results from AI assistants. Start thinking about what makes a good prompt versus a poor one based on today's experiments!