Practice and reinforce the concepts from Lesson 15
Master advanced AI orchestration through token optimization, multi-agent workflows, professional project setup, and MCP tool integration at production scale.
| Prompt Type | Input Tokens | Output Tokens | Objective |
|---|---|---|---|
| Prompt A (Simple): "Fix typo in line 42 of auth.js where 'usre' should be 'user'" | Estimate using the ~4 chars/token rule | Assume 1,000 | Calculate costs for Sonnet ($3/$15 per 1M tokens, input/output) and Haiku ($0.25/$1.25 per 1M) |
| Prompt B (Medium): Auth review + 500 lines code (2K chars) + "Check SQL injection, password security" | Calculate | Assume 2,000 | Compare Sonnet vs Haiku savings |
| Prompt C (Large): Codebase analysis + 20 files (100K chars) + performance requirements | Calculate | Assume 3,000 | Calculate percentage savings using Haiku |
Success Criteria: Document calculations in token-analysis.txt, show cost comparison table, identify percentage savings
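To sanity-check your numbers, here is a minimal sketch of the estimation and pricing math (prices are hard-coded from the table above; the 4-characters-per-token rule is only an approximation, not an exact count):

```js
// Rough token estimate: ~4 characters per token.
const estimateTokens = (text) => Math.ceil(text.length / 4);

// Cost in USD, given prices in dollars per 1M tokens.
const cost = (inTok, outTok, inPrice, outPrice) =>
  (inTok / 1e6) * inPrice + (outTok / 1e6) * outPrice;

// Prompt A, assuming 1,000 output tokens:
const promptA = "Fix typo in line 42 of auth.js where 'usre' should be 'user'";
const inTok = estimateTokens(promptA); // ~16 tokens
console.log('Sonnet:', cost(inTok, 1000, 3, 15).toFixed(5));      // ≈ $0.01505
console.log('Haiku: ', cost(inTok, 1000, 0.25, 1.25).toFixed(5)); // ≈ $0.00125
```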
Original (2,147 tokens): Bloated todo bug description, 15 files, conversational filler, unclear focus
Optimize to under 500 tokens: Bug -> Backend status -> Likely cause -> Relevant code (2 files only) -> Error
Success Criteria: Under 500 tokens, show reduction %, calculate Sonnet cost savings
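For reference, a compressed prompt following that Bug -> Backend status -> Likely cause -> Relevant code -> Error structure might look like the sketch below (the file names and error text are illustrative placeholders, not the actual exercise content):

```
Bug: Todos vanish after page refresh.
Backend status: POST /todos returns 200; records persist in DB.
Likely cause: client state not rehydrated on mount.
Code: [paste TodoList.jsx and useTodos.js only]
Error: TypeError: Cannot read properties of undefined (reading 'map')
```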
| Task | Tokens (In+Out) | Your Model Choice | Reasoning | Cost |
|---|---|---|---|---|
| Generate JSDoc for 10 simple functions | 3K + 2K | ? | ? | ? |
| Debug race condition in async Redux Saga | 8K + 3K | ? | ? | ? |
| Reformat 5 components to style guide | 4K + 4K | ? | ? | ? |
| Design novel ML transformer architecture | 10K + 5K | ? | ? | ? |
| Write CRUD unit tests (4 endpoints) | 5K + 6K | ? | ? | ? |
Success Criteria: Complete table with justified model choices, calculate total cost with Sonnet-only vs strategic selection, show percentage savings
Models: Haiku ($0.25/$1.25 per 1M tokens, input/output) for simple tasks; Sonnet ($3/$15) for complex reasoning; Opus ($15/$75) for critical tasks
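As a worked example for one row (the model choice itself is still yours to justify): running the JSDoc task (3K in + 2K out) on Haiku costs 3,000/1M × $0.25 + 2,000/1M × $1.25 = $0.00325, versus $0.039 on Sonnet, a roughly 92% saving on that row alone.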
10 Required Sections:
1) Project Description (2-3 sentences)
2) Tech Stack (React 18, state lib, styling, AI API)
3) File Structure (show tree)
4) Code Style (arrow functions, quotes, naming)
5) Data Structures (Message object)
6) Key Functions (sendMessage)
7) Constraints (no DB, API key in env, rate limits)
8) Project Goals (learn AI integration)
9) AI Context (experience level, learning focus, time limit)
10) Workflow (testing, commits, @agent.md usage)
Success Criteria: 50+ line file; test with "@agent.md What's my tech stack?"; verify the AI follows your code style
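For orientation, the opening of an agent.md might look like the sketch below; every concrete choice shown (Vite, Zustand, CSS Modules) is a placeholder for whatever you actually picked:

```markdown
# agent.md - Smart Farm Assistant

## 1) Project Description
React chat app that gives farmers AI-powered crop advice. Learning project; no backend database.

## 2) Tech Stack
- React 18 + Vite
- State: Zustand
- Styling: CSS Modules
- AI API: OpenAI Chat Completions

<!-- Sections 3-10 follow the required list above -->
```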
| Step | Command | Success Check |
|---|---|---|
| Init npm | npm init (fill prompts) | package.json exists with scripts |
| Add scripts | Add dev, build, preview, validate to package.json (see sketch below) | npm run validate succeeds |
| Create .gitignore | Exclude node_modules, .env, dist, .DS_Store, *.log | File exists |
| Init Git | git init && git add . && git commit -m "feat: Initial Smart Farm setup" | git log shows commit |
Success Criteria: Screenshots of npm run validate passing and git log output
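For reference, a matching scripts block might look like this (Vite is assumed from the dev/build/preview names; validate is a hypothetical placeholder, so substitute your project's real check, e.g. a linter):

```json
{
  "scripts": {
    "dev": "vite",
    "build": "vite build",
    "preview": "vite preview",
    "validate": "eslint src"
  }
}
```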
| Test | Prompt | Expected Result |
|---|---|---|
| Context Loading | @agent.md What is this project about? | AI describes Smart Farm using your Project Description |
| Tech Stack | @agent.md What state management library am I using? | AI answers from your Tech Stack section |
| Code Style | @agent.md Create WeatherWidget component. Follow our code style. | AI uses your function style, quotes, naming conventions |
Success Criteria: Document all 3 test results, note if AI followed code style (Yes/No + examples)
Design a 5-agent pipeline for saving conversations to localStorage:
| Agent | Role | Input | Output | Prompt Template |
|---|---|---|---|---|
| Planning | Architecture design | Feature requirements | Tech spec (data structure, files, API design) | @agent.md Design Chat History with localStorage. Requirements: Save across refreshes, list conversations, load on click, delete option. Output: Tech spec. |
| Implementation | Code writer | Tech spec | Working code | [Fill in prompt] |
| Testing | QA tester | Implementation | Test results + bugs | [Fill in prompt] |
| Bug Fix | Debugger | Bug reports | Fixed code | [Fill in prompt] |
| Documentation | Doc writer | Final code | README + comments | [Fill in prompt] |
Success Criteria: Complete all 5 agents with Role/Input/Output/Prompt, calculate time savings vs manual approach
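To ground the pipeline, here is a minimal sketch of what the Implementation agent might produce for the storage layer (the key name and conversation shape are assumptions; your Planning agent's tech spec takes precedence):

```js
const STORAGE_KEY = 'chatHistory'; // assumed key name

// Persist a conversation; survives page refreshes via localStorage.
function saveConversation(conversation) {
  const all = loadConversations();
  const idx = all.findIndex((c) => c.id === conversation.id);
  if (idx >= 0) all[idx] = conversation;
  else all.push(conversation);
  localStorage.setItem(STORAGE_KEY, JSON.stringify(all));
}

// List all saved conversations (empty array if none yet).
function loadConversations() {
  return JSON.parse(localStorage.getItem(STORAGE_KEY) ?? '[]');
}

// Delete option from the requirements.
function deleteConversation(id) {
  const rest = loadConversations().filter((c) => c.id !== id);
  localStorage.setItem(STORAGE_KEY, JSON.stringify(rest));
}
```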
Design a 4-agent workflow (3 builders in parallel + 1 integrator):
Phase 1: Parallel Development
| Agent | Task | Files Created | Dependencies | Prompt | Time |
|---|---|---|---|---|---|
| Weather Widget | [Describe] | [List] | [npm packages] | [Fill in] | 10 min |
| Chart Component | [Describe] | [List] | [npm packages] | [Fill in] | 10 min |
| Action Buttons | [Describe] | [List] | [npm packages] | [Fill in] | 10 min |
Phase 2: Integration [Fill in: the Integrator agent's task, inputs (all three builders' files), prompt, and estimated time]
Time Calculation: [Fill in: total sequential time vs parallel time; a worked example follows the success criteria below]
Success Criteria: Complete workflow design, calculate time savings percentage
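Worked example under assumed numbers: if each builder takes 10 minutes and integration takes, say, 15 minutes (your estimate may differ), sequential work totals 3 × 10 + 15 = 45 minutes, while this parallel design totals max(10, 10, 10) + 15 = 25 minutes, a roughly 44% saving.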
Route 3 problems to appropriate expert agents:
| Problem | Expert Agent | Prompt Design | Expected Fixes |
|---|---|---|---|
| Chat laggy after 50 messages (typing delays, slow scroll) | Performance Expert | "You are a performance specialist. Problem: [describe]. Code: [paste MessageList.jsx]. Profile bottlenecks: re-renders, heavy computations, DOM issues, memory leaks. Provide fixes." | [List expected optimizations] |
| React warning: "Cannot update component while rendering" | [Which expert?] | [Your prompt] | [Expected fixes] |
| API key visible in browser DevTools Network tab | [Which expert?] | [Your prompt] | [Expected fixes] |
Success Criteria: Complete all 3 problem routings with justified expert choices
Objective: Verify latest DALL-E API before implementing image generation
| Step | Action | Deliverable |
|---|---|---|
| Step 1: Query Context7 | Get latest OpenAI DALL-E API. use context7 (parameters, pricing, sizes, response structure) | Context7 output |
| Step 2: Write assumed API | Document what you THINK the API looks like before checking | Code snippet of assumptions |
| Step 3: Write actual API | Document REAL API from Context7 | Corrected code snippet |
| Step 4: Compare | List differences, what would break, new features discovered | Comparison table |
Success Criteria: Assumed vs actual API comparison, list 3 learnings from Context7, write correct implementation for Smart Farm
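As an illustration of Step 2, an "assumed API" snippet might look like the following, written from memory of the openai Node SDK; every parameter name and the response shape here are exactly the kind of assumptions Context7 should confirm or correct:

```js
import OpenAI from 'openai';

const openai = new OpenAI(); // reads OPENAI_API_KEY from env

// Assumed call shape: verify model name, size options, and pricing via Context7.
const result = await openai.images.generate({
  model: 'dall-e-3',
  prompt: 'Aerial photo of a healthy tomato field',
  n: 1,
  size: '1024x1024',
});

console.log(result.data[0].url); // assumed response structure
```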
Design 5 test scenarios for Smart Farm Assistant:
| Test | Description | Steps | Playwright Tools Needed |
|---|---|---|---|
| Test 1: Happy Path | User sends message, receives AI response | Navigate to localhost -> Type message -> Click send -> Verify response -> Check console | browser_navigate, browser_snapshot, browser_console_messages |
| Test 2: Empty Message | [Fill in test description] | [List steps] | [List tools] |
| Test 3: Rapid Spam | [Fill in test description] | [List steps + expected behavior] | [List tools] |
| Test 4: Responsive Design | Chat works at 375px mobile width | Resize to 375x812 -> [remaining steps] | browser_resize, [others] |
| Test 5: Accessibility | Keyboard navigation (Tab, Enter, Escape) | [Fill in steps] | [Fill in tools] |
Success Criteria: Complete test plan for all 5 scenarios, bonus: run ONE test and report results
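If you attempt the bonus run, the MCP Playwright tools map closely onto the scripted API. Here is a minimal sketch of Test 1 with @playwright/test (the port, selectors, and test id are assumptions; adjust them to your app):

```js
const { test, expect } = require('@playwright/test');

test('happy path: send a message, receive an AI response', async ({ page }) => {
  const errors = [];
  page.on('console', (msg) => { if (msg.type() === 'error') errors.push(msg.text()); });

  await page.goto('http://localhost:5173'); // assumed Vite dev port
  await page.getByRole('textbox').fill('How do I water tomatoes?');
  await page.getByRole('button', { name: /send/i }).click();

  // Assumed test id on the response bubble; match your markup.
  await expect(page.getByTestId('ai-message').first()).toBeVisible({ timeout: 15000 });
  expect(errors).toEqual([]); // mirrors the "check console" step
});
```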
5-Phase Validation Workflow:
| Phase | Task | Method | Success Criteria |
|---|---|---|---|
| Phase 1: Verify APIs | Check OpenAI Chat Completions implementation | Context7: Show OpenAI Chat API for GPT-4. use context7 -> Compare with your code | API matches current docs |
| Phase 2: Fix Issues | Update outdated API usage | Document changes -> Implement -> Test | No deprecated APIs |
| Phase 3: Functional Test | Test user flow end-to-end | Playwright: Landing -> Send -> Receive -> History | All flows pass, zero console errors |
| Phase 4: Edge Cases | Test error handling | Network timeout, invalid API key, empty message, 1000+ char message | Expected error behaviors work |
| Phase 5: Performance Audit | Measure load time, response time, memory usage | DevTools + Playwright -> Check after 20 messages | Meets performance targets |
Success Criteria: Complete pipeline design, expected vs actual results table, list issues found
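For Phase 5, a minimal sketch of gathering load-time and memory numbers in the browser (run it via Playwright's page.evaluate or paste it into the DevTools console; performance.memory is Chrome-only, so treat that part as optional):

```js
// Navigation timing: total page load duration in ms.
const [nav] = performance.getEntriesByType('navigation');
console.log('Load time (ms):', Math.round(nav.duration));

// JS heap size in MB (Chrome-only API; undefined elsewhere).
if (performance.memory) {
  console.log('JS heap (MB):', (performance.memory.usedJSHeapSize / 1048576).toFixed(1));
}
```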
Submit the following deliverables: token-analysis.txt, optimized-prompt.txt, model-selection-table.txt, agent.md, git log screenshot, agent test screenshots (3), sequential-pipeline.md, parallel-workflow.md, specialized-experts.md, context7-verification.md, playwright-test-plan.md, validation-pipeline.md, reflection.md
Reflection Questions (in reflection.md): How did @agent.md improve AI responses? Give an example.
Grading: Token Management (20) + Project Setup (20) + Multi-Agent (30) + MCP Tools (20) + Reflection (10) + Bonus (+10) = 110 points