Hardening a Tiny Portfolio: Security, Performance & Production Readiness
A multi-part series on taking a weekend portfolio project to production: implementing CSP, rate limiting, INP optimization, and more.
Series Background: This is a follow-up to Shipping a Tiny Portfolio: Next.js, Tailwind v4 & shadcn/ui, where I built a minimal portfolio with Next.js 15, TypeScript, and Tailwind v4 in a weekend. This series documents the production-hardening work I did after launch.
After shipping my portfolio, I realized that "working" and "production-ready" are two very different things. Over the next few weeks, I systematically addressed security vulnerabilities, performance bottlenecks, and developer experience gaps.
This series documents everything I learned along the way.
The Journey
Part 1: Security Hardening
Content Security Policy (CSP) Implementation
The site had zero CSP headers, making it vulnerable to XSS attacks and clickjacking. I implemented a defense-in-depth approach using Next.js middleware and Vercel configuration.
What I Built:
- Dynamic CSP with unique nonce generation per request
- Next.js middleware (src/middleware.ts) for HTML routes
- Two-layer architecture: dynamic middleware + static vercel.json fallback
- Support for third-party scripts (Vercel Analytics, Speed Insights)
Key Learnings:
- Middleware runs on every request, so keep it lightweight (34.2 kB bundle)
- Nonces are critical for inline scripts in modern frameworks
- Testing CSP requires browser DevTools and careful header inspection
- Vercel's CDN can strip headers; middleware provides reliable enforcement
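The "keep it lightweight" point is partly about scope: running CSP middleware on static assets burns cycles for no benefit. Next.js supports a route matcher for exactly this; a minimal sketch (the exact pattern below is an illustration, not the article's actual matcher):

```typescript
// Restrict middleware to HTML routes: skip Next.js internals and static files.
// The negative-lookahead pattern is an assumed example, not the article's code.
export const config = {
  matcher: ["/((?!_next/static|_next/image|favicon.ico).*)"],
};
```

With a matcher like this, the middleware body only runs for routes that actually return HTML.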
Technical Highlights:
// Generate unique nonce per request
const nonce = Buffer.from(crypto.randomUUID()).toString("base64");

// Build strict CSP
const csp = `
  default-src 'self';
  script-src 'self' 'nonce-${nonce}' https://va.vercel-scripts.com;
  style-src 'self' 'unsafe-inline';
  img-src 'self' data: https:;
  font-src 'self';
  frame-ancestors 'none';
  base-uri 'self';
  form-action 'self';
`.replace(/\s{2,}/g, " ").trim();

response.headers.set("Content-Security-Policy", csp);
Results:
- ✅ XSS attack mitigation
- ✅ Clickjacking protection via frame-ancestors 'none'
- ✅ Strict resource loading policies
- ✅ All builds passing with middleware enabled
Documentation: See /docs/CSP_IMPLEMENTATION_COMPLETE.md for full implementation details.
Rate Limiting Implementation
The contact form API endpoint had no rate limiting, making it vulnerable to spam and abuse. I implemented a zero-dependency, in-memory rate limiter.
What I Built:
- Custom rate limiter utility in src/lib/rate-limit.ts (146 lines)
- IP-based tracking with Vercel header support (X-Forwarded-For)
- Standard rate limit headers (X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Reset)
- Graceful 429 responses with retry timing
- Automated test suite (scripts/test-rate-limit.mjs)
Key Learnings:
- In-memory storage works for single-instance deployments (Vercel serverless)
- For multi-region or high-scale, upgrade to Vercel KV or Upstash Redis
- Rate limit headers improve client-side UX (show retry time)
- IP-based tracking requires careful proxy header handling
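The proxy-header point deserves care: behind Vercel's proxy, the caller's address arrives in X-Forwarded-For as a comma-separated list, and only the first entry is the original client. A minimal sketch of the extraction (getClientIp is a hypothetical helper for illustration, not the article's code):

```typescript
// Extract the client IP from proxy headers, falling back to a sentinel.
// On a proxied platform, X-Forwarded-For is "client, proxy1, proxy2":
// the first entry is the original caller; later hops append themselves.
export function getClientIp(headers: Headers): string {
  const forwarded = headers.get("x-forwarded-for");
  if (forwarded) {
    return forwarded.split(",")[0].trim();
  }
  return headers.get("x-real-ip") ?? "unknown";
}
```

Trusting the first entry is only safe when the platform's proxy overwrites or appends to the header; a client talking directly to the origin could spoof it.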
Technical Highlights:
// Rate limiter with automatic cleanup of expired timestamps
interface RateLimitResult {
  success: boolean;
  limit: number;
  remaining: number;
  reset: Date;
}

export class RateLimiter {
  private requests = new Map<string, { requests: number[] }>();

  constructor(private limit: number, private windowMs: number) {}

  async check(identifier: string): Promise<RateLimitResult> {
    const now = Date.now();
    const record = this.requests.get(identifier);

    // Check if rate limited
    if (record && record.requests.length >= this.limit) {
      const oldestRequest = record.requests[0];
      if (now - oldestRequest < this.windowMs) {
        return {
          success: false,
          limit: this.limit,
          remaining: 0,
          reset: new Date(oldestRequest + this.windowMs)
        };
      }
      // Oldest request fell outside the window: drop expired timestamps
      record.requests = record.requests.filter((t) => now - t < this.windowMs);
    }

    // Allow request and update tracking
    return this.recordRequest(identifier, now);
  }

  private recordRequest(identifier: string, now: number): RateLimitResult {
    const record = this.requests.get(identifier) ?? { requests: [] };
    record.requests.push(now);
    this.requests.set(identifier, record);
    return { success: true, limit: this.limit, remaining: this.limit - record.requests.length, reset: new Date(now + this.windowMs) };
  }
}
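To surface the standard headers listed above, the check result can be translated into a header map before the response is built. A sketch assuming the RateLimitResult shape from the snippet (toRateLimitHeaders is a hypothetical helper, not the article's code):

```typescript
interface RateLimitResult {
  success: boolean;
  limit: number;
  remaining: number;
  reset: Date;
}

// Translate a rate-limit check into the standard X-RateLimit-* headers,
// plus Retry-After (in seconds) when the request was rejected.
export function toRateLimitHeaders(result: RateLimitResult): Record<string, string> {
  const headers: Record<string, string> = {
    "X-RateLimit-Limit": String(result.limit),
    "X-RateLimit-Remaining": String(result.remaining),
    // Reset is conventionally expressed as a Unix timestamp in seconds
    "X-RateLimit-Reset": String(Math.ceil(result.reset.getTime() / 1000)),
  };
  if (!result.success) {
    headers["Retry-After"] = String(
      Math.max(0, Math.ceil((result.reset.getTime() - Date.now()) / 1000))
    );
  }
  return headers;
}
```

A 429 response then simply attaches these headers, giving clients enough information to show a "try again in N seconds" message.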
Results:
- ✅ Contact form protected from spam (5 requests per 15 minutes)
- ✅ Zero external dependencies
- ✅ Automated testing with npm run test:rate-limit
- ✅ Graceful user feedback on rate limit
Documentation: See /docs/RATE_LIMITING_IMPLEMENTATION_COMPLETE.md for full implementation details.
Part 2: Performance Optimization
INP (Interaction to Next Paint) Improvements
Google's Core Web Vitals showed poor INP scores (664ms+) on navigation links. The culprit: Next.js hover prefetching blocking the main thread.
What I Fixed:
- Disabled hover prefetching on <Link> components (prefetch={false})
- Added CSS performance hints (will-change-auto, contain: layout style)
- Hardware-accelerated transitions (transform: translateZ(0))
- Non-blocking theme toggle using React's useTransition
- Non-blocking form submissions
Key Learnings:
- Hover prefetching is great for UX but terrible for INP
- Links still prefetch on viewport intersection (better trade-off)
- CSS containment prevents expensive layout recalculations
- React 18+ useTransition keeps UI responsive during state updates
Technical Highlights:
"use client";

// Non-blocking theme toggle
// (imports assume next-themes and a shadcn/ui Button at the usual alias path)
import { useTransition } from "react";
import { useTheme } from "next-themes";
import { Button } from "@/components/ui/button";

export function ThemeToggle() {
  const [isPending, startTransition] = useTransition();
  const { setTheme, theme } = useTheme();

  const toggleTheme = () => {
    // Mark the theme change as a non-urgent transition so the click
    // handler returns quickly and the main thread stays responsive
    startTransition(() => {
      setTheme(theme === "light" ? "dark" : "light");
    });
  };

  return (
    <Button
      onClick={toggleTheme}
      disabled={isPending}
      className="will-change-auto"
    >
      {/* ... */}
    </Button>
  );
}
Results:
- ✅ INP score improved from 664ms+ to <200ms (Good)
- ✅ Smoother navigation and interactions
- ✅ Theme toggle feels instant
- ✅ No layout shifts or visual jank
Documentation: See /docs/INP_OPTIMIZATION.md for full optimization details.
GitHub Contributions Heatmap
I added a live GitHub contributions heatmap to the homepage to showcase activity.
What I Built:
- GitHub GraphQL API integration (/api/github-contributions)
- Client-side caching (24-hour duration) via localStorage
- Fallback to sample data if API unavailable
- Visual heatmap using react-calendar-heatmap
- Optional GITHUB_TOKEN env var for higher rate limits
Key Learnings:
- GitHub's public API has low rate limits (60 req/hour)
- Personal access tokens (no scopes needed) bump it to 5,000 req/hour
- Client-side caching is essential for API rate limit management
- Always have fallback data for better UX
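On the server side, the contribution data comes from a single query against GitHub's GraphQL API. A sketch of the query builder (field names follow GitHub's public GraphQL schema; buildContributionsQuery and the surrounding route code are assumptions, not the article's code):

```typescript
// Build the GraphQL request body for a user's contribution calendar.
// Field names follow GitHub's public GraphQL schema (contributionsCollection).
export function buildContributionsQuery(
  login: string
): { query: string; variables: { login: string } } {
  const query = `
    query ($login: String!) {
      user(login: $login) {
        contributionsCollection {
          contributionCalendar {
            totalContributions
            weeks {
              contributionDays {
                date
                contributionCount
              }
            }
          }
        }
      }
    }
  `;
  return { query, variables: { login } };
}
```

The body is POSTed to https://api.github.com/graphql, with an Authorization: bearer header when GITHUB_TOKEN is set to get the higher rate limit.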
Technical Highlights:
// Fetch with caching
const fetchContributions = async (): Promise<ContributionResponse> => {
  // Check cache first
  const cached = localStorage.getItem(CACHE_KEY);
  if (cached) {
    const data = JSON.parse(cached);
    if (Date.now() - data.timestamp < CACHE_DURATION) {
      return data;
    }
  }

  // Fetch from API
  const response = await fetch("/api/github-contributions");
  const data = await response.json();

  // Cache result
  localStorage.setItem(CACHE_KEY, JSON.stringify({
    ...data,
    timestamp: Date.now()
  }));

  return data;
};
Results:
- ✅ Live contribution data on homepage
- ✅ 24-hour client-side cache reduces API calls
- ✅ Graceful fallback to sample data
- ✅ <5KB bundle size increase
Part 3: Developer Experience
AI Contributor Guide
I created comprehensive instructions for AI coding assistants (GitHub Copilot, Cursor, etc.) to maintain code quality and architectural consistency.
What I Built:
- Detailed guide in .github/copilot-instructions.md
- Auto-sync script to workspace root (agents.md)
- MCP (Model Context Protocol) server guidelines
- Stack decisions, architecture patterns, and conventions
Key Learnings:
- AI assistants benefit from explicit architectural constraints
- Document "what not to change" as much as "how to build"
- Include import alias patterns and file organization rules
- MCP servers enable secure, local-first AI integrations
Documentation Structure:
docs/
├── API.md # API route documentation
├── CSP_IMPLEMENTATION_COMPLETE.md # CSP implementation guide
├── CSP_QUICKREF.md # CSP quick reference
├── RATE_LIMITING_IMPLEMENTATION_COMPLETE.md
├── RATE_LIMITING_QUICKREF.md
├── INP_OPTIMIZATION.md # Performance optimization
├── DEPLOYMENT_CHECKLIST.md # Pre-deploy checklist
├── SECURITY_FINDINGS_RESOLUTION.md # Security audit results
└── TODO.md # Ongoing work
Results:
- ✅ Consistent code quality across AI-assisted changes
- ✅ Clear boundaries and conventions
- ✅ Faster onboarding for contributors
- ✅ Reduced architectural drift
Key Takeaways
- Security is not optional: CSP and rate limiting should be in every production app
- Performance is perception: 664ms feels slow, <200ms feels instant
- Zero dependencies are underrated: Custom rate limiter = 146 lines, no external deps
- Caching strategy matters: Client-side cache + fallback data = resilient UX
- Document everything: Future you will thank present you
- AI assistants need guardrails: Explicit conventions prevent drift
What's Next?
Future improvements on my radar:
- Search functionality for blog posts
- Tag filtering and navigation
- View counts and analytics
- Upgrade rate limiter to Vercel KV for multi-region
- Add E2E tests with Playwright
Resources
All documentation lives in /docs:
- CSP Implementation: CSP_IMPLEMENTATION_COMPLETE.md
- Rate Limiting: RATE_LIMITING_IMPLEMENTATION_COMPLETE.md
- Performance: INP_OPTIMIZATION.md
- Security Audit: SECURITY_FINDINGS_RESOLUTION.md
Conclusion
Taking a project from "works on my machine" to "production-ready" requires intentional hardening. Security, performance, and developer experience all need attention.
The good news? Modern frameworks like Next.js make it easier than ever. Middleware for CSP, route handlers for rate limiting, and React's concurrent features for performance: the tools are there.
Now it's your turn. What's your "weekend project" that needs production hardening?
Read Part 1: Shipping a Tiny Portfolio: Next.js, Tailwind v4 & shadcn/ui