
How I Use MCP Agents, Skills, and Subagents to Build Software Faster

Introduction

After months of building a full-stack university social platform—OnlyCampus—with a Spring Boot backend, a Swift/SwiftUI iOS app, and a Kotlin/Compose Android app, I've built a development workflow powered by Claude Code that I genuinely can't imagine living without.

The secret isn't just using AI to write code. It's the combination of MCP servers for real-time context, custom subagents for automated reviews, and skills for cross-platform consistency. Together, these tools catch bugs before they ship, enforce architectural patterns automatically, and keep three codebases in perfect sync.

Here's exactly how I use them, with real examples from production code.

The Stack: Three Codebases, One Ecosystem

OnlyCampus consists of three interconnected repositories:

  • hocam-backend - Spring Boot 3.5 / Java 21 REST API (source of truth for DTOs and API contracts)
  • onlycampus-ios - Swift/SwiftUI iOS app with MVVM architecture
  • onlycampus-android - Kotlin/Jetpack Compose Android app with strict design token system

Managing consistency across these three platforms manually would be a nightmare. Changes to a backend DTO need to propagate to both mobile apps. Security vulnerabilities in one codebase could exist in others. Architectural patterns should be uniform.

This is where MCP agents, skills, and subagents become essential.

Part 1: MCP Servers - Real-Time Context for AI

MCP (Model Context Protocol) servers connect Claude Code to external services, giving it live access to data and documentation. I use four MCP servers daily:

1. Firebase MCP Server

What it does: Direct access to Firestore database, authentication, and Firebase Admin SDK operations.

Real use case: My blog (this website) uses Firestore for posts, comments, and analytics. When I asked Claude to add unique view tracking to blog posts, it:

  1. Queried my existing Firestore schema to understand the posts collection structure
  2. Checked security rules to see what's publicly readable vs protected
  3. Generated code that seamlessly integrated with existing indexes
  4. Suggested optimized Firestore rules for the new uniqueViews field

Why it matters: No manual schema explanation needed. Claude sees the actual database structure and generates code that fits perfectly.

2. Notion MCP Server

What it does: Full read/write access to Notion workspace for task management and documentation.

Real use case: I manage all OnlyCampus features in Notion databases. When implementing a new feature, Claude:

  • Creates tasks automatically when I start coding
  • Updates task status as I work ("In Progress" → "Testing" → "Complete")
  • Generates technical documentation pages with code examples after completion
  • Creates follow-up tasks when bugs are discovered during implementation

Example: After completing a Spring Boot feature for gamification, I said "Document this in Notion." Claude automatically created a page explaining the XP system, badge rules, and database schema—complete with code snippets and SQL queries.

3. Context7 MCP Server

What it does: Access to up-to-date official documentation for frameworks and libraries.

Real use case: When implementing Spring Boot features, Claude queries Context7 for current Spring Boot 3.5 best practices instead of relying on potentially outdated training data.

Example: I asked Claude to add request validation to an endpoint. Instead of using deprecated patterns, it queried Context7 for current @Valid and @Validated usage, applied modern exception handlers, and followed Spring Boot 3.x conventions exactly.

No more Stack Overflow answers from 2018. Always current, always correct.

4. Greptile MCP Server

What it does: AI-powered semantic code search and PR analysis.

Real use case: When investigating a bug in an unfamiliar part of the codebase, I asked: "How does the messaging system handle offline users?"

Using Greptile, Claude:

  1. Semantically searched for messaging-related code (not just keyword matching)
  2. Traced the WebSocket connection flow through multiple classes
  3. Found the Firebase Cloud Messaging (FCM) fallback for offline users
  4. Identified that push notifications were sent via AWS SNS as a backup
  5. Highlighted a race condition where messages could be lost during network transitions

This would have taken hours of grepping and reading. With Greptile, Claude had the full picture in 30 seconds.

Part 2: Custom Subagents - Automated Code Review

This is where things get really powerful. I've built custom Claude Code subagents that automatically review my code as I write it, enforcing architectural patterns and catching security issues before they reach production.

Backend: Architecture & Security Reviewers

My Spring Boot backend has two subagents in .claude/agents/:

1. backend-architecture-reviewer

Triggers automatically when:

  • New controllers, services, or repositories are created
  • Entity or DTO files are modified
  • Database schemas change
  • Custom @Query annotations are used

What it checks:

  • Structured logging compliance: Ensures I never use System.out.println(), printStackTrace(), or unstructured logs. All logging must follow the pattern:
    log.info("OPERATION_PHASE | key={} | key={}", value1, value2);
  • Soft delete enforcement: Verifies that delete operations set deletedAt timestamp instead of hard deleting (critical for data integrity)
  • Layer architecture violations: Prevents controllers from calling repositories directly (must go through services)
  • Authorization checks: Ensures @RequiresRole annotations exist on service methods
  • DTO patterns: Validates mobile DTOs are in the correct dto/mobile/ package
  • MapStruct conventions: Checks that entity-to-DTO mapping follows established patterns
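To make the required log shape concrete, here is a tiny Java sketch that renders it. The `StructuredLog` helper is hypothetical—the real code simply calls `log.info` with this pattern directly—but it shows exactly what the reviewer expects a line to look like:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical helper that renders the enforced log pattern:
// OPERATION_PHASE | key=value | key=value
final class StructuredLog {
    static String format(String operationPhase, Map<String, Object> fields) {
        if (fields.isEmpty()) {
            return operationPhase;
        }
        String pairs = fields.entrySet().stream()
                .map(e -> e.getKey() + "=" + e.getValue())
                .collect(Collectors.joining(" | "));
        return operationPhase + " | " + pairs;
    }

    public static void main(String[] args) {
        Map<String, Object> fields = new LinkedHashMap<>();
        fields.put("userId", 42);
        fields.put("email", "ada@example.com");
        System.out.println(format("USER_CREATE_SUCCESS", fields));
        // USER_CREATE_SUCCESS | userId=42 | email=ada@example.com
    }
}
```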

Real example: I accidentally used repository.delete(entity) instead of setting deletedAt. The architecture reviewer flagged it immediately:

❌ CRITICAL: Hard delete detected in UserService.java:142
   ➜ Rule: All entities must use soft delete (set deletedAt timestamp)
   ➜ Found: repository.delete(user)
   ➜ Required: user.setDeletedAt(LocalDateTime.now()); repository.save(user)

This kind of mistake could have corrupted production data. Caught in seconds.
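For readers unfamiliar with the pattern, here is a minimal, self-contained Java sketch of why soft delete matters (illustrative names, not the real OnlyCampus entities): the row survives, and read queries simply filter it out, the in-memory equivalent of a `WHERE deleted_at IS NULL` clause.

```java
import java.time.LocalDateTime;
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the soft-delete pattern: deletes set a timestamp,
// and "find" queries filter deleted rows out instead of removing them.
final class SoftDeleteSketch {
    static final class User {
        final long id;
        LocalDateTime deletedAt; // null means active
        User(long id) { this.id = id; }
    }

    static final class UserRepository {
        private final List<User> table = new ArrayList<>();

        void save(User u) { if (!table.contains(u)) table.add(u); }

        // Soft delete: mark the row instead of removing it.
        void softDelete(User u) { u.deletedAt = LocalDateTime.now(); }

        // Equivalent of a `WHERE deleted_at IS NULL` filter.
        List<User> findAllActive() {
            return table.stream().filter(u -> u.deletedAt == null).toList();
        }
    }

    static int activeCountAfterDelete() {
        UserRepository repo = new UserRepository();
        User a = new User(1);
        User b = new User(2);
        repo.save(a);
        repo.save(b);
        repo.softDelete(a);
        // User 1's data is retained in the table, just hidden from reads.
        return repo.findAllActive().size();
    }
}
```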

2. security-reviewer (backend)

Triggers automatically when:

  • Authentication or authorization code changes
  • Controllers are modified (checks endpoint security)
  • Database queries are written (SQL injection risks)
  • Configuration files change (CORS, secrets management)

What it checks:

  • No hardcoded credentials: Scans for API keys, JWT secrets, passwords in code
  • SQL injection prevention: Ensures queries use parameterized statements
  • Logging security: Prevents logging passwords, tokens, or full entity objects
  • Input validation: Checks that DTOs have validation annotations (@NotNull, @Email, etc.)
  • Secure defaults: Verifies CORS settings, HTTPS enforcement, CSRF protection

Real example: I was debugging and temporarily added debug logging:

log.debug("User login attempt: {}", loginRequest);

The security reviewer immediately blocked it:

🚨 SECURITY VIOLATION: Potential password logging in AuthController.java:67
   ➜ Detected: log.debug("User login attempt: {}", loginRequest)
   ➜ Risk: loginRequest likely contains password field
   ➜ Fix: Log only safe fields: log.debug("Login attempt | email={}", loginRequest.getEmail())

This could have leaked passwords into CloudWatch logs. Disaster averted.

iOS: Architecture, Security, and Cross-Platform Reviewers

The iOS app has three subagents:

1. ios-architecture-reviewer

Enforces:

  • MVVM compliance: ViewModels must be @MainActor, use @Published for state
  • Component reusability: Searches for existing components before allowing new ones (prevents duplication)
  • Localization: All user-facing strings must be in OCLocalizationKey enum (supports EN + TR)
  • Logging compliance: Bans print(), requires enum-based FWLogger

2. security-reviewer

Checks:

  • No hardcoded API keys or tokens
  • Keychain usage for credential storage (not UserDefaults)
  • HTTPS-only network requests
  • No logging of tokens or passwords

3. cross-platform-reviewer

This is the game-changer for multi-platform development. It automatically verifies iOS/Android/Backend consistency:

  • DTO field alignment: Checks that iOS models match backend mobile DTOs (in hocam-backend/src/.../dto/mobile/)
  • API endpoint parity: Ensures iOS API paths match backend ApiPaths.java
  • Localization key sync: Validates that all iOS localization keys exist in Android strings.xml
  • Feature behavior equivalence: Flags when iOS implements a feature differently than Android

Real example: I added a new field to the backend UserResponseDto:

// Backend: UserResponseDto.java
public class UserResponseDto {
    private String bio; // NEW FIELD
    // ... other fields
}

When I modified the iOS User model, the cross-platform reviewer ran automatically and warned:

⚠️  CROSS-PLATFORM DRIFT DETECTED
   ➜ Backend added 'bio' field to UserResponseDto
   ➜ iOS: Added to User model ✅
   ➜ Android: NOT FOUND in User data class ❌
   ➜ Action required: Update onlycampus-android/app/.../domain/model/User.kt

It even provided the exact Android file path. I would have shipped inconsistent mobile apps without this check.
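The core of that drift check can be sketched in a few lines of Java: compare the declared field names of the backend DTO against the mobile model and report whatever the mobile side is missing. The two demo classes below are stand-ins, not OnlyCampus code:

```java
import java.lang.reflect.Field;
import java.util.LinkedHashSet;
import java.util.Set;

// Rough sketch of the drift check: diff the field names of a
// "backend DTO" against a "mobile model" via reflection.
final class DriftCheck {
    static class BackendUserDto { String name; String email; String bio; }
    static class AndroidUser    { String name; String email; } // missing: bio

    static Set<String> fieldNames(Class<?> c) {
        Set<String> names = new LinkedHashSet<>();
        for (Field f : c.getDeclaredFields()) names.add(f.getName());
        return names;
    }

    // Fields present on the backend DTO but absent from the mobile model.
    static Set<String> missingOnMobile(Class<?> backend, Class<?> mobile) {
        Set<String> missing = new LinkedHashSet<>(fieldNames(backend));
        missing.removeAll(fieldNames(mobile));
        return missing;
    }
}
```

A real implementation would parse Kotlin and Swift source rather than reflect over loaded classes, but the set-difference idea is the same.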

Android: Architecture & Security Reviewers

Similar setup to iOS, with Android-specific rules:

android-architecture-reviewer

Enforces:

  • Zero string literals policy: All text in stringResource(R.string.*), all routes in NavigationRoute enum
  • Design token compliance: No hardcoded colors, spacing, or typography (must use OCColorPalette, OCSpacing, OCFont)
  • State management: Enforces sealed interfaces for UI state (bans boolean flags like isLoading)
  • Logging compliance: Bans Log.d, println, requires typed FWLogger
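The sealed-state rule above uses Kotlin sealed interfaces in the real codebase; since modern Java has sealed types too, here is the same idea sketched in Java with illustrative names. The point is that the compiler forces a branch per state, which independent boolean flags can never guarantee:

```java
// Sealed UI state instead of boolean flags like `isLoading`.
final class UiStateDemo {
    sealed interface UiState permits Loading, Success, Error {}
    record Loading() implements UiState {}
    record Success(String data) implements UiState {}
    record Error(String message) implements UiState {}

    // Exhaustive handling: adding a new state breaks compilation
    // until every consumer handles it.
    static String render(UiState state) {
        return switch (state) {
            case Loading l -> "spinner";
            case Success s -> "list: " + s.data();
            case Error e   -> "retry: " + e.message();
        };
    }
}
```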

Real example: I hardcoded a color while prototyping:

Box(modifier = Modifier.background(Color(0xFF3498db)))

The architecture reviewer blocked the commit:

❌ DESIGN TOKEN VIOLATION: Hardcoded color in HomeScreen.kt:89
   ➜ Found: Color(0xFF3498db)
   ➜ Required: OCColorPalette.dark.primary or MaterialTheme.colorScheme.*
   ➜ Reason: Violates iOS parity and theme consistency

This kind of enforcement is how we maintain perfect visual consistency with iOS.

Part 3: Custom Skills - Reusable Development Workflows

Skills are reusable Claude Code workflows that I can invoke with simple commands like /api-client or /test-gen-ios. I've built 14 global skills organized into two categories: cross-platform skills that work across all projects, and platform-specific skills tailored to iOS, Android, or the backend.

Cross-Platform Skills (8 Total)

1. api-client - Generate API Client Code

Usage: /api-client ios Get user badges - GET /api/gamification/badges/:userId

What it does: Generates complete API client code (service protocol/interface + implementation) for iOS and Android from backend endpoint specifications.

Key features:

  • iOS: Creates protocol + async/await implementation
  • Android: Creates interface + ServiceImpl with Result<T> wrapper
  • Ensures DTOs match backend mobile DTOs exactly
  • Adds proper error handling and validation
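The `Result<T>` wrapper mentioned for Android is Kotlin's built-in `kotlin.Result` shape; here is a minimal Java sketch of the same idea (hypothetical types, not the generated code), showing why callers can't forget the error path:

```java
// A Result<T> sketch: a call either succeeds with a value or fails
// with an exception, and the caller must handle both.
final class ResultDemo {
    sealed interface Result<T> permits Success, Failure {}
    record Success<T>(T value) implements Result<T> {}
    record Failure<T>(Exception error) implements Result<T> {}

    interface SupplierThrows<T> { T get() throws Exception; }

    // Analogue of Kotlin's runCatching { ... }.
    static <T> Result<T> runCatching(SupplierThrows<T> call) {
        try { return new Success<>(call.get()); }
        catch (Exception e) { return new Failure<>(e); }
    }

    static String describe(Result<String> r) {
        return switch (r) {
            case Success<String> s -> "ok: " + s.value();
            case Failure<String> f -> "error: " + f.error().getMessage();
        };
    }
}
```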

2. analytics-event - Add Analytics Tracking

Usage: /analytics-event both User tapped marketplace "See All" button

What it does: Adds analytics event tracking consistently across platforms with NO magic strings.

Key features:

  • Creates typed event constants (iOS: enums, Android: const val)
  • Adds parameter constants with proper documentation
  • Ensures event names match across platforms
  • Suggests implementation locations in codebase

3. deep-link - Add Deep Link Support

Usage: /deep-link both Link to marketplace item with itemId parameter

What it does: Adds deep link routes for screens/flows across platforms.

Key features:

  • iOS: Updates DeeplinkRoute enum and handler
  • Android: Updates NavGraph with deep link patterns
  • Ensures URL format consistency (onlycampus://feature/:id)
  • Tests with push notifications and external links
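Parsing that URL format is mechanical; here is a hedged Java sketch (the route names are examples, not the real route table). For `onlycampus://marketplace/123`, the URI host carries the feature and the path carries the id:

```java
import java.net.URI;

// Sketch of parsing the shared deep-link scheme: onlycampus://feature/:id
final class DeepLinkParser {
    record Route(String feature, String id) {}

    static Route parse(String url) {
        URI uri = URI.create(url);
        if (!"onlycampus".equals(uri.getScheme())) {
            throw new IllegalArgumentException("Unknown scheme: " + url);
        }
        // Host is the feature segment; the path (if any) is "/<id>".
        String feature = uri.getHost();
        String path = uri.getPath();
        String id = (path == null || path.isEmpty()) ? null : path.substring(1);
        return new Route(feature, id);
    }
}
```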

4. create-endpoint - Create REST API Endpoints

Usage: /create-endpoint POST /api/marketplace/listings - create listing with CreateMarketplaceListingDto

What it does: Creates complete Spring Boot REST endpoints with controller, service, and DTOs.

Key features:

  • Generates controller with proper annotations (@RequiresRole, @Audited)
  • Creates request/response DTOs in correct package
  • Adds service layer with business logic
  • Includes Swagger/OpenAPI documentation

5. code-complexity-analyzer - Analyze Code Complexity

Usage: /code-complexity-analyzer src/main/java/com/example/service/

What it does: Analyzes code complexity metrics and suggests refactoring opportunities.

Checks:

  • Cyclomatic complexity (flags methods > 10)
  • Method length (flags > 50 lines)
  • Nesting depth (flags > 4 levels)
  • Class size and parameter count
  • Provides actionable refactoring suggestions
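As a rough illustration of the first check, here is a keyword-counting approximation of cyclomatic complexity in Java. A real analyzer parses the AST, so treat this purely as a sketch of the metric:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Approximate cyclomatic complexity: count decision points in a
// method body and add 1. Straight-line code has complexity 1.
final class ComplexityEstimate {
    private static final Pattern DECISION = Pattern.compile(
            "\\b(if|for|while|case|catch)\\b|&&|\\|\\|");

    static int estimate(String methodBody) {
        Matcher m = DECISION.matcher(methodBody);
        int count = 0;
        while (m.find()) count++;
        return count + 1;
    }
}
```

For example, `if (a && b) { for (;;) {} }` has three decision points (`if`, `&&`, `for`), so the estimate is 4; a method scoring above 10 would be flagged for refactoring.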

6. ui-consistency-reviewer - Review UI Design System

Usage: /ui-consistency-reviewer ios OnlyCampus/UI/Marketplace/

What it does: Reviews UI code for design system compliance and consistency.

Checks:

  • Design token usage (colors, spacing, typography - no hardcoded values)
  • Component reusability (uses shared components)
  • Accessibility (labels, touch targets, contrast)
  • Platform-specific best practices (SwiftUI, Compose)

7. cross-platform-consistency-checker - Verify Platform Parity

Usage: /cross-platform-consistency-checker marketplace listing creation

What it does: Checks consistency between iOS and Android implementations of the same feature.

Validates:

  • API endpoint usage matches on both platforms
  • DTO fields are identical
  • UI/UX flows are similar
  • Analytics events match
  • Deep link URLs are consistent

Real example: Detected that iOS allowed 10 photos for marketplace listings but Android only allowed 5. Flagged the inconsistency immediately.

8. localization-reviewer - Review Localization

Usage: /localization-reviewer both

What it does: Reviews localization implementation and finds missing or hardcoded strings.

Checks:

  • Finds hardcoded strings in UI code
  • Identifies missing translation keys (EN vs TR)
  • Validates key naming conventions
  • Compares translation completeness across languages

Platform-Specific Skills (6 Total)

9. test-gen-ios - Generate iOS Tests

Usage: /test-gen-ios OnlyCampus/UI/Settings/SettingsViewModel.swift

What it creates: Comprehensive unit tests for iOS ViewModels, services, and UI components.

Features:

  • ViewModel tests with @MainActor and Combine testing
  • Service tests with mock dependencies
  • UI tests with XCTest and accessibility checks
  • Follows Given-When-Then structure

10. test-gen-android - Generate Android Tests

Usage: /test-gen-android app/src/main/java/.../SettingsViewModel.kt --type all

What it creates: Unit tests, Compose tests, and Paparazzi snapshot tests for Android.

Features:

  • ViewModel tests with Turbine for Flow testing
  • Compose UI tests with semantic queries
  • Paparazzi screenshot tests for visual regression
  • Uses MockK for dependency mocking

11. test-gen-backend - Generate Spring Boot Tests

Usage: /test-gen-backend src/main/java/.../UserService.java --type integration

What it creates: Unit and integration tests for Spring Boot services, controllers, and repositories.

Features:

  • Service tests with @Mock and @InjectMocks
  • Controller tests with MockMvc
  • Repository tests with @DataJpaTest
  • Integration tests with @SpringBootTest

12. new-component-ios - Generate SwiftUI Components

Usage: /new-component-ios ProfileEdit --feature Profile

What it creates: Complete SwiftUI component with ViewModel, container, and state management.

Generates:

  • View with design system compliance (OCGradient, OCSpacing)
  • ViewModel with @MainActor and @Published state
  • ViewContainer for lifecycle management
  • Localization keys in OCLocalizationKey enum

13. new-component-android - Generate Compose Screens

Usage: /new-component-android ProfileEdit --feature profile --with-test

What it creates: Jetpack Compose screen with Hilt ViewModel and optional Paparazzi tests.

Generates:

  • Composable screen with design token compliance
  • @HiltViewModel with StateFlow state management
  • Event/Action sealed interfaces
  • Paparazzi snapshot tests (if --with-test)

14. sync-localization - Sync Localization Keys

Usage: /sync-localization --android

What it does: Validates and syncs localization keys across iOS and Android, with EN/TR translation checks.

Workflow when I run /sync-localization:

  1. Scans iOS OCLocalizationKey.swift enum for all keys
  2. Scans Android strings.xml for all keys
  3. Compares and reports:
  • Keys in iOS but missing in Android
  • Keys in Android but missing in iOS
  • Keys with mismatched content (EN text differs between platforms)

  4. Offers to auto-fix by adding missing keys or aligning content

Real example output:

📱 LOCALIZATION SYNC REPORT

iOS Keys: 247
Android Keys: 243

❌ Missing in Android (4 keys):
   • profile_edit_bio
   • settings_notifications_sound
   • error_network_timeout
   • gamification_streak_broken

⚠️  Content Mismatch (2 keys):
   • login_welcome_back
     iOS: "Welcome back!"
     Android: "Welcome Back"  (capitalization differs)

   • marketplace_item_sold
     iOS: "Item marked as sold"
     Android: "Item sold successfully"  (wording differs)

✅ Auto-fix available: Add 4 missing keys to Android strings.xml
   Continue? (y/n)

Without this skill, keeping two mobile apps synchronized across English and Turkish would be a manual nightmare prone to errors.
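The heart of the skill is a set diff plus a content comparison. Here is a minimal Java sketch, with maps standing in for the parsed OCLocalizationKey enum and strings.xml (the actual skill also parses the source files and handles the TR locale):

```java
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

// Core of the localization sync check: diff the key sets from the
// two platforms and collect keys whose English text differs.
final class LocalizationSync {
    record Report(Set<String> missingInAndroid,
                  Set<String> missingInIos,
                  Set<String> contentMismatch) {}

    static Report compare(Map<String, String> iosEn,
                          Map<String, String> androidEn) {
        Set<String> missingInAndroid = new TreeSet<>(iosEn.keySet());
        missingInAndroid.removeAll(androidEn.keySet());

        Set<String> missingInIos = new TreeSet<>(androidEn.keySet());
        missingInIos.removeAll(iosEn.keySet());

        // Keys present on both platforms whose EN text doesn't match.
        Set<String> mismatch = new TreeSet<>();
        for (String key : iosEn.keySet()) {
            String android = androidEn.get(key);
            if (android != null && !android.equals(iosEn.get(key))) {
                mismatch.add(key);
            }
        }
        return new Report(missingInAndroid, missingInIos, mismatch);
    }
}
```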

Part 4: How They Work Together

The real magic is when MCP servers, subagents, and skills combine. Here's a real workflow:

Scenario: Add "Report User" Feature

1. I ask Claude: "Add a report user feature to OnlyCampus"

2. Notion MCP creates task: Automatically creates a Notion task "Implement User Reporting"

3. Context7 MCP queries docs: Fetches current Spring Boot best practices for moderation features

4. Greptile MCP searches codebase: Finds existing moderation patterns in the codebase

5. I use /create-endpoint skill: /create-endpoint POST /api/users/:userId/report - create user report

  • Creates ReportController, ReportService, ReportRepository
  • Adds Report entity with proper soft delete
  • Creates ReportDto in dto/mobile/ (source of truth for mobile apps)
  • Adds authorization (@RequiresRole) and audit logging (@Audited)

6. backend-architecture-reviewer runs automatically:

  • ✅ Logging compliance: Structured logs used
  • ✅ Soft delete: deletedAt field present
  • ✅ Authorization: @RequiresRole on service methods

7. security-reviewer runs automatically:

  • ✅ No sensitive data in logs
  • ✅ Input validation on ReportDto
  • ✅ Authorization prevents users from reporting themselves

8. I use /api-client and /new-component-ios skills:

  • /api-client ios POST /api/users/:userId/report - creates API service
  • /new-component-ios ReportUser --feature Profile - generates full component
  • Creates ReportUserView, ReportUserViewModel with proper patterns
  • Updates OCLocalizationKey with new strings
  • /analytics-event ios User submitted report - adds analytics tracking

9. ios-architecture-reviewer runs:

  • ✅ MVVM compliance
  • ✅ Localization keys defined
  • ✅ No print() statements

10. cross-platform-reviewer runs:

  • ⚠️ iOS added localization keys not in Android
  • ⚠️ Android hasn't implemented ReportUser model yet

11. Claude implements Android:

  • Creates ReportUserScreen composable
  • Adds Report data class matching backend DTO
  • Updates strings.xml with EN + TR translations

12. android-architecture-reviewer runs:

  • ✅ Design tokens used (no hardcoded colors)
  • ✅ String resources (no literals)
  • ✅ Sealed UI state classes

13. sync-localization skill runs:

  • ✅ iOS and Android localization keys match
  • ✅ EN and TR translations consistent

14. Notion MCP updates task: Marks "Implement User Reporting" as Complete, generates documentation page

Total time: ~20 minutes for a complete cross-platform feature with automated reviews, security checks, and documentation.

Without this setup: Easily 4-6 hours of manual work, with high risk of:

  • iOS/Android inconsistency
  • Missing security validations
  • Localization drift
  • Architecture violations
  • Forgotten documentation

Setting This Up Yourself

1. Configure MCP Servers

In ~/.claude/config.json:

{
  "mcpServers": {
    "firebase": {
      "command": "npx",
      "args": ["-y", "@firebase/mcp-server"],
      "env": {
        "GOOGLE_APPLICATION_CREDENTIALS": "/path/to/service-account.json"
      }
    },
    "notion": {
      "enabled": true
    },
    "context7": {
      "enabled": true
    },
    "greptile": {
      "enabled": true
    }
  }
}

2. Create Custom Subagents

In your project: .claude/agents/architecture-reviewer.md:

# Architecture Reviewer Agent

## Purpose
Enforce architectural patterns and code quality standards.

## Triggers
- New controllers, services, repositories
- Entity or DTO modifications
- Configuration changes

## Checks
- [ ] Logging compliance (no System.out.println)
- [ ] Layer separation (controllers don't call repos directly)
- [ ] Naming conventions
- [ ] Design pattern compliance

## Actions
- Report violations with file:line references
- Suggest fixes
- Block commits if critical violations found

3. Build Custom Skills

In .claude/skills/sync-localization/SKILL.md:

# Sync Localization Skill

## Description
Validates and syncs localization keys across iOS and Android.

## Usage
/sync-localization

## Workflow
1. Read iOS localization enum
2. Read Android strings.xml
3. Compare keys
4. Report mismatches
5. Offer auto-fix

4. Configure Permissions

In .claude/settings.local.json:

{
  "permissions": {
    "allowedCommands": {
      "bash": ["./gradlew", "./mvnw", "git", "npm", "xcodebuild"]
    }
  },
  "hooks": {
    "post-tool-use": {
      "*.java": "./mvnw spotless:apply",
      "*.kt": "./gradlew ktlintFormat",
      "*.swift": "swiftlint --fix"
    }
  },
  "additionalWorkingDirectories": [
    "/path/to/backend",
    "/path/to/ios",
    "/path/to/android"
  ]
}

Lessons Learned

1. Start with One Subagent

Don't try to build all subagents at once. Start with the most painful manual check (for me: structured logging compliance) and automate that first.

2. Make Subagents Strict but Helpful

Bad subagent output:

❌ Error: Violation detected

Good subagent output:

❌ DESIGN TOKEN VIOLATION: Hardcoded spacing in HomeScreen.kt:45
   ➜ Found: .padding(16.dp)
   ➜ Required: .padding(OCSpacing.md)
   ➜ Fix: Replace hardcoded value with OCSpacing.md (16.dp equivalent)

3. Use Cross-Platform Reviewers for Multi-Repo Projects

The cross-platform reviewer was the hardest to build but has the highest ROI. It's caught dozens of iOS/Android drift issues before they reached production.

4. MCP Servers Eliminate Context Switching

Before Notion MCP: Tab to Notion → Create task → Tab back → Start coding

After Notion MCP: "Create a task for this" → Claude does it → Keep coding

Sounds small, but eliminating context switches adds up to hours per week.

Metrics: Before and After

Before this setup:

  • Feature implementation: 4-6 hours (backend + iOS + Android)
  • Code review iterations: 2-3 rounds (architectural issues, security gaps)
  • Localization bugs: 1-2 per release (missing keys, mismatched content)
  • Documentation: Often delayed or skipped
  • Cross-platform drift incidents: ~5 per month

After this setup:

  • Feature implementation: 20-40 minutes (backend + iOS + Android)
  • Code review iterations: 0-1 rounds (automated reviews catch most issues)
  • Localization bugs: 0 in last 3 months (sync skill catches them)
  • Documentation: Generated automatically via Notion MCP
  • Cross-platform drift incidents: 0 (cross-platform reviewer prevents them)

Development velocity increase: ~10x for new features.

Final Thoughts

MCP servers, custom subagents, and skills have fundamentally changed how I build software. I'm not just writing code faster—I'm writing better code that's more secure, more consistent, and better documented.

The setup takes time (I spent about 2 weeks building and refining my subagents), but the ROI is massive. Every bug caught by a subagent is a production incident prevented. Every localization sync is hours of manual QA saved. Every Notion task auto-created is one less context switch.

If you're building non-trivial software with Claude Code—especially multi-platform or multi-language projects—invest in this infrastructure. Your future self will thank you.
