A hiring process designed to find builders, not talkers.
Submit your application with your GitHub username. No resume required — just your identity and a desire to build.
Once accepted, you get a private GitHub repo with a real-world engineering challenge. The clock starts ticking.
You have 7 days to architect, implement, test, and document your solution. Work how you actually work — we're watching the process, not just the output.
Our automated pipeline analyzes your code quality, security, architecture, and git discipline. Human reviewers evaluate your AI implementation and engineering judgment.
Eight dimensions scored through automated analysis and expert review.
Code quality: linting, type safety, complexity metrics, and clean code principles
Architecture: system design, separation of concerns, scalability, and design patterns
AI implementation: prompt engineering, agent design, tool calling, and error handling
Security: dependency scanning, secrets management, input validation, and authentication
DevOps: infrastructure-as-code, deployment strategy, and observability
Testing: coverage, test quality, edge cases, and testing strategy
Git discipline: commit cadence, message quality, branching strategy, and PR workflow
Collaboration: responsiveness to feedback, documentation quality, and communication signals
We analyze your entire commit history — cadence, message quality, refactoring patterns, and work habits. One giant commit of AI-generated code stands out immediately.
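The giant-commit signal can be sketched as a simple heuristic. This is an illustrative sketch, not the actual pipeline: the sample commit data and the 50% threshold are assumptions, standing in for metadata that might be extracted from `git log --numstat`.

```python
from datetime import datetime

# Hypothetical commit metadata: (timestamp, lines changed) tuples.
# Sample data only; a real analyzer would parse `git log --numstat`.
commits = [
    (datetime(2024, 1, 1, 10), 120),
    (datetime(2024, 1, 2, 14), 80),
    (datetime(2024, 1, 6, 23), 4200),  # one giant drop near the deadline
]

def flag_giant_commits(commits, ratio=0.5):
    """Flag commits that account for more than `ratio` of all changed lines."""
    total = sum(size for _, size in commits)
    return [(ts, size) for ts, size in commits if size > ratio * total]

print(flag_giant_commits(commits))  # the 4,200-line commit stands out
```

A steady cadence of small commits never trips this check; a single dump of generated code does.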
During the challenge, an AI-powered bot simulates a real engineering team — opening issues, reviewing PRs, and reporting bugs. How you respond reveals your collaboration skills.
Security scanners, linters, architecture analyzers, and test coverage tools provide objective measurements. Human reviewers add context and judgment.
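How tool measurements might roll up into one overall score can be sketched as a weighted average across the eight dimensions. The dimension names follow the rubric above, but the weights are illustrative assumptions, not the actual scoring formula:

```python
# Illustrative weights only; the real rubric may weight dimensions differently.
WEIGHTS = {
    "code_quality": 0.15,
    "architecture": 0.15,
    "ai_implementation": 0.15,
    "security": 0.15,
    "devops": 0.10,
    "testing": 0.10,
    "git_discipline": 0.10,
    "collaboration": 0.10,
}

def overall_score(dimension_scores: dict[str, float]) -> float:
    """Weighted average of per-dimension scores (each 0-100)."""
    return sum(WEIGHTS[d] * dimension_scores.get(d, 0.0) for d in WEIGHTS)

print(overall_score({d: 80.0 for d in WEIGHTS}))  # uniform 80s -> 80.0
```

Human reviewers would then adjust or annotate the automated numbers rather than replace them.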