Our Methodology
How Exiqus analyzes repositories with evidence-based insights, not arbitrary scores
Fundamental Approach: Evidence, Not Scores
Exiqus has adopted a purely evidence-based approach to developer assessment. We DO NOT assign numerical scores, ratings, or percentages to developers. Instead, we extract observable patterns from public repositories and present them as evidence for hiring decisions.
No Numerical Scoring
No "8.5/10 code quality" scores. Only observable evidence like "126 test files for 9 code files."
Observable Patterns Only
We report facts like "27 bug fix commits (54% of total)", not subjective assessments.
Context-Aware
Same data, different insights based on your selected context (Startup, Enterprise, Agency, Open Source).
What We Analyze
- Code Structure: File counts, language distribution, directory organization
- Commit History: Frequency, timing, message patterns, bug fix ratios (see the sketch after this list)
- Testing Evidence: Test file ratios, CI/CD configurations, testing patterns
- Documentation: README quality, comment density, docs folders
- Collaboration: Contributors, issue references, co-authored commits
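To make these categories concrete, here is a minimal sketch of how two of the reported metrics, the test file ratio and the bug-fix commit percentage, could be derived from a local clone. The extension list, path heuristics, and bug-fix keywords are illustrative placeholders, not Exiqus's actual detection rules.

```python
# Minimal sketch: derive a test-file ratio and bug-fix commit percentage
# from a local clone. Heuristics below are illustrative, not Exiqus's rules.
import subprocess
from pathlib import Path

CODE_EXTENSIONS = {".py", ".js", ".ts", ".go", ".java", ".rb"}  # assumed set
BUG_KEYWORDS = ("fix", "bug", "hotfix", "patch")                # assumed set

def file_counts(repo: Path) -> tuple[int, int]:
    """Count test files vs. other code files by simple path conventions."""
    tests, code = 0, 0
    for path in repo.rglob("*"):
        if path.suffix not in CODE_EXTENSIONS or ".git" in path.parts:
            continue
        if "test" in path.name.lower() or "tests" in path.parts:
            tests += 1
        else:
            code += 1
    return tests, code

def bug_fix_counts(repo: Path) -> tuple[int, int]:
    """Count commits whose subject line mentions a bug-fix keyword."""
    subjects = subprocess.run(
        ["git", "-C", str(repo), "log", "--pretty=%s"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    fixes = sum(1 for s in subjects if any(k in s.lower() for k in BUG_KEYWORDS))
    return fixes, len(subjects)

if __name__ == "__main__":
    repo = Path(".")
    tests, code = file_counts(repo)
    fixes, total = bug_fix_counts(repo)
    print(f"{tests} test files for {code} code files")
    print(f"{fixes} bug fix commits ({fixes / max(total, 1):.0%} of total)")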
What We DON'T Analyze
- Private Data: Pull request discussions, code reviews, private repos
- Runtime Performance: Actual execution speed or resource usage
- Personal Metrics: Individual productivity or time tracking
- Subjective Quality: "Good" vs. "bad" code judgments
- Soft Skills: Communication, teamwork, cultural fit
Context-Aware Analysis
We understand that different roles require different evaluation. A startup needs builders who can experiment and iterate. An enterprise needs architects who consider scale and maintainability. We tailor our analysis accordingly.
Startup Context
For experimental projects, we look for evidence of innovation, learning agility, and rapid prototyping. Perfect for evaluating builders and early-stage contributors.
Enterprise Context
For production-ready code, we assess architectural decisions, team collaboration, and maintainability practices.
Open Source Context
For community projects, we evaluate contribution quality, documentation, and collaborative development skills.
Agency Context
For client-ready developers, we assess versatility, professional practices, and ability to deliver under constraints.
Transparency Note: If a repository has limited patterns for a specific context, we'll tell you. An experimental notebook might generate fewer enterprise-focused questions - and that's honest feedback, not a limitation.
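To illustrate "same data, different insights", here is a hedged sketch of a context-to-focus mapping. The focus lists simply restate the four context descriptions above; the dictionary and function names are hypothetical, not Exiqus's internal configuration.

```python
# Illustrative sketch of context-aware tailoring: the same evidence is kept,
# but each hiring context selects different focus areas for insights and
# questions. Names and lists are assumptions drawn from the text above.
CONTEXT_FOCUS = {
    "startup": ["innovation", "learning agility", "rapid prototyping"],
    "enterprise": ["architectural decisions", "team collaboration", "maintainability"],
    "open_source": ["contribution quality", "documentation", "collaborative development"],
    "agency": ["versatility", "professional practices", "delivery under constraints"],
}

def focus_areas(context: str) -> list[str]:
    """Return the focus areas for a hiring context, or raise if unknown."""
    try:
        return CONTEXT_FOCUS[context]
    except KeyError:
        raise ValueError(f"Unknown context: {context!r}") from None
```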
The Evidence Hierarchy
Important: Insights are Repository-Dependent, Not Fixed
The number of insights generated is determined by repository content, not by a fixed quota. A minimal repo might yield only 1-2 insights even on the Scale+ tier, while feature-rich repos like tinygrad or facebook/react can yield 25-30. We never generate artificial insights just to hit a number; every insight is backed by evidence actually found in the repository.
Direct Observations
Highest Confidence: File counts and sizes • Language percentages • Commit timestamps • Contributor lists
Derived Patterns
High Confidence: Test coverage ratios • Commit frequency trends • Documentation ratios • Bug fix percentages
Development Patterns
Medium Confidence: Collaboration style • Code maintenance habits • Learning indicators • Commit message quality
Contextual Insights
Requires Human Interpretation: Domain expertise markers • Architecture complexity • Team dynamics • Growth trajectories
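For readers who think in data structures, here is a hedged sketch of how an evidence item could be tagged with a level from this hierarchy. The class names, field names, and example values are hypothetical, not Exiqus's schema.

```python
# Sketch of the evidence hierarchy as a data structure: each piece of
# evidence carries the confidence level described above. Names are
# illustrative assumptions, not Exiqus's actual schema.
from dataclasses import dataclass
from enum import Enum

class Confidence(Enum):
    DIRECT_OBSERVATION = "highest"       # file counts, timestamps, contributors
    DERIVED_PATTERN = "high"             # ratios and trends computed from observations
    DEVELOPMENT_PATTERN = "medium"       # collaboration style, maintenance habits
    CONTEXTUAL_INSIGHT = "interpretive"  # requires human interpretation

@dataclass
class Evidence:
    statement: str          # e.g. "27 bug fix commits (54% of total)"
    confidence: Confidence
    source: str             # where in the repository the evidence was found

example = Evidence(
    statement="126 test files for 9 code files",
    confidence=Confidence.DERIVED_PATTERN,
    source="file tree scan",
)
```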
Repository Size Limits by Tier
Our system analyzes repositories of all sizes, with tier-based limits to ensure optimal performance:
- FREE/STARTER: Up to 500MB (standard projects and libraries)
- GROWTH: Up to 2GB (large frameworks and applications)
- SCALE: Up to 5GB (enterprise systems and major projects)
- SCALE+: Up to 10GB (massive monorepos and platform codebases)
Note: If a repository exceeds your tier limit, the system will suggest upgrading to analyze larger repositories. We never attempt partial or incomplete analysis.
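A minimal sketch of that size check follows, assuming limits expressed in megabytes (1 GB approximated as 1,000 MB). The tier keys and function name are illustrative, not part of Exiqus's API.

```python
# Minimal sketch of the tier-based size check described above: a repository
# over the tier's limit is rejected with an upgrade suggestion rather than
# analyzed partially. Limits follow the table above (1 GB ~ 1,000 MB).
TIER_LIMITS_MB = {
    "free": 500,
    "starter": 500,
    "growth": 2_000,
    "scale": 5_000,
    "scale_plus": 10_000,
}

def check_repo_size(repo_size_mb: int, tier: str) -> None:
    """Raise if the repository exceeds the tier limit; never analyze partially."""
    limit = TIER_LIMITS_MB[tier]
    if repo_size_mb > limit:
        raise ValueError(
            f"Repository is {repo_size_mb} MB, above the {limit} MB limit for "
            f"the {tier} tier. Upgrade to analyze larger repositories."
        )
```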
Handling Edge Cases
Minimal/Empty Repos
When a repository has less than 10KB of code and fewer than 5 files, we tell you honestly that it lacks sufficient content for meaningful analysis rather than generating fluff.
Monorepos
Smart sampling ensures efficient analysis without timeouts, maintaining accuracy despite size (up to 10GB on Scale+)
Documentation-Only
Properly classified with no code quality claims, focusing solely on documentation evidence
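Using the thresholds stated above, here is a hedged sketch of how these edge cases could be detected. The documentation-only check and the return labels are illustrative assumptions, not Exiqus's classifier.

```python
# Sketch of the edge-case classification described above, using the stated
# thresholds (under 10 KB of code and fewer than 5 files for minimal repos).
# The parameter names and labels are illustrative assumptions.
def classify_edge_case(code_bytes: int, file_count: int, code_file_count: int) -> str | None:
    if code_bytes < 10 * 1024 and file_count < 5:
        return "minimal"             # insufficient content for meaningful analysis
    if code_file_count == 0:
        return "documentation_only"  # no code quality claims are made
    return None                      # regular repository, full analysis applies
```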
Interview Question Generation
Available across all tiers, our AI generates questions that are:
- Evidence-Based: Reference specific repository data
- Practice-Focused: Centered on technical approaches
- Context-Aware: Tailored to your hiring situation
- Open-Ended: Encourage discussion, not yes/no
Example Question: "I noticed 27 of your commits were bug fixes. Walk me through your approach to debugging in a fast-moving startup environment."
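Here is a hedged sketch of how a single piece of evidence could be turned into an open-ended, context-aware question in the spirit of the example above. The function name and template wording are placeholders, not Exiqus's generated output.

```python
# Illustrative sketch: combine an observed fact with the selected hiring
# context to produce an open-ended question. Template is an assumption.
def question_from_evidence(bug_fix_count: int, context: str) -> str:
    return (
        f"I noticed {bug_fix_count} of your commits were bug fixes. "
        f"Walk me through your approach to debugging in a {context} environment."
    )

print(question_from_evidence(27, "fast-moving startup"))
```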
Limitations We Acknowledge
We're transparent about what we cannot assess (see "What We DON'T Analyze" above): runtime performance, personal productivity, subjective code quality, and soft skills such as communication and cultural fit.
Export and Batch Analysis Features
Export Formats by Tier
- Free: JSON only
- Starter/Growth/Scale: JSON, HTML, PDF
- Scale+: JSON, HTML, PDF, Markdown
Batch Analysis Limits
- Free: No batch analysis
- Starter: 2 repos at once
- Growth: 5 repos at once
- Scale: 10 repos at once
- Scale+: 15 repos at once (see the sketch after this list)
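For reference, here is a hedged sketch of the export and batch limits above collected into a single capability map; the key and field names are illustrative, not Exiqus's configuration format.

```python
# Sketch of the tier capabilities listed above as one lookup table.
# Keys and field names are illustrative assumptions.
TIER_CAPABILITIES = {
    "free":       {"exports": ["json"], "batch_repos": 0},
    "starter":    {"exports": ["json", "html", "pdf"], "batch_repos": 2},
    "growth":     {"exports": ["json", "html", "pdf"], "batch_repos": 5},
    "scale":      {"exports": ["json", "html", "pdf"], "batch_repos": 10},
    "scale_plus": {"exports": ["json", "html", "pdf", "markdown"], "batch_repos": 15},
}
```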
The Human Element
Our analysis is designed to augment human judgment, not replace it: the evidence and questions inform your interviews, but the hiring decision remains yours.
Last Updated: July 2025
Methodology Version: 2.0 - Evidence-Based Approach