# Kernelbot Data Processing Skills

This document describes how to extract and process submission data from the Kernelbot database.

## Database Connection

The production database is hosted on Heroku. **NEVER run write operations (INSERT, UPDATE, DELETE) on this database.**

```bash
# Get DATABASE_URL from Heroku
heroku config:get DATABASE_URL --app discord-cluster-manager
```

## Database Schema

The relevant tables are in the `leaderboard` schema:

| Table | Description |
|-------|-------------|
| `leaderboard.leaderboard` | Problem definitions (id, name, deadline, task, description) |
| `leaderboard.submission` | User submissions (id, leaderboard_id, user_id, code_id, submission_time, status) |
| `leaderboard.runs` | Execution results (submission_id, score, passed, mode, runner, result) |
| `leaderboard.user_info` | User details (id, user_name) |
| `leaderboard.gpu_type` | GPU types per problem (leaderboard_id, gpu_type) |
| `leaderboard.code_files` | Actual submission code content (old_code text, code bytea) |

## Key Problem IDs

### NVFP4 Problems

- **595**: nvfp4_gemv
- **597**: nvfp4_gemm
- **598**: nvfp4_dual_gemm
- **730**: nvfp4_group_gemm (not released yet)

### AMD Problems

- **398**: amd-identity
- **399**: amd-fp8-mm
- **430**: amd-mixture-of-experts
- **463**: amd-mla-decode
- **563**: amd-all2all
- **564**: amd-gemm-rs
- **565**: amd-ag-gemm

## Run Modes

| Mode | Description | Has Score? |
|------|-------------|------------|
| `test` | Correctness tests | No |
| `benchmark` | Performance benchmarks (internal) | No |
| `leaderboard` | Official leaderboard runs | **Yes** |
| `profile.0-3` | Profiling runs | No |

**Important:**

- Use `mode = 'leaderboard'` when joining runs to get scores.
- **Lower scores are better** (scores are execution time in seconds).
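As a read-only illustration of how these tables fit together, the sketch below pulls the best leaderboard score per user for one problem (595 = nvfp4_gemv). It assumes `runs.submission_id` references `submission.id` and that `passed` is a boolean column; the canonical, maintained queries live in `queries.sql`.

```python
import pandas as pd
import psycopg2

DATABASE_URL = "..."  # from heroku config:get

conn = psycopg2.connect(DATABASE_URL)

# Best (lowest) leaderboard score per user for problem 595 (nvfp4_gemv).
# Assumed join keys: runs.submission_id -> submission.id, submission.user_id -> user_info.id.
query = """
SELECT ui.user_name, MIN(r.score) AS best_score
FROM leaderboard.submission s
JOIN leaderboard.runs r ON r.submission_id = s.id
JOIN leaderboard.user_info ui ON ui.id = s.user_id
WHERE s.leaderboard_id = 595
  AND r.mode = 'leaderboard'
  AND r.passed
GROUP BY ui.user_name
ORDER BY best_score ASC
LIMIT 10;
"""

top10 = pd.read_sql(query, conn)
print(top10)
```

Because lower scores are better, ordering by `best_score ASC` yields the leaderboard ranking directly.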
## SQL Queries

All SQL queries are in `queries.sql`. Key queries:

- List all problems
- Check submission counts
- Export deduplicated submissions with code
- Get top N submissions
- Get user progression over time

## Adding Support for a New Problem

### Step 1: Find the Problem ID

Use the "LIST ALL PROBLEMS" query from `queries.sql`.

### Step 2: Check Submission Counts

Use the "CHECK SUBMISSION COUNTS" query from `queries.sql`.

### Step 3: Export Deduplicated Submissions

Use the "EXPORT DEDUPLICATED SUBMISSIONS WITH CODE" query from `queries.sql`.

```python
import pandas as pd
import psycopg2

DATABASE_URL = "..."  # from heroku config:get

conn = psycopg2.connect(DATABASE_URL)

# Paste the "EXPORT DEDUPLICATED SUBMISSIONS WITH CODE" query from queries.sql
# here, updating the problem IDs for the new problem.
query = """..."""

df = pd.read_sql(query, conn)
df.to_parquet('new_problem_submissions.parquet', index=False)
```

### Step 4: Verify Data Quality

```python
from analyze_submissions import load_submissions, leaderboard_summary

df = load_submissions('new_problem_submissions.parquet')
print(leaderboard_summary(df))
```

## Accessing Submission Code

The parquet files include the full code content for each submission:

```python
from analyze_submissions import load_submissions

df = load_submissions()

# Get a specific user's best submission
user_subs = df[(df['user_name'] == 'gau.nernst') & (df['problem_name'] == 'nvfp4_gemv')]
best = user_subs.sort_values('score').head(1)

# Access the code
code = best['code'].values[0]
print(code)
```

## Helper Functions

Use `analyze_submissions.py`:

```python
from analyze_submissions import (
    load_submissions,     # Load parquet file
    author_progression,   # See user's submissions over time
    top_contestants,      # Get leaderboard rankings
    leaderboard_summary,  # Summary stats per problem
    user_stats,           # Stats for a specific user
    format_score          # Format score with units (us, ms, s)
)
```

## Environment Setup

```bash
uv venv .venv
source .venv/bin/activate
uv pip install pandas pyarrow psycopg2-binary
```

## Files

| File | Description |
|------|-------------|
| `nvidia_nvfp4_submissions.parquet` | Deduplicated NVIDIA NVFP4 submissions with code (~1.4 GB) |
| `queries.sql` | All SQL queries for data extraction |
| `scripts/nvfp4/analyze_submissions.py` | Helper functions library |
| `scripts/nvfp4/get_fastest_submission.py` | Print user's fastest submission |
| `scripts/nvfp4/query_submissions.py` | List submission IDs or query specific ID |

## Review Checklist Before Pushing

1. Verify submission counts match expectations
2. Check for any anomalies in scores (negative, extremely large, etc.)
3. Confirm deduplication worked correctly
4. Test helper functions work with the new data
5. Run `python scripts/nvfp4/query_submissions.py` to verify
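A minimal sketch of checks 1–3, assuming the parquet columns used elsewhere in this document (`problem_name`, `score`, `code`); the 60-second threshold is only an illustrative cutoff, since scores are kernel execution times in seconds.

```python
from analyze_submissions import load_submissions

df = load_submissions('new_problem_submissions.parquet')

# 1. Submission counts per problem (compare against the Step 2 counts)
print(df.groupby('problem_name').size())

# 2. Score anomalies: negative or implausibly large (threshold is illustrative)
suspicious = df[(df['score'] < 0) | (df['score'] > 60)]
print(f"{len(suspicious)} suspicious scores")

# 3. Deduplication: identical code appearing more than once for the same problem
dupes = df.duplicated(subset=['problem_name', 'code']).sum()
print(f"{dupes} duplicate code entries")
```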