
Trending Papers

by AK and the research community

Submitted by taesiri

PaperBanana: Automating Academic Illustration for AI Scientists

PaperBanana is an agentic framework that automates the creation of publication-ready academic illustrations using advanced vision-language models and image generation techniques.

Google · Jan 30, 2026
Submitted by nepfaff

SceneSmith: Agentic Generation of Simulation-Ready Indoor Scenes

SceneSmith is a hierarchical agentic framework that generates simulation-ready indoor environments from natural language prompts through multiple stages involving VLM agents and integrated asset generation techniques.

Submitted by daixufang

Agent Lightning: Train ANY AI Agents with Reinforcement Learning

Agent Lightning is a flexible RL framework for training the LLMs inside arbitrary agents, using a hierarchical RL algorithm and decoupling agent execution from training to handle complex interaction logic.

8 authors · Aug 5, 2025
Submitted by andito

SmolDocling: An ultra-compact vision-language model for end-to-end multi-modal document conversion

SmolDocling is a compact vision-language model that performs end-to-end document conversion with robust performance across various document types using 256M parameters and a new markup format.

IBM Granite · Mar 14, 2025
Submitted by Rbin

RAG-Anything: All-in-One RAG Framework

RAG-Anything is a unified framework that enhances multimodal knowledge retrieval by integrating cross-modal relationships and semantic matching, outperforming existing methods on complex benchmarks.

Submitted by richardxp888

SkillRL: Evolving Agents via Recursive Skill-Augmented Reinforcement Learning

SkillRL enables LLM agents to improve through hierarchical skill discovery and recursive policy evolution, achieving superior performance on complex tasks while reducing computational overhead.

Submitted by taesiri

Qwen3-TTS Technical Report

The Qwen3-TTS series presents advanced multilingual text-to-speech models with voice cloning and controllable speech generation capabilities, utilizing a dual-track LM architecture and specialized speech tokenizers for efficient streaming synthesis.

Qwen · Jan 22, 2026
Submitted by akhaliq

Efficient Memory Management for Large Language Model Serving with PagedAttention

The PagedAttention algorithm and the vLLM serving system built on it raise large language model serving throughput by managing the key-value cache efficiently and reducing memory waste.
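
To make the paging idea concrete: rather than reserving one contiguous KV buffer per sequence, memory is carved into fixed-size blocks, and a per-sequence block table maps logical positions to physical blocks, so at most one partially filled block per sequence is wasted. The sketch below is a toy allocator illustrating that bookkeeping; the class, block size, and method names are hypothetical, not vLLM's actual API.

```python
class PagedKVCache:
    """Toy block-table allocator in the spirit of PagedAttention
    (an illustrative sketch, not vLLM's implementation)."""

    def __init__(self, num_blocks: int, block_size: int = 16):
        self.block_size = block_size
        self.free_blocks = list(range(num_blocks))  # physical block pool
        self.block_tables = {}  # seq_id -> list of physical block ids
        self.seq_lens = {}      # seq_id -> tokens stored so far

    def append_token(self, seq_id: int) -> None:
        """Allocate a new physical block only when the last one fills up."""
        n = self.seq_lens.get(seq_id, 0)
        if n % self.block_size == 0:  # current block full, or first token
            if not self.free_blocks:
                raise MemoryError("pool exhausted: preempt or evict a sequence")
            self.block_tables.setdefault(seq_id, []).append(self.free_blocks.pop())
        self.seq_lens[seq_id] = n + 1

    def release(self, seq_id: int) -> None:
        """Return a finished sequence's blocks to the pool for reuse."""
        self.free_blocks.extend(self.block_tables.pop(seq_id, []))
        self.seq_lens.pop(seq_id, None)


cache = PagedKVCache(num_blocks=4, block_size=2)
for _ in range(3):
    cache.append_token(seq_id=0)
assert len(cache.block_tables[0]) == 2  # 3 tokens occupy 2 two-token blocks
cache.release(0)
assert len(cache.free_blocks) == 4      # all blocks recycled
```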

9 authors · Sep 12, 2023
Submitted by yangzhi1

QuantaAlpha: An Evolutionary Framework for LLM-Driven Alpha Mining

Financial markets are noisy and non-stationary, making alpha mining highly sensitive to noise in backtesting results and sudden market regime shifts. While recent agentic frameworks improve alpha mining automation, they often lack controllable multi-round search and reliable reuse of validated experience. To address these challenges, we propose QuantaAlpha, an evolutionary alpha mining framework that treats each end-to-end mining run as a trajectory and improves factors through trajectory-level mutation and crossover operations. QuantaAlpha localizes suboptimal steps in each trajectory for targeted revision and recombines complementary high-reward segments to reuse effective patterns, enabling structured exploration and refinement across mining iterations. During factor generation, QuantaAlpha enforces semantic consistency across the hypothesis, factor expression, and executable code, while constraining the complexity and redundancy of the generated factor to mitigate crowding. Extensive experiments on the China Securities Index 300 (CSI 300) demonstrate consistent gains over strong baseline models and prior agentic systems. When utilizing GPT-5.2, QuantaAlpha achieves an Information Coefficient (IC) of 0.1501, with an Annualized Rate of Return (ARR) of 27.75% and a Maximum Drawdown (MDD) of 7.98%. Moreover, factors mined on CSI 300 transfer effectively to the China Securities Index 500 (CSI 500) and the Standard & Poor's 500 Index (S&P 500), delivering 160% and 137% cumulative excess return over four years, respectively, which indicates strong robustness of QuantaAlpha under market distribution shifts.
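
As a rough rendering of the trajectory-level evolution the abstract describes, the toy loop below mutates the weakest step of a trajectory and crosses over the higher-reward segments of two parents. Every function here is an invented placeholder; in the paper, revision is LLM-guided and rewards come from backtests.

```python
import random

# A trajectory is modeled as a list of (step, reward) pairs from one
# end-to-end mining run; all logic below is an illustrative placeholder.
def revise_step(step: str) -> str:
    return step + "_rev"  # stands in for an LLM-guided targeted revision

def fitness(traj) -> float:
    return sum(r for _, r in traj)

def mutate(traj):
    """Localize the lowest-reward step and revise it."""
    i = min(range(len(traj)), key=lambda k: traj[k][1])
    new = list(traj)
    step, reward = new[i]
    new[i] = (revise_step(step), reward)  # re-scored by backtest in practice
    return new

def crossover(a, b):
    """Recombine the higher-reward half of each parent trajectory."""
    cut = min(len(a), len(b)) // 2
    head = max([a[:cut], b[:cut]], key=fitness)
    tail = max([a[cut:], b[cut:]], key=fitness)
    return head + tail

def evolve(population, generations=5):
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: max(2, len(population) // 2)]
        children = [mutate(random.choice(parents)) for _ in parents]
        children += [crossover(*random.sample(parents, 2)) for _ in parents]
        population = parents + children
    return max(population, key=fitness)

pop = [[(f"step{j}", random.random()) for j in range(4)] for _ in range(6)]
print(fitness(evolve(pop)))
```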

QuantaAlpha · Feb 6, 2026

OmniFlatten: An End-to-end GPT Model for Seamless Voice Conversation

A novel GPT-based model, OmniFlatten, enables real-time natural full-duplex spoken dialogue through a multi-stage post-training technique that integrates speech and text without altering the original model's architecture.

9 authors · Oct 23, 2024
Submitted by akhaliq

Mem0: Building Production-Ready AI Agents with Scalable Long-Term Memory

Mem0, a memory-centric architecture with graph-based memory, enhances long-term conversational coherence in LLMs by efficiently extracting, consolidating, and retrieving information, outperforming existing memory systems in terms of accuracy and computational efficiency.

5 authors · Apr 28, 2025
Submitted by UglyToilet

MemOS: A Memory OS for AI System

MemOS, a memory operating system for Large Language Models, addresses memory management challenges by unifying plaintext, activation-based, and parameter-level memories, enabling efficient storage, retrieval, and continual learning.

39 authors · Jul 4, 2025

TradingAgents: Multi-Agents LLM Financial Trading Framework

A multi-agent framework using large language models for stock trading simulates real-world trading firms, improving performance metrics like cumulative returns and Sharpe ratio.

4 authors · Dec 28, 2024
Submitted by myownskyW7

DeepGen 1.0: A Lightweight Unified Multimodal Model for Advancing Image Generation and Editing

A lightweight 5B unified multimodal model achieves competitive performance through hierarchical feature extraction, learnable think tokens, and progressive training strategies including alignment pre-training, joint supervised fine-tuning, and reinforcement learning with MR-GRPO.

Submitted by taesiri

Towards Robust Mathematical Reasoning

IMO-Bench, a suite of advanced reasoning benchmarks, evaluates mathematical reasoning capabilities of foundation models using IMO-level problems and detailed grading guidelines, achieving gold-level performance with Gemini Deep Think.

20 authors · Nov 3, 2025
Submitted by taesiri

MiniCPM-V 4.5: Cooking Efficient MLLMs via Architecture, Data, and Training Recipe

MiniCPM-V 4.5, an 8B-parameter multimodal large language model, achieves high performance and efficiency through a unified 3D-Resampler architecture, a unified learning paradigm, and a hybrid reinforcement learning strategy.

34 authors · Sep 16, 2025
Submitted by Dongchao

HeartMuLa: A Family of Open Sourced Music Foundation Models

A suite of open-source music foundation models is introduced, featuring components for audio-text alignment, lyric recognition, music coding, and large language model-based song generation with controllable attributes and scalable parameterization.

28 authors · Jan 15, 2026
Submitted by taesiri

PaddleOCR-VL: Boosting Multilingual Document Parsing via a 0.9B Ultra-Compact Vision-Language Model

PaddleOCR-VL, a vision-language model combining NaViT-style dynamic resolution and ERNIE, achieves state-of-the-art performance in document parsing and element recognition with high efficiency.

PaddlePaddle · Oct 16, 2025

Moonshine: Speech Recognition for Live Transcription and Voice Commands

Moonshine, an encoder-decoder transformer architecture for speech recognition, uses Rotary Position Embedding, reducing compute requirements without decreasing accuracy.

6 authors · Oct 21, 2024
Submitted by hao-li

Agent READMEs: An Empirical Study of Context Files for Agentic Coding

Agentic coding tools receive goals written in natural language as input, break them down into specific tasks, and write or execute the actual code with minimal human intervention. Central to this process are agent context files ("READMEs for agents") that provide persistent, project-level instructions. In this paper, we conduct the first large-scale empirical study of 2,303 agent context files from 1,925 repositories to characterize their structure, maintenance, and content. We find that these files are not static documentation but complex, difficult-to-read artifacts that evolve like configuration code, maintained through frequent, small additions. Our content analysis of 16 instruction types shows that developers prioritize functional context, such as build and run commands (62.3%), implementation details (69.9%), and architecture (67.7%). We also identify a significant gap: non-functional requirements like security (14.5%) and performance (14.5%) are rarely specified. These findings indicate that while developers use context files to make agents functional, they provide few guardrails to ensure that agent-written code is secure or performant, highlighting the need for improved tooling and practices.

11 authors · Nov 17, 2025
Submitted by fdugyt

MOSS-Audio-Tokenizer: Scaling Audio Tokenizers for Future Audio Foundation Models

A fully end-to-end Transformer-based audio tokenizer architecture achieves high-fidelity reconstruction across diverse audio domains and enables superior text-to-speech and automatic speech recognition performance.

OpenMOSS · Feb 11, 2026
Submitted by taesiri

MinerU2.5: A Decoupled Vision-Language Model for Efficient High-Resolution Document Parsing

MinerU2.5, a 1.2B-parameter document parsing vision-language model, achieves state-of-the-art recognition accuracy with computational efficiency through a coarse-to-fine parsing strategy.

61 authors · Sep 26, 2025
Submitted by tianyilt

MOVA: Towards Scalable and Synchronized Video-Audio Generation

MOVA is an open-source model that generates synchronized audio-visual content using a Mixture-of-Experts architecture with 32 billion parameters, supporting image-text to video-audio generation tasks.

OpenMOSS · Feb 9, 2026
Submitted by taesiri

DeepCode: Open Agentic Coding

DeepCode, a fully autonomous framework, addresses the challenges of document-to-codebase synthesis by optimizing information flow through source compression, structured indexing, knowledge injection, and error correction, achieving state-of-the-art performance and surpassing human experts.

5 authors · Dec 8, 2025

dots.ocr: Multilingual Document Layout Parsing in a Single Vision-Language Model

A unified Vision-Language Model, dots.ocr, achieves state-of-the-art performance on document layout parsing by jointly learning layout detection, text recognition, and relational understanding, validated on OmniDocBench and XDocParse benchmarks.

rednote-hilab · Dec 2, 2025
Submitted by rajkumarrawal

Recursive Language Models

We study allowing large language models (LLMs) to process arbitrarily long prompts through the lens of inference-time scaling. We propose Recursive Language Models (RLMs), a general inference strategy that treats long prompts as part of an external environment and allows the LLM to programmatically examine, decompose, and recursively call itself over snippets of the prompt. We find that RLMs successfully handle inputs up to two orders of magnitude beyond model context windows and, even for shorter prompts, dramatically outperform the quality of base LLMs and common long-context scaffolds across four diverse long-context tasks, while having comparable (or cheaper) cost per query.
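
A minimal sketch of that strategy, with the model call stubbed out: when the prompt fits the context budget it is answered directly; otherwise the model decomposes it into snippets, recurses over each, and recurses again over the partial results. The `llm` stub, context budget, and fixed-size chunking policy are assumptions for illustration, not the paper's code.

```python
CONTEXT_LIMIT = 2_000  # characters one call may see (assumed budget)

def llm(instruction: str, text: str) -> str:
    """Stub standing in for a real model call."""
    return f"[digest of {len(text)} chars re: {instruction!r}]"

def rlm(query: str, prompt: str) -> str:
    # Base case: the prompt fits, so answer in a single call.
    if len(prompt) <= CONTEXT_LIMIT:
        return llm(query, prompt)
    # Recursive case: the prompt stays in the "environment"; the model
    # examines snippets, then synthesizes over the partial results.
    chunks = [prompt[i:i + CONTEXT_LIMIT]
              for i in range(0, len(prompt), CONTEXT_LIMIT)]
    partials = [rlm(query, chunk) for chunk in chunks]
    return rlm(query, "\n".join(partials))

# Handles an input far beyond the per-call budget.
print(rlm("find the key claim", "x" * 50_000))
```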

Submitted by Dongchao

UniAudio 2.0: A Unified Audio Language Model with Text-Aligned Factorized Audio Tokenization

Researchers developed ReasoningCodec, a discrete audio codec that separates audio into reasoning and reconstruction tokens for improved understanding and generation. They also created UniAudio 2.0, a unified autoregressive model trained on large-scale text and audio data that performs strongly across diverse audio tasks and generalizes well in few-shot and zero-shot scenarios.

6 authors · Feb 4, 2026

EvoCorps: An Evolutionary Multi-Agent Framework for Depolarizing Online Discourse

EvoCorps is an evolutionary multi-agent framework that proactively addresses polarization in online discourse through dynamic social game coordination and closed-loop learning for real-time discourse governance.

8 authors · Feb 9, 2026
Submitted by unilm

VibeVoice Technical Report

VibeVoice synthesizes long-form multi-speaker speech using next-token diffusion and a highly efficient continuous speech tokenizer, achieving superior performance and fidelity.

Microsoft Research · Aug 26, 2025

LightRAG: Simple and Fast Retrieval-Augmented Generation

LightRAG improves Retrieval-Augmented Generation by integrating graph structures for enhanced contextual awareness and efficient information retrieval, achieving better accuracy and response times.

5 authors · Oct 8, 2024
Submitted by zhangxgu

UI-Venus-1.5 Technical Report

UI-Venus-1.5 is a unified GUI agent with improved performance through mid-training stages, online reinforcement learning, and model merging techniques.

inclusionAI · Feb 9, 2026
Submitted by taesiri

LTX-2: Efficient Joint Audio-Visual Foundation Model

LTX-2 is an open-source audiovisual diffusion model that generates synchronized video and audio content using a dual-stream transformer architecture with cross-modal attention and classifier-free guidance.

29 authors · Jan 6, 2026

pySLAM: An Open-Source, Modular, and Extensible Framework for SLAM

pySLAM is an open-source framework supporting Visual SLAM with monocular, stereo, and RGB-D cameras, incorporating classical and modern features, loop closure methods, volumetric reconstruction, and depth prediction models.

1 author · Feb 17, 2025
Submitted by akhaliq

LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models

LlamaFactory is a unified framework enabling efficient fine-tuning of large language models across various tasks using a web-based user interface.

5 authors · Mar 20, 2024
Submitted by taesiri

MolmoSpaces: A Large-Scale Open Ecosystem for Robot Navigation and Manipulation

MolmoSpaces presents an open ecosystem with diverse indoor environments and annotated objects for large-scale robot policy benchmarking across multiple tasks and simulators.

AI21 · Feb 11, 2026
Submitted by Shengran

Learning to Continually Learn via Meta-learning Agentic Memory Designs

ALMA is a framework that uses meta-learning to automatically discover memory designs for agentic systems, enabling continual learning without human engineering across diverse domains.

3 authors · Feb 8, 2026
Submitted by ChilleD

Agent World Model: Infinity Synthetic Environments for Agentic Reinforcement Learning

Large language model agents trained in synthetic environments with code-driven simulations and database-backed state transitions demonstrate superior out-of-distribution generalization compared to traditional benchmark-specific approaches.

Snowflake · Feb 10, 2026
Submitted by Luo2003

Closing the Loop: Universal Repository Representation with RPG-Encoder

The RPG-Encoder framework unifies repository comprehension and generation into a single cycle by encoding code into high-fidelity Repository Planning Graph representations that improve understanding and reconstruction accuracy.

13 authors · Feb 2, 2026
Submitted by lovesnowbest

UI-TARS-2 Technical Report: Advancing GUI Agent with Multi-Turn Reinforcement Learning

UI-TARS-2, a native GUI-centered agent model, addresses challenges in data scalability, multi-turn reinforcement learning, and environment stability, achieving significant improvements over its predecessor and strong baselines across various benchmarks.

ByteDance Seed · Sep 2, 2025
Submitted by akhaliq

OpenDevin: An Open Platform for AI Software Developers as Generalist Agents

OpenDevin is a platform for developing AI agents that interact with the world by writing code, using command lines, and browsing the web, with support for multiple agents and evaluation benchmarks.

24 authors · Jul 23, 2024

IndexTTS: An Industrial-Level Controllable and Efficient Zero-Shot Text-To-Speech System

IndexTTS, an enhanced text-to-speech system combining XTTS and Tortoise models, offers improved naturalness, enhanced voice cloning, and controllable usage through hybrid character-pinyin modeling and optimized vector quantization.

5 authors · Feb 8, 2025
Submitted by taesiri

Code2World: A GUI World Model via Renderable Code Generation

Code2World enables autonomous GUI agents to predict next visual states through renderable code generation, achieving high visual fidelity and structural controllability while improving navigation performance.

AMAP-ML · Feb 10, 2026
Submitted by jayinnn

Stroke of Surprise: Progressive Semantic Illusions in Vector Sketching

Progressive Semantic Illusions use a generative framework with dual-branch Score Distillation Sampling to create vector sketches that transform semantically through sequential stroke additions, achieving superior recognizability and illusion strength.

Submitted by Yikunb

Weak-Driven Learning: How Weak Agents make Strong Agents Stronger

WMSS is a post-training paradigm that uses weak model checkpoints to identify and fill learning gaps, enabling continued improvement beyond conventional saturation points in large language models.

11 authors · Feb 9, 2026
Submitted by taesiri

Step 3.5 Flash: Open Frontier-Level Intelligence with 11B Active Parameters

Step 3.5 Flash is a sparse Mixture-of-Experts model that achieves frontier-level agentic intelligence through efficient parameter utilization and optimized attention mechanisms, demonstrating strong performance across multiple benchmarks.

StepFun · Feb 11, 2026
Submitted by Jeff-Wang

GigaBrain-0.5M*: a VLA That Learns From World Model-Based Reinforcement Learning

A vision-language-action model enhanced with world model-based reinforcement learning demonstrates improved performance and long-horizon execution capabilities for robotic manipulation tasks.

GigaAI · Feb 12, 2026
Submitted by taesiri

Kimi K2.5: Visual Agentic Intelligence

Kimi K2.5 is an open-source multimodal agentic model that enhances text and vision processing through joint optimization techniques and introduces Agent Swarm for parallel task execution.

Moonshot AI · Feb 2, 2026
Submitted by JiaaqiLiu

SimpleMem: Efficient Lifelong Memory for LLM Agents

To support reliable long-term interaction in complex environments, LLM agents require memory systems that efficiently manage historical experiences. Existing approaches either retain full interaction histories via passive context extension, leading to substantial redundancy, or rely on iterative reasoning to filter noise, incurring high token costs. To address this challenge, we introduce SimpleMem, an efficient memory framework based on semantic lossless compression. We propose a three-stage pipeline designed to maximize information density and token utilization: (1) Semantic Structured Compression, which applies entropy-aware filtering to distill unstructured interactions into compact, multi-view indexed memory units; (2) Recursive Memory Consolidation, an asynchronous process that integrates related units into higher-level abstract representations to reduce redundancy; and (3) Adaptive Query-Aware Retrieval, which dynamically adjusts retrieval scope based on query complexity to construct precise context efficiently. Experiments on benchmark datasets show that our method consistently outperforms baseline approaches in accuracy, retrieval efficiency, and inference cost, achieving an average F1 improvement of 26.4% while reducing inference-time token consumption by up to 30-fold, demonstrating a superior balance between performance and efficiency. Code is available at https://github.com/aiming-lab/SimpleMem.
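
To make the three-stage shape concrete, here is a toy compress → consolidate → retrieve pipeline. Every heuristic below (length-based filtering, keyword-overlap merging, retrieval scope keyed to query length) is an invented stand-in for the paper's entropy-aware filtering, recursive consolidation, and adaptive query-aware retrieval.

```python
def compress(turns, min_len=20):
    """Stage 1: keep only information-dense turns as memory units
    (placeholder for entropy-aware filtering and multi-view indexing)."""
    return [t for t in turns if len(t) >= min_len]

def consolidate(units):
    """Stage 2: merge units that share vocabulary into one abstract unit
    (placeholder for recursive consolidation of related memories)."""
    merged, used = [], set()
    for i, unit in enumerate(units):
        if i in used:
            continue
        group = [unit]
        for j in range(i + 1, len(units)):
            if j not in used and set(unit.lower().split()) & set(units[j].lower().split()):
                group.append(units[j])
                used.add(j)
        merged.append(" | ".join(group))
    return merged

def retrieve(query, memory):
    """Stage 3: widen retrieval scope with query complexity."""
    k = max(1, len(query.split()) // 4)  # longer query -> more context
    qwords = set(query.lower().split())
    return sorted(memory, key=lambda m: len(qwords & set(m.lower().split())),
                  reverse=True)[:k]

turns = ["ok", "user prefers vegetarian food at dinner",
         "user booked dinner for Friday evening", "thanks!"]
memory = consolidate(compress(turns))
print(retrieve("what are the user dinner preferences", memory))
```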

8 authors · Jan 5, 2026
Submitted by Wendy-Fly

Idea2Story: An Automated Pipeline for Transforming Research Concepts into Complete Scientific Narratives

Offline knowledge construction through structured methodological graphs enables more reliable and scalable autonomous scientific discovery by reducing reliance on real-time literature processing.

AgentAlpha · Jan 28, 2026
Submitted by FSCCS

dVoting: Fast Voting for dLLMs

Diffusion large language models enable parallel token generation and efficient reasoning enhancement through a voting technique that identifies and refines uncertain predictions across multiple samples.
