Projects

AFlow

AFlow is an automated framework for generating and optimizing agentic workflows tailored to large language models (LLMs). It removes the need for manual workflow design while improving task performance and reducing cost.

AoT

AoT (Atom of Thoughts) is a reasoning framework that decomposes complex reasoning into a Markov-style sequence of atomic questions, enabling more efficient and effective test-time scaling for large language models (LLMs).
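The Markov-style decomposition can be sketched as follows. This is a toy illustration, not AoT's actual API: the function names (`aot_solve`, `decompose`, `answer`, `contract`) and the arithmetic example are invented for clarity.

```python
def aot_solve(question, decompose, answer, contract):
    """Atom-of-Thoughts-style loop (toy sketch, not the real AoT API):
    repeatedly peel off one atomic sub-question, answer it, and contract
    the answer back into a simpler question. The resulting chain of
    questions is Markov: each state depends only on its predecessor."""
    while True:
        atom, remainder = decompose(question)
        result = answer(atom)
        if remainder is None:          # the question was already atomic
            return result
        question = contract(result, remainder)

# Toy instantiation: evaluate "2 + 3 + 4" one atomic addition at a time.
def decompose(q):
    parts = q.split(" + ")
    return (q, None) if len(parts) == 1 else (" + ".join(parts[:2]), parts[2:])

def answer(atom):
    return sum(int(x) for x in atom.split(" + "))

def contract(result, remainder):
    return " + ".join([str(result)] + remainder)

print(aot_solve("2 + 3 + 4", decompose, answer, contract))  # 9
```

The key property the sketch preserves is that each contracted question is self-contained, so no growing context has to be carried between steps.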

Data Interpreter

Data Interpreter is an LLM-based agent designed to solve complex data science problems end-to-end. It addresses challenges that traditional approaches struggle with: interconnected tasks, dynamic data adjustments, and domain-specific expertise requirements. The system uses hierarchical modeling to break complex problems into manageable components while adapting to real-time changes.
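The hierarchical, interconnected-task idea can be sketched as a dependency graph executed in topological order. This is a toy illustration under assumed names (`run_plan`, the stage names `load`/`clean`/`features`/`train`), not Data Interpreter's real interface:

```python
from graphlib import TopologicalSorter

def run_plan(deps, execute):
    """Data-Interpreter-style plan execution (toy sketch, not the real
    API): model a data-science problem as a dependency graph of tasks,
    run them in topological order, and feed upstream results downstream.
    Re-planning on failure would re-enter this loop with an edited graph."""
    results = {}
    for name in TopologicalSorter(deps).static_order():
        inputs = {d: results[d] for d in deps.get(name, ())}
        results[name] = execute(name, inputs)
    return results

# Toy pipeline: each stage just records what it consumed.
deps = {"clean": {"load"}, "features": {"clean"}, "train": {"features"}}
execute = lambda name, inputs: f"{name}({','.join(inputs.values())})"
print(run_plan(deps, execute)["train"])  # train(features(clean(load())))
```

Keeping results keyed by task name is what lets a re-planned graph reuse the outputs of stages that did not change.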

MetaGPT X

MetaGPT X is a multi-agent framework that orchestrates collaborating AI agents to execute complex workflows beyond the confines of conventional software development.

MetaGPT

MetaGPT is a multi-agent collaboration framework that leverages meta-programming to transform natural language requirements into complete software solutions. With the 1.0 update introducing Foundation Agent technology, it now offers enhanced capabilities for tackling complex challenges.

OpenManus

OpenManus is an open-source framework designed for developing general AI agents capable of autonomously executing complex tasks without requiring invitation codes or special access permissions.

SELA

SELA (Tree-Search Enhanced LLM Agents) is a system that enhances Automated Machine Learning (AutoML) by integrating Monte Carlo Tree Search (MCTS) with LLM-based agents, overcoming limitations of traditional AutoML approaches.
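The MCTS-over-pipelines idea can be sketched with a generic UCT loop. This is a toy illustration, not SELA's API: `expand` stands in for LLM-proposed pipeline refinements, and `score` stands in for actually training and validating a candidate pipeline.

```python
import math, random

class Node:
    def __init__(self, config, parent=None):
        self.config = config            # partial ML pipeline (list of steps)
        self.parent, self.children = parent, []
        self.visits, self.value = 0, 0.0

def uct(node, c=1.4):
    # Upper Confidence bound for Trees: exploitation + exploration terms
    return (node.value / node.visits
            + c * math.sqrt(math.log(node.parent.visits) / node.visits))

def mcts(root_config, expand, score, iters=200, seed=0):
    """SELA-style search loop (toy sketch, not the SELA API)."""
    random.seed(seed)
    root = Node(root_config)
    for _ in range(iters):
        node = root
        # 1. Selection: descend by UCT while every child has been tried
        while node.children and all(c.visits for c in node.children):
            node = max(node.children, key=uct)
        # 2. Expansion: materialize untried refinements of this pipeline
        if not node.children:
            node.children = [Node(c, node) for c in expand(node.config)]
        if node.children:
            untried = [c for c in node.children if not c.visits]
            node = random.choice(untried or node.children)
        # 3. Simulation: evaluate the candidate (train + validate in SELA)
        reward = score(node.config)
        # 4. Backpropagation: credit the whole path
        while node:
            node.visits += 1
            node.value += reward
            node = node.parent
    return max(root.children, key=lambda c: c.visits).config

# Toy search space: choose pipeline steps; only "boost" scores well here.
steps = ["impute", "scale", "select", "boost"]
expand = lambda cfg: [cfg + [s] for s in steps if s not in cfg]
score = lambda cfg: 1.0 if "boost" in cfg else 0.1
print(mcts([], expand, score))  # ['boost']
```

Returning the most-visited root child (rather than the highest-scoring leaf) is the standard MCTS choice: visit counts are a more robust signal than a single lucky evaluation.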

SPO

SPO (Self-Supervised Prompt Optimization) is a framework that automates prompt engineering for large language models (LLMs) through a self-supervised approach. It eliminates the need for external references or human feedback while significantly reducing optimization costs.
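The reference-free loop can be sketched as pairwise self-evaluation: mutate the current best prompt, run both versions, and let a judge pick the better output. This is a toy illustration, not SPO's API; `spo_loop`, `mutate`, `execute`, and `judge` are invented names, and the length-based judge stands in for an LLM comparing two outputs.

```python
def spo_loop(prompt, mutate, execute, judge, rounds=6):
    """SPO-style loop (toy sketch, not the SPO API): improve a prompt
    without labels by letting a judge compare the current best prompt's
    output against a mutated candidate's output, pairwise."""
    best = prompt
    for i in range(rounds):
        candidate = mutate(best, i)
        # self-supervised: no ground-truth references or human feedback
        if judge(execute(candidate), execute(best)):
            best = candidate
    return best

# Toy instantiation: "outputs" get longer as the prompt gains distinct
# hints, and the judge simply prefers the longer output.
hints = ["Be concise.", "Think step by step.", "Cite your sources."]
mutate = lambda p, i: p + " " + hints[i % len(hints)]
execute = lambda p: "detail " * sum(h in p for h in hints) + "answer"
judge = lambda a, b: len(a) > len(b)

print(spo_loop("Answer the question.", mutate, execute, judge))
```

Because the judge only compares the two outputs against each other, the loop never needs a labeled dataset, which is where the cost reduction comes from.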