Four ScienceClaws Walk Into a Lab...
In the fast-moving OpenClaw ecosystem, naming collisions happen. But four independent teams — spanning MIT, the Chinese Academy of Sciences, and individual researchers — all choosing ScienceClaw as their project name isn't just a coincidence. It reflects a shared conviction that AI agents are ready to transform scientific research.
Despite sharing a name, these four projects take fundamentally different approaches. Here's how they compare.
Quick Comparison
| | Zaoqu-Liu | TaichuAI | beita6969 | MIT LAMM |
|---|---|---|---|---|
| Stars | 33 | 135 | 282 | 76 |
| Base | OpenClaw | LangChain DeepAgents | OpenClaw (redesigned) | Independent (Python) |
| Skills/Tools | 266 | 1,900+ (ToolUniverse) | 285 (self-growing) | 300+ (interoperable) |
| Deployment | Bash script | Docker (10 services) | Bash / npm | Python venv |
| UI | Terminal + multi-channel bots | Full web app (Vue 3) | Web gateway | CLI + Infinite platform |
| Key strength | Zero-code Research Recipes | Enterprise security & transparency | Self-evolving + persistent memory | Decentralized multi-agent + DAG lineage |
| Organization | Individual researcher | Chinese Academy of Sciences | Individual researcher | MIT (Markus Buehler Lab) |
| Paper | — | — | — | arXiv:2603.14312 |
| License | MIT | MIT | MIT | MIT |
ScienceClaw (Zaoqu-Liu): The Minimalist
GitHub: Zaoqu-Liu/ScienceClaw
The most elegant of the four. The entire system is a single SCIENCE.md file (~600 lines) plus 266 domain skills — all in plain markdown. Zero custom code, zero TypeScript, zero Python servers.
What makes it unique:
- Research Recipes — 6 pre-built workflows (gene-landscape, target-validation, literature-review, etc.) that auto-detect from a single prompt
- 77+ databases including UniProt, PDB, TCGA, GTEx, ChEMBL, STRING
- Export to Word/PowerPoint/LaTeX with one command
- Literature monitoring via the `/watch` command
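The recipe auto-detection described above — picking a workflow from a single prompt — can be pictured as keyword matching against the recipe catalog. This is an illustrative sketch, not the project's actual logic; the recipe names come from the list above, but the keyword mappings are assumptions:

```python
# Hypothetical sketch of prompt-based Research Recipe auto-detection.
# Recipe names are from the project's list; keywords are assumptions.
RECIPES = {
    "gene-landscape": ["expression", "landscape", "tumor microenvironment"],
    "target-validation": ["target", "validate", "druggability"],
    "literature-review": ["review", "literature", "survey"],
}

def detect_recipe(prompt: str) -> str:
    """Pick the recipe whose keywords best match the prompt."""
    text = prompt.lower()
    scores = {
        name: sum(kw in text for kw in keywords)
        for name, keywords in RECIPES.items()
    }
    best = max(scores, key=scores.get)
    # Fall back to a generic literature review when nothing matches.
    return best if scores[best] > 0 else "literature-review"

print(detect_recipe("analyze THBS2 in tumor microenvironment"))  # gene-landscape
```

The real system likely does something richer than substring matching, but the shape — one prompt in, one named workflow out — is what makes the zero-code experience possible.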
Best for: Bioinformatics researchers who want a plug-and-play pipeline with zero setup friction. If you just want to type "analyze THBS2 in tumor microenvironment" and get a 30-page report with 87 citations, this is your pick.
ScienceClaw (TaichuAI): The Enterprise Platform
GitHub: AgentTeam-TaichuAI/ScienceClaw
Built by the NLP Group at the Chinese Academy of Sciences, this is the most ambitious of the four. It deliberately breaks from the OpenClaw architecture, using LangChain DeepAgents + AIO Sandbox as its foundation.
What makes it unique:
- 1,900+ built-in tools via Harvard's ToolUniverse — the largest tool library of any ScienceClaw
- Full web UI (Vue 3 frontend + FastAPI backend) with login system, file management, and resource monitoring
- Security-first design — everything runs in Docker containers with no host system access
- Full transparency — every step (search → reasoning → tool calls → output) is visible and traceable
- Windows desktop app available — no Docker needed
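The transparency model above — every search, reasoning step, tool call, and output visible and traceable — amounts to an append-only run trace. A minimal sketch; the class and field names here are assumptions, not TaichuAI's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class TraceStep:
    # One auditable step: "search", "reasoning", "tool_call", or "output".
    kind: str
    detail: str

@dataclass
class RunTrace:
    # Append-only: every step stays visible after the run completes.
    steps: list[TraceStep] = field(default_factory=list)

    def record(self, kind: str, detail: str) -> None:
        self.steps.append(TraceStep(kind, detail))

trace = RunTrace()
trace.record("search", "PubMed: THBS2 tumor microenvironment")
trace.record("reasoning", "Rank hits by citation count")
trace.record("tool_call", "uniprot.lookup(gene='THBS2')")
trace.record("output", "Summary with 12 citations")
print([s.kind for s in trace.steps])
```

The point of the design is that the trace is a first-class record, not a debug log: the web UI can replay it, and reviewers can audit exactly which tool calls produced which claims.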
Best for: Teams and labs that need enterprise-grade security, a polished web interface, and don't mind a heavier deployment. The Docker-based 10-service architecture means more resources but also more capabilities.
ScienceClaw (beita6969): The Self-Evolving Colleague
GitHub: beita6969/ScienceClaw
The most innovative of the four. Its killer feature is self-evolution — the agent creates new skills at runtime based on your usage patterns. By week 4, it has specialized skills tuned to your specific subfield.
What makes it unique:
- Self-evolving skills — new `SKILL.md` files are generated automatically without redeployment
- 4-layer persistent memory with temporal decay weighting and LanceDB vector storage. "Continue the review from last Tuesday" actually works
- 1-hour+ session timeout (vs. OpenClaw's default 600s) with mandatory depth thresholds — Quick=5, Survey=30, Review=60, Systematic=100+ tool calls
- Zero-hallucination protocol — hard rule: every citation must come from a tool result in the current conversation. No fabricated PMIDs.
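The temporal decay weighting mentioned in the memory bullet above can be sketched as exponential decay over memory age: a retrieved memory's vector-similarity score is discounted by how long ago it was stored. This is a hypothetical sketch — the half-life and combination rule are assumptions, not beita6969's actual parameters:

```python
def memory_score(similarity: float, age_days: float,
                 half_life_days: float = 14.0) -> float:
    """Weight a vector-similarity hit by memory recency.

    Hypothetical sketch: a memory one half-life old counts for
    half as much as a fresh one with the same similarity.
    """
    decay = 0.5 ** (age_days / half_life_days)
    return similarity * decay

# A fairly similar memory from last Tuesday outranks an older,
# slightly more similar one — which is why "continue the review
# from last Tuesday" retrieves the right context.
recent = memory_score(0.80, age_days=5)
stale = memory_score(0.85, age_days=60)
print(recent > stale)  # True
```

Schemes like this let the memory stay relevant without ever deleting anything: old memories fade in ranking rather than disappearing.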
Best for: Individual researchers who want an AI colleague that grows with them. The persistent memory and self-evolution mean it gets better the more you use it. The zero-hallucination protocol makes it the most trustworthy for citation-heavy work.
ScienceClaw (MIT LAMM): The Decentralized Research Network
GitHub: lamm-mit/scienceclaw | Paper: arXiv:2603.14312 | Platform: Infinite
The most academically rigorous of the four. Built by Markus Buehler's lab at MIT, this ScienceClaw takes a fundamentally different approach: instead of one agent doing everything, it creates a network of independent agents that coordinate without central control.
What makes it unique:
- Decentralized multi-agent coordination — no central planner. Agents discover and fulfill each other's information needs through pressure-based scoring
- DAG artifact lineage — every output is an immutable artifact with typed metadata and parent lineage, forming a directed acyclic graph. Full computational reproducibility baked in
- ArtifactReactor — peer agents trigger multi-parent synthesis across independent analyses. Schema-overlap matching finds connections humans would miss
- Infinite platform — a shared scientific discourse platform where agent outputs become auditable scientific records with provenance views
- Autonomous mutation layer — prunes the expanding artifact DAG to resolve conflicting or redundant workflows
- 300+ interoperable tools with scientific profile-based selection
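The DAG artifact lineage described above can be pictured as immutable nodes that carry typed metadata, a content hash, and references to their parents. An illustrative sketch, not the lamm-mit implementation — the field names are assumptions:

```python
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class Artifact:
    """Immutable node in a lineage DAG (illustrative sketch)."""
    kind: str                      # typed metadata, e.g. "dataset" or "analysis"
    content: str
    parents: tuple[str, ...] = ()  # digests of parent artifacts

    @property
    def digest(self) -> str:
        # Hash content plus parent digests, so the whole lineage
        # chain is tamper-evident: changing any ancestor changes
        # every descendant's digest.
        h = hashlib.sha256()
        h.update(self.kind.encode())
        h.update(self.content.encode())
        for p in self.parents:
            h.update(p.encode())
        return h.hexdigest()

raw = Artifact("dataset", "measured spectra")
fit = Artifact("analysis", "peak fit results", parents=(raw.digest,))
# Multi-parent synthesis, in the spirit of the ArtifactReactor above.
summary = Artifact("synthesis", "combined report",
                   parents=(raw.digest, fit.digest))
print(len(summary.parents))  # 2
```

Because each digest folds in its parents' digests, any artifact can be traced back to its raw inputs — the property that makes the outputs publishable as auditable scientific records.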
Best for: Research groups and labs that want multiple AI agents collaborating on the same problem space — each approaching it from a different angle. The DAG lineage and Infinite platform make it the most publishable and reproducible of the four, and it is the only one with an accompanying paper.
So Which One Should You Use?
| If you need... | Choose |
|---|---|
| Quick start, zero setup, bioinformatics focus | Zaoqu-Liu |
| Enterprise security, web UI, largest tool library | TaichuAI |
| Long-term research partner that learns your habits | beita6969 |
| Multi-agent collaboration with full provenance | MIT LAMM |
| The most stars / community validation | beita6969 (282 ⭐) |
| A completely non-OpenClaw architecture | TaichuAI or MIT LAMM |
| Academic rigor / publishable outputs | MIT LAMM (arXiv paper) |
The beauty of open source: you don't have to choose just one. They're all MIT-licensed, and each brings something the others don't.
Last updated: March 22, 2026. All four projects are actively maintained.
