Key Takeaways
- Hugging Science is Hugging Face's curated portal for AI-for-science models, datasets, and benchmarks — the building blocks
- Claw4Science indexes runnable agents and skills built on top of OpenClaw, Claude Code, and friends — the ready-to-use tools
- They sit at different layers of the same stack and are complementary, not competitors
- Pick Hugging Science when you want to train, fine-tune, or evaluate a model
- Pick Claw4Science when you want to install something today and run an experiment tomorrow
Two Words That Mean Different Things
"AI for Science" is doing a lot of work in 2026. Two recent directories use almost the same vocabulary and route users to very different places.
Hugging Science (huggingscience.co) is a Hugging Face initiative. The pitch on their homepage: "curated datasets, models, and tools — from proteins to plasma fusion." It is, structurally, a curated subset of the wider Hugging Face Hub focused on scientific research.
Claw4Science (this site) catalogs OpenClaw scientific agents and skills — installable command-line and Claude Code tooling that wraps domain workflows in conversational interfaces.
Same target audience (scientists who want AI to do real work). Different layer of the stack.
What Lives on Each Site
Hugging Science
The unit of curation is a model card or dataset card:
- Pretrained protein language models (ESM-3, AlphaFold weights)
- Curated bio/chem benchmark datasets
- Foundation models for PDEs, plasma fusion, materials
- Training scripts and Spaces
You go there to train, fine-tune, or run inference with research-grade weights.
Claw4Science
The unit of curation is a runnable agent or skill:
- OpenClaw skills (bioinfor-claw, bioclaw, medclaw, …)
- Autonomous research agents (AI-Scientist, AutoBA, BioDiscoveryAgent)
- Benchmarks for agents (BixBench, ScienceAgentBench)
- Skill hubs (Bioclaw_Skills_Hub, ClawHub)
You come here to install something and have it execute a task in your terminal — no training, no GPUs, no model selection.
When to Pick Which
| You want to… | Go to |
|---|---|
| Fine-tune a protein language model | Hugging Science |
| Find a single-cell foundation model | Hugging Science |
| Benchmark a new model on PDE data | Hugging Science |
| Install a CLI agent that runs scRNA-seq pipelines | Claw4Science |
| Find an LLM-judged ranking of 2,000+ skills | Claw4Science |
| See which research agent has paper backing | Claw4Science |
| Compare BioClaw, ClawBio, MedClaw | Claw4Science |
If your question starts with "which model," Hugging Science. If it starts with "which agent" or "which skill," Claw4Science.
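The rule of thumb above can be sketched as a toy routing function. This is illustrative only; the keyword lists are our own simplification, not either site's actual search logic:

```python
def route(question: str) -> str:
    """Toy router for the rule of thumb: agent/skill questions go to
    Claw4Science, model/dataset/benchmark questions go to Hugging Science."""
    q = question.lower()
    # Check the tool layer first, so "which agent wraps this model"
    # still routes to the agent directory.
    if "agent" in q or "skill" in q:
        return "Claw4Science"
    if "model" in q or "dataset" in q or "benchmark" in q:
        return "Hugging Science"
    return "either (start with the layer you need)"
```

For example, `route("Which model should I fine-tune for proteins?")` returns `"Hugging Science"`, while `route("Which agent can run my scRNA-seq pipeline?")` returns `"Claw4Science"`.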
Why Both Exist
The AI-for-science stack has three layers and each needs its own directory:
- Foundations — papers, theory, methods (arXiv, Papers with Code)
- Building blocks — models, datasets, benchmarks (Hugging Science)
- Runnable tools — agents, skills, CLI workflows (Claw4Science)
Layer 2 is where Hugging Face has dominated for five years. Layer 3 is newer — it only became coherent once Claude Code, OpenClaw, and the skill ecosystem made "install a skill, talk to it" a real workflow in 2025–2026.
A protein scientist might use both in one week: pull ESM-3 weights from Hugging Science to fine-tune on their assay data, then install bioinfor-claw from Claw4Science to wire that fine-tuned model into a literature-search → docking → reporting pipeline.
Curation Style
| | Hugging Science | Claw4Science |
|---|---|---|
| Source repos | Hugging Face Hub | GitHub + skill hubs |
| Coverage | ~hundreds (curated) | 172 projects, 2,094 skills |
| Quality signal | HF org affiliation | LLM-judged 14-metric rubric |
| Activity tracking | Push date on HF | 26-week commit sparklines |
| Compare tool | None | 8 disambiguation pages |
| Open source | Closed (HF initiative page) | Open data, open algorithms |
Both are curated, opinionated, and English-first. Claw4Science publishes its scoring methodology and rubric pass rates per skill; Hugging Science leans on the HF brand for the quality signal.
What This Means for the Field
Two new directories in the same year is a healthy sign. It means the AI-for-science space is mature enough that "find me a thing" is a real user need that no single hub can satisfy.
The right mental model: Hugging Science is the model store, Claw4Science is the app store. The same way Apple's ecosystem has both an SDK portal (developer.apple.com) and an App Store (apps.apple.com) — you need both, for different jobs.
We've added Hugging Science to our Friends list in the footer. They're solving a complementary problem and we want users moving between both.
Try Both
- Models, datasets, benchmarks → huggingscience.co
- Agents, skills, CLI tools → claw4science.org
If you build something that bridges the two layers (e.g., a Claw skill that wraps a Hugging Science model), submit it — those are exactly the projects we want to feature.
