Knowledge Commons

Global Wiki Tree

A meta-communities wiki tool — a living knowledge commons where any kind of data is imported, curated collaboratively, and exported as interoperable files for tools like Tana, Logseq, and beyond. Wikipedia meets a living forest.

Status: Forming the Movement
Tags: #side-project · #my-legacy
System Type: Simple (Sub-)System
Inspired By: WikiTree · Knowledge Tree

What is Global Wiki Tree?

Global Wiki Tree is a meta-communities wiki tool that reimagines how knowledge is collectively owned, curated, and distributed. Where Wikipedia created a single encyclopedia, Global Wiki Tree creates a living forest of interconnected knowledge bases — each community tending its own branch while sharing roots with the whole.

The core premise is simple: communities and organizations already produce vast amounts of structured knowledge, but it remains trapped in silos. Global Wiki Tree gives these communities a shared trunk — a common data layer with strong confidence defaults — while letting each branch define its own curation standards, taxonomies, and export formats.

Think of it as the knowledge commons that the internet was always supposed to become: not a single platform that owns everything, but a living tree where every node is sovereign, every connection is meaningful, and the whole grows stronger as each part grows.

Addressing Real Needs

Two forces are converging. First, knowledge management remains fragmented — your notes, your team's wiki, your community's documentation all live in separate tools that barely talk to each other. Second, the rise of AI has made trustworthy, curated data more valuable than ever. Models trained on messy, unverified web scrapes produce messy, unverified outputs. Global Wiki Tree addresses both: structured knowledge that flows freely between tools, with provenance and confidence baked in.

How It Works

Global Wiki Tree follows a three-phase flow: any kind of data enters the system, gets shaped through community curation, and exits as clean, portable files ready for whatever tool you prefer.

📥

Import

Any kind of data enters the tree — structured, semi-structured, or raw. APIs, file uploads, scrapers.

🌳

Curate

Community editors shape, verify, and connect knowledge. The easiest path to quality: curation by domain experts.

📤

Export

Clean, interoperable files for Tana, Logseq, Obsidian, or any tool. Also available via API.
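The three-phase flow can be sketched as a minimal pipeline. Everything here is illustrative: the `Node` schema, its field names, and the Markdown export shape are assumptions for the sketch, not the project's actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A single knowledge node in the tree (hypothetical schema)."""
    title: str
    body: str
    sources: list = field(default_factory=list)    # citations attached during curation
    verified_by: set = field(default_factory=set)  # curators who confirmed the claim

def import_raw(record: dict) -> Node:
    """Import: wrap raw or semi-structured data in a node, initially unverified."""
    return Node(title=record.get("title", "untitled"), body=record.get("text", ""))

def curate(node: Node, curator: str, source: str) -> Node:
    """Curate: a curator verifies the node and attaches a cited source."""
    node.verified_by.add(curator)
    node.sources.append(source)
    return node

def export_markdown(node: Node) -> str:
    """Export: emit an Obsidian/Logseq-style Markdown page."""
    refs = "\n".join(f"- {s}" for s in node.sources)
    return f"# {node.title}\n\n{node.body}\n\n## Sources\n{refs}\n"

node = curate(import_raw({"title": "Oak", "text": "Oaks are trees."}),
              curator="alice", source="https://example.org/oaks")
print(export_markdown(node))
```

The same curated node could feed other exporters (Tana, API JSON) without touching the import or curation steps, which is the point of the shared trunk.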

Confidence Levels

Not all knowledge carries the same weight. Global Wiki Tree defaults to showing only strong-confidence content — claims that have been verified by multiple curators or backed by cited sources. This is the public face: what any visitor sees.

Threshold range: Low → Strong (default)

For organizations and communities making offers, the confidence threshold is adjustable. An internal research team might want to see everything, including tentative connections and unverified imports. A public knowledge base for a city government might want only the most rigorously sourced material. The slider is theirs to set.
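An adjustable threshold like this could be a simple filter over per-node confidence levels. A minimal sketch, assuming the two levels named above; the enum values and field names are invented for illustration.

```python
from enum import IntEnum

class Confidence(IntEnum):
    LOW = 1     # tentative connections, unverified imports
    STRONG = 3  # verified by multiple curators or backed by cited sources

def visible(nodes, threshold=Confidence.STRONG):
    """Return only the nodes at or above the viewer's confidence threshold."""
    return [n for n in nodes if n["confidence"] >= threshold]

nodes = [
    {"title": "Verified claim", "confidence": Confidence.STRONG},
    {"title": "Tentative link", "confidence": Confidence.LOW},
]
public = visible(nodes)                    # public default: strong only
internal = visible(nodes, Confidence.LOW)  # research team sees everything
```

The default argument encodes the public face; an organization moves the slider simply by passing a lower threshold.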

The API

Global Wiki Tree exposes a full API for programmatic access. Import data from any source, query the knowledge graph, export curated subsets, and integrate with existing workflows. The API is the backbone that makes the tree a living system rather than a static archive.
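Since the API is not yet specified, here is only a hypothetical sketch of what a graph query might look like. The base URL, the `/nodes` endpoint, and every parameter name are assumptions made for illustration.

```python
from urllib.parse import urlencode

BASE = "https://api.globalwikitree.example/v1"  # hypothetical endpoint

def nodes_url(tag: str, min_confidence: str = "strong", fmt: str = "logseq") -> str:
    """Build a query URL for the (hypothetical) /nodes endpoint:
    filter by tag and confidence, and request a specific export format."""
    params = {"tag": tag, "min_confidence": min_confidence, "export": fmt}
    return f"{BASE}/nodes?" + urlencode(params)

url = nodes_url("forestry")
# → https://api.globalwikitree.example/v1/nodes?tag=forestry&min_confidence=strong&export=logseq
```

The same pattern would cover the other directions the section names: POSTing imports from any source and pulling curated subsets into existing workflows.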

Business Model

Global Wiki Tree sustains itself through the organizations and communities that use it to publish and manage knowledge — not by extracting value from individual users. The commons stays free; the tooling for professional use carries the cost.

For Communities

Communities that want to publish structured knowledge get the full platform for free or near-free. The tree grows by having more public data from more communities.

  • Free public knowledge hosting
  • Community curation tools
  • Open export in all formats
  • Strong-confidence defaults

For Organizations Making Offers

Organizations that use the tree for internal knowledge management or to publish professional-grade datasets pay for advanced features.

  • Adjustable confidence thresholds
  • Private branches and access control
  • Priority API access and SLA
  • Custom export pipelines

How does the tree attract lots of public data? By serving those who already want to publish — communities, NGOs, open-knowledge organizations, civic institutions. They bring the data; the tree gives it structure, discoverability, and interoperability.

The Movement

Global Wiki Tree is not just a product — it needs to become a tech movement, similar to how Wikipedia became a movement before it became a platform. The technology is the easy part. The hard part is building the culture of collaborative knowledge curation at scale.

To reach critical mass, the project requires six key players who together form the initial root system of the tree.

01

The Architect

System designer who understands knowledge graphs, ontologies, and interoperability at scale.

02

The Community Builder

Someone who can rally early-adopter communities and build the culture of open curation.

03

The Data Steward

Expert in data quality, provenance tracking, and confidence scoring systems.

04

The Tool Maker

Developer focused on import/export pipelines, API design, and integration with existing tools.

05

The Advocate

Public voice who can articulate why knowledge commons matter and attract institutional partners.

06

The Sustainer

Business strategist who designs the model that keeps the commons funded without compromising it.

Current status: The vision is clear, the need is real, but the movement has not yet formed. Like Wikipedia in 2000, we need the right combination of idealism, technical talent, and timing. The seed is planted — it waits for its six roots.

Resources & Inspirations

Global Wiki Tree stands on the shoulders of tools and communities that have been exploring networked knowledge for years. These are the roots we draw from.

WikiTree · TiddlyWiki · Logseq · Tana · Obsidian · Roam Research · Network-graph outliners · Tools for Thought · Lambrospetrou · Wikipedia · Linked Data / RDF · Knowledge Graphs