It was June 2024 when I first understood the word “ontology” in theory. It took another quarter before I saw it used in a POC. I wish I could write “that changed everything,” but nope. Only in Q1 2025, when we actively started pushing agents to production, did my understanding become firm - thanks to actual implementations.
Context graphs have made the concept of ontology wildly popular. Outside of Palantir and knowledge management circles, it hasn’t been a common term in the new-age AI communities - though “ontologist” has existed as a job title for a while, and regulated industries like pharma and healthcare have long had dedicated knowledge management teams, including ontologists.
The real question is - what’s an ontology, why does it matter now, and how should you think about leveraging it?
What’s an Ontology [Juan’s explained it here as well]
Non ELI5 Version - A formal, machine-readable domain model of concepts, attributes, relationships, and axioms within a domain, enabling shared understanding and reasoning. Sounds dense. Let's break it down piece by piece.
Formal - It's unambiguous. No room for interpretation. E.g., a glossary says "Diabetes is a metabolic condition." An ontology says "Diabetes is-a MetabolicDisorder. MetabolicDisorder is-a Disease." The relationship is explicit. Logic, not prose.
Machine-readable - It's code. Not just text or flowcharts. For instance, you can draw a box-and-arrow diagram in PowerPoint. That's not an ontology. An ontology lives in a format (OWL, RDF) that software can parse, query, and reason over. The machine can actually do something with it. Ask ChatGPT?
Domain model of concepts, attributes, relationships - This part feels familiar. Think UML diagrams.
Concepts: Patient, Medication, Diagnosis, Procedure
Attributes: Patient has dateOfBirth, Medication has dosage
Relationships: Patient has-diagnosis Diagnosis, Medication treats Condition
If you've ever drawn an entity-relationship diagram, you've done a version of this. The ontology just makes it formal and machine-readable.
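To make "formal and machine-readable" concrete, here's a minimal sketch of those same concepts, attributes, and relationships as RDF triples using Python's rdflib. The example.org namespace and every class and property name here are made up for illustration - not a real healthcare vocabulary.

```python
from rdflib import Graph, Namespace, RDF, RDFS

EX = Namespace("http://example.org/healthcare#")  # made-up namespace for illustration
g = Graph()
g.bind("ex", EX)

# Concepts
for concept in (EX.Patient, EX.Medication, EX.Diagnosis, EX.Procedure):
    g.add((concept, RDF.type, RDFS.Class))

# Attributes (properties that hang off a concept)
g.add((EX.dateOfBirth, RDFS.domain, EX.Patient))
g.add((EX.dosage, RDFS.domain, EX.Medication))

# Relationships (properties that connect concepts)
g.add((EX.hasDiagnosis, RDFS.domain, EX.Patient))
g.add((EX.hasDiagnosis, RDFS.range, EX.Diagnosis))
g.add((EX.treats, RDFS.domain, EX.Medication))
g.add((EX.treats, RDFS.range, EX.Condition))

print(g.serialize(format="turtle"))  # software can parse, query, and reason over this
```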
Axioms - Rules and constraints. For example, "A patient cannot be prescribed a medication they're allergic to." That's an axiom. "If Drug A interacts with Drug B, and a patient is on Drug A, then Drug B is contraindicated." That's an axiom too. These aren't just data - they're logic that the system enforces.
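In a real ontology stack an axiom like the allergy rule would live as an OWL restriction or a SHACL shape. Here's a deliberately stripped-down sketch of the same idea as a plain Python check (the field names are made up), just to show that an axiom is enforceable logic, not documentation:

```python
def check_prescription(patient: dict, medication: str) -> None:
    """Enforce the axiom: a patient cannot be prescribed a medication they're allergic to."""
    if medication in patient.get("allergies", set()):
        raise ValueError(f"Axiom violated: {patient['id']} is allergic to {medication}")

patient_x = {"id": "patient-x", "allergies": {"Penicillin"}}
check_prescription(patient_x, "Metformin")    # fine, no allergy on record
# check_prescription(patient_x, "Penicillin") # would raise ValueError
```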
Domain — It's bounded. You're not modeling everything. That is - a healthcare ontology doesn't need to define what a "chair" is. Stay within your domain.
Enabling shared understanding - It enables agreement. For instance - When your cardiology team and your pharmacy team both use the same ontology, they've agreed on what "hypertension" means, how it relates to medications, what counts as a contraindication. Formalizing disagreements is itself a form of agreement—at least now you know where you differ.
Reasoning - Deriving new knowledge from what's already there. You state explicitly: "Metformin treats Type 2 Diabetes." You state: "Patient X has Type 2 Diabetes." The system infers: "Metformin is a candidate treatment for Patient X." That's reasoning. Implicit knowledge from explicit facts.
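Here's a hedged sketch of that exact inference with rdflib and a SPARQL query - two stated facts in, one derived fact out. The namespace and terms are illustrative:

```python
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/healthcare#")  # illustrative namespace
g = Graph()
g.add((EX.Metformin, EX.treats, EX.Type2Diabetes))       # stated explicitly
g.add((EX.PatientX, EX.hasDiagnosis, EX.Type2Diabetes))  # stated explicitly

# Derived: anything that treats a condition the patient has is a candidate treatment.
query = """
SELECT ?drug ?patient WHERE {
    ?drug    ex:treats       ?condition .
    ?patient ex:hasDiagnosis ?condition .
}
"""
for drug, patient in g.query(query, initNs={"ex": EX}):
    print(f"{drug} is a candidate treatment for {patient}")
```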

Ontologies vs Taxonomies. Relationships vs Hierarchy
Taxonomy (hierarchical categorization) is a far better understood concept. Think of the category menu on Amazon - how products are classified and organized. An ontology captures the relationships between those concepts. As code.
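A rough sketch of the difference, with made-up concept names - a taxonomy only says what is a kind of what, while an ontology keeps that hierarchy and adds typed relationships on top:

```python
# A taxonomy: pure hierarchy, parent/child only.
taxonomy = {
    "Disease": {
        "MetabolicDisorder": ["Diabetes", "Obesity"],
        "InfectiousDisease": ["Influenza"],
    }
}

# An ontology keeps that hierarchy and adds typed relationships between concepts.
ontology_relationships = [
    ("Diabetes", "is-a", "MetabolicDisorder"),
    ("MetabolicDisorder", "is-a", "Disease"),
    ("Metformin", "treats", "Diabetes"),                   # beyond hierarchy
    ("Metformin", "contraindicated-in", "KidneyFailure"),  # beyond hierarchy
]
```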
So far so good. Why are they popular now?
Simply put - human teams route around incomplete information all the time. We know what a given term means in a specific context without it being spelled out. We fill gaps with judgment. AI agents don't have that. They take documents and words literally. They don't know that Metformin treats diabetes - they just know those words appear near each other in their training data. That's correlation, not meaning.
Here are some solid examples that illustrate how the same word carries completely different meanings across domains.
| Discharge | Pipeline | Exposure |
|---|---|---|
| Healthcare: Patient leaving the hospital after treatment | Sales: Deals in progress, from lead to close | Finance: Risk level to a particular asset or market |
| Banking/Legal: Releasing someone from a debt or obligation | Oil & Gas: Physical infrastructure carrying crude | Healthcare: Contact with a pathogen or harmful substance |
| Manufacturing: Releasing material from a container or system | Software/ML: Sequence of data processing steps | Photography/Marketing: Visibility or reach |
Ontologies give AI the relationship structure that humans carry in their heads. That's why everyone's suddenly talking about them.
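One way that shows up in practice: the same surface word lives in different domain namespaces, so "Discharge" in a healthcare ontology and "Discharge" in a banking ontology are simply different concepts with different relationships. A tiny illustrative sketch (the namespaces and property names are made up):

```python
from rdflib import Graph, Namespace, RDFS, Literal

HEALTH = Namespace("http://example.org/healthcare#")   # made-up namespaces
BANKING = Namespace("http://example.org/banking#")

g = Graph()
# Same surface word, two distinct concepts, each with its own relationships.
g.add((HEALTH.Discharge, RDFS.label, Literal("Patient leaving the hospital after treatment")))
g.add((HEALTH.Discharge, HEALTH.endsEpisodeOf, HEALTH.HospitalStay))
g.add((BANKING.Discharge, RDFS.label, Literal("Releasing someone from a debt or obligation")))
g.add((BANKING.Discharge, BANKING.releasesFrom, BANKING.DebtObligation))
```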
Sounds Great. What's the Catch?
Most teams don’t get to ontologies. ETL pipelines - built. Pilot - already shipped. Prompt iterations are ongoing to fix hallucinations. Then someone says "we need an ontology," and suddenly you're retrofitting structure onto a system that was built without it. The instinct is to bolt the ontology on at the end. Layer it on top of your existing data. That's backwards.
Wait - Don't LLMs Already Know This Stuff?
Sort of. But not in the way you need. LLMs learn patterns. They know that "Metformin" and "diabetes" frequently appear together. That's statistical correlation. It's not the same as knowing that Metformin treats Type 2 Diabetes, that it's contraindicated in patients with kidney failure, that it belongs to the biguanide drug class.
The LLM might get it right. It might not. Ask the same question differently and you might get a different answer. That's the problem - there's no guarantee of consistency, no logical structure underneath.
Using ontologies isn't about training LLMs. It's about giving LLMs a formal structure to reason over.
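What that can look like in practice - a sketch, not a prescribed pattern: pull the stated facts out of the ontology and hand them to the model as grounded context instead of hoping it recalls them from training data. The graph contents, helper functions, and prompt format below are all illustrative assumptions:

```python
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/healthcare#")  # illustrative
g = Graph()
g.add((EX.Metformin, EX.treats, EX.Type2Diabetes))
g.add((EX.Metformin, EX.contraindicatedIn, EX.KidneyFailure))
g.add((EX.Metformin, EX.memberOfClass, EX.Biguanide))

def local_name(term) -> str:
    """Strip the namespace so the facts read as plain words (illustrative helper)."""
    return str(term).split("#")[-1]

def facts_about(graph: Graph, subject) -> str:
    """Render every stated fact about a concept as one plain-text line."""
    return "\n".join(
        f"{local_name(s)} {local_name(p)} {local_name(o)}"
        for s, p, o in graph.triples((subject, None, None))
    )

context = facts_about(g, EX.Metformin)
prompt = (
    "Answer using only the facts below.\n\n"
    f"{context}\n\n"
    "Question: Is Metformin a reasonable option for a patient with kidney failure?"
)
print(prompt)  # this grounded prompt goes to the LLM; the structure decides what's true
```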
So, where should one start?
Short answer - start with the domain ontology, not your enterprise data. Enterprise data is dynamic - it evolves, new knowledge is added, older information is sunsetted. The underlying domain meaning doesn’t change.
The domain ontology captures relationships that exist in your field regardless of your specific company. Then you introduce your enterprise data on top of that foundation.
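Here's a sketch of that layering, assuming an RDF stack: keep the slow-moving domain model and the fast-moving enterprise data in separate graphs and combine them at query time. All names and terms are made up:

```python
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/healthcare#")  # illustrative

domain = Graph()       # slow-moving: the field's concepts, relationships, axioms
domain.add((EX.Metformin, EX.treats, EX.Type2Diabetes))
domain.add((EX.Metformin, EX.contraindicatedIn, EX.KidneyFailure))

enterprise = Graph()   # fast-moving: your patients, your prescriptions, your records
enterprise.add((EX.PatientX, EX.hasDiagnosis, EX.Type2Diabetes))
enterprise.add((EX.PatientX, EX.prescribed, EX.Metformin))

# Query the union: enterprise facts get interpreted against the stable domain model.
combined = domain + enterprise  # rdflib graphs support set-style union with "+"
```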
Most teams hit this wall and try to fix it with more data. More documents. More context. Longer prompts. Unfortunately, that's the wrong fix. The foundation needs the ontology. And the ontology needs to come first, not last.
*Non ELI5 Alert* - Here’s a blog post that goes deeper into the weeds of how ontologies enable and govern reasoning, and why the model isn't asked to decide what's true. Good follow-up for readers who want the "how" after this newsletter covers the "what" and "why."
One thing before you go. I am looking for feedback to improve - If this made sense, just reply "got it." If something didn't land, tell me that as well. The fun part is figuring out how to say complicated stuff in a way that actually sticks.
And if you liked it - maybe send it to someone who is wrestling with all this jargon?


