# What is Portex?

PortexAI builds evaluations ("evals") for state-of-the-art AI models and agents. Evals have become a bedrock of the AI ecosystem because they increasingly do double duty: they contextualize model performance in benchmarks, and they provide reward signals for post-training and reinforcement learning.

Portex Evals are expert-authored, domain-specific evaluation datasets and grading rubrics designed to measure frontier, economically relevant work performed by AI models. Each eval is a set of procedural tasks (with optional reference files) plus a private answer key and an explicit rubric, which our AsymmetryZero LLM-jury protocol or a lexical judge uses to produce standardized scores and reports.
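As a rough illustration of that structure, the sketch below models an eval bundle as tasks plus a private answer key and rubric, graded by a toy lexical judge. The class and field names here are hypothetical, not Portex's actual schema, and the exact-match scoring stands in for the real judging protocols:

```python
from dataclasses import dataclass, field


@dataclass
class EvalTask:
    """One procedural task; fields are illustrative, not Portex's schema."""
    prompt: str
    reference_files: list[str] = field(default_factory=list)


@dataclass
class EvalBundle:
    """Tasks plus the private answer key and explicit rubric."""
    tasks: list[EvalTask]
    answer_key: dict[int, str]  # private: task index -> expected answer
    rubric: str                 # explicit grading criteria


def lexical_score(bundle: EvalBundle, responses: dict[int, str]) -> float:
    """Toy lexical judge: fraction of answers matching after normalization."""
    correct = sum(
        1
        for i, expected in bundle.answer_key.items()
        if responses.get(i, "").strip().lower() == expected.strip().lower()
    )
    return correct / len(bundle.answer_key)
```

In practice the answer key stays private to the grading side, so model builders submit responses and receive standardized scores and reports rather than the key itself.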

The [PortexAI Datalab](https://datalab.portexai.com) is where experts design, publish, and commercialize evals and accompanying datasets, and where model builders can license task bundles or evaluate their models' responses.

<figure><img src="https://867705781-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FUkUyaZptb9tX5Pbk7oma%2Fuploads%2F2NN8Jdzms8U59LsOTX3s%2FScreenshot%202026-02-14%20at%208.02.48%E2%80%AFAM.png?alt=media&#x26;token=307948b5-c2d0-44cc-8cfa-3834bdc63b29" alt=""><figcaption></figcaption></figure>

These docs cover how to create evals, run them, and use the Datalab as either an expert or a model builder.

{% hint style="success" %}
New here? Start with [Creating an Account](https://docs.portexai.com/portex-docs/getting-started/creating-an-account) or read [How Evals Work](https://docs.portexai.com/portex-docs/core-concepts/how-evals-work) for a conceptual overview.
{% endhint %}
