Procedure for Fine-Tuning Requirements

Strict and consistent manual annotation of the test sets of a requirement

Updated over a month ago

For each Test Set, the procedure consists of two steps plus saving:

Practical Analogy:

Imagine you are mentoring a young, talented lawyer. You present them with 10 contracts and mark in each contract:

  • WHAT is relevant to answer a specific legal question (the context).

  • HOW the correct answer to this question should read (the AI answer).

This manual annotation is exactly the activity you perform in Legartis to train the AI.

Step 1: Review of the Relevant Context

  • Review the relevant context to see if the pre-annotated sentences (light blue) are actually relevant for answering the requirement.

  • Add relevant sentences or remove non-relevant ones (manual annotation) simply by clicking them in the test set.

Tip: Pre-annotated sentences are additionally highlighted in dark blue in the Mini-Map, which allows for a quick overview.

Tip: You can filter the view to the pre-annotated sentences using the "Show only selected segments" option.

Step 2: Review and Correction of the AI Answer

  • Review the AI's answer in the "Information identified by AI" area. The answer is typically displayed as "fulfilled" / "not fulfilled":

    • Fulfilled: The requirement (e.g., a clause is present or acceptable) is met in the present contract.

    • Not fulfilled: The requirement is not met in the contract (e.g., the clause is missing or formulated unacceptably).

  • If necessary, correct the AI's answer to the right status by clicking the toggle button.

Tip: The "AI Explanation" is provided to help you understand the AI's reasoning. However, remain critical! The AI is trained to provide plausible explanations – even if the answer is wrong. Human expertise is not replaceable here.

Two Central Principles for Effective Annotation:

1. Rigor (Precision):

Make sure the relevant context is annotated strictly:

  • Focus on the sentences that are absolutely necessary to answer the review requirement.

  • Avoid over-annotation: Many sentences may appear thematically relevant, but not everything related to the topic is necessary for answering the specific requirement.

2. Coherence (Consistency):

All Test Set Cases within a test set must be annotated in the same way.

  • Consistency is essential. An inconsistent test set (e.g., a passage is annotated once, and the next time it is not, although it has the same function) confuses the AI and impairs the quality of the entire playbook.

  • Recommendation: Annotate all Test Set Cases for a requirement in a single pass.
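
The consistency principle can be made concrete: the same passage must never be marked as relevant in one Test Set Case and left unmarked in another. As a conceptual sketch only – the data structure and function name here are hypothetical illustrations, not the Legartis API – such conflicts could be detected like this:

```python
# Hypothetical sketch (not the Legartis API): flag inconsistent annotations,
# i.e. the same sentence marked as relevant in one case but not in another.
from collections import defaultdict

def find_inconsistencies(cases):
    """cases: list of dicts mapping sentence text -> bool (marked relevant?).
    Returns the sentences that appear in several cases with differing marks."""
    marks_per_sentence = defaultdict(set)
    for case in cases:
        for sentence, marked in case.items():
            marks_per_sentence[sentence].add(marked)
    # A sentence annotated both ways across cases is inconsistent.
    return [s for s, marks in marks_per_sentence.items() if len(marks) > 1]
```

An empty result corresponds to a coherent test set; any returned sentence is exactly the kind of contradiction that confuses the AI during fine-tuning.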

Save

After completing both steps, click "Save" and the system automatically switches to the next contract within the test set.


When is a Test Set Fully Annotated?

A test set is considered fully annotated when the following criteria are met:

  • Balance: The set contains a sufficient and representative mix of positive ("fulfilled") and negative ("not fulfilled") examples.

  • Completeness: All contracts/documents within the test set have been manually annotated and reviewed.

  • Marking: The test set has been finally marked with "Manual Annotation Completed".
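
The first two criteria are checkable rules. As a conceptual sketch only – the record type, field names, and threshold below are hypothetical illustrations, not the Legartis API – they could be expressed like this:

```python
# Hypothetical sketch (not the Legartis API): checking whether a test set
# meets the balance and completeness criteria described above.
from dataclasses import dataclass

@dataclass
class TestSetCase:
    annotated: bool   # has this case been manually annotated and reviewed?
    fulfilled: bool   # the (corrected) answer status of the case

def is_fully_annotated(cases, min_per_class=2):
    """A test set counts as fully annotated when every case has been
    reviewed and both answer classes are sufficiently represented.
    min_per_class is an assumed illustrative threshold."""
    if not cases or not all(c.annotated for c in cases):
        return False  # completeness criterion violated
    positives = sum(c.fulfilled for c in cases)
    negatives = len(cases) - positives
    # Balance criterion: enough "fulfilled" AND "not fulfilled" examples.
    return positives >= min_per_class and negatives >= min_per_class
```

Only once such a check passes would the final step – marking the set "Manual Annotation Completed" – be appropriate.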
