Glossary

Frequently used terms

Legartis legal tech terminology

F1-Score

A metric used to evaluate AI contract review performance: the harmonic mean of precision (accuracy of detected requirements) and recall (completeness of detection), combined into a single score. It ranges between 0 and 1, with a value closer to 1 indicating higher overall effectiveness.
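As a small sketch of how the score combines the two components, precision, recall, and F1 can be computed from counts of true positives, false positives, and false negatives. The counts below are illustrative only, not Legartis data:

```python
def f1_score(true_positives: int, false_positives: int, false_negatives: int) -> float:
    """Harmonic mean of precision and recall; returns 0.0 when undefined."""
    detected = true_positives + false_positives
    actual = true_positives + false_negatives
    precision = true_positives / detected if detected else 0.0
    recall = true_positives / actual if actual else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Illustrative example: 8 requirements correctly detected,
# 1 false detection, 1 missed requirement.
print(round(f1_score(8, 1, 1), 3))  # → 0.889
```

Here precision and recall are both 8/9, so their harmonic mean is also 8/9; when the two diverge, the harmonic mean is pulled towards the lower value, which is why F1 penalises an imbalance between accuracy and completeness.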

Playbook

A purpose-specific compilation of selected requirements to ensure uniform contract review in compliance with internal guidelines.

Contract Playbook Creator

A Legartis tool that simplifies and accelerates the creation of playbooks and their requirements by extracting internal rules and standards from contracts or written guidelines (e.g. Excel spreadsheets) or, if no template is available, by interactively generating suggestions for contract review.

Prompt Editing

Writing prompts (instructions) to the AI in order to steer its behaviour, reactions or results towards a specific goal or desired outcome.

Requirement

A content requirement for a contract that reflects the company's internal contract review guidelines. An example would be: ‘The payment period may not exceed 30 days.’

Test Set

A collection of contracts used by the customer to evaluate the AI’s performance. Each contract includes relevant context and the correct answer for a requirement, allowing assessment of the AI’s accuracy. A test set at Legartis usually contains 10 cases, i.e. 10 contracts.

Positive Test Set Case

A test set case in which a contract meets the specified requirement. If the relevant sentence or clause is present and fulfills the requirement, the case is considered positive.

Negative Test Set Case

A test set case in which a contract does not meet the specified requirement. If the relevant sentence or clause is absent or does not fulfill the requirement, the case is considered negative.

No-Context Test Set Case

A test set case in which the contract provides no relevant context for the requirement. In this case, the AI cannot generate an answer, as it relies on context to evaluate the test request.
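The three case types above can be sketched with a minimal, hypothetical data structure; the field names and layout here are assumptions for illustration, not Legartis's actual test set format:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TestCase:
    # Relevant contract text, or None when the contract offers no context.
    context: Optional[str]
    # True = requirement fulfilled, False = not fulfilled, None = no context.
    expected: Optional[bool]

def classify(case: TestCase) -> str:
    """Label a case as positive, negative, or no-context."""
    if case.context is None or case.expected is None:
        return "no-context"
    return "positive" if case.expected else "negative"

cases = [
    TestCase("Payment is due within 30 days.", True),
    TestCase("Payment is due within 60 days.", False),
    TestCase(None, None),
]
print([classify(c) for c in cases])  # → ['positive', 'negative', 'no-context']
```

The no-context branch mirrors the definition above: without relevant context, there is no correct answer to compare the AI's output against, so the case is set aside rather than scored as positive or negative.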

Fine-tuning/Review

The review and assessment of test sets to determine whether the AI correctly interprets and applies the requirements when evaluating both the context and the requirement response. At Legartis, this process, called fine-tuning, helps validate AI performance and supports future training improvements.
