Monday, December 1, 2025

Building a Simple Data Quality DSL in Python


Image by Author

 

Introduction

 
Data validation code in Python is often a pain to maintain. Business rules get buried in nested if statements, validation logic mixes with error handling, and adding new checks often means sifting through procedural functions to find the right place to insert code. Yes, there are data validation frameworks you can use, but we'll focus on building something super simple yet useful with Python.

Let's write a simple Domain-Specific Language (DSL) of sorts by creating a vocabulary specifically for data validation. Instead of writing generic Python code, you build specialized functions and classes that express validation rules in terms that match how you think about the problem.

For data validation, this means rules that read like business requirements: "customer ages must be between 18 and 120" or "email addresses must contain an @ symbol and should have a valid domain." You'd like the DSL to handle the mechanics of checking data and reporting violations, while you focus on expressing what valid data looks like. The result is validation logic that is readable, easy to maintain and test, and simple to extend. So, let's start coding!

🔗 Link to the code on GitHub

 

Why Build a DSL?

 
Consider validating customer data with plain Python:

def validate_customers(df):
    errors = []
    if df['customer_id'].duplicated().any():
        errors.append("Duplicate IDs")
    if (df['age'] < 0).any():
        errors.append("Negative ages")
    if not df['email'].str.contains('@').all():
        errors.append("Invalid emails")
    return errors

 

This approach hardcodes validation logic, mixes business rules with error handling, and becomes unmaintainable as rules multiply. Instead, we're looking to write a DSL that separates concerns and creates reusable validation components.

Instead of writing procedural validation functions, a DSL lets you express rules that read like business requirements:

# Traditional approach
if df['age'].min() < 0 or df['age'].max() > 120:
    raise ValueError("Invalid ages found")

# DSL approach
validator.add_rule(Rule("Valid ages", between('age', 0, 120), "Ages must be 0-120"))

 

The DSL approach separates what you are validating (business rules) from how violations are handled (error reporting). This makes validation logic testable, reusable, and readable by non-programmers.

 

Creating a Sample Dataset

 
Start by spinning up some realistic sample e-commerce customer data containing common quality issues:

import pandas as pd

customers = pd.DataFrame({
    'customer_id': [101, 102, 103, 103, 105],
    'email': ['john@gmail.com', 'invalid-email', '', 'sarah@yahoo.com', 'mike@domain.co'],
    'age': [25, -5, 35, 200, 28],
    'total_spent': [250.50, 1200.00, 0.00, -50.00, 899.99],
    'join_date': ['2023-01-15', '2023-13-45', '2023-02-20', '2023-02-20', '']
})  # Note: 2023-13-45 is an intentionally malformed date.

 

This dataset has duplicate customer IDs, invalid email formats, impossible ages, negative spending amounts, and malformed dates. That should work quite well for testing validation rules.

 

Writing the Validation Logic

 

// Creating the Rule Class

Let's start by writing a simple Rule class that wraps validation logic:

class Rule:
    def __init__(self, name, condition, error_msg):
        self.name = name
        self.condition = condition
        self.error_msg = error_msg

    def check(self, df):
        # The condition function returns True for VALID rows.
        # We use ~ (bitwise NOT) to select the rows that VIOLATE the condition.
        violations = df[~self.condition(df)]
        if not violations.empty:
            return {
                'rule': self.name,
                'message': self.error_msg,
                'violations': len(violations),
                'sample_rows': violations.head(3).index.tolist()
            }
        return None

 

The condition parameter accepts any function that takes a DataFrame and returns a Boolean Series indicating valid rows. The tilde operator (~) inverts this Boolean Series to identify violations. When violations exist, the check method returns detailed information including the rule name, error message, violation count, and sample row indices for debugging.

This design separates validation logic from error reporting. The condition function focuses purely on the business rule while the Rule class handles error details consistently.
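
As a quick sanity check, here's the Rule class used on its own with a throwaway condition (this tiny DataFrame is illustrative, not part of the pipeline we build below):

import pandas as pd

demo = pd.DataFrame({'age': [25, -5, 40]})

# The condition returns True where the row is valid
rule = Rule("Non-negative age", lambda df: df['age'] >= 0, "Age cannot be negative")
print(rule.check(demo))
# {'rule': 'Non-negative age', 'message': 'Age cannot be negative',
#  'violations': 1, 'sample_rows': [1]}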

 

// Adding Multiple Rules

Next, let's code up a DataValidator class that manages collections of rules:

class DataValidator:
    def __init__(self):
        self.rules = []

    def add_rule(self, rule):
        self.rules.append(rule)
        return self  # Enables method chaining

    def validate(self, df):
        results = []
        for rule in self.rules:
            violation = rule.check(df)
            if violation:
                results.append(violation)
        return results

 

The add_rule method returns self to enable method chaining. The validate method executes all rules independently and collects violation reports. This approach ensures one failing rule doesn't prevent others from running.
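
As a small illustration, the chaining lets you declare a validator in one expression (the rules here are just placeholders):

validator = (
    DataValidator()
    .add_rule(Rule("No null ages", lambda df: df['age'].notna(), "Age is required"))
    .add_rule(Rule("Adults only", lambda df: df['age'] >= 18, "Customers must be 18+"))
)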

 

// Building Readable Conditions

Recall that when instantiating an object of the Rule class, we also need a condition function. This can be any function that takes in a DataFrame and returns a Boolean Series. While simple lambda functions work, they aren't very easy to read. So let's write helper functions to create a readable validation vocabulary:

def not_null(column):
    return lambda df: df[column].notna()

def unique_values(column):
    return lambda df: ~df.duplicated(subset=[column], keep=False)

def between(column, min_val, max_val):
    return lambda df: df[column].between(min_val, max_val)

 

Each helper function returns a lambda that works with pandas Boolean operations.

  • The not_null helper uses pandas' notna() method to identify non-null values.
  • The unique_values helper uses duplicated(..., keep=False) with a subset parameter to flag all duplicate occurrences, ensuring a more accurate violation count.
  • The between helper uses the pandas between() method, which handles range checks automatically.
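
To see what a helper produces, call it directly on the sample data; each returns a Boolean Series marking valid rows (this assumes the customers frame created earlier):

print(between('age', 13, 120)(customers))
# 0     True
# 1    False
# 2     True
# 3    False
# 4     True
# Name: age, dtype: bool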

For pattern matching, regular expressions become simple:

def matches_pattern(column, pattern):
    return lambda df: df[column].str.match(pattern, na=False)

 

The na=False parameter ensures missing values are treated as validation failures rather than matches, which is typically the desired behavior for required fields.
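
A minimal check of that behavior (self-contained, with hypothetical data):

import pandas as pd

demo = pd.DataFrame({'email': ['a@b.com', None]})
print(matches_pattern('email', r'^[^@]+@[^@]+$')(demo))
# 0     True
# 1    False   <- missing value treated as a failure because of na=False
# Name: email, dtype: bool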

 

Building a Data Validator for the Sample Dataset

 
Let's now build a validator for the customer dataset to see how this DSL works:

validator = DataValidator()

validator.add_rule(Rule(
   "Unique customer IDs",
   unique_values('customer_id'),
   "Customer IDs must be unique across all records"
))

validator.add_rule(Rule(
   "Valid email format",
   matches_pattern('email', r'^[^@\s]+@[^@\s]+\.[^@\s]+$'),
   "Email addresses must contain an @ symbol and a domain"
))

validator.add_rule(Rule(
   "Reasonable customer age",
   between('age', 13, 120),
   "Customer age must be between 13 and 120 years"
))

validator.add_rule(Rule(
   "Non-negative spending",
   lambda df: df['total_spent'] >= 0,
   "Total spending amount cannot be negative"
))

 

Each rule follows the same pattern: a descriptive name, a validation condition, and an error message.

  • The first rule uses the unique_values helper function to check for duplicate customer IDs.
  • The second rule applies regular expression pattern matching to validate email formats. The pattern requires at least one character before and after the @ symbol, plus a domain extension.
  • The third rule uses the between helper for range validation, setting reasonable age limits for customers.
  • The final rule uses a lambda function for an inline condition that checks total_spent values are non-negative.

Notice how each rule reads almost like a business requirement. The validator collects these rules and can execute all of them against any DataFrame with matching column names:

issues = validator.validate(customers)

print("Validation Results:")
for issue in issues:
    print(f"❌ Rule: {issue['rule']}")
    print(f"   Problem: {issue['message']}")
    print(f"   Violations: {issue['violations']}")
    print(f"   Affected rows: {issue['sample_rows']}")
    print()

 

The output clearly identifies specific problems and their locations in the dataset, making debugging straightforward. For the sample data, you'll get the following output:

Validation Results:
❌ Rule: Unique customer IDs
   Problem: Customer IDs must be unique across all records
   Violations: 2
   Affected rows: [2, 3]

❌ Rule: Valid email format
   Problem: Email addresses must contain an @ symbol and a domain
   Violations: 3
   Affected rows: [1, 2, 4]

❌ Rule: Reasonable customer age
   Problem: Customer age must be between 13 and 120 years
   Violations: 2
   Affected rows: [1, 3]

❌ Rule: Non-negative spending
   Problem: Total spending amount cannot be negative
   Violations: 1
   Affected rows: [3]

 

Adding Cross-Column Validations

 

Real business rules often involve relationships between columns. Custom condition functions handle this more complex validation logic:

def high_spender_email_required(df):
    high_spenders = df['total_spent'] > 500
    has_valid_email = df['email'].str.contains('@', na=False)
    # Passes if: (not a high spender) OR (has a valid email)
    return ~high_spenders | has_valid_email

validator.add_rule(Rule(
    "High Spenders Need Valid Email",
    high_spender_email_required,
    "Customers spending over $500 must have valid email addresses"
))

 

This rule uses Boolean logic where high-spending customers must have valid emails, but low spenders can have missing contact information. The expression ~high_spenders | has_valid_email translates to "not a high spender OR has a valid email," which allows low spenders to pass validation regardless of email status.
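
This "A implies B" shape comes up often in business rules, so you could factor it into a reusable helper; the implies function below is a hypothetical extension, not part of the DSL built so far:

def implies(antecedent, consequent):
    # Rows satisfying the antecedent must also satisfy the consequent
    return lambda df: ~antecedent(df) | consequent(df)

# Equivalent to high_spender_email_required:
rule = Rule(
    "High Spenders Need Valid Email",
    implies(lambda df: df['total_spent'] > 500,
            lambda df: df['email'].str.contains('@', na=False)),
    "Customers spending over $500 must have valid email addresses"
)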

 

Handling Date Validation

 
Date validation requires careful handling since date parsing can fail:

def valid_date_format(column, date_format="%Y-%m-%d"):
    def check_dates(df):
        # pd.to_datetime with errors="coerce" turns invalid dates into NaT (Not a Time)
        parsed_dates = pd.to_datetime(df[column], format=date_format, errors="coerce")
        # A row is valid if the original value is not null AND the parsed date is not NaT
        return df[column].notna() & parsed_dates.notna()
    return check_dates

validator.add_rule(Rule(
    "Valid Join Dates",
    valid_date_format('join_date'),
    "Join dates must follow the YYYY-MM-DD format"
))

 

The validation passes only when the original value is not null AND the parsed date is valid (i.e., not NaT). No try-except block is needed here: errors="coerce" in pd.to_datetime handles malformed strings gracefully by converting them to NaT, which is then caught by parsed_dates.notna().
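
You can see the coercion at work on the sample join_date column (assuming the customers frame from earlier; note that the empty string also parses to NaT):

parsed = pd.to_datetime(customers['join_date'], format="%Y-%m-%d", errors="coerce")
print(parsed.tolist())
# [Timestamp('2023-01-15 00:00:00'), NaT, Timestamp('2023-02-20 00:00:00'),
#  Timestamp('2023-02-20 00:00:00'), NaT]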

 

Writing Decorator Integration Patterns

 
For production pipelines, you can write decorator patterns that provide clean integration:

from functools import wraps

def validate_dataframe(validator):
    def decorator(func):
        @wraps(func)  # Preserve the wrapped function's name and docstring
        def wrapper(df, *args, **kwargs):
            issues = validator.validate(df)
            if issues:
                error_details = [f"{issue['rule']}: {issue['violations']} violations" for issue in issues]
                raise ValueError(f"Data validation failed: {'; '.join(error_details)}")
            return func(df, *args, **kwargs)
        return wrapper
    return decorator

@validate_dataframe(validator)  # 'validator' is the instance we built earlier
def process_customer_data(df):
    return df.groupby('age').agg({'total_spent': 'sum'})

 

This decorator ensures data passes validation before processing begins, preventing corrupted data from propagating through the pipeline. The decorator raises descriptive errors that include the specific validation failures.
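
Calling the decorated function with the flawed sample data then fails fast (a sketch assuming the validator and customers frame defined above; output abbreviated):

try:
    process_customer_data(customers)
except ValueError as e:
    print(e)
# Data validation failed: Unique customer IDs: 2 violations; Valid email format: ...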

 

Extending the Pattern

 
You can extend the DSL to include other validation rules as needed:

# Statistical outlier detection
def within_standard_deviations(column, std_devs=3):
    # Valid if the absolute difference from the mean is within N standard deviations
    return lambda df: abs(df[column] - df[column].mean()) <= std_devs * df[column].std()

# Referential integrity across datasets
def foreign_key_exists(column, reference_df, reference_column):
    # Valid if the value in column is present in the reference_column of reference_df
    return lambda df: df[column].isin(reference_df[reference_column])

# Custom business logic
def profit_margin_reasonable(df):
    # Ensures 0 <= margin <= 1
    margin = (df['revenue'] - df['cost']) / df['revenue']
    return (margin >= 0) & (margin <= 1)

 

This is how you can build validation logic as composable functions that return Boolean Series.
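Because every condition is just a function from a DataFrame to a Boolean Series, conditions also compose. Here's a hypothetical all_of combinator, not part of the DSL above, that ANDs several conditions into one:

from functools import reduce

def all_of(*conditions):
    # A row is valid only if it satisfies every condition
    return lambda df: reduce(
        lambda acc, cond: acc & cond(df),
        conditions,
        pd.Series(True, index=df.index)
    )

rule = Rule(
    "Clean age column",
    all_of(not_null('age'), between('age', 0, 120)),
    "Ages must be present and between 0 and 120"
)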

Here's an example of how you can use the data quality DSL we've built on some sample data, assuming the helper functions live in a module called data_quality_dsl:

import pandas as pd
from data_quality_dsl import DataValidator, Rule, unique_values, between, matches_pattern

# Sample data
df = pd.DataFrame({
    'user_id': [1, 2, 2, 3],
    'email': ['user@test.com', 'invalid', 'user@real.com', ''],
    'age': [25, -5, 30, 150]
})

# Build validator
validator = DataValidator()
validator.add_rule(Rule("Unique users", unique_values('user_id'), "User IDs must be unique"))
validator.add_rule(Rule("Valid emails", matches_pattern('email', r'^[^@]+@[^@]+\.[^@]+$'), "Invalid email format"))
validator.add_rule(Rule("Reasonable ages", between('age', 0, 120), "Age must be 0-120"))

# Run validation
issues = validator.validate(df)
for issue in issues:
    print(f"❌ {issue['rule']}: {issue['violations']} violations")

 

Conclusion

 
This DSL, although simple, works because it aligns with how data professionals think about validation. Rules express business logic as easy-to-understand requirements while letting us use pandas for both performance and flexibility.

The separation of concerns makes validation logic testable and maintainable. This approach requires no external dependencies beyond pandas and introduces no learning curve for those already familiar with pandas operations.
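
As an example of that testability, each helper is a pure function of a DataFrame, so unit tests stay tiny (a minimal sketch using pytest, which is an assumption here rather than a dependency of the DSL):

import pandas as pd

def test_between_flags_out_of_range_ages():
    df = pd.DataFrame({'age': [25, -5, 200]})
    assert between('age', 0, 120)(df).tolist() == [True, False, False]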

This is something I worked on over a couple of evening coding sprints and several cups of coffee (of course!). But you can use this version as a starting point and build something much cooler. Happy coding!
 
 

Bala Priya C is a developer and technical writer from India. She likes working at the intersection of math, programming, data science, and content creation. Her areas of interest and expertise include DevOps, data science, and natural language processing. She enjoys reading, writing, coding, and coffee! Currently, she's working on learning and sharing her knowledge with the developer community by authoring tutorials, how-to guides, opinion pieces, and more. Bala also creates engaging resource overviews and coding tutorials.


