
Vibe Coding a Private AI Financial Analyst with Python and Local LLMs



Image by Author

 

Introduction

 
Last month, I found myself staring at my bank statement, trying to figure out where my money was actually going. Spreadsheets felt cumbersome. Existing apps are black boxes, and the worst part is that they demand I upload my sensitive financial data to a cloud server. I wanted something different. I wanted an AI data analyst that could analyze my spending, spot unusual transactions, and give me clear insights, all while keeping my data 100% local. So, I built one.

What started as a weekend project turned into a deep dive into real-world data preprocessing, practical machine learning, and the power of local large language models (LLMs). In this article, I’ll walk you through how I created an AI-powered financial analysis app using Python with “vibe coding.” Along the way, you’ll learn many practical lessons that apply to any data science project, whether you’re analyzing sales logs, sensor data, or customer feedback.

By the end, you’ll understand:

  • How to build a robust data preprocessing pipeline that handles messy, real-world CSV data
  • How to choose and implement machine learning models when you have limited training data
  • How to design interactive visualizations that actually answer user questions
  • How to integrate a local LLM for generating natural-language insights without sacrificing privacy

The complete source code is available on GitHub. Feel free to fork it, extend it, or use it as a starting point for your own AI data analyst.

 

Fig. 1: App dashboard showing spending breakdown and AI insights | Image by Author

 

The Problem: Why I Built This

 
Most personal finance apps share a fundamental flaw: your data leaves your control. You upload bank statements to services that store, process, and potentially monetize your information. I wanted a tool that:

  1. Let me upload and analyze data instantly
  2. Processed everything locally, with no cloud and no data leaks
  3. Provided AI-powered insights, not just static charts

This project became my vehicle for learning several concepts every data scientist should know, like handling inconsistent data formats, choosing algorithms that work with small datasets, and building privacy-preserving AI features.

 

Project Architecture

 
Before diving into code, here’s the project structure showing how the pieces fit together:

 


project/
  ├── app.py              # Main Streamlit app
  ├── config.py           # Settings (categories, Ollama config)
  ├── preprocessing.py    # Auto-detect CSV formats, normalize data
  ├── ml_models.py        # Transaction classifier + Isolation Forest anomaly detector
  ├── visualizations.py   # Plotly charts (pie, bar, timeline, heatmap)
  ├── llm_integration.py  # Ollama streaming integration
  ├── requirements.txt    # Dependencies
  ├── README.md           # Documentation with "deep dive" sections
  └── sample_data/
      ├── sample_bank_statement.csv
      └── sample_bank_format_2.csv

 

We’ll look at building each layer step by step.
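As a quick taste of the settings layer, here is roughly what config.py holds, reconstructed from how the rest of the app uses it; the constant names are my own shorthand, not a verbatim copy of the repo:

# config.py -- a minimal sketch of the settings layer (names are illustrative)
OLLAMA_BASE_URL = "http://localhost:11434"  # Ollama's default local endpoint
OLLAMA_MODEL = "llama3.2"

# Keyword lists for the rule-based classifier (used in Step 2)
SPENDING_CATEGORIES = {
    "groceries": ["walmart", "costco", "whole foods", "kroger"],
    "dining": ["restaurant", "starbucks", "mcdonald", "doordash"],
    "transportation": ["uber", "lyft", "shell", "chevron", "gas"],
}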

 

Step 1: Building a Robust Data Preprocessing Pipeline

 
The first lesson I learned was that real-world data is messy. Different banks export CSVs in completely different formats. Chase Bank uses “Transaction Date” and “Amount.” Bank of America uses “Date,” “Payee,” and separate “Debit”/“Credit” columns. Moniepoint and OPay each have their own styles.

A preprocessing pipeline must handle these variations automatically.

 

// Auto-Detecting Column Mappings

I built a pattern-matching system that identifies columns regardless of naming conventions. Using regular expressions, we can map arbitrary column names to standard fields.

import re

COLUMN_PATTERNS = {
    "date": [r"date", r"trans.*date", r"posting.*date"],
    "description": [r"description", r"memo", r"payee", r"merchant"],
    "amount": [r"^amount$", r"transaction.*amount"],
    "debit": [r"debit", r"withdrawal", r"expense"],
    "credit": [r"credit", r"deposit", r"income"],
}

def detect_column_mapping(df):
    """Map each standard field to the first DataFrame column matching its patterns."""
    mapping = {}
    for field, patterns in COLUMN_PATTERNS.items():
        for col in df.columns:
            if any(re.search(pattern, col.lower()) for pattern in patterns):
                mapping[field] = col
                break  # first matching column wins
    return mapping

 

The key insight: design for variations, not specific formats. This approach works for any CSV that uses common financial terms.
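To see it in action, point the detector at one of the sample files; the mapping in the comment is illustrative, since the output depends on the CSV’s actual column names:

import pandas as pd

df = pd.read_csv("sample_data/sample_bank_format_2.csv")
print(detect_column_mapping(df))
# e.g. {'date': 'Date', 'description': 'Payee', 'debit': 'Debit', 'credit': 'Credit'}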

 

// Normalizing to a Standard Schema

Once columns are detected, we normalize everything into a consistent structure. For example, banks that split debits and credits need them combined into a single amount column (negative for expenses, positive for income):

if "debit" in mapping and "credit score" in mapping:
    debit = df[mapping["debit"]].apply(parse_amount).abs() * -1
    credit score = df[mapping["credit"]].apply(parse_amount).abs()
    normalized["amount"] = credit score + debit

 

Key takeaway: Normalize your data as early as possible. It simplifies every downstream operation: feature engineering, machine learning modeling, and visualization.
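The snippet above relies on a parse_amount helper. The repo has its own version; a minimal reconstruction that handles currency symbols, thousands separators, accounting-style negatives, and blanks might look like this:

import pandas as pd

def parse_amount(value):
    """Turn strings like '$1,234.56' or '(45.00)' into floats; blanks become 0.0."""
    if pd.isna(value):
        return 0.0
    text = str(value).strip().replace("$", "").replace(",", "")
    if text.startswith("(") and text.endswith(")"):  # accounting notation for negatives
        text = "-" + text[1:-1]
    try:
        return float(text)
    except ValueError:
        return 0.0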

 

Fig. 2: The preprocessing report shows what the pipeline detected, giving users transparency | Image by Author

 

Step 2: Choosing Machine Learning Models for Limited Data

 
The second major challenge is limited training data. Users upload their own statements, and there’s no massive labeled dataset to train a deep learning model on. We need algorithms that work well with small samples and can be augmented with simple rules.

 

// Transaction Classification: A Hybrid Approach

Instead of pure machine learning, I built a hybrid system:

  1. Rule-based matching for confident cases (e.g., keywords like “WALMART” → groceries)
  2. Pattern-based fallback for ambiguous transactions

SPENDING_CATEGORIES = {
    "groceries": ["walmart", "costco", "whole foods", "kroger"],
    "dining": ["restaurant", "starbucks", "mcdonald", "doordash"],
    "transportation": ["uber", "lyft", "shell", "chevron", "gas"],
    # ... more categories
}

def classify_transaction(description, amount):
    for category, keywords in SPENDING_CATEGORIES.items():
        if any(kw in description.lower() for kw in keywords):
            return category
    return "income" if amount > 0 else "other"

 

This approach works immediately without any training data, and it’s easy for users to understand and customize.
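Applying it to the normalized data from Step 1 is a single pass over the rows (column names follow the schema from the preprocessing step):

normalized["category"] = [
    classify_transaction(desc, amt)
    for desc, amt in zip(normalized["description"], normalized["amount"])
]
print(normalized["category"].value_counts())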

 

// Anomaly Detection: Why Isolation Forest?

For detecting unusual spending, I needed an algorithm that could:

  1. Work with small datasets (unlike deep learning)
  2. Make no assumptions about the data distribution (unlike statistical methods like the Z-score alone)
  3. Provide fast predictions for an interactive UI

Isolation Forest from scikit-learn ticked all the boxes. It isolates anomalies by randomly partitioning the data. Because anomalies are few and different, they require fewer splits to isolate.

from sklearn.ensemble import IsolationForest

detector = IsolationForest(
    contamination=0.05,  # Expect ~5% anomalies
    random_state=42
)
detector.fit(features)  # `features`: 2-D numeric array derived from the transactions
predictions = detector.predict(features)  # -1 = anomaly, 1 = normal

 

I also combined this with simple Z-score checks to catch obvious outliers. A Z-score describes the position of a raw value in terms of its distance from the mean, measured in standard deviations:
\[
z = \frac{x - \mu}{\sigma}
\]
The combined approach catches more anomalies than either method alone.
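Here is a minimal sketch of that combination; the feature set (amount plus day of month) is illustrative, and the repo’s ml_models.py may engineer different features:

import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

def flag_anomalies(df, z_threshold=3.0):
    """Flag a row if Isolation Forest OR a plain Z-score marks it as unusual."""
    features = pd.DataFrame({
        "amount": df["amount"],
        "day_of_month": pd.to_datetime(df["date"]).dt.day,
    })
    forest = IsolationForest(contamination=0.05, random_state=42)
    forest_flags = forest.fit_predict(features) == -1  # -1 means anomaly

    amounts = df["amount"].to_numpy(dtype=float)
    z_scores = (amounts - amounts.mean()) / amounts.std()  # z = (x - mean) / std
    z_flags = np.abs(z_scores) > z_threshold

    return forest_flags | z_flags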

Key takeaway: Sometimes simple, well-chosen algorithms outperform complex ones, especially when you have limited data.

 

Fig. 3: The anomaly detector flags unusual transactions, which stand out in the timeline | Image by Author

 

Step 3: Designing Visualizations That Answer Questions

 
Visualizations should answer questions, not just display data. I used Plotly for interactive charts because it lets users explore the data themselves. Here are the design principles I followed:

  1. Consistent color coding: red for expenses, green for income
  2. Context through comparison: show income vs. expenses side by side
  3. Progressive disclosure: show a summary first, then let users drill down

For example, the spending breakdown uses a donut chart with a hole in the middle for a cleaner look:

import plotly.express as px

fig = px.pie(
    category_totals,
    values="Amount",
    names="Category",
    color="Category",  # needed for color_discrete_map to take effect
    hole=0.4,
    color_discrete_map=CATEGORY_COLORS
)

 

Streamlit makes it easy to add these charts with st.plotly_chart() and build a responsive dashboard.
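In the app, each figure is dropped onto the page with a couple of lines (the subheader text is illustrative):

import streamlit as st

st.subheader("Spending by Category")
st.plotly_chart(fig, use_container_width=True)  # stretch the chart to the column width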

 

Fig. 4: Multiple chart types give users different perspectives on the same data | Image by Author

 

Step 4: Integrating a Local Large Language Model for Natural Language Insights

 
The final piece was generating human-readable insights. I chose to integrate Ollama, a tool for running LLMs locally. Why local instead of calling OpenAI or Claude?

  1. Privacy: Bank data never leaves the machine
  2. Cost: Unlimited queries, zero API fees
  3. Speed: No network latency (though generation still takes a few seconds)

 

// Streaming for a Better User Experience

LLMs can take several seconds to generate a response. Streaming shows tokens as they arrive, which makes the wait feel shorter. Here is a simple implementation using requests with streaming:

import requests
import json

def generate(self, prompt):
    """Yield response tokens from Ollama as they arrive (method on the LLM client class)."""
    response = requests.post(
        f"{self.base_url}/api/generate",
        json={"model": "llama3.2", "prompt": prompt, "stream": True},
        stream=True
    )
    for line in response.iter_lines():
        if line:
            data = json.loads(line)  # each line is a JSON object carrying one token
            yield data.get("response", "")

 

In Streamlit, you can display the stream with st.write_stream():

st.write_stream(llm.get_overall_insights(df))

 

// Prompt Engineering for Financial Data

The key to useful LLM output is a structured prompt that includes actual data. For example:

prompt = f"""Analyze this financial summary:
- Total Income: ${income:,.2f}
- Total Expenses: ${expenses:,.2f}
- Top Category: {top_category}
- Largest Anomaly: {anomaly_desc}

Provide 2-3 actionable recommendations based on this data."""

 

This gives the model concrete numbers to work with, leading to more relevant insights.
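The summary values fall straight out of the normalized data from Step 1; a sketch of how they might be computed (column names per the earlier schema, anomaly_desc omitted):

income = normalized.loc[normalized["amount"] > 0, "amount"].sum()
expenses = -normalized.loc[normalized["amount"] < 0, "amount"].sum()
top_category = (
    normalized[normalized["amount"] < 0]
    .groupby("category")["amount"].sum()
    .idxmin()  # most negative total = largest spending category
)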

 

Fig. 5: The upload interface is simple; choose a CSV and let the AI do the rest | Image by Author

 

// Running the Application

Getting started is simple. You’ll need Python installed; then run:

pip install -r requirements.txt

# Optional, for AI insights
ollama pull llama3.2

streamlit run app.py

 

Upload any bank CSV (the app auto-detects the format), and within seconds you will see a dashboard with categorized transactions, anomalies, and AI-generated insights.

 

Conclusion

 
This project taught me that building something functional is just the beginning. The real learning happened when I asked why each piece works:

  • Why auto-detect columns? Because real-world data doesn’t follow your schema. Building a flexible pipeline saves hours of manual cleanup.
  • Why Isolation Forest? Because small datasets need algorithms designed for them. You don’t always need deep learning.
  • Why local LLMs? Because privacy and cost matter in production. Running models locally is now practical and powerful.

These lessons apply far beyond personal finance, whether you’re analyzing sales data, server logs, or scientific measurements. The same principles of robust preprocessing, pragmatic modeling, and privacy-aware AI will serve you in any data project.

The complete source code is available on GitHub. Fork it, extend it, and make it your own. If you build something cool with it, I’d love to hear about it.

 


Shittu Olumide is a software engineer and technical writer passionate about leveraging cutting-edge technologies to craft compelling narratives, with a keen eye for detail and a knack for simplifying complex concepts. You can also find Shittu on Twitter.


