Pradeep Kumar

Full Stack Developer turned AI Engineer · System Builder · Problem Decomposer

I build production-grade AI systems that combine rules, workflows, and LLM intelligence, with a focus on correctness, scale, and real business outcomes. My primary skill, though, is figuring things out and making them work.

View Systems I’ve Built GitHub LinkedIn

How I Became a Systems Engineer

2018 – 2022

Phase 1: Engineering Foundation

B.Tech in Electrical & Electronics Engineering

Strong inclination toward:

  • Mathematics & Linear Algebra
  • Logic & Structure
  • System behavior over language syntax

“Even before AI, I was interested in how components interact, fail, and scale.”

2022 – 2024

Phase 2: Full Stack → Workflow Thinking

Manage Artworks

Built a Critical Path Method (CPM) engine handling:

  • Dependencies, Transitions, & Parallel activities
  • Custom dummy activities & Route optimization

Impact: Reduced workflow duration by 40% and storage usage by 30% (via deduplication).

Where my systems brain formed: modeling time, constraints, and graphs.

2024 – Present

Phase 3: AI Without Hype

Valgenesis

Building Hybrid Rule + LLM validation engines & deterministic workflows.

  • Built PDF intelligence pipelines: TOC extraction, chunking, vectorization
  • Built FastAPI-based, monetizable AI infrastructure
  • Parallelized processing with Celery

Focus: Accuracy (98%), Performance (<10s), Auditability.

What I Build

🧩

Intelligent Workflow Engines

CPM engines, Rule-based systems, Transition graphs, Parallel execution models.

📄

PDF & Document Intelligence

Layout analysis, Table detection, Section extraction, Deduplication & post-processing.

🧠

LLM-Powered Systems

RAG pipelines, Hybrid validation engines, Prompt ensembles, Semantic extraction.

⚙️

Production APIs

FastAPI microservices, Async & parallel processing, Secure metering & monetization.
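
To make the last card concrete, here is a minimal sketch of what I mean by a metered async API. The endpoint name, quota, and in-memory counter are illustrative stand-ins, not a production metering layer.

```python
# Minimal sketch: async FastAPI endpoint guarded by a hypothetical per-key usage
# counter. The in-memory dict stands in for real metering storage.
from collections import defaultdict

from fastapi import Depends, FastAPI, Header, HTTPException

app = FastAPI()
usage: dict[str, int] = defaultdict(int)

async def meter(x_api_key: str = Header(...)) -> str:
    """Count each call against the caller's key and reject over-quota callers."""
    usage[x_api_key] += 1
    if usage[x_api_key] > 1000:  # illustrative quota
        raise HTTPException(status_code=429, detail="Quota exceeded")
    return x_api_key

@app.post("/process")
async def process_document(payload: dict, api_key: str = Depends(meter)) -> dict:
    # Heavy work (parsing, validation, LLM calls) would be handed off to a worker
    # such as Celery so the request path stays fast; here we just acknowledge it.
    return {"status": "accepted", "caller": api_key, "bytes": len(str(payload))}
```

In practice the counter lives in shared storage (e.g. Redis or a database) so multiple API workers see the same quota state.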

Flagship Projects

Hybrid Compliance Validation Engine

The Problem

Manual regulatory validation is slow, ambiguous, and error-prone.

The Solution

30+ deterministic rule checks combined with LLM-based semantic validation and ensemble logic to resolve ambiguity.

Architecture

Input Docs → Rule Engine → Semantic Validator (LLM) → Decision Aggregator → Audit Logs
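
A rough illustration of that flow is below; the rule checks and the llm_validate call are placeholders, not the production logic.

```python
# Illustrative sketch of the rule -> LLM -> aggregator -> audit-log flow.
# run_rules and llm_validate are placeholders, not the production checks.
from dataclasses import dataclass

@dataclass
class Finding:
    check: str
    passed: bool
    source: str  # "rule" or "llm"

def run_rules(doc: str) -> list[Finding]:
    # Deterministic pre-checks run first (the real engine has 30+ of them).
    checks = {
        "has_signature": "signature" in doc.lower(),
        "has_revision_id": "rev" in doc.lower(),
    }
    return [Finding(name, ok, "rule") for name, ok in checks.items()]

def llm_validate(doc: str, check: str) -> bool:
    # Placeholder for the semantic LLM step (in practice a prompt ensemble
    # whose answers are aggregated to resolve ambiguity).
    return True

def validate(doc: str, ambiguous_checks: list[str]) -> dict:
    findings = run_rules(doc)
    findings += [Finding(c, llm_validate(doc, c), "llm") for c in ambiguous_checks]
    audit_log = [f"{f.source}:{f.check}={f.passed}" for f in findings]
    return {"compliant": all(f.passed for f in findings), "audit": audit_log}

print(validate("Rev 3, signed signature page", ["intent_matches_sop"]))
```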

Key Results

  • 98% accuracy in validation
  • Massive reduction in manual reviews
  • Production deployed

Generic RAG Pre-Processor for PDFs

PDF Intelligence Utility

The Problem

PDFs are messy. RAG pipelines fail without high-quality preprocessing.

The Solution

Intelligent chunking, TOC extraction, and vectorization pipeline with parallel processing via Celery.
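
In spirit, the fan-out looks something like this; it assumes a local Redis broker, and the extraction and embedding steps are stubbed out so only the parallelism pattern is shown.

```python
# Sketch of the per-TOC-section fan-out, assuming a local Redis broker.
# Extraction and embedding are stubbed; only the Celery parallelism is real.
from celery import Celery, group

app = Celery("pdf_preprocess",
             broker="redis://localhost:6379/0",
             backend="redis://localhost:6379/1")

def extract_section_text(doc_path: str, section_title: str) -> str:
    # Stub standing in for layout-aware PDF extraction.
    return f"text of '{section_title}' from {doc_path} " * 50

@app.task
def chunk_section(doc_path: str, section_title: str) -> dict:
    # One task per TOC section: extract its text, split into fixed-size chunks,
    # then embed and store (embedding omitted in this sketch).
    text = extract_section_text(doc_path, section_title)
    chunks = [text[i:i + 500] for i in range(0, len(text), 500)]
    return {"section": section_title, "chunks": len(chunks)}

def preprocess(doc_path: str, toc_titles: list[str]):
    # Fan out one Celery task per section so sections are processed in parallel.
    return group(chunk_section.s(doc_path, t) for t in toc_titles).apply_async()
```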

Performance

  • < 10 seconds per document
  • 70–90% manual effort reduction

Key Insight

Retrieval quality matters more than the LLM.

Critical Path Workflow Engine

Manage Artworks

The Problem

Standard workflow engines fail when dependencies become complex.

The Solution

Graph-based CPM engine supporting parallel activities, routes, custom dummy activities, and forward calculation logic.
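
A toy version of the forward pass, using Python's graphlib for the topological order; the durations and dependencies are made up, and the real engine layers transitions, dummy activities, and routing on top of this.

```python
# Minimal CPM forward-pass sketch over an activity graph; durations in days.
from graphlib import TopologicalSorter

durations = {"A": 3, "B": 2, "C": 4, "D": 1}
deps = {"A": set(), "B": {"A"}, "C": {"A"}, "D": {"B", "C"}}  # D waits on both branches

earliest_finish: dict[str, int] = {}
for node in TopologicalSorter(deps).static_order():
    earliest_start = max((earliest_finish[d] for d in deps[node]), default=0)
    earliest_finish[node] = earliest_start + durations[node]

print(earliest_finish)                 # {'A': 3, 'B': 5, 'C': 7, 'D': 8}
print(max(earliest_finish.values()))   # project duration along the critical path
```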

Impact

  • 40% reduction in workflow duration
  • Production usage

Logic & Mathematics applied to real-world time modeling.

How I Think About Systems

Why LLMs must be bounded by rules

Pure LLMs are non-deterministic. In compliance and high-stakes fields, 98% isn't enough without a safety net. My approach pairs strict deterministic pre-checks with LLM semantic reasoning only where necessary.

Workflows should be data-driven, not hardcoded

Hardcoding workflow steps creates technical debt. I build engines where the workflow itself is a graph data structure, allowing dynamic routing and easier maintenance.
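
A tiny illustration of the difference: the transitions below are plain data, so adding or rerouting a step means editing the table, not the engine. The states and events are made up for the example.

```python
# Workflow as data: the transition table is a dict, the engine is a few lines.
workflow = {
    "draft":     {"submit": "review"},
    "review":    {"approve": "published", "reject": "draft"},
    "published": {},
}

def advance(state: str, event: str) -> str:
    """Return the next state for an event, or raise if the transition isn't defined."""
    try:
        return workflow[state][event]
    except KeyError:
        raise ValueError(f"No transition '{event}' from '{state}'")

state = "draft"
for event in ("submit", "reject", "submit", "approve"):
    state = advance(state, event)
print(state)  # published
```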

Technical Arsenal

Languages & Backend

Python JavaScript SQL FastAPI Celery AsyncIO

AI Systems

RAG Architectures Hybrid Rule+LLM Embeddings Vector Search

Data & Logic

Graph Algorithms CPM Dependency Resolution Deduplication