Chase NextGen LLC  —  2026 Edition

AI & Generative AI
Professional Certification Program

6-week intensive curriculum + 3 advanced masterclasses. Industry-standard for 2026, aligned with the OWASP LLM Top 10, NIST AI RMF & EU AI Act.

6 Core Weeks
12 Live Sessions
25+ Hours of Content
3 Masterclasses
Live Sessions: Thu & Sun
Lead Instructor: Chief AI Enterprise Solution Architect, AI Scientist & Researcher
Schedule: Thu 7:30–9:00 PM EST  |  Sun 4:00–6:30 PM EST
Tech Stack: Python, ML, LLMs, Orchestration, LangChain, LangGraph, Vector Databases, OpenAI, Claude, Gemini, Hugging Face, n8n, MCP, AWS, Azure, GCP
Certification: AI & Generative AI Professional Certification Program

Who This Program Is For

7 professional profiles — from career-changers & enthusiasts to ML engineers & enterprise architects.

  • Career Transitioners: Pivoting from any field into AI/ML engineering as a new profession.
    Beginner
  • AI Enthusiasts: Experimenting with AI tools for personal projects and workflow automation — no engineering background required.
    Beginner
  • Software Developers: Integrating LLMs and AI agents into production-grade applications.
    Intermediate
  • Analysts & PMs: Using AI tools to accelerate analysis, automate reporting, and lead AI product decisions.
    Beginner–Mid
  • Consultants & Architects: Designing, evaluating, and presenting enterprise AI solutions to clients.
    Advanced
  • Data Scientists & ML Engineers: Bridging classical ML and modern LLM workflows in production environments.
    Intermediate–Adv
  • Business Leaders & Executives: Leading AI adoption strategy, managing AI initiatives, and driving org-wide transformation.
    Non-technical

Prerequisites

Requirements vary by track — beginner, intermediate, and advanced paths each have their own entry points.

Beginner Tracks — Career Transitioners, AI Enthusiasts, Business Leaders
  • No prior AI or ML experience required
  • Basic comfort with computers and spreadsheets
  • OpenAI API account with $5–10 credit — setup instructions provided in Week 1
    Commitment
    3–5 hrs/week
Intermediate Tracks — Analysts & PMs, Software Developers
  • Basic Python familiarity (variables, loops, functions) — dev setup covered in Week 1
  • Comfort with data concepts, APIs, and command-line basics
  • OpenAI API account with $5–10 credit — setup instructions provided
  • For Software Developers: familiarity with at least one backend language (Python preferred)
    Commitment
    5–8 hrs/week
Advanced Tracks — Data Scientists, ML Engineers, Consultants & Architects
  • Solid Python proficiency — OOP, data structures, and libraries (NumPy, Pandas)
  • Familiarity with cloud platforms (AWS, Azure, or GCP) and API integration
  • For ML Engineers: prior experience with model training, evaluation, and deployment
  • For Consultants & Architects: experience with enterprise systems design or technical advisory
  • OpenAI API account with $10–20 credit — additional cloud credits may apply
    Commitment
    8–12 hrs/week
Core Curriculum

6-Week Learning Journey

Each phase below covers two weeks, with full session details, hands-on labs & deliverables.

Phase 1 — AI Foundations & ML

Weeks 1–2 • Sessions 1–4 • From Zero to First Model • Build your dev environment, train ML models, and ship a working text classifier.

PythonScikit-learnNLPNLTKGradient BoostingWord2Vec
Week 1 — AI Foundations, Algorithms & Dev Environment
Session 1 (2 hrs) — History of AI, Gen AI Landscape & ML Algorithms
  • AI history → Modern AI boom (2017–2026)
  • Gen AI vs. Discriminative AI — how they differ
  • ML algorithm taxonomy: supervised, unsupervised, RL, deep learning
  • Classification vs. regression vs. clustering vs. decision-making
  • AI lifecycle: Data → Model → Deployment & Ops
  • Dev environment setup: VS Code, Python 3.11+, Jupyter, GitHub Copilot
  • HANDS-ON: Run your first Python ML script with scikit-learn
Session 2 (2 hrs) — Deep Learning, Neural Networks & Vector Embeddings
  • ANN, CNN, RNN architecture with real use cases
  • Transformer models: self-attention, positional encoding, encoder-decoder
  • Why transformers power GPT, Claude, Gemini
  • Vector embeddings & multi-dimensional numerical space
  • HANDS-ON: Train Word2Vec; visualize 50D embeddings in 2D with PCA
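The Session 2 lab's projection step can be sketched in a few lines. Stand-in vectors are used below (a real run would train Word2Vec first, e.g. with gensim; the 50-dim vectors here are random placeholders):

```python
import numpy as np
from sklearn.decomposition import PCA

# Stand-in for trained Word2Vec vectors: 20 "words" in a 50-dim space.
# In the lab these come from a trained model; here they are random placeholders.
rng = np.random.default_rng(42)
embeddings = rng.normal(size=(20, 50))

# Project 50 dimensions down to 2 so the embeddings can be scatter-plotted.
pca = PCA(n_components=2)
coords_2d = pca.fit_transform(embeddings)
print(coords_2d.shape)  # (20, 2)
```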
Week 1 Deliverable — Working Jupyter Notebook: word embedding visualization + dev environment checklist.
Week 2 — Applied ML, NLP & Sentiment Analysis
Session 3 (2 hrs) — NLP Pipelines & Sentiment Analysis at Scale
  • NLP pipeline: tokenization → stop-word removal → lemmatization → vectorization
  • VADER sentiment analysis: compound scores -1 to +1
  • VADER vs. ML vs. LLMs for sentiment — when to use each
  • Enterprise apps: customer feedback, brand monitoring, financial news
  • HANDS-ON: Sentiment pipeline on 8,353 NFL draft comments
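The lab itself uses NLTK's SentimentIntensityAnalyzer; the toy scorer below only illustrates the mechanism — summed word valences squashed into [−1, +1] with VADER-style normalization (the mini-lexicon values are illustrative, not VADER's real lexicon):

```python
import math

# Toy word-valence lexicon; real VADER ships a curated lexicon plus
# heuristics for negation, punctuation, and capitalization.
LEXICON = {"great": 3.1, "good": 1.9, "bad": -2.5, "terrible": -3.4}

def compound(text: str) -> float:
    """Sum word valences, then squash into [-1, +1] (VADER-style alpha=15)."""
    total = sum(LEXICON.get(w.strip(".,!?").lower(), 0.0) for w in text.split())
    return total / math.sqrt(total * total + 15)

print(compound("a great pick"))    # positive, ~0.62
print(compound("terrible trade"))  # negative
```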
Session 4 (2 hrs) — Address Classification: Regex, Gradient Boosting & Model Evaluation
  • Rule-based regex approach — accuracy ceiling 74.32%
  • Gradient Boosting Classifier — 97.70% accuracy
  • Model evaluation: Accuracy, Precision, Recall, F1-Score
  • Saves 222,000+ manual corrections per million addresses
  • HANDS-ON: Build and evaluate a full text classification pipeline
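The Session 4 lab reduces to a scikit-learn pipeline. A minimal sketch on a toy dataset (the course uses a real address corpus; the samples and labels below are invented):

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import GradientBoostingClassifier

# Tiny illustrative dataset — the real lab uses a large address corpus.
X = ["123 Main St", "PO Box 45", "456 Oak Ave", "PO Box 9",
     "789 Pine Rd", "PO Box 77", "12 Elm St", "PO Box 3"]
y = ["street", "po_box", "street", "po_box",
     "street", "po_box", "street", "po_box"]

# Character n-grams capture patterns like "po b" that regex rules hand-code.
pipe = Pipeline([
    ("tfidf", TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 3))),
    ("clf", GradientBoostingClassifier(n_estimators=50, random_state=0)),
])
pipe.fit(X, y)
preds = pipe.predict(["PO Box 100", "55 Maple St"])
```

In the real pipeline you would hold out a test split and report accuracy, precision, recall, and F1 rather than predicting on two strings.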
Week 2 Deliverable (Mini Capstone) — End-to-end text classification pipeline with evaluation report. Must exceed 80% accuracy.

Phase 2 — LLMs, APIs & RAG

Weeks 3–4 • Sessions 5–8 • Make LLMs Know What Your Business Knows • Call GPT/Claude/Gemini APIs, master prompt engineering, and build a production RAG system.

OpenAIHugging FaceChromaDBRAGLangChainGradio
Week 3 — LLM Architecture, APIs & Prompt Engineering
Session 5 (2 hrs) — LLM Architecture, API Integration & Model Selection
  • LLM internals: tokenization, context windows, attention heads, parameter scale
  • Proprietary vs. open-source models — cost, privacy, control trade-offs
  • API integration: GPT-4o Mini, Gemini 1.5 Flash, Claude Haiku
  • Model selection: use case → rate limits → context window → cost per token
  • Secure API key management: .env locally; AWS Secrets Manager in prod
  • HANDS-ON: Call 3 LLM APIs; compare latency, cost & output quality
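The cost leg of the comparison is simple token arithmetic. A sketch with placeholder prices — these are assumed values for illustration, not current rate cards:

```python
# (input, output) USD per million tokens — ASSUMED placeholder prices;
# always check each provider's current rate card.
PRICES_PER_MTOK = {
    "gpt-4o-mini":      (0.15, 0.60),
    "gemini-1.5-flash": (0.075, 0.30),
    "claude-haiku":     (0.25, 1.25),
}

def call_cost(model: str, tokens_in: int, tokens_out: int) -> float:
    """Cost of one call in USD given token counts and per-MTok prices."""
    p_in, p_out = PRICES_PER_MTOK[model]
    return (tokens_in * p_in + tokens_out * p_out) / 1_000_000

# A 2,000-token prompt with a 500-token answer, across all three models:
costs = {m: call_cost(m, 2000, 500) for m in PRICES_PER_MTOK}
cheapest = min(costs, key=costs.get)
```

The same loop extended with latency timers around each API call gives the full latency/cost/quality comparison the lab asks for.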
Session 6 (2 hrs) — Advanced Prompt Engineering & Open-Source LLMs on Hugging Face
  • Prompt patterns: zero-shot, few-shot, chain-of-thought, tree-of-thought, ReAct
  • System prompts, personas, and guardrail instructions
  • HR job matching with all-MiniLM-L6-v2 (semantic similarity)
  • Support ticket routing with facebook/bart-large-mnli (zero-shot classification)
  • Running LLMs locally with Ollama
  • HANDS-ON: Build a persona-based LLM chatbot with structured JSON output
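Structured JSON output is only as reliable as the parsing around it. One defensive pattern — a persona system prompt plus a parser that tolerates the code fences models sometimes emit (the prompt wording and schema are illustrative):

```python
import json

# Illustrative persona + schema; the lab defines its own.
SYSTEM_PROMPT = (
    "You are 'Ava', a concise HR assistant. "
    'Reply ONLY with JSON: {"answer": str, "confidence": float}.'
)

def parse_reply(raw: str) -> dict:
    """Parse a model's JSON reply, tolerating stray ```json fences."""
    cleaned = raw.strip().removeprefix("```json").removesuffix("```").strip()
    try:
        return json.loads(cleaned)
    except json.JSONDecodeError:
        # Graceful fallback so the app never crashes on malformed output.
        return {"answer": raw, "confidence": 0.0}

reply = parse_reply('```json\n{"answer": "PTO resets Jan 1.", "confidence": 0.9}\n```')
```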
Week 3 Deliverable — Domain-specific LLM application using ≥1 Hugging Face model + 1 proprietary API.
Week 4 — Retrieval-Augmented Generation (RAG)
Session 7 (2 hrs) — RAG Architecture, Vector Databases & Financial Intelligence Demo
  • Why RAG? Solving LLM limitations: knowledge cutoff, private data, hallucinations
  • RAG vs. fine-tuning decision framework
  • Indexing: PDF → chunking → OpenAI embeddings → ChromaDB
  • "Lost in the Middle" problem and mitigation strategies
  • Vector DB options: Pinecone, Qdrant, Weaviate, ChromaDB, pgvector
  • HANDS-ON: Build FinRAG — financial chatbot over earnings reports & SEC filings
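The indexing pipeline above begins with chunking. A minimal fixed-size character chunker with overlap (sizes are illustrative; production pipelines usually split on sentence or section boundaries instead):

```python
def chunk_text(text: str, size: int = 500, overlap: int = 100) -> list[str]:
    """Split text into fixed-size character chunks with overlap, so that
    content near a chunk boundary still appears whole in one chunk."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks

# 1,200 characters → 3 overlapping chunks (0-500, 400-900, 800-1200).
chunks = chunk_text("A" * 1200, size=500, overlap=100)
```

Each chunk then gets embedded and stored with its metadata in the vector database of choice.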
Session 8 (2 hrs) — Advanced RAG, Multi-PDF Knowledge Base & Production Deployment
  • Advanced RAG: multi-query, self-reflective, hierarchical, agentic RAG
  • RAGAS evaluation: faithfulness, answer relevance, context precision/recall
  • LLM-as-judge automated evaluation (1–5 scale per metric)
  • SQLite audit logging: query history, EU AI Act compliance trail
  • Production deployment: Gradio web UI + Flask REST API
  • HANDS-ON: Deploy multi-PDF knowledge base with evaluation dashboard & audit trail
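The audit-trail piece is small enough to sketch with the standard library (the schema and fields below are illustrative, not a compliance template):

```python
import datetime
import sqlite3

# Minimal audit-log table of the kind the compliance trail needs.
conn = sqlite3.connect(":memory:")  # the lab uses a file-backed database
conn.execute("""CREATE TABLE audit_log (
    ts TEXT NOT NULL, user_query TEXT NOT NULL,
    answer TEXT NOT NULL, sources TEXT NOT NULL)""")

def log_query(query: str, answer: str, sources: list[str]) -> None:
    """Record one RAG interaction with a UTC timestamp and its citations."""
    conn.execute(
        "INSERT INTO audit_log VALUES (?, ?, ?, ?)",
        (datetime.datetime.now(datetime.timezone.utc).isoformat(),
         query, answer, ";".join(sources)),
    )
    conn.commit()

log_query("Q3 revenue?", "Revenue was $2.1B.", ["10-Q p.4"])
rows = conn.execute("SELECT user_query, sources FROM audit_log").fetchall()
```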
Week 4 Mid-Program Project — Production-ready RAG system with Gradio UI, source citations, evaluation scores & SQLite audit log.

Phase 3 — Agents, MCP & Cloud AI

Weeks 5–6 • Sessions 9–12 • Build & Ship Production AI Systems • Autonomous agents, Model Context Protocol, n8n automation, cloud AI stacks & capstone.

LangGraphMCPn8nAWS BedrockAzure AIVertex AI
Week 5 — Agentic AI, LangGraph & MCP
Session 9 (2 hrs) — LangChain, LangGraph & Building Enterprise AI Agents
  • Agentic AI: reasoning engines, tool calling, state management, cyclic pipelines
  • LangChain: LLM wrappers, prompt templates, chains, output parsers
  • LangGraph: directed graphs, nodes, edges, Pydantic state objects
  • Single-agent vs. multi-agent architecture
  • HANDS-ON: Build SmartHire — AI talent recruitment agent (resume screening → email)
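Before the LangGraph syntax, the underlying pattern is small: nodes read and update a shared state object, edges pick the next node. A dependency-free sketch of that idea (the function names and scoring rule are invented for illustration):

```python
# Each node takes the shared state dict, mutates it, and returns it —
# a toy version of the node/edge/state pattern LangGraph formalizes.
def screen_resume(state: dict) -> dict:
    state["score"] = 0.87 if "python" in state["resume"].lower() else 0.2
    return state

def draft_email(state: dict) -> dict:
    verdict = "interview" if state["score"] >= 0.5 else "reject"
    state["email"] = f"Decision: {verdict}"
    return state

NODES = {"screen": screen_resume, "email": draft_email}
EDGES = {"screen": "email", "email": None}  # linear here; cycles also allowed

def run(state: dict, entry: str = "screen") -> dict:
    node = entry
    while node is not None:
        state = NODES[node](state)
        node = EDGES[node]
    return state

final = run({"resume": "Senior Python developer, 6 yrs"})
```

LangGraph adds typed (Pydantic) state, conditional edges, and persistence on top of exactly this loop.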
Session 10 (2 hrs) — Model Context Protocol (MCP): Connecting AI to Enterprise Systems
  • What is MCP? Open standard for structured, secure AI-to-enterprise-data connectivity
  • MCP architecture: Host → Bridge → Servers (databases, APIs, tools)
  • EU AI Act alignment: audit trails, human-in-the-loop, separation of concerns
  • Connecting MCP to PostgreSQL, SharePoint & custom internal APIs
  • HANDS-ON: Build enterprise AI assistant with FastAPI MCP server + GPT-4o Mini + Gradio
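Real MCP servers speak JSON-RPC over stdio or HTTP; the toy below only shows the shape the session builds on — a structured request validated against registered tools and answered with structured JSON (the tool name and payload are invented):

```python
import json

# Registered tools the "server" exposes; real servers wrap databases,
# SharePoint, or internal APIs behind the same request/response shape.
TOOLS = {
    "get_employee": lambda args: {"name": "A. Rivera", "dept": args["dept"]},
}

def handle_request(raw: str) -> str:
    """Validate a structured tool request and return a structured reply."""
    req = json.loads(raw)
    tool = TOOLS.get(req.get("tool"))
    if tool is None:
        return json.dumps({"error": f"unknown tool {req.get('tool')!r}"})
    return json.dumps({"result": tool(req.get("args", {}))})

resp = json.loads(handle_request('{"tool": "get_employee", "args": {"dept": "HR"}}'))
```

The structured envelope is what makes every tool call loggable — the hook for the audit-trail and human-in-the-loop requirements above.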
Week 5 Deliverable — Agentic AI app connecting to ≥1 real data source via MCP or LangGraph tool use, with architecture diagram.
Week 6 — Cloud AI, n8n Automation & Capstone
Session 11 (2 hrs) — n8n Automation, AI System Design, Cloud AI Stack & MLOps
  • n8n: triggers, nodes, 800+ integrations; AI Agent nodes with OpenAI/Claude/Gemini
  • AWS AI Stack: Bedrock, SageMaker, Rekognition, Amazon Q
  • Azure AI Stack: Azure OpenAI Service, AI Foundry, Cognitive Services
  • GCP AI Stack: Vertex AI, Gemini API, AutoML, BigQuery ML
  • MLOps: CI/CD for models, experiment tracking, drift monitoring
  • HANDS-ON: Build 3-node AI workflow — news fetch → LLM summarize → auto-distribute
Session 12 (2 hrs) — Capstone Presentations, Mock AI Technical Interview & Career Pathways
  • Student capstone presentations: architecture + live demo + lessons learned
  • Portfolio-building: GitHub standards, LinkedIn positioning
  • Interview mastery: AI system design prompts + STAR method
  • AI Engineer career paths 2026: roles, compensation bands, required skills
  • LIVE MOCK INTERVIEW: Practice system design with peer & instructor feedback
Final Capstone Deliverable — End-to-end AI app combining ≥3 program concepts, deployed on cloud. Required: GitHub repo, architecture diagram, 5-min demo, 1-page business impact summary, cloud ADR.

Assessment & Certification

Clear, measurable criteria across 4 assessment components. Three certification tiers from Core Certificate to AI Engineering Professional.

Core Certificate  |  Advanced Practitioner  |  AI Engineering Professional
Assessment Components
  • 30% — Weekly Hands-On Labs: Jupyter notebooks per session, graded on functionality & completeness
  • 20% — Mid-Program Project (Week 4 RAG): graded on functionality, RAGAS metrics & documentation
  • 30% — Final Capstone Presentation: live demo + architecture diagram + business impact summary
  • 20% — Masterclass Deliverables: required for Advanced Practitioner and AI Engineering Professional tiers
Certification Requirements
  • Complete 6-week core curriculum with 80%+ attendance
  • Submit at least 1 project deliverable
  • Core Certificate — complete the 6-week curriculum
  • Advanced Practitioner — Core + MC1 (Fine-Tuning) or MC2 (Responsible AI)
  • AI Engineering Professional — Core + all 3 Masterclasses + Capstone
Advanced Specializations

Advanced Masterclasses

Full-day intensive specializations, recommended after completing Week 3. Each masterclass is structured as 2-hour sessions; the complete agendas follow below.

Masterclass 1 — LLM Fine-Tuning with LoRA & QLoRA

Full-day intensive • Theme: Adapt Foundation Models to Your Domain • From theory to a fully evaluated, production-deployed fine-tuned model.

LoRAQLoRAQwen 2.5PEFTBERTScoreHugging Face Hub
  • Recommended after completing Week 3+
  • Requires GPU access (Google Colab Pro or local GPU)
  • Based on a real insurance industry use case
Session 1 — LoRA & QLoRA Theory + Data Preparation
  • Fine-tuning decision matrix: prompt engineering vs. LoRA/PEFT vs. full fine-tuning
  • LoRA mechanics: low-rank adapter matrices A & B — 0.16% of parameters vs. full fine-tuning
  • QLoRA: LoRA + 4-bit quantization — fine-tune 7B+ models on a single consumer GPU
  • Label masking: prompt tokens → -100 (ignored); response tokens → actual IDs
  • Data quality requirements, minimum dataset size, instruction-following format spec
  • Insurance use case: emails + call transcripts + CRM notes → concise summaries
  • HANDS-ON: Prepare fine-tuning dataset from multi-source customer communication data
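Two of the bullets above are plain arithmetic, sketched here. The dimensions and rank are illustrative, and the trainable fraction of a whole model depends on which modules are targeted (which is why the course's 0.16% figure differs from the per-matrix number below):

```python
# (1) LoRA trainable-parameter fraction for one weight matrix:
#     adapters A (d×r) and B (r×k) replace updates to a d×k weight,
#     so the per-matrix fraction is r(d+k) / (d·k).
def lora_fraction(d: int, k: int, r: int) -> float:
    return r * (d + k) / (d * k)

frac = lora_fraction(4096, 4096, 8)  # rank-8 adapter on a 4096×4096 projection

# (2) Label masking: the loss is computed only on response tokens, so
#     prompt positions get the ignore index -100 and response positions
#     keep their actual token IDs.
def build_labels(prompt_ids: list[int], response_ids: list[int]) -> list[int]:
    return [-100] * len(prompt_ids) + list(response_ids)

labels = build_labels([11, 12, 13], [21, 22])  # [-100, -100, -100, 21, 22]
```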
Session 2 — Fine-Tuning Execution, Evaluation & Production Deployment
  • Configure LoRA hyperparameters: rank, alpha, target modules, LR, batch size
  • Fine-tune Qwen 2.5 (0.5B) on domain-specific insurance summarization task
  • Training monitoring: loss curves, gradient norms, overfitting detection
  • BERTScore evaluation: precision, recall, F1 — F1 > 0.9 is near human-level quality
  • Merge LoRA adapters back into base model weights for production
  • Hosting: Hugging Face Hub, AWS SageMaker, or self-hosted inference
  • HIPAA & GDPR compliance for fine-tuning on enterprise data
  • HANDS-ON: Full fine-tuning run + before/after quality comparison with BERTScore
MC1 Deliverable — Fine-tuned LLM on a domain of your choice (legal, medical, finance, HR) with a BERTScore evaluation report comparing base vs. fine-tuned outputs.

Masterclass 2 — Responsible AI, LLM Security & Governance

Full-day intensive • Theme: Build AI That Is Safe, Fair & Compliant • Threat landscape, guardrail implementation, and compliance frameworks.

OWASP LLM Top 10NIST AI RMFEU AI ActLLM-GuardRed-Teaming
  • Mandatory knowledge for any enterprise AI deployment
  • Aligned with OWASP LLM Top 10, NIST AI RMF & EU AI Act 2026
Session 1 — LLM Threat Landscape & Attack Scenarios
  • OWASP LLM Top 10 (2026): LLM01 Prompt Injection → LLM10 Model Theft
  • Prompt injection: direct, indirect, jailbreaking, multi-turn bypass techniques
  • Real-world attacks across healthcare, finance & e-commerce with live examples
  • Agentic AI & Excessive Agency: least-privilege, scoping tool permissions
  • Data poisoning, model inversion & membership inference attacks
  • HANDS-ON: Red-team a sample LLM application — find & document 3 attack surfaces
Session 2 — Guardrail Architecture & Compliance Frameworks
  • 4-layer defense-in-depth: input filtering → content validation → output review → monitoring
  • Rule-based guardrails: regex, topic blocklists, PII detection & redaction
  • ML-based guardrails: Detoxify classifier for real-time toxic content detection
  • LLM-Guard framework: scanners, validators & shields
  • NIST AI RMF: Govern → Map → Measure → Manage cycle
  • EU AI Act 2026: high-risk classification, conformity assessments, audit trail requirements
  • Human-in-the-loop design: when approval is required before any agent action
  • HANDS-ON: Implement 4-layer guardrail system on Llama 3 with compliance audit logging
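Two of the four layers — rule-based input blocking and PII redaction — fit in a short sketch, with each trigger logged for the audit trail (the blocklist and regex below are illustrative, not production-grade):

```python
import re

# Illustrative rules only; production systems use curated blocklists
# and dedicated PII detectors, not a single regex.
BLOCKLIST = {"ssn", "credit card"}
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

audit: list[str] = []  # stand-in for the compliance audit log

def input_filter(text: str) -> str:
    """Layer 1: refuse inputs that touch blocked topics."""
    if any(term in text.lower() for term in BLOCKLIST):
        audit.append("input_blocked")
        raise ValueError("blocked topic")
    return text

def redact_pii(text: str) -> str:
    """Layer 2: redact detectable PII before it reaches the model."""
    redacted = EMAIL_RE.sub("[EMAIL]", text)
    if redacted != text:
        audit.append("pii_redacted")
    return redacted

def guarded(text: str) -> str:
    return redact_pii(input_filter(text))

safe = guarded("Contact me at jane@example.com about the Q3 report")
```

The remaining layers — ML-based output review (e.g. Detoxify) and monitoring — wrap the model call itself in the same pass-through-or-block style.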
MC2 Deliverable — Security assessment report: map attack surfaces to the OWASP LLM Top 10, implement ≥2 guardrail layers, document NIST AI RMF and EU AI Act compliance posture.

Masterclass 3 — AI Dev Lifecycle, Cloud AI & Agentic Desktop

Full-day intensive • From Problem Definition to Production Rollout • AI Dev Lifecycle, AWS / Azure / GCP deep-dive, and Claude CoWork agentic AI.

AWS BedrockAzure AI FoundryVertex AIClaude CoWorkAI Dev Lifecycle
Session 1 — AI Development Lifecycle & Pre-Development Checklist
  • Automation — Eliminate repetitive tasks; reduce overhead by 40–70%
  • Intelligence — Unlock insights invisible to traditional systems
  • Scale — Serve millions with consistent quality and zero fatigue
  • Pre-Dev Checklist: Problem Statement → Stakeholders → Data Availability → Feasibility → Success Criteria → Ethical Review
  • HANDS-ON: Define a complete AI project pre-dev checklist for your organization
Session 2 — Cloud AI Services: AWS, Azure & GCP Deep-Dive
  • AWS: Amazon Bedrock (Model-as-a-Service), SageMaker AI, Amazon Q — Models: Nova Pro, Claude 4.6, Llama 4, Mistral Large 3
  • Azure: Azure AI Foundry, Azure OpenAI Service, Azure AI Search (vector DB for RAG) — Models: GPT-5.4, Claude 4.6, Phi-4, DeepSeek-R1
  • GCP: Vertex AI Platform, Gemini API (multimodal), BigQuery ML, Vision AI — Models: Gemini 3.1 Pro, Claude 4.6 Sonnet, Llama 4
  • Selection guide: when to choose each platform based on existing infra, compliance & use case
  • HANDS-ON: Deploy the same RAG use case on AWS Bedrock vs. Azure OpenAI — compare cost, latency & quality
Session 3 — Claude CoWork: Agentic AI for Knowledge Work
  • Agentic desktop tool built around the outcome, not the prompt — launched Jan 2026
  • Available for Pro, Max, Team & Enterprise subscribers; runs inside Claude Desktop (macOS & Windows)
  • Same engine as Claude Code — no terminal needed; multi-step tasks executed end-to-end
  • File System Access: reads, edits & creates files in folders you grant access to
  • HANDS-ON: Use Claude CoWork to autonomously generate a technical report from raw data
MC3 Deliverable — Master the full AI application lifecycle: navigate enterprise cloud platforms and deploy autonomous AI desktop tooling end-to-end.

Supplemental Resources & Stack

Curated frameworks, APIs, evaluation tools & research papers used throughout the program.

LangChainRAGASLLM-GuardBERTScoreOWASPNIST AI RMF
Frameworks & Platforms
  • LangChain — RAG, agents & chain orchestration
  • LangGraph — Stateful multi-agent graph workflows
  • Hugging Face — Open-source models, datasets & spaces
  • Ollama — Run LLMs locally without API costs
  • n8n — Visual workflow automation with AI agent nodes
  • ChromaDB — Embedded vector database for RAG
APIs & Model Providers
  • OpenAI Platform — GPT-4o Mini, embeddings, fine-tuning
  • Anthropic Console — Claude API (Haiku, Sonnet, Opus)
  • Google AI Studio — Gemini 1.5 Flash/Pro API access
  • Artificial Analysis Leaderboard — Compare models on speed, quality & cost
Evaluation & Quality Tools
  • RAGAS — RAG evaluation: faithfulness, relevance, precision
  • BERTScore — Semantic similarity evaluation for LLM outputs
  • LLM-Guard — Open-source guardrail toolkit
Key Research Papers & Standards
  • Lewis et al. (2020) — RAG Original Paper: "Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks"
  • Liu et al. (2023) — "Lost in the Middle": LLM attention in long contexts
  • Hu et al. (2021) — LoRA Paper: "Low-Rank Adaptation of Large Language Models"
  • OWASP LLM Top 10 (2025/2026) — LLM application security
  • NIST AI RMF — AI risk management framework
  • EU AI Act — Current compliance requirements

Ready to Build the Future with AI?

Join professionals already transforming their careers with Chase NextGen's AI certification program.

Enroll Now Contact Program Coordinator