AI & Generative AI
Professional Certification Program
6-week intensive curriculum + 3 advanced masterclasses. Industry-standard 2026, aligned with OWASP LLM Top 10 — NIST AI RMF — EU AI Act.
Who This Program Is For
7 professional profiles — from career-changers & enthusiasts to ML engineers & enterprise architects.
- Career Transitioners: Pivoting from any field into AI/ML engineering as a new profession.
Beginner
- AI Enthusiasts: Experimenting with AI tools for personal projects and workflow automation — no engineering background required.
Beginner
- Software Developers: Integrating LLMs and AI agents into production-grade applications.
Intermediate
- Analysts & PMs: Using AI tools to accelerate analysis, automate reporting, and lead AI product decisions.
Beginner–Mid
- Consultants & Architects: Designing, evaluating, and presenting enterprise AI solutions to clients.
Advanced
- Data Scientists & ML Engineers: Bridging classical ML and modern LLM workflows in production environments.
Intermediate–Adv
- Business Leaders & Executives: Leading AI adoption strategy, managing AI initiatives, and driving org-wide transformation.
Non-technical
Prerequisites
Requirements vary by track — beginner, intermediate, and advanced paths each have their own entry points.
- No prior AI or ML experience required
- Basic comfort with computers and spreadsheets
- OpenAI API account with $5–10 credit — setup instructions provided in Week 1
Commitment3–5 hrs/week
- Basic Python familiarity (variables, loops, functions) — dev setup covered in Week 1
- Comfort with data concepts, APIs, and command-line basics
- OpenAI API account with $5–10 credit — setup instructions provided
- For Software Developers: familiarity with at least one backend language (Python preferred)
Commitment5–8 hrs/week
- Solid Python proficiency — OOP, data structures, and libraries (NumPy, Pandas)
- Familiarity with cloud platforms (AWS, Azure, or GCP) and API integration
- For ML Engineers: prior experience with model training, evaluation, and deployment
- For Consultants & Architects: experience with enterprise systems design or technical advisory
- OpenAI API account with $10–20 credit — additional cloud credits may apply
Commitment8–12 hrs/week
6-Week Learning Journey
Each tile covers 1–2 weeks. Click any tile to expand full session details, hands-on labs & deliverables.
Phase 1 — AI Foundations & ML
Weeks 1–2 • Sessions 1–4 • From Zero to First Model • Build your dev environment, train ML models, and ship a working text classifier.
- AI history → Modern AI boom (2017—2026)
- Gen AI vs. Discriminative AI — how they differ
- ML algorithm taxonomy: supervised, unsupervised, RL, deep learning
- Classification vs. regression vs. clustering vs. decision-making
- AI lifecycle: Data → Model → Deployment & Ops
- Dev environment setup: VS Code, Python 3.11+, Jupyter, GitHub Copilot
- HANDS-ON: Run your first Python ML script with scikit-learn
- ANN, CNN, RNN architecture with real use cases
- Transformer models: self-attention, positional encoding, encoder-decoder
- Why transformers power GPT, Claude, Gemini
- Vector embeddings & multi-dimensional numerical space
- HANDS-ON: Train Word2Vec; visualize 50D embeddings in 2D with PCA
- NLP pipeline: tokenization → stop-word removal → lemmatization → vectorization
- VADER sentiment analysis: compound scores -1 to +1
- VADER vs. ML vs. LLMs for sentiment — when to use each
- Enterprise apps: customer feedback, brand monitoring, financial news
- HANDS-ON: Sentiment pipeline on 8,353 NFL draft comments
- Rule-based regex approach — accuracy ceiling 74.32%
- Gradient Boosting Classifier — 97.70% accuracy
- Model evaluation: Accuracy, Precision, Recall, F1-Score
- Saves 222,000+ manual corrections per million addresses
- HANDS-ON: Build and evaluate a full text classification pipeline
Phase 2 — LLMs, APIs & RAG
Weeks 3–4 • Sessions 5–8 • Make LLMs Know What Your Business Knows • Call GPT/Claude/Gemini APIs, master prompt engineering, and build a production RAG system.
- LLM internals: tokenization, context windows, attention heads, parameter scale
- Proprietary vs. open-source models — cost, privacy, control trade-offs
- API integration: GPT-4o Mini, Gemini 1.5 Flash, Claude Haiku
- Model selection: use case → rate limits → context window → cost per token
- Secure API key management: .env locally; AWS Secrets Manager in prod
- HANDS-ON: Call 3 LLM APIs; compare latency, cost & output quality
- Prompt patterns: zero-shot, few-shot, chain-of-thought, tree-of-thought, ReAct
- System prompts, personas, and guardrail instructions
- HR job matching with all-MiniLM-L6-v2 (semantic similarity)
- Support ticket routing with facebook/bart-large-mnli (zero-shot classification)
- Running LLMs locally with Ollama
- HANDS-ON: Build a persona-based LLM chatbot with structured JSON output
- Why RAG? Solving LLM limitations: knowledge cutoff, private data, hallucinations
- RAG vs. fine-tuning decision framework
- Indexing: PDF → chunking → OpenAI embeddings → ChromaDB
- "Lost in the Middle" problem and mitigation strategies
- Vector DB options: Pinecone, Qdrant, Weaviate, ChromaDB, pgvector
- HANDS-ON: Build FinRAG — financial chatbot over earnings reports & SEC filings
- Advanced RAG: multi-query, self-reflective, hierarchical, agentic RAG
- RAGAS evaluation: faithfulness, answer relevance, context precision/recall
- LLM-as-judge automated evaluation (1—5 scale per metric)
- SQLite audit logging: query history, EU AI Act compliance trail
- Production deployment: Gradio web UI + Flask REST API
- HANDS-ON: Deploy multi-PDF knowledge base with evaluation dashboard & audit trail
Phase 3 — Agents, MCP & Cloud AI
Weeks 5–6 • Sessions 9–12 • Build & Ship Production AI Systems • Autonomous agents, Model Context Protocol, n8n automation, cloud AI stacks & capstone.
- Agentic AI: reasoning engines, tool calling, state management, cyclic pipelines
- LangChain: LLM wrappers, prompt templates, chains, output parsers
- LangGraph: directed graphs, nodes, edges, Pydantic state objects
- Single-agent vs. multi-agent architecture
- HANDS-ON: Build SmartHire — AI talent recruitment agent (resume screening → email)
- What is MCP? Open standard for structured, secure AI-to-enterprise-data connectivity
- MCP architecture: Host → Bridge → Servers (databases, APIs, tools)
- EU AI Act alignment: audit trails, human-in-the-loop, separation of concerns
- Connecting MCP to PostgreSQL, SharePoint & custom internal APIs
- HANDS-ON: Build enterprise AI assistant with FastAPI MCP server + GPT-4o Mini + Gradio
- n8n: triggers, nodes, 800+ integrations; AI Agent nodes with OpenAI/Claude/Gemini
- AWS AI Stack: Bedrock, SageMaker, Rekognition, Amazon Q
- Azure AI Stack: Azure OpenAI Service, AI Foundry, Cognitive Services
- GCP AI Stack: Vertex AI, Gemini API, AutoML, BigQuery ML
- MLOps: CI/CD for models, experiment tracking, drift monitoring
- HANDS-ON: Build 3-node AI workflow — news fetch → LLM summarize → auto-distribute
- Student capstone presentations: architecture + live demo + lessons learned
- Portfolio-building: GitHub standards, LinkedIn positioning
- Interview mastery: AI system design prompts + STAR method
- AI Engineer career paths 2026: roles, compensation bands, required skills
- LIVE MOCK INTERVIEW: Practice system design with peer & instructor feedback
Assessment & Certification
Clear, measurable criteria across 4 assessment components. Three certification tiers from Core Certificate to AI Engineering Professional.
- 30% — Weekly Hands-On Labs: Jupyter notebooks per session, graded on functionality & completeness
- 20% — Mid-Program Project (Week 4 RAG): graded on functionality, RAGAS metrics & documentation
- 30% — Final Capstone Presentation: live demo + architecture diagram + business impact summary
- 20% — Masterclass Deliverables: required for Advanced Practitioner and AI Engineering Professional tiers
- Complete 6-week core curriculum with 80%+ attendance
- Submit at least 1 project deliverable
- Core Certificate — complete the 6-week curriculum
- Advanced Practitioner — Core + MC1 (Fine-Tuning) or MC2 (Responsible AI)
- AI Engineering Professional — Core + all 3 Masterclasses + Capstone
Advanced Masterclasses
Full-day intensive specializations recommended after Week 3+. Each masterclass is structured as 2-hour sessions. Click any tile to explore the complete agenda.
Masterclass 1 — LLM Fine-Tuning with LoRA & QLoRA
Full-day intensive • Theme: Adapt Foundation Models to Your Domain • From theory to a fully evaluated, production-deployed fine-tuned model.
- Recommended after completing Week 3+
- Requires GPU access (Google Colab Pro or local GPU)
- Based on a real insurance industry use case
- Fine-tuning decision matrix: prompt engineering vs. LoRA/PEFT vs. full fine-tuning
- LoRA mechanics: low-rank adapter matrices A & B — 0.16% of parameters vs. full fine-tuning
- QLoRA: LoRA + 4-bit quantization — fine-tune 7B+ models on a single consumer GPU
- Label masking: prompt tokens → -100 (ignored); response tokens → actual IDs
- Data quality requirements, minimum dataset size, instruction-following format spec
- Insurance use case: emails + call transcripts + CRM notes → concise summaries
- HANDS-ON: Prepare fine-tuning dataset from multi-source customer communication data
- Configure LoRA hyperparameters: rank, alpha, target modules, LR, batch size
- Fine-tune Qwen 2.5 (0.5B) on domain-specific insurance summarization task
- Training monitoring: loss curves, gradient norms, overfitting detection
- BERTScore evaluation: precision, recall, F1 — F1 > 0.9 is near human-level quality
- Merge LoRA adapters back into base model weights for production
- Hosting: Hugging Face Hub, AWS SageMaker, or self-hosted inference
- HIPAA & GDPR compliance for fine-tuning on enterprise data
- HANDS-ON: Full fine-tuning run + before/after quality comparison with BERTScore
Masterclass 2 — Responsible AI, LLM Security & Governance
Full-day intensive • Theme: Build AI That Is Safe, Fair & Compliant • Threat landscape, guardrail implementation, and compliance frameworks.
- Mandatory knowledge for any enterprise AI deployment
- Aligned with OWASP LLM Top 10, NIST AI RMF & EU AI Act 2026
- OWASP LLM Top 10 (2026): LLM01 Prompt Injection → LLM10 Model Theft
- Prompt injection: direct, indirect, jailbreaking, multi-turn bypass techniques
- Real-world attacks across healthcare, finance & e-commerce with live examples
- Agentic AI & Excessive Agency: least-privilege, scoping tool permissions
- Data poisoning, model inversion & membership inference attacks
- HANDS-ON: Red-team a sample LLM application — find & document 3 attack surfaces
- 4-layer defense-in-depth: input filtering → content validation → output review → monitoring
- Rule-based guardrails: regex, topic blocklists, PII detection & redaction
- ML-based guardrails: Detoxify classifier for real-time toxic content detection
- LLM-Guard framework: scanners, validators & shields
- NIST AI RMF: Govern → Map → Measure → Manage cycle
- EU AI Act 2026: high-risk classification, conformity assessments, audit trail requirements
- Human-in-the-loop design: when approval is required before any agent action
- HANDS-ON: Implement 4-layer guardrail system on Llama 3 with compliance audit logging
Masterclass 3 — AI Dev Lifecycle, Cloud AI & Agentic Desktop
Full-day intensive • From Problem Definition to Production Rollout • AI Dev Lifecycle, AWS / Azure / GCP deep-dive, and Claude CoWork agentic AI.
- Automation — Eliminate repetitive tasks; reduce overhead by 40—70%
- Intelligence — Unlock insights invisible to traditional systems
- Scale — Serve millions with consistent quality and zero fatigue
- Pre-Dev Checklist: Problem Statement → Stakeholders → Data Availability → Feasibility → Success Criteria → Ethical Review
- HANDS-ON: Define a complete AI project pre-dev checklist for your organization
- AWS: Amazon Bedrock (Model-as-a-Service), SageMaker AI, Amazon Q — Models: Nova Pro, Claude 4.6, Llama 4, Mistral Large 3
- Azure: Azure AI Foundry, Azure OpenAI Service, Azure AI Search (vector DB for RAG) — Models: GPT-5.4, Claude 4.6, Phi-4, DeepSeek-R1
- GCP: Vertex AI Platform, Gemini API (multimodal), BigQuery ML, Vision AI — Models: Gemini 3.1 Pro, Claude 4.6 Sonnet, Llama 4
- Selection guide: when to choose each platform based on existing infra, compliance & use case
- HANDS-ON: Deploy the same RAG use case on AWS Bedrock vs. Azure OpenAI — compare cost, latency & quality
- Agentic desktop tool built around the outcome, not the prompt — launched Jan 2026
- Available for Pro, Max, Team & Enterprise subscribers; runs inside Claude Desktop (macOS & Windows)
- Same engine as Claude Code — no terminal needed; multi-step tasks executed end-to-end
- File System Access: reads, edits & creates files in folders you grant access to
- HANDS-ON: Use Claude CoWork to autonomously generate a technical report from raw data
Supplemental Resources & Stack
Curated frameworks, APIs, evaluation tools & research papers used throughout the program.
- LangChain — RAG, agents & chain orchestration
- LangGraph — Stateful multi-agent graph workflows
- Hugging Face — Open-source models, datasets & spaces
- Ollama — Run LLMs locally without API costs
- n8n — Visual workflow automation with AI agent nodes
- ChromaDB — Embedded vector database for RAG
- OpenAI Platform — GPT-4o Mini, embeddings, fine-tuning
- Anthropic Console — Claude API (Haiku, Sonnet, Opus)
- Google AI Studio — Gemini 1.5 Flash/Pro API access
- Artificial Analysis Leaderboard — Compare models on speed, quality & cost
- RAGAS — RAG evaluation: faithfulness, relevance, precision
- BERTScore — Semantic similarity evaluation for LLM outputs
- LLM-Guard — Open-source guardrail toolkit
- Lewis et al. (2020) — RAG Original Paper: "Retrieval-Augmented Generation for NLU"
- Liu et al. (2023) — "Lost in the Middle": LLM attention in long contexts
- Hu et al. (2021) — LoRA Paper: "Low-Rank Adaptation of Large Language Models"
- OWASP LLM Top 10 (2025/2026) — LLM application security
- NIST AI RMF — AI risk management framework
- EU AI Act — Current compliance requirements
Ready to Build the Future with AI?
Join professionals already transforming their careers with Chase NextGen's AI certification program.
Enroll Now Contact Program Coordinator