Full course description
Path I: AI Fundamentals
Module 1: The AI Landscape | Joe Reilly | April 9
This foundational session introduces the essential concepts and current state of artificial intelligence, with a focus on large language models and their rapidly evolving capabilities. Participants will explore how LLMs work at a conceptual level, including transformers, tokens, and retrieval-augmented generation, before surveying the 2026 model landscape across proprietary and open-weight systems, reasoning models, and multimodal AI. The session gives particular attention to the rise of agentic AI, where models move beyond simple question-and-answer interactions to autonomously plan, use tools, and execute multi-step workflows on behalf of users. Through a guided tour of Claude and a critical discussion of how AI adoption affects professional expertise, participants will develop practical frameworks for selecting the right AI tools, identifying common failure modes like hallucination and compounding errors, and maintaining meaningful human oversight as these systems become more autonomous.
Module 2: Exploring Ethics and AI Through Pop Culture | Ted Miller / Jennifer Turrentine | April 16
Before most of us ever typed a prompt into an AI tool, we'd already been educated by HAL 9000, The Terminator, and C-3PO. Those stories didn't just entertain us. They became the default frameworks through which we interpret AI today, and whether we realize it or not, they're shaping workplace decisions, policy conversations, and adoption resistance right now.
This session explores how two dominant pop culture narratives (AI as existential threat and AI as perfect assistant) have created a polarized landscape where professionals are either paralyzed by fear or seduced by unrealistic expectations. Neither position serves us well. By learning to identify which narrative a colleague, stakeholder, or policy document is operating from, you can cut through the noise, address real concerns, and move toward grounded, responsible AI practice.
We'll examine the gap between fictional AI and the statistical pattern-recognition systems we actually work with, explore how classic stories can function as governance stress tests, and develop practical strategies for meeting people where they are, narratively speaking. Whether you're a student navigating AI tools in your coursework or a staff member supporting AI integration at CPS, this session will give you a new lens for making sense of the conversation around you.
Because ultimately, people don't resist AI. They resist the story they believe about it.
Module 3: Advanced Prompt Engineering for College Research | Balazs Szelenyi | April 23
This session demonstrates how generative AI can be integrated into a structured academic research workflow using the question of whether we are approaching the technological singularity as a case study. Participants build a custom research bot, develop a working bibliography, refine prompts to identify credible academic sources, and structure a sustained argument. The project culminates in a 10-page research paper grounded in scholarly references, showing how AI can support rigorous research without replacing critical thinking.
Path II: AI Technology
Module 4: Automating Scientific Work with AI | Anton Sinitskiy | April 28
This module explores cutting-edge developments in the AI-driven automation of scientific research workflows, from initial literature discovery to manuscript preparation and publication. Students will examine state-of-the-art systems that can autonomously conduct literature reviews, generate research hypotheses, design and execute experiments, analyze results, and draft scientific papers. Through detailed case studies of platforms like Kosmos, K-Dense, ToolUniverse, and various DeepResearch agents, students will gain experience with these emerging tools while developing critical evaluation skills to assess their capabilities and limitations. The course emphasizes the transformative potential of automated scientific workflows, preparing students to navigate the evolving landscape of AI-assisted research across disciplines.
Module 5: Developing Reliable and Responsible AI Systems | Umesh Hodeghatta | May 5
Objective: To equip students with the frameworks, methods, and tools necessary for creating AI systems that are transparent, fair, and accountable. This session transitions from problem identification to solution design. Students will learn the principles of transparency and explainability, focusing on how to make AI decisions understandable to diverse audiences, including technical teams, business stakeholders, regulators, and end-users. We will explore different explainability techniques (e.g., feature importance, model-agnostic methods) and discuss their limitations.
Module 6: Agentic Intelligence: Architecting AI Agents for Enterprise | Umesh Hodeghatta | May 12
Objective: In this session, we'll explore the fascinating world of AI agents that perceive, reason, and act to assist humans in powerful ways. Whether you're a developer, designer, or decision-maker, you'll gain a clear understanding of how AI agents work, where they're used, and why they're reshaping industries.
My goal is to explore the technical foundations of AI agents, demonstrate their practical impact through real-world applications, and bring theory to life with a live showcase of business cases. Each example reflects how AI agents are actively transforming everyday operations across industries.
Module 7: Cybersecurity for Trusted Sensor Data: Preventing Tampering and Replay | Ganesh Subramanian | May 19
Cyber-Physical Systems (CPS) are systems where software controls real things—like factory machines, building sensors, medical devices, or power equipment. In these systems, sensor data is used to make real decisions, such as stopping a machine or raising an alarm. The risk is that someone can fake sensor readings, change data in transit, or replay old "normal" readings to hide a problem. In this session, students learn a practical cybersecurity approach: verify every event before trusting it (Zero Trust), apply basic controls to reduce misuse, and detect common data attacks so actions are based on trustworthy information.
Module 8: Blockchain for Trusted Records: Shared Audit Trails for Traceability and Compliance | Ganesh Subramanian | June 2
In many CPS environments, records are shared across multiple groups: operations, maintenance vendors, auditors, regulators, and sometimes insurers. A common problem is deciding who can be trusted to store the "true" record of what happened. If records can be edited later, it becomes hard to prove accountability. In this session, students learn how blockchain helps by keeping records in a way that is shared, traceable, and very hard to change without leaving evidence. Students will see how blockchain supports audit trails, compliance, and traceability through simple, beginner-friendly hands-on activities.
Module 9: Understanding Machine Learning and the Tools That Use It | John Wilder | June 9
This session demystifies machine learning for non-technical professionals. Participants will learn key ML terminology, understand how machine learning is distinguished from broader AI concepts, and explore the spectrum from traditional algorithms (decision trees, k-Nearest Neighbors, Support Vector Machines) to modern neural networks. The session examines critical limitations, including hallucinations, dataset bias, and the emerging problem of AI systems trained on AI-generated content. Concrete examples, such as adversarial images and LLM jailbreaking, will give attendees a nuanced understanding of how these technologies work and fail. Machine learning has also been incorporated into many tools you already use, such as Microsoft Excel. This session will demonstrate how to make use of those tools and show that understanding how machine learning works will help you better understand why it gives the outputs that it does. You can leverage this understanding to change the way you interact with machine learning systems and get better results. Participants will come away with a much richer understanding of the ML landscape beyond the current LLM hype, seeing that ML has been quietly powering software applications they use daily, often in forms they may not have recognized as "AI."
Path III: AI in the Professions
Module 10: The Role of AI in the Insurance and Banking Industry | Prashant Mittal | April 30
This workshop helps learners understand how AI can tackle challenges in the insurance and finance industries while gaining hands-on experience with tools like Copilot and GenAI. I'll start by describing common pain points, such as claims processing delays, fraud detection, risk assessment, and customer experience issues, and show how AI can make a difference through real-world examples. Then, I'll move into a guided tour of Copilot to explore its features, how it works, and why it's a game changer for productivity and workflow optimization.
In the final part, I will demonstrate building a solution to a specific insurance industry issue using GenAI. I will create a standalone web app from scratch, complete with a predictive model tailored to the problem and a clean, user-friendly interface. This session will give participants practical experience they can take back to their teams.
Module 11: AI-Powered Project Management | Ravi Kalluri | May 7
This workshop helps industry leaders understand how AI can transform project management, from planning and resource allocation to risk analysis and stakeholder communication. It builds on an implementation I introduced in my project risk management course at CPS. We will walk through the pain points every project leader knows too well: scope creep, status reporting overhead, resource conflicts, missed risks, and stakeholder misalignment. Through a real-world case, we'll show how AI is already solving these problems at leading organizations, leading to labor cost savings. Then, participants will follow a guided, hands-on walkthrough using Claude to build their own AI-assisted project risk assessment and status reporting dashboard using vibe coding alone. By the end of the session, every participant will have a practical, repeatable AI tool they can bring back to their teams on Monday morning.
Module 12: Continuous Improvement in the Age of AI | Fabio De Martino | May 14
This session introduces students to the integration of Lean principles, stakeholder engagement, and AI-enabled tools. The course begins with a foundational overview of Continuous Improvement thinking, including value-added versus non-value-added activities and waste identification, and then transitions into a hands-on application with AI tools to capture "as-is" workflows, analyze stakeholder interview data, and generate visual process maps. Participants leave with practical tools to improve both process optimization and project management performance.
Path IV: AI & Humanics
Module 13: The Future of Generative AI: An Expert Panel | Moderated by Chris Unger and Allison Ruda, CPS LEARN Lab | May 21
What does the near future of generative AI actually look like — and what does it mean for your work? Join a panel of industry experts for a conversation about the trends and capabilities reshaping how professionals across fields are operating. Moderated by Allison Ruda and Chris Unger of CPS LEARN Lab, the discussion will move beyond the headlines to examine what's genuinely changing, what's still overpromised, and how to tell the difference. Panelists will share perspectives from their own practice areas, and participants will have the chance to weigh in, ask questions, and leave with a clearer sense of how to think critically about AI's role in their work — and where it's headed next.
Module 14: Ethical Challenges in AI | Umesh Hodeghatta | June 4
Objective: To develop a deep understanding of the ethical issues that arise across the AI lifecycle, with special attention to the role of data, systemic bias, and governance structures in shaping AI outcomes. We begin by mapping the landscape of AI ethics — exploring why responsible AI is essential in modern society and how ethical lapses can lead to societal harm, reputational damage, and regulatory consequences. Students will examine the foundational dependencies of AI systems, especially data quality, privacy, and governance. We will analyze how incomplete, unrepresentative, or poorly governed datasets can embed systemic inequities into AI models. We will also discuss AI bias, where students will dissect how bias can enter at different stages — from data collection and labeling to algorithmic design and deployment contexts.
Module 15: Mapping Your Professional Ecosystem: Visualizing Networks of Influence and Knowledge | Dan Serig | June 11
Who and what shapes your professional knowledge? This interactive workshop uses arts-based research methods and Napkin AI to help participants visualize their learning ecosystems: the networks of people, tools, communities, and practices that enable their work. Through guided mapping exercises and small-group analysis, participants will identify the human AND technological actors in their professional networks, examine power dynamics and access issues, and prototype concrete interventions to strengthen their ecosystems. AI emerges naturally as one node among many in more-than-human knowledge networks. Participants leave with hands-on experience using Napkin AI, a visual map, a replicable methodology, and actionable commitments for ongoing ecosystem cultivation.

