Lecturer at Washington University in St. Louis
I teach computer science by asking students to build real systems, explain their reasoning, and see where abstraction meets the messy world.
My work sits at the intersection of teaching, theoretical computer science, data engineering, and assessment design. I like problems where the technical details matter, but the human structure around them matters just as much.
What I do
I want data courses to feel like the work students will actually do: imperfect inputs, real tools, ambiguous tradeoffs, and enough theory to make good decisions.
I experiment with oral exams, multiplicative grading, and feedback structures that ask students to explain, revise, and defend what they understand.
Before WashU, I built data infrastructure, metrics, pipelines, and ML-adjacent tooling at the Chan Zuckerberg Initiative, Robinhood, and Meta.
Selected work
The work I am most proud of tends to have the same shape: start from a real problem, find the right abstraction, and then build something that helps other people reason or create more effectively.
A hands-on graduate course I created for students who need to work with large volumes of data, real-time streams, and the operational constraints that come with modern data systems.
An introduction to data engineering that I designed from scratch, now numbered CSE 3104 and previously offered as CSE 314A and DCDS 510. The course helps data science students move from files and notebooks toward reliable workflows they can trust.
A large, co-taught undergraduate course where I have played a meaningful role in reshaping CSE 247/2407 for the AI era. My focus has been helping students connect theoretical computer science ideas to implementation, problem solving, and durable habits of reasoning.
A public baseball analytics project that turns a niche rules experiment into a browsable data product. It is the sort of small, specific question I enjoy making legible.
A nested-grid strategy game that grows classic tic-tac-toe into a deeper tactical contest. It started as a chalkboard game between friends and has become a small playground for strategy, AI assistance, and web infrastructure.
A daily algorithmic-style logic puzzle built around real-world situations, elegant solutions, and AI-guided checking. The goal is a small problem that is easy to state and hard to stop thinking about.
I have built data warehouses, deletion systems, telemetry, liquidity-risk models, Spark and Airflow frameworks, and analytics pipelines across education, finance, and XR.
Teaching signal
Across the WashU reports in my private evaluation archive, the clearest patterns are enthusiasm, availability, kindness, real-world context, and steady iteration. I try to be honest about what the evidence can and cannot say. The strongest signal comes from the courses I designed from scratch: CSE 3104 (formerly CSE 314A and DCDS 510, which I treat as one course) and CSE 5114. The full archive stays private to avoid over-publishing student feedback, but one representative CSE 5114 report is linked as a raw sample. The larger CSE 247 and CSE 2407 reports still matter, especially where students comment directly on my teaching and on course modernization work, but that course was co-taught, so I read its evaluations as shared-course context rather than a clean single-instructor outcome.
"I appreciated the commitment to tying every concept to real-world examples."
"Very enthusiastic and prepares you for a career in tech."
"Professor Goodman demonstrated a strong interest in the success of the class and each student."
Conference ideas
My iTeach talks focus on a practical question: if students have access to powerful AI tools, what kinds of assessment still help them learn and still tell us what they understand?
Oral exams can mimic design interviews, adapt to student performance, provide quick personalized feedback, and evaluate understanding in a format where outsourcing the work is much harder.
A grading model that multiplies engagement by mastery, so homework and participation matter without letting completed work hide shallow understanding.
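As a concrete sketch of the arithmetic, here is a minimal Python version. The weights, scales, and function names are hypothetical illustrations, not my actual rubric; the point is only that a product lets a low factor on either side cap the grade, where an additive average would hide it.

    # Minimal sketch of multiplicative grading, assuming all inputs in [0, 1].
    # Weights and function names are hypothetical placeholders, not a real rubric.

    def engagement(homework_done: float, participation: float) -> float:
        # Did the student do the work and show up?
        return 0.7 * homework_done + 0.3 * participation

    def mastery(written_exams: float, oral_exam: float) -> float:
        # Can the student explain, revise, and defend what they know?
        return 0.6 * written_exams + 0.4 * oral_exam

    def course_grade(homework_done, participation, written_exams, oral_exam):
        # Multiplicative, not additive: neither completion nor understanding
        # can be averaged away by the other.
        return 100 * (engagement(homework_done, participation)
                      * mastery(written_exams, oral_exam))

    # Full engagement (1.0) with shallow mastery (0.5) earns 50.0,
    # where an additive 50/50 blend would have reported 75.
    print(course_grade(1.0, 1.0, 0.5, 0.5))  # -> 50.0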
Background
I came to computer science through physics and a few excellent teachers. That path still shapes how I teach: conceptually serious, practical where it counts, and attentive to what students are actually experiencing.
B.S. in Physics, M.S. in Computer Science, and teaching roles across CS, probability, algorithms, AI, data structures, and physics.
Senior data and machine learning engineering work across learning platforms, finance data, XR ground-truth pipelines, interviews, and infrastructure.
Lecturer in Computer Science and Engineering, course creator, advisor, AI tools committee member, and mentor for independent student projects.
Resources
For hiring committees, collaborators, students, and curious readers who want the paper trail behind the short version.
Contact
I am interested in computer science education, theoretical computer science, data engineering, AI-aware assessment, mentoring, and practical projects that are a little more interesting than they strictly need to be.