TL;DR: Taught philosophy and CS graduate students to build AI applications in 7 weeks. They went from “what’s a variable?” to analyzing philosophy papers with fine-tuned language models.
Just wrapped up an incredible 7-week teaching experience at The Ethics Institute, where I taught responsible AI to philosophy and computer science graduate students.
We started with Python basics and data analysis, then moved to real-world bias in algorithms. The most fascinating part was working through neural networks and transformers with just paper and pencil: no coding, just building an understanding of how these systems actually work.
By the end, students were building their own AI applications with LLMs and analyzing philosophy papers with fine-tuned language models, going from “what’s a variable?” to sophisticated AI analysis in just 7 weeks.
This experience perfectly illustrates why I love computational social science: bringing technical tools to the people asking the right questions about them.
The philosophy students brought deep ethical frameworks, while the CS students contributed technical expertise. Together, they created a rich environment for exploring AI’s societal implications.
I’m also excited to announce that I’ve received funding to organize workshops helping social scientists leverage LLMs in their research. This builds on the success of the summer school and aims to bring AI tools to more researchers asking important social science questions.
Resources
- Open Source Materials: github.com/kacreel/aide_summer
- Summer School Details: Northeastern Ethics Institute
For more insights on teaching, research methodology, and AI ethics, subscribe to my newsletter.
Special thanks to Kathleen Creel for including me in this incredible teaching experience!