<?xml version="1.0" encoding="utf-8"?><?xml-stylesheet type="text/xml" href="https://kevinl.info/feed.xslt.xml"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://kevinl.info/feed.xml" rel="self" type="application/atom+xml" /><link href="https://kevinl.info/" rel="alternate" type="text/html" /><updated>2026-04-12T19:42:03+00:00</updated><id>https://kevinl.info/feed.xml</id><title type="html">Kevin Lin</title><subtitle>Designing computing education that empowers students</subtitle><author><name>Kevin Lin</name></author><entry><title type="html">Reimagining CS Education</title><link href="https://kevinl.info/reimagining-cs-education/" rel="alternate" type="text/html" title="Reimagining CS Education" /><published>2026-04-09T00:00:00+00:00</published><updated>2026-04-09T00:00:00+00:00</updated><id>https://kevinl.info/reimagining-cs-education</id><content type="html" xml:base="https://kevinl.info/reimagining-cs-education/"><![CDATA[<p>This year, my invited talk series will argue for a framework for reimagining computing education around agentic engineering, human-centered design, and authentic assessment. Contact me at <a href="mailto:kevinl@cs.uw.edu">kevinl@cs.uw.edu</a> to arrange a talk.</p>

<ul>
  <li><a href="https://docs.google.com/presentation/d/1nooxut5wgz73bnhATDo6VVHH5baLVLXpPRpLkROxc1o/edit?usp=sharing">Slides</a></li>
  <li>Academic references: <a href="https://doi.org/10.1145/3478431.3499394">Kirdani-Ryan and Ko 2022</a>, <a href="https://doi.org/10.1145/3641554.3701806">Kannam et al. 2024</a>, <a href="https://doi.org/10.1145/3626252.3630844">Griswold 2024</a>, <a href="https://doi.org/10.1145/3724363.3729024">Hou et al. 2025</a>, <a href="https://doi.org/10.1145/3724363.3729093">Denny et al. 2025</a>, <a href="https://doi.org/10.1145/3696630.3727251">Kam et al. 2025</a> (and Appendix in <a href="https://arxiv.org/abs/2506.00202">arXiv:2506.00202</a>), <a href="https://doi.org/10.1145/3769994.3769995">Shapiro 2025</a>, Agarwal et al. 2026</li>
</ul>

<blockquote>
  <p>Generative AI is shifting the bottleneck of software development from syntax generation to architectural design, code review, and system security. Consequently, computer science education must adapt to serve two distinct, valuable populations: deep technical practitioners who orchestrate complex multi-agent systems, and broad AI-augmented builders who leverage natural language to create functional applications. This presentation outlines a practical framework to support both curricular pathways by redesigning courses around Agentic Engineering, Human-Centered Design, and Authentic Assessment. First, we examine how to teach design thinking across the curriculum using Build Your Own Feature assignments and Probeable Problems to clarify requirements. Second, we consider the limitations of take-home assignments, demonstrating how live Interview and Demo/Discuss Assessments can shift evaluation from the submitted artifact to the student’s problem-solving process. Finally, we address the changing social fabric of the learning environment, where peer collaboration is increasingly mediated or replaced by AI tools. We present scalable interventions, such as Meet the Professor small-group sessions to build student-instructor rapport, alongside GenAI Contracts that encourage goal awareness and self-regulation. Attendees will leave with an actionable strategy to adapt their courses for the AI era while fostering learner agency and community connection.</p>
</blockquote>

<p>If you would prefer to present, adapt, or extend these ideas yourself, the slides are licensed <a href="https://creativecommons.org/licenses/by-nc-sa/4.0/">CC BY-NC-SA 4.0</a>.</p>]]></content><author><name>Kevin Lin</name></author><summary type="html"><![CDATA[Agentic Engineering, Human-Centered Design, and Authentic Assessment for Learner Agency]]></summary></entry><entry><title type="html">Generative AI for Programming Education</title><link href="https://kevinl.info/generative-ai-for-programming-education/" rel="alternate" type="text/html" title="Generative AI for Programming Education" /><published>2025-07-29T00:00:00+00:00</published><updated>2025-07-29T00:00:00+00:00</updated><id>https://kevinl.info/generative-ai-for-programming-education</id><content type="html" xml:base="https://kevinl.info/generative-ai-for-programming-education/"><![CDATA[<p>One promise of generative AI for programming is that it can help us build software by quickly translating specifications into implementations. Setting aside the question of whether generative AI actually helps <a href="https://doi.org/10.1145/3632620.3671116">novice programmers</a> or even <a href="https://www.argmin.net/p/are-developers-finally-out-of-a-job">expert programmers</a>, if we follow the premise, then skills like analyzing the qualities of specifications and evaluating the correctness of implementations could be more important to emphasize. But this presumes we know what software we want to build in the first place. This question is often answered by the instructor: we select the assignments, the motivating examples, and the specific concepts that we want to teach. Our choices shape not only what skills students learn but also their <a href="https://doi.org/10.1080/07370008.2020.1730374">disciplinary values interpretation</a>: “a process by which students reflect on the values of a disciplinary domain, as well as who they are and might become in relation to the domain” (Vakil 2020).</p>

<blockquote>
  <p>💡 How do students see themselves in relation to the field of computing today?</p>
</blockquote>

<p>My scholarship has explored redesigning technologies as a way to shape disciplinary values interpretation by drawing on design methods such as iterative design. Iterative design is a software development practice that involves prototyping, testing, analyzing, and refining technology. Instead of assigning students a complete specification of a program to implement and focusing on evaluating the qualities of the final product, iterative design provides a canvas for students to showcase their software development process over time. We can ask students questions about each step of their process and help them evaluate their own contributions at each step. Generative AI is often framed as a productivity tool, but what does productivity free us to do? Teaching students design methods could empower them to ask bigger questions about their work and challenge them to reflect on what exactly they hope to achieve in their future computing careers.</p>

<blockquote>
  <p>💡 How can we change what we ask students to build, and how we ask them to build it?</p>
</blockquote>]]></content><author><name>Kevin Lin</name></author><summary type="html"><![CDATA[Situating our response to generative AI in disciplinary identity]]></summary></entry><entry><title type="html">Accessible PDFs Using the ACM Article Template</title><link href="https://kevinl.info/accessible-pdfs-using-the-acm-article-template/" rel="alternate" type="text/html" title="Accessible PDFs Using the ACM Article Template" /><published>2025-05-18T00:00:00+00:00</published><updated>2025-05-18T00:00:00+00:00</updated><id>https://kevinl.info/accessible-pdfs-using-the-acm-article-template</id><content type="html" xml:base="https://kevinl.info/accessible-pdfs-using-the-acm-article-template/"><![CDATA[<blockquote>
  <p><a href="https://texdoc.org/serve/ltnews41/0">LaTeX News Issue 41</a> provides new updates on the recommended <code class="language-plaintext highlighter-rouge">DocumentMetadata</code> for users of LaTeX2e 2025-06-01 or later. The instructions in this post were prepared before this release.</p>
</blockquote>

<p>LaTeX does not currently generate tagged PDFs by default. Tagged PDFs encode information about the document structure using specialized identifiers for headings, links, etc. that can improve usability for screenreaders and other accessibility technologies. If you’re using an up-to-date version of the <code class="language-plaintext highlighter-rouge">acmart</code> template and a recent TeX distribution, you can generate tagged PDFs in a few steps. Here’s how to switch to the <code class="language-plaintext highlighter-rouge">acmart-tagged</code> template in Overleaf:</p>

<ol>
  <li>Add the latest version of the <code class="language-plaintext highlighter-rouge">acmart</code> template to your Overleaf project: download the <a href="https://portalparts.acm.org/hippo/latex_templates/acmart-primary.zip">latest acmart template zip</a>, unzip it, and upload all files to your Overleaf project.</li>
  <li>At the top of your main TeX file, prepend this code before the <code class="language-plaintext highlighter-rouge">documentclass</code> command:
    <div class="language-tex highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">\DocumentMetadata</span><span class="p">{</span>
  lang=en,
  pdfversion=2.0,
  pdfstandard=ua-2,
  testphase=<span class="p">{</span>phase-III,firstaid,math,title<span class="p">}</span>
<span class="p">}</span>
</code></pre></div>    </div>
  </li>
  <li>Rename the document class from <code class="language-plaintext highlighter-rouge">acmart</code> to <code class="language-plaintext highlighter-rouge">acmart-tagged</code>.</li>
  <li>In your project settings, switch the TeX compiler to LuaLaTeX, which is <a href="https://latex3.github.io/tagging-project/documentation/prototype-usage-instructions.html">recommended for new documents</a>.</li>
  <li>Check the resulting PDF for accessibility using the <a href="https://check.axes4.com/en/">axes4 tool</a> and <a href="https://pave2.cloudlab.zhaw.ch/">PAVE 2</a>.</li>
</ol>
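
<p>Putting steps 2 and 3 together, the top of the main TeX file might look like the sketch below. The <code class="language-plaintext highlighter-rouge">sigconf</code> class option is only an illustrative assumption; keep whatever class options your paper already uses.</p>

<div class="language-tex highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c">% DocumentMetadata must come before \documentclass</span>
<span class="k">\DocumentMetadata</span><span class="p">{</span>
  lang=en,
  pdfversion=2.0,
  pdfstandard=ua-2,
  testphase=<span class="p">{</span>phase-III,firstaid,math,title<span class="p">}</span>
<span class="p">}</span>
<span class="c">% Same class options as before, with the tagged document class</span>
<span class="k">\documentclass</span>[sigconf]<span class="p">{</span>acmart-tagged<span class="p">}</span>
</code></pre></div></div>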

<p>These steps help generate tags for elements such as titles, heading levels, bullet point lists created through standard packages, and other basic text formats. But further manual effort is needed to add alternative text for images, tag header rows in tables, etc. See the LaTeX Project <a href="https://latex3.github.io/tagging-project/documentation/prototype-usage-instructions.html">documentation on tagging</a> for more information.</p>]]></content><author><name>Kevin Lin</name></author><summary type="html"><![CDATA[Generating tagged PDFs with the acmart-tagged document class]]></summary></entry><entry><title type="html">Accessible Design (at SIGCSE TS 2025)</title><link href="https://kevinl.info/accessible-design/" rel="alternate" type="text/html" title="Accessible Design (at SIGCSE TS 2025)" /><published>2025-02-25T00:00:00+00:00</published><updated>2025-02-25T00:00:00+00:00</updated><id>https://kevinl.info/accessible-design</id><content type="html" xml:base="https://kevinl.info/accessible-design/"><![CDATA[<ul>
  <li><a href="https://docs.google.com/presentation/d/1EUA51uviVnVW2HBtxeW3_XbJMZBth98-5XxbcfIHI_k/edit?usp=sharing">Slides</a></li>
</ul>

<p>How might we teach accessibility across the computing curriculum? This short talk introduces <strong>redesign</strong> as a conceptual frame for accessible design pedagogies and explores its applications in two courses: (1) Data Structures and Algorithms and (2) Data Programming. We identify principles of accessible design pedagogies, briefly workshop the differences between inclusive design pedagogies and accessible design pedagogies, and engage students in practical accessibility skills.</p>

<p>See also my preceding post on <a href="/accessible-design-icer-2024/">Accessible Design (at ICER 2024)</a> for more context.</p>]]></content><author><name>Kevin Lin</name></author><summary type="html"><![CDATA[Teaching Accessibility in Data Programming and Data Structures]]></summary></entry><entry><title type="html">Accessible Design (at ICER 2024)</title><link href="https://kevinl.info/accessible-design-icer-2024/" rel="alternate" type="text/html" title="Accessible Design (at ICER 2024)" /><published>2024-07-14T00:00:00+00:00</published><updated>2024-07-14T00:00:00+00:00</updated><id>https://kevinl.info/accessible-design-icer-2024</id><content type="html" xml:base="https://kevinl.info/accessible-design-icer-2024/"><![CDATA[<ul>
  <li>DOI: <a href="https://doi.org/10.1145/3632621.3671432">10.1145/3632621.3671432</a></li>
  <li><a href="https://docs.google.com/presentation/d/1rOYzE165PaeOp5NKFIxOctnb3f1D04EAIz9vM88BcW0/edit?usp=sharing">Slides</a></li>
  <li><a href="https://docs.google.com/drawings/d/1kfkOSoT4qL9gVuEhlgqRCKXP6Bcx8xZY5yR_NTa-f9s/edit?usp=sharing">Poster</a></li>
  <li><a href="https://docs.google.com/presentation/d/1_2CLMtkAs3D3XFQOlYd9YaB_VBTcLPE_U_Vf2pqflOk/edit?usp=drivesdk">Case Studies and Materials</a></li>
</ul>

<blockquote>
  <p>Elglaly et al. argue for teaching accessible design not only in human–computer interaction courses but across the undergraduate computing curriculum. This poster provides a description and preliminary evaluation of a pedagogical approach to teaching accessible design in an undergraduate advanced Data Structures and Algorithms (DSA) course. Inspired by the idea that all technology is assistive, our approach aims to empower students to utilize their DSA content knowledge and skills to redesign software features to address design assumptions and improve technologies for all.</p>

  <p>Our approach uses the CIDER technique to identify a design assumption; design and implement an abstract data type to address the assumption; and evaluate the abstract data type and implementations before repeating this process to further improve the design. In our pilot offering of the course during Spring 2024, we integrated accessible design skills into 4 discussion sections led by teaching assistants as well as 2 multi-week software design and analysis projects. By the end of the course, we hoped students would feel more prepared to:</p>

  <ol>
    <li>Incorporate accessible design practices in the software design and development process;</li>
    <li>Evaluate the impacts of their software design work with a consideration for accessibility;</li>
    <li>Describe the social and historical context of disability and its present-day effects on people.</li>
  </ol>

  <p>We chose to integrate this approach during discussion sections and projects rather than lecture because we viewed accessible design work in software engineering as integrating knowledge from across multiple lessons: sections provided an opportunity for students to review all the concepts introduced in the preceding week, while projects required students to practice applying them to real-world problems. The 4 discussion sections applied DSA content knowledge to implement accessibility features: a disability access service, augmentative and alternative communication, screen reader website navigation, and accessible shortest paths using data from Project Sidewalk. The 2 multi-week software design and analysis projects also incorporated accessibility as a context. The first project utilized a dataset of website accessibility reports: students wrote code to identify the most common Web Content Accessibility Guidelines (WCAG) issues to address and then created a randomized testing framework to stress-test their data structures when given a large number of WCAG issues. The second project extended the discussion section on accessible shortest paths with algorithm engineering by asking students to explain how they could redesign the project to allow users to choose whether to use the Project Sidewalk access scores when generating shortest-paths navigation directions.</p>

  <p>This work-in-progress research aims to identify the elements of DSA content knowledge and accessible design skills that support critical student reflection on the impacts of their designs. We conducted pre- and post-surveys with 225 non-major students enrolled in the pilot offering of the course during Spring 2024. Data analysis is planned to begin after the conclusion of the course with preliminary insights expected by the time of poster presentation.</p>
</blockquote>]]></content><author><name>Kevin Lin</name></author><summary type="html"><![CDATA[Teaching How All Technologies Are Accessible in Data Structures and Algorithms]]></summary></entry><entry><title type="html">Alternative Grading</title><link href="https://kevinl.info/alternative-grading/" rel="alternate" type="text/html" title="Alternative Grading" /><published>2024-05-11T00:00:00+00:00</published><updated>2024-05-11T00:00:00+00:00</updated><id>https://kevinl.info/alternative-grading</id><content type="html" xml:base="https://kevinl.info/alternative-grading/"><![CDATA[<p>The Center for Teaching and Learning at the University of Washington recently invited me to give a talk for their <em>Reflection and Practice Seminar</em> series on alternative grading. I’m proud to have presented the talk to over 200 attendees from UW and beyond. Contact me at <a href="mailto:kevinl@cs.uw.edu">kevinl@cs.uw.edu</a> to arrange a talk.</p>

<ul>
  <li><a href="https://docs.google.com/presentation/d/1BIAXNqKeLADcaT7JMhsAy9U3r2PYCqV1P1yygxWjJZ8/edit?usp=sharing">Slides</a></li>
</ul>

<blockquote>
  <p>In this talk, I’ll reflect on my experiences with alternative grading practices that better represent the learning that students achieve over time, producing more equitable outcomes by changing the way we determine final grades. Moreover, alternative grading also has the potential to empower students by making space for creative student work that might not otherwise thrive in a points-based grading ecosystem. But grading policies on their own often aren’t enough—at least not in the grade-focused culture at UW—so I’ll also share some of the challenges that I’ve faced and how I work toward better relationships between students, educators, and grades.</p>
</blockquote>

<p>The recent 2024 Teaching &amp; Learning Showcase highlighted three UW teams’ work on alternative grading.</p>

<ul>
  <li><a href="https://teaching.washington.edu/learn/teaching-and-learning-symposium/2024-teaching-learning-showcase/kirkland/">Ungrading Empowers Students to Value Progress over Perfection</a></li>
  <li><a href="https://teaching.washington.edu/learn/teaching-and-learning-symposium/2024-teaching-learning-showcase/gliboff/">Student buy-in, grading and flexibility in a non-major physics course</a></li>
  <li><a href="https://teaching.washington.edu/learn/teaching-and-learning-symposium/2024-teaching-learning-showcase/song-et-al/">Assessing course syllabi with a rubric: Strategies for inclusive teaching</a></li>
</ul>

<p>At the end of the talk, a number of remaining questions focused on workload: <strong>How do you approach and respond to workload issues when using Alternative Grading?</strong></p>

<p>It can certainly seem like a burden to both provide helpful feedback and redesign grades to reflect student achievement by the end of the quarter. But many practitioners who use revisions or resubmissions actually report workload staying about the same as before: instead of wrestling with assigning the fairest amount of partial credit, that time is spent working with students to evaluate their revisions. Ultimately, this depends on answering a few questions about the learning objectives for the course: What are your course’s learning objectives? How are they currently being assessed? Considering both the syllabus and prior quarter’s gradebooks, which objectives are being underassessed, overassessed, or assessed just right?</p>

<ul>
  <li>For objectives that are overassessed, can you redesign the assignments so that they must still be completed (so that learning occurs) but use different feedback mechanisms, like instructor-moderated peer review through the FeedbackFruits tool in Canvas? This can be particularly beneficial for giving students an opportunity to see how other students approach their work, incentivizing student engagement by means of instructor oversight, and reducing instructor grading workload.</li>
  <li>Beyond the scope of an individual assignment, can we reorganize the final grading process to reflect student achievement by the end of the quarter? It is my hope that all students can demonstrate excellent proficiency by the end of the quarter, so it may not be necessary to require revision or resubmission of earlier assessments. We could instead focus assessment once, twice, or thrice at the end of the quarter rather than every week throughout the quarter.</li>
  <li>Do all dimensions of every assignment need to be assessed every week? When I taught large introductory programming courses at my alma mater, the teaching team strategically reviewed only a few parts of each programming assignment where the bulk of the difficulties were expected to occur. Likewise, revision and resubmission might not necessarily require the student to redo the entire assignment: Can the student demonstrate their improvement on the learning objective by addressing a smaller subset of your feedback? If so, can you focus your feedback on these objectives?</li>
  <li>What are the most effective communication mediums for feedback? Not all feedback needs to be provided through writing. One way is to first identify work that needs a major revision and provide high-level feedback that informs students that their work doesn’t meet expectations. Then, encourage these students to come to office hours to get the deeper feedback that they need through a helpful conversation—it can be easier for educators, more helpful for students, and can build better relationships when we provide detailed feedback through a conversation rather than a write-up.</li>
</ul>

<p>If you would prefer to present, adapt, or extend these ideas yourself, the slides are licensed <a href="https://creativecommons.org/licenses/by-nc-sa/4.0/">CC BY-NC-SA 4.0</a>.</p>]]></content><author><name>Kevin Lin</name></author><summary type="html"><![CDATA[Equitable Grading and Ecosystems for More Caring Communities]]></summary></entry><entry><title type="html">It Can Relate to Real Lives</title><link href="https://kevinl.info/it-can-relate-to-real-lives/" rel="alternate" type="text/html" title="It Can Relate to Real Lives" /><published>2024-03-18T00:00:00+00:00</published><updated>2024-03-18T00:00:00+00:00</updated><id>https://kevinl.info/it-can-relate-to-real-lives</id><content type="html" xml:base="https://kevinl.info/it-can-relate-to-real-lives/"><![CDATA[<ul>
  <li>Author’s version: <a href="https://arxiv.org/abs/2312.12620">arXiv:2312.12620</a></li>
  <li>DOI: <a href="https://doi.org/10.1145/3626252.3630754">10.1145/3626252.3630754</a></li>
</ul>

<p>There’s a growing recognition of the need to teach computing students relevant social, ethical, and professional skills. But there remains a key question of how to integrate these approaches within the existing computing curriculum. My students are presenting a paper at SIGCSE TS 2024 on our work toward this end.</p>

<p>In this study, we report on the attitudes and expectations of non-computer science majors enrolled in a Data Structures and Algorithms course designed with a justice-centered approach. Our approach extends prior work by integrating an iterative design methodology called <a href="https://medium.com/bits-and-behavior/beyond-average-users-building-inclusive-design-skills-with-the-cider-technique-413969544e6d">CIDER</a> to empower students to not only critique technologies, but also redesign, reimplement, and re-evaluate them. This approach views the skills learned in any computing course as tools to achieve a specific goal. By encouraging students to question and critique our goals, we also create opportunities for them to use their skills creatively to <a href="/an-invitation-to-reimagine/">reimagine technologies</a>.</p>

<p>Our research employed both quantitative and qualitative methods. We administered pre-quarter and post-quarter surveys to assess changes in students’ attitudes towards computing, including their confidence and sense of belonging. Additionally, we analyzed students’ self-reflections at the end of the quarter to gauge their fulfillment of expectations and their perspectives on the course overall without specific regard or mention of particular course design approaches. The study population included a diverse mix of gender and racial identities, allowing us to examine the experiences of underrepresented groups in computing education.</p>

<p>The results revealed a significant increase in computing confidence and sense of belonging among students, highlighting the positive impact of our approach. However, women, non-binary, and other students not identifying as men (WNB+) still reported lower levels of confidence and belonging by the end of the quarter compared to men, despite an overall increase. In free response questions, the majority of students expressed a positive sentiment towards the course, appreciating its focus on real-world implications and ethical considerations. Nonetheless, some students desired more interview preparation, indicating an opportunity to better align students’ sense of learning across the technical, social, and sociotechnical dimensions.</p>

<p>Our study highlights the potential of justice-centered pedagogies in computing education to show students how to blend their traditional technical education with pressing social and sociotechnical questions. Future work should also explore ways to teach students to navigate the practical complexities of the tech industry and society at large: Within workplace power structures, how might students actually effect change in the technologies they’re responsible for implementing? By doing so, we create a new kind of justice for our students that empowers them to conduct their computing work in ways that are aligned with their social values.</p>]]></content><author><name>Kevin Lin</name></author><summary type="html"><![CDATA[Attitudes and Expectations in Justice-Centered Data Structures for Non-Majors]]></summary></entry><entry><title type="html">An Invitation to Reimagine</title><link href="https://kevinl.info/an-invitation-to-reimagine/" rel="alternate" type="text/html" title="An Invitation to Reimagine" /><published>2024-02-19T00:00:00+00:00</published><updated>2024-02-19T00:00:00+00:00</updated><id>https://kevinl.info/an-invitation-to-reimagine</id><content type="html" xml:base="https://kevinl.info/an-invitation-to-reimagine/"><![CDATA[<p>This year, my invited talk series will argue that <strong>we should teach students not only how to answer questions, but also what questions to ask</strong>. This work will be presented to the UW CSE faculty, the broader UW faculty through the <em>2024 Teaching &amp; Learning Symposium</em>, and as a lightning talk in the <em>3C Fellows Spotlight</em>. Contact me at <a href="mailto:kevinl@cs.uw.edu">kevinl@cs.uw.edu</a> to arrange a talk.</p>

<ul>
  <li><a href="https://docs.google.com/presentation/d/1fJF2HQpdit8RLU8tZ1hEggPfUuF6x0iGQ-f6vvfBxug/edit?usp=sharing">Slides</a> and <a href="https://teaching.washington.edu/learn/teaching-and-learning-symposium/2024-teaching-learning-showcase/lin-et-al/">Showcase Article</a></li>
  <li><a href="/do-abstractions-have-politics/">Lin 2021</a>, <a href="/cs-education-for-the-socially-just-worlds-we-need/">Lin 2022</a>, <a href="https://medium.com/bits-and-behavior/beyond-average-users-building-inclusive-design-skills-with-the-cider-technique-413969544e6d">Oleson et al 2022</a>, <a href="/it-can-relate-to-real-lives/">Batra et al 2024</a></li>
</ul>

<blockquote>
  <p>Many undergraduate computing courses teach concepts using simplified models that are carefully aligned with learning objectives. But these models often encode design assumptions in both problem definitions and resulting solutions. When we choose to leave assumptions unexamined, we also choose to teach that it is not disciplinary practice to question them in their future work. This not only has a societal cost given the outsize impact of computing technologies, but also a personal cost to students’ capacity to identify and address issues they care about in their computing work. How can we empower students to redesign computing problems and artifacts?</p>

  <p>In this talk, I will share how I teach design assumptions in undergraduate computing courses through “an invitation to reimagine” the simplified models presented during class. When designing autocomplete, we might assume that search results should exactly match the search query—but real autocomplete systems might make suggestions that closely but not exactly match the search query. Or, when selecting a plot title, we might assume trends in the data tell the whole story—but there could be more nuanced explanations that require more engagement with the data setting. By redesigning problems and artifacts to address these assumptions, we make space for students to demonstrate proficiency in traditional learning objectives while also empowering them to create better technologies.</p>

  <p>Discussion may include prior work and connections to student identity, attitudes, and expectations; limits of this technique as a method for empowering students; and future directions for research and practice.</p>
</blockquote>

<p>If you would prefer to present, adapt, or extend these ideas yourself, the slides are licensed <a href="https://creativecommons.org/licenses/by-nc-sa/4.0/">CC BY-NC-SA 4.0</a>.</p>]]></content><author><name>Kevin Lin</name></author><summary type="html"><![CDATA[Empowering Students to Redesign Computing Problems and Artifacts​]]></summary></entry><entry><title type="html">Teaching Critical Comparative Data Structures and Algorithms</title><link href="https://kevinl.info/teaching-critical-comparative-data-structures-and-algorithms/" rel="alternate" type="text/html" title="Teaching Critical Comparative Data Structures and Algorithms" /><published>2023-03-09T00:00:00+00:00</published><updated>2023-03-09T00:00:00+00:00</updated><id>https://kevinl.info/teaching-critical-comparative-data-structures-and-algorithms</id><content type="html" xml:base="https://kevinl.info/teaching-critical-comparative-data-structures-and-algorithms/"><![CDATA[<p>I was recently invited to give a short talk at the <a href="https://hai.stanford.edu/events/embedded-ethics-conference-strategies-teaching-responsible-computing-within-computer-science">Embedded Ethics Conference: Strategies for Teaching Responsible Computing Within the Computer Science Curriculum</a>. Here are the most relevant materials that I shared at the event.</p>

<ul>
  <li><a href="https://docs.google.com/presentation/d/1fVqtYxWupi6zOWOjHGBH0CQOWlwi6p5fuwUFE06JUGg/edit?usp=sharing">Slides</a></li>
  <li><a href="https://courses.cs.washington.edu/courses/cse373/23wi/">Course website</a> with <a href="https://courses.cs.washington.edu/courses/cse373/23wi/lessons/">Lessons</a>, <a href="https://courses.cs.washington.edu/courses/cse373/23wi/projects/">Projects</a>, and <a href="https://courses.cs.washington.edu/courses/cse373/23wi/assessments/">Assessments</a></li>
  <li><a href="https://courses.cs.washington.edu/courses/cse373/23wi/lessons/heaps-and-hashing/#affordance-analysis">Affordance analysis</a></li>
</ul>

<p>Many of the most recent ideas have been inspired by the concept of <strong>Iterative Design</strong>: a design methodology where iterating upon concepts improves the quality of a final design. I’ve been working closely with Alannah Oleson, whose recent work on <a href="https://medium.com/bits-and-behavior/beyond-average-users-building-inclusive-design-skills-with-the-cider-technique-413969544e6d">Building Inclusive Design Skills with the CIDER Technique</a> uses the concept of iteration to “broaden learners’ understandings of inclusion issues and design exclusion.” I’ve expanded the use of iteration as a motif throughout the class: the goal of my Data Structures and Algorithms course is to teach students methods to answer the question, ‘Why did we choose to write the specification that way?’</p>

<p>By addressing this question in the broadest of terms, we engage students not only to learn the implementation details of data structures and algorithms, but also to consider how the choice of abstract data types can influence the outcomes of a design. Iterative design provides a methodology to integrate these two skills and understand both as integral to the software design process: software requires modeling complex phenomena in terms of abstractions, which are then implemented using data structures and algorithms. But the ease, efficiency, and quality of implementations can go back and influence how we define our abstractions. By considering iterative design as a way to bridge these software design skills, we can build more inclusive abstractions and implement them more efficiently too.</p>

<p>By engaging in inclusive design practices, students learn how to design more just technologies and (potentially) become more critical computing professionals. In <em>Design Justice</em>, Sasha Costanza-Chock poses the question: “Following Du Bois, we might ask of the recent emphasis on learning to code: Is the ultimate object to make people good coders, or to make coders good people?” But the argument for justice in the largest social terms isn’t always convincing to all my students. Students steeped in computing cultures that glorify prestigious, high-paying tech jobs may view time spent learning about ethics and identity as time that could otherwise be spent further developing their “technical” skills.</p>

<p>Iterative design provides a sociotechnical approach to address the hierarchy of value in computing culture. In doing so, we provide another potential <a href="https://doi.org/10.17763/1943-5045-88.1.26">political vision</a> for doing this work. I aim to empower students to understand how they might redesign technologies to address assumptions about users. But this process is not only about the skills or content knowledge that students attain. I also believe that discussion of sociotechnical aspects provides students an opportunity to pause and reflect not only on how they might design their software to address ethical questions, but also how they might design their lives to reflect those questions too.</p>]]></content><author><name>Kevin Lin</name></author><summary type="html"><![CDATA[What do we have so far?]]></summary></entry><entry><title type="html">Specifications grading policies</title><link href="https://kevinl.info/specifications-grading-policies/" rel="alternate" type="text/html" title="Specifications grading policies" /><published>2021-11-24T00:00:00+00:00</published><updated>2021-11-24T00:00:00+00:00</updated><id>https://kevinl.info/specifications-grading-policies</id><content type="html" xml:base="https://kevinl.info/specifications-grading-policies/"><![CDATA[<p><a href="https://users.cs.duke.edu/~ksm/">Kristin Stephens-Martinez</a> invited <a href="https://homes.cs.washington.edu/~brettwo/">Brett Wortzman</a> and me to season 3 of the <a href="https://csedpodcast.org/">CS-Ed Podcast</a>, where we discussed our grading philosophies in our large, programming-focused CS1 and CS2 courses. I wanted to provide some more context about our grading policies and the surrounding infrastructures that I’ve designed to support them.</p>

<p>One way to understand a grading system is in terms of two components: (1) the grading policies and (2) the tools that are used to communicate progress during the term. Grading policies affect the tools we might want to use. When I teach a large 4-credit course with many assignments that factor into the final grade, I want tools that help students get a better sense of their progress through the course. But our tools don’t need to rely on assignment grades to communicate progress: when I teach a small 1-credit seminar with no submitted assignments and participation-based grading, there are no assignment grades to rely on. Instead, I might rely on classroom observation and notetaking tools to help me identify participatory inequity and use that information to inform my pedagogy and communication.</p>

<p>This interface between policies and tools is a contested space, with instructors on one side designing grading policies and educational technology companies on the other side providing tools. Although every instructor has different views on grading, tools often lead us to assume specific approaches. For example, many autograding tools provide a single, summative, numeric evaluation of student submissions—simplifying multiple dimensions of quality down into a single number. Learning management systems like to maintain assignment grades as numbers, which affords calculating students’ final grades using a weighted average. Student progress in these tools is represented using numbers. It’s in this context that Robert Talbert reminds us of the importance of <a href="https://rtalbert.org/maintaining-humanity-in-higher-education-in-a-high-tech-world/">maintaining humanity in higher education in a high-tech world</a>.</p>

<p>Specifications-based grading policies attempt to design grading to support a growth mindset and build empathy between students and instructors, in spite of larger university administrations that use grades as a mechanism to separate students and sow distrust. Over the past couple of years, I’ve experimented with 3 different specifications-based grading policies that wrestle with these tensions. Although none of these practices represents a truly just grading policy, they are in my view a step forward from our traditional practices because they provide a foundation for enabling more creative assessment in large courses. All of them are, at best, imperfect solutions.</p>

<h2 id="single-rating-grading">Single-rating grading</h2>

<p>Under single-rating grading, most assignments in the course are assigned a single rating representing the quality of the work with respect to the assignment specifications. We replaced fine-grained point-based rubrics with a 4-level <strong>E/S/N/U</strong> ordinal scale: Exemplary / Satisfactory / Not yet / Unassessable. This policy was used in a CS2 course, <a href="https://courses.cs.washington.edu/courses/cse143/20au/about/#grading">CSE 143 Autumn 2020</a>.</p>

<p><strong>How is progress communicated?</strong> Although we can represent this data in a learning management system as 3/2/1/0, the conversion to an interval scale implies that the differences between adjacent ratings are equal, when in reality they may be quite different depending on what information the rubrics intend to evaluate at each level. This system is relatively easy to adapt to single-number learning management systems because we can enter the 3/2/1/0 rating for each assignment as the grade. Final grades are calculated by exporting the grades, entering them into a spreadsheet, and then applying the specifications-based grading policy. The contribution of specifications-based grading is not to assume that a weighted average of 3/2/1/0 values is the ideal way to assign grades—not only because of the interval scale conversion, but also because final grades (for the many purposes they might serve) might be better determined using a count of E/S/N/U ratings. Robert Talbert’s <a href="https://rtalbert.org/tag/mastery-grading/">written a lot more about this</a>.</p>
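To make the counting idea concrete, here is a minimal sketch of a final-grade calculation based on counts of E/S/N/U ratings rather than a weighted average. The thresholds and grade values are invented for illustration and do not reflect any actual course policy.

```python
# Hypothetical sketch: assign a final grade from counts of E/S/N/U ratings.
# The thresholds and grade values below are invented for illustration only.
from collections import Counter

def final_grade(ratings):
    counts = Counter(ratings)
    exemplary = counts["E"]
    satisfactory = counts["E"] + counts["S"]  # Exemplary work is also satisfactory

    if satisfactory >= 8 and exemplary >= 6:
        return "4.0"
    if satisfactory >= 8:
        return "3.5"
    if satisfactory >= 6:
        return "3.0"
    return "below 3.0"

print(final_grade(["E", "E", "S", "E", "S", "E", "E", "E"]))  # 4.0
```

Because the policy counts ratings instead of averaging points, a single “Not yet” caps the grade differently than a weighted average would, which is exactly the design flexibility a spreadsheet affords over the LMS’s built-in weighted average.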

<h2 id="multiple-rating-grading">Multiple-rating grading</h2>

<p>Multiple-rating grading takes the idea of E/S/N/U ratings but considers that large assignments may emphasize multiple, orthogonal dimensions of quality rather than just one. Assignments are given multiple E/S/N/U ratings, one for each dimension of quality. In a programming assignment, we might assign E/S/N/U for each of the following dimensions.</p>

<ul>
  <li><strong>Behavior</strong>. Does the input and output functionality of the program meet the specification?</li>
  <li><strong>Concepts</strong>. Does the code effectively and appropriately use the Python language or libraries?</li>
  <li><strong>Quality</strong>. Does the code meet the code quality guidelines and documentation standards?</li>
  <li><strong>Testing</strong>. Do the unit tests ensure the correctness of the code across relevant inputs?</li>
</ul>

<p>This policy was used in a CS2 course, <a href="https://courses.cs.washington.edu/courses/cse163/21sp/#grading">CSE 163 Spring 2021</a>.</p>

<p><strong>How is progress communicated?</strong> This is much trickier to represent in a learning management system because they often assume assignments have only a single score. One approach would be to apply the interval scale conversion and sum the results so that a single score between 0 and 12 is generated, but this raises many new issues around interpretation of grades as well as the fact that this conversion is a one-way operation—we can’t recover the original E/S/N/U grades for some scores like 6 because there are many ways for a student to have scored a total of 6 “points” in this scale. An average of the scores is similarly problematic.</p>

<p>Ultimately, we’d like to be able to communicate these ratings separately. At the University of Washington, we use the Canvas learning management system. Setting up a multiple-rating grading system requires combining three Canvas features.</p>

<ol>
  <li><a href="https://community.canvaslms.com/t5/Instructor-Guide/How-do-I-add-a-rubric-to-an-assignment/ta-p/1058">Assignment grading rubrics</a>. Rubrics can be attached to an assignment, which allows instructors to assign scores on each dimension.</li>
  <li><a href="https://community.canvaslms.com/t5/Canvas-Basics-Guide/What-are-Outcomes/ta-p/75">Outcomes</a>. Outcomes define the method for aggregating rubric scores across multiple assignments. This is used to determine cutoffs for a student who’s “mastered” a dimension.</li>
  <li><a href="https://community.canvaslms.com/t5/Instructor-Guide/How-do-I-use-the-Learning-Mastery-Gradebook-to-view-outcome/ta-p/775">Learning mastery gradebook</a>. This opt-in feature provides an alternative view of student progress based not on the single-number grade, but rather the rubric scores accumulated across the course.</li>
</ol>

<p>The resulting learning mastery gradebook that students see is a dashboard organizing their progress on each dimension of quality. This dashboard is particularly helpful for providing students an at-a-glance view of what assignments they might need to revise or resubmit. In this example, we have a student who’s received feedback on 3 assignments (Pokemon, Primer, Startup), and we’re looking at how they’re doing on <strong>Behavior</strong> and <strong>Concepts</strong> for each of the 3 assignments. The headings note whether the student has “mastered” or “not mastered” each dimension according to how we defined mastery in the Outcomes.</p>

<p><img src="/assets/images/learning-mastery-gradebook.svg" alt="Learning mastery gradebook" /></p>

<p>There are some limitations with this approach. For one, there are few options to customize the “mastered” versus “not mastered” headings. Canvas offers three calculation methods for determining mastery, but all three of them are meant for standards-based grading where students are reassessed multiple times on the same concept; mastery involves a minimum threshold on the number of satisfactory evaluations. Our specifications-based grading instead uses rubrics to measure dimensions of quality in different CS concepts; mastery involves satisfactory completion of all assignments. Furthermore, in Canvas, rubrics do not replace the single-number assignment grade: an assignment is not graded until it is given a single-number grade. I work around this by treating the single-number grade as a binary flag for satisfactory submissions: 0 for work that is below satisfactory on any dimension, and 1 for work that is satisfactory across all dimensions.</p>
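The binary-flag workaround can be sketched in a few lines. The dimension names follow the example above; the function itself is an illustration, not the actual grading script.

```python
# Sketch of the binary-flag workaround: the Canvas single-number grade is
# 1 only when every rubric dimension is rated satisfactory or better.
SATISFACTORY = {"E", "S"}  # Exemplary and Satisfactory both count

def canvas_flag(ratings):
    """Return 1 if all dimensions are satisfactory, else 0."""
    return 1 if all(r in SATISFACTORY for r in ratings.values()) else 0

print(canvas_flag({"Behavior": "E", "Concepts": "S", "Quality": "S", "Testing": "S"}))  # 1
print(canvas_flag({"Behavior": "E", "Concepts": "N", "Quality": "S", "Testing": "S"}))  # 0
```

The flag deliberately discards information—it exists only so Canvas considers the assignment graded—while the rubric ratings themselves carry the real feedback.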

<p>Getting all of this multiple-rating data into Canvas is also tricky for programming assignments that may be graded on other platforms. UW CSE courses use a learning platform called Ed. Ed supports grading programming assignments using our multiple-rating E/S/N/U scale, but it doesn’t have a direct integration to send the multiple-rating scores to Canvas or implement the binary flag idea that I use. So I wrote a script to send the entire class’s <a href="https://gist.github.com/kevinlin1/f3bb1bab2bab2ce65ba947e6d5040a58">multiple-rating grade data from Ed to Canvas</a>.</p>

<h2 id="module-completion-grading">Module completion grading</h2>

<p>Module completion grading is a variant of single-rating grading that organizes the grading policy around modules—collections of assignments or other work in the course. Rather than emphasizing each assignment’s grade as an individual unit to be counted toward a student’s final grade in the course, module completion grading emphasizes counting the modules completed to satisfaction. We still give grades for each assignment, but rather than use a 4-level E/S/N/U scale, I use a binary 1/0 scale. Receiving a 1 on all the work in a module signifies satisfactory completion.</p>

<p>This policy is used in an advanced data structures course, <a href="https://courses.cs.washington.edu/courses/cse373/22wi/#deliberate-practice">CSE 373 Winter 2022</a>.</p>

<p><strong>How is progress communicated?</strong> The binary 1/0 scale is a good fit for single-number learning management systems, so it’s relatively easy to adapt. Modules can be communicated using any medium: it could be posted as a document, discussed in class, included in the syllabus, or presented using <a href="https://community.canvaslms.com/t5/Canvas-Basics-Guide/What-are-Modules/ta-p/6">Canvas modules</a>. Canvas provides robust features for modules, including the ability to <a href="https://community.canvaslms.com/t5/Instructor-Guide/How-do-I-add-requirements-to-a-module/ta-p/1131">add requirements</a> that students must complete in order to pass a module. I use module requirements to codify the score requirements for assignments in a module, and as student work is graded, the module immediately updates to reflect their progress through the course.</p>

<p><img src="/assets/images/module-completions.svg" alt="Module completions" /></p>

<p>What I like about this policy is that it abstracts away the details of the grading policy until the assignments are actually released. It also lets you easily incorporate work such as participation: just include it in the module to communicate the importance of the work. In the past, I’ve had modules that included pre-class preparation work as part of the requirements for satisfactory completion of the module. Since my courses are on Ed, I wrote another script to send the entire class’s <a href="https://gist.github.com/kevinlin1/9d233b3f9957201f619afb9d5b27a08e">lesson completions from Ed to Canvas</a>.</p>]]></content><author><name>Kevin Lin</name></author><summary type="html"><![CDATA[How to get your LMS to work with your grading policies, not against them.]]></summary></entry><entry><title type="html">CS Education for the Socially-Just Worlds We Need</title><link href="https://kevinl.info/cs-education-for-the-socially-just-worlds-we-need/" rel="alternate" type="text/html" title="CS Education for the Socially-Just Worlds We Need" /><published>2021-10-11T00:00:00+00:00</published><updated>2021-10-11T00:00:00+00:00</updated><id>https://kevinl.info/cs-education-for-the-socially-just-worlds-we-need</id><content type="html" xml:base="https://kevinl.info/cs-education-for-the-socially-just-worlds-we-need/"><![CDATA[<ul>
  <li>Author’s version: <a href="https://arxiv.org/abs/2109.13283">arXiv:2109.13283</a></li>
  <li>DOI: <a href="https://doi.org/10.1145/3478431.3499291">10.1145/3478431.3499291</a></li>
</ul>

<blockquote>
  <p>Justice-centered approaches to equitable computer science (CS) education prioritize the development of students’ CS disciplinary identities toward social justice rather than corporations, industry, empire, and militarism by emphasizing ethics, identity, and political vision. However, most research in justice-centered approaches to equitable CS education focus on K-12 learning environments. In this position paper, we problematize the lack of attention to justice-centered approaches to CS in higher education and then describe a justice-centered approach for undergraduate Data Structures and Algorithms that (1) critiques sociopolitical values of data structure and algorithm design and dominant computing epistemologies that approach social good without design justice; (2) centers students in culturally responsive-sustaining pedagogies to resist dominant computing culture and value Indigenous ways of living in nature; and (3) ensures the rightful presence of political struggles through reauthoring rights and problematizing the political power of computing. Through a case study of this Critical Comparative Data Structures and Algorithms pedagogy, we argue that justice-centered approaches to higher CS education can help students not only critique the ethical implications of nominally technical concepts, but also develop greater respect for diverse epistemologies, cultures, and narratives around computing that can help all of us realize the socially-just worlds we need.</p>
</blockquote>

<p><strong>Equity pedagogies</strong> are “assets-based pedagogical approaches that support minoritized students’ learning outcomes and further develop their potential to become social change agents” (<a href="https://doi.org/10.26716/jcsi.2020.03.2.1">Madkins 2020</a>). Equity pedagogies challenge dominant approaches that are not designed to support and engage minoritized students. The emphasis on developing students’ “potential to become social change agents” is a critical reflection of the dominant discourses in CS education. Currently, dominant approaches to CS education do not make space for social change or social justice—in fact, <a href="https://doi.org/10.1080/14626268.2019.1682616">Malazita and Resetar (2019)</a> argue that dominant approaches teach the opposite: that CS is “anti-political”. Ethics are often relegated to optional courses or a few “special topics lectures” at the end of the course “if time allows”; analysis of algorithms restricted to runtime and space complexity; programming done in service of “unfeeling” big tech corporations whose profits soar at the cost of local communities; and learning CS in service of maintaining the United States’ dominance over the world in war, money, and culture. CS education is about power.</p>

<p>Justice-centered approaches place these concerns about power and social (in)justice at the center of education. Affordance analysis and other ethical evaluations of technologies are only one small piece of the puzzle. In the absence of justice-centered approaches, affordance analysis and other interventions risk subsumption to dominant narratives: the idea that we can analyze our way out of problematic technologies, or that ethical evaluation can be done without engaging history, culture, or identity. Justice-centered approaches require engaging our full humanity in ways that resist disembodied approaches to science learning.</p>

<p>One of the contributions of this work is a new curricular approach for (advanced) data structures and algorithms. <em>Critical Comparative Data Structures and Algorithms</em> (CCDSA) builds on the foundation of <strong>affordance analysis</strong> by addressing the ways that it tends to leave human values (e.g. history, culture, identity) implicit. A critical comparative approach explicitly engages broader questions such as, “Who designs computation? To what end? How do our design processes, goals, and purposes undermine our intentions to do good?” In suggesting the last question, we raise new concerns around the conflict between <a href="https://design-justice.pubpub.org/">design justice</a> and disembodied or universalizing computer technologies. The aim to design technologies for everyone is often ultimately realized as designing technology for a few. CCDSA defines this reflexive practice of questioning processes as <em>critical comparison</em> where the dominant perspective is placed in dialogue with more marginalized and justice-centered perspectives.</p>

<p><em>Critical comparison</em> is a powerful method that can be repeatedly applied to each dimension of equity pedagogies and justice-centered approaches. It pushes us to reconsider our assumptions in every facet of the student experience. It’s particularly suitable for higher education, where students have significantly internalized dominant purposes and narratives for education. However, this approach also has its own risk. In attempting to compare, we place both perspectives in the spotlight—and so we may teach that holding either perspective is acceptable. But is that always the case? There are many ideas that educators may have a moral imperative to either support or denounce if we wish to realize a more just society. And, if we want to support our students towards becoming social change agents, we’ll need to help everyone better understand the ways in which power shapes values, approaches, and narratives. In <a href="https://design-justice.pubpub.org/pub/y2ymuvuk/release/1#make-all-people-good-coders-or-make-all-coders-good-people">Sasha Costanza-Chock’s words</a>: “Is the ultimate object to make people good coders, or to make coders good people?”</p>]]></content><author><name>Kevin Lin</name></author><summary type="html"><![CDATA[The Case for Justice-Centered Approaches to CS in Higher Education]]></summary></entry><entry><title type="html">Centering Identity and Culture in Critical Computing</title><link href="https://kevinl.info/centering-identity-and-culture-in-critical-computing/" rel="alternate" type="text/html" title="Centering Identity and Culture in Critical Computing" /><published>2021-07-15T00:00:00+00:00</published><updated>2021-07-15T00:00:00+00:00</updated><id>https://kevinl.info/centering-identity-and-culture-in-critical-computing</id><content type="html" xml:base="https://kevinl.info/centering-identity-and-culture-in-critical-computing/"><![CDATA[<p>I’m presenting a lightning talk at the <a 
href="https://foundation.mozilla.org/en/blog/teaching-responsible-computing-summit-2021/">Teaching Responsible Computing Summit 2021</a> on one cornerstone of my teaching titled, <a href="https://docs.google.com/presentation/d/1HBM4eo-EqYWTPyMZzZbS8bmcUeA7Hcz7heV2loP6GUM/edit?pli=1#slide=id.ge4e3f5c3d4_5_0">Centering Identity and Culture in Critical Computing</a>. Here’s the abstract extended from the slide.</p>

<blockquote>
  <p>Students find “ethical and social interventions in CS education […] valuable in application-centered classes, like data visualization or applied machine learning, but not in ‘core’ technical classes like [CS1]” (<a href="https://doi.org/10.1080/14626268.2019.1682616">Malazita and Resetar 2019</a>). Ruha Benjamin reminds us of our commitment as educators to <a href="https://youtu.be/9xmrJJESCt8">Incubate a Better World in the Minds &amp; Hearts of Students</a>, calling on us to inspire and empower students to imagine and then realize resistant, contrapuntal, and anti-oppressive futures through education. Teaching responsible computing requires that we wrestle with critical questions and justice in our racialized classrooms (<a href="https://youtu.be/_iCCsYR5QJY">Shah 2020</a>, <a href="https://youtu.be/-eTQrFPTM1Y">Philip 2021</a>) that are entangled with broader social questions. I present an approach to <strong>culturally-responsive and critical pedagogy</strong> in <a href="https://courses.cs.washington.edu/courses/cse373/21su/">my responsible data structures classroom</a> grounded in 3 ideas.</p>

  <ol>
    <li>Developing <a href="https://doi.org/10.1145/3328778.3366792">cultural competence</a> and <a href="https://youtu.be/c8TQ29I8lK4">deconstructing computing culture</a>.</li>
    <li>Valuing <a href="https://doi.org/10.1007/s11422-007-9067-8">indigenous ways of living in nature</a> and <a href="https://youtu.be/MnRZcPeEAv0?t=1833">purposes for learning</a>.</li>
    <li>Student-centered equity pedagogies that draw on the past and present experiences of people in the room.</li>
  </ol>
</blockquote>

<p>Geoff Challen recently raised questions about <a href="https://www.geoffreychallen.com/essays/2021-07-09-creating-course-community">creating course community</a>, suggesting that “whether you build course community should be determined by your learning goals and how the community contributes to student success.” When I think about the diversity of people in our classrooms in today’s educational environment—and for the futures that we want to create—the answer has to be an affirmative ‘yes’ if we as educators want to make good on our commitment to “incubate a better world in the minds and hearts of students” (Benjamin 2016). Learning without each other assumes that student success is determined by acquisition of knowledge disconnected from the world, which is problematic not only for learning responsible computing but also because it reinforces hegemonic ideas about knowledge only as a fungible asset—one that treats students only as agents of national economic growth. 
For marginalized students who have lived through the harms of progress and technology narratives, it is no surprise then that many students experience traditional computing and STEM courses as oppressive spaces.</p>]]></content><author><name>Kevin Lin</name></author><summary type="html"><![CDATA[How do we wrestle with critical questions and justice in our racialized classrooms?]]></summary></entry><entry><title type="html">Ohyay</title><link href="https://kevinl.info/ohyay/" rel="alternate" type="text/html" title="Ohyay" /><published>2021-05-13T00:00:00+00:00</published><updated>2021-05-13T00:00:00+00:00</updated><id>https://kevinl.info/ohyay</id><content type="html" xml:base="https://kevinl.info/ohyay/"><![CDATA[<div style="position: relative; padding-bottom: 56.25%; height: 0;"><iframe src="https://www.loom.com/embed/4ee03866fdd1445e9d3a718d3dc673e6" frameborder="0" webkitallowfullscreen="" mozallowfullscreen="" allowfullscreen="" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%;"></iframe></div>

<p><a href="https://ohyay.co/">ohyay</a> is a customizable platform for designing virtual spaces “from music festivals and interactive museums to family reunions and themed escape rooms.” Why not classrooms, too? Amy’s written about her experience using <a href="https://medium.com/bits-and-behavior/ohyay-and-the-pedagogical-power-of-emojis-9570e606d2a0">ohyay and the pedagogical power of emojis</a>—ohyay reduces the barriers to student participation and engagement from raising a hand (in person) to composing a chat message (Zoom) to reacting with emoji (ohyay). But ohyay is more than just an improved Zoom: it’s a different way to imagine a space as a permanent structure rather than a temporary virtual meeting. Meetings aren’t just <em>scheduled</em> in ohyay. They <em>take place</em> in an ohyay workspace that has a real sense of purpose, one that’s designed with intention rather than served on a literally blank slate.</p>

<p>So how far can we take the concept of an ohyay classroom for large courses with active learning? One of the challenges of running a large course over Zoom is that it overexposes students. Because Zoom assumes that all participants need to see and hear each other at the same time, everyone shares the same video and audio channel. But that’s often not how things work in real classrooms where people are seated in different parts of the room, where they can whisper to their neighbors, and where some or even most students can’t see each other’s faces. These physical limitations create a proximity effect: students know their seat-neighbors the best. Many kinds of active learning spaces are designed around the assumption that students cluster into groups and work toward the same goals. A community of practice centers meaningful participation, contribution, and interaction between these people in a group, so community-building will require rethinking how virtual spaces afford or disafford certain ways of interacting and sharing with each other.</p>

<p>I designed the <a href="https://ohyay.co/gallery_item.html?itemId=ws_ChixQFAf"><strong>CSE 373 ohyay</strong></a> workspace to enhance a sense of community and belonging around consistent teams of 8 students. Instead of placing every student in a co-located video and audio channel, students in my ohyay workspace can only see and hear their teammates (in addition to the instructor). Unlike Zoom, students can whisper to their team without interrupting the flow of class. While students can only see and hear their teammates, they can engage via chat or emoji reaction with the rest of the class. When a student wants to speak up in front of the entire class, they can raise their hand and the instructor can call on them to speak in front of everyone; in Zoom, this process is usually unmoderated. Instead of always having their video and audio on in front of everyone, students can make that decision based on the much smaller group of students in their team.</p>

<p>Being seen and heard in any community or space is an issue of equity as well as an exercise in power and domination. Marginalized students in our classrooms are simultaneously invisible and yet overexposed. <a href="https://youtu.be/kDcz44ifdQw?t=3152">Ruha Benjamin describes coded exposure</a> as:</p>

<blockquote>
  <p>[Naming] the tension between ongoing surveillance of racialized populations and calls for digital recognition and inclusion, the desire to literally be seen by technology—but inclusion into harmful systems is no clear good. Rather, the act of viewing something or someone can put the object of our vision at risk, a form of scopic vulnerability central to the feeling of being racialized. It’s not only the process of being out of sight but also in the danger of being too centered that racialized groups are made vulnerable.</p>
</blockquote>

<p>Last quarter, I implemented a simpler version of this infrastructure using <a href="/flipped-classroom-remote-teaching/">parallel Zoom sessions hosted by TAs for active learning and a central livestream broadcast from the instructor to all the students for bursts of direct instruction</a>. But the setup struggled with the overhead of moving between tools and the lack of a real-time communication channel from students back to the instructor. What makes a large classroom feel more personal than a prerecorded video is the synchronous communication between students, instructors, and each other—but, unlike Zoom affordances, this communication is rarely broadcasted to all participants at all times.</p>

<p>What ohyay brings to the table is a unified platform for seamless switching between different communication channels—to opt-into the ways that students want to engage in at any time. I hope that the ohyay classroom makes space for students to build meaningful relationships over time with the students in their team but also participate in the larger classroom-level communication happening over chat and emoji. Give the <a href="https://ohyay.co/gallery_item.html?itemId=ws_ChixQFAf"><strong>CSE 373 ohyay</strong></a> workspace a try—ohyay is free through at least the rest of the year.</p>]]></content><author><name>Kevin Lin</name></author><summary type="html"><![CDATA[A seamless, community-centered active learning classroom in ohyay!]]></summary></entry><entry><title type="html">Do Abstractions Have Politics?</title><link href="https://kevinl.info/do-abstractions-have-politics/" rel="alternate" type="text/html" title="Do Abstractions Have Politics?" /><published>2021-01-05T00:00:00+00:00</published><updated>2021-01-05T00:00:00+00:00</updated><id>https://kevinl.info/do-abstractions-have-politics</id><content type="html" xml:base="https://kevinl.info/do-abstractions-have-politics/"><![CDATA[<ul>
  <li>Author’s version: <a href="https://arxiv.org/abs/2101.00786">arXiv:2101.00786</a></li>
  <li><a href="https://docs.google.com/presentation/d/15zeZD-ADtFcVLwti1J4sPU07wt4t2i-3xnD4Y9P4Sb8/edit?usp=sharing">Slides</a></li>
</ul>

<blockquote>
  <p>The expansion of computer science (CS) education in K–12 and higher-education in the United States has prompted deeper engagement with equity that moves beyond inclusion toward a more critical CS education. Rather than frame computing as a value-neutral tool, a justice-centered approach to equitable CS education draws on critical pedagogy to ensure the rightful presence of political struggles by emphasizing the development of not only knowledge and skills but also CS disciplinary identities. While recent efforts have integrated ethics into several areas of the undergraduate CS curriculum, critical approaches for teaching data structures and algorithms in particular are undertheorized. Basic Data Structures remains focused on runtime-centered algorithm analysis.</p>

  <p>We argue for affordance analysis, a more critical algorithm analysis that centers an affordance account of value embedding. Drawing on critical methods from science and technology studies, philosophy of technology, and human-computer interaction, affordance analysis examines how the design of abstractions such as data structures and algorithms embody affordances, which in turn embody values with political consequences. We illustrate 5 case studies of how affordance analysis refutes social determination of technology, foregrounds the limitations of data abstractions, and implicates the design of algorithms in disproportionately distributing benefits and harms to particular social identities within the matrix of domination.</p>
</blockquote>

<p><strong>Affordance analysis</strong> is an algorithm analysis that makes space for the rightful presence of critical counternarratives by centering design justice in software engineering. In my own data structures and algorithms courses, we apply a comparative approach and interleave sociotechnical applications for each abstraction.</p>

<ol>
  <li>Autocomplete data structures and algorithms for <strong>search suggestions</strong> and <strong>DNA indexing</strong>.</li>
  <li>Priority queue data structures for <strong>content moderation</strong> and <strong>shortest paths</strong>.</li>
  <li>Shortest paths algorithms for <strong>seam carving</strong> and <strong>navigation directions</strong>.</li>
</ol>
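<p>As a hypothetical illustration of this comparative framing (not code from the paper), the same priority queue abstraction can back both a content moderation queue and a shortest paths search—what the priority <em>means</em> is a design choice with social consequences in one application and purely algorithmic consequences in the other:</p>

```python
import heapq

# Content moderation: surface the most-reported posts first.
# Priorities are negated report counts, since heapq is a min-heap.
reports = [(-5, "post A"), (-2, "post B"), (-9, "post C")]
heapq.heapify(reports)
most_urgent = heapq.heappop(reports)[1]  # "post C" (9 reports)

# Shortest paths (Dijkstra's algorithm): explore the closest vertex first.
# Here the priority is a distance; the toy graph below is illustrative.
graph = {"s": [("a", 2), ("b", 7)], "a": [("b", 3)], "b": []}
frontier = [(0, "s")]
dist = {}
while frontier:
    d, v = heapq.heappop(frontier)
    if v in dist:
        continue  # already finalized with a shorter distance
    dist[v] = d
    for w, weight in graph[v]:
        if w not in dist:
            heapq.heappush(frontier, (d + weight, w))
# dist == {"s": 0, "a": 2, "b": 5}
```

<p>The data structure is identical in both uses; affordance analysis asks what each choice of priority foregrounds, and for whom.</p>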

<p>While affordance analysis contributes a design justice lens for analyzing software, I’m certainly not the first to integrate ethical reflection and social responsibility into computing curricula. The recently-released <a href="https://foundation.mozilla.org/en/what-we-fund/awards/teaching-responsible-computing-playbook/">Teaching Responsible Computing Playbook</a> captures much of the work produced by the <a href="https://foundation.mozilla.org/en/what-we-fund/awards/responsible-computer-science-challenge/">Responsible CS Challenge</a> award winners. Several teams have released packaged materials and modules to enable adoption at other institutions.</p>

<ul>
  <li><a href="https://ethicalcs.github.io/">Ethical Reflection Modules for CS 1</a> by <a href="http://www.eg.bucknell.edu/~emp017/">Evan Peck</a> with contributions from <a href="https://justinnhli.com/">Justin Li</a>.</li>
  <li><a href="https://responsibleproblemsolving.github.io/">Responsible Problem Solving</a> by <a href="http://www.cs.utah.edu/~suresh/">Suresh Venkatasubramanian</a>, <a href="http://sorelle.friedler.net/">Sorelle Friedler</a>, <a href="http://cs.brown.edu/~seny/">Seny Kamara</a>, and <a href="https://cs.brown.edu/~kfisler/">Kathi Fisler</a>.</li>
  <li><a href="https://www.cenportal.org/">Computing Narratives</a> by Stacy Doore’s team.</li>
  <li><a href="https://c4sg.cse.buffalo.edu/projects/Teaching%20Responsible%20Computing.html">Teaching Responsible Computing</a> by <a href="http://engineering.buffalo.edu/industrial-systems/people/faculty-directory/bolton-matthew.html">Matthew Bolton</a>, <a href="https://engineering.buffalo.edu/computer-science-engineering/people/faculty-directory/varun-chandola.html">Varun Chandola</a>, <a href="https://cse.buffalo.edu/~hartloff/index.html">Jesse Hartloff</a>, <a href="https://cse.buffalo.edu/~mhertz/">Matthew Hertz</a>, <a href="https://engineering.buffalo.edu/computer-science-engineering/people/faculty-directory/kenny-joseph.html">Kenneth Joseph</a>, <a href="https://nsr.cse.buffalo.edu/?page_id=272">Steve Ko</a>, <a href="https://www.law.buffalo.edu/faculty/facultyDirectory/manes-jonathan.html">Jonathan Manes</a>, <a href="https://engineering.buffalo.edu/computer-science-engineering/people/faculty-directory/atri-rudra.html">Atri Rudra</a>, <a href="http://ap.buffalo.edu/People/faculty/department-of-architecture-faculty.host.html/content/shared/ap/students-faculty-alumni/faculty/Shepard.detail.html">Mark Shepard</a>, and <a href="https://cse.buffalo.edu/~jwinikus/">Jennifer Winikus</a>.</li>
  <li><a href="https://data.berkeley.edu/hce-curriculum-packages">Human Context and Ethics Curriculum Packages</a> by the <a href="https://data.berkeley.edu/hce-team">HCE Team</a> including student curriculum developers Alyssa Sugarman, Sneha Somaya, <a href="https://www.linkedin.com/in/eva-newsom-777147142/">Eva Newsom</a>, <a href="https://www.linkedin.com/in/sydneytrieu/">Sydney Trieu</a>, <a href="https://www.linkedin.com/in/carlosortizdev">Carlos Ortiz</a>, and <a href="https://www.linkedin.com/in/samantharaucher/">Sammy Raucher</a>.</li>
  <li><a href="https://sites.gatech.edu/responsiblecomputerscience/">Embedding Ethics in CS Classes Through Role Play</a> by Ellen Zegura, Jason Borenstein, Benjamin Shapiro, Amanda Meng, and Emma Logevall.</li>
  <li><a href="https://vsd.ccs.neu.edu/">Value Sensitive Design @ Khoury College</a> by <a href="https://cbw.sh/">Christo Wilson</a>, <a href="https://www.northeastern.edu/csshresearch/ethics/research/public-scholarship/ronald-sandler/">Ronald Sandler</a>, <a href="https://cssh.northeastern.edu/person/matthew-kopec/">Matthew Kopec</a>, <a href="http://johnbasl.net/">John Basl</a>, <a href="https://www.northeastern.edu/csshresearch/ethics/people/postdocs/">Kevin Mills</a>, and <a href="https://www.avathomaswright.com/">Ava Thomas Wright</a>.</li>
</ul>

<p>Jessica Dai’s reflection on <a href="http://www.theindy.org/2235">The Paradox of Socially Responsible Computing</a> identifies the centrality of power as raised in other work such as Sasha Costanza-Chock’s <a href="https://design-justice.pubpub.org/"><em>Design Justice</em></a>. Casey Fiesler’s post on <a href="https://cfiesler.medium.com/what-counts-as-computer-science-31f9dd955ad9">What “counts” as computer science?</a> echoes the anti-political values in computing raised by James Malazita and Korryn Resetar in <a href="http://jesseellin.com/documents/Malazita_Resetar.pdf">Infrastructures of Abstraction</a> in that practitioners “acknowledge ethical and political dimensions” but “encapsulate and divest them from ‘what counts’ as within bounds of the computational aspects of sociotechnical systems and disciplines.” When it comes to integrating discussions of power in our classrooms, Thomas M. Philip suggests 3 <a href="https://youtu.be/-eTQrFPTM1Y">important classroom practices</a>: centering people’s dignity and humanity, ensuring participation structures for dialogue, and expanding disciplinary lenses.</p>
  <li><a href="https://docs.google.com/drawings/d/1H-ebs-3-Nqc2VOx5On-7zFHUg044M5jc2gCs0Z-vkRw/edit?usp=sharing">Poster</a></li>
</ul>

<p>Jayne Everson, Leah Perlmutter, Ken Yasuhara, Kevin Lin, Brett Wortzman</p>

<blockquote>
  <p>The design of instructional assessment ecology can affect student engagement, motivation, and experience in a course. Research has shown that traditional, single-attempt, points-based grading in particular can <a href="https://doi.org/10.1177/1469787418819728">increase anxiety and avoidance of challenging courses</a>. Rather than serve to motivate students, traditional grading can demotivate students and reinforce <a href="https://doi.org/10.1145/3291279.3339413">negative self-assessments of ability</a> when grades fail to validate student effort and learning. When situated in the broader assessment ecology, traditional grading that emphasizes normative value judgments is incompatible with frameworks for moving “<a href="https://doi.org/10.3102/0013189X20927363">beyond equity as inclusion</a>” and toward rightful presence: justice-oriented teaching and learning predicated on the deconstruction of oppressive power dynamics. While <a href="https://computinged.wordpress.com/2020/07/27/proposal-2-to-change-cs-education-to-reduce-inequity-stop-allocating-rationing-or-curving-down-grades/">changing grading practices</a> alone is not sufficient to achieve rightful presence, it can be a necessary first step toward creating a more just and equitable classroom.</p>

  <p>This poster describes our experiences with <a href="http://rtalbert.org/specs-grading-iteration-winner/">specifications-based mastery grading</a> in two large, undergraduate introductory programming courses enrolling nearly 1,400 students in total. Rather than averaging together point scores from multiple assignments, final grades were awarded based on the number of satisfactory assignments completed by the end of the quarter. Smaller assignments were graded either satisfactory (S) or not yet (N), while larger assignments were graded on an expanded, 4-level scale. Students were given the opportunity to revise and resubmit any unsatisfactory work at regular intervals throughout the quarter.</p>

  <p>We seek to understand how students perceive mastery grading as contributing to a growth mindset. In feedback collected at mid-quarter, in both courses, students almost unanimously agreed that the opportunity to resubmit work facilitated learning and reduced stress around assignments. We will also present: feedback collected in the end-of-quarter course evaluation on growth mindset and motivations for revising work; considerations for implementing mastery grading in large-enrollment courses, such as handling resubmissions; and faculty conceptions of normative grade distributions and rightful presence.</p>
</blockquote>]]></content><author><name>Kevin Lin</name></author><summary type="html"><![CDATA[Moving a step closer to justice-centered CS education.]]></summary></entry><entry><title type="html">Flipped classroom remote teaching</title><link href="https://kevinl.info/flipped-classroom-remote-teaching/" rel="alternate" type="text/html" title="Flipped classroom remote teaching" /><published>2020-12-16T00:00:00+00:00</published><updated>2020-12-16T00:00:00+00:00</updated><id>https://kevinl.info/flipped-classroom-remote-teaching</id><content type="html" xml:base="https://kevinl.info/flipped-classroom-remote-teaching/"><![CDATA[<p>Our class meets for 50 minutes every Monday, Wednesday, Friday (MWF) in addition to small sections on Tuesday and Thursday (TuTh) led by undergraduate teaching assistants (TAs). I’ll describe the structure of our MWF class sessions. Before class, I have students prepare by completing interactive readings with “Explain in Plain English” questions, Parsons problems, and short quizzes interspersed throughout the reading. (These would be great activities for in-class work too, but I still need to create many more of them!) At the start of each class session, I do some just-in-time-teaching to respond to student questions, which oftentimes means walking through some of the more challenging pre-class preparation activities.</p>

<p>Then, we spend the next 25 minutes as a single block of time for students to work on programming problems in groups using Zoom breakout rooms. Each group moves at their own pace through 2 or 3 programming problems.</p>

<p><strong>Are students randomly assigned to rooms?</strong> In the first week, students are assigned to rooms randomly, since I want them to meet a variety of other students. They can also meet some students through their small section. From the second week onwards, we have students sign up for their preferred in-class problem-solving groups through our learning management system, Canvas. Canvas groups are then imported into Zoom pre-assigned breakout rooms with Justin Hsia’s handy <a href="https://gitlab.cs.washington.edu/jhsia/canvas-api">canvas-api</a>.</p>

<p><strong>Are they assigned ‘roles’ and required to swap roles periodically?</strong> We use large groups of 6 to 8 students in each room. Each room picks a student to screen-share and drive the group programming activity. Everyone else serves as a navigator to help out. I suggest rotating after each problem or at least between each class session. When I was first starting out with this pedagogy in the Spring quarter, we tried out various group sizes before settling on 6 to 8 students. These groups are much bigger than recommended for in-person activities, but it helps in the online environment since not all students may be in a position to participate in their breakout room. Out of those 6 to 8 students, about 3 students will lead the discussion, 2 students will chip in here and there, and the remaining student(s) will be mostly silent. Your mileage may vary, especially if your students come into the course knowing each other already. I’ve had one group this quarter prefer smaller groups of 2 or 3 students, and their interactions were very similar to more traditional pair programming.</p>

<p><strong>Do instructors and/or TAs circulate around observing/answering questions?</strong> Yes, some of our TAs attend class on MWF in addition to leading their TuTh sections. Their role in the MWF class sessions is primarily facilitation of group work and occasionally helping groups get unstuck.</p>

<p><strong>Any best practice to get students actually working together?</strong> The students who had the best experience in this system were the ones who were the most proactive early on about creating a study group that persisted outside of scheduled class meeting times. I chatted with a student last week who had a really great experience because they connected with a few peers from their section through Facebook or LinkedIn and formed their problem solving group through those connections. Next time, I will be taking more intentional steps to try to do this for everyone and fully embrace <a href="https://doi.org/10.1145/3017680.3017727">micro-classes</a>:</p>

<ol>
  <li>Create discussion board posts for students to share video self-introductions in the first week of class so that they get to know each other. This also encourages students to use the discussion board as a community space beyond just learning the concepts.</li>
  <li>Instead of having all students join one massive Zoom meeting, for MWF sessions, have students join their TA-led small section Zoom meeting. This way, they see the same class of 20 or 30 students every day of the week. I plan on broadcasting across all of the sections using a YouTube livestream, which has the added advantage of letting students rewind content in real time. (This also works around the <a href="https://support.zoom.us/hc/en-us/articles/360032752671-Pre-assigning-participants-to-breakout-rooms#h_94a5b1d6-4d7d-47e7-aa09-99d5e03bcaa4">limitations of pre-assigned breakout rooms</a>, which are hard to use in very large classes.)</li>
  <li>Broaden training of TAs to focus more on facilitating group dynamics, interactions, and activities rather than just teaching the concepts per se, which has been the traditional focus of TA training efforts and our TA application selection processes.</li>
</ol>]]></content><author><name>Kevin Lin</name></author><summary type="html"><![CDATA[How we run class in my remote, introductory programming course.]]></summary></entry><entry><title type="html">From peer instruction to POGIL</title><link href="https://kevinl.info/from-peer-instruction-to-pogil/" rel="alternate" type="text/html" title="From peer instruction to POGIL" /><published>2020-03-11T00:00:00+00:00</published><updated>2020-03-11T00:00:00+00:00</updated><id>https://kevinl.info/from-peer-instruction-to-pogil</id><content type="html" xml:base="https://kevinl.info/from-peer-instruction-to-pogil/"><![CDATA[<ul>
  <li><a href="https://docs.google.com/presentation/d/1s0-SIk1DXwGRuNrXDuim6NI4UAAq3JQtAdIWYf20DcE/edit?usp=sharing">Slides</a></li>
</ul>

<blockquote>
  <p>There is overwhelming evidence that active learning is better than completely passive lecture. However, adoption of evidence-based teaching practices has been slow in part because creating new course materials is often a time-consuming and labor-intensive process. Inspired by prior work in the sciences, we describe our experiences deploying guided lecture notes to transition from peer instruction (10% of lecture time) to process oriented guided inquiry learning (POGIL, 50% of lecture time) over two offerings of data structures and algorithms in a large R1 university. In the first offering, we added metacognitive questions in the presentation speaker notes, providing additional scaffolding between pre-lecture reading and in-lecture peer instruction activities. At the beginning of each class session, we distributed guided lecture notes to students by printing the presentation speaker notes alongside lecture slide content. In this way, we were able to seamlessly integrate new supporting materials alongside lecture graphics and examples. In the second offering, we expanded guided lecture notes into POGIL worksheets by migrating most of the remaining passive lecture content to pre-lecture readings and consolidating lecture around three levels of process oriented guided inquiry: (1) metacognitive questions, (2) peer instruction activities scaffolded by the metacognitive questions, and (3) practice problems integrating multiple ideas. The resulting POGIL classroom leverages presentation software as a canvas for introducing problems with graphics and animations while structuring activities around active learning via process oriented guided inquiry.</p>
</blockquote>]]></content><author><name>Kevin Lin</name></author><summary type="html"><![CDATA[Improving equity in CS education with guided lecture notes.]]></summary></entry><entry><title type="html">You’re spamming the autograder</title><link href="https://kevinl.info/youre-spamming-the-autograder/" rel="alternate" type="text/html" title="You’re spamming the autograder" /><published>2019-08-29T00:00:00+00:00</published><updated>2019-08-29T00:00:00+00:00</updated><id>https://kevinl.info/youre-spamming-the-autograder</id><content type="html" xml:base="https://kevinl.info/youre-spamming-the-autograder/"><![CDATA[<ul>
  <li><a href="https://docs.google.com/presentation/d/1WO7fxvEAZnfCv8oqTrbHjm5_a0uAkv4u8sw9bQ_mSqM/edit?usp=sharing">Slides</a></li>
</ul>

<blockquote>
  <p>Autograders provide instant feedback on student work, but they can also harm learning if students grow dependent on autograder feedback to solve problems. The resulting autograder-driven development cycle occurs when students make minor adjustments to their code seemingly at random, submit code to the autograder, and repeat until their program passes all of the given tests. Anecdotal evidence from other instructors suggested that rate-limiting student submissions on the server-side autograder to 3 or 4 submissions per hour was an effective intervention. We hypothesized that introducing a 3-5 minute “cooldown timer” on the client-side autograder could mitigate student over-reliance on autograder feedback by requiring students to spend more time independently debugging, planning, and evaluating their changes before receiving autograder feedback. However, a lack of user-testing, expectations management, and course integration led to students and course staff alike perceiving the cooldown timer as an inconvenience more than a learning opportunity.</p>
</blockquote>

<h2 id="it-seemed-like-a-good-idea-at-the-time">It seemed like a good idea at the time</h2>

<p>Autograders provide instant feedback to students, clarify assignment grading, and reduce staff overhead. However, immediate access to autograders comes with an inherent risk when students, often in a rush to complete an assignment on time, switch to an autograder-driven development cycle where they make minor adjustments to their code seemingly at random, submit it to the autograder, and repeat until their program passes all of the given tests. This learning process is problematic because feedback is not taken into account as students make changes to their code, potentially resulting in GPS Syndrome.</p>

<p>Our autograder infrastructure consisted of two parts: a client with a small set of local tests for instant feedback, and a server with the full set of tests for final grading. The autograder client automatically logs timestamps of student work in case the student’s computer is unable to post student work immediately to the server. Previous instructors for the course had positive experiences rate limiting student submissions on the server-side, allowing students to iterate as frequently as they liked on their own computers using the built-in tests while limiting graded submissions to the server to 3 or 4 per hour. We hypothesized that further limiting access on the client-side would reduce the tendency toward autograder-driven development.</p>

<p>To address this, we added a cooldown timer to the autograder client. After the student’s first 2 attempts, the autograder client would read the time of the latest backup from the metadata logs and prevent the autograder from running if less than a certain amount of time had elapsed, starting with a 3 minute cooldown that increased to 5 minutes after their next successful attempt.</p>
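<p>In sketch form, the client-side check might have looked something like the following. (This is an illustrative reconstruction, not the actual course infrastructure: the log file name, its JSON format, and the helper name are all hypothetical; only the escalating 3-then-5-minute thresholds come from the description above.)</p>

```python
import json
import time
from pathlib import Path

# Cooldown (seconds) indexed by the number of prior attempts:
# first 2 attempts are free, then 3 minutes, then 5 minutes.
COOLDOWNS = [0, 0, 180, 300]

def seconds_until_allowed(log_path="autograder_log.json", now=None):
    """Return 0 if the autograder may run now, else seconds left to wait.

    The log is assumed to be a JSON list of {"timestamp": <epoch seconds>}
    entries, one per prior attempt (a hypothetical format).
    """
    now = time.time() if now is None else now
    path = Path(log_path)
    if not path.exists():
        return 0  # no prior attempts recorded
    entries = json.loads(path.read_text())
    attempts = len(entries)
    cooldown = COOLDOWNS[min(attempts, len(COOLDOWNS) - 1)]
    elapsed = now - entries[-1]["timestamp"]
    return max(0, int(cooldown - elapsed))
```

<p>Note that because the check reads a local log file, deleting the log defeats it entirely—exactly the workaround staff discovered.</p>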

<p>This change, unfortunately, did not land well. While there was no complaining on the online course forum, the most immediate pushback was from course staff. On the third day of class, the cooldown timer was reduced to 60 seconds after a negative reception on the first assignment. Along with this change, the client would now print a descriptive message for students, “You’re spamming the autograder,” and give a generic suggestion, “If you’re stuck on <code class="language-plaintext highlighter-rouge">{question}</code>, try talking to your neighbor, asking for help, or running your code in interactive mode: <code class="language-plaintext highlighter-rouge">python3 -i {files}</code>.”</p>

<p>As it turns out, students did not appreciate this change either. The cooldown period was quickly perceived as an annoyance rather than an opportunity for learning. With only a 60-second timer, the intended benefit of students planning out their changes in advance was never actually realized. Without explicit instructions on how to work with their neighbor, or how to use Python’s interactive mode, the generic suggestion fell on deaf ears. Instead, the cooldown period only prevented students from immediately fixing syntax errors, forcing them to wait an excruciating minute after correcting a typo. In order to work around the cooldown timer, staff even internally shared a clever shell command that deleted the autograder client logs before running the autograder.</p>

<p>All popular changes are memorialized as art, and this is <a href="https://inst.eecs.berkeley.edu/~cs61a/su17/proj/scheme_gallery/#you-re-spamming-the-autograder">no exception</a>.</p>

<h2 id="learning-from-our-mistakes">Learning from our mistakes</h2>

<p>Three major lessons come to mind.</p>

<h3 id="dogfooding-is-good-user-testing-is-better">Dogfooding is good, user testing is better</h3>

<p>Gather data. Instead of just imagining what would happen as a result of a change, actually test it before deployment. Dogfooding, where staff test their own tools, could have revealed some of the problems with the change. However, because staff workflows are different from student workflows, this might not have been sufficient to uncover all of the problems. Proper evaluation of the 60-second cooldown could have revealed that it would slow down students fixing syntax errors without necessarily discouraging autograder-driven development. Small prototypes can be developed to test the waters without committing full resources.</p>

<h3 id="communicate-changes-and-suggest-a-workflow">Communicate changes and suggest a workflow</h3>

<p>This change was deployed largely behind-the-scenes and without proper instructional integration. Students were not told why they were being slowed down, and staff, like students, may have found the cooldown timer counterproductive, as 60 seconds is not long enough to assist another student in the meantime. Neither staff nor students had a plan for how to use those 60 seconds, resulting in what was perceived as wasted time. The original intent for the module may have been diluted by pushback over the first three days of class.</p>

<h3 id="be-explicit-about-the-value-of-learning-process">Be explicit about the value of learning process</h3>

<p>We know that prompting students to try strategies like self-explanation can improve student performance, especially on cognitively demanding tasks such as debugging a program. One addition made after the end of the course was to automatically suggest a problem-solving strategy for the student to try out while they waited. Had this been implemented in the original rollout and integrated into the course messaging and learning activities, student and staff reception may have been more positive.</p>

<p>This result also suggests an interesting question about the appropriate time and place for formative feedback vs. summative feedback. Limits on summative feedback are perceived as an acceptable learning hurdle, while limits on formative feedback hinder students. Other ways to tackle the problem, such as <a href="https://youtu.be/polTBnMXGQI?t=2120">autograder test unlocking</a>, could be perceived as more justified by students. These indirect interventions may be even more effective than this direct intervention.</p>]]></content><author><name>Kevin Lin</name></author><summary type="html"><![CDATA[A lesson in user-testing, expectations management, and course integration.]]></summary></entry><entry><title type="html">A Connector Course for Pre-service CS Teacher Development</title><link href="https://kevinl.info/a-connector-course-for-pre-service-cs-teacher-development/" rel="alternate" type="text/html" title="A Connector Course for Pre-service CS Teacher Development" /><published>2019-06-01T00:00:00+00:00</published><updated>2019-06-01T00:00:00+00:00</updated><id>https://kevinl.info/a-connector-course-for-pre-service-cs-teacher-development</id><content type="html" xml:base="https://kevinl.info/a-connector-course-for-pre-service-cs-teacher-development/"><![CDATA[<ul>
  <li><a href="https://docs.google.com/presentation/d/1LLryDudbNnnw8AoDAXqFSnPXg2ZxXE-k8Yz-XN17PSQ/edit?usp=sharing">Slides</a></li>
</ul>

<p>“With the expansion of computing education in mainstream K–12 schools, the current approach of providing professional development for current teachers will quickly fall short of supporting a sustainable pipeline of computer science teachers for the scale many cities and states have committed to” (<a href="https://drive.google.com/file/d/1DXgpLjl_k87TVpQ0cLusfdjnYySIgIjT/view">Priming the Computer Science Teacher Pump</a>). <a href="https://medium.com/@codeorg/universities-arent-preparing-enough-computer-science-teachers-dd5bc34a79aa">Code.org found that</a>, in 2016, “only 75 teachers graduated from universities equipped to teach computer science.” “We do not reach sustainability with in-service teacher development, though that is where most efforts are today,” says <a href="https://computinged.wordpress.com/2018/04/16/finding-a-home-for-cs-ed-in-schools-of-ed-priming-the-cs-teacher-pump-report-released/">Mark Guzdial</a>.</p>

<p>One challenge of preparing pre-service CS teachers at a large public university such as UC Berkeley is that the program for STEM majors exploring a career in education, CalTeach, does not provide a pathway for Computer Science Education (CSEd). Development of such a program normally involves recruiting faculty and securing internal or external funding. However, Schools of Education are currently “<a href="https://computinged.wordpress.com/2018/04/16/finding-a-home-for-cs-ed-in-schools-of-ed-priming-the-cs-teacher-pump-report-released/">facing enrollment declines and budget cutbacks</a>.” In CS, “computer science classrooms are overflowing at colleges and universities across the United States” at a time when they’re “unable to hire the new faculty they need and must instead restrict access to […] computing classes” (<a href="https://cs.stanford.edu/people/eroberts/ResourcesForTheCSCapacityCrisis/">Resources for the CS Capacity Crisis</a>). Specialized CSEd faculty are even rarer: “There are few researchers with CS education PhDs, and right now few or no active formal CS education PhD programs.”</p>

<p>A <a href="http://dx.doi.org/10.1145/2576872">2014 study by Yadav et al.</a> showed the benefit of integrating computational thinking (CT) into general pre-service teacher training as a week-long module. In this proposal, we suggest extending the study by developing a semester-long ‘connector course’ to support pre-service CS teacher development. A connector course uses concepts from a parent course as a foundation to teach complementary ideas; <a href="https://data.berkeley.edu/education/connectors">25 such courses have been developed and taught at UC Berkeley</a>.</p>

<p>The proposed connector course will complement The Beauty and Joy of Computing (BJC), an introductory “CS0” and AP Computer Science Principles curriculum, making it particularly well-suited as a course for students to take in preparation for a career teaching CS in K–12. The course would offer an introduction to both (a) teaching CS and (b) infusing CS and computational thinking concepts into other subjects in both STEM as well as the humanities. The course will be designed to be broadly accessible to attract both Education students and CS students, building out the beginning of a CSEd program and filling the need expressed by school principals and administrators looking to “<a href="https://drive.google.com/file/d/1DXgpLjl_k87TVpQ0cLusfdjnYySIgIjT/view">hire teachers with requisite backgrounds for computer science instruction</a>.”</p>

<p>The connector course can be launched and taught by current faculty experienced in teaching BJC and CS0 at other institutions of higher education. As the connector course is supported by a parent course, it involves a lower workload than teaching a standalone course in CSEd. It is hoped that the low implementation costs of this program combined with the pedagogical benefits of introducing CT to a broad audience of pre-service teachers will result in adoption by institutions of higher education which have introductory CS courses and faculty to teach them but lack the resources to develop standalone CSEd courses to meet the demand from CS for All Teachers.</p>]]></content><author><name>Kevin Lin</name></author><summary type="html"><![CDATA[Bootstrapping pre-service CS teacher training with CS0.]]></summary></entry><entry><title type="html">A Berkeley View of Teaching CS at Scale</title><link href="https://kevinl.info/a-berkeley-view-of-teaching-cs-at-scale/" rel="alternate" type="text/html" title="A Berkeley View of Teaching CS at Scale" /><published>2019-05-28T00:00:00+00:00</published><updated>2019-05-28T00:00:00+00:00</updated><id>https://kevinl.info/a-berkeley-view-of-teaching-cs-at-scale</id><content type="html" xml:base="https://kevinl.info/a-berkeley-view-of-teaching-cs-at-scale/"><![CDATA[<ul>
  <li>Author’s version: <a href="https://arxiv.org/abs/2005.07081">arXiv:2005.07081</a></li>
  <li>Version of record: <a href="https://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-99.html">UCB/EECS-2019-99</a></li>
</ul>

<blockquote>
  <p>Over the past decade, undergraduate Computer Science (CS) programs across the nation have experienced an explosive growth in enrollment as computational skills have proven increasingly important across many domains and in the workforce at large. Motivated by this unprecedented student demand, the CS program at the University of California, Berkeley has tripled the size of its graduating class in five years. The first two introductory courses for majors, each taught by one faculty instructor and several hundred student teachers, combine to serve nearly 2,900 students per term. This report presents three strategies that have enabled the effective teaching, delivery, and management of large-scale CS courses: (1) the development of autograder infrastructure and online platforms to provide instant feedback with minimal instructor intervention and deliver the course at scale; (2) the expansion of academic and social student support networks resulting from changes in teaching assistant responsibilities and the development of several near-peer mentoring communities; and (3) the expansion of undergraduate teacher preparation programs to meet the increased demand for qualified student teachers. These interventions have helped both introductory and advanced courses address capacity challenges and expand enrollments while receiving among the highest student evaluations of teaching in department history. Implications for inclusivity and diversity are discussed.</p>
</blockquote>]]></content><author><name>Kevin Lin</name></author><summary type="html"><![CDATA[The road to 27,000 student enrollments per year, and what lies ahead.]]></summary></entry><entry><title type="html">Formative feedback in intro CS</title><link href="https://kevinl.info/formative-feedback-in-intro-cs/" rel="alternate" type="text/html" title="Formative feedback in intro CS" /><published>2019-04-26T00:00:00+00:00</published><updated>2019-04-26T00:00:00+00:00</updated><id>https://kevinl.info/formative-feedback-in-intro-cs</id><content type="html" xml:base="https://kevinl.info/formative-feedback-in-intro-cs/"><![CDATA[<p><em>This essay was selected for the 2019 UC Berkeley Teaching Effectiveness Award. The <a href="https://gsi.berkeley.edu/from-40-to-400-to-1400-providing-formative-feedback-in-large-scale-courses/">definitive version</a> can be found on the GSI Teaching &amp; Resource Center website. Below is the author’s version.</em></p>

<p>Students learn computer science (CS) by doing. In CS 61A, a highly-rated introductory CS course at UC Berkeley, students are introduced to new concepts in lecture, go hands-on to learn the solution process in lab and discussion with guidance from teaching assistants, and synthesize multiple ideas on their own or with a partner in coding homeworks and projects. Throughout these activities, students experience aspects of CS such as programming, debugging, program tracing, and analysis while building a sense of community and belonging through small-classroom environments supported by numerous course tutors. However, these activities do not explicitly prepare students for taking high-stakes paper exams, which have been identified as a significant source of stress in the course. Furthermore, to improve the efficiency of their learning, students receive real-time assistance from instructors during lab, discussion, and office hours. While this assistance significantly reduces frustration, it can also shortcut learning when too much help is given: students become reliant on the guidance and then struggle with exams.</p>

<p>I believe introductory CS instructors can maintain their current goal of minimizing student frustration with real-time feedback while also supporting comprehensive student learning by highlighting misconceptions through targeted, formative feedback. In Fall 2016, as a means of providing such formative feedback, I introduced 10-minute paper quizzes in my 40-student discussion sections. Quizzes serve as a check on student understanding after synthesizing and applying ideas in homework but before taking a high-stakes exam. Quiz questions are designed to model what students can expect to see on the exam and highlight key ideas from the previous week’s discussion section. Inspired by the active learning literature, students first take the full 10 minutes to complete their quiz individually and commit to an answer, then break out into small groups to discuss their solution process, before regrouping as a class to summarize their ideas. This sparks lively discussion as students are forced to confront their misconceptions and reflect on their solution process.</p>

<p>After some polishing, my quizzes were deployed to 400 students in CS 61A Summer 2017. While student feedback reported the group discussions as generally helpful, not all discussions were equally effective. From classroom observations, students would often miss small but crucial details. Furthermore, data collected from the course dashboard suggested that students tended not to revisit past assignments. In order to encourage students to review their quizzes, I used Gradescope, an online grading platform, to scan, grade, and share individual quiz feedback with students the same day they took the quiz. Quizzes were graded on completion, not correctness, to encourage students to focus on improvement rather than points. Our exam questions often test multiple concepts and skills, so within each rubric item I explicitly described the embedded learning goals, potential misconceptions, and problem-solving strategies. Students were encouraged to use the rubric to guide their learning and clarify questions from discussion.</p>

<p>Iterating on student and course staff feedback from the summer, I deployed quizzes to 1,400 students in CS 61A Fall 2017. While the detailed rubrics helped students identify areas for improvement, there was no guidance pointing students to relevant resources or additional practice problems to check their understanding. To support these needs, I developed an online study guide accompanying each quiz. Each study guide included a summary of the quiz’s key ideas, problem-solving strategies, and resources for further review, as well as 3–6 problems of increasing difficulty for students to check their understanding. End-of-semester student evaluations suggested that this quiz-study guide system was well-received by students and that it served as an effective use of time in discussion section.</p>

<ul>
  <li><a href="https://drive.google.com/open?id=1fWM9ZNEdC8digPhTVeuLpeCMII-aNWdE">Cover letter</a></li>
  <li><a href="https://drive.google.com/open?id=1Bv7aQF6cOye_Uq30hcubZV5oewOlo6DY">Curriculum vitae</a></li>
  <li><a href="https://drive.google.com/open?id=1vCpS_KdDNBuvDxwi7Bt2IDbOYbNVM7yz">Teaching statement</a></li>
  <li><a href="https://drive.google.com/open?id=1Iu7SgtrIi_U9Kp870AwO7TkjSezXgXvC">Diversity statement</a></li>
</ul>

<p>For students who are interested in following this path, I didn’t start seriously looking into teaching-track faculty jobs until my fourth year of undergraduate study, around the same time I applied to the 5th Year MS program. But there is <a href="https://cs.stanford.edu/people/eroberts/ResourcesForTheCSCapacityCrisis/">huge demand</a> for CS teachers at all levels. CS departments at the university level are also increasingly open to hiring experienced teachers with master’s degrees, though going directly from completing a master’s degree into a teaching faculty position is rare. But I’m certainly not the first to go down this track: a few colleagues include <a href="https://www.countablethoughts.com">Adam Blank</a>, <a href="https://homes.cs.washington.edu/~hschafer/">Hunter Schafer</a>, <a href="http://allisonobourn.com/">Allison Obourn</a>, <a href="https://www.cs.washington.edu/people/faculty/zorahf">Zorah Fung</a>, <a href="https://news.cs.washington.edu/2016/11/23/uw-cse-undergraduate-tas-reunite-and-celebrate/">among many others</a>. The UW undergraduate TA program in particular has been a trailblazer in this regard.</p>

<p>I am very lucky to have had the opportunity to lecture for CS courses twice during my undergraduate career, though I believe you can do just as much (if not more) great work as a TA. What is important is that your application materials make clear your teaching experience and views. (And <a href="https://theprofessorisin.com/2016/09/12/thedreadedteachingstatement/">don’t just say nice things about teaching</a>—it’s about what you can bring to the hiring institution, and what differentiates you from other candidates.)</p>

<p>My feeling is that strong recommendation letters and a proven teaching record are the most important attributes for getting your foot in the door. A strong set of application materials sets up a first impression about your teaching values (and whether those values align with the hiring institution). When I was preparing my materials, I chose to emphasize strategies for teaching at scale since it was the thread that tied together all of my work and matched with the institutions that truly interested me. There were some institutions where this wasn’t such a good fit, and it didn’t take too long for them and for me to figure this out.</p>

<p>The teaching job talk is one of the most important events during the interview. Unlike tenure-track interviews, which tend to follow a more standardized research talk format, many schools put their own twist on the teaching job talk, so I mixed and matched presentations to prepare a customized talk for each institution, reusing materials where possible.</p>

<ul>
  <li><a href="https://docs.google.com/presentation/d/1Gs2UUoyPPhYjcrMo2fPvHfqyKXOSXbNGXxAbVbrF31Q/edit?usp=sharing">Inheritance</a></li>
  <li><a href="https://docs.google.com/presentation/d/18v_7bwPdroeU3nWXZlx6ZVraN2BQHXEffuUmdYbpWfA/edit?usp=sharing">Heaps</a> (cf. <a href="https://docs.google.com/presentation/d/1jJoIAsLMIZ2KxiGCgcxtAfdCyxhFcw9bjRNCCw5MOkw/edit?usp=sharing">updated lecture</a> implemented as part of a real class in Autumn 2019)</li>
  <li><a href="https://docs.google.com/presentation/d/13gIWG0vA2tFHKbXaWZ8QAsBsn9L9zqE5tRXh1HX729s/edit?usp=sharing">Effective Teaching at Scale</a></li>
  <li>And <a href="https://github.com/kevinlin1/teaching-faculty-demos">live coding demos</a> for these presentations.</li>
</ul>

<p>For finding job listings, the <a href="https://cra.org/ads/">CRA Job Announcements</a> page is by far the most complete for candidates looking to teach at R1 universities. Based on my informal records, there were well over 60 open listings for full-time, long-term teaching faculty positions starting in the 2019-2020 academic year, and about two-thirds of them required a master’s as the minimum degree. The <a href="https://sigcse.org/sigcse/membership/mailing-lists.html">SIGCSE-jobs</a> mailing list is also useful for last-minute and special opportunities, but the majority of listings were posted to the CRA ads list first.</p>

<p>For detailed views on the tenure-track faculty job search, plenty of other folks have written about their experiences. I consulted <a href="https://pg.ucsd.edu/index.html#faq">Philip Guo</a> and <a href="https://matt-welsh.blogspot.com/2012/12/how-to-get-faculty-job-part-1.html">Matt Welsh</a>. These were helpful to get insight into the application and interview process. If you have any teaching faculty in your department, I would also suggest connecting with them.</p>

<p>I found the entire job search process incredibly enjoyable and a rare opportunity to broaden my perspective through meeting over a hundred academics in the span of a few months. I couldn’t have asked for any better way to cap off my five year experience at UC Berkeley, and I am grateful to the faculty, staff, and students in the EECS Department for their perpetual support.</p>]]></content><author><name>Kevin Lin</name></author><summary type="html"><![CDATA[A five year, non-traditional path into academia.]]></summary></entry><entry><title type="html">Near-Peer Student Mentors</title><link href="https://kevinl.info/near-peer-student-mentors/" rel="alternate" type="text/html" title="Near-Peer Student Mentors" /><published>2019-03-22T00:00:00+00:00</published><updated>2019-03-22T00:00:00+00:00</updated><id>https://kevinl.info/near-peer-student-mentors</id><content type="html" xml:base="https://kevinl.info/near-peer-student-mentors/"><![CDATA[<ul>
  <li><a href="https://docs.google.com/presentation/d/1tCFBy31qBpC3OPKbmPjEb57hDJdwJx0891KVHTGxR-k/edit?usp=sharing">Slides</a></li>
</ul>

<blockquote>
  <p>The recent explosion in undergraduate CS enrollments has required universities around the country to design new tools, procedures, and pedagogies for teaching larger and larger classes. In this lightning talk, I discuss how an R1 research university has responded to the enrollment crisis by leveraging undergraduate students in critical teaching and mentorship roles, allowing both introductory and advanced courses to expand to meet demand in the absence of additional faculty and without compromising on quality of education. By creating a multi-semester undergraduate teacher training program beginning immediately after students complete their first semester, large courses have been able to support enrollments in excess of 1,800 students in a single section while receiving higher teaching evaluations than ever before. These programs involve interested undergraduate students through a pipeline starting with supervised lab assistance and teacher development experiences before moving to limited-supervision one-on-one tutoring, small-group mentoring, and section leadership.</p>
</blockquote>]]></content><author><name>Kevin Lin</name></author><summary type="html"><![CDATA[Creating scalable learning experiences in large lecture courses.]]></summary></entry><entry><title type="html">Subgoals, Problem Solving Phases, and Sources of Knowledge</title><link href="https://kevinl.info/subgoals-problem-solving-phases-and-sources-of-knowledge/" rel="alternate" type="text/html" title="Subgoals, Problem Solving Phases, and Sources of Knowledge" /><published>2019-02-28T00:00:00+00:00</published><updated>2019-02-28T00:00:00+00:00</updated><id>https://kevinl.info/subgoals-problem-solving-phases-and-sources-of-knowledge</id><content type="html" xml:base="https://kevinl.info/subgoals-problem-solving-phases-and-sources-of-knowledge/"><![CDATA[<ul>
  <li>Short paper: <a href="https://arxiv.org/abs/1901.01465">arXiv:1901.01465</a></li>
  <li>DOI: <a href="https://doi.org/10.1145/3287324.3293712">10.1145/3287324.3293712</a></li>
  <li><a href="https://docs.google.com/drawings/d/1OrfWGp7-o8sI7KJyx4-leY-A8TioXP1IQFKNBDceht4/edit">Poster</a></li>
</ul>

<p>Kevin Lin, David DeLiema.</p>

<blockquote>
  <p>Educational researchers have increasingly drawn attention to how students develop computational thinking (CT) skills, including in science, math, and literacy contexts. A key component of CT is the process of abstraction, a particularly challenging concept for novice programmers, but one vital to problem solving. We propose a framework based on situated cognition that can be used to document how instructors and students communicate about abstractions during the problem solving process. We develop this framework in a multimodal interaction analysis of a 32-minute long excerpt of a middle school student working in the PixelBots JavaScript programming environment at a two-week summer programming workshop taught by undergraduate CS majors. Through a microgenetic analysis of the process of teaching and learning about abstraction in this excerpt, we document the extemporaneous prioritization of subgoals and the back-and-forth coordination of problem solving phases. In our case study, we identify that (a) problem solving phases are nested with several instances of context-switching within a single phase; (b) the introduction of new ideas and information create bridges or opportunities to move between different problem solving phases; (c) planning to solve a problem is a non-linear process; and (d) pedagogical moves such as modeling and prompting highlight situated resources and advance problem solving. Future research should address how to help students structure subgoals and reflect on connections between problem solving phases, and how to help instructors reflect on their routes to supporting students in the problem solving process.</p>
</blockquote>]]></content><author><name>Kevin Lin</name></author><summary type="html"><![CDATA[A complex mangle.]]></summary></entry></feed>