Notes from SEEM 2018

A complete summary of the workshop is available on the SEEM 2018 website.

Thank you to all participants for attending this second edition of the very interactive SEEM workshop! The workshop was a success thanks to your enthusiasm, active participation, insights, and experiences. We encourage all of you to:

  • Continue the workshop discussions via our Google discussion group SE-EDU.
  • Summarize your SEEM paper at our blog.
  • Summarize the results of your breakout sessions at our blog or using a poster or (position) paper.

Thank you!

 

An Agile Software Engineering Course with Product Hand-Off

I had the pleasure of presenting a paper at this year’s Software Engineering Education for Millennials (SEEM’18) Workshop at ICSE 2018.  It was a well-organized and enriching experience attended by other passionate software engineering researchers.  Many thanks to Cécile Péraire and Hakan Erdogmus for their efforts in making SEEM’18 happen!

In the paper, I described a novel design for an agile software engineering course that emphasizes keeping product artifacts updated throughout development. The signature transformative event in the course is the mid-semester project “hand-off,” at which point teams trade projects with other student teams and must make immediate progress despite no prior knowledge of the new project’s design, coding conventions, or documentation. In the paper, I describe the course’s features along with their implementation and assessment. A pre-publication PDF of the paper can be found here.

Preliminary Results on Low-Stakes Just-in-Time Assessments

by Hakan Erdogmus*, Soniya Gadgil**, and Cécile Péraire*
*Carnegie Mellon University, Electrical and Computer Engineering
**Carnegie Mellon University, Eberly Center

Background

A previous post talked about a Teaching-As-Research project that we had initiated with CMU’s Eberly Center for Teaching Excellence & Educational Innovation to assess a new teaching intervention.

In our flipped-classroom course on Foundations of Software Engineering, we employ instructional materials (videos and readings) that the students are required to review each week before attending in-class sessions. During the in-class sessions, we normally run a team activity based on the pre-class material.

To better incentivize the students to review the materials before coming to class, in Fall 2017 we decided to introduce a set of low-stakes, just-in-time online assessments. First, we embedded short online quizzes into each weekly module, which the students took before and after covering the module (we called these pre-prep and post-prep quizzes). We also gave the students, at the beginning of each live session, an extra quiz on the same topics (we called this third component an in-class Q&A). All the quizzes had 5 to 11 multiple-choice or multiple-answer questions that were automatically graded.

We deployed a total of 13 such quiz triplets (39 quizzes in all), except that, starting in week 2, the prep quizzes were deployed to alternating groups of students each week. The students were separated into two sections: one week, the first section received the prep quizzes, and the next week, the second section received them. All students received the in-class Q&As, after which the correct answers were discussed with active participation from the students. All components were mandatory in that the students received points for completing the assigned quizzes. However, the actual scores did not matter: the points counted only toward a student's participation grade. The quizzes were thus considered low-stakes.

Research Questions

Our main research question was:

RQ1: How does receiving an embedded assessment before and after reviewing instructional material impact students’ learning of concepts targeted in the material?

To answer the research question, we tested two hypotheses:

H1.1: Students who receive the prep quizzes before and after reviewing instructional materials will score higher on a following in-class Q&A on the same topic.

H1.2: On the final exam, students will perform better on questions based on topics for which they had received prep quizzes, compared to questions based on topics for which they had not received prep quizzes.

As a side research question, we wanted to gauge the overall effectiveness of the instructional materials, leading to a third hypothesis:

RQ2: Do the instructional materials improve the students’ learning?

H2: Students’ post-prep quiz scores on average will be higher than their pre-prep quiz scores.

Results

Pre to Post Gain

First we answer RQ2. Prep quizzes were embedded for twelve of the thirteen modules. Overall, the students who took the prep quizzes improved significantly from the pre-prep to the post-prep quiz. The average scores and standard deviations were as follows:

Quiz timing | Average Score (Standard Deviation)
Pre-prep quiz | 53% (21%)
Post-prep quiz | 71% (18%)

We had a total of 559 observations in this sample. The average relative gain of 34% was statistically significant, with a p-value of .002 according to the paired t-test (dof = 559, t-statistic = 10.39). This result suggests that the students overall had some familiarity with the topics covered, but the instructional materials also contained new information that increased the scores.
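For readers who would like to run a similar analysis on their own quiz data, here is a minimal sketch of the paired comparison and the relative-gain computation. The score arrays and their values are hypothetical placeholders; only the form of the test mirrors the analysis described above.

```python
# Minimal sketch of the pre/post comparison (paired t-test plus relative gain).
# The score arrays are hypothetical; in a study like ours, each pair would hold
# one student's pre-prep and post-prep score for one module, as fractions of the maximum.
import numpy as np
from scipy import stats

pre_scores = np.array([0.40, 0.55, 0.60, 0.45, 0.70])
post_scores = np.array([0.60, 0.70, 0.75, 0.65, 0.85])

t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)  # paired t-test
relative_gain = (post_scores.mean() - pre_scores.mean()) / pre_scores.mean()

print(f"t = {t_stat:.2f}, p = {p_value:.3f}, relative gain = {relative_gain:.0%}")
```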

Performance on In-Class Q&As

We compared the average in-class Q&A scores of students who completed both pre-prep and post-prep quizzes (Treatment Group) to those of students who did not receive the prep quizzes (Control Group). The average scores of the two groups were as follows:

Group (Sample Size) | In-Class Q&A Score
Treatment: with prep quizzes (258) | 64.6%
Control: without prep quizzes (349) | 61.2%

The prep quizzes impacted the students’ learning as measured by their in-class Q&A scores: the improvement from 61.2% to 64.6% was statistically significant according to the independent samples t-test, with a p-value of 0.02 (dof = 605, t-statistic = 2.36). However, the effect size, as measured by Cohen’s d (0.19), was small. Module-by-module improvements were as follows:

Notably, for the quiz Q5 on Object-Oriented Analysis & Design, the control group counterintuitively performed significantly better. We will have to investigate this outlier in the next round.
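For completeness, here is a minimal sketch of how the treatment-versus-control comparison and the effect size could be computed. The two score arrays are hypothetical stand-ins for the per-student in-class Q&A scores of the two groups.

```python
# Minimal sketch of the treatment-vs-control comparison (independent samples
# t-test plus Cohen's d with a pooled standard deviation). Scores are hypothetical.
import numpy as np
from scipy import stats

treatment = np.array([0.70, 0.60, 0.65, 0.72, 0.58])  # with prep quizzes
control = np.array([0.62, 0.55, 0.60, 0.66, 0.57])    # without prep quizzes

t_stat, p_value = stats.ttest_ind(treatment, control)

# Cohen's d using the pooled standard deviation of the two groups.
n1, n2 = len(treatment), len(control)
pooled_sd = np.sqrt(((n1 - 1) * treatment.std(ddof=1) ** 2 +
                     (n2 - 1) * control.std(ddof=1) ** 2) / (n1 + n2 - 2))
cohens_d = (treatment.mean() - control.mean()) / pooled_sd

print(f"t = {t_stat:.2f}, p = {p_value:.3f}, d = {cohens_d:.2f}")
```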

Final Exam Performance

For the final exam, each exam question was mapped to the topic whose content it assessed. Each student’s score on a question was tagged according to whether the student had been in the Control Group or the Treatment Group for that topic, based on the associated prep quizzes. Average scores for control topics and treatment topics were computed for each student first, and then an overall average was computed for each control topic and treatment topic for the whole class. The results were as follows for the two groups:

Group | Average Score on Final Exam (Standard Deviation)
Treatment: with prep quizzes | 58% (17%)
Control: without prep quizzes | 60% (15%)

The differences were not significant according to the independent t-test (dof = 51, t-statistic = .64, p-value = .525). So the prep quizzes embedded in the instructional material did not have any discernible effect on final exam performance.
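To illustrate the per-student aggregation described above, here is a small sketch in which each question score is tagged with the student’s group for the corresponding topic, averaged per student, and then averaged over the class. The data frame and its values are hypothetical; the reported test in our study was the independent samples t-test on the resulting per-student averages.

```python
# Sketch of the final-exam aggregation: per-student averages for treatment and
# control topics, then class-level averages and an independent samples t-test.
# The data frame below is a hypothetical stand-in for the real exam data.
import pandas as pd
from scipy import stats

scores = pd.DataFrame({
    "student": ["s1", "s1", "s1", "s2", "s2", "s2"],
    "group":   ["treatment", "control", "treatment", "control", "treatment", "control"],
    "score":   [0.60, 0.70, 0.50, 0.80, 0.55, 0.65],
})

# Step 1: average per student and group. Step 2: class-level averages.
per_student = scores.groupby(["student", "group"])["score"].mean().unstack()
print(per_student.mean())

t_stat, p_value = stats.ttest_ind(per_student["treatment"].dropna(),
                                  per_student["control"].dropna())
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```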

Expectations and Surprises

We were a bit surprised that the post-prep scores of the students had a low average of around 70% and that the relative improvement was only 34%. We would have expected an average more in the 75-85% range, corresponding to a relative improvement of over 50% over the pre-prep quiz.

The impact of prep quizzes on in-class Q&As was also below expectations, even though, for each topic, both types of assessments had similar questions. One reason might have been the lack of feedback on correct answers in the prep quizzes. After the post-prep quiz, the students could see which answers they got wrong, but the correct answers were not revealed since they would be discussed during the live session after the corresponding in-class Q&A.

The lack of impact on final exam scores was not too surprising, though. The prep quizzes and in-class Q&As had questions at lower cognitive levels (knowledge and comprehension in Bloom’s taxonomy), while the final exam had questions at higher cognitive levels (application, analysis, and synthesis in Bloom’s taxonomy). Also, the final exam is separated significantly in time from the prep quizzes (depending on the topic, by 3 to 13 weeks), with the possible effect that knowledge loss forced all students to re-review the instructional materials to prepare for the final exam, irrespective of whether they had taken the prep quizzes.

What Next?

We have been collecting more data in the subsequent offering of the course, with minor modifications that address some of the possible drawbacks of the first round and yield further information on certain observed effects.

First, we have incorporated better feedback mechanisms into the post-prep quizzes so that, after completing a quiz, the students can see not only which answers they got wrong, but also the correct answers. We will see whether the new feedback affects their in-class Q&A performance.

Second, we will incorporate some low-cognitive-level questions into the final exam, resembling those of the prep quizzes and in-class Q&As. We hope that this change will reveal whether time separation or question complexity is the more important factor in erasing the effect of just-in-time, low-stakes assessments.

Finally, we will have to dig deeper to see why the instructional materials were not as effective as we had hoped, as measured by the post-prep quiz scores. Were the quizzes not enough of an incentive for the students to review the instructional materials? Were the questions misaligned with the instructional materials? Or were the instructional materials not well designed in the first place to achieve their learning objectives? We will try to correlate prep quiz assignments with actual statistics on video viewing and reading material downloads.

Stay tuned!

Collaboration patterns for class activities and meetings

Our graduate-level software engineering foundations course is offered in a mixed flipped-traditional format (see post by Cécile Péraire). Two components of the course require students to run effective meetings and develop good collaboration skills. During live sessions, students perform class activities in a team and are asked to solve a problem in a limited time. Their semester-long team project also requires them to hold frequent planning, status, and reflection meetings. Students’ workloads are heavy, so they need to be efficient. To help students with running their meetings and with face-to-face collaboration in a group setting, we provide them with a catalogue of simple patterns that they can use. We describe each pattern in terms of a symptom and a tactic:

  • The symptom signals a problem, an inefficiency, or a situation that may warrant an intervention.
  • The tactic proposes a suitable intervention that addresses the root cause of the symptom.  The tactic may refer to other patterns for tackling different aspects of the symptom.

I share these patterns below. I hope they are helpful. Let me know if you have different ones, or if you find them to be helpful or unhelpful.

Ice Breaker

  • Symptom: Team members don’t know each other well. They have many priorities to tackle. Everybody is stressed and anxious.
  • Tactic: Start the meeting with small talk and pleasantries. Ask everyone how they are doing or what’s on their mind, or invite them to share something non-work related. To optimize time, use Two-Word Check-in.

Two-Word Check-in

  • Symptom: Team members don’t know each other well. They have many priorities to tackle. Everybody is stressed and anxious. Time is limited: you need to get on with the task.
  • Tactic: Go around the group and ask each person to check in. A check-in is simply an utterance, no more than two words, that captures how the participant is feeling. The words don’t need to relate to work. Example: stressed, eager. OK, you can use three words.

Division of Work

  • Symptom: Time is limited. Everybody is working on the same problem individually at the same time. The tasks are a bit mechanical. Everybody comes up with more or less the same solution. This is wasteful.
  • Tactic: You’re a team: so be a team. Divide the work. Use Timebox. When the time is up, let everyone show their output and aggregate results. “Joe, why don’t you find this topic in lecture slides, while Jia is creating a template. Let’s give ourselves 5 minutes for this. When the time is up, we’ll aggregate the results.”

Pair Work

  • Symptom: Time is limited. Everybody is working on the same problem individually at the same time. The tasks are not that mechanical; they may require more than one brain. The team needs to optimize.

  • Tactic: Split the team into one or two pairs and one or two individuals. Give the pairs the challenging tasks and the individuals the more mechanical tasks. Use Timebox. When the time is up, discuss the outputs and aggregate results.

Moderator

  • Symptom: Chaos reigns, people are shouting over each other; or silence reigns, nobody is speaking; or people are taking irrelevant tangents instead of focusing on the task at hand.
  • Tactic: Appoint one of you as a moderator and let the moderator run the show; if nobody rises to the occasion, appoint yourself as the moderator. The moderator:
    • Polices time, making sure nobody speaks for too long or monopolizes the discussion: “I think we get your point. Perhaps we can let Joe speak a bit. Joe, do you have something to add?”
    • Decides who speaks next, being fair: “Let’s rotate, Jun, you’re next.” (see Round Robin) “Sorry, I can’t understand when more than one person is speaking. Rahul, do you want to go next?”
    • Maintains focus, calling it out when somebody takes a tangent: “I think this is a bit off topic. Let’s park it and come back to it if we have time.”

Scribe

  • Symptom: You had a fruitful discussion, but nobody recorded it. Darn, now you have to present and submit your results.
  • Tactic: Appoint a scribe. The scribe’s role is to take notes. The scribe stops the team if a point is not clear or if the team is moving too fast to keep up with: “Wait a second guys, I can’t write down everything. Can you slow down a bit?” “Let me get this straight, do you mean …?”

Parking Lot

  • Symptom: The team is stuck and hasn’t made any progress in the last 5 minutes; the clock is ticking. Tick tock tick tock.
  • Tactic: Park the task you are currently working on and move on to another task. If you have time, revisit it later.

Round Robin

  • Symptom: A few members dominate the discussion. Others are silent. Or: People are talking over each other.
  • Tactic: Suggest going around the group so that people take turns speaking in an orderly manner. Make sure nobody speaks for too long (see Moderator).

Mini Plan

  • Symptom: You’ve read or listened to the activity description. But it doesn’t give you concrete steps. The team doesn’t know how to proceed.
  • Tactic: Before randomly discussing possible solutions, take 5 minutes to make a mini-plan as a team. Strategize, focusing on the first few steps to get started. The rest may become obvious later. “What shall we do first, and how shall we do it?”

Read Aloud

  • Symptom: It’s not clear whether everybody understands what needs to be done. Task description seems ambiguous.
  • Tactic: Let someone read aloud the task description to the whole team. Ask another member to rephrase it and explain it to the team.

Ask the Expert

  • Symptom: Nobody knows how to proceed. Task description is mysterious. A concept is utterly unclear. Mini Plan didn’t work. Read Aloud didn’t work.
  • Tactic: The TA is right over there. And look, the instructor is nearby too. Ask for help.

Timebox

  • Symptom: There are too many tasks. Time is limited. You have to manage time carefully to deliver something sensible. It will need to be just good enough.
  • Tactic: Decide how much time you’ll spend on each task. Be strict about it. Use Parking Lot if you need to revisit a task. Let the Moderator manage the time.

Iterate

  • Symptom: There are too many tasks. Time is limited. You have to manage it carefully to deliver something sensible. It will need to be just good enough.
  • Tactic: Timebox to come up with a rough solution for each task. Use Parking Lot. Then repeat the process to improve the solutions gradually. This way you can stop at any time and still deliver something.

 

Teaching As Research

CMU’s Eberly Center for Teaching Excellence has outstanding resources to support faculty in their education research endeavors. They advocate an approach called Teaching as Research (TAR) that combines real-time teaching with on-the-fly research in education, for example to evaluate the effectiveness of a new teaching strategy while applying the strategy in a classroom setting.

TAR Workshops

The Eberly Center’s interactive TAR workshops help educators identify new teaching and learning strategies to introduce or existing teaching strategies to evaluate in their courses, pinpoint potential data sources, determine proper outcome measures, design classroom studies, and navigate ethical concerns and the Institutional Review Board (IRB) approval process. Their approach builds on seven parts, each addressing central questions:

  1. Identify a teaching or learning strategy that has the potential to impact student outcomes. What pedagogical problem is this strategy trying to solve?
  2. What is the research question regarding the effect of the strategy considered on student outcomes? Or what do you want to know about it?
  3. What teaching intervention is associated with the strategy that will be implemented in the course as part of the study design? How will the intervention incorporate existing or new instructional techniques?
  4. What sources of data (i.e., direct measures) on student learning, engagement, and attitudes will the instructors leverage to answer the research question?
  5. What study design will the instructors use to investigate the research question?  For example, will collecting data at multiple times (e.g., pre- and post-intervention) or from multiple groups (e.g., treatment and control) help address the research question?
  6. Which IRB protocols are most suitable for the study? For example, different protocols are available depending on whether the study relies on data sources embedded in normally required course work, whether student consent is required for activities not part of the required course work, and whether any personal information, such as student registrar data, is needed.
  7. What are the actionable outcomes of the study? How will the results affect future instructional approaches or interventions?

After reviewing relevant research methods, literature, and case studies in small groups to illustrate how the above points can be addressed, each participant identifies a TAR project. The participants have a few months to refine and rethink the project, after which the center staff follow up to develop a concrete plan in collaboration with the faculty member.

Idea

I teach a graduate-level flipped-classroom course with colleague Cécile Péraire on Foundations of Software Engineering. We have been thinking about how to better incentivize the students to take assigned videos and other self-study materials more seriously before attending live sessions. We wanted them to be better prepared for live session activities and also to improve their uptake of the theory throughout the course. We had little idea about how effective the self-study videos and reading materials were. One suggestion from the center staff was to use low-stakes assessments with multiple components, which seemed like a good idea (and a lot of work). Cécile and I set out to implement this idea in the next offering, but we also wanted to measure and assess its impact.

Our TAR project

Based on the above idea, our TAR project is summarized below in terms of the seven questions.

  • Learning strategy: Multi-part, short, low-stakes assessments composed of an online pre-quiz taken by each student just before reviewing a self-study component, a matching online post-quiz completed right after reviewing the self-study component, and an online in-class quiz on the same topic taken at the beginning of the next live session. The in-class quiz is immediately followed by a plenary session to review and discuss the answers. The assessments are low-stakes in that a student’s actual quiz performance (as measured by quiz scores) does not count towards the final grade, but taking the quizzes is mandatory and each completed quiz counts towards a student’s participation grade.
  • Research question: Our research question is also multi-part. Are the self-study materials effective in conveying the targeted information? Do the low-stakes assessments help students retain the information given in self-study materials?
  • Intervention: The new intervention here consists of the pre- and post-quizzes. The in-class quiz simply replaces and formalizes an alternative technique, based on online polls and ensuing discussion, used in previous offerings.
  • Data sources: Low-stakes quiz scores, exam performance on matching topics, and basic demographic and background information collected through a project-team formation survey (already part of the course).
  • Study design: We used a repeated-measures, multi-object design that introduces the intervention (pre- and post-quizzes) to a pseudo-randomly determined, rotating subset of students. The students are divided into two groups each week: the intervention group A and the control group B. The groups are switched in alternating weeks. Thus each student ends up receiving the intervention in alternate weeks only, as shown in the figure below (a small code sketch of this rotation also appears after this list). The effectiveness of the self-study materials will be evaluated by comparing pre- and post-quiz scores. The effectiveness of the intervention will be evaluated by comparing the performance of the control and intervention groups during in-class quizzes and on related topics of course exams.

  • IRB protocols: Because the study relies on data sources embedded in normally required course work (with the new intervention becoming part of normal course work), we guarantee anonymity and confidentiality, and students only need to consent to their data being used in the analysis, we used an exempt IRB protocol that applies to low-risk studies in an educational context. To be fully aware of all research compliance issues, we recommend that anyone pursuing this type of inquiry consult with the IRB office at their institution before proceeding.
  • Actions: If the self-study materials are revealed to be inadequately effective, we will have to look for ways to revise them and make them more effective, for example by shortening them, breaking them into smaller bits, adding examples or exercises, or converting them to traditional lectures. If the post-quizzes do not appear to improve retention of the self-study materials, we will have to consider withdrawing the intervention and trying alternative incentives and assessment strategies. If we get positive results, we will retain the interventions, keep measuring, and fine-tune the strategy with an eye to further improving student outcomes.
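As a concrete illustration of the alternating design mentioned above, the sketch below generates a week-by-week assignment of two sections to treatment and control roles. The roster, the section split, and the week range are hypothetical; the actual study also relied on pseudo-random assignment of students to sections.

```python
# Sketch of the alternating-weeks design: two sections swap treatment and
# control roles each week, starting in week 2. The roster is hypothetical.
students = ["s01", "s02", "s03", "s04", "s05", "s06"]
section_a = students[: len(students) // 2]
section_b = students[len(students) // 2:]

schedule = {}
for week in range(2, 14):  # weeks 2 through 13
    treatment = section_a if week % 2 == 0 else section_b
    control = section_b if treatment is section_a else section_a
    schedule[week] = {"treatment": treatment, "control": control}

for week, groups in schedule.items():
    print(f"week {week:2d}: treatment={groups['treatment']} control={groups['control']}")
```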

Status

We are in the middle of conducting the TAR study. Our results should be available by early Spring. Stay tuned for a sneak peek.

Acknowledgements

We are grateful to the Eberly Center staff Drs. Chad Hershock and Soniya Gadgil-Sharma for their guidance and help in designing the TAR study. Judy Books suggested the low-stakes assessment strategy. The section explaining the TAR approach is drawn from Eberly Center workshop materials.

Further Information

For further information on the TAR approach, visit the related page of the Center for the Integration of Research, Teaching and Learning (CIRTL). CIRTL is an NSF-funded network for learning and teaching in higher education.

I am a millennial and I care about sustainability (in software engineering)

Surveys among millennials indicate that sustainability is one of their top priorities, for example when deciding on a purchase. I believe it would be interesting to know whether this priority also holds for how millennials see academic curricula, in particular the software engineering curriculum.

In fact, in the last few years, there has been a trend towards including sustainability, and in particular greenability, topics in traditional software engineering curricula. The definition of sustainability that I prefer in this context is the one offered by the United Nations:

Meeting the needs of the present without compromising the ability of future generations to meet their own needs.

There is already evidence of the need for a better focus on sustainability. For example, a paper presented at ICSE 2016 surveyed more than 3800 developers in large companies, such as Google, IBM, and ABB. One of the statements that caught my eye was:

I would love to have more education […] for designing and investigating battery lifetime! Anything to help raise awareness and break through attitude barriers.

What emerged is that, although many developers (many of whom are considered millennials) see sustainability as necessary, there seems to be a lack of teaching on the subject in higher education.

Together with my colleagues, I set up a survey to understand the current state of teaching sustainability in software engineering, targeting researchers and educators who usually publish at and attend conferences and workshops dedicated to the topic.

We found out that:

  • Although the focus is on technical aspects, educators perceive the social and environmental ones as important. In turn, this calls for a multidisciplinary restructuring of the curricula, which is hard to achieve due to lack of time and resources.
  • Sustainability is either taught in short courses or as modules embedded in existing technical ones. The main topic of the classes is energy efficiency. It is mostly the educators who push for such courses rather than the institutions.
  • The main reasons for the lack of sustainability courses are lack of awareness, lack of adequate teaching material and technology support, and the high effort required to come up with a new programme of study.

The open question remains how to develop a curriculum that focuses on sustainability and that at the same time suits the needs of millennial students.

Interestingly, during the workshop, Claudia de Olivera Melo (University of Brasilia) addressed a similar topic, cyber ethics education. Their work is fascinating as it combines a conceptual framework for analyzing cyber ethics SE curricula with an analysis of ACM/IEEE Computing Curricula. They offer suggestions on how to integrate cyber ethics in the curricula which I believe can also suit sustainability topics.

The roundtable discussion centered for most of the time on these two issues. The interest received from other researchers as well as the students attending the workshop gives me hope that, before long, these wicked topics will be at the center of academic pedagogy in SE as well.

If you are interested in teaching sustainability in software engineering and want to exchange ideas about a curriculum that addresses such issues, please get in touch: @dfucci.

Video Lectures: The Good, the Bad, and the Ugly

In my previous post on the flipped classroom, I touched on a key benefit of this approach: immediate faculty feedback during in-class activities, enabling rapid and effective learning.

In this post, I will cover video lectures: The videos that students watch online before coming to class, in order to prepare for in-class activities. We’ll look at the good, the bad, and the ugly of video lectures.

Let’s start with the “good” 

Most students appreciate online videos, because they can watch them (potentially repeatedly) at their own time and pace. Students like the fact that the videos are short and focused on teaching them the key concepts to remember before class.

As faculty, we also appreciate those videos, because they reduce our preparation time before each class, every semester. Indeed, they eliminate the need to review a large slide deck before class in order to get ready for a long monolog. Instead, during class, students do most of the talking and thinking by solving problems related to the concepts introduced during the videos. Faculty preparation is mostly reduced to remembering how to introduce those problems to the students, facilitate the problem-solving activity, and highlight the activity takeaways.

Video lectures have their drawbacks, so let’s continue with the “bad”

Producing and maintaining video lectures can be extremely time-consuming. Below is some advice (taken from Flipping a Graduate-Level Software Engineering Foundations Course) that we received from mentors who helped us produce videos for our Foundations of Software Engineering course:

  • Aim for “good enough”. Shooting perfect videos could take days if one aims for the perfect background, angle, lighting, audio, elocution, timing, etc. Even though all these elements are important, imperfection in the context of video lectures is perfectly acceptable. Hence the video production process can be accelerated greatly by aiming for “good enough”.
  • Keep videos short and focused. Videos should be created to retain students’ attention and maximize learning: they should be kept short (e.g. about 10 minutes at most) and convey a limited number of concepts and key messages. The key messages should be easy to summarize at the end.
  • Include required elements. Elements that should be included in a video are: a (catchy) opening with motivation, agenda, learning objectives, and summary of key messages.
  • Favor pictures over text. Prefer graphics and pictorials over text in visuals.
  • Ask for participation. A video lecture may encourage active participation of the viewer. For example, it may pose a question and ask the viewer to pause and ponder it or solve a problem.
  • Assess understanding during live sessions: Because a faculty member is not present when students consume online videos, it is hard to assess students’ understanding of the content. To overcome this challenge, we often start a class (also called a live session) with a Q&A to clarify or complement the content of the video lectures. In addition, incorporating graded quizzes into Q&A sessions might help “motivate” students to watch the videos.

Here is where it gets “ugly”

Because video lectures take a long time to create and are later difficult to maintain, they have a tendency to freeze the course content. Here is some advice (taken from Flipping a Graduate-Level Software Engineering Foundations Course) to address this problem:

  • Favor principles over fashion. Videos should focus on principles and foundational concepts versus technology and fads to maximize their relevance in fast-evolving subjects. Keep timeless components in; remove volatile components that are likely to become stale. These volatile components could be introduced during live sessions using mini-lectures (e.g. short tech-talk on how to use git) for instance.
  • Stabilize before recording. Video lectures should ideally be created once the content has been tested and stabilized. Unfortunately, we could not follow this advice. We were designing the course almost entirely from scratch, and took many risks with untested content. We later had to revisit and edit existing videos to make changes (which was extremely time-consuming). We also had to eliminate content that did not work. Be prepared to rework or trash some portion when designing a flipped classroom from scratch.

Conclusion for faculty: Is the flipped classroom right for you?

If being an activity facilitator makes you uncomfortable, you might want to stay away from the flipped classroom. Otherwise, do not let the “bad” and the “ugly” discourage you. If, like me, you are not fond of slide presentations but deeply enjoy facilitating workshops, this teaching approach could clearly make teaching easier and more pleasurable. Also, note that it is very possible to replace video lectures with selected readings and videos made by others. That way you retain the benefits of the flipped classroom without the drawbacks of video lecture production.

Conveying the value of the application-oriented exercises in team-based learning

 

Last May in Buenos Aires, we presented our experience in deploying Team-Based Learning in software engineering courses.

Team-Based Learning (TBL) is an Active Learning Methodology where all classroom activities are carried out by teams of students. As such, the instructor takes on the secondary role of enabler of the process and facilitator of the discussions.

In our opinion, one key element of TBL is that the course life-cycle is completely described in the methodology. Therefore, those looking to deploy TBL in their courses should study and follow the guidelines. For that purpose, we recommend the book by Michaelsen et al.

In short, each TBL module is composed of the following activities.

  • Individual Study: so that students come to class with the contents already studied.
  • Individual Readiness Assessment Test (IRAT): so that students can individually evaluate how well they have prepared for the class.
  • Group Readiness Assessment Test (GRAT): where students can check their answer within their team, and the team comes up with possibly new answers to the IRAT.
  • Written appeals: a venue for students to communicate which questions they found were poorly written.
  • Instructor feedback: comes immediately after the previous steps and is evidence-based, drawing on the results of the IRAT and GRAT.
  • Application-oriented activities: this is, in our opinion, the key element of the methodology, the one to which we pay the most attention, and also the one we feel we should improve upon.

Application-oriented activities

Application-oriented activities (AOA) are a set of exercises designed to test team skills and foster discussion. During AOA, students work in teams to solve problems that exercise their understanding of the topic. AOA are where most of the learning takes place in TBL. Unfortunately, our results and satisfaction with AOA have been mixed.

We observed that: 

  • As only the IRAT and GRAT are graded in TBL, when faced with pressures from other subjects in the semester, students skip AOA classes.
  • Lower attendance also introduces the problem that serious games (with some healthy competition among teams) cannot be implemented without a minimum critical mass of students and teams.

This semester we’ve tried a few variations to get students more engaged in AOA, such as: 

  • awarding extra points for contributing to new AOA exercises, and
  • establishing a minimum attendance for AOA classes.

We are still evaluating the effects of these interventions. For the moment, the attendance rate has risen over the last semester.

Do you have any ideas to help us with this problem? We’d love to hear about them. Contact us at:

  • Santiago Matalonga – University of the West of Scotland – santiago.matalonga@uws.ac.uk
  • Alejandro Bia – Universidad Miguel Hernández – abia@umh.es

A Toolkit for Pedagogical Innovation in Software Engineering

I was asked to share some of the pedagogical innovations from two books I recommended once during a talk: Pedagogical Patterns and Training from the Back of the Room. In this post I will focus on Pedagogical Patterns, and leave Training from the Back of the Room for my next post. I will provide an overview of the book and share the insights that I have put in practice. I hope you will be tempted to read them and apply some of these ideas.

Pedagogical Patterns: Advice for Educators is a collection edited by Joe Bergin with the help of a board of editors, including Jutta Eckstein and Helen Sharp, and is the result of the collaborative work of many authors. The book is applicable beyond software engineering, but most of the examples deal with computer software issues. One of the things that resonated with me the most while reading this book is the loving warmth that the writer/teacher’s voice conveys for the work of teaching: love of both the students and the experience of teaching itself.
The book is organised as a pattern catalog, following the Alexandrian format, with the patterns also grouped according to their main theme (active learning, feedback, experiential learning, different perspectives). Here are some of the patterns that have impacted me the most:

* Abstraction Gravity — From High to Low: Introduce concepts that require understanding on two levels of abstraction at the higher level of abstraction first and then at the lower level. We apply this pattern in several forms in our first programming course at UNTREF. First, we apply it in the context of Meyer’s Inverted Curriculum, so that we focus first on understanding and using objects, and then we move on to implementing them. Also, we go from a very abstract use of Java objects using a dynamic interpreter to a more static compiled environment as the course progresses. Some of the issues in this approach are making sure that the link between the two abstraction levels is clear, and that more than one example is described.

* Active Student: Keep students active and engaged in and outside the classroom to make it more likely that they will learn. This means shifting some activities from listening and reading to speaking, writing, solving problems, and interacting. The related pattern Different Approaches promotes organising activities with varied focus across different sensory modalities (visual, auditory, and kinesthetic) that cater to different types of people. We find that active students are more likely to participate during the course, and that is why we make sure that during the first and second classes students are actively engaged physically and mentally (we go out of the classroom, split into “tribes” throughout the classroom, and solve minor puzzles).

* Early Warning: Provide early feedback on students who are falling behind or who lack a clear understanding of issues that might impact them when future activities need to build on previous knowledge. After the first few weeks of our programming course, we start working on exercises. We apply this pattern by explicitly marking the expected rhythm of progress on the exercises, publicly stating which exercise workbook corresponds to that week. For this to actually have impact, we make visible how many students have solved a given number of exercises in the current workbook. To do this, we ask students to raise their hand when we say out loud the number of exercises from the current workbook they have solved up to that point. We count hands for 0, 1, 2, etc., up to the number of exercises in that workbook. This not only marks which workbook students should currently be working on, but also provides gentle peer pressure when they see each other’s progress. We have measured noticeable improvement in the number of exercises completed from the current workbook as the course progresses.

The core tenets I see behind these patterns are a caring attitude toward students and an effort to keep them connected to the learning experience. These are only a few examples of the patterns you will find in the book and of how you can apply them.

I would love to hear your comments on your own experiences and how these ideas resonate with you.