Last year, we designed a feedback system based on our ballot and voting data, and after the World Championship Jamboree we were able to send teams feedback on how the judges voted. We have improved this system in 2013 and can now get feedback to you before the WCJ.
iGEM teams were assessed by judges studying the wikis, examining parts in the Registry, watching presentations, and speaking with teams at their posters. You can get feedback on your team’s performance directly from the judges’ votes, which you can find on the Judging Feedback page.
First, our rubric-assisted judging system reflects the same values that iGEM judges have embraced in previous years: originality, hard work, scientific rigor, usefulness, societal impact, and creativity, to name a few. Second, scores are recorded in the newly redesigned judges’ ballot system.
The new Rubric includes standard grading language that enables judges to easily express what they think about the quality of each aspect of the projects. For example, a judge might be asked ‘Did you find the presentation engaging?’ and can choose one of seven responses, ranging from ‘Kept me on the edge of my seat’ to ‘Put me to sleep’. These options correspond to scores of 6 (best) to 1 (worst). We created a Rubric for the Regional Competitions and will have a new version for the World Championship Jamboree.
The rubric organizes key aspects of iGEM projects under the traditional categories, including the Presentation, Wiki, Poster, and Special Prizes. Judges evaluated each aspect by selecting one response (from strongly positive to negative or neutral) from a simple list.
After every aspect was voted on, all votes were tallied and presented in the form of team rankings for each award. Therefore, every judge who evaluated any aspect of a team’s project contributed directly to that team’s score and ranking. This new system and the theory behind it are based on Balinski and Laraki’s “Majority Judgment” thesis. While we provide you with numerical scores for most categories, we will not release team ranking lists for all of iGEM.
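As a rough illustration of the idea behind Majority Judgment, here is a minimal sketch in Python. It is not iGEM's actual tallying code; the team names, vote lists, and function names are hypothetical, and it simplifies Balinski and Laraki's full tie-breaking procedure down to ranking by each team's lower-median grade:

```python
# Hypothetical sketch of Majority Judgment-style ranking (not iGEM's real code).
# Each team's "majority grade" is the (lower) median of the judges' 1-6 scores;
# teams are then ranked by that grade. The full Balinski-Laraki method adds a
# more elaborate tie-breaking step, omitted here for brevity.
from statistics import median_low

def majority_grade(votes):
    """Lower median of the judges' scores for one team."""
    return median_low(votes)

def rank_teams(scores):
    """Rank teams by majority grade, highest first."""
    return sorted(scores, key=lambda team: majority_grade(scores[team]),
                  reverse=True)

# Hypothetical example: four judges' votes per team.
votes = {"Team A": [6, 5, 5, 4], "Team B": [3, 4, 5, 4]}
print(rank_teams(votes))  # Team A's majority grade (5) beats Team B's (4)
```

Because the median, rather than the mean, determines the grade, a single unusually harsh or generous judge has limited effect on a team's ranking.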
In the regional competitions, the medal criteria were included at the beginning of the rubric, as an introduction to the team and as a way to view how each team self-designated their project. The rubric enabled judges to evaluate each iGEM project with the same metric. Therefore, scores, rankings, and various awards are now more consistent across all regions. This system also allows new judges to learn what we consider important in evaluating an iGEM project.
Because every judge votes on at least some, and often most, aspects of each project, we are able to provide you with these scores. This gives you direct feedback from all the judges on every aspect of your project.
This system may not be perfect, but it represents a great stride forward and contributes to a comprehensive and fair evaluation for each team. We will continue to work on it in the coming years so we can better evaluate all the hard work you, the teams, put into your projects.
Your feedback is presented in the form of a table with two columns. The first is the "Average Score", which is the average of the judges' votes. The second column is the category (in the example below, "Project") with the aspects listed below it. If you have any questions about your feedback, please contact the judging committee at judging AT igem DOT org and put "iGEM 2013 FEEDBACK QUESTIONS" in the subject line.

Access Feedback
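For clarity, the "Average Score" column is simply the arithmetic mean of the judges' numeric votes on an aspect. A minimal sketch, with a made-up set of votes and a hypothetical function name:

```python
# Hypothetical sketch: the "Average Score" is the mean of the judges'
# 1-6 votes on one aspect, shown here rounded to two decimal places.
def average_score(votes):
    return round(sum(votes) / len(votes), 2)

# Example: four judges vote 6, 5, 4, and 5 on an aspect.
print(average_score([6, 5, 4, 5]))  # prints 5.0
```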