Measuring ‘talent’: lessons from the Indian Olympiad programme
Aniket Sule
“Sir, how do I prepare my students for the Olympiad exams?” I get this question from at least one teacher every time I organize a workshop for school teachers. I smile uneasily, because I consider that an unanswerable question. When I was a student three decades ago, I was never trained to ‘prepare for the exam’. So what has changed in the intervening years that even the best of our teachers worry about exam preparation rather than the holistic development of their students?
The Indian Olympiad Programme
Before we get into the larger philosophical questions, it may be prudent to describe the Indian Olympiad Programme for the benefit of readers. The Government of India recognizes and encourages the participation of Indian teams of high school students in various international academic Olympiads. Our centre in Mumbai is tasked with selecting and training the Indian teams for the Olympiads in astronomy, biology, chemistry, mathematics, physics and junior science. There are other official organizations overseeing the process for the Olympiads in informatics, linguistics, etc. The HBCSE Olympiad cycle starts every year with the enrollment of students (classes 8-12) in August/September. The first stage is a multiple choice examination held in November. The top 300 students are invited to appear for stage 2 of the examination, held in January/February. This second stage expects students to write detailed solutions to the problems posed. From this examination, we select the top 35 students to participate in stage 3, an orientation-cum-selection camp at HBCSE in the April-June period. The top 5 students from this camp become the Indian team for that year and participate in the international Olympiads. Details about this programme can be found on the HBCSE website, https://olympiads.hbcse.tifr.res.in.
What do we test and how do we test it?
In our school or board exams, the emphasis is always on reproducing the exact definition or derivation of a certain formula, or a verbose description of some scientific phenomenon. If there is some ‘problem solving’, it is invariably a ‘paint by numbers’ kind of numerical question, where students write a formula from memory, dutifully substitute the variables with the given numbers and report the final answer. Further, most systems demand that different curriculum ‘chapters’ be treated as separate silos in the final exam, making the process of paper setting even more boring.
MCQs (Multiple Choice Questions) are seen as great saviours by teachers, as they are easy to grade and supposedly impersonal/objective, but they have their own set of problems. First of all, MCQs tend to test only one type of skill, i.e., the ability to spot the best of the four options through a process of elimination. On top of that, most of our question setters seem to be clueless about how to design an MCQ. Picking a random statement from the curriculum and converting it to an MCQ by pulling out a random word is what most people seem to do. But that does not serve any purpose. For any question, the first thing one should ask is, “What do I know about the learning of the examinee, based on their response to this question?” It does not matter whether you seek the response in the form of an MCQ, a fill-in-the-blank or a descriptive answer. If the only thing your question is testing is the ability to memorize, then you need to re-evaluate your question.
On the other hand, the Olympiads strive to test the conceptual clarity, original thinking and analytical ability of the student. We care neither about information storage and retrieval capacity, nor do we make our exams a speed test. We also try to connect different concepts in a single question. Let me give some examples to illustrate the point. A few years ago, we gave the students a 1000-word article about astronauts travelling to the moon, into which we had deliberately inserted a few scientifically inaccurate statements. The task for the students was to correctly identify – with due reasoning – which sentences in the article were inaccurate. In another examination, we gave a detailed write-up on statistics about the prevalence of malaria in different communities, followed by a list of inferences, and asked the students to judge which of the inferences were actually supported by the given data and which were not. We don’t care whether students remember the exact statements of theorems or definitions, as long as they can demonstrate that they have internalized the ideas behind those definitions, like the famous ‘horse and cart’ question testing conceptual understanding of Newton’s third law.
In the astronomy Olympiad selection camp, we go even further. Our selection tests in the camp are open book and flexible in duration. By allowing students to refer to their notes or a formula sheet during the exam, we take away the anxiety of memorizing complicated formulae, and at the same time force the question setters to go beyond memory based questions. In each of our selection tests, towards the end of the pre-announced duration, we ask the examinees whether they need extra time to pen down solution ideas that may still be forming in their minds, and allow an extension accordingly. After all, we are supposed to check how much the students have learnt. If you add constraints such as a strict time limit, the focus of the test invariably shifts from understanding to the speed at which students can respond. There may be some professions where this kind of performance in a stressful environment is a useful or even desirable quality, but it is certainly not a necessary skill for a scientist. Doing scientific research is like running a marathon, not a 100 metre sprint. Our system’s continuous promotion of speed testers and information hoarders as ‘brilliant/talented’ students is directly responsible for the poor uptake of scientific careers in the country. I know this kind of subjective flexibility is impossible for large scale tests, but there is no reason it cannot be implemented in individual schools for their internal tests.
Difficulty level
Next we come to the question, “What is the correct level of difficulty for any benchmarking test?” In our educational system, we award a ‘first class’ at a 60 per cent score, and someone scoring more than 75 per cent secures a ‘distinction’. But if you look at the board exams over the last decade, these words seem to have lost any real meaning. There are thousands of students each year who score more than 98 per cent marks, and probably a lakh or more who score at least 90 per cent. If one lakh students are scoring 90 per cent or more, and many students who secure a ‘distinction’ are still left without admission to their preferred higher education course, then there is something fundamentally wrong with the benchmarking test itself.
Nowadays there is a proliferation of private entities conducting private ‘Olympiads’ in affluent schools. These exams of dubious academic value are conducted right from primary school, and for almost every scholastic subject. They adopt a strategy similar to the boards, setting ridiculously simple tests that result in most exam takers achieving very high scores. Giving medals/prizes/certificates to a large number of kids may boost their confidence and incentivize schools to continue participating in future years. But these exams serve no purpose as far as benchmarking is concerned. They create a false impression of excellence, leading to inflated expectations of the students.
When I joined the Olympiad programme 15 years ago, my predecessor gave me some valuable advice about the ideal difficulty level for any large scale test: “After attempting our test, even the least prepared student should feel satisfied that he/she could attempt some part, and feel motivated that the test may have been accessible with a bit more preparation on the student’s part. On the other hand, even the most prepared student should find some part of the test challenging, so that it motivates him/her to study further.” In other words, almost all students attempting your test should clear the passing threshold (30-40 per cent), but only a tiny percentage of students should cross the 80 per cent barrier. If your post-exam statistics fail on either of these two criteria, you must immediately introspect about the difficulty level of the test.
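For teachers who want to apply this rule of thumb to their own tests, the check is simple arithmetic on the score distribution. Below is a minimal sketch in Python; the function name, the exact cut-offs for “almost all” and “a tiny percentage”, and the sample scores are all illustrative assumptions, not official figures from the HBCSE programme:

```python
def check_difficulty(scores, pass_mark=35, ceiling=80):
    # Fraction of students clearing the passing threshold, and
    # fraction crossing the 80 per cent barrier, per the advice above.
    n = len(scores)
    passed = sum(1 for s in scores if s >= pass_mark) / n
    topped = sum(1 for s in scores if s >= ceiling) / n
    print(f"{passed:.0%} cleared the passing threshold ({pass_mark}%)")
    print(f"{topped:.0%} crossed the {ceiling}% barrier")
    # Illustrative cut-offs, chosen here for the sake of the example:
    if passed < 0.90:
        print("Too hard: many students could not clear the threshold.")
    if topped > 0.05:
        print("Too easy: too many students crossed the ceiling.")

# Made-up percentage scores for ten students:
check_difficulty([32, 45, 51, 38, 62, 77, 85, 41, 55, 49])
```

A test passing both checks sits in the band the advice describes: accessible to nearly everyone, yet with headroom that challenges the best prepared.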
Last words
Coming back to the question with which we started this discussion, “How does one prepare to succeed in the Olympiads?” The answer I often give is, “You cannot prepare for the Olympiads.” If you are a student who is curious about science/mathematics and makes the effort to pursue it beyond the curriculum, you will not need any separate preparation for the Olympiads. Some practice of exam writing may be inevitable, but exams should be seen as pit stops to reflect and regroup. If the exam becomes the goal of learning, then you may taste success at some levels, but somewhere down the road your tank may run empty and you may be left stranded.
We need to make the process of learning STEM enjoyable for students. If we stress students out by forcing them to run a rat race, we are only pushing them away from STEM. We, as teachers, need to realize this soon and execute a course correction.
The author is an associate professor at Homi Bhabha Centre for Science Education (HBCSE-TIFR) in Mumbai. He is the general secretary of the International Olympiad on Astronomy and Astrophysics. His research interests are astronomy and mathematics education at high school level and the history of astronomy in India. He can be reached at aniket.sule@gmail.com.