5 things you may not have known about NAPLAN

7 May

For many years I researched the effects of NAPLAN and high-stakes testing. The research I discovered from the pre-NAPLAN years revealed an overwhelming amount of evidence clearly demonstrating the adverse effects high-stakes testing has on students, their wellbeing and their learning. As a passionate educator who advocates for students and teachers, I will admit this nearly brought me to tears, wondering why we ever decided to go down this path. So many years on, and the research speaks for itself.

Beyond the more obvious findings the research offers about high-stakes testing and NAPLAN in recent years, here are five points from my thesis research that are important for schools and parents to know:

1. Like-school NAPLAN comparisons can be misleading.

In an attempt to account for the gap that is evident when students commence their schooling, and to level the playing field among different schools, the Australian Curriculum, Assessment and Reporting Authority (ACARA) uses a measure called the Index of Community Socio-Educational Advantage (ICSEA). Each school is assigned a value each year based on the whole-school student cohort. This value draws on school location, the proportion of Indigenous students enrolled at the school, parent occupation and parent education (Clark, 2010; Australian Curriculum, Assessment and Reporting Authority, 2014). The value is then used to compare the NAPLAN results of schools with similar values.

ICSEA is sensitive to changes in cohorts from year to year. A school's ICSEA value can change annually, meaning it may be compared with a different set of 'statistically similar' schools from one year to the next. This creates a mismatch between ICSEA figures and the actual cohort of students completing the NAPLAN tests: the value is derived by averaging across the entire school population, not just the students sitting the tests. When one cohort of students leaves the school and a new cohort enrols at the commencement of the year, the resulting figure may not reflect the cohorts actually completing NAPLAN in Years 3 and 5, or Years 7 and 9, which in low-transience schools have in most cases remained stable. The problem is even greater for schools with small enrolments, where a change in the circumstances of a few students can shift the data considerably. It is misleading to assign a school a value and use it to compare schools when that value does not accurately represent the cohort of students whose NAPLAN results are being compared (Caldwell, 2011).
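To make the averaging problem concrete, here is a minimal sketch in Python with entirely hypothetical numbers (ICSEA's real calculation is more involved, drawing on parent occupation and education, location and Indigenous enrolment rather than a simple per-student score). It shows how a whole-school average can shift with enrolment turnover even when the tested cohorts are unchanged:

    # Hypothetical illustration only: each number is a made-up
    # per-student "advantage score", not a real ICSEA input.
    tested_cohorts = [1000] * 60              # Years 3 and 5: unchanged this year
    rest_of_school_last_year = [1000] * 240
    rest_of_school_this_year = [950] * 240    # new intake, different circumstances

    def whole_school_average(tested, rest):
        # ICSEA-style value is derived from the entire school population,
        # not just the students who sit NAPLAN.
        cohort = tested + rest
        return sum(cohort) / len(cohort)

    print(whole_school_average(tested_cohorts, rest_of_school_last_year))  # 1000.0
    print(whole_school_average(tested_cohorts, rest_of_school_this_year))  # 960.0

On these assumed figures the whole-school value drops by 40 points, potentially matching the school against a different set of 'statistically similar' schools, even though the students actually sitting NAPLAN are exactly the same.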

2. We can predict student NAPLAN outcomes at the commencement of their schooling.

In Australia, we use a measure known as the Australian Early Development Census (AEDC), based on the Early Development Instrument (EDI) used internationally. The AEDC measures a child’s readiness for schooling across five key domains: social competence; emotional maturity; physical health and wellbeing; language and cognitive skills; and communication skills and general knowledge. These domains closely predict “good adult health, education and social outcomes” (Commonwealth of Australia, 2014), and they cover skills that influence academic achievement (D’Angiulli et al., 2009).

Research demonstrates that, through relevant assessments during a child’s first year of schooling, future student achievement in literacy and numeracy can be accurately predicted (D’Angiulli et al., 2009; Brinkman et al., 2013). This includes predicting future NAPLAN success (Brinkman et al., 2013). That the AEDC instrument can predict future NAPLAN success presents a problem for the Federal Government’s use of NAPLAN data to measure school performance, since student results are largely foreshadowed at the commencement of schooling. The AEDC also supports the importance of focusing on personalised learning needs and individual student growth instead of emphasising externalised high-stakes test results that are predictable.

3. NAPLAN results do not accurately measure the success of teachers or schools.

If we take into account the research above, which tells us that student NAPLAN performance can be predicted at the commencement of schooling, then using NAPLAN results to compare schools and their teachers is arguably unjustified and inaccurate. Schools that have inherited a high socio-economic cohort of students generally perform better in NAPLAN. NAPLAN results tell us little about the quality of teaching and learning a school provides.

“Schools are responsible for growth. Growth is really the only measure that is valid and important.” (Georgina Pazzi)

4. 70% of NAPLAN outcomes are predetermined; the other 30% is questionable.

To add to the previous point, research suggests that approximately 70% of NAPLAN outcomes are already predetermined by factors outside the school, leaving only around 30% that can be attributed to school performance (Cobbold, 2010). Even using that 30% as a basis for comparison is contentious. Many variables in a student’s learning development are not within the control of schools, including absenteeism, trauma, transience, and additional assistance through tutors and programs outside school. There are also factors not captured in the ICSEA calculation that impact student performance on these tests, including gender, disability, school funding and school selection. These would diminish the 30% figure even further, so that school accountability is not easily measurable from one school to the next. This also raises an issue of transparency: the government does not provide full details of what these tests actually measure, or how small the share of results a school can genuinely claim responsibility for really is.
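As a back-of-the-envelope illustration of this point, the sketch below uses the 70/30 split from Cobbold (2010) but entirely hypothetical figures for the uncontrolled factors listed above; it simply shows how quickly the genuinely school-attributable share shrinks:

    # The 70/30 split is from Cobbold (2010); every deduction below is an
    # assumed, illustrative figure, not a measured value.
    predetermined = 0.70                      # socio-educational background
    school_linked = 1.00 - predetermined      # nominally "school performance"

    # Illustrative shares of the remaining 30% outside school control:
    uncontrolled = {
        "absenteeism": 0.05,
        "trauma and transience": 0.05,
        "external tutoring and programs": 0.05,
        "factors missing from ICSEA": 0.05,   # gender, disability, funding, selection
    }

    attributable = school_linked - sum(uncontrolled.values())
    print(f"Share plausibly attributable to the school: {attributable:.0%}")

On these assumed figures the share a school could genuinely claim falls to around 10%, which is the point: the accountability basis is far thinner than the headline 30% suggests.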

5. Student reports are presented in a way that runs counter to the intentions of NAPLAN testing.

Parents receive a student report approximately five months after the NAPLAN tests are completed. Although the report is stated to show a student’s individual achievement against national minimum standards, alongside the range of achievement for the middle 60% of students across Australia (Comber and Cormack, 2011), the way the results are presented visually is highly controversial.

A student’s results are externalised as comparisons: against the school average (if the school opts to display it), against the middle 60% of students nationally, and against the national average. There is no visual indication of whether a student has achieved above or below the minimum standard, which is the prime stated reason for these tests (Senate Standing Committee on Education and Employment, 2014). There is only a short text descriptor on the front page of the report, and a small visual example showing where a child falling below the national minimum standard would sit, separate from the actual results themselves.

Even though the majority of students in Australia do achieve the national minimum standard (Gonski et al., 2011), the report’s focus makes clear that greater importance is placed on comparing students against norms that are inequitable, since students’ circumstances can vary from those of other students within and beyond their school. This inflates the percentage of students seen to be ‘failing’: the visual results place additional students in the below-average range even though they have achieved the national minimum standard.


The Australian Education Act 2013 makes it quite clear in its Preamble that “All students in all schools are entitled to an excellent education, allowing each student to reach his or her full potential so that he or she can succeed, achieve his or her aspirations, and contribute fully to his or her community, now and in the future” (Australian Government, 2013, p. 1), yet the continuing use of high-stakes testing through NAPLAN, and its effects, is inconsistent with this entitlement.

“If young people are to succeed as thinkers, as learners, and as humans who make valuable contributions to society, more must be known about them than their scores on standardised measures of achievement.”  (OECD, 2006, p. 64).


References:

AUSTRALIAN CURRICULUM, ASSESSMENT AND REPORTING AUTHORITY. 2014. National Assessment Program [Online]. Available: http://www.nap.edu.au/.

AUSTRALIAN GOVERNMENT. 2013. Australian Education Act 2013.

BRINKMAN, S., GREGORY, T., HARRIS, J., HART, B., BLACKMORE, S. & JANUS, M. 2013. Associations Between the Early Development Instrument at Age 5, and Reading and Numeracy Skills at Ages 8, 10 and 12: a Prospective Linked Data Study.

CALDWELL, B. 2011. The importance of being aligned. Professional Educator, 10.

CLARK, M. 2010. Evaluating My School. Professional Educator, 9.

COBBOLD, T. 2010. Like School Comparisons Do Not Measure Up. Save our Schools [Online]. Available: http://www.saveourschools.com.au/

COMBER, B. & CORMACK, P. 2011. Education policy mediation: principals’ work with mandated literacy assessment. English in Australia, 46.

COMMONWEALTH OF AUSTRALIA. 2014. Australian Early Development Census [Online]. Available: http://www.aedc.gov.au/.

D’ANGIULLI, A., WARBURTON, W., DAHINTEN, S. & HERTZMAN, C. 2009. Population-level associations between preschool vulnerability and grade-four basic skills. PLoS ONE, 4, e7692.

GONSKI, D., BOSTON, K., GREINER, K., LAWRENCE, C., SCALES, B. & TANNOCK, P. 2011. Review of Funding for Schooling – Final Report.

OECD. 2006. Personalising Education. Paris: Organisation for Economic Co-operation and Development.

SENATE STANDING COMMITTEE ON EDUCATION AND EMPLOYMENT. 2014. The Effectiveness of the National Assessment Program – Literacy and Numeracy (NAPLAN): Senate Report. Canberra, Australia: Senate Standing Committee on Education and Employment.
