Assignment 3

Assignment 3 - Annette & Ruth

**Faculty of Education**

Assignment Cover Sheet
 * Student Name: Ruth Goodman / Annette Purton
 * Student ID: 086814 / 942974
 * For attention of: Tim Moss
 * Unit Code: ESI 449
 * Unit Name: Schools in Society
 * Date Due: 15th October 2010
 * Assignment Title/Number: Assessment Task 3: Final Paper
 * Word Count: 3000
 * **OFFICE USE ONLY** Assignment received:
 * I declare that all material in this assignment is my own work except where there is clear acknowledgement or reference to the work of others, **and** I have complied with and agreed to the University statement on Plagiarism and Academic Integrity on the University website at www.utas.edu.au/plagiarism

Signed………………………………………………. Date 15th October 2010
 * Assessor’s feedback: ..........................................................................................................................
 * ............................................................................. Assessment: ..........................................................
 * Assessor’s Signature (optional): ............................. Dated: ...........................................................


Measuring success: How will/do schools and society measure success in achieving their aims and purposes? What are the implications for my classroom practice?

Australian society often measures school success through national testing. Numeracy and literacy tests are based on the reasoning that setting high standards, making schools accountable and establishing clear, measurable goals can improve student achievement and educational outcomes. National testing is the main tool the Federal Government has implemented to monitor whether schools are meeting the required benchmarks, outcomes and national standards. This essay highlights how solid educational foundations are essential for developing students into contributing global citizens. It analyses educational policies from overseas, namely the American 'No Child Left Behind' Act (NCLB) and British league tables. It then examines the National Assessment Program – Literacy and Numeracy (NAPLAN) testing in Australia and the government website ‘My School’, focusing on the positive and negative effects both have on educational outcomes, the reputation of a school, government funding, teachers, students and stakeholders in the wider community. The essay concludes with a discussion of the implications NAPLAN testing has for classroom teaching and whether the data from these national tests can really improve the quality of education in Australia.

Enhancing students' performance and preparing them for their future as contributing citizens in society is the main purpose and aim of every school. Masters (2010, p. 22) states that "educational research studies have underscored the fundamental importance of literacy and numeracy, not only to educational success, but also to the students' successful transitions into employment and adult life". The business sector agrees, putting considerable pressure on the government to supply skilled and well-educated students whom they can employ; as the education system accounts for considerable government expenditure, this interest in education and its outcomes is not surprising (Masters, 2010). Students leaving institutions with high levels of literacy and numeracy tend to enjoy better earnings, employment opportunities, personal health and well-being. The Governor of NSW, Professor Bashir (2010), stated during the National Public Education Forum in Canberra that "one of the driving forces of Australian prosperity over the last few decades has been the huge improvement in the education attainment of young Australians" (Australian Educator, 2010; Connell, Campbell, Vickers, Welsh, Foley & Bagnall, 2007; Masters, 2010).

Over the last twenty years, education systems all over the world have been held increasingly accountable for student achievement and outcomes. Governments justify this by stating that national testing helps them monitor and improve the quality of school education. In 2001 the Federal Government of the United States of America, led by President George W. Bush, introduced the NCLB policy (Rose, 2009), as the country was progressively falling behind in education outcomes and its students trailed their peers abroad (Peterson & West, 2003). Stakeholders believed that students were leaving schools uneducated and that educational reforms were required. Peterson and West (2003, p. 3) state that the legislation was "premised on the notion that standardised tests can and do measure an important dimension of educational quality". The government also believed that NCLB would reverse the downward trend in student educational outcomes and help ensure educational institutions were all committed to educating their students, leading to both individual success and the economic success of the country (Peterson & West, 2003). Ryan and Shepard (2008, p. 3) state that the Bush Government also believed that "traditional forms of school improvement, such as class size reduction and professional development, are expensive and ineffective" (Peterson & West, 2003; Rose, 2009; Ryan & Shepard, 2008).

The Australian Curriculum, Assessment and Reporting Authority (ACARA) is responsible for developing the NAPLAN tests, which are written by a team of educators who are experts in test construction (ACARA, 2010). NAPLAN testing was introduced in 2008 to replace the National Literacy and Numeracy Plan, established by the Ministerial Council on Education, Training, Employment and Youth Affairs in 1997 (Department of Education, 2007). NAPLAN is a diagnostic tool that measures the ability of all students in Years 3, 5, 7 and 9 in numeracy, reading, writing, spelling, punctuation and grammar. The tests are conducted on the same day in May, under similar test conditions, giving the Australian Federal Government a national snapshot of students' existing skills and knowledge. ACARA believes these tests are the best way to make valid and reliable judgements, providing data that measures, evaluates or indicates student performance across the nation (ACARA, 2010; Masters, 2010).

In Australia, policy revisions in educational governance and accountability are on the increase, keeping pace with international trends and approaches such as National Curriculum Testing in the United Kingdom and NCLB in the United States (Peterson & West, 2003). Ryan and Shepard (2008, p. 192) state that "the role of the NCLB high stakes testing shifts testing from its traditional role of achievement description, ... to holding students, teachers and schools accountable for educational improvement". The then Australian Deputy Prime Minister, Julia Gillard, introduced NAPLAN testing in 2008, modelled on the NCLB Act. The Tasmanian Minister for Education and Skills, Ms Thorp, stated in an e-mail (2010, pers. comm.) that "NAPLAN provides a measure of the progress of Tasmanian students over time and allows for targeted learning programs and other resources to be deployed where required". The government believes that making NAPLAN results transparent will make schools and teachers more accountable and indicate where more funding needs to be spent in Australian schools (ACARA, 2010).

ACARA (2010b) has recently developed a government website called ‘My School’, where a school's NAPLAN results are rated and ranked against other 'like' schools. Stakeholders can obtain information about particular schools, such as a school's overall academic results, student population and student attendance rates. The Department of Education, Employment and Workplace Relations (2010) believes that "the 'My School' website introduces a new level of transparency and accountability to the Australian school system". However, it is important to point out that, when looking at the performance of a school in a particular community, the statistics can be greatly affected by the socio-economic profile of the school population and by the current student intake (Masters, 2010). Year-to-year and class-to-class variation in student proficiency levels can be significant: in one year a class may contain more high achievers, while in the next many students may struggle. A low result for a school could reflect teacher ineffectiveness, new migrant students who have difficulty with the language, or external circumstances such as the recent swine flu epidemic, during which many students were away for lengthy periods (Ryan & Shepard, 2008). The Australian Education Union (AEU) (2010, p. 1) stated in a fact sheet that "the NAPLAN tests were never designed to be used to publicly rank or compare schools against each other". Educational institutions are also concerned that the results of NAPLAN testing will be published as ‘league tables’: published rankings drawn from national testing, which compare schools and measure Australia against international benchmarks (AEU, 2010; Masters, 2010; Ryan & Shepard, 2008).

Brady and Kennedy (2007, p. 125) believe that ‘there seemed to be an obsession in Australia as much with accountability as with curriculum’. NAPLAN testing and accountability have placed schools under an enormous amount of pressure for their students to improve and meet set benchmarks, with many schools ‘teaching for the test’. A Year 9 Maths class in which I have been assisting this year spent a considerable amount of time during first term concentrating on and familiarising students with NAPLAN testing procedures. When I enquired about all the sample tests we had been completing in class, the teacher, Mrs. Henderson (2010, pers. comm.), stated that the results the students obtain in the NAPLAN tests are what she is judged on by her peers, and that her department had become driven by NAPLAN preparation. These remarks gave me an indication of how much pressure NAPLAN testing can place on teaching staff to improve their student outcomes. Researchers also have misgivings about the reliability of Australia’s NAPLAN testing. One such researcher, Dr. Wu, an Australian measurement expert from the University of Melbourne, concludes that large-scale testing is under threat because of unacceptable results and inaccuracies due to administrative challenges, sampling errors and other sources. Moreover, the British system of league tables, ‘naming and shaming’ schools and regular inspections in a bid for accountability has failed to produce any significant improvement in results (Donnelly, 2010).

In a discussion on NAPLAN testing, two senior Maths teachers at the school in which I work questioned why the testing was held in May, only three months into the school year. These teachers felt that NAPLAN would be more beneficial to both students and teachers if the testing were undertaken in late October or early November: this would allow students time to build up skills and knowledge, and the results could form part of students' end-of-year assessment. However, Andrew Jones (2010, pers. comm.), the Assistant Manager of Educational Performance Services at the Tasmanian Department of Education, stated in e-mail correspondence that there were many reasons for holding NAPLAN testing in May, including the timing of school holidays for each state/territory, Easter, mid-year and end-of-year school testing, and the amount of time taken to analyse and publish results. Jones (2010, pers. comm.) concluded by stating that "it is also important for all teachers to develop a 'corporate' responsibility for the achievement of students rather than assuming that it is the responsibility of those in Years 3, 5, 7, and 9 to build students' knowledge and skills to a reasonable level". With national testing held so early in the school year, the Year 7 NAPLAN results are more an indication of students' learning at primary school; high schools can really only be judged on how the same students' Year 9 results vary from their Year 7 results (Department of Education, 2010).

Research suggests that success in schools may be largely dependent on the amount of government funding they receive (Jamrozik, 2005). Schools in areas of socio-economic disadvantage report lower than average outcomes, lower retention rates and poor attendance, in addition to higher teacher absenteeism and transience rates. However, when more funding was provided to public schools following the return of a Labor government in 1983, positive effects such as higher school retention rates and an increase in the number of university students were witnessed almost immediately (Jamrozik, 2005). Unfortunately, this trend did not continue, and according to the AEU (2010) Australia does not compare favourably with other nations of the Organisation for Economic Co-operation and Development (OECD) in expenditure on public schools. Although the rest of the world is investing heavily in education, Australia has (AEU, 2010) "been virtually alone among OECD countries in failing to increase public funding commensurately with increased private funding". Emerson (2006, p. 100) states that research has shown "public education needs at least $2.9 billion in additional annual funding to give every child the opportunity to gain a high quality education" (AEU, 2010; Jamrozik, 2005).

Kevin Donnelly (2010), in an article printed in //The Age//, suggests that educational data collected from NAPLAN testing should be reported by postcode rather than as league tables, with comparisons made only between schools with similar socio-economic profiles. Government funding could then be directed to those schools with the greatest need. By contrast, the Federal Labor Government has only recently adopted the policy that schools should be held accountable by releasing information such as results, standards and teacher performance to the general public. Donnelly (2010) argues that the government’s rationale for making school data public is to enable parents and others to compare schools, measure school performance and raise the standard of underachieving schools. He maintains there are adverse implications to this kind of accountability, the results of which are used to measure teacher performance and evaluate schools, and bases his concerns on the flaws in testing evident in the United States legislation, namely NCLB, and in the British system of annually published league tables. US experts argue that the results are estimations and not a true indication of what a student is capable of (Collins, 2010; Donnelly, 2010).

This suggests that government initiatives such as standardised testing will not raise the level of student achievement or strengthen schools unless public schools, especially those in the lowest socio-economic areas, are supported commensurately with their needs. Indeed, according to education and funding expert Dr. Jim McMorrow (2010), an education revolution will not happen unless public schools are adequately funded, allowing them to reduce class sizes, increase individual attention to students who need it and update facilities, which is the opposite of the Bush NCLB policy. Previous government policies have created a system of gross inequity that, if continued, will result in a cut in real terms for the school system that educates the majority of Australian children (McMorrow, 2010). Furthermore, Dr. McMorrow (2010) advises that an immediate investment of at least $1.5 billion is needed to restore federal public school funding to 1996 levels and ensure equality of opportunity (AEU, 2008; Collins, 2010; McMorrow, 2010).

After discussion and investigation, we found we had developed similar views regarding the implications for our future classroom practice, especially if we were to teach a class that had to be prepared for compulsory NAPLAN testing. The role of the teacher has changed dramatically over the years, none more so than since the introduction of NAPLAN in 2008. As NAPLAN testing is compulsory, many teachers of Years 3, 5, 7 and 9 students may feel as though they are teaching solely to improve their school’s performance, rather than nurturing and developing the potential of their students. The emerging challenges of standardised testing such as NAPLAN can leave teachers feeling frustrated and continually under pressure, because not only is the class performance under scrutiny but also that of the whole school. In addition, we have both worked with teachers of students who excel in certain areas of the curriculum but perform poorly in standardised tests. On the other hand, the mainstream formative and summative assessment that teachers perform regularly as part of their normal curricula is crucial in identifying students’ strengths and weaknesses and can be utilised to promote students to the next level of learning. Rose (2009, p. 47) observes that there is no place for national standardised testing in the classroom; although assessment is an integral part of learning, large-scale testing is not an accurate measure of performance, being far removed from “the cognitive give-and-take of the classroom”. We identified a further implication which will undoubtedly impact negatively on our respective teaching practices: in order to accommodate NAPLAN, teachers are having to restructure their workloads by increasing the time spent on literacy and numeracy to prepare students for the national testing. Donnelly (2010) argues that standardised tests, especially those involving multiple-choice questions, are narrowing the curriculum and taking valuable teaching time away from teachers (Collins, 2010; Donnelly, 2010; Rose, 2009).

NAPLAN is the product of standards-based education reform, founded on the government's belief that setting high standards and establishing measurable goals can improve student outcomes in education, thus improving the quality of Australia's human capital and enhancing national economic competitiveness. Low NAPLAN scores may not necessarily reflect ineffective teaching; there are many variables to take into consideration. Publishing the results of NAPLAN testing and comparing 'like schools' on the 'My School' website led Australia's teachers to threaten industrial action over the misuse of NAPLAN data. The AEU and many teachers perceive NAPLAN testing as a reductive and unsatisfactory way of measuring the success of schools. NAPLAN testing began in 2008, and the 2010 testing showed only a slight improvement on the 2008 results (Ward, 2010). This is the first time a cohort of children has been tested again, and the government will now be able to compare these results and identify areas for improvement. However, this may not be a reliable indicator, as many students and teachers change schools, which can greatly affect the results, especially in smaller schools (Facchinetti, 2010). Schools which perform badly in NAPLAN testing may be labelled as failing or ineffective. This labelling can have a devastating effect on a school and its teachers, who could face political and media ‘bashing’; many schools in the UK and USA have been forced to close due to students constantly underachieving (Peterson & West, 2003). On the other hand, poor results may indicate which schools have the greatest needs, and where outcomes are low and of concern a school may be allocated a larger proportion of government funds and resources (Facchinetti, 2010; Peterson & West, 2003).

In conclusion, the writers have discussed how schools and society measure success in achieving their aims and purposes in today’s society. A large body of research focuses on the importance of literacy and numeracy outcomes in ensuring students’ successful transition into the workforce as the main purpose of education. That being the case, the Federal Labor Government has focused on standardised testing through NAPLAN and on making the results public through the ‘My School’ website. As up-and-coming teachers we are opposed to NAPLAN data being compared across 'like schools', as we believe each school is unique. NAPLAN data gives only a 'snapshot' of a school's performance, yet this snapshot becomes the public image of the school, subjecting teachers and students to possible negative public opinion. We also feel that NAPLAN testing can make teachers 'teach to the test', having a negative impact on their teaching practice. NAPLAN testing is not a helpful tool in measuring the success of a school, and we are yet to see how it helps improve pedagogy and student outcomes.

Collaboratively, we have also researched, discussed and made comparisons to similar initiatives overseas, for example, the NCLB legislation in the United States and the British system of league tables. Much of the literature indicates that standardised testing is not an accurate ‘big picture’ measure of success, nor is it a valid way to make judgments about schools and teachers; much more has to be taken into account such as the socio-economic areas and student learning. Furthermore, our collaborative research combined with our separate experiences working in schools and classrooms enabled us to produce an informed discussion on the implications for our future classroom practice.

**Reference List:**

ACARA. (2010a). //National Assessment Program – Literacy and Numeracy.// Retrieved July 29, 2010.

ACARA. (2010b). //My School.// Retrieved July 29, 2010 from: http://www.myschool.edu.au

Australian Education Union. (2010). //Fact sheets: NAPLAN – My School website – league tables.// Retrieved September 29, 2010 from: http://www.aeufederal.org.au/LT/FSLT1.pdf

Australian Educator. (2010). The power of public: Professor M. Bashir. //Australian Educator//, Australian Education Union, Issue 66, Winter 2010.

Brady, L., & Kennedy, K. (2007). //Curriculum construction//. Frenchs Forest, N.S.W.: Pearson Education Australia.

Collins, R. (2010, October). Getting it right: standardised testing and educational improvement. //Teacher, the national education magazine//, 215, pp. 26-32.

Connell, R., Campbell, C. & Vickers, M., Welsh, A., Foley, D. & Bagnall, N. (2007). //Education, change and society//. South Melbourne, Vic.: Oxford University Press.

Department of Education. (2007). //Tasmanian Curriculum.// Department of Education, Tasmania. Retrieved August 15, 2010.

Department of Education, Employment and Workplace Relations. (2010). Media release: My School website launched. Retrieved July 29, 2010 from: www.deewr.gov.au/Ministers/Gillard/Media/Releases/Pages/Article_100128_102905.aspx

Donnelly, K. (2010). League tables for schools. Retrieved September 15, 2010.

Emerson, C. (2006). //Vital signs, vibrant society.// Sydney, NSW: University of New South Wales Press Ltd.

Facchinetti, A. (2010). NAPLAN: Testing times for teachers. //Education Today//, 10(2), pp. 4-6.

Jamrozik, A. (2005). //Social policy in the post-welfare state: Australian society in the twenty first century// (2nd ed.). Frenchs Forest, NSW: Pearson Education Australia.

Masters, G. (2010, August). NAPLAN and My School: Shedding light on a work in progress. //Teacher, the national education magazine//, 213, pp. 22-23.

McMorrow, J. (2010). Funding scheme favours private schools. Retrieved 6 October, 2010 from: http://www.adelaidenow.com.au/news/national/funding-scheme-favours-private-schools/story-e6frea8c-1225898177691

Peterson, P. E., & West, M. R. (2003). //No child left behind: The politics and practice of school accountability.// Washington, D.C.: The Brookings Institution.

Rose, M. (2009). //Why school? Reclaiming education for all of us.// New York, NY: The New Press.

Ryan, K., & Shepard, L. (2008). //The future of test-based educational accountability.// New York, NY: Routledge.

Ward, B. (2010, September 11). Tassie can do better, tests show. //The Mercury//, p. 13.