Annette's version

= Societies in Australia often measure school success through national testing in numeracy and literacy, based on the belief that setting high standards and establishing measurable goals can improve student achievement and educational outcomes. National testing is the main tool the Federal Government has implemented to monitor whether schools are meeting the required benchmarks, outcomes and national standards. This essay highlights how solid educational foundations are essential for developing students into contributing global citizens. It will analyse two overseas educational policies, the American 'No Child Left Behind' Act (NCLB) and the British league tables, before examining the National Assessment Program – Literacy and Numeracy (NAPLAN) testing in Australia and the government website 'My School'. In particular, it will examine the positive and negative effects that NAPLAN testing and the 'My School' website have on educational outcomes, the reputation of a school, government funding, teachers, students and stakeholders in the wider community. The essay will conclude with a discussion of the implications NAPLAN testing has for teaching in the classroom and whether the data from these national tests can really improve the quality of education in Australia. =
** Measuring success: How will/do schools and society measure success in achieving their aims and purposes? What are the implications for my classroom practice? **

Enhancing student progress and preparing students for their future as contributing citizens in society is the main purpose and aim of many schools. Masters (2010, p. 22) states that "educational research studies have underscored the fundamental importance of literacy and numeracy, not only to educational success, but also to the students' successful transitions into employment and adult life". The business sector agrees, putting considerable pressure on the government to supply skilled and well-educated students whom they can employ; and as the education system accounts for considerable government expenditure, the interest in education and its outcomes is not surprising (Masters, 2010). Students leaving institutions with high levels of literacy and numeracy can expect better outcomes in earnings, employment opportunities, personal health and well-being. The Governor of New South Wales, Professor Bashir (2010), stated during the National Public Education Forum in Canberra that "one of the driving forces of Australian prosperity over the last few decades has been the huge improvement in the education attainment of young Australians" (Australian Educator, 2010; Masters, 2010). = Over the last twenty years, education systems all over the world have been held increasingly accountable for student achievement and outcomes. Governments justify this by stating that national testing helps them monitor and improve the quality of school education. The Federal Government of the United States, led by President George W. Bush, introduced the NCLB policy in 2001 (Rose, 2009) after finding that American students were progressively falling behind their peers abroad in educational outcomes (Peterson & West, 2003). Stakeholders believed that students were leaving school uneducated and that educational reforms were required. Peterson and West (2003, p. 3) state that the legislation was "premised on the notion that standardised tests can and do measure an important dimension of educational quality". The government also believed that NCLB would reverse the downward trend in student educational outcomes and help ensure that educational institutions were all committed to educating their students, leading to both individual success and the economic success of the country (Peterson & West, 2003). Ryan and Shepard (2008, p. 3) state that the Bush Government also believed that "traditional forms of school improvement, such as class size reduction and professional development, are expensive and ineffective" (Peterson & West, 2003; Rose, 2009; Ryan & Shepard, 2008). =

= The Australian Curriculum, Assessment and Reporting Authority (ACARA) is responsible for developing the NAPLAN tests, which are written by a team of educators who are experts in test construction (ACARA, 2010a). NAPLAN testing was introduced in 2008 to replace the National Literacy and Numeracy Plan, established by the Ministerial Council on Education, Employment, Training and Youth Affairs in 1997 (Department of Education, 2007). NAPLAN is a diagnostic tool that measures the ability of all students in Years 3, 5, 7 and 9 in numeracy, reading, writing, spelling, punctuation and grammar. The tests are conducted on the same day in May, under similar test conditions, giving the Australian Federal Government a national snapshot of students' existing skills and knowledge. ACARA believes these tests are the best way to make valid and reliable judgements, providing data that measure and indicate student performance across the nation (ACARA, 2010a; Masters, 2010). =

= In Australia, policy revisions in educational governance and accountability are on the increase, keeping up with international trends and approaches such as National Curriculum testing in the United Kingdom and NCLB in the United States. Both of these policies support standards-based education reform, which is grounded in the belief that setting high standards and establishing measurable goals can improve individual outcomes (Peterson & West, 2003). The then Australian Deputy Prime Minister, Julia Gillard, introduced NAPLAN testing in 2008, modelled on the NCLB act. The Tasmanian Minister for Education and Skills, Ms Thorp, stated in an e-mail (pers. com. 31/08/10) that "NAPLAN provides a measure of the progress of Tasmanian students over time and allows for targeted learning programs and other resources to be deployed where required". The government believes that making NAPLAN results transparent will make schools and teachers more accountable and give an indication of where more funding needs to be spent in Australian schools (ACARA, 2010a). = ** ACARA (2010b) has recently developed a government website called 'My School', where schools' NAPLAN results are rated and ranked against other 'like' schools. Stakeholders are able to obtain information about particular schools, such as a school's overall academic result, student population and student attendance rates. The Department of Education, Employment and Workplace Relations (2010) believes that "the 'My School' website introduces a new level of transparency and accountability to the Australian school system". 
However, when looking at the performance of a school in a particular community, it is important to realise that a school's statistics can be greatly affected by its socio-economic profile and by the current student intake (Masters, 2010). The year-to-year and class-to-class variation in student proficiency levels can be significant: in one year a class may contain many high achievers, while in the next many students may struggle. A low result for a school could reflect teacher ineffectiveness, new migrant students who have difficulty with the language, or external circumstances such as the recent swine flu epidemic, during which many students were away for lengthy periods (Ryan & Shepard, 2008). The Australian Education Union (AEU) (2010, p. 1) stated in a fact sheet that "the NAPLAN tests were never designed to be used to publicly rank or compare schools against each other". There is also concern among educational institutions that the results of NAPLAN testing will be published as 'league tables': published results taken from national testing that rank and compare schools, and Australia as a whole, against certain benchmarks (AEU, 2010; Masters, 2010; Ryan & Shepard, 2008). ** = ** Brady and Kennedy (2007, p. 125) believe that 'there seemed to be an obsession in Australia as much with accountability as with curriculum'. NAPLAN testing and accountability have placed schools under an enormous amount of pressure for their students to improve and meet set benchmarks, with many schools 'teaching for the test'. A Year 9 Maths class in which I have been assisting this year spent a considerable amount of time during first term familiarising students with NAPLAN testing procedures. When I enquired about all the sample tests we had been completing in class, Mrs. Henderson (2010, pers. com.) stated that "the results the students obtain in the NAPLAN tests is what she is judged on". She added that her department had 'become driven by NAPLAN preparation'. These remarks gave me an indication of how much pressure NAPLAN testing can place on teaching staff to improve student outcomes. Researchers also have misgivings about the reliability of Australia's NAPLAN testing. One such researcher, Dr Wu, an Australian measurement expert from the University of Melbourne, concludes that large-scale testing is under threat because of unacceptable inaccuracies arising from administrative challenges, sampling errors and other sources. Moreover, the British system of league tables, 'naming and shaming' schools and regular inspections of schools in a bid for accountability has failed to produce any significant improvement in results (Donnelly, 2010). ** = A discussion with two senior Maths teachers at the school in which I work questioned why NAPLAN testing is held in May, only three months into the school year. These teachers felt that the testing would be more beneficial to both students and teachers if it were undertaken in late October or early November, giving students time to build up skills and knowledge. However, Andrew Jones (2010, pers. com.), the Assistant Manager of Educational Performance Services in the Tasmanian Department of Education, stated in e-mail correspondence that there were many reasons for holding NAPLAN testing in May, including the timing of school holidays for each state/territory, Easter, mid-year and end-of-year school testing, and the amount of time taken to analyse and publish results. Jones (2010, pers. com.) 
concluded by stating that "it is also important for all teachers to develop a 'corporate' responsibility for the achievement of students rather than assuming that it is the responsibility of those in Years 3, 5, 7, and 9 to build students' knowledge and skills to a reasonable level". With the emphasis placed on holding national testing as early as possible in the school year, the Year 7 NAPLAN results are more an indication of learning at primary school; high schools can really only be judged on the performance variations between their Year 7 and Year 9 results (Department of Education, 2010). = = Research suggests that success in schools may be largely dependent on the amount of government funding they receive. Schools in areas of socio-economic disadvantage report lower than average outcomes, lower retention rates and poor attendance, in addition to higher teacher absenteeism and transience rates. However, when more funding was provided to public schools, as demonstrated after the return of a Labor government in 1983, positive effects such as higher school retention rates and an increase in the number of university students were witnessed almost immediately (Jamrozik, 2005, p. 216). Unfortunately, as Jamrozik (2005) observed, this trend did not continue, and according to the AEU (n.d.), Australia does not compare favourably with other nations of the Organisation for Economic Co-operation and Development (OECD) in expenditure on public schools. Although the rest of the world is investing heavily in education, Jamrozik (2005, p. 218) maintained that Australia has "been virtually alone among OECD countries in failing to increase public funding commensurately with increased private funding". Emerson (2006, p. 100) states that research has shown that "public education needs at least $2.9 billion in additional annual funding to give every child the opportunity to gain a high quality education". 
Equitability is not likely to prevail in the near future; rather, the Government is creating a further division between private and public schools. A report in //The Advertiser// by Dr Jim McMorrow, a school funding expert from the AEU, warned that "private schools would get a $2.3 billion rise in federal general recurrent funding for 2012 - almost four times higher than the increase for public schools at $652 million" (McMorrow, 2010). = This suggests that government initiatives such as standardised testing will not raise the level of student achievement nor strengthen schools unless public schools, especially those in the lowest socio-economic areas, are supported commensurately with their needs. A further report in //The Advertiser// by Dr McMorrow (2010) says this is unlikely to happen because "private schools will receive extra funding to pay for the equivalent of 8300 new teachers, compared with only 1670 in public schools, if the current federal school funding model is kept for another four years". According to McMorrow, Australia's education system will not reach its potential unless public schools are adequately funded, allowing them to reduce class sizes, increase individual attention to students who need it and update facilities. McMorrow holds the view that previous government policies have created a system of gross injustice which, if allowed to continue, will result in further inadequacies for the school system that educates the majority of Australian children (McMorrow, 2010). An article in //The Age// written by Kevin Donnelly (2002) suggested that educational data should be provided by postcode rather than league tables, with comparisons made only between schools with similar socio-economic profiles. In that way, assistance such as government funding could be directed to those schools which have the greatest need. 
However, the Federal Labor Government has only recently adopted the policy that schools should be held accountable by releasing information such as results, standards and teacher performance to the general public. Donnelly (2010) argues that the government's rationale for making school data public is to enable parents and others to compare schools, in order to measure school performance and raise the standard of underachieving schools. Donnelly (2010) maintains there are adverse implications to this kind of accountability measure employed by the government, the results of which are used to measure teacher performance and evaluate schools. He bases his concerns on the flaws in testing that are evident in the United States legislation, namely NCLB, and in the British system, in which league tables are published annually. US experts argue that the results are estimations, not a true indication of what a student is capable of (Donnelly, 2010). After much discussion and investigation, we have found that our views regarding the implications for our future classroom practice are similar, especially if we were to teach a class that had to be prepared for compulsory NAPLAN testing. The role of the teacher has changed dramatically over the years, none more so than since the introduction of NAPLAN in 2008. As NAPLAN testing is compulsory, many teachers of Years 3, 5, 7 and 9 students may feel as though they are teaching solely to improve their school's performance, rather than nurturing and developing the potential of their students. The emerging challenges of standardised testing such as NAPLAN can leave teachers feeling frustrated and continually under pressure because not only is the class's performance under scrutiny but also that of the whole school. In addition, we have both worked with teachers of students who excel in certain areas of the curriculum but perform poorly in standardised tests. 
On the other hand, the mainstream formative and summative assessment that teachers perform regularly as part of their normal curricula is crucial in identifying students' strengths and weaknesses, and can be used to move students to the next level of learning. Indeed, Rose (2009, p. 47) observes that there is no place for national standardised testing in the classroom: although assessment is an integral part of learning, large-scale testing is not, being far removed from "the cognitive give-and-take of the classroom" and an inaccurate performance indicator. We both identified a further implication which will undoubtedly impact negatively on our respective teaching practices: in order to accommodate NAPLAN, teachers are having to restructure their workloads, increasing the time spent on literacy and numeracy to prepare students for the national testing. Donnelly (2010) argues that standardised tests, especially those involving multiple-choice questions, are narrowing the curriculum and taking valuable teaching time away from teachers. = NAPLAN is the product of standards-based education reform, which is founded on the government's belief that setting high standards and establishing measurable goals can improve student outcomes in education, thus improving the quality of Australia's human capital and enhancing national economic competitiveness. Low scores from NAPLAN testing do not necessarily mean that someone has not done their job effectively; there are too many variables to take into consideration. Publishing the results of NAPLAN testing and comparing 'like' schools on the 'My School' website has had the teachers of Australia threatening industrial action. The AEU and most teachers see NAPLAN testing as a reductive and unsatisfactory form of measuring the success of schools. 
Although NAPLAN testing has only been implemented in Australia for the last two years, the results of the 2010 testing showed only slight progress from the 2009 results. = = In conclusion, we have discussed how schools and society measure success in achieving their aims and purposes in today's society. A large body of research focuses on the importance of literacy and numeracy outcomes in ensuring students' successful transition into the workforce, this being seen as the main purpose of education. That being the case, the Federal Labor government is focused on standardised testing through NAPLAN and on making the results public through the 'My School' website. Collaboratively, we have also researched, discussed and made comparisons to similar initiatives overseas, for example the NCLB legislation in the United States and the British system of league tables. Much of the literature indicates that standardised testing is not an accurate 'big picture' measure of success, nor a valid way to make judgements about schools and teachers; much more has to be taken into account, such as socio-economic circumstances and student learning. Furthermore, our collaborative research, combined with our separate experiences working in schools and classrooms, enabled us to produce an informed discussion on the implications for our future classroom practice. =

**__ Reference List __**

ACARA. (2010a). //National Assessment Program – Literacy and Numeracy.// Retrieved July 29, 2010 from http://www.naplan.edu.au/

ACARA. (2010b). //My School.// Retrieved July 29, 2010 from http://www.myschool.edu.au

Australian Educator. (2010). The power of public. Professor M. Bashir. Australian Education Union, Issue 66, Winter 2010.

Brady, L., & Kennedy, K. (2007). //Curriculum construction.// Frenchs Forest, NSW: Pearson Education Australia.

Connell, R., Campbell, C., Vickers, M., Welch, A., Foley, D., & Bagnall, N. (2007). //Education, change and society.// South Melbourne, Vic.: Oxford University Press.

Department of Education. (2007). //Tasmanian Curriculum.// Department of Education, Tasmania. Retrieved August 15, 2010 from [].

Department of Education, Employment and Workplace Relations. (2010). //My School website launched// (media release). Retrieved July 29, 2010 from www.deewr.gov.au/Ministers/Gillard/Media/Releases/Pages/Article_100128_102905.aspx

Donnelly, K. (2010). League tables for schools. Retrieved September 15, 2010 from [].

Emerson, C. (2006). //Vital signs, vibrant society.// Sydney, NSW: University of New South Wales Press.

Jamrozik, A. (2005). //Social policy in the post-welfare state: Australian society in the twenty-first century// (2nd ed.). Frenchs Forest, NSW: Pearson Education Australia.

McMorrow, J. (2010). Funding scheme favours private schools. Retrieved October 6, 2010 from http://www.adelaidenow.com.au/news/national/funding-scheme-favours-private-schools/story-e6frea8c-1225898177691

Peterson, P. E., & West, M. R. (2003). //No child left behind: The politics and practice of school accountability.// Washington, D.C.: The Brookings Institution.

Rose, M. (2009). //Why school? Reclaiming education for all of us.// New York, NY: The New Press.

Ryan, K., & Shepard, L. (2008). //The future of test-based educational accountability.// New York, NY: Routledge.