Saturday, April 11, 2015

Steps to Create Innovators


How do we create innovators among our students? As I read Tony Wagner’s book, I am compelled not to wait until I finish it before sharing. My initial thoughts about creating student innovators revolve around school structure, curriculum, and instructional practices.
                  I have long believed that the age-based placement of students is an archaic throwback to the era of mass-production industry. We place students in classes by age knowing full well that they arrive in kindergarten with vastly different experiences, skills, and readiness for formal education. We then spend the next thirteen years trying to catch some students up, or having others wait, while implicitly (or explicitly) communicating that some students are not as good as others. The system assumes that students should not be mixed by age even though the age differences within a group approach 20% of a student's life at age five.
                  Instead, we should consider grouping students by age spans and establishing clear descriptors of student skills, abilities, and knowledge at the end of each span. Roughly, I propose grouping students by ages 5-9, 10-14, and 15-18, which coincides with grades K-3, 4-8, and 9-12. A personalized learning program would be implemented that emphasizes student choice and self-assessment. Each age span would have an overall focus. Ages 5-9 would focus on discovery, experimentation, and literacy development. Ages 10-14 would focus on self-awareness, experimentation, and early application. Finally, ages 15-18 would focus on application and innovation. These are general emphases that are not mutually exclusive across the age groups.
                  Curriculum would be more integrated, problem-based, and experiential. Students would receive regular formative assessment as they develop their self-assessment skills. The personalized learning program would allow students to proceed from where they currently are rather than wait for others. Students needing more time would receive differentiated support but would still be expected to reach the exit outcomes for their age span. Every effort would be made for all children to be ready for the next span.
                  Instruction would emphasize student engagement. Just-in-time instruction would include short lessons and plenty of time to interact with new information and assimilate new learning. Formative assessments would occur regularly and would include the student in evaluating progress.
                  To be continued as the incremental and disruptive innovations surface….

Sunday, March 8, 2015

The Future of Testing


In her book, The Test, Anya Kamenetz outlines the possible future of assessments. While the book's primary focus is the shortcomings of standardized testing, Kamenetz provides a well-researched set of possible outcomes for the next generation of assessments.

Ms. Kamenetz organizes four possible futures into teams, each associated with a different animal or object. For example, the Robot team typifies a future where learning-analytics software furnishes a passive, formative assessment of student progress. Similar in some ways to Khan Academy and Knewton, this mastery-based learning model provides a continuous stream of information about student learning and the actions associated with it. The Robot team envisions an invisible, integrated assessment that allows teachers and school administrators to know in real time how well students are learning. Think Big Data in education.
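To make the idea of a continuous data stream concrete, here is a minimal Python sketch of a mastery-style tracker that updates a per-skill estimate as each practice item is answered. The class name, the exponentially weighted update rule, and the starting values are my own illustrative assumptions; they are not the actual algorithms used by Khan Academy, Knewton, or any assessment vendor.

```python
# Illustrative sketch only: a toy mastery tracker in the spirit of the "Robot"
# future, not any vendor's real analytics engine.
from collections import defaultdict


class MasteryTracker:
    """Maintains a running mastery estimate per skill from a stream of item results."""

    def __init__(self, learning_rate=0.3):
        self.learning_rate = learning_rate       # how quickly estimates respond to new evidence
        self.mastery = defaultdict(lambda: 0.5)  # start each (student, skill) at 50% mastery

    def record(self, student, skill, correct):
        """Update the mastery estimate for (student, skill) after one practice item."""
        key = (student, skill)
        current = self.mastery[key]
        evidence = 1.0 if correct else 0.0
        # Exponentially weighted moving average: recent work counts the most.
        self.mastery[key] = (1 - self.learning_rate) * current + self.learning_rate * evidence

    def report(self, student):
        """Real-time snapshot a teacher or administrator could view."""
        return {skill: round(score, 2)
                for (s, skill), score in self.mastery.items() if s == student}


tracker = MasteryTracker()
for skill, correct in [("fractions", True), ("fractions", True), ("ratios", False)]:
    tracker.record("student_42", skill, correct)
print(tracker.report("student_42"))   # -> {'fractions': 0.75, 'ratios': 0.35}
```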

The Monkey team takes a different approach. Here the focus is on socio-emotional learning and the impact of grit, hope, and motivation on learning. This perspective seeks to quantify the intangibles associated with education in order to predict how likely it is that learning is occurring. In this case, surveys are a common vehicle for gathering information. That information seeks to identify the circumstances that influence learning, on the presumption that the conditions of learning matter greatly.

The third group, the Butterfly team, espouses performance-based assessments as a strategy to measure student learning. Incorporating the 21st-century skills of critical thinking, creativity, communication, and collaboration, this assessment future sees project-based learning and presentations as the method of monitoring student learning. Differentiated and individualized, this future relies on rubrics to score learning. Therein lies its value: assessing the real-world skills that matter in work and relationships by using specific criteria to determine levels of learning.

The Unicorn team presents a very different future. Game theory and the authentic application of problem-solving skills dominate an assessment future in which student learning is stealthily scored at a high level of detail. Likening this future to high-definition video compared with the current snapshot model of testing, Kamenetz acknowledges that such assessments are not yet available, but argues that their value will lie in measuring a student's capacity for growth.

I believe each future holds exciting potential, and a blend of these options is likely. The current standardized testing system is already changing. As it moves from paper and pencil to online assessments, the process will become more ongoing rather than an annual event. As analytics improve and become more ubiquitous, we will see an emphasis on formative assessments; the Robot team model will become common. Yet other measures, such as the mindsets and essential skills and habits of the Monkey team, will be of interest for strengthening the conditions of learning. The need to connect formal learning with real-world applications bodes well for performance-based tests that seek to evaluate a more rigorous presentation of learning. Watch for a multiple-measure model that incorporates the essence of these three models.

In the end, the future of testing will continue to be driven by its purpose. Are we focusing on student learning and adjusting learning opportunities, or are we evaluating school systems to monitor the impact of school funding? Regardless, the need to know how well our students are learning will guide policy and action. And who knows, one day a student's game-playing strategies may serve as a benchmark of a successful education.

Monday, January 26, 2015

Beautiful Weather and NCLB


The high-pressure ridge over California continues to create beautiful, warm daytime weather in January. For most of the United States, the West Coast winter weather is postcard perfect. But not in California, where fears of continued drought pervade. In the short term, it's wonderful weather, but there is a deep-seated awareness that today's pleasures will bring tomorrow's pain. We need rain. Pure and simple.
                  Just as the feel-good weather portends a more serious problem, so does the feeling that the sunsetting of No Child Left Behind will bring relief. There is much talk of reducing or eliminating annual standardized testing as members of Congress discuss options. While NCLB had many unintended consequences, it clearly drew attention to underperforming student subgroups.
                  California students living in poverty scored twenty percentage points below the average of all students in reading literacy by the end of third grade. With all students demonstrating a statewide average of just under 50%, economically disadvantaged students and English learners disproportionately bore the brunt of this shortcoming. Issues of truancy, dropouts, and unpreparedness for life after high school reflect the urgency of the situation. State Attorney General Kamala Harris's office released evidence of the impact of truancy and absenteeism in schools, adding to the view that the results of undereducating our youth are easily seen.
                  Often holding limited political capital, parents of these students rely all the more on the schools to effectively educate their children. Challenges associated with poverty and with not having English as a first language limit these parents' options to participate in their children's schools.
                  Legislation that requires attention to the results of underperforming students serves to level the playing field. The larger perspective of state or federal legislation can set policy that local communities are unable to enact, whether due to a lack of resources or of political resolve. In the end, the attention given to the education of underperforming students will bring direct results that improve long-term outcomes.
                  NCLB deserves to be retired, but not without a plan for supporting our neediest students. In other words, despite the beautiful California winter weather, pay attention to saving water, for a lack of good stewardship can bring grave consequences.

Monday, January 19, 2015

Problems of Practice


                  Interested in pursuing a passion that addresses a nagging challenge? Could you be interested in solutions that lead to sustained change? Attending the California League of Schools Technology and Common Core conference, I discovered a set of strategies from Ms. Jennifer Magiera that gets to the heart of a problem of practice.
                  Ms. Magiera is a technology coordinator for schools in Chicago and has developed tools and strategies for engaging students using technology. She accomplished this by establishing steps to work through a problem of practice she had long struggled with. With consistent focus, she developed strategies to engage students by giving them decisions about what they wanted to learn. While this may not be your problem of practice, her strategies for clarifying and addressing a significant challenge can help your own productivity.
                  A problem of practice (PoP) is a classroom-, school-, or district-related challenge that generates a passion to address it. According to Jennifer, a PoP is a problem that keeps you awake at night; it is that constantly surfacing issue that challenges you. She spoke of identifying it, understanding its impact, and devising steps to address it.
                  PoPs range from involving a single individual to a large group (e.g., all elementary reading teachers or the students at a school). Along with the number of people affected, each challenge is rated from low to high in its level of frustration. Starting with the Gripe Jar, Ms. Magiera focused attention on what really bothers us, asking us to individually write challenges down on post-it notes. Once the issues were surfaced and ranked by degree of frustration and level of impact, participants did a gallery walk to observe what others had written, noting the items that resonated (and adding a comment or two). The process concludes by selecting the problem that you have the power to address and that affects the most people, as in the sketch below.
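                  As a thought experiment, that selection step can be captured in a few lines of Python. The sketch below is hypothetical: the gripe names, the fields, and the tie-breaking rule are my own assumptions rather than part of Ms. Magiera's workshop materials.

```python
# A hypothetical sketch of the Gripe Jar triage step, with made-up issues.
gripes = [
    {"issue": "Inconsistent access to student results", "frustration": 4, "people_affected": 30,  "can_address": True},
    {"issue": "District email outages",                 "frustration": 5, "people_affected": 200, "can_address": False},
    {"issue": "No common writing rubric in grade 4",    "frustration": 3, "people_affected": 12,  "can_address": True},
]

# Keep only problems within our power to address, then favor those that
# affect the most people (breaking ties by level of frustration).
candidates = [g for g in gripes if g["can_address"]]
problem_of_practice = max(candidates, key=lambda g: (g["people_affected"], g["frustration"]))
print(problem_of_practice["issue"])   # -> "Inconsistent access to student results"
```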
                  Once identified, the problem of practice is analyzed. Using specially shaped paper notes, all facets of the problem are identified and written on separate notes. The notes are then placed according to their interrelatedness. For example, improving access to student results may mean improving the functionality of the student management program, which in turn means attending trainings to better understand what the system has to offer. These two facets would be related and would share sides of their notes. This creates groupings that expose the impacts and factors to be addressed to overcome the problem of practice. In some cases, one note may have numerous other notes connecting to it; that connection point among a subgroup of notes, or joint, becomes a significant area of focus for addressing the PoP. Finally, the process involves creating a Teacher Individualized Exploration Plan (TIEP), an action plan for addressing the problem of practice.
                  Often we share our frustrations, challenges, or annoyances, but the feeling is fleeting, only to return in the future. The PoP process seeks to identify the critical issues influencing our work and to come up with a plan to address them. Its value is quite apparent. The PoP process could be used when future direction is sought or when a situation is stuck and not progressing. For an individual, these steps could establish a personal direction that is more likely to be sustained. Regardless, this problem of practice toolset presents much potential to help leaders move forward in addressing those nagging challenges. Thank you, Jennifer Magiera.


Sunday, January 11, 2015

Are We Failing to Prepare?

This summer, thousands of students will receive results from the Smarter Balanced Assessment Consortium (SBAC) test. Students, parents, and educators alike will wonder what the information means. How will schools and districts handle their responses? Will the results generate energy for continued growth, provoke reactionary responses, or be dismissed as irrelevant?

School districts are challenged to implement the Common Core State Standards (CCSS) in classrooms while preparing for SBAC testing. Understandably, the CCSS are receiving the greatest focus. In my district, professional learning and curriculum resources are centered on bringing CCSS instruction to students. At the same time, districts have geared up their technology so that SBAC testing is available online to students this spring. Local districts are utilizing interim assessments that mimic SBAC-type test items so students can experience the new expectations in testing form. Teachers are encouraged to adjust instruction based on interim assessment results.

But much less attention is currently being paid to how the SBAC results will be portrayed, understood, and analyzed. Waiting for the results to arrive before deciding what to do will be like waiting for a hurricane to make landfall before planning a response. It will be too late to begin grasping what the information means and how it was created. Knowing beforehand how SBAC will formulate outcomes will place districts miles ahead when the results arrive. This advance knowledge will aid in communicating to parents, students, and staff what to expect, how results will be derived, and what to do next.

Having participated in the in-person scale scoring for SBAC, I write with firsthand experience of the value of understanding how this new test will present results. The new computer-adaptive technology adjusts test items based on previous responses, making percent correct irrelevant and leaving everyone scratching their heads wondering how a scaled score was established. Confusion could lead to circumstances that detract from the test's intent and to calls to reduce or scale back statewide testing. Worst-case scenarios include angry parents wanting to know why the SBAC results are poorer than past state test results, teachers feeling demoralized or dismissive of the results, and students wrongly drawing conclusions about themselves.
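For readers curious about the mechanics, here is a minimal Python sketch of how a computer-adaptive test could select items and turn an ability estimate into a scaled score, assuming a simple one-parameter (Rasch-style) model. The step size, the item bank, and the 2500/100 scaling constants are placeholders chosen for illustration only; this is not SBAC's actual scoring procedure.

```python
# Illustrative sketch of why percent correct loses meaning on an adaptive test.
import math


def p_correct(ability, difficulty):
    """Probability a student at `ability` answers an item of `difficulty` correctly."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))


def update_ability(ability, difficulty, correct, step=0.5):
    """Nudge the ability estimate toward the evidence from one response."""
    expected = p_correct(ability, difficulty)
    return ability + step * ((1.0 if correct else 0.0) - expected)


def next_item(ability, item_bank):
    """Adaptive step: pick the unused item whose difficulty is closest to the estimate."""
    return min(item_bank, key=lambda d: abs(d - ability))


ability = 0.0                      # start at the average of the scale
item_bank = [-2.0, -1.0, 0.0, 1.0, 2.0]
responses = [True, True, False]    # pretend student answers

for correct in responses:
    difficulty = next_item(ability, item_bank)
    item_bank.remove(difficulty)
    ability = update_ability(ability, difficulty, correct)

# A scaled score is a transformation of the ability estimate, so two students
# with the same percent correct can earn different scores.
scaled_score = 2500 + 100 * ability
print(round(scaled_score))         # -> 2549 with these pretend responses
```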

Most importantly, the commitment of resources should lead towards school improvement. The opportunity for this to occur remains. But we must act now to understand how the SBAC results will be created, portrayed, and understood. Join me in participating in this process. Upcoming posts will address these points with the intention of improving our students’ learning.

Sunday, August 24, 2014

Values and Beliefs Impact the School Scorecard

            The other day, a colleague asked my opinion on the contents of a school scorecard. She shared an example from a school, which set my thoughts in motion. As I prepare to lead an upcoming graduate-level course on school program evaluation, I find myself considering the role of values and beliefs in shaping what is considered valuable data. Powerful external accountability policies at the state and federal level have overwhelmed the discussion of what constitutes school success. Yet most educators would quickly say that there is more to school quality than test scores.
            The Balanced Scorecard, developed by Robert Kaplan and David Norton and described in The Institute Way, provides a resource for schools and other organizations to determine how they will measure success. Johnson and Bonaiuto (2008) described how the Needham (MA) School District used the model to identify the qualities of an excellent school, the core competencies graduates should have, and strategies to know whether those competencies are achieved. Working with a broad group of stakeholders, the district found that test scores were mentioned but were not the central element of what constituted success for a school. Instead, elements such as safety, student engagement, quality teaching, preparation for the real world, communication with stakeholders, clean and attractive campuses, going to college, and diversity were identified as marks of an excellent school.
            What process would you use to answer the three Needham School District guiding statements? How would the responses determine what type of data is collected and how schools are evaluated? The answers surface the shared values and beliefs of the school community.

Johnson, G., & Bonaiuto, S. (2008). Accountability with roots. Educational Leadership, 66(4), 26-29.

Rohm, H., Wilsey, D., Stout Perry, G., & Montgomery, D. (2013). The Institute Way. Cary, NC: The Institute Press.

Tuesday, August 19, 2014

Part 2: And Now the What


As you recall from the previous entry, the district was tasked with the challenge of reducing harassment and bullying among students. A program, Caring School Communities, had been selected for use in the K-6 setting, but a nagging feeling persisted, if only implicitly, that the program alone would not prompt change.

Before implementation planning could begin, the core issue needed to be identified. A look at the limited data available provided some valuable evidence. One key finding was that over half of the district's elementary teachers had perceived harassment and bullying to be either a “large problem” or “somewhat a problem” on a survey the previous spring. Additionally, nearly 40% of surveyed K-12 parents conveyed the same sentiment. To complicate matters, the processes principals used to store student behavior data varied across the district, making the collection of past data somewhat problematic.

The general perception was that students need a stronger ability to solve social problems and demonstrate interpersonal skills, and that teachers need support to guide students in building those skills. Improving school culture became the clear need in order to provide a safe and supportive learning environment.

With a more explicit identification of the problem, the justification for adopting the CSC program grew stronger. Principals now had a commonly identified rationale for implementation that would support them as they led their staffs. We then turned to identifying the desired outcomes and how to evaluate their presence.

As the team generated options, it became clear that loss of class time due to playground and classroom social problems was a central issue. The district has had a standing focus on improving attendance, but the loss of instructional time to school-day issues had not received equal billing. Now it did! The team's views coalesced around improving school culture as measured by decreasing the amount of class time lost to harassment and bullying.

Next, the team tackled probably the most difficult task: identifying what data to collect and how to store it for reliable retrieval. As the group struggled with the topic, I wondered whether the difficulty in formulating what to collect and store was associated with a general overreliance on anecdotal data. With perseverance, the team crafted a process for collecting and storing the data. This aspect of the workshop required strong teamwork: openly challenging and suggesting options until the best solution became apparent. The team came away with a system for monitoring incidents as well as the use of specific research-based actions, and it recognized that the CSC program must be implemented with fidelity for its impact to be realized.
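To illustrate the kind of structure the team was after, here is a minimal Python sketch of a common incident record and one of the retrieval questions it would support. The school name, field names, and categories are hypothetical; they are not the district's actual data system.

```python
# A hypothetical sketch of a shared incident record for consistent storage
# and reliable retrieval across schools; fields and values are made up.
from dataclasses import dataclass
from datetime import date


@dataclass
class BehaviorIncident:
    school: str
    incident_date: date
    category: str               # e.g. "harassment", "bullying", "other"
    location: str               # e.g. "playground", "classroom"
    class_minutes_lost: int     # 0 if instruction was not interrupted
    research_based_action: str  # the specific response taken, e.g. "class meeting"


incidents = [
    BehaviorIncident("Lincoln Elementary", date(2014, 9, 3), "bullying", "playground", 15, "restorative conversation"),
    BehaviorIncident("Lincoln Elementary", date(2014, 9, 5), "harassment", "classroom", 0, "class meeting"),
]

# One outcome measure: total class time lost to harassment and bullying.
minutes_lost = sum(i.class_minutes_lost for i in incidents
                   if i.category in ("harassment", "bullying"))
print(f"Class minutes lost this period: {minutes_lost}")   # -> 15
```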

In the end, the team crafted a plan for a district-wide, K-12 impact on school climate and culture. Caring School Communities, while the explicit change, is intended to improve school climate by reducing lost classroom time. Since previous data was limited, a set of five SMART goals focused on both process and outcome measures. Three of the five goals involved a commitment to implementing weekly class meetings with fidelity, a cross-age buddy program involving all elementary classes, and at least three school-wide spirit-building activities during the school year at the elementary, middle, and high schools. The other two goals measure the impact of these efforts by monitoring the change in the number of incidents involving harassment and bullying, both with and without class time lost, and through a tri-annual school climate survey completed by staff and students.

As teacher training begins and plans for implementing the changes are shared, it will be interesting to see whether the time spent on problem identification, desired outcomes, and SMART goals proves impactful. Judging from one principal's gratitude at participating in the process, I believe it will.