Monday, September 29, 2014

Success through experimentation with edtech

I need to do some math one of these days: tallying all of the notifications of the "must-have" applications and other technology resources that I ABSOLUTELY POSITIVELY MUST HAVE each school year. Then figure out how often I might be able to use any of those resources, factoring in how long it might take to test and become familiar with each resource and then attempt to integrate it into a lesson.

OR, ed tech coordinators and directors can do as Vicki Davis (@coolcatteacher) advises and just pick three things. What three applications make sense? Perhaps the ed tech coordinator/director can make a recommendation of up to a dozen tech resources that could be useful across the school, regardless of grade or content area. Then teachers divvy up the exploration of and experimentation with each of the resources to report back to the grade level meeting. Think Check, Please or its variations but for teachers and tech resources.

Then each teacher returns to the grade level meeting with recommendations for that particular resource: how it was used, the accompanying learning objectives, how students used the resource and responded to it, discoveries about implementation during and after class, etc. Having a checklist everyone uses would make the conversation easier.

Then, as a grade, teachers decide which of the resources they want to try to use throughout the year knowing that others have tried or are trying the same resources so they have folks of whom they can ask questions and with whom they can collaborate.

This doesn't preclude adding additional resources throughout the year, but it gives teachers a process for trying and evaluating resources, and for having someone else's perspective and insight.

I know there are schools and districts that have ed tech committees made up of a group of educators across grade levels and content areas who work with the ed tech coordinator or director (and that person's team if he or she is fortunate enough to have an actual team) to review new apps and resources and then make recommendations about which ones to examine more closely. That is, I think, an ideal situation but not always possible. Still, having others who can make recommendations based on some agreed-upon criteria not only helps the ed tech coordinator but also increases the likelihood of a broader range of resources being examined.

As Ms. Coleman notes in her article 5 Reasons Why Great Edtech Products Don't Succeed, one of the reasons is the saturation of the market. So having some mechanism in place that offers a realistic product funnel may reduce some of the resistance to change and implementation of edtech products and resources.

P.S. Grammar note: The "why" isn't necessary in the title of Ms. Coleman's article.

Tuesday, September 23, 2014

The World IS Your Class[room]

There is a home school group in the Chicagoland area with that very name for its group: The World Is Your Classroom. And why not? Why not in this age of the Internet, Google Hangout, Voxer, Skype, and more? Why not in this age in which video can be so present and so powerful? Why not in this age of technological advancements that seem to spew every few seconds? Why not the world as a classroom? Other than assessments, standards, and other significant stuff that takes up residence in a brick-and-mortar classroom?

The Future of Education Eliminates the Classroom Because the World Is Your Class explores this very idea as what author Marina Gorbis calls "socialstructed learning." Never mind that the resource mentioned is Wikipedia. Wikipedia continues to get a bad rap for its sins of the past, omissions it has worked hard to resolve over the years. And keep in mind that Wikipedia is still free whereas the online version of Encyclopedia Britannica (yes, it still exists) is about $70 per year, which isn't a bad price considering all that one can get.

But let's get back to "socialstructed learning." As Ms. Gorbis points out, socialstructed learning is not really a new idea. It has its roots in early thinking about education. Read Socrates and Plato and you will see socialstructed learning. Take a closer look at constructivism and what fuels today's Maker Movement and you will see socialstructed learning. Think about project-based learning, inquiry-based learning, and action research and what you will see is problem-solving, critical thinking, and collaborative learning with ties to or roots in what informs socialstructed learning. Do some research on the way one-room schoolhouses had to operate and you will see evidence of socialstructed learning. Examine the essence of personalized learning and you will find features of socialstructed learning.

Ms. Gorbis also notes that "[i]n the early stages of technology introduction we try to fit new technologies into existing social structures in ways that have become familiar to us." She uses MOOCs as an example, a way of replacing lectures. But MOOCs are simply large-scale online courses. If we look at the fundamental disruption of online courses, we see that online learning started a shift in thinking about how we can learn--any time and in any place. Hmmm. Sort of like socialstructed learning.

I know there are educators who quiver at such a concept: who will manage student learning and make sure students learn what they need to know? How will such learning be assessed?

I don't think we are yet ready to replace the current classroom model, whether in K-12 or in higher education. I do think we're struggling to figure out what works in today's model and what could and should be changed as well as how to make that happen. I think that too many educators cling too firmly to certain ways of thinking about learning and about assessment--and that changing our thinking, mine included, about assessment and how we know what students know and can do will remain one of our greatest challenges.

However, being reminded that learning opportunities and learning resources can be found where we least expect them should reinforce our appreciation of serendipitous learning moments during which our students may learn far more than today's learning objectives and, even more importantly, may retain that learning.

Ms. Gorbis speaks of learning as a flow: resources are available and "opportunities for learning are abundant," which seems to suggest that teachers say, in effect, "This is our plan for learning today and this is what I hope you will achieve, but if you can do more--and I won't prescribe what 'more' means--then let's do that!" And that means letting go some (which reminds me of one of the things I learned from reading Chinua Achebe's Things Fall Apart) and not just permitting but inviting learners to "have the ability to autonomously dip into and out of continuous learning flows."

Wednesday, September 17, 2014

Inventive Innovation

[Cross-posted on LinkedIn]

In The Innovator’s Dilemma (1997), Clayton M. Christensen summed up the insights as follows. He noted first that “the pace of progress that markets demand or can absorb may be different from the progress offered by technology” (p. 258). Christensen goes on to say “[r]ecognizing this possibility, we cannot expect our customers to lead us towards innovations that they do not now need.” Except in the case, I suppose, of Google Glass when businesses signaled uses that Google hadn’t anticipated.

Christensen’s second insight is that “the major reason for the difficulty of managing innovation is the complexity of managing the resource allocation process” (p. 258). That was true until crowdfunding, so perhaps it is slightly less true now.

The third insight is that another innovation problem is “matching the market to the technology” (p. 258). “Successful companies have a practiced capability in taking sustaining technologies to market, routinely giving their customers more and better versions of what they say they want” (pp. 258-259). Except in the case of Apple, which has made an art form of offering products for which we have no need and then trying to convince us otherwise.

However, Christensen also states that “[d]isruptive technology should be framed as a marketing challenge, not a technological one” (p. 259).

I’m going to skip to the sixth insight, which I think is more valuable for my purposes: “it is not wise to adopt a blanket technology strategy to be always a leader or always a follower. . . . Disruptive innovations entail significant first-mover advantages” (p. 260). Unless, of course, you are the tipping point, in which case you garner the advantages and the acclaim for taking advantage of the work leading toward the disruption and appearing to be the disruptive innovator. Kudos, and take your bows.

In 2003, Christensen published The Innovator’s Solution. Again I skip past most of the pages to the end of the book, though I highly recommend the “Managing the Strategy Development Process” chapter. Chapter Ten is titled “The Role of Senior Executives in Leading New Growth.” Yawn. Let me sum up: the senior executive 1) manages the resource and process flow between the “disruptive growth” and “the mainstream” businesses; 2) shepherds the “disruptive growth engine;” and 3) senses when the wind is changing direction and teaches the grasshoppers this skill.

Before we’re all in a snit that only senior executives get to play these roles, let me point out there is a reason there is only one head chef, only one ship captain, only one field commander, only one. . . well, you get the idea. Someone has to keep an eye on what is and what could be. But I also want to say that the senior executive who does teach others how to recognize the signals of disruptive growth will also have trained the staff to make productive use of the disruptive growth engine. There are only four steps: 1) start before you need to; 2) appoint a senior executive to be in charge; 3) create a team; and 4) train people to identify disruptive ideas. I would add that once folks know how and what to look for—and recognize that some of your people will already have these skills, perhaps intuitively—get out of the way.

In 2007, Scott Berkun published The Myths of Innovation. Say what? A book of which John Seely Brown said (and I quote the inside book jacket), “. . .insightful, inspiring, evocative, and just plain fun to read. . . it’s totally great.” I agree. I loved this book. In “There is a method for innovation,” Berkun reminds us there is little magic, but often a lot of hard work, in what we see as innovation: “Innovation is best compared to exploration, and like Magellan or Captain Cook, you can’t find something new if you limit your travels to places others have already found” (p. 39).

“Name an emotion, motivation, or situation, and you’ll find an innovation somewhere that it seeded” (p. 40). So how do we get to innovation, disruptive or otherwise? There is no specific path to such righteousness, but there are categories.

Hard work in a specific direction: frame the problem; enumerate possible solutions; experiment and analyze results; adjust as needed; keep experimenting with the focus on the problem to be solved.

Hard work with direction change: work on one problem, stumble onto an unexpected solution to a different problem, and ask, “Huh. Wonder what I can do with this?”

Curiosity: as in the stories behind Velcro and Linux.

Wealth and money: “The Internet boom and bust of the 1990s was driven by start-up firms innovating, or pretending to innovate, just enough for established corporations to acquire them” (p. 42). So have an idea and hope someone will buy in to take on the risk of innovation, which is back to Christensen’s points about resources, etc.

Necessity: Well, Plato did say that necessity “is the mother of our invention” (The Republic, Book II), and who is going to argue with an ancient Greek philosopher who has been proven right time and time (and time) again?

“Innovations that change the world often begin with humble aspirations” (p. 43).

The good people of Systematic Inventive Thinking (SIT) remind us that “Innovation is for rethinking things in order to do them better, not merely differently. Ideally, it is the responsibility of everyone in the organisation to leverage innovation to help them achieve whatever it is they need to achieve.”

They also suggest it might be a good idea to think inside the box because sometimes we can be so distracted by trying to be exceptionally innovative, by trying to imagine the Next Big Thing, that we fail to see the possibilities in front of us.

So as we are investing in 3D printers and other technologies for our schools and standing back, awaiting student innovation, perhaps with excessive expectation, let’s keep in mind some basic principles for innovation and invention. Let’s give kids the foundations they need, but let’s also give them time for tinkering. And in the workplace, let’s not put people in a conference room with chart paper and blank whiteboards and expect miraculous innovative thinking, because the most likely results are heartburn and headaches.

In starting early and in putting processes in place, one of the key reminders is that to be inventive and innovative, kids of all ages need time. . . to experiment.


Friday, September 12, 2014

The stalled MOOC revolution: Not over; not yet

I've signed up for about a dozen MOOCs. I've a Master's degree and a doctorate. Why would I take a MOOC? Several reasons: 1) I like to learn; 2) they're free, so why not?; 3) I don't really have to do anything to "take" the course but I can learn some new things and gather some new resources; and 4) I'm curious to see how some educators are translating their content for a MOOC. I mean, what can or should an instructor do differently if there are thousands, even millions, of students in a class? How do you assign groups? How do you grade student assignments? How do you provide any kind of meaningful feedback?

Dan Friedman (@DNFriedman) asserts the MOOC Revolution is over; well, that it never really happened. He could be right. Mr. Friedman believes the key factor is engagement.

Dror Ben-Naim asked if those "born digital" can "save" MOOCs in his August 21 post. Ben-Naim reports that MOOC completion rates are in the single digits and suggests that the reason is that "MOOCs combine a set of existing tools that can be useful instructional supports, such as online lectures, social networks, and quizzes. But few professors would consider these technologies, together, as a substitute for the course experience."

Ben-Naim goes on to tell us of a professor at Arizona State University, an institution long known for its forward thinking. Professor Ariel Anbar decided to "smash the disciplines" and his science course focuses on a single big question. That in itself is reason to pause for contemplation, but do that later. 

When you're done with this blog post, go read the linked article. Not now, later--because there are a few things I'd like you to think about, including some observations by Dr. Jeff Borden, who published in WIRED and told us that MOOCs are dead, but not really. Dr. Borden asserts that "the fact that so many people took MOOCs not for the course, but for a section of the course is telling," and I agree. That kind of learning is truly personalized learning because the students made the choice of what part of the course was valuable to them. I think it's also important to note that many MOOC students are folks like myself: people who have college degrees and are looking for additional learning. This is an important distinction.

An instructor of a MOOC, Robert Wright (@robertwrighter), a senior fellow at the New America Foundation, had 59,000 students sign up for his "Buddhism and Modern Psychology" course. But that doesn't mean the majority of those 59,000 students who expressed an interest in his course had any intention of completing the course. Keep in mind that MOOCs are free. There is little incentive for anyone to participate in the forums, to do any of the reading or watch any of the videos, to do any of the assignments. Mr. Wright notes that participation did, in fact, decline. But he also relates a social experiment conducted in 1985:
Mike Kinsley, who later founded Slate, did an experiment to test the hypothesis that “much-discussed” books in Washington, D.C., don’t actually get read. He had an assistant visit local bookstores and insert, about three-fourths of the way through various books, a card with Kinsley’s phone number and the promise of a cash reward to anyone who called him. No money changed hands.
Mr. Wright later states that
Lots of factors will determine whether MOOCs wind up being important—and MOOCs will in any event evolve, maybe to the point of being barely recognizable descendants of their current selves. But in the near term their viability will depend very heavily on whether students want to take them and whether capable professors want to teach them. 
In my opinion, MOOCs have already changed the way we think about higher education. Coursera offers proof of participation and/or completion through its Statement of Accomplishment, Verified Certificate, and Specialization Certificate options, though these are not available for all courses. I imagine this is an instructor choice. Yes, for some options there is a cost to the student, and yes, the student has to do some actual work rather than just download the content and read some stuff or watch a few videos every now and then, if ever. But it also gives the student a "try before you buy" option: if the student really likes the course, the student can sign up for the certification options well into the course.

What we have yet to do, though, is think differently about how we teach and how students learn. Yes, it goes back to engagement. Yes, it relates to how we expect students to show us what they've learned. Yes, it relates to professors who believe they have to determine if students have learned the "right" stuff and quite possibly the "right" way.

Professor Anbar's courses use Smart Sparrow (@smart_sparrow), a company Mr. Ben-Naim represents. Smart Sparrow is an adaptive learning environment; check it out.

Pay attention now. There are K-12 schools that have adopted systems that use adaptive learning environments and, by the way, adaptive learning is really nothing new, but the technological possibilities are quite incredible; the implications and consequences are yet to be discovered. Adaptive learning and deep learning (not to be confused with deeper learning) will continue to influence our educational and consumer worlds.

In Mr. Ben-Naim's article, he stated that Professor Anbar chose to "smash the disciplines." I think there are a lot of teachers who have been smashing the disciplines, flattening their classrooms, and shredding the boxes. Like Mr. Wright and Dr. Borden, I think the MOOC revolution has really only begun. Inventive and thoughtful educators will learn from what works with MOOCs and adaptive learning, and what can influence personalized learning, growth mindset, project-based learning, and a host of other educational trends and practices to continue to expand and shift the way we think about learning and teaching.

I don't think the MOOC revolution is close to being over, and I think we are seeing a leading edge of what education can be and how it can help ensure today's students are prepared for the world in which they will work and live.

Monday, September 8, 2014

Standardized and summative tests: Let's think about why

The other day I wrote about standardized tests. I suggested they are bunkum, or nonsense. Maybe not so much nonsense as nonsensical in that we give and give and give standardized tests but seem to have forgotten why, except to gather data that tells us how terrible our teachers or students or schools are.

After I posted those observations, I was reading about the PISA 2012 results. It should come as no surprise that the US did not do well. But I also found research that shows state-by-state analysis of the results based on parental education. The US still doesn't measure up too well though some states are more competitive than others.

My reading led to a rather heated discussion about the value of such assessments. I contend the assessments themselves aren't useful if we overtest (and we do overtest) because teachers spend too much time preparing students for tests. Is there residual learning as a result of test preparation? Good question. If there is, I can't imagine it's substantive.

There are many discussions about the use of "big data," though when teachers (or anyone) will find time to sift through all of the data, and to aggregate or disaggregate it in ways that are meaningful to individual teachers for their grade levels and specific content areas, is a concern. And then there is the question of whether teachers will get any of that data in a sufficiently timely fashion to make any difference in the way they plan and implement their lessons.

Which leads me to a recent post by Dr. Justin Tarte who asks if summative assessments are obsolete. Now this could be poking the wrong bear, but I believe he has a point.

We know students don't learn at the same pace or the same way. We have to provide some structure and it's not unreasonable to expect students to work together on the same thing at the same time. In the "real world" those experiences are meetings of some sort.

Dr. Tarte also notes that if teachers are using formative assessments. . . . well, I need to stop there. Because we have to assume that teachers are using formative assessments and by that I mean that a) teachers know what formative assessments are and how to use them effectively; and b) teachers are using formative assessments consistently and continuously so at any time they have a general idea of their students' capabilities and challenges.

Don't be hatin' on the first supposition because we all know teachers who are not clear on the concept of formative assessments, who believe that all they have to do is give their students exit slips at the end of the class but never bother to do anything with that information.

In all forms of assessment, the differentiator is using the information from the assessment to help students continue to learn and to improve their learning. As one teacher said, "The test is too late."

And if the test is too late, if that summative assessment is too late, why bother with the summative assessment?

Dr. Tarte points out that if we know where kids are and aren't in their learning, why give a summative assessment? Why lose part of or an entire class period to give a test that is essentially unnecessary?

Kids cramming for tests isn't learning. It's cramming for a test. It's the student hoping to remember too many unlearned things for as long as the test takes. It's the teacher hoping. . . . I don't know what the teacher is hoping. That the students will retain the results of their cramming for longer than the test period?

I'm not really a fan of eliminating tests. I do think it makes sense to rethink why a test is given and what the test actually assesses, other than students' abilities to stay up late and memorize a bunch of stuff that is otherwise meaningless to them.

So I appreciate the article that tells us it's a waste of time for students to cram for tests. Years ago I would give my students the option of completing a project for their literature class. They could write a paper if they wanted, or they could create something: write a song, make a game, create a video, whatever. Whatever they did, they also had to include a short explanation of how this thing they created demonstrated what they learned in the class. I got amazingly creative stuff from students. I got to the point that I asked students if they wanted a take-home final or an in-class open book final. Either way it was going to be open book. Why? Because in the workplace I get to open books or do research on the Internet any time I need or want to. Those skills are important, especially when working under a deadline. If the kids hadn't read the texts or done any of the work for class, cramming wasn't going to help anyway. They knew up front they'd have the choice, so they still needed to do the work to know where to go to find support for whatever they had to say about the questions. And most of my tests were short answer or essay because I wanted them to write. Maybe intuitively I knew that any other kind of exam was a waste of their time and mine.

As we think about assessments--formative, summative, and standardized--I think it's really important for us to think about why we give these assessments, and what we hope to glean from the results. Not for our benefit, but for that of the students and their learning.

Friday, September 5, 2014

Standardized tests are bunkum?

Standardized tests are probably bunkum. Many of us have known this for a while. Even if we haven't been able to prove it with barrels of data, we have anecdotal evidence. Administrators and politicians alike chortle when test scores increase even a nanometer. Should the test scores go down, even by the same margin, those same administrators and politicians scurry for excuses and schedule blathering rampages framed as press conferences to divert attention from the test scores to scores of possible scapegoats--usually teachers--to explain away those bad scores. It's never ever the test. It's never ever that that particular test could have been chock full of poorly written or just bad questions. It's never ever that any standardized test measures mostly how well prepared students are to take a standardized test.

Before I pile on (again), I would like to try to make something reasonably clear, even at the risk of provoking different howls of frustration from the anti-Common Core throng. Common Core swept into the spotlight the generally misunderstood concept of "rigor." [Check out Barbara Blackburn's work on rigor, please.] Let me sum up: not "harder" and not more. What has tagged along with Common Core, and nearly unnoticed by many, is the concept of "proficiency." Sure, plenty of teachers are part of the conversation about "mastery" and "proficiency" and how to grade for either and the complications associated therewith. But that's a different topic of conversation. The reason I mention it is this: educators know there is a difference between knowing how to do something and being able to do it.

I had this conversation just last night with a former student of mine who is a teacher and who is going through this very same frustrating conversation. It is, for example, one thing to be able to explain how to do a push-up, perhaps even replete with details about muscles and other stuff. It is, however, quite another to be able to complete 10 push-ups. So is the score for mastery being able to do 10 push-ups or knowing how to do a push-up and why it matters? The answer is that it depends on what I'm scoring for mastery and why.

A standardized test asks a student to fill in a bubble for an answer. The student may have guessed or may have tried to complete the task. Unless students write in their test booklets (which they're not supposed to do) or in the margins of their tests, or unless students are allowed scrap paper that is collected and reviewed along with their tests, there is no way to know if a student guessed or tried to complete the task. Even the multi-part multiple choice (selected response) questions designed to require students to provide evidence for one or more prior questions can be gamed. Students can guess at every part and still have a chance of guessing a right answer.

Given that, a standardized test really cannot measure true mastery or proficiency, however one might define those terms and assess whatever students might do to demonstrate their levels of knowledge and/or skill. But that's just my opinion.

. . . in June 2012, the Texas House Public Education Committee did what elected officials do when they don’t know what to say. They held a hearing. To his credit, Committee Chair Rob Eissler began the hearing by posing a question that someone should have asked a generation ago: What exactly are we getting from these tests? And for six hours and 45 minutes, his committee couldn’t get a straight answer. Witness after witness attacked the latest standardized-testing regime that the Legislature had imposed. Everyone knew the system was broken, but no one knew exactly why.
Except Dr. Walter Stroup, University of Texas College of Education.
Stroup argued that the tests were working exactly as designed, but that the politicians who mandated that schools use them didn’t understand this. In effect, Stroup had caught the government using a bathroom scale to measure a student’s height. The scale wasn’t broken or badly made. The scale was working exactly as designed. It was just the wrong tool for the job. The tests, Stroup said, simply cannot and do not measure what or how much students learn in school.
People have been saying this for decades, and no one has been willing to listen. Why?

In general, the linked article gives some wonderful context for the abysmal performance of standardized tests, but then it becomes a less-than-shocking exposé of how Pearson has strong-armed, and continues to strong-arm, educators at all levels to accept its will. But it also indicates that as a result of having had the temerity to expose the absurdity of standardized tests and poking the Pearson assessment bear, Walter Stroup was denied tenure.

If this is the case, it is appalling that the University of Texas College of Education took Pearson money and then rationalized its treatment of Walter Stroup. Failing to get tenure is devastating to a college professor, personally and professionally. While I completely sympathize and empathize with Walter Stroup, the point really has to be that standardized tests are still bunkum.

However, if we were to pull the plug on standardized tests, then what? Lee County made history by opting out of all state-wide standardized tests. A few days later it reversed itself. Yes, in our zealousness for whatever (and I really don't know what it is), we test too much. 

Last year CPS reduced the number of tests its students would have to take from 25 to 10. You read that right. They eliminated 15 standardized tests. Fifteen.

I think we think we know why we give students standardized tests. And I think we think we do something with all of that data. But if we also know that the results of the standardized tests don't jibe with what classroom teachers are seeing in their classrooms and across their grades, then something is not right. This confusion of data cannot tell us whether student performance in the classroom is an accurate measure of what students know and can do because, along the way, we have learned not to trust the professional judgment of our teachers. That, however, is a topic for another day.

As for these standardized tests, if we insist on giving them, let's be selective about which tests we give and when, let's be realistic about why we give the tests, and then let's be realistic about what we do with that data. Let's be proactive about how we aggregate and examine the data, and how we use that data to inform something that helps teachers do their jobs more effectively and truly helps students demonstrate what they know and what they can do, not how well they can guess.

Wednesday, September 3, 2014

Academically adrift, or a dash of apathy?

I saw this article, "The Economic Price of Colleges' Failures" and chose to move on rather than read, yet again, about the alleged indifference of college faculty and administrators.

But then a friend of mine, Vicki Davis (@coolcatteacher), posted about the article and expressed her concern and so, with a sigh, I read it.

In 2010, Richard Arum and Josipa Roksa published Academically Adrift. An excerpt from the promotional copy on the web site reads:
According to their analysis of more than 2,300 undergraduates at twenty-four institutions, 45 percent of these students demonstrate no significant improvement in a range of skills—including critical thinking, complex reasoning, and writing—during their first two years of college.
I know just enough about statistics to wager the numbers provide some statistical weight, but I have questions, lots of questions, about this study. But the questions aren't really relevant because, quite honestly, I'm not surprised.

I've taught at a few different institutions of higher learning, some more selective than others, some larger than others. This is what I know: college kids tend to have no clue that they don't know what they don't know, and they are often too overwhelmed by college, by work, by a freedom they'd never before experienced, by all sorts of things, to pay attention to learning.

I taught freshman English for a number of years. It's a course I loved to teach and would teach again in a heartbeat. Somewhere fairly early in my teaching experience I stopped copyediting and I stopped putting grades on papers. Why? Because kids wouldn't read my questions and notes in the margins, and because I wanted them to learn to think about their choices and their reasoning and to become better writers because of it.

This "problem" of being academically adrift, and/or apathetic, doesn't start in college. It only continues there.

I coasted through high school. I was bored out of my ever-lovin' mind. I could have graduated early except back in the dark ages when I was in high school, such was not permitted, especially for a girl. I loaded up on English classes and student assistantships to pad my schedule after I completed the required one-half credit course I needed to graduate. I was never much of a student anyway. I did homework, but mostly in a desultory fashion.

Anyway, I graduated (yawn!) and went off to college. I was working three part-time jobs, commuting, and carrying a full-time load. And really enjoying being in college! At the end of my freshman year, I was on academic probation with a 1.7 or 1.9 GPA. So I transferred to a different school and was able to reboot. Being away from home created its own challenges. I was not a disciplined student and way too interested in everything else that was going on. But I managed to pull a 3.5 my first semester, which told me I could do it, so then I relaxed. I realized I could do well enough without working too hard, so I had time for my part-time job, for my sorority, and for whatever other nonsense I was getting into.

The difference for me might be that I was already reasonably adept at critical thinking, problem solving, and complex reasoning, although for many wrong reasons. And I was already good at writing. But the classroom teachers and professors who kept me on my toes and probably enabled me to be a marginally successful student were those who saw through the masquerade and didn't put up with my feigned insouciance (if I flunked out of school, my mother would have flayed me alive). For them I worked harder; for them, I tried. For everyone else, meh.

So I don't blame the faculty or their administrators for their students' lack of success. Not entirely.

I understand the fatigue that makes them wonder if it's worth it to try to care about those who don't seem to care themselves. I understand the weariness and the wariness that accompany a decision to allow a bit of grade inflation. Oh, I have stories. The pressure from a coach to make sure a really valuable player remains eligible. The real and perceived threats from parents, from students, from administrators. That doesn't start in college either.

Higher education is a business. Actually, higher education is a collection of businesses, often with conflicting mission statements and core competencies. One of the many questions is whether or not those missions and competencies are for the benefit of the students. Who is the institution's customer? The alumni association? Its donors? Or maybe, just maybe, it's the students.

Some students will work hard to learn no matter what, some will game the system, and others will go through the motions of learning because that's what they've learned education is all about. And that doesn't start in college.