Service design in the schools

My mother-in-law was a PE and special ed teacher for about 30 years, and for most of that time she was also a swimming coach in Sun Prairie, both within the school system and for local club teams. Since moving into her house in Northern Wisconsin, I've been able to watch her process results after swim meets. She would typically come home from a meet with a bunch of paper and use a special computer to do something with all of it, so I asked what that was. Boy, was the answer long and complicated.

Dubuque, IA is about two hours from her house outside of Madison, and we spent almost the entire trip talking about swim data. It turns out the problem involves two competing technologies, "Colorado" and "IST." One system runs the touch pads used for timing swimmers; its data passes through a piece of software called Meet Manager to the other system, which runs the scoreboards. Adoption of that software is inconsistent, and some teams apparently keep track by hand. On paper. In pools.

Every weekend my mother-in-law copied the meet results from her school's copy of Meet Manager into an email she sent to a local newspaper reporter. I estimate she spent about three hours a week pulling these data out to compose that email.

I asked more questions: Can you export the meet data? What format do you get? Can you open the file in an application like Excel, or is it a jumble of mangled text? How do teams share information and keep track of standings? How do students keep track of their own stats? Why hasn't anybody built a better system?

The answers to these questions were encouraging, except for the last one. That answer was: "Because nobody cares." High school swimming programs are small. The largest teams have a few dozen swimmers and a couple of coaches, and administration is largely decentralized and focused on more popular sports. While it's not perfect, this system works and is mostly painless for the coaches, so the impact of fixing it might seem low relative to the resources available.

"Works" is a bit of an overstatement. If you're an athlete, the only people with a record of your progress are you and your coach. If something happens to the Meet Manager computer or the printed records, an entire season could be lost. For my mother-in-law, that means manually backing up the data on her old personal iMac. So even if the impact is low relative to the size of the budgets, there are only upshots to keeping better records: coaches get easier management, student access could be self-service, and the district's athletics program gets an accurate, accessible record of its entire history.

Who’s going to build it?

Here's an idea: turn your school's computer club or computer science courses into a service design program. Spend a week or two researching the problem space. Use open-source research and design methods to decide together what the biggest solvable problem is, and spend the rest of the term building, testing, and iterating. At the end of each week the students present their progress, and at the end of the semester you have a working minimum viable product.

A few weeks ago I tweeted this idea, and the responses were mixed. CFPB's Adam Scott has done something like this, but many who responded thought it was a good idea that might never come to pass.

A lot of people are talking about teaching kids how to code. I don't see a lot of people talking about teaching kids how to be on a design team, and there's a big difference.

College Scorecard: The value of data in higher ed

A few weeks ago the Department of Education released a trove of data about colleges and universities around the country. The site itself does a pretty great job of showing some high-level information about how much a school costs, along with a few statistics about retention, graduation rates, and financial success after leaving.

Earlier this week, New York Times columnist James Stewart wrote the latest hit piece on the project, with a familiar refrain:

"While Scorecard adds potentially valuable information to the dizzying array that is already available, it suffers from many of the same flaws that afflict nearly every other college ranking system: There is no way to know what, if any, impact a particular college has on its graduates' earnings, or life for that matter."

That's a fair critique. Other critics have called it "absurd" and "a classic example of confusing causation and correlation." But they're missing the point.

There probably is some desire for a tool that will give one clear answer about which school to attend. That's what some of the commercial rankings, like U.S. News & World Report, are trying to do, but it's not what Scorecard or the CFPB's Paying for College tool is about. As Stewart says, "pay, of course, says nothing about the relative quality of different colleges."

What's valuable about the College Scorecard is the magnitude and openness of the data provided. This is a research tool, not a ranking system, and with greater access to research tools, students might be able to make better decisions as consumers.

The New York Times, for example, found major gaps between male and female graduates' earnings at prominent universities. Female MIT grads face a $58,100 gap in earnings compared with their male peers; "women who enrolled at Harvard are making as much as men who enrolled at Tufts."

A woman might rank MIT lower on her list with this knowledge. Or maybe not; despite the gap, women from MIT still make more than grads from most of its competitors. Dismissing pay data ignores the fact that at some point every graduate, man or woman, privileged or not, has to pay for their education.

It's also been interesting to see and think about how other people use these data. Journalists have already drawn some tremendous insights out of them. But what if a high school computer science class took on the project of figuring out which schools are the "best fit" for students from their area? Based purely on financial aid and income data, you should be able to make a good guess at how much debt a Minneapolis public school graduate would carry after attending nearby colleges and universities. Building a small application on top of a public data API seems like the start of a good lesson.
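As a rough illustration of the kind of starting point such a class might use, here is a minimal Python sketch that queries the Department of Education's College Scorecard API. The endpoint is public, but the specific field names and the DEMO_KEY placeholder below are my assumptions and would need to be checked against the current API documentation and data dictionary; this is a sketch, not a finished tool.

```python
# Sketch: pull net-price and median-debt figures for Minnesota schools from the
# College Scorecard API. Field names are assumptions based on the public data
# dictionary and may need adjusting; DEMO_KEY is a placeholder for a real
# api.data.gov key.
import requests

API_URL = "https://api.data.gov/ed/collegescorecard/v1/schools"
PARAMS = {
    "api_key": "DEMO_KEY",      # replace with your own api.data.gov key
    "school.state": "MN",       # nearby schools for a Minneapolis grad
    "fields": ",".join([
        "school.name",
        "latest.cost.avg_net_price.overall",          # average net price after aid
        "latest.aid.median_debt.completers.overall",  # median debt at graduation
    ]),
    "per_page": 100,
}


def fetch_schools():
    """Return one page of Minnesota schools with cost and debt figures."""
    response = requests.get(API_URL, params=PARAMS, timeout=30)
    response.raise_for_status()
    return response.json()["results"]


if __name__ == "__main__":
    for school in fetch_schools():
        name = school.get("school.name")
        net_price = school.get("latest.cost.avg_net_price.overall")
        debt = school.get("latest.aid.median_debt.completers.overall")
        print(f"{name}: net price {net_price}, median debt at graduation {debt}")
```

From there, students could join those figures with local household income data to estimate what a graduate from their own district might actually owe after attending each school.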

What about colleges? Scorecard's diversity numbers tell a huge story. 25% of students at my alma mater receive a Pell Grant. 83% of the student body is white. My college has struggled to attract a more diverse student body, on just about every level, for a long time.

If college deans of students want to identify areas where minority students are being underserved, these data can expose where resources for students should be increased. Maybe MIT should strengthen its career services and do a better job preparing women for salary negotiations. Maybe the history and women's studies programs should get more attention so more students can study the historical roots of this problem.

All too often, students are bizarrely not considered a college's primary user base. University presidents typically listen to alumni and foundation donors over students about how the college is doing and what its priorities should be. With these data, though, a student body representative or college president could go to the board and say: we know that more than half of our graduates could be earning far more than they are; we need to increase student services and academic services and design programs that put students first, or we risk a donor base that can't support us in the future.

Colleges around the country are talking about data-driven decision making. Often that turns into prioritizing support for academic programs based on enrollment, or finding data about donors to inform where to move money. Maybe with these data, plus all the data these schools are already generating and not sharing, we can see smarter, more student-focused decisions.