A number of years ago (7?), a colleague of mine at Stanford – Carlos Seligo – introduced the concept of “Orchids and Weeds.” He meant it jokingly at the time, but it has stuck with me ever since.
Basically, there are two ends of the spectrum when it comes to technology, support, and adoption. There are orchids, which are elegant, beautiful and perfectly aligned with needs but require an incredible amount of attention and resources to get running and properly supported. It is unlikely that support will become more efficient over time. Weeds, on the other hand, spring up organically and often as ancillary to something else. They germinate quickly and soon adoption is very high and usage has proliferated. Weeds might even be used in ways never originally intended.
An example of an orchid might be the Sakai Learning Management System, which has been developed by universities, for universities, but is quite a beast to implement and manage. Perhaps even some of the major Microsoft products – SharePoint comes to mind – would be orchids, too. They can be ridiculously powerful if done right, but need a lot of resources to get there. The two “big” weeds would be e-mail and SMS text messaging. SMS in particular is a great example. Originally designed so that cell company employees could report coverage outages and other problems, it has become the primary way for many people to communicate (I send about 2,000 texts a month and yet use fewer than 400 minutes).
It is important to note that there is nothing inherently bad about an orchid. By definition, it is beautiful, elegant, and a perfect solution for the job. It offers very high value and probably commands a high price (the amount of effort users are willing to expend to access the service). But the cost is also extremely high. So the overall value-cost gap (or V-C “wedge” as we’d say in my Capstone MBA course) is smaller than that of, say, text messaging.
But if the fundamental basis for maintaining a service portfolio is value, orchids are just fine. You just have to pick and choose the right ones. And that means a strong rubric for separating the useless orchids (just pretty, but not the right fit) from the truly wonderful ones.
Basing services on value provision is pretty simple. Well, actual “value” is hard to quantify, even in the business world where there is a number for everything. But if you start with price – again, the expense of time/effort/distance traveled/hoops jumped through/etc. that a user is willing to take on in order to use the service – then you can get an idea of value. Value is always the same as or higher than the price (unless you have one seriously messed-up measurement system). If you put something somewhere (the physical location of a high-end lab, a web service buried behind many clicks of the mouse, etc.) and people are still rushing in droves to use it, then the price is probably quite a bit lower than the value. Whether the price is too low is unclear, but it’s not as if you’re going to disassemble that lab and move it farther away from the dorms or put even more clicks in front of the web service. Bottom line – value is high, and it can be gauged even if it cannot be measured exactly.
So once you have that, you just roll out services that are high value or high V-C. The value could be high in an absolute sense, in which case cost is not an issue (striking a deal with a cloud-based backup company where the student pays and you get all the kudos but none of the liability). Or it could be high in a relative sense (buying a bigger SAN for $75,000 to do multiple redundant backups of faculty and staff data, on-site, in exchange for peace of mind for those users – cost and value are both high). But if something is low in value, just don’t bother with it. Because if it’s not of value to anyone, why are you wasting resources on it? Worst case scenario, cost is HIGHER than value, and you’re just burning resources for nothing.
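The triage logic above – keep services with a positive V-C wedge, drop anything where cost exceeds value – boils down to a simple subtraction. Here is a toy sketch of it; the service names and numbers are entirely invented for illustration, not real portfolio data:

```python
# Toy illustration of value-cost ("V-C wedge") triage of a service portfolio.
# All services and figures below are hypothetical, in arbitrary units.

services = {
    # name: (estimated value, estimated cost)
    "cloud backup (student-paid)": (90, 5),   # high absolute value, near-zero cost
    "on-site SAN backup":          (80, 75),  # high value AND high cost: small but positive wedge
    "legacy print kiosk":          (10, 25),  # cost exceeds value: burning resources for nothing
}

for name, (value, cost) in services.items():
    wedge = value - cost          # the V-C "wedge"
    verdict = "keep" if wedge > 0 else "drop"
    print(f"{name}: V-C wedge = {wedge} -> {verdict}")
```

The point of the sketch is only that the sign of the wedge, not the absolute cost, drives the keep/drop decision – the expensive SAN survives while the cheap-but-valueless kiosk does not.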
Ideally, you go low cost and high value – look for weeds. Get as many of these as you can, because they require low overhead but adoption will be high. I’m not sure people will “value” them the same way they would other things, but they will surely use the service. Align your staff to take advantage of these. Have fewer staff managing more weeds – the ratio will be different.
Then go looking for orchids. You will probably need project management staff just to test them, then a much lower staff:service ratio to maintain them. But if you figure out the right potential orchids during testing, then deploy the best 2-5 or so and it’ll be worth it.
- On our 2008 Honda CR-V, we have three different alarm tones – one each for leaving the key in the ignition, the headlights on, and the parking brake on.
- On our 2005 Mazda 3, we have a single tone that is used for both the keys and lights. No parking brake.
- Of all cars, the 2004 Jeep Wrangler – bare bones vehicle, zip-up windows, etc – had a single warning signal for the lights, keys, and brake.
How is it possible that three different cars use different methods for the simple task of notifying a driver that he or she has forgotten to do something, or left something behind? The ones that are most similar – the Honda and the Jeep – are almost as far apart as possible in terms of type of car (well, if it were an Acura or something, that would be farther…). And somehow the Mazda can’t even bother with all three tones. The features of the Jeep – a car WITH NO WINDOWS – are more user-friendly than those on our Mazda.
It’s not as if the beeping sound for the brake was first invented by Jeep, much less patented (which would be required in this scenario, since the Jeep is older than the Mazda). Nor was the idea of using different tones invented by Honda. And it can’t possibly cost much in terms of Cost of Goods Sold (COGS), which affects margin, to have either three tones or three different tones.
If I had to fill out a form right now asking for features I’d like in a car, having tones (same tone okay, since I can never remember which tone is for which problem) for all of those items would be on that list. These simple things. Just take the simple things that others have done, and make sure that you have at least those same things. Then innovate from there.
How is this at all relevant? Simple – if you’re running a Help Desk, get the basics of support, time to resolution, quality of customer service, etc., to the standard that other schools with roughly the same resources and limitations have achieved. If you have a small server farm – or a giant data center – look at what others have done and just flat-out copy the best-of-breed basics. I don’t mean the whole kit and caboodle, copying every last detail. But there have to be common denominators that have sound solutions, proven over time, that are easily replicated.
Just as Mazda should not spend much time deciding whether or not to put in three bells – which is, I think, clearly a better solution than just two – why not simply start with three, and spend more time deciding whether an auxiliary audio input should go in or not? Or, a year later, when all Mazda 3s had those inputs because basically all cars had them, how about making the front console more user-friendly or improving something else? Innovating beyond the basics.
I’ll admit that we are not doing as good a job at this as I’d like – or, I hope, as we’d like. But once we can get everything to a baseline standard – and it is definitely feasible – then we can start to really mix things up: think of new things and build up an overall structure that is above and beyond.
So let’s get started on the foundation.
I loathe the phrase “do more with less.” I abhor it. I loathe and abhor very, very few things in life, and I reserve those venomous verbs for rare occasions. Yet I both loathe and abhor the phrase and the idea behind the notion of “doing more with less” as a management tool or concept.
The idea that any organization – whether it be an entire institution, a school, a department, or even a single project team – should be expected to provide, say, 125% output with less than 100% resources is utterly absurd. When this is used during times of economic crisis – which is what we’re looking at right now in higher education – the philosophy can be something of a necessary evil. When budgets are tight, almost any project will have less than 100% of its funding. When hiring freezes occur, existing staff are spread more thinly across projects. Resources – both dollars and staff – must be at less than 100% in such a situation. And at least for the short term, the same set of services must persist. There simply isn’t enough time to retool an entire department, help desk, or other operation in the face of a sudden budget crunch. Dollars per capita (DPC) goes down because it has to, at least for now.
But this cannot become standard, ongoing policy. Service portfolios must be reviewed, staff must be rearranged, and overall operations must be reorganized to accommodate the new monetary restrictions. Only those services that can be offered at the same, pre-crunch level should remain, and overall DPC should be restored. This has to happen, or everything and everyone suffers.
So “more with less” cannot work as a long-term response to budgetary constraints.
As a general inspirational philosophy, however, it can have some kind of meaning. “More” and “less” are relative terms. Provided that there is no expectation that we somehow work 125% time (in the perhaps utopian world where everyone works to capacity and capability, no one can work more than 100% of what he or she is capable of), doing “more” simply means to provide some additional quality with the work we do. For me, I use value to the customer as the yardstick. How much value are we offering to our students/staff/faculty (or some subset – residential students? just the financial aid office?) with our services? How can we provide “more” value by doing X instead of Y? Or perhaps by doing a new version of X?
Similarly, “less” just means using a lower quantity of available resources. It does not and should not mean that there are fewer resources, in an absolute sense, with which to provide the same value – just in a relative sense. In other words, using “less” is all about efficiency. How can we provide value in a manner that consumes less time, fewer dollars, or both?
Ideally, how can we provide more value, more efficiently than we are doing now?
Taking the value concept further, it is quite possible that there are extremely high value products and/or services that are actually extremely costly in terms of resources. In a perfect world we offer huge value at a low cost – a major streamlining of workflows for a specific office using open source software that is easy to install and maintain. But if the value boost is high enough – providing a toolkit that makes Santa Clara Law students “twice” as useful to law firms as students from other schools – then cost becomes less of an issue. Maybe that toolkit involves building multiple web-based software tools and purchasing, installing, and maintaining a very expensive piece of software that provides practical training. Maybe it’s worth it. Maybe not. But it is a possibility one should consider. The “more” value part should be the first consideration, with the “less” resources – efficiency – part coming next. Hopefully, if we offer this huge value, high cost solution, at least whatever resources we are expending are the absolute lowest amount required, because we are very efficient.
Now – my preferred way of approaching this concept, the only way in which I am truly comfortable espousing anything along the lines of “more” and “less” is to:
“provide more value with more resources, using less resources and being more efficient, all within the same or less amount of time.”
Now, that’s awfully long-winded, but that’s also the history major in me. It is, however, important that all of these points be included.
Provide more value: covered above, but again the key thing is that we first want to provide value. If the customers don’t get the value then we shouldn’t offer it.
With more resources: if I’m asking you to find a way to provide more value, then I’m going to give you a bigger pool of resources – staff and money – to help shape things. To help form the project at the outset. Start with loose reins, then bring them in when you are underway and need control.
Using less resources and being more efficient: Of course, it is not acceptable long-term to add services to our portfolio that consume resources disproportionate to the value being provided. At the least, when pilot leads to production, the service should be streamlined and using fewer resources than all other alternatives.
All within the same or less amount of time: This is key. We all work 100%. For most people, that means 40 hours per week. So what I want is for my staff to do all of this, develop new projects that provide more value, etc, all still within those 40 hours per week.
Of course, if you’re being efficient, then what is really happening is that whereas you provided X amount of value during those 40 hours, you are now providing, say, 1.5X or even 2X in the same amount of time. The entire department is better at providing more value, and over time we do more with more while using less.
In my previous post, I discussed the importance of dollars per capita re: resource shortages rather than simply number of staff or absolute budget. This may be something that all IT leaders have already discovered on their own, but to me, divorcing myself from thinking in terms of absolute budget (well, of course Santa Clara has a smaller budget than Stanford or a huge state school) or number of staff and thinking in relative terms really helped open my eyes. Basically, the real issue at the university or even project level is how much funding is available per user/customer/student/etc. It’s not about an absolute budget. $10,000 for 1 customer is enough to buy a whole dedicated server plus a decent bit of enterprise software. But the same budget for an entire project meant to serve 1000 students means sapping staff time and other resources from other projects and initiatives. That actually provides poor DPC for the one project, and decreases DPC for the other one(s).
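The arithmetic behind the $10,000 example is just division – the same absolute budget produces wildly different dollars per capita depending on head count. A minimal sketch, using only the figures from the paragraph above:

```python
# Dollars per capita (DPC): the post's point that absolute budget is
# meaningless without head count. Figures are the post's own example.

def dpc(budget_dollars, users):
    """Funding available per user/customer/student."""
    return budget_dollars / users

# $10,000 for a single customer: enough for a dedicated server plus software.
print(dpc(10_000, 1))

# The same $10,000 spread across a 1,000-student project: $10 per student,
# which is why staff time gets sapped from other projects to compensate.
print(dpc(10_000, 1000))
```

Trivial as it is, framing budgets this way (per capita rather than absolute) is what lets a small school compare itself meaningfully against a Stanford-sized budget.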
Of course, low DPC is a problem that plagues many organizations, especially in higher education. Let’s face it – many corporate IT departments, for all their notoriety for being somewhat “faceless,” would still laugh at higher ed budgets in relation to goals. When a for-profit company decides to do X, and doing so requires $Y, then it finds the money (or it ceases to exist as a company, I suppose). That’s not the case in higher ed. So how do we address low DPC?
As I have been considering various changes in my approach to management, leadership, and IT in higher ed, I am reminded of the importance of accountability. This is one of the most important parts of a successful team – it is part of the foundation upon which productivity and teamwork rests. In fact, it is part of a critically important cycle that is self-reinforcing – each phase of the cycle helps strengthen the continuation of that process. Accountability begets ownership. Ownership leads to a sense of responsibility. Feeling responsible results in a greater understanding of accountability. And the cycle continues.
Accountability must be pervasive, as well. It cannot be just to one’s supervisor or manager that one is accountable for his or her activities and performance. Peers must feel that they are part of the success of each of their colleagues and the team in general. Conversely, not only should managers be able to hold staff accountable, but peers should have the ability to “call out” those that are not helping meet overall expectations.
The thing about accountability as a departmental, top-bottom, bottom-top, side-side trait is that nothing is explicitly confrontational. Even the most severe conversation becomes about team and goals, rather than personal slight. Instead of “you are messing up my ability to get my job done,” one can say “we must rely on each other to get this project done to achieve a common, team goal.” I realize, of course, that we do not live in a utopia and that the former statement will still occur even in the most collaborative of environments now and then. The point is that co-dependency can become the foundation for discussion in a system that relies on accountability and shared ownership.
The question, therefore, is how to build what I call a “platform” for accountability. Much in the way that Windows or Facebook is a platform for the development of software, accountability can be the foundation upon which projects and communication are constructed.
Lately, I’ve been working either with people who are less than enthusiastic about developing a meaningful rapport with me and my department, or with people affected by various issues that have made them less collaborative/cooperative. In general, I try to build relationships that will help out in the long run. That will create allies, that will form partnerships, etc.
I have learned recently that perhaps it’s a futile effort. That the best tactic is, to turn a phrase, to put [the] baby in the corner. <nod to Dirty Dancing>
I use the term “baby” on purpose. A professional that is unwilling to develop a rapport – or even listen to one proposing to form such a relationship – is, in the context of a professional work environment, a baby. This is someone who is immature, pouts about the realities of his or her job rather than faces up to the challenges, and points fingers and places blame on others.
When working with someone like this, my manager recently gave me some very sound advice. Don’t try to build a rapport. Ignore all the inane, illogical issues surrounding the discussion. Put out of mind the obvious fact that if we were to work together, we could get so much more done.
Focus on what you need, and how to get it.
Not in a selfish way – those of us who get frustrated over a failure to build rapport are likely not the selfish ones when it comes to working with others. But in the sense that, should all diplomatic efforts fail, you just focus on what you need to get your job done.
Put that “baby” in a corner. Pin him or her down with whatever mental constructs you need to block out all the noise. Focus in on what information you need – how long before the problem is fixed? What do you need from me to fix it faster? How quickly can we get out of this conversation now that we’ve gotten our needed information? – and put on blinders to everything else.
This is really a last resort (and as last resorts go, this is far from Machiavellian) and one should still go for collaboration and communication first. But I’m already finding it to be a useful communication construct when one runs into serious and undeniable barriers.
Counter-points? For those 5 people that read this?
One of the most common “issues” and topics of discussion among IT professionals in higher ed is our potential obsolescence in the face of the changing student population, the infusion of uncontrolled media, and non-university solutions for connection – IM, Facebook, etc.
There are various articulations of this fear, but the gist is that because of all of these changes, the way we have always done IT will no longer be relevant, and we will lose our jobs. Or, at the least, that we need to watch for and perhaps even fear these changes.
I am, as I begin this post, attending a keynote regarding the paradigm shift that social media, desktop servers, cloud computing, and other technologies present to (university) IT departments.
Let me rephrase that to work better for me: the SUPPOSED paradigm shift…
As I often do, I must preface the rest of this post with a bit of a disclaimer. The keynote is by Sheri Stahler, the Associate Vice President for Computer Services at Temple University. She is clearly an intelligent person and I’m sure she’s a great VP and manager. She certainly is a very affable and friendly person – at least she was when we ran into each other in the elevator at the hotel at which this conference is being held. This is not a criticism, much less an attack on her in any way. This is about the points being made. These perceptions are not uncommon in higher ed (certainly evidenced by some of my fellow attendees raising their hands to certain queries posed by Ms. Stahler), and that truly and deeply worries me.
Ms. Stahler’s points surrounded a supposed paradigm shift caused by web 2.0, 3.0 (2.0 + federated ID via Facebook Connect, etc), social media, and the changing perspectives of today’s students. This shift jeopardizes the very jobs of IT staff in higher education. Our methods are no longer effective, and our jobs are in danger. This is a gross oversimplification, admittedly.
I had the pleasure of convening and attending a presentation by Dr. John Hoh, the Director of Information Technology Services at the Harrisburg campus of the Pennsylvania State University, later this same day. While it’s awfully difficult to describe the entire session, the gist is that one must look strategically and quite critically at one’s service portfolio and identify which are commodity services that can be outsourced, which are high-maintenance, low-value services that should be handled by only a small set of staff, and which are the “meat” of your overall services – the stuff that you want to be good at, and that you want others to know about. Determining this requires a very forward-looking perspective on matters. As Dr. Hoh said, the goal is to become solution-providers, not break-fixers.
Being a solution provider means that one can identify issues, see trends as they emerge, and move to take advantage of those trends as appropriate. If one is a solutions provider, then one’s job cannot, by definition, be in danger. Seeing emerging technologies not just for the dangers they pose to our existing duties but also for the opportunities they present is exactly what future-proofs such staff against obsolescence.
Even without taking Dr. Hoh’s aggressive, progressive stance, I would argue that we are all in the business of analyzing the ecosystem that includes technology and higher education. In the same way that we must now consider how to deal with the emergence (eruption?) of the tablet device or the commoditization of Help Desk services, IT departments previously had to examine the commoditization of personal computers and the emergence of computers as a part of everyday academic life – and develop those very same Help Desk services.
In conclusion, we must look at ourselves as solutions providers, and ones that determine those solutions based on our ability to analyze changing scenarios. We have never just been IT folks, and we certainly should never be people that focus on how the “way we’ve always done things” is or is not threatened by change. Our jobs should be to analyze and change with new trends. While our duties might change, our job does not.
Usual disclaimer: IT groups at any university are faced with a tough challenge. Limited resources, usually not quite enough staff to manage too many enterprise-level services, and a strong, legitimate desire to do things the right way that gets misread as slow response, lack of concern, and/or a number of other negative opinions from constituents. I don’t like saying this is a “thankless job” because it’s an overused term, but it really can be like that. I’m sure that the various folks indicated and implicated in this post are doing their best – I know that they are. And I know that they could probably write posts about me that are similar, too.
Having said that…there have now been 2 instances of what I consider to be false advertising, followed by an attempt to hide the tracks leading to those inaccuracies, that truly, deeply frustrate me. At the very least, there is a lot of spin going on. Yes, I know these are strong words.
A while ago, I posted about how hard it is to be a manager. It was a kind of introspective, philosophical post rather than an in-depth analysis of management. I was doing an off-the-cuff look at the conflict between being a manager and a leader. The two are different, but unless you happen to have an administrative manager and a…leader manager, you often have to be both. Someone took it rather personally, though. The specific comment was:
“Since when did managers “lead”? Their job appears to be to punish creativity.”
This was an incredibly harsh reaction to my post, though I think more indicative of the contributor’s experiences than the content of my post, to be honest. But it does get at a very key thing – if the key responsibility of a manager is to control resources, doesn’t that stifle creativity to some extent? How much freedom can a manager provide when that person is looking at whether we can afford this, or whether this falls within a certain policy, etc? Managers tend to look at boundaries – it’s an inherent part of the job.
However, it need not be the ruling philosophy, and I am actually quite opposed to an approach that looks at limits rather than opportunities. I think that if one looks only at the boundaries and thinks first about policy then there is less rather than more organization, and certainly less creativity. So I do not at all agree with the comment quoted above – I do think it’s possible to be a manager, and encourage creativity.
I don’t quite formalize things like Google does, where employees are asked to spend a certain amount of time each week thinking of “new ideas,” but I do put the responsibility of thinking of new concepts or new ways of doing things on the staff in my department. I want to be able to trust them not only to do their jobs, but to approach those jobs with an eye towards thoughtfulness, thoroughness, and creativity. So I want everyone to think about what is being done and whether all the bases have been covered (documentation, informing people, etc. – yes, this can impose structure rather than invite creativity), but then to ask “is this the best way?”
Even if it seems to be the only clear method, I encourage staff to then posit “there is another way. What is it, and is it better?” I hope that they will come to me with those ideas. Yes, I will have to think about costs, because we don’t have an unlimited budget. But I also budget each year for “random things we’ll try because they are cool,” and I hope that staff will take advantage of that.
Management need not stifle creativity. Management should, in fact, encourage it. Maybe crossing the line to leadership is another whole ball of beans (messier than just a can of beans, no?), but at the very least a good manager should leave room for creativity.
My biggest fear, by the way, as I write this is that someone that knows me and my management style will read this and immediately think “Allan doesn’t manage like that at all. He’s a dictator and control-freak, not one that encourages creative thinking.” I try not to think about that.
After a long hiatus from this blog, during which I was basically swamped at work, I return to the idea of how to redefine or perhaps restructure law school to make better use of its faculty, give more to the student, and get away from the traditional models of revenue and federal aid reliance. I seek to “edupunk” law school.
I don’t have all of the “tenets” of the edupunk and edupreneur movement in front of me, but some really stick. One key aspect is that, since salaries make up a huge portion of a school’s costs, it is critical to make the most of every dollar. Especially with faculty that one must lure away from other schools, the amount of time each professor spends actually imparting wisdom unto students is the major metric. Especially for law schools, where salary (rather than tenure and job security) is often the number one reason that a lawyer would leave a lucrative position at a firm in order to teach, maximizing the contact between student and professor is important.
One means of achieving this is to bring in more adjunct faculty to do the “dirty work” for the professor. Creating exams, grading, even evaluating written assignments could conceivably all be done by lecturers or other faculty that are not on the track to tenure. Of course, this requires that the adjuncts work very closely with the professor so that the grading and exam methodology be in sync with the course materials and the professor’s style of teaching. Now the tenured faculty can spend their time in front of and with students and, hopefully, engaging others about how to change the way law is taught in an environment of continual creativity and improvement.
However, the edupunk model falls short here, because even adjunct faculty are often a significant financial load on a law school – much more so than the lecturer who runs between jobs in different fields at four separate colleges in an attempt to piece together one decent salary. Also, many adjuncts are practicing lawyers and even sitting judges. These are not secondary members of the faculty who do supporting educational work for the school. These adjuncts often teach courses that are popular electives with students, and they need to be in front of students just as much as the tenure-track faculty.
The question, therefore, is whether there is a role for non-tenure-track faculty at a law school who are valuable both for teaching their own courses and for supporting the overall work of a tenured faculty that is presumably one of “the” reasons for attending that school.
So…this trend doesn’t work for law schools; this particular method of saving costs wouldn’t.
Hopefully more success in the next attempt.