Author Archive: kaiyen

reversing the magnets

The post title is about friends becoming…more than friends (can you name the movie?), but this is NOT about that.  It is about the nature of a (professional) relationship changing once I realized just how significant that relationship was.

I recently accepted a position as CIO at Menlo College in Atherton, CA.  A post about that is in progress, but discussions with people have motivated me to write this one first.

I have always trusted, respected, and in many parts of my job admired my manager.  She is a professional above reproach who still invests herself personally in her projects.  She treats everyone fairly.  Most importantly, she has been a mentor to me (as much as a manager can ever be a true mentor).  She has helped me along from a young, inexperienced (never been a manager before) but presumably filled-with-potential subordinate to someone who has, I think, proven to be an adaptable, strong-willed leader of a group that needed change and realignment.  This is no small task for me to have accomplished, and I could not have done it without her support and guidance.

However…when I applied for this new job, even though I respected my manager so much, the concept of trusting one's direct supervisor to the extent of using her as a direct reference, or of notifying her of my intent to apply for another job, was foreign to me.  It's still a bit weird, to be honest, but the concept literally did not compute or exist or…anything.

It's very difficult to explain in writing, to be honest.  But I can offer my thought process as an example of how I honestly did not even conceive of this idea before.  My instinct is that she is my manager, and therefore I don't do anything until an offer is in hand.  As a manager she will take my efforts as a sign of not being committed to my job, a lack of loyalty, etc.  But perhaps that is too simplistic a view.  In some cases, the layers above that basic reporting structure should be considered.  That step – taking the time to consider the relationship as unique and distinct from a generic manager-subordinate one – just never happened.  It never entered into my thought process.

Looking back (a whopping 2 weeks ago) and having spoken to a few people in very high-level executive positions at the school and the university, I now realize that in some cases, such a relationship is possible.  One has to be careful, and I'm not sure I'll ever have such a relationship again.  But I do feel that it is possible, and I will, in the future, consider these extra layers as I look forward in my career.

hm.

why not to cut your travel/conference budget

The subject line might make this seem like a really obvious post.  Of course, regardless of financial pressures, one should try to keep as many budget line items as possible and therefore not sacrifice the travel/conference budget.  We never want to cut anything, right?

However, both after the dot-com bubble burst and at the beginning of the Great Recession, I saw departments slash these budgets first.  The very first thing to go is travel, and suddenly no one goes anywhere.  It's just accepted, without much discussion, as a luxury that cannot be afforded anymore.

I argue that this should be one of the last things you cut – that you should fight for it vigorously in a budget defense, even to the point of sacrificing other services in order to maintain that allocation.  Of course, what you really should do is energetically and critically analyze your overall service portfolio, find things that can be cut and/or made more efficient, and keep that travel budget.  I would never advocate abolishing an existing service without careful thought just for the sake of being able to attend a conference.  But I am certain that there is something that can be cut if you look closely enough.  And make hard decisions.

At the least, cutting travel budgets should be just as hard a decision as eliminating an existing (core?) service.  It shouldn’t be an automatic decision when budgets get tight.

This isn't really about the need to network, meet in person, etc.  Truth be told, while I value the opportunities to meet with people, I am fully aware that we can create and maintain very strong professional relationships – and exchanges of information – without meeting in person.  We can take it as far as the occasional video conference to really get things sorted out and understood properly.  You don't have to meet in person.

This is about professional development, and connection to the community that helps foster that development.  And accomplishing the former via the latter is only viable if you maintain a presence and set of relationships that grow from consistent attendance at certain conferences.  You attend often enough to get invested, and you go again and again, and become more and more involved.  This becomes an investment from your department in you, and you in your development.

I put forth my "path" to core committee involvement for the 2012 SIGUCCS Conference as an example.  The conference is held annually and brings together about 300-325 people in higher education IT (attendance topped out at 450, but 2008 wiped the slate clean, almost).  These attendees range from executive to line level, from CIOs to Help Desk Managers and even a few software developers.  SIGUCCS is part of the Association for Computing Machinery (ACM), the main benefits of which are the requirement to write a formal, standards-compliant paper on one's presentation topic (if you want to present, you have to write a 4-page paper – that will make people decide whether they are really willing to get involved, even at the speaker level) and the inclusion of that paper in the ACM Digital Library.  I've done only 3 papers and they're fairly old, but I've been cited a few times and yes, they're on my resume/CV.  (I'd link to the papers, but you have to be a SIGUCCS or ACM member to view them.)

Every time I have attended SIGUCCS, I have increased my network through in-person meetings and chats.  But I’ve also become more and more invested in the organization, and I think I have developed as a professional as a result.

I look at my path to where I am today vis-a-vis SIGUCCS.

I realize that involvement can build from year to year.  But notice that except for a small gap in 2007-2008, when I went through a rather significant career change, I moved from attendee at the Technical Conference, to attendee and then program involvement for the Spring Management Symposium (which is aimed more at aspiring leaders than line staff), to the actual conference core planning committee for 2012 (and I've been invited to repeat the role in 2013).

I'm not saying that people around the country are saying "oh, Allan Chen?  Yeah, he's that guy from SIGUCCS!"  But I can tell you that if you said "Brad Wheeler," I'd say "that visionary CIO from Indiana University that I read about in Educause all the time."  SIGUCCS is not Educause, but then again it would take me a lot longer to gain this level of involvement with Educause (especially because Educause is so big that organization of its conferences is generally handled through its own existing mechanisms – not volunteers.  I'd have to be writing articles and whatnot to reach any level of notoriety).

I am invested in SIGUCCS.  The people whom I meet at the SIGUCCS conference – even those whose budgets have been slashed and who only come every other year – are ones I consider consulting when I run into various problems, in the exact same way that I'd think about calling someone over at Central IT or perhaps up the road at Stanford.  And I have developed professionally, which is a benefit to the department and yes, to myself in the long term should I look to other professional opportunities.

And all of this is because I have fought for the travel budget.  Because we stopped offering staffed video recordings in non-automated rooms (something we'd been wanting to do for a long time anyway – we're putting our energy towards lobbying to automate the rooms instead), because we cut back on an ambitious cloud-storage pilot (let's find 50 committed users rather than 50+50 occasional users), and because we continue to look critically at our budget and service portfolio, we have maintained our travel budget.  And my web developer gets to go to the one conference per year that is the conference for people in his field.  My Systems Manager has been able to go to a couple of intensive virtualization briefings and trainings, and I can bring one of my Support Team folks to SIGUCCS as well.  In the past I've attended Educause, too (though now it conflicts too much with SIGUCCS).

So think twice before you cut that budget.  Or perhaps take another look.  It's an investment in your team to be able to send them to conferences.  It's an investment for an attendee in the conference itself and the community thereof.  And it's an investment for almost everyone professionally.  And if we don't care about our level of investment in our jobs, our careers, and the quality of our work…are we in the right field?

supporting early adopters, or punishing those that go off the reservation?

A combination of the Bring Your Own Device (BYOD) "movement" and the rise of widely-used, highly-effective third-party communication systems (e.g., Gmail over whatever your company/institution is using) has created what I see as a conflict about whether to support people you'd call either early adopters or ones who have gone off on their own, away from standardized systems.

First, some general definitions and stuff.  BYOD has arisen mostly because incredibly powerful computing devices have hit the mainstream.  Tons of people have smartphones (though the actual percentage is lower than you might think – recent surveys indicate that a solid 30-40% of cell phone users have, at most, "feature" phones with keyboards or some other extra feature for faster texting, but no web browsing or anything like that.  Many others just have flip phones for regular calling).  Lots of people have tablets.  And laptop ownership went past 75% a long time ago.  So with all of these devices already in our pockets and bags, we have to start considering the ramifications of so much computing and productivity power being brought to our campuses and workplaces, rather than being provided by us.  A computer lab might need to be only half the size it was just 2 years ago, with the empty desk space being far more useful for students and their own laptops, etc.

Even though AOL first offered its own e-mail system a very, very long time ago (I was on AOL starting in 1994-1995, and I remember it had not yet purchased CompuServe), the massive shift to yahoo and google for personal e-mail also causes an issue.  In the same way that we have to consider that users have their own devices which they prefer to use over the ones we provide, we must also be aware that one's well-established gmail account might supersede the benefits of an organizational e-mail system (or calendar system, or chat system, etc).  Yes, for work purposes there is the issue of separating official from personal e-mail, but the lines get more and more blurred, even for employees, as adoption of these other tools gets higher and higher.  Investment in one's personal gmail account gets so high that his or her identity is based on that account, not the name.edu or company.com one.

We have faculty at Santa Clara Law – people who are scholars associated with an academic institution – who use their gmail accounts with gmail.com suffixes over their scu.edu accounts.  I recently worked with a few marketing folks who asked me to contact them on their yahoo and gmail accounts instead of their company ones, because it was faster (and easier to get to on their phones, etc).

The two movements largely go hand-in-hand.  Because it's so easy to connect an Android phone or iPhone to gmail, people prefer to use that account when "on the go."  As they become more and more mobile, more of their e-mail goes to those "non-sanctioned" accounts.  As more goes there, less goes to the official one.  And so on.

As we look towards a shift to a new e-mail and calendaring (and collaboration) toolset at Santa Clara University, those faculty who switched over to gmail will in fact be left "out in the cold."  They are using the "commercial" version of gmail, and we'll be using the "educational" version of either Google's or Microsoft's cloud-based solution.  Even if we go with Google, there is no migration option from a personal, commercial account to an institutional, Apps for Education one.

So…in a way, we are punishing those that adopted these highly-productive tools (gmail, gchat, etc), potentially a side-effect of early adoption of highly-capable devices (smartphones, tablets).  We are penalizing early-adopters.

Yet we rely on early adopters to push the envelope, to ask the questions that those only one standard deviation away from the mean have not yet considered, and to help motivate and inspire us to do more and be more creative.

How do we resolve this conflict?  Do we create an environment that encourages early adoption?  Many times it is these individuals that help instructional technologists (or just plain technologists) try new things and work out the bugs.  But what happens when we switch to a standard that leaves them out in the cold?  What safety nets do we provide?  If none, then do we risk fragmentation (aside from dissatisfaction, of course)?

the (legal) future of collaborative documents

Fascinating post about the US v. Lawson case over at the blog run by Eric Goldman (a professor here at SCU Law).  It’s actually a guest post by Venkat Balasubramani.

I'm not a lawyer, do not have a JD, and am generally only about 95% informed on things, which makes me very dangerous and likely to make myself look foolish.  To make things worse, the blog belongs to a professor at the law school where I work.  I don't know the guest author, but this is still a pretty dicey situation.  But I'm going through with this post anyway.

Basically, the convictions in US v. Lawson, a case essentially about cockfighting (or "gamefowl derbies") that involved issues of "sponsorship" of the events, were overturned because a juror printed out the definition of the word "sponsor" from Wikipedia and brought it into deliberations.  That, unto itself, apparently would not necessarily warrant overturning the verdict – the original court ruled that it did not prejudice the jury.  The Fourth Circuit, on appeal, ruled that it did prejudice the jury, however, and issued an opinion (but not a decision on the appeal?  this is where my understanding of the various rulings a court can have gets really muddy) on the matter.  The opinion fundamentally argues about the inherent effect of bringing an outside source into the jury deliberation room.  It also, however, makes a lot of comments about Wikipedia specifically, and suggests certain things about other collaborative editing situations.

One way the court discusses the possible validity of the document brought into the jury room is whether the definition procured from Wikipedia was the legally correct one.  But since Wikipedia is always changing, someone would need to prove that the definition on Wikipedia on the day the juror printed it out was accurate, not just the one that is on the site "today."  The government didn't bother to do that, which is weird, but that isn't what struck me.

What really intrigues and disturbs me is this quote from the ruling:

the court notes that even if historical edits were presented by the government, it could not consider these, absent some indication that Wikipedia archives of historical changes are "accurate and trustworthy."

So…while I think it's awfully strange that the government didn't even try to retrace the versions of the Wikipedia page, the court would need an "indication" that the archives would even be accurate.  The court wouldn't even consider the evidence presented unless it could be "proven" that Wikipedia archives were accurate.

What would be sufficient indication?  What would be enough to make one court feel that the archives of Wikipedia (or…revision history in Google Docs?  on Dropbox?) are accurate and reliable?  Wikipedia is based on the MediaWiki platform, but what about other ones (like…WordPress, which is the one that powers this blog)?  Would that "indication" for the Fourth Circuit be enough for the next court if there is an appeal?  What about a court in one state vs. the next?

Considering how many different collaborative editing tools we use these days, this is, to me, a frightening question.  Almost all of these tools – whether a productivity tool like Google Docs or a file storage/sharing system like Dropbox or Box – have some kind of version history system.  If for some reason a document that has been edited collaboratively becomes important in a legal case, what if the court decides that Box’s version history system is not sufficiently accurate?  That it is possible to somehow “break into” the history and alter it, such that it is no longer a reliable part of an e-discovery effort?  Or what if one person collaborating on the document – a “low-level” team member – went through the revision history and restored the document to some earlier point?  Then there is more editing.  If you go from version 1 to version 3, then back to version 2, then edit to version 4…I have no idea.
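For what it's worth, here is one way a versioning system could offer that kind of "indication": a tamper-evident revision history, where each revision's hash also covers the previous revision, so quietly altering an old version breaks every link after it.  This is a minimal Python sketch and purely hypothetical – it is not how Wikipedia, Google Docs, Dropbox, or Box actually store their histories.

```python
import hashlib
import json
from datetime import datetime, timezone

def chain_revision(prev_hash: str, author: str, content: str) -> dict:
    """Create a revision record whose hash also covers the previous revision's
    hash, so altering any earlier revision breaks every later link."""
    record = {
        "prev_hash": prev_hash,
        "author": author,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "content": content,
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record

def verify_history(history: list) -> bool:
    """Re-derive every hash and confirm the chain is intact."""
    prev = "genesis"
    for rev in history:
        body = {k: rev[k] for k in ("prev_hash", "author", "timestamp", "content")}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rev["prev_hash"] != prev or rev["hash"] != expected:
            return False
        prev = rev["hash"]
    return True

# Hypothetical usage: three edits, then someone quietly alters revision 2.
history, prev = [], "genesis"
for author, text in [("alice", "v1"), ("bob", "v2"), ("alice", "v3")]:
    rev = chain_revision(prev, author, text)
    history.append(rev)
    prev = rev["hash"]

print(verify_history(history))          # True  -- archive checks out
history[1]["content"] = "altered v2"    # the "break-in"
print(verify_history(history))          # False -- tampering is detectable
```

Whether something like that would count as "accurate and trustworthy" to a court is, of course, exactly the open question.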

Do we now have to start vetting archiving and versioning systems to meet exposure, e-discovery, and other legal needs in addition to just general security of information?  And should we just be paranoid in general?

to the polygon!

As part of "Project 2012," the team established an explicit expectation that we all operate as a unified team, and that we are all accountable to and dependent on each other.  This may seem like something every team should have, but the point was to explicitly state – and collectively agree – that we would operate in this manner from now on, and to acknowledge that team success depends on team cohesion.

Rather than a traditional org chart of any kind – even a very flat one with a manager (me) and then the rest of the staff reporting up – we are formed more in the shape of a 7-sided polygon.  Not only is each point connected to its neighboring points, but also to every other point.  It's like a 7-point web, really (but a polygon is easier to visualize when I'm describing it to others).  We are all at the same level.  Each position is empowered to act individually to the benefit of the team.  Each position has internal and external responsibilities.  The Classrooms and Media Production Manager, for instance, has the internal operational responsibility of updating our classrooms, the internal programmatic responsibility of researching new educational technology that can be used by faculty in those rooms, and the external responsibility to interface with university Media Services.  Law Tech is dependent on the Classrooms Manager to keep up with Media Services such that we are effectively never surprised.  The Classrooms Manager is dependent on Law Tech to support efforts made while collaborating with Media Services, provide input, etc.

The polygon is actually also a great management tool.  It may seem that the team is decentralized but, because the dependencies are emphasized so heavily, one can never go too far without running into a checkpoint with another team member.  One can have an idea about a classroom technology, talk to other schools that have used it, and talk to the manufacturer.  But once a reseller is contacted, a dependency on the budget manager (me) comes into play.  Even before that, there are dependencies on Operations, to make sure the new technology will fit our overall maintenance system, and on the Help Desk, to make sure the technology can be supported.  Each person on the other end – me as budget manager, Operations for maintenance, and the Help Desk for support – depends on the Classrooms Manager to check in with us at the appropriate points.  Failure to do so is letting us down.
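If it helps to picture the structure, the polygon is really a complete graph: seven positions, every pair connected.  A toy sketch (the role labels beyond the ones named above are placeholders, not our actual titles):

```python
from itertools import combinations

# Placeholder labels: the post names a few roles (Classrooms & Media Production,
# Operations, Help Desk, the budget manager); the rest are hypothetical.
roles = [
    "Budget/Director", "Classrooms & Media Production", "Operations",
    "Help Desk", "Web", "Systems", "Support",
]

# Every position is connected to every other position: a complete graph on 7 nodes.
dependencies = list(combinations(roles, 2))
print(len(dependencies))  # 21 pairwise checkpoints among 7 positions

# Each pair is a two-way dependency: either side can act as a checkpoint
# before an initiative (a new classroom technology, a reseller contact) goes too far.
for a, b in dependencies[:3]:
    print(f"{a} <-> {b}")
```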

In an ideal world, the polygon empowers individuals, improves communications, and leads to overall team improvements in creativity, collaboration, and performance.  At the least, it allows me to expect more from the team (and myself – this isn’t always easy for me to navigate, either) and react to deviations from the polygon approach as appropriate.  So I’m not giving up all responsibility – far from it.  I am asking each team member to step up (and of course I am putting myself in the mix at the exact same level, which I think is a good part of a team-based operation).

As today is my first day back from paternity leave, having put the polygon in place – at least conceptually – proved to be very comforting as I shifted all of my attention to my family.  I knew things would not go perfectly (and truth be told I'm still sifting through stuff now) but I also knew that, when in doubt, the team could "go to the polygon" and ask more of each other, and have more asked of them.

We’ll see how the next few weeks go as I switch back to a more regular schedule.

 

orchids and weeds. not your normal garden.

A number of years ago (7?), a colleague of mine at Stanford – Carlos Seligo – introduced the concept of “Orchids and Weeds.”  He meant it jokingly at the time, but it has stuck with me ever since.

Basically, there are two ends of the spectrum when it comes to technology, support, and adoption.  There are orchids, which are elegant, beautiful and perfectly aligned with needs but require an incredible amount of attention and resources to get running and properly supported.  It is unlikely that support will become more efficient over time.  Weeds, on the other hand, spring up organically and often as ancillary to something else.  They germinate quickly and soon adoption is very high and usage has proliferated.  Weeds might even be used in ways never originally intended.

An example of an orchid might be the Sakai Learning Management System, which has been developed by universities, for universities, but is quite a beast to implement and manage.  Perhaps even some of the major Microsoft products – SharePoint comes to mind – would be orchids, too.  They can be ridiculously powerful if done right, but need a lot of resources to get there.  The two "big" weeds would be e-mail and SMS text messaging.  SMS in particular is a great example.  Originally designed so that cell company employees could report coverage outages and other problems, it has become the primary way for many people to communicate (I send about 2000 texts a month and yet use fewer than 400 minutes).

It is important to note that there is nothing inherently bad about an orchid.  By definition, it is beautiful, elegant, and a perfect solution for the job.  It offers very high value and probably commands a high price (the amount of effort users are willing to expend to access the service).  But the cost is also extremely high.  So the overall value-cost gap (or V-C "wedge," as we'd say in my Capstone MBA course) is smaller than that of, say, text messaging.

But if the fundamental basis for maintaining a service portfolio is value, orchids are just fine.  You just have to pick and choose the right ones.  And that means a strong rubric for separating the useless orchids (just pretty, but not the right fit) from the truly wonderful ones.

Basing services on value provision is pretty simple.  Well, actual "value" is hard to quantify, even in the business world where there is a number for everything.  But if you start with price – again, the expense of time/effort/distance traveled/hoops jumped through/etc. that a user is willing to take on in order to use the service – then you can get an idea of value.  Value is always the same as or higher than price (unless you have one seriously messed-up measurement system).  If you put something somewhere (the physical location of a high-end lab, a web service behind so many clicks of the mouse, etc) and people are still rushing in droves to use it, then the price is probably quite a bit lower than the value.  Whether the price is too low is unclear, but it's not like you're going to disassemble that lab and move it farther away from the dorms or put even more clicks in front of the web service.  Bottom line – value is high, and it can be gauged even if not measured exactly.

So once you have that, you just roll out services that are high value or high V-C.  The value could be high in an absolute sense, in which case cost is not an issue (striking a deal with a cloud-based backup company where the student pays and you get all the kudos but none of the liability).  Or it could be high in a relative sense (buying a bigger SAN for $75,000 to do multiple redundant backups of faculty and staff data, on-site, in exchange for peace of mind for those users – cost and value are both high).  But if something is low in value, just don't bother with it.  Because if it's not of value to anyone, why are you wasting resources on it?  Worst-case scenario, cost is HIGHER than value, and you're just burning resources for nothing.
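To make that rubric concrete, here's a toy sketch of ranking a portfolio by the V-C wedge.  The service names and numbers are made up for illustration; in practice "value" is whatever proxy you can actually gauge (usage despite friction, hoops users willingly jump through, etc.).

```python
# Hypothetical portfolio: value and cost on the same arbitrary 0-100 scale.
# An "orchid" is high value / high cost; a "weed" is decent value / near-zero cost.
services = [
    {"name": "LMS (orchid)",              "value": 90, "cost": 70},
    {"name": "Cloud backup referral",     "value": 60, "cost": 5},
    {"name": "Staffed video recording",   "value": 25, "cost": 40},  # cost exceeds value
    {"name": "E-mail / SMS alerts (weed)", "value": 70, "cost": 10},
]

for s in services:
    s["wedge"] = s["value"] - s["cost"]  # the V-C "wedge"

# Keep the biggest wedges; a negative wedge means burning resources for nothing,
# so it is the first candidate to cut (or at least to re-examine).
for s in sorted(services, key=lambda s: s["wedge"], reverse=True):
    flag = "  <- cut candidate" if s["wedge"] < 0 else ""
    print(f'{s["name"]:26} V-C = {s["wedge"]:4}{flag}')
```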

Ideally, you go low cost and high value – look for weeds.  Get as many of these as you can, because they require low overhead but adoption will be high.  I'm not sure people will "value" them the same way they would other things, but they will surely use the service.  Align your staff to take advantage of these.  Have fewer staff managing more weeds – the ratio will be different.

Then go looking for orchids.  You will probably need project management staff just to test them, then a much lower staff:service ratio to maintain them.  But if you figure out the right potential orchids during testing, then deploy the best 2-5 or so and it’ll be worth it.

 

not reinventing the beeping noise

  • On our 2008 Honda CR-V, we have different beeping alarm tones for leaving the key in the ignition, the headlights on, and the parking brake on.  3 different tones.
  • On our 2005 Mazda 3, we have a single tone that is used for both the keys and the lights.  No parking brake warning.
  • Of all cars, the 2004 Jeep Wrangler – a bare-bones vehicle, zip-up windows, etc – had a single warning signal for the lights, keys, and brake.

How is it possible that three different cars utilize different methods for the simple task of notifying a driver that he or she has forgotten to do something, or left something behind?  The two that are most similar – the Honda and the Jeep – are almost as far apart as possible in terms of types of car (well, if it were an Acura or something, that would be farther…).  And somehow the Mazda can't even bother with all three tones.  The features of the Jeep – a car WITH NO WINDOWS – are more user-friendly than those on our Mazda.

It's not as if the beeping sound for the brake was first invented by Jeep, much less copyrighted (which would be required in this scenario, since the Jeep is older than the Mazda).  Nor the idea of different tones being used in the Honda.  And it can't possibly cost much in terms of Cost of Goods Sold (COGS), which affects margin, for there to be either three tones or three different tones.

If I had to fill out a form right now asking for features I’d like in a car, having tones (same tone okay, since I can never remember which tone is for which problem) for all of those items would be on that list.  These simple things.  Just take the simple things that others have done, and make sure that you have at least those same things.  Then innovate from there.

How is this at all relevant?  Simple – if you're running a Help Desk, get the basics of support, time to resolution, quality of customer service, etc., to the standard that other schools with roughly the same resources and limitations have achieved.  If you have a small server farm – or a giant data center – look at what others have done and just flat-out copy the best-of-breed basics.  I don't mean the whole kit and caboodle.  I don't mean just copy every last detail.  But there have to be common denominators that have sound solutions, proven over time, that are easily replicated.

Mazda should not spend much time deciding whether or not to put in 3 tones – which, I think we can conclude, is a better solution than just 2.  Why not just start off with 3, and spend more time deciding whether an auxiliary audio input should be put in?  Or, a year later, when all Mazda 3s had those inputs because basically all cars had them, how about making the front console more user-friendly or improving something else?  That's innovating beyond the basics.

I'll admit that we are not doing as good a job at this as I'd like – and hopefully as we'd like, either.  But once we get everything to a baseline standard – and it is definitely feasible – then we can start to really mix things up.  Think of new things and build up an overall structure that is above and beyond.

So let’s get started on the foundation.

do more with more, more efficiently, in the same amount of time

I loathe the phrase “do more with less.”  I abhor it.  I loathe and abhor very, very few things in life, and I reserve those venomous verbs for rare occasions.  Yet I both loathe and abhor the phrase and the idea behind the notion of “doing more with less” as a management tool or concept.

The idea that any organization – whether it be an entire institution, a school, a department, or even a single project team – should be expected to provide, say, 125% output with less than 100% resources is utterly absurd.  When this is used during times of economic crisis – which is what we're looking at right now in higher education – the philosophy can be something of a necessary evil.  When budgets are tight, almost any project will have less than 100% of its funding.  When hiring freezes occur, existing staff are spread more thinly across projects.  Resources – both dollars and staff – must be at less than 100% in such a situation.  And at least for the short term, the same set of services must persist.  There simply isn't enough time to retool an entire department, help desk, or other operation in the face of a sudden budget crunch.  Dollars per capita (DPC) goes down because it has to, at least for now.

But this cannot become standard, ongoing policy.  Service portfolios must be reviewed, staff must be rearranged, and overall operations must be reorganized to accommodate the new monetary restrictions.  Only those services that can be offered at the same, pre-crunch level should remain, and overall DPC should be restored.  This has to happen, or everything and everyone suffers.

So “more with less” cannot work as a long-term response to budgetary constraints.

As a general inspirational philosophy, however, it can have some kind of meaning.  "More" and "less" are relative terms.  Provided that there is not the expectation that we somehow work 125% time (in the perhaps utopian world where everyone works to capacity and capability, no one can work more than 100% of what he or she is capable of), doing "more" simply means providing some additional quality with the work we do.  For me, I use value to the customer as the yardstick.  How much value are we offering to our students/staff/faculty (or some subset – residential students?  just the financial aid office?) with our services?  How can we provide "more" value by doing X instead of Y?  Or perhaps by doing a new version of X?

Similarly, "less" just means using a lower quantity of available resources.  It does not and should not mean that there are fewer resources, in an absolute sense, with which to provide the same value – just in a relative sense.  In other words, using "less" is all about efficiency.  How can we provide value in a manner that consumes less time, fewer dollars, or both?

Ideally, how can we provide more value, more efficiently than we are doing now?

Taking the value concept further, it is quite possible that there are extremely high-value products and/or services that are actually extremely costly in terms of resources.  In a perfect world we offer huge value at a low cost – a major streamlining of workflows for a specific office using open-source software that is easy to install and maintain.  But if the value boost is high enough – providing a toolkit that makes Santa Clara Law students "twice" as useful to law firms as students from other schools – then cost becomes less of an issue.  Maybe that toolkit involves building multiple web-based software tools and purchasing, installing, and maintaining a very expensive piece of software that provides practical training.  Maybe it's worth it.  Maybe not.  But it is a possibility one should consider.  The "more" value part should be the first consideration, with the "less" resources – efficiency – part coming next.  Hopefully, if we offer this huge-value, high-cost solution, at least whatever resources we are expending are the absolute lowest amount required, because we are very efficient.

Now – my preferred way of approaching this concept, the only way in which I am truly comfortable espousing anything along the lines of “more” and “less” is to:

“provide more value with more resources, using less resources and being more efficient, all within the same or less amount of time.”

Now, that’s awfully long-winded, but that’s also the history major in me.  It is, however, important that all of these points be included.

Provide more value:  covered above, but again the key thing is that we first want to provide value.  If the customers don’t get the value then we shouldn’t offer it.

With more resources: if I’m asking you to find a way to provide more value, then I’m going to give you a bigger pool of resources – staff and money – to help shape things.  To help form the project at the outset.  Start with loose reins, then bring them in when you are underway and need control.

Using less resources and being more efficient: Of course, it is not acceptable long-term to add services to our portfolio that consume resources disproportionate to the value being provided.  At the least, when pilot leads to production, the service should be streamlined and using fewer resources than all other alternatives.

All within the same or less amount of time: This is key.  We all work 100%.  For most people, that means 40 hours per week.  So what I want is for my staff to do all of this, develop new projects that provide more value, etc, all still within those 40 hours per week.

Of course, if you’re being efficient, then what is really happening is that whereas you provided X amount of value during those 40 hours, you are now providing, say, 1.5X or even 2X in the same amount of time.  The entire department is better at providing more value, and over time we do more with more while using less.

low on resources – what to do?

In my previous post, I discussed the importance of dollars per capita re: resource shortages rather than simply number of staff or absolute budget.  This may be something that all IT leaders have already discovered on their own, but to me, divorcing myself from thinking in terms of absolute budget (well, of course Santa Clara has a smaller budget than Stanford or a huge state school) or number of staff and thinking in relative terms really helped open my eyes.  Basically, the real issue at the university or even project level is how much funding is available per user/customer/student/etc.  It’s not about an absolute budget.  $10,000 for 1 customer is enough to buy a whole dedicated server plus a decent bit of enterprise software.  But the same budget for an entire project meant to serve 1000 students means sapping staff time and other resources from other projects and initiatives.  That actually provides poor DPC for the one project, and decreases DPC for the other one(s).
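The arithmetic behind that example is trivial but worth writing down once:

```python
# Illustrative arithmetic only -- the $10,000 figure and the two scenarios
# (one customer vs. a 1000-student project) come from the paragraph above.
budget = 10_000

dpc_single_customer = budget / 1     # $10,000 per capita: a dedicated server plus software
dpc_student_project = budget / 1000  # $10 per capita: nowhere near enough, so staff time gets sapped

print(f"DPC, single customer:       ${dpc_single_customer:,.0f}")
print(f"DPC, 1000-student project:  ${dpc_student_project:,.2f}")
```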

Of course, low DPC is a problem that plagues many organizations, especially in higher education.  Let's face it – many corporate IT departments, for all their notoriety for being somewhat "faceless," would still laugh at higher ed budgets in relation to goals.  When a for-profit company decides to do X, and doing so requires $Y, then it finds the money (or it ceases to exist as a company, I suppose).  That's not the case in higher ed.  So how do we address low DPC?

just big enough to falter

At first, this post started out as commentary about the challenges that schools of about the same size as Santa Clara University face re: technology, infrastructure, and other needs in relation to the available resources.  That, in turn, would lead to some discussion on how to best address such opposing forces.  This was fundamentally about how Santa Clara was just big enough to need enterprise-level services but not quite big enough to have the staff to maintain such systems, especially over time.

However, I found myself having a terrible time writing, because I eventually realized that this isn't about the size of the school.  Whether your institution is a research university or a small liberal arts college, state or private, funded primarily by endowment or tuition, what fundamentally matters is operational dollars per capita (DPC).

In other words – for any one (relatively large) project, the issue of size vs. resources is about how much money is available per student/faculty/staff/overall user group.

On the one hand, this sounds really obvious.  Sure, if one has fewer dollars, then it must be fewer dollars per person.  But if the project is small, with, say, 100 users, then it's possible that even a small school could get a research grant that yields high DPC – which means resources are likely going to be sufficient.  Also, universities of roughly the same size could have wildly different ratios, leading to completely different environments.  Santa Clara University and Stanford, for instance, differ by only 1000 or so undergraduates (SCU being about 5400, Stanford 6500).  Yet Stanford's huge endowment, research funding, plus incoming tuition make for a much higher DPC figure, on average, than is the case at SCU.

Both SCU and Stanford are "big enough" to need a large-scale ERP like PeopleSoft.  Stanford has probably 25 systems administrators in its overall group, not counting the database administrators and storage administrators that help round out the PeopleSoft operations.  SCU has far fewer.  PeopleSoft itself is scalable – in terms of the software, there is likely little difference between managing 5500 students and 2000 or so faculty and staff at SCU vs. 6500 students and 8000 faculty and staff at Stanford.  But management is not scalable in the same way.  It might take 10 admins – systems, database, and storage – plus a handful of others (security, data center managers, etc) just to get PeopleSoft up and running.  Stanford, because of a much higher DPC figure, will always have that number available, plus more to make the process more and more efficient.  SCU might have just barely that number of staff to begin with, and once other projects are considered then, in reality, staffing is insufficient.

Dollars per capita.  That is the problem, and the “resource shortages” that it produces in institutions with low DPC go much further than simply having enough staff for a particular project.  The shortages in one department lead to decisions to cut back here, eliminate a service there, etc, that another department (perhaps one in the same overall organization) actually needs for its constituents.  So now low DPC has created a spiraling, self-reinforcing interdependency of inadequacy that is no one’s fault, but must be arrested at some point.

Case in point:

  1. The Novell GroupWise e-mail system is managed by the Networking and Telecommunications Group (NTG) BU.
  2. Due to staff shortages, NTG has decided that the feature that allows standard ActiveSync connectivity for devices (the Mobility Pack) will not be installed.
  3. The Service Center BU has identified that providing connectivity for devices such as iPhones and Android phones is a needed part of its portfolio.
  4. The devices support ActiveSync natively, but without the Mobility Pack GroupWise does not.  The Service Center BU has to purchase and manage a third-party product that acts as middleware and offers close to, but not quite, full ActiveSync functionality.

The result: most features work on iOS; Android is hit or miss if it's running anything other than the stock, original versions of the applications (AOSP); and on both platforms users find certain features missing, or events and contact details mishandled.  I personally have had to basically hack both of the Android phones I've had so far in order to get them to work (even though the first one is specifically listed on the third-party vendor's site as "certified").  One of my staff who uses an iPhone, and who eagerly agreed to be the iOS support person, has learned that there are, in fact, a few quirks that he has to help users with.

In this case, because DPC is low for both groups, neither is able to take advantage of scalability or other options and work together to provide an identified, needed service via tools that are actually already built into GroupWise.

Let me make this disclaimer now – this is not a criticism of SCU Central IT.  It is indeed very tempting to point fingers when I have a faculty member doing just that at me because the mail app on the iPhone isn’t working “as expected” or because that professor cannot buy that Android phone he or she wants because it is not compatible.  But I don’t think it is right to blame either group, or anyone in those groups, in Central IT.

Santa Clara University needs an enterprise-level e-mail system.  It also has users in its community who need connectivity via a variety of mobile devices, smart phones, etc.  SCU is big enough to need both of these services.  But the dollars devoted per user (in this case, those who need mobile device access to the e-mail and calendaring system) are not nearly enough to provide an effective solution, even when two different groups are working on the same problem.

The question, therefore, is whether there is any "hope" in such a situation.  Whether a school such as this is destined to slide further and further down the hole of decreasing resources in the face of increasing demands, or whether something can be done about it.

This post has gone on long enough in setting the stage and asking the questions, I hope.  Next comes some ideas on what to do next.