(Originally posted by myself on LinkedIn on June 10th, 2017).
A few months ago, I was set a challenge by my boss: fundamentally, I needed to come up with a way of measuring the productivity of a Network Architect beyond the usually accepted measure of chargeable utilisation.
Our business had historically been driven by cost recovery, carried out through timesheet booking and the project accounting based on it. What this didn't do was show how efficient and effective the Architect was being. Was he or she really delivering that 10 day piece of work in 10 days? Or was it taking 20 or 30 instead? On one level we didn't especially care: if it took 30 days, we'd be paid for 30 days, and the projects paying for the time would end up carrying the cost overrun. And how accurate were our resource estimates anyway? But on another level, we had a backlog of work for our Network Architects, so if they were taking longer than expected, we needed to know!
It would be nice to think that Project Accounting would pick up any overrun and flag it, but the brutal truth was that it wasn't, at least not in time to be useful, and we'd be criticized after the fact for allowing the overrun to take place.
Whatever metric we identified needed to be easily measurable and quantifiable, so as to form part of a weekly scorecard submitted to the Senior Management Team (SMT), and ideally not require significant effort on the part of the TDA (Technical Design Authority) to produce.
In essence, an Architect's productivity is the sum of the effectiveness of multiple process elements.
In our case I picked two easily measurable items to begin with.
1) Design Approvals
Our design governance process requires that a completed design be reviewed at a Design Review Board, and approved before being issued to stakeholders. This would apply to an HLD, an LLD, or a “Design Brief” which is a simpler 3-4 page document intended for in life changes to existing environments. Any future changes to the design beyond this point would require re-approval. In all cases, controlled approval numbers were issued to documents that got approved, and these were specific to the version of the document reviewed.
Clearly, if we were working optimally, the Architect would be getting their designs right first time, with sufficient detail and of good enough quality to be approved at the first attempt.
If documents required repeated attempts at getting approval, we were not working efficiently. This was measurable from our design approval register, where we could compare the number of designs approved versus those not approved in any given period.
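As a sketch of the calculation, the first-time approval rate can be derived from such a register by looking only at each design's first review. The field names and register format below are my own assumptions for illustration, not the actual system described:

```python
# Illustrative sketch: first-time approval rate from a design approval
# register. Field names ('design_id', 'approved') are assumed, not taken
# from the real register.

def first_time_approval_rate(reviews):
    """reviews: list of dicts with 'design_id' and 'approved' (bool),
    in chronological order of Design Review Board sittings."""
    first_attempts = {}
    for review in reviews:
        # Only the first review of each design counts towards the metric.
        first_attempts.setdefault(review["design_id"], review["approved"])
    if not first_attempts:
        return 0.0
    passed = sum(1 for approved in first_attempts.values() if approved)
    return passed / len(first_attempts)

register = [
    {"design_id": "HLD-001", "approved": True},
    {"design_id": "LLD-002", "approved": False},  # failed first review
    {"design_id": "LLD-002", "approved": True},   # passed on resubmission
]
print(first_time_approval_rate(register))  # 0.5
```

Reported weekly, a falling rate is an early signal that designs are going to the board before they are ready.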
2) Repeat Approvals
In a similar vein, designs that required a repeat approval for a v1.1, v2.0, etc. meant costly re-work of a solution design. This may not have been our fault as such, since project or customer goalposts regularly move, but equally it could be an indication that we weren't asking the right questions first time around. So any v1.0 approval was counted as a success, and anything greater was counted as a failure, giving us our second metric.
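The counting rule is simple enough to sketch; the version-string format here is an assumption for illustration:

```python
# Illustrative sketch: a v1.0 approval counts as a success; any later
# version (v1.1, v2.0, ...) indicates re-work. Version strings are an
# assumed format, not the actual approval-register convention.

def repeat_approval_rate(approved_versions):
    """approved_versions: one version string per approval number issued."""
    if not approved_versions:
        return 0.0
    rework = sum(1 for version in approved_versions if version != "1.0")
    return rework / len(approved_versions)

approvals = ["1.0", "1.0", "1.1", "2.0"]
print(repeat_approval_rate(approvals))  # 0.5
```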
This second measure feels like it’s a criticism, and it isn’t intended in that way. In actual fact, what it has served to highlight is that we weren’t very effective at managing project change. Customers would often ask us to accommodate changes mid-way through a project, and rather than managing that as a project change and a lever for us to flag that it would require additional effort and/or cost, we would try and accommodate it within the in-flight design work, which would in turn contribute to the cost/effort overruns that we were trying to capture.
And the obvious!
Of course, chargeable utilisation is still a relevant factor. In our business, we would be 100% cost-covered and make our planned margin if Architect utilisation hit 85%. The remaining 15% was intended for team meetings, admin, appraisals, one-to-one meetings, and the like, plus some training time if we could ever fit it in! If we could exceed the 85% threshold (and we frequently did!), we'd make more profit than we expected to, but at the cost of those other items.
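The utilisation check itself is trivial arithmetic. The 85% threshold is from the figures above; the weekly hours used here are assumed purely for illustration:

```python
# Illustrative sketch: chargeable utilisation against the 85%
# cost-recovery threshold mentioned above. The hours figures are
# assumptions for the example, not real timesheet data.

COST_RECOVERY_THRESHOLD = 0.85

def chargeable_utilisation(chargeable_hours, total_hours):
    """Fraction of timesheet hours booked to chargeable work."""
    return chargeable_hours / total_hours

util = chargeable_utilisation(chargeable_hours=34.0, total_hours=37.5)
print(f"{util:.1%}, above threshold: {util >= COST_RECOVERY_THRESHOLD}")
```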
As a result of this, we introduced some additional weekly reporting for the TDAs. I created a simple spreadsheet which each TDA could update in a few minutes a week. It identified scope creep and resource overrun, both of which were eventually integrated into the weekly performance dashboard, and it had the added benefit of helping to flag RAG (Red/Amber/Green) issues around resourcing before they actually became issues.
I'd be interested to hear whether anyone else has metrics they use for IT/Network Architects or TDAs, and how the data is captured to generate them.