(co-written with Matt Knight)
Some background, for context
Just over a month ago I was approached and asked if I could provide some advice on assessments to support phase two of the GovTech Catalyst (GTC) scheme. For those who aren't aware of the GovTech Catalyst scheme, there's a blog here that explains how the scheme was designed to connect private sector innovators with public sector sponsors, using Innovate UK's Small Business Research Initiative (SBRI) to help find promising solutions to some of the hardest public sector challenges.
The Sponsor we were working with (one of the public sector sponsors of the scheme) had put two suppliers through to the next phase and allocated funding to see how and where tech innovation could help drive societal improvements in Wales. As part of their spend approval for the next phase, the teams had to pass the equivalent of a Digital Service Standard assessment at the six-month point in order to get funding to proceed.
For those who aren't aware, there used to be a lovely team in GDS who would work with the GTC teams to provide advice and run the Digital Service Standard assessments for the projects. Unfortunately this team was stood down last year, after the recent GTC initiatives had started, leaving the teams with no one to talk to about assessments, and no one in place to assess them.
The sponsor had reached out to both GDS and NHS Digital to see if they would be willing to run the assessments or provide advice to the teams, but had no luck, which left them a bit stuck. This is where I came in: I've blogged before about the Digital Service Standards, which led to the Sponsor reaching out to me to ask whether I'd be willing and able to help them out, or whether I knew any other assessors who might be.
Preparing for the Assessments
As there were two services to assess, one of the first things I did was talk to the wonderful Matt Knight to see if he'd be willing and able to lead one of the assessments. Matt's done even more assessments than me, and I knew he would be able to give some really good advice to the product teams to get the best out of them and their work.
Matt and I sat down and discussed how to ensure we were approaching our assessments consistently: how to honour and adhere to the core tenets of the Digital Standards while also assessing the teams' innovation and the value for money their services could deliver, in line with the criteria for the GovTech scheme.
What quickly became apparent was that, because this was supporting the GTC scheme, the teams doing the work were fully private sector, with little experience of the Digital Service Standards. A normal assessment, with the standard 'bar' we'd expect teams to meet, wouldn't necessarily work well; we'd need to be a little flexible in our approach.
Obviously, no matter what type of assessment you're doing, the basic framework stays the same: start with user needs, then think about the end-to-end service, then talk about the team and design and tech, and along the way ask about the awkward stuff like sustainability, open source, accessibility and metrics. That framework can be applied to almost anything and produce a useful result, regardless of sector, background or approach.
As the services were tasked with trying to improve public services in Wales, we also wanted to take account of the newly agreed Welsh Digital Standards, using them alongside the original Digital Standards. The main difference was the parts of the Welsh Standards that cover ensuring the well-being of people in Wales and promoting the Welsh language (standards 8 and 9); you can read more about the Well-being of Future Generations Act here.
The assessments themselves
The assessments themselves ran well (with thanks to Sam Hall, Coca Rivas and Claire Harrison, my co-assessors). While the service teams were new to the process, they were both fully open and willing to talk about their work: what went well, what didn't, and what they had learnt along the way. There was some great work done by both of the teams we assessed, and it's clearly a process that everyone involved learned a lot from, service teams and sponsor team alike; it was great to hear how they'd collaborated to support user research activities. Both panels went away to write up their notes, at which point Matt and I compared them to see if there were any common themes or issues. Interestingly, both assessments had flagged the need for a Service Owner from the sponsor to be more involved, to help the teams identify their success measures.
When we played the recommendations and findings back to the Sponsor, this led to an interesting discussion. The sponsor had nominated someone to act as the link for the teams, to answer their questions and to provide what guidance and steers they could. But because of the terms of the GTC scheme, the rules on what steers they could and couldn't give were quite strict, to avoid violating the terms of the competition. Originally the GTC team within GDS would have helped sponsors navigate these slightly confusing waters of competition rules and processes; without an experienced team to turn to for advice, sponsors are left in a somewhat uncomfortable and unfamiliar position, although this sponsor had clearly done their best (and the recommendations in this blog are general comments on how we can improve how we assess innovation across the board, not specifically aimed at them).
Frustratingly, this meant that even when teams were potentially heading into known dead-ends, the sponsor could try to provide some guidance and steer them in a different direction, but couldn't force the teams to pivot or change; the only option would be to pull the funding. While this makes sense from a competition point of view, it makes little to no sense from a public purse point of view, or from a Digital Standards point of view. It leaves sponsors stuck when things have gone a little off track: rather than being able to get teams to pivot, they are left choosing between potentially throwing away some great work, or investing money in projects that may not be able to deliver.
Which raises the question: how should we be assessing and supporting innovation initiatives? How do we ensure they're delivering value for the public purse while remaining fair and competitive? How do we ensure we're not missing out on innovative opportunities because of government bureaucracy and processes?
In this process, what is the point of a Digital Service Standard assessment?
If it's like most other assessment protocols (do not start Matt on his gateway rant), then it's only there to assess work that has already happened. If so, it's not much good here, where teams are so new to the standards and need flexible advice and support on what they could do next.
If it's to assess whether a service should be released to end users, then it's useful in central government when looking to roll out and test a larger service, but less useful when it's a small service, one with mainly internal users, or a service earlier in the process aiming to test a proof of concept.
If it's to look at all the constituent areas of a service, and provide help and guidance to a multidisciplinary team on how to make it better and what gaps there are (plus a bit of clarity from people who haven't got too close to see clearly), then it's a lot of use here and elsewhere; but we need to ensure the panel has the right mix of experts to assess all of this.
While my panel was fantastic, and we were able to assess the levels of user research the teams had done, their understanding of the problems they were seeking to solve, their ability to integrate with legacy tech and how their team was working together, none of us had any experience of assessing innovation business cases, or of judging whether teams had done the right due diligence on their funding models. The standards specify that teams should have their budget sorted for the next phase and a roadmap for future development; in my experience this has generally been a fairly easy yes or no, and I certainly wouldn't know a good business accelerator if it came and bopped me on the nose. So while we could take a very high-level view on whether a service could deliver some value to users, and whether a roadmap or budget looked reasonable, a complex discussion on funding models and investment options was outside our wheelhouse, and not an area we could offer any useful advice or recommendations on.
How can we deliver and assess innovation better going forward?
If we're continuing to use schemes like the GTC scheme to sponsor and encourage private sector innovators to work with the public sector to solve important problems affecting our society, then we obviously need a clear way to assess their success. But we also need to set up these schemes in such a way that the private sector is genuinely working with the public sector; that means working in partnership, able to advise and guide where appropriate, to ensure we're spending public money wisely.
There is a lot of great potential out there to use innovative tech to help solve societal issues, but we can't just throw those problems at the private sector and expect them to do all the hard work. While the private sector can bring innovative and different approaches and expertise, we shouldn't ignore the wealth of experience and knowledge within the public sector either. We need people within the public sector with the right digital skills, who are able to prioritise and understand the services being developed, in order to ensure the public purse doesn't pay for stuff that already exists to be endlessly remade.
Assessment can have a role in supporting innovation, as long as we take a generous rather than nitpicking (or macro rather than micro) approach to the service standard. Assessments (and the Standards themselves) are a useful format for structuring conversations about services that involve users (hint: that's most of them); just the act of starting with user needs (point 1) rather than tech changes the whole conversation.
However, to make this work and add real value, solving a whole problem for users (point 2 of the new UK government standard) is critical, and that involves having someone who can see the entire end-to-end process for any new service, and devise and own success measures for it. The best answer to both delivering innovation and assessing it is bringing the private and public sector together to deliver real value: creating a process that builds capacity, maturity and genuine collaboration within the wider public sector. A space to innovate and grow solutions. True multidisciplinary collaboration, working together to deliver real value.
Big thanks to Matt for collaborating on this. If you want to find his blog (well worth a read), you can do so here: