Boring Magic
@boringmagi.cc.web.brid.gy
3 followers 0 following 360 posts
Product and design consultancy. Helping you build products and services that meet people’s expectations. No hype, no fluff. Good, straightforward stuff that just works.
boringmagi.cc.web.brid.gy
# Using quarters as a checkpoint
Breaking your strategy down into smaller, more manageable chunks can help you make more progress sooner. Some things take a long while to achieve, but smaller goals help us celebrate the wins along the way. Many organisations use a quarter – a block of 3 months – to do this. And it can be helpful to look back before you look forward, to celebrate the progress you’ve made and work out what to do next.

Every 3 months, we encourage product teams to take the opportunity to step back from the day-to-day and consider the objectives they’re working towards. The quarterly checkpoint is a time to refocus efforts and double down, change direction or move on to the next objective.

There are 2 stages to using the quarterly checkpoint well:

1. Check on your progress
2. Plan how to achieve your new objectives

Below is a workshop you can run for each stage, but you can combine them into one workshop if you like. Whatever works.

## Check on your progress

First, check on the progress your team has made on your objectives and key results (OKRs). You can do this in a team workshop lasting 30 to 60 minutes.

### 1. List out the OKRs you’ve been working on (10 to 20 mins)

Run through the OKRs you’ve been working on. Talk about the progress you made on each key result and celebrate the successes – big or small!

### 2. Think about what’s left to do (20 to 40 mins)

For any OKRs you haven’t completed – where progress on key results isn’t 100% – discuss as a team which initiatives you have left to do to fully achieve the objective. For example, you may need to collect some data, run a test, build a thing or achieve an outcome.

Consider whether you should change your approach, for example, by doing something smaller or using different methods, based on what you’ve learned over the last quarter. It’s OK to stick to the original plan if it’s still the best approach.

Write down what initiatives your team has agreed to do.
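If it helps to make the progress check mechanical, here’s a minimal sketch of recording key results as current-versus-target pairs. It’s a hypothetical illustration, not from the original post – the objective, key results and figures are all invented.

```python
# Hypothetical sketch: key results held as (current, target) pairs so the
# quarterly check on progress is mechanical. All figures are invented.

key_results = {
    "users completing sign-up unaided (%)": (80, 100),
    "weekly active users": (450, 600),
    "usability test pass rate (%)": (100, 100),
}

def progress(current: float, target: float) -> float:
    """Progress towards a key result as a percentage, capped at 100."""
    return min(current / target, 1.0) * 100

for name, (current, target) in key_results.items():
    pct = progress(current, target)
    status = "achieved" if pct == 100 else "discuss: carry forward or re-plan"
    print(f"{name}: {pct:.0f}% – {status}")
```

Anything below 100% is a prompt for the team conversation above: change the approach, or stick with the plan if it’s still the best one.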
## Plan how to achieve your new objectives

Next, you’ll need to form a loose plan for how to achieve your new objectives. You can treat unfinished objectives from the previous quarter as new objectives.

Run another workshop lasting 30 to 45 minutes for each objective. Everyone on the team will need to input on the plan using the outline below. Write it in a doc, a slide deck or on a whiteboard – whatever works. You will probably want to present these plans to the senior management team or service owner at the start of the new quarter.

If it’s easier than starting with a blank page, team leads can fill in the outline and get feedback from the rest of the team. As long as everyone gets a chance to input, it doesn’t matter. It’s OK if you take less than 30 minutes, especially if you already have a plan.

### 1. Write down and describe the objective

An objective is a bold, qualitative goal that the organisation wants to achieve. It’s best that objectives are ambitious – not super easy to achieve, even audacious in nature – and they are not sales targets.

Write down the problem you’re solving and who it’s a problem for. Discuss how you’ll know when you’re done. What are the success criteria?

### 2. Think about risks and unknowns

What might be a challenge? What are the riskiest assumptions or big unknowns to highlight? Do you need to try new techniques? These might form the first initiatives in your plan.

You can frame your assumptions using the hypothesis statement:

**Because** [of something we think or know]
**We believe that** [an initiative or task]
**Will achieve** [desired outcome]
**Measured by** [metric]

Note down dependencies on other teams, for example, where you may need another team to do something for you.

### 3. Detail all the initiatives

Write a sentence for all the initiatives – tasks and activities – you’ll need to do to achieve the objective. Consider research and discovery activities, which can help you gather information to turn unknowns into knowns.
Consider alphas, things to prototype, spikes, and experiments that can help you de-risk or validate assumptions. Make sure to remember the development and delivery work too – that’s how we release value to users!

### 4. What will you measure?

Review your success criteria. Define the metrics that will tell you when you’ve finished or achieved the objective. These should tell you when you’re done and will become your key results. Remember, metrics should be:

* tangible and quantitative
* specific and measurable
* achievable and realistic

### 5. Prioritise radically

What would you do differently if you only had half the time? How will you start small and build up? What’s the least amount of work you can do to learn the most? Use these thoughts to consider any changes to your initiatives. Go back and edit the initiatives if you need to.

## Don’t worry about adapting your plans

A core tenet of agile is responding to change over following a plan, so don’t be afraid to change your plans based on new information.

The quarterly checkpoint isn’t the only time you can look back to look forward – that’s why retrospectives are useful. You can use the activities above at any point. The best product teams build these behaviours into their regular practice.

If you’d like help running these workshops or have any questions, get in touch and we’ll set up a chat.
boringmagi.cc
# Going faster with Lean principles
Software teams are often asked to go faster. There are many factors that influence the speed at which teams can discover, design and deliver solutions, and those factors aren’t always in a team’s control. But Lean principles offer teams a way to analyse and adapt their operating model – their ways of working.

## What is Lean?

Lean is a method of manufacturing that emerged from the Toyota Production System in the 1950s and 1960s. It’s a system that incorporates methods of production and leadership together. The early Agile community used Lean principles to inspire methods for making digital products and services. These principles have had influence beyond the production environment and have been adapted for business and strategy functions too.

## Books on Lean

Four books on Lean principles have influenced the way I work.

**1. _Lean Software Development: An Agile Toolkit_ by Mary and Tom Poppendieck**
The earliest of the four books. It really set the standard.

**2. _The Lean Startup_ by Eric Ries**
This started a big movement for applying Lean principles to your startup, including testing out new business models or growth opportunities.

**3. _Lean UX_ by Jeff Gothelf and Josh Seiden**
One of my favourites. This one really brought strategic goals and user experience closer together. It also shifted teams from writing problem statements to writing hypotheses.

**4. _The Lean Product Playbook_ by Dan Olsen**
This is relatively similar to _The Lean Startup_ but is more of a playbook, showing the practice that goes with the theory. The highlight is its emphasis on MVP tests: experiments you can run to learn something without building anything.

## Lean principles

All these books set out principles, all based on the original Lean principles from Toyota. They’re all pretty similar. Combining their approaches helps us apply Lean principles to business model development, strategy, user-centred design and software delivery.
> A note on principles: principles are not rules. Principles guide your thinking and doing. Rules say what’s right and wrong.

### 1. Eliminate waste

Reduce anything which does not help deliver value to the user. So: partially done work; scope creep; re-learning; task-switching; waiting; hand-offs; defects; management activities. Outcomes, not outputs.

### 2. Amplify learning

Build, measure, learn. Create feedback loops. Build scrappy prototypes, run spikes. Write tests first. Think in iterations.

### 3. Decide as late as possible

Call out the assumptions or uncertainties, try out different options, and make decisions based on facts or evidence.

### 4. Deliver as fast as possible

Shorter cycles improve learning and communication, and help us meet users’ needs as soon as possible. Reduce work in progress, get one thing done, and iterate.

### 5. Empower the team

Figure it out together. Managers provide goals, encourage progress, spot issues and remove impediments. Designers, developers and data engineers suggest how to achieve a goal and feed into continuous improvement.

### 6. Build integrity in

Agility needs quality. Automated tests and proven design patterns allow you to focus on smaller parts of the system. A regular flow of insights to act on aids agility.

### 7. Optimise the whole

Focus on the entire value stream, not just individual tasks. Align strategy with development. Consider the entire user experience in the design process.

## Three simpler principles

If those seem like too many to get started with, I want to introduce three simpler principles that can help you go faster. I came across these in a book about running, which doesn’t seem like the place you’d find inspiration about product management!

Think easy, light and smooth. It’s from a man called Micah True who lived in the Mexican desert and went running with the local Native Americans. They called him Caballo Blanco – ‘White Horse’ – because of his speed.
> “You start with easy, because if that’s all you get, that’s not so bad. Then work on light. Make it effortless, like you don’t give a shit how high the hill is or how far you’ve got to go. When you’ve practised that so long that you forget you’re practising, you work on making it smooooooth. You won’t have to worry about the last one – you get those three, and you’ll be fast.”

You can do this every cycle. Find one thing to make easier, one thing to make lighter, and one thing to make smoother. Fast will happen naturally.
# Our positions on generative AI
Like many trends in technology before it, we’re keeping an eye on artificial intelligence (AI). AI is more of a concept, but generative AI as a general-purpose technology has come to the fore due to recent developments in cloud-based computation and machine learning. Plus, technology is more widespread and available to more people, so more people are talking about generative AI – compared to something _even more_ ubiquitous like HTML.

Given the hype, it feels worthwhile stating our positions on generative AI – or as we like to call it, ‘applied statistics’. We’re open to working on and with it, but there are a few ideas we’ll bring to the table.

## The positions

1. Utility trumps hyperbole
2. Augmented not artificial intelligence
3. Local and open first
4. There will be consequences
5. Outcomes over outputs

### Utility trumps hyperbole

The fundamental principle of Boring Magic’s work is that people want technologies to work. People prefer things to be functional first; the specific technologies only matter when they reduce or undermine the quality of the utility.

There are outsized, unfounded claims being made about the utility of AI. It is not ‘more profound than fire’. The macroeconomic implications of AI are often overstated, but it’ll still likely have an impact on productivity.

We think it’s sensible to look at how generative AI can be useful or make things less tedious, so we’re exploring the possibilities: from making analysis more accessible through to automating repeatable tasks. We won’t sell you a bunch of hype, just deliver stuff that works.

### Augmented not artificial intelligence

Technologies have an impact on the availability of jobs. The introduction of the digital spreadsheet meant that chartered accountants could easily punch the numbers, leading to accounting clerks becoming surplus to requirements. But Jevons paradox teaches us that AI will lead to more work, not less.
Over time accountants needed fewer clerks, but increases in financial activity have led to a greater need for auditors. So we will still need people in jobs to do thinking, reasoning, assessing and other things people are good at.

Rather than replacing people with machines to reduce costs, technology should be used to empower human workers. We should augment the intelligence of our people, not replace it. That means using things like large language models (LLMs) to reduce the inertia of the blank-page problem, helping with brainstorming, rather than asking an LLM to write something for you. Extensive, not intensive, technology.

### Local and open first

Right now, we’re in a hype cycle, with lots of enthusiasm, funding and support for generative AI. The boom of a hype cycle is always followed by a bust, and AI winters have been common for decades. If you add AI to your product or service and rely on a cloud-based supplier for that capability, you could find the supplier goes into administration – or worse, enshittification, where fees go up and the quality of service plunges. And free services are monetised eventually.

But there are lots of openly available generative text and vision models you can run on your own computer – your ‘local machine’ – breaking the reliance on external suppliers.

When exploring how to apply generative AI to a client’s problem, we’ll always use an open model and run it locally first. It’s cheaper than using a third party, and it’s more sustainable too. It also mitigates some risks around privacy and security by keeping all data processing local, not running on a machine in a data centre. That means we can get started sooner and do a data protection impact assessment later, when necessary. We can use the big players like OpenAI and Anthropic if we need to, but let’s go local and open first.
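To make ‘local and open first’ concrete, here’s a minimal sketch assuming Ollama, one popular runner for open models on a local machine (its HTTP API listens on localhost port 11434 by default). The model name and prompt are placeholders; this just builds the request body – nothing here leaves your machine.

```python
import json

# Ollama's local generate endpoint (default port); assumed for illustration.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str) -> str:
    """Build the JSON body for a local /api/generate call."""
    payload = {
        "model": model,    # an open model you've pulled locally, e.g. "llama3"
        "prompt": prompt,
        "stream": False,   # ask for one complete response, not a stream
    }
    return json.dumps(payload)

body = build_generate_request("llama3", "Summarise this user feedback: ...")

# To actually run it, with Ollama serving locally:
#   import urllib.request
#   req = urllib.request.Request(
#       OLLAMA_URL, body.encode(), {"Content-Type": "application/json"})
#   print(urllib.request.urlopen(req).read())
```

Swapping in a hosted provider later is a small change to the URL and payload, which is exactly why starting local keeps your options open.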
### There will be consequences

People like to think of technology as a box that does a specific thing, but technology impacts and is impacted by everything around it. Technology exists within an ecology. It’s an inescapable fact, so we should try to sense the likely and unlikely consequences of implementing generative AI – on people, animals, the environment, organisations, policy, society and economies.

That sounds like a big project, but there are plenty of tools out there to make it easier. We’ve used tools like consequence scanning, effects mapping, financial forecasting, Four Futures and other extrapolation methods to explore risks and harms in the past. As responsible people, it’s our duty to bring unforeseen consequences more into view, so that we can think about how to mitigate the risks – or stop.

### Outcomes over outputs

It feels like everyone’s doing something with generative AI at the moment, and, if you’re not, it can lead to feeling left out. But that doesn’t mean you have to do something: FOMO is not a strategy.

We’ll take a look at where generative AI might be useful, but we’ll also recommend other technologies if those are cheaper, faster or more sustainable. That might mean implementing search and filtering instead of a chatbot, especially if it’s an interface that more people are used to. It’s more important to get the job done and achieve outcomes than to do the latest thing because it’s cool.

## Let’s be pragmatic

Ultimately our approach to generative AI is like our approach to any other technology: grounded in practicality, mindful of being responsible and ethical, and in pursuit of meaningful outcomes. It’s the best way to harness its potential effectively. Beware the AI snake oil.
# Tips on doing show & tell well
## What is a show & tell?

A show & tell is a regular get-together where people working on a product or service celebrate their work, talk about what they’ve learned, and get feedback from their peers. It’s also a chance to:

* bring together team members, management and leadership to bond, share success, and collaborate
* let colleagues know what you’re working on, keep aligned, and create opportunities to connect and work together
* tell stakeholders (including users, partner organisations and leadership) what you’ve been doing and take their questions as feedback (a form of governance).

A show & tell may be internal, limited to other people in the same team or organisation, or open to anyone to join. Most teams start with an internal show & tell and open it up later. A show & tell might also be called a team review.

## How to run a great show & tell

1. **Don’t make it up on the spot.** Spend time as a team working out what you want to say and who is going to share stories with the audience (1 or 2 people works best). 30 to 60 minutes of prep will pay off.
2. **Set the scene.** Always introduce your project or epic. Who’s on the team? What are you working on? What problem are you solving? Who are your users? Why are you doing it? You don’t need to tell the full history; a 30-second overview is enough.
3. **Show the thing!** Scrappy diagrams, Mural boards, Post-it notes, screenshots, scribbles, photos, and clicking through prototypes bring things to life. Text and code are OK, but always aim to demonstrate something working – don’t just talk through a doc or some function.
4. **Talk about what you’ve learned.** Share which assumptions turned out to be incorrect, or what facts surprised you. Show clips from user research and usability testing. Highlight important analytics data or performance measures. Share both findings and insights. Be clear on the methodology and any confidence intervals, levels of confidence, risky assumptions, etc.
5. **Be clear.** Don’t hide behind jargon. Make bold statements. Say what you actually think! This helps everyone concentrate on the main point, and it generates discussion.
6. **Always share unfinished thinking.** Forget about polish and perfection. A show & tell is the perfect place to collect feedback, ideas and thoughts. It’s a complicated space. We’re all trying to figure it out!
7. **Rehearse.** Take 10–15 minutes to rehearse your section with your team to work out whether you need to cut anything. If you’re struggling to edit, use a format like ‘What? So what? Now what?’ to keep things concise. If you take up more time than you’ve been given, it’ll eat into other people’s sections, meaning they have to rush (or not share at all), which isn’t fair.
8. **Leave time for questions.** The best show & tells have audience participation. Wherever possible, leave time for questions – either after each team or at the end. Encourage people to ask questions in the chat, on Slack, in docs, etc.

If you do nothing else, follow tip number 3.

You can read more tips on good show & tells from Mark Dalgarno, Emily Webber and Alan Wright.

## How to be a great show & tell audience member

1. **Be present and listen.** There’s nothing worse than preparing for a show & tell only to realise that no one’s paying attention. Close Slack, close Teams, stop looking at email, and give your full attention to your team-mates.
2. **Smile, use emojis, and celebrate!** Bring the good vibes and lift each other up whenever there’s something worth celebrating.

## It’s OK to be halfway done

The main thing to remember is that show & tell is not just about sharing progress and successes. It’s a time to talk about what’s hard and what didn’t work too. It’s OK to be halfway done. It’s OK to go back to the drawing board.

Each sprint, try to answer these questions in your show & tell:

* What did we learn, or what changed our mind?
* What can we show? How can we help people see behind the scenes?
* What haven’t we figured out? What do we want feedback on?
# You don’t have to do fortnightly sprints
In early 2024, we helped the GOV.UK Design System team design and implement a new model for agile delivery. It was a break away from traditional Scrum and two-week sprints towards an emphasis on iteration and reflection.

## Why change things?

Traditional two-week sprints and Scrum provide good training wheels for teams who are new to agile, but they don’t always work for well-established or high-performing teams.

For research and development work (like discovery and alpha), you need a little bit longer to get your head into a domain and have time to play around making scrappy prototypes.

For build work, a two-week sprint isn’t really two weeks. With all the ceremonies required for co-ordination and sharing information – which is a lot more labour-intensive in remote-first settings – you lose a couple of days of each sprint.

Sprint goals suck too. It’s far too easy to push a goal along and limp from fortnight to fortnight, never really considering whether you should stop the workstream. It’s better to think about your appetite for doing something, and then to focus on getting valuable iterations out there rather than committing to a whole thing.

## How it works

You can see how it works in detail in the GOV.UK Design System team playbook and in a blog post from the team’s delivery manager, Kelly. There’s also a graphic that brings the four-week cycle to life.

There are a few principles that make this method work:

* Fixed time, variable scope
* Think in iterations: vertical, not horizontal, slices
* Each cycle ends with something shippable or showable
* R&D cycles end on decisions around scope
* Each cycle starts with a brief, but the team has autonomy over delivery

This gives space for ideas and conversations to breathe, for spikes and scrappy prototypes to come together, and for teams to make conscious decisions about scope and delivering value to users.

## How did it work out?
In their first cycle, the team delivered three out of five briefs – higher than their completion rate at the time. As Kelly reported, ‘most team members enjoyed working in smaller, focused groups and having autonomy over how they deliver their work.’

A few months later, we analysed how often the team was releasing new software: **they had made twice as many releases in half the time.** Between October 2022 and October 2023, there were five releases. Between October 2023 and March 2024, there were 10 releases.

One year on, the team has maintained momentum. Iterations have increased, they’ve built a steady rhythm of releasing GOV.UK Frontend more frequently, and according to a recent review the team is a lot happier working that way.

## Want to try something new?

If you’re looking to increase team happiness and effectiveness, drop us a line and we can chat about transforming your team’s delivery model too.
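The arithmetic behind that claim is worth spelling out: twice the releases in under half the time is nearly a fivefold jump in monthly cadence. A quick sketch, using the release counts from the post and approximate month spans:

```python
# Back-of-the-envelope check of the release cadence figures above.
# Release counts are from the post; month lengths are approximate.

releases_before, months_before = 5, 12   # Oct 2022 – Oct 2023
releases_after, months_after = 10, 5     # Oct 2023 – Mar 2024

rate_before = releases_before / months_before  # ≈ 0.42 releases/month
rate_after = releases_after / months_after     # 2.0 releases/month

print(f"Before: {rate_before:.2f}/month, after: {rate_after:.2f}/month")
print(f"Cadence multiplier: {rate_after / rate_before:.1f}x")  # ≈ 4.8x
```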
# Metrics, measures and indicators: a few things to bear in mind
Metrics, measures and indicators help you track and evaluate outcomes. They can tell us if we’re moving in the right direction, if things aren’t going well, or if we’ve achieved the outcome we set out to achieve. If you’ve reported on key performance indicators (KPIs), checked progress against objectives and key results (OKRs) or looked at user analytics, you’ll have some experience with metrics, measures and indicators.

These words are often used interchangeably and, in general, the difference isn’t important. Not for this post anyway. We can talk about the difference between metrics, measures and indicators later.

In this post we’ll cover some guiding principles for designing and using metrics, measures and indicators. A few things to bear in mind.

## Guiding principles

1. Value outcomes over outputs
2. Measures, not targets
3. Balance the what (quantitative) and the why (qualitative)
4. Measure the entire product or service
5. Keep them light and actionable
6. Revisit or refine as things change

### Value outcomes over outputs

We acknowledge that outputs are on the path to achieving outcomes. You can’t cater for a memorable birthday party without making some sandwiches. But delivering outcomes is the real reason why we’re here. So we don’t measure whether we’ve delivered a product or feature; we measure the impact it’s having.

### Measures, not targets

Follow Goodhart’s Law: ‘When a measure becomes a target, it ceases to be a good measure.’ There are numerous factors that contribute to a number or reading going up or down. Metrics, measures and indicators are a starting point for a conversation, so we can ask why and do something about it (or not). The measures are in service of learning: tools, not goals.

### Balance the what (quantitative) and the why (qualitative)

Grown-ups love numbers. But it’s very easy to ignore what users think and feel when you only track quantitative measures. Numbers tell us what’s happening, but feedback can tell us why.
There’s no point doing something faster if it makes the experience worse for users, for example – we have to balance quantity and quality.

### Measure the entire product or service

If we can see where people start, how they move through and where they end, we can identify where to focus our efforts for improvements. The same is true for people who come back too: we want to see whether we’ve made things better since the last time they were here. If you’re only measuring one part, you only know how one part is performing. Get holistic readings.

### Keep them light and actionable

It’s easy to go overboard and start tracking everything, but too much information can be a bad thing. If we track too many metrics, we run the risk of analysis paralysis. Similarly, one measure is too few: it’s not enough to understand an entire system. Four to eight key metrics or indicators per team is enough, and they should inspire action.

### Revisit or refine as things change

Our priorities will change over time, meaning we will need to change our indicators, measures and metrics too. It’s no use tracking and reporting on datapoints that don’t relate to outcomes. Measure what matters. We should aim not to change them too frequently – that causes whiplash. But it’s all right to change them when you change direction or focus.

## Are we on the way? Or did we get there?

Those principles are handy for working out what to measure, but there are two types of indicator you need to know about: leading and lagging.

Leading indicators tell us whether we’re making progress towards an outcome. _Are we on the way?_ For example, if we want to make it easy to find datasets, are people searching for data? Is the number of people searching for data going up?

Lagging indicators tell us whether we’ve achieved the outcome. _Did we get there?_ In that same example, making it easy to find datasets, what’s the user satisfaction score? Are they requesting new datasets?
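The dataset example can be sketched as a tiny leading-indicator check – is the number of people searching for data going up? The weekly counts below are invented for illustration, and the trend check is deliberately crude: a starting point for the conversation, not a target.

```python
# Hypothetical leading indicator for the dataset example above:
# weekly counts of data searches, and whether the trend is heading up.
# The numbers are made up for illustration.

weekly_searches = [120, 135, 150, 180]  # searches for data, week by week

def is_trending_up(series: list[int]) -> bool:
    """A leading indicator question: is the number going up?

    Compares the average of the later half against the earlier half –
    a crude but serviceable trend check."""
    mid = len(series) // 2
    earlier = sum(series[:mid]) / mid
    later = sum(series[mid:]) / (len(series) - mid)
    return later > earlier

print(is_trending_up(weekly_searches))  # True – we seem to be on the way
```

The matching lagging indicators (satisfaction score, dataset requests) only move once the outcome has actually been achieved, which is why you need both.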
boringmagi.cc
boringmagi.cc.web.brid.gy
Using quarters as a checkpoint
Breaking your strategy down into smaller, more manageable chunks can help you make more progress sooner. Some things take a long while to achieve, but smaller goals help us celebrate the wins along the way. Many organisations use a quarter – a block of 3 months – to do this. And it can be helpful to look back before you look forward, to celebrate the progress you’ve made and work out what to do next. Every 3 months, we encourage product teams to take the opportunity to step back from the day-to-day and consider the objectives they’re working towards. The quarterly checkpoint is a time to refocus efforts and double-down, change direction or move on to the next objective. There are 2 stages to using the quarterly checkpoint well: 1. Check on your progress 2. Plan how to achieve your new objectives Here are two workshops you can run at each stage, but you can combine them into one workshop if you like. Whatever works. ## Check on your progress First, check on the progress your team has made on your objectives and key results (OKRs). You can do this in a team workshop lasting 30 to 60 minutes. ### 1. List out the OKRs you’ve been working on (10 to 20 mins) Run through the OKRs you’ve been working on. Talk about the progress you made on each key result and celebrate the successes – big or small! ### 2. Think about what’s left to do (20 to 40 mins) For any OKRs you haven’t completed – where progress on key results isn’t 100% – discuss as a team which initiatives you have left to do to fully achieve the objective. For example, you may need to collect some data, run a test, build a thing or achieve an outcome. Consider whether you should change your approach, for example, by doing something smaller or using different methods, based on what you’ve learned over the last quarter. It’s OK to stick to the original plan if it’s still the best approach. Write down what initiatives your team has agreed to do. 
## Plan how to achieve your new objectives Next, you’ll need to form a loose plan for how to achieve your new objectives. You can treat unfinished objectives from the previous quarter as a new objective. Run another workshop lasting 30 to 45 minutes for each objective. Everyone on the team will need to input on the plan using the outline below. Write it in a doc, a slide deck or on a whiteboard – whatever works. You will probably want to present these plans to the senior management team or service owner at the start of the new quarter. If it’s easier than starting with a blank page, team leads can fill in the outline and get feedback from the rest of the team. As long as everyone gets a chance to input, it doesn’t matter. It’s OK if you take less than 30 minutes, especially if you already have a plan. ### 1. Write down and describe the objective An objective is a bold and qualitative goal that the organisation wants to achieve. It’s best that they’re ambitious, not super easy to achieve or audacious in nature; they are not sales targets. Write down the problem you’re solving and who it’s a problem for. Discuss how you’ll know when you’re done. What are the success criteria? ### 2. Think about risks and unknowns What might be a challenge? What are the riskiest assumptions or big unknowns to highlight? Do you need to try new techniques? These might form the first initiatives in your plan. You can frame your assumptions using the hypothesis statement: **Because** [of something we think or know] **We believe that** [an initiative or task] **Will achieve** [desired outcome] **Measured by** [metric] Note down dependencies on other teams, for example, where you may need another team to do something for you. ### 3. Detail all the initiatives Write a sentence for all the initiatives – tasks and activities – you’ll need to do to achieve the objective. Consider research and discovery activities, which can help you gather information to turn unknowns into knowns. 
Consider alphas, things to prototype, spikes, and experiments that can help you de-risk or validate assumptions. Make sure to remember the development and delivery work too – that’s how we release value to users! ### 4. What will you measure? Review your success criteria. Define the metrics that will tell you when you’ve finished or achieved the objective. These should tell you when you’re done and will become your key results. Remember, metrics should be: * tangible and quantitative * specific and measurable * achievable and realistic ### 5. Prioritise radically What would you do differently if you only had half the time? How will you start small and build up? What’s the least amount of work you can do to learn the most? Use these thoughts to consider any changes to your initiatives. Go back and edit the initiatives if you need to. ## Don’t worry about adapting your plans A core tenet of agile is responding to change over following a plan, so don’t be afraid to change your plans based on new information. The quarterly checkpoint isn’t the only time you can look back to look forward – that’s why retrospectives are useful. You can use the activities above at any point. The best product teams build these behaviours into their regular practice. If you’d like help running these workshops or have any questions, get in touch and we’ll set up a chat.
boringmagi.cc
boringmagi.cc.web.brid.gy
Metrics, measures and indicators: a few things to bear in mind
Metrics, measures and indicators help you track and evaluate outcomes. They can tell us if we’re moving in the right direction, if things aren’t going well, or if we’ve achieved the outcome we set out to achieve. If you’ve reported on key performance indicators (KPIs), checked progress against objectives and key results (OKRs) or looked at user analytics, you’ll have some experience with metrics, measures and indicators. These words are often used interchangeably and, in general, the difference isn’t important. Not for this post anyway. We can talk about the difference between metrics, measures and indicators later. In this post we’ll cover some guiding principles for designing and using metrics, measures and indicators. A few things to bear in mind. ## Guiding principles 1. Value outcomes over outputs 2. Measures, not targets 3. Balance the what (quantitative) and the why (qualitative) 4. Measure the entire product or service 5. Keep it light and actionable 6. Revisit or refine as things change ### Value outcomes over outputs We acknowledge that outputs are on the path to achieving outcomes. You can’t cater for a memorable birthday party without making some sandwiches. But delivering outcomes is the real reason why we’re here. So we don’t measure whether we’ve delivered a product or feature, we measure the impact it’s having. ### Measures, not targets Follow Goodhart’s Law: ‘When a measure becomes a target, it ceases to be a good measure.’ There are numerous factors that contribute to a number or reading going up or down. Metrics, measures and indicators are a starting point for a conversation, so we can ask why and do something about it (or not). The measures are in service of learning: tools, not goals. ### Balance the what (quantitative) and the why (qualitative) Grown-ups love numbers. But it’s very easy to ignore what users think and feel when you only track quantitative measures. Numbers tell us what’s happening, but feedback can tell us why. 
There’s no point doing something faster if it makes the experience worse for users, for example – we have to balance quantity and quality. ### Measure the entire product or service If we can see where people start, how they move through and where they end, we can identify where to focus our efforts for improvements. The same is true for people who come back: we want to see whether we’ve made things better than the last time they were here. If you’re only measuring one part, you only know how one part is performing. Get holistic readings. ### Keep it light and actionable It’s easy to go overboard and start tracking everything, but too much information can be a bad thing. If we track too many metrics, we run the risk of analysis paralysis. Similarly, one measure is too few: it’s not enough to understand an entire system. Four to eight key metrics or indicators per team is enough and should inspire action. ### Revisit or refine as things change Our priorities will change over time, meaning we will need to change our indicators, measures and metrics too. It’s no use tracking and reporting on datapoints that don’t relate to outcomes. Measure what matters. We should aim not to change them too frequently – that causes whiplash. But it’s all right to change them when you change direction or focus. ## Are we on the way? Or did we get there? Those principles are handy for working out what to measure, but there are two types of indicator you need to know about: leading and lagging. Leading indicators tell us whether we’re making progress towards an outcome. _Are we on the way?_ For example, if we want to make it easy to find datasets, are people searching for data? Is the number of people searching for data going up? Lagging indicators tell us whether we’ve achieved the outcome. _Did we get there?_ In that same example, making it easy to find datasets, what’s the user satisfaction score? Are they requesting new datasets?
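To make the leading/lagging distinction concrete, here’s a minimal sketch in Python – the indicator names are illustrative, borrowed from the datasets example above, not a prescribed set:

```python
from dataclasses import dataclass
from typing import Literal

@dataclass
class Indicator:
    name: str
    kind: Literal["leading", "lagging"]  # 'on the way' vs 'did we get there'
    question: str

# Invented indicators for the 'easy to find datasets' outcome
indicators = [
    Indicator("data searches per week", "leading", "Are we on the way?"),
    Indicator("user satisfaction score", "lagging", "Did we get there?"),
]

leading = [i.name for i in indicators if i.kind == "leading"]
print(leading)
```

A healthy set mixes both kinds, so you can see progress long before the outcome itself lands.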
Going faster with Lean principles
Software teams are often asked to go faster. There are many factors that influence the speed at which teams can discover, design and deliver solutions, and those factors aren’t always in a team’s control. But Lean principles offer teams a way to analyse and adapt their operating model – their ways of working. ## What is Lean? Lean is a method of manufacturing that emerged from the Toyota Production System in the 1950s and 1960s. It’s a system that incorporates methods of production and leadership together. The early Agile community used Lean principles to inspire methods for making digital products and services. These principles have had influence beyond the production environment and have been adapted for business and strategy functions too. ## Books on Lean Four books on Lean principles have influenced the way I work. **1. _Lean Software Development: An Agile Toolkit_ by Mary and Tom Poppendieck** The earliest of the four books. It really set the standard. **2. _The Lean Startup_ by Eric Ries** This started a big movement for applying Lean principles to your startup, including testing out new business models or growth opportunities. **3. _Lean UX_ by Jeff Gothelf and Josh Seiden** One of my favourites. This one really brought strategic goals and user experience closer together. It also shifted teams from writing problem statements to writing hypotheses. **4. _The Lean Product Playbook_ by Dan Olsen** This is relatively similar to _The Lean Startup_ but is more of a playbook, showing the practice that goes with the theory. The highlight is its emphasis on MVP tests: experiments you can run to learn something without building anything. ## Lean principles All these books have some principles in their pages, all based on the original Lean principles from Toyota. They’re all pretty similar. Combining their approaches helps us apply Lean principles to business model development, strategy, user-centred design and software delivery. 
> A note on principles: Principles are not rules. Principles guide your thinking and doing. Rules say what’s right and wrong. ### 1. Eliminate waste Reduce anything which does not help deliver value to the user. So: partially done work; scope creep; re-learning; task-switching; waiting; hand-offs; defects; management activities. Outcomes, not outputs. ### 2. Amplify learning Build, measure, learn. Create feedback loops. Build scrappy prototypes, run spikes. Write tests first. Think in iterations. ### 3. Decide as late as possible Call out the assumptions or uncertainties, try out different options, and make decisions based on facts or evidence. ### 4. Deliver as fast as possible Shorter cycles improve learning and communication, and help us meet users’ needs as soon as possible. Reduce work in progress, get one thing done, and iterate. ### 5. Empower the team Figure it out together. Managers provide goals, encourage progress, spot issues and remove impediments. Designers, developers and data engineers suggest how to achieve a goal and feed into continuous improvement. ### 6. Build integrity in Agility needs quality. Automated tests and proven design patterns allow you to focus on smaller parts of the system. A regular flow of insights to act on aids agility. ### 7. Optimise the whole Focus on the entire value stream, not just individual tasks. Align strategy with development. Consider the entire user experience in the design process. ## Three simpler principles If those seem like too many to get started with, I want to introduce three simpler principles that can help you go faster. I came across these in a book about running, which doesn’t seem like the place you’d find inspiration about product management! Think easy, light and smooth. It’s from a man called Micah True who lived in the Mexican desert and went running with the local Native Americans. They called him Caballo Blanco – ‘White Horse’ – because of his speed. 
> “You start with easy, because if that’s all you get, that’s not so bad. Then work on light. Make it effortless, like you don’t give a shit how high the hill is or how far you’ve got to go. When you’ve practised that so long that you forget you’re practising, you work on making it smooooooth. You won’t have to worry about the last one – you get those three, and you’ll be fast.” You can do this every cycle. Find one thing to make easier, one thing to make lighter, and one thing to make smoother. Fast will happen naturally.
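Principle 2 mentions writing tests first. As a tiny illustration of that habit – the function and its test are invented for this example – the test is written before the implementation exists, fails, and then drives the smallest code that passes:

```python
# The test comes first: it describes the behaviour we want.
def test_slugify():
    assert slugify("Lean UX") == "lean-ux"
    assert slugify("  Build, Measure, Learn  ") == "build-measure-learn"

# Then the smallest implementation that makes the test pass.
def slugify(title: str) -> str:
    # Replace punctuation with spaces, split on whitespace, join with hyphens
    cleaned = "".join(c if c.isalnum() or c.isspace() else " " for c in title)
    return "-".join(word.lower() for word in cleaned.split())

test_slugify()
```

The point isn’t the function – it’s that the failing test creates a fast feedback loop, which is what ‘amplify learning’ asks for.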
Our positions on generative AI
Like many trends in technology before it, we’re keeping an eye on artificial intelligence (AI). AI is more of a concept, but generative AI as a general-purpose technology has come to the fore due to recent developments in cloud-based computation and machine learning. Plus, technology is more widespread and available to more people, so more people are talking about generative AI – compared to something _even more_ ubiquitous like HTML. Given the hype, it feels worthwhile stating our positions on generative AI – or as we like to call it, ‘applied statistics’. We’re open to working on and with it, but there are a few ideas we’ll bring to the table. ## The positions 1. Utility trumps hyperbole 2. Augmented not artificial intelligence 3. Local and open first 4. There will be consequences 5. Outcomes over outputs ### Utility trumps hyperbole The fundamental principle of Boring Magic’s work is that people want technologies to work. People prefer things to be functional first; the specific technologies only matter when they reduce or undermine the quality of the utility. There are outsized, unfounded claims being made about the utility of AI. It is not ‘more profound than fire’. The macroeconomic implications of AI are often overstated, but it’ll still likely have an impact on productivity. We think it’s sensible to look at how generative AI can be useful or make things less tedious, so we’re exploring the possibilities: from making analysis more accessible through to automating repeatable tasks. We won’t sell you a bunch of hype, just deliver stuff that works. ### Augmented not artificial intelligence Technologies have an impact on the availability of jobs. The introduction of the digital spreadsheet meant that chartered accountants could easily punch the numbers, leading to accounting clerks becoming surplus to requirements. Jevons paradox teaches us that AI will lead to more work, not less. 
Over time accountants needed fewer clerks, but increases in financial activity have led to a greater need for auditors. So we will still need people in jobs to do thinking, reasoning, assessing and other things people are good at. Rather than replacing people with machines to reduce costs, technology should be used to empower human workers. We should augment the intelligence of our people, not replace it. That means using things like large language models (LLMs) to reduce the inertia of the blank page problem, helping with brainstorming, rather than asking an LLM to write something for you. Extensive not intensive technology. ### Local and open first Right now, we’re in a hype cycle, with lots of enthusiasm, funding and support for generative AI. The boom of a hype cycle is always followed by a bust, and AI winters have been common for decades. If you add AI to your product or service and rely on a cloud-based supplier for that capability, you could find the supplier goes into administration – or worse, enshittification, when fees go up and the quality of service plunges. And free services are monetised eventually. But there are lots of openly available generative text and vision models you can run on your own computer – your ‘local machine’ – breaking the reliance on external suppliers. When exploring how to apply generative AI to a client’s problem, we’ll always use an open model and run it locally first. It’s cheaper than using a third party, and it’s more sustainable too. It also mitigates some risks around privacy and security by keeping all data processing local, not running on a machine in a data centre. That means we can get started sooner and do a data protection impact assessment later, when necessary. We can use the big players like OpenAI and Anthropic if we need to, but let’s go local and open first. 
### There will be consequences People like to think of technology as a box that does a specific thing, but technology impacts and is impacted by everything around it. Technology exists within an ecology. It’s an inescapable fact, so we should try to sense the likely and unlikely consequences of implementing generative AI – on people, animals, the environment, organisations, policy, society and economies. That sounds like a big project, but there are plenty of tools out there to make it easier. We’ve used tools like consequence scanning, effects mapping, financial forecasting, Four Futures and other extrapolation methods to explore risks and harms in the past. As responsible people, it’s our duty to bring unforeseen consequences more into view, so that we can think about how to mitigate the risks or stop. ### Outcomes over outputs It feels like everyone’s doing something with generative AI at the moment, and, if you’re not, it can lead to feeling left out. But this doesn’t mean you have to do something: FOMO is not a strategy. We’ll take a look at where generative AI might be useful, but we’ll also recommend other technologies if those are cheaper, faster or more sustainable. That might mean implementing search and filtering instead of a chatbot, especially if it’s an interface that more people are used to. It’s more important to get the job done and achieve outcomes, instead of doing the latest thing because it’s cool. ## Let’s be pragmatic Ultimately our approach to generative AI is like any other technology: we’re grounded in practicality, mindful of being responsible and ethical, and will pursue meaningful outcomes. It’s the best way to harness its potential effectively. Beware the AI snake oil.
Tips on doing show & tell well
## What is a show & tell? A show & tell is a regular get-together where people working on a product or service celebrate their work, talk about what they’ve learned, and get feedback from their peers. It’s also a chance to * bring together team members, management and leadership to bond, share success, and collaborate * let colleagues know what you’re working on, keep aligned, and create opportunities to connect and work together * tell stakeholders (including users, partner organisations and leadership) what you’ve been doing and take their questions as feedback (a form of governance). A show & tell may be internal, limited to other people in the same team or organisation, or open to anyone to join. Most teams start with an internal show & tell and make these open later. A show & tell might also be called a team review. ## How to run a great show & tell 1. **Don’t make it up on the spot** Spend time as a team working out what you want to say and who is going to share stories with the audience (1 or 2 people works best). 30 to 60 minutes of prep will pay off. 2. **Set the scene** Always introduce your project or epic. Who’s on the team? What are you working on? What problem are you solving? Who are your users? Why are you doing it? You don’t need to tell the full history – a 30-second overview is enough. 3. **Show the thing!** Scrappy diagrams, Mural boards, Post-it notes, screenshots, scribbles, photos, and clicking through prototypes bring things to life. Text and code are OK, but always aim to demonstrate something working – don’t just talk through a doc or some function. 4. **Talk about what you’ve learned** Share which assumptions turned out to be incorrect, or what facts surprised you. Show clips from user research and usability testing. Highlight important analytics data or performance measures. Share both findings and insights. Be clear on the methodology and any confidence intervals, levels of confidence, risky assumptions, etc. 5. 
**Be clear** Don’t hide behind jargon. Make bold statements. Say what you actually think! This helps everyone concentrate on the main point, and it generates discussion. 6. **Always share unfinished thinking** Forget about the polish and perfection. A show & tell is the perfect place to collect feedback, ideas and thoughts. It’s a complicated space. We’re all trying to figure it out! 7. **Rehearse** Take 10–15 minutes to rehearse your section with your team to work out whether you need to cut anything. If you’re struggling to edit, use a format like What? So what? Now what? to keep things concise. If you take up more time than you’ve been given, it’ll eat into other people’s sections, meaning they have to rush (or not share at all), which isn’t fair. 8. **Leave time for questions** The best show & tells have audience participation. Wherever possible, leave time for questions – either after each team or at the end. Encourage people to ask questions in the chat, on Slack, in docs, etc. If you do nothing else, follow tip number 3. You can read more tips on good show & tells from Mark Dalgarno, Emily Webber and Alan Wright. ## How to be a great show & tell audience member 1. **Be present and listen** There’s nothing worse than preparing for a show & tell only to realise that no one’s paying attention. Close Slack, close Teams, stop looking at email, and give your full attention to your team-mates. 2. **Smile, use emojis, and celebrate!** Bring the good vibes and lift each other up whenever there’s something worth celebrating. ## It’s OK to be halfway done The main thing to remember is that show & tell is not just about sharing progress and successes. It’s a time to talk about what’s hard and what didn’t work too. It’s OK to be halfway done. It’s OK to go back to the drawing board. Each sprint, try to answer these questions in your show & tell: * What did we learn or what changed our mind? * What can we show? How can we help people see behind the scenes? 
* What haven’t we figured out? What do we want feedback on?
You don’t have to do fortnightly sprints
In early 2024, we helped GOV.UK Design System design and implement a new model for agile delivery. It was a break away from traditional Scrum and two-week sprints towards an emphasis on iteration and reflection. ## Why change things? Traditional two-week sprints and Scrum provide good training wheels for teams who are new to agile, but they don’t work so well for well-established or high-performing teams. For research and development work (like discovery and alpha), you need a little bit longer to get your head into a domain and have time to play around making scrappy prototypes. For build work, a two-week sprint isn’t really two weeks. With all the ceremonies required for co-ordination and sharing information – which is a lot more labour-intensive in remote-first settings – you lose a couple of days with two-week sprints. Sprint goals suck too. It’s far too easy to push a goal along and limp from fortnight to fortnight, never really considering whether you should stop the workstream. It’s better to think about your appetite for doing something, and then to focus on getting valuable iterations out there rather than committing to a whole thing. ## How it works You can see how it works in detail on the GOV.UK Design System’s team playbook and in a blog post from the team’s delivery manager, Kelly. There’s also a graphic that brings the four-week cycle to life. There are a few principles that make this method work: * Fixed time, variable scope * Think in iterations: vertical not horizontal slices * Each cycle ends with something shippable or showable * R&D cycles end on decisions around scope * Each cycle starts with a brief, but the team has autonomy over delivery This gives space for ideas and conversations to breathe, for spikes and scrappy prototypes to come together, and for teams to make conscious decisions about scope and delivering value to users. ## How did it work out? 
In their first cycle, the team delivered three out of five briefs – a better completion rate than they’d been achieving before. As Kelly reported, ‘most team members enjoyed working in smaller, focused groups and having autonomy over how they deliver their work.’ A few months later, we analysed how often the team was releasing new software: **they shipped twice as many releases in half the time.** Between October 2022 and October 2023, there were five releases. Between October 2023 and March 2024, there were 10 releases. One year on, the team has maintained momentum. Iterations have increased, they’ve built a steady rhythm of releasing GOV.UK Frontend more frequently, and according to a recent review the team is a lot happier working that way. ## Want to try something new? If you’re looking to increase team happiness and effectiveness, drop us a line and we can chat about transforming your team’s delivery model too.
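As a rough check on those numbers – treating the two periods as roughly 12 months and 5 months, per the dates above – the monthly release rate works out at nearly five times higher:

```python
# Releases per month before and after the change (approximate period lengths)
before = 5 / 12   # Oct 2022 – Oct 2023: five releases over ~12 months
after = 10 / 5    # Oct 2023 – Mar 2024: ten releases over ~5 months

print(f"before: {before:.2f} releases/month")
print(f"after: {after:.2f} releases/month")
print(f"speed-up: ~{after / before:.1f}x")
```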
boringmagi.cc
boringmagi.cc.web.brid.gy
Going faster with Lean principles
Software teams are often asked to go faster. There are many factors that influence the speed at which teams can discover, design and deliver solutions, and those factors aren’t always in a team’s control. But Lean principles offer teams a way to analyse and adapt their operating model – their ways of working. ## What is Lean? Lean is a method of manufacturing that emerged from Toyota’s Production System in the 1950s and 1960s. It’s a system that incorporates methods of production and leadership together. The early Agile community used Lean principles to inspire methods for making digital products and services. These principles have had influence beyond the production environment and have been adapted for business and strategy functions too. ## Books on Lean Four books on Lean principles have influenced the way I work. **1._Lean Software Development: An Agile Toolkit_ by Mary and Tom Poppendieck** The earliest of the four books. It really set the standard. **2._The Lean Startup_ by Eric Ries** This started a big movement for applying Lean principles to your startup, including testing out new business models or growth opportunities. **3._Lean UX_ by Jeff Gothelf and Josh Seiden** One of my favourites. This one really brought strategic goals and user experience closer together. It also shifted teams from writing problem statements to writing hypotheses. **4._The Lean Product Playbook_ by Dan Olsen** This is relatively similar to _The Lean Startup_ but is more of a playbook, showing the practice that goes with the theory. The highlight is its emphasis on MVP tests: experiments you can run to learn something without building anything. ## Lean principles All these books have some principles in their pages, all based on the original Lean principles from Toyota. They’re all pretty similar. Combining their approaches helps us apply Lean principles to business model development, strategy, user-centred design and software delivery. 
> A note on principles: Principles are not rules. Principles guide your thinking and doing. Rules say what’s right and wrong. ### 1. Eliminate waste Reduce anything which does not help deliver value to the user. So: partially done work; scope creep; re-learning; task-switching; waiting; hand-offs; defects; management activities. Outcomes, not outputs. ### 2. Amplify learning Build, measure, learn. Create feedback loops. Build scrappy prototypes, run spikes. Write tests first. Think in iterations. ### 3. Decide as late as possible Call out the assumptions or uncertainties, try out different options, and make decisions based on facts or evidence. ### 4. Deliver as fast as possible Shorter cycles improve learning and communication, and helps us meet users’ needs as soon as possible. Reduce work in progress, get one thing done, and iterate. ### 5. Empower the team Figure it out together. Managers provide goals, encourage progress, spot issues and remove impediments. Designers, developers and data engineers suggest how to achieve a goal and feed in to continuous improvement. ### 6. Build integrity in Agility needs quality. Automated tests and proven design patterns allow you to focus on smaller parts of the system. A regular flow of insights to act on aids agility. ### 7. Optimise the whole Focus on the entire value stream, not just individual tasks. Align strategy with development. Consider the entire user experience in the design process. ## Three simpler principles If those seem like too many to get started with, I want to introduce three simpler principles that can help you go faster. I came across these in a book about running, which doesn’t seem like the place you’d find inspiration about product management! Think easy, light and smooth. It’s from a man called Micah True who lived in the Mexican desert and went running with the local Native Americans. They called him Caballo Blanco – ‘White Horse’ – because of his speed. 
> “You start with easy, because if that’s all you get, that’s not so bad. Then work on light. Make it effortless, like you don’t give a shit how high the hill is or how far you’ve got to go. When you’ve practised that so long that you forget you’re practicing, you work on making it smooooooth. You won’t have to worry about the last one – you get those three, and you’ll be fast.” You can do this every cycle. Find one thing to make easier, one thing to make lighter, and one thing to make smoother. Fast will happen naturally.
boringmagi.cc
boringmagi.cc.web.brid.gy
Using quarters as a checkpoint
Breaking your strategy down into smaller, more manageable chunks can help you make more progress sooner. Some things take a long while to achieve, but smaller goals help us celebrate the wins along the way. Many organisations use a quarter – a block of 3 months – to do this. And it can be helpful to look back before you look forward, to celebrate the progress you’ve made and work out what to do next. Every 3 months, we encourage product teams to take the opportunity to step back from the day-to-day and consider the objectives they’re working towards. The quarterly checkpoint is a time to refocus efforts and double-down, change direction or move on to the next objective. There are 2 stages to using the quarterly checkpoint well: 1. Check on your progress 2. Plan how to achieve your new objectives Here are two workshops you can run at each stage, but you can combine them into one workshop if you like. Whatever works. ## Check on your progress First, check on the progress your team has made on your objectives and key results (OKRs). You can do this in a team workshop lasting 30 to 60 minutes. ### 1. List out the OKRs you’ve been working on (10 to 20 mins) Run through the OKRs you’ve been working on. Talk about the progress you made on each key result and celebrate the successes – big or small! ### 2. Think about what’s left to do (20 to 40 mins) For any OKRs you haven’t completed – where progress on key results isn’t 100% – discuss as a team which initiatives you have left to do to fully achieve the objective. For example, you may need to collect some data, run a test, build a thing or achieve an outcome. Consider whether you should change your approach, for example, by doing something smaller or using different methods, based on what you’ve learned over the last quarter. It’s OK to stick to the original plan if it’s still the best approach. Write down what initiatives your team has agreed to do. 
## Plan how to achieve your new objectives Next, you’ll need to form a loose plan for how to achieve your new objectives. You can treat unfinished objectives from the previous quarter as a new objective. Run another workshop lasting 30 to 45 minutes for each objective. Everyone on the team will need to input on the plan using the outline below. Write it in a doc, a slide deck or on a whiteboard – whatever works. You will probably want to present these plans to the senior management team or service owner at the start of the new quarter. If it’s easier than starting with a blank page, team leads can fill in the outline and get feedback from the rest of the team. As long as everyone gets a chance to input, it doesn’t matter. It’s OK if you take less than 30 minutes, especially if you already have a plan. ### 1. Write down and describe the objective An objective is a bold and qualitative goal that the organisation wants to achieve. It’s best that they’re ambitious, not super easy to achieve or audacious in nature; they are not sales targets. Write down the problem you’re solving and who it’s a problem for. Discuss how you’ll know when you’re done. What are the success criteria? ### 2. Think about risks and unknowns What might be a challenge? What are the riskiest assumptions or big unknowns to highlight? Do you need to try new techniques? These might form the first initiatives in your plan. You can frame your assumptions using the hypothesis statement: **Because** [of something we think or know] **We believe that** [an initiative or task] **Will achieve** [desired outcome] **Measured by** [metric] Note down dependencies on other teams, for example, where you may need another team to do something for you. ### 3. Detail all the initiatives Write a sentence for all the initiatives – tasks and activities – you’ll need to do to achieve the objective. Consider research and discovery activities, which can help you gather information to turn unknowns into knowns. 
Consider alphas, things to prototype, spikes, and experiments that can help you de-risk or validate assumptions. Make sure to remember the development and delivery work too – that’s how we release value to users! ### 4. What will you measure? Review your success criteria. Define the metrics that will tell you when you’ve finished or achieved the objective. These should tell you when you’re done and will become your key results. Remember, metrics should be: * tangible and quantitative * specific and measurable * achievable and realistic ### 5. Prioritise radically What would you do differently if you only had half the time? How will you start small and build up? What’s the least amount of work you can do to learn the most? Use these thoughts to consider any changes to your initiatives. Go back and edit the initiatives if you need to. ## Don’t worry about adapting your plans A core tenet of agile is responding to change over following a plan, so don’t be afraid to change your plans based on new information. The quarterly checkpoint isn’t the only time you can look back to look forward – that’s why retrospectives are useful. You can use the activities above at any point. The best product teams build these behaviours into their regular practice. If you’d like help running these workshops or have any questions, get in touch and we’ll set up a chat.
boringmagi.cc
boringmagi.cc.web.brid.gy
You don’t have to do fortnightly sprints
In early 2024, we helped GOV.‌UK Design System design and implement a new model for agile delivery. It was a break away from traditional Scrum and two-week sprints towards an emphasis on iteration and reflection. ## Why change things? Traditional two-week sprints and Scrum provide good training wheels for teams who are new to agile, but those don’t work for well established or high performing teams. For research and development work (like discovery and alpha), you need a little bit longer to get your head into a domain and have time to play around making scrappy prototypes. For build work, a two-week sprint isn’t really two weeks. With all the ceremonies required for co-ordination and sharing information – which is a lot more labour-intensive in remote-first settings – you lose a couple of days with two-week sprints. Sprint goals suck too. It’s far too easy to push it along and limp from fortnight to fortnight, never really considering whether you should stop the workstream. It’s better to think about your appetite for doing something, and then to focus on getting valuable iterations out there rather than committing to a whole thing. ## How it works You can see how it works in detail on the GOV.‌UK Design System’s team playbook and in a blog post from the team’s delivery manager, Kelly. There’s also a graphic that brings the four-week cycle to life. There are a few principles that make this method work: * Fixed time, variable scope * Think in iterations: vertical not horizontal slices * Each cycle ends with something shippable or showable * R&D cycles end on decisions around scope * Each cycle starts with a brief, but the team has autonomy over delivery This gives space for ideas and conversations to breathe, for spikes and scrappy prototypes to come together, and for teams to make conscious decisions about scope and delivering value to users. ## How did it work out? 
In their first cycle, the team delivered three of their five briefs – a higher completion rate than they were managing at the time. As Kelly reported, ‘most team members enjoyed working in smaller, focused groups and having autonomy over how they deliver their work.’

A few months later, we analysed how often the team was releasing new software: **they were releasing twice as often in half the time.** Between October 2022 and October 2023, there were 5 releases. Between October 2023 and March 2024, there were 10.

One year on, the team has maintained momentum. Iterations have increased, they’ve built a steady rhythm of releasing GOV.UK Frontend more frequently, and according to a recent review the team is a lot happier working this way.

## Want to try something new?

If you’re looking to increase team happiness and effectiveness, drop us a line and we can chat about transforming your team’s delivery model too.
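The release-cadence comparison is easy to reproduce from a list of release dates. A short sketch – the dates below are hypothetical stand-ins, not the team’s actual release log:

```python
from datetime import date

def releases_per_month(release_dates: list[date], start: date, end: date) -> float:
    # Count releases falling in [start, end) and normalise by the window length in months
    count = sum(1 for d in release_dates if start <= d < end)
    months = (end.year - start.year) * 12 + (end.month - start.month)
    return count / months

# Hypothetical release dates for illustration
releases = (
    [date(2023, m, 1) for m in (1, 3, 5, 7, 9)]           # 5 releases in the first year
    + [date(2023, 10, 15), date(2023, 11, 20), date(2023, 12, 5),
       date(2024, 1, 10), date(2024, 2, 14)]              # 5 releases in the next 5 months
)

before = releases_per_month(releases, date(2022, 10, 1), date(2023, 10, 1))
after = releases_per_month(releases, date(2023, 10, 1), date(2024, 3, 1))
# 'after' works out at roughly 2.4x 'before': twice the releases, in under half the time
```

Tracking a simple rate like this before and after a ways-of-working change gives you a lagging indicator of whether the change stuck.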
Our positions on generative AI
Like many technology trends before it, we’re keeping an eye on artificial intelligence (AI). AI is more of a concept, but generative AI as a general-purpose technology has come to the fore due to recent developments in cloud-based computation and machine learning. Plus, technology is more widespread and available to more people, so more people are talking about generative AI – compared to something _even more_ ubiquitous, like HTML.

Given the hype, it feels worthwhile stating our positions on generative AI – or as we like to call it, ‘applied statistics’. We’re open to working on and with it, but there are a few ideas we’ll bring to the table.

## The positions

1. Utility trumps hyperbole
2. Augmented, not artificial, intelligence
3. Local and open first
4. There will be consequences
5. Outcomes over outputs

### Utility trumps hyperbole

The fundamental principle of Boring Magic’s work is that people want technologies to work. People prefer things to be functional first; the specific technologies only matter when they reduce or undermine that utility.

There are outsized, unfounded claims being made about the utility of AI. It is not ‘more profound than fire’. The macroeconomic implications of AI are often overstated, but it’ll still likely have an impact on productivity. We think it’s sensible to look at how generative AI can be useful or make things less tedious, so we’re exploring the possibilities: from making analysis more accessible through to automating repeatable tasks. We won’t sell you a bunch of hype, just deliver stuff that works.

### Augmented, not artificial, intelligence

Technologies have an impact on the availability of jobs. The introduction of the digital spreadsheet meant that chartered accountants could easily punch the numbers, leading to accounting clerks becoming surplus to requirements. But the Jevons paradox teaches us that AI will lead to more work, not less.
Over time, accountants needed fewer clerks, but increases in financial activity have led to a greater need for auditors. So we will still need people in jobs to do the thinking, reasoning, assessing and other things people are good at.

Rather than replacing people with machines to reduce costs, technology should be used to empower human workers. We should augment the intelligence of our people, not replace it. That means using things like large language models (LLMs) to reduce the inertia of the blank-page problem – helping with brainstorming – rather than asking an LLM to write something for you. Extensive, not intensive, technology.

### Local and open first

Right now, we’re in a hype cycle, with lots of enthusiasm, funding and support for generative AI. The boom of a hype cycle is always followed by a bust, and AI winters have recurred for decades. If you add AI to your product or service and rely on a cloud-based supplier for that capability, you could find the supplier goes into administration – or worse, enshittification, where fees go up and the quality of service plunges. And free services are monetised eventually.

But there are lots of openly available generative text and vision models you can run on your own computer – your ‘local machine’ – breaking the reliance on external suppliers. When exploring how to apply generative AI to a client’s problem, we’ll always use an open model and run it locally first. It’s cheaper than using a third party, and it’s more sustainable too. It also mitigates some risks around privacy and security by keeping all data processing local, not running on a machine in a data centre. That means we can get started sooner and do a data protection impact assessment later, when necessary. We can use the big players like OpenAI and Anthropic if we need to, but let’s go local and open first.
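As an illustration of ‘local and open first’: one low-friction way to run an open model on your own machine is a local runner such as Ollama, which serves models over a small HTTP API. A minimal sketch, assuming Ollama is installed and running locally and a model has been pulled (`llama3` here is just an example name):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_payload(model: str, prompt: str) -> dict:
    # stream=False asks Ollama for one complete JSON response instead of a stream
    return {"model": model, "prompt": prompt, "stream": False}

def generate_locally(model: str, prompt: str) -> str:
    # All data processing stays on your machine: no third-party API, no data centre
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(model, prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

# Example call (requires a running Ollama server):
# print(generate_locally("llama3", "Summarise this user feedback in one sentence: ..."))
```

Swapping the endpoint for a cloud provider later is a small change; starting local keeps the data protection questions simple while you explore.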
### There will be consequences

People like to think of technology as a box that does a specific thing, but technology impacts, and is impacted by, everything around it. Technology exists within an ecology. That’s inescapable, so we should try to sense the likely and unlikely consequences of implementing generative AI – on people, animals, the environment, organisations, policy, society and economies.

That sounds like a big project, but there are plenty of tools out there to make it easier. We’ve used consequence scanning, effects mapping, financial forecasting, Four Futures and other extrapolation methods to explore risks and harms in the past. As responsible people, it’s our duty to bring unforeseen consequences into view, so that we can think about how to mitigate the risks – or stop.

### Outcomes over outputs

It feels like everyone’s doing something with generative AI at the moment and, if you’re not, it can lead to feeling left out. But that doesn’t mean you have to do something: FOMO is not a strategy. We’ll take a look at where generative AI might be useful, but we’ll also recommend other technologies if those are cheaper, faster or more sustainable. That might mean implementing search and filtering instead of a chatbot, especially if that’s an interface more people are used to. It’s more important to get the job done and achieve outcomes than to do the latest thing because it’s cool.

## Let’s be pragmatic

Ultimately, our approach to generative AI is like our approach to any other technology: grounded in practicality, mindful of responsibility and ethics, and focused on meaningful outcomes. It’s the best way to harness its potential effectively. Beware the AI snake oil.
Tips on doing show & tell well
## What is a show & tell?

A show & tell is a regular get-together where people working on a product or service celebrate their work, talk about what they’ve learned, and get feedback from their peers. It’s also a chance to:

* bring together team members, management and leadership to bond, share success and collaborate
* let colleagues know what you’re working on, stay aligned, and create opportunities to connect and work together
* tell stakeholders (including users, partner organisations and leadership) what you’ve been doing and take their questions as feedback (a form of governance)

A show & tell may be internal – limited to other people in the same team or organisation – or open to anyone to join. Most teams start with an internal show & tell and open it up later. A show & tell might also be called a team review.

## How to run a great show & tell

1. **Don’t make it up on the spot.** Spend time as a team working out what you want to say and who is going to share stories with the audience (1 or 2 people works best). 30 to 60 minutes of prep will pay off.
2. **Set the scene.** Always introduce your project or epic. Who’s on the team? What are you working on? What problem are you solving? Who are your users? Why are you doing it? You don’t need to tell the full history; a 30-second overview is enough.
3. **Show the thing!** Scrappy diagrams, Mural boards, Post-it notes, screenshots, scribbles, photos, and clicking through prototypes bring things to life. Text and code are OK, but always aim to demonstrate something working – don’t just talk through a doc or some function.
4. **Talk about what you’ve learned.** Share which assumptions turned out to be incorrect, or what facts surprised you. Show clips from user research and usability testing. Highlight important analytics data or performance measures. Share both findings and insights. Be clear on the methodology and any confidence levels, risky assumptions, etc.
5. **Be clear.** Don’t hide behind jargon. Make bold statements. Say what you actually think! This helps everyone concentrate on the main point, and it generates discussion.
6. **Always share unfinished thinking.** Forget about polish and perfection. A show & tell is the perfect place to collect feedback, ideas and thoughts. It’s a complicated space. We’re all trying to figure it out!
7. **Rehearse.** Take 10 to 15 minutes to rehearse your section with your team to work out whether you need to cut anything. If you’re struggling to edit, use a format like ‘What? So what? Now what?’ to keep things concise. If you take up more time than you’ve been given, it’ll eat into other people’s sections, meaning they have to rush (or not share at all), which isn’t fair.
8. **Leave time for questions.** The best show & tells have audience participation. Wherever possible, leave time for questions – either after each team or at the end. Encourage people to ask questions in the chat, on Slack, in docs, etc.

If you do nothing else, follow tip number 3. You can read more tips on good show & tells from Mark Dalgarno, Emily Webber and Alan Wright.

## How to be a great show & tell audience member

1. **Be present and listen.** There’s nothing worse than preparing for a show & tell only to realise that no one’s paying attention. Close Slack, close Teams, stop looking at email, and give your full attention to your team-mates.
2. **Smile, use emojis, and celebrate!** Bring the good vibes and lift each other up whenever there’s something worth celebrating.

## It’s OK to be halfway done

The main thing to remember is that a show & tell is not just about sharing progress and successes. It’s a time to talk about what’s hard and what didn’t work too. It’s OK to be halfway done. It’s OK to go back to the drawing board.

Each sprint, try to answer these questions in your show & tell:

* What did we learn, or what changed our mind?
* What can we show? How can we help people see behind the scenes?
* What haven’t we figured out? What do we want feedback on?
Metrics, measures and indicators: a few things to bear in mind
Metrics, measures and indicators help you track and evaluate outcomes. They can tell us if we’re moving in the right direction, if things aren’t going well, or if we’ve achieved the outcome we set out to achieve. If you’ve reported on key performance indicators (KPIs), checked progress against objectives and key results (OKRs) or looked at user analytics, you’ll have some experience with metrics, measures and indicators.

These words are often used interchangeably and, in general, the difference isn’t important – not for this post, anyway. We can talk about the difference between metrics, measures and indicators another time. In this post we’ll cover some guiding principles for designing and using metrics, measures and indicators. A few things to bear in mind.

## Guiding principles

1. Value outcomes over outputs
2. Measures, not targets
3. Balance the what (quantitative) and the why (qualitative)
4. Measure the entire product or service
5. Keep them light and actionable
6. Revisit or refine as things change

### Value outcomes over outputs

Outputs are on the path to achieving outcomes – you can’t cater a memorable birthday party without making some sandwiches. But delivering outcomes is the real reason we’re here. So we don’t measure whether we’ve delivered a product or feature; we measure the impact it’s having.

### Measures, not targets

Follow Goodhart’s Law: ‘When a measure becomes a target, it ceases to be a good measure.’ There are numerous factors that contribute to a number going up or down. Metrics, measures and indicators are a starting point for a conversation, so we can ask why and do something about it (or not). The measures are in service of learning: tools, not goals.

### Balance the what (quantitative) and the why (qualitative)

Grown-ups love numbers. But it’s very easy to ignore what users think and feel when you only track quantitative measures. Numbers tell us what’s happening, but feedback can tell us why. There’s no point doing something faster if it makes the experience worse for users, for example – we have to balance quantity and quality.

### Measure the entire product or service

If we can see where people start, how they move through, and where they end, we can identify where to focus our efforts for improvement. The same is true for people who come back: we want to see whether we’ve made things better since the last time they were here. If you’re only measuring one part, you only know how one part is performing. Get holistic readings.

### Keep them light and actionable

It’s easy to go overboard and start tracking everything, but too much information can be a bad thing. If we track too many metrics, we run the risk of analysis paralysis. Similarly, one measure is too few: it’s not enough to understand an entire system. Four to eight key metrics or indicators per team is enough, and they should inspire action.

### Revisit or refine as things change

Our priorities will change over time, meaning we will need to change our indicators, measures and metrics too. It’s no use tracking and reporting on data points that don’t relate to outcomes. Measure what matters. We should aim not to change them too frequently – that causes whiplash – but it’s all right to change them when you change direction or focus.

## Are we on the way? Or did we get there?

Those principles are handy for working out what to measure, but there are two types of indicator you need to know about: leading and lagging.

Leading indicators tell us whether we’re making progress towards an outcome. _Are we on the way?_ For example, if we want to make it easy to find datasets, are people searching for data? Is the number of people searching for data going up?

Lagging indicators tell us whether we’ve achieved the outcome. _Did we get there?_ In that same example, what’s the user satisfaction score? Are people requesting new datasets?
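The leading/lagging split can be captured in something as light as a tagged list of indicators. A toy sketch in Python, borrowing the dataset-finding example – the figures and targets are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    kind: str     # "leading" (are we on the way?) or "lagging" (did we get there?)
    value: float
    target: float

    def on_track(self) -> bool:
        return self.value >= self.target

# Illustrative indicators for 'make it easy to find datasets'
indicators = [
    Indicator("weekly data searches", "leading", value=340, target=300),
    Indicator("user satisfaction (out of 5)", "lagging", value=3.6, target=4.0),
]

def summarise(indicators: list[Indicator]) -> dict:
    # Group into a small report: a starting point for a conversation, not a target
    return {
        kind: [(i.name, i.on_track()) for i in indicators if i.kind == kind]
        for kind in ("leading", "lagging")
    }
```

A report like this surfaces the useful mismatch: here the leading indicator looks healthy while the lagging one lags, which is exactly the prompt to ask why.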
Going faster with Lean principles
Software teams are often asked to go faster. There are many factors that influence the speed at which teams can discover, design and deliver solutions, and those factors aren’t always in a team’s control. But Lean principles offer teams a way to analyse and adapt their operating model – their ways of working.

## What is Lean?

Lean is a method of manufacturing that emerged from the Toyota Production System in the 1950s and 1960s. It’s a system that brings together methods of production and leadership. The early Agile community drew on Lean principles when shaping methods for making digital products and services, and those principles have had influence beyond the production environment, being adapted for business and strategy functions too.

## Books on Lean

Four books on Lean principles have influenced the way I work.

1. _Lean Software Development: An Agile Toolkit_ by Mary and Tom Poppendieck. The earliest of the four books. It really set the standard.
2. _The Lean Startup_ by Eric Ries. This started a big movement for applying Lean principles to your startup, including testing out new business models or growth opportunities.
3. _Lean UX_ by Jeff Gothelf and Josh Seiden. One of my favourites. This one really brought strategic goals and user experience closer together. It also shifted teams from writing problem statements to writing hypotheses.
4. _The Lean Product Playbook_ by Dan Olsen. Relatively similar to _The Lean Startup_, but more of a playbook, showing the practice that goes with the theory. The highlight is its emphasis on MVP tests: experiments you can run to learn something without building anything.

## Lean principles

All these books carry principles in their pages, all based on the original Lean principles from Toyota, and they’re all pretty similar. Combining their approaches helps us apply Lean principles to business model development, strategy, user-centred design and software delivery.

> A note on principles: principles are not rules. Principles guide your thinking and doing. Rules say what’s right and wrong.

### 1. Eliminate waste

Reduce anything which does not help deliver value to the user: partially done work, scope creep, re-learning, task-switching, waiting, hand-offs, defects, management activities. Outcomes, not outputs.

### 2. Amplify learning

Build, measure, learn. Create feedback loops. Build scrappy prototypes, run spikes. Write tests first. Think in iterations.

### 3. Decide as late as possible

Call out the assumptions or uncertainties, try out different options, and make decisions based on facts or evidence.

### 4. Deliver as fast as possible

Shorter cycles improve learning and communication, and help us meet users’ needs as soon as possible. Reduce work in progress, get one thing done, and iterate.

### 5. Empower the team

Figure it out together. Managers provide goals, encourage progress, spot issues and remove impediments. Designers, developers and data engineers suggest how to achieve a goal and feed into continuous improvement.

### 6. Build integrity in

Agility needs quality. Automated tests and proven design patterns allow you to focus on smaller parts of the system. A regular flow of insights to act on aids agility.

### 7. Optimise the whole

Focus on the entire value stream, not just individual tasks. Align strategy with development. Consider the entire user experience in the design process.

## Three simpler principles

If those seem like too many to get started with, here are three simpler principles that can help you go faster. I came across them in a book about running – not the obvious place to find inspiration about product management! Think easy, light and smooth.

It’s from a man called Micah True, who lived in the Mexican desert and ran with the local Tarahumara. They called him Caballo Blanco – ‘White Horse’ – because of his speed.
> “You start with easy, because if that’s all you get, that’s not so bad. Then work on light. Make it effortless, like you don’t give a shit how high the hill is or how far you’ve got to go. When you’ve practised that so long that you forget you’re practising, you work on making it smooooooth. You won’t have to worry about the last one – you get those three, and you’ll be fast.”

You can do this every cycle. Find one thing to make easier, one thing to make lighter, and one thing to make smoother. Fast will happen naturally.
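Of the seven Lean principles above, ‘deliver as fast as possible’ – reduce work in progress, get one thing done – is the easiest to make concrete: put a work-in-progress limit on the team’s board. A toy sketch (the column names and limit are invented for illustration, not a tool we prescribe):

```python
# Toy Kanban board with a work-in-progress (WIP) limit, illustrating
# 'reduce work in progress, get one thing done'
class Board:
    def __init__(self, wip_limit: int):
        self.wip_limit = wip_limit
        self.todo: list[str] = []
        self.in_progress: list[str] = []
        self.done: list[str] = []

    def start(self, task: str) -> bool:
        # Refuse to start new work once the WIP limit is hit:
        # finishing beats starting
        if len(self.in_progress) >= self.wip_limit:
            return False
        self.todo.remove(task)
        self.in_progress.append(task)
        return True

    def finish(self, task: str) -> None:
        self.in_progress.remove(task)
        self.done.append(task)
```

With a limit of 2, a third task can’t be started until something in progress is finished – the queue enforces the principle so the team doesn’t have to argue about it.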