Episode 23: Start Where You Are — Assess Before You Build
Every journey of improvement begins with understanding where you stand today. The guiding principle “start where you are” emphasizes the importance of evidence-based assessment as the first step toward meaningful change. Too often, organizations leap into new initiatives driven by assumptions, trends, or pressures to innovate, only to find that the foundations they discarded were stronger than expected. This principle cautions against such haste. It reminds us that before building something new, we must carefully assess existing conditions, capabilities, and constraints. Just as a physician diagnoses before prescribing treatment, service management must diagnose its current state before investing in change. Starting from reality grounds decisions in evidence rather than speculation, reduces risk, and ensures that improvements build upon what already works. This approach accelerates meaningful progress by leveraging strengths while addressing weaknesses thoughtfully and deliberately.
A baseline is the first essential concept in this assessment. A baseline is a documented reference point that captures the current state of performance, processes, or systems. It acts like a “before picture,” enabling comparison against future improvements. For example, a baseline might include current incident resolution times or user satisfaction scores. Without such a reference point, claims of improvement remain vague and unprovable. Baselines also serve as motivational tools, showing stakeholders where progress has been made over time. Establishing baselines is not about perfection but about clarity—knowing where you stand today so you can measure tomorrow. They provide the anchor against which value, performance, and outcomes can be objectively evaluated, turning change from guesswork into measurable advancement.
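To make the idea of a baseline concrete, here is a minimal illustrative sketch in Python; it is not part of the ITIL guidance, and the ticket data and field names are invented for the example. It simply captures a dated "before picture" of resolution times and satisfaction scores so that later measurements have something objective to compare against.

```python
from datetime import date
from statistics import mean, median

# Hypothetical sample data: resolution times in hours and survey scores (1-5).
resolution_hours = [2.5, 4.0, 1.2, 8.0, 3.3, 6.1, 2.9]
satisfaction_scores = [4, 3, 5, 2, 4, 4, 3]

# A baseline is a dated snapshot of the metrics you care about today.
baseline = {
    "captured_on": date.today().isoformat(),
    "median_resolution_hours": median(resolution_hours),
    "mean_satisfaction": round(mean(satisfaction_scores), 2),
    "ticket_count": len(resolution_hours),
}

print(baseline)
```

Stored alongside the assessment, a snapshot like this is what turns a later claim of "we improved" into a measurable comparison.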
Building on the idea of a baseline is the current-state assessment. This is the structured, objective analysis of the organization’s present reality. A current-state assessment goes beyond anecdotal impressions and gathers verifiable evidence of how services perform, how processes flow, and how stakeholders perceive outcomes. For instance, analyzing help desk data might reveal recurring bottlenecks, while stakeholder surveys expose frustrations with usability. The goal is to illuminate reality rather than reinforce assumptions. A current-state assessment provides the factual ground on which strategies are built. Without it, organizations risk chasing imagined problems or ignoring real ones. By anchoring decisions in current-state evidence, leaders can plan improvements that are both relevant and credible, earning stakeholder trust and avoiding wasted effort.
The quality of a current-state assessment depends on the richness of its data sources. Multiple inputs are necessary to capture a complete picture of reality. These might include performance data from monitoring systems, incident logs from service desks, and feedback from stakeholder surveys or interviews. Each data source offers a different perspective: metrics quantify system behavior, incidents reveal operational weak points, and feedback highlights lived experiences. Just as a physician orders multiple tests to ensure an accurate diagnosis, organizations must combine different data streams to avoid blind spots. This comprehensive approach transforms scattered information into actionable insight, providing a strong foundation for identifying strengths, weaknesses, and opportunities for improvement.
A critical discipline in starting where you are is the avoidance of assumptions. Assumptions can distort reality, leading to misguided investments. For example, a team might assume that service downtime is the top concern for users, only to discover through surveys that response time and clarity of communication matter even more. Validating with direct observation and data prevents these errors. Site visits, shadowing employees, and analyzing real transaction records help reveal truths that assumptions obscure. The guiding principle reminds us that while intuition can spark ideas, decisions must rest on evidence. By suspending assumptions and testing them against observed facts, organizations protect themselves from costly missteps and ensure that improvements address genuine needs rather than imagined ones.
Another benefit of this principle is the recognition of existing capabilities that already work effectively. In the rush to innovate, organizations sometimes overlook what is functioning well, discarding stable systems or processes unnecessarily. For example, a long-standing incident management workflow might already deliver strong results, even if surrounding systems need modernization. Preserving effective capabilities saves money, reduces risk, and respects institutional knowledge. This recognition prevents wasteful reinvention of the wheel and ensures continuity during change. By starting where you are, organizations learn to distinguish between what requires replacement and what should be preserved, balancing innovation with pragmatism. Such discernment accelerates improvement by building on proven strengths instead of ignoring them.

Assessment also requires identifying constraints such as policies, budgets, and technology limitations. Constraints define the boundaries within which change must occur. Ignoring them leads to unrealistic plans that collapse in practice. For instance, proposing a new enterprise application without considering budget ceilings or regulatory requirements can doom an initiative before it begins. Identifying constraints upfront allows organizations to design solutions that fit within reality, not fantasy. Constraints are not obstacles to be lamented but parameters to be respected. By acknowledging them honestly, organizations reduce frustration and focus energy on feasible improvements. This realism strengthens both planning and execution, ensuring that change is sustainable.
Linked to constraints is the recognition of risks associated with replacing elements that currently function well. Replacing a working system may introduce instability, even if the new system promises improvements. For example, migrating to a new ticketing platform could disrupt workflows, confuse users, and temporarily reduce productivity. The principle of starting where you are urges caution in replacing functional components, reminding us that change always carries risk. Identifying these risks allows for mitigation strategies, such as phased rollouts or dual-running systems. The lesson is not to resist change but to pursue it wisely, weighing benefits against potential disruption. This protects stability while still enabling progress, balancing the desire for innovation with the need for reliability.
Measuring process effectiveness and outcome relevance is another cornerstone of assessment. Effectiveness is not just about how efficiently a process runs, but whether it contributes to outcomes stakeholders value. A process may be fast but still irrelevant if it delivers results nobody needs. For example, a service that quickly generates detailed technical reports may not be valuable if customers actually want high-level summaries. Measuring relevance ensures that processes are aligned with outcomes, not just internal efficiency. This focus shifts assessment from counting activities to evaluating impact, helping organizations distinguish between busy work and meaningful work. It emphasizes that real value is found in outcomes, not outputs.
Documenting value streams provides another vital perspective on the current state. A value stream maps the flow of work from demand to delivery, revealing how activities connect to create outcomes. For example, mapping the steps of a customer support process might show how tickets are logged, triaged, escalated, and resolved. By visualizing the journey, bottlenecks and redundancies become clear. Value stream documentation transforms abstract processes into tangible maps that teams can analyze and improve. It creates shared understanding of how work actually flows, ensuring that everyone sees the same picture of reality. Without this visualization, inefficiencies remain hidden, and efforts to improve risk missing the systemic issues that matter most.
Bottleneck detection is a natural outcome of value stream documentation. Bottlenecks appear where work piles up, delays accumulate, or handoffs break down. These choke points often have disproportionate impact on outcomes. For instance, if ticket triage is under-resourced, the entire incident management process slows, regardless of how well later steps perform. Identifying bottlenecks allows organizations to target improvements where they will make the greatest difference. Rather than spreading energy thinly across the entire process, assessment highlights leverage points where focused action can unlock significant value. Detecting and addressing bottlenecks is therefore one of the most effective ways to translate assessment into rapid gains in performance and satisfaction.
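As an illustration of how documenting a value stream can surface bottlenecks, the sketch below uses invented stage names and durations; it totals the average time tickets spend in each stage and flags the slowest one.

```python
from statistics import mean

# Hypothetical hours each ticket spent in each value stream stage.
stage_durations = {
    "logging":    [0.1, 0.2, 0.1, 0.3],
    "triage":     [6.0, 9.5, 7.2, 8.8],   # work piles up here
    "resolution": [2.0, 1.5, 3.0, 2.5],
    "closure":    [0.5, 0.4, 0.6, 0.5],
}

# Average wait per stage, sorted so the biggest delay surfaces first.
averages = {stage: mean(hours) for stage, hours in stage_durations.items()}
for stage, avg in sorted(averages.items(), key=lambda item: item[1], reverse=True):
    print(f"{stage:<11} {avg:5.1f} h")

bottleneck = max(averages, key=averages.get)
print(f"Likely bottleneck: {bottleneck}")
```

Even a rough calculation like this makes the leverage point visible: improving triage would do far more for end-to-end performance than tuning any other stage.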
Analysis also extends to variation and error sources. Processes that behave inconsistently undermine predictability, frustrating stakeholders and complicating planning. For example, if one support agent resolves tickets in fifteen minutes while another takes two hours, the variation makes outcomes unreliable. Assessing error rates and causes of variation helps organizations understand where training, standardization, or system improvements are needed. This kind of analysis adds depth to current-state understanding, showing not only what happens but also how consistently it happens. By reducing unwanted variation and errors, organizations increase trust and create a foundation for more reliable outcomes, reinforcing the principle that improvement begins with understanding what is happening today.
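One simple way to quantify the variation described here is to compare the spread of resolution times across agents or teams. The sketch below uses invented numbers and standard deviation as the measure of consistency; it is an illustration, not a prescribed method.

```python
from statistics import mean, stdev

# Hypothetical resolution times (minutes) per agent.
agent_times = {
    "agent_a": [14, 16, 15, 13, 17],    # consistent
    "agent_b": [20, 115, 35, 90, 10],   # highly variable
}

for agent, minutes in agent_times.items():
    print(f"{agent}: mean={mean(minutes):6.1f} min, stdev={stdev(minutes):6.1f} min")

# A large standard deviation relative to the mean signals unpredictable
# outcomes and points to candidates for training or standardization.
```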
Confirming stakeholder priorities is another step in grounding assessment. Stakeholders often differ in what they consider most important, and assumptions about their priorities can lead to misaligned improvements. Directly engaging with stakeholders clarifies which outcomes matter most before redesign begins. For example, IT staff may believe that automation is the top priority, while users may place higher value on responsive human support. Confirming these priorities ensures that improvement efforts are aimed at what stakeholders actually want, not what providers assume they want. This prevents wasted effort and strengthens buy-in, making change more successful. By aligning assessment with stakeholder priorities, organizations ensure that they are solving the right problems.
Preserving proven practices while targeting weak points is another vital balance. Not everything in the current state is broken. Often, stable and effective practices can serve as anchors during change, providing continuity and reducing disruption. For example, a well-functioning change approval process may not need major redesign, even if surrounding processes do. Preserving strong practices shows respect for past successes and prevents the demoralization of teams who see effective work discarded without reason. It also saves resources by focusing energy where it is most needed. Starting where you are is not about tearing down everything old but about building wisely on what already works while addressing what does not.
Finally, assessment must be aligned with governance to ensure that changes support strategic direction. Governance provides oversight and ensures that initiatives align with organizational goals, values, and risk tolerances. Without this alignment, assessments may identify improvements that, while technically valid, do not contribute to broader strategy. For example, streamlining an internal process may save time but conflict with a strategic goal of strengthening customer engagement. By involving governance early, organizations ensure that assessment findings are interpreted within a strategic framework. This prevents local optimizations from undermining global objectives, ensuring that improvement remains coherent and purposeful.
The last step in Part 1 is the communication of findings to create a shared understanding of reality. Assessment results are valuable only if they are communicated clearly to stakeholders and decision-makers. Reports, presentations, and workshops transform raw data into shared insight. This shared understanding builds consensus and reduces resistance to change, as everyone agrees on the starting point. For example, presenting evidence of recurring bottlenecks across departments can rally teams around the need for action. Communication turns assessment from a technical exercise into a cultural moment, where organizations collectively face reality and commit to improvement. In this sense, sharing findings is as important as gathering them, ensuring that the principle of starting where you are is embraced broadly, not just by analysts.
For more cyber-related content and books, please check out cyber author dot me. Also, there are other prepcasts on cybersecurity and more at Bare Metal Cyber dot com.
Once the current state is assessed, the next step is to form hypotheses for improvement that are grounded in observed evidence. A hypothesis is essentially a reasoned guess: if we make this change, we expect this outcome. For example, if analysis shows that ticket backlogs are caused by slow triage, a hypothesis might state that adding one more triage analyst will reduce resolution times by 25 percent. The important point is that hypotheses are not assumptions; they are proposals rooted in evidence gathered during assessment. This scientific mindset ensures that change is guided by data rather than enthusiasm. By treating improvements as hypotheses to be tested, organizations create a culture of disciplined experimentation, where every initiative begins with evidence and is evaluated against measurable results rather than vague expectations.
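To show what treating improvements as hypotheses can look like in practice, here is a minimal sketch; the fields and figures are assumptions chosen for illustration, not a prescribed format. The point is that each proposed change names its evidence, its expected effect, and how it will be measured.

```python
from dataclasses import dataclass

@dataclass
class ImprovementHypothesis:
    evidence: str           # observation from the current-state assessment
    change: str             # the proposed intervention
    metric: str             # how the effect will be measured
    baseline: float         # value before the change
    target: float           # expected value after the change
    review_after_days: int  # when the result will be checked

hypothesis = ImprovementHypothesis(
    evidence="Ticket backlog traced to slow triage",
    change="Add one triage analyst",
    metric="mean resolution time (hours)",
    baseline=8.0,
    target=6.0,             # roughly a 25 percent reduction
    review_after_days=30,
)
print(hypothesis)
```

Writing the hypothesis down in this form makes it testable: after thirty days, the measured value either supports the change or it does not.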
Low-risk experiments represent the practical way to test these hypotheses before scaling. Instead of committing to sweeping changes immediately, organizations can pilot small adjustments and observe outcomes. For example, a new incident categorization method could be tested with one support team for a month before rolling out enterprise-wide. If results show improved accuracy and faster resolution, the approach can be expanded confidently. If not, lessons can be learned without widespread disruption. This incremental approach reduces risk while maintaining momentum. It also demonstrates to stakeholders that changes are being introduced responsibly, with careful evaluation rather than reckless overhaul. Low-risk experiments thus embody the principle of starting where you are by acknowledging reality while testing pathways forward cautiously and intelligently.
Reusing existing tools and data wherever adequate for purpose is another way to respect the principle. Organizations often believe improvement requires entirely new systems, but this is not always the case. For example, an existing project management platform may already have features that support value stream mapping or reporting. Leveraging what is available reduces costs, minimizes disruption, and accelerates adoption. Reuse also builds on familiarity, reducing the learning curve for stakeholders. Importantly, reuse is not about resisting innovation but about recognizing when current assets are sufficient to meet needs. By distinguishing between “must replace” and “can reuse,” organizations preserve stability while still pursuing progress. This selective approach ensures that energy is invested where it matters most, rather than being wasted on unnecessary replacement of functional tools.
Gap analysis provides a structured method for comparing current capabilities to desired outcomes. It answers the question: what do we have now, and what do we need to reach our goals? For example, if the current help desk resolves 80 percent of tickets within a day but the goal is 95 percent, the gap is clear. Gap analysis allows organizations to prioritize improvements based on the size and significance of these gaps. Some gaps may be minor and tolerable, while others represent critical weaknesses that demand attention. This structured comparison helps avoid scattershot improvements by focusing on the most consequential differences between current and desired states. By conducting gap analysis, organizations make improvement efforts more targeted, ensuring that each step moves them closer to specific, evidence-based outcomes.
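Gap analysis can be as simple as listing current and target values side by side and ranking the differences. The sketch below uses invented metrics purely to illustrate the comparison.

```python
# Hypothetical current vs. desired performance (percentage points).
capabilities = {
    "tickets resolved within one day": {"current": 80, "target": 95},
    "first-contact resolution":        {"current": 55, "target": 70},
    "satisfaction score (scaled)":     {"current": 72, "target": 75},
}

# Rank gaps so the largest shortfalls are considered first.
gaps = sorted(
    ((name, v["target"] - v["current"]) for name, v in capabilities.items()),
    key=lambda item: item[1],
    reverse=True,
)
for name, gap in gaps:
    print(f"{name}: gap of {gap} points")
```

The ranking is only a starting point; significance to stakeholders, not just the size of the number, decides which gaps deserve investment first.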
Cost–benefit framing builds on gap analysis by evaluating whether proposed changes are worth pursuing given available resources. Not every identified gap warrants investment. For example, closing the gap from 95 percent to 99 percent uptime may cost disproportionately more than the additional value it provides. By framing changes in terms of cost and benefit, organizations ensure that resources are allocated wisely. This does not mean ignoring small improvements, but rather balancing ambition with practicality. Cost–benefit framing also aids in stakeholder communication, helping sponsors and leaders understand why some initiatives are prioritized over others. By grounding these discussions in evidence, organizations maintain credibility and prevent improvement efforts from being seen as arbitrary.
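The uptime example can be made concrete with a little arithmetic; the cost and value figures below are invented purely to illustrate the framing.

```python
HOURS_PER_YEAR = 24 * 365

def downtime_hours(availability_pct: float) -> float:
    """Annual downtime implied by an availability percentage."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

avoided = downtime_hours(95.0) - downtime_hours(99.0)   # about 350 hours per year
value_per_hour = 500          # assumed business value of an avoided downtime hour
upgrade_cost = 400_000        # assumed cost of reaching 99 percent availability

benefit = avoided * value_per_hour
print(f"Avoided downtime: {avoided:.0f} h/year")
print(f"Estimated benefit: ${benefit:,.0f} vs. cost: ${upgrade_cost:,}")
# If the cost clearly exceeds the benefit, the gap may not be worth closing yet.
```

With these assumed figures the benefit is roughly $175,000 against a $400,000 cost, which is exactly the kind of evidence that helps sponsors understand why one gap is deferred while another is funded.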
Risk mitigation is another essential element of responsible change. Even evidence-based improvements can introduce instability if risks are not addressed. For example, deploying new monitoring tools may improve visibility but also strain network resources if not planned carefully. Risk mitigation involves identifying potential negative impacts and creating strategies to minimize them. These might include phased rollouts, backup systems, or training programs. The goal is not to eliminate risk entirely—an impossible task—but to manage it proactively. By protecting stable operations while introducing change, organizations demonstrate responsibility to stakeholders. This careful balance allows them to innovate without undermining the trust and reliability that services must sustain.
Decision checkpoints provide structured opportunities to verify results before committing further investment. After piloting a new process or system, organizations should pause, evaluate evidence, and decide whether to continue, adjust, or halt. These checkpoints act as safety nets, preventing flawed initiatives from consuming excessive resources. For example, after implementing a new chatbot for user support, a checkpoint review might reveal that satisfaction scores declined, prompting a reconsideration of the rollout plan. By embedding checkpoints, organizations turn improvement into a sequence of informed choices rather than a single high-stakes gamble. This iterative structure ensures that momentum is maintained while protecting against runaway initiatives that fail to deliver.
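A decision checkpoint can be expressed as a simple rule applied to pilot evidence. The thresholds below are assumptions chosen only to illustrate the continue, adjust, or halt choice described above.

```python
def checkpoint_decision(baseline_csat: float, pilot_csat: float) -> str:
    """Recommend a next step based on how the pilot compares to the baseline."""
    change = pilot_csat - baseline_csat
    if change >= 0.2:        # clear improvement: expand the rollout
        return "continue"
    if change >= -0.1:       # roughly flat: adjust and re-pilot
        return "adjust"
    return "halt"            # worse than before: stop and reassess

# Example: chatbot pilot where satisfaction dropped from 4.1 to 3.6.
print(checkpoint_decision(baseline_csat=4.1, pilot_csat=3.6))  # -> "halt"
```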
Knowledge capture ensures that lessons learned during experiments and improvements inform subsequent iterations. Too often, organizations repeat mistakes because insights are not recorded or shared. By documenting what worked, what failed, and why, teams build a collective memory that strengthens future efforts. Knowledge capture may include formal reports, knowledge base articles, or shared debrief sessions. For example, a pilot project may reveal that users struggled with unclear instructions—documenting this insight ensures future rollouts avoid the same pitfall. Capturing knowledge transforms failures into learning opportunities and successes into repeatable patterns. It reinforces the principle of starting where you are by ensuring that each cycle of improvement begins from a richer base of experience.
Stakeholder communication remains critical during these improvement cycles. Transparency about progress, results, and adjustments fosters trust and engagement. If an experiment produces mixed results, openly sharing this information shows stakeholders that the process is honest and evidence-driven. Communication should highlight both successes and challenges, explaining how lessons are being applied. For example, reporting that a new system reduced average resolution times but created usability concerns reassures stakeholders that both dimensions are being considered. Clear, consistent communication turns stakeholders into partners rather than passive observers, strengthening their support for ongoing improvements. It also ensures that changes are interpreted as purposeful rather than chaotic.
Metrics selection aligned to observed pain points and goals further ensures that improvements remain relevant. Instead of measuring everything indiscriminately, organizations should choose metrics that directly connect to the issues uncovered during assessment. For example, if bottlenecks in ticket triage were identified, measuring average triage time becomes highly relevant. If user frustration with communication was uncovered, satisfaction surveys may be more important. By aligning metrics to real pain points, organizations ensure that data collection supports meaningful evaluation. This prevents wasted energy on irrelevant indicators and sharpens focus on outcomes that matter most. Carefully chosen metrics also make it easier to tell a compelling story of improvement, linking changes directly to stakeholder value.
Avoiding “greenfield bias” is another vital discipline. Greenfield thinking assumes that the best way forward is to discard existing systems and start fresh. While appealing, this approach often underestimates the cost, risk, and disruption of replacement. In many cases, existing systems contain valuable capabilities, institutional knowledge, and embedded trust that should not be thrown away lightly. For example, replacing a functioning HR system with a brand-new platform may offer marginal benefits while creating massive upheaval. The principle of starting where you are resists greenfield bias, reminding organizations to evaluate existing assets carefully before discarding them. By focusing on reuse and incremental improvement where possible, organizations preserve continuity while still pursuing meaningful change.
Integration with continual improvement ensures that starting where you are is not a one-time exercise but an ongoing discipline. Each cycle of improvement begins with assessment, experimentation, and reflection. Over time, this creates a rhythm of steady progress rather than sporadic leaps. Continual improvement transforms the principle into a cultural habit, where evidence-based assessment becomes the natural way to begin any initiative. For example, before upgrading a major application, teams might conduct a quick assessment to identify pain points, ensuring that the upgrade addresses real needs rather than imagined ones. By embedding assessment into continual improvement, organizations institutionalize the practice of starting with reality, ensuring resilience and adaptability over the long term.
From an exam perspective, learners must distinguish evidence-based approaches from assumption-driven ones. Exam questions may present scenarios where a team rushes into change based on opinions, and the correct answer will highlight the need for assessment, baselines, or evidence gathering. Recognizing that the principle emphasizes observation, analysis, and reuse helps learners select the value-focused option. This mirrors real-world application, where successful initiatives are those grounded in reality rather than in unfounded optimism. Exam preparation, therefore, is less about memorizing steps and more about internalizing the mindset that meaningful change begins with evidence.
Practical scenarios reinforce this principle vividly. Consider an organization planning to replace its email system due to perceived inefficiency. A current-state assessment reveals that the real issue is not the system itself but inconsistent training and poor configuration. By addressing these issues, performance improves dramatically without the cost of replacement. Another example might be a retailer considering a new warehouse management system. Assessment shows that existing software is sufficient, but bottlenecks stem from manual handoffs. Automating these handoffs solves the problem more effectively than a costly system overhaul. These scenarios show that starting where you are prevents wasteful decisions and directs energy toward targeted, evidence-based improvements.
In summary, starting where you are affirms that assessment is the foundation of smart change. It emphasizes baselines, current-state analysis, evidence-based hypotheses, incremental experimentation, and disciplined reuse. It guards against assumptions, greenfield bias, and reckless replacement, instead promoting risk-aware, value-focused progress. By embedding this principle into practice, organizations reduce risk, accelerate improvement, and build stakeholder trust. For learners, the essential takeaway is that meaningful improvement never begins in a vacuum—it always begins with reality. By facing the current state honestly, documenting it thoroughly, and communicating it clearly, organizations create a strong foundation for change that is both practical and sustainable, transforming ambition into tangible results.
