The First 90 Days: Technical Assessment Framework for New Engineering Leaders

Joining a new engineering organization as a technical leader requires systematic assessment, strategic listening, and building credibility before pushing for change. This framework covers what to evaluate, how to gather information, and how to turn observations into actionable improvements.

Dan Alvare

Joining a new engineering organization as a technical leader is both exhilarating and overwhelming. You're excited about the opportunity but faced with a flood of information: codebases to understand, people to meet, processes to learn, and systems to comprehend. It can be tempting to immediately identify problems and propose solutions to prove your value.

I've learned that the first 90 days require a different approach: systematic assessment, strategic listening, and building credibility before pushing for change. Rush this phase, and you risk misdiagnosing problems, alienating your team, or spending political capital on the wrong battles. Do it well, and you establish yourself as a thoughtful leader who understands the full picture before acting.

This article shares the assessment framework I use when joining new engineering organizations, focusing on what to evaluate, how to gather information, and how to turn observations into actionable improvements.

Why Assessment Matters: The Cost of Moving Too Fast

I learned an important lesson early in my career: being technically right isn't enough. I joined an organization and identified several legitimate technical issues within my first few weeks. When I presented solutions, the reception was lukewarm. My analysis was accurate, but I had missed something crucial. I didn't yet understand why those problems existed, what tradeoffs had been made, or what had been tried before. Without that context and established credibility, the ideas landed poorly.

The reality is that most organizations already know their problems. What they don't have is someone who can:

  • See the problems with fresh eyes but without judgment
  • Understand the full context of why things are the way they are
  • Build consensus around what to fix first
  • Execute improvements without disrupting ongoing work

Your first 90 days aren't about proving yourself by finding problems. They're about proving you're wise by understanding context, building relationships, and choosing your battles strategically.

There's also a unique advantage to being new: you have a one-time fresh perspective on the organization. As time passes, you become blind to the same problem areas everyone else has normalized. Use this window wisely to observe patterns and friction points that long-time team members no longer notice.

The Three-Pillar Assessment Framework

I organize my assessment around three core areas, in this specific order: team capabilities and dynamics, code quality and technical debt, and infrastructure and architecture. The order matters. Understanding your team determines what's possible with the other two.

Pillar 1: Team Capabilities and Dynamics

Why this comes first: Your team is your most important asset and constraint. Understanding their capabilities, dynamics, and motivation determines what improvements are realistic and what may be difficult.

What to assess:

Team alignment is the first thing I look for. Are team members working toward the same goals, or does it feel disjointed? Do they understand the product vision and their role in it? Misalignment often indicates communication issues between product and engineering, unclear priorities, or siloed teams working in isolation.

Work ethic and attention to detail help me identify top performers. Who takes ownership of problems beyond their immediate scope? Who follows through on commitments? Who catches edge cases and thinks about business impact? These are the people you'll rely on for critical work and who will set the standard for others.

How to gather this information:

The most valuable insights come from one-on-one conversations in your first few weeks. I schedule 30-minute 1:1s with every engineer, asking open-ended questions:

  • What do you enjoy most about working here?
  • What are the biggest pain points in your day-to-day work?
  • If you could change one thing about our engineering processes, what would it be?
  • What improvements would you prioritize if you were in charge?

These conversations serve multiple purposes: building rapport, gathering perspectives, and identifying patterns. When three different engineers independently mention the same pain point, you've found something worth investigating.
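The pattern-spotting step can be as simple as tallying pain points across your 1:1 notes. Here's a minimal, illustrative sketch (the tagged pain points are hypothetical; in practice they'd come from your own meeting notes):

```python
from collections import Counter

# Hypothetical pain points tagged while reviewing 1:1 notes.
mentions = [
    ("alice", "slow CI builds"),
    ("bob", "slow CI builds"),
    ("carol", "flaky staging environment"),
    ("dave", "slow CI builds"),
    ("erin", "unclear deployment process"),
]

# Count how many engineers raised each pain point.
counts = Counter(topic for _, topic in mentions)

# Flag anything mentioned independently by three or more people.
worth_investigating = [topic for topic, n in counts.items() if n >= 3]
print(worth_investigating)  # → ['slow CI builds']
```

A spreadsheet works just as well; the point is to record themes consistently so that convergence across independent conversations becomes visible.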

I also observe team interactions in meetings, code reviews, and Slack channels. How do people give and receive feedback? Is there psychological safety to disagree or admit mistakes? Do senior engineers mentor juniors, or are there knowledge silos? The dynamics reveal a lot about team health.

Pay attention to whom people go to for help. Every team has informal leaders: people whose judgment is trusted regardless of title. Identify these people early. You'll need them as allies for any significant changes.

Pillar 2: Code Quality and Technical Debt

Why this comes second: Code quality reflects organizational values and past decisions. It tells you about technical standards, time pressure, and whether quality is genuinely prioritized or just discussed in all-hands meetings.

What to look for:

Start with bugs as entry points into the codebase. Bugs represent real-world pain and provide concrete examples to trace through the system. Pick a few recent production bugs and follow the code paths that caused them. This reveals architectural patterns, error handling approaches, and how well the system handles edge cases.

Attention to detail in code is a proxy for management's values. If management tolerates sloppy work, it shows up everywhere: inconsistent code style, missing tests, poor documentation, and UI/UX bugs that should have been caught. The devil is in the details. Teams with high standards care about variable naming, test coverage, and edge cases. Teams without standards let these things slide.

How to gather this information:

In your first month, read code and pay attention to every pull request. This is time-consuming but invaluable. You learn the codebase, the team's coding patterns, and the quality bar. You also learn who the strong technical contributors are and who might need improvement.

Review pull requests actively and ask questions. If something looks off in a PR, there's probably a reason behind it, and asking helps you locate the problem areas. This serves dual purposes: you learn the codebase faster, and you establish yourself as someone who cares about quality without being prescriptive about solutions.

Modern AI-assisted code analysis tools can also help: they can quickly give you an overview of a project and surface patterns and potential issues worth a closer look.

Try fixing a small bug yourself in your first few weeks. There's no substitute for hands-on experience to understand the development workflow, build process, testing approach, and deployment pipeline. You'll quickly discover friction points that slow down developers.

Pillar 3: Infrastructure and Architecture

Why this comes third: Infrastructure and architecture determine your ability to ship reliably and scale. They also reveal how much technical debt has accumulated and what constraints you're working within.

What to assess:

CI/CD maturity tells you how seriously the organization takes automation and deployment safety. Is deployment a one-click operation or a manual, error-prone process? How long do builds take? Are deployments frequent or infrequent? Can you roll back easily?

Monitoring and observability paint a clear picture of system health. Good monitoring tells you where the real problems are. Bad monitoring (or no monitoring) is itself a major problem. What visibility do you have into production? Can you diagnose issues quickly? Do you know when things break before customers complain?

How to gather this information:

Start by reviewing existing monitoring dashboards and incident response processes. What metrics are tracked? What alerts fire? How are incidents handled? This reveals both technical sophistication and organizational maturity.

Talk to the infrastructure team. They often have the clearest picture of system pain points: which services are unstable, what causes outages, where performance bottlenecks exist, and what keeps them up at night.

Examine recent incidents and postmortems if they exist. How does the organization respond to failures? Is there a blameless culture focused on learning, or a blame culture focused on punishment? The incident response culture tells you a lot about organizational health.

Understanding Stakeholders Beyond Engineering

Technical assessment alone isn't enough. You need to understand the organizational context, priorities, and political landscape. This means talking to stakeholders and understanding what they care about.

Product Leadership

The product-engineering relationship determines how much technical debt you can address. If product leadership understands engineering constraints and values quality, you'll have breathing room to improve systems. If they only care about feature velocity, you'll fight constant battles over technical debt.

Green flags:

  • Product leaders who ask about technical debt in planning
  • Willingness to allocate time for refactoring and quality improvements
  • Understanding that "just ship it" has long-term costs
  • Collaborative planning where engineers have a voice

Red flags:

  • "Why is this taking so long?" without understanding complexity
  • Resistance to any work that isn't directly user-facing features
  • Consistent scope creep and deadline pressure
  • Engineers treated as order-takers, not partners

You gather this information through observation: sit in on planning meetings, sprint reviews, and roadmap discussions. Pay attention to how product and engineering communicate, who has influence, and whether technical concerns are taken seriously.

Executive Leadership

Understanding what executives actually care about is crucial for getting buy-in on technical improvements.

How to learn their priorities:

Sit in on executive meetings if you can. Pay attention to what gets discussed, what gets celebrated, and what gets escalated. Are they focused on revenue, growth, stability, fundraising, customer acquisition? This tells you what arguments will resonate.

Schedule 1:1s with key executives or stakeholders. I typically wait 30-60 days before these conversations so I can ask informed questions. Coming in new without an agenda can be advantageous. You're in a neutral position without being part of internal politics.

Pay attention to how they communicate with the team. Do they talk about quality, reliability, and technical excellence? Or is it all about speed and features? Their language reveals their values.

I've found that when you genuinely focus on helping executives achieve their goals, they naturally share what matters most to them. The key is approaching conversations with curiosity about their challenges rather than presenting a list of technical needs.

Mark Cuban puts it well: "The greatest value you can offer a boss is to reduce their stress." The best employees, he says, "analyze a situation, find a solution, and don't make a big deal of it." This is especially true for technical leaders. When executives see you identifying and solving problems without creating drama or requiring hand-holding, you become invaluable. Focus on reducing their stress rather than adding to it with long lists of technical improvements that need their attention.

Other Departments

Sales, customer success, and operations often have valuable perspectives on engineering problems - usually in the form of complaints or feature requests.

You don't need to proactively seek these out in your first 30 days, but when these conversations happen naturally, listen carefully. Customer-facing teams know where the product falls short, what competitors do better, and what's costing the company money or customers.

Red Lines vs Yellow Lines: What Demands Immediate Action

During your assessment, you'll uncover issues ranging from minor annoyances to critical risks. Learning to categorize by urgency is essential.

Red line issues require immediate escalation, even in your first 30 days:

  • Security vulnerabilities that expose customer data or system access
  • Compliance violations in regulated industries
  • Critical production bugs causing data loss or corruption
  • Imminent system failures that could cause major outages

For these issues, escalate through proper channels immediately. Don't wait for your 90-day assessment to be complete.

Yellow line issues are serious but can wait for proper planning:

  • Major technical debt that slows development
  • Architectural problems that limit scalability
  • Team skill gaps that need training or hiring
  • Process inefficiencies that waste time

These require the full context of your 90-day assessment and collaborative solutions with stakeholders. Rushing to fix them without buy-in often backfires.

An important note on security issues: If you discover security vulnerabilities, escalate them immediately through the appropriate security channels. These issues require coordinated response, proper testing, and validation. Attempting to fix security problems solo can create additional risks or miss broader implications.

For everything else, present findings as a team effort, not individual heroics. Over time, people will value your collaboration and know you're not selfishly trying to gain visibility. Good managers notice this type of behavior and value it over hero antics.

Building Credibility Through Quick Wins

Before you can tackle big, complex problems, you need credibility. Quick wins in your first 30-60 days establish that you're competent, collaborative, and focused on meaningful impact.

What makes a good quick win:

  • Visible impact that the team notices and appreciates
  • Low risk of breaking things or causing new problems
  • Achievable within a few days or weeks with your current understanding
  • Demonstrates competence in areas relevant to your role

Types of quick wins to look for:

Long-standing bug fixes that carry weight but nobody has gotten around to fixing. These show you can navigate the codebase, understand root causes, and ship improvements.

Examples:

  • A race condition in the notification system, on the backlog for six months, that occasionally sent duplicate emails
  • An intermittent test failure that made CI unreliable

Well-defined projects that are already prioritized but need execution. Taking ownership of these shows you can deliver without needing extensive hand-holding.

Examples:

  • Implementing rate limiting for a specific API endpoint. The requirements were clear, scope was contained, and it was already prioritized
  • Migrating a legacy admin dashboard to the new design system with clear acceptance criteria, low risk, high visibility

Helping another developer complete a project or get unblocked. This builds relationships and demonstrates collaboration. Examples:

  • Pair programming with a mid-level engineer on a complex refactor they'd been struggling with
  • Reviewing and improving a junior engineer's first major feature before it went to production

Developer experience improvements that remove friction the team complains about. These are often quick to implement and highly appreciated. Examples:

  • Implementing multi-stage Docker builds to separate dev and production dependencies, which reduced image size by 95% (from ~1GB to ~50MB), eliminated dev-only security vulnerabilities, and significantly sped up deployments
  • Reducing Docker build times from 12 minutes to 3 minutes by optimizing layer caching
  • Setting up pre-commit hooks to catch linting errors locally instead of in CI

Documentation gaps that everyone asks about but nobody has documented.

Examples:

  • Creating an architecture diagram that everyone had been asking for but nobody had time to make
  • Documenting the deployment process that only 2 people knew how to do

What quick wins to avoid:

Projects that have previously failed. There's usually a reason they failed. You don't have the context or political capital to succeed where others failed.

Large projects with many dependencies. When you start, you don't have the political capital to organize and motivate a team for a complex initiative. Save these for after you've established credibility.

Anything that might step on someone's toes. If another engineer is already working on something, don't swoop in to "help" without being invited. It can come across as territorial or dismissive of their work.

The 90-Day Timeline: Flexible Phases with Guardrails

While I use "90 days" as a framework, the reality is that assessment phases depend on complexity and your specific role. A Principal Engineer joining a 500-person engineering org needs more time than a Staff Engineer joining a 20-person startup. The important thing is to make your own timeline that makes sense for the job.

That said, here's a general structure:

Phase 1: Learning and Listening (Weeks 1-4)

  • Schedule 1:1s with every team member
  • Review recent code, PRs, and architectural docs
  • Attend all team meetings and observe dynamics
  • Fix small bugs to understand the development workflow
  • Start identifying patterns and pain points
  • Build relationships across the organization

Phase 2: Building Credibility (Weeks 5-8)

  • Execute on 2-3 quick wins
  • Continue assessment but with more hands-on work
  • Deepen understanding of infrastructure and architecture
  • Start forming hypotheses about major improvements needed
  • Build relationships with stakeholders outside engineering

Phase 3: Synthesis and Planning (Weeks 9-12)

  • Consolidate findings into themes
  • Validate hypotheses with team members and stakeholders
  • Draft improvement roadmap with measurable goals
  • Socialize ideas informally before formal presentation
  • Identify sponsors who can support your initiatives
  • Present findings and collaborate on priorities

The key principle: Stay close to your target end date of 90 days, but assess based on complexity. If you're in a highly complex environment, you might extend to 120 days. If it's a smaller, simpler organization, you might be ready to present findings at 60 days.

Presenting Your Findings: Making It Collaborative

At the end of your assessment, you need to present findings to leadership in a way that gets buy-in and resources. How you structure this conversation determines whether your recommendations get implemented or filed away.

Leadership needs to see measurable value, not just problems. Frame improvements in business terms: "Optimizing our CI pipeline would reduce build times from 45 minutes to 8 minutes, giving engineers 30+ hours back per week for feature development."
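Claims like this should survive a back-of-envelope check before they go in a slide. A quick sketch, assuming roughly 50 builds per week across the team (an illustrative number; the savings scale with your actual build frequency):

```python
# Back-of-envelope check of the CI savings claim.
old_build_min = 45      # current build time, from the example above
new_build_min = 8       # optimized build time
builds_per_week = 50    # assumed team-wide build frequency

saved_hours = (old_build_min - new_build_min) * builds_per_week / 60
print(f"~{saved_hours:.0f} engineer-hours saved per week")
```

Under these assumptions the pipeline work returns about 31 hours per week, which supports the "30+ hours" framing; if your team builds more or less often, adjust the claim accordingly.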

Structure your presentation:

  1. What's working well - Start with positives to show you're not just criticizing
  2. Key challenges - Frame as opportunities, not failures
  3. Measurable goals - Specific, quantifiable improvements
  4. Proposed roadmap - Phased approach with milestones
  5. Resource needs - What you need to execute (time, people, budget)
  6. Expected impact - How this helps the business, not just engineering

Open it up for collaboration. You're all working together, and it's especially important at the early stages of your job to get feedback. The plan doesn't need implementation-level detail, but it should clearly show the value and approach.

Start by presenting to your direct manager before escalating to broader leadership. This respects organizational hierarchy and builds support at each level.

Get a sponsor who already has credibility in the organization. This might be your manager, a respected senior engineer, or a leader in another department. Having someone champion your ideas who's already trusted accelerates buy-in.

Prioritization should be collaborative. Since you're new, you don't have all of the context. Don't be afraid to lean on others to help. You bring the technical assessment; they bring the business context and organizational priorities. Together you create a roadmap that addresses real needs.

Key Takeaways

The first 90 days are about understanding context, building relationships, and making strategic improvements.

Remember:

You have a one-time fresh perspective. Use it wisely. As time passes, you'll be blind to the same problems as everyone else.

Start with people, not code. Understanding your team's capabilities and dynamics determines what's possible with everything else.

Build credibility before tackling big problems. Quick wins establish that you can execute before you ask for resources to address major issues.

Let them tell you the problems. When presenting findings, ask where people feel the problems are rather than telling them. You'll often discover important context you didn't have, and people are more invested in solving problems they've identified themselves.

Present findings as team efforts, not individual heroics. Good managers value collaboration over hero antics.

Find a sponsor who can champion your ideas. Someone with existing credibility in the organization can accelerate buy-in for your proposals.

Be flexible with timelines but have a plan. The 90-day framework is a guide. Adjust based on your role's complexity and the organization's needs.

Most importantly: use your first 90 days to build understanding, credibility, and a realistic roadmap for change. Do that well, and you'll set yourself up for long-term impact.