RapDev Technical Assessment Preparation

Based on the repos found in outputs/rapdev (datadog-oss, datadog_service_checker, dynatrace_migration_script, and lir-migration-utility), RapDev's technical assessments will likely mirror their day-to-day consulting work closely.

They explicitly avoid LeetCode. Instead, they want to see how you build practical, production-ready CLI tools and automation scripts that interact with monitoring tool APIs (especially Datadog).

🔍 Core Themes in RapDev's Codebases

From analyzing the provided repositories, your assessment will likely involve Python (or Bash) scripting against REST APIs with the following characteristics:

  1. API Migrations & Integrations: Moving entities from one platform to another (e.g., PagerDuty to Lightstep, Dynatrace to Datadog) or mapping out entity relationships (e.g., datadog_service_checker).
  2. Safe Execution Modes: Their tools consistently implement --noop or --dry-run modes. You must demonstrate that your code is safe to run in a customer's production environment without inadvertently destroying data.
  3. CLI Ergonomics: Tools are invoked via command line using standard argument parsing (e.g., Python's argparse), accepting API keys via environment variables or .env files.
  4. Environment Portability: They expect clear execution pathways via Python Virtual Environments (requirements.txt) and Docker (Dockerfile / docker-compose.yml).
  5. Data Transformation & IaC: Normalizing raw JSON exports into a common model, or generating Terraform (.tf.json) representations for Infrastructure as Code.

🛠️ Expected Assessment Formats

You should be prepared for one of the following take-home or live-pairing tasks:

Scenario 1: The API Data Extraction & Analysis Tool

The Prompt: "Write a Python script that connects to the Datadog API, finds all monitors that have been muted for more than 30 days, and exports a CSV report with the monitor name, creator, and mute date."

What they are testing:

  • API pagination handling.
  • Environment variable management for secrets (DD_API_KEY, DD_APP_KEY).
  • Data formatting (JSON to CSV).
  • Basic error handling (e.g., 403 Forbidden, 429 Rate Limit).

Scenario 2: The Safe State Change CLI (Highly Likely)

The Prompt: "Create a CLI utility to mass-update service tags across a Datadog environment. The tool must accept an old tag and a new tag. It must have a --dry-run mode."

What they are testing:

  • CLI design (argparse).
  • Safety: Implementing a robust dry-run mode that logs the intended changes instead of making them.
  • Interaction with Datadog's update endpoints.
  • Idempotency (handling cases where the item is already updated).

Scenario 3: The System Migration Normalizer

The Prompt: "Given this mock JSON export from New Relic / AppDynamics, write a script to normalize it into a Datadog-compatible Dashboard representation."

What they are testing:

  • Deep JSON dictionary manipulation.
  • Code architecture (separating concerns: extract, transform, load).
  • Identifying mapping gaps and logging warnings (e.g., "Widget type X is not supported in Datadog").

🚀 Practice Assignment: "Datadog Monitor Tag Standardizer"

To prepare, build this specific project. It encapsulates 90% of what RapDev will look for.

The Goal: Build a Python CLI that standardizes environment tags on Datadog monitors.

Requirements:

  1. Accept DD_API_KEY and DD_APP_KEY via environment variables (use python-dotenv).
  2. Fetch all monitors using the Datadog API. Handle pagination if necessary.
  3. Accept arguments: --old-tag (e.g., environment:qa) and --new-tag (e.g., env:staging).
  4. Find all monitors containing --old-tag and replace it with --new-tag.
  5. CRITICAL: The script must default to --dry-run. To actually apply the changes, the user must explicitly pass --apply.
  6. Output a clean log of what was changed (or what would be changed).
  7. Include a requirements.txt and a Dockerfile.
  8. Write a README.md explaining how to run it via Docker and locally.
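
Requirement 1 might look like the following fail-fast credential loader. The ImportError fallback is only so this sketch runs without python-dotenv installed; in the real project you would list python-dotenv in requirements.txt:

```python
import os
import sys

try:
    from dotenv import load_dotenv  # python-dotenv
except ImportError:
    # Sketch-only fallback: behave as if no .env file exists
    def load_dotenv(*args, **kwargs):
        return False


def load_credentials() -> dict[str, str]:
    """Read DD_API_KEY / DD_APP_KEY from the environment and fail fast if missing."""
    load_dotenv()  # no-op when there is no .env file; never overrides real env vars
    creds = {name: os.getenv(name, "") for name in ("DD_API_KEY", "DD_APP_KEY")}
    missing = [name for name, value in creds.items() if not value]
    if missing:
        sys.exit(f"Missing required environment variables: {', '.join(missing)}")
    return creds
```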

🌟 RapDev "Cheat Sheet" & Best Practices

When submitting your technical assessment to RapDev, ensure you hit these quality markers:

  1. Do not hardcode API Keys: Ever. Use os.getenv('DD_API_KEY') or the python-dotenv package.
  2. Use argparse or click: Do not use sys.argv[1] for command-line arguments. Provide useful --help text.
  3. Structured Logging > Print: Use Python's built-in logging module.
    ```python
    import logging

    logging.basicConfig(level=logging.INFO, format='%(levelname)s: %(message)s')
    logging.info(f"Would update monitor {monitor_id}")  # in dry-run mode
    ```
  4. Handle Rate Limits (HTTP 429): Use requests with a Retry adapter (via urllib3) to show you understand that APIs throttle clients.
  5. Use Docker: Provide a Dockerfile that COPYs the script and requirements.txt, making it zero-friction for the reviewer to run.
  6. Detailed README: Like the datadog_service_checker repo, explain exactly what the tool does, prerequisites, and copy-pasteable commands to run it locally and via Docker.
  7. Type Hinting: Use Python type hints (def process(monitors: list[dict]) -> None:) to show modern, readable Python proficiency.
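
Point 4 can be sketched by mounting urllib3's `Retry` on a `requests.Session`; the retry counts and status list here are illustrative choices, not required values:

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry


def make_session(total: int = 5) -> requests.Session:
    """Session that retries 429/5xx with exponential backoff and honors Retry-After."""
    retry = Retry(
        total=total,
        backoff_factor=1,                       # ~1s, 2s, 4s, ... between attempts
        status_forcelist=[429, 500, 502, 503],
        allowed_methods=["GET", "PUT", "POST"],
        respect_retry_after_header=True,        # obey the server's Retry-After hint
    )
    session = requests.Session()
    session.mount("https://", HTTPAdapter(max_retries=retry))
    return session
```

Every API call in the tool then goes through `make_session()` instead of bare `requests.get`, so throttling is handled in one place.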

👔 The "Datadog Consultant" Perspective

Because this is a consulting role (and not just an internal SRE role), RapDev will also evaluate how you engage with customers and how you structure enterprise Datadog solutions. Keep the following consulting principles in mind throughout your assessment:

1. Unified Service Tagging is Your North Star

For any Datadog project, the first recommendation is always implementing Unified Service Tagging (env, service, version). If your assessment involves mapping data, setting up monitors, or standardizing tags, explicitly mention that your goal aligns with Datadog's Unified Service Tagging best practices.

2. Cost Management & Optimization

Clients frequently hire RapDev because their Datadog bill is out of control.

  • If configuring logs: Mention the use of Logging without Limits (e.g., generating metrics from logs, routing high-volume/low-value logs to cold storage archives like S3 instead of indexing).
  • If configuring metrics: Show awareness of custom metric cardinality (tagging by user_id or session_id can cause metric costs to explode).

3. Empathy for the "Legacy" Environment

As a consultant, you can't just mandate modern Kubernetes. You will interact with messy VMware estates, bare-metal Windows servers, and legacy systems.

  • Your solutions should account for hybrid environments.
  • Emphasize backwards compatibility and safe rollout strategies (e.g., migrating 10% of hosts first).

4. Handling Ambiguous Client Requests

If the assessment includes a "System Design" or "Case Study" discussion, expect vague prompts like: "A customer says their app is slow, how do you use Datadog to fix it?"

Your Consultant Approach:

  1. Clarify the Business Impact: "Who is complaining? Users or internal teams? How does this impact revenue?"
  2. Define the Technical KPIs: "I'd start by looking at the APM Service map, checking Error Rates and P95 Latency for the entry-point services."
  3. Drill Down using Correlation: "I'd pivot from the slow APM Trace directly to the underlying Host Metrics or Container logs using the trace_id."
  4. Actionable Outcomes: "We don't just find the bug; we set up an SLO and a multi-alert monitor so the customer knows before their users do next time."

5. Proper IaC (Infrastructure as Code)

Clients want repeatable deployments. Whenever proposing a Datadog configuration (Dashboards, Synthetics, Monitors), mention that you would ultimately manage these via Terraform or Datadog CloudFormation. The dynatrace_migration_script repo heavily emphasizes outputting datadog_dashboard_json for Terraform, proving RapDev values IaC-driven workflows over clicking around in the UI.