
Building a Scalable BDD Automation Framework with Playwright and pytest


By Sandip Chauhan November 12, 2025

In today’s fast-paced development world, speed and quality go hand in hand. Our QA team faced a familiar challenge: make automation testing faster, easier to maintain, and more readable across multiple projects, without sacrificing coverage or stability.

After evaluating several tools, we landed on Playwright + pytest-bdd, a modern pairing that brings both power and simplicity to end-to-end (E2E) automation with Behavior-Driven Development (BDD) at its core. For deeper context, the official pytest-bdd documentation covers step definitions, fixtures, and Gherkin mapping.

Why We Chose Playwright + pytest-bdd

Playwright (by Microsoft) is one of the most reliable tools for cross-browser automation (Chromium, Firefox, WebKit), supporting network mocking, multi-context testing, and parallelism with excellent stability.

pytest-bdd brings Behavior-Driven Development (BDD) into the picture with Gherkin syntax (Given–When–Then) so both technical and non-technical stakeholders can read scenarios and collaborate.

Together, this stack gives us:

  • Speed & reliability with Playwright’s robust engine
  • Readability & collaboration via Gherkin’s Given–When–Then
  • Structure & scalability across multi-role, multi-environment projects

Quick BDD Example

Feature (Gherkin):


Feature: Login functionality

  Scenario: Successful login for valid user
    Given the user is on the login page
    When they enter valid credentials
    Then they should be redirected to the dashboard

Step Definitions (Python + Playwright + pytest-bdd):


from pytest_bdd import scenarios, given, when, then
from playwright.sync_api import expect

BASE_URL = "https://example.com"
TEST_USERNAME = "valid_user"
TEST_PASSWORD = "valid_pass"

scenarios("../features/login.feature")

@given("the user is on the login page")
def open_login_page(page):
    page.goto(f"{BASE_URL}/login")

@when("they enter valid credentials")
def login(page):
    page.fill("#username", TEST_USERNAME)
    page.fill("#password", TEST_PASSWORD)
    page.click("button[type='submit']")

@then("they should be redirected to the dashboard")
def verify_dashboard(page):
    expect(page.locator("h1")).to_have_text("Dashboard")

This natural-language style bridges the gap between QA, developers, and product, so everyone can see what’s tested and why.

Framework Overview

We built our BDD Automation Framework with Playwright to be scalable, modular, and CI-friendly:

  • Parallel test execution across browsers & suites
  • Reusable fixtures and helpers for clean, DRY tests
  • Environment-specific configs via .env
  • Centralized reporting & test data management
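As one small example of the environment-specific configs, settings can be read from environment variables (which a .env file, loaded by python-dotenv or the CI job, would populate); a minimal sketch with illustrative names and defaults:

```python
# Minimal sketch: environment-specific configuration via variables.
# BASE_URL / BROWSER / HEADLESS are illustrative names; a .env file
# (loaded by python-dotenv or the CI job) would populate them.
import os

def get_config():
    return {
        "base_url": os.getenv("BASE_URL", "https://dev.ourapp.com"),
        "browser": os.getenv("BROWSER", "chromium"),
        "headless": os.getenv("HEADLESS", "true").lower() == "true",
    }
```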


Project Architecture

├── features/                     # Gherkin feature files
│   ├── login.feature
│   ├── user_management.feature
│   └── settings.feature
│
├── tests/
│   ├── steps/                    # Step definitions mapped from Gherkin
│   │   ├── login_steps.py
│   │   ├── user_steps.py
│   │   └── settings_steps.py
│   │
│   ├── test_login.py             # Uses login_steps to validate login flows
│   ├── test_user_management.py   # Uses user_steps to manage user scenarios
│   ├── test_settings.py          # Uses settings_steps for configuration tests
│   │
│   ├── conftest.py               # Shared pytest fixtures and hooks
│   ├── helpers.py                # Common UI and API utilities
│   ├── s3_util.py                # AWS S3 integration helpers
│
├── test-data/
│   ├── user_data.csv
│   ├── settings_data.json
│   └── env_specific_data/
│
├── reports/                      # HTML & Allure reports
├── requirements.txt              # Python dependencies
├── pytest.ini                    # Pytest config with markers and paths
├── .env                          # Environment-specific configurations
└── README.md

The layout above shows how authored tests and steps run through config, fixtures, and utilities to drive the Playwright browser and produce reports and failure artifacts.
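To illustrate how the test-data/ files might feed data-driven scenarios, here is a minimal sketch; the CSV columns (username, role) are illustrative assumptions, not the real schema:

```python
# Hypothetical sketch of loading test-data/user_data.csv for data-driven
# scenarios. The inline sample stands in for the real file; its columns
# (username, role) are assumptions for illustration.
import csv
import io

SAMPLE_CSV = "username,role\nalice,admin\nbob,viewer\n"

def load_rows(fh):
    """Return each CSV row as a dict keyed by the header row."""
    return list(csv.DictReader(fh))

rows = load_rows(io.StringIO(SAMPLE_CSV))
# Each row can then drive a parametrized step or a Scenario Outline example.
```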

Playwright Architecture Overview

Component Breakdown (What each layer does):

| Layer | Description |
| --- | --- |
| Feature Files | Plain-English behavior specs using Gherkin |
| Step Definitions | Executable Python steps mapped from Gherkin |
| Test Scripts | Orchestrate end-to-end flows using shared steps |
| Fixtures (conftest.py) | Setup/teardown, browser lifecycle, auth, and test state |
| Helpers/Utilities | Reusable UI/API functions, S3 utilities, data ops |
| Test Data | CSV/JSON per environment, role, or scenario |
| Configuration | .env + pytest.ini for runtime control, markers, and reporting |

pytest.ini (smoke/regression grouping + HTML report):


[pytest]
markers =
    smoke: Run smoke tests
    regression: Run regression suite
    demo: Run demo validations
addopts = -s -v --html=reports/report.html --self-contained-html

Advanced Capabilities (Enterprise-Ready)

  • Multi-role testing (Admin/Manager/Viewer) with context switching
  • AWS S3 integration for upload/download assertions
  • Parallel execution using pytest-xdist
  • API + UI hybrid validation (end-to-end consistency)
  • Data-driven scenarios with CSV/JSON
  • Automatic screenshots on failure
  • HTML + Allure reports for shareable insights
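The “automatic screenshots on failure” item can be wired up with a pytest hook in conftest.py. A minimal sketch, assuming a function-scoped page fixture; the output path is illustrative:

```python
# conftest.py sketch: save a screenshot whenever a test's call phase fails.
# Assumes a function-scoped `page` fixture; the reports/screenshots path
# is an illustrative choice.
import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    if report.when == "call" and report.failed:
        page = item.funcargs.get("page")  # present only for UI tests
        if page is not None:
            page.screenshot(path=f"reports/screenshots/{item.name}.png")
```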

Running the Tests (CI-Friendly)


# Run all tests
pytest

# Run smoke suite
pytest -m smoke

# Generate HTML report
pytest --html=reports/report.html --self-contained-html
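For the parallel execution mentioned above, pytest-xdist adds the -n flag (assuming pytest-xdist is installed):

```shell
# Run across 4 workers (pip install pytest-xdist)
pytest -n 4 -m regression

# Let xdist size the worker pool from available CPUs
pytest -n auto --html=reports/report.html --self-contained-html
```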

Report snapshot includes: pass/fail/skips, duration, environment, browser, and embedded screenshots for failures.

Example summary:


Test Suite: Smoke
Environment: https://dev.ourapp.com
Browser: Chromium
Passed: 25 | Failed: 0 | Skipped: 0
Duration: 4m 12s

Real Impact (Before → After)

| Metric | Before | After |
| --- | --- | --- |
| Regression Execution Time | ~4.5 hours | 1.3 hours (↓71%) |
| Test Maintenance Effort | High | Reduced by 50% |
| Cross-Browser Coverage | Limited | Full (Chromium, Firefox, WebKit) |
| Test Stability | 85% pass | 98% stable runs |
| Collaboration | QA-only | QA + Dev + BA |

What we test with this framework:

  • Authentication & Access Control
  • User Management
  • Settings & Configuration
  • End-to-End Integration flows

Lessons Learned (That Keep It Scalable)

  • Centralized fixtures to avoid repeated browser setup
  • Custom retry logic for flaky UI timing issues
  • Dynamic data factories to keep steps clean
  • Adaptive timeouts for network variability
  • Environment flags via .env for seamless switching
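The “custom retry logic” item can be as small as a helper like this: a sketch where the names, attempt count, and backoff are illustrative, and where Playwright’s built-in auto-waiting should still be preferred wherever it applies:

```python
# Sketch of a retry helper for flaky UI timing issues. Attempt count,
# delay, and the linear backoff are illustrative defaults.
import time

def retry(action, attempts=3, delay=0.5, exceptions=(Exception,)):
    """Call `action` until it succeeds or `attempts` runs out."""
    last_error = None
    for attempt in range(attempts):
        try:
            return action()
        except exceptions as exc:
            last_error = exc
            time.sleep(delay * (attempt + 1))  # back off a little more each time
    raise last_error

# Example usage: retry(lambda: page.click("#flaky-button"), exceptions=(TimeoutError,))
```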

Playwright vs. Selenium (Quick Context Table)

| Criterion | Playwright | Selenium |
| --- | --- | --- |
| Speed & Stability | Very fast; auto-waits; isolated contexts | Can need more explicit waits |
| Browser Support | Chromium, Firefox, WebKit | Broad, varies by driver |
| Parallelism | First-class support | Possible, often more setup |
| DevTools Protocol | Native | Varies |
| Tooling | Codegen, Trace Viewer, Inspector | Third-party heavy |

For deeper BDD grounding, see Cucumber’s BDD guide (conceptually aligned with pytest-bdd’s approach to collaboration and living documentation).

Sample Fixtures for Playwright + pytest-bdd


# tests/conftest.py
import os
import pytest
from playwright.sync_api import sync_playwright

@pytest.fixture(scope="session")
def base_url():
    return os.getenv("BASE_URL", "https://dev.ourapp.com")

@pytest.fixture(scope="session")
def browser_context_args():
    return {"accept_downloads": True, "viewport": {"width": 1366, "height": 768}}

@pytest.fixture(scope="function")
def page(browser_context_args):
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        context = browser.new_context(**browser_context_args)
        page = context.new_page()
        yield page
        context.close()
        browser.close()

Why This Works for Behavior-Driven Development (BDD)

This structure keeps scenarios readable while the Python steps stay reusable. You get the best of both worlds: developer-grade automation + stakeholder-friendly specs.

It’s exactly what a BDD automation framework built on Playwright and pytest is designed to achieve: clarity, speed, and scale.

Conclusion

Automation isn’t just “run scripts.” It’s building a self-sustaining ecosystem that scales with your product and team. With Behavior-Driven Development (BDD) via pytest-bdd and the speed of Playwright, our QA teams ship faster, communicate more clearly, and maintain less while increasing coverage and stability.

If you’re exploring scalable, enterprise-ready test automation, or want help aligning BDD with your delivery pipeline, talk to a team experienced in modern QA engineering and platform tooling.


Written by Sandip Chauhan

Experienced Quality Assurance Engineer with over 7 years of experience in manual and automation testing across web, mobile, and API platforms. Skilled in Selenium, Appium, Postman, and JMeter, with a strong focus on building scalable, maintainable test frameworks. Passionate about Agile testing practices, CI/CD integration, and delivering high-quality, user-centric software solutions.
