generate_unit_tests¶

Generate comprehensive unit tests by using symbolic execution to discover all execution paths through a function.

Quick Reference¶

generate_unit_tests(
    file_path: str = None,           # Path to file
    code: str = None,                # Or provide code directly
    function_name: str = None,       # Specific function
    framework: str = "pytest",       # Test framework
    data_driven: bool = False,       # Use parameterized tests
    crash_log: str = None            # Generate from crash log
) -> GeneratedTests

User Stories¶

| Persona | Story | Tool Value |
| --- | --- | --- |
| 👤 Sarah (AI User) | "Generate test cases for my new function" | Better test coverage |
| 👥 David (Team Lead) | "Ensure all functions have comprehensive tests" | Quality assurance |
| 🔧 Chris (OSS Contributor) | "Generate tests for symbolic execution edge cases" | Comprehensive test coverage |

→ See all user stories

Parameters¶

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| file_path | string | No* | None | Path to source file |
| code | string | No* | None | Source code as string |
| function_name | string | No | None | Specific function to test |
| framework | string | No | "pytest" | pytest, unittest, or hypothesis |
| data_driven | bool | No | false | Generate parameterized tests |
| crash_log | string | No | None | Generate test from crash/error log |

*One of file_path or code is required.

Response Schema¶

{
  "data": {
    "test_code": "string",
    "test_cases": [
      {
        "name": "string",
        "description": "string",
        "inputs": {},
        "expected_output": "any",
        "is_error_case": "boolean",
        "path_constraints": ["string"]
      }
    ],
    "coverage_estimate": {
      "paths_covered": "integer",
      "total_paths": "integer",
      "coverage_percent": "float"
    },
    "imports_needed": ["string"]
  },
  "tier_applied": "string",
  "duration_ms": "integer"
}
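
A client can cross-check the coverage figure against the path counts when consuming a response. A minimal sketch (the `summarize` helper is illustrative, not part of the tool's API):

```python
import json

def summarize(response: dict) -> str:
    """Summarize a generate_unit_tests response for display."""
    data = response["data"]
    cov = data["coverage_estimate"]
    # coverage_percent is paths_covered / total_paths as a percentage
    pct = 100.0 * cov["paths_covered"] / cov["total_paths"]
    return (f"{len(data['test_cases'])} test case(s), "
            f"{cov['paths_covered']}/{cov['total_paths']} paths "
            f"({pct:.1f}% coverage)")

response = json.loads("""{
  "data": {
    "test_code": "...",
    "test_cases": [{"name": "t1"}, {"name": "t2"}],
    "coverage_estimate": {"paths_covered": 2, "total_paths": 4,
                          "coverage_percent": 50.0},
    "imports_needed": ["pytest"]
  },
  "tier_applied": "community",
  "duration_ms": 12
}""")
print(summarize(response))  # → 2 test case(s), 2/4 paths (50.0% coverage)
```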

Examples¶

Generate Tests for a Function¶

Generate tests for the calculate_discount function:

def calculate_discount(price: int, quantity: int) -> float:
    if quantity < 0:
        raise ValueError("Invalid quantity")
    if price <= 0:
        return 0.0
    if quantity >= 10:
        return price * quantity * 0.9
    elif quantity >= 5:
        return price * quantity * 0.95
    return price * quantity
{
  "code": "def calculate_discount(price: int, quantity: int) -> float:\n    if quantity < 0:\n        raise ValueError(\"Invalid quantity\")\n    if price <= 0:\n        return 0.0\n    if quantity >= 10:\n        return price * quantity * 0.9\n    elif quantity >= 5:\n        return price * quantity * 0.95\n    return price * quantity",
  "framework": "pytest"
}
codescalpel generate-unit-tests --code 'def calculate_discount(price: int, quantity: int) -> float:
    if quantity < 0:
        raise ValueError("Invalid quantity")
    if price <= 0:
        return 0.0
    if quantity >= 10:
        return price * quantity * 0.9
    elif quantity >= 5:
        return price * quantity * 0.95
    return price * quantity' --framework pytest
{
  "data": {
    "test_code": "import pytest\nfrom module import calculate_discount\n\n\nclass TestCalculateDiscount:\n    \"\"\"Tests for calculate_discount function.\"\"\"\n    \n    def test_negative_quantity_raises_error(self):\n        \"\"\"Test that negative quantity raises ValueError.\"\"\"\n        with pytest.raises(ValueError, match=\"Invalid quantity\"):\n            calculate_discount(100, -1)\n    \n    def test_zero_price_returns_zero(self):\n        \"\"\"Test that zero or negative price returns 0.\"\"\"\n        assert calculate_discount(0, 5) == 0.0\n        assert calculate_discount(-10, 5) == 0.0\n    \n    def test_bulk_discount_10_plus(self):\n        \"\"\"Test 10% discount for quantity >= 10.\"\"\"\n        result = calculate_discount(100, 10)\n        assert result == 900.0  # 100 * 10 * 0.9\n    \n    def test_medium_discount_5_to_9(self):\n        \"\"\"Test 5% discount for quantity 5-9.\"\"\"\n        result = calculate_discount(100, 7)\n        assert result == 665.0  # 100 * 7 * 0.95\n    \n    def test_no_discount_under_5(self):\n        \"\"\"Test no discount for quantity < 5.\"\"\"\n        result = calculate_discount(100, 3)\n        assert result == 300.0  # 100 * 3",
    "test_cases": [
      {
        "name": "test_negative_quantity_raises_error",
        "description": "Negative quantity should raise ValueError",
        "inputs": {"price": 100, "quantity": -1},
        "expected_output": "ValueError",
        "is_error_case": true,
        "path_constraints": ["quantity < 0"]
      },
      {
        "name": "test_zero_price_returns_zero",
        "description": "Zero or negative price returns 0",
        "inputs": {"price": 0, "quantity": 5},
        "expected_output": 0.0,
        "is_error_case": false,
        "path_constraints": ["quantity >= 0", "price <= 0"]
      },
      {
        "name": "test_bulk_discount_10_plus",
        "description": "10% discount for bulk orders",
        "inputs": {"price": 100, "quantity": 10},
        "expected_output": 900.0,
        "is_error_case": false,
        "path_constraints": ["quantity >= 0", "price > 0", "quantity >= 10"]
      },
      {
        "name": "test_medium_discount_5_to_9",
        "description": "5% discount for medium orders",
        "inputs": {"price": 100, "quantity": 7},
        "expected_output": 665.0,
        "is_error_case": false,
        "path_constraints": ["quantity >= 0", "price > 0", "quantity >= 5", "quantity < 10"]
      },
      {
        "name": "test_no_discount_under_5",
        "description": "No discount for small orders",
        "inputs": {"price": 100, "quantity": 3},
        "expected_output": 300.0,
        "is_error_case": false,
        "path_constraints": ["quantity >= 0", "price > 0", "quantity < 5"]
      }
    ],
    "coverage_estimate": {
      "paths_covered": 5,
      "total_paths": 5,
      "coverage_percent": 100.0
    },
    "imports_needed": ["pytest"]
  },
  "tier_applied": "community",
  "duration_ms": 145
}
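
The expected values in the generated tests can be sanity-checked by running calculate_discount directly:

```python
def calculate_discount(price: int, quantity: int) -> float:
    if quantity < 0:
        raise ValueError("Invalid quantity")
    if price <= 0:
        return 0.0
    if quantity >= 10:
        return price * quantity * 0.9
    elif quantity >= 5:
        return price * quantity * 0.95
    return price * quantity

assert calculate_discount(0, 5) == 0.0       # price <= 0 path
assert calculate_discount(-10, 5) == 0.0     # negative price also returns 0
assert calculate_discount(100, 10) == 900.0  # 10% bulk discount
assert calculate_discount(100, 7) == 665.0   # 5% medium discount
assert calculate_discount(100, 3) == 300     # no discount
try:
    calculate_discount(100, -1)
except ValueError as e:
    assert str(e) == "Invalid quantity"      # error path
```

Each assertion corresponds to one entry in test_cases, one per execution path.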

Generate Parameterized Tests¶

Generate data-driven tests for the validate_email function:
{
  "file_path": "/project/validators.py",
  "function_name": "validate_email",
  "framework": "pytest",
  "data_driven": true
}
codescalpel generate-unit-tests validators.py \
  --function validate_email \
  --framework pytest \
  --data-driven
{
  "data": {
    "test_code": "import pytest\nfrom validators import validate_email\n\n\nclass TestValidateEmail:\n    \"\"\"Tests for validate_email function.\"\"\"\n    \n    @pytest.mark.parametrize(\"email,expected\", [\n        (\"user@example.com\", True),\n        (\"user.name@example.co.uk\", True),\n        (\"user+tag@example.org\", True),\n        (\"invalid-email\", False),\n        (\"missing@domain\", False),\n        (\"@nodomain.com\", False),\n        (\"\", False),\n        (\"spaces in@email.com\", False),\n    ])\n    def test_validate_email(self, email, expected):\n        \"\"\"Test email validation with various inputs.\"\"\"\n        assert validate_email(email) == expected\n    \n    @pytest.mark.parametrize(\"invalid_type\", [None, 123, [], {}])\n    def test_invalid_type_raises_error(self, invalid_type):\n        \"\"\"Test that non-string inputs raise TypeError.\"\"\"\n        with pytest.raises(TypeError):\n            validate_email(invalid_type)",
    "test_cases": [
      {"inputs": {"email": "user@example.com"}, "expected_output": true},
      {"inputs": {"email": "invalid-email"}, "expected_output": false}
    ]
  }
}
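
For reference, a hypothetical validate_email that satisfies the parameterized cases above might look like the following. This is an illustrative assumption only; the tool generates tests against your actual implementation in validators.py, which may differ:

```python
import re

# Illustrative implementation only -- your validators.py may differ.
EMAIL_RE = re.compile(r"^[A-Za-z0-9._+-]+@[A-Za-z0-9-]+(\.[A-Za-z0-9-]+)+$")

def validate_email(email) -> bool:
    if not isinstance(email, str):
        raise TypeError("email must be a string")
    return EMAIL_RE.match(email) is not None

# Same input/expected pairs as the generated @pytest.mark.parametrize table
cases = [
    ("user@example.com", True),
    ("user.name@example.co.uk", True),
    ("user+tag@example.org", True),
    ("invalid-email", False),
    ("missing@domain", False),
    ("@nodomain.com", False),
    ("", False),
    ("spaces in@email.com", False),
]
assert all(validate_email(email) == expected for email, expected in cases)
```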

Generate from Crash Log¶

Generate a regression test from this crash:

Traceback (most recent call last):
  File "processor.py", line 45, in process_data
    result = data['value'] / data['count']
ZeroDivisionError: division by zero
{
  "file_path": "/project/processor.py",
  "function_name": "process_data",
  "crash_log": "Traceback (most recent call last):\n  File \"processor.py\", line 45, in process_data\n    result = data['value'] / data['count']\nZeroDivisionError: division by zero"
}
codescalpel generate-unit-tests processor.py \
  --function process_data \
  --crash-log "ZeroDivisionError: division by zero at line 45"
{
  "data": {
    "test_code": "import pytest\nfrom processor import process_data\n\n\nclass TestProcessDataRegression:\n    \"\"\"Regression tests generated from crash logs.\"\"\"\n    \n    def test_division_by_zero_regression(self):\n        \"\"\"Regression test for ZeroDivisionError.\n        \n        Original crash: division by zero at line 45\n        \"\"\"\n        # Input that caused the crash\n        crash_input = {'value': 100, 'count': 0}\n        \n        # Verify proper error handling\n        with pytest.raises(ZeroDivisionError):\n            process_data(crash_input)\n        \n        # Or if fixed, verify it handles gracefully:\n        # result = process_data(crash_input)\n        # assert result == expected_value",
    "crash_analysis": {
      "exception_type": "ZeroDivisionError",
      "location": "processor.py:45",
      "trigger_condition": "data['count'] == 0",
      "suggested_fix": "Add check for count == 0 before division"
    }
  }
}
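
The suggested_fix in crash_analysis amounts to guarding the division before it happens. A hedged sketch of what a repaired process_data could look like (the original function body is not shown here, so this is an assumption):

```python
def process_data(data: dict) -> float:
    """Illustrative fix: guard against count == 0 before dividing."""
    if data["count"] == 0:
        # Previously crashed with ZeroDivisionError at this point
        raise ValueError("count must be non-zero")
    return data["value"] / data["count"]

# The crash input now fails with a clear, intentional error
try:
    process_data({"value": 100, "count": 0})
except ValueError as e:
    assert "non-zero" in str(e)

assert process_data({"value": 100, "count": 4}) == 25.0
```

After applying a fix like this, update the generated regression test to expect the new, deliberate error instead of ZeroDivisionError.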

Generate Hypothesis Tests¶

Generate property-based tests using hypothesis:
{
  "code": "def sort_and_unique(items: list) -> list:\n    return sorted(set(items))",
  "framework": "hypothesis"
}
codescalpel generate-unit-tests \
  --code "def sort_and_unique(items: list) -> list: return sorted(set(items))" \
  --framework hypothesis
{
  "data": {
    "test_code": "from hypothesis import given, strategies as st\nfrom module import sort_and_unique\n\n\nclass TestSortAndUnique:\n    \"\"\"Property-based tests for sort_and_unique.\"\"\"\n    \n    @given(st.lists(st.integers()))\n    def test_result_is_sorted(self, items):\n        \"\"\"Result should always be sorted.\"\"\"\n        result = sort_and_unique(items)\n        assert result == sorted(result)\n    \n    @given(st.lists(st.integers()))\n    def test_result_has_no_duplicates(self, items):\n        \"\"\"Result should have no duplicate values.\"\"\"\n        result = sort_and_unique(items)\n        assert len(result) == len(set(result))\n    \n    @given(st.lists(st.integers()))\n    def test_all_input_values_present(self, items):\n        \"\"\"All unique input values should be in result.\"\"\"\n        result = sort_and_unique(items)\n        assert set(result) == set(items)\n    \n    @given(st.lists(st.integers(), min_size=0, max_size=0))\n    def test_empty_input(self, items):\n        \"\"\"Empty input should return empty list.\"\"\"\n        assert sort_and_unique(items) == []",
    "imports_needed": ["hypothesis"]
  }
}
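
The three properties asserted by the generated hypothesis tests can also be spot-checked without the library, using a handful of fixed inputs:

```python
def sort_and_unique(items: list) -> list:
    return sorted(set(items))

samples = [[], [3, 1, 2], [5, 5, 5], [2, -1, 2, 0, -1]]
for items in samples:
    result = sort_and_unique(items)
    assert result == sorted(result)         # result is sorted
    assert len(result) == len(set(result))  # no duplicates
    assert set(result) == set(items)        # all unique input values present

assert sort_and_unique([2, -1, 2, 0, -1]) == [-1, 0, 2]
```

hypothesis generalizes these checks by generating many random input lists instead of a fixed sample.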

Supported Frameworks¶

| Framework | Output Style |
| --- | --- |
| pytest | pytest functions with fixtures |
| unittest | unittest.TestCase classes |
| hypothesis | Property-based tests |
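
Selecting framework="unittest" produces unittest.TestCase classes instead of pytest functions. A hedged sketch of the equivalent style for the calculate_discount example (the exact generated layout may differ):

```python
import unittest

def calculate_discount(price: int, quantity: int) -> float:
    if quantity < 0:
        raise ValueError("Invalid quantity")
    if price <= 0:
        return 0.0
    if quantity >= 10:
        return price * quantity * 0.9
    elif quantity >= 5:
        return price * quantity * 0.95
    return price * quantity

class TestCalculateDiscount(unittest.TestCase):
    """unittest-style equivalent of the pytest example above."""

    def test_negative_quantity_raises_error(self):
        with self.assertRaises(ValueError):
            calculate_discount(100, -1)

    def test_bulk_discount_10_plus(self):
        self.assertEqual(calculate_discount(100, 10), 900.0)

    def test_no_discount_under_5(self):
        self.assertEqual(calculate_discount(100, 3), 300)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestCalculateDiscount)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```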

Tier Limits¶

generate_unit_tests capabilities vary by tier:

| Feature | Community | Pro | Enterprise |
| --- | --- | --- | --- |
| Max test cases | 5 | 20 | Unlimited |
| Test frameworks | pytest | pytest, unittest, hypothesis | All + custom |
| Basic test generation | ✅ | ✅ | ✅ |
| Parameterized tests | ✅ | ✅ | ✅ |
| Crash log analysis | ❌ | ✅ | ✅ |
| Hypothesis tests | ❌ | ✅ | ✅ |
| Mock generation | ❌ | ✅ Auto-mocking | ✅ Advanced mocking |
| Custom frameworks | ❌ | ❌ | ✅ Plugin support |

Community Tier¶

  • ✅ Generate up to 5 test cases per function
  • ✅ pytest framework support
  • ✅ Basic parameterized tests
  • ✅ Coverage estimation
  • ⚠️ Max 5 test cases - Limited coverage
  • ⚠️ pytest only - No unittest or hypothesis
  • ❌ No crash log analysis
  • ❌ No mock generation

Pro Tier¶

  • ✅ All Community features
  • ✅ Up to 20 test cases - Better coverage
  • ✅ pytest and unittest - More framework options
  • ✅ Crash log analysis - Generate regression tests
  • ✅ Hypothesis tests - Property-based testing
  • ✅ Auto-mocking - Generate mocks for dependencies
  • ✅ Data-driven tests - Parameterized test generation

Enterprise Tier¶

  • ✅ All Pro features
  • ✅ Unlimited test cases - Complete coverage
  • ✅ All frameworks - pytest, unittest, hypothesis, nose, etc.
  • ✅ Custom framework support - Organization-specific frameworks
  • ✅ Advanced mocking - Complex mock scenarios
  • ✅ Integration tests - Multi-function test generation
  • ✅ Contract tests - API contract verification

Key Difference: Test Case Volume and Framework Support

  • Community: 5 test cases, pytest - Basic coverage
  • Pro: 20 test cases, pytest/unittest/hypothesis - Comprehensive testing
  • Enterprise: Unlimited, all frameworks - Complete test suite generation

→ See tier comparison

Best Practices¶

  1. Review generated tests - Verify the generated logic is correct before committing
  2. Add edge cases - The tool covers execution paths; you know the domain
  3. Use data-driven tests - Parameterized tests are easier to maintain
  4. Update imports - Adjust module paths to match your project layout
  5. Run after generation - Ensure the generated tests actually pass