generate_unit_tests
Generate comprehensive unit tests using symbolic execution to discover all execution paths through a function.
Quick Reference
generate_unit_tests(
file_path: str = None, # Path to file
code: str = None, # Or provide code directly
function_name: str = None, # Specific function
framework: str = "pytest", # Test framework
data_driven: bool = False, # Use parameterized tests
crash_log: str = None # Generate from crash log
) -> GeneratedTests
User Stories
| Persona | Story | Tool Value |
|---|---|---|
| Sarah (AI User) | "Generate test cases for my new function" | Better test coverage |
| David (Team Lead) | "Ensure all functions have comprehensive tests" | Quality assurance |
| Chris (OSS Contributor) | "Generate tests for symbolic execution edge cases" | Comprehensive test coverage |
Parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| file_path | string | No* | None | Path to source file |
| code | string | No* | None | Source code as string |
| function_name | string | No | None | Specific function to test |
| framework | string | No | "pytest" | pytest, unittest, or hypothesis |
| data_driven | bool | No | false | Generate parameterized tests |
| crash_log | string | No | None | Generate test from crash/error log |
*One of file_path or code is required.
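The parameter rules above can be sketched as a caller-side check. The helper below is hypothetical, purely for illustration; the tool performs its own validation:

```python
def build_request(file_path=None, code=None, function_name=None,
                  framework="pytest", data_driven=False, crash_log=None):
    """Assemble a generate_unit_tests request, enforcing the parameter rules.

    Hypothetical helper for illustration only.
    """
    # Exactly the documented constraint: one of file_path or code is required
    if file_path is None and code is None:
        raise ValueError("One of file_path or code is required")
    # framework must be one of the documented options
    if framework not in ("pytest", "unittest", "hypothesis"):
        raise ValueError(f"Unsupported framework: {framework}")
    return {
        "file_path": file_path,
        "code": code,
        "function_name": function_name,
        "framework": framework,
        "data_driven": data_driven,
        "crash_log": crash_log,
    }
```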
Response Schema
{
"data": {
"test_code": "string",
"test_cases": [
{
"name": "string",
"description": "string",
"inputs": {},
"expected_output": "any",
"is_error_case": "boolean",
"path_constraints": ["string"]
}
],
"coverage_estimate": {
"paths_covered": "integer",
"total_paths": "integer",
"coverage_percent": "float"
},
"imports_needed": ["string"]
},
"tier_applied": "string",
"duration_ms": "integer"
}
Examples
Generate Tests for a Function
Generate tests for the calculate_discount function:
def calculate_discount(price: int, quantity: int) -> float:
    if quantity < 0:
        raise ValueError("Invalid quantity")
    if price <= 0:
        return 0.0
    if quantity >= 10:
        return price * quantity * 0.9
    elif quantity >= 5:
        return price * quantity * 0.95
    return price * quantity
{
"code": "def calculate_discount(price: int, quantity: int) -> float:\n if quantity < 0:\n raise ValueError(\"Invalid quantity\")\n if price <= 0:\n return 0.0\n if quantity >= 10:\n return price * quantity * 0.9\n elif quantity >= 5:\n return price * quantity * 0.95\n return price * quantity",
"framework": "pytest"
}
codescalpel generate-unit-tests --code 'def calculate_discount(price: int, quantity: int) -> float:
    if quantity < 0:
        raise ValueError("Invalid quantity")
    if price <= 0:
        return 0.0
    if quantity >= 10:
        return price * quantity * 0.9
    elif quantity >= 5:
        return price * quantity * 0.95
    return price * quantity' --framework pytest
{
"data": {
"test_code": "import pytest\nfrom module import calculate_discount\n\n\nclass TestCalculateDiscount:\n \"\"\"Tests for calculate_discount function.\"\"\"\n \n def test_negative_quantity_raises_error(self):\n \"\"\"Test that negative quantity raises ValueError.\"\"\"\n with pytest.raises(ValueError, match=\"Invalid quantity\"):\n calculate_discount(100, -1)\n \n def test_zero_price_returns_zero(self):\n \"\"\"Test that zero or negative price returns 0.\"\"\"\n assert calculate_discount(0, 5) == 0.0\n assert calculate_discount(-10, 5) == 0.0\n \n def test_bulk_discount_10_plus(self):\n \"\"\"Test 10% discount for quantity >= 10.\"\"\"\n result = calculate_discount(100, 10)\n assert result == 900.0 # 100 * 10 * 0.9\n \n def test_medium_discount_5_to_9(self):\n \"\"\"Test 5% discount for quantity 5-9.\"\"\"\n result = calculate_discount(100, 7)\n assert result == 665.0 # 100 * 7 * 0.95\n \n def test_no_discount_under_5(self):\n \"\"\"Test no discount for quantity < 5.\"\"\"\n result = calculate_discount(100, 3)\n assert result == 300.0 # 100 * 3",
"test_cases": [
{
"name": "test_negative_quantity_raises_error",
"description": "Negative quantity should raise ValueError",
"inputs": {"price": 100, "quantity": -1},
"expected_output": "ValueError",
"is_error_case": true,
"path_constraints": ["quantity < 0"]
},
{
"name": "test_zero_price_returns_zero",
"description": "Zero or negative price returns 0",
"inputs": {"price": 0, "quantity": 5},
"expected_output": 0.0,
"is_error_case": false,
"path_constraints": ["quantity >= 0", "price <= 0"]
},
{
"name": "test_bulk_discount_10_plus",
"description": "10% discount for bulk orders",
"inputs": {"price": 100, "quantity": 10},
"expected_output": 900.0,
"is_error_case": false,
"path_constraints": ["quantity >= 0", "price > 0", "quantity >= 10"]
},
{
"name": "test_medium_discount_5_to_9",
"description": "5% discount for medium orders",
"inputs": {"price": 100, "quantity": 7},
"expected_output": 665.0,
"is_error_case": false,
"path_constraints": ["quantity >= 0", "price > 0", "quantity >= 5", "quantity < 10"]
},
{
"name": "test_no_discount_under_5",
"description": "No discount for small orders",
"inputs": {"price": 100, "quantity": 3},
"expected_output": 300.0,
"is_error_case": false,
"path_constraints": ["quantity >= 0", "price > 0", "quantity < 5"]
}
],
"coverage_estimate": {
"paths_covered": 5,
"total_paths": 5,
"coverage_percent": 100.0
},
"imports_needed": ["pytest"]
},
"tier_applied": "community",
"duration_ms": 145
}
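The generated assertions can be verified directly against the source function. The sketch below inlines `calculate_discount` (the `from module import ...` line in the generated code is a placeholder path) and re-checks each path's expected value:

```python
def calculate_discount(price: int, quantity: int) -> float:
    if quantity < 0:
        raise ValueError("Invalid quantity")
    if price <= 0:
        return 0.0
    if quantity >= 10:
        return price * quantity * 0.9
    if quantity >= 5:
        return price * quantity * 0.95
    return price * quantity

# Mirror of the generated expectations, one per discovered path
assert calculate_discount(100, 10) == 900.0   # bulk discount path
assert calculate_discount(100, 7) == 665.0    # medium discount path
assert calculate_discount(100, 3) == 300      # no-discount path
assert calculate_discount(0, 5) == 0.0        # non-positive price path
try:
    calculate_discount(100, -1)               # error path
except ValueError as e:
    assert str(e) == "Invalid quantity"
```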
Generate Parameterized Tests
{
"data": {
"test_code": "import pytest\nfrom validators import validate_email\n\n\nclass TestValidateEmail:\n \"\"\"Tests for validate_email function.\"\"\"\n \n @pytest.mark.parametrize(\"email,expected\", [\n (\"user@example.com\", True),\n (\"user.name@example.co.uk\", True),\n (\"user+tag@example.org\", True),\n (\"invalid-email\", False),\n (\"missing@domain\", False),\n (\"@nodomain.com\", False),\n (\"\", False),\n (\"spaces in@email.com\", False),\n ])\n def test_validate_email(self, email, expected):\n \"\"\"Test email validation with various inputs.\"\"\"\n assert validate_email(email) == expected\n \n @pytest.mark.parametrize(\"invalid_type\", [None, 123, [], {}])\n def test_invalid_type_raises_error(self, invalid_type):\n \"\"\"Test that non-string inputs raise TypeError.\"\"\"\n with pytest.raises(TypeError):\n validate_email(invalid_type)",
"test_cases": [
{"inputs": {"email": "user@example.com"}, "expected_output": true},
{"inputs": {"email": "invalid-email"}, "expected_output": false}
]
}
}
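The `validate_email` source itself is not shown above. A minimal stand-in that satisfies all of the generated parameterized cases might look like the following; the regex is an assumption for illustration, far looser than RFC 5322:

```python
import re

# Loose pattern: non-empty local part, "@", domain containing a dot,
# and no whitespace anywhere. An assumption, not the documented implementation.
_EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_email(email):
    """Return True for plausible addresses; raise TypeError on non-strings."""
    if not isinstance(email, str):
        raise TypeError("email must be a string")
    return _EMAIL_RE.match(email) is not None
```

Against the generated table, this accepts `user@example.com`, `user.name@example.co.uk`, and `user+tag@example.org`, and rejects `invalid-email`, `missing@domain`, `@nodomain.com`, the empty string, and addresses containing spaces.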
Generate from Crash Log
{
"data": {
"test_code": "import pytest\nfrom processor import process_data\n\n\nclass TestProcessDataRegression:\n \"\"\"Regression tests generated from crash logs.\"\"\"\n \n def test_division_by_zero_regression(self):\n \"\"\"Regression test for ZeroDivisionError.\n \n Original crash: division by zero at line 45\n \"\"\"\n # Input that caused the crash\n crash_input = {'value': 100, 'count': 0}\n \n # Verify proper error handling\n with pytest.raises(ZeroDivisionError):\n process_data(crash_input)\n \n # Or if fixed, verify it handles gracefully:\n # result = process_data(crash_input)\n # assert result == expected_value",
"crash_analysis": {
"exception_type": "ZeroDivisionError",
"location": "processor.py:45",
"trigger_condition": "data['count'] == 0",
"suggested_fix": "Add check for count == 0 before division"
}
}
}
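The `crash_analysis` fields translate directly into a regression check: the crash input exercises the trigger condition. A self-contained sketch with a hypothetical `process_data` that mirrors the analyzed bug (division by `data['count']`):

```python
def process_data(data):
    """Hypothetical stand-in mirroring the analyzed bug."""
    return data["value"] / data["count"]  # ZeroDivisionError when count == 0

# The input recovered from the crash log
crash_input = {"value": 100, "count": 0}

try:
    process_data(crash_input)
    crashed = False
except ZeroDivisionError:
    crashed = True
assert crashed  # the regression input still triggers the crash

# A non-crashing input still works normally
assert process_data({"value": 100, "count": 4}) == 25.0
```

Once the suggested fix (guarding `count == 0` before dividing) lands, the test flips from `pytest.raises` to asserting the graceful result, exactly as the generated comment notes.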
Generate Hypothesis Tests
{
"data": {
"test_code": "from hypothesis import given, strategies as st\nfrom module import sort_and_unique\n\n\nclass TestSortAndUnique:\n \"\"\"Property-based tests for sort_and_unique.\"\"\"\n \n @given(st.lists(st.integers()))\n def test_result_is_sorted(self, items):\n \"\"\"Result should always be sorted.\"\"\"\n result = sort_and_unique(items)\n assert result == sorted(result)\n \n @given(st.lists(st.integers()))\n def test_result_has_no_duplicates(self, items):\n \"\"\"Result should have no duplicate values.\"\"\"\n result = sort_and_unique(items)\n assert len(result) == len(set(result))\n \n @given(st.lists(st.integers()))\n def test_all_input_values_present(self, items):\n \"\"\"All unique input values should be in result.\"\"\"\n result = sort_and_unique(items)\n assert set(result) == set(items)\n \n @given(st.lists(st.integers(), min_size=0, max_size=0))\n def test_empty_input(self, items):\n \"\"\"Empty input should return empty list.\"\"\"\n assert sort_and_unique(items) == []",
"imports_needed": ["hypothesis"]
}
}
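The three generated properties (sorted, duplicate-free, value-preserving) pin the function down completely: any implementation satisfying all three must return `sorted(set(items))`. A one-line implementation (an assumption; the source of `sort_and_unique` is not shown in this doc) passes them on a fixed sample:

```python
def sort_and_unique(items):
    """Return the distinct values of items in ascending order."""
    return sorted(set(items))

# The same properties Hypothesis would exercise, checked on one fixed sample
sample = [3, 1, 2, 3, 1]
result = sort_and_unique(sample)
assert result == sorted(result)          # result is sorted
assert len(result) == len(set(result))   # no duplicates
assert set(result) == set(sample)        # all input values present
assert sort_and_unique([]) == []         # empty input returns empty list
```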
Supported Frameworks
| Framework | Output Style |
|---|---|
| pytest | pytest functions with fixtures |
| unittest | unittest.TestCase classes |
| hypothesis | Property-based tests |
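For comparison, `framework="unittest"` emits `unittest.TestCase` classes rather than pytest functions. A hand-written sketch (not actual tool output) of two of the discount cases in that style, with `calculate_discount` inlined so the example is self-contained:

```python
import unittest


def calculate_discount(price: int, quantity: int) -> float:
    if quantity < 0:
        raise ValueError("Invalid quantity")
    if price <= 0:
        return 0.0
    if quantity >= 10:
        return price * quantity * 0.9
    if quantity >= 5:
        return price * quantity * 0.95
    return price * quantity


class TestCalculateDiscount(unittest.TestCase):
    """unittest rendering of two of the generated pytest cases."""

    def test_negative_quantity_raises_error(self):
        with self.assertRaises(ValueError):
            calculate_discount(100, -1)

    def test_bulk_discount_10_plus(self):
        self.assertEqual(calculate_discount(100, 10), 900.0)


# Run the suite programmatically and capture the result
suite = unittest.TestLoader().loadTestsFromTestCase(TestCalculateDiscount)
outcome = unittest.TextTestRunner(verbosity=0).run(suite)
```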
Tier Limits
generate_unit_tests capabilities vary by tier:
| Feature | Community | Pro | Enterprise |
|---|---|---|---|
| Max test cases | 5 | 20 | Unlimited |
| Test frameworks | pytest | pytest, unittest | All + custom |
| Basic test generation | ✅ | ✅ | ✅ |
| Parameterized tests | ✅ | ✅ | ✅ |
| Crash log analysis | ❌ | ✅ | ✅ |
| Hypothesis tests | ❌ | ✅ | ✅ |
| Mock generation | ❌ | ✅ Auto-mocking | ✅ Advanced mocking |
| Custom frameworks | ❌ | ❌ | ✅ Plugin support |
Community Tier
- ✅ Generate up to 5 test cases per function
- ✅ pytest framework support
- ✅ Basic parameterized tests
- ✅ Coverage estimation
- ⚠️ Max 5 test cases - Limited coverage
- ⚠️ pytest only - No unittest or hypothesis
- ❌ No crash log analysis
- ❌ No mock generation
Pro Tier
- ✅ All Community features
- ✅ Up to 20 test cases - Better coverage
- ✅ pytest and unittest - More framework options
- ✅ Crash log analysis - Generate regression tests
- ✅ Hypothesis tests - Property-based testing
- ✅ Auto-mocking - Generate mocks for dependencies
- ✅ Data-driven tests - Parameterized test generation
Enterprise Tier
- ✅ All Pro features
- ✅ Unlimited test cases - Complete coverage
- ✅ All frameworks - pytest, unittest, hypothesis, nose, etc.
- ✅ Custom framework support - Organization-specific frameworks
- ✅ Advanced mocking - Complex mock scenarios
- ✅ Integration tests - Multi-function test generation
- ✅ Contract tests - API contract verification
Key Difference: Test Case Volume and Framework Support
- Community: 5 test cases, pytest - Basic coverage
- Pro: 20 test cases, pytest/unittest/hypothesis - Comprehensive testing
- Enterprise: Unlimited, all frameworks - Complete test suite generation
Best Practices
- Review generated tests - Verify the asserted values match your intent
- Add edge cases - The tool covers execution paths; you know the domain
- Use data-driven tests - Parameterized tests are easier to maintain
- Update imports - Adjust generated module paths to match your project layout
- Run after generation - Ensure the generated tests actually pass
Related Tools
- symbolic_execute - Understand paths first
- simulate_refactor - Test before changing
- analyze_code - Find functions to test