automation:test-the-network - revisions 2025/07/06 06:48 to 2025/07/18 22:04 (current), by jotasandoku
  * **Unit testing (atomic level)** - individual functions/methods. (Module testing often gets lumped into either "unit testing" (for simple modules) or "component testing" (for complex modules))
  * Component testing - groups of related units working together
  * **Integration testing** - multiple components interacting
  * System testing - entire application
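To make these levels concrete, here is a small pytest-style sketch (the two functions are invented purely for illustration): the first test exercises a single unit in isolation, while the second exercises two units interacting.

<code python>
# Two tiny "components", invented purely for illustration
def normalize(hostname: str) -> str:
    """Unit: canonicalize a hostname."""
    return hostname.strip().lower()

def build_fqdn(hostname: str, domain: str) -> str:
    """Component built on top of normalize()."""
    return f"{normalize(hostname)}.{domain}"

def test_normalize():
    # Unit test: one function, in isolation
    assert normalize("  Router1 ") == "router1"

def test_build_fqdn():
    # Integration test: two units interacting
    assert build_fqdn("  Router1 ", "lab.local") == "router1.lab.local"

test_normalize()
test_build_fqdn()
</code>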
  
----

==== pyATS ====
  * {{ :automation:pyats_and_network_validation.pdf |}}
  * {{ :automation:pyats_and_network_validation_examples.pdf |}}
  * {{ :automation:pyats_oreilly-danw.pdf | OREILLY-PRES}} / [[https://github.com/dannywade/oreilly-live-training-pyats]]

Topics:
  * **Testbed** - describes the physical devices and connections in a YAML file. It is an inventory file, but augmented: it can also hold interfaces, topology with links, and credentials. Unicon is the library underneath, used to control device connectivity (similar to Paramiko).
  * **TestScripts** - contain the logic of all the tests.
    * Jobs: execute TestScripts as tasks, and allow running multiple TestScripts in parallel.

  * **pyATS library (Genie)** - data parsing; the tooling used to extract network data.
    * Genie Harness: used to build **test cases using YAML datafiles**.

  * **Easypy** - runtime environment.
  * Others: Blitz, Clean, 'Health Check', Robot.
  * **AEtest (Easy Testing)** - scaffolding/framework for the testing itself.
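For illustration, a minimal testbed file might look like the sketch below (device name, OS, address, and credentials are all made up; check the exact schema against the pyATS documentation):

<code yaml>
# Minimal pyATS testbed sketch - hostname, IP and credentials are hypothetical
devices:
  router1:
    os: iosxe
    type: router
    credentials:
      default:
        username: admin
        password: cisco123
    connections:
      cli:
        protocol: ssh
        ip: 192.0.2.1
</code>

Real testbed files can additionally carry ''topology'' and interface/link definitions, which is what makes them more than a plain inventory.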

=== TESTSCRIPT ===
The Python file that holds all your tests.

  * Common setup: initial configuration and device initialization, including loops - decorator: ''@aetest.setup''
  * Testcases: self-contained 'containers' for our tests (Setup/Test/Cleanup). ''testcase.uid'' must be unique (it is a Python 'property') - decorator: ''@aetest.test''
  * Cleanup: resets the testing environment after the test - decorator: ''@aetest.cleanup''

**See slides 46-47 for examples (classes).**
\\
Test parameters cascade through the hierarchy TestScript > Testcase > TestSection (each level is a class, e.g. Testcase; see the code).

  * Callable
  * Test parameters - decorator: ''@aetest.parameter''
  * Test execution:
    * Standalone: all logging is sent to stdout (see slides 64-65).
    * Easypy: standardized runtime environment within pyATS; helpful for regression testing.
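The pieces above fit together in an AEtest TestScript roughly like the following sketch (class names and the version check are illustrative only, and pyATS must be installed for it to run):

<code python>
from pyats import aetest

class CommonSetup(aetest.CommonSetup):
    @aetest.subsection
    def prepare(self):
        # connect to testbed devices, push base config, etc.
        pass

class VersionCheck(aetest.Testcase):
    # testcase.uid defaults to the class name and must be unique

    @aetest.setup
    def setup(self):
        self.expected = "17.3"   # hypothetical expected version

    @aetest.test
    def check_version(self):
        # in a real script you would parse 'show version' here
        assert self.expected == "17.3"

    @aetest.cleanup
    def cleanup(self):
        pass

class CommonCleanup(aetest.CommonCleanup):
    @aetest.subsection
    def disconnect(self):
        pass

if __name__ == "__main__":
    aetest.main()
</code>

Run it standalone (''python testscript.py'') for stdout logging, or reference it from a job file to execute it under Easypy.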
  
----
==== OPTIONS ====
=== Classic1: Unittest / **pytest** libraries ===
== Assertions ==
  * **Definition:** Statements used in tests to verify that a specific condition holds true.
  
----
(''@'' denotes a decorator: a special function that modifies the behavior of another function or class without changing its actual code.)
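As a minimal, pure-Python illustration of the decorator idea (the function names are invented for this example):

<code python>
import functools

def count_calls(func):
    """Decorator: adds call counting without touching func's code."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        wrapper.calls += 1          # bookkeeping added by the decorator
        return func(*args, **kwargs)
    wrapper.calls = 0
    return wrapper

@count_calls                        # equivalent to: add = count_calls(add)
def add(a, b):
    return a + b

assert add(2, 3) == 5               # original behavior is unchanged
assert add.calls == 1               # new behavior was layered on top
</code>

pytest's ''@pytest.fixture'' and ''@pytest.mark.*'' below work on the same principle: they wrap or annotate your functions without changing their bodies.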
== Markers ==
  * **Definition:** Annotations to add metadata to test functions or classes.
  * **Usage:** Used to categorize, skip, or parametrize tests.
  * **Examples:**
    * <code>@pytest.mark.sanity</code> (categorization)
== Fixtures ==
  * **Definition:** Reusable setup/teardown functions for tests.
  * **Usage:** Fixtures are defined with the ''@pytest.fixture'' decorator and are **injected automatically** into test functions based on parameter names.
  * Fixtures provide setup logic for tests; can include teardown logic after the test (using ''yield''); can be reused across multiple tests; can depend on other fixtures; can be parametrised.
  
**Example 1:** The ''greeting'' fixture is provided automatically to both test functions. In this basic example, the fixture returns a simple string. Because both ''test_uppercase'' and ''test_length'' declare a parameter named ''greeting'', pytest automatically calls the fixture and passes its return value to each test function. This means you don't have to repeat "hello world" in multiple tests: the fixture provides a single source of truth **for that test data**, ensuring consistency and reducing duplication.
  
<code python>
import pytest

@pytest.fixture
def greeting():
    return "hello world"

def test_uppercase(greeting):
    assert greeting.upper() == "HELLO WORLD"

def test_length(greeting):
    assert len(greeting) == 11
</code>
  
**Fixture with Setup and Teardown**: The ''yield'' keyword pauses the fixture until the test finishes, then resumes for any cleanup. This fixture uses **yield** to provide a temporary **file path to the test**, allowing the test to use the file as needed. Once the test finishes, the code after the yield runs as the teardown phase, which is ideal for cleanup or logging. The ''tmp_path'' argument is itself a built-in pytest fixture that provides a temporary directory, ensuring isolation and avoiding side effects between tests.

<code python>
import pytest

@pytest.fixture
def temp_file(tmp_path):
    file = tmp_path / "sample.txt"
    file.write_text("Hello")
    yield file
    # Cleanup code here (e.g. delete or log)

def test_temp_file(temp_file):
    # consumes the fixture; teardown runs after this test finishes
    assert temp_file.read_text() == "Hello"
</code>
  
**Dependent Fixtures**: Fixtures can call other fixtures automatically. Here, the **user_token** fixture depends on the ''user_data'' fixture, which pytest resolves automatically. This shows how fixtures can be layered: one fixture can receive another as a parameter, enabling you to build more complex test setups from smaller, testable pieces. The test then uses ''user_token'' without needing to worry about where or how the token was created; the fixtures manage it all behind the scenes.

<code python>
import pytest

@pytest.fixture
def user_data():
    return {"name": "Alice", "id": 1}

@pytest.fixture
def user_token(user_data):
    return f"TOKEN-{user_data['id']}"

def test_token(user_token):
    assert user_token.startswith("TOKEN-")
</code>

**Parametrised Fixtures**: The test runs once for each value of the ''user'' parameter. This example shows how to create a single fixture that can supply multiple different values across different test runs. Here, the ''user'' fixture makes the test run twice: once with "alice" and once with "bob". The ''request.param'' object is provided by pytest for each parameter. This approach is ideal when you want to test the same logic against a range of inputs without writing separate test functions for each.
  
<code python>
import pytest

@pytest.fixture(params=["alice", "bob"])
def user(request):
    return request.param

def test_starts_lowercase(user):
    assert user[0].islower()
</code>
**REST API test**
In this example, we test the REST API of a network device. The ''api_base_url'' fixture provides the root URL of the API, which could represent a router or switch's management interface. The ''auth_headers'' fixture supplies the HTTP headers needed to authenticate the request, such as a bearer token and the content type. These two fixtures are injected into the test function, which uses them to send a GET request to the ''/interfaces'' endpoint. The test asserts that the response status is 200 (OK) and that the response JSON contains an "interfaces" key. This setup is clean, extensible, and easily scalable to test other endpoints, methods, or even multiple devices with different credentials or configurations.
<code python>
import pytest
import requests

@pytest.fixture
def api_base_url():
    return "http://192.168.1.1/api/v1"

@pytest.fixture
def auth_headers():
    return {
        "Authorization": "Bearer test-token",
        "Content-Type": "application/json"
    }

def test_get_interfaces(api_base_url, auth_headers):
    url = f"{api_base_url}/interfaces"
    response = requests.get(url, headers=auth_headers)
    assert response.status_code == 200
    assert "interfaces" in response.json()
</code>
  
===== Further Reading =====

  * [[https://docs.pytest.org/en/stable/how-to/fixtures.html|Pytest Fixture Documentation]]
  * [[https://realpython.com/pytest-python-testing/#using-pytest-fixtures|Real Python: Pytest Fixtures]]
  * [[https://docs.pytest.org/en/stable/how-to/fixtures.html#parametrizing-fixtures|Parametrising Fixtures]]
  * [[https://gist.github.com/kwaldrip/0ed22c6e3c8b476b8a84cf3c137b3e15|Pytest Fixture Cheat Sheet (Gist)]]
  * [[https://packetpushers.net/blog/open-source-networking-projects/#automation]] - more examples (e.g. NUTS, network unit tests) on this Packet Pushers page