automation:test-the-network (last modified 2025/07/18 22:04 by jotasandoku)

  * **Unit testing (atomic level)** - individual functions/methods. (Module testing often gets lumped into either "unit testing" (for simple modules) or "component testing" (for complex modules))
  * Component testing - groups of related units working together
  * **Integration testing** - multiple components interacting
  * System testing - entire application
  
----

==== PYATS ====
  * {{ :automation:pyats_and_network_validation.pdf |}}
  * {{ :automation:pyats_and_network_validation_examples.pdf |}}
  * {{ :automation:pyats_oreilly-danw.pdf | OREILLY-PRES}} / [[https://github.com/dannywade/oreilly-live-training-pyats]]

Topics
  * **Testbed** - describes the physical devices and connections in a YAML file. It is an inventory file, but augmented: besides credentials it can describe interfaces and a topology with links. Unicon is the library underneath, used to control device connectivity (comparable to paramiko).
  * **TestScripts** - contain the logic of all the tests.
    * Jobs: execute TestScripts as tasks, and also allow running multiple TestScripts in parallel.
  * **pyATS library (Genie)** - data parsing; the tooling used to extract network data.
    * Genie Harness: used to build **test cases using YAML datafiles**.
  * **Easypy** - the runtime.
  * **AEtest (Easy Testing)** - scaffolding/framework for the testing.
  * Others: Blitz, Clean, 'Health Check', Robot.
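As a concrete illustration of the testbed format described above, here is a minimal sketch. All hostnames, IPs, and credentials are invented placeholders (not from the slides); the structure follows the standard pyATS testbed schema.

<code yaml>
# Hypothetical pyATS testbed sketch - device name, IP and credentials are placeholders
devices:
  router1:
    os: iosxe
    type: router
    credentials:
      default:
        username: admin
        password: "%ASK{}"      # prompt for the password at runtime instead of storing it
    connections:
      cli:
        protocol: ssh
        ip: 192.0.2.10
topology:
  router1:
    interfaces:
      GigabitEthernet1:
        type: ethernet
        link: link-r1-r2
        ipv4: 10.0.0.1/30
</code>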


=== TESTSCRIPT ===
The Python file that holds all your tests.

  * Common setup: initial configuration and device initialization, including loops - decorator: ''@aetest.setup''
  * Testcases: self-contained 'containers' for our tests, each with Setup/Test/Cleanup sections. ''testcase.uid'' must be unique and is a Python 'property' - decorator: ''@aetest.test''
  * Cleanup: resets the testing environment after the test - decorator: ''@aetest.cleanup''

**See slides 46-47 for examples (classes).**
\\
Test parameters propagate TestScript > Testcase > TestSection (these are classes, e.g. a testcase; see the code in the slides).

  * Callable
  * Test parameters - decorator: ''@aetest.parameter''
  * Test execution:
    * Standalone: all logging is sent to stdout (see slides 64-65)
    * Easypy: standardized runtime environment within pyATS; helpful for regression testing
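The section structure above can be sketched as a minimal standalone testscript. This is a hedged illustration, not code from the slides: the class names, the ''interface_state'' uid, and the ''expected_state'' parameter are all invented, and it assumes the ''pyats'' package is installed.

<code python>
# Hypothetical minimal AEtest testscript - names and values are invented for illustration
from pyats import aetest

class CommonSetup(aetest.CommonSetup):
    @aetest.subsection
    def prepare(self):
        # initial configuration / device initialization would go here;
        # values placed in parent.parameters are visible to every testcase
        self.parent.parameters['expected_state'] = 'up'

class InterfaceState(aetest.Testcase):
    uid = 'interface_state'  # testcase.uid must be unique within the script

    @aetest.setup
    def setup(self):
        self.observed = 'up'  # placeholder for data gathered from a real device

    @aetest.test
    def check_state(self, expected_state):
        # parameters propagate into sections via matching argument names
        assert self.observed == expected_state

    @aetest.cleanup
    def cleanup(self):
        self.observed = None

if __name__ == '__main__':
    aetest.main()  # standalone execution: all logging goes to stdout
</code>

Running the file directly gives the standalone mode described above; referencing it from a job file hands execution to Easypy instead.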
  
----
==== OPTIONS ====
=== Classic1: Unittest / **pytest** libraries ===
== Assertions ==
  * **Definition:** Statements used in tests to verify that a specific condition holds true.
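A bare ''assert'' is all pytest needs; on failure it introspects the expression and reports the values involved. A generic sketch (the MTU value is a made-up stand-in for real device data):

<code python>
# Generic assertion sketch - the MTU value is a placeholder, not real device output
def test_mtu_in_range():
    mtu = 1500  # in practice this would be parsed from the device
    assert 1280 <= mtu <= 9216, f"unexpected MTU: {mtu}"

# pytest would collect this automatically; calling it directly also works
test_mtu_in_range()
</code>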
  * Provide setup logic for tests; can include teardown logic after the test (using `yield`); can be reused across multiple tests; can depend on other fixtures; can be parameterised
  
**Example1:** The ''greeting'' fixture is provided automatically to both test functions. In this basic example, the ''greeting'' fixture returns a simple string. Because both ''test_uppercase'' and ''test_length'' declare a parameter named ''greeting'', pytest automatically calls the fixture and passes its return value to each test function. This means you don't have to repeat "hello world" in multiple tests: the fixture provides a single source of truth **for that test data**, ensuring consistency and reducing duplication.
  
  import pytest
  @pytest.fixture
  def greeting():
      return "hello world"
  def test_uppercase(greeting):
      assert greeting.upper() == "HELLO WORLD"
  def test_length(greeting):
      assert len(greeting) == 11
  
**Fixture with Setup and Teardown**: The ''yield'' keyword pauses the fixture until the test finishes, then resumes for any cleanup. This fixture uses **yield** to provide a temporary **file path to the test**, allowing the test to use the file as needed. Once the test finishes, the code after the yield runs as the teardown phase, which is ideal for cleanup or logging. The ''tmp_path'' argument itself is a built-in pytest fixture that provides a temporary directory, which ensures isolation and avoids side effects between tests.
  import pytest
  @pytest.fixture
  def temp_file(tmp_path):
      file_path = tmp_path / "example.txt"
      file_path.write_text("sample data")
      yield file_path
      # Cleanup code here (e.g. delete or log)
  def test_file_contents(temp_file):
      assert temp_file.read_text() == "sample data"
  
** Dependent Fixtures **: Fixtures can call other fixtures automatically. Here, the **user_token** fixture depends on the ''user_data'' fixture, which pytest resolves automatically. This shows how fixtures can be layered: one fixture can receive another as a parameter, enabling you to build more complex test setups from smaller, testable pieces. The test then uses ''user_token'' without needing to worry about where or how the token was created; the fixtures manage it all behind the scenes.
  @pytest.fixture
  def user_data():
      return {"name": "Alice", "id": 1}
  @pytest.fixture
  def user_token(user_data):
      return f"TOKEN-{user_data['id']}"
  def test_token(user_token):
      assert user_token.startswith("TOKEN-")
              
** Parametrised Fixtures **: The test runs once for each value of the ''user'' parameter. This example shows how to create a single fixture that can supply multiple different values across different test runs. Here, the ''user'' fixture will run the test twice: once with "alice" and once with "bob". The ''request.param'' object is provided by pytest for each parameter. This approach is ideal when you want to test the same logic against a range of inputs without writing separate test functions for each.
  @pytest.fixture(params=["alice", "bob"])
  def user(request):
      return request.param
  def test_starts_lowercase(user):
      assert user[0].islower()
              
** REST API test **
In this example, we are testing the REST API of a network device. The ''api_base_url'' fixture provides the root URL of the API, which could represent a router or switch's management interface. The ''auth_headers'' fixture supplies the necessary HTTP headers for authenticating the request, such as a bearer token and the content type. These two fixtures are injected into the test function, which uses them to send a GET request to the ''/interfaces'' endpoint. The test asserts that the response status is 200 (OK) and that the response JSON contains an "interfaces" key. This setup is clean, extensible, and easily scalable to test other endpoints, methods, or even multiple devices with different credentials or configurations.

  import pytest
  import requests
  @pytest.fixture
  def api_base_url():
      return "http://192.168.1.1/api/v1"
  @pytest.fixture
  def auth_headers():
      return {
          "Authorization": "Bearer test-token",
          "Content-Type": "application/json"
      }
  def test_get_interfaces(api_base_url, auth_headers):
      url = f"{api_base_url}/interfaces"
      response = requests.get(url, headers=auth_headers)
      assert response.status_code == 200
      assert "interfaces" in response.json()
  
===== Further Reading =====
  
  * [[https://docs.pytest.org/en/stable/how-to/fixtures.html|Pytest Fixture Documentation]]
  * [[https://realpython.com/pytest-python-testing/#using-pytest-fixtures|Real Python: Pytest Fixtures]]
  * [[https://docs.pytest.org/en/stable/how-to/fixtures.html#parametrizing-fixtures|Parametrising Fixtures]]
  * [[https://gist.github.com/kwaldrip/0ed22c6e3c8b476b8a84cf3c137b3e15|Pytest Fixture Cheat Sheet (Gist)]]
  * [[https://packetpushers.net/blog/open-source-networking-projects/#automation|Packet Pushers: open-source networking projects]] (more examples, including NUTS - network unit tests)