Agenda
1. Welcome / Intro - Lincoln / Rabi / Trevor (5 minutes) - set expected outcomes
2. Update on where CNTT is to date - Rabi (10 minutes)
3. Definition of performance - working session to document performance terms that will be used in RC outputs and OVP - Group (15 minutes)
4. Performance Testing: CNTT RC and OVP2.0 Performance Testing Requirements - Trevor (30 minutes)
Attendees
- Lincoln Lavoie
- Rabi Abdel
- Trevor Cooper (Intel)
- Al Morton
- Beth Cohen
- Walter Kozlowski
- Petar Torre (Intel)
- Gergely Csatari (Nokia)
- Karine Sevilla (Orange)
- Toshi Wakayama (KDDI)
- Bob Monkman (Intel)
- Padmakumar Subramani (Nokia)
- Carlo Cavazzoni (TIM / Telecom Italia)
- Georg Kunz (Ericsson Software Technology)
- Ahmed El Sawaf (STC)
- Sridhar Rao
- Parth Yadav
- Mark Beierl
- Michael Pedersen
- Heather Kirksey
- Pankaj Goyal (AT&T)
Minutes
Minutes of June 23 7am EDT session "Joint Topic: Performance"
Definitions of Performance
- CNTT_vF2F_June_Performance-edits TCAMv02.pptx
- Work plan - use the next CVC calls (July 6) to begin documenting the agreed terms for performance
- Then map how these terms apply within the context of the CNTT RC definitions.
Performance Measurement:
The procedure or set of operations having the objective of determining a Measured Value or Measurement Result of an infrastructure in operation.
In the context of telemetry, Performance Measurements are data generated and collected within the cloud infrastructure that reflect a performance aspect of that infrastructure. For example: a count of frames or packets traversing an interface per unit of time, memory usage information, or other resource usage and availability. This data may be instantaneous or accumulated, and is made available (i.e. exposed) based on permissions and contexts (e.g., workload vs. infrastructure).
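As an illustration only (not part of the agreed wording), the sketch below turns an accumulated interface counter into a rate, assuming Linux interface statistics under /sys/class/net as the telemetry source; the interface name is a placeholder.

```python
import time

def read_rx_packets(iface: str) -> int:
    """Read the accumulated RX packet counter for an interface (Linux sysfs)."""
    with open(f"/sys/class/net/{iface}/statistics/rx_packets") as f:
        return int(f.read().strip())

def packet_rate(iface: str, interval: float = 1.0) -> float:
    """Convert two samples of an accumulated counter into a packets-per-second rate."""
    first = read_rx_packets(iface)
    time.sleep(interval)
    second = read_rx_packets(iface)
    return (second - first) / interval

if __name__ == "__main__":
    # "eth0" is a placeholder interface name.
    print(f"rx rate: {packet_rate('eth0'):.1f} packets/s")
```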
Performance Testing:
The main objective of performance testing is to determine whether the System Under Test is able to achieve the expected performance, by conducting a series of Performance Measurements and comparing the results against a specified Benchmark or Threshold.
It requires a set of (open source) performance testing tools that help an operator with the dimensioning of a solution.
Testing results may be useful for comparing infrastructure capabilities between the System Under Test (SUT) and a CNTT reference implementation of RA-2. Performance testing for the purpose of comparing different commercial implementations is not a goal of CNTT. Performance testing relies on well-established benchmark specifications.
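As a sketch only, the comparison step of performance testing might look like the following; the metric names and threshold values are illustrative placeholders, not CNTT requirements.

```python
from dataclasses import dataclass

@dataclass
class Measurement:
    name: str         # e.g. "throughput_mpps"
    value: float      # Measured Value obtained from the SUT
    threshold: float  # agreed benchmark / threshold to compare against
    higher_is_better: bool = True

def evaluate(measurements: list[Measurement]) -> dict[str, bool]:
    """Compare each Measured Value against its threshold and report pass/fail."""
    results = {}
    for m in measurements:
        ok = m.value >= m.threshold if m.higher_is_better else m.value <= m.threshold
        results[m.name] = ok
    return results

# Illustrative values only; real thresholds would come from the RC / benchmark specs.
report = evaluate([
    Measurement("throughput_mpps", 11.8, 10.0),
    Measurement("latency_us", 95.0, 100.0, higher_is_better=False),
])
print(report)  # {'throughput_mpps': True, 'latency_us': True}
```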
Benchmarking:
Benchmark testing and Conformance testing intersect when a specific requirement in the software specification is very important, such as a frequently executed function. Correct execution with the expected result constitutes conformance. The completion time for a single conforming execution, or the number of conforming executions per second, are potential Benchmarks. Benchmarks assess a key aspect of the computing environment in its role as the infrastructure for cloud-native network functions. The benchmarks (and related metrics) have been agreed by the industry and documented in publications of an accredited standards body; as a result, benchmarks are a subset of all possible performance metrics. Example benchmarks include data rate, latency, and loss ratio of various components of the environment, expressed in quantitative units to allow direct comparison between different systems treated as a black box (vendor independence). Because the demands on a particular system may vary from deployment to deployment, Benchmarking assessments do not define acceptance criteria or numerical performance requirements.
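For illustration only, the sketch below times a single conforming execution and derives executions per second; the function being exercised is a placeholder, not a CNTT-defined benchmark.

```python
import time

def exercise_function() -> None:
    """Placeholder for the frequently executed function under test."""
    sum(range(10_000))

def benchmark(iterations: int = 1_000) -> tuple[float, float]:
    """Return (seconds per single execution, conforming executions per second)."""
    start = time.perf_counter()
    for _ in range(iterations):
        exercise_function()
    elapsed = time.perf_counter() - start
    per_execution = elapsed / iterations
    return per_execution, 1.0 / per_execution

single, per_second = benchmark()
print(f"{single * 1e6:.1f} us per execution, {per_second:.0f} executions/s")
```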
Calibration:
The process of adjusting a measurement device, or its outputs, to improve the overall quality of the data.
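As a minimal sketch under assumed values, a calibration could be applied as an offset/scale correction to raw outputs; the correction factors below are placeholders that would in practice be derived from a reference measurement.

```python
def calibrate(raw_values, offset=0.0, scale=1.0):
    """Adjust raw measurement outputs with an offset/scale correction
    derived from comparison against a known reference."""
    return [(v - offset) * scale for v in raw_values]

# Placeholder corrections: e.g. the device reads 0.5 units high and 2% low.
print(calibrate([10.5, 20.5, 30.5], offset=0.5, scale=1.02))
```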
Functional Testing:
The main objective of functional testing is the verification of a specific stimulus / response behavior within the SUT, including causation. These tests generally result in a binary outcome, i.e. pass / fail. For example: verification of an API call and its associated response, the instantiation of a VM and verification of its existence, or the availability of a specific feature of the SUT (e.g., SR-IOV).
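As a sketch only, a stimulus/response check against a hypothetical endpoint could look like the following; the URL and expected status code are placeholders, not part of the CNTT definition.

```python
import urllib.error
import urllib.request

def functional_test(url: str, expected_status: int = 200) -> bool:
    """Send a stimulus (HTTP GET) and verify the response: binary pass/fail."""
    try:
        with urllib.request.urlopen(url, timeout=5) as response:
            return response.status == expected_status
    except (urllib.error.URLError, OSError):
        return False  # no response or connection error counts as a fail

# Placeholder endpoint; a real test would target a SUT API endpoint.
print("PASS" if functional_test("http://192.0.2.10:5000/v3") else "FAIL")
```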
- Threshold
- Functional Tests
- Measurement Performance of functions
- Exception Threshold
- Network
- Computation
- CPU
- GPU
- Life cycle (see the timing sketch after this list)
- Time to create VM / container
- Time to delete VM / container
- Maximum number of VMs / containers
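For the lifecycle items above, a minimal timing sketch is shown below; it assumes a local docker CLI purely as a stand-in workload, whereas a real RC test would drive the cloud or container orchestration API.

```python
import subprocess
import time

def timed(cmd: list[str]) -> float:
    """Run a command and return its wall-clock duration in seconds."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True, capture_output=True)
    return time.perf_counter() - start

# Placeholder workload: a minimal container created and then deleted.
create_s = timed(["docker", "run", "-d", "--name", "lifecycle-test", "alpine", "sleep", "60"])
delete_s = timed(["docker", "rm", "-f", "lifecycle-test"])
print(f"time to create container: {create_s:.2f}s, time to delete: {delete_s:.2f}s")
```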