Tuesday, 28 March 2017

Cell Edge Specification Notation (CESN)


When it comes to designing WLAN RF environments, everyone seems to have their favourite cell edge signal level that they like to shoot for. Common figures include -67dBm for voice-grade WLANs, perhaps -60dBm for higher 802.11ac speeds, and -72dBm for general data traffic coverage. Each vendor and wireless consultant seems to have their own preferred cell edge design target, which will vary with WLAN requirements. However, these figures are meaningless without some type of explanation or context. If you rely solely on these figures, you are very likely designing incorrectly.

Background
Around 18 months ago, I was involved in a project that required the deployment of a new wireless LAN at many sites around the globe. The project required that every site be designed to the same standard of RF parameters, to provide a consistent design approach across all sites.

A team was sent to the first site to perform an “AP on a stick” survey using an AP of the same model that would be deployed for the final solution. The results from the survey report looked very good - we were hitting the coverage and capacity levels we were hoping for. We had a well-defined threshold for (amongst other parameters) the cell edge signal level we wanted to achieve (let’s call it -62dBm).

Another team was sent to a second site in another part of the globe to perform another survey, using the same survey thresholds. They were using the same survey software, the same type of survey AP (with the same settings) and were working to the same cell edge figure of -62dBm.

Both buildings were primarily large, modern, open plan offices.

However, the results in the two survey reports were hugely different in terms of the cell coverage achieved for the same AP transmit power. The physical cell size for each AP (at the same cell edge threshold) in the second survey was 50% or less of that observed in the first survey. We would need twice as many APs for the second site, even though both were being designed to the same RF standard.

This couldn’t be right… could it? Two surveys performed with the same software, in similar environments, using the same survey thresholds - yet one required twice as many APs as the other? Something just didn't make sense. We would expect similar AP cell sizes and similar AP densities at both sites.

After lots of head-scratching, configuration checking and re-surveying, the culprit behind the huge difference in survey results came to light: the two teams had surveyed with different wireless NICs in their survey rigs. One had used Proxim adapters, while the second team had used Netgear adapters. Both were valid adapters supported by the survey software, but they gave wildly different survey results due to their different RF characteristics.

Which adapters were giving the correct results? In the absence of any other data, both were, arguably, correct.

Which Clients Are You Designing For?
The fundamental flaw in this survey approach was the lack of specification of the wireless NIC to be used to perform the survey.

Each model of wireless NIC is likely to have very different RF characteristics in terms of antenna capabilities and receive sensitivity. As a minimum, the model of wireless NIC to be used for the surveys should have been specified. This would at least have ensured that the surveys achieved reasonably comparable results and a more consistent RF design.

(Note: even individual devices of the same wireless NIC model may vary due to manufacturing tolerances, so there will still be a variation of a few dB between devices.)

The solution to this issue was to ensure that all survey teams used the same type of Proxim adapter. The results from the different survey teams were far more comparable once this standard approach had been agreed.

However, even this approach had flaws. The survey cell-edge threshold was provided by a third party, based on their previous general design experience. A better approach would have been to design for the RF behaviour of a specific device (or devices), based on customer requirements.

The best-practice approach is to understand the behaviour of one or more of the clients that will use the wireless LAN and tailor the RF environment for those devices (as closely as possible). This is incredibly difficult to achieve in the real world, as there are likely to be many different types of client with a variety of RF capabilities and behaviours. The choice of which RF characteristics to design for will have to be a judgement call based on business requirements and priorities.

Compensation
Once a decision has been made about which device the design will be based on, the RF survey criteria need to be decided.

As an example, let’s assume that the primary device on our WLAN is an Apple iPad Mini 4, as we have many deployed across our network for a mission-critical service. It also happens (in this theoretical scenario) that a few other devices on our proposed WLAN will have quite similar characteristics, so this RF design will suit quite a few other devices too.

We have also seen from a vendor wireless design guide that a useful cell edge design threshold is -70dBm for Apple devices. Therefore, we will use this as our cell edge threshold.

However, we will be performing our survey with a Proxim wireless NIC. The iPad Mini 4 is not able to run our wireless survey software, so we have to run it on our laptop with the Proxim adapter.

We suspect our Proxim adapter is going to have very different RF characteristics to our iPad Mini 4. But, we want to design our WLAN network with cell edges of -70dBm, as seen by an iPad Mini 4 (not a Proxim adapter).  Cell sizes at -70dBm for a Proxim adapter are likely to be very different to those of an iPad Mini 4.

Therefore, we need a way for the Proxim adapter to see the RF world in a similar way to the iPad Mini 4. This is generally done using a “compensation” technique.

In summary, when we compensate for our survey adapter in a survey report, we add (or subtract) a dB offset to/from our survey data to account for the difference between the survey NIC and the client that we are trying to emulate.

For instance, we might set up a test access point and measure the signal level we observe at a distance of 6 metres (20 feet) with the Proxim adapter. We would then measure the signal level with a wireless client (e.g. our iPad Mini 4) at the same distance. The difference between the two signals observed is the offset we need to apply to our survey data.  

As an example, if the survey NIC showed a signal level that was 5dB stronger than the client, we would have to apply an offset of 5dB to signal levels in the survey data. If we were aiming for the -70dBm cell edge we previously discussed, the cell edge shown in our report would be -65dBm, as we know that this would be observed as -70dBm by our wireless client.
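The offset arithmetic described above can be sketched as a small helper function. This is a minimal illustration only; the measured values and the function name are hypothetical examples, not part of any survey tool:

```python
def survey_threshold(client_target_dbm, survey_nic_dbm, client_dbm):
    """Return the cell-edge threshold to use in the survey report.

    client_target_dbm -- the cell edge we want the client to observe
    survey_nic_dbm    -- signal measured by the survey NIC at the test distance
    client_dbm        -- signal measured by the client at the same distance
    """
    offset = survey_nic_dbm - client_dbm   # how much "hotter" the NIC reads
    return client_target_dbm + offset      # threshold to look for with the NIC

# Example: the Proxim NIC reads -55 dBm at a spot where the iPad Mini 4
# reads -60 dBm (i.e. the NIC is 5 dB stronger), and we want the iPad
# to see -70 dBm at the cell edge:
print(survey_threshold(-70, -55, -60))  # -65
```

If the survey NIC instead read weaker than the client, the offset would be negative and the report threshold would move in the other direction.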

Signal Level Specification
Hopefully, the discussions above underline the need for some form of context or reference when specifying a signal level for an RF design cell edge. Simply stating that we need to “design for a voice cell edge of -67dBm” is meaningless without further information. The key piece of information missing is which device observes the cell edge.

There is a common (often unspoken) rule around cell-edge signal level specifications among wireless LAN pros. A signal level is generally accepted to be that observed by the client device that is in use. For instance, if a design guide specifies a cell edge of -67dBm for voice handsets, experienced WLAN pros will assume that this signal level is measured with the actual voice handset.

However, this convention is not widely understood, or is often simply unknown. Many people performing a wireless design will not even consider that there may be a difference between their survey rig and the clients that will use the final deployed WLAN.

I believe there needs to be a better, unambiguous way to specify signal levels, particularly for cell edge measurements in WLAN survey work.

Signal Level Notation
Signal levels need to be expressed in a less ambiguous format. The expectation of how the signal level is to be measured needs to be embedded in a standardized notation.

If we consider the case of “AP on a stick” surveys, we have two methods of specifying the cell edge signal we’d like to aim for:

  • A signal level as seen by a specific client type (e.g. -70dBm as seen by an Apple iPad Mini 4)
  • A raw signal level value that we’d like to use with our survey adapter (e.g. we know that our adapter works well for many Apple devices at -65dBm from previous work or compensation testing, so we’d like to use that specific value)

We may also have a third method of specifying signal levels if performing predictive modelling with a wireless survey tool. Predictive models will generally provide “raw” signal levels based on Free Space Loss, which will not account for an adapter or client type. With some of our own real-world vs predictive testing, we may be able to specify a suitable predictive signal level which is suitable for our purposes.
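For reference, the free space path loss that predictive models typically start from can be computed with the standard FSPL formula. The sketch below is an illustration of that formula only (function names and the example AP power are assumptions); real predictive tools add wall attenuation and other corrections on top:

```python
import math

def fspl_db(distance_m, freq_mhz):
    """Free space path loss in dB, for distance in metres and frequency in MHz."""
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_mhz) - 27.55

def predicted_rssi(eirp_dbm, distance_m, freq_mhz):
    """Raw predicted signal level - no adapter or client characteristics applied."""
    return eirp_dbm - fspl_db(distance_m, freq_mhz)

# e.g. an AP with 14 dBm EIRP on channel 36 (5180 MHz), 20 metres away:
print(round(predicted_rssi(14, 20, 5180), 1))  # roughly -58.8
```

Note that this "raw" figure says nothing about what any particular client would actually report, which is exactly why a predictive signal level needs to be labelled as such.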

To meet each of these three scenarios, I’d like to propose the following notation specification:

<signal level in dBm> (<measurement type>: “<device type>”)

The ‘measurement types’ would be specified as:

UC = Un-compensated
CF = Compensated For
PM = Predictive Model

Here are three signal levels, specified using the proposed notation:

  • -65dBm (UC: “Ekahau USB-300”)
  • -65dBm (CF: “Apple iPad Mini 4”)
  • -65dBm (PM: “FSPL”)

These translate to:

  • -65dBm (UC: “Ekahau USB-300”) : a signal level of -65dBm that is a raw (un-compensated) measurement made with an Ekahau USB-300 NIC
  • -65dBm (CF: “Apple iPad Mini 4”): a signal level that has been compensated for a value that would appear as -65dBm to an iPad Mini 4 device
  • -65dBm (PM: “FSPL”): a signal level of -65dBm predicted using a free space path loss calculation in a predictive model.
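To show that the proposed notation is unambiguous enough to be machine-readable, here is a minimal sketch of a parser for the three forms above (the notation itself is plain text; Python is used purely for illustration):

```python
import re

# Pattern for the proposed notation: <dBm> (<type>: "<device>")
# Accepts both straight and curly quotes around the device name.
NOTATION = re.compile(
    r'^(?P<level>-?\d+)dBm\s+\((?P<mtype>UC|CF|PM):\s*[“"](?P<device>[^”"]+)[”"]\)$'
)

def parse_cell_edge(spec):
    """Parse a cell-edge spec into (level_dbm, measurement_type, device)."""
    m = NOTATION.match(spec.strip())
    if m is None:
        raise ValueError(f"not a valid cell-edge spec: {spec!r}")
    return int(m.group("level")), m.group("mtype"), m.group("device")

print(parse_cell_edge('-65dBm (CF: "Apple iPad Mini 4")'))
# (-65, 'CF', 'Apple iPad Mini 4')
```

A survey tool or design document template could use a check like this to reject bare, context-free signal levels.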

Conclusion
RF design for wireless LANs is a complex topic, which is very difficult to “get right” in the real world.

However, it is very easy to “get it wrong” through simple misunderstandings and the lack of standardized reference points for RF measurements.

Although the proposed signal level notation adds a little more complexity to the mix when creating an RF design, it does remove a level of ambiguity from signal level specifications. This will hopefully provide a more consistent WLAN design methodology across the industry.

I’d be very interested to hear your thoughts on this.