Cleverscope Transfer Rate And Display Update Rate

A discussion from the Cleverscope Interesting Questions forum.


27 Sep 2006
Posts: 401

We have had many requests to explain the Cleverscope transfer rate, and questions about USB 2.0.
Cleverscope currently uses a USB 2.0 compatible system running at 12 Mbit/sec (full speed). We will introduce a system running at 480 Mbit/sec, but all in good time. However, the transfer rate is not the full story on how fast you can update the screen.

Cleverscope, while displaying the scope graph and information display, updates at 16 displays per second on a modern PC. We achieve this by transferring only the samples needed to render the pixels actually on the display. This process of choosing which samples to display is called decimation. Cleverscope offers two ways of decimating: sampled, or peak captured.

It works like this:
1. Whenever the Cleverscope time axis on any graph is changed, the Cleverscope application calculates the start and end times (referenced to the trigger), and the number of display samples required to make the display. This is always 8 times the pixel width. So for a 300 pixel wide display, we transfer 2400 samples per channel (2 analog + 8 digital + external trigger). As you can imagine, transferring 2400 samples is a lot faster than transferring 2M samples.

2. The Cleverscope acquisition unit receives the transfer request (which could have been a pan, or a zoom in or out), and clips the start and end times to what is actually in the buffer. Then it calculates the number of samples that represent each returned display sample.

As an example, say the request was from -2 to +18 ms, with a frame 20 ms long containing 2M samples. Say the display graph was 250 pixels wide, requiring 2000 display samples to be returned. The duration -2 to +18 ms is 20 ms which, with a sample period of 10 ns, is 2M samples. So we want 2M/2000 = 1000 buffer samples between each returned display sample. Here we have to decimate 1000 samples to return one display sample. The decimation types are:

a. Sampled decimation - we simply return every 1000th sample, starting at the time start point (that is, we return frame buffer samples n, n+1000, n+2000, n+3000, ... where n is the start point). None of the information in the intervening samples is displayed, and you may get aliasing.

b. Peak captured sampling - we scan all 1000 samples between each display sample, and return both the minimum and maximum values. Both are displayed on the graph, on the same pixel - making a vertical line equivalent to the max - min difference. This way of displaying eliminates aliasing, and shows the limits of all the samples in the buffer. The cost is that we must scan every sample, and we must do so each time a new display is made. The acquisition unit can scan for max/min peak values at 14 ns per sample, so it takes at most 28 msecs to scan 2M samples.

3. The acquisition unit returns the decimated display samples to the PC, via USB, and the PC displays them. The PC may do some post processing - for example calculate the spectrum to display the frequency graph, or run the Maths equations for the Maths graph. It also calculates the values shown in the information display.
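The two decimation modes described in step 2 can be sketched in a few lines. This is a hypothetical illustration in Python with NumPy, not Cleverscope's actual firmware code; the function name and buffer layout are assumptions for the sake of the example:

```python
import numpy as np

def decimate(frame, start, end, pixels, mode="peak"):
    """Return display samples for frame[start:end] at 8 samples per pixel."""
    n_display = 8 * pixels                 # display samples needed
    window = frame[start:end]
    ratio = len(window) // n_display       # buffer samples per display sample
    if mode == "sampled":
        # every ratio-th sample; intervening samples are discarded (may alias)
        return window[::ratio][:n_display]
    # peak capture: min and max of each group of `ratio` samples
    groups = window[:n_display * ratio].reshape(n_display, ratio)
    return np.stack([groups.min(axis=1), groups.max(axis=1)], axis=1)
```

For the worked example above (a 2M-sample window, 250 pixels), `n_display` is 2000 and `ratio` is 1000; sampled mode returns 2000 values, peak mode returns 2000 min/max pairs.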

In a standard PC, we can do this 16 times per second (if not limited by the capture duration).

The only time we get the full frame of samples is if the user clicks 'Get Frame'. You only need to do this if you wish to save all the samples to disk for later viewing or analysis.

To summarize, we have processing time costs in the acquisition unit (to do peak capture), on the USB (to transfer the small sample set required), and in the PC (to calculate the spectrum, process Maths values, calculate the information values, and to do the screen updates). All of these processes contribute to the update rate. Currently we do all these processes in less than 60msec. When we introduce USB 2.0 running at 480 Mbit/sec, we will reduce the transfer time, but will not reduce the other processing times. So we do not expect the update rate to change markedly. None of our users have complained of slow update times, and so it can only get better!
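Some back-of-envelope arithmetic shows why the small decimated set matters so much at 12 Mbit/sec. This assumes 16-bit samples and ignores USB framing and turnaround overhead, which in practice add considerably more time; it is an illustration, not a measured figure:

```python
def wire_time_s(n_samples, bits_per_sample=16, rate_bps=12e6):
    """Raw serial transfer time in seconds, ignoring USB protocol overhead."""
    return n_samples * bits_per_sample / rate_bps

print(wire_time_s(2400))       # 0.0032 s for a decimated display set
print(wire_time_s(2_000_000))  # ~2.7 s raw minimum for a full 2M frame
```

The raw wire time for a full frame is already nearly a thousand times longer than for the decimated set, and real transfers are slower still once protocol overhead is included.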

27 Sep 2006
Posts:

I understand that you decimate the data before transfer to the PC during real-time display. It would otherwise be impossible to get a decent frame rate (even with USB2.0).

But what if I want to save the data to disk? If I don't save the buffer to the PC, then the sampled data is lost forever when the unit is powered down, right?

In a previous post you mentioned that it takes about 20 seconds to transfer 2M of data. So if I'm able to use the full 4M of advertised storage, then it will take 40 seconds to save one frame of data. This seems very inconvenient to me, and is the reason I'm considering holding out for the USB2.0 version. I'm hoping it will be available this year.

If I do save the sampled data to disk, how long does it take the software to pan or zoom when using peak-detect (or do I always have to have the hardware connected for pan/zoom)?

A non-peak-detected display is impractical for me due to the aliasing issues you've cited. Without peak detection, the other 999 samples of data between "every 1000th sample" could contain entire waveform cycles which wouldn't be displayed at all. This seems particularly problematic for math functions.
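This aliasing concern is easy to demonstrate. In the assumed setup below, a short burst sits entirely between two retained samples: sampled decimation misses it completely, while peak capture still shows its full extent:

```python
import numpy as np

buffer = np.zeros(10_000)
buffer[1500:1800] = np.sin(np.linspace(0, 20 * np.pi, 300))  # a short burst

stride = 1000
sampled = buffer[::stride]                        # every 1000th sample
groups = buffer.reshape(-1, stride)
peaks = np.stack([groups.min(axis=1), groups.max(axis=1)], axis=1)

print(sampled.max())   # 0.0  - the burst falls between retained samples
print(peaks.max())     # ~1.0 - the burst's extent is preserved
```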

27 Sep 2006
Posts: 401

Hello JMB,
You are quite right - when you want to save a full frame to disk, you must use the 'Get Frame' button and transfer the full frame. If you power down the CS328, the data will be lost. With USB 2.0 the frame transfer will run a lot faster. The CS328A will be available this year (!) - I expect it to be available from the distributor in November. Unfortunately we allowed ourselves to be hijacked by a bit of feature creep, and this has extended the development time.

We do have 4M sample storage, but precisely because we decimate in the unit, we must have at least two frames. One is used as a circulating buffer waiting for the next trigger, while the other holds the last captured frame for display. You can continue to pan and zoom on the last captured frame, even while sampling for the next trigger. This is exactly how a Tek or Agilent scope works. We do not offer a one-frame capture because of initial complaints about the inability to continue viewing the data while waiting for the trigger. Maybe we will change this if enough people want it.

When you transfer a full frame (2M samples maximum) to disk (first by using 'Get Frame', and then doing a Save As), we do the decimation in the PC. This means you can continue to have a peak-captured display of the saved data. We have tried very hard to get the decimation time down, but at present we can manage only about 40 ns per sample on a standard PC. This is nearly three times slower than the acquisition unit, so the display seems somewhat more sluggish when displaying from the PC buffer - but hey, it works. (The 28 ms needed for a standard 2M frame in the acquisition unit becomes 80 ms from the PC buffer.)
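The timing figures above follow directly from the per-sample scan rates; a quick check, using only the numbers quoted in this thread:

```python
def scan_time_ms(n_samples, ns_per_sample):
    """Time to min/max-scan a buffer, in milliseconds."""
    return n_samples * ns_per_sample / 1e6

print(scan_time_ms(2_000_000, 14))  # acquisition unit: 28.0 ms per 2M frame
print(scan_time_ms(2_000_000, 40))  # standard PC:      80.0 ms per 2M frame
```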

I hope this helps.

2 Oct 2006
Posts:

Hi,

Thanks for the explanation of how the data is decimated. One thing I do not understand is how this affects the measurements made by the Cleverscope software.

If you have 2 frames of 2M samples, yet only transfer back, say, 2400 samples, are the measurements available in the software (pulse width, frequency etc.) performed on the 2M samples in the hardware, or on the 2400 samples transferred to the PC?

Obviously to avoid errors you want to perform measurements using as many sample points as possible.

Thanks
CW