Missing samples with RSP1a #70
@dickh768 - tonight I took the SoapySDR Python example from here, modified it a little to count the number of samples and the elapsed time (my version of the script is attached), and ran it with a center frequency of 125kHz and a sample rate of 250kHz using my RSPdx. This is the output:
which to me looks like what one would expect: 10,000,384 / 250,000 = 40.001536 seconds. Given that you are experiencing samples dropped between the blocks, I am wondering if your code is doing some processing in between blocks and is not calling readStream often enough.

Franco
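For reference, a minimal sketch of the kind of counting loop described above (the attached script is not reproduced here, so the buffer size and stopping threshold below are assumptions):

```python
import time
import numpy as np
import SoapySDR
from SoapySDR import SOAPY_SDR_RX, SOAPY_SDR_CF32

# settings from the discussion: 125 kHz centre frequency, 250 kHz sample rate
sdr = SoapySDR.Device(dict(driver="sdrplay"))
sdr.setSampleRate(SOAPY_SDR_RX, 0, 250e3)
sdr.setFrequency(SOAPY_SDR_RX, 0, 125e3)

rx = sdr.setupStream(SOAPY_SDR_RX, SOAPY_SDR_CF32)
sdr.activateStream(rx)

buff = np.zeros(1024, np.complex64)
total = 0
start = time.time()
while total < 10_000_000:              # stop once roughly 10M samples are counted
    sr = sdr.readStream(rx, [buff], len(buff))
    if sr.ret > 0:
        total += sr.ret                # sr.ret = number of samples actually returned
elapsed = time.time() - start

sdr.deactivateStream(rx)
sdr.closeStream(rx)
print(f"{total} samples in {elapsed:.6f} s -> {total/elapsed:.1f} sps")
```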
Hi Franco, thanks for the reply. I tried your code and got pretty much the same result, so as you say there does not seem to be an enormous issue. I also tried to repeat my own test and on this occasion there was no glitch (!). However, I then noticed that your code does not perform a defined number of readStream operations, but loops until the number of reported samples exceeds a threshold. So I modified the script to use a 'for' loop with 100,000 operations. Number of samples expected = 102,400,000.

So the rate of samples received was 249,956.6 sps, which is just a bit low. This could be simple measurement error, but in the elapsed time you would expect to receive 100,817,500 samples at 250,000 sps - so there is an apparent deficit of about 1 sample in every 6 read operations.

The other important take-away for me was the variability in the number of samples reportedly returned. I had been assuming that the buffer would always be returned full and/or use the number of samples in the third parameter of the call. But it seems to be more variable than that, and I need to take that into account when filling my larger working buffer.

I am currently using a separate thread to perform the readStream operations, so the scheduling of cpu cycles is determined primarily by the OS. The thread is also competing with fft processing in other threads for time on the same core. Additionally, at the moment I suspect that a large number of 'print' operations for debugging purposes is also stealing quite a lot of time.

I can probably live with losing 1 sample in 6000, but it would be nice to have a notification when something is dropped - then one could potentially insert an interpolated value.

DickH
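A sketch of the bookkeeping this implies when filling a larger working buffer from variable-length reads (buffer sizes and names here are illustrative, not taken from the original code):

```python
import numpy as np
import SoapySDR
from SoapySDR import SOAPY_SDR_RX, SOAPY_SDR_CF32

sdr = SoapySDR.Device(dict(driver="sdrplay"))
sdr.setSampleRate(SOAPY_SDR_RX, 0, 250e3)
sdr.setFrequency(SOAPY_SDR_RX, 0, 125e3)
rx = sdr.setupStream(SOAPY_SDR_RX, SOAPY_SDR_CF32)
sdr.activateStream(rx)

work = np.zeros(102_400, np.complex64)     # larger working buffer to be filled
chunk = np.zeros(8064, np.complex64)
filled = 0
while filled < len(work):
    sr = sdr.readStream(rx, [chunk], len(chunk))
    if sr.ret < 0:
        print("readStream error:", sr.ret)  # negative values are error codes (timeout, overflow, ...)
        continue
    n = min(sr.ret, len(work) - filled)
    work[filled:filled + n] = chunk[:n]     # copy only the samples actually returned
    filled += n

sdr.deactivateStream(rx)
sdr.closeStream(rx)
```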
@dickh768 - there's no guarantee that each readStream call will return a full buffer. This morning I changed that Python script to print out when the number of samples is not 1024, and I can see the blocks of 8064 samples you mentioned in your initial comment:
Since at a sample rate of 250kHz the receive callback is called with 252 samples every time, and 8064 = 252 * 32, I think what we are seeing is the buffers being rotated, but I am going to double-check tonight after work.

Regarding your other question about detecting dropped samples: since the SDRplay API passes an argument to the receive callback function with the sequence number of the first sample for that callback, there is a way to detect when there is a gap in that sequence. A while ago I created a simple program to just stream the samples using the SDRplay API directly (https://github.com/fventuri/single-tuner-experiments), and you can see here how I detect dropped samples: https://github.com/fventuri/single-tuner-experiments/blob/main/single_tuner_recorder.c#L673-L683

Franco
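The check in the linked C code boils down to comparing the sequence number reported for each callback with the previous one plus the previous sample count; a language-neutral sketch of that logic in Python (variable names and the 32-bit wrap are my assumptions, not the SDRplay API's):

```python
def check_gap(prev_first, prev_count, first, wrap=1 << 32):
    """Return the number of samples dropped between two callbacks (0 if none)."""
    if prev_first is None:          # first callback: nothing to compare against
        return 0
    expected = (prev_first + prev_count) % wrap
    return (first - expected) % wrap

# usage with a stream of (first_sample_num, num_samples) pairs, one per callback;
# the third callback below skips 252 samples on purpose
prev_first, prev_count = None, 0
for first, count in [(0, 252), (252, 252), (756, 252)]:
    dropped = check_gap(prev_first, prev_count, first)
    if dropped:
        print(f"gap detected: {dropped} samples dropped")
    prev_first, prev_count = first, count
```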
As an ex-hardware engineer I really hanker for the days when I could get into the nuts and bolts of a system with oscilloscopes, logic analysers, etc. With something like the RSP1 you only get to interact via several layers of abstraction - hardware drivers, prebuilt apps like SDRuno, generic tools like SoapySDR and the associated extra driver to map onto the hardware driver, and so on. So I usually end up with a lot of trial and error to find out what actually works.

Back to the original issue - I think that is/was largely an issue of excessive debugging overheads delaying some of my readStream calls, so I need to be more careful with my programming. I also was not explicitly handling errors returned from readStream, so I was probably occasionally failing to fill parts of my circular buffer and leaving old data to be re-used. (I did see a few errors when running the tests.)

Doing some more work with your original test script, I've changed the timer to use time.perf_counter as that is reputed to be the best available for Python on Windows. I've also added a dummy readStream statement before the timing loop, because the first call always seems to return a null buffer and I wanted to exclude that from the test loop. As for the size of the buffer, after a few experiments the RSP1a seems most happy when using values of 1008, 2016, 4032, 8064, etc. - and then delivers consistent buffer lengths of returned data - I assume this is related to some hardware timing 'feature' around the DSP chip.

Having done all that, I get the curious result that the reported timing is actually shorter than it should be. It gives the impression that the sampling rate is actually higher than the 250k selected. However, since the error gets smaller as test runs get longer, it must be a spurious artefact - I can only assume there is some 'optimisation' in Python which is pre-emptively running the end timer before all the samples are returned. In short, it now seems to be running fairly happily!

DickH
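A compressed sketch of that timing loop with the warm-up read and time.perf_counter (the 8064-sample buffer and the number of iterations are taken from the discussion; the rest of the setup is an assumption):

```python
import time
import numpy as np
import SoapySDR
from SoapySDR import SOAPY_SDR_RX, SOAPY_SDR_CF32

sdr = SoapySDR.Device(dict(driver="sdrplay"))
sdr.setSampleRate(SOAPY_SDR_RX, 0, 250e3)
sdr.setFrequency(SOAPY_SDR_RX, 0, 125e3)
rx = sdr.setupStream(SOAPY_SDR_RX, SOAPY_SDR_CF32)
sdr.activateStream(rx)

buff = np.zeros(8064, np.complex64)        # multiples of 1008 reported to behave best
sdr.readStream(rx, [buff], len(buff))      # dummy read: the first call tends to return nothing useful

total, n_reads = 0, 1000
t0 = time.perf_counter()                   # high-resolution timer, also on Windows
for _ in range(n_reads):
    sr = sdr.readStream(rx, [buff], len(buff))
    if sr.ret > 0:
        total += sr.ret
t1 = time.perf_counter()

sdr.deactivateStream(rx)
sdr.closeStream(rx)
print(f"{total} samples in {t1 - t0:.3f} s -> {total/(t1 - t0):.1f} sps")
```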
@dickh768 - since I am an electrical engineer too, I can relate to your experience with all the layers of abstraction. Going back to the specific issue of the sample rate and the timing of the samples coming in, there's actually more to the story. I haven't done more research to see if this also happens at a sample rate of 250kHz (which might explain some of the odd behavior you saw), but you can run the

Franco
Thanks for the additional thoughts. I had a quick look at your github and Linrad links and found them very interesting. I'm a long-lapsed radio amateur (G3UWB) who dropped out when all the RF stuff was still analogue. My SDR experience up till now has just been based on pre-packaged solutions with the RTL sticks for general listening and decoding aircraft data, wireless data links, etc. So this is my first attempt to directly drive an engineered SDR device.

I was quite surprised that the Soapy interface does not have callbacks, and maybe that explains some of the oddities - my other experience with video and audio streams has all used callbacks in the interface. Given that my recent programming experience is mostly with Python and my C is a bit rusty, I'll probably persevere with the Soapy approach for a while. If my frustration crosses the threshold perhaps I'll follow your lead and try the direct API approach.

Thanks for your help
DickH
I had assumed that the problem of corrupted reception would be caused by samples dropped between each frame of data received from the RSP1a - either due to a glitch in the hardware, the API, or maybe in the Soapy adaptation layer. What I am actually seeing, though, seems to be various faults within the frames being delivered from the RSP1a - or as they are processed by Soapy. Attached is an image of the recovered time-domain sequence from a 192ksps frame with a 4kHz test input signal. (The glitches are also visible on the I and Q signals, so they are not fft related.)
@dickh768 - to rule out (or not) the SoapySDR interface layer, would you mind running the same tests building and running the single_tuner_recorder utility from the single-tuner-experiments repository (https://github.com/fventuri/single-tuner-experiments)?

Also, I would avoid having a sample rate (192ksps) that is exactly twice the centre frequency (96kHz) because of the known issues with the DC (0Hz) component.

Finally, I am confused by your mention of the 250ksps sample rate - are you still running that with a centre frequency of 96kHz? If so you may have 'negative' frequencies, since your sample rate is more than twice your center frequency, and I am not really sure what to expect in that case.

Franco
Thanks for the reply Franco. Answering your last question first – when using 250ksps I also shifted the centre frequency to 125k to give myself the range 0 – 250 vs 0 – 192. My interest in 192 was partly because it is a multiple of common audio sample rates, but also because I naively assumed it might reduce cpu load and hence power – I now discover that you need to run the RSP with 3072ksps and decimation of 16 to get that rate, so the API actually uses more cpu than running at 2Msps and 8x decimation. I had also worried slightly about including 0Hz in the output, but it is no big deal if I move the centre frequency up by 1kHz to avoid it.

Anyway, I had given up on the Python approach a couple of days ago because of the odd results and have just started getting some useful results with C on a Raspberry Pi. I started with your single tuner code and also the example from the SDRplay website. I already had the API and Cubic working, so I was reasonably sure things should work. Compiling your code went fairly smoothly; however, I have been getting strange results with all 0's in the Q branch. For testing I just edited your command lines. The console outputs were:

./single_tuner_recorder -r 6000000 -i 1620 -b 1536 -l 3 -f 162550000 -o noaa-6M-SAMPLERATE.iq16
./single_tuner_recorder -r 2000000 -i 1620 -b 1536 -l 3 -f 162550000 -o noaa-6M-SAMPLERATE.iq16
./single_tuner_recorder -r 2000000 -i 1620 -b 1536 -l 3 -d 8 -f 162550000 -o noaa-6M-SAMPLERATE.iq16
./single_tuner_recorder -r 3072000 -i 1620 -b 1536 -l 3 -d 16 -f 162550000 -o noaa-6M-SAMPLERATE.iq16

As you can see, the Q-range is 0,0 on three of the runs. Rather than try and debug your code, I decided to try putting together some minimal code just for the RSP1a, stripping out a lot of stuff which I don't need. So far I have got it basically running, although I also have problems with the I/Q output, which doesn't seem to make sense. A bit of debugging is obviously required. If you have any suggestions as to why we are not getting valid figures in the Q branch on your code, I'm happy to test things out.

Best Regards
Dick H
@dickh768 you are using Low-IF mode (IF=1620), and in the API the only sample rate that delivers full I/Q samples in that mode is 6 MHz (the final sample rate will be 2 MHz after de-rotation and down-conversion). Any other sample rate is single-ended, hence why Q is 0.

You can use Zero-IF (IF=0) and then you can use any arbitrary sample rate between 2 and 10 MHz; this will also deliver full I/Q samples (but be wary of the DC spike in the center of the spectrum). You can also use decimation with either IF mode to reduce the final sample rate if you need lower than 2 MHz.

I'm not quite sure if you are using SoapySDR or your own code, but I'm pretty sure the SoapySDRPlay library just takes in a sample rate and sets all the other properties for you.

Andy
@dickh768 As Andy wrote, with the SDRplay API a non-zero IF (Low-IF or LIF) is only possible for a very specific set of combinations of (sample rate before decimation, IF, IF bandwidth) that are listed on page 26 of the SDRplay API Specification Guide (https://www.sdrplay.com/docs/SDRplay_API_Specification_v3.07.pdf):
For all other values of sample rate and IF bandwidth, you have to set IF=0 (Zero-IF).

Franco
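A small helper reflecting what this means in practice; it only encodes the single Low-IF combination confirmed in this thread (6 MHz sample rate, IF 1620 kHz, 1.536 MHz bandwidth) and falls back to Zero-IF for everything else, so it is a sketch rather than a transcription of the full table on page 26:

```python
# Decide IF mode for a requested configuration (sketch, not the full API table).
LOW_IF_COMBOS = {
    # (sample_rate_hz, if_khz, bandwidth_khz) - only the combination discussed above
    (6_000_000, 1620, 1536),
}

def choose_if_mode(sample_rate_hz, if_khz, bandwidth_khz):
    """Return the IF value to use: the Low-IF frequency if the combination is a
    documented one, otherwise 0 (Zero-IF)."""
    if (sample_rate_hz, if_khz, bandwidth_khz) in LOW_IF_COMBOS:
        return if_khz
    return 0   # Zero-IF: any rate between 2 and 10 MHz delivers full I/Q

print(choose_if_mode(6_000_000, 1620, 1536))   # -> 1620 (Low-IF)
print(choose_if_mode(2_000_000, 1620, 1536))   # -> 0 (fall back to Zero-IF)
```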
Thanks Andy, Franco for the pointers. The results now look very good with SR = 3072000 and Dec = 16, etc.:

./single_tuner_recorder -r 3072000 -i 0 -b 200 -l 3 -d 16 -f 97000 -o bw200-SAMPLERATE.iq16

The IQ file is at https://www.dropbox.com/scl/fi/grvcdm79e1gouoqtwkak0/bw200-192.iq16?rlkey=vt266axwblxd63fjixmalyfrz&dl=0

Converting to a .wav file and feeding it into HDSDR, I can then see a pretty clean spectrum with a spike 3kHz from the LH end (I've offset the centre to 97kHz). The results are pretty similar for 250kHz (2Msps & Dec = 8).
NB: I also need to do a bit more work to check for phase discontinuities in the 3kHz recovered signal - just need to get my head around fftw.
@dickh768 I'll close the ticket you have open on our system then?
Yes, my potential issue with the API seems to be resolved, thanks.
DH
@dickh768 - thanks for your analysis. How exactly did you compute the 'average received sample rate'? I just ran the
Since at this sample rate of 192kHz each rx_callback receives 63 samples, I would have expected a time difference of 63 / 192k, i.e. about 328us - however, the output from the command above shows that for every 96th callback the time difference is much greater (about 31ms; the last column is the time difference in ns):
I am not really sure about the reason for this behavior, where 95 out of 96 times there is virtually no time elapsed between two rx callbacks, while on the 96th time the time difference is very large.

Franco
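Whatever the underlying cause, the figures above are internally consistent: 96 callbacks' worth of samples accounts for the ~31 ms gap. A quick check of the arithmetic (all numbers taken from the text above):

```python
fs = 192_000          # sample rate in Hz
per_callback = 63     # samples delivered per rx_callback at this rate

dt_callback = per_callback / fs          # nominal time covered by one callback
dt_burst = 96 * per_callback / fs        # time covered by 96 callbacks

print(f"{dt_callback * 1e6:.1f} us per callback")    # ~328.1 us
print(f"{dt_burst * 1e3:.1f} ms per 96 callbacks")   # ~31.5 ms, matching the observed ~31 ms gaps
```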
Hi Franco, my application is currently configured so that the callback routine fills a series of buffers (4095 samples for the test). I then use a queue to tell the main thread when a new buffer has been filled. My timings were therefore taken from the point at which each buffer was full - at that point I wrote the time to a file (using gettimeofday()). I also wrote the start time to the file (from within the 'reset' block), which I used to calculate the long-term frequency.

The actual timing figure recorded for the start time is rather strange - and I have had this before - I assume it is because the gettimeofday statement is in a different block of code and is being executed out-of-order in some way, and hence does not give a true start time. I'm guessing that an incorrect start time is the most likely reason for the graph showing the frequency dropping asymptotically over time - as the error becomes less relevant - although I suppose it could be a genuine artefact when the API is first started.

I was prompted to collect this data because the buffer index numbers in my queue sometimes appeared to be out of sequence (with 4095-sample buffers). Clearly with timing jumps of the order of 6400 samples I will need to consider larger buffers. Also, seeing the magnitude of the timing steps in these direct API tests, I suspect that the choice of 4095 as the block size in SoapySDR for the 192kHz rate is too small to smooth out these glitches, and is probably why I had various issues with the Python approach.

DickH
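A Python sketch of the buffer-plus-queue pattern and the long-term rate calculation described above (the original application is in C against the SDRplay API; the use of SoapySDR, the centre frequency and the buffer count here are illustrative assumptions):

```python
import queue
import threading
import time
import numpy as np
import SoapySDR
from SoapySDR import SOAPY_SDR_RX, SOAPY_SDR_CF32

BUF_LEN = 4095                        # samples per buffer, as in the test above
full_buffers = queue.Queue()

def reader(sdr, rx, n_buffers):
    """Fill fixed-size buffers from readStream and notify the main thread."""
    for i in range(n_buffers):
        buf = np.zeros(BUF_LEN, np.complex64)
        filled = 0
        while filled < BUF_LEN:
            sr = sdr.readStream(rx, [buf[filled:]], BUF_LEN - filled)
            if sr.ret > 0:
                filled += sr.ret
        full_buffers.put((i, time.perf_counter()))   # timestamp when the buffer is full

sdr = SoapySDR.Device(dict(driver="sdrplay"))
sdr.setSampleRate(SOAPY_SDR_RX, 0, 192e3)
sdr.setFrequency(SOAPY_SDR_RX, 0, 97e3)
rx = sdr.setupStream(SOAPY_SDR_RX, SOAPY_SDR_CF32)
sdr.activateStream(rx)

n = 100
t_start = time.perf_counter()
threading.Thread(target=reader, args=(sdr, rx, n), daemon=True).start()

last_ts = t_start
for _ in range(n):
    _idx, last_ts = full_buffers.get()   # blocks until the next buffer is full

# long-term average rate: total samples over total elapsed time
print("average rate:", n * BUF_LEN / (last_ts - t_start), "sps")
sdr.deactivateStream(rx)
sdr.closeStream(rx)
```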
Update - by going back to the original data, and by ignoring the nominal start timestamp, I can see that the frequency curve is genuinely a 'feature' of the RSP itself rather than an erroneous start timestamp. So, putting my system designer's hat on, this looks to me like the response of a phase-locked loop (in software) which is trying to respond to a delay in getting the callbacks working but has already started receiving samples from the ADC. It therefore has to run fast for a period until the excess samples have been used up.
To confirm your findings I wrote the attached C++ program based on the SoapySDR C++ example (https://github.com/pothosware/SoapySDR/wiki/Cpp_API_Example). I chose a buffer size of 4096 I/Q samples, which means that at a sample rate of 192 ksps it should take about 4096 / 192k = 21.3 ms for each buffer to be ready. I used
Franco
It works on Linux Fedora 38 with gcc 13.2.1; I think in your case you might need to add -lpthread.
Franco
The -lpthread suggestion unfortunately didn't solve the problem. However, I found this explanation at https://linuxpip.org/how-to-fix-dso-missing-from-command-line/. The workaround is to run:

So, running your program I get:

Found device #0: driver = sdrplay
[INFO] devIdx: 0

HTH
@dickh768 - thanks for the update and your experiments; I too am busy with work and a couple of other parallel SDR projects, so I apologize if I can answer you only every few days.

Your observation regarding the scheduling of processes and threads on the Raspberry Pi reminded me of another comment on the

Also, SoapySDR offers another streaming interface called the 'Direct buffer access API' (https://github.com/pothosware/SoapySDR/wiki/DriverGuide#direct-buffer-access-api), and I think the SoapySDRPlay3 driver implements it too.

Franco
I find a little time away from the bench is often useful to gain new perspectives on a problem. As an example, I went back to a simple Python SoapySDR script on the PC and looked at the phase of my 4kHz test signal over time as seen by the RSP1. I simply recorded around 100 x 4095-sample frames (at 192kHz) to a disk file and then did an fft on successive blocks of 384 I/Q samples. This gives 0.5kHz frequency bins, so bin 8 contained the amplitude and phase of my test signal. Plotting the phase, there is significant chaos for the first 50ms and then the phase signal becomes nice and clean - indicating that no samples are being dropped (over the 2s period at least). The evident regular drift is caused by a frequency error of around 10Hz in setting my test oscillator.

A second unexpected oddity concerns the spectrum obtained using the API via C. On the Raspberry Pi, simply forwarding the I/Q samples to a data file, I can get a very clean spectrum with a sampling rate of 2000kHz and decimation of 8x. This agrees very well with the spectrum seen on the PC with SDRuno. With most other sample rates, significant semi-random spurii of 8 – 15dB appear across the spectrum (e.g. with 16x decimation and 3000 or 3072ksps). As a cross-check I thought I would try with CubicSDR, also on the RPi – and observed the same problem. At 2000ksps/8x the noise floor is clean, but at virtually all other sample rates the noise floor is polluted by unwanted spurii. Simply recording I/Q samples using Python + SoapySDR avoids this problem, so it is not intrinsic to the RPi, but presumably due to some configuration setting.
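A numpy sketch of that analysis (the file name and complex-float format are assumptions; a block of 384 samples at 192 kHz gives 500 Hz bins, so the 4 kHz test tone lands in bin 8):

```python
import numpy as np

fs = 192_000
block = 384                        # 384 samples -> 500 Hz bins
bin_4khz = 8                       # 8 * 500 Hz = 4 kHz

# assumed: complex64 I/Q samples as written by the recording script
data = np.fromfile("capture_192k.cf32", dtype=np.complex64)

nblocks = len(data) // block
spectra = np.fft.fft(data[:nblocks * block].reshape(nblocks, block), axis=1)

amplitude = np.abs(spectra[:, bin_4khz])
phase = np.unwrap(np.angle(spectra[:, bin_4khz]))

# a clean, slowly drifting phase indicates no dropped samples; jumps indicate gaps
drift_hz = np.diff(phase) * fs / (2 * np.pi * block)
print("mean amplitude in bin 8:", amplitude.mean())
print("mean apparent frequency offset:", drift_hz.mean(), "Hz")
```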
I'm trying to use SoapySDR with Python to capture ultrasound signals from a microphone/preamp. I have my RSP1a configured with a sample rate of 250kHz and a centre frequency of 125kHz and am using sdr.readStream(...) to grab repeated chunks of 8064 samples for onward processing. On the face of it, this appeared to be working correctly, however if I feed a test signal at 120kHz into the unit, the 5kHz sine wave beat frequency (observed via the real part of the I/Q) is showing glitches consistent with samples dropped between the blocks of 8064.
Is there a way under SoapySDR to ensure a continuous feed of data without dropping samples?
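For context, a minimal sketch of the configuration described above (the original script was not posted, so the complex-float stream format and device arguments are assumptions):

```python
import numpy as np
import SoapySDR
from SoapySDR import SOAPY_SDR_RX, SOAPY_SDR_CF32

# RSP1a at 250 kHz sample rate, 125 kHz centre frequency, 8064-sample reads
sdr = SoapySDR.Device(dict(driver="sdrplay"))
sdr.setSampleRate(SOAPY_SDR_RX, 0, 250e3)
sdr.setFrequency(SOAPY_SDR_RX, 0, 125e3)

rx = sdr.setupStream(SOAPY_SDR_RX, SOAPY_SDR_CF32)
sdr.activateStream(rx)

buff = np.zeros(8064, np.complex64)
for _ in range(100):
    sr = sdr.readStream(rx, [buff], len(buff))
    # sr.ret < len(buff) means a short read; negative values are error codes
    if sr.ret != len(buff):
        print("short read or error:", sr.ret, "flags:", sr.flags)

sdr.deactivateStream(rx)
sdr.closeStream(rx)
```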