
Featured

HP5430A 18GHz Microwave Counter Repair + Testing

This work was actually completed back in November 2019. I took some photos at the time, but the shooting conditions were poor, so I never posted them. Recently, with my undergraduate graduation approaching and my departure from school on my mind, I tidied up the room I rented off campus to use as a lab (in reality, more of a warehouse). This instrument is my personal favorite among everything I've collected, so as a memento, I'm posting this now.

The HP5430A was the cover product of the fourth 1973 issue of the HP Journal: a sampling microwave counter covering 10Hz~18GHz (extendable to 23GHz with options). Like the sampling heads released around the same time for use with the 180-series oscilloscopes, the heart of this instrument is the 20GHz sampling module HP had just developed.

Measuring 18GHz is not, in itself, difficult: with a few microwave tricks and the RF diodes that were already fast enough at the time, mixing can move a high frequency down to a low one. But the problem with that approach is obvious: one mixer + local oscillator + the associated filters gives only a narrowband measurement, where "narrowband" back then meant on the order of 300MHz. Covering a range as wide as 10Hz~18GHz with mixers would be enormously expensive and hard to manufacture, which is why that approach was reserved for "general-purpose" instruments like spectrum analyzers. For a specialized product like a counter, whose requirements are slightly different, HP chose a sampling-based solution, i.e. convolution in the frequency domain, which greatly reduced the cost. In short: the instrument exploits the aliasing caused by the sampling process to "fold" the microwave signal down to a low frequency, counts it with an ordinary counter, and then back-calculates the original frequency. It uses a clever method to determine the number of folds; for the details, read that issue of the HP Journal, and several YouTube videos explain it as well.

As the undisputed high-end product of HP's microwave line at the time, it sold for $5900 in 1973, about $34428.58 in 2019 dollars, or roughly 240,000 RMB. Because high-frequency ESD protection and high-frequency signal limiting were difficult back then, its input protection is far less complete than that of modern RF instruments: it truly "dies at a touch", in the most literal sense. So much so that a metal nameplate on top of the machine specifically warns: "WARNING: applying more than +30dBm (1W, 7.07Vrms) to the 50Ω input will cause severe and expensive damage."

A word of caution: for all the reasons above, I hesitated a lot before buying it. The unit was known to have quite a few faults, and the condition of the front end, the most valuable part, was unknown. In the end, for the sake of its Nixie tube display, I paid a hefty shipping fee and bought it anyway, figuring that if it truly turned out to be beyond saving, converting it into an extremely heavy Nixie clock wouldn't be the worst outcome.

First power-up

After receiving the machine, I checked that the transformer settings were correct and powered it on. Its condition was just as the seller...
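The folding arithmetic described above can be sketched with a few lines of Python. The numbers below are purely illustrative (this is not the HP5430A's actual sampling rate): sampling folds any input frequency into the 0..fs/2 band, and if the fold count is known, the original frequency can be recovered.

```python
def folded_frequency(f, fs):
    """Alias of input frequency f after sampling at rate fs.

    Sampling folds every frequency into the 0..fs/2 band: the alias
    is the distance from f to the nearest integer multiple of fs.
    Returns (alias, fold count).
    """
    n = round(f / fs)            # number of whole "folds"
    return abs(f - n * fs), n

# Illustrative numbers only (not the instrument's real sampling rate):
# a 17.34 GHz input sampled at 200 MHz folds down to 60 MHz,
# which an ordinary counter can measure directly.
f_if, n = folded_frequency(17.34e9, 200e6)

# Knowing the fold count n, the original frequency is recovered.
# (The sign depends on which side of n*fs the input lies; here it is below.)
f_original = n * 200e6 - f_if
```

This is the easy half of the problem; the clever part of the HP5430A, as the HP Journal article explains, is determining the fold count n from the signal itself.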

Python Multi-core / GPU Digital Phosphor Rendering of Huge Waveform Data

Most modern oscilloscopes are marketed as Digital Phosphor Oscilloscopes (DPOs), because the waveforms shown on these scopes look night-and-day different from those on their older counterparts.


DPO vs DSO, from Tektronix TDS784D marketing materials

This is because although a traditional DSO can capture data at a blazingly fast speed, it lacks the processing bandwidth to show that data on the display: it may be able to capture 100 million waveform points for one trigger event and store them in sample memory, but the monitor is only, say, 1024 pixels wide. A DSO simply throws away most of the points, resulting in an ugly, aliased appearance with 1 bit per pixel.

To achieve the nice, smooth look of a DPO, what we want is to downsample the 100 million points to a 1024-pixel-wide image with a correct downsampling algorithm.
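One classic notion of "correct" downsampling is the per-column min/max envelope: split the record into one chunk per pixel column and keep each chunk's extremes, so narrow glitches survive that plain decimation would drop. A minimal sketch (my illustration, not the method developed later in this post):

```python
import numpy as np

def minmax_downsample(y, width):
    """Reduce a long 1-D waveform to per-pixel-column (min, max) envelopes.

    Each of the `width` columns keeps only its minimum and maximum sample,
    so short spikes inside a column are never lost.
    """
    cols = np.array_split(np.asarray(y), width)
    lo = np.array([c.min() for c in cols])
    hi = np.array([c.max() for c in cols])
    return lo, hi

# Example: squeeze a 1M-point sine wave into a 1024-column envelope.
y = np.sin(np.linspace(0, 20 * np.pi, 1_000_000))
lo, hi = minmax_downsample(y, 1024)
```

An envelope like this preserves amplitude extremes, but it still renders as a filled band: it carries no hit-density information, which is exactly what the digital-phosphor look adds.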

Recently I've been working with some huge waveform captures of more than 1G points. Plotting such data with the beloved matplotlib results in an ugly blob with all the nuance lost. Even worse, matplotlib uses a vector drawing method, so it is extremely slow: drawing anything beyond 100M points becomes intolerable.

One way to speed things up and get a DPO-like look is plt.hist2d. There seems to be some automatic vectorization happening, but it's still ugly and slow. Another issue with this approach is that hist2d draws only the points, not the lines connecting them. In some applications (like the NTSC video signal example shown in the TDS784D comparison), thin features such as the rising and falling edges of a square wave will disappear. And pyplot's hist2d implementation also seems to have some aliasing on the x axis.
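For reference, the histogram approach amounts to the following NumPy sketch (np.histogram2d is what plt.hist2d wraps): each sample deposits one "hit" in its (time-column, amplitude-row) bin, and the hit counts become pixel intensity. Points only; connecting lines between samples are still missing.

```python
import numpy as np

# Build a synthetic 1M-point sine capture.
n = 1_000_000
t = np.arange(n)
y = np.sin(2 * np.pi * t / 997.0)

# Accumulate hit counts into a 1024 x 256 intensity buffer.
# Every sample lands in exactly one bin, so img.sum() == n.
img, _, _ = np.histogram2d(t, y, bins=(1024, 256))

# To display: plt.imshow(np.log1p(img.T), origin="lower", aspect="auto")
```

Note how a sample sitting alone between two bins of a fast edge simply vanishes: nothing connects it to its neighbors, which is the line-drawing gap the rasterizer below fixes.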

I couldn't find any useful library for this particular requirement, so I wrote my own multi-threaded rasterizer that can run on the CPU or GPU. The speedup is significant, and the result is a nice-looking DPO plot with vector lines connecting all the dots.

A Random Walk Sequence, 100M Points

An AM Signal with fc=1MHz, fmod=10Hz, fs=10MHz, 100M Points

An AM Signal with fc=100kHz, fmod=0.1Hz, fs=10MHz, 100M Points

The implementation is straightforward. Think about a single-threaded version first: all we want to do is use the Bresenham line-drawing algorithm to draw every line segment connecting consecutive points of the dataset into a pixel buffer. With Numba, this can be turned into parallel code in which each worker is responsible for one Bresenham line. The only modification needed is to change the add instruction into an atomic operation, which on the GPU can be done with numba.cuda.atomic.add.
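As a serial reference, the idea can be sketched as follows (a simplified illustration, not the exact production kernel). The accumulation line marked in the comment is the one that must become an atomic add once segments are drawn in parallel, since two segments may touch the same pixel.

```python
import numpy as np

def draw_segment(img, x0, y0, x1, y1):
    """Bresenham line: add 1 to every pixel on the segment (x0,y0)-(x1,y1)."""
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        img[y0, x0] += 1   # in the parallel version: numba.cuda.atomic.add
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy

def rasterize(xs, ys, width, height):
    """Serial reference: one segment per iteration.

    This is the loop that Numba parallelizes, one worker per segment,
    with each segment already mapped to integer pixel coordinates.
    """
    img = np.zeros((height, width), dtype=np.uint32)
    for i in range(len(xs) - 1):
        draw_segment(img, xs[i], ys[i], xs[i + 1], ys[i + 1])
    return img

# A single diagonal across a 10x10 buffer touches exactly 10 pixels.
img = rasterize([0, 9], [0, 9], 10, 10)
```

Because each worker only ever increments pixel counters, atomic add is the sole synchronization needed; there are no ordering constraints between segments.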

The summation operation can also be parallelized for an extra speedup. Based on my experiments with 6GB of VRAM, 64 workers already saturate my graphics card.
