Posted by Daniel Kreindler, Samplify

Signal Processing Techniques

Article

Daniel explains why wireless and medical imaging systems cannot simply appropriate data compression technologies, such as MP3, used in other fields, and why those technologies fall short for higher-performance applications. He then describes a new compression algorithm that can keep pace with sample rates of up to 40 gigasamples per second.

Today's Data Acquisition Systems (DAS) using Analog-to-Digital Converters (ADCs) in wireless infrastructure and medical imaging applications are adding more channels while increasing both bit resolution and sampling rate, causing a data explosion at the system level. All of this data must be transported from the DAS, where it is acquired, to processing devices such as FPGAs or CPUs, where the raw data is processed. This "data pipe" consumes many device I/Os, crosses various interfaces and buses, and often includes different forms of storage along the way, such as DRAM, disk drives, and RAID. For years, the brute-force approach to enlarging this pipe was to throw more hardware at the problem: ADCs grew wider in bits, FPGAs grew larger in density, memory requirements increased accordingly, and I/O followed to a lesser extent. Systems grew more complex, consumed more power, and carried higher overall bill-of-materials costs.

Dealing with this data explosion without increasing the complexity and cost of FPGA-based designs means adopting some form of data compression in the FPGA, usually implemented as compression IP. Real-time compression of the sampled signal in systems such as wireless and medical imaging can tame the data geyser while also reducing system complexity, power, and cost.
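The article covers the algorithm itself; as a rough, hypothetical illustration of what real-time sample compression in the data path can look like, the sketch below delta-encodes and bit-packs a block of 16-bit ADC samples in plain C. It is not Samplify's algorithm, and the function name compress_block and the block format are invented for this example only; in an FPGA the equivalent logic would be pipelined hardware rather than software loops.

/* Illustrative sketch only -- NOT Samplify's algorithm. A generic lossless
 * "delta + bit-pack" pass over a block of 16-bit ADC samples, the kind of
 * real-time compression a compression IP core or CPU library might apply
 * to correlated sample streams. Names and format here are hypothetical. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Block format: [first sample: 2 bytes][delta bit width: 1 byte][packed deltas].
 * 'out' must hold at least 3 + 3*n bytes (a generous worst case for 16-bit input).
 * Returns the number of bytes written. */
static size_t compress_block(const int16_t *in, size_t n, uint8_t *out)
{
    if (n == 0) return 0;

    /* Pass 1: find the smallest two's-complement width that holds every delta. */
    uint32_t max_abs = 0;
    for (size_t i = 1; i < n; i++) {
        int32_t d = (int32_t)in[i] - (int32_t)in[i - 1];
        uint32_t m = (uint32_t)(d < 0 ? -d : d);
        if (m > max_abs) max_abs = m;
    }
    unsigned width = 1;
    while (((1u << (width - 1)) - 1u) < max_abs) width++;

    /* Header: first sample verbatim plus the chosen delta bit width. */
    memcpy(out, &in[0], sizeof(int16_t));
    out[2] = (uint8_t)width;

    /* Pass 2: pack each delta as a width-bit two's-complement field, LSB first. */
    size_t nbits = (n - 1) * width;
    memset(out + 3, 0, (nbits + 7) / 8);
    size_t bitpos = 3 * 8;
    for (size_t i = 1; i < n; i++) {
        uint32_t d = (uint32_t)((int32_t)in[i] - (int32_t)in[i - 1])
                     & ((1u << width) - 1u);
        for (unsigned b = 0; b < width; b++, bitpos++)
            if (d & (1u << b))
                out[bitpos / 8] |= (uint8_t)(1u << (bitpos % 8));
    }
    return 3 + (nbits + 7) / 8;
}

int main(void)
{
    /* A slowly varying test "signal": small sample-to-sample deltas compress well. */
    int16_t samples[256];
    for (int i = 0; i < 256; i++)
        samples[i] = (int16_t)(1000 + (i % 7) - 3);

    uint8_t out[3 + 3 * 256];
    size_t bytes = compress_block(samples, 256, out);
    printf("%zu raw bytes -> %zu compressed bytes\n", sizeof(samples), bytes);
    return 0;
}

For this smooth test signal the 512 raw bytes pack into roughly 130 bytes, since each delta fits in a few bits; production compression IP uses far more sophisticated adaptive schemes, but the data-path placement (compress right after acquisition, before the I/O and storage bottlenecks) is the point being illustrated.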
