Posted by Joe Pavlat, Editorial Director

Signal Processing COTS Standards

Article

Open standards for embedded computing offer the customer a wide range of products, vendor independence, and a fairly predictable upgrade path to incorporate new semiconductor, storage, and software technologies. The standards themselves are generally developed and maintained by open organizations or consortia that work to be inclusive and not beholden to one or a few companies. There are many successful open standards organizations in operation, and the IEEE, PICMG, and VITA are among the best known.

One of PICMG’s early successful standards is CompactPCI, which brings the wealth of PCI silicon developed for the desktop PC world into a modular and rugged standard for embedded computing. First released in 1995, it continues to be the solution of choice for a very wide range of applications, including the Mars rover, Curiosity (I just never get tired of saying that).

Article

For decades, modular embedded computer systems have been built using card cages, backplanes, and plug-in boards that perform a wide variety of functions. While it is less expensive to do something on a single board, if volumes are high and there is not...

Article

While we have tended to focus on high-end platforms like AdvancedTCA (ATCA) over the last couple of years, the embedded computer world is much broader than that and a recent trip to a very large, non-telecom-oriented tradeshow made that very clear.

The Embedded World tradeshow and conference was held February 25-27 in Nuremberg, Germany. More than 850 exhibitors from 35 countries showed their products and technologies, and more than 26,000 people attended. The show occupied six exhibit halls at NürnbergMesse and is now the world’s largest event devoted strictly to embedded technologies. There was very little telecom equipment, as most of those suppliers were at Mobile World Congress in Barcelona, which was held the same week.

Article

When Bob Dylan wrote that song 40 years ago he was likely thinking about the dramatic social and political changes that were happening in the 1960s, but a similar change in the technology of big networks is happening now – the way they are built, used, and the business models that support them are all being transformed. Terms like Network Functions Virtualization (NFV) and Software-Defined Networking (SDN) are the latest additions to the alphabet soup used by traditional telecom service providers and in the datacenter. The architectures of these two industries are beginning to merge into one in which services, storage, and computing are provided to customers who want to sell something via e-commerce or share cat videos or corporate spreadsheets. With the increasing use of mobile devices, programs and data will not be on your desktop, but “in the cloud,” accessible from anywhere.

This column was going to focus on introducing some of the concepts involved with SDN and NFV, but my good colleague Curt Schwaderer did a far better job than I ever could have in his excellent article on page 12, “Infrastructure as a Service (IaaS) explained.” I highly recommend you read it, and also the European Telecommunications Standards Institute (ETSI) white paper he references.

Article

Open systems have the advantage of being the creation of many minds with many perspectives on what should be done. As a result, many of these systems were originally developed to be general in their applicability, and found deployment in applications like machine control and instrumentation. However, many have also migrated to industries that traditionally embraced only purpose-built and proprietary architectures, based on the belief that no open standard adequately met their needs. PICMG has been very busy of late trying to bridge this gap, and has released, or is about to release, a number of significant specifications to both improve existing platforms and adapt them to new market spaces:

CompactPCI Express

Article

Dog days? More like dogged pursuit of a number of markets for CompactPCI, and success as CompactPCI demonstrates its ability to adapt – check out the mil-aero app described in the Global Technology column this month.

Most successful computer open standards, if they are to remain viable, adapt to changes in technology. The best expand their capabilities while maintaining a high degree of backward compatibility, which leverages existing product offerings and reduces the cost of upgrades. CompactPCI, which has been around for 15 years now, has continued to evolve to improve performance and add new features. The original spec supported the 33 MHz, 32-bit parallel PCI bus. This was then expanded to 66 MHz, 64-bit PCI.

By the year 2000, parallel buses like PCI were being supplanted by higher-speed switched serial buses. They were popping up like weeds and were simply too numerous for all to survive. Some were chip-to-chip interconnects, while others were intended to interconnect boards at the system level. The most popular system-level interconnect, Ethernet, just kept getting faster, cheaper, and better. In 2001 a PICMG technical committee headed up by John Peters from Performance Technologies developed a specification that allowed boards in a CompactPCI system to communicate via Ethernet over the backplane. This was the industry’s first switched serial fabric standard, and it is still widely used. In 2005 PCI Express was added as a backplane interconnect in the CompactPCI Express specification.
