FPGA Blog Index
- What is an FPGA?
- A Brief History of FPGAs
- Advantages and Disadvantages of FPGAs
- FPGA and GPU differences
- The main challenges of running an FPGA
- Pipelining and chip speed
- FPGA technology applications
- FPGAs and hobbyists
- CPLDs vs FPGAs
- FPGA compared with regular software
- FPGA programming and design experts
The Ultimate Guide to FPGA Design
FPGAs (Field Programmable Gate Arrays) can be a great option for products, depending on the use case and product requirements.
Considering FPGA programming for your project? This article answers the burning questions and issues product developers ask when it comes to implementing FPGA design. Plus, we’ll talk you through the benefits and drawbacks, and review some real-life applications, including its use in space technology.
If you have any further questions following this article, get in touch and our engineers will be happy to answer them.
What is an FPGA?
In the world of electronics, we’ve got hardware on the one hand – your PCB (Printed Circuit Board) and components (which may include a processor) – and software on the other, the ones and zeros that run on the processor.
An FPGA is a piece of hardware, but it’s configurable by software that works like a set of build instructions. I’ve always likened it to a box of Lego. If I build something by hand, that’s like making hardware: I build it once and have a fixed product.
However, if I had a programmable robot arm, I could theoretically program it with instructions and have it build the final product for me. From then on, I could repeatably produce the same object in a shareable way, but at the end of the day, you can still tear it all down and go back to your box of Lego.
The Lego in this context is logic gates. Basic logical elements where you can perform AND, NOT, OR type operations – FPGA programming allows you to take millions of little gates and wire them up together to form more complicated logical objects.
Apply this analogy to conventional chips, known as ASICs (Application-Specific Integrated Circuits), and these represent your custom injection-moulded plastic parts. They’ve been built at the factory and shipped to you, and you’ve got that object. It’s good at its job, but it isn’t capable of change.
However, when you get sent an FPGA, it’s essentially a blank, unconfigured bucket of logic gates. If you configure it properly, you can perform the exact same operations as the regular, preconfigured ASIC would, then tear it down and build something else.
A Brief History of FPGAs
FPGAs used to be just big buckets of logic gates: largely homogeneous dies full of ‘slices’, which comprised groups of registers (1-bit memory elements) and the physical gates.
The gates are usually implemented with lookup tables (LUTs): rather than having physical AND and OR gates, you input a binary number made of your input states, e.g. 0001, and get out a single 1 or 0. The binary input represents the row number, and each row contains the output for that code.
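That row-lookup behaviour is easy to mimic in ordinary software. The sketch below is purely illustrative (no vendor tooling involved): a 4-input LUT modelled as a 16-entry truth table, configured to behave like (a AND b) OR (c AND d).

```python
# A 4-input lookup table (LUT) modelled as a 16-entry truth table.
# The four input bits form a row number; the table holds the output bit.

def make_lut(truth_table):
    """truth_table: list of 16 output bits, one per input code 0b0000..0b1111."""
    assert len(truth_table) == 16
    def lut(a, b, c, d):
        row = (a << 3) | (b << 2) | (c << 1) | d  # inputs form the row number
        return truth_table[row]
    return lut

# "Configure" the LUT as (a AND b) OR (c AND d) by filling in the table
# directly from the expression, once, up front.
table = [((r >> 3 & 1) & (r >> 2 & 1)) | ((r >> 1 & 1) & (r & 1))
         for r in range(16)]
and_or = make_lut(table)

print(and_or(1, 1, 0, 0))  # 1: the (a AND b) term is true
print(and_or(0, 0, 0, 1))  # 0: neither product term is true
```

The key point: the "gate" never exists physically – the logic function lives entirely in the table contents, which is exactly what makes it reprogrammable.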
As time went on, designers found themselves implementing the same functions repeatedly. If you’ve got Ethernet, you need an Ethernet controller, and you spend loads of your expensive logic gates doing Ethernet every time. That’s a big waste of time and money, and because the function is quite predictable, a vendor will often put down a bit of ASIC inside their FPGA that does Ethernet, PCI Express (Peripheral Component Interconnect Express), or another common function.
Therefore, over time, FPGAs have tended towards having both a large lump of homogeneous programmable logic and hard IP blocks, which can be called upon for the predictable functions that don’t really need to change.
Disadvantages of Using an FPGA
The obvious downside to an FPGA is the expense. Because the device carries all the extra routing and configuration circuitry needed to stay programmable, it’s a lot bigger and uses a lot more silicon to do the same job. It also uses more power, which adds up to a more expensive solution.
Let’s put this into perspective – the 4G modem chip in your phone costs in the £10s – £100s. Whereas the FPGA required to implement all the logic that goes into that device might be in the £1000s.
The distances between the individual transistors are also bigger, so FPGAs usually run slower. For example, your home PC runs at roughly four Gigahertz, whereas your FPGA logic will run closer to 400 Megahertz, although it varies massively with technology. Some FPGA silicon is the absolute state of the art, down to 7 nm FinFET with tiny little transistors and much shorter routes between gates.
As that technology gets smaller and smaller, the devices use lower power, are faster, and much smaller in physical size with reduced thermal demands. However, because the devices themselves are very expensive this often puts people off. Therefore, they are more prevalent in high-cost sectors such as defence and the space sector.
Advantages of Using an FPGA
One of the big benefits of an FPGA is the ability to deviate from the standard.
Perhaps you’ve got an unconventional use case, an optimisation or a customisation you’d like to make, or there are risks and uncertainties you’d like to hedge against. FPGAs are useful for ‘get out of jail’ situations in hardware design.
For example, if I build my own Ethernet controller and wish to change the way the checksums are calculated, put a different type of header on, or encrypt the traffic, FPGA programming makes it straightforward to add that at both ends by creating your own custom implementation.
There is a trade-off between the flexibility to do what you like, and the cost of implementing things in the flexible logic. And if you’re doing something that is standard and connecting your processor to the network with good old Ethernet, then use the hard IP core.
But if you are trying to do something more customisable then using the configurable logic is the way to go.
FPGAs are also helpful if your link partner doesn’t implement the spec properly: you can work around their inadequacies inside your FPGA device, accounting at the design stage for a specification that isn’t quite followed.
Ultimately, FPGAs cost a lot of money if you’re only trying to replicate an identical product; in that sense, the cost doesn’t justify the end solution. However, re-spinning a regular ASIC chip might cost $1 million just to have a look at the problem, whereas with an FPGA you can simply ask your engineer to rebuild the configuration.
It also means that with an FPGA, you can afford to make mistakes. Ordinarily, if you’ve released a product into the field with a flaw, you’ve got a recall on your hands, whereas if there is an FPGA in there, it can be fixed as it’s a firmware image.
You can send it to all your customers and share the data with them, essentially doing a software update without the need for a costly and disruptive product recall. But hopefully your system is reprogrammable in the field and you’re not risking your hardware with soldering irons.
FPGA and GPU Differences
When we think about a GPU (Graphics Processing Unit), we don’t often think about the hardware aspect. Instead, we think about the software that we run on it. But ultimately, it’s a fixed piece of hardware.
An FPGA is fundamentally a piece of hardware, however, the configuration that you load into it builds another piece of hardware, so it’s possible for you to build a proper processor in an FPGA and then run some software on it. But the FPGA itself, in my mind, is a configurable hardware object.
In the GPU world, you’ve got a very specialised processor and you’re running software on top of it, so you’re constrained by what the processor can do and how it’s been built. In the FPGA world, you’re constrained by how much logic you’ve got and what the software tools you use to write your design can do. But within the actual logic, you are completely free, and the biggest advantage an FPGA has over a conventional processor is parallelism.
In an old-school processor with one core and one thread of execution, you can only do one thing at a time, and if you have to wait for something, everything stops. For example, if you want to blink a light at a consistent rate and then go and read something off the hard disk, the hard disk will stop the light from blinking (and vice versa).
In a modern FPGA these are two completely separate processes. They don’t need to know anything about each other: one can be utterly autonomous and independent of the other.
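Software can only mimic that independence, but a rough sketch helps show the idea. Below, two made-up "hardware" processes – a blinker and a slow disk reader – are each advanced once per clock tick, and neither can ever stall the other (all names and timings here are hypothetical):

```python
# Two independent "hardware" processes, each advanced once per clock tick.
# In an FPGA these would be physically separate logic; neither can stall the other.

class Blinker:
    def __init__(self, period):
        self.period, self.count, self.led = period, 0, 0
    def tick(self):
        self.count += 1
        if self.count == self.period:       # toggle the LED every `period` ticks
            self.count, self.led = 0, self.led ^ 1

class DiskReader:
    def __init__(self, latency):
        self.latency, self.waiting, self.reads = latency, 0, 0
    def tick(self):
        self.waiting += 1
        if self.waiting == self.latency:    # a slow read finally completes
            self.waiting, self.reads = 0, self.reads + 1

blink, disk = Blinker(period=3), DiskReader(latency=7)
for _ in range(21):                         # run 21 clock ticks
    blink.tick()                            # the slow disk never delays the blink
    disk.tick()

print(blink.led, disk.reads)                # prints: 1 3
```

On a single-threaded processor the loop body is sequential; in the FPGA fabric the two blocks genuinely run at the same instant on every clock edge.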
Main challenges of an FPGA
The main challenge with an FPGA is whether you have enough timing resource – and this is timing of a different sort. In a GPU, your timing budget is how many clock cycles, and how much physical time, you have to do the thinking you’re trying to achieve.
Whereas in an FPGA, you have physical clock signals that must electrically propagate across the device. When I push a one or a zero out of a logic gate, it only has a certain amount of time to reach its destination before the next clock edge arrives and that one or zero needs to trigger the next event.
If the signal is still halfway through changing when the clock edge arrives, electrical problems can occur. This is referred to as metastability: the receiving gate is uncertain whether it got a one or a zero, and it may simply hang because it doesn’t know.
And so, in the FPGA world, you are constrained by how far your signals have to travel inside the device. If you’re unable to get a signal from point ‘A’ to point ‘B’ quickly enough, you need to run the whole circuit slower – slow down your clock speed and increase the interval between clock edges.
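The arithmetic behind that constraint is simple enough to sketch. All the delay figures below are made-up illustrative numbers, not real device data; real tools do this analysis across millions of paths:

```python
# Back-of-envelope timing check: the signal must settle before the next clock edge.
# All delay figures below are made-up illustrative numbers, not real device data.

clock_mhz = 400
period_ns = 1000 / clock_mhz          # 2.5 ns between clock edges

gate_delay_ns = 0.3                   # delay through one logic gate
routing_delay_ns = 0.9                # time for the signal to travel the wires
setup_ns = 0.2                        # receiving register needs the value stable early

gates_in_path = 5
path_delay = gates_in_path * gate_delay_ns + routing_delay_ns

slack = period_ns - setup_ns - path_delay
print(f"path delay {path_delay:.1f} ns, slack {slack:.1f} ns")
# Negative slack means the design fails timing: either shorten the path
# (e.g. by pipelining) or slow the clock down.
```

Here the path misses by 0.1 ns, so at these hypothetical numbers you would either pipeline the path or drop the clock below 400 MHz.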
How pipelining can increase chip speed
The FPGA designer working on a project needs to pay a lot of attention to the hardware process – the electrical propagation: how many physical logic gates there are between two timing points, and whether the signal can get there in time.
If you want to run the design faster, you can employ techniques such as pipelining, where you take your logical construction, split it up into little blocks, and do a small sprint between each clock cycle. Pipelining means each individual result technically takes longer to emerge, but you can clock the whole chip at a faster rate, which usually means a faster design overall.
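The trade-off can be put into rough numbers. This sketch uses a made-up 10 ns logic path and a made-up 0.5 ns register overhead per stage; the pattern, not the figures, is the point:

```python
# Pipelining trade-off with made-up numbers: split a 10 ns logic path
# into stages, where each stage adds a small register overhead.

def pipeline(total_logic_ns, stages, register_overhead_ns=0.5):
    stage_ns = total_logic_ns / stages + register_overhead_ns
    clock_mhz = 1000 / stage_ns            # fastest clock the longest stage allows
    latency_ns = stages * stage_ns         # one result takes `stages` cycles
    return clock_mhz, latency_ns

# More stages: a faster clock (higher throughput), but each result
# takes slightly longer end to end.
for stages in (1, 2, 5):
    mhz, lat = pipeline(10.0, stages)
    print(f"{stages} stage(s): clock {mhz:.0f} MHz, latency {lat:.1f} ns")
```

With these assumed numbers, going from one stage to five pushes the clock from roughly 95 MHz to 400 MHz while the end-to-end latency only grows from 10.5 ns to 12.5 ns – which is why pipelining usually wins.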
For me, the significant difference between the two is mindset. With a conventional processor, you’re fixed in what platform you’ve got, and you’re thinking about how to use its resources.
Whereas in the FPGA you’ve got so much freedom that you almost need to put your own rules in place, and make sure you don’t design systems that are infeasible under the laws of physics. Remember that you are bound by the physics of each situation, rather than working in a purely behavioural system like a regular processor.
FPGA Technology Applications
A bunch of the old space probes and rovers will have had quite old-generation FPGA devices in them, such as an original Xilinx Virtex II. The older technologies still get used, particularly for things like image sensors and navigation. Xilinx are very proud of their devices on Mars.
They often use the older chips because they’re built on an older silicon process: the physical logic gates are bigger and the SRAM (static random-access memory) cells store more charge, so they’re less likely to be disrupted by a single-event upset (SEU).
Also, in a space environment where your weight is critical, it might benefit you to have a very custom specific computer. Let’s say you’ve got your flight navigation computer and whilst you’re launching off, you’ve got this big beefy powerful processor. It’s doing all your flight navigation, cruising along and you are going to be passing Jupiter later in the mission.
At this point you’ll want an image processor. With an FPGA-based platform you could tear down your big flight computer, rebuild it as a minimal little thing – which is all you need for cruising around in deep space – and free up a load of resources for image processing, or even the communications you send back to Earth.
These are applications that would really benefit from that configurability. But these devices are also among the most susceptible to radiation upsets, so it’s a design challenge for a difficult environment. Still, it’s an exciting challenge, and it’s fascinating to see how FPGA designers overcome the issues associated with space travel.
It’s worth remembering that, in general, aerospace and defence market projects take an exceptionally long time. The BoM (Bill of Materials) is nailed down very early on. Take the James Webb Telescope that NASA recently launched, that’s been in development since the late 1990s!
In space tech, equipment is well validated and tested so they don’t want to commit to a new material and then be forced to begin testing again. For example, they don’t want to go from an older 2-megapixel sensor that’s exceptionally well understood to a 10-megapixel sensor that will require all the testing again. Space development is expensive, and safety is paramount, so even if there are higher quality parts available, the designers prefer to rely on the known quantities.
FPGA Usage in Radio
For radio and telecommunications, FPGA devices are useful when implementing codecs – coders and decoders. 5G telecoms use something called a Low-Density Parity-Check (LDPC) code, a piece of maths that’s been around for decades but that regular computing power couldn’t make efficient until recently.
Over the last couple of decades people have brought it forward into 5G networks and technologies. Many of the first versions of the LDPC codes you might use in your telephone, if it has 5G, will have been originally validated in FPGA: someone would have built the algorithm, having first run it in pure simulation on a computer somewhere.
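At its heart, a parity-check code boils down to a matrix H: a received word is a valid codeword when H multiplied by it, mod 2, gives an all-zeros syndrome. The tiny H below is purely illustrative – real 5G LDPC matrices are enormously larger and sparse – but it shows the check that gets validated in simulation and then in FPGA:

```python
# Toy parity-check example: a received word is a valid codeword when
# every parity check over it sums to zero mod 2. Real 5G LDPC matrices
# are far larger and sparse; this tiny H is purely illustrative.

H = [  # each row is one parity check over the 6 code bits
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
]

def syndrome(word):
    return [sum(h * c for h, c in zip(row, word)) % 2 for row in H]

codeword = [1, 0, 1, 1, 1, 0]
print(syndrome(codeword))             # [0, 0, 0]: the word passes every check

corrupted = codeword[:]
corrupted[2] ^= 1                     # flip one bit "in transit"
print(syndrome(corrupted))            # non-zero syndrome flags the error
```

A real decoder uses the non-zero syndrome to work out which bits to correct; in an FPGA, all the parity checks can be evaluated in parallel, which is exactly why the technology suits this job.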
It means that when they make their first proper radio system, rather than spending $1,000,000 taping out a processor, they will have spent less than that in lab time writing all the firmware and building everything for the FPGAs – while still being able to implement something in real time.
Essentially, they can see they have a bug and realise that they only need to change a register, and have it spun round in a day, rather than spending another $1,000,000 in tape out costs or wire mods and creating additional issues with the hardware.
FPGA Design for Vision Systems
Whether you’re in Air Traffic Control, creating supersized sports displays, or showing films at the planetarium – you need a lot of pixels.
FPGAs are perfectly suited to the real-time vision mixing of huge displays – a job with the competing demands of processing multiple input streams of data, complex overlaying and distortion, and smooth output with no buffering.
We’ve all seen the NASA control room and thought “I’d like that many monitors on my desk, but I only have two HDMI ports on my graphics card”, how do huge control rooms manage all their video feeds?
This one is fairly simple: an FPGA-based vision mixer can take any number of HDMI streams in and route them to any number of HDMI outputs with little processing requirement. HDMI uses TMDS (Transition-Minimized Differential Signalling), which is easily supported by parts as old as a Spartan 6.
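Conceptually, that mixer is a crossbar: a routing table maps each output port to one input stream, and reconfiguring the wall is just rewriting the table. The sketch below is a hypothetical software model (the stream names and port counts are made up):

```python
# Conceptual crossbar: route any input stream to any set of outputs.
# Frames here are just strings; in hardware they'd be live pixel streams.

inputs = {0: "camera", 1: "radar", 2: "weather-map"}

# Routing table: output port -> input port. Reconfiguring the video wall
# is just rewriting this table; one input may feed many outputs.
routes = {0: 2, 1: 2, 2: 0, 3: 1}

def drive_outputs():
    return {out: inputs[src] for out, src in routes.items()}

print(drive_outputs())
# {0: 'weather-map', 1: 'weather-map', 2: 'camera', 3: 'radar'}
```

In the FPGA, every route runs concurrently at pixel rate; no frame ever waits behind another, which is what "little processing requirement" buys you.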
The real magic comes with non-planar projection, as used in a planetarium or flight simulator. Producing a huge spherical display, with billions of projected pixels, requires multiple high-resolution projectors. Each projector’s image needs to be distorted to fit the curve, and carefully trimmed & faded to create a seamless single image.
These complex real-time transformations are perfect for the FPGA. Frame buffers of data arrive from multiple sources, are warped, stitched or split into the required output streams, and then passed on to their respective display device. Creating such an application using conventional hardware would be fraught with issues of timing, interrupts and flicker, which demonstrates the brilliance of FPGA Design.
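The distortion step is essentially an inverse mapping: for every output pixel, work out which source pixel to sample. The sketch below uses a made-up "squeeze" mapping as a stand-in for real dome or lens-correction geometry, and a tiny fake frame instead of a video stream:

```python
# Nearest-neighbour inverse warp: for every output pixel, work out which
# source pixel to sample. The mapping function here is a made-up squeeze,
# standing in for real dome/lens-correction geometry.

W, H = 8, 4
source = [[10 * y + x for x in range(W)] for y in range(H)]  # fake frame

def mapping(ox, oy):
    # Hypothetical distortion: squeeze x towards the centre line.
    sx = int((ox - W / 2) * 0.75 + W / 2)
    return sx, oy

warped = [[source[oy][mapping(ox, oy)[0]] for ox in range(W)] for oy in range(H)]
print(warped[0])
```

An FPGA evaluates this per-pixel mapping in hardware as the pixels stream past, so the warp costs no extra frames of latency – precisely the property a flight simulator or planetarium needs.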
More common FPGA applications
Ultimately, as discussed, the appeal of FPGAs is in their flexibility.
FPGAs can also be used for satellites that act as radio repeaters. When a bulk radio signal is transmitted to them from a ground station, it can be sliced up and sent to lots of different endpoints on the Earth – and that slicing operation is done with a big FPGA.
They also get used a lot in defence radar. Where you want to implement a very specific algorithm, perhaps using a custom set of parameters and with a requirement to keep your implementation secret. FPGAs provide that flexibility to create custom algorithms and control your IP.
Similarly, in cryptography, you might want the ability to modify your encryption if it gets compromised. Compare this to standard hard-coded silicon where, if someone manages to break it, you’re in trouble – so there’s a decent market for FPGA-based encryption devices.
How Cheaper FPGAs are introducing Hobbyists
FPGAs are becoming cheaper and more accessible as time goes on, and vendors now offer smaller devices made up of fewer logic gates at much more reasonable prices.
The vendors’ tools have got a lot friendlier for the hobbyist market too. I’m currently building a small helicopter with a £100 Arty board built by Digilent, with a little Xilinx Artix-7 device on it. The FPGA is big enough that you can build a small processor in it that runs C, as well as all the peripherals and IO required.
It’s not going to space, and it obviously won’t be able to synthesize its own radio, but it’s still a pretty good device and it’s powerful for what it is.
The helicopter task is an interesting one because it’s not very computationally intensive – it’s entirely possible to build a quadcopter with a little Arduino. A lot of the things you buy now will have quite a powerful conventional processor and some custom peripherals, but you don’t need that: the job of flying a little helicopter is simple enough that a decent processor can just sit there and do it in a single thread.
However, I always wanted to use an FPGA for that application, just for my own interest. Long ago, my final year project at University was a quadcopter built around a Linux single board computer and I had all sorts of trouble with the fact you’re running a whole operating system trying to talk to all these sensors and servos. Another issue I had was timing and time consistency.
Then, when I started working with FPGAs in the field, they just seemed a better way of doing things, because they are so inherently parallel and so capable of being independent. So, whilst the FPGA is arguably the ideal tool, it is overkill for this project. Still, it’s nonetheless an interesting and fun project!
What About CPLDs? Are they a Viable Alternative?
Beyond FPGAs, there is a smaller class of device called the CPLD (Complex Programmable Logic Device).
They’re like the old-school FPGAs with homogeneous logic gates, but on the scale of a 64-logic-cell device. It’s a tiny thing and really useful for wiring up the IO on your piece of hardware.
If you are doing experimental things – putting down someone else’s processor for the first time, for example – you won’t necessarily trust whether the reset line is going to be Active High or Active Low.
In this case you can run it through a little CPLD device, and if it turns out it needs to be Active Low instead of Active High, you can put an inverter in purely through programming configuration. These devices are relatively cheap.
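That configurable-inverter trick amounts to XOR-ing the pin with a configuration bit. A hypothetical sketch (the function names and polarity choices are illustrative, not any vendor's API):

```python
# A CPLD-style configurable inverter: one configuration bit decides whether
# the reset line is passed through or inverted (active-high vs active-low).

def make_reset_path(invert: bool):
    def path(pin_level: int) -> int:
        return pin_level ^ int(invert)   # XOR with the config bit
    return path

# First board spin assumed an active-high reset...
path = make_reset_path(invert=False)
print(path(1))   # 1: the board-level signal passes straight through

# ...the chip turned out to want active-low, so reprogram the CPLD:
# the same board-level signal is now inverted before reaching the chip.
path = make_reset_path(invert=True)
print(path(1))   # 0
```

No track cuts, no soldering iron – just a new configuration image, which is the whole appeal of the ‘get out of jail’ device.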
They are great for those ‘get out of jail’ scenarios for hardware designs and they are useful for scenarios where you want configurability but you don’t need the whole flexibility of a giant FPGA and all the costs associated with it.
CPLDs usually come with their own internal storage as well, which means that when you program them, they are non-volatile: the configuration is retained and survives a power cycle.
FPGA Compared with Regular Software
One of the difficulties with an FPGA compared to regular software is that the compilation step takes a long time. If you’re writing software for a small processor and you run make, it might take 10 or 30 seconds to build even a big project.
Of course, building the Linux kernel might take a couple of hours but even a medium-sized FPGA project can take an hour or two to build.
Larger, more extensive FPGA projects can take days – you press build, you come back a couple of days later and hope it passed.
There’s a huge amount of money and research going into making that better, but it’s an exceptionally long-winded process. Your hardware spin takes weeks because of postage and PCB (Printed Circuit Board) fabrication; your software spin measures in minutes; and your FPGA spin sits somewhere in the middle. Even my little helicopter takes 20 minutes to build, my computer vision board takes an hour, and some of the satellite work I’ve done takes 9 hours.
In the past I’ve worked on things I’ve had to leave building overnight, which is a downside of sorts and means that, to ensure the build succeeds, you need to spend more time on the diligence of your code up front.
Related Article: Improving the embedded software testing process
Sometimes with regular software you can just make an alteration to the code and hit build again, whereas with an FPGA you inherently have to stand back and be more careful before you run the build process, because if you don’t, it’s going to waste a lot of time.
This is why I place FPGA design more on the hardware side of things than the software side, simply because it requires that level of completeness before you press go.
Why FPGA Programming is Best Done by an Expert
Conventional processors have their own tools and might be programmed in a slightly different language, but most languages have a similar look and feel.
Many embedded applications look similar, so there is a lot of knowledge out there, and plenty of engineers can do embedded or Python programming. However, the toolset and knowledge required for FPGA programming and development are very different.
Some of the knowledge is transferrable, but most prior knowledge needs to be left at the door: there are ways of thinking for conventional processors that will hang you out to dry when you try to build configurable hardware, and vice versa.
So, you want a good FPGA engineer to do your FPGA programming work because there is a huge learning curve. Going from nothing, to a fully working, reliable design, requires a lot of training, and a lot more experience – as well as the tools themselves.
In general, the tools are a lot better now and not as expensive as they used to be. Clearly a lot of vendors have realised they need to offer their tools for free and support some of their smaller, cheaper hobbyist devices to try and get people into the FPGA market.
But if you’re planning to use FPGA programming for an embedded software design, always consult someone seasoned in the field – your final product will be of a much higher quality (and no doubt cheaper) as a result.
Ignys and FPGA Design
At Ignys we have the whole suite of Hardware, Firmware and Software Engineers. We’re also experts in all things FPGA Programming! We can help with everything from specifying the device and laying out the circuit board to porting your old project to the latest technology.
With the current component shortage meaning Xilinx Spartan 6 are even rarer than some Pokémon, we’re here to support you in moving to a newer device or even a different vendor entirely.
Thank you to Jason Jibrail for sharing your valuable FPGA knowledge for this blog.
Recommended for you
Love a technical deep dive? Read about SiC and GaN