Welcome to this blog post of the AMD FPGA video series.
When I was at my parents’ house a couple of months ago, I dug out my very first FPGA board: a Digilent board with a Spartan-3 FPGA. Back in 2004, it was one of the first decent-sized FPGA boards you could get at a price affordable for a student.
The Spartan-3 FPGA is not supported by Vivado, only by ISE. The last time I used ISE was in 2021, to program a client’s board with a Spartan-6. Back then I tried the ISE virtual machine provided by AMD, but I couldn’t program the FPGA with the Platform Cable. In the end I used my old Windows 7 notebook, the same machine that had been used to develop the FPGA design for the project in 2014.
I have decided to go back and test whether ISE can be used to generate the bitstream and download it to the FPGA on current versions of Windows and Linux, without the virtual machine provided by AMD.
I am also going to show you how to install ISE in a Docker container.
Full disclaimer: this video is not sponsored by Digilent or AMD, and the JTAG cable was bought by Starware Design.
Starware Design has experience in edge AI for audio and video applications. Services:
Architecture definition/evaluation
Implementation on FPGA/ASIC
Implementation on microprocessor/microcontroller
Verification of the implementation against the model (e.g. using Cocotb)
Previous projects
Video AI proof-of-concept
Person detection proof-of-concept running on a Zynq UltraScale+ (ZCU104).
Starware Design tasks:
Model preparation for FPGA deployment
Software running on the FPGA with a PyQt GUI
Audio AI ASIC
28nm audio AI ASIC for keyword spotting.
Starware Design tasks:
Benchmark of the existing AI architecture and proposals for the next generation architecture.
AI network bit-accurate modelling
Evaluation board hardware, software and FPGA design (Xilinx Artix-7 plus STMicroelectronics STM32MP1).
Automated lab test setup design and implementation (similar to Amazon Alexa compatible devices testing).
RTL design and validation using Cocotb and AI model in Python
If your project requires high levels of integration and performance, then an FPGA is probably the optimal solution. Starware Design has experience in using toolchains and devices from all the major FPGA providers. Starware Design’s support can range from a bespoke IP block to a turnkey solution. Services:
Architecture design
Hardware/software partitioning
RTL coding (VHDL and Verilog/SystemVerilog)
Verification (UVVM, Cocotb, co-simulation)
System on Chip (Zynq, Zynq MPSoC)
Design for Xilinx, Altera/Intel and Lattice FPGAs
Interfacing with PCIe, DDR memories, high-speed ADCs, Gigabit Ethernet
Previous projects
Audio AI ASIC
28nm audio AI ASIC for keyword spotting.
Starware Design tasks:
Porting ASIC design to FPGA for rapid prototyping
Bit-accurate validation using Cocotb and AI model in Python
Video processing platform
Xilinx Zynq FPGA with multiple video inputs and outputs up to 1080p resolution. A mixture of Xilinx IP cores and custom cores.
Starware Design tasks:
Proof of concept on evaluation board
FPGA design and validation, IP core creation and customisation
Bare-metal and Linux drivers/software
High-performance Software Defined Radio (SDR) platform
Xilinx Kintex-7 with high-speed ADCs and a PCIe interface to an x86 platform.
Starware Design tasks: creation of a co-simulation platform where QEMU, running Linux with the target device driver and applications, interacts with ModelSim running the FPGA simulation plus the embedded microcontroller code.
Industrial ultrasound probe
Xilinx Artix-7 with DDR3 memory, PCI Express and an LVDS ADC interface. A mixture of Xilinx IP cores and custom cores.
Starware Design tasks: FPGA design and validation, IP core creation and customisation.
Other projects include FPGA design due diligence and/or verification (UVVM or SystemVerilog).
During the development and support phases of a product containing an FPGA, bitstreams are released containing new features, bug fixes, etc.
Releases are more frequent during the development phase as new features are added to the design. The support phase can last from a couple of years for a consumer product to five or more years for an industrial product.
In the previous blog post we learned how to integrate Xilinx Vivado with Docker and Jenkins to build automatically (or with a single button) the FPGA bitstream.
During the project life span, the FPGA bitstream is going to be built a large number of times. Wouldn’t it be interesting to collect metrics from each build and track them?
In this blog post of the series “FPGA meets DevOps” I am going to show you how to get metrics from a Xilinx Vivado build and track them in Jenkins using the Plot plugin.
In particular, we are going to track resource usage (LUT, FF, DSP and memory). This gives you insight into how resource usage has evolved over the project life span and whether the FPGA is getting too full.
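As a sketch of one way to do this, the numbers can be pulled out of a `report_utilization` text dump with a small Python script and written as a CSV the Plot plugin can read. The report excerpt and column layout below are assumptions for illustration; the exact table format varies between Vivado versions and devices:

```python
import csv

# Hypothetical excerpt of a Vivado report_utilization text report;
# the exact table layout varies between Vivado versions and devices.
SAMPLE_REPORT = """
| Slice LUTs      | 14213 | 0 | 53200  | 26.72 |
| Slice Registers | 19840 | 0 | 106400 | 18.65 |
| Block RAM Tile  |  42.5 | 0 | 140    | 30.36 |
| DSPs            |    36 | 0 | 220    | 16.36 |
"""

def parse_utilization(report_text,
                      resources=("Slice LUTs", "Slice Registers",
                                 "Block RAM Tile", "DSPs")):
    """Extract the 'Used' column for the given resources from the report."""
    metrics = {}
    for line in report_text.splitlines():
        cells = [c.strip() for c in line.strip().strip("|").split("|")]
        if len(cells) >= 2 and cells[0] in resources:
            metrics[cells[0]] = float(cells[1])
    return metrics

def write_plot_csv(metrics, path):
    """Write the two-line (header row, values row) CSV the Plot plugin reads."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(metrics.keys())
        writer.writerow(metrics.values())
```

In the Jenkins job you would run `report_utilization` after implementation, feed the report file to a script like this, and point the Plot plugin at the generated CSV so each build adds one data point per resource.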
In the previous blog post we created a system that automatically builds the FPGA bitstream and Linux image. Let’s imagine a bug has been found after a bitstream or Linux image has been released. The questions we need to answer to fix the problem are:
What is the version with the bug?
What is the source code that was used to build that particular version?
By the end of this blog post we will be able to answer those questions for the FPGA bitstream and the Linux image, and also to identify a particular board, e.g. for an RMA.
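One common way to make a build identifiable is to pack the release tag into a version register that software can read back from the FPGA. As an illustration only (the field layout below is an arbitrary choice for this sketch, not necessarily the one used in the post), a 32-bit version word could be built from a git tag like this:

```python
import re

def version_word(describe, dirty=False):
    """Pack a 'v<major>.<minor>.<patch>' git tag into a 32-bit word.

    Illustrative layout: [31:24] major, [23:16] minor, [15:8] patch,
    bit 0 set when the working tree had uncommitted changes.
    """
    m = re.match(r"v?(\d+)\.(\d+)\.(\d+)", describe)
    if not m:
        raise ValueError(f"unrecognised version tag: {describe}")
    major, minor, patch = (int(g) for g in m.groups())
    return (major << 24) | (minor << 16) | (patch << 8) | int(dirty)
```

At build time the tag would come from something like `git describe --tags --dirty`, and the resulting word would be passed to the HDL as a generic/parameter so the running bitstream always reports exactly which sources produced it.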
The problem with this approach is that changes made to the project in Vivado (e.g. changing the implementation strategy or place-and-route parameters) have to be manually ported to the TCL file.
My typical Xilinx Vivado FPGA project has a block design as the top level, with an automatically generated and managed wrapper. It has a mix of Xilinx and custom IP cores, and I use the Out-Of-Context (OOC) flow for synthesis, since it reduces build time by caching IP cores that haven’t been modified or updated.
When I started researching how to better integrate Vivado with source version control, I defined the following requirements:
The block design is the primary source to recreate the design (IP cores configuration, wiring, etc)
The top-level wrapper HDL file shouldn’t be under version control, since it can be recreated from the block design
Minimum TCL scripts coding for each project
Easy to save changes made in Vivado GUI (i.e. implementation settings)
Use the project-based out-of-context flow to reduce build time
In this second blog post of the series “FPGA meets DevOps” I am going to show you how to integrate Xilinx Vivado with Docker and Jenkins.
Docker provides lightweight, operating-system-level virtualisation. It allows developers to package up an application with all the parts it needs in a container, and then ship it out as one package. A container image is described by a Dockerfile, which contains the sequence of commands to create the image itself (e.g. packages to install, configuration tasks, etc.) and is all you need to replicate the exact build environment on another machine.
The objective is to create a container that will run Vivado in headless mode (without user interface) to build the FPGA image.
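As a rough sketch of what such a Dockerfile could look like: the Ubuntu version, the installer directory name, the install path and the exact package list below are assumptions that depend on your Vivado release, so treat this as a starting point rather than a working recipe.

```dockerfile
FROM ubuntu:20.04

# Libraries Vivado typically needs to run headless (the list varies per release)
RUN apt-get update && apt-get install -y \
    libtinfo5 libncurses5 locales \
 && rm -rf /var/lib/apt/lists/*

# Copy the unpacked Vivado installer and a pre-recorded batch-mode
# configuration file (generated beforehand with "xsetup -b ConfigGen").
# The paths and file names here are illustrative.
COPY Xilinx_Unified_2022.2 /tmp/installer
COPY install_config.txt /tmp/
RUN /tmp/installer/xsetup --agree XilinxEULA,3rdPartyEULA \
    --batch Install --config /tmp/install_config.txt \
 && rm -rf /tmp/installer

# Run builds as a non-root user
RUN useradd -m builder
USER builder
ENTRYPOINT ["/tools/Xilinx/Vivado/2022.2/bin/vivado", "-mode", "batch"]
```

With an image like this, Jenkins can launch the container and pass a build script via `-source build.tcl`, and every build runs in the same environment regardless of which machine executes it.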
A couple of years ago I wrote a few blog posts about FPGA and DevOps; in particular, on how to use Xilinx/AMD Vivado with git, Jenkins and Docker.
With these new blog posts, I am going to update that content using Vivado 2022.2. I will also replace Jenkins with Gitlab for continuous integration.
I want to show you that getting started with DevOps for FPGA development is neither difficult nor expensive.
In this blog post, I am going to show you how to use version control for Xilinx/AMD Vivado and Petalinux projects. I am going to use git, but you can use SVN or other version control tools.
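As a sketch of what this means in practice, a `.gitignore` for such a project might exclude the generated outputs while keeping the sources under version control. The directory names below are assumptions that depend on how you lay out your project:

```gitignore
# Vivado journal/log files
*.jou
*.log
*.str
# Generated Vivado project and out-of-context IP cache (hypothetical paths)
/vivado/project/
/vivado/ip_cache/
*.runs/
*.cache/
# PetaLinux build output, regenerated from the project configuration
/petalinux/build/
/petalinux/images/
```

The idea is that everything ignored here can be recreated from the checked-in sources, so a fresh clone plus a build script reproduces the whole project.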
Welcome to the first blog post in the Microchip PolarFire SoC series! I am going to show you how to set up the tools, build the reference design, and program the board.
More blog posts are coming, diving deeper into the tools, device features, and much more!
Full disclaimer: this video is not sponsored by Microchip and the video kit has been bought by Starware Design.
Welcome to blog post number 2 in the Microchip PolarFire SoC series! Today, we’re creating a basic design for the PolarFire SoC video kit from scratch. In the previous blog post we saw how to build the reference design, but it is also important to be able to create a design from scratch; until you do, you might miss some important details.
We’re going to create a custom MSS configuration and an FPGA design with two GPIO banks connected to the LEDs and DIP switches on the video kit.
We’re going to add support for the GPIO banks to the Linux kernel, and write some examples in Python for testing.
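As a taste of those examples, here is a minimal sysfs-based sketch. The GPIO base numbers are placeholders, not the video kit’s real values: on the target you would look them up under `/sys/class/gpio/gpiochip*/label` for the two banks.

```python
import os

# Base GPIO numbers are assumptions; check /sys/class/gpio/gpiochip*/label
# on the target to find the bases of the two GPIO banks.
LED_GPIO_BASE = 480    # bank wired to the LEDs (hypothetical)
DIPSW_GPIO_BASE = 496  # bank wired to the DIP switches (hypothetical)

def gpio_path(base, offset):
    """sysfs directory of one GPIO line, e.g. base 480 + offset 2 -> gpio482."""
    return f"/sys/class/gpio/gpio{base + offset}"

def export_gpio(base, offset, direction):
    """Export a GPIO line through sysfs and set its direction ('in' or 'out')."""
    path = gpio_path(base, offset)
    if not os.path.isdir(path):
        with open("/sys/class/gpio/export", "w") as f:
            f.write(str(base + offset))
    with open(os.path.join(path, "direction"), "w") as f:
        f.write(direction)

def set_led(offset, value):
    """Drive one LED line high or low."""
    with open(os.path.join(gpio_path(LED_GPIO_BASE, offset), "value"), "w") as f:
        f.write("1" if value else "0")

def read_switch(offset):
    """Return True when the DIP switch line reads high."""
    with open(os.path.join(gpio_path(DIPSW_GPIO_BASE, offset), "value")) as f:
        return f.read().strip() == "1"
```

On the board you could then mirror a switch onto an LED with `export_gpio(DIPSW_GPIO_BASE, 0, "in")`, `export_gpio(LED_GPIO_BASE, 0, "out")` and `set_led(0, read_switch(0))`.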
I assume you’ve read the first blog post of the Microchip PolarFire SoC series, where I explained how to install the tools and build the HSS firmware, the Yocto image, etc.
Full disclaimer: this video is not sponsored by Microchip and the video kit has been bought by Starware Design.
Welcome to blog post number 3 in the Microchip PolarFire SoC series!
Today, we’re integrating a custom IP core into the PolarFire SoC video kit’s base design, addressing a key aspect of practical FPGA development. We’re going to add the system version IP that I created for the “FPGA meets DevOps” video series, but this time the bus interface is APB instead of AXI. A simple testbench written in Python and cocotb is used to validate the IP. We’re also going to add the system version application to the Linux image with a custom meta layer.
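The cocotb testbench itself needs an HDL simulator to run, so as a flavour of what it drives, here is a plain-Python model of the APB transfer phases. This is only an illustration of the protocol handshake, not cocotb’s API; `wait_states` stands in for a slave that delays PREADY.

```python
def apb_read_phases(wait_states=0):
    """Return the (PSEL, PENABLE) sequence of one APB transfer.

    An APB transfer has one SETUP cycle (PSEL high, PENABLE low) followed by
    ACCESS cycles (both high) that repeat until the slave asserts PREADY;
    wait_states models a slave that inserts extra ACCESS cycles.
    """
    phases = [(1, 0)]                       # SETUP phase
    phases += [(1, 1)] * (1 + wait_states)  # ACCESS phase(s) until PREADY
    return phases
```

In the real testbench, the coroutine would drive these signal values onto the DUT one clock edge at a time and then sample PRDATA, checking that the version register reads back the value baked in at build time.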
In this blog post, I assume you’ve watched the first two videos of the Microchip PolarFire SoC series, where I explained how to install the tools, build the Yocto image, etc.
Full disclaimer: this video is not sponsored by Microchip and the video kit has been bought by Starware Design.
Working as an embedded systems consultant, I have to quickly switch between projects or sometimes between different boards for the same project or client.
When I was looking for inspiration on how to set up my workbench, I found this blog post from Jay Carlson about the project tray system.