PowerPoint Demo

***Note: You will need sudo access to your machine.***

Before you clone a GitHub repo

  1. Create a GitHub account: https://github.com/
  2. On the Linux machine that you will clone the GitHub repository to, generate an SSH key (if not already done): https://help.github.com/articles/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent/
  3. Add the new SSH key to your GitHub account: https://help.github.com/articles/adding-a-new-ssh-key-to-your-github-account/
  4. Set up Git Large File Storage (Git LFS) support.

    git lfs install
  5. Verify that you have git version 2.13.0 (or later) installed.

    git version
  6. Verify that you have git-lfs version 2.1.1 (or later) installed.

    git-lfs version
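The two version checks above can also be scripted. Here is a minimal pure-Python sketch (the helper names are illustrative; the minimum versions are the ones listed in the steps above):

```python
import re

def parse_version(output):
    """Extract a (major, minor, patch) tuple from tool output
    such as 'git version 2.39.1' or 'git-lfs/2.1.1 (...)'."""
    match = re.search(r"(\d+)\.(\d+)\.(\d+)", output)
    if match is None:
        raise ValueError(f"no version found in: {output!r}")
    return tuple(int(part) for part in match.groups())

def meets_minimum(output, minimum):
    """True if the reported version is at least `minimum`."""
    return parse_version(output) >= minimum

# The minimums this demo requires:
assert meets_minimum("git version 2.39.1", (2, 13, 0))
assert meets_minimum("git-lfs/2.1.1 (GitHub; linux amd64; go 1.8.1)", (2, 1, 1))
```

Paste the output of `git version` and `git-lfs version` into these helpers to confirm your tools are new enough.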

Clone the git repository

Clone the example repository into the "SNLDemo" directory.

git clone --recursive https://github.com/slaclab/snl-Examples.git SNLDemo


Training your own model and transferring it to SNL

***Note: This repo assumes that you are using the default MLP model. If you are using a different model, make sure you define its structure in all necessary files and follow the instructions in this section.***

  1. Open the Jupyter notebook "MLP_MNIST_Training.ipynb" from "SNLDemo/firmware/python/snl_MLP".
  2. This Jupyter notebook trains a 3-layer MLP on the MNIST dataset. After you run the notebook, the weights and biases will be saved to .txt files, and the entire model, as well as the weights and biases, will be saved to an .h5 file. Copy all the generated .txt files to "SNLDemo/firmware/shared/SnlMLP/data".
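As a rough illustration of the export step, the weights and biases can be written out with numpy (a numpy-only sketch; the layer shapes and the file-naming scheme here are assumptions, not the notebook's actual code):

```python
import tempfile

import numpy as np

def export_layer_params(layers, out_dir):
    """Save each layer's weight matrix and bias vector to plain-text files.

    `layers` is a list of (weights, biases) numpy-array pairs, one per
    Dense layer. The file names are illustrative; the notebook may use a
    different naming scheme.
    """
    paths = []
    for i, (w, b) in enumerate(layers):
        w_path = f"{out_dir}/layer{i}_weights.txt"
        b_path = f"{out_dir}/layer{i}_biases.txt"
        np.savetxt(w_path, w)  # one matrix row per line, space-separated
        np.savetxt(b_path, b)  # one bias value per line
        paths.extend([w_path, b_path])
    return paths

# Example: a tiny 3-layer MLP with random stand-in parameters (not MNIST-sized)
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(4, 8)), rng.normal(size=8)),
          (rng.normal(size=(8, 8)), rng.normal(size=8)),
          (rng.normal(size=(8, 3)), rng.normal(size=3))]
paths = export_layer_params(layers, tempfile.mkdtemp())
```

With a trained Keras model, the (weights, biases) pairs would come from each layer's `get_weights()` instead of the random generator.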

  3. In "SNLDemo/firmware/shared/SnlMLP/src/Network.cc," ensure that the model is defined in the construct() method of the Wba class. For example, for this MLP model, the construct method looks like this:

    m_layer0.construct (weights.m_layer0); //Dense
    m_layer1.construct (weights.m_layer1); //Dense
    m_layer2.construct (weights.m_layer2); //Dense

    as there are 3 dense layers.

  4. Add or remove "typename Layer::Wba<N>" member declarations in the Wba class as necessary. For this model, it looks like this:

    typename Layer::Wba<0> m_layer0;
    typename Layer::Wba<1> m_layer1;
    typename Layer::Wba<2> m_layer2;
  5. Now, modify the processStream() method to add or remove layer calls so it matches your model.


  6. Next, modify the processNetwork() method to add or remove layer calls so it matches your model.


  7. Now, go to "SNLDemo/firmware/shared/SnlMLP/include/Parameters_Set0.hh" and add or modify the namespaces to reflect the correct layers.


  8. After modifying the namespaces, update the remaining code that references the layers so it reflects your changes.


  9. Next, it is time to test the kernels. Update the kernel test code to reflect the changed layers.

    For each layer in your model, change the name of the .txt file used for testing.


  10. In firmware/shared/SnlMLP/src/Test.cc, at line 297, feel free to modify the test image. Right now, it is an image of all 1's; you can change it to all 0's or any other values. If you do modify the test image, make sure you can recreate it in Keras and pass it through your trained model to get the raw logits output before the last softmax activation layer. This will be used to compare the SNL results against your software results.
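The software reference described above amounts to running the test image through the trained parameters and stopping before the softmax. A numpy sketch (the hidden-layer sizes and ReLU activations are assumptions about the MLP, and the weights below are random stand-ins):

```python
import numpy as np

def raw_logits(image, layers):
    """Forward pass through Dense layers, returning pre-softmax logits.

    `image` is a flat input vector and `layers` a list of (weights, biases)
    pairs. ReLU is assumed after every layer except the last, whose raw
    output is what gets compared against SNL.
    """
    x = np.asarray(image, dtype=float)
    for i, (w, b) in enumerate(layers):
        x = x @ w + b
        if i < len(layers) - 1:
            x = np.maximum(x, 0.0)  # ReLU on hidden layers only
    return x

# A test image of all 1's, as in Test.cc
rng = np.random.default_rng(1)
layers = [(rng.normal(size=(784, 16)), rng.normal(size=16)),
          (rng.normal(size=(16, 16)), rng.normal(size=16)),
          (rng.normal(size=(16, 10)), rng.normal(size=10))]
logits = raw_logits(np.ones(784), layers)  # 10 raw class scores
```

With the real trained weights in place of the random stand-ins, these logits are the values to compare against the SNL output.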

    Now your new model is ready to move on to the next steps.


How to build the Vitis HLS Model

  1. Set up Xilinx licensing. If you are on the SLAC network:

    source SNLDemo/firmware/setup_env_slac.sh

    Otherwise, you will have to install Vivado/Vitis and install the Xilinx licensing.

  2. Go to the HLS target directory and build the HLS.

    cd SNLDemo/firmware/shared/SnlMLP
    make clean
  3. Launch the GUI.

    make gui
  4. On the Flow Navigator, click "Run C Simulation" to verify that your model structure is correct. Select "Clean Build" and click "OK."



  5. You will then see a "processStream_csim.log" file, which will show you the structure of the model. Verify that everything looks correct.



  6. At the end of the csim log, you will see the results of passing your test image through the model (the raw logits before the final softmax activation layer). Check that this matches the software intermediate output of the layer right before the final softmax activation layer in "MLP_MNIST_Training.ipynb." This verifies that you have successfully transferred your software model to SNL.
  7. Create the IP. To do this, click on "Run C Synthesis" in the Flow Navigator. Click "OK."



  8. You will then see the "Synthesis Summary." This shows how many resources (DSPs, LUTs, etc.) your model uses. Each construct represents a single layer.



  9. Close the GUI and run the following commands in the terminal. This will generate an IP zip file.

    make clean
    make
  10. After the process finishes, you will see two lines like this.

    INFO: [HLS 200-111] Finished Command export_design CPU user time: 18 seconds. CPU system time: 1.15 seconds. Elapsed time: 30.02 seconds; current allocated memory: 13.047 MB.
    /u1/jaeylee/Projects/SNLDemo/firmware/shared/SnlMLP/ip/SnlMLP-v1.0-20231030114131-jaeylee-46d334d.zip

    Copy the full path from the second line (the one containing "/ip/SnlMLP...") to your clipboard. Make sure that the filename does not contain "dirty," which would mean the IP was built from uncommitted code.
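The "dirty" check can be expressed as a one-line test (the example path is the one from the output above):

```python
def ip_name_is_clean(zip_path):
    """True if the exported IP archive was built from committed code.

    The build appends "dirty" to the archive name when the working tree
    had uncommitted changes, as noted above.
    """
    return "dirty" not in zip_path.rsplit("/", 1)[-1]

# The path printed at the end of the build above:
assert ip_name_is_clean(
    "/u1/jaeylee/Projects/SNLDemo/firmware/shared/SnlMLP/ip/"
    "SnlMLP-v1.0-20231030114131-jaeylee-46d334d.zip")
```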

  11. Launch vim to edit the "ruckus.tcl" file under the "SnlMLP" directory.

    vim ruckus.tcl

    Keep note of the IP name. For example, in this "ruckus.tcl" file

    if { [get_ips MLP_50] eq ""  } {

    The name is "MLP_50."

    In "ruckus.tcl," replace the path that begins with "/ip/SnlMLP" with the copied path in your clipboard. Save the modified file.
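If you prefer to script this edit rather than use vim, a sketch of the substitution (the loadIpCore line shown is an assumed example of ruckus.tcl contents, not the file's actual text):

```python
import re

def update_ip_path(tcl_text, new_zip):
    """Swap the SnlMLP IP archive name inside ruckus.tcl text.

    Any segment of the form ip/SnlMLP-<...>.zip is replaced; the
    surrounding Tcl is left untouched.
    """
    return re.sub(r"ip/SnlMLP-[^\"\s]*\.zip", new_zip, tcl_text)

# An assumed loadIpCore line; the real file may differ.
line = 'loadIpCore -path "$::DIR_PATH/ip/SnlMLP-v1.0-old-000000.zip"'
updated = update_ip_path(
    line, "ip/SnlMLP-v1.0-20231030114131-jaeylee-46d334d.zip")
```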

  12. Now, go to "SNLDemo/firmware/python/snl_MLP/_Application.py" and create the correct variables that correspond to the weights and biases of your model. To get the correct offset and number fields, go to "SNLDemo/firmware/shared/SnlMLP/ip" and unzip the ip zip file by running

    unzip [IP_ZIP_FILE].zip

    Then, go to "SNLDemo/firmware/shared/SnlMLP/ip/drivers/processStream_v1_0/src" and run

    cat xprocessstream_hw.h
    

    The output will show the offset and number for each variable. Create your variables according to these values. ***Note: "number" corresponds to the "DEPTH" values in the output.***
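As an illustration of pulling the offsets and depths out of the generated header, a small parser sketch (the macro excerpt below is illustrative; the real xprocessstream_hw.h is generated by Vitis HLS and its macro names may differ):

```python
import re

# Illustrative excerpt, not the real generated file:
HEADER = """
#define XPROCESSSTREAM_CONTROL_ADDR_LAYER0_WEIGHTS_BASE 0x0040
#define XPROCESSSTREAM_CONTROL_DEPTH_LAYER0_WEIGHTS     1568
#define XPROCESSSTREAM_CONTROL_ADDR_LAYER0_BIASES_BASE  0x1f00
#define XPROCESSSTREAM_CONTROL_DEPTH_LAYER0_BIASES      16
"""

def parse_driver_header(text):
    """Collect {variable: (offset, depth)} from ADDR_*_BASE / DEPTH_* macros."""
    offsets = dict(re.findall(r"_ADDR_(\w+)_BASE\s+(0x[0-9a-fA-F]+)", text))
    depths = dict(re.findall(r"_DEPTH_(\w+)\s+(\d+)", text))
    return {name: (int(off, 16), int(depths.get(name, "1")))
            for name, off in offsets.items()}

registers = parse_driver_header(HEADER)
```

The resulting dictionary gives, for each variable, the offset and the depth ("number") to use when defining the variables in _Application.py.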


  13. In the getWeights() method, load the correct .h5 file for your model and write its weights and biases to hardware.
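Since the example code is not reproduced here, the general pattern might look like the following sketch (the device interface and variable names are hypothetical stand-ins for the Rogue variables defined in _Application.py):

```python
import numpy as np

def write_model_to_hw(device, layers):
    """Flatten each layer's parameters and write them to device variables.

    `device` maps variable names to a setter callable; `layers` is a list
    of (weights, biases) numpy pairs. Row-major flattening is assumed to
    match the ordering the firmware expects.
    """
    for i, (w, b) in enumerate(layers):
        device[f"layer{i}_weights"](w.flatten().tolist())
        device[f"layer{i}_biases"](b.flatten().tolist())

# A stand-in "device" that just records what was written:
written = {}
device = {name: (lambda vals, n=name: written.__setitem__(n, vals))
          for name in ("layer0_weights", "layer0_biases")}
write_model_to_hw(device, [(np.arange(6.0).reshape(2, 3), np.zeros(3))])
```

In the real getWeights(), the (weights, biases) pairs would come from the .h5 file saved during training, and the setters would be the Rogue variable writes.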

How to build the Vivado firmware

  1. Go to 

    SNLDemo/firmware/targets

    and run "ls."

    This will output a list of available FPGAs. For example,

    shared_version.mk  SnlMLPAlveoU200  SnlMLPKcu105  SnlMLPKcu105HlsBypass  SnlMLPKcu1500

    After you select an FPGA, "cd" into it and run

    make clean
    make

    This will take a long time. After it finishes running, run

    make gui

    This will bring up the Vivado GUI.

  2. Under the "U_App : Application," make sure that the "GEN_HLS.U_HLS" has the correct name. To verify this, check that the name you noted from the "ruckus.tcl" file matches both the name in the GUI and the name in the "Application.vhd" file. For example, in this case, the name from "ruckus.tcl" was "MLP_50," the name on the GUI is "MLP_50," and the name in the "Application.vhd" file is also "MLP_50."


  3. Under the "Project Summary" tab, you can view the resource utilization graph, power consumption, and other information regarding the implementation. In the left panel, expand the "Open Implemented Design" and click on any reports to view the information in more detail. Opening the implementation reports provides more accurate data than the synthesis reports.


How to reprogram PCIe firmware via Rogue

  1. Setup the Rogue environment.

    cd SNLDemo/software
    source setup_env_slac.sh
  2. Run the PCIe firmware update script with the path equal to "SNLDemo/firmware/targets/<Name of FPGA>/images"

    python scripts/updatePcieFpga.py --path .../SNLDemo/firmware/targets/<Name of FPGA>/images
  3. Select the image number (in this case, 0) and make sure that the FwTarget is set to the correct FPGA (the names might be different, but if it is the correct FPGA, enter "y").
  4. Reboot the computer.

    sudo reboot

How to run the software to load weights and images from HDF5 model 

Open "software/NoteBook/DataProcessing.ipynb."

After changing the paths to be correct, you will be able to run the cells and, at the end, get the accuracy of your model compared to the ground-truth labels. You will also see a count of all labels your model generated; compare this count to the one you got in "MLP_MNIST_Training.ipynb" from "SNLDemo/firmware/python/snl_MLP" to see whether your FPGA output matches your software output.

Note that the predicted class counter for the FPGA output (left) matches the predicted class counter for the software output (right), meaning that the model was successfully moved to the FPGA.
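The comparison described above boils down to checking the two predicted-class counters; a minimal sketch:

```python
from collections import Counter

def compare_predictions(fpga_labels, software_labels):
    """Compare per-class prediction counts from the FPGA and software runs.

    Returns (match, fpga_counts, software_counts). A match means the model
    produced the same class distribution after the move to hardware.
    """
    fpga_counts = Counter(fpga_labels)
    sw_counts = Counter(software_labels)
    return fpga_counts == sw_counts, fpga_counts, sw_counts

match, fpga, sw = compare_predictions([7, 2, 1, 7, 7], [7, 2, 1, 7, 7])
```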
