Should I apply model compression during training, after training, or both? I will be using hybrid compression techniques post-training too.
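As a concrete illustration of one post-training technique, here is a minimal sketch of magnitude-based weight pruning in NumPy. The function name and the random example weights are hypothetical; a real pipeline would load trained model weights instead.

```python
import numpy as np

def prune_weights(weights, sparsity=0.5):
    """Zero out the smallest-magnitude weights (post-training pruning sketch).

    weights:  array of trained weights (hypothetical example data below).
    sparsity: fraction of weights to set to zero.
    """
    flat = np.abs(weights).flatten()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold.
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
pruned = prune_weights(w, sparsity=0.5)
print(np.mean(pruned == 0.0))  # fraction of zeroed weights, ~= sparsity
```

Quantization or hybrid schemes would follow the same shape: take the trained weights as input, transform them, and write the compressed model back out.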
In the Dockerfile that you created, a lot of packages are not required, e.g., Anaconda, PyTorch, Django, SQLite3, etc. May I start with my own Dockerfile and add packages as and when required?
For the architecture of the system, I will be creating a Flask application that accepts trained models and returns compressed models. How do I integrate the QEMU emulator with this application? Or should I keep it separate, just for testing the models? For running different models, different scripts need to be written depending on the edge device. So should I keep the Flask application as the end product? I will add features to it to return a comparison between the original and compressed models.
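The service described above could be sketched as a single Flask endpoint. This is only an assumed shape, not the project's actual code: the `/compress` route, the `model` upload field, and the stubbed `compress_model` function are all hypothetical placeholders.

```python
import io

from flask import Flask, jsonify, request

app = Flask(__name__)

def compress_model(model_bytes):
    """Hypothetical stub for the real compression pipeline.

    A real implementation would deserialize the model, apply pruning /
    quantization / hybrid compression, and serialize the result.
    """
    return model_bytes[: max(1, len(model_bytes) // 2)]

@app.route("/compress", methods=["POST"])
def compress():
    # Expect the trained model as a file upload under the "model" field.
    uploaded = request.files.get("model")
    if uploaded is None:
        return jsonify(error="no model file provided"), 400
    original = uploaded.read()
    compressed = compress_model(original)
    # Report original vs. compressed size, mirroring the comparison
    # feature mentioned above.
    return jsonify(
        original_bytes=len(original),
        compressed_bytes=len(compressed),
    )
```

With this shape, the QEMU-based device testing can stay a separate step: the Flask app only handles compression and reporting, and the compressed artifact is copied into the emulated device image afterwards.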
I need to discuss a few project specifications with you, along with the roadmap for the summer. Asynchronous communication will take too long. Could you allot me some time for a video call next week?
You should try both, but I think post-training works best.
I would suggest that you make changes in the Dockerfile and raise a PR. I made a generic one so that the base system remains the same.
I think for now you should keep the QEMU emulator separate. Later on you will need to run on QEMU, like the Raspbian image you emulated, and work on that. Is this clear?
Regarding the call, I am available on Wednesday, as @judywawira suggested.
I wanted to change the base image in the Dockerfile to Ubuntu instead of cuDNN. Will there be separate Dockerfiles for running the Flask application and the QEMU emulator? If so, I can split the packages from your generic Dockerfile according to the required base image.
For the meeting, I will send a fixed time. What timezones do you reside in, @judywawira @pri2si17?
I think you can change the base image. But you don't need a separate Dockerfile; you can write everything in the same Dockerfile, or write instructions for setting things up inside the Docker container.