Efficient Transformer-based Hyper-parameter Optimization for Resource-constrained IoT Environments

Tentative code: This repository provides the implementation of Transformer-based Reinforcement Learning Hyper-parameter Optimization (TRL-HPO), which combines transformers with actor-critic reinforcement learning. All code documentation and variable definitions mirror the content of the manuscript published in IEEE Internet of Things Magazine.
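As a rough illustration of the actor-critic idea behind TRL-HPO, the sketch below runs a policy-gradient search over a toy hyper-parameter grid. The candidate values, the toy objective, and the update rules are illustrative assumptions, not the paper's method or this repository's code.

```python
import numpy as np

# Illustrative sketch only: an actor proposes a hyper-parameter, a scalar
# critic (baseline) estimates expected reward, and the advantage drives a
# policy-gradient update. The objective below is a toy stand-in for
# "train a model and return its validation score".

rng = np.random.default_rng(0)
learning_rates = [1e-3, 1e-2, 1e-1]       # hypothetical candidate values

def evaluate(lr):
    # Toy objective that peaks at lr = 1e-2 (assumption for the demo).
    return 1.0 - abs(np.log10(lr) + 2.0) / 3.0

logits = np.zeros(len(learning_rates))    # actor: preference per candidate
baseline = 0.0                            # critic: running value estimate
alpha, beta = 0.5, 0.1                    # actor / critic step sizes

for step in range(200):
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    a = rng.choice(len(learning_rates), p=probs)   # actor samples an action
    reward = evaluate(learning_rates[a])
    advantage = reward - baseline                  # advantage vs. critic
    grad = -probs
    grad[a] += 1.0                                 # grad of log pi(a)
    logits += alpha * advantage * grad             # policy-gradient step
    baseline += beta * advantage                   # critic update

best = learning_rates[int(np.argmax(logits))]      # most-preferred candidate
```

In the actual system, the actor and critic are transformer-based networks and the reward comes from training the target model; this sketch only shows the control flow of the optimization loop.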

Paper (arXiv): https://arxiv.org/abs/2403.12237

Paper (IEEE Xplore): https://ieeexplore.ieee.org/document/10570354/

The functional scripts are as follows:

  1. Run run.py to train the model.
  2. Run analyze_results.py to evaluate the trained model.
  3. Run explainability_results.py to interpret the model's results.
  4. Run flops_count.py to report the FLOPs of the model.
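For intuition on what a FLOPs count like the one in step 4 involves, the sketch below hand-counts forward-pass FLOPs for a single attention layer, assuming 2 FLOPs per multiply-accumulate. The function names and the counting convention are assumptions for illustration, not the repository's flops_count.py.

```python
# Illustrative sketch only: generic FLOPs estimates for transformer parts.

def linear_flops(n_in, n_out, tokens=1):
    # y = W x + b applied at `tokens` positions:
    # each output needs n_in multiply-accumulates (2 FLOPs each).
    return tokens * 2 * n_in * n_out

def attention_flops(d_model, seq_len):
    # Q, K, V and output projections, plus the two seq_len x seq_len matmuls.
    proj = 4 * linear_flops(d_model, d_model, tokens=seq_len)
    scores = 2 * seq_len * seq_len * d_model      # Q @ K^T
    mix = 2 * seq_len * seq_len * d_model         # softmax(scores) @ V
    return proj + scores + mix
```

Counts like these are useful for comparing candidate models in resource-constrained IoT settings, where the FLOPs budget is as important as accuracy.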

Methodology

(Methodology overview figure; see the image in the repository.)

Requirements

The requirements are listed in the `requirements.txt` file. To install the packages, run: `pip install -r requirements.txt`

Contact Info

Please feel free to contact me for any questions or research opportunities.