This software package implements the thicknessML framework, which rapidly extracts/predicts semiconductor thin-film thickness d from optical spectroscopic reflection (R) and transmission (T). This repository is organized according to the two-stage transfer-learning workflow of the thicknessML framework.
- Pre-training: pre-train models on the generic simulated single Tauc-Lorentz (TL) dataset (in `pre-training.py`)
- Transfer learning (retraining): retrain the pre-trained models on the simulated literature perovskite dataset, and apply the retrained models to experimentally measured R and T spectra of six synthesized methylammonium lead iodide (MAPbI3) perovskite films (in `transfer-learning.py`)
The following paper describes the details of the thicknessML framework: Transfer Learning for Rapid Extraction of Thickness from Optical Spectra of Semiconductor Thin Films (link to be added)
Please cite the following work if you use thicknessML:

To be added
To install, clone the repository, navigate into the folder, and run `pip install -r requirements.txt` in a Python 3.6 environment.
A possible way to create a Python 3.6 environment with conda[^1]:
Conda is the package manager on which the Anaconda distribution is built. It is cross-platform and language-agnostic (it can play a similar role to a pip-plus-virtualenv combination). Miniconda allows you to create a minimal, self-contained Python installation and then use the `conda` command to install additional packages. First, you will need Conda installed; downloading and running the Miniconda installer will do this for you. The installer can be found on the Miniconda download page.
The next step is to create a new conda environment. A conda environment is like a virtualenv that allows you to specify a specific version of Python and a set of libraries. Run the following command from a terminal window:
conda create -n thicknessML python=3.6
This will create a minimal environment with only Python installed. To put yourself inside this environment, run:
source activate thicknessML
or
conda activate thicknessML
Now you're ready to run `pip install -r requirements.txt` once you have navigated into the cloned/downloaded thicknessML folder.
After this one-time installation, simply activate the environment with `source activate thicknessML` or `conda activate thicknessML` each time before running the code as described in Usage.
Stage 1 (pre-training):

- Default (training multi-task learning (MTL) models): `python pre-training.py`
- Training single-task learning (STL) models: `python pre-training.py --STL`

Stage 2 (transfer learning):

- Default (loading MTL pre-trained models with partial-weight retraining): `python transfer-learning.py`
- Full-weight retraining: `python transfer-learning.py --full-weight`
- Loading STL pre-trained models is toggled by adding `--STL`, as in Stage 1.
- Download the compressed data file `data.tar.gz` from https://doi.org/10.6084/m9.figshare.23501715.v1
- Move `data.tar.gz` into the `data` directory.
- Navigate into the `data` directory and run `tar -xvf data.tar.gz`.

The datasets will automatically appear in the `data` folder after uncompressing with `tar`.
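If you prefer to stay in Python, the extraction step can also be done with the standard-library `tarfile` module. This is only a sketch equivalent to the `tar -xvf` command above; the archive path assumes `data.tar.gz` has already been moved into the `data` directory.

```python
import tarfile
from pathlib import Path

def extract_archive(archive_path, dest_dir=None):
    """Extract a .tar.gz archive into dest_dir (default: alongside the archive)."""
    archive = Path(archive_path)
    dest = Path(dest_dir) if dest_dir is not None else archive.parent
    with tarfile.open(archive, "r:gz") as tar:
        # Same effect as running `tar -xvf` inside the destination directory.
        tar.extractall(path=dest)
    return dest

# Usage (after downloading data.tar.gz into the data directory):
# extract_archive("data/data.tar.gz")
```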
Note that `.h5` files are quite reliant on the specific `h5py` version. Please make sure you have `h5py` 2.10.0 for smooth opening of the data files.
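A small guard like the following can catch a mismatched `h5py` version before any data file is opened. This is a sketch, not part of the repository's scripts; the required version (2.10.0) comes from the note above, and the commented usage assumes `h5py` is importable in your environment.

```python
def version_tuple(version):
    """Turn a dotted version string like '2.10.0' into a comparable tuple."""
    return tuple(int(part) for part in version.split(".")[:3])

def h5py_version_ok(installed, required="2.10.0"):
    """True when the installed h5py version matches the required one."""
    return version_tuple(installed) == version_tuple(required)

# Usage (hypothetical check before loading the .h5 datasets):
# import h5py
# if not h5py_version_ok(h5py.__version__):
#     raise RuntimeError("Please install h5py==2.10.0 to open the data files.")
```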
| Scripts | Description |
|---|---|
| `pre-training.py` | Stage 1: pre-train and save models on the TL dataset |
| `transfer-learning.py` | Stage 2: retrain (transfer) pre-trained models on the literature perovskite dataset, and predict experimental perovskite film thicknesses from measured R and T |
| `utils.py` | Auxiliary functions |
| Folders | Description |
|---|---|
| `data` | hosts saved datasets; the `utils` folder within also contains scripts for the Tauc-Lorentz oscillator (`TaucLorentz.py`) and the transfer-matrix method (`ScatteringMatrix.py`) |
| `pre-trained models` | hosts pre-trained models; models output by a run of `pre-training.py` will replace the currently saved pre-trained models |
The code was primarily written by Siyu Isaac Parker Tian and Zhe Liu, under the supervision of Zhe Liu, Tonio Buonassisi and Qianxiao Li.
[^1]: Explanations in this section borrow from https://pandas.pydata.org/docs/getting_started/install.html