PH-SA is a topology-based framework for automatic active-phase construction, enabling thorough configuration sampling and efficient computation.
The code in this repository has been tested with the following software and hardware requirements:
- Operating System: Linux (Rocky Linux 8.8) or Windows 10/11
- Python: 3.8.13
- ASE: 3.22
- Gudhi: 3.8
- Numpy: 1.23
- NetworkX: 2.8.8
- SciPy: 1.10.1
- Processor: Multi-core CPU (e.g., Intel i5 or AMD equivalent)
- Memory: Minimum 8 GB RAM (16 GB recommended for large datasets)
1. Install Python and dependencies:
   - Ensure Python is installed. You can use Anaconda or pyenv for Python version management.
   - Create a virtual environment (recommended):
     ```bash
     python3 -m venv ph-sa-env
     source ph-sa-env/bin/activate    # For Linux/macOS
     ph-sa-env\Scripts\activate       # For Windows
     ```
   - Install dependencies:
     ```bash
     pip install ase==3.22 gudhi==3.8 numpy==1.23 networkx==2.8.8 scipy==1.10.1
     ```
2. Clone the repository:
   ```bash
   git clone https://github.com/JFLigroup/PH-SA.git
   cd PH-SA
   ```
3. Check installation: test the installation by running a provided example (see the "Examples" section below).
On a standard desktop computer with an i5 processor and 16 GB RAM, the installation typically takes about 1-2 minutes.
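As an optional sanity check (a minimal snippet of our own, not part of the repository), you can verify that the required packages import and report the expected versions:

```python
# Quick dependency check: confirm the required packages import and print their versions.
import ase, gudhi, numpy, networkx, scipy

for name, module in [("ASE", ase), ("Gudhi", gudhi), ("NumPy", numpy),
                     ("NetworkX", networkx), ("SciPy", scipy)]:
    print(f"{name}: {module.__version__}")
```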
- `adsorption_sites.py`: Searches for surface and embedding sites in periodic or aperiodic structures.
- `utils.py`: Utilities for site enumeration and configuration generation.
- `structural_optimization.py`: Simple example of structural optimization using a model trained with DPA-1.
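To illustrate the topology-based idea behind the site search, here is a minimal sketch (not the repository's actual API; the input file name `slab.vasp` is an assumption) that builds an alpha complex on the atomic coordinates with Gudhi and computes its persistence diagram, whose higher-dimensional features correspond to the kinds of cavities and hollows a site search is built around:

```python
# Minimal sketch of a topology-based analysis (assumed example, not the PH-SA API):
# build an alpha complex over the atomic positions and compute persistent homology.
from ase.io import read          # ASE structure I/O
import gudhi                     # persistent homology toolkit

atoms = read("slab.vasp")        # hypothetical input structure file
points = atoms.get_positions()   # Cartesian coordinates of all atoms

# Alpha-complex filtration over the atomic point cloud
alpha = gudhi.AlphaComplex(points=points)
simplex_tree = alpha.create_simplex_tree()

# Persistence pairs; H1/H2 features (loops, voids) hint at hollow or cavity regions.
diagram = simplex_tree.persistence()
for dim, (birth, death) in diagram:
    if dim >= 1:
        print(f"H{dim} feature: birth={birth:.3f}, death={death:.3f}")
```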
Two Jupyter notebooks provide a quick start to the workflow for finding unique configurations for both clusters and slabs:
- `cluster_workflow.ipynb`: Demonstrates the workflow for aperiodic structures.
- `slab_workflow.ipynb`: Demonstrates the workflow for periodic structures.
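One common way to keep only unique configurations, shown here as a hedged sketch (the notebooks may use a different criterion, and the `cutoff` value is an assumption), is to compare candidate structures as element-labeled connectivity graphs and discard isomorphic duplicates with NetworkX:

```python
# Minimal sketch (assumed example, not the notebooks' code): deduplicate candidate
# configurations by comparing element-labeled connectivity graphs.
import networkx as nx
from networkx.algorithms.isomorphism import categorical_node_match
from ase.neighborlist import neighbor_list

def to_graph(atoms, cutoff=3.0):
    """Build a graph whose nodes are atoms (labeled by element) and whose
    edges connect atoms closer than `cutoff` (a hypothetical distance, in A)."""
    graph = nx.Graph()
    for index, symbol in enumerate(atoms.get_chemical_symbols()):
        graph.add_node(index, element=symbol)
    i, j = neighbor_list("ij", atoms, cutoff)
    graph.add_edges_from(zip(i, j))
    return graph

def unique_configurations(candidates, cutoff=3.0):
    """Keep one representative per isomorphism class of labeled graphs."""
    kept, kept_graphs = [], []
    match = categorical_node_match("element", default="")
    for atoms in candidates:
        graph = to_graph(atoms, cutoff)
        if not any(nx.is_isomorphic(graph, g, node_match=match) for g in kept_graphs):
            kept.append(atoms)
            kept_graphs.append(graph)
    return kept
```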
A simple structure is provided for running the workflow and verifying installation.
- `input.json`: Simple sample input file for DPA-1 fine-tuning.
- `OC_10M.pb`: Pre-trained weights for DPA-1 fine-tuning.
1. Launch Jupyter Notebook:
   ```bash
   jupyter notebook
   ```
2. Open the `cluster_workflow.ipynb` or `slab_workflow.ipynb` notebook.
3. Follow the step-by-step instructions in the notebook to run the workflow on the example data.
- Output: Unique atomic configurations for clusters or slabs.
- Typical runtime:
  - Small datasets: ~1-2 minutes on a standard desktop computer.
  - Large datasets: Runtime depends on the data size but typically completes within 10-30 minutes.
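To inspect the generated configurations outside the notebook, a small snippet like the following can read them back with ASE (the output path and extended-XYZ format are assumptions; adjust them to what your run actually writes):

```python
# Inspect generated configurations (assumed output path and format; adjust to your run).
from ase.io import read

configurations = read("results/configurations.xyz", index=":")  # read all frames
print(f"{len(configurations)} unique configurations")
for k, atoms in enumerate(configurations[:5]):
    print(k, atoms.get_chemical_formula())
```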
1. Prepare your input data in the appropriate format (e.g., XYZ or POSCAR files for atomic structures).
2. Modify the input paths in the Jupyter notebook to point to your data.
3. Run the workflow.
4. Analyze the output configurations generated in the results directory.
5. Use DFT for structure optimization or for molecular dynamics calculations.
6. Convert the computed structures to the dpdata format and fine-tune the model (see the sketch at the end of this section):
   ```bash
   dp train input.json --finetune OC_10M.pb
   ```
7. Perform structural optimization using the trained weights.
- On a standard desktop computer, running the demonstration typically takes ~2-5 minutes.
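Steps 6-7 can be sketched in Python as follows. This is a hedged example, not the repository's `structural_optimization.py`: the DFT output file (`OUTCAR`), dataset directory, candidate structure file, and frozen model name `model.pb` are assumptions, and the exact DeePMD-kit workflow may differ.

```python
# Hedged sketch of steps 6-7: convert DFT output with dpdata, then relax a structure
# with the fine-tuned model through the DeePMD ASE calculator.
import dpdata
from ase.io import read
from ase.optimize import BFGS
from deepmd.calculator import DP

# Step 6: convert a VASP OUTCAR (assumed DFT output) into DeePMD's npy training format.
system = dpdata.LabeledSystem("OUTCAR", fmt="vasp/outcar")
system.to("deepmd/npy", "training_data", set_size=system.get_nframes())
# Fine-tuning is then launched from the command line, e.g.:
#   dp train input.json --finetune OC_10M.pb

# Step 7: structural optimization with the trained model
# (assumes the fine-tuned model was frozen to model.pb).
atoms = read("candidate.vasp")          # hypothetical candidate configuration
atoms.calc = DP(model="model.pb")
BFGS(atoms, logfile="opt.log").run(fmax=0.05)
```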