Protein Information System (PIS) is an integrated platform focused on extracting, processing, and managing protein-related data. PIS consolidates data from UniProt, PDB, and GOA, enabling the efficient retrieval and organization of protein sequences, structures, and functional annotations.
The primary goal of PIS is to provide a robust framework for large-scale protein data extraction, facilitating downstream functional analysis and annotation transfer. The system is designed for high-performance computing (HPC) environments, ensuring scalability and efficiency.
🔄 FANTASIA has been completely redesigned and is now available at:
FANTASIA Repository
This new version is a pipeline for annotating GO (Gene Ontology) terms in protein sequence files (FASTAs). The redesign focuses on long-term support, updated dependencies, and improved integration with High-Performance Computing (HPC) environments.
🛠️ A stable version of the information system for working with UniProt and annotation transfer is available at:
Zenodo Stable Release
This version serves as a reference implementation and provides a consistent environment for annotation transfer tasks.
- Python 3.11.6
- RabbitMQ
- PostgreSQL with the pgVector extension installed
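Before installing anything else, it can help to confirm that the active interpreter matches the required Python version. A minimal sketch (run it with the interpreter you intend to use for PIS):

```python
# Quick check that the active interpreter is Python 3.11, as required above.
import sys

assert sys.version_info[:2] == (3, 11), (
    f"Python 3.11 required, found {sys.version.split()[0]}"
)
print("Python version OK:", sys.version.split()[0])
```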
Ensure Docker is installed on your system. If it is not, you can download it from the official Docker website.
Ensure PostgreSQL and RabbitMQ services are running.
docker run -d --name pgvectorsql \
--shm-size=64g \
-e POSTGRES_USER=usuario \
-e POSTGRES_PASSWORD=clave \
-e POSTGRES_DB=BioData \
-p 5432:5432 \
pgvector/pgvector:pg16 \
-c shared_buffers=16GB \
-c effective_cache_size=32GB \
-c work_mem=64MB
The configuration parameters provided above have been optimized for a machine with 128GB of RAM and 32 CPU cores, allowing up to 20 concurrent workers. These settings enhance PostgreSQL’s performance when handling large datasets and computationally intensive queries.
- `--shm-size=64g`: Allocates 64 GB of shared memory to the container, preventing PostgreSQL from running out of shared memory in high-performance environments.
- `-c shared_buffers=16GB`: Allocates 16 GB of RAM for PostgreSQL's shared memory buffers; this should typically be 25-40% of total system memory.
- `-c effective_cache_size=32GB`: Sets PostgreSQL's estimate of the memory available for disk caching to 32 GB, which helps the query planner make better decisions.
- `-c work_mem=64MB`: Allocates 64 MB of memory per worker for operations like sorting and hashing, which is crucial when handling parallel query execution.
You can use pgAdmin 4, a graphical interface for managing and interacting with PostgreSQL databases, or any other SQL client.
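Alternatively, a short Python sketch can confirm that the container is reachable and that the pgVector extension loads. This assumes the credentials from the `docker run` command above and that `psycopg2-binary` is installed; it is illustrative, not part of PIS itself:

```python
# Minimal connectivity check for the pgvector container started above.
# Assumes: psycopg2-binary installed, credentials from the docker run command.
import psycopg2

conn = psycopg2.connect(
    host="localhost",
    port=5432,
    user="usuario",
    password="clave",
    dbname="BioData",
)
with conn, conn.cursor() as cur:
    # Enable the pgvector extension (no-op if it is already enabled).
    cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
    cur.execute("SELECT extversion FROM pg_extension WHERE extname = 'vector';")
    print("pgvector version:", cur.fetchone()[0])
conn.close()
```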
Start a RabbitMQ container using the command below:
docker run -d --name rabbitmq \
-p 15672:15672 \
-p 5672:5672 \
rabbitmq:management
Once RabbitMQ is running, you can access its management interface at http://localhost:15672 (default credentials: guest / guest).
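If you prefer to verify the broker from code, here is a minimal sketch using the `pika` client (assumes `pip install pika`; the queue name is arbitrary and used only as a health check):

```python
# Sanity check that RabbitMQ accepts AMQP connections on the mapped port.
import pika

params = pika.ConnectionParameters(host="localhost", port=5672)
connection = pika.BlockingConnection(params)
channel = connection.channel()

# Declare a throwaway, non-durable queue to confirm the broker is responsive.
channel.queue_declare(queue="pis_healthcheck", durable=False)
print("RabbitMQ is up; test queue declared.")
connection.close()
```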
To execute the full extraction process, simply run:
python main.py
This command will trigger the complete workflow, starting from the initial data preprocessing stages and continuing through to the final data organization and storage.
You can customize the sequence of tasks executed by modifying `main.py` or by adjusting the relevant parameters in the `config.yaml` file. This allows you to tailor the extraction process to specific research needs or to experiment with different data processing configurations.
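As a purely illustrative sketch of inspecting that configuration before a run (the structure below is hypothetical; consult the actual `config.yaml` shipped with the repository):

```python
# Illustrative only: load and inspect config.yaml before launching main.py.
# Requires PyYAML (pip install pyyaml); key names in the file are project-specific.
import yaml

with open("config.yaml") as fh:
    config = yaml.safe_load(fh)

# Print the top-level sections to see which parameters are tunable.
print("Configuration sections:", list(config.keys()))
```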